\begin{document} \title[Translation bounded Fourier transform]{Tempered distributions with translation bounded measure as Fourier transform and the generalized Eberlein decomposition} \author{Timo Spindeler} \address{Fakult\"at f\"ur Mathematik, Universit\"at Bielefeld, \newline \hspace*{\parindent} Postfach 100131, 33501 Bielefeld, Germany} \email{[email protected]} \author{Nicolae Strungaru} \address{Department of Mathematical Sciences, MacEwan University \newline \hspace*{\parindent} 10700 -- 104 Avenue, Edmonton, AB, T5J 4S2, Canada\\ and \\ Institute of Mathematics ``Simon Stoilow''\newline \hspace*{\parindent}Bucharest, Romania} \email{[email protected]} \urladdr{http://academic.macewan.ca/strungarun/} \begin{abstract} In this paper, we study the class of tempered distributions whose Fourier transform is a translation bounded measure and show that each such distribution on ${\mathbb R}^d$ has order at most $2d$. We show the existence of the generalized Eberlein decomposition within this class of distributions, as well as its compatibility with all previous Eberlein decompositions. The generalized Eberlein decomposition for Fourier transformable measures and properties of its components are discussed. Lastly, we take a closer look at the absolutely continuous spectrum of measures supported on Meyer sets. \end{abstract} \keywords{Fourier transform of measures, almost periodic measures, Lebesgue decomposition} \subjclass[2010]{43A05, 43A25, 52C23} \maketitle \section{Introduction} Diffraction is one of the main techniques used by physicists to study the atomic structure of materials. It is used in materials science to determine the arrangement of atoms in metals and crystals, and in biology to determine the structure of organic compounds.
Mathematically, the process of diffraction is modeled as follows: starting with a set $\Lambda \subset {\mathbb R}^d$, representing the positions of the atoms in the structure, or more generally a (translation bounded) measure $\mu$, one constructs a measure $\gamma$ called the autocorrelation (or 2-point correlation) of $\Lambda$ or $\mu$ (see \cite{TAO} for the general background). The measure $\gamma$ is positive definite and hence Fourier transformable \cite{ARMA1,BF,MoSt}, and its Fourier transform $\widehat{\gamma}$ is a positive measure \cite{ARMA1,BF,MoSt}. It is this measure $\widehat{\gamma}$ which models the outcome of a physical diffraction experiment, and for this reason it is called the diffraction measure of $\Lambda$ (or $\mu$). It has a decomposition \[ \widehat{\gamma}=\left(\widehat{\gamma}\right)_{\operatorname{pp}}+\left(\widehat{\gamma}\right)_{\operatorname{ac}}+\left(\widehat{\gamma}\right)_{\operatorname{sc}} \,, \] into a pure point, an absolutely continuous and a singular continuous positive measure, respectively. The pure point spectrum $\left(\widehat{\gamma}\right)_{\operatorname{pp}}$ is usually associated with the order present in the structure. Structures with pure point diffraction, meaning $\widehat{\gamma}=\left(\widehat{\gamma}\right)_{\operatorname{pp}}$, have been of special interest to physicists and mathematicians. While for a long time pure point diffraction was associated with periodicity, the discovery of quasicrystals \cite{She} has led to a paradigm shift and showed that there is much we do not understand about physical diffraction. It is now known that pure point diffraction is equivalent to the mean almost periodicity of the structure \cite{LSS,LSS2}. Over the years, it has become clear that structures with non-trivial continuous spectrum can also have interesting properties, and hence are of interest.
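A classical example illustrating this decomposition (standard material, stated here only for orientation, see for instance \cite{TAO}): for the Dirac comb $\delta_{{\mathbb Z}^d}:=\sum_{k\in{\mathbb Z}^d} \delta_k$, the autocorrelation is $\gamma=\delta_{{\mathbb Z}^d}$ and, by the Poisson summation formula, \[ \widehat{\gamma}=\delta_{{\mathbb Z}^d}=\left(\widehat{\gamma}\right)_{\operatorname{pp}} \,, \] so the diffraction is pure point, in accordance with the periodicity of the underlying structure.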
While the case of structures with pure point diffraction is very well understood by now \cite{LSS,LSS2}, the study of systems with continuous or mixed spectrum is still in its infancy. The singular continuous diffraction spectrum $\left(\widehat{\gamma}\right)_{\operatorname{sc}}$ in particular is very mysterious, and very little is known about it in general. One of the most useful tools in the study of the components of the diffraction is the Eberlein decomposition (see below for its definition and properties). Given a measure $\mu$ with autocorrelation $\gamma$, there exists a unique decomposition (see Proposition~\ref{prop ebe} and Theorem~\ref{t1} below) \[ \gamma=\gamma_{\operatorname{s}}+\gamma_0 \,, \] called the Eberlein decomposition, with the property that both $\gamma_{\operatorname{s}}$ and $\gamma_0$ are positive definite measures and \[ \widehat{\,\gamma_{\operatorname{s}}\,}=\left(\widehat{\gamma}\right)_{\operatorname{pp}} \qquad \text{ and } \qquad \widehat{\,\gamma_{0}\,}=\left(\widehat{\gamma}\right)_{\operatorname{c}} \,. \] This allows us to study the pure point diffraction spectrum $(\widehat{\gamma})_{\operatorname{pp}}$ and the continuous diffraction spectrum $(\widehat{\gamma})_{\operatorname{c}}$ of $\mu$ in the Fourier dual space by studying instead the components $\gamma_{\operatorname{s}}$ and $\gamma_0$, respectively, of the autocorrelation measure $\gamma$ in the real space. This approach has proven effective in the study of the diffraction of measures with Meyer set support \cite{NS1,NS5,NS11,NS20a,NS21} and in the study of compatible random substitutions \cite{BSS}.
Because of this, one would like to further decompose \[ \gamma_{0}= \gamma_{0a}+\gamma_{0s} \,, \] such that \[ \reallywidehat{\gamma_{0s}}= \left( \widehat{\gamma} \right)_{\operatorname{sc}} \qquad \text{ and } \qquad \reallywidehat{\gamma_{0a}}= \left( \widehat{\gamma} \right)_{\operatorname{ac}} \,. \] Whenever this is possible, we will refer to \begin{equation}\label{EQged} \gamma =\gamma_{\operatorname{s}}+\gamma_{0a}+\gamma_{0s} \end{equation} as the complete (or generalized) Eberlein decomposition of $\gamma$. The generalized Eberlein decomposition exists for Fourier transformable measures with Meyer set support \cite{NS20a,NS21}. This result has important consequences for one-dimensional Pisot substitutions, see for example \cite{BaGa,BG2}, and we expect that establishing the existence of the generalized Eberlein decomposition may have important consequences in general. This makes it an important open problem in diffraction theory. Let us state this explicitly. \begin{question} Given a positive definite (or, more generally, a Fourier transformable) measure $\gamma$ on ${\mathbb R}^d$, can one find a decomposition as in \eqref{EQged} which is the Fourier dual of the Lebesgue decomposition of $\widehat{\gamma}\,$? \end{question} It is the main goal of this paper to show that each Fourier transformable measure $\gamma$ on ${\mathbb R}^d$ has a generalized Eberlein decomposition \eqref{EQged} with $\gamma_{0a}, \gamma_{0s}$ tempered distributions of order at most $2d$. Our approach is as follows. Since, for a Fourier transformable measure $\gamma$, the measure $\widehat{\gamma}$ is translation bounded \cite[Thm.~4.9.23]{MoSt}, we restrict our attention to the class $\mathcal{DTBM}({\mathbb R}^d)$ of tempered distributions whose Fourier transform is a translation bounded measure.
The Fourier transform then becomes a bijection between $\mathcal{DTBM}({\mathbb R}^d)$ and ${\mathcal M}^\infty({\mathbb R}^d)$, so we will first study the properties of the space $\mathcal{DTBM}({\mathbb R}^d)$. In the first main result of the paper, Theorem~\ref{Prop 2}, we show that for each $\omega \in \mathcal{DTBM}({\mathbb R}^d)$ there exists some $f \in C_{\text{u}}({\mathbb R}^d)$ such that \[ \omega=\left((\partial_{x_1})^{2d}+\ldots+ (\partial_{x_d})^{2d}+(-1)^d\right) (f \ensuremath{\lambda\!\!\!\lambda}) \] in the sense of distributions. In particular, it follows that each element $\omega \in \mathcal{DTBM}({\mathbb R}^d)$ is a distribution of order at most $2d$. Next, we prove in Proposition~\ref{gen ebe dist} that each distribution $\omega \in \mathcal{DTBM}({\mathbb R}^d)$ admits a generalized Eberlein decomposition \[ \omega=\omega_{\operatorname{s}}+\omega_{\operatorname{0a}}+ \omega_{\operatorname{0s}} \,, \] with $\omega_{\operatorname{s}},\omega_{\operatorname{0a}}, \omega_{\operatorname{0s}}\in \mathcal{DTBM}({\mathbb R}^d)$, which is in Fourier duality with the Lebesgue decomposition. By combining these results, we prove in Theorem~\ref{coro:mainrd} that each Fourier transformable measure $\gamma$ on ${\mathbb R}^d$ admits a generalized Eberlein decomposition \[ \gamma=\gamma_{\operatorname{s}}+\omega_{\operatorname{0a}}+ \omega_{\operatorname{0s}} \,, \] where $\gamma_{\operatorname{s}}$ and $\gamma_{0}:=\omega_{\operatorname{0a}}+ \omega_{\operatorname{0s}}$ are Fourier transformable measures and $\omega_{\operatorname{0a}}$, $\omega_{\operatorname{0s}}$ are tempered distributions of order at most $2d$. We complete the paper by taking a closer look, in Section~\ref{sect last}, at those distributions in $\mathcal{DTBM}({\mathbb R}^d)$ whose Fourier transform is a pure point, continuous or absolutely continuous measure, respectively.
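To illustrate the representation above in the simplest case $d=1$ (a sketch one can verify directly; it is not needed in the sequel): for $\omega=\delta_{{\mathbb Z}}$, whose Fourier transform $\delta_{{\mathbb Z}}$ is translation bounded, the function \[ f(x):= -\sum_{k\in{\mathbb Z}} \frac{\operatorname{e}^{2\pi{\mathrm{i}} kx}}{4\pi^2k^2+1} \] is given by an absolutely convergent series, and hence is bounded and uniformly continuous, while termwise differentiation combined with the Poisson summation formula yields \[ \left((\partial_{x})^{2}+(-1)\right)(f\ensuremath{\lambda\!\!\!\lambda}) = \sum_{k\in{\mathbb Z}} \operatorname{e}^{2\pi{\mathrm{i}} kx}\, \ensuremath{\lambda\!\!\!\lambda} = \delta_{{\mathbb Z}} \] in the sense of distributions.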
In general, given a positive definite measure $\gamma$, one would hope that all three components of the generalized Eberlein decomposition are measures. It is not clear if this is the case. More importantly, one can define the diffraction of more general processes (see for example \cite{LM} and \cite[Sect.~1.4 and Thm.~3.28]{LSS}), in which case the autocorrelation is not necessarily a measure but the diffraction is a translation bounded measure. In this case, the autocorrelation can be viewed as an element of $\mathcal{DTBM}({\mathbb R}^d)$, and hence our approach can be used in the study of such more general objects. Lastly, let us mention here that some of the results of this paper can be extended to the space $\mathcal{S}_{\widehat{\mathcal{M}}}({\mathbb R}^d)$ of tempered distributions with measure Fourier transform, which was introduced and studied in \cite{ST}. However, in diffraction theory we only deal with measures whose Fourier transform is a translation bounded measure, and as translation boundedness gives more properties and simplifies many proofs (see for example Theorem~\ref{Prop 2}), we will restrict to $\mathcal{DTBM}({\mathbb R}^d)$ in this article. \section{Preliminaries} Throughout this paper, we will work in the $d$-dimensional Euclidean space ${\mathbb R}^d$. The Lebesgue measure will be denoted by $\ensuremath{\lambda\!\!\!\lambda}$ or simply $\mbox{d} x$. We will denote by $C_{\text{u}}({\mathbb R}^d)$, $C_{\text{c}}({\mathbb R}^d)$, ${\textsf S}({\mathbb R}^d)$ and $C_{\text{c}}^{\infty}({\mathbb R}^d)$ the spaces of uniformly continuous and bounded functions, compactly supported continuous functions, Schwartz functions and compactly supported smooth functions, respectively.
For any function $g$ on ${\mathbb R}^d$ and $t \in {\mathbb R}^d$, the functions $T_tg, \widetilde{g}$ and $g^{\dagger}$ are defined by \[ (T_tg)(x):=g(x-t)\,, \qquad \widetilde{g}(x):=\overline{g(-x)} \qquad \text{ and } \qquad g^{\dagger}(x):=g(-x) \,. \] A \emph{measure} $\mu$ on ${\mathbb R}^d$ is a linear functional on $C_{\text{c}}({\mathbb R}^d)$ such that, for every compact subset $K\subset {\mathbb R}^d$, there is a constant $a_K>0$ with \[ |\mu(\varphi)| \leqslant a_{K}\, \|\varphi\|_{\infty} \] for all $\varphi\in C_{\text{c}}({\mathbb R}^d)$ with ${\operatorname{supp}}(\varphi) \subseteq K$. Here, $\|\varphi\|_{\infty}$ denotes the supremum norm of $\varphi$. By the Riesz representation theorem, this definition is equivalent to the classical measure-theoretic concept of a regular Radon measure. Similarly to functions, for a measure $\mu$ on ${\mathbb R}^d$ and $t \in {\mathbb R}^d$, we define $T_t\mu, \widetilde{\mu}$ and $\mu^{\dagger}$ by \[ (T_t\mu)(\varphi):= \mu(T_{-t}\varphi)\,, \qquad \widetilde{\mu}(\varphi):=\overline{ \mu (\widetilde{\varphi})} \qquad \text{ and } \qquad \mu^{\dagger}(\varphi):= \mu(\varphi^{\dagger}) \,. \] Given a measure $\mu$, there exists a positive measure $|\mu|$ such that, for all $\psi \in C_{\text{c}}({\mathbb R}^d)$ with $\psi \geqslant 0$, we have \cite{Ped} (compare \cite[Appendix]{CRS2}) \[ |\mu| (\psi)= \sup \{ \left| \mu (\varphi) \right| \ :\ \varphi \in C_{\text{c}}({\mathbb R}^d),\, |\varphi| \leqslant \psi \} \,. \] The measure $|\mu|$ is called the \emph{total variation} of $\mu$. The measure $\mu$ is called \emph{translation bounded} if, for all compact sets $K \subset {\mathbb R}^d$, we have \[ \| \mu \|_{K}:=\sup_{t \in {\mathbb R}^d} \left| \mu \right|(t+K) < \infty \,. \] We will denote the space of translation bounded measures by ${\mathcal M}^\infty({\mathbb R}^d)$.
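For instance (a standard example, included here only for illustration), the Dirac comb $\delta_{{\mathbb Z}^d}:=\sum_{k\in{\mathbb Z}^d}\delta_k$ satisfies $|\delta_{{\mathbb Z}^d}|=\delta_{{\mathbb Z}^d}$ and \[ \| \delta_{{\mathbb Z}^d} \|_{[0,1]^d}=\sup_{t\in{\mathbb R}^d} \#\left( {\mathbb Z}^d \cap (t+[0,1]^d) \right) = 2^d < \infty \,, \] so $\delta_{{\mathbb Z}^d}\in{\mathcal M}^\infty({\mathbb R}^d)$. In contrast, the measure $\sum_{k\in{\mathbb Z}} |k|\, \delta_k$ on ${\mathbb R}$ is a measure in the above sense, but it is not translation bounded.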
\begin{remark} \begin{itemize} \item[(a)] A measure $\mu$ is translation bounded if and only if, for all $\varphi \in C_{\text{c}}({\mathbb R}^d)$, we have $\varphi*\mu \in C_{\text{u}}({\mathbb R}^d)$ \cite{ARMA1,MoSt}. Here, \[ (\varphi*\mu)(x):= \int_{{\mathbb R}^d} \varphi(x-y)\ \mbox{d} \mu(y) \,. \] \item[(b)] A measure $\mu$ is translation bounded if and only if $\| \mu \|_{U} < \infty$ for a single pre-compact Borel set $U$ with non-empty interior \cite{BM,SS2}. Moreover, in this case, two different pre-compact Borel sets with non-empty interior define equivalent norms. \end{itemize} \end{remark} Since $C_{\text{c}}^\infty({\mathbb R}^d) \subseteq C_{\text{c}}({\mathbb R}^d)$, each measure is a distribution of order $0$. A measure $\mu$ is called a \emph{tempered measure} if $\mu : C_{\text{c}}^\infty({\mathbb R}^d) \to {\mathbb C}$ is the restriction to $C_{\text{c}}^\infty({\mathbb R}^d)$ of a tempered distribution $\omega : {\textsf S}({\mathbb R}^d) \to {\mathbb C}$. It is easy to see that this is equivalent to $\mu : C_{\text{c}}^\infty({\mathbb R}^d) \to {\mathbb C}$ being continuous with respect to the Schwartz topology induced by the embedding $C_{\text{c}}^\infty({\mathbb R}^d) \hookrightarrow {\textsf S}({\mathbb R}^d)$ (see for example \cite{ARMA1}). \begin{remark} A positive measure $\mu$ is tempered if and only if it is slowly increasing \cite[p.~242]{sch}, \cite[Thm.~2.1]{Kaba}, meaning that there exists a polynomial $P \in {\mathbb R}[X_1,\ldots,X_d]$ such that \[ \int_{{\mathbb R}^d} \frac{1}{1+|P(x)|}\ \mbox{d} \mu(x) < \infty \,. \] If $\mu$ is a measure such that $|\mu|$ is slowly increasing, then $\mu$ is tempered \cite[p.~47]{ARMA1}. In this case, we say that $\mu$ is tempered in the strong sense. Note that there is an example of a tempered measure $\mu$ such that $|\mu|$ is not tempered \cite{ARMA1}. In particular, by the above, this measure is tempered but not tempered in the strong sense.
\end{remark} Any translation bounded measure is slowly increasing and hence tempered \cite[Thm.~7.1]{ARMA1}. The basic tool we will use in this paper is the following estimate. As this estimate gives an alternative proof that every translation bounded measure is slowly increasing, we include it here. \begin{lemma} \label{lem:distr_2} Let $\nu\in\mathcal{M}^{\infty}({\mathbb R}^d)$. Then, there is a constant $c>0$ such that \[ \int_{{\mathbb R}^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1}\, \mbox{d}|\nu|(x) \leqslant c\, \|\nu\|_{[-\frac{1}{2},\frac{1}{2}]^d} < \infty \,. \] In particular, \[ \frac{1}{(2\pi{\mathrm{i}} (\cdot)_1)^{2d}+\ldots+(2\pi{\mathrm{i}} (\cdot)_d)^{2d}+(-1)^d}\, \nu \] is a finite measure on ${\mathbb R}^d$. \end{lemma} \begin{proof} We first note that \begin{equation} \label{eq:c_n} \begin{aligned} {} &\int_{{\mathbb R}^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1} \, \mbox{d}|\nu|(x) \\ &\phantom{XXXXXX}=\sum_{k\in{\mathbb Z}^d} \int_{k+[-\frac{1}{2},\frac{1}{2}]^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1}\, \mbox{d}|\nu|(x) \\ &\phantom{XXXXXX}\leqslant \|\nu\|_{[-\frac{1}{2},\frac{1}{2}]^d} \sum_{k\in{\mathbb Z}^d} \sup_{x\in k+[-\frac{1}{2},\frac{1}{2}]^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1} \,. \end{aligned} \end{equation} Now, let $M:=\{k\in{\mathbb Z}^d\,:\, |k_j|\geqslant 2 \text{ for all } 1\leqslant j\leqslant d\}$. On the one hand, since every $k\in{\mathbb Z}^d\setminus M$ has at least one coordinate with $|k_j|\leqslant 1$, splitting the sum according to these coordinates (which take only finitely many values) and arguing for the remaining coordinates as in \eqref{eq:c_2} below, one obtains \begin{equation} \label{eq:c_1} c_1:=\sum_{k\in{\mathbb Z}^d\setminus M} \sup_{x\in k+[-\frac{1}{2},\frac{1}{2}]^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1} <\infty \,.
\end{equation} Also note that, for $d >1$, by H\"older's inequality, we have \[ \sum_{j=1}^d (2 \pi x_j)^2 \cdot 1 \leqslant \left( \sum_{j=1}^d (2 \pi x_j)^{2d} \right)^\frac{1}{d} \left( \sum_{j=1}^d 1^q \right)^\frac{1}{q} \,, \] where $q=\frac{d}{d-1}$ is the conjugate exponent of $d$. Therefore, one has \begin{equation}\label{eq2} \left( (2 \pi x_1)^2+ (2 \pi x_2)^2+ \ldots + (2 \pi x_d)^2 \right)^d \leqslant \left( (2 \pi x_1)^{2d}+ (2 \pi x_2)^{2d}+ \ldots + (2 \pi x_d)^{2d} \right) d^{d-1} \,. \end{equation} Moreover, \eqref{eq2} trivially holds when $d=1$. Therefore, we obtain \begin{equation} \label{eq:c_2} \begin{aligned} c_2 &:=\sum_{k\in M} \sup_{x\in k+[-\frac{1}{2},\frac{1}{2}]^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1} \\ &\overset{\eqref{eq2}}{\leqslant} \sum_{k\in M} \sup_{x\in k+[-\frac{1}{2},\frac{1}{2}]^d} \frac{d^{d-1}}{((2\pi x_1)^{2}+\ldots+(2\pi x_d)^{2})^d} \\ &\leqslant \sum_{|k_1|\geqslant 2}\cdots \sum_{|k_d|\geqslant 2} \left(\frac{d}{(2\pi (|k_1|-1))^{2}+\ldots+(2\pi (|k_d|-1))^{2}} \right)^d \\ &\leqslant \left(\sum_{|k_1|\geqslant 1} \frac{d}{(2\pi k_1)^2} \right) \cdot \ldots \cdot \left(\sum_{|k_d|\geqslant 1} \frac{d}{(2\pi k_d)^2} \right) < \infty \,. \end{aligned} \end{equation} Finally, Eqs.~\eqref{eq:c_n}, \eqref{eq:c_1} and \eqref{eq:c_2} give \[ \int_{{\mathbb R}^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1}\, \mbox{d}|\nu|(x) \leqslant \|\nu\|_{[-\frac{1}{2},\frac{1}{2}]^d}\, (c_1+ c_2) \,. \] The claim follows with $c:=c_1+ c_2$. \end{proof} As immediate consequences of Lemma~\ref{lem:distr_2}, we obtain the following results. \begin{corollary} Let $\nu\in\mathcal{M}^{\infty}({\mathbb R}^d)$. Then, \[ \frac{1}{\left( (2\pi{\mathrm{i}} (\cdot)_1)^{2}+\ldots+(2\pi{\mathrm{i}} (\cdot)_d)^{2}\right)^d+(-1)^d}\, \nu \] is a finite measure on ${\mathbb R}^d$.
\end{corollary} \begin{proof} This follows immediately from Lemma~\ref{lem:distr_2} and the inequality \[ \frac{1}{\left( (2\pi x_1)^{2}+\ldots+(2\pi x_d)^{2}\right)^d+1} \leqslant \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1} \,. \] \end{proof} \begin{corollary}\label{cor tb implies tempered}\cite[Thm.~7.1]{ARMA1} Let $\mu \in {\mathcal M}^\infty({\mathbb R}^d)$. Then, for all $f \in {\textsf S}({\mathbb R}^d)$, we have \[ \int_{{\mathbb R}^d} |f(s)|\ \mbox{d} |\mu|(s) < \infty \,. \] Moreover, the mapping $f \mapsto \int_{{\mathbb R}^d} f(s)\ \mbox{d}\mu(s)$ is a tempered distribution. \end{corollary} \begin{proof} By the above, with $P(x_1,\ldots,x_d):=(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}$, we have \begin{align*} \int_{{\mathbb R}^d} |f(x)|\ \mbox{d}| \mu|(x) &\leqslant \int_{{\mathbb R}^d} |f(x) (1+ P(x))|\ \mbox{d} \Big(\frac{1}{1+P} |\mu|\Big)(x) \\ &\leqslant C \| (1+P)f \|_\infty \end{align*} for all $f \in {\textsf S}({\mathbb R}^d)$, where \[ C= \int_{{\mathbb R}^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1}\, \mbox{d}|\mu|(x) < \infty \,. \] The claim follows. \end{proof} Finally, let us introduce some basic concepts about point sets which we will meet in the paper. \begin{definition} A point set $\Lambda\subset{\mathbb R}^d$ is called \emph{relatively dense} if there is a compact set $K\subset {\mathbb R}^d$ such that $\Lambda+K={\mathbb R}^d$. The set $\Lambda$ is called \emph{uniformly discrete} if there is an open neighborhood $U$ of $0$ such that $(x+U)\cap (y+U)=\varnothing$ for all distinct $x,y\in \Lambda$. The set $\Lambda$ is called a \emph{Meyer set} if it is relatively dense and $\Lambda-\Lambda$ is uniformly discrete. Here, $\Lambda - \Lambda$ denotes the \emph{Minkowski difference} \[ \Lambda - \Lambda := \{ x-y\, :\, x,y \in \Lambda \} \,.
\] \end{definition} For more properties of Meyer sets and their full classification, we refer the reader to \cite{LAG,Meyer,MOO}. \subsection{Fourier transformability} Next, we briefly review the notion of Fourier transformability for measures and the connection to the Fourier transform of tempered distributions. Let us recall that the Fourier transform and the inverse Fourier transform of a function $f \in L^1({\mathbb R}^d)$ are denoted by $\widehat{f}$ and $\reallywidecheck{f}$, respectively. They are defined via \[ \widehat{f}(x)= \int_{{\mathbb R}^d} \operatorname{e}^{-2 \pi {\mathrm{i}} x \cdot y} f(y)\ \mbox{d} y \qquad \text{ and } \qquad \reallywidecheck{f}(x) = \widehat{f}(-x) \,. \] Next, let us review the definition of Fourier transformability for measures. \begin{definition} A measure $\mu$ on ${\mathbb R}^d$ is called \emph{Fourier transformable as a measure} if there exists a measure $\widehat{\mu}$ on ${\mathbb R}^d$ such that \[ \reallywidecheck{\varphi}\in L^2(|\widehat{\mu}|) \qquad \text{ and } \qquad \left\langle \mu\, , \, \varphi*\widetilde{\varphi} \right\rangle = \left\langle \widehat{\mu}\, , \, |\reallywidecheck{\varphi}|^2 \right\rangle \] for all $\varphi \in C_{\text{c}}({\mathbb R}^d)$. In this case, $\widehat{\mu}$ is called the \emph{measure Fourier transform} of $\mu$. \end{definition} We will often use the following result. \begin{theorem}\label{FT measure dist}\cite[Thm.~5.2]{NS20b} Let $\mu$ be a measure on ${\mathbb R}^d$. Then, $\mu$ is Fourier transformable as a measure if and only if $\mu$ is tempered as a distribution and its Fourier transform as a tempered distribution is a translation bounded measure. Moreover, in this case, the Fourier transforms of $\mu$ in the measure and distribution sense coincide. \qed \end{theorem} The following result is easy to prove and will be useful later in the paper.
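Two classical examples may help fix ideas (standard facts, stated here only for illustration): the Lebesgue measure $\ensuremath{\lambda\!\!\!\lambda}$ is Fourier transformable as a measure with $\widehat{\ensuremath{\lambda\!\!\!\lambda}}=\delta_0$, and, by the Poisson summation formula, the Dirac comb $\delta_{{\mathbb Z}^d}$ is Fourier transformable with $\widehat{\delta_{{\mathbb Z}^d}}=\delta_{{\mathbb Z}^d}$. In both cases, the Fourier transform is a translation bounded measure, in accordance with Theorem~\ref{FT measure dist}.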
Note that the definition of Fourier transformability of measures guarantees that \eqref{eq1} below holds for all Fourier transformable measures $\mu$ and all $\varphi \in K_2({\mathbb R}^d)=\mbox{Span} \{ \psi * \phi : \psi, \phi \in C_{\text{c}}({\mathbb R}^d) \}$. Nevertheless, there are many Fourier transformable measures for which \eqref{eq1} fails for functions $\varphi \in C_{\text{c}}({\mathbb R}^d) \setminus K_2({\mathbb R}^d)$. The fact that $\rho$ is a finite measure is crucial in Proposition~\ref{P1}. \begin{proposition} \label{P1} Let $\rho$ be a finite measure on ${\mathbb R}^d$, let $h= \reallywidecheck{\rho}$ and let $\mu=h \ensuremath{\lambda\!\!\!\lambda}$. Then, for all $\varphi \in C_{\text{c}}({\mathbb R}^d)$, we have \begin{equation}\label{eq1} \mu(\varphi)= \rho(\reallywidecheck{\varphi}) \,. \end{equation} \end{proposition} \begin{proof} Note here that, since $\rho$ is finite, all integrals below are finite and hence, by Fubini's theorem, \begin{align*} \mu(\varphi) &= \int_{{\mathbb R}^d} \varphi(x)\, h(x) \, \mbox{d} x = \int_{{\mathbb R}^d} \varphi(x) \int_{{\mathbb R}^d} \operatorname{e}^{2\pi{\mathrm{i}} x\cdot y} \, \mbox{d}\rho(y)\, \mbox{d} x \\ &= \int_{{\mathbb R}^d} \int_{{\mathbb R}^d} \varphi(x)\, \operatorname{e}^{2\pi{\mathrm{i}} x\cdot y}\, \mbox{d} x\, \mbox{d}\rho(y) = \int_{{\mathbb R}^d} \reallywidecheck{\varphi}(y)\, \mbox{d} \rho(y) = \rho(\reallywidecheck{\varphi}) \,, \end{align*} which establishes the desired equality. \end{proof} \subsection{Almost periodicity} Here, we briefly review the concepts of almost periodicity for functions, measures and tempered distributions. For more general overviews, we recommend the review \cite{MoSt} as well as \cite{Eb,LS2,LSS,LSS2,ST}, just to name a few.
\begin{definition} A function $f\in C_{\text{u}}({\mathbb R}^d)$ is \emph{strongly (or Bohr) almost periodic} if the closure of $\{T_tf\, :\, t\in {\mathbb R}^d\}$ in $(C_{\text{u}}({\mathbb R}^d), \| \cdot \|_\infty)$ is compact. A function $f\in C_{\text{u}}({\mathbb R}^d)$ is \emph{weakly almost periodic} if the closure of $\{T_tf\, :\, t\in {\mathbb R}^d\}$ in the weak topology of the Banach space $(C_{\text{u}}({\mathbb R}^d), \| \cdot \|_\infty)$ is compact. The spaces of strongly and weakly almost periodic functions on ${\mathbb R}^d$ are denoted by $S\hspace*{-1pt}AP({\mathbb R}^d)$ and $W\hspace*{-2pt}AP({\mathbb R}^d)$, respectively. \end{definition} For every $f\in W\hspace*{-2pt}AP({\mathbb R}^d)$, the \emph{mean} \[ M(f):=\lim_{n\to\infty} \frac{1}{(2n)^d} \int_{s+[-n,n]^d} f(x)\ \mbox{d} x \] exists uniformly in $s\in {\mathbb R}^d$, see \cite[Thm.~14.1]{Eb}, \cite[Prop.~4.5.9]{MoSt}. Moreover, we also have $|f| \in W\hspace*{-2pt}AP({\mathbb R}^d)$ \cite[Thm.~11.2]{Eb}, \cite[Prop.~4.3.11]{MoSt}. \begin{definition} A function $f\in W\hspace*{-2pt}AP({\mathbb R}^d)$ is \emph{null weakly almost periodic} if $M(|f|)=0$. We denote the space of null weakly almost periodic functions by $W\hspace*{-2pt}AP_0({\mathbb R}^d)$. \end{definition} In the spirit of \cite{ARMA}, these notions carry over to measures and tempered distributions via convolution with functions in $C_{\text{c}}({\mathbb R}^d)$ or ${\textsf S}({\mathbb R}^d)$, respectively. \begin{definition} A measure $\mu\in{\mathcal M}^{\infty}({\mathbb R}^d)$ is \emph{strongly}, \emph{weakly} or \emph{null weakly almost periodic} if $\varphi*\mu$ is a strongly, weakly or null weakly almost periodic function, respectively, for all $\varphi \in C_{\text{c}}({\mathbb R}^d)$.
We will denote by $\mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R}^d)$, $\mathcal{W}\hspace*{-1pt}\mathcal{AP}({\mathbb R}^d)$ and $\mathcal{W}\hspace*{-1pt}\mathcal{AP}_0({\mathbb R}^d)$ the spaces of strongly, weakly and null weakly almost periodic measures, respectively. Analogously, a tempered distribution $\omega\in{\textsf S}'({\mathbb R}^d)$ is called \emph{weakly}, \emph{strongly} or \emph{null weakly almost periodic}, respectively, if, for all $f \in {\textsf S}({\mathbb R}^d)$, the function $f*\omega$ is weakly, strongly or null weakly almost periodic, respectively. Here, \[ (f*\omega)(x)= \omega( f(x- \cdot)) \in C^\infty({\mathbb R}^d) \,. \] We shall denote the corresponding spaces of tempered distributions by $\mathsf{WAP}({\mathbb R}^d), \mathsf{SAP}({\mathbb R}^d)$ and $\mathsf{WAP}_0({\mathbb R}^d)$, respectively. \end{definition} At the end of this section, we introduce the \emph{Eberlein decomposition} for measures and distributions, and discuss its relevance to diffraction theory. Let us first remind the reader of the following results. \begin{proposition}\label{prop ebe}\cite[Thm.~4.10.10]{MoSt}, \cite[Thm.~5.2]{ST} \begin{itemize} \item[(a)] $\displaystyle \mathcal{W}\hspace*{-1pt}\mathcal{AP}({\mathbb R}^d) = \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R}^d)\oplus \mathcal{W}\hspace*{-1pt}\mathcal{AP}_0({\mathbb R}^d)$. \item[(b)] $\displaystyle \mathsf{WAP}({\mathbb R}^d) = \mathsf{SAP}({\mathbb R}^d)\oplus \mathsf{WAP}_0({\mathbb R}^d)$. \end{itemize} \end{proposition} Proposition~\ref{prop ebe} says that each $\mu \in \mathcal{W}\hspace*{-1pt}\mathcal{AP}({\mathbb R}^d)$ has a unique Eberlein decomposition \begin{equation}\label{EQ1} \mu =\mu_{\operatorname{s}}+ \mu_0 \,, \end{equation} with $\mu_{\operatorname{s}} \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R}^d)$ and $\mu_0 \in \mathcal{W}\hspace*{-1pt}\mathcal{AP}_0({\mathbb R}^d)$.
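As a simple illustration (a standard example, not one of the results above): for $g \in C_{\text{c}}({\mathbb R}^d)$, consider the measure $\mu:=\delta_{{\mathbb Z}^d}+g\ensuremath{\lambda\!\!\!\lambda}$. For every $\varphi \in C_{\text{c}}({\mathbb R}^d)$, the function $\varphi*\delta_{{\mathbb Z}^d}$ is continuous and ${\mathbb Z}^d$-periodic, hence lies in $S\hspace*{-1pt}AP({\mathbb R}^d)$, while $\varphi*(g\ensuremath{\lambda\!\!\!\lambda})=\varphi*g$ vanishes at infinity and hence lies in $W\hspace*{-2pt}AP_0({\mathbb R}^d)$. Therefore, the Eberlein decomposition of $\mu$ is \[ \mu_{\operatorname{s}}=\delta_{{\mathbb Z}^d} \qquad \text{ and } \qquad \mu_0=g\ensuremath{\lambda\!\!\!\lambda} \,. \]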
Similarly, each $\omega \in \mathsf{WAP}({\mathbb R}^d)$ has a unique Eberlein decomposition \begin{equation}\label{EQ2} \omega =\omega_{\operatorname{s}}+ \omega_0 \,, \end{equation} with $\omega_{\operatorname{s}} \in \mathsf{SAP}({\mathbb R}^d)$ and $\omega_0 \in \mathsf{WAP}_0({\mathbb R}^d)$. Furthermore, by \cite[Thm.~5.3]{ST}, we have $\mathcal{W}\hspace*{-1pt}\mathcal{AP}({\mathbb R}^d) \subseteq \mathsf{WAP}({\mathbb R}^d)$, $\mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R}^d) \subseteq \mathsf{SAP}({\mathbb R}^d)$ and $\mathcal{W}\hspace*{-1pt}\mathcal{AP}_0({\mathbb R}^d) \subseteq \mathsf{WAP}_0({\mathbb R}^d)$. In particular, for a measure $\mu \in \mathcal{W}\hspace*{-1pt}\mathcal{AP}({\mathbb R}^d) \subseteq \mathsf{WAP}({\mathbb R}^d)$, the decompositions \eqref{EQ1} and \eqref{EQ2} coincide. The importance of the Eberlein decomposition for diffraction theory is given by the following two results. \begin{theorem}\label{t1}\cite[Thm.~4.10.4 and Thm.~4.10.12]{MoSt} Let $\mu \in {\mathcal M}^\infty({\mathbb R}^d)$ be Fourier transformable as a measure. Then, $\mu \in \mathcal{W}\hspace*{-1pt}\mathcal{AP}({\mathbb R}^d)$, the measures $\mu_{\operatorname{s}}$, $\mu_0$ are Fourier transformable and \[ \reallywidehat{\mu_{\operatorname{s}}}= \left( \widehat{\mu} \right)_{\operatorname{pp}} \qquad \text{ and } \qquad \reallywidehat{\mu_{0}}= \left( \widehat{\mu} \right)_{\operatorname{c}} \,. \] \end{theorem} \begin{theorem}\label{t2}\cite[Thm.~6.1]{ST} Let $\omega \in {\textsf S}'({\mathbb R}^d)$. If $\widehat{\omega}$ is a measure, then $\omega \in \mathsf{WAP}({\mathbb R}^d)$ and \[ \reallywidehat{\omega_{\operatorname{s}}}= \left( \widehat{\omega} \right)_{\operatorname{pp}} \qquad \text{ and } \qquad \reallywidehat{\omega_{0}}= \left( \widehat{\omega} \right)_{\operatorname{c}} \,.
\] \end{theorem} As we mentioned in the introduction, it is our goal in this paper to show the existence of the generalized Eberlein decomposition and to study some of its properties. To do this, we will work with the larger class $\mathcal{DTBM}({\mathbb R}^d)$ of tempered distributions whose Fourier transform is a translation bounded measure. This class contains all measures which are Fourier transformable (as measures). \section{Properties of $\mathcal{DTBM}({\mathbb R}^d)$} As emphasized in the previous section, the following space will play the central role in this section: \[ \mathcal{DTBM}({\mathbb R}^d):= \{ \omega \in {\textsf S}'({\mathbb R}^d)\, :\, \widehat{\omega} \in {\mathcal M}^\infty({\mathbb R}^d) \} \,. \] Note that, by Theorem~\ref{t2}, we have $\mathcal{DTBM}({\mathbb R}^d) \subseteq \mathsf{WAP}({\mathbb R}^d)$. Moreover, by Theorem~\ref{FT measure dist}, if $\mu$ is a measure on ${\mathbb R}^d$ which is Fourier transformable as a measure, then $\mu \in \mathcal{DTBM}({\mathbb R}^d)$. Let us note that every measure $\mu \in {\mathcal M}^\infty({\mathbb R}^d)$ is a tempered measure by Corollary~\ref{cor tb implies tempered}, and hence the Fourier transform of some $\omega \in {\textsf S}'({\mathbb R}^d)$, which by definition is in $\mathcal{DTBM}({\mathbb R}^d)$. Therefore, we get the following simple fact. \begin{fact}\label{Fact 1} The Fourier transform is a bijection from $\mathcal{DTBM}({\mathbb R}^d)$ to ${\mathcal M}^\infty({\mathbb R}^d)$. \end{fact} Since this space will play a fundamental role in the remainder of the paper, we will characterize it in the following. Let us start with a simple characterization of the positive definite tempered distributions in this space. First, recall that $\omega \in {\textsf S}'({\mathbb R}^d)$ is called \emph{positive definite} if, for all $f \in {\textsf S}({\mathbb R}^d)$, we have \[ \omega (f*\widetilde{f}) \geqslant 0 \,.
\] By the Bochner--Schwartz theorem \cite[Thm. IX.10]{ReSi}, a tempered distribution is positive definite if and only if its Fourier transform is a positive tempered measure. Recall that given $\omega \in {\textsf S}'({\mathbb R}^d)$ and $f \in {\textsf S}({\mathbb R}^d)$, the convolution $f*\omega$ is an infinitely differentiable function, which is not necessarily bounded. If $f*\omega$ is bounded for all $f \in {\textsf S}({\mathbb R}^d)$, we say that $\omega$ is a \operatorname{e}mph{translation bounded} tempered distribution (see \cite{ST} for properties of these tempered distributions). The space of translation bounded tempered distributions is denoted by ${\textsf S}'_{\infty}({\mathbb R}^d)$. We can now prove the following result. \begin{proposition} Let $\omega \in {\textsf S}'({\mathbb R}^d)$ be positive definite. Then, the following statements are equivalent. \begin{itemize} \item[(i)] $\omega \in \mathcal{DTBM}({\mathbb R}^d)$. \item[(ii)] $\widehat{\omega} \in {\textsf S}_{\infty}'({\mathbb R}^d)$. \item[(iii)] For all $f \in {\textsf S}({\mathbb R}^d)$ there exists some constant $C>0$ such that \[ \big|\omega(e^{2 \pi {\mathrm{i}} t \cdot} f)\big| \leqslant C \] for all $t \in {\mathbb R}^d$. \item[(iv)] There exist some $f \in {\textsf S}({\mathbb R}^d)$ and $C>0$ with $f\neq 0$, $\widehat{f} \geqslant 0$ such that \[ \big|\omega(e^{2 \pi {\mathrm{i}} t \cdot} f)\big| \leqslant C \] for all $t \in {\mathbb R}^d$. \operatorname{e}nd{itemize} \operatorname{e}nd{proposition} \begin{proof} The equivalence (i)$\iff$(ii) follows from \cite[Prop.~2.5]{ST}. \noindent (ii)${\mathrm{i}}plies$(iii) Let $f \in {\textsf S}({\mathbb R}^d)$, and let $C:= \| \widehat{f}*\widehat{\omega} \|_\infty$, which is finite because $\widehat{\omega} \in {\textsf S}_{\infty}'({\mathbb R}^d)$.
Then, for all $t \in {\mathbb R}^d$ we have \[ \big|\omega(e^{-2 \pi {\mathrm{i}} t \cdot} f)\big| = \big|\widehat{\omega}(\operatorname{Re}allywidecheck{e^{-2 \pi {\mathrm{i}} t \cdot} f})\big| = \big| (\widehat{f} * \widehat{\omega}) (t) \big| \leqslant C \,. \] \noindent (iii)${\mathrm{i}}plies$(iv) This is obvious. \noindent (iv)${\mathrm{i}}plies$(i) First, $f \neq 0$ implies $\operatorname{Re}allywidecheck{f} \neq 0$. Therefore, there exists some $s \in {\mathbb R}^d$ such that $\operatorname{Re}allywidecheck{f}(s) \neq 0$. Since $\operatorname{Re}allywidecheck{f}(s)=\widehat{f}(-s) \geqslant 0$, we get $\operatorname{Re}allywidecheck{f}(s) >0$. Therefore, there exist some $r>0$ and $c>0$ such that \[ \operatorname{Re}allywidecheck{f}(x) \geqslant c \qquad \text{ for all } x \in B_r(s) \,. \] Now, let $\mu := \widehat{\omega}$, which is a positive measure since $\omega$ is positive definite. Since $\operatorname{Re}allywidecheck{f}$ and $\mu$ are positive, we have \begin{align*} c\, \mu(B_r(x)) &= \int_{{\mathbb R}^d} c\, 1_{B_r(x)}(y)\ \mbox{d}\mu(y) = \int_{{\mathbb R}^d} c\, 1_{B_r(s)}(y-x+s)\ \mbox{d}\mu(y) \\ &\leqslant \int_{{\mathbb R}^d} \operatorname{Re}allywidecheck{f} (y-x+s)\ \mbox{d} \mu(y) =\int_{{\mathbb R}^d} \operatorname{Re}allywidecheck{\operatorname{e}^{2\pi{\mathrm{i}}(s-x)\cdot}f}\,(y)\ \mbox{d} \widehat{\omega}(y) \\ &=\omega(e^{2 \pi {\mathrm{i}} (s-x) \cdot} f) \leqslant C \operatorname{e}nd{align*} for all $x \in {\mathbb R}^d$. This gives that \[ \| \mu \|_{B_{r}(0)} = \sup_{x \in {\mathbb R}^d} \mu(B_r(x)) \leqslant \frac{C}{c} \,, \] which proves the claim. \operatorname{e}nd{proof} Next, by repeating the arguments of \cite{SS2}, we can give the following characterization of $\mathcal{DTBM}({\mathbb R}^d)$. \begin{proposition} Let $\omega \in {\textsf S}'({\mathbb R}^d)$. Let $B:= \{ \varphi \in {\mathbb C}c^\infty({\mathbb R}^d)\, :\, {\operatorname{supp}}(\varphi) \subseteq [-1,1]^d \}$.
Then, the following statements are equivalent. \begin{itemize} \item[(i)] $\omega \in \mathcal{DTBM}({\mathbb R}^d)$. \item[(ii)] $\widehat{\omega} \in {\textsf S}_\infty'({\mathbb R}^d)$ and the operator \[ T: B \to {\mathbb C}u({\mathbb R}^d) \,, \qquad \varphi \operatorname{MAP}sto \varphi*\widehat{\omega} \,, \] is continuous, where $B$ is endowed with the supremum norm. \item[(iii)] There exists a constant $C>0$ such that, for all $t \in {\mathbb R}^d$ and $\varphi \in{\mathbb C}c^\infty({\mathbb R}^d)$ with $|\varphi| \leqslant 1_{[-1,1]^d}$, we have \[ \big|\omega(e^{2 \pi {\mathrm{i}} t \cdot} \operatorname{Re}allywidecheck{\varphi})\big| \leqslant C \,. \] \item[(iv)] There exist positive definite $\omega_1, \omega_2, \omega_3, \omega_4 \in \mathcal{DTBM}({\mathbb R}^d)$ such that \[ \omega=\omega_1-\omega_2+{\mathrm{i}}(\omega_3-\omega_4) \,. \] \operatorname{e}nd{itemize} \operatorname{e}nd{proposition} \begin{proof} (i)${\mathrm{i}}plies$(iv) This is immediate. Indeed, if $\mu = \widehat{\omega} \in {\mathcal M}^\infty({\mathbb R}^d)$, then there exist positive measures $\mu_1,\mu_2,\mu_3, \mu_4 \in {\mathcal M}^\infty({\mathbb R}^d)$ such that \[ \mu=\mu_1-\mu_2+{\mathrm{i}}(\mu_3-\mu_4) \,. \] As usual, for all $1 \leqslant j \leqslant 4$, there exists some $\omega_j \in \mathcal{DTBM}({\mathbb R}^d)$ such that $\widehat{\omega_j}= \mu_j$. The claim follows. \noindent (iv)${\mathrm{i}}plies$(i) This is also immediate, as a (finite) linear combination of translation bounded measures is translation bounded. \noindent (i)${\mathrm{i}}plies$(ii) Let $\mu =\widehat{\omega} \in {\mathcal M}^\infty({\mathbb R}^d)$. Then, $\widehat{\omega} \in {\textsf S}_\infty'({\mathbb R}^d)$ by \cite[Cor.~2.1]{ST}. Moreover, we have \[ \|T(\varphi) \|_\infty=\| \varphi*\mu \|_\infty \leqslant \| \mu \|_{ [-1,1]^d } \|\varphi \|_\infty \] for all $\varphi \in B$. \noindent (ii)${\mathrm{i}}plies$(iii) Let $\varphi\in {\mathbb C}c^\infty({\mathbb R}^d)$ with $|\varphi| \leqslant 1_{[-1,1]^d}$.
Then, $\varphi \in B$ and $\| \varphi \|_\infty \leqslant 1$. Therefore, \[ \big|\omega(e^{2\pi{\mathrm{i}} t \cdot} \operatorname{Re}allywidecheck{\varphi})\big| =|(\varphi*\widehat{\omega}) (t) | \leqslant \| \varphi* \widehat{\omega} \|_\infty \leqslant \|T \|\, \|\varphi \|_\infty = \|T \| \,. \] \noindent (iii)${\mathrm{i}}plies$(ii) Let $\varphi \in B$. Define \[ \phi= \begin{cases} \frac{1}{\|\varphi\|_\infty} \varphi & \mbox{ if } \|\varphi \|_\infty \neq 0\,, \\ 0 & \mbox{ if } \|\varphi \|_\infty = 0 \,. \operatorname{e}nd{cases} \] Then, $\phi \in B, |\phi| \leqslant 1_{[-1,1]^d}$ and $\varphi= \|\varphi \|_\infty \phi$. Therefore, we have \[ | (\varphi*\widehat{\omega})(t)| = \|\varphi\|_\infty \left| (\phi*\widehat{\omega})(t) \right| =\|\varphi\|_\infty \left|\omega(e^{2 \pi {\mathrm{i}} t \cdot}\, \operatorname{Re}allywidecheck{\phi} )\right| \leqslant C \| \varphi \|_\infty \] for all $t \in {\mathbb R}^d$. \noindent (ii)${\mathrm{i}}plies$ (i) Let $N \in {\mathbb N}$ be arbitrary and let $K_N := [-N,N]^d$. Via a standard partition of unity argument, there exist some $\phi_1,\ldots,\phi_k \in {\mathbb C}c^\infty({\mathbb R}^d)$ and $t_1,\ldots,t_k \in {\mathbb R}^d$ such that $0 \leqslant \phi_j(x) \leqslant 1$ for all $x \in {\mathbb R}^d$, ${\operatorname{supp}}(\phi_j) \subseteq [-1,1]^d$ and \[ \sum_{j=1}^k T_{t_j}\phi_j (x) = 1 \] for all $x \in K_N$. Then, for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d:K_N):=\{ \psi \in {\mathbb C}c^\infty({\mathbb R}^d)\, :\, {\operatorname{supp}}(\psi) \subseteq K_N\}$, we have \[ \varphi=\sum_{j=1}^k \varphi T_{t_j}\phi_j \,. \] Therefore, we obtain \[ | \widehat{\omega}(\varphi)| = | (\widehat{\omega}*\varphi^{\dagger})(0)| \leqslant \sum_{j=1}^k | (\widehat{\omega}*(\varphi T_{t_j}\phi_j)^{\dagger})(0)| = \sum_{j=1}^k \left|(\widehat{\omega}*(\phi_jT_{-t_j}\varphi)^{\dagger}) (t_j) \right| \,. 
\] Therefore, since ${\operatorname{supp}}((\phi_jT_{-t_j}\varphi)^{\dagger}) \subseteq [-1,1]^d$, (ii) implies \begin{align*} | \widehat{\omega}(\varphi)| &\leqslant \sum_{j=1}^k \left|(\widehat{\omega}*(\phi_jT_{-t_j} \varphi)^{\dagger}) (t_j) \right| \leqslant \sum_{j=1}^k \|T\|\, \| (\phi_jT_{-t_j}\varphi)^{\dagger} \|_\infty \\ &\leqslant \sum_{j=1}^k \|T\|\, \|T_{-t_j}\varphi \|_\infty = C_N\, \|\varphi \|_\infty \,, \operatorname{e}nd{align*} where $C_N:= k \|T \|$ depends only on $N$ and the choice of $\phi_1,\ldots,\phi_k$. Since ${\mathbb C}c^\infty({\mathbb R}^d:K_N)$ is dense in ${\mathbb C}c({\mathbb R}^d:K_N)=\{ \psi \in {\mathbb C}c({\mathbb R}^d) : {\operatorname{supp}}(\psi) \subseteq K_N \}$, it follows that for all $N$, $\widehat{\omega}$ can be uniquely extended to a continuous functional on ${\mathbb C}c({\mathbb R}^d:K_N)$. Therefore, there exists a measure $\mu_N$ supported inside $[-N,N]^d$ such that \[ \mu_N(\varphi) = \widehat{\omega}(\varphi) \] for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$ with ${\operatorname{supp}}(\varphi) \subseteq [-N,N]^d$. It is easy to see that $\mu_{N}= \mu_{N+1}|_{[-N,N]^d}$. We can then define $\mu: {\mathbb C}c({\mathbb R}^d) \to {\mathbb C}$ via \[ \mu(\psi) = \mu_N(\psi) \qquad \mbox{ with } {\operatorname{supp}}(\psi) \subseteq [-N,N]^d \,, \] and the definition does not depend on the choice of $N$. It is easy to see that $\mu$ is linear, and \[ | \mu(\psi) | \leqslant C_N \|\psi \|_\infty \] for all $\psi \in {\mathbb C}c({\mathbb R}^d)$ with ${\operatorname{supp}}(\psi) \subseteq [-N,N]^d$. This shows that $\mu$ is a measure and \[ \mu(\phi)= \widehat{\omega}(\phi) \qquad \text{ for all } \phi \in {\mathbb C}c^\infty({\mathbb R}^d) \,. 
\] Finally, setting ${\mathcal F}:= \{ \psi \in {\mathbb C}c^\infty({\mathbb R}^d)\, :\, |\psi| \leqslant 1_{[-1,1]^d} \}$, \cite[Cor.~3.4]{SS2} gives \[ \| \mu \|_{[-1,1]^d} = \sup_{\psi \in {\mathcal F}} \| \psi*\mu \|_\infty =\sup_{\psi \in {\mathcal F}} \| \psi*\widehat{\omega} \|_\infty = \sup_{\psi \in {\mathcal F}} \| T(\psi) \|_\infty \leqslant \|T \| \,. \] This shows that $\mu =\widehat{\omega}$ is a translation bounded measure. \operatorname{e}nd{proof} Next, we give a more explicit description of $\mathcal{DTBM}({\mathbb R}^d)$ in terms of derivatives of functions in the Fourier--Stieltjes algebra \[ B({\mathbb R}^d):= \{ \widehat{\rho}\, :\, \rho \mbox{ is a finite measure on } {\mathbb R}^d \} \,. \] In particular, we will show that each tempered distribution $\omega \in \mathcal{DTBM}({\mathbb R}^d)$ is a distribution of order (at most) $2d$. \begin{theorem}\label{Prop 2} Let $\omega \in \mathcal{DTBM}({\mathbb R}^d)$, and let $\nu=\widehat{\omega}$. Then, there exists some function $h \in B({\mathbb R}^d)\subset {\mathbb C}u({\mathbb R}^d)$ such that \begin{itemize} \item[(a)] $\mu:= h \ensuremath{\lambda\!\!\!\lambda} $ is a translation bounded measure and hence a tempered distribution. \item[(b)] For every $\varphi \in {\mathbb C}c({\mathbb R}^d)$, one has \[ \mu(\varphi) = \int_{{\mathbb R}^d} \frac{\operatorname{Re}allywidecheck{\varphi}(x)}{(2\pi{\mathrm{i}} x_1)^{2d}+\ldots+(2\pi{\mathrm{i}} x_d)^{2d}+(-1)^d}\ \mbox{d} \nu(x) \,. \] \item[(c)] Let $\Psi_{d}:=(\partial_{x_1})^{2d}+\ldots+ (\partial_{x_d})^{2d}+(-1)^d$. Then, $\Psi_{d}\mu$ is a tempered distribution and \[ \Psi_{d}\mu = \omega \,. \] \item[(d)] For all $f \in {\textsf S}({\mathbb R}^d)$, we have \[ \left| \omega (f) \right| \leqslant \|h \|_\infty \Big( \|f \|_1 + \sum_{j=1}^d \| (\partial_{x_j})^{2d} f \|_1 \Big) \,. \] \item[(e)] $\omega$ is a distribution of order $2d$.
\operatorname{e}nd{itemize} \operatorname{e}nd{theorem} \begin{proof} Let $\rho:=\frac{1}{(2\pi{\mathrm{i}}(\cdot)_1)^{2d}+\ldots+(2\pi{\mathrm{i}}(\cdot)_d)^{2d}+(-1)^d} \nu$. By Lemma~\operatorname{Re}f{lem:distr_2}, $\rho$ is a finite measure, and hence $h:= \operatorname{Re}allywidecheck{\rho} \in B({\mathbb R}^d) \subset {\mathbb C}u({\mathbb R}^d)$. \noindent (a) Since $h \in {\mathbb C}u({\mathbb R}^d)$, we have $\mu \in {\mathcal M}^\infty({\mathbb R}^d)$. In particular, $\mu$ is a tempered distribution. \noindent (b) This follows from Proposition~\operatorname{Re}f{P1} with $\rho$ defined as above. \noindent (c) Let $f \in {\textsf S}({\mathbb R}^d)$. Then, \begin{align*} \omega(f) &= \widehat{\omega}(\operatorname{Re}allywidecheck{f})=\nu(\operatorname{Re}allywidecheck{f}) = \int_{{\mathbb R}^d} \frac{((2\pi{\mathrm{i}} x_1)^{2d}+\ldots+(2\pi{\mathrm{i}} x_d)^{2d}+(-1)^d) \,\operatorname{Re}allywidecheck{f} (x)}{(2\pi{\mathrm{i}} x_1)^{2d}+\ldots+(2\pi{\mathrm{i}} x_d)^{2d}+(-1)^d}\, \mbox{d}\nu(x) \\ &= \int_{{\mathbb R}^d} \frac{\operatorname{Re}allywidecheck{\Psi_{d}f}\,(x)}{(2\pi{\mathrm{i}} x_1)^{2d} +\ldots+(2\pi{\mathrm{i}} x_d)^{2d}+(-1)^d} \, \mbox{d}\nu(x) = \mu(\Psi_{d}f) = (\Psi_{d}\mu)(f) \,. \operatorname{e}nd{align*} Thus, $\omega=\Psi_{d}\mu$ as tempered distributions. \noindent (d) Let $f \in {\textsf S}({\mathbb R}^d)$. Then, \begin{align*} \omega(f) &= (\Psi_{d}\mu )(f)= \mu\left(\big((\partial_{x_1})^{2d}+\ldots + (\partial_{x_d})^{2d}+(-1)^d\big)f \right) \\ &=\int_{{\mathbb R}^d} h(x) \big(\big((\partial_{x_1})^{2d} + \ldots + (\partial_{x_d})^{2d} +(-1)^d\big)f \big)(x)\ \mbox{d} \ensuremath{\lambda\!\!\!\lambda} (x) \,.
\operatorname{e}nd{align*} Therefore, \begin{align*} | \omega (f) | &\leqslant \int_{{\mathbb R}^d} |h(x)| \big( |(\partial_{x_1})^{2d}f(x) |+\ldots+ |(\partial_{x_d})^{2d}f(x)|+|f(x)| \big)\ \mbox{d} \ensuremath{\lambda\!\!\!\lambda} (x) \\ &\leqslant \|h\|_\infty \Big( \|f \|_1 + \sum_{j=1}^d \|(\partial_{x_j} )^{2d} f \|_1 \Big) \,. \operatorname{e}nd{align*} \noindent (e) Let $K$ be a compact subset of ${\mathbb R}^d$, and let $\phi\in{\mathbb C}c^{\infty}({\mathbb R}^d)$ with $\text{supp}(\phi)\subseteq K$. Now, (d) and Lemma~\operatorname{Re}f{lem:distr_2} imply \[ \left|\omega(\phi)\right| \leqslant \|h\|_\infty \Big( \|\phi \|_1 + \sum_{j=1}^d \| (\partial_{x_j})^{2d} \phi \|_1 \Big) \leqslant \|h\|_\infty \,\ensuremath{\lambda\!\!\!\lambda}(K) \,\Big(\sum_{j=1}^d\|(\partial_{x_j})^{2d} \phi\|_{\infty} + \|\phi\|_{\infty}\Big) \,. \] Thus, $\omega$ is a distribution of order $2d$. \operatorname{e}nd{proof} Let us now look at a very simple example. \begin{example} Let $\omega=\delta_{{\mathbb Z}}$. Then, by Poisson's summation formula, $\nu:=\widehat{\omega}=\delta_{{\mathbb Z}}$, and hence $\omega \in \mathcal{DTBM}({\mathbb R})$. In this case, $h(x)=-\frac{1}{2}\sum_{k\in{\mathbb Z}}\operatorname{e}^{-|k-x|}$. To see this, we should first remind the reader that \[ \frac{1}{4\pi^2x^2+1} = \widehat{\phi}(x) \qquad \text{ with }\ \phi(x) = \frac{1}{2} \operatorname{e}^{-|x|} \,. \] Now, an application of Poisson's summation formula gives \begin{align*} h(x) &= -\int_{{\mathbb R}} \operatorname{e}^{-2\pi{\mathrm{i}} xy} \frac{1}{4\pi^2y^2+1}\, \mbox{d}\delta_{{\mathbb Z}}(y) = -\sum_{k\in{\mathbb Z}} \frac{1}{4\pi^2k^2+1} \operatorname{e}^{-2\pi{\mathrm{i}} kx} \\ &= -\sum_{k\in{\mathbb Z}} \widehat{\phi}(k) \operatorname{e}^{-2\pi{\mathrm{i}} kx} = -\sum_{k\in{\mathbb Z}} \phi(k-x) = -\frac{1}{2}\sum_{k\in{\mathbb Z}}\operatorname{e}^{-|k-x|} \,. \operatorname{e}nd{align*} \operatorname{e}nd{example} Let us note in passing that the proofs in this section can be used to prove the following results about tempered distributions whose Fourier transform is a (not necessarily translation bounded) measure. \begin{lemma} \begin{itemize} \item[(a)] Let $h\in B({\mathbb R}^d)$ and let \[ \omega=\Psi_{d}(h \ensuremath{\lambda\!\!\!\lambda}) \,. \] Then, $\omega$ is a tempered distribution and $\nu=\widehat{\omega}$ is a measure satisfying \[ \int_{{\mathbb R}^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1} \, \mbox{d}|\nu|(x) < \infty \,. \] \item[(b)] Let $\nu$ be a measure satisfying \[ \int_{{\mathbb R}^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1} \, \mbox{d}|\nu|(x) < \infty \,. \] Then, there exists some $h\in B({\mathbb R}^d)$ such that $\omega=\Psi_{d}(h \ensuremath{\lambda\!\!\!\lambda}) $ is a tempered distribution and $\nu=\widehat{\omega}$. \operatorname{e}nd{itemize} \operatorname{e}nd{lemma} \begin{proof} (a) Since $h \in B({\mathbb R}^d)$, there exists a finite measure $\mu$ on ${\mathbb R}^d$ such that $h=\operatorname{Re}allywidecheck{\mu}$. Let \[ \nu=\left((2\pi{\mathrm{i}}(\cdot)_1)^{2d}+\ldots+(2\pi{\mathrm{i}}(\cdot)_d)^{2d} +(-1)^d \right)\mu \,. \] Then, $\nu$ is a measure. Now, since $h=\operatorname{Re}allywidecheck{\mu}$ and $\mu$ is a finite measure, we have \begin{equation}\label{eq1121} \widehat{h \ensuremath{\lambda\!\!\!\lambda}} = \mu \operatorname{e}nd{equation} as measures \cite[Lemma~4.9.15]{MoSt}. In particular, \operatorname{e}qref{eq1121} holds as tempered distributions by Thm.~\operatorname{Re}f{FT measure dist}.
Finally, we obtain \begin{align*} \widehat{\omega} &=\operatorname{Re}allywidehat{\Psi_{d}(h \ensuremath{\lambda\!\!\!\lambda})} =\left( (2\pi{\mathrm{i}}(\cdot)_1)^{2d}+\ldots+(2\pi{\mathrm{i}}(\cdot)_d)^{2d} +(-1)^d \right) \widehat{h \ensuremath{\lambda\!\!\!\lambda}} \\ &= \left((2\pi{\mathrm{i}}(\cdot)_1)^{2d}+\ldots+(2\pi{\mathrm{i}}(\cdot)_d)^{2d} +(-1)^d \right) \mu = \nu \,. \operatorname{e}nd{align*} Moreover, \[ \int_{{\mathbb R}^d} \frac{1}{(2\pi x_1)^{2d}+\ldots+(2\pi x_d)^{2d}+1} \, \mbox{d}|\nu|(x)= \left| \mu \right|({\mathbb R}^d) < \infty \,, \] since $\mu$ is a finite measure. \noindent (b) This is similar to the proof of Theorem~\operatorname{Re}f{Prop 2}. \operatorname{e}nd{proof} In exactly the same way as in Theorem~\operatorname{Re}f{Prop 2}, we can also prove the following result; since the proof is identical, we skip it. \begin{proposition} Let $\omega \in \mathcal{DTBM}({\mathbb R}^d)$, and let $\nu=\widehat{\omega}$. Then, there exists some function $h\in B({\mathbb R}^d)\subset {\mathbb C}u({\mathbb R}^d)$ such that \begin{itemize} \item[(a)] $\mu:= h \ensuremath{\lambda\!\!\!\lambda} $ is a translation bounded measure and hence a tempered distribution. \item[(b)] For every $\varphi\in {\mathbb C}c({\mathbb R}^d)$, one has \[ \mu(\varphi) = \int_{{\mathbb R}^d} \frac{\operatorname{Re}allywidecheck{\varphi}(x)}{((2\pi{\mathrm{i}} x_1)^{2}+\ldots+(2\pi{\mathrm{i}} x_d)^{2})^d+(-1)^d} \, \mbox{d}\nu(x) \,. \] \item[(c)]Let $\Phi_{d}:=\left((\partial_{x_1})^{2}+\ldots+ (\partial_{x_d})^{2}\right)^{d}+(-1)^d$. Then, $\Phi_{d}\mu$ is a tempered distribution and \[ \Phi_{d}\mu = \omega \,. \] \item[(d)] For all $f \in {\textsf S}({\mathbb R}^d)$ we have \[ \left| \omega (f) \right| \leqslant \|h \|_\infty \Big( \|f \|_1 + \Big\| \Big( \sum_{j=1}^d (\partial_{x_j})^{2} \Big)^d f \Big\|_1 \Big) \,. \] \item[(e)] $\omega$ is a distribution of order $2d$.
\qed \operatorname{e}nd{itemize} \operatorname{e}nd{proposition} \section{The existence of a generalized Eberlein decomposition} We can now prove the existence of the generalized Eberlein decomposition at the level of distributions of order $2d$. We start with the following simple result. \begin{proposition}\label{gen ebe dist} Let $\omega \in \mathcal{DTBM}({\mathbb R}^d)$. Then, there exist unique distributions $\omega_{\operatorname{s}}, \omega_{\operatorname{0a}}$ and $\omega_{\operatorname{0s}} \in \mathcal{DTBM}({\mathbb R}^d)$ such that $\omega =\omega_{\operatorname{s}}+\omega_{\operatorname{0a}}+\omega_{\operatorname{0s}}$ as well as \[ \widehat{\,\omega_{\operatorname{s}}\,} = (\widehat{\omega})_{\operatorname{pp}} \,, \qquad \widehat{\omega_{\operatorname{0a}}} = (\widehat{\omega})_{\operatorname{ac}} \qquad \text{ and } \qquad \widehat{\omega_{\operatorname{0s}}} = (\widehat{\omega})_{\operatorname{sc}} \,. \] Moreover, there exist functions $h_{\operatorname{s}}$, $h_{\operatorname{0a}}$ and $h_{\operatorname{0s}} \in B({\mathbb R}^d)$ such that \[ \omega_{\operatorname{s}}=\Psi_{d}\left(h_{\operatorname{s}} \ensuremath{\lambda\!\!\!\lambda}\right) \,,\qquad \omega_{\operatorname{0a}}=\Psi_{d}\left(h_{\operatorname{0a}} \ensuremath{\lambda\!\!\!\lambda}\right) \qquad \text{ and } \qquad \omega_{\operatorname{0s}}=\Psi_{d}\left(h_{\operatorname{0s}} \ensuremath{\lambda\!\!\!\lambda}\right) \,. \] Finally, with $\omega_{\operatorname{0}}=\omega_{\operatorname{0a}}+\omega_{\operatorname{0s}}$, the decomposition \[ \omega=\omega_{\operatorname{s}}+\omega_{0} \] is the decomposition of \operatorname{e}qref{EQ2}. \operatorname{e}nd{proposition} \begin{proof} Since $\omega \in \mathcal{DTBM}({\mathbb R}^d)$, we have $\nu:= \widehat{\omega} \in {\mathcal M}^\infty({\mathbb R}^d)$, and hence $\nu_{\operatorname{pp}}, \nu_{\operatorname{ac}}, \nu_{\operatorname{sc}} \in {\mathcal M}^\infty({\mathbb R}^d)$ by \cite[Lem.~3.12]{SS2}.
The existence of $\omega_{\operatorname{s}}, \omega_{\operatorname{0a}}$ and $\omega_{\operatorname{0s}} \in \mathcal{DTBM}({\mathbb R}^d)$ follows now from Fact~\operatorname{Re}f{Fact 1}. The existence of $h_{\operatorname{s}}$, $h_{\operatorname{0a}}$ and $h_{\operatorname{0s}}$ follows from Theorem~\operatorname{Re}f{Prop 2}. The last claim follows from Theorem~\operatorname{Re}f{t2}. \operatorname{e}nd{proof} \begin{remark} Let $\omega \in \mathcal{DTBM}({\mathbb R}^d)$ be positive definite and let $\nu =\widehat{\omega}$. We can ask whether or not the distributions $\omega_{\operatorname{s}}$, $\omega_{\operatorname{0a}}$ and $\omega_{\operatorname{0s}}$ can be chosen as measures. However, for the distributions to be measures, it is necessary that $\nu_{\operatorname{pp}}$, $\nu_{\operatorname{ac}}$ and $\nu_{\operatorname{sc}}$ are weakly almost periodic \cite[Thm.~4.11.12]{MoSt}. So, in general the answer is `no'. For example, if $\nu = \ensuremath{\lambda\!\!\!\lambda}|_{[0, \infty)}$ and $\omega =\operatorname{Re}allywidecheck{\nu}$, it is clear that $\omega=\omega_{\operatorname{0a}}$ is not a measure. Similarly, with $\omega= \operatorname{Re}allywidecheck{\delta_{{\mathbb N}}}$, the distribution $\omega=\omega_{\operatorname{s}}$ is not a measure. \operatorname{e}nd{remark} In the case of Fourier transformable measures, Proposition~\operatorname{Re}f{gen ebe dist} yields the following consequence. \begin{theorem} \label{coro:mainrd} Let $\gamma$ be a measure on ${\mathbb R}^d$ which is Fourier transformable as a measure.
Then, there exist a unique Fourier transformable measure $\gamma_{\operatorname{s}}$, unique tempered distributions $\omega_{\operatorname{0a}}, \omega_{\operatorname{0s}} \in \mathcal{DTBM}({\mathbb R}^d)$ of order $2d$ and functions $h_{\operatorname{s}}$, $h_{\operatorname{0a}}$ and $h_{\operatorname{0s}} \in B({\mathbb R}^d)$ such that \begin{itemize} \item[(a)]$\gamma_0:=\omega_{\operatorname{0a}}+\omega_{\operatorname{0s}}$ is a Fourier transformable measure. \item[(b)] $\gamma= \gamma_{\operatorname{s}}+\omega_{\operatorname{0a}}+ \omega_{\operatorname{0s}}$. \item[(c)] $\gamma=\gamma_{\operatorname{s}}+\gamma_0$ is the Eberlein decomposition of Proposition~\operatorname{Re}f{prop ebe}. \item[(d)] $\widehat{\,\gamma_{\operatorname{s}}\,}= (\widehat{\gamma})_{\operatorname{pp}}$, $\widehat{\omega_{\operatorname{0a}}}= (\widehat{\gamma})_{\operatorname{ac}}$ and $\widehat{\omega_{\operatorname{0s}}}= (\widehat{\gamma})_{\operatorname{sc}}$. \item[(e)] $\gamma_{\operatorname{s}}=\Psi_{d}\left(h_{\operatorname{s}} \ensuremath{\lambda\!\!\!\lambda}\right)$, $\omega_{\operatorname{0a}}=\Psi_{d}\left(h_{\operatorname{0a}} \ensuremath{\lambda\!\!\!\lambda}\right)$ and $\omega_{\operatorname{0s}}=\Psi_{d}\left(h_{\operatorname{0s}} \ensuremath{\lambda\!\!\!\lambda}\right)$. \item[(f)] If $\gamma$ is positive definite, then $\gamma_{\operatorname{s}},\gamma_0$ are positive definite measures and $\omega_{\operatorname{0a}}, \omega_{\operatorname{0s}}$ are positive definite tempered distributions. \operatorname{e}nd{itemize} \operatorname{e}nd{theorem} \begin{proof} Since $\gamma$ is Fourier transformable as a measure, we have $\gamma \in \mathcal{DTBM}({\mathbb R}^d)$ by Theorem~\operatorname{Re}f{FT measure dist}.
Now, let \[ \gamma=\gamma_{\operatorname{s}}+ \gamma_0 \] be the usual Eberlein decomposition of $\gamma$ from Theorem~\operatorname{Re}f{t1}, and let \[ \gamma=\omega_{\operatorname{s}}+\omega_{\operatorname{0a}}+\omega_{\operatorname{0s}} \] be the decomposition of Proposition~\operatorname{Re}f{gen ebe dist}. Then, we have \[ \widehat{\,\gamma_{\operatorname{s}}\,} = \left( \widehat{\gamma} \right)_{\operatorname{pp}} = \widehat{\,\omega_{\operatorname{s}}\,} \qquad \text{ and } \qquad \widehat{\,\gamma_{0}\,} = \left( \widehat{\gamma} \right)_{\operatorname{c}} =\left( \widehat{\gamma} \right)_{\operatorname{ac}} +\left( \widehat{\gamma} \right)_{\operatorname{sc}} = \widehat{\omega_{\operatorname{0s}}}+\widehat{\omega_{\operatorname{0a}}} \] in the sense of tempered distributions. The injectivity of the Fourier transform then gives \[ \gamma_{\operatorname{s}} =\omega_{\operatorname{s}} \qquad \text{ and } \qquad \gamma_{0} = \omega_{\operatorname{0s}}+\omega_{\operatorname{0a}} \,. \] Note here that $\omega_{\operatorname{0a}}, \omega_{\operatorname{0s}} \in \mathcal{DTBM}({\mathbb R}^d)$ by Proposition~\operatorname{Re}f{gen ebe dist}. Also, by Theorem~\operatorname{Re}f{FT measure dist}, we have $\gamma_{\operatorname{s}} \in \mathcal{DTBM}({\mathbb R}^d)$. In particular, Theorem~\operatorname{Re}f{Prop 2} gives the existence of $h_{\operatorname{s}}$, $h_{\operatorname{0a}}$ and $h_{\operatorname{0s}} \in B({\mathbb R}^d)$ such that (e) holds. The injectivity of the Fourier transform for tempered distributions and the uniqueness of the Lebesgue decomposition for $\widehat{\gamma}$ give the uniqueness of $\gamma_{\operatorname{s}}, \gamma_0, \omega_{\operatorname{0a}}, \omega_{\operatorname{0s}}$.
Finally, if $\gamma$ is a positive definite measure, then $\widehat{\gamma}$ is a positive measure and hence so are $(\widehat{\gamma})_{\operatorname{pp}}, (\widehat{\gamma})_{\operatorname{c}}, (\widehat{\gamma})_{\operatorname{ac}},(\widehat{\gamma})_{\operatorname{sc}}$. This gives (f) and completes the proof. \operatorname{e}nd{proof} Now, Proposition~\operatorname{Re}f{gen ebe dist} makes the following definitions natural. \begin{definition} \begin{align*} \mathcal{DTBM}_{\operatorname{s}}({\mathbb R}^d) &:=\{ \omega \in \mathcal{DTBM}({\mathbb R}^d)\, :\, \widehat{\omega} \mbox{ is pure point} \} \\ \mathcal{DTBM}_{\operatorname{0s}}({\mathbb R}^d) &:=\{ \omega \in \mathcal{DTBM}({\mathbb R}^d)\, : \, \widehat{\omega} \mbox{ is singular continuous } \} \\ \mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d) &:=\{ \omega \in \mathcal{DTBM}({\mathbb R}^d)\, : \, \widehat{\omega} \mbox{ is absolutely continuous} \} \\ \mathcal{DTBM}_{0}({\mathbb R}^d) &:=\{ \omega \in \mathcal{DTBM}({\mathbb R}^d)\, :\, \widehat{\omega} \mbox{ is continuous} \} \operatorname{e}nd{align*} \operatorname{e}nd{definition} As usual, we denote by ${\mathcal M}^\infty_{\operatorname{pp}}({\mathbb R}^d)$, ${\mathcal M}^\infty_{\operatorname{c}}({\mathbb R}^d)$, ${\mathcal M}^\infty_{\operatorname{ac}}({\mathbb R}^d)$, ${\mathcal M}^\infty_{\operatorname{sc}}({\mathbb R}^d)$ the spaces of translation bounded pure point measures, translation bounded continuous measures, translation bounded absolutely continuous measures and translation bounded singular continuous measures, respectively. \begin{proposition}\label{prop: GED} These spaces just defined have the following properties. \begin{itemize} \item[(a)] $\mathcal{DTBM}({\mathbb R}^d) \subseteq \mathsf{WAP}({\mathbb R}^d)$, $\mathcal{DTBM}_{\operatorname{s}}({\mathbb R}^d) \subseteq \mathsf{SAP}({\mathbb R}^d)$, $\mathcal{DTBM}_{0}({\mathbb R}^d) \subseteq \mathsf{WAP}_0({\mathbb R}^d)$. 
\item[(b)] $\mathcal{DTBM}({\mathbb R}^d) =\mathcal{DTBM}_{\operatorname{s}}({\mathbb R}^d) \oplus \mathcal{DTBM}_{0}({\mathbb R}^d)$. \item[(c)] $\mathcal{DTBM}_{0}({\mathbb R}^d) =\mathcal{DTBM}_{\operatorname{0s}}({\mathbb R}^d) \oplus \mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d)$. \item[(d)] $\mathcal{DTBM}_{\operatorname{s}}({\mathbb R}^d)= \mathcal{DTBM}({\mathbb R}^d) \cap \mathsf{SAP}({\mathbb R}^d)$. \item[(e)] $\mathcal{DTBM}_{0}({\mathbb R}^d)= \mathcal{DTBM}({\mathbb R}^d) \cap \mathsf{WAP}_0({\mathbb R}^d)$. \item[(f)] The Fourier transform induces the following bijections: \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=.4em,minimum width=2em] { \mathcal{DTBM}({\mathbb R}^d) &= \mathcal{DTBM}_{\operatorname{s}}({\mathbb R}^d) &\oplus \mathcal{DTBM}_{\operatorname{0s}}({\mathbb R}^d) & \oplus \mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d) \\ {\mathcal M}^\infty({\mathbb R}^d)&={\mathcal M}^\infty_{\operatorname{pp}}({\mathbb R}^d)&\oplus {\mathcal M}^\infty_{\operatorname{sc}}({\mathbb R}^d)&\oplus {\mathcal M}^\infty_{\operatorname{ac}}({\mathbb R}^d) \\}; \path[<->] (m-1-1) edge node [left] {$\mathcal{F}$} (m-2-1) (m-1-2) edge node [left] {$\mathcal{F}$} (m-2-2) (m-1-3) edge node [left] {$\mathcal{F}$} (m-2-3) (m-1-4) edge node [left] {$\mathcal{F}$} (m-2-4); \operatorname{e}nd{tikzpicture} \operatorname{e}nd{itemize} \operatorname{e}nd{proposition} \begin{proof} (a) is a consequence of Theorem~\operatorname{Re}f{t2}, \cite[Prop.~6.1]{ST} and \cite[Prop.~6.2]{ST}. (b) and (c) are an immediate consequence of Proposition~\operatorname{Re}f{gen ebe dist}, Fact~\operatorname{Re}f{Fact 1} and \cite[Lem.~3.12]{SS2}. (d) and (e) follow from \cite[Prop.~6.1]{ST} and \cite[Prop.~6.2]{ST}, respectively. (f) is a consequence of Fact~\operatorname{Re}f{Fact 1}. 
\operatorname{e}nd{proof} \section{On the components of the generalized Eberlein decomposition}\label{sect last} Let $\gamma$ be the autocorrelation of $\omega\in{\mathcal M}^{\infty}({\mathbb R}^d)$. Then, $\gamma \in {\mathcal M}^\infty({\mathbb R}^d) \cap \mathcal{DTBM}({\mathbb R}^d)$. In order to better understand the pure point, continuous, absolutely continuous and singular continuous spectrum, we need a better understanding of the components of the generalized Eberlein decomposition of $\gamma$. In particular, we need to understand the spaces $\mathcal{DTBM}_{\operatorname{s}}({\mathbb R}^d)$, $\mathcal{DTBM}_{0}({\mathbb R}^d)$, $\mathcal{DTBM}_{\operatorname{0s}}({\mathbb R}^d)$ and $\mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d)$. The subspaces $\mathcal{DTBM}_{\operatorname{s}}({\mathbb R}^d)$, $\mathcal{DTBM}_{0}({\mathbb R}^d)$ are characterized by Proposition~\operatorname{Re}f{prop: GED}. Indeed, for a distribution $\omega \in \mathcal{DTBM}({\mathbb R}^d)$, one has \begin{itemize} \item {} $\omega \in \mathcal{DTBM}_{\operatorname{s}}({\mathbb R}^d)$ if and only if $f*\omega \in SAP({\mathbb R}^d)$ for all $f \in {\textsf S}({\mathbb R}^d)$. \item {} $\omega \in \mathcal{DTBM}_{0}({\mathbb R}^d)$ if and only if $f*\omega \in WAP_0({\mathbb R}^d)$ for all $f \in {\textsf S}({\mathbb R}^d)$. \operatorname{e}nd{itemize} While we cannot constructively obtain the (restricted) Eberlein decomposition this way, this characterization can be used to check whether a given decomposition is the Eberlein decomposition. Indeed, we have the following result, compare \cite[Prop.~5.8.7]{NS11}. \begin{lemma} Let $\omega, \omega_1, \omega_2 \in \mathcal{DTBM}({\mathbb R}^d)$ be such that $\omega=\omega_1+\omega_2$. Then, $\omega_1=\omega_{\operatorname{s}}$ and $\omega_2=\omega_{0}$ if and only if, for all $f \in {\textsf S}({\mathbb R}^d)$, the following two conditions hold: \begin{itemize} \item{} $f*\omega_1 \in SAP({\mathbb R}^d)$.
\item{} $\displaystyle \lim_{n\to\infty} \frac{1}{(2n)^d} \int_{[-n,n]^d} | (f*\omega_2)(x)|\ \mbox{d} x =0$. \operatorname{e}nd{itemize} \operatorname{e}nd{lemma} \begin{proof} $\Longrightarrow$ For all $f \in {\textsf S}({\mathbb R}^d)$, we have $f*\omega_1=f*\omega_{\operatorname{s}} \in SAP({\mathbb R}^d)$ and $f*\omega_2=f*\omega_{0} \in WAP_0({\mathbb R}^d)$. Therefore, we have \[ \lim_{n\to\infty} \frac{1}{(2n)^d} \int_{[-n,n]^d} |( f*\omega_2)(x) |\ \mbox{d} x =0 \,. \] \noindent $\Longleftarrow$ Let $f \in {\textsf S}({\mathbb R}^d)$. Then $f*\omega_1 \in SAP({\mathbb R}^d)$. Moreover, by Proposition~\operatorname{Re}f{gen ebe dist}(a), we have $f*\omega_2 \in WAP({\mathbb R}^d)$ and hence $f*\omega_2 \in WAP_0({\mathbb R}^d)$ by assumption. Thus, $f*\omega=f*\omega_1+f*\omega_2$ is the Eberlein decomposition of $f*\omega \in WAP({\mathbb R}^d)$. Therefore, by \cite[Thm.~5.2]{ST} we have \[ f*\omega_1 =(f*\omega)_{\operatorname{s}}=f*(\omega_{\operatorname{s}}) \qquad \text{ and } \qquad f*\omega_2 =(f*\omega)_0=f*(\omega_{0})\,. \] As this holds for all $f \in {\textsf S}({\mathbb R}^d)$, we get $\omega_1=\omega_{\operatorname{s}}$ and $\omega_2=\omega_{0}$. \operatorname{e}nd{proof} Consequently, we obtain the following simpler characterisation of $\omega \in \mathcal{DTBM}_{0}({\mathbb R}^d)$. \begin{corollary} Let $\omega \in \mathcal{DTBM}({\mathbb R}^d)$. Then, $\omega \in \mathcal{DTBM}_0({\mathbb R}^d)$ if and only if \[ \lim_{n\to\infty} \frac{1}{(2n)^d} \int_{[-n,n]^d} | (f*\omega)(x)|\ \mbox{d} x =0 \] for all $f \in {\textsf S}({\mathbb R}^d)$. \qed \operatorname{e}nd{corollary} Going further into the generalized Eberlein decomposition, the problem becomes much more complicated. The main issue is that the space $\mathcal{DTBM}_{\operatorname{0s}}({\mathbb R}^d)$ is still rather mysterious, and there is not much we can say about it at the moment. Instead, we will take a closer look at $\mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d)$.
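Before deriving such conditions, let us record the simplest example of an element of $\mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d)$; this elementary observation is included here only for illustration. \begin{example} Let $\omega=\delta_0$. Then, $\widehat{\omega}=\ensuremath{\lambda\!\!\!\lambda} \in {\mathcal M}^\infty_{\operatorname{ac}}({\mathbb R}^d)$, and hence $\delta_0 \in \mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d)$. Note that, for all $f \in {\textsf S}({\mathbb R}^d)$ and all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$, we have \[ f*\delta_0=f \in {\mathbb C}z({\mathbb R}^d) \qquad \text{ and } \qquad \operatorname{Re}allywidecheck{\varphi}*\delta_0=\operatorname{Re}allywidecheck{\varphi} \in L^2({\mathbb R}^d) \,, \] in agreement with the two results below. \operatorname{e}nd{example}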
Via an application of the Riemann--Lebesgue lemma and Plancherel's theorem, we find necessary conditions and sufficient conditions for a tempered distribution $\omega\in \mathcal{DTBM}({\mathbb R}^d)$ to belong to this space. Finding a necessary and sufficient condition seems to be a difficult problem, which is likely related to finding an intrinsic characterisation of the \operatorname{e}mph{Fourier algebra} \[ A(\widehat{{\mathbb R}^d})=\{ \widehat{f}\, :\, f \in L^1({\mathbb R}^d) \} \,. \] Let us start with necessary conditions. Here, we follow the approach of \cite[Thm.~27]{SS3}. \begin{proposition}\label{RL} Let $\omega \in \mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d)$. Then, for all $f \in {\textsf S}({\mathbb R}^d)$, we have $f * \omega \in {\mathbb C}z({\mathbb R}^d)$. \operatorname{e}nd{proposition} \begin{proof} Let $g \in L^1_{\operatorname{loc}}({\mathbb R}^d)$ be such that $ \widehat{\omega}= g \ensuremath{\lambda\!\!\!\lambda}$, and let $f \in {\textsf S}({\mathbb R}^d)$. Since $\nu:= g \ensuremath{\lambda\!\!\!\lambda} \in {\mathcal M}^\infty({\mathbb R}^d)$, we have \begin{align*} \int_{{\mathbb R}^d} \Big| \widehat{f}(x)\, g(x) \Big|\ \mbox{d} x &=\int_{{\mathbb R}^d} \Big| \widehat{f}(x) \Big|\ \mbox{d}|\nu|(x) \\ &\leqslant \Big\| \Big(\sum_{j=1}^d(2\pi (\cdot)_j)^{2d}+1 \Big) \widehat{f} \Big\|_\infty \int_{{\mathbb R}^d} \frac{1}{\sum_{j=1}^d(2\pi (x)_j)^{2d}+1}\ \mbox{d}|\nu|(x) \\ &< \infty \operatorname{e}nd{align*} by Lemma~\operatorname{Re}f{lem:distr_2}. Therefore, $\widehat{f}\, g \in L^1({\mathbb R}^d)$. The Riemann--Lebesgue lemma now gives \[ f * \omega = \operatorname{Re}allywidecheck{ (\widehat{f} g)} \in {\mathbb C}z({\mathbb R}^d) \,. \] \operatorname{e}nd{proof} \begin{remark} \begin{itemize} \item[(a)] The converse of Proposition~\operatorname{Re}f{RL} is not true \cite[Rem.~28]{SS3}.
\item[(b)] Let $\omega \in {\textsf S}'({\mathbb R}^d)$ be such that $\widehat{\omega}$ is absolutely continuous (but not necessarily translation bounded). Then, exactly as in Proposition~\operatorname{Re}f{RL}, one can show that $\operatorname{Re}allywidecheck{\varphi} * \omega \in {\mathbb C}z({\mathbb R}^d)$ for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$. \operatorname{e}nd{itemize} \operatorname{e}nd{remark} Next, we give a sufficient condition for the Fourier transform of a tempered distribution to be an absolutely continuous measure with $L_{\text{loc}}^2({\mathbb R}^d)$-density function. \begin{proposition}\label{propL2} Let $\omega \in {\textsf S}'({\mathbb R}^d)$. Then, there exists some $f \in L^2_{\operatorname{loc}}({\mathbb R}^d)$ such that $\widehat{\omega} =f \ensuremath{\lambda\!\!\!\lambda}$ if and only if $\operatorname{Re}allywidecheck{\varphi}*\omega \in L^2({\mathbb R}^d)$ for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$. \operatorname{e}nd{proposition} \begin{proof} $\Longrightarrow$ Let $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$, and let $K \subset {\mathbb R}^d$ be the support of $\varphi$. Let $g:=f \varphi$. Then, \[ \int_{{\mathbb R}^d} |g(x)|^2\ \mbox{d} x = \int_{K} |f(x)|^2\,|\varphi(x)|^2\ \mbox{d} x \leqslant \| \varphi \|_\infty^2 \int_{K} |f(x)|^2\ \mbox{d} x < \infty \,. \] Therefore, $g \in L^2({\mathbb R}^d)$, and hence, by Plancherel's theorem, there exists some $h \in L^2({\mathbb R}^d)$ such that $\widehat{h}=g$. Now, by \cite[Thm.~2.2]{ARMA1}, $h \ensuremath{\lambda\!\!\!\lambda}$ is Fourier transformable as a measure and $\widehat{h \ensuremath{\lambda\!\!\!\lambda}}= g \ensuremath{\lambda\!\!\!\lambda}$.
In particular, $h \ensuremath{\lambda\!\!\!\lambda} $ is a tempered distribution by Theorem~\operatorname{Re}f{FT measure dist}, and as tempered distributions we have \[ \widehat{h \ensuremath{\lambda\!\!\!\lambda}}= g\ensuremath{\lambda\!\!\!\lambda}= f \varphi \ensuremath{\lambda\!\!\!\lambda} =\operatorname{Re}allywidehat{ (\operatorname{Re}allywidecheck{\varphi} * \omega)\ensuremath{\lambda\!\!\!\lambda}} \,. \] Therefore, the functions $h$ and $\omega * \operatorname{Re}allywidecheck{\varphi}$ agree as tempered distributions, and hence agree almost everywhere. Finally, $h \in L^2({\mathbb R}^d)$ implies $\omega*\operatorname{Re}allywidecheck{\varphi} \in L^2({\mathbb R}^d)$ as claimed. \noindent $\Longleftarrow$ For each $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$, we have $\omega * \operatorname{Re}allywidecheck{\varphi} \in L^2({\mathbb R}^d)$. Let $f_{\varphi} \in L^2({\mathbb R}^d)$ be the $L^2$-Fourier transform of $\omega * \operatorname{Re}allywidecheck{\varphi}$. Then, exactly as above, as tempered distributions we have \begin{equation} \label{eq:vwo} \varphi \widehat{\omega}= \operatorname{Re}allywidehat{(\omega*\operatorname{Re}allywidecheck{\varphi}) \ensuremath{\lambda\!\!\!\lambda} }= f_{\varphi} \ensuremath{\lambda\!\!\!\lambda} \,. \operatorname{e}nd{equation} Next, for each $n\in{\mathbb Z}^d$, fix some $\varphi_n\in{\mathbb C}c^\infty({\mathbb R}^d)$ such that $\varphi_n(x) =1$ for all $x\in n+[0,1)^d$. Now, define \begin{equation} \label{eq:ffv} f(x):= f_{\varphi_n}(x) \qquad \text{ if } x\in n+[0,1)^d \,. \operatorname{e}nd{equation} Then, $f_{\varphi_n}\in L^2({\mathbb R}^d)$ implies $f|_{n+[0,1)^d}\in L^2({\mathbb R}^d)$. As every compact set can be covered by finitely many sets of the form $n+[0,1)^d$, we obtain $f\in L_{\text{loc}}^2({\mathbb R}^d)$.
Finally, if $\varphi\in{\mathbb C}c^\infty({\mathbb R}^d)$ is such that ${\operatorname{supp}}(\varphi)\subseteq n+[0,1)^d$ for some $n$, we have $\varphi=\varphi\varphi_n$, and hence by \operatorname{e}qref{eq:vwo} and \operatorname{e}qref{eq:ffv} \[ \widehat{\omega}(\varphi) = (\varphi_n\widehat{\omega})(\varphi) = (f_{\varphi_n}\ensuremath{\lambda\!\!\!\lambda})(\varphi) = \int_{n+[0,1)^d} f_{\varphi_n}(x)\, \varphi(x)\ \mbox{d} x = (f\ensuremath{\lambda\!\!\!\lambda})(\varphi) \,. \] Now, via a partition of unity argument, we can write each $\phi \in {\mathbb C}c^\infty({\mathbb R}^d)$ as a linear combination of functions $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$ with ${\operatorname{supp}}(\varphi) \subseteq n+[0,1)^d$, and hence $\widehat{\omega} = f \ensuremath{\lambda\!\!\!\lambda}$ on ${\mathbb C}c^\infty({\mathbb R}^d)$. Therefore, by the density of ${\mathbb C}c^\infty({\mathbb R}^d)$ in ${\textsf S}({\mathbb R}^d)$, the two tempered distributions agree. \operatorname{e}nd{proof} Cauchy--Schwarz' or H\"older's inequality implies that $L^2_{\text{loc}}({\mathbb R}^d) \subseteq L^1_{\text{loc}}({\mathbb R}^d)$, which immediately leads to the following sufficient conditions. \begin{corollary} Let $\omega \in \mathcal{DTBM}({\mathbb R}^d)$. If $\widehat{\varphi}*\omega \in L^2({\mathbb R}^d)$ for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$, then $\omega \in \mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d)$. \qed \operatorname{e}nd{corollary} \begin{corollary}\label{cor3} Let $\omega \in \mathcal{DTBM}({\mathbb R}^d)$. If $g*\omega \in L^2({\mathbb R}^d)$ for all $g \in {\textsf S}({\mathbb R}^d)$, then $\omega \in \mathcal{DTBM}_{\operatorname{0a}}({\mathbb R}^d)$. \qed \operatorname{e}nd{corollary} Consider now a Fourier transformable measure $\mu$ with uniformly discrete support.
If we could strengthen Corollary~\operatorname{Re}f{cor3} by replacing `for all $g \in {\textsf S}({\mathbb R}^d)$' by `for all $g \in {\mathbb C}c^\infty({\mathbb R}^d)$', then a simple computation would allow us to replace this condition by a much simpler restriction on the coefficients of $\mu$. This will in turn have interesting consequences for Fourier transformable measures with Meyer set support. In order to do this, let us introduce the following definition. \begin{definition} A measurable function $f : {\mathbb R}^d \to {\mathbb C}$ is called \operatorname{e}mph{Schwartz to $L^2$ compatible} if $gf \in L^2({\mathbb R}^d)$ for all $g \in {\textsf S}({\mathbb R}^d)$. We denote the space of Schwartz to $L^2$ compatible functions by\footnote{We reversed the order of letters here since $SL_2$ is used for the special linear group.} $\mathcal{LS}_2({\mathbb R}^d)$. A measurable function $f : {\mathbb R}^d \to {\mathbb C}$ is called \operatorname{e}mph{weakly Schwartz to $L^2$ compatible} if $\widehat{\varphi}f \in L^2({\mathbb R}^d)$ for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$. We denote the space of weakly Schwartz to $L^2$ compatible functions by $\mathcal{WLS}_2({\mathbb R}^d)$. \operatorname{e}nd{definition} Let us start with some straightforward consequences of the definition. \begin{lemma}\label{lemma1} \begin{itemize} \item[(a)] $\mathcal{LS}_2({\mathbb R}^d) \subseteq \mathcal{WLS}_2({\mathbb R}^d) \subseteq L^2_{\operatorname{loc}}({\mathbb R}^d) \subseteq L^1_{\operatorname{loc}}({\mathbb R}^d)$. \item[(b)] Let $f \in L^1_{\operatorname{loc}}({\mathbb R}^d)$. If there exists some $k \in {\mathbb N}$ and some $2 \leqslant p \leqslant \infty$ such that $\frac{1}{(1+|\cdot|^2)^k}\, f\in L^p({\mathbb R}^d)$, then $f \in \mathcal{LS}_2({\mathbb R}^d)$. \item[(c)] $L^\infty({\mathbb R}^d) \subseteq \mathcal{LS}_2({\mathbb R}^d)$.
\operatorname{e}nd{itemize} \operatorname{e}nd{lemma} \begin{proof} (a) $\mathcal{LS}_2({\mathbb R}^d) \subseteq \mathcal{WLS}_2({\mathbb R}^d)$ follows from the fact that $\widehat{\varphi} \in {\textsf S}({\mathbb R}^d)$ for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$. Next, let $f \in \mathcal{WLS}_2({\mathbb R}^d)$, and let $K \subseteq {\mathbb R}^d$ be any compact set. It is easy to see that there is some $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$ such that $\widehat{\varphi} \geqslant 1_{K}$. Then, \[ \int_{K} | f(x)|^2\ \mbox{d} x \leqslant \int_{{\mathbb R}^d} | f(x) |^2\, |\widehat{\varphi}(x)|^2\ \mbox{d} x < \infty \,. \] This shows that $ \mathcal{WLS}_2({\mathbb R}^d) \subseteq L^2_{\operatorname{loc}}({\mathbb R}^d)$. As mentioned above, $L^2_{\operatorname{loc}}({\mathbb R}^d) \subseteq L^1_{\operatorname{loc}}({\mathbb R}^d)$ follows from Cauchy--Schwarz' or H\"older's inequality. \noindent (b) Let $g \in {\textsf S}({\mathbb R}^d)$, and let $2 \leqslant q \leqslant \infty$ be such that $\frac{2}{p}+\frac{2}{q}=1$. Such a $q$ exists, since $p\geqslant2$. Then, an application of H\"older's inequality gives \begin{align*} \| gf \|^2_2 &= \Big\| \Big| \frac{1}{(1+|\cdot|^2)^k} f \Big|^2\, \Big| (1+|\cdot|^2)^k g \Big|^2 \Big\|_1 \\ &\leqslant \Big\| \Big| \frac{1}{(1+|\cdot|^2)^k} f \Big|^2 \Big\|_\frac{p}{2} \ \Big\| \Big| (1+|\cdot|^2)^k g\Big|^2 \Big\|_{\frac{q}{2}} \\ &= \Big\| \frac{1}{(1+|\cdot|^2)^k} f\Big\|_p^2\ \Big\| (1+|\cdot|^2)^k g \Big \|_q^2 < \infty \operatorname{e}nd{align*} because of the assumption on $f$ and $g \in {\textsf S}({\mathbb R}^d)$. \noindent (c) This follows from (b). \operatorname{e}nd{proof} \begin{remark} \begin{itemize} \item[(a)] Let \[ f(x)= \begin{cases} \frac{1}{\sqrt{x}} &\mbox{ if } 0< x \leqslant 1 \,, \\ 0 & \mbox{ otherwise}\,. \operatorname{e}nd{cases} \] Then, $f \in L^1({\mathbb R}) \subseteq L^1_{\operatorname{loc}}({\mathbb R})$ and $f \in L^p({\mathbb R})$ for all $1 \leqslant p <2$.
Next, if $g \in {\textsf S}({\mathbb R})$ is any function such that $g \geqslant 1$ on $[0,1]$, then $fg \notin L^2({\mathbb R})$. This shows that $f \notin \mathcal{LS}_2({\mathbb R}) $ and $f \notin \mathcal{WLS}_2({\mathbb R}) $. In particular, the condition $2 \leqslant p \leqslant \infty$ in Lemma~\operatorname{Re}f{lemma1}(b) is sharp. \item[(b)] Let $f \in L^1_{\operatorname{loc}}({\mathbb R}^d)$ be such that $f \ensuremath{\lambda\!\!\!\lambda}$ is a tempered measure in the strong sense. Then, there exists some $n\in{\mathbb N}$ such that \[ \int_{{\mathbb R}^d} \frac{1}{(1+|x|^2)^n}\, |f(x)|\ \mbox{d} x < \infty \,, \] which is the condition from Lemma~\operatorname{Re}f{lemma1}(b) for $p=1$. Unfortunately, as pointed out above, this is not enough to conclude that $f \in \mathcal{LS}_2({\mathbb R}^d)$. \operatorname{e}nd{itemize} \operatorname{e}nd{remark} The next result is an immediate consequence of Lemma~\operatorname{Re}f{lemma1}. \begin{corollary}\label{Cor212} We have the following equality of spaces. \begin{align*} L^{\infty}({\mathbb R}^d) &=L^1_{\operatorname{loc}}({\mathbb R}^d)\, \cap\, L^\infty({\mathbb R}^d) =L^2_{\operatorname{loc}}({\mathbb R}^d) \, \cap\, L^\infty({\mathbb R}^d) \\ &= \mathcal{WLS}_2({\mathbb R}^d)\, \cap\, L^\infty({\mathbb R}^d)= \mathcal{LS}_2({\mathbb R}^d) \, \cap\, L^\infty({\mathbb R}^d) \,. \operatorname{e}nd{align*} \operatorname{e}nd{corollary} \begin{proof} Lemma~\operatorname{Re}f{lemma1}(a) and (c) imply \[ L^\infty({\mathbb R}^d) \subseteq \mathcal{LS}_2({\mathbb R}^d) \subseteq \mathcal{WLS}_2({\mathbb R}^d) \subseteq L^2_{\operatorname{loc}}({\mathbb R}^d) \subseteq L^1_{\operatorname{loc}}({\mathbb R}^d) \,. \] The claim follows. \operatorname{e}nd{proof} Now, we can prove the following result.
Note here that the only difference between this result and Proposition~\operatorname{Re}f{propL2} is that the definitions of $\mathcal{LS}_2({\mathbb R}^d)$ and $\mathcal{WLS}_2({\mathbb R}^d)$, respectively, allow us to replace, in the proof of Proposition~\operatorname{Re}f{propL2}, the function $\operatorname{Re}allywidecheck{g}$ for $g \in {\mathbb C}c^\infty({\mathbb R}^d)$ by $h \in {\textsf S}({\mathbb R}^d)$ and by $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$, respectively. As repeating the proof with these changes is straightforward, we skip it and only state the result. \begin{proposition}\label{T2} Let $\omega \in {\textsf S}'({\mathbb R}^d)$. \begin{itemize} \item[(a)] There exists some $f \in \mathcal{LS}_2({\mathbb R}^d)$ such that $\widehat{\omega}=f \ensuremath{\lambda\!\!\!\lambda}$ if and only if $h * \omega \in L^2({\mathbb R}^d)$ for all $h \in {\textsf S}({\mathbb R}^d)$. \item[(b)] There exists some $f \in \mathcal{WLS}_2({\mathbb R}^d)$ such that $\widehat{\omega}=f \ensuremath{\lambda\!\!\!\lambda}$ if and only if $\varphi * \omega \in L^2({\mathbb R}^d)$ for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$. \operatorname{e}nd{itemize} \qed \operatorname{e}nd{proposition} We can now prove the following result, which has interesting consequences for measures with Meyer set support. \begin{theorem}\label{T4} Let $\Lambda \subset {\mathbb R}^d$ be a uniformly discrete set, and let \[ \mu= \sum_{x \in \Lambda} c_x \delta_x \] be a tempered measure. Then, the following statements are equivalent. \begin{itemize} \item[(i)] There exists some $f \in \mathcal{WLS}_2({\mathbb R}^d)$ such that $\widehat{\mu}=f \ensuremath{\lambda\!\!\!\lambda}$. \item[(ii)] $\varphi * \mu \in L^2({\mathbb R}^d)$ for all $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$. \item[(iii)] $\sum_{x \in \Lambda } |c_x|^2 < \infty$.
\operatorname{e}nd{itemize} \operatorname{e}nd{theorem} \begin{proof} (i)$\iff$(ii): This is a consequence of Proposition~\operatorname{Re}f{T2}. \noindent (ii)$\Longrightarrow$(iii): Let $U \subseteq {\mathbb R}^d$ be such that $\Lambda$ is $U$-uniformly discrete. Pick $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$ such that ${\operatorname{supp}}(\varphi) \subseteq U$ and $\varphi \geqslant 1$ on some compact set $K \subseteq U$ of positive Lebesgue measure. Then, by \cite[Lem.~5.8.3]{NS11}, we have \[ | \mu* \varphi |= | \mu | * |\varphi| \,. \] Moreover, by the $U$-uniform discreteness of $\Lambda$ (compare the proof of \cite[Lem.~5.8.3]{NS11}), for each $y\in {\mathbb R}^d$, there exists at most one $x \in\Lambda$ such that $|\varphi(y-x)| \neq 0$. This immediately gives \[ | (\mu* \varphi) (y)|^2 =\sum_{x \in \Lambda} |c_x|^2\, |\varphi(y-x)|^2 \geqslant \sum_{x \in \Lambda} |c_x|^2\, |1_{K}(y-x)| \qquad \text{ for all } y \in {\mathbb R}^d \,. \] Note here that if $\Lambda$ is finite, then \[ \lambda(K) \sum_{x \in \Lambda} |c_x|^2 = \sum_{x \in \Lambda} |c_x|^2 \int_{{\mathbb R}^d} |1_{K}(y-x)|\ \mbox{d} y =\int_{{\mathbb R}^d} \sum_{x \in \Lambda} |c_x|^2 \, |1_{K}(y-x)|\ \mbox{d} y \,. \] On the other hand, if $\Lambda$ is infinite, it is countable by uniform discreteness. Let $\{ x_n \}$ be an enumeration of $\Lambda$. Then, the sequence of functions $f_n:=\sum_{k=1}^n |c_{x_k}|^2 |1_{K}(\cdot-x_k)| \in L^1({\mathbb R}^d)$ is increasing, and hence \[ \lambda(K) \sum_{x \in \Lambda} |c_x|^2 = \lim_{n\to\infty} \int_{{\mathbb R}^d} f_n(y)\ \mbox{d} y = \int_{{\mathbb R}^d} \lim_{n\to\infty} f_n(y)\ \mbox{d} y = \int_{{\mathbb R}^d} \left( \sum_{x \in \Lambda} |c_x|^2\, |1_{K}(y-x)| \right) \mbox{d} y \,, \] where we applied the monotone convergence theorem.
Therefore, in both cases, we obtain \[ \lambda(K) \sum_{x \in \Lambda} |c_x|^2 = \int_{{\mathbb R}^d} \left( \sum_{x \in \Lambda} |c_x|^2 |1_{K}(y-x)| \right) \mbox{d} y \leqslant \int_{{\mathbb R}^d} | (\mu* \varphi)(y)|^2\ \mbox{d} y < \infty \,. \] \noindent (iii)$\Longrightarrow$(ii): Note first that (iii) implies that \[ \nu:= \sum_{x \in \Lambda} |c_x|^2 \delta_x \] is a finite positive measure on ${\mathbb R}^d$. Let again $U \subseteq {\mathbb R}^d$ be such that $\Lambda$ is $U$-uniformly discrete, and let $\varphi \in {\mathbb C}c^\infty({\mathbb R}^d)$. By a standard partition of unity argument, there exist $\varphi_1,\ldots,\varphi_k \in {\mathbb C}c^\infty({\mathbb R}^d)$ and $y_1,\ldots,y_k \in {\mathbb R}^d$ such that \[ \varphi =\sum_{j=1}^k\varphi_j \qquad \text{ with } \qquad {\operatorname{supp}}(\varphi_j) \subseteq y_j+U \ \text{ for all } 1 \leqslant j \leqslant k \,. \] By \cite[Lem.~5.8.3]{NS11} and its proof, we have \[ | (\mu* \varphi_j)(y)| = (| \mu | * |\varphi_j|)(y) =\sum_{x \in \Lambda} |c_x|\, |\varphi_j(y-x)| \,, \] where, for each $1 \leqslant j \leqslant k$ and each $y \in {\mathbb R}^d$, at most one $x \in \Lambda$ contributes a non-zero term $|c_x|\, |\varphi_j(y-x)|$. In particular, this implies \[ | (\mu* \varphi_j)(y) |^2 =\sum_{x \in \Lambda} |c_x|^2\, |\varphi_j(y-x)|^2 = \int_{{\mathbb R}^d} |\varphi_j(y-x)|^2\ \mbox{d} \nu(x) \] for all $1 \leqslant j \leqslant k$ and $y \in {\mathbb R}^d$. Therefore, an application of Fubini's theorem gives \begin{align*} \int_{{\mathbb R}^d} | (\mu* \varphi_j)(y) |^2 \mbox{d} y &=\int_{{\mathbb R}^d} \int_{{\mathbb R}^d} |\varphi_j(y-x)|^2\ \mbox{d} \nu(x)\, \mbox{d} y\\ &=\int_{{\mathbb R}^d} \int_{{\mathbb R}^d} |\varphi_j(y-x)|^2\ \mbox{d} y\ \mbox{d} \nu(x) =\| \varphi_j\|_2^2\int_{{\mathbb R}^d} 1\ \mbox{d} \nu(x) \\ &=\| \varphi_j\|_2^2\, \sum_{x \in \Lambda} |c_x|^2 < \infty \,. \operatorname{e}nd{align*} Since $\mu*\varphi=\sum_{j=1}^k \mu*\varphi_j$ is a finite sum of elements of $L^2({\mathbb R}^d)$, we get $\mu*\varphi \in L^2({\mathbb R}^d)$. This completes the proof. \operatorname{e}nd{proof} The next corollary is an immediate consequence.
\begin{corollary}\label{cor234} Let $\Lambda \subset {\mathbb R}^d$ be a uniformly discrete set, and let \[ \mu= \sum_{x \in \Lambda} c_x \delta_x \] be a tempered measure such that $\sum_{x \in \Lambda } |c_x|^2 < \infty$. Then, its Fourier transform in the tempered distribution sense is an absolutely continuous measure. In particular, if $\mu$ is Fourier transformable as a measure, $\widehat{\mu}$ is absolutely continuous. \operatorname{e}nd{corollary} Fourier transformable measures with Meyer set support are of special interest in some branches of mathematics, such as Aperiodic Order. Therefore, we are going to list some consequences for this special class of measures. To do so, we recall the following result. \begin{theorem}\label{T5}\cite[Thm.~4.1]{NS20a} \cite[Thm.~5.7]{NS21} Let $\mu$ be a Fourier transformable measure supported inside a Meyer set $\Gamma \subset {\mathbb R}^d$. Then, there exists some Meyer set $\Lambda \subset {\mathbb R}^d$ and some function $c: \Lambda \to {\mathbb C}$ such that \[ \mu_{\operatorname{0a}}= \sum_{x \in \Lambda} c(x)\, \delta_x \,. \] \qed \operatorname{e}nd{theorem} By combining this result with Theorem~\operatorname{Re}f{T4} and \cite[Cor.~35]{SS3}, we obtain the following consequence. \begin{theorem}\label{T6} Let $\mu$ be a Fourier transformable measure supported inside a Meyer set $\Gamma \subset {\mathbb R}^d$. Then, there exists some Meyer set $\Lambda \subset {\mathbb R}^d$ and some bounded function $c: \Lambda \to {\mathbb C}$ such that \begin{itemize} \item[(a)] $\mu_{\operatorname{0a}}= \sum_{x \in \Lambda} c(x)\, \delta_x$. \item[(b)] $\lim_{ \|x \| \to \infty } |c(x)| =0$. \item[(c)] If there exists some $f \in L^1_{\operatorname{loc}}({\mathbb R}^d) \cap L^\infty({\mathbb R}^d)$ such that $\left(\widehat{\mu}\right)_{\operatorname{ac}}=f \ensuremath{\lambda\!\!\!\lambda}$, then \[ \sum_{x \in \Lambda} |c(x)|^2 < \infty \,.
\] \operatorname{e}nd{itemize} \operatorname{e}nd{theorem} \begin{proof} (a) This follows from Theorem~\operatorname{Re}f{T5}. \noindent (b) This is a consequence of \cite[Cor.~35]{SS3}. \noindent (c) By Corollary~\operatorname{Re}f{Cor212}, we have $f \in \mathcal{WLS}_2({\mathbb R}^d)$. The claim follows now from Theorem~\operatorname{Re}f{T4}. \operatorname{e}nd{proof} \begin{thebibliography}{99} \bibitem{ARMA1} L.N.~Argabright and J.~Gil de Lamadrid: {Fourier analysis of unbounded measures on locally compact Abelian groups}, \textit{Memoirs Amer.\ Math.\ Soc.}, \textbf{145}, AMS, Providence, RI (1974). \bibitem{BaGa} M.~Baake and F.~G\"{a}hler: {Pair correlations of aperiodic inflation rules via renormalisation: Some interesting examples}, \textit{Topol. Appl.} \textbf{205} (2016), 4--27; \texttt{arXiv:1511.00885}. \bibitem{TAO} M.~Baake and U.~Grimm: \textit{Aperiodic Order. Vol. 1: A Mathematical Invitation}, Cambridge University Press, Cambridge (2013). \bibitem{BG2} M.~Baake and U.~Grimm: {Squirals and beyond: Substitution tilings with singular continuous spectrum}, \textit{Ergod. Th. \& Dynam. Sys.} \textbf{34} (2014), 1077--1102; \texttt{arXiv:1205.1384}. \bibitem{TAO2} M.~Baake and U.~Grimm (eds.): \textit{Aperiodic Order. Vol. 2: Crystallography and Almost Periodicity}, Cambridge University Press, Cambridge (2017). \bibitem{BM} M.~Baake and R.V.~Moody: {Weighted Dirac combs with pure point diffraction}, \textit{J.\ Reine Angew.\ Math.\ (Crelle)} \textbf{573} (2004), 61--94; \texttt{arXiv:math.MG/0203030}. \bibitem{BSS} M.~Baake, T.~Spindeler and N.~Strungaru: Diffraction of compatible random substitutions in one dimension, \textit{Indag. Math.} \textbf{29} (2018), 1031--1071; \texttt{arXiv:math.DS/1712.00323}. \bibitem{BF} C.~Berg and G.~Forst: \textit{Potential Theory on Locally Compact Abelian Groups}, Springer, Berlin (1975). \bibitem{Eb} W.~F.~Eberlein: Abstract ergodic theorems and weak almost periodic functions, \textit{Trans. Amer.
Math. Soc.} \textbf{67} (1949), 217--224. \bibitem{ARMA} J.~Gil de Lamadrid and L.N.~Argabright: {Almost periodic measures}, \textit{Memoirs Amer.\ Math.\ Soc.} \textbf{85} (1990), no.~428, AMS, Providence, RI. \bibitem{Kaba} M.~Kabanava: Tempered Radon measures, \textit{Revista Matem\'atica Complutense} \textbf{21} (2008), 553--564. \bibitem{LAG} J.~Lagarias: {Meyer's concept of quasicrystal and quasiregular sets}, \textit{Commun. Math. Phys.} \textbf{179} (1996), 365--376. \bibitem{LM} D.~Lenz and R.~V.~Moody: {Stationary processes with pure point diffraction}, \textit{Ergod.\ Th.\ \& Dynam.\ Syst.} \textbf{37(8)} (2017), 2597--264; \texttt{arXiv:1111.3617}. \bibitem{LSS} D.~Lenz, T.~Spindeler and N.~Strungaru: Pure point diffraction and mean, Besicovitch and Weyl almost periodicity, \textit{preprint} (2020); \texttt{arXiv:2006.10821}. \bibitem{LSS2} D.~Lenz, T.~Spindeler and N.~Strungaru: Pure point spectrum for dynamical systems and mean almost periodicity, \textit{preprint} (2020); \texttt{arXiv:2006.10825}. \bibitem{LS2} D.~Lenz and N.~Strungaru: {On weakly almost periodic measures}, \textit{Trans.\ Amer.\ Math.\ Soc.} \textbf{371} (2019), 6843--6881; \texttt{arXiv:1609.08219}. \bibitem{Meyer} Y.~Meyer: \textit{Algebraic Numbers and Harmonic Analysis}, North-Holland, Amsterdam (1972). \bibitem{MOO} R.~V.~Moody: {Meyer sets and their duals}, in: \textit{The Mathematics of Long-Range Aperiodic Order}, ed. R.~V.~Moody, NATO ASI Series, Vol. \textbf{C489} (1997), Kluwer, Dordrecht, pp. 403--441. \bibitem{MoSt} R.V.~Moody and N.~Strungaru: {Almost periodic measures and their Fourier transforms}, in \cite{TAO2}, pp. 173--270. \bibitem{Ped} G.~K.~Pedersen: \textit{Analysis Now}, Springer, New York (1989); Revised printing (1995). \bibitem{ReSi} M.~Reed and B.~Simon: \textit{Methods of Modern Mathematical Physics I: Functional Analysis}, 2nd ed. (1980), Academic Press, San Diego, CA. \bibitem{CRS2} C.~Richard and N.
Strungaru: {A short guide to pure point diffraction in cut-and-project sets}, \textit{J. Phys. A: Math. Theor.} \textbf{50} (2017), 25 pp.; \texttt{arXiv:1606.08831}. \bibitem{sch} L.~Schwartz: \textit{Th\'eorie des distributions}, Publications de l'Institut de math\'ematique de l'Universit\'e de Strasbourg, \textbf{9-10} (1966). \bibitem{She} D.~Shechtman, I.~Blech, D.~Gratias and J.~W.~Cahn: Metallic phase with long-range orientational order and no translation symmetry, \textit{Phys.\ Rev.\ Lett.}\ \textbf{53} (1984), 183--185. \bibitem{SS3} T.~Spindeler and N.~Strungaru: A note on measures vanishing at infinity, \textit{Rev. Math. Phys.} \textbf{31} (2019), 1950007; \texttt{arXiv:1610.03381}. \bibitem{SS2} T.~Spindeler and N.~Strungaru: On norm almost periodic measures, \textit{Math. Z.} (2021), in press; \texttt{arXiv:1810.09490}. \bibitem{NS1} N.~Strungaru: {Almost periodic measures and long-range order in Meyer sets}, \textit{Discr. Comput. Geom.} \textbf{33} (2005), 483--505. \bibitem{NS5} N.~Strungaru: {On weighted Dirac combs supported inside model sets}, \textit{J. Phys. A: Math. Theor.} \textbf{47} (2014), 19 pp.; \texttt{arXiv:1309.7947}. \bibitem{NS11} N.~Strungaru: Almost Periodic Pure Point Measures, in \cite{TAO2}, pp. 271--342; \texttt{arXiv:1501.00945}. \bibitem{NS20a} N.~Strungaru: {On the Fourier analysis of measures with Meyer set support}, \textit{J. Funct. Anal.} \textbf{278} (2020), 30 pp.; \texttt{arXiv:1807.03815}. \bibitem{NS20b} N.~Strungaru: On the Fourier transformability of strongly almost periodic measures, \textit{Can.\ J.\ Math.} \textbf{72} (2020), 900--927; \texttt{arXiv:1704.04778}. \bibitem{NS21} N.~Strungaru: Why do Meyer sets diffract?, \textit{preprint} (2021); \texttt{arXiv:2101.10513}. \bibitem{ST} N.~Strungaru and V.~Terauds: Diffraction theory and almost periodic distributions, \textit{J.\ Stat. Phys.} \textbf{164} (2016), 1183--1216; \texttt{arXiv:1603.04796}. \operatorname{e}nd{thebibliography} \operatorname{e}nd{document}
Continuing with our example, we consider spin systems that possess only rotational degrees of freedom, in which case all local experimental operations, such as the placement of a Stern-Gerlach magnet, are performed relative to a local Cartesian frame which is private. \subsection{One transmitted qubit} Consider the transmission of a single qubit from Alice to Bob. As they possess an isomorphism between their experimental operations, Bob can use the outcomes of his measurements to infer information about Alice's preparation. For example, they can communicate a classical bit by Alice preparing one of an orthogonal pair of states ($|0\rangle$ or $|1\rangle$) and Bob performing the corresponding projective measurement which reveals the preparation with certainty. On the other hand, an eavesdropper (Eve) who does not share Alice and Bob's private SRF cannot correlate the outcomes of her measurements with Alice's preparations. To represent the state of the transmitted qubit, Eve must average over all rotations $\Omega \in$ SU(2) that could describe the relation between her local RF and theirs. Thus, Eve would represent the state of the qubit relative to her uncorrelated reference frame as \begin{equation} \label{eq:SingleQubitSuperop} \mathcal{E}_1(\rho) = \int {\rm d}\Omega\, R(\Omega) \rho R^\dag (\Omega) = \tfrac{1}{2}I \, , \end{equation} where $R(\Omega)$ is the spin-1/2 unitary representation of $\Omega \in$ SU(2), ${\rm d}\Omega$ is the SU(2)-invariant measure\footnote{The invariant measure is chosen using the maximum entropy principle: because Eve has no prior knowledge about Alice's RF, she should assume a uniform measure over all possibilities.} and $I$ is the identity. Thus, as a result of being uncorrelated with the private SRF, Eve cannot acquire any information about Alice's preparation. Using this single qubit and their private SRF, Alice and Bob can privately communicate one logical qubit, and thus also one logical classical bit. 
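The group average in Eq.~(\ref{eq:SingleQubitSuperop}) is easy to check numerically. The sketch below is our illustration (assuming NumPy); Haar-random SU(2) elements are drawn via uniformly random unit quaternions, and a Monte Carlo estimate of $\mathcal{E}_1(\rho)$ recovers $\tfrac{1}{2}I$:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_su2(rng):
    """Haar-random SU(2) element from a uniformly random unit quaternion."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    a, b, c, d = q
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

# Alice's preparation relative to her frame, e.g. |0><0|
rho = np.diag([1.0, 0.0]).astype(complex)

# Monte Carlo estimate of E_1(rho) = \int dOmega R(Omega) rho R^dag(Omega)
N = 20000
avg = np.zeros((2, 2), dtype=complex)
for _ in range(N):
    U = haar_su2(rng)
    avg += U @ rho @ U.conj().T
avg /= N
print(np.round(avg, 2))   # close to I/2: Eve learns nothing about rho
```

The same estimate holds for any input state $\rho$, in agreement with Eq.~(\ref{eq:SingleQubitSuperop}).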
\subsection{Two transmitted qubits: Decoherence-full subspaces} \label{sec:twotransmittedqubits} If multiple qubits are transmitted, it is possible for Eve to acquire some information about the preparation even without access to the private SRF by performing \emph{relative} measurements on the qubits~\cite{BRS03b}. Consider the example of two transmitted qubits, and suppose that Alice assigns the state $\rho$ to the pair. Eve does not know how her RF is oriented relative to Alice's, but she knows that both qubits were prepared relative to the same RF. Thus, Eve's description of the pair is obtained from Alice's by averaging over all rotations $\Omega \in$ SU(2), but with the same rotation applied to each qubit. Eve therefore describes the pair by the Werner state~\cite{Wer89} \begin{align} \label{eq:TwoQubitSuperop} \mathcal{E}_2(\rho) &= \int {\rm d}\Omega\, R(\Omega)^{\otimes2} \rho R^\dag (\Omega)^{\otimes2} \nonumber \\ &= p_1 (\tfrac{1}{3}\Pi_{j=1}) + p_0 \Pi_{j=0} \, , \end{align} where \begin{equation} p_j = {\rm Tr}(\rho \Pi_{j})\,, \end{equation} and where $R(\Omega)^{\otimes2} \equiv R(\Omega) \otimes R(\Omega)$ is the (reducible) collective representation of SU(2) on two qubits, and $\Pi_j$ is the projector onto the subspace of total angular momentum $j$. It is clear that Eve has some probability of distinguishing states that differ in the weight they assign to the symmetric ($j=1$) and antisymmetric ($j=0$) subspaces. Moreover, she can distinguish perfectly between the antisymmetric state and a state which lies in the symmetric subspace. In other words, despite not sharing the RF, Eve can still measure the magnitude of the total angular momentum operator $\hat{J}^2$ and thus acquire information about the preparation. Eq.~(\ref{eq:TwoQubitSuperop}) implies that the two-qubit superoperator $\mathcal{E}_2$ is completely depolarizing on the three-dimensional symmetric subspace.
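Eq.~(\ref{eq:TwoQubitSuperop}) can likewise be verified numerically. In the sketch below (our illustration, assuming NumPy), the symmetric and antisymmetric projectors are built from the SWAP operator, and a Monte Carlo collective twirl of a random pure state reproduces the Werner form:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_su2(rng):  # Haar-random SU(2) via a random unit quaternion
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    a, b, c, d = q
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

# Projectors onto j=0 (antisymmetric) and j=1 (symmetric);
# basis order |00>, |01>, |10>, |11>
SWAP = np.eye(4)[[0, 2, 1, 3]]
P0 = (np.eye(4) - SWAP) / 2
P1 = (np.eye(4) + SWAP) / 2

# A random two-qubit pure state prepared relative to Alice's frame
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Monte Carlo twirl: average of (U x U) rho (U x U)^dag over Haar-random U
N = 20000
avg = np.zeros((4, 4), dtype=complex)
for _ in range(N):
    U = haar_su2(rng)
    U2 = np.kron(U, U)
    avg += U2 @ rho @ U2.conj().T
avg /= N

# Werner form: p1 * (P1/3) + p0 * P0 with p_j = Tr(rho Pi_j)
p1 = np.trace(rho @ P1).real
p0 = np.trace(rho @ P0).real
werner = p1 * P1 / 3 + p0 * P0
print(np.abs(avg - werner).max())   # small Monte Carlo residual
```

The residual shrinks as $1/\sqrt{N}$, consistent with the exact Werner form.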
In contrast to decoherence-free subspaces~\cite{Zan97} used in quantum computing, the effect of the map $\mathcal{E}_2$ on this subspace is irreversible: the superoperator takes any state on this subspace to a fixed state, namely, the completely mixed state on this subspace. In Section~\ref{sec:Quantum}, we will define subspaces with this property to be \emph{decoherence-full subspaces}\footnote{Note that the term ``decoherence'' has many connotations in the literature. Here, we shall take the term to be synonymous with ``noise'', where this noise may arise from ignorance rather than a coupling to the environment.}. By encoding in a decoherence-full subspace, Alice can achieve private quantum communication. For instance, Alice can encode a logical qutrit\footnote{A qutrit is a 3-dimensional generalization of the qubit.} state into a state $\rho_S$ of two qubits that has support entirely within the symmetric subspace. Bob, sharing the private RF, can recover this qutrit with perfect fidelity. However, Eve identifies all such qutrit states with $\mathcal{E}_2(\rho_S) = \tfrac{1}{3}\Pi_{j=1}$, the completely mixed state on the $j=1$ subspace, and therefore cannot infer anything about $\rho_S$. Thus, using this scheme, a private qutrit can be transmitted from Alice to Bob using two qubits. Now consider how many classical bits of information Alice can transmit privately to Bob. An obvious scheme is for her to encode a classical trit as three orthogonal states within the symmetric subspace. (For example, using the three symmetric Bell states $|\psi^+\rangle$, $|\phi^+\rangle$ and $|\phi^-\rangle$.) However, this is not the optimally efficient scheme. 
Suppose instead that Alice encodes two classical bits as the four orthogonal states \begin{equation} \label{eq:fourprivatestates} \left| i\right\rangle =\frac{1}{2}\left| \psi ^{-}\right\rangle +\frac{\sqrt{3}}{2}\left| {\bf n}_{i}\right\rangle \left| {\bf n}_{i}\right\rangle \, ,\quad i=1,\ldots,4 \, , \end{equation} where $\left| \psi ^{-}\right\rangle $ is the singlet state and the $\left| {\bf n}_{i}\right\rangle \left| {\bf n}_{i}\right\rangle $ are four states in the symmetric subspace with both spins pointed in the same direction, with the four directions forming a tetrahedron, and with the phases chosen to ensure orthogonality of the $\left| i\right\rangle$~(see~\cite{Mas95}). It is easy to verify that \begin{equation} \mathcal{E}_{2}(|i\rangle\langle i|)=\tfrac{1}{4}I\,, \end{equation} the completely mixed state on the two qubit Hilbert space. Thus, these four states are completely distinguishable by Bob but completely indistinguishable by Eve. By Holevo's theorem, two classical bits is the maximum one could possibly communicate by the transmission of two qubits, so this scheme is optimally efficient. \subsection{Three transmitted qubits: Decoherence-full subsystems} Consider the transmission of three qubits from Alice to Bob. If Alice prepares these qubits in the state $\rho$, then Eve, who lacks the SRF, assigns the state \begin{align} \label{eq:ThreeQubitSuperop} \mathcal{E}_3(\rho) &= \int {\rm d}\Omega\, R(\Omega)^{\otimes3} \rho R^\dag (\Omega)^{\otimes3} \,. \end{align} With three qubits, the four-dimensional symmetric subspace consisting of states with total angular momentum $j=3/2$ is a decoherence-full subspace: all states on this subspace are mapped by $\mathcal{E}_3$ to the completely mixed state on this subspace. The four-dimensional subspace $\mathbb{H}_{j=1/2}$ consisting of states with total angular momentum $j=1/2$ has a more complex structure. 
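This complete indistinguishability can be checked directly for a representative of the four states, taking ${\bf n}_i$ along $z$ and applying the exact twirl of Eq.~(\ref{eq:TwoQubitSuperop}): any state with singlet weight $p_0 = 1/4$ is mapped to $\tfrac{1}{4}I$. A sketch (our illustration, assuming NumPy):

```python
import numpy as np

# Basis order |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # |psi^->
nn = np.array([1, 0, 0, 0])                      # |n_i>|n_i> with n_i along z

psi = 0.5 * singlet + (np.sqrt(3) / 2) * nn      # one of the four code states
rho = np.outer(psi, psi.conj())

SWAP = np.eye(4)[[0, 2, 1, 3]]
P0 = (np.eye(4) - SWAP) / 2                      # antisymmetric projector
P1 = (np.eye(4) + SWAP) / 2                      # symmetric projector

# Exact twirl: E_2(rho) = p1 * (P1/3) + p0 * P0
p1 = np.trace(rho @ P1).real
p0 = np.trace(rho @ P0).real                     # = 1/4 for each code state
E2_rho = p1 * P1 / 3 + p0 * P0
print(np.round(E2_rho.real, 3))                  # I/4: completely mixed
```

Since $p_1 = 3/4$ and $p_0 = 1/4$, the twirl gives $\tfrac{1}{4}\Pi_{j=1} + \tfrac{1}{4}\Pi_{j=0} = \tfrac{1}{4}I$, exactly as claimed.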
This subspace can be given a tensor product structure (TPS)~\cite{Zan03} as \begin{equation} \label{eq:TwoQubitTPS} \mathbb{H}_{j=1/2} = \mathbb{H}_{R} \otimes \mathbb{H}_{P} \, , \end{equation} where $\mathbb{H}_{R}$ is a two-dimensional Hilbert space that carries the $j=1/2$ irreducible representation of SU(2), and $\mathbb{H}_{P}$ is a two-dimensional Hilbert space that carries the trivial representation of SU(2). This TPS does not correspond to the TPS obtained by combining multiple qubits: it is \emph{virtual}~\cite{Zan01b}. We refer to these two factor spaces as \emph{subsystems}, a concept we will define more precisely in Section~\ref{sec:Quantum}. For the moment, we consider how the superoperator $\mathcal{E}_3$ acts on states in terms of these subsystems. Because SU(2) acts irreducibly on $\mathbb{H}_{R}$ and trivially on $\mathbb{H}_{P}$, the superoperator $\mathcal{E}_3$ restricted to states on $\mathbb{H}_{j=1/2}$ can be expressed as \begin{equation} \label{eq:TwoQubitEonTPS} \mathcal{E}_3(\rho_{j=1/2}) = (\mathcal{D}_R \otimes \mathcal{I}_P)(\rho_{j=1/2}) \, , \end{equation} where $\mathcal{D}_R$ is the completely depolarizing superoperator on $\mathbb{H}_{R}$ and $\mathcal{I}_P$ is the identity operation on $\mathbb{H}_{P}$. Thus, $\mathcal{E}_3$ takes any product state of the form $\rho_R \otimes \sigma_P$ to the state $\tfrac{1}{2}I_R \otimes \sigma_P$. In fact, $\mathcal{D}_R \otimes \mathcal{I}_P$ maps any state $\rho_{j=1/2}$ on $\mathbb{H}_{R} \otimes \mathbb{H}_{P}$ to the product state $\tfrac{1}{2}I_R \otimes {\rm Tr}_R(\rho_{j=1/2})$, where ${\rm Tr}_R$ is the partial trace over the subsystem $\mathbb{H}_R$, thus removing all correlations between the subsystems. We call the subsystem $\mathbb{H}_{R}$ a \emph{decoherence-full subsystem}. 
We can now express the action of the superoperator $\mathcal{E}_3$ on an arbitrary state $\rho$ of three qubits as \begin{equation} \label{eq:E3Decomp} \mathcal{E}_3(\rho) = p_{3/2} (\tfrac{1}{4}\Pi_{j=3/2}) + p_{1/2} (\tfrac{1}{2}I_R \otimes \rho_P) \, , \end{equation} where \begin{align} p_{j}&= {\rm Tr}(\rho\Pi_{j})\,, \\ \rho_P &= \tfrac{1}{p_{1/2}} {\rm Tr}_R(\Pi_{j=1/2}\rho\Pi_{j=1/2}) \, . \end{align} Consider the following two options that Alice has for privately communicating quantum states to Bob using their private SRF: 1) she can encode quantum states into the decoherence-full $j=3/2$ subspace (allowing private communication of two qubits); 2) she can encode a qubit state $\rho$ into a product state $\rho\otimes \sigma_0$ in the $j=1/2$ subspace, where $\sigma_0$ is some fixed state on $\mathbb{H}_P$. (Using the latter scheme, Eve represents all encoded states as $\tfrac{1}{2}I_R \otimes \sigma_0$, and thus cannot obtain any information about $\rho$.) Clearly, using the $j=3/2$ subspace provides a superior capacity, and we will prove in Section~\ref{sec:Quantum} that this scheme is optimally efficient for three qubits. Note, however, that for greater numbers of qubits, the decoherence-full subsystems typically have greater dimensionality than the decoherence-full subspaces, and schemes that encode within them are necessary to achieve optimal efficiency. For private \emph{classical} communication, the question of optimal efficiency is much more complex. One scheme would be for Alice to encode two c-bits into four orthogonal states within the $j=3/2$ decoherence-full subspace. Using the $j=1/2$ subspace, it might seem that the best Alice can do is to encode a single c-bit into two orthogonal states in the decoherence-full subsystem $\mathbb{H}_R$; however, there is a better scheme using this subspace.
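The first branch of Eq.~(\ref{eq:E3Decomp}) — that $\mathcal{E}_3$ sends any state supported on the $j=3/2$ subspace to $\tfrac{1}{4}\Pi_{j=3/2}$ — can also be checked numerically. In the sketch below (our illustration, assuming NumPy), the projector $\Pi_{j=3/2}$ is obtained from the spectrum of the total angular momentum operator $\hat{J}^2$, and a Monte Carlo twirl of $|{\uparrow\uparrow\uparrow}\rangle$ is compared against it:

```python
import numpy as np

rng = np.random.default_rng(3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def spin_component(s, k):
    """(s/2) acting on qubit k of three, identity elsewhere."""
    ops = [I2, I2, I2]
    ops[k] = s / 2
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

# Total angular momentum J^2 and the projector onto its j = 3/2 eigenspace
Jx, Jy, Jz = (sum(spin_component(s, k) for k in range(3)) for s in (sx, sy, sz))
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
vals, vecs = np.linalg.eigh(J2)
V = vecs[:, np.isclose(vals, 15 / 4)]        # j(j+1) = 15/4 for j = 3/2
Pi32 = V @ V.conj().T

def haar_su2(rng):  # Haar-random SU(2) via a random unit quaternion
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    a, b, c, d = q
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

# |up,up,up> lies in the j = 3/2 subspace; twirl it over collective rotations
rho = np.zeros((8, 8), dtype=complex)
rho[0, 0] = 1.0
N = 20000
avg = np.zeros((8, 8), dtype=complex)
for _ in range(N):
    U = haar_su2(rng)
    U3 = np.kron(np.kron(U, U), U)
    avg += U3 @ rho @ U3.conj().T
avg /= N
print(np.abs(avg - Pi32 / 4).max())   # small: E_3(rho) ~ Pi_{3/2}/4
```

Any other pure state in the $j=3/2$ subspace gives the same limit, since the twirl is completely depolarizing within each irrep.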
If Alice encodes two c-bits into four orthogonal \emph{maximally entangled} states on the virtual TPS $\mathbb{H}_R \otimes \mathbb{H}_P$, these states are completely distinguishable by Bob but, using Eq.~(\ref{eq:E3Decomp}), all map to the same state $\tfrac{1}{2}I_R \otimes \tfrac{1}{2}I_P$ on $\mathbb{H}_R \otimes \mathbb{H}_P$ under $\mathcal{E}_3$ and thus are completely indistinguishable from Eve's perspective. Thus, using the $j=1/2$ subspace, Alice can privately transmit two c-bits to Bob, the same number as can be achieved using the $j=3/2$ subspace. It turns out that the optimally efficient scheme for private classical communication uses \emph{both} the $j=3/2$ and $j=1/2$ subspaces. Let $|j{=}3/2,\mu\rangle$, $\mu=1,\ldots,4$ be four orthogonal states on the $j=3/2$ subspace, and let $|j{=}1/2,\mu\rangle$, $\mu=1,\ldots,4$ be four maximally entangled states (as described above) on the $j=1/2$ subspace. Define the eight orthogonal states \begin{equation} \label{eq:EightOnThreeQubits} |b,\mu\rangle = \frac{1}{\sqrt{2}}\bigl(|j{=}3/2,\mu\rangle + (-1)^b |j{=}1/2,\mu\rangle\bigr) \, , \end{equation} where $b=1,2$ and $\mu=1,\ldots,4$. Alice can encode 3 c-bits into these eight states, which are completely distinguishable by Bob. It is easily shown using Eq.~(\ref{eq:E3Decomp}) that the decohering superoperator $\mathcal{E}_3$ maps all of these states to the completely mixed state on the total Hilbert space; thus, these states are completely indistinguishable by Eve. This scheme is optimally efficient for private classical communication because, by Holevo's theorem, three c-bits is the maximum amount of classical communication that can be achieved with three transmitted qubits. So we see that the optimal efficiency for private classical communication (three c-bits) is greater than that for private quantum communication (two qubits) if we directly compare c-bits to qubits. This result generalizes to the case of $N$ transmitted qubits.
Note, however, that the ratio of private capacity to public capacity decreases with increasing $N$. The examples presented in this section illustrate the central concepts of this paper. We now turn to the general case. \section{Private quantum communication} \label{sec:Quantum} \subsection{General schemes for private quantum communication} We begin the general discussion by defining private quantum communication schemes (using public quantum channels and without classical ``broadcast'' channels) as in~\cite{Amb00}, and deriving some general results for such schemes. Any time Alice and Bob have some private shared correlation, that is, one to which Eve does not have access, Eve's description of the systems transmitted along the channel is related to Alice's description by a decohering superoperator, denoted by $\mathcal{E}$. \textbf{Definition:} \emph{A private quantum communication scheme for $\mathcal{E}$.} Such a scheme consists of an \emph{encoding} $\mathcal{C}$, mapping message states in a logical Hilbert space $\mathbb{H}_L$ to encoded states on the Hilbert space $\mathbb{H}$ of the transmitted system, such that (i) the map $\mathcal{C}$ is invertible by Bob (who possesses the private shared correlations), allowing him to decode and recover states on $\mathbb{H}_L$ with perfect fidelity, and (ii) the encoding satisfies \begin{equation} \label{eq:EncodedStates} \mathcal{E}[\mathcal{C}(\varrho_L)] = \rho_0 \, , \quad \forall\ \varrho_L\ \text{on}\ \mathbb{H}_L \, , \end{equation} where $\rho_0$ is some fixed state on $\mathbb{H}$. This latter property ensures that all encoded states are completely indistinguishable from Eve's perspective, so that she cannot acquire any information about $\varrho_L$ through measurements on $\mathcal{E}[\mathcal{C}(\varrho_L)]$. This definition is equivalent to a ``private quantum channel'' defined in~\cite{Amb00}. 
We define an \emph{optimally efficient} private quantum communication scheme as one for which $\mathbb{H}_L$ is of maximal dimension. The invertibility of the encoding $\mathcal{C}$ by Bob places stringent conditions on the image of the logical Hilbert space $\mathbb{H}_L$ in $\mathbb{H}$. In order to ensure this invertibility, one method of encoding is to choose $\mathcal{C}$ such that $\mathbb{H}_L$ maps isomorphically to a subspace $\mathbb{H}' \subset \mathbb{H}$ of equal dimension. However, the most general method of encoding involves using ancilla systems~\cite{Amb00}. Let $\mathbb{H}'' \subset \mathbb{H}$ be a subspace that possesses a tensor product structure $\mathbb{H}'' = \mathbb{H}_A \otimes \mathbb{H}_B$ with $\mathbb{H}_A$ isomorphic to $\mathbb{H}_L$. The Hilbert space $\mathbb{H}_A$ is referred to as a \emph{subsystem} of $\mathbb{H}$. An encoding $\mathcal{C}$ that maps any state $\varrho_L$ on $\mathbb{H}_L$ to the state $\varrho_L \otimes \sigma_0$ on $\mathbb{H}_A \otimes \mathbb{H}_B$ for some fixed ancillary state $\sigma_0$ on $\mathbb{H}_B$ is the most general encoding that is invertible. In this case, we say that $\mathbb{H}_L$ is encoded by $\mathcal{C}$ into the subsystem $\mathbb{H}_A$. In order for encoded states in a subsystem to be completely indistinguishable by Eve, the superoperator $\mathcal{E}$ must map them all to the same density matrix $\rho_0$ on $\mathbb{H}$. We give a name to such subsystems. \textbf{Definition:} \emph{Completely private subsystems.} For all $\varrho_L$ on $\mathbb{H}_A$, and for a fixed $\sigma_0$ on $\mathbb{H}_B$, if \begin{equation} \label{eq:EncodedStatesInSubsystem} \mathcal{E}(\varrho_L \otimes \sigma_0) = \rho_0 \, , \end{equation} where $\rho_0$ is independent of $\varrho_L$, then the subsystem $\mathbb{H}_A$ is said to be \emph{completely private} with respect to $\mathcal{E}$.
Every completely private subsystem with respect to a superoperator $\mathcal{E}$ allows for the definition of a private quantum communication scheme. The scheme simply encodes a logical Hilbert space isomorphically into this completely private subsystem. \subsection{Decoherence-full subsystems} \label{subsec:DFull} In the following, we highlight a particular class of completely private subsystems, namely, those for which every state defined on the subsystem is mapped by $\mathcal{E}$ to the completely mixed state on the subsystem. In contrast to the decoherence-free (D-free) or noiseless subsystems~\cite{Kni00,Zan01} employed in quantum computing, the effect of the decoherence on these subsystems is maximal, and so we dub these \emph{decoherence-full} (D-full) subsystems. \textbf{Definition:} \emph{Decoherence-full subspaces/subsystems.} Consider a superoperator $\mathcal{E}$ that acts on density operators on a Hilbert space $\mathbb{H}$. A \emph{decoherence-full (D-full) subspace} is a subspace $\mathbb{H}' \subset \mathbb{H}$ such that the superoperator $\mathcal{E}$ maps every density operator on $\mathbb{H}'$ to the completely mixed density operator on $\mathbb{H}'$. Consider a subspace $\mathbb{H}'' \subset \mathbb{H}$ that possesses a tensor product structure $\mathbb{H}'' = \mathbb{H}_A \otimes \mathbb{H}_B$ such that \begin{equation} \label{eq:EonPair} \mathcal{E}(\rho_A \otimes \rho_B) = \tfrac{1}{d_A}I_A \otimes \rho_B' \, , \end{equation} where $\tfrac{1}{d_A} I_A$ is the completely mixed state on $\mathbb{H}_A$ and $\rho_B'$ is independent of $\rho_A$. We define such a $\mathbb{H}_A$ to be a \emph{decoherence-full (D-full) subsystem}. 
If, in addition, $\rho'_B=\rho_B$ for all $\rho_B$, so that $\mathbb{H}_B$ is decoherence-free, that is, if \begin{equation} \label{eq:EonPair2} \mathcal{E}(\rho_A \otimes \rho_B) = \tfrac{1}{d_A}I_A \otimes \rho_B \, , \end{equation} for all $\rho_A \otimes \rho_B$, then we define the product $\mathbb{H}_A \otimes \mathbb{H}_B$ to be a \emph{D-full/D-free subsystem pair}. Restricted to a D-full/D-free subsystem pair, the superoperator $\mathcal{E}$ has the decomposition $\mathcal{E}_{AB} = \mathcal{D}_A \otimes \mathcal{I}_B$ with respect to this TPS, where $\mathcal{D}_{A}$ is the completely depolarizing superoperator on $\mathbb{H}_{A}$, and $\mathcal{I}_{B}$ acts trivially on $\mathbb{H}_{B}$. Note that a D-full subspace is a special case of a D-full subsystem for which $\mathbb{H}_B$ is one-dimensional. In the following, we will show that D-full subsystems define optimally efficient schemes for private quantum communication for the class of superoperators describing Eve's ignorance of an SRF. \subsection{Group-averaging superoperators} The results so far in this section have not made any assumptions about the sort of private shared correlation that Alice and Bob are using to encode their information. We now focus on the case of a private SRF. This restriction will allow for a simple decomposition of the total Hilbert space into D-full/D-free subsystem pairs. Note first that every reference frame is associated with a symmetry group. For instance, a Cartesian frame is associated with the group of rotations SU(2), a clock (phase reference) is associated with U(1), and a reference ordering (which we shall consider in section \ref{sec:quantumschemeprivaterefordering}) is associated with the symmetric group $S_N$~\footnote{However, note that a \emph{partial} reference frame is associated with a factor space of a group; e.g., a reference direction is associated with the factor space SU(2)/U(1), where U(1) is the symmetry group of the direction under rotations.}. 
If Eve does not share Alice and Bob's RF, then she is ignorant of which element of the group describes the relation between her local RF and that of Alice and Bob. The unital superoperator $\mathcal{E}$ describing Eve's ignorance is therefore an average over the collective representation $T$ of a group $G$ acting on $\mathbb{H}$. If $G$ is a Lie group, then $\mathcal{E}$ acts on states $\rho$ on $\mathbb{H}$ as \begin{equation} \label{eq:SuperopAverageLieGroup} \mathcal{E}(\rho) = \int_G {\rm d}v(g) T(g) \rho T^\dag(g) \, , \end{equation} where ${\rm d}v$ is the group-invariant measure on $G$. For finite groups, the superoperator acts as \begin{equation} \label{eq:SuperopAverageFiniteGroup} \mathcal{E}(\rho) = \frac{1}{|G|} \sum_i T(g_i) \rho T^\dag(g_i) \, , \end{equation} where $|G|$ is the order (number of elements) of $G$. In the following, we use the notation of Lie groups; all results are equally applicable to finite groups. If $T$ acts irreducibly on $\mathbb{H}$, then $\mathcal{E}$ is completely depolarizing (by Schur's lemma). However, if $T$ is reducible, then we can use the characters $\chi_j$ of the irreducible representations (irreps) $T_j$ of $G$ to construct projection operators \begin{equation} \label{eq:ProjOperators} \Pi_j \propto \int_G {\rm d}v(g) \chi_j(g^{-1}) T(g) \, , \end{equation} up to a constant of proportionality. These projection operators decompose the Hilbert space $\mathbb{H}$ into a direct sum as \begin{equation} \label{eq:GeneralDirectSum} \mathbb{H} = \bigoplus_j \mathbb{H}_j \, . \end{equation} In general, each irrep occurs multiple times; we can factor each subspace $\mathbb{H}_j$ into a tensor product of subspaces $\mathbb{H}_{jA} \otimes \mathbb{H}_{jB}$ as follows. Each subsystem $\mathbb{H}_{jA}$ is the carrier space for the irreducible representation $T_j$ of $G$, and each corresponding subsystem $\mathbb{H}_{jB}$ carries the trivial representation of $G$ and has dimension equal to the multiplicity of $T_j$. (See~\cite{Ful91}.)
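For a finite group the projector construction can be made completely explicit. The sketch below (our illustration, assuming NumPy) takes $G=S_2$ acting on two qubits by permutation, builds the isotypic projectors by character averaging, and confirms that they resolve the identity; because the irreps of $S_2$ are one-dimensional, the group average here simply pinches $\rho$ onto its isotypic blocks:

```python
import numpy as np

# G = S_2 acting on two qubits by permutation of the tensor factors
T = {'e': np.eye(4), 'swap': np.eye(4)[[0, 2, 1, 3]]}

# Characters of the two (one-dimensional) irreps of S_2;
# every element is its own inverse, so chi(g^{-1}) = chi(g)
chars = {'trivial': {'e': 1, 'swap': 1},
         'sign':    {'e': 1, 'swap': -1}}

# Pi_j proportional to sum_g chi_j(g^{-1}) T(g); normalization 1/|G| = 1/2 here
Pi = {j: sum(chi[g] * T[g] for g in T) / 2 for j, chi in chars.items()}

# The finite-group average over G = {e, swap}
def E(rho):
    return sum(T[g] @ rho @ T[g].T for g in T) / 2   # |G| = 2, T real

# Sanity checks: idempotent, mutually orthogonal, resolve the identity
assert np.allclose(Pi['trivial'] @ Pi['trivial'], Pi['trivial'])
assert np.allclose(Pi['trivial'] @ Pi['sign'], np.zeros((4, 4)))
assert np.allclose(Pi['trivial'] + Pi['sign'], np.eye(4))

# With one-dimensional irreps the D-full factors are trivial, so the group
# average reduces to the pinching sum_j Pi_j rho Pi_j
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
assert np.allclose(E(rho), sum(P @ rho @ P for P in Pi.values()))
print('projectors verified')
```

For larger groups with higher-dimensional irreps, the average additionally depolarizes within each $\mathbb{H}_{jA}$, as in the general decomposition discussed next.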
The total Hilbert space decomposes as \begin{equation} \label{eq:GeneralDirectSumProd} \mathbb{H} = \bigoplus_j \mathbb{H}_{jA} \otimes \mathbb{H}_{jB} \, . \end{equation} Each subsystem $\mathbb{H}_{jA}$ is D-full, and each subsystem $\mathbb{H}_{jB}$ is D-free. Thus, each $\mathbb{H}_{jA} \otimes \mathbb{H}_{jB}$ forms a D-full/D-free subsystem pair. The action of the superoperator $\mathcal{E}$ can be expressed in terms of this decomposition as \begin{equation} \label{eq:ActionOfEOnArbitrary} \mathcal{E}(\rho) = \sum_j (\mathcal{D}_{jA} \otimes \mathcal{I}_{jB}) (\Pi_j \rho \Pi_j) \, , \end{equation} where $\mathcal{D}_{jA}$ is the completely depolarizing superoperator on each $\mathbb{H}_{jA}$, and $\mathcal{I}_{jB}$ acts trivially on each $\mathbb{H}_{jB}$. It should be noted that the $\mathbb{H}_{jA}$ are the \emph{only} D-full subsystems. This claim follows from the fact that if a subsystem is D-full then the representation $T$ of $G$ must act irreducibly when restricted to it, and the fact that the $\mathbb{H}_{jA}$ are the only subsystems on which the representation $T$ of $G$ acts irreducibly. The inference from a subsystem being D-full to having $T$ act irreducibly upon it is perhaps not obvious, so we give a short proof by contradiction. Suppose $\mathbb{H}_{A}$ is a D-full subsystem on which $T$ acts reducibly. It then follows that there exists an invariant subspace $\mathbb{H}_A' \subset \mathbb{H}_A$, meaning that for any $g \in G$, $T(g)$ maps $\mathbb{H}_A'$ onto itself. Thus, the action of $\mathcal{E}$ in Eq.~(\ref{eq:SuperopAverageLieGroup}) must take a state in $\mathbb{H}_A'$ to a state with support entirely on $\mathbb{H}_A'$, which cannot be the completely mixed state on $\mathbb{H}_A$. It follows that $\mathbb{H}_A$ is not a D-full subsystem, which contradicts our initial assumption.
\subsection{Optimally efficient private quantum communication schemes} We can now prove our central result for private quantum communication schemes: \textbf{Theorem 1:} An optimally efficient private quantum communication scheme for a group-averaging decohering superoperator $\mathcal{E}$ is given by encoding into the largest D-full subsystem for $\mathcal{E}$. \textbf{Proof:} It is clear that every private quantum communication scheme encodes into a completely private subsystem. It suffices therefore to show that the dimension of any completely private subsystem for a group-averaging decohering superoperator $\mathcal{E}$ is less than or equal to the dimension of the largest D-full subsystem for $\mathcal{E}$. Let $\mathbb{H}_E$ be a completely private subsystem for a group-averaging decohering superoperator $\mathcal{E}$ of the form given in Eq.~(\ref{eq:SuperopAverageLieGroup}), and let $\mathbb{H}'_E$ be the complementary subsystem such that $\mathbb{H}_E \otimes_E \mathbb{H}'_E \subset \mathbb{H}$ (where $\otimes_E$ denotes the tensor product structure with respect to these subsystems). The condition for $\mathbb{H}_E$ to be completely private is \begin{equation} \label{eq:RepeatCompletelyPrivate} \mathcal{E}(|\psi_E\rangle\langle\psi_E| \otimes_E \sigma_0) = \rho_0 \, , \quad \forall\ |\psi_E\rangle \in \mathbb{H}_E \,, \end{equation} for some fixed state $\sigma_0$ on $\mathbb{H}'_E$, where $\rho_0$ is a density operator on $\mathbb{H}$ that is independent of $|\psi_E\rangle$. Because $\sigma_0$ is arbitrary, we can choose it to be a pure state $\sigma_0 = |\phi_0\rangle\langle\phi_0|$ for $|\phi_0\rangle \in \mathbb{H}'_E$, which simplifies our proof. 
Using the expression~(\ref{eq:ActionOfEOnArbitrary}) for the action of $\mathcal{E}$ and projecting both sides of condition~(\ref{eq:RepeatCompletelyPrivate}) onto an irrep $j$ gives \begin{equation} \label{eq:EOnSingleIrrep} (\mathcal{D}_{jA} \otimes \mathcal{I}_{jB}) (|\psi_{jE}\rangle\langle\psi_{jE}|) = \rho_{0j} \, , \end{equation} where we have defined $|\psi_{jE}\rangle \equiv \Pi_j (|\psi_E\rangle \otimes_E |\phi_0\rangle) \in \mathbb{H}_j$ and $\rho_{0j} \equiv \Pi_j\rho_0\Pi_j$. Consider an irrep $j$ for which $\rho_{0j} \neq 0$. (At least one such $j$ must exist, as the irreps span the Hilbert space.) Taking the partial trace over the D-full subsystem $\mathbb{H}_{jA}$ (denoted ${\rm Tr}_{jA}$) and using the cyclic property of trace to eliminate $\mathcal{D}_{jA}$ gives \begin{equation} \label{eq:PartialTraceOnSingleIrrep} {\rm Tr}_{jA} (|\psi_{jE}\rangle\langle\psi_{jE}|) = {\rm Tr}_{jA}(\rho_{0j})\,, \quad \forall\ |\psi_E\rangle \in \mathbb{H}_E \, . \end{equation} Let $|\psi_E\rangle$ and $|\chi_E\rangle$ be two orthogonal states in $\mathbb{H}_E$. Because $\mathbb{H}_E$ is a linear space, $(|\psi_E\rangle + |\chi_E\rangle)/\sqrt{2} \in \mathbb{H}_E$; thus, \begin{align} \label{eq:PlusSuperposition} {\rm Tr}_{jA}(\rho_{0j}) &= {\rm Tr}_{jA}(|\psi_{jE}\rangle\langle\psi_{jE}|) \,, \nonumber \\ {\rm Tr}_{jA}(\rho_{0j}) &= {\rm Tr}_{jA}(|\chi_{jE}\rangle\langle\chi_{jE}|) \,, \\ {\rm Tr}_{jA}(\rho_{0j}) &= \tfrac{1}{2}{\rm Tr}_{jA}\bigl[(|\psi_{jE}\rangle + |\chi_{jE}\rangle)(\langle\psi_{jE}| + \langle\chi_{jE}|)\bigr] \, . \nonumber \end{align} These equations lead to the identity \begin{equation} \label{eq:PlusIdentity} {\rm Tr}_{jA}(|\psi_{jE}\rangle\langle\chi_{jE}|) + {\rm Tr}_{jA}(|\chi_{jE}\rangle\langle\psi_{jE}|)=0\, . 
\end{equation} Repeating this argument for $(|\psi_E\rangle + i |\chi_E\rangle)/\sqrt{2} \in \mathbb{H}_E$ gives \begin{equation} \label{eq:MinusIdentity} {\rm Tr}_{jA}(|\psi_{jE}\rangle\langle\chi_{jE}|) - {\rm Tr}_{jA}(|\chi_{jE}\rangle\langle\psi_{jE}|)=0 \, . \end{equation} Combining these equations, we obtain \begin{equation} \label{eq:PartialTraceOnSingleIrrep2} {\rm Tr}_{jA} (|\psi_{jE}\rangle\langle\chi_{jE}|) = 0\, \quad \text{if}\ \langle\psi_E|\chi_E\rangle = 0 \, . \end{equation} Let $|\zeta_{jB}\rangle$ be any state in $\mathbb{H}_{jB}$ such that $\langle\zeta_{jB}| \rho_{0j} |\zeta_{jB}\rangle \neq 0$ (guaranteed to exist if $\rho_{0j} \neq 0$). We define the relative state of $|\zeta_{jB}\rangle$ with respect to $|\psi_{jE}\rangle$, denoted $|\psi_{jE,A}\rangle$, by \begin{equation} \label{eq:ReducedState} |\psi_{jE,A}\rangle \equiv \langle\zeta_{jB}| \psi_{jE}\rangle \,. \end{equation} All such relative states are nonzero because \begin{align} \label{eq:NonZeroReducedStates} \langle\psi_{jE,A}|\psi_{jE,A}\rangle &= {\rm Tr}_{jA} \bigl( \langle\zeta_{jB} | \psi_{jE}\rangle \langle\psi_{jE}|\zeta_{jB}\rangle \bigr) \nonumber \\ &= \langle\zeta_{jB}| {\rm Tr}_{jA}(|\psi_{jE}\rangle\langle\psi_{jE}|)|\zeta_{jB}\rangle \nonumber \\ &= \langle\zeta_{jB}| {\rm Tr}_{jA}(\rho_{0j}) |\zeta_{jB}\rangle \nonumber \\ &\neq 0 \, , \end{align} where the third equality uses Eq.~(\ref{eq:PartialTraceOnSingleIrrep}). The relative states of $|\zeta_{jB}\rangle$ with respect to a pair of orthogonal states, $|\psi_E\rangle$ and $|\chi_E\rangle$, in $\mathbb{H}_E$ satisfy \begin{align} \label{eq:OrthogonalReducedStates} \langle\chi_{jE,A}|\psi_{jE,A}\rangle &= {\rm Tr}_{jA} \bigl( \langle\zeta_{jB} | \chi_{jE}\rangle \langle\psi_{jE}|\zeta_{jB}\rangle \bigr) \nonumber \\ &= \langle\zeta_{jB}| {\rm Tr}_{jA}(|\psi_{jE}\rangle\langle\chi_{jE}|)|\zeta_{jB}\rangle \nonumber \\ &= 0 \, , \end{align} where the final step follows from Eq.~(\ref{eq:PartialTraceOnSingleIrrep2}). 
Thus, for any two orthogonal states $|\psi_E\rangle$ and $|\chi_E\rangle$ in $\mathbb{H}_E$, there exists a corresponding pair of non-zero orthogonal relative states in $\mathbb{H}_{jA}$. The number of mutually orthogonal states in $\mathbb{H}_{jA}$ is upper bounded by its dimension. Thus, the dimension of any completely private subsystem $\mathbb{H}_E$ cannot be greater than the dimension of the D-full subsystem $\mathbb{H}_{jA}$ for any $j$ for which $\rho_{0j} \neq 0$. It follows that the dimension of a completely private subsystem cannot be greater than the dimension of the \emph{largest} D-full subsystem. Hence, an optimally efficient encoding is achieved by using the largest D-full subsystem. $\Box$ \subsection{Optimally efficient quantum communication scheme for a private shared Cartesian frame} We now use the group-theoretical structure of the superoperator $\mathcal{E}_N$ to determine the optimally efficient quantum communication scheme for a private shared Cartesian frame and transmission of $N$ spin-1/2 particles. The Hilbert space $(\mathbb{C}^2)^{\otimes N}$ of these $N$ qubits carries a collective tensor representation $R^{\otimes N}$ of SU(2), by which a rotation $\Omega \in$ SU(2) acts identically on each of the $N$ qubits. This Hilbert space also carries a representation $P_N$ of the symmetric group $S_N$, which is the group of permutations of the $N$ qubits. The actions of these two groups commute, and Schur-Weyl duality~\cite{Ful91} states that the Hilbert space $(\mathbb{C}^2)^{\otimes N}$ decomposes into a multiplicity-free direct sum of SU(2)$\times S_N$ irreps, each of which can be labelled by the SU(2) total angular momentum quantum number $j$. For simplicity, we restrict $N$ to be an even integer for the remainder of this paper.
Then, \begin{equation} \label{eq:DirectSum} (\mathbb{C}^2)^{\otimes N} = \bigoplus_{j=0}^{N/2} \mathbb{H}_j \,, \end{equation} where $\mathbb{H}_j$ is the eigenspace of total angular momentum with quantum number $j$, and the group SU(2)$\times S_N$ acts irreducibly on each eigenspace. Because the actions of SU(2) and $S_N$ commute, the Hilbert space can be further decomposed. Each subspace $\mathbb{H}_j$ in the direct sum can be factored into a tensor product $\mathbb{H}_j = \mathbb{H}_{jR} \otimes \mathbb{H}_{jP}$, such that SU(2) acts irreducibly on $\mathbb{H}_{jR}$ and trivially on $\mathbb{H}_{jP}$, and $S_N$ acts irreducibly on $\mathbb{H}_{jP}$ and trivially on $\mathbb{H}_{jR}$. Thus, \begin{equation} \label{eq:DirectSumOfProducts} (\mathbb{C}^2)^{\otimes N} = \bigoplus_{j=0}^{N/2} \mathbb{H}_{jR} \otimes \mathbb{H}_{jP} \, . \end{equation} The dimension of $\mathbb{H}_{jR}$ is \begin{equation} \label{eq:MultiplicityR} d_{jR}=2j+1\, , \end{equation} and that of $\mathbb{H}_{jP}$ is~\cite{BRS03a} \begin{equation} \label{eq:Multiplicity} d_{jP}=\binom{N}{N/2-j}\frac{2j+1}{N/2+j+1} \, . \end{equation} If Alice prepares $N$ qubits in a state $\rho$ and sends them to Bob, an eavesdropper Eve who is uncorrelated with the private SRF will describe the state as mixed over all rotations $\Omega \in$ SU(2). Thus, the superoperator $\mathcal{E}_N$ acting on a general density operator $\rho$ of $N$ qubits that describes the lack of knowledge of this private SRF is given by~\cite{BRS03a} \begin{equation} \label{eq:NQubitDecoheringChannel} \mathcal{E}_N(\rho) = \int {\rm d}\Omega \, R(\Omega)^{\otimes N} \rho R^{\dag}(\Omega)^{\otimes N} \, . \end{equation} The effect of this superoperator is best seen through use of the decomposition (\ref{eq:DirectSumOfProducts}) of the Hilbert space.
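As a quick sanity check on the multiplicity formulas, one can verify numerically that the D-full/D-free dimensions exhaust the $N$-qubit Hilbert space, $\sum_{j} d_{jR}\, d_{jP} = 2^N$. The following sketch (illustrative only; the function names are ours) implements Eqs.~(\ref{eq:MultiplicityR}) and~(\ref{eq:Multiplicity}) for even $N$:

```python
from math import comb

def d_R(j):
    """Dimension of H_{jR}: the SU(2) irrep dimension 2j+1."""
    return 2 * j + 1

def d_P(N, j):
    """Dimension of H_{jP}; the division is exact for even N and 0 <= j <= N/2."""
    return comb(N, N // 2 - j) * (2 * j + 1) // (N // 2 + j + 1)

for N in (2, 4, 10, 20):  # even N, as assumed in the text
    # The irreps exhaust (C^2)^{(x)N}: sum_j d_R(j) d_P(N, j) = 2^N
    assert sum(d_R(j) * d_P(N, j) for j in range(N // 2 + 1)) == 2**N
```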
The subsystems $\mathbb{H}_{jP}$ are D-free or noiseless subsystems~\cite{Kni00} under the action of this superoperator; states encoded into these subsystems are completely protected from this decoherence. In contrast, $\mathcal{E}_N$ is completely depolarizing on each $\mathbb{H}_{jR}$ subsystem, and thus the $\mathbb{H}_{jR}$ are D-full subsystems. For each $j$, the subsystems $\mathbb{H}_{jR} \otimes \mathbb{H}_{jP}$ form a D-full/D-free subsystem pair. The largest D-full subsystem occurs for $j_{\rm max} = N/2$ and has dimension $2j_{\rm max}+1 = N+1$. This D-full subsystem defines the optimally efficient private quantum communication scheme (by Theorem 1). Thus, given a private Cartesian frame and the transmission of $N$ qubits, Alice and Bob can privately communicate $\log_2 (N+1)$ qubits, or $\log_2 N$ qubits asymptotically. \subsection{Optimally efficient quantum communication scheme for a private shared reference ordering} \label{sec:quantumschemeprivaterefordering} Note the duality of the rotation group and the symmetric group in the system described above. One may ask why we consider a reference frame for the first group and not the second. In fact, we have implicitly assumed a reference frame for the permutation group in the form of a \emph{shared reference ordering}. The simplest way in which two parties can possess a shared reference ordering is if they agree on some labelling of the qubits, for instance, using their temporal order, and if the quantum channel preserves this labelling. The shared reference ordering that has been assumed up until now has been taken to be public (i.e., Eve shares it as well); however, one can also consider it to be private. Here, we consider the dual problem to the one of the previous section: a public Cartesian frame and a private reference ordering. Note that sharing a private reference ordering is not equivalent to sharing a secret key.
This inequivalence may seem surprising, because the most obvious way in which Alice and Bob may share a private reference ordering is for them to agree on a secret permutation of $N$ elements (Alice applies the permutation to the qubits prior to transmission and Bob applies it to the qubits after receiving them). As there are $N!$ elements in $S_N$, this secret permutation is equivalent to sharing $\log_2 (N!)$ bits of secret key. Nonetheless, in general when Alice and Bob share a private reference ordering they need not share any secret key. For instance, suppose the channel that connects Alice and Bob implements some fixed permutation $p_{C}$ of the qubits, and that this permutation is unknown to both Alice and Bob. The shared reference ordering is provided to Alice and Bob in the form of two devices, one for each party. Alice's device applies some permutation $p_A$ to her qubits prior to transmission, and Bob's device applies some permutation $p_B$ upon receiving them. The devices are designed such that $p_B = (p_C p_A)^{-1}$, and thus Bob recovers the quantum state of the qubits prepared by Alice. Assuming that $p_C$ is equally likely to be any element of $S_N$, Alice has no knowledge of $p_B$ and Bob has no knowledge of $p_A$. Therefore, they do not share a secret key. Note further that although Eve may have knowledge of $p_C$ (which she may acquire, for instance, by examining the channel), she has no knowledge of $p_A$, and assuming that $p_A$ is chosen uniformly among elements of $S_N$, Eve's description of the qubits is related to Alice's description by the superoperator \begin{equation} \label{eq:MixAllPermutations} \mathcal{P}_N[\rho] = \frac{1}{N!} \sum_{p \in S_N} P(p) \rho P^\dag(p) \, , \end{equation} where $P(p)$ is the unitary operator corresponding to the permutation $p$ of the qubits.
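The device arrangement described above, in which $p_B = (p_C p_A)^{-1}$ undoes the channel permutation even though neither party knows the other's permutation, can be sketched with a toy model (illustrative only; permutations here act on classical labels standing in for the qubits):

```python
import random

N = 6
ids = list(range(N))

def apply_perm(p, seq):
    """Move the item in slot i to slot p[i]."""
    out = [None] * len(seq)
    for i, x in enumerate(seq):
        out[p[i]] = x
    return out

def compose(p, q):
    """(p o q)[i] = p[q[i]]: apply q first, then p."""
    return [p[q[i]] for i in range(len(p))]

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return inv

rng = random.Random(1)
p_C = rng.sample(ids, N)          # fixed channel permutation, unknown to Alice and Bob
p_A = rng.sample(ids, N)          # Alice's device (unknown to Bob and Eve)
p_B = inverse(compose(p_C, p_A))  # Bob's device: p_B = (p_C p_A)^{-1}

qubits = [f"q{i}" for i in ids]   # stand-ins for Alice's ordered qubits
received = apply_perm(p_B, apply_perm(p_C, apply_perm(p_A, qubits)))
assert received == qubits         # Bob recovers Alice's ordering
```

An eavesdropper who knows only $p_C$ still sees the qubits in an order randomized by the uniformly chosen $p_A$, which is what the average in Eq.~(\ref{eq:MixAllPermutations}) expresses.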
When the $P(p)$ are decomposed into irreps, $\mathcal{P}_N$ induces the decomposition of $\mathbb{H}$ specified in Eq.~(\ref{eq:DirectSumOfProducts}), which is the same decomposition that was induced by $\mathcal{E}_N$. However, there is a difference: with respect to the superoperator $\mathcal{P}_N$, the subsystems $\mathbb{H}_{jP}$ are D-full (because $S_N$ acts irreducibly on these subsystems) and the subsystems $\mathbb{H}_{jR}$ are D-free (because $S_N$ acts trivially on these). For large $N$, the largest $\mathbb{H}_{jP}$ occurs for $j = j_{\rm max}$, the integer nearest to $\sqrt{N}/2$, and has dimension $d_{jP} = O(2^N/N)$ (meaning that $d_{jP} < c 2^N/N$ for some constant $c$ and for all values of $N$) as can be deduced from Eq.~(\ref{eq:Multiplicity}). This D-full subsystem defines the optimally efficient private quantum communication scheme. It allows for private communication of $N - \log_2 N$ logical qubits asymptotically given $N$ transmitted qubits. \subsection{Optimally efficient scheme for a private shared Cartesian frame and reference ordering} Another interesting case is the one where Alice and Bob possess both a private Cartesian frame as well as a private reference ordering. For transmission of $N$ qubits in this situation, Eve's lack of knowledge about either reference is characterized by the superoperator $\mathcal{E}_N \circ \mathcal{P}_N$. Interestingly, this superoperator is not completely depolarizing on the entire Hilbert space. Even without sharing either reference, Eve can still measure the total $\hat{J}^2$ operator to acquire information about the preparation. However, the subspaces $\mathbb{H}_{jR} \otimes \mathbb{H}_{jP}$ for each $j$ are D-full under the action of this superoperator. Thus, Alice and Bob can perform private quantum communication by encoding into one of these spaces. The largest D-full subspace occurs for $j$ equal to the integer nearest to $\sqrt{N/2}$, and has dimension $d_{jR}d_{jP} = O(2^N/\sqrt{N})$.
Asymptotically, this allows for $N- \frac{1}{2}\log_2 N$ private logical qubits to be encoded in $N$ transmitted qubits. \subsection{The duality between cryptography and communication} We have been concerned with determining how much quantum information, prepared relative to some RF, can be completely hidden from someone who does not share this RF. If this person is an eavesdropper, then this concealment can be very useful for cryptography, as we have shown. However, it can occur that someone with whom one \emph{wants} to communicate does not share the RF, for whatever reason. In this case, one is interested in the opposite problem, namely, how much quantum information can be made completely \emph{accessible} to someone who does not share the RF. This amount is determined by the largest D-free subsystem, as was shown in~\cite{BRS03a}. The following dichotomy arises: information encoded in a D-full subsystem is hidden from someone lacking the RF, while information encoded in a D-free subsystem is still accessible to someone lacking the RF. The implications for the case we are considering can be summarized as follows. From the perspective of someone who lacks the SU(2) SRF, the $\mathbb{H}_{jR}$ are D-full and the $\mathbb{H}_{jP}$ are D-free; from the perspective of someone who lacks the $S_N$ SRF, it is the $\mathbb{H}_{jP}$ that are D-full and the $\mathbb{H}_{jR}$ that are D-free. Thus, the number of logical qubits that can be transmitted privately given a private SU(2) SRF is equal to the number of logical qubits that can be communicated to a receiver that lacks the $S_N$ SRF, and similarly with SU(2) and $S_N$ reversed. What is bad for private quantum communication using a private SRF is good for quantum communication in the absence of an SRF. \section{Private classical communication} \label{sec:Classical} We now consider the private communication of classical information through a quantum channel using the resource of a private SRF. 
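The irrep bookkeeping underlying this dichotomy can be explored numerically. The sketch below (illustrative; it simply re-implements Eqs.~(\ref{eq:MultiplicityR}) and~(\ref{eq:Multiplicity}) with our own function names) locates the largest D-full subsystem in each of the two cases:

```python
from math import comb, sqrt

def d_R(j):
    return 2 * j + 1  # dimension of H_{jR}

def d_P(N, j):
    return comb(N, N // 2 - j) * (2 * j + 1) // (N // 2 + j + 1)  # dimension of H_{jP}

N = 100  # an even number of transmitted qubits
js = range(N // 2 + 1)

# Private SU(2) SRF: the largest D-full subsystem H_{jR} sits at j = N/2
assert max(js, key=d_R) == N // 2 and d_R(N // 2) == N + 1

# Private S_N SRF: the largest D-full subsystem H_{jP} peaks at the integer
# nearest sqrt(N)/2
j_star = max(js, key=lambda j: d_P(N, j))
assert j_star == round(sqrt(N) / 2)
```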
We provide upper bounds on the efficiency of such schemes (maximum number of private messages that can be sent), and present schemes for private SU(2) and/or $S_N$ SRFs that asymptotically saturate these bounds. As it turns out, the optimally efficient schemes for private classical communication are more efficient than the optimally efficient private quantum schemes (comparing private c-bits directly with private qubits). \textbf{Definition:} \emph{A private classical communication scheme for a decohering superoperator $\mathcal{E}$.} Such a scheme consists of a set $\{ \rho_i \}$ of density operators on $\mathbb{H}$ prepared by Alice that are (i) mutually orthogonal, so that Bob can distinguish these classical messages with certainty, and that (ii) satisfy \begin{equation} \label{eq:ClassicalEncodedStates} \mathcal{E}[\rho_i] = \rho_0 \, , \quad \forall\ \rho_i \, , \end{equation} where $\rho_0$ is some fixed state in $\mathbb{H}$, ensuring that Eve cannot gain any information about these classical messages. An optimally efficient private classical communication scheme has the maximum number of elements in the set $\{ \rho_i \}$. It is clear that every private \emph{quantum} communication scheme can be turned into a private \emph{classical} communication scheme by encoding the classical messages into an orthogonal set of quantum states within the D-full subsystem employed by the quantum scheme. However, we now show that for the group-averaging superoperators, there exist private classical communication schemes that perform much better. As with our three-qubit example given in Section~\ref{sec:Example}, the key to finding efficient private classical communication schemes is to encode into states that are entangled between D-full and D-free subsystems and span many irreps. \subsection{An illustrative example} Consider the following illustrative example. Let $\mathbb{H}$ be a Hilbert space.
Let $\mathcal{E}$ be a superoperator acting on states of this space such that, under a decomposition of $\mathbb{H}$ as \begin{equation} \label{eq:ExampleDirectSum} \mathbb{H} = \bigoplus_{a=1}^A \mathbb{H}_{a1} \otimes \mathbb{H}_{a2} \,, \end{equation} the subsystems $\mathbb{H}_{a1} \otimes \mathbb{H}_{a2}$ are D-full/D-free subsystem pairs under the action of $\mathcal{E}$. For our example, we enforce the additional (and atypical) constraint that \begin{equation} \label{eq:DimensionsTheSame} {\rm dim}\,\mathbb{H}_{a1} = {\rm dim}\,\mathbb{H}_{a2} = d \, , \end{equation} for some integer $d$ independent of $a$. Thus, all of the D-full subsystems $\mathbb{H}_{a1}$ and D-free subsystems $\mathbb{H}_{a2}$ are of the same dimension, and the dimension of the total Hilbert space $\mathbb{H}$ is $Ad^2$. If Eve's lack of correlations is described by the superoperator $\mathcal{E}$, then a simple private classical communication scheme can be constructed as follows. For a \emph{fixed} arbitrary $a$, choose a set of $d$ orthogonal states $\{|a,k\rangle_1, k=1,\ldots,d \}$ spanning the D-full subsystem $\mathbb{H}_{a1}$, and an arbitrary fixed state $|a,0\rangle_2 \in \mathbb{H}_{a2}$. Then $d$ classical messages can be encoded into the $d$ orthogonal states $|a,k\rangle_1 \otimes |a,0\rangle_2$. All of these states map to the same density operator $\tfrac{1}{d}I_{a1} \otimes |a,0\rangle_2\langle a, 0|$ under the action of $\mathcal{E}$. However, a more efficient scheme can be constructed using \emph{entangled} states in $\mathbb{H}_{a1} \otimes \mathbb{H}_{a2}$, as follows. Let $\{|a,k\rangle_1, k=1,\ldots,d \}$ be an orthonormal basis for $\mathbb{H}_{a1}$, and $\{|a,k'\rangle_2, k'=1,\ldots,d \}$ an orthonormal basis for $\mathbb{H}_{a2}$.
The states \begin{equation} \label{eq:EntangledStates} |\psi_{alm}\rangle = \frac{1}{\sqrt{d}}\sum_{k=1}^d \exp(2\pi {\rm i}km/d)|a,k\rangle_1 |a,k+l\rangle_2 \, , \end{equation} for $l,m=1,\ldots,d$ (with addition in the second ket label taken modulo $d$) form an orthonormal basis of $d^2$ maximally entangled states in $\mathbb{H}_{a1} \otimes \mathbb{H}_{a2}$. Using the fact that the maximally entangled states $|\psi_{alm}\rangle$ possess maximally mixed reduced density operators ${\rm Tr}_{a1}(|\psi_{alm}\rangle\langle\psi_{alm}|) = \frac{1}{d} I_{a2}$, it follows that all such maximally entangled states map under $\mathcal{E}$ to the state \begin{equation} \label{eq:EntangledStatesDecohere} \mathcal{E}(|\psi_{alm}\rangle\langle\psi_{alm}|) = \tfrac{1}{d} I_{a 1} \otimes \tfrac{1}{d} I_{a 2} \, , \end{equation} for all $l,m$. Thus, one can encode $d^2$ messages into entangled states of this form. Finally, we present an optimally efficient scheme which performs even better. Again, we define the entangled states $|\psi_{alm}\rangle$ for every $a=1,\ldots,A$ as in Eq.~(\ref{eq:EntangledStates}); these states form an orthonormal basis for the entire Hilbert space $\mathbb{H}$. We then construct the Fourier transform states over the index $a$, \begin{equation} \label{eq:FourierTransformedEntStates} |\phi_{\mu lm}\rangle = \frac{1}{\sqrt{A}}\sum_{a=1}^A \exp(2\pi{\rm i}\mu a/A) |\psi_{alm}\rangle \, , \end{equation} for $\mu = 1,\ldots,A$. These states are also orthonormal: \begin{equation} \label{eq:FourierTransStatesOrthogonal} \langle \phi_{\mu lm}|\phi_{\mu'l'm'}\rangle = \delta_{ll'}\delta_{mm'}\delta_{\mu\mu'} \, , \end{equation} and each has equal support on each of the subspaces $\mathbb{H}_{a1} \otimes \mathbb{H}_{a2}$. It is easily shown that they all map under the action of $\mathcal{E}$ to the completely mixed operator on $\mathbb{H}$; that is, \begin{equation} \label{eq:MaximallyMixed} \mathcal{E}(|\phi_{\mu lm}\rangle\langle\phi_{\mu lm}|) = \tfrac{1}{Ad^2} I \, , \quad \forall\ l,m,\mu \, .
\end{equation} Thus, these orthogonal states define a private classical communication scheme. We note that there are $Ad^2 = {\rm dim}\,\mathbb{H}$ such states; therefore, by Holevo's theorem, this scheme is optimally efficient. The difficulty with generalizing this scheme to typical group-averaging superoperators is that the D-full and D-free subsystems for a given irrep typically do not have equal dimensions, and these dimensions change as we vary over irreps. Below, we formulate and prove several theorems that allow us to place upper bounds on the number of private classical messages, and to construct asymptotically optimal schemes for private classical communication using private SU(2) and $S_N$ SRFs. \subsection{A single D-full/D-free subsystem pair} \label{sec:classicalschemeoneirrep} Consider a decohering superoperator of the form $\mathcal{D}_{jA} \otimes \mathcal{I}_{jB}$ defined on $\mathbb{H}_A \otimes \mathbb{H}_B$. This superoperator takes any state $\rho_{AB}$ on $\mathbb{H}_A \otimes \mathbb{H}_B$ to $\tfrac{1}{d_A} I_A \otimes {\rm Tr}_A(\rho_{AB})$. We therefore have a single D-full/D-free subsystem pair. We now prove a lemma for the optimally efficient private classical communication scheme in this case. \textbf{Lemma 1:} Consider a Hilbert space $\mathbb{H}_A \otimes \mathbb{H}_B$, where $\mathbb{H}_A$ ($\mathbb{H}_B$) has dimensionality $d_A$ ($d_B$). Let $\{ \rho_i \}$ be a private classical communication scheme for the superoperator $\mathcal{D}_{jA} \otimes \mathcal{I}_{jB}$. The maximum number of private classical messages (i.e., the maximum cardinality of the set $\{ \rho_i \}$) is $M = d_A \cdot {\rm min}\{d_A,d_B\}$. \textbf{Proof:} We consider two separate cases for the dimensions of the D-full and D-free subsystems. Each case gives a construction for an optimally efficient private classical communication scheme.
Let $\{ |k\rangle_A\}$ and $\{ |k\rangle_B\}$ be orthonormal bases for $\mathbb{H}_A$ and $\mathbb{H}_B$, respectively. \textit{Case 1:} $d_A \geq d_B$. The $d_A \cdot d_B$ orthogonal maximally-entangled states \begin{equation} \label{eq:OneIrrepMaxEntangled} |\psi_{lm}\rangle = \frac{1}{\sqrt{d_B}} \sum_{k=1}^{d_B} \exp(2\pi{\rm i}km/d_B) |k+l\rangle_A |k\rangle_B \, , \end{equation} where $l = 1,\ldots,d_A$ and $m=1,\ldots,d_B$ (with addition in the ket label taken modulo $d_A$), satisfy $\mathcal{D}_{jA} \otimes \mathcal{I}_{jB}(|\psi_{lm}\rangle\langle\psi_{lm}|) = \frac{1}{d_A} I_A \otimes \tfrac{1}{d_B} I_B$. Thus, this set of states forms a private classical communication scheme. Because $d_A \cdot d_B$ is the dimension of $\mathbb{H}_A \otimes \mathbb{H}_B$, there cannot exist a larger set of orthogonal states on this space, and thus this scheme is optimally efficient. \textit{Case 2:} $d_A < d_B$. The $d_A^2$ orthogonal maximally-entangled states \begin{equation} \label{eq:OneIrrepMaxEntangled2} |\psi_{lm}\rangle = \frac{1}{\sqrt{d_A}} \sum_{k=1}^{d_A} \exp(2\pi{\rm i}km/d_A) |k+l\rangle_A |k\rangle_B \, , \end{equation} where $l,m = 1,\ldots,d_A$, satisfy $\mathcal{D}_{jA} \otimes \mathcal{I}_{jB}(|\psi_{lm}\rangle\langle\psi_{lm}|) = \frac{1}{d_A} I_A \otimes \sigma_B$, with \begin{equation} \label{eq:PartialIdentity} \sigma_B = \frac{1}{d_A} \sum_{k=1}^{d_A} |k\rangle_B\langle k| \, . \end{equation} Thus, this set of states forms a private classical communication scheme. This set of states has cardinality less than the dimension of the joint Hilbert space; however, as we now show, the scheme is optimally efficient. First, consider sets of pure states. Every such state must have the same reduced density operator on $\mathbb{H}_B$, which we denote by $\sigma_B$. For a pure state, the rank of the reduced density operators on $A$ and $B$ must be equal, and because the former is bounded above by $d_A$, the latter must be as well.
Thus, we can limit our consideration to the subspace $\mathbb{H}'_B \subset \mathbb{H}_B$ spanned by the support of $\sigma_B$, whose dimension is bounded above by $d_A$. But this is just \textit{Case 1} applied to $\mathbb{H}_A \otimes \mathbb{H}'_B$, for which $d_A^2$ is the maximum number of private messages. It remains to be shown that making use of a set of \emph{mixed} states does not allow for a better scheme. Consider a set $\{ \rho_i \}$ of mixed states on $\mathbb{H}_A \otimes \mathbb{H}_B$, containing $M$ elements. Each $\rho_i$ must have the same reduced density operator on $\mathbb{H}_B$, which we denote by $\sigma_B$. We denote the rank of $\sigma_B$ by $r$. Expressing each $\rho_i$ as an eigendecomposition, we have \begin{equation} \label{eq:MixedEigendecomp} \rho_i = \sum_{l=1}^{L_i} p^i_l |\psi^{(i)}_l\rangle_{AB}\langle \psi^{(i)}_l| \, , \end{equation} where $\{|\psi^{(i)}_l\rangle_{AB}|l=1,\dots,L_i\}$ are $L_i$ pure states on $\mathbb{H}_A \otimes \mathbb{H}_B$. Each of these pure states has a reduced density matrix $\sigma^{(i)}_{lB} = {\rm Tr}_A(|\psi^{(i)}_l\rangle_{AB}\langle\psi^{(i)}_l|)$ with rank $r_l^{(i)} \le d_A$. For each $i$, a convex sum over $l$ of the $\sigma^{(i)}_{lB}$ must yield $\sigma_B$. It follows that $\sum_{l=1}^{L_i} r_l^{(i)} \ge r$, which implies that for all $i$, $L_i d_A \ge r$. However, if all $\rho_i$ are orthogonal, they must possess orthogonal supports, and these will therefore span a space of dimension $\sum_{i=1}^M L_i$. This space is contained in $\mathbb{H}_A \otimes \mathbb{H}^{(r)}_{B}$ (where $\mathbb{H}^{(r)}_{B} \subset \mathbb{H}_B$ is the $r$-dimensional space spanned by the support of $\sigma_B$) and thus $\sum_{i=1}^M L_i \le d_A r$. Combining these inequalities, and using $L_i \ge r/d_A$ for each $i$, gives $M r/d_A \le \sum_{i=1}^M L_i \le d_A r$, and hence $M \leq d_A^2$. The set of states in Eq.~(\ref{eq:OneIrrepMaxEntangled2}) consists of $M=d_A^2$ elements; therefore, the scheme involving these states is optimally efficient.
$\Box$ \subsection{A general group-averaging superoperator} In the previous section, we considered communication schemes using states that are confined to a single D-full/D-free subsystem pair. The most general scheme, however, makes use of states that span many such pairs. We must therefore consider the more general group-averaging superoperator $\mathcal{E}$ of Eq.~(\ref{eq:ActionOfEOnArbitrary}). Let $\{ \rho_i \}$ be a private classical communication scheme for this superoperator, satisfying $\mathcal{E}(\rho_i) = \rho_0$ for all $\rho_i$. We now prove a lemma which bounds the cardinality of $\{ \rho_i \}$. \textbf{Lemma 2:} An upper bound on the number of states on $\mathbb{H}$ in a private classical communication scheme for $\mathcal{E}$ is $M = \sum_j M_j$, where $M_j$ is the maximum number of states on $\mathbb{H}_j$ in a private classical communication scheme for $\mathcal{D}_{jA} \otimes \mathcal{I}_{jB}$. \textbf{Proof:} By assumption, \begin{equation} \mathcal{E}(\rho_i) = \rho_0 \, , \end{equation} for all $i$. Projecting both sides of this equation onto an irrep $j$, we obtain \begin{equation} \label{eq:PrivateInOneIrrep} (\mathcal{D}_{jA} \otimes \mathcal{I}_{jB})(\Pi_j \rho_i \Pi_j) = \Pi_j \rho_0 \Pi_j \, , \end{equation} for all $i$. By Lemma 1, there are at most $M_j$ orthogonal states that are mapped by $\mathcal{D}_{jA} \otimes \mathcal{I}_{jB}$ to the same density operator. Therefore, the supports of $\{ \Pi_j \rho_i \Pi_j, i=1,2,\ldots \}$ must lie in a subspace of $\mathbb{H}_j$ with dimension not greater than $M_j$. The set of states $\{\rho_i\}$ must therefore have support on a subspace of dimension at most $M = \sum_j M_j$. The cardinality of the set of orthogonal states $\{ \rho_i \}$ forming a private communication scheme is therefore upper-bounded by $M = \sum_j M_j$.
$\Box$ Thus, we have the following theorem: \textbf{Theorem 2:} In a private classical communication scheme for a group-averaging superoperator $\mathcal{E}$, the number $M$ of private classical messages satisfies \begin{equation} \label{eq:UpperBoundOnM} M \le \sum_j d_{jA} \cdot {\rm min}\{d_{jA},d_{jB}\} \, , \end{equation} where the $d_{jA}$ ($d_{jB}$) are the dimensions of the D-full (D-free) subsystems defined by $\mathcal{E}$. The proof is immediate from the preceding lemmas. Given that our theorem yields only an upper bound on the number of private classical messages that can be sent, the question of exactly how many private classical messages can be achieved remains open. As the example provided in Eq.~(\ref{eq:fourprivatestates}) of section \ref{sec:twotransmittedqubits} illustrates, the optimally efficient scheme is likely to make use of states that span irreps possessing unequal dimensions. \subsection{Private classical communication using a private Cartesian frame} We now consider the specific case of a private Cartesian frame, and present a scheme for private classical communication that is optimally efficient in the limit of large $N$. Consider the decomposition of the $N$-qubit Hilbert space $(\mathbb{C}^2)^{\otimes N}$ into a direct sum of D-full/D-free subsystem pairs as in Eq.~(\ref{eq:DirectSumOfProducts}). First, we note, from Eqs.~(\ref{eq:MultiplicityR}) and (\ref{eq:Multiplicity}), that for all $j$ strictly less than the maximum value $N/2$, the D-free subsystem $\mathbb{H}_{jP}$ is always of dimension \emph{greater than or equal to} that of the D-full subsystem $\mathbb{H}_{jR}$. Thus, we will employ irreps up to, but \emph{not} including, $j=N/2$. Let $j_{\rm min}<N/2$ be some fixed irrep. We now construct orthogonal entangled states for every irrep in the range $j_{\rm min} \leq j < N/2$ as follows. For convenience, we denote the dimension of the D-full subsystem of the $j_{\rm min}$ irrep by $d$, that is, $d\equiv2j_{\rm min}+1$.
Choose a set of orthogonal states $\{ |j,s\rangle_R, s=1,\ldots,d \}$ for $\mathbb{H}_{jR}$ and a corresponding set of orthogonal states $\{ |j,s'\rangle_P, s'=1,\ldots,d \}$ for $\mathbb{H}_{jP}$; note that such sets always exist because ${\rm dim}\,\mathbb{H}_{jP} \geq {\rm dim}\,\mathbb{H}_{jR} = 2j+1 \geq d$ for all $j$ in the range $j_{\rm min} \leq j < N/2$. For each irrep in this range, a set of $d^2$ orthogonal entangled states is then given by \begin{equation} \label{eq:SU(2)MaxEntStates2} |\psi_{jkl}\rangle = \frac{1}{\sqrt{d}} \sum_{s=1}^{d} \exp(2\pi{\rm i}sk/d) |j,s\rangle_R |j,s+l\rangle_P \, , \end{equation} where addition in the second ket label is taken modulo $d$. We wish to construct Fourier transformed states over $j$ with equal weight in each irrep. Thus, we define \begin{equation} \label{eq:SU(2)FourierTransformedEntStates} |\phi_{\mu kl}\rangle = \frac{1}{\sqrt{N/2 - j_{\rm min}}}\sum_{j=j_{\rm min}}^{N/2-1} \exp(2\pi{\rm i}\mu j/(N/2 - j_{\rm min})) |\psi_{jkl}\rangle \, . \end{equation} These states are all orthogonal, and all map to the same density matrix under the superoperator $\mathcal{E}_N$. The range of both $k$ and $l$ is $(1,\ldots,d=2j_{\rm min}+1)$, and the range of $\mu$ is $(1,\ldots,N/2-j_{\rm min})$; thus, there are a total of \begin{equation} \label{eq:TotalOrthoFourierStates} M = (N/2-j_{\rm min})(2j_{\rm min}+1)^2 \end{equation} distinct states. To maximize this number asymptotically, we choose $j_{\rm min}$ to be the integer nearest to $N/3$; this choice results in $O(N^3)$ distinct states. Thus, asymptotically, this scheme allows for $3\log_2 N$ private classical bits to be communicated using $N$ transmitted qubits, which saturates the upper bound given by Theorem 2. \subsection{Private classical communication using a private reference ordering} As a second example of private classical communication, we consider the case where the private SRF is a private reference ordering.
As discussed in section \ref{sec:quantumschemeprivaterefordering}, in this case the superoperator is $\mathcal{P}_N$ (defined in Eq.~(\ref{eq:MixAllPermutations})), and the $\mathbb{H}_{jP}$ are D-full subsystems while the $\mathbb{H}_{jR}$ are D-free subsystems. We consider only the limit of large $N$. In this case, the upper bound on the number of messages is simply $2^N$, the dimensionality of the entire Hilbert space. This bound is saturated asymptotically by a scheme similar to the one used in the previous section. For $j < N/2$, we have $d_{jP}\geq d_{jR}$, so that Case 1 of the proof of Lemma 1 applies and we can define $d_{jR}d_{jP}$ entangled states within the $j$ irrep (using Eq.~(\ref{eq:OneIrrepMaxEntangled})) which cannot be distinguished by Eve. For every $j$ value in a window of approximate width $\sqrt{N}$ centered at the integer nearest $\sqrt{N}$, we have in the asymptotic limit that $d_{jR} = O(\sqrt{N})$ and $d_{jP}= O(2^N/N)$ (using Eqs.~(\ref{eq:MultiplicityR}) and (\ref{eq:Multiplicity}) and Stirling's formula). Thus, in each such irrep, one can find $M_j = O(2^N/\sqrt{N})$ orthogonal states that cannot be distinguished by Eve. We can therefore Fourier transform these states across the $\sqrt{N}$ irreps, using the construction of Eq.~(\ref{eq:FourierTransformedEntStates}). The end result is a set of states that cannot be distinguished by Eve, the cardinality of which is $M= O(2^N)$. Thus, asymptotically, one achieves $N$ private c-bits using this scheme. \subsection{Private classical communication using a private Cartesian frame and reference ordering} If Alice and Bob possess both a private Cartesian frame and a private reference ordering of the transmitted qubits, then they can encode at least as many classical messages as they could with just a private reference ordering. Thus, asymptotically, they can achieve $N$ private c-bits in this case as well. 
One cannot achieve any more than this, because Holevo's theorem ensures that, with $N$ transmitted qubits, at most $N$ c-bits, whether private or public, can be communicated. \section{Discussion} \label{sec:Conclusions} In this paper, we have demonstrated that private shared reference frames are a resource of private correlations that can be used for cryptography. We have presented optimally efficient schemes for private quantum and classical communication using an insecure quantum channel for spin-1/2 systems and a shared Cartesian reference frame and/or a shared reference ordering of the systems. The results are summarized in Table~\ref{tab:Capacity}. \begin{table}[t] \caption{Asymptotic capacity for private quantum and classical communication for $N$ transmitted qubits and various private shared reference frames. \label{tab:Capacity}} \begin{center} \footnotesize \begin{tabular}{|c|c|c|} \hline \raisebox{0pt}[13pt][7pt]{Nature of the private SRF} &\begin{minipage}{0.9in}{Private quantum}\\{capacity (qubits)}\end{minipage}& \begin{minipage}{0.9in}{Private classical}\\{capacity (c-bits)} \end{minipage}\\ \hline \hline \begin{minipage}{1.4in}{Private Cartesian frame}\\{(Private SU(2) SRF)} \end{minipage} &\raisebox{0pt}[13pt][7pt]{$\log_2 (N)$} & \raisebox{0pt}[13pt][7pt]{$3\log_2 (N)$}\\ \hline \begin{minipage}{1.4in}{Private reference ordering}\\{(Private $S_N$ SRF)} \end{minipage} &\raisebox{0pt}[13pt][7pt]{$N - \log_2 (N)$} & \raisebox{0pt}[13pt][7pt]{$N$}\\ \hline \begin{minipage}{1.4in}{Both private}\\{(Private SU(2) \& $S_N$ SRF)} \end{minipage} &\raisebox{0pt}[13pt][7pt]{$N - \frac{1}{2} \log_2 (N)$} & \raisebox{0pt}[13pt][7pt]{$N$}\\ \hline \end{tabular} \end{center} \end{table} We note that our private classical schemes using a private SRF are similar in some ways to private-key cryptography, specifically, the Vernam cipher (one-time pad)~\cite{Ver26}. For example, the secret key in the Vernam cipher can be used only once to ensure perfect security.
Similarly, for our classical schemes, only a single plain-text (classical or quantum) can be encoded using a single private SRF. If the same private SRF is used to encode two plain-texts, then the \emph{relation} that holds between the two cipher-texts carries information about the plain-texts, and because it is possible to learn about this relation without making use of the SRF, Eve can obtain this information. This fact is clear from the example of a classical communication scheme by transmission of a classical pencil or gyroscope, considered in the introduction. Although Eve cannot determine the Euler angles of the pencils relative to the shared Cartesian frame, she can measure the angular separation of the two pencils. It is also useful to consider the \emph{differences} between using private shared reference frames and a secret key for private communication. One clear difference is that a secret key may be subdivided into a number of smaller secret keys, and each of these can be used independently of one another. (By ``independently'', we mean that one can encode a plain-text using the first key prior to knowing the identity of the plain-text that will be encoded using the second key). This feature does not hold when implementing private communication using a private SRF. Although a private SRF is not equivalent to secret classical key or entanglement, the former can yield the latter when supplemented by the use of a public quantum channel. Specifically, one can distribute a secret classical key by implementing the private classical communication scheme outlined in this paper with the key as plain-text. Similarly, one can establish entanglement between two parties by implementing a private quantum communication scheme where the subsystem encoding the quantum plain-text is entangled with systems that the sender keeps. Note that a private SRF also yields secret classical key if it is supplemented by a public SRF. 
For instance, perfect private and public shared Cartesian frames yield an infinite amount of secret key (in practice, the size of the key is limited by the size of the physical system that defines the Cartesian frame). Another question of interest is how a private SRF is established. Clearly, a public Cartesian frame together with an infinite classical key yields a perfect private Cartesian frame (the key defines the Euler angles of the private frame relative to the public frame). Shared entanglement of a certain sort can also be consumed to align local RFs~\cite{tez1,Aci01,Joz00}. Another interesting possibility is to set up the SRF by transmitting systems from Alice to Bob in a way that is sensitive to eavesdropping. Whether an analogue of key distribution can be achieved in this context is an interesting question for future research. Another such question is whether one can recycle a private SRF by monitoring for eavesdropping, in the same manner that one can recycle classical key and entanglement \cite{Leu02,Opp03}. Finally, we note that we have considered only classical reference frames. Preliminary research into the description and characterization of \emph{quantum} reference frames (cf.~\cite{Sch04,Col04}) leaves open the possibility for their use as a shared private correlation. Although the relationship between secret keys and entanglement has been analyzed in some detail~\cite{Col01}, the relationship between these and private SRFs still remains largely unexplored. Quantifying the power of private SRFs for encoding classical and quantum information is an important step in such an investigation. \textit{Note added in proof.} Recent independent results have established the optimal schemes for transmitting an SU(2) reference frame~\cite{Chi04,Bag04} and an $S_N$ reference ordering~\cite{Kor04} through the transmission of quantum systems.
The techniques used in these investigations are remarkably similar to those used to develop our optimal private classical communication schemes using a private shared RF. Specifically, the optimal $N$-qubit states used for transmitting a reference frame or reference ordering span many irreps and are entangled between D-full/D-free subsystem pairs within an irrep, as do the states used in our optimal private classical communication schemes. \begin{acknowledgments} T.R.\ is supported by the UK Engineering and Physical Sciences Research Council. R.W.S.\ is supported by the Natural Sciences and Engineering Research Council of Canada. The authors gratefully acknowledge I. Devetak, D.\ Gottesman, D. Leung, M.\ A.\ Nielsen, and M.\ Plenio for helpful discussions. \end{acknowledgments} \end{document}
\begin{document} \title[] {Sharp convergence rates of time discretization for stochastic time-fractional PDEs subject to additive space-time white noise } \author[]{Max Gunzburger} \address{Department of Scientific Computing, Florida State University, Tallahassee, FL 32306, USA.} \email {\href{mailto:[email protected]}{gunzburg{\it @}fsu.edu}} \author[]{Buyang Li} \address{Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong.} \email {\href{mailto:[email protected]}{buyang.li{\it @}polyu.edu.hk}} \author[]{Jilu Wang} \address{Department of Scientific Computing, Florida State University, Tallahassee, FL 32306, USA.} \curraddr{Department of Mathematics and Statistics, Mississippi State University, Mississippi State, 39762, USA.} \email {\href{mailto:[email protected]}{jwang13{\it @}fsu.edu}} \date{} \keywords{stochastic partial differential equation, time-fractional derivative, space-time white noise, error estimates} \subjclass[2010]{60H15, 60H35, 65M12} \begin{abstract} The stochastic time-fractional equation $\partial_t \psi -\Delta\partial_t^{1-\alpha} \psi = f + \dot W$ with space-time white noise $\dot W$ is discretized in time by a backward-Euler convolution quadrature for which the sharp-order error estimate \[ ({\mathbb E}\|\psi(\cdot,t_n)-\psi_n\|_{L^2(\mathcal{O})}^2)^{\frac{1}{2}}=O(\tau^{\frac{1}{2}-\frac{\alpha d}{4}}) \] is established for $\alpha\in(0,2/d)$, where $d$ denotes the spatial dimension, $\psi_n$ the approximate solution at the $n^{\rm th}$ time step, and $\mathbb{E}$ the expectation operator. In particular, the result indicates sharp convergence rates of numerical solutions for both stochastic subdiffusion and diffusion-wave problems in one spatial dimension. Numerical examples are presented to illustrate the theoretical analysis.
\\[5pt] \end{abstract} \maketitle \thispagestyle{headings} \markright{\small Mathematics of Computation (accepted) } \section{Introduction}\label{sec:intro} We are interested in the convergence of numerical methods for solving the stochastic time-fractional partial differential equation (PDE) problem \begin{align}\label{Frac-SPDE} \left\{\begin{aligned} &\partial_t \psi(x,t)-\Delta \partial_t^{1-\alpha}\psi(x,t) = f(x,t)+\dot W(x,t) && (x,t)\in \mathcal{O}\times {\mathbb R}_+ \\ &\psi(x,t)=0 && (x,t)\in \partial\mathcal{O}\times {\mathbb R}_+ \\ &\psi(x,0)=\psi_0(x) && x\in \mathcal{O}, \end{aligned}\right. \end{align} where $\mathcal{O}\subset\mathbb{R}^d$, $d\in\{1,2,3\}$, denotes a bounded region with Lipschitz boundary $\partial\mathcal{O}$, $f(x,t)$ a given deterministic source function, $\psi_0(x)$ a given deterministic initial condition, and $\dot W(x,t)$ a space-time white noise, i.e., the time derivative of a cylindrical Wiener process in $L^2(\mathcal{O})$. The underlying probability sample space for the stochastic noise is denoted by $\Omega$. The operator $\Delta: D(\Delta)\rightarrow L^2(\mathcal{O})$ denotes the Laplacian, defined on the domain $$ D(\Delta)=\{\phi\in H^1_0(\mathcal{O}): \Delta \phi\in L^2(\mathcal{O})\} , $$ and $\partial_t^{1-\alpha}\psi$ denotes the left-sided Caputo fractional time derivative of order $1-\alpha\in(-1,1)$, defined by (cf. \cite[pp. 91]{KST}) \begin{align} \label{Caputo} \partial_t^{1-\alpha} \psi(x,t) := \left\{ \begin{aligned} &\frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} \frac{\partial\psi(x,s)}{\partial s} {\rm d} s &&\mbox{if}\,\,\, \alpha\in(0,1], \\ &\frac{1}{\Gamma(\alpha-1)} \int_0^t (t-s)^{\alpha-2}\psi(x,s){\rm d} s &&\mbox{if}\,\,\, \alpha\in(1,2), \end{aligned} \right. \end{align} where $\Gamma(s):=\int_0^\infty t^{s-1}e^{-t}{\rm d} t$ denotes Euler's gamma function.
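For an order $\beta\in(0,1)$, a Caputo derivative of the form above can be approximated on a uniform grid by integrating the weakly singular kernel exactly against a piecewise-linear interpolant (the classical L1 scheme). The sketch below is purely illustrative and is not the convolution quadrature analyzed in this paper:

```python
from math import gamma

def caputo_l1(psi_vals, tau, beta):
    """L1 approximation of the order-beta (0 < beta < 1) Caputo derivative
    of psi at t_n = n*tau, given psi_vals = [psi(t_0), ..., psi(t_n)]."""
    n = len(psi_vals) - 1
    acc = 0.0
    for k in range(1, n + 1):
        # exact integral of (t_n - s)^{-beta} over [t_{k-1}, t_k],
        # up to the factor 1/(1-beta) absorbed into Gamma(2-beta) below
        w = ((n - k + 1) * tau) ** (1 - beta) - ((n - k) * tau) ** (1 - beta)
        acc += (psi_vals[k] - psi_vals[k - 1]) / tau * w
    return acc / gamma(2 - beta)

# For psi(t) = t the piecewise-linear interpolant is exact, so the scheme
# reproduces the closed form  D^beta t = t^{1-beta} / Gamma(2-beta).
beta, tau, n = 0.5, 0.01, 100
vals = [k * tau for k in range(n + 1)]
exact = (n * tau) ** (1 - beta) / gamma(2 - beta)
assert abs(caputo_l1(vals, tau, beta) - exact) < 1e-10
```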
Problem \eqref{Frac-SPDE} arises, e.g., when considering heat transfer in a material with thermal memory based on a modified Fick's law \cite{Choi-MacCamy-1989,Gurtin-PipKin-1968,MacCamy1977,Nunziato1971}, subject to stochastic noise \cite{ClementDaPrato,KP,MijenaNane}. For the model \eqref{Frac-SPDE}, both the fractional time derivative and the stochastic forcing result in the solution having low regularity. Hence, the numerical approximation of such problems and the corresponding numerical analysis are very challenging. By defining $\partial_t^{\alpha} \psi(x,t):=\partial_t^{\alpha-1}\partial_t \psi(x,t)$ for $\alpha\in(1,2)$ and using the identity \begin{align} \partial_t^{\alpha-1}\partial_t^{1-\alpha}\psi(x,t) = \left\{ \begin{aligned} &\psi(x,t)-\psi(x,0) &&\mbox{if}\,\,\,\alpha\in(0,1),\\ &\psi(x,t) &&\mbox{if}\,\,\,\alpha\in(1,2), \end{aligned} \right. \end{align} applying $\partial_t^{\alpha-1}$ to \eqref{Frac-SPDE} yields another formulation of \eqref{Frac-SPDE}: \begin{align} \partial_t^\alpha \psi(x,t) - \Delta\psi(x,t) = \left\{ \begin{aligned} &\partial_t^{\alpha-1}( f(x,t)+\dot W(x,t) ) -\Delta\psi(x,0) &&\mbox{if}\,\,\,\alpha\in(0,1),\\ & f(x,t)+\dot W(x,t) -\Delta\psi(x,0) &&\mbox{if}\,\,\,\alpha=1,\\ &\partial_t^{\alpha-1}(f(x,t) +\dot W(x,t) ) &&\mbox{if}\,\,\,\alpha\in(1,2) , \end{aligned} \right. \end{align} where the case $\alpha=1$ can be verified directly from \eqref{Frac-SPDE}. For the sake of clarity, we focus on only one of the equivalent problems, namely \eqref{Frac-SPDE}. The solution of \eqref{Frac-SPDE} can be decomposed into the solution of the deterministic problem \begin{align}\label{Deter-SPDE2} \left\{ \begin{array}{ll} \partial_t v(x,t) -\Delta \partial_t^{1-\alpha}v(x,t) = f (x,t) & (x,t)\in \mathcal{O}\times {\mathbb R}_+ \\ v(x,t)=0 & (x,t)\in \partial\mathcal{O}\times {\mathbb R}_+ \\ v(x,0)=\psi_0(x) & x\in \mathcal{O} \end{array}\right.
\end{align} plus the solution of the stochastic problem \begin{align}\label{Frac-SPDE2} \left\{\begin{aligned} &\partial_t u(x,t) -\Delta \partial_t^{1-\alpha} u(x,t) = \dot W(x,t) && (x,t)\in \mathcal{O}\times {\mathbb R}_+ \\ &u(x,t)=0 &&(x,t)\in \partial\mathcal{O}\times {\mathbb R}_+ \\ &u(x,0)=0 && x\in \mathcal{O} . \end{aligned}\right. \end{align} The stability and convergence of numerical solutions of \eqref{Deter-SPDE2} have been widely studied \cite{CCP2007,CuestaLubichPalencia:2006,LubichSloanThomee:1996,McLeanMustapha:2015,MustaphaSchotzau2014}. For example, if $f$ is smooth in time then numerical methods of up to order $6$ are available for approximating the solution of \eqref{Deter-SPDE2} and its equivalent formulations \cite{JinLazarovZhouSISC2016,JinLiZhou-BDF,JinLiZhou-CN,JLZ-MaxLp,LLSWZ2018,LWZ2017,LubichSloanThomee:1996,McLeanMustapha:2015}. In particular, the convolution quadrature generated by the backward Euler method yields a first-order convergence rate for solving \eqref{Deter-SPDE2}. In this work, we focus on the numerical approximation of the stochastic time-fractional PDE \eqref{Frac-SPDE2} with additive space-time white noise based on the convolution quadrature generated by the backward Euler method. In the case $\alpha\in(1,2)$ and $d=1$, rigorous error estimates for numerical solutions of \eqref{Frac-SPDE2} are carried out in \cite{KP} for the case of additive Gaussian noise in the general $Q$-Wiener process setting. For a space-time white noise, an almost optimal-order convergence rate for the time discretization error \begin{align}\label{Estimate-epsilon} ({\mathbb E}\|u(\cdot,t_n)-u_n\|_{L^2(\mathcal{O})}^2)^{\frac{1}{2}}=O(\tau^{\frac{1}{2}-\frac{\alpha }{4}-\epsilon}) \end{align} is proved in \cite[Remark 4.7, with $\rho=\alpha$]{KP} for arbitrarily small $\epsilon>0$, where $u(\cdot,t_n)$ and $u_n$ denote the mild solution and numerical solution of \eqref{Frac-SPDE2} at time $t_n$, respectively.
The estimate \eqref{Estimate-epsilon} is ``almost optimal'' in the sense that the optimal approximation theoretic error estimate for functions having the regularity of the solution $u$ does not have the $\epsilon$ term in the exponent. We are not aware of any rigorous numerical analyses in the case $\alpha\in(0,1)$. In the case $\alpha=1$, error estimates for time discretization of the stochastic PDE \eqref{Frac-SPDE2} are proved in \cite{GyongyNualart1997} and \cite{AllenNovoselZhang1998,DuZhang2002,Gyongy1999,Yan} for Rothe's method and the backward Euler method, respectively, with different spatial discretization methods. In particular, the convergence rate $O(\tau^{\frac14})$ in time was proved, which corresponds to $\epsilon=0$ in \eqref{Estimate-epsilon}. Some modern references on numerical analysis for stochastic PDEs in the case $\alpha=1$ include \cite{GWZ-2014,Jentzen-Kloeden-2016,Lord-Powell-Shardlow-2014,Zhang-Karniadakis-2017}. The error estimate \eqref{Estimate-epsilon} is consistent with the H\"older continuity of the solution $u\in C^{\gamma}([0,T];L^2(\Omega;L^2(\mathcal{O})))$ and the pathwise $\gamma$-H\"older continuity, with arbitrary $\gamma\in(0,\frac12-\frac{\alpha d}{4})$; see Appendix \ref{Append} or \cite[Corollary 1]{MijenaNane}. The aim of this paper is to prove, for general $d$-dimensional domains, the sharper-order convergence estimate \begin{align}\label{Estimate-e} ({\mathbb E}\|u(\cdot,t_n)-u_n\|_{L^2(\mathcal{O})}^2)^{\frac{1}{2}}=O(\tau^{\frac{1}{2}-\frac{\alpha d}{4}})\qquad \mbox{ $\alpha\in(0,2/d)$, \, $d\in\{1,2,3\}$} \end{align} for the time discretization of the stochastic PDE \eqref{Frac-SPDE2}, where by ``sharp'' we mean that we are able to obtain an approximation theoretic convergence rate that is consistent with the temporal regularity of the solution. This estimate is achieved via a more delicate analysis of the resolvent operator by using its Laplace transform representation.
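A rate of this form is typically confirmed numerically by computing errors on successively halved step sizes and estimating the observed order $\log(e_i/e_{i+1})/\log(\tau_i/\tau_{i+1})$; on synthetic errors $e(\tau)=\tau^r$ (an assumption made here purely for illustration) the estimator recovers $r$ exactly. A minimal sketch:

```python
from math import log

def observed_orders(taus, errors):
    """Observed convergence orders log(e_i/e_{i+1}) / log(tau_i/tau_{i+1})."""
    return [log(errors[i] / errors[i + 1]) / log(taus[i] / taus[i + 1])
            for i in range(len(taus) - 1)]

# Theoretical rate 1/2 - alpha*d/4 for the stochastic problem:
alpha, d = 0.5, 1
rate = 0.5 - alpha * d / 4            # = 0.375

taus = [2.0 ** (-k) for k in range(4, 9)]
errors = [t ** rate for t in taus]    # synthetic errors obeying the rate
assert all(abs(p - rate) < 1e-12 for p in observed_orders(taus, errors))
```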
Our result covers both subdiffusion and diffusion-wave cases in one-dimensional spatial domains and, for the subdiffusion case, multi-dimensional domains. The rest of the paper is organized as follows. In Section \ref{The main results}, we present the backward-Euler convolution quadrature scheme we use to determine approximate solutions of the stochastic time-fractional PDE \eqref{Frac-SPDE2} and then state our main theoretical results. In Section \ref{Proof of Theorem}, we derive an integral representation of the numerical solution for which we prove sharp convergence rate results for the approximate solution. Numerical results are given in Section \ref{Numerical examples} to illustrate the theoretical analyses. Throughout this paper, we denote by $C$, with/without a subscript, a generic constant independent of $n$ and $\tau$ which could be different at different occurrences. \section{The main results} \label{The main results} In this section, we describe the time discretization scheme we use for determining approximate solutions of the stochastic time-fractional PDE \eqref{Frac-SPDE} and state our main results about the convergence rate of the numerical solutions. \subsection{Mild solution of the stochastic PDE} Let $\phi_j(x)$, $j=1,2,\dots$, denote the $L^2$-norm normalized eigenfunctions of the Laplacian operator $-\Delta$ corresponding to the eigenvalues $\lambda_j$, $j=1,2,\dots,$ arranged in nondecreasing order. The cylindrical Wiener process on $L^2(\mathcal{O})$ can be represented as (cf. \cite[Proposition 4.7, with $Q=I$ and $U_1$ denoting some negative-order Sobolev space]{PratoZabczyk2014}) \begin{align}\label{white-noise} W(x,t) =\sum_{j=1}^\infty \phi_j(x) W_j(t) \end{align} with independent one-dimensional Wiener processes $W_j(t)$, $j=1,2,\dots$. In the case $\psi_0=0$, the solution of the deterministic problem \eqref{Deter-SPDE2} can be expressed by (via Laplace transform, cf.
\cite[(3.11) and line 4 of page 12]{LubichSloanThomee:1996}) \begin{align}\label{Deter-sol-repr} v(\cdot,t) = \int_0^t E(t-s) f(\cdot,s){\rm d} s , \end{align} where the operator $E(t):L^2(\mathcal{O})\rightarrow L^2(\mathcal{O})$ is given by \begin{equation}\label{eqn:EF} E(t) \phi:=\frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\kappa}}e^{zt} z^{\alpha-1} (z^\alpha-\Delta )^{-1}\phi\, {\rm d} z \quad \forall\, \phi\in L^2(\mathcal{O}) , \end{equation} with integration over a contour $\Gamma_{\theta,\kappa}$ on the complex plane, \begin{align}\label{contour-Gamma} \Gamma_{\theta,\kappa} &\, =\left\{z\in \mathbb{C}: |z|=\kappa , |\arg z|\le \theta\right\}\cup \{z\in \mathbb{C}: z=\rho e^{\pm {\rm i}\theta}, \rho\ge \kappa \} \nonumber\\ &=: \Gamma_{\theta,\kappa}^\kappa+\Gamma_{\theta,\kappa}^\theta . \end{align} The angle $\theta$ above can be any angle such that $\pi/2<\theta<\min(\pi,\pi/\alpha)$ so that, for all $z$ to the right of $\Gamma_{\theta,\kappa}$ in the complex plane, $z^\alpha\in\Sigma_{\alpha\theta}:=\{z\in\mathbb{C}\backslash \{0\}:|\arg z|\le\alpha\theta \}$ with $\alpha\theta<\pi$. Correspondingly, the mild solution of \eqref{Frac-SPDE2} is defined as (cf. \cite{MijenaNane} and \cite[Proposition 2.7]{KP}) \begin{align} u(\cdot,t)&= \int_0^t E(t-s){\rm d} W(\cdot,s) \label{Mild-sol-} \\ & = \sum_{j=1}^\infty \int_0^t E(t-s)\phi_j{\rm d} W_j(s) . \label{Mild-sol} \end{align} This mild solution is well defined in $C^{\gamma}([0,T];L^2(\Omega;L^2(\mathcal{O})))$ for arbitrary $\gamma\in(0,\frac12-\frac{\alpha d}{4})$; see Appendix \ref{Append}. \subsection{Convolution quadrature} Let $\{t_n=n\tau\}_{n=0}^N$ denote a uniform partition of the interval $[0,T]$ with a time step size $\tau = T/N$, and let $u^n=u(x,t_n)$. Under the zero initial condition, the Caputo fractional time derivative $\partial_t^{1-\alpha} u(x,t_n)$ can be discretized by the backward-Euler convolution quadrature \cite{Lubich:1986} (also known as the Gr\"unwald-Letnikov approximation, cf.
\cite{CuestaLubichPalencia:2006}) \begin{align} \bar\partial_\tau^{1-\alpha} u_n =\frac{1}{\tau^{1-\alpha} } \sum_{j=0}^n b_{n-j}u_j ,\quad n=0,1,2,\dots,N, \end{align} where $b_j$, $j=0,1,2,\dots,N$, are the coefficients in the power series expansion \begin{align} (1-\zeta)^{1-\alpha}=\sum_{j=0}^\infty b_{j}\zeta^j . \end{align} Here, $1-\zeta$ is the characteristic function of the backward-Euler method and we set \begin{align}\label{definition-d} \delta_\tau(\zeta)= \frac{1-\zeta}{\tau} \quad \textrm{for }\zeta\in {\mathbb C}\backslash[1,\infty) . \end{align} For any given sequence $\{v_n\}_{n=0}^\infty\in \ell^2(L^2(\mathcal{O}))$, we denote \begin{align}\label{gener-func} \widetilde v(\zeta)=\sum_{n=0}^\infty v_n\zeta^n \quad \textrm{for }\zeta\in {\mathbb D} \end{align} which is referred to as the generating function of the sequence $\{v_n\}_{n=0}^\infty$ (see \cite{LubichSloanThomee:1996}). Clearly, $\widetilde v$ is an $L^2(\mathcal{O})$-valued analytic function in the unit disk ${\mathbb D}$ and the limit $$ \widetilde v(e^{i\theta})=\lim_{r\rightarrow 1_-}\widetilde v(re^{i\theta}) $$ exists in $L^2(0,2\pi;L^2(\mathcal{O}))$. Then, we have \begin{equation}\label{generate-dv} \begin{aligned} \sum_{n=0}^\infty (\bar\partial_\tau^{1-\alpha} v_n)\zeta^n &=\sum_{n=0}^\infty \frac{1}{\tau^{1-\alpha}} \sum_{j=0}^n b_{n-j}v_j\zeta^n \\&=(\delta_\tau(\zeta))^{1-\alpha}\sum_{j=0}^\infty v_j\zeta^j =(\delta_\tau(\zeta))^{1-\alpha}\widetilde v(\zeta) . \end{aligned} \end{equation} \subsection{Time-stepping scheme and main theorem} With the notations introduced in the last subsection, we discretize the fractional-order derivative $\partial_t^{1-\alpha}$ in \eqref{Frac-SPDE2} by using convolution quadrature in time to obtain \begin{equation}\label{CQ-scheme2} \frac{u_n -u_{n-1}}{\tau} -\Delta \bar\partial_\tau^{1-\alpha} u_n = \frac{W(\cdot,t_n)-W(\cdot,t_{n-1})}{\tau} .
\end{equation} Equivalently, $u_n$ can be expressed as \begin{equation}\label{CQ-scheme-} \begin{aligned} u_n =& \big({\rm Id}- \tau^{\alpha} b_0 \Delta \big)^{-1} u_{n-1}+ \tau^{\alpha} \sum_{j=0}^{n-1} b_{n-j}\Delta\big({\rm Id}- \tau^{\alpha} b_0 \Delta \big)^{-1} u_j \\ &\qquad\qquad +\big({\rm Id}- \tau^{\alpha} b_0 \Delta \big)^{-1} \big(W(\cdot,t_n)-W(\cdot,t_{n-1}) \big) \\ =& \big({\rm Id}- \tau^{\alpha} b_0 \Delta \big)^{-1} u_{n-1}+ \tau^{\alpha} \sum_{j=0}^{n-1} b_{n-j}\Delta\big({\rm Id}- \tau^{\alpha} b_0 \Delta \big)^{-1} u_j \\ &\qquad\qquad+\sum_{j=1}^\infty \big(W_j(t_n)-W_j(t_{n-1}) \big) \big(1+ \tau^{\alpha} b_0 \lambda_j \big)^{-1} \phi_j , \end{aligned} \end{equation} where ${\rm Id}$ denotes the identity operator. The main result of this paper is the following theorem. \begin{theorem}\label{MainTHM} Let $\alpha\in(0,2/d)$ with $d\in\{1,2,3\}$. Then, for each $n=1,2,\dots,N,$ the numerical solution $u_n$ given by \eqref{CQ-scheme2} is well defined in $L^2(\Omega;L^2(\mathcal{O}))$ and converges to the mild solution $u(\cdot,t_n)$ with sharp order of convergence, i.e., we have \begin{align}\label{main-estimate} \max_{1\le n\le N} \, \bigg({\mathbb E}\,\|u(\cdot,t_n)-u_n\|_{L^2(\mathcal{O})}^2\bigg)^{\frac12} \le C\tau^{\frac12-\frac{\alpha d}{4}} , \end{align} where ${\mathbb E}$ denotes the expectation operator and the constant $C$ is independent of $T$. \end{theorem} \section{Proof of Theorem \ref{MainTHM}} \label{Proof of Theorem} \subsection{The numerical solution in $L^2(\Omega;L^2(\mathcal{O}))$} In this subsection, we show that the numerical solution is well defined in $L^2(\Omega;L^2(\mathcal{O}))$. To this end, we use the following estimate for the eigenvalues of the Laplacian operator. For simplicity of notation, we denote by $(\cdot,\cdot)$ and $\|\cdot\|$ the inner product and norm of $L^2(\mathcal{O})$, respectively.
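The eigenvalue lower bound stated next can be sanity-checked in the simplest setting: for $\mathcal{O}=(0,\pi)\subset\mathbb{R}$ the Dirichlet eigenvalues are $\lambda_j=j^2$, while the bound (with $d=1$, $B_1=2$, hence $C_1=\pi^2$) reads $\lambda_j\ge \pi^2 j^2/(3|\mathcal{O}|^2)=j^2/3$. A small sketch (illustrative only):

```python
from math import pi

def eigenvalue_lower_bound(j, d, volume):
    """Lower bound C_d * d/(d+2) * j^(2/d) * |O|^(-2/d) on the j-th
    Dirichlet eigenvalue; B_d is the volume of the unit d-ball."""
    B = {1: 2.0, 2: pi, 3: 4.0 * pi / 3.0}[d]
    C_d = (2.0 * pi) ** 2 * B ** (-2.0 / d)
    return C_d * d / (d + 2) * j ** (2.0 / d) * volume ** (-2.0 / d)

# 1-D check on O = (0, pi), where the exact eigenvalues are j^2:
for j in range(1, 200):
    assert eigenvalue_lower_bound(j, 1, pi) <= j ** 2
```

The constant is not sharp on the interval (the bound gives $j^2/3$ versus the exact $j^2$), but the $j^{2/d}$ growth is all that the subsequent summability arguments require.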
\begin{lemma}[\cite{Laptev,LY}]\label{ineq-eigen} Let $\mathcal{O}$ denote a bounded domain in $\mathbb{R}^d$, $d\in\{1,2,3\}$. Suppose $\lambda_j$ denotes the $j^{\rm th}$ eigenvalue of the Dirichlet boundary problem for the Laplacian operator $-\Delta$ in $\mathcal{O}$. With $|\mathcal{O}|$ denoting the volume of $\mathcal{O}$, we have that \begin{align} \lambda_j\ge \frac{C_d d}{d+2} j^{2/d} |\mathcal{O}|^{-2/d} \end{align} for all $j\ge 1$, where $C_d=(2\pi)^2 B_d^{-2/d}$ and $B_d$ denotes the volume of the unit $d$-dimensional ball. \end{lemma} \begin{lemma}\label{2-order-M} Under the assumptions of Theorem \ref{MainTHM}, the numerical solution given by \eqref{CQ-scheme-} is well defined in $L^2(\Omega;L^2(\mathcal{O}))$. \end{lemma} \noindent{\it Proof.}$\,\,$ Clearly, if $u_j\in L^2(\Omega;L^2(\mathcal{O}))$ for $j=0,\dots,n-1$, then \begin{align}\label{numer-sol-1} \big({\rm Id}- \tau^{\alpha} b_0 \Delta \big)^{-1} u_{n-1}+ \tau^{\alpha} \sum_{j=0}^{n-1} b_{n-j}\Delta\big({\rm Id}- \tau^{\alpha} b_0 \Delta \big)^{-1} u_j\in L^2(\Omega;L^2(\mathcal{O})) . \end{align} In view of \eqref{CQ-scheme-}, we only need to prove \begin{align}\label{numer-sol-def} \sum_{j=1}^\infty \big(W_j(t_n)-W_j(t_{n-1}) \big) \big(1+ \tau^{\alpha} b_0 \lambda_j \big)^{-1} \phi_j \in L^2(\Omega;L^2(\mathcal{O})) . \end{align} In fact, we have \begin{align*} &{\mathbb E}\bigg\|\sum_{j=\ell}^{\ell+m} \big(W_j(t_n)-W_j(t_{n-1}) \big) \big(1+ \tau^{\alpha} b_0 \lambda_j \big)^{-1} \phi_j \bigg\|^2\\ &={\mathbb E} \sum_{j=\ell}^{\ell+m} |W_j(t_n)-W_j(t_{n-1}) |^2 \big(1+ \tau^{\alpha} b_0 \lambda_j \big)^{-2} = \sum_{j=\ell}^{\ell+m} \tau \big(1+ \tau^{\alpha} b_0 \lambda _j\big)^{-2} \\ &\le Cb_0^{-2}\tau^{1-2\alpha} \sum_{j=\ell}^{\ell+m} j^{-4/d} \rightarrow 0\quad\mbox{as}\,\,\,\ell\rightarrow\infty .
\end{align*} Hence, for a fixed time step $\tau$, $$\sum_{j=1}^\ell \big(W_j(t_n)-W_j(t_{n-1}) \big) \big(1+ \tau^{\alpha} b_0 \lambda_j \big)^{-1} \phi_j , \quad \ell=1,2,\dots$$ is a Cauchy sequence in $L^2(\Omega;L^2(\mathcal{O}))$. Consequently, \eqref{numer-sol-def} is proved. In view of \eqref{CQ-scheme-} and \eqref{numer-sol-1}-\eqref{numer-sol-def}, the numerical solution $u_n$ is well defined in $L^2(\Omega;L^2(\mathcal{O}))$. \qed \subsection{A technical lemma} To prove the error estimate in Theorem \ref{MainTHM}, we need the following technical lemma. \begin{lemma}\label{ineq-sum} Let $\alpha\ge 0$ and $d\in\{1,2,3\}$. Then there exist constants $C$ and $C_\varphi$ such that \begin{align} & \sum_{j=1}^\infty \bigg(\frac{r^{\alpha}}{r^\alpha + \lambda_j }\bigg)^2 \le Cr^{\alpha d/2} \quad \forall\, r>0, \label{ineq-sum2} \\ &\bigg|\frac{1}{z + \lambda_j }\bigg| \le \frac{C_\varphi }{|z| + \lambda_j } \qquad \qquad \forall\, z \in \Sigma_{\varphi}\,\,\,\mbox{with}\,\,\, \varphi\in(0,\pi) , \label{ineq-sum-} \end{align} where the constant $C$ depends on the dimension $d$ and the volume of the domain $\mathcal{O}$, and $C_\varphi$ depends on the angle $\varphi\in(0,\pi)$. \end{lemma} \begin{proof} Clearly, Lemma \ref{ineq-eigen} implies $\lambda_j \ge C j^{2/d}$, with a constant $C$ depending on the dimension $d$ and the volume of the domain $\mathcal{O}$. First, if $0<r< 1$, \begin{align} & \sum_{j=1}^\infty \bigg(\frac{r^{\alpha}}{r^\alpha + \lambda_j }\bigg)^2 \le \sum_{j=1}^\infty \frac{r^{2\alpha}}{Cj^{4/d} } \le Cr^{2\alpha} \le Cr^{\alpha d/2} ,\quad\mbox{($d/2\le 2$ for $d=1,2,3$)}.
\end{align} Second, if $r\ge 1$, by setting $M=\lfloor r^{\alpha d/2} \rfloor \ge 1$ to be the largest integer that does not exceed $r^{\alpha d/2}$, we have \begin{align*} \sum_{j=1}^\infty \bigg(\frac{r^{\alpha}}{r^\alpha + \lambda_j }\bigg)^2 &\le \sum_{j=1}^\infty \bigg(\frac{r^{\alpha}}{r^\alpha + Cj^{2/d} }\bigg)^2 \\ &= \sum_{j=1}^{M+1} \bigg(\frac{r^{\alpha}}{r^\alpha + Cj^{2/d} }\bigg)^2 +\sum_{j=M+2}^{\infty} \bigg(\frac{r^{\alpha}}{r^\alpha + Cj^{2/d} }\bigg)^2 =:I_1+I_2. \end{align*} It is easy to see that $I_1\le M+1\le 2M\le 2r^{\alpha d/2}$ and \begin{align*} I_2 &\le \int_{r^{\alpha d/2}}^\infty \bigg(\frac{r^\alpha}{r^\alpha+Cs^{2/d}} \bigg)^2 {\rm d} s = C^{-d/2}r^{\alpha d/2}\int_{C^{d/2}}^\infty \bigg(\frac{1}{1+\xi^{2/d}} \bigg)^2 {\rm d}\xi \le Cr^{\alpha d/2}, \end{align*} where the equality follows by changing the variable $s=C^{-d/2}r^{\alpha d/2}\xi$. This proves \eqref{ineq-sum2} in the case $r\ge 1$. Finally, for the point $\xi=-\lambda_j+0\textrm{i}$ in the complex plane, we have $|\xi|=\lambda_j$ and $|z-\xi|=|z+\lambda_j|$. By looking at the triangle with three vertices $z$, $0$, and $\xi$ with interior angles $\omega_z$, $\omega_0$, and $\omega_{\xi}$ at the three vertices, respectively, we have $$ \frac{|z-\xi|}{\sin(\omega_0)} =\frac{|z|}{\sin(\omega_{\xi})} =\frac{\lambda_j}{\sin(\omega_z)} . $$ If $\omega_0\ge \pi/2$, then $|z-\xi|$ would be the length of the longest side of the triangle, i.e., $$ |z-\xi|\ge |z|\quad\mbox{and}\quad |z-\xi|\ge \lambda_j $$ which immediately implies $$ |z-\xi|\ge \frac{1}{2}(|z|+ \lambda_j) . $$ If $\omega_0\le \pi/2$, then the angle condition $|{\rm arg}(z)|<\varphi$ implies $\omega_0>\pi-\varphi$. Hence, we have $$ |z-\xi|=\frac{|z|\sin(\omega_0)}{\sin(\omega_{\xi})} \ge |z| \sin(\varphi) \quad \mbox{and} \quad |z-\xi|=\frac{\lambda_j\sin(\omega_0)}{\sin(\omega_{z})} \ge \lambda_j \sin(\varphi) $$ which immediately implies $$ |z-\xi|\ge \frac{\sin(\varphi)}{2}(|z|+ \lambda_j) . 
$$ In either case, we have \eqref{ineq-sum-}. This completes the proof of Lemma \ref{ineq-sum}. \end{proof} \subsection{Solution representations} In this subsection, we derive a representation of the semidiscrete solution $u_n$ by means of the discrete analogue of the Laplace transform and generating function. Let $\Gamma_{\theta,\kappa}^{(\tau)}$ denote the truncated piece of the contour $\Gamma_{\theta,\kappa}$ defined by \begin{align}\label{trunc-contour} \Gamma_{\theta,\kappa}^{(\tau)}:= \{z\in\Gamma_{\theta,\kappa}\,\,:\,\,|\textrm{Im}(z)|\leq \pi/\tau\} . \end{align} For $\rho\in(0,1)$, let $\Gamma^{(\tau)}_\rho$ denote the segment of a vertical line defined by \begin{align}\label{contour-rho} \Gamma^{(\tau)}_\rho:= \{z=-\ln (\rho)/\tau+\textrm{i} y\,\,:\,\,y\in\mathbb{R} \textrm{ and } |y|\leq \pi/\tau \}. \end{align} The following technical lemma will be used in this and the next subsection, where ${\rm arccot}(\cdot)$ denotes the inverse of the cotangent function ${\rm cot}: (0,\pi)\rightarrow\mathbb{R}$. \begin{lemma} \label{ineq-1} Let $\alpha\ge 0$ and $\theta\in\big(\frac{\pi}{2},{\rm arccot}(-\frac{2}{\pi})\big)$ be given, and let $\rho\in(0,1)$ be fixed, with $\delta_\tau(\zeta)$ defined in \eqref{definition-d}. Then, both $\delta_\tau(e^{-z\tau})$ and $(\delta_\tau(e^{-z\tau})^\alpha-\Delta)^{-1}$ are analytic with respect to $z$ in the region enclosed by $$ \mbox{$\Gamma^{(\tau)}_\rho$,\,\,\, $\Gamma^{(\tau)}_{\theta,\kappa}$,\,\,\, and the two lines $\mathbb{R}\pm\textrm{i}\pi/\tau$} \quad\mbox{whenever}\quad 0<\kappa\le \min(1/T,-\ln(\rho)/\tau).
$$ Furthermore, we have the following estimates: \begin{align} &\delta_\tau(e^{-z\tau})\in\Sigma_{\theta}&\forall\, z\in\Gamma_{\theta,\kappa}^{(\tau)} \label{angle-delta} \\ &C_0|z|\le |\delta_\tau(e^{-\tau z})|\le C_1|z| &\forall\, z\in\Gamma_{\theta,\kappa}^{(\tau)} \label{z-delta} \\ &|\delta_\tau(e^{-\tau z})-z|\le C\tau|z|^2 &\forall\, z\in\Gamma_{\theta,\kappa}^{(\tau)} \\ &|\delta_\tau(e^{-\tau z})^\alpha-z^\alpha|\le C\tau|z|^{\alpha+1} &\forall\, z\in\Gamma_{\theta,\kappa}^{(\tau)}, \label{zalpha-delta} \end{align} where the constants $C_0$, $C_1$, and $C$ are independent of $\tau$ and $\kappa\in(0,\min(\frac{1}{T},-\frac{\ln(\rho)}{\tau}))$. \end{lemma} \begin{proof} Clearly, \eqref{angle-delta} is a consequence of the following two inequalities: \begin{align} &0\le {\rm arg}\bigg(\frac{1-e^{-z\tau}}{\tau}\bigg)\le {\rm arg}(z) &&\mbox{if}\,\,\, 0\le {\rm arg}(z)\le \theta , \label{argz-0} \\ &-{\rm arg}(z)\le {\rm arg}\bigg(\frac{1-e^{-z\tau}}{\tau}\bigg)\le 0 &&\mbox{if}\,\,\, -\theta\le {\rm arg}(z)\le 0 , \label{argz-0-} \end{align} which can be proved in the following way when $\frac{\pi}{2}\le \theta\le {\rm arccot}\big(-\frac{2}{\pi}\big)$. If ${\rm arg}(z)=\varphi\in[0,\theta] $ and $0\le {\rm Im}(z)\le \pi/\tau$ (thus $0\le \tau |z|\sin(\varphi)\le \pi$), then it is easy to see that ${\rm arg}\big(\frac{1-e^{-\tau z}}{\tau}\big)\ge 0$ and \begin{align*} \cot \bigg({\rm arg}\bigg(\frac{1-e^{-\tau z}}{\tau}\bigg)\bigg) &=\frac{1-e^{-\tau |z|\cos(\varphi)}\cos(\tau |z|\sin(\varphi)) }{e^{-\tau |z|\cos(\varphi)}\sin (\tau |z|\sin(\varphi)) } \\ &=\frac{e^{\tau |z|\cos(\varphi) }-\cos(\tau |z|\sin(\varphi)) }{\sin (\tau |z|\sin(\varphi)) } \\ &\ge \frac{1+\tau |z|\cos(\varphi) -\cos(\tau |z|\sin(\varphi)) }{\sin (\tau |z|\sin(\varphi)) } \ \quad\mbox{(Taylor's expansion)} \\ &= \frac{1+\omega\cot(\varphi) -\cos(\omega) }{\sin (\omega) } \ \ \quad\quad \mbox{(set $\omega=\tau |z|\sin(\varphi)\in[0,\pi)$)} .
\end{align*} We shall prove $\cot\big({\rm arg}\big(\frac{1-e^{-\tau z}}{\tau}\big)\big)\ge \cot(\varphi)$ so that $0\le {\rm arg}\big(\frac{1-e^{-\tau z}}{\tau}\big)\le \varphi={\rm arg}(z)$. To this end, we consider the function $$ g(\omega):=1+\omega\cot(\varphi) -\cos(\omega) -\sin (\omega)\cot(\varphi) ,\quad\omega\in[0,\pi] , $$ whose derivative is $$ g'(\omega):=\frac{\cos(\varphi) -\cos(\omega+\varphi)}{\sin(\varphi)} . $$ In the case $\varphi\in(0,\frac{\pi}{2}]$, $g'(\omega)\ge 0$ for $\omega\in[0,\pi]$. In the case $\varphi\in(\frac{\pi}{2},\pi)$, $g'(\omega)\ge 0$ for $\omega\in[0,2\pi-2\varphi]$ and $g'(\omega)\le 0$ for $\omega\in[2\pi-2\varphi,\pi]$. In either case, the function $g(\omega)$ achieves its minimum value at one of the two end points $\omega=0$ and $\omega=\pi$, with $$ g(0)=0\quad\mbox{and}\quad g(\pi)=2+\pi\cot(\varphi). $$ If $\frac{\pi}{2}\le \theta\le {\rm arccot}\big(-\frac{2}{\pi}\big)$, we then have $g(\pi)\ge 0$. Consequently, $g(\omega)\ge 0$ for all $\omega\in[0,\pi]$ and $\cot\big({\rm arg}\big(\frac{1-e^{-\tau z}}{\tau}\big)\big)\ge \cot(\varphi)$ which implies $$ 0\le {\rm arg}\bigg(\frac{1-e^{-\tau z}}{\tau}\bigg)\le \varphi={\rm arg}(z). $$ This proves \eqref{argz-0}. The inequality \eqref{argz-0-} can be proved in the same way. This completes the proof of \eqref{angle-delta}, which further implies that $\delta_\tau(e^{-z\tau})$ and $(\delta_\tau(e^{-z\tau})^\alpha-\Delta)^{-1}$ are analytic with respect to $z$ in the region enclosed by $$ \mbox{$\Gamma^{(\tau)}_\rho$,\,\,\, $\Gamma^{(\tau)}_{\theta,\kappa}$\,\,\, and the two lines $\mathbb{R}\pm\textrm{i}\pi/\tau$} \quad\mbox{whenever}\quad 0<\kappa\le -\ln(\rho)/\tau. $$ The estimates \eqref{z-delta}-\eqref{zalpha-delta} are simple consequences of Taylor's theorem. \end{proof} \begin{remark} The condition $\kappa\le \frac{1}{T}$ is not needed in the proof of this lemma, but is needed in the estimates of the next subsection such as \eqref{esti-f}.
\end{remark} To derive the representation of the numerical solution $u_n$, we introduce some notation. Let $\bar\partial_\tau W$ be defined by \begin{align}\label{pw-const} &\bar\partial_\tau W(\cdot,t_0):=0 \\ &\bar\partial_\tau W(\cdot,t):=\frac{W(\cdot,t_n)-W(\cdot,t_{n-1})}{\tau}&&\mbox{for}\,\,\, t\in(t_{n-1},t_n],\,\,\, n=1,2,\dots,N\\ &\bar\partial_\tau W(\cdot,t):=0 &&\mbox{for}\,\,\, t> t_{N} , \end{align} where setting $\bar\partial_\tau W(\cdot,t)=0 $ for $t>t_N$ does not affect the value of $u_n$, $n=1,2,\dots,N$, upon solving \eqref{CQ-scheme2}. Similarly, we define \begin{align}\label{pw-const-} &\bar\partial_\tau W_j(t_0):=0 \\ &\bar\partial_\tau W_j(t):=\frac{W_j(t_n)-W_j(t_{n-1})}{\tau}&&\mbox{for}\,\,\, t\in(t_{n-1},t_n],\,\,\, n=1,2,\dots,N\\ &\bar\partial_\tau W_j(t):=0 && \mbox{for}\,\,\,t> t_{N} . \end{align} With these definitions, only finitely many terms in the sequence $\bar\partial_\tau W(\cdot,t_n)$, $n=0,1,2,\dots$, are nonzero. Consequently, the generating function $$ \widetilde{\bar\partial_\tau W}(\cdot,\zeta)=\sum_{n=0}^\infty \bar\partial_\tau W(\cdot,t_n)\zeta^n $$ is well defined (a polynomial in $\zeta$). Then, we have the following result.
\begin{proposition}\label{un-rep} For the time-stepping scheme \eqref{CQ-scheme2}, the semidiscrete solution $u_n$ can be represented by \begin{align}\label{u-semi} u_n &= \int_0^{t_n} E_\tau(t_n-s)\bar\partial_\tau W(\cdot,s){\rm d} s \\ &= \sum_{j=1}^\infty \int_{0}^{t_n} E_\tau(t_n-s) \phi_j \bar\partial_\tau W_j(s) {\rm d} s , \end{align} where the operator $E_\tau(\cdot)$ is given by \begin{equation}\label{eqn:EF2} E_\tau(t) \phi:=\frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\kappa}^{(\tau)}}e^{zt} \frac{z\tau }{e^{z\tau}-1}\delta_\tau(e^{-z\tau})^{\alpha-1}(\delta_\tau(e^{-z\tau})^\alpha -\Delta )^{-1}\phi\, {\rm d} z \quad \forall\, \phi\in L^2(\mathcal{O}) \end{equation} with integration over the truncated contour $\Gamma_{\theta,\kappa}^{(\tau)}$ defined in \eqref{trunc-contour}, oriented with increasing imaginary parts, with the parameters $\kappa$ and $\theta$ satisfying the conditions of Lemma \ref{ineq-1}. \end{proposition} \begin{proof} In view of definition \eqref{gener-func} and the identity \eqref{generate-dv}, multiplying \eqref{CQ-scheme2} by $\zeta^n$ and summing up the results over $n=0,1,2,\dots$ yield \begin{align} \delta_\tau(\zeta)\widetilde u(\zeta)-\delta_\tau(\zeta)^{1-\alpha}\Delta\widetilde u(\zeta) =\widetilde{\bar\partial_\tau W}(\cdot,\zeta). \end{align} Then, \begin{align}\label{tilde-u} &\widetilde u(\zeta) = \delta_\tau(\zeta)^{\alpha-1}(\delta_\tau(\zeta)^\alpha -\Delta )^{-1}\widetilde{\bar\partial_\tau W}(\cdot,\zeta). \end{align} The function $\widetilde u(\zeta)$ defined in \eqref{tilde-u} is analytic with respect to $\zeta$ in a neighborhood of the origin.
Cauchy's integral formula then implies that, for $\rho\in(0,1)$, \begin{align*} u_n = \frac{1}{2\pi\textrm{i}}\int_{|\zeta|=\rho} \zeta^{-n-1}\widetilde u(\zeta) {\rm d}\zeta = \frac{\tau}{2\pi\textrm{i}}\int_{\Gamma^{(\tau)}_\rho} e^{zt_n}\widetilde u (e^{-z\tau}){\rm d} z, \end{align*} where the second equality is obtained by the change of variables $\zeta=e^{-z\tau}$, with the contour $\Gamma^{(\tau)}_\rho$ defined in \eqref{contour-rho}. From Lemma \ref{ineq-1}, we see that both $\delta_\tau(e^{-z\tau})$ and $(\delta_\tau(e^{-z\tau})^\alpha-\Delta)^{-1}$ are analytic with respect to $z$ in the region $\Sigma\subset\mathbb{C}$ enclosed by $\Gamma^{(\tau)}_\rho$, $\Gamma^{(\tau)}_{\theta,\kappa}$, and the two lines $\mathbb{R}\pm\textrm{i}\pi/\tau$. Thus, $e^{zt_n}\widetilde u(e^{-z\tau})$ is analytic with respect to $z\in\Sigma$. Because the values of $e^{zt_n}\widetilde u(e^{-z\tau})$ on the two lines $\mathbb{R}\pm\textrm{i}\pi/\tau$ coincide, it follows (by applying Cauchy's integral theorem) that \begin{align}\label{un-rep-gammatau} u_n &= \frac{\tau}{2\pi\textrm{i}}\int_{\Gamma^{(\tau)}_\rho} e^{zt_n}\widetilde u (e^{-z\tau}){\rm d} z \nonumber\\ &= \frac{\tau}{2\pi\textrm{i}}\int_{\Gamma^{(\tau)}_{\theta,\kappa}} e^{zt_n}\widetilde u (e^{-z\tau}){\rm d} z +\frac{\tau}{2\pi\textrm{i}}\int_{\mathbb{R}+\frac{\textrm{i}\pi}{\tau}} e^{zt_n}\widetilde u (e^{-z\tau}){\rm d} z \nonumber \\ &\qquad\qquad\qquad\qquad\qquad\quad\,\,\, -\frac{\tau}{2\pi\textrm{i}}\int_{\mathbb{R}-\frac{\textrm{i}\pi}{\tau}} e^{zt_n}\widetilde u (e^{-z\tau}){\rm d} z \nonumber \\ &= \frac{\tau}{2\pi\textrm{i}}\int_{\Gamma^{(\tau)}_{\theta,\kappa}} e^{zt_n}\widetilde u (e^{-z\tau}){\rm d} z \nonumber \\ &= \frac{\tau}{2\pi i}\int_{\Gamma_{\theta,\kappa}^{(\tau)}} e^{zt_n}\delta_\tau(e^{-z\tau})^{\alpha-1}(\delta_\tau(e^{-z\tau})^\alpha -\Delta )^{-1} \widetilde{\bar\partial_\tau W}(\cdot,e^{-z\tau}) {\rm d} z \nonumber \\ &= \frac{\tau}{2\pi
i}\int_{\Gamma_{\theta,\kappa}^{(\tau)}} e^{zt_n}\delta_\tau(e^{-z\tau})^{\alpha-1}(\delta_\tau(e^{-z\tau})^\alpha -\Delta )^{-1} \frac{z}{e^{z\tau}-1} \widehat {\bar\partial_\tau W}(\cdot,z) {\rm d} z , \end{align} where we have substituted \eqref{tilde-u} into the above equality and used the following (straightforward to check) identity in the last step: $$ \widetilde{\bar\partial_\tau W}(\cdot,e^{-z\tau}) =\frac{z}{e^{z\tau}-1} \widehat {\bar\partial_\tau W}(\cdot,z ) $$ with $\widehat {\bar\partial_\tau W}$ denoting the Laplace transform (in time) of the piecewise constant function $\bar\partial_\tau W$. By the Laplace transform convolution rule \begin{align}\label{LT-rule} {\mathcal L}^{-1}(\widehat f\, \widehat g)(t) =\int_0^t{\mathcal L}^{-1}(\widehat f \, )(t-s){\mathcal L}^{-1}(\widehat g)(s){\rm d} s , \end{align} one can derive \eqref{u-semi} from \eqref{un-rep-gammatau}. The proof of Proposition \ref{un-rep} is complete. \end{proof} \subsection{Error estimate} In this subsection, we derive an error estimate for the numerical scheme \eqref{CQ-scheme2}. The following lemma is concerned with the difference between the kernels of \eqref{eqn:EF} and \eqref{eqn:EF2}. It will be used in the proof of Theorem \ref{MainTHM}. \begin{lemma}\label{ineq-f} Let $\alpha\in(0,2/d)$ be given and let $\delta_\tau(\zeta)$ be defined as in \eqref{definition-d} with the parameters $\kappa$ and $\theta$ satisfying the conditions of Lemma \ref{ineq-1}. Then, we have \begin{align*} &\bigg|z^{\alpha-1}(z^\alpha+\lambda_j)^{-1} -\frac{z\tau }{e^{z\tau}-1} \delta_\tau(e^{-\tau z})^{\alpha-1}(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1}\bigg| \le \frac{C\tau|z|^\alpha }{|z|^\alpha+\lambda_j } , \ \forall\, z\in \Gamma_{\theta,\kappa}^{(\tau)}.
\end{align*} \end{lemma} \begin{proof} By the triangle inequality and Lemma \ref{ineq-1}, we have \begin{align*} &|z^{\alpha-1}(z^\alpha+\lambda_j)^{-1} - \frac{z\tau }{e^{z\tau}-1} \delta_\tau(e^{-\tau z})^{\alpha-1}(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1}| \\ &\quad\le \bigg| \frac{e^{z\tau}-1-z\tau }{e^{z\tau}-1} \bigg| |z^{\alpha-1}(z^\alpha+\lambda_j)^{-1}| \\ &\qquad + \bigg|\frac{z\tau }{e^{z\tau}-1}\bigg| |z|^{\alpha-1}|(z^\alpha+\lambda_j)^{-1}-(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1} | \\ &\qquad + \bigg|\frac{z\tau }{e^{z\tau}-1}\bigg||z^{\alpha-1}-\delta_\tau(e^{-\tau z})^{\alpha-1}| |(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1}| =: \mathcal{J}_1 + \mathcal{J}_2 + \mathcal{J}_3 , \end{align*} where \begin{align*} \mathcal{J}_1 &\le C|z\tau| |z^{\alpha-1}(z^\alpha+\lambda_j)^{-1}| \quad\mbox{\big(using the Taylor expansion of $\big| \frac{e^{z\tau}-1-z\tau }{e^{z\tau}-1} \big|$\big)} \\ \mathcal{J}_2 &\le C |z|^{\alpha-1}|z^\alpha-\delta_\tau(e^{-\tau z})^\alpha| |(z^\alpha+\lambda_j)^{-1}(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1}| \\ &\le C \tau|z|^{2\alpha} |(z^\alpha+\lambda_j)^{-1}(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1}| \qquad\,\,\,\mbox{(here we use \eqref{zalpha-delta})} \\ &\le C\tau|z|^{2\alpha} (|z|^\alpha+\lambda_j)^{-1}(|\delta_\tau(e^{-\tau z})|^\alpha+\lambda_j)^{-1} \\ &\le \frac{C\tau|z|^\alpha}{|z|^\alpha+\lambda_j}. \end{align*} In the estimates above we have used the following inequality (cf. \cite[inequality C.1]{GLW-2017}) \begin{align}\label{ztau-eztau} \bigg|\frac{z\tau}{e^{z\tau}-1}\bigg|\le C,\quad\forall\, z\in \Gamma_{\theta,\kappa}^{(\tau)}. \end{align} The last inequality is due to Lemma \ref{ineq-sum} together with the angle condition ${\rm arg}(z^\alpha)\le \alpha\theta<\pi$ and ${\rm arg}(\delta_\tau(e^{-\tau z})^\alpha)\le \alpha\theta<\pi$ (cf. Lemma \ref{ineq-1}).
Furthermore, we have \begin{align*} \mathcal{J}_3 &\le C|z^{\alpha-1}-\delta_\tau(e^{-\tau z})^{\alpha-1}| |(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1}| \\ &\le C\Big(|z^{\alpha}-\delta_\tau(e^{-\tau z})^{\alpha}| |z|^{-1} + |z^{-1}-\delta_\tau(e^{-\tau z})^{-1}||\delta_\tau(e^{-\tau z})|^{\alpha} \Big)|(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1}| \\ &\le \Big(C\tau |z|^{1+\alpha} |z|^{-1} +C \tau |z|^2 |z|^{-1} |\delta_\tau(e^{-\tau z})|^{\alpha-1} \Big)|(\delta_\tau(e^{-\tau z})^\alpha+\lambda_j)^{-1}| \\ & \le \frac{C\tau|z|^\alpha}{|z|^\alpha+\lambda_j} \qquad\qquad \mbox{(here we use Lemma \ref{ineq-sum} and Lemma \ref{ineq-1})} . \end{align*} The proof of Lemma \ref{ineq-f} is complete. \end{proof} We now turn to the proof of Theorem \ref{MainTHM}. From \eqref{Mild-sol} and \eqref{eqn:EF} we see that the mild solution admits the decomposition \begin{align} \label{u-repr-} u(\cdot,t) &= \sum_{j=1}^\infty \phi_j\int_0^t F^{(\tau)}_j (t-s) {\rm d} W_j(s) + \sum_{j=1}^\infty \phi_j\int_0^t H^{(\tau)}_j (t-s) {\rm d} W_j(s) \end{align} with \begin{align} &F^{(\tau)}_j (t) := \frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\kappa}^{(\tau)}}e^{zt} z^{\alpha-1} (z^\alpha +\lambda_j)^{-1} \, {\rm d} z \\ &H^{(\tau)}_j(t) := \frac{1}{2\pi i}\int_{\Gamma_{\theta,\kappa}\backslash\Gamma_{\theta,\kappa}^{(\tau)}} e^{zt} z^{\alpha-1}(z^\alpha +\lambda_j )^{-1} {\rm d} z . \label{def-Hjt} \end{align} Also, \eqref{u-semi} and \eqref{eqn:EF2} imply \begin{align} \label{un-repr-} u_n &=\sum_{j=1}^\infty\phi_j \int_0^{t_n} E^{(\tau)}_j(t_n-s) \bar\partial_\tau W_j(s) {\rm d} s \end{align} with \begin{equation} \label{expr-Etau} E^{(\tau)}_j(t):=\frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\kappa}^{(\tau)}}e^{zt} \frac{z\tau }{e^{z\tau}-1}\delta_\tau(e^{-z\tau})^{\alpha-1}(\delta_\tau(e^{-z\tau})^\alpha +\lambda_j )^{-1} \, {\rm d} z .
\end{equation} Comparing \eqref{u-repr-} and \eqref{un-repr-} yields \begin{align} \label{error} u(\cdot,t_n)-u_n &= \sum_{j=1}^\infty \phi_j\int_0^{t_n} (F^{(\tau)}_j (t_n-s) -E^{(\tau)}_j (t_n-s)){\rm d} W_j(s) \nonumber\\ &\quad +\sum_{j=1}^\infty \phi_j\int_0^{t_n} E^{(\tau)}_j (t_n-s) \Big ({\rm d} W_j(s) -\bar\partial_\tau W_j(s){\rm d} s \Big) \nonumber\\ &\quad + \sum_{j=1}^\infty \phi_j\int_0^{t_n} H^{(\tau)}_j (t_n-s) {\rm d} W_j(s) \nonumber\\ &=: {\mathcal E}_\tau(t_n) + {\mathcal G}_\tau(t_n)+ {\mathcal H}_\tau(t_n) . \end{align} Theorem \ref{MainTHM} is then a direct consequence of the following lemma, which completes the proof of Theorem \ref{MainTHM}. \qed \begin{lemma}\label{E-G-H} Under the assumptions of Theorem \ref{MainTHM}, we have ${\mathcal E}_\tau(t_n) , {\mathcal G}_\tau(t_n), {\mathcal H}_\tau(t_n)\in L^2(\Omega;L^2(\mathcal{O}))$, satisfying the following estimate: \begin{align} {\mathbb E} \|{\mathcal E}_\tau(t_n)\|^2 +{\mathbb E} \|{\mathcal G}_\tau(t_n)\|^2 +{\mathbb E} \|{\mathcal H}_\tau(t_n)\|^2 \leq C\tau^{1-\alpha d/2}. \end{align} \end{lemma} \noindent{\it Proof.}$\,\,$ First, we estimate ${\mathcal H}_\tau(t_n)$.
By choosing a number $\beta\in(\alpha d/2,1)$ and using Lemma \ref{ineq-sum}, we have \begin{align}\label{E-Htau} {\mathbb E}\, \|{\mathcal H}_\tau(t_n) \|^2 &= \int_0^{t_n} \sum_{j=1}^\infty |H_j^{(\tau)}(t_n-s) |^2 {\rm d} s \quad\mbox{(It\^o's isometry)}\nonumber \\ & =\int_0^{t_n} \sum_{j=1}^\infty |H_j^{(\tau)}(s) |^2 {\rm d} s \nonumber \\ &= \int_0^{t_n} \sum_{j=1}^\infty \bigg|\frac{1}{2\pi i}\int_{\Gamma_{\theta,\kappa}\backslash\Gamma_{\theta,\kappa}^{(\tau)}} e^{zs} z^{\alpha-1}(z^\alpha + \lambda_j )^{-1} {\rm d} z\bigg|^2 {\rm d} s \nonumber\\ &\le \int_0^{t_n} \sum_{j=1}^\infty \bigg|\frac{1}{2\pi i}\int_{\Gamma_{\theta,\kappa}\backslash\Gamma_{\theta,\kappa}^{(\tau)}} \frac{z^{\alpha}}{ z^\alpha + \lambda_j } \frac{e^{zs}}{z} {\rm d} z\bigg|^2 {\rm d} s \nonumber\\ &\le C \int_0^{t_n} \sum_{j=1}^\infty \bigg(\int_{\Gamma_{\theta,\kappa}\backslash\Gamma_{\theta,\kappa}^{(\tau)}} \frac{|{\rm d} z|}{ |z|^{2-\beta} } \bigg)\bigg( \int_{\Gamma_{\theta,\kappa}\backslash\Gamma_{\theta,\kappa}^{(\tau)}} \bigg|\frac{z^{\alpha}}{ z^\alpha + \lambda_j }\bigg|^2 \frac{ |e^{zs}|^2 }{ |z|^{\beta} } |{\rm d} z|\bigg) {\rm d} s \nonumber\\ &\le C \int_0^{t_n} \sum_{j=1}^\infty\bigg(\int_{1/\tau}^\infty \frac{{\rm d} r}{ r^{2-\beta} } \bigg) \bigg(\int_{1/\tau}^\infty \bigg|\frac{r^{\alpha}}{r^\alpha + \lambda_j }\bigg|^2 \frac{e^{(2s \cos\theta) r} }{ r^{\beta} } {\rm d} r \bigg){\rm d} s \nonumber\\ &\le C \tau^{1-\beta} \int_0^{t_n} \int_{1/\tau}^\infty \sum_{j=1}^\infty \bigg(\frac{r^{\alpha}}{r^\alpha + \lambda_j }\bigg)^2 \frac{e^{(2s \cos\theta) r} }{ r^{\beta} } {\rm d} r {\rm d} s \nonumber\\ &\le C \tau^{1-\beta} \int_0^{t_n} \int_{1/\tau}^\infty r^{\alpha d/2-\beta} e^{(2s \cos\theta) r} {\rm d} r {\rm d} s \nonumber\\ &\le C \tau^{1-\beta} \int_{1/\tau}^\infty r^{\alpha d/2-\beta-1} (1-e^{(2t_n \cos\theta) r}) {\rm d} r \nonumber\\ &\le C \tau^{1-\beta} \tau^{\beta-d\alpha/2} \nonumber\\ &\le C \tau^{1-\alpha d/2} . 
\end{align} Next, we estimate ${\mathcal E}_\tau(t_n)$. To this end, we apply Lemma \ref{ineq-f} and obtain \begin{align}\label{ftau} &|F^{(\tau)}_j (s) -E^{(\tau)}_j (s)|^2 \nonumber \\ &= \biggl|\frac{1}{2\pi \textrm{i}}\int_{\Gamma_{\theta,\kappa}^{(\tau)}} e^{zs} \bigg(\frac{z^{\alpha-1}}{z^\alpha+ \lambda_j } - \frac{z\tau }{e^{z\tau}-1} \frac{\delta_\tau(e^{-\tau z})^{\alpha-1}}{\delta_\tau(e^{-\tau z})^\alpha + \lambda_j } \bigg) {\rm d} z \bigg|^2 \nonumber\\ &\le \bigg(\int_{\Gamma_{\theta,\kappa}^{(\tau)}} |e^{zs}| \frac{C\tau|z|^\alpha }{|z|^\alpha+\lambda_j } |{\rm d} z| \bigg)^2 \nonumber \\ &\le C\tau^2 \bigg( \int_{\Gamma_{\theta,\kappa}^{(\tau)}} |{\rm d} z| \bigg) \int_{\Gamma_{\theta,\kappa}^{(\tau)}} \bigg(\frac{|z|^\alpha }{|z|^\alpha+\lambda_j } \bigg)^2 |e^{zs}|^2 |{\rm d} z| \nonumber \\ &\le C\tau \int_{\Gamma_{\theta,\kappa}^{(\tau)}} \bigg(\frac{|z|^\alpha }{|z|^\alpha+\lambda_j } \bigg)^2 |e^{zs}|^2 |{\rm d} z| . \end{align} By using the expression of $\mathcal{E}_\tau(t_n)$, we have \begin{align}\label{E-Etau} {\mathbb E}\, \|{\mathcal E}_\tau(t_n) \|^2 &= \sum_{j=1}^\infty {\mathbb E}\,\bigg|\int_0^{t_n} (F^{(\tau)}_j (t_n-s) -E^{(\tau)}_j(t_n-s)){\rm d} W_j(s)\bigg|^2 \nonumber\\ &= \sum_{j=1}^\infty \int_0^{t_n} \big|F^{(\tau)}_j(t_n-s)-E^{(\tau)}_j(t_n-s)\big|^2 {\rm d} s \nonumber\\ &= \sum_{j=1}^\infty \int_0^{t_n} \big|F^{(\tau)}_j(s)-E^{(\tau)}_j(s)\big|^2 {\rm d} s \nonumber\\ &\le C\tau \int_0^{t_n}\int_{\Gamma_{\theta,\kappa}^{(\tau)}} \sum_{j=1}^\infty \bigg(\frac{|z|^\alpha }{|z|^\alpha+\lambda_j } \bigg)^2 |e^{zs}|^2 |{\rm d} z| {\rm d} s \nonumber \\ &\le C\tau\int_0^{t_n}\int_{\Gamma_{\theta,\kappa}^{(\tau)}} |z|^{\alpha d/2}|e^{zs}|^2|{\rm d} z| {\rm d} s, \end{align} where the last inequality follows from Lemma \ref{ineq-sum}.
Since $t_n\ge \tau$ and \begin{align*} \Gamma_{\theta,\kappa}^{(\tau)} &=\{z\in\mathbb{C}:z=r e^{\pm\textrm{i}\theta},r\ge\kappa, r |\sin(\theta)| \le\pi/\tau \} \cup \{z\in\mathbb{C}:|z|=\kappa,|\arg z|\le\theta\} , \end{align*} by choosing $\kappa \le \frac{2}{t_n|\sin(\theta)|}$, we have \begin{align}\label{esti-f} &{\mathbb E} \|\mathcal{E}_\tau(t_n)\|^2 \nonumber \\ &\le C\tau\int_0^{t_n}\int_\kappa^{\frac{\pi}{\tau|\sin(\theta)|}} r^{\alpha d/2} e^{(2s\cos\theta)r}{\rm d} r{\rm d} s \nonumber\\ &\quad +C\tau\int_0^{t_n}\int_{-\theta}^\theta \kappa^{\alpha d/2+1} e^{2s\cos(\psi)\kappa} {\rm d} \psi {\rm d} s \nonumber\\ &\le C\tau\int_\kappa^{\frac{\pi}{\tau|\sin(\theta)|}} r^{\alpha d/2-1}(1-e^{(2t_n\cos\theta)r}){\rm d} r +C\tau\int_0^{t_n} \int_{-\theta}^\theta \kappa^{\alpha d/2+1} e^{2s\kappa} {\rm d}\psi{\rm d} s \nonumber\\ &\le C\tau^{1-\alpha d/2}+C\tau \kappa^{\alpha d/2}(e^{2t_n\kappa}-1) \nonumber\\ &\leq C\tau^{1-\alpha d/2}. \end{align} Finally, we estimate $\mathcal{G}_\tau(t_n)$. Because $\bar \partial_\tau W_j(t_n)=\frac{1}{\tau}\int_{t_{n-1}}^{t_n}{\rm d} W_j(s)$, we obtain \begin{align} \mathcal{G}_\tau(t_n) &=\sum_{j=1}^\infty \phi_j\int_0^{t_n} E^{(\tau)}_j (t_n-s) \Big ({\rm d} W_j(s) -\bar\partial_\tau W_j(s){\rm d} s \Big) \nonumber \\ &= \sum_{j=1}^\infty \phi_j\sum_{i=1}^n\bigg(\int_{t_{i-1}}^{t_i}E^{(\tau)}_j(t_n-s){\rm d} W_j(s)-\int_{t_{i-1}}^{t_i}E^{(\tau)}_j(t_n-\xi)\bar \partial_\tau W_j(t_i){\rm d} \xi\bigg) \nonumber\\ &= \sum_{j=1}^\infty \phi_j\sum_{i=1}^n\int_{t_{i-1}}^{t_i} \bigg(\frac{1}{\tau}\int_{t_{i-1}}^{t_i}(E^{(\tau)}_j(t_n-s)-E^{(\tau)}_j(t_n-\xi)){\rm d} \xi\bigg){\rm d} W_j(s) . 
\end{align} Then, \begin{align} \label{Estimate-I2} {\mathbb E}\|\mathcal{G}_\tau(t_n) \|^2 &= \sum_{j=1}^\infty {\mathbb E}\bigg|\sum_{i=1}^n\int_{t_{i-1}}^{t_i}\bigg(\frac{1}{\tau}\int_{t_{i-1}}^{t_i} \Big(E^{(\tau)}_j(t_n-s)-E^{(\tau)}_j(t_n-\xi)\Big) {\rm d}\xi\bigg) {\rm d} W_j(s) \bigg|^2 \nonumber\\ &= \sum_{j=1}^\infty \sum_{i=1}^n\int_{t_{i-1}}^{t_i} \bigg|\frac{1}{\tau}\int_{t_{i-1}}^{t_i} \Big(E^{(\tau)}_j(t_n-s)-E^{(\tau)}_j(t_n-\xi)\Big) {\rm d}\xi \bigg|^2 {\rm d} s . \end{align} By using the expression \eqref{expr-Etau}, for $|s-\xi|\le \tau$ we have \begin{align} &\Big|E^{(\tau)}_j(t_n-s)-E^{(\tau)}_j(t_n-\xi)\Big|^2 \nonumber\\ &= \bigg|\frac{1}{2\pi\textrm{i}}\int_{\Gamma_{\theta,\kappa}^{(\tau)}} e^{z(t_n-s)}(1-e^{z(s-\xi)})\delta_\tau(e^{-z\tau})^{\alpha-1}(\delta_\tau(e^{-z\tau})^\alpha+\lambda_j)^{-1}\frac{z\tau}{e^{z\tau}-1}{\rm d} z \bigg|^2 \nonumber\\ &\le C\bigg(\int_{\Gamma_{\theta,\kappa}^{(\tau)}}|{\rm d} z| \bigg)\bigg(\int_{\Gamma_{\theta,\kappa}^{(\tau)}}|e^{z(t_n-s)}|^2 |1-e^{z(s-\xi)}|^2\bigg|\frac{\delta_\tau(e^{-z\tau})^{\alpha-1}}{\delta_\tau(e^{-z\tau})^\alpha+\lambda_j}\bigg|^2\bigg|\frac{z\tau}{e^{z\tau}-1}\bigg|^2 |{\rm d} z| \bigg) \nonumber \\ &\le C\tau^{-1}\int_{\Gamma_{\theta,\kappa}^{(\tau)}} |e^{z(t_n-s)}|^2 \tau^2|z|^2 \bigg|\frac{\delta_\tau(e^{-z\tau})^{\alpha-1}}{\delta_\tau(e^{-z\tau})^\alpha+\lambda_j}\bigg|^2 |{\rm d} z| \nonumber \\ &\le C\tau \int_{\Gamma_{\theta,\kappa}^{(\tau)}} |e^{z(t_n-s)}|^2 |z|^2 \bigg(\frac{|z|^{\alpha-1}}{|z|^\alpha+\lambda_j} \bigg)^2 |{\rm d} z| \quad\mbox{(here we use Lemma \ref{ineq-sum})} \nonumber \end{align} where we have used $ |1-e^{z(s-\xi)}|\le C|z(s-\xi)|\le C|z|\tau$ and \eqref{ztau-eztau} in deriving the second-to-last inequality.
Substituting the last inequality into \eqref{Estimate-I2} yields \begin{align}\label{E-Gtau} {\mathbb E}\|\mathcal{G}_\tau(t_n) \|^2 &\le \sum_{j=1}^\infty C\tau\int_0^{t_n} \int_{\Gamma_{\theta,\kappa}^{(\tau)}} |e^{z(t_n-s)}|^2 |z|^2 \bigg(\frac{|z|^{\alpha-1}}{|z|^\alpha+\lambda_j} \bigg)^2 |{\rm d} z| {\rm d} s \nonumber\\ &\le C\tau\int_0^{t_n} \int_{\Gamma_{\theta,\kappa}^{(\tau)}} |e^{z(t_n-s)}|^2 |z|^2 \sum_{j=1}^\infty \bigg(\frac{|z|^{\alpha-1}}{|z|^\alpha+\lambda_j} \bigg)^2 |{\rm d} z| {\rm d} s \nonumber\\ &\le C\tau\int_0^{t_n} \int_{\Gamma_{\theta,\kappa}^{(\tau)}} |e^{z(t_n-s)}|^2 |z|^{\alpha d/2} |{\rm d} z| {\rm d} s \nonumber\\ &= C\tau\int_0^{t_n} \int_{\Gamma_{\theta,\kappa}^{(\tau)}} |e^{zs}|^2 |z|^{\alpha d/2} |{\rm d} z| {\rm d} s \quad\mbox{(here we use a change of variable)} \nonumber\\ &\le C\tau^{1-\alpha d/2}, \end{align} where the last integral is bounded in the same way as in \eqref{esti-f}. \qed \section{Numerical examples}\label{Numerical examples} \setcounter{equation}{0} In this section, we present three numerical examples to illustrate the theoretical analyses. {\bf Example 1.} We first consider the one-dimensional stochastic time-fractional equation \begin{align}\label{eq-alpha} \partial_t u(x,t)-\partial^2_{x}\partial_t^{1-\alpha} u(x,t)= f(x,t)+\varepsilon\dot W(x,t) \end{align} for $0\le x\le 1$, $0<t\le 1$, with homogeneous Dirichlet boundary condition and zero initial condition. In the above equation, \begin{align*} f(x,t)=2tx^2(1-x)^2-\frac{2t^{1+\alpha}}{\Gamma(2+\alpha)}(2-12x+12x^2), \end{align*} $\varepsilon$ is a given constant, and $W$ is a cylindrical Wiener process. In the absence of white noise, the exact solution would be $u_d(x,t)=t^2x^2(1-x)^2$, which corresponds to the exact mean of the stochastic solution. We discretize the problem \eqref{eq-alpha} in time by using the scheme \eqref{CQ-scheme2} and, in space, by the continuous piecewise linear finite element method.
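The time-stepping component of such a computation can be sketched in a few lines. The sketch below is ours, not the code used for the experiments: it infers the stepping form of \eqref{CQ-scheme2} from the generating-function identity in the proof of Proposition \ref{un-rep}, replaces the finite element discretization by a second-order finite-difference Laplacian purely for brevity, and the names \texttt{cq\_weights} and \texttt{solve\_cq} are hypothetical.

```python
import math
import numpy as np

def cq_weights(gamma, n, tau):
    # Convolution quadrature weights of delta_tau(zeta)^gamma for the
    # backward-Euler generator delta_tau(zeta) = (1 - zeta)/tau, via the
    # binomial recursion for the coefficients of (1 - zeta)^gamma.
    w = np.empty(n + 1)
    w[0] = tau ** (-gamma)
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (j - 1 - gamma) / j
    return w

def solve_cq(alpha, M, N, T=1.0, eps=0.0, seed=0):
    # Backward-Euler CQ stepping for
    #   d_t u - Laplacian d_t^{1-alpha} u = f + eps*dW  on (0,1),
    # homogeneous Dirichlet data, zero initial value; A = -Laplacian_h is a
    # finite-difference stand-in for the finite element stiffness matrix.
    rng = np.random.default_rng(seed)
    tau, h = T / N, 1.0 / M
    x = np.linspace(0.0, 1.0, M + 1)[1:-1]            # interior nodes
    A = (2.0 * np.eye(M - 1) - np.eye(M - 1, k=1) - np.eye(M - 1, k=-1)) / h**2
    b = cq_weights(1.0 - alpha, N, tau)               # weights of delta_tau^{1-alpha}
    u = [np.zeros(M - 1)]
    for n in range(1, N + 1):
        t = n * tau
        f = (2*t*x**2*(1 - x)**2
             - 2*t**(1 + alpha)/math.gamma(2 + alpha)*(2 - 12*x + 12*x**2))
        dW = eps * math.sqrt(tau / h) * rng.standard_normal(M - 1)
        hist = np.zeros(M - 1)                        # history part of the CQ sum
        for j in range(1, n):
            hist += b[j] * u[n - j]
        rhs = u[n - 1] / tau + f + dW / tau - A @ hist
        u.append(np.linalg.solve(np.eye(M - 1) / tau + b[0] * A, rhs))
    return x, u[-1]
```

With $\varepsilon=0$ the output should track the deterministic solution $u_d(\cdot,1)=x^2(1-x)^2$; the weight recursion reproduces the binomial expansion of $(1-\zeta)^{1-\alpha}$ scaled by $\tau^{\alpha-1}$.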
Here, $h=1/M$ denotes the spatial mesh size and $U^n(x)$ the numerical solution of the fully discrete scheme. We take $\tau=h=2^{-5}$ and $\varepsilon=0.1$. For each computation, $I=1000$ independent realizations are performed with different Wiener processes. For each realization $\omega_i$, $i=1,\dots,I$, we generate $M$ independent Brownian motions $W_j(t)$, $j=1,\dots,M$. In Figure \ref{fig41} (left), we present the exact solution $u_d$ of the deterministic problem, the mean value of the numerical solutions for \eqref{eq-alpha}, and the standard deviation, respectively, at $t_n=1$. Moreover, the numerical approximations $U^n(x,\omega_i)$, $i=1,2,3$, of $u(x,t_n,\omega_i)$, with three independent realizations, are given in Figure \ref{fig41} (right) at $t_n=1$. The numerical simulations in Figure \ref{fig41} are performed by taking $\alpha=0.5$. Similar results are shown in Figure \ref{fig42} for $\alpha=1.3$. Because the solution has $C^{\min(\frac{1}{\alpha}-\frac12,1)}(\overline\Omega)$ pathwise regularity in space (cf. \cite[Proposition 2]{MijenaNane}), the numerical solution for $\alpha=0.5$ (the solution is $C^{1}(\overline\Omega)$) is smoother than the numerical solution for $\alpha=1.3$ (the solution is $C^{0.27}(\overline\Omega)$); see Figures \ref{fig41} and \ref{fig42} for a visual comparison. \begin{figure} \caption{Numerical approximations for $u(x,t)$ with $\alpha=0.5$} \label{fig41} \end{figure} \begin{figure} \caption{Numerical approximations for $u(x,t)$ with $\alpha=1.3$} \label{fig42} \end{figure} {\bf Example 2.} We next consider the convergence rate of the numerical scheme \eqref{CQ-scheme2} for \eqref{eq-alpha} with $\varepsilon=1$. The problem \eqref{eq-alpha} is discretized using backward-Euler convolution quadrature and a linear Galerkin finite element method. To investigate the convergence rate, we consider $I=1000$ independent realizations for each time step $\tau_k=2^{-k}$, $k=5,\dots,8$.
In order to focus on the time discretization error, we solve the time-discrete stochastic PDE \eqref{CQ-scheme2} using a sufficiently small spatial mesh size $h=1/M=2^{-9}$ so that the spatial discretization error is relatively negligible. Then the error $E(\tau_k)$ is computed by \begin{align}\label{t-order} E(\tau_k)=\bigg(\frac{1}{I}\sum_{i=1}^I \|U^{N,\tau_k}(\cdot,\omega_i)-U^{N,\tau_{k-1}}(\cdot,\omega_i)\|^2\bigg)^{\frac{1}{2}} \end{align} for $k=6,7,8$. In \cite{LubichSloanThomee:1996}, it is proved that the backward-Euler convolution quadrature for the time-fractional PDE \eqref{Deter-SPDE2} is first-order convergent. Thus, by Theorem \ref{MainTHM}, the convergence order of the scheme \eqref{CQ-scheme2} for problem \eqref{eq-alpha} should be $O(\tau^{\frac{1}{2}-\frac{\alpha}{4}})$ in a one-dimensional spatial domain. Consequently, we expect the error $E(\tau_k)$ to have the convergence rate \begin{align}\label{time-order} \log_2\frac{E(\tau_k)}{E(\tau_{k+1})} \approx \log_2\bigg(\frac{\tau_k}{\tau_{k+1}}\bigg)^{\frac{1}{2}-\frac{\alpha}{4}}=\frac{1}{2}-\frac{\alpha}{4} \end{align} for successive halvings of the time step. We test the above result by taking $\alpha=0.5$ and $0.9$ for a subdiffusion setting and $\alpha=1.3$ and $1.7$ for a diffusion-wave setting. From \eqref{time-order}, $\log_2\frac{E(\tau_k)}{E(\tau_{k+1})}\approx 0.375$ for $\alpha=0.5$, $\log_2\frac{E(\tau_k)}{E(\tau_{k+1})}\approx 0.275$ for $\alpha=0.9$, $\log_2\frac{E(\tau_k)}{E(\tau_{k+1})} \approx 0.175$ for $\alpha=1.3$, and $\log_2\frac{E(\tau_k)}{E(\tau_{k+1})} \approx 0.075$ for $\alpha=1.7$. Clearly, the results in Table \ref{time-error} ($t_n=1$) illustrate the sharpness of the convergence rate.
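The observed orders in the last column of Table \ref{time-error} are obtained from the tabulated errors exactly as in \eqref{time-order}; a minimal sketch of the computation (with the error values copied from the $\alpha=0.5$ row of Table \ref{time-error}, and the helper name our own) is:

```python
import math

def observed_orders(errors):
    # Empirical convergence orders log2(E(tau_k)/E(tau_{k+1})) for errors
    # computed at successively halved time steps tau_k.
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# E(tau_k) for alpha = 0.5 and tau_k = 2^{-6}, 2^{-7}, 2^{-8}
E_half = [1.075e-02, 8.284e-03, 6.382e-03]
print([round(r, 3) for r in observed_orders(E_half)])  # close to the predicted 1/2 - alpha/4 = 0.375
```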
\begin{table}[!ht] \begin{center} \caption{$E(\tau_k)$ and convergence rates in 1D.} \label{time-error} \begin{tabular}{c|ccc|c} \hline \hline \ {$\alpha\backslash \tau_k$}&$2^{-6}$&$2^{-7}$&$2^{-8}$&order\ \\ \hline \ $\alpha=0.5$ &1.075e-02&8.284e-03&6.382e-03&0.376 (0.375)\ \\ \ $\alpha=0.9$ &2.825e-02&2.340e-02&1.921e-02&0.278 (0.275)\ \\ \ $\alpha=1.3$ &6.340e-02&5.654e-02&5.004e-02&0.171 (0.175)\ \\ \ $\alpha=1.7$ &1.415e-01&1.352e-01&1.275e-01&0.075 (0.075)\ \\ \hline \hline \end{tabular} \end{center} \end{table} {\bf Example 3.} Lastly, we consider the stochastic time-fractional equation \begin{align}\label{eq-alpha-2} \partial_t u(x,t)-\Delta\partial_t^{1-\alpha} u(x,t)= f(x,t)+\dot W(x,t) \end{align} in the two-dimensional spatial domain $[0,1]\times[0,1]$, with homogeneous Dirichlet boundary condition and zero initial condition. Here, we choose $0<t\le 1$ and \begin{align*} f(x,t) &=2tx_1^2x_2^2(1-x_1)^2(1-x_2)^2 -\frac{2t^{1+\alpha}}{\Gamma(2+\alpha)}(2-12x_1+12x_1^2)x_2^2(1-x_2)^2 \\ &\quad -\frac{2t^{1+\alpha}}{\Gamma(2+\alpha)}x_1^2(1-x_1)^2(2-12x_2+12x_2^2) \end{align*} for $x=(x_1,x_2)\in[0,1]\times[0,1]$. The exact solution of the corresponding deterministic problem is $u_d(x,t)=t^2 x_1^2x_2^2(1-x_1)^2(1-x_2)^2$. We solve the stochastic equation \eqref{eq-alpha-2} using the backward-Euler scheme \eqref{CQ-scheme2}, where the spatial discretization is effected by the standard piecewise linear Galerkin finite element method. A uniform triangular partition with 50 nodes in each direction is used. Similarly, we choose $\tau_k=2^{-k}$, $k=4,\dots,7$, and consider $I=1000$ independent realizations to investigate the temporal convergence rate. For each realization $\omega_i$, $i=1,\dots,I$, we generate $2500$ independent Brownian motions $W_j(t)$, $j=1,\dots,2500$. The mesh size $h=\frac{\sqrt{2}}{50}$ is fixed so that the spatial error is relatively negligible.
Then, the error \eqref{t-order} is computed for each fixed time step $\tau_k$ and presented in Table \ref{time-error-2} at $t_n=1$. By Theorem \ref{MainTHM}, the convergence rate of the scheme \eqref{CQ-scheme2} for problem \eqref{eq-alpha-2} is $O(\tau^{\frac{1}{2}-\frac{\alpha}{2}})$ in the two-dimensional spatial setting. Clearly, the numerical results are consistent with the theoretical analyses given in Theorem \ref{MainTHM}. \begin{table}[!ht] \begin{center} \caption{$E(\tau_k)$ and convergence rates in 2D.} \label{time-error-2} \begin{tabular}{c|ccc|c} \hline \hline \ {$\alpha\backslash \tau_k$}&$2^{-5}$&$2^{-6}$&$2^{-7}$&order\ \\ \hline \ $\alpha=0.3$ &3.848e-03&2.941e-03&2.335e-03&0.361 (0.35)\ \\ \ $\alpha=0.5$ &9.992e-03&8.556e-03&7.395e-03&0.217 (0.25)\ \\ \ $\alpha=0.7$ &2.115e-02&1.936e-02&1.762e-02&0.132 (0.15)\ \\ \ $\alpha=0.9$ &3.997e-02&3.914e-02&3.851e-02&0.033 (0.05)\ \\ \hline \hline \end{tabular} \end{center} \end{table} \section{Conclusion} We considered the stability and convergence of numerical approximations of a stochastic time-fractional PDE by using the backward-Euler convolution quadrature in time. By means of a discrete analogue of the inverse Laplace transform, we derived an integral representation of the numerical solution, which was then used to prove the sharp convergence rate of the numerical approximation. Instead of the contour $$\Gamma_{\theta}^{(\tau)}=\{z\in\mathbb{C}:|{\rm arg}(z)|=\theta,\,\,|{\rm Im}(z)|\le \pi/\tau\}$$ used in \cite{LubichSloanThomee:1996}, we have used the contour $\Gamma_{\theta,\kappa}^{(\tau)}$ given in \eqref{trunc-contour} for the analysis in this paper. The contour $\Gamma_{\theta,\kappa}^{(\tau)}$ excludes the origin and thus can be used not only for the Dirichlet Laplacian but also for the Neumann Laplacian (whose spectrum includes the origin). Moreover, for the Dirichlet Laplacian, it is not necessary to show that $\delta_\tau(e^{-z\tau})\in \Sigma_\theta$ as in \eqref{angle-delta}.
Using this result, the analyses in this paper can be naturally extended to the Neumann Laplacian. The paper focuses on semidiscretization in time by convolution quadrature. The main contribution of the paper is to show the possibility of removing the $\epsilon$ term in the previously obtained error estimate \eqref{Estimate-epsilon} and to establish a foundation for the further analysis of the spatial discretization considered in \cite{GLW-2017}. Our analysis is based on specific growth properties of the eigenvalues of the Laplacian operator (cf. Lemma \ref{ineq-eigen}), and thus may not directly extend to more general settings, such as semilinear problems and multiplicative space-time white noise. For example, in the multiplicative noise case, the identities \eqref{u-repr-}-\eqref{expr-Etau} do not hold, and the analysis becomes more complicated. Thus, the extension to semilinear problems and multiplicative space-time white noise remains open and certainly should be a subject of future research. Theorem \ref{MainTHM} can be extended to higher order moments by using the Burkholder--Davis--Gundy inequality (cf. \cite[(1.1)]{NNeerven-Veraar-Weis-2007}, \cite[(6.29)]{PratoZabczyk2014}, or \cite{Pratelli-1988})$:$ for all $p\in(1,\infty)$ there exists $C_p >0$ such that $$ \mathbb{E} \bigg(\max_{1\le n\le N} \bigg\|\int_0^{t_n}\phi(s) \,{\rm d} W(s)\bigg\|^p \bigg) \le C_p\, \mathbb{E}\bigg[ \bigg(\int_0^T \|\phi(s)\|_{L_2^0}^2 \,{\rm d} s\bigg)^{\frac{p}{2}}\bigg] , $$ where $L_2^0$ denotes the space of Hilbert--Schmidt operators on $L^2(\mathcal{O})$. Let \begin{align} &H^{(\tau)}(t) := \frac{1}{2\pi i}\int_{\Gamma_{\theta,\kappa}\backslash\Gamma_{\theta,\kappa}^{(\tau)}} e^{zt} z^{\alpha-1}(z^\alpha - \Delta )^{-1} {\rm d} z , \end{align} which is a Hilbert--Schmidt operator satisfying (see \eqref{def-Hjt}) $$\|H^{(\tau)}(t)\|_{L^0_2}^2=\sum_{j=1}^\infty |H^{(\tau)}_j (t)|^2 .
$$ By using the definition of $\mathcal{H}_\tau(t_n)$ in \eqref{error}, we have \begin{align} {\mathbb E}\, (\max_{1\le n\le N} \|{\mathcal H}_\tau(t_n) \|^p) &= {\mathbb E}\bigg(\max_{1\le n\le N} \bigg\|\int_0^{t_n} H^{(\tau)}(t_n-s) {\rm d} W(s) \bigg\|^p\bigg) \nonumber \\ &\le C_p\, {\mathbb E} \bigg[\bigg(\int_0^T \|H^{(\tau)}(t_n-s) \|_{L^0_2}^2 {\rm d} s \bigg)^{\frac{p}{2}}\bigg] \quad\mbox{(Burkholder--Davis--Gundy inequality)} \nonumber \\ &= C_p\, {\mathbb E} \bigg[\bigg(\int_0^T \sum_{j=1}^\infty |H^{(\tau)}_j (t_n-s)|^2 {\rm d} s \bigg)^{\frac{p}{2}}\bigg] \nonumber \\ &=C_p\, \bigg(\int_0^T \sum_{j=1}^\infty |H_j^{(\tau)}(t_n-s) |^2 {\rm d} s \bigg)^{\frac{p}{2}} \nonumber \\ &\le C_p\, \tau^{(\frac12-\frac{\alpha d}{4})p} , \end{align} where the last inequality uses \eqref{E-Htau}. Similarly, the following estimates can be proved by using \eqref{esti-f} and \eqref{E-Gtau}: $$ {\mathbb E} (\max_{1\le n\le N}\|{\mathcal E}_\tau(t_n)\|^p) \le C_p\, \tau^{(\frac12-\frac{\alpha d}{4})p} \quad\mbox{and}\quad {\mathbb E} (\max_{1\le n\le N}\|{\mathcal G}_\tau(t_n)\|^p) \le C_p\, \tau^{(\frac12-\frac{\alpha d}{4})p} . $$ Substituting these estimates into \eqref{error} yields the $p$-moment estimate \begin{align}\label{main-estimate-p} \big({\mathbb E}\,\max_{1\le n\le N}\|u(\cdot,t_n)-u_n\|^p\big)^{\frac1p} \le C_p\, \tau^{\frac12-\frac{\alpha d}{4}} ,\quad\forall\,p\in(1,\infty) . \end{align} The estimate above also implies the existence of a random variable $\mathcal{C}$, with finite moments of any order independent of $\tau$, such that the following pathwise estimate holds: \begin{align}\label{main-estimate-path} \max_{1\le n\le N}\|u(\cdot,t_n)-u_n\| \le \mathcal{C} \, \tau^{\frac12-\frac{\alpha d}{4}} . \end{align} The convergence rate proved in this article is optimal with respect to the temporal regularity of the solution. However, whether it is the highest possible convergence rate among all possible numerical methods is unknown.
For stochastic ODEs, the convergence rates of some numerical methods (such as Milstein's method) may be higher than the temporal regularity of the solution. But Milstein's method does not yield a higher convergence rate for the stochastic PDE problem considered in this article. For example, in the case $\alpha=d=1$, Milstein's method (equivalent to the Euler--Maruyama method for additive noise) \begin{equation} u_n= u_{n-1} - \tau \Delta u_{n-1} + W(\cdot,t_n)-W(\cdot,t_{n-1}) \end{equation} does not converge at all (as it is an explicit scheme, which requires a CFL condition that cannot be satisfied by a semi-discretization in time). Even if we modify Milstein's method to be an implicit scheme \begin{equation}\label{euler-un-w} u_n= u_{n-1} - \tau \Delta u_{n} + W(\cdot,t_n)-W(\cdot,t_{n-1}) , \end{equation} the scheme only has strong convergence rate $O(\tau^{1/4})$. This is different from stochastic ODEs, for which the strong convergence rates of Milstein's method and the corresponding implicit scheme are $O(\tau)$ (higher than the temporal regularity of the solution). This difference between stochastic PDEs and stochastic ODEs is due to the fact that $\Delta u$ does not have the same temporal regularity as $u$ when the PDE is driven by a space-time white noise \eqref{white-noise}. In particular, $\Delta u$ is not H\"older continuous in time in the space $L^2(\mathcal{O})$ (but Milstein's method requires H\"older continuity of $\Delta u$ to achieve a higher convergence rate). In the case of colored noise (instead of space-time white noise), the implicit Euler scheme \eqref{euler-un-w} can achieve a better convergence rate than the temporal H\"older regularity of the solution; see \cite{Wang-2017}.
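For concreteness, a minimal numerical sketch of the implicit scheme \eqref{euler-un-w} in the case $\alpha=d=1$. We write it for the stochastic heat equation $u_t = u_{xx} + \dot W$ on $(0,1)$ with homogeneous Dirichlet boundary conditions and zero initial data; the grid sizes and the noise scaling $\Delta W_i \sim \mathcal{N}(0,\tau/h)$ at the interior nodes are our own illustrative choices, not taken from the article.

```python
import numpy as np

def implicit_euler_heat(M=63, N=100, T=1.0, seed=0):
    """One sample path of the implicit Euler scheme for the stochastic heat
    equation u_t = u_xx + space-time white noise on (0,1), with u = 0 on the
    boundary and u(.,0) = 0.  Space-time white noise is approximated on the
    grid by independent increments dW_i ~ N(0, tau/h) at the interior nodes."""
    rng = np.random.default_rng(seed)
    h, tau = 1.0 / (M + 1), T / N
    # Discrete Dirichlet Laplacian on the M interior nodes (negative definite).
    L = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
         + np.diag(np.ones(M - 1), -1)) / h**2
    A = np.eye(M) - tau * L      # symmetric positive definite for every tau
    u = np.zeros(M)
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(tau / h), size=M)
        u = np.linalg.solve(A, u + dW)   # (I - tau L) u_n = u_{n-1} + dW
    return u

u = implicit_euler_heat()
```

The linear system is solvable for every step size $\tau$, since $I-\tau L$ is symmetric positive definite; this is the sense in which the implicit scheme avoids the CFL restriction of the explicit one.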
\appendix\label{append} \section{The mild solution in $C^\gamma([0,T];L^2(\Omega;L^2(\mathcal{O})))$}\label{Append} In the cases $\alpha<\min(1,2/d)$ and $\alpha>1$ with $d=1$, the mild solution of \eqref{Frac-SPDE2} (with space-time white noise) has been studied in different function spaces under different settings. For example, see \cite{KP,MijenaNane}. For the reader's convenience, in this appendix, we illustrate that the mild solution given by \eqref{Mild-sol-}-\eqref{Mild-sol} is indeed well defined in $C^\gamma([0,T];L^2(\Omega;L^2(\mathcal{O})))$, a result used for the numerical analysis in this paper. \begin{theorem} The mild solution defined by \eqref{Mild-sol-}-\eqref{Mild-sol} is in $C^{\gamma}([0,T];L^2(\Omega;L^2(\mathcal{O})))$ for arbitrary $\gamma\in(0,\frac12-\frac{\alpha d}{4})$. \end{theorem} \noindent{\it Proof}.$\,\,$ In \eqref{Mild-sol}, the formula \eqref{eqn:EF} implies \begin{align} \label{each-term} \int_0^t E(t-s)\phi_j{\rm d} W_j(s) = \phi_j \int_0^t h_j(t-s) {\rm d} W_j(s) \end{align} for a deterministic time-independent function $\phi_j\in L^2(\mathcal{O})$ and with the deterministic space-independent function $h_j(\cdot)$ given by $$ h_j(t-s) =\frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\kappa}}e^{z(t-s)} z^{\alpha-1} (z^\alpha +\lambda_j)^{-1} \, {\rm d} z . $$ By the theory of the Ito integral and the identity \eqref{each-term}, each term in \eqref{Mild-sol} is well defined in $C([0,T];L^2(\Omega; L^2(\mathcal{O})))$. 
Because the one-dimensional Wiener processes $W_j(s)$, $j=1,2,\dots$, are independent of each other, it follows that \begin{align}\label{Est-Et-sW} &\sup_{t\in[0,T]} {\mathbb E} \bigg\|\sum_{j=\ell}^{\ell+m} \int_0^t E(t-s)\phi_j{\rm d} W_j(s) \bigg\|^2 \nonumber \\ &=\sup_{t\in[0,T]} \sum_{j=\ell}^{\ell+m} \int_0^t \| E(t-s)\phi_j\|^2{\rm d} s =\sup_{t\in[0,T]} \sum_{j=\ell}^{\ell+m} \int_0^t \| E(s)\phi_j\|^2{\rm d} s \nonumber \\ &\le\sum_{j=\ell}^{\ell+m} \int_0^T \| E(s)\phi_j\|^2{\rm d} s \nonumber \\ &= \int_0^{T} \sum_{j=\ell}^{\ell+m} \bigg|\frac{1}{2\pi i}\int_{\Gamma_{\theta,\kappa}} e^{zs} z^{\alpha-1}(z^\alpha + \lambda_j )^{-1} {\rm d} z\bigg|^2 {\rm d} s \nonumber \\ &\le C\int_0^{T} \sum_{j=\ell}^\infty\bigg(\int_{\Gamma_{\theta,\kappa}^\theta} \frac{1}{|z|^\beta}|{\rm d} z| \bigg) \bigg(\int_{\Gamma_{\theta,\kappa}^\theta} \bigg|\frac{z^{\alpha}}{z^\alpha+\lambda_j}\bigg|^2\frac{|e^{2zs}| }{|z|^{2-\beta}} |{\rm d} z| \bigg) {\rm d} s \nonumber \\ &\quad +C\int_0^{T} \sum_{j=\ell}^\infty \bigg(\int_{\Gamma_{\theta,\kappa}^\kappa} \frac{1}{|z|^\beta}|{\rm d} z| \bigg) \bigg(\int_{\Gamma_{\theta,\kappa}^\kappa} \bigg|\frac{z^{\alpha}}{z^\alpha+\lambda_j}\bigg|^2\frac{|e^{2zs}| }{|z|^{2-\beta}} |{\rm d} z| \bigg) {\rm d} s \nonumber \\ &\le C\int_0^{T} \sum_{j=\ell}^\infty \bigg(\int_{\kappa}^\infty \frac{1}{r^\beta}{\rm d} r \bigg) \bigg(\int_{\kappa}^\infty\bigg|\frac{r^{\alpha}}{r^\alpha+\lambda_j}\bigg|^2\frac{e^{-2rs|\cos(\theta)|}}{r^{2-\beta}} {\rm d} r \bigg) {\rm d} s \quad\mbox{(use Lemma \ref{ineq-sum})} \nonumber \\ &\quad +C\int_0^{T} \sum_{j=\ell}^\infty \bigg(\int_{-\theta}^\theta \frac{1}{\kappa^\beta} \kappa{\rm d}\varphi \bigg) \bigg(\int_{-\theta}^\theta \bigg|\frac{\kappa^{\alpha}}{\kappa^\alpha+\lambda_j}\bigg|^2\frac{e^{2\kappa s\cos(\varphi)}}{\kappa^{2-\beta}}\kappa{\rm d}\varphi \bigg) {\rm d} s \nonumber \\ &\le C\kappa^{1-\beta} \int_0^{T}\int_{\kappa}^\infty \sum_{j=\ell}^\infty
\bigg|\frac{r^{\alpha}}{r^\alpha+\lambda_j}\bigg|^2\frac{e^{-2rs|\cos(\theta)|}}{r^{2-\beta}} {\rm d} r {\rm d} s \nonumber \\ &\quad +C\kappa^{1-\beta} \int_0^{T}\int_{-\theta}^\theta \sum_{j=\ell}^\infty \bigg|\frac{\kappa^{\alpha}}{\kappa^\alpha+\lambda_j}\bigg|^2\frac{e^{2\kappa s\cos(\varphi)}}{\kappa^{2-\beta}}\kappa{\rm d}\varphi {\rm d} s ,\nonumber \\ \end{align} where $\beta\in(1,2-\alpha d/2)$. In view of Lemma \ref{ineq-sum}, $\displaystyle\sum_{j=1}^\infty \bigg|\frac{r^{\alpha}}{r^\alpha+\lambda_j}\bigg|^2 \le Cr^{\alpha d/2}$ implies $\displaystyle\sum_{j=\ell}^\infty \bigg|\frac{r^{\alpha}}{r^\alpha+\lambda_j}\bigg|^2\rightarrow 0$ as $\ell\rightarrow \infty$, and \begin{align}\label{int-k-1} \int_0^{T}\int_{\kappa}^\infty \sum_{j=\ell}^\infty \bigg|\frac{r^{\alpha}}{r^\alpha+\lambda_j}\bigg|^2\frac{e^{-2rs|\cos(\theta)|}}{r^{2-\beta}} {\rm d} r {\rm d} s &\le \int_0^{T}\int_{\kappa}^\infty \sum_{j=1}^\infty \bigg|\frac{r^{\alpha}}{r^\alpha+\lambda_j}\bigg|^2\frac{e^{-2rs|\cos(\theta)|}}{r^{2-\beta}} {\rm d} r {\rm d} s \nonumber \\ &\le C\int_0^{T}\int_{\kappa}^\infty r^{\alpha d/2} \frac{e^{-2rs|\cos(\theta)|}}{r^{2-\beta}} {\rm d} r {\rm d} s \nonumber \\ &\le C\int_{\kappa}^\infty r^{\alpha d/2+\beta-3} (1-e^{-2r T|\cos(\theta)|}) {\rm d} r \nonumber \\ &\le C\kappa^{\alpha d/2+\beta-2} . \end{align} The Lebesgue dominated convergence theorem implies that \begin{align*} \lim_{\ell\rightarrow \infty} \kappa^{1-\beta} \int_0^{T}\int_{\kappa}^\infty \sum_{j=\ell}^\infty \bigg|\frac{r^{\alpha}}{r^\alpha+\lambda_j}\bigg|^2\frac{e^{-2rs|\cos(\theta)|}}{r^{2-\beta}} {\rm d} r {\rm d} s =0 .
\end{align*} Similarly, $\displaystyle\sum_{j=1}^\infty \bigg|\frac{\kappa^{\alpha}}{\kappa^\alpha+\lambda_j}\bigg|^2\le C\kappa^{\alpha d/2}$ implies $\displaystyle\sum_{j=\ell}^\infty \bigg|\frac{\kappa^{\alpha}}{\kappa^\alpha+\lambda_j}\bigg|^2\rightarrow 0$ as $\ell\rightarrow \infty$, and \begin{align}\label{int-k-2} \int_0^{T}\int_{-\theta}^\theta \sum_{j=\ell}^\infty \bigg|\frac{\kappa^{\alpha}}{\kappa^\alpha+\lambda_j}\bigg|^2\frac{e^{2\kappa s\cos(\varphi)}}{\kappa^{2-\beta}}\kappa{\rm d}\varphi {\rm d} s &\le \int_0^{T}\int_{-\theta}^\theta \sum_{j=1}^\infty \bigg|\frac{\kappa^{\alpha}}{\kappa^\alpha+\lambda_j}\bigg|^2 \frac{e^{2\kappa s\cos(\varphi)}}{\kappa^{2-\beta}}\kappa {\rm d}\varphi {\rm d} s \nonumber \\ &\le C\int_0^{T}\int_{-\theta}^\theta \kappa^{\alpha d/2} \frac{e^{2\kappa s}}{\kappa^{2-\beta}} \kappa{\rm d}\varphi {\rm d} s \nonumber \\ &\le C\int_{-\theta}^\theta \kappa^{\alpha d/2+\beta-2} (e^{2\kappa T}-1) {\rm d} \varphi \nonumber \\ &\le C\kappa^{\alpha d/2+\beta-2} . \end{align} Again, the Lebesgue dominated convergence theorem implies that \begin{align*} \lim_{\ell\rightarrow \infty} \kappa^{1-\beta} \int_0^{T}\int_{-\theta}^\theta \sum_{j=\ell}^\infty \bigg|\frac{\kappa^{\alpha}}{\kappa^\alpha+\lambda_j}\bigg|^2\frac{e^{2\kappa s\cos(\varphi)}}{\kappa^{2-\beta}}\kappa{\rm d}\varphi {\rm d} s =0 . \end{align*} Overall, we have \begin{align*} \sup_{t\in[0,T]} {\mathbb E}\bigg\|\sum_{j=\ell}^{\ell+m} \int_0^t E(t-s)\phi_j{\rm d} W_j(s) \bigg\|^2 \rightarrow 0 \quad\mbox{as}\,\,\, \ell\rightarrow \infty , \end{align*} which implies that the sequence $$ \sum_{j=1}^\ell \int_0^t E(t-s)\phi_j{\rm d} W_j(s) ,\quad \ell=1,2,\dots $$ is a Cauchy sequence in $C([0,T];L^2(\Omega; L^2(\mathcal{O})))$. Consequently, the sequence converges to a function $u\in C([0,T];L^2(\Omega; L^2(\mathcal{O})))$, which is the mild solution defined by \eqref{Mild-sol}.
Let $L_2^0$ denote the space of Hilbert-Schmidt operators on $L^2(\mathcal{O})$ (cf. \cite[Appendix C]{PratoZabczyk2014}) with the operator norm \begin{align} \|E(t-s)\|_{L_2^0} = \bigg(\sum_{j=1}^\infty \|E(t-s)\phi_j\|_{L^2(\mathcal{O})}^2\bigg)^{\frac12} . \end{align} The above analysis clearly shows that \begin{align} \int_0^t \|E(t-s)\|_{L_2^0}^2 {\rm d} s<\infty . \end{align} In view of \cite[Proposition 4.20 and page 99]{PratoZabczyk2014}, the stochastic integral \eqref{Mild-sol-} is well defined in $L^2(\Omega;L^2(\mathcal{O}))$, and \eqref{Mild-sol-} coincides with the series representation \eqref{Mild-sol} (\cite[section 4.2.2]{PratoZabczyk2014}). Similar to the estimate \eqref{Est-Et-sW}, by considering \begin{align*} &\frac{u(\cdot,t)-u(\cdot,t-h)}{h^\gamma} \\ &=\sum_{j=1}^\infty \int_0^{t-h} \frac{E(t-s)-E(t-h-s)}{h^\gamma}\phi_j{\rm d} W_j(s) +\frac{1}{h^\gamma}\sum_{j=1}^\infty \int_{t-h}^t E(t-s)\phi_j{\rm d} W_j(s) \end{align*} we have \begin{align}\label{Holder-u} &\sup_{t\in[0,T]}{\mathbb E} \bigg\|\frac{u(\cdot,t)-u(\cdot,t-h)}{h^\gamma}\bigg\|^2 \nonumber \\ &=\sup_{t\in[0,T]} \bigg( \sum_{j=1}^{\infty} \int_0^{t-h} \bigg\| \frac{E(t-s)-E(t-h-s)}{h^\gamma}\phi_j\bigg\|^2{\rm d} s \nonumber \\ &\quad +\ \frac{1}{h^{2\gamma}}\sum_{j=1}^\infty \int_{t-h}^t \|E(t-s)\phi_j\|^2{\rm d} s \bigg) \\ &\le\sum_{j=1}^{\infty} \int_0^T \bigg\| \frac{E(s+h)-E(s)}{h^\gamma}\phi_j\bigg\|^2{\rm d} s +\sum_{j=1}^\infty \frac{1}{h^{2\gamma}}\int_{0}^h \|E(s)\phi_j\|^2{\rm d} s .\nonumber \end{align} Because $\big|\frac{e^{zh}-1}{h^\gamma}\big|\le C|z|^\gamma$ on the contour $\Gamma_{\theta,\kappa}$ (on which ${\rm Re}(z)\le 0$ when $|z|\ge\kappa$), it follows that \begin{align*} &\sum_{j=1}^{\infty} \int_0^T \bigg\| \frac{E(s+h)-E(s)}{h^\gamma}\phi_j\bigg\|^2{\rm d} s \\ &= \int_0^{T} \sum_{j=1}^{\infty} \bigg|\frac{1}{2\pi i}\int_{\Gamma_{\theta,\kappa}} \frac{e^{zh}-1}{h^\gamma}e^{zs} z^{\alpha-1}(z^\alpha + \lambda_j )^{-1} {\rm d} z\bigg|^2 {\rm d} s \\ &\le 
C\int_0^{T} \sum_{j=1}^\infty\bigg(\int_{\Gamma_{\theta,\kappa}^\theta} \frac{1}{|z|^\beta}|{\rm d} z| \bigg) \bigg(\int_{\Gamma_{\theta,\kappa}^\theta} \bigg|\frac{z^{\alpha}}{z^\alpha+\lambda_j}\bigg|^2\frac{|e^{2zs}| }{|z|^{2-\beta-2\gamma}} |{\rm d} z| \bigg) {\rm d} s \\ &\quad +C\int_0^{T} \sum_{j=1}^\infty \bigg(\int_{\Gamma_{\theta,\kappa}^\kappa} \frac{1}{|z|^\beta}|{\rm d} z| \bigg) \bigg(\int_{\Gamma_{\theta,\kappa}^\kappa} \bigg|\frac{z^{\alpha}}{z^\alpha+\lambda_j}\bigg|^2\frac{|e^{2zs}| }{|z|^{2-\beta-2\gamma}} |{\rm d} z| \bigg) {\rm d} s \\ &\le C\int_{\kappa}^\infty r^{\alpha d/2+\beta+2\gamma-3} (1-e^{-2r T|\cos(\theta)|}) {\rm d} r +C\int_{-\theta}^\theta \kappa^{\alpha d/2+\beta+2\gamma-2} (e^{2\kappa T}-1) {\rm d}\varphi \\ &\le C\kappa^{\alpha d/2+\beta+2\gamma-2}, \end{align*} where the second-to-last inequality requires $\beta>1$ for the improper integral $\int_{\Gamma_{\theta,\kappa}^\theta} \frac{1}{|z|^\beta}|{\rm d} z|$ to be convergent (then the estimates follow similarly to \eqref{int-k-1}-\eqref{int-k-2}), and the last inequality requires $\alpha d/2+\beta+2\gamma-3<-1$. This requires $2\gamma<1-\frac{\alpha d}{2}$.
Also, \begin{align*} &\sum_{j=1}^{\infty} \frac{1}{h^{2\gamma}}\int_0^h \| E(s)\phi_j\|^2{\rm d} s \\ &= \frac{1}{h^{2\gamma}}\int_0^{h} \sum_{j=1}^{\infty} \bigg|\frac{1}{2\pi i}\int_{\Gamma_{\theta,\kappa}} e^{zs} z^{\alpha-1}(z^\alpha + \lambda_j )^{-1} {\rm d} z\bigg|^2 {\rm d} s \\ &\le \frac{C}{h^{2\gamma}}\int_0^{h} \sum_{j=1}^\infty\bigg(\int_{\Gamma_{\theta,\kappa}^\theta} \frac{1}{|z|^\beta}|{\rm d} z| \bigg) \bigg(\int_{\Gamma_{\theta,\kappa}^\theta} \bigg|\frac{z^{\alpha}}{z^\alpha+\lambda_j}\bigg|^2\frac{|e^{2zs}| }{|z|^{2-\beta}} |{\rm d} z| \bigg) {\rm d} s \\ &\quad +\frac{C}{h^{2\gamma}}\int_0^{h} \sum_{j=1}^\infty \bigg(\int_{\Gamma_{\theta,\kappa}^\kappa} \frac{1}{|z|^\beta}|{\rm d} z| \bigg) \bigg(\int_{\Gamma_{\theta,\kappa}^\kappa} \bigg|\frac{z^{\alpha}}{z^\alpha+\lambda_j}\bigg|^2\frac{|e^{2zs}| }{|z|^{2-\beta}} |{\rm d} z| \bigg) {\rm d} s \\ &\le C\int_{\kappa}^\infty r^{\alpha d/2+\beta-3} \frac{1-e^{-2r h|\cos(\theta)|}}{h^{2\gamma}}{\rm d} r +C\int_{-\theta}^\theta \kappa^{\alpha d/2+\beta-2} \frac{e^{2\kappa h}-1}{h^{2\gamma}} {\rm d}\varphi \\ &\le C\int_{\kappa}^\infty r^{\alpha d/2+\beta+2\gamma-3} {\rm d} r +C\int_{-\theta}^\theta \kappa^{\alpha d/2+\beta+2\gamma-2} {\rm d}\varphi \\ &\le C\kappa^{\alpha d/2+\beta+2\gamma-2}, \end{align*} which again requires $\beta>1$ and $2\gamma<1-\frac{\alpha d}{2}$ for the convergence of the improper integrals. Substituting the last two results into \eqref{Holder-u} yields that $u\in C^{\gamma}([0,T];L^2(\Omega; L^2(\mathcal{O})))$ for arbitrary $\gamma\in(0,\frac12-\frac{\alpha d}{4})$. \qed {\bf Acknowledgements.} The authors would like to thank Professor Xiaobing Feng for many valuable suggestions and comments. {\bf Funding.} The research of M. Gunzburger and J. Wang was supported in part by the USA National Science Foundation grant DMS-1315259 and by the USA Air Force Office of Scientific Research grant FA9550-15-1-0001. The work of B. Li was supported in part by the Hong Kong RGC grant 15300817.
\end{document}
\begin{document} \title{Comment on \textquotedblleft Does the weak trace show the past of a quantum particle?\textquotedblright} \author{Q.\ Duprey} \affiliation{ENSEA, 6 avenue du Ponceau, 95014 Cergy-Pontoise cedex, France} \author{A. Matzkin} \affiliation{Laboratoire de Physique Th\'eorique et Mod\'elisation, CNRS Unit\'e 8089, CY Cergy Paris Universit\'e, 95302 Cergy-Pontoise cedex, France} \begin{abstract} In the paper \textquotedblleft Does the weak trace show the past of a quantum particle?\textquotedblright\lbrack arXiv:2109.14060v2], it is argued that null weak values of the spatial projectors are inadequate to infer the presence of a quantum particle at an intermediate time between preparation and detection. This conclusion relies on two arguments -- (i) the role of the disturbance induced by a weak measurement, and (ii) classical-like features like continuous paths that must purportedly be associated with a quantum particle presence. Here we first show that (i) arises from a misunderstanding of null weak values by putting forward a simple counter-example that highlights that the relevant quantities to examine are the vanishing amplitudes, not the wavefunction. Then we briefly argue that enforcing classical pre-conditions in order to account for quantum properties during unitary evolution is unlikely to lead to a consistent understanding of quantum phenomena. \newline\newline \end{abstract} \maketitle In order to learn something about a physical system's properties, a measurement of the system is necessary. For a quantum system, a standard measurement radically changes the system's evolution as the premeasurement state is projected to one of the eigenstates of the measured observable. It is therefore difficult, even in principle, to imagine a procedure that would enable one to measure the properties of a system at an intermediate time, without affecting the system's evolution. 
With weak measurements \cite{origine} it is possible to achieve minimally perturbing non-destructive measurements. An observable $\hat{O}$ of a system initially prepared (\textquotedblleft preselected\textquotedblright) in state $\left\vert \psi_{i}\right\rangle $ is weakly measured by coupling the system to a quantum pointer. \textquotedblleft Weakly\textquotedblright\ means here that the system's evolution after the coupling is only affected to first order, so that when a different observable, say $\hat{N}$, is subsequently measured (through a projective measurement), the probability to obtain the final state $\left\vert \psi_{f}\right\rangle $ (an eigenstate of $\hat{N}$) is not modified to first order (relative to the same evolution without the coupling). Once $\left\vert \psi_{f}\right\rangle $ is obtained (\textquotedblleft post-selected\textquotedblright), the quantum pointer coupled to $\hat{O}$ is shifted by $\operatorname{Re}\left( O^{w}\right) $, where the weak value $O^{w}$ is given by \begin{equation} O^{w}=\frac{\left\langle \psi_{f}\right\vert \hat{O}\left\vert \psi _{i}\right\rangle }{\left\langle \psi_{f}\right\vert \left. \psi _{i}\right\rangle }. \label{w} \end{equation} \begin{figure} \caption{The original 3-paths interferometer with a nested Mach-Zehnder, introduced in \cite{vaidman2013}.} \label{fig1} \end{figure} Let us now examine a quantum particle propagating inside the interferometer depicted in Fig.\ 1 in a pre and post-selected situation. Weak couplings can be implemented jointly on the different arms.\ Typically, the corresponding weak values are generally non-zero and all the coupled quantum pointers will shift. Vaidman \cite{vaidman2013} proposed that the weak values of the spatial projector, $\hat{O}=\Pi_{x}\equiv\left\vert x\right\rangle \left\langle x\right\vert $ could be used as a \textquotedblleft weak trace criterion\textquotedblright, telling us where the quantum particle has been during its evolution inside the interferometer.
In the typical case just mentioned, this criterion implies that the particle has been in all the arms in the interferometer, i.e. a statement of the paths superposition. In some situations, the weak value for a coupling at position $x_{0}$ will vanish, $\Pi_{x_{0}}^{w}=0$, indicating that the quantum particle was not there (in the sense that its spatial degree of freedom was not detected at $x_{0}$). \begin{figure} \caption{Similar to Fig. 1, with the addition of polarization rotations inside the nested Mach-Zehnder and a polarizer at post-selection. Now the wavefunction on arm E does not vanish, although the weak values are the same as those of Fig. 1.} \label{fig2} \end{figure} An apparent paradox pointed out by Vaidman \cite{vaidman2013,refs} happens when the interferometer is balanced such that the wavefunction along arm E vanishes.\ Then, taking the initial state $\left\vert \psi_{i}\right\rangle =\left( \left\vert \psi_{D}\right\rangle +i\left\vert \psi_{A}\right\rangle \right) /\sqrt{2},$ if post-selection is chosen when detector $D_{2}$ clicks, that is $\left\vert \psi_{f}\right\rangle =\left( \left\vert \psi _{A}\right\rangle +i\left\vert \psi_{E}\right\rangle \right) /\sqrt{2}$, the following weak values are obtained: \begin{align} \text{ }\Pi_{A}^{w} & =1\qquad\Pi_{D}^{w}=0\label{c1}\\ \Pi_{B}^{w} & =\frac{1}{2}\qquad\Pi_{C}^{w}=-\frac{1}{2}\\ \Pi_{E}^{w} & =0.\label{c3} \end{align} This means that the quantum particle is detected inside the nested interferometer, but is not detected in the ingoing (D) or outgoing (E) arms. The weak trace is therefore discontinuous. In the paper \textquotedblleft Does the weak trace show the past of a quantum particle?\textquotedblright\ \cite{P} Hance \emph{et al.} claim that the weak trace approach is inconsistent because weak measurements disturb the system. 
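As an aside, the weak values (\ref{c1})--(\ref{c3}) can be reproduced with an explicit toy model of the interferometer. The beam-splitter matrices and phase conventions below are our own choices (fixed so as to match the quoted values); each weak value is computed from Eq.~(\ref{w}), using the forward-evolved preselected state and the backward-evolved post-selected state at the relevant time slice.

```python
import numpy as np

# Arm labels: A bypasses the nested interferometer, D enters it, B and C are
# the inner arms, E is the outgoing arm, F is the inner interferometer's
# other output; v1, G1, G2 are auxiliary ports that keep the maps unitary.
A, B, C, D, E, F, v1, G1, G2 = range(9)
dim = 9

def bs(p, q, r, s):
    """50/50 beam splitter, inputs (p, q) -> outputs (r, s):
    |p> -> (|r> + i|s>)/sqrt(2),  |q> -> (i|r> + |s>)/sqrt(2)."""
    U = np.eye(dim, dtype=complex)
    for m in (p, q, r, s):
        U[:, m] = 0
    U[r, p], U[s, p] = 1 / np.sqrt(2), 1j / np.sqrt(2)
    U[r, q], U[s, q] = 1j / np.sqrt(2), 1 / np.sqrt(2)
    U[p, r], U[q, s] = 1, 1     # dummy columns, so that U stays unitary
    return U

U1 = bs(D, v1, C, B)   # first beam splitter of the nested interferometer
U2 = bs(C, B, E, F)    # second one, tuned so that arm E interferes destructively
U3 = bs(A, E, G1, G2)  # final beam splitter; detector D2 sits on mode G2

def wv(phi, x, psi):
    """Weak value of the projector on arm x: <phi|Pi_x|psi> / <phi|psi>."""
    return np.conj(phi[x]) * psi[x] / np.vdot(phi, psi)

# Forward-evolved preselected state |psi_i> = (|D> + i|A>)/sqrt(2) ...
psi0 = np.zeros(dim, complex)
psi0[D], psi0[A] = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi1 = U1 @ psi0
psi2 = U2 @ psi1                 # the amplitude on arm E vanishes here
# ... and backward-evolved post-selected state (click of detector D2).
g2 = np.zeros(dim, complex)
g2[G2] = 1
phi2 = U3.conj().T @ g2
phi1 = U2.conj().T @ phi2
phi0 = U1.conj().T @ phi1

weak_A, weak_D = wv(phi0, A, psi0), wv(phi0, D, psi0)   # 1 and 0
weak_B, weak_C = wv(phi1, B, psi1), wv(phi1, C, psi1)   # 1/2 and -1/2
weak_E = wv(phi2, E, psi2)                              # 0
```

The discontinuity of the weak trace is visible directly: the projector weak values are nonzero on A, B and C, but vanish on the ingoing arm D and the outgoing arm E.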
They argue that if weak interactions are made inside the nested interferometer, then the perfect destructive interference on arm E that exists when no weak couplings are implemented is disturbed, so that the wavefunction $\psi_{E}(x)$ is not zero anymore. This is indeed the case, but this observation is irrelevant to having a vanishing weak value.\ Put differently Eq. (\ref{c3}) may hold irrespective of whether $\psi_{E}(x)$ vanishes or not\footnote{Note this is the case here for arm D, since $\Pi_{D}^{w}=0$ but $\psi_{D}(x)$ is not zero. The post-selected state evolved backward in time does vanish on arm D, but this backward evolved state does not seem to be taken as physically real in Ref. \cite{P}.}. A vanishing weak value requires the numerator of Eq. (\ref{w}), a transition amplitude, to vanish, not the wavefunctions. This can be seen by slightly modifying the interferometer pictured in Fig.\ 1 in the following way (see Fig.\ 2). We now include the polarization of the photon and choose $\left\vert \psi_{i}\right\rangle =\left( \left\vert \psi_{D}\right\rangle +i\left\vert \psi_{A}\right\rangle \right) \left\vert H\right\rangle /\sqrt{2}$ as the initial state (where $\left\vert H\right\rangle ,\left\vert V\right\rangle $ stand for horizontal and vertical polarization, and $\left\vert \nearrow\right\rangle =\left( \left\vert H\right\rangle +\left\vert V\right\rangle \right) /\sqrt{2}$, $\left\vert \searrow\right\rangle =\left( \left\vert H\right\rangle -\left\vert V\right\rangle \right) /\sqrt{2}$ label the polarization in the diagonal basis). Inside the nested interferometer, we add wave-plates in order to rotate the polarization on each arm such that the state of the photon inside the interferometer becomes $i\left\vert \psi_{B}\right\rangle \left\vert \nearrow\right\rangle +\left\vert \psi_{C}\right\rangle \left\vert \searrow\right\rangle $. 
Now on arm E the wavefunction becomes $\left\vert \psi_{E}\right\rangle \left( -\left\vert \nearrow\right\rangle +\left\vert \searrow\right\rangle \right) $ which does not vanish. However, it is an easy exercise to check that the structure of Eqs. (\ref{c1})-(\ref{c3}) is left untouched -- Eqs. (\ref{c1}) and (\ref{c3}) remain identical while $\Pi _{B}^{w}$ and $\Pi_{C}^{w}$ pick up a factor depending on the polarization rotation. $\Pi_{E}^{w}$ vanishes because the state on arm E\ is orthogonal to the post-selected one. This counter-example, an adaptation to the present discussion of a 3-paths atomic interferometer introduced previously \cite{duprey} (see also \cite{soko}), disproves the argument given in \cite{P} since we still have $\Pi_{E}^{w}=0,$ but now implementing the weak interactions does not change the state on arm E from an undisturbed vacuum to a \textquotedblleft perturbed state with light present\textquotedblright\ \cite{P} (as for any weak measurement, we have a slight perturbation introduced by the weak couplings). And the trace of the spatial degree of freedom of the quantum particle remains discontinuous. Note that in this counter-example the post-selected state evolved backward in time does not vanish in arm D, although $\Pi_{D}^{w}=0$. The second aim of \cite{P} is to show that \textquotedblleft the weak trace does not reveal the path of a quantum particle\textquotedblright. The authors assert that a quantum particle must have a continuous path (like a classical particle), as per their condition ii). As it is given, this condition appears somewhat arbitrary and not particularly meaningful: it depends on how a path is defined for a quantum particle, and has no relation with an observational warrant that could confirm this claim. 
Indeed, the wavefunction is continuous and can be understood as propagating along Feynman paths.\ But these paths can interfere, and the destructive interference between amplitudes carried by different Feynman paths is precisely what makes a weak value vanish \cite{APRR}.\ However the wavefunction or the Feynman paths are usually taken to be mere computational tools (at least according to standard quantum mechanics), so while it is possible to explain discontinuous weak value traces in terms of continuous but destructively interfering paths, this hardly changes the experimental fact that position weak values are discontinuous. The underlying issue here is not to decree that quantum particles \emph{must} have continuous paths, but (a) to put forward a cogent framework in order to define properties of a quantum system at an intermediate time; and (b) suggest an experimental scheme in order to observe such properties.\ Obviously, one can refuse to ascribe properties to quantum systems at intermediate times, but then the path of a quantum particle cannot even be defined (and the issue of whether these paths are continuous or not becomes moot). It turns out that weak measurements already constitute a framework in which weak values can be related to quantum properties, provided one is willing to relax the eigenstate-eigenvalue link (see \cite{FP} and Refs.\ therein). And the resulting weak values, even if they predict a non-classical behavior, can be experimentally observed. To conclude, we have shown that the arguments given in \cite{P} aiming to show that weak values of the spatial projector give an inconsistent account of the past position of a quantum particle are incorrect. The interpretation of weak values is controversial \cite{controversy}, and it is interesting to confront different viewpoints in order to understand the elusive nature and properties of quantum systems. Nevertheless, it is preferable to base the discussion on technically relevant arguments. 
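As a closing numerical check of the counter-example of Fig.~2: with the in-arm amplitudes $i$ and $1$ on B and C, the polarization rotations described above, and our own beam-splitter phase convention (B and C reach arm E with amplitudes $i/\sqrt{2}$ and $1/\sqrt{2}$), the arm-E state is nonzero yet orthogonal to the post-selected polarization, here assumed to be $|H\rangle$, so $\Pi_{E}^{w}=0$ even though $\psi_{E}$ does not vanish.

```python
import numpy as np

Hpol = np.array([1.0, 0.0])               # |H>
Vpol = np.array([0.0, 1.0])               # |V>
diag_up = (Hpol + Vpol) / np.sqrt(2)      # diagonal polarization
diag_dn = (Hpol - Vpol) / np.sqrt(2)      # anti-diagonal polarization

# Inside the nested interferometer the state is i|B>(diag_up) + |C>(diag_dn).
# B and C reach arm E with amplitudes i/sqrt(2) and 1/sqrt(2) (our phase
# convention), so the polarization state carried by arm E is:
e_arm = 1j * (1j / np.sqrt(2)) * diag_up + (1 / np.sqrt(2)) * diag_dn

psi_E_norm = np.linalg.norm(e_arm)        # nonzero: psi_E no longer vanishes
overlap_H = abs(Hpol @ e_arm)             # 0: orthogonal to the post-selection
```

The arm-E state comes out proportional to $|V\rangle$: the wavefunction there is manifestly nonzero, while the transition amplitude in the numerator of Eq.~(\ref{w}) vanishes.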
\end{document}
\begin{document} \title{A Study on Splay Trees\thanks{This work was supported by national funds through Funda\c{c}\dots}} \begin{abstract} We study the dynamic optimality conjecture, which predicts that splay trees are a form of universally efficient binary search tree for any access sequence. We reduce this claim to a regular access bound, which seems plausible and might be easier to prove. This approach may be useful to establish dynamic optimality\footnote{This draft version, as it exists immediately prior to editing and production by the Publisher will be posted on the noncommercial pre-print server arXiv.org, in accordance with SIAM's Consent to Publish.}. \end{abstract} \begin{keywords} Data Structures, Binary Search Trees, Splay Trees, Dynamic Optimality, Competitive Analysis, Amortized Analysis \end{keywords} \begin{AMS} 68P05, 68P10, 05C05, 94A17, 68Q25, 68P20, 68W27, 68W40 \end{AMS} \pagestyle{myheadings} \thispagestyle{plain} \section{Introduction} \label{sec:introduction} Binary search trees (BSTs) are ubiquitous in computer science. Their relevance is well established, both in theory and in practice. Figure~\ref{fig:intro} illustrates this data structure. The numbers in the nodes represent the keys. If read left to right, the keys form an ordered sequence, shown below the tree. This property can be used to efficiently determine if a given number exists in the tree. If we want to check if the tree contains the number $0.55$, we can start by comparing this value with the one at the root, $0.56$. Since the number we are searching for is smaller than $0.56$, the search continues in the left sub-tree, i.e., it proceeds to the node containing $0.40$, thus discarding the entire sub-tree to the right. It is this process of eliminating a large portion of the search space that makes BSTs efficient.
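The search just described can be sketched in code. The following is a minimal sketch (the insertion order below is our own choice and merely fixes one possible tree shape; the predecessor/successor answers do not depend on it):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Plain BST insertion, used here only to build an example tree."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, x):
    """Standard BST search.  On failure, the predecessor and successor of x
    among the stored keys have been identified along the way."""
    pred = succ = None
    node = root
    while node is not None:
        if x == node.key:
            return node, pred, succ
        if x < node.key:
            succ = node              # best successor candidate so far
            node = node.left         # discard the entire right sub-tree
        else:
            pred = node              # best predecessor candidate so far
            node = node.right        # discard the entire left sub-tree
    return None, pred, succ

root = None
for k in [0.56, 0.40, 0.85, 0.21, 0.51, 0.41, 0.52]:
    root = insert(root, k)

node, pred, succ = search(root, 0.55)   # fails: pred is 0.52, succ is 0.56
```

As in the walkthrough above, the failed search for $0.55$ still yields its neighbours among the stored keys, $0.52$ and $0.56$.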
\begin{figure} \caption{Binary search tree containing keys from $[0,1]$.} \label{fig:intro} \end{figure} As the search goes through the tree it will eventually reach the node containing $0.52$, at which point it becomes apparent that $0.55$ is not present in the structure. In fact, this search concludes that no number in the open interval $]0.52, 0.56[$ exists in the tree. This is the only valid conclusion. Even though the search passed through nodes $0.40$ and $0.51$, it would not be valid to conclude that no number in $]0.40,0.51[$ exists, because $0.41$ is in the tree. Hence, whenever a search fails, we still obtain information from the data structure. In particular, we obtain the numbers in the tree that are the successor ($0.56$) and the predecessor ($0.52$) of $0.55$. Consider how the shape of the tree affects the performance of the queries. If we are computing a single, isolated query, we may want to guarantee that, at each node, the size of the left and right sub-trees is roughly the same. Note that we never know which sub-tree a given search will choose. If both sub-trees have the same size, at least half the search space is discarded at each step. Alternatively, we could impose that the tree does not contain long branches, since a search might have to traverse all of it. Either way, some policy must be used to maintain a tree shape that reduces the search time. Determining ``good'' tree shapes is a complex task, especially when we consider a sequence of queries, instead of a single query. Moreover, most of the time, this sequence is not known a priori. If a sequence of a couple million queries searches for $0.21$ half of the time and for $0.85$ only once, then it might be better to keep $0.21$ at the root and leave $0.85$ on the longest existing branch. We would expect such a shape to reduce the overall time. Hence a balanced tree is important if we need to answer several essentially distinct queries.
On the other hand, if the queries are strongly biased towards a specific region of the keys, maintaining certain branches shorter than the rest might be necessary. Several strategies are known to maintain efficient BST shapes; see Knuth~\cite{Knuth:1998:ACP:280635} for an introduction to the subject. In this paper we focus on the approach used by splay trees~\cite{Sleator:1985:SBS:3828.3835}. Our goal is to show that this approach is optimal, in the sense that no other strategy can be asymptotically faster than splay trees. Notice that splay trees dynamically alter their structure as they process queries. Hence we assume that any other BST can do the same. Figure~\ref{fig:introSp} illustrates the tree that results from accessing $0.21$ on a splay tree. \begin{figure} \caption{After splaying $0.21$.} \label{fig:introSp} \end{figure} Our main contribution is the following: \begin{itemize} \item We, almost, show that splay trees are dynamically optimal. This means that no other BST is asymptotically faster than splay trees, no matter the structure of the query sequence or the re-shaping policy it uses. This property was conjectured to be true over 30 years ago~\cite{Sleator:1985:SBS:3828.3835}. Our proof depends on a regular access property, which we deem plausible and conjecture to be true; see Section~\ref{sec:analysis-overview}. At this time we do not have a proof of this property. \end{itemize} \section{The Problem} \label{sec:problem} The introduction exemplified how a BST can be used to maintain a finite set $K$ of keys, elements of a universe set $X$. In our example $X$ is $\mathbb{Q}$. Our goal is to maintain a data structure such that, given a fraction $x$ from $\mathbb{Q}$, we can determine if $x$ is in $K$, or not. We refer to these operations as queries. A query is successful if $x$ does belong to $K$ and unsuccessful otherwise.
Several efficient data structures exist for this problem, depending on which resources are critical and on which extra operations are necessary. For our example we also want map behaviour. This means that the elements of $K$ contain information. Whenever $x$ does exist in $K$ we also want to access the associated information. The two major classes of data structures that can be used in this scenario are trees and hashes. The set $\mathbb{Q}$ is, in general, referred to as the key universe and the associated information is referred to as the value set. In this paper we omit this latter set. Contrary to hashes, BSTs rely on the order relation among the elements of the universe. In $\mathbb{Q}$ we have for example that $(1/4) < (1/2)$. Hence BSTs can efficiently determine successors and predecessors. The successor of $x$ in $K$ is $\min\{y \in K \mid x < y\}$. To simplify the analysis we do not consider unsuccessful queries, i.e., we assume that all the queries find elements in $K$. An unsuccessful query for $x$ can be modelled by two successful ones, one for the successor and one for the predecessor. In general BSTs support inserting and removing elements from $K$, but we do not study those operations either. The re-structuring policy of splay trees consists of moving the accessed node, and the nodes in the respective path, upwards towards the root. The precise operations are shown in Figures~\ref{F:zig},~\ref{F:zigzig}~and~\ref{F:zigzag}, where the node containing $x$ is the one being accessed. The configuration before the access is represented on the left and the structure after the access on the right. Accesses when $x$ is on the right sub-tree are obtained by symmetry.
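The three restructuring operations can be sketched in code. The following is a minimal bottom-up splay (naming and structure are our own; plain insertion builds the tree, and splaying happens on access, as in Figure~\ref{fig:introSp}):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.parent = key, None, None, None

def rotate_up(x):
    """Rotate x above its parent p (a single Zig step)."""
    p, g = x.parent, x.parent.parent
    if x is p.left:
        p.left = x.right
        if p.left: p.left.parent = p
        x.right = p
    else:
        p.right = x.left
        if p.right: p.right.parent = p
        x.left = p
    p.parent, x.parent = x, g
    if g is not None:
        if g.left is p: g.left = x
        else: g.right = x

def splay(x):
    """Move x to the root using Zig, Zig Zig and Zig Zag steps."""
    while x.parent is not None:
        p, g = x.parent, x.parent.parent
        if g is None:
            rotate_up(x)                        # Zig
        elif (g.left is p) == (p.left is x):
            rotate_up(p); rotate_up(x)          # Zig Zig: rotate p first
        else:
            rotate_up(x); rotate_up(x)          # Zig Zag: rotate x twice
    return x

def bst_insert(root, key):
    """Plain BST insertion; restructuring happens only on access."""
    n = Node(key)
    if root is None:
        return n
    cur = root
    while True:
        nxt = cur.left if key < cur.key else cur.right
        if nxt is None:
            break
        cur = nxt
    if key < cur.key:
        cur.left = n
    else:
        cur.right = n
    n.parent = cur
    return root

def access(root, key):
    """Search for key, then splay the found node to the root."""
    node = root
    while node.key != key:
        node = node.left if key < node.key else node.right
    return splay(node)

root = None
for k in [0.56, 0.40, 0.85, 0.21, 0.51]:
    root = bst_insert(root, k)
root = access(root, 0.21)    # 0.21 is now at the root
```

The distinction between the Zig Zig and Zig Zag cases (rotating the parent first versus rotating $x$ twice) is exactly what the amortized analysis of splay trees relies on.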
\begin{figure} \caption{Zig operation.} \label{F:zig} \end{figure} \begin{figure} \caption{Zig Zig operation.} \label{F:zigzig} \end{figure} \begin{figure} \caption{Zig Zag operation.} \label{F:zigzag} \end{figure} To compare splay trees with any other BST we assume that the other BST works in the following way. Besides the tree itself there is a cursor that moves between nodes. An algorithm on a BST may perform any of the following operations: \begin{description} \item[Compare] the key at the cursor with the current search value. \item[Move] the cursor to an adjacent node, left child, right child or parent. \item[Rotate] the node at the cursor upwards. The Zig operation is a rotation of node $x$ (Figure~\ref{F:zig}). \end{description} To perform a search in a BST the cursor starts at the root and performs a sequence of the previous operations. The result of the search is the node for which a comparison with $x$ returned equal. This node may, or may not, be the last one in the sequence. At the end of a search the cursor must return to the root; those moves are accounted for. In this model we allow for a constant amount of extra information at each node, such as colour for red-black trees or height for AVL trees. The intention is to forbid dynamic memory blocks, which could be used to implement hashing. Hence only a fixed constant amount of information can be added to a node. Other BST models exist that can be shown to be within a constant factor of the one we consider~\cite{DBLP:journals/siamcomp/Wilber89}. Among all BSTs we focus on the one that achieves the best performance for a query sequence. This means choosing the tree after knowing the complete query sequence, in other words, offline. We refer to this optimal tree as $T$. We do not count compare operations and therefore assume that they cost $0$. For the optimal tree we count the number of moves and rotations, but for the splay tree we only count the number of moves. 
This avoids having to carry around a factor of $2$ or $3$, which would result from counting rotations and/or comparisons. \section{Analysis Overview} \label{sec:analysis-overview} In this section we overview the main techniques in the analysis and describe our new potential function. \begin{description} \item[Amortized analysis.] The analysis is amortized, meaning that the time a splay operation takes is not accounted per se, but considering preceding and succeeding operations~\cite{doi:10.1137/0606031}. Therefore the amount of time an operation requires can be partially shifted to some other operation in the sequence. The resulting time is known as the amortized time. The most common way to transform the total sum into an equivalent telescopic sum is to associate a ``potential'' value to each state of the data structure $D$, the value $\Phi(D)$. The amortized cost of an operation is then defined as $\hat{c} = c + \Phi(D') - \Phi(D)$, where $c$ is the actual cost and $\Phi(D')$ the potential after the operation is performed. Summing the previous equations over all $m$ queries, and using the fact that it is a telescopic sum, we obtain the global relation $\sum_{1\leq i \leq m} c_i = \left( \sum_{1\leq i \leq m} \hat{c}_i \right) - \Phi(D_m)$. In this equation $\Phi(D_m)$ represents the potential after the sequence, which might be a negative number. The value $\Phi(D_0)$ represents the potential at the beginning and was omitted because it is assumed to be $0$. This is a classical tool in the analysis of splay trees. We present a new potential function. This function is chosen so that the structure of the optimal tree gets transferred into the splay tree. \item[Restricted Accesses.] After bounding the amortized cost of accesses in $S$ we proceed to amortize the cost of rotations in $T$. Such rotations alter $\Phi(D)$ and therefore must be accounted for. To obtain a constant bound on this variation we introduce extra organizing splays in $S$. 
The amortized time to compute these extra splays will depend on the depth of the node we are rotating in $T$. Therefore if $T$ performs two rotations in a row we need to count the node depth twice. To obtain a valid bound we force $T$ to move its cursor back to the root after performing a rotation. We also further impose that $T$ may not access nodes of depth $3$ or more. Although these restrictions seem harsh we show that, with a linear slowdown factor, it is possible to force the optimal sequence to respect them, Lemma~\ref{lem:simulateT}. \item[Organizing Splays.] Splay trees do not know the full sequence of queries in advance; fortunately the analysis does. This means that in the analysis it is possible to verify if the structure of the splay tree is adequate for the upcoming queries. We optimize the tree structure by adding extra organizing splays. These splays introduce the logical dependency on the regular access property. \end{description} The regular access statement is the following: \newtheorem{conjecture}[theorem]{Conjecture} \begin{conjecture}[Regular Access] \label{C:regular} Let $c_1 + \ldots + c_{m}$ be the total cost of a sequence of $m$ splay operations and $c'_1 + \ldots + c'_{m+e}$ the cost of performing the same sequence of splay operations, augmented with $e$ extra splays anywhere. Then $c_1 + \ldots + c_{m} = O(c'_1 + \ldots + c'_{m+e})$. \end{conjecture} This conjecture basically states that computing more splays takes longer. We assumed this property in our reasoning, until it became evident that it was not trivial to establish. Our proof relies precisely on this property, because the extra splays are used to optimize the tree for future accesses. We will now define the potential function $\Phi$ and give an overview of the results that we prove in Section~\ref{sec:details}. Recall that $T$ denotes the optimal tree. To every node $v$ of $T$ we assign a weight $w_T(v) = 1/4^{d_T(v)}$, where $d_T(v)$ is the depth of $v$, i.e., the distance to the root. 
The root itself has depth $0$, its children have depth $1$ and so on. For each node we add up all the weights of its sub-tree, including $w_T(v)$ itself; the resulting sum is denoted $s_T(v)$. The rank $r_T(v)$ is computed as $\log(s_T(v))$. The potential of tree $T$ is given by the following sum: \[ P(T) = \sum_{v \in T} r_T(v) \] Let us consider the trees in Figure~\ref{F:examplePhi}. We have that $w_T(a) = w_T(c) = 1/16$, $w_T(b) = w_T(e) = 1/4$ and $w_T(d) =1$. The resulting $r$ values are $r_T(a) = r_T(c) =\log(1/4^2) = -4$, $r_T(e) = -2$, $r_T(b) = \log (1/4+1/8) = \log (3/8)$ and $r_T(d) = \log (1 + 1/2 + 1/8) = \log (13/8)$. Hence the total potential of $T$ is $P(T) = r_T(b) + r_T(d) -10$. We use $S$ to denote the splay tree. There is a one-to-one relation between the nodes of these trees, because both trees share the same key values. For any node $v$ of $T$, its correspondent in $S$ is $f(v)$; by correspondent we mean that $v$ and $f(v)$ store the same key value. This correspondence is used to transfer the weight values from $T$ to $S$; more precisely we set $w(f(v)) = w_T(v)$. We omit the $S$ subscript to simplify notation. The values of $w$, $s$ and $r$ that do not have a subscript refer to $S$. Moreover we also omit the function $f$ and assume instead that $v$ in $S$ means $f(v)$. Hence the previous relation will be written as $w(v) = w_T(v)$. The values $s(v)$ and $P(S)$ are computed as before. In general $s(v)$ and $s_T(v)$ are not, necessarily, equal. See Figure~\ref{F:examplePhi}. Likewise $P(T)$ and $P(S)$ are also not, necessarily, equal. In fact we use $\Phi(D) = P(S) - P(T)$ as our potential function. In our example we have that $r(a) = r_T(a)$, $r(c) = r_T(c)$, $r(e) = r_T(e)$, $r(b)= r_T(d)$ and $r(d) = \log(1 + 1/4 + 1/16) = \log (21/16)$. Therefore $\Phi(D) = \log(21/16) - \log(3/8) = \log(7/2)$. \begin{figure} \caption{Optimal tree $T$ on top, splay tree $S$ at the bottom, upside down. Example of computing $\Phi$. 
The $f$ function is illustrated by the dashed arrows. The weight values $w$ are shown in between the two trees. Inside the black rectangles we show the $s$ values associated with the nodes. } \label{F:examplePhi} \end{figure} In the analysis we assume that $S$ and $T$ take turns. First $S$ searches for $x$ and $T$ remains idle. Then $T$ searches for the same $x$, while $S$ remains idle. Hence whenever the structure of $S$ changes $P(S)$ changes, but $P(T)$ remains constant. However when the structure of $T$ changes, both $P(T)$ and $P(S)$ change. The structure of our argument is as follows: \begin{itemize} \item We show that $-n < \Phi(D_m)$, Lemma~\ref{lem:bound-phi}, where $n$ is the number of keys. \item We show that the amortized cost to splay a node $v$, in $S$, is at most $O(1 + d_T(v))$, Lemma~\ref{lem:amortizedS}. \item We show that whenever there is a rotation at node $v$ in $T$ we have $\Delta \Phi = O(1)$, by splaying at most $3$ nodes, whose depth in $T$ is at most $d_T(v)$, Lemma~\ref{lem:powerDelta}. \item We combine these results with a restricted optimal sequence, Lemma~\ref{lem:simulateT}, and obtain an optimality result, Theorem~\ref{teo:optimal}. \end{itemize} Thus we establish that if the optimal tree, with $n$ nodes, needs $R$ rotations and $M$ moves to process a given query sequence then splay trees require $O(n + R + M)$ time, provided they have the regular access property. \section{The Details} \label{sec:details} \begin{figure} \caption{Symbol table.} \label{fig:notation} \end{figure} In this section we fill in the details that are summarized in the previous section. This section is divided into three parts. We start by studying the properties of our potential function $\Phi$. We then shift the focus to $T$ and explain how to restrict the general access sequence to ensure that it respects certain desirable conditions. The last part presents bounds for the amortized costs of accessing elements in $S$ and rotating nodes in $T$. 
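As a sanity check, the computation of $\Phi$ for the example in Figure~\ref{F:examplePhi} can be reproduced numerically. The following Python sketch is our own illustration; the nested-tuple tree encoding and the helper names are assumptions of the sketch, not notation from the paper. It assigns the weights $w_T(v) = 1/4^{d_T(v)}$ from $T$, transfers them to $S$ by key, and evaluates $P(T)$, $P(S)$ and $\Phi(D) = P(S) - P(T)$:

```python
# Numerical check of the worked example (our own encoding; trees are
# nested tuples (key, left, right)).
from math import log2

# Optimal tree T: root d with children b and e; b has children a and c.
T = ('d', ('b', ('a', None, None), ('c', None, None)), ('e', None, None))
# Splay tree S over the same keys: root b with children a and d;
# d has children c and e.
S = ('b', ('a', None, None), ('d', ('c', None, None), ('e', None, None)))

def weights(tree, depth=0, acc=None):
    """w_T(v) = 1/4^{d_T(v)}: weights are assigned by depth in T."""
    if acc is None:
        acc = {}
    if tree is not None:
        key, left, right = tree
        acc[key] = 1.0 / 4 ** depth
        weights(left, depth + 1, acc)
        weights(right, depth + 1, acc)
    return acc

def potential(tree, w):
    """Return (s, P): s is the subtree weight sum, and P is the sum of
    ranks r(v) = log2(s(v)) over the subtree."""
    if tree is None:
        return 0.0, 0.0
    key, left, right = tree
    s_left, p_left = potential(left, w)
    s_right, p_right = potential(right, w)
    s = w[key] + s_left + s_right
    return s, p_left + p_right + log2(s)

w = weights(T)            # weights come from depths in T only
_, P_T = potential(T, w)
_, P_S = potential(S, w)  # same weights, transferred by key
phi = P_S - P_T           # Phi(D) = P(S) - P(T)
```

The computed value is $\log(21/16) - \log(3/8) = \log(7/2)$, matching the example, and it lies comfortably above the $-n$ bound of Lemma~\ref{lem:bound-phi} for $n=5$ keys.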
We then combine these results to obtain an optimality result. \subsection{Properties of $\Phi$} \label{sec:properties-phi} Let us start by analyzing the potential function $\Phi(D)$. We assume that the initial configuration of $S$ and $T$ is the same. Therefore $P(S)=P(T)$ and $\Phi(D_0) = 0$. Given that we are using base $2$ logarithms it would be natural to use base $(1/2)$ powers for the weights $w$ and $w_T$. Using $(1/4)$ instead causes the amortized access cost to be twice as big. On the other hand we obtain some handy properties. \begin{lemma} \label{lem:boundSV} Let $v$ be a node. The following bounds hold: \begin{align} 0 &\leq w(v) &&\mbox{ for any $v$ in $T$ or $S$.} \label{eq:basic0} \\ w(v) &\leq 1 &&\mbox{ for any $v$ in $T$ or $S$.} \label{eq:basic4} \\ w(v) &\leq s(v) &&\mbox{ for any $v$ in $T$ or $S$.} \label{eq:basic1} \\ s_T(v) &< 2 \times w_T(v)&& \mbox{ only for $v$ in $T$.} \label{eq:basic2} \\ s(v) &< 2 &&\mbox{ for any $v$ in $T$ or $S$ } \label{eq:basic3} \\ s(t') &= s_T(t) &&\mbox{ for the root $t'$ of $S$ and $t$ of $T$ } \label{eq:basic5} \end{align} \end{lemma} \begin{proof} Inequalities~(\ref{eq:basic0}) and~(\ref{eq:basic4}) follow directly from our definition of weights. Inequality~(\ref{eq:basic1}) follows from the definition of $s$ and Inequality~(\ref{eq:basic0}). Inequality~(\ref{eq:basic2}) uses a geometric series. For the nodes $v$ in $T$ the value $s_T(v)$ is a sum of powers of $1/4$, which depends on the topology of the tree. If $T$ is a single branch then $s_T(v) < w_T(v) (1 + (1/4) + (1/4^{2}) + \ldots + (1/4^j) + \ldots) < (4 w_T(v)/3)$. If $T$ is perfectly balanced, this bound becomes even larger. In that case $s_T(v) < w_T(v)(1 + 2 \times (1/4) + 2^2 \times (1/4^2) + \ldots + 2^j \times (1/4^j) + \ldots) < 2 \times w_T(v)$. In this case the bound comes from the geometric series of $(1/2)$. Moreover the remaining cases are also bounded by the geometric series of $(1/2)$. 
Inequality~(\ref{eq:basic3}) holds for $T$, because of inequalities~(\ref{eq:basic4}) and~(\ref{eq:basic2}). For $S$ note that in general $T$ cannot have more than $2^d$ nodes with depth $d$ and consequently $S$ cannot have more than $2^d$ nodes with weight $(1/4^d)$; hence we obtain the geometric series of $(1/2)$ again. For equality~(\ref{eq:basic5}) notice that the sums at the root contain all the weight values. Since these values are mapped from $T$ to $S$ they are globally the same and therefore so is their sum. \end{proof} In particular from Inequality~(\ref{eq:basic3}) we deduce that $r(t) \leq 1$, for the root $t$ of $S$. Using potential functions that can assume negative values implies that the value of $\Phi(D_m)$ becomes a term in the total time. Hence we bound this value. \begin{lemma} \label{lem:bound-phi} The bound $-n < \Phi(D)$ holds for any configuration of $S$ and $T$, where $n$ is the number of keys. \end{lemma} \begin{proof} The following derivation establishes the result. \begin{eqnarray} P(S) - P(T) & = & \Phi(D) \\ \sum_{v \in S} \log(w(v)) - P(T) & \leq & P(S) - P(T)\label{eq:2}\\ \sum_{v \in S} \log(w(v)) - \sum_{v \in T} \log(2w_T(v)) & < & \sum_{v \in S} \log(w(v)) - P(T) \label{eq:3}\\ -\sum_{v \in T} 1 & = & \sum_{v \in S} \log(w(v)) - \sum_{v \in T} \log(2w_T(v)) \label{eq:4} \\ -n & = & -\sum_{v \in T} 1 \end{eqnarray} To obtain Inequality~(\ref{eq:2}) use Inequality~(\ref{eq:basic1}), for all the terms of the sum in $P(S)$. For Inequality~(\ref{eq:3}) use Inequality~(\ref{eq:basic2}), apply logarithms and change signs. Equation~(\ref{eq:4}) follows from the same argument as Equation~(\ref{eq:basic5}) and the fact that $\log(2w_T(v)) = 1 + \log(w_T(v))$. \end{proof} \subsection{Restricting $T$} \label{sec:restricting-t} Competing directly with $T$ is fairly hard, partially because the sequence of accesses in $T$ is completely free. 
For our purposes we need that the accesses in $T$ respect the following properties: \begin{itemize} \item When a node $v$ is visited by the cursor or involved in a rotation its depth is less than $3$, i.e., $d_T(v) < 3$. \item After a rotation the cursor moves back to the root. \end{itemize} We refer to a sequence of nodes that respects these conditions as a restricted sequence. Let us show that given any access sequence of visited nodes in $T$ it is possible to simulate it in another tree $T'$, so that the accesses in $T'$ are restricted. By simulation we mean that the sequence visited by the cursor of $T$ is a sub-sequence of the one visited by the cursor of $T'$. Naturally the simulation will be slower than the original sequence. \begin{lemma} \label{lem:simulateT} Any sequence of nodes visited by the cursor of $T$, consisting of $M$ moves and $R$ rotations, can be simulated by a restricted sequence of cursor moves on a tree $T'$ with $4M+3R$ moves and $2M+R$ rotations. \end{lemma} \begin{proof} \begin{figure} \caption{Simulation of $T$ (left) by $T'$ (right). Initial configuration on top. The middle configuration represents the result of moving the cursor from node $y$ to node $x$. The bottom configuration shows the result of performing a rotation on $x$, while in the middle configuration.} \label{fig:simul} \end{figure} Figure~\ref{fig:simul} illustrates the operations that we consider in this Lemma. Besides the same keys as $T$, the tree $T'$ contains two extra keys, $\min$ and $\max$, that are, respectively, smaller and larger than all the other keys in $T$. The simulation works by keeping the node at the cursor of $T$ in the root of $T'$. We assume that the initial states of $T$ and $T'$ are almost alike. The same key is stored at the root. The children of the root of $T'$ are $\min$ and $\max$. The left sub-tree of $T$ is stored in the right child of $\min$ and the right sub-tree of $T$ is stored in the left child of $\max$. This is shown at the top of Figure~\ref{fig:simul}. 
Whenever the cursor of $T$ moves down the corresponding node is moved up to the root of $T'$; the process is a ZigZag operation on $T'$, or a ZagZig, depending on which grandchild is involved. Note that between the rotations of the ZigZag operation the cursor must return to the root to comply with the second condition of restricted sequences. These movements are underlined in the example. Hence every downward move in $T$ originates $4$ moves and $2$ rotations in $T'$. This process transforms the trees in the top of Figure~\ref{fig:simul} to the trees in the middle. In this example the sequence of operations is: \texttt{moveTo($\min$)}, \texttt{moveTo($x$)}, \texttt{rotate()}, \underline{\texttt{moveTo($y$)}, \texttt{moveTo($x$)}}, \texttt{rotate()}. The cursor may also move upwards on the tree, hence reversing the previous move. In this case the sequence of operations would be: \texttt{moveTo($y$)}, \texttt{rotate()}, \texttt{moveTo($x$)}, \texttt{moveTo($\min$)}, \texttt{rotate()}, \texttt{moveTo($y$)}. This requires $4$ moves and $2$ rotations. Therefore a move in $T$ originates $4$ moves and $2$ rotations in $T'$. In this example the initial move was to $y$, because $y$ is the child in $T'$ that corresponds to the parent in $T$. To determine this property we could store, in $T'$, the depth of the nodes in $T$, but only for the nodes in the leftmost and rightmost branches of $T'$; otherwise the updated values would be hard to maintain. However it is not necessary to do so. Recall that $T'$ is simulating $T$ and $T$ is the optimal tree, which does not need to consult the keys to know which move to perform; it only needs to behave as a BST. Therefore $T'$ also does not need extra information. Whenever a rotation is performed in $T$ the structure of $T'$ must be adapted accordingly. Recall that a rotation on node $v$ means that $v$ is moved upwards. In Figure~\ref{fig:simul} the transition from the middle to the bottom shows how a rotation alters the structure of $T'$. 
In this case the sequence of moves is: \texttt{moveTo($y$)}, \texttt{moveTo($\max$)}, \texttt{rotate()}, \texttt{moveTo($x$)}. Hence a rotation originates $3$ moves and $1$ rotation in $T'$. The general procedure is to move to $y$, because it is the child of $T'$ that corresponds to the parent of $x$ in $T$, and move again in the same direction, in this case to $\max$. This node is rotated upwards and the procedure finishes by returning the cursor to the root. Note that in Figure~\ref{fig:simul} the cursor of $T$ is drawn close to the root. In general the structure of the upward path from the cursor of $T$ to the root gets split into the leftmost and rightmost branches of $T'$, but the update procedure is essentially as explained. \end{proof} In the following results we continue to use $T$, instead of $T'$, which simplifies the analysis. The tree $T'$ is used only in Theorem~\ref{teo:optimal}. \subsection{Amortized Costs of $S$ and $T$} \label{sec:amortized-costs-s} Let us now bound the amortized time of splaying a node of $S$. \begin{lemma} \label{lem:amortizedS} Splaying a node $v$ takes at most $4 + 6 d_{T}(v)$ amortized time, where $d_{T}(v)$ is the depth of the corresponding node in $T$. \end{lemma} \begin{proof} The following derivation establishes this bound: \begin{eqnarray} \hat{c}_{i} & \leq & 1 + 3[r(t) - r(v)] \label{eq:aad1}\\ & < & 1 + 3[1 - r(v)] \label{eq:aad2}\\ & = & 4 - 3 r(v) \label{eq:aad3}\\ & \leq & 4 - 3 \log(w(v)) \label{eq:aad4}\\ & = & 4 - 3 \log(1/4^{d_{T}(v)}) \label{eq:aad5}\\ & = & 4 + 6 d_{T}(v) \label{eq:aad6} \end{eqnarray} Inequality~(\ref{eq:aad1}) is the classic amortized access Lemma~\ref{lem:amortizedAccess}, see Section~\ref{sec:related-work}. Inequality~(\ref{eq:aad2}) follows from~(\ref{eq:basic3}) and recalling that $r(t) = \log(s(t))$. Inequality~(\ref{eq:aad4}) results from applying logarithms and switching signs in~(\ref{eq:basic1}). 
The remaining equations use the definition of weight $w$ and simplify the result. \end{proof} This Lemma shows that the amortized time to splay a node is slightly more than $6$ times the time that is necessary to access the corresponding node in $T$. Let us focus on the amortized cost of accessing nodes in $T$. Whenever $T$ accesses a node there is no real cost for $S$. However if those accesses involve rotations in $T$ then the value of $\Phi$ changes, which constitutes an amortized cost that $S$ must account for. \begin{lemma} \label{lem:powerDelta} Whenever a node $v$ of $T$, with $d_T(v) < 3$, gets rotated the bound $\Delta \Phi \leq 11 + \log (11)$ holds, after, at most, $3$ nodes of $S$ are splayed. Each splayed node $v^*$ has $d_T(v^*) \leq d_T(v)$. \end{lemma} \begin{proof} Recall that in a rotation the node $v$ moves upwards on the tree. We consider the cases when $d_T(v) = 1$ and $d_T(v) = 2$. \begin{figure} \caption{Variation of $\Phi$ with rotation on $T$.} \label{F:rotP} \end{figure} The proof is illustrated in Figure~\ref{F:rotP}, which shows the situation when $d_T(v) = 1$, but from which we can infer properties that apply to all cases. The optimal tree $T$ is represented on top and the splay tree $S$ at the bottom, upside down. To simplify, let us assume that these trees are only slightly different, meaning that they share some structure. The nodes in sub-trees $A$, $B$ and $C$ are the same, but their shape is not necessarily equal. For example the descendants of node $u$ are not, necessarily, the same. A fundamental observation for this proof is that almost all the nodes, in the trees, contribute $0$ to the value $\Delta \Phi$. The only nodes that effectively contribute to $\Delta \Phi$ belong to, at least, one of the following categories: \begin{enumerate} \item Nodes that contain descendants from more than one of the sets $A$, $B$, $C$, either in $S$ or $T$ or both. Examples of these nodes are the ones containing $x$ and $y$. 
Moreover some nodes $a \in A$, $b \in B$ and $c \in C$ may appear in $S$ with this property. \item Nodes for which the set of descendants is altered by the rotation in $T$. Only nodes $x$ and $y$. \end{enumerate} Hence we are claiming that the nodes in $A$, $B$ and $C$ do not contribute to $\Delta \Phi$. Consider a node $u$, contained in the sub-tree $A$. Let $w_T(u)$ be the weight of $u$ before the rotation and $w_T'(u)$ be the weight after the rotation. Likewise we consider the $s_T$, $s_T'$, $r_T$, $r_T'$ values and the values $s$, $s'$, $r$, $r'$, over $S$. Let us account for the contribution of $u$ to $\Delta \Phi$, i.e., $r'(u)-r(u)+r_T(u)-r'_T(u)= \log([s'(u)/s(u)]\times[s_T(u)/s'_T(u)])$. Notice that $w'(u)/w(u)= 4$, because the depth of $u$ decreases in $T$. Likewise $s'(u)/s(u)=4$, because the same happens for all the elements in $A$. On the other hand $s_T(u)/s'_T(u)= 1/4$, because this ratio is taken in the reverse order. Therefore the previous value is $\log (4/4) = 0$. A similar reasoning holds for the elements in $C$. The nodes in $B$ do not suffer potential variations and therefore contribute $0$ to the overall variation. In this scenario the only nodes that contribute to $\Delta \Phi$ are the ones that contain descendants from more than one of the $A$, $B$, $C$ sets. In our simplification only $x$ and $y$. However in the general case the nodes in sub-trees $A$, $B$ and $C$ have different relations in $S$ and in $T$. Hence we need to be more precise as to what these nodes should be. The nodes in $A$ are the descendants of $x$ that are smaller than $x$. The nodes in $B$ are the descendants of both $x$ and $y$. The nodes in $C$ are the descendants of $y$ that are larger than $y$. To force these sets of nodes to be consistent between $S$ and $T$ we splay node $x$ and node $y$ in $S$, in this order. 
Now $A$ and $C$ contain the same nodes in $S$ and $T$, but it may happen that some ZigZig operation pulls a node $b$ upwards from $B$, so that it splits $B$ and is in between the nodes $x$ and $y$. In this case we must also count the contribution from $b$. Using an argument similar to the one above we can guarantee that $s'(b)/s(b) \leq 4$ and $s_T(b)/s'_T(b) = 1$ and therefore $r'(b)-r(b)+r_T(b)-r'_T(b) \leq 2+0 = 2$. For the node containing $x$ we also have that $s'(x)/s(x) \leq 4$, but $s_T(x)/s'_T(x)$ is trickier, because $s'_T(x)$ now includes the extra terms related to $w_T'(y)$ and $s_T'(C)$; these values are non-negative and therefore decrease the fraction $s_T(x)/s'_T(x)$. Therefore the bound of $4$ holds true. Hence for the node containing $x$ we count another $4$ units, as $r'(x)-r(x)+r_T(x)-r'_T(x) \leq 2+2 = 4$. For the node containing $y$ we also have that $s'(y)/s(y) \leq 4$, but $s_T(y)/s'_T(y)$ is again trickier. The following derivation obtains a bound. \begin{eqnarray} s_T(y)/s'_T(y) & = & \frac{w_T(y)+s_T(C)+s_T(B)+w_T(x)+s_T(A)}{s'_T(y)} \label{eq:6} \\ & = & \frac{w_T(y)+s_T(C)+s_T(B)}{s'_T(y)} + \frac{w_T(x)+s_T(A)}{s'_T(y)} \label{eq:7}\\ & \leq & 4 + \frac{w_T(x)+s_T(A)}{s'_T(y)} \label{eq:8} \\ & \leq & 4 + \frac{w_T(x)+s_T(A)}{1/4} \label{eq:9} \\ & = & 4 + \frac{(1/4)+s_T(A)}{1/4} \label{eq:10} \\ & \leq & 4 + \frac{(1/4)+(1/8)}{1/4} \label{eq:11} \\ & \leq & 4 + 1.5 \label{eq:12} \end{eqnarray} Equations~(\ref{eq:6}) and~(\ref{eq:7}) are simple manipulations. Inequality~(\ref{eq:8}) is our general bound of $4$, for the fraction that does not change the descendants. Inequality~(\ref{eq:basic1}) yields $1/4 = w'_T(y) \leq s'_T(y)$, which we use to establish Inequality~(\ref{eq:9}). Inequality~(\ref{eq:11}) follows from Inequality~(\ref{eq:basic2}), which yields $s_T(A) \leq 2 \times 1/16 = 1/8$. Therefore the total for $y$ is $r'(y)-r(y)+r_T(y)-r'_T(y) \leq 2 + \log(5.5) = 1 + \log(11)$. 
Summing up, in this case we splayed two nodes and have to account for $b$, $x$ and $y$; therefore $\Delta \Phi \leq 2 + 4 + (1 + \log(11)) = 7 + \log(11)$. Let us consider the case $d_T(v) = 2$. This means that there is a node $v_r$ which is an ancestor of $y$, in $T$. Let us assume that $v_r$ is to the right of $C$. The case where the node is to the left of $A$ is easier. It may happen that $v_r$ is inside $C$ in $S$; in that case we splay $v_r$. This splay operation may in turn split $C$ by pulling up a node $c$. In total we splayed $3$ nodes and have $5$ nodes that may contribute to $\Delta \Phi$: the nodes $b$ and $c$, that got pulled up, and the nodes $x$, $y$, $v_r$. For node $c$ we have a bound of $4$, using the argument above, which holds because the descendants of $c$ in $T$ do not change. Hence $r'(c)-r(c)+r_T(c)-r'_T(c) \leq 2+2 = 4$. Note that $v_r = t$ is the root of both $S$ and $T$. Therefore we can bound its variation by $0$, instead of our pessimistic $4$ value. For this node we have $r'(t) = r'_T(t)$ and $r(t) = r_T(t)$, by Equation~(\ref{eq:basic5}). Hence in this case the total bound is $\Delta \Phi \leq 2 + 4 + 4 + (1 + \log(11)) + 0 = 11 + \log (11)$. Notice that all this splaying may change the ancestor relation between $x$ and $y$ in $S$. This may happen when there is no node $b$ and we are pulling up a node $v_\ell$, to the left of $A$. This relation in $S$ does not change the argument above, as both $x$ and $y$ are being pessimistically bounded in $S$. The crucial consequence of the extra splays is that the nodes in $A$, $B$ and $C$ are properly encapsulated, and the number of nodes whose descendants belong to more than one of these sets is limited. \end{proof} We can now prove our optimality result. \begin{theorem} \label{teo:optimal} Consider a sequence of $m$ queries to a splay tree $S$ with $n+2$ keys, for which a similar optimal tree $T$ uses $R$ rotations and $M$ cursor movements. 
Provided that splay trees have regular access, the total number of operations performed by $S$ is $O(n + R + M)$. \end{theorem} \begin{proof} The proof is given by the following deduction: \begin{eqnarray} \sum_{1 \leq i \leq m} c_i - k (n+2) & \leq & k \left( \sum_{1\leq i \leq m+e} (\ell + c'_i) \right) - k (n+2)\label{Eq:R0} \nonumber \\ & = & k \left( (m+e) \ell - (n+2) + \sum_{1\leq i \leq m+e} c'_i \right) \label{Eq:R1}\\ & \leq & k \left((m+e) \ell + \Phi(D_m) + \sum_{1\leq i \leq m+e} c'_i \right) \label{Eq:R2} \\ & = & k \left((m+e) \ell + \Phi(D_m) - \Phi(D_0) + \sum_{1\leq i \leq m+e} c'_i \right) \label{Eq:R3} \\ & = & k \left((m+e) \ell + \sum_{1\leq i \leq m+e} \hat{c}'_i \right) \label{Eq:R4} \\ & \leq & k \left[(m+e) \ell + (11 + \log 11) R' + (1 + 3) (4 + 6 M') \right] \label{Eq:R5} \\ & \leq & k \left[(m+3R') \ell + (11+\log 11) R' + (1 + 3) (4 + 6 M') \right] \label{Eq:R6} \\ & = & k \left[m \ell + 16 + (11 + 3 \ell + \log 11) R' + 24 M' \right] \label{Eq:R7} \\ & = & k \left[m \ell + 16 + (11 + 3 \ell + \log 11) (2M + R) + 24(4M+3R) \right] \label{Eq:R8} \\ & = & k \left[m \ell + 16 + (118 + 6 \ell + 2 \log 11) M + (83 + 3 \ell + \log 11) R \right] \label{Eq:R9} \end{eqnarray} The above deduction uses the restricted sequence over $T'$, instead of the original optimal sequence over $T$. We point out which of the results in the previous lemmas apply to $T'$. The values $R'$ and $M'$ refer to the number of rotations and cursor movements in $T'$. We aim to bound the value on the left. The first inequality is our regular access conjecture. Here we use explicit constants, $k$ and $\ell$, instead of the $O$ notation, to determine the hidden factors. Equation~(\ref{Eq:R1}) rearranges the sums. Equation~(\ref{Eq:R2}) follows from Lemma~\ref{lem:bound-phi}. In Equation~(\ref{Eq:R3}) we add the value $\Phi(D_0)=0$, because we are assuming that the initial structure of $S$ and $T'$ is the same. 
Equation~(\ref{Eq:R4}) follows from the telescopic nature of the definition of amortized costs. Inequality~(\ref{Eq:R5}) follows from Lemmas~\ref{lem:amortizedS} and~\ref{lem:powerDelta}. From Lemma~\ref{lem:amortizedS} we conclude that the sum of the $\hat{c}'_i$ that correspond to splay operations that answer queries is at most $(4 + 6 M')$, because $T'$ must also move its cursor to those nodes and between any such nodes the cursor returns to the root. Therefore for $T'$ to answer a query it must perform at least $d_T(v)$ cursor movements, where $v$ is the node containing the key that corresponds to the query. To bound the sum of the $\hat{c}'_i$ that correspond to the costs of updating $S$, due to rotations in $T'$, we use Lemma~\ref{lem:powerDelta}. Note that we can apply this Lemma because of the first property of restricted sequences. The fixed cost yields the term $(11 + \log 11) R'$. The amortized cost of splaying, at most, $3$ nodes of $S$ is bounded by the term $3(4 + 6 M')$. This bound holds because the second property of restricted sequences implies that $T'$ must perform at least $d_T(v)$ cursor movements before rotating node $v$. This property states that after each rotation the cursor of $T'$ returns to the root, see Section~\ref{sec:restricting-t}. In Equation~(\ref{Eq:R6}) we use the fact that $e \leq 3R'$, i.e., we use at most $3$ extra splays for each rotation in $T'$. Equation~(\ref{Eq:R7}) results from rearranging terms. In Equation~(\ref{Eq:R8}) we use the bounds from Lemma~\ref{lem:simulateT}. Equation~(\ref{Eq:R9}) obtains the final bound, again by rearranging terms. \end{proof} Let us now survey related work, so that we can discuss our result in context. \section{Related Work} \label{sec:related-work} This section describes splay trees and some previous results. These trees were proposed by Sleator and Tarjan~\cite{Sleator:1985:SBS:3828.3835} in 1985. An extensive up-to-date survey on dynamic optimality was given by Iacono~\cite{Survey}. 
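One of the classical bounds discussed below, the static-optimality (entropy) bound, is easy to evaluate for a concrete query sequence. The following Python sketch is our own illustration; the helper name \texttt{entropy\_bound} is an assumption of the sketch:

```python
# Evaluate the static-optimality (entropy) bound for a query sequence
# (our own helper, for illustration).
from math import log2
from collections import Counter

def entropy_bound(queries):
    """Sum over distinct keys of f_i * log2(m / f_i), where f_i is the
    access frequency of key i and m is the total number of queries."""
    m = len(queries)
    return sum(f * log2(m / f) for f in Counter(queries).values())
```

For a sequence that repeats a single key the bound is $0$; for a uniform sequence over $n$ keys it is $m \log n$. As discussed next, splay trees meet this bound without knowing the frequencies $f_i$ in advance.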
There are several restricted optimality results for BSTs. Assuming that the optimal BST is static, the best performance bound is the entropy $O(\sum_{i=1}^n f_i \log(m/f_i))$, where $m$ is the total number of queries and $f_i$ is the number of queries for the $i$-th key. Knuth presented the first algorithm to determine such an optimal tree~\cite{Knuth:1998:ACP:280635}. This dynamic programming algorithm requires $O(n^2)$ time to determine the optimal tree. Kurt Mehlhorn obtained a faster, $O(n)$ time algorithm, which approximates the optimal tree~\cite{Mehlhorn}. In the context of this paper this result would be referred to as statically optimal, because we are hiding factors in the $O$ notation. Splay trees also obtain this kind of optimality and moreover do not need to know the $f_i$ values. Static optimality becomes the best upper bound when the keys in the query sequence are independent of each other. We are only interested in approximating dynamic optimality, i.e., using $O$ notation and being a factor away from the optimal value, because it is expected that the exact problem is NP-Complete~\cite{BST_SODA2009}. Even in this case the only BST that achieves the dynamic optimality bound assumes free rotations and takes exponential time to select operations~\cite{DBLP:journals/algorithmica/BlumCK03}. The good performance properties of splay trees are due to the access Lemma, which was established in the original paper on splay trees~\cite{Sleator:1985:SBS:3828.3835}. Here we repeat the original argument and highlight the fact that it follows from Jensen's inequality~\cite{Jenssen}. \begin{lemma}[Amortized Access] \label{lem:amortizedAccess} Splaying a node $v$ to the root $t$ takes at most $1+3[r(t)-r(v)]$ amortized time. \end{lemma} \begin{proof} Recall Figures~\ref{F:zig},~\ref{F:zigzig} and~\ref{F:zigzag}. When $v$ is at the root, the bound is trivial. Hence let us focus on the last operation that is used to access $v$; in our figures $v$ is the node containing $x$. 
\begin{description} \item[Zig] In this case $x$ is a child of $y$, which is at the root. The amortized cost is computed as follows: \begin{eqnarray} \hat{c}_{Zig} &=& 1 + r'(x) + r'(y) - r(x) - r(y) \label{eq:z1}\\ & \leq & 1 + r'(x) - r(x) \label{eq:z2} \\ & \leq & 1 + 3[r'(x) - r(x)] \end{eqnarray} The only nodes that change rank are $x$ and $y$. Inequality~(\ref{eq:z2}) follows from the fact that $r'(y) \leq r(y)$, and the last inequality is true because $r(x) \leq r'(x)$. \item[ZigZig] In this case $x$, $y$ and $z$ change rank. \begin{eqnarray} \hat{c}_{ZigZig} & = & 2 + r'(x) + r'(y) + r'(z) - r(x) - r(y) - r(z) \label{eq:zz1}\\ & = & 2 + r'(y) + r'(z) - r(x) - r(y) \label{eq:zz2} \\ & \leq & 2 + r'(x) + r'(z) - 2 r(x) \label{eq:zz3} \\ & \leq & 3[r'(x)-r(x)] \label{eq:zz4} \end{eqnarray} Equation~(\ref{eq:zz2}) follows from the fact that $r'(x) = r(z)$. Inequality~(\ref{eq:zz3}) is true because $r'(y) \leq r'(x)$ and $- r(y) \leq - r(x)$. Inequality~(\ref{eq:zz4}) reduces to proving that $(r(x) + r'(z))/2 \leq r'(x) - 1$. This follows from Jensen's inequality, which holds because the $\log$ function is concave, combined with the fact that $s(x)+s'(z) \leq s'(x)$. More explicitly, the relation is the following: \begin{equation*} \frac{\log(s(x)) + \log(s'(z))}{2} \leq \log\left(\frac{s(x)+s'(z)}{2}\right) \leq \log\left(\frac{s'(x)}{2}\right) = r'(x) - 1 \end{equation*} \item[ZigZag] In this case $x$, $y$ and $z$ change rank. \begin{eqnarray} \hat{c}_{ZigZag} & = & 2 + r'(x) + r'(y) + r'(z) - r(x) - r(y) - r(z) \label{eq:za1}\\ & = & 2 + r'(y) + r'(z) - r(x) - r(y) \label{eq:za2} \\ & \leq & 2 + r'(y) + r'(z) - 2 r(x) \label{eq:za3} \\ & \leq & 2[r'(x)-r(x)] \label{eq:za4} \\ & \leq & 3[r'(x)-r(x)] \label{eq:za5} \end{eqnarray} Equation~(\ref{eq:za2}) follows from the fact that $r'(x) = r(z)$. Inequality~(\ref{eq:za3}) is true because $- r(y) \leq -r(x)$. Inequality~(\ref{eq:za4}) is also an application of Jensen's inequality; in this case it reduces to $(r'(y)+ r'(z))/2 \leq r'(x) - 1$, which holds because $s'(y)+s'(z) \leq s'(x)$. 
Inequality~(\ref{eq:za5}) is true because $r(x) \leq r'(x)$. \end{description} Notice that in any access to $S$ there is at most $1$ Zig operation and possibly several ZigZig and ZigZag operations. This is why it is important that the bounds for the ZigZig and ZigZag operations omit the additive term $1$. The overall bound is obtained by summing the bounds of the respective operations. This sum telescopes to the expression in the lemma. \end{proof} A recent detailed study of this lemma is available~\cite{DBLP:conf/esa/ChalermsookG0MS15}. An important consequence of the splay operation is that most of the nodes in the splayed branch are moved upwards, i.e., their depth is essentially halved. A detailed study on depth reduction was given by Subramanian~\cite{DBLP:journals/jal/Subramanian96}. Several good performance theorems follow from this lemma: \begin{description} \item[Static Optimality,] obtaining the entropy bound we mentioned at the beginning of this section. \item[Static Finger,] the performance of splay trees can be made dependent on the position of a given element of the key set $X$. Let $f$ be the position of a certain element $x$ in the ordered set $X$ and $|x_i-f|$ the distance between that element and the element $x_i$, the $i$-th element in the query sequence. Then processing a sequence of queries with a splay tree takes $O(n \log n + m + \sum_{i=1}^m\log(|x_i-f|+1))$. In fact splay trees have an even better performance, because the chosen element at position $f$ can be made dynamic~\cite{doi:10.1137/S0097539797326988,doi:10.1137/S009753979732699X}, but this result requires a longer analysis. \item[Working Set,] meaning that recently accessed items are quicker to access again. Let $t(x)$ be the number of different items accessed before accessing $x$ since the last time $x$ was accessed, or since the beginning of the sequence if $x$ was never accessed before. 
The total time to process a sequence of queries is $O(n \log n + m + \sum_{i=1}^m \log(t(i)+1))$. An important consequence of this bound is that if the keys are assigned arbitrarily (but consistently) to unordered data, splay trees behave as dynamically optimal~\cite{DBLP:journals/algorithmica/Iacono05}. \item[Unified bound,] combining the best performance of the three previous bounds. Dedicated structures were designed to achieve an even better bound, which uses the dynamic finger bound~\cite{DBLP:journals/tcs/BadoiuCDI07} instead of the static version. Generalized versions of this bound, with tighter values for structured sequences, were also proposed~\cite{DBLP:journals/corr/abs-1302-6914}. \end{description} Another important performance bound is the sequential access, or scanning, bound, which states that accessing the $n$ keys sequentially takes only $O(n)$ time. This was initially established by Tarjan~\cite{TarjanSeq} with a factor of $9$; the current best factor of $4.5$ was proven by Elmasry~\cite{Elmasry2004459}. Besides dynamic optimality, Sleator and Tarjan~\cite{Sleator:1985:SBS:3828.3835} proposed the traversal conjecture, which remains unproven. It is similar to the sequential access bound, but the key order is taken to be the preorder of another BST. Other open conjectures on splay trees include the Deque conjecture, which claims that splay trees can be used to implement a deque with $O(1)$ amortized time per operation~\cite{dequeCom,DBLP:journals/corr/abs-0707-2160}. The split conjecture claims that deleting all the nodes of a splay tree takes $O(n)$ time, in any order~\cite{lucas1992competitiveness}. Studying binary search trees from a geometric point of view has yielded several important results~\cite{BST_SODA2009}. That work presented an online algorithm which may yield dynamically optimal BSTs. This algorithm is referred to as greedy, and it was originally proposed as an offline algorithm~\cite{lucas1988canonical,munro2000competitiveness}. 
The first $O(\log n)$ performance bound for this algorithm was established by Fox~\cite{DBLP:conf/wads/Fox11}. The authors of~\cite{BST_SODA2009} also proposed Tango trees, which are $O(\log \log n)$-competitive with the optimal BST~\cite{Tango_SICOMP}. This was the first structure to obtain a proven non-trivial competitive ratio. The worst-case performance of these trees was further improved~\cite{DBLP:conf/wads/DerryberryS09} to $O(\log n)$ per operation. The $O(\log \log n)$ ratio was also proved for the chain-splay variation of splaying~\cite{Georgakopoulos200837}. Another alternative for obtaining proven dynamic optimality consists in combining BSTs. In~\cite{ComboBST_ICALP2013} the authors show that, given any constant number of online BST algorithms (subject to certain technical restrictions), there is an online BST algorithm that performs asymptotically as well on any sequence as the best input BST algorithm. Iacono~\cite{Survey} recently proposed another approach, the Weighted Majority Algorithm. This approach is proven to yield a dynamically optimal BST, provided any such data structure exists. A considerable amount of research has also gone into the study of lower bounds, i.e., formulating expressions that are smaller than $R+M$, the number of operations required by $T$. The first lower bounds were established by Wilber~\cite{DBLP:journals/siamcomp/Wilber89}. These bounds were further improved by the geometric view of BSTs~\cite{BST_SODA2009}. In this view, accesses to nodes in a tree are drawn as points in the plane. One coordinate stores the ordered keys and the other coordinate represents time. We can then consider rectangles using these points as corners. Two rectangles are independent if their corners are outside their intersection or on the boundary of the intersection. An access sequence in a BST must respect any independent set of rectangles. 
Hence the maximum independent set of rectangles yields a lower bound for the number of visited nodes in a BST. The alternation lower bound and the funnel bound can also be explained with the geometric view of BSTs. Moreover, this latter bound is also related to the number of turns needed to move a key to the root with a procedure proposed by Allen and Munro~\cite{DBLP:journals/jacm/AllenM78}. \section{Conclusions and Further Work} \label{sec:concl-furth-work} In this paper we studied the dynamic optimality conjecture of splay trees, which was proposed by the authors of these trees~\cite{Sleator:1985:SBS:3828.3835} and has stood for more than 30 years. During this time a vast number of results and approaches have been proposed for this problem and related conjectures. Note that a proof of dynamic optimality would also establish other conjectures. All this research has resulted in new data structures, several of which were proven to be $O(\log \log n)$-competitive. The results we present are a step forward. We reduced dynamic optimality to the regular access conjecture. This conjecture seems very plausible. On the other hand, if it is false, that would also be a remarkable property. In the future we plan to research this conjecture. An exhaustive literature review may provide the necessary tools to solve it. We also plan to investigate if our potential function can be applied to the analysis of other dynamically optimal candidates, namely the greedy algorithm, combining BSTs, or the weighted majority algorithm. \end{document}
\begin{document} \title{On the Fault-Tolerance Threshold for Surface Codes with General Noise} \maketitle \author{Jing Hao Chai* and Hui Khoon Ng*} \begin{affiliations} Dr.~J.~H.~Chai\\ Centre for Quantum Technologies, National University of Singapore,\\ 3 Science Drive 2, Singapore 117543, Singapore\\ Email Address: [email protected]\\ Assoc.~Prof.~H.~K.~Ng\\ Yale-NUS College, 16 College Avenue West, Singapore 138527\\ Centre for Quantum Technologies, National University of Singapore\\ Department of Physics, National University of Singapore\\ MajuLab, CNRS-UNS-NUS-NTU International Joint Unit, UMI 3654, Singapore\\ Email Address: [email protected] \end{affiliations} \keywords{Fault-tolerant quantum computing, quantum error correction, surface codes, quantum accuracy threshold} \begin{abstract} Fault-tolerant quantum computing based on surface codes has emerged as a popular route to large-scale quantum computers capable of accurate computation even in the presence of noise. Its popularity is, in part, because the fault-tolerance or accuracy threshold for surface codes is believed to be less stringent than competing schemes. This threshold is the noise level below which computational accuracy can be increased by increasing physical resources for noise removal, and is an important engineering target for realising quantum devices. The current conclusions about surface code thresholds are, however, drawn largely from studies of probabilistic noise. While a natural assumption, current devices experience noise beyond such a model, raising the question of whether conventional statements about the thresholds apply. Here, we attempt to extend past proof techniques to derive the fault-tolerance threshold for surface codes subjected to general noise with no particular structure. Surprisingly, we found no nontrivial threshold, i.e., there is no guarantee the surface code prescription works for general noise. 
While this is not a proof that the scheme fails, we argue that current proof techniques are likely unable to provide an answer. A genuinely new idea is needed to reaffirm the feasibility of surface code quantum computing. \end{abstract} \section{Introduction} The subject of quantum computing has seen an immense growth in interest over the past few years, with academic groups and industry partners keenly pursuing the realisation of small-scale devices and rapidly expanding the variety of problems such devices can tackle. The eventual goal of quantum computing, however, is large-scale computers capable of reliable computation for problem sizes large enough to genuinely exploit the advantage of a quantum approach over classical computers. To tackle large problems with sufficient accuracy to be useful, quantum computational circuits, built from physical components that are unavoidably noisy, have to be implemented in a manner robust against noise and the computational errors that can result. The subject of fault-tolerant quantum computing is about how one can compute more accurately, while using noisy components, by investing more physical resources to deal with the errors that arise. \\ Fault-tolerant quantum computing schemes are based on the technique of quantum error correction. By encoding computational information in a well-chosen part---the code space---of the physical quantum state space, quantum error correction allows errors in the encoded information to be detected, diagnosed, and corrected, provided that few enough errors occurred so as not to exceed the code's capability. However, the quantum error correction process, namely the syndrome measurement (for error detection and diagnosis) and the recovery (for correction), is itself carried out by physical components that are also noisy and error-prone. 
Furthermore, the use of error correction requires a typically significant increase in the number of physical components---more physical quantum registers (e.g., qubits) and physical operations (gates and measurements)---in the quantum computer, hence increasing the number of ways things can go wrong in the presence of noise. Error correction can thus actually cause a net increase, rather than decrease, in errors in the quantum computer. Fault-tolerant quantum computing is precisely about how to design the error-corrected quantum computing circuit in a manner that ensures a net removal of errors. Provided the strength of the physical noise is below a threshold level, a fault-tolerant quantum computing prescription enables us to effectively remove computational errors even with noisy components, and consequently increase computational accuracy. \\ This noise threshold is discussed in the fault-tolerance literature as the quantum accuracy threshold (see, for example, the classic fault tolerance papers \cite{KnillLaflammeZurek,PreskillReliableQC,Ben-Or}). Specifically, the accuracy threshold (or fault-tolerance threshold) refers to a threshold level of noise below which a fault-tolerant quantum computing prescription---which tells us how to build quantum circuits from noisy components---is able to arbitrarily increase computational accuracy by increasing the error correction capability of the code, accompanied by an increase in physical resources needed for the error correction procedure. The accuracy threshold determines the point where the noise is weak enough for scalable quantum computing to be possible, and is hence an important engineering target for building quantum computing devices. 
Many past works on the theory of fault-tolerant quantum computing have focused on deriving estimates of the accuracy threshold for different fault-tolerant schemes, based on different types of codes, under different models of noise (see, for example, References \cite{AGP,TerhalBurkard,AKP,AliferisLeakage,HKPreskill}). \\ In this work, we take a closer look at the accuracy threshold for fault-tolerant quantum computing based on surface codes \cite{KitaevPlanar,Dennis,FreedmanMeyer}. The surface code (see, for example, References \cite{TerhalMemory,MartinisFowler} for an introduction to the subject) is a type of topological code that has emerged as a promising candidate for fault-tolerant quantum computing. In recent years, significant advances have been made in the implementation of surface codes as a means towards fault-tolerant quantum computing. These include experiments \cite{corcoles2015demonstration,takita2016demonstration} that demonstrated the basic steps of surface code stabilizer measurements on five noisy qubits, as well as rudimentary error correction experiments on quantum chip architectures with qubits arranged in a planar array \cite{andersen2020repeated,chen2021exponential,WallraffSurfaceCode,IonTrapColorCode, erhard2021entangling,marques2022logical,ChineseSurfaceCode}. While experimental progress continues to hinge on both the scale of the quantum chips and the quality of the qubits therein, notable theoretical advances have also been made in recent years. 
These include proposals to manipulate the encoded information (see, for example, \cite{MartinisFowler,LatticeSurgery,PokingHoles,Yoder_SurfaceCode, Litinsky,VuillotLatticeSurgery, brown2020fault, Bartlett2020braiding, chamberland2022circuitLatticeSurgery}), as well as discussions of how universal quantum computing could be realistically achieved with surface codes (see, for example, \cite{campbell2017roads, lao2018mapping, chamberland2020topological, BombinLogicalFT, chamberland2022universal}). \\ The most basic form of the surface code is implemented on a square lattice of physical qubits and has the primary experimental advantage (as do many topological codes) that the error correction steps can be done with just nearest-neighbor coupling between qubits. The quantum information is stored as a global property of the entire code lattice, making it tolerant to localized errors. Compared to some of the earlier fault-tolerant schemes based on concatenated codes, surface codes require comparatively fewer ancillary qubits and avoid the complicated ancilla states that can be difficult to prepare in practice \cite{Dennis}. Due to these features, the surface code has become a popular pathway towards large-scale quantum computers. Already, it has underpinned proposals for the development of physical quantum hardware architecture, with trapped ions \cite{TrappedIonArray, TrappedIonArray2}, quantum dots \cite{QDotsArray}, nitrogen-vacancy centers \cite{NVarray}, superconducting qubits \cite{Topoarray}, etc.\\ Another reason for the surface code's popularity is its seemingly more forgiving accuracy threshold, with less stringent requirements on the control of noise than many earlier fault-tolerant schemes. This conclusion comes from past work that can be classified into two main types of analyses \footnote{Reference \cite{BravyiCoherent} does not fall into these two categories. 
It studies code-capacity performance under coherent noise, using a novel analytical technique to allow for faster numerical simulation at larger code distances. The authors concluded that coherent noise is not detrimental to the performance of surface codes, when syndrome measurements are assumed to be ideal.}: (1) a phase transition argument \cite{Dennis,WangPreskill,NovaisCorrelated,ChubbsFlammia} that makes use of the topological nature of the code, and (2) analytical and numerical circuit-level studies for probabilistic noise \cite{Dennis,FowlerProof,StephensThreshold,MartinisFowler,BiasedNoiseThreshold_bad}. The former phase transition argument lacks the detailed circuit-level calculations needed to derive a rigorous accuracy threshold that fully accounts for the fault-tolerant error correction procedure, while the latter studies on probabilistic noise may not be applicable for actual quantum devices. In current devices, noise processes include, for example, spontaneous decay (modeled as amplitude-damping noise) as well as over- or under-rotation in gate operations due to misalignment (modeled as unitary noise); neither can be considered probabilistic noise. \\ Here, we examine how one might derive an accuracy threshold for surface codes exposed to general noise---one with no particular structure, which includes probabilistic noise as a specific example---by generalizing existing analyses for probabilistic noise. There is perhaps a general expectation in the community that the same methods for probabilistic noise extend to general noise (this was true for older fault-tolerant schemes based on concatenated codes), and that the threshold numbers are similar. As we explain here, however, current known techniques for deriving an accuracy threshold for surface codes likely do not give a nontrivial (i.e., nonzero noise strength) threshold for general noise. 
While this is not a proof that fault-tolerant quantum computing based on surface codes cannot work under general noise, it serves as a caution that our current conclusions about surface codes, founded largely upon probabilistic noise studies, may not apply to actual quantum devices. This highlights the need for new ideas to resolve the question of whether a nonzero accuracy threshold exists for surface codes, and whether our confidence in the success of large-scale quantum computers built upon surface codes is well founded. \section{Surface code basics} \subsection{Code structure} The (planar) surface code \cite{KitaevPlanar,Dennis,FreedmanMeyer} is an error-correcting code defined on a two-dimensional (2D) square lattice of qubits, some of which carry the computational data---the data qubits---and others are ancillary qubits---or just ancillas---used for the syndrome measurements in the error correction. The joint state of the data qubits, in the absence of errors, resides in a two-dimensional code space, thus encoding a single qubit of information, and is generally highly entangled across the different qubits. The encoded, or logical, qubit is thus delocalized over the entire 2D lattice. \\ The surface code lattice comprises, for odd $L$, $L-1$ rows and $L$ columns of vertices, such that there are two ``smooth'' boundaries (left and right boundaries in Figure \ref{fig:surface1}), and two ``rough'' boundaries (top and bottom boundaries in Figure \ref{fig:surface1}). There are altogether $L^2$ vertical edges and $(L-1)^2$ horizontal edges, and on each edge resides a data qubit, giving a total of $L^2+(L-1)^2$ data qubits. 
Ancillary qubits reside on vertices and in the centres of plaquettes, corresponding to two different ancilla types (see Figure \ref{fig:surface1}): $X$ ancillas that sit on the vertices, so named because they are for measuring the $X$-type stabilizer operators which act on the four (fewer, if at the lattice boundaries) data qubits surrounding the vertex; $Z$ ancillas at the centres of the plaquettes for measuring the $Z$-type stabilizer operators that act on the four data qubits bordering the plaquette. There are $L(L-1)$ each of such $X$ and $Z$ ancillas, giving altogether $2L(L-1)$ ancillary qubits, or, equivalently, $2L(L-1)$ stabilizer operators. Altogether, there are $(2L-1)^2$ data and ancillary qubits in the lattice.\\ The $2L(L-1)$ $X$- and $Z$-type stabilizer operators together specify a two-dimensional code space carried by the data qubits, namely, the state space on which the stabilizer operators act like the identity operation. The logical $X(Z)$ operator can be identified as, up to factors from the stabilizer group, the tensor product of $X(Z)$ on every data qubit in a set that connects one smooth(rough) boundary to the other (see Figure \ref{fig:surface1}). The code is designed to correct arbitrary errors on up to $t$ data qubits, with $t$ related to the lattice size $L$ as $t=\tfrac{1}{2}(L-1)$.\\ \begin{figure} \caption{The planar surface code lattice: data qubits reside on the edges, $X$ ancillas on the vertices, and $Z$ ancillas at the centres of the plaquettes.\label{fig:surface1}} \end{figure} \subsection{Recovering from errors} The syndrome measurements, namely, the measurement of the $X$- and $Z$-type stabilizer operators, are carried out using the $X$ and $Z$ ancillas. The $X$-type stabilizers detect $Z$ errors on the data qubits, while the $Z$-type ones detect $X$ errors. Together, they detect arbitrary qubit errors, noting that $Y=XZ$ and that every qubit error can be written as a sum of $X$, $Y$, and $Z$. 
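As a quick sanity check, the qubit bookkeeping above can be verified numerically. The following is a small illustrative sketch (the function name is ours, not standard):

```python
def surface_code_counts(L):
    """Qubit counts for the planar surface code with L columns of vertices (odd L)."""
    assert L % 2 == 1 and L >= 3
    data = L**2 + (L - 1)**2        # data qubits: vertical + horizontal edges
    ancilla = 2 * L * (L - 1)       # L(L-1) X ancillas plus L(L-1) Z ancillas
    t = (L - 1) // 2                # number of correctable data-qubit errors
    assert data + ancilla == (2 * L - 1)**2   # total qubits in the lattice
    return data, ancilla, t
```

For instance, $L=5$ gives $41$ data qubits, $40$ ancillas, and $t=2$ correctable errors.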
To measure one stabilizer operator, CNOT gates are applied consecutively, connecting each of the data qubits involved in that stabilizer operator to the same ancilla, effectively transferring the information about errors in those data qubits to the ancilla. The ancilla is then measured, and the measurement result, either $+1$ or $-1$, is recorded. A preliminary detection of an error happens when the ancilla measurement result differs from its value in the syndrome measurement in the previous error correction cycle. Such a change in measurement value is referred to as a defect, is said to be located at the position of the ancilla, and is (in the ideal case) an indication that there are errors in the data qubits connected to that ancilla.\\ The defect locations are gathered after a single round of syndrome measurements, i.e., measurement of all $2L(L-1)$ stabilizer operators, and the set of defect locations is processed, or decoded, to deduce what errors have occurred in the data qubits. The stabilizer operators are distributed across the lattice in a manner such that, if an $X$ error, say, occurred in a data qubit, the error flips the measurement results (e.g., a $+1$ to a $-1$) on the two $Z$ ancillas neighboring it (one if it is on a lattice boundary). Both will manifest defects in the next round of syndrome measurement. If $X$ errors occur on two adjacent data qubits simultaneously, the $Z$ ancilla in between the two will encounter the flip twice, winding up with no defects, but the other two $Z$ ancillas next to the data qubits will register defects (see Figure \ref{fig:defects}). Analogous statements hold for $Z$ errors and associated $X$ ancillas. This gives the connection between defects and errors: Every error on a data qubit produces a pair of defects\footnote{The only exception to this is when an error occurs at the boundary of the code lattice. 
This will result in only one defect, as the other ancilla location is ``missing''---it falls outside of the boundary.}, and adjacent data qubits in error separate the defect pairs further apart. Defects manifest only on the ``outermost'' ancillas, and the data qubits along the lattice path---a connected sequence of edges (see Figure \ref{fig:defects})---joining the two defects are in error. In fact, data qubits on any path with those same two defects as endpoints can be said to be in error, and this will be correct up to operators from the stabilizer group.\\ \begin{figure} \caption{Defects produced by errors on data qubits; the lattice paths marked in orange are the shortest paths joining the defect pairs, as chosen by the MWPM decoder.\label{fig:defects}} \end{figure} This then suggests a way to decode the defects into errors. Connect pairs of defects together, and the data qubits on the connecting path are guessed to be in error. A path here refers to a connected line that joins two defects together; equivalently, we can regard a path as the set of edges that the connected line passes through (see Figure \ref{fig:defects}). This allows us to talk about data qubits that lie on a given path, and those data qubits are said to be in error, as deduced by the decoder according to the observed defects.\\ Among the many possible connecting paths that join two defects, one chooses the path with the highest probability of error occurrence. If the errors occur independently, this translates into the shortest path, with the fewest data qubits in error. This is the basic principle of the minimum-weight perfect-matching (MWPM) decoder \cite{Dennis}, a standard decoder for the surface code, and the one we will use here \footnote{There are of course many other possible decoders for surface codes. 
In recent years, a variety of approaches to decoding the surface-code syndromes have been developed \cite{BravyiMLDecoder,MCMCdecoder,MultiPathDecoder,CellularDecoder,neural1,neural2,LinearTimeDecoder}, with varying levels of performance and advantages, e.g., ones that fail more often than, say, the MWPM decoder, but use less classical computation time, or ones that take inspiration from physical models in nature.}. See Figure \ref{fig:defects} for two illustrative examples---there, the lattice paths marked in orange are the shortest paths, and the ones that will be chosen by the MWPM decoder.\\ The decoder chooses paths that connect observed defects in pairs. The recovery is then performed to (attempt to) reverse the errors that occurred. If the defects were for $X(Z)$ errors (i.e., observed on the $Z(X)$ ancillas), the $X(Z)$ operator is applied to all data qubits on the chosen paths. Now, if the actual errors that occurred, together with the applied recovery operations, form a connected path of data qubits that extends from one smooth(rough) boundary to the other, we wind up with an overall $X_\mathrm{L}(Z_\mathrm{L})$ operation on the encoded information, i.e., we have a logical error. The data qubit on every edge of such a path is acted upon either by an error or by the recovery operation, but not both (which would cause the path to become disconnected and no logical error results). We refer to such a path that connects the two relevant---i.e., smooth ones for $X$ errors, and rough ones for $Z$ errors---boundaries as a ``spanning path''. Such spanning paths extend across the entire code lattice and correspond to logical errors.\\ \subsection{Noisy recovery and the syndrome lattices} In practice, the physical components used to implement the syndrome measurements---the CNOT gates, the ancillas, as well as the final measurements---will inevitably be noisy. 
This can lead to unreliable syndrome information, in the form of missing or spurious defects, that confuses the detection of errors. To obtain robust information in the presence of such imperfections, a standard procedure is to repeat the syndrome measurement a few times \cite{ShorFTQC}. Persistent erroneous syndrome information requires multiple errors to have occurred over the multiple syndrome measurement cycles, which is unlikely. The total defect set gathered over the multiple cycles is then jointly decoded into a best guess for the actual error locations.\\ Following past work on the subject (see, for example, Reference \cite{Dennis}), we assume the syndrome measurement is repeated $N\sim O(L)$ times before decoding is done. Often, $N=L$, but we do not need the specific form of $N$ here, only that it varies affinely with $L$. The defects gathered over the $N$ cycles can be visualized on a three-dimensional (3D) $L\times L\times N$ lattice, extending in two spatial ($L\times L$) and one temporal ($N$) directions. Each time slice corresponds to the surface code lattice (said to extend spatially) subjected to a single syndrome measurement cycle, and the defects occur at the ancilla locations where they manifest in space (where on the surface code lattice) and time (which syndrome cycle). This rectangular 3D lattice can be embellished into what we will refer to as a syndrome lattice (this was called ``a lattice of dots and lines'' in Reference \cite{FowlerProof}), useful for the MWPM decoder and for our discussion below. We first consider the situation of $Z$ errors occurring only\footnote{The surface code deals with $X$ and $Z$ errors separately, so we can treat them separately. Any arbitrary error can be decomposed into a sum of $X$, $Y=XZ$, and $Z$ errors, so this handles all errors.}. The vertices of the 3D lattice described above are the locations of the $X$ ancillas, and hence locations where defects can manifest when $Z$ errors occur. 
Two adjacent vertices on the same time slice are connected by an edge, on which a data qubit resides. If a single fault (i.e., something going wrong) occurs on the data qubit, causing a $Z$ error, the two ancillas on the two vertices will manifest defects. \\ We want to extend this feature (an edge joins two vertices whenever a single fault, nominally said to occur on that edge, can lead to defects manifesting at those vertices) to faults that arise in the course of error correction. The ancillary qubits themselves can also have faults, and a $Z$ error on an $X$ ancilla will cause two defects to appear in two adjacent time slices (first time slice when the error occurs, and the next, when the ancilla is reset and hence the error is eliminated) on the same ancilla location. We add an edge that joins the two vertices---same spatial location of the ancilla but separated one time step apart---where the two defects manifest, and that (temporal) edge is said to have a fault when that $X$ ancilla suffers a $Z$ error.\\ Similarly, when a CNOT gate in the syndrome measurement circuit has a fault, it causes $Z$ errors on the ancillary qubit and/or data qubit involved, which can possibly spread to other qubits by subsequent gates (if any). Defects can then manifest at a pair of vertices, and we can again join them together by an edge. A CNOT fault is said to occur on that edge. A detailed study \cite{WangFowler} (see also Reference \cite{Autotune} for how this information can be incorporated into the decoder) of how faults can lead to defects allows us to put an edge between two vertices of the 3D lattice, wherever a single fault can cause a pair of defects to manifest at the vertices joined by the edge. The resulting lattice, now with many more edges than the original 3D lattice, is called the syndrome lattice for $Z$ errors; similarly, one can construct a syndrome lattice for $X$ errors, identical to that for $Z$ errors, but now with $Z$ ancillas at the vertices. 
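The parity rule underlying this construction (a single fault flips the pair of ancillas joined by its edge, and coincident flips cancel) can be illustrated with a toy computation. This is an illustrative sketch of ours that ignores boundary effects, where one endpoint of a fault may be missing:

```python
from collections import Counter

def defects_from_faults(fault_edges):
    """Given faults on edges (u, v), return the vertices that register defects.
    Each fault flips both endpoint ancillas; flips cancel in pairs, so a
    defect appears exactly where the total number of flips is odd."""
    flips = Counter()
    for u, v in fault_edges:
        flips[u] += 1
        flips[v] += 1
    return {w for w, n in flips.items() if n % 2 == 1}
```

A single fault yields defects at both endpoints, while a chain of two adjacent faults leaves no defect at the shared vertex, matching the behaviour described above for two adjacent data-qubit errors.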
Figure \ref{fig:unitLattice} shows a portion of a syndrome lattice, with each interior vertex connected by edges to 12 other vertices. \begin{figure} \caption{A portion of a syndrome lattice; each interior vertex is connected by edges to 12 other vertices.}\label{fig:unitLattice} \end{figure} With this picture of syndrome lattices, the decoding of defects into errors can be carried out in a manner similar to the no-noise case, except that the paths connecting defects now traverse the 3D lattice, possibly extending across different time slices. Edges on the paths are now edges on the syndrome lattice, not those on the original surface code lattice. A spanning path in the syndrome lattice for $X(Z)$ errors is now one that starts on one relevant boundary \emph{face} (comprising all the relevant boundaries of the different time slices) of the syndrome lattice and ends on the opposite face. A logical error still results when the errors and recovery form such a spanning path \footnote{In principle, one can also talk about paths that start and/or end on the temporal boundaries, i.e., the first and last time slices of the syndrome lattice. For some decoding protocols, such paths can lead to logical errors in the next round of syndrome measurement cycles. We ignore such complications here, as they do not change our argument about the accuracy threshold.}. \subsection{Error description}\label{sec:errorModel} We also need a description of the noise that affects our quantum computer. Without loss of generality, noise in the quantum computer, over some discrete time interval (e.g., a computational clock cycle), can be described by completely positive (CP) and trace-preserving (TP) maps acting on the data and ancillary qubits, as well as any environmental degrees of freedom that come into play. This can arise from background noise, or be the result of noisy gates acting on subsets of qubits.
Noise arising from an imperfect gate is assumed to affect only those qubits participating in that gate.\\ The noise description can be easily incorporated into our discussion using the syndrome lattices: Each edge of the syndrome lattices is acted upon by a CPTP map $\mathcal{N}_a$, where $a=1,2,\ldots,2A$ numbers the edges in the two syndrome lattices, and $A$ is the total number of edges in a single lattice. Each $\mathcal{N}_a$ acting on an edge in the $X(Z)$ lattice is assumed to be such that it can lead to $X(Z)$ error(s). $\mathcal{N}_a$ can be a primitive noise process like background noise on a single ancilla (such an $\mathcal{N}_a$ acts on a temporal edge in the syndrome lattices), but also propagated noise, as when a fault occurs on a CNOT acting on a pair of qubits and the resulting errors are spread by subsequent gates to other qubits before the syndrome measurement is done. Since the primitive noise is CPTP, so is the propagated noise, and the propagated noise, by construction, is associated with an edge in the syndrome lattices. \\ We split each noise map $\mathcal{N}_a$ into two parts in the following manner \cite{Ben-Or}. Consider the deviation of $\mathcal{N}_a$ from the identity map $\mathbb{1}$, and let $\eta_a\equiv\Vert\mathcal{N}_a-\mathbb{1}\Vert$. Here, we use a unitarily invariant and submultiplicative superoperator norm such that $\Vert\mathcal{E}\Vert=1$ for any CPTP $\mathcal{E}$ (e.g., the diamond norm works). We define $\eta\equiv\max_a\eta_a$. Then, $\Vert\mathcal{N}_a-\mathbb{1}\Vert\leq\eta$ $\forall a$, and we can write each $\mathcal{N}_a$ as \begin{equation}\label{eq:Nk} \mathcal{N}_a=(1-\eta)\mathbb{1}+\mathcal{F}_a, \end{equation} where $\Vert \mathcal{F}_a\Vert\leq 2\eta$ for all $a$, and $\eta$ characterizes the strength of the general noise.
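For completeness, the bound $\Vert\mathcal{F}_a\Vert\leq 2\eta$ quoted here follows directly from the triangle inequality and the definitions above:
```latex
\[
\Vert\mathcal{F}_a\Vert
  = \Vert\mathcal{N}_a-(1-\eta)\mathbb{1}\Vert
  = \Vert(\mathcal{N}_a-\mathbb{1})+\eta\,\mathbb{1}\Vert
  \leq \Vert\mathcal{N}_a-\mathbb{1}\Vert+\eta\,\Vert\mathbb{1}\Vert
  \leq \eta+\eta
  = 2\eta \,.
\]
```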
We refer to $\mathcal{F}_a$ as a ``fault'', i.e., something goes wrong that can cause errors on the qubits associated with that edge, and can lead to defects at the endpoints of the $a$th edge. The $\mathbb{1}$ term (with weight $1-\eta$) is regarded as the ``no-fault'' situation, and does not cause defects to appear.\\ For probabilistic noise, $\mathcal{F}_a$ can be written as $\eta\mathcal{E}_a$ where $\mathcal{E}_a$ is a CPTP map, so that $\mathcal{N}_a$ carries the interpretation that nothing happens (i.e., $\mathbb{1}$ is applied) with probability $1-\eta$ and a fault $\mathcal{E}_a$ happens with probability $\eta$. For general noise, $\mathcal{F}_a$ need not be proportional to a CPTP map, and there is no such straightforward probabilistic interpretation. \section{Accuracy threshold for probabilistic noise} Here, we derive the accuracy threshold for the case of probabilistic noise, starting with the original basic argument of Reference \cite{FowlerProof} (which provided the main existing analytical proof for surface codes under probabilistic noise), and extending it to include higher-order corrections. \subsection{Basic argument} Let us first review the logic leading to the accuracy threshold estimate for surface codes as originally presented in Reference \cite{FowlerProof}. We assume a probabilistic noise model as described in Section \ref{sec:errorModel}: Each edge of the syndrome lattice has probability $p$ of having an error that can result in defects at its endpoints. $p$ here is the $\eta$ of Equation \eqref{eq:Nk}, but we write $p$ to emphasize its probabilistic interpretation. To deduce the threshold for surface codes, we want to bound the total probability that things can go wrong, i.e., that logical errors occur after the recovery operation as determined by the MWPM decoder. As described earlier, a logical error occurs when the errors that occur, together with the recovery operations, form a spanning path in the syndrome lattice.
Such a path has a number of edges---its length $r$---no fewer than $L$ in order to connect two opposing boundaries. At least $\lceil r/2\rceil$ of those $r$ edges are associated with errors, or the pairing of the defects on the path, as decided by the MWPM decoder, cannot be minimum weight. The probability that the errors and resulting recovery operations form a specific length-$r$ path is \begin{equation}\label{eq:Pr} P(r)\equiv \sum_{k=\lceil r/2\rceil}^r\binom{r}{k}p^k(1-p)^{r-k}\leq p^{\lceil r/2\rceil}\sum_{k=0}^r\binom{r}{k}=p^{\lceil r/2\rceil}2^r, \end{equation} where we assume the $k\in\bigl[\lceil r/2\rceil,r\bigr]$ erroneous edges can occur anywhere on the path (each with probability $p$), and the remaining edges have no errors (each with probability $1-p$, and added to the path by the recovery).\\ We want to estimate the total probability of occurrence of such spanning paths, giving the total probability of logical errors. It is difficult to count all such paths exactly. However, one can arrive at an upper bound in the following way, as originally proposed in Reference \cite{FowlerProof}. We begin by choosing a vertex on one of the relevant boundary faces of the syndrome lattice---there are $2NL$ \footnote{This $2NL$ can be understood as follows: The spanning path can extend from one relevant boundary face to the opposite face. The starting vertex can hence be one of $NL$ vertices, multiplied by 2 for the two syndrome lattices.} such possible starting vertices. From this starting point, we choose the next vertex---there are no more than 11 possible vertices to choose from, given the structure of the syndrome lattice. We continue adding vertices to the path, each time choosing a next vertex connected by an edge to the previous one---there are 11 options each time, excluding the vertex we just came from---to arrive at a path with length $r$: $r+1$ vertices in total, connected by $r$ edges. 
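As a quick sanity check of the bound in Equation \eqref{eq:Pr}, the sketch below (our own illustration, not part of the original argument) compares the exact binomial sum with the bound $p^{\lceil r/2\rceil}2^r$ over a range of path lengths and error probabilities:

```python
from math import comb, ceil

def P_exact(r, p):
    """Exact sum of Eq. (eq:Pr): at least ceil(r/2) of the r edges have errors."""
    return sum(comb(r, k) * p**k * (1 - p)**(r - k)
               for k in range(ceil(r / 2), r + 1))

def P_bound(r, p):
    """The upper bound p^(ceil(r/2)) * 2^r from Eq. (eq:Pr)."""
    return p**ceil(r / 2) * 2**r

# The bound holds for all path lengths and error probabilities tried here.
for r in range(5, 40):
    for p in (1e-4, 1e-3, 1e-2, 0.1):
        assert P_exact(r, p) <= P_bound(r, p)
```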
We consider all such length-$r$ paths, which includes spanning paths that correspond to logical errors, but also many paths that start at one boundary face but never reach the opposite boundary face. With this, we can upper-bound the total probability of logical errors by \begin{equation}\label{eq:probthreshold} \sum_{r=L}^\infty(\textrm{number of such length-$r$ paths})P(r)\leq cNL \frac{(22^2p)^{\lceil L/2\rceil}}{1-22^2p}\equiv P_\mathrm{UB}(L), \end{equation} where $c$ is the numerical constant $c\equiv 2 (23/22)\simeq 2.1$. The upper bound $P_\mathrm{UB}(L)$ can be arrived at by straightforward counting, together with Equation \eqref{eq:Pr} (see the Methods section). Reference \cite{FowlerProof} concludes by saying that $P_\mathrm{UB}(L)$ can be made arbitrarily small by increasing $L$, as long as $p < 1/(22^2)=1/484\equiv p_\mathrm{th}$. This ensures $P_\mathrm{UB}(L+1)\leq P_\mathrm{UB}(L)$, and $p_\mathrm{th}$ can be identified as the accuracy threshold for the surface code in the presence of probabilistic noise.\\ There are some caveats, however, to this analysis. Observe that $P_\mathrm{UB}(L)$ is unphysical as a probability---it is larger than 1---for $p$ below, but close to, the threshold value. That this upper bound $P_\mathrm{UB}(L)$ can be larger than 1, when the actual logical error probability cannot be so, is simply a consequence of the overcounting in the argument. There are two sources of overcounting: (1) In deriving Equation \eqref{eq:Pr}, $P(r)$ is itself an over-count since not all possible allocations of $k\geq \lceil r/2\rceil$ erroneous edges in a path of length $r$ lead to a logical error after the recovery. In fact, the surface code correctly removes the errors in many situations with more than $t\equiv\lfloor (L-1)/2\rfloor$ errors. Furthermore, our bound on $P(r)$ itself contains an inequality. (2) We were unable to exclude from our counting paths that started on one boundary face but did not reach the opposite boundary face.
(2) no doubt gives a much larger contribution to the overcounting than (1).\\ Also, note that we are only able to say that, when $p<p_\mathrm{th}$, the \emph{upper bound} $P_\mathrm{UB}(L)$ can be made smaller and smaller as $L$ grows. This does not technically mean that the actual logical error probability shrinks as $L$ grows, just that our upper bound on it shrinks. It does, nevertheless, suffice to guarantee that, for a given $p$ below the threshold value, we can find a large enough $L$ such that the logical error probability, which is smaller than $P_\mathrm{UB}(L)$, is small enough. This statement is true also of the standard accuracy threshold estimates for concatenated codes (see, for example, References \cite{Ben-Or,AGP,TerhalBurkard}), though in those cases, the upper bounds are likely quite close to the actual logical error probability, unlike the surface-code situation here.\\ \subsection{Higher-order terms: the disconnected pieces} There is, however, a more worrying issue with the analysis of Reference \cite{FowlerProof}. On the one hand, we mentioned that the analysis overcounts by including many paths that do not span the lattice and hence do not contribute to the logical error probability; on the other hand, we now point out that it in fact misses many paths that do contribute.\\ The argument of Reference \cite{FowlerProof} counts all connected paths that span the lattice and can lead to a logical error. One could also have disconnected paths alongside a connected one (see an example in Figure \ref{fig:loopspath}). These disconnected paths may lead to no additional logical errors (they can correspond to errors that occur but are then correctly removed by the recovery procedure), or they may themselves span the lattice; either way, they give corrections to the probability of occurrence of the basic connected path.
Each disconnected path contributes a higher-order term, but there are many more possibilities of such disconnected pieces, and the combinatorial factor could potentially offset the higher-order probability factor. None of these disconnected pieces were considered in the analysis of Reference \cite{FowlerProof}.\\ \begin{figure} \caption{An example of disconnected paths occurring alongside a connected spanning path.}\label{fig:loopspath} \end{figure} It turns out, however, that these disconnected paths do not change the final bound for the situation of probabilistic noise, as we now explain. We simply need to understand the correction factor, associated with each length-$r$ spanning path, needed to include the disconnected paths. This can be counted as follows. A particular length-$r$ connected path, chosen as described in the previous subsection, corresponds to a particular set of $r$ edges, out of the $A$ edges in the syndrome lattice. Disconnected paths can be included alongside these chosen $r$ edges by deciding if the remaining $A-r$ edges have errors or not. If we count all possibilities, the correction factor, to be multiplied to $P(r)$ of Equation \eqref{eq:Pr}, is then \begin{equation} \sum_{\ell=0}^{A-r}\binom{A-r}{\ell}p^\ell(1-p)^{A-r-\ell}=[p+(1-p)]^{A-r}=1. \end{equation} Here, the summation index $\ell$ is the number of edges not on the spanning path ($A-r$ of these) that are chosen to have errors; the remaining $A-r-\ell$ edges are error-free. That the higher-order disconnected correction amounts to a multiplicative factor of 1 explains why the analysis of Reference \cite{FowlerProof} still gives the right answer, despite disregarding these possibilities.\\ This same argument, however, does not work when one attempts to extend the analysis to general noise, as we will see in the next section. \section{Accuracy threshold for general noise} To see where the problem lies for general noise, let us organize the analysis in a more transparent manner.
Let us regard the entire surface code error correction procedure---the multiple rounds of syndrome extraction, the decoding, and the recovery---as a single quantum computation $\mathscr{C}$, built from noisy operations. $\mathscr{C}$ can be written as a sequence of CPTP maps, \begin{equation} \mathscr{C}\equiv \mathscr{R}\circ\mathscr{M}\circ\mathscr{G}. \end{equation} Here, $\mathscr{G}$ refers to all the gate operations (the CNOT gates) for the $N$ rounds of syndrome extraction, as well as any memory (identity gate) steps, $\mathscr{M}$ collects all the measurements on the ancilla qubits to extract the syndromes, and $\mathscr{R}$ is the decoding procedure followed by the recovery operations determined by the decoding. $\mathscr{M}$ refers only to the measurement part of the syndrome extraction procedure, i.e., it does not include the CNOT gates that transfer the error information from the data qubits to the ancilla qubits---those are collected in $\mathscr{G}$. All measurements from the multiple syndrome extraction rounds are assumed to be delayed to the end, imitating the actual situation of measuring as we go along by assuming no fault insertions between the last CNOT on the ancilla qubit (in $\mathscr{G}$) and the actual measurement (in $\mathscr{M}$). The noise in the measurement is assumed to be lumped together with the noise of that last CNOT gate. $\mathscr{M}$ can thus be considered as perfect, i.e., no noise. $\mathscr{R}$, which describes the procedure of deducing the errors from the defect information collected over the multiple syndrome rounds, followed by the standard ``virtual'' recovery using a Pauli-frame change, is actually a purely classical procedure, and hence can also be considered noise-free.\\ All the noisy operations are then only in $\mathscr{G}$, which itself can be written as a sequence of noisy gates.
Now, the gate sequence in $\mathscr{G}$ comprises only error correction gates---we are not doing any nontrivial computation. In fact, recalling how the syndrome lattices are constructed, which takes the sequence of error correction gates already into account, $\mathscr{G}$ can be written as a sequence of the CPTP maps $\mathcal{N}_a$ [recall Equation \eqref{eq:Nk}], one on each edge of the two syndrome lattices. Thus, we have \begin{equation}\label{eq:G} \mathscr{G}=\mathcal{N}_{2A}\circ\mathcal{N}_{2A-1}\circ\cdots\circ\mathcal{N}_1. \end{equation} Inserting the fault/no-fault split of $\mathcal{N}_a$ into $\mathscr{G}$, we can then read $\mathscr{G}$ as a sum over ``fault paths'', each comprising a sequence of $\mathbb{1}$ and $\mathcal{F}$s (i.e., the $\mathcal{F}_a$s, dropping the index for brevity), \begin{equation}\label{eq:FP} \mathscr{G}=\sum\textrm{fault paths}=(1-\eta)^{2A}\mathbb{1}\circ\cdots\circ \mathbb{1}+(1-\eta)^{2A-1}\mathbb{1}\circ\cdots\circ\mathbb{1}\circ\mathcal{F}_1+\ldots \end{equation} Each fault path corresponds to the insertion of $\mathbb{1}$---carrying weight $(1-\eta)$---or fault $\mathcal{F}$ on each edge of the syndrome lattice. The very first term of Equation \eqref{eq:FP} is proportional to the identity map, as it should be for error correction operations when no faults occur.\\ We can split $\mathscr{G}$ into two pieces, one ``good'', the other ``bad'': \begin{equation} \mathscr{G}\equiv \mathscr{G}_\mathrm{good}+\mathscr{G}_\mathrm{bad}. \end{equation} The good piece $\mathscr{G}_\mathrm{good}$ comprises the sum of all the fault paths that, after passing through $\mathscr{M}$ and $\mathscr{R}$, result in no errors, i.e., the errors that occur are correctly removed by the error correction \footnote{Actually, the good piece can contain also fault paths where the errors are not corrected in this decoding round but fixed only in the next round.
Such situations occur in decoders that allow errors to be ``joined to the temporal boundary'' and delayed to be fixed in the next round. Just as we have been ignoring spanning paths that start or end on temporal boundaries, we will ignore these here. Again, they do not change our arguments.}; the bad piece $\mathscr{G}_\mathrm{bad}$ is the sum of those fault paths---the remaining ones---that result in a logical error after $\mathscr{M}$ and $\mathscr{R}$. What we want to do is to obtain an upper bound on the size of $\mathscr{G}_\mathrm{bad}$, as a bound on how badly the error correction can fail.\\ \subsection{An upper bound} How do we identify which fault paths in $\mathscr{G}$ belong to $\mathscr{G}_\mathrm{bad}$? These are precisely those that, after $\mathscr{M}$ and $\mathscr{R}$, have at least one spanning path, comprising edges associated with errors or recovery operations. Reference \cite{FowlerProof} gave a way of counting (in fact, \emph{over}counting) such paths, as explained in the previous section---that counting does not care whether we have probabilistic or general noise. For each such identified path $\mathcal{P}$, with specified error and recovery edges, there is a corresponding error-only fault path in $\mathscr{G}_\mathrm{bad}$, with the fault $\mathcal{F}$ insertions precisely on those edges in $\mathcal{P}$ with errors, and identity on all the other edges in $\mathcal{P}$ (these become the recovery edges in $\mathcal{P}$ only after $\mathscr{M}$ and $\mathscr{R}$). What about the other edges not in $\mathcal{P}$? Those can, in principle, have fault insertions or not and can be accounted for by inserting the full $\mathcal{N}$ (again, dropping the index $a$ for brevity), i.e., both possibilities, $\mathbb{1}$ and $\mathcal{F}$, hence taking care of the ``disconnected pieces'' mentioned earlier.
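The fault-path expansion of Equation \eqref{eq:FP} can be illustrated in a toy setting. In the sketch below (our own construction: $2\times 2$ matrices stand in for superoperators, random matrices for the faults, and a small number of edges), composing the maps $\mathcal{N}_a=(1-\eta)\mathbb{1}+\mathcal{F}_a$ as in Equation \eqref{eq:G} reproduces the sum over all $2^A$ fault paths:

```python
import itertools
import functools
import numpy as np

rng = np.random.default_rng(0)
eta, A = 0.05, 3                       # toy parameters: fault strength and number of edges
I = np.eye(2)
F = [eta * rng.standard_normal((2, 2)) for _ in range(A)]   # toy "faults"
N = [(1 - eta) * I + Fa for Fa in F]   # N_a = (1 - eta)*identity + F_a, Eq. (eq:Nk)

def compose(maps):
    """Compose maps as N_A ... N_2 N_1 (matrix product, rightmost acts first)."""
    return functools.reduce(np.matmul, reversed(maps))

G = compose(N)                         # the full composition, Eq. (eq:G)
G_paths = sum(                         # sum over all 2^A fault paths, Eq. (eq:FP)
    compose([(1 - eta) * I if bit == 0 else Fa for bit, Fa in zip(bits, F)])
    for bits in itertools.product((0, 1), repeat=A)
)
assert np.allclose(G, G_paths)
```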
We write the sum of all such paths symbolically as \begin{equation}\label{eq:GB} \mathscr{G}'=\sum_\mathcal{P} \mathcal{F}(\mathcal{P})\circ\mathcal{N}(\overline{\mathcal{P}}), \end{equation} where $\mathcal{F}(\mathcal{P})$ denotes inserting $\mathcal{F}$ or $\mathbb{1}$ on the edges in $\mathcal{P}$ as described above, and $\mathcal{N}(\overline{\mathcal{P}})$ denotes inserting $\mathcal{N}$ on every edge of the syndrome lattice not in $\mathcal{P}$, i.e., the complement of $\mathcal{P}$, written here as $\overline{\mathcal{P}}$.\\ Note that we have written $\mathscr{G}'$ in Equation \eqref{eq:GB}, not $\mathscr{G}_\mathrm{bad}$, for the following reason: $\mathscr{G}'$ contains all fault paths that translate into logical errors, i.e., contains $\mathscr{G}_\mathrm{bad}$, but also ones that do not ultimately give logical errors. Part of this comes from the overcounting of $\mathcal{P}$---as mentioned earlier, the counting in Reference \cite{FowlerProof} includes many $\mathcal{P}$s that are not spanning paths. In addition, our way of adding fault insertions in $\overline{\mathcal{P}}$ by inserting the full $\mathcal{N}$ could give arrangements of errors that, after passing through $\mathscr{M}$ and $\mathscr{R}$, lead to a recovery that does not realise $\mathcal{P}$ as the error-plus-recovery path, but rather connects the resulting defects in a manner that does not give a spanning path and hence no logical error (see the Methods section for an example and further elaboration on this point). This means that we have included some---possibly very many---fault paths originally supposed to be in $\mathscr{G}_\mathrm{good}$ within $\mathscr{G}'$.\\ Despite this, let us persist with $\mathscr{G}'$ for a bit more, before coming back to discuss the consequences of the overcounting. The goal is to bound the part of $\mathscr{C}$ that leads to logical errors, i.e., the bad piece, $\mathscr{R}\circ\mathscr{M}\circ\mathscr{G}_\mathrm{bad}$.
We do not yet have $\mathscr{G}_\mathrm{bad}$; instead, let us first compute the norm of $\mathscr{R}\circ\mathscr{M}\circ\mathscr{G}'$. Using a submultiplicative superoperator norm such that any CPTP map has unit norm, we have \begin{equation}\label{eq:sumFP} \Vert\mathscr{R}\circ\mathscr{M}\circ\mathscr{G}'\Vert\leq \Vert\mathscr{R}\Vert\,\Vert\mathscr{M}\Vert\,\Vert\mathscr{G}'\Vert=\Vert\mathscr{G}'\Vert\leq \sum_\mathcal{P}\Vert\mathcal{F}(\mathcal{P})\Vert\,\Vert\mathcal{N}(\overline{\mathcal{P}})\Vert=\sum_\mathcal{P}\Vert\mathcal{F}(\mathcal{P})\Vert. \end{equation} This final piece $\sum_\mathcal{P}\Vert\mathcal{F}(\mathcal{P})\Vert$ can be bounded in a similar way as in the probabilistic noise case discussed above, replacing $p$ with $2\eta$ ($\geq \Vert\mathcal{F}_a\Vert$ for all $a$), and we find (see Methods section), \begin{align}\label{eq:Wprime} \Vert\mathscr{R}\circ\mathscr{M}\circ\mathscr{G}'\Vert\leq \sum_\mathcal{P}\Vert\mathcal{F}(\mathcal{P})\Vert\leq cNL\frac{(22^2\cdot 2\eta)^{\lceil L/2\rceil}}{1-22^2\cdot 2\eta}\equiv W'_{\mathrm{UB}}(L). \end{align} For probabilistic noise, $W'_\mathrm{UB}(L)=P_\mathrm{UB}(L)$, our upper bound from before, if we read $2\eta$ as $p$. For general noise, we can draw the same conclusion as before, that $W'_\mathrm{UB}(L)$ shrinks as $L$ increases if $2\eta< 1/22^2$. \subsection{Bounding the bad piece} So, what does this upper bound, established using $\mathscr{G}'$, tell us about the actual quantity we care about, namely, the occurrence of logical errors, which involves $\mathscr{G}_\mathrm{bad}$ instead? For probabilistic noise, this is straightforward. The sum over fault paths in $\mathscr{G}'$ or $\mathscr{G}_\mathrm{bad}$ is a probabilistic sum, with each additional term contributing a positive quantity. The overcounting in $\mathscr{G}'$ compared to $\mathscr{G}_\mathrm{bad}$ hence only makes its norm larger.
We can hence say that, for probabilistic noise, \begin{equation} \Vert\mathscr{R}\circ\mathscr{M}\circ\mathscr{G}_\mathrm{bad}\Vert\leq\Vert\mathscr{G}_\mathrm{bad}\Vert\leq \Vert\mathscr{G}'\Vert\leq W'_\mathrm{UB}(L), \end{equation} recovering the conclusions for probabilistic noise discussed earlier [with $2\eta$ in place of $p$, so that $W'_\mathrm{UB}(L)=P_\mathrm{UB}(L)$]. As before, we see that the fault insertions on $\overline{\mathcal{P}}$ contribute trivially: They became 1 after taking the norm of $\mathcal{N}(\overline{\mathcal{P}})$ in $\mathscr{G}'$.\\ For general noise, however, we run into difficulties. Each term in a sum of fault paths can no longer be said to always give a positive contribution. Plus and minus signs, or even generally complex phases, associated with each term can lead to cancellations and destructive interference. We cannot then claim that $\mathscr{G}'$, which contains fault paths that do not lead to logical errors, has a larger norm than $\mathscr{G}_\mathrm{bad}$ itself. In fact, the so-called disconnected pieces, encapsulated in the $\mathcal{N}(\overline{\mathcal{P}})$ term in $\mathscr{G}'$, disappear from the final bound on $\mathscr{G}'$ only because the $\mathbb{1}$ and $\mathcal{F}$ terms on every edge are there to ensure that the full CPTP $\mathcal{N}$ appears in the sum and hence has unit norm. However, not all of the terms will lead to logical errors, so in $\mathscr{G}_\mathrm{bad}$ itself, we do not expect the full $\mathcal{N}$ to always occur (see an elaboration on this point in the Methods section), and these disconnected pieces should appear nontrivially in $\mathscr{G}_\mathrm{bad}$, with incomplete cancellations.\\ One can take the opposite tack and bound each fault path separately before taking the sum, so that no such cancellations can appear and each additional fault path contributes positively, as in the probabilistic noise situation.
If we do this, the overcounting no longer matters. Then, \begin{equation}\label{eq:sumFP2} \Vert\mathscr{R}\circ\mathscr{M}\circ\mathscr{G}_\mathrm{bad}\Vert\leq \sum_\mathcal{P}\Vert\mathcal{F}(\mathcal{P})\Vert\prod_{a\in\overline{\mathcal{P}}}{\left[(1-\eta)\Vert\mathbb{1}\Vert+\Vert\mathcal{F}_a\Vert\right]}, \end{equation} where we have taken the norm of the individual $\mathbb{1}$ and $\mathcal{F}$ terms in $\mathcal{N}$ before taking the sum, to avoid any possible cancellations. Noting that $\Vert\mathcal{F}_a\Vert\leq 2\eta$ and following our earlier argument (see Methods section), we have \begin{align}\label{eq:W} \Vert\mathscr{R}\circ\mathscr{M}\circ\mathscr{G}_\mathrm{bad}\Vert&\leq cNL(1+\eta)^A\frac{{\left[\frac{22^2\cdot2\eta}{(1+\eta)^2}\right]}^{\lceil L/2\rceil}}{1-\frac{22^2\cdot 2\eta}{(1+\eta)^2}}\equiv W_\mathrm{UB}(L)\,, \end{align} with $c=2(23/22)$ as before.\\ The real situation lies somewhere between these two extremes of $W_\mathrm{UB}(L)$ and $W'_\mathrm{UB}(L)$. Some of the disconnected terms probably do appear together so that the full $\mathcal{N}$ appears in $\mathscr{G}_\mathrm{bad}$, but this probably does not occur in all cases. How to distinguish those cases remains a difficult counting problem at the moment.\\ Our analysis here with $\mathscr{G}'$ actually mirrors similar considerations in the old proofs of fault-tolerant quantum computing with concatenated codes and recursive simulation (see, for example, Refs.~\cite{Ben-Or, AGP}). There, however, their corresponding $\mathscr{G}'$ term is actually just $\mathscr{G}_\mathrm{bad}$, as every term identified genuinely gives a logical error. No such mixing of terms from the good side $\mathscr{G}_\mathrm{good}$ happens there. Again, our difficulty here lies in not being able to precisely count only the paths that lead to logical errors.
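The danger of cancellations discussed above can be made concrete with a deliberately simple scalar toy (our own illustration; the numbers merely stand in for fault-path terms): with signed terms, a subset of a sum can have a larger magnitude than the full sum, so including extra terms need not yield an upper bound.

```python
# With nonnegative (probabilistic) terms, more terms can only increase the sum:
probs = [0.5, 0.4, 0.1]
assert sum(probs[:1]) <= sum(probs)

# With signed terms (general noise), a subset can exceed the full sum in
# magnitude, so the norm of the overcounted G' need not upper-bound G_bad:
terms = [0.5, -0.4, -0.05]             # stand-ins for interfering fault-path terms
assert abs(sum(terms[:1])) > abs(sum(terms))
```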
That first counting lattice-spanning paths and then adding disconnected pieces does not suffice here is a direct consequence of the fact that the surface code decoding requires a \emph{global} consideration of all defects that appear. One simply cannot decide whether a logical error results just from looking at a subset of defects or erroneous edges. \subsection{A cost-benefit analysis} As observed earlier, the upper bound $W'_\mathrm{UB}(L)$ shrinks as $L$ increases, as long as $\eta$ is small enough, establishing a threshold condition on $\eta$. For $W_\mathrm{UB}(L)$, there is no such threshold on $\eta$: $A$ grows as $L^3$, and the $(1+\eta)^A$ factor grows more rapidly than the $\eta^{\lceil L/2\rceil}$ factor shrinks as $L$ increases, so that $W_\mathrm{UB}(L)$ blows up unless $\eta=0$. Thus, we cannot establish a nontrivial (i.e., $\eta>0$) accuracy threshold condition using the $W_\mathrm{UB}$ upper bound on $\mathscr{G}_\mathrm{bad}$, while $W'_\mathrm{UB}(L)$ is not actually an upper bound on $\mathscr{G}_\mathrm{bad}$. \\ This is, of course, not a proof that there is no nontrivial accuracy threshold for surface codes in the presence of general noise. A better proof method could derive an upper bound on $\mathscr{G}_\mathrm{bad}$ that does yield a nontrivial threshold. In our current approach, which generalizes existing proofs to include the disconnected pieces, the disconnected pieces seem to be the ones causing the problem---our simplistic counting in $W_\mathrm{UB}(L)$ led to the $(1+\eta)^A>1$ factor that explodes as $L$ grows. 
However, as we now attempt to argue, improving the counting of these disconnected pieces alone will likely not solve the problem.\\ Observe that both $W_\mathrm{UB}(L)$ and $W'_\mathrm{UB}(L)$ take the form, \begin{equation}\label{eq:genExp} \underbrace{f(\alpha,\eta)(1+\alpha\eta)^{v(L)}}_{C(L)}\underbrace{{\left[\frac{22^2\cdot2\eta}{(1+\alpha\eta)^2}\right]}^{\ell(L)}}_{B(L)}, \end{equation} where $\ell(L)\equiv\lceil L/2\rceil$, a function growing linearly with $L$; $v(L)\equiv A\sim L^3$, a function growing with the volume of the syndrome lattice; $f(\alpha,\eta)\equiv cNL{\left[1-\frac{22^2\cdot2\eta}{(1+\alpha\eta)^2}\right]}^{-1}$; and $\alpha$ is a parameter such that $\alpha =0$ for $W'_\mathrm{UB}(L)$ while $\alpha=1$ for $W_\mathrm{UB}(L)$. Expression \eqref{eq:genExp} can be split into two factors, $C(L)$ and $B(L)$, as indicated above (suppressing the $\alpha$ and $\eta$ dependences for brevity). $B(L)$ can be identified as the benefit of doing error correction with surface codes---as $L$ increases, the code is capable of removing a number of errors that grows linearly with $L$, so that the remnant ``bad'' piece shrinks as $\sim\eta^{\ell(L)}$, as opposed to just $\eta$ without error correction. $C(L)$ represents the cost of doing error correction, collecting together the terms that capture the fact that there are more locations for faults to occur as $L$ grows. Written this way, the presence of an accuracy threshold can be thought of as arising from a cost-benefit analysis. A nontrivial threshold exists when the benefit [the shrinking of $B(L)$] outgrows the cost as the scale $L$ of the code increases.
For $W'_\mathrm{UB}(L)$, the cost $C(L)$ grows only linearly with $L$---it has no dependence on $v(L)$---while $B(L)$ shrinks exponentially with $L$, giving a nontrivial threshold; for $W_\mathrm{UB}(L)$, however, $C(L)$, with its exponential dependence on $v(L)$, quickly outstrips the exponential suppression in $B(L)$, and no nontrivial threshold exists.\\ As argued earlier, the actual norm of $\mathscr{G}_\mathrm{bad}$ should lie somewhere in between $W_\mathrm{UB}(L)$ and $W'_\mathrm{UB}(L)$. If one is able to improve the counting of the disconnected pieces, by identifying when the $\mathbb{1}$ and $\mathcal{F}$ terms of $\mathcal{N}$ occur together or not, the same logic as we have followed in this work will give a bound on $\mathscr{G}_\mathrm{bad}$ that has a similar form as Equation \eqref{eq:genExp}, but with an $\alpha$ value somewhere between 0 and 1. Whatever value $\alpha$ turns out to be, however, as long as it is nonzero, the dependence on $v(L)\sim L^3$ will appear in the bound, causing the cost $C(L)$ to grow much more rapidly than the suppression provided by the benefit term $B(L)$, which shrinks only with an exponent $\ell(L)\sim L$, yielding again no nontrivial threshold.\\ It thus appears difficult to escape this conclusion of a trivial threshold condition following our current lines of proof. It may suggest that there is no nontrivial threshold for general noise, beyond the earlier simplistic probabilistic noise considerations; but at least, our argument here suggests that a genuinely new idea---different from first overcounting spanning paths and then adding in disconnected pieces, reminiscent of past work on such threshold analyses---is needed to have the hope of deriving a nontrivial threshold. It may very well be that one has to be able to identify only fault paths that genuinely lead to logical errors, i.e., only those in $\mathscr{G}_\mathrm{bad}$, but this appears to be a very challenging task.
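The cost-benefit tension can be made quantitative by evaluating Expression \eqref{eq:genExp} numerically. The sketch below is our own illustration; the assumed values $N=L$ and $v(L)=12L^3$ (the prefactor of $v$ is an arbitrary choice of ours) do not affect the qualitative conclusion:

```python
from math import ceil

def bound(L, eta, alpha):
    """Evaluate Expression (eq:genExp) with N = L, v(L) = 12*L**3 (illustrative
    prefactor), and l(L) = ceil(L/2); alpha = 0 gives W'_UB, alpha = 1 gives W_UB."""
    N, v, l = L, 12 * L**3, ceil(L / 2)
    x = 22**2 * 2 * eta / (1 + alpha * eta)**2
    f = 2 * (23 / 22) * N * L / (1 - x)
    return f * (1 + alpha * eta)**v * x**l   # C(L) * B(L)

eta = 1e-4                                   # well below 1/(2*22^2)
# alpha = 0: the cost grows only linearly, and the bound shrinks with L ...
assert bound(41, eta, 0) < bound(21, eta, 0) < bound(11, eta, 0)
# ... but for alpha = 1, the (1 + alpha*eta)^(v(L)) cost eventually overwhelms
# the exponential-in-L benefit, and the bound diverges as L grows.
assert bound(41, eta, 1) > bound(31, eta, 1) > 1
```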
\section{Conclusions} The surface code has emerged as a strong contender for building large-scale quantum computers, with its promise of more feasible experimental requirements, including the general consensus of a less stringent fault-tolerance threshold. That consensus, however, is largely founded upon studies that modelled noise in a probabilistic manner, which unfortunately does not encompass all noise seen in actual quantum devices. The expectation is that the threshold conclusions can be extended, using similar proof techniques, to include general noise, as was the case for older fault-tolerance schemes based on concatenated codes like the Steane code.\\ Here, our attempt to do precisely that, namely, to extend existing arguments that gave the surface code threshold under probabilistic noise to the case of general noise, led to no nontrivial threshold. As we now see, it is clear why existing arguments cannot work: The overcounting of spanning paths, exacerbated by the added disconnected pieces, can lead to the cancellation of terms for general noise such that we cannot argue that the resulting norm still upper-bounds the bad part of the computation. The extra terms, those that do not correspond to logical errors and are hence not in $\mathscr{G}_\mathrm{bad}$, do not matter for probabilistic noise as all terms sum constructively. These extra terms arise not just from the overcounting (following Reference \cite{FowlerProof}) of the spanning paths to include paths that do not reach the opposite boundary, but also from the fact that added defects in the disconnected pieces can cause the decoder to break up a spanning path such that no logical error results.
This is a direct manifestation of the global nature of surface-code decoding: the entire set of defects has to be considered; one simply cannot draw conclusions from just a local subset of defects.\\ As argued above, merely improving the counting of the disconnected pieces will likely not improve matters. Part of the issue, which may hint at a genuine failure of surface-code quantum computing for general noise, is that the contribution from the disconnected pieces---part of the cost of doing error correction---seems to grow with the volume of the syndrome lattice, while the benefit, namely the removal of errors, only leads to a linear-in-$L$ suppression. Unless one finds a way of arguing that the disconnected contribution is only a unit factor (which appears difficult, as elaborated in the Methods section), it seems difficult to escape the conclusion that the cost of doing surface-code error correction rapidly outstrips the benefit, and no nontrivial threshold results.\\ Again, we emphasize that we are not able to prove definitively that surface-code quantum computing fails under general noise. A proof approach that avoids the pitfalls pointed out here might still give a nonzero fault-tolerance threshold. We invite the reader to take up this task, and simply raise the caution that the question of the efficacy of surface-code quantum computing may not be as settled as it currently seems. \section{Methods} Here, we provide the technical details of the results discussed in the main text. 
\subsection{Probabilistic noise}\label{sec:MethodsProb} The steps leading to Equation~\eqref{eq:probthreshold} are as follows, starting from the left-hand side of that equation, and evaluated for odd $L=2t+1$ (so $t+1=\lceil L/2\rceil$): \begin{align} &\quad \sum_{r=L}^\infty(\textrm{number of length-$r$ paths})P(r)= \sum_{r=L}^\infty \underbrace{4NL}_{\textrm{starting vertex}} \times \underbrace{11^r}_{r \textrm{ other vertices}}\times P(r)\\ &\leq\sum_{r=L}^\infty (2NL)11^rp^{\lceil r/2\rceil}2^r=2NL{\left(\sum_{r=L,r\textrm{ odd}}^\infty 22^rp^{\lceil r/2\rceil}+\sum_{r=L+1,r\textrm{ even}}^\infty 22^rp^{\lceil r/2\rceil}\right)}\nonumber\\ &=2NL{\left(\sum_{s=t}^\infty 22^{2s+1}p^{\lceil (2s+1)/2\rceil}+\sum_{s=t}^\infty 22^{2s+2}p^{\lceil (2s+2)/2\rceil}\right)}\nonumber\\ &=2NL(22)(23)p\sum_{s=t}^\infty (22^2p)^{s}=2NL(22)(23)p\frac{(22^2p)^t}{1-22^2p}=2NL\frac{23}{22}\frac{(22^2p)^{\lceil L/2\rceil}}{1-22^2p}.\nonumber \end{align} Note that there are differences between our expression here and the corresponding one in Reference \cite{FowlerProof}, stemming from differences in the handling of the temporal boundaries, and our exact evaluation of the sum in the second line of the above equation (Reference \cite{FowlerProof} only approximated the result). These differences, however, do not affect the conclusions on the accuracy threshold. \subsection{General noise} Let us first provide the steps towards Equations \eqref{eq:Wprime} and \eqref{eq:W}, before elaborating on how the disconnected pieces can cause problems. 
Starting from Equation \eqref{eq:sumFP}, we have \begin{align} \Vert\mathscr{R}\mathscr{C}_c\mathscr{M}\mathscr{C}_c\mathscr{G}'\Vert\leq \Vert\mathscr{G}'\Vert&\leq \sum_\mathcal{P}\Vert\mathcal{F}(\mathcal{P})\Vert \leq \sum_{r=L}^\infty(\textrm{number of such length-$r$ paths})W(r), \end{align} where $W(r)$ is the upper bound on the norm of a particular length-$r$ path [analogous to $P(r)$ of Eq.~\eqref{eq:Pr}], \begin{equation} W(r)\equiv \sum_{k=\lceil r/2\rceil}^r\binom{r}{k}(2\eta)^k (1-\eta)^{r-k}\Vert\mathbb{1}\Vert\leq (2\eta)^{\lceil r/2\rceil}\sum_{k=0}^r\binom{r}{k}=(2\eta)^{\lceil r/2\rceil}2^r. \end{equation} Following the previous argument for probabilistic noise (see Section \ref{sec:MethodsProb}), we then find that, for general noise, we have \begin{equation} \Vert\mathscr{R}\mathscr{C}_c\mathscr{M}\mathscr{C}_c\mathscr{G}'\Vert\leq cNL\frac{(22^2\cdot 2\eta)^{\lceil L/2\rceil}}{1-22^2\cdot 2\eta}, \end{equation} as given in Equation \eqref{eq:Wprime}.\\ Next, let us derive Equation \eqref{eq:W}. Beginning with Equation \eqref{eq:sumFP2}, and noting that $\Vert\mathcal{F}_a\Vert\leq 2\eta$, we have \begin{align} \Vert\mathscr{R}\mathscr{C}_c\mathscr{M}\mathscr{C}_c\mathscr{G}_\mathrm{bad}\Vert&\leq \sum_\mathcal{P}\Vert\mathcal{F}(\mathcal{P})\Vert(1+\eta)^{A-|\mathcal{P}|}\nonumber\\ &\leq \sum_{r=L}^\infty(\textrm{number of length-$r$ paths})W(r)(1+\eta)^{A-r}\nonumber\\ &\leq 2NL(1+\eta)^A\sum_{r=L}^\infty{\left(\frac{22}{1+\eta}\right)}^r(2\eta)^{\lceil r/2\rceil}=cNL(1+\eta)^A\frac{{\left[\frac{22^2\cdot2\eta}{(1+\eta)^2}\right]}^{\lceil L/2\rceil}}{1-\frac{22^2\cdot 2\eta}{(1+\eta)^2}}\,, \end{align} where the last equality follows the same logic as in Section \ref{sec:MethodsProb}, and $c=2(23/22)$ as before.\\ Now, let us elaborate on the problems caused by the disconnected pieces, and on whether the full $\mathcal{N}$ can be inserted in our sum of fault paths above for every edge in the disconnected pieces. 
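Before turning to the disconnected pieces, both key estimates above can be checked numerically: the binomial bound on $W(r)$ (valid for $2\eta\leq 1$, taking $\Vert\mathbb{1}\Vert=1$), and the closed-form evaluation of tail sums of the type $\sum_{r\geq L}22^r q^{\lceil r/2\rceil}$ (with $q=p$ or $q=2\eta$) used here and in the probabilistic-noise calculation for odd $L$. A sketch, with arbitrary parameter values:

```python
import math

def W_exact(r, eta):
    """The binomial sum defining W(r), with ||1|| = 1."""
    return sum(math.comb(r, k) * (2*eta)**k * (1 - eta)**(r - k)
               for k in range(math.ceil(r/2), r + 1))

def W_bound(r, eta):
    """The bound (2*eta)**ceil(r/2) * 2**r used in the text."""
    return (2*eta)**math.ceil(r/2) * 2**r

def tail_sum(L, q, terms=150):
    """Brute-force sum_{r>=L} 22^r q^ceil(r/2), truncated (tail is tiny)."""
    return sum(22.0**r * q**math.ceil(r/2) for r in range(L, L + terms))

def tail_closed(L, q):
    """Closed form (23/22)(22^2 q)^ceil(L/2) / (1 - 22^2 q), for odd L."""
    return (23/22) * (22**2 * q)**math.ceil(L/2) / (1 - 22**2 * q)
```

The same closed form underlies both the probabilistic result and Equation \eqref{eq:Wprime}, with $q$ rescaled accordingly.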
We first recall the logic leading up to this. We estimated the bad piece, containing the fault paths that lead to logical errors, in the following manner: We first (over)counted all spanning paths---these are fully connected sets of edges that go from one boundary face of the syndrome lattice to the opposite face. Then, we embellished each spanning path with disconnected pieces by adding edges on the syndrome lattice that can have faults or no faults, corresponding to the insertion of $\mathcal{F}$ or $(1-\eta)\mathbb{1}$, respectively. In $\mathscr{G}'$, we inserted both pieces, i.e., the full $\mathcal{N}$, for every edge not on the spanning path, yielding the upper bounds $W_\mathrm{UB}(L)$ and $W'_\mathrm{UB}(L)$ depending on when we take the norm. \\ Let us first see how disconnected pieces added to a specified spanning path can result in no logical errors, thus effectively moving the corresponding fault path from the $\mathscr{G}_\mathrm{bad}$ piece (for the spanning path only) to the $\mathscr{G}_\mathrm{good}$ piece (for the spanning path together with the disconnected pieces). An illustrative example is given in Figure \ref{fig:disconnectedPaths}, for the $L=7$ surface code. We begin with a specified spanning path [marked in orange and blue in Figure \ref{fig:disconnectedPaths}(a)], corresponding to a fault path with four insertions of $\mathcal{F}$ as indicated (on edges 2--5). The spanning path results from the MWPM decoder assigning recovery operations at the indicated edges---there are three such recovery edges (edges 1, 6, and 7), and we say that this path has weight 3, counting the number of edges with recovery. The defects manifest at the ancilla locations marked $a$ and $b$. If these faults are the only ones that appear, the spanning path will indeed emerge after the recovery, and a logical error will result. 
That a four-fault situation can lead to a logical error should come as no surprise, since the $L=7$ code guarantees successful correction only if no more than 3 errors occur. \\ Consider, however, adding in some disconnected pieces. Specifically, suppose we have faults at the edges marked $8$ and $9$ in Figure \ref{fig:disconnectedPaths}. Defects will manifest at ancilla locations $c, d, e,$ and $f$. If these were the only faults present (i.e., ignoring those on the spanning path), the recovery will correctly remove them by applying recovery operations at edges $8$ and $9$. No logical errors will result. \\ The problem arises when we have both the faults on the spanning path and those on the disconnected pieces together. The total weight of the spanning path and the disconnected pieces is $3+2=5$. This is, however, not the minimum-weight path for the observed defects. Instead, as shown in Figure \ref{fig:disconnectedPaths}(b), there is a weight-4 path connecting the defects on the spanning path with those from the disconnected pieces. This then will be the recovery route determined by the MWPM decoder, and no logical error will result. Adding such disconnected pieces thus leads to the breakup of the spanning path, moving the corresponding fault path from $\mathscr{G}_\mathrm{bad}$ into $\mathscr{G}_\mathrm{good}$.\\ Now, one way of ensuring that we get a large set of terms that contribute only to $\mathscr{G}_\mathrm{bad}$, i.e., translate into logical errors, is to do the following: For each spanning path, have an ``exclusion zone'' around it, such that edges in that exclusion zone not on the spanning path all get only the $\mathbb{1}$ insertion, while edges outside of the exclusion zone get the full $\mathcal{N}$ insertion. The exclusion zone is one that is large enough (its size grows as $L^3$) such that any defects that occur outside of it will not be joined by the decoder to any of the defects that occur on the ends of edges on the spanning path. 
Then, the situation of Figure \ref{fig:disconnectedPaths} cannot occur, and the spanning path remains a spanning one even with defects that arise from the disconnected pieces. The fault paths in this set of terms will contribute only terms with factors of $(1-\eta)^{A'}$ (from the norm of the $\mathbb{1}$ piece), where $A'$ is the number of exclusion-zone edges not on the spanning path, and a $1$ (from the norm of the full $\mathcal{N}$) for all the remaining edges. These do not give the $(1+\eta)^A$ factors that caused problems in $W_\mathrm{UB}(L)$. \\ \begin{figure} \caption{Illustration, for the $L=7$ surface code, of how disconnected pieces can break up a spanning path. (a) A spanning path (faults on edges 2--5, marked in orange and blue; recovery on edges 1, 6, and 7), with defects at ancilla locations $a$ and $b$; additional faults on the disconnected edges 8 and 9 give defects at $c$, $d$, $e$, and $f$. (b) The weight-4 matching found by the MWPM decoder, which connects the defects of the spanning path with those of the disconnected pieces, so that no logical error results.} \label{fig:disconnectedPaths} \end{figure} These are, unfortunately, not the only terms that occur in $\mathscr{G}_\mathrm{bad}$. Faults can occur in the exclusion zone as long as the defects that arise either do not cause the breakup of the original spanning path or lead to new spanning paths after decoding. Going back to the example of Figure \ref{fig:disconnectedPaths}, we observe that if an insertion of a fault $\mathcal{F}$ occurs only on edge 8 and not on edge 9, the spanning path will remain unbroken even with the extra disconnected piece, and the corresponding fault path remains a part of $\mathscr{G}_\mathrm{bad}$. It is the simultaneous insertion of $\mathcal{F}$s at \emph{both} edges 8 and 9 that changes the situation to one with no logical error. This is a clear indication that we cannot have the full $\mathcal{N}$ inserted at both edges 8 and 9 if we want to be sure that a logical error results. Figuring out the combinatorics of when this happens---and there should be many possibilities, since the exclusion zone grows as $L^3$ in size---for all spanning paths seems very challenging. \textbf{Acknowledgements} \par This work is supported by a Centre for Quantum Technologies (CQT) Fellowship. CQT is a Research Centre of Excellence funded by the Ministry of Education and the National Research Foundation of Singapore. \end{document}
\begin{document} \title{Optical Implementation of a Unitarily Correctable Code} \author{K.M. Schreiter} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada} \affiliation{Department of Physics \& Astronomy, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada} \author{A. Pasieka} \affiliation{Department of Physics, University of Guelph, Guelph, Ontario, N1G 2W1, Canada} \author{R. Kaltenbaek} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada} \affiliation{Department of Physics \& Astronomy, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada} \author{K.J. Resch} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada} \affiliation{Department of Physics \& Astronomy, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada} \author{D.W. Kribs} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada} \affiliation{Department of Mathematics \& Statistics, University of Guelph, Guelph, Ontario, N1G 2W1, Canada} \date{\today} \begin{abstract} Noise poses a challenge for any real-world implementation in quantum information science. The theory of quantum error correction deals with this problem via methods to encode and recover quantum information in a way that is resilient against that noise. Unitarily correctable codes are an error correction technique wherein a single unitary recovery operation is applied without the need for an ancilla Hilbert space. Here, we present the first optical implementation of a non-trivial unitarily correctable code for a noisy quantum channel with no decoherence-free subspaces or noiseless subsystems. We show that recovery of our initial states is achieved with high fidelity ($\ge0.97$), quantitatively proving the efficacy of this unitarily correctable code. 
\end{abstract} \pacs{42.50.Ex; 03.67.Pp; 03.67.Hk} \maketitle \section{Introduction} In the realm of quantum information science, one of the most pervasive obstacles that must be overcome is the effect of interactions with the environment -- the resulting noise must be dealt with in any quantum system of practical interest~\cite{NC00,Got06}. The theory of quantum error correction has developed with this primary motivation in mind. The well-known passive error correction (or error avoidance) notions of decoherence-free subspaces (DFS) and noiseless subsystems (NS)~\cite{PSE96,DG97,ZR97c,LCW98a,KLV00a,Zan01b,KBLW01} have been extensively explored. Experiments have demonstrated the principle of DFSs in optical systems~\cite{KBAW00,MLRS03,BDWR04}, ion traps~\cite{KMRSIMW01}, and NMR~\cite{OLK03}. The primary appeal of passive quantum error correction schemes is the fact that they \textit{are} passive -- no correction operation is required once the noise acts. However, this comes at the cost of a limited number of correctable codes for a given channel, since such schemes rely heavily on symmetry in the noise~\cite{CK06}. When DFS/NS are not available or ideal, active quantum error correction (where a correction operation other than the identity map is needed) must be employed, resulting in an expanded set of correctable codes. Approaches such as the stabilizer formalism~\cite{Got96} provide formulas for finding and implementing codes in this regime. However, there can be a range of costs here that may negatively impact practical implementations. Clearly, then, there are benefits to identifying correction schemes that require the application of a recovery operation but retain some of the appealing properties of passive codes. Here we experimentally implement a non-trivial example of such a correction scheme by applying random, anticorrelated noise to a two-qubit optical system. 
This noisy channel leads to the loss of encoded information and has no passive codes. However, we show that one qubit can be encoded against the noise such that the encoded states are recovered with high fidelity by applying a single unitary recovery operation after each application of the noisy channel. This form of \emph{unitarily correctable code} was recently introduced~\cite{KLPL05,KS06,KPZ08}. \section{Unitarily Correctable Codes} One of the most general theoretical descriptions of how to deal with environmental interactions is given by the formalism of operator quantum error correction~\cite{KLP05,KLPL05}. In this theory, encoding, recovery, and decoding of information take place on subsystems of the full system. The key notion in operator quantum error correction is formulated as follows: a subsystem $\hil^{A}$ of $\mathcal{H}=\left(\hil^{A}\otimes\hil^{B}\right)\oplus\ch{K}$ is \emph{correctable} for a quantum channel $\ch{E}$ if there exists a recovery operation $\ch{R}$ such that for every pair of density operators $\rho^{A}\in\hil^{A}$ and $\sigma^{B}\in\hil^{B}$ there is some other density operator $\tau^{B}\in\hil^{B}$ such that \begin{equation}\label{OQEC}\ch{R}\circ\ch{E}\circ\ch{P}_{AB}(\rho^{A}\otimes\sigma^{B})=\ch{P}_{AB}(\rho^{A}\otimes\tau^{B}).\end{equation} Here $\hil^{B}$ is an ancillary subsystem, $\ch{K}$ is an ancillary subspace orthogonal to $\hil^{A}\otimes\hil^{B}$, $\ch{P}_{AB}$ is the projection superoperator of $\mathcal{H}$ onto $\hil^{A}\otimes\hil^{B}$, and by a quantum channel we mean a completely positive, trace-preserving map~\cite{NC00}. Simply stated, a subsystem $\hil^{A}$ is correctable if, when we restrict our attention to it and anything it interacts with ($\hil^{A}\otimes\hil^{B}$), there is a recovery operation that returns $\hil^{A}$ to its initial state, while possibly doing anything to $\hil^{B}$. 
Decoherence-free subspaces and noiseless subsystems are two special cases of Eq.~(\ref{OQEC}), namely the cases where the recovery operation is the identity map and $\dim{\hil^{B}}=1$ or $\dim{\hil^{B}}>1$, respectively. Active schemes such as the stabilizer formalism are examples of Eq.~(\ref{OQEC}) in full generality. We say that a subsystem $\hil^{A}$ is \emph{unitarily correctable} for a quantum channel $\ch{E}$ if there exists a unitary recovery operation $\ch{U}$ such that for every pair of density operators $\rho^{A}\in\hil^{A}$ and $\sigma^{B}\in\hil^{B}$ there is some other density operator $\tau^{B}\in\hil^{B}$ such that \begin{equation}\label{UCC}\ch{U}\circ\ch{E}\circ\ch{P}_{AB}(\rho^{A}\otimes\sigma^{B})=\ch{P}_{AB}(\rho^{A}\otimes\tau^{B}),\end{equation} where $\ch{U}$ is a unitary map such that $\ch{U}:\rho\mapsto U\rho U^{\dagger}$, with $U$ a unitary operator on $\hil^{A}\otimes\hil^{B}$. In the general case of Eq.~(\ref{OQEC}), every recovery operation $\ch{R}$ can be implemented via a unitary map on a (in general) larger Hilbert space~\cite{NC00}. Thus, from another perspective, unitarily correctable codes are precisely the codes for which an extended Hilbert space is not required in the recovery process. It may not be immediately apparent why this is advantageous, as it is simply a special case of Eq.~(\ref{OQEC}). The first point to recognize is that Eq.~(\ref{UCC}) is a generalization of the DFS/NS case since the identity map is a unitary map -- thus the number of correction options for a given quantum channel will be at least as large. More importantly perhaps, quantum channels with \textit{no} passive codes can gain \textit{nearly}-passive correction options. Secondly, this potential increase in correction options comes only at the cost of applying a single unitary map after the noise and no measurements. 
Thanks to these two points, unitarily correctable codes take advantage of two of the most appealing aspects of both active and passive correction schemes -- an increased range of correction options and easily implemented correction. From a practical standpoint, a correction scheme is only useful if one can find codes for which the scheme is capable of correcting a class of error operations of interest. In the case of unitarily correctable codes this can be achieved for the class of unital operations (those for which the identity operator is unaffected by the noise, $\ch{E}(I)=I$), as shown by Theorem 2 of~\cite{KS06}. The theorem states that the unitarily correctable codes for a quantum channel $\ch{E}$ are precisely the DFS/NS for $\ch{E}^{\dagger}\circ\ch{E}$, where $\ch{E}^{\dagger}$ denotes the dual map for $\ch{E}$, defined via the relation between expectation values: $\Tr{\ch{E}(\rho)X}=\Tr{\rho\ch{E}^{\dagger}(X)}$. One can readily check that $\ch{E}$ is unital and trace preserving if and only if $\ch{E}^{\dagger}$ is as well. The problem of finding the DFS/NS for a given channel has been fully characterized~\cite{K03,CK06}. In the case of a unital channel, the DFS/NS are the subsystems of the full Hilbert space that commute with each of the Kraus operators of the channel. This fact, together with Theorem 2 of~\cite{KS06}, provides a way to compute the unitarily correctable codes for a given unital channel. The next section includes an explicit calculation for a specific example. \section{Experimental Model} As a demonstrative example, we seek a simple quantum channel with no DFS/NS but with a unitarily correctable code. To that end, consider the two-qubit phase-flip channel with Kraus operators $\left\{\frac{1}{\sqrt{2}}Z_{1},\frac{1}{\sqrt{2}}Z_{2}\right\}$, where $Z_{1}=\sigma_{z}\otimes I$ and $Z_{2}=I\otimes \sigma_{z}$. 
First note that there are no DFS/NS for this channel: $\ch{E}$ is a unital map, so we must consider the operators that commute with both $Z_{1}$ and $Z_{2}$. In the standard basis, $$Z_{1}=\left(\begin{array}{cccc}1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1\end{array}\right)$$ and $$Z_{2}=\left(\begin{array}{cccc}1&0&0&0\\0&-1&0&0\\0&0&1&0\\0&0&0&-1\end{array}\right).$$ Considering an arbitrary operator $M=\left(m_{i,j}\right)$ on the Hilbert space: \begin{equation}\left[Z_{1},M\right]=\left(\begin{array}{cccc}0&0&2m_{1,3}&2m_{1,4}\\0&0&2m_{2,3}&2m_{2,4}\\-2m_{3,1}&-2m_{3,2}&0&0\\-2m_{4,1}&-2m_{4,2}&0&0\end{array}\right).\nonumber\end{equation} The off-diagonal blocks must thus be equal to zero for an operator to commute with $Z_{1}$, and so every operator that commutes with $Z_{1}$ must be of the form (for arbitrary $a$ through $h$): $$\left(\begin{array}{cccc}a&b&0&0\\c&d&0&0\\0&0&e&f\\0&0&g&h\end{array}\right).$$ Similarly, the operators that commute with $Z_{2}$ are of the form $$\left(\begin{array}{cccc}j&0&k&0\\0&l&0&m\\n&0&o&0\\0&p&0&q\end{array}\right),$$ for arbitrary $j$ through $q$. Therefore, the only operators that commute with both $Z_{1}$ and $Z_{2}$ are of the form $$\left(\begin{array}{cccc}r&0&0&0\\0&s&0&0\\0&0&t&0\\0&0&0&u\end{array}\right),$$ for arbitrary $r$ through $u$. Since there are no non-zero off-diagonal elements we cannot encode any quantum information -- there are no DFS/NS for $\ch{E}$. However, because $\ch{E}$ is a unital map we can find the unitarily correctable codes for $\ch{E}$ by looking at the DFS/NS for $\ch{E}^{\dagger}\circ\ch{E}$. The Kraus operators for the unital channel $\ch{E}^{\dagger}\circ\ch{E}$ are $\left\{\frac{1}{\sqrt{2}}I,\frac{1}{\sqrt{2}}Z_{1}Z_{2}\right\}$. 
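These two claims lend themselves to a quick numerical check: the coherence $|00\rangle\langle 11|$ fails to commute with $Z_{1}$ (consistent with the absence of DFS/NS for $\ch{E}$), yet it does commute with $Z_{1}Z_{2}$, and the composition $\ch{E}^{\dagger}\circ\ch{E}$ indeed acts with Kraus operators $\{\frac{1}{\sqrt{2}}I,\frac{1}{\sqrt{2}}Z_{1}Z_{2}\}$. A sketch using NumPy:

```python
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
Z1, Z2 = np.kron(sz, I2), np.kron(I2, sz)

def E(rho):
    """Phase-flip channel with Kraus operators {Z1/sqrt2, Z2/sqrt2}."""
    return 0.5 * (Z1 @ rho @ Z1) + 0.5 * (Z2 @ rho @ Z2)

def EdagE(rho):
    """E^dagger o E; the Kraus operators are Hermitian, so E^dagger = E."""
    return E(E(rho))

def EdagE_claimed(rho):
    """The claimed Kraus form {I/sqrt2, (Z1 Z2)/sqrt2}."""
    K = Z1 @ Z2
    return 0.5 * rho + 0.5 * (K @ rho @ K)

# The coherence |00><11| that would carry quantum information:
coh = np.zeros((4, 4))
coh[0, 3] = 1.0
```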
In the standard basis, $$Z_{1}Z_{2}=\left(\begin{array}{cccc}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&1\end{array}\right),$$ and thus the operators that commute with the Kraus operators of $\ch{E}^{\dagger}\circ\ch{E}$ are of the form $$\left( \begin {array}{cccc} a&0&0&b\\0&c&d&0\\0&e&f&0\\g&0&0&h\end {array} \right),$$ for arbitrary $a$ through $h$. So, there are two one-qubit unitarily correctable codes for $\ch{E}$ (the two one-qubit decoherence-free subspaces for $\ch{E}^{\dagger}\circ\ch{E}$): $\mathcal{C}_{1}=\text{span}(\ket{00},\ket{11})$ and $\mathcal{C}_{2}=\text{span}(\ket{01},\ket{10})$. Finally, in order to find a suitable correction operation, consider the effect of $\ch{E}$ on an arbitrary two-qubit density matrix $\rho=\left(\rho_{i,j}\right)$: \begin{equation}\label{channel_effect}\ch{E}(\rho)=\left( \begin {array}{cccc} \rho_{1,1}&0&0&-\rho_{1,4}\\0&\rho_{2,2}&-\rho_{2,3}&0\\0&-\rho_{3,2}&\rho_{3,3}&0\\-\rho_{4,1}&0&0&\rho_{4,4}\end {array} \right).\end{equation} There are many candidate unitary correction operations -- for $\mathcal{C}_{1}$ the controlled-phase gate is one -- and it is also easy to see that both $Z_{1}$ and $Z_{2}$ will correct either of $\mathcal{C}_{1}$ or $\mathcal{C}_{2}$ when taken as the unitary matrix of Eq.~(\ref{UCC}). In any experimental setting, some gates are easier to implement than others (e.g., in the optical setting, two-qubit gates are difficult because of the lack of photon-photon interactions). The flexibility in the choice of the unitary correction operation is another appealing aspect of unitarily correctable codes. We have found what we set out to look for: a quantum channel with no DFS/NS but with two unitarily correctable codes. Although one qubit cannot be encoded with no recovery operation, one can be encoded with a single unitary recovery. 
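This single-unitary recovery can be verified directly: for any state supported on $\mathcal{C}_{1}$, applying the noise $\ch{E}$ and then the correction $\rho\mapsto Z_{2}\rho Z_{2}$ returns the state exactly. A sketch (the parametrization of the code state is our own, chosen to span $\mathcal{C}_{1}$):

```python
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
Z1, Z2 = np.kron(sz, I2), np.kron(I2, sz)

def noise(rho):
    """The anticorrelated phase-flip channel E."""
    return 0.5 * (Z1 @ rho @ Z1) + 0.5 * (Z2 @ rho @ Z2)

def correct(rho):
    """Unitary recovery U: rho -> Z2 rho Z2."""
    return Z2 @ rho @ Z2

def code_state(theta, phi):
    """rho = |psi><psi| for |psi> = cos(2t)|00> + e^{i phi} sin(2t)|11>."""
    psi = np.zeros(4, dtype=complex)
    psi[0], psi[3] = np.cos(2*theta), np.exp(1j*phi) * np.sin(2*theta)
    return np.outer(psi, psi.conj())
```

On $\mathcal{C}_{1}$ the noise merely flips the sign of the coherences, and a second application of either $Z_{1}$ or $Z_{2}$ flips it back.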
Therefore this model allows for a clear experimental demonstration of one of the key advantages of the unitarily correctable code approach -- we gain a \textit{nearly-passive correction scheme} for a quantum channel with \textit{no passive codes}. An additional advantage of this model is that it can be realized in an optical setting. Defining $\ket{H}\equiv\ket{0}$ and $\ket{V}\equiv\ket{1}$, a phase-flip can be achieved by placing a half-wave plate (HWP) with its optic axis oriented along $\ket{H}$ in the path of a photon. Thus $Z_{1}$ ($Z_{2}$) is implemented by a HWP in the first (second) photon path with nothing in the second (first). Since both $Z_{1}$ and $Z_{2}$ will correct either codespace, we can experimentally implement the correction by placing a HWP in either photon path. Thus, the particular optical implementation of the unitarily correctable code we consider is as follows: considering $$\ch{U}\circ\ch{E}\circ\ch{P}_{AB}(\rho^{A}\otimes\sigma^{B})=\ch{P}_{AB}(\rho^{A}\otimes\tau^{B}),$$ the full Hilbert space $\mathcal{H}$ is the set of polarization states of two entangled photons. $\mathcal{H}$ can be decomposed as $\mathcal{H}=\left(\hil^{A}\otimes\hil^{B}\right)\oplus\ch{K}$ where $\dim{\hil^{B}}=1$. The single encoded qubit will be supported on $P_{\mathcal{C}_{1}}=\kb{00}{00}+\kb{11}{11}$, so $\dim{\hil^{A}}=2$, and the ancilla space $\ch{K}$ is supported on $P_{\mathcal{C}_{1}^{\perp}}=\kb{01}{01}+\kb{10}{10}$ and also has dimension $2$. Thus the projection superoperator $\ch{P}_{AB}$ focuses our attention onto the codespace $\mathcal{C}_{1}$: $\ch{P}_{AB}:\rho\mapsto P_{\mathcal{C}_{1}}\rho P_{\mathcal{C}_{1}}$. The noise is provided by randomly fired anti-correlated phase-flips in both photon paths and has the form $\ch{E}:\rho\mapsto\frac{1}{2}Z_{1}\rho Z_{1}+\frac{1}{2}Z_{2}\rho Z_{2}$. 
Finally, the correction operation is provided by a single HWP placed in the second photon path, set to apply $Z_{2}$, so $\ch{U}:\rho\mapsto Z_{2}\rho Z_{2}$. \section{Experimental Setup} \begin{figure} \caption{(Color online) Experimental setup. (a) Source of polarization-entangled photon pairs. An ultraviolet diode laser is used to pump a pair of orthogonally-oriented BiBO crystals to produce entangled photon pairs by spontaneous parametric down-conversion. The precise state produced is determined by the polarization of the laser light and can be set using a half-wave plate (HWP). A pair of compensation crystals are used to compensate group-velocity mismatch in the down-conversion crystals. The light is collected into single-mode fibers. More details can be found in the text. (b) Experimental implementation of the noise model, correction, and tomography. A quarter-wave plate is used to adjust the phase of the entangled pairs; together with the HWP in the UV laser and the fiber-based polarization controllers, these operations are sufficient to prepare an arbitrary pure state in $\mathcal{C}_{1}$.} \label{experiment} \end{figure} \begin{figure} \caption{(Color online) Example set of experimentally measured two-photon density matrices (left: real parts, right: imaginary parts). (a) The initial state was chosen to have a non-trivial value of both angles in Eq.~(\ref{generalcodestate}). (b) The state after the noise. (c) The state after noise and correction.} \label{sample2dms} \end{figure} Our experimental source is shown in Fig.~\ref{experiment}(a). We use a 185~mW free-running UV diode laser (Newport model: LQC405-180E, centre wavelength $404.4$~nm, bandwidth 0.8~nm FWHM). To improve mode quality, the beam is passed through a spatial filter to obtain a 55~mW near-TEM$_{00}$ beam. A motorized half-wave plate then rotates the pump polarization from horizontal to an arbitrary linear polarization. 
The diode laser pumps a pair of 0.5~mm thick orthogonally-oriented BiBO nonlinear crystals cut for type-I non-collinear degenerate down-conversion with a 3$^\circ$ half-opening angle outside the crystal \cite{Kwiat99}. A 1~mm $\alpha$-BBO crystal and a 1~mm quartz crystal, both cut for maximum birefringence, are used to compensate group-velocity mismatch in the down-conversion crystals. Entangled photon pairs emitted by the source pass through bandpass filters (centre wavelength 810~nm, bandwidth 5~nm FWHM) and are coupled into single-mode optical fibers. The light is coupled back into free space before application of the noise, as shown in Fig.~\ref{experiment}(b). The noise $\ch{E}$ is implemented by a pair of computer-controlled liquid-crystal variable phase retarders (Meadowlark LVC-100, LCVR), one in each photon path. The LCVRs are both set to implement phase-flips. To simulate a noisy channel, \textit{either} one or the other LCVR is randomly fired with a switching rate of $1\;$Hz, providing anti-correlated noise. The random switching was implemented using a software pseudo-random number generator (LabView, National Instruments). Recovery is achieved with a half-wave plate oriented along $\ket{H}$ in the path of photon 2. We characterize all of our states using quantum state tomography, Fig.~\ref{experiment}(b). There is a polarization analyzer in each arm, comprising a half-wave plate, a quarter-wave plate, and a polarizing beam-splitter. We use the tomographically-overcomplete polarization measurements $\{H,V,D,A,R,L\}$ for each photon ($R$ and $L$ represent right- and left-circular polarizations), i.e., 36 measurement settings on the pair of photons. Density matrices are reconstructed from the experimentally measured counts using the maximum-likelihood method \cite{James01}. We performed quantum state tomography at each of three stages in the experiment: in the absence of noise or correction, in the presence of noise, and in the presence of both noise and correction. 
Typical rates for the source are approximately 12000 coincidence counts/s and 60000 singles counts/s when the fibers are directly connected to detectors. Using the half-wave plate placed before the down-conversion crystals, the fiber-based polarization controllers, and the tilted quarter-wave plate in the path of photon 1, the source can produce output states in $\mathcal{C}_{1}$ of the form \begin{equation} \ket{\psi}=\cos2\theta \ket{HH} + \sin2\theta e^{i\phi} \ket{VV}. \label{generalcodestate} \end{equation} With the half-wave plate oriented at $\theta=22.5^\circ$ and the phase angle at $\phi=0$, the system ideally produces the Bell state $\ket{\phi^+}=\frac{1}{\sqrt{2}}(\ket{HH}+\ket{VV})$. In this case, we measure visibilities of 99.2\% in the horizontal/vertical (H/V) basis and 95.3\% in the diagonal/antidiagonal (D/A) basis. Reconstructing the state using quantum state tomography we find, in this case, that the fidelity~\cite{J04,FN01} with $\ket{\phi^{+}}$ is $0.97$, the tangle~\cite{CKW00} is $\tau=0.9$, and the linear entropy~\cite{FN02} is $S_{L}=0.065$. Thus our source is capable of producing highly entangled and nearly pure states. \section{Results \& Discussion} \begin{figure} \caption{(Color online) Measured fidelity between initial and final states. The mixed-state fidelity \cite{J04} is shown for the noise-affected states and for the corrected states as a function of the pump half-wave-plate angle $\theta$; see text for details.} \label{summary2qb} \end{figure} An example set of reconstructed density matrices, chosen to show the effect of the noise and correction on a state with unequal populations, and both real and imaginary coherences, is shown in Fig.~\ref{sample2dms}. Data were accumulated for 5~s per measurement setting. The initial state is shown in Fig.~\ref{sample2dms}(a). This state has fidelity $0.98$ with a state in $\mathcal{C}_{1}$ [Eq.~(\ref{generalcodestate})] with $\theta=35.5^\circ$ and $\phi=46.5^\circ$. Subjecting the photon pairs to the noise changes the state to the one shown in Fig.~\ref{sample2dms}(b). 
The fidelity of this state with the initial state in Fig.~\ref{sample2dms}(a) has been reduced to $0.62$. Qualitatively, one can see that the noise $\ch{E}$ does not decohere these states, but rather flips the sign of both the real and imaginary coherences. The correction, $\ch{U}$, is implemented by placing a half-wave plate in the path of photon 2. When the noisy state was corrected, we obtained the state shown in Fig.~\ref{sample2dms}(c), where the fidelity with the initial state has been restored to $0.97$. The recovery of high-fidelity states demonstrates the effectiveness of this unitarily correctable code. We characterized the effect of the noise and of the noise plus correction on a range of input states. We adjusted the source to states in $\mathcal{C}_{1}$ with real coefficients ($\phi=0$) and tuned those coefficients by adjusting the angle of the HWP in the pump laser, $\theta$. The fidelity measurements of the noise-affected states are shown in Fig.~\ref{summary2qb}, where the initial states are shown as open circles and the theoretical prediction, $F=\cos^{2}4\theta$, is a solid black line. These data clearly show that the noise affects the states, but by different amounts. As expected, those states closer to the product states $\ket{HH}$ and $\ket{VV}$, or HWP settings $0^\circ$ (or $90^\circ$) and $45^\circ$, respectively, are less affected by the noise; in the case of $\ket{HH}$ and $\ket{VV}$, the states are invariant under the noise. Those states closer to the maximally entangled states $\ket{\phi^+}$ and $\ket{\phi^-}=\frac{1}{\sqrt{2}}(\ket{HH}-\ket{VV})$, or HWP settings $22.5^\circ$ and $67.5^\circ$, respectively, are most affected by the noise, as it drives them to nearly orthogonal states. The states are restored to greater than $0.98$ fidelity in all measurements, thereby demonstrating the effectiveness of this unitarily correctable code across the entire codespace $\mathcal{C}_{1}$. 
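The theoretical prediction $F=\cos^{2}4\theta$ for the noise-affected states follows directly from the channel model and can be reproduced numerically; a sketch:

```python
import numpy as np

def fidelity_after_noise(theta):
    """<psi| E(|psi><psi|) |psi> for |psi> = cos(2t)|HH> + sin(2t)|VV> (phi = 0)."""
    I2, sz = np.eye(2), np.diag([1.0, -1.0])
    Z1, Z2 = np.kron(sz, I2), np.kron(I2, sz)
    psi = np.zeros(4)
    psi[0], psi[3] = np.cos(2*theta), np.sin(2*theta)
    rho = np.outer(psi, psi)
    # E flips the sign of the |HH><VV| coherences:
    noisy = 0.5 * (Z1 @ rho @ Z1) + 0.5 * (Z2 @ rho @ Z2)
    return float(psi @ noisy @ psi)
```

At $\theta=22.5^\circ$ the fidelity vanishes ($\ket{\phi^{+}}$ is driven to $\ket{\phi^{-}}$), while at $\theta=0^\circ$ or $45^\circ$ the product states are left invariant.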
\section{Conclusion} We have described a simple noise model which does not have a decoherence-free subspace or noiseless subsystem but does contain two unitarily correctable codes. We have implemented this noise model by constructing a quantum channel in a physical system, namely the polarization of a pair of optical photons, in which two liquid-crystal variable phase retarders fire in an anticorrelated, but random, manner. We have sent photon pairs through this noisy channel in states ranging from separable to nearly maximally entangled and compared the impact of the noise and the noise plus unitary correction on the quality of the states. Our data show that the noise can dramatically reduce the fidelity of the output state with the initial state, especially in the case of highly entangled states in $\mathcal{C}_{1}$. However, the unitary correction restores the state quality with fidelities greater than $0.97$ for all states in $\mathcal{C}_{1}$. Our experiment shows how to translate the theory of unitarily correctable codes, a new method combining aspects of passive and active quantum error correction, into experimental realization. As these codes expand the class of nearly-passive correctable codes, it is an interesting open question as to how these approaches could be utilized in more natural physical systems, as opposed to controlled application of noise. \section{Acknowledgments} This work was funded by NSERC, Ontario MRI ERA, CFI, IQC, and OCE. A.P. was partially supported by an Ontario Graduate Scholarship. R.K. acknowledges financial support from IQC. We thank Zhenwen Wang for technical assistance with electronics. \end{document}
\begin{document} \title{An Upper Bound on the number of distinct Composition Series in a finite group\footnote{Email: [email protected]}} \author{ Abhijit Bhattacharjee} \date{ \small{ Department of Mathematics, Institute of Science, Banaras Hindu University,\\Varanasi, India-221005.\\} \today} \maketitle \begin{abstract} In this paper we prove that among all finite groups of order $\leq n$ (where $n\geq4$ is a natural number) the number of distinct composition series is bounded above by $\prod_{i=1}^{\left[\log_{2}n\right]}\left(2^{i}-1\right)$, and that this bound is attained if and only if the group is the elementary abelian $2$-group of order $2^{\alpha}$, where $\alpha=\left[\log_{2}n\right]$. This is a non-trivial upper bound on the number of composition series and the best one known so far. \end{abstract} \textit{Keywords:} \small{Finite group, composition series, Sylow subgroup, elementary abelian group} \textit{Mathematics Subject Classifications(2010):} \small{20D30, 20D15}\\ \section{ Introduction} A composition series of a group is a series of subgroups, each normal in the previous one, such that the corresponding factor groups are simple. Any finite group has a composition series. The concept of a composition series in a group is due to \'Evariste Galois (1831) [Theorem 5.9, \cite{key-1}]. The famous Jordan--H\"older theorem, which says that the composition factors in a composition series are unique up to isomorphism, was proved in the nineteenth century \cite{key-5, key-6}. Sometimes a group of small order has a huge number of distinct composition series. For example, the elementary abelian group of order 64 has 615195 distinct composition series. To find an upper bound on the number of composition series of a finite group is a natural question. In \cite{key-7}, there is an algorithm in GAP to find the distinct composition series of any group of finite order. The aim of this paper is to provide an upper bound on the number of distinct composition series of any group of finite order.
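The count of 615195 quoted above for the elementary abelian group of order $64=2^{6}$ can be checked against the closed formula proved later in the paper (Theorem 3.5). The following short script is our own illustration, not part of the paper:

```python
from math import prod

def comp_series_elem_abelian(p, n):
    """Number of distinct composition series of the elementary abelian
    p-group of order p^n (Theorem 3.5): prod_{k=1}^{n} (p^k - 1)/(p - 1)."""
    return prod((p**k - 1) // (p - 1) for k in range(1, n + 1))

# The example quoted in the introduction: order 64 = 2^6.
assert comp_series_elem_abelian(2, 6) == 615195
```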
The approach of this paper is combinatorial and the method is elementary. All the groups considered in this paper are of finite order. \vskip 5pt There are three further sections in the paper, namely Sections 2, 3 and 4. In Section 2 we prove Theorem 2.1, which is the main theorem of that section; all the theorems and results in Section 2 are well known. In Section 3 we prove Theorem 3.7, which is a new result and the main theorem of that section. In Section 4 we prove Theorem 4.3 by proving an inequality in number theory; this theorem is the main theorem of this paper. \section{Preliminary Results } \begin{theorem}\label{t2.1} Among all finite groups of order $n$, the abelian group with elementary abelian Sylow subgroups has the highest number of distinct composition series. \end{theorem} \vskip 3pt \noindent Before proving Theorem \ref{t2.1} we first recall some useful theorems from group theory. \begin{theorem}[Theorem 2.28, \cite{key-1}](Correspondence Theorem) Given $G\triangleright N$, denote the family of subgroups of $G$ that contain $N$ by $sub(G;N)$ and the family of subgroups of $G/N$ by $sub(G/N)$. Then there is a bijection \[\phi:sub(G;N)\longrightarrow sub(G/N),\] \[H\mapsto H/N\] which preserves all subgroup lattice and normality relationships. \end{theorem} \begin{theorem} [Exercise 14, Section 5.2, \cite{key-8}] A finite abelian group is self-dual. \end{theorem} \begin{lemma} [Lemma 4.2.7, \cite{key-9}]\label{l2.4} Let $G$ be a finite group and let $\prod\left(G\right)=\left\{ p_{1},p_{2},\dots ,p_{r}\right\} $, where $\prod\left(G\right)$ denotes the set of distinct prime factors of the order of $G$. Let $P_{i}\in Syl_{p_{i}}\left(G\right)$. If $G=P_{1}\times P_{2}\times \dots \times P_{r}$ then \[\mid m\left(G\right)\mid=\sum_{i=1}^{r}\mid m\left(P_{i}\right)\mid,\] where $m\left(G\right)$ denotes the set of distinct maximal subgroups of $G$.
\end{lemma} \begin{corollary} If $G$ is the abelian group with elementary abelian Sylow subgroups of order $n$, where $n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\dots p_{r}^{\alpha_{r}}$, then $m\left(G\right)=\sum_{i=1}^{r}\left(\frac{p_{i}^{\alpha_{i}}-1}{p_{i}-1}\right)$. \end{corollary} \begin{proof} Since $G$ is a finite abelian group, it is self-dual and is the direct product of its Sylow subgroups. Therefore the number of maximal subgroups of $G$ equals the sum of the numbers of maximal subgroups of its Sylow subgroups. Now an elementary abelian group is self-dual, so its number of maximal subgroups is exactly the same as its number of minimal subgroups. Therefore the number of subgroups of index $p_{i}$ equals the number of distinct subgroups of order $p_{i}$ in an elementary abelian $p_{i}$-group of order $p_{i}^{\alpha_{i}}$, which is $\frac{p_{i}^{\alpha_{i}}-1}{p_{i}-1}$, and the result follows immediately. \end{proof} \begin{example} If $G=\mathbb Z_{2}\times \mathbb Z_{2}\times \mathbb Z_{2}\times \mathbb Z_{2}\times \mathbb Z_{3}\times \mathbb Z_{3}\times \mathbb Z_{5}\times \mathbb Z_{5}$ then $\mid G\mid=3600$ and \[m\left(G\right)=\frac{2^{4}-1}{2-1}+\frac{3^{2}-1}{3-1}+\frac{5^{2}-1}{5-1}=15+4+6=25.\] \end{example} \begin{lemma} Let $G=S_{1}\times S_{2}$, where $S_{1}$, $S_{2}$ are two finite simple groups. If $G_{1}$ is a proper non-trivial normal subgroup of $G$ then $G_{1}$ is isomorphic to either $S_{1}$ or $S_{2}$. \end{lemma} \begin{proof} We consider the projection map $p:G\rightarrow S_{1}$, which is onto. Therefore the image of a normal subgroup under the projection map $p$ will be a normal subgroup of $S_{1}$. Since $S_{1}$ is a simple group we have the following two choices: \vskip 3pt \noindent Case 1 : If $p\left(G_{1}\right)=\left\{ e\right\} $, then $G_{1}$ is a subgroup of $S_{2}$; being a non-trivial normal subgroup of the simple group $S_{2}$, it satisfies $G_{1}=S_{2}$, i.e.\ $G_{1} {\backsimeq} S_{2}$.
\vskip 3pt \noindent Case 2 : If $p\left(G_{1}\right)=S_{1}$, we take the kernel $K=S_{2}\cap G_{1}$ of the restriction of the projection map to $G_{1}$. Here $K$ is a normal subgroup of the simple group $S_{2}$, so either $K=\left\{ e\right\} $ or $K=S_{2}$. If $K=\left\{ e\right\} $ then $G_{1}\backsimeq S_{1}$. If $K=S_{2}$ then $G_{1}\backsimeq S_{1}\times S_{2}$, which is a contradiction since $G_{1}$ is a proper subgroup of $G$. \end{proof} \begin{corollary} If $G=S_{1}\times S_{2}\times \dots \times S_{k}$ is the direct product of $k$ non-abelian simple groups then $G$ has $2^{k}$ normal subgroups, namely $N_{1}\times N_{2}\times\dots \times N_{k}$ where either $N_{i}=\left\{ e\right\} $ or $N_{i}=S_{i}$ for $1\leq i\leq k$. Also $G$ has only $k$ maximal normal subgroups, namely $M_{1},M_{2},\dots ,M_{k}$ where $M_{i}=S_{1}\times S_{2}\times \dots \times S_{i-1}\times e\times S_{i+1}\times\dots \times S_{k}$ and $G/M_{i}\backsimeq S_{i}$ for $1\leq i\leq k$. \end{corollary} \begin{example} Let $G=A_{5}\times A_{5}$; then $\mid G\mid=3600$ and $G$ has only $2^{2}=4$ normal subgroups, namely $e\times e$, $e\times A_{5}$, $A_{5}\times e$, $G$, and it has only $2$ maximal normal subgroups, namely $e\times A_{5}$ and $A_{5}\times e$. \end{example} \begin{theorem}[Theorem 8.6, \cite{key-4}](Birkhoff) Every algebra $A$ is isomorphic to a subdirect product of subdirectly irreducible algebras. \end{theorem} \vskip 5pt \noindent Now we present the proof of Theorem \ref{t2.1}. \begin{proof} Let $G$ be the abelian group with elementary abelian Sylow subgroups such that $\mid G\mid=n$. Now by Lemma \ref{l2.4}, the number of maximal subgroups of a finite nilpotent group of order $n$ is exactly the sum of the numbers of distinct maximal subgroups of its Sylow subgroups. But we know that among all finite $p$-groups of the same order the elementary abelian group has the highest number of maximal subgroups.
This proves that among all nilpotent groups of order $n$, $G$ has the highest number of maximal normal subgroups and hence the highest number of composition series. Let $K$ be a solvable group of order $n$ with $m$ maximal normal subgroups, namely $M_{1},M_{2},\dots ,M_{m}$. Then the index of $M_{i}$ in $K$ is a prime and $K/M_{i}\backsimeq \mathbb Z_{p}$ for some prime $p$. Let $C^{\prime}$ be the commutator subgroup of $K$. Then $C^{\prime}\trianglelefteq M_{i}.$ Let $M=\cap_{i=1}^{m}M_{i}$ be the intersection of all the maximal normal subgroups of $K$. Then $M$ is a normal subgroup of $K$ such that $C^{\prime}\trianglelefteq M$, and therefore $K/M$ is abelian. Now by the Correspondence Theorem there exists a bijection \[\phi:sub\left(K;M\right)\longrightarrow sub\left(K/M\right),\] which preserves subgroup lattice and normality relationships. Therefore $K/M$ is an abelian group with exactly $m$ maximal normal subgroups. But it is already proved that among all abelian groups of the same order the abelian group with elementary abelian Sylow subgroups has the highest number of maximal normal subgroups. For a non-solvable, non-simple group we take the Jacobson radical $J(G)$, which is the intersection of all the maximal normal subgroups of $G$, and we will use the subdirect irreducibility theorem of universal algebra, which says that any algebra can be expressed as a subdirect product of subdirectly irreducible algebras. Then $J(G)$ is a characteristic subgroup of $G$ and also $J(G/J(G))=1$ (by \cite{key-3}, page 4). So $J(G)$ is a normal subgroup of $G$ such that the quotient $G/J(G)$ is the direct product of simple groups (by \cite{key-2}, Lemma 4.1). Now if $J(G)$ is trivial then $G$ is a direct product of simple groups. Otherwise, if $J(G)$ is non-trivial, we can find a minimal normal subgroup $P$ such that $P$ is a subgroup of $J(G)$. Then we take the quotient $G/P$ and get a normal covering.
Proceeding this way, the argument reduces to the case when $G$ is a direct product of simple groups, as $G$ embeds in $G/N_{1}\times G/N_{2}\times \dots \times G/N_{k} $ such that $1=\cap_{i=1}^{k}N_{i}$, where each $N_{i}$ is a maximal normal subgroup and $k$ is a natural number. Now if $H$ is a non-solvable, non-simple group of order $n$ then $H$ reduces to a direct product of non-abelian simple groups; by Corollary 2.8, a direct product of $k$ copies of non-abelian simple groups has only $k$ maximal normal subgroups, which proves the theorem. \end{proof} \section{Some New Results} In this section we prove Theorem 3.7, which is the main result of this section. \begin{definition} We define $C_{G}$ as the set of all distinct composition series of the group $G$. \end{definition} \begin{theorem}\label{t3.2} Let $n\geq2$ be a positive integer where $n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\dots p_{r}^{\alpha_{r}}$. Then \[\mid C_{\mathbb Z_{n}}\mid=\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}.\] \end{theorem} \begin{proof} We know that for each positive divisor $d$ of $n$ there exists a unique subgroup of $\mathbb Z_{n}$ of order $d$. \vskip 3pt \noindent Let $S$ be the set of sequences of primes from the set $\{p_{1},\dots ,p_{r}\}$ of length $\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}$, where $p_{i}$ occurs $\alpha_{i}$ times. Then \[\mid S\mid=\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}.\] \vskip 3pt \noindent Now $C_{\mathbb Z_{n}}$ is the set of distinct composition series of $\mathbb Z_{n}$.
Now define a function $f:S\rightarrow C_{\mathbb Z_{n}}$ by \[f\left(\beta_{1}\beta_{2}\dots \beta_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r-1}}\beta_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}\right)=\left\{ e\right\} \trianglelefteq \mathbb Z_{\beta_{1}}\trianglelefteq \mathbb Z_{\beta_{1}\beta_{2}}\trianglelefteq\dots \trianglelefteq \mathbb Z_{\beta_{1}\beta_{2}\dots \beta_{\alpha_{1}+\alpha_{2}+\dots+\alpha_{r}}}=\mathbb Z_{n},\] where $\beta_{1},\beta_{2},\dots ,\beta_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}$ are primes such that $\beta_{i}=p_{i}$ has exactly $\alpha_{i}$ solutions for $1\leq i\leq\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}$. So $\beta_{1}\beta_{2}\dots \beta_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r-1}}\beta_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}\in S$. Now $\left\{ e\right\} \trianglelefteq \mathbb Z_{\beta_{1}}\trianglelefteq \mathbb Z_{\beta_{1}\beta_{2}}\trianglelefteq\dots \trianglelefteq \mathbb Z_{\beta_{1}\beta_{2}\dots \beta_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}}=\mathbb Z_{n}$ i.e.$\left\{ e\right\} \trianglelefteq \mathbb Z_{\theta_{1}}\trianglelefteq \mathbb Z_{\theta_{2}}\trianglelefteq\dots \trianglelefteq \mathbb Z_{\theta_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}}=\mathbb Z_{n}$ is a composition series of $\mathbb Z_{n}$ where $\theta_{i}=\prod_{j=1}^{i}\beta_{j}$, $1\leq i\leq\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}.$ Then $f:S\rightarrow C_{\mathbb Z_{n}}$ is an injective mapping by its construction. Now we will prove that it is surjective also. Let $\left\{ e\right\} =G_{0}\trianglelefteq G_{1}\trianglelefteq\dots \trianglelefteq G_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}=\mathbb Z_{n}$ be a composition series of $\mathbb Z_{n}.$ Since for each divisor $d$ of $n$, there exists a unique subgroup of order $d$ of $\mathbb Z_{n}$ and any subgroup of $\mathbb Z_{n}$ is cyclic then $\dfrac{\mid G_{i}\mid}{\mid G_{i-1}\mid}$ is a prime number. 
\vskip 5pt \noindent Define $g:C_{\mathbb Z_{n}}\longrightarrow S$ by \[g\left(C_{\mathbb Z_{n}}\right)=q_{1}q_{2}\dots q_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}},\] where $q_{i}=\dfrac{\mid G_{i}\mid}{\mid G_{i-1}\mid},$ for $1\leq i\leq\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}.$ Then each $q_{i}$ is a prime for $1\leq i\leq\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}.$ \vskip 5pt \noindent So $q_{1}q_{2}\dots q_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}$ is a sequence of primes such that $q_{i}=p_{i}$ has exactly $\alpha_{i}$ solutions for $1\leq i\leq\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}$. So $q_{1}q_{2}\dots q_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}\in S$, and therefore $g=f^{-1}$; hence $f$ is surjective, and so bijective. Therefore \[\mid S\mid=\mid C_{\mathbb Z_{n}}\mid=\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!},\] i.e. $\mathbb Z_{n}$ has $\dfrac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}$ distinct composition series. \end{proof} \begin{example} {\em Let us try to understand the above theorem for $\mathbb Z_{360}$. \noindent Note that $360=2^{3}3^{2}5^{1}$. Now $\frac{\left(3+2+1\right)!}{3!2!1!}=60$ distinct sequences of primes of length $6$ can be formed such that each sequence contains the prime $2$ three times, the prime $3$ twice and the prime $5$ once. Some of them are $232253, 323522, 523322.$ Now we will construct the composition series corresponding to these sequences.
\begin{enumerate} \item The composition series corresponding to $232253$ will be \[\left\{ e\right\} \trianglelefteq \mathbb Z_{2}\trianglelefteq \mathbb Z_{6}\trianglelefteq \mathbb Z_{12}\trianglelefteq \mathbb Z_{24}\trianglelefteq \mathbb Z_{120}\trianglelefteq \mathbb Z_{360}.\] \item The composition series corresponding to $323522$ will be \[\left\{ e\right\} \trianglelefteq \mathbb Z_{3}\trianglelefteq \mathbb Z_{6}\trianglelefteq \mathbb Z_{18}\trianglelefteq \mathbb Z_{90}\trianglelefteq \mathbb Z_{180}\trianglelefteq \mathbb Z_{360}.\] \item The composition series corresponding to $523322$ will be \[\left\{ e\right\} \trianglelefteq \mathbb Z_{5}\trianglelefteq \mathbb Z_{10}\trianglelefteq \mathbb Z_{30}\trianglelefteq \mathbb Z_{90}\trianglelefteq \mathbb Z_{180}\trianglelefteq \mathbb Z_{360}.\] \end{enumerate}} \end{example} \begin{theorem}\label{t3.4} Let $G$ be an abelian group of order $n$, where $n\geq2$ is a natural number such that $n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\dots p_{r}^{\alpha_{r}}.$ Then $\mid C_{G}\mid=\prod_{i=1}^{r}t_{i}\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}$, where the $t_{i}$ are the numbers of distinct composition series of the Sylow $p_{i}$-subgroups of $G.$ \end{theorem} \begin{proof} Note that in a finite abelian group there exists at least one subgroup of order $h$ for each divisor $h$ of the group order $n$, and each composition factor of an abelian group is of prime order. Using these prime orders we can form a sequence of primes which belongs to $S$ (the set described in Theorem \ref{t3.2}).
Therefore the number of distinct composition series of $G$ is a multiple of $\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}.$ Now let $\left\{ e\right\} =G_{0}\trianglelefteq G_{1}\trianglelefteq\dots \trianglelefteq G_{m}\trianglelefteq G_{m+1}\trianglelefteq\dots \trianglelefteq G_{\alpha_{1}+\alpha_{2}+\dots +\alpha_{r}}=G$ be a composition series of $G$, where $j$ is the least positive integer such that $p_{1}^{s_{1}}$ divides $\mid G_{j}\mid$ and $k$ is the least positive integer such that $p_{1}^{s_{1}+1}$ divides $\mid G_{k}\mid$, where \[1\leq j<k\leq\alpha_{1}+\alpha_{2}+\dots +\alpha_{r},~1\leq s_{1}<s_{1}+1\leq\alpha_{1}.\] Therefore $\mid G_{j}\mid=p_{1}^{s_{1}}q$, where $p_{1}^{s_{1}},q\in\mathbb{N}$ with $\gcd\left(p_{1}^{s_{1}},q\right)=1$, and $\mid G_{k}\mid=p_{1}^{s_{1}+1}u$, where $p_{1}^{s_{1}+1},u\in\mathbb{N}$ with $\gcd\left(p_{1}^{s_{1}+1},u\right)=1.$ \vskip 5pt \noindent Let $G_{j}=A_{j}B_{j}$, where $A_{j}$ and $B_{j}$ are two subgroups of $G$ of order $p_{1}^{s_{1}}$ and $q$ respectively. Let $G_{k}=C_{k}D_{k}$, where $C_{k}$ and $D_{k}$ are two subgroups of $G$ of order $p_{1}^{s_{1}+1}$ and $u$ respectively. We will prove that $A_{j}\subseteq C_{k}$; if not, then there exists some $x\in A_{j}$ such that $x\notin C_{k}.$ $x\in A_{j}\Rightarrow x=p_{1}^{v_{1}}$. Now $x\in A_{j}\Longrightarrow x\in G_{j}$, and $x\in G_{j}\Longrightarrow x\in G_{j+1}$ as $G_{j}$ is a normal subgroup of $G_{j+1}$. Similarly $x\in G_{j+1}\Longrightarrow x\in G_{j+2}$ as $G_{j+1}$ is a normal subgroup of $G_{j+2}$. Proceeding in this way we can prove that $x\in G_{k}$. Now $G_{k}=C_{k}D_{k}$, where $C_{k}\cap D_{k}=\left\{ e\right\}.$ $x\in G_{k}\Longrightarrow x\in C_{k}D_{k}$, i.e. $x=cd$ where $c\in C_{k},d\in D_{k}$. Therefore $x=p_{1}^{c_{1}}d$, with $p_{1}^{c_{1}}\in C_{k}$. So $p_{1}^{v_{1}}=p_{1}^{c_{1}}d$, i.e.
$p_{1}^{v_{1}-c_{1}}=d.$ As $\gcd\left(p_{1}^{\alpha_{1}},u\right)=1$, we have $P_{1}\cap D_{k}=\left\{ e\right\} $, but $p_{1}^{v_{1}-c_{1}}\in P_{1}\cap D_{k}$, i.e. $p_{1}^{v_{1}-c_{1}}=e$, so $p_{1}^{v_{1}}=p_{1}^{c_{1}}$ and hence $x\left(=p_{1}^{c_{1}}\right)\in C_{k}$, a contradiction. \vskip 5pt \noindent Therefore, in the composition series of the Sylow $p_{1}$-subgroup, $A_{j}$ is a subgroup of $C_{k}$. So if the Sylow $p_{1}$-subgroup has $t_{1}$ distinct composition series then we have $t_{1}\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}$ possible composition series due to the Sylow $p_{1}$-subgroup of $G$. In a similar way, for the other Sylow $p_{2}$-, \dots, Sylow $p_{r}$-subgroups we obtain $\prod_{i=1}^{r}t_{i}\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}$ distinct composition series for $G$. \end{proof} \begin{theorem}\label{t3.5} In an elementary abelian group of order $p^{n}$, there are $\prod_{k=1}^{n}\left(\frac{p^{k}-1}{p-1}\right)=\frac{\prod_{k=1}^{n}\left(p^{k}-1\right)}{\left(p-1\right)^{n}}$ distinct composition series. \end{theorem} \begin{proof} Let $\left\{ e\right\} \trianglelefteq p\trianglelefteq p^{2}\trianglelefteq\dots \trianglelefteq p^{k-1}\trianglelefteq p^{k}\trianglelefteq\dots \trianglelefteq p^{n}$ denote a composition series for the elementary abelian group of order $p^{n}$, with the terms indicated by their orders. To find the number of distinct composition series, we first count the possible choices for the step from $p^{k-1}$ to $p^{k}$. Let there be $C_{k-1}$ choices available for the step from $p^{k-1}$ to $p^{k}$. Now in an elementary abelian group of order $p^{n}$ there are \[\frac{\prod_{i=0}^{k-1}\left(\left(p^{n}-1\right)-\left(p^{i}-1\right)\right)}{\prod_{i=0}^{k-1}\left(p^{k}-p^{i}\right)}=A~\mbox{(say)}\] distinct subgroups of order $p^{k}$, each of which is elementary abelian (by \cite{key-10}, page 65).
In an elementary abelian group of order $p^{n}$ there are also \[\frac{\prod_{i=0}^{k-2}\left(\left(p^{n}-1\right)-\left(p^{i}-1\right)\right)}{\prod_{i=0}^{k-2}\left(p^{k-1}-p^{i}\right)}=B~\mbox{(say)}\] distinct subgroups of order $p^{k-1}$, each of which is elementary abelian (by \cite{key-10}, page 65). Again, in an elementary abelian group of order $p^{k}$ there are \[\frac{\prod_{i=0}^{k-2}\left(\left(p^{k}-1\right)-\left(p^{i}-1\right)\right)}{\prod_{i=0}^{k-2}\left(p^{k-1}-p^{i}\right)}=D~\mbox{(say)}\] distinct subgroups of order $p^{k-1}$, each of which is elementary abelian (by \cite{key-10}, page 65). Now any two subgroups of the same order in an elementary abelian group have an identical algebraic structure, i.e. in an elementary abelian group of order $p^{n}$, any two distinct subgroups of order $p^{k}$ contain the same number of subgroups of order $p^{k-1}$. Counting in two ways the pairs $H<K$ with $\mid H\mid=p^{k-1}$ and $\mid K\mid=p^{k}$ gives $B\cdot C_{k-1}=A\cdot D$, so \[C_{k-1}=\frac{A\cdot D}{B}=\frac{p^{n+1-k}-1}{p-1}.\] \noindent As a result we have $\prod_{k=1}^{n-1}\left(\frac{p^{n+1-k}-1}{p-1}\right)$ distinct possible choices for the composition series of an elementary abelian group of order $p^{n}$ (as $k\geq1$), which is the same as \[\prod_{k=1}^{n}\left(\frac{p^{n+1-k}-1}{p-1}\right)=\prod_{k=1}^{n}\left(\frac{p^{k}-1}{p-1}\right)=\frac{\prod_{k=1}^{n}\left(p^{k}-1\right)}{\left(p-1\right)^{n}}.\] \end{proof} \begin{theorem} Let $G$ be an abelian group with elementary abelian Sylow subgroups of order $n\geq2$, where $n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\dots p_{r}^{\alpha_{r}}$; then \[\mid C_{G}\mid=\prod_{i=1}^{r}\left(\prod_{j=1}^{\alpha_{i}}\frac{p_{i}^{j}-1}{p_{i}-1}\right)\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}.\] \end{theorem} \begin{proof} This follows from Theorem \ref{t3.4} and Theorem \ref{t3.5}.
\end{proof} \begin{theorem} Let $G$ be a finite group of order $n$, where $n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\dots p_{r}^{\alpha_{r}}$; then \[\mid C_{G}\mid\leq\prod_{i=1}^{r}\left(\prod_{j=1}^{\alpha_{i}}\frac{p_{i}^{j}-1}{p_{i}-1}\right)\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!}\] and equality holds if and only if $G$ is the abelian group with elementary abelian Sylow subgroups. \end{theorem} \begin{proof} This follows from Theorem \ref{t2.1} and Theorem 3.6. \end{proof} \section{Main Result} In this section we prove Theorem \ref{t4.3}, which is the main theorem of the paper. This section is more elementary and computational. \begin{lemma}\label{l4.1} Let $X=\prod_{i=\alpha_{1}+1}^{\alpha_{1}+k}(2^{i}-1)\alpha_{1}!\alpha_{r}!$ and $Y=\prod_{j=1}^{\alpha_{r}}(\frac{p^{j}-1}{p-1})(\alpha_{1}+\alpha_{r})!$, where $p\geq3$ is a prime and $\alpha_{1}\geq0$ is an integer, $\alpha_{r}\in\mathbb{N}$, such that $t=p^{\alpha_{r}}$ and $[\log_{2}t]=k$, with $\alpha_{r}\geq2$ whenever $p=3$. Then $\frac{X}{Y}$ is a monotone increasing function of $\alpha_{1}$.
\end{lemma} \begin{proof} Let \[f(\alpha_{1})=\frac{\prod_{i=1}^{\alpha_{1}+k}(2^{i}-1)}{\prod_{i=1}^{\alpha_{1}}(2^{i}-1)\prod_{j=1}^{\alpha_{r}}(\frac{p^{j}-1}{p-1})\frac{(\alpha_{1}+\alpha_{r})!}{\alpha_{1}!\alpha_{r}!}}.\] Then \[f(\alpha_{1}+1)=\frac{\prod_{i=1}^{\alpha_{1}+k+1}(2^{i}-1)}{\prod_{i=1}^{\alpha_{1}+1}(2^{i}-1)\prod_{j=1}^{\alpha_{r}}(\frac{p^{j}-1}{p-1})\frac{(\alpha_{1}+\alpha_{r}+1)!}{(\alpha_{1}+1)!\alpha_{r}!}}.\] Therefore \[\frac{f(\alpha_{1}+1)}{f(\alpha_{1})}=\frac{2^{\alpha_{1}+k+1}-1}{2^{\alpha_{1}+1}-1}\frac{\alpha_{1}+1}{\alpha_{1}+\alpha_{r}+1}.\] \vskip 5pt \noindent In order to prove $f(\alpha_{1}+1)>f(\alpha_{1})$ we have to prove that \[\frac{2^{\alpha_{1}+k+1}-1}{2^{\alpha_{1}+1}-1}>\frac{\alpha_{1}+\alpha_{r}+1}{\alpha_{1}+1}.\] As $\frac{2^{\alpha_{1}+1+k}-1}{2^{\alpha_{1}+1}-1}>2^{k}$ and $\alpha_{r}+1\geq\frac{\alpha_{1}+1+\alpha_{r}}{\alpha_{1}+1}$, it is sufficient to prove \[2^{k}>\alpha_{r}+1 \Longleftrightarrow k>\log_{2}(\alpha_{r}+1).\] Now $\log_{2}t=k+f$, where $0<f<1$. So $k=\log_{2}t-f$. Therefore we have to prove that \[\log_{2}t-f>\log_{2}(\alpha_{r}+1) \Longleftrightarrow\log_{2}(\frac{t}{\alpha_{r}+1})>f.\] As $0<f<1$, it is sufficient to prove $\log_{2}(\frac{t}{\alpha_{r}+1})>1$ since $\log x$ is a monotone increasing function for $x>0$, i.e. \[\frac{t}{\alpha_{r}+1}>2\Longleftrightarrow p^{\alpha_{r}}>2\alpha_{r}+2\] for all primes $p\geq5$ with $\alpha_{r}\geq1$ and for $p=3$ with $\alpha_{r}\geq2$, which can easily be proved using mathematical induction. Hence $\frac{X}{Y}$ is a monotone increasing function of $\alpha_{1}$. \end{proof} \begin{theorem} \label{t4.2} Among all $p$-groups of order $\leq n$, where $n\geq4$ is a positive integer, the elementary abelian group of order $2^{\alpha}$, where $\alpha=\left[\log_{2}n\right]$, has the highest number of composition series.
\end{theorem} \begin{proof} We know that among all finite $p$-groups of equal order the elementary abelian group has the highest number of composition series. Let $G_{2}$ be an elementary abelian group of order $2^{\alpha}$, where $\alpha=\left[\log_{2}n\right]$, and let $G_{p}$ be an elementary abelian $p$-group of order $p^{\beta}$, where $\beta=\left[\log_{p}n\right]$, with $p\geq3$ a prime and $n\geq4$. \vskip 5pt \noindent Now we know that if $\left[\log_{q}r\right]=t$ then $\left[\log_{q}qr\right]=t+1$ for all primes $q\geq2$ and $r,t\in\mathbb{N}$ with $q\leq r$. Obviously $2$ divides $p-1$. Therefore we get \begin{equation} \label{e1} 2^{\left[\log_{2}n\right]}-1>\frac{p^{\left[\log_{p}n\right]}-1}{p-1},~\mbox{whenever}~n\geq4. \end{equation} \[\mid C_{G_{2}}\mid=\prod_{i=1}^{\left[\log_{2}n\right]}\left(2^{i}-1\right),\] \[\mid C_{G_{p}}\mid=\prod_{i=1}^{\left[\log_{p}n\right]}\left(\frac{p^{i}-1}{p-1}\right).\] \vskip 5pt \noindent From (\ref{e1}) it follows that \[\prod_{i=1}^{\left[\log_{2}n\right]}\left(2^{i}-1\right)>\prod_{i=1}^{\left[\log_{p}n\right]}\left(\frac{p^{i}-1}{p-1}\right),\] i.e. $\mid C_{G_{2}}\mid>\mid C_{G_{p}}\mid$. Therefore among all $p$-groups of order $\leq n$, the elementary abelian group of order $2^{\alpha}$, where $\alpha=\left[\log_{2}n\right]$, has the highest number of composition series. \end{proof} \begin{theorem}\label{t4.3} Let $n\geq4$ be a positive integer. Among all finite groups of order $\leq n$, the elementary abelian group of order $2^{\alpha}$, where $\alpha=\left[\log_{2}n\right]$, has the highest number of composition series.
\end{theorem} \begin{proof} Let $n\geq4$ be a natural number, let $\alpha=\left[\log_{2}n\right]$, and let $G$ be the elementary abelian group of order $2^{\alpha}$; then \[\mid C_{G}\mid=\prod_{i=1}^{\left[\log_{2}n\right]}\left(2^{i}-1\right).\] As we know that among all finite groups of order $n\geq2$ the abelian group with elementary abelian Sylow subgroups has the highest number of composition series, in order to prove our claim we have to prove that \[\prod_{i=1}^{\left[\log_{2}n\right]}\left(2^{i}-1\right)\geq\prod_{i=1}^{q}\left(\prod_{j=1}^{\beta_{i}}\frac{p_{i}^{j}-1}{p_{i}-1}\right)\frac{\left(\sum_{i=1}^{q}\beta_{i}\right)!}{\prod_{i=1}^{q}\beta_{i}!}\] \noindent where each $p_{i}$ is a prime and $\beta_{i},q\in\mathbb{N}$ are such that $\prod_{i=1}^{q}p_{i}^{\beta_{i}}\leq n$, and equality holds if and only if $q=1,p_{1}=2$ and $\beta_{1}=\left[\log_{2}n\right]$. The inequality will be proved in the following steps. \vskip 7pt \noindent Step 1 : Let $H$ be an abelian group of order $m$ with elementary abelian Sylow subgroups such that $4\leq m\leq n$, where $m=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\dots p_{r}^{\alpha_{r}}$, and let $\mid C_{H}\mid$ denote the number of distinct composition series of $H$. Let $p_{r}$ be a prime such that $p_{r}\neq2$ and $p_{r}^{\alpha_{r}}=t,\left[\log_{2}t\right]=k$. Then $2^{k}<p_{r}^{\alpha_{r}}.$ Let $H^{\prime}$ be the abelian group of order $\frac{\mid H\mid.2^{k}}{p_{r}^{\alpha_{r}}}$ with elementary abelian Sylow subgroups. Then $\mid H^{\prime}\mid<\mid H\mid$. We will prove that $\mid C_{H^{\prime}}\mid>\mid C_{H}\mid$. \vskip 7pt \noindent Step 2 : If $\mid H^{\prime}\mid=2^{k}p_{1}^{\alpha_{1}}\dots p_{r-1}^{\alpha_{r-1}}$, i.e. $2$ is not a divisor of $m$, then we have $k>\alpha_{r}$ and \[\prod_{i=1}^{k}\left(2^{i}-1\right)>\prod_{i=1}^{\alpha_{r}}\left(\frac{p_{r}^{i}-1}{p_{r}-1}\right).\] \noindent Therefore, obviously, $\mid C_{H^{\prime}}\mid>\mid C_{H}\mid$.
\vskip 7pt \noindent Step 3 : If $\mid H\mid=2^{\alpha_{1}}p_{2}^{\alpha_{2}}\dots p_{r}^{\alpha_{r}}$ then $\mid H^{\prime}\mid=2^{\alpha_{1}+k}p_{2}^{\alpha_{2}}\dots p_{r-1}^{\alpha_{r-1}}.$ In this case we will prove that $\mid C_{H^{\prime}}\mid>\mid C_{H}\mid$. \[\mid C_{H}\mid=\prod_{i=1}^{\alpha_{1}}\left(2^{i}-1\right)\prod_{i=1}^{\alpha_{r}}\left(\frac{p_{r}^{\alpha_{r}}-1}{p_{r}-1}\right)\prod_{i=2}^{r-1}\left(\prod_{j=1}^{\alpha_{i}}\frac{p_{i}^{j}-1}{p_{i}-1}\right)\frac{\left(\sum_{i=1}^{r}\alpha_{i}\right)!}{\prod_{i=1}^{r}\alpha_{i}!},\] \[\mid C_{H^{\prime}}\mid=\prod_{i=1}^{\alpha_{1}+k}\left(2^{i}-1\right)\prod_{i=2}^{r-1}\left(\prod_{j=1}^{\alpha_{i}}\frac{p_{i}^{j}-1}{p_{i}-1}\right)\frac{\left(\sum_{i=1}^{r-1}\alpha_{i}+k\right)!}{\prod_{i=2}^{r-1}\alpha_{i}!\left(\alpha_{1}+k\right)!}.\] \noindent Therefore \[\frac{\mid C_{H^{\prime}}\mid}{\mid C_{H}\mid}=\frac{\prod_{i=\alpha_{1}+1}^{\alpha_{1}+k}\left(2^{i}-1\right)\left(k+s\right)!\alpha_{1}!\alpha_{r}!}{\prod_{j=1}^{\alpha_{r}}\left(\frac{p_{r}^{j}-1}{p_{r}-1}\right)\left(\alpha_{1}+k\right)!\left(\alpha_{r}+s\right)!},\] where $s=\sum_{i=1}^{r-1}\alpha_{i}.$ So in order to prove our claim we have to prove that \begin{equation}\label{e2} \prod_{i=\alpha_{1}+1}^{\alpha_{1}+k}\left(2^{i}-1\right)\left(k+s\right)!\alpha_{1}!\alpha_{r}!> \prod_{j=1}^{\alpha_{r}}\left(\frac{p_{r}^{j}-1}{p_{r}-1}\right)\left(\alpha_{1}+k\right)!\left(\alpha_{r}+s\right)! \end{equation} \noindent Note that $s=\sum_{i=1}^{r-1}\alpha_{i}\geq\alpha_{1}\Longrightarrow s=\alpha_{1}+a$, where $a\geq 0$, and $k>\alpha_{r}\Longrightarrow k=\alpha_{r}+b$, $b>0$, with $a,b\in\mathbb{Z}$. Then inequality (\ref{e2}) reduces to \begin{equation}\label{e3} \prod_{i=\alpha_{1}+1}^{\alpha_{1}+k}\left(2^{i}-1\right)\left(\alpha_{1}+\alpha_{r}+a+b\right)!\alpha_{1}!\alpha_{r}!>\prod_{j=1}^{\alpha_{r}}\left(\frac{p_{r}^{j}-1}{p_{r}-1}\right)\left(\alpha_{1}+\alpha_{r}+a\right)!\left(\alpha_{1}+\alpha_{r}+b\right)!
\end{equation} \noindent [Here we ignore the case $p_{r}=3$ with $\alpha_{r}=1$; as $m\geq4$, at least one other case must exist.] Note that \[\frac{\left(\alpha_{1}+\alpha_{r}+a+b\right)!\alpha_{1}!\alpha_{r}!}{\left(\alpha_{1}+\alpha_{r}+a\right)!\left(\alpha_{1}+\alpha_{r}+b\right)!}\] is a monotone increasing function of $a$ for fixed $\alpha_{1},\alpha_{r},b$. So it is sufficient to prove the inequality for $a=0$, i.e. \begin{equation}\label{e4} \prod_{i=\alpha_{1}+1}^{\alpha_{1}+k}\left(2^{i}-1\right)\alpha_{1}!\alpha_{r}!>\prod_{j=1}^{\alpha_{r}}\left(\frac{p_{r}^{j}-1}{p_{r}-1}\right)\left(\alpha_{1}+\alpha_{r}\right)! \end{equation} \noindent Let $X=\prod_{i=\alpha_{1}+1}^{\alpha_{1}+k}(2^{i}-1)\alpha_{1}!\alpha_{r}!$ and $Y=\prod_{j=1}^{\alpha_{r}}(\frac{p_{r}^{j}-1}{p_{r}-1})(\alpha_{1}+\alpha_{r})!$. Then according to Lemma \ref{l4.1}, $\frac{X}{Y}$ is a monotone increasing function of $\alpha_{1}$ whenever $p_{r}$ and $\alpha_{r}$ are fixed. So it is sufficient to prove the inequality (\ref{e4}) for $\alpha_{1}=0$, i.e. \[\prod_{i=1}^{k}\left(2^{i}-1\right)>\prod_{j=1}^{\alpha_{r}}\left(\frac{p_{r}^{j}-1}{p_{r}-1}\right),\] which follows from Theorem \ref{t4.2}. As a result we have proved that $\mid C_{H^{\prime}}\mid>\mid C_{H}\mid$. Now we repeat the same process on the group $H^{\prime}$ and get a group $H^{\prime\prime}$ such that $\mid C_{H^{\prime\prime}}\mid>\mid C_{H^{\prime}}\mid$. Since $\mid H\mid$ is finite, $\mid H\mid$ has only finitely many prime divisors. So after a finite number of steps we get an elementary abelian group of order $2^{\eta}$, where $\eta\leq\left[\log_{2}n\right]$. \vskip 7pt \noindent Step 4 : Let $\mid H\mid=2^{\alpha_{1}}3^{1}$ for some $\alpha_{1}\in\mathbb{N}$. Then $\mid C_{H}\mid=\prod_{i=1}^{\alpha_{1}}\left(2^{i}-1\right)\left(\alpha_{1}+1\right)$.
Then we can find an elementary abelian group $H^{\prime}$ such that $\mid H^{\prime}\mid=2^{\alpha_{1}+1}$ and $\mid C_{H^{\prime}}\mid=\prod_{i=1}^{\alpha_{1}+1}\left(2^{i}-1\right)$. Clearly $\mid C_{H^{\prime}}\mid>\mid C_{H}\mid$, since $2^{\alpha_{1}+1}>\alpha_{1}+2$ for each $\alpha_{1}\in\mathbb{N}$. Thus, among all abelian groups of order $\leq n$, $G$ has the largest number of composition series, which proves the theorem. \end{proof} \end{document}
\begin{document} \title{Various stabilities of the Alexander polynomials of knots and links} \pagestyle{myheadings} \markboth{ } { } \begin{abstract} In this paper, we study the distribution of the zeros of the Alexander polynomials of knots and links in $S^3$. We call a knot or link {\it real stable} (resp. {\it circular stable}) if all the zeros of its Alexander polynomial are real (resp. unit complex). We give a general construction of real stable and circular stable knots and links. We also study pairs of real stable knots and links such that the zeros of the Alexander polynomials are interlaced. \end{abstract} \keywords{Knot, Link, 2-bridge link, Alexander polynomial, half-plane property, real stable polynomial, circular stable polynomial, interlacing zeros} \ccode{Mathematics Subject Classification 2000: 57M25, 57M27, 26C10} \newcommand{\sect}[3]{ #1:\ #2 \dotfill \pageref{#1}\linebreak} \newcommand{\subsect}[3]{ \hspace*{5mm} #1:\ #2 \dotfill \pageref{#1}\linebreak} {\centerline {\bf Table of contents}} \noindent 0: Introduction\dotfill\pageref{0}\\ \sect{1}{Stability Property}{2} \subsect{1.1}{Half-plane property}{2} \subsect{1.2}{$D$-stable polynomial}{3} \sect{2}{Hurwitz-stability}{6} \subsect{2.1}{Hurwitz-Routh Criterion}{7} \subsect{2.2}{Lyapunov matrix}{7} \sect{3}{Stable polynomial}{8} \subsect{3.1}{Multivariate stable polynomials}{8} \subsect{3.2}{Real stable univariate polynomials}{9} \sect{4}{Preliminaries }{10} \sect{5}{The Alexander polynomials of alternating knots}{18} \subsect{5.1}{Hoste's Conjecture}{18} \subsect{5.2}{Trapezoidal Conjecture}{19} \sect{6}{Construction of real stable knots (I)}{20} \subsect{6.1}{Quasi-rational knots}{21} \subsect{6.2}{Stable quasi-rational knots}{21} \subsect{6.3}{Examples}{22} \sect{7}{Construction of real stable knots (II)}{23} \subsect{7.1}{Positive or negative disks}{23} \subsect{7.2}{Stable alternating knots and links}{24} \subsect{7.3}{Pseudo-positive or pseudo-negative disk}{25}
\subsect{7.4}{Example}{25} \sect{8}{Exceptional stable knots and links}{25} \subsect{8.1}{Exceptional stable knots}{26} \subsect{8.2}{Exceptional stable links}{26} \sect{9}{Interlacing property (I) 2-bridge knots}{28} \sect{10}{Interlacing property (II) Quasi-rational knots $X_n$}{33} \sect{11}{Interlacing property (III) Quasi-rational knots $Y_{2n+1}$}{36} \subsect{11.1}{Conway polynomials of $Y_n$}{37} \subsect{11.2}{Alexander polynomials of $Y_n$}{40} \sect{12}{$c$-stable knots and links }{42} \subsect{12.1}{Regular and exceptional $c$-stable $2$-bridge knots and links}{42} \subsect{12.2}{Construction of $c$-stable quasi-rational knots and links }{43} \subsect{12.3}{General construction of $c$-stable knots and links }{44} \subsect{12.4}{Interlacing property of zeros on the unit circle}{45} \sect{13}{Bi-stable knots and links}{46} \subsect{13.1}{Bi-stable 2-bridge knots and links}{46} \subsect{13.2}{Exceptional bi-stable knots and Salem knots}{47} \subsect{13.3}{General bi-stable knots and links}{48} \sect{14}{Mobius Transformations}{49} \sect{15}{Montesinos knots}{56} \sect{16}{Multivariate stable link polynomials}{61} \sect{17}{Inversive links}{65} \subsect{17.1}{Standard inversive links}{65} \subsect{17.2}{Exceptional inversive links}{67} Appendix A: Representation polynomials\dotfill\pageref{A}\\ \subsect{A.1}{Parabolic representation}{68} \subsect{A.2}{Dihedral representation}{70} Appendix B: Determination of $\delta_4$\dotfill\pageref{B}\\ Appendix C: Distribution of the zeros\dotfill\pageref{C}\\ References\dotfill\pageref{ref} \setcounter{section}{-1} \section{Introduction}\label{0} Let $\mathcal H \subset \CC$ be an open right half-plane, i.e., $\{\alpha \in \CC | {\rm Re}(\alpha) > 0\}$, or an open upper half-plane, i.e., $\{\alpha \in \CC | {\rm Im}(\alpha) > 0\}$. Let $f(z_1, \cdots, z_n) \in \CC[z_1, \cdots, z_n]$ be a polynomial in $n$ variables, $z_1, \cdots, z_n$. 
We say that $f(z_1, \cdots, z_n)$ is {\it $\mathcal H$-stable} if for any values $\alpha_j \in \mathcal H, 1 \leq j \leq n, f(\alpha_1, \cdots, \alpha_n) \neq 0$. If $\mathcal H$ is an open right half-plane, then $f$ is called {\it Hurwitz stable}. If $\mathcal H$ is an open upper half-plane, then $f$ is called a {\it stable polynomial}, and further, if $f$ is a real polynomial, $f$ is sometimes called {\it real stable}. The theory of stable polynomials has a long history, but the recent development of this theory is very impressive and is summarized in a remarkable survey article \cite{wag}. The purpose of this paper is to present a recent study of various stabilities of the Alexander polynomials of knots or links in $S^3$. The study was motivated by our desire to answer a question (later called a conjecture) posed by Jim Hoste in 2002. He asks if the real part of each zero of the Alexander polynomial $\Delta_K(t)$ of an alternating knot $K$ is larger than $-1$. This is exactly the question of whether $\Delta_K (-(t+1))$ is (strongly) Hurwitz stable for an alternating knot $K$. The question leads us to other problems on stabilities of the Alexander polynomial of a (not necessarily alternating) knot. For example, since the sequence of the coefficients of a stable univariate real polynomial under a certain condition is unimodal, we see immediately that the stable Alexander polynomial of an alternating knot satisfies the Trapezoidal Conjecture, one of the outstanding conjectures that still remain open. In \cite{LM}, it is shown that many 2-bridge knots or links satisfy Hoste's Conjecture. Further, a few more subtle theorems on Hurwitz stability and real stability of the Alexander polynomials of 2-bridge knots or links are proven.
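To illustrate the equivalence just stated (our addition, not part of the original text): for the trefoil, whose Alexander polynomial is $t^2 - t + 1$, Hoste's condition and the Hurwitz stability of $\Delta_K(-(t+1))$ can both be checked directly.

```python
# Hedged sketch: check Hoste's condition Re(alpha) > -1 for every zero of
# the trefoil Alexander polynomial Delta(t) = t^2 - t + 1, and the
# equivalent statement that Delta(-(t+1)) has all zeros with Re < 0.
import cmath

a, b, c = 1, -1, 1                         # Delta(t) = t^2 - t + 1
disc = cmath.sqrt(b * b - 4 * a * c)
zeros = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
assert all(z.real > -1 for z in zeros)     # Hoste's condition holds

# t solves Delta(-(t+1)) = 0 exactly when -(t+1) is a zero of Delta,
# so the zeros of Delta(-(t+1)) = t^2 + 3t + 3 are -z - 1:
shifted = [-z - 1 for z in zeros]
assert all(w.real < 0 for w in shifted)    # strongly Hurwitz stable
```

Here the zeros are $(1 \pm i\sqrt{3})/2$, with real part $1/2 > -1$, matching the conjecture for this knot.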
In this paper, knots or links are not necessarily alternating. We discuss the stabilities of their Alexander polynomials and, further, a third stability, called {\it circular stability}. This paper is organized as follows. It consists of two parts. The first part, consisting of Sections 1--3, is a quick review of various types of stable polynomials. Almost all material in this part is taken from known sources, and hence proofs are entirely omitted. The rest of the paper forms the second part. In Section 4, we first introduce new notations and various terminologies, and then we prove a couple of propositions on matrices. We use these propositions as basic tools to prove many theorems in this paper. For convenience, we say that a knot or link is {\it $\mathcal H$-stable} if its Alexander polynomial is $\mathcal H$-stable. In Section 5, we review some connections between stable Alexander polynomials and various conjectures in knot theory. In Sections 6 and 7, we study real stable Alexander polynomials of knots or links. The prototype is a 2-bridge knot or link. As is shown in \cite{LM}, a 2-bridge knot with an alternating continued fraction expansion, i.e., $[2a_1, 2a_2, \cdots, 2a_m], a_j a_{j+1} < 0$, is always stable. By generalizing these knots, we construct more general stable knots in these sections. In Section 8, we discuss some exceptional stable 2-bridge knots or links which have non-alternating continued fraction expansions. We discuss the stability of the (2-variable) Alexander polynomials of such links in Section 16. One of the important properties of stable polynomials is the \lq\lq interlacing property\rq\rq\ of the zeros. In Sections 9 through 11, we discuss this property, first for 2-bridge knots or links (Section 9) and then for a generalization of 2-bridge knots, which we name {\it quasi-rational} knots (Sections 10 and 11).
In Section 12, we study circular stable polynomials. A real polynomial $f(z)$ is called {\it circular stable} (or {\it $c$-stable}) if all the zeros of $f(z)$ lie on the unit circle, i.e., $|\alpha| = 1$ for any zero $\alpha$ of $f(z)$. The Alexander polynomial of a special alternating knot is always $c$-stable. But many non-special alternating knots also have a $c$-stable Alexander polynomial. In this section, we give a systematic way to construct $c$-stable knots or links. These knots or links are in general not alternating. Now, for convenience, we call a real polynomial $f(z)$ {\it bi-stable} if every zero of $f(z)$ is either real or a unit complex number. In Section 13, we prove that 2-bridge knots of a certain special type are bi-stable. A proof of the bi-stability of a polynomial is rather complicated. To show real stability, a Seifert matrix plays a key role, but to show bi-stability, the interlacing property of the zeros is crucial. Bi-stable knots or links appear implicitly in \cite{hiro} and elsewhere. Salem fibred knots are a special type of bi-stable knots. We briefly discuss them in this section. In Section 14, we study a Mobius transformation $\varphi: \CC \cup \{\infty \} \longrightarrow \CC \cup \{\infty \}$. There is one special Mobius transformation $\varphi$ that relates the Alexander polynomial $\Delta_K(t)$ of a knot $K$ to the Hosokawa polynomial $\nabla_{L(K)}(t)$ of a link $L(K)$ with an arbitrary number of components in such a way that $\varphi$ maps all the zeros of $\Delta_K(t)$ to all the zeros of $\nabla_{L(K)}(t)$. In particular,\\ (1) if $\Delta_K(t)$ is $c$-stable, then $\nabla_{L(K)}(t)$ is real stable, and\\ (2) if $\Delta_K(t)$ is real stable, then $\nabla_{L(K)}(t)$ is $c$-stable, and further,\\ (3) if $\Delta_K(t)$ is bi-stable, then $\nabla_{L(K)}(t)$ is bi-stable.
Then we show that, given $\Delta_K(t)$, we can express $\nabla_{L(K)}(t)$ in terms of the coefficients of $\Delta_K(t)$.\\ The (reduced) Alexander polynomial of a link depends on the orientation of each component in a very delicate manner. In fact, there exists a 2-component link $L$ such that one orientation gives a stable link, but reversing the orientation of one component results in a $c$-stable link. We call such a link {\it inversive}. We have many inversive Montesinos links. Therefore, in Section 15, we study the Alexander polynomials of alternating Montesinos knots or links. We specify some class of alternating Montesinos knots or links and prove that a knot or link $K$ in this class has the following property:\\ (a) If $K$ is a knot, then $K$ is $c$-stable, stable or bi-stable.\\ (b) If $K$ is a link, then $K$ is inversive. In Section 16, we consider a 2-component link $L$. Let $\Delta_L (x, y)$ be the Alexander polynomial of $L$. The stability problem for $\Delta_L (x, y)$ is not easy, unless $\Delta_L (x, y)$ is multi-affine, i.e., each variable has degree at most one in each term. If $\Delta_L (x, y)$ is stable, then so is $\Delta_L (t, t)$, but in general $t^n \Delta_L (t, t^{-1})$ is not stable, where $n =\deg_y\Delta_L (x, y)$. Note that $t^n \Delta_L (t, t^{-1})$ is the Hosokawa polynomial of $L$ with the orientation of the second component reversed. On the other hand, $t^n\Delta_L (t, -t^{-1})$ is always stable, if $\Delta_L (x, y)$ is stable. In Section 16, we discuss mainly the stability problem of the Alexander polynomials $\Delta_{L}(x,y)$ of $2$-bridge links $L$. In the last section, Section 17, we study inversive 2-bridge links using the 2-variable Alexander polynomials $\Delta_{K}(x,y)$.\\ The Appendix has three sections. In Appendix A, we study the stability problem of integer polynomials considered in knot theory, particularly the stability of Riley polynomials associated to parabolic or dihedral representations of the knot group.
Riley studied these representations of the knot groups $G(K(r))$ of 2-bridge knots $K(r)$. He defined an integer polynomial $\theta_K(z)$ associated to the parabolic representation of $G(K(r))$ to $SL(2,\CC)$. It is known (\cite{ri72}, \cite{swa}) that $\theta_{K(r)}(z)$ is real stable if $r = 1/(2n+1)$. However, if $r \ne 1/(2n+1)$, it is usually not stable. The second polynomial Riley studied is an integer polynomial $\varphi_{2n+1}(z)$ associated to a trace-free representation of $G(K(1/(2n+1)))$ onto a dihedral group $D_{2n+1} \subset GL(2, \CC)$ (see \cite{ri84}). In this section, we prove that $\varphi_{2n+1}(z)$ is real stable. Since $\varphi_{2n+1}(z)$ is not reciprocal, we cannot apply the methods used in the previous sections, and our approach here is quite different. In Appendix B, we discuss the maximal values of the real parts of the zeros of the Alexander polynomials of alternating knots. Let $\delta (K)$ be the maximal value of the real parts of the zeros of $\Delta_K (t)$. It is proved in Section 4 that even for alternating knots $K$, $\delta (K)$ is not bounded, i.e., given any positive real number $\delta_0$, there exists an alternating knot $K_0$ such that $\delta(K_0) > \delta_0$. It should be noted that for a 2-bridge knot $K$, $\delta(K) < 6$ (\cite{LM}, \cite{St}). However, for alternating knots, we can modify this invariant as follows. Let $\Gamma_{2n}$ be the set of all alternating knots $K$ with $\deg \Delta_K (t) = 2n$. We conjecture that $\delta(K)$ is bounded on $\Gamma_{2n}$, i.e., there exists a positive real number $\delta_{2n}$ such that $\delta (K) \leq \delta_{2n}$ for $K \in \Gamma_{2n}$. Further, we conjecture that $\delta_{2n}$ is achieved by fibred alternating knots. This seems true for 2-bridge knots. If the conjecture holds, then since the number of alternating fibred knots in each $\Gamma_{2n}$ is finite, we can determine $\delta_{2n}$ for each $n$.
In this section, we prove that the conjecture holds for $n=1$ and $2$. In the last section, Appendix C, we discuss the distribution of the zeros of a series of 2-bridge knots of some special type. It seems that these examples suggest many deep properties of the distribution of the zeros of the Alexander polynomials of alternating knots and links. Finally, we note that some of the theorems in this paper have been announced without proofs in the survey article \cite{HMsurvey}. \section{Stability Property}\label{1} \subsection{Half-plane property}\label{1.1} Let $\mathcal H \subset \CC$ be an open half-plane such that $\partial \overline{\mathcal H}$ contains the origin. Let $f(z_1, \cdots, z_n) \in \CC[z_1, \cdots, z_n]$ be a polynomial in $n$ variables. \begin{dfn}\cite[p.303]{branden}\label{dfn:1.1} $f \in \CC[z_1, \cdots, z_n]$ is said to be {\it $\mathcal H$-stable} if $f\equiv 0$ identically, or for any values $\alpha_j \in \mathcal H, 1 \leq j \leq n, f(\alpha_1, \cdots, \alpha_n) \ne 0$. If $f(z_1, \cdots, z_n) \in \CC[z_1, \cdots, z_n]$ is $\mathcal H$-stable for some open half-plane, we say $f$ has the {\it half-plane property}. \end{dfn} There are two special cases. \begin{dfn}\cite[p.303]{branden}\label{dfn:1.2} (1) Let $\mathcal H$ be the right half-plane, i.e., $\mathcal H = \{\alpha \in \CC|{\rm \ Re}(\alpha) >0\}$. Then an $\mathcal H$-stable polynomial $f \in \CC[z_1, \cdots, z_n]$ is called {\it Hurwitz-stable}. In other words, $f$ is Hurwitz-stable if for any $\alpha_j \in \CC, 1\leq j \leq n$, such that ${\rm Re}(\alpha_j) > 0$, $f(\alpha_1, \cdots, \alpha_n) \ne 0$. (2) Let $\mathcal H$ be the upper half-plane, i.e., $\mathcal H = \{\alpha \in \CC|{\rm Im}(\alpha)>0\}$. Then an $\mathcal H$-stable polynomial $f \in \CC[z_1, \cdots, z_n]$ is called a {\it stable polynomial}. \end{dfn} \begin{rem}\label{rem:1.3} If a real polynomial $f \in \RR[z_1, \cdots, z_n]$ is stable, $f$ is sometimes called {\it real stable}.
\end{rem} From the definitions we see immediately: \begin{prop}\label{prop:1.4} Let $f(z) \in \RR[z]$ be a real univariate polynomial. Then (1) $f(z)$ is real stable if and only if $f(z)$ has only real zeros. (2) $f(z)$ is Hurwitz-stable if and only if for any zero $\alpha$ of $f(z)$, ${\rm Re}(\alpha) \leq 0$. \end{prop} \begin{ex}\label{ex:1.5} (1) $f(t) = t^4 + 7t^3 +13t^2 +7t + 1$ is real stable and also Hurwitz-stable. (2) $f(t) = t^4 +2t^3 -5t^2 + 2t + 1$ is neither real stable, nor Hurwitz-stable. \end{ex} The theorem below is elementary, but useful. \begin{thm}\cite[Lemma 2.4]{wag}\label{thm:1.6} The following operations preserve $\mathcal H$-stability in $\CC[z_1,\cdots, z_n ]$. \noindent (a) Permutation: For any permutation $\sigma \in S_n$,\\ \hspace*{5mm} $f \longrightarrow f (z_{\sigma(1)}, \cdots, z_{\sigma(n)})$ \noindent (b) Scaling: For any $c \in \CC$ and $(a_1, \cdots, a_n) \in \RR_{+}^n$ (i.e., $a_j> 0, 1 \leq j \leq n$),\\ \hspace*{5mm} $ f \longrightarrow cf(a_1 z_1, \cdots, a_n z_n)$ \noindent (c) Diagonalization: For $\{i,j\}, 1 \leq i, j \leq n$,\\ \hspace*{5mm} $f \longrightarrow f (z_1, \cdots, z_n) \mid_{z_i = z_j}$ \end{thm} \subsection{$D$-stable polynomial}\label{1.2} There is another type of stability. \begin{dfn}\cite{bb09}\label{dfn:1.6} Let $D$ be the open unit disk in $\CC$. A polynomial $f(z) \in \CC[z]$ is called {\it $D$-stable} if for any $\alpha \in D, f(\alpha) \ne 0$. \end{dfn} \begin{prop}\label{prop:1.7} Suppose $f(z) \in \RR[z]$ is reciprocal, i.e., $f(z) = z^n f(z^{-1})$ for some $n$. If $f(z)$ is $D$-stable, then all zeros $\alpha$ of $f(z)$ are on the unit circle, i.e., $|\alpha |= 1$. \end{prop} \begin{dfn}\label{dfn:1.8} We say that $f(z) \in \CC[z]$ is {\it $c$-stable} if for each zero $\alpha$ of $f$, $|\alpha|=1$. \end{dfn} \begin{ex}\label{ex:1.8} (1) $f(t) = 2t^6 - 4t^5 +6t^4 -7t^3 +6t^2 -4t +2$ is $c$-stable, but not Hurwitz-stable.
(2) $f(t) = 2t^6 - 4t^5 +6t^4 -9t^3 +6t^2 -4t +2$ is neither $c$-stable, nor Hurwitz-stable.\\ (3) $f(t) = 3t^4 -12t^3 + 17t^2 -12t + 3$ is neither $c$-stable, nor Hurwitz-stable. \end{ex} \section{Hurwitz-stability}\label{2} There are two basic tools to show Hurwitz-stability of a real univariate polynomial. \subsection{Hurwitz-Routh Criterion}\label{2.1} Let $f(z) = a_0 z^n + a_1 z^{n-1}+\dots + a_n \in \RR[z]$ be a real polynomial, where $a_0>0, a_j\in \RR, 0 \leq j \leq n$. Define an $n\times n$ matrix $H_n$ as follows: \begin{equation} H_n= \left[ \begin{array}{cccccc} a_1&a_0&0&0&\cdots&0\\[11pt] a_3&a_2&a_1&a_0&\cdots&0\\[11pt] & &\ddots & & &\\ \vdots& & & & &\vdots\\ & & & & \ddots &\\ a_{2n-1}&a_{2n-2}& \cdots& & a_{n+1}&a_n \end{array} \right], \end{equation} where we define $a_j=0$ if $j > n$. For $1 \leq k \leq n$, let $H_k$ be the first $k\times k$ principal submatrix of $H_n$. Namely, $H_k$ is the $k \times k$ submatrix consisting of the first $k$ rows and columns of $H_n$. For example, $H_1=[a_1]$ and $H_2=\left[\begin{array}{cc} a_1&a_0\\ a_3&a_2 \end{array}\right]$. We say that $f(z)$ is {\it strongly Hurwitz-stable} (or simply {\it s-Hurwitz-stable}) if every zero of $f(z)$ has a negative real part. \begin{thm}[Hurwitz-Routh Criterion]\cite[Theorem 8.8.1]{lanc} \label{thm:2.1} A real polynomial $f(z) = \sum_{j=0}^{n} a_j z^{n-j}, a_0> 0, a_j\in \RR, 1 \leq j \leq n$, is strongly Hurwitz-stable if and only if $\det H_k> 0$ for $1 \leq k \leq n$. \end{thm} Using Theorem \ref{thm:2.1}, we can characterize strongly Hurwitz-stable polynomials with small degrees.
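Computationally, the criterion amounts to building $H_n$ from the coefficients and checking the signs of its leading principal minors. The following is a minimal sketch (our addition, not from the paper), using exact rational arithmetic:

```python
# Hedged sketch of the Hurwitz-Routh test: H[i][j] = a_{2(i+1)-(j+1)},
# with a_k = 0 outside 0 <= k <= n, so row i reads a_{2i+1}, a_{2i}, ...
from fractions import Fraction

def det(A):
    # cofactor expansion along the first row; fine for small matrices
    if not A:
        return Fraction(1)
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def hurwitz_matrix(a):
    n = len(a) - 1                      # a = [a_0, ..., a_n]
    coef = lambda k: Fraction(a[k]) if 0 <= k <= n else Fraction(0)
    return [[coef(2 * i + 1 - j) for j in range(n)] for i in range(n)]

def s_hurwitz_stable(a):
    H = hurwitz_matrix(a)
    return all(det([row[:k] for row in H[:k]]) > 0
               for k in range(1, len(H) + 1))

print(s_hurwitz_stable([1, 7, 13, 7, 1]))   # polynomial (1) of Example 1.5
print(s_hurwitz_stable([1, 2, -5, 2, 1]))   # polynomial (2) of Example 1.5
```

For instance, $t^4 + 7t^3 + 13t^2 + 7t + 1$ from Example \ref{ex:1.5} passes the test, while $t^4 + 2t^3 - 5t^2 + 2t + 1$ fails it already at the second minor.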
\begin{ex}\label{ex:2.2} (1) $f(z)=a_0 z + a_1, a_0>0$, is s-Hurwitz-stable if and only if $a_1> 0$.\\ (2) $f(z)= a_0 z^2 + a_1 z + a_2, a_0> 0$, is s-Hurwitz-stable if and only if $a_1, a_2> 0$.\\ (3) $f(z)=a_0 z^3+a_1z^2+a_2z+a_3,a_0>0$, is s-Hurwitz-stable if and only if $a_1, a_2, a_3>0$ and $a_1a_2>a_0a_3$.\\ (4) $f(z)=a_0z^4+a_1z^3+a_2z^2+a_3z+a_4, a_0>0$, is s-Hurwitz-stable if and only if (i) $a_1, a_2, a_3, a_4>0$, (ii) $a_1a_2>a_0 a_3$, and (iii) $a_3(a_1a_2-a_0a_3)>a_1^2a_4$. \end{ex} \subsection{Lyapunov matrix}\label{2.2} There is another important tool to study Hurwitz-stability of a real univariate polynomial, given by Lyapunov. Let $f(z)$ be a real polynomial of degree $n$. Let $M$ be a companion matrix of $f(z)$. \begin{thm}[Lyapunov, {\cite[Theorem 8.7.2]{lanc}}]\label{thm:2.3} $f(z)$ is strongly Hurwitz-stable if and only if there exist two real positive definite (symmetric) matrices $V$ and $W$ such that \begin{equation} VM + M^T V = - W. \end{equation} \end{thm} For convenience, we call $V$ a {\it Lyapunov matrix} associated to $M$. It is often quite difficult to find a Lyapunov matrix even if $f(z)$ is known to be Hurwitz-stable. \begin{ex} (1) $f(z) = z+a_1$. Then $M=[-a_1]$. If $a_1<0$, a Lyapunov matrix does not exist, since $M$ is positive definite. If $a_1>0$, then $V = E$ is a Lyapunov matrix associated to $M$ and $f(z)$ is s-Hurwitz-stable.\\ (2) Let $f(z) = z^2 + a_1 z + a_2$. If $a_1, a_2 > 0$, then we know $f(z)$ is s-Hurwitz-stable, see Example \ref{ex:2.2} (2). For example, if $a_1= 3$ and $a_2=4$, i.e., $M=\left[\begin{array}{cc} 0&-4\\ 1&-3 \end{array}\right]$, then $V = \left[\begin{array}{cc} 7/12&-1/2\\ -1/2&5/6 \end{array}\right]$ is a Lyapunov matrix and $W=E$. \end{ex} In graph theory, this concept appears in the literature. We mention one example. \begin{ex}[{\cite[Theorem 1.1]{cosw}} and {\cite[p.208]{bb}}] The spanning-tree polynomial of a connected finite graph is Hurwitz-stable and also stable.
\end{ex} \section{Stable polynomial}\label{3} \subsection{Multivariate stable polynomials}\label{3.1} First, we state two basic properties of stable polynomials. \begin{thm}[{\cite[Lemma 2.4]{wag}}] \label{thm:3.1} The following operations preserve stability in $\CC[z_1, \dots, z_n]$.\\ (a) Specialization: For any $a\in \CC$ with $\mathrm{Im}(a) \ge 0$, $f\rightarrow f(a, z_2, \dots, z_n)$.\\ (b) Inversion: If $\deg_{z_1}(f) = d$, $f\rightarrow z_1^d f(-z_1^{-1}, z_2, \dots, z_n)$.\\ (c) Differentiation (or contraction): $f \longrightarrow \frac{\partial}{\partial z_1}f (z_1, \cdots, z_n)$. \end{thm} Next, the following theorems give us systematic ways to construct stable polynomials. \begin{thm}[{\cite[Proposition 2.4]{bb}}] \label{thm:3.2} Let $A_i, 1 \le i \le n$, be complex positive semi-definite $m\times m$ matrices and $B$ be an $m \times m$ Hermitian matrix. Then, $f(z_1, \dots, z_n) = \det[z_1 A_1+\dots+ z_n A_n + B]$ is stable. \end{thm} As a consequence of Theorem \ref{thm:3.2}, we have: \begin{thm}[{\cite[p.308]{branden}}] \label{thm:3.3} Let $Z ={\rm diag}(z_1, \dots, z_n)$ be a diagonal matrix. If $A$ is an $n\times n$ Hermitian matrix, then both $\det(Z+A)$ and $\det(E+AZ)$ are stable. \end{thm} If $n = 2$, then the converse of Theorem \ref{thm:3.2} holds for a real stable polynomial. \begin{thm}[{\cite[Theorem 1.13]{bb2}}]\label{thm:3.4} (Characterization of real stable polynomials with two variables) Let $f(x, y) \in \RR[x, y]$. Then $f$ is real stable if and only if $f$ can be written as \begin{equation} f(x, y) = \pm \det[xA + yB + C], \end{equation} \noindent where $A$ and $B$ are positive semi-definite matrices and $C$ is a symmetric matrix of the same order. \end{thm} The following theorem shows that the stability of multivariate polynomials can be reduced to the stability of univariate polynomials.
\begin{thm}[{\cite[Lemma 2.3]{wag}}]\label{thm:3.5} A polynomial $f \in \CC[z_1, \dots, z_n]$ is stable if and only if for any $(a_1, \dots, a_n)\in\RR^n$ and $(b_1, \dots, b_n)\in\RR_+^n$ (i.e., $b_j > 0, 1 \le j \le n$), $f(a_1+ b_1 t, \dots, a_n+b_n t) \in \CC[t]$ is stable. \end{thm} If a polynomial is of special type, the stability problem could be slightly simpler. \begin{thm}[{\cite[Theorem 5.6]{branden}}]\label{thm:3.6} Let $f\in \RR[z_1, \dots, z_n]$ be a multi-affine polynomial (i.e., each variable $z_j$ has degree at most $1$ in each term). Then $f$ is stable if and only if for all $(x_1, \dots, x_n)\in \RR^n$ and for $1 \le i, j \le n$, $\Delta_{i j}(f)(x_1, \dots, x_n) \ge 0$, where $\Delta_{i j}(f) =\dfrac{\partial f}{\partial z_i}\dfrac{\partial f}{\partial z_j}-\dfrac{\partial^2 f}{\partial z_i \partial z_j} f$. \end{thm} \begin{rem} If $f$ is not multi-affine, then in Theorem \ref{thm:3.6}, the \lq\lq only if\rq\rq\ part holds, but the \lq\lq if\rq\rq\ part does not. \end{rem} \begin{ex}[{\cite[Example 5.7]{branden}}]\label{ex:3.8} Let $f = a_{00}+a_{01}y+a_{10}x + a_{11}xy, a_{ij}\in\RR$. Then $\Delta_{12}(f) = -\det\left[\begin{array}{cc} a_{00}&a_{01}\\ a_{10}&a_{11} \end{array} \right]$. Therefore, $f$ is stable if and only if $\det [a_{ij}] \le 0$. \end{ex} \begin{thm}[{\cite[p.1]{ww}}]\label{thm:3.9} Suppose $f\in \CC[z_1, \dots, z_n]$ is homogeneous. Then, $f$ is Hurwitz-stable if and only if $f$ is stable. \end{thm} \subsection{Real stable univariate polynomials}\label{3.2} In this subsection, we discuss real stable univariate polynomials. We are particularly interested in them, since they have many deep properties. \begin{thm}[{\cite[p.307]{branden}}]\label{thm:3.10} Let $f(z) = a_0 z^n+ a_1z^{n-1}+ \dots+ a_n \in \RR[z]$, $a_0\neq 0, a_j \ge 0, 0 \le j \le n$. Suppose $f(z)$ is real stable. If $a_ia_k\neq0$ for $i < k$, then for any $j, i < j < k, a_j\neq 0$. Therefore, if $a_n\neq 0$, then all $a_j \neq 0$, for $1 \le j \le n$.
\end{thm} Theorem \ref{thm:3.10} shows that it is worth studying the sequence of the coefficients of a real stable polynomial. \begin{dfn}[{\cite[p.126]{wilf}}]\label{dfn:3.11} A sequence $\{c_0, c_1, \dots, c_n\}$ of positive numbers is called {\it unimodal} if there exist indices $r$, $s$ such that \begin{equation} \label{siki:3.2} c_0 \le c_1 \le \dots \le c_r= c_{r+1}=\dots= c_{r+s}\ge c_{r+s+1} \ge\dots \ge c_n. \end{equation} Further, $\{c_0, c_1, \dots, c_n\}$ is called {\it log-concave} if \begin{equation}\label{siki:3.3} c_{j-1}c_{j+1}\le c_j^2\ {\rm for}\ j = 1, 2,\dots, n-1. \end{equation} If \lq\lq$\le$\rq\rq\ is replaced by \lq\lq$<$\rq\rq\ in (\ref{siki:3.3}), then it is called {\it strictly log-concave}. \end{dfn} For example, the sequence of binomial coefficients $\left\{ \binom{m}{k}\right\}_{k=0}^m$ is unimodal. The following theorem is well-known. \begin{thm} [{\cite[Proposition p.127]{wilf}}]\label{thm:3.12} If a positive sequence $\{c_0, c_1, \dots, c_n\}$ is log-concave, then it is unimodal. \end{thm} Now we have an important result. \begin{thm}[{\cite[p.127]{wilf}}]\label{thm:3.13} Let $f(z) = a_0z^n+ a_1z^{n-1}+ \dots+ a_n\in \RR[z], a_0\neq 0, a_n\neq 0$. Suppose $a_j\ge 0, 0 \le j \le n$. If $f$ is real stable (and hence $a_j> 0$ for all $j \ge 0$), then $\{a_0, a_1, \dots ,a_n\}$ is strictly log-concave, and hence it is unimodal. \end{thm} In this case, we have either $a_0<a_1<\dots< a_r>a_{r+1}>\dots>a_n$ or $a_0<a_1<\dots<a_r=a_{r+1}>a_{r+2}>\dots>a_n$. In the rest of this paper, we study various stabilities of (mostly) the Alexander polynomials of knots and links in $S^3$. \section{Preliminaries}\label{4} From this section on, we study polynomials of knots and links from the viewpoint of stabilities. In this paper, we make a strict distinction between a knot and a link. Namely, by a link we mean a disjoint union of two or more simple closed curves in $S^3$.
When the material applies to knots and links alike, we write \lq\lq knots (or links)\rq\rq. Unless specified otherwise, we assume that a link is oriented, but the orientation is usually not mentioned. A 2-bridge knot (or link) $K$ is always represented by a rational number $r=\beta / \alpha$, where we assume $0\le |\beta|\le \alpha$ and $\gcd(\alpha,\beta)=1$. When $K$ is a link, $\alpha$ is even. When $K$ is a knot, $\alpha$ is odd and $\beta$ is assumed to be even. Then, $r$ has a unique even continued fraction expansion of the following form, where $a_i \neq 0, 1\le i\le m$. \begin{align*} r =\cfrac{1}{2a_1 -\cfrac{1}{2a_2 -\cfrac{1}{2a_3 -\cfrac{1}{\ddots -\cfrac{1}{2a_{m-1} -\cfrac{1}{2a_m}}}}}} \end{align*} \noindent This expansion is written as $r = [2a_1 , 2a_2 , \cdots, 2a_m]$. We call $K$ a 2-bridge knot (or link) of type $r = \beta / \alpha$ and denote it by $K(r)$. For example, $2/3 =[2,2]$ represents a trefoil knot and $2/5 =[2,-2]$ represents a figure eight knot. For $2$-bridge links, we assume they are oriented as in Figure 4.1. For example, $1/4 =[4]$ represents a non-fibred link and $3/4 =[2,2,2]$ represents a fibred torus link. \byouga{4-1}{7}{4.1} Now, we discuss briefly a Seifert surface and a Seifert matrix of a $2$-bridge knot or link $K(r)$. Since we assume that $r = [2a_1 , 2a_2 , \cdots, 2a_m]$, $K(r)$ has a natural Seifert surface $F$ depicted in Fig. 4.2 below. \byouga{4-2}{6.8}{4.2} Let $\alpha_1 , \alpha_2 ,\cdots, \alpha_m$ be (oriented) simple closed curves on $F$ as shown in Fig. 4.2. Then $\{\alpha_j\}_{1 \leq j\leq m}$ forms a basis for $H_1 (F, \ZZ)$. Let $u_{i,j} = lk(\alpha_i^+ , \alpha_j )$, where $\alpha_i^+$ denotes the simple closed curve in $S^3$ that is a slight lift of $\alpha_i$ toward the positive normal direction. Then $M = [u_{i,j}]_{1 \leq i,j \leq m}$ is a Seifert matrix of $K(r)$. In this paper, we call $M$ a standard Seifert matrix of $K(r)$. It is easy to see from Fig.
4.2 that $M$ is as below left (resp. right) when $m$ is even (resp. odd).\\ \centerline{ $\left[ \begin{array}{ccccccc} a_1 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\cdots \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}0 \\ -1 \hspace{-1mm}&\hspace{-1mm} a_2 \hspace{-1mm}&\hspace{-1mm}1 \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\\ 0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} a_3 \hspace{-1mm}&\hspace{-1mm}0 \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\\ \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}-1 \hspace{-1mm}&\hspace{-1mm} a_4 \hspace{-1mm}&\hspace{-1mm}1 \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\\ \vdots \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\ddots \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\vdots\\ \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}0 \hspace{-1mm}&\hspace{-1mm} a_{m-1}\hspace{-1mm}&\hspace{-1mm}0\\ 0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm}\cdots \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}0 \hspace{-1mm}&\hspace{-1mm}-1 \hspace{-1mm}&\hspace{-1mm} a_m \end{array} \right],$ $\left[ \begin{array}{ccccccc} a_1 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\cdots \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}0 \\ -1 \hspace{-1mm}&\hspace{-1mm} a_2 \hspace{-1mm}&\hspace{-1mm}1 \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\\ 0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} a_3 \hspace{-1mm}&\hspace{-1mm}0 \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\\ 
\hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}-1 \hspace{-1mm}&\hspace{-1mm} a_4 \hspace{-1mm}&\hspace{-1mm}1 \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\\ \vdots \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\ddots \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}\vdots\\ \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}-1 \hspace{-1mm}&\hspace{-1mm} a_{m-1}\hspace{-1mm}&\hspace{-1mm}1\\ 0 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm}\cdots \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm}0 \hspace{-1mm}&\hspace{-1mm}0 \hspace{-1mm}&\hspace{-1mm} a_m \end{array} \right] $ } Then the Alexander polynomial $\Delta_{K(r)}(t)$ of $K(r)$ is defined by $\Delta_{K(r)} (t) = \det (tM - M^{T})$. For 2-bridge knots and links, we have other particular forms of Seifert surfaces depicted in Fig. 4.3, where the boxes contain some full-twists. A Seifert surface for a 2-bridge knot or link is obtained by successively plumbing unknotted twisted annuli, where the shaded squares indicate the gluing squares for plumbing. The usual way is depicted in Fig. 4.3 top. This surface is isotopic to the one in Fig. 4.2. The surface depicted in Fig. 4.3 bottom is new. Note that the two surfaces are not in general isotopic, but they bound the same 2-bridge knot or link if the corresponding boxes contain the same number of full-twists. \byouga{4-3}{6.0}{4.3} \noindent To be more precise, given $r=[2a_1,2a_2, \dots, 2a_n]$, let $A_1, A_2,\dots, A_n$ be unknotted annuli such that $A_i$ has $a_i$ full-twists. In Fig. 4.3 top, for $2\le i\le n$, $A_i$ is plumbed on the negative (resp. positive) side of $A_{i-1}$ if $i$ is even (resp. odd). This type of plumbed surface is said to be {\it of chain type}. Meanwhile, in Fig. 4.3 bottom, every annulus is plumbed on the negative side of the preceding annulus.
This type of plumbed surface is said to be {\it of twisted chain type}. A plumbed surface of twisted chain type has a Seifert matrix of the following form, which is also said to be of twisted type.\\ \begin{equation}\label{siki:4.1} \left[\begin{array}{ccccc} a_1 \hspace{-1mm}&\hspace{-1mm} 1 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm}\cdots \hspace{-1mm}&\hspace{-1mm} 0\\ 0 \hspace{-1mm}&\hspace{-1mm} a_2 \hspace{-1mm}&\hspace{-1mm} 1 \hspace{-1mm}&\hspace{-1mm} 0 \hspace{-1mm}&\hspace{-1mm} \vdots\\ \vdots \hspace{-1mm}&\hspace{-1mm} \ddots\hspace{-1mm}&\hspace{-1mm} \ddots\hspace{-1mm}&\hspace{-1mm} \ddots \hspace{-1mm}&\hspace{-1mm} 0\\ \hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} 0\hspace{-1mm}&\hspace{-1mm} a_{n-1} \hspace{-1mm}&\hspace{-1mm} 1\\ 0 \hspace{-1mm}&\hspace{-1mm} \cdots\hspace{-1mm}&\hspace{-1mm} \hspace{-1mm}&\hspace{-1mm} 0\hspace{-1mm}&\hspace{-1mm} a_n \end{array}\right] \end{equation} If $K$ is a link (of $\mu$ components, $\mu \geq 2$), $\Delta_K (t_1 , t_2, \cdots, t_{\mu})$ denotes the (multivariate) Alexander polynomial of $K$. (\cite{tor}) Then $\Delta_K (t, t, \cdots,t) / (t-1)^{\mu - 2} := \nabla_K (t)$ is called the {\it Hosokawa polynomial} of $K$. (\cite{hoso}) The degree of $\nabla_K (t)$ is even and $(t-1)\Delta_K (t, t, \cdots,t) = (t-1)^{\mu -1}\nabla_K (t)$ is called the {\it reduced Alexander polynomial} of a link $K$. We denote by $\Delta_K (t)$ the reduced Alexander polynomial of a link $K$, but we call it simply the Alexander polynomial of a link $K$. Generally, if $M$ is a Seifert matrix of a knot (resp. a link) $K$, then the Alexander polynomial (resp. reduced Alexander polynomial) is defined as $\det(t M-M^T)$. To study the zeros of a real stable polynomial $f(z) \in \RR[z]$, it is usually assumed that the leading coefficient is positive. 
Since $\Delta_K (t)$ is defined up to $\pm t$, we denote by $D_K (t)$ the Alexander polynomial of a knot (or link) $K$ with a positive leading coefficient and a non-zero constant term. We may call $D_K(t)$ the {\it normalized Alexander polynomial} of $\Delta_K (t)$, or the {\it normalization} of $\Delta_K (t)$. We should note that for a 2-bridge knot or link $K(r)$ with $r=[2a_1,2a_2,\dots,2a_n]$, the normalization $D_{K(r)}(t)$ of $\Delta_{K(r)}(t)$ is given by $D_{K(r)}(t)=\varepsilon \det (tM-M^{T})$, where $\varepsilon=\prod_{j=1}^{n}\frac{a_j}{|a_j|}$. For a knot $K$, $\Delta_K (t)$ is reciprocal, namely, $\Delta_K (t) = t^n \Delta_K (t^{-1})$ for some integer $n$. However, for a link, $\Delta_K (t)$ is not necessarily reciprocal, but the Hosokawa polynomial $\nabla_K (t)$ is always reciprocal. Suppose that $f(t)$ is a reciprocal real polynomial of degree $2n$. Let $x = t + \frac{1}{t}$. Then $t^{-n}f(t)$ can be written uniquely as a real polynomial $F(x)$ in $x$ of degree $n$. For convenience, we call $F(x)$ the {\it modified polynomial} or {\it modification} of $f(t)$. We should note that if $f(t) = \Delta_K (t)$ for a knot $K$, then $F(x)$ is equivalent to the Conway polynomial of $K$. To be more precise, let $C_K (z) = a_0 z^{2n} + a_1 z^{2n-2} + \cdots + a_{n-1} z^2 + a_n$ be the Conway polynomial of a knot $K$. Then $F(x) = a_0 (x-2)^n + a_1 (x-2)^{n-1} + \cdots + a_{n-1} (x-2) + a_n$. $F(x)$ is not necessarily reciprocal. For convenience, we call a knot (or link) $K$ {\it (real) stable}, {\it $c$-stable} or {\it bi-stable} if the Alexander polynomial of $K$ is, respectively, (real) stable, $c$-stable or bi-stable. Further, we call a complex zero $\alpha$ of $\Delta_K (t)$ with $|\alpha|=1$ a {\it unit complex zero} of $\Delta_K (t)$. If a bi-stable Alexander polynomial has both real zeros and unit complex zeros, we call it {\it strictly bi-stable}, and such a knot or link is also called {\it strictly bi-stable}.
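The modification $F(x)$ and the zero-location classes just introduced are easy to check numerically. The following sketch is our illustration, not part of the construction above; it assumes the numpy library, and the tolerance and floating-point root-finding are ad hoc heuristics, not a proof. It computes $F(x)$ via the recursion $t^k + t^{-k} = p_k(x)$, $p_0 = 2$, $p_1 = x$, $p_k = x\,p_{k-1} - p_{k-2}$, and classifies a polynomial by the location of its zeros.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def modification(coeffs):
    """Modified polynomial F(x), x = t + 1/t, of a reciprocal f(t) of even
    degree 2n.  coeffs lists c_0, ..., c_{2n} of f(t) = c_0 t^{2n} + ... + c_{2n}.
    Returns the coefficients of F in x, low degree first."""
    n = (len(coeffs) - 1) // 2
    # p_k(x) = t^k + t^{-k}:  p_0 = 2, p_1 = x, p_k = x p_{k-1} - p_{k-2}
    p = [np.array([2.0]), np.array([0.0, 1.0])]
    for k in range(2, n + 1):
        p.append(P.polysub(P.polymul([0.0, 1.0], p[k - 1]), p[k - 2]))
    F = np.array([float(coeffs[n])])            # middle coefficient c_n
    for k in range(1, n + 1):
        F = P.polyadd(F, coeffs[n - k] * p[k])  # c_{n-k} (t^k + t^{-k})
    return F

def classify(coeffs, tol=1e-6):
    """Sort a real polynomial into the five classes by the location of its zeros."""
    zeros = np.roots(coeffs)
    real = np.abs(zeros.imag) < tol
    unit = np.abs(np.abs(zeros) - 1.0) < tol
    if real.all():
        return "stable"                  # all zeros real
    if unit.all():
        return "c-stable"                # all zeros on the unit circle
    if (real | unit).all():
        return "strictly bi-stable"      # both kinds occur, nothing else
    if (~real & ~unit).all():
        return "totally unstable"        # no zero real or unit complex
    return "none of them"

# Figure-eight knot 4_1: Delta(t) = t^2 - 3t + 1, zeros (3 +- sqrt 5)/2.
print(classify([1, -3, 1]))   # -> stable
# Trefoil: Delta(t) = t^2 - t + 1, zeros are primitive 6th roots of unity.
print(classify([1, -1, 1]))   # -> c-stable
```

For instance, the Alexander polynomial $t^2-3t+1$ of the figure-eight knot $4_1$ has two real zeros, so $4_1$ is real stable, while $t^2-t+1$ of the trefoil has only unit complex zeros, so the trefoil is $c$-stable.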
If any zero of $\Delta_K (t)$ is neither real nor unit complex, we say $\Delta_K (t)$ is {\it totally unstable} and a knot (or link) $K$ is called {\it totally unstable}. Therefore, a knot (or link) is classified into five classes: stable, $c$-stable, strictly bi-stable, totally unstable and none of them. Note that in the literature, a stable knot may have appeared in a different sense. However, in this paper, we use our terminology. A link is never totally unstable, since $\Delta_K (1)=0$. If a knot $K$ is totally unstable, then $\deg\Delta_K (t)$ is divisible by $4$. This is because if $\alpha$ is a complex zero of $\Delta_K (t)$, then so are $\bar{\alpha}, \frac{1}{\alpha}, \frac{1}{\bar{\alpha}}$. The Hosokawa polynomial $\nabla_K (t)$ of a link can be totally unstable if $\deg\nabla_K (t)$ is a multiple of $4$. The stability problem of $\Delta_K (t)$ can be checked by using the modified polynomial of $\Delta_K (t)$ as follows. \begin{prop}\label{prop:4.1} Let $F(x)$ be the modified polynomial of $\Delta_K(t)$ of a knot $K$. Then $K$ is bi-stable if and only if $F(x)$ is real stable. (Therefore, if $F(x)$ does not have a real zero, $K$ is totally unstable.) Further, if $K$ is bi-stable, the number of the real zeros of $\Delta_K (t)$ is exactly twice the number of the real zeros $\alpha$ of $F(x)$ (counting multiplicity) such that $|\alpha| \geq 2$. \end{prop} {\it Proof.} We prove a slightly more general statement. Let $f(t)$ be a real reciprocal polynomial of degree $2n$. We write $f(t) = c_0 t^{2n} + c_1 t^{2n-1} + \cdots + c_{2n-1} t + c_{2n}, c_0 > 0, c_j = c_{2n-j}, 0 \leq j \leq 2n$. Express $f(t) = c_0 \prod_{j=1}^{2n} (t- \alpha_j)$ and $F(x) = c_0 \prod_{j=1}^n (x- \beta_j)$. Suppose $\beta_j$ is real and $|\beta_j | \geq 2$. Then $\beta_j$ gives two real zeros of $f(t)$, since $t - \beta_j + 1/t = 0$ has two real zeros. However, if $| \beta_j | < 2$, then $t - \beta_j + 1/t = 0$ gives two unit complex zeros. This proves the \lq\lq if\rq\rq\ part.
Conversely, suppose that $f(t)$ is bi-stable. If $\alpha_j$ is real, then $\alpha_j + \frac{1}{\alpha_j}$ is real and hence the corresponding zero of $F(x)$ is real and further, $|\alpha_j + \frac{1}{\alpha_j} | \geq 2$. If $\alpha_j$ is unit complex, then $\alpha_j + \frac{1}{\alpha_j} = \alpha_j + \overline{\alpha}_j$ is real, and hence the corresponding zero of $F(x)$ is real and $|\alpha_j + \frac{1}{\alpha_j}| < 2$. \qed \begin{rem}\label{rem:4.2} In \cite{wu}, Wu proved a similar proposition using the Conway polynomial. \end{rem} \begin{ex}\label{ex:4.n7} Let $f(t) = t^6 - 3t^5 + 2t^4 - t^3 +2t^2 -3t +1$. Then $F(x) = (x^3 -3x) - 3(x^2 -2) + 2x - 1 = x^3 - 3x^2 - x + 5$. $F(x)$ has three real zeros, two of which are in the interval $(-2, 2)$. Therefore, $f(t)$ has two real zeros and four unit complex zeros, and hence $f(t)$ is strictly bi-stable. \end{ex} In the rest of this section, we show four elementary but useful propositions. The first proposition is well-known. \begin{prop}\label{prop:4.4} (Min-Max Theorem) Let $M$ be a real symmetric matrix of order $n$. Let $\alpha_1 \leq \alpha_2 \leq \cdots \leq \alpha_n$ be the eigenvalues of $M$. Then, \begin{equation*} \alpha_1 \leq \min\{{\rm diagonal\ entries\ of\ } M\}{\rm \ and\ } \alpha_n \geq \max\{{\rm diagonal\ entries\ of\ } M\}. \end{equation*} \end{prop} Since the second proposition is less known, we give a proof. \begin{prop}\label{prop:4.5} (Strong Positivity Lemma) Let $M = [a_{i,j}]_{1 \leq i,j \leq n}$ be an $n \times n$ real matrix such that for $i = 1,2, \cdots,n$, \begin{align}\label{siki:4.2} &(1)\ a_{i,i} > 0, \nonumber\\ &(2)\ a_{i,i} > | a_{i,1} | + | a_{i,2} | + \cdots + | a_{i,i-1} | + | a_{i,i+1} | + \cdots + | a_{i,n} |. \end{align} Then $\det M > 0$. \end{prop} For convenience, a row that satisfies (2) is called {\it excessive}. {\it Proof.} If $n=1$ or $2$, the proposition is trivially true. Suppose the proposition holds for $(n-1)\times(n-1)$ matrices.
Let $P = [\lambda_{i,j}]$ be an $n \times n$ matrix of the form \begin{align}\label{siki:4.3} &(1)\ \lambda_{i,i}= 1, 1 \leq i \leq n,\nonumber\\ &(2)\ \lambda_{k,1}= -\frac{a_{k,1}}{a_{1,1}}, 2 \leq k \leq n,\nonumber\\ &(3)\ \lambda_{i,j} = 0 {\rm \ otherwise}. \end{align} Then $PM = \left[ \begin{array}{c|ccc} a_{11}&a_{12}&\cdots&a_{1n}\\ \hline 0& & &\\ \vdots& & M'& \\ 0& & & \end{array} \right] $, where $M^{\prime} =[ a_{i,j}^{\prime}]_{2 \leq i,j \leq n}$ is an $(n-1) \times (n-1)$ matrix of the form: For $i, j =2,\cdots,n$, \begin{align}\label{siki:4.4} a_{i,j}^{\prime} = a_{i,j} - a_{1,j}\frac{a_{i,1}}{a_{1,1}}. \end{align} We claim that $a_{i,i}^{\prime} > 0$ and every row of $M^{\prime}$ is excessive. If $a_{i,1}= 0$, then $a_{i,2}^{\prime} = a_{i,2}, \cdots, a_{i,n}^{\prime} = a_{i,n}$ and hence $a_{i,i}^{\prime} = a_{i,i}>0$, and since the $i^{\rm th}$ row of $M$ is excessive, the $i^{\rm th}$ row of $M^{\prime}$ is also excessive. Suppose $a_{i,1} \ne 0$. We show \begin{equation}\label{siki:4.5} a_{i,i}^{\prime} > | a_{i,2}^{\prime} |+ | a_{i,3}^{\prime}| + \cdots+|a_{i,i-1}^{\prime}|+ | a_{i,i+1}^{\prime} | + \cdots+ |a_{i,n}^{\prime} |. \end{equation} First, for $j \ne i$ and $j \geq 2$, $| a_{i,j}^{\prime}|=| a_{i,j} - a_{1,j}\frac{a_{i,1}}{a_{1,1}}| \leq | a_{i,j}|+ \frac{|a_{1,j}| |a_{i,1}|}{a_{1,1}}$ and hence \begin{equation}\label{siki:4.6} \sum_{j=2, j \ne i}^n | a_{i,j}^{\prime} | \leq \sum_{j=2, j\ne i}^n | a_{i,j}| + \frac{|a_{i,1}|}{a_{1,1}} \sum_{j=2, j\ne i}^n |a_{1,j}|. \end{equation} Next, for $i \geq 2$, since $a_{i,i} > |a_{i,1}|$ and $a_{1,1} > |a_{1,i}|$, we see \begin{equation}\label{siki:4.7} a_{i,i}^{\prime} =a_{i,i}- \frac{a_{1,i} a_{i,1}}{a_{1,1}} \geq a_{i,i}- \frac{|a_{1,i}| |a_{i,1}|}{a_{1,1}} > 0.
\end{equation} From (\ref{siki:4.6}) and (\ref{siki:4.7}), we have; \begin{align}\label{siki:4.8} a_{i,i}^{\prime} - \sum_{j=2, j \ne i}^n | a_{i,j}^{\prime}| &\geq a_{i,i} - \frac{|a_{1,i}| |a_{i,1}|}{a_{1,1}} - \sum_{j=2, j \ne i}^n | a_{i,j} | - \frac{|a_{i,1}|}{a_{1,1}}\sum_{j=2, j\ne i} |a_{1,j}| \nonumber\\ &= a_{i,i} - \sum_{j=2, j \ne i} |a_{i,j} | - \frac{|a_{i,1}|}{a_{1,1}}\left( |a_{1,i}| + \sum_{j=2, j \ne i}^n |a_{1,j}|\right) \nonumber\\ &= a_{i,i} - \sum_{j=2, j \ne i}^n | a_{i,j}| - \frac{|a_{i,1}|}{a_{1,1}} \sum_{j=2}^n |a_{1,j}|. \end{align} By assumption, \begin{align}\label{siki:4.9} a_{i,i} - \sum_{j=2, j \ne i}^n |a_{i,j}| > |a_{i,1} | {\rm \ and} \nonumber\\ \delta = \sum_{j=2}^n \frac{| a_{1,j} |}{a_{1,1}} < 1. \end{align} Therefore, from (\ref{siki:4.9}), we have; \begin{equation}\label{siki:4.10} a_{i,i}^{\prime} - \sum_{j=2, j\ne i}^n |a_{i,j}^{\prime} | > |a_{i,1} | - | a_{i,1} | \delta > 0. \end{equation} This proves (\ref{siki:4.5}). Since all rows of $M^{\prime}$ are excessive and $a_{i,i}^{\prime} > 0, 2\leq i \leq n$, by (\ref{siki:4.7}), it follows by induction that $\det M^{\prime} > 0$ and hence $\det M =\det(PM)= a_{1,1} \det M^{\prime} > 0$. \qed \begin{rem}\label{rem:4.6} In the above proof, even if the $i^{\rm th}$ row of $M$ is not excessive, i.e., $a_{i,i} = | a_{i,1}| +| a_{i,2} |+ \cdots+ |a_{i,i-1} | + |a_{i,i+1}|+ \cdots + | a_{i,n}|$, a new $i^{\rm th}$ row of $M^{\prime}$ becomes excessive, provided $a_{i,1} \ne 0$ and the first row is excessive. In fact, the first inequality in (\ref{siki:4.9}) becomes an equality, but the second inequality holds, and hence (\ref{siki:4.10}) holds. \end{rem} Let $P$ be a matrix representing an elementary matrix operation. Then we say, for convenience, that a matrix $PMP^{T}$ is obtained from $M$ by applying an elementary matrix operation on the rows (and columns) of $M$ simultaneously. 
For example, we say that a new matrix $M^{\prime}$ is obtained by interchanging the $i^{\mathrm th}$ row (and column) and the $j^{\mathrm th}$ row (and column) of $M$ simultaneously. Using Remark \ref{rem:4.6}, we can obtain the same conclusion under a slightly weaker assumption than that of Proposition \ref{prop:4.5}. \begin{prop}\label{prop:4.7} (Positivity Lemma) Let $M = [a_{i,j}]_{1 \leq i,j \leq n}$ be an $n\times n$ real matrix. Assume that \begin{align} \label{siki:4.11} (1) &\ M {\rm \ cannot\ be\ transformed\ into\ a\ form:\ } \left[\begin{array}{cc} A&B\\ O&C \end{array}\right] {\rm \ or\ } \left[\begin{array}{cc} A&O\\ B&C \end{array}\right] \nonumber\\ &{\rm \ by\ a\ sequence\ of\ exchanges\ of\ the\ } i^{\rm th} {\rm \ row\ (and\ column)\ and\ the\ } j^{\rm th} {\rm\ row\ } \nonumber\\ &{\rm (and\ column)\ simultaneously,\ where\ } A {\rm \ and}\ C {\rm \ are\ square\ matrices}. \nonumber\\ (2)\ &{\rm For\ } i=1,2, \cdots,n,\ a_{i,i} > 0. \nonumber\\ (3)\ &{\rm For}\ i=1,2, \cdots, n,\ a_{i,i} \geq | a_{i,1}| + \cdots+ | a_{i,i-1} | + | a_{i,i+1}| + \cdots + | a_{i,n}|. \nonumber\\ (4)\ &M {\rm \ has\ at\ least\ one\ excessive\ row}. \end{align} Then $\det M > 0$. Further, if $M$ is symmetric, then $M$ is positive definite. \end{prop} \begin{rem}\label{rem:4.8} If (4) is dropped, then $\det M \geq 0$. Suppose (1) is dropped. Then if each of the block matrices $A$ and $C$ satisfies (\ref{siki:4.11}) (1) - (4), then $\det M > 0$. \end{rem} {\it Proof of Proposition \ref{prop:4.7}. } We may assume without loss of generality that $M$ has been arranged by a sequence of exchanges of rows and columns, simultaneously, in such a way that \begin{align}\label{siki:4.12} &(1)\ {\rm the\ first\ } k\ {\rm rows\ are\ excessive,\ but\ each\ of\ the\ remaining\ rows\ is\ not;} \nonumber\\ &(2)\ {\rm for\ each\ } i=k+1, \cdots,n, {\rm \ at\ least\ one\ of\ } a_{i,1}, a_{i,2}, \cdots, a_{i,i-1} {\rm \ is\ not\ } 0.
\end{align} Write $M =\tbt{A}{B}{C}{D}$, where $A$ is a $k \times k$ matrix. Starting from the first row, we can transform $A$ into an upper triangular matrix $A^{\prime}$ by a sequence of row operations (without changing the value of the determinant) to get $M^{\prime} = \tbt{A^{\prime}}{B^{\prime}}{C}{D}$, where $A^{\prime} = \left[ \begin{array}{cccc} a_{11} & & & * \\ & a_{22}' & & \\ & & \ddots & \\ O & & & a_{kk}' \end{array} \right] $. By Remark \ref{rem:4.6}, each of the first $k$ rows is excessive. Now by (\ref{siki:4.12}), we may assume without loss of generality that one of $a_{k+1,1}, \cdots, a_{k+1,k}$ is not $0$. Apply row operations repeatedly on the $(k+1)^{\rm st}$ row so that the first $k$ entries of the $(k+1)^{\rm st}$ row of $M^{\prime}$ become $0$; by Remark \ref{rem:4.6}, the new $(k+1)^{\rm st}$ row is then excessive. We can repeat this until all rows are excessive. Then apply Proposition \ref{prop:4.5} to show that $\det M>0$. Furthermore, the same argument can be applied to show that all principal minors of $M$ are positive, and hence if $M$ is symmetric, then $M$ is positive definite. \qed Finally, we prove the following proposition. \begin{prop}\label{prop:4.9} Let $M$ be a real matrix of the form:\\ $M = \tbt{A}{O}{H}{B}$ or $\tbt{A}{H}{O}{B}$, where $A$ is a $p\times p$ positive definite symmetric matrix, $B$ a $q \times q$ negative definite symmetric matrix and $H$ an arbitrary matrix. Then $M^{-1} M^{T}$ is conjugate to a symmetric matrix in $GL(p+q, \RR)$. Therefore, the characteristic polynomial $f(t)$ of the matrix $M^{-1} M^{T}$ is real stable. \end{prop} {\it Proof.} First, if $A=E_p$ and $B=-E_q$, then $M^2 =E_{p+q}$, and hence, $M^{-1}M^{T} = M M^{T}$ is symmetric. Now, consider the general case.
Let $P_A$ and $P_B$ be, respectively, matrices which diagonalize $A$ and $B$. Write $P_A A P_A^{T} = \mathrm{diag}\{a_1, a_2, \cdots, a_p\}, a_j > 0$, for $1 \leq j \leq p$, and $P_B B P_B^{T} = \mathrm{diag} \{ -b_1, -b_2, \cdots, -b_q\}$, $b_j > 0$, for $1 \leq j \leq q$. Let $D_a = \mathrm{diag} \{ 1/ \sqrt{a_1}, 1/ \sqrt{a_2}, \cdots, 1 / \sqrt{a_p}\}$ and $D_b = \mathrm{diag} \{ 1/ \sqrt{b_1}, 1/ \sqrt{b_2}, \cdots, 1 / \sqrt{b_q}\}$. Further, let $P = P_A \oplus P_B$ and $D = D_a \oplus D_b$. Then a simple computation shows that $ D P M P^{T} D^{T} = M_0, {\rm \ where\ } M_0 = \left[\begin{array}{cc} E_p &O \\ H_0 & -E_q \end{array}\right]$. Now since $M_0^2 = E_{p+q}$, it follows that $M = P^{-1} D^{-1} M_0 (D^{T})^{-1} (P^{T})^{-1}$ and $M^{-1}M^{T} = P^{T}D^{T} M_0 D P P^{-1}D^{-1} M_0^{T} (D^{-1})^{T} (P^{-1})^{T} = P^{T}D^{T} M_0 M_0^{T} (D^{T})^{-1}(P^{T})^{-1}$, and hence, $M^{-1}M^{T}$ is conjugate to the symmetric matrix $M_0 M_0^{T}$. \qed \section{The Alexander polynomials of alternating knots}\label{5} Before we concentrate on the study of various stabilities of knots or links, we discuss, in this section, some connections between the stability of alternating knots or links and various conjectures in knot theory. \subsection{Hoste's Conjecture}\label{5.1} In 2002, based on his extensive calculations of the zeros of the Alexander polynomials, Hoste made the following conjecture. \begin{yosou}[J. Hoste, 2002]\label{conj:5.1} Let $K$ be an alternating knot and $\Delta_K(t)$ the Alexander polynomial of $K$. Then for any zero $\alpha$ of $\Delta_K(t)$, Re$(\alpha) > -1$. \end{yosou} One of the key observations is that Conjecture \ref{conj:5.1} is equivalent to the following \begin{yosou}\label{conj:5.2} Under the same assumption, $\Delta_K (-(t+1)) \in\RR[t]$ is strongly Hurwitz-stable. \end{yosou} Using Lyapunov matrices, the following theorem was proved. \begin{thm}[{\cite[Theorem 1]{LM}}]\label{thm:5.3} Let $K$ be a $2$-bridge knot (or link).
Then $\Delta_K (- (t+3))$ and $\Delta_K (t+6)$ are strongly Hurwitz-stable. Equivalently, any zero $\alpha$ of $\Delta_K(t)$ satisfies \begin{equation} -3 < {\rm Re}(\alpha) < 6. \end{equation} \end{thm} For other special results, see \cite[Theorems 3,4 and 5]{LM}. \begin{rem}\label{rem:5.4} A.~Stoimenow proved in \cite{St} that for a 2-bridge knot (or link) $K$, any zero $\alpha$ of $\Delta_K (t)$ satisfies \begin{equation} \left| \sqrt{\alpha} -\dfrac{1}{\sqrt{\alpha}} \right| < 2. \end{equation} This implies \begin{equation} -1< {\rm Re}(\alpha)< 3+\sqrt{8}=5.8284... \end{equation} \end{rem} It should be noted that for a non-alternating knot, neither a lower bound nor an upper bound of Re($\alpha$) exists \cite[Examples 1 and 2]{LM}. Further, we think that an upper bound of Re($\alpha$) exists only for the family of $2$-bridge knots or links. In fact, there exists an infinite sequence of alternating stable (Montesinos) knots $K_1, K_2, \dots, K_m,\dots$ such that the maximal value of the zeros of $\Delta_{K_m}(t)$ is at least $m+1$. (See Theorem \ref{thm:15.2} Case 3.) Therefore, in general, an upper bound of Re($\alpha$) does not exist, even for alternating knots. However, an upper bound may exist for some family of the Alexander polynomials. For example, let $\Gamma_n$ be the set of all Alexander polynomials (of degree $n$) of alternating knots. \begin{yosou}\label{conj:5.6} There exists a real number $\delta_n>0$ such that for any zero $\alpha$ of any $\Delta_K(t)$ in $\Gamma_n$, \begin{equation} {\rm Re}(\alpha) \leq \delta_n. \end{equation} \end{yosou} It is known that Conjecture \ref{conj:5.6} is false for non-alternating knots \cite[Example 2]{LM}. Since the Alexander polynomial of an alternating knot $K$ is of the form $\Delta_K(t)=\sum_{j=0}^{2n}(-1)^j c_j t^{2n-j}, c_j>0,0\le j\le 2n$, it follows that if $\Delta_K(t)$ is real stable, then all the zeros are positive and hence Conjecture \ref{conj:5.1} holds.
Therefore, we have: \begin{thm}\label{thm:5.6} Let $K$ be an alternating knot. If $K$ is bi-stable, then Conjecture \ref{conj:5.1} holds for $K$. \end{thm} \subsection{Trapezoidal Conjecture}\label{5.2} Let $\Delta_K(t)=\sum_{j=0}^{2n}(-1)^j c_j t^{2n-j}$ be the Alexander polynomial of an alternating knot $K$. Then the Trapezoidal conjecture claims: \begin{yosou}\label{conj:5.7} \cite{fox62} There is an integer $k$, $1\le k\le n$, such that \begin{equation}\label{siki:5.5} c_0<c_1<\dots<c_k=c_{k+1}=\dots=c_{2n-k}>c_{2n-k+1}>\dots>c_{2n}. \end{equation} \end{yosou} This conjecture has been proven for several families of alternating knots, but not in general; see \cite{hart}, \cite{mu85}, etc. Further, this conjecture does not hold for Hosokawa polynomials of alternating links. For example, the 2-bridge link $K(r)$, $r=11/14$, is bi-stable and satisfies the Trapezoidal conjecture, since $\Delta_K(t)=t^5-3t^4+3t^3-3t^2+3t-1$, but its Hosokawa polynomial $\nabla_K(t)=\Delta_K(t)/(t-1)=t^4-2t^3+t^2-2t+1$ does not. The trapezoidal property of the coefficients is quite similar to the unimodality of a sequence considered in Section 3. For an alternating stable knot $K$, $\Delta_K(-t)=\sum_{j=0}^{2n} c_j t^{2n-j}$ satisfies all assumptions of Theorem \ref{thm:3.13} and hence the coefficient sequence is strictly log-concave. Therefore, we have \begin{equation}\label{siki:5.6} c_0<c_1<\dots<c_n>c_{n+1}>c_{n+2}>\dots>c_{2n} \end{equation} and hence we obtain the following: \begin{thm}\label{thm:5.8} For an alternating stable knot (or link) $K$, the Trapezoidal conjecture holds. \end{thm} We should note that if the number of components of an alternating stable link $K$ is even, then $\deg\Delta_K(t)$ is odd, say $2n+1$, and (\ref{siki:5.6}) should be replaced by (\ref{siki:5.7}) below. \begin{equation}\label{siki:5.7} c_0<c_1<\dots<c_n=c_{n+1}>c_{n+2}>\dots>c_{2n+1} \end{equation} For a knot $K$, there is another necessary condition for $\Delta_K(t)$ to be stable.
\begin{prop} \label{prop:5.9} If $K$ is a stable knot, then the signature $\sigma(K)$ of $K$ is zero. \end{prop} In fact, if the signature is not zero, $\Delta_K(t)$ has at least two zeros on the unit circle \cite{mi}. However, the converse of Proposition \ref{prop:5.9} is false. For example, let $K(r)$ be the 2-bridge knot with $r=[2,2,-4,-2]$. Then $\sigma(K)=0$, but the zeros of $\Delta_K(t)=2 - 6 t + 9 t^2 - 6 t^3 + 2 t^4$ are $1\pm i$ and $\frac{1\pm i}{2}$. Further, the following example shows that for links, Proposition \ref{prop:5.9} does not hold. \begin{ex}\label{ex:5.10} Let $L$ be an alternating pretzel link $P(2,4,4)$, oriented so that $L$ is a special alternating $3$-component link. Then the reduced Alexander polynomial $\Delta_L(t) = 8(t-1)^2$, which is stable, while $\sigma(L)=2$. \end{ex} We suspect that Hoste's conjecture and the Trapezoidal conjecture are independent. However, for alternating knots the condition $\sigma(K)=0$ may imply (\ref{siki:5.6}). Therefore, we propose the following conjecture. \begin{yosou}\label{conj:5.11} Let $\Delta_K(t)=\sum_{j=0}^{2n}(-1)^j c_j t^{2n-j}, c_j>0$ be the Alexander polynomial of an alternating knot $K$. If $\sigma(K)=0$, then the coefficient sequence satisfies (\ref{siki:5.6}), i.e., $c_0<c_1<\dots<c_n>c_{n+1}>c_{n+2}>\dots>c_{2n} $. More generally, if $\sigma(K)=2k$, then the coefficient sequence satisfies (\ref{siki:5.8}) below: \begin{equation}\label{siki:5.8} c_0<c_1<\dots<c_{n-m-1}<c_{n-m}=\dots=c_{n+m}>c_{n+m+1}>\dots>c_{2n}, \end{equation} \noindent where $m\le k$. \end{yosou} This conjecture is quite likely true for $2$-bridge knots. However, it is false for non-alternating knots. In fact, the signature of the non-alternating knot $10_{132}$ is $0$, but the Alexander polynomial is $t^4-t^3+t^2-t+1$. \begin{rem}\label{rem:5.12} In graph theory, the concept of unimodality has been used in \cite{gr}. Very recently, the following long-standing conjecture was proved in \cite{huh}.
\end{rem} \begin{yosou}\label{conj:5.12}\cite[p. 534]{gr} The sequence of the coefficients of the chromatic polynomial is unimodal. \end{yosou} \section{Construction of real stable knots (I)}\label{6} The first family of alternating stable knots or links was given in Theorem \ref{thm:6.1} below. In the proof, Seifert surfaces in Fig. 4.2 were used to show that there exists a symmetric companion matrix $M$ of the Alexander polynomial of $K(r)$, and hence, all the eigenvalues of $M$ are real. \begin{thm}[{\cite[Theorem 2]{LM}}] \label{thm:6.1} Let $r=[2a_1, 2a_2 , \cdots, 2a_m]$ be an even continued fraction expansion of a rational number $r = \beta/ \alpha$. If the sequence $\{a_1, a_2, \cdots, a_m\}$ alternates in sign, i.e., $a_j a_{j+1} < 0$, $1\le j\le m-1$, then the $2$-bridge knot (or link) $K(r)$ is real stable. \end{thm} In this section, we first construct a new surface that is a slight generalization of a Seifert surface for a $2$-bridge knot (or link); then, using these surfaces, we define a knot (or link), called a quasi-rational knot (or link), and generalize Theorem \ref{thm:6.1}. \subsection{Quasi-rational knots} \label{6.1} In this subsection, we define the class of \lq\lq quasi-rational links\rq\rq\ which is a generalization of 2-bridge links. \begin{dfn}\label{dfn:6.2} Let $D$ be a disk with two families $\Gamma_1=\{\alpha_1,\dots, \alpha_p\}$, $\Gamma_2=\{\beta_1,\dots,\beta_q\}$ of properly embedded arcs in $D$, where no two arcs share their end points on $\partial D$. In each family, the arcs are disjoint, but arcs from different families may intersect one another. Each arc, say $\gamma$, is assigned a non-zero integer $w(\gamma)$, called a weight. Push the interior of $\alpha_i$'s (resp. $\beta_j$'s) in the positive (resp. negative) normal direction of $D$, and along each pushed arc $\gamma'$, attach a band to $D$ with $w(\gamma)$ half-twists.
The boundary of the resulting surface $F(\Gamma_1,\Gamma_2)$ is called a {\it quasi-rational} knot or link. Conventionally, the arcs in $\Gamma_2$ are depicted as dotted lines. (See Fig. 6.1.) \end{dfn} The surface $F$ is orientable if and only if all weights are even. In that case, $F(\Gamma_1,\Gamma_2)$ is a Murasugi sum of two Seifert surfaces $F(\Gamma_1,\emptyset)$ and $F(\emptyset,\Gamma_2)$ along $D$, where the summands are respectively connected sums of elementary torus links. Hence by \cite{ga}, $F(\Gamma_1,\Gamma_2)$ is of minimal genus. \begin{ex} As a particular example, we see in Fig. 4.2 or Fig. 6.1 that 2-bridge knots and links $K(r)$ are quasi-rational.\\ For weights $\{2w(\alpha_1),2w(\alpha_2), \dots, 2w(\alpha_p)\}$, $\{2w(\beta_1),2w(\beta_2), \dots, 2w(\beta_q)\}$, a nice Seifert matrix $M= \left[\begin{array}{cc} A_1&O\\C&A_2\end{array}\right]$ is obtained from $F(\Gamma_1,\Gamma_2)$, where $A_1=\mathrm{diag}\{w(\alpha_1),w(\alpha_2), \dots, w(\alpha_p)\}, A_2=\mathrm{diag}\{w(\beta_1),w(\beta_2), \dots, w(\beta_q)\}$ and in $C$, all diagonal entries are $-1$, the $(k,k+1)$-entries are $1$, and the other entries are $0$. For convenience, we say that $M$ is of {\it split type}. \end{ex} \byouga{6-1}{7.5}{6.1: $r=[2a_1,2a_2,\dots]$} \subsection{Stable quasi-rational knots}\label{6.2} In this subsection we generalize Theorem \ref{thm:6.1} to some classes of quasi-rational knots or links. \begin{thm}\label{thm:6.3} Let $F(\Gamma_1,\Gamma_2)$ be a Seifert surface for a quasi-rational knot or link. Suppose the weights for $\Gamma_1$ (resp. $\Gamma_2$) are all positive (resp. negative) and even. Then $L=\partial F(\Gamma_1,\Gamma_2)$ is alternating and stable. \end{thm} {\it Proof.} By turning the arcs in $\Gamma_1$ outside of the disk $D$, we have an alternating diagram for $L$.
Let $\{2w(\alpha_1),2w(\alpha_2), \dots, 2w(\alpha_p)\}$ and $\{2w(\beta_1),2w(\beta_2), \dots, 2w(\beta_q)\}$ be the weights for $\Gamma_1$ and $\Gamma_2$, where the $w(\alpha_i)$'s are positive and the $w(\beta_j)$'s are negative. Take a natural basis of $H_1(F(\Gamma_1,\Gamma_2))$, where each loop is a union of the core curve of an attached band and a curve in $D$. The orientations of the loops are arbitrary. Then we have a Seifert matrix $M= \left[\begin{array}{cc} A_1&O\\C&A_2\end{array}\right]$, where $A_1=\mathrm{diag}\{w(\alpha_1),w(\alpha_2), \dots, w(\alpha_p)\}, A_2=\mathrm{diag}\{w(\beta_1),w(\beta_2), \dots, w(\beta_q)\}$. Since $\Delta_L(t)=\det(t M-M^T)=(\det M)\det(t E - M^{-1} M^T)$, it follows from Proposition \ref{prop:4.9} that $\Delta_L(t)/\det M$ is the characteristic polynomial of a symmetric matrix, and hence $\Delta_L(t)$ is stable. \qed Finally, we see that the signature of $L$ is $p-q$. Since a stable knot has signature $0$, it follows that if $p \ne q$, $L$ is not a stable knot, but a stable link. \subsection{Examples}\label{6.3} In this subsection, we construct two series of stable knots, both of which are quasi-rational knots. These knots contain the first two alternating non-$2$-bridge stable knots whose Alexander polynomials have real zeros larger than $6$. \begin{ex}\label{ex:6.4} The family of knots denoted by\\ \centerline{ $X_n(2a_1, 2a_2, \cdots, 2a_n \mid 2b_1, 2b_2, \cdots, 2b_n)$ } is depicted in Fig. 6.2. For example, $X_1 (2 \mid-2)$ is $4_1$ and $X_2 (2,2 \mid-2,-2)$ is $8_{12}$ and $X_3 (2,2,2 \mid-2,-2,-2)$ is $12_{a0125}$. If all $a_j, 1 \leq j \leq n$, are positive and all $b_j, 1 \leq j \leq n$, are negative, then $X_n$ is alternating and always stable. A Seifert surface is obtained by applying Seifert's algorithm to Fig. 6.2 (right).
\end{ex} \byouga{6-2}{9}{6.2} \begin{ex}\label{ex:6.5} The family of knots denoted by\\ \centerline{ $Y_{2n+1} (2a_1 , 2a_2, \cdots, 2a_{2n+1} \mid 2b_1, 2b_2, \cdots, 2b_{2n+1})$ } is depicted in Fig. 6.3, together with a Seifert matrix in the case of $n=3$. For example, $Y_1 (2 \mid-2)$ is $4_1$ and $Y_3 (2,2,2 \mid-2,-2,-2)$ is $12_{a1124}$. As before, if all $a_j, 1 \leq j \leq 2n+1$, are positive and all $b_j, 1 \leq j \leq 2n+1$, are negative, then $Y_{2n+1}$ is alternating and always stable. \end{ex} \byouga{6-3}{7.5}{6.3} We note that $12_{a0125}$ and $12_{a1124}$ are the only alternating knots with at most $12$ crossings such that the maximal real part of the zeros is larger than $6$. In fact, the maximal values are $6.904\cdots$ for $12_{a0125}$ and $7.699\cdots$ for $12_{a1124}$. Furthermore, these values increase unboundedly as $n$ increases if all $a_j =1$ and all $b_j = -1$, as is proved in Theorem \ref{thm:6.6} below. \begin{thm}\label{thm:6.6} (1) Let $K_n^{(1)} = X_n (2,2, \cdots,2 \mid -2,-2,\cdots,-2)$. Then $K_n^{(1)}$ is stable and the maximal value of the zeros is at least $n+1$. (2) Let $K_{2n+1}^{(2)} = Y_{2n+1}(2,2, \cdots,2 \mid -2,-2, \cdots,-2)$. Then $K_{2n+1}^{(2)}$ is stable and the maximal value of the zeros is at least $2n+1$. \end{thm} {\it Proof.} (1) In the proof of Theorem \ref{thm:6.3}, the matrix $C$ is a lower triangular matrix whose diagonal entries and all entries below the diagonal are $1$, and whose other entries are $0$. Since all $a_j$ are $1$ and all $b_j$ are $-1$, it follows from Proposition \ref{prop:4.9} that a companion matrix of the Alexander polynomial of $K_n^{(1)}$ is $\tbt{E_n}{C^T}{C}{E_n + C C^T}=S$. It is obvious that the maximal value of the diagonal entries of $S$ is $n+1$. This proves (1) by the Min-Max Theorem. (2) In this case $C$ is like the lower left matrix in Fig. 6.3. Then by the same argument, we see that the maximal value of the diagonal entries of $\tbt{E_{2n+1}}{C^T}{C}{E_{2n+1} +C C^T}$ is $2n+1$. This proves (2).
\qed These two quasi-rational knots are quite unlikely to be Montesinos knots. However, the infinite sequences $\{K_n^{(1)}\}$ and $\{K_{2n+1}^{(2)}\}$ give evidence that the maximal value of the real parts of the zeros of Alexander polynomials of alternating knots is unbounded. Further, the maximal zero $7.699\cdots$ of the Alexander polynomial of $12_{a1124}$ is quite likely the number $\delta_6$ defined in Section 5. (See Appendix B.) \begin{rem}\label{rem:6.8} Since $K_n^{(1)}$ and $K_{2n+1}^{(2)}$ are both strongly negative amphicheiral, their Alexander polynomials $\Delta(t)$ have the property: $\Delta(t^2) = f(t) f(-t)$ for some $f(t) \in \ZZ[t]$. See \cite{hartleyKawauchi}. Therefore, their Conway polynomials are of the form: $g(z) g(-z)$ for some $g(z) \in \ZZ[z]$. Furthermore, it is easy to show that $X_n(2a_1, 2a_2, \cdots, 2a_n| 2b_1,2b_2, \cdots, 2b_n)$ is strongly negative amphicheiral if $b_j = -a_{n+1-j}, 1 \leq j \leq n$. Similarly, $Y_{2n+1}(2a_1, 2a_2,\cdots, 2a_{2n+1}| 2b_1, 2b_2, \cdots, 2b_{2n+1})$ is strongly negative amphicheiral if $b_j = -a_j, 1 \leq j \leq 2n+1$. Therefore, the Conway polynomials of these knots are of the form: $g(z) g(-z)$ for some $g(z) \in \ZZ[z]$, and generally these knots are neither stable nor $c$-stable unless all $a_j$'s have the same sign. \end{rem} \section{ Construction of real stable knots (II)}\label{7} In this section, we show a more general construction of stable knots or links. \subsection{Positive or negative disks}\label{7.1} Let $D$ be a disk and divide $D$ into small domains by a (not necessarily connected) plane graph $G$. Let $\{v_1, v_2, \cdots, v_n, {v_1}^{\circ},{v_2}^{\circ}, \cdots , {v_k}^{\circ}\}$ and $\{e_1, e_2, \cdots , e_{\ell}\}$ be, respectively, the set of vertices and edges of $G$, where ${v_j}^{\circ}, 1 \leq j \leq k$, is a vertex on $\partial D$, and $e_j, 1 \leq j \leq {\ell}$, is not a part of $\partial D$ and does not intersect $\partial D$ except at its ends.
We call $v_j$ an interior vertex and ${v_j}^{\circ}$ a boundary vertex. No part of the boundary of $D$ is considered an edge of $G$. We assume \begin{align}\label{siki:7.1} (1)\ &{\rm To\ every\ interior\ vertex\ } v_j {\rm \ of}\ G, {\rm \ there\ is\ a\ path\ in\ }\ G {\rm \ that\ connects\ } v_j {\rm \ to\ some\ } \nonumber\\ & {\rm boundary\ vertex\ } {v_i}^{\circ}. \nonumber\\ (2)\ &{\rm The\ valency\ of\ } v_j {\rm \ is\ at\ least\ 2}. \end{align} Now each edge $e_j, 1 \leq j \leq {\ell}$, has a weight, $w(e_j) = m_j$, a non-zero integer, as before. If all weights are even, we call such a graph an even graph. If some weights are odd, then we assume that $G$ satisfies condition (\ref{siki:7.2}) below. Let $d_1, d_2, \cdots, d_m$ be the domains into which $G$ divides $D$ and $\{e_{j_1}, e_{j_2}, \cdots, e_{j_s}\}$ be the set of all edges on the boundary of $d_j, 1 \leq j \leq m$. Then for $1\leq j \leq m$, \begin{equation}\label{siki:7.2} \sum_{i=1}^s w(e_{j_i}) \equiv 0 \ ({\rm mod}\ 2). \end{equation} If $G$ is an even graph, then (\ref{siki:7.2}) is satisfied automatically. A weighted graph $G$ that satisfies (\ref{siki:7.2}) is called {\it admissible}. If every edge of $G$ has positive (or negative) weight, $G$ is called a positive (or negative) graph. Now, first, we replace each interior vertex with a small disk and then, as we did in the previous section, replace each edge $e_j, 1\leq j \leq {\ell}$, with a narrow band of $w(e_j) = m_j$ half-twists. The resulting surface is denoted by $F(G)$ and is called the surface representing the weighted graph $G$. The projection of the 1-skeleton of $F(G) - {\rm int}(D)$ on $D$ is $G$. $F(G)$ is orientable if and only if $G$ is admissible. See Fig. 7.1. \byouga{7-1}{8}{7.1} It is well-known that any knot or link is obtained as the boundary of a Murasugi sum of finitely many surfaces $F(G_j)$ representing admissible weighted graphs $G_j$.
In particular, any alternating knot or link is obtained as the boundary of a Murasugi sum of finitely many surfaces $F(G_j)$ representing admissible positive and/or negative weighted graphs $G_j$, where gluing is only allowed between $F(G_p)$ with a positive graph $G_p$ and $F(G_q)$ with a negative graph $G_q$. \subsection{Stable alternating knots and links}\label{7.2} We prove the following theorem that is a generalization of Theorem \ref{thm:6.3}. \begin{thm}\label{thm:7.1} Let $\{F(G_1^+), \cdots, F(G_p^+)\}$ and $\{F(G_1^{-}), \cdots, F(G_q^{-})\}$ be respectively the sets of surfaces representing even positive and negative graphs $\{G_j^+\}$ and $\{G_j^{-}\}$. Suppose $K$ is obtained as the boundary of a Murasugi sum of these surfaces, where gluing is only allowed between surfaces of positive graphs and those of negative graphs. Then $K$ is alternating and real stable. \end{thm} {\it Proof.} From the construction, a Seifert matrix $M$ of $K$ is of the form (\ref{siki:7.3}) below, where $A$ is the direct sum of positive definite symmetric matrices, $C$ is the direct sum of negative definite symmetric matrices, and $B$ is determined by the gluing process of the Murasugi sum. \begin{equation}\label{siki:7.3} M=\left[\begin{array}{cc} A&O\\ B&C \end{array} \right] \end{equation} Then by Proposition \ref{prop:4.9}, $\Delta_K (t)/ \det(M)$ is the characteristic polynomial of a symmetric matrix, and hence $\Delta_K (t)$ is stable. This proves Theorem \ref{thm:7.1}. \qed \subsection{Pseudo-positive or pseudo-negative disk}\label{7.3} Let $G$ be a weighted even plane graph on a disk $D$ that divides $D$ as before. Suppose $G$ is neither positive nor negative. Let $F(G)$ be the surface obtained from $G$. We call $G$ {\it pseudo-positive} (or {\it pseudo-negative}) if the Seifert matrix obtained from $F(G)$ is positive definite (or negative definite). In general, the boundary of $F(G)$ is not alternating. However, we have the following theorem.
\begin{thm}\label{thm:7.2} Let $\{G_i^+\}, 1 \leq i \leq p$ and $\{G_j^-\}, 1 \leq j \leq n$ be the sets of even pseudo-positive and pseudo-negative graphs, respectively. Let $\{F(G_1^+), \cdots, F(G_p^+)\}$ and $\{F(G_1^{-}), \cdots, F(G_n^{-})\}$ be respectively the sets of surfaces representing $\{G_i^+\}$ and $\{G_j^{-}\}$. Suppose $K$ is obtained as the boundary of a Murasugi sum of these surfaces, where gluing is only allowed between surfaces of pseudo-positive graphs and those of pseudo-negative graphs. Then $K$ is real stable. \end{thm} Since the proof is essentially the same as that of Theorem \ref{thm:7.1}, we omit the details. We note that $K$ is not necessarily alternating. \begin{rem}\label{rem:7.3} If an even graph $G$ is neither positive nor negative, then a Seifert matrix $M$ of $F(G)$ may not be positive definite or negative definite. However, we may change at most $s$ weights of $G$ so that $G$ becomes pseudo-positive or pseudo-negative, where $s$ is the first Betti number of $F(G)$, i.e., the rank of $H_1(F(G);\ZZ)$. For details, see Section 12. \end{rem} \subsection{Example}\label{7.4} Let $G_1$ and $G_2$ be even weighted graphs on disks as in Fig. 7.2. Let $F(G)=F(G_1) \ast F(G_2)$, a Murasugi sum of $F(G_1)$ and $F(G_2)$. Then $K= \partial F(G)$ is not stable, but bi-stable, since $\Delta_K (t) = t^6 -4t^4 + 7 t^3 - 4t^2 +1$ has two real and four unit complex zeros. Now change $G_1$ to $G_1^{\prime}$ by giving three new even weights to $G_1$ as shown in Fig. 7.2. Then $G_1^{\prime}$ is pseudo-positive and $K^{\prime} =\partial\bigl(F(G_1^{\prime})\ast F(G_2)\bigr)$ is stable. In fact, $\Delta_{K^{\prime}}(t) = t^6 -15t^5 + 60t^4-93t^3 +60t^2 -15t +1$ is stable. \byouga{7-2}{12}{7.2} \section{Exceptional stable knots and links}\label{8} If the sequence of a continued fraction expansion of $r$ alternates in sign, then $K(r)$ is stable (Theorem \ref{thm:6.1} or {\cite[Theorem 2]{LM}}). However, the converse is not necessarily true.
There are many stable $2$-bridge knots with non-alternating sequences. For example, if $r = [2,-2,-8,2]$, $K(r)$ is stable, since $\Delta_{K(r)}(t) = (2t^2 -5t+2)^2$. Further, for $2$-bridge links, it is possible to systematically construct such exceptional stable links. In this section, we show some of these knots and links. \subsection{Exceptional stable knots}\label{8.1} We begin with the following proposition. \begin{prop}\label{prop:8.1} Let $r=[2a,-2,-2b,2c]$, where $a,b,c > 0$. Then $K(r)$ is stable if $bc \geq 2a(c+1)$. \end{prop} {\it Proof.} First we see that $\Delta_{K(r)}(t)= At^4-Bt^3+Ct^2-Bt+A$, where $A=abc$, $B=4abc+bc-ac+a$ and $C=6abc+2bc-2ac+2a+1$. Consider the modified polynomial $f(x)$ of $\Delta_{K(r)}(t)$, where $f(x)= Ax^2-Bx+(C-2A)$. Since the discriminant $d$ of $f(x)$ is $bc(bc-2a(c+1))+a^2(c-1)^2$, it follows that $d \geq 0$ if $bc \geq 2a(c+1)$. Let $\alpha$ and $\beta$ be the two real zeros of $f(x)$. We claim that $|\alpha|, |\beta| > 2$. In fact, $\alpha$ and $\beta$ are given as $\frac{B \pm \sqrt{d}}{2abc} =2 + {\frac{bc-a(c-1) \pm \sqrt{d}}{2abc}}$. But since $bc \geq 2a(c+1)$, we see that $bc-a(c-1) \geq 0$ and also $(bc-a(c-1))^2 > d$ (indeed, $(bc-a(c-1))^2 - d = 4abc > 0$), and hence $\alpha, \beta > 2$. \qed \begin{ex}\label{ex:8.2} If $a=1, b=4$ and $c=1$, then $K(r)$ is stable. \end{ex} A similar argument can be applied to show the following proposition. \begin{prop}\label{prop:8.3} Let $r=[2a,2b,-2b,-2a], a,b > 0$. Then $K(r)$ is stable if and only if $a \geq 4b$. Here, $f(x)= 1 + 2 a^2 - 4 a b + 4 a^2 b^2 + (-a^2 + 2 a b - 4 a^2 b^2) x + a^2 b^2 x^2$. \end{prop} \begin{ex}\label{ex:8.4} (1) For $r=[8,2,-2,-8]$, $\Delta_{K(r)}(t) = (4t^2-9t+4)^2$ is stable.\\ (2) For $r=[10,2,-2,-10]$, $\Delta_{K(r)}(t) = 25t^4-115t^3+181t^2-115t+25$ is stable. \end{ex} \subsection{Exceptional stable links}\label{8.2} In this sub-section, we study exceptional stable $2$-bridge links. \begin{prop}\label{prop:8.5} Let $r=[2a,2b,-2c]$, $a,b,c > 0$.
Then $K(r)$ is stable if and only if $a \geq c$. \end{prop} {\it Proof.} We see that $\Delta_{K(r)}(t) = (t-1)(At^2-Bt+A)$, where $A=abc$ and $B=2abc+a-c$. Therefore $g(t) =At^2-Bt+A$ has two real zeros if the discriminant $d = (a-c)(4abc+a-c) \geq 0$. Since $4abc+a-c > 0$, the proposition follows easily. \qed Now we construct exceptional stable links systematically. \begin{dfn}\label{dfn:8x1} Let $r=[2a_1, 2a_2,\dots, 2a_n]$ be a sequence of non-zero integers.\\ (1) $N(r)$ denotes $t M(r) - M(r)^T$, where $M(r)$ is the $n \times n$ matrix $\{m_{i,j}\}$ such that for all $k$'s, $m_{k,k}=a_k$, $m_{k,k+1}=1$, and all other entries are $0$. See (\ref{siki:4.1}).\\ (2) We write $-r=[-2a_1, -2a_2,\dots, -2a_n]$, and $r^{-1}=[2a_n, 2a_{n-1},\dots, 2a_1]$. \end{dfn} \begin{lemm}\label{lem:8x2} For a given sequence $r=[2a_1, 2a_2,\dots, 2a_n]$, we have:\\ $\det N(r)=\det N(r^{-1})=(-1)^n \det N(-r)=(-1)^n \det N(-r^{-1})$. \end{lemm} {\it Proof.} This lemma can be proven simply by induction on $n$, but is better understood in terms of Alexander polynomials of 2-bridge knots and links. Actually, $\det N(r)$ coincides with the Alexander polynomial $\Delta_{K(r)}(t)$, and hence $\det N(r), \det N(r^{-1}), \det N(-r)$ and $\det N(-r^{-1})$ coincide up to multiplication by $\pm 1$ and powers of $t$. In this case, the differences are detected by their constant terms, which are respectively $(-1)^n \prod_{i=1}^n a_i,(-1)^n \prod_{i=1}^n a_i, (-1)^n\prod_{i=1}^n (-a_i)$, and $(-1)^n \prod_{i=1}^n (-a_i)$. \qed For two given sequences $r=[2a_1,2a_2, \cdots, 2a_n]$ and $s=[2b_1,2b_2, \cdots, 2b_n]$, and an integer $k$, let $[r,2k,s]$ denote $[2a_1,2a_2, \cdots, 2a_n ,2k,2b_1,2b_2, \cdots, 2b_n]$. \begin{thm}\label{thm:8x3} Given $r = [2a_1,2a_2, \cdots, 2a_n]$, let $T_1 =[r,2k,r]$, $T_2=[r,2k,-r]$, $T_3=[r,2k,r^{-1}]$ and $T_4=[r,2k,-r^{-1}]$. Then for any integer $k \ne 0$, we have:\\ (1) $\Delta_{K(T_i)}(t)= \Delta_{K(r)}(t) f(t)$, where $f(t)$ is an integer polynomial.
($1\le i\le 4$)\\ (2) $\Delta_{K(T_4)}(t) = k(t-1) [\Delta_{K(r)}(t)]^2$. \end{thm} \begin{cor}\label{cor:8x4} $\Delta_{K(T_4)}(t)$ is stable if and only if $\Delta_{K(r)}(t)$ is stable. \end{cor} For the other cases $T_1, T_2$ and $T_3$, $\Delta_{K(T_i)}(t)$ is generally not stable unless the sequence $T_i$ alternates in sign. For example, if $n$ is odd and $r$ is an alternating sequence with $2a_1 > 0$ and $k < 0$, then $K(T_1)$ and $K(T_3)$ are stable. \begin{ex}\label{ex:8x6} (1) Let $s=[4,2,-2]$. By Proposition \ref{prop:8.5}, $K(s)$ is stable. Thus, for $r=[4,2,-2,-4,2,-2,-4]$, $K(r)$ is stable by Theorem \ref{thm:8x3}, and further, for $r^{\prime} = [r,2k, -r^{-1}]$, $K(r^{\prime})$ is also stable, if $k \ne 0$. (2) Let $s=[2,-2,-8,2]$. Then $K(s)$ is stable by Proposition \ref{prop:8.1}. Therefore, for $r=[2,-2,-8,2,4,-2,8,2,-2]$, $K(r)$ is stable. \end{ex} Theorem \ref{thm:8x3} is a corollary of the following lemma. For a given matrix $M$, $M_{ij}$ denotes the matrix obtained from $M$ by deleting the $i^{\rm th}$ row and $j^{\rm th}$ column. \begin{lemm}\label{lem:8x5} For sequences $r=[2a_1,2a_2, \cdots, 2a_n]$ and $s=[2b_1,2b_2, \cdots, 2b_n]$, let $A$ (resp. $B$) denote $t M(r) -M(r)^{T}$ (resp. $t M(s) -M(s)^{T}$), where $M(r)$ and $M(s)$ are as in Definition \ref{dfn:8x1}. Then we have: $\Delta_{K([r, 2k, s])}(t) =\det N([r, 2k, s])= k (t-1)\det A\det B+ t \left( \det A_{n,n}\det B+ \det A \det B_{1,1} \right) $. \end{lemm} {\it Proof of Theorem \ref{thm:8x3}.} (1) If $s$ is equal to $r, r^{-1},-r$ or $-r^{-1}$, then $\det A=\varepsilon \det B$, where $\varepsilon$ equals $1$ or $-1$ according to Lemma \ref{lem:8x2}. Since $\Delta_{K(r)}(t)=\det A$, we have the conclusion by Lemma \ref{lem:8x5}. (2) If $s = -r^{-1}$, then (i) $\det A\det B =(-1)^n \det A \det A$, (ii) $\det A_{n,n} \det B =(-1)^n \det A_{n,n}\det A$, and (iii) $\det A \det B_{1,1} =\det A (-1)^{n-1}\det A_{n,n}$. Therefore, by Lemma \ref{lem:8x5}, we have the conclusion.
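As a quick sanity check of (2) (our own, for the smallest case $n=1$): take $r=[2]$, so that $T_4=[2,2k,-2]$, $\det A=t-1$, $\det B=-(t-1)$, and the empty minors $\det A_{1,1}=\det B_{1,1}=1$. Lemma \ref{lem:8x5} then gives

```latex
\[
\Delta_{K(T_4)}(t)
 = k(t-1)(t-1)\bigl(-(t-1)\bigr)
   + t\bigl(-(t-1)+(t-1)\bigr)
 = -k(t-1)\bigl[\Delta_{K(r)}(t)\bigr]^{2},
\]
```

which agrees with (2) up to the sign $(-1)^n$ (here $n=1$), i.e., up to the usual ambiguity of the Alexander polynomial.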
\qed The following formula is often used in this paper. The proof is an exercise. \begin{prop}\label{prop:LA} Let $A$ and $B$ be square matrices of sizes $n$ and $m$. Let $M$ be the matrix obtained from $A\oplus B$ by changing the $(\alpha,n+\beta)$-entry to $x$ and the $(n+\gamma,\delta)$-entry to $y$, where $1\le \alpha, \delta\le n$ and $1\le \beta,\gamma\le m$. Then we have: \begin{equation}\label{siki:formula} \det M=\det A\det B-(-1)^{\alpha+\beta+\gamma+\delta} xy \det A_{\alpha,\delta}\det B_{\gamma,\beta} \end{equation} \end{prop} {\it Proof of Lemma \ref{lem:8x5}.} By (\ref{siki:formula}), we have the following:\\ \begin{align*} \det N([r,2k,s]) =& \det \left[\begin{array}{cr|c|cc} & & & &\\ \largesymbol{A} & & & & \\ & & t & & \\\hline & -1& k(t-1) & t & \\ \hline & & -1 & &\\ & & & &\\ & & & &\largesymbol{B} \end{array} \right]\\ =& \det A \det \left[\begin{array}{c|cc} k(t-1) & t & \\ \hline -1 & &\\ & &\\ & &\largesymbol{B} \end{array} \right]+t \det A_{n,n}\det B\\ =& \det A \left\{k(t-1) \det B +t \det B_{1,1}\right\}+t \det A_{n,n}\det B\\ =&k(t-1)\det A\det B+t(\det A_{n,n}\det B + \det A \det B_{1,1}) \end{align*} \qed \begin{qu}\label{qu:8.13} To what extent does the stability property of the Alexander polynomials of an alternating knot $K$ reflect the topological properties of $K$? \end{qu} \begin{probl}\label{probl:8.14} Characterize stable alternating knots and links. \end{probl} \section{Interlacing property (I) 2-bridge knots}\label{9} For a series of stable real polynomials, the interlacing property of two sets of zeros is an interesting and important property. In this section, we first prove a simple but useful theorem (Theorem \ref{thm:9.4}). We begin with a definition. \begin{dfn}[{\cite[p.310]{branden}}]\label{dfn:9.1} Let $f, g \in \RR[z]$ be univariate polynomials. Suppose $f, g$ are real stable.
Let $\alpha_1\le \alpha_2\le \dots\le \alpha_n$ and $\beta_1\le\beta_2\le\dots\le \beta_m$ be the zeros of $f$ and $g$, respectively. Then we say that the zeros $\{\alpha_j\}$ and $\{\beta_k\}$ are {\it interlaced} (or we simply say that $f$ and $g$ are {\it interlaced}) if the following conditions are satisfied.\\ (i) $|m - n| \le 1$,\\ (ii) they can be ordered so that (a) if $n= m$, then \begin{center} $\alpha_1\le\beta_1\le\alpha_2\le\beta_2\le\dots\le\alpha_n\le\beta_n$, or\\ $\beta_1\le\alpha_1\le\beta_2\le\alpha_2\le\dots\le\beta_n\le\alpha_n$, \end{center} (b) if $n = m+1$, then \\ \centerline{ $\alpha_1\le\beta_1\le\alpha_2\le\beta_2\le\dots\le\alpha_m\le\beta_m\le\alpha_{m+1}(=\alpha_n)$,} (c) if $m = n+1$, then \\ \centerline{ $\beta_1\le\alpha_1\le\beta_2\le\alpha_2\le\dots\le\alpha_n\le\beta_{n+1}(=\beta_m)$.} \end{dfn} \begin{dfn}[{\cite[p.56]{wag}}]\label{dfn:9.2} For $f, g \in \CC[z]$, we define the {\it Wronskian} $W[f, g]$ as $W[f, g] = f'g - f g'$. For $f (\neq 0), g (\neq 0)\in \RR[z]$, we say that two real stable $f, g$ are in {\it proper position} (denoted by $f\ll g$) if $W[f, g] \le 0$ on all of $\RR$. \end{dfn} If the zeros of $f$ and $g$ are interlaced, then either $W[f,g] \le 0$ or $W[f, g] \ge 0$ on all of $\RR$, and hence $f \ll g$ or $g \ll f$ \cite[p.56]{wag}. \begin{thm}[{\cite[p.57]{wag}}]\label{thm:9.3} (Hermite-Kakeya-Obreschkoff Theorem) Let $f, g\in \RR[z]$. Then all non-zero polynomials in $\{af + bg \mid a,b \in\RR\}$ are real-rooted if and only if (1) $f , g$ are real stable and (2) $f \ll g$, $g \ll f$ or $f = g = 0$. \end{thm} We first prove the following theorem. Although Theorem \ref{thm:9.3} is a strong tool, our proof is not a simple application of it. \begin{thm}\label{thm:9.4} Let $s=[2a_1, -2a_2, \cdots, (-1)^{k-1}2a_k, \cdots, (-1)^{n-1} 2a_n]$, $s^{\prime}=[2a_1, -2a_2, \cdots, (-1)^{n-2}2a_{n-1}]$, where $a_j > 0$, for $1 \leq j \leq n$.
Then $\Delta_{K(s)} (t) \Delta_{K(s^{\prime})}(t)$ is simple, and $\Delta_{K(s)}(t)$ and $\Delta_{K(s^{\prime})}(t)$ are interlaced. \end{thm} Note that by reversing the sequences, the same conclusion of Theorem \ref{thm:9.4} holds for the case $s'=[ -2a_2, 2a_3, \cdots, (-1)^{n-1}2a_n]$. {\it Proof of Theorem \ref{thm:9.4}.} Our proof is by induction. We use Seifert matrices of twisted chain type (\ref{siki:4.1}). Case 1. $n=2$. $\Delta_{K(s')}(t)=a_1(t-1)$ and $\Delta_{K(s)} (t) = -a_{1}a_{2} + (1+2a_{1}a_{2})t -a_{1}a_{2}t^2$. Since $\Delta_{K(s)}(1)=1$, $\Delta_{K(s)}(t)\Delta_{K(s')}(t)$ is simple and $\Delta_{K(s)}(t)$ has two real zeros, $\alpha_1$ and $\alpha_2$ with $\alpha_1 < 1 < \alpha_2$. Case 2. $n=3$. Write $s=[2a,-2b,2c], s'=[2a, -2b], s''=[2a], a,b,c > 0$. Using the Seifert matrix $M = \left[\begin{array}{ccc} a&1&0\\ 0&-b&1\\ 0&0&c \end{array}\right]$ for $K(s)$, we have \begin{align*} \Delta_{K(s)}(t)= &c(t-1) \Delta_{K(s')}(t)+ t \Delta_{K(s'')}(t)\\ =&(t-1)\left\{c \left( -ab+(1+2ab)t-a b t^2 \right) +at\right\}. \end{align*} Consider the two curves $y_1= c\left(-ab+(1+2ab)t-abt^2 \right)$ and $y_2 = -at$. From the observation in case $n=2$, we see that these two curves intersect in two points, say at $t=\beta_1$ and $t=\beta_2$, such that $\beta_1<\alpha_1<1<\alpha_2<\beta_2$. See Fig. 9.1. Since the zeros of $\Delta_{K(s)}(t)$ are $\beta_1, 1$ and $\beta_2$, we have the conclusion for $n=3$. \byouga{9-1}{6}{9.1} Note that for alternating stable knots and links, all the zeros of the Alexander polynomials are positive, because the coefficients are non-zero and have alternating signs. Moreover, since the Alexander polynomials are reciprocal, each zero less than $1$ has its counterpart greater than $1$, and vice versa. Case 3.
General case.\\ Assume inductively that $\Delta_{K(s')}(t)\Delta_{K(s'')}(t)$ is simple and that $\Delta_{K(s')}(t)$ and $\Delta_{K(s'')}(t)$ are interlaced, where for simplicity, we write the sequences as follows: \begin{align*} s&=[2a_1,-2a_2,\dots, (-1)^{n-1}2a_{n}],\\ s'&=[2a_1,-2a_2,\dots, (-1)^{n-2}2a_{n-1}],\\ s''&=[2a_1,-2a_2,\dots, (-1)^{n-3}2a_{n-2}]. \end{align*} Use Seifert matrices $M_s,M_{s'},M_{s''}$ of twisted chain type and set \begin{align*} f_n(t)&:=\det(t M_s -M_s^{T}),\\ f_{n-1}(t)&:=\det(t M_{s'}- M_{s'}^{T}),\\ f_{n-2}(t)&:=\det(t M_{s''}-M_{s''}^{T}). \end{align*} Expanding $\det(t M_s -M_s^{T})$ along the last row and column, we have: \begin{equation} f_n(t)=(-1)^{n-1} a_{n}(t-1) f_{n-1} + t f_{n-2} \end{equation} Consider the two curves $y_1=(-1)^{n-1} a_{n}(t-1) f_{n-1}$ and $y_2=-t f_{n-2}$. We show that $y_1$ and $y_2$ intersect in $n$ points and that (i) $f_n f_{n-1}$ is simple and (ii) $f_n(t)$ and $f_{n-1}(t)$ are interlaced. First, note that the leading coefficients of $y_1$ and $y_2$ have the same sign. Case 3.1. $n$ is even, say $2m$. Fig. 9.2 depicts the case where $m$ is even (in particular, $m=4$). The case where $m$ is odd is similar, and the arguments are the same. By the induction hypothesis $f_{n-1}f_{n-2}$ is simple, and since $K(s')$ is a link and $f_{n-1}$ is simple, $(t-1)$ divides $f_{n-1}$ exactly once. Hence $t=1$ is a double zero of $y_1$. Meanwhile, since $K(s'')$ is a knot, $(t-1)$ does not divide $f_{n-2}$. Also, by the induction hypothesis, $f_{n-1}$ and $f_{n-2}$ have interlaced zeros. If $m$ is even (resp. odd), then the leading coefficients of $y_1$ and $y_2$ are both positive (resp. negative). The sets of the zeros of $f_{n-1}$ and $y_1$ are the same, and the set of the zeros of $y_2$ coincides with that of $f_{n-2}$ with $0$ added. Then the two curves $y_1$ and $y_2$ intersect exactly $n$ times and (i) $f_n f_{n-1}$ is simple and (ii) $f_n$ and $f_{n-1}$ are interlaced, as shown in Fig. 9.2. Since $\deg f_n=n$, we have the conclusion.
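For a concrete instance of this recursion (our own illustration, with all $a_j=1$ and $n=4$, i.e., $s=[2,-2,2,-2]$): $f_2=-t^2+3t-1$ and $f_3=(t-1)(-t^2+4t-1)$, so

```latex
\[
f_4=(-1)^{3}a_4(t-1)f_3+t f_2
   =(t-1)^{2}(t^{2}-4t+1)+t(-t^{2}+3t-1)
   =t^{4}-7t^{3}+13t^{2}-7t+1.
\]
```

Its zeros $0.228\cdots$, $0.544\cdots$, $1.838\cdots$, $4.390\cdots$ interlace with the zeros $2-\sqrt3$, $1$, $2+\sqrt3$ of $f_3$, as claimed.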
\byouga{9-2}{11}{9.2} \byouga{9-3}{11}{9.3} Case 3.2. $n$ is odd, say $2m+1$. Fig. 9.3 depicts the case where $m$ is even (in particular, $m=4$). The case where $m$ is odd is similar, and the arguments are the same. $K(s')$ is a knot, and hence $f_{n-1}(1)\neq 0$. By the induction hypothesis, (i) $f_{n-1}f_{n-2}$ is simple and (ii) $f_{n-1}$ and $f_{n-2}$ have interlaced zeros. If $m$ is even (resp. odd), then the leading coefficients of $y_1$ and $y_2$ are both positive (resp. negative). $y_1$ and $y_2$ have interlacing zeros, except for sharing $1$ in common. Then the two curves $y_1$ and $y_2$ intersect exactly $n$ times and (i) $f_{n}f_{n-1}$ is simple and (ii) $f_n$ and $f_{n-1}$ are interlaced. \qed From the proof of Theorem \ref{thm:9.4}, we obtain the following theorem: \begin{thm}\label{thm:9.5new} Let $s=[2a_1, -2a_2, \cdots, (-1)^{k-1}2a_k, \cdots, (-1)^{n-1} 2a_n]$ and $r=[2a_1, -2a_2, \cdots, (-1)^{k-1}2a_k, \cdots, (-1)^{n-1} 2(a_n-1)]$ (or $r=[2(a_1-1), -2a_2, \cdots, (-1)^{k-1}2a_k, \cdots, (-1)^{n-1} 2a_n], a_1>1$), where $a_j > 0$, for $1 \leq j \leq n$ and $a_n>1$. Let $\{\alpha_j,1\le j\le [\frac{n}{2}]\}$ and $\{\beta_j,1\le j\le [\frac{n}{2}]\}$ be the zeros of $\Delta_{K(r)}(t)$ and $\Delta_{K(s)}(t)$ respectively in $(0,1)$. Then $\{\alpha_j\}$ and $\{\beta_j\}$ are disjoint and interlaced, i.e., $0<\alpha_1<\beta_1<\dots< \alpha_{[\frac{n}{2}]}<\beta_{[\frac{n}{2}]}<1$. \end{thm} {\it Proof.} As in the proof of Theorem \ref{thm:9.4}, $f_n=(-1)^{n-1} a_n (t-1)f_{n-1}+t f_{n-2}$. If we replace $a_n$ by $a_n -1$, the curve $y_1$ is squeezed toward the $t$-axis, while the intersection points with the $t$-axis are fixed. Since $y_2$ does not involve $a_n$, each of the zeros of $f_n$ less than (resp. more than) $1$ is moved toward (but never beyond) its left (resp. right) neighbour. Therefore, we have the conclusion.
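A minimal example (ours): take $s=[2,-4]$ and $r=[2,-2]$, i.e., $n=2$, with $a_2=2$ reduced to $1$. Then

```latex
\[
\Delta_{K(s)}(t)=-2t^{2}+5t-2,\qquad
\Delta_{K(r)}(t)=-t^{2}+3t-1,
\]
```

with zeros in $(0,1)$ given by $\beta_1=\tfrac12$ and $\alpha_1=\tfrac{3-\sqrt5}{2}=0.381\cdots$, so indeed $0<\alpha_1<\beta_1<1$.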
\qed Theorem \ref{thm:9.5new} is generalized as follows: \begin{thm}\label{thm:9.a1} Let $s=[2a_1,-2a_2,\dots, (-1)^{k-1}2a_k, \dots, (-1)^{n-1} 2a_n]$ and $r=[2a_1,-2a_2,\dots, (-1)^{k-1}2(a_{k}-1),\dots, (-1)^{n-1}2a_n]$, where $a_j>0,1\le j\le n$ and $a_k>1,1\le k\le n$. Let $\delta(K)$ be the maximal value of the zeros of $\Delta_{K}(t)$. Then $\delta(K(s))<\delta(K(r))$. \end{thm} {\it Proof.} We prove the theorem for the case where $n$ is even, say, $n=2m$, and $k$ is odd. The same argument works for the other cases. Let $s_1=[2a_1,-2a_2,\dots, (-1)^{k-2}2a_{k-1}]$, $s_2=[(-1)^{k-1}2a_k,\dots,(-1)^{n-1}2a_n]$, $r_2=[(-1)^{k-1}2(a_{k}-1), \dots, (-1)^{n-1}2a_n]$, $s_1'=[2a_1,-2a_2,\dots, (-1)^{k-3}2a_{k-2}]$, $s_2'=[(-1)^k 2a_{k+1},\dots,(-1)^{n-1}2a_n]$. We use Seifert matrices $M(s)$, $M(r)$, $M(s_1)$, $M(s_2)$, $M(r_2)$, $M(s_1')$ and $M(s_2')$ of twisted chain type. Let $f_n=\det(t M(s)-M(s)^T)$, $\widehat{f_n}=\det(tM(r)-M(r)^T)$, $f_{k-1}=\det(tM(s_1)-M(s_1)^T)$, $g_{n-k+1}=\det(tM(s_2)-M(s_2)^T)$, $h_{n-k+1}=\det(tM(r_2)-M(r_2)^T)$, $f_{k-2}=\det(tM(s_1')-M(s_1')^T)$, $g_{n-k}=\det(tM(s_2')-M(s_2')^T)$. Now by Proposition \ref{prop:LA}, we have $f_n=f_{k-1}g_{n-k+1}+t f_{k-2} g_{n-k}$ and $\widehat{f_n}=f_{k-1}h_{n-k+1}+t f_{k-2}g_{n-k}$. Let $\alpha, \beta$ and $\gamma$ be respectively the smallest zeros of $f_{k-1},g_{n-k+1}$ and $h_{n-k+1}$. Then $\gamma<\beta$ by Theorem \ref{thm:9.5new}. Let $y_1=f_{k-1}g_{n-k+1},z_1=f_{k-1}h_{n-k+1}$ and $y_2=-tf_{k-2}g_{n-k}$. We note that the signs of the leading coefficients of $y_1$ and $z_1$ are both $(-1)^{m}$ and that of $f_{k-2}g_{n-k}$ is $(-1)^{m-1}$. Also, the smallest positive zero of $y_2$ is larger than $\alpha$ or $\beta$ by Theorem \ref{thm:9.4}. Consider the intersections of the three curves $y_1,z_1$ and $y_2$. The smallest value of the intersection can be seen from the diagrams below. See Fig. 9.4 for the case $\alpha\le\gamma$, and Fig. 9.5 for the case $\gamma<\alpha$.
\byouga{9-4}{6.5}{9.4} \byouga{9-5}{12}{9.5} In each case, $d(K(r))<d(K(s))$, where $d(K)$ denotes the smallest (positive) zero of $\Delta_{K}(t)$, and hence $\delta(K(r))>\delta(K(s))$. \qed As an immediate consequence of Theorem \ref{thm:9.a1}, we have the following theorem: \begin{thm}\label{thm:9.a3} Let $F_n$ be the set of stable 2-bridge knots or links $K(r)$ with $r=[2a_1,-2a_2,\dots, (-1)^{n-1}2a_n], a_j>0, 1\le j\le n$. Then $\delta(K(r))$ is maximal if and only if $r=[2,-2,\dots, (-1)^{n-1} 2]$, i.e., $K(r)$ is fibred. \end{thm} \begin{ex}\label{ex:9.9} (1) Let $s=[4,-2,2,-6,4,-2], s'=[4,-2,2,-6,4]$. Then the zeros of $\Delta_{K(s)}$ are approximately $\{0.2866, 0.4550, 0.7654, 1.3065, 2.1976, 3.4888\}$ and those of $\Delta_{K(s')}$ are approximately $\{0.2877, 0.6179, 1.0000, 1.6183, 3.4761\}$.\\ (2) Let $s=[4,-2,2,-6,4,-2,4,-4], s'=[4,-2,2,-6,4,-2,4]$.\\ Then the zeros of $\Delta_{K(s)}$ are approximately\\ $\{ 0.2857, 0.3535, 0.6148, 0.8171, 1.2237, 1.6265, 2.8287, 3.4999\}$ and those of $\Delta_{K(s')}$ are approximately $\{0.2859, 0.3716, 0.6772, 1.0000, 1.4767, 2.6913, 3.4973\}$. \end{ex} \begin{rem}\label{rem:9.10} For exceptional stable 2-bridge knots or links, the interlacing property may not hold. For example, let $s = [10,2,-2,-10]$ and $s'=[10,2,-2]$. Then $K(s)$ and $K(s')$ are both stable, but $\Delta_{K(s)}(t)$ and $\Delta_{K(s')}(t)$ are not interlaced. In fact, the zeros of $\Delta_{K(s)}(t)$ are approximately $\{0.4923, 0.7592, 1.3172, 2.0313\}$, while those of $\Delta_{K(s')}(t)$ are approximately $\{0.4202, 1, 2.3797\}$. \end{rem} \section{Interlacing property (II) Quasi-rational knots $\{X_n\}$}\label{10} In the following two sections, we prove the interlacing property for the two series of alternating stable knots $\{X_n\}$ and $\{Y_{2n+1}\}$ considered in Section 6.3. The idea of our proof is similar to the proof of Theorem \ref{thm:9.4}, but requires considerably more computation.
The first series of knots $\{X_n\}$ shows that the zeros of the Alexander polynomials of alternating knots are unbounded. On the other hand, the Alexander polynomial of each knot in the second series of knots $\{Y_{2n+1}\}$ is not irreducible. Nevertheless, the maximal value of the zeros of a factor (of degree 4) of the Alexander polynomial of $Y_3$ is quite likely equal to $\delta_6$ defined in Section 5.1. Now consider a series of stable knots $X_n (a,b) = X_n (2a_1 ,\cdots, 2a_n \mid -2b_1 , \cdots, -2b_n ), a_j ,b_j > 0, 1 \leq j \leq n$. We prove \begin{thm}\label{thm:10.1} (1) $X_n (a,b)$ is a stable alternating knot of genus $n$. (2) Let $K_n = X_n (2,2, \cdots, 2|-2,-2, \cdots, -2)$. Then for $n \geq 2$, $\Delta_{K_n}(t)\Delta_{K_{n-1}}(t)$ is simple and $\Delta_{K_n}(t)$ and $(t-1)\Delta_{K_{n-1}}(t)$ are interlaced. (3) Let $\alpha_n$ be the maximal value of the zeros of $\Delta_{K_n}(t)$. Then $\alpha_n \geq n+1$. \end{thm} We suspect that (2) holds for $X_n (a,b)$ with any $a_j > 0$ and $b_j>0, 1 \leq j \leq n$. Therefore we conjecture: \begin{yosou}\label{conj:10.2} Let $a_j > 0$ and $b_j > 0, 1 \leq j \leq n$. Then for $n\ge 2$, $\Delta_{X_n}(t)\Delta_{X_{n-1}}(t)$ is simple, and $\Delta_{X_n}(t)$ and $(t-1)\Delta_{X_{n-1}}(t)$ are interlaced. \end{yosou} Now since (1) and (3) are already proved in Theorem \ref{thm:6.6} (1), we prove only (2) in this section. For simplicity, we denote by $G(n)$ the normalization of $\Delta_{K_n}(t)$. As the first step, we prove \begin{prop}\label{prop:10.3} Let $\lambda (t) = 2t^2 -5t + 2$. (1) For $n \geq 2, G(n) = \lambda (t) G(n-1) - (t-1)^4 G(n-2)$. (2) For $n \geq 0, \lambda(t) \mbox{$\not|\,$} G(n)$. (3) For $n \geq 0, (t-1) \mbox{$\not|\,$} G(n)$, where we define $G(0) = 1$. \end{prop} {\it Proof.} Since $K_n$ is a knot, (3) holds trivially. Now Proposition \ref{prop:10.3} holds for $n=1$ and $2$.
In fact, $K_1 = 4_1$ and $K_2 = 8_{12}$, and $\Delta_{K_2}(t) = t^4 -7t^3 + 13t^2 - 7t +1 =(2t^2 -5t+ 2) (t^2 -3t+1) -(t-1)^4 = \lambda(t) G(1) -(t-1)^4 G(0)$. Suppose $n \geq 3$. A Seifert matrix $M$ of $K_n$ is given in the proof of Theorem \ref{thm:6.6} (1). It is of the form $M = \tbt{E_n}{O}{C}{-E_n}$, where $C$ is the lower triangular matrix defined in the proof of Theorem \ref{thm:6.6} (1). Since $\Delta_{K_n} (t) = \det [Mt -M^{T} ]$, we see that $G(n) = \det N$, where $N = \tbt{(t-1) E_n}{ C^{T}}{Ct}{ (t-1)E_n}$. To compute $G(n)$, first expand $\det N$ along the $n^{\mathrm th}$ row; we obtain by Proposition \ref{prop:LA} that $G(n) = \det N = (t-1) \det N_1 - t G(n-1)$, where $N_1 = \tbt{(t-1)E_{n-1}}{C_1^{T}}{C_1 t}{(t-1) E_n}$ and $C_1$ is an $n \times (n-1)$ matrix of the form: \begin{center} $C_1= \left[ \begin{array}{rrrcr} 1&\multicolumn{4}{c}{}\\ 1&1&\multicolumn{3}{c}{\largesymbol{O}}\\ 1&1&1&\multicolumn{2}{c}{}\\ \multicolumn{3}{c}{}&\ddots& \\ 1&1&1&\cdots&1\\ 1&1&1&\cdots&1 \end{array} \right] $ \end{center} Next, subtract the next-to-last row of $\det N_1$ from the last row, then subtract the next-to-last column from the last column, and then expand along the last row; we obtain $\det N_1 = 2(t-1)G(n-1) -(t-1)^3 G(n-2)$. Therefore, \begin{align*} G(n) &= (t-1) \det N_1 - t G(n-1)\\ &= (t-1)\{2(t-1) G(n-1) -(t-1)^3 G(n-2)\}- t G(n-1)\\ &=\{2(t-1)^2 -t\}G(n-1) -(t-1)^4 G(n-2)\\ &= \lambda(t) G(n-1) - (t-1)^4 G(n-2). \end{align*} This proves (1). Using (1), (2) follows by induction. \qed To prove Theorem \ref{thm:10.1} (2), let $\alpha_1 < \alpha_2 < \cdots < \alpha_{n-1} < 1$ be the zeros of $(t-1) G(n-1)$ in $[0,1]$ and let $\beta_1 < \beta_2 < \cdots < \beta_n$ be the zeros of $G(n)$ in $[0,1]$. Then it suffices to prove the following proposition. \begin{prop}\label{prop:10.4} (1) For $n\ge 1$, $G(n)G(n-1)$ is simple.
(2) $\{\alpha_j, 1 \leq j \leq n-1\}\cup \{1\}$ and $\{\beta_j,1 \leq j \leq n\}$ are interlaced, namely, \begin{equation}\label{siki:10.1} \beta_1 < \alpha_1 < \beta_2 < \alpha_2 < \cdots < \alpha_{n-1} < \beta_n < 1. \end{equation} \noindent (3) (a) If $n$ is even, say $2m$, then $\alpha_m < \frac{1}{2} < \beta_{m+1}$, and (b) if $n$ is odd, say $2m+1$, then $\beta_{m+1} <\frac{1}{2} < \alpha_{m+1}$. \end{prop} We note that $\frac{1}{2}$ (and $2$) are the zeros of $\lambda(t)$, and also that we need (3) to show (2) by induction. {\it Proof.} We use induction. Case $n=1$. The zero of $G(1) = t^2 -3t +1$ in $[0,1]$ is $\beta_1 = 0.38\cdots$. Since $G(0) = 1$, we have $\beta_1 < 1$ and further $\beta_1 < \frac{1}{2}$. This proves Proposition \ref{prop:10.4} when $n=1$. Case $n=2$. The zeros of $G(2)$ in $[0,1]$ are $\beta_1 =0.228\cdots$ and $\beta_2 = 0.5449\cdots$, while the zeros of $(t-1) G(1)$ in $[0,1]$ are $\alpha_1 = 0.38\cdots$ and $1$. Therefore, $\beta_1 < \alpha_1 < \beta_2 < 1$ and $\alpha_1 < \frac{1}{2} < \beta_2$. Case $n \geq 3$. Suppose $G(n)G(n-1)$ is simple. Let $\{\alpha_j, 1 \leq j \leq n-1\}\cup \{1\}$ be the zeros of $(t-1)G(n-1)$ in $[0,1]$ and $\{\beta_j ,1 \leq j \leq n\}$ be the zeros of $G(n)$ in $[0,1]$. Inductively, we assume that they are interlaced, namely, \begin{equation}\label{siki:10.2} \beta_1 < \alpha_1 < \beta_2 < \alpha_2 < \cdots < \alpha_{n-1} < \beta_n < 1. \end{equation} Consider the zeros of $G(n+1)$ in $[0,1]$. Since $G(n+1) = \lambda(t) G(n) - (t-1)^4 G(n-1)$, the zeros of $G(n+1)$ in $[0,1]$ are determined by the intersection of the two curves $y_1 = \lambda(t) G(n)$ and $y_2 = (t-1)^4 G(n-1)$. Note that $\lambda(t) G(n)$ is simple, and since $(t-1) \mbox{$\not|\,$} G(n-1)$, $1$ is a zero of $y_2$ of order exactly $4$. Case (a) $n+1$ is even, say $2m$.
Then by induction, since $n$ is odd, we have \begin{align}\label{siki:10.3} \beta_1 &< \alpha_1 < \beta_2 < \alpha_2 < \cdots < \beta_m < \frac{1}{2} < \alpha_m \nonumber\\ &< \beta_{m+1} < \cdots < \alpha_{n-1} < \beta_n <1. \end{align} Note that $y_1(0)= 2$ and $y_2 (0) =1$, since $G(k)$ is monic for all $k \geq 1$. When $m$ is even (resp. odd), the two curves are depicted in Fig. 10.1 top (resp. bottom). \byouga{10-1}{10}{10.1} There are exactly $(n+1)$ points of intersection $\{\gamma_j, 1 \leq j \leq n+1\}$ in $[0,1]$, which are the zeros of $G(n+1)$ in $[0,1]$: \begin{align*} \gamma_1 &< \beta_1 < \gamma_2 < \beta_2 < \gamma_3 < \cdots < \gamma_m < \beta_m < \frac{1}{2} < \gamma_{m+1} \nonumber\\ &< \beta_{m+1} < \cdots < \gamma_n < \beta_n < \gamma_{n+1} < 1. \end{align*} Thus $G(n+1)G(n)$ is simple, $\{\gamma_j \}$ and $\{\beta_j \}\cup \{1\}$ are interlaced, and $\beta_m < \frac{1}{2} < \gamma_{m+1}$. Proposition \ref{prop:10.4} is now proved for this case. Case (b) $n+1$ is odd, say $2m+1$. Then by induction, since $n$ is even, we see \begin{align*} \beta_1 &< \alpha_1 < \beta_2 < \alpha_2 < \cdots < \alpha_m < \frac{1}{2} < \beta_{m+1} \nonumber\\ &< \alpha_{m+1} < \cdots < \alpha_{n-1} < \beta_n < 1. \end{align*} When $m$ is even (resp. odd), the two curves are depicted in Fig. 10.2 top (resp. bottom). \byouga{10-2}{10}{10.2} There are $(n+1)$ points of intersection $\{\gamma_j, 1 \leq j \leq n+1\}$ in $[0,1]$ and \begin{align*} \gamma_1 &< \beta_1 < \gamma_2 < \beta_2 < \cdots < \beta_m < \gamma_{m+1} < \frac{1}{2} < \beta_{m+1} \nonumber\\ &< \gamma_{m+2} < \cdots < \beta_n < \gamma_{n+1} < 1. \end{align*} This covers all cases, and the proof is complete. \qed \begin{ex} Given below is the list of the zeros of $G(k)$ for $k=1,2,\dots,8$. The table is consistent with Theorem \ref{thm:10.1}.
\end{ex} $ \begin{array}{c} k=1 \\ \hline 0.382 \\ 2.618 \end{array} \quad \begin{array}{c} k=2 \\ \hline 0.228 \\ 0.544 \\ 1.838 \\ 4.390 \end{array} \quad \begin{array}{c} k=3 \\ \hline 0.145 \\ 0.458 \\ 0.578 \\ 1.730 \\ 2.186 \\ 6.904 \end{array} \quad \begin{array}{c} k=4 \\ \hline 0.098 \\ 0.382 \\ 0.526 \\ 0.591 \\ 1.692 \\ 1.900 \\ 2.618 \\ 10.193 \end{array} \quad \begin{array}{c} k=5 \\ \hline 0.070 \\ 0.320 \\ 0.474 \\ 0.557 \\ 0.597 \\ 1.674 \\ 1.797 \\ 2.109 \\ 3.129 \\ 14.273 \end{array} \quad \begin{array}{c} k=6 \\ \hline 0.052 \\ 0.269 \\ 0.426 \\ 0.519 \\ 0.573 \\ 0.601 \\ 1.664 \\ 1.746 \\ 1.927 \\ 2.349 \\ 3.719 \\ 19.155 \end{array} \quad \begin{array}{c} k=7 \\ \hline 0.040 \\ 0.228 \\ 0.382 \\ 0.481 \\ 0.544 \\ 0.582 \\ 0.603 \\ 1.658 \\ 1.717 \\ 1.838 \\ 2.077 \\ 2.618 \\ 4.390 \\ 24.841 \end{array} \quad \begin{array}{c} k=8 \\ \hline 0.032 \\ 0.194 \\ 0.343 \\ 0.446 \\ 0.515 \\ 0.560 \\ 0.589 \\ 0.605 \\ 1.654 \\ 1.699 \\ 1.786 \\ 1.943 \\ 2.242 \\ 2.915 \\ 5.144 \\ 31.333 \end{array} $ \section{Interlacing property (III) Quasi-rational knots $\{Y_{2n+1}\}$} \label{11} In this section, we discuss a slightly different kind of interlacing property for the second series of stable alternating knots. The Alexander polynomials of knots in this series may have multiple zeros and thus are not irreducible. Nevertheless, one of their zeros attains the largest value among alternating knots of up to 12 crossings. Let $Y_{2n+1}(2a_1, 2a_2, \cdots, 2a_{2n+1}\mid 2b_1, 2b_2, \cdots, 2b_{2n+1})$ be a quasi-rational knot obtained as the boundary of the surface constructed in Example \ref{ex:6.5}. Define a series of alternating quasi-rational knots $Y_{n}$ by $Y_{2n-1}( \underbrace{-2, -2, \cdots, -2}_{2n-1} | \underbrace{2, 2, \cdots, 2}_{2n-1})$. Then $Y_1$ is the knot $4_1$ and $Y_2$ is the 12 crossing alternating knot $12_{a1124}$. We note that the largest zero of $Y_2$ is $7.69853\cdots$, which attains the largest real part among all the zeros of Alexander polynomials of alternating knots up to 12 crossings.
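This value can be checked numerically. By Proposition \ref{prop:11.6} below, the normalized Alexander polynomial of $Y_2$ factors as $h_2(t)=\mu(t)g_2(t)$ with $g_2(t)=1-10t+19t^2-10t^3+t^4$ (see Example \ref{ex:11.9}); since the largest zero of $\mu(t)=1-3t+t^2$ is $(3+\sqrt5)/2\doteq 2.618$, the largest zero of $h_2$ is the largest zero of $g_2$. A minimal numerical sketch (the helper names are ours):

```python
# Largest zero of g_2(t) = 1 - 10t + 19t^2 - 10t^3 + t^4, which is also
# the largest zero of the Alexander polynomial h_2 of Y_2 = 12_{a1124}.

def g2(t):
    return 1 - 10*t + 19*t**2 - 10*t**3 + t**4

def bisect(f, lo, hi, iters=100):
    """Locate a zero of f in [lo, hi] by bisection; f(lo) and f(hi)
    must have opposite signs."""
    assert f(lo) * f(hi) < 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# g_2 is reciprocal of degree 4 with all zeros real; it changes sign on
# [7, 8] and has no zero beyond that one.
largest = bisect(g2, 7.0, 8.0)
print(round(largest, 5))  # 7.69853
```

The same bisection applied to $\mu$ recovers its larger zero $2.618\cdots$, confirming that $g_2$ carries the dominant zero.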
\subsection{Conway polynomials of $Y_n$}\label{11.1} In this subsection, we give inductive formulae for the Conway polynomials of $Y_n$. (For the Conway polynomial, see \cite{Li}.) Denote by $W_n$ the link obtained from $Y_n$ by removing the band corresponding to the middle vertical edge of the graph in Fig. 6.3. Denote by $c_n(z)$ (resp. $d_n(z)$) the Conway polynomial of $Y_n$ (resp. $W_n$). For simplicity, we write a function $f_n(z)$ of $z$ as $f_n$. \begin{prop}\label{prop:11.1} Let $a(z)=1-z^2$, $b(z)=1+z^2$. Then we have: \begin{align}\label{siki:11.1} &c_1=a, \nonumber\\ &c_2=a(a^2-4z^2), \ and\ for \ n\ge 1, \nonumber\\ &c_{n+2} = a^2 (2 c_{n+1} - b^2 c_{n}). \end{align} \end{prop} {\it Proof.} By (\ref{siki:11.2}) below, we can write $d_{n}$ (resp. $d_{n+1}$) in terms of $c_n$ and $c_{n+1}$ (resp. $c_{n+1}$ and $c_{n+2}$). Substituting these into (\ref{siki:11.3}), we have the conclusion. Using skein trees, we prove Lemma \ref{lem:11.2} at the end of this subsection. \qed \begin{lemm}\label{lem:11.2} \begin{align} & d_1=z,\ and\ for\ n\ge 1, \nonumber\\ & \label{siki:11.2} c_{n+1}=(3z^4-4z^2+1)c_{n}+2z(z^4-1)d_{n}\\ & \label{siki:11.3} d_{n+1}=2z(1-z^2) c_{n}+(1-z^4)d_{n}. \end{align} \end{lemm} We know that the Conway polynomials $c_n$ have a factor $a=1-z^2$. In the proposition below, we determine the exponent of $a$ in $c_n$. \begin{prop}\label{prop:11.3} Let $f_{2m-1}=\dfrac{c_{2m-1}}{a^{2m-1}}, f_{2m}=\dfrac{c_{2m}}{a^{2m-1}}$. Then for $m\ge1$, we have the following, where $a(z)=1-z^2$, $b(z)=1+z^2$. \begin{align} \label{siki:11.4} & f_{2m-1}, f_{2m} \in \ZZ[z],\\ \label{siki:11.5} & a\not\vert\ f_{2m-1}\ and\ a\not\vert\ f_{2m},\\ \label{siki:11.6} & f_{2m+1}=2f_{2m}-b^2 f_{2m-1},\\ \label{siki:11.7} & f_{2m+2}=2a^2 f_{2m+1}-b^2 f_{2m}. \end{align} \end{prop} {\it Proof.} First we prove (\ref{siki:11.4}) by induction. For $c_1$ and $c_2$, the claim is trivial. Assume that the claim holds up to $2m$.
By (\ref{siki:11.1}) and the induction hypothesis, we have \begin{align} c_{2m+1}&=a^2(2 c_{2m}-b^2 c_{2m-1})\nonumber\\ &=a^2(2a^{2m-1}f_{2m}-b^2a^{2m-1}f_{2m-1})\nonumber\\ \label{siki:11.8} &=a^{2m+1}(2 f_{2m}-b^2 f_{2m-1}), \end{align} and hence $c_{2m+1}$ has the factor $a^{2m+1}$. We also have \begin{align} c_{2m+2}&=a^2(2 c_{2m+1}-b^2c_{2m})\nonumber\\ &=a^2(2 a^{2m+1} f_{2m+1}-b^2 a^{2m-1} f_{2m})\nonumber\\ \label{siki:11.9} &=a^{2m+1}(2a^2 f_{2m+1}-b^2 f_{2m}), \end{align} and hence $c_{2m+2}$ has the factor $a^{2m+1}$. Therefore, we have (\ref{siki:11.4}). By (\ref{siki:11.8}) and (\ref{siki:11.9}), we have (\ref{siki:11.6}) and (\ref{siki:11.7}). Finally, we prove (\ref{siki:11.5}), using the fact that if a polynomial $f(z)$ is divisible by $a=1-z^2$, then $f(1)=0$. Let $e_n=f_n(1)$; we prove that $e_k\neq 0$ for $k\ge 1$. By putting $z=1$ in (\ref{siki:11.6}) and (\ref{siki:11.7}), we have \begin{align} \label{siki:11.10} e_{2m+1}&=2e_{2m}-4e_{2m-1},\\ \label{siki:11.11} e_{2m+2}&=-4e_{2m}. \end{align} Since $e_1=1, e_2=-4$, by (\ref{siki:11.11}) we have $e_{2m}=(-4)^{m}$ and hence $e_{2m}\neq 0$. Then by (\ref{siki:11.10}), $e_{2m+1}=2(-4)^{m}-4e_{2m-1}$ and hence $e_{2m+1}$ is positive (resp. negative) if $m$ is even (resp. odd); in any case, $e_{2m+1}\neq 0$. In fact, we see that $e_{2m-1}=(-4)^{m-1}(2m-1)$. \qed The following lemma is used to prove Lemma \ref{lem:11.2}. Equation (\ref{siki:11.12}) expresses the relation among the Conway polynomials of three knots or links that differ only locally, as depicted in the diagrams. In Lemma \ref{lem:11.4} and its proof, we adopt such a convention. Lemma \ref{lem:11.4} below reduces the Conway polynomial of a diagram with two or more parallel bands (of positive writhe) to those with one and zero bands.
\begin{lemm}\label{lem:11.4} \begin{align} \label{siki:11.12} \incskein{11}&=2z\incskein{1}-z^2\incskein{none}\\ \label{siki:11.13} \incskein{111}&=3z^2\incskein{1}-2z^3\incskein{none} \end{align} \end{lemm} {\it Proof.} First, we have $\incskein{1}=\incskein{0}+z\incskein{none}$ and hence $\incskein{0}=\incskein{1}-z\incskein{none}$. \\ \noindent $\incskein{11}=\incskein{01}+z\incskein{1} =\incskein{00}+z\incskein{0}+z\incskein{1} =z\Bigl(\incskein{1}-z\incskein{none}\Bigr)+z\incskein{1}$, where we use that $\incskein{00}$ represents a split link, so its Conway polynomial vanishes.\\ \noindent $\incskein{111}=2z\incskein{11}-z^2\incskein{1} =2z\Bigl(2z \incskein{1}-z^2\incskein{none}\Bigr) -z^2\incskein{1}$.\\ In fact, we can inductively prove $\underbrace{\incskein{1}\cdots\incskein{1}}_{n}= nz^{n-1}\incskein{1}-(n-1)z^n\incskein{none}$. \qed To prove Lemma \ref{lem:11.2} using skein trees, notice the following (see Figure 11.1). If a band $b_1$ crosses over exactly one other band $b_2$, then untwisting $b_1$ results in cutting both $b_1$ and $b_2$. If a band $b_1$ crosses over exactly two bands $b_2$ and $b_3$, then untwisting $b_1$ results in removing $b_1$ and merging the bands $b_2$ and $b_3$. \begin{minipage}{1\hsize} \centerline{ (1)\ \raisebox{-13pt}{\inclf{cut}{2.5} } \hspace{0.8cm} (2)\ \raisebox{-13pt}{\inclf{merge}{2.5}} } \centerline{Fig. 11.1} \vspace*{2mm} \end{minipage} {\it Proof of Lemma \ref{lem:11.2}.} We depict skein trees for the $Y_n$'s and $W_n$'s. The knots $Y_n$ and links $W_n$ are represented by the diagrams in Figures 11.2 through 11.5. Each dotted arc indicates the site where an arc is removed, and hence dotted arcs are not counted as arcs. Horizontal arcs are assigned weight $-2$ and non-horizontal ones weight $2$, except for the ones with label $4$ in Fig. 11.4.
Recall that a diagram represents a knot or link on the boundary of a Seifert surface obtained from a disk by attaching twisted bands along the arcs, where horizontal arcs contribute bands on the top side, and non-horizontal arcs on the back side. First, we prove (\ref{siki:11.3}). We have a skein tree as in Fig. 11.2, where the sites of crossing changes and splicing are marked with $*$. In the first step, we use Fig. 11.1 (1). \begin{minipage}{1\hsize} \vspace*{3mm} \centerline{\inclf{ws}{9}} \centerline{Fig. 11.2} \vspace*{3mm} \end{minipage} On the left of the bottom row, we have two parallel bands. We apply Lemma \ref{lem:11.4}, but since the writhe is negative, we replace $z$ by $-z$. See Fig. 11.3. Then we have (\ref{siki:11.3}). \begin{minipage}{1\hsize} \vspace*{3mm} \centerline{\inclf{wsb}{7}} \centerline{Fig. 11.3} \vspace*{3mm} \end{minipage} Next, we prove (\ref{siki:11.2}). We have a skein tree as in Fig. 11.4. In the first step, we use Fig. 11.1 (2) and obtain a band with two full-twists. On the way, we have the connected sum of $Y_n$ and the 2-bridge link $K(\frac{1}{4})$, whose Conway polynomial is equal to that of $Y_n$ multiplied by $-2z$. \begin{minipage}{1\hsize} \vspace*{3mm} \centerline{\inclf{ys}{9}} \centerline{Fig. 11.4} \vspace*{3mm} \end{minipage} On the left of the third row, we have three parallel bands, and apply Lemma \ref{lem:11.4} with $z$ replaced by $-z$. See Fig. 11.5. Then we have (\ref{siki:11.2}).\\ This completes the proof of Lemma \ref{lem:11.2}. \qed \begin{minipage}{1\hsize} \vspace*{3mm} \centerline{\inclf{ysb}{7}} \centerline{Fig. 11.5} \vspace*{3mm} \end{minipage} \subsection{Alexander polynomials of $Y_n$}\label{11.2} In the previous subsection we investigated the Conway polynomial $c_n(z)$ of $Y_n$. Now we translate $c_n(z)$ into the normalized Alexander polynomial $h_n(t)$ of $Y_n$. By Proposition \ref{prop:11.3}, we inductively have the following: \begin{cor}\label{cor:11.5} $\deg f_1(z)=0,\deg f_2(z)=4$.
For $m>1$, $\deg f_{2m-1}(z)=4(m-1),\deg f_{2m}(z)=4m$. The leading coefficient of $f_k$ is equal to $1$, for $k\ge 1$. \end{cor} The Alexander polynomial of $Y_n$ is obtained by putting $z=\sqrt{t}-\frac{1}{\sqrt{t}}$ in $c_n(z)$. \begin{prop}\label{prop:11.6} Let $h_n(t)$ be the normalized Alexander polynomial of $Y_n$. Let $\mu(t)=1-3t+t^2$ and $\rho(t)=1-t+t^2$. Let $g_{2m-1}(t)=f_{2m-1}(\sqrt{t}-\frac{1}{\sqrt{t}})t^{2(m-1)}$ and $g_{2m}(t)=f_{2m}(\sqrt{t}-\frac{1}{\sqrt{t}})t^{2m}$. Then we have: \begin{align} h_{2m-1}(t)&= \mu(t)^{2m-1} g_{2m-1}(t),\\ h_{2m}(t)&= \mu(t)^{2m-1} g_{2m}(t). \end{align} Moreover, $\mu(t)\not\vert\ g_{2m-1}(t)$ and $\mu(t)\not\vert\ g_{2m}(t)$. \end{prop} \begin{prop}\label{prop:11.7} $\deg g_{2m-1}(t)=4(m-1)$, $\deg g_{2m}=4m$, and the $g_k(t)$'s are monic and reciprocal. \begin{align} \label{siki:11.16} & g_1=1, \quad g_2=1-10t+19t^2-10t^3+t^4,\\ \label{siki:11.17} & g_{2m+1}=2 g_{2m}-\rho(t)^2 g_{2m-1},\\ \label{siki:11.18} & g_{2m+2}=2\mu(t)^2 g_{2m+1}-\rho(t)^2 g_{2m}. \end{align} \end{prop} Now we study the interlacing property of $g_{k}$ and $g_{k+1}$. The following is the main theorem of this section. \begin{thm}\label{thm:11.8} For $m\ge1$, we have the following, where $\mu_0\doteq0.382$ is the zero of $\mu(t)=1-3t+t^2$ in $[0,1]$.\\ (1) $g_{2m-1}g_{2m}$ is simple,\\ (2) $(t-1)\mu(t)g_{2m-1}$ and $g_{2m}$ are interlaced.\\ Namely, let $\alpha_1 < \alpha_2 < \cdots < \alpha_{2(m-1)}$ be the zeros of $g_{2m-1}$ in $[0,1]$, and $\beta_1 < \beta_2 < \cdots < \beta_{2m}$ be the zeros of $g_{2m}$ in $[0,1]$. Then,\\ $ \beta_1<\alpha_1<\beta_2 <\dots < \alpha_{m-1} < \beta_m < \mu_0 < \beta_{m+1}< \alpha_{m} <\dots< \alpha_{2(m-1)}<\beta_{2m}<1$.\\ (3) $g_{2m}g_{2m+1}$ is simple,\\ (4) $(t-1)g_{2m}$ and $\mu(t)g_{2m+1}$ are interlaced. Namely, let $\gamma_1 < \gamma_2 < \cdots < \gamma_{2m}$ be the zeros of $g_{2m+1}$ in $[0,1]$. Then,\\ $ \gamma_1<\beta_1<\gamma_2 <\dots < \gamma_{m} < \beta_m < \mu_0 < \beta_{m+1}< \gamma_{m+1} <\dots< \beta_{2m}<\gamma_{2m}<1$.
\end{thm} \begin{ex}\label{ex:11.9} We list the $g_k$'s with $k$ up to 6. Note that in general, $g_k(0)=g_k(1)=1$. \begin{align*} g_1=&1, g_2=1 - 10 t + 19 t^2 - 10 t^3 + t^4, g_3=1-18 t+35 t^2-18 t^3+t^4\\ g_4=&1-36 t+266 t^2-784 t^3+1107 t^4-784 t^5+266 t^6-36 t^7+t^8\\ g_5=&1 - 52 t + 458 t^2 - 1424 t^3 + 2035 t^4 - 1424 t^5 + 458 t^6 - 52 t^7 + t^8\\ g_6=&(1 - 10 t + 19 t^2 - 10 t^3 + t^4) (1 - 68 t + 522 t^2 - 1552 t^3 + 2195 t^4 \\ &- 1552 t^5 + 522 t^6 - 68 t^7 + t^8) \end{align*} Fig. 11.6 depicts adjacent pairs of $g_k$'s. \begin{center} \inclf{exin11}{12} \centerline{Fig. 11.6} \end{center} \end{ex} {\it Proof of Theorem \ref{thm:11.8}.} We prove the theorem by induction. The zeros of $g_2$ in $[0,1]$ are approximately $0.129$ and $0.662$, and those of $g_3$ are approximately $0.063$ and $0.765$. See Fig. 11.7. Since $\mu_0\doteq 0.38$, the claim is true for the pairs $(g_1,g_2)$ and $(g_2,g_3)$; i.e., $g_1 g_2$ is simple and $\mu(t)(t-1)g_1$ and $g_2$ are interlaced, and likewise $g_2 g_3$ is simple and $(t-1)g_2$ and $\mu(t) g_3$ are interlaced. \begin{center} \inclf{11-7}{5} \centerline{Fig. 11.7} \end{center} For the general case, we first fix $m$ and assume (1) and (2) of the statement of Theorem \ref{thm:11.8}, prove (3) and (4) for that $m$, and then prove (1) and (2) with $m$ replaced by $m+1$. By Proposition \ref{prop:11.7}, $g_{2m+1}=2g_{2m}-(1-t+t^2)^2g_{2m-1}$. So we examine the intersection points of $y_1(t)=(1-t+t^2)^2g_{2m-1}$ and $y_2(t)=2 g_{2m}$. See Fig. 11.8 for $m$ odd (here $m=5$). For the case of $m$ even, the figure is similar. Since $0<1-t+t^2\leq 1$ in $[0,1]$, $y_1$ and $g_{2m-1}$ have the same real zeros, and the graph of $y_1$ is obtained from that of $g_{2m-1}$ by moving it toward the $x$-axis, fixing the points of intersection with the $x$- and $y$-axes. By assumption, $\{$zeros of $y_1\}\cup\{\mu_0\}\cup\{1\}$ and $\{$zeros of $y_2\}$ are interlaced.
Since $g_{2m-1}(0)=g_{2m-1}(1)=g_{2m}(0)=g_{2m}(1)=1$, we have $y_1(0)=y_1(1)=1, y_2(0)=y_2(1)=2$. Let $\gamma_1<\cdots<\gamma_{2m}$ be the zeros of $g_{2m+1}$ in $[0,1]$. Then we see that $\gamma_1<\beta_1<\alpha_1<\dots< \alpha_{m-1}<\gamma_{m}<\beta_{m}<\mu_0< \beta_{m+1}<\gamma_{m+1}<\alpha_m<\beta_{m+2}<\gamma_{m+2}< \dots<\alpha_{2m-2}<\beta_{2m}<\gamma_{2m}<1$. Therefore, we have (3) and (4) of Theorem \ref{thm:11.8} for the fixed $m$. \begin{center} \inclf{11-8}{11.5} \centerline{Fig. 11.8} \end{center} Next assume that (3) and (4) hold for a fixed $m$. Then we examine $g_{2m+1}$ and $g_{2m+2}$ and prove (1) and (2) with $m$ replaced by $m+1$. By Proposition \ref{prop:11.7}, $g_{2m+2}=2(1-3t+t^2)^2g_{2m+1}-(1-t+t^2)^2g_{2m}$. So we examine the intersection points of $y_1(t)=(1-t+t^2)^2g_{2m}$ and $y_2(t)=2 (1-3t+t^2)^2 g_{2m+1}$. Note that $\mu_0$ is a zero of $y_2$ of order 2. See Fig. 11.9 below for $m$ odd (here $m=5$). \begin{center} \inclf{11-9}{11.5} \centerline{Fig. 11.9} \end{center} For the case of $m$ even, the figure is similar. Let $\delta_1<\dots<\delta_{2m+2}$ be the zeros of $g_{2m+2}$ in $[0,1]$. Then we see that $\delta_1<\beta_1<\delta_2<\beta_2<\dots< \delta_{m}<\beta_{m}<\delta_{m+1} <\mu_0< \delta_{m+2}<\beta_{m+1}<\dots <\delta_{2m+1}<\beta_{2m}< \delta_{2m+2}<1$. Therefore, we have (1) and (2) with $m$ replaced by $m+1$.\\ The proof of Theorem \ref{thm:11.8} is now complete. \qed \begin{qu}\label{qu:11.5} If the zeros of $\Delta_{K_1}(t)$ and $\Delta_{K_2}(t)$ are interlaced, how are $K_1$ and $K_2$ related geometrically? \end{qu} \section{$c$-stable knots and links }\label{12} It is well-known \cite{mi} that if the absolute value of the signature of a knot $K$ is equal to the degree of the Alexander polynomial, then all the zeros of $\Delta_K(t)$ are on the unit circle and hence $K$ is $c$-stable. However, the converse is not necessarily true, even for $2$-bridge knots.
In this section, we first discuss $c$-stable 2-bridge knots and links, and then we show some general constructions of $c$-stable knots or links. \subsection{Regular and exceptional $c$-stable $2$-bridge knots and links}\label{12.1} We begin with the following proposition: \begin{prop}\label{prop:12.1} Let $r=[2a_1,2a_2,\dots,2a_n]$. If all $a_i$'s have the same sign, then $K(r)$ is $c$-stable. \end{prop} {\it Proof.} Suppose that all $a_j$'s are positive. Let $M$ be a Seifert matrix of twisted chain type. (See Fig. 4.3.) Then $M+M^{T}$ is positive definite by the Positivity Lemma (Proposition \ref{prop:4.7}), and hence $|\sigma(K(r))|=\deg \Delta_{K(r)}(t)$. Therefore, $K(r)$ is $c$-stable. \qed The converse of Proposition \ref{prop:12.1} does not hold. The simplest counterexample is $K(r)$, where $r=[2,8,-2,-2]$. In fact, $\Delta_{K(r)}(t)=(2-3t+2t^2)^2$ and hence $K(r)$ is $c$-stable, but $\sigma(K(r))=0$. In fact, we have the following proposition. \begin{prop}\label{prop:12.2} Let $r=[2,2k,-2,-2]$. Then we have:\\ (1) If $k<0$, then $K(r)$ is strictly bi-stable.\\ (2) If $k=1,2,3$, then $K(r)$ is totally unstable.\\ (3) If $k\ge 4$, then $K(r)$ is $c$-stable. \end{prop} {\it Proof.} First, we see that $\Delta_{K(r)}(t)=k(t-1)^2(t^2-t+1)+t^2$, hence the modification of $\Delta_{K(r)}(t)$ is $f(x)=k(x-1)(x-2)+1$. The conclusion follows by checking the intersection of the two curves $y_1=(x-1)(x-2)$ and $y_2=-\frac{1}{k}$. \qed \begin{yosou}\label{conj:12.3} Let $r_m=[ \underbrace{2,2, \dots, 2}_{m-1},2k, \underbrace{-2,-2,\dots, -2}_{m}]$. Then if $m$ is even, $K(r_m)$ is $c$-stable for sufficiently large $k$. To be more precise, there exists a positive integer $N_m$ such that (1) if $m$ is even, then $K(r_m)$ is $c$-stable for $k\ge N_m$, and (2) if $m$ is odd, then $K(r_m)$ is $c$-stable for $k\le -N_m$. \end{yosou} We can show that $N_2=4, N_3=3$ and $N_4=7$ (see Appendix C).
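The case analysis in Proposition \ref{prop:12.2} can be checked numerically. Since $\Delta_{K(r)}(t)=k(t-1)^2(t^2-t+1)+t^2$ is reciprocal, $\Delta_{K(r)}(t)/t^2=k(u-1)(u-2)+1$ with $u=t+1/t$; each real root $u$ with $|u|<2$ contributes a conjugate pair of zeros on the unit circle, and each real root with $|u|>2$ a pair of positive real zeros. A short sketch (the function name and labels are ours; the boundary cases $u=\pm2$ are ignored):

```python
import cmath

def zero_types(k):
    """Classify the zeros of Delta_k(t) = k(t-1)^2(t^2-t+1) + t^2 via the
    modification f(u) = k(u-1)(u-2) + 1, u = t + 1/t.  Each real root u
    in (-2, 2) gives a pair of zeros on the unit circle, each real root
    with |u| > 2 a pair of real zeros, and complex roots give zeros that
    are neither real nor of modulus one."""
    # roots of k*u^2 - 3k*u + (2k + 1) = 0
    disc = 9*k*k - 4*k*(2*k + 1)   # = k(k - 4)
    kinds = []
    for sgn in (+1, -1):
        u = (3*k + sgn*cmath.sqrt(disc)) / (2*k)
        if abs(u.imag) > 1e-12:
            kinds.append('complex')
        elif abs(u.real) < 2:
            kinds.append('unit')
        else:
            kinds.append('real')
    return sorted(kinds)

print(zero_types(4))   # ['unit', 'unit']       -> c-stable
print(zero_types(3))   # ['complex', 'complex'] -> totally unstable
print(zero_types(-1))  # ['real', 'unit']       -> strictly bi-stable
```

For $k\ge 4$ both roots of the quadratic lie in $(-2,2)$, for $k=1,2,3$ the discriminant $k(k-4)$ is negative, and for $k<0$ exactly one root exceeds $2$, matching (1)--(3) of the proposition; $k=4$ gives the double root $u=3/2$, i.e., $\Delta=(2-3t+2t^2)^2$ as above.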
These knots $K(r_m)$ are exceptional $c$-stable knots.\\ Example \ref{ex:12.4} below gives an exceptional $c$-stable link. \begin{ex}\label{ex:12.4} Let $r=[2,2,2,-6,-2]$. Then $K(r)$ is a $c$-stable link. In fact, $\Delta_{K(r)}(t)=(t-1)(3t^4-6t^3+7t^2-6t+3)$ is $c$-stable. \end{ex} \subsection{ Construction of $c$-stable quasi-rational knots and links }\label{12.2} Let $K$ be a quasi-rational knot or link such that a Seifert matrix of $K$ is of the form $ M=\tbt{A}{O}{B}{C}$, where $A = {\rm diag}\{a_1, a_2, \cdots, a_p\}, a_j > 0, 1 \leq j \leq p$, and $C={\rm diag}\{c_1,c_2, \cdots, c_q\}, c_j>0, 1 \leq j \leq q$. Let $B = [b_{i,j}]_{1 \leq i \leq q, 1 \leq j \leq p}$. \begin{prop}\label{prop:12.5} Suppose $M$ satisfies the following conditions:\\ (1) $a_k > \frac{1}{2} \{|b_{1,k}|+|b_{2,k}|+ \cdots +|b_{q,k}|\}$ for $k=1,2, \cdots, p$,\\ (2) $c_{\ell} > \frac{1}{2} \{|b_{\ell,1}|+|b_{\ell,2}|+ \cdots +|b_{\ell,p}|\}$ for $\ell =1,2, \cdots, q$.\\ Then $K$ is $c$-stable. \end{prop} {\it Proof.} The symmetric matrix $\widehat{M} = M + M^{T}$ is positive definite by the Strong Positivity Lemma (Proposition \ref{prop:4.5}), so the signature of $\widehat{M}$ is equal to $p+q$, which is also the degree of the Alexander polynomial of $K$. Hence, $K$ is $c$-stable. \qed Note that $K$ is generally non-alternating. \begin{rem}\label{rem:12.6} Conditions (1) and (2) in Proposition \ref{prop:12.5} are sufficient conditions for the knots $X_n=X_n(2a_1,2a_2,\dots, 2a_n|2b_1,2b_2,\dots, 2b_n)$ defined in Example \ref{ex:6.4} to be $c$-stable. Suppose that all $a_j>0$ and $b_j>0$. Even so, for $X_n$ to be $c$-stable, some condition such as (1) or (2) is still needed. In fact, $K_3=X_3(2, 2, 2|2,2,2)$ is not $c$-stable, but strictly bi-stable. Here, $\Delta_{K_3}(t)=t^6-4t^4+7t^3-4t^2+1$ and $K_3$ is not alternating. On the other hand, $X_3(4,2,2|2,2,2)$ is $c$-stable. \end{rem} \begin{prop}\label{prop:12.7} Let $X_n = X_n (2a_1, 2a_2, \cdots, 2a_n \mid 2b_1, 2b_2, \cdots, 2b_n)$ be a quasi-rational knot defined in Example \ref{ex:6.4}.
Suppose\\ (1) $a_1\geq n/2, a_2 \geq (n-1)/2, \cdots, a_k \geq (n-k+1)/2, \cdots, a_n \geq 1/2$, and\\ (2) $b_1 \geq 1/2, b_2 \geq 2/2, \cdots, b_k \geq k/2, \cdots, b_n \geq n/2$.\\ Then $X_n$ is $c$-stable. \end{prop} {\it Proof.} Let $M$ be a Seifert matrix given in Section 6.3. Since $a_j$ and $b_j$ are integers, it follows that $a_n \geq 1$ and $b_1 \geq 1$. Therefore, the $n^{\rm th}$ row and the $(n+1)^{\rm st}$ row are excessive. Applying the proof of the Positivity Lemma to the matrix $M + M^{T}$ shows that $M + M^{T}$ is positive definite. \qed \begin{prop}\label{prop:12.8} Let $Y_{2n+1} = Y_{2n+1}(2a_1,2a_2, \cdots, 2a_{2n+1} \mid 2b_1, 2b_2, \cdots, 2b_{2n+1})$ be a quasi-rational knot defined in Example \ref{ex:6.5}. Suppose\\ (1) $a_1$ and $a_{2n+1} \geq 1, a_2$ and $a_{2n} \geq 2, \cdots, a_k$ and $a_{2n+2-k} \geq k, \cdots, a_{n+1} \geq n+1$,\\ (2) $b_1$ and $b_{2n+1} \geq 1, b_2$ and $b_{2n} \geq 2, \cdots, b_k$ and $b_{2n+2-k} \geq k, \cdots, b_{n+1} \geq n+1$.\\ Then $Y_{2n+1}$ is $c$-stable. \end{prop} {\it Proof.} $M + M^{T}$ satisfies all conditions of the Positivity Lemma. \qed \subsection{General construction of $c$-stable knots and links }\label{12.3} The previous propositions show that, given an arbitrary quasi-rational knot or link, we can make it $c$-stable by changing the number of full twists on some bands. In this subsection, we generalize this result by showing that we can construct a $c$-stable knot or link from any given Seifert surface. In the case of Seifert surfaces specified by graphs as before, we can construct a $c$-stable knot or link with the same underlying graph. \begin{thm}\label{thm:12.9} Let $F$ be a Seifert surface for a knot or link $K$, with ${\rm rank}\,H_1(F,\mathbb{Z})=n$. Suppose that a system of $n$ mutually disjoint arcs $\alpha_1, \dots, \alpha_n$ properly embedded in $F$ is specified so that $F\setminus \cup_i\alpha_i$ is a disk.
Let $\widetilde{F}$ be a Seifert surface obtained by full-twisting $F$ along each arc $\alpha_i$, $k_i$ times. Denote by $\widetilde{K}$ the knot or link $\partial \widetilde{F}$. Then, there exist $N_i \in{\mathbb N}\ (i=1, 2,\dots, n)$ such that if $k_i\geq N_i$ for each $i$, then $\widetilde{K}$ is $c$-stable. \end{thm} {\it Proof.} Let $L=\{\ell_1,\dots,\ell_n\}$ be a set of embedded loops in $F$ such that for each $i$, $\alpha_i \cap (\cup_j \ell_j)$ is a single transverse point in $\ell_i$. Note that such a system $L$ is unique up to isotopy since $F\setminus \cup_i \alpha_i$ is a disk. Then $L$ with an arbitrary orientation gives Seifert matrices $S$ for $F$ and $\widetilde{S}$ for $\widetilde{F}$. Since twisting $F$ along $\alpha_i$ affects only the self-linking number of $\ell_i$, $\widetilde{S}-S$ is a diagonal matrix whose $i$th diagonal entry is $k_i$. Let $M$ be the symmetric matrix $\widetilde{S}+\widetilde{S}^T=(m_{i,j})$. If each $k_i$ is large enough, we have $m_{i,i}>0$ and $m_{i,i}>\sum_{j\neq i} |m_{i,j}|$, and hence by the Strong Positivity Lemma (Proposition \ref{prop:4.5}), $M$ is positive definite. Then the signature $\sigma(M)$ is equal to $n$. By \cite{mi}, $\Delta_{\widetilde{K}}(t)$ has at least $n$ of its zeros on the unit circle, and hence $n\le \deg\Delta_{\widetilde{K}}(t)$. Since $n=2g(\widetilde{F})$, we have ${\rm deg}\,\Delta_{\widetilde{K}}(t)\le n$. Therefore, $\deg\Delta_{\widetilde{K}}(t)= n$ and the conclusion follows. \qed Note that if $M$ is positive definite, we have $n\le \deg\Delta_{\widetilde{K}}(t)\le 2g(\widetilde{K})\le n$, and hence $\widetilde{F}$ is a minimal genus Seifert surface for $\widetilde{K}$. Before we discuss some applications of Theorem \ref{thm:12.9}, we prove one proposition. \begin{prop}\label{prop:12.10} Let $G$ be a positive (or negative) admissible connected planar graph on a disk $D$. Suppose that $G$ satisfies (\ref{siki:7.1}). Let $F(G)$ be the surface representing $G$.
Then $K=\partial F(G)$ is alternating and $c$-stable. \end{prop} We should note that $G$ is not necessarily an even graph. {\it Proof.} Since the diagram is special alternating, $K$ is special alternating. Now let $M$ be a Seifert matrix obtained from $F(G)$. Then $M+M^T$ is positive (or negative) definite by the Positivity Lemma, and hence $K$ is $c$-stable. \qed Now take finitely many disks $D_1,D_2,\dots, D_n$, each of which has a positive admissible graph $G_j$ ($1\le j\le n$) that satisfies (\ref{siki:7.1}). Consider a Murasugi sum $F$ of the surfaces $F(G_1), F(G_2), \dots, F(G_n)$ glued in an arbitrary fashion. Then the knot $K=\partial F$ is generally not $c$-stable, but by Theorem \ref{thm:12.9}, we can make $K$ $c$-stable by changing at most $s$ weights in $\{G_1,G_2,\cdots, G_n\}$, where $s={\rm rank}\,H_1(F;\ZZ)$. \begin{ex}\label{ex:12.11} The knot or link on the left is not $c$-stable, but by changing at most four weights, it becomes $c$-stable. \end{ex} \byouga{12-1}{12}{12.1} \subsection{Interlacing property of zeros on the unit circle}\label{12.4} In this subsection, we define the interlacing property for two $c$-stable real polynomials. \begin{dfn}\label{def:12.12} Let $f(t)$ and $g(t)$ be $c$-stable real polynomials, and let $\{\alpha_j, 1 \leq j \leq n\}$ and $\{\beta_k, 1 \leq k \leq m\}$ be, respectively, the unit complex zeros of $f(t)$ and $g(t)$ with the property that ${\rm Im}(\alpha_j) \geq 0$ and ${\rm Im} (\beta_k) \geq 0$. Then we say that $f(t)$ and $g(t)$ are {\it interlaced} if $\{{\rm Re}(\alpha_j), 1 \leq j \leq n\}$ and $\{{\rm Re}(\beta_k), 1 \leq k \leq m\}$ are interlaced. \end{dfn} As a typical example, we prove the following proposition: \begin{prop}\label{prop:12.13} Let $r_n = [\underbrace{2,2, \cdots, 2}_{n}]$. Then $\Delta_{K(r_n)}(t)$ and $\Delta_{K(r_{n-1})} (t)$ are interlaced.
\end{prop} {\it Proof.} The unit complex zeros $\{\alpha_k\}$ of $\Delta_{K(r_n)} (t)$ with ${\rm Im}(\alpha_k) \geq 0$ are: (1) if $n$ is even, say $2m$, then $\{\alpha_k\}=\{e^{\frac{(2k+1)\pi i}{2m+1}}, 0 \leq k \leq m-1\}$, and (2) if $n$ is odd, say $2m+1$, then $\{\alpha_k\}=\{e^{\frac{2k\pi i}{2m+2}}, 0 \leq k \leq m\}$. Then, the proposition follows from the inequalities below. \begin{align}\label{siki:12.1} (1)& \cos \frac{2k\pi}{2m+2} > \cos \frac{(2k+1)\pi}{2m+1} > \cos\frac{(2k+2)\pi}{2m+2}, \quad 0 \leq k \leq m-1,\nonumber\\ (2)& \cos \frac{(2k+1)\pi}{2m+3} > \cos \frac{(2k+2)\pi}{2m+2} > \cos\frac{(2k+3)\pi}{2m+3}, \quad 0 \leq k \leq m-1. \end{align} \qed The following theorem is the $c$-stable version of Theorem \ref{thm:9.4}, and is proved by using modified Alexander polynomials instead of Alexander polynomials. Therefore, the details are omitted. \begin{thm}\label{thm:12.14} Let $r =[2a_1, 2a_2, \cdots, 2a_n], a_j > 0, 1 \leq j \leq n$, and $s= [2a_1, 2a_2, \cdots, 2a_{n-1}]$. Then $\Delta_{K(r)}(t)$ and $\Delta_{K(s)}(t)$ are interlaced. \end{thm} \begin{probl}\label{probl:12.15} Characterize $c$-stable alternating knots and links. \end{probl} \section{Bi-stable knots and links}\label{13} \subsection{Bi-stable 2-bridge knots and links}\label{13.1} A bi-stable knot has not only real zeros, but also unit complex zeros. Therefore, we could say that it combines two parts: one is a $c$-stable part and the other is a real stable part. From this point of view, the following theorem is not surprising, although its proof is not straightforward. \begin{thm}\label{thm:13.1} Let $r = [2a_1, 2a_2, \cdots, 2a_{2m}, 2b_1, -2b_2, 2b_3,-2b_4, \cdots, -2b_{2p}]$ (or $r = [ 2b_1, -2b_2, 2b_3, -2b_4, \cdots, -2b_{2p}, 2a_1, 2a_2, \cdots, 2a_{2m}]$), where $a_j>0, 1\le j\le 2m$, and $b_k > 0, 1\le k\le 2p$. Then $K(r)$ is bi-stable. The number of the real zeros is $2p$ and that of the unit complex zeros is $2m$.
\end{thm} {\it Proof.} Since the signature of $K(r)$ is $2m$, it follows that the number of the unit complex zeros is at least $2m$. Therefore, it suffices to show that the number of the real zeros is (at least) $2p$. First we prove the following lemma. \begin{lemm}\label{lem:13.2} (1) Let $r^{\prime} = [2a_1, 2a_2, \cdots, 2a_{2m-1}]$. Then $(t-1)$ divides $\Delta_{K(r^{\prime})}(t)$, but $(t-1)^2$ does not. (2) Let $r^{\ast} = [-2b_2, 2b_3, -2b_4, \cdots, -2b_{2p}]$. Then $(t-1)$ divides $\Delta_{K(r^{\ast})}(t)$, but $(t-1)^2$ does not. \end{lemm} {\it Proof.} (1) $\Delta_{K(r')} (t) /(t-1) = \Delta_{K(r')}(t,t)$, where $\Delta_{K(r')}(x,y)$ denotes the 2-variable Alexander polynomial of the 2-component link $K(r')$. Then $|\Delta_{K(r')}(1,1)|$ is the absolute value of the linking number $\ell$ between the two components of $K(r')$ \cite{tor}. Since $|\ell | = |a_1 + a_3 + \cdots +a_{2m-1}| > 0$, $\Delta_{K(r')}(t, t)$ is not divisible by $t-1$. (2) $K(r^*)$ is stable, and all the zeros are simple. \qed \begin{lemm}\label{lem:13.3} Let $D_K (t)$ be the normalization of $\Delta_K (t)$. Then we have \begin{equation} D_{K(r)}(t) = D_{K(r_1)} (t) D_{K(r_2)}(t) +t D_{K(r^{\prime})} (t) D_{K(r^{\ast})}(t), \end{equation} \noindent where $r_1 = [2a_1, 2a_2, \cdots, 2a_{2m}]$, $r_2 = [2b_1, -2b_2, 2b_3, -2b_4, \cdots, -2b_{2p}]$, and $r^{\prime}$ and $r^{\ast}$ are given in Lemma \ref{lem:13.2}. \end{lemm} {\it Proof.} Using a twisted chain type Seifert surface of $K(r)$, we have a Seifert matrix $M$ of the form $M=\tbt{A}{B}{O}{C}$, where $A$, $C$ are Seifert matrices of $K(r_1)$ and $K(r_2)$, respectively, and $B$ has only $1$ at the $(2m,2m+1)$-entry and $0$ elsewhere (see (\ref{siki:4.1})). Then it is easy to see that $D_{K(r)}(t)$ has the required form. \qed We return to the proof of Theorem \ref{thm:13.1}. We know now \begin{align}\label{siki:13.2} (1)\ &f_m(t) = D_{K(r_1)}(t){\rm \ is}\ c{\rm -stable\ and\ hence\ } f_m > 0 {\rm \ for\ any\ real\ } t.
\nonumber\\ (2)\ &D_{K(r^{\prime})} (t){\rm \ is\ } c{\rm -stable,\ and\ has\ only\ one\ real\ zero,\ namely\ } 1, \nonumber\\ &{\rm and\ hence\ we\ can\ write\ } D_{K(r^{\prime})}(t) = (t-1) g_m (t) {\rm \ with\ } g_m (t) >0 {\rm \ for\ any\ real\ }t, \nonumber\\ (3)\ &D_{K(r_2)}D_{K(r^*)} {\rm \ is\ simple,}\nonumber \\ (4)\ &D_{K(r_2)}(t){\rm \ is\ stable\ and\ has\ } 2p {\rm \ positive\ real\ zeros,\ } p {\rm \ of\ which,\ say\ } \beta_1 < \beta_2 < \cdots < \beta_p, {\rm \ lie\ in\ }[0,1], \nonumber\\ (5)\ &D_{K(r^{\ast})}(t) = (t-1) h_p (t){\rm \ is\ stable\ and\ has\ } (2p-1) {\rm \ real\ zeros,\ } p {\rm \ of\ which,\ say\ } \nonumber\\ &\alpha_1 < \alpha_2 < \cdots < \alpha_{p-1} < \alpha_p (= 1), {\rm \ lie\ in\ } [0,1], {\rm \ and\ } h_p (1) \ne 0. \nonumber\\ &{\rm \ Further,\ } \{ \beta_j, 1 \leq j \leq p\} {\rm \ and\ } \{\alpha_j,1 \leq j \leq p\}{\rm \ are\ interlaced,\ i.e.,\ } \nonumber\\ & \beta_1 < \alpha_1 < \beta_2 < \alpha_2 < \cdots < \alpha_{p-1} < \beta_p < \alpha_p = 1. \end{align} Using this notation, we can write \begin{align}\label{siki:13.3} D_{K(r)}(t) = f_m(t) D_{K(r_2)}(t) +t (t-1)^2 g_m (t) h_p (t),{\rm \ and\ } g_m(1) \ne 0 \ne h_p (1). \end{align} Now we calculate the number of real zeros of $D_{K(r)} (t)$. From (\ref{siki:13.3}), we see that the real zeros of $D_{K(r)}(t)$ are determined by the intersection $\{\gamma_j\}$ of the two curves $y_1 = f_m (t) D_{K(r_2)}(t)$ and $y_2 = - t (t-1)^2 g_m (t) h_p (t)$. Using the fact that $\{\alpha_j \}$ and $\{\beta_j \}$ are interlaced, we have the graphs below. Note that $f_m(t), g_m(t) > 0$ for any real $t$, and $t=1$ is a double zero of $y_2$. Further, $y_1(0)>0$ and $y_2'(0)<0$. (1) If $p$ is even, we have Fig. 13.1. \byouga{13-1}{10}{13.1} From the graph, we see that there are (at least) $p$ points of intersection $\{\gamma_j, 1 \leq j \leq p\}$ in $[0,1]$ and \begin{equation*} \beta_1 < \gamma_1 < \beta_2 < \gamma_2 < \beta_3 < \cdots < \beta_{p-1} < \gamma_{p-1} < \beta_p < \gamma_p < 1.
\end{equation*} (2) If $p$ is odd, then we have the following graph. \byouga{13-2}{10}{13.2} Therefore, $D_{K(r)}(t)$ has at least (and hence exactly) $2p$ real zeros. \qed \begin{ex}\label{ex:13.4} Let $r=[4,2,6,2,4,-6,2,-4]$. Then $\Delta_{K(r)}(t)$ has four real zeros and four unit complex zeros. \end{ex} \subsection{Exceptional bi-stable knots and Salem knots}\label{13.2} A fibred knot (or link) $K$ is called a {\it Salem fibred knot ({\rm or} link)} (\cite{hiro}) if $\Delta_K (t)$ is bi-stable and has exactly two real ($\neq 1$) zeros. Typical examples of Salem fibred knots are the 2-bridge knots $K(r_m)$, $m$ odd, where $r_m = [2, 2, \cdots, 2, -2]$ (abbreviated $[(2)^m,-2]$), by Theorem \ref{thm:13.1}. Modifying $K(r_m)$, we obtain a series of exceptional bi-stable knots given below. \begin{prop}\label{prop:13.n5} Let $r(m,n) = [(2)^m, -2, (2)^n], m \geq n \geq 0, m+n$ being odd. Then $K(r(m,n))$ is a Salem fibred knot. \end{prop} {\it Proof.} By induction on $m$ and $n$, it is easily shown that the normalized Alexander polynomial $D_{m,n}(t)$ of $K(r(m,n))$ is given by the following formula: \begin{equation*} D_{m,n}(t) = \sum_{k=0}^n (-1)^k (4k+1) t^k + (4n+3) \sum_{k=n+1}^m(-1)^k t^k + \sum_{j=0}^n (-1)^{m+j+1} (4n+1-4j) t^{m+j+1}. \end{equation*} Since $D_{m,n}(1) = -1$, $D_{m,n}(t)$ has at least two real zeros. Further, since $\sigma(K(r(m,n))) = m+n-1$, $D_{m,n}(t)$ has at least $m+n-1$ unit complex zeros and hence $K(r(m,n))$ is a Salem fibred knot. \qed Let $\mu (K)$ denote the maximal absolute value of the real zeros of $\Delta_K(t)$. We note that if $K$ is a Salem fibred knot, then $\mu(K)$ is equal to the Mahler measure of $\Delta_K(t)$.
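The closed formula for $D_{m,n}(t)$ and the behaviour of $\mu$ along the family $r(m,0)$ can be checked numerically; a sketch (the function names are ours), using bisection to locate the largest real zero:

```python
def D(m, n):
    """Coefficients (low degree first) of the normalized Alexander
    polynomial D_{m,n}(t) of K(r(m,n)), from the closed formula above."""
    c = [0] * (m + n + 2)
    for k in range(n + 1):
        c[k] += (-1)**k * (4*k + 1)
    for k in range(n + 1, m + 1):
        c[k] += (4*n + 3) * (-1)**k
    for j in range(n + 1):
        c[m + j + 1] += (-1)**(m + j + 1) * (4*n + 1 - 4*j)
    return c

def ev(c, t):
    return sum(a * t**i for i, a in enumerate(c))

print(D(1, 0))         # [1, -3, 1]  (r(1,0) = [2,-2] is the figure-eight knot)
print(ev(D(5, 2), 1))  # -1  (D_{m,n}(1) = -1, so real zeros exist)

def mu(m, n):
    """Largest real zero of D_{m,n}, located by bisection on (1, 100);
    for these Salem fibred knots it equals the Mahler measure."""
    c = D(m, n)
    lo, hi = 1.0, 100.0   # ev(c, 1) = -1 < 0 and ev(c, 100) > 0 (monic)
    for _ in range(200):
        mid = (lo + hi) / 2
        if ev(c, mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

print(round(mu(3, 0), 4))        # 2.1537
print(2 < mu(5, 0) < mu(3, 0))   # True: mu decreases toward 2 along r(m,0)
```

The last line is consistent with observations (1) and (4) in the display that follows.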
Now our computation suggests that for $m \geq 1$, \begin{align} (1)&\ \mu (K(r(m+2,0))) < \mu (K(r(m,0)))\ {\rm and} \nonumber\\ (2)&\ \mu (K(r(m+2,1))) > \mu (K(r(m,1))).\ {\rm Further},\nonumber\\ (3)&\ \mu (K(r(m+n,0))) < \mu (K(r(m,n))) < \mu (K(r(m+n-1,1))).\ {\rm Finally}, \nonumber\\ (4)&\ \lim_{m \to \infty} \mu (K(r(m,0))) = 2,\ {\rm and}\nonumber\\ (5)&\ \lim_{m \to \infty} \mu(K(r(m,1))) \doteq 3.41421. \end{align} Besides these bi-stable knots, Hironaka found two more Salem fibred 2-bridge knots \cite{hiro}. \begin{align*} (1)&\ K_1 = K(s_1), s_1 = [(2)^5, (-2)^3],\ {\rm and}\ \mu(K_1) \doteq 1.63557,\nonumber\\ (2)&\ K_2 = K(s_2), s_2= [(2)^9, (-2)^5],\ {\rm and}\ \mu(K_2) \doteq 1.42501. \end{align*} We find three more sporadic Salem fibred 2-bridge knots. \begin{align} (3)& \ K_3 = K(s_3), s_3 = [(2)^6, -2, 2, -2, -2],\ {\rm and}\ \mu(K_3) \doteq 3.94748,\nonumber\\ (4)& \ K_4 = K(s_4), s_4 = [(2)^4, (-2)^3,2],\ {\rm and}\ \mu (K_4) \doteq 2.38215,\nonumber\\ (5)&\ K_5 = K(s_5), s_5 = [(2)^6, (-2)^5, (2)^3],\ {\rm and}\ \mu(K_5) \doteq 1.80017. \end{align} We suspect that there exist other Salem fibred 2-bridge knots. However, in contrast to the knot case, we find many Salem fibred 2-bridge links, and we will study these links in a separate paper. \subsection{General bi-stable knots and links}\label{13.3} A 2-bridge knot in Theorem \ref{thm:13.1} is given as a quasi-rational knot shown in Fig. 13.3. \byouga{13-3}{4}{13.3} The first part, $[2a_1, 2a_2, \cdots, 2a_{2m}]$, is a special alternating knot, and it is also represented by an admissible positive graph $G_0$. For example, the surface $F(G_0)$ of $K(r)$, $r=[2a,2b]$, is represented by $G_0$. See Fig. 13.4. \byouga{13-4}{6}{13.4} Therefore, for example, $F(G)$ of $K(r)$, $r=[2a_1,2a_2, 2b_1,-2b_2,2b_3,-2b_4]$, is represented by the graph $G$ below. \byouga{13-5}{3}{13.5} This observation suggests a construction of general bi-stable knots as in Proposition \ref{prop:13.5} below. See Fig.
13.6 for example, where a bi-stable knot is depicted by a graph, and the zeros are plotted. \begin{prop}\label{prop:13.5} Let $G_0$ be an admissible positive (or negative) graph on a disk. Attach $p$ mutually disjoint positive (or negative) arcs to $\partial D$ and then $p$ negative (or positive) arcs to $\partial D$ from the back side in such a way that the first (or the last) arc crosses exactly one edge of $G_0$, where the $2p$ arcs attached to $\partial D$ represent a 2-bridge knot (or link) as is shown in Fig. 13.6. $G_0$ together with these $2p$ arcs forms a graph $G$. Then the knot $K=\partial F(G)$ is bi-stable. \end{prop} A proof is exactly the same as that of Theorem \ref{thm:13.1} and we omit the details. \qed \byouga{13-6}{11}{13.6} A crucial point of this construction is the interlacing property of a 2-bridge knot $K(r)$, $r=[2b_1,-2b_2,2b_3,\dots, -2b_p]$. Therefore, the knot $K(r)$ may be replaced by other stable knots which have some kind of interlacing property. In Fig. 13.7 below, a 2-bridge knot $K(r)$ is replaced by a stable knot $K_3= X_3(2,2,2|-2,-2,-2)$. The knot $K$ thus obtained is bi-stable. In fact, $\Delta_K(t) = -3+44 t-235 t^2+662 t^3-1161 t^4+1387 t^5-1161 t^6+662 t^7-235 t^8+44 t^9-3 t^{10}$ is bi-stable. The zeros are plotted in Fig. 13.7 (right). \byouga{13-7}{11}{13.7} However, $K(r)$ may not be replaced by an exceptional stable 2-bridge knot $K'$. If the $c$-stable part $[2a_1, 2a_2 , \cdots, 2a_{2m}]$ is replaced by an exceptional $c$-stable 2-bridge knot, then the knot is generally not bi-stable. For example, neither $[2,8,-2,-2,4,-2,6,-8]$ nor $[2,8,-2,-2,10,2,-2,-10]$ is bi-stable, where $[2,8,-2,-2]$ is exceptionally $c$-stable and $[10,2,-2,-10]$ is exceptionally stable. However, it is interesting to see that if the second entry $8$ in both cases is replaced by a sufficiently large positive integer, the knots become bi-stable.
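The bi-stability of the explicit polynomial $\Delta_K(t) = -3+44t-\cdots-3t^{10}$ above can also be checked numerically. The sketch below is our own illustration (assuming \texttt{numpy}): it verifies that every zero is either real or lies on the unit circle, and that the real zeros off the unit circle occur in reciprocal pairs $t, 1/t$, as they must for a reciprocal polynomial.

```python
import numpy as np

# Coefficients of Delta_K(t) from the text, in ascending powers of t.
delta = [-3, 44, -235, 662, -1161, 1387, -1161, 662, -235, 44, -3]

roots = np.roots(delta[::-1])          # np.roots wants descending coefficients

# Bi-stable: each zero is real or has modulus 1.
for r in roots:
    assert min(abs(r.imag), abs(abs(r) - 1)) < 1e-6

# Real zeros, sorted; the polynomial is reciprocal, so the largest and the
# smallest real zero are reciprocal to each other.
reals = sorted(r.real for r in roots if abs(r.imag) < 1e-6)
assert reals and reals[-1] > 1 + 1e-3          # a real zero off the unit circle
assert abs(reals[0] * reals[-1] - 1) < 1e-6    # reciprocal pair t, 1/t
```

Since $\Delta_K(0) = -3 < 0$ and $\Delta_K(1) = 1 > 0$, a real zero in $(0,1)$ (and its reciprocal in $(1,\infty)$) is guaranteed even before computing, so the knot is bi-stable but not $c$-stable.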
\section{M\"obius Transformations}\label{14} In this section, we study the image of the zeros of the Alexander polynomial of a knot by a M\"obius transformation $\varphi$. We begin with the definition of the special M\"obius transformation $\varphi$ that is used in this section.\\ Let $\varphi : \CC \cup \{\infty\} \longrightarrow \CC \cup \{\infty\}$ be the M\"obius transformation given by \begin{equation}\label{siki:14.1} \varphi (z) = \frac{1-z i}{z-i}. \end{equation} $\varphi$ has the following properties. \begin{align}\label{siki:14.2} &(1)\ \varphi{\rm \ is\ one\ to\ one\ and\ } {\varphi}^{-1} {\rm \ is\ given\ by\ } {\varphi}^{-1}(z) = \frac{1+zi}{z+i}, \nonumber\\ &(2)\ \varphi{\rm \ keeps\ the\ two\ points\ } z = \pm 1 {\rm \ fixed,} \nonumber\\ &(3)\ \varphi(0) = i, \varphi (-i) = 0{\rm \ and\ } \varphi (i) = \infty,{\rm \ and} \nonumber\\ &(4)\ \varphi^2 (z) = 1/z. \end{align} We can easily check the following lemma. \begin{lemm}\label{lem:14.1} (1) $\varphi$ maps the interior of the unit circle centred at 0 onto the upper half-plane, and the exterior of the unit circle onto the lower half-plane.\\ (2) $\varphi$ maps the unit circle onto the real line and vice versa. \end{lemm} \qed The following simple property of $\varphi$ is crucial to our purpose. A proof follows from easy computations, and hence we omit the details. \begin{prop}\label{prop:14.2} For any $\alpha \in \CC$, $\alpha \ne 0, \pm i$,\\ (1) $\varphi (\alpha) + \varphi (\frac{1}{\alpha}) = 4 / (\alpha + \frac{1}{\alpha})$.\\ (2) $\varphi (\alpha) \varphi (\frac{1}{\alpha}) = 1$.\\ In particular, if $\alpha(\ne 0)$ is real or $|\alpha| = 1, \alpha \ne \pm i$, then $\alpha + \frac{1}{\alpha}$ and $\varphi (\alpha) +\varphi (\frac{1}{\alpha})$ are both real. \end{prop} \qed The main theorem in this section is the following: \begin{thm}\label{thm:14.3} Let $f(t)$ be a reciprocal real polynomial of even degree, say $2n$. Assume that $0$ and $\pm i$ are not zeros of $f(t)$.
Then there exists a reciprocal real polynomial $f^{\ast}(t)$ of the same degree $2n$ satisfying the following conditions.\\ (1) The zeros of $f^{\ast}(t)$ are exactly the image of the zeros of $f(t)$ under $\varphi$, namely, if $\alpha_1 , \alpha_2 , \cdots, \alpha_{2n}$ are the zeros of $f(t)$, then $\varphi (\alpha_1), \varphi(\alpha_2), \cdots, \varphi (\alpha_{2n})$ are exactly the zeros of $f^{\ast}(t)$.\\ (2) If $f$ is an integer polynomial, then so is $f^*$.\\ (3) $|f^{\ast}(1) | = 2^n |f(1)|$.\\ (4) $|f^{\ast}(-1)| = 2^n |f(-1)|$.\\ (5) ${(f^{\ast})^{\ast}}(t) = 2^{2n} f(t)$.\\ Furthermore, such an $f^{\ast}(t)$ is unique up to $\pm 1$. \end{thm} Before we prove the theorem, we mention a couple of corollaries. \begin{cor}\label{cor:14.4} Let $\Delta_K (t)$ be the Alexander polynomial of a knot $K$ and $\deg \Delta_K (t) = 2n$. Then we have the following:\\ (1) ${\Delta_K}^{\ast} (t)$ is a reciprocal integer polynomial of the same degree, $2n$. Therefore, ${\Delta_K}^{\ast} (t)$ is the Hosokawa polynomial of some link $K^{\ast}$ (with an arbitrary number of components).\\ (2) $ | {\Delta_K}^{\ast}(1)| = 2^n$ and $|{\Delta_K}^{\ast} (-1)| = 2^n |\Delta_K (-1)|$.\\ (3) ${\Delta_K}^{{\ast}{\ast}}(t) = 2^{2n} \Delta_K (t)$.\\ \end{cor} \begin{cor}\label{cor:14.5} If $\Delta_K(t)$ is stable, then ${\Delta_K}^{\ast} (t)$ is $c$-stable. If $\Delta_K(t)$ is $c$-stable, then ${\Delta_K}^{\ast} (t)$ is stable. Further, if $\Delta_K(t)$ is bi-stable, so is ${\Delta_K}^{\ast} (t)$. \end{cor} Now we proceed to a proof of Theorem \ref{thm:14.3}. Write \begin{equation}\label{siki:14.3} f(t) = c_0 t^{2n} + c_1 t^{2n-1} + \cdots + c_{2n}, \end{equation} \noindent where $c_0 > 0$ and $c_j = c_{2n-j}, 0 \leq j \leq 2n$.\\ Let $\alpha_1, 1/ \alpha_1, \alpha_2,1/ \alpha_2, \cdots, \alpha_n,1/ \alpha_n$ be all the zeros of $f(t)$.
Then we can write \begin{equation}\label{siki:14.4} f(t) = c_0 \prod_{j=1}^n (t- \alpha_j) (t -\frac{1}{\alpha_j}) \end{equation} \begin{lemm}\label{lem:14.6} Let $A_j = \alpha_j +\frac{1}{\alpha_j}, 1 \leq j \leq n$. Then $\lambda = \prod_{j=1}^n A_j$ is a real number. \end{lemm} {\it Proof.} Since $\alpha_j$ is a zero of $f(t)$, so is $\overline{\alpha_j}$, and $(\alpha_j + \frac{1}{\alpha_j} )(\overline{\alpha_j} + \frac{1}{\overline{\alpha_j}})$ is a real number; grouping the $A_j$ into such conjugate pairs (real $\alpha_j$ give real $A_j$ directly) shows that $\lambda$ is real. \qed Later we will see that $\lambda c_0$ is an integer and show the following: \begin{equation}\label{siki:14.5} f^{\ast}(t) = \lambda c_0 \prod_{j=1}^n (t- \varphi(\alpha_j) )( t - \varphi(\frac{1}{\alpha_j} )) \end{equation} Now consider $F(t) = f(t)/c_0 = t^{2n} + \frac{c_1}{c_0} t^{2n-1} + \cdots+ \frac{c_{2n}}{c_0}$. Then \begin{equation}\label{siki:14.6} F(t) = \prod_{j=1}^n (t-\alpha_j) (t- \frac{1}{\alpha_j}) = \prod_{j=1}^n (t^2 - A_j t + 1). \end{equation} For $0\le k\le n$, define $X_k = \sum_{j_1, \cdots, j_k} A_{j_1} A_{j_2} \cdots A_{j_k}$, where the summation runs over all $ j_1, j_2, \cdots, j_k$ such that $1 \leq j_1 < j_2 < \cdots < j_k \leq n$. In particular, $X _0= 1$ and $X_n = \lambda$. By expanding the right-hand side of (\ref{siki:14.6}), we have the following system of relations. Case I. $n=2m$. For $k = 0, 1, 2, \cdots, m$, \begin{equation}\label{siki:14.7} \frac{c_{2k}}{c_0} = \binom{n}{k} X_0 + \binom{n-2}{k-1} X_2 + \cdots + \binom{n-2k}{k-k} X_{2k}, \end{equation} and for $k = 1,2, \cdots, m$, \begin{equation}\label{siki:14.8} -\frac{c_{2k-1}}{c_0} = \binom{n-1}{k-1} X_1 + \binom{n-3}{k-2} X_3 + \cdots + \binom{n-(2k-1)}{k-k}X_{2k-1} \end{equation} For simplicity, let $M_0$ and $N_0$ be, respectively, the coefficient matrices of the system of relations of (\ref{siki:14.7}) and (\ref{siki:14.8}). Namely, $M_0$ and $N_0$ are lower triangular integer matrices of sizes respectively $m+1$ and $m$, and each with determinant $1$.
\begin{align}\label{siki:14.11} &M_0 (X_0, X_2, \cdots, X_{2m})^T = \frac{1}{c_0}(c_0, c_2, \cdots, c_{2m})^T\ {\rm and} \nonumber\\ &N_0 (X_1, X_3, \cdots, X_{2m-1})^T = -\frac{1}{c_0}(c_1, c_3, \cdots, c_{2m-1})^T, \end{align} and hence \begin{align}\label{siki:14.12} &(X_0, X_2, \cdots, X_{2m})^T = \frac{M_0^{-1}}{c_0}(c_0, c_2, \cdots, c_{2m})^T {\rm \ and} \nonumber\\ &(X_1, X_3, \cdots, X_{2m-1})^T = -\frac{N_0^{-1}}{c_0}(c_1, c_3, \cdots, c_{2m-1})^T. \end{align} Case II. $n=2m+1$. The same argument shows the following. For $k = 0, 1, 2, \cdots, m$, \begin{equation}\label{siki:14.13} \frac{c_{2k}}{c_0} = \binom{n}{k} X_0 + \binom{n-2}{k-1} X_2 + \cdots + \binom{n-2k}{k-k} X_{2k}, \end{equation} and \begin{equation}\label{siki:14.14} -\frac{c_{2k+1}}{c_0} = \binom{n-1}{k} X_1 + \binom{n-3}{k-1}X_3 + \cdots + \binom{n-(2k+1)}{k-k}X_{2k+1} \end{equation} Using coefficient matrices $M_1$ and $N_1$ of these systems of relations, we can write \begin{align}\label{siki:14.15} & M_1 (X_0,X_2, \cdots X_{2m})^T = \frac{1}{c_0} (c_0, c_2, \cdots, c_{2m})^T{\rm \ and} \nonumber\\ & N_1 (X_1,X_3, \cdots X_{2m+1})^T = -\frac{1}{c_0} (c_1, c_3, \cdots, c_{2m+1})^T. \end{align} Here $M_1$ and $N_1$ are $(m+1) \times (m+1)$ lower triangular integer matrices with determinant $1$ and hence \begin{align}\label{siki:14.18} &(X_0, X_2, \cdots, X_{2m})^T = \frac{M_1^{-1}}{c_0}(c_0, c_2, \cdots, c_{2m})^T, {\rm and} \nonumber\\ &(X_1, X_3, \cdots, X_{2m+1})^T = -\frac{N_1^{-1}}{c_0}(c_1, c_3, \cdots, c_{2m+1})^T \end{align} Now we study $f^{\ast}(t)$: \begin{align}\label{siki:14.19} f^{\ast}(t) &= \lambda c_0 \prod_{j=1}^n (t - \varphi(\alpha_j))(t - \varphi(\frac{1}{\alpha_j} )) \nonumber\\ &= \lambda c_0 \prod_{j=1}^n [t^2 - (\varphi(\alpha_j) + \varphi(\frac{1}{\alpha_j} )) t +\varphi(\alpha_j )\varphi(\frac{1}{\alpha_j})] \nonumber\\ &= \lambda c_0 \prod_{j=1}^n (t^2 - \frac{4}{A_j} t + 1).
\end{align} We write it as \begin{equation}\label{siki:14.20} f^{\ast}(t) = \lambda c_0 (d_0 t^{2n} + d_1 t^{2n-1} + \cdots + d_{2n}), d_0 =1. \end{equation} If we compare (\ref{siki:14.19}) with (\ref{siki:14.6}), we see immediately the following relations. Case (I) $n=2m$. For $k = 0, 1, 2, \cdots, m$, \begin{align}\label{siki:14.21} \frac{d_{2k}}{d_0} = &\binom{n}{k} X_0 + \binom{n-2}{k-1} \sum_{j_1,j_2} \frac{4^2}{A_{j_1} A_{j_2}} + \binom{n-4}{k-2} \sum_{j_1, \cdots, j_4} \frac{4^4}{A_{j_1}\cdots A_{j_4}} + \cdots \nonumber\\ &+ \binom{n-2k}{k-k} \sum_{j_1, \cdots, j_{2k}} \frac{4^{2k}}{A_{j_1} \cdots A_{j_{2k}}}, \end{align} and hence, \begin{align}\label{siki:14.22} \lambda d_{2k} &= \binom{n}{k} X_n + \binom{n-2}{k-1} 4^2 X_{n-2} + \binom{n-4}{k-2} 4^4 X_{n-4} + \cdots \nonumber\\ &+ \binom{n-2k}{k-k} 4^{2k} X_{n-2k} \end{align} Similarly, for $k = 1,2, \cdots,m$, \begin{align}\label{siki:14.23} - \lambda d_{2k-1} &=\binom{n-1}{k-1} 4X_{n-1} + \binom{n-3}{k-2} 4^3 X_{n-3} + \cdots \nonumber\\ &+ \binom{n-(2k-1)}{k-k} 4^{2k-1} X_{n-(2k-1)}. \end{align} Let $P_{\ell}$ be a diagonal matrix of order $\ell$ of the form: \begin{equation*} P_{\ell} = {\rm diag}\{1, 4^2 ,4^4 , \cdots , 4^{2(\ell -1)}\}. \end{equation*} Then (\ref{siki:14.22}) and (\ref{siki:14.23}) can be written as \begin{align}\label{siki:14.24} &\lambda (d_0, d_2, \cdots,d_{2m})^T = M_0 P_{m+1} (X_n, X_{n-2}, \cdots,X_0)^T, {\rm \ and} \nonumber\\ - &\lambda (d_1, d_3, \cdots, d_{2m-1})^T = N_0 \widehat{P_m} (X_{n-1}, X_{n-3}, \cdots, X_1)^T, \end{align} \noindent where $\widehat{P_{\ell}} = 4 P_{\ell}$. Case II. $n=2m+1$. The same argument shows \begin{align}\label{siki:14.25} &\lambda (d_0, d_2, \cdots,d_{2m})^T = M_1 P_{m+1} (X_{2m+1}, X_{2m-1}, \cdots,X_1)^T,{\rm \ and} \nonumber\\ -& \lambda (d_1, d_3, \cdots, d_{2m+1})^T = N_1 \widehat{P_{m+1}} (X_{2m}, X_{2m-2}, \cdots, X_0)^T.
\end{align} Let $Q_{\ell}$ be an $\ell \times \ell$ matrix (that is the mirror of the identity matrix), namely, $Q_{\ell} = [ q_{i,j}]_{1 \leq i,j \leq \ell}$, where $q_{i,j} = 1$, if $i+j=\ell+1$, and $q_{i,j} = 0$, otherwise. Using $Q_{\ell}$, we have the final result. Case (a) $n=2m$. $(X_n, X_{n-2}, \cdots, X_0)^T = Q_{m+1}(X_0, X_2, \cdots, X_n)^T$ and $(X_{n-1}, X_{n-3}, \cdots, X_1)^T = Q_m (X_1, X_3, \cdots, X_{n-1})^T$, and hence, combining (\ref{siki:14.24}) and (\ref{siki:14.12}), we have \begin{align}\label{siki:14.26} &\lambda c_0 (d_0, d_2, \cdots, d_{2m})^T = M_0 P_{m+1} Q_{m+1}M_0^{-1} (c_0, c_2, \cdots, c_{2m})^T {\rm \ and}\\ -& \lambda c_0 (d_1, d_3, \cdots, d_{2m-1})^T = N_0 \widehat{P_m}Q_m (-N_0^{-1}) (c_1, c_3, \cdots, c_{2m-1})^T. \end{align} Case (b) $n=2m+1$. Similarly, we have \begin{align}\label{siki:14.28} \lambda c_0 (d_0, d_2, \cdots, d_{2m})^T &= M_1 P_{m+1}c_0 (X_{2m+1}, X_{2m-1}, \cdots, X_1)^T \nonumber\\ &= M_1 P_{m+1}Q_{m+1}(-N_1^{-1}) (c_1, c_3, \cdots, c_{2m+1})^T {\rm \ and}\\ -\lambda c_0 (d_1, d_3, \cdots, d_{2m+1})^T &= N_1 \widehat{P_{m+1}}c_0 (X_{2m}, X_{2m-2}, \cdots, X_0)^T \nonumber\\ &= N_1 \widehat{P_{m+1}}Q_{m+1}M_1^{-1} (c_0, c_2, \cdots, c_{2m})^T. \end{align} To be more precise, let $f(t) = \sum_{j=0}^{2n} c_j t^{2n-j}, c_0 > 0$, and $f^{\ast}(t) = \sum_{j=0}^{2n} a_j t^{2n-j}$. Then $a_j, 0 \leq j \leq n$, is obtained by the following formulas. Case (a) $n=2m$. \begin{align}\label{siki:14.30} &(a_0, a_2, \cdots,a_{2m})^T = M_0 P_{m+1} Q_{m+1} M_0^{-1} (c_0, c_2, \cdots, c_{2m})^T{\rm \ and} \nonumber\\ &(a_1, a_3, \cdots,a_{2m-1})^T = N_0 \widehat{P_m} Q_m N_0^{-1} (c_1, c_3, \cdots, c_{2m-1})^T. \end{align} Case (b) $n=2m+1$. \begin{align}\label{siki:14.31} &(a_0, a_2, \cdots,a_{2m})^T = -M_1 P_{m+1} Q_{m+1} N_1^{-1} (c_1, c_3, \cdots, c_{2m+1})^T{\rm \ and}\nonumber\\ &(a_1, a_3, \cdots,a_{2m+1})^T = -N_1 \widehat{P_{m+1}} Q_{m+1} M_1^{-1} (c_0, c_2, \cdots, c_{2m})^T. 
\end{align} This proves Theorem \ref{thm:14.3} (1). Since all the matrices involved in the proof are integer matrices, it follows that if $f(t)$ is an integer polynomial, then so is $f^*(t)$. This proves (2). To prove (3) and (4), we compute $f(\pm 1)$ and $f^{\ast}(\pm 1)$. Since $f(t) = c_0 \prod_{j=1}^n (t^2 - A_j t +1)$, it follows that $f(1) = c_0 \prod_{j=1}^n (2-A_j)$ and $f(-1) = c_0 \prod_{j=1}^n (2+A_j)$. Meanwhile, $f^{\ast}(t) = \lambda c_0 \prod_{j=1}^n (t^2 - \frac{4}{A_j} t +1)$, and hence $f^{\ast}(1) = \lambda c_0 \prod_{j=1}^n (2 - \frac{4}{A_j}) = \lambda c_0 \prod_{j=1}^n \frac{1}{A_j}(2A_j - 4) = \lambda c_0 \frac{1}{\lambda} 2^n \prod_{j=1}^n (A_j - 2) = 2^n (-1)^n f(1)$. Similarly, $f^{\ast}(-1) = \lambda c_0 \prod_{j=1}^n (2 + \frac{4}{A_j}) = c_0 \prod_{j=1}^n (2A_j +4) = c_0 2^n \prod_{j=1}^n (A_j +2) = 2^n f(-1)$. This proves (3) and (4). To show (5), first we note that $\varphi^2 (z) = 1/z$. Thus the set of the zeros of $f^{{\ast}{\ast}}(t)$ and that of $f(t)$ are identical. Therefore, $f(t)$ divides $f^{{\ast}{\ast}}(t)$ or $f^{{\ast}{\ast}}(t)$ divides $f(t)$. However, $f^{{\ast}{\ast}}(1) = 2^n f^{\ast}(1)= 2^{2n}f(1)$ and hence, $f^{{\ast}{\ast}}(t) = 2^{2n}f(t)$. Finally, the uniqueness is evident. A proof of Theorem \ref{thm:14.3} is now complete. \qed \begin{ex}\label{ex:14.7} Let $f(t) = \sum_{j=0}^{2n}c_j t^{2n-j}, c_0> 0$ and $f^{\ast}(t) = \sum_{j=0}^{2n}a_j t^{2n-j}$. \\ (1) (i) Let $n = 1$ and $m = 0$. Then $M_1 = N_1 = P_1 = Q_1 = [1]$, and hence $a_0 = - c_1$ and $a_1 = -4c_0$. For example, if $f(t) = t^2 - 3t + 1$, then $f^{\ast}(t) = 3t^2 -4t +3$.\\ (ii) Let $n=3$ and $m=1$. Then $M_1= \left[\begin{array}{cc} 1&0\\ 3&1 \end{array}\right]$ and $N_1 = \tbt{1}{0}{2}{1}$, and hence\\ $\tv{a_0}{a_2}=\tbt{2}{-1}{-10}{-3}\tv{c_1}{c_3}$ and $\tv{a_1}{a_3}=\tbt{12}{-4}{-40}{-8} \tv{c_0}{c_2}$.
For example, if $f(t) = t^6 - t^5 + t^3 - t + 1$, then $f^{\ast}(t) = -(3t^6 -12t^5 - 7t^4 + 40 t^3 -7t^2 -12t+3)$.\\ (2) (i) Let $n=2$ and $m=1$. Then $M_0 = \tbt{1}{0}{2}{1}$ and $N_0= [1]$. Therefore, $\tv{a_0}{a_2}=\tbt{-2}{1}{12}{2} \tv{c_0}{c_2}$ and $(a_1)=(4c_1)$. For example, if $f(t) = t^4 -t^3 +t^2 - t +1$, then $f^{\ast} (t) = - (t^4 + 4t^3 -14t^2 + 4t + 1)$.\\ (ii) Let $n=4$ and $m=2$. Then $M_0 = \left[\begin{array}{rrr} 1&0&0\\ 4&1&0\\ 6&2&1 \end{array}\right]$, and $N_0 = \tbt{1}{0}{3}{1}$, and hence, $\left[\begin{array}{r} a_0\\ a_2\\ a_4 \end{array}\right]= \left[\begin{array}{rrr} 2&-2&1\\ -56&8&4\\ 140&20&6 \end{array}\right] \left[\begin{array}{r} c_0\\ c_2\\ c_4 \end{array}\right] $ and $\tv{a_1}{a_3}=\tbt{-12}{4}{28}{12}\tv{c_1}{c_3}$. For example, if $f(t) = \sum_{j=0}^8 (-1)^j t^{8-j}$, then $f^{\ast}(t) = t^8 +8t^7 - 44t^6 - 40t^5 + 166t^4 - 40t^3 - 44t^2 + 8t +1$. \end{ex} \begin{rem}\label{rem:14.8} Even if $f(t)$ is monic, $f^{\ast}(t)$ is not necessarily monic. Furthermore, even if $\{c_0 , c_1 , c_2 , \cdots, c_{2n}\}$ alternates in sign, $\{a_0 , a_1, a_2, \cdots, a_{2n}\}$ may not alternate in sign. \end{rem} \begin{qu}\label{qu:14.9} Let $K$ be a $c$-stable knot and $K^*$ be a stable link obtained from $K$ by $\varphi$. What can we say about $K^*$? Does there exist a geometric way to construct $K^*$ from $K$? \end{qu} \section{Montesinos knots}\label{15} In this section, we study the various stabilities of alternating Montesinos knots or links. It is not surprising to see that many Montesinos knots or links are quasi-rational, and hence, their stability properties can be determined by the method discussed earlier. We begin with a well-known characterization of alternating Montesinos knots or links. Let $K= M(e \mid \beta_1/ \alpha_1,\beta_2/ \alpha_2, \cdots,\beta_n/ \alpha_n )$ be a Montesinos knot or link. We assume that $n \geq 3$.
Montesinos knots or links fall into two classes.\\ Class I. (1) $\beta_i/ \alpha_i> 0 $ for any $i$, $1\leq i \leq n$, and $e \geq 0$, or\\ (2) $\beta_i/ \alpha_i< 0$ for any $i$, $1 \leq i \leq n$, and $e \leq 0$.\\ Class II. $0 < \sharp \{\beta_i/ \alpha_i > 0, 1 \leq i \leq n\}< n$ and $e = 0$. The following proposition is well-known. (See \cite{lt}.) \begin{prop}\label{prop:15.1} A Montesinos knot (or link) $K$ is alternating if and only if $K$ belongs to Class I. \end{prop} Since we are interested in various stabilities of alternating knots or links, we study the special classes of Montesinos knots or links described in the following theorem. \begin{thm}\label{thm:15.2} Let $K = M(e \mid \beta_1 / \alpha_1, \beta_2 / \alpha_2 , \cdots, \beta_n / \alpha_n), n \geq3$, be a Montesinos knot or link. We assume the following conditions:\\ (1) $\beta_i / \alpha_i > 0$ for any $i, 1 \leq i \leq n$,\\ (2) $e \ge 0$.\\ (3) At most one $\alpha_i$, say $\alpha_1$, can be even.\\ (4) $\beta_i \equiv 0$ (mod 2), $1 \leq i \leq n$, unless $\alpha_i$ is even.\\ (5) Let $[2a_1^{(i)}, 2a_2^{(i)}, \cdots, 2a_{m_i}^{(i)}]$ be the even continued fraction expansion of $\beta_i / \alpha_i, 1 \leq i \leq n$. For each $i$, the sequence $\{2a_1^{(i)}, 2a_2^{(i)}, \cdots, 2a_{m_i}^{(i)}\}$ alternates in sign. In particular, $2a_1^{(i)} > 0$ for $1 \leq i \leq n $.\\ Then we have the following conclusion.\\ Case 1. If all $\alpha_i , 1 \leq i \leq n$, are odd and $e$ is odd, then $K$ is a special alternating knot, and hence $K$ is $c$-stable.\\ Case 2. If all $\alpha_i , 1 \leq i \leq n$, are odd and $e$ is even, then $K$ is a 2-component link and $K$ is inversive. (A 2-component link $K$ is said to be inversive if the original (oriented) link is $c$-stable (or stable), but if the orientation of one component is reversed, the resulting (oriented) link becomes a stable (or $c$-stable) link.)\\ Case 3. If $\alpha_1$ is even and the others are odd and $e$ is even, then $K$ is a knot and is stable.
If $e = 2$ and $|a_j^{(i)}| = 1$ for all $1 \leq i \leq n$ and $1 \leq j \leq m_i $, then the maximal real zero of $\Delta_K(t)$ is at least $n+1$.\\ Case 4. If $\alpha_1$ is even and the others are odd, and $e$ is odd, then $K$ is a knot and $K$ is bi-stable. \end{thm} \begin{rem}\label{rem:15.3} We should note that our cases do not contain all alternating Montesinos knots. Since we are interested in stability of the Alexander polynomial, the assumption (5) is crucial in the theorem. Any knot or link in our list has some stability properties. \end{rem} Now, proofs of the first three cases are easy. For the first case, $K$ has a special alternating diagram as in Fig. 15.1 and hence, $K$ is $c$-stable. \byouga{15-1}{8}{15.1} For the second case, one orientation gives us a special alternating diagram, and hence it is $c$-stable. If we reverse the orientation of one component, the diagram shows that $K$ is a quasi-rational link discussed in Section 6, and hence, it is stable. See Figures 15.2 and 15.3. \byouga{15-2}{8}{15.2} \byouga{15-3}{8}{15.3} For the third case, $K$ is also a quasi-rational knot discussed in Section 6, and hence, it is stable. Since the second statement can be proven by applying the same argument used in the proof of Theorem \ref{thm:6.6} (1), we omit the details. See Fig. 15.4. \byouga{15-4}{8}{15.4} The last case is the most complicated case. (See Fig. 15.5.) \byouga{15-5}{12}{15.5} Let $r_j = \frac{\beta_j}{\alpha_j}, 1\leq j\leq n$, and write:\\ $ r_1 = [2a_1^{(1)}, -2a_2^{(1)}, \cdots, (-1)^{k-1}2a_k^{(1)}, \cdots, 2a_{2m_1+1}^{(1)}],\ {\rm where\ } a_k^{(1)} > 0, 1 \leq k \leq 2m_1 +1. $ Let $r_0 = [- e, r_1 ] = [- e, 2a_1^{(1)}, -2a_2^{(1)}, \cdots, (-1)^{k-1}2a_k^{(1)}, \cdots, 2a_{2m_1+1}^{(1)}]$. Since $e$ is odd, we see from the diagram that $K(r_0)$ is a special alternating knot.
Further, we see that $K$ is a Murasugi sum of $K(r_0)$ and the connected sum of the remaining $(n-1)$ 2-bridge knots $K(r_2) \sharp K(r_3) \sharp \cdots \sharp K(r_n)$. Since $K(r_0)$ is $c$-stable and $K(r_2) \sharp K(r_3) \sharp \cdots \sharp K(r_n)$ is stable, it is not surprising that a Murasugi sum of these knots is bi-stable, but a proof is not immediate. The rest of this section will be devoted to a proof of this case.\\ First, from the diagram, we see that a Seifert matrix $M$ of $K$ is a direct sum of $M_j, j=0, 2,3, \cdots,n$, except the first column, where $M_j$ is a Seifert matrix of $K(r_j)$ of twisted chain type. \\ \begin{center} $M = \left[ \begin{tabular}{c|c|c|c|c|c} \multicolumn{2}{c|}{}&\multicolumn{4}{c}{}\\[-2mm] \multicolumn{2}{c|}{$M_0$} & \multicolumn{3}{c}{}\\[2mm]\hline \smav& &$M_2$& \multicolumn{3}{c}{} \\\hline \smav& & &$M_3$& \multicolumn{2}{c}{}\\\hline $\vdots$& \multicolumn{3}{c|}{} &$\hspace{-1mm}\ddots\hspace{-1mm}$ & \\ \hline \smav&\multicolumn{4}{c|}{}& $M_n$ \end{tabular} \right] $ \end{center} In fact, $M$ is of the form above and has the following properties. \begin{align}\label{siki:15.1} (1)\ &{\rm \ The\ diagonal\ entries\ of\ } M_0{\rm \ are\ quite\ different\ from\ those\ of\ } M_j, j \geq 2. {\rm \ They\ are\ }\nonumber\\ & \{-\frac{e+1}{2}, \underbrace{-1, \cdots, -1}_{2a_1^{(1)}-1}, -(a_2^{(1)}+1), \underbrace{-1, \dots, -1}_{2a_3^{(1)}-1}, -(a_4^{(1)}+1) , \underbrace{-1, \cdots, -1}_{2a_5^{(1)}-1}, \nonumber\\ &-(a_6^{(1)}+1), \cdots, -(a_{2m_1}^{(1)}+1), \underbrace{-1, \cdots, -1}_{2a_{2m_1+1}^{(1)}-1} \}, \nonumber\\ (2)\ &{\rm Other\ \mbox{non-zero}\ entries\ of\ } M_0 {\rm \ are\ those\ of\ the\ one\ line\ above\ the\ diagonal,}\nonumber\\ &{\rm all\ of\ which\ are\ } 1. \nonumber\\ (3)\ &{\rm The\ diagonal\ entries\ of\ } M_j , j \geq 2,{\rm \ are\ } \{a_1^{(j)}, -a_2^{(j)}, \cdots, -a_{2m_j}^{(j)}\}.
\nonumber\\ (4)\ &{\rm The\ size\ } \rho_0{\rm \ of\ } M_0 {\rm \ is\ } \rho_0 = 1 + \sum_{j \equiv 1 (2)} (2a_j^{(1)}-1) + m_1 = \sum_{j \equiv 1 (2)} 2a_j^{(1)}, \nonumber\\ &{\rm \ while\ the\ size\ } \rho_j{\rm \ of\ } M_j{\rm \ is\ } \rho_j = 2m_j, 2 \leq j \leq n. \nonumber\\ (5)\ &{\rm The\ extra\ 1\ on\ the\ first\ column\ of\ } M{\rm \ appears\ only\ on\ the\ first\ row\ of\ } \nonumber\\ &{\rm each\ block\ matrix\ } M_2, M_3, \cdots, M_n, {\rm \ namely\, 1\ appears\ on\ the\ } \nonumber\\ &(\rho_0+ 1, 1)-, (\rho_0 + \rho_2 + 1, 1)-, \cdots, (\sum_{j=0, j \ne 1}^{n-1} \rho_j + 1, 1)\mbox{\rm -entries\ of\ } M. \end{align} Now to study the Alexander polynomial of $K$, we consider the determinant of $tM -M^T$, which is of the form $\det \left[ \begin{array}{l|c|c|c|c|c|} tM_0-M_0^T& -1\, 0 \cdots 0& \multicolumn{1}{c}{\cdots}& \multicolumn{1}{c}{\cdots}& \multicolumn{1}{c}{\cdots}& \multicolumn{1}{c}{\cdots}\\\cline{1-2} \smavt&tM_2-M_2^T& \multicolumn{4}{c}{\hugesymbol{O} }\\ \cline{1-3} \vdots& \multicolumn{1}{c|}{}&\multicolumn{1}{r|}{\smallsymbol{\ddots}}&\multicolumn{3}{c}{}\\[1mm] \cline{3-4} \vdots&\multicolumn{2}{c|}{}&tM_j-M_j^T& \multicolumn{2}{c}{}\\\cline{4-5} \vdots&\multicolumn{3}{c|}{}&{\ddots}& \multicolumn{1}{c}{}\\\cline{5-6} \vdots&\multicolumn{4}{c|}{\hugesymbol{O}}& \multicolumn{1}{c}{tM_n-M_n^T}\\ \end{array} \right] $ By expanding it along the first row and then the first column, we can show easily that \begin{align}\label{siki:15.2} \Delta_K(t) &= \det (tM -M^T )\nonumber\\ &= \prod_{j=0, j\ne 1}^n\det (tM_j - M_j^T) \nonumber\\ &\ \ + t \det \widehat{M_0} \sum_{j=2}^n \det (tM_2 - M_2^T ) \cdots \det \widehat{M_j} \cdots \det(tM_n -M_n^T) \nonumber\\ &= \Delta_{K(r_0)}(t) \Delta_{K(r_2)}(t) \cdots \Delta_{K(r_n)}(t) \nonumber\\ &\ \ + t \det \widehat{M_0}[\sum_{j=2}^{n} \Delta_{K(r_2)}(t) \cdots \Delta_{K(\widehat{r_j})} (t) \cdots \Delta_{K(r_n)}(t)], \end{align} \noindent where $\widehat{M_j}, j=0,2,3, \cdots, n$, is the matrix obtained
from $tM_j - M_j^T$ by deleting the first row and column and $\widehat{r_j} = [-2a_2^{(j)}, 2a_3^{(j)}, \cdots, -2a_{2m_j}^{(j)}], j \geq 2$. For simplicity, we denote \begin{equation*} f_0 = \det (tM_0 - M_0^T){\rm \ and\ } \widehat{f_0} = - \det \widehat{M_0}, \end{equation*} and for $j = 2,3, \cdots, n$, \begin{equation*} f_j = (-1)^{m_j} \det (tM_j - M_j^T){\rm \ and\ } \widehat{f_j} = (-1)^{m_j} \det \widehat{M_j}. \end{equation*} Since the leading coefficients of $f_0, \widehat{f_0}, f_j$ and $\widehat{f_j}$ are all positive, these polynomials are normalizations of $\Delta_{K(r_0)}(t), \det \widehat{M_0}, \Delta_{K(r_j)}(t)$ and $\Delta_{K(\widehat{r_j})}(t)$, respectively. Using these polynomials, we rewrite (\ref{siki:15.2}) as follows. \begin{align}\label{siki:15.3} \Delta_{K}(t) &= f_0 (t) f_2 (t) \cdots f_n (t) (-1)^{m_2 + \cdots +m_n}\nonumber\\ &\ \ + t \widehat{f_0} (t) \widehat{f_2} (t) f_3(t) \cdots f_n (t) (-1) (-1)^{m_2 + \cdots +m_n} \nonumber\\ & \ \ + t \widehat{f_0} (t) f_2(t) \widehat{f_3} (t) \cdots f_n (t) (-1) (-1)^{m_2 + \cdots +m_n} + \cdots \nonumber\\ & \ \ + t \widehat{f_0} (t) f_2 (t) f_3(t) \cdots f_{n-1}(t) \widehat{f_n} (t) (-1) (-1)^{m_2 + \cdots +m_n}. \end{align} Therefore, the normalization $F$ of $\Delta_{K}(t)$ is \begin{align*} F &= f_0 (t) f_2(t) \cdots f_n (t) - t \widehat{f_0}(t) \{ \widehat{f_2}(t) f_3 (t) \cdots f_n (t)\\ & \ \ + f_2(t) \widehat{f_3}(t) f_4 (t) \cdots f_n(t) + \cdots + f_2 (t) f_3(t) \cdots f_{n-1}(t) \widehat{f_n}(t) \}. \end{align*} Further, $\widehat{f_0}, \widehat{f_2}, \cdots, \widehat{f_n}$ are Alexander polynomials of links and hence, they are divisible by $t-1$. Let $f_0^{\ast} = \frac{\widehat{f_0}}{t-1}$ and $f_j^{\ast} = \frac{\widehat{f_j}}{t-1}, 2 \leq j \leq n$. Then $f_0^{\ast}$ and $f_j^{\ast}$ are reciprocal, and each $f_j^{\ast} (t)$ is also stable. We can write \begin{equation*} F = f_0 f_2 \cdots f_n - t(t-1)^2 f_0^{\ast} \{ \sum_{j=2}^n f_2 \cdots f_j^{\ast} \cdots f_n \}.
\end{equation*} Let $F_1 = f_0 f_2 \cdots f_n$ and $F_2 = t(t-1)^2 f_0^{\ast} \{ {\displaystyle \sum_{j=2}^n} f_2 \cdots f_j^{\ast} \cdots f_n \}$, and further, for $2\le k \le n$, $F_{2,k} =t(t-1)^2f_0^{*}f_2\cdots f_k^{*}\cdots f_n$. Since $\deg F={\displaystyle \sum_{j=0,j\neq1}^{n}} \deg f_j$ and $|\sigma(K)|=\rho_0=\deg f_0$, it suffices to show that $F$ has at least $2q=\sum_{j=2}^{n} \deg f_j$ real zeros. The proof will be divided into two parts. In the first part we consider the case where no zeros of $f_2 (t), \cdots, f_n (t)$ are in common, i.e., $f_2 (t) \cdots f_n (t)$ is simple. Note that each $f_j (t), j \ne 0$, has no multiple zeros. In the second part, we consider the case where these Alexander polynomials have some zeros in common. Case (I). $f_2 f_3 \cdots f_n$ is simple. We will show that the two curves $y_1=F_1$ and $y_2=F_2$ intersect in at least $q =\sum_{j= 2}^n \rho_j/2$ points in $[0,1]$. Let $\gamma_1< \gamma_2< \cdots< \gamma_q$ be all the (real) zeros of $f_2 f_3 \cdots f_n$ in $[0,1]$. Note that $\deg F_1=\deg F_2+1$ and $F_1(1)\neq 0$. First we see that (1) $F_1(0) > 0$, and (2) $F_2(0) =0$ and $F_2^{\prime}(0) > 0$.\\ In fact, (1) $F_1(0) = \prod_{j \ne 1}f_j (0) > 0$, since the leading coefficient of $f_j, j \ne 1$, is positive and $f_j$ is reciprocal. (2) follows, since $f_j^{\ast}(0) > 0$ for $j \ne 1$. Now, suppose $\gamma_1$ is a zero of $f_k$. Then $f_k (\gamma_1) = 0$, but $f_k^{\ast}(\gamma_1) \ne 0$, since $f_k f_k^{\ast}$ is simple. Therefore, $F_{2,k}(\gamma_1) \ne 0$, but $F_{2,j}(\gamma_1) = 0$ for $j \ne k$. Further, since $F_{2,k}^{\prime}(0) > 0$, it follows that $F_{2,k}(\gamma_1) > 0$, and hence $F_1$ and $F_2$ intersect in $[0, \gamma_1]$. Next, we prove inductively the following lemma.
\begin{lemm}\label{lem:15.4} (1) $F_1 (t) \leq 0$ in $[\gamma_{2k-1}, \gamma_{2k}], 1 \leq k \leq [\frac{q}{2}]$, and $F_1 (t) \geq 0$ in $[\gamma_{2k}, \gamma_{2k+1}], 1 \leq k \leq [\frac{q-1}{2}]$.\\ (2) $F_2 (\gamma_{2k+1}) > 0, 0 \leq k \leq [\frac{q-1}{2}]$ and $F_2(\gamma_{2k}) < 0, 1 \leq k \leq [\frac{q}{2}]$.\\ Therefore, $F_1$ and $F_2$ intersect in $[\gamma_i , \gamma_{i+1}], 1 \leq i \leq q-1$, and hence $F_1$ and $F_2$ intersect in at least $q$ points in $[0, 1]$. \end{lemm} Note that $F_1$ and $F_2$ do not intersect in $[\gamma_q, \gamma_{q+1}] \ni 1$. {\it Proof.} Since (1) is obvious, we prove only (2) by induction on $k$. Since we already showed that $F_2 (\gamma_1) > 0$, we consider $F_2(\gamma_2)$. Suppose $\gamma_2$ is a zero of $f_s (t)$. Then $F_{2,j}(\gamma_2) = 0$ for $j \ne s$. We know that $\gamma_1$ is a zero of $f_k$.\\ (1) If $s=k$, then $f_k^{\ast}(\gamma_2) \ne 0$ and $F_{2,k}(\gamma_1) >0$. Since $\gamma_1$ and $\gamma_2$ are zeros of $f_k (=f_s)$, and $f_k$ and $(t-1) f_k^{\ast}$ are interlaced, we see that $f_k^{\ast}$ has a zero $\beta_0$ in $[\gamma_1, \gamma_2]$ and hence $F_{2,k}$ crosses the $t$-axis at $\beta_0$. Therefore, $F_{2,k}(\gamma_2) < 0$. Since $F_{2,j}(\gamma_2) = 0$ for $j \ne k$, it follows that $F_2(\gamma_2) < 0$.\\ (2) If $s \ne k$, then $F_{2,j}(\gamma_2)= 0$ for $j \ne s$ and further $F_{2,s}(\gamma_2) < 0$, since $F_{2,s}(\gamma_2) \ne 0$ and $F_{2,s}^{\prime} (\gamma_1) < 0$. Therefore, $F_2 (\gamma_2) < 0$. Now consider $F_2 (\gamma_m), 1 \leq m \leq q$. Suppose $\gamma_m$ is a zero of $f_p$. Case (1) $\gamma_m$ is the smallest zero of $f_p$. Then $f_p$ is not 0 at $\gamma_1, \gamma_2, \cdots, \gamma_{m-1}$ and it is obvious that (i) if $m$ is even, then $F_{2,p}(\gamma_m) < 0$, and $F_{2,\ell}(\gamma_m) = 0$ for $\ell \ne p$ and hence $F_2(\gamma_m) < 0$, and (ii) if $m$ is odd, then $F_{2,p}(\gamma_m) > 0$, and $F_{2,\ell}(\gamma_m) = 0$ for $\ell \ne p$ and hence $F_{2}(\gamma_m) > 0$.
Case (2): There exists $\gamma_h, h < m$, that is the closest zero of $f_p$ to $\gamma_m$, i.e., $f_p(\gamma_h) = 0$, but $f_p (\gamma_{h+1}) \ne 0,\cdots, f_p(\gamma_{m-1})\ne 0$. If $h$ is even, then, by the induction hypothesis, $F_{2,p}(\gamma_h) < 0$. Further, if $m$ is even, there are an odd number of zeros $\gamma_{h+1},\cdots, \gamma_{m-1}$ between $\gamma_h$ and $\gamma_m,$ and $F_{2,p}$ crosses the $t$-axis at these points. However, since $f_p$ and $(t-1)f_p^*$ are interlaced, there is exactly one zero of $f_p^*$, say $\beta_1$, in $[\gamma_h, \gamma_m],$ i.e., $\gamma_h < \beta_1 < \gamma_m$, and $F_{2,p}$ must cross the $t$-axis at $\beta_1$ as well. Thus $F_{2,p}(\gamma_m) < 0$. Since $F_{2,\ell}(\gamma_m)= 0$ for $\ell \ne p$, we have $F_2 (\gamma_m)< 0$. The same argument works for the other cases, where (a) $h$ is even and $m$ is odd, and (b) $h$ is odd and $m$ is even or odd. This proves Lemma \ref{lem:15.4}, and Theorem \ref{thm:15.2} in the non-multiple zero case. Case (II). $f_2 f_3 \cdots f_n$ is not simple. Let $\gamma_1 < \gamma_2 < \cdots < \gamma_{2d}$ be all the distinct real zeros of $f_2 f_3 \cdots f_n$. Let $p_k$ be the multiplicity of $\gamma_k, 1 \leq k \leq 2d$, and $a_j$ the leading coefficient of $f_j$. Note that $a_j > 0, 2\leq j \leq n$. Since $f_2 f_3 \cdots f_n = a_2 a_3 \cdots a_n \prod_{k=1}^{2d} (t -\gamma_k)^{p_k}$, we can write \begin{align*} F = &f_0 f_2 f_3 \cdots f_n - t(t-1)^2 f_0^{\ast} \sum_{j=2}^{n} f_2 \cdots f_j^{\ast}\cdots f_n\\ = & a_2 a_3 \cdots a_n f_0 \prod_{k=1}^{2d} (t- \gamma_k)^{p_k} - t(t-1)^2 f_0^{\ast} \sum_{j=2}^n a_2 a_3 \cdots a_n f_j^{\ast}[\prod_{k=1}^{2d}(t- \gamma_k)^{p_k} / f_j]\\ =& a_2 a_3 \cdots a_n \prod_{k=1}^{2d}(t- \gamma_k)^{p_k-1} [f_0 \prod_{k=1}^{2d}(t- \gamma_k) - t(t-1)^2 f_0^{\ast} \sum_{j=2}^n f_j^{\ast} \prod_{k=1}^{2d}(t- \gamma_k) / f_j]. 
\end{align*} Since $f_j$ is simple and $\{\gamma_k\}$ is the set of all distinct zeros of $f_2 f_3 \cdots f_n$, we see that $\prod_{k=1}^{2d}(t-\gamma_k)/ f_j$ is a (real) polynomial, which we denote by $g_j$. Therefore, to prove Theorem \ref{thm:15.2}, it suffices to show that $G = f_0 \prod_{k=1}^{2d}(t- \gamma_k) - t(t-1)^2 f_0^{\ast} \sum_{j=2}^n f_j^{\ast} g_j $ has $2d$ real zeros, or equivalently, that the two curves $y_1 = G_1 (t) = f_0 \prod_{k=1}^{2d}(t- \gamma_k)$ and $y_2 = G_2 (t) = t(t-1)^2 f_0^{\ast}\sum_{j=2}^n f_j^{\ast} g_j$ have $d$ points of intersection in $[0,1]$. Let $G_{2,j} = t(t-1)^2 f_0^{\ast} f_j^{\ast} g_j$, so that $G_2 = \sum_{j=2}^n G_{2,j}$. Note that $f_0$ and $f_0^{\ast}$ do not have any real zeros. Suppose that $\gamma_j$ is a zero of $f_{j_1}, f_{j_2}, \cdots, f_{j_{p_j}}$, i.e., $\gamma_j$ is a zero of $f_2 \cdots f_n$ of multiplicity $p_j$. Since the zeros of $g_k$ consist of all real zeros of $G_1$ except those of $f_k$, it follows that $G_{2,k}(\gamma_j)\neq 0$ if and only if $\gamma_j$ is a zero of $f_k$. Therefore, as in the proof of Lemma \ref{lem:15.4}, we can prove inductively that, for each $\lambda$, $1 \leq \lambda\leq p_j$, $G_{2, j_{\lambda}}(\gamma_j)> 0$ or $< 0$, according as $j$ is odd or even. Since $G_{2, \ell}(\gamma_j) = 0$ if $\ell \ne j_{\lambda}, 1 \leq \lambda \leq p_j$, it follows that $G_2 (\gamma_j) = \sum_{\lambda = 1}^{p_j} G_{2,j_{\lambda}} (\gamma_j)> 0$ or $< 0$, according as $j$ is odd or even. Therefore, $y_1= G_1$ and $y_2 = G_2$ intersect in $[\gamma_{j-1},\gamma_j]$, and finally, the two curves $y_1 = G_1$ and $y_2 = G_2$ intersect in at least $d$ points in $[0,1]$. The proof of Theorem \ref{thm:15.2} is now complete. \qed \section{Multivariate stable link polynomials}\label{16} In this section, we study the stability of the 2-variable Alexander polynomial $\Delta_{K(r)}(x,y)$ of a 2-bridge link $K(r)$, $r= \beta/ 2\alpha, 0 < \beta < 2\alpha$. Let $\mathcal H$ be the upper half-plane of $\CC$. 
If the reduced Alexander polynomial of $K(r)$, i.e., $\Delta_{K(r)}(t) = (t-1)\Delta_{K(r)}(t,t)$, is not real stable, then $\Delta_{K(r)}(x,y)$ is not $\mathcal H$-stable. Therefore, we only need to consider real stable links. If the continued fraction expansion of $r$ gives an alternating sequence, then $K(r)$ is real stable. However, for such links, the $\mathcal H$-stability problem is completely solved (Proposition \ref{prop:16.5}). Therefore, we should study exceptional stable links. In this section, we solve the $\mathcal H$-stability problem for the simplest exceptional stable 2-bridge links. Now we begin with a definition. \begin{dfn}\label{dfn:16.1} For a positive integer $n$, we define $G_n = \frac{x^n - y^n}{x-y}$ and $G_{-n}= \frac{-1}{(xy)^n} G_n$. In particular, we define $G_0 = 0$. \end{dfn} It is easy to see that $G_n$ is $\mathcal H$-stable if and only if $|n| \leq 2$. The following proposition is well-known. \begin{prop}\label{prop:16.2} Let $r = [2a_1, 2b_1, 2a_2,2b_2, \cdots, 2a_n, 2b_n, 2a_{n+1}]$. Then $\Delta_{K(r)}(x, y)$ is given by \begin{equation}\label{siki:16.1} \Delta_{K(r)}(x,y) = \sum_{0 \leq m \leq n} b_{j_1}b_{j_2}\cdots b_{j_m} (x-1)^m (y-1)^m G_{\mu_1}G_{\mu_2} \cdots G_{\mu_{m+1}}, \end{equation} where the summation is taken over all indices $j_k$ such that $1 \leq j_1 < j_2 < \cdots < j_m \leq n$ and $\mu_1 = a_1 + a_2 + \cdots + a_{j_1}$, $\mu_2 = a_{j_1+1} + \cdots + a_{j_2}, \cdots, \mu_k = a_{j_{k-1}+1} + a_{j_{k-1}+2} + \cdots + a_{j_k}$, $\cdots$, $\mu_{m+1} = a_{j_m+1} + a_{j_m+2} + \cdots + a_{n+1}$. \end{prop} We should note that (\ref{siki:16.1}) is slightly different from the original formula given in \cite{kane}, since the orientation of one component of $K(r)$ in \cite{kane} is different from ours. 
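As a quick sanity check, outside the paper's formal development, the claim that $G_n$ is $\mathcal H$-stable if and only if $|n| \leq 2$ can be tested numerically: the zero set of $G_n$ consists of the lines $x = \omega y$, where $\omega \neq 1$ runs over the $n$-th roots of unity, so $\mathcal H$-stability fails exactly when such a line meets $\mathcal H \times \mathcal H$. The sketch below scans directions $y = e^{i\varphi}$ in the upper half-plane; the helper name `Gn_H_stable` is ours, not the paper's (and since $G_{-n}$ differs from $G_n$ by a unit, it has the same zeros).

```python
import cmath
import math

def Gn_H_stable(n, steps=200):
    """Search for a common zero (x, y) of G_n = (x^n - y^n)/(x - y)
    with both coordinates in the open upper half-plane H.
    The zeros of G_n are the lines x = w*y with w != 1 an n-th root of
    unity, so it suffices to scan directions y = e^{i*phi}, phi in (0, pi)."""
    for k in range(1, n):
        w = cmath.exp(2j * math.pi * k / n)
        for j in range(1, steps):
            phi = math.pi * j / steps   # y = e^{i*phi} lies in H
            y = cmath.exp(1j * phi)
            x = w * y                   # (x, y) is a zero of G_n
            if x.imag > 1e-12:          # x also in H: G_n is not H-stable
                return False
    return True
```

Running this reports stability exactly for $n = 1, 2$, in agreement with the statement above.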
\begin{ex}\label{ex:16.3} (1) If $r = [2a_1]$, then $\Delta_{K(r)}(x,y) = G_{a_1}$.\\ (2) If $r = [2a_1, 2b_1, 2a_2]$, then $\Delta_{K(r)}(x,y) = b_1 (x-1) (y-1) G_{a_1}G_{a_2} + G_{a_1+a_2}$.\\ (3) If $r = [2a_1 , 2b_1, 2a_2, 2b_2, 2a_3]$, then \begin{align*} \Delta_{K(r)}(x,y) &= b_1 b_2 (x-1)^2 (y-1)^2 G_{a_1} G_{a_2} G_{a_3}\\ &\ \ + (x-1)(y-1) \{b_1 G_{a_1}G_{a_2 +a_3} + b_2 G_{a_1+a_2} G_{a_3}\} + G_{a_1 +a_2+a_3}. \end{align*} \end{ex} Now the following simple proposition gives a strong restriction on $\mathcal H$-stability for 2-component links. \begin{prop}\label{prop:16.4} Let $K = K_1 \cup K_2$ be a 2-component link. Suppose that $\Delta_K (x,y)$ is not a constant. (If $\Delta_K (x,y)$ is a constant, then $K$ is always $\mathcal H$-stable.) If $K$ is $\mathcal H$-stable, then both $K_1$ and $K_2$ are stable, and further, $|lk(K_1, K_2)| \leq 2$. \end{prop} {\it Proof.} By Theorem \ref{thm:3.1}(a), if $\Delta_K (x,y)$ is $\mathcal H$-stable, then $\Delta_K (x,1)$ and $\Delta_K (1,y)$ are $\mathcal H$-stable. Further, $\Delta_K (x,1) = \frac{x^{\ell} -1}{x-1} \Delta_{K_1} (x)$ and $\Delta_{K}(1,y)=\frac{y^\ell -1}{y-1}\Delta_{K_2}(y)$ \cite{tor}, where $\ell = {\it lk}(K_1, K_2)$, and hence $\Delta_{K_1}(x), \Delta_{K_2} (y)$ and $\frac{x^{\ell}-1}{x -1}$ are $\mathcal H$-stable. The proposition follows immediately. \qed \begin{prop}\label{prop:16.5} Let $r = [2a_1, -2b_1, 2a_2, -2b_2, \cdots, 2a_n, -2b_n, 2a_{n+1}]$, $a_j, b_j > 0$. Then $K(r)$ is $\mathcal H$-stable if and only if (1) $n = 0$ and $a_1 = 1$ or $2$, or (2) $n=1$ and $a_1 = a_2 = 1$. \end{prop} {\it Proof.} (a) Suppose $K(r)$ is $\mathcal H$-stable. Then $|\ell| = |{\it lk}(K_1, K_2)| \leq 2$. Since $\ell = \sum_{j =1}^{n+1}a_j$, we have (1) or (2). (b) Suppose (1) or (2) holds. If (1) holds, then $\Delta_K (x,y) = 1$ or $x+y$, and both are $\mathcal H$-stable. Suppose (2) holds. Then $r =[2, -2b_1, 2]$ and $\Delta_K (x,y)$ = $-b_1 (x-1)(y-1) +(x+y)$ = $-b_1 + (b_1 +1) (x+y) - b_1 xy$. 
Since $\Delta_K (x,y)$ is multi-affine, it is $\mathcal H$-stable if (and only if) $\det \tbt{-b_1}{b_1+1}{b_1+1}{- b_1} < 0$ (Example \ref{ex:3.8}). Since $b_1 > 0$, the determinant is obviously negative. This proves Proposition \ref{prop:16.5}. \qed We now consider exceptional stable links. \begin{prop}\label{prop:16.6} Let $r = [2a,2b, -2c]$, $a,b,c > 0$ and $a \geq c$. (1) Suppose $a=c$. Then $K(r)$ is $\mathcal H$-stable if and only if $a = 1$ or $2$. (2) Suppose $a>c$. Then $K(r)$ is not $\mathcal H$-stable, unless $(a,c) = (2,1)$. \end{prop} {\it Proof.} (1) Suppose $a=c$. Then $\Delta_K (x,y)= b(x-1)(y-1)G_a^2$. If $a \geq 3$, then $G_a$ is not $\mathcal H$-stable and hence $K(r)$ is not $\mathcal H$-stable. However, if $a = c = 1$ or $2$, then each factor is $\mathcal H$-stable and hence $K(r)$ is $\mathcal H$-stable. (2) Suppose $a > c$. Since $lk(K_1,K_2) = a-c$ and, by Proposition \ref{prop:16.4}, $|lk(K_1,K_2)| \leq 2$, we have $c = a-1$ or $c = a-2$. From (\ref{siki:16.1}), we have $\Delta_{K(r)} (x,y)$ = $b (x-1) (y-1) G_a G_{-c}+ G_{a-c} = b (x-1)(y-1) \frac{x^a - y^a}{x-y} \frac{x^c - y^c}{x-y} \frac{-1}{(xy)^c} + \frac{x^{a-c} - y^{a-c}}{x-y}$. Let $F(x,y) = (xy)^c \Delta_{K(r)} (x,y) =-b (x-1)(y-1) \frac{x^a - y^a}{x-y} \frac{x^c - y^c}{x-y} + \frac{x^{a-c} - y^{a-c}}{x-y}(xy)^c$. If $\Delta_{K(r)} (x,y)$ is $\mathcal H$-stable, so is $\Delta_{K(r)} (x,-1)$ by Theorem \ref{thm:3.1} (a), and hence $f(x) = F(x, -1) = 2b (x-1) \frac{x^a - (-1)^a}{x+1} \frac{x^c - (-1)^c}{x+1} + \frac{x^{a-c} - (-1)^{a-c}}{x+1}(-1)^c x^c$ must be $\mathcal H$-stable. Case (I): $c=a-1$ and $a \geq 3$. Then $f(x) =2b (x-1) \frac{x^a - (-1)^a}{x+1} \frac{x^{a-1}-(-1)^{a-1}}{x+1} + (-1)^{a-1} x^{a-1}$. We show that if $a \geq 3$, then $f(x)$ is not (real) stable. Since $f(x)$ is reciprocal, consider the modified polynomial $g(z)$ of $f(x)$. Case (a): $a$ is even, say $2m \geq 4$. 
Then $f(x) =2b (x-1) \frac{x^{2m} - 1}{x+1} \frac{x^{2m-1} +1}{x+1} -x^{2m-1}$, and hence $g(z)$ can be written as $g(z) = 2b g_1 (z) g_2(z) -1$, where $g_1(z)$ is the modification of $f_1(x) = (x-1) \frac{x^{2m}-1}{x+1}$ and $g_2(z)$ is that of $f_2(x) = \frac{x^{2m-1}+1}{x+1}$. Since both $f_1$ and $f_2$ are $c$-stable, all zeros of $g_1 (z)$ and $g_2 (z)$ are in $[-2,2]$, and further, since $m \geq 2$, at least one zero of $g_1 (z) g_2 (z)$ is in $(-2, 2)$. Therefore, it is impossible that all points of intersection of the two curves $z_1 = g_1(z) g_2(z)$ and $z_2= \frac{1}{2b}, b > 0$, lie outside of $(-2, 2)$, and hence $f(x)$ cannot be stable. Case (b): $a$ is odd, say $2m+1, m \geq 1$. Then $f(x) = 2b (x-1) \frac{x^{2m+1} + 1}{x+1} \frac{x^{2m} -1}{x+1} + x^{2m}$ and the modification is $g (z) = 2bg_1(z) g_2 (z) +1$, where $g_1 (z)$ is the modification of $\frac{x^{2m+1}+1}{x+1}$ and $g_2(z)$ is that of $(x-1) \frac{x^{2m} - 1}{x+1}$. Therefore, all the zeros of $g_1 (z)$ are in $(-2,2)$. As was proved in Case (a), $f(x)$ cannot be stable. Case (II): $c=a-2$ and $a \geq 3$. Then \begin{align*} \Delta_{K(r)}(x,y) &= b (x-1) (y-1) G_a G_{-(a-2)} + G_2\\ &= b (x-1)(y-1) \frac{x^{a} - y^{a}}{x-y} \frac{x^{a-2}- y^{a-2}}{x-y} \frac{-1}{(xy)^{a-2}} + x + y \end{align*} and \begin{align*} f(x) &= (-1)^{a-2} x^{a-2} \Delta_{K(r)}(x, -1)\\ & = 2b (x-1) \frac{x^a - (-1)^a}{x+1} \frac{x^{a-2} - (-1)^{a-2}}{x+1} + (x -1) (-1)^{a-2} x^{a-2}. \end{align*} Case (a): $a$ is even, say $2m \geq 4$. Then $f(x) = 2b (x-1) \frac{x^{2m} - 1}{x+1} \frac{x^{2m-2} - 1}{x+1} + (x-1) x^{2m-2}$, and hence $h(x)= \frac{f(x)}{x-1} = 2b (x-1)^2 \frac{x^{2m} - 1}{x^2-1} \frac{x^{2m-2} - 1}{x^2-1} + x^{2m-2}$ is reciprocal. The modification $\lambda (z)$ of $h(x)$ is $\lambda (z) = 2b(z-2) \lambda_1(z) \lambda_2(z) +1$, where $\lambda_1(z)$ and $\lambda_2 (z)$ are the modifications of $\frac{x^{2m}-1}{x^2-1}$ and $\frac{x^{2m-2}-1}{x^2-1}$, respectively. 
Since all the zeros of $\lambda_1(z)$ and $\lambda_2(z)$ are in $(-2,2)$, $h (x)$ cannot be real stable. Case (b): $a$ is odd, say $2m+1$. The same argument works to show that $h(x)$ is not real stable, and hence $\Delta_{K(r)}(x,y)$ is not $\mathcal H$-stable. \qed If $(a,c) = (2,1)$, we have the following proposition. \begin{prop}\label{prop:16.7} Let $r = [4,2,-2]$. Then $K(r)$ is $\mathcal H$-stable. \end{prop} {\it Proof.} $xy\Delta_{K(r)} (x,y) = -(x-1)(y-1) G_2 G_1 +xy G_1 = -(x-1)(y-1) (x+y) +xy = -(xy-x-y)(x+y-1)$. Each factor is $\mathcal H$-stable by Example \ref{ex:3.8}, and hence $K(r)$ is $\mathcal H$-stable. \qed \begin{qu}\label{qu:16.8} For $r=[4,2b,-2]$ and $b \geq 2$, is $K(r)$ $\mathcal H$-stable? \end{qu} Finally, we prove that the 2-variable Alexander polynomial of a 2-bridge link has the same property as Theorem \ref{thm:8x3} (2). Using this, we can systematically obtain exceptional $\mathcal H$-stable 2-component 2-bridge links. \begin{thm}\label{thm:16.9} Let $s = [2a_1, 2b_1, 2a_2,2b_2, \cdots, 2a_n,2b_n, 2a_{n+1}]$, where $a_j \neq 0$ for $1 \leq j \leq n+1$ and $b_j \neq 0$ for $1 \leq j \leq n$. Let $r = [s, 2k, -s^{-1}], k \neq 0$. Then \begin{equation}\label{siki:16.2} \Delta_{K(r)}(x,y) = k(x-1)(y-1) [\Delta_{K(s)} (x,y)]^2, \end{equation} and hence, $K(r)$ is $\mathcal H$-stable if and only if $K(s)$ is $\mathcal H$-stable. \end{thm} {\it Proof.} Consider a sequence of integers\\ $A = \{a_1, b_1, a_2, b_2, \cdots, a_n, b_n, \cdots, a_{2n+1}, b_{2n+1}, a_{2n+2}\}$. Take an ordered subset $C=\{b_1, b_2, \cdots, b_{2n+1}\}$. Let $\widehat{C}$ be the set of all ordered subsets of $C$, i.e., $\widehat{C} \ni U = \{b_{j_1}, b_{j_2}, \cdots, b_{j_k}\}$, where $1 \leq j_1 < j_2 < \cdots < j_k \leq 2n+1$. We define a mapping $\rho_{2n+1} : \widehat{C} \longrightarrow \ZZ[x^{\pm 1}, y^{\pm 1}]$ as follows. 
\begin{equation}\label{siki:16.n1} \rho_{2n+1} (U) = b_{j_1}b_{j_2}\cdots b_{j_k} (x-1)^k (y-1)^k G_{\mu_1}G_{\mu_2}\cdots G_{\mu_{k+1}}, \end{equation} where $\mu_1 = a_1 + a_2 + \cdots + a_{j_1}$, $\mu_2 = a_{j_1+1} + \cdots + a_{j_2}, \cdots$, $\mu_{k+1} = a_{j_k +1}+ a_{j_k+2} + \cdots + a_{2n+2}$. For example, $\rho_{2n+1} (\phi) = G_{a_1+ a_2 + \cdots + a_{2n+2}}$ and $\rho_{2n+1} (C) = b_1 b_2 \cdots b_{2n+1} (x-1)^{2n+1} (y-1)^{2n+1} G_{a_1}G_{a_2} \cdots G_{a_{2n+2}}$. Now, to each $U$, we call $U^{\ast}= \{b_{2n+2-j_k}, b_{2n+2-j_{k-1}}, \cdots, b_{2n+2-j_1}\}$ the dual of $U$. We use these concepts to prove the theorem. In the following, we assume that \begin{align}\label{siki:16.n2} &{\rm for}\ 1 \leq j \leq n+1,\ a_{n+1+j} = - a_{n+2-j},\ {\rm and}\\ &{\rm for}\ 1 \leq j \leq n,\ b_{n+1+j} = - b_{n+1-j}. \end{align} Therefore, $A$ becomes \begin{align*} \{a_1, b_1, a_2, b_2, \cdots, a_{n+1},b_{n+1}, -a_{n+1}, -b_n, -a_n, \cdots, -b_2, -a_2, -b_1, -a_1\} \end{align*} and we can write \begin{equation}\label{siki:16.n3} \rho_{2n+1} (U^{\ast}) = b_{2n+2-j_k}\cdots b_{2n+2-j_1} (x-1)^k (y-1)^k G_{-\mu_{k+1}}G_{-\mu_k}\cdots G_{-\mu_1}. \end{equation} Then we prove \noindent {\bf Claim 1.} {\it $\rho_{2n+1}(U) + \rho_{2n+1}(U^{\ast}) = 0$, and hence $\sum_{U}\rho_{2n+1}(U) = 0$, where the summation is taken over all $U$ that do not contain $b_{n+1}$. } {\it Proof.} If some $\mu_i = 0$, then $G_{\mu_i}=G_{-\mu_i} = 0$, and hence $\rho_{2n+1}(U) = \rho_{2n+1}(U^{\ast}) = 0$. Therefore, we may assume that none of the $\mu_i$ is $0$. Let $m$ be the number of negative elements in $U$. Then that number in $U^{\ast}$ is $k-m$. Let $q$ be the number of negative integers in the set $\{\mu_{1}, \cdots, \mu_{k+1}\}$. Then that number in $U^{\ast}$ is $k+1-q$. Therefore, the number of occurrences of $(-1)$ in $\rho_{2n+1}(U)$ is $m+q$, while that in $\rho_{2n+1} (U^{\ast})$ is $k-m+k+1-q \equiv m+q+1$ (mod 2). 
Next, we count the exponent of the factor \mbox{$\frac{1}{xy}$} in $\rho_{2n+1} (U)$ and $\rho_{2n+1} (U^{\ast})$. Suppose $\mu_{\ell_1}, \mu_{\ell_2}, \cdots, \mu_{\ell_q}$ are negative. Then in $\rho_{2n+1}(U)$, the exponent of $\frac{1}{xy}$ is $ |\mu_{\ell_1}| + | \mu_{\ell_2}| + \cdots +| \mu_{\ell_q}|$, while in $\rho_{2n+1}(U^{\ast})$ it is $\sum_{\lambda \neq \ell_j}\mu_{\lambda}$. Since $\sum_{\lambda \neq \ell_j} \mu_{\lambda} - \sum_{j=1}^q |\mu_{\ell_j}| = \sum_{j=1}^{2n+2} a_j= 0$, we see that $\rho_{2n+1}(U) + \rho_{2n+1}(U^{\ast}) = 0$. \qed Now consider the short sequence $A_0 = \{ a_1, b_1, a_2, b_2, \cdots, a_n, b_n, a_{n+1}\}$, the first half of $A$. Let $B = \{ b_1, b_2, \cdots, b_n\}$ be an ordered set and $\widehat{B}$ the set of all ordered subsets of $B$. Then we have a mapping $\rho_n: \widehat{B} \longrightarrow \ZZ[x^{\pm 1}, y^{\pm 1}]$. Given $U = \{b_{j_1}, b_{j_2}, \cdots, b_{j_p}, b_{n+1}, b_{j_{p+2}}, \cdots, b_{j_k}\} \in \widehat{C}$, we can define two sets $W_{+}$ and $W_{-}$ in $\widehat{B}$ as follows: $W_{+} = \{b_{j_1}, b_{j_2},\cdots, b_{j_p}\}$ and $W_{-} =\{b_{2n+2-j_k}, b_{2n+2-j_{k-1}}, \cdots, b_{2n+2-j_{p+2}}\}$. Then we claim \noindent {\bf Claim 2.} {\it $\rho_{2n+1}(U) = b_{n+1}(x-1) (y-1) \rho_n (W_{+}) \rho_n (W_{-}) \frac{-1}{(xy)^\alpha}$, where $\alpha = a_1+ a_2 + \cdots + a_{n+1}$. } {\it Proof.} Since $U \ni b_{n+1}$, we can write \begin{align*} \rho_{2n+1}(U) = \ &b_{n+1}(x-1)(y-1) \left(b_{j_1} \cdots b_{j_p}(x-1)^p (y-1)^p G_{\mu_1} \cdots G_{\mu_{p+1}}\right)\\ &\left(b_{j_{p+2}} \cdots b_{j_k} (x-1)^{k-p-1}(y-1)^{k-p-1} G_{\mu_{p+2}} \cdots G_{\mu_{k+1}}\right). \end{align*} Therefore, it suffices to show that \begin{equation}\label{siki:16.n4} (x-1)^{k-p-1}(y-1)^{k-p-1}(b_{j_{p+2}} \cdots b_{j_k}) G_{\mu_{p+2}}\cdots G_{\mu_{k+1}} = \rho_n (W_{-}) \frac{-1}{(xy)^\alpha}. 
\end{equation} Since $\rho_n (W_{-}) = (x-1)^{k-p-1}(y-1)^{k-p-1}(b_{2n+2-j_k} \cdots b_{2n+2-j_{p+2}}) G_{-\mu_{k+1}} \cdots G_{-\mu_{p+2}}$, as before, we compare the number of occurrences of $-1$ and the exponent of $\frac{1}{xy}$ in the LHS and RHS of (\ref{siki:16.n4}). First, let $d$ be the number of negative $b_i$ in the LHS. Then that number in the RHS is $k-p-1-d$. Let $q$ be the number of negative $\mu_{\lambda}$ in the set $\{\mu_{p+2}, \cdots, \mu_{k+1}\}$ in the LHS. Then that number in the RHS is $k-p-q$, and hence the sign of the RHS is opposite to that of the LHS. Next, we count the exponent of $\frac{1}{xy}$. Let $-\nu_1, -\nu_2, \cdots, -\nu_q$ be all the negative members in $\{\mu_{p+2}, \cdots, \mu_{k+1}\}$, and $\nu_{q+1}, \cdots, \nu_{k-p}$ all the positive members. Then the exponent of $\frac{1}{xy}$ in the LHS is exactly $\nu_1 + \nu_2 + \cdots + \nu_q$, while that in the RHS is $\nu_{q+1} + \cdots + \nu_{k-p}$. Since $\nu_{q+1}+ \cdots + \nu_{k-p} -(\nu_1 + \cdots + \nu_q) = \sum_{j=1}^{n+1} a_j = \alpha$, Claim 2 follows. \qed Claim 2 easily implies the following \noindent {\bf Claim 3.} {\it If $U \in \widehat{C}$ contains $b_{n+1}$, then $U^{\ast} \ni b_{n+1}$ and $\rho_{2n+1}(U) = \rho_{2n+1}(U^{\ast})$. } From Claims 1--3, we have \noindent{\bf Claim 4.} {\it $\sum_{U \in \widehat{C}} \rho_{2n+1}(U)$ = $b_{n+1}(x-1) (y-1) [\sum_{V \in \widehat{B}} \rho_n (V)]^2(\frac{-1}{(xy)^{\alpha}})$. } Hence Theorem \ref{thm:16.9} follows. \qed \begin{ex}\label{ex:16.10} Let $s=[4,2,-2]$. Then $K(s)$ is $\mathcal H$-stable by Proposition \ref{prop:16.7}, and hence for $r=[s,2k,-s^{-1}], k \neq 0$, $K(r)$ is $\mathcal H$-stable. \end{ex} If $K(s)$ is a 2-bridge knot, Theorem \ref{thm:16.9} does not hold, but $\Delta_{K(r)}(x,y)$ has a nice form. The following theorem is proved by an argument similar to that used in the proof of Theorem \ref{thm:16.9}. The details will appear in a separate paper. 
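Theorem \ref{thm:16.9} can be spot-checked numerically in its simplest instance $s=[2a]$, where $\Delta_{K(s)}=G_a$, and Proposition \ref{prop:16.2} applied to $r=[2a,2k,-2a]$ gives $k(x-1)(y-1)G_aG_{-a}+G_0$; this agrees with (\ref{siki:16.2}) up to the unit $-1/(xy)^a$. A minimal sketch of such a check, with our own helper `G` evaluating Definition \ref{dfn:16.1} at random complex points rather than expanding symbolically:

```python
import random

def G(n, x, y):
    """G_n of Definition 16.1, evaluated at (x, y):
    G_n = (x^n - y^n)/(x - y), G_{-n} = -G_n/(xy)^n, G_0 = 0."""
    if n == 0:
        return 0
    g = sum(x**j * y**(abs(n) - 1 - j) for j in range(abs(n)))
    return g if n > 0 else -g / (x * y) ** abs(n)

random.seed(0)
a, k = 2, 3  # test case: s = [2a], r = [2a, 2k, -2a]
for _ in range(25):
    x = complex(random.uniform(1, 2), random.uniform(1, 2))
    y = complex(random.uniform(1, 2), random.uniform(1, 2))
    # Proposition 16.2 applied to r (n = 1, a_1 = a, b_1 = k, a_2 = -a):
    lhs = k * (x - 1) * (y - 1) * G(a, x, y) * G(-a, x, y) + G(0, x, y)
    # Formula (16.2) with Delta_{K(s)} = G_a, up to the unit -1/(xy)^a:
    rhs = -k * (x - 1) * (y - 1) * G(a, x, y) ** 2 / (x * y) ** a
    assert abs(lhs - rhs) < 1e-9
```

Since Alexander polynomials are only defined up to units $\pm x^iy^j$, agreement up to the factor $-1/(xy)^a$ is the expected outcome.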
\begin{thm}\label{thm:16.11} Let $s = [2a_1, 2b_1, 2a_2, 2b_2, \cdots, 2a_n, 2b_n], a_j \neq 0 \neq b_j, 1 \leq j \leq n,$ and $r = [s, 2k, -s^{-1}], k \neq 0$. Then \begin{equation}\label{siki:16.4} \Delta_{K(r)}(x,y) = G_k f(x,y) f(y,x), \end{equation} where $f(x,y) \in \ZZ[x,y]$ and $f(t,t) = \Delta_{K(s)}(t)$. \end{thm} We should note that if $|k| \geq 3$, then $G_k$ is not $\mathcal H$-stable. Therefore, for $K(r)$ to be $\mathcal H$-stable, $k$ must be $\pm 1$ or $\pm 2$. \section{Inversive links}\label{17} A 2-component link $K$ is called {\it inversive} if the original link is stable (or $c$-stable), while reversing the orientation of one component results in a $c$-stable (or stable) link. We saw in Section 15 that some Montesinos links are inversive (Theorem \ref{thm:15.2}, Case 2). In this section, we study these links using the 2-variable Alexander polynomial $\Delta_K(x,y)$. \subsection{Standard inversive links}\label{17.1} From the definition, the following proposition is immediate. \begin{prop}\label{prop:17.1} Let $K$ be a 2-component link and $\Delta_K (x,y)$ its Alexander polynomial. Then $K$ is inversive if and only if \begin{align}\label{siki:17.1} (1)&\ \Delta_K (t,t) {\it \ is\ stable\ (or\ \mbox{$c$-stable})\ and}\nonumber\\ (2)&\ t^n \Delta_K (t, t^{-1}) {\it \ is\ \mbox{$c$-stable}\ (or\ stable).} \end{align} \end{prop} \begin{rem}\label{rem:17.2} Note that (2) in (\ref{siki:17.1}) is equivalent to (2') below, since $\Delta_K(x,y) = x^m y^n \Delta_K (x^{-1},y^{-1})$ for some integers $n$ and $m$.\\ \centerline{ (2') $t^m \Delta_K (t^{-1},t)$ is $c$-stable (or stable). } For convenience, we call $\Delta_K (x,y)$ {\it inversive} if $\Delta_K (x,y)$ satisfies (1) and (2) in (\ref{siki:17.1}). \end{rem} Proposition \ref{prop:17.1} and Theorem \ref{thm:16.9} imply the following: \begin{prop}\label{prop:17.3} If a 2-bridge link $K(s)$ is inversive, then $K(r)$ is inversive, where $r=[s, 2k, -s^{-1}], k \neq 0$. 
\end{prop} The simplest inversive 2-bridge link is $K(s), s=[2a], a \neq 0$. Therefore, we have the following corollary. \begin{cor}\label{cor:17.4} Let $r = [2a, 2k, -2a]$, where $k \neq 0$ and $a > 0$. Then $K(r)$ is inversive. \end{cor} If $K(s)$ is a 2-bridge knot, then $K(r), r=[s,2k, -s^{-1}]$, may not be inversive. \begin{ex}\label{ex:17.5} Let $s=[2,-2]$ and $r=[2,-2,2,2,-2]$. Then $K(r)$ is not inversive. In fact, $\Delta_{K(r)}(x,y) = (1-(2x+y)+xy)(1-(x+2y)+xy)$, and $\Delta_{K(r)}(t,t) =(1-3t+t^2)^2$ is stable, but $\Delta_{K(r)}(t,t^{-1}) = (1-2t+2t^2)(2-2t+t^2)$ is not $c$-stable. \end{ex} Now consider the general case. For convenience, we denote by $K^{\ast}$ the 2-component link obtained from $K$ by reversing the orientation of one component of $K$. \begin{prop}\label{prop:17.6} Let $r = [2a_1, -2a_2, \cdots, (-1)^{2n} 2a_{2n+1}], a_j> 0$ for $1 \leq j \leq 2n+1$. Then $K(r)$ is inversive. \end{prop} {\it Proof.} First, by Theorem \ref{thm:6.1}, $\Delta_{K(r)}(t) = (t-1)\Delta_{K(r)}(t,t)$ is stable. Further, we see that a diagram of $K(r)$ is alternating and the diagram of $K^{\ast}(r)$ is special alternating, and hence $\Delta_{K^{\ast}(r)}(t)$ is $c$-stable. Therefore, $K(r)$ is inversive. \qed If $r=[2a_1,2a_2, \cdots, 2a_{2n+1}], a_j > 0$ for $1 \leq j \leq 2n+1$, then $K(r)$ is $c$-stable, by Proposition \ref{prop:12.1}. But, generally, $K^{\ast}(r)$ is not stable, and hence $K(r)$ is not inversive. The following proposition, however, gives a sufficient condition for $K(r)$ to be inversive. \begin{prop}\label{prop:17.7} Let $r=[2a_1,2a_2, \cdots, 2a_{2n+1}], a_j > 0$ for $1\leq j \leq 2n+1$. If $a_{2k+1}= 1$ for all $k$, $0 \leq k \leq n$, then $K^{\ast}(r)$ is stable and hence $K(r)$ is inversive. \end{prop} {\it Proof.} Write $r = \frac{\beta}{2\alpha}$. We may assume without loss of generality that $0 <\beta < 2\alpha$. Denote $r^{\ast} = \frac{2\alpha -\beta}{2\alpha}$. 
Then it is known \cite[Proposition 3.17]{torti} that $K(r^{\ast})$ is equivalent to the mirror image of $K^{\ast}(r)$. Therefore, $K^{\ast}(r)$ is stable if (and only if) $K(r^{\ast})$ is stable. Now, the even continued fraction expansion of $r^{\ast}$ is called the {\it dual} of (the even continued fraction expansion of) $r$ in \cite[Theorem 3.5]{torti}, and there is an algorithm to find the expression of $r^\ast$ \cite[p.7]{torti}. Using this algorithm, we can show that if all $a_{2k+1} = 1$, $0 \leq k \leq n$, then $r^{\ast}=[2, -(2a_2 -2), 2, -(2a_4 -2), \cdots, 2, -(2a_{2n} -2), 2]$. Therefore, $K(r^{\ast})$ is stable and $K(r)$ is inversive. \qed \begin{rem}\label{rem:17.8} In Proposition \ref{prop:17.7}, if we assume instead that all $a_{2k}= 1$, for $1 \leq k \leq n$, then both $K(r)$ and $K(r^{\ast})$ are $c$-stable. In fact, it is easy to show that all entries of $r^{\ast}$ are positive. \end{rem} \begin{ex}\label{ex:17.9} (1) Let $r = [2,4,2,6,2] = \frac{69}{118}$. Then $r^{\ast} =\frac{49}{118} = [2,-2,2,-4,2]$, and hence $K(r^{\ast})$ is stable. Since $K(r)$ is $c$-stable, $K(r)$ is inversive.\\ (2) Let $r=[4,4,2] = \frac{7}{26}$. Then $r^{\ast} = \frac{19}{26} = [2,2,2,-2,2]$. $K(r^{\ast})$ is not $c$-stable. In fact, $\Delta_{K(r^{\ast})} (x,y) = x^2 y^2 - (2x^2 y +2xy^2) + 3xy - (2x+2y) + 1$, and hence $\Delta_{K(r^{\ast})} (t,t) = t^4-4t^3 +3t^2 -4t+1$. The modified polynomial $f(x)$ of $\Delta_{K(r^{\ast})}(t,t)$ has two real zeros, one of which is in $(-2,2)$ and the other is larger than $2$, and hence $\Delta_{K(r^{\ast})}(t, t)$ is strictly bi-stable, and $K(r)$ is not inversive.\\ (3) Let $r=[4,2,2,2,4] = \frac{13}{42}$. Then $r^{\ast} =\frac{29}{42}=[2,2,6,2,2]$, and hence $K(r^{\ast})$ is $c$-stable. \end{ex} \subsection{Exceptional inversive links}\label{17.2} If the original 2-bridge link $K(r)$ is an exceptional stable link, then $K(r^{\ast})$ (and hence $K^{\ast}(r)$) may not be $c$-stable. 
However, the following proposition shows that for some exceptional 2-bridge links, $K(r^{\ast})$ is $c$-stable. \begin{prop}\label{prop:17.10} Let $r=[4,2k,-2], k > 0$. Then $K(r)$ is inversive. \end{prop} {\it Proof.} Since $\Delta_{K(r)}(x, y) = k(x-1)(y-1)(x+y)-xy$, we see that $\Delta_{K(r^{\ast})}(t,t) = t^2 \Delta_{K(r)}(t, t^{-1}) =kt^4 -2kt^3 +(2k+1)t^2 -2kt +k$. The modified polynomial $f(x)$ of $\Delta_{K(r^{\ast})}(t,t)$ is $f(x) =kx^2 -2kx +1$; both zeros of $f(x)$ are real and lie in $(0,2)$, and hence $\Delta_{K(r^{\ast})}(t,t)$ is $c$-stable and $K(r)$ is inversive. \qed This is a rather exceptional case. For example, for $r = [6,2,-2]$, $K(r)$ is not inversive. However, in general, we can prove the following theorem. \begin{thm}\label{thm:17.11} Let $r = [2a,2k,-2c], a > c, k > 0$. If $k$ is sufficiently large, then $K(r)$ is inversive. More precisely, there exists a positive integer $N(a,c)$ such that if $k \geq N(a,c)$, then $K(r)$ is inversive. \end{thm} {\it Proof.} Since $\Delta_{K(r)}(x,y) = k (x-1)(y-1) G_a G_{-c} + G_{a-c}$, a simple calculation shows that $\Delta_{K(r^{\ast})}(t, t) = k(t-1) \frac{t^{2a} -1}{t^2 -1} \frac{t^{2c} -1}{t^2 -1} + \frac{t^{2(a-c)} -1}{t^2 -1} t^{2c}$, and the modification $f(x)$ of $\Delta_{K(r^{\ast})}(t,t)$ is of the form $f(x) = k(x-2) f_1 (x) f_2(x) + g(x)$, where $f_1, f_2$ and $g$ are, respectively, the modifications of $\frac{t^{2a} -1}{t^2-1}$, $\frac{t^{2c} -1}{t^2 -1}$ and $\frac{t^{2(a-c)} -1}{t^2 -1}$. We note that all the zeros of $f_1, f_2$ and $g$ are in $(-2,2)$. Consider the two graphs $z_1 = (x-2)f_1 f_2$ and $z_2 =-\frac{g(x)}{k}$. As $k \longrightarrow \infty$, $z_2\longrightarrow 0$; hence, if $k$ is sufficiently large, the points of intersection of the two curves approach the zeros (other than $2$) of $z_1$, and therefore $\Delta_{K(r^{\ast})}(t,t)$ is $c$-stable when $k$ is sufficiently large. Therefore, $K(r)$ is inversive if $k$ is sufficiently large. 
\qed \begin{probl}\label{probl:17.12} Determine $N(a,c)$. \end{probl} We should note that if $a=c$, then $N(a,a) =1$. \begin{ex}\label{ex:17.13} It is easy to show that $N(3,1) = 3$ and $N(3,2) = 2$. \end{ex} \begin{qu}\label{qu:17.14} Can a 2-component inversive link $K$ be characterized by its Alexander polynomial $\Delta_{K}(x,y)$? \end{qu} \begin{alphasection} \addtocounter{alphasect}{1} {\bf Appendix A: Representation polynomials}\label{A} \setcounter{thm}{0} \setcounter{subsection}{0} \setcounter{equation}{0} There are various integer polynomials associated to representations of the knot group into $GL(2,\CC)$. In this section, we discuss two particular representations of $G(K)$, namely, a parabolic representation of $G(K(r))$, the group of a $2$-bridge knot $K(r)$, to $SL(2,\CC)$, and a trace-free representation of $G(K(r))$ to a dihedral group $D_{2n+1}\subset GL(2,\CC)$. \subsection{Parabolic representation}\label{A.1} Let $\theta_r (z)$ be the parabolic representation polynomial (Riley polynomial) of $G(K(r))$ to $SL(2,\CC)$. (See \cite{ri72}.) Suppose $r=\frac{1}{2n+1}$, so that $K(r)$ is a torus knot of type $(2,2n+1)$. Then $\theta_r (z) = \sum_{k=0}^n \binom{n+k}{2k} z^k$. \begin{thm}\label{thm:A.1} \cite{ri72},\cite{swa} If $r = \frac{1}{2n+1}$, then $\theta_r (z)$ is real stable. In fact, all the zeros of $\theta_r (z)$ are simple, and they are \begin{equation}\label{siki:A.1} \alpha_k = - 4 \sin^2 \frac{(2k-1)\pi}{2(2n+1)}, \quad 1 \leq k \leq n. \end{equation} \end{thm} \begin{rem}\label{rem:A.2} For a rational number $r = \beta/ \alpha, 0 < \beta < \alpha, \alpha$ odd, $\theta_r (z)$ is an integer polynomial of degree $\frac{\alpha-1}{2}$ and, generally, $\theta_r(z)$ is not reciprocal. \end{rem} \begin{ex}\label{ex:A.3} (1) Let $r = 2/5$. Then $\theta_r(z)= 1 -z + z^2$ is not stable, but $c$-stable. (2) Let $r = 5/7$. Then $\theta_r(z) = 1 + 2z +z^2 + z^3$ is not stable. \end{ex} \begin{probl}\label{probl:A.4} Characterize $r$ so that $\theta_r(z)$ is stable. 
\end{probl} For a 2-bridge link $K(r), r= q/2n$, the Riley polynomial is defined in a slightly different manner. Let $G(K(r)) = \langle x,y| Wy=yW\rangle$ be a presentation of the group of $K(r)$, where $x$ and $y$ are (oriented) meridian generators. Then $W$ is of the form \begin{equation}\label{siki:Ax.2} W = x^{\varepsilon_1} y^{\eta_1} x^{\varepsilon_2} y^{\eta_2} \cdots x^{\varepsilon_{n-1}} y^{\eta_{n-1}} x^{\varepsilon_n}, \end{equation} \noindent where (1) $| \varepsilon_j | = | \eta_j | = 1$ for all $j$, and (2) $\varepsilon_j = \varepsilon_{n-j+1}$ for $1 \leq j \leq n$, and $\eta_j = \eta_{n-j} $ for $1 \leq j \leq n-1$. Let $\varphi : x \longrightarrow \left[\begin{array}{cc} 1& 1\\ 0&1 \end{array}\right]$ and $y \longrightarrow \left[\begin{array}{cc} 1&0\\ z&1 \end{array}\right]$ be a parabolic representation of the free group $F(x,y)$, generated by $x$ and $y$, in $SL(2,\CC)$, where $z$ is a complex number to be determined later. Then $\varphi$ defines a parabolic representation $\varphi_r$ of $G(K(r))$ in $SL(2,\CC)$ if and only if \begin{equation}\label{siki:Ax.3} \varphi (Wy) = \varphi (yW). \end{equation} Let $\varphi (W) = \tbt{a_r (z)}{ b_r (z)}{ c_r(z)}{ d_r(z)}$. Then a simple computation shows that (\ref{siki:Ax.3}) is equivalent to \begin{equation}\label{siki:Ax.4} z = 0, {\rm \ or\ } b_r (z) = 0{\rm \ and\ } a_r(z) = d_r(z). \end{equation} We first prove that $a_r (z) = d_r (z)$ always holds. To prove this, we need the following simple lemma. For convenience, we say that a matrix $M = \tbt{a}{ b}{c}{d} \in GL(2,\CC)$ is of $D$-type if $a=d$. \begin{lemm}\label{lem:Ax.5} If each of $M$ and $N$ in $GL(2,\CC)$ is of $D$-type, then $NMN$ is also of $D$-type. \end{lemm} Now, $\varphi (x)$ and $\varphi (y)$ are of $D$-type, and so are $\varphi (x^{-1})$ and $\varphi (y^{-1})$. Since the length of $W$ is $2n-1$, $W$ has a central letter. If $n$ is even, say $2m$, it is $y^{\eta_m}$. 
Since $\varphi (y^{\eta_m})$ is of $D$-type and $\varepsilon_m = \varepsilon_{m+1}$, we see by Lemma \ref{lem:Ax.5} that $\varphi (x^{\varepsilon_m} y^{\eta_m} x^{\varepsilon_{m+1}})$ is of $D$-type. Further, $\varphi (y^{\eta_{m-1}}(x^{\varepsilon_m} y^{\eta_m} x^{\varepsilon_{m+1}}) y^{\eta_{m+1}})$ is also of $D$-type. By repeating this process, we see that $\varphi (W) = \varphi (x^{\varepsilon_1} y^{\eta_1} \cdots y^{\eta_m}\cdots y^{\eta_{2m-1}} x^{\varepsilon_{2m}})$ is of $D$-type. If $n$ is odd, say $2m+1$, then the central letter is $x^{\varepsilon_{m+1}}$, and the same argument works. If $z=0$, then $\varphi$ is an abelian representation, and hence we ignore it. But each zero of $b_r (z)$ gives a (non-abelian) parabolic representation $\varphi_r$. $b_r (z)$ is called the {\it Riley polynomial} $\theta_r (z)$ of $K(r)$. The degree of $\theta_r (z)$ is $n-1$. \begin{ex}\label{ex:Ax.6} Let $r=1/2n$, $n\ge 2$. Then $K(r)$ is an elementary torus link, and it follows from (\ref{siki:A.k6}) and \cite[Prop. 2.4]{eval} that the Riley polynomial $\theta_r (z)$ of $K(r)$ is given by (\ref{siki:Ax.5}) below. \begin{equation}\label{siki:Ax.5} \theta_r (z) = \sum_{j=0}^{n-1} \binom{n+j}{2j+1} z^j. \end{equation} \end{ex} It is known that $\theta_r (z)$ is real stable. In fact, the zeros of $\theta_r(z)$, $r=\frac{1}{2n}$, are $-4\sin^2\frac{j\pi}{2n}$, $j=1,2,\dots,n-1$ \cite{swa}. The following proposition confirms a conjecture by Dan Silver \cite{silv}. \begin{prop}\label{prop:Ax.6} Let $r = q/2n, 0 < q < 2n, \gcd(q, 2n) = 1$. Then $|\theta_r (0)| = |lk(K(r))|$, where $lk(K(r))$ denotes the linking number between the two components of the 2-bridge link $K(r)$. \end{prop} {\it Proof.} $\theta_r (0)$ is determined by $\varphi_r (W)$ evaluated at $z=0$, which is $\prod_{j=1}^n \varphi_r (x^{\varepsilon_j}) = \tbt{1}{\sum_{j=1}^n \varepsilon_j }{0}{1}$. Since $\sum_{j=1}^n \varepsilon_j $ is equal to $lk(K(r))$, Proposition \ref{prop:Ax.6} follows. 
\qed If $lk(K(r)) = 0$, then Dan Silver also conjectures that the absolute value of the coefficient $c_1$ of $z$ in $\theta_r (z)$ is the wrapping number of $K(r)$. However, the examples below show that this is not correct. Note that if $lk(K(r)) = 0$, then $n$ is a multiple of $4$. \begin{ex}\label{ex:Ax.7} (1) Let $r=9/16 = [2,4,-2]$. Then $\theta_r (z) = z(2+z^2)(2-4z+4z^2 -2z^3 +z^4)$, so $|c_1| = 4$, but the wrapping number is $2$. (2) Let $r=11/24 =[2,-6,-2]$. Then $\theta_r (z)$ = $6z +18z^2 +35z^3 +48z^4 +56z^5 +44z^6 +36z^7 +16z^8 +10z^9 +2z^{10} +z^{11}$, so $|c_1| = 6$, but the wrapping number is $2$. \end{ex} \subsection{Dihedral representation}\label{A.2} It is well-known that there is a trace-free representation $\xi$ of a dihedral group $D_p = \langle x,y| x^2 = y^2 = (xy)^p = 1 \rangle$, $p=2n+1$, in $GL(2,\CC)$. $\xi$ is given by $\xi(x)=\tbt{-1}{1}{0}{1}$ and $\xi(y)=\tbt{-1}{0}{\omega}{1}$, where $\omega$ is a zero of the integer polynomial $\varphi_p (z) = \sum_{k=0}^n \frac{2n+1}{2k+1}\binom{n+k}{2k} z^k$. See \cite{ri72}. \begin{ex}\label{ex:A.5} $\varphi_3 (z) = z +3$, $\varphi_5 (z) = z^2 + 5z + 5$, $\varphi_7 (z) = z^3 + 7z^2 + 14z +7$, $\varphi_9 (z) = z^4 + 9z^3 + 27z^2 +30 z + 9 =(z+3)(z^3 +6z^2 +9z+3)$. \end{ex} Using $\xi$, we prove in \cite{hm09} some properties of the twisted Alexander polynomial of a 2-bridge knot associated to a dihedral representation. In this subsection, we prove that for any odd number $2n + 1$, the polynomial $\varphi_{2n+1}(z)$ is stable. Our proof is different from those in the other parts of this paper. \begin{thm}\label{thm:A.6} Let $p = 2n+1, n \geq 1$, and $\varphi_p (z) = \sum_{k=0}^n \frac{2n+1}{2k+1}\binom{n+k}{2k} z^k$. Let $\zeta= e^{\frac{2\pi i}{p}}$. Then $z_k = \zeta^k + \zeta^{-k} -2, 1 \leq k \leq n$, are the zeros of $\varphi_p (z)$.
\end{thm} {\it Proof.} Since $z_k = \zeta^{k} + \zeta^{-k} - 2 = \big(\sqrt{\zeta^{k}} - \frac{1}{\sqrt{\zeta^{k}}}\big)^{2}$ and $\zeta^k$ is again a primitive $p$-th root of unity for $1 \leq k \leq n$, it suffices to show that \begin{equation}\label{siki:A.k6} \varphi_{2n+1} (z_1) = \sum_{k=0}^n \frac{2n+1}{2k+1}\binom{n+k}{2k}(\sqrt{\zeta} - \frac{1}{\sqrt{\zeta}})^{2k}=0. \end{equation} Now we expand the right side of (\ref{siki:A.k6}) and let $A_k^{(n)}$ denote the coefficient of $\zeta^k, k=0, \pm 1, \pm 2, \cdots, \pm n$. Namely, \begin{equation*} \varphi_{2n+1} (z_1)= A_{-n}^{(n)} \zeta^{-n} + \cdots + A_{-1}^{(n)} \zeta^{-1} + A_{0}^{(n)} + A_{1}^{(n)} \zeta + A_{2}^{(n)} \zeta^{2} + \cdots + A_{n}^{(n)} \zeta^{n}. \end{equation*} Then we see that \begin{align}\label{siki:A.k7} A_0^{(n)} &= \sum_{k=0}^n\frac{2n+1}{2k+1} \binom{n+k}{2k} (-1)^k \binom{2k}{k},\nonumber\\ A_1^{(n)} &= A_{-1}^{(n)} = \sum_{k=1}^n\frac{2n+1}{2k+1} \binom{n+k}{2k} (-1)^{k+1} \binom{2k}{k+1}\nonumber\\ &\vdots\nonumber\\ A_m^{(n)} &= A_{-m}^{(n)} = \sum_{k=m}^n\frac{2n+1}{2k+1} \binom{n+k}{2k} (-1)^{k+m} \binom{2k}{k+m}\nonumber\\ &\vdots\nonumber\\ A_n^{(n)} &= A_{-n}^{(n)} = \frac{2n+1}{2n+1} \binom{2n}{2n} (-1)^{2n} \binom{2n}{2n}=1. \end{align} Therefore, to prove $\varphi_{2n+1}(z_1)= 0$, it suffices to show \begin{equation*} A_0^{(n)} = A_1^{(n)} = \cdots = A_n^{(n)} = 1, \end{equation*} since then $\varphi_{2n+1}(z_1) = \sum_{m=-n}^{n} \zeta^{m} = 0$, $\zeta$ being a primitive $(2n+1)$-st root of unity. We prove these equalities by applying generating function theory. First we show that $A_0^{(n)} = 1$. Consider the generating function $F_0(x)$ of $A_0^{(n)}$: \begin{align}\label{siki:A.3} F_0 (x) &= \sum_{n\geq 0} A_0^{(n)} x^n \nonumber\\ &= \sum_{n \geq 0}\{x^n \sum_{k \geq 0}\binom{n+k}{2k} \binom{2k}{k} (-1)^k \frac{2n+1}{2k+1}\}. \end{align} We prove that $F_0 (x) = \frac{1}{1-x} = 1 + x + x^2 + \cdots+ x^n +\cdots$.\\ Now, by interchanging the order of summations in (\ref{siki:A.3}), we have \begin{equation*} F_0 (x) = \sum_{k \geq 0}[\binom{2k}{k} \frac{(-1)^k}{2k+1} x^{-k} \{\sum_{n \geq 0} (2n+1) \binom{n+k}{2k} x^{n+k}\}].
\end{equation*} Note that $\sum_{n \geq 0} (2n+1) \binom{n+k}{2k} x^{n+k}$ = $\sum_{n \geq k} (2n+1) \binom{n+k}{2k} x^{n+k}$. Let $n+k = r$. Then $2n+1=2(r-k)+1=2r-2k+1$ and \begin{align}\label{siki:A.4} &\sum_{n \geq k} (2n+1) \binom{n+k}{2k} x^{n+k} \nonumber\\ &\ \ = \sum_{r\geq 2k}(2r-2k+1) \binom{r}{2k} x^r \nonumber\\ &\ \ = x^{2k}\{(2k+1) + (2k+3) \binom{2k+1}{1} x \nonumber\\ & \ \ \ \ + (2k+5) \binom{2k+2}{2}x^2 + \cdots + (2k+2m+1) \binom{2k+m}{m} x^m + \cdots \} \nonumber\\ &\ \ = x^{2k} \sum_{m \geq 0} (2k+2m+1) \binom{2k+m}{m} x^m. \end{align} We need the following lemma. \begin{lemm}\label{lem:A.7} (1) [W, p.50, (2.5.7)] $\sum_{m \geq 0}\binom{2k+m}{m} x^m = \frac{1}{(1-x)^{2k+1}}$.\\ (2) [W, p.50, (2.5.11)] $\sum_{k \geq 0}\binom{2k}{k} x^k = \frac{1}{\sqrt{1-4x}}$.\\ (3) [W, p.51, (2.5.15)] $\sum_{k \geq 0}\binom{2k+m}{k} x^k = \frac{1}{\sqrt{1-4x}} (\frac{1 - \sqrt{1-4x}}{2x})^m$.\\ (4) [W, p.32] Let $P(y)$ be a polynomial and $f= \sum_{n \geq 0} a_n x^n$. Then $\sum_{n \geq 0}P(n) a_n x^n = P(x\frac{d}{dx})f$. \end{lemm} \begin{ex}\label{ex:A.8} Let $P(y) = 2y+2k+1$ and $f= \sum_{m \geq 0} \binom{2k+m}{m} x^m = (1-x)^{-(2k+1)}$. Then $P(m) = 2m+2k+1$ and \begin{align*} &\sum_{m \geq 0}(2m+2k+1) \binom{2k+m}{m} x^m\\ &\ \ \ = 2x\frac{\;df}{dx}+ (2k+1)f \\ &\ \ \ = 2(-(2k+1))(-1)(1-x)^{-(2k+2)} x + (2k+1) (1-x)^{-(2k+1)}\\ &\ \ \ = \frac{(2k+1)(x+1)}{(1-x)^{2k+2}}. \end{align*} \end{ex} Using Example \ref{ex:A.8}, we have from (\ref{siki:A.4}) \begin{equation}\label{siki:A.5} \sum_{n \geq 0} (2n+1) \binom{n+k}{2k} x^{n+k} = x^{2k} \frac{(2k+1)(x+1)}{(1-x)^{2k+2}}. \end{equation} Therefore, \begin{align*} F_0 (x) &= \sum_{k \geq 0}\binom{2k}{k}\frac{(-1)^k}{2k+1} x^{-k}\left\{\frac{x^{2k}(2k+1) (x+1)}{(1-x)^{2k+2}}\right\} \\ &= \frac{x+1}{(1-x)^2} \sum_{k \geq 0} \binom{2k}{k} (-1)^k \left\{\frac{x}{(1-x)^2}\right\}^k. \end{align*} Let $y = \frac{x}{(1-x)^2}$. Then $F_0 (x) = \frac{x+1}{(1-x)^2} \sum_{k \geq 0} (-1)^k \binom{2k}{k}y^k$.
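Before completing the generating-function argument, the target values $A_m^{(n)} = 1$ (for $n \geq m$) can be confirmed by direct summation for small parameters; a brief Python sketch (the helper name `A` is ours):

```python
from fractions import Fraction
from math import comb

def A(n, m):
    # A_m^{(n)} = sum_{k=m}^{n} (2n+1)/(2k+1) * C(n+k, 2k) * (-1)^{k+m} * C(2k, k+m)
    return sum(Fraction(2 * n + 1, 2 * k + 1) * comb(n + k, 2 * k)
               * (-1) ** (k + m) * comb(2 * k, k + m)
               for k in range(m, n + 1))

# each coefficient equals 1, which forces phi_{2n+1}(z_1) = sum_{m=-n}^{n} zeta^m = 0
assert all(A(n, m) == 1 for n in range(1, 8) for m in range(n + 1))
```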
By Lemma \ref{lem:A.7} (2), we see that $\sum_{k \geq 0} (-1)^k \binom{2k}{k} y^k = \frac{1}{\sqrt{1+4y}}$. Since $1+4y = 1 + \frac{4x}{(1-x)^2} = \frac{(1+x)^2}{(1-x)^2}$, we have $\sqrt{1+4y} = \frac{1+x}{1-x}$, and hence, \begin{equation*} F_0 (x) = \frac{x+1}{(1-x)^2} \frac{1-x}{1+x} = \frac{1}{1-x} = 1 + x + x^2 + \cdots. \end{equation*} Therefore, for any $n$, the coefficient of $x^n$ in $F_0 (x)$ is 1, i.e., $A_0^{(n)} = 1$. Next, for $m \geq 1$, we show $A_m^{(n)}= 1$. Our approach is almost the same, but we need a slight change in the process. Now, $A_m^{(n)} = \sum_{k \geq m} (-1)^{k+m} \frac{2n+1}{2k+1} \binom{n+k}{2k} \binom{2k}{k+m}$. As before, we interchange the order of summations in the generating function $F_m (x)$ of $A_m^{(n)}$: \begin{align*} F_m (x)& = \sum_{n \geq 0} x^n \left\{\sum_{k \geq m} (-1)^{k+m} \frac{2n+1}{2k+1} \binom{n+k}{2k} \binom{2k}{k+m}\right\} \\ &= \sum_{k \geq m}\frac{(-1)^{k+m}}{2k+1} \binom{2k}{k+m} x^{-k}\left\{\sum_{n \geq 0}(2n+1) \binom{n+k}{2k} x^{n+k}\right\}\\ &= \sum_{k \geq m} \frac{(-1)^{k+m}}{2k+1} \binom{2k}{k+m} x^{-k} \frac{x^{2k}(x+1) (2k+1)}{(1-x)^{2k+2}}\ \ {\rm \ by\ } (\ref{siki:A.5})\\%(by (A.5))
&= \frac{x+1}{(x-1)^2} \sum_{k \geq m} (-1)^{k+m} \binom{2k}{k+m} \frac{x^k}{(1-x)^{2k}}. \end{align*} We show $F_m (x) = \frac{x^m}{1-x} = x^m + x^{m+1} + x^{m+2} + \cdots$. We note that \begin{align*} &\sum_{k \geq m}(-1)^{k+m}\binom{2k}{k+m} \frac{x^k}{(1-x)^{2k}}\\ & \ \ \ = \binom{2m}{2m} \frac{x^m}{(1-x)^{2m}} - \binom{2m+2}{2m+1} \frac{x^{m+1}}{(1-x)^{2(m+1)}} + \cdots\\ & \ \ \ = \frac{x^m}{(1-x)^{2m}}\left\{ \binom{2m}{2m} - \binom{2m+2}{2m+1} \frac{x}{(1-x)^2} + \cdots \right\}\\ & \ \ \ = \frac{x^m}{(1-x)^{2m}} \sum_{k \geq 0} (-1)^k \binom{2k+2m}{k}\left\{\frac{x}{(1-x)^2}\right\}^k. \end{align*} Again, let $y= \frac{x}{(1-x)^2}$. Then $\sqrt{1+4y} = \frac{1+x}{1-x}$ and $ 1 - \sqrt{1+4y} = 1 - \frac{1+x}{1-x} = \frac{-2x}{1-x}$ and $\frac{1 - \sqrt{1+4y}}{-2y} = 1 - x$.
Then by Lemma \ref{lem:A.7} (3), we have \begin{align*} &\sum_{k \geq m} (-1)^{k+m} \binom{2k}{k+m} \frac{x^k}{(1-x)^{2k}}\\ &\ \ \ = \frac{x^m}{(1-x)^{2m}} \frac{1}{\sqrt{1+4y}} \left\{\frac{1 - \sqrt{1+4y}}{-2y}\right\}^{2m}\\ & \ \ \ = \frac{x^m}{(1-x)^{2m}} \frac{1-x}{1+x} (1-x)^{2m}\\ & \ \ \ = \frac{x^m (1-x)}{1+x}. \end{align*} Therefore $F_m (x)$ = $\frac{x+1}{(1-x)^2} \frac{x^m (1-x)}{1+x} = \frac{x^m}{1-x} = x^m (1+x+x^2 + \cdots)$. Thus if $n < m$, then $A_m^{(n)} = 0$, and if $n \geq m$, then $A_m^{(n)}=1$, i.e., for any $n \geq m, A_m^{(n)} = 1$. \qed \addtocounter{alphasect}{1} {\bf Appendix B: Determination of $\delta_4$}\label{B} \setcounter{thm}{0} \setcounter{equation}{0} Let $\Gamma_{2n}$ be the set of all Alexander polynomials $\Delta_K(t)$ of alternating knots $K$ of genus $n$, i.e., $\deg\Delta_K(t) = 2n$. Let $\delta_{2n}(K)$ be the maximal value of ${\rm Re}(\alpha)$ over the zeros $\alpha$ of $\Delta_K(t)$ and $\delta_{2n} = \max_{\Delta_K(t) \in \Gamma_{2n}} \delta_{2n}(K)$. \begin{yosou}\label{conj:A2.1} $\delta_{2n}$ exists for any $n \geq 1$, and further, there is a fibred stable alternating knot $K_n$ such that $\delta_{2n}(K_n) = \delta_{2n}$. \end{yosou} In this section, we prove Conjecture \ref{conj:A2.1} for $n=2$. Conjecture \ref{conj:A2.1} is trivially true for $n=1$. In fact, $\delta_2 = 2.618 \cdots$, which is a zero of $\Delta_{K_1}(t) = t^2 -3t+1$, where $K_1$ is $4_1$. \begin{thm}\label{thm:A2.2} Let $K_2 = 8_{12}$. Then $\delta_4 = \delta_4 (K_2) = 4.3902 \cdots$. \end{thm} Note that $K_2$ is a fibred stable alternating knot with $\Delta_{K_2}(t) = t^4 -7t^3 +13t^2 -7t +1$. {\it Proof of Theorem \ref{thm:A2.2}.} Let $K$ be an alternating knot. Since $\delta_4 (K_2 )$ is the maximal value among those of fibred knots of genus 2, we may assume that $K$ is a non-fibred alternating knot. Write $\Delta_K(t) = a t^4 - bt^3 + ct^2 - bt +a$, where $a,b,c > 0$ and further, $a \geq 2$.
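Before treating the non-fibred case, note that the value $\delta_4(K_2)=4.3902\cdots$ itself is easy to confirm: $\Delta_{K_2}(t)$ is palindromic, so dividing by $t^2$ and substituting $u = t + 1/t$ reduces it to the quadratic $u^2 - 7u + 11$. A short numerical sketch:

```python
import math

# Delta_{K_2}(t) = t^4 - 7t^3 + 13t^2 - 7t + 1; dividing by t^2 and setting
# u = t + 1/t yields u^2 - 7u + 11 = 0
u = (7 + math.sqrt(49 - 44)) / 2      # larger root of u^2 - 7u + 11
t = (u + math.sqrt(u * u - 4)) / 2    # larger root of t^2 - u t + 1
assert abs(t ** 4 - 7 * t ** 3 + 13 * t ** 2 - 7 * t + 1) < 1e-9
assert abs(t - 4.3902) < 1e-4         # the claimed maximal real zero
```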
To prove Theorem \ref{thm:A2.2}, we need the following theorem due to Jong. \begin{thm}\label{thm:A2.3} \cite{jong} Let $\Delta_K(t) = a t^4 - bt^3 + ct^2 - bt + a$, where $a,b,c > 0$, be the Alexander polynomial of an alternating knot $K$ of genus 2. Then the following holds.\\ (1) if $\sigma(K) = 0$, then $3a-1 \leq b \leq 6a+1$,\\ (2) if $|\sigma(K)| = 2$, then $2a+1 \leq b \leq 6a -1$,\\ (3) if $|\sigma(K)| = 4$, then $2a-1 \leq b \leq 4a- 2$. \end{thm} Now there are three cases.\\ Case 1. $\Delta_K(t)$ has four complex zeros, none of which is a unit complex number. Then, $\sigma(K) = 0$. Let $\alpha, \overline{\alpha}, \beta, \overline{\beta}$ be all the zeros of $\Delta_K(t)$, where $\alpha \beta = 1$ and $\overline{\alpha} \overline{\beta} = 1$. First, the real part of each zero is positive. In fact, if ${\rm Re}(\alpha) < 0$, then the real parts of all zeros are negative, since $\alpha \beta$ = $\overline{\alpha} \overline{\beta} = 1$. Therefore, $\alpha +\overline{\alpha} + \beta + \overline{\beta} = 2{\rm Re}(\alpha) + 2{\rm Re}(\beta)< 0$, but $\alpha +\overline{\alpha} + \beta + \overline{\beta} = b/a > 0$, a contradiction. Now suppose ${\rm Re}(\alpha) \geq \delta_4$. Then $b/a =\alpha +\overline{\alpha} +\beta + \overline{\beta} = 2{\rm Re}(\alpha) + 2{\rm Re}(\beta) > 2\delta_4 > 8$, but by Theorem \ref{thm:A2.3}, $b/a \leq 6 + \frac{1}{a} < 7$, a contradiction. Therefore, $\delta_4(K) < \delta_4$.\\ Case 2. $K$ is $c$-stable. Trivially, $\delta_{4}(K) < 1$ and hence $\delta_4 (K) < \delta_4$. If $|\sigma(K)|=4$, then $K$ is $c$-stable and hence we may assume hereafter that $|\sigma(K)| \leq 2$, and further, that $\Delta_K(t)$ has at least two real zeros. Therefore, the last case is the following:\\ Case 3. $K$ is bi-stable, but not $c$-stable. From the above remark, we see that $\delta_4 (K)$ is the maximal real zero of $\Delta_K(t)$. To show that $\delta_4 (K) < \delta_4$, first we consider the modified polynomial $f(x)$ of $\Delta_K(t)$.
Write $\Delta_K(t) = a t^4 - bt^3 + (2b - 2a - \varepsilon )t^2 - bt + a$, where $\varepsilon= \pm 1$. Then \begin{equation*} f(x) = ax^2 - bx +(2b - 4a - \varepsilon) = (x-2)(ax - (b-2a)) -\varepsilon. \end{equation*} Since $\delta_4(K)$ is less than the maximal real zero of $f(x)$, we compute the real zeros of $f(x)$. Now the real zeros of $f(x)$ are determined by the intersection of the two curves $y_1 = (x-2)(ax - (b-2a))$ and $y_2 = \varepsilon$. Since $y_1 (0) = 2(b-2a) \geq 2$, by Theorem \ref{thm:A2.3}, we have the following graphs: (1) $\dfrac{b-2a}{a} \le 2$,\ \ (2) $\dfrac{b-2a}{a} \ge 2$ \byouga{B-1}{7}{B.1} The maximal real zero $\gamma$ of $f(x)$ (if it exists) is given by \begin{equation*} \gamma = \frac{b}{2a} + \sqrt {\frac{d}{4a^2}},{\rm \ where\ } d = (b-4a)^2 +4a \varepsilon. \end{equation*} From this formula, we note that when $a$ is fixed, $\gamma$ gets larger as $b$ gets larger. Therefore, to obtain the maximal real zero of $f(x)$, $b$ should be the maximal possible value. Subcase (a) $\varepsilon = 1$. From Fig. B.1, we see that $f(x)$ has two zeros, one larger than 2 and the other less than 2. Therefore, $\Delta_K(t)$ has two unit complex zeros and two real zeros, and hence $|\sigma(K)| = 2$. By Theorem \ref{thm:A2.3} (2), we see that $2a + 1 \leq b \leq 6a - 1$. When $b=6a-1$, $d= 4a^2 + 1$ and hence \begin{equation*} \gamma = \frac{b}{2a} + \sqrt{\frac{d}{4a^2}} = \frac{b}{2a} + \sqrt{1+ \frac{1}{4a^2}}. \end{equation*} Since $a \geq 2$, we have $\gamma = \frac{6a -1}{2a} + \sqrt{1 + \frac{1}{4a^2}} < 3 + \sqrt{1.0625} = 4.0307\cdots < \delta_4$, and hence $\delta_4(K) < \delta_4$. Subcase (b) $\varepsilon = -1$. Since $f(x)$ has a real zero larger than 2, Fig. B.1 (1) cannot occur. Therefore, from Fig. B.1 (2), $f(x)$ has two real zeros greater than 2 and hence $\Delta_K(t)$ has four real zeros, and $|\sigma(K)| = 0$. Then by Theorem \ref{thm:A2.3} (1), we have that $3a-1 \leq b \leq 6a + 1$.
When $b=6a+1$, $d=4a^2 +1$ and $\gamma = \frac{6a+1}{2a} + \sqrt{1+ \frac{1}{4a^2}}$. Since $a \geq 2$, it follows that $\gamma \leq 3 + \frac{1}{4} + \sqrt{1.0625} = 4.2807\cdots < \delta_4$.\\ The proof of Theorem \ref{thm:A2.2} is now complete. \qed \addtocounter{alphasect}{1} {\bf Appendix C: Distribution of the zeros.}\label{C} \setcounter{thm}{0} \setcounter{equation}{0} In this section, we discuss the distribution of the zeros of the Alexander polynomials of two infinite sequences of 2-bridge knots. Namely, these are vertical and horizontal extensions of the 2-bridge knot $[2,2,2,2,-2,-2,-2,-2]$. Let $r(k) = [2,2,2,2k, -2,-2,-2,-2], k \ne 0$. For simplicity, $K(r(k))$ will be denoted by $K(k)$. The type of the zeros of the Alexander polynomial of $K(k)$ depends on $k$. More precisely, we prove the following theorem. \begin{thm}\label{thm:D.1} Case 1. $k > 0$.\\ (1) If $k=1$ or $2$, then $\Delta_{K(k)}(t)$ is totally unstable, i.e., every zero is a non-unit complex number.\\ (2) If $k=3,4,5$ or $6$, then $\Delta_{K(k)}(t)$ has four unit complex zeros and four non-unit complex zeros, and hence $\Delta_{K(k)}(t)$ has no real zeros.\\ (3) If $k \geq 7$, then $\Delta_{K(k)}(t)$ has eight unit complex zeros, and hence $K(k)$ is $c$-stable. Therefore, in this case, $\Delta_{K(k)}(t)$ does not have real zeros. Case 2. $k < 0$.\\ For all $k < 0$, $\Delta_{K(k)}(t)$ has two real zeros and six unit complex zeros, and hence $K(k)$ is strictly bi-stable. \end{thm} \begin{ex} For any $k\neq 0$, $\Delta_{K(k)}(t)= k -3kt +5kt^2 -7kt^3 +(8 k +1)t^4 -7kt^5 +5kt^6 -3kt^7 +kt^8$. We plot the zeros around the unit circle in Fig. C.1 for $k=-4,-3,-2,-1,1,2,\dots,8$. \end{ex} \byouga{C-1}{12}{C.1} \begin{rem}\label{rem:D.2} If $k > 0$, then $\sigma(K(k)) = 0$. But, if $k \geq 7$, $K(k)$ is $c$-stable. Therefore, $c$-stable alternating knots are not necessarily special alternating. If $k < 0$, then $|\sigma(K(k))|= 2$, but $\Delta_{K(k)}(t)$ always has more than two unit complex zeros.
\end{rem} {\it Proof of Theorem \ref{thm:D.1}}. First, using a standard Seifert matrix of $K(k)$, we can show that $\Delta_{K(k)}(t) = k f(t) + t^4$, where $f(t) = (t -1)^2 (t^2 +1) ( t^4 -t^3 +t^2 -t+1)$. Consider the modification $F(x)$ of $\Delta_{K(k)}(t)$: $F(x) = kx(x-2)(x^2-x -1) +1$. To prove the theorem, we study the intersection of two curves, $y_1 = g(x) = x(x-2)(x^2 -x -1)$ and $y_2 = - 1/k$, $k \ne 0$. By simple calculations, $y_1$ is depicted in Fig. C.2. Then we see that (1) the two curves $y_1$ and $y_2 = -1/k, k \leq -1$, intersect in exactly four points, only one of which has $x$-coordinate greater than 2, the others lying in $(-2,2)$. This proves Case 2.\\ (2) Suppose $k > 0$. If $k = 1$ or $2$, the two curves do not intersect and hence $K(k)$ is totally unstable. If $k = 3,4,5$ or $6$, then the two curves intersect in two points with $x$-coordinate in $(-2,2)$. If $k \geq 7$, the two curves intersect in four points with $x$-coordinate between $-2$ and $2$, and hence all the zeros are unit complex. This proves Theorem \ref{thm:D.1}. \qed \byouga{C-2}{7}{C.2} The previous sequence can be considered as a vertical extension of the original 2-bridge knot $K(1) = [2,2,2,2,-2,-2,-2,-2]$. The next sequence is a horizontal extension of $K(1)$. Consider the sequence $r[n] = [2,\dots,2, -2, \dots, -2]$, $n$ consecutive $2$'s followed by $n$ consecutive $-2$'s, with $n\ge 1$. $K(r[n])$ will be denoted by $K[n]$. We prove the following theorem. \begin{thm}\label{thm:D.4} (1) If $n$ is odd, then $\Delta_{K[n]}(t)$ has two real zeros, and the others are non-unit complex zeros.\\ (2) If $n$ is even, then $\Delta_{K[n]}(t)$ is totally unstable. \end{thm} {\it Proof.} First, using a standard Seifert matrix, it is easy to show by induction on $n$ that \begin{equation}\label{siki:D.2} \Delta_{K[n]}(t) = \sum_{k=0}^{n-1}(-1)^k (2k+1)\{t^k + t^{2n-k}\} + (-1)^n (2n+1) t^n. \end{equation} To prove the theorem, we need the following lemma.
\begin{lemm}\label{lem:D.5} $\Delta_{K[n]}(t)$ does not have a unit complex zero. \end{lemm} {\it Proof.} Obviously, $\pm 1$ are not zeros of $\Delta_{K[n]}(t)$. Now, we express $\Delta_{K[n]}(t)$ in a different form: \begin{equation}\label{siki:D.3} (t+1)^2 \Delta_{K[n]}(t) = (t^{2n+1}-1)(t-1) + (-1)^n 4 t^{n+1}. \end{equation} Suppose $\Delta_{K[n]}(t)$ has a unit complex zero $\alpha = e^{i\theta}, \theta \ne 0, \pi$. Then $(e^{(2n+1)i\theta} -1)(e^{i\theta}-1) = (-1)^{n+1}4 e^{(n+1)i\theta}$ and hence, $|e^{(2n+1)i\theta} -1| |e^{i\theta}-1| = 4$. This is impossible, since $|e^{(2n+1)i\theta} -1| \leq 2$ and $|e^{i\theta}-1| < 2$. \qed We return to the proof of the theorem.\\ Case 1. $n$ is odd, say $2m+1$. Then\\ $\Delta_{K[n]}(t) = (t-1)^2 (t^{2m}+t^{2m-2} + \cdots +t^2 +1) (t^{2m}-t^{2m-1} +\cdots + t^2 -t +1) -t^{2m+1}$. Consider the modified polynomial $F(x)$ of $\Delta_{K[n]}(t)$. Then $F(x) =(x-2)g(x)h(x) - 1$, where $g(x)$ and $h(x)$ are, respectively, the modified polynomials of $f_1(t) =t^{2m} + t^{2m-2}+ \cdots + t^2 + 1$ and $f_2 (t) = t^{2m} -t^{2m-1} +\cdots + t^2 -t +1$. Since $f_1(t)$ and $f_2(t)$ have only unit complex zeros, all the zeros of $g(x)$ and $h(x)$ are real in $(-2,2)$. Then the zeros of $\Delta_{K[n]}(t)$ are determined by the intersection of the two curves $y_1 = (x-2)g(x)h(x)$ and $y_2 = 1$. See Fig. C.3 (1). We see that $y_1$ and $y_2$ intersect in one point $P(p_1,p_2)$, with $p_1 > 2$. If $y_2$ intersects $y_1$ at another point, say $Q (q_1,q_2)$, then $-2 < q_1 < 2$ and the corresponding zeros of $\Delta_{K[n]}(t)$ are unit complex. This is impossible by Lemma \ref{lem:D.5}. \byouga{C-3}{12}{C.3} Case 2. $n$ is even, say $2m$. Then again, we have $\Delta_{K[n]}(t)= (t-1)^2 (t^{2m-2} +t^{2m-4}+\cdots +t^2+1) (t^{2m}-t^{2m-1}+\cdots + t^2-t +1) +t^{2m}$.\\ From this form, we see easily that $\Delta_{K[n]}(t)$ has no real zeros, since $\Delta_{K[n]}(t)> 0 $ if $t$ is real.
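The identity (\ref{siki:D.3}) can also be checked against (\ref{siki:D.2}) by exact coefficient arithmetic; a short Python sketch (the helper names `delta` and `mul` are ours):

```python
def delta(n):
    # coefficient list (ascending powers) of Delta_{K[n]}(t) from (D.2)
    c = [0] * (2 * n + 1)
    for k in range(n):
        c[k] += (-1) ** k * (2 * k + 1)
        c[2 * n - k] += (-1) ** k * (2 * k + 1)
    c[n] += (-1) ** n * (2 * n + 1)
    return c

def mul(p, q):
    # product of two polynomials given as ascending coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

for n in range(1, 12):
    lhs = mul([1, 2, 1], delta(n))            # (t+1)^2 * Delta_{K[n]}(t)
    rhs = [0] * (2 * n + 3)
    rhs[0] += 1; rhs[1] -= 1                  # (t^{2n+1} - 1)(t - 1)
    rhs[2 * n + 1] -= 1; rhs[2 * n + 2] += 1  # ... = t^{2n+2} - t^{2n+1} - t + 1
    rhs[n + 1] += (-1) ** n * 4               # + (-1)^n 4 t^{n+1}
    assert lhs == rhs
```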
Now, as before, we consider the modified polynomial $F(x)$ of $\Delta_{K[n]}(t)$ and see that $F(x) =(x-2) g(x)h(x) +1$, where $g(x)$ and $h(x)$ are the modifications of $\frac{t^{2m}-1}{t^2 -1}$ and $\frac{t^{2m+1}+1}{t +1}$, respectively. Both have only real zeros in $(-2,2)$. Consider the intersection of $y_1 = (x-2)g(x)h(x)$ and $y_2 = -1$. Since $\deg y_1 = 2m$, the graph appears as in Fig. C.3 (2). As we proved earlier, $y_1$ and $y_2$ do not intersect, since otherwise $\Delta_{K[n]}(t)$ would have a real or a unit complex zero. \qed Now, none of the zeros of $\Delta_{K[n]}(t)$ is unit complex, but these zeros seem to be distributed in a narrow strip containing the unit circle. More precisely, we propose the following conjecture. \begin{yosou}\label{conj:D.6} Let $\alpha= \frac{3- \sqrt{5}}{2}$; it is one of the real zeros of $\Delta_{K[1]}(t)=t^2 -3t +1$. Let $C$ be the circle with centre at $(\frac{\alpha -1}{2}, 0)$ and radius $\frac{\alpha+1}{2}$. Then, for $n \geq 1$, all the zeros of $\Delta_{K[n]}(t)$ with absolute value $< 1$ lie in a narrow lunar domain bounded by the unit circle and $C$. \end{yosou} \byouga{C-4}{5}{C.4} \begin{rem}\label{siki:D.7} If Hoste's conjecture is true, then none of the zeros of the Alexander polynomial of an alternating knot is in the interior of the circle with centre at $(-1/2, 0)$ and radius $1/2$. \end{rem} \begin{ex} Fig. C.5 below depicts the zeros for the cases $n=1,\dots, 10$ around the unit circle. \end{ex} \byouga{C-5}{12}{C.5} \end{alphasection} \end{document}
\begin{document} \title{Cyclotomic Units and Class Groups in $\mathbb{Z}_p$-extensions of real abelian fields} \author{Filippo A. E. Nuccio\footnote{Universit\`a ``La Sapienza'' - ROME, \texttt{[email protected]}}} \date{} \maketitle \abstract{For a real abelian number field $F$ and for a prime $p$ we study the relation between the $p$-parts of the class groups and of the quotients of global units modulo cyclotomic units along the cyclotomic $\mathbb{Z}_p$-extension of $F$. Assuming Greenberg's conjecture about the vanishing of the $\lambda$-invariant of the extension, a map between these groups has been constructed by several authors, and shown to be an isomorphism if $p$ does not split in $F$. We focus on the split case, showing that there are, in general, non-trivial kernels and cokernels.\\\\\small{2000 Mathematics Subject Classification: $11R23$, $11R29$}} \section{Introduction} Let $F/\mathbb{Q}$ be a real abelian field of conductor $f$ and let $Cl_F$ be its ideal class group. A beautiful formula for the order of this class group comes from the group of Cyclotomic Units: this is a subgroup of the global units $\mathcal{O}_F^\times$ whose index is linked to the order of $Cl_F$. To be precise, we give the following definition (\cite{Sin80/81}, section 4): \begin{dfz} For integers $n>1$ and $a$ not divisible by $n$, let $\zeta_n$ be a primitive $n$-th root of unity. Then $Norm^{\mathbb{Q}(\zeta_n)}_{F\cap \mathbb{Q}(\zeta_n)}(1-\zeta_n^a)\in F$ and we define the cyclotomic numbers $D_F$ to be the subgroup of $F^\times$ generated by $-1$ and $Norm^{\mathbb{Q}(\zeta_n)}_{F\cap \mathbb{Q}(\zeta_n)}(1-\zeta_n^a)$ for all $n>1$ and all $a$ not divisible by $n$.
Then we define the Cyclotomic Units of $F$ to be $$ Cyc_F:=D_F\cap \mathcal{O}_F^\times $$ \end{dfz} Sinnott proved in \cite{Sin80/81}, Theorem 4.1 together with Proposition 5.1, the following theorem: \begin{trm*}[Sinnott] There exists an explicit constant $\kappa_F$ divisible only by $2$ and by primes dividing $[F:\mathbb{Q}]$ such that $$ [\mathcal{O}_F^\times:Cyc_F]=\kappa_F|Cl_F|\;. $$ \end{trm*} Now let $p$ be an odd prime that does not divide $[F:\mathbb{Q}]$: by tensoring $\mathcal{O}_F^\times$, $Cyc_F$ and $Cl_F$ with $\mathbb{Z}_p$ we get an equality $$ [\mathcal{O}_F^\times\otimes\mathbb{Z}_p:Cyc_F\otimes\mathbb{Z}_p]=|Cl_F\otimes\mathbb{Z}_p| $$ and it is natural to ask for an algebraic interpretation of this. Moreover, observe that our assumption $p\nmid [F:\mathbb{Q}]$ makes the Galois group $\Delta:=\mathrm{Gal}(F/\mathbb{Q})$ act on the modules appearing above through one-dimensional characters, and we can decompose them accordingly: in the sequel we write $M(\chi)$ for every $\mathbb{Z}[\Delta]$-module $M$ to mean the submodule of $M\otimes \mathbb{Z}_p$ on which $\Delta$ acts as $\chi$, where $\chi\in\hat{\Delta}$ (see the beginning of Section \ref{SecCycUnits} for a precise discussion). Then an even more optimistic question is to hope for a character-by-character version of Sinnott's theorem, namely \begin{equation}\label{charbychar} [\mathcal{O}_F^\times\otimes\mathbb{Z}_p(\chi):Cyc_F\otimes\mathbb{Z}_p(\chi)]\stackrel{?}{=}|Cl_F\otimes\mathbb{Z}_p(\chi)|\; \end{equation} and then to ask for an algebraic interpretation of it. Although it is easy to see that these $\Delta$-modules are in general not isomorphic (see the example on page 143 of \cite{KraSch95}), it can be shown that they sit in an exact sequence for a wide class of fields arising in classical Iwasawa theory.
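Concretely, since $p\nmid|\Delta|$, the order $|\Delta|$ is invertible in $\mathbb{Z}_p$ and the eigenspace $M(\chi)$ is cut out by the standard orthogonal idempotent; in one common normalization (recalled here for convenience, after extending scalars to $\mathbb{Z}_p[\mathrm{Im}\,\chi]$):

```latex
e_\chi \;=\; \frac{1}{|\Delta|}\sum_{\delta\in\Delta}\chi(\delta)\,\delta^{-1}
\;\in\;\mathbb{Z}_p[\mathrm{Im}\,\chi][\Delta]\,,
\qquad
\delta\cdot e_\chi \;=\; \chi(\delta)\,e_\chi
\quad\text{for all }\delta\in\Delta\,,
```

so that $e_\chi$ projects onto the submodule on which $\Delta$ acts as $\chi$; the idempotents attached to the various characters are orthogonal and sum to $1$.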
More precisely, let $F_\infty/F$ be the cyclotomic $\mathbb{Z}_p$-extension of $F$ and let $\Gamma=\mathrm{Gal}(F_\infty/F)\cong\mathbb{Z}_p$: then $$ F_\infty=\bigcup_{n\geq 0}F_n\supset\hdots\supset F_n\supset F_{n-1}\supset\hdots\supset F_0=F $$ where $F_n/F$ is a cyclic extension of degree $p^n$ whose Galois group is isomorphic to $\Gamma/\Gamma^{p^n}$. In a celebrated work (see \cite{Iwa73}) Iwasawa gives a formula for the growth of the order of $Cl_{F_n}\otimes\mathbb{Z}_p$: he proves that there are three integers $\mu,\lambda$ and $\nu$, and an index $n_0\geq 0$, such that $$ |Cl_{F_n}\otimes\mathbb{Z}_p|=p^{\mu p^n+\lambda n+\nu}\; \mathrm{\;for\;every\;}n\geq n_0\;. $$ Moreover, Ferrero and Washington proved in \cite{FerWas79} that the invariant $\mu$ vanishes. A long-standing conjecture by Greenberg (see \cite{Gre76}, where conditions for this vanishing are studied) predicts that $\lambda=0$: according to the conjecture the $p$-part of the class groups should stay bounded in the tower. Although a proof of this conjecture has never been provided, numerous computational checks have been performed, verifying the conjecture in many cases (see, for instance, \cite{KraSch95}). Under the assumptions $\lambda=0$ and $\chi(p)\ne 1$, \emph{i.e.} that $p$ does not split in $F$, some authors (see \cite{BelNQD01}, \cite{KraSch95}, \cite{Kuz96} and \cite{Oza97}) were able to construct an explicit isomorphism \begin{equation}\label{millemila} \alpha:\big(Cl_{F_n}\otimes\mathbb{Z}_p\big)(\chi)\cong\big(\mathcal{O}_{F_n}^\times/Cyc_{F_n}\otimes\mathbb{Z}_p\big)(\chi) \end{equation} if $n$ is big enough. Although the construction of the above morphism works also in the case $\chi(p)=1$, as detailed in the beginning of Section \ref{Main Result}, the split case seems to have never been addressed. We then focus on this case, and study the map in this context, still calling it $\alpha$.
Our main result is the following (see Corollary \ref{Renefruit}): \begin{trm*} With notations as above, assume that $\chi$ is a character of $\Delta$ such that $\chi(p)=1$ and that $\lambda=0$. Then, for sufficiently large $n$, there is an exact sequence $$ 0\longrightarrow \mathrm{K} \rightarrow \big(Cl_{F_n}\otimes\mathbb{Z}_p\big)(\chi)\stackrel{\alpha}{\rightarrow}\big(\mathcal{O}_{F_n}^\times/Cyc_{F_n}\otimes\mathbb{Z}_p\big)(\chi)\rightarrow \mathrm{C}\rightarrow 0\;: $$ both the kernel $\mathrm{K}$ and the cokernel $\mathrm{C}$ of $\alpha$ are cyclic groups, with trivial $\Gamma$-action, of order $|L_p(1,\chi)|_p^{-1}$, where $L_p(s,\chi)$ is the Kubota-Leopoldt $p$-adic $L$-function. \end{trm*} \noindent\textbf{Acknowledgments } This work is part of my PhD thesis, written under the supervision of René Schoof. I would like to take this opportunity to thank him not only for proposing that I work on this subject and for the help he gave me in writing this paper, but especially for all the time and patience he put into following me through my PhD and for the viewpoint on Mathematics he suggested to me. \section{Some Tate Cohomology}\label{SecTate} In this section we briefly recall some well-known facts that are useful in the sequel. Throughout, $L/K$ is a cyclic extension of number fields, whose Galois group we denote by $G$. In our application, $K$ and $L$ will usually be layers $F_m$ and $F_n$ of the cyclotomic $\mathbb{Z}_p$-extension for some $n\geq m$, but we prefer here not to restrict to this special case. We need to introduce some notation. Let $$ \mathbb{U}_K=\prod_{v\nmid \infty}\mathcal{O}_{K,v}^\times\times\prod_{v\mid \infty}K_v^\times $$ be the id\`ele units, \emph{i.e.} id\`eles having valuation $0$ at all finite places $v$ (we refer the reader to sections $14-19$ of Cassels' paper in \cite{CasFro86} for basic properties of id\`eles and the id\`ele class group) and let $\Sigma$ be the set of places of $K$ that ramify in $L/K$.
It is known (see section $1.4$ of Serre's paper in \cite{CasFro86}) that the Tate cohomology of local units in an unramified extension of local fields is trivial: therefore Shapiro's Lemma (see Proposition 7.2 of Tate's paper in \cite{CasFro86}) gives $$ \hqG{\mathbb{U}_L}=\hqG{\prod_{v\in\Sigma}\prod_{w\mid v}\mathcal{O}_{L,w}^\times}\cong\prod_{v\in\Sigma}\hat{H}^q(G_v,\mathcal{O}_{L,w}^\times), $$ where we fix a choice of a place $w$ of $L$ above $v$ for every $v\in\Sigma$ and we denote by $G_v$ its decomposition group in $G$; we will make this identification throughout. We denote the product of local units at places $v\in\Sigma$ appearing above by $U_\Sigma$. Consider the following commutative diagram of $G$-modules: \begin{equation}\label{magic square} \xymatrix{&0\ar@{>}[1,0]&0\ar@{>}[1,0]&0\ar@{>}[1,0]&\\ 0\ar@{>}[0,1]&\mathcal{O}_L^\times\ar@{>}[1,0]\ar@{>}[0,1] &L^\times\ar@{>}[1,0]\ar@{>}[0,1]&\mathrm{Pr}_L\ar@{>}[1,0]\ar@{>}[0,1]&0\\ 0\ar@{>}[0,1]&\mathbb{U}_L\ar@{>}[1,0]\ar@{>}[0,1]& \mathbb{A}_L^\times \ar@{>}[1,0]\ar@{>}[0,1]& \mathrm{Id}_L\ar@{>}[1,0]\ar@{>}[0,1]&0 \\ 0\ar@{>}[0,1]&Q_L\ar@{>}[1,0]\ar@{>}[0,1] &C_L\ar@{>}[1,0]\ar@{>}[0,1]& Cl_L\ar@{>}[1,0]\ar@{>}[0,1]&0\\ &0&0&0} \end{equation} Here $\mathrm{Id}_L$ and $\mathrm{Pr}_L$ denote the groups of all fractional ideals of $L$ and of principal ideals, respectively; while $C_L$ is the group of id\`ele classes and $Q_L=\mathbb{U}_L/\mathcal{O}_L^\times$. \begin{lmm}\label{primi} Consider the following diagram induced by (\ref{magic square}): \begin{equation*}\label{lemma primi} \xymatrix{ &\huG{\mathbb{U}_L}\ar@{>}[1,0]^\beta\\ \hzG{Cl_L}\ar@{>}[0,1]^\pi&\huG{Q_L}\;.} \end{equation*} Then $\mathrm{Im}(\beta)=\pi(\overline{\Sigma^G})$, where $\Sigma^G$ are the primes in $L$ above $\Sigma$ fixed by $G$ and $\overline{\Sigma^G}$ is their image in $\hzG{Cl_L}$.
\end{lmm} \begin{proof} First of all, the above decomposition $$ \huG{U_\Sigma}\cong \prod_{v\in\Sigma}\huG{\prod_{w\mid v}\mathcal{O}_w^\times} $$ allows us to write $\beta=\prod\beta_v$ where $\beta_v$ is the restriction of $\beta$ to the $v$-th factor $\huG{\prod \mathcal{O}_w^\times}$; we therefore fix a place $v\in\Sigma$ and we show that $\mathrm{Im}(\beta_v)=\pi(\overline{\mathfrak{p}_{w_1}\cdots\mathfrak{p}_{w_g}})$ where the $\mathfrak{p}_{w_i}$'s are the primes of $L$ above $v$. This follows from the fact that $\huG{\prod \mathcal{O}_w^\times}$ is a product of $g$ (the number of places $w\mid v$) cyclic groups, each of order $e_v$, the ramification index of $v$ in $L/K$. Now fix a uniformizer $\pi_{w_i}\in\mathcal{O}_{w_i}$ for all $w_i\mid v$ and choose a generator $\sigma_v$ of $G_v$: we have $$ \beta_v(\pi_{w_1}^{1-\sigma_{v}},\hdots,\pi_{w_g}^{1-\sigma_{v}})=\pi(\overline{\mathfrak{p}_{w_1}\cdots\mathfrak{p}_{w_g}})\;, $$ as can immediately be seen from the commutativity of \begin{equation*}\xymatrix{ \hzG{\mathrm{Id}_L}\ar@{>}[0,1]\ar@{>}[1,0]^{1-\sigma}&\huG{\prod_{w\mid v}\mathcal{O}_w^\times}\ar@{>}[1,0]^{\beta_v}\\ \hzG{Cl_L}\ar@{>}[0,1]^\pi&\huG{Q_L} }\end{equation*} where $\sigma$ is a generator of $G$ (inducing $\sigma_v$ through $G_v\hookrightarrow G$). \end{proof} \begin{prop}[See \cite{Iwa73}]\label{ker-capit} Let $\jmath: Cl_K\rightarrow Cl_L$ be the map induced by extending fractional ideals of $\mathcal{O}_K$ to $\mathcal{O}_L$. Then $$ \mathrm{Ker}(\jmath)\cong\mathrm{Ker}\Big(\hUG{\mathcal{O}_L^\times}\rightarrow\hUG{U_\Sigma}\Big)\;. $$ \end{prop} \begin{proof} We simply apply the Snake Lemma twice. First of all, apply it to \begin{equation*}\xymatrix{ 0\ar@{>}[0,1]&Q_K\ar@{>}[0,1]\ar@{>}[1,0]&C_K\ar@{>}[0,1]\ar@{=}[1,0]&Cl_K\ar@{>}[0,1]\ar@{>}[1,0]^\jmath&0\\ 0\ar@{>}[0,1]&Q_L^G\ar@{>}[0,1]&C_L^G\ar@{>}[0,1]&Cl_L^G\ar@{>}[0,1]&\hUG{Q_L}\;: } \end{equation*} it shows $\mathrm{Ker}(\jmath)\cong Q_L^G/Q_K$.
Then apply it to \begin{equation*}\xymatrix{ 0\ar@{>}[0,1]&\mathcal{O}_K^\times\ar@{>}[0,1]\ar@{=}[1,0]&\mathbb{U}_K\ar@{>}[0,1]\ar@{=}[1,0]&Q_K\ar@{>}[0,1]\ar@{>}[1,0]&0\\ 0\ar@{>}[0,1]&(\mathcal{O}_L^\times)^G\ar@{>}[0,1]&(\mathbb{U}_L)^G\ar@{>}[0,1]&Q_L^G\ar@{>}[0,1]&\hUG{\mathcal{O}_L^\times}\;, } \end{equation*} finding $Q_L^G/Q_K\cong\mathrm{Ker}\Big(\hUG{\mathcal{O}_L^\times}\rightarrow\hUG{U_\Sigma}\Big)$. \end{proof} \noindent\textbf{Remark. } The above proof does not use the hypothesis that $G$ be cyclic. In fact, in the sequel we will apply the proposition assuming this cyclicity, finding $\mathrm{Ker}(\jmath)\cong\mathrm{Ker}\Big(\huG{\mathcal{O}_L^\times}\rightarrow\huG{U_\Sigma}\Big)$. We remark that in his thesis \cite{Gre76} Greenberg gives the following criterion: $\lambda=0$ if and only if the map $\jmath$ relative to the cyclic extension $F_n/F$, restricted to $p$-Sylow subgroups, becomes $0$ for sufficiently large $n$. This will be the starting point for our proof of Theorem \ref{LHF}.\\ Finally, we state a lemma about the cohomology of the direct product of two groups. The main source for this is \cite{Sch90}. Suppose that $F$ is a Galois subfield of $K$ such that $L/F$ is Galois with $\mathrm{Gal}(L/F)\cong \Delta\times G$, where $\Delta=\mathrm{Gal}(K/F)$ is abelian and the isomorphism above is induced by restriction. Then every ``arithmetic'' module attached to $L$ comes equipped with a natural action of $\Delta\times G$, and we want to compare this action with the natural one on the Tate cohomology of $G$. We have the following \begin{lmm}[{\cite{Sch90}, Section 4}]\label{cohomRene'} Suppose that $G$ is a $p$-group and that $p\nmid |\Delta|$. Let $M$ be a $\mathbb{Z}[\Delta\times G]$-module: then, for every $q\in\mathbb{Z}$, the natural map $$ \hqG{M\otimes\mathbb{Z}_p}^\Delta\longrightarrow\hat{H}^q\Big(G,(M\otimes\mathbb{Z}_p)^\Delta\Big)
$$ is an isomorphism (of abelian groups with trivial $\Delta\times G$-action). \end{lmm} \section{Cyclotomic Units}\label{SecCycUnits} Fix from now on a non-trivial character $\chi\neq 1$ of $\Delta$ and an odd prime $p\nmid [F:\mathbb{Q}]$ such that $\chi(p)=1$. We set $R=\mathbb{Z}_p[\mathrm{Im}(\chi)]$ and we let $\delta\in \Delta$ act on $R$ by $x\mapsto\chi(\delta)x$. In this way, $R$ becomes a $\mathbb{Z}_p[\Delta]$-algebra. For every $\mathbb{Z}[\Delta]$-module $M$ we denote by $M(\chi)$ the $\chi$-eigenspace of $M\otimes\mathbb{Z}_p$ for the action of $\Delta$: there is an isomorphism of $R$-modules $M(\chi^{-1})\cong\big((M\otimes\mathbb{Z}_p)\otimes_{\mathbb{Z}_p}R\big)^\Delta$ (our notation is consistent with that of \cite{Rub00}, Section $1.6$, where all this is properly spelled out: we denote by $R$ what Rubin calls $\mathcal{O}$). In particular, with the notation and assumptions of Lemma \ref{cohomRene'}, \begin{equation}\label{chicohom} \hqG{M(\chi)}\cong \hqG{M}(\chi)\qquad\forall\;q\in\mathbb{Z}\;. \end{equation} It is easy to see that $Cl_F(\chi)$ is isomorphic via the norm to the $p$-part of the class group of $F^{\mathrm{Ker}\chi}$, a field in which $p$ splits completely (see \cite{Sch90} for more details). Analogously, $\mathcal{O}_F^\times/Cyc_F(\chi)$ is isomorphic to $\big(\mathcal{O}_{F^{\mathrm{Ker}\chi}}^\times/Cyc_{F^{\mathrm{Ker}\chi}}\big)\otimes\mathbb{Z}_p$: replacing if necessary $F$ by $F^{\mathrm{Ker}\chi}$, we can therefore assume that $p$ splits completely in $F$, and we do so throughout. We now go back to the situation described in the introduction: let then $F_\infty/F$ be the $\mathbb{Z}_p$-extension of $F$ and denote by $\Gamma$ its Galois group. For every $n$ and for every prime ideal $\mathfrak{p}\subseteq\mathcal{O}_{F_n}$ dividing $p$, we let $\mathcal{O}_{F_n,\mathfrak{p}}^{\times}$ denote the local units at $\mathfrak{p}$.
Then we set $$ U_n:=\Big(\prod_{\mathfrak{p}\mid p}\mathcal{O}_{F_n,\mathfrak{p}}^{\times}\Big)(\chi) $$ and analogously $$ \mathcal{O}_n^\times:=\mathcal{O}_{F_n}^{\times}(\chi)\quad\mathrm{and}\quad Cyc_n:=Cyc_{F_n}(\chi)\;. $$ Observe that since in a $\mathbb{Z}_p$-extension only primes above $p$ ramify, the set $\Sigma$ of Section \ref{SecTate} relative to $F_n/F$ is $\Sigma=\{\mathfrak{p}\subset \mathcal{O}_F,\;\mathfrak{p}\mid p\}$ for all $n$, and our notation is consistent with the one introduced there. We finally set \begin{equation*} B_n:=\mathcal{O}_n^\times/Cyc_n\;,\quad\mathcal{B}_n:=U_n/Cyc_n\quad\text{and}\quad A_n:=Cl_{F_n}(\chi)\;. \end{equation*} By Sinnott's theorem above, together with Theorem 5.3 in \cite{Sin80/81}, which guarantees $p\nmid \kappa_{F_n}$ for all $n$, the groups $Cl_{F_n}\otimes\mathbb{Z}_p$ and $\big(\mathcal{O}_{F_n}^\times/Cyc_{F_n}\big)\otimes\mathbb{Z}_p$ have the same order. The character-by-character version of this result is much deeper: it is known as Gras' Conjecture, and it is a consequence of the (now proven) Main Conjecture of Iwasawa Theory, as detailed in \cite{Gre77} or in \cite{BelNQD01}. It follows that $|A_n|=|B_n|$ for all $n\geq 0$.\\ \noindent\textbf{Remark. } The semi-local units considered by Gillard in his paper \cite{Gil79-1} are products over all $\mathfrak{p}$ above $p$ of local units that are $1\pmod{\mathfrak{p}}$. In our situation, all completions $F_\mathfrak{p}$ at primes $\mathfrak{p}\mid p$ are isomorphic to $\mathbb{Q}_p$, so the two definitions coincide and $U_n$ is a free $\mathbb{Z}_p$-module of rank $1$.\\ Moreover, since $p$ splits completely in $F/\mathbb{Q}$, all primes above $p$ totally ramify in $F_n/F$ and the subgroup of fractional ideals in $F_n$ having support above $p$ is isomorphic to $\mathbb{Z}[\Delta]$. After tensoring with $R$, we can consider the $\chi$-eigenspace, which is still cyclic (as an $R$-module, now), and so is its projection to $A_n$.
Thus, it makes sense to speak of \emph{the subgroup of $A_n$ of primes above $p$}, a cyclic group that we denote by $\Pi_n$. Since it is contained in $A_n^{G_n}$, we can use (\ref{chicohom}) to restate Lemma \ref{primi} by saying that (with the same notation introduced there) $\beta(\hu{U_n})=\pi(\Pi_n)$ for all $n\geq m$.\\ We now investigate in some detail the structure of $Cyc_n$. First of all, letting $f$ be the conductor of $F$, we define the unit $$ \eta_n:=\bigg(\mathrm{Norm}^{\mathbb{Q}(\zeta_{fp^{n+1}})}_{F_n}\big(1-\zeta_{fp^{n+1}}\big)\bigg)^{\sum_{\delta\in\Delta}\chi(\delta^{-1})\delta}\in Cyc_n\;. $$ It is a unit since $p\nmid f$ (because $p$ splits completely in $F$), it is cyclotomic by definition, and we have projected it to the $\chi$-component. Moreover, Sinnott's description of cyclotomic units shows that $Cyc_n=\eta_0\mathbb{Z}_p\times \eta_n\mathbb{Z}_p[G_n]$ (see, for instance, section 3 of \cite{Gil79-2} for details). In particular, we have an isomorphism of $G_n$-modules $Cyc_n\cong \mathbb{Z}_p\times I_{G_n}$, where $I_{G_n}$ is the augmentation ideal in $\mathbb{Z}_p[G_n]$, and we find a split exact sequence \begin{equation}\label{?} 0\longrightarrow\langle\eta_n\rangle\longrightarrow Cyc_n\longrightarrow \langle\eta_0\rangle\longrightarrow 0 \end{equation} where we denote, here and in what follows, by $\langle\eta_0\rangle$ and $\langle\eta_n\rangle$ the $\mathbb{Z}_p[G_n]$-modules generated by $\eta_0$ and $\eta_n$ respectively: by the above isomorphisms, (\ref{?}) corresponds to the sequence \begin{equation}\label{??} 0\longrightarrow I_{G_n}\longrightarrow I_{G_n}\times \mathbb{Z}_p\longrightarrow \mathbb{Z}_p\longrightarrow 0\;. \end{equation} \begin{lmm}\label{cyc} We have $\hz{\langle\eta_n\rangle}=0$ and $\hu{\langle\eta_0\rangle}=0$.
In particular, the natural map $\hq{Cyc_n}\cong\hq{\langle\eta_0\rangle}\times\hq{\langle\eta_n\rangle}$ induced by (\ref{?}) gives isomorphisms of $\Delta$-modules $$\hz{Cyc_n}\cong\hz{\langle\eta_0\rangle}$$ and $$ \hu{Cyc_n}\cong\hu{\langle\eta_n\rangle}\;. $$ Both are cyclic groups of order $p^n$. \end{lmm} \begin{proof} The exact sequence $$ 0\longrightarrow I_{G_n}\longrightarrow \mathbb{Z}_p[G_n]\longrightarrow \mathbb{Z}_p\longrightarrow 0 $$ shows, since $\hq{\mathbb{Z}_p[G_n]}=0$ for all $q$, that $\hz{I_{G_n}}\cong\hu{\mathbb{Z}_p}=0$. The $\mathbb{Z}_p[G_n]$-isomorphisms $\langle\eta_n\rangle\cong I_{G_n}$ and $\langle\eta_0\rangle\cong\mathbb{Z}_p$ give the result. \end{proof} Suppose now that $\lambda=0$. The extension $F_\infty/F$ is totally ramified since $p$ splits in $F$, and the norm maps $N^n_m:A_n\rightarrow A_m$ are surjective by class field theory for all $n\geq m$; assuming $\lambda=0$ and choosing $m$ big enough, the orders of $A_n$ and $A_m$ coincide, and these norm maps are actually isomorphisms. Therefore the projective limit $X=\varprojlim A_n$ with respect to norms stabilizes to a finite group and $A_n\cong X$ for all $n\gg 0$ (we introduce here the notation $a\gg b$, equivalent to $b\ll a$, to mean that there exists a $b_0\geq b$ such that what we are stating holds for all $a\geq b_0$). In particular, the action of $\Gamma$ on $X$ must factor through a certain quotient $G_m=\Gamma/\Gamma^{p^m}$. Therefore $G_{n,m}$ acts trivially on $A_n$ for all $n\geq m$, and the $G_{n,m}$-norm $N_{G_{n,m}}=\sum_{\tau\in G_{n,m}}\tau$ acts on $A_n$ as multiplication by $p^{n-m}$. Choosing $n$ big enough so that $p^{n-m}A_n=0$, we find $N_{G_{n,m}}A_n=0$ and \begin{equation}\label{n>>m}\begin{split} \hz{A_n}=&A_n^{G_{n,m}}/N_{G_{n,m}}A_n=A_n\\ &=A_n[N_{G_{n,m}}]/I_{G_{n,m}}A_n=\hu{A_n} \end{split}\end{equation} where $I_{G_{n,m}}$ is the augmentation ideal of $\mathbb{Z}_p[G_{n,m}]$.
Therefore $A_n\cong \hu{A_n}\cong\hz{A_n}$ whenever $\lambda=0$ and $n\gg m\gg 0$. A similar argument leads to the equivalent of (\ref{n>>m}) for $B_n$, namely $\hq{B_n}\cong B_n$ for all $q\in\mathbb{Z}$. \begin{lmm}\label{crocerossa} If $\lambda=0$ and $m\gg 0$, the natural map $$ H^{1}(G_{n,m},Cyc_n)\longrightarrow H^{1}(G_{n,m},\mathcal{O}_n^\times) $$ is injective for all $n\geq m$. \end{lmm} \begin{proof} Taking $G_{n,m}$-cohomology in the exact sequence defining $B_n$ gives \begin{multline} 0\longrightarrow H^0(G_{n,m},Cyc_n)\longrightarrow H^0(G_{n,m},\mathcal{O}_n^\times)\longrightarrow H^0(G_{n,m},B_n)\longrightarrow \\ \longrightarrow H^1(G_{n,m},Cyc_n)\longrightarrow H^1(G_{n,m},\mathcal{O}_n^\times)\;. \end{multline} Since the $G_{n,m}$-invariants of $Cyc_n$ and $\mathcal{O}_n^\times$ are $Cyc_m$ and $\mathcal{O}_m^\times$ respectively, we find $\mathrm{Ker}\Big(H^{1}(G_{n,m},Cyc_n)\longrightarrow H^{1}(G_{n,m},\mathcal{O}_n^\times)\Big)=B_n^{G_{n,m}}/B_m$. The assumptions that $m$ is big enough and that $\lambda=0$ imply that the orders of $B_n$ and $B_m$ coincide, and the same holds, \emph{a fortiori}, for $B_n^{G_{n,m}}$ and $B_m$. Thus, the above kernel is trivial. \end{proof} \section{Semi-local Units modulo Cyclotomic Units} We now state a very useful result about semi-local units in our setting; it can already be found in a paper by Iwasawa \cite{Iwa60}. We keep the notation introduced in the previous section and from now on make constant use of Lemma \ref{cohomRene'} above, especially in the form of the isomorphism (\ref{chicohom}). \begin{dfz}\label{U1 e B1} We define $U_n^1$ to be the kernel $U_n^1=\mathrm{Ker}(N^n_0:U_n\rightarrow U_0)$ and we set $\mathcal{B}_n^1=U_n^1/\langle\eta_n\rangle$. \end{dfz} \begin{prop}\label{splitting} The natural map $U_n^1\times U_0\hookrightarrow U_n$ induced by the injections is an isomorphism of $G_n$-modules. It induces a decomposition $\mathcal{B}_n\cong \mathcal{B}_n^1\times\mathcal{B}_0$.
\end{prop} \begin{proof} Consider the exact sequence induced by the norm map $N^n_0$ $$ 0\longrightarrow U_n^1\longrightarrow U_n\longrightarrow N^n_0(U_n)\subseteq U_0\longrightarrow 0\;. $$ Since the extension $F_{n,\mathfrak{p}_n}/F_{0,\mathfrak{p}_0}$ is cyclic and totally ramified, local class field theory shows that $\hat{H}^0(G_n,U_n)=U_0/N^n_0(U_n)$ is a cyclic group of order $p^n$. As $U_0\cong\mathbb{Z}_p$, this shows $N^n_0(U_n)=U_0^{p^n}$, and since $U_0$ contains no roots of unity of $p$-power order we can identify this group with $U_0$ simply by extracting $p^n$-th roots. We find \begin{equation}\label{splitseq} 0\longrightarrow U_n^1\longrightarrow U_n\stackrel{\sqrt[p^n]{N^n_0}}{\longrightarrow} U_0\longrightarrow 0\;. \end{equation} Since the natural embedding $U_0\hookrightarrow U_n$ is a $G_n$-linear section of (\ref{splitseq}), it splits the sequence and therefore gives an isomorphism $U_n^1\times U_0\cong U_n$. The fact that this splitting induces an isomorphism $\mathcal{B}_n\cong\mathcal{B}_n^1\times \mathcal{B}_0$ simply follows from the commutativity of the following diagram: $$\xymatrix{ 0\ar@{>}[0,1]&\langle\eta_n\rangle\ar@{>}[0,1]\ar@{^(->}[1,0]&Cyc_n\ar@{>}[0,1]^{\sqrt[p^n]{N^n_0}}\ar@{^(->}[1,0]&\langle\eta_0\rangle\ar@{>}[0,1]\ar@{^(->}[1,0]&0\\ 0\ar@{>}[0,1]&U_n^1 \ar@{>}[0,1]&U_n\ar@{>}[0,1]^{\sqrt[p^n]{N^n_0}}&U_0\ar@{>}[0,1]&0\;. }$$ \end{proof} More useful than the splitting itself is the following easy consequence: \begin{cor}\label{cohom-splitting} For every $n\geq m\geq 0$ and for every $q\in\mathbb{Z}$ the natural maps $$ \hq{U_n}\rightarrow\hq{U_n^1}\times\hq{U_0} $$ and $$ \hq{\mathcal{B}_n}\rightarrow\hq{\mathcal{B}_n^1}\times\hq{\mathcal{B}_0} $$ induced by Proposition \ref{splitting} are isomorphisms of abelian groups. In particular, we have identifications $\hu{U_n}=\hu{U_n^1}$ and $\hz{U_n}=\hz{U_0}$. \end{cor} \begin{proof} The splitting of the cohomology groups follows immediately from the Proposition.
Concerning the cohomology of $U_n$, we observe that, since $G_{n,m}$ acts trivially on the torsion-free module $U_0$, the group $\hz{U_0}$ is cyclic of order $p^{n-m}$ while $\hu{U_0}=0$. This already implies that $\hu{U_n}= \hu{U_n^1}$. It also shows that $\hz{U_n^1}$ must be trivial, because $\hz{U_n}$ is itself cyclic of order $p^{n-m}$ by local class field theory. \end{proof} \begin{lmm}\label{ordine cohom} For every $m\geq 0$ and for every $n\gg m$ there are isomorphisms $\hq{\mathcal{B}_n^1}\cong \mathbb{Z}_p/L_p(1,\chi)$ and $\hq{\mathcal{B}_0}\cong \mathbb{Z}_p/L_p(1,\chi)$ holding for every $q\in\mathbb{Z}$. \end{lmm} \begin{proof} Let $\Lambda:=\mathbb{Z}_p[[T]]$ and fix an isomorphism $\varpi:\Lambda\cong\mathbb{Z}_p[[\Gamma]]$. This isomorphism fixes a choice of a topological generator $\varpi(1+T)=:\gamma_0$ of $\Gamma$, and we denote by $\kappa\in\mathbb{Z}_p^\times$ the element $\varepsilon_{cyc}(\gamma_0)$, where $$ \varepsilon_{cyc}:\mathrm{Gal}(\bar{F}/F)\longrightarrow \mathbb{Z}_p^\times $$ is the cyclotomic character of $F$. The main tool of the proof will be Theorem 2 of \cite{Gil79-1}, which gives isomorphisms of $\mathbb{Z}_p[[\Gamma]]$-modules \begin{equation}\label{gil=} \mathcal{B}_0\cong \mathbb{Z}_p/L_p(1,\chi)\qquad\text{and}\qquad\mathcal{B}_n\cong \Lambda/(f(T),\omega_n(T)/T) \end{equation} where $\omega_n(T)=(1+T)^{p^n}-1$ and $f(T)\in \Lambda$ is the power series satisfying $f(\kappa^s-1)=L_p(1-s,\chi)$ for all $s\in\mathbb{Z}_p$. We make $\Gamma$ act on the modules appearing in (\ref{gil=}) by $\gamma_0\cdot x=\varpi(\gamma_0)x=(1+T)x$ for all $x\in \mathcal{B}_0$ (\emph{resp.} all $x\in \mathcal{B}_n$): this induces the action of $G_{n,m}$ with respect to which we compute the cohomology.
Starting with $\mathcal{B}_0$, observe that the action of $\Gamma$, and thus of its subquotient $G_{n,m}$, is trivial on the \emph{finite group} $\mathcal{B}_0$: as in (\ref{n>>m}) we get $$ \hq{\mathcal{B}_0}\cong\mathcal{B}_0\qquad\text{for all }n\gg m\gg 0\;, $$ and we apply (\ref{gil=}) to get our claim. Now we compute $\hu{\mathcal{B}_n^1}$: by definition, $\hu{\mathcal{B}_n^1}=\mathcal{B}_n^1[N_{G_{n,m}}]/I_{G_{n,m}}\mathcal{B}_n^1$. Applying $\varpi$ we find $I_{G_{n,m}}\cong \omega_m(T)(\Lambda/\omega_n(T))$ and $\varpi(N_{G_{n,m}})=\nu_{n,m}(T)$, where $\nu_{n,m}(T):=\omega_n(T)/\omega_m(T)$. Hence $$ \hu{\mathcal{B}_n^1}\cong\frac{\{g(T)\in\Lambda\;\Big\vert\; g(T)\nu_{n,m}(T)\in (f(T),\omega_n(T)/T)\}}{(f(T),\omega_n(T)/T,\omega_m(T))}\;. $$ As observed in \cite{Gil79-1}, Lemma 5, $f(T)$ and $\omega_n(T)/T$ have no common zeroes. Therefore a relation $$ g(T)\frac{\omega_n(T)}{\omega_m(T)}=a(T)f(T)+b(T)\frac{\omega_n(T)}{T} $$ implies $\nu_{n,m}(T)\mid a(T)$, and we find $g(T)=c(T)f(T)+b(T)\omega_m(T)/T$ for some $c(T)\in\Lambda$: thus, \begin{equation*}\begin{split} \hu{\mathcal{B}_n^1}&\cong\frac{(f(T),\omega_m(T)/T)}{(f(T),\omega_n(T)/T,\omega_m(T))}\\ &\cong \frac{(\omega_m(T)/T)}{(f(T),\omega_n(T)/T,\omega_m(T))}\;. \end{split}\end{equation*} The evaluation map $g(T)\omega_m(T)/T\mapsto g(0)$ gives an isomorphism $$ \frac{(\omega_m(T)/T)}{(\omega_m(T))}\cong\mathbb{Z}_p $$ and we find $$ \frac{(\omega_m(T)/T)}{(f(T),\omega_n(T)/T,\omega_m(T))}\cong \mathbb{Z}_p/\big(f(0),\omega_n(0)\big)\;. $$ Since this last module is $\mathbb{Z}_p/f(0)$ as soon as $n$ is big enough and, by definition, $f(0)=L_p(1,\chi)$, we get our claim for $q=-1$.
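Recall, for the next step, the standard fact about Herbrand quotients: for a finite module $M$ over the cyclic group $G_{n,m}$ one has $$ h(M)=\frac{\left|\hz{M}\right|}{\left|\hu{M}\right|}=1\;, $$ so that $\hz{M}$ and $\hu{M}$ have the same order.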
Using now that $\mathcal{B}_n^1$ is finite and therefore has a trivial Herbrand quotient, we know that the order of $\hz{\mathcal{B}_n^1}$ is again $|L_p(1,\chi)|_p^{-1}$: the fact that it is a cyclic group comes from the exact sequence \begin{equation*}\begin{split} 0\rightarrow \hz{\mathcal{B}_n^1}\rightarrow\hz{U_n^1}\rightarrow&\hz{\langle\eta_n\rangle}\rightarrow\\ &\rightarrow\hU{\mathcal{B}_n^1}\rightarrow 0 \end{split}\end{equation*} since $\hz{U_n^1}$ is itself cyclic, as discussed in Corollary \ref{cohom-splitting}.\\ Finally, the fact that $G_{n,m}$ is cyclic gives isomorphisms in Tate cohomology $\hat{H}^{2q}(G_{n,m},M)\cong \hat{H}^{0}(G_{n,m},M)$ for all modules $M$ (and analogously $\hat{H}^{2q+1}(G_{n,m},M)\cong \hat{H}^{-1}(G_{n,m},M)$), so the claim for all $q$'s follows from our computation in the cases $q=0,-1$. \end{proof} \begin{prop} \label{ordine fissi} Recall that $X=\varprojlim A_n$: then, $|X^\Gamma|\stackrel{p}{=}L_p(1,\chi)$, where by $a\stackrel{p}{=}b$ we mean $ab^{-1}\in\mathbb{Z}_p^\times$. \end{prop} \begin{proof} Let $L_0$ be the maximal pro-$p$ abelian extension of $F_\infty$ everywhere unramified and let $M_0$ be the maximal pro-$p$ abelian extension of $F_\infty$ unramified outside $p$. We claim that $L_0=M_0$. This follows from the fact that for every $\mathfrak{p}\subseteq \mathcal{O}_F$ dividing $p$, the local field $F_\mathfrak{p}$ is $\mathbb{Q}_p$, since $p$ splits completely, and it therefore admits only two independent $\mathbb{Z}_p$-extensions by local class field theory. In particular, every pro-$p$ extension of $F_{\infty,\mathfrak{p}}$ that is abelian over $F_\mathfrak{p}$ must be unramified, so $M_0=L_0$. 
Now let $Y:=\mathrm{Gal}(L_\infty/F_\infty)$, where $L_\infty$ is the maximal pro-$p$ abelian extension of $F_\infty$ everywhere unramified: then the Artin reciprocity map gives an isomorphism $X\cong Y(\chi)$; also, let $M_\infty$ be the maximal pro-$p$ abelian extension of $F_\infty$ unramified outside $p$ and $\mathscr{Y}:=\mathrm{Gal}(M_\infty/F_\infty)$. A classical argument (see \cite{Was97}, Chapter 13) shows that $Y_\Gamma=\mathrm{Gal}(L_0/F_\infty)$ and $\mathscr{Y}_\Gamma=\mathrm{Gal}(M_0/F_\infty)$: our claim above implies that $Y_\Gamma=\mathscr{Y}_\Gamma$. Since the actions of $\Delta$ and $\Gamma$ commute with each other, this also shows $X_\Gamma=\mathscr{Y}(\chi)_\Gamma$. Combine this with the following exact sequence induced by multiplication by $\gamma_0-1$, where $\gamma_0$ is a topological generator of $\Gamma$: $$ 0\longrightarrow X^\Gamma\longrightarrow X\stackrel{\gamma_0-1}{\longrightarrow}X\longrightarrow X_\Gamma\longrightarrow 0\;; $$ it gives $|X^\Gamma|=|X_\Gamma|=|\mathscr{Y}(\chi)_\Gamma|$. The Main Conjecture of Iwasawa Theory, as proved by Rubin in the appendix of \cite{Lan90}, shows that the characteristic polynomial of $\mathscr{Y}(\chi)$ is $F(T)$, where $F(T)$ is the distinguished polynomial determined by $L_p(1-s,\chi)\stackrel{p}{=}F\big((1+p)^s-1\big)$ for all $s\in\mathbb{Z}_p$. Since $\mathscr{Y}$ contains no non-zero finite $\Gamma$-submodules (see \cite{NeuSchWin00}), we find $\mathscr{Y}^\Gamma=0$ and the order of $\mathscr{Y}(\chi)_\Gamma$ is $F(0)\stackrel{p}{=}L_p(1,\chi)$. \end{proof} \begin{cor}\label{ordine primi} If $\lambda=0$, then $\Pi_n$ is a cyclic group of order $|L_p(1,\chi)|_p^{-1}$ for every $n\gg 0$. \end{cor} \begin{proof} Indeed, Theorem 2 in \cite{Gre76} shows that $\lambda=0$ if and only if $X^\Gamma=\Pi_n$. The result now follows from the proposition and from the remark of Section \ref{SecCycUnits}.
\end{proof} \section{Main result}\label{Main Result} We are now in a position to prove our main result. We stick to the notation introduced in Section \ref{SecCycUnits}. Let $n\geq 0$ and let $Q_n:=(\mathbb{U}_{F_n})(\chi)/\mathcal{O}_n^\times=Q_{F_n}(\chi)$ as in Section \ref{SecTate}: consider the exact sequence $$ 0\longrightarrow \mathcal{O}_n^\times\longrightarrow \mathbb{U}_{F_n}(\chi)\longrightarrow Q_n\longrightarrow 0\;. $$ Since $Cyc_n\subseteq \mathcal{O}_n^\times$, it induces an exact sequence $$ 0\longrightarrow B_n\longrightarrow \mathbb{U}_{F_n}(\chi)/Cyc_n\longrightarrow Q_n\longrightarrow 0\;, $$ and the Tate cohomology of $\mathbb{U}_{F_n}(\chi)/Cyc_n$ coincides with that of $\mathcal{B}_n$, as discussed in Section \ref{SecTate}. For every $m\leq n$ the periodicity of Tate cohomology for cyclic groups induces an exact hexagon \begin{equation}\label{hex}\xymatrix{ \hz{Q_n}\ar@{>}[0,1]^{\alpha_{[0,1]}}&\hu{B_n}\ar@{>}[1,0]\\ \hz{\mathcal{B}_n}\ar@{>}[-1,0]&\hu{\mathcal{B}_n}\ar@{>}[1,0]\\ \hz{B_n}\ar@{>}[-1,0]&\hu{Q_n}\ar@{>}[0,-1]^{\alpha_{[1,0]}}\;. }\end{equation} Pick now $q\in\mathbb{Z}$ and consider the exact sequence \begin{equation}\label{Q=A} 0\longrightarrow Q_n\longrightarrow C_{F_n}(\chi)\longrightarrow A_n\longrightarrow 0\;: \end{equation} as the actions of $G_{n,m}$ and of $\Delta$ commute, we have $\hq{C_n(\chi)}\cong\hq{C_n}(\chi)$; global class field theory (see section $11.3$ of Tate's paper in \cite{CasFro86}) shows that $\hq{C_n}(\chi)\cong\hat{H}^{q+2}(G_{n,m},\mathbb{Z}(\chi))=0$ because we assumed $\chi\neq 1$. Therefore the long exact cohomology sequence of (\ref{Q=A}) induces isomorphisms \begin{equation}\label{Q&A} \hq{A_n}\cong \hat{H}^{q+1}(G_{n,m},Q_n)\quad\mathrm{\;for\;every\;} q\in\mathbb{Z}\;. \end{equation} \noindent\textbf{Remark. } Observe that our discussion never uses the assumption $\chi(p)=1$.
Indeed, the maps $\alpha_{[0,1]}$ and $\alpha_{[1,0]}$ are defined whenever $\chi\neq 1$ and are indeed the same maps appearing in Proposition 2.6 of \cite{KraSch95}, where the case $\chi(p)\ne 1$ is treated. As discussed in the introduction, in that case they turn out to be isomorphisms if $\lambda=0$ (see also \cite{BelNQD01}, \cite{Kuz96} and \cite{Oza97}). We are going to see that this is not the case if $\chi(p)=1$. \begin{trm} \label{LHF} Assume $\lambda=0$ and $n\gg m\gg 0$. Then the kernels $\mathrm{Ker}(\alpha_{[0,1]})$, $\mathrm{Ker}(\alpha_{[1,0]})$ and the cokernels $\mathrm{Coker}(\alpha_{[0,1]})$, $\mathrm{Coker}(\alpha_{[1,0]})$ are cyclic groups of order $|L_p(1,\chi)|_p^{-1}$. \end{trm} \begin{proof} We start by determining $\mathrm{Ker}(\alpha_{[0,1]})$. Choose $m$ big enough so that $|A_{m+k}|=|A_m|$ for all $k\geq 0$, and $n\geq m$ big enough so that $\hq{A_n}=A_n$. As in the remark of Section \ref{SecTate}, Proposition 2 of \cite{Gre76} shows that $\lambda=0$ implies $A_m=\mathrm{Ker}(\jmath_{m,n})$ if $n$ is sufficiently large. Combining Proposition \ref{ker-capit} with (\ref{Q&A}), this gives an injection \begin{equation}\label{injQ} \hz{Q_n}\hookrightarrow \hu{\mathcal{O}_n^\times}\;. \end{equation} Consider now the following commutative diagram, whose row and column are exact and where the injectivity of the vertical arrow in the middle follows from Lemma \ref{crocerossa}: \begin{equation}\label{ker}\xymatrix{ &\hu{B_n}\\ \hz{Q_n}\ar@{^(->}[0,1]\ar@{>}[-1,1]^{\alpha_{[0,1]}}&\hu{\mathcal{O}_n^\times}\ar@{>}[-1,0]\ar@{>}[0,1]&\hu{U_n}\\ &\hu{Cyc_n}\ar@{^(->}[-1,0]\ar@{>}[-1,1]^\psi\\ }\end{equation} An easy diagram chase shows that $\mathrm{Ker}(\alpha_{[0,1]})\cong\mathrm{Ker}(\psi)$.
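The chase can be made explicit: by exactness of the column, $\mathrm{Ker}\big(\hu{\mathcal{O}_n^\times}\rightarrow\hu{B_n}\big)=\mathrm{Im}\big(\hu{Cyc_n}\big)$, while by exactness of the row, $\mathrm{Ker}\big(\hu{\mathcal{O}_n^\times}\rightarrow\hu{U_n}\big)=\mathrm{Im}\big(\hz{Q_n}\big)$; since both $\hz{Q_n}$ and $\hu{Cyc_n}$ inject into $\hu{\mathcal{O}_n^\times}$, the two kernels are identified with the same subgroup: $$ \mathrm{Ker}(\alpha_{[0,1]})\cong \mathrm{Im}\big(\hz{Q_n}\big)\cap\mathrm{Im}\big(\hu{Cyc_n}\big)\cong\mathrm{Ker}(\psi)\;. $$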
In order to study $\mathrm{Ker}(\psi)$, observe that $\psi$ appears in the sequence \begin{equation}\label{01} 0\to \hz{\mathcal{B}_n^1}\to\hu{\langle\eta_n\rangle}\stackrel{\psi}{\to}\hu{U_n^1}\;, \end{equation} because $\hu{Cyc_n}=\hu{\langle\eta_n\rangle}$ by Lemma \ref{cyc}, while $\hu{U_n}=\hu{U^1_n}$ by Corollary \ref{cohom-splitting}. Moreover, again by Corollary \ref{cohom-splitting}, $\hz{U_n^1}=0$ and (\ref{01}) is exact, thus giving \begin{equation}\label{ker01} \mathrm{Ker}(\alpha_{[0,1]})\cong\hz{\mathcal{B}_n^1}\cong \mathbb{Z}_p/L_p(1,\chi)\;, \end{equation} the last isomorphism coming from Lemma \ref{ordine cohom}.\\ Having determined $\mathrm{Ker}(\alpha_{[0,1]})$, the exactness of (\ref{hex}) together with Corollary \ref{cohom-splitting} (and Lemma \ref{ordine cohom}) shows immediately that \begin{equation}\label{cok10} \mathrm{Coker}(\alpha_{[1,0]})\cong\hz{\mathcal{B}_0}\cong\mathbb{Z}_p/L_p(1,\chi)\;. \end{equation} Since the orders of $A_n$ and $B_n$ coincide for $n\gg 0$, and since these groups are isomorphic to $\hq{Q_n}$ and $\hq{B_n}$ respectively (see (\ref{n>>m}) and (\ref{Q&A})), the equalities of orders $$ |\mathrm{Ker}(\alpha_{[0,1]})|=|\mathrm{Coker}(\alpha_{[0,1]})|\quad\text{and}\quad|\mathrm{Ker}(\alpha_{[1,0]})|=|\mathrm{Coker}(\alpha_{[1,0]})| $$ hold. By (\ref{ker01}) and (\ref{cok10}), the four groups have the same order, equal to $|L_p(1,\chi)|_p^{-1}$.\\ We are left with the structure of $\mathrm{Ker}(\alpha_{[1,0]})$ and $\mathrm{Coker}(\alpha_{[0,1]})$. The map $\alpha_{[1,0]}$ is the composition \begin{equation}\label{inclusione?}\xymatrix{ \hu{Q_n}\ar@{>}[0,1]^{\tilde{\beta}}\ar@/_2pc/[rr]_{\alpha_{[1,0]}}&\hz{\mathcal{O}_n^\times}\ar@{>}[0,1]&\hz{B_n} }\end{equation} and $\mathrm{Ker}(\alpha_{[1,0]})\supseteq\mathrm{Ker}(\tilde{\beta})=\Pi_n$, the last identification coming from Lemma \ref{primi}.
Combining Corollary \ref{ordine primi} with the computation of the order of $\mathrm{Ker}(\alpha_{[1,0]})$ performed above, the inclusion cannot be strict, and $\mathrm{Ker}(\alpha_{[1,0]})$ is cyclic of the prescribed order. Looking at $\mathrm{Ker}(\alpha_{[1,0]})$ and at $\mathrm{Coker}(\alpha_{[0,1]})$ as subgroups of $\hu{\mathcal{B}_n}$ as in (\ref{hex}), and knowing the structure of this last module by Corollary \ref{cohom-splitting}, shows that $\mathrm{Coker}(\alpha_{[0,1]})$ is cyclic, too. \end{proof} Now we can single out from the proof a precise description of the kernels of the maps $\alpha_{[1,0]}$ and $\alpha_{[0,1]}$ when seen as maps $$ \alpha_{[i,j]}:A_n\rightarrow B_n $$ by combining (\ref{n>>m}) and (\ref{Q&A}). Before stating the next result, observe that, by Lemma \ref{cyc}, $\hu{Cyc_n}\cong \hu{\langle\eta_n\rangle}$, while (\ref{?}) and (\ref{??}) show that $\hu{\langle\eta_n\rangle}\cong \hu{I_{G_n}}\cong \hz{\mathbb{Z}_p}$. It is clear that these isomorphisms are not only $G_{n,m}$-linear, but also $G_n$-linear: therefore $\hu{Cyc_n}$ has trivial $G_n$-action. \begin{cor}\label{Renefruit} With the same hypotheses as in the theorem, $$ \mathrm{Ker}(\alpha_{[0,1]})=\mathrm{Ker}(\alpha_{[1,0]})=\Pi_n\;. $$ \end{cor} \begin{proof} While proving the theorem we found $\mathrm{Ker}(\alpha_{[1,0]})=\Pi_n$, and we now focus on $\mathrm{Ker}(\alpha_{[0,1]})$. We already know it is cyclic: looking again at (\ref{ker}) we find $\mathrm{Ker}(\alpha_{[0,1]})\subseteq\mathrm{Im}\big(\hu{Cyc_n}\big)\subseteq \hu{\mathcal{O}_n^\times}^{G_n}$: since the isomorphisms $\hz{Q_n}\cong A_n$ are $G_n$-linear, we get $\mathrm{Ker}(\alpha_{[0,1]})\subseteq A_n^{G_n}$. As in the proof of Corollary \ref{ordine primi}, the assumption $\lambda=0$ is equivalent to $\Pi_n=X^\Gamma$, and $X^\Gamma=A_n^{G_n}$ if $n$ is big enough; putting everything together, we have $\mathrm{Ker}(\alpha_{[0,1]})\subseteq \Pi_n$.
Since they have the same order, thanks to Corollary \ref{ordine primi} together with Theorem \ref{LHF}, the inclusion is an equality. \end{proof} \noindent\textbf{Remark. } As the above Corollary shows, there are indeed two maps $\alpha_{[0,1]}$ and $\alpha_{[1,0]}$ sitting in an exact sequence $$ 0\longrightarrow \Pi_n \longrightarrow A_n\stackrel{\alpha}{\rightarrow}B_n\longrightarrow \mathcal{B}_0/\eta_0\longrightarrow 0\;, $$ where $\alpha$ can be either of them. This is the same as in the non-split case, where both $\alpha_{[0,1]}$ and $\alpha_{[1,0]}$ give an isomorphism $A_n\cong B_n$ for $n\gg 0$ if $\lambda=0$ (see \cite{KraSch95}). \end{document}
\begin{document} \author{Federico Binda, Jin Cao, Wataru Kai and Rin Sugiyama} \title{Torsion and divisibility for reciprocity sheaves and 0-cycles with modulus} \subjclass[2010]{Primary 14C25; Secondary 19E15, 14F42} \AtEndDocument{ {\footnotesize (F.~Binda) \textsc{Fakult\"at f\"ur Mathematik, Universit\"at Duisburg-Essen, Thea-Leymann Strasse 9, 45127 Essen, Germany.} \textit{E-mail address}: \texttt{[email protected]} \par \addvspace{\medskipamount} (J.~Cao) \textsc{Fakult\"at f\"ur Mathematik, Universit\"at Duisburg-Essen, Thea-Leymann Strasse 9, 45127 Essen, Germany.} \textit{E-mail address}: \texttt{[email protected]} \par \addvspace{\medskipamount} (W.~Kai) \textsc{Graduate School of Mathematical Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, 153-8914 Tokyo, Japan.} \textit{E-mail address}: \texttt{[email protected]} \par \addvspace{\medskipamount} (R.~Sugiyama) \textsc{Department of Mathematics, Tokyo Denki University, 8 Senju-Asahi-Cho, Adachi-Ku, 120-8551 Tokyo, Japan.} \textit{E-mail address}: \texttt{[email protected]} \par }} \begin{abstract} The notion of {\it modulus} is a striking feature of Rosenlicht-Serre's theory of generalized Jacobian varieties of curves. It was carried over to algebraic cycles on general varieties by Bloch-Esnault, Park, R\"ulling, Krishna-Levine. Recently, Kerz-Saito introduced a notion of Chow group of $0$-cycles with modulus in connection with geometric class field theory with wild ramification for varieties over finite fields. We study the non-homotopy invariant part of the Chow group of $0$-cycles with modulus and show its torsion and divisibility properties. Modulus is being brought to sheaf theory by Kahn-Saito-Yamazaki in their attempt to construct a generalization of Voevodsky-Suslin-Friedlander's theory of homotopy invariant presheaves with transfers. We prove parallel results about torsion and divisibility properties for them.
\end{abstract} \maketitle \setcounter{tocdepth}{1} \section{Introduction} Let $k$ be a field and let $\overline{X}$ be a proper $k$-variety equipped with an effective Cartier divisor $D$. For such a pair $(\overline{X}, D)$, Kerz and Saito recently defined in \cite{KS1} a notion of Chow group $\operatorname{CH}_0(\overline{X}|D)$ of $0$-cycles on $\overline{X}$ with modulus $D$ as a quotient of the group $Z_0(X)$ of $0$-cycles on the open complement $X:=\overline{X}\setminus |D|$. When $\overline{X}$ is a smooth projective curve, the group $\operatorname{CH}_0(\overline{X}|D)$ is isomorphic to the relative Picard group $\operatorname{Pic} (\overline{X} ,D)$ of isomorphism classes of pairs given by a line bundle on $\overline{X}$ together with a trivialization along $D$. Its degree-$0$-part agrees with the group of $k$-rational points of the generalized Jacobian $\mathrm{Jac} (\overline{X}|D)$ of Rosenlicht and Serre (see, for instance, \cite[Chapter II]{SerreGACC}). If $D$ is non-reduced, then $\mathrm{Jac}(\overline{X} |D)$ is a general commutative algebraic group, i.e.~an extension of a semi-abelian variety by a unipotent group, which depends on the multiplicity of $D$. The existence of such a non-homotopy-invariant part suggests that the group $\operatorname{CH}_0(\overline{X}|D)$ may give new geometric and arithmetic information about the pair $(\overline{X}, D)$ that cannot be captured by the classical (homotopy invariant) motivic cohomology groups. Intimately connected with the world of cycles subject to some modulus conditions is the recent work of Kahn, Saito and Yamazaki \cite{KSY}, which makes a categorical attempt at the quest for a non-homotopy-invariant motivic theory. This encompasses unipotent phenomena and is modeled on the generalized Jacobians of Rosenlicht and Serre.
Kahn-Saito-Yamazaki developed the notion of ``reciprocity'' for (pre)sheaves with transfers, which is weaker than homotopy invariance, with the purpose of eventually constructing a new motivic triangulated category, larger than Voevodsky's $\mathbf{DM}^{\rm eff}(k,\mathbb{Z})$ and containing unipotent information. The goal of this paper is to exhibit some differences between the classical homotopy invariant objects and the new non-homotopy invariant ones, such as $0$-cycles with modulus and reciprocity sheaves. For $0$-cycles, we shall see in \S \ref{rel-to-Suslin} that there is a canonical surjection from the Chow group with modulus to the $0$-th Suslin homology group (as defined e.g.~ in \cite[Definition 7.1]{MVW}) \[ \pi_{\overline{X},D}\colon \mathbb{C}H_0(\overline{X}|D)\longrightarrow \operatorname{H}_0^{\operatorname{Sing}}(X). \] Since $\operatorname{H}^{\operatorname{Sing}}_0(X)$ is the maximal homotopy invariant quotient of the group $Z_0(X)$ of $0$-cycles on $X$, the kernel $\operatorname{U}(\overline{X}|D)$ of $\pi_{\overline{X},D}$ measures the failure of $\mathbb{C}H _0(\overline{X}|D)$ to be homotopy invariant (nonetheless, its degree-$0$-part enjoys $\mathbb{P}^1$-invariance as pointed out in Remark \ref{P1inv}). The first result of this paper is the following divisibility property of $\operatorname{U}(\overline{X}|D)$: \begin{thm}[{\it see} Theorem \ref{div-chow}]\label{div-chow-intro} \begin{enumerate}[{\rm (1)}] \item If $\operatorname{char} (k)=0$, then the group $\operatorname{U}(\overline{X}|D)$ is divisible. \item If $\operatorname{char} (k)=p>0$, then $\operatorname{U}(\overline{X}|D)$ is a $p$-primary torsion group. \end{enumerate} \end{thm} Our second result is complementary to Theorem \ref{div-chow-intro}, and concerns the torsion part of $\mathbb{C}H_0(\overline{X}|D)$. \begin{thm}[{\it see} Corollary \ref{Torsion-Curves-Body}]\label{Torsion-curves-Intro} Let $k$ be an algebraically closed field of exponential characteristic $p\geq 1$. 
Let $\overline{X}$ be a projective variety over $k$, regular in codimension one. Let $D$ be an effective Cartier divisor on $\overline{X}$ such that the open complement $X= \overline{X} \setminus |D|$ is smooth over $k$. Let $\alpha\in \mathbb{C}H_0(\overline{X}|D)$ be a prime-to-$p$-torsion cycle. Then there exist a smooth projective curve $\overline{C}$ and a morphism $\varphi \colon \overline{C} \to \overline{X}$ for which $\varphi ^{*}D$ is a well-defined Cartier divisor on $\overline{C}$, and a prime-to-$p$-torsion cycle $\beta\in \mathbb{C}H_0(\overline{C}| (\varphi ^*D)_{\mathrm{red}})$ such that $\varphi _*(\beta)=\alpha$. \end{thm} In other words, the prime-to-$p$-torsion part of $\mathbb{C}H_0(\overline{X}|D)$ is nearly independent of the multiplicities of $D$. However, the theorem does not provide, a priori, an identification between the prime-to-$p$-torsion parts of $\mathbb{C}H_0(\overline{X}|D)$ and of $\operatorname{H}_0^{\operatorname{Sing}}(X)$. Our third result is about reciprocity sheaves: \begin{thm}[{\it see} Theorem \ref{htpy}]\label{Theorem-htpy-Intro} Let $F \in \mathbf{REC}_k$ be a reciprocity presheaf with transfers, separated for the Zariski topology. Then $F$ is homotopy invariant (i.e.~the map of presheaves $F\to F(-\times \mathbb{A}^1)$ is an isomorphism) either when $\operatorname{char} (k)=0$ and $F$ is torsion, or when $\operatorname{char} (k)=p>0$ and $F$ is $p$-torsion free. \end{thm} In order to measure the lack of homotopy invariance of a reciprocity sheaf $F$, we define, similarly to what we did for $0$-cycles, $\operatorname{U}(F)$ to be the kernel of the canonical surjection \[ F\to \operatorname{H}_0(F), \] where $\operatorname{H}_0(F)$ is the maximal homotopy invariant quotient of $F$ (see Section \ref{unip-section}). 
Corollary \ref{cor1-intro} gives an analogue of Theorem \ref{div-chow-intro} for the reciprocity sheaf $\operatorname{U}(F)$: \begin{cor}[{\it see} Corollary \ref{cor1}]\label{cor1-intro} \begin{enumerate}[{\rm (1)}] \item If $\operatorname{char} (k)=0$, then $\operatorname{U}(F)$ is divisible. \item If $\operatorname{char} (k)=p>0$, then $\operatorname{U}(F)$ is a $p$-primary torsion sheaf. \end{enumerate} \end{cor} We remark that by combining Corollary \ref{cor1-intro} and some results of \cite{KSY}, one can give an alternative proof of Theorem \ref{div-chow-intro} when $X$ is smooth and quasi-affine. This paper is organized as follows. Section \ref{Section-zero-cycles} is devoted to studying the Chow groups of $0$-cycles with modulus. In \S\,\ref{rel-to-Suslin}, we investigate the relation between $\mathbb{C}H_0(\overline{X}|D)$ for a pair $(\overline{X}, D)$ and the $0$-th Suslin homology of the complement $\overline{X} \setminus |D|$. In \S\,2.3, we prove Theorem \ref{div-chow-intro}, using two technical results, Lemma \ref{key-lem} and Lemma \ref{key-lem-p}. In \S\S\,2.4--2.5, we prove Theorem \ref{Torsion-curves-Intro}. Its proof is purely geometric and follows the approach of Levine \cite{MarcTorsion} to Rojtman's torsion theorem for singular projective varieties. One of the main tools, inspired by the work \cite{LW} of Levine and Weibel on $0$-cycles on singular varieties, is a rigidity result for the torsion subgroup of $\mathbb{C}H_0(\overline{X}|D)$ (see Theorem \ref{thm-discreteness}). Section \ref{Section-Reciprocity} is devoted to studying torsion and divisibility phenomena for reciprocity (pre)sheaves with transfers. In \S\,\ref{section-htpy-rec}, we prove Theorem \ref{Theorem-htpy-Intro}, using again Lemma \ref{key-lem} and Lemma \ref{key-lem-p}. In \S\,\ref{unip-section}, we study the sheaf $\operatorname{U}(F)$ and the homology sheaves $\operatorname{H} _i(F)$ of the Suslin complex of a reciprocity sheaf $F$. 
As consequences of Theorem \ref{Theorem-htpy-Intro} we get Corollary \ref{cor1-intro} and some further results on $\operatorname{H}_i(F)$ (see Corollary \ref{cor3}). \subsection*{Acknowledgments} A large part of this work was carried out during the special semester in Motivic Homotopy Theory at the University of Duisburg-Essen (SS 2014). The authors wish to thank Marc Levine heartily for providing an excellent working environment and for the support via the Alexander von Humboldt Foundation and the SFB Transregio 45 ``Periods, moduli spaces and arithmetic of algebraic varieties''. The first author is supported by the DFG Schwerpunkt Programme 1786 ``Homotopy theory and Algebraic Geometry''. The third author is supported by JSPS as a Research Fellow and through JSPS KAKENHI Grant (15J02264), and was supported by the Program for Leading Graduate Schools, MEXT, Japan during the work. The fourth author is supported by JSPS KAKENHI Grant (16K17579). We sincerely appreciate the referee's careful and valuable comments on an earlier draft of this paper, which helped us to significantly clarify and improve the exposition. \section{Chow group of $0$-cycles with modulus}\label{Section-zero-cycles} \subsection{Definition of $0$-cycles with modulus} We recall the definition of $\mathbb{C}H _0(\overline{X}|D)$ from Kerz and Saito \cite{KS1}. \subsubsection{}\label{DefChowMod} We fix a base field $k$. For an integral scheme $\overline{C}$ over $k$ and for a closed subscheme $E$ of $\overline{C}$, we set \begin{align*} G(\overline{C},E) =\bigcap_{x\in E}\mathrm{Ker}\bigl(\mathcal{O}_{\overline{C},x}^{\times}\to \mathcal{O}_{E,x}^{\times} \bigr) = \varinjlim_{E\subset U\subset \overline{C}}\Gamma(U, \operatorname{Ker}(\mathcal{O}_{\overline{C}}^\times \to \mathcal{O}_{E}^\times)), \end{align*} where $x$ runs over all the points of $E$ and $U$ runs over the set of open subsets containing $E$. The intersection takes place in the rational function field $k(\overline{C})$. 
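For instance, take $\overline{C}=\mathbb{P}^1_k$ with affine coordinate $t$ and $E=V(t^2)$, the divisor $2\{0\}$. Then $E$ has the single point $0$, with $\mathcal{O}_{E,0}=k[t]/(t^2)$, so
\[
G(\mathbb{P}^1,E)=\bigl\{\, f\in k(t)^{\times} \;:\; f\in \mathcal{O}_{\mathbb{P}^1,0}^{\times} \text{ and } f\equiv 1 \bmod t^2 \,\bigr\};
\]
for example, $1+t^2$ lies in $G(\mathbb{P}^1,E)$, while $1+t$ lies in $G(\mathbb{P}^1,E_{\mathrm{red}})$ but not in $G(\mathbb{P}^1,E)$.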
We say that a rational function $f\in G(\overline{C},E)$ satisfies the modulus condition with respect to $E$. \subsubsection{}\label{DefChowMod1}Let $\overline{X}$ be a scheme of finite type over $k$ and let $D$ be an effective Cartier divisor on $\overline{X}$. Write $X$ for the complement $\overline{X} \setminus |D|$ and $Z_0(X)$ for the group of $0$-cycles on $X$. Let $\overline{C}$ be an integral normal curve over $k$ and let $\varphi_{\overline{C}}:\overline{C}\to \overline{X}$ be a finite morphism such that $\varphi_{\overline{C}}(\overline{C})\not \subset D$. Write $C= \varphi_{\overline{C}}^{-1}(X)$. The push-forward of cycles gives a group homomorphism $ \tau_{\overline{C}}\colon G(\overline{C},\varphi_{\overline{C}}^*(D))\to Z_0(X)$ that sends a function $f$ to the $0$-cycle $(\varphi _{\overline{C}}|_{C})_*\operatorname{div}_{\overline{C}}(f)$. \begin{df}\label{DefChowMod-Definition} In the notations of \ref{DefChowMod1}, we define the \emph{Chow group $\mathbb{C}H_0(\overline{X}|D)$ of $0$-cycles of $\overline{X}$ with modulus $D$} as the cokernel of the homomorphism \[ \tau\colon\bigoplus_{\varphi_{\overline{C}}\colon \overline{C}\to \overline{X}}G(\overline{C},\varphi_{\overline{C}}^*(D)) \xrightarrow{\bigoplus \tau _{\overline{C}}} Z_0(X), \] where the sum runs over the set of finite morphisms from an integral normal curve $\varphi_{\overline{C}}\colon \overline{C}\to \overline{X}$ such that $\varphi_{\overline{C}}(\overline{C})\not \subset D$. \end{df} \begin{rmk} A generalization to higher dimensional cycles and to higher Chow groups (in the sense of Bloch) $\mathbb{C}H_r(\overline{X}|D,n)$ is given in \cite{BS}, where the above groups $\mathbb{C}H _0(\overline{X}|D)$ are shown to agree with the corresponding higher Chow groups with modulus $\mathbb{C}H_0(\overline{X}|D, 0)$ (see \cite[Theorem 3.3]{BS}). A different definition of Chow group of $0$-cycles with modulus is proposed by Russell in \cite{R}. 
\end{rmk} \begin{prop} \label{example} Let $(\overline{X},D)$ and $(\overline{Y},E)$ be pairs consisting of a proper scheme of finite type over $k$ and an effective Cartier divisor on it. Assume that $\overline{Y}$ is connected and that $Y= \overline{Y}\setminus |E|$ has a $k$-rational point. If the degree map induces an isomorphism $\mathbb{C}H _0(\overline{Y}_{k'}|E_{k'})\xrightarrow{\simeq } \mathbb{Z} $ for any finite field extension $k'/k$, then the proper push-forward map induced by the projection $p_1\colon \overline{X}\times \overline{Y}\to \overline{X}$ is an isomorphism \[ p_{1,*}\colon \mathbb{C}H_0(\overline{X}\times \overline{Y}|\overline{X}\times E+D\times \overline{Y})\xrightarrow{\simeq } \mathbb{C}H_0(\overline{X}|D). \] \end{prop} \begin{proof} Let $y_0$ be a $k$-rational point of $Y$ and let $\iota \colon \overline{X}\times \{ y_0 \} \hookrightarrow \overline{X}\times \overline{Y}$ be the canonical closed embedding. Since $p_{1,*}\circ \iota _* =\mathrm{id}$ on $\mathbb{C}H_0(\overline{X}|D)$, it suffices to show that $\iota _*\colon \mathbb{C}H _0(\overline{X}|D)\to \mathbb{C}H _0(\overline{X}\times \overline{Y}|\overline{X}\times E+D\times \overline{Y})$ is surjective. Let $z$ be a closed point on $X\times Y$ (here we write $X=\overline{X}\setminus |D|$, $Y=\overline{Y}\setminus |E|$) and let $k(z)$ be its residue field. We claim that the class of $z$ in $\mathbb{C}H _0(\overline{X}\times \overline{Y}|\overline{X}\times E+D\times \overline{Y})$ comes from $\mathbb{C}H _0(\overline{X}\times \{ y_0 \} |D\times \{ y_0\} )$. Note that the $0$-cycle $z$ comes from a canonical $0$-cycle on $(\overline{X}\times \overline{Y})_{k(z)}$. 
By the commutative diagram of push-forward maps below, it suffices to show that this $0$-cycle comes from $\mathbb{C}H _0((\overline{X}\times \{ y_0\} )_{k(z)} |(D\times \{ y_0\} )_{k(z)} )$, \[ \xymatrix{ \mathbb{C}H _0((\overline{X}\times \{ y_0\} )_{k(z)} |(D\times \{ y_0\} )_{k(z)} ) \ar[r] \ar[d] & \mathbb{C}H _0((\overline{X}\times \overline{Y})_{k(z)} |(\overline{X}\times E+D\times \overline{Y})_{k(z)} ) \ar[d] \\ \mathbb{C}H _0(\overline{X}\times \{ y_0\} |D\times \{ y_0\} )\ar[r] &\mathbb{C}H _0(\overline{X}\times \overline{Y} |\overline{X}\times E+D\times \overline{Y} ) . } \] Thus, by replacing $k$ by $k(z)$, we may assume that $z$ is a rational point $x\times y$, where $x\in X(k)$ and $y\in Y(k)$ (note that the assumptions on $\overline{Y}$ remain valid after this replacement). Since we have $\mathbb{C}H _0(\overline{Y}|E)\simeq \mathbb{Z} $ via the degree map, there are finitely many integral normal curves $\overline{W}_i$ with finite maps $\varphi _i\colon \overline{W}_i\to \overline{Y}$ and rational functions $f_i$ in $G(\overline{W}_i,\varphi _i^{-1}(E))$ such that the equality of cycles \[ \sum _i \varphi _{i,*}\operatorname{div} _{\overline{W}_i}(f_i) =[y]-[y_0] \] holds on $\overline{Y}$. Let $\overline{T}_i=\{ x \} \times \overline{W}_i ~(\simeq \overline{W}_i)$ and let $\psi _i=(x,\varphi _i)\colon \overline{T}_i\to \overline{X}\times \overline{Y}$ be the induced finite map. Then we find that $f_i$ belongs to $G(\overline{T}_i,\psi _i^{-1}(\overline{X}\times E+D\times \overline{Y}))$ and the following equality holds on $X\times Y$ \[ \sum _i \psi _{i,*}\operatorname{div} _{\overline{T}_i}(f_i)=[x\times y]-[x\times y_0] . \] Thus the class of $z$ is in the image of $\iota _*$. This completes the proof. 
\end{proof} \begin{rmk}\label{P1inv} A relevant example for Proposition \ref{example} is the isomorphism \begin{equation}\label{eq:P1invariance} \mathbb{C}H_0(\overline{X}\times \mathbb{P}^1|D\times \mathbb{P}^1)\simeq \mathbb{C}H_0(\overline{X}|D), \end{equation} which can be interpreted as a $\mathbb{P}^1$-invariance property for Chow groups of $0$-cycles with modulus. For $\overline{X}$ smooth and quasi-projective, the isomorphism \eqref{eq:P1invariance} is also a consequence of \cite[Theorem 4.6]{KP3}. \end{rmk} \begin{rmk} Proposition \ref{example} can be interpreted in the language of reciprocity sheaves (see Section 3.1) as follows: Let $(\overline{X},D)$ and $(\overline{Y},E)$ be pairs of proper integral $k$-schemes and effective Cartier divisors such that $\overline{X}\setminus |D|$ and $\overline{Y}\setminus |E|$ are smooth and quasi-affine. For such pairs, we have reciprocity presheaves $ h(\overline{X},D)$ and $ h(\overline{Y},E)$ (see Remark \ref{Phi-def}) which, for any field extension $k'/k$, satisfy $ h(\overline{X},D)(k')=\mathbb{C}H _0(\overline{X}_{k'}|D_{k'})$ and $ h(\overline{Y},E)(k')=\mathbb{C}H _0(\overline{Y}_{k'}|E_{k'})$ (see \ref{Phi-def} as well as \cite[Proposition 2.2.2]{KSY}). Now assume that $Y:=\overline{Y}\setminus |E|$ has a $k$-rational point and that $ h(\overline{Y},E)_{\mathrm{Zar}}\simeq \mathbb{Z}$. In particular, for any field extension $k'/k$ we have $\mathbb{C}H _0(\overline{Y}_{k'}|E_{k'})\simeq \mathbb{Z}$. An example of such a pair is given by $(\mathbb{P}^1,\infty)$. Then there is an isomorphism \begin{equation}\label{interpreted} h(\overline{X} \times \overline{Y}, D \times \overline{Y}+\overline{X}\times E)_{\mathrm{Zar}} \xrightarrow{\simeq } h(\overline{X},D )_{\mathrm{Zar}}. \end{equation} Indeed, by Proposition \ref{example} we have isomorphisms $ h(\overline{X} \times \overline{Y}, D \times \overline{Y}+\overline{X}\times E)(k') \xrightarrow{\simeq } h(\overline{X},D )(k')$ for any field extension $k'/k$. 
Then, the Injectivity Theorem \cite[Theorem 6]{KSY} for reciprocity sheaves applied to the kernel and cokernel of the map \eqref{interpreted} gives our assertion. Note that the condition $ h(\overline{Y},E)_{\mathrm{Zar}}\simeq \mathbb{Z}$ implies that $ h(\overline{Y},E')_{\mathrm{Zar}}\simeq h_0(Y)_{\mathrm{Zar}}\simeq \mathbb{Z}$ for every divisor $E'$ contained in $E$ as a subscheme. The reader should compare this isomorphism with the isomorphism of homotopy invariant sheaves \[ {h}(X\times Y)_{\mathrm{Zar}} \simeq {h}(X)_{\mathrm{Zar}}. \] The displayed isomorphism \eqref{interpreted} provides examples for the question raised in \cite[Remark 3.5.1]{KSY}; e.g.~if $\dim X=1$, we get an isomorphism \[ h_{0}(\overline{X} \times \overline{Y}, (D \times \overline{Y}+\overline{X}\times E)_\mathrm{red})_{\mathrm{Zar}} \simeq h_{0}(X\times Y)_{\mathrm{Zar}}. \] \end{rmk} \subsection{Relation to Suslin homology}\label{rel-to-Suslin} Let $S$ be an irreducible scheme of finite type over $k$ and let $X$ be a scheme of finite type over $S$. We denote by $C_0(X/S)$ the group of finite correspondences of $X$ over $S$ \cite[\S 3]{SV}, i.e.~the free abelian group generated by closed integral subschemes of $X$ that are finite and surjective over $S$. Recall from \cite[\S 3]{SV} (or \cite[Definition 7.1]{MVW}) that one defines the $0$-th Suslin homology group $\operatorname{H}_0^{\operatorname{Sing}}(X)$ of $X$ to be the cokernel of \[C_0(X\times (\mathbb{P}^1\setminus\{1\} )/ (\mathbb{P}^1\setminus\{1\} ))\xrightarrow{\partial = (\partial_0 - \partial_\infty)} C_0(X/\Spec{k}) = Z_0(X), \] where $\partial_i$ is induced by $t=i\colon \Spec{k}\to \mathbb{P}^1$, for $i=0, \infty$. The groups $\operatorname{H} _0^{\operatorname{Sing}}(X)$ are covariant for arbitrary morphisms of $k$-schemes of finite type. 
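For instance, for $X=\mathbb{G}_{m,k}$ over an algebraically closed field $k$, the comparison of Suslin homology with relative Picard groups of curves from \cite{SV} gives
\[
\operatorname{H}_0^{\operatorname{Sing}}(\mathbb{G}_m)\simeq \operatorname{Pic}(\mathbb{P}^1,\{0\}+\{\infty\})\simeq \mathbb{Z}\oplus k^{\times},
\]
the factor $k^{\times}$ being the group of $k$-points of the generalized Jacobian $\mathrm{Jac}(\mathbb{P}^1|\{0\}+\{\infty\})\simeq \mathbb{G}_m$.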
Note that there is a natural surjection induced by the identity map on $0$-cycles: $ \operatorname{H}_0^{\operatorname{Sing}}(X)\to \mathbb{C}H_0(X).$ The following is stated in \cite[Introduction]{KS1}. We include a verification of it for the convenience of the reader. \begin{prop}\label{ch-to-suslin} Let $\overline{X}$ be a proper scheme over $k$, $D$ an effective Cartier divisor on $\overline{X}$ and $X$ the complement $\overline{X}\setminus |D|$. Then the identity map of $Z_0(X)$ induces a natural surjection \[ \pi_{\overline{X},D}\colon\mathbb{C}H_0(\overline{X}|D) \longrightarrow \operatorname{H}_0^{\operatorname{Sing}}(X). \] \end{prop} \begin{proof} The two groups have the same set of generators, so it is enough to show that the relations defining the Chow group of $0$-cycles with modulus are $0$ in the Suslin homology group. Let $\varphi_{\overline{C}}:\overline{C}\to \overline{X}$ be a finite morphism from a normal curve $\overline{C}$ with $\varphi_{\overline{C}}(\overline{C})\not \subset D$ and let $f\in G(\overline{C},\varphi_{\overline{C}}^*(D))$. We claim that $\tau_{\overline{C}}(f)=0$ in $\operatorname{H}_0^{\operatorname{Sing}}(X)$. We may replace $\overline{X}$ by $\overline{C}$ to prove the claim (by the covariance of $\operatorname{H} _0^{\operatorname{Sing}}(-)$). We regard $f$ as a morphism $f\colon\overline{C}\to \mathbb{P}^1$. Since $\overline{C}$ is proper over $k$, the map $f$ is either constant or surjective. In the former case the claim is obvious, so let us assume that $f$ is surjective. Let $\Gamma_f \subset \overline{C} \times \mathbb{P}^1$ be the graph of $f$ and let $W=\Gamma_f\cap (\overline{C}\times (\mathbb{P}^1\setminus\{1\}))$. 
Since $f\equiv 1 \bmod \varphi _{\overline{C}}^*(D)$, the irreducible closed set $W$ belongs to $C_0(C\times (\mathbb{P}^1\setminus\{1\} )/ (\mathbb{P}^1\setminus\{1\} ))$, where $C=\overline{C}\setminus |\varphi_{\overline{C}}^*(D)|$, and we have \[ \partial(W) =(\partial_0-\partial_\infty) (W)=\operatorname{div}_{\overline{C}}(f)=\tau_{\overline{C}}(f), \] proving the claim. \end{proof} \subsection{Divisibility result for $0$-cycles with modulus} Let $\overline{X}$ be a proper $k$-scheme and let $D$ be an effective Cartier divisor on it. As before, let $X = \overline{X} \setminus |D|$. By Proposition \ref{ch-to-suslin}, there is a canonical surjection \[ \pi_{\overline{X},D}\colon\mathbb{C}H_0(\overline{X}|D) \longrightarrow \operatorname{H}_0^{\operatorname{Sing}}(X). \] We define $\operatorname{U}(\overline{X}|D)$ to be the kernel of $\pi_{\overline{X},D}$, and call it \emph{the unipotent part of $\mathbb{C}H_0(\overline{X}|D)$}. Since the surjection $\pi_{\overline{X},D}$ is compatible with the degree maps, the group $\operatorname{U}(\overline{X}|D)$ fits into the following exact sequence \[ 0\longrightarrow \operatorname{U}(\overline{X}|D) \longrightarrow \mathbb{C}H_0(\overline{X}|D)^0\longrightarrow \operatorname{H}_0^{\operatorname{Sing}}(X)^0\longrightarrow 0, \] where $\mathbb{C}H_0(\overline{X}|D)^0$ and $\operatorname{H}_0^{\operatorname{Sing}}(X)^0$ denote the degree-$0$-parts of $\mathbb{C}H_0(\overline{X}|D)$ and $\operatorname{H}_0^{\operatorname{Sing}}(X)$ respectively. Since $ \operatorname{H}_0^{\operatorname{Sing}}(X)$ is the maximal homotopy invariant quotient of the group of $0$-cycles $Z_0(X)$, the group $\operatorname{U}(\overline{X}|D)$ measures precisely the failure of homotopy invariance of $\mathbb{C}H _0(\overline{X} |D)$. \begin{thm}\label{div-chow} Let $\overline{X}$ be a proper $k$-scheme and let $D$ be an effective Cartier divisor on it. 
Then we have: \begin{enumerate}[{\rm (1)}] \item If $\operatorname{char} (k)=0$, then $\operatorname{U}(\overline{X}|D)$ is divisible. \item If $\operatorname{char} (k)=p>0$, then $\operatorname{U}(\overline{X}|D)$ is a $p$-primary torsion group. \end{enumerate} \end{thm} We start with some auxiliary lemmas. \begin{lem}\label{key-lem} Let $k$ be a field of characteristic zero. Let $\overline{C}$ be a proper normal integral curve over $k$. Let $D$ be an effective Cartier divisor on $\overline{C}$ and write $D_\mathrm{red}$ for the corresponding reduced divisor. Then the quotient group $G(\overline{C},D_\mathrm{red})/G(\overline{C},D)$ has a $k$-vector space structure. In particular, for any integer $n>0$, there is an isomorphism $ G(\overline{C},D)/n \stackrel{\simeq}{\longrightarrow} G(\overline{C},D_\mathrm{red})/n. $ \end{lem} \begin{proof} Write $D=\sum_{i=1}^rn_i[P_i]$. By the definition of $G(\overline{C},D)$, one has the following commutative diagram with exact rows {\small \[ \xymatrix{ 0\ar[r]& G(\overline{C},D)\ar[r]\ar[d]&\ar@{=}[d] \mathcal{O}_{\overline{C},D_\mathrm{red}}^{\times }\ar[r]& \ar[d]\displaystyle \bigoplus_{i=1}^r \bigl(\mathcal{O}_{\overline{C},P_i}/\mathfrak{m}_{P_i}^{n_i}\bigr)^{\times }\ \ar[r]&0\\ 0\ar[r] &G(\overline{C},D_\mathrm{red})\ar[r]& \mathcal{O}_{\overline{C},D_\mathrm{red}}^{\times }\ar[r]& \displaystyle \bigoplus_{i=1}^r k(P_i)^{\times} \ar[r]&0. } \]} Therefore by the snake lemma, we get \begin{equation}\label{eq:lem-key-lem} G(\overline{C},D_\mathrm{red})/G(\overline{C},D)\xleftarrow{\simeq} \bigoplus_{i=1}^r \frac{1+\mathfrak{m}_{P_i}}{1+\mathfrak{m}_{P_i}^{n_i}}\xrightarrow{\simeq}\bigoplus_{i=1}^r \mathfrak{m}_{P_i}/\mathfrak{m}_{P_i}^{n_i}, \end{equation} where the second isomorphism is obtained by taking the (truncated) logarithm, which makes sense since $\operatorname{char}(k)=0$. Since the last term in \eqref{eq:lem-key-lem} is a $k$-vector space, the group $G(\overline{C},D_\mathrm{red})/G(\overline{C},D)$ has an induced $k$-vector space structure. In particular it is uniquely divisible, and the isomorphism $G(\overline{C},D)/n \xrightarrow{\simeq} G(\overline{C},D_\mathrm{red})/n$ follows from the exact sequence $0\to G(\overline{C},D)\to G(\overline{C},D_\mathrm{red})\to G(\overline{C},D_\mathrm{red})/G(\overline{C},D)\to 0$. 
\end{proof} \begin{lem}\label{key-lem-p} Let $k$ be a field of positive characteristic $p$. Let $\overline{C}$ be an integral scheme of finite type over $k$ and let $D'\subset D$ be closed subschemes of $\overline{C}$ having the same support. Then there is a positive integer $m$ such that for any $f\in G(\overline{C},D')$, its $p^m$-th power $f^{p^m}$ belongs to $G(\overline{C}, D)$ (and consequently, the quotient group $G(\overline{C},D')/G(\overline{C},D)$ is annihilated by a power of $p$). \end{lem} \begin{proof} Let $f\in G(\overline{C},D') = \bigcap _{x\in D'} \operatorname{Ker} (\mathcal{O}^*_{\overline{C},x}\to \mathcal{O}^*_{D',x})$. For each point $x\in D'$, we have \[ f \in 1+ I'_x \subset \mathcal{O}_{\overline{C},x}^* \] where $I'_x$ is the stalk at $x$ of the ideal sheaf $I'\subset \mathcal{O}_{\overline{C}}$ defining $D'$. By the relation $|D|= |D'|$, the defining ideal $I$ of $D$ contains some power of $I'$; enlarging the exponent, we may choose $m$ with $(I')^{p^m}\subset I$. Thus, using that $(1+a)^{p}=1+a^{p}$ in characteristic $p$, we have \[ f^{p^m} \in 1+ (I'_x)^{p^m} \subset 1+I_x \] for each $x\in |D|=|D'|$. Therefore $f^{p^m}$ belongs to $\bigcap _{x\in D} (1+I_x )= G(\overline{C},D)$. This completes the proof. \end{proof} \begin{lem}\label{gen-U} There is a surjection \begin{equation}\label{thereIsASurj} \bigoplus \varphi _*: \bigoplus_{\varphi\colon\overline{C}\to\overline{X}}G(\overline{C},\varphi^*(D)_\mathrm{red})/G(\overline{C},\varphi^*(D))\longrightarrow \operatorname{U}(\overline{X}|D) \end{equation} where $\varphi : \overline{C}\to\overline{X}$ runs over the set of finite morphisms from normal proper curves $\overline{C}$ over $k$ such that $\varphi(\overline{C}) \not \subset D $. \end{lem} \begin{proof} By definition, the group $\operatorname{U}(\overline{X}|D)$ is generated by the cycles of the form $\partial (W)$ for $W\in C_0 (X\times (\mathbb{P}^1\setminus\{ 1\} )/(\mathbb{P}^1\setminus\{ 1\} ))$. Without loss of generality, we may assume that $W$ is irreducible. 
Let $\overline{W}$ be its closure in $\overline{X}\times \mathbb{P}^1$ and let $\overline{W}^N$ be its normalization; note that $\overline{W}^N$ is a proper integral normal curve. Let $f$ be the composite map $f\colon \overline{W}^N\to \overline{X}\times \mathbb{P} ^1\to \mathbb{P}^1$ and let $\varphi\colon\overline{W}^N\to \overline{X}\times \mathbb{P} ^1 \to \overline{X}$ be the composite with the projection to $\overline{X}$. From the condition $W\in C_0(X\times (\mathbb{P}^1\setminus\{ 1\} )/(\mathbb{P}^1\setminus\{ 1\} ))$, we find $f\in G(\overline{W}^N,\varphi ^*(D)_{\mathrm{red}})$. We have \begin{equation}\label{partialW} \partial(W)=\varphi _*(\operatorname{div}_{\overline{W}^N}(f)). \end{equation} Since $\overline{W}^N$ is a proper integral curve, the map $\varphi \colon \overline{W}^N\to \overline{X}$ is either constant or finite. In the former case the right hand side of the equation \eqref{partialW} is zero. In the latter case, the finite map $\varphi $ and the function $f\in G(\overline{W}^N,\varphi ^*(D)_{\mathrm{red}})$ determine an element in the source of the map \eqref{thereIsASurj}. In either case the equation \eqref{partialW} displays $\partial (W)$ as an element in the image of the map \eqref{thereIsASurj}. \end{proof} \begin{proof}[{\bf Proof of Theorem \ref{div-chow}}] Given Lemma \ref{gen-U}, if $\operatorname{char} (k) =0$ then each summand in the source of \eqref{thereIsASurj} is a $k$-vector space by Lemma \ref{key-lem}, hence divisible; as a quotient of a divisible group, $\operatorname{U}(\overline{X}|D)$ is then divisible. If $\operatorname{char} (k) = p>0$, each summand is annihilated by a power of $p$ by Lemma \ref{key-lem-p}, so $\operatorname{U}(\overline{X}|D)$ is generated by elements of $p$-power order and is therefore a $p$-primary torsion group. \end{proof} \begin{cor}\label{decomp-chow} Let $\overline{X}$ be a proper $k$-scheme and $D$ be an effective Cartier divisor on it. Write $X=\overline{X}\setminus |D|$. Then: \begin{enumerate}[{\rm (1)}] \item If $\operatorname{char} (k)=0$, then there is a non-canonical decomposition \[ \mathbb{C}H_0(\overline{X}|D)\simeq \operatorname{H}_0^{\operatorname{Sing}}(X)\oplus \operatorname{U}(\overline{X}|D). 
\] \item If $\operatorname{char} (k)=p>0$, then the canonical surjection $\pi_{\overline{X},D}\colon \mathbb{C}H _0(\overline{X}|D)\to \operatorname{H}_0^{\operatorname{Sing}}(X)$ is an isomorphism up to $p$-primary torsion. \end{enumerate} \end{cor} \begin{rmk}\label{rmk-Roitman-charp} Under the much stronger assumption that $X$ is smooth and quasi-affine, Theorem \ref{div-chow} also follows from Corollary \ref{cor1} for $F=h(\overline{X},D)$. Indeed, in the notations of \textit{loc.~cit.}~we have $h(\overline{X} ,D)(k)=\mathbb{C}H _0(\overline{X} |D)$ and $\mathrm{U}(h (\overline{X} ,D))(k)=\mathrm{U}(\overline{X} |D)$. \end{rmk} \subsection{Discreteness of torsion $0$-cycles with modulus} In this section, generalizing some ideas developed in \cite{LW} for the Chow group of $0$-cycles on a singular variety, we prove the useful Theorem \ref{thm-discreteness} below, showing a form of discreteness, or rigidity, for the torsion subgroups of the groups $\mathbb{C}H_0$ with modulus. We will frequently apply this property in the next section. \subsubsection{}\label{LetolX} Let $\overline{X}$ be a proper variety over an {\it algebraically closed field} $k$ of exponential characteristic $p\geq 1$. Let $D$ be an effective Cartier divisor on $\overline{X}$ and suppose that the singular locus of $\overline{X}$ is contained in $|D|$, so that the open subscheme $X=\overline{X}\setminus |D|$ is a regular (equivalently, smooth) $k$-scheme. We denote by $cl_{\overline{X}|D}$ the canonical projection morphism \[cl_{\overline{X}|D}\colon Z_0(X)\to \mathbb{C}H_0(\overline{X}| D).\] \subsubsection{}\label{LetC} Let $C$ be a smooth curve over $k$ and let $W = \sum_{i=1}^n n_i W_i\in C_0(C\times \overline{X}/C)$ be a finite correspondence from $C$ to $\overline{X}$ such that $|{W}|\subset C\times X$. Let $x$ be a closed point in $C$. Since $|W|$ is flat over $C$, we know that $\dim (|W|\cap (x\times X)) =0$, so that $|W|$ and $x\times X$ are in good position. 
Let $p_1, p_2$ be the projections from ${C}\times \overline{X}$ to $C$ and to $\overline{X}$ respectively. Then the $0$-cycle \[W(x):=p_{2, *} (W \cap p_1^*(x))\] on $\overline{X}$ is well-defined and supported outside $D$. \begin{thm}\label{thm-discreteness} Let the notation be as in \S \S $\ref{LetolX}$ and $\ref{LetC}$. Let $n$ be an integer prime to $p$. Assume that there exists a dense open subset $C^o$ of $C$ such that for every $x\in C^o(k)$ one has \[n\cdot cl_{\overline{X}|D}(W(x)) =0.\] Then the function $x\in C(k)\mapsto cl_{\overline{X}|D}(W(x))$ is constant. \end{thm} \begin{proof} The proof uses the strategy of the proof of \cite[Proposition 4.1]{LW}. Let $\overline{C}$ be the smooth compactification of $C$, so that $C\subseteq \overline{C}$. Let $\overline{W}_i$ be the closure of $W_i$ in $\overline{C}\times \overline{X}$ and let $\overline{W}_i^N$ be its normalization, which is a smooth projective curve. Let $u_i\colon \overline{W}_i^N\to \overline{X}$ be the composite $\overline{W}_i^N\to \overline{C}\times \overline{X}\xrightarrow{p_2}\overline{X}$ (which is either a constant map into $X$ or a finite map). By \cite[Proposition 2.10]{KP3} we have a proper push-forward map $ u_{i,*}\colon \mathbb{C}H _0(\overline{W}_i^N|u_i^*(D))\to \mathbb{C}H _0(\overline{X}|D). $ Let $\phi _i $ be the composite $\overline{W}_i^N\to \overline{C}\times \overline{X}\xrightarrow{p_1}\overline{C}$. Define the effective Cartier divisor $D_{\overline{C},W}:= \sum _{i} \phi _{i,*}u_i^*D$ on $\overline{C}$. By \cite[Proposition 2.12]{KP3}, there is a flat pull-back map $ \phi _i^*\colon \mathbb{C}H _0(\overline{C}|D_{\overline{C},W})\to \mathbb{C}H _0(\overline{W}_i^N|u^*_iD). 
$ We define a homomorphism \begin{equation*}\label{WeFinallySet} \operatorname{Tr}_W = \sum_{i} n_i \, u_{i,*}\circ \phi _i^*\colon \mathbb{C}H_0(\overline{C}| D_{\overline{C}, W}) \to \mathbb{C}H_0(\overline{X}| D).\end{equation*} An easy computation shows that for every $x\in C(k)$ we have \begin{equation}\label{PropTransferChow} cl_{\overline{X}|D}(W(x)) = \operatorname{Tr}_W(cl_{\overline{C}|D_{\overline{C}, W}}(x)).\end{equation} Let $\mathbb{C}H_0(\overline{C}|D_{\overline{C}, W})^0$ denote the degree-$0$-part of the Chow group $\mathbb{C}H_0(\overline{C}|D_{\overline{C}, W})$. By a moving argument on the curve $\overline{C}$ using the Riemann-Roch theorem, it is generated by differences of classes of closed points in $C^o$; therefore, by \eqref{PropTransferChow} and the assumption, $\operatorname{Tr}_W$ maps $\mathbb{C}H_0(\overline{C}|D_{\overline{C}, W})^0$ into the $n$-torsion subgroup $\mathbb{C}H_0(\overline{X}|D)[n]$. Note now that since $\overline{C}$ is a curve, we have $\mathbb{C}H_0(\overline{C}|D_{\overline{C}, W})^0 = \mathrm{Jac}(\overline{C}| D_{\overline{C}, W})(k)$, where the right hand side is the group of $k$-rational points of the Rosenlicht-Serre generalized Jacobian. Since $n$ is prime to $p$, the group $\mathbb{C}H_0(\overline{C}|D_{\overline{C}, W})^0$ is $n$-divisible by \cite[Chapter V]{SerreGACC}, and therefore its image under $\operatorname{Tr}_W$, being $n$-divisible and contained in the $n$-torsion, is $0$. Hence by \eqref{PropTransferChow}, for every pair of closed points $x_1, x_2$ in $C(k)$, we have \begin{align*} cl_{\overline{X}|D}(W(x_1)) - cl_{\overline{X}|D}(W(x_2)) &= \operatorname{Tr}_W(cl_{\overline{C}|D_{\overline{C}, W}}(x_1)) -\operatorname{Tr}_W(cl_{\overline{C}|D_{\overline{C}, W}}(x_2)) \\ &= \operatorname{Tr}_W(cl_{\overline{C}|D_{\overline{C}, W}}(x_1) -cl_{\overline{C}|D_{\overline{C}, W}}(x_2) ) = 0,\end{align*} proving the statement. \end{proof} \begin{rmk}\label{rmk-disc} (1) Let the notation be as in \S \S \ref{LetolX} and \ref{LetC}. 
The function $cl_{\overline{X}|D}(W(-))$ can be regarded as a function $cl_{\overline{X}|D}(W(-))_K\colon C(K)\to \mathbb{C}H_0(\overline{X}_K|D_K)$ for any field extension $K/k$. If the function $cl_{\overline{X}|D}(W(-))_K$ maps $C^o(K)$ to $\mathbb{C}H_0(\overline{X}_K|D_K)[n]$ for an algebraically closed field $K$, then the map is constant. (2) The statement of Theorem \ref{thm-discreteness} also holds in the following local setting. Let $k$ be an algebraically closed field and let $\overline{X},D$ be as above. Let $S$ be the semi-local scheme of a normal curve over $k$ at finitely many closed points and let $K$ be the fraction field of $S$. Let $W \in C_0(X\times S/S)$ be a relative finite correspondence. The correspondence $W$ defines, as above, a function $ S(K) \to \mathbb{C}H_0(\overline{X}_K|D_K) $. If the image is contained in the $n$-torsion subgroup, then the function is constant. \end{rmk} \begin{cor}\label{invariance-of-torsion} Let the notations be as in \S$\ref{LetolX}$ and let $K$ be an extension field of $k$. Then the natural map \[ \mathrm{CH}_0(\overline{X}|D) \longrightarrow \mathrm{CH}_0(\overline{X}_K|D_K) \] is injective and induces an isomorphism \[ \mathrm{CH}_0(\overline{X}|D)\{ p'\} \simeq \mathrm{CH}_0(\overline{X}_K|D_K)\{ p' \}, \] where $M\{ p' \} $ denotes the prime-to-$p$-torsion subgroup of an abelian group $M$. \end{cor} \begin{proof} Suppose that $z\in \mathrm{CH}_0(\overline{X}|D)$ is annihilated in $\mathrm{CH}_0(\overline{X}_K|D_K)$. Then, by a limit argument, the relation annihilating $z_K$ is defined over $\overline{X}_A$, where $A$ is a finitely generated $k$-subalgebra of $K$: i.e.~there is a flat family $\overline{C}\subset \overline{X}_A$ of curves parametrized by $\Spec {A}$ and a rational function $f\in G(\overline{C},D_A)$ with $\operatorname{div} (f)= z_A$. By specializing to a $k$-rational point of $\mathrm{Spec}(A)$, we get a relation annihilating $z$; hence $z\in \mathrm{CH}_0(\overline{X}|D)$ is zero. 
Having shown the injectivity, to show the surjectivity on prime-to-$p$ parts we may assume that $K$ is algebraically closed. Suppose we are given an element $z_K\in \mathrm{CH}_0(\overline{X}_K|D_K)$ annihilated by an integer $n$ prime to the exponential characteristic. The same limit argument shows that there exist: \begin{enumerate} \item a finitely generated smooth $k$-subalgebra $A$ of $K$; \item a cycle $z_A$ on $X_A$ which is flat over $\mathrm{Spec}(A)$ and induces $z_K$ by the scalar extension $A\to K$; \item a relation annihilating $n\cdot z_A$. \end{enumerate} Specializing to an arbitrary $k$-rational point $x\colon \mathrm{Spec}(k)\to \mathrm{Spec}(A)$, we get a cycle $z$ on $X$ which is annihilated by $n$ in $\mathrm{CH}_0(\overline{X}|D)$. We now show that $z$ maps to $z_K$ by the scalar extension $k\to K$. Consider the $K$-scheme $\mathrm{Spec}(A\otimes _k K)$. There are two distinguished $K$-rational points on it: one is $\eta \colon \mathrm{Spec}(K)\to \mathrm{Spec}(A\otimes _k K)$, which corresponds to the inclusion $A\to K$, and the other is $x\times _k K\colon \mathrm{Spec}(K) \to \mathrm{Spec}(A\otimes _k K)$. By Bertini's theorem, there is a smooth curve $C$ in $\mathrm{Spec}(A\otimes _kK)$ passing through $\eta$ and $x\times _k K$. Restricting the flat family of cycles $z_A$ over $\Spec {A}$ along $C\to \mathrm{Spec}(A)$ gives a cycle $z_C$ on $X\times _k C=X_K\times _K C$, which is a family of $n$-torsion $0$-cycles on $X_K$ parametrized by $C$. Since $K$ is algebraically closed, we can then apply Theorem \ref{thm-discreteness} to conclude that $z\otimes _kK=z_C(x\times _k K)$ and $z_K=z_C(\eta )$ are equal in $\mathrm{CH}_0(\overline{X}_K|D_K)$. \end{proof} \subsection{Torsion cycles with modulus and cycles on curves} \subsubsection{}\label{SettingTors} Suppose that $\overline{X}$ is a projective variety over an algebraically closed field $k$ of characteristic $p\geq 0$, regular in codimension one.
Let $D$ be an effective Cartier divisor on $\overline{X}$ such that the open complement $X = \overline{X}\setminus |D|$ is smooth. Denote by $T_{\overline{X}|D}$ the subgroup of $\mathrm{CH} _0(\overline{X}|D)^0$ generated by elements $b$ for which there exist a smooth proper curve $\overline{C}$ with a finite morphism $\varphi \colon \overline{C}\to\overline{X}$ satisfying $\varphi(\overline{C}) \nsubseteq D$, and an element $a\in \mathrm{CH} _0(\overline{C}|\varphi ^*D)_\mathrm{tors}$ such that $b=\varphi _*a$. The main technical result of this section is Theorem \ref{tor-comes-curve}. The proof is strongly inspired by \cite[Proposition 3.4]{MarcTorsion}. \begin{thm}\label{tor-comes-curve} In the notation of \S \ref{SettingTors}, we have an equality $T_{\overline{X}|D}\{ p' \} = \mathrm{CH}_{0}(\overline{X}|D)\{p'\}. $ \end{thm} We will in fact deduce from it the following more refined result, which represents the heart of the proof of Theorem \ref{Torsion-curves-Intro} (see Corollary \ref{Torsion-Curves-Body} below). \begin{prop}\label{generic-rep-T} For any given element of $\mathrm{CH} _0(\overline{X}|D)\{ p'\} $, there is a smooth proper curve $\overline{C}$ over $k$ with a finite morphism $\varphi \colon \overline{C}\to \overline{X}$ such that the given element comes from $\mathrm{CH} _0(\overline{C}|\varphi ^*D)\{ p'\}$. \end{prop} \begin{ex}In characteristic $0$, we can show that $\mathrm{CH}_0(\overline{X}|D)_{\mathrm{tors}}\simeq \operatorname{H}_0^{\operatorname{Sing}}(X)_{\mathrm{tors}}$ in the case $\overline{X} = \overline{C}_1\times \overline{C}_2$ for $\overline{C}_i$ two smooth projective curves over $k$ and $D= \overline{C}_1\times \mathfrak{m}$ for $\mathfrak{m} = \sum_{i=1}^{r}n_i[x_i]$ an effective divisor on $\overline{C}_2$.
The proof, which we omit, uses Proposition \ref{generic-rep-T} together with Rojtman's torsion theorem for an open subvariety of a smooth projective variety (as in the formulation of \cite{SS}). \end{ex} The proofs of Theorem \ref{tor-comes-curve} and Proposition \ref{generic-rep-T} require some technical work; they are completed at the end of this section. \subsubsection{} If $\dim \overline{X}=1$, then Theorem \ref{tor-comes-curve} is trivially true, so we may assume $\dim \overline{X}\ge 2$. The following lemma reduces the general case to the case of surfaces. \begin{lem} Suppose that the following equality holds for all $(\overline{X},D)$ as above with $\dim \overline{X}=2$: \[T_{\overline{X}|D}\{ p' \} = \mathrm{CH}_{0}(\overline{X}|D)\{p'\} .\] Then the equality holds for all $(\overline{X},D)$. \end{lem} \begin{proof} Let $z$ be a $0$-cycle whose class in $\mathrm{CH}_{0}(\overline{X}|D)_\mathrm{tors}$ is nonzero and $n$-torsion with $n$ prime to $p$. Then there are normal, proper, integral curves $\overline{C}_{1}, \dots, \overline{C}_{s}$ with finite morphisms $\varphi_{i}: \overline{C}_{i} \to \overline{X}$ and $f_{i} \in G(\overline{C}_i,\varphi_i^*(D))$ such that $$nz = \sum^{s}_{i=1}\varphi_{i,*}(\operatorname{div}(f_{i})).$$ One may assume that $\varphi_{i}$ maps $\overline{C}_{i}$ birationally onto its image.
By blowing up points lying on $X$, one can construct a projective birational morphism $\pi \colon \overline{Y}\to \overline{X}$ such that \begin{enumerate} \item $\pi^{-1}(X)$ is smooth and $\pi$ is an isomorphism over a neighborhood of $D$; \item the maps $\varphi_{i}: \overline{C}_{i} \to \overline{X}$ factor through inclusions $\phi_{i}: \overline{C}_{i} \to \overline{Y}$; \item there are a $0$-cycle $\tilde{z}$ on $\overline{Y}$, smooth projective rational curves $L_{j}$ for $j=s+1,\dots, r$ lying in the exceptional locus, and rational functions $g_{j}$ on $L_{j}$ such that we have the relations \[\pi_{*}(\tilde{z}) = z, \quad n\tilde{z} = \sum^{s}_{i=1} \phi_{i, *}(\operatorname{div}(f_{i})) + \sum^{r}_{j=s+1} \phi_{j, *}(\operatorname{div}(g_{j})),\] where $\phi_{j}: L_{j} \to \overline{Y}$ denotes the inclusion. \end{enumerate} Furthermore, after further blow-ups, we may assume that at most two components of the union $$\overline{C} = \bigcup^{s}_{i = 1}\overline{C}_{i} \cup \bigcup^{r}_{j = s+1}L_{j}$$ pass through any given point of $\overline{C}$. In particular, $\overline{C}$ has embedding dimension two, which implies that there is a general surface section $\overline{S}$ of $\overline{Y}$ containing $\overline{C}$ which is regular in $\overline{S}\cap \pi ^{-1}(X)$ \cite[Theorems (1), (7)]{AK}. Then the assumption, applied to (the normalization of) $\overline{S}$ with $E$ the pullback of $D$ to $\overline{S}$, implies that $\tilde{z} \in T_{\overline{S}|E}$. Composing with $\pi$, we get that $z \in T_{\overline{X}|D}$. \end{proof} \subsubsection{} From now until the end of the proof of Theorem \ref{tor-comes-curve}, we will assume that $\dim \overline{X}=2$. Let $n$ be a positive integer prime to $p$. Let $z$ be a $0$-cycle on $X$ of degree zero. Let $\overline{C}$ be a proper smooth curve in $\overline{X}$ containing $|z|$. Then, as $\mathrm{CH} _0(\overline{C}|\varphi ^* D)^0$ is $n$-divisible, there is a $0$-cycle $y$ on $\overline{C}$ with $ny = z$ in $\mathrm{CH} _0(\overline{C}|\varphi ^*D)^0$.
\begin{df} Let $n$, $\overline{C}$ be as above. We define the element $ n_{\overline{C}}^{-1}(z) \in \mathrm{CH}_0(\overline{X}|D)^0/T_{\overline{X}|D} $ to be the class represented by $y$. \end{df} This does not depend on the choice of $y$, and it satisfies $n_{\overline{C}}^{-1}(nz)=n\cdot n_{\overline{C}}^{-1}(z)=z$ in $\mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D}$. The next proposition shows that it also does not depend on the choice of the curve $\overline{C}$. \begin{prop}\label{indep-of-C} Let $z$ be a $0$-cycle on $X$ of degree zero and let $\overline{C},\overline{C}'$ be two curves satisfying the above conditions with respect to $z$. Then we have the equality $ n_{\overline{C}}^{-1}(z)=n_{\overline{C}'}^{-1}(z) $ in $\mathrm{CH}_0(\overline{X}|D)^0/T_{\overline{X}|D}$. \end{prop} \begin{proof} Fix a closed point $x_0\in X$ with $x_0 \notin |z|$. Let $ P=\{ \overline{C}_x=H_x\cdot \overline{X} ~|~ x\in \mathbb{P}^1 \} $ be a pencil of hypersurface sections of $\overline{X}$ satisfying the following conditions: \begin{enumerate} \item The generic member $\overline{C}_t$ of $P$ is smooth and irreducible, and misses the singular locus of $D_{\mathrm{red}}$. \item The base locus of $P$ contains $|z| \cup \{ x_0 \} $ and misses $D$. \item The equalities of Cartier divisors $\overline{C}_0=\overline{C}+E$ and $\overline{C}_\infty =\overline{C}'+E'$ hold on $\overline{X}$, where $E$ and $E'$ are smooth and irreducible, each intersects $D$ in at least one point, and both miss the singular locus of $D_{\mathrm{red}}$. Moreover, they are disjoint from the base locus of $P$. In addition, $\overline{C}_0$ and $\overline{C}_\infty$ have only ordinary double points as singularities. \end{enumerate} (The condition that $E$ and $E'$ meet $D$ is automatic if the hypersurfaces have sufficiently high degree.) Blowing up along the base locus of $P$, we get a morphism $\pi_P\colon \overline{X}_P \to \mathbb{P}^1$. We denote by $u\colon \overline{X}_P\to \overline{X}$ the blow-down map.
Set $X_P= u^{-1}(X)$ and $D_P = u^*D$. Write $z=\sum _i n_i p_i$ and set $Z=\sum_i n_i u^{-1}(p_i)$, which is a divisor on $X_P$ satisfying $ \overline{C}_x\cdot Z=z $ on $\overline{C}_x $ for every member $\overline{C}_x$ of the pencil $P$. Let $S$ be the spectrum of the local ring of $\mathbb{P}^1$ at $0$, and denote by $s$ its closed point, by $\eta$ its generic point and by $\overline{\eta}$ a geometric generic point. We denote by $\overline{X}_S\to S$ the base change of $\pi_P$ to $S$: it is a semistable projective curve over $S$. By construction, the special fiber $(\overline{X}_S)_s$ coincides with $\overline{C}_0 = \overline{C} + E$, while the generic fiber $\overline{C}_{\eta}:=(\overline{X}_S)_\eta $ represents the generic member of the pencil. The choice of the extra point $x_0$ outside $D$ determines a section $s_0:\mathbb{P}^1\to \overline{X}_P$ of $\pi_P$; we also denote by $s_0$ the closed subscheme of $\overline{X}_P$ given by it. Consider the presheaf of abelian groups $\mathbf{Pic}^0_{(\overline{X}_S|D_S\sqcup s_0)/S}$ on the category of separated schemes of finite type over $S$ given by $T\mapsto \{ $pairs $(\mathcal{L},\alpha ) \}$, where $\mathcal{L}$ is a line bundle on $\overline{X}_S\times _S T$ which has degree zero along every fiber over $T$ and $\alpha $ is an isomorphism $\mathcal{L}|_{(D_S \sqcup s_0)\times _ST}\simeq \mathcal{O}_{(D_S \sqcup s_0)\times _ST}$. It is representable by a scheme locally of finite type over $S$ (cf.~Lemma \ref{RepresRelPic} below). The divisor $Z\subset {X}_P$ determines a section $Z\colon S\to \mathbf{Pic}^0_{(\overline{X}_S|D_S\sqcup s_0)/S}$. Take a point $\xi_s$ (resp.\ $\xi_{\overline{\eta}}$) on the closed fiber $\mathrm{Jac}{(\overline{C}_0|(D\sqcup s_0)\cdot \overline{C}_0)}$ (resp.~on the geometric generic fiber $\mathrm{Jac}{(\overline{C}_{\overline{\eta}}|(D\sqcup s_0)\cdot \overline{C}_{\overline{\eta}})}$) of $\mathbf{Pic}^0_{(\overline{X}_S|D_S\sqcup s_0)/S}$ such that $n\xi_s=Z_s $ (resp.
$n\xi_{\overline{\eta}}=Z_{\overline{\eta}}$) and such that $\xi_s$ is a specialization of $\xi_{\overline{\eta}}$. Here we used the $n$-divisibility of $\mathrm{Jac}{(\overline{C}_0|(D\sqcup s_0)\cdot \overline{C}_0)}$ and $\mathrm{Jac}{(\overline{C}_{\overline{\eta}}|(D\sqcup s_0)\cdot \overline{C}_{\overline{\eta}})}$. Then there is a spectrum $S'$ of a DVR dominating $S$ and a morphism \[ \gamma': S' \longrightarrow \mathbf{Pic}^0_{(\overline{X}_S|D_S\sqcup s_0)/S} \] such that $\gamma'(s')=\xi_s$ and $\gamma'(\overline{\eta}')=\xi_{\overline{\eta}}$, where $s'$ and $\overline{\eta}'$ are the closed point and the geometric generic point of $S'$. There is a Cartier divisor $W$ on $\overline{X}_S\times_SS'$, finite flat over $S'$, representing $\gamma'$; it naturally gives an element in $C_0(\overline{X}_{S'}/S')$. Then we have $n \cdot cl(W(s'))=u_*(Z_s)$ in $\mathrm{CH} _0(\overline{X}_{s'}|D_{s'})$ and $n\cdot cl(W(\overline{\eta}'))=u_*(Z_{\overline{\eta }})$ in $\mathrm{CH} _0(\overline{X}_{\overline{\eta}'}|D_{\overline{\eta}'})$. Since $u_*(Z_s)=u_*(Z_{\overline{\eta}})$ in $\mathrm{CH}_0(\overline{X}_{\overline{\eta}}|D_{\overline{\eta}})$, the image of the map $\phi=cl({W}(-))-cl({W}(s'))\colon S'(k(\overline{\eta }'))\longrightarrow\mathrm{CH}_0(\overline{X}_{\overline{\eta}'}|D_{\overline{\eta}'})$ lies in the $n$-torsion subgroup of the target. By the discreteness Theorem \ref{thm-discreteness} (in the formulation of Remark \ref{rmk-disc}(2)) and since $\phi(s')=0$, the map $\phi$ is identically zero. Therefore we have \[ 0 =cl({W}(\overline{\eta}'))-cl({W}(s')) =n_{\overline{C}_{\overline{\eta}}}^{-1}(z)-n_{\overline{C}}^{-1}(z), \] hence $n_{\overline{C}_{\overline{\eta}}}^{-1}(z)=n_{\overline{C}}^{-1}(z)$. Similarly, $n_{\overline{C}_{\overline{\eta}}}^{-1}(z)=n_{\overline{C}'}^{-1}(z)$. This completes the proof of Proposition \ref{indep-of-C}. \end{proof} In the above proof, we used the following lemma.
We let $S$ be the spectrum of a discrete valuation ring and denote by $s$ the closed point of $S$ and by $\eta$ the generic point of $S$. \begin{lem}\label{RepresRelPic} Let $\pi \colon X \to S$ be a semistable projective curve over $S$, i.e.~$\pi$ is a projective and flat morphism of relative dimension $1$ whose geometric fibers are reduced, connected curves having only ordinary singularities. Assume $\pi$ is smooth over $\eta$ and admits a section $s_0$. Let $D$ be an effective Cartier divisor on $X$ which is flat over $S$. Then the relative Picard functor $\mathbf{Pic}^0_{(X|D\sqcup s_0)/S}$ is representable by a scheme (locally) of finite type over $S$. \end{lem} \begin{proof} The presheaf $\mathbf{Pic}^0_{(X| s_0)/S}$ of line bundles of degree $0$ with a fixed trivialization along $s_0$ is isomorphic to $P^0$ in \cite[\S \S (1.2) and (3.2)d, cf.~\S (1.3)]{Raynaud}. Thanks to the semi-stability assumption, the conditions \cite[(N)${}^*$(6.1.4)]{Raynaud} and \cite[Theorem 8.2.1 (i)]{Raynaud} are satisfied in our situation. Therefore, by [{\it op.~cit.}, implication (i)$\Rightarrow $(vi)], the presheaf $\mathbf{Pic}_{(X|s_0)/S}$ is representable by a scheme (locally) of finite type over $S$. Forgetting the extra trivialization along $D$ gives a canonical morphism of sheaves \[\phi\colon \mathbf{Pic}^0_{(X|D\sqcup s_0)/S} \to \mathbf{Pic}^0_{(X|s_0)/S}. \] Let $G$ be the affine group scheme over $S$ given by $\pi_{D, *} \mathbb{G}_{m, D}$. Then it is easy to show that $\phi$ makes $ \mathbf{Pic}^0_{(X|D\sqcup s_0)/S}$ a $G$-torsor over $\mathbf{Pic}^0_{(X|s_0)/S}$. Hence, by the relative representability theorem \cite[Lemma 3.6]{Gro}, the presheaf $\mathbf{Pic}^0_{(X|D\sqcup s_0)/S}$ is representable by a scheme locally of finite type over $S$. \end{proof} Let $Z_0(X)^0$ denote the group of $0$-cycles on $X$ of degree zero.
By Proposition \ref{indep-of-C}, we have a well-defined homomorphism \[ n_X^{-1}: Z_0(X)^0\to \mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D} \] satisfying $n_X^{-1}(nz)=n\cdot n_X^{-1}(z)=z$. In order to complete the proof of Theorem \ref{tor-comes-curve}, we need two more lemmas. \begin{lem}\label{blow-up-lem} Let $u: \overline{X}'\to \overline{X}$ be a blow-up at a point $p$ in $X$. Then the following diagram commutes: \[ \xymatrix{ Z_0(X')^0\ar[r]^(0.4){n_{X'}^{-1}} \ar[d]_{u_*}&\mathrm{CH} _0(\overline{X}'|D)^0/T_{\overline{X}'|D} \ar[d]_{u_*} \\
Z_0(X)^0\ar[r]^(0.4){n_{X}^{-1}}&\mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D} } \] \end{lem} \begin{proof} Write an element $z'\in Z_0(X')^0$ as $z'=z'_1+z'_2$, where $z'_1$ is supported on the exceptional divisor $E$ of $u$ and $z'_2$ is supported on $X'\setminus E$. Set $d=\deg z'_1$. Choose $q\in E$ and write \[ z'=(z'_1-d\cdot q)+(d\cdot q+z'_2) \] as a sum of $0$-cycles of degree zero. The first term vanishes both when we apply $u_*$ and when we apply $u_*\circ n_{X'}^{-1}$, so we may assume $z'$ is of the form $z'=d\cdot q +z'_2$. Take a proper smooth curve $C$ in $\overline{X}$ which contains $|u_*z'|$ and passes through $p$ with the right tangent direction, so that the strict transform $C'\subset \overline{X}'$ (which is isomorphic to $C$) contains $q$. We have a tautological identity \[ u_*(n_{C'}^{-1}(z'))=n_C^{-1}(u_*z'). \] The left hand side is equal to $u_*(n_{X'}^{-1}(z'))$, and the right hand side to $n_X^{-1}(u_*z')$. \end{proof} \begin{lem}\label{Prop-Factn_-1} The map $n_X^{-1}$ factors through $\mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D}$. \end{lem} \begin{proof} It suffices to show the following: let $\varphi \colon \overline{C}\to \overline{X}$ be a morphism from a proper smooth curve such that $\varphi $ is a birational map onto its image and such that $\varphi(\overline{C})\not \subset D$, and let $z$ be a $0$-cycle on $\overline{C}$ which represents a torsion class in $\mathrm{CH} _0(\overline{C}|\varphi ^*D)^0$.
Then $n_X^{-1}(\varphi _*z)=0$. To achieve this, blow up $\overline{X}$ at the singular points of $\varphi (\overline{C})$ several times, so that the strict transform of $\varphi (\overline{C})$ is non-singular and therefore isomorphic to $\overline{C}$. If we take a $0$-cycle $y$ on $\overline{C}$ such that $ny=z$ in $\mathrm{CH} _0(\overline{C}|\varphi ^*D)^0$, then by Lemma \ref{blow-up-lem} applied to $z\in Z_0(X')^0$ we have \[ n_X^{-1}(\varphi _*z)=\varphi _*y \text{ in }\mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D}. \] Since $y$ is a torsion class on $\overline{C}$, the right hand side is zero. This completes the proof. \end{proof} \begin{proof}[{\bf Proof of Theorem \ref{tor-comes-curve}}] By Lemma \ref{Prop-Factn_-1}, we have a map \[ n_X^{-1}: \mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D}\to \mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D} \] such that for every $z\in \mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D}$ we have $n_X^{-1}(nz)=n\cdot n_X^{-1}(z)=z$. If $z\in \mathrm{CH} _0(\overline{X}|D)^0$ is annihilated by an integer $n$ prime to $p$, then we have $z=n_X^{-1}(nz)=0$ in $\mathrm{CH} _0(\overline{X}|D)^0/T_{\overline{X}|D}$. This completes the proof of Theorem \ref{tor-comes-curve}. \end{proof} \begin{lem}\label{exist-pencil} Let $\overline{C}$ be a smooth proper curve and $\varphi \colon \overline{C} \to \overline{X}$ be a morphism with $\overline{C}\to \varphi (\overline{C})$ birational. Suppose we are given a pencil $P=\{ \overline{C}_x=H_x\cdot \overline{X}~|~x\in \mathbb{P}^1 \}$ satisfying: \begin{enumerate} \item $\overline{C}_1=\varphi (\overline{C})+E$, where $E$ is a smooth proper irreducible curve missing $\varphi (\overline{C})\cap |D|$ and intersecting $\varphi (\overline{C})$ transversally. \item The axis misses $D$ and intersects $X$ transversally.
\end{enumerate} Then we have: \[ \varphi _*(\mathrm{CH}_0(\overline{C}|\varphi ^*D)\{ p'\} )\subset \varphi _{\overline{C}_{\overline{\eta}}*} (\mathrm{CH}_0(\overline{C}_{\overline{\eta}}|D\cdot \overline{C}_{\overline{\eta}})\{ p' \} ) \text{ in }\mathrm{CH} _0(\overline{X}_{\overline{\eta}}|D_{\overline{\eta}})_\mathrm{tors}, \] where $\overline{C}_{\overline{\eta} }$ is the geometric generic member of the pencil. Moreover, the map \[ \varphi _{\overline{C}_{\overline{\eta}}*}\colon\mathrm{CH}_0(\overline{C}_{\overline{\eta}}|~D\cdot \overline{C}_{\overline{\eta}})\{ p'\} \to \mathrm{CH} _0(\overline{X}_{\overline{\eta}}|D_{\overline{\eta}})\{ p'\} \overset{\mathrm{Cor.~}\ref{invariance-of-torsion}}{=} \mathrm{CH} _0(\overline{X}|D)\{ p'\} \] is $\mathrm{Gal}(\overline{\eta}/\eta)$-equivariant. \end{lem} \begin{proof} By blowing up along the base locus of $P$, we get a morphism $\pi _P\colon \overline{X}_P\to \mathbb{P}^1$. Since the base locus misses $D$, we can view $D = D_P$ as a Cartier divisor on $\overline{X}_P$. If we choose a point $x_0$ in the base locus, we get a section $s_0\colon\mathbb{P}^1 \to \overline{X}_P$. Denote by $S$ the local scheme of $\mathbb{P}^1$ at $1$. Let $\overline{X}_S$ and $D_S$ be the base changes to $S$ of $\overline{X}_P$ and $D_P$, respectively. The Cartier divisor $D_S$ on $\overline{X}_S$ is finite and flat over $S$, and we can use Lemma \ref{RepresRelPic} to show that $\mathbf{Pic}^0_{({\overline{X}_S}|D_S\sqcup s_0)/S}$ is representable. Take any $y\in \mathrm{Jac}(\overline{C}|\varphi ^*D)\{ p'\}$ and a lifting $y'$ of $(y,0)\in \mathrm{Jac}(\overline{C}|\varphi ^*D)\oplus \mathrm{Jac}(E|D\cdot E)$ in the surjection \[ \mathrm{Jac}(\overline{C}_1|(D\cdot \overline{C}_1)\sqcup x_0)\{ p'\} \twoheadrightarrow \mathrm{Jac}(\overline{C}|\varphi ^*D)\{ p'\} \oplus \mathrm{Jac}(E|D\cdot E)\{ p'\} .
\] Then the image of $y$ in $\mathrm{CH} _0(\overline{X}|D)$ is equal to the image of $y'$ under the composite map \[ \mathrm{Jac}(\overline{C}_1|(D\cdot \overline{C}_1)\sqcup x_0)\{ p'\} \twoheadrightarrow \mathrm{Jac}(\overline{C}|\varphi ^*D)\{ p'\} \oplus \mathrm{Jac}(E|D\cdot E)\{ p'\} \to \mathrm{CH} _0(\overline{X}|D). \] Suppose that the lift $y'$ is annihilated by an integer $n$ prime to $p$. Then there is an irreducible component $B$ of $\mathbf{Pic}^0_{({\overline{X}_S}|D_S\sqcup s_0)/S}[n]$ containing $y'$, since this group scheme is \'etale over $S$. Note that $B$ is regular and $1$-dimensional. There is a horizontal Cartier divisor $W$ on $\overline{X}_S\times _S B$ which represents the section $B\to \mathbf{Pic}^0_{(\overline{X}_S|D_S\sqcup s_0)/S}$. Then, by the rigidity (Remark \ref{rmk-disc}) of \[ cl(W(-))\colon B(k(\overline{\eta}))\to \coprod _{b\in B(k(\overline{\eta}))} \mathrm{Jac}(\overline{C}_b|~D\cdot \overline{C}_b) [n]\longrightarrow \mathrm{CH} _0(\overline{X}|D)[n], \] we find that $y'$ is in the image of $ \mathrm{Jac}(\overline{C}_{\overline{\eta}}|~D\cdot \overline{C}_{\overline{\eta}})[n]$. For the second assertion, we first take any $n$ prime to $p$ and $z\in \mathrm{Jac}(\overline{C}_{\overline{\eta} }|D\cdot \overline{C}_{\overline{\eta} }\sqcup x_0)({\overline{\eta} })[n]$. Let $B$ be the closure in $\mathbf{Pic}^0_{(\overline{X}_S|D_S\sqcup s_0)/S}[n]$ of its image in $\mathrm{Jac}(\overline{C}_{\eta}|D\cdot \overline{C}_{\eta}\sqcup x_0)[n]$. If we let $\sigma \in \mathrm{Gal}(\overline{\eta} /\eta )$ act on $z$, the resulting element \[ z^\sigma :\mathrm{Spec}(k(\overline{\eta} )) \xrightarrow{\sigma} \mathrm{Spec}(k(\overline{\eta} )) \xrightarrow{z} B\] lands on the same point of $B$. On the other hand, by the discreteness Theorem \ref{thm-discreteness}, $B(\overline{\eta})\to \mathrm{CH} _0(\overline{X}|D)[n]$ is a constant map. Therefore $z$ and $z^\sigma$ map to the same element of $\mathrm{CH} _0(\overline{X}|D)[n]$.
This shows that the map \[ \mathrm{Jac}(\overline{C}_{\overline{\eta}}|D\cdot \overline{C}_{\overline{\eta}}\sqcup x_0)({\overline{\eta}})\{ p'\}\to \mathrm{CH} _0(\overline{X}|D) \] is $\mathrm{Gal}(\overline{\eta} /\eta )$-equivariant. It follows that so is the map \[ \mathrm{Jac}(\overline{C}_{\overline{\eta}}|D\cdot \overline{C}_{\overline{\eta}})({\overline{\eta}})\{ p'\}\to \mathrm{CH} _0(\overline{X}|D). \] This completes the proof of Lemma \ref{exist-pencil}. \end{proof} \begin{proof}[{\bf Proof of Proposition \ref{generic-rep-T}}] Take any two elements $y_1,y_2\in \mathrm{CH} _0(\overline{X}|D)\{ p' \}$ which come from the torsion parts of $\mathrm{CH} _0$ of smooth proper curves $ \overline{C}^{(1)}, \overline{C}^{(2)}$, respectively. We show that there is a smooth proper curve $\overline{C}^{(3)}$ mapping to $\overline{X}$ and an element in $\mathrm{CH} _0( \overline{C}^{(3)}|D\cdot \overline{C}^{(3)})\{ p' \}$ which maps to $y_1+y_2$. We may assume, for $i=1,2$, that $\varphi ^{(i)}\colon \overline{C}^{(i)}\to \overline{X}$ maps $\overline{C}^{(i)}$ birationally onto its image. Take a pencil such that \begin{enumerate} \item $\overline{C}_0=\varphi ^{(1)}(\overline{C}^{(1)})+E^{(1)}$, where $E^{(1)}$ is a smooth proper irreducible curve missing $\varphi ^{(1)}(\overline{C}^{(1)})\cap |D|$ and intersecting $\varphi ^{(1)}(\overline{C}^{(1)})$ transversally. \item $\overline{C}_1=\varphi ^{(2)}(\overline{C}^{(2)})+E^{(2)}$, where $E^{(2)}$ is a smooth proper irreducible curve missing $\varphi ^{(2)}(\overline{C}^{(2)})\cap |D|$ and intersecting $\varphi ^{(2)}(\overline{C}^{(2)})$ transversally. \item The axis misses $D$ and intersects $X$ transversally.
\end{enumerate} Then we can apply Lemma \ref{exist-pencil} to deduce that the image of $\mathrm{Jac}(\overline{C}_{\overline{\eta}}|~D\cdot \overline{C}_{\overline{\eta}})\{ p' \} $ contains the images of $\mathrm{Jac}(\overline{C}^{(1)}|\varphi ^{(1)*}D)\{ p' \}$ and $\mathrm{Jac}(\overline{C}^{(2)}|\varphi ^{(2)*}D)\{ p' \} $. Specializing to a generic member of the pencil, we find that the given elements $y_1,y_2$ come from $\overline{C}^{(3)}:=\overline{C}_x$ for some $x\in \mathbb{P}^1(k)$. Repeating this argument finitely many times, we find that any element of $T_{\overline{X}|D}\{ p'\}$ comes from an appropriate smooth proper curve $\overline{C}$ over $k$. \end{proof} \begin{cor}[see Theorem \ref{Torsion-curves-Intro}]\label{Torsion-Curves-Body} Let $k$ be an algebraically closed field of exponential characteristic $p\geq 1$. Let $\overline{X}$ be a projective variety over $k$, regular in codimension one. Let $D$ be an effective Cartier divisor on $\overline{X}$ such that the open complement $X= \overline{X} \setminus |D|$ is smooth over $k$. Let $\alpha\in \mathrm{CH}_0(\overline{X}|D)$ be a prime-to-$p$ torsion cycle. Then there exist a smooth projective curve $\overline{C}$ with a morphism $\varphi \colon \overline{C} \to \overline{X}$ such that $\varphi ^{*}D$ is a well-defined Cartier divisor on $\overline{C}$, and a prime-to-$p$ torsion $0$-cycle $\beta \in \mathrm{CH}_0(\overline{C}| (\varphi ^*D)_{\mathrm{red}}) \{ p'\} $ such that $\varphi _*(\beta) = \alpha $ in $\mathrm{CH} _0(\overline{X}|D)$. \end{cor} \begin{proof} By Proposition \ref{generic-rep-T}, it is enough to show that for a proper smooth curve $\overline{C}$ and an effective divisor $D$ on $\overline{C}$, we have the following isomorphism: \[ \pi_{\overline{C},D}\{p'\}\colon \mathrm{CH}_0(\overline{C}|D)\{p'\}\xrightarrow{\simeq} \mathrm{CH}_0(\overline{C}|D_\mathrm{red})\{p'\}= \operatorname{H}_0^{\operatorname{Sing}}(C)\{p'\}. \] The second equality holds because $\overline{C}$ is a curve.
The first map is an isomorphism because its kernel $\operatorname{U}(\overline{C}|D)=G(\overline{C},D_\mathrm{red})/G(\overline{C},D)$ is uniquely $n$-divisible for any $n$ prime to $p$, by Lemma \ref{key-lem} and Lemma \ref{key-lem-p}. \end{proof} \begin{rmk}For a more definitive result regarding the torsion of $\mathrm{CH}_0(\overline{X}|D)$ in characteristic zero, obtained by a completely different method, see \cite[Theorem 7.8 and Theorem 8.8]{BK}. \end{rmk} \section{Reciprocity presheaves and sheaves}\label{Section-Reciprocity} \subsection{Recall of definitions and fundamental results} We denote by $\mathbf{Sch} /k$ the category of separated schemes of finite type over $k$ and by $\mathbf{Sm}/k$ the subcategory of smooth schemes over $k$. Let $\mathbf{Cor}/k$ be the category of finite correspondences over $k$: it has the same class of objects as $\mathbf{Sm} /k$, and the set of morphisms from $X$ to $Y$ is $\mathrm{Hom}_{\mathbf{Cor}/k}(X,Y)=\mathrm{Cor}(X,Y):=C_0(X\times Y/X)$. A {\it presheaf with transfers} is a presheaf of abelian groups on $\mathbf{Cor}/k$ (see \cite[Lecture 2]{MVW} for their basic properties). We write $\mathbf{PST}$ for the abelian category of presheaves with transfers. The following definitions are taken from \cite[\S 2]{KSY}. \begin{df}\label{def-mod-pair} A pair $(\overline{X}, Y)$ of schemes is called \emph{a modulus pair} if \begin{romanlist} \item $\overline{X} \in \mathbf{Sch}/k$ is integral and proper over $k$; \item $Y \subset \overline{X}$ is a closed subscheme such that $X = \overline{X} \setminus Y$ is quasi-affine (i.e.~quasi-compact and isomorphic to an open subscheme of an affine scheme) and smooth over $k$. \end{romanlist} \end{df} Let $(\overline{X}, Y)$ be a modulus pair and write $X = \overline{X} \setminus Y$ for the quasi-affine open complement.
For $S \in \mathbf{Sm}/k$, we denote by $\mathcal{C}_{(\overline{X}, Y)}(S)$ the class of morphisms $\varphi\colon \overline{C} \to \overline{X} \times S$ fitting in the following commutative diagram \[\xymatrix{ & \overline{C} \ar[d]^{\varphi} \ar[rd]^{\gamma_\varphi} \ar[ld]_{p_\varphi} & \\ S & \overline{X}\times S\ar[r] \ar[l] & \overline{X}, }\] where $\overline{C} \in \mathbf{Sch}/k$ is an integral normal scheme and $\varphi$ is a finite morphism such that, for some generic point $\eta$ of $S$, $\dim \overline{C} \times_{S} \eta = 1$ and the image of $\gamma_{\varphi}$ is not contained in $Y$. Let $G(\overline{C}, \gamma_{\varphi}^{*}Y)$ be as in \S\ref{DefChowMod}. Then the divisor map on $\overline{C}$ induces $$\mathrm{div}_{\overline{C}}: G(\overline{C}, \gamma_{\varphi}^{*}Y) \to C_0(C/S),$$ where $C = \overline{C} \setminus \gamma^{*}_{\varphi}Y$. \begin{df} Let $F\in \mathbf{PST}$ be a presheaf with transfers and $(\overline{X}, Y)$ a modulus pair with $X = \overline{X} \setminus Y$. Let $a \in F(X)$. We say that $Y$ is \emph{a modulus for} $a$ if, for every $\varphi: \overline{C} \to \overline{X} \times S \in \mathcal{C}_{(\overline{X}, Y)}(S)$ and every $f \in G(\overline{C}, \gamma_{\varphi}^{*}Y)$, we have \[(\varphi_{*}\mathrm{div}_{\overline{C}}(f))^{*}(a) = 0\] in $F(S)$. Here $\varphi_{*}\colon C_0(C/S) \to C_0(X \times S/S)=\mathrm{Cor}(S,X)$ denotes the push-forward of correspondences. \end{df} \begin{df} Let $F\in \mathbf{PST}$ be a presheaf with transfers. We say that $F$ has \emph{reciprocity} (or that $F$ is a \emph{reciprocity presheaf}) if, for any quasi-affine $X \in \mathbf{Sm} /k$, any $a \in F(X)$, and any open immersion $X \hookrightarrow \overline{X}$ with $\overline{X}$ integral and proper over $k$, the element $a$ has a modulus $Y$ for some closed subscheme $Y \subset \overline{X}$ such that $X = \overline{X} \setminus Y$.
Following the notation in \cite{KSY}, we use $\mathbf{Rec}$ to denote the full subcategory of the category of presheaves with transfers consisting of reciprocity presheaves. Note that $\mathbf{Rec}$ is an abelian category. \end{df} \begin{rmk} \label{Phi-def}Let $(\overline{X}, Y)$ be a modulus pair. Denote the category of abelian groups by $\mathbf{Ab}$. By \cite[Theorem 2.1.5]{KSY}, the functor \begin{equation*}\label{FunctHavModulus}\mathbf{PST} \to \mathbf{Ab}, \quad F\mapsto \{a \in F(X) \mid a \text{ has modulus }Y\} \end{equation*} is representable by a presheaf with transfers, denoted by $h(\overline{X}, Y)$. It is explicitly constructed by \[ S\mapsto \operatorname{coker}\left( \bigoplus _{\varphi \in \mathcal{C}_{(\overline{X},Y)}(S) } G(\overline{C},\gamma _{\varphi }^*Y)\xrightarrow{\varphi _*\circ \operatorname{div} } C_0(X\times S/S) \right) .\] If $Y$ happens to be an effective Cartier divisor on $\overline{X}$, then $h(\overline{X}, Y)\in \mathbf{Rec}$. \end{rmk} \subsection{Homotopy invariance and torsion reciprocity sheaves}\label{section-htpy-rec} \begin{thm}\label{htpy} Let $F$ be a reciprocity presheaf with transfers which is separated for the Zariski topology. Then $F$ is homotopy invariant in the following cases: \begin{enumerate}[{\rm (1)}] \item $\operatorname{char} (k)=0$ and $F$ is torsion; \item $\operatorname{char} (k)=p>0$ and $F$ is $p$-torsion free. \end{enumerate} \end{thm} \begin{proof} We first prove $(1)$. By \cite[Theorem 3.1.1(2)]{KSY}, it suffices to show that any element $a \in F(X)$ has a reduced modulus. Since $F$ is separated for the Zariski topology, we can use the criterion given by \cite[Remark 4.1.2]{KSY} (which depends on the Injectivity Theorem \cite[Theorem 6]{KSY}). Namely, let $K=k(S)$ be the function field of a connected $S\in \mathbf{Sm} /k$ and let $\overline{C}$ be a normal integral proper curve over $K$.
Let $\varphi \colon \overline{C}\to \overline{X}\times _kK$ be a finite morphism such that $\varphi (\overline{C})\not\subset Y\times _kK$. Put $C=\varphi ^{-1}(X\times _kK)$ and let $\psi \colon C\to X$ be the induced map. Let $D=\varphi ^{-1}(Y\times _kK)$. Then the element $a$ has a reduced modulus if, for all $\varphi \colon \overline{C}\to \overline{X}\times _kK$ as above, the map \[ (\psi _*\operatorname{div} (-))^*(a) \colon G(\overline{C},D_{\mathrm{red}})\to F(K) \] is zero. Since $F$ is torsion, there is an integer $n>0$ such that $na=0$. Thus the above map factors through $G(\overline{C},D_\mathrm{red})/n$. Since $F$ is a reciprocity presheaf with transfers, it has in particular weak reciprocity in the sense of \cite[Definition 5.1.6]{KSY}. By definition, there then exists an effective divisor $E$ on $\overline{C}$ with $|E|= |D|$ which is a weak modulus for $\psi^*(a)$. By Lemma \ref{key-lem}, we have $G(\overline{C},E)/n\simeq G(\overline{C},D_\mathrm{red})/n, $ so that the map $(\psi _*\operatorname{div} (-))^*(a)\colon G(\overline{C},D_\mathrm{red}) \to G(\overline{C},D_\mathrm{red})/n\simeq G(\overline{C},E)/n\to F(K) $ is zero. This proves (1). We now prove case (2). Again, it suffices to show that any element $a \in F(X)$ has a reduced modulus. We use the following variant of \cite[Remark 4.1.2]{KSY}, deduced from \cite[Theorem 6]{KSY}: the element $a$ has a reduced modulus if, for any morphism $\varphi \colon \overline{C}\to \overline{X}\times _kK$ as above and for any $f\in G(\overline{C},D_{\mathrm{red}})$, the element $(\varphi _*\operatorname{div} (f))^*(a) \in F(K)$ is zero. Now, since $F$ is a reciprocity presheaf, the section $a$ has a modulus $Y\subset \overline{X}$ supported on $\overline{X}\setminus X$. By Lemma \ref{key-lem-p}, for a large $n>0$ we have $f^{p^n}\in G(\overline{C},\varphi ^* (Y\times _kK))$.
Since $Y$ is a modulus for $a$, we have $ (\varphi _* \mathrm{div}(f^{p^n}))^*(a)=0 \text{ in } F(S), $ that is, \[ p^n (\varphi _* \mathrm{div}_{\overline{C}}(f))^*(a)=0 \text{ in } F(S). \] But by assumption $F(S)$ is a $p$-torsion free abelian group, so that $ (\varphi _* \mathrm{div}_{\overline{C}}(f))^*(a)=0. $ This completes the proof. \end{proof} \subsection{Unipotency and divisibility}\label{unip-section} Recall that, by Chevalley's Theorem, every algebraic group $G$ over a perfect field $k$ can be written as an extension of a semi-abelian variety $A$ by a unipotent group $U$ \begin{equation}\label{UtoGtoA} 0\to U \to G \to A \to 0, \end{equation} where \eqref{UtoGtoA} is exact when $U$, $G$ and $A$ are considered as sheaves for the \'etale (or the Zariski) topology. Note that every commutative algebraic group over $k$ defines a presheaf with transfers, cf.~\cite[Proof of Lemma 3.2]{SS}. For the rest of the section, by a sheaf we will mean a Zariski sheaf on $\mathbf{Sm} /k$. If $F$ is a sheaf with transfers, we denote by $\operatorname{H}_i(F)$ the $i$-th homology sheaf of the Suslin complex $C_*(F)$ of $F$. This is defined as follows (see \cite[Definition 1.4]{MVW}): using the cosimplicial scheme $\{ \Delta ^i \} _{i\ge 0}$ with $\Delta ^i=\Spec {k[x_0,\dots ,x_i]/(x_0+\dots +x_i-1)}$, the $i$-th term $C_i(F)$ of the Suslin complex is given by $C_i(F)=F(-\times \Delta ^i)$. The differentials are given by alternating sums of the face maps. It is known that the sheaves $\operatorname{H}_i(F)$ are homotopy invariant for every $i\geq 0$ \cite[Corollary 2.19]{MVW}. \begin{prop} For every unipotent group $U$, we have $\operatorname{H}_0(U)=0$. \end{prop} \begin{proof} We have to show that for every smooth $k$-scheme $X$, the map of abelian groups \[ i_0^* - i_1^* : U(X\times \mathbb{A}^1) \to U(X) \] is surjective. Note that $U$ is isomorphic to an affine space $\mathbb{A}^n$ as a scheme. Fix an isomorphism $U\simeq \mathbb{A}^n$ mapping $0\in U$ to the origin.
Translating the ``multiplication by $t\in \mathbb{A}^1$'' map by this isomorphism, we have a morphism of schemes $\mu\colon U\times \mathbb{A}^1 \to U$, which coincides with $\mathrm{id}_U$ on $U\times \{ 1 \}$ and with the constant map to $0$ on $U\times \{ 0 \}$. Now given an $f\in U(X)$, we define a section $\tilde{f}\in U(X\times \mathbb{A}^1)$ by the composition \[ X\times \mathbb{A}^1 \xrightarrow{f\times \mathrm{id} } U\times \mathbb{A}^1 \xrightarrow{\mu} U. \] Then we clearly have $i_0^* (\tilde{f})= 0$ and $i_1^*(\tilde{f})=f$, so the section $- \tilde{f} \in U(X\times \mathbb{A}^1)$ does the job. \end{proof} \begin{cor} Let $G$ be an algebraic group, which is an extension of a semi-abelian variety $A$ by a unipotent group $U$ as in $(\ref{UtoGtoA})$. Then we have $\operatorname{H}_0(G)=A$. In particular, an algebraic group $G$ over a perfect field is unipotent if and only if one has $\operatorname{H}_0(G)=0.$ \end{cor} This corollary motivates the following definition. \begin{df}\label{def-uf} \begin{enumerate}[$(1)$] \item A reciprocity Zariski sheaf $F$ is said to be {\it unipotent} if it satisfies $\operatorname{H}_0(F)=0$. \item For a reciprocity sheaf $F$, we define a reciprocity sheaf $\operatorname{U}(F)$ (the {\it unipotent part} of $F$) to be the kernel of the canonical surjection $F\to \operatorname{H} _0(F)$ \[ \operatorname{U}(F)= \operatorname{Ker} (F \to \operatorname{H}_0(F)). \] \end{enumerate} (Note that the definitions themselves make sense for any abelian Zariski sheaf.) \end{df} \begin{rmk} \begin{enumerate} \item An algebraic group over a perfect field $k$ is unipotent as a reciprocity sheaf if and only if it is unipotent in the classical sense.
\item The unipotent part of a sheaf $F$ is unipotent in the sense of Definition \ref{def-uf}; this follows from the long exact sequence of Suslin homology arising from the short exact sequence $0\to \operatorname{U}(F)\to F\to \operatorname{H} _0(F)\to 0$: \[ 0=\operatorname{H}_1(\operatorname{H}_0(F))\to \operatorname{H}_0(\operatorname{U}(F)) \to \operatorname{H}_0(F)\xrightarrow{\simeq} \operatorname{H}_0(\operatorname{H}_0(F)). \] \item The reciprocity sheaf $\Omega ^i _-$ (\cite[Appendix]{KSY}) is unipotent. When $k$ is perfect, so is $\Omega ^i _{-/k}$. In fact, every $\mathcal{O}$-module $F$ on $\mathbf{Sm}/k$ satisfies the condition $\operatorname{H}_0(F)=0$ even before Zariski-sheafification. \end{enumerate} \end{rmk} The following corollaries are direct consequences of Theorem \ref{htpy}. Recall that an abelian sheaf $F$ is said to be {\it divisible} if the multiplication-by-$n$ map $F\xrightarrow{n} F$ is surjective as a map of sheaves for any positive integer $n$. \begin{cor}\label{cor1} Let $F$ be a reciprocity sheaf. \begin{enumerate}[{\rm (1)}] \item Suppose $\operatorname{char} (k)=0$. Then the unipotent part $\operatorname{U}(F)$ is divisible. \item Suppose $\operatorname{char} (k) =p>0$. Then the unipotent part $\operatorname{U}(F)$ is of $p$-primary torsion. \end{enumerate} \end{cor} \begin{proof} We first show (1). Let $G=\operatorname{U}(F)$. Consider the cokernel $G/n$ of the map $G\xrightarrow{n} G$. By Theorem \ref{htpy}, $G/n$ is homotopy invariant. Thus we have a surjection $\operatorname{H}_0(G)\to \operatorname{H}_0(G/n)=G/n$, and hence $G/n=0$ since $\operatorname{H}_0(G)=0$. We now show (2). Let $\operatorname{U}(F)\{p\}$ be the subsheaf of $\operatorname{U}(F)$ of $p$-primary torsion and let $G$ be the quotient of $\operatorname{U}(F)$ by $\operatorname{U}(F)\{p\}$. Then we have a short exact sequence \[ 0\longrightarrow \operatorname{U}(F)\{p\} \longrightarrow \operatorname{U}(F)\longrightarrow G\longrightarrow 0.
\] By Theorem \ref{htpy}\,(2), $G$ is homotopy invariant. The argument given above applies to show that $G=0$, completing the proof. \end{proof} \begin{cor}\label{cor3} Let $F$ be a reciprocity sheaf. \begin{enumerate}[{\rm (1)}] \item If $\operatorname{char} (k) =0$, then $\operatorname{H}_1(F)$ is torsion free and $\operatorname{H}_i(F)$ is uniquely divisible for any $i\geq 2$; \item If $\operatorname{char} (k) =p>0$, then $\operatorname{H}_i(F)$ is of $p$-primary torsion for any $i\geq 1.$ \end{enumerate} \end{cor} \begin{proof} Consider the exact sequence $0\to \operatorname{U}(F)\to F \to \operatorname{H}_0(F) \to 0. $ Taking homology $\operatorname{H}_*$ gives a long exact sequence \[ \cdots \to \operatorname{H}_{i+1}(\operatorname{H}_0(F))\to \operatorname{H}_i(\operatorname{U}(F))\to \operatorname{H}_i(F) \to \operatorname{H}_i(\operatorname{H}_0(F)) \to\cdots. \] Since $\operatorname{H}_0(F)$ is homotopy invariant, $\operatorname{H}_i(\operatorname{H}_0(F))=0$ for $i\geq1$ and thus we have $\operatorname{H}_i(\operatorname{U}(F))=\operatorname{H}_i(F)$ for $i\geq 1$. We may therefore assume that $F=\operatorname{U}(F)$. In this case, assertion (2) is clear, since all sections of $\operatorname{U}(F)$ are of $p$-primary torsion by Corollary \ref{cor1}\,(2). It remains to show the assertion (1) when $F=\operatorname{U}(F)$. Let $n>1$. By Corollary \ref{cor1}\,(1), we have an exact sequence \begin{equation}\label{eqStar} \qquad 0\longrightarrow F[n] \longrightarrow F\stackrel{ n}{\longrightarrow} F \longrightarrow 0. \end{equation} By Theorem \ref{htpy}, $F[n]$ is homotopy invariant, and hence $\operatorname{H}_i(F[n])=0$ for $i\geq1$. Taking homology on \eqref{eqStar} then gives a short exact sequence $0\longrightarrow \operatorname{H}_1(F)\stackrel{n}{\longrightarrow} \operatorname{H}_1(F) \to F[n] \longrightarrow0, $ proving that $\operatorname{H}_1(F)$ is torsion free; for $i\geq2$, the vanishing of $\operatorname{H}_i(F[n])$ and $\operatorname{H}_{i-1}(F[n])$ shows that multiplication by $n$ is an isomorphism on $\operatorname{H}_i(F)$, so $\operatorname{H}_i(F)$ is uniquely divisible.
\end{proof} \begin{rmk}\label{uniquely-div} We call a sheaf $F$ {\it uniquely divisible} if $F\xrightarrow{n} F$ is an isomorphism of sheaves for every $n>0$ (equivalently, the sections of $F$ are uniquely divisible abelian groups). For a given reciprocity sheaf $F$ over a field of characteristic $0$, the following conditions are equivalent (see Corollaries \ref{cor1}--\ref{cor3}): (1) $\operatorname{U}(F)$ is torsion free; (1)$'$ $\operatorname{U}(F)$ is uniquely divisible; (2) $F_{\mathrm{tors}}\simeq \operatorname{H}_0(F)_{\mathrm{tors}}$ by the canonical map; (3) $\operatorname{H}_1(F)$ is divisible; (3)$'$ $\operatorname{H}_1(F)$ is uniquely divisible. Note that the class of such presheaves with transfers is closed under extension. \end{rmk} \begin{rmk} An example of a unipotent reciprocity sheaf over a field of characteristic zero which is {\it not} uniquely divisible is provided by $\mathbb{G}_a/\mathbb{Z}$, the quotient of $\mathbb{G}_a$ by the constant sub-presheaf with transfers $\mathbb{Z}$. Its torsion part is the constant sheaf with transfers $\mathbb{Q}/\mathbb{Z}$. In this case one has $\operatorname{H}_1(\mathbb{G}_a/\mathbb{Z})=\mathbb{Z}$. \end{rmk} For unique divisibility, we have the following: \begin{prop} Suppose $k$ is an algebraically closed field of characteristic zero. Let $U$ be a unipotent reciprocity sheaf which is an \'etale sheaf. Then the quotient presheaf (which is a Zariski sheaf) $U/U(k)$ is uniquely divisible. Here we view $U(k)$ as a constant subsheaf of $U$. Consequently, over $k$, the sheaf $U$ is uniquely divisible if and only if the abelian group $U(k)$ is torsion free. \end{prop} \begin{proof} For every smooth local scheme $X$, we have a commutative diagram with exact rows \[\xymatrix{ 0\to U(X)_{\mathrm{tors}} \ar[r] &U(X) \ar[r] &U(X)/U(X)_{\mathrm{tors}}\to 0\\ 0\to U(k)_{\mathrm{tors}} \ar[u]^{\rho} \ar[r] &U(k) \ar[u]\ar[r] &U(k)/U(k)_{\mathrm{tors}}\ar[u]\to 0 .
} \] By Suslin's rigidity \cite[Theorem 7.20]{MVW}, which is applicable by Theorem \ref{htpy}$(1)$, the map $\rho$ is an isomorphism. Now by Corollary \ref{cor1}(1), the groups $U(X)/U(X)_{\mathrm{tors}}$ and $U(k)/U(k)_{\mathrm{tors}}$ are uniquely divisible. Then by the snake lemma we see that $U(X)/U(k)$ is uniquely divisible. \end{proof} \end{document}
\begin{document} \title{Quantum state transfer through a qubit network with energy shifts and fluctuations} \author{Andrea Casaccino} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA\\ Information Engineering Department, University of Siena, I-53100 Siena, Italy} \author{Seth Lloyd} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA} \author{Stefano Mancini} \affiliation{Department of Physics, University of Camerino, I-62032 Camerino, Italy} \author{Simone Severini} \affiliation{Institute for Quantum Computing and Department of Combinatorics \& Optimization, University of Waterloo, Waterloo N2L 3G1, ON Canada} \begin{abstract} We study quantum state transfer through a qubit network modeled by spins with XY interaction, when relying on a single excitation. We show that it is possible to achieve perfect transfer by shifting (adding) energy to specific vertices. This technique appears to be a potentially powerful tool to change, and in some cases improve, transfer capabilities of quantum networks. Analytical results are presented for all-to-all networks and all-to-all networks with a missing link. Moreover, we evaluate the effect of random fluctuations on the transmission fidelity. \end{abstract} \maketitle \section{Introduction} The rapid growth of the area of quantum information has led to the idea of multi-user quantum networks, with the final goal of realizing a number of nano-scale devices and communication protocols \cite{Kimble}. The study of networks of interacting qubits (spins) constitutes a good testing ground for this purpose. In the last few years, such networks have been specifically considered as good candidates for engineering perfect quantum channels and for allowing information transfer between distant locations \cite{sou, ch, sub} (see also \cite{sou1} for a review).
Such networks appear to be useful for the implementation of data buses in quantum mechanical devices, in particular because they undergo free dynamics after an initial set-up. In this perspective, the possibility of having \emph{perfect state transfer} (for short, \emph{PST}) comes from suitable quantum interference effects in the network dynamics. However, one of the problems arising in such a scenario is given by natural dispersion effects and destructive interference, which cause a loss of information between communicating sites. In the worst cases, information can even remain totally localized, due to Anderson localization effects \cite{And}. While such localization may itself be useful, it is an obstacle when designing protocols for communication between distant sites. In a number of recent papers, PST has been related to the combinatorial properties of networks (see, \emph{e.g.}, \cite{be}, and the references contained therein). In particular, in the $XY$ model (respectively, the $XYZ$ model), when considering a single excitation, it has been shown that PST essentially depends on the eigensystem of the adjacency matrix of the graph (respectively, the Laplacian matrix), because certain invariant eigenspaces of the total Hilbert space evolve independently. Here we discuss the problem of how to improve the fidelity of excitation transfer for a fixed interaction ($XY$ model) and network. In particular, we show that, by a suitable energy shift at some vertices of the network, it is possible to achieve perfect transfer in cases where this does not usually happen. We conjecture that this is possible in many networks, whenever we add a suitable amount of energy. Moreover, we evaluate the effect of random fluctuations on the transmission fidelity. We separately consider noise affecting qubits' frequencies and qubits' couplings and we show signatures of Anderson localization \cite{And} as well as of stochastic resonance \cite{Fabio}.
The structure of the paper is as follows. In Section II, we describe the model considered here. In Section III, we give rigorous results for all-to-all networks and all-to-all networks with a missing link, thereby extending the cases studied in \cite{ca}. For these networks, we show that a certain energy shift allows PST. Indeed, it is well-known that there is no PST for an all-to-all network without energy shift. For the case of an all-to-all network with a missing link, the energy shift changes the periodicity of the evolution. In Section IV, we discuss how to enhance the transfer fidelity for a linear spin chain. It is known that a spin chain with constant couplings allows PST between its end-vertices only when it has length two or three. Numerical evidence shows that PST can be achieved in chains of any length by an appropriate energy shift independent of the number of nodes. The drawback is a rapid increase of the transfer time. Furthermore, the number of geodesics between the input and output vertex seems to play a role in determining the transfer time. Finally, in Section V, we show how noise affects the transfer. In particular, we show that disordered couplings are more deleterious than disordered frequencies when the optimal energy shift is used. In the absence of such a shift, the noise may enhance the transmission fidelity. Conclusions are drawn in Section VI, where we briefly summarize the results and outline potential applications. \section{Set-up} Let $G=(V,E)$ be a simple undirected graph (that is, without loops or parallel edges), with set of vertices $V(G)$ (such that $|V(G)|=n$) and set of edges $E(G)$. The \emph{adjacency matrix} of $G$ is denoted by $A(G)$ and defined by $[A(G)]_{ij}=1$, if $ij\in E(G)$; $[A(G)]_{ij}=0$ if $ij\notin E(G)$. The adjacency matrix is a useful tool to describe a network of $n$ spin-$1/2$ quantum particles.
The particles are usually attached to the vertices of $G$, while the edges of $G$ represent their allowed couplings. If one considers the $XY$ interaction model then $\{i,j\}\in E(G)$ means that the particles $i$ and $j$ interact by the Hamiltonian $[H_{XY}(G)]_{ij}=\left( X_{i}X_{j} +Y_{i}Y_{j}\right) $. Throughout the paper $X_{i}$ and $Y_{i}$ denote the usual Pauli operators of the $i$-th particle. Here we consider unit coupling constants. Thus, the Hamiltonian of the whole network reads \begin{equation} H_{XY}(G)=\frac{1}{2}\sum_{i\neq j=1}^{n}[A(G)]_{ij}\left( X_{i}X_{j} +Y_{i}Y_{j}\right) \label{Hnet} \end{equation} and it acts on the Hilbert space $\left( \mathbb{C}^{2}\right)^{\otimes n} $. Let us now restrict our attention to the single excitation subspace $\mathbb{C}^{n}$, \emph{i.e.}, the subspace of dimension $n$ spanned by the vectors $\{|1\rangle,\ldots,|n\rangle\}$. A vector $|j\rangle$ indicates the presence of the excitation on the $j$-th site and the absence on all the others. This is equivalent to the following tensor product of $Z$-eigenstates: $|\underset{n}{\underbrace{0\ldots010\ldots0}}\rangle$, with $1$ in the $j$-th position. In the basis $\{|1\rangle,\ldots,|n\rangle\}$, the Hamiltonian coming from Eq. (\ref{Hnet}) has entries $[H_{XY}(G)]_{ij} =2[A(G)]_{ij}$. This will be called the $XY$ \emph{adjacency matrix} of the graph $G$. Hereafter, we shall consider the possibility of adding an amount $\Delta_{E}$ of free energy to desired sites. In this case, the $XY$ Hamiltonian reads \begin{equation} \lbrack H_{XY}(G,E_{i})]_{ij}=\left\{ \begin{tabular} [c]{ll} $\Delta_{E}(i),$ & if $i=j;$\\ $2,$ & if $ij\in E(G);$\\ $0,$ & otherwise. \end{tabular} \ \ \ \right. \label{HGij} \end{equation} We simply write $\Delta_{E}$ instead of $\Delta_{E}(i)$ when $i$ is clear from the context.
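The single-excitation matrix of Eq. (\ref{HGij}) is straightforward to assemble numerically. The following sketch (not part of the original analysis; the example graph, shift values and function names are ours for illustration) builds it for a small path graph and checks that the induced single-excitation propagator is unitary:

```python
import numpy as np

def shifted_xy_hamiltonian(A, shifts):
    """Single-excitation XY Hamiltonian of Eq. (HGij): 2*A off the diagonal,
    with an on-site energy shift on selected vertices."""
    H = 2.0 * np.asarray(A, dtype=float)
    for v, de in shifts.items():
        H[v, v] = de
    return H

# Example: a path graph on 4 vertices with a shift of 10 on the end vertices.
n = 4
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
H = shifted_xy_hamiltonian(A, {0: 10.0, n - 1: 10.0})

# Propagator U(t) = exp(-i H t) via the spectral decomposition of H.
w, V = np.linalg.eigh(H)
U = (V * np.exp(-1j * w * 0.7)) @ V.conj().T
print(np.allclose(U @ U.conj().T, np.eye(n)))  # → True
```

Since $H$ is real symmetric, the eigendecomposition gives the propagator directly, without a general matrix exponential.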
Finally, let us recall the definition of the \emph{fidelity} at time $t$ between vertex $i$ and vertex $j$ as $f_{G}(i,j;t):=|\langle i|e^{-\iota H(G)t}|j\rangle|^{2}$, where $i$ represents the input vertex and $j$ the output vertex (in short $I/O$). \section{Fidelity} In this section, we present rigorous results about the effects of an energy shift applied only to the input/output vertices for two specific networks: we consider the case of the \emph{complete graph}, $K_{n}$, and of the \emph{complete graph with a missing link}, $K_{n}^{-}$. In these two cases, given the Hamiltonian $H_{XY}$, we express analytically the fidelity and the transfer time as a function of $n$ and $\Delta_{E}$. \subsection{Complete graph} Every two vertices of the complete graph $K_{n}$ are adjacent. For this graph, we can prove the next result: \begin{theorem} \label{th1}Let $\alpha=\sqrt{4n^{2}-4(n-4)\Delta_{E}+\Delta_{E}^{2}}$ with $n\geq4$ and $k\in\mathbb{N}$. For an energy shift $\Delta_{E}(i,j)$ on the vertices $i,j\in I/O$, we have the following observations: \begin{itemize} \item $\max_{t}f_{K_{n}}(i,i;t)=\max_{t}f_{K_{n}}(j,j;t)=1$, for $\Delta _{E}(i,j)=2n$ and $t=4k\pi/\alpha$; \item $\max_{t}f_{K_{n}}(k,k;t)=1$, for every $k\notin I/O$ and $t=4k\pi/\alpha$. \end{itemize} When $i\neq j$, we have: \begin{itemize} \item $\max_{t}f_{K_{n}}(i,j;t)=1$, for $\Delta_{E}(i,j)=2n$ and $t=\left( 2\pi+4\pi k\right) /\alpha$; \item $\max_{t}f_{K_{n}}(i,k;t)=16/\alpha^{2}$, for $\Delta_{E}(i)=2n$, $k\notin I/O$ and $t=\left( 2\pi+4\pi k\right)/\alpha$; \item $\max_{t}f_{K_{n}}(k,l;t)=[(\alpha(n-2)-2)]^{2}/4\alpha^{2}(n-2)^{2}$, for $k,l\notin I/O$ and $t=2k\pi/\alpha$. \end{itemize} \end{theorem} \begin{proof} {The }$XY${ adjacency matrix of $K_{n}$ has the form \[ \lbrack H_{XY}({K_{n}})]_{ij}=\left\{ \begin{tabular} [c]{ll} $\Delta_{E},$ & if $i=j\in I/O;$\\ $0,$ & if $i=j\not \in I/O;$\\ $2,$ & otherwise. \end{tabular} \ \ \right.
\] } {The characteristic polynomial $P(\lambda)$ can be obtained as a function of $n$ and $\Delta_{E}$: \begin{align*} P(\lambda) & =(\lambda+2)^{n-3}(\Delta_{E}-2-\lambda)\times\left( 4(n-1)\right. \\ & \left. -2(n-3)\Delta_{E}+2(n-2)\lambda+\Delta_{E}\lambda-\lambda ^{2}\right) . \end{align*} The roots of $P(\lambda)$ are as follows: }$\lambda_{1}=\Delta_{E}-2$, $\lambda_{2}^{n-3}=-2$, $\lambda_{3,4}^{\pm}=(2(n-2)+\Delta_{E}\pm\alpha)/2$. A corresponding (unnormalized) orthogonal basis of eigenvectors can be written as \begin{align*} |\lambda_{1}\rangle & =(-1,0,\ldots,0,1)\\ \lbrack|\lambda_{2}^{1\leq l\leq n-3}\rangle]_{u} & =\left\{ \begin{tabular} [c]{rl} $-\frac{1}{l},$ & if $u\in\{2,n-r:1\leqslant r\leqslant l-1\};$\\ $1,$ & if $u=n-l;$\\ $0,$ & otherwise, \end{tabular} \ \ \right. \\ |\lambda_{3,4}^{\pm}\rangle & =(1,\omega^{\pm},\ldots,\omega^{\pm},1), \end{align*} where $\omega^{\pm}=\frac{1}{4(n-2)}(2(n-4)-\Delta_{E}\pm\alpha)$. Thus, from the spectral decomposition of the unitary matrix in the canonical basis, $U_{t}(K_{n})\equiv e^{-\iota H(K_{n})t}$, we have the following diagonal entries: \begin{itemize} \item if $i\in I/O$ then \begin{align*} \lbrack U_{t}(K_{n})]_{ii} & =\frac{1}{4\alpha}\left( \alpha-2n+\Delta _{E}+8\right) e^{-\iota\left[ \lambda_{3}\right] t}\\ & +\frac{1}{4\alpha}\left( \alpha+2n-\Delta_{E}-8\right) e^{-\iota\left[ \lambda_{4}\right] t}\\ & +\frac{1}{2}e^{-\iota\left[ \lambda_{1}\right] t}; \end{align*} \item if $i\notin I/O$ then \begin{align*} \lbrack U_{t}(K_{n})]_{ii} & =\frac{1}{n-2}\left( n-3\right) e^{-\iota\left[ \lambda_{2}\right] t}\\ & +\frac{1}{2n\alpha-4\alpha}\left( \alpha+2n-\Delta_{E}-8\right) e^{-\iota\left[ \lambda_{3}\right] t}\\ & +\frac{1}{2n\alpha-4\alpha}\left( \alpha-2n+\Delta_{E}+8\right) e^{-\iota\left[ \lambda_{4}\right] t}.
\end{align*} \end{itemize} The off-diagonal entries of $U_{t}(K_{n})$ are as follows: \begin{itemize} \item if $i\neq j$ and $i,j\in I/O$ then \begin{align*} \lbrack U_{t}(K_{n})]_{ij} & =\frac{\Delta_{E}-2(n-4)+\alpha}{4\alpha }e^{-\iota\left[ \lambda_{3}\right] t}\\ & +\frac{2(n-4)-\Delta_{E}+\alpha}{4\alpha}e^{-\iota\left[ \lambda _{4}\right] t}\\ & -\frac{1}{2}e^{-\iota\left[ \lambda_{1}\right] t}; \end{align*} \item if $i\neq j$, $i\in I/O$ and $j\notin I/O$ or \emph{vice versa}, then $[U_{t}(K_{n})]_{ij}=2(e^{-\iota\left[ \lambda_{3}\right] t}-e^{-\iota \left[ \lambda_{4}\right] t})/\alpha$; \item if $i\neq j$ and $i,j\notin I/O$ then \begin{align*} \lbrack U_{t}(K_{n})]_{ij} & =\frac{\Delta_{E}-2(n-4)+\alpha}{2(n-2)\alpha }e^{-\iota\left[ \lambda_{4}\right] t}\\ & +\frac{2(n-4)-\Delta_{E}+\alpha}{2(n-2)\alpha}e^{-\iota\left[ \lambda _{3}\right] t}\\ & -\frac{1}{n-2}e^{-\iota\left[ \lambda_{2}\right] t}. \end{align*} \end{itemize} This gives us the tools to evaluate the fidelity for generic situations. For instance, if we take $i,j\in I/O$, the fidelity $f(i,j;t)=|\langle j|U_{t}(K_{n})|i\rangle|^{2}$ reads \begin{align} f(i,j;t) & =\frac{\Delta_{E}^{2}+3\alpha^{2}-4\Delta_{E}(n-4)+4(n-4)^{2} }{8\alpha^{2}}\label{fid}\\ & +\frac{\left( 8+\Delta_{E}+\alpha-2n\right) \left( \alpha+2n-8-\Delta _{E}\right) }{8\alpha^{2}}\cos(\alpha t) \nonumber\\ & -\frac{\Delta_{E}-2(n-4)+\alpha}{4\alpha}\cos\left[ t\frac{2n-\Delta _{E}+\alpha}{2}\right] \nonumber\\ & +\frac{\Delta_{E}-2(n-4)-\alpha}{4\alpha}\cos\left[ t\frac{2n-\Delta _{E}-\alpha}{2}\right] .\nonumber \end{align} Imposing $\Delta_{E}=2n$, PST is achieved for $t=\frac{1}{\alpha}\left( 2\pi+4\pi k\right) $ with $k\in\mathbb{N}$. \end{proof} The main results are visualized in Fig. (\ref{fig1}) where the fidelity $f(i,j;t)$ between any two vertices $i$ and $j$ of a complete graph is plotted as a function of time $t$.
It is well known that, in this case, there is no PST without energy shift, as shown by the dashed line. On the contrary, the optimal energy shift ($\Delta_{E}=2n$) allows PST (fidelity equal to one) at times $t=\left( 2\pi+4\pi k\right)/\alpha $ with $k\in\mathbb{N}$, as shown by the solid line. \begin{figure} \caption{Fidelity $f(i,j;t)$ between any two vertices $i$ and $j$ of a complete graph with $n=5$ as a function of time $t$ in two settings: in the absence of energy shift (dashed line) and in the presence of optimal energy shift (solid line).} \label{fig1} \end{figure} \subsection{Complete graph with a missing link} The graph $K_{n}^{-}$ is obtained from $K_{n}$ by deleting an edge, specifically the one between the input and the output vertex. The next result describes the behavior of the system in this case: \begin{theorem} \label{th2}Let $\beta=\sqrt{4(n^{2}+2n-7)-4(n-3)\Delta_{E}+\Delta_{E}^{2}}$ with $n\geq4$ and $k\in\mathbb{N}$. For an energy shift $\Delta_{E}(i,j)$ on the vertices $i,j\in I/O$, we have the following observations: \begin{itemize} \item $\max_{t}f_{K_{n}^{-}}(i,i;t)=\max_{t}f_{K_{n}^{-}}(j,j;t)=1$, for $\Delta _{E}(i,j)=2n-6$ and $t=4k\pi/\beta$; \item $\max_{t}f_{K_{n}^{-}}(k,k;t)=1$, for every $k\notin I/O$ and $t=4k\pi/\beta$. \end{itemize} When $i\neq j$, we have: \begin{itemize} \item $\max_{t}f_{K_{n}^{-}}(i,j;t)=1$, for $\Delta_{E}(i,j)=2n-6$ and $t=\left( 2\pi+4\pi k\right)/\beta$; \item $\max_{t}f_{K_{n}^{-}}(i,k;t)=16/\beta^{2}$, for $\Delta_{E}(i)=2n-6$, $k\notin I/O$ and $t=\left( 2\pi+4\pi k\right)/\beta$; \item $\max_{t}f_{K_{n}^{-}}(k,l;t)=[(\beta(n-2)-2)]^{2}/4\beta^{2}(n-2)^{2}$, for $k,l\notin I/O$ and $t=2k\pi/\beta $. \end{itemize} \end{theorem} \begin{proof} The proof is very similar to the one of Theorem \ref{th1}. {The characteristic polynomial $P(\lambda)$ of the }$XY$ adjacency matrix of $K_{n}^{-}$ {can be obtained as a function of $n$ and $\Delta_{E}$: \begin{align*} P(\lambda) & =(\lambda+2)^{n-3}(\Delta_{E}-\lambda)\times\left( 8(n-2)\right. \\ & \left. -2(n-3)\Delta_{E}+2(n-3)\lambda+\Delta_{E}\lambda-\lambda ^{2}\right) . \end{align*} The roots of $P(\lambda)$ are as follows: }$\lambda_{1}=\Delta_{E}$, $\lambda_{2}^{n-3}=-2$, $\lambda_{3,4}^{\pm}=(2(n-3)+\Delta_{E}\pm\beta)/2$. A corresponding (unnormalized) orthogonal basis of eigenvectors can be written as \begin{align*} |\lambda_{1}\rangle & =(-1,0,\ldots,0,1)\\ \lbrack|\lambda_{2}^{1\leq l\leq n-3}\rangle]_{u} & =\left\{ \begin{tabular} [c]{rl} $-\frac{1}{l},$ & if $u\in\{2,n-r:1\leqslant r\leqslant l-1\};$\\ $1,$ & if $u=n-l;$\\ $0,$ & otherwise, \end{tabular} \ \ \ \right. \\ |\lambda_{3,4}^{\pm}\rangle & =(1,\omega^{\pm},\ldots,\omega^{\pm},1), \end{align*} where $\omega^{\pm}=\frac{1}{4(n-2)}(2(n-3)-\Delta_{E}\pm\beta)$. The diagonal entries of $U_{t}(K_{n}^{-})\equiv e^{-\iota H(K_{n}^{-})t}$ are given in terms of its spectral decomposition: \begin{itemize} \item if $i\in I/O$ then \begin{align*} \lbrack U_{t}(K_{n}^{-})]_{ii} & =\frac{1}{4\beta}\left( \beta-2n+\Delta _{E}+6\right) e^{-\iota\left[ \lambda_{3}\right] t}\\ & +\frac{1}{4\beta}\left( \beta+2n-\Delta_{E}-6\right) e^{-\iota\left[ \lambda_{4}\right] t}\\ & +\frac{1}{2}e^{-\iota\left[ \lambda_{1}\right] t}; \end{align*} \item if $i\notin I/O$ then \begin{align*} \lbrack U_{t}(K_{n}^{-})]_{ii} & =\frac{1}{n-2}\left( n-3\right) e^{-\iota\left[ \lambda_{2}\right] t}\\ & +\frac{1}{2n\beta-4\beta}\left( \beta+2n-\Delta_{E}-6\right) e^{-\iota\left[ \lambda_{3}\right] t}\\ & +\frac{1}{2n\beta-4\beta}\left( \beta-2n+\Delta_{E}+6\right) e^{-\iota\left[ \lambda_{4}\right] t}.
\end{align*} \end{itemize} The off-diagonal entries of $U_{t}(K_{n}^{-})$ are as follows: \begin{itemize} \item if $i\neq j$ and $i,j\in I/O$ then \begin{align*} \lbrack U_{t}(K_{n}^{-})]_{ij} & =\frac{\Delta_{E}-2(n-3)+\beta}{4\beta }e^{-\iota\left[ \lambda_{3}\right] t}\\ & +\frac{2(n-3)-\Delta_{E}+\beta}{4\beta}e^{-\iota\left[ \lambda _{4}\right] t}\\ & -\frac{1}{2}e^{-\iota\left[ \lambda_{1}\right] t}; \end{align*} \item if $i\neq j$, $i\in I/O$ and $j\notin I/O$ or \emph{vice versa}, then $[U_{t}(K_{n}^{-})]_{ij}=2(e^{-\iota\left[ \lambda_{3}\right] t}-e^{-\iota \left[ \lambda_{4}\right] t})/\beta$; \item if $i\neq j$ and $i,j\notin I/O$ then \begin{align*} \lbrack U_{t}(K_{n}^{-})]_{ij} & =\frac{\Delta_{E}-2(n-3)+\beta}{2(n-2)\beta }e^{-\iota\left[ \lambda_{4}\right] t}\\ & +\frac{2(n-3)-\Delta_{E}+\beta}{2(n-2)\beta}e^{-\iota\left[ \lambda _{3}\right] t}\\ & -\frac{1}{n-2}e^{-\iota\left[ \lambda_{2}\right] t}. \end{align*} \end{itemize} If we take $i,j\in I/O$, the fidelity $f(i,j;t)=|\langle j|U_{t}(K_{n}^{-})|i\rangle|^{2}$ reads \begin{align}\label{fidml} f(i,j;t) & =\frac{\Delta_{E}^{2}+3\beta^{2}-4\Delta_{E}(n-3)+4(n-3)^{2} }{8\beta^{2}}\\ & +\frac{(6+\Delta_{E}+\beta-2n)(\beta+2n-6-\Delta_{E})}{8\beta^{2}} \cos(\beta t) \nonumber\\ & -\frac{6-2n+\Delta_{E}+\beta}{4\beta}\cos\left[ t\frac{2n-6-\Delta _{E}+\beta}{2}\right] \nonumber\\ & +\frac{6-2n+\Delta_{E}-\beta}{4\beta}\cos\left[ t\frac{2n-6-\Delta _{E}-\beta}{2}\right] . \nonumber \end{align} If we assume that $\Delta_{E}=2n-6$ then PST is achieved for $t=\frac{1} {\beta}\left( 2\pi+4\pi k\right) $ with $k\in\mathbb{N}$. \end{proof} Notice that for $K_{n}^{-}$ we have $\Delta_{E}=2n-6$, while for $K_{n}$ we have $\Delta_{E}=2n$. This fact alone does not provide enough information to conjecture that the energy shift required for PST in a graph with $m$ edges is proportional to $m$. Indeed, the energy shift appears to be a nonlinear function of the eigensystem of the matrix $H_{XY}(G)$.
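The perfect-transfer claims of Theorems \ref{th1} and \ref{th2} can also be checked numerically. The following sketch (ours, for illustration; $n=5$, with the I/O vertices taken as the first and the last) confirms that the fidelity reaches one at the predicted times with the optimal shifts $\Delta_{E}=2n$ and $\Delta_{E}=2n-6$:

```python
import numpy as np

def xy_matrix(n, delta_e, missing_link=False):
    """XY adjacency matrix of K_n (or K_n minus the I/O edge),
    with an energy shift delta_e on the I/O vertices 0 and n-1."""
    H = 2.0 * (np.ones((n, n)) - np.eye(n))
    if missing_link:
        H[0, n - 1] = H[n - 1, 0] = 0.0
    H[0, 0] = H[n - 1, n - 1] = delta_e
    return H

def fidelity(H, t, i, j):
    """f(i,j;t) = |<j| exp(-i H t) |i>|^2 via the spectral decomposition."""
    w, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * w * t)) @ V.conj().T
    return abs(U[j, i]) ** 2

n = 5
# K_n with the optimal shift Delta_E = 2n: PST at t = 2*pi/alpha.
de = 2 * n
alpha = np.sqrt(4 * n**2 - 4 * (n - 4) * de + de**2)
f1 = fidelity(xy_matrix(n, de), 2 * np.pi / alpha, 0, n - 1)
# K_n minus a link with Delta_E = 2n - 6: PST at t = 2*pi/beta.
de = 2 * n - 6
beta = np.sqrt(4 * (n**2 + 2 * n - 7) - 4 * (n - 3) * de + de**2)
f2 = fidelity(xy_matrix(n, de, missing_link=True), 2 * np.pi / beta, 0, n - 1)
print(round(f1, 10), round(f2, 10))  # → 1.0 1.0
```

Varying $n$ and $\Delta_{E}$ in this sketch reproduces the dependence of the transfer time on the shift discussed above.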
Also, notice that the matrices $H_{XY}(K_{n})$ and $H_{XY}(K_{n}^{-})$ satisfy the relation $(H_{XY} (K_{n})\cdot H_{XY}(K_{n}^{-}))^{T}=H_{XY}(K_{n}^{-})\cdot H_{XY}(K_{n})$. \begin{figure} \caption{Fidelity $f(i,j;t)$ between the two nonadjacent vertices $i$ and $j$ of a complete graph with a missing link with $n=5$ as a function of time $t$ in two settings: in the absence of energy shift (dashed line) and in the presence of optimal energy shift (solid line).} \label{fig2} \end{figure} The main results are visualized in Fig. (\ref{fig2}). Here, the fidelity $f(i,j;t)$ between the two nonadjacent vertices $i$ and $j$ of $K_{5}^{-}$ is plotted as a function of time $t$. In this case, it is known \cite{ca} that for $n$ multiple of four there is PST in the isotropic Heisenberg model without energy shift. In the $XY$ model considered here, numerical solutions for suitable $t$ in Eq. (\ref{fidml}) suggest that PST can also be reached in other cases, as shown by the dashed line in Fig. (\ref{fig2}). In this case the fidelity is one for $t\approx 5$ and $n=5$. This is a remarkable generalization that we conjecture to be true for every $n$. An intuition for this fact can be obtained by substituting $\Delta_{E}=0$ into Eq. (\ref{fidml}), which turns out to be independent of the number of nodes $n$ when the fidelity reaches its maximum. However, the use of the optimal energy shift ($\Delta_{E}=2n-6$) allows PST at shorter times $t=\frac{1}{\beta}\left( 2\pi+4\pi k\right) $ with $k\in\mathbb{N}$, as shown by the solid line, thus facilitating the information transfer. \section{Spin chains} A special case of a network is represented by a linear spin chain, where the vertices at the extremities are considered as input and output. By adding an appropriate free energy shift $\Delta_{E}$ (independent of the number of nodes $n$) to these vertices, numerical results for relatively large chains indicate that we can always achieve PST.
This is remarkable because spin chains with constant couplings and more than three vertices do not allow PST. As a trade-off, the transfer time generally grows rapidly with the number of vertices and with the amount of energy $\Delta_{E}$. However, in some special cases such as the three-vertex chain, where PST is achievable without adding energy, the energy shift only causes a larger transfer time. Accordingly, Table \ref{tbl} shows some numerical results for chains of small length and energy shift on the end-vertices. Apart from $n=2$, for the sake of clarity, we round the obtained values to the closest integer. \begin{table} \centering \begin{tabular} [c]{c|cccc} $\Delta_{E}\backslash n$ & $2$ & $3$ & $4$ & $5$\\\hline $10$ & $0.7$ & $5$ & $19$ & $99$\\ $20$ & $0.7$ & $8$ & $81$ & $8010$\\ $30$ & $0.7$ & $12$ & $178$ & $2665$\\ $40$ & $0.7$ & $16$ & $313$ & $6260$\\ $50$ & $0.7$ & $20$ & $494$ & $12294$ \end{tabular} \caption{Numerical results of the transfer time for chains of small length and varying energy shift on the end-vertices.}\label{tbl} \end{table} It is plausible that the transfer time in a generic network decreases as the number of paths between the input and the output vertex increases. The minimum transfer time is clearly achieved when the two vertices are adjacent. Thus, for a fixed amount of energy $\Delta_{E}$, numerical results show that the transfer time for the maximum fidelity, $t_{ij}(G)$, between vertices $i$ and $j$ of a network $G$ on $n$ vertices, is \begin{equation} t_{ij}(G)\approx\mathcal{O}\left( \frac{t_{\Delta_{E},k}}{p_{min} (i,j)}\right) . \label{vpk} \end{equation} Here $t_{\Delta_{E},k}$ is the transfer time of the spin chain with $n$ vertices and $p_{min}(i,j)$ is the number of different geodesics between $i$ and $j$.
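The quantity $p_{min}(i,j)$ in Eq. (\ref{vpk}) can be computed by breadth-first search with path counting. A small sketch (ours, for illustration; it also builds the family of $l$ parallel paths between two antipodal vertices considered next):

```python
import numpy as np
from collections import deque

def parallel_paths_graph(n, l):
    """Adjacency matrix of l vertex-disjoint paths on n vertices each,
    sharing only the two antipodal end vertices (labeled 0 and 1)."""
    N = 2 + (n - 2) * l          # two antipodal vertices + l*(n-2) internal ones
    A = np.zeros((N, N), dtype=int)
    v = 2
    for _ in range(l):
        chain = [0] + list(range(v, v + n - 2)) + [1]
        for a, b in zip(chain, chain[1:]):
            A[a, b] = A[b, a] = 1
        v += n - 2
    return A

def count_geodesics(A, i, j):
    """Number of shortest paths between i and j (BFS with path counting)."""
    N = len(A)
    dist = [-1] * N
    cnt = [0] * N
    dist[i], cnt[i] = 0, 1
    q = deque([i])
    while q:
        u = q.popleft()
        for w in np.nonzero(A[u])[0]:
            if dist[w] == -1:          # first time w is reached
                dist[w] = dist[u] + 1
                q.append(w)
            if dist[w] == dist[u] + 1:  # u lies on a geodesic to w
                cnt[w] += cnt[u]
    return cnt[j]

print(count_geodesics(parallel_paths_graph(5, 3), 0, 1))  # → 3
```

For this family all $l$ paths are geodesics of equal length, so $p_{min}(0,1)=l$, matching the heuristic of Eq. (\ref{vpk}).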
Table \ref{Table1a} shows the transfer time required to obtain a fidelity close to one, when we consider antipodal vertices in graphs of a family constructed as follows: only two vertices, which are then said to be \emph{antipodal}, have degree $l$; all other vertices have degree $2$ and belong to paths connecting the antipodal vertices. Such paths are disjoint and have only the antipodal vertices in common. The number of vertices in a graph with $l$ paths of length $n$ is $n+\left( n-2\right) l$, for $n\geq3$. Table \ref{Table1a} gives evidence that we can gradually cut the transfer time by increasing the number of paths. Intuitively, an equivalent result should also be obtained by modifying the couplings in the original chain. \begin{table} \centering \begin{tabular} [c]{c|ccc} $l\backslash n$ & $3$ & $4$ & $5$\\\hline $1$ & $5$ & $19$ & $99$\\ $2$ & $3$ & $11$ & $62$\\ $3$ & $2$ & $9$ & $42$\\ $4$ & $1$ & $6$ & $36$ \end{tabular} \caption{Numerical results for the decrease of the transfer time with a fixed energy shift on the end-vertices and an increasing number of paths.}\label{Table1a} \end{table} \section{Fluctuations} In this section, we analyze the problem of transferring an energy excitation in the presence of noise. We keep working with $K_{n}$ and $K_{n}^{-}$. In practice, we consider a Gaussian stochastic process $\xi_{ij}$ with zero mean and variance $\sigma^{2}$, affecting the energy of the particles (qubits' frequencies) or the interaction energies (qubits' couplings). Under this assumption, the Hamiltonian entries become \[ \lbrack H_{XY}(G,\xi)]_{ij}=\left\{ \begin{tabular} [c]{ll} $\Delta_{E}+\xi_{ii},$ & if $i=j\in I/O;$\\ $2+\xi_{ij},$ & if $ij\in E(G);$\\ $0+\xi_{ij},$ & otherwise. \end{tabular} \right. \] We then distinguish two cases: noise affecting the vertices and noise affecting the edges. Formally, \begin{itemize} \item[1.] $\xi_{ii}\neq 0$ for every $i \in V(G)$ and $\xi_{ij}=0$ when $i\neq j$; \item[2.]
$\xi_{ij}\neq 0$ for every $ij\in E(G)$ and $\xi_{ii}= 0$. \end{itemize} We are interested in evaluating the average fidelity as a function of the variance of the independent Gaussian random variables. The chosen energy shift $\Delta_{E}$ is the optimal one, according to the results of Section III. \begin{figure} \caption{On the left, average fidelity between any two vertices of a complete graph as a function of the variance $\sigma^{2}$.}\label{fig3} \end{figure} The results are reported in Fig.~\ref{fig3}. By comparing the top and bottom graphics, we can see that disordered couplings are more deleterious than disordered frequencies with an optimal energy shift. This fact has already been pointed out in a different context by Gammaitoni \emph{et al.} in \cite{Fazio}. The decay of fidelity over $\sigma^{2}$ comes from the fact that the noise causes localization phenomena for the excitation transfer \cite{And}. This is more evident for $K_{n}^{-}$, where the ``degree of disorder'' is higher (compare top-left and top-right plots). Furthermore, in the absence of energy shift the noise may enhance the transmission fidelity (top-left and bottom-right plots). This is reminiscent of stochastic resonance effects \cite{Fabio}. \section{Conclusions} We have shown how to enhance the fidelity of excitation transfer in a quantum spin network with a fixed interaction ($XY$ model) and a fixed network. It turns out that it is possible to achieve perfect transfer with the use of suitable energy shifts in all-to-all networks and in all-to-all networks with a missing link. We conjecture that this is possible in \emph{any} network. This technique is promising for future applications, as recent works with super-conducting qubits suggest (\cite{st} and references therein). Finally, we have shown how different kinds of noise affect the transfer fidelity. We believe that our results could open up new perspectives for communication or information processing in quantum networks.
\noindent\emph{Acknowledgments.} The authors would like to thank Stefano Pirandola, Masoud Mohseni and Yasser Omar for useful discussions. The work of S. M. is supported by the European Commission, under the FET-Open grant agreement HIP, number FP7-ICT-221889. Research at IQC is supported in part by DTOARO, ORDCF, CFI, CIFAR, and MITACS. \end{document}
\begin{document} \begin{frontmatter} \title{The reducibility of quasi-periodic linear Hamiltonian systems and its application to Hill's equation} \author[au1,au2]{Nina Xue} \address[au1]{School of Mathematical Sciences, Beijing Normal University, Beijing 100875, P.R. China.} \author[au1]{Xiong Li\footnote{ Partially supported by the NSFC (11571041) and the Fundamental Research Funds for the Central Universities. Corresponding author.}} \address[au2]{School of Mathematics and Information Sciences, Weifang University, Weifang, Shandong, 261061, P.R. China.} \ead[au2]{[email protected]} \begin{abstract} In this paper, we consider the reducibility of the quasi-periodic linear Hamiltonian system $$\dot{x}=(A+\varepsilon Q(t))x, $$ where $A$ is a constant matrix with possible multiple eigenvalues, $Q(t)$ is analytic quasi-periodic with respect to $t$, and $\varepsilon$ is a sufficiently small parameter. Under some non-resonant conditions, it is proved that, for most sufficiently small $\varepsilon$, the Hamiltonian system can be reduced to a constant coefficient Hamiltonian system by means of a quasi-periodic symplectic change of variables with the same basic frequencies as $Q(t)$. Application to quasi-periodic Hill's equation is also given. \end{abstract} \begin{keyword} Quasi-periodic Hamiltonian linear systems \sep Reducibility \sep KAM iteration \sep Hill's equation \end{keyword} \end{frontmatter} \section{Introduction} In this paper, we are concerned with the reducibility of the quasi-periodic linear Hamiltonian system \begin{equation}\label{intro1} \dot{x}=(A+\varepsilon Q(t))x, \end{equation} where $A$ is a constant matrix with possible multiple eigenvalues, $Q(t)$ is analytic quasi-periodic with respect to $t$, and $\varepsilon$ is a sufficiently small parameter. Firstly, let us recall the definition of the reducibility for quasi-periodic linear systems. 
Let $A(t)$ be an $n\times n$ quasi-periodic matrix. The differential equation \begin{equation}\label{GE} \dot{x}=A(t)x, \ x\in \mathbb{R}^{n} \end{equation} is called reducible, if there exists a non-singular quasi-periodic change of variables $$ x=\phi(t)y, $$ where $\phi(t)$ and $\phi^{-1}(t)$ are quasi-periodic and bounded, which changes (\ref{GE}) into \begin{equation}\label{GEC} \dot{y}=By, \ y\in \mathbb{R}^{n} \end{equation} where $B$ is a constant matrix. The well-known Floquet theorem states that every periodic differential equation (\ref{GE}) can be reduced to a constant coefficient differential equation (\ref{GEC}) by means of a periodic change of variables with the same period as $A(t)$. However, this is not true for quasi-periodic linear systems; one can see \cite{Palmer} for more details. In 1981, Johnson and Sell \cite{Johnson} proved that the quasi-periodic linear system (\ref{GE}) is reducible if the quasi-periodic coefficient matrix $A(t)$ satisfies the ``full spectrum'' condition. Therefore, many authors (\cite{Li1}, \cite{Jorba1}, \cite{Jorba2}, \cite{Xu1}) paid attention to the reducibility of the quasi-periodic linear system (\ref{intro1}), which is close to a constant coefficient linear system. This problem was first studied by Jorba and Sim\'{o} in \cite{Jorba1}. Assuming that $A$ is a constant matrix with distinct eigenvalues, they proved that if the eigenvalues of $A$ and the frequencies of $Q$ satisfy some non-resonant conditions, then there exist some sufficiently small $\varepsilon_{0}>0$ and a non-empty Cantor set $E\subset(0,\varepsilon_{0})$, such that for any $\varepsilon \in E$, system \eqref{intro1} is reducible. Moreover, the relative measure of the set $(0,\varepsilon_{0})\setminus E$ in $(0,\varepsilon_{0})$ is exponentially small in $\varepsilon_{0}$. Later, Xu \cite{Xu1} obtained a similar result for the multiple eigenvalue case.
In 1996, Jorba and Sim\'{o} \cite{Jorba2} extended the conclusion of the linear system to the nonlinear system \begin{equation}\label{intro2} \dot{x}=(A+\varepsilon Q(t,\varepsilon))x+\varepsilon g(t)+h(x,t), \ \ x\in\mathbb{R}^{n}. \end{equation} Assuming that $A$ has $n$ distinct nonzero eigenvalues, they proved that under some non-resonant conditions and non-degeneracy conditions, there exists a non-empty Cantor set $E\subset(0,\varepsilon_{0})$, such that for all $\varepsilon \in E$, system \eqref{intro2} is reducible. Later, Wang and Xu \cite{Wang} further investigated the nonlinear quasi-periodic system \begin{equation}\label{intro3} \dot{x}=Ax+f(t,x,\varepsilon), \ \ x\in\mathbb{R}^{2}, \end{equation} where $A$ is a real $2\times 2$ constant matrix, and $f(t,0,\varepsilon)=O(\varepsilon), \ \partial_x f(t,0,\varepsilon)=O(\varepsilon)$ as $\varepsilon\rightarrow 0.$ They proved that, without any non-degeneracy condition, one of the following two results holds: (1) system \eqref{intro3} is reducible to $\dot{y}=By+O(y)$ for all $\varepsilon \in(0,\varepsilon_{0})$; (2) there exists a non-empty Cantor set $E\subset(0,\varepsilon_{0})$, such that system \eqref{intro3} is reducible to $\dot{y}=By+O(y^{2})$ for all $\varepsilon \in E$. In \cite{You}, Her and You considered a one-parameter family of quasi-periodic linear systems \begin{equation}\label{You} \dot{x}=(A(\lambda)+g(\omega_1t, \cdots, \omega_lt, \lambda))x, \end{equation} where $A\in C^\omega(\Lambda, gl(m,C))$ (here $C^\omega(\Lambda, gl(m,C))$ denotes the set of $m\times m$ matrices $A(\lambda)$ depending analytically on a parameter $\lambda$ in a closed interval $\Lambda\subset \mathbb{R}$), and $g$ is analytic and small. They proved that under some non-resonance conditions and non-degeneracy conditions, there exists an open and dense set $\mathcal{A}$ in $C^\omega(\Lambda, gl(m,C))$, such that for each $A\in \mathcal{A}$, system \eqref{You} is reducible for almost all $\lambda \in \Lambda$.
Instead of a total reduction to a constant coefficient linear system, Jorba, Ram\'{i}rez-Ros and Villanueva \cite{Jorba3} investigated the effective reducibility of the following quasi-periodic system \begin{equation}\label{intro4} \dot{x}=(A+\varepsilon Q(t,\varepsilon))x, \ \ |\varepsilon|\leq\varepsilon_{0}, \end{equation} where $A$ is a constant matrix with distinct eigenvalues. They proved that under non-resonant conditions, by a quasi-periodic transformation, system \eqref{intro4} is reducible to a quasi-periodic system $$\dot{y}=(A^{\ast}(\varepsilon)+\varepsilon R^{\ast}(t,\varepsilon))y, \ \ |\varepsilon|\leq \varepsilon_{\ast}\leq \varepsilon_{0},$$ where $R^{\ast}$ is exponentially small in $\varepsilon$. Li and Xu \cite{Li2} obtained a similar result for Hamiltonian systems. In this paper we will study the reducibility of the quasi-periodic linear {\it Hamiltonian} system (\ref{intro1}), where the matrix $A$ may have multiple eigenvalues. To this end, the following assumptions are made. {\it Assumption A: Non-resonant condition.} Let all eigenvalues of the matrix $A$ be $\lambda_1, \cdots, \lambda_n$, and let $Q(t)$ be an analytic quasi-periodic function on $D_{\rho}=\{\theta \in \mathbb{C}^{r}: |\mathrm{Im}\theta_{j}|\leq\rho, j=1,2,\cdots,r\}$ with the frequencies $\omega=(\omega_1, \cdots, \omega_r)$. Suppose that $\lambda=(\lambda_{1}, \cdots, \lambda_{n})$ and $\omega=(\omega_{1}, \cdots, \omega_{r})$ satisfy the non-resonant conditions $$\big|\langle k,\omega\rangle\sqrt{-1}-\lambda_{i}+\lambda_{j}\big|\geq \frac{\alpha}{|k|^{\tau}}$$ for all $k\in \mathbb{Z}^{r}\setminus\{0\}$ and $1\le i,j\le n$, where $\alpha >0$ is a small constant and $\tau>r-1$.
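Assumption A is a Diophantine-type small-divisor condition, and for concrete frequency vectors it can be probed numerically. The sketch below is ours (not from the paper): for $\omega=(1,(1+\sqrt{5})/2)$, $r=2$, $\tau=1.01>r-1$, and taking $\lambda_i=\lambda_j$ so that only $|\langle k,\omega\rangle|$ matters, it estimates the largest admissible $\alpha$ over all $0<|k|\le K$; for such a badly approximable frequency vector the estimate stays bounded away from zero as $K$ grows:

```python
import itertools, math

# Probe the non-resonance bound |<k, omega>| >= alpha/|k|^tau for
# omega = (1, golden mean) and tau = 1.01; the minimum below is an
# estimate of the largest admissible alpha over 0 < |k| <= K.
omega = (1.0, (1.0 + math.sqrt(5.0)) / 2.0)
tau = 1.01
K = 200

alpha_est = min(
    abs(k1 * omega[0] + k2 * omega[1]) * (abs(k1) + abs(k2)) ** tau
    for k1, k2 in itertools.product(range(-K, K + 1), repeat=2)
    if (k1, k2) != (0, 0)
)
print(alpha_est)   # stays bounded away from zero
```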
{\it Assumption B: Non-degeneracy condition.} Assume that $A+\varepsilon \overline{Q}$ has $n$ different eigenvalues $\mu_{1},\cdots,\mu_{n}$ with $|\mu_i|\geq 2\delta \varepsilon, \ |\mu_{i}-\mu_{j}|\geq 2\delta \varepsilon, \ i\not=j, \ 1\leq i,j\leq n,$ where $\delta$ is a positive constant independent of $\varepsilon$. Here we denote the average of $Q(t)$ by $\overline{Q}$, that is, $$\overline{Q}=\lim_{T\rightarrow \infty}\frac{1}{2T}\int_{-T}^{T}Q(t)dt.$$ We are in a position to state the main result. \begin{theorem}\label{Main result} Suppose that the Hamiltonian system \eqref{intro1} satisfies the assumptions A and B. Then there exist some sufficiently small $\varepsilon_0>0$ and a non-empty Cantor subset $E_{\varepsilon_0}\subset (0,\varepsilon_0)$ with positive Lebesgue measure, such that for $\varepsilon \in E_{\varepsilon_0}$, the Hamiltonian system \eqref{intro1} is reducible, i.e., there is an analytic quasi-periodic \textbf{symplectic} transformation $x=\psi(t)y$, where $\psi(t)$ has the same frequencies as $Q(t)$, which changes \eqref{intro1} into the Hamiltonian system $\dot{y}=By,$ where $B$ is a constant matrix. Moreover, if $\varepsilon_0$ is small enough, the relative measure of $E_{\varepsilon_0}$ in $(0,\varepsilon_0)$ is close to $1$. \end{theorem} Now we give some remarks on this result. Firstly, here we deal with the Hamiltonian system and have to find the \textit{symplectic} transformation, which is different from that in \cite{Jorba1} and \cite{Xu1}. Secondly, we consider the reducibility, rather than the effective reducibility in \cite{Jorba3} and \cite{Li2}. Last but not least, we can allow the matrix $A$ to have multiple eigenvalues. Of course, if the eigenvalues of $A$ are distinct, the non-degeneracy condition holds naturally. After finishing this work, we consulted the references again and found \cite{Li3}.
In \cite{Li3}, Li, Zhu and Chen considered the following nonlinear analytic quasi-periodic Hamiltonian system \begin{equation}\label{Li} \dot{x}=(A+\varepsilon Q(t))x+\varepsilon g(t)+h(x,t), \ x\in \mathbb{R}^{2n}, \end{equation} where $A$ is a constant matrix, $h=O(x^2)$ as $x\rightarrow 0$, and $h(x,t), Q(t), g(t)$ are analytic quasi-periodic on $D_\rho$ with respect to $t$. They proved that, under suitable hypotheses of analyticity, non-resonance conditions and non-degeneracy conditions, there exists a non-empty Cantor set $E^\ast\subset(0,\varepsilon_{0})$ with positive Lebesgue measure, such that for $\varepsilon\in E^\ast$, there is a quasi-periodic symplectic transformation, which changes the Hamiltonian system \eqref{Li} into the Hamiltonian system $$\dot{y}=B(\varepsilon)y+h_\infty(y,t,\varepsilon),$$ where $B$ is a real constant matrix and $h_\infty(y,t,\varepsilon)=O(y^2)$ as $y\rightarrow 0$. Moreover, $\mbox{meas}((0,\varepsilon_{0})\setminus E^\ast)=o(\varepsilon_0)$ as $\varepsilon_0\rightarrow 0$. Here we remark that if $g(t)\equiv 0, \ h(x,t)\equiv 0$ in \eqref{Li}, the result in Theorem 2.1 of \cite{Li3} is just the same as our main result in Theorem \ref{Main result}. However, in Theorem 2.1 of \cite{Li3}, $A$ is a matrix that can be diagonalized. In this paper, $A$ is only a constant matrix with possible multiple eigenvalues, which enables us to study equation \eqref{intro+1}, because \begin{equation*} A=\left( \begin{matrix} 0&1\\ 0&0 \end{matrix} \right) \end{equation*} cannot be diagonalized. Moreover, the non-resonance conditions and non-degeneracy conditions in Theorem 2.1 of \cite{Li3} are all stronger than assumptions A and B in this paper. Of course, the proof of our main result in Theorem \ref{Main result} is different from that in \cite{Li3} in some respects. For instance, in the estimate on the measure, we do not need the non-degeneracy conditions which guarantee that $\lambda_i^m-\lambda_j^m$ are Lipschitz from above and from below.
Furthermore, when proving the convergence of the iteration, our method can obtain some information to analyze the quasi-periodic Hill's equation \eqref{intro+1}. From the above, it is necessary to give the complete proof of Theorem \ref{Main result}. Therefore, we will prove Theorem \ref{Main result} in Section 3 of this paper. As an example, we apply Theorem \ref{Main result} to the following quasi-periodic Hill's equation \begin{equation}\label{intro+1} \ddot{x}+\varepsilon a(t)x=0, \end{equation} where $a(t)$ is analytic quasi-periodic with the frequencies $\omega=(\omega_1, \cdots, \omega_r)$. Denote the average of $a(t)$ by $\bar{a}$. If $\bar{a}>0$ and the frequencies $\omega$ of $a(t)$ satisfy the Diophantine condition $$\big|\langle k,\omega\rangle\big|\geq \frac{\alpha}{|k|^{\tau}}$$ for all $k\in \mathbb{Z}^{r}\setminus\{0\}$, where $\alpha >0$ is a small constant and $\tau>r-1$, then there exists some sufficiently small $\varepsilon_0>0$ such that equation \eqref{intro+1} is reducible and the equilibrium of \eqref{intro+1} is stable in the sense of Lyapunov for most sufficiently small $\varepsilon\in (0, \varepsilon_0)$. Moreover, all solutions of equation \eqref{intro+1} are quasi-periodic with the frequencies $\Omega=(\omega_1, \cdots,\omega_r, \sqrt{b})$ for most sufficiently small $\varepsilon\in (0, \varepsilon_0)$, where $b=\bar{a}\varepsilon+O(\varepsilon^2)$ as $\varepsilon\to 0$. Here we remark that if we rewrite equation \eqref{intro+1} as the Hamiltonian system \eqref{intro1}, we find that \begin{equation*} A=\left( \begin{matrix} 0&1\\ 0&0 \end{matrix} \right), \end{equation*} which has the multiple eigenvalue $\lambda_1=\lambda_2=0$. One can see Section 4 for more details about this example. There are plenty of works about the stability of the equilibria of quasi-periodic Hamiltonian systems; one can refer to \cite{Bibikov}, \cite{Liu1}, \cite{Liu2} and \cite{Wu} for a detailed description.
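The stability assertion for the quasi-periodic Hill's equation can be illustrated numerically. The sketch below is ours (not part of the paper): it integrates $\ddot{x}+\varepsilon a(t)x=0$ with the sample coefficient $a(t)=1+\frac{1}{2}\cos t+\frac{1}{2}\cos(\sqrt{2}\,t)$, which has $\bar{a}=1>0$ and frequencies $(1,\sqrt{2})$ of Diophantine type; for small $\varepsilon$ the orbit stays bounded, oscillating with the slow frequency $\sqrt{b}\approx\sqrt{\bar{a}\varepsilon}$:

```python
import math

def a(t):
    """Sample quasi-periodic coefficient with mean value abar = 1 > 0."""
    return 1.0 + 0.5 * math.cos(t) + 0.5 * math.cos(math.sqrt(2.0) * t)

def rk4_step(t, x, v, h, eps):
    """One classical Runge-Kutta step for x' = v, v' = -eps*a(t)*x."""
    def f(t, x, v):
        return v, -eps * a(t) * x
    k1x, k1v = f(t, x, v)
    k2x, k2v = f(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = f(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

eps, h = 0.05, 0.01
t, x, v = 0.0, 1.0, 0.0
amp = 0.0
for _ in range(30000):        # integrate up to t = 300
    x, v = rk4_step(t, x, v, h, eps)
    t += h
    amp = max(amp, abs(x))
print(amp)   # stays O(1): the equilibrium is stable for small eps
```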
In general, in order to determine the type of stability of the equilibria of quasi-periodic Hamiltonian systems, the authors need to assume that the corresponding linearized system is reducible, and some conditions are then imposed on the reduced system. However, as far as we know, the case where the conditions are imposed on the original system has not been considered in the literature up to now; we will study it in the future. The paper is organized as follows. In Section 2, we list some basic definitions and results that will be useful in the proof of the main result. In Section 3, we will prove Theorem \ref{Main result}. The quasi-periodic Hill's equation \eqref{intro+1} will be analyzed in Section 4. \section{Some preliminaries} We first give the definition of quasi-periodic functions. \begin{definition} A function $f$ is said to be a quasi-periodic function with a vector of basic frequencies $\omega=(\omega_{1},\omega_{2}, \cdots, \omega_{r})$, if $f(t)=F(\theta_{1},\theta_{2},\cdots,\theta_{r})$, where $F$ is $2\pi$ periodic in all its arguments and $\theta_{j}=\omega_{j}t$ for $j=1,2,\cdots,r.$ Moreover, if $F(\theta)\,(\theta=(\theta_{1},\theta_{2},\cdots,\theta_{r}))$ is analytic on $D_{\rho}=\{\theta \in \mathbb{C}^{r}: |\mathrm{Im}\theta_{j}|\leq\rho, j=1,2,\cdots,r\}$, we say that $f(t)$ is analytic quasi-periodic on $D_{\rho}$.
\end{definition} It is well known that an analytic quasi-periodic function $f(t)$ can be expanded as a Fourier series $$f(t)=\sum_{k\in\mathbb{Z}^{r}}f_{k}e^{\langle k,\omega\rangle\sqrt{-1}t}$$ with Fourier coefficients defined by $$f_{k}=\frac{1}{(2\pi)^{r}}\int_{\mathbb{T}^{r}}F(\theta)e^{-\langle k,\theta\rangle\sqrt{-1}}d\theta.$$ Denote by $||f||_{\rho}$ the norm $$||f||_{\rho}=\sum_{k\in\mathbb{Z}^{r}}|f_{k}|e^{|k|\rho}.$$ \begin{definition} An $n\times n$ matrix $Q(t)=(q_{ij}(t))_{1\leq i,j\leq n}$ is said to be analytic quasi-periodic on $D_{\rho}$ with frequencies $\omega=(\omega_{1},\omega_{2}, \cdots, \omega_{r})$, if all $q_{ij}(t) \ (i,j=1,2,\cdots,n)$ are analytic quasi-periodic on $D_{\rho}$ with frequencies $\omega=(\omega_{1},\omega_{2}, \cdots, \omega_{r})$. \end{definition} Define the norm of $Q$ by $$||Q||_{\rho}=\max_{1\leq i\leq n}\sum_{j=1}^{n}||q_{ij}||_{\rho}.$$ It is easy to see that $$ ||Q_{1}Q_{2}||_{\rho}\leq ||Q_{1}||_{\rho}||Q_{2}||_{\rho}. $$ If $Q$ is a constant matrix, write $||Q||=||Q||_{\rho}$ for simplicity. Denote the average of $Q(t)$ by $\overline{Q}=(\overline{q}_{ij})_{1\leq i,j\leq n}$, where $$\overline{q}_{ij}=\lim_{T\rightarrow \infty}\frac{1}{2T}\int_{-T}^{T}q_{ij}(t)dt,$$ see \cite{Siegel} for the existence of the limit. We also need the following two lemmas, proved in \cite{Jorba2}, for the proof of Theorem \ref{Main result}. \begin{lemma}\label{lem1} Let $h: B_\sigma(0)\subset \mathbb{R}^n\rightarrow \mathbb{R}^n$ be a $C^2$ function that satisfies $h(0)=0, D_xh(0)=0,$ $||D_{xx}h(x)||\leq K, x\in B_\sigma(0).$ Then $||h(x)||\leq\frac{K}{2}||x||^2$, $||D_xh(x)||\leq K||x||$.
\end{lemma} \begin{lemma}\label{lem2} Suppose that $B_0$ is an $n\times n$ matrix with different nonzero eigenvalues $\mu_1^0,\cdots,\mu^0_{n}$ satisfying $|\mu^0_i|>\gamma, \ |\mu_i^0-\mu_j^0|>\gamma, i \not=j, 1\leq i,j\leq n,$ and $S_0$ is a regular matrix such that $S_0^{-1}B_0S_0=diag(\mu_1^0,\cdots,\mu_{n}^0).$ Set $\beta_0=\max\{||S_0||,||S_0^{-1}||\}$, and choose $b$ such that $$0<b<\frac{\gamma}{(3n-1)\beta_0^2}.$$ If $B_1$ verifies $||B_1-B_0||\leq b,$ then the following conclusions hold: (1) $B_1$ has $n$ different nonzero eigenvalues $\mu_{1}^1,\cdots,\mu_{n}^1$. (2) There exists a regular matrix $S_1$ such that $S_1^{-1}B_1S_1=diag(\mu_{1}^1,\cdots,\mu_{n}^1)$, which satisfies $||S_1||, ||S_1^{-1}||\leq \beta_1,$ where $\beta_1=2\beta_0$. \end{lemma} The next lemma is used to perform a step of the inductive procedure in the proof of Theorem \ref{Main result}. \begin{lemma}\label{lem3} Consider the differential equation \begin{equation}\label{lemma1} \dot{P}(t)=\Lambda P(t)-P(t)\Lambda+R(t), \end{equation} where $\Lambda$ is a constant Hamiltonian matrix with $n$ different eigenvalues $\nu_1,\cdots,\nu_n$, $R$ is an analytic quasi-periodic Hamiltonian matrix on $D_\rho$ with frequencies $\omega$, satisfying $\overline{R}=0$. If \begin{equation}\label{lemma2} |\langle k,\omega\rangle\sqrt{-1}-\nu_i+\nu_j|\geq\frac{\alpha}{|k|^{3\tau}} \end{equation} for all $0\not=k\in\mathbb{Z}^r,$ and $|\nu_i|\geq \delta\varepsilon, \ |\nu_{i}-\nu_{j}|\geq \delta \varepsilon$ for $i\not=j, \ 1\leq i,j\leq n,$ where $\delta$ is a positive constant independent of $\varepsilon$, then equation \eqref{lemma1} has a unique analytic quasi-periodic Hamiltonian solution $P(t)$ with $\overline{P}=0$, where $P(t)$ has frequencies $\omega$ and satisfies \begin{equation}\label{lemma3} ||P||_{\rho-s}\leq\frac{c}{\alpha s^\nu }||R||_\rho \end{equation} with $\nu=3\tau+r$ and $0<s<\rho,$ where the constant $c$ depends only on $\tau$ and $r$.
\end{lemma} \noindent{\bf Proof.}~ Choosing $S$ such that $S^{-1}\Lambda S=D=diag(\nu_1,\cdots,\nu_n)$, making the change of variable $P(t)=SX(t)S^{-1}$ and defining $Y(t)=S^{-1}R(t)S$, equation \eqref{lemma1} becomes \begin{equation}\label{lemma+1} \dot{X}(t)=DX(t)-X(t)D+Y(t), \ \ \overline{Y}=0. \end{equation} Expanding $X$ and $Y$ into Fourier series yields that $$X(t)=\sum_{k\in \mathbb{Z}^r}X_ke^{\langle k,\omega\rangle\sqrt{-1}t}, \ \ Y(t)=\sum_{k\in \mathbb{Z}^r}Y_ke^{\langle k,\omega\rangle\sqrt{-1}t},$$ where $X_{k}=(x_{ij}^{k})_{1\leq i,j\leq n}$ and $Y_{k}=(y_{ij}^{k})_{1\leq i,j\leq n}.$ By comparing the coefficients of \eqref{lemma+1}, we obtain that $$ x^k_{ij}=\frac{y^k_{ij}}{\langle k,\omega\rangle\sqrt{-1}-\nu_i+\nu_j}, \ \ 1\leq i,j\leq n, k\not=0, $$ and $$x^0_{ij}=0, \ 1\leq i,j\leq n.$$ Since $R$ is analytic on $D_\rho$, $Y$ is also analytic on $D_\rho$. Therefore, we have $$||Y_k||\leq ||Y||_\rho e^{-|k|\rho}.$$ Hence \begin{eqnarray*}\label{lemma+2} ||X||_{\rho-s}&=&\sum_{k\in\mathbb{Z}^r}||X_k||e^{|k|(\rho-s)}\\ &\leq&\sum_{0\not=k\in\mathbb{Z}^r}\frac{|k|^{3\tau} e^{-s|k|}}{\alpha}||Y||_\rho\\ &\leq&\frac{c}{\alpha s^\nu }||Y||_\rho, \end{eqnarray*} where $\nu=3\tau+r$ and $0<s<\rho.$ Here and hereafter we always use the same symbol $c$ to denote different constants in estimates. Hence $$||P||_{\rho-s}\leq c ||X||_{\rho-s}\leq\frac{c}{\alpha s^\nu }||Y||_\rho\leq \frac{c}{\alpha s^\nu }||R||_\rho.$$ Now we prove that $P$ is Hamiltonian. Since $\Lambda$ and $R$ are Hamiltonian, we have $\Lambda=J\Lambda_J$ and $R=JR_J$, where $\Lambda_J$ and $R_J$ are symmetric. Let $P_J=J^{-1}P$; if $P_J$ is symmetric, then $P$ is Hamiltonian. Below we prove that $P_J$ is symmetric. Substituting $P=JP_J$ into equation \eqref{lemma1} yields that \begin{equation}\label{lemma4} \dot{P_J}=\Lambda_JJP_J-P_JJ\Lambda_J+R_J, \end{equation} and transposing equation \eqref{lemma4}, we get \begin{equation*} \dot{P_J}^T=\Lambda_JJP_J^T-P_J^TJ\Lambda_J+R_J.
\end{equation*} It is easy to see that $JP_J$ and $JP_J^T$ are solutions of \eqref{lemma1}; moreover, $\overline{JP_J}=\overline{JP_J^T}=0.$ Since the solution of \eqref{lemma1} with $\overline{P}=0$ is unique, we have that $JP_J=JP_J^T$, which implies that $P$ is Hamiltonian. This completes the proof of the lemma. $\square$ \section{Proof of Theorem \ref{Main result}} From the assumptions of Theorem \ref{Main result}, it follows that $A+\varepsilon \overline{Q}$ is a Hamiltonian matrix with $n$ different eigenvalues $\mu_{1}, \ \cdots, \ \mu_{n}$, and $|\mu_i|\geq 2\delta \varepsilon, \ |\mu_{i}-\mu_{j}|\geq 2\delta \varepsilon, \ i\not=j, \ 1\leq i,j\leq n,$ where $\delta$ is a positive constant independent of $\varepsilon$. We rewrite the Hamiltonian system \eqref{intro1} as \begin{equation}\label{pf1} \dot{x}=[A+\varepsilon \overline{Q}+\varepsilon(Q(t)-\overline{Q})]x:=(A_1+\varepsilon \widetilde{Q}(t))x, \end{equation} where $A_1=A+\varepsilon \overline{Q}, \ \widetilde{Q}(t)=Q(t)-\overline{Q}$, $\overline{\widetilde{Q}}=0$. Introduce the change of variables $x=e^{\varepsilon P(t)}x_1$, where $P(t)$ will be determined later. Under this symplectic transformation, the Hamiltonian system \eqref{pf1} is changed into the new Hamiltonian system \begin{equation}\label{pf2} \dot{x_1}=e^{-\varepsilon P(t)}(A_1+\varepsilon \widetilde{Q}-\varepsilon\dot{P})e^{\varepsilon P(t)}x_1.
\end{equation} Expand $e^{\varepsilon P}$ and $e^{-\varepsilon P}$ into $$e^{\varepsilon P}=I+\varepsilon P+B, \ \ e^{-\varepsilon P}=I-\varepsilon P+\widetilde{B},$$ where $$B=\frac{(\varepsilon P)^2}{2!}+\frac{(\varepsilon P)^3}{3!}+\cdots, \ \ \widetilde{B}=\frac{(\varepsilon P)^2}{2!}-\frac{(\varepsilon P)^3}{3!}+\cdots.$$ Then the Hamiltonian system \eqref{pf2} can be rewritten as \begin{eqnarray}\label{pf3} \dot{x_1}&=&(I-\varepsilon P+\widetilde{B})(A_1+\varepsilon\widetilde{Q}-\varepsilon\dot{P})(I+\varepsilon P+B)x_1\nonumber\\[0.2cm] &=&(A_1+\varepsilon \widetilde{Q}-\varepsilon \dot{P}+\varepsilon A_1P-\varepsilon PA_1+\varepsilon^2Q_1)x_1, \end{eqnarray} where \begin{eqnarray*} Q_1&=&-P(\widetilde{Q}-\dot{P})+(\widetilde{Q}-\dot{P})P-P(A_1+\varepsilon\widetilde{Q}-\varepsilon\dot{P})P\\ &&+(I-\varepsilon P)(A_1+\varepsilon\widetilde{Q}-\varepsilon\dot{P})\frac{B}{\varepsilon^2}\\ &&+e^{\varepsilon P}(A_1+\varepsilon\widetilde{Q}-\varepsilon\dot{P})\frac{\widetilde{B}}{\varepsilon^2}. \end{eqnarray*} We would like to have $$ \widetilde{Q}-\dot{P}+ A_1P-PA_1=0,$$ which is equivalent to \begin{equation}\label{pf4} \dot{P}=A_1P-PA_1+\widetilde{Q}. \end{equation} By the assumption B of Theorem \ref{Main result}, it is easy to see that the inequalities $$|\mu_i|\geq \delta\varepsilon, \ |\mu_{i}-\mu_{j}|\geq \delta \varepsilon, \ i\not=j, \ 1\leq i,j\leq n$$ hold.
Moreover, if the inequalities \begin{equation}\label{+1} |\langle k,\omega\rangle\sqrt{-1}-\mu_i+\mu_j|\geq\frac{\alpha_0}{|k|^{3\tau}}, \ 0\not=k\in\mathbb{Z}^r, \end{equation} also hold, where $\alpha_0=\frac{\alpha}{2}$, then, by Lemma \ref{lem3}, \eqref{pf4} is solvable for $P$ on a smaller domain, that is, there is a unique quasi-periodic Hamiltonian matrix $P(t)$ with frequencies $\omega$ on $D_{\rho-s}$, which satisfies $\overline{P}=0$ and \begin{equation}\label{pf5} ||P||_{\rho-s}\leq\frac{c}{\alpha_0 s^\nu}||\widetilde{Q}||_{\rho}\leq\frac{c}{\alpha_0 s^\nu}||Q||_{\rho}, \end{equation} where $ s=\frac{1}{2}\rho.$ Therefore, by \eqref{pf4}, the Hamiltonian system \eqref{pf3} becomes \begin{equation}\label{pf5a} \dot{x_1}=(A_1+\varepsilon^2Q_1)x_1, \end{equation} where \begin{eqnarray*} Q_1&=&P(A_1P-PA_1)+(PA_1-A_1P)P\\[0.2cm] &&-P(A_1+\varepsilon(PA_1-A_1P))P\\ &&+(I-\varepsilon P)(A_1+\varepsilon(PA_1-A_1P))\frac{B}{\varepsilon^2}\\ &&+\frac{\widetilde{B}}{\varepsilon^2}(A_1+\varepsilon(PA_1-A_1P))e^{\varepsilon P}. \end{eqnarray*} From Lemma \ref{lem1}, it follows that $$||B||_{\rho-s}\leq c||\varepsilon P||_{\rho-s}^2, \ ||\widetilde{B}||_{\rho-s}\leq c||\varepsilon P||_{\rho-s}^2.$$ Therefore, if $|\varepsilon|$ is sufficiently small, we have $$||Q_1||_{\rho-s}\leq c||P||_{\rho-s}^2\leq\frac{c}{\alpha_0^2s^{2\nu}}||Q||_{\rho}^2.$$ Now we consider the iteration step. In the $m^{th}$ step, we consider the Hamiltonian system \begin{equation}\label{pf6} \dot{x}_{m}=(A_{m}+\varepsilon^{2^{m}} Q_{m}(t))x_{m}, \ m\geq 1, \end{equation} where $A_m$ has $n$ different eigenvalues $\lambda_1^m,\cdots,\lambda_n^m$ with $$ |\lambda_i^m|\geq \delta \varepsilon, \ \ |\lambda_i^m-\lambda_j^m|\geq \delta \varepsilon, \ i\not=j, \ 1\leq i,j\leq n,$$ here we define $\lambda_i^1=\mu_i, \ i=1,\cdots,n.
$ Let $A_{m+1}=A_m+\varepsilon^{2^{m}}\overline{Q}_m$, then the Hamiltonian system \eqref{pf6} becomes \begin{equation}\label{pf7} \dot{x}_{m}=(A_{m+1}+\varepsilon^{2^{m}} \widetilde{Q}_{m})x_{m}, \ m\geq 1, \end{equation} where $\widetilde{Q}_m=Q_m(t)-\overline{Q}_m.$ We need to solve $$\dot{P}_m=A_{m+1}P_m-P_mA_{m+1}+\widetilde{Q}_m.$$ If$$|\langle k,\omega\rangle\sqrt{-1}-\lambda_i^{m+1}+\lambda_j^{m+1}|\geq\frac{\alpha_{m}}{|k|^{3\tau}}, \ \ 0\not=k\in\mathbb{Z}^r,$$ and $A_{m+1}$ has $n$ different eigenvalues $\lambda_1^{m+1},\cdots,\lambda_n^{m+1}$ with $$ |\lambda_i^{m+1}|\geq \delta \varepsilon, \ \ |\lambda_i^{m+1}-\lambda_j^{m+1}|\geq \delta \varepsilon,\ \ i\not=j,\ 1\leq i,j\leq n,$$ by Lemma \ref{lem3}, there is a unique quasi-periodic Hamiltonian matrix $P_m(t)$ with frequencies $\omega$ on $D_{\rho_m-s_m}$, which satisfies \begin{equation}\label{pf8} ||P_m||_{\rho_m-s_m}\leq\frac{c}{\alpha_m s_m^\nu }||Q_m||_{\rho_m}. \end{equation} Thus, under the symplectic change of variables $x_{m}=e^{\varepsilon^{2^{m}}P_m(t)}x_{m+1}$, the Hamiltonian system \eqref{pf7} is changed into \begin{equation*} \dot{x}_{m+1}=(A_{m+1}+\varepsilon^{2^{m+1}}Q_{m+1})x_{m+1}, \end{equation*} where \begin{eqnarray*} Q_{m+1}(t)&=&P_m(A_{m+1}P_m-P_m A_{m+1})+(P_m A_{m+1}-A_{m+1}P_m)P_m\\[0.2cm] &&-P_m\left(A_{m+1}+\varepsilon^{2^{m}}(P_m A_{m+1}-A_{m+1}P_m)\right)P_m\\ &&+(I-\varepsilon^{2^{m}}P_m)\left(A_{m+1}+\varepsilon^{2^{m}}(P_m A_{m+1}-A_{m+1}P_m)\right)\frac{B_m}{\varepsilon^{2^{m+1}}}\\ &&+\frac{\widetilde{B}_m}{\varepsilon^{2^{m+1}}}\left(A_{m+1}+\varepsilon^{2^{m}}(P_m A_{m+1}-A_{m+1}P_m)\right)e^{\varepsilon^{2^{m}} P_m}, \end{eqnarray*} $$e^{\varepsilon^{2^{m}} P_m}=I+\varepsilon^{2^{m}} P_m+B_m,$$ $$ \ e^{-\varepsilon^{2^{m}} P_m}=I-\varepsilon^{2^{m}} P_m+\widetilde{B}_m,$$ and $$B_m=\frac{(\varepsilon^{2^{m}} P_m)^2}{2!}+\frac{(\varepsilon^{2^{m}}P_m)^3}{3!}+\cdots,$$ $$\widetilde{B}_m=\frac{(\varepsilon^{2^{m}} 
P_m)^2}{2!}-\frac{(\varepsilon^{2^{m}}P_m)^3}{3!}+\cdots.$$ From Lemma \ref{lem1}, it follows that $$||B_m||_{\rho_m-s_m}\leq c||\varepsilon^{2^m} P_m||_{\rho_m-s_m}^2, \ ||\widetilde{B}_m||_{\rho_m-s_m}\leq c||\varepsilon^{2^m} P_m||_{\rho_m-s_m}^2.$$ Therefore, if $|\varepsilon|$ is sufficiently small, by \eqref{pf8} we have \begin{equation}\label{pf9} ||Q_{m+1}||_{\rho_m-s_m}\leq\frac{c}{\alpha_m^2 s_m^{2\nu}}||Q_m||^2_{\rho_m}. \end{equation} Now we prove that the iteration is convergent as $m\rightarrow \infty$. When $m=1$, we choose $$ \alpha_1=\frac{1}{4}\alpha, \ \rho_1=\frac{1}{2}\rho, \ s_1=\frac{1}{8}\rho, \ F_1=\frac{||\varepsilon^2 Q_1||_{\rho_1}}{\alpha_1^2s_1^{2\nu}}.$$ At the $m^{th}$ step, we define $$\alpha_m=\frac{\alpha}{(m+1)^2}, \ s_m=\frac{\rho}{2^{m+2}}, \ \rho_m=\rho_1-(s_1+s_2+\cdots+s_{m-1})$$ and $$F_m=\frac{\varepsilon^{2^{m}}||Q_m||_{\rho_m}}{\alpha_m^2s_m^{2\nu}}.$$ By \eqref{pf9}, we have $$F_{m+1}\leq c\frac{\varepsilon^{2^{m+1}}||Q_m||^2_{\rho_m}}{(\alpha_m^2s_m^{2\nu})^2}=cF_m^2,$$ where the constant $c$ depends only on $\alpha, \rho$. Hence it follows that \begin{equation}\label{pf10} cF_{m+1}\leq(cF_m)^2\leq(cF_1)^{2^m}. \end{equation} If $cF_1<1$, then $cF_m\rightarrow 0$ as $m\rightarrow\infty.$ From \eqref{pf8} it follows that \begin{equation}\label{pf11} ||\varepsilon^{2^{m}}P_m||_{\rho_m-s_m}<c F_m. 
\end{equation} Thus, if $cF_1<\frac{1}{2}$, then $$||e^{\pm \varepsilon^{2^{m}}P_m}||_{\rho_m}\leq 2.$$ Since \begin{equation}\label{pf+1} ||A_{m+1}-A_m||=||\varepsilon^{2^{m}}\overline{Q}_m||\leq ||\varepsilon^{2^{m}}Q_m||_{\rho_m}< c F_m, \end{equation} if $cF_1\leq\frac{\delta \varepsilon}{(3n-1)\beta_m^2}$, it follows from \eqref{pf+1} that $$||A_{m+1}-A_m||\leq\frac{\delta \varepsilon}{(3n-1)\beta_m^2}, \ \mbox{for any} \ m\geq 1,$$ where $\beta_m=\max\{||S_m||,||S_m^{-1}||\}$ and $S_m$ is the regular matrix in Lemma \ref{lem2} such that $$S_m^{-1}A_mS_m=\mathrm{diag}(\lambda_1^m, \cdots, \lambda_n^m).$$ Thus, it follows from Lemma \ref{lem2} that $A_{m+1}$ has $n$ different eigenvalues $\lambda_1^{m+1},\cdots,\lambda_n^{m+1}.$ Moreover, $$|\lambda_{i}^{m+1}-\lambda_{j}^{m+1}|\geq \delta \varepsilon, \ i\not=j, \ 1\leq i,j\leq n,$$ and $$|\lambda_i^{m+1}|\geq \delta\varepsilon, \ i=1, \cdots, n.$$ In fact, \begin{eqnarray*} \big|\lambda_{i}^{m+1}-\lambda_{j}^{m+1}\big|&\geq& \big|\lambda^1_{i}-\lambda^1_{j}\big| -\sum_{l=1}^{m}\left(|\lambda_{i}^{l+1}-\lambda^l_{i}|+|\lambda_{j}^{l+1}-\lambda^l_{j}|\right)\\ &\geq&\big|\lambda^1_{i}-\lambda^1_{j}\big|-2\sum_{l=1}^{m}||A_{l+1}-A_l||\\ &\geq&\big|\lambda^1_{i}-\lambda^1_{j}\big|-2\sum_{l=1}^{m}c F_l\\ &\geq&2\delta \varepsilon-2\sum_{l=1}^{m}c F_l. \end{eqnarray*} Moreover, we have \begin{eqnarray*} \sum_{l=1}^{m}c F_l&\leq& \sum_{l=1}^{\infty}c F_l\leq \sum_{l=0}^{\infty}(c F_1)^{2^{l}}\leq\sum_{l=1}^{\infty}(c F_1)^l \\ &=&\frac{c F_1}{1-c F_1}<2c F_1.
\end{eqnarray*} Thus, if $cF_1\leq \min\left\{\frac{1}{2},\frac{1}{4}\delta \varepsilon, \frac{\delta \varepsilon}{(3n-1)\beta_m^2}\right\}$, that is, $0< \varepsilon \leq \min\left\{1,\frac{c}{||Q||_\rho},\frac{c}{||Q||^2_\rho}\right\},$ then by \eqref{pf10}, we have $$|\lambda_{i}^{m+1}-\lambda_{j}^{m+1}|\geq2\delta\varepsilon-4\varepsilon cF_1\geq \delta \varepsilon, \ i\not=j, \ 1\leq i,j\leq n.$$ In the same way as above, we have $$|\lambda_i^{m+1}|\geq \delta\varepsilon, \ i=1, \cdots, n.$$ Let $D_\ast=\bigcap_{m=1}^\infty D_{\rho_m}=D_{\frac{\rho}{4}}.$ By \eqref{pf11}, the composition of all the changes $e^{\varepsilon^{2^{m}} P_m}$ converges to $\psi$ as $m\rightarrow\infty$. Obviously, $$||\varepsilon^{2^{m}}Q_m||_{D_\ast}\leq cF_m\rightarrow 0, \ m\rightarrow\infty.$$ Furthermore, it follows from \eqref{pf+1} that $A_m$ is convergent as $m\rightarrow\infty$. Define $B=\displaystyle\lim_{m\rightarrow\infty}A_m.$ Then, under the symplectic change of variables $x=\psi(t) y$, the Hamiltonian system \eqref{intro1} is changed into $\dot{y}=B y.$ Now we prove that, for most sufficiently small $\varepsilon$, such a symplectic transformation exists. In view of the above iteration, it remains to prove that the non-resonant conditions \begin{equation}\label{pf12} |\langle k,\omega\rangle\sqrt{-1}-\lambda_i^{m+1}+\lambda_j^{m+1}|\geq\frac{\alpha_m}{|k|^{3\tau}} \end{equation} for all $0\not=k\in\mathbb{Z}^r, 1\leq i,j\leq n, \ m=0,1,2,\cdots$, hold for most sufficiently small $\varepsilon$.
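The quadratic smallness gains driving this iteration can be checked with a few lines of code. The sketch below uses illustrative values of $c$ and $F_1$ chosen by us (subject only to $cF_1<\frac{1}{2}$, they are not taken from the proof), iterates $F_{m+1}=cF_m^2$ as in \eqref{pf9}, and verifies the super-exponential bound \eqref{pf10} together with the tail estimate $\sum_l cF_l<2cF_1$ used above for the eigenvalue separation.

```python
# Illustrative sketch of the quadratic convergence F_{m+1} = c * F_m**2 (cf. (pf9)).
# The constants c and F_1 are placeholders chosen so that c*F_1 = 0.2 < 1/2.
c = 2.0
F = [0.1]  # F[0] plays the role of F_1
for m in range(1, 8):
    F.append(c * F[-1] ** 2)  # F_{m+1} = c F_m^2

# Super-exponential decay: c*F_{m+1} <= (c*F_1)^(2^m), cf. (pf10)
bounds = [(c * F[0]) ** (2 ** m) for m in range(len(F))]

# Tail estimate: sum_l c*F_l < 2*c*F_1, used for the eigenvalue separation
tail = sum(c * x for x in F)
```

After seven steps the iterate is already far below machine-relevant scales, which is the mechanism behind the convergence of the composed transformations.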
Let $f(\varepsilon)=\langle k,\omega\rangle\sqrt{-1}-\lambda_i^{m+1}+\lambda_j^{m+1}, \ i\not=j,$ and $$O_{ijm}^k=\left\{\varepsilon\in(0,\varepsilon_0):|f(\varepsilon)|<\frac{\alpha_m}{|k|^{3\tau}}\right\},$$ where we choose $$\varepsilon_0=\min\left\{1, \frac{c}{||Q||_\rho},\frac{c}{||Q||^2_\rho}\right\} $$ such that, for $\varepsilon\in(0,\varepsilon_0)$, the above iteration is convergent, and \begin{equation}\label{pf13} \Big|\frac{df}{d\varepsilon}\Big|=\Big|\frac{d}{d\varepsilon}(\lambda_i^{m+1}-\lambda_j^{m+1})\Big|\geq \delta. \end{equation} For $\varepsilon \in (0,\varepsilon_0)$, by \eqref{pf+1} we have \begin{eqnarray*} ||A_{m+1}-A_1||&\leq&||A_{m+1}-A_{m}||+\cdots+||A_2-A_1||\\[0.2cm] &\leq&c F_m+\cdots+ c F_2 \\ &\leq& 2c F_1\leq \frac{1}{2}\delta\varepsilon. \end{eqnarray*} Hence \begin{eqnarray*} |f(\varepsilon)|&\geq&|\langle k,\omega\rangle\sqrt{-1}-\lambda_i+\lambda_j|-|\lambda_i^1-\lambda_i|-|\lambda_j^1-\lambda_j|\\[0.2cm] &&-|\lambda_i^1-\lambda_i^{m+1}|-|\lambda_j^1-\lambda_j^{m+1}|\\ &\geq&\frac{\alpha}{|k|^\tau}-2q\varepsilon-2||A_{m+1}-A_1||\\ &\geq&\frac{\alpha}{|k|^\tau}-2q\varepsilon-\delta\varepsilon \geq\frac{\alpha}{|k|^\tau}-3M\varepsilon_0, \end{eqnarray*} where $M=\max\{q,\delta\}$. If $\frac{1}{|k|^\tau}\geq\frac{6 M\varepsilon_0}{\alpha},$ then $$|f(\varepsilon)|\geq\frac{\alpha}{2|k|^\tau}>\frac{\alpha_m}{|k|^{3\tau}},$$ and $O_{ijm}^k=\emptyset.$ Suppose that $\frac{1}{|k|^\tau}<\frac{6 M\varepsilon_0}{\alpha}$. By \eqref{pf13}, it follows that $$ \mbox{meas}\left(O_{ijm}^k\right)<\frac{\alpha_m}{|k|^{3\tau}\delta}.$$ Thus, \begin{eqnarray*} \mbox{meas}\left(\bigcup_{i\not=j}\bigcup_{0\not=k\in\mathbb{Z}^r}O_{ijm}^k\right) &\leq&\frac{n^2\alpha_m}{\delta}\sum_{|k|^{-\tau}<\frac{6 M\varepsilon_0}{\alpha}}\frac{1}{|k|^{3\tau}}\\ &\leq& \frac{n^2\alpha_m}{\delta}\cdot\frac{36M^2\varepsilon_0^2}{\alpha^2}\sum_{k\in\mathbb{Z}^r}\frac{1}{|k|^\tau}\\ &\leq&\frac{c\varepsilon_0^2}{m^2}. 
\end{eqnarray*} Let $$E_m=\left\{\varepsilon\in(0,\varepsilon_0):|\langle k,\omega\rangle\sqrt{-1}-\lambda_i^{m+1}+\lambda_j^{m+1}|>\frac{\alpha_m}{|k|^{3\tau}}, 0\not=k\in\mathbb{Z}^r, \ i\not=j\right\}.$$ Then $$(0,\varepsilon_0)-E_m=\bigcup_{i\not=j}\bigcup_{0\not=k\in\mathbb{Z}^r}O_{ijm}^k.$$ Thus $$\mbox{meas}\left((0,\varepsilon_0)-E_m\right)\leq \frac{c\varepsilon_0^2}{m^2}.$$ Let $E_{\varepsilon_0}=\bigcap_{m=1}^{\infty}E_m$; then $$\mbox{meas}\left((0,\varepsilon_0)-E_{\varepsilon_0}\right)\leq c\varepsilon_0^2,$$ and $$\lim_{\varepsilon_0\rightarrow 0}\frac{\mbox{meas}\left((0,\varepsilon_0)-E_{\varepsilon_0}\right)}{\varepsilon_0}=0.$$ Therefore, $E_{\varepsilon_0}$ is a non-empty subset of $(0,\varepsilon_0)$. Thus, for $\varepsilon\in E_{\varepsilon_0}$, the Hamiltonian system \eqref{intro1} is reducible, i.e., there exists a symplectic transformation $x=\psi(t)y$ which changes the Hamiltonian system \eqref{intro1} into the Hamiltonian system $\dot{y}=By$. This completes the proof of Theorem \ref{Main result}. \section{The quasi-periodic Hill's equation} As an example, we apply Theorem \ref{Main result} to the following quasi-periodic Hill's equation \begin{equation}\label{exp1} \ddot{x}+\varepsilon a(t)x=0, \end{equation} where $a(t)$ is an analytic quasi-periodic function on $D_\rho$ with frequencies $\omega=(\omega_1, \cdots, \omega_r)$. Denote the average of $a(t)$ by $\bar{a}$, and suppose $\bar{a}>0$. Setting $\dot{x}=y$, equation \eqref{exp1} can be rewritten in the equivalent form \begin{equation}\label{exp2} \dot{x}=y, \ \ \dot{y}=-\varepsilon a(t)x. \end{equation} To apply Theorem \ref{Main result}, we express \eqref{exp2} in the form \begin{equation}\label{exp3} \dot{z}=(A+\varepsilon Q(t))z, \end{equation} where \begin{equation*} z=\left( \begin{matrix} x\\ y \end{matrix} \right), \ \ A=\left( \begin{matrix} 0&1\\ 0&0 \end{matrix} \right), \ \ Q=\left( \begin{matrix} 0&0\\ -a(t)&0 \end{matrix} \right).
\end{equation*} It is easy to see that $A$ has the double eigenvalue $\lambda_1=\lambda_2=0$; moreover, $A+\varepsilon \overline{Q}$ has two different eigenvalues $\mu_1=i\sqrt{\bar{a}\varepsilon}, \ \mu_2=-i\sqrt{\bar{a}\varepsilon},$ where $ \overline{Q}$ stands for the average of the matrix $Q(t)$ and $i=\sqrt{-1}.$ It is clear that $$|\mu_i|=\sqrt{\bar{a}}\sqrt{\varepsilon}\geq 2\delta \varepsilon, \ i=1,2,$$ and $$|\mu_1-\mu_2|=2\sqrt{\bar{a}}\sqrt{\varepsilon}\geq 2\delta \varepsilon,$$ where we choose $\delta=\frac{1}{2}\sqrt{\bar{a}}>0$, which is a constant independent of $\varepsilon$. Therefore, Theorem \ref{Main result} can be applied. It follows from Theorem \ref{Main result} that the following result holds. \begin{theorem}\label{app1} Assume that $a(t)$ is an analytic quasi-periodic function on $D_\rho$ with frequencies $\omega=(\omega_1, \cdots, \omega_r)$, and $\bar{a}>0$. If the frequencies $\omega$ of $a(t)$ satisfy the Diophantine condition \begin{equation}\label{Diophantine condition} \big|\langle k,\omega\rangle\big|\geq \frac{\alpha}{|k|^{\tau}} \end{equation} for all $k\in \mathbb{Z}^{r}\setminus\{0\}$, where $\alpha >0$ is a small constant and $\tau>r-1$, then there exist some sufficiently small $\varepsilon_0>0$ and a non-empty Cantor subset $E_{\varepsilon_0}\subset (0,\varepsilon_0)$ with positive Lebesgue measure, such that for $\varepsilon \in E_{\varepsilon_0}$, the Hamiltonian system \eqref{exp3} is reducible. Moreover, if $\varepsilon_0$ is small enough, the relative measure of $E_{\varepsilon_0}$ in $(0,\varepsilon_0)$ is close to $1$. \end{theorem} \begin{remark} From Theorem \ref{app1}, it follows that equation \eqref{exp1} can be changed into a constant coefficient differential equation for most sufficiently small $\varepsilon>0$. \end{remark} Now we want to study the Lyapunov stability of the equilibrium of the equation \eqref{exp1}, using the results obtained in Section 3.
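To illustrate Theorem \ref{app1} numerically, one can integrate \eqref{exp1} for a concrete quasi-periodic coefficient. The sketch below uses the illustrative choice $a(t)=1+\frac{1}{2}\cos t+\frac{1}{2}\cos(\sqrt{2}\,t)$ (so that $\bar{a}=1>0$) and a small $\varepsilon$; these data are ours and are not taken from the text. A classical Runge--Kutta scheme then shows the solution staying bounded over a long time window, consistent with reducibility to a constant coefficient equation.

```python
import math

def a(t):
    # illustrative quasi-periodic coefficient with average a_bar = 1 (our choice)
    return 1.0 + 0.5 * math.cos(t) + 0.5 * math.cos(math.sqrt(2.0) * t)

def rk4_step(t, x, y, h, eps):
    # one classical fourth-order Runge-Kutta step for x' = y, y' = -eps*a(t)*x
    def f(s, u, v):
        return v, -eps * a(s) * u
    k1x, k1y = f(t, x, y)
    k2x, k2y = f(t + h / 2, x + h / 2 * k1x, y + h / 2 * k1y)
    k3x, k3y = f(t + h / 2, x + h / 2 * k2x, y + h / 2 * k2y)
    k4x, k4y = f(t + h, x + h * k3x, y + h * k3y)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

eps, h = 0.01, 0.01        # small parameter and step size (illustrative)
t, x, y = 0.0, 1.0, 0.0    # initial data x(0) = 1, x'(0) = 0
max_x = abs(x)
while t < 200.0:
    x, y = rk4_step(t, x, y, h, eps)
    t += h
    max_x = max(max_x, abs(x))
```

For this choice of data the amplitude remains of order one on $[0,200]$, in line with the slow oscillation at frequency about $\sqrt{\varepsilon\bar{a}}$ predicted by the reduced equation.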
If $a(t)$ is periodic in time with period $T$, one famous stability criterion was given by Magnus and Winkler \cite{Magnus} for Hill's equation \begin{equation}\label{sta1} \ddot{x}+ a(t)x=0. \end{equation} That is, \eqref{sta1} is stable if $$a(t)>0, \ \int_0^Ta(t)dt\leq\frac{4}{T},$$ which can be shown using a Poincar\'{e} inequality. This stability criterion has been generalized and improved by Zhang and Li in \cite{Zhang1}; the result is now known as the $L^p$-criterion. Recently, Zhang \cite{Zhang2} extended this criterion to the linear planar Hamiltonian system $$\dot{x}=m(t)y, \ \ \dot{y}=-n(t)x,$$ where $m(t), n(t)$ are continuous and $T$-periodic functions. However, for the quasi-periodic Hill's equation \eqref{exp1}, the results above cannot be applied directly. We now obtain a result on the stability of the equilibrium of equation \eqref{exp1}. \begin{theorem}\label{stable} Under the conditions of Theorem \ref{app1}, the equilibrium of the equation \eqref{exp1} is stable in the sense of Lyapunov for most sufficiently small $\varepsilon>0$. \end{theorem} \noindent{\bf Proof.}~ Theorem \ref{app1} tells us that, for most sufficiently small $\varepsilon \in(0,\varepsilon_0)$, there exists an analytic quasi-periodic symplectic transformation $z = \psi(t)z_\infty$, where $\psi(t)$ has the same frequencies as $Q(t)$, which changes \eqref{exp3} into the Hamiltonian system \begin{equation}\label{stable+1} \dot{z}_\infty= Bz_\infty, \end{equation} where $B$ is a constant matrix. Moreover, from the proof of Theorem \ref{Main result} in Section 3, it follows that $B$ has two different eigenvalues $\lambda_1^\infty, \lambda_2^\infty$, satisfying $$|\lambda_i^\infty|\geq \delta\varepsilon \ (i=1,2), \ \ |\lambda_1^\infty-\lambda_2^\infty|\geq \delta\varepsilon.$$ Furthermore, by the proof of Theorem \ref{Main result}, we have \begin{equation}\label{stable2} ||B-(A+\varepsilon \overline{Q})||\leq cF_1=O(\varepsilon^2).
\end{equation} Therefore, the two different eigenvalues of $B$ are purely imaginary and can be written in the form $$\lambda_i^\infty=\pm i\sqrt{b}, \ i=1,2,$$ where $b$ can be written in the following form \begin{equation}\label{b} b=\bar{a}\varepsilon+O(\varepsilon^2), \end{equation} which depends on $\bar{a}$ and $\varepsilon$ only. Thus, there exists a non-singular symplectic matrix $S$ such that \begin{equation*} S^{-1}BS=\left( \begin{matrix} i\sqrt{b}&0\\ 0&-i\sqrt{b} \end{matrix} \right). \end{equation*} Let $z_\infty=S\tilde{z}_\infty$; under this symplectic transformation, the Hamiltonian system \eqref{stable+1} is changed into \begin{equation*} \dot{\tilde{z}}_\infty=S^{-1}BS\tilde{z}_\infty=\left( \begin{matrix} i\sqrt{b}&0\\ 0&-i\sqrt{b} \end{matrix} \right)\tilde{z}_\infty. \end{equation*} Hence, by an analytic quasi-periodic symplectic transformation, equation \eqref{exp1} is changed into \begin{equation}\label{exp4} \ddot{x}_\infty+b x_\infty=0. \end{equation} It is easy to see that equation \eqref{exp4} is elliptic. Therefore, the equilibrium of equation \eqref{exp1} is stable in the sense of Lyapunov for most sufficiently small $\varepsilon>0$. $\square$ For the existence of quasi-periodic solutions of equation \eqref{exp1}, we have the following result. \begin{theorem}\label{quasi-periodic solution} Under the conditions of Theorem \ref{app1}, all solutions of equation \eqref{exp1} are quasi-periodic with frequencies $\Omega=(\omega_1, \cdots,\omega_r, \sqrt{b})$ for most sufficiently small $\varepsilon>0$, where $b$ is given by (\ref{b}). \end{theorem} \noindent{\bf Proof.}~ By Theorem \ref{app1}, we know that, for most sufficiently small $\varepsilon \in(0,\varepsilon_0)$, there exists an analytic quasi-periodic symplectic transformation which has the same frequencies as $a(t)$; under this transformation, equation \eqref{exp1} is changed into \eqref{exp4}.
On the other hand, it is easy to see that all solutions of the equation \eqref{exp4} are periodic, and the frequency of these solutions is $\sqrt{b}$. Thus, we only need to prove that, for most sufficiently small $\varepsilon\in(0,\varepsilon_0)$, the following non-resonant condition \begin{equation}\label{solution1} |k_1\omega_1+\cdots+k_r\omega_r+k_{r+1}\sqrt{b}|\geq\frac{\alpha_0}{|k|^{5\tau+4}} \end{equation} holds for all $k=(k_1,\cdots,k_{r+1})\in\mathbb{Z}^{r+1}\setminus\{0\}$ , where $\alpha_0$ is defined in Section 3, that is, $\alpha_0=\frac{1}{2}\alpha$, and $\omega=(\omega_1,\cdots,\omega_r)$ are the frequencies of $a(t)$. If $k_{r+1}=0$, then from the Diophantine condition \eqref{Diophantine condition}, it follows that \eqref{solution1} holds. Suppose that $k_{r+1}\not=0$. Let $g(\varepsilon)=k_1\omega_1+\cdots+k_r\omega_r+k_{r+1}\sqrt{b}, $ and $$O_k=\left\{\varepsilon\in(0,\varepsilon_0):|g(\varepsilon)|<\frac{\alpha_0}{|k|^{5\tau+4}}\right\}.$$ It follows from the non-degeneracy condition that \begin{equation}\label{solution2} \Big|\frac{dg}{d\varepsilon}\Big|=\Big|\frac{d}{d\varepsilon}(k_{r+1}\sqrt{b})\Big|\geq |k_{r+1}|\delta. \end{equation} By \eqref{b}, we have $$\sqrt{b}\leq 4\delta\sqrt{\varepsilon}.$$ From the Diophantine condition \eqref{Diophantine condition}, it follows that \begin{eqnarray*} |g(\varepsilon)|&\geq&\frac{\alpha}{(|k_1|+\cdots+|k_r|)^\tau}-|k_{r+1}|\sqrt{b}\\ &\geq&\frac{\alpha}{|k|^\tau}-|k_{r+1}|\sqrt{b}\\ &\geq& \frac{\alpha}{|k|^\tau}-|k_{r+1}|4\delta\sqrt{\varepsilon}\\ &\geq&\frac{\alpha}{|k|^\tau}-4\delta|k|\sqrt{\varepsilon_0}. \end{eqnarray*} If $\frac{1}{|k|^{\tau+1}}\geq \frac{8\delta\sqrt{\varepsilon_0}}{\alpha}$, then $$\big|g(\varepsilon)\big|\geq\frac{\alpha}{2|k|^\tau}\geq\frac{\alpha_0}{|k|^{5\tau+4}},$$ and $O_k=\emptyset$. 
Suppose that $\frac{1}{|k|^{\tau+1}}< \frac{8\delta\sqrt{\varepsilon_0}}{\alpha}$, it follows from \eqref{solution2} that $$\mbox{meas}(O_k)<\frac{\alpha_0}{|k|^{5\tau+4}|k_{r+1}|\delta}.$$ Thus, \begin{eqnarray*} \mbox{meas}\left(\bigcup_{0\not=k\in\mathbb{Z}^{r+1}}O_k\right)&\leq& \frac{\alpha_0}{\delta}\sum_{\frac{1}{|k|^{\tau+1}}< \frac{8\delta\sqrt{\varepsilon_0}}{\alpha}}\frac{1}{|k|^{5\tau+4}|k_{r+1}|}\\ &\leq&\frac{\alpha_0}{\delta}\frac{(8\delta)^4\varepsilon_0^2}{\alpha^4}\sum_{k\in\mathbb{Z}^{r+1}}\frac{1}{|k|^\tau |k_{r+1}|}\\ &\leq& c\varepsilon_0^2\sum_{0\not=k_{r+1}\in\mathbb{Z}}\frac{1}{|k_{r+1}|^{\tau+1}}\\ &\leq& c\varepsilon_0^2. \end{eqnarray*} Then $$\lim_{\varepsilon_0\rightarrow 0}\frac{\mbox{meas}\left(\bigcup_{0\not=k\in\mathbb{Z}^{r+1}}O_k\right)}{\varepsilon_0}=0.$$ Therefore, \eqref{solution1} holds for most sufficiently small $\varepsilon\in(0,\varepsilon_0)$. Thus, all solutions of equation \eqref{exp1} are quasi-periodic with frequencies $\Omega=(\omega_1, \cdots,\omega_r, \sqrt{b})$ for most sufficiently small $\varepsilon>0$. $\square$ \section*{References} \end{document}
\begin{document} \title[Integration and differential equations for typical paths]{Stochastic integration and differential equations for typical paths} \author{Daniel Bartl$^\ast$ \and Michael Kupper$^\times$ \and Ariel Neufeld$^+$} \thanks{ $^\ast$Department of Mathematics, University of Vienna, [email protected]. \\ $\phantom{ddd}$$^\times$Department of Mathematics, University of Konstanz, [email protected].\\ $\phantom{ddd}$$^+$Division of Mathematical Sciences, NTU Singapore, [email protected]. } \keywords{F\"ollmer integration; Pathwise Stochastic Integral; Pathwise SDE;\\ $\phantom{ddd}$Infinite dimensional stochastic calculus; Vovk's outer measure} \date{\today} \subjclass[2010]{60H05; 60H10; 60H15; 91G20 } \begin{abstract} The goal of this paper is to define stochastic integrals and to solve stochastic differential equations for typical paths taking values in a possibly infinite dimensional separable Hilbert space without imposing any probabilistic structure. In the spirit of \cite{PerkowskiPromel16,Vovk12} and motivated by the pricing duality result obtained in \cite{bartl2017pathwise}, we introduce an outer measure as a variant of the pathwise minimal superhedging price where agents are allowed to trade not only in $\omega$ but also in $\int\omega\,d\omega:=\omega^2 -\langle \omega \rangle$ and where they are allowed to include beliefs in future paths of the price process expressed by a prediction set. We then say that a property holds for typical paths if the set of paths where the property fails is null with respect to our outer measure. It turns out that adding the second term $\omega^2 -\langle \omega \rangle$ in the definition of the outer measure makes it possible to directly construct stochastic integrals which are continuous, even for typical paths taking values in an infinite dimensional separable Hilbert space.
Moreover, when restricting to continuous paths whose quadratic variation is absolutely continuous with uniformly bounded derivative, a second construction of model-free stochastic integrals for typical paths is presented, which then allows us to solve stochastic differential equations for typical paths in a model-free way. \end{abstract} \maketitle \setcounter{equation}{0} \section{Introduction} {In this paper we investigate the problem of constructing pathwise stochastic integrals as well as solutions of stochastic differential equations without a reference probability measure. It is well-known that defining a stochastic integral is a highly non-trivial problem and cannot be deduced directly from classical measure-theoretical calculus, as in general, stochastic processes describing the noise of the dynamics do not have finite variation paths. } The It\^o integral and the corresponding It\^o calculus have been developed overcoming the obstacle of how to define integrals and differential equations when noise occurs. However, its construction heavily depends on a probabilistic structure and cannot be defined pathwise. More precisely, the construction of the stochastic integral is accomplished by an $L^2(P)$-limit procedure, and from the Bichteler-Dellacherie theorem it is known that the only class of good integrators for which the integral is, in a suitable sense, a continuous operator, is the class of semimartingales. Later, there were several approaches to define stochastic integrals pathwise, without assuming any probabilistic structure. This allows one to consider more general paths as integrators, rather than semimartingale paths.
Moreover, motivated from mathematical finance, pathwise stochastic calculus can be employed to price financial derivatives without assuming any probabilistic model on the financial market, leading to robust prices; see \cite{AcciaioBeiglbockPenknerSchachermayer.16,bartl2017pathwise,bartl2017duality,beiglbock2017pathwise,BurzoniFrittelliMaggis.16,DolinskySoner.12} to name but a few. {The first result which provides a construction of a stochastic integral without imposing any probabilistic structure was given in F\"ollmer \cite{Follmer81}. In Bichteler~\cite{Bichteler81} and Karandikar~\cite{Karandikar95} a pathwise construction of the stochastic integral was proposed for c\`adl\`ag integrands which makes it possible to solve the so-called aggregation problem of defining a stochastic integral which coincides with the classical stochastic integral simultaneously for all semimartingale measures. This has been used to price financial derivatives under Knightian uncertainty, see \cite{NeufeldNutz13,NutzSoner12,SonerTouziZhang13}. A solution for the above aggregation problem, under the continuum hypothesis, was obtained in Nutz~\cite{Nutz12} for general predictable integrands using medial limits. In Lyons~\cite{Lyons95} F\"ollmer's pathwise stochastic calculus has been extended to obtain prices of American and European options under volatility uncertainty. In Davis--Obloj--Raval~\cite{DavisOblojRaval14} F\"ollmer's pathwise stochastic calculus has been employed to price weighted variance swaps when a finite number of European call and put options for a known price are traded. In Cont--Fourni\'e~\cite{ContFournie10} pathwise stochastic integrals with a directional derivative (whose construction goes back to Dupire~\cite{Dupire09}) of a non-anticipative functional as integrand have been constructed and a change of variable formula for such integrals was obtained.
Using this framework, an It\^o isometry for such integrals was established in Ananova--Cont~\cite{AnanovaCont16}, whereas in Riga~\cite{Riga16} a pathwise notion for the gain process with respect to corresponding self-financing trading strategies was introduced. In Cont--Perkowski~\cite{cont2018pathwise} it was shown that one can extend F\"ollmer's pathwise It\^o calculus to paths with arbitrary regularity by employing the concept of $p$-th variation along a sequence of time partitions. Moreover, during the reviewing process of this work, the existence and uniqueness of pathwise SDEs has been established in Galane--Lochowski--Mhlanga~\cite{galane2018sdes} using a similar approach based on deriving a model-free version of the Burkholder--Davis--Gundy inequality. Furthermore, for the pathwise construction of stochastic integrals with respect to c\`adl\`ag integrators, we refer to Hirai~\cite{Hirai17} as well as Lochowski--Perkowski--Pr\"omel~\cite{LochowskiPerkowskiPromel17}.} Recently, motivated by game-theoretic considerations, Vovk introduced in \cite{Vovk12} an outer measure on the space of continuous paths and declared an event to be typical if its complement is null with respect to the defined outer measure. He then showed that typical paths possess a quadratic variation. In other words, it was shown in \cite{Vovk12} that paths which do not possess a quadratic variation allow a form of arbitrage. Vovk's approach was employed in Perkowski--Pr\"omel~\cite{PerkowskiPromel16} to define an outer measure which can be interpreted as the pathwise minimal superhedging price motivated from financial mathematics. Using their outer measure they constructed a model-free stochastic integral which is continuous for typical price paths and connected their typical paths with rough paths by demonstrating that every typical price path possesses an It\^o rough path.
Moreover, Vovk~\cite{Vovk15purely} and Vovk--Shafer~\cite{VovkShafer17} provide several additional constructions of model-free stochastic integrals for typical paths. In \cite{peng2007g,peng2010nonlinear} It\^o calculus with respect to the so-called $G$-Brownian motion as integrator has been developed by Peng, which is motivated from financial mathematics when investigating pricing and portfolio optimization problems under volatility uncertainty. There, by referring to the notion of typical paths, the pathwise integral and the corresponding stochastic calculus are defined for typical paths with respect to the so-called $G$-expectation. {The goal of this work is to provide a construction of a model-free stochastic integral for typical paths which allows one to solve stochastic differential equations pathwise. Our setting is similar to the one in Perkowski--Pr\"omel~\cite{PerkowskiPromel16}. More precisely, we introduce an outer measure which is defined as a variant of the pathwise minimal superhedging price and say that a property holds for typical paths if the set of paths where the property fails is null with respect to our outer measure. The main difference, compared to the outer measure in \cite{PerkowskiPromel16}, is that in our definition hedging is not only allowed in $\omega$ representing the price path of the risky security, but also in the second security $\omega^2-\langle \omega \rangle$. This roughly means that superhedging strategies both in $\omega$ and $\int \omega \,d\omega$ are permitted. It turns out that adding the second term in the definition of the outer measure makes it possible to directly define stochastic integrals which are continuous, even for paths taking values in an infinite dimensional separable Hilbert space, see Theorem~\ref{thm:integral}.
Its proof is based on an elementary, but crucial observation provided in Lemma~\ref{lem:integrals.squared} using heavily the second order term in the definition of the outer measure, which is then employed to derive a Burkholder--Davis--Gundy (BDG) type of inequality, see Proposition~\ref{prop:bdg.calE}. To be able to solve stochastic differential equations pathwise, a second construction of model-free stochastic integrals is provided when restricting to all paths possessing an absolutely continuous quadratic variation whose derivative is uniformly bounded, see Theorem~\ref{thm:integral.Xi.has.assumptions}. This notion of a model-free stochastic integral allows us to solve stochastic differential equations for typical paths taking values in a possibly infinite dimensional Hilbert space, see Theorem~\ref{thm:sde}.} The remainder of this paper is organized as follows. In Section~\ref{sec:SetupMainResults}, we introduce the setup and state our main results, whose proofs are then provided in Section~ \ref{sec:Proof}. \section{Setup and main results} \label{sec:SetupMainResults} Let $H$ be a separable Hilbert space with scalar product $\langle\cdot,\cdot\rangle_H$ and respective norm $\|h\|_H=\langle h, h\rangle_H^{1/2}$. For a finite time horizon $T>0$ we denote by $C([0,T],H)$ the space of all continuous paths $\omega\colon [0,T]\to H$ endowed with the supremum norm $\sup_{t\in[0,T]} \|\omega(t)\|_H$. 
Let $\Omega$ be the Borel set of all $\omega\in C([0,T],H)$ for which the \emph{pathwise quadratic variation} $\langle\omega\rangle$ given by \[\langle\omega\rangle_t:= \lim_{m\to\infty} \sum_{k=1}^\infty \Big\|\omega\big(\sigma^m_{k}(\omega)\wedge t\big) - \omega\big(\sigma^m_{k-1}(\omega)\wedge t\big)\Big\|_H^2\] exists as a limit in the supremum norm in $C([0,T],\mathbb{R})$ along the dyadic partition \[\sigma^m_k(\omega):=\inf\left\{t\geq \sigma^m_{k-1}(\omega) : \|\omega(t)-\omega(\sigma^m_{k-1}(\omega))\|_H\ge 2^{-m}\right\}\] with $\sigma^m_0(\omega)=0$ for all $k,m\in\mathbb{N}$. Define the processes $S\colon[0,T]\times\Omega\to H$ and $\mathbb{S}\colon[0,T]\times\Omega\to \mathbb{R}$ by \[ S_t(\omega):=\omega(t)\qquad\text{ and }\qquad \mathbb{S}_t(\omega):=\|S_t(\omega)\|_H^2 -\langle\omega\rangle_t,\] and let $\mathbb{F}$ be the raw filtration on $\Omega$ given by $\mathcal{F}_t:=\sigma(S_s: s\leq t)$ for all $t\in[0,T]$ and $\mathbb{F}_+$ its right-continuous version. Given another separable Hilbert space $K$, denote by $L(H,K)$ the Banach space of all bounded linear operators $F\colon H \to K$ endowed with the operator norm $\|F\|_{L(H,K)}:=\sup_{\|h\|_H\le 1}\|F(h)\|_K$. We denote by $\mathcal{H}_s(H,K)$ the set of all \emph{simple integrands}, i.e.~processes $F\colon [0,T]\times \Omega\to L(H,K)$ of the form \[F_t(\omega)=\sum_{n=0}^\infty f_n(\omega)1_{( \tau_n({\omega}),\tau_{n+1}({\omega})]}(t)\] where $0=\tau_0\leq\dots \leq \tau_n\leq\tau_{n+1}\leq \dots\leq T$ are $\mathbb{F}_+$-stopping times such that for each $\omega$ there is $n(\omega)$ such that $\tau_{n(\omega)}(\omega)=T$ and the functions $f_n\colon\Omega\to L(H,K)$ are $\mathcal{F}_{\tau_n+}$-measurable.
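The stopping times $\sigma^m_k$ and the pathwise quadratic variation lend themselves to a direct discrete implementation. The sketch below is our own discretization for real-valued paths sampled on a grid (all paths and parameters are illustrative choices, not taken from the paper): it reads off the crossing times at scale $2^{-m}$ and sums the squared increments. For a smooth path the result vanishes as $m$ grows, while for a Brownian-type scaled random walk on $[0,1]$ it is close to $1$.

```python
import random

def qv(path, m):
    # quadratic variation of a sampled real-valued path along the stopping
    # times sigma^m_k: the next stopping index is the first sample where the
    # path has moved by at least 2^-m since the previous one (grid stand-in
    # for the continuous-time definition).
    h = 2.0 ** (-m)
    last = path[0]
    total = 0.0
    for x in path[1:]:
        if abs(x - last) >= h:
            total += (x - last) ** 2
            last = x
    return total

# A smooth (finite-variation) path: omega(t) = t on [0,1]
line = [i / 10000 for i in range(10001)]

# A scaled random walk, a discrete stand-in for a Brownian path on [0,1]
random.seed(0)
n = 200000
step = 1.0 / n ** 0.5
walk = [0.0]
for _ in range(n):
    walk.append(walk[-1] + random.choice((-step, step)))
```

For the smooth path each of the roughly $2^m$ crossings contributes about $2^{-2m}$, so the sum is of order $2^{-m}$; for the random walk the optional stopping argument keeps the sum near the total time $1$.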
For such a simple integrand $F$ the stochastic integral $(F\cdot N)$ against any process $N\colon[0,T]\times\Omega\to H$ can be defined pathwise \[ (F \cdot N)_t(\omega) :=\sum_{n=0}^\infty f_n(\omega)\big(N^t_{\tau_{n+1}}(\omega)-N^t_{\tau_n}(\omega)\big) \in K \] where $N^t_s:=N_{s\wedge t}$. Notice that processes in $\mathcal{H}_s(H,\mathbb{R})$ can be viewed as simple integrands with values in $H$ by identifying $L(H,\mathbb{R})$ with $H$. Our results strongly rely on the following modified version of Vovk's \cite{Vovk12} and Perkowski--Pr\"omel's \cite{PerkowskiPromel16} outer measure. If not explicitly stated otherwise, all (in-)equalities between functions $X:\Omega\to[-\infty,+\infty]$ are understood pointwise on $\Omega$. \begin{definition} \label{def:second-vovk} Let $\Xi\subseteq\Omega$ be a fixed prediction set. Then for all $X\colon\Omega\to[0,+\infty]$ we define \[ \mathcal{E}(X):= \inf\!\left\{ \lambda\geq 0 \,: \begin{array}{l} \text{there are $(F^n)$ in $\mathcal{H}_s(H,\mathbb{R})$ and $(G^n)$ in $\mathcal{H}_s(\mathbb{R},\mathbb{R})$ such that}\\ \text{$\lambda+(F^n\cdot S)_t+(G^n\cdot \mathbb{S})_t\geq 0$ on $\Xi$ for all $n$ and $t\in[0,T]$, and} \\ \text{$\lambda+ \liminf_n\big( (F^n\cdot S)_T +(G^n\cdot \mathbb{S})_T \big)\geq X$ on $\Xi$ } \end{array}\!\right\}. \] Moreover, we say that a property holds for \emph{typical paths} (on $\Xi$) if $\mathcal{E}(1_N)=0$ for the set $N$ where the property fails. \end{definition} {\begin{remark}\label{rem:finance} Motivated by the work of Vovk \cite{Vovk12} and Perkowski--Pr\"omel~\cite{PerkowskiPromel16}, our outer measure is defined as a variant of the pathwise minimal superhedging price. Here, the pathwise superhedging property only needs to hold with respect to a predefined prediction set of paths. 
Such a superhedging price, which can be seen as a second-order Vovk approach, was introduced in \cite{bartl2017pathwise} and made it possible to prove a pricing duality result when the financial agent is allowed to include beliefs in future paths of the price process expressed by a prediction set $\Xi$, while eliminating all those which are seen as impossible. This reduces the (robust) superhedging price, which typically leads to too high prices, see \cite{DolinskyNeufeld.16,Neufeld.17}. We refer to \cite{bartl2017duality,HouObloj.15,Mykland.03} for related works regarding prediction sets and their relation to pricing of financial derivatives. \end{remark}} {\begin{remark}\label{rem:Vovk-quad} We restrict ourselves to paths for which the quadratic variation exists to a priori be able to define our outer measure $\mathcal{E}(\cdot)$. However, Vovk showed in \cite{Vovk12} that typical paths with respect to his outer measure automatically possess a quadratic variation. Comparing our outer measure with the one in \cite{Vovk12}, we can argue as in Perkowski--Pr\"omel~\cite[Lemma~2.9]{PerkowskiPromel16} that our outer measure forces typical paths to possess a quadratic variation, meaning that in fact it was not a restriction to consider only paths with finite quadratic variation. \end{remark}} From now on we fix a prediction set $\Xi \subseteq \Omega$ and consider the outer measure $\mathcal{E}(\cdot)$ with respect to $\Xi$. Further, we denote by $\mathcal{M}(\Xi)$ the set of \emph{martingale measures supported on $\Xi$}, i.e.~all Borel probability measures $P$ on $\Omega$ such that $(S_t)$ is a $P$-$\mathbb{F}$-martingale and $P(\Xi)=1$. The function $t\mapsto \langle\omega\rangle_t$ is continuous and nondecreasing for all $\omega\in\Omega$, thus induces a finite measure on $[0,T]$.
Therefore, we denote by \[( F \cdot \langle S\rangle)_t(\omega):= \int_0^t F_u(\omega)\, d\langle S(\omega)\rangle_u\] the Lebesgue-Stieltjes integral with respect to a function $F\colon[0,T]\times\Omega \to \mathbb{R}$ such that $F(\omega)$ is measurable and $\int_0^T|F_u(\omega)|\, d \langle S (\omega)\rangle_u < +\infty$ for all $\omega \in \Omega$ and set $( F \cdot \langle S\rangle)_t(\omega):=+\infty$ otherwise. Now, we start with our first result, stating that for any prediction set $\Xi\subseteq \Omega$ we can define, for typical paths, stochastic integrals which are continuous. To that end, for any $F\colon [0,T]\times \Omega \to L(H,K)$ we introduce \begin{equation*} \|F\|_{\mathcal{H}^\infty(H,K)}:=\sup_{\omega\in\Xi} \big(\|F\|^2_{L(H,K)} \cdot \langle S\rangle\big)_T^\frac{1}{2}(\omega) \end{equation*} and define the space of integrands \[ \mathcal{H}^\infty(H,K) :=\left\{ F\colon\Omega\times[0,T]\to L(H,K) : \begin{array}{l} \|F\|_{\mathcal{H}^\infty(H,K)}<+\infty, \text{ and there exists}\\ \text{a sequence } (F^n) \text{ in } \mathcal{H}_s(H,K)\\ \text{such that } \|F-F^n\|_{\mathcal{H}^\infty(H,K)} \to 0 \end{array}\!\right\}.\] \begin{theorem} \label{thm:integral} Let $F \in \mathcal{H}^\infty(H,K)$ and assume that $K$ is finite dimensional. Then the stochastic integral \[(F\cdot S)\colon \Omega\to C([0,T],K) \] exists and satisfies the following weak Burkholder--Davis--Gundy (BDG) type of inequality \[\mathcal{E}\Big( \sup_{t\in[0,T]} \|(F\cdot S)_t\|_K^2 \Big) \leq 4 \sup_{\omega\in \Xi} (\|F\|_{L(H,K)}^2\cdot \langle S\rangle)_T(\omega) =4 \|F\|_{\mathcal{H}^\infty(H,K)}^2.\] Moreover, the space $\mathcal{H}^\infty(H,K)$ and the stochastic integral are linear (for typical paths) and the latter coincides with the classical It\^o-integral under every martingale measure $P\in\mathcal{M}(\Xi)$. \end{theorem} { \begin{remark}\label{rem-integrand-infty} Note that the set of integrands $\mathcal{H}^\infty(H,K)$ is large for natural choices of prediction sets.
Indeed, if, e.g., $\Xi\subseteq\{\omega\in\Omega : \langle\omega\rangle_T\leq c\}$ for some constant $c>0$, then clearly $\|F\|_{\mathcal{H}^\infty(H,K)}\leq c^{\frac{1}{2}} \sup_{t\in[0,T],\omega\in\Xi} \|F_t(\omega)\|_{L(H,K)}$. In particular, as every bounded, adapted, and c\`adl\`ag function $F\colon [0,T]\times \Omega\to L(H,K)$ can be approximated uniformly by simple integrands, it then follows that $F\in \mathcal{H}^\infty(H,K)$. Moreover, if we require $\Xi\subseteq \Xi_c$, where, for a constant $c\geq0$, $\Xi_c\subseteq \Omega$ denotes the prediction set given by \begin{align} \label{Xi-c} \Xi_c=\bigg\{\omega\in C([0,T],H) : \begin{array}{l} \omega \text{ is H\"older continuous and }\langle \omega\rangle \text{ is}\\ \text{absolutely continuous with }d\langle \omega\rangle/dt \leq c \end{array} \!\bigg\}, \end{align} then $\|F\|_{\mathcal{H}^\infty(H,K)}\leq \sup_{\omega\in\Xi_c} \big(c\int_0^T \|F_t(\omega)\|_{L(H,K)}^2\,dt\big)^{\frac{1}{2}}$. In particular, $\mathcal{H}^\infty(H,K)$ contains for instance all deterministic $L^2(dt)$-Borel functions $F$. \end{remark}} \begin{remark} If $K$ is a general (not finite dimensional) Hilbert space, then the stochastic integral $(F\cdot S)$ exists for every $F\in\mathcal{H}^\infty(H,K)$. However, it remains open whether it has a continuous modification. We refer to Remark~\ref{rem:finite-dim2} for further details. \end{remark} \begin{remark} Throughout this paper we work with the real-valued quadratic variation $\langle S\rangle$ of the $H$-valued process $S$. 
However, for $K=\mathbb{R}$ one can instead consider the tensor-valued process $\text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S\text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}}$ defined by \[\text{\ensuremath{\langle\hspace*{-0.5ex}\langle}}\omega\text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}}_t:= \lim_{m\to\infty} \sum_{k=1}^\infty \Big(\omega\big(\sigma^m_{k}(\omega)\wedge t\big) -\omega\big(\sigma^m_{k-1}(\omega)\wedge t\big)\Big)\otimes \Big(\omega\big(\sigma^m_{k}(\omega)\wedge t\big) -\omega\big(\sigma^m_{k-1}(\omega)\wedge t\big)\!\Big)\] where $\sigma^m_k(\omega):=\inf\left\{t\geq \sigma^m_{k-1}(\omega) : \|\omega(t)-\omega(\sigma^m_{k-1}(\omega))\|_H \ge 2^{-m}\right\}$ with $\sigma^m_0(\omega)=0$ for all $k,m\in\mathbb{N}$, and where $\otimes$ denotes the tensor product. Then the processes $\text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S \text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}}$ and $\mathbb{S}:=S\otimes S - \text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S \text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}}$ take values in the tensor space $H\otimes H$. In this setting, $\mathcal{E}(\cdot)$ can be defined as before with the difference that the integrands $G^n$ are elements of $\mathcal{H}_s(H\otimes H,\mathbb{R})$. In the weak BDG inequality of Theorem \ref{thm:integral}, the term $(\|F\|_{L(H,\mathbb{R})}^2\cdot \langle S\rangle)$ has to be replaced by ``$(F\otimes F \cdot \text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S\text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}})$'', see e.g.~\cite[Chapter 20]{metivier1982semimartingales} for more details on tensor quadratic variation. 
Note that in the case $H=\mathbb{R}^d$ we have $H\otimes H=\mathbb{R}^{d\times d}$, the process $\text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S \text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}}$ is the symmetric matrix containing the pairwise covariation of all components of $S$, and $(F\otimes F \cdot \text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S\text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}})=\sum_{i,j} ( F^iF^j\cdot \langle S^i,S^j\rangle)$. Replacing $\langle S\rangle$ by $\text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S\text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}}$ might be of interest for the following reason: The prediction set $\Xi$ may include different predictions for the quadratic variation and covariation of different components of $S$. While this is ignored in $\sup_{\omega\in\Xi}(\|F\|_{L(H,K)}^2\cdot \langle S\rangle)_T(\omega)$, it is incorporated in $\sup_{\omega\in\Xi}(F\otimes F \cdot \text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S \text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}})_T(\omega)$, and the integral $(F\cdot S)$ can potentially be defined for a larger space of integrands $F$. \end{remark} { \begin{remark}\label{rem:rough-path} Note that the process $\mathbb{S}$ used in Definition~\ref{def:second-vovk} to define our stochastic integral also appears in rough path theory. Indeed, the iterated integral $\frac{1}{2}(S\otimes S - \text{\ensuremath{\langle\hspace*{-0.5ex}\langle}} S \text{\ensuremath{\rangle\hspace*{-0.5ex}\rangle}})$ is a reduced rough path which allows one to define pathwise integrals of gradient 1-forms, see \cite[Section~5]{FrizHairer2014}. Nevertheless, the constructions of the corresponding integrals are different. The limit procedure with respect to the outer measure allows us to obtain a larger class of integrands, see Remark~\ref{rem-integrand-infty}. 
However, while our integrals are only defined for typical paths, rough path theory allows one to construct integrals pathwise for regular enough integrands. \end{remark}} To be able not only to define stochastic integrals for typical paths, but also to solve stochastic differential equations, we need to control the quadratic variation of typical paths. {To that end, for the rest of this section, we work with the particular prediction set $\Xi_c\subseteq \Omega$ defined in \eqref{Xi-c}. } For any $F\colon [0,T]\times \Omega \to L(H,K)$, set \begin{equation*} \|F\|_{\mathcal{H}^2(H,K)}:=\bigg(\int_0^T\mathcal{E}\big(\|F_t\|_{L(H,K)}^2\big)\,dt\bigg)^{\frac{1}{2}}. \end{equation*} For the prediction set $\Xi_c$ it turns out that stochastic integrals can be defined for integrands lying in the set \[ \mathcal{H}^2(H,K) :=\left\{ F\colon\Omega\times[0,T]\to L(H,K) : \begin{array}{l} \|F\|_{\mathcal{H}^2(H,K)}<+\infty, \text{ and there exists a}\\ \text{sequence } (F^n) \text{ in } \mathcal{H}_{s,c}(H,K)\text{ such that}\\ \|F-F^n\|_{\mathcal{H}^2(H,K)} \to 0 \end{array}\!\right\},\] where $\mathcal{H}_{s,c}(H,K)$ is the set of those $\sum_{n=0}^\infty f_n1_{( \tau_n,\tau_{n+1}]}\in\mathcal{H}_s(H,K)$ such that the stopping times $(\tau_n)$ are deterministic and $f_n\colon\Omega\to L(H,K)$ is continuous for each $n$. Note that $\mathcal{H}^2(H,K)$ is a linear space. More precisely, the following result holds true. \begin{theorem} \label{thm:integral.Xi.has.assumptions} {Let $\Xi\equiv\Xi_c$ be the prediction set defined in \eqref{Xi-c}} and let $F \in \mathcal{H}^2(H,K)$. Then the stochastic integral \[(F\cdot S)\colon \Omega\to C([0,T],K) \] exists and satisfies the following weak BDG-type inequality \[\mathcal{E}\Big( \sup_{t\in[0,T]} \|(F\cdot S)_t\|_K^2 \Big) \leq 4c \int_0^T \mathcal{E}(\|F_t\|_{L(H,K)}^2)dt =4c \|F\|_{\mathcal{H}^2(H,K)}^2.\] Moreover, it coincides with the classical stochastic integral under every martingale measure $P\in\mathcal{M}(\Xi_c)$. 
In addition, if $f\colon [0,T]\times K\to L(H,K)$ is Lipschitz continuous, then the map $\Omega \times [0, T ]\ni (\omega,t)\mapsto f (t, (F \cdot S)_t (\omega)) \in L(H, K)$ is an element of $\mathcal{H}^2(H,K)$. \end{theorem} { \begin{remark}\label{rem:integrand-2} Already under some mild regularity assumptions on $F\colon [0,T]\times \Omega\to L(H,K)$, we have $F\in\mathcal{H}^2(H,K)$. More precisely, the following holds true. Let $F\colon [0,T]\times \Omega\to L(H,K)$ be continuous such that $\|F_t-F_s\|_{L(H,K)}\leq \rho(|t-s|) (1+\sup_{r\in[0,T]}\|S_r\|_H^p)$ for some $p\in[1,\infty)$ and some continuous function $\rho$ which satisfies $\rho(0)=0$. Then $F\in\mathcal{H}^2(H,K)$. We provide its proof in Subsection~\ref{subsec:proof-integral-ass}. \end{remark} } To be able to define a notion of a solution of a stochastic differential equation for typical paths, let $A\colon [0,T]\times\Omega\to\mathbb{R}$ be a process such that $\omega\mapsto A_t(\omega)$ is continuous for all $t$ and $t\mapsto |A(\omega)_t|$ is absolutely continuous with $d |A|(\omega)/dt\le c$ for all $\omega\in\Omega$. Moreover, let $\mu\colon [0,T]\times K\to L(\mathbb{R},K)$ and $\sigma\colon [0,T]\times K\to L(H,K)$ be two functions which satisfy the following. \begin{assumption}\label{ass:SDE} There is a constant $L>0$ such that for all $k,k^\prime\in K$ and $t,t^\prime\in[0,T]$ we have that \begin{align} \label{eq:Lipschitz-coeff} \begin{split} \|\sigma(t,k)-\sigma(t^\prime,k^\prime)\|_{L(H,K)}&\le L(|t-t^\prime|+\|k-k'\|_K),\\ \|\mu(t,k)-\mu(t^\prime,k^\prime)\|_{L(\mathbb{R},K)}&\le L(|t-t^\prime|+\|k-k'\|_K). \end{split} \end{align} \end{assumption} Then we can state our third main result, which establishes the existence of solutions of stochastic differential equations for typical paths. \begin{theorem} \label{thm:sde} {Let $\Xi\equiv\Xi_c$ be the prediction set defined in \eqref{Xi-c}} and assume that Assumption~\ref{ass:SDE} holds. Moreover, let $x_0 \in K$. 
Then there exists a unique (up to typical paths) $X\colon\Omega\to C([0,T],K)$ such that $X\in\mathcal{H}^2(K,\mathbb{R})$ and $X$ solves the SDE \[ dX_t = \mu(t,X_t)\,dA_t + \sigma(t,X_t)\,dS_t,\quad X_0=x_0, \] i.e.~\[X_t=x_0+(\mu(\cdot,X) \cdot A)_t+ (\sigma(\cdot,X)\cdot S)_t\] for typical paths. \end{theorem} For the precise definition of $(\mu(\cdot,X) \cdot A)$ and $(\sigma(\cdot,X)\cdot S)$ see Lemma \ref{lem:intA} \& Remark \ref{rem2:identification} and Lemma~\ref{le:f-Lip-nice} \& Remark~\ref{rem:identification}, respectively. \begin{remark}\label{rem:explanationPart1} We point out that with our methods we cannot solve SDEs for typical paths using the space $\mathcal{H}^\infty(H,K)$ instead of $\mathcal{H}^2(H,K)$, {even when $\Xi\equiv\Xi_c$}. The reason is that the corresponding norm $\|\cdot\|_{\mathcal{H}^\infty(H,K)}$ is too strong to obtain a (similar) result stating that if $F \in \mathcal{H}^\infty(H,K)$ and $f\colon [0,T]\times K\to L(H,K)$ is Lipschitz continuous, then $f(\cdot,(F\cdot S))\in\mathcal{H}^\infty(H,K)$. But such a relation is the key property necessary to solve SDEs. We refer to Lemma~\ref{le:f-Lip-nice} for further details. \end{remark} {The main reason why we restrict ourselves to the prediction set $\Xi_c$ in Theorem~\ref{thm:integral.Xi.has.assumptions} and Theorem~\ref{thm:sde} is the following duality result going back to \cite{bartl2017pathwise}, which is heavily used in the respective proofs. \begin{theorem}\label{thm:dual-Xi-c} Let $\Xi\equiv\Xi_c$ be the prediction set defined in \eqref{Xi-c}. Then for every $X\colon C([0,T],H)\to[0,+\infty]$ which is the pointwise limit of an increasing sequence $X_n\colon\Omega\to[0,+\infty]$ of upper semicontinuous functions one has \begin{equation} \label{eq:duality-Xi-c} \mathcal{E}(X)=\sup_{P\in\mathcal{M}(\Xi_c)} E_P[X]. \end{equation} In particular, the duality \eqref{eq:duality-Xi-c} holds for every nonnegative upper or lower semicontinuous function $X\colon C([0,T],H)\to[0,+\infty]$. 
\end{theorem} \begin{remark}\label{rem:quasi-sure} From the duality result in Theorem~\ref{thm:dual-Xi-c}, we see that a set $N\subseteq\Omega$ satisfies $\mathcal{E}(1_N)=0$ if and only if $P(N)=0$ for all $P \in \mathcal{M}(\Xi_c)$. Moreover, the duality result shows that the stochastic integral defined in Theorem~\ref{thm:integral.Xi.has.assumptions} and the corresponding solution of the stochastic differential equation in Theorem~\ref{thm:sde} provide a way to solve the aggregation problem appearing in the quasi-sure setting with respect to $\mathcal{M}(\Xi_c)$ $($see, e.g., \cite{Nutz12}$)$, which has particular applications in financial mathematics when model uncertainty occurs. \end{remark} \begin{remark}\label{rem:gen-Xi-c} In fact, Theorem~\ref{thm:integral.Xi.has.assumptions} and Theorem~\ref{thm:sde} could have been stated for more general prediction sets $\Xi\subseteq \Xi_c$ satisfying the conditions imposed in Theorem~\ref{thm:dual} to obtain the duality result analogous to Theorem~\ref{thm:dual-Xi-c}. However, since the conditions are rather technical and our goal is to include as many paths as possible, we decided to state Theorem~\ref{thm:integral.Xi.has.assumptions} and Theorem~\ref{thm:sde} with respect to $\Xi_c$. \end{remark} } \section{Proofs of our main results} \label{sec:Proof} \subsection{Properties of $\mathcal{E}(\cdot)$} In this subsection, we analyze properties of the outer measure $\mathcal{E}$ which will be crucial to define stochastic integrals and solutions of stochastic differential equations for typical paths. Throughout this subsection, we work with the conventions $0\cdot(+\infty)=(+\infty)\cdot 0= 0$ and $+\infty-\infty=-\infty+\infty=+\infty$. First, observe that directly from its definition, the outer measure $\mathcal{E}$ is sublinear, positively homogeneous, and satisfies $\mathcal{E}(\lambda)\leq \lambda$ for all $\lambda \in [0,+\infty)$. In addition, $\mathcal{E}$ satisfies the following properties. 
\begin{proposition} \label{prop:expectation.is.sigma.subadd} The functional $\mathcal{E}$ is countably subadditive, i.e. \[\mathcal{E}\Big({\textstyle\sum\limits_{n=1}^\infty} X_n\Big) \leq \sum_{n=1}^\infty \mathcal{E}(X_n)\] and satisfies \[\mathcal{E}\Big(\big({\textstyle\sum\limits_{n=1}^\infty} X_n \big)^2\Big)^{\frac{1}{2}} \leq\sum_{n=1}^\infty \mathcal{E}\big(X_n^2\big)^\frac{1}{2}\] for every sequence $X_n\colon\Omega\to[0,+\infty]$, $n \in \mathbb{N}$. Furthermore, it fulfills the Cauchy--Schwarz inequality \[ \mathcal{E}(|X| |Y|)\leq \mathcal{E}(X^2)^\frac{1}{2} \mathcal{E}(Y^2)^\frac{1}{2} \] for all $X,Y\colon\Omega\to[-\infty,+\infty]$. \end{proposition} \begin{proof} The proof of countable subadditivity is the same as in \cite[Lemma 4.1]{Vovk12} and \cite[Lemma 2.3]{PerkowskiPromel16}. However, due to the different setting and in order to be self-contained, we provide a proof. Without loss of generality assume that $\sum_n \mathcal{E}(X_n)<+\infty$. Fix $\varepsilon>0$, a sequence $(c_n)$ in $(0,+\infty)$ such that $\sum_n c_n=\varepsilon$, and let $\lambda_n:=\mathcal{E}(X_n)+c_n$, as well as $\lambda:=\sum_n \lambda_n$. Then, by definition of $\mathcal{E}(X_n)$, for every $n$ there are two sequences of simple integrands $(F^{n,m})_m$ and $(G^{n,m})_m$ such that \[ \lambda_n+(F^{n,m}\cdot S)_t+(G^{n,m}\cdot \mathbb{S})_t\geq 0\quad\text{for all $t\in[0,T]$ and $m$}\] and \[ \lambda_n+\liminf_m\big( (F^{n,m}\cdot S)_T+(G^{n,m}\cdot \mathbb{S})_T\big) \geq X_n. \] Now define the simple integrands $F^m:=\sum_{n\le m} F^{n,m}$ and $G^m:=\sum_{n\le m} G^{n,m}$ for each $m$. 
Then $\lambda+(F^m\cdot S)_t+(G^m\cdot \mathbb{S})_t\geq 0$ for all $m$, and superadditivity of $\liminf$ implies for every $k\in\mathbb{N}$ that \begin{align*} &\lambda + \liminf_m \big( (F^m\cdot S)_T+(G^m\cdot \mathbb{S})_T\big)\\ &=\liminf_m\Big( {\textstyle\sum\limits_{n\le m}}\big(\lambda_n + (F^{n,m} \cdot S)_T+(G^{n,m}\cdot \mathbb{S})_T\big)\Big)\\ &\geq \sum_{n \le k} \liminf_m\Big(\lambda_n + (F^{n,m} \cdot S)_T+(G^{n,m}\cdot \mathbb{S})_T\Big) \geq \sum_{n\le k} X_n. \end{align*} Passing to the limit in $k$ yields $\mathcal{E}(\sum_n X_n)\leq \lambda=\sum_n \mathcal{E}(X_n)+\varepsilon$. Since $\varepsilon>0$ was arbitrary, we obtain the first inequality. As for the Cauchy--Schwarz inequality, let $X,Y\colon\Omega\to[-\infty,+\infty]$. First, we assume that $\mathcal{E}(X^2)<+\infty$ and $\mathcal{E}(Y^2)<+\infty$. If $\mathcal{E}(X^2)=0$ or $\mathcal{E}(Y^2)=0$, then the pointwise estimate $|X| |Y|\leq \frac{\alpha X^2}{2}+\frac{Y^2}{2\alpha}$ for all $\alpha>0$, together with sublinearity and monotonicity of $\mathcal{E}$ yields \[\mathcal{E}(|X| |Y|)\leq \frac{\alpha}{2}\mathcal{E}(X^2)+\frac{1}{2\alpha}\mathcal{E}(Y^2)\] so that, letting $\alpha\to+\infty$ or $\alpha\to0$, respectively, $\mathcal{E}(|X| |Y|)=0$. If $\mathcal{E}(X^2)>0$ and $\mathcal{E}(Y^2)>0$, then the previous inequality applied to $\tilde X:= X/\mathcal{E}(X^2)^\frac{1}{2}$ and $\tilde Y:= Y/\mathcal{E}(Y^2)^\frac{1}{2}$ with $\alpha=1$ leads to \[\frac{\mathcal{E}(|X| |Y|)}{ \mathcal{E}(X^2)^\frac{1}{2} \mathcal{E}(Y^2)^\frac{1}{2}}=\mathcal{E}(|\tilde X| |\tilde Y|)\leq \frac{\mathcal{E}(\tilde X^2)}{2}+\frac{\mathcal{E}(\tilde Y^2)}{2}= 1.\] Second, if $\mathcal{E}(X^2)=0$ and $\mathcal{E}(Y^2)=+\infty$, the first part implies $0\le\mathcal{E}(|X|)\leq \mathcal{E}(X^2)^\frac{1}{2}$, i.e.~$\mathcal{E}(|X|)=0$. 
Therefore, the pointwise inequality $|X| |Y| \leq \sum_{n=1}^\infty |X|$ together with the countable subadditivity of $\mathcal{E}$ yields that \[\mathcal{E}\big(|X| |Y|\big)\le \mathcal{E}\Big({\textstyle\sum\limits_{n=1}^\infty} |X|\Big)\le \sum_{n=1}^\infty \mathcal{E}\big( |X|\big)=0.\] In the remaining cases the right-hand side of the Cauchy--Schwarz inequality is infinite, so there is nothing to prove. To show the second statement, let $(X_n)$ be an at most countable family of functions $X_n\colon\Omega\to[0,+\infty]$. By the previous steps we have \begin{align*} \mathcal{E}\Big( \big({\textstyle\sum\limits_{n}} X_n \big)^2 \Big)&= \mathcal{E}\Big( {\textstyle\sum\limits_{n,m}} X_n X_m \Big) \leq \sum_{n,m} \mathcal{E} ( X_n X_m ) \\ &\leq \sum_{n,m} \mathcal{E}(X_n^2)^{\frac{1}{2}} \mathcal{E}(X_m^2)^{\frac{1}{2}} =\Big(\sum_n \mathcal{E}(X_n^2)^{\frac{1}{2}}\Big)^2. \end{align*} Taking square roots completes the proof. \end{proof} \begin{proposition} \label{prop:L2-Banach-Banach} Given an arbitrary Banach space $(B,\|\cdot\|_B)$, we define $\|X\|:=\mathcal{E}(\|X\|_B^2)^{\frac{1}{2}}$ for $X\colon\Omega\to B$. Then the following hold: \begin{enumerate}[(i)] \item\label{seminorm} The functional $\|\cdot\|$ is a semi-norm, i.e.~it only takes non-negative values, is absolutely homogeneous, and satisfies the triangle inequality. \item \label{lem:complete} Every Cauchy sequence $X_n\colon\Omega\to B$, $n\in \mathbb{N}$, w.r.t.~$\|\cdot\|$ has a limit $X\colon \Omega\to B$, i.e.~$\|X_n - X\|\to 0$, and there is a subsequence $(n_k)$ such that $X_{n_k}(\omega)\to X(\omega)$ for typical paths. \end{enumerate} \end{proposition} \begin{proof} It is clear that $\|\cdot\|$ only takes values in $[0,+\infty]$ and is absolutely homogeneous. To show the triangle inequality, let $X,Y\colon\Omega\to B$. It follows from Proposition \ref{prop:expectation.is.sigma.subadd} that \begin{align*} \|X+Y\| &\le \mathcal{E}\big((\|X\|_B+\|Y\|_B)^2\big)^{\frac{1}{2}} \\ &\le \mathcal{E}\big(\|X\|_B^2\big)^{\frac{1}{2}} + \mathcal{E}\big(\|Y\|_B^2\big)^{\frac{1}{2}} = \|X\| + \|Y\|. 
\end{align*} To see that \eqref{lem:complete} holds true, let $(X_n)$ be a Cauchy sequence and choose a subsequence $(X_{n_k})$ such that $\|X_{n_{k+1}}-X_{n_{k}}\|\leq 2^{-k}$. By Proposition \ref{prop:expectation.is.sigma.subadd} it holds \[ \mathcal{E}\Big(\big({\textstyle\sum\limits_{k}} \|X_{n_{k+1}}-X_{n_k}\|_B\big)^2\Big)^{\frac{1}{2}} \leq \sum_{k} \|X_{n_{k+1}}-X_{n_k}\| <+\infty. \] This implies that the set $N:=\{\sum_{k} \|X_{n_{k+1}}-X_{n_k}\|_B=+\infty\}$ satisfies $\mathcal{E}(1_N)=0$. As $B$ is complete, for every $\omega\in N^c$, the sequence $(X_{n_{k}}(\omega))_k$ has a limit. Therefore, $X:=\lim_k X_{n_k}1_{N^c}$ is a mapping from $\Omega$ to $B$ and Proposition \ref{prop:expectation.is.sigma.subadd} yields \[ \|X-X_{n_k}\| \leq \mathcal{E}\Big(\big( {\textstyle\sum\limits_{l\geq k}} \|X_{n_{l+1}}-X_{n_l}\|_B\big)^2 \Big)^{\frac{1}{2}} \leq \sum_{l\geq k} \| X_{n_{l+1}}-X_{n_l} \| \leq 2^{-k+1} \to 0\] as $k$ tends to infinity. Since $(X_n)$ is a Cauchy sequence, the triangle inequality shows that $\|X-X_n\|\leq \|X-X_{n_k}\|+\|X_{n_k}-X_n\|\to 0$ as $k,n\to\infty$. \end{proof} \begin{lemma} \label{lem:weak.duality} For every $P\in\mathcal{M}(\Xi)$ and every measurable $X\colon \Omega\to[0,+\infty]$ one has $E_P[X]\leq\mathcal{E}(X)$. \end{lemma} \begin{proof} Fix $P\in\mathcal{M}(\Xi)$ and $X\colon \Omega\to[0,+\infty]$ measurable. First, if $F\in\mathcal{H}_s(H,\mathbb{R})$ and $G\in\mathcal{H}_s(\mathbb{R},\mathbb{R})$ are of the form \[F=\sum_{n\leq N} f_n1_{(\tau_n,\tau_{n+1}]}\quad\mbox{and}\quad G=\sum_{n\leq N} g_n1_{(\tau_n,\tau_{n+1}]}\] such that $\sup_{n\le N} \|f_n\|_H<\infty$ and $\sup_{n \le N} |g_n|<\infty$ for some $N\in\mathbb{N}$, then the process $(F\cdot S)+(G\cdot \mathbb{S})$ is a continuous martingale (the martingale property follows e.g.~by approximating $f_n$ $P$-a.s.~by functions with finite range and dominated convergence). 
Second, let $\lambda\geq 0$, $F=\sum_{n} f_n1_{(\tau_n,\tau_{n+1}]}$ in $\mathcal{H}_s(H,\mathbb{R})$, and $G=\sum_{n} g_n1_{(\tau_n,\tau_{n+1}]}$ in $\mathcal{H}_s(\mathbb{R},\mathbb{R})$ be such that $\lambda + (F\cdot S)_t + (G\cdot \mathbb{S})_t\geq 0$ on $\Xi$ for all $t$. Define the stopping times \[\sigma_m:=\inf\{t\geq 0 : \|S_t\|_H\geq m \text{ or } \|F_t\|_{L(H,\mathbb{R})}\geq m\}\wedge \tau_m\] and similarly, $\tilde{\sigma}_m:=\inf\{t\geq 0 : |\mathbb{S}_t|\geq m \text{ or } \|G_t\|_{L(\mathbb{R},\mathbb{R})}\geq m\}\wedge \tau_m$. Note that for every $\omega$ there is $m=m(\omega)$ such that $\sigma_m(\omega)=\tilde{\sigma}_m(\omega)=T$. Defining $F^m:= F1_{(0,\sigma_m]}$ in $\mathcal{H}_s(H,\mathbb{R})$ and $G^m:= G1_{(0,\tilde{\sigma}_m]}$ in $\mathcal{H}_s(\mathbb{R},\mathbb{R})$ for every $m$ one observes that $(F^m\cdot S)_t= (F\cdot S)_{t\wedge \sigma_m}$ and $(G^m\cdot\mathbb{S})_t= (G\cdot \mathbb{S})_{t\wedge \tilde{\sigma}_m}$ converge pointwise to $(F\cdot S)_t$ and $(G\cdot \mathbb{S})_t$, respectively. Since $(F^m\cdot S)$ and $(G^m\cdot\mathbb{S})$ are martingales by the first step and $\lambda +(F^m\cdot S)_t+(G^m\cdot\mathbb{S})_t\geq 0$ on $\Xi$ for all $t$, it follows from Fatou's lemma that $(F\cdot S)+(G\cdot\mathbb{S})$ is a supermartingale. Finally, let $\lambda\geq 0$, $(F^n)$ a sequence in $\mathcal{H}_s(H,\mathbb{R})$, and $(G^n)$ a sequence in $\mathcal{H}_s(\mathbb{R},\mathbb{R})$ such that $\lambda+(F^n\cdot S)_t+(G^n\cdot \mathbb{S})_t\geq 0$ on $\Xi$ for all $t$ and $\lambda+ \liminf_n\big( (F^n\cdot S)_T +(G^n\cdot \mathbb{S})_T \big)\geq X$ on $\Xi$. Since $(F^n\cdot S)+(G^n\cdot\mathbb{S})$ is a supermartingale by the previous arguments, it follows from Fatou's lemma that \begin{align*} E_P[X] &\leq E_P\Big[\lambda+\liminf_n ( (F^n\cdot S)_T+(G^n\cdot\mathbb{S})_T ) \Big]\\ &\leq \liminf_n E_P[ \lambda + (F^n\cdot S)_T+(G^n\cdot\mathbb{S})_T ] \leq \lambda. 
\end{align*} As $\lambda$ was arbitrary, this shows $E_P[X]\leq\mathcal{E}(X)$. \end{proof} \subsection{Proof of Theorem~\ref{thm:integral}} The goal of this subsection is to prove Theorem~\ref{thm:integral}. Lemma~\ref{lem:integrals.squared}, though of elementary nature, is the key observation in what follows. More precisely, it is exactly in this lemma that we see that adding the second-order terms, i.e.~integrals with respect to $\mathbb{S}$, in the definition of $\mathcal{E}$ leads to a simple It\^o isometry and, with the help of a pathwise inequality, to a BDG inequality. This, in turn, allows us to directly define stochastic integrals for typical paths which are continuous. \begin{lemma} \label{lem:integrals.squared} For every simple integrand $F\in\mathcal{H}_s(H,K)$ there exists $\tilde{F} \in \mathcal{H}_s(H,\mathbb{R})$ such that for all $t\in[0,T]$ we have \[\|(F\cdot S)_t\|^2_K \leq (\tilde{F}\cdot S)_t+ (\|F\|^2_{L(H,K)}\cdot \mathbb{S})_t + (\|F\|^2_{L(H,K)}\cdot \langle S\rangle)_t.\] In particular, the following weak It\^o isometry holds true. For every $F\in\mathcal{H}_s(H,K)$, it holds that \[\mathcal{E}(\|(F\cdot S)_T\|^2_K) \leq \sup_{\omega\in \Xi}(\|F\|^2_{L(H,K)}\cdot \langle S\rangle)_T(\omega) = \|F\|_{\mathcal{H}^\infty(H,K)}^2.\] \end{lemma} \begin{proof} Fix a simple integrand $F=\sum_nf_n1_{(\tau_n,\tau_{n+1}]}\in\mathcal{H}_s(H,K)$. By definition, when denoting by $f^*_n$ the adjoint operator of $f_n$, we see that \begin{align*} \|(F\cdot S)_t\|^2_K &=\sum_{n,m}\Big\langle f_m (S^t_{\tau_{m+1}}-S^t_{\tau_m}), f_n (S^t_{\tau_{n+1}}-S^t_{\tau_n})\Big\rangle_K\\ & =2\sum_{m<n}\Big\langle f_n^\ast f_m (S^t_{\tau_{m+1}}-S^t_{\tau_m}), S^t_{\tau_{n+1}}-S^t_{\tau_n} \Big\rangle_H +\sum_n \| f_n (S^t_{\tau_{n+1}}-S^t_{\tau_n}) \|_K^2, \end{align*} where all sums are finite by definition of simple integrands. 
Using the inequality $\| f_n (S^t_{\tau_{n+1}}-S^t_{\tau_n}) \|_K^2 \le \|f_n\|^2_{L(H,K)} \|S^t_{\tau_{n+1}}-S^t_{\tau_n}\|^2_H$ and since \begin{align*} \|S^t_{\tau_{n+1}}-S^t_{\tau_n}\|^2_H&= \|S^t_{\tau_{n+1}}\|^2_H- \|S^t_{\tau_{n}}\|^2_H -2 \langle S^t_{\tau_n}, S^t_{\tau_{n+1}}-S^t_{\tau_n}\rangle_H \\ &= \mathbb{S}^t_{\tau_{n+1}} - \mathbb{S}^t_{\tau_{n}} + \langle S\rangle^t_{\tau_{n+1}}- \langle S\rangle^t_{\tau_{n}} -2 \langle S^t_{\tau_n}, S^t_{\tau_{n+1}}-S^t_{\tau_n}\rangle_H \end{align*} it follows that \begin{align*} \|(F\cdot S)_t\|^2_K &\le 2\sum_n\Big\langle \sum_{m<n} f_n^\ast f_m (S^t_{\tau_{m+1}}-S^t_{\tau_m}) - \|f_n\|^2_{L(H,K)} S^t_{\tau_n}, S^t_{\tau_{n+1}}-S^t_{\tau_n} \Big\rangle_H \\ &\qquad +\sum_n \|f_n\|^2_{L(H,K)}(\mathbb{S}^t_{\tau_{n+1}}-\mathbb{S}^t_{\tau_{n}}) +\sum_n \|f_n\|^2_{L(H,K)}(\langle S\rangle^t_{\tau_{n+1}}-\langle S\rangle^t_{\tau_{n}}) \\&=(\tilde{F}\cdot S)_t+ (\|F\|^2_{L(H,K)}\cdot \mathbb{S})_t + (\|F\|^2_{L(H,K)}\cdot \langle S\rangle)_t, \end{align*} where \[\tilde{F}:=2\sum_n \Big(\sum_{m<n} f_n^\ast f_m (S_{\tau_{m+1}}-S_{\tau_m}) - \|f_n\|^2_{L(H,K)} S_{\tau_n}\Big)1_{(\tau_n,\tau_{n+1}]} \in\mathcal{H}_s(H,\mathbb{R}).\] For the second claim let $\lambda:=\sup_{\omega\in \Xi}(\|F\|^2_{L(H,K)}\cdot \langle S\rangle)_T(\omega)$. Since $\tilde{F}\in\mathcal{H}_s(H,\mathbb{R})$, $\|F\|^2_{L(H,K)}\in\mathcal{H}_s(\mathbb{R},\mathbb{R})$, and \begin{align*} &\lambda + (\tilde{F}\cdot S)_t+ (\|F\|^2_{L(H,K)}\cdot \mathbb{S})_t \geq \|(F\cdot S)_t\|_K^2 \geq 0 \quad\text{for all }t\\ &\lambda + (\tilde{F}\cdot S)_T+ (\|F\|^2_{L(H,K)}\cdot \mathbb{S})_T \geq \|(F\cdot S)_T\|_K^2, \end{align*} it follows that $\mathcal{E}(\|(F\cdot S)_T\|_K^2)\leq \lambda=\|F\|_{\mathcal{H}^\infty(H,K)}^2$. \end{proof} We first recall an inequality from \cite{acciaio2013trajectorial} which turns out to be crucial for the proof of the weak BDG inequality. 
The connection between pathwise inequalities as in \eqref{inequality:martingale} and martingale inequalities is studied in \cite{acciaio2013trajectorial,beiglbock2014martingale,beiglbock2015pathwise}. \begin{lemma} \label{lem:bgd.path} Let $N\in\mathbb{N}$. Then for every $(x_0,\dots,x_N)\in\mathbb{R}^{N+1}$ it holds that \begin{equation}\label{inequality:martingale} \max_{0\leq n\leq N} x_n^2\leq 4 x_N^2-4\sum_{n=0}^{N-1}\big(\max_{0\leq i\leq n} x_i\big)(x_{n+1}-x_n). \end{equation} \end{lemma} \begin{proof} This is \cite[Proposition 2.1]{acciaio2013trajectorial} and the remark afterwards. \end{proof} \begin{proposition}[Weak BDG inequality for simple integrands] \label{prop:bdg.calE} Assume that $K$ is finite dimensional. Then for every $F\in\mathcal{H}_s(H,K)$ one has that \[ \mathcal{E}\Big( \sup_{t\in[0,T]} \|(F\cdot S)_t\|_K^2 \Big) \leq 4 \sup_{\omega\in\Xi}\big( \|F\|_{L(H,K)}^2\cdot \langle S\rangle\big)_T(\omega) =4 \|F\|_{\mathcal{H}^\infty(H,K)}^2.\] \end{proposition} \begin{proof} First assume that $K=\mathbb{R}$ and fix a simple integrand $F=\sum_{n} f_n1_{(\tau_n,\tau_{n+1}]} =\sum_n F_{\tau_{n+1}}1_{(\tau_n,\tau_{n+1}]}\in\mathcal{H}_s(H,K)$. Let $\sigma_n^m$, $n,m \in \mathbb{N}$, be stopping times such that \[\{\tau_n(\omega):n\in\mathbb{N}\}\subset \{ \sigma^m_n(\omega) : n\in\mathbb{N}\}=: D^m(\omega),\] for every $m$ the set $D^m(\omega)$ is finite and $D^m(\omega)\subset D^{m+1}(\omega)$, and the closure of $\cup_m D^m(\omega)$ equals $[0,T]$ for every $\omega\in\Omega$. 
Now fix some $m$ and note that \[F=\sum_n F_{\sigma^m_{n+1}}1_{(\sigma^m_n,\sigma^m_{n+1}]} \quad\text{since } (\sigma^m_n) \text{ is finer than }(\tau_n).\] Then, for every $m$, define \[ \tilde{H}^m:=-4\sum_n \Big( \big(\max_{0\leq i\leq n} (F\cdot S)_{\sigma_i^m}\big) F_{\sigma^m_{n+1}} \Big) 1_{(\sigma_n^m,\sigma_{n+1}^m]} \in \mathcal{H}_s(H,\mathbb{R}).\] Then, by Lemma \ref{lem:bgd.path} (applied to ``$x_n=(F\cdot S^t)_{\sigma_n^m}$'') it holds for all $t\in[0,T]$ that \begin{align*} \max_{n\in\mathbb{N}} (F\cdot S^t)_{\sigma_n^m}^2 &\leq 4 (F\cdot S^t)_T^2 - 4\sum_{n\in\mathbb{N}} \big(\max_{0\leq i\leq n} (F\cdot S^t)_{\sigma_i^m}\big) F_{\sigma^m_{n+1}}(S^t_{\sigma^m_{n+1}}-S^t_{\sigma^m_n})\\ &= 4(F\cdot S)_t^2+ (\tilde{H}^m\cdot S)_t. \end{align*} Furthermore, by Lemma \ref{lem:integrals.squared}, there exists $\tilde{F}\in\mathcal{H}_s(H,\mathbb{R})$ such that for every $t\in[0,T]$ we have \[(F\cdot S)_t^2 \leq (\tilde{F}\cdot S)_t+ (\|F\|^2_{L(H,K)}\cdot \mathbb{S})_t + (\|F\|^2_{L(H,K)}\cdot \langle S\rangle)_t.\] Therefore, setting $H^m:=4\tilde{F}+\tilde{H}^m\in\mathcal{H}_s(H,\mathbb{R})$,\, $G:=4\|F\|^2_{L(H,K)}\in\mathcal{H}_s(\mathbb{R},\mathbb{R})$, and $\lambda:=4\sup_{\omega\in\Xi} (\|F\|^2_{L(H,K)}\cdot \langle S\rangle)_T(\omega)$, we obtain for every $t\in[0,T]$ that \[0\leq \max_{n\in\mathbb{N}} (F\cdot S^t)_{\sigma_n^m}^2 \leq \lambda + (H^m\cdot S)_t +(G\cdot \mathbb{S})_t. \] As $\bigcup_m\{\sigma^m_n : n\in\mathbb{N}\}$ is dense in $[0,T]$, we conclude that \[ \max_{t\in[0,T]} (F\cdot S)_t^2 =\lim_m \max_{n\in\mathbb{N}} (F\cdot S^T)_{\sigma_n^m}^2 \leq \lambda + \liminf_m\big( (H^m\cdot S)_T +(G\cdot \mathbb{S})_T \big). \] This shows that $\mathcal{E}(\sup_{t\in[0,T]} (F\cdot S)_t^2)\leq \lambda=4\|F\|_{\mathcal{H}^\infty(H,K)}^2$, and concludes the proof for $K=\mathbb{R}$. If $K$ is finite dimensional (say with orthonormal basis $\{k_1,\dots,k_d\}$), one has $\|(F\cdot S)_t\|_K^2=\sum_{i=1}^d (F^i\cdot S)_t^2$ with $F^i=\langle F,k_i\rangle_K$. 
Therefore, one can apply the previous step to every $F^i$ to obtain the desired result also when $K$ is finite dimensional. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm:integral}}] Fix $F\in\mathcal{H}^\infty(H,K)$ and a sequence $(F^n)$ in $\mathcal{H}_s(H,K)$ such that $\|F-F^n\|_{\mathcal{H}^\infty(H,K)}\to 0$. Now Proposition \ref{prop:bdg.calE} implies that \[\mathcal{E}\Big(\sup_{t\in[0,T]}\|(F^n\cdot S)_t-(F^m\cdot S)_t\|_K^2\Big) \leq 4\|F^n-F^m\|_{\mathcal{H}^\infty(H,K)}^2.\] Therefore Proposition \ref{prop:L2-Banach-Banach} (applied to the Banach space $B:=C([0,T],K)$) implies the existence of a limit $(F\cdot S)\colon \Omega\to B$ such that \[\mathcal{E}\Big(\sup_{t\in[0,T]}\|(F\cdot S)_t-(F^n\cdot S)_t\|_K^2\Big) \to 0. \] The proof that (for typical paths) $(F\cdot S)$ does not depend on the choice of the sequence $(F^n)$ which converges to $F$ and that the BDG inequality extends to $F\in\mathcal{H}^\infty(H,K)$ follows from the triangle inequality by standard arguments. Moreover, we derive from the well-known $L^2(P)$-limit procedure for the construction of the classical stochastic integral (see, e.g., \cite{philip2004stochastic}) and Lemma~\ref{lem:weak.duality} that the constructed stochastic integral for typical paths indeed coincides with the classical stochastic integral under every martingale measure $P\in\mathcal{M}(\Xi)$. \end{proof} \begin{remark} \label{rem:finite-dim2} By arguing as in the proof of Theorem~\ref{thm:integral}, but using the weak It\^o isometry introduced in Lemma~\ref{lem:integrals.squared} instead of the weak BDG inequality established in Proposition~\ref{prop:bdg.calE}, we can define stochastic integrals for integrands $F \in \mathcal{H}^\infty(H,K)$ without imposing that $K$ is finite dimensional. 
Moreover, using standard arguments involving the triangle inequality, we see that the weak It\^o isometry introduced in Lemma~\ref{lem:integrals.squared} for simple integrands in $\mathcal{H}_s(H,K)$ also holds true for integrands in $\mathcal{H}^\infty(H,K)$. However, it remains open whether the stochastic integral for integrands in $\mathcal{H}^\infty(H,K)$ possesses a continuous modification, since the weak It\^o isometry, in contrast to the weak BDG inequality (whose proof relies on $K$ being finite dimensional), is too weak to guarantee that the sequence of simple integrals converges uniformly. \end{remark} \subsection{Duality result for the second-order Vovk outer measure} \label{subsec:duality} The goal of this subsection is to provide a duality result for the outer measure $\mathcal{E}$. \begin{theorem}\label{thm:dual} Assume that $\Xi=\{\omega\in \Omega : (\omega,\langle\omega\rangle)\in\bar{\Xi} \}$ for a subset $\bar{\Xi}\subset C([0,T],H)\times C([0,T],\mathbb{R})$ which satisfies \begin{itemize} \item[(i)] $\bar{\Xi}$ is the countable union of compact sets (w.r.t.~uniform convergence), \item[(ii)] $\bar{\omega}(\cdot\wedge t)\in\bar{\Xi}$ for all $\bar{\omega}\in\bar{\Xi}$ and $t\in[0,T]$, \item[(iii)] $\nu(0)=0$ and $\nu$ is nondecreasing for all $\bar{\omega}=(\omega,\nu)\in\bar{\Xi}$. \end{itemize} Then for every $X\colon C([0,T],H)\to[0,+\infty]$ which is the pointwise limit of an increasing sequence $X_n\colon\Omega\to[0,+\infty]$ of upper semicontinuous functions one has \begin{equation} \label{eq:duality} \mathcal{E}(X)=\sup_{P\in\mathcal{M}(\Xi)} E_P[X]. \end{equation} In particular, the duality \eqref{eq:duality} holds for every nonnegative upper or lower semicontinuous function $X\colon C([0,T],H)\to[0,+\infty]$. 
\end{theorem} {Note that the prediction set $\Xi_c$ defined in \eqref{Xi-c} satisfies the conditions (i)--(iii).} Moreover, up to a different admissibility condition in the definition of $\mathcal{E}$, the statement of Theorem \ref{thm:dual} is similar to \cite[Theorem 2.2]{bartl2017pathwise}. However, the additional assumption that $\bar{\Xi}$ contains all stopped paths can be used as in the proof of \cite[Theorem~2.1]{bartl2017duality} to deduce Theorem \ref{thm:dual} in the present setting. Therefore, we only provide a sketch of the proof with the main focus on the admissibility condition. Let us recall the setting of \cite{bartl2017pathwise}. On \[\bar{\Omega}:=C([0,T],H)\times C([0,T],\mathbb{R})\] we consider the processes $\bar{S}_t(\bar{\omega}):=\omega(t)$ and $\bar{\mathbb{S}}_t(\bar{\omega}):=\|\omega(t)\|_H^2-\nu(t)$ for $\bar{\omega}=(\omega,\nu)\in\bar{\Omega}$. For $\bar{\Delta}:=\{\bar{\omega}\in\bar{\Omega} : \omega\in \Omega\text{ and }\langle\omega\rangle=\nu\}$, we consider the filtration $\bar{\mathbb{F}}_+^{\bar{\Delta}}$ defined as the right-continuous version of $\bar{\mathcal{F}}_t^{\bar{\Delta}}=\sigma(\bar{S}_s,\bar{\mathbb{S}}_s:s\leq t)\vee \sigma( \bar{N}: \bar{N}\subset \bar{\Omega}\setminus\bar{\Delta})$.
A function $\bar{F}$ on $[0,T]\times\bar{\Omega}$ with values in $L(H,\mathbb{R})$ or $L(\mathbb{R},\mathbb{R})$ is called simple if it is of the form \[\bar{F}_t(\bar{\omega})=\sum_{n\in\mathbb{N}} \bar{f}_n(\bar{\omega}) 1_{( \bar{\tau}_n(\bar{\omega}),\bar{\tau}_{n+1}(\bar{\omega})]}(t)\] for $(t,\bar{\omega})\in [0,T]\times\bar{\Omega}$, where $0=\bar{\tau}_0\leq\dots\leq \bar{\tau}_n\leq\bar{\tau}_{n+1}\leq \dots\leq T$ are $\bar{\mathbb{F}}^{\bar{\Delta}}_+$-stopping times such that for each $\bar{\omega}$ only finitely many stopping times are strictly smaller than $T$, and $\bar{f}_n$ are $\bar{\mathcal{F}}^{\bar{\Delta}}_{\bar{\tau}_n+}$-measurable functions on $\bar\Omega$ with values in $L(H,\mathbb{R})$ and $L(\mathbb{R},\mathbb{R})$, respectively. The function $\bar{F}$ is called finite simple if $\bar{\tau}_n=T$ for all $n\geq N$ for some $N\in\mathbb{N}$. Consider the functional \[ \bar{\mathcal{E}}(\bar{X}):= \inf\left\{ \lambda\geq 0 \,:\, \begin{array}{l} \text{there are two simple sequences $(\bar{H}^n)$, $(\bar{G}^n)$ such that}\\ \text{$\lambda+(\bar{H}^n\cdot \bar{S})_t+(\bar{G}^n\cdot \bar{\mathbb{S}})_t\geq 0$ on $\bar{\Delta}\cap\bar{\Xi}$ for all $t$ and $n$, and} \\ \text{$\lambda+ \liminf_n\big( (\bar{H}^n\cdot\bar{S})_T +(\bar{G}^n\cdot \bar{\mathbb{S}})_T \big)\geq \bar{X}$ on $\bar{\Delta}\cap\bar{\Xi}$} \end{array}\!\right\}\] for $\bar{X}\colon\bar{\Omega}\to[-\infty,+\infty]$. Further, for a measurable set $\bar{A}\subset\bar{\Omega}$, we denote by $\bar{\mathcal{M}}(\bar{A})$ the set of all Borel probabilities $\bar{P}$ on $\bar{\Omega}$ such that $\bar{P}(\bar{A})=1$ and both $\bar{S}$ and $\bar{\mathbb{S}}$ are $\bar{P}$-martingales w.r.t.~the filtration $\bar{\mathbb{F}}_+^{\bar{\Delta}}$. The reason to consider the enlarged space $\bar{\Omega}$ is that the duality argument in the proof of Theorem \ref{thm:dual} builds on topological arguments and the set $\Xi$, in contrast to $\bar{\Xi}$, is not regular enough.
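For concreteness, the pathwise integral of a finite simple integrand, $(\bar F\cdot \bar S)_t=\sum_n \bar f_n\,\big(\bar S_{\bar\tau_{n+1}\wedge t}-\bar S_{\bar\tau_n\wedge t}\big)$, can be evaluated directly from a single path. The following sketch illustrates this in the scalar case $H=\mathbb{R}$ with deterministic jump times; the function names are our own illustration and not from the paper.

```python
# Illustrative sketch (not from the paper): pathwise integral of a finite
# simple integrand sum_n f_n 1_{(tau_n, tau_{n+1}]} against a scalar path,
#   (F . S)_t = sum_n f_n (S_{tau_{n+1} ^ t} - S_{tau_n ^ t}).

def simple_integral(path, taus, coeffs, t):
    """Evaluate (F . S)_t for the simple integrand with jump times `taus`
    (len(taus) == len(coeffs) + 1) and values `coeffs`, against `path`."""
    total = 0.0
    for n, f in enumerate(coeffs):
        lo, hi = min(taus[n], t), min(taus[n + 1], t)
        if hi > lo:  # only the part of (tau_n, tau_{n+1}] before time t contributes
            total += f * (path(hi) - path(lo))
    return total
```

For the path $S_t=t$ and the integrand $2\cdot 1_{(0,1/2]}+3\cdot 1_{(1/2,1]}$, this returns $2\cdot\tfrac12+3\cdot\tfrac12=2.5$ at $t=1$.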
The following transfer principle is the reason why duality on $\Omega$ can be recovered from duality on the enlarged space $\bar{\Omega}$. \begin{lemma} \label{lem:transfer} For every measurable function $X\colon C([0,T],H)\to[0,+\infty]$ one has \[ \mathcal{E}(X)=\bar{\mathcal{E}}(X\circ\bar{S}) \quad\text{and}\quad \sup_{P\in\mathcal{M}(\Xi)} E_P[X] =\sup_{\bar{P}\in\bar{\mathcal{M}}(\bar{\Xi})} E_{\bar{P}}[X\circ \bar{S}]. \] \end{lemma} \begin{proof} The proof of the first equality follows directly from \cite[Lemma~4.5]{bartl2017pathwise}, whereas the second equality is a direct consequence of \cite[Lemma 4.6]{bartl2017pathwise}. \end{proof} \begin{lemma} \label{lem:choose.compacts} There is an increasing sequence of nonempty compact sets $\bar{\Xi}_n\subset\bar{\Omega}$ such that $\bar{\Xi}=\bigcup_n \bar{\Xi}_n$ and $\bar{\omega}(\cdot\wedge t)\in \bar{\Xi}_n$ for every $\bar{\omega}\in \bar{\Xi}_n$ and $t\in[0,T]$. \end{lemma} \begin{proof} The proof is similar to \cite[Lemma 4.5]{bartl2017duality}. \end{proof} \begin{lemma} \label{lem:integral.over.time.geq0} Assume that $\lambda + (\bar{G}\cdot \bar{S})_T+(\bar{H}\cdot\bar{\mathbb{S}})_T\geq 0$ on $\bar{\Xi}$, where $\bar{G}$ and $\bar{H}$ are finite simple integrands. Then $\lambda + (\bar{G}\cdot \bar{S})_t+(\bar{H}\cdot\bar{\mathbb{S}})_t\geq 0$ on $\bar{\Xi}$ for every $t\in[0,T]$. \end{lemma} \begin{proof} The proof is similar to \cite[Lemma 4.6]{bartl2017duality}. \end{proof} We are now ready for the proof of the duality theorem. \begin{proof}[\text{Proof of Theorem \ref{thm:dual}}] We first approximate the functional $\bar{\mathcal{E}}$.
To that end, let $\bar{\Xi}_n$ be the sets of Lemma \ref{lem:choose.compacts} and for $\bar{X}\colon\bar{\Omega}\to [-\infty,+\infty]$ define \[\bar{\mathcal{E}}_n(\bar{X}):= \inf\left\{ \lambda\in\mathbb{R}: \begin{array}{l} \text{there are finite simple integrands $\bar{H}$, $\bar{G}$ and $c\geq 0$ such that}\\ \text{$\lambda+(\bar{H}\cdot \bar{S})_t+(\bar{G}\cdot \bar{\mathbb{S}})_t\geq -c$ on $\bar{\Delta}\cap\bar{\Xi}_n$ for all $t$, and} \\ \text{$\lambda+ (\bar{H}\cdot\bar{S})_T +(\bar{G}\cdot \bar{\mathbb{S}})_T \geq \bar{X}$ on $\bar{\Delta}\cap\bar{\Xi}_n$}. \end{array}\!\right\}. \] As $\bar{\Xi}_n$ is compact, one can verify that $\bar{\mathcal{E}}_n$ is sufficiently regular so that for every upper semicontinuous and bounded function $\bar{X}\colon \bar{\Omega}\to\mathbb{R}$ we have \begin{equation}\label{rep:n} \bar{\mathcal{E}}_n(\bar{X})=\sup_{\bar{P}\in\bar{\mathcal{M}}(\bar{\Xi}_n)} E_{\bar{P}}[\bar{X}]. \end{equation} The precise argument is given in steps (a)--(c) of \cite[Theorem 2.2]{bartl2017pathwise}. Next, let $\bar{X}_n\colon \bar{\Omega}\to[0,+\infty)$, $n \in \mathbb{N}$, be bounded upper semicontinuous functions and $\bar{X}:=\sup_n \bar{X}_n$. Then it holds $\bar{\mathcal{E}}(\bar{X})=\sup_{\bar{P}\in\bar{\mathcal{M}}(\bar{\Xi})} E_{\bar{P}}[\bar{X}]$. Indeed, by the weak duality in Lemma \ref{lem:weak.duality}, using that $\bar{\mathcal{M}}(\bar{\Xi})\supset \bar{\mathcal{M}}(\bar{\Xi}_n)$, $\bar{X}\geq \bar{X}_n$ for all $n$, and the representation \eqref{rep:n} one has \begin{align} \label{eq:calE.geq.supP.geq.supcalEn} \bar{\mathcal{E}}(\bar{X}) \geq \sup_{\bar{P}\in\bar{\mathcal{M}}(\bar{\Xi})} E_{\bar{P}}[\bar{X}] \geq \sup_n \sup_{\bar{P}\in\bar{\mathcal{M}}(\bar{\Xi}_n)} E_{\bar{P}}[\bar{X}_n] =\sup_n\bar{\mathcal{E}}_n(\bar{X}_n) . \end{align} This shows $\bar{\mathcal{E}}(\bar{X})\geq \sup_n\bar{\mathcal{E}}_n(\bar{X}_n)$.
To prove the reverse inequality, which then implies that all inequalities in \eqref{eq:calE.geq.supP.geq.supcalEn} are actually equalities, one may assume without loss of generality that $m:=\sup_n\bar{\mathcal{E}}_n(\bar{X}_n)<+\infty$. Given some fixed $\varepsilon>0$, for every $n$ there exist by definition finite simple integrands $\bar{G}^n$ and $\bar{H}^n$ such that \begin{align} \label{eq:strategy.for.mathcalEn} m+\varepsilon/2 + (\bar{G}^n\cdot\bar{S})_T+ (\bar{H}^n\cdot\bar{\mathbb{S}})_T\geq \bar{X}_n \quad\text{on } \bar{\Delta}\cap\bar{\Xi}_n. \end{align} Now define the stopping times \[ \bar{\sigma}_n:=\inf\{t\in[0,T] : m+\varepsilon/2 + (\bar{G}^n\cdot \bar{S})_t+ (\bar{H}^n\cdot\bar{\mathbb{S}})_t=-\varepsilon/2 \}\wedge T. \] Then $(\bar{G}^n\cdot \bar{S})_{\bar{\sigma}_n}=(\tilde{G}^n\cdot \bar{S})_T$ for the simple integrand $\tilde{G}^n:=\bar{G}^n1_{[0,\bar{\sigma}_n]}$ and similarly $(\bar{H}^n\cdot \bar{\mathbb{S}})_{\bar{\sigma}_n}=(\tilde{H}^n\cdot \bar{\mathbb{S}})_T$ for $\tilde{H}^n:=\bar{H}^n 1_{[0,\bar{\sigma}_n]}$. Since $\bar{X}_n\geq 0$, it follows from \eqref{eq:strategy.for.mathcalEn} and Lemma \ref{lem:integral.over.time.geq0} that $\bar{\sigma}_n=T$ on $ \bar{\Delta}\cap\bar{\Xi}_n$. The assumptions that $\bar{\Xi}=\bigcup_n\bar{\Xi}_n$ and $\bar{\Xi}_n\subset\bar{\Xi}_{n+1}$ therefore imply \begin{align*} &m+\varepsilon + (\tilde{G}^n\cdot\bar{S})_t+ (\tilde{H}^n\cdot\bar{\mathbb{S}})_t\geq 0 \quad\text{on }\bar{\Omega} \ \mbox{ for all $t$ and $n$,}\\ &m+\varepsilon + \liminf_n\Big( (\tilde{G}^n\cdot\bar{S})_T+ (\tilde{H}^n\cdot\bar{\mathbb{S}})_T\Big) \geq \liminf_n \bar{X}_n=\bar{X} \quad\text{on } \bar{\Delta}\cap\bar{\Xi}, \end{align*} which shows that $\bar{\mathcal{E}}(\bar{X})\leq m+\varepsilon$. As $\varepsilon$ was arbitrary, the desired inequality follows. The proof of the theorem is now readily completed using the transfer principle stated in Lemma \ref{lem:transfer}. 
\end{proof} \subsection{Proof of Theorem~\ref{thm:integral.Xi.has.assumptions}} \label{subsec:proof-integral-ass} Throughout this subsection we consider the prediction set $\Xi\equiv\Xi_c$ defined in \eqref{Xi-c}, i.e.~every $\omega\in\Xi_c$ is H\"older continuous with $d\langle \omega\rangle/dt\leq c$. \begin{proposition} \label{lem:BDG.Xi.has.assumptions} Let $F\in \mathcal{H}_{s,c}(H,K)$. Then, one has \[ \mathcal{E}\Big( \sup_{t\in [0,T]} \|(F \cdot S)_t\|_K^2 \Big) \leq 4 c \int_0^T \mathcal{E}\big( \|F_t\|_{L(H,K)}^2\big) dt.\] \end{proposition} \begin{proof} First, since $\Omega\ni \omega \mapsto \sup_{t\in[0,T]} \|(F \cdot S)_t(\omega)\|_K^2$ is lower semicontinuous, the function \[C([0,T],H)\ni \omega \mapsto \sup_{\delta>0}\inf_{\substack{\tilde \omega\in\Omega\\ \|\tilde\omega-\omega\|_H\le\delta}} \sup_{t\in[0,T]} \|(F \cdot S)_t(\tilde\omega)\|_K^2 \] defines its lower semicontinuous extension on $C([0,T],H)$. Moreover, since by assumption $\Xi\equiv\Xi_c$, the conditions of Theorem \ref{thm:dual} are satisfied. Therefore, we can apply Theorem \ref{thm:dual} to obtain that \[ \mathcal{E}\Big( \sup_{t\in[0,T]} \|(F \cdot S)_t\|_K^2 \Big) =\sup_{P\in\mathcal{M}(\Xi)} E_P\Big[ \sup_{t\in[0,T]} \|(F \cdot S)_t\|_K^2 \Big]. \] Now, under every $P\in\mathcal{M}(\Xi)$, the process $\|(F \cdot S)\|_K$ is a (real-valued) submartingale. Therefore, Doob's maximal inequality implies \[ E_P\Big[ \sup_{t\in[0,T]} \|(F \cdot S)_t\|_K^2 \Big] \leq 4 E_P[ \|(F \cdot S)_T\|_K^2] \leq 4 E_P[ (\|F\|_{L(H,K)}^2 \cdot \langle S\rangle )_T]\] where the last inequality is the weak It\^o-Isometry (apply e.g.~the same arguments as in the proof of Lemma \ref{lem:integrals.squared} and integrate w.r.t.~$P$). Finally, by assumption and Lemma \ref{lem:weak.duality} one has \[E_P[ (\|F\|_{L(H,K)}^2 \cdot \langle S\rangle )_T] \leq c \int_0^T E_P[\|F_t\|_{L(H,K)}^2]\, dt \leq c \int_0^T \mathcal{E}(\|F_t\|_{L(H,K)}^2)\, dt \] which proves the claim. 
\end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm:integral.Xi.has.assumptions}}] The part of Theorem \ref{thm:integral} which states that the stochastic integral exists for integrands in $\mathcal{H}^2(H,K)$ follows from Proposition \ref{lem:BDG.Xi.has.assumptions} using the exact same arguments as in the proof of Theorem \ref{thm:integral}. The second part is shown in Lemma~\ref{le:f-Lip-nice} below. \end{proof} { \begin{proof}[\textbf{Proof of Remark~\ref{rem:integrand-2}}] Define $\tau_n:=(n/N)\wedge T$ for every $n\geq 0$ and then set $G:=\sum_{n=0}^\infty F_{\tau_n}1_{(\tau_n,\tau_{n+1}]}\in\mathcal{H}_{s,c}(H,K)$. As both $F_t$ and $G_t$ are continuous for every $t\in[0,T]$, it follows from Theorem \ref{thm:dual} that \begin{align*} \mathcal{E}(\|F_t-G_t\|_{L(H,K)}^2) &= \sup_{P\in\mathcal{M}(\Xi)} E_P[\|F_t-G_t\|_{L(H,K)}^2]\\ &\leq \rho(1/N)^2 \Big( 1+ \sup_{P\in\mathcal{M}(\Xi)} E_P\Big[\sup_{r\in[0,T]}\langle S\rangle_r^{p/2}\Big]\Big)\\ &\leq \rho(1/N)^2 (1+(Tc)^{p/2}). \end{align*} Indeed, the first inequality follows from the modulus of continuity assumption on $F$ together with the classical BDG-inequality applied under each $P$, and the second inequality from the assumption that $d\langle S\rangle_t\leq c\,dt$ on $\Xi_c$. Integrating over $t$ and taking the limit for $N\to\infty$ yields the claim. \end{proof} } In the next lemma, we extend the inequality obtained in Proposition~\ref{lem:BDG.Xi.has.assumptions} to integrands lying in $\mathcal{H}^2(H,K)$. Moreover, we prove for Lipschitz continuous functions $f\colon [0,T]\times K \to L(H,K)$ that $f(\cdot,(F\cdot S))\in\mathcal{H}^2(H,K)$ whenever $F \in \mathcal{H}^2(H,K)$. This is the crucial property allowing us to solve stochastic differential equations for typical paths. \begin{lemma} \label{le:f-Lip-nice} Let $F\in\mathcal{H}^2(H,K)$.
Then, one has \[ \mathcal{E}\Big( \sup_{t\in[0,T]} \|(F \cdot S)_t\|_K^2 \Big) \leq 4c \int_0^T \mathcal{E}\big( \|F_t\|_{L(H,K)}^2\big)\, dt.\] In addition, if $f\colon [0,T]\times K \to L(H,K)$ is Lipschitz continuous, then the map $\Omega \times [0,T] \ni (\omega,t) \mapsto f(t,(F\cdot S)_t(\omega)) \in L(H,K)$ is an element of $\mathcal{H}^2(H,K)$. \end{lemma} \begin{proof} The first part follows from Proposition \ref{lem:BDG.Xi.has.assumptions} and the triangle inequality. It remains to prove that $f(\cdot,(F\cdot S))\in \mathcal{H}^2(H,K)$. Assume first that $F\in\mathcal{H}_{s,c}(H,K)$ and define \[H^n:=\sum_{i=0}^{n-1} f(iT/n,(F\cdot S)_{iT/n}) 1_{(iT/n,(i+1)T/n]}\] for every $n$. Since $\omega\mapsto (F\cdot S)_{iT/n}(\omega)$ is continuous for every $i$, one has $H^n\in\mathcal{H}_{s,c}(H,K)$. Moreover, with $\pi_n(t):=\max\{ iT/n :i\in\mathbb{N}\mbox{ such that }iT/n\le t\}$ and $L_f$ being the Lipschitz constant of $f$, one has \begin{align} \|f(\cdot,(F\cdot S))-H^n\|_{\mathcal{H}^2(H,K)}^2 &\leq \int_0^T2L_f^2 \mathcal{E}\Big( |t-\pi_n(t)|^2+ \|(F\cdot S)_t-(F\cdot S)_{\pi_n(t)}\|_K^2\Big) \,dt \nonumber\\ &\leq 2L_f^2\int_0^T |t-\pi_n(t)|^2 + \mathcal{E}\big(\| (F\cdot S)_t-(F\cdot S)_{\pi_n(t)}\|_K^2\big) \,dt. \label{eq:rem:explanation1} \end{align} Now Theorem \ref{thm:dual}, the weak It\^o-Isometry (argue as in Proposition \ref{lem:BDG.Xi.has.assumptions}), and weak duality (Lemma \ref{lem:weak.duality}) imply for every $t$ that \begin{align} &\mathcal{E}\big(\|(F\cdot S)_t-(F\cdot S)_{\pi_n(t)}\|_K^2\big) =\sup_{P\in\mathcal{M}(\Xi)}E_P\big[\|(F\cdot S)_t-(F\cdot S)_{\pi_n(t)}\|_K^2\big] \nonumber\\ &\leq \sup_{P\in\mathcal{M}(\Xi)}E_P\Big[\int_{\pi_n(t)}^t\|F_s\|_{L(H,K)}^2\, d\langle S\rangle_s \Big] \leq c \int_{\pi_n(t)}^t \mathcal{E}\big( \|F_s\|_{L(H,K)}^2\big)\, ds \label{eq:rem:explanation2}\\ &\leq c \int_{0}^T \mathcal{E}\big( \|F_s\|_{L(H,K)}^2\big)\, ds <+\infty.
\nonumber \end{align} This shows that $\mathcal{E}\big(\|(F\cdot S)_t-(F\cdot S)_{\pi_n(t)}\|_K^2\big)$ is dominated by $c \int_{0}^T \mathcal{E}\big( \|F_s\|_{L(H,K)}^2\big)\, ds <+\infty$ and converges pointwise to $0$ as $n\to\infty$, since $\pi_n(t)\to t$. Therefore, dominated convergence implies $\|f(\cdot,(F\cdot S))-H^n\|_{\mathcal{H}^2(H,K)}\to 0$, which shows that $f(\cdot,(F\cdot S))\in\mathcal{H}^2(H,K)$. The general case follows by approximating $F\in\mathcal{H}^2(H,K)$ by $F^n\in\mathcal{H}_{s,c}(H,K)$, and using the inequality \begin{align*} \|f(\cdot,(F^n\cdot S))-f(\cdot,(F\cdot S))\|^2_{\mathcal{H}^2(H,K)}&\le L^2_f \int_0^T \mathcal{E}\!\big(\|(F^n\cdot S)_t-(F\cdot S)_t\|^2_K\big)\,dt \nonumber\\ &\le T L^2_f \mathcal{E}\Big(\sup_{0\le t\le T}\|(F^n\cdot S)_t-(F\cdot S)_t\|^2_K\Big) \\ &\le 4c T L^2_f \|F^n-F\|^2_{\mathcal{H}^2(H,K)}, \nonumber \end{align*} where the last inequality is ensured by the first part. \end{proof} \begin{remark}\label{rem:identification} Let $F\in\mathcal{H}^2(H,K)$. Similar to the proof of Lemma \ref{le:f-Lip-nice} one can verify \begin{enumerate} \item $f(\cdot, F)\in\mathcal{H}^2(H,K)$ for every function $f:[0,T]\times K\to L(H,K)$ which is Lipschitz continuous, and \item $(F\cdot S)$ can be identified with an element in $\mathcal{H}^2(K,\mathbb{R})$, by considering $i(F\cdot S)$ for the isometric isomorphism $i:K\to L(K,\mathbb{R})$ given by the Riesz representation theorem. \end{enumerate} In Subsection \ref{sec:Picard} we will frequently use this identification. \end{remark} \subsection{Proof of Theorem \ref{thm:sde}}\label{sec:Picard} We recall that throughout this subsection we consider the particular prediction set $\Xi_c$ defined in \eqref{Xi-c} and that Assumption~\ref{ass:SDE} holds.
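The Picard iteration $X^{n+1}=x_0+(\mu(\cdot,X^n)\cdot A)+(\sigma(\cdot,X^n)\cdot S)$ used in this subsection can be mimicked numerically along a single discretized path. The following toy scalar sketch is our own illustration (with $A_t=t$ and an Euler-type discretization of both integrals), not the pathwise construction of the paper.

```python
# Toy sketch (illustrative assumptions, not the paper's construction):
# Picard iteration X^{n+1} = x0 + (mu(.,X^n) . A) + (sigma(.,X^n) . S)
# on a uniform grid t_k = k*dt, with A_t = t and a fixed sampled path S.

def picard_map(x0, mu, sigma, S, dt):
    """Return the Picard map X -> x0 + int mu(t, X) dt + int sigma(t, X) dS,
    discretized on the grid underlying the sampled path S."""
    def step(X):
        Y = [x0]
        for k in range(len(S) - 1):
            Y.append(Y[-1]
                     + mu(k * dt, X[k]) * dt                     # drift against A_t = t
                     + sigma(k * dt, X[k]) * (S[k + 1] - S[k]))  # increment of S
        return Y
    return step

def picard_iterate(x0, mu, sigma, S, dt, n_iter):
    """Iterate the Picard map n_iter times starting from the constant path x0."""
    X = [x0] * len(S)
    step = picard_map(x0, mu, sigma, S, dt)
    for _ in range(n_iter):
        X = step(X)
    return X
```

On a grid of $N$ steps the iteration reaches its fixed point, the Euler scheme $Y_{k+1}=Y_k+\mu(t_k,Y_k)\,dt+\sigma(t_k,Y_k)\,\Delta S_k$, after at most $N$ iterations, since each pass makes one further grid point exact; this mirrors the factorial decay $g^n(t)\le g^0\,(Ct)^n/n!$ appearing in the proof of Theorem \ref{thm:sde}.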
\begin{lemma} \label{lem:intA} For $F\in\mathcal{H}^2(\mathbb{R},K)$, the integral \[(F\cdot A)\colon\Omega\to C([0,T],K)\] exists and satisfies \[ \mathcal{E}\Big(\sup_{t\in[0,T]} \|(F\cdot A)_t\|_K^2 \Big) \leq c^2T \int_0^T \mathcal{E}(\|F_t\|^2_{L(\mathbb{R},K)})\, dt.\] In particular, $(F\cdot A)\in\mathcal{H}^2(K,\mathbb{R})$ by identifying $K$ with $L(K,\mathbb{R})$. Moreover, if $f\colon [0,T]\times K\to L(\mathbb{R},K)$ is Lipschitz continuous, then $f(\cdot,(F\cdot A))\in\mathcal{H}^2(\mathbb{R},K)$. \end{lemma} \begin{proof} For $F,G\in\mathcal{H}_{s,c}(\mathbb{R},K)$, Theorem \ref{thm:dual} and H\"older's inequality imply \begin{align*} &\mathcal{E}\Big(\sup_{t\in[0,T]} \|(F\cdot A)_t-(G\cdot A)_t\|_K^2 \Big) \leq \sup_{P\in\mathcal{M}(\Xi)} E_P\Big[ \Big(\int_0^T \|F_t-G_t\|_K c\,dt \Big)^2 \Big]\\ &\leq c^2T \int_0^T \sup_{P\in\mathcal{M}(\Xi)}E_P[\|F_t-G_t\|_K^2] \, dt \leq c^2T \|F-G\|_{\mathcal{H}^2(\mathbb{R},K)}^2. \end{align*} The rest follows by the same arguments as in the proof of Theorem \ref{thm:integral}. The second part can be proved as in Lemma \ref{le:f-Lip-nice}. \end{proof} In line with Remark \ref{rem:identification}, the following holds. \begin{remark} \label{rem2:identification} For $F\in\mathcal{H}^2(K,\mathbb{R})$ one has \begin{enumerate} \item $f(\cdot, F)\in\mathcal{H}^2(\mathbb{R},K)$ for every function $f:[0,T]\times K\to L(\mathbb{R},K)$ which is Lipschitz continuous, and \item $(F\cdot A)$ can be identified with an element in $\mathcal{H}^2(K,\mathbb{R})$. \end{enumerate} \end{remark} Now we are ready to prove Theorem \ref{thm:sde}. Consider the Picard iteration \[X^{n+1}:=x_0+(\mu(\cdot,X^n) \cdot A)+ (\sigma(\cdot,X^n)\cdot S) \] starting at $X^0\equiv x_0\in K$ and recall the Lipschitz continuity of $\mu$ and $\sigma$ imposed in \eqref{eq:Lipschitz-coeff} with corresponding Lipschitz constant $L$. \begin{lemma} \label{lem:aprori} Assume that $X^n:\Omega \to C([0,T],K)$ and $X^n\in \mathcal{H}^2(K,\mathbb{R})$.
Then the process $X^{n+1}:\Omega\to C([0,T],K)$ satisfies $X^{n+1}\in\mathcal{H}^2(K,\mathbb{R})$ and \[g^{n+1}(t):=\mathcal{E}\Big(\sup_{s\in[0,t]} \|X_s^{n+1}-X_s^n\|_K^2\Big) \leq C \int_0^t g^n(s)\,ds\] for all $t\in [0,T]$ where $g^0\equiv \sup_{t\in[0,T]}\|\mu(t,x_0)\|_{L(\mathbb{R},K)} +\sup_{t\in[0,T]}\|\sigma(t,x_0)\|_{L(H,K)}<+\infty$ and $C:=(2c^2 T+8c)L^2$. \end{lemma} \begin{proof} By Lemma \ref{le:f-Lip-nice}, Remark \ref{rem:identification}, Lemma \ref{lem:intA}, and Remark \ref{rem2:identification} it holds that $X^{n+1}:\Omega\to C([0,T],K)$ and $X^{n+1}\in\mathcal{H}^2(K,\mathbb{R})$. Define $\Delta\mu^n_t:=\mu(t,X^n_t)-\mu(t,X^{n-1}_t)$, $\Delta\sigma^n_t:=\sigma(t,X^n_t)-\sigma(t,X^{n-1}_t)$, and $\Delta X^n_t:=X^n_t-X^{n-1}_t$ for all $t\in[0,T]$ and $n\in\mathbb{N}$. Since \[\sup_{s\in[0,t]} \|\Delta X_s^{n+1}\|_K^2 \leq 2 \sup_{s\in[0,t]} \|(\Delta\mu^n\cdot A)_s\|_K^2 + 2\sup_{s\in[0,t]}\|(\Delta\sigma^n \cdot S)_s\|_K^2,\] it follows from Lemma \ref{le:f-Lip-nice} and Lemma \ref{lem:intA} that \begin{align*} g^{n+1}(t) &=\mathcal{E}\Big(\sup_{s\in[0,t]} \|\Delta X_s^{n+1}\|_K^2\Big) \\ &\leq 2 \mathcal{E}\Big(\sup_{s\in[0,t]} \|(\Delta \mu^n\cdot A)_s \|_K^2\Big) + 2 \mathcal{E}\Big(\sup_{s\in[0,t]} \|(\Delta \sigma^n\cdot S)_s\|_K^2\Big) \\ &\leq 2c^2 T\int_0^t \mathcal{E}\Big(\|\Delta\mu^n_s\|^2_{L(\mathbb{R},K)} \Big)\,ds + 8c \int_0^t \mathcal{E}\Big(\|\Delta\sigma^n_s\|^2_{L(H,K)} \Big)\,ds \\ &\leq C \int_0^t \mathcal{E}\Big(\|\Delta X^n_s\|^2_{L(K,\mathbb{R})} \Big) \,ds \\ &\leq C \int_0^t \mathcal{E} \Big(\sup_{u\in[0,s]} \|\Delta X_u^{n}\|_K^2\Big) \,ds = C \int_0^t g^n(s) \,ds. \end{align*} For $n=0$ the previous computation yields $g^1(t)\leq C\int_0^t g^0(s)\, ds$. That $g^0<+\infty$ follows directly from the assumption \eqref{eq:Lipschitz-coeff} on $\mu$ and $\sigma$. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm:sde}}] The existence and uniqueness of the solution of the SDE follow directly from Lemma \ref{lem:aprori} by standard arguments.
Indeed, iterating the estimate $g^{n+1}(t)\le C\int_0^t g^n(s)\, ds$ yields \[g^n(t)\le g^0 \frac{(Ct)^n}{n !}\] for all $t\in[0,T]$ and $n\in\mathbb{N}$. For any function $Y\colon\Omega\to C([0,T],K)$ define $\|Y\|:=\mathcal{E}\big(\sup_{t\in[0,T]}\|Y_t\|_K^2\big)^{\frac{1}{2}}$ which by Proposition \ref{prop:L2-Banach-Banach} is a semi-norm. Then, since for every $m,n\in\mathbb{N}$ with $m>n$ one has \[\|X^m-X^n\| \leq \sum_{k>n}\| X^k-X^{k-1}\| \leq \sum_{k>n}\Big(g^0\frac{(CT)^k}{k!}\Big)^{\frac{1}{2}} \to 0\quad\mbox{as }n\to\infty\] it follows that $(X^n)$ is a Cauchy sequence w.r.t.~$\|\cdot\|$. By Proposition \ref{prop:L2-Banach-Banach} there exists a subsequence $(n_k)$ such that $X^{n_k}$ converges to some $X:\Omega\to C([0,T],K)$ for typical paths and $\|X^n-X\|\to 0$. Since \[\|X^n-X\|^2_{\mathcal{H}^2(K,\mathbb{R})}=\int_0^T \mathcal{E}\big(\|X^n_s-X_s\|^2_{L(K,\mathbb{R})}\big) \,ds \le T\|X^n-X\|^2\] it follows that $X\in \mathcal{H}^2(K,\mathbb{R})$, as well as \[\|\mu(\cdot,X^n)-\mu(\cdot,X)\|_{\mathcal{H}^2(\mathbb{R},K)}\to 0\quad \mbox{and}\quad \|\sigma(\cdot,X^n)-\sigma(\cdot,X)\|_{\mathcal{H}^2(H,K)}\to 0\] by Lipschitz continuity of $\mu$ and $\sigma$. This shows \begin{align*} X&=\lim_{n\to\infty} X^{n+1} =\lim_{n\to\infty}\Big(x_0+(\mu(\cdot,X^n)\cdot A)+(\sigma(\cdot,X^n)\cdot S)\Big)\\ &= x_0+(\mu(\cdot,X)\cdot A)+(\sigma(\cdot,X)\cdot S) \end{align*} where the limits are w.r.t.~$\|\cdot\|$. As for the uniqueness, suppose there exist two solutions $X,Y:\Omega\to C([0,T],K)$ in $\mathcal{H}^2(K,\mathbb{R})$ of the SDE. Similar to the proof of Lemma \ref{lem:aprori} we get \[g(t):=\mathcal{E}\Big(\sup_{s\in[0,t]} \|X_s-Y_s\|^2_K\Big)\le C\int_0^t g(s)\, ds\] with $C:=(2c^2 T+8c)L^2$. Iterating this estimate yields $g(t)\le g(T)\frac{(Ct)^k}{k!}$ for all $k\in\mathbb{N}$ and $t\in[0,T]$, so that $\|X-Y\|=g(T)^{\frac{1}{2}}=0$. This shows that $X$ and $Y$ coincide for typical paths.
\end{proof} {\subsection*{Acknowledgments} We would like to thank the anonymous referee for a very thorough reading and exceptionally helpful comments.\\ The first author is supported by the Austrian Science Fund (FWF) under grant P28861.\\ The third author is supported by the NAP Grant.\\ Part of this research was carried out while the authors were visiting the Shanghai Advanced Institute of Finance and the School of Mathematical Sciences at the Shanghai Jiao Tong University in China, and we would like to thank Samuel Drapeau for his hospitality.} \end{document}
\begin{document} \title{Presentation of immersed surface-links by marked graph diagrams} \author{Seiichi Kamada} \address{Department of Mathematics, Osaka City University, Sumiyoshi, Osaka 558-8585, Japan} \email{[email protected]} \thanks{Supported by JSPS KAKENHI Grant Numbers 26287013 and 15F15319.} \author{Akio Kawauchi} \address{Osaka City University Advanced Mathematical Institute, Osaka City University, Sumiyoshi, Osaka 558-8585, Japan} \email{[email protected]} \thanks{Supported by JSPS KAKENHI Grant Number 24244005.} \author{Jieon Kim} \address{Osaka City University Advanced Mathematical Institute, Osaka City University, Sumiyoshi, Osaka 558-8585, Japan} \email{[email protected]} \thanks{The third author is International Research Fellow of Japan Society for the Promotion of Science. } \author{Sang Youl Lee} \address{Department of Mathematics, Pusan National University, Busan 46241, Republic of Korea} \email{[email protected]} \thanks{Supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology (NRF-2016R1A2B4016029).} \maketitle \begin{abstract} It is well known that surface-links in $4$-space can be presented by diagrams on the plane of $4$-valent spatial graphs with markers on the vertices, called marked graph diagrams. In this paper we extend the method of presenting surface-links by marked graph diagrams to presenting immersed surface-links. We also give some moves on marked graph diagrams that preserve the ambient isotopy classes of their presenting immersed surface-links. \end{abstract} \section{Introduction}\label{sect-intr} \label{intro} A surface-link, or an {\it embedded} surface-link, is a closed surface embedded in Euclidean $4$-space $\mathbb R^4$. An {\it immersed surface-link} is a closed surface immersed in $\mathbb R^4$ such that the multiple points are transverse double points.
It is well known that surface-links can be presented by diagrams on the plane of $4$-valent spatial graphs with markers on the vertices, called marked graph diagrams (cf. \cite{CKS2004, KamBook2017, Kaw, KeKU, Lo, Sw, Yo}). In this paper we extend the method of presenting surface-links by marked graph diagrams to presenting immersed surface-links. We also give some moves on marked graph diagrams that preserve the ambient isotopy classes of their presenting immersed surface-links, which are extensions of the moves given by Yoshikawa \cite{Yo} for presentation of embedded surface-links. \section{Marked graph diagrams of immersed surface-links} \label{sect-mgd} In this section, we introduce a marked graph presentation of immersed surface-links. First, we quickly recall the notion of marked graph diagrams and links with bands from \cite{Sw, Yo}. Let $A$ be the square $\{(x, y)|-1 \leq x, y \leq 1\}$, $X$ be the diagonals in $A$ presented by $x^2 = y^2$, and $M_h$ (or $M_v$) be a thick interval in $A$ given by $\{(x, y)|-1/2 \leq x \leq 1/2, \, -\delta \leq y \leq \delta \}$ (or $\{(x, y)|-1/2 \leq y \leq 1/2, \, -\delta \leq x \leq \delta \}$), where $\delta$ is a small positive number. A {\it marked graph} (in $\mathbb R^3$) is a spatial graph $G$ in $\mathbb R^3$ which satisfies the following: \begin{itemize} \item [(1)] $G$ is a finite regular graph with $4$-valent vertices. \item [(2)] Each vertex $v$ is rigid; that is, there is a neighborhood $N(v)$ of $v$ which is identified with thickened $A$ such that $v$ corresponds to the origin and the edges restricted to $N(v)$ correspond to $X$. \item [(3)] Each $v$ has a {\it marker}, which is a thick interval in $N(v)$ which corresponds to $M_h$ or $M_v$ under the identification in (2).
\end{itemize} An {\it orientation} of a marked graph $G$ is a choice of an orientation for each edge of $G$ such that around every vertex $v$, two edges incident to $v$ in a diagonal position are oriented toward $v$ and the other two incident edges are oriented outward. For example, see Fig.~\ref{fig-ori-vert}. \begin{figure} \caption{An orientation around a marked vertex} \label{fig-ori-vert} \end{figure} Not every marked graph admits an orientation. A marked graph is called {\it orientable} (or {\it non-orientable}) if it admits (or does not admit) an orientation. A marked graph depicted in Fig.~\ref{fig-nori-mg} is non-orientable. An {\it oriented marked graph} is a marked graph equipped with an orientation. Two (oriented) marked graphs are said to be {\it equivalent} if they are ambient isotopic in $\mathbb R^3$ with respect to markers as subsets of $\mathbb R^3$ (and the orientations). \begin{figure} \caption{A non-orientable marked graph} \label{fig-nori-mg} \end{figure} A {\it banded link} $\mathcal {BL}$ (or a {\it link with bands}) is a pair $(L, \mathcal B)$ of a link $L$ in $\mathbb R^3$ and a set of mutually disjoint bands in $\mathbb R^3$ attached to $L$. It is called {\it oriented} if $L$ is oriented and all bands are oriented coherently with respect to the orientation of $L$. In this case, the link obtained from $L$ by surgery along the bands inherits an orientation, see Fig. \ref{fig-lb}. Two (oriented) banded links are {\it equivalent} if there is an ambient isotopy of $\mathbb R^3$ carrying the (oriented) link and (oriented) bands of one to those of the other. \begin{figure} \caption{Surgery} \label{fig-lb} \end{figure} For a marked graph $G$, we obtain a banded link $(L,\mathcal B) $ by replacing a neighborhood of each $4$-valent vertex with a band such that the core of the band corresponds to the marker as in Fig.~\ref{fig-orbd-2} (b). The banded link is called the {\it banded link associated with $G$} and is denoted by $\mathcal {BL}(G)$. 
Conversely, a marked graph $G$ is recovered from a banded link $\mathcal {BL}$ by shortening each band and replacing it with a $4$-valent vertex as in Fig.~\ref{fig-orbd-2} (a). If $G$ is oriented, then $\mathcal {BL}(G)$ is oriented, and vice versa. \begin{figure} \caption{A band and a marked vertex} \label{fig-orbd-2} \end{figure} For a banded link $\mathcal {BL}=(L, \mathcal B)$, the {\it lower resolution} $L_-(\mathcal {BL})$ is $L$ and the {\it upper resolution} $L_+(\mathcal {BL})$ is the surgery result. For a marked graph $G$, the lower resolution $L_-(G)$ and the upper resolution $L_+(G)$ are defined to be those of the banded link $\mathcal {BL}(G)=(L, \mathcal B)$ associated with $G$. We present a marked graph by a diagram on the plane, which we call a {\it marked graph diagram}, in the usual way in knot theory. Let $D$ be a marked graph diagram. We denote by $\mathcal {BL}(D)$, $L_-(D)$, and $L_+(D)$ the banded link, the lower resolution and the upper resolution of the marked graph presented by $D$. See Fig.~\ref{spun-mgraph-res}. A link is called {\it H-trivial} if it is a split union of trivial knots and Hopf links \cite{KamKawamu}. A trivial link is regarded as an H-trivial link without Hopf links. \begin{defn} A marked graph diagram $D$ (or a marked graph $G$) is {\it H-admissible} if both resolutions $L_-(D)$ and $L_+(D)$ (or $L_-(G)$ and $L_+(G)$) are H-trivial links. \end{defn} \begin{figure} \caption{An H-admissible marked graph diagram} \label{spun-mgraph-res} \end{figure} A marked graph diagram $D$ (or a marked graph $G$) is called {\it admissible} if both resolutions $L_-(D)$ and $L_+(D)$ (or $L_-(G)$ and $L_+(G)$) are trivial links. By definition, an admissible marked graph (diagram) is H-admissible. Now we discuss a marked graph presentation of an immersed surface-link.
For a subset $A\subset \mathbb R^3$ and an interval $I\subset\mathbb R,$ let $$AI=\{(x,t)\in\mathbb R^4|x\in A, t\in I\}.$$ Let $D$ be an H-admissible marked graph diagram, and $\mathcal {BL}(D) = (L, \mathcal B)$ the banded link associated with $D$. Consider a surface $\mathcal{S}^1_{-1}$ in $\mathbb R^3 [-1,1]$ satisfying \begin{equation*} \mathcal S^1_{-1} \cap \mathbb R^3[t]=\left\{ \begin{array}{ll} L_+(D)[t] & \hbox{for $0 < t \leq1$,} \\ (L_-(D) \cup |\mathcal B|)[t] & \hbox{for $t = 0$,} \\ L_-(D)[t] & \hbox{for $-1 \leq t < 0$,} \\ \end{array} \right. \end{equation*} where $|\mathcal B|$ denotes the union of the bands belonging to $\mathcal B$. When $D$ is oriented, we assume that the surface $\mathcal S^1_{-1}$ is oriented so that the orientation of $L_+(D)[1]$ as the boundary of $\mathcal S^1_{-1}$ coincides with the orientation of $L_+(D)$ induced from $D$. Let $L$ be an H-trivial link with trivial knot components $O_i$ $(i=1,\ldots,m)$ and Hopf link components $H_j$ $(j=1,\ldots,n)$, where $m \geq 0$ and $n \geq 0$. For an interval $[a,b]$, let $L_{\wedge}[a,b]$ be the union of disks $\Delta_i$ $(i=1,\ldots,m)$ and $n$ pairs of disks $C_j$ $(j=1,\ldots,n)$ in $\mathbb R^3[a, b]$ such that (1) $\partial \Delta_i = O_i[a]$ and $\partial C_j = H_j[a]$, (2) $\Delta_i$ has a unique maximal point, (3) each disk of $C_j$ has a unique maximal point, and (4) the two disks of $C_j$ intersect in a point transversely. We call $\Delta_i$ $(i=1,\ldots,m)$ a {\it cone system} with base $O_i$ $(i=1,\ldots,m)$ and $C_j$ $(j=1,\ldots,n)$ a {\it cone system} with base $H_j$ $(j=1,\ldots,n)$. We often assume an additional condition: (5) for each cone $C_j$ over $H_j$, the intersection point of the two disks of $C_j$ in condition~(4) is the unique maximal point of each of the disks in condition~(3).
Similarly, for an H-trivial link $L'$ with trivial knot components $O'_i$ $(i=1,\ldots,m')$ and Hopf link components $H'_j$ $(j=1,\ldots,n')$, where $m' \geq 0$ and $n' \geq 0$, let $L'_{\vee }[a,b]$ be the disjoint union of a cone system $\Delta'_i$ in $\mathbb R^3[a,b]$ with base $O'_i$ in $\mathbb R^3[a]$ $(i=1,\ldots,m')$ and a cone system $C'_j$ in $\mathbb R^3[a,b]$ with base $H'_j$ in $\mathbb R^3[a]$ $(j=1,\ldots,n')$, where each disk in the cone system has a unique minimal point. Let $D$ be an H-admissible marked graph diagram. Consider the union $$\mathcal S(D)=L'_\vee[-2,-1]\cup\mathcal S^1_{-1}\cup L_\wedge[1,2],$$ which is an immersed surface-link in $\mathbb R^4$, where $L$ and $L'$ are the upper and lower resolutions of $D$, respectively. By an argument in \cite{KamKawamu, KSS} it is seen that the ambient isotopy class of the immersed surface-link $\mathcal S(D)$ is uniquely determined by $D$. We call the immersed surface-link $\mathcal S(D)$ the {\it immersed surface-link constructed from} $D$. \begin{thm}\label{prop-adm} Let $\mathcal L$ be an immersed surface-link. There is an H-admissible marked graph diagram $D$ such that $\mathcal L$ is ambient isotopic to $\mathcal S(D)$. \end{thm} In the situation of this theorem, we say that $\mathcal L$ is {\it presented} by $D$. \begin{proof} The following argument is based on an argument in \cite{KSS} where embedded and oriented surface-links are discussed (cf. \cite{KamBook2017}). Let $\mathcal L$ be an immersed surface-link. Let $d_1, \dots, d_n$ be the double points of $\mathcal L$, and let $N(d_1), \dots, N(d_n)$ be regular neighborhoods of them. Moving $\mathcal L$ by an ambient isotopy, we may assume the following conditions: \begin{itemize} \item[(1)] All critical points of $\mathcal L$, except the double points, with respect to the projection $\mathbb R^4 = \mathbb R^3 \times \mathbb R \to \mathbb R$ are elementary critical points, that is, maximal points, saddle points and minimal points.
\item[(2)] $\mathcal L$ is in $\mathbb R^3 (-2,2)$. \item[(3)] All double points are in $\mathbb R^3 [1]$. \item[(4)] For each $i$ $(i=1, \dots, n)$, $N(d_i) = N^3(d_i) [1-\epsilon, 1 + \epsilon]$ for a $3$-disk $N^3(d_i)$, and $N(d_i) \cap \mathcal L $ is the cone of a Hopf link $H_i \subset ({\rm int} N^3(d_i))[1-\epsilon]$ with the cone point $d_i \in \mathbb R^3 [1]$. Here $\epsilon$ is a sufficiently small positive number. The $3$-disks $N^3(d_1), \dots, N^3(d_n)$ are mutually disjoint. \end{itemize} Move the double points into $\mathbb R^3[3]$ so that condition (4) is preserved, although the $3$-disk $N^3(d_i)$ may change and the time level of $d_i$ changes from $1$ to $3$; that is: \begin{itemize} \item[(1)] All critical points of $\mathcal L$, except the double points, with respect to the projection $\mathbb R^4 = \mathbb R^3 \times \mathbb R \to \mathbb R$ are elementary critical points, that is, maximal points, saddle points and minimal points. \item[(2)] $\mathcal L$ is in $\mathbb R^3 (-2,4)$. \item[(3)] All double points are in $\mathbb R^3 [3]$. All maximal, saddle and minimal points are in $\mathbb R^3 (-2,2)$. \item[(4)] For each $i$ $(i=1, \dots, n)$, $N(d_i) = N^3(d_i) [3-\epsilon, 3 + \epsilon]$ for a $3$-disk $N^3(d_i)$, and $N(d_i) \cap \mathcal L $ is the cone of a Hopf link $H_i \subset ({\rm int} N^3(d_i))[3-\epsilon]$ with the cone point $d_i \in \mathbb R^3 [3]$. The $3$-disks $N^3(d_1), \dots, N^3(d_n)$ are mutually disjoint. \end{itemize} Let $p_1, \dots, p_m$ be the maximal points of $\mathcal L$, $q_1, \dots, q_{m'}$ be the minimal points of $\mathcal L$, and let $N(p_1), \dots, N(p_m), N(q_1), \dots, N(q_{m'})$ be regular neighborhoods of them.
Moving $\mathcal L$ by an ambient isotopy, we may assume the following conditions: \begin{itemize} \item[(1)] All critical points of $\mathcal L$, except the double points, with respect to the projection $\mathbb R^4 = \mathbb R^3 \times \mathbb R \to \mathbb R$ are elementary critical points, that is, maximal points, saddle points and minimal points. \item[(2)] $\mathcal L$ is in $\mathbb R^3 (-4,4)$. \item[(3)] All double points and all maximal points are in $\mathbb R^3 [3]$. All minimal points are in $\mathbb R^3 [-3]$. All saddle points are in $\mathbb R^3 (-2, 2)$. \item[(4)] For each $i$ $(i=1, \dots, n)$, $N(d_i) = N^3(d_i) [3-\epsilon, 3 + \epsilon]$ for a $3$-disk $N^3(d_i)$, and $N(d_i) \cap \mathcal L $ is the cone of a Hopf link $H_i \subset ({\rm int} N^3(d_i))[3-\epsilon]$ with the cone point $d_i$. The $3$-disks $N^3(d_1), \dots, N^3(d_n)$ are mutually disjoint. \item[(5)] For each $i$ $(i=1, \dots, m)$, $N(p_i) = N^3(p_i) [3-\epsilon, 3 + \epsilon]$ for a $3$-disk $N^3(p_i)$, and $N(p_i) \cap \mathcal L $ is the cone of a trivial knot $O_i \subset ({\rm int} N^3(p_i))[3-\epsilon]$ with the cone point $p_i$. The $3$-disks $N^3(p_1), \dots, N^3(p_m)$ are mutually disjoint, and also disjoint from $N^3(d_1), \dots, N^3(d_n)$. \item[(6)] For each $i$ $(i=1, \dots, m')$, $N(q_i) = N^3(q_i) [-3-\epsilon, -3 + \epsilon]$ for a $3$-disk $N^3(q_i)$, and $N(q_i) \cap \mathcal L $ is the cone of a trivial knot $O'_i \subset ({\rm int} N^3(q_i))[-3+ \epsilon]$ with the cone point $q_i$. The $3$-disks $N^3(q_1), \dots, N^3(q_{m'})$ are mutually disjoint. \end{itemize} Finally, applying the argument in \cite{KSS}, we can move all saddle points into the same hyperplane $\mathbb R^3 [0]$. Then the result follows. \end{proof} \begin{rmk} Theorem 1.4 of \cite{KamKawamu} states that any immersed and oriented surface-link is ambient isotopic to an immersed surface-link satisfying a certain condition.
Applying the argument in \cite{KSS}, we can obtain an immersed surface-link as required in Theorem~\ref{prop-adm}. \end{rmk} \section{Moves on marked graph diagrams} \label{sect-moves} We discuss moves on marked graph diagrams which preserve the ambient isotopy classes of the immersed surface-links presented by the diagrams. \begin{figure} \caption{Moves of Type I} \label{fig-moves-type-I} \end{figure} \begin{figure} \caption{Moves of Type II} \label{fig-moves-type-II} \end{figure} The moves depicted in Figs.~\ref{fig-moves-type-I} and~\ref{fig-moves-type-II} were introduced by Yoshikawa \cite{Yo} as moves on marked graph diagrams which do not change the ambient isotopy classes of their presenting surface-links. The moves and their mirror images are called {\it Yoshikawa moves}. Furthermore, we call the moves in Fig.~\ref{fig-moves-type-I} (Fig.~\ref{fig-moves-type-II}) and their mirror images {\it moves of type I} ({\it moves of type II}). Moves of type I do not change the ambient isotopy class of a marked graph in $\mathbb R^3$, while moves of type II do. Note that Yoshikawa moves preserve H-admissibility and admissibility. It is known that two admissible marked graph diagrams present ambient isotopic surface-links if and only if they are related by Yoshikawa moves (cf. \cite{KeKU, Sw, Yo}). Let $D$ be a link diagram of an H-trivial link $L$. A crossing point $p$ of $D$ is an {\it unlinking crossing point} if it is a crossing between the two components of the same Hopf link of $L$ and if the crossing change at $p$ turns that Hopf link into a trivial link. \begin{defn} Let $D$ be an H-admissible marked graph diagram and let $D_-$ and $D_+$ be the diagrams of the lower resolution $L_-(D)$ and the upper resolution $L_+(D)$, respectively. A crossing point $p$ of $D$ is a {\it lower singular point} (or an {\it upper singular point}, resp.) if $p$ is an unlinking crossing point of $D_-$ (or $D_+$, resp.).
\end{defn} We introduce new moves for H-admissible marked graph diagrams. They are the moves $\Omega_{9}$, $\Omega_{9}'$ and $\Omega_{10}$ in Fig.~\ref{fig-moves-type-III} and their mirror images, which we call {\it moves of type III}. Here we assume that the moves of type III are defined only if the two diagrams appearing before and after the move are H-admissible. For example, for the move $\Omega_{9}$ (or $\Omega_{9}'$, resp.) in Fig.~\ref{fig-moves-type-III}, we require that the component $l$ in the resolution $L_+(D)$ (or $L_-(D)$, resp.) is trivial and that $p$ is an upper (or lower, resp.) singular point. \begin{figure} \caption{Moves of Type III: $\Omega_9,\Omega_9'$ and $\Omega_{10}$} \label{fig-moves-type-III} \end{figure} \begin{defn} The {\it generalized Yoshikawa moves} are Yoshikawa moves (moves of type I and II) and moves of type III introduced above. Two marked graph diagrams are {\it stably equivalent} if they are related by a finite sequence of generalized Yoshikawa moves. \end{defn} \begin{thm}\label{thm-equiv} Let $\mathcal L$ and $\mathcal L'$ be immersed surface-links presented by marked graph diagrams $D$ and $D'$, respectively. If $D$ and $D'$ are stably equivalent, then $\mathcal L$ and $\mathcal L'$ are ambient isotopic. \end{thm} \begin{proof} It suffices to show that $\mathcal L$ and $\mathcal L'$ are ambient isotopic when $D'$ is obtained from $D$ by a move of $\Omega_9,$ $\Omega_9'$ or $\Omega_{10}.$ The moves $\Omega_9$ and $\Omega_9'$ correspond to a creation or removal of a saddle point, and the move $\Omega_{10}$ corresponds to a change of the level of a double point singularity. See Fig.~\ref{fig-nm4}, which shows partial pictures of broken surface diagrams in $3$-space in the sense of \cite{CS1}. (In \cite{CS1}, embedded surfaces are discussed.
However, broken surface diagrams can also be considered for immersed surface-links, and if two broken surface diagrams are ambient isotopic in $3$-space, then the corresponding immersed surface-links are ambient isotopic in $4$-space.) Since the moves $\Omega_9$, $\Omega_9'$ and $\Omega_{10}$ do not change the ambient isotopy classes of broken surface diagrams in $3$-space, we see that $\mathcal L$ and $\mathcal L'$ are ambient isotopic. \end{proof} \begin{figure} \caption{Immersed surface-links presented by $\Omega_9$, $\Omega_9'$, and $\Omega_{10}$} \label{fig-nm4} \end{figure} Let $\Omega_9^*$ and $\Omega_9'^*$ be the moves depicted in Fig.~\ref{fig-omoves} or their mirror images. They are equivalent to $\Omega_9$ and $\Omega_9'$ modulo Yoshikawa moves (of type I) as shown in Fig.~\ref{fig-m9a}. \begin{figure} \caption{Moves $\Omega_9^*$ and $\Omega_9'^*$} \label{fig-omoves} \end{figure} \begin{figure} \caption{Moves $\Omega_9^*$, $\Omega_9'^*$ are equivalent to $\Omega_9$, $\Omega_9'$.} \label{fig-m9a} \end{figure} We conclude the paper by proposing a question. \begin{que} Suppose that $D$ and $D'$ are marked graph diagrams presenting ambient isotopic immersed surface-links. Is $D$ stably equivalent to $D'$? \end{que} \end{document}
\begin{document} \title[Products of unbounded Bloch functions] {Products of unbounded Bloch functions} \author[Daniel Girela]{Daniel Girela} \address{ An\'{a}lisis Matem\'{a}tico\\ Facultad de Ciencias\\ Universidad de M\'{a}laga\\ 29071 M\'{a}laga\\ Spain} \email{[email protected]} \thanks{This research is supported in part by a grant from \lq\lq El Ministerio de Econom\'{\i}a y Competitividad\rq\rq , Spain (PGC2018-096166-B-I00) and by grants from la Junta de Andaluc\'{\i}a (FQM-210 and UMA18-FEDERJA-002).} \subjclass{Primary 30D45; Secondary 30H30} \keywords{Bloch function, Normal function, Blaschke product, Inner function, Minimal Besov space, Analytic mean Lipschitz spaces} \date{October 25, 2019} \begin{abstract} We give new constructions of pairs of functions $(f, g)$, analytic in the unit disc, with $g\in H^\infty $ and $f$ an unbounded Bloch function, such that the product $g\cdot f$ is not a Bloch function. \end{abstract} \maketitle \section{Introduction and statements of the results}\label{intro} Let $\mathbb D=\{z\in\mathbb C:|z|<1\}$ denote the open unit disc in the complex plane $\mathbb C$. The space of all analytic functions in $\mathbb D $ will be denoted by $\mathcal Hol (\mathbb D )$. \par For $0<p\le\infty $, the classical Hardy space $H\sp p $ is defined as the set of all $f\in\mathcal Hol (\mathbb D )$ for which $$\|f\|_{H\sp p}\ig \sup_{0<r<1}M\sb p(r,f)<\infty ,$$ where, for $0<r<1$ and $f\in \mathcal Hol (\mathbb D )$, \begin{align*}M_p(r,f)\,=\,&\left (\frac{1}{2\pi}\int_0^{2\pi} |f(re\sp{i\theta })|\sp p d\theta\right )\sp{1/p}, \,(0<p<\infty ); \\M_\infty (r,f)\,=\,&\sup _{\theta \in \mathbb R}|f(re\sp{i\theta })|. \end{align*} We mention \cite{Du:Hp} as a general reference for the theory of Hardy spaces. \par A function $f\in\mathcal Hol (\mathbb D )$ is said to be a Bloch function if $$\Vert f\Vert _{\mathcal B}\ig \vert f(0)\vert + \sup _ {z\in \mathbb D}\,(1-\vert z\vert\sp 2)\vert f\sp\prime (z)\vert <\infty .
$$ The space of all Bloch functions is denoted by $\mathcal B$; it is a Banach space with the norm $\Vert \cdot \Vert _{\mathcal B}$ just defined. It is well known that $$H\sp\infty\subsetneq\mathcal B.$$ A typical example of an unbounded Bloch function is the function $f$ defined by $$f(z)\,=\,\log \frac{1}{1-z},\quad z\in\mathbb D.$$ We mention \cite{ACP} as a general reference for the theory of Bloch functions. \par A function $f$ which is meromorphic in $\mathbb D $ is said to be a normal function in the sense of Lehto and Virtanen \cite{LV} if $$\sup _{z\in \mathbb D }(1-\vert z\vert \sp 2)\frac{\vert f\sp\prime (z)\vert } {1+\vert f(z)\vert \sp 2}<\infty .$$ For simplicity, we shall let $\mathcal N $ denote the set of all holomorphic normal functions in $\mathbb D $. It is clear that any Bloch function is a normal function, that is, we have $\mathcal B\subset \mathcal N$. We refer to \cite{ACP}, \cite{LV} and \cite{Po:UF} for the theory of normal functions. In particular, we remark here that if $f\in \mathcal N$, $\xi \in \partial \mathbb D$ and $f$ has the asymptotic value $L$ at $\xi$ (that is, there exists a curve $\gamma $ in $\mathbb D$ ending at $\xi $ such that $f(z)\to L$, as $z\to \xi $ along $\gamma $), then $f$ has the non-tangential limit $L$ at $\xi $. \par Let us recall that if a sequence of points $\{a_n\} $ in the unit disc satisfies the \emph{Blaschke condition\/}: $\sum\sb{n=1}\sp{\infty} (1-|a\sb n|)<\infty $, the corresponding Blaschke product $B $ is defined as \[ B(z)=\prod\sb{n=1}\sp{\infty} \frac{|a\sb{n}|}{a\sb{n}} \frac{a\sb{n}-z}{1-\overline{a\sb{n}}z} \,. \] Such a product is analytic in $\mathbb D $. In fact, it is an inner function, that is, an $H^\infty $-function with radial limit of absolute value $1$ at almost every point of $\partial \mathbb D$ (cf. \cite[Chapter~2]{Du:Hp}).
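As a numerical aside (not part of the paper), the defining properties of a finite Blaschke product are easy to sanity-check: each factor $\frac{|a|}{a}\frac{a-z}{1-\overline a z}$ has modulus less than $1$ inside $\mathbb D$ and modulus exactly $1$ on $\partial\mathbb D$. The following minimal sketch (the helper name `blaschke` and the sample zeros are my own, not from the paper) verifies this for a product with three factors:

```python
import cmath

def blaschke(zeros, z):
    """Finite Blaschke product with the given (nonzero) zeros, evaluated at z."""
    prod = complex(1)
    for a in zeros:
        prod *= (abs(a) / a) * (a - z) / (1 - a.conjugate() * z)
    return prod

zeros = [0.5 + 0j, 0.3 + 0.4j, -0.7j]
# |B(z)| < 1 strictly inside the disc ...
for z in [0j, 0.2 + 0.1j, -0.8 + 0j]:
    assert abs(blaschke(zeros, z)) < 1
# ... and |B(z)| = 1 on the unit circle (a finite product extends continuously there)
for t in range(8):
    z = cmath.exp(2j * cmath.pi * t / 8)
    assert abs(abs(blaschke(zeros, z)) - 1) < 1e-12
```

For an infinite Blaschke sequence the same factors are used, and the Blaschke condition guarantees convergence of the product in $\mathbb D$.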
\par If $\{a_n\} $ is a Blaschke sequence and there exists $\delta >0$ such that $$\prod_{m\neq n} \left\vert \frac{a_n-a_m}{1-\overline a_na_m}\right \vert\ge \delta ,\quad\text{for all $n$,}$$ we say that the sequence $\{a_n\} $ is \emph{uniformly separated} and that $B$ is an \emph{interpolating Blaschke product}. Equivalently, \begin{equation}\label{inter} B \,\, {\it is \, an \, interpolating\, Blaschke\, product} \,\,\, \Leftrightarrow\,\,\, \inf _{n\ge 1}(1-\vert a_n\vert \sp 2)\vert B'(a_n)\vert >0.\end{equation} We refer to \cite[Chapter~9]{Du:Hp} and \cite[Chapter~VII]{Gar} for the basic properties of interpolating Blaschke products. In particular, we recall that an exponential sequence is uniformly separated and that the converse holds if all the $a_k$'s are positive. \par Lappan \cite[Theorem~3]{La} proved that if $B$ is an interpolating Blaschke product and $f$ is a normal analytic function in $\mathbb D $, the product $B\cdot f$ need not be normal. Lappan used this to show that $\mathcal N$ is not a vector space. \par Lappan's result is a consequence of the following easy fact: if $B$ is an interpolating Blaschke product whose sequence of zeros is $\{ a_n\} $ and $G$ is an analytic function in $\mathbb D$ with $G(a_n)\to\infty$, then $f=B\cdot G$ is not a normal function (and hence it is not a Bloch function either). This result has been used by several authors (see \cite{Cam,Ya1,Ya2,Gi:Pri,Gi:Ill,BGM}) to construct distinct classes of non-normal functions. \par The author and Su\'{a}rez proved in \cite{GS} a result of this kind dealing with Blaschke products with zeros in a Stolz angle but not necessarily interpolating, improving a result of \cite{GGP}. Namely, Theorem\,\@1 of \cite{GS} is the following. \begin{other}\label{Th-GGP} Let $B$ be an infinite Blaschke product whose sequence of zeros $\{ a_n\} $ is contained in a Stolz angle with vertex at $1$ and let $G$ be analytic in $\mathbb D$ with $G(z)\to\infty$, as $z\to 1$. 
Then the function $f=B\cdot G$ is not a normal function.\end{other} \par It is natural to ask whether it is possible to prove results similar to those described, replacing \lq\lq Blaschke products\rq\rq \, with some other classes of $H^\infty $-functions. Our first result in this paper deals with the atomic singular inner function. \begin{thm}\label{Bloch-inner} Let $S$ be the atomic singular inner function defined by $$S(z)=\exp\left (-\frac{1+z}{1-z}\right ),\quad z\in \mathbb D,$$ and let $f$ be a Bloch function with $$\lim_{z\to 1}\vert f(z)\vert =\infty .$$ Then the function $F$ defined by $F(z)=S(z)f(z)$ is not a normal function (hence, it is not a Bloch function).\end{thm} \par In particular, the function $f$ defined by $f(z)=S(z)\cdot \log \frac{1}{1-z}$ ($z\in \mathbb D$) is not normal. Since a Bloch function satisfies $M_\infty (r,f)\,=\,\og \left (\log\frac{1}{1-r}\right )$, Theorem\,\@\ref{Bloch-inner} follows from the following result. \begin{thm}\label{unbound-inner} Let $S$ be the singular inner function defined by $S(z)=\exp\left (-\frac{1+z}{1-z}\right )$ ($z\in \mathbb D$) and let $f$ be an analytic function in $\mathbb D$ satisfying: \begin{itemize}\item[(i)] $\lim_{z\to 1}\vert f(z)\vert =\infty $.\item[(ii)] $\vert f(r)\vert \,=\,\op \left (\exp\frac{1+r}{1-r}\right )$,\,as $r\to 1^-$. \end{itemize} Then the function $F$ defined by $F(z)=S(z)f(z)$ is not a normal function (hence, it is not a Bloch function).\end{thm} \par For $g\in\mathcal Hol (\mathbb D )$, the multiplication operator $M_g$ is defined by \[ M_g(f)(z)\ig g(z)f(z),\quad f\in \mathcal Hol (\mathbb D),\,\, z\in \mathbb D. \] Let us recall that if $X$ and $Y$ are two spaces of analytic functions in $\mathbb D $ and $g\in\mathcal Hol (\mathbb D )$ then $g$ is said to be a multiplier from $X$ to $Y$ if $M_g(X)\subset Y$. The space of all multipliers from $X$ to $Y$ will be denoted by $M(X,Y)$ and $M(X)$ will stand for $M(X,X)$.
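Returning for a moment to the standard example $f(z)=\log\frac{1}{1-z}$ of an unbounded Bloch function: since $f'(z)=1/(1-z)$ and $|1-z|\ge 1-|z|$, one has $(1-|z|^2)|f'(z)|\le 1+|z|\le 2$. This bound can be sanity-checked numerically; the sketch below (sample points are my own, not from the paper) is purely illustrative:

```python
import cmath

# f(z) = log(1/(1-z)) is unbounded on [0, 1), yet its Bloch seminorm is finite:
# (1 - |z|^2) |f'(z)| = (1 - |z|^2) / |1 - z| <= 1 + |z| <= 2.
def bloch_term(z):
    fprime = 1 / (1 - z)  # derivative of log(1/(1-z))
    return (1 - abs(z) ** 2) * abs(fprime)

samples = [0j, 0.9 + 0j, 0.99 + 0j, 0.5 * cmath.exp(1j), -0.7 + 0.2j]
for z in samples:
    assert bloch_term(z) <= 2
# yet f itself blows up along the radius (0, 1)
assert abs(cmath.log(1 / (1 - 0.999999))) > 10
```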
Brown and Shields \cite{BS} characterized the space of multipliers of the Bloch space $M(\mathcal B)$ as follows. \begin{other}\label{mult-Bloch}A function $g\in\mathcal Hol (\mathbb D)$ is a multiplier of the Bloch space if and only if $g\in H^\infty \cap \mathcal B_{\log }$, where $\mathcal B_{\log }$ is the Banach space of those functions $f\in\mathcal Hol (\mathbb D )$ which satisfy \begin{equation}\label{Blog}\Vert f\Vert _{B_{\log }}\ig \vert f(0)\vert +\sup_{z\in \mathbb D}(1-\vert z\vert ^2)\left (\log \frac{2}{1-\vert z\vert ^2}\right )\vert f^\prime (z)\vert <\infty .\end{equation} \end{other} Thus, if $g\in H^\infty \setminus \mathcal B_{\log }$ there exists a function $f\in \mathcal B\setminus H^\infty $ such that $g\cdot f\notin \mathcal B$. It is easy to see that the analytic Lipschitz spaces $\Lambda _\alpha $ ($0<\alpha \le 1$) and the mean Lipschitz spaces $\Lambda ^p_{\alpha }$ ($1<p<\infty $, $1/p<\alpha \le 1$) are contained in $M(\mathcal B)$. We refer to \cite[Chapter\,\@5]{Du:Hp} for the definitions of these spaces; let us simply recall here that $$\Lambda ^1_1=\{ f\in \mathcal Hol (\mathbb D) : f^\prime \in H^1\} .$$ On the other hand, Theorem\,\@1 of \cite{GGH} shows the existence of a Jordan domain $\Omega $ with rectifiable boundary and $0\in \Omega $, and such that the conformal mapping $g$ from $\mathbb D$ onto $\Omega $ with $g(0)=0$ and $g^\prime (0)>0$ does not belong to $\mathcal B_{\log }$. For this function $g$ we have that $g\in \Lambda ^1_1$ but $g$ is not a multiplier of $\mathcal B$. Thus we have: $$\Lambda ^1_1\,\not\subset \,M(\mathcal B).$$ \par In view of this and the results involving Blaschke products that we have mentioned above, it is natural to ask the following question: \begin{ques}\label{question} Is it true that for any given $f\in \mathcal B\setminus H^\infty $ there exists a function $g\in \Lambda ^1_1$ such that $g\cdot f\notin \mathcal B$?
\end{ques} \par We shall show that the answer to this question is affirmative. Actually we shall prove a stronger result. \par We let $B^1$ denote the minimal Besov space which consists of those functions $f\in \mathcal Hol (\mathbb D)$ such that $$\int_{\mathbb D}\vert f^{\prime \prime }(z)\vert \,dA(z)\,<\,\infty .$$ Here $dA$ denotes the area measure on $\mathbb D$. Alternatively, the space $B^1$ can be characterized as follows (see \cite{AFP}): \par For $f\in\mathcal Hol (\mathbb D )$, we have that $f\in B^1$ if and only if there exist a sequence of points $\{ a_k\} _{k=1}^\infty \subset \mathbb D $ and a sequence $\{ \lambda_k\} _{k=0}^\infty \in \ell ^1$ such that \begin{equation}\label{B1def} f(z)=\lambda _0+\sum_{k=1}^\infty\lambda_k\varphi_{a_k}(z),\quad z\in\mathbb D.\end{equation} Here, for $a\in \mathbb D$, $\varphi _a:\mathbb D\rightarrow \mathbb D$ denotes the M\"{o}bius transformation defined by \begin{equation}\label{phia}\varphi_a(z)=\frac{a-z}{1-\overline az},\quad z\in \mathbb D .\end{equation} It is well known that $B^1\subset \Lambda^1_1$ (see \cite{AFP,DGV1}) and then our next result implies that the answer to Question\,\@\ref{question} is affirmative. \par \begin{thm}\label{not-mul-B1} If $f\in \mathcal B\setminus H^\infty $ then there exists $g\in B^1$ such that $g\cdot f\notin \mathcal B$. \end{thm} \par The proofs of Theorem\,\@\ref{unbound-inner} and Theorem\,\@\ref{not-mul-B1} will be presented in Section\,\@\ref{proofs}. We close this section by noticing that throughout the paper we shall be using the convention that $C=C(p, \alpha ,q,\beta , \dots )$ will denote a positive constant which depends only upon the displayed parameters $p, \alpha , q, \beta \dots $ (which often will be omitted) but not necessarily the same at different occurrences.
Moreover, for two real-valued functions $E_1, E_2$ we write $E_1\lesssim E_2$, or $E_1\gtrsim E_2$, if there exists a positive constant $C$ independent of the arguments such that $E_1\leq C E_2$, respectively $E_1\ge C E_2$. If we have $E_1\lesssim E_2$ and $E_1\gtrsim E_2$ simultaneously then we say that $E_1$ and $E_2$ are equivalent and we write $E_1\asymp E_2$. \par \section{The proofs}\label{proofs} \par \begin{Pf}{\it Theorem\,\@\ref{unbound-inner}.} For $0<a<1$, set $\Gamma _a=\{ z\in \mathbb D : \vert z-a\vert ={1-a}\} $. If $z\in \Gamma _a$ then $\Real \frac{1+z}{1-z}\,=\,\frac{a}{1-a}$ and, hence, \begin{equation*}\label{modSGammaa}\vert S(z)\vert \,=\,\exp\left (\frac{-a}{1-a}\right ),\quad z\in \Gamma _a.\end{equation*} This, together with (i), implies that $$F(z)\to \infty ,\quad \text{as $z\to 1$ along $\Gamma _a$}.$$ Hence $F$ has the asymptotic value $\infty $ at $1$. On the other hand, (ii) implies that $F$ has the radial limit $0$ at $1$. Then it follows that $F$ is not normal. \end{Pf} \par \begin{Pf}{\it Theorem\,\@\ref{not-mul-B1}.} Take $f\in \mathcal B\setminus H^\infty $. Set $$\varphi (r)\,=\,M_\infty (r,f),\quad 0<r<1.$$ Clearly, $\varphi (r)\to\infty $, as $r\to 1$ and it is well known that $$\varphi (r)\,=\,\og \left (\log \frac{1}{1-r}\right ).$$ This implies that \begin{equation}\label{op-square}(1-r)^2\varphi (r)\to 0,\quad \text{as $r\to 1$}.\end{equation} Choose a sequence of numbers $\{ r_n\} \subset (0,1)$ satisfying the following properties: \begin{itemize}\item[(i)] $\{ r_n\} $ is increasing. \item[(ii)] $(1-r_n)^2\varphi (r_n)\,=\,\op \left (\left ( \frac{1-r_{n-1}}{n}\right )^2\right ),\quad \text{as $n\to \infty $.}$ \item[(iii)] $\varphi (r_n)\,\ge\,2\varphi (r_{n-1})$, for all $n$. \item[(iv)] $\frac{1-r_{n+1}}{1-r_n}\to 0$, as $n\to \infty $. \end{itemize} The existence of such a sequence is clear, bearing in mind (\ref{op-square}) and the fact that $\varphi (r)\to\infty $, as $r\to 1$.
\par Set $$\lambda _k\,=\,\varphi (r_k)^{-1/2},\quad k=1, 2, \dots .$$ For each $k$, take $a_k\in \mathbb D$ with $\vert a_k\vert \,=\,r_k$ and $\vert f(a_k)\vert \,=\,\varphi (r_k)$. Using (iii), it follows that \begin{equation}\label{sumlam}\sum_{k=1}^\infty \lambda _k\,<\,\infty . \end{equation} Define \begin{equation}\label{def-f}g(z)\,=\,\sum_{k=1}^\infty \lambda _k\varphi _{a_k}(z),\quad z\in \mathbb D.\end{equation} Using (\ref{sumlam}) we see that the sum in (\ref{def-f}) defines an analytic function in $\mathbb D$ which belongs to $B^1$. Set $$F(z)=g(z)f(z),\quad z\in \mathbb D.$$ Since $g\in H^\infty $ and $f\in\mathcal B$ we see that \begin{equation}\label{gfprime}\vert g(a_n)f^\prime (a_n)\vert \,=\,\og \left (\frac{1}{1-\vert a_n\vert }\right ).\end{equation} On the other hand, \begin{equation}\label{gprime}\vert g^\prime (a_n)f(a_n)\vert \,\gtrsim \,I\,-\,II\,-\,III,\end{equation} where \begin{align*} I\,=&\,\vert f(a_n)\vert \lambda_n \vert \varphi_{a_n}'(a_n)\vert,\,\,\,II\,\lesssim \,\vert f(a_n)\vert \sum_{k=1}^{n-1}\lambda _k\frac{1-\vert a_k\vert ^2}{\vert 1-\overline {a_k}\,a_n\vert ^2}, \\ III\,\lesssim &\,\vert f(a_n)\vert \sum_{k=n+1}^{\infty }\lambda _k \frac{1-\vert a_k\vert ^2}{\vert 1-\overline {a_k}\,a_n\vert ^2}.
\end{align*} Clearly, \begin{equation}\label{I} I=\,\vert f(a_n)\vert \lambda_n \vert \varphi_{a_n}'(a_n)\vert \,\asymp \,\frac{\varphi (r_n)^{1/2}}{1-r_n}.\end{equation} Using the definitions, the facts that $\varphi $ and the sequence $\{ r_n\} $ are increasing, and (ii), we obtain \begin{align}\label{II}II\,\lesssim \,&\vert f(a_n)\vert \sum_{k=1}^{n-1}\lambda _k\frac{1-\vert a_k\vert ^2}{\vert 1-\overline {a_k}\,a_n\vert ^2}\\ \,\lesssim \,& \varphi (r_n)\sum_{k=1}^{n-1}\varphi (r_k)^{-1/2}\frac{1-\vert a_k\vert }{[(1-\vert a_k\vert )+(1-\vert a_n\vert )]^2} \nonumber \\ \,\lesssim \,&\varphi (r_n)\sum_{k=1}^{n-1}\frac{1}{\varphi (r_k)^{1/2}(1-r_k)} \nonumber \\ \,\lesssim \,&\frac{n\varphi (r_n)}{1-r_{n-1}} \nonumber \\ \,=\,& \frac{\varphi (r_n)^{1/2}}{1-r_n}\,\varphi (r_n)^{1/2}\,\frac{n(1-r_n)}{1-r_{n-1}}\nonumber \\ \,=\,&\op \left (\frac{\varphi (r_n)^{1/2}}{1-r_n}\right )\nonumber. \end{align} Likewise, using the definitions, the facts that $\varphi $ and the sequence $\{ r_n\} $ are increasing, (iii), and (iv), we obtain \begin{align}\label{III} III\,\lesssim \,&\varphi (r_n)\sum_{k=n+1}^\infty \frac{\varphi (r_k)^{-1/2}(1-r_k)}{[(1-r_k)+(1-r_n)]^2} \\ \,\lesssim \,&\varphi (r_n)\sum_{k=n+1}^\infty \varphi (r_k)^{-1/2}\frac{1-r_k}{(1-r_n)^2}\nonumber \\ \,\lesssim \,&\varphi (r_n)\frac{1-r_{n+1}}{(1-r_n)^2}\sum_{k=n+1}^\infty \varphi (r_k)^{-1/2}\nonumber \\ \,\lesssim \,& \frac{\varphi (r_n)^{1/2}}{1-r_n}\cdot \frac{1-r_{n+1}}{1-r_n}\nonumber \\ \,=\,&\op \left (\frac{\varphi (r_n)^{1/2}}{1-r_n}\right ).\nonumber \end{align} Using (\ref{I}), (\ref{II}), (\ref{III}), and the fact that $\lim\varphi (r_n)=\infty $, we deduce that $$(1-\vert a_n\vert )\vert g^\prime (a_n)f(a_n)\vert \,\rightarrow \,\infty ,\quad\text{as $n\to\infty$}.$$ This and (\ref{gfprime}) imply that $F$ is not a Bloch function. \end{Pf} \end{document}
\begin{document} \begin{abstract} We consider generalisations of Thompson's group $V$, denoted $V_r(\Sigma)$, which also include the groups of Higman, Stein and Brin. We show that, under some mild hypotheses, $V_r(\Sigma)$ is the full automorphism group of a Cantor-algebra. Under some further minor restrictions, we prove that these groups are of type $\operatorname{F}_\infty$ and that this implies that also centralisers of finite subgroups are of type $\operatorname{F}_\infty$. \end{abstract} \maketitle \section{Introduction} \noindent Thompson's group $V$ is defined as a homeomorphism group of the Cantor set. The group $V$ has many interesting generalisations such as the Higman-Thompson groups $V_{n,r}$, \cite{higman}, Stein's generalisations \cite{stein} and Brin's higher dimensional Thompson groups $sV$ \cite{brin1}. All these groups contain any finite group, contain free abelian groups of infinite rank, are finitely presented and of type $\operatorname{FP}_\infty$ (see work by several authors in \cite{brown2, fluch++, hennigmatucci, desiconbrita2, stein}). The first and third authors together with Kochloukova \cite{desiconbrita2, britaconcha} further generalise these groups, denoted by $V_r(\Sigma)$ or $G_r(\Sigma)$, as automorphism groups of certain Cantor-algebras. We shall use the notation $V_r(\Sigma)$ in this paper. We show in Theorem \ref{fullauto} that they are the full automorphism groups of these algebras. Fluch, Schwandt, Witzel and Zaremsky \cite{fluch++} use Morse-theoretic methods to prove that Brin's groups $sV$ are of type $\operatorname{F}_\infty.$ By adapting their methods we show in Theorem \ref{FPinfty} that under some restrictions on the Cantor-algebra, which still cover all the families mentioned above, $V_r(\Sigma)$ is of type $\operatorname{F}_\infty.$ We also give some constructions of further examples.
Bleak \emph{et al.} \cite{matuccietc} and the first and the third authors \cite{britaconcha} show independently that centralisers of finite subgroups $Q$ in $V_{n,r}$ and $V_r(\Sigma)$ can be described as extensions $$K \rightarrowtail C_{V_r(\Sigma)}(Q) \twoheadrightarrow V_{r_1}(\Sigma) \times \ldots \times V_{r_t}(\Sigma),$$ where $K$ is locally finite and $r_1,...,r_t$ are integers uniquely determined by $Q$. It was conjectured in \cite{britaconcha} that these centralisers are of type $\operatorname{F}_\infty$ if the groups $V_r(\Sigma)$ are. In Section \ref{centralisersection} we expand the description of the centralisers given in \cite{matuccietc, britaconcha}, which allows us to prove that the conjecture holds true. This also implies that any of the generalised $V_r(\Sigma)$ which are of type $\operatorname{F}_\infty$ admit a classifying space for proper actions, which is a mapping telescope of cocompact classifying spaces for smaller families of finite subgroups. In other words, these groups are of Bredon type quasi-$\operatorname{F}_\infty$. For definitions and background the reader is referred to \cite{britaconcha}. We conclude with giving a description of normalisers of finite subgroups in Section \ref{normaliser}. These turn up in computations of the source of the rationalised Farrell-Jones assembly map, where one needs to compute not only centralisers, but also the Weyl-groups $W_G(Q)=N_G(Q)/C_G(Q).$ For more detail see \cite{lueck-reich05}, or \cite{geoghegan-varisco} for an example where these are computed for Thompson's group $T$. \subsection*{Acknowledgments} We would like to thank Dessislava Kochloukova for helpful discussions regarding Section \ref{fpinfty}, Claas R\"over for getting us to think about Theorem \ref{fullauto} and the anonymous referee for very carefully reading an earlier version of this paper. We are also indebted to this referee for pointing out Lemma \ref{auxmainfinfty} to us. 
\section{Background on generalised Thompson groups} \subsection{Cantor-algebras} \noindent We shall follow the notation of \cite[Section 2]{britaconcha} and begin by defining the Cantor algebras $U_r(\Sigma)$. Consider a finite set of colours $S=\{1,\ldots,s\}$ and associate to each $i \in S$ an integer $n_i>1$, called the arity of the colour $i$. Let $U$ be a set on which, for all $i\in S$, the following operations are defined: an $n_i$-ary operation $\lambda_i\, :\,U^{n_i}\to U,$ and $n_i$ unary operations $\alpha^1_i,\ldots,\alpha^{n_i}_i$, where $\alpha^j_i:U\to U.$ Denote $\Omega = \{ \lambda_i, \alpha_i^j \}_{i,j}$ and call $U$ an $\Omega$-algebra. For details see \cite{Cohn} and \cite{desiconbrita2}. We write these operations on the right. We also consider, for each $i\in S$ and $v\in U,$ the map $\alpha_i:U\to U^{n_i}$ given by $v\alpha_i:=(v\alpha^1_i,v\alpha^2_i,\ldots,v\alpha^{n_i}_i).$ The maps $\alpha_i$ are called descending operations, or expansions, and the maps $\lambda_i$ are called ascending operations, or contractions. Any word in the descending operations is called a descending word. A {\sl morphism} between $\Omega$-algebras is a map commuting with all operations in $\Omega$. Let $\mathfrak{B}_0$ be the category of all $\Omega$-algebras for some $\Omega$. An object $U_0(X) \in \mathfrak{B}_0$ is a {\sl free object} in $\mathfrak{B}_0$ with $X$ as a {\sl free basis}, if for any $S \in \mathfrak{B}_0$ any mapping $\theta : X \to S$ can be extended in a unique way to a morphism $U_0(X) \to S$. For every set $X$ there is an $\Omega$-algebra, free on $X$, called the {\sl $\Omega$-word algebra on $X$} and denoted by $W_\Omega(X)$ (see \cite[Definition 2.1]{desiconbrita2}). Let $B\subset W_{\Omega}(X)$, $b\in B$ and $i$ a colour of arity $n_i$. The set $$(B\smallsetminus\{b\})\cup\{b\alpha_i^1,\ldots,b\alpha_i^{n_i}\}$$ is called a simple expansion of $B$.
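As a toy illustration (not from the paper), the bookkeeping of simple expansions can be modelled by replacing an element of a set with formal strings for its descendants $b\alpha_i^1,\ldots,b\alpha_i^{n_i}$; the function name and string encoding below are my own, and the identities in $\Sigma$ introduced next are deliberately ignored:

```python
def simple_expansion(basis, b, colour, arity):
    """Replace b in the basis by its descendants b.alpha_colour^1, ..., b.alpha_colour^arity.

    Toy model: elements are plain strings, and applying alpha_i^j appends a suffix.
    The Sigma-relations of the Cantor-algebra are ignored here.
    """
    assert b in basis
    children = {b + ".a%d^%d" % (colour, j) for j in range(1, arity + 1)}
    return (basis - {b}) | children

X = {"x1", "x2"}
# a simple expansion with a colour of arity n_i replaces one element by n_i new
# ones, so the cardinality of the set grows by exactly n_i - 1
A = simple_expansion(X, "x1", colour=1, arity=2)
assert len(A) == len(X) + (2 - 1)
B = simple_expansion(A, "x2", colour=2, arity=3)
assert len(B) == len(A) + (3 - 1)
```

This cardinality count is the reason validity (defined below) is a genuine condition: it asks that the relations in $\Sigma$ never collapse two elements of an admissible set.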
Analogously, if $b_1,\ldots,b_{n_i}\in B$ are pairwise distinct, $$(B\smallsetminus\{b_1,\ldots,b_{n_i}\})\cup\{(b_1,\ldots,b_{n_i})\lambda_i\}$$ is a simple contraction of $B$. A chain of simple expansions (contractions) is an expansion (contraction). A subset $A\subseteq W_\Omega(X)$ is called {\sl admissible} if it can be obtained from the set $X$ by finitely many expansions or contractions. \noindent We shall now define the notion of a Cantor-algebra. Fix a finite set $X$ and consider the variety of $\Omega$-algebras satisfying a certain set of identities as follows: \begin{definition}\label{sigmadef}\cite[Section 2]{britaconcha} We denote by $\Sigma = \Sigma_1 \cup \Sigma_2$ the following set of laws in the alphabet $X$. \begin{itemize} \item[i)] A set of laws $\Sigma_1$ given by $$u\alpha_i\lambda_i=u,$$ $$(u_1,\ldots,u_{n_i})\lambda_i\alpha_i=(u_1,\ldots,u_{n_i}),$$ for every $u\in W_\Omega(X)$, $i\in S$, and $n_i$-tuple $(u_1,\ldots,u_{n_i})\in W_\Omega(X)^{n_i}.$ \item[ii)] A second set of laws $$\Sigma_2=\bigcup_{1\leq i<i'\leq s}\Sigma_2^{i,i'}$$ where each $\Sigma_2^{i,i'}$ is either empty or consists of the following laws: consider first $i$ and fix a map $f:\{1,\ldots,n_{i}\}\to\{1,\ldots,s\}$. For each $1\leq j\leq n_{i}$, we see $\alpha_{i}^j\alpha_{f(j)}$ as a set of length-2 sequences of descending operations and let $\Lambda_{i}=\cup_{j=1}^{n_{i}}\alpha_{i}^j\alpha_{f(j)}$. Do the same for $i'$ (with a corresponding map $f'$) to get $\Lambda_{i'}$. We need to assume that $f,f'$ are chosen so that $|\Lambda_i|=|\Lambda_{i'}|$ and fix a bijection $\phi:\Lambda_{i}\to\Lambda_{i'}$. 
Then $\Sigma_2^{i,i'}$ is the set of laws $$u\nu=u\phi(\nu),\qquad \nu\in \Lambda_{i},\ u\in W_\Omega(X).$$ \end{itemize} \noindent Factor out of $W_\Omega(X)$ the fully invariant congruence $\mathfrak{q}$ generated by $\Sigma$ to obtain an $\Omega$-algebra $W_\Omega(X)/\mathfrak{q}$ satisfying the identities in $\Sigma$. \noindent The algebra $W_\Omega(X)/\mathfrak{q}=U_r(\Sigma),$ where $r=|X|$, is called a {\sl Cantor-algebra.} \end{definition} \noindent As in \cite{desiconbrita2} we say that $\Sigma$ is {\sl valid} if for any admissible $Y\subseteq W_\Omega(X)$, we have $|Y|=|\overline Y|$, where $\overline Y$ is the image of $Y$ under the epimorphism $W_\Omega(X) \twoheadrightarrow U_r(\Sigma).$ In particular this implies that $U_r(\Sigma)$ is a free object on $X$ in the class of those $\Omega$-algebras which satisfy the identities $\Sigma$ above. In other words, this implies that $X$ is a basis. \noindent If the set $\Sigma$ used to define $U_r(\Sigma)$ is valid, we also say that $U_r(\Sigma)$ is valid. As done for $W_\Omega(X)$, we say that a subset $A\subset U_r(\Sigma)$ is {\sl admissible} if it can be obtained by a finite number of expansions or contractions from $\overline X$, where expansions and contractions mean the same as before. We shall, from now on, not distinguish between $X$ and $\overline X$. If $A$ can be obtained from a subset $B$ by expansions only, we will say that $A$ is an expansion or a descendant of $B$ and we will write $B\leq A$. If $A$ can be obtained from $B$ by applying a single descending operation, i.e., if $$A=(B\smallsetminus\{b\})\cup\{b\alpha_i^1,\ldots,b\alpha_i^{n_i}\}$$ for some colour $i$ of arity $n_i$, then we will say that $A$ is a simple expansion of $B$. \begin{remark}\label{adm-basis} Every admissible subset is a basis of $U_r(\Sigma)$ (\cite[Lemma 2.5]{desiconbrita2}). Moreover, any set obtained from a basis by performing expansions or contractions is also a basis. 
Furthermore, the cardinality $m$ of every admissible subset satisfies $m\equiv r$ mod $d$ for $d:=\text{gcd}\{n_i-1\mid i=1,\ldots,s\}.$ In particular, any basis with $m$ elements can be transformed into one of $r$ elements. Hence $U_r(\Sigma)=U_m(\Sigma)$ and we may assume that $r \leq d.$ \end{remark} The converse of the first statement in Remark \ref{adm-basis} is also true: \begin{theorem}\label{fullauto} Let $U_r(\Sigma)$ be a valid and bounded Cantor algebra. Then $V_r(\Sigma)$ is the full group of $\Omega$-algebra automorphisms of $U_r(\Sigma)$. \end{theorem} \begin{proof} We need to show that, under our hypotheses, a subset of $U_r(\Sigma)$ is admissible if and only if it is a basis. In light of Remark \ref{adm-basis} we only need to show that any basis of $U_r(\Sigma)$ is admissible. Let $Y=\{y_1,...,y_n\}$ be an arbitrary basis. Since $X$ is a basis, it generates all of $U_r(\Sigma)$. Hence, for each $y_i \in Y$ there exists some admissible subset $T_i$ of $U_r(\Sigma)$ containing $y_i$. Now let $Z$ be a common upper bound of the $T_i$, $i=1,...,n.$ This exists by \cite[Lemma 2.8]{britaconcha}, using the argument of \cite[Proposition 3.4]{desiconbrita2}. The set $Z$ is an admissible subset containing a set $\widehat Y$ whose elements are obtained by performing finitely many descending operations in $Y$. Denote by $\widehat Y_i$ the subsets of $\widehat Y$ determined by $\{y_i\} \leq \widehat Y_i$ and $\widehat Y = \cup\widehat Y_i$. Since $Y$ and $Z$ are bases and $Y \le Z$, Remark \ref{tec} below implies that $\widehat Y_i \cap \widehat Y_j = \varnothing$, for $i \ne j$. By Remark \ref{adm-basis}, since $\widehat Y$ is admissible, it is a basis. Remark \ref{adm-basis} also implies that $Z$ is a basis. It follows from the definition of free basis, see for example \cite[Page 3]{desiconbrita2}, that no proper subset of a basis is a basis. Hence $\widehat Y=Z$ is admissible, thus $Y$ is as well. 
\end{proof} \begin{remark}\label{tec} Let $B$ be a basis in a valid $U_r(\Sigma)$, and let $A \leq B$. The fact that $A$ is also a basis implies that for any element $b\in B$ there is a unique $A(b)\in A$ such that $A(b)w=b$ for some descending word $w$. In this case we say that $A(b)$ is a prefix of $b$. \end{remark} \noindent Any element of $U_r(\Sigma)$ which is obtained from the elements in $X$ by applying descending operations only is called a {\sl leaf}. We denote by $\mathcal L$ the set of leaves. Observe that $\mathcal L$ depends on $X$. Note also that for any leaf $l$ there is some basis $A \geq X$ with $l\in A$. For $l\in\mathcal L$ we put $$l(\mathcal L):=\{b\in\mathcal L\mid lw=bw'\text{ for descending words $w,w'$}\}$$ and for a set of leaves $B\subseteq\mathcal L$ we also put $$B(\mathcal L)=\bigcup_{b\in B}b(\mathcal L).$$ We say that $U_r(\Sigma)$ is {\sl bounded} (see \cite[Definition 2.7]{britaconcha}) if for all admissible subsets $Y$ and $Z$ such that there is some admissible $A\leq Y,Z$, there is a unique least upper bound of $Y$ and $Z$. By a unique least upper bound we mean an admissible subset $T$ such that $Y \leq T$ and $Z \leq T$, and whenever there is an admissible set $S$ also satisfying $Y\leq S$ and $Z\leq S$, then $T \leq S.$ \begin{definition}\label{groups}\cite[Definition 2.12]{britaconcha} Let $U_r(\Sigma)$ be a valid Cantor algebra. $V_r(\Sigma)$ denotes the group of all $\Omega$-algebra automorphisms of $U_r(\Sigma)$ which are induced by a map $V\to W$, where $V$ and $W$ are admissible subsets of the same cardinality. \end{definition} \noindent Throughout we shall denote group actions on the left. \begin{remark}\label{tecrem1} For any basis $A \geq X$ and any $g\in V_r(\Sigma)$, there is some $B$ with $A\leq B,gB$. To see this, take $B$ such that $A,g^{-1}A\leq B,$ which exists by \cite[Lemma 2.8]{britaconcha}. 
\end{remark} \noindent \subsection{Brin-like groups}\label{newgroups} \noindent In this section we give some examples of the groups $V_r(\Sigma)$, which generalise both Brin's groups $sV$ \cite{brin1} and Stein's groups $V(l,A,P)$ \cite{stein}. Furthermore, these groups satisfy the conditions of Definition \ref{complete} below, and we show in Section \ref{fpinfty} that they are of type $\operatorname{F}_\infty.$ \begin{example}\label{brinexamples} \begin{enumerate} \item We begin by recalling the definition of the Brin-algebra \cite[Section 2]{desiconbrita2} and \cite[Example 2.4]{britaconcha}: Consider the set of $s$ colours $S=\{1,\ldots,s\}$, all of which have arity 2, together with the relations: $\Sigma:=\Sigma_1\cup\Sigma_2$ with $$\Sigma_2:=\{\alpha_i^l\alpha_j^t=\alpha_j^t\alpha_i^l\mid 1\leq i \not= j\leq s; l,t=1,2 \}.$$ \noindent Then $V_r(\Sigma)=sV$ is Brin's group. \item Furthermore one can also consider $s$ colours, all of arity $n_i=n\in \mathbb N,$ for all $1\leq i\leq s.$ Let $$\Sigma_2:=\{\alpha_i^l\alpha_j^t=\alpha_j^t\alpha_i^l\mid 1\leq i \not= j\leq s; 1\leq l,t\leq n \}.$$ \noindent Here $V_r(\Sigma)=sV_n$ is Brin's group of arity $n.$ \noindent It was shown in \cite[Example 2.9]{britaconcha} that in this case $U_r(\Sigma)$ is valid and bounded. \item We can also mix arities. Consider $s$ colours, each of arity $n_i\in \mathbb N$ $(i=1,...,s)$, together with $\Sigma:=\Sigma_1\cup\Sigma_2$ where $$\Sigma_2:=\{\alpha_i^l\alpha_j^t=\alpha_j^t\alpha_i^l\mid 1\leq i \not= j\leq s; 1\leq l\leq n_i; 1\leq t\leq n_j \}.$$ \noindent We denote these mixed-arity Brin-groups by $V_r(\Sigma)=V_{\{n_1\},...,\{n_s\}}.$ \noindent The same argument as in \cite[Lemma 3.2]{desiconbrita2} yields that the Cantor-algebra $U_r(\Sigma)$ in this case is also valid and bounded. 
\end{enumerate} \end{example} \begin{tikzpicture}[scale=0.4] \draw[black, dashed] (1,0) -- (3, 3) -- (5,0); \draw[black] (0,-3)--(1,0)--(2,-3); \draw[black] (1,-3)--(1,0); \draw[black] (4,-3)--(5,0)--(6,-3); \draw[black] (5,-3)--(5,0); \filldraw (0,-3) circle (0.3pt) node[below=4pt]{$1$}; \filldraw (1,-3) circle (0.3pt) node[below=4pt]{$2$}; \filldraw (2,-3) circle (0.3pt) node[below=4pt]{$3$}; \filldraw (4,-3) circle (0.3pt) node[below=4pt]{$4$}; \filldraw (5,-3) circle (0.3pt) node[below=4pt]{$5$}; \filldraw (6,-3) circle (0.3pt) node[below=4pt]{$6$}; \draw[black] (11,0) -- (13, 3) -- (15,0); \draw[black] (13,0)--(13,3); \draw[black, dashed] (10.5,-3)--(11,0)--(11.5,-3); \draw[black, dashed] (12.5,-3)--(13,0)--(13.5,-3); \draw[black,dashed] (14.5,-3)--(15,0)--(15.5,-3); \filldraw (10.5,-3) circle (0.3pt) node[below=4pt]{$1$}; \filldraw (11.5,-3) circle (0.3pt) node[below=4pt]{$4$}; \filldraw (12.5,-3) circle (0.3pt) node[below=4pt]{$2$}; \filldraw (13.5,-3) circle (0.3pt) node[below=4pt]{$5$}; \filldraw (14.5,-3) circle (0.3pt) node[below=4pt]{$3$}; \filldraw (15.5,-3) circle (0.3pt) node[below=4pt]{$6$}; \end{tikzpicture} \centerline{Figure 1: Visualising the identities in $\Sigma_2$ for $V_{\{2\},\{3\}}.$} \begin{example}\label{stein} We now recall the laws $\Sigma_2$ for Stein's groups \cite{stein}: Let $P\subseteq\mathbb Q_{>0}$ be a finitely generated multiplicative group. Consider a basis of $P$ of the form $\{n_1,\ldots,n_s\}$ with all $n_i$ integers greater than $1$ ($i=1,...,s$). 
Consider $s$ colours of arities $\{n_1,\ldots,n_s\}$ and let $\Sigma=\Sigma_1\cup\Sigma_2$ with $\Sigma_2$ the set of identities given by the following order preserving identification: $$\{\alpha_i^1\alpha_j^1,\ldots,\alpha_i^1\alpha_j^{n_j},\alpha_i^2\alpha_j^1,\ldots,\alpha_i^2\alpha_j^{n_j},\ldots,\alpha_i^{n_i}\alpha_j^1,\ldots,\alpha_i^{n_i}\alpha_j^{n_j}\}=$$ $$\{\alpha_j^1\alpha_i^1,\ldots,\alpha_j^1\alpha_i^{n_i},\alpha_j^2\alpha_i^1,\ldots,\alpha_j^2\alpha_i^{n_i},\ldots,\alpha_j^{n_j}\alpha_i^1,\ldots,\alpha_j^{n_j}\alpha_i^{n_i}\},$$ where $i \neq j$ and $i,j\in \{1,...,s\}.$ The resulting Brown-Stein algebra $U_r(\Sigma)$ is valid and bounded, see, for example \cite[Lemma 2.11]{britaconcha}. We denote the resulting groups $V_r(\Sigma)=V_{\{n_1,...,n_s\}}.$ \end{example} \begin{tikzpicture}[scale=0.4] \draw[black,dashed] (1,0) -- (3, 3) -- (5,0); \draw[black] (0,-3)--(1,0)--(2,-3); \draw[black] (1,-3)--(1,0); \draw[black] (4,-3)--(5,0)--(6,-3); \draw[black] (5,-3)--(5,0); \filldraw (0,-3) circle (0.3pt) node[below=4pt]{$1$}; \filldraw (1,-3) circle (0.3pt) node[below=4pt]{$2$}; \filldraw (2,-3) circle (0.3pt) node[below=4pt]{$3$}; \filldraw (4,-3) circle (0.3pt) node[below=4pt]{$4$}; \filldraw (5,-3) circle (0.3pt) node[below=4pt]{$5$}; \filldraw (6,-3) circle (0.3pt) node[below=4pt]{$6$}; \draw[black] (11,0) -- (13, 3) -- (15,0); \draw[black] (13,0)--(13,3); \draw[black, dashed] (10.5,-3)--(11,0)--(11.5,-3); \draw[black, dashed] (12.5,-3)--(13,0)--(13.5,-3); \draw[black,dashed] (14.5,-3)--(15,0)--(15.5,-3); \filldraw (10.5,-3) circle (0.3pt) node[below=4pt]{$1$}; \filldraw (11.5,-3) circle (0.3pt) node[below=4pt]{$2$}; \filldraw (12.5,-3) circle (0.3pt) node[below=4pt]{$3$}; \filldraw (13.5,-3) circle (0.3pt) node[below=4pt]{$4$}; \filldraw (14.5,-3) circle (0.3pt) node[below=4pt]{$5$}; \filldraw (15.5,-3) circle (0.3pt) node[below=4pt]{$6$}; \end{tikzpicture} \centerline{Figure 2: Visualising the identities in $\Sigma_2$ for $V_{\{2,3\}}.$} 
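\noindent To make the order-preserving identification explicit in the smallest mixed case, consider $V_{\{2,3\}}$ as in Figure 2, so $s=2$, $n_1=2$ and $n_2=3$ (this worked instance merely spells out the definition above). The two ordered lists of length-2 descending words are matched position by position: $$(\alpha_1^1\alpha_2^1,\alpha_1^1\alpha_2^2,\alpha_1^1\alpha_2^3,\alpha_1^2\alpha_2^1,\alpha_1^2\alpha_2^2,\alpha_1^2\alpha_2^3)=(\alpha_2^1\alpha_1^1,\alpha_2^1\alpha_1^2,\alpha_2^2\alpha_1^1,\alpha_2^2\alpha_1^2,\alpha_2^3\alpha_1^1,\alpha_2^3\alpha_1^2),$$ so that, for instance, $\alpha_1^1\alpha_2^2=\alpha_2^1\alpha_1^2$: this is exactly the identification of the second leaf of the left tree with the second leaf of the right tree in Figure 2.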
\begin{definition}\label{def-brinlike} Let $S$ be a set of $s$ colours together with arities $n_i$ for each $i=1,...,s.$ Suppose $S$ can be partitioned into $m$ disjoint subsets $S_k$ such that for each $k$, the set $\{n_i \, |\, i \in S_k\}$ is a basis for a finitely generated multiplicative group $P_k \subseteq \mathbb Q_{>0}.$ Consider $\Omega$-algebras on $s$ colours with arities as above and the set of identities $\Sigma = \Sigma_1 \cup \Sigma_2$, where $\Sigma_2=\Sigma_{2_1}\cup\Sigma_{2_2}$ is given as follows: $\Sigma_{2_1}$ is given by the following order-preserving identifications (as in the Brown-Stein algebra in Example \ref{stein}): for each $k \leq m$ we have $$\{\alpha_i^1\alpha_j^1,\ldots,\alpha_i^1\alpha_j^{n_j},\alpha_i^2\alpha_j^1,\ldots,\alpha_i^2\alpha_j^{n_j},\ldots,\alpha_i^{n_i}\alpha_j^1,\ldots,\alpha_i^{n_i}\alpha_j^{n_j}\}=$$ $$\{\alpha_j^1\alpha_i^1,\ldots,\alpha_j^1\alpha_i^{n_i},\alpha_j^2\alpha_i^1,\ldots,\alpha_j^2\alpha_i^{n_i},\ldots,\alpha_j^{n_j}\alpha_i^1,\ldots,\alpha_j^{n_j}\alpha_i^{n_i}\},$$ where $i \neq j$ and $i,j\in S_k.$ $\Sigma_{2_2}$ is given by Brin-like identifications (as in Example \ref{brinexamples}): for all $i \in S_k$ and $j \in S_l$ with $k\neq l$ ($k,l \leq m$), we have $$\Sigma_{2_2}:=\{\alpha_i^l\alpha_j^t=\alpha_j^t\alpha_i^l\mid 1\leq l\leq n_i; 1\leq t\leq n_j \}.$$ We call the resulting Cantor algebra $U_r(\Sigma)$ Brin-like and denote the generalised Higman-Thompson group by $V_r(\Sigma)=V_{\{n_i \,|\,i\in S_1\},...,\{n_i \,|\,i\in S_m\}}.$ \end{definition} \begin{example} From Definition \ref{def-brinlike} we notice the following examples: \begin{enumerate} \item If $m=s$, we have the Brin-groups as in Example \ref{brinexamples} (iii). \item If $m=1$, we have Stein-groups as in Example \ref{stein}. \item Suppose we have that $\{n_i \, |\, i\in S_k\} = \{n_i \, |\, i\in S_l\}$ for each $l,k \leq m$. 
Then the resulting group can be viewed as a higher-dimensional Stein-group $mV_{\{n_i \, |\, i\in S_m\}}$. \end{enumerate} \end{example} \begin{question} Suppose $m\notin \{1,s\}.$ What are the conditions on the arities for the groups $V_{\{n_i \,|\,i\in S_1\},...,\{n_i \,|\,i\in S_m\}}$ not to be isomorphic to any of the known generalised Thompson groups such as the Higman-Thompson groups, Stein's groups or Brin's groups? More generally, when are two of these groups non-isomorphic? See \cite{warrenconcha} for some special cases. \end{question} \begin{remark}\label{cantorcube} We can view these groups as groups of bijections of $m$-dimensional cuboids in the $m$-dimensional Cartesian product of the Cantor-set, similarly to the description given for $sV$, the Brin-Thompson groups. In each direction, we get subdivisions of the Cantor-set as in the Stein-Brown groups given by $\Sigma_{2_1}$. \end{remark} \begin{lemma} The Brin-like Cantor-algebras are valid and bounded. \end{lemma} \begin{proof} Using the description given in Remark \ref{cantorcube} we can apply the same argument as in \cite[Lemma 3.2]{desiconbrita2}. \end{proof} \noindent The groups defined in this subsection all satisfy the following condition on the relations in $\Sigma$, and hence satisfy the conditions needed in Section \ref{fpinfty}. \begin{definition}\label{complete} Using the notation of Definition \ref{sigmadef}, suppose that for all $i\neq i'$, $i,i' \in S$ we have that $\Sigma_2^{i,i'} \neq \varnothing$ and that $f(j)=i'$ for all $j=1,...,n_i$ and $f'(j')=i$ for all $j'=1,...,n_{i'}$. Then we say that $\Sigma$ (or equivalently $U_r(\Sigma)$) is {\sl complete}. \end{definition} \begin{remark}\label{brinlikecomplete} The Brin-like Cantor-algebras are complete. \end{remark} \section{Finiteness conditions}\label{fpinfty} \noindent In this section we prove the following result: \begin{theorem}\label{FPinfty} Let $\Sigma$ be valid, bounded and complete. Then $V_r(\Sigma)$ is of type $\operatorname{F}_\infty$. 
\end{theorem} We closely follow \cite{fluch++}, where it is shown that Brin's groups $sV$ are of type $\operatorname{F}_\infty.$ We shall use a different notation, which is more suited to our set-up, and will explain where the original argument has to be modified to get the more general case. Throughout this section $U_r(\Sigma)$ denotes a valid, bounded and complete Cantor-algebra. \begin{definition} Let $B\leq A$ be admissible subsets of $U_r(\Sigma)$. We say that the expansion $B\leq A$ is {\sl elementary} if there are no repeated colours in the paths from leaves in $B$ to their descendants in $A$. Since $\Sigma$ is complete, this condition is preserved by the relations in $\Sigma$. We denote an elementary expansion by $B\preceq A.$ We say that the expansion is {\sl very elementary} if all paths have length at most 1. In this case we write $B\sqsubseteq A$. \end{definition} \begin{remark} If $A\preceq B$ is elementary (very elementary) and $A\leq C\leq B$, then $A\preceq C$ and $C\preceq B$ are elementary (very elementary)).\end{remark} \begin{lemma}\label{thm:unique-descendant} Let $\Sigma$ be complete, valid and bounded. Then any admissible basis $A$ has a unique maximal elementary admissible descendant, denoted by $\mathcal{E}(A).$ \end{lemma} \begin{proof} Let $\mathcal{E}(A)$ be the admissible subset of $n_1\ldots n_s|A|$ leaves obtained by applying all descending operations exactly once to every element of $A$. \end{proof} \subsection{The Stein subcomplex} Denote by $\mathcal{P}_r$ the poset of of admissible bases in $U_r(\Sigma).$ The same argument as in \cite[Lemma 3.5 and Remark 3.7]{desiconbrita2} shows that its geometric realisation $|\mathcal{P}_r|$ is contractible, and that $V_r(\Sigma)$ acts on $\mathcal{P}_r$ with finite stabilisers. In \cite{desiconbrita2, britaconcha} this poset was denoted by $\mathfrak{A}$, but here we will follow the notation of \cite{fluch++}. 
This is essentially the same as the poset denoted $\mathcal{P}_r$ in \cite{fluch++}. We now construct the Stein complex $\mathcal{S}_r(\Sigma)$, which is a subcomplex of $|\mathcal{P}_r|$. The vertices in $\mathcal{S}_r(\Sigma)$ are given by the admissible subsets of $U_r(\Sigma)$. The $k$-simplices are given by chains of expansions $Y_0\leq\ldots\leq Y_k$, where $Y_0\preceq Y_k$ is an elementary expansion. \begin{lemma}\label{core} Let $A,B \in \mathcal{P}_r$ with $A< B$. There exists a unique $A< B_0\leq B$ such that $A\prec B_0$ is elementary and for any $A\leq C \leq B$ with $A\preceq C$ elementary we have $C \preceq B_0.$ \end{lemma} \begin{proof} Let $\mathcal{E}(A)$ be as in the proof of Lemma \ref{thm:unique-descendant}. Let $B_0=\glb(\mathcal{E}(A),B)$, which exists by \cite[Lemma 3.14]{desiconbrita2}. If $A\preceq C\leq B$, then $C\leq\mathcal{E}(A)$ and so $C\leq B_0$. \end{proof} \begin{lemma}\label{contractible} For every $r$ and every valid, bounded and complete $\Sigma$, the Stein-space $\mathcal{S}_r(\Sigma)$ is contractible. \end{lemma} \begin{proof} By \cite[Lemma 3.5]{desiconbrita2}, $|\mathcal{P}_r|$ is contractible. Now use the same argument as in \cite[Corollary 2.5]{fluch++} to deduce that $\mathcal{S}_r(\Sigma)$ is homotopy equivalent to $|\mathcal{P}_r|$. Essentially, the idea is to use Lemma \ref{core} to show that each simplex in $|\mathcal{P}_r|$ can be pushed to a simplex in $\mathcal{S}_r(\Sigma)$. \end{proof} \begin{remark} Notice that the action of $V_r(\Sigma)$ on $\mathcal{P}_r$ induces an action of $V_r(\Sigma)$ on $\mathcal{S}_r(\Sigma)$ with finite stabilisers. \end{remark} \noindent Consider the Morse function $t(A)=|A|$ on $\mathcal{S}_r(\Sigma)$ and filter the complex with respect to $t$, i.e. $$\mathcal{S}_r(\Sigma)^{\leq n}:=\{A\in\mathcal{S}_r(\Sigma)\mid t(A)\leq n\}.$$ By the same argument as in \cite[Lemma 3.7]{desiconbrita2} $\mathcal{S}_r(\Sigma)^{\leq n}$ is finite modulo the action of $V_r(\Sigma)$. 
Let $\mathcal{S}_r(\Sigma)^{< n}=\{A\in\mathcal{S}_r(\Sigma)\mid t(A) <n\}.$ Provided that \begin{equation}\label{pair} \text{ the connectivity of the pair } (\mathcal{S}_r(\Sigma)^{\leq n},\mathcal{S}_r(\Sigma)^{<n})\text{ tends to $\infty$ as $n\to\infty$},\end{equation} Brown's Theorem \cite[Corollary 3.3]{brown2} implies that $V_r(\Sigma)$ is of type $\operatorname{F}_\infty,$ thus proving Theorem \ref{FPinfty}. The rest of this section is devoted to proving (\ref{pair}). \subsection{Connectivity of descending links} Recall that for any $A\in\mathcal{S}_r(\Sigma)$ the descending link $L(A):=\operatorname{lk}\!\!\downarrow\!\!^t(A)$ with respect to $t$ is defined to be the intersection of the link $\operatorname{lk}(A)$ with $\mathcal{S}_r(\Sigma)^{<n}$, where $t(A)=n$. To show (\ref{pair}), we proceed as in \cite{fluch++}. Using Morse theory, the problem is reduced to showing that for $A$ as before, the connectivity of $L(A)$ tends to $\infty$ when $t(A)=n\to\infty$. Whenever this happens, we will say that $L(A)$ is {\sl $n$-highly connected}. More generally: assume we have a family of complexes $(X_\alpha)_{\alpha\in\Lambda}$ together with a map $n: (X_\alpha)_{\alpha\in\Lambda} \to \mathbb Z_{>0}$ such that the set $\{n(\alpha)\mid\alpha\in\Lambda\}$ is unbounded. Assume further that whenever $n(\alpha)\to\infty$, the connectivity of the associated $X_\alpha$s tends to $\infty$. In this case we will say that the family is {\sl $n$-highly connected}. Note that $L(A)$ is the subcomplex of $\mathcal{S}_r(\Sigma)$ generated by $$\{B\mid B\prec A\text{ is an elementary expansion}\}.$$ Following \cite{fluch++}, define a height function $h$ for $B\in L(A)$ as follows: $$h(B):=(c_s,\ldots,c_2,b)$$ where $b=|B|$ and $c_i$ ($i=2,\ldots,s$) is the number of leaves in $A$ whose path from their prefix in $B$ has length $i$. We order these heights lexicographically. 
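\noindent For illustration (a toy example of ours, with $s=2$ colours, both of arity $2$): if $B=\{b_1,b_2\}$ and $A$ is the elementary expansion of $B$ obtained by expanding $b_1$ by colour $1$ and then both of its children by colour $2$, and by expanding $b_2$ by colour $1$ only, then $A$ has four leaves at path length $2$ below $b_1$ and two leaves at path length $1$ below $b_2$, so $$h(B)=(c_2,b)=(4,2).$$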
Let $c(B)=(c_s,\ldots,c_2),$ which are also ordered lexicographically. Denote by $L_0(A)$ the subcomplex of $\mathcal{S}_r(\Sigma)$ generated by $\{B\mid B\sqsubset A\text{ is a very elementary expansion}\}.$ Then for any $B\in L(A)$, $B\in L_0(A)$ if and only if $h(B)=(0,\ldots,0,|B|)$. \begin{lemma}\label{first} The set of complexes of the form $L_0(A)$ is $t(A)$-highly connected. \end{lemma} \begin{proof} For any $n\geq 0$, we define a complex denoted $K_n$ as follows. Start with a set $A$ with $n$ elements. The vertex set of $K_n$ consists of labelled subsets of $A$ where the possible labels are the colours $\{1,\ldots,s\},$ and where a subset labelled $i$ has precisely $n_i$ elements. Recall that $n_i$ is the arity of the colour $i$. A $k$-simplex $\{\sigma_0,\ldots,\sigma_k\}$ in $K_n$ is given by an unordered set of pairwise disjoint $\sigma_j$s. This complex is isomorphic to the barycentric subdivision of $L_0(A)$ for $n=t(A)$. To prove that $K_n$ is $n$-highly connected, proceed as in the proof of \cite[Lemma 4.20]{brown2}. \end{proof} Now consider descending links in $L(A)$ with respect to the height function $h$, i.e. for $B\in L(A)$ let $\operatorname{lk}\!\!\downarrow\!\!^h(B)$ be the subcomplex of $L(A)$ generated by $\{C\in L(A)\mid h(C)\leq h(B) \mbox{ and either } C<B \mbox{ or } B<C\}.$ Consider the following two cases: \begin{itemize} \item[i)] $B\in L(A)\smallsetminus L_0(A)$ and there is at least one leaf of $B$ that is expanded precisely once to obtain $A$. \item[ii)] $B\in L(A)\smallsetminus L_0(A)$ and no leaf of $B$ is expanded precisely once to obtain $A$. \end{itemize} \noindent The next two lemmas show that in either case $\operatorname{lk}\!\!\downarrow\!\!^h(B)$ is $t(A)$-highly connected. As in \cite{fluch++} the descending link $\operatorname{lk}\!\!\downarrow\!\!^h(B)$ of some $B\in L(A)$ with respect to $h$ can be viewed as the join of two subcomplexes, the down-link and the up-link. 
The downlink consists of those elements $C$ such that $C < B$ and $h(C) \leq h(B)$; hence, by the above, $c(B)=c(C).$ The uplink consists of those $C$ such that $B< C$ and $h(C) \leq h(B)$, and therefore $c(B) > c(C).$ \begin{lemma}\label{second} Let $B \in L(A)$ be as in i). Then $\operatorname{lk}\!\!\downarrow\!\!^h(B)$ is contractible. \end{lemma} \begin{proof} It suffices to follow the proof of \cite[Lemma 3.7]{fluch++}. We briefly sketch this proof using our notation: let $b\in B$ be a leaf that is expanded precisely once to obtain $A$. Let $B\prec M\preceq A$ be the maximal elementary expansion of $B$ that preserves the leaf $b$ and lies below $A$. The existence of $M$ follows from a variation of Lemma \ref{core}. Now, for any $C\in\operatorname{lk}\!\!\downarrow\!\!^h(B)$ lying in the uplink we let $B\prec C_0\sqsubseteq C$, where $C_0$ is obtained by performing all expansions in $B$ needed to get $C$, except the one of $b$. One easily checks that $C_0\leq M$, that $C_0$ and $M$ lie in $\operatorname{lk}\!\!\downarrow\!\!^h(B)$ and that both $C_0$ and $M$ lie in the uplink. Hence $M\geq C_0\leq C$ provides a contraction of the uplink. As $\operatorname{lk}\!\!\downarrow\!\!^h(B)$ is the join of the downlink and the uplink we get the result. \end{proof} \begin{lemma}\label{third} Let $B$ be as in ii). Then $\operatorname{lk}\!\!\downarrow\!\!^h(B)$ is $t(A)$-highly connected. \end{lemma} \begin{proof} As before, we follow the proof of \cite[Lemma 3.8]{fluch++} with only minor changes. With our notation, we let $k_s$ be the number of leaves of $B$ that are also leaves of $A$ and let $k_b$ be the number of remaining leaves. Then one checks that the up-link in $\operatorname{lk}\!\!\downarrow\!\!^h(B)$ is $k_b$-highly connected and that the down-link is $k_s$-highly connected. As $t(A)=n\leq k_bn_1\ldots n_s+k_s$, we get the result. \end{proof} \noindent Finally, using Morse theory as in \cite{fluch++}, we deduce that the pair $(L(A),L_0(A))$ is $t(A)$-highly connected. 
As a result, $L(A)$ is also $t(A)$-highly connected, establishing (\ref{pair}) and hence Theorem \ref{FPinfty}. \noindent Some time after a preprint of this work was posted, we learned of Thumann's work \cite{thumann1, thumann2}, where he provides a generalised framework of groups defined by operads to apply the techniques introduced in \cite{fluch++}. We believe that automorphism groups of valid, bounded and complete Cantor algebras might be obtained by making a suitable choice of cube cutting operads, see \cite[Subsection 4.2]{thumann1}. Therefore Theorem \ref{FPinfty} could also be seen as a special case of \cite[Subsection 10.2]{thumann2}. \section{Finiteness conditions for centralisers of finite subgroups}\label{centralisersection} \noindent From now on, unless mentioned otherwise, we assume that the Cantor-algebra $U_r(\Sigma)$ is valid and bounded. \begin{definition} Let $L$ be a finite group. The set of bases in $U_r(\Sigma)$ together with the expansion maps can be viewed as a directed graph. Let $(U_r(\Sigma),L)$ be the following diagram of groups associated to this graph: To each basis $A$ we associate $\text{Maps}(A,L)$, the set of all maps from $A$ to $L$. Each simple expansion $A\leq B$ corresponds to the diagonal map $\delta:\text{Maps}(A,L)\to\text{Maps}(B,L)$ with $\delta(f)(a\alpha_i^j)=f(a)$ and $\delta(f)(b)=f(b)$ for $b\in A\smallsetminus\{a\}$, where $a\in A$ is the expanded leaf, i.e. $B=(A\smallsetminus\{a\})\cup\{a\alpha_i^1,\ldots, a\alpha_i^{n_i}\}$ for some colour $i$ of arity $n_i$. To arbitrary expansions we associate the composition of the corresponding diagonal maps. \end{definition} Centralisers of finite subgroups in $V_r(\Sigma)$ have been described in \cite[Theorem 4.4]{britaconcha} and also in \cite[Theorem 1.1]{matuccietc} for the Higman-Thompson groups $V_{n,r}$. This last description is more explicit and makes use of the action of $V_{n,r}$ on the Cantor set (see Remark \ref{topology} below). 
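\noindent To illustrate the diagonal maps in the diagram $(U_r(\Sigma),L)$ defined above (a minimal example of ours): let $A=\{a_1,a_2\}$ be a basis and let $B=(A\smallsetminus\{a_1\})\cup\{a_1\alpha_i^1,a_1\alpha_i^2\}$ be its simple expansion by a colour $i$ of arity $2$. For $f\in\text{Maps}(A,L)$ we have $$\delta(f)(a_1\alpha_i^1)=\delta(f)(a_1\alpha_i^2)=f(a_1),\qquad \delta(f)(a_2)=f(a_2),$$ so $\delta(f)$ is constant on the offspring of the expanded leaf; informally, elements of the colimit $\varinjlim(U_r(\Sigma),L)$ are labellings of the leaves by elements of $L$ which are constant below some basis (compare Remark \ref{topology} below).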
We will use the following notation, which was used in \cite{britaconcha}: let $Q\leq V_r(\Sigma)$ be a finite subgroup and let $t$ be the number of transitive permutation representations $\varphi_i: Q\to S_{m_i}$ of $Q$, up to equivalence. Here, $1\leq i\leq t$, $m_i$ is the orbit length and $S_{m_i}$ is the symmetric group of degree $m_i$. Also let $L_i=C_{S_{m_i}}(\varphi_i(Q))$. \noindent There is a basis $Y$, setwise fixed by $Q$, which is of minimal cardinality. The group $Q$ acts on $Y$ by permutations. Thus there exist integers $0\leq r_1,\ldots,r_t\leq d$ such that $Y=\bigcup_{i=1}^t W_i$ with $W_i$ the union of exactly $r_i$ $Q$-orbits of type $\varphi_i$. See Remark \ref{adm-basis} for the definition of $d$. The next result combines the descriptions in \cite[Theorem 4.4]{britaconcha} and \cite[Theorem 1.1]{matuccietc} giving a more detailed description of the centralisers of finite subgroups in $V_r(\Sigma)$. \begin{theorem}\label{centraliser} Let $Q$ be a finite subgroup of $V_r(\Sigma).$ Then $$C_{V_r(\Sigma)}(Q)=\prod_{i=1}^tG_i$$ where $G_i=K_i\rtimes V_{r_i}(\Sigma)$ and $K_i=\varinjlim(U_{r_i}(\Sigma),L_i)$. Here, $V_{r_i}(\Sigma)$ acts on $K_i$ as follows: let $g\in V_{r_i}(\Sigma)$ and let $A$ be a basis in $U_{r_i}(\Sigma)$. The action of $g$ on $K_i$ is induced, in the colimit, by the map $\text{Maps}(A,L_i)\to\text{Maps}(gA,L_i)$ obtained contravariantly from $gA\buildrel g^{-1}\over\to A.$ \end{theorem} \begin{proof} The decomposition of $C_{V_r(\Sigma)}(Q)$ into a finite direct product of semidirect products was shown in \cite[Theorem 4.4]{britaconcha}. Hence, for the first claim, all that remains to be checked is that $K_i=\varinjlim (U_{r_i}(\Sigma),L_i)$. We use the same notation as in the proof of \cite[Theorem 4.4]{britaconcha}. 
Fix $\varphi=\varphi_i$, $l:=r_i$, $L:=L_i$, $m:=m_i$ and $K:=K_i=\operatorname{Ker} \tau$, where $\tau: C_{V_r(\Sigma)}(Q) \twoheadrightarrow V_l(\Sigma)$ is the split surjection of the proof of \cite[Theorem 4.4]{britaconcha}. Let $x\in K$. With $Y$ as above, there is a basis $Y_1\geq Y$ with $xY_1=Y_1$; the basis $Y_1$ is also $Q$-invariant. Then $Y_1$ decomposes as a union of $l$ $Q$-orbits (all of them of type $\varphi$), and $x$ fixes these orbits setwise. We denote these orbits by $\{C_1,\ldots,C_l\}$. In each of the $C_j$ there is a marked element. Since $\varphi$ is transitive this can be used to fix a bijection $C_j\to\{1,\ldots,m\}$ corresponding to $\varphi$. Then the action of $x$ on $C_j$ yields a well-defined $l_j\in L$. This means that we may represent $x$ as $(l_j)_{1\leq j \leq l}$. Let $A$ be the basis of $U_l(\Sigma)$ obtained from $Y_1$ by identifying all elements in the same $Q$-orbit, i.e. $A=\tau^{\frak U}(Y_1)$ with the notation of \cite{britaconcha}. Denote $A=\{a_1,\ldots,a_l\}$ with $a_j$ coming from $C_j$. Then the element $x$ described before can be viewed as the map $x:A\to L$ with $x(a_j)=l_j$. Suppose we choose a different basis $Y_2$ fixed by $x$. It is a straightforward check to see that there is a basis $Y_3$ also fixed by $x$, such that $Y_1,Y_2\leq Y_3$, and that this representation is compatible with the associated expansion maps. To prove the second claim, consider an element $g\in V_l(\Sigma)$ viewed as an element in $C_{V_r(\Sigma)}(Q)$ using the splitting $\tau$ above. This means that $g$ maps $Q$-fixed bases to $Q$-fixed bases and that $g$ preserves the set of marked elements. Let $Y_1$, $A$ and $x\in K$ be as above. 
Then the basis $gY_1$ is the union of the $Q$-orbits $\{gC_1,\ldots,gC_l\}$ and $\tau^{\frak U}(gY_1)=gA.$ Also, for any $c_i\in C_i$, $gxg^{-1}gc_i=gxc_i$ which means that if the action of $x$ on $C_i$ is given by $l_i\in L$, then the action of $x^g$ on $gC_i$ is given also by $l_i$. Therefore the map $gA\to L$ which represents $x^g$ is the composition of the maps $g^{-1}:gA\to A$ and the map $A\to L$ which represents $x$. \end{proof} \begin{remark}\label{topology} In \cite{matuccietc}, where the ordinary Higman-Thompson group $V_r(\Sigma)=V_{n,r}$ is considered, the subgroups $K_i$ are described as $\text{Map}^0(\mathfrak{C},L)$, where $\mathfrak{C}$ denotes the Cantor set, and $\text{Map}^0$ the set of continuous maps. Here the Cantor set is viewed as the set of right infinite words in the descending operations. It is a straightforward check to see that both descriptions are equivalent in this case. In fact $x:A\to L$ corresponds to the element in $\text{Map}^0(\mathfrak{C},L)$ mapping each $\varsigma\in\mathfrak{C}$ to $x(a)$ for the only $a\in A$ which is a prefix of $\varsigma$. Similarly, one can describe $K_i$ when $V_{r_i}(\Sigma)=sV$ is a Brin-group, using the fact that these groups act on $\mathfrak{C}^s$, see \cite{warrenconcha}. \end{remark} \begin{notation} Let $$\Omega:=\{B(\mathcal L)\mid B\subset\mathcal{L}\text{ finite}\}\cup\{\varnothing\}.$$ We also denote $$\Omega^n:=\Omega\times\buildrel n\over\ldots\times\Omega=\{(\omega_1,\ldots,\omega_n)\mid\omega_i\in\Omega\},$$ $$\Omega^n_{c}:=\{(\omega_1,\ldots,\omega_n)\in\Omega^n\mid\cup_{i=1}^n\omega_i=\mathcal L\}.$$ \end{notation} \begin{lemma}\label{properties} \begin{itemize} \item[i)] Let $B \geq A \geq X$ be bases and $B_1\subseteq B$. Let $A_1:=\{a\in A\mid a\text{ is a prefix of an element in }B_1\}$. Then $A_1(\mathcal L)=B_1(\mathcal L)$. \item[ii)] Let $A\geq X$ be a basis, then $A(\mathcal L)=\mathcal L$. 
\item[iii)] For any $(\omega_1,\ldots,\omega_n)\in\Omega^n$ there is some basis $A$ with $X\leq A$ and some $A_i\subseteq A$, $1\leq i\leq n$, such that $\omega_i=A_i(\mathcal L)$. \item[iv)] Let $A\geq X$ be a basis, $A_1,A_2\subseteq A$ and $\omega_i=A_i(\mathcal L)$ for $i=1,2$. Then $\omega_1=\omega_2$ if and only if $A_1=A_2$. \item[v)] Let $A,B \geq X$ be two bases and $\omega\in\Omega$ be such that for some $A_1\subseteq A$, $B_1\subseteq B$ we have $\omega=A_1(\mathcal L)=B_1(\mathcal L)$. Then $|A_1|\equiv |B_1|$ mod $d$ and $|A_1|=0$ if and only if $|B_1|=0.$ \item[vi)] Let $A,B \geq X$ be two bases and $A_1,A_2\subseteq A$, $B_1,B_2\subseteq B$ with $A_1(\mathcal L)=B_1(\mathcal L)$ and $A_2(\mathcal L)=B_2(\mathcal L)$. Then $A_1\cap A_2=\varnothing$ if and only if $B_1\cap B_2=\varnothing$. \end{itemize} \end{lemma} \begin{proof} It suffices to prove i) in the case when $B$ is obtained by a simple expansion from $A$. Moreover, we may assume that $A_1=\{a\}$ and $B_1=\{a\alpha_i^1,\ldots,a\alpha_i^{n_i}\}$ for some colour $i$ of arity $n_i$. Then obviously $B_1(\mathcal L)\subseteq a(\mathcal L).$ Denote $b_j=a\alpha_i^j$ and let $u\in a(\mathcal L)$. Then $uv=ac$ for descending words $v$ and $c$. Performing the descending operations given by $c$ on the basis $A$, we obtain a basis $C$ with $ac\in C$. Let $D$ be a basis with $C,B\leq D$. Then there is some element $d\in D$ which can be written as $d=acc'$ for some descending word $c'$. Moreover, Remark \ref{tec} also implies that $d=b_jb'$ for some $j$ and descending word $b'$. As $uvc'=acc'=b_jb'$, we get $u\in b_j(\mathcal L)$. \noindent Now ii) follows from i). \noindent To prove iii), suppose that $\omega_i=\{a_i^1,\ldots,a_i^{l_i}\}(\mathcal L)$. For each $a_i^j$ we may find a basis $T_i^j\geq X$ containing $a_i^j$. Now let $A$ be a common descendant of the $T_i^j$ and use i).
\noindent To establish iv), it suffices to check that if $\widehat a\in A,$ $\widehat a\not\in A_i$, then $\widehat a\not\in A_i(\mathcal L)$. Suppose $\widehat a\in A_i(\mathcal L).$ Then there are descending words $v,u$ and some $a\in A_i$, such that $\widehat av=au=b$. Performing the descending operations given by $v$ and $u$ on $\widehat a$ and $a$ respectively, we get a basis $A\leq B$ and $b \in B$, contradicting Remark \ref{tec}. \noindent In v), since there is a basis $C$ with $A,B\leq C$, we may assume $A\leq B.$ Then v) is a consequence of i) and iv). \noindent Finally, for vi) we may also assume $A\leq B$. Then we only have to use Remark \ref{tec}. \end{proof} \begin{notation} Let $\omega\in\Omega$, $X\leq A$ and $B\subseteq A$ such that $\omega=B(\mathcal L)$. We put $$\|\omega\|=\begin{cases} 0&\text{ if }\omega=\varnothing,\\ t&\text{ where }|B|\equiv t\text{ mod }d\text{ and }0<t\leq d\text{ otherwise.} \end{cases}$$ This is well defined by Lemma \ref{properties} v). Take $B'\subseteq A$ and $\omega'=B'(\mathcal L)$. If $B\cap B'=\varnothing$, we put $\omega\wedge\omega'=\varnothing$. Note that by Lemma \ref{properties} vi) this is well defined. Finally, let $$\Omega^n_{c,\text{dis}}:=\{(\omega_1,\ldots,\omega_n)\in\Omega^n_c\mid \mathcal L=\bigcup_{i=1}^n\omega_i\text{ and }\omega_i\wedge\omega_j=\varnothing\text{ for }i\neq j \}.$$ \end{notation} The group $V_r(\Sigma)$ does not act on the set of leaves. It does, however, act on $\Omega$ as we will see in Lemma \ref{tec1}. Nevertheless, if $l$ is a leaf such that $l\in A$ for a certain basis $A \geq X$ and $g$ is a group element such that $gA \geq X$, then we will denote by $gl$ the leaf of $gA$ to which $l$ is mapped by $g$. \begin{lemma}\label{tec1} The group $V_r(\Sigma)$ acts by permutations on $\Omega$ and on $\Omega^n_{c,\text{dis}}$. There are only finitely many $V_r(\Sigma)$-orbits under the latter action.
Furthermore, the stabiliser of any element in $\Omega^n_{c,\text{dis}}$ is of the form $$V_{k_1}(\Sigma)\times\ldots\times V_{k_n}(\Sigma)$$ for certain integers $k_1,\ldots,k_n$. \end{lemma} \begin{proof} To see that $V_r(\Sigma)$ acts on $\Omega$, it suffices to check that if $\omega=l(\mathcal L)$ for some leaf $l\in\mathcal L$, we have $g\omega\in\Omega$ for any $g\in V_r(\Sigma)$. Let $X\leq A$ be a basis with $l\in A$. By Remark \ref{tecrem1} there is some $A\leq B$ with $A\leq gB$. Note that, by Lemma \ref{properties} i), $\omega$ can also be written as $$\omega=B_1(\mathcal L)$$ where $B_1=\{l_1,\ldots,l_k\}$ is the set of leaves in $B$ obtained from $l$. Therefore $gB_1=\{gl_1,\ldots,gl_k\}\subseteq gB$ and $g\omega =gB_1(\mathcal L).$ \noindent That this action induces an action on $\Omega^n_{c,\text{dis}}$ is a consequence of the easy fact that for any $g\in V_r(\Sigma)$ and any $(\omega_1,\ldots,\omega_n)\in\Omega^n_{c,\text{dis}}$ we have $g\omega_i\wedge g\omega_j=\varnothing$ and $\mathcal L=\cup_{i=1}^ng\omega_i$. Let $(\omega_1,\ldots,\omega_n),(\omega_1',\ldots,\omega_n')\in\Omega^n_{c,\text{dis}}$ be such that $\|\omega_i\|=\|\omega_i'\|$ for $1\leq i\leq n$. There are bases $X\leq A,A'$ and subsets $A_1,\ldots,A_n\subseteq A$, $A'_1,\ldots,A'_n\subseteq A'$ such that for each $1\leq i\leq n$, $\omega_i=A_i(\mathcal L)$, $\omega_i'=A_i'(\mathcal L)$ and $|A_i|=|A_i'|$. Hence we may choose a suitable element $g\in V_r(\Sigma)$ such that $gA=A'$ and $gA_i=A_i'$ for each $i=1,\ldots,n$. Then $g(\omega_1,\ldots,\omega_n)=(\omega_1',\ldots,\omega_n')$. Since there are only finitely many possible $n$-tuples of norms $(\|\omega_1\|,\ldots,\|\omega_n\|)$, it follows that there are only finitely many $V_r(\Sigma)$-orbits. Finally, consider $\mathcal{W}=(\omega_1,\ldots,\omega_n)\in\Omega^n_{c,\text{dis}}$ as before, i.e. with $X\leq A$ and $A_1,\ldots, A_n\subseteq A$ such that $\omega_i=A_i(\mathcal L)$ for $1\leq i\leq n$.
An element $g\in V_r(\Sigma)$ fixes $\mathcal{W}$ if and only if $g\omega_i=\omega_i$ for each $i=1,\ldots,n$. We may choose a basis $B$ with $A\leq B,gB$ and then, by using Lemma \ref{properties} i) and iv), we see that $g$ fixes $\mathcal{W}$ if and only if it maps those leaves of $B$ which are of the form $av$ for some $a\in A_i$ and some descending word $v$, to the analogous subset in $gB$. Considering the subalgebra of $U_r(\Sigma)$ generated by the $A_i$, we see that $g$ can be decomposed as $g=g_1\ldots g_n$ with $g_i\in V_{k_i}(\Sigma)$ for $k_i=|A_i|$. \end{proof} Let $K$ be a group and denote by $Y=K\ast K\ast\ldots$ the infinite join of copies of $K$ viewed as a discrete $CW$-complex, i.e. $Y$ is the space obtained by Milnor's construction for $K$. Then $Y$ has a $CW$-complex decomposition whose associated chain complex yields the standard bar resolution. For details see, for example, \cite[Section 2.4]{bensonII}. Obviously, if a group $H$ acts on $K$ by conjugation, this action can be extended to an action of $H$ on $Y$ and to an action of $G=K\rtimes H$ on $Y$. \begin{lemma}\label{auxmainfinfty} Let $H$ and $K$ be groups and let $H$ act on $K$ via $\varphi: H \to \mathrm{Aut} K$. Assume that $H$ is of type $\operatorname{F}_{\infty}$, and that for every $n \in \mathbb{N}$ the induced action of $H$ on $K^n$ has finitely many orbits and has stabilisers of type $\operatorname{F}_{\infty}$. Then $G = K \rtimes_{\varphi} H$ is of type $\operatorname{F}_{\infty}$. \\ \noindent The same statement holds if $\operatorname{F}_\infty$ is replaced with $\operatorname{F}P_\infty$. \end{lemma} \begin{proof} Let $Y_n = K^{\ast n}$ and let $Y$ be as above. Consider the action of $G$ on $Y$ induced by the diagonal action. Note that this preserves the individual join factors. Since the action of $K$ on $Y$ is free, the stabiliser of a cell in $G$ is isomorphic to its stabiliser in $H$.
The stabiliser of an $(n-1)$-simplex is the stabiliser of $n$ elements of $K$, thus $\operatorname{F}_{\infty}$ by assumption. Maximal simplices in $Y_n$ correspond to elements of $K^n$ and every simplex of $Y_n$ is contained in a maximal simplex. This, together with the fact that the action of $G$ on $K^n$ has only finitely many orbits, implies that the action of $G$ on $Y_n$ is cocompact. Finally, the connectivity of the filtration $\{Y_n \}_{n \in \mathbb{N}}$ tends to infinity as $n \to \infty.$ Hence the claim follows from \cite[Corollary 3.3(a)]{brown2}. \end{proof} \begin{theorem}\label{mainfinfty} Assume that for any $t>0$, the group $V_t(\Sigma)$ is of type $\operatorname{F}_\infty.$ Then the groups $G_i= K_i\rtimes V_{r_i}(\Sigma)$ of Theorem \ref{centraliser} are of type $\operatorname{F}_\infty.$ The same statement holds if $\operatorname{F}_\infty$ is replaced with $\operatorname{F}P_\infty$. \end{theorem} \begin{proof} Put $V:=V_r(\Sigma)$, $K:=K_i$ and $G:=G_i$. We claim that for every $n$ there is some $\overline n$ big enough such that there is an injective map of $V$-sets $$\phi_{n}:K^{n}\to\Omega^{\overline n}_{c,\text{dis}}.$$ Let $x\in K$ be given by a map $x: A \to L$, where $A$ is a basis with $X \leq A.$ The element $x$ is determined uniquely by a map which, by slightly abusing notation, we also denote $x:L\to\Omega$. This $x$ maps any $s\in L$ to $\omega_s:=A_s(\mathcal L)$ with $A_s=\{a\in A\mid x(a)=s\}.$ Obviously $\cup_{s\in L}\omega_s=\mathcal L$. This means that fixing an order on $L$ yields an injective map of $V$-sets $$\xi_n:K^n\to\Omega^{n|L|}_c.$$ Consider any $(\omega_1,\ldots,\omega_m)\in\Omega^m_c$ for $m=n|L|$. Let $X\leq A$ with $A_1,\ldots, A_m\subseteq A$ and $\omega_i=A_i(\mathcal L)$ for $1\leq i\leq m$. Let $\overline n:=2^m-1$, i.e. the number of non-empty subsets $\varnothing\neq S\subseteq\{1,\ldots,m\}$.
For any such $S$ let $$A_S:=\bigcap_{i\in S}A_i\smallsetminus\bigcup\{\bigcap_{j\in T}A_j\mid {S\subset T\subseteq\{1,\ldots,m\}}\}.$$ Then one easily checks that the $A_S$ are pairwise disjoint and that, setting $\omega_S:=A_S(\mathcal L)$, we have $\bigcup_S\omega_S=\mathcal L$. The preceding paragraph means that fixing an ordering on the set of non-empty subsets of $\{1,\ldots,m\}$ yields an injective map of $V$-sets $$\rho_m:\Omega^m_c\to\Omega^{\overline n}_{c,\text{dis}}.$$ Composing $\xi_n$ and $\rho_{m}$ we get the desired $\phi_n$. Now, applying Lemma \ref{tec1} we deduce that $K^n$ has only finitely many orbits under the action of $V_r(\Sigma)$ and that every cell stabiliser is isomorphic to a direct product of copies of $V_{t}(\Sigma)$ for suitable indices $t$. It now suffices to use Lemma \ref{auxmainfinfty}. \end{proof} \goodbreak This implies that \cite[Conjecture 7.5]{britaconcha} holds. \begin{corollary} \begin{enumerate} \item $V_r(\Sigma)$ is quasi-$\underline{\operatorname{F}P}_\infty$ if and only if $V_k(\Sigma)$ is of type ${\operatorname{F}P}_\infty$ for any $k.$ \item $V_r(\Sigma)$ is quasi-$\underline{\mathfrak FF}_\infty$ if and only if $V_k(\Sigma)$ is of type ${\mathfrak FF}_\infty$ for any $k.$ \end{enumerate} \end{corollary} \begin{proof} The ``only if'' part of both items is proven in \cite[Remark 7.6]{britaconcha}. The ``if'' part is a consequence of \cite[Definition 6.3, Proposition 6.10]{britaconcha} and Theorem \ref{mainfinfty} above. \end{proof} \noindent Theorem \ref{mainfinfty} also implies that the Brin-like groups of Section \ref{fpinfty} are of type quasi-$\operatorname{F}_\infty:$ \begin{corollary} Suppose $U_r(\Sigma)$ is valid, bounded and complete. Then $V_r(\Sigma)$ is of type quasi-$\operatorname{F}_\infty$. In particular, centralisers of finite groups are of type $\operatorname{F}_\infty$. \end{corollary} \section{Normalisers of finite subgroups}\label{normaliser} \noindent Let $Y$ be any basis.
We denote $$S(Y):=\{g\in V_r(\Sigma)\mid gY=Y\}.$$ Observe that this is a finite group, isomorphic to the symmetric group of degree $|Y|$. \begin{theorem}\label{normaliser1} Let $Q\leq V_r(\Sigma)$ be a finite subgroup. Let $Y$, $t$, and $r_i$, $l_i$, $\varphi_i$ for $1\leq i\leq t$ be as in the proof of Theorem \ref{centraliser}. Then $$N_{V_r(\Sigma)}(Q)=C_{V_r(\Sigma)}(Q)N_{S(Y)}(Q)$$ and $N_{V_r(\Sigma)}(Q)/C_{V_r(\Sigma)}(Q)\cong N_{S(Y)}(Q)/C_{S(Y)}(Q).$ \end{theorem} \begin{proof} Let $g\in N_{V_r(\Sigma)}(Q)$ and $Y_1=gY$. Then for any $q\in Q,$ $qY_1=qgY=gq^gY=gY=Y_1$. Therefore $Y_1$ is also fixed setwise by $Q$. Let $r_i'$ denote the number of components of type $\varphi_i$ in $Y_1$. Then, by \cite[Proposition 4.2]{britaconcha}, $r_i\equiv r_i'$ mod $d,$ and $r_i=0$ if and only if $r_i'=0$. We claim that $Y$ and $Y_1$ are isomorphic as $Q$-sets, in other words, that $r_i=r_i'$ for every $1\leq i\leq t$. Note that since $g$ normalises $Q$, it acts on the set of $Q$-permutation representations $\{\varphi_1,\ldots,\varphi_t\}$, via $\varphi_i^g(x):=\varphi_i(x^{g^{-1}})$. Let $i$ be such that $r_i\neq 0$ and let $g(i)$ be the index such that $\varphi_i^g=\varphi_{g(i)}$. The fact that $g:Y\to Y_1$ is a bijection implies that $r_i=r_{g(i)}'$. We may do the same for $g(i)$ and get an index $g^2(i)$ with $r_{g(i)}=r_{g^2(i)}'.$ At some point, since the orbits of $g$ acting on the set of permutation representations are finite, we get $g^k(i)=i$ and $r_{g^{k-1}(i)}=r_i'.$ As $r_i'\equiv r_i$ mod $d$, we have $r_{g^{k-1}(i)}\equiv r_i$ mod $d,$ and since $0<r_i,r_{g^{k-1}(i)}\leq d$ we deduce that $r_i'=r_{g^{k-1}(i)}=r_i$ as claimed. Now, we can choose an $s\in V_r(\Sigma)$ mapping $Y_1$ to $Y$ and such that $s:Y_1\to Y$ is a $Q$-map, i.e., commutes with the $Q$-action.
Therefore, $s\in C_{V_r(\Sigma)}(Q)$ and $sgY=Y$, thus $sg\in N_{S(Y)}(Q).$ \end{proof} \begin{remark} We can give a more detailed description of the conjugacy action of $N_{S(Y)}(Q)$ on the group $C_{V_r(\Sigma)}(Q)$. Recall that, by Theorem \ref{centraliser}, this last group is a direct product of groups $G_1,\ldots,G_t$. We use the same notation as in Theorem \ref{centraliser}. Let $g\in N_{S(Y)}(Q)$ and put $\varphi_{g(i)}=\varphi_i^g$ as before. Denote by $Z_{g(i)},Z_i\subseteq Y$ the subsets of $Y$ which are unions of $Q$-orbits of types $\varphi_{g(i)}$ and $\varphi_i$ respectively. Then one easily checks that $gZ_{g(i)}=Z_i$ and $G_{g(i)}=G_i^g$. Moreover, recall that $G_{i}=K_i\rtimes V_{r_i}(\Sigma)$ with $K_i=\varinjlim(U_{r_i}(\Sigma),L_i)$ and $L_i=C_{S_{l_i}}(\varphi_i(Q))$. Then $r_{g(i)}=r_i$ and $g$ maps the subgroup $V_{r_i}(\Sigma)$ of $G_i$ to the same subgroup of $G_{g(i)}$ and $K_i$ to $K_{g(i)}$. We also notice that $g$ acts diagonally on the system $(U_{r_i}(\Sigma),L_i)$, mapping it to $(U_{r_{g(i)}}(\Sigma),L_{g(i)})$. In particular, the action of $g$ on $L_i$ is the restriction of its action on $C_{S(Y)}(Q)$ and, passing to the colimit, this action yields the conjugation action $K_i^g=K_{g(i)}$. \end{remark} \begin{remark} Using \cite[Theorem 5]{zassenhaus}, one can also give a more detailed description of the groups $L_i$ above: $$L_i=N_{\varphi_i(Q)}(\varphi_i(Q)_1)/\varphi_i(Q)_1$$ where $\varphi_i(Q)_1$ is the stabiliser of one letter in $\varphi_i(Q)$. Of course, if $Q$ is cyclic, then so is $\varphi_i(Q)$ and we get $\varphi_i(Q)_1=1$ and $L_i=\varphi_i(Q)$. \end{remark} \iffalse \section{An explicit finite presentation}\label{presentation} \noindent The proof of Theorem \ref{mainfinfty} can be used to show that whenever the groups $V_k(\Sigma)$ are finitely presented for any $k$, then so is $C_{V_r(\Sigma)}(Q)$ for any finite $Q\leq V_r(\Sigma),$ but it does not give an explicit finite presentation.
In this Section we are going to construct a finite presentation of $C_{V_r(\Sigma)}(Q)$. To any $\omega\in\Omega$ and any $x\in L$, we associate the element $\chi_{\omega,x}\in\varinjlim(U_r(\Sigma),L)$ defined as follows. Let $X\leq A$ be a basis such that $\omega=A_1(\mathcal L)$ for some $A_1\subseteq A$. Then $\chi_{\omega,x}$ is given by the map $\chi_{\omega,x}:A\to L$ with $\chi_{\omega,x}(a)=x$ if $a\in A_1$, and $\chi_{\omega,x}(a)=1$ otherwise. Suppose that $\omega=\omega_1\cup\omega_2$ with $\omega_1\wedge\omega_2=\varnothing.$ We write $\omega_1\dot\cup\omega_2$ for the disjoint union. \begin{lemma}\label{prescolim} The following is a presentation of $\varinjlim(U_r(\Sigma),L)$: $$\langle (\chi_{\omega,x})_{\omega\in\Omega\smallsetminus\{\varnothing\},x\in L}\mid \mathcal{R}_1,\mathcal{R}_2,\mathcal{R}_3\rangle$$ where $$\mathcal{R}_1=\{\chi_{\omega,xy}^{-1}\chi_{\omega,x}\chi_{\omega,y}\mid\omega\in\Omega,x,y\in L\},$$ $$\mathcal{R}_2=\{[\chi_{\omega,x},\chi_{\omega',y}]\mid\omega,\omega'\in\Omega,\omega\wedge\omega'=\varnothing\}\text{ and}$$ $$\mathcal{R}_3=\{\chi_{\omega,x}^{-1}\chi_{\omega_1,x}\chi_{\omega_2,x}\mid\omega,\omega_1,\omega_2\in\Omega,\omega=\omega_1\dot\cup\omega_2\}.$$ Moreover, $V_r(\Sigma)$ acts by permutations with finitely many orbits on this presentation. \end{lemma} \begin{proof} Obviously any $\chi\in\varinjlim(U_r(\Sigma),L)$ is a product of elements of the form $\chi_{\omega,x}$ for a certain $\omega\in\Omega$ and $x\in L$. Let $F$ denote the free group on the set $\{\widetilde\chi_{\omega,x}\mid \omega\in\Omega\smallsetminus\{\varnothing\},x\in L\}$. Then there is an epimorphism $$F\buildrel\tau\over\twoheadrightarrow\varinjlim(U_r(\Sigma),L)$$ with $\tau(\widetilde{\chi}_{\omega,x})=\chi_{\omega,x}$. Let $G$ be the abstract group defined in the statement but on the generators $\widetilde{\chi}_{\omega,x}$.
It is immediate to verify that the epimorphism $\tau$ defined above induces an epimorphism, which we still call $\tau$, from $G$ to $\varinjlim(U_r(\Sigma),L)$. This follows since all relations inside $G$ are easily verified to hold for the images $\tau(\widetilde{\chi}_{\omega,x})$. Assume that we have a word $\widetilde{w}=w(\widetilde{\chi}_{\omega_1,x_1}, \ldots, \widetilde{\chi}_{\omega_k,x_k})$, for some $\omega_1,\ldots,\omega_k\in\Omega$ and $x_1,\ldots,x_k\in L$. Assume further that \[ 1=\tau(\widetilde{w})=\tau(w(\widetilde{\chi}_{\omega_1,x_1}, \ldots, \widetilde{\chi}_{\omega_k,x_k}))=w(\tau(\widetilde{\chi}_{\omega_1,x_1}), \ldots, \tau(\widetilde{\chi}_{\omega_k,x_k})). \] Let $X\leq A$ be a basis with subsets $A_i\subseteq A$ such that $\omega_i=A_i(\mathcal L)$. We now refine the set $\{A_1,\ldots,A_k\}$ to a set $\{A'_1,\ldots,A'_{k'}\}$ of subsets of $A$ such that for all $i,j \leq k'$ either $A'_i\cap A'_j=\varnothing$ or $A'_i=A'_j.$ By suitably applying the relations in $\mathcal{R}_3$ to both the original word $w(\widetilde{\chi}_{\omega_1,x_1}, \ldots, \widetilde{\chi}_{\omega_k,x_k})$ and its image $w:=\tau(\widetilde{w})=w({\chi}_{\omega_1,x_1}, \ldots,{\chi}_{\omega_k,x_k}),$ we may rewrite each occurrence of $\chi_{\omega_i,x_i}$ and $\widetilde{\chi}_{\omega_i,x_i}$ in terms of suitable new elements $\widetilde{\chi}_{\omega'_j,y_j}$ and ${\chi}_{\omega'_j,y_j}$ for $1\leq j\leq k'$, so that either $\omega'_j\wedge\omega'_i=\varnothing$ or $\omega'_j=\omega'_i$. Reordering them so that $\omega'_1,\ldots,\omega'_u$ for $1\leq u\leq k'$ are pairwise distinct and applying the relations in $\mathcal{R}_2$ and $\mathcal{R}_1$ to group the suitable products of the $y_j$'s, we obtain new words \[ \widetilde{w}\sim\widetilde{w}'=\widetilde{\chi}_{\omega'_1,z_1} \ldots \widetilde{\chi}_{\omega'_{u},z_{u}}, \qquad {w}\sim {w}'={\chi}_{\omega'_{1},z_{1}} \ldots {\chi}_{\omega'_{u},z_{u}}, \] where the $\omega_i'$'s are pairwise disjoint.
If ${w}'\sim 1$, we must have $z_i=1$ for any $1\leq i\leq u$, by applying the word ${w}'$ to an $a \in A'_i$ such that $A'_i(\mathcal{L})=\omega_i'$. From $\mathcal{R}_1$ it is immediate to see that $\widetilde{\chi}_{\omega,1}=1$ for any $\omega\in\Omega$ and so we also have $\widetilde{w}\sim \widetilde{w}'\sim 1$. By Lemma \ref{tec1}, the group $V_r(\Sigma)$ acts by permutations on $\Omega$. Moreover, for any $g\in V_r(\Sigma)$, if $\omega,\omega'\in\Omega$ are such that $\omega\wedge\omega'=\varnothing$, then $g\omega\wedge g\omega'=\varnothing$, and if $\omega=\omega_1\cup\omega_2$ for $\omega_1,\omega_2\in\Omega$, then $g\omega=g\omega_1\cup g\omega_2$. Therefore $V_r(\Sigma)$ acts by permutations on this presentation. To prove the last statement, it suffices to check the following: \noindent \emph{Claim 1}: The set of generators is $V_r(\Sigma)$-finite. \noindent \emph{Claim 2}: Each of the sets of relations $\mathcal{R}_1, \mathcal{R}_2,\mathcal{R}_3$ is $V_r(\Sigma)$-finite. As the group $L$ is finite, both claims follow from slight variations of the proof of Lemma \ref{tec1}. For example, for Claim 2 for $\mathcal{R}_2$, it suffices to check that whenever we have $\omega,\omega',\widehat\omega,\widehat\omega'\in\Omega$ with $\omega\wedge\omega'=\varnothing$, $\widehat\omega\wedge\widehat\omega'=\varnothing$, $\|\omega\|=\|\widehat\omega\|$ and $\|\omega'\|=\|\widehat\omega'\|$, then there is some $g\in V_r(\Sigma)$ such that for any $x\in L$, $\chi_{\widehat\omega,x}=\chi_{\omega,x}^g$ and $\chi_{\widehat\omega',x}=\chi_{\omega',x}^g$. To get a suitable $g,$ choose bases $X\leq A,\widehat A$ so that for $B,B'\subseteq A$ and $\widehat B,\widehat B'\subseteq \widehat A$, we have $\omega=B(\mathcal L)$, $\omega'=B'(\mathcal L)$, $\widehat\omega=\widehat B(\mathcal L)$, $\widehat\omega'=\widehat B'(\mathcal L)$, $|A|=|\widehat A|$, $|B|=|\widehat B|$ and $|B'|=|\widehat B'|$.
The assumptions imply that $B\cap B'=\varnothing=\widehat B\cap\widehat B'$. So we may choose a $g\in V_r(\Sigma)$ with $gA=\widehat A$, $gB=\widehat B$ and $gB'=\widehat B'$. In a completely analogous way one proves that for $\omega,\omega_1,\omega_2,\widehat\omega,\widehat\omega_1,\widehat\omega_2\in\Omega$ with $\omega=\omega_1\cup\omega_2$, $\widehat\omega=\widehat\omega_1\cup\widehat\omega_2$, $\|\omega\|=\|\widehat\omega\|$, $\|\omega_1\|=\|\widehat\omega_1\|$ and $\|\omega_2\|=\|\widehat\omega_2\|$ there is some $g\in V_r(\Sigma)$ such that for any $x\in L$, $\chi_{\widehat\omega,x}=\chi_{\omega,x}^g$, $\chi_{\widehat\omega_1,x}=\chi_{\omega_1,x}^g$ and $\chi_{\widehat\omega_2,x}=\chi_{\omega_2,x}^g$. \end{proof} \begin{remark} \label{thm:disjoint-relation} One can get a slightly different presentation having as generators just the elements of the form $\chi_{\omega,x}$ with $x\in L$ and $\omega=l(\mathcal L)$ for a leaf $l\in\mathcal L$. Restricting to this kind of $\omega$, the set of relations $\mathcal{R}_1,\mathcal{R}_2$ would remain essentially the same. Instead of $\mathcal{R}_3$ we would have to consider all relations of the form $\chi_{\omega,x}^{-1}\chi_{\omega_1,x}\ldots\chi_{\omega_{n_i},x}$ for $x\in L$, $\omega=l(\mathcal L)$ for a leaf $l\in\mathcal L$ and $\omega_j=\omega\alpha_i^j$ for $1\leq j\leq n_i$ and $i$ a colour of arity $n_i$. \end{remark} \begin{proposition}\label{finpres} Assume that the group $V_r(\Sigma)$ is finitely presented. Let $Q\leq V_r(\Sigma)$ be a finite subgroup. Given a finite presentation of $V_r(\Sigma),$ Lemma \ref{prescolim} together with Theorem \ref{thm:burnside-3} yields an explicit finite presentation of $C_{V_r(\Sigma)}(Q).$ \end{proposition} \begin{proof} By Theorem \ref{centraliser}, it suffices to construct an explicit finite presentation of a group of the form $ \varinjlim(U_r(\Sigma),L)\rtimes V_r(\Sigma)$ when $L$ is an arbitrary finite group.
Let $V_r(\Sigma) =\langle C \mid D\rangle$ be a finite presentation of $V_r(\Sigma)$ and let $\varinjlim(U_r(\Sigma),L)=\langle A\mid B\rangle$ be the presentation constructed in Lemma \ref{prescolim}. We need to verify the hypotheses of Theorem \ref{thm:burnside-3}. In Lemma \ref{prescolim} we have already checked that the group $V_r(\Sigma)$ acts by permutations on this presentation and that there are only finitely many orbits under that action. We may therefore choose $A_0\subset A$ and $B_0\subset B$ finite sets of representatives of these orbits. Theorem \ref{thm:burnside-3} thus implies that the group $\varinjlim(U_r(\Sigma),L)\rtimes V_r(\Sigma)$ has the following presentation: \begin{align*} \langle A_0,C \mid \widehat B_0,D, [\mathrm{Stab}_{V_r(\Sigma)}(y), y],y\in A_0 \rangle. \end{align*} We can give explicit descriptions of possible choices for the sets $A_0, B_0$. Set $X=\{x_1,\ldots,x_r\}$ and let $\omega_i=\{x_1,\ldots,x_i\}(\mathcal L)$ for $i=1,\ldots,r$. Then: $$A_0=\{\chi_{\omega_i,z}\mid 1\leq i\leq r,z\in L\}.$$ To describe $B_0$, we are going to split it into three pairwise disjoint subsets $B_0=B_0^1\cup B_0^2\cup B_0^3$, according to the three subsets of relations $\mathcal{R}_1$, $\mathcal{R}_2$ and $\mathcal{R}_3$ of Lemma \ref{prescolim}. The simplest one is $B_0^1$: $$B_0^1=\{\chi_{\omega_i,zy}^{-1}\chi_{\omega_i,z}\chi_{\omega_i,y}\mid 1\leq i\leq r,z,y\in L\}.$$ For $B_0^2,B_0^3$ it is more convenient to fix a basis $X\leq Y$ with $|Y|\geq 2r$. Then we may choose $$B_0^2=\{[\chi_{\omega,z},\chi_{\omega',y}]\mid z,y\in L,\omega=Z(\mathcal L),\omega'=Z'(\mathcal L),Z,Z'\subseteq Y,Z\cap Z'=\varnothing\},$$ $$B_0^3=\{\chi_{\omega,z}^{-1}\chi_{\omega_1,z}\chi_{\omega_2,z}\mid z\in L,\omega_1=Z_1(\mathcal L),\omega_2=Z_2(\mathcal L),\omega=(Z_1\dot\cup Z_2)(\mathcal L),Z_1,Z_2\subseteq Y\}.$$ Observe that these choices of $B_0^2$ and $B_0^3$ yield redundant presentations.
The previous presentation may not be finite because of all the relations needed to form $[\mathrm{Stab}_{V_r(\Sigma)}(y), y]$ where $y\in A_0$. Notice that $g \in \mathrm{Stab}_{V_r(\Sigma)}(y)$ if and only if $g(\omega)=\omega$, where $y=\chi_{\omega,z}$ for some $z\in L$. By Lemma \ref{tec1} and the assumption on $V_r(\Sigma)$ we deduce that $\mathrm{Stab}_{V_r(\Sigma)}(y)$ is generated by finitely many elements $\mu_1,\ldots, \mu_m$. Consider now the following $m$ relations, which are a subset of the stabiliser relations $[\mathrm{Stab}_{V_r(\Sigma)}(y), y]$: \begin{eqnarray} \label{rel4} \mu_i \chi_{\omega,z} \mu_i^{-1} = \chi_{\omega,z}, \qquad i=1,\ldots, m. \end{eqnarray} If $g \in \mathrm{Stab}_{V_r(\Sigma)}(y)$, then $g=w(\mu_1,\ldots,\mu_m)$ and the stabiliser relation $g \chi_{\omega,z} g^{-1}=\chi_{\omega,z}$ is thus obtained by starting from relation (\ref{rel4}) for some $i$ and then suitably conjugating this relation to build the word $w$. Therefore, the group $G$ has the following finite presentation \begin{align*} \langle A_0, C \mid \widehat B_0,D, [\mu_i, y], i=1,\ldots, m, y\in A_0 \rangle \end{align*} where the elements $\mu_1,\ldots, \mu_m$ are expressed as words in the generators $C$. \end{proof} \section{A finite generating system for $V_r(\Sigma)$} \label{sec:finite-generation-V} \noindent The proof of Proposition \ref{finpres} implies that if $V_r(\Sigma)$ is finitely generated, then so are centralisers of finite subgroups. In this Section we are going to see that $V_r(\Sigma)$ is indeed finitely generated if we only assume that $\Sigma$ is valid and bounded. If we additionally assume that $\Sigma$ is complete, then the finite generation of $V_r(\Sigma)$ obviously follows from Theorem \ref{FPinfty}. We proceed in the same spirit as the work by Burillo and Cleary in \cite[Theorem 2.1]{burillocleary}. To define our generating family, we will use certain bases which are not descendants of $X$, but have the right number of elements.
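As a side remark (not needed for the argument), integers $m_1,\ldots,m_s$ with $\sum_t m_t(n_t-1)=d$, as required in the construction that follows, exist precisely when $d$ is a multiple of $\gcd_t(n_t-1)$, and can be produced by the extended Euclidean algorithm. The following Python sketch is purely illustrative: the arities are hypothetical and the function name \texttt{bezout\_coefficients} is ours, not from any library.

```python
from math import gcd
from functools import reduce

def bezout_coefficients(values):
    """Return (g, coeffs) with g = gcd(values) and g = sum(c * v)."""
    def ext_gcd(a, b):
        # Extended Euclid: returns (g, x, y) with g = x*a + y*b.
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    g, coeffs = values[0], [1] + [0] * (len(values) - 1)
    for i in range(1, len(values)):
        # Combine the running gcd with the next value.
        g2, x, y = ext_gcd(g, values[i])
        coeffs = [c * x for c in coeffs]
        coeffs[i] = y
        g = g2
    return g, coeffs

# Hypothetical arities n_t; the relevant quantities are n_t - 1.
arities = [3, 4, 6]                 # so the values n_t - 1 are 2, 3, 5
vals = [n - 1 for n in arities]
d, m = bezout_coefficients(vals)
assert d == reduce(gcd, vals)
assert sum(mt * v for mt, v in zip(m, vals)) == d
```

Negative coefficients $m_t$ correspond to applying ascending rather than descending operations, in line with the cardinality bookkeeping described below.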
Choose integers $m_1,\ldots,m_s$ with $$d=\sum_{t=1}^sm_t(n_t-1).$$ Recall that whenever we apply the descending operation $\alpha_t$ to an element of a basis we get a new basis of cardinality increased by $n_t-1$, whereas if we apply the ascending operation $\lambda_t$ the cardinality is decreased by $n_t-1$. This means that there is a sequence of operations that we can perform on $X$ to get a new basis with exactly $r+d$ elements. Notice that we only have to apply the descending operations first, to ensure that we have enough elements for the ascending ones. Moreover, we may order the leaves of $X$ and assume that only the last leaf, say $l_0$, is affected by this process. Denote the resulting basis by $\tau X$, so that $X\smallsetminus\{l_0\}\subseteq\tau X$. Order the elements of $\tau X$ so that those in $X\smallsetminus\{l_0\}$ come first and are ordered as they were in $X$. We may repeat the process with the last element of $\tau X$ to get a new basis $\tau^2 X$ which we order compatibly and so on. We set $\tau^0X:=X$. Notice that, at each step, the basis $\tau^i X$ is obtained by operations involving only the last element of $\tau^{i-1}X$. Therefore $|\tau^iX|=r+id$. Since $\Sigma$ is bounded, any element $g\in V_r(\Sigma)$ can be represented by a bijection between the leaves of a pair of bases $(T_1,T_2)$ with $X\leq T_1,T_2$. Let $i$ be such that $|T_1|=|T_2|=r+id$. Then $g=g_2^{-1}\sigma g_1$ where, for $j=1,2$, $g_j$ is a group element taking $T_j$ to $\tau^iX$, and where $\sigma$ fixes $\tau^iX$ setwise. This means that the set $$M:=\{g\in V_r(\Sigma)\mid gT=\tau^iX\text{ for some }X\leq T\text{ and }i\geq0\}$$ generates our group. Using the order we gave to $\tau^iX$, put $\tau^iX=\{l_1^i,\ldots,l_{r+id}^i\}.$ Assume that on the $j$-th element we perform a descending operation of colour $t$.
We denote by $\tau^iX_{j\alpha_t}$ the basis thus obtained and we order its elements as follows: $$l_1^i,\ldots,l_{j-1}^i,l_{j}^i\alpha_t^1,\ldots,l_j^i\alpha_t^{n_t},l_{j+1}^i,\ldots,l_{r+id}^i.$$ \noindent Put $s_t:=\frac{n_t-1}{d}$ and let $\xi^j_{i,t}\in V_r(\Sigma)$ be the element such that $$\xi^j_{i,t}:\tau^iX_{j\alpha_t}\to\tau^{(i+{s_t})}X$$ preserves the orders. One can easily check that, if $g\in M$ takes a basis $T$ of cardinality $r+id$ to $\tau^iX$, i.e. $$g:T\to\tau^iX,$$ then, for any $1\leq j\leq r+id$ and each $1\leq t\leq s$, the composition $\xi^j_{i,t}g$ takes the basis obtained from $T$ by performing the descending operation $\alpha_t$ in the vertex $g^{-1}l_j^i$ to $\tau^{(i+s_t)}X$. Notice that this basis has exactly $|T|+n_t-1=r+id+n_t-1$ leaves. If we set $$M_1:=\{\xi^j_{i,t}\mid i\geq 0, 1\leq j\leq r+id,1\leq t\leq s\},$$ $$M_2:=\{\sigma\in V_r(\Sigma)\mid \sigma\tau^iX=\tau^iX\text{ for some }i\geq 0\},$$ the argument above implies that, starting from the identity map $id:X \to X$, we can multiply by the $\xi^j_{i,t}$'s to expand the domain $X$ into $T$, while transforming the codomain $X$ into the appropriate $\tau^{s}X$, up to a permutation of the codomain. Therefore, any element $g\in M$ can be written as a product of elements in $M_1$ followed by an element of $M_2$. In other words, $M_1\cup M_2$ also generates our group. We want to reduce $M_1\cup M_2$ to a finite generating system. To do that, first observe that, for $j< r+di-d$, we have $$\xi_{i,t}^j=\xi_{i-1,t}^j.$$ Hence we only have to consider the elements $\xi_{i,t}^j$ for $r+d(i-1)\leq j\leq r+di$, if $i>0$, and $1\leq j\leq r$ in the case $i=0$. To further reduce the number of generators, it is useful to slightly change our notation. We set $$\rho^j_{i,t}:=\xi_{i,t}^{j+r+d(i-1)-1}$$ for $i>0$ and $1\leq j\leq d+1$. Note that then $r+d(i-1)=1+r+d(i-1)-1\leq j+r+d(i-1)-1\leq d+1+r+d(i-1)-1=r+di$.
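The reindexing $\rho^j_{i,t}:=\xi_{i,t}^{j+r+d(i-1)-1}$ can also be sanity-checked mechanically: for $1\leq j\leq d+1$ the shift $j\mapsto j+r+d(i-1)-1$ sweeps exactly the indices $r+d(i-1),\ldots,r+di$. A short illustrative Python check, with hypothetical values of $r$ and $d$:

```python
# Sanity check of the index shift j -> j + r + d*(i-1) - 1 defining rho.
def shifted_range(r, d, i):
    """Images of j = 1, ..., d+1 under the shift, for a fixed i > 0."""
    return [j + r + d * (i - 1) - 1 for j in range(1, d + 2)]

r, d = 5, 3                       # hypothetical values of r and d
for i in (1, 2, 7):
    idx = shifted_range(r, d, i)
    # The images form exactly the interval [r + d*(i-1), r + d*i].
    assert idx == list(range(r + d * (i - 1), r + d * i + 1))
```

This confirms that, for each fixed $i>0$, the elements $\rho^j_{i,t}$ account for precisely the generators $\xi^j_{i,t}$ that were not already covered by smaller values of $i$.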
Then one easily checks that for any $i>1$ and any $1\leq j_1\leq d+1$, $$\rho_{1,t}^{1}\rho_{i,t}^{j_1}=\rho_{i+s_t,t}^{j_1}\rho_{1,t}^{1}.$$ \noindent Now consider the set $M_2$. For $i\geq 0$, let $$S^i:= \{\sigma\in V_r(\Sigma)\mid \sigma\tau^iX=\tau^iX\}.$$ In particular, $S^i$ is the symmetric group on $r+di$ letters and $M_2$ is the union of all $S^i$'s. For $i\geq 2$, let $S_{d+2}^i$ be the subgroup of $S^i$ that moves only the last $d+2$ vertices of $\tau^iX$. Then, for any $i\geq 3$ and any $\sigma\in S_{d+2}^i$, we have $$\rho_{1,t}^1\sigma =\sigma'\rho_{1,t}^1$$ for some element $\sigma'\in S_{d+2}^{i+s_t}$. Since $S^i_{d+2}$ and $S^{i+s_t}_{d+2}$ are finite symmetric groups of the same cardinality, they are isomorphic. We see that conjugation by $\rho_{1,t}^1$ induces an isomorphism between these two finite groups. Observe also that, for any $i\leq i'$, $$\{\sigma\in S^i\mid\sigma\text{ fixes the last element of }\tau^iX\}\subseteq S^{i'},$$ and that $S^{i'}$ is generated by $$\bigcup_{i<i'}\{\sigma\in S^i\mid\sigma\text{ fixes the last element of }\tau^iX\}\cup S^{i'}_{d+2}.$$ When $i'$ is big enough, we may choose $i:=i'-s_t$ and an inductive argument shows that any such $S^{i'}$ can be generated from finitely many of the $S^i$ and $\rho^1_{1,t}$. Hence we have: \begin{proposition}\label{fingen} Assume $\Sigma$ is valid and bounded. Then $V_r(\Sigma)$ is finitely generated. Moreover, centralisers of finite subgroups in $V_r(\Sigma)$ are finitely generated too. \end{proposition} \begin{proof} The finite generating system for $V_r(\Sigma)$ consists of \[ \begin{array}{c} \rho_{i,t}^j\text{ for }1\leq j\leq d+1, 1\leq t\leq s\text{ and }1\leq i\leq s_t+1, \\ \rho_{0,t}^j\text{ for }1\leq j\leq r\text{ and }1\leq t\leq s, \\ \{ \text{Transpositions between consecutive letters in } \\ S^0,S^1,S_{d+2}^2, \ldots,S_{d+2}^l\text{ for }l=\max\{3,s_t\}\}. \end{array} \] \end{proof} \begin{question} Does this method give us a finite presentation?
Furthermore, can one find a generating system with only two elements? \end{question} \appendix \section{The Burnside procedure} We shall now give an outline of the Burnside procedure used in the proof of Proposition \ref{finpres}. As mentioned in the introduction, we do not claim any originality for this. This procedure has been used, without proof, in \cite{kassabovetc}. We are not aware of any place where a proof is presented, hence we include one here for completeness. The goal is to find a small finite presentation of a group, in the cases where the following procedure can be applied. The idea is to look for a possibly infinite, but well behaved, presentation of a group $G$ and a group $T$ such that the action of $T$ on the generators and relators of $G$ cuts them down to a very small number. At a later stage, the group $T$ will be assumed to be a subgroup of $G$ and its action will return a new, shorter presentation. \subsection{Preliminary lemmas} \label{ssec:burnside-1} The beginning of this procedure is general and we only require each of the groups $G$ and $T$ to have a presentation, without any further assumption on them. Let $G=\langle A \mid B \rangle$ and $T=\langle C \mid D \rangle$ be groups. Let $T$ act on $A$ by permutations. Notice that $B \le F(A)$, the free group generated by $A$, and observe that $T$ also acts on $F(A)$. We assume that $T(B) = B$. Let $A_0$ be a set of representatives for the $T$-orbits in $A$ and $B_0$ be a set of representatives for the $T$-orbits in $B$. We observe that $B_0 \le \langle \, t(a_0) \mid a_0 \in A_0, t \in T \, \rangle$, that is, we may express the elements of $B_0$ as products of the results of $T$ acting on elements of $A_0$. In the special case that $T$ is a subgroup of $G$, we will be able to express elements in $B_0$ as products of conjugates of elements in $A_0$ by elements in $T$. Hence each element of $B_0$, seen as an element in $G$, can be written in more than one way.
So we fix an expression of the type $t_1(a_1) \ldots t_k(a_k)$ for each element of $B_0$. We then define the set $\widehat{B}_0 \subset \langle t a_0 t^{-1} \mid a_0 \in A_0, t \in T \rangle$ to be the set of fixed expressions for the elements of $B_0$, where we have replaced the action of $T$ on $A_0$ by the conjugation of elements. That is, if $t_1(a_1) \ldots t_k(a_k)$ is the fixed expression for an element of $B_0$, the corresponding element in $\widehat B_0$ is $t_1 a_1 t_1^{-1} \ldots t_k a_k t_k^{-1}$. The set $\widehat{B}_0$ is thus a set of formal expressions which will be used later to express relations in the groups. \begin{lemma} \label{thm:burnside-1} Following the notation previously defined, we have \[ G \rtimes T \cong \langle A_0,C \mid \widehat{B}_0, D, [\mathrm{Stab}_T(y),y], y \in A_0 \rangle, \] where the semi-direct product is given by the action of $T$ on $G$ as follows: for all $g_1,g_2 \in G$ and $t_1,t_2 \in T,$ multiplication is given by $(g_1,t_1)(g_2,t_2)=(g_1 \cdot t_1(g_2),t_1t_2).$ \end{lemma} \begin{proof} Let $H$ be the group presented by $\langle A_0,C \mid \widehat{B}_0, D, [\mathrm{Stab}_T(y),y], y \in A_0 \rangle$. Define the group homomorphism $\varphi:F(A_0 \cup C) \to G \rtimes T$ by sending $a_0 \in A_0$ to $(a_0,1) \in G \rtimes T$ and $c \in C$ to $(1,c) \in G \rtimes T$. Moreover, we observe that by construction of the semidirect product we have that \[ \varphi(c) \varphi(a_0) \varphi(c)^{-1}=(1,c)(a_0,1)(1,c^{-1})=(c(a_0),1) \] and, more generally, we see that \[\varphi(t) \varphi(a_0) \varphi(t)^{-1}=(t(a_0),1)\tag{$\ast$}\] for any word $t\in T$. \noindent \emph{Claim 1.} The map $\varphi$ induces a homomorphism $H \to G \rtimes T$, which we still call $\varphi$.
\noindent \emph{Proof of Claim 1.} Let $d \in D$ be a relation in $H$. Then $d=c_1 \ldots c_k$ for some $c_i \in C$, and \[ \varphi(c_1) \ldots \varphi(c_k)=(1,c_1) \ldots (1,c_k) = (1,c_1 \ldots c_k) = (1,d)=(1,1), \] since $d$ is the identity in $T$. Let now $\widehat{b}_0 \in \widehat{B}_0$ be a relation in $H$, so that $\widehat{b}_0=t_1 a_1 t_1^{-1} \ldots t_k a_k t_k^{-1}$ for some $a_i \in A_0$ and $t_i \in T$. Moreover, we write $t_i=c_{1i} \ldots c_{v_i i}$ for elements of $C$ and, applying $(\ast)$ above, \begin{eqnarray*} \prod_{i=1}^k \varphi(t_i)\varphi(a_i)\varphi(t_i)^{-1} &=& \Big(\prod_{i=1}^k t_i(a_i),1\Big) \\ &=& (b_0,1) = (1,1). \end{eqnarray*} Indeed, $b_0:=t_1(a_1)\ldots t_k(a_k)$ is the identity in $G$ since it is an element of $B_0$. Finally let $a_0 \in A_0$, $t \in \mathrm{Stab}_T(a_0)$ and write $t = c_1 \ldots c_k$. Thus we have, using $(\ast)$ again: \begin{eqnarray*} \varphi(t)\varphi(a_0)\varphi(t)^{-1}\varphi(a_0)^{-1} &=& (t(a_0),1)(a_0^{-1},1) = (1,1), \end{eqnarray*} since $t(a_0)=a_0$. We have thus verified the hypotheses of von Dyck's Theorem and we can extend $\varphi$ to a group homomorphism $H \to G \rtimes T$, thus proving Claim 1. \noindent \emph{Claim 2.} The map $\varphi$ is surjective. \noindent \emph{Proof of Claim 2.} Any element $(1,t) \in \{1\} \times T:=\{(1,s) \mid s \in T \}$ can be written as $(1,c_1 \ldots c_k)$ for suitable $c_i \in C$, thus \[ (1,t)=(1,c_1) \ldots (1,c_k)=\varphi(c_1) \ldots \varphi(c_k) = \varphi(t) \] since $\varphi$ is a homomorphism, and so $\varphi(H)$ contains $\{1\} \times T$ as a subgroup. We observe that any element $(g,1)$ of the subgroup $G \times \{1\}:=\{(h,1) \mid h \in G \}$ can be written as $(t_1(a_1) \ldots t_k(a_k),1)$ for suitable $a_i \in A_0$ and $t_i \in T$. Hence, using calculations analogous to those of Claim 1, we can write \[ \begin{array}{c} (g,1)=(t_1(a_1) \ldots t_k(a_k),1)=\prod_{i=1}^k (1,t_i)(a_i,1)(1,t_i^{-1}) = \varphi(\prod_{i=1}^k t_i a_i t_i^{-1}).
\end{array} \] Thus, $\varphi(H)$ contains both $G \times \{1\}$ and $\{1\} \times T$ as subgroups, that is, $\varphi(H) \ge G \rtimes T$. \noindent \emph{Claim 3.} The map $\varphi$ is injective. \noindent \emph{Proof of Claim 3.} Any element of $A$ can be written as $t(a_0)$, for some $a_0 \in A_0$ and $t \in T$. Define $\overline{A}^*=\{t a_0 t^{-1} \mid a_0 \in A_0, t \in T\}$ to be the set of symbols of $A$ where we have replaced the action of $T$ with the conjugation of elements. We notice that, if $t(a_0)=s(a_0)$, then $t^{-1}s \in \mathrm{Stab}_T(a_0)$, and we thus define an equivalence relation on $\overline{A}^*$ by writing $t a_0 t^{-1} \sim s a_0 s^{-1}$ if and only if $t^{-1}s \in \mathrm{Stab}_T(a_0)$. We define $\overline{A}:=\overline{A}^*/\sim$ to be the collection of equivalence classes. If $a \in A$ and $a=t(a_0)$, for some $a_0 \in A_0$ and $t \in T$, we define an element $\overline{a}$ of $\overline{A}$ by setting $\overline{a}=\{s a_0 s^{-1} \mid t^{-1}s \in \mathrm{Stab}_T(a_0)\}$. With this notation, we observe that $T$ acts on $\overline{A}$ through \[ (s,\overline{a}) \to s \cdot \overline{a} := \overline{st a_0 t^{-1} s^{-1}}, \] for some $a_0 \in A_0, t\in T$ such that $\overline{a}=\overline{t a_0 t^{-1}}$. Also, notice that the map $\psi:A \to \overline{A}$ sending $a \mapsto \overline{a}$ is a $T$-equivariant bijection, that is, $\psi(s a)=s \psi(a)= s \cdot \overline{a}$ for all $s \in T$. Hence the action of $T$ on $A$ is equivalent to the action of $T$ on $\overline{A}$. For each element $\overline{a} \in \overline{A}$ we can fix a representative $t a_0 t^{-1} \in F(A_0 \cup C)$, and we call the set of representatives $\widehat{A}$.
By construction, every element $\widehat{b}_0 \in \widehat{B}_0$ can be uniquely written as $\widehat{b}_0=t_1 a_1 t_1^{-1} \ldots t_k a_k t_k^{-1}$, so we define $\overline{B}_0 \subseteq F(\overline{A})$ to be the set of elements $\overline{t_1 a_1 t_1^{-1}} \ldots \overline{t_k a_k t_k^{-1}}$. We then define $\overline{B} \subseteq F(\overline{A})$ to be the set of all elements $\overline{t t_1 a_1 t_1^{-1} t^{-1}} \ldots \overline{ t t_k a_k t_k^{-1}t^{-1}}$, for any $t \in T$. With these definitions, it makes sense to say that the normal closure $F(\overline{B})^{F(\overline{A})}$ inside $F(\overline{A})$ is isomorphic to $F(B)^{F(A)}$ inside $F(A)$. Also notice that $F(\overline{A}) \cong F(\overline{A}^*/\sim) = \langle \overline{A}^* \mid R_\sim\rangle$, where $R_\sim$ is the set of all relations $t a_0 t^{-1} = s a_0 s^{-1}$ with $t^{-1}s \in \mathrm{Stab}_T(a_0)$. Let $w \in H$ be such that $\varphi(w)=(1,1)$. We want to prove that we can use the relations of $H$ to deduce that $w =1$ in $H$. Let $w= c_1 a_1 c_2 a_2 \ldots a_k c_{k+1}$ for $a_i \in A_0$ and $c_i \in \langle C \rangle$. We can rewrite $w$ as \[ w=(c_1 a_1 c_1^{-1}) (c_1 c_2 a_2 c_2^{-1} c_1^{-1}) \ldots (c_1 c_2 \ldots c_k a_k c_k^{-1} \ldots c_1^{-1}) c_1 c_2 \ldots c_k c_{k+1}. \] Define $t_i= c_1 \ldots c_i$. Then, up to replacing $t_i$ with another suitable $t_i' \in T$, we can assume that $\overline{t_i a_i t_i^{-1}} \in \widehat{A}$. Hence we can write \[ w=\left( t_1 a_1 t_1^{-1} \ldots t_k a_k t_k^{-1} \right) t_{k+1}. \] We apply $\varphi$ to this rewriting of $w$: \[ \begin{array}{c} (1,1) =\varphi\left(\left( t_1 a_1 t_1^{-1} \ldots t_k a_k t_k^{-1} \right) t_{k+1}\right)=\\ =\left(\prod_{i=1}^k (1,t_i)(a_i,1)(1,t_i^{-1}) \right) (1,t_{k+1})=\\ =(t_1(a_1) \ldots t_k(a_k),t_{k+1}). \end{array} \] Since $t_{k+1}=1$ inside $T$, we can use the relations of $T$ to rewrite $t_{k+1}=1$ inside $H$.
Similarly, since $t_1(a_1) \ldots t_k(a_k)=1$ inside $G$ and since the normal closure $F(\overline{B})^{F(\overline{A})}$ inside $F(\overline{A})$ is isomorphic to $F(B)^{F(A)}$ inside $F(A)$, we can use the relations of $G$ to rewrite $t_1 a_1 t_1^{-1} \ldots t_k a_k t_k^{-1}=1$ inside $H$. Therefore $w=1$ in $H$ and so $\varphi$ is injective. \noindent The map $\varphi$ is thus a group isomorphism and we are done. \end{proof} \noindent The following result does not depend on the presentations of the relevant groups and relies only on the definition of semidirect product. \begin{lemma} \label{thm:burnside-2} Let $G$ be a group and $T \le G$. Let $G \rtimes T$ be the semidirect product constructed using the action of $T$ on $G$ by conjugation inside $G$. Then \[ G \rtimes T \cong G \times T. \] \end{lemma} \begin{proof} Let $H:=G \rtimes T$ with product induced by the conjugation action, that is \[ (a,x)(b,y)=(axbx^{-1},xy). \] Consider the diagonal set $Q=\{(t^{-1},t) \mid t \in T \}$. We notice that $Q$ is a subgroup of $H$ since, for any two elements $(a^{-1},a),(b^{-1},b)$, we have \[ (a^{-1},a) \cdot (b^{-1},b)^{-1}= ((ab^{-1})^{-1},ab^{-1}). \] Moreover, it is obvious that $Q \cong T$. Since $(a,x)=(ax,1)(x^{-1},x)$, the group $H$ is generated by $G \times \{1\}$ and $Q$. We already know that $G \times \{1\}$ is normal, so it remains to be proved that $Q$ is normal too. We observe that, for any $(a,1) \in G \times \{1\}$ and $(t^{-1},t) \in Q$, we have \[ (a,1) \cdot (t^{-1},t) \cdot (a^{-1},1) = (at^{-1}t a^{-1} t^{-1} ,t)=(t^{-1},t) \in Q, \] hence $G \rtimes T \cong (G \times \{1\}) \times Q \cong G \times T$. \end{proof} \subsection{The Burnside procedure} \label{ssec:burnside-2} We are now ready to explain the Burnside procedure. We make a few additional assumptions with respect to those in Subsection \ref{ssec:burnside-1}.
We assume that: \begin{enumerate} \item the presentation $T=\langle C \mid D \rangle$ is finite, \item the number of $T$-orbits in $A$ is finite (and possibly very small, in practical applications), \item the number of $T$-orbits in $B$ is finite (and also possibly very small), \item the stabilizers $\mathrm{Stab}_T(y)$ are finitely generated, for $y \in A_0$. \end{enumerate} Let $G$ and $T$ be as defined in Lemma \ref{thm:burnside-1}, assume that $T \le G$, and let $T$ act on $G$ by conjugation. Then Lemmas \ref{thm:burnside-1} and \ref{thm:burnside-2} imply that \[ G \times T \cong \left\langle A_0,C \mid \widehat{B}_0, D, [\mathrm{Stab}_T(y),y] \text{ for } y \in A_0 \right\rangle. \] We rewrite $C$ in terms of $A_0$ and then mod out by $T$, and we also use the finite generation of $\mathrm{Stab}_T(y)$ to rewrite the stabilizer relations as conjugations. Therefore we obtain: \begin{theorem}[Burnside procedure] \label{thm:burnside-3} Let $G,T$ be the groups defined in Lemma \ref{thm:burnside-1}. Assume that \begin{enumerate} \item $T \le G$ and $T$ acts by conjugation on $G$, \item $T=\langle C \mid D \rangle$ is finitely presented, \item the number of $T$-orbits in $A$ is finite, \item the number of $T$-orbits in $B$ is finite, \item the stabilizers $\mathrm{Stab}_T(y)$ are finitely generated, for $y \in A_0$. \end{enumerate} Then there exists a finite presentation of $G$ of the type: \[ G = \left\langle \begin{array}{cc} & B_0, D, \\ A_0, C \; \; \Big \vert & cyc^{-1}=y, \text{ for } y \in A_0, c \text{ generator of } \mathrm{Stab}_T(y), \\ & \text{finitely many extra relations} \end{array} \right\rangle, \] where the extra relations are obtained in the following way: there is a relation for every element $c \in C$, and it has the form \[ c=\text{word in conjugates of elements of } A_0 \text{ by elements of }C. \] \end{theorem} \subsection{An application} \label{ssec:burnside-3} The following example is taken from \cite{kassabovetc}.
Recall the following presentation for the alternating group \[ \mathrm{Alt}(n+2)=\left\langle x_1,\ldots,x_n \mid (x_i)^3, (x_i x_j)^2, i \ne j \right\rangle, \] where $x_i$ can be realized as the $3$-cycle $(i \;\; n+1 \;\; n+2)$. Hence \[ \mathrm{Alt}(7) = \left\langle x_1,x_2,x_3,x_4,x_5 \mid (x_i)^3, (x_i x_j)^2, i \ne j \right\rangle := G. \] On the other hand, it can be shown that \[ \mathrm{Alt}(5) = \left\langle a,b \mid a^5, b^2, (ab)^3 \right\rangle := T, \] where $a$ can be realized as $(1 \; 2 \; 3 \; 4 \; 5)$ and $b=(2 \; 3)(4 \; 5)$. Let $z:=x_1=(1 \; 6 \; 7)$ and observe that $x_i=z^{a^{i-1}}$, for $i=1,\ldots,5.$ Now we check that \[ \begin{array}{l} A=\{x_1,\ldots,x_5\}, \\ A_0=\{ z\}, \\ B=\{(x_i)^3, (x_i x_j)^2, i \ne j\},\\ B_0=\{z^3, \left(z z^a \right)^2\}, \\ C=\{a,b\}, \\ D=\{a^5,b^2,(ab)^3\}, \end{array} \] satisfies the conditions of Theorem \ref{thm:burnside-3}. Noting that $\{[\mbox{Stab}_T(y),y] \text{ for } y \in A_0 \} = \{\left[z,b \right], \left[z,(ba)^a\right]\}$, we have that \[ G \times T = \left\langle a,b,z \mid a^5, b^2, (ab)^3, z^3, \left(z z^a \right)^2, \left[z,b \right], \left[z,(ba)^a\right] \right\rangle. \] We can write $a=w_1(x_1,\ldots,x_5)$ and $b=w_2(x_1,\ldots,x_5)$, for suitable words $w_1,w_2 \in F(x_1,\ldots,x_5)$, and then Theorem \ref{thm:burnside-3} yields the following finite presentation for $\mathrm{Alt}(7)$: \[ \mathrm{Alt}(7) = \left\langle a,b,z \mid B_0, D, \left[z,b \right], \left[z,(ba)^a\right], a^{-1}w_1(z,z^a,\ldots,z^{a^4}), b^{-1}w_2(z,z^a,\ldots,z^{a^4}) \right\rangle. \] \fi \end{document}
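The relations in the worked example above can be verified mechanically. The following sketch is our own illustration, not part of the original argument; it uses right actions, so that products compose left to right and $g^a = a^{-1}ga$, matching $x_i = z^{a^{i-1}}$:

```python
def mul(p, q):
    """Compose permutations of {1,...,7}: apply p first, then q."""
    return tuple(q[p[i] - 1] for i in range(7))

def inv(p):
    out = [0] * 7
    for i, v in enumerate(p):
        out[v - 1] = i + 1
    return tuple(out)

def conj(g, a):          # g^a = a^{-1} g a
    return mul(mul(inv(a), g), a)

def comm(g, h):          # [g, h] = g^{-1} h^{-1} g h
    return mul(mul(mul(inv(g), inv(h)), g), h)

def power(p, n):
    r = e
    for _ in range(n):
        r = mul(r, p)
    return r

e = (1, 2, 3, 4, 5, 6, 7)
a = (2, 3, 4, 5, 1, 6, 7)        # the 5-cycle (1 2 3 4 5)
b = (1, 3, 2, 5, 4, 6, 7)        # (2 3)(4 5)
z = (6, 2, 3, 4, 5, 7, 1)        # the 3-cycle (1 6 7)

# Relations D of T = Alt(5), the relations B_0, and the two commutators:
assert power(a, 5) == e and power(b, 2) == e and power(mul(a, b), 3) == e
assert power(z, 3) == e
assert power(mul(z, conj(z, a)), 2) == e      # (z z^a)^2
assert comm(z, b) == e                        # [z, b]
assert comm(z, conj(mul(b, a), a)) == e       # [z, (ba)^a]

# <a, b, z> is all of Alt(7): the closure has 7!/2 = 2520 elements.
seen, frontier = {e}, [e]
while frontier:
    g = frontier.pop()
    for s in (a, b, z):
        h = mul(g, s)
        if h not in seen:
            seen.add(h)
            frontier.append(h)
assert len(seen) == 2520
```

The closure computation also confirms that $a$ and $b$ can indeed be rewritten as words $w_1,w_2$ in the conjugates $z, z^a, \ldots, z^{a^4}$, since the three generators already produce the whole group.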
\begin{document} \title[Trapezoid rules]{Convergence of trapezoid rule to rough integrals} \author [Y. Liu \and Z. Selk \and S. Tindel] {Yanghui Liu \and Zachary Selk \and Samy Tindel} \address{Yanghui Liu: Department of Mathematics, Baruch College, CUNY, One Bernard Baruch Way (55 Lexington Ave. at 24th St), New York, NY 10010, United States} \address{Zachary Selk: Department of Mathematics, Purdue University, 150 N. University Street, West Lafayette, IN 47907, United States} \address{Samy Tindel: Department of Mathematics, Purdue University, 150 N. University Street, West Lafayette, IN 47907, United States} \thanks{\textit{2020 Mathematics Subject Classification}. 60G15, 60H07, 60L20.} \thanks{S. Tindel is supported by the NSF grant DMS-1952966.} \keywords{Rough paths, Weighted random sums, Limit theorems, Malliavin calculus.} \begin{abstract} Rough paths techniques give the ability to define solutions of stochastic differential equations driven by signals $X$ which are not semimartingales and whose $p$-variation is finite only for large values of $p$. In this context, rough integrals are usually Riemann-Stieltjes integrals with correction terms that are sometimes seen as unnatural. As opposed to those somewhat artificial correction terms, our endeavor in this note is to produce a trapezoid rule for rough integrals driven by general $d$-dimensional Gaussian processes. Namely we shall approximate a generic rough integral $\int y \, dX$ by Riemann sums avoiding the usual higher order correction terms, making the expression easier to work with and more natural. Our approximations apply to all controlled processes $y$ and to a wide range of Gaussian processes $X$ including fractional Brownian motion with a Hurst parameter $H>1/4$. As a corollary of the trapezoid rule, we also consider the convergence of a midpoint rule for integrals of the form $\int f(X) dX$.
\end{abstract} \maketitle { \hypersetup{linkcolor=black} \tableofcontents } \section{Introduction} Inspired by the seminal series of papers \cite{Ch, Young}, rough paths were first introduced in \cite{Lyons} in 1998 to study differential equations of the form: \begin{equation}\label{CDE} y_t=y_0+\sum_{j=1}^d \int_0^t V_j(y_s)dX_s^j , \end{equation} where $V_j$ are smooth bounded vector fields, $X:[0,T]\to\mathbb{R}^d$ is a given function with finite $p$-variation, usually a stochastic process, and $y:[0,T]\to\mathbb R^d$ is what is being solved for. Even though the final goal of the rough paths theory is to solve differential systems of the form \eqref{CDE} driven by arbitrary noisy inputs, the main step in the approach can be reduced to a proper definition of stochastic integrals like $\int_0^t V_j(y_s)dX_s^j$ above. In order to discuss this kind of integral, we consider a generic partition ${\mathcal P}=\{0=t_0<t_1<\cdots<t_{n+1}=t\}$ of $[0,t]$ and the following Riemann sum: \begin{equation}\label{eq:riemann-sum} \mathcal{R}_0^t(V_j(y),X^j):=\sum_{k=0}^{n} V_j(y_{t_k})\delta X_{t_kt_{k+1}}^j, \end{equation} where we define $\delta X_{st}:=X_t-X_s$. In a classical setting, one obviously expects $\mathcal{R}_0^t(V_j(y),X^j)$ to converge to $\int_0^t V_j(y_s)dX_s^j$ as the mesh of ${\mathcal P}$ goes to 0. Let us recall what this Riemann sum convergence becomes in a rougher and/or stochastic context. \begin{enumerate}[wide, labelwidth=!, labelindent=0pt, label={(\roman*)}] \setlength\itemsep{.1in} \item When $X$ is a semimartingale on a filtered probability space $(\Omega, \mathcal F, \mathcal{F}_t,\mu)$, we can use techniques from It\^o calculus to take limits in \eqref{eq:riemann-sum}.
For example, assuming that $X$ is an $L^{2}$-continuous martingale, we define the stochastic integral \begin{equation}\label{eq:limit-for-semimartingales} \int_0^t V_j(y_s)dX_s^j\stackrel{L^2(\Omega)}{=}\lim_{|{\mathcal P}|\to 0}\mathcal{R}_0^t(V_j(y),X^j), \end{equation} where the limit of the Riemann sums is understood in the $L^2(\Omega)$ sense. The stochastic integral \eqref{eq:limit-for-semimartingales} can be extended to all processes which are adapted to the filtration $\mathcal{F}_t$ and square integrable with respect to the bracket of $X$. \item When $X$ is not a semimartingale, but is assumed to have finite $p$-variation for $p<2$, one can rely on classical Young-Stieltjes integration to take limits in \eqref{eq:riemann-sum}. Then one invokes a result in \cite{Young} which can be summarized as follows. \begin{proposition}\label{prop:young-integration} Let $f\in C^{p\text{-var}}([0,T],\mathbb R^d)$ and $g\in C^{q\text{-var}}([0,T],\mathbb R^d)$ with $1/p+1/q>1$. Then the Riemann-Stieltjes integral exists: \begin{equation} \int_0^t f_s dg_s=\lim_{|{\mathcal P}|\to 0}\mathcal{R}_0^t(f,g) . \end{equation} \end{proposition} \noindent Proposition \ref{prop:young-integration} can be applied directly in order to analyze \eqref{eq:riemann-sum}. Specifically, since $y$ is expected to have the same $p$-variation regularity as $X$, we can make sense of \eqref{CDE} in the Young-Stieltjes sense whenever $X$ is a stochastic process with finite $p$-variation almost surely for $p<2$. In this case we define: \begin{equation} \int_0^t V_j(y_s)dX_s^j\stackrel{\textrm{a.s.}}{=}\lim_{|{\mathcal P}|\to 0}\mathcal{R}_0^t(V_j(y),X^{j}), \end{equation} that is, $\int_0^t V_j(y_s)dX_s^j$ is defined as an almost sure limit of Riemann sums. \end{enumerate} However, if we want to solve \eqref{CDE} beyond the semimartingale or the finite $p$-variation case with $p<2$, rough paths theory is the main available path-wise type method.
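The Riemann sum convergence stated in the proposition above is easy to observe numerically when the paths are regular. A minimal sketch (the paths $f_s=s$ and $g_s=s^2$ are our own illustrative choices, for which $\int_0^1 f_s\,dg_s=\int_0^1 2s^2\,ds=2/3$):

```python
def riemann_sum(f, g, n, T=1.0):
    """Left-point Riemann sum: sum_k f(t_k) * (g(t_{k+1}) - g(t_k))."""
    ts = [T * k / n for k in range(n + 1)]
    return sum(f(ts[k]) * (g(ts[k + 1]) - g(ts[k])) for k in range(n))

f = lambda s: s          # finite 1-variation
g = lambda s: s * s      # smooth, so the Young condition 1/p + 1/q > 1 holds

# int_0^1 s d(s^2) = int_0^1 2 s^2 ds = 2/3
errors = [abs(riemann_sum(f, g, n) - 2.0 / 3.0) for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]   # the error shrinks with the mesh
assert errors[2] < 1e-3
```

For genuinely rough drivers the same sums need the correction terms discussed next, which is precisely what motivates the trapezoid rule studied in this paper.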
We give a brief introduction to this method in Section \ref{sec:rough-path-above-X}, and refer to \cite{FrizBook,Lyons,HairerBook} for a more comprehensive guide. At this point we just mention that rough path theory lets us solve \eqref{CDE} for $X\in C^{p\text{-var}}$ and for arbitrary $p$, provided we can define the following stack of iterated integrals of order $n=1,2,\dots, \lfloor p\rfloor$: \begin{align}\label{eq:def-iterated-intg} &X_{st}^{1,i}=\int_s^t dX_r^i, \quad X_{st}^{2,ij}=\int_s^t\int_s^r dX_u^i dX_r^j, \notag\\ &X_{st}^{3,ijk}=\int_s^t\int_s^r\int_s^u dX_v^i dX_u^j dX_r^k, \quad \cdots \end{align} Notice that in this paper the analysis is restricted to the case $p<4$, which corresponds to needing to define only the first three integrals in \eqref{eq:def-iterated-intg}; for rougher paths, iterated integrals of higher order would have to be defined as well. As mentioned above, once the iterated integrals in~\eqref{eq:def-iterated-intg} are properly defined, we can solve~\eqref{CDE} thanks to the rough paths machinery whenever $X\in C^{p\text{-var}}$. In particular if $y$ is the solution to~\eqref{CDE}, one can define the stochastic integral $\int y\, dX$ as the following limit of modified Riemann sums: \begin{equation}\label{Rough Integral} \int_0^t y_{s} \, dX_s=\lim_{n\to\infty}\sum_{k=0}^{n} y_{t_{k}} \, X^{1}_{t_kt_{k+1}} +V (y_{t_k}) \, X^{2}_{t_kt_{k+1}}+ V'V(y_{t_k}) \, X^{3}_{t_kt_{k+1}} , \end{equation} where we have considered a 1-dimensional situation (namely $d=1$) in order to avoid cumbersome indices. Standard and relevant applications of rough paths techniques are the ability to define and solve stochastic differential equations driven by fractional Brownian motion or other processes with low regularity that are not semimartingales.
Even for equations driven by usual Brownian motions, the continuity results related to rough paths techniques bring simplifications in classical stochastic analysis results such as large deviations principles, see \cite{LD}. Other relevant applications include data analysis, see \cite{ChineseChar}, and filtering theory, see \cite{RF}. Nevertheless, in spite of the rough paths theory's numerous achievements, the definition~\eqref{Rough Integral} for an integral of $y$ with respect to $X$ is sometimes seen as somehow not natural due to the higher order ``correction'' terms. In addition, the presence of the high order iterated integrals in the right hand side of~\eqref{Rough Integral} makes the rough integral approximation difficult to implement numerically. Ideally, one would thus like to take limits on simple Riemann sums like \eqref{eq:riemann-sum}. The natural endeavor of approximating rough (or other generalized stochastic) integrals by suitable Riemann sums has been mostly carried out in the case of a 1-dimensional fractional Brownian motion $B$ and for integrals of the form $\int f(B)dB$. The contributions in this direction include \cite{BNN, GNRV, HN, HLN, NN, NR, NRS, RV}. We should also mention that the approximation of rough integrals like \eqref{Rough Integral} by trapezoid rules, for a 1-d fractional Brownian motion and a general controlled process $y$, has been considered in \cite{OneD}. Namely, the following approximation of a rough integral $\int y dB$ is proposed in \cite{OneD}: \begin{equation}\label{eq:def-trapezoid-1d} \text{tr-} \mathcal J_0^t(y ,B):=\sum_{k=0}^n \frac{ y_{t_k} + y_{t_{k+1}} }{2}\delta B_{t_k t_{k+1}}, \end{equation} where $B$ is a one-dimensional fractional Brownian motion with Hurst parameter $H\ge1/6$ and where $y$ is a process whose increments are controlled by $B$ (see Definition~\ref{def:ctrld-process} below for the notion of controlled process).
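One reason trapezoid sums of this type are considered natural: for the particular integrand $y=B$, the sum above telescopes to $(B_t^2-B_0^2)/2$ exactly, on every discrete path and every partition, in agreement with the first order chain rule of geometric rough path theory. A minimal sketch (the random walk path below is our own illustration, used only to exercise the identity):

```python
import random

def trapezoid_sum(path):
    """tr-J with y = B: sum_k (B_k + B_{k+1})/2 * (B_{k+1} - B_k)."""
    return sum((path[k] + path[k + 1]) / 2.0 * (path[k + 1] - path[k])
               for k in range(len(path) - 1))

random.seed(0)
B = [0.0]
for _ in range(500):                   # any discrete path exercises the identity
    B.append(B[-1] + random.gauss(0.0, 1.0))

lhs = trapezoid_sum(B)
rhs = (B[-1] ** 2 - B[0] ** 2) / 2.0   # exact value by telescoping
assert abs(lhs - rhs) < 1e-8
```

The mathematical content of the convergence results discussed here is of course much deeper: for a general controlled integrand $y$ no such algebraic identity is available, and probabilistic cancellations have to be exploited instead.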
For $H>1/6$ it is proven that this trapezoid rule converges to the rough integral \eqref{Rough Integral}, while in the case $H=1/6$ an additional Brownian term pops out in the limit and the convergence holds in the weak sense only. Notice that this phenomenon had already been observed in \cite{HN,NR, NRS} for integrands of the form $y_{t}=f(B_{t})$ for a sufficiently smooth function $f$. \noindent With those preliminary remarks in mind, the main aim of the current contribution is to extend the scope of trapezoid type approximations to rough integrals. Our generalizations will go in two directions, that is \emph{(i)} we shall prove the convergence of trapezoid rules for $d$-dimensional Gaussian processes and \emph{(ii)} we handle the case of a general class of Gaussian processes beyond the fractional Brownian case. Our prototype of convergence theorem is stated below in an informal way. The reader is referred to Theorem \ref{theorem:Trapezoid-Rule} for a more precise statement. \begin{theorem}\label{thm:cvgce-trapezoid-intro} Let $X$ be a centered Gaussian process on $[0,T]$ admitting a sufficiently regular covariance function $R$ in the 2-d $\rho$-variation sense. Denote the rough path lift of $X$ by $\textbf{X}=(X^1,X^2,X^3)$. Let $\mathbf{y}=(y,y^{1}, y^{2})$ be a process controlled by $\textbf{X}$ (examples of controlled processes include $y_{t}=f(X_{t})$ and solutions of SDEs driven by $\textbf{X}$). For a given partition ${\mathcal P}=\{0=t_0<t_1<\cdots<t_{n+1}=T\}$ of $[0,T]$, we define the trapezoid rule: \begin{equation}\label{eq:trap} \operatorname{tr-} \mathcal J_0^T(y,X)= \sum_{k=0}^n\frac{y_{t_k} +y_{t_{k+1}} }{2} \, X_{t_k t_{k+1}}^{1 } . \end{equation} Then as the mesh size $|{\mathcal P}|\to 0$ we have $$ \operatorname{tr-} \mathcal J_0^T(y,X)\xrightarrow[\text{a.s.}]{}\int_0^T {y}_s \, d\mathbf{X}_s , $$ where the right hand side above designates the rough integral of $y$ against $X$.
\end{theorem} \noindent As mentioned above, our main Theorem \ref{thm:cvgce-trapezoid-intro} shows that one can approximate rough integrals by very natural Riemann sums, for a wide class of integrands $y$ and Gaussian driving noises~$X$. As a corollary of our trapezoid rule, we will also prove a midpoint rule for rough integrals of the form $\int f(X) dX$; see Corollary \ref{cor.mid}. Let us say a few words about the techniques employed in our proofs. Most of the aforementioned 1-d contributions concerning convergence of trapezoid rules rely heavily on integration by parts techniques from Malliavin calculus, together with central limit theorems for random variables in a fixed chaos. The generalization described in Theorem~\ref{thm:cvgce-trapezoid-intro} requires a new set of methods. Specifically, we shall use a combination of rough paths techniques in discrete time in order to single out the main terms in~\eqref{eq:def-trapezoid-1d}. Then we can simplify the main part of the computations by performing our integration by parts on the building blocks of our rough path ${\bf x}$ only. Eventually we invoke some limit theorems for weighted sums in order to get our limit results. Here is a brief outline of our paper. In Section \ref{sec:preliminary-material} we set the ground for our computations by recalling some basic facts about rough paths analysis, Gaussian processes and Malliavin calculus. Section~\ref{sec.3} is then devoted to the trapezoid rule. Namely Section~\ref{sec:preliminary-results} gives some preliminary results about Young integrals and convergence of random sequences. Then some random sums in a finite chaos are analyzed in Section~\ref{subseq:lp-bounds}. The corresponding weighted sums are handled in Section~\ref{sec:bounds-weighted-sums}. With all those results in hand, our main theorem is proved in Section~\ref{sec:proof-main-thm}.
Eventually we give a brief list of processes to which our general result applies in Section~\ref{sec:applications}. In this article, $C$ denotes a constant which may change from line to line. In the same way, $G$ will denote a generic integrable random variable whose value may change from line to line. \section{Preliminary material}\label{sec:preliminary-material} This section contains some basic tools from Malliavin calculus and rough paths theory, as well as some analytical results which are crucial for the definition and integration of controlled processes. \subsection{Elements of rough paths}\label{sec:rough-path-above-X} In this section we shall recall the notion of a rough path above a continuous path $X$, and how this applies to Gaussian processes. The interested reader is referred to \cite{HairerBook,FrizBook} for further details. \subsubsection{Basic rough paths notions}\label{subsubsection.notions} \noindent For $s<t$ and $m\geq 1$, consider the simplex $\mathcal S_{m}([s,t])=\{(u_{1},\ldots,u_{m})\in\lbrack s,t]^{m};\,u_{1}<\cdots<u_{m}\} $. We start by introducing the notion of increments, which turns out to be handy in the definition of a rough path. \begin{definition}\label{def:increments} Let $k\ge 1$. Then the space of $(k-1)$-increments, denoted by $\mathcal C_k([0,T],\mathbb R^d)$ or simply $\mathcal C_k(\mathbb R^d)$, is defined as $$ \mathcal C_k(\mathbb R^d)\equiv \left\{ g\in C(\mathcal S_{k}([0,T]); \mathbb R^d) ;\, \lim_{t_{i} \to t_{i+1}} g_{t_1 \cdots t_{k}} = 0, \text{ for all } i\le k-1 \right\}. $$ \end{definition} \noindent In the sequel we will also often resort to a finite difference operator called $\delta$, which acts on increments and is useful to split iterated integrals into simpler pieces. \begin{definition}\label{def:delta-on-C1-C2} Let $g\in\mathcal C_1(\mathbb R^d)$, $h\in\mathcal C_2(\mathbb R^d)$.
Then for $(s,u,t)\in\mathcal S_{3}([0,T])$, we set \begin{equation*} \delta g_{st} = g_t - g_s, \quad\mbox{ and }\quad \delta h_{sut} = h_{st}-h_{su}-h_{ut}. \end{equation*} \end{definition} \noindent In order to define rough integrals, some minimal regularity assumptions on increments will have to be made. In particular, it will be convenient to measure the regularity of increments in $\mathcal C_{1}$ and $\mathcal C_{2}$ in terms of $p$-variation. \begin{definition}\label{def:var-norms-on-C2} For $f \in \mathcal C_2(\mathbb R^d)$, $p>1$ we set $$ \|f\|_{p\text{-var}} = \|f\|_{p\text{-var}; [0,T]} = \sup_{{\mathcal P} \subset [0,T]}\left(\sum_{i} |f_{t_{i}t_{i+1}}|^p\right)^{1/p}, $$ where the supremum is taken over all subdivisions ${\mathcal P}$ of $[0,T]$. The set of increments in $\mathcal C_{2}(\mathbb R^d)$ with finite $p$-variation is denoted by $\mathcal C_{2}^{p\text{-var}}(\mathbb R^d)$. For $f \in \mathcal C_{1}(\mathbb R^{d})$, we denote $\|f\|_{p\text{-var}} = \| \delta f\|_{p\text{-var}} $. \end{definition} We will also make extensive use of H\"older norms, whose definition is recalled below: \begin{definition}\label{def.2.4} We denote by $\mathcal C_{2}^{\gamma}(\mathbb R^d)$ the space of $\gamma$-H\"older increments on $[0,T]$. That is, \begin{equation} \mathcal C_{2}^{\gamma}(\mathbb R^d)=\left\{f\in \mathcal C_{2} (\mathbb R^d) : \sup_{s,t\in [0,T]} \frac{| f_{st}|}{|t-s|^{\gamma}}<\infty\right\}. \end{equation} We define $\mathcal C_{1}^\gamma(\mathbb R^d)$ to be the space of functions $f$ such that $\delta f \in \mathcal C_{2}^\gamma(\mathbb R^d)$. \end{definition} \noindent With these preliminary definitions in hand, we now define the notion of a rough path above a continuous $p$-variation path $x$ with $p>1$. \begin{definition}\label{def:RP} Let $x$ be a continuous $\mathbb R^d$-valued $p$-variation path for some $p>1$.
We say that $x$ gives rise to a geometric $p$-rough path if for each $n\le \lfloor p \rfloor$ there exists a continuous path $(x^{n}_{st}, \, (s,t)\in\mathcal S_{2}([0,T]))$ with values in $(\mathbb R^{d})^{\times n}$ such that $x^{1}_{st}=\delta x_{st}$, as well as a control function $\omega_{x}$ (in the sequel, a control will always stand for a function of two variables on $\mathcal S_{2}([0,T])$ which satisfies a super-additivity condition), such that:

\noindent \emph{(1) Regularity:} For all $n\le \lfloor p \rfloor$, $x^n$ satisfies $|x^{n}_{st}| \leq \omega_{x} (s,t)^{n/p}$.

\noindent \emph{(2) Multiplicativity:} With $\delta x^n$ as in Definition \ref{def:delta-on-C1-C2}, we have
\begin{equation}\label{eq:multiplicativity}
\delta x^{n}_{sut}=\sum_{n_1=1}^{n-1} x^{n_{1}}_{su} \times x^{n-n_{1}}_{ut} .
\end{equation}

\noindent \emph{(3) Geometricity:} Let $x^{\varepsilon}$ be a sequence of piecewise linear approximations of $x$. For any $n\le \lfloor p \rfloor$ we assume that $x^{\varepsilon,n}$ converges in $\frac{p}{n}$-variation norm to $x^{n}$, where $x^{\varepsilon,n}_{st}$ is defined for $(s,t)\in\mathcal S_{2}([0,T])$ by
\begin{equation}\label{eq:geometricity}
x^{\varepsilon,n}_{st} =\int_{(u_{1},\ldots,u_{n})\in\mathcal S_{n}([s,t])} dx_{u_{1}}^{\varepsilon} \times \cdots\times dx_{u_{n}}^{\varepsilon}.
\end{equation}

\noindent In the sequel we will write ${\bf x}$ for the rough path above $x$, that is
\begin{equation*}
{\bf x}_{st}= \left( x^{1}_{st}, \dots, x^{\lfloor p\rfloor}_{st} \right), \quad (s,t)\in\mathcal S_{2}([0,T]) .
\end{equation*}
\end{definition}

\noindent One of the key success factors of rough paths theory is its ability to give a proper meaning to stochastic calculus in very general contexts. Within this framework, the generic integrands in stochastic type integrals are so-called controlled paths, whose definition is recalled below.
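For a smooth scalar path the levels $x^n$ are ordinary iterated integrals, and the multiplicativity relation \eqref{eq:multiplicativity} can be checked by hand. The following sketch (our own illustration, with the arbitrary choice $x_t=t^2$) verifies the second-level Chen relation numerically:

```python
# Sanity check of the multiplicativity (Chen) relation for a smooth
# scalar path x_t = t**2, where the iterated integrals are explicit:
# x1_{st} = x_t - x_s and x2_{st} = (x_t - x_s)**2 / 2.
def x(t):
    return t ** 2

def x1(s, t):
    return x(t) - x(s)

def x2(s, t):
    # second-level iterated integral of a smooth one-dimensional path
    return 0.5 * (x(t) - x(s)) ** 2

def chen_defect(s, u, t):
    # delta x2_{sut} - x1_{su} * x1_{ut}, which should vanish
    return (x2(s, t) - x2(s, u) - x2(u, t)) - x1(s, u) * x1(u, t)

assert abs(chen_defect(0.0, 0.3, 1.0)) < 1e-12
```

The cancellation here is just the identity $\frac12(a+b)^2-\frac12 a^2-\frac12 b^2=ab$ with $a=x^1_{su}$ and $b=x^1_{ut}$.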
We first introduce some notation for matrix products.

\begin{notation}\label{not:tensor-products}
In order to avoid lengthy indices in our formulae throughout the paper, we will adopt the following convention for matrix products: two generic elements $v \in (\mathbb R^{d})^{\times k}$ and $u\in \mathcal L( (\mathbb R^{d})^{\times k}, \mathbb R^{m})$ will stand for families
\begin{eqnarray*}
v &=& \left\{ v^{ j_{1}\cdots j_{k} };\,\, j_{1}, \dots, j_{k} \in \{1,\dots, d\} \right\} \\
u &=& \left\{ u^{i j_{1}\cdots j_{k} };\,\, i \in \{1,\dots, m\},\,\, j_{1}, \dots, j_{k} \in \{1,\dots, d\} \right\} ,
\end{eqnarray*}
where $ v^{ j_{1}\cdots j_{k} } $ and $ u^{i j_{1}\cdots j_{k} } $ are real numbers. In this context the product $uv$ is defined as an element of $\mathbb R^{m}$ such that for $1\le i\le m$ we have
\begin{align*}
(uv)^{i} := \sum_{j_{1}, \dots, j_{k}=1}^{d} u^{i j_{1}\cdots j_{k} } \times v^{ j_{1}\cdots j_{k} }.
\end{align*}
Similarly, for $1\le k' \le k$ and $w \in (\mathbb R^{d})^{\times k'}$, we define $uw$ as an element of $\mathcal L ( (\mathbb R^{d})^{\times (k-k')}, \mathbb R^{m})$ such that for $1\le i\le m$ and $1\le j_{k'+1},\ldots,j_{k}\le d$ we have
\begin{align*}
(uw)^{i j_{k'+1}\cdots j_{k}} := \sum_{j_{1}, \dots, j_{k'}=1}^{d} u^{i j_{1}\cdots j_{k'}j_{k'+1}\cdots j_{k} }\times w^{ j_{1}\cdots j_{k'} }.
\end{align*}
\end{notation}

We can now state the definition of a controlled process in the $p$-variation framework.

\begin{definition}\label{def:ctrld-process}
Let ${\bf x}= (x^{1}, \dots, x^{\lfloor p \rfloor})$ be a $p$-variation rough path as introduced in Definition~\ref{def:RP}.
Let $y^0,\ldots,y^{\ell-1}$ be continuous processes $y^k:[0,T]\to\mathcal L ((\mathbb R^d)^{\times k},\mathbb R^m)$, and define the remainder terms
\begin{equation}\label{eq:remainder-k}
r_{st}^{k}=\delta y_{st}^{k} -y_s^{k+1}x_{st}^{1}-\cdots -y_s^{\ell-1}x_{st}^{\ell-k-1}
\end{equation}
and $r_{st}^{\ell-1}=\delta y_{st}^{\ell-1}$, where we recall the notation $\delta$ given in Definition~\ref{def:delta-on-C1-C2} and our Notation~\ref{not:tensor-products} on matrix products. If there is a control function $\omega_{y}$ such that
\begin{align*}
|\delta y^{k}_{st}| \leq \omega_{y}(s,t)^{1/p} \qquad \text{and} \qquad |r_{st}^k| \leq \omega_{y}(s,t)^{(\ell-k)/p}
\end{align*}
for all $k=0,1,\dots, \ell-1$, then we say that ${\bf y}=(y^0,\ldots ,y^{\ell-1})$ is an $\mathbb R^{m}$-valued path of order $\ell$ controlled by ${\bf x}$.
\end{definition}

\begin{remark}
A controlled path should be seen as a continuous path whose increments are ``dominated'' by the increments of $x$. Namely, $y^0$ is a continuous path taking values in $\mathbb{R}^m$ whose increments $\delta y_{st}^0$, as given in Definition~\ref{def:delta-on-C1-C2}, can be written as
\begin{equation}\label{eq:def-controlled-process}
\delta y_{st}^{0}=y_s^{1}x_{st}^{1}+\cdots +y_s^{\ell-1}x_{st}^{\ell-1}+r_{st}^{0} .
\end{equation}
The other relations in Definition \ref{def:ctrld-process} are imposed for algebraic consistency.
\end{remark}

\begin{remark}
In this paper, our main integration results concern controlled paths. Hence it is worth recalling that this class of processes is rich enough. It includes for instance solutions of differential equations driven by $x$ such as \eqref{CDE}, as well as continuous paths of the form $g(x)$ for a smooth enough function $g$.
\end{remark}

\begin{remark}\label{remark.holder}
We will also use $\gamma$-H\"older versions of Definitions \ref{def:RP} and \ref{def:ctrld-process} in our discussion. We omit these definitions for the sake of conciseness. As an example, let us just mention that in a $\gamma$-H\"older version of \eqref{eq:remainder-k} we would assume $y^{k}\in\mathcal C_{1}^{\gamma}$ and $r^{k}\in\mathcal C_{2}^{(\ell-k)\gamma}$.
\end{remark}

The following proposition contains the classical result about integration of controlled processes with respect to a rough path, together with an approximation of the integral by enriched Riemann type sums.

\begin{proposition}\label{prop:intg-ctrld-process}
Let ${\bf x}$ be a continuous $p$-variation rough path on $[0,T]$ and let ${\bf y}$ be an $\mathbb R^{d}$-valued path of order $\ell=\lfloor p \rfloor$ controlled by ${\bf x}$ as introduced in Definition \ref{def:ctrld-process}. Consider a sequence of partitions ${\mathcal P}=\{0=t_0<\cdots<t_{n+1}=T\}$ of $[0,T]$ with mesh size $|{\mathcal P}|\to 0$. Then the following limit:
\begin{equation}\label{eqn.ri}
\lim_{|{\mathcal P}|\to 0} \sum_{k=0}^n y_{t_k}^{0}x_{t_k t_{k+1}}^{1}+y_{t_k}^{1} x_{t_k t_{k+1}}^{2}+\cdots+y_{t_k}^{\ell-1}x_{t_k t_{k+1}}^{\ell}
\end{equation}
exists almost surely. It is called the rough integral of $y$ with respect to ${\bf x}$ and is denoted by $\int_0^T {y}_s \, d{\bf x}_s$.
\end{proposition}

\noindent One of the crucial ingredients in rough paths theory is the sewing lemma for integration. We record two discrete versions of this lemma, taken from \cite{Euler}, for further use. In the following we denote $\mathcal S_{m} (\llbracket s,t\rrbracket) = \{(u_{1},\ldots,u_{m})\in\llbracket s,t\rrbracket^{m};\,u_{1}<\cdots<u_{m}\}$, where $\llbracket s,t\rrbracket$ denotes the discrete interval related to a given partition of $[s,t]$ (see our forthcoming Notation~\ref{not:discrete-interval}).
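The role of the higher-order compensation terms in \eqref{eqn.ri} is already visible on a smooth path, where every object is explicit. The sketch below (our own illustration) integrates $y^0_t=x_t$ against the path $x_t=t$, for which $x^1_{st}=t-s$, $x^2_{st}=(t-s)^2/2$ and $y^1\equiv 1$: the second-order sum reproduces $\int_0^1 t\,dt=1/2$ exactly, even on a coarse partition, while the plain first-order Riemann sum misses by an amount of order $1/n$.

```python
# Second-order compensated Riemann sums, as in the rough-integral
# approximation, tested on the smooth path x_t = t with y^0_t = x_t.
# Here x1_{st} = t - s, x2_{st} = (t - s)**2 / 2 and y^1 = 1.
n = 4  # deliberately coarse partition of [0, 1]
grid = [k / n for k in range(n + 1)]

riemann = sum(grid[k] * (grid[k + 1] - grid[k]) for k in range(n))
compensated = sum(
    grid[k] * (grid[k + 1] - grid[k])          # y^0_{t_k} x^1_{t_k t_{k+1}}
    + 1.0 * (grid[k + 1] - grid[k]) ** 2 / 2   # y^1_{t_k} x^2_{t_k t_{k+1}}
    for k in range(n)
)

# The compensated sum is exact: t_k*h + h^2/2 = (t_{k+1}^2 - t_k^2)/2.
assert abs(compensated - 0.5) < 1e-12
assert abs(riemann - 0.5) > 0.1  # the plain sum misses by O(1/n)
```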
For notational sake, we just write $\mathcal S_{m}$ for $\mathcal S_{m}(\llbracket 0, T\rrbracket)$.

\begin{lemma}\label{lem2.4}
Consider a Banach space $(\mathcal B,\|\cdot\|)$ and $Q : \mathcal S_{2} \to \mathcal B$. Recall that we set $\delta Q_{sut} = Q_{st}-Q_{su}-Q_{ut}$. Suppose that $\omega$ is a control on $\llbracket 0, T\rrbracket$. Moreover, assume that $Q_{t_{k}t_{k+1}}=0$ for all $t_{k}\in \llbracket 0,T\rrbracket$ and that
\begin{align}\label{eqn.dQ}
\|\delta Q_{sut}\|\leq \omega (s, t)^{\mu}
\end{align}
for some $\mu>1$ and all $(s, u, t)\in \mathcal S_{3}$. Then the following relation holds for all $(s,t)\in \mathcal S_{2}$:
\begin{eqnarray*}
\|Q_{st}\| \leq K_{\mu} \, \omega(s, t)^{\mu} \,, \quad\text{where}\quad K_{\mu} = 2^{\mu} \, \sum_{l=1}^{\infty} l^{-\mu}.
\end{eqnarray*}
\end{lemma}

The following lemma is a particular case of Lemma \ref{lem2.4} with $\omega(s,t)=|t-s|$, which can be seen as the sewing lemma in H\"older norm.

\begin{lemma}\label{prop:sewing}
Fix a constant $\mu>1$. Let $Q$ be as in Lemma \ref{lem2.4}, and assume further that there exists a constant $C>0$ such that
\begin{align*}
|\delta Q_{sut}|\leq C \cdot |t-s|^{\mu} , \qquad\text{for all } (s,u,t)\in \mathcal S_{3}.
\end{align*}
Then for all $(s,t)\in \mathcal S_{2}$ we have
\begin{equation}
|Q_{st}| \leq CK_\mu |t-s|^{\mu} .
\end{equation}
\end{lemma}

\subsubsection{Gaussian processes as rough paths}

\noindent Let us now turn to a more probabilistic setting for our computations. Namely, we assume that $X_t=(X_t^1,\ldots,X_t^d)$ is a continuous, centered Gaussian process with i.i.d.\ components, defined on a complete probability space $(\Omega, \mathcal F, \mathbf{P})$. The covariance function of $X$ is defined by
\begin{equation}\label{eq:def-covariance-X}
R(s,t):=E\left[ X_{s}^{j} X_{t}^{j}\right],
\end{equation}
where $X^{j}$ is any of the components of $X$.
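As a concrete example (fractional Brownian motion, used here purely as an illustration; the results below hold for more general $X$), the variance of an increment $E[(\delta X_{st})^2]$ can be read off the covariance through $R(t,t)-2R(s,t)+R(s,s)$. The following sketch checks this identity for the fBm covariance $R(s,t)=\frac12(s^{2H}+t^{2H}-|t-s|^{2H})$, for which the increment variance is $|t-s|^{2H}$:

```python
# Increment variance from a covariance function, illustrated on
# fractional Brownian motion with Hurst parameter H (an example
# process, not an assumption of the paper).
H = 0.4

def R(s, t):
    # fBm covariance E[X_s X_t]
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

def increment_variance(s, t):
    # E[(X_t - X_s)^2] expanded via the covariance
    return R(t, t) - 2 * R(s, t) + R(s, s)

s, t = 0.3, 0.9
assert abs(increment_variance(s, t) - abs(t - s) ** (2 * H)) < 1e-12
```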
We shall also resort to the following notation in the sequel:
\begin{equation}\label{eq:def-variance-Xt}
\sigma_{t}^{2} := E\left[ \left( X_{t}^{j} \right)^{2} \right], \quad\text{and}\quad \sigma_{st}^{2} := E\left[ \left( \delta X_{st}^{j} \right)^{2} \right] .
\end{equation}
The information concerning $X$ used below is mostly encoded in the rectangular increments of the covariance function $R$, which are given for $s,t,u,v\in [0,T]$ by
\begin{align}\label{eq:rect-increment-cov-fct}
R^{st}_{uv} := R(t,v)-R(t,u)-R(s, v)+R(s,u).
\end{align}
Notice that whenever the function $R$ is given as a covariance function as in \eqref{eq:def-covariance-X}, the rectangular increments of $R$ can also be written as
\begin{equation}\label{eqn.rcov}
R^{st}_{uv} = E\left[ (X_t^j-X_s^j) \, (X_v^j-X_u^j) \right].
\end{equation}
Related to rectangular 2-d increments, the notion of 2-dimensional $\rho$-variation leads to an efficient way of constructing rough paths above a Gaussian process $X$. It will also feature prominently in our considerations, and thus we record its definition for further use.

\begin{definition}\label{def:mixed-variation}
For a general continuous function $R:[0,T]^{2}\to\mathbb R$ and a parameter $\rho\geq1$, we set
\begin{align}
\|R\|_{\rho\text{-var};[s,t]\times[u,v]} :=\sup_{\substack{(t_{i})\in\mathcal{P}([s,t])\\ (t_{j}^{\prime})\in\mathcal{P}\left(\left[u,v\right]\right) } } \left(\sum_{i,j } \left|R^{t_{i}t_{i+1}}_{t_{j}^{\prime}t_{j+1}^{\prime}}\right|^{\rho}\right)^{\frac{1}{\rho}},\label{eq:mixed_var}
\end{align}
where $\mathcal{P}([s,t])$ denotes the set of all partitions of $[s,t]$ and where $R^{t_{i}t_{i+1}}_{t_{j}^{\prime}t_{j+1}^{\prime}}$ is defined in \eqref{eq:rect-increment-cov-fct}.
\end{definition}

\noindent We also define the space of functions in the plane with finite 2-d $\rho$-variation:

\begin{definition}\label{2d-rho-var-space}
Given a finite dimensional vector space $E$, we define $C^{\rho\text{-var}}([0,T]^2,E)$ to be the space of all functions $f:[0,T]^2\to E$ such that $\|f\|_{\rho\text{-var}}<\infty$.
\end{definition}

The standard assumption allowing one to build a rough path above a generic Gaussian process concerns the $\rho$-variation of its covariance function. This is why we assume that the following hypothesis holds throughout the paper.

\begin{hyp}\label{hyp.x}
We assume that $X$ is a centered continuous Gaussian process with covariance function $R$ such that $\|R\|_{\rho\text{-var}}<\infty$ for some $\rho\in [1,2)$.
\end{hyp}

\begin{remark}\label{remark.rho}
Note that the $\rho$-variation norm $\|\cdot\|_{\rho\text{-var}}$ introduced in Definition \ref{def:mixed-variation} uses grid-like partitions. As pointed out in \cite{FV}, those $\rho$-variations do not enjoy super-additivity properties. A standard way to circumvent this problem is to replace the $\rho$-variation of Definition \ref{def:mixed-variation} by the so-called controlled 2-d $\rho$-variation norm (see \cite[Definition 1]{FV}), which we denote by~$|\cdot|_{\rho\text{-var}}$. The norm $|\cdot|_{\rho\text{-var}}$ does satisfy super-additivity properties (cf.\ \cite[Theorem 1 (iii)]{FV}). Therefore, although we have not assumed $|R|_{\rho\text{-var}}<\infty$ in Hypothesis \ref{hyp.x}, we can consider $\rho' = \rho+\varepsilon>\rho$ for $\varepsilon$ arbitrarily small such that $|R|_{\rho'\text{-var}}<\infty$ (this is ensured by \cite[relation (1.2)]{FV}). Then one is allowed to pick the control $\omega_{R}(D)=|R|_{\rho'\text{-var}}^{\rho'}(D)$. This kind of manipulation is routinely performed in e.g.\ \cite{JM,GOT}.
\end{remark}

With Remark \ref{remark.rho} in mind, and for notational sake, throughout the section we will skip the replacement of $\rho$ by $\rho'$ and pretend that the following is a consequence of Hypothesis~\ref{hyp.x}.

\begin{hyp}\label{hyp.w}
We assume that $X$ is a centered continuous Gaussian process with covariance function $R$, that $\rho$ is a parameter lying in $[1,2)$, and that $\omega_{R}$ is a 2-d control such that for any rectangle $D$ we have
\begin{align*}
\|R\|_{\rho\text{-var}, D}\leq \omega_{R}(D)^{1/\rho}.
\end{align*}
Recall that we say that $\omega_{R}$ is a 2-d control if it is continuous, zero on degenerate rectangles and satisfies $\omega_{R} (D) \geq \sum_{i=1}^n \omega_{R} (D_i)$ whenever $D_1,\dots,D_n$ are disjoint rectangles such that $D=\cup_{i} D_{i}$.
\end{hyp}

The following result (stated e.g.\ in \cite[Theorem 15.33]{FV}) relates the 2-d $\rho$-variation of $R$ with the pathwise assumptions allowing one to apply the abstract rough paths theory.

\begin{proposition}\label{prop:Gaussian-rough-path}
Let $X=(X^1,\ldots,X^d)$ be a continuous centered Gaussian process with i.i.d.\ components and covariance function $R$ defined by \eqref{eq:def-covariance-X}. If $R$ satisfies Hypothesis \ref{hyp.x}, then $X$ gives rise to a geometric $p$-rough path according to Definition~\ref{def:RP}, provided $p>2\rho$.
\end{proposition}

\subsection{Wiener spaces associated to general Gaussian processes}\label{sec:wiener-space-general}

In this section we consider again the continuous, centered Gaussian process $X$ of Section~\ref{sec:rough-path-above-X}. Recall that its covariance function $R$ is defined by \eqref{eq:def-covariance-X}. We will describe the Cameron-Martin space assuming that we are in a real-valued situation, the generalization to an $\mathbb R^{d}$-valued process being left to the patient reader.
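As a basic example of a covariance satisfying Hypothesis \ref{hyp.x} (standard Brownian motion, used here purely as an illustration), $R(s,t)=s\wedge t$ has finite 2-d $1$-variation: by \eqref{eqn.rcov}, the rectangular increment over $[s,t]\times[u,v]$ equals the length of $[s,t]\cap[u,v]$, so on a grid-like partition only the diagonal blocks contribute and the sum telescopes to $T$. A sketch:

```python
# Rectangular increments of the Brownian covariance R(s,t) = min(s,t)
# over a uniform grid of [0, T]: off-diagonal blocks vanish (independent
# increments) and the rho = 1 sum over the grid equals T.
T, n = 1.0, 8
grid = [T * k / n for k in range(n + 1)]

def R(s, t):
    return min(s, t)

def rect(s, t, u, v):
    # rectangular increment R^{st}_{uv} of the covariance
    return R(t, v) - R(t, u) - R(s, v) + R(s, u)

total = sum(
    abs(rect(grid[i], grid[i + 1], grid[j], grid[j + 1]))
    for i in range(n) for j in range(n)
)
assert abs(total - T) < 1e-12
# a block off the diagonal gives zero: disjoint Brownian increments
assert abs(rect(grid[0], grid[1], grid[2], grid[3])) < 1e-12
```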
The analysis of iterated integrals performed in Section \ref{sec.3} will be based on a Hilbert space $\mathcal H$ allowing a proper definition of Wiener integrals, as defined e.g.\ in~\cite{NuaBook}. Namely, ${\mathcal{H}}$ is defined to be the completion of the linear space of functions of the form
\[
\mathcal{E} = \left\{ \sum_{i=1}^{n}a_{i} {\bf 1}_{\left[ 0,t_{i}\right] }:a_{i}\in \mathbb{R} \text{, }t_{i}\in\left[ 0,T\right] \right\} ,
\]
with respect to the inner product
\begin{equation}\label{eq:def-inner-pdt-H}
\left\langle \sum_{i=1}^{n} a_{i} {\bf 1}_{[0,t_{i}]} , \sum_{j=1}^{m}b_{j} {\bf 1}_{[0,s_{j}]} \right\rangle _{\mathcal{H}} = \sum_{i=1}^{n}\sum_{j=1}^{m}a_{i}b_{j}R\left( t_{i},s_{j}\right) .
\end{equation}

\begin{remark}\label{representation H norm}
Consider the special case $X_0=0$, which means in particular that $R(0,0)=0$. Then, as suggested by \eqref{eq:def-inner-pdt-H}, for any $h_1, h_2\in\mathcal{H}$ we can infer that
\begin{align}\label{rep H norm}
\langle h_1, h_2\rangle_\mathcal{H}=\int_0^T\int_0^T h_1(s)h_2(t)\,dR(s,t),
\end{align}
whenever the 2-d Young integral on the right-hand side is well-defined (one can refer e.g.\ to~\cite{FrizBook} for more details).
\end{remark}

Since $\mathcal{H}$ is the completion of $\mathcal E$ with respect to the inner product defined by~\eqref{eq:def-inner-pdt-H}, it is isometric to the Hilbert space $H^{1}( X) \subseteq L^{2}( \Omega,\mathcal{F},\mathbf{P})$, which is defined to be the $\vert \cdot\vert _{L^{2}(\Omega) }$-closure of the set
\[
\left\{ \sum\nolimits_{i=1}^{n}a_{i}X_{t_{i}}:a_{i}\in \mathbb{R} ,\text{ }t_{i}\in\left[ 0,T\right] ,\text{ }n\in \mathbb{N} \right\} .
\]
In particular, we have $\vert {\bf 1}_{\left[ 0,t\right] }\vert _{\mathcal{H}}=\vert X_{t}\vert _{L^{2}\left( \Omega\right)}$. The image of $h\in\mathcal H$ under the isometry between $\mathcal H$ and $H^{1}\left( X\right)$ is denoted by $X(h)$, and is called the Wiener integral of $h$.
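The inner product \eqref{eq:def-inner-pdt-H} is completely explicit on the step functions of $\mathcal E$. The following sketch (our own illustration, with the Brownian covariance $R(s,t)=s\wedge t$) evaluates it for two simple elements and recovers the orthogonality of disjoint Brownian increments, since $\mathbf 1_{[0,1]}-\mathbf 1_{[0,1/2]}$ represents the increment $X_1-X_{1/2}$:

```python
# Inner product on the space H, computed from the covariance via
# <sum a_i 1_[0,t_i], sum b_j 1_[0,s_j]>_H = sum_{i,j} a_i b_j R(t_i, s_j),
# illustrated with the Brownian covariance R(s,t) = min(s,t).
def R(s, t):
    return min(s, t)

def inner(h1, h2):
    # h1, h2 are lists of (coefficient a_i, endpoint t_i) pairs
    return sum(a * b * R(t, s) for a, t in h1 for b, s in h2)

increment = [(1.0, 1.0), (-1.0, 0.5)]   # represents X_1 - X_{1/2}
early = [(1.0, 0.5)]                    # represents X_{1/2}

# E[(X_1 - X_{1/2}) X_{1/2}] = 0: disjoint Brownian increments
assert abs(inner(increment, early)) < 1e-12
# |X_1 - X_{1/2}|^2 in L^2 equals the interval length 1/2
assert abs(inner(increment, increment) - 0.5) < 1e-12
```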
\begin{remark}\label{rmk:H-on-subinterval}
As mentioned above in \eqref{eq:def-inner-pdt-H}, the space $\mathcal H$ is a closure of linear combinations of indicator functions. Hence it can be defined on any interval $[a,b]\subset[0,T]$, and we denote by $\mathcal H([a,b])$ this restriction. For $[a,b]\subset[0,T]$, one can then check the following identity by a limiting procedure on simple functions:
\begin{equation}\label{eq:norm-H-as-2d-young}
\left\langle f \, {\bf 1}_{[a,b]} , \, g \, {\bf 1}_{[a,b]} \right\rangle_{\mathcal H} = \left\langle f , \, g \right\rangle_{\mathcal H([a,b])}.
\end{equation}
\end{remark}

\subsection{Malliavin calculus for Gaussian processes}

In this section we review some basic aspects of Malliavin calculus. The reader is referred to~\cite{NuaBook} for further details. As in Section~\ref{sec:wiener-space-general}, the family $X_t=(X_t^1,\ldots,X_t^d)$ designates a continuous, centered Gaussian process with i.i.d.\ components, defined on a complete probability space $(\Omega, \mathcal F, \mathbf{P})$. For the sake of simplicity, we assume that $\mathcal F$ is generated by $\{X_{t}; \, t\in[0,T]\}$.

An $\mathcal{F}$-measurable real-valued random variable $F$ is said to be cylindrical if it can be written, for some $m\ge 1$, as
\begin{equation*}
F=f\left( X_{t_1},\ldots,X_{t_m}\right), \quad\mbox{for}\quad 0\le t_1<\cdots<t_m \le T,
\end{equation*}
where $f:\mathbb{R}^m \rightarrow \mathbb{R}$ is a $C_b^{\infty}$ function. The set of cylindrical random variables is denoted by~${S}$.

\noindent The Malliavin derivative is defined as follows: for $F \in {S}$, the derivative of $F$ in the direction $h\in\mathcal H$ is given by
\[
\mathbf{D}_h F=\sum_{i=1}^{m} \frac{\partial f}{\partial x_i} \left( X_{t_1},\ldots,X_{t_m} \right) \, \langle h, \mathbf{1}_{[0,t_i]}\rangle_{\mathcal H}.
\]
More generally, we can introduce iterated derivatives.
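The formula for $\mathbf{D}_h F$ is a directional derivative: shifting the sample $(X_{t_1},\ldots,X_{t_m})$ in the direction $(\langle h,\mathbf 1_{[0,t_i]}\rangle_{\mathcal H})_i$ and differentiating $f$ along that shift gives the same number. The sketch below (our own illustration, with the Brownian covariance $R(s,t)=s\wedge t$, a frozen sample, and an arbitrary test function $f(x_1,x_2)=x_1x_2$) compares the closed-form derivative with a finite difference:

```python
# Malliavin derivative of a cylindrical F = f(X_{t1}, X_{t2}): the formula
# D_h F = sum_i (df/dx_i) <h, 1_[0,t_i]>_H agrees with the derivative of f
# along the corresponding shift of the sample. Brownian covariance used
# purely as an example.
def R(s, t):
    return min(s, t)

t1, t2 = 0.5, 1.0
h = [(2.0, 0.75)]  # h = 2 * 1_[0, 0.75] as a (coefficient, endpoint) list
pair = lambda t: sum(a * R(u, t) for a, u in h)   # <h, 1_[0,t]>_H

def f(x1, x2):
    return x1 * x2

x1, x2 = 0.3, -1.2  # a frozen sample of (X_{t1}, X_{t2})

# closed-form directional derivative: x2 * <h,1_[0,t1]> + x1 * <h,1_[0,t2]>
d_formula = x2 * pair(t1) + x1 * pair(t2)

# finite-difference derivative along the shifted sample
eps = 1e-6
d_numeric = (f(x1 + eps * pair(t1), x2 + eps * pair(t2)) - f(x1, x2)) / eps
assert abs(d_formula - d_numeric) < 1e-4
```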
Namely, if $F \in {S}$, we set
\[
\mathbf{D}^k_{h_1,\ldots,h_k} F = \mathbf{D}_{h_1} \cdots\mathbf{D}_{h_k} F.
\]
For any $p \geq 1$, it can be checked that the operator $\mathbf{D}^k$ is closable from ${S}$ into $\mathbf{L}^p(\Omega;\mathcal H^{\times k})$. We denote by $\mathbb{D}^{k,p}(\mathcal H)$ the closure of the class of cylindrical random variables with respect to the norm
\[
\left\| F\right\| _{k,p}=\left( {E}\left[|F|^{p}\right] +\sum_{j=1}^k {E}\left[ \left\| \mathbf{D}^j F\right\| _{\mathcal H^{\times j}}^{p}\right] \right) ^{\frac{1}{p}},
\]
and we also set $\mathbb{D}^{\infty}(\mathcal H)=\cap_{p \geq 1} \cap_{k\geq 1} \mathbb{D}^{k,p}(\mathcal H)$.

The divergence operator $\delta^{\diamond}$ is then defined to be the adjoint operator of $\mathbf{D}$. Namely, for a process $u = \{u_{t}; t\in [0,T]\}$ in the domain of $\delta^{\diamond}$ we have
\begin{align}\label{eqn.ibp}
E[ \delta^{\diamond}(u) F ] = E[ \langle \mathbf{D}F, u\rangle_{\mathcal H} ],
\end{align}
for all $F\in \mathbb{D}^{1,2}$. Notice that if $u \in \mathbb{D}^{1,2} (\mathcal H)$, then we also have $u\in \mathrm{Dom}( \delta^{\diamond})$. A typical elementary increment which can be represented thanks to the divergence operator is the following: for $(s,t)\in\mathcal S_{2}([0,T])$ and $1\le i \le d$ we have
\begin{align}\label{eqn.dx}
\delta X^{i}_{st} = \delta^{\diamond} ( \mathbf{1}_{[s,t]} e_{i} ),
\end{align}
where $e_{i}$ denotes the $i$-th element of the canonical basis of $\mathbb R^{d}$.

We close this section by recalling the following result on Hermite polynomials.

\begin{proposition}\label{Fin-Dim-Gaussian}
Let $X,Y$ be jointly normal random variables with $X,Y\sim \mathcal N(0,1)$, and denote by $H_n$ the $n$-th Hermite polynomial.
Then the following holds true:
\begin{equation}
E[H_n(X) H_m(Y)]=\begin{cases} 0&\text{ if } n\neq m\\ \frac{1}{n!}\left( E\left[ XY \right]\right)^{n}&\text{ if } n=m . \end{cases}
\end{equation}
\end{proposition}

\section{The trapezoid rule}\label{sec.3}

This section is devoted to a complete statement and proof of the informal Theorem~\ref{thm:cvgce-trapezoid-intro}. We first analyze some discrete sums in a finite chaos related to our rough path $\mathbf{X}$ in Section~\ref{subseq:lp-bounds}, then move to some useful weighted sums in Section~\ref{sec:bounds-weighted-sums}. Finally, the main part of our proof is carried out in Section~\ref{sec:proof-main-thm}.

Throughout the section we consider a general centered Gaussian process $X$ which satisfies Hypothesis \ref{hyp.w}. In particular, the covariance function $R$ is defined by \eqref{eq:def-covariance-X} and the variance of the increments $\delta X_{st}^i$ is denoted by $\sigma_{st}^{2}$ (see our notation~\eqref{eq:def-variance-Xt}). As in Section \ref{sec:preliminary-material}, $X$ admits a rough path lift $\mathbf{X}$. We also record a notation which will be useful for our future computations.

\begin{notation}\label{notation.dk}
Let $[s,t] \times [u,v]$ be a generic rectangle in $[0,T]^{2}$. Consider a grid-like partition ${\mathcal P} = \{ [t_{k}, t_{k+1}] \times [\tilde{t}_{k'}, \tilde{t}_{k'+1}];\, s=t_{0}<\cdots< t_{m}=t,\, u=\tilde{t}_{0}<\cdots <\tilde{t}_{n} =v \}$. Then we set $D_{kk'} = [t_{k}, t_{k+1}] \times [ \tilde{t}_{k'}, \tilde{t}_{k'+1} ]$.
\end{notation}

\subsection{Two inequalities}\label{sec:preliminary-results}

We first derive an inequality on 2-d Young integrals which we will make extensive use of. It is an elaboration of the Young-Loeve-Towghi inequality \cite[Theorem 1.2]{To}.
Recall that for $y\in C([0,T]^{2})$, $y^{st}_{uv}$ denotes the rectangular increment of $y$ over $[s,t]\times[u,v]$ defined in~\eqref{eq:rect-increment-cov-fct}.

\begin{lemma}\label{lemma.YLT}
Let $z\in C^{\rho\text{-var}}([0,T]^2,\mathbb{R}^d)$ and consider a function $y$ sitting in the space $C^{\theta\text{-var}}([0,T]^2, \mathcal L(\mathbb{R}^d,\mathbb{R}^d))$ with $1/\rho+1/\theta>1$, where $C^{\rho\text{-var}}$ and $C^{\theta\text{-var}}$ are given in Definition \ref{2d-rho-var-space}. For some given $s<t<\sigma$ and $u<v<\eta$, we set $D=[s,\sigma]\times [u,\eta]$ and
\begin{align}\label{eqn.yh}
\hat{y}_{tv} := \int_{[s,t]\times[u,v]} y^{sr}_{ur'}\,dz_{r r'} .
\end{align}
Then the $\rho$-variation of $\hat{y}$ on $D$ can be bounded as follows:
\begin{equation}\label{eq:YLT}
\left\| \hat{y} \right\|_{\rho\text{-var}, D}\le C\cdot\|y \|_{\theta\text{-var};D}\cdot \|z\|_{\rho\text{-var};D}.
\end{equation}
\end{lemma}

\begin{proof}
We consider partitions $s=t_{0}<\cdots<t_{m}=\sigma$ and $u=\tilde{t}_{0}<\cdots<\tilde{t}_{n}=\eta$ of $[s,\sigma]$ and $[u, \eta]$, respectively. Recall from Notation \ref{notation.dk} that we denote $D_{kk'}=[t_{k}, t_{k+1}]\times [\tilde{t}_{k'}, \tilde{t}_{k'+1}] \subset D$, and write
\begin{align*}
\hat{y}(D_{kk'}) = \hat{y}_{\tilde{t}_{k'}\tilde{t}_{k'+1}}^{t_{k}t_{k+1}} = \hat{y}_{t_{k+1}\tilde{t}_{k'+1}}-\hat{y}_{t_{k+1}\tilde{t}_{k'}}-\hat{y}_{t_{k}\tilde{t}_{k'+1}}+\hat{y}_{t_{k}\tilde{t}_{k'}}.
\end{align*}
Using expression \eqref{eqn.yh}, one can decompose $\hat{y}(D_{kk'})$ according to our partition in the following way:
\begin{eqnarray}\label{eqn.i}
\hat{y}(D_{kk'}) &=& \int_{D_{kk'}} y_{ur'}^{sr}\,dz_{rr'} \nonumber \\
&=& \int_{D_{kk'}} y_{\tilde{t}_{k'}r'}^{t_{k}r}\,dz_{rr'}+\int_{D_{kk'}} y_{\tilde{t}_{k'}r'}^{st_{k}}\,dz_{rr'}+\int_{D_{kk'}} y_{u\tilde{t}_{k'}}^{t_{k}r}\,dz_{rr'}+\int_{D_{kk'}} y_{u\tilde{t}_{k'}}^{st_{k}}\,dz_{rr'} \nonumber \\
&=:& I_{1}+ I_{2}+ I_{3}+ I_{4} .
\end{eqnarray}
It should be noticed that the term $I_{1}$ above can be bounded directly thanks to the Young-Loeve-Towghi inequality \cite[Theorem 1.2]{To}. We get
\begin{align}\label{eqn.i1}
|I_{1}| \leq C\|y\|_{\theta\text{-var}, D_{kk'}} \cdot \|z\|_{\rho\text{-var}, D_{kk'}} .
\end{align}
The term $I_{4}$ can also be treated easily. Indeed, we have
\begin{align*}
I_{4} = y^{st_{k}}_{u\tilde{t}_{k'}}\cdot z^{t_{k}t_{k+1}}_{\tilde{t}_{k'}\tilde{t}_{k'+1}},
\end{align*}
and thus
\begin{align}\label{eqn.i4}
|I_{4}| \leq \|y \|_{\theta\text{-var}, [s, t_{k}]\times [u,\tilde{t}_{k'}]} \cdot \|z\|_{\rho\text{-var}, D_{kk'}} .
\end{align}
We now focus on the 1-d type integral $I_{2}$ in equation \eqref{eqn.i}. In order to bound this term, one can use the classical Young inequality \cite{Young} to get
\begin{eqnarray}\label{eqn.i2}
|I_{2}| &=& \left| \int_{[\tilde{t}_{k'}, \tilde{t}_{k'+1}]} y^{st_{k}}_{\tilde{t}_{k'}r'} \, d (z_{t_{k+1}r'} - z_{t_{k}r'}) \right| \nonumber \\
&\leq& C \, \|y^{st_{k}}_{\tilde{t}_{k'} \cdot} \|_{\theta\text{-var}, [\tilde{t}_{k'}, \tilde{t}_{k'+1}]} \cdot \|z_{t_{k+1} \cdot} - z_{t_{k} \cdot} \|_{\rho\text{-var}, [ \tilde{t}_{k'}, \tilde{t}_{k'+1}]} \nonumber \\
&\leq& C \, \|y \|_{\theta\text{-var}, [s,t_{k}]\times [\tilde{t}_{k'}, \tilde{t}_{k'+1}]} \cdot \|z\|_{\rho\text{-var}, D_{kk'}}.
\end{eqnarray}
In the same way, we can also upper bound the term $I_{3}$ in \eqref{eqn.i} as
\begin{align}\label{eqn.i3}
|I_{3}| \leq C \|y \|_{\theta\text{-var}, [t_{k}, t_{k+1}]\times [u,\tilde{t}_{k'}]} \cdot \|z\|_{\rho\text{-var}, D_{kk'}}.
\end{align}
Plugging \eqref{eqn.i1}-\eqref{eqn.i3} into \eqref{eqn.i}, we have thus obtained that
\begin{align*}
|\hat{y}(D_{kk'})| \leq & C\Big\{ \|y\|_{\theta\text{-var}, D_{kk'}} + \|y \|_{\theta\text{-var}, [s,t_{k}]\times [\tilde{t}_{k'}, \tilde{t}_{k'+1}]} \\
&\qquad + \|y \|_{\theta\text{-var}, [t_{k}, t_{k+1}]\times [u,\tilde{t}_{k'}]} + \|y \|_{\theta\text{-var}, [s, t_{k}]\times [u,\tilde{t}_{k'}]} \Big\}\cdot \|z\|_{\rho\text{-var}, D_{kk'}} .
\end{align*}
Therefore, by trivial monotonicity properties of $\theta$-variations, we arrive at
\begin{align}\label{eqn.yh1}
|\hat{y}(D_{kk'})| \leq C \|y \|_{\theta\text{-var}, D} \cdot \|z\|_{\rho\text{-var}, D_{kk'}}.
\end{align}
As explained in Remark \ref{remark.rho}, we will skip the routine procedure of replacing $\rho$ by $\rho'$ in order to apply super-additivity relations to 2-dimensional $\rho$-variations. Hence, summing relation~\eqref{eqn.yh1} over $k,k'$ and invoking super-additivity, we finally obtain the desired inequality~\eqref{eq:YLT}.
\end{proof}

We close this section by giving a general convergence lemma for a sequence of stochastic processes. It is borrowed from \cite[Lemma 3.5]{Euler}.

\begin{lemma}\label{G existence}
Let $\{z^{n}; n\in \mathbb{N}\}$ be a sequence of stochastic processes such that
$$
\|\delta z^n_{st}\|_{L^{p}(\Omega)}\le C_p n^{-\alpha}(t-s)^\beta,
$$
for all $p\ge 1$, where $C_p$ is a constant depending on $p$ and where we recall the notation $\delta$ given in Definition~\ref{def:delta-on-C1-C2}.
Then for $0<\gamma<\beta$ and $\kappa>0$, we can find an integrable random variable $G_{\gamma,\kappa}$, independent of $n$ and admitting moments of any order, such that
\begin{equation}
\|z^n\|_\gamma\le G_{\gamma, \kappa} \, n^{-\alpha+\kappa} .
\end{equation}
\end{lemma}

\subsection{Upper-bounds for processes in a finite chaos}\label{subseq:lp-bounds}

\noindent With the notions of Section \ref{sec:preliminary-material} in hand, we now introduce a family of processes, defined as sums of iterated integrals of $X$, which appear naturally in the analysis of the approximation~\eqref{eq:trap}. We start by proving a bound on sums of L\'evy area type processes which generalizes \cite{HLN2, HLN3, Euler}. We first record a notation for further use.

\begin{notation}\label{not:discrete-interval}
Let ${\mathcal P}=\{0=t_0<\cdots<t_{n}=T\}$ be a partition of $[0,T]$ and take $s,t\in [0,T]$. Then we set $\llbracket s,t\rrbracket := \{t_k\in {\mathcal P} \colon t_k \in [s,t]\}$, and we denote by $\mathcal S_k(\llbracket s,t\rrbracket)=\{(t_1,\ldots,t_k)\in \llbracket s,t\rrbracket^{k} \colon t_1 \leq \cdots\leq t_k\}$ the corresponding discrete simplex.
\end{notation}

The main bound involving L\'evy area type objects is the following.

\begin{lemma}\label{lemma:F-bound}
Suppose that Hypothesis \ref{hyp.w} holds true for the $\mathbb R^{d}$-valued Gaussian process $X=(X^1,\ldots,X^d)$ with covariance function $R$, 2-d control $\omega_{R}$ and $\rho \in [1,2)$. Consider a partition $\{t_k;\, 0 \leq k\leq n\}$ of $[0,T]$ and define the process $F$ on $\llbracket 0,T\rrbracket$ by
\begin{equation}\label{F}
F_t^{ij}=\begin{cases} \sum\limits_{0\leq t_k<t} X_{t_k t_{k+1}}^{2,ij}&\text{ for }i\neq j \\ \sum\limits_{0\leq t_k<t}X_{t_k t_{k+1}}^{2,ii}-E[X_{t_kt_{k+1}}^{2,ii}]&\text{ for }i=j , \end{cases}
\end{equation}
with the convention that $F_0^{ij}=0$.
Then for any $p\geq 1$ there exists a strictly positive constant $C=C_{p}$ such that for all $(s,t)\in \mathcal S_2$ and $0\leq \varepsilon\leq 2-\rho$ we have
\begin{equation}\label{eq:F-bound}
\|\delta F_{st}^{ij}\|_{p}^{2}\le C \max_{k,k'} \omega_{R}(D_{kk'})^{\frac{\varepsilon}{\rho}} \cdot \omega_{R}( [s,t]^{2})^{\frac{2-\varepsilon}{\rho}} ,
\end{equation}
where $\|\cdot\|_p$ denotes the $L^p(\Omega)$ norm and where the rectangle $D_{kk'}$ is defined in Notation \ref{notation.dk}.
\end{lemma}

\begin{proof}
By hypercontractivity for random variables in the second chaos (see \cite[Theorem 1.4.1]{NuaBook}), we just need to consider $p=2$. Furthermore, when $i=j$, notice that
\begin{equation}\label{eq:second-order}
X_{t_kt_{k+1}}^{2,ii}=\int_{t_k}^{t_{k+1}}\int_{t_k}^{u_1}dX_{u_2}^i\,dX_{u_1}^i=\frac{1}{2}\left(\delta X_{t_kt_{k+1}}^{i}\right)^2,
\end{equation}
where \eqref{eq:second-order} is justified by Definition \ref{def:RP} and the fact that $\mathbf{X}$ is assumed to be geometric. Therefore one can recast the definition of $\delta F^{ii}_{t_{k}t_{k+1}}$ in~\eqref{F} as
\begin{eqnarray}\label{eq:hermite}
\delta F^{ii}_{t_{k}t_{k+1}} = X_{t_k t_{k+1}}^{2,ii}-E[X_{t_kt_{k+1}}^{2,ii}]&=&\frac{1}{2}(X_{t_kt_{k+1}}^{1,i})^2-\frac{1}{2} E[(X_{t_kt_{k+1}}^{1,i})^2] \notag\\
&=& \frac{1}{2} \sigma_{t_{k}t_{k+1}}^{2} H_2\left(\frac{X_{t_kt_{k+1}}^{1,i}}{\sigma_{t_{k}t_{k+1}}}\right),
\end{eqnarray}
where $H_2(x)=x^2-1$ is the second Hermite polynomial and where the notation $\sigma_{st}^{2}$ has been introduced in \eqref{eq:def-variance-Xt}.
Then, putting together relations~\eqref{F}, \eqref{eq:second-order} and \eqref{eq:hermite}, we have
\begin{align*}
& \| \delta F_{st}^{ii}\|_2^2= E\left[\sum_{k}\left(X_{t_k t_{k+1}}^{2,ii}-E[X_{t_kt_{k+1}}^{2,ii}]\right) \sum_{k'}\left(X_{t_{k'} t_{k'+1}}^{2,ii}-E[X_{t_{k'} t_{k'+1}}^{2,ii}]\right)\right]\\
&= \frac14 E\left[\sum_{k} \sigma^{2}_{t_{k}t_{k+1}} H_2\left(\frac{X_{t_kt_{k+1}}^{1,i}}{\sigma_{t_{k} t_{k+1}}}\right)\sum_{k'} \sigma^{2}_{t_{k'}t_{k'+1}} H_2\left(\frac{X_{t_{k'}t_{k'+1}}^{1,i}}{\sigma_{t_{k'} t_{k'+1}}}\right)\right].
\end{align*}
Next, expanding the double sum in $k,k'$ above we get
\begin{equation}\label{a1}
\|\delta F_{st}^{ii}\|_2^2 =\frac{1}{4}\sum_{k,k'}\sigma_{t_{k}t_{k+1}}^{2} \sigma_{t_{k'}t_{k'+1}}^{2} E\left[H_2\left(\frac{X_{t_kt_{k+1}}^{1,i}}{\sigma_{t_{k}t_{k+1}}}\right)H_2\left(\frac{X_{t_{k'}t_{k'+1}}^{1,i}}{\sigma_{t_{k'}t_{k'+1}}}\right)\right] .
\end{equation}
Since each $X_{t_kt_{k+1}}^{1,i}$ is a Gaussian random variable, we can now apply Proposition~\ref{Fin-Dim-Gaussian} with $X=\sigma_{t_{k} t_{k+1}}^{-1}X_{t_kt_{k+1}}^{1,i}$ and $Y=\sigma_{t_{k'} t_{k'+1}}^{-1} X_{t_{k'}t_{k'+1}}^{1,i}$. This yields
\begin{eqnarray}\label{eq:ii-bound}
\| \delta F_{st}^{ii}\|_2^2\notag&=& \frac{1}{8}\sum_{k,k'}\sigma_{t_{k}t_{k+1}}^{2} \sigma_{t_{k'}t_{k'+1}}^{2} \left(E\left[\frac{X_{t_kt_{k+1}}^{1,i} \, X_{t_{k'}t_{k'+1}}^{1,i}}{\sigma_{t_{k}t_{k+1}} \, \sigma_{t_{k'}t_{k'+1}}} \right]\right)^2\\
&=&\frac{1}{8}\sum_{k,k'}\left(R^{t_kt_{k+1}}_{t_{k'}t_{k'+1}}\right)^2 .
\end{eqnarray}
Notice that the sum on the right-hand side of \eqref{eq:ii-bound} is a sum over the rectangles $D_{kk'}=[t_k,t_{k+1}]\times [t_{k'},t_{k'+1}]$.
Since we have $|R_{t_{k'} t_{k'+1}}^{t_k t_{k+1}}|\leq \omega_{R}(D_{kk'})^{1/\rho} $ thanks to Hypothesis \ref{hyp.w}, we obtain
\begin{eqnarray}\label{eq:ii-bound-2a}
\|\delta F_{st}^{ii}\|_2^2&\le& \frac{1}{8}\sum_{k,k'}\omega_{R}(D_{kk'})^{2/\rho} \\
\label{eq:ii-bound-2}
&\le& \frac{1}{8} \max_{k,k'} \omega_{R}(D_{kk'})^{\frac{ \varepsilon}{\rho} } \cdot \sum_{k,k'}\omega_{R}(D_{kk'})^{\frac{2-\varepsilon}{\rho}} .
\end{eqnarray}
Now recall that in our statement we have chosen $0\leq \varepsilon\leq 2-\rho$, which yields $(2-\varepsilon)/\rho\geq 1$. Hence invoking the super-additivity of $\omega_{R}$, and thus of $\omega_{R}^{(2-\varepsilon)/\rho}$, we arrive at
\begin{equation}\label{eq:Fii}
\|\delta F^{ii}_{st}\|_2^2 \le \frac{1}{8} \max_{k,k'} \omega_{R}(D_{kk'})^{\frac{ \varepsilon}{\rho} } \cdot \omega_{R}( [s,t]^{2})^{\frac{2-\varepsilon}{\rho}} .
\end{equation}

\noindent Putting together inequality \eqref{eq:Fii} and the aforementioned hypercontractivity argument, our claim \eqref{eq:F-bound} is proved for $i=j$.

\noindent Let us now handle the case $i\neq j$. Similarly to what we did in \eqref{a1}, we compute
\begin{eqnarray}\label{eq:Fij-before}
\|\delta F_{st}^{ij}\|_2^2\notag&=&\sum_{k,k'} E\left[ X_{t_kt_{k+1}}^{2,ij}X_{t_{k'} t_{k'+1}}^{2,ij}\right] \\
&=&\sum_{k,k'}E\left[\int_{t_k}^{t_{k+1}}X_{t_k r}^{1,i}dX_r^{1,j}\int_{t_{k'}}^{t_{k'+1}}X_{t_{k'} r'}^{1,i}dX_{r'}^{1,j}\right] .
\end{eqnarray}
In order to compute the right-hand side of \eqref{eq:Fij-before} we proceed as in \cite[p.~402]{FrizBook}. Namely, we consider a Gaussian regularization $X^\epsilon$ of $X$ whose rough path lift $\mathbf{X}^\epsilon$ also converges to $\mathbf{X}$. Let us write $R^\epsilon$ for the covariance function of the process $X^{\epsilon}$, and let $F^\epsilon$ be defined as in \eqref{F}, but in terms of the process $X^\epsilon$.
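The passage from \eqref{eq:ii-bound-2} to \eqref{eq:Fii} only uses that $\sum_{k,k'}\omega_{R}(D_{kk'})^{a}\leq \omega_{R}([s,t]^{2})^{a}$ for a super-additive 2-d control $\omega_{R}$ and an exponent $a\geq 1$. The following illustrative sketch checks this mechanism for the hypothetical choice $\omega_{R}(A\times B)=|A|\,|B|$ (the control associated with $\rho=1$), which is additive and hence super-additive; it is not part of the argument.

```python
import numpy as np

# For a super-additive 2-d control omega and an exponent a >= 1, the grid
# rectangles D_{kk'} of a partition of [s,t] satisfy
#     sum_{k,k'} omega(D_{kk'})**a  <=  omega([s,t]^2)**a.
# Illustration with omega(A x B) = |A| * |B| (a hypothetical stand-in).
omega = lambda u, v, up, vp: (v - u) * (vp - up)

s, t = 0.0, 1.0
a = 1.7                                  # plays the role of (2 - eps)/rho >= 1
pts = np.sort(np.concatenate(([s, t], np.random.default_rng(1).uniform(s, t, 8))))

lhs = sum(omega(pts[k], pts[k + 1], pts[kp], pts[kp + 1])**a
          for k in range(len(pts) - 1) for kp in range(len(pts) - 1))
rhs = omega(s, t, s, t)**a               # omega([s,t]^2)**a
print(lhs <= rhs)                        # True
```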
A simple application of Fubini's theorem yields
\begin{eqnarray}\label{eq:Fij-regularized}
\|\delta F_{st}^{\epsilon,ij}\|_2^2\notag&=&\sum_{k,k'}E\left[\int_{t_k}^{t_{k+1}}\int_{t_{k'}}^{t_{k'+1}}X_{t_{k'} r'}^{\epsilon,1,i}X_{t_k r}^{\epsilon,1,i}dX_r^{\epsilon,1,j} dX_{r'}^{\epsilon,1,j}\right]\\
&=&\sum_{k,k'}\int_{D_{k,k'}}R^{\epsilon, t_k r}_{t_{k'}r'} \, dR^{\epsilon}(r,r'),
\end{eqnarray}
where we recall that $D_{k,k'}=[t_k,t_{k+1}]\times [t_{k'},t_{k'+1}]$. Taking limits in \eqref{eq:Fij-regularized} as $\epsilon\to 0$ we get
\begin{equation}\label{eq:Fij-after}
\|\delta F^{ij}_{st}\|_{2}^{2}= \sum_{k,k'}\int_{D_{k,k'}}R^{t_k r}_{t_{k'}r'} dR(r,r').
\end{equation}
Then a direct application of inequality \eqref{eq:YLT} yields
\begin{equation}\label{eq:F-f-bound}
\|\delta F^{ij}_{st}\|_2^2\le C \sum_{k,k'} \|R\|_{\rho\text{-var};D_{k,k'}}^{2},
\end{equation}
which in turn implies relation \eqref{eq:ii-bound-2a} thanks to Hypothesis \ref{hyp.w}. Starting from \eqref{eq:F-f-bound}, we can thus conclude as we did for \eqref{eq:ii-bound-2} and \eqref{eq:Fii} in the case $i=j$. Our result \eqref{eq:F-bound} is now shown for the case $i\neq j$, which concludes our proof.
\end{proof}

\noindent We now state an elaboration of Lemma \ref{lemma:F-bound} for third-order integrals.

\begin{lemma}\label{lemma:g-bound}
Suppose that Hypothesis \ref{hyp.w} holds for the Gaussian process $X=(X^1,\ldots,X^d)$ with covariance function $R$, 2-d control $\omega_{R}$ and $\rho \in [1,2)$. For $i,j,\ell\in \{1,\ldots,d\}$, $(s,t) \in \mathcal S_2 $ and a generic partition $\{t_{k};\, 0\leq k\leq n\}$ of $[0,T]$ we denote
\begin{align}\label{eqn.g}
\delta g_{st}^{ij\ell} =\sum_{s\leq t_k<t} X^{3,ij\ell}_{t_{k}t_{k+1}}.
\end{align}
Then for any $p\geq 2$ there exists a positive constant $C=C_{\rho,p}$ such that for all $(s,t)\in \mathcal S_2 $ we have
\begin{equation}\label{eq:g-bound}
\|\delta g_{st}^{ij\ell} \|_{L^{p}(\Omega)}^{2} \le C \sum_{k,k'} \omega_{R}(D_{kk'})^{3/\rho} + C \left| \sum_{k,k'} R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}}R_{t_{k'}t_{k'+1}}^{t_{k'}t_{k'+1}}R_{t_{k}t_{k+1}}^{t_{k'}t_{k'+1}} \right|,
\end{equation}
where we recall our Notation \ref{notation.dk} for $D_{kk'}$.
\end{lemma}

\begin{proof}
Along the same lines as in Lemma \ref{lemma:F-bound}, by hypercontractivity of random variables in the third chaos, we just need to consider $p=2$. We focus on this case in the remainder of the proof. We split our considerations in several steps.

\noindent \textbf{Step 1: equal indices.}\quad In this step, we show that \eqref{eq:g-bound} holds when the indices are equal: $i=j=\ell$. We first note that using geometricity similarly to what we have done in~\eqref{eq:second-order}, we get $X_{t_{k} t_{k+1}}^{3,iii}=\frac{1}{6}(\delta X_{t_kt_{k+1}}^i)^3$. We now expand the cubic power in terms of Hermite polynomials along the same lines as for \eqref{eq:hermite}. Namely, recall that $H_{3}(x)=x^{3}-3x$, so that $x^{3} = H_{3}(x)+3x$. Therefore, renormalizing $(\delta X_{t_{k}t_{k+1}}^{i})^{3}$ by $\sigma_{t_{k}t_{k+1}}$ (recall that $\sigma_{st}$ is defined by \eqref{eq:def-variance-Xt}), we get
\begin{align}\label{eqn.zeta}
X^{3,iii}_{t_{k}t_{k+1}} = \frac16 \sigma^{3}_{t_{k}t_{k+1}} H_{3} (\sigma_{t_{k}t_{k+1}}^{-1} \delta X_{t_{k}t_{k+1}}^{i}) +\frac12 \sigma_{t_{k}t_{k+1}}^{2} \delta X_{t_{k}t_{k+1}}^{i}.
\end{align}
Let us now go back to our expression \eqref{eqn.g} for $i=j=\ell$. By linearity of the expected value we have
\begin{align}\label{eqn.gn}
\|\delta g_{st}^{iii} \|_2^2 = \sum_{k,k'} E \left[ X_{t_k t_{k+1}}^{3,iii}X_{t_{k'} t_{k'+1}}^{3,iii}\right] .
\end{align}
Plugging relation \eqref{eqn.zeta} into \eqref{eqn.gn}, invoking Proposition \ref{Fin-Dim-Gaussian} and recalling that $\sigma_{t_{k}t_{k+1}}^{2} = R^{t_{k}t_{k+1}}_{t_{k}t_{k+1}} $, we obtain:
\begin{eqnarray}\label{eq:iii-bound}
\|\delta g_{st}^{iii} \|_2^2 &=& \notag \frac{1}{216} \sum_{k,k'} \left( R_{t_{k}t_{k+1}}^{t_{k'}t_{k'+1}}\right)^{3} + \frac14 \sum_{k,k'} R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}}R_{t_{k'}t_{k'+1}}^{t_{k'}t_{k'+1}}R_{t_{k}t_{k+1}}^{t_{k'}t_{k'+1}} \\
& \leq & C \sum_{k,k'} \omega_{R}(D_{kk'})^{3/\rho} + C \left| \sum_{k,k'} ( R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}}R_{t_{k'}t_{k'+1}}^{t_{k'}t_{k'+1}}R_{t_{k}t_{k+1}}^{t_{k'}t_{k'+1}}) \right| ,
\end{eqnarray}
where the last inequality stems from Hypothesis \ref{hyp.w} and where we have used our Notation~\ref{notation.dk} for $D_{kk'}$. We have thus proved our claim \eqref{eq:g-bound} for $p=2$ and $i=j=\ell$. As mentioned above, the general case $p\geq 2$ follows by hypercontractivity.

\noindent \textbf{Step 2: three distinct indices.} We turn to the proof of \eqref{eq:g-bound} for $i,j,\ell$ all distinct. To this aim, we first note that a regularization procedure similar to the one which led to \eqref{eq:Fij-after} yields
\begin{align*}
\|\delta g_{st}^{ij\ell} \|_2^2&=\sum_{k,k'}E[X_{t_{k} t_{k+1}}^{3,ij\ell}X_{t_{k'} t_{k'+1}}^{3,ij\ell}]\\
&=\sum_{k,k'}E\left[\int_{t_k}^{t_{k+1}}\int_{t_k}^{u}X_{t_k,v}^{1,i}dX_{v}^{1,j}dX_u^{1,\ell}\int_{t_{k'}}^{t_{k'+1}}\int_{t_{k'}}^{u'}X_{t_{k'},v'}^{1,i}dX_{v'}^{1,j}dX_{u'}^{1,\ell}\right]\\
&=\sum_{k,k'}E\left[\int_{t_k}^{t_{k+1}}\int_{t_{k'}}^{t_{k'+1}}\int_{t_k}^u\int_{t_{k'}}^{u'}X_{t_k,v}^{1,i}X_{t_{k'},v'}^{1,i}dX_{v}^{1,j}dX_{v'}^{1,j}dX_{u}^{1,\ell}dX_{u'}^{1,\ell}\right],
\end{align*}
for all $(s,t)\in \mathcal S_2([0,T])$.
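As an aside, the purely algebraic identity behind \eqref{eqn.zeta} in Step 1, namely $\tfrac16 y^{3} = \tfrac16\sigma^{3}H_{3}(y/\sigma) + \tfrac12\sigma^{2}y$ with $H_{3}(x)=x^{3}-3x$, can be checked numerically; the sketch below is an illustration only.

```python
import numpy as np

# Algebraic identity behind (eqn.zeta): with the third Hermite polynomial
# H_3(x) = x**3 - 3*x, every y and sigma > 0 satisfy
#     y**3 / 6 == sigma**3 * H_3(y / sigma) / 6 + sigma**2 * y / 2,
# which is how X^{3,iii} splits into a third-chaos and a first-chaos part.
h3 = lambda x: x**3 - 3 * x

rng = np.random.default_rng(2)
y = rng.standard_normal(5)               # arbitrary sample values
sigma = 0.7
lhs = y**3 / 6
rhs = sigma**3 * h3(y / sigma) / 6 + sigma**2 * y / 2
print(np.allclose(lhs, rhs))             # True
```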
Then using independence and Fubini's theorem (here again via standard regularization arguments, as for \eqref{eq:Fij-after}) we get
\begin{align}\label{eqn.49}
\|\delta g_{st}^{ij\ell} \|_2^2=\sum_{k,k'}\int_{t_k}^{t_{k+1}}\int_{t_{k'}}^{t_{k'+1}}\int_{t_k}^u\int_{t_{k'}}^{u'}R^{t_k v}_{t_{k'} v'} dR(v,v') dR(u,u').
\end{align}
Now applying inequality \eqref{eq:YLT} twice to the right-hand side of \eqref{eqn.49}, we easily get
\begin{equation}\label{eqn.51}
\|\delta g_{st}^{ij\ell} \|_2^2\le C \sum_{k,k'} \omega_{R}(D_{kk'})^{3/\rho} .
\end{equation}
Our claim \eqref{eq:g-bound} is now proved for $i,j,\ell$ distinct.

\noindent \textbf{Step 3: two distinct indices.} We now turn to the case of two distinct indices in $i,j,\ell$. In fact we should divide this case into $3$ distinct subcases, namely $i=j\neq \ell$, $i\neq j= \ell$ and $i=\ell\neq j$. We will only treat the first case $i=j\neq \ell$, the other ones being similar and left to the patient reader. We proceed as previously, resorting to the geometric property of $\mathbf{X}$, some regularization arguments, and Fubini's theorem. Recalling Notation \ref{notation.dk} for the rectangles $ D_{kk'} = [t_{k}, t_{k+1}] \times [ t_{k'}, t_{k'+1} ] $, we get
\begin{eqnarray*}
\|\delta g_{st}^{ii\ell} \|_2^2 &=&\sum_{k,k'}E\left[X_{t_{k} t_{k+1}}^{3,ii\ell}X_{t_{k'} t_{k'+1}}^{3,ii\ell}\right]\\
&=&\frac{1}{4}\sum_{k,k'}\int_{D_{kk'}} E\left[(X_{t_k,u}^{1,i})^2(X_{t_{k'},u'}^{1,i})^2\right] dR(u,u') .
\end{eqnarray*}
In order to evaluate the term $E[(X_{t_k,u}^{1,i})^2(X_{t_{k'},u'}^{1,i})^2]$ above, we reproduce the steps leading from~\eqref{eq:hermite} to \eqref{eq:ii-bound} in the proof of Lemma \ref{lemma:F-bound} (based on Hermite polynomial decompositions).
This yields
\begin{eqnarray}\label{eq:g-iil-bound-initial}
\|\delta g_{st}^{ii\ell} \|_2^2 =: I_{1}+I_{2},
\end{eqnarray}
where the terms $I_{1}$ and $I_{2}$ are defined by
\begin{align}\label{eqn.I12}
I_{1} = \frac{1}{4}\sum_{k,k'}\int_{D_{kk'}} R_{t_{k}u}^{t_{k}u}R_{t_{k'}u'}^{t_{k'}u'} dR(u,u') \qquad \text{and} \qquad I_{2} = \frac{1}{4}\sum_{k,k'}\int_{D_{kk'}} (R_{t_{k}u}^{t_{k'}u'})^{2} dR(u,u') .
\end{align}
In the following we bound the two quantities $I_{1}$ and $I_{2}$.

In order to bound the term $I_{2}$ in \eqref{eqn.I12}, let us set $\varphi(u,u')= (R_{t_{k}u}^{t_{k'}u'})^{2}$. We will first estimate the $\rho$-var norm of $\varphi$. To this aim, we decompose the rectangular increments of $\varphi$ as follows:
\begin{align}\label{eqn.vp}
\varphi_{vu}^{v'u'} &= R_{vu}^{v'u'} ( R_{t_{k}u}^{t_{k'}v'} +R_{t_{k}v}^{t_{k'}v'} )+ R_{vu}^{t_{k'}u'} ( R_{t_{k}u}^{v'u'} +R_{t_{k}v}^{v'u'} ) .
\end{align}
From this decomposition it is readily checked using Hypothesis \ref{hyp.w} that for $ [v,u]\times[v',u']\subset D_{kk'}$ we have
\begin{multline}\label{eqn.52}
| \varphi_{vu}^{v'u'} | \leq C \omega_{R}([v,u]\times[v',u'])^{1/\rho} \cdot \omega_{R}(D_{kk'})^{1/\rho} \\
+ C \omega_{R}([v,u] \times [t_{k'}, t_{k'+1}])^{1/\rho}\cdot \omega_{R}([t_{k}, t_{k+1}]\times[v',u'])^{1/\rho} .
\end{multline}
This inequality can be used in order to evaluate the 2-d $\rho$-var norm of $\varphi$ over the rectangle~$D_{kk'}$. Indeed, let ${\mathcal P}$ and ${\mathcal P}'$ be partitions of $[t_{k}, t_{k+1}]$ and $[t_{k'}, t_{k'+1}]$, respectively.
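Observe that the decomposition \eqref{eqn.vp} is a purely algebraic statement about rectangular increments of a squared function, and can be sanity-checked numerically with an arbitrary smooth stand-in for $R$; the sketch below is an illustration, not part of the argument.

```python
import numpy as np

# Numerical check of the increment decomposition (eqn.vp): for
# phi(u, u') = (R_{t_k u}^{t_{k'} u'})**2, the rectangular increment of phi
# over [v,u] x [v',u'] factorizes through rectangular increments of R.
f = lambda x, y: np.sin(3 * x) * np.cos(2 * y) + x * y   # stand-in covariance

def rect(a, b, c, d):
    """Rectangular increment R_{ab}^{cd} of f over [a,b] x [c,d]."""
    return f(b, d) - f(b, c) - f(a, d) + f(a, c)

tk, tkp = 0.1, 0.9          # play the roles of t_k and t_{k'}
v, u = 0.3, 0.6             # [v,u]  inside [t_k, t_{k+1}]
vp, up = 0.4, 0.8           # [v',u'] inside [t_{k'}, t_{k'+1}]

phi = lambda x, y: rect(tk, x, tkp, y)**2
lhs = phi(u, up) - phi(u, vp) - phi(v, up) + phi(v, vp)  # increment of phi
rhs = (rect(v, u, vp, up) * (rect(tk, u, tkp, vp) + rect(tk, v, tkp, vp))
       + rect(v, u, tkp, up) * (rect(tk, u, vp, up) + rect(tk, v, vp, up)))
print(np.isclose(lhs, rhs))  # True
```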
From~\eqref{eqn.52} we have
\begin{multline*}
\sum_{(v,u)\in {\mathcal P}, (v',u')\in {\mathcal P}'} | \varphi_{vu}^{v'u'} |^{\rho} \leq \omega_{R}(D_{kk'}) \sum_{(v,u)\in {\mathcal P}, (v',u')\in {\mathcal P}'} \omega_{R}([v,u]\times[v',u']) \\
+ \sum_{(v,u)\in {\mathcal P}} \omega_{R}( [v,u] \times [t_{k'}, t_{k'+1}] ) \cdot \sum_{ (v',u')\in {\mathcal P}'} \omega_{R}([t_{k}, t_{k+1}]\times[v',u']) \, ,
\end{multline*}
which, by the super-additivity of $\omega_{R}$, easily yields
\begin{equation*}
\sum_{(v,u)\in {\mathcal P}, (v',u')\in {\mathcal P}'} | \varphi_{vu}^{v'u'} |^{\rho} \leq \omega_{R}(D_{kk'})^{2} .
\end{equation*}
Since ${\mathcal P}$ and ${\mathcal P}'$ are generic partitions of $[t_{k}, t_{k+1}]$ and $[t_{k'}, t_{k'+1}]$, respectively, this implies that
\begin{align}\label{eqn.53}
\|\varphi\|_{\rho\text{-var}; D_{kk'}} \leq \omega_{R}(D_{kk'})^{2/\rho} .
\end{align}
With relation \eqref{eqn.53} in hand, we can now establish a bound for $I_{2}$. Indeed, recall that $\varphi (u,u') = (R_{t_{k}u}^{t_{k'}u'})^{2}$. In particular, we have $\varphi (t_{k}, u')=0$ and $\varphi(u,t_{k'})=0$, and we get
\begin{align*}
\int_{D_{kk'}} (R_{t_{k}u}^{t_{k'}u'})^{2}dR(u,u') = \int_{D_{kk'}} \varphi_{t_{k}u}^{t_{k'}u'} dR(u,u').
\end{align*}
Therefore, plugging \eqref{eqn.53} in the definition \eqref{eqn.I12} of $I_{2}$ and applying Lemma \ref{lemma.YLT}, we end up with
\begin{align}\label{eqn.54ii}
|I_{2} | \leq C\cdot \sum_{k,k'} \omega_{R}(D_{kk'})^{3/\rho} .
\end{align}
The estimation of $I_{1}$ can be done along the same lines as for $I_{2}$. Namely, we define a function $\psi(u,u') = R_{t_{k}u}^{t_{k}u}R_{t_{k'}u'}^{t_{k'}u'}$. Then the rectangular increments of $\psi$ can be decomposed as
\begin{align*}
\psi_{vu}^{v'u'} & = (R_{t_{k}u}^{vu} + R_{t_{k}v}^{vu} )( R_{t_{k'}u'}^{v'u'}+ R_{t_{k'}v'}^{v'u'} ).
\end{align*}
From this decomposition we can deduce
\begin{align}\label{eqn.psi.rec}
|\psi_{vu}^{v'u'} | &\leq C \cdot \omega_{R}([t_{k}, t_{k+1}]\times [v,u])^{1/\rho}\cdot \omega_{R}([t_{k'}, t_{k'+1}]\times [v',u'])^{1/\rho} .
\end{align}
Starting from \eqref{eqn.psi.rec} we can proceed as in \eqref{eqn.53}--\eqref{eqn.54ii}. We arrive at
\begin{align*}
\|\psi\|_{\rho\text{-var}; D_{kk'}} \leq C\omega_{R}(D_{kk'})^{2/\rho} .
\end{align*}
Now applying Lemma \ref{lemma.YLT} again we end up with
\begin{align}\label{eqn.54i}
|I_{1}|&\leq C\sum_{k,k'} \omega_{R}(D_{kk'})^{3/\rho} .
\end{align}
Let us conclude our estimates for this step: plugging \eqref{eqn.54i} and \eqref{eqn.54ii} into \eqref{eq:g-iil-bound-initial} we have obtained
\begin{align}\label{eqn.giil}
\|\delta g^{ii\ell}_{st}\|_{2}^{2} \leq C \sum_{k,k'} \omega_{R} (D_{kk'})^{3/\rho} .
\end{align}

\noindent \textbf{Step 4: Conclusion.} Let us summarize our considerations so far. Gathering the upper bounds \eqref{eq:iii-bound}, \eqref{eqn.51} and \eqref{eqn.giil} we have proved relation \eqref{eq:g-bound} for all possible values of $i,j,\ell \in \{1,\dots, d\}$ and $p=2$. Recall again that the general case $p\geq 2$ is obtained by hypercontractivity, which finishes our proof.
\end{proof}

Let $g$ be the increment defined in relation \eqref{eqn.g}. We now wish to obtain an upper bound for $g$ similar to the bound \eqref{eq:F-bound} we have derived for $F$. This is the content of the next proposition.

\begin{proposition}\label{prop.g}
Let $X$ and $g$ be as in Lemma \ref{lemma:g-bound}. Let $\theta>1$ be such that $\frac{1}{\theta}+\frac{1}{\rho}=1$.
Then for any $p\geq 2$ and for all $i,j,\ell \in \{1,\dots, d\}$ and $(s,t)\in \mathcal S_{2}$ we have
\begin{multline}\label{eqn.57}
\|\delta g_{st}^{ij\ell}\|_{p }^{2} \\
\leq C \max_{k,k'} \omega_{R}(D_{kk'})^{3/\rho-1} \cdot \omega_{R}([s,t]^{2}) + C \max_{k} \omega_{R}(D_{kk})^{2(1/\rho-1/\theta)} \cdot \omega_{R} ( [s,t]^{2})^{2 /\theta+1/\rho} .
\end{multline}
In particular, for $\varepsilon $ such that $ 0\leq\varepsilon \leq (3-\rho)\wedge (2-2\rho/\theta)$ we have
\begin{align}\label{eqn.58}
\|\delta g_{st}^{ij\ell}\|_{p}^{2} \leq \max_{k,k'} \omega_{R}(D_{kk'})^{\frac{\varepsilon}{\rho}} \cdot \omega_{R}([s,t]^{2})^{\frac{3-\varepsilon}{\rho}} .
\end{align}
\end{proposition}

\begin{proof}
We first observe that since $\rho<2$ and $\frac{1}{\theta}+\frac{1}{\rho}=1$, we have $\theta>2>\rho$. Applying H\"older's inequality to the second term on the right-hand side of \eqref{eq:g-bound} yields
\begin{align*}
\Big| \sum_{k,k'} ( R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}}R_{t_{k'}t_{k'+1}}^{t_{k'}t_{k'+1}}R_{t_{k}t_{k+1}}^{t_{k'}t_{k'+1}}) \Big| \leq\Big( \sum_{k,k'} | R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}}R_{t_{k'}t_{k'+1}}^{t_{k'}t_{k'+1}} |^{\theta}\Big)^{1/\theta} \cdot \Big( \sum_{k,k'} |R_{t_{k}t_{k+1}}^{t_{k'}t_{k'+1}}|^{\rho} \Big)^{1/\rho} .
\end{align*}
Hence, owing to an elementary algebraic manipulation and to the definition of $\rho$-variation, we get
\begin{eqnarray*}
\Big| \sum_{k,k'} ( R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}}R_{t_{k'}t_{k'+1}}^{t_{k'}t_{k'+1}}R_{t_{k}t_{k+1}}^{t_{k'}t_{k'+1}}) \Big| &\leq& \left( \sum_{k} | R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}} |^{\theta} \right)^{2/\theta} \omega_{R}([s,t]^{2})^{1/\rho} \\
&=& \left( \sum_{k} | R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}} |^{\theta-\rho} | R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}} |^{\rho} \right)^{2/\theta} \omega_{R}([s,t]^{2})^{1/\rho}.
\end{eqnarray*}
Thus, bounding the term $| R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}} |^{\theta-\rho}$ above by $\omega_{R}(D_{kk})^{(\theta-\rho)/\rho}$ and owing to the super-additivity property in Hypothesis \ref{hyp.w}, we end up with
\begin{align}\label{eqn.3r}
\Big| \sum_{k,k'} ( R_{t_{k}t_{k+1}}^{t_{k}t_{k+1}}R_{t_{k'}t_{k'+1}}^{t_{k'}t_{k'+1}}R_{t_{k}t_{k+1}}^{t_{k'}t_{k'+1}}) \Big| &\leq \left(\max_{k} \omega_{R}(D_{kk})^{2(1/\rho-1/\theta)}\right) \cdot \omega_{R}([s,t]^{2})^{2 /\theta} \cdot \omega_{R}([s,t]^{2})^{1/\rho} \nonumber \\
&= \left(\max_{k} \omega_{R}(D_{kk})^{2(1/\rho-1/\theta)}\right) \cdot \omega_{R} ( [s,t]^{2})^{2 /\theta+1/\rho}.
\end{align}
On the other hand, it is easy to see that
\begin{align}\label{eqn.w3bd}
\sum_{k,k'} \omega_{R}(D_{kk'})^{3/\rho} \leq \max_{k,k'} \omega_{R}(D_{kk'})^{3/\rho-1} \cdot \omega_{R}([s,t]^{2}) .
\end{align}
Gathering \eqref{eqn.3r} and \eqref{eqn.w3bd} in \eqref{eq:g-bound}, we obtain relation \eqref{eqn.57}. Relation \eqref{eqn.58} then follows immediately from \eqref{eqn.57} and the fact that $ \omega_{R}(D_{kk'}) \leq \omega_{R}([s,t]^{2}) $.
\end{proof}

In the following, we turn to the estimate of another third-chaos functional.

\begin{lemma}\label{lemma.xf}
Let $X$ and $F$ be as in Lemma \ref{lemma:F-bound}. For $i,j,\ell=1,\dots, d$ and $(s,t)\in \mathcal S_{2}$, we define the increment $h_{st}^{ij\ell}$ as:
\begin{align*}
h_{st}^{ij\ell} =\sum_{s\leq t_k<t} X_{st_k}^{1,\ell}\delta F_{t_kt_{k+1}}^{ij}.
\end{align*}
In addition, consider $\varepsilon $ such that $0\leq \varepsilon\leq 2-\rho$.
Then the following inequality holds true:
\begin{align}\label{eqn.cr}
\|h^{ij\ell}_{st}\|_{p}^{2} \leq 4 \left( \omega_{R}([s,t]^{2})^{\frac{3}{\rho}-\frac{2\varepsilon}{\rho}} + \omega_{R}([s,t]^{2})^{\frac{3}{\rho}-\frac{\varepsilon}{\rho}} \right) \cdot \max_{k } \omega_{R}([t_{k}, t_{k+1}] \times [0,T])^{\frac{2\varepsilon}{\rho}} .
\end{align}
\end{lemma}

\begin{proof}
As in the proof of Lemma \ref{lemma:g-bound}, we should distinguish cases according to possible equalities among the indices $i,j,\ell$. We focus on the case $i=j$ in Steps 1 to 3 below, and then deal with the case $i\neq j$ in Step 4.

\noindent \textbf{Step 1: A decomposition of $\mathbf{\|h\|_{2}^{2}}$.} As mentioned above, let us first consider the case $i=j$ and find an estimate for $h^{ii\ell}_{st}$. We start by writing
\begin{align}\label{eqn.hex}
E\left[|h_{st}^{ii\ell}|^2\right]&=\sum_{k,k'}E\left[X_{st_k}^{1,\ell}X_{st_{k'}}^{1,\ell}\delta F_{t_kt_{k+1}}^{ii}\delta F_{t_{k'}t_{k'+1}}^{ii}\right].
\end{align}
In addition, recall from \eqref{eqn.dx} and \eqref{eq:hermite} that
\begin{align}\label{eqn.dsdf}
\delta X^{i}_{st} = \delta^{\diamond} \left( \mathbf{1}_{[s,t]} e_{i} \right) \, , \quad\text{and}\quad \delta F^{ii}_{t_{k}t_{k+1}} = \frac12 \sigma^{2}_{t_{k}t_{k+1}} H_{2} \left( \frac{X^{1,i}_{t_{k}t_{k+1}}}{\sigma_{t_{k}t_{k+1}}} \right).
\end{align}
Therefore, one can recast \eqref{eqn.hex} as
\begin{align}\label{eqn.hl2}
E\left[ |h^{ii\ell}_{st}|^{2} \right] = \sum_{k,k'} \sigma^{2}_{t_{k}t_{k+1}} \sigma^{2}_{t_{k'}t_{k'+1}} E \left[ \delta^{\diamond} \left( \mathbf{1}_{[s,t_{k}]} e_{\ell} \right) Z_{kk'} \right],
\end{align}
where the random variable $Z_{kk'}$ is defined by
\begin{align}\label{eqn.z}
Z_{kk'} = X^{1,\ell}_{st_{k'}} H_{2} \left( \frac{X^{1,i}_{t_{k}t_{k+1}}}{\sigma_{t_{k}t_{k+1}}} \right) H_{2} \left( \frac{X^{1,i}_{t_{k'}t_{k'+1}}}{\sigma_{t_{k'}t_{k'+1}}} \right).
\end{align}
Hence, resorting to the integration by parts formula \eqref{eqn.ibp}, we get that:
\begin{align*}
E\left[ |h^{ii\ell}_{st}|^{2} \right] = \sum_{k,k'} \sigma_{t_{k}t_{k+1}}^{2} \sigma_{t_{k'}t_{k'+1}}^{2} E \left[ \langle \mathbf{1}_{[s, t_{k}]} e_{\ell}, \mathbf{D} Z_{kk'} \rangle_{\mathcal H} \right],
\end{align*}
where we recall that $e_{\ell}$ stands for the $\ell$-th element of the canonical basis in $\mathbb R^{d}$. Computing the Malliavin derivative of $Z_{kk'}$ (and recalling that $H_{2}'(x)=2x$), we let the reader check that we get the formula
\begin{align}\label{eqn.3j}
E\left[|h_{st}^{ii\ell}|^2\right] = \sum_{k,k'} \left( J_{kk'}^{1} +J_{kk'}^{2}+J_{kk'}^{3} \right),
\end{align}
where the terms $J_{kk'}^{1} $, $J_{kk'}^{2}$, $J_{kk'}^{3}$ are respectively defined by
\begin{align}\label{eqn.j123}
J_{kk'}^{1}& = E \left[X_{st_k}^{1,\ell}X_{st_{k'}}^{1,\ell}\right] \cdot E\left[\delta F_{t_kt_{k+1}}^{ii}\delta F_{t_{k'}t_{k'+1}}^{ii}\right] \nonumber \\
J_{kk'}^{2}&= \langle \mathbf{1}_{[s,t_{k}]}, \mathbf{1}_{[t_{k},t_{k+1}]} \rangle_{\mathcal H} \cdot\langle \mathbf{1}_{[s,t_{k'}]}, \mathbf{1}_{[t_{k'},t_{k'+1}]} \rangle_{\mathcal H} \cdot \langle \mathbf{1}_{[t_{k},t_{k+1}]}, \mathbf{1}_{[t_{k'},t_{k'+1}]} \rangle_{\mathcal H} \cdot \mathbf{1}_{\{i=\ell\}} \\
J_{kk'}^{3}&= \langle \mathbf{1}_{[s,t_{k}]}, \mathbf{1}_{[t_{k'},t_{k'+1}]} \rangle_{\mathcal H} \cdot\langle \mathbf{1}_{[s,t_{k }]}, \mathbf{1}_{[t_{k},t_{k +1}]} \rangle_{\mathcal H} \cdot \langle \mathbf{1}_{[t_{k},t_{k+1}]}, \mathbf{1}_{[t_{k'},t_{k'+1}]} \rangle_{\mathcal H} \cdot \mathbf{1}_{\{i=\ell\}}. \nonumber
\end{align}
In the following we show that the upper bound in \eqref{eqn.cr} holds for each sum $\sum_{k,k'}J_{kk'}^{a}$, $a=1,2,3$, which will conclude the proof of the lemma.
\noindent \textbf{Step 2: Estimate for $\mathbf{J_{kk'}^{1}}$.} In order to bound $J^{1}_{kk'}$, we use the definition \eqref{eqn.rcov} as well as Hypothesis \ref{hyp.w} in order to get
\begin{align*}
\left| E [X^{1,\ell}_{st_{k}} X^{1,\ell}_{st_{k'}} ] \right| = \left| R^{st_{k'}}_{st_{k}}\right| \leq \omega_{R} \left( [s,t_{k}]\times [s,t_{k'}]\right)^{1/\rho} \leq \omega_{R}\left( [s,t]^{2}\right)^{1/\rho}.
\end{align*}
Furthermore, invoking relations \eqref{eq:hermite} and \eqref{eq:ii-bound}, we get
\begin{align*}
\left| E [ \delta F^{ii}_{t_{k}t_{k+1}} \delta F^{ii}_{t_{k'}t_{k'+1}} ] \right| = \frac18 \left( R^{t_{k}t_{k+1}}_{t_{k'}t_{k'+1}}\right)^{2} \leq \omega_{R} \left( D_{kk'}\right)^{2/\rho}.
\end{align*}
Hence, resorting to the same arguments as those leading to inequality \eqref{eq:ii-bound-2} in the proof of Lemma \ref{lemma:F-bound}, we get that for any $\varepsilon\leq 2-\rho$ we have:
\begin{align}\label{eqn.j1bd}
\sum_{k,k'} |J_{kk'}^{1}| \leq \omega_{R}([s,t]^{2})^{1/\rho} \cdot \sum_{k,k'} \omega_{R}(D_{kk'})^{2/\rho} \leq \max_{k,k'} \omega_{R}(D_{kk'})^{\frac{\varepsilon}{\rho}} \cdot \omega_{R} ( [s,t]^{2})^{\frac{3-\varepsilon}{\rho}} .
\end{align}
Otherwise stated, inequality \eqref{eqn.cr} is satisfied for $\sum_{k,k'}|J_{kk'}^{1}|$.

\noindent \textbf{Step 3: Estimate for $\mathbf{J_{kk'}^{2}}$ and $\mathbf{J_{kk'}^{3}}$.} We turn to an upper bound of the term $J_{kk'}^{2}$ in~\eqref{eqn.j123}. To this aim, consider $\theta>2$ such that $\frac{1}{\theta}+\frac{1}{\rho}=1$.
Then applying H\"older's inequality to the summation in $k,k'$ of \eqref{eqn.j123}, we get \mathbf{E}gin{multline*} \sum_{k,k'} J_{kk'}^{2} \leq \left(\sum_{k,k'} \Big| \lambdangle \mathbf{1}_{[s,t_{k}]}, \mathbf{1}_{[t_{k},t_{k+1}]} \rangle_{\mathcal H} \lambdangle \mathbf{1}_{[s,t_{k'}]}, \mathbf{1}_{[t_{k'},t_{k'+1}]} \rangle_{\mathcal H} \Big|^{\theta}\right)^{1/\theta} \\ \times \left(\sum_{k,k'} \Big| \lambdangle \mathbf{1}_{[t_{k},t_{k+1}]}, \mathbf{1}_{[t_{k'},t_{k'+1}]} \rangle_{\mathcal H} \Big|^{\rho}\right)^{1/\rho}. \end{multline*} Now we apply the estimate $\lambdangle \mathbf{1}_{[u,v]}, \mathbf{1}_{[u',v']} \rangle_{\mathcal H}\leq \|R\|_{\rho\thetaxt{-var}, [u,v]\times [u',v']}$ to the three inner products in the above inequality. We arrive at \mathbf{E}gin{align}\lambdabel{eqn.j2bd2} \sum_{k,k'} J_{kk'}^{2} &\leq \left(\sum_{k,k'} \Big| \|R\|_{\rho\thetaxt{-var}, [t_{k}, t_{k+1}]\times [s,t]} \mathcal Dot \|R\|_{\rho\thetaxt{-var}, [t_{k'}, t_{k'+1}]\times [s,t]} \Big|^{\theta} \right)^{1/\theta} \mathcal Dot \left(\sum_{k,k'} \|R\|_{\rho\thetaxt{-var}, D_{kk'}}^{\rho}\right)^{1/\rho} \nonumber \\ &\leq \left(\sum_{ k } \|R\|_{\rho\thetaxt{-var}, [t_{k }, t_{k +1}]\times [s,t]}^{\theta} \right)^{2/\theta} \mathcal Dot \omegaega_{R}([s,t]^{2})^{1/\rho} . \end{align} Similarly to what we have done in the proof of Proposition \ref{prop.g}, we note that $\theta>2>\rho$. Hence combining Hypothesis \ref{hyp.w} and the super-additivity of $\omegaega_{R}$ we get the following estimate for $0<\varepsilon\leq \theta-\rho$: \mathbf{E}gin{align}\lambdabel{eqn.j2bd} \sum_{k,k'}J_{kk'}^{2} \leq \max_{k } \omegaega_{R}([t_{k}, t_{k+1}] \times [s,t])^{\varepsilon/\rho} \mathcal Dot \max_{k'} \omegaega_{R}([t_{k'}, t_{k'+1}] \times [s,t])^{\varepsilon/\rho} \mathcal Dot \omegaega_{R}([s,t]^{2})^{\frac{3-2\varepsilon}{\rho}} . \end{align} This implies that the upper-bound estimate \eqref{eqn.cr} holds for $J_{kk'}^{2}$. 
It can also be shown that the same estimate holds for $J_{kk'}^{3}$. The proof is similar to that of $J_{kk'}^{2}$ and will be omitted. Combining \eqref{eqn.j1bd} and \eqref{eqn.j2bd}, this completes the proof of \eqref{eqn.cr} for $i=j$.

\noindent \textbf{Step 4: The case $i\neq j$.} We now turn to an estimate of $h^{ij\ell}_{st}$ when $i\neq j$. To this aim we first write an expression for $E[ |h^{ij\ell}_{st}|^{2} ]$ mimicking \eqref{eqn.hex}, with the important difference that the term $\delta F^{ij}_{t_{k}t_{k+1}}$ cannot be represented by Hermite polynomials as in \eqref{eqn.dsdf}. Hence the equivalent of~\eqref{eqn.hl2} and \eqref{eqn.z} whenever $i\neq j$ is
\begin{align}\label{eqn.hl2d}
E\left[ |h^{ij\ell}_{st}|^{2} \right]= \sum_{k,k'} E\left[ \delta^{\diamond} ( \mathbf{1}_{[s,t_{k}]} e_{\ell} ) \tilde{Z}_{kk'} \right],
\end{align}
where
\begin{align*}
\tilde{Z}_{kk'} = X^{1,\ell}_{st_{k'}} \cdot \delta^{\diamond} ( X^{1,i}_{t_{k}\cdot} \cdot \mathbf{1}_{[t_{k}, t_{k+1}]} e_{j} ) \cdot \delta^{\diamond} ( X^{1,i}_{t_{k'}\cdot} \cdot \mathbf{1}_{[t_{k'}, t_{k'+1}]} e_{j} ) .
\end{align*}
Integrating relation \eqref{eqn.hl2d} by parts similarly to \eqref{eqn.3j}, we end up with
\begin{align*}
E\left[ |h^{ij\ell}_{st}|^{2} \right] = \sum_{k,k'} (J_{kk'}^{1}+J_{kk'}^{4}),
\end{align*}
where $J^{1}_{kk'}$ has already been defined in \eqref{eqn.j123} and $J^{4}_{kk'}$ is given by
\begin{align}\label{eqn.j4}
J_{kk'}^{4}= & \int_{ D_{kk'}} \left( R_{st_{k} }^{t_{k'}u} R_{st_{k'} }^{t_{k }u' } + R_{st_{k} }^{t_{k }u' } R_{st_{k'} }^{t_{k'}u} \right) dR(u,u') \cdot \mathbf{1}_{\{i=\ell\}} \nonumber \\
&+ \int_{ D_{kk'}} \left( R_{st_{k} }^{ut_{k'+1}} R_{st_{k'} }^{u't_{k +1} } + R_{st_{k} }^{u't_{k +1} } R_{st_{k'} }^{ut_{k'+1}} \right) dR(u,u') \cdot \mathbf{1}_{\{j=\ell\}} .
\end{align}
In order to bound $J^{4}_{kk'}$, we set $\phi (u,u') = R_{st_{k} }^{t_{k'}u} R_{st_{k'} }^{t_{k }u' } $. Then one of the terms in \eqref{eqn.j4} is $\int_{D_{kk'}} \phi(u,u')dR(u,u')$. We wish to bound this term thanks to Lemma \ref{lemma.YLT}. To this aim, similarly to what we did in \eqref{eqn.vp}--\eqref{eqn.52}, we estimate the rectangular increments of $\phi$. We get
\begin{align*}
|\phi_{uv}^{u'v'}| = |R_{st_{k} }^{uv} R_{st_{k'} }^{u'v' }| \leq \|R\|_{\rho\text{-var}, [s,t]\times [u,v]} \cdot \|R\|_{\rho\text{-var}, [s,t]\times [u',v']} .
\end{align*}
Now we consider generic partitions ${\mathcal P}$ and ${\mathcal P}'$ of $[t_{k}, t_{k+1}]$ and $[t_{k'}, t_{k'+1}]$ respectively, as well as $\theta$ such that $\frac{1}{\theta}+\frac{1}{\rho}=1$. Then we have
\begin{align*}
\sum_{[u,v]\times [u',v']\in {\mathcal P}\times {\mathcal P}'} |\phi_{uv}^{u'v'}|^{\theta} \leq \sum_{[u,v]\in {\mathcal P}} \|R\|_{\rho\text{-var}, [s,t]\times [u,v]}^{\theta } \cdot \sum_{[u',v']\in {\mathcal P}'} \|R\|_{\rho\text{-var}, [s,t]\times [u',v']}^{\theta} \\
\leq \|R\|_{\rho\text{-var}, [s,t]\times [t_{k}, t_{k+1}]}^{\theta} \cdot \|R\|_{\rho\text{-var}, [s,t]\times [t_{k'}, t_{k'+1}]}^{\theta} .
\end{align*}
Therefore, we obtain
\begin{align} \label{eqn.phi}
\|\phi\|_{\theta\text{-var}, D_{kk'}}\leq \|R\|_{\rho\text{-var}, [s,t]\times [t_{k}, t_{k+1}]} \cdot \|R\|_{\rho\text{-var}, [s,t]\times [t_{k'}, t_{k'+1}]}.
\end{align}
Note that the estimate \eqref{eqn.phi} of $\phi$ also holds for the other three functions on the right-hand side of \eqref{eqn.j4}, namely:
\begin{align*}
R_{st_{k} }^{t_{k }u' } R_{st_{k'} }^{t_{k'}u} , \qquad R_{st_{k} }^{ut_{k'+1}} R_{st_{k'} }^{u't_{k +1} } , \qquad R_{st_{k} }^{u't_{k +1} } R_{st_{k'} }^{ut_{k'+1}}.
\end{align*}
The proof is similar and will be left to the reader.
With \eqref{eqn.phi} in hand, we can now invoke Lemma \ref{lemma.YLT} for the right-hand side of relation \eqref{eqn.j4}. This yields
\begin{align}\label{eqn.j4bd}
|J_{kk'}^{4}| \leq \|R\|_{\rho\text{-var}, [s,t]\times [t_{k}, t_{k+1}]} \cdot \|R\|_{\rho\text{-var}, [s,t]\times [t_{k'}, t_{k'+1}]} \cdot \|R\|_{\rho\text{-var}, D_{kk'}}.
\end{align}
Starting from \eqref{eqn.j4bd}, we easily get an upper bound similar to \eqref{eqn.j2bd2} for $\sum_{k,k'}J^{4}_{kk'}$. Then we can proceed as in relation \eqref{eqn.j2bd}. We conclude that \eqref{eqn.cr} holds for the case $i\neq j$. The proof is now complete.
\end{proof}

\subsection{Upper bounds for weighted sums}\label{sec:bounds-weighted-sums}

In this section we give some estimates for weighted sums of the processes $F$ and $g$ defined in the previous subsection. These sums will form part of the main terms in our analysis of the trapezoid rule.

\begin{lemma}\label{lem:weighted-F}
Let $X$ be an $\mathbb R^d$-valued Gaussian process with covariance function $R$ such that Hypothesis \ref{hyp.x} holds with $\rho\in [1,2)$; Hypothesis~\ref{hyp.w} is then guaranteed by Remark~\ref{remark.rho} and the subsequent comments. Let $\omega_{R}$ be the control in Hypothesis~\ref{hyp.w}. Recall that the increment $F$ is defined in \eqref{F} and fix a partition ${\mathcal P}$ with mesh $|{\mathcal P}|$. We also consider a controlled process $y$ of order $1$ according to Definition \ref{def:ctrld-process}, which means that the increments of $y$ can be decomposed as
\begin{align}\label{eqn.y}
y_{st} = y^{1}_{s}X^{1}_{st} + r_{st} , \qquad s,t\in [0,T] .
\end{align}
We call $\omega$ the control $\omega_{y}$ related to the increments of $y$ in Definition \ref{def:ctrld-process}, and recall that we have
\begin{align}\label{eqn.y1r}
|\delta y^{1}_{st}|\leq \omega(s,t)^{1/p} , \qquad |r_{st}|\leq \omega(s,t)^{2/p}, \qquad \text{for all}\quad (s,t)\in \mathcal S_{2}([0,T])
\end{align}
almost surely. Finally, we introduce below a parameter $p$ such that $\frac{1}{p} = \frac{1-\varepsilon}{2\rho}$ for $\varepsilon$ small enough. Then the following holds true:

\noindent\emph{(i)} For every $M>0$, we set $A_{M}=\{\omega(0,T)\leq M\}$. Then for all $(s,t)\in \mathcal S_2 $ and $(i,j)\in \{1,\ldots,d\}^{2}$ we have:
\begin{align}\label{eq:weighted-F}
E\left[ \mathbf{1}_{A_{M}}\cdot \left|\sum_{s\leq t_k<t} y_{t_k} \delta F_{t_k t_{k+1}}^{ij}\right|\right] \leq C \cdot \max_{k } \omega_{R}([t_{k}, t_{k+1}] \times [0,T])^{\frac{\varepsilon}{2\rho}} .
\end{align}
In particular,
\begin{align*}
\sum_{s\leq t_k<t} y_{t_k} \delta F_{t_k t_{k+1}}^{ij} \longrightarrow 0, \qquad \text{in probability as $|{\mathcal P}|\to0$}.
\end{align*}

\noindent\emph{(ii)} In the case of H\"older continuous processes $X$ and $y$, one can improve the convergence as follows. Namely, suppose that $\delta y^{1}\in \mathcal{C}^{1/p}$ and $r\in \mathcal C^{2/p}$ almost surely and that
\begin{align}\label{eqn.Rholder}
\omega_{R}([s,t]\times [0,T]) \leq C|t-s|, \qquad \text{for all } (s,t)\in \mathcal S_{2}([0,T]) .
\end{align}
Furthermore, assume that the uniform partition $0=t_{0}<\cdots<t_{n}=T$ of $[0,T]$ is considered. Then
\begin{align}\label{eqn.xfconv}
\sum_{s\leq t_k<t} y_{t_k} \delta F_{t_k t_{k+1}}^{ij} \longrightarrow 0, \qquad \text{almost surely as $n\to\infty$}.
\end{align}
\end{lemma}

\begin{proof}
\noindent \textbf{Step 1: A decomposition.} Consider $(s,t)\in \mathcal S_2 $, where we recall that $\mathcal S_{2}$ stands for $\mathcal S_{2}([0,T])$. With the help of \eqref{eqn.y} we have the following decomposition
\begin{align}\label{eqn.yf}
\sum_{s\leq t_{k}<t} y_{st_{k}} \delta F^{ij}_{t_{k}t_{k+1}} = y^{1}_{s}\sum_{s\leq t_{k}<t} X^{1}_{st_{k}} \delta F^{ij}_{t_{k}t_{k+1}} + \tilde{r}_{st} ,
\end{align}
where we denote
\begin{align}\label{eqn.tr}
\tilde{r}_{st} = \sum_{s\leq t_{k}<t} r_{st_{k}} \delta F^{ij}_{t_{k}t_{k+1}}.
\end{align}

\noindent \textbf{Step 2: Calculations for $\delta \tilde{r}$.} In the following, in order to estimate $\tilde{r}$ we first estimate $\delta \tilde{r}$. To this aim we observe that a simple computation yields $\delta r_{sut} = \delta y^{1}_{su} X^{1}_{ut}$. Therefore, starting from~\eqref{eqn.tr} and using Definition \ref{def:delta-on-C1-C2} for the increment $\delta \tilde{r}$, some elementary calculations show that
\begin{align}\label{eqn.dr}
\delta \tilde{r}_{sut} &= r_{su}\sum_{u\leq t_{k}<t} \delta F^{ij}_{t_{k}t_{k+1}} + \sum_{u\leq t_{k}<t}\delta r_{sut_{k}} \delta F^{ij}_{t_{k}t_{k+1}} \nonumber \\
&= r_{su}\sum_{u\leq t_{k}<t} \delta F^{ij}_{t_{k}t_{k+1}} + \delta y^{1}_{su} \sum_{u\leq t_{k}<t} X^{1}_{ut_{k}}\delta F^{ij}_{t_{k}t_{k+1}}.
\end{align}

\noindent \textbf{Step 3: Moment estimates of $\delta \tilde{r}$ and $\tilde{r}$.} We now hinge on relation \eqref{eqn.dr} in order to upper bound $\tilde{r}$. We start by denoting
\begin{align}\label{eqn.trunc}
y^{1,M} = \mathbf{1}_{A_{M}}\cdot y^{1}\,, \quad \tilde{r}^{M}=\mathbf{1}_{A_{M}}\cdot \tilde{r} \,, \quad \delta \tilde{r}^{M}=\mathbf{1}_{A_{M}}\cdot \delta \tilde{r}\,, \quad \omega^{M} (s,t)=E\big[\mathbf{1}_{A_{M}}\cdot \omega(s,t)\big].
\end{align}
It is easy to see that $\omega^{M}$ is a control.
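As a side illustration of the control functions used throughout (our own sketch, not part of the proof), recall that a control $\omega$ is super-additive: $\omega(s,u)+\omega(u,t)\leq\omega(s,t)$ for $s\leq u\leq t$. The snippet below checks this numerically for the prototype $\omega(s,t)=(t-s)^{\theta}$, which is a control precisely when $\theta\geq 1$; the case $\omega_{R}([s,t]^{2})=|t-s|$ appearing in the fBm example corresponds to $\theta=1$.

```python
# Illustrative check of super-additivity for the prototype control
# omega(s, t) = (t - s)**theta; names here are ours, not the paper's.

def omega(s, t, theta=1.5):
    """Prototype control function (t - s)**theta with theta >= 1."""
    return (t - s) ** theta

def is_superadditive(control, T=1.0, n=50):
    """Check control(s,u) + control(u,t) <= control(s,t) on a grid of triples s <= u <= t."""
    pts = [T * i / n for i in range(n + 1)]
    tol = 1e-12  # tolerance for floating-point round-off
    return all(
        control(s, u) + control(u, t) <= control(s, t) + tol
        for i, s in enumerate(pts)
        for j, u in enumerate(pts[i:], start=i)
        for t in pts[j:]
    )

print(is_superadditive(omega))                         # theta = 1.5: super-additive
print(is_superadditive(lambda s, t: (t - s) ** 0.5))   # theta < 1: fails
```

For $\theta<1$ the map $x\mapsto x^{\theta}$ is sub-additive, so super-additivity fails, which is why exponents below $1$ never appear as controls above.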
By the inequalities in \eqref{eqn.y1r} we also have
\begin{align}\label{eqn.ym1rm}
E\left[|y^{1,M}_{su}|^{p}\right]\leq \omega^{M}(s,u) , \qquad\text{and}\qquad E\left[| r^{M}_{su}|^{p/2}\right]\leq \omega^{M}(s,u) .
\end{align}
We also recall that Hypothesis \ref{hyp.w} holds with some 2-d control $\omega_{R}$. Next we multiply both sides of \eqref{eqn.dr} by $\mathbf{1}_{A_{M}}$. According to our notation \eqref{eqn.trunc}, we get
\begin{align}\label{eqn.drm}
\delta \tilde{r}_{sut}^{M} = r^{M}_{su}\sum_{u\leq t_{k}<t} \delta F^{ij}_{t_{k}t_{k+1}} + \delta y^{1,M}_{su} \sum_{u\leq t_{k}<t} X^{1}_{ut_{k}} \delta F^{ij}_{t_{k}t_{k+1}}.
\end{align}
We are now in a position to apply H\"older's inequality, Lemma \ref{lemma:F-bound}, Lemma \ref{lemma.xf} and the upper bound \eqref{eqn.ym1rm} in order to get
\begin{align}\label{eqn.debd}
E\left[|\delta \tilde{r}^{M}_{sut}|\right] \leq & 2\omega^{M}(s,t)^{2/p} \cdot \omega_{R} ([s,t]^{2})^{\frac{1}{\rho}-\frac{\varepsilon}{2\rho}} \cdot \max_{k,k'} \omega_{R}(D_{kk'})^{\frac{\varepsilon}{2\rho}} \nonumber \\
&+ 2 \omega^{M}(s,t)^{1/p} \cdot \omega_{R}([s,t]^{2})^{\frac{3}{2\rho}-\frac{\varepsilon}{2\rho}} \cdot \max_{k } \omega_{R}([t_{k}, t_{k+1}] \times [0,T])^{\frac{\varepsilon}{2\rho}} .
\end{align}
Let us now discuss the exponents in \eqref{eqn.debd}. Indeed, recall that we have chosen $p$ such that $\frac{1}{p} = \frac{1-\varepsilon}{2\rho}$. Therefore, owing to the fact that $\rho\in [1,2)$, $\varepsilon$ can be chosen small enough so that
\begin{align}\label{eqn.nu}
\nu_{p,\rho} \equiv \left( \frac{2}{p} + \frac{1-\varepsilon/2}{\rho}\right) \wedge \left( \frac{1}{p} + \frac{3-\varepsilon}{2\rho} \right)>1 .
\end{align}
In the sequel we pick a $\mu $ such that $1<\mu<\nu_{p, \rho}$.
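To see concretely why the restriction $\rho<2$ matters at this point, one can evaluate the two total exponents coming from \eqref{eqn.debd} under the convention $\frac{1}{p}=\frac{1-\varepsilon}{2\rho}$, namely $\frac{2}{p}+\frac{1}{\rho}-\frac{\varepsilon}{2\rho}$ and $\frac{1}{p}+\frac{3}{2\rho}-\frac{\varepsilon}{2\rho}$. The short sketch below (a check of the arithmetic, not part of the proof) confirms that both exponents exceed $1$ for small $\varepsilon$ whenever $\rho\in[1,2)$, and fail to do so at $\rho=2$.

```python
# Sanity check of the exponent condition behind the sewing argument:
# with 1/p = (1 - eps)/(2*rho), both total exponents in the remainder
# bound must exceed 1, which holds for rho in [1, 2) and small eps.

def nu(rho, eps):
    """Minimum of the two total exponents appearing in the remainder bound."""
    inv_p = (1.0 - eps) / (2.0 * rho)
    e1 = 2.0 * inv_p + 1.0 / rho - eps / (2.0 * rho)   # first summand exponent
    e2 = inv_p + 3.0 / (2.0 * rho) - eps / (2.0 * rho) # second summand exponent
    return min(e1, e2)

for rho in (1.0, 1.5, 1.9):
    print(rho, nu(rho, eps=0.01) > 1)   # True for each rho in [1, 2)
print(2.0, nu(2.0, eps=0.01) > 1)       # False: the argument breaks at rho = 2
```

Algebraically $\nu$ reduces to $\big(2-\tfrac{3\varepsilon}{2}\big)/\rho \wedge (2-\varepsilon)/\rho$, so the threshold $\rho<2$ is sharp up to the choice of $\varepsilon$.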
With this notation in hand, define a bivariate function $\tilde{\omega}$ by
\begin{align*}
\tilde{\omega} (s,t) = \left( \omega^{M}(s,t)^{2/p} \cdot \omega_{R} ([s,t]^{2})^{\frac{1}{\rho}-\frac{\varepsilon}{2\rho}} \right)^{1/\mu} + \left( \omega^{M}(s,t)^{1/p} \cdot \omega_{R}([s,t]^{2})^{\frac{3}{2\rho}-\frac{\varepsilon}{2\rho}} \right)^{1/\mu}.
\end{align*}
As a direct application of \cite[Exercise 1.9 item (iii)]{FV}, it is readily checked that $\tilde{\omega}$ is a control. In addition, one can recast \eqref{eqn.debd} as
\begin{align}\label{eqn.drmbd}
E(|\delta \tilde{r}^{M}_{sut}|) \leq 2 \max_{k } \omega_{R}([t_{k}, t_{k+1}] \times [0,T])^{\frac{\varepsilon}{2\rho}} \cdot \tilde{\omega}(s,t)^{\mu}.
\end{align}
Summarizing our considerations for this step, we have obtained that $\tilde{r}$ is an increment from $\mathcal S_{2}$ to the Banach space $\mathcal B=L^{1}(\Omega)$. Moreover, relation \eqref{eqn.tr} easily entails $\tilde{r}^{M}_{t_{\ell}t_{\ell+1}}=0$ for any point $t_{\ell}$ of the partition ${\mathcal P}$, and $\tilde{r}$ satisfies \eqref{eqn.drmbd}. Therefore, a direct application of Lemma \ref{lem2.4} with $\mathcal B=L^{1}(\Omega)$ yields
\begin{align*}
E(| \tilde{r}_{st}^{M} |) \leq 2 K_{\mu} \max_{k } \omega_{R}([t_{k}, t_{k+1}] \times [0,T])^{\frac{\varepsilon}{2\rho}} \cdot \tilde{\omega}(s,t)^{\mu}.
\end{align*}
Plugging this estimate into \eqref{eqn.yf} and combining it with \eqref{eqn.cr}, the proof of our claim \eqref{eq:weighted-F} is now achieved.

\noindent \textbf{Step 4: Path-wise estimates of $F$.} We now turn to item (ii) in our lemma, assuming H\"older continuity for $y,y^{1}, r$ and considering uniform partitions of $[0,T]$ with $t_{k+1}-t_{k}=T/n$.
In this context, condition \eqref{eqn.Rholder} allows us to write the upper-bound estimate \eqref{eq:F-bound} of $F$ in Lemma~\ref{lemma:F-bound} as:
\begin{align*}
\|\delta F^{ij}_{st}\|_{q} \leq C |t-s|^{\frac{1}{\rho}-\varepsilon} \cdot |{\mathcal P}|^{ \varepsilon } = C |t-s|^{\frac{1}{\rho}-\varepsilon} \cdot n^{- \varepsilon }, \quad \text{for all } q>1 \text{ and } (s,t)\in \mathcal S_{2}.
\end{align*}
Applying Lemma \ref{G existence} with $z^{n}=F$, $\beta=\frac{1}{\rho}-\varepsilon$, and $\alpha=\varepsilon$ we obtain
\begin{align}\label{eqn.Fas}
|\delta F^{ij}_{st}| \leq G \cdot (t-s)^{\frac{1}{\rho}-2\varepsilon} \cdot n^{- \varepsilon /2},
\end{align}
where $G$ is a random variable admitting moments of all orders. In a similar way, and with the help of Lemma \ref{lemma.xf}, we can show that
\begin{align}\label{eqn.xfas}
\Big| \sum_{s\leq t_{k}<t} X^{1 }_{ut_{k}} \delta F^{ij}_{t_{k}t_{k+1}} \Big| \leq G\cdot |t-s|^{\frac{3}{\rho} - 2\varepsilon}\cdot n^{- \varepsilon /4}.
\end{align}
With those preliminaries in mind, we will upper bound the increment $\sum_{s\leq t_{k}<t} y_{t_{k}} \delta F^{ij}_{t_{k}t_{k+1}}$ thanks to relation \eqref{eqn.yf}. Namely, in the right-hand side of \eqref{eqn.yf} we have almost surely
\begin{align*}
y^{1}_{s}\sum_{s\leq t_{k}<t} X^{1}_{st_{k}} \delta F^{ij}_{t_{k}t_{k+1}} \to 0, \quad \text{for all } (s,t)\in \mathcal S_{2},
\end{align*}
thanks to \eqref{eqn.xfas}. Therefore, in order to show \eqref{eqn.xfconv} it remains to show the convergence of $\tilde{r}$.

\noindent \textbf{Step 5: Path-wise estimates of $\delta \tilde{r}$ and $\tilde{r}$.} In order to bound $\delta \tilde{r}$ in the H\"older case, we plug \eqref{eqn.Fas} and \eqref{eqn.xfas} into the expression \eqref{eqn.dr} we have obtained for $\delta \tilde{r}$.
We end up with
\begin{align*}
|\delta \tilde{r}_{sut}| &\leq G \left( \|r\|_{2/p} |t-s|^{\frac{2}{p}+\frac{1}{\rho}-2\varepsilon}\cdot n^{-\varepsilon/2} + \|y^{1}\|_{1/p} |t-s|^{\frac{1}{p}+\frac{3}{\rho}-2\varepsilon} \cdot n^{-\varepsilon/4} \right) \\
&\leq n^{-\varepsilon/4} G \left( \|r\|_{2/p}+\|y^{1}\|_{1/p}\right) \cdot |t-s|^{\mu},
\end{align*}
where, similarly to what we did in Step 3, we take $1<\mu<\tilde{\nu}_{p,\rho}$ with
\begin{align*}
\tilde{\nu}_{p,\rho} = \left( \frac{2}{p} +\frac{1}{\rho}-2\varepsilon \right) \wedge \left( \frac{1}{p} +\frac{3}{\rho}-2\varepsilon \right).
\end{align*}
Hence one can resort to the H\"older version of the sewing lemma contained in Lemma \ref{prop:sewing}. We get
\begin{align*}
|\tilde{r}_{st}|\leq C G (\|r\|_{2/p}+\|y^{1}\|_{1/p}) \cdot n^{-\varepsilon/4} |t-s|^{\mu},
\end{align*}
from which we easily deduce
\begin{align}\label{eqn.rto0}
\lim_{n\to\infty}|\tilde{r}_{s t}| = 0.
\end{align}
In conclusion, plugging \eqref{eqn.rto0} and \eqref{eqn.xfas} into \eqref{eqn.yf}, we obtain relation \eqref{eqn.xfconv}. The proof is complete.
\end{proof}

\noindent We now handle some weighted sums of the increment $X^{3}$ which will feature in our trapezoid sums.

\begin{lemma}\label{lem:weighted-g}
As in Lemma \ref{lem:weighted-F}, we consider a Gaussian process $X$ whose covariance $R$ satisfies Hypothesis \ref{hyp.x} with $\rho\in [1,2)$. We call $\omega_{R}$ the control defined in Hypothesis \ref{hyp.w}. Let $X^{3} = \{ X^{3,ij\ell}_{st}; \, (s,t)\in \mathcal S_{2}([0,T]), \, i,j,\ell =1,\dots, d \} $ be the third order element in the rough path above $X$, and take a sequence of partitions ${\mathcal P}$ with mesh $|{\mathcal P}|$. We also consider a continuous process $y$ such that $y_{0}=0$. Then the following holds true:

\noindent \emph{(i)} Let us assume that the increments of $y$ are dominated by a control $\omega $ over $[0,T]$.
Namely, we suppose that for all $(s,t)\in \mathcal S_{2}([0,T])$ we have $|\delta y_{st}| \leq \omega(s,t)^{1/p}$ almost surely, where $p$ is such that $\frac{1}{p}= \frac{1-\varepsilon}{2\rho}$ as in Lemma \ref{lem:weighted-F}. Consider a generic index $(i,j,\ell) \in \{ 1,\dots, d \}^{3}$, and recall that $A_{M}$ is defined by $A_{M}= \{ \omega (0,T)\leq M \}$ for $M>0$. Then we have
\begin{equation}\label{eq:weighted-zeta}
E\left[\mathbf{1}_{A_{M}} \cdot\left|\sum_{s\leq t_k<t} \delta y_{st_k} X_{t_kt_{k+1}}^{3,ij\ell}\right| \, \right] \leq C\cdot \max_{k,k'} \omega_{R}(D_{kk'})^{\frac{\varepsilon}{2\rho}} .
\end{equation}
In particular,
\begin{align}\label{eqn.yx3conv}
\sum_{s\leq t_k<t} \delta y_{st_k} X_{t_kt_{k+1}}^{3,ij\ell} \longrightarrow 0, \qquad \text{in probability as $|{\mathcal P}|\to0$.}
\end{align}

\noindent \emph{(ii)} If we are in a H\"older setting, namely $y\in \mathcal C^{1/p}$ and $\omega_{R}$ verifying \eqref{eqn.Rholder}, and if we also consider the uniform partition ${\mathcal P}$ (see Lemma \ref{lem:weighted-F} item \emph{(ii)}), then we get
\begin{align}\label{eqn.yx3as}
\sum_{s\leq t_k<t} \delta y_{st_k} X_{t_kt_{k+1}}^{3,ij\ell} \longrightarrow 0, \qquad \text{almost surely as $n\to\infty$.}
\end{align}
\end{lemma}

\begin{proof}
The proof is very similar to that of Lemma \ref{lem:weighted-F}. For the sake of conciseness we will only outline some of the steps, focusing mainly on getting an equivalent of \eqref{eqn.debd}. Along the same lines as \eqref{eqn.trunc}, we set
\begin{align*}
y^{M}_{s}= \mathbf{1}_{A_{M}} \, y_{s} \,, \qquad r_{st}^{M} = \mathbf{1}_{A_{M}}\cdot \sum_{s\leq t_k<t} \delta y_{st_k} X_{t_kt_{k+1}}^{3,ij\ell}.
\end{align*}
In this context we let the reader check that the equivalent of relation \eqref{eqn.drm} becomes
\begin{align*}
\delta r_{sut}^{M} = \delta y_{su}^{M}\cdot \sum_{u\leq t_{k}<t} X^{3,ij\ell}_{t_{k}t_{k+1}}.
\end{align*}
Hence applying H\"older's inequality with $p$, $q$ such that $\frac{1}{p}+\frac{1}{q}=1$, invoking Proposition \ref{prop.g} and recalling that $\omega^{M} $ was introduced in relation \eqref{eqn.trunc}, we get
\begin{align}\label{eqn.drbd}
E\left[|\delta r_{sut}^{M} |\right] & \leq E\left[ |\delta y^{M}_{su}|^{p} \right]^{1/p} \cdot \Big\| \sum_{u\leq t_{k}<t} X^{3,ij\ell}_{t_{k}t_{k+1}}\Big\|_{q} \nonumber \\
&\leq \omega^{M}(s,u)^{1/p} \cdot \max_{k,k'} \omega_{R}(D_{kk'})^{\frac{\varepsilon}{2\rho}} \cdot \omega_{R}([s,t]^{2})^{\frac{3-\varepsilon}{2\rho}} .
\end{align}
Observe that \eqref{eqn.drbd} corresponds to \eqref{eqn.debd} in the proof of Lemma \ref{lem:weighted-F}. Also notice that we have chosen $p$ (with a small enough $\varepsilon$) so that $\frac{1}{p} + \frac{3-\varepsilon}{2\rho}>1$. Otherwise stated, condition~\eqref{eqn.nu} holds in the current context. Therefore one can prove our claims \eqref{eq:weighted-zeta}, \eqref{eqn.yx3conv} and \eqref{eqn.yx3as} exactly as in Lemma \ref{lem:weighted-F}.
\end{proof}

\begin{remark}\label{remark.b}
Combining Lemma \ref{lemma.xf} with the above considerations, one can easily extend the conclusions of Lemma \ref{lem:weighted-g} to sums of the form
\begin{align*}
\sum_{s\leq t_{k}<t} \delta y_{st_{k}} X^{2,ij}_{t_{k}t_{k+1}} X^{1,\ell}_{t_{k}t_{k+1}}.
\end{align*}
Details are omitted for the sake of conciseness.
\end{remark}

\subsection{Convergence of the trapezoid rule}\label{sec:proof-main-thm}

With the previous preliminary results in hand, we are now ready to give a complete statement and carry out the proof of our main Theorem~\ref{thm:cvgce-trapezoid-intro}.

\begin{theorem}\label{theorem:Trapezoid-Rule}
Let $X$ be a centered $\mathbb R^{d}$-valued Gaussian process on $[0,T]$ with covariance function $R$. Suppose that Hypothesis \ref{hyp.x} holds true for $\rho\in [1,2)$.
Denote the rough path lift of $X$ by $\mathbf{X}=(X^1,X^2,X^3)$. Next consider an $\mathbb R^{d}$-valued controlled process $y$ of order $2$, according to Definition~\ref{def:ctrld-process}. Specifically, there exist processes $ y^{1}, y^{2}, r^{0}, r^{1}$ with regularities to be specified below and such that
\begin{align}\label{eqn.yctr}
\delta y_{st} =y_{s}^{1 }X_{st}^{1 }+y_{s}^{2 }X_{st}^{2 }+r_{st}^{0} , \qquad \delta y^{1}_{st} = y^{2}_{s}X^{1}_{st}+ r^{1}_{st},
\end{align}
where we recall again our Notation~\ref{not:tensor-products} on matrix products. For a given partition ${\mathcal P}=\{0=t_0<t_1<\cdots<t_{n+1}=T\}$ of $[0,T]$ with mesh size $|{\mathcal P}|$, we define the trapezoid rule:
\begin{equation}\label{eq:proof-what-to- prove}
\operatorname{tr-} \mathcal J_0^T(y,X)= \sum_{k=0}^n\frac{y_{t_k} +y_{t_{k+1}} }{2} \cdot X_{t_k t_{k+1}}^{1 },
\end{equation}
where we denote $y_{t} = ( y_{t}^1 ,\dots, y_{t}^{d} ) $ and simply write $(y_{t_k} +y_{t_{k+1}}) \cdot X_{t_k t_{k+1}}^{1 }$ for the inner product $\sum_{i=1}^{d} (y_{t_k}^{i} +y^{i}_{t_{k+1}}) \, X_{t_k t_{k+1}}^{1, i } $. Then the following holds true:

\noindent \emph{(i)} Assume that the framework of Definition \ref{def:ctrld-process} prevails and call $\omega$ the control function over $[0,T]$ such that for $\frac{1}{p}$ of the form $\frac{1-\varepsilon}{2\rho}$ we have
\begin{align}\label{eqn.yctrbd}
|r^{0}_{st}|\leq \omega(s,t)^{3/p}, \qquad |\delta y^{2}_{st}| \leq \omega(s,t)^{1/p}, \qquad |r^{1}_{st}|\leq \omega(s,t)^{2/p}.
\end{align}
Then as the mesh size $|{\mathcal P}|$ of the partition goes to $0$, we have
\begin{align}\label{eqn.trapcp}
\operatorname{tr-} \mathcal J_0^T(y,X)\rightarrow \int_0^T {y}_s \, d\mathbf{X}_s \qquad \text{in probability},
\end{align}
where the right-hand side above designates the rough integral of $y$ against $X$ as given in Proposition \ref{prop:intg-ctrld-process}.
\noindent\emph{(ii)} Assume that we are in a H\"older setting, that is, $\omega_{R}$ verifies \eqref{eqn.Rholder}. Then recall that $X$ generates a $\frac{1}{p}$-H\"older rough path, and we also assume that $r^{0}\in \mathcal C^{3/p}$, $r^{1}\in \mathcal C^{2/p}$ and $y,y^{1}, y^{2}\in \mathcal C^{1/p}$. Eventually, consider the uniform partition $0=t_{0}<\cdots<t_{n}=T$ of $[0,T]$ with mesh $T/n$. Then the convergence in \eqref{eqn.trapcp} holds almost surely.
\end{theorem}

\begin{proof}
Proceeding with the implicit assumption that we sum over $t_k\in {\mathcal P}$, and omitting the summation sign for notational convenience, we get
\begin{equation}\label{eq:trapezoid-first}
\operatorname{tr-} \mathcal J_0^T(y,X)= y_{t_{k} } \cdot X_{t_{k} t_{k+1}}^{1 } +\frac{1}{2}\delta y_{t_{k} t_{k+1}} \cdot X_{t_{k} t_{k+1}}^{1 }.
\end{equation}
Hence plugging the decomposition \eqref{eqn.yctr} into \eqref{eq:trapezoid-first} and invoking Notation~\ref{not:tensor-products} on matrix products with $m=d$, we obtain
\begin{align}\label{eqn.trexp}
\operatorname{tr-} \mathcal J_0^T(y,X) =& y_{t_{k} } \cdot X_{t_{k} t_{k+1}}^{1 } +\frac{1}{2}\Bigg(y_{t_{k} }^{1 }X_{t_{k} t_{k+1}}^{1 }+y_{t_{k} }^{2 }X_{t_{k} t_{k+1}}^{2 }+r_{t_{k} t_{k+1}}^{0 }\Bigg) \cdot X_{t_{k} t_{k+1}}^{1 }.
\end{align}
Let us rearrange the right-hand side above by setting
\begin{align*}
I_{1} &= y_{t_{k} } \cdot X_{t_{k} t_{k+1}}^{1 } + y^{1}_{t_{k}} \cdot X^{2}_{t_{k}t_{k+1}} + y^{2}_{t_{k}} \cdot X^{3}_{t_{k}t_{k+1}} \\
I_{2} &= \frac12 y^{1}_{t_{k}} X^{1}_{t_{k}t_{k+1}}\cdot X^{1}_{t_{k}t_{k+1}} - y^{1}_{t_{k}} \cdot X^{2}_{t_{k}t_{k+1}} \\
I_{3}&= \frac12 y^{2}_{t_{k}} X^{2}_{t_{k}t_{k+1}}\cdot X^{1}_{t_{k}t_{k+1}} - y^{2}_{t_{k}} \cdot X^{3}_{t_{k}t_{k+1}} \\
I_{4}&=\frac12 r^{0}_{t_{k}t_{k+1}} \cdot X^{1}_{t_{k}t_{k+1}},
\end{align*}
where we have written $u\cdot v$ for inner products of vectors as well as matrices. Then one can recast \eqref{eqn.trexp} as
\begin{align}\label{eqn.i1234}
\operatorname{tr-} \mathcal J_0^T(y,X) = I_{1}+I_{2}+I_{3}+I_{4} .
\end{align}
Now we analyze the terms $I_{1}$, \dots, $I_{4}$ in \eqref{eqn.i1234} in order to prove \eqref{eqn.trapcp}. We will focus on the assumptions and conclusions of item (i), item (ii) being treated along the same lines. First we observe that $I_{1}$ is exactly of the form \eqref{eqn.ri}, with $p<4$ and thus $m=3$. Hence a direct application of Proposition \ref{prop:intg-ctrld-process} yields the almost sure limit
\begin{align*}
I_{1} \to \int_{0}^{T} y \, d{\bf X} \qquad \text{as } |{\mathcal P}|\to 0.
\end{align*}
Let us now analyze the term $I_{2}$ in \eqref{eqn.i1234}. To this aim, an elementary examination of matrix indices reveals that
\begin{align}\label{eqn.i2o}
I_{2} = y^{1}_{t_{k}} \cdot \left( \frac12 X^{1}_{t_{k}t_{k+1}} \otimes X^{1}_{t_{k}t_{k+1}} - X^{2}_{t_{k}t_{k+1}} \right).
\end{align}
Furthermore, since $X$ is a geometric rough path, notice that a consequence of \eqref{eq:multiplicativity} is that for all $(s,t)\in \mathcal S_{2}([0,T])$ we have
\begin{align*}
\text{Sym} (X^{2}_{st}) = \frac12 X^{1}_{st}\otimes X^{1}_{st} .
\end{align*}
Hence one can write \eqref{eqn.i2o} as
\begin{align*}
I_{2} = - y^{1}_{t_{k}} \cdot \text{Antisym} (X^{2}_{t_{k}t_{k+1}}) = \frac12 \sum_{i,j=1}^{d} y_{t_{k}}^{1,ij} \left( X_{t_{k}t_{k+1}}^{2, ji} - X_{t_{k}t_{k+1}}^{2, ij} \right) .
\end{align*}
Thanks to our definition \eqref{F} of the increment $F$, this becomes
\begin{align}\label{eqn.i2n}
I_{2} = \frac12 \sum_{i,j=1}^{d} y_{t_{k}}^{1,ij} \left( \delta F^{ji}_{t_{k}t_{k+1}}-\delta F^{ij}_{t_{k}t_{k+1}} \right).
\end{align}
Hence owing to identity \eqref{eqn.i2n}, and since we have assumed that the increments of $y^{1}$ are dominated by the control $\omega$, Lemma \ref{lem:weighted-F} item (i) shows that
\begin{align*}
\lim_{|{\mathcal P}|\to 0} I_{2} = 0, \qquad \text{in probability.}
\end{align*}
We now handle the sum $I_{3}$ in \eqref{eqn.i1234}. To this aim, we resort to Lemma \ref{lem:weighted-g} item (i) for the terms $y_{t_{k}}^{2}\cdot X^{3}_{t_{k}t_{k+1}}$ and to Remark \ref{remark.b} for the terms $y^{2}_{t_{k}} X^{2}_{t_{k}t_{k+1}} \cdot X^{1}_{t_{k}t_{k+1}} $. We end up with
\begin{align*}
\lim_{|{\mathcal P}|\to 0} I_{3} = 0, \qquad \text{in probability.}
\end{align*}
In order to show the convergence in \eqref{eqn.trapcp}, it remains to show that $I_{4}\to 0$ in probability. Towards this aim, recall from Definition \ref{def:ctrld-process} that the increment $r^{0}_{t_{k}t_{k+1}}$ is dominated by $\omega (t_{k}, t_{k+1})^{3/p}$. Hence for a small $\varepsilon>0$ we get
\begin{align}\label{eqn.i4bd}
|I_{4}| \leq \frac12 \sum_{0\leq t_{k}<T} \left| r^{0}_{t_{k}t_{k+1}} \cdot X^{1}_{t_{k}t_{k+1}}\right| \leq \frac12 \sum_{0\leq t_{k}<T} \omega (t_{k}, t_{k+1})^{3/p} \|X \|_{p\text{-var}, [t_{k}, t_{k+1}]} \nonumber \\
\leq \frac12 \max_{k} \omega (t_{k}, t_{k+1})^{\varepsilon} \cdot \sum_{0\leq t_{k}<T} \omega (t_{k}, t_{k+1})^{3/p-\varepsilon} \|X \|_{p\text{-var}, [t_{k}, t_{k+1}]}.
\end{align}
Now set
\begin{align*}
\tilde{\omega}(s,t) = \omega (s,t)^{3/p-\varepsilon} \cdot\|X \|_{p\text{-var}, [s,t]} \qquad (s,t)\in \mathcal S_{2}.
\end{align*}
Using the same argument as for \eqref{eqn.drmbd}, it is easy to see that $\tilde{\omega}$ is a control. Therefore, by the super-additivity of $\tilde{\omega}$ we get
\begin{align*}
|I_{4}| \leq \frac12 \max_{k} \omega (t_{k}, t_{k+1})^{\varepsilon} \cdot \omega (0,T)^{3/p-\varepsilon}\cdot \|X \|_{p\text{-var}, [0,T]}.
\end{align*}
Since $\max_{k}\omega(t_{k}, t_{k+1})\to 0$ as $|{\mathcal P}|\to0$, it follows that $I_{4}\to 0$ almost surely. This completes the proof of \eqref{eqn.trapcp}. Moreover, recall that claim (ii) in our statement is obtained by slightly adapting the considerations above, similarly to what we have done in Lemma \ref{lem:weighted-F}. This completes the proof of our theorem.
\end{proof}

As mentioned in Theorem \ref{thm:cvgce-trapezoid-intro}, typical examples of controlled processes are given by solutions of rough differential equations and processes of the form $y=f(X)$. Hence one can apply Theorem \ref{theorem:Trapezoid-Rule} in order to get a trapezoid rule \eqref{eq:proof-what-to- prove} for $f(X)$. However, we would also like to consider Riemann sums which are closer to the ones handled in \cite{BNN, HN}. This is why we wish to consider sums of the form:
\begin{align}\label{eqn.m}
\text{m-}\mathcal{J}_{0}^{T} (f(X), X) := \sum_{k=0}^{n-1} f\left( \frac{X_{t_{k}}+X_{t_{k+1}}}{2} \right) \delta X_{t_{k}t_{k+1}}.
\end{align}
We now state a corollary of Theorem \ref{theorem:Trapezoid-Rule} giving the convergence of m-$\mathcal J_{0}^{T} (f(X), X)$ above.

\begin{cor}\label{cor.mid}
Let $X$ be as in Theorem \ref{theorem:Trapezoid-Rule}. Consider a function $f\in C^{3}_{b}(\mathbb R^{d} )$ and the midpoint rule m-$ \mathcal J_{0}^{T} (f(X), X)$ defined by \eqref{eqn.m}.
Then we have
\begin{align} \label{eqn.mid}
\emph{m-}\mathcal{J}_{0}^{T} (f(X),X) \to \int_{0}^{T} f(X_{s}) \, d{\bf X}_{s}
\end{align}
as the mesh size $|\mathcal{P}|\to 0$. As in Theorem \ref{theorem:Trapezoid-Rule}, the convergence holds in probability if $p$-variation regularity is considered, and almost surely if H\"older continuity is assumed.
\end{cor}

\begin{proof}
We first recall that for $a, b\in \mathbb R^{d}$ we have the following mean value identity
\begin{align}\label{eqn.mv}
\frac{f(a)+f(b)}{2}- f\Big(\frac{a+b}{2}\Big) = \frac12 \partial^{2}f (c) \Big(\frac{b-a}{2} \otimes \frac{b-a}{2}\Big),
\end{align}
where $c\in \mathbb R^{d} $ satisfies $c=a+\theta (b-a)$ for some $\theta\in [0,1]$. Let us take the difference of \eqref{eq:proof-what-to- prove} and \eqref{eqn.m} and then apply the mean value identity \eqref{eqn.mv} with $a=X_{t_{k}}$, $b=X_{t_{k+1}}$. Then we obtain
\begin{align}\label{eqn.tr-m}
\text{tr-}\mathcal{J}_{0}^{T} (f(X), X) - \text{m-}\mathcal{J}_{0}^{T} (f(X), X) = \frac18 \sum_{k=0}^{n-1} \partial^{2}f(c) \left( \delta X_{t_{k}t_{k+1}} \otimes \delta X_{t_{k}t_{k+1}} \right) \cdot \delta X_{t_{k}t_{k+1}} \,,
\end{align}
with $c = X_{t_{k}}+\theta \delta X_{t_{k}t_{k+1}}$. In order to prove \eqref{eqn.mid}, it suffices to show that the right-hand side of \eqref{eqn.tr-m} converges to zero. To this aim, we observe that
\begin{align}\label{eqn.fpp}
\partial^{2}f(c) = \partial^{2}f(X_{t_{k}}) + \theta \partial^{3}f(d) \delta X_{t_{k}t_{k+1}},
\end{align}
where $d = X_{t_{k}}+\lambda \delta X_{t_{k}t_{k+1}} $ for some $\lambda \in [0,1]$. Substituting \eqref{eqn.fpp} into the right-hand side of \eqref{eqn.tr-m}, we obtain two terms. It is then easy to see that one of the two terms is of the form $\sum_{0\leq t_{k}<T} y_{t_{k}} X^{3}_{t_{k}t_{k+1}}$ with $y =f(X)$. It then follows from Lemma \ref{lem:weighted-g} that it converges to zero.
The other term can be treated in a similar way as $I_{4}$ in \eqref{eqn.i4bd}, which completes the proof.
\end{proof}

\subsection{Applications}\label{sec:applications}

In this section we briefly list some important examples of Gaussian processes satisfying our standing Hypothesis \ref{hyp.x}. Notice that in the current paper we only require $V_{\rho}(R)<\infty$ with $\rho\in [1,2)$, which is a weaker condition than in \cite{JM}, and certainly weaker than in \cite{GOT}. Hence all the examples listed in those two references also apply to our context. We just highlight some of them below.

\begin{enumerate}[wide, labelwidth=!, labelindent=0pt, label=\textbf{(\roman*)}]
\setlength\itemsep{.1in}

\item The most obvious example is given by a fractional Brownian motion (fBm) $B^{H}$, for which the covariance function $R$ in \eqref{eq:def-covariance-X} is given by
\begin{align*}
R^{H}(s,t) = \frac12 \left( t^{2H}+s^{2H}-|t-s|^{2H} \right).
\end{align*}
Then $R^{H}$ satisfies Hypothesis \ref{hyp.x} whenever $H\in (\frac14, 1)$, with $\omega_{R}([s,t]^{2})=|t-s|$. One can also verify that \eqref{eqn.Rholder} holds.

\item If one considers a process $X$ given as $X=B^{H_{1}}+B^{H_{2}}$ with $H_{1}, H_{2}\in (\frac14, 1)$ and two independent $\mathbb R^{d}$-valued fBms $B^{H_{1}}$ and $B^{H_{2}}$, then one can also apply our main Theorem \ref{thm:cvgce-trapezoid-intro} to $X$, with $R(s,t) = R^{H_{1}}(s,t)+R^{H_{2}}(s,t)$.

\item The bifractional Brownian motion, introduced in \cite{HV}, is a centered Gaussian process whose covariance $R^{H,K}$ is given by
\begin{align*}
R^{H,K}(s,t) =\frac{1}{2^K}\left((t^{2H}+s^{2H})^K-|t-s|^{2HK}\right).
\end{align*}
This process generalizes fBm (obtained for $K=1$), and fulfills our Hypothesis \ref{hyp.x} whenever $HK\in (\frac14,1)$.
\item We refer to \cite{JM} for a thorough exploration of random Fourier series, some of which yield a control such that $\omega_{R}([s,t]^{2}) \neq |t-s|$, but which still satisfy Hypothesis \ref{hyp.x}.

\end{enumerate}

\begin{thebibliography}{9}

\bibitem{BFG} Bayer, C.; Friz, P. and Gatheral, J. (2016). Pricing under rough volatility. \textit{Quant. Finance} \textbf{16} no. 6, 887-904.

\bibitem{BNN} Binotto, G.; Nourdin, I. and Nualart, D. (2018). Weak symmetric integrals with respect to the fractional Brownian motion. \textit{Ann. Probab.} \textbf{46} no. 4, 2243-2267.

\bibitem{Alg-Renorm} Bruned, Y.; Hairer, M. and Zambotti, L. (2019). Algebraic renormalisation of regularity structures. \textit{Invent. Math.} \textbf{215} no. 3, 1039-1156.

\bibitem{Ch} Chen, K. (1954). Iterated integrals and exponential homomorphisms. \textit{Proc. London Math. Soc.} \textbf{3} no. 4, 502-512.

\bibitem{RF} Diehl, J.; Oberhauser, H. and Riedel, S. (2015). A L\'evy area between Brownian motion and rough paths with applications to robust nonlinear filtering and rough partial differential equations. \textit{Stochastic Process. Appl.} \textbf{125} no. 1, 161-181.

\bibitem{JM} Friz, P.; Gess, B.; Gulisashvili, A. and Riedel, S. (2016). The Jain-Monrad criterion for rough paths and applications to random Fourier series and non-Markovian H\"ormander theory. \textit{Ann. Probab.} \textbf{44} no. 1, 684-738.

\bibitem{HairerBook} Friz, P. and Hairer, M. (2014). \textit{A Course on Rough Paths. With an Introduction to Regularity Structures.} Universitext. Springer, Cham.

\bibitem{FrizBook} Friz, P. and Victoir, N. (2010). \textit{Multidimensional Stochastic Processes as Rough Paths. Theory and Applications.} Cambridge Studies in Advanced Mathematics, 120. Cambridge University Press, Cambridge.

\bibitem{FV} Friz, P. and Victoir, N. (2011). A note on higher dimensional $p$-variation. \textit{Electron. J. Probab.} \textbf{16} no.
68, 1880-1899.

\bibitem{GOT} Gess, B.; Ouyang, C. and Tindel, S. (2017). Density bounds for solutions to differential equations driven by Gaussian rough paths. \textit{Preprint.}

\bibitem{GNRV} Gradinaru, M.; Nourdin, I.; Russo, F. and Vallois, P. (2005). $m$-order integrals and generalized It\^o's formula: the case of a fractional Brownian motion with any Hurst index. \textit{Ann. Inst. H. Poincar\'e Probab. Statist.} \textbf{41} no. 4, 781-806.

\bibitem{ChineseChar} Graham, B. (2013). Sparse arrays of signatures for online character recognition. \textit{Preprint.}

\bibitem{HN} Harnett, D. and Nualart, D. (2012). Weak convergence of the Stratonovich integral with respect to a class of Gaussian processes. \textit{Stochastic Process. Appl.} \textbf{122} no. 10, 3460-3505.

\bibitem{HV} Houdr\'e, C. and Villa, J. (2003). An example of infinite dimensional quasi-helix. \textit{Contemporary Mathematics, Amer. Math. Soc.} \textbf{336}, 195-201.

\bibitem{HLN} Hu, Y.; Liu, Y. and Nualart, D. (2019). Crank-Nicolson scheme for stochastic differential equations driven by fractional Brownian motions. \textit{Ann. Appl. Probab.} To appear.

\bibitem{HLN2} Hu, Y.; Liu, Y. and Nualart, D. (2016). Rate of convergence and asymptotic error distribution of Euler approximation schemes for fractional diffusions. \textit{Ann. Appl. Probab.} \textbf{26} no. 2, 1147-1207.

\bibitem{HLN3} Hu, Y.; Liu, Y. and Nualart, D. (2016). Taylor schemes for rough differential equations and fractional diffusions. \textit{Discrete Contin. Dyn. Syst. Ser. B} \textbf{21} no. 9, 3115-3162.

\bibitem{LD} Ledoux, M.; Qian, Z. and Zhang, T. (2002). Large deviations and support theorem for diffusion processes via rough paths. \textit{Stochastic Process. Appl.} \textbf{102} no. 2, 265-283.

\bibitem{OneD} Liu, Y. and Tindel, S. (2019). Discrete rough paths and limit theorems.
\textit{Ann. Inst. Henri Poincar\'e Probab. Stat.} To appear.

\bibitem{Euler} Liu, Y. and Tindel, S. (2019). First-order Euler scheme for SDEs driven by fractional Brownian motions: the rough case. \textit{Ann. Appl. Probab.} \textbf{29} no. 2, 758-826.

\bibitem{Lyons} Lyons, T. (1998). Differential equations driven by rough signals. \textit{Rev. Mat. Iberoamericana} \textbf{14} no. 2, 215-310.

\bibitem{NN} Neuenkirch, A. and Nourdin, I. (2007). Exact rate of convergence of some approximation schemes associated to SDEs driven by a fractional Brownian motion. \textit{J. Theoret. Probab.} \textbf{20} no. 4, 871-899.

\bibitem{NuaBook} Nualart, D. (2006). \textit{The Malliavin Calculus and Related Topics.} Second edition.

\bibitem{NuaTin} Nualart, D. and Tindel, S. (2011). A construction of the rough path above fractional Brownian motion using Volterra's representation. \textit{Ann. Probab.} \textbf{39} no. 3, 1061-1096.

\bibitem{fBm} Nourdin, I. (2012). \textit{Selected Aspects of Fractional Brownian Motion.} Bocconi \& Springer Series, 4. Springer, Milan; Bocconi University Press, Milan.

\bibitem{NR} Nourdin, I. and R\'eveillac, A. (2008). Asymptotic behavior of weighted quadratic variations of fractional Brownian motion: the critical case $H=1/4$. \textit{Ann. Probab.} \textbf{37} no. 6, 2200-2230.

\bibitem{NRS} Nourdin, I.; R\'eveillac, A. and Swanson, J. (2010). The weak Stratonovich integral with respect to fractional Brownian motion with Hurst parameter $1/6$. \textit{Electron. J. Probab.} \textbf{15} no. 70, 2117-2162.

\bibitem{RV} Russo, F. and Vallois, P. (1993). Forward, backward and symmetric stochastic integration. \textit{Probab. Theory Related Fields} \textbf{97} no. 3, 403-421.

\bibitem{To} Towghi, N. (2002). Multidimensional extension of L. C. Young's inequality. \textit{J. Inequal. Pure Appl. Math.} \textbf{3} no. 2, Article 22, 13 pp. (electronic).

\bibitem{Young} Young, L. (1936).
An inequality of the H\"older type, connected with Stieltjes integration. \thetaxtit{Acta Math.} \thetaxtbf{67} no. 1, 251-282. \end{thebibliography} \end{document}
\begin{document} \title {Two-step inertial Bregman alternating structure-adapted proximal gradient descent algorithm for nonconvex and nonsmooth problems{\thanks{Supported by Scientific Research Project of Tianjin Municipal Education Commission (2022ZD007).}}} \author{ {\sc Chenzheng Guo{\thanks{Email: [email protected]}}, Jing Zhao{\thanks{Corresponding author. Email: [email protected]}}}\\ \small College of Science, Civil Aviation University of China, Tianjin 300300, China\\ } \date{} \maketitle{} {\bf Abstract.} In this paper, we propose an accelerated alternating structure-adapted proximal gradient descent method for a class of nonconvex and nonsmooth nonseparable problems. The proposed algorithm is a monotone method which combines two-step inertial extrapolation and Bregman distances. Under some assumptions, we prove that every cluster point of the sequence generated by the proposed algorithm is a critical point. Furthermore, with the help of the Kurdyka--{\L}ojasiewicz property, we establish the convergence of the whole sequence generated by the proposed algorithm. In order to make the algorithm more effective and flexible, we also use some strategies to update the extrapolation parameters and to solve problems with unknown Lipschitz constants. Moreover, we report some preliminary numerical results on nonconvex quadratic programming and sparse logistic regression to show the feasibility and effectiveness of the proposed methods. \vskip 0.4 true cm \noindent {\bf Key words}: Accelerated methods; Nonconvex and nonsmooth nonseparable optimization; Extrapolation; Bregman distance; Kurdyka--{\L}ojasiewicz property.
\section{Introduction} \hspace*{\parindent} In this paper, we consider the following nonconvex and nonsmooth nonseparable optimization problem: \begin{equation} \label{MP} \min_{x\in \mathbb R^{n} ,y\in\mathbb R^{m}} L(x,y)=f(x)+Q(x,y)+g(y), \end{equation} where $f:\mathbb R^{n}\to \mathbb R$, $g:\mathbb R^{m}\to \mathbb R$ are continuously differentiable and $Q:\mathbb R^{n}\times \mathbb R^{m}\to \mathbb R\cup \left \{ \infty \right \} $ is a proper, lower semicontinuous function. Note that here and throughout the paper, no convexity is assumed on the objective function. Problem \eqref{MP} arises in many application scenarios, such as signal recovery \cite{NNZC,GZZF,B}, nonnegative matrix factorization \cite{PT,LS}, quadratic fractional programming \cite{BCV,XX}, compressed sensing \cite{ABSS,DC}, sparse logistic regression \cite{XX} and so on. \par The natural method to solve problem \eqref{MP} is the alternating minimization (AM) method (also called the block coordinate descent (BCD) method), which, from a given initial point $\left ( x_{0},y_{0} \right ) \in \mathbb{R}^n \times \mathbb{R}^m $, generates the iterative sequence $\left \{ \left ( x_{k},y_{k} \right ) \right \} $ via the scheme: \begin{equation} \label{TTB} \begin{cases} x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{L(x,y_k)\},\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{L(x_{k+1},y)\}.\\ \end{cases} \end{equation} If $L(x,y)$ is convex and continuously differentiable, and strictly convex in each argument while the other is fixed, then the sequence converges to a critical point \cite{BN,BTL}.
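As a concrete illustration of the scheme \eqref{TTB}, the sketch below runs the alternating updates on the strictly convex toy objective $L(x,y)=x^2+(x-y)^2+(y-2)^2$ (an illustrative choice of ours, not an example from the paper), for which both block minimizations have closed forms:

```python
# Alternating minimization (AM) for the toy objective
# L(x, y) = x^2 + (x - y)^2 + (y - 2)^2.
# Both block subproblems are solved exactly in closed form.
def alternating_minimization(x0, y0, iters=100):
    x, y = x0, y0
    for _ in range(iters):
        x = y / 2.0          # argmin_x  x^2 + (x - y)^2
        y = (x + 2.0) / 2.0  # argmin_y  (x - y)^2 + (y - 2)^2
    return x, y

x, y = alternating_minimization(5.0, -3.0)
# The iterates converge to the unique critical point (2/3, 4/3).
```

Each sweep contracts the error by a factor of $1/4$ here, reflecting the strong convexity of each block subproblem.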
\par To relax the requirements of the AM method and remove the strict convexity assumption, Auslender \cite{A} introduced proximal terms to \eqref{TTB} for a convex function $L$: \begin{equation} \label{PAMA} \begin{cases} x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{L(x,y_k)+\frac{1}{2\lambda_k}\|x-x_k\|^2_2\},\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{L(x_{k+1},y)+\frac{1}{2\mu_k}\|y-y_k\|^2_2\},\\ \end{cases} \end{equation} where $\{\lambda_k\}_{k\in\mathbb{N}}$ and $\{\mu_k\}_{k\in\mathbb{N}}$ are positive sequences. The above proximal point method, which is called the proximal alternating minimization (PAM) algorithm, was further extended to nonconvex nonsmooth functions. For instance, in \cite{ABR}, {Attouch} et al. applied \eqref{PAMA} to the nonconvex problem \eqref{MP} and proved that the sequence generated by the proximal alternating minimization algorithm \eqref{PAMA} converges to a critical point. More convergence analysis of the proximal point method can be found in \cite{ARS,XY}. \par Because the proximal alternating minimization algorithm requires an exact solution at each iteration step, the subproblems can be very expensive if their minimizers are not available in closed form. The linearization technique is one of the effective ways to overcome the absence of an analytic solution to the subproblems. {Bolte} et al.
\cite{BST} proposed the following proximal alternating linearized minimization (PALM) algorithm under the condition that the coupling term $Q(x,y)$ is continuously differentiable: \begin{equation} \label{PALM} \begin{cases} x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{f(x)+\langle x,\nabla_xQ(x_k,y_k)\rangle+\frac{1}{2\lambda_k}\|x-x_k\|^2_2\},\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{g(y)+\langle y,\nabla_yQ(x_{k+1},y_k)\rangle+\frac{1}{2\mu_k}\|y-y_k\|^2_2\}.\\ \end{cases} \end{equation} The step sizes $\lambda_k$ and $\mu_k$ are restricted to $$\lambda_k\in (0,({\rm Lip}(\nabla _{x}Q(\cdot ,y_k) ))^{-1} ), \quad \mu_k\in (0,({\rm Lip}(\nabla _{y}Q(x_{k+1},\cdot ) ))^{-1} ),$$ where ``Lip'' denotes the Lipschitz constant of the function in parentheses. In this way, the solutions of some subproblems may be expressed in closed form or can be easily calculated. The global convergence result was established provided that $L(x,y)$ satisfies the Kurdyka--{\L}ojasiewicz property. \par When $f$ and $g$ are continuously differentiable, a natural idea is to linearize $f$ and $g$. {Nikolova} et al. \cite{NT} proposed the corresponding algorithm, called the alternating structure-adapted proximal gradient descent (ASAP) algorithm, with the following scheme: \begin{equation} \label{ASAP} \begin{cases} x_{k+1}\in \arg\underset{x\in \mathbb{R}^n}{\min}\{Q(x,y_k)+\langle x,\nabla f(x_k)\rangle+\frac{1}{2\sigma}\|x-x_k\|^2_2\},\\ y_{k+1}\in \arg\underset{y\in \mathbb{R}^m}{\min}\{Q(x_{k+1},y)+\langle y,\nabla g(y_k)\rangle+\frac{1}{2\tau}\|y-y_k\|^2_2\},\\ \end{cases} \end{equation} where $\sigma \in (0,({\rm Lip}(\nabla f))^{-1} ), \tau \in (0,({\rm Lip}(\nabla g))^{-1} )$. With the help of the Kurdyka--{\L}ojasiewicz property, they established the convergence of the whole sequence generated by \eqref{ASAP}.
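To make the ASAP updates concrete, the following sketch applies the scheme \eqref{ASAP} to a toy instance with $f(x)=x^2$, $Q(x,y)=(x-y)^2$ and $g(y)=(y-2)^2$ (a hypothetical example of ours, not from the paper). Here ${\rm Lip}(\nabla f)={\rm Lip}(\nabla g)=2$, so the step sizes must satisfy $\sigma,\tau<1/2$, and both linearized subproblems are strongly convex quadratics with closed-form solutions:

```python
# ASAP iteration for f(x) = x^2, Q(x, y) = (x - y)^2, g(y) = (y - 2)^2.
# f and g are linearized at the current iterate while Q is kept exact.
def asap(x0, y0, sigma=0.4, tau=0.4, iters=300):
    x, y = x0, y0
    for _ in range(iters):
        # argmin_x Q(x, y_k) + <x, f'(x_k)> + (1/(2 sigma)) (x - x_k)^2
        x_new = (2.0 * y - 2.0 * x + x / sigma) / (2.0 + 1.0 / sigma)
        # argmin_y Q(x_{k+1}, y) + <y, g'(y_k)> + (1/(2 tau)) (y - y_k)^2
        y = (2.0 * x_new - (2.0 * y - 4.0) + y / tau) / (2.0 + 1.0 / tau)
        x = x_new
    return x, y

x, y = asap(5.0, -3.0)
# The iterates approach the unique critical point (2/3, 4/3).
```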
\par The inertial extrapolation technique has been widely used to accelerate iterative algorithms for convex and nonconvex optimization, since the cost of each iteration stays basically unchanged \cite{OCB,BC}. The inertial scheme, starting from the so-called heavy ball method of {Polyak} \cite{PB}, was recently proved to be very efficient in accelerating numerical methods, especially first-order methods. Alvarez et al. \cite{AA} applied the inertial strategy to the proximal point method and proved that it could improve the rate of convergence. The main feature of the idea is that the new iterate uses the previous two or more iterates. \par Based on \eqref{PALM}, {Pock and Sabach \cite{PS}} proposed the following inertial proximal alternating linearized minimization (iPALM) algorithm: \begin{equation} \label{iPALM} \begin{cases} u_{1k}=x_k+\alpha _{1k}(x_k-x_{k-1}), v_{1k}=x_k+\beta _{1k}(x_k-x_{k-1}),\\ x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{f(x)+\langle x,\nabla_xQ(v_{1k},y_k)\rangle+\frac{1}{2\lambda_k}\|x-u_{1k}\|^2_2\},\\ u_{2k}=y_k+\alpha _{2k}(y_k-y_{k-1}), v_{2k}=y_k+\beta _{2k}(y_k-y_{k-1}),\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{g(y)+\langle y,\nabla_yQ(x_{k+1},v_{2k})\rangle+\frac{1}{2\mu_k}\|y-u_{2k}\|^2_2\},\\ \end{cases} \end{equation} where $\alpha _{1k},\alpha _{2k},\beta _{1k},\beta _{2k}\in \left [ 0,1 \right ] $. They proved that the generated sequence globally converges to a critical point of the objective function under the Kurdyka--{\L}ojasiewicz property. When $\alpha _{1k}\equiv \alpha _{2k}\equiv \beta _{1k}\equiv \beta _{2k}\equiv 0$, iPALM reduces to PALM. Then {Cai} et al.
\cite{GCH} presented a Gauss--Seidel type inertial proximal alternating linearized minimization (GiPALM) algorithm for solving problem \eqref{MP}: \begin{equation} \label{GiPALM} \begin{cases} x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{f(x)+\langle x,\nabla_xQ(\tilde{x}_{k},\tilde{y}_k)\rangle+\frac{1}{2\lambda_k}\|x-\tilde{x}_{k}\|^2_2\},\\ \tilde{x}_{k+1}=x_{k+1}+\alpha (x_{k+1}-\tilde{x}_{k}), \alpha \in [0,1),\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{g(y)+\langle y,\nabla_yQ(\tilde{x}_{k+1},\tilde{y}_{k})\rangle+\frac{1}{2\mu_k}\|y-\tilde{y}_{k}\|^2_2\},\\ \tilde{y}_{k+1}=y_{k+1}+\beta(y_{k+1}-\tilde{y}_{k}), \beta \in [0,1). \end{cases} \end{equation} \par By using the inertial extrapolation technique, {Xu} et al. \cite{XX} proposed the following accelerated alternating structure-adapted proximal gradient descent (aASAP) algorithm: \begin{equation} \label{aASAP} \begin{cases} x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{Q(x,\hat{y} _k)+\langle \nabla f(\hat{x}_k),x\rangle+\frac{1}{2\sigma}\|x-\hat{x} _k\|^2_2\},\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{Q(x_{k+1},y)+\langle \nabla g(\hat{y} _k),y\rangle+\frac{1}{2\tau}\|y-\hat{y}_k\|^2_2\},\\ u_{k+1}=x_{k+1}+\beta_{k}(x_{k+1}-x_{k}), v_{k+1}=y_{k+1}+\beta _{k}(y_{k+1}-y_{k}),\\ {\rm if} \ L(u_{k+1},v_{k+1})\le L(x_{k+1},y_{k+1}), \ {\rm then} \ \hat{x} _{k+1}=u_{k+1}, \hat{y} _{k+1}=v_{k+1},\\ {\rm else} \ \hat{x} _{k+1}=x_{k+1}, \hat{y} _{k+1}=y_{k+1}. \end{cases} \end{equation} Compared with traditional extrapolation algorithms, the main difference is that monotonicity of the objective function values is guaranteed, while general extrapolation algorithms may be nonmonotone. \par The Bregman distance is a useful substitute for a distance, obtained from various choices of generating functions. Using a Bregman distance instead of the norm gives us more flexibility in the selection of the regularization.
Choosing an appropriate Bregman distance can yield a closed-form solution of some subproblems. Bregman distance regularization is also an effective way to improve the numerical results of the algorithm. In {\cite{ZQ}}, the authors constructed the following two-step inertial Bregman alternating minimization algorithm using the information of the previous three iterates: \begin{equation} \label{TiBAM} \begin{cases} x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{L(x,y_k)+D_{\phi_1}(x,x_k)+\alpha_{1k} \langle x,x_{k-1}-x_k\rangle+\alpha_{2k} \langle x,x_{k-2}-x_{k-1}\rangle\},\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{L(x_{k+1},y)+D_{\phi_2}(y,y_k)+\beta_{1k} \langle y,y_{k-1}-y_k\rangle+\beta_{2k} \langle y,y_{k-2}-y_{k-1}\rangle\}, \end{cases} \end{equation} where $D_{\phi_i}(i=1,2)$ denotes the Bregman distance with respect to $\phi_i(i=1,2)$, respectively. The convergence is obtained provided an appropriate regularization of the objective function satisfies the Kurdyka--{\L}ojasiewicz inequality. Based on the alternating minimization algorithm, {Zhao} et al. \cite{ZA} proposed the following inertial alternating minimization with Bregman distance (BIAM) algorithm: \begin{equation} \label{BIAM} \begin{cases} x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{f(x)+Q(x,\hat{y} _k)+\lambda _{k} D_{\phi_1}(x,\hat{x} _k)\},\\ \hat{x}_{k+1}=x_{k+1}+\alpha(x_{k+1}-\hat x_{k}), \alpha \in [0,1),\\ y_{k+1}\in \arg\min_{ y\in \mathbb{R}^m}\{g(y)+Q(\hat{x}_{k+1},y)+\mu _{k} D_{\phi_2}(y,\hat{y} _k)\},\\ \hat{y}_{k+1}=y_{k+1}+\beta(y_{k+1}-\hat y_{k}), \beta \in [0,1). \end{cases} \end{equation} Assuming that the objective function satisfies the Kurdyka--{\L}ojasiewicz property and that the parameters are selected appropriately, they proved the convergence of the BIAM algorithm.
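The Bregman proximal terms $D_{\phi_i}$ appearing in the schemes above can be sketched numerically as follows; the generating functions $\phi$ below are illustrative choices of ours, not ones used in the paper. For $\phi=\frac{1}{2}\|\cdot\|^2$ the Bregman distance reduces to the squared Euclidean half-distance, and for a $\theta$-strongly convex $\phi$ it dominates $\frac{\theta}{2}\|x-y\|^2$:

```python
import numpy as np

# Bregman distance D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>.
def bregman(phi, grad_phi, x, y):
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# phi(x) = 1/2 ||x||^2 recovers 1/2 ||x - y||^2 exactly.
phi_sq = lambda x: 0.5 * np.dot(x, x)
grad_sq = lambda x: x

x = np.array([1.0, -2.0]); y = np.array([0.5, 3.0])
d = bregman(phi_sq, grad_sq, x, y)  # equals 0.5 * ||x - y||^2

# phi(x) = 1/2 ||x||^2 + 1/4 sum x_i^4 is 1-strongly convex, so
# D_phi(x, y) >= 1/2 ||x - y||^2 (strong convexity bound with theta = 1).
phi_quartic = lambda x: 0.5 * np.dot(x, x) + 0.25 * np.sum(x ** 4)
grad_quartic = lambda x: x + x ** 3
d2 = bregman(phi_quartic, grad_quartic, x, y)
```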
\par In this paper, based on the alternating structure-adapted proximal gradient method, we combine the two-step inertial extrapolation technique and Bregman distances to construct a two-step inertial Bregman alternating structure-adapted proximal gradient descent algorithm. In order to make the proposed algorithm more effective and flexible, we also use some strategies to update the extrapolation parameters and to solve problems with unknown Lipschitz constants. Under some assumptions on the penalty parameters and the objective function, the convergence of the proposed algorithm is obtained based on the Kurdyka--{\L}ojasiewicz property. Moreover, we report some preliminary numerical results on nonconvex quadratic programming and sparse logistic regression to show the feasibility and effectiveness of the proposed method. \par The article is organized as follows. In Section \ref{sect2}, we recall some concepts and important lemmas which will be used in the proof of the main results. In Section \ref{sect3}, we present the two-step inertial Bregman alternating structure-adapted proximal gradient descent algorithm and show its convergence. Finally, in Section \ref{sect4}, preliminary numerical examples on nonconvex quadratic programming and sparse logistic regression are provided to illustrate the behavior of the proposed algorithm. \section{Preliminaries}\label{sect2} \hspace*{\parindent} Consider the Euclidean vector space $\mathbb{R}^d$ of dimension $d\geq 1$; the standard inner product and the induced norm on $\mathbb{R}^d$ are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|_2$, respectively. We use $\omega(x_k)=\{x:\exists x_{k_j} \rightarrow x\}$ to stand for the limit set of $\{x_k\}_{k\in \mathbb{N}}$. \par The {\it domain} of $f$ is defined by dom$f:=\{x\in \mathbb{R}^d: f(x)<+\infty\}$.
We say that $f$ is {\it proper} if dom$f\neq \emptyset$, and $f$ is called {\it lower semicontinuous} at $x$ if $f(x)\le \liminf_{k\to \infty }f(x_k)$ for every sequence $\left \{ x_k \right \} $ converging to $x$. If $f$ is lower semicontinuous in its domain, we say $f$ is a lower semicontinuous function. If dom$f$ is closed and $f$ is lower semicontinuous over dom$f$, then $f$ is a closed function. Further, we recall some generalized subdifferential notions and the basic properties which are needed in this paper. \subsection{Subdifferentials} \begin{definition} \rm \label{Def211}(Subdifferentials) Let $f: \mathbb{R}^d \rightarrow(-\infty,+\infty]$ be a proper and lower semicontinuous function. (i) For $x\in$ {\rm dom}$f $, the Fr\'{e}chet subdifferential of $f$ at $x$, written $\hat{\partial}f(x)$, is the set of vectors $v\in \mathbb{R}^d$ which satisfy $$\liminf_{y\rightarrow x}\frac{1}{\|x-y\|_2}[f(y)-f(x)-\langle v,y-x\rangle]\geq 0.$$ (ii) If $x\not\in$ {\rm dom}$f$, then $\hat{\partial}f(x)=\emptyset$. The limiting subdifferential \cite{MV}, or simply the subdifferential for short, of $f$ at $x\in$ {\rm dom}$f$, written $\partial f(x)$, is defined as follows: $$\partial f(x):=\{v\in \mathbb{R}^d: \exists x_k\rightarrow x, f(x_k)\rightarrow f(x), v_k\in \hat{\partial}f(x_k), v_k\rightarrow v\}.$$ \end{definition} \begin{remark} \label{rem21} \rm (a) The above definition implies that $\hat{\partial}f(x)\subseteq \partial f(x)$ for each $x\in \mathbb{R}^d$, where the first set is convex and closed while the second one is closed (see \cite{RWA}). (b) (Closedness of $\partial f$) Let $\{x_k\}_{k\in \mathbb{N}}$ and $\{v_k\}_{k\in \mathbb{N}}$ be sequences in $\mathbb{R}^d$ such that $v_k \in \partial f(x_k)$ for all $k\in \mathbb{N}$. If $(x_k,v_k)\rightarrow (x,v)$ and $f(x_k)\rightarrow f(x)$ as $k\rightarrow \infty$, then $v \in \partial f (x)$.
(c) If $f: \mathbb{R}^d \rightarrow(-\infty,+\infty]$ is proper and lower semicontinuous and $h : \mathbb{R}^d \rightarrow \mathbb{R}$ is a continuously differentiable function, then $\partial(f+h)(x) = \partial f (x)+\nabla h(x)$ for all $x \in \mathbb{R}^d$. \end{remark} In what follows, we will consider the problem of finding a critical point $(x^\ast,y^\ast)\in$dom$L$. \begin{lemma}{\rm (Fermat's rule \cite{RW})} \label{lem22} Let $f: \mathbb{R}^{d}\rightarrow \mathbb{R}\cup \left \{ + \infty \right \}$ be a proper lower semicontinuous function. If $f$ has a local minimum at $x^{\ast }$, then $0\in \partial f(x^{\ast })$. \end{lemma} \par We call $x^{\ast }$ a critical point of $f$ if $0\in \partial f(x^{\ast })$. The set of all critical points of $f$ is denoted by crit$f$. \begin{lemma}{\rm (Descent lemma \cite{BT})} \label{lem23} Let $h: \mathbb{R}^{d}\rightarrow \mathbb{R}$ be a continuously differentiable function whose gradient $\nabla h$ is $L_{h}$-Lipschitz continuous. Then \begin{equation} \label{(2.2)} h\left ( u \right ) \le h\left ( v \right ) + \left \langle u-v,\nabla h\left ( v \right ) \right \rangle+\frac{L_{h}}{2}\left \| u-v \right \| ^{2}_2 ,\ \forall u,v\in \mathbb R^{d}. \end{equation} \end{lemma} \subsection{The Kurdyka--{\L}ojasiewicz property} \ \par In this section, we recall the K{\L} property, which plays a central role in the convergence analysis. \begin{definition} \rm \label{Def21}(Kurdyka--{\L}ojasiewicz property \cite{ABR}) Let $f: \mathbb{R}^d \rightarrow(-\infty,+\infty]$ be a proper and lower semicontinuous function.
(i) The function $f: \mathbb{R}^d \rightarrow(-\infty,+\infty]$ is said to have the Kurdyka--{\L}ojasiewicz (K{\L}) property at $x^\ast\in$dom$f$ if there exist $\eta\in (0,+\infty]$, a neighborhood $U$ of $x^\ast$ and a continuous concave function $\varphi:[0,\eta)\rightarrow \mathbb{R}_{+}$ such that $\varphi(0)=0$, $\varphi$ is $C^1$ on $(0,\eta)$, $\varphi'(s)>0$ for all $s\in(0,\eta)$, and for all $x$ in $U\cap[f(x^\ast)<f<f(x^\ast)+\eta]$ the Kurdyka--{\L}ojasiewicz inequality holds, $$\varphi'(f(x)-f(x^\ast)){\rm dist}(0,\partial f(x))\geq 1.$$ (ii) Proper lower semicontinuous functions which satisfy the Kurdyka--{\L}ojasiewicz inequality at each point of {their domain} are called K{\L} functions. \end{definition} \begin{lemma}{\rm (Uniformized K{\L} property \cite{RW})} \label{lem21} Let $\Omega $ be a compact set and let $f: \mathbb{R}^{d}\rightarrow \mathbb{R}\cup \left \{ + \infty \right \}$ be a proper and lower semicontinuous function. Assume that $f$ is constant on $\Omega $ and satisfies the K{\L} property at each point of $\Omega $. Then, there exist $\varepsilon > 0,\eta> 0$ and $\varphi \in \Phi _{\eta } $ such that for all $x^\ast \in \Omega $ and for all $x\in \left \{ x\in\mathbb{R}^d :{\rm dist}(x,\Omega )< \varepsilon \right \} \cap[f(x^\ast)<f<f(x^\ast)+\eta]$, one has $$\varphi'(f(x)-f(x^\ast)){\rm dist}(0,\partial f(x))\geq 1.$$ \end{lemma} There is a broad class of functions satisfying the K{\L} property, such as strongly convex functions, real analytic functions, semi-algebraic functions \cite{ABR}, subanalytic functions \cite{WCX}, log-exp functions and so on.
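For a concrete illustration (a standard computation, not taken from the paper), the function $f(x)=x^2$ satisfies the K{\L} inequality at $x^\ast=0$ with the desingularizing function $\varphi(s)=\sqrt{s}$:

```latex
% KL inequality for f(x) = x^2 at x* = 0, with phi(s) = sqrt(s),
% so phi'(s) = 1/(2 sqrt(s)) and dist(0, \partial f(x)) = |2x|:
\varphi'\bigl(f(x)-f(0)\bigr)\,\mathrm{dist}\bigl(0,\partial f(x)\bigr)
  = \frac{1}{2\sqrt{x^{2}}}\cdot |2x|
  = \frac{2|x|}{2|x|} = 1 \geq 1
  \qquad \text{for all } x \neq 0,
```

so the inequality holds on any punctured neighborhood of $0$ intersected with $[0<f<\eta]$.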
\subsection{Bregman distance} \begin{definition} \rm { A function $f$ is said to be convex if dom$f$ is a convex set and if, for all $x$, $y\in$dom$f$, $\alpha\in[0,1]$, $$f(\alpha x+(1-\alpha)y)\leq \alpha f(x)+(1-\alpha)f(y).$$ $f$ is said to be $\theta$-strongly convex with $\theta> 0$ if $f-\frac{\theta}{2}\|\cdot\|^2$ is convex, i.e., $$f(\alpha x+(1-\alpha)y)\leq \alpha f(x)+(1-\alpha)f(y)-\frac{1}{2}\theta\alpha(1-\alpha)\|x-y\|^2$$ for all $x$, $y\in$dom$f$ and $\alpha\in[0,1]$.} \end{definition} Suppose that the function $f$ is differentiable. Then $f$ is convex if and only if dom$f$ is a convex set and $$f(x)\ge f(y)+\langle \nabla f(y),x-y\rangle$$ holds for all $x$, $y\in$dom$f$. Moreover, $f$ is $\theta$-strongly convex with $\theta> 0$ if and only if $$f(x)\ge f(y)+\langle \nabla f(y),x-y\rangle+\frac{\theta}{2}\|x-y\|^2$$ for all $x$, $y\in$dom$f$. \begin{definition} \rm \label{Defbreg} Let $\phi:\mathbb{R}^d \rightarrow(-\infty,+\infty]$ be a convex and G\^{a}teaux differentiable function. The function $D_\phi :$ dom$\phi\,\,\times$ int\,dom$\phi \rightarrow [0,+\infty)$, defined by $$D_\phi(x,y)=\phi(x)-\phi(y)-\langle \nabla\phi(y),x-y\rangle,$$ is called the Bregman distance with respect to $\phi$. \end{definition} From the above definition, it follows that \begin{equation} \label{(2.1)} D_\phi(x,y)\geq\frac{\theta}{2}\|x-y\|^2, \end{equation} if $\phi$ is $\theta$-strongly convex. \section{Algorithm and convergence analysis}\label{sect3} \begin{Assumption} \label{Assumption31} \rm (i) $L:\mathbb R^{n}\times \mathbb R^{m}\to \mathbb R\cup \left \{ \infty \right \} $ is lower bounded. (ii) $f:\mathbb R^{n}\to \mathbb R$ and $g:\mathbb R^{m}\to \mathbb R$ are continuously differentiable and their gradients $\nabla f$ and $\nabla g$ are Lipschitz continuous with constants $L_{\nabla f}$, $L_{\nabla g}$, respectively. (iii) $Q:\mathbb R^{n}\times \mathbb R^{m}\to \mathbb R\cup \left \{ \infty \right \} $ is proper and lower semicontinuous.
(iv) For $i=1,2$, $\phi_i$ is a $\theta_i$-strongly convex differentiable function with $\theta_1>L_{\nabla f}$, $\theta_2>L_{\nabla g}$, and the gradient $\nabla \phi_i$ is $\eta_i$-Lipschitz continuous, i.e., \begin{equation} \label{(33.2)} \aligned &\left \| \nabla{\phi_1}(x) -\nabla{\phi_1}(\hat{x})\right \|\le \eta_1 \|x-\hat{x}\|,\ \forall x,\hat{x}\in \mathbb R^{n},\\ &\left \| \nabla{\phi_2}(y) -\nabla{\phi_2}(\hat{y})\right \|\le \eta_2 \|y-\hat{y}\|,\ \forall y,\hat{y}\in \mathbb R^{m}. \endaligned \end{equation} \end{Assumption} \subsection{The proposed algorithm} \begin{algorithm}[H] \caption{TiBASAP: Two-step inertial Bregman alternating structure-adapted proximal gradient descent algorithm}\label{alg1} \begin{algorithmic} \Require Take $(x_0,y_0)\in \mathbb{R}^n\times \mathbb{R}^m, (\hat{x} _0,\hat{y} _0)=(x_0,y_0), \alpha _k\in [0,\alpha _{\max}], \beta _k\in [0,\beta _{\max}], \alpha _{\max}+\beta _{\max}<1, k=0.$\\ 1. \noindent{\bf Compute} \begin{equation} \label{(3.1)} \begin{cases} \aligned x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{&Q(x,\hat{y} _k)+\langle \nabla f(\hat{x}_k),x\rangle+D_{\phi_1}(x,\hat{x} _k)\},\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{&Q(x_{k+1},y)+\langle \nabla g(\hat{y} _k),y\rangle+D_{\phi_2}(y,\hat{y} _k)\}. \endaligned \end{cases} \end{equation} 2. \begin{equation} \label{(3.2)} \binom{u_{k+1}}{v_{k+1}} =\binom{x_{k+1}}{y_{k+1}}+\alpha _k\binom{x_{k+1}-x_k}{y_{k+1}-y_k}+\beta_k\binom{x_{k}-x_{k-1}}{y_{k}-y_{k-1}}. \end{equation} 3. \noindent{\bf If} $L(u_{k+1},v_{k+1})\le L(x_{k+1},y_{k+1})$, \noindent{\bf then} \begin{equation} \label{(3.3)} \hat{x} _{k+1}=u_{k+1},\ \hat{y} _{k+1}=v_{k+1}, \end{equation} \noindent{\bf else} \begin{equation} \label{(3.4)} \hat{x} _{k+1}=x_{k+1},\ \hat{y} _{k+1}=y_{k+1}. \end{equation} 4. \noindent{\bf Set} $k\gets k+1$, go to step 1. \end{algorithmic} \end{algorithm} {\begin{remark}\rm We discuss the relation of Algorithm \ref{alg1} to other existing algorithms from the literature.
\begin{itemize} \item[(i)] If we take $\phi_1(x)=\frac{1}{2\sigma}\|x\|^2_2$ and $\phi_2(y)=\frac{1}{2\tau}\|y\|^2_2$ for all $x\in \mathbb{R}^n$ and $y\in \mathbb{R}^m$, then Algorithm \ref{alg1} becomes the following iterative method: \begin{equation} \label{(3.0)} \begin{cases} x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{Q(x,\hat{y} _k)+\langle \nabla f(\hat{x}_k),x\rangle+\frac{1}{2\sigma}\|x-\hat{x} _k\|^2_2\},\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{Q(x_{k+1},y)+\langle \nabla g(\hat{y} _k),y\rangle+\frac{1}{2\tau}\|y-\hat{y}_k\|^2_2\},\\ u_{k+1}=x_{k+1}+\alpha_{k}(x_{k+1}-x_{k})+\beta_{k}(x_{k}-x_{k-1}),\\ v_{k+1}=y_{k+1}+\alpha_{k}(y_{k+1}-y_{k})+\beta_{k}(y_{k}-y_{k-1}),\\ {\rm if} \ L(u_{k+1},v_{k+1})\le L(x_{k+1},y_{k+1}), \ {\rm then} \ \hat{x} _{k+1}=u_{k+1}, \hat{y} _{k+1}=v_{k+1},\\ {\rm else} \ \hat{x} _{k+1}=x_{k+1}, \hat{y} _{k+1}=y_{k+1}. \end{cases} \end{equation} \item[(ii)] Letting $\beta_{k}\equiv 0$ for all $k\geq 0$, \eqref{(3.0)} becomes the accelerated alternating structure-adapted proximal gradient descent (aASAP) algorithm \eqref{aASAP}. \item[(iii)] Letting $\alpha_{k}\equiv\beta_{k}\equiv 0$ for all $k\geq 0$, \eqref{(3.0)} becomes the alternating structure-adapted proximal gradient descent (ASAP) algorithm \eqref{ASAP}. \end{itemize} \end{remark}} {\begin{remark}\rm Compared with the traditional extrapolation algorithms, the main difference is step 3, which ensures that the algorithm is a monotone method in terms of the objective function value, while general extrapolation algorithms may be nonmonotone. \end{remark}} \par For the extrapolation parameters $\alpha _k$ and $\beta _k$, there are at least two ways to choose them: either as constants or by a dynamic update. For example, in \cite{LLA,ZZ} they were defined as \begin{equation} \label{fista} \begin{cases} \alpha _k=\beta_k=\frac{t_{k-1}-1}{2t_k},\\ t_{k+1}=\frac{1+\sqrt{1+4t_{k}^{2}}}{2},\\ \end{cases} \end{equation} where $t_{-1}=t_0=1$.
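The dynamic update \eqref{fista} can be sketched as follows; with $t_{-1}=t_0=1$ the parameters start at $\alpha_0=\beta_0=0$ and increase toward $1/2$, so that $\alpha_k+\beta_k=2\alpha_k$ stays below the bound $\alpha_{\max}+\beta_{\max}<1$ required by the algorithms:

```python
import math

# FISTA-type extrapolation parameters: alpha_k = beta_k = (t_{k-1} - 1) / (2 t_k),
# with t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2 and t_{-1} = t_0 = 1.
def fista_parameters(n):
    t_prev, t = 1.0, 1.0  # t_{-1}, t_0
    alphas = []
    for _ in range(n):
        alphas.append((t_prev - 1.0) / (2.0 * t))
        t_prev, t = t, (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return alphas

alphas = fista_parameters(1000)
# alpha_0 = 0, every alpha_k is strictly below 1/2 (since t_{k-1} - 1 < t_k),
# and alpha_k tends to 1/2 because t_k grows roughly like k/2.
```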
In order to make Algorithm \ref{alg1} more effective, we present an adaptive method to update the extrapolation parameters $\alpha _k$, $\beta _k$, which is given in Algorithm \ref{alg2}. \begin{algorithm}[H] \caption{Two-step inertial Bregman alternating structure-adapted proximal gradient descent with adaptive extrapolation parameter algorithm}\label{alg2} \begin{algorithmic} \Require Take $(x_0,y_0)\in \mathbb{R}^n\times \mathbb{R}^m,\ (\hat{x} _0, \hat{y} _0)=(x_0,y_0), \ \alpha _0\in [0,\alpha _{\max}], \ \beta _0\in [0,\beta _{\max}], \ \alpha _{\max}+\beta _{\max}<1, \ t>1, \ k=0.$\\ 1. \noindent{\bf Compute} \begin{equation} \label{(3.5)} \begin{cases} \aligned x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{&Q(x,\hat{y} _k)+\langle \nabla f(\hat{x}_k),x\rangle+D_{\phi_1}(x,\hat{x} _k)\},\\ y_{k+1}\in \arg\min_{y\in \mathbb{R}^m}\{&Q(x_{k+1},y)+\langle \nabla g(\hat{y} _k),y\rangle+D_{\phi_2}(y,\hat{y} _k)\}. \endaligned \end{cases} \end{equation} 2. \begin{equation} \label{(3.6)} \binom{u_{k+1}}{v_{k+1}} =\binom{x_{k+1}}{y_{k+1}}+\alpha _k\binom{x_{k+1}-x_k}{y_{k+1}-y_k}+\beta_k\binom{x_{k}-x_{k-1}}{y_{k}-y_{k-1}}. \end{equation} 3. \noindent{\bf If} $L(u_{k+1},v_{k+1})\le L(x_{k+1},y_{k+1})$, \noindent{\bf then} \begin{equation} \label{(3.7)} \aligned &\hat{x} _{k+1}=u_{k+1},\ \hat{y} _{k+1}=v_{k+1},\\ &\alpha_{k+1}=\min\left \{ t\alpha _k,\alpha _{\max} \right \}, \\ &\beta_{k+1}=\min\left \{ t\beta _k,\beta _{\max} \right \}, \endaligned \end{equation} \noindent{\bf else} \begin{equation} \label{(3.8)} \aligned &\hat{x} _{k+1}=x_{k+1},\ \hat{y} _{k+1}=y_{k+1},\\ &\alpha _{k+1}=\frac{\alpha _k}{t},\\ &\beta _{k+1}=\frac{\beta _k}{t}. \endaligned \end{equation} 4. \noindent{\bf Set} $k\gets k+1$, go to step 1. \end{algorithmic} \end{algorithm} {\begin{remark}\rm Compared with constant parameters or the dynamic update \eqref{fista}, the adaptive extrapolation parameters $\alpha _k$ and $\beta _k$ in Algorithm \ref{alg2} may allow more extrapolation steps.
The numerical results will verify the effectiveness of the adaptive strategy in Section \ref{sect4}. \end{remark}} {\begin{remark}\rm \par A possible drawback of Algorithm \ref{alg1} and Algorithm \ref{alg2} is that the Lipschitz constants $L_{\nabla f}$, $L_{\nabla g}$ are not always known or computable. However, the Lipschitz constants determine the range of the strong convexity moduli of $\phi_1$ and $\phi_2$. In particular, if $D_{\phi_i}=\frac{1}{2} \left \| \cdot \right \| ^2,\ (i=1,2)$, the Lipschitz constants $L_{\nabla f}$, $L_{\nabla g}$ determine the range of the stepsizes $\sigma, \tau$ in \eqref{(3.0)}. Even if the Lipschitz constant is known, it is usually large, which makes the strong convexity moduli $\theta_1$, $\theta_2$ very small. Therefore, in order to improve the efficiency of the algorithm, a method for estimating a proper local Lipschitz constant will be given below. \end{remark}} A backtracking method is proposed to evaluate the local Lipschitz constant. We adopt the Barzilai--Borwein (BB) method \cite{BB} with a lower bound to initialize the stepsize at each iteration. The procedure of computing the stepsize of the $x$-subproblem is shown in the following box.\\ \noindent \fbox{\parbox{\textwidth}{ \noindent{\bf Backtracking with BB method with lower bound} \par \ \ \noindent{\bf Compute the BB stepsize:} set $t_{\min}>0$, \begin{equation*} \aligned &l_k=\nabla f(x_k)-\nabla f(\hat{x}_{k-1}),\\ &s_k=x_k-\hat{x}_{k-1},\\ &t_k=\max\left \{ \frac{\left | s_{k}^{T} l_k \right | }{ s_{k}^{T} s_k},t_{\min} \right \}. \endaligned \end{equation*} \noindent{\bf Backtracking}: set $\rho>1, \delta >0$. \par \ \ \noindent{\bf Repeat compute}: \begin{equation*} \aligned &x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{Q(x,\hat{y} _k)+\langle \nabla f(\hat{x}_k),x\rangle+t_kD_{\phi_1}(x,\hat{x} _k)\},\\ &t_k\gets \rho t_k.
\endaligned \end{equation*} \par \ \ \noindent{\bf until} \begin{equation*} Q(x_{k+1},\hat{y} _k)+f(x_{k+1})\le Q(\hat{x}_k,\hat{y} _k)+f(\hat{x} _k)-\frac{\delta}{2}\|x_{k+1}-\hat{x} _k\|^2_2. \end{equation*} }} {\begin{remark}\rm The stepsize of the $y$-subproblem can be obtained similarly. The process of enlarging $t_k$ actually approaches the local Lipschitz constant of $f$. Since $f$ has a Lipschitz continuous gradient, the backtracking process terminates in finitely many steps, achieving a suitable $t_k$. \end{remark}} \subsection{Convergence analysis} \ \par In this section, we will prove the convergence of Algorithm \ref{alg1}. Note that $\alpha _k$ and $\beta_k$ are bounded by $\alpha _{\max}$ and $\beta _{\max}$ in Algorithm \ref{alg2}, respectively, so the convergence properties of Algorithm \ref{alg1} also apply to Algorithm \ref{alg2}. \par Under Assumption \ref{Assumption31}, some convergence results will be proved (see Lemma \ref{lem31}). We will also consider the following additional assumptions to establish stronger convergence results. \begin{Assumption} \label{Assumption32} \rm (i) $L$ is coercive and the domain of $Q$ is closed. (ii) The subdifferential of $Q$ obeys: $$ \forall (x,y)\in {\rm dom} \ Q, \partial _xQ(x,y)\times \partial _yQ(x,y)\subset \partial Q(x,y).$$ (iii) $Q:\mathbb R^{n}\times \mathbb R^{m}\to \mathbb R\cup \left \{ \infty \right \} $ has the following form $$Q(x,y)=q(x,y)+h(x),$$ where $h:\mathbb R^{n}\to \mathbb R\cup \left \{ \infty \right \} $ is continuous on its domain; $q:\mathbb R^{n}\times \mathbb R^{m}\to \mathbb R\cup \left \{ \infty \right \} $ is a continuous function on dom $Q$ such that for any $y$, the partial function $q(\cdot ,y )$ is continuously differentiable in $x$.
Besides, for each bounded subset $D_1\times D_2\subset {\rm dom}Q$, there exists $\xi >0$ such that for any $\bar x\in D_1$, $(y,\bar y)\in D_2\times D_2$, it holds that $$\left \| \nabla _xq(\bar x,y)- \nabla _xq(\bar x,\bar y)\right \| \le \xi \left \| y-\bar y \right \| .$$ \end{Assumption} \begin{remark} \label{rem31} \rm (i) Assumption \ref{Assumption32}(i) ensures that the sequences generated by our proposed algorithms are bounded, which plays an important role in the proof of convergence. \par(ii) Assumption \ref{Assumption32}(ii) is a generic assumption for the convergence of alternating schemes. Because $f$ and $g$ are continuously differentiable, we have $\partial _xL(x,y)\times \partial _yL(x,y)\subset \partial L(x,y)$ by Remark \ref{rem21} (c). \par(iii) From Assumptions \ref{Assumption31} and \ref{Assumption32}, $L(x,y)$ is continuous on its domain, equal to dom $Q$, which is nonempty and closed. For any sequence $\left \{ (x_k, y_k ) \right \} $ converging to $(\bar x, \bar y)$, it holds that $\left \{L (x_k, y_k ) \right \} $ converges to $L(\bar x, \bar y)$. \end{remark} \begin{lemma} \label{lem31} {\it Suppose that Assumption \ref{Assumption31} holds. Let $\left \{ (x_k, y_k ) \right \} $ and $\left \{ (\hat{x}_k, \hat{y}_k ) \right \} $ be the sequences generated by Algorithm \ref{alg1}. The following assertions hold. {\rm (i)} The sequences $\left \{ L(x_k, y_k ) \right \}$, $\left \{ L(\hat{x}_k, \hat{y}_k ) \right \}$ are monotonically nonincreasing and have the same limit, i.e., $$\lim_{k \to \infty } L(x_k, y_k )=\lim_{k \to \infty } L(\hat{x}_k, \hat{y}_k )=L^\ast.
$$ In particular, \begin{equation} \label{(3.9)} \aligned &L(x_{k+1}, y_{k+1} )\le L(x_k, y_k )-\rho \left \| (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2 ,\\ &L(\hat{x}_{k+1}, \hat{y}_{k+1} )\le L(\hat{x}_k, \hat{y}_k )-\rho \left \| (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2, \endaligned \end{equation} where $\rho=\min\left \{ \frac{\theta _{1}-L_{\nabla f}}{2}, \frac{\theta _{2}-L_{\nabla g}}{2} \right \} .$ {\rm (ii)} $\sum_{k=0}^{\infty } \left \| (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2< \infty ,$ and hence $$\lim_{k \to \infty }\left \| x_{k+1}-\hat{x}_k \right \| =0, \ \lim_{k \to \infty }\left \| y_{k+1}-\hat{y}_k \right \| =0.$$ } \end{lemma} \begin{proof} (i) From Lemma \ref{lem23}, we have \begin{equation} \label{(3.10)} f(x_{k+1} )\le f(\hat {x}_{k} )+\langle \nabla f(\hat{x}_k),x _{k+1}-\hat{x}_k\rangle+\frac{L_{\nabla f}}{2}\left \|x _{k+1}-\hat{x}_k \right \|^2. \end{equation} According to the $x$-subproblem in the iterative scheme \eqref{(3.1)}, we obtain $$Q(x_{k+1},\hat{y} _k)+\langle \nabla f(\hat{x}_k),x_{k+1}\rangle+D_{\phi_1}(x_{k+1},\hat{x} _k)\le Q(\hat{x}_k,\hat{y} _k)+\langle \nabla f(\hat{x}_k),\hat{x}_k\rangle+D_{\phi_1}(\hat{x}_k,\hat{x} _k),$$ which implies that \begin{equation} \label{(3.11)} Q(x_{k+1},\hat{y} _k)+\langle \nabla f(\hat{x}_k),x_{k+1}-\hat{x} _k\rangle+D_{\phi_1}(x_{k+1},\hat{x} _k)\le Q(\hat{x}_k,\hat{y} _k). \end{equation} By \eqref{(2.1)}, it follows from \eqref{(3.10)} and \eqref{(3.11)} that \begin{equation} \label{(3.12)} \aligned f(x_{k+1} )+Q(x_{k+1},\hat{y} _k) &\le f(\hat {x}_{k} )+Q(\hat{x}_k,\hat{y} _k)+\frac{L_{\nabla f}}{2}\left \|x _{k+1}-\hat{x}_k \right \|^2-D_{\phi_1}(x_{k+1},\hat{x} _k)\\ &\le f(\hat {x}_{k} )+Q(\hat{x}_k,\hat{y} _k)-\frac{\theta _1-L_{\nabla f}}{2}\left \|x _{k+1}-\hat{x}_k \right \|^2.
\endaligned \end{equation} Similarly, for the $y$-subproblem, we can get \begin{equation} \label{(3.13)} g(y_{k+1} )+Q(x_{k+1},y _{k+1}) \le g(\hat {y}_{k} )+Q(x_{k+1},\hat{y} _k)-\frac{\theta _2-L_{\nabla g}}{2}\left \|y _{k+1}-\hat{y}_k \right \|^2. \end{equation} Adding \eqref{(3.12)} and \eqref{(3.13)}, we have \begin{equation} \label{(3.14)} \aligned &f(x_{k+1} )+Q(x_{k+1},y _{k+1})+g(y_{k+1} )\\ \le &f(\hat {x}_{k} )+Q(\hat{x}_k,\hat{y} _k)+ g(\hat {y}_{k} )-\frac{\theta _1-L_{\nabla f}}{2}\left \|x _{k+1}-\hat{x}_k \right \|^2-\frac{\theta _2-L_{\nabla g}}{2}\left \|y _{k+1}-\hat{y}_k \right \|^2\\ \le &f(\hat {x}_{k} )+Q(\hat{x}_k,\hat{y} _k)+ g(\hat {y}_{k} )-\rho(\left \|x _{k+1}-\hat{x}_k \right \|^2+\left \|y _{k+1}-\hat{y}_k \right \|^2), \endaligned \end{equation} which can be abbreviated as \begin{equation} \label{(3.15)} L(x_{k+1},y _{k+1})\le L(\hat{x}_k,\hat{y} _k)-\rho\left \| (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2. \end{equation} By the choice of \eqref{(3.3)} and \eqref{(3.4)} for $(\hat{x}_k,\hat{y} _k)$ in Algorithm \ref{alg1}, we get \begin{equation} \label{(3.16)} \aligned L(\hat{x}_{k+1},\hat{y}_{k+1})\le L(x_{k+1},y _{k+1}) &\le L(\hat{x}_k,\hat{y} _k)-\rho \left \| (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2\\ &\le L(x_k,y _k)-\rho \left \| (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2. \endaligned \end{equation} Hence $\left \{ L(x_k,y _k)\right \}$ and $\left \{ L(\hat{x}_k,\hat{y} _k)\right \}$ are nonincreasing sequences, and \begin{equation} \label{(3.17)} L(x_{k+1},y _{k+1})\le L(\hat{x}_k,\hat{y} _k)\le L(x_k,y _k) \end{equation} holds. Furthermore, $L$ is lower bounded according to Assumption \ref{Assumption31}; therefore the sequence $\left \{ L(x_k,y _k)\right \}$ is convergent. Let $\lim_{k \to \infty} L(x_k,y _k)=L^\ast $. Taking the limit in \eqref{(3.17)} and using the squeeze theorem, we get $\lim_{k \to \infty} L(\hat{x}_k,\hat{y} _k)=L^\ast$.
(ii) It follows from \eqref{(3.16)} that \begin{equation} \label{(3.18)} \rho \left \| (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2\le L(x_k,y _k)-L(x_{k+1},y _{k+1}). \end{equation} Summing up \eqref{(3.18)} for $k= 0,\dots,K$, we obtain \begin{equation} \label{(3.19)} \aligned \sum_{k=0}^{K} \rho \left \| (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2 &\le L(x_0,y _0)-L(x_{K+1},y _{K+1})\\ &\le L(x_0,y _0)-L^\ast < + \infty. \endaligned \end{equation} Letting $K\to \infty $, it follows that $\lim_{k \to \infty} \left \|(x _{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|^2=0$, and therefore \begin{equation} \label{(3.20)} \lim_{k \to \infty} \left \|x _{k+1}-\hat{x}_k\right \|=0,\ \ \ \ \lim_{k \to \infty} \left \|y_{k+1}-\hat{y}_k\right \|=0. \end{equation} \end{proof} \begin{lemma} \label{lem32} {\it Suppose that Assumption \ref{Assumption31} and Assumption \ref{Assumption32} hold. Let $\left \{ (x_k, y_k )\right \}$ and $\left \{ (\hat{x}_k,\hat{y} _k)\right \}$ be the sequences generated by Algorithm \ref{alg1}. For any integer $k\ge1$, set \begin{equation} \label{(3.25)} p_{x}^{k+1} =\nabla_xq(x_{k+1},y_{k+1})-\nabla_xq(x_{k+1},\hat{y}_k)+q_{x}^{k+1}, \end{equation} \begin{equation} \label{(3.22)} p_{y}^{k+1} =\nabla g(y_{k+1})-\nabla g( \hat{y}_k)-\nabla{\phi _2}(y_{k+1})+\nabla{\phi _2}(\hat{y}_k), \end{equation} where $q_{x}^{k+1} =\nabla f(x_{k+1})-\nabla f( \hat{x}_k)-\nabla{\phi _1}(x_{k+1})+\nabla{\phi _1}(\hat{x}_k)$. Then $(p_{x}^{k+1},p_{y}^{k+1})\in \partial L(x_{k+1},y_{k+1})$, and there exists $\varrho>0$ such that \begin{equation} \label{(3.26)} \left \|(p_{x}^{k+1},p_{y}^{k+1}) \right \| \le \varrho\left \|(x _{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k) \right \|.
\end{equation} } \end{lemma} \begin{proof} From the iterative scheme \eqref{(3.1)}, we know\\ $$x_{k+1}\in \arg\min_{ x\in \mathbb{R}^n}\{Q(x,\hat{y} _k)+\langle \nabla f(\hat{x}_k),x\rangle+D_{\phi_1}(x,\hat{x} _k)\}.$$ By Fermat’s rule, $x_{k+1}$ satisfies\\ $$0\in \partial_xQ(x_{k+1},\hat{y}_k)+\nabla f( \hat{x}_k)+\nabla{\phi _1}(x_{k+1})-\nabla{\phi _1}(\hat{x}_k),$$ which implies that\\ \begin{equation} \label{(3.23)}\ \aligned \nabla f(x_{k+1})-\nabla f( \hat{x}_k)-\nabla{\phi _1}(x_{k+1})+\nabla{\phi _1}(\hat{x}_k) &\in \partial_xQ(x_{k+1},\hat{y}_k)+\nabla f( x_{k+1})\\ &= \partial_xL(x_{k+1},\hat{y}_k). \endaligned \end{equation} Similarly, by $y$-subproblem in iterative scheme \eqref{(3.1)}, we have\\ $$0\in \partial_yQ(x_{k+1},y_{k+1})+\nabla g( \hat{y}_k)+\nabla{\phi _2}(y_{k+1})-\nabla{\phi _2}(\hat{y}_k),$$ which implies that\\ \begin{equation} \label{(3.24)}\ \aligned \nabla g(y_{k+1})-\nabla g( \hat{y}_k)-\nabla{\phi _2}(y_{k+1})+\nabla{\phi _2}(\hat{y}_k) &\in \partial_yQ(x_{k+1},y_{k+1})+\nabla g( y_{k+1})\\ &= \partial_yL(x_{k+1},y_{k+1}). \endaligned \end{equation} From Assumption \ref{Assumption32} (iii), we know that\\ $$\partial_xQ(x,y)=\nabla _xq(x,y)+\partial h(x).$$ According to \eqref{(3.23)}, we have \begin{equation} \label{(3.27)}\ \aligned q_{x}^{k+1}\in \partial_xL(x_{k+1},\hat{y}_k) &=\partial_xQ(x_{k+1},\hat{y}_k)+\nabla f( x_{k+1})\\ &=\nabla _xq(x_{k+1},\hat{y}_k)+\partial h(x_{k+1})+\nabla f( x_{k+1}). \endaligned \end{equation} It follows from \eqref{(3.25)} that \begin{equation} \label{(3.28)}\ \aligned p_{x}^{k+1} &=\nabla_xq(x_{k+1},y_{k+1})-\nabla_xq(x_{k+1},\hat{y}_k)+q_{x}^{k+1}\\ &\in \nabla_xq(x_{k+1},y_{k+1})+\partial h(x_{k+1})+\nabla f( x_{k+1})\\ &= \partial_xQ(x_{k+1},y_{k+1})+\nabla f( x_{k+1})\\ &= \partial_xL(x_{k+1},y_{k+1}). 
\endaligned \end{equation} Hence, $$(p_{x}^{k+1},p_{y}^{k+1})\in \partial_xL(x_{k+1},y_{k+1})\times \partial_yL(x_{k+1},y_{k+1})\subset \partial L(x_{k+1},y_{k+1}).$$ Now we estimate the norms of $p_{x}^{k+1}$ and $p_{y}^{k+1}$. Under Assumption \ref{Assumption32} (i) that $L$ is coercive, we deduce that $\left \{ (x_{k+1}, y_{k+1} )\right \}$ is bounded. Then from Assumption \ref{Assumption32} (iii) and \eqref{(3.2)}, we have \begin{equation*} \aligned \left \| p_{x}^{k+1} \right \| &\le \left \| \nabla_xq(x_{k+1},y_{k+1})-\nabla_xq(x_{k+1},\hat{y}_k) \right \| +\left \| q_{x}^{k+1} \right \| \\ &\le \xi \left \| y_{k+1}-\hat{y}_k \right \|+\left \| \nabla f(x_{k+1})-\nabla f( \hat{x}_k)\right \| +\left \| \nabla{\phi _1}(\hat{x}_k) -\nabla{\phi _1}(x_{k+1})\right \| \\ &\le \xi \left \| y_{k+1}-\hat{y}_k \right \|+L_{\nabla f}\left \| x_{k+1}-\hat{x}_k\right \| +\eta _1\left \| x_{k+1}-\hat{x}_k\right \| \\ &= \xi \left \| y_{k+1}-\hat{y}_k \right \|+\left (L_{\nabla f}+\eta _1\right )\left \| x_{k+1}-\hat{x}_k\right \|,\\ \left \| p_{y}^{k+1} \right \| &\le \left \| \nabla g(y_{k+1})-\nabla g( \hat{y}_k)\right \| +\left \| \nabla{\phi _2}(\hat{y}_k) -\nabla{\phi _2}(y_{k+1})\right \| \\ &\le L_{\nabla g}\left \| y_{k+1}-\hat{y}_k\right \| +\eta _2\left \| y_{k+1}-\hat{y}_k\right \| \\ &= \left (L_{\nabla g}+\eta _2\right )\left \| y_{k+1}-\hat{y}_k\right \|, \endaligned \end{equation*} and hence \begin{equation*} \aligned \left \| (p_{x}^{k+1},p_{y}^{k+1}) \right \| ^2&= \left \| p_{x}^{k+1}\right \| ^2+\left \|p_{y}^{k+1}\right \| ^2\\ &\le 2\left (L_{\nabla f}+\eta _1\right )^2\left \| x_{k+1}-\hat{x}_k\right \|^2+2\xi^2 \left \| y_{k+1}-\hat{y}_k \right \|^2+\left (L_{\nabla g}+\eta _2\right )^2\left \| y_{k+1}-\hat{y}_k\right \|^2\\ &\le 2\left (L_{\nabla f}+\eta _1\right )^2\left \| x_{k+1}-\hat{x}_k\right \|^2+2\left [ \xi^2 +\left (L_{\nabla g}+\eta _2\right )^2\right ] \left \| y_{k+1}-\hat{y}_k \right \|^2.
\endaligned \end{equation*} The above inequality follows from the fact that $(a+b)^2\le 2(a^2+b^2)$ for all $a,b\in \mathbb{R}.$ \par Set $\varrho^{2} =\max\left \{ 2\left (L_{\nabla f}+\eta _1\right )^2 ,2\xi^2 +2\left (L_{\nabla g}+\eta _2\right )^2\right \} $. Then \begin{equation*} \aligned &\left \| (p_{x}^{k+1},p_{y}^{k+1}) \right \| ^2\le \varrho^{2}\left ( \left \| x_{k+1}-\hat{x}_k\right \|^2+\left \| y_{k+1}-\hat{y}_k \right \|^2 \right ), \\ &\left \| (p_{x}^{k+1},p_{y}^{k+1}) \right \|\le \varrho\left \| \left (x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k\right ) \right \|. \endaligned \end{equation*} So the conclusion holds. \end{proof} Below, we summarize some properties of cluster points and prove that every cluster point of a sequence generated by Algorithm \ref{alg1} is a critical point of $L$. For simplicity, we introduce the following notations. Let $$z_k=(x_k,y_k),\ \omega _k=(\hat{x}_k,\hat{y}_k).$$ So $L(z_k)=L(x_k,y_k), L(\omega _k)=L(\hat{x}_k,\hat{y}_k).$ \par Let $\{z_k\}$ be the sequence generated by Algorithm \ref{alg1} with initial point $z_0$. Under Assumption \ref{Assumption32} (i) that $L$ is coercive, we deduce that $\{z_k\}$ is bounded and has at least one cluster point. The set of all cluster points is denoted by $\mathcal{L}(z_0)$, i.e., $$\mathcal{L}(z_0):=\{\hat{z}=(\hat{x},\hat{y})\in \mathbb{R}^n\times \mathbb{R}^m: \exists {\rm \ strictly\ increasing} \left \{ k_j \right \} _{j\in \mathbb{N}} {\rm\ such\ that\ } z_{k_j}\to\hat{ z}, j\to \infty \}.$$ \begin{lemma} \label{lem34} {\it Suppose Assumption \ref{Assumption31} and Assumption \ref{Assumption32} hold, and let $\left \{ z _k\right \}$ be a sequence generated by Algorithm \ref{alg1} with initial point $z_0$. Then the following results hold.
{\rm (i)} $\mathcal{L}(z_0)$ is a nonempty compact set, and $L$ is finite and constant on $\mathcal{L}(z_0)$, {\rm (ii)} $\mathcal{L}(z_0)\subset {\rm crit} \ L$, {\rm (iii)} $\lim_{k \to \infty} {\rm dist}\left ( z_k,\mathcal{L}(z_0) \right ) =0$. } \end{lemma} \begin{proof} {\rm (i)} The boundedness of $\left \{ z _k\right \}$ yields that $\mathcal{L}(z_0)$ is nonempty. In addition, $\mathcal{L}(z_0)$ can be reformulated as an intersection of compact sets $$\mathcal{L}(z_0)=\bigcap_{s\in \mathbb{N}}\overline{\bigcup_{k\ge s}\{z_k\}} ,$$ which shows that $\mathcal{L}(z_0)$ is a compact set. For any $\hat{z}=(\hat{x},\hat{y})\in \mathcal{L}(z_0)$, there exists a subsequence $\left \{ z _{k_j}\right \}$ such that $$\lim_{j \to \infty} z_{k_j}=\hat{z} .$$ Since $L$ is continuous, we have $$\lim_{j \to \infty} L(z_{k_j})=L(\hat{z} ).$$ According to Lemma \ref{lem31}, we know that the whole sequence $\left \{ L(z _k)\right \}$ converges to $L^\ast $. Hence \begin{equation} \label{(3.29)} \lim_{j \to \infty} L(z_{k_j})=\lim_{k \to \infty} L(z_{k})=L(\hat{z} )=L^\ast, \end{equation} which means that $L$ is constant on $\mathcal{L}(z_0)$. {\rm (ii)} Take $\hat{z}\in \mathcal{L}(z_0)$; then there exists $\left \{ (x_{k_j},y_{k_j}) \right \}$ such that $(x_{k_j},y_{k_j})\to \hat{z}$. From \eqref{(3.20)}, we have \begin{equation*} \lim_{j \to \infty} \left \|x _{k_j}-\hat{x}_{k_j-1}\right \|=0,\ \ \ \ \lim_{j \to \infty} \left \|y_{k_j}-\hat{y}_{k_j-1}\right \|=0, \end{equation*} hence, \begin{equation*} \lim_{j \to \infty} x _{k_j}=\lim_{j \to \infty} \hat{x}_{k_j-1}=\hat{x},\ \ \ \ \lim_{j \to \infty} y _{k_j}=\lim_{j \to \infty} \hat{y}_{k_j-1}=\hat{y}. \end{equation*} By \eqref{(3.26)}, we get \begin{equation*} \left \|(p_{x}^{k_j},p_{y}^{k_j}) \right \| \le \varrho\left \|(x _{k_j}-\hat{x}_{k_j-1},y_{k_j}-\hat{y}_{k_j-1}) \right \|.
\end{equation*} So $$(p_{x}^{k_j},p_{y}^{k_j})\to (0,0) \quad \text{as} \quad j\to \infty .$$ Based on the result $\lim_{j \to \infty} L(x_{k_j},y_{k_j})=L(\hat{x} ,\hat{y})$, $(p_{x}^{k_j},p_{y}^{k_j})\in \partial L(x_{k_j},y_{k_j})$ and the closedness property of $\partial L$, we conclude that $(0,0)\in \partial L(\hat{x},\hat{y})$, which means $\hat{z}=(\hat{x},\hat{y})$ is a critical point of $L$, and $\mathcal{L}(z_0)\subset {\rm crit} \ L$. {\rm (iii)} We prove the assertion by contradiction. Assume that $\lim_{k \to \infty} {\rm dist}\left ( z_k,\mathcal{L}(z_0) \right ) \ne 0$. Then, there exists a subsequence $\left \{ z _{k_m}\right \}$ and a constant $M>0$ such that \begin{equation} \label{(3.30)} \left \| z_{k_m} -\hat{z} \right \| \ge {\rm dist}\left ( z _{k_{m}},\mathcal{L}(z_0) \right )>M,\ \ \ \forall \hat{z}\in \mathcal{L}(z_0). \end{equation} On the other hand, $\left \{ z _{k_m}\right \}$ is bounded and has a subsequence $\left \{ z _{k_{m_j}}\right \}$ converging to a point in $\mathcal{L}(z_0)$. Thus, $$\lim_{j \to \infty} {\rm dist}\left ( z _{k_{m_j}},\mathcal{L}(z_0) \right ) = 0,$$ which contradicts \eqref{(3.30)}. \end{proof} \par Now, we can prove the main convergence results of the proposed algorithms under the K{\L} property. \begin{theorem} \label{th31} {\it Suppose Assumptions \ref{Assumption31} and \ref{Assumption32} hold and $\left \{ z _{k}\right \}$ is a sequence generated by Algorithm \ref{alg1} with initial point $z_0$. Assume that $L$ is a K{\L} function. Then the following results hold. {\rm (i)} $\sum_{k=0}^{\infty } \left \| z_{k+1}-z_k \right \|<\infty $, {\rm (ii)} The sequence $\left \{ z _{k}\right \}$ converges to a critical point of $L$. } \end{theorem} \begin{proof} Throughout the proof, we assume that $L(z_{k})\ne L(\hat{z} )$ for all $k$. Otherwise, there exists an integer $\hat{k}$ such that $L(z_{\hat{k}})= L(\hat{z} )$. The sufficient decrease condition (Lemma \ref{lem31}) implies $z_{\hat{k}+1}= z_{\hat{k}}$.
It follows that $z_k=z_{\hat{k}}$ for any $k\ge\hat{k}$, and the assertions hold trivially. {\rm (i)} Since $\left \{ L(z _{k})\right \}$ is a nonincreasing sequence and $\lim_{k \to \infty} L(z_{k})=L(\hat{z} )=L^\ast$ from \eqref{(3.29)}, for any $\eta>0$, there exists a positive integer $k_0$ such that $$L(\hat{z} )<L(z_{k})<L(\hat{z} )+\eta, \ \ \ \forall k>k_0,$$ which means $$z_k\in\left [L(\hat{z} )<L<L(\hat{z} )+\eta \right ], \ \ \ \forall k>k_0.$$ On the other hand, $\lim_{k \to \infty} {\rm dist}\left ( z_k,\mathcal{L}(z_0) \right ) =0$. Therefore for any $\varepsilon >0$, there exists a positive integer $k_1$, such that $${\rm dist}\left ( z_k,\mathcal{L}(z_0) \right )<\varepsilon, \ \ \ \forall k>k_1.$$ Let $l=\max\left \{ k_0,k_1 \right \}+1 $. Then for all $k> l$, we have $$z_k\in \left \{ z\mid {\rm dist}\left ( z,\mathcal{L}(z_0) \right )<\varepsilon \right \} \bigcap \left [L(\hat{z} )<L<L(\hat{z} )+\eta \right ].$$ Note that $L$ is constant on the compact set $\mathcal{L}(z_0)$. According to Lemma \ref{lem21}, there exists a concave function $\varphi \in \Phi_\eta $ such that \begin{equation} \label{(3.31)} \varphi'\left ( L(z_{k})-L(\hat{z} ) \right ) {\rm dist}(0,\partial L(z_{k}))\geq 1, \ \ \ \forall k>l. \end{equation} From Lemma \ref{lem32}, we obtain that \begin{equation} \label{(3.32)} \aligned {\rm dist}(0,\partial L(z_{k}))\le\left \| (p_{x}^{k},p_{y}^{k}) \right \| &\le \varrho\left \| \left (x_{k}-\hat{x}_{k-1},y_{k}-\hat{y}_{k-1}\right ) \right \|\\ &=\varrho\left \|z_{k}-\omega _{k-1}\right \|. \endaligned \end{equation} Substituting \eqref{(3.32)} into \eqref{(3.31)}, we get $$\varphi'(L(z_{k})-L(\hat{z} ))\ge \frac{1}{{\rm dist}(0,\partial L(z_{k}))} \ge \frac{1}{\varrho\left \|z_{k}-\omega _{k-1}\right \|}.
$$ From the concavity of $\varphi$, we have $$\varphi(L(z_{k+1})-L(\hat{z} ))\le \varphi(L(z_{k})-L(\hat{z} ))+\varphi'(L(z_{k})-L(\hat{z} ))(L(z_{k+1})-L(z_{k})).$$ It follows that \begin{equation} \label{(3.33)} \aligned \varphi(L(z_{k})-L(\hat{z} ))-\varphi(L(z_{k+1})-L(\hat{z} )) &\ge\varphi'(L(z_{k})-L(\hat{z} ))(L(z_{k})-L(z_{k+1}))\\ &\ge\frac{\rho\left \|z_{k+1}-\omega _{k}\right \|^2}{\varrho\left \|z_{k}-\omega _{k-1}\right \|}. \endaligned \end{equation} The last inequality follows from Lemma \ref{lem31}. \par For convenience, we define $\triangle _k=\varphi(L(z_{k})-L(\hat{z} ))$. It is obvious that $\triangle _k$ is nonincreasing in $k$. Let $\underline{\triangle} =\inf_k\left \{ \triangle _k \right \} $ and $C=\frac{\varrho}{\rho} $. Then \eqref{(3.33)} can be simplified as $$\triangle _k-\triangle _{k+1}\ge\frac{\left \|z_{k+1}-\omega _{k}\right \|^2}{C\left \|z_{k}-\omega _{k-1}\right \|},$$ i.e., $$\left \|z_{k+1}-\omega _{k}\right \|^2\le C(\triangle _k-\triangle _{k+1})\left \|z_{k}-\omega _{k-1}\right \|.$$ Using the fact that $2\sqrt{ab} \le a+b$ for $a,b\ge0$, we infer \begin{equation} \label{(3.34)} 2\left \|z_{k+1}-\omega _{k}\right \|\le C(\triangle _k-\triangle _{k+1})+\left \|z_{k}-\omega _{k-1}\right \|. \end{equation} Summing up \eqref{(3.34)} for $k=l+1,\dots ,K$ yields \begin{equation*} \aligned 2\sum_{k=l+1}^{K} \left \|z_{k+1}-\omega _{k}\right \| &\le C(\triangle _{l+1}-\triangle _{K+1})+\sum_{k=l+1}^{K} \left \|z_{k}-\omega _{k-1}\right \|\\ &=C(\triangle _{l+1}-\triangle _{K+1})+\left \|z_{l+1}-\omega _{l}\right \|-\left \|z_{K+1}-\omega _{K}\right \|+\sum_{k=l+1}^{K} \left \|z_{k+1}-\omega _{k}\right \|.
\endaligned \end{equation*} Canceling the common terms on both sides of the inequality, we have \begin{equation*} \aligned \sum_{k=l+1}^{K} \left \|z_{k+1}-\omega _{k}\right \| &\le C(\triangle _{l+1}-\triangle _{K+1})+\left \|z_{l+1}-\omega _{l}\right \|-\left \|z_{K+1}-\omega _{K}\right \|\\ &\le C(\triangle _{l+1}-\underline{\triangle})+\left \|z_{l+1}-\omega _{l}\right \|-\left \|z_{K+1}-\omega _{K}\right \|\\ &<\infty. \endaligned \end{equation*} Letting $K\to\infty$, we get \begin{equation} \label{(3.35)} \sum_{k=l+1}^{\infty} \left \|z_{k+1}-\omega _{k}\right \|<\infty. \end{equation} Note that $z_{k+1}-\omega _k=(x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k)$, and let us investigate how the point $\omega _k=(\hat{x}_k,\hat{y}_k)$ is generated in Algorithm \ref{alg1}. If $(\hat{x}_k,\hat{y}_k)$ is generated by \eqref{(3.3)}, then \begin{equation} \label{(3.36)} \aligned &(x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k)\\ =&(x_{k+1}-x_k-\alpha_k(x_{k}-x_{k-1})-\beta_k(x_{k-1}-x_{k-2}),y_{k+1}-y_k-\alpha_k(y_{k}-y_{k-1})-\beta_k(y_{k-1}-y_{k-2}))\\ =&z_{k+1}-z_k-\alpha_k(z_{k}-z_{k-1})-\beta_k(z_{k-1}-z_{k-2}). \endaligned \end{equation} If $(\hat{x}_k,\hat{y}_k)$ is generated by \eqref{(3.4)}, then $$(x_{k+1}-\hat{x}_k,y_{k+1}-\hat{y}_k)=(x_{k+1}-x_k,y_{k+1}-y_k)=z_{k+1}-z_k.$$ No matter how $(\hat{x}_k,\hat{y}_k)$ is generated, we always have \begin{equation} \label{(3.37)} \left \|z_{k+1}-\omega _k \right \| \ge\left \|z_{k+1}-z_k \right \|-\alpha_k\left \|z_{k}-z_{k-1} \right \|-\beta_k\left \|z_{k-1}-z_{k-2}\right \|. \end{equation} Summing up \eqref{(3.37)} for $k=l+1,\dots ,K$, we get \begin{equation} \label{(3.38)} \aligned &\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|-\sum_{k=l+1}^{K}\alpha_k\left \|z_{k}-z_{k-1} \right \|-\sum_{k=l+1}^{K} \beta_k\left \|z_{k-1}-z_{k-2}\right \|\\ \le&\sum_{k=l+1}^{K} \left \|z_{k+1}-\omega _k\right \| < \infty . \endaligned \end{equation} Note that $\alpha_k\in \left [ 0, \alpha _{\max}\right ],\beta _k\in \left [ 0, \beta _{\max}\right ]$.
Let $\overline{\alpha } ={\sup}_k\left \{ \alpha_k \right \} ,\overline{\beta } ={\sup}_k\left \{ \beta_k \right \} $, then $0\le \overline{\alpha }+\overline{\beta } \le \alpha_{\max}+\beta_{\max}<1$, and it holds that \begin{equation} \label{(3.39)}\ \aligned &\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|-\overline{\alpha }\sum_{k=l+1}^{K}\left \|z_{k}-z_{k-1} \right \|-\overline{\beta }\sum_{k=l+1}^{K} \left \|z_{k-1}-z_{k-2}\right \|\\ \le&\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|-\sum_{k=l+1}^{K}\alpha_k\left \|z_{k}-z_{k-1} \right \|-\sum_{k=l+1}^{K} \beta_k\left \|z_{k-1}-z_{k-2}\right \|. \endaligned \end{equation} and \begin{equation} \label{(3.40)}\ \aligned &\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|-\overline{\alpha }\sum_{k=l+1}^{K}\left \|z_{k}-z_{k-1} \right \|-\overline{\beta }\sum_{k=l+1}^{K} \left \|z_{k-1}-z_{k-2}\right \|\\ =&\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|-\overline{\alpha }\sum_{k=l+1}^{K}\left \|z_{k+1}-z_{k} \right \|-\overline{\alpha }(\left \|z_{l+1}-z_{l} \right \|-\left \|z_{K+1}-z_{K} \right \|)-\overline{\beta }\sum_{k=l+1}^{K} \left \|z_{k+1}-z_{k}\right \|\\ &-\overline{\beta }(\left \|z_{l+1}-z_{l} \right \|+\left \|z_{l}-z_{l-1} \right \|-\left \|z_{K+1}-z_{K} \right \|-\left \|z_{K}-z_{K-1} \right \|)\\ =&(1-\overline{\alpha }-\overline{\beta })\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|-(\overline{\alpha }+\overline{\beta })(\left \|z_{l+1}-z_{l} \right \|-\left \|z_{K+1}-z_{K} \right \|)\\ &-\overline{\beta }(\left \|z_{l}-z_{l-1} \right \|-\left \|z_{K}-z_{K-1} \right \|). 
\endaligned \end{equation} Combining \eqref{(3.38)}, \eqref{(3.39)} and \eqref{(3.40)}, we have \begin{equation} \label{(3.41)}\ \aligned &(1-\overline{\alpha }-\overline{\beta })\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|\\ =&(\overline{\alpha }+\overline{\beta })(\left \|z_{l+1}-z_{l} \right \|-\left \|z_{K+1}-z_{K} \right \|)+\overline{\beta }(\left \|z_{l}-z_{l-1} \right \|-\left \|z_{K}-z_{K-1} \right \|)+\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|\\&-\overline{\alpha }\sum_{k=l+1}^{K}\left \|z_{k}-z_{k-1} \right \|-\overline{\beta }\sum_{k=l+1}^{K} \left \|z_{k-1}-z_{k-2}\right \|\\ \le&(\overline{\alpha }+\overline{\beta })(\left \|z_{l+1}-z_{l} \right \|-\left \|z_{K+1}-z_{K} \right \|)+\overline{\beta }(\left \|z_{l}-z_{l-1} \right \|-\left \|z_{K}-z_{K-1} \right \|)+\sum_{k=l+1}^{K} \left \|z_{k+1}-z_k\right \|\\&-\sum_{k=l+1}^{K}\alpha_k\left \|z_{k}-z_{k-1} \right \|-\sum_{k=l+1}^{K}\beta _k \left \|z_{k-1}-z_{k-2}\right \|\\ \le&(\overline{\alpha }+\overline{\beta })(\left \|z_{l+1}-z_{l} \right \|-\left \|z_{K+1}-z_{K} \right \|)+\overline{\beta }(\left \|z_{l}-z_{l-1} \right \|-\left \|z_{K}-z_{K-1} \right \|)+\sum_{k=l+1}^{K} \left \|z_{k+1}-\omega _k\right \|\\ <&\infty. \endaligned \end{equation} The last inequality holds from \eqref{(3.35)}. Taking the limit as $K\to\infty$, and using the fact $\overline{\alpha }+\overline{\beta }<1$, we obtain $$\sum_{k=l+1}^{\infty} \left \|z_{k+1}-z_k\right \|<\infty.$$ This shows that \begin{equation} \label{(3.42)}\ \sum_{k=0}^{\infty} \left \|z_{k+1}-z_k\right \|<\infty. \end{equation} {\rm (ii)} \eqref{(3.42)} implies that for any $m>n\ge l$, it holds that $$\left \| z_{m}-z_n \right \|=\left \|\sum_{k=n}^{m-1} (z_{k+1}-z_k) \right \| \le\sum_{k=n}^{m-1}\left \| z_{k+1}-z_k \right \|< \sum_{k=n}^{\infty}\left \| z_{k+1}-z_k \right \|.$$ Taking $n\to\infty$, we obtain $\left \| z_{m}-z_n \right \|\to 0$ which means that $\left \{ z _{k}\right \}$ is a Cauchy sequence, and hence is a convergent sequence. 
We also know that $\left \{ z _{k}\right \}$ converges to a critical point of $L$ from Lemma \ref{lem34}(ii). \end{proof} The K{\L} property is also a very useful tool to establish the convergence rate of many first-order methods. Based on the K{\L} inequality, Attouch and Bolte \cite{ABB} first established convergence rate results, related to the desingularizing function, for proximal algorithms. Similar to the derivation process of \cite{ABB}, we can obtain the following convergence rate results. \begin{theorem} \label{th32} {\it (Convergence rate) Let Assumptions \ref{Assumption31} and \ref{Assumption32} hold and let $\left \{ z _{k}\right \}$ be a sequence generated by Algorithm \ref{alg1} with $z_0=(x_0,y_0)$ as initial point. Assume also that $L$ is a K{\L} function and the desingularizing function has the form of $\varphi (t)=\frac{C}{\theta }t^{\theta }$ with $\theta \in (0,1]$, $C > 0$. Let $L^{\ast }=L(z)$, $\forall z\in\mathcal{L}(z_0)$. The following assertions hold. {\rm (i)} If $\theta=1$, then Algorithm \ref{alg1} terminates in finitely many steps. {\rm (ii)} If $\theta\in[\frac{1}{2} ,1)$, then there exist $\omega>0$ and $k_0\in \mathbb{N}$ such that $$L(z_k)-L^{\ast }\le\mathcal{O} \left ( \exp\left ( -\frac{\omega (k-k_0)}{\varrho }\right )\right ) , \ \ \ \forall k>k_0.$$ {\rm (iii)} If $\theta\in(0,\frac{1}{2})$, then there exist $\omega>0$ and $k_0\in \mathbb{N}$ such that $$L(z_k)-L^{\ast }\le\mathcal{O} \left (\left ( \frac{k-k_0}{\varrho }\right ) ^{\frac{-1}{1-2\theta } }\right ), \ \ \ \forall k>k_0.$$ } \end{theorem} The results are almost the same as those in \cite{LL,LYV}; we omit the proof here. \section{Numerical experiments}\label{sect4} \hspace*{\parindent}In this section, we present some numerical experiments carried out to illustrate the performance of Algorithms \ref{alg1} and \ref{alg2} with different Bregman distances.
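As a point of reference for the experiments, the two-step extrapolation point $(\hat{x}_k,\hat{y}_k)$ chosen by \eqref{(3.3)} (cf.\ the identity \eqref{(3.36)}) can be sketched in a few lines of code. This is only an illustrative sketch under the stated update rule, not the code used in the experiments, and the function name is ours:

```python
import numpy as np

def extrapolate(z_k, z_km1, z_km2, alpha, beta):
    """Two-step extrapolation point (cf. (3.3)/(3.36)):
    z_hat = z_k + alpha*(z_k - z_km1) + beta*(z_km1 - z_km2)."""
    z_k, z_km1, z_km2 = (np.asarray(v, dtype=float) for v in (z_k, z_km1, z_km2))
    return z_k + alpha * (z_k - z_km1) + beta * (z_km1 - z_km2)
```

With $\alpha_k=\beta_k=0$ this reduces to $\hat{z}_k=z_k$, i.e., the plain alternating step, consistent with the observation that ASAP is recovered in that case.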
The following list gives various kernel functions with their Bregman distances:\\ \hspace*{\parindent}(i) Define the function $\varphi _1(x)=-\gamma\sum_{i=1}^m \ln x_i$ with domain $$\text{dom}\varphi _1=\{x=(x_1, x_2,\cdots, x_m)^T\in \mathbb{R}^m: x_i > 0, i =1, 2,\cdots, m\}$$ and range ran$\varphi _1=(-\infty,+\infty)$. Then $$\nabla \varphi _1(x)=\gamma(-\frac{1}{x_1}, -\frac{1}{x_2}, \cdots, -\frac{1}{x_m})^T$$ and the Bregman distance (the Itakura-Saito distance) with respect to $\varphi _1 $ is $$D_{\varphi _1}(x, y) = \gamma\sum_{i=1}^m\big(\frac{x_i}{y_i}-\ln\big(\frac{x_i}{y_i}\big)-1\big),\ \ \forall x,y\in \mathbb{R}_{++}^m.$$ \hspace*{\parindent}(ii) Define the function $\varphi_2(x)=\frac{\gamma}{2}\|x\|^2$ with domain $\text{dom}\varphi _2=\mathbb{R}^m$ and range ran$\varphi_2=[0,+\infty)$. Then $\nabla \varphi_2(x)=\gamma x$ and the Bregman distance (the squared Euclidean distance) with respect to $\varphi_2$ is $$D_{\varphi_2}(x, y) = \frac{\gamma}{2}\|x-y\|^2,\ \ \forall x,y\in \mathbb{R}^m.$$ It is clear that $\varphi_i$ is $\gamma$-strongly convex ($i=1,2$).\\ \subsection{Nonconvex quadratic programming} \ \par We consider the following quadratic programming problem \begin{equation} \aligned \label{(4.1)} \min_{x}&\ \ \frac{1}{2} x^{T}Ax+b^{T}x \\ &s.t. \ \ x\in S, \endaligned \end{equation} where $A$ is a symmetric matrix but not necessarily positive semidefinite, $b\in \mathbb{R}^n$ is a vector and $S \subset\mathbb{R}^n$ is a ball. By introducing an auxiliary variable $y\in \mathbb{R}^n$, \eqref{(4.1)} can be reformulated as \begin{equation} \aligned \label{(4.2)} \min_{x,\ y\in\mathbb{R}^{n}}\ \ &\frac{1}{2} y^{T}Ay+b^{T}y+\iota_S(x) \\ &s.t. \ \ x=y, \endaligned \end{equation} where $\iota_S(x)$ is the indicator function of the ball $S$, defined by $$\iota_S(x)=\left\{\begin{matrix} 0, & x\in S, \\ +\infty, &x\notin S. \end{matrix}\right.$$ We use a penalty method to handle the constraint.
The problem can be transformed into \begin{equation} \label{(4.3)} \min_{x,\ y\in\mathbb{R}^{n}}\ \ \frac{1}{2} y^{T}Ay+b^{T}y+\iota_S(x)+\frac{\mu}{2}\left \| x-y \right \|^{2}, \end{equation} where $\mu>0$ is a penalty parameter. When $\mu$ is large enough, the solution of \eqref{(4.3)} is an approximate solution of problem \eqref{(4.2)}. \par Let $$ \aligned &f(x)\equiv0, \ g(y)=\frac{1}{2} y^{T}Ay+b^{T}y,\\ &Q(x,y)=\iota_S(x)+\frac{\mu}{2}\left \| x-y \right \|^{2}. \endaligned $$ Obviously, $f$ and $g$ are smooth functions, and the Lipschitz constant of $\nabla g$, i.e. $L_{\nabla g}$, is the maximal singular value of $A$. Note that when $A$ is not positive semidefinite, $g$ is a nonconvex function. We can solve \eqref{(4.3)} by Algorithm \ref{alg1} and Algorithm \ref{alg2}. Letting $\phi_2(y)=\frac{\lambda }{2} \left \| y\right \|^{2}$, we can write the $x$-subproblem and the $y$-subproblem of our algorithms as follows. \par The $x$-subproblem corresponds to the following optimization problem $$ \aligned x_{k+1} &\in\arg\min_{ x\in \mathbb{R}^n}\left \{ Q(x,\hat{y}_k )+\left \langle \nabla f(\hat{x}_k),x \right \rangle+D_{\phi_1}(x,\hat{x}_k) \right \} \\ &=\arg\min_{ x\in \mathbb{R}^n}\left \{ \iota_S(x)+\frac{\mu}{2}\left \| x-\hat{y}_k \right \|^{2}+D_{\phi_1}(x,\hat{x}_k) \right \} \\ &=\arg\min_{ x\in S}\left \{ \frac{\mu}{2}\left \| x-\hat{y}_k \right \|^{2}+D_{\phi_1}(x,\hat{x}_k) \right \}.
\endaligned $$ The $y$-subproblem corresponds to the following optimization problem $$ \aligned y_{k+1} &\in\arg\min_{ y\in \mathbb{R}^n}\left \{ Q(x_{k+1},y )+\left \langle \nabla g(\hat{y}_k),y \right \rangle+D_{\phi_2}(y,\hat{y}_k) \right \} \\ &=\arg\min_{ y\in \mathbb{R}^n}\left \{ \frac{\mu}{2}\left \|x_{k+1}-y \right \|^{2}+\left \langle A\hat{y}_k+b,y \right \rangle+\frac{\lambda }{2} \left \| y-\hat y_{k} \right \|^{2} \right \}, \endaligned $$ which has the explicit expression $$y_{k+1} =\frac{1}{\mu+\lambda } \left (\mu x_{k+1}+\lambda\hat{y}_k - A\hat{y}_k-b \right) .$$ \par In the numerical experiments, we set $A=D+D^T\in \mathbb{R}^{n\times n}$, where $D$ is a matrix generated by i.i.d. standard Gaussian entries. The vector $b$ is also generated by i.i.d. standard Gaussian entries. We take $n=500$ and the radius of the ball is $r=2$. Since $f\equiv 0$, any positive number can serve as the Lipschitz constant $L_{\nabla f}$. We set $L_{\nabla f}=L_{\nabla g}$. We select the starting point randomly and use $$E_k=\|x_{k+1}-x_k\|+\|y_{k+1}-y_k\|<10^{-4}$$ as the stopping criterion. In the numerical results, ``Iter." denotes the number of iterations, ``Time" denotes the CPU time, and ``Extrapolation" records the number of extrapolation steps taken, i.e., the number of times \eqref{(3.3)} is adopted. In order to show the effectiveness of the proposed algorithms, we compare Algorithm \ref{alg1} and Algorithm \ref{alg2} with ASAP \cite{NT} and aASAP \cite{XX} for different Bregman distances. Note that when $\alpha_k\equiv \beta_k\equiv 0$, Algorithm \ref{alg1} and Algorithm \ref{alg2} correspond to ASAP. For aASAP, we take $\alpha_k=0.3$. For Algorithm \ref{alg1}, we set $\alpha_k=0.3, \beta_k=0.2$. We also consider dynamically updated extrapolation parameters $\alpha_k=\beta_k=\frac{k-1}{k+2}$.
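For the squared Euclidean choice $D_{\phi_1}(x,\hat{x}_k)=\frac{\gamma}{2}\left \| x-\hat{x}_k\right \|^{2}$, the $x$-subproblem derived above reduces to a projection onto the ball $S$, and together with the explicit $y$-update this gives the following sketch of one iteration. This is an illustrative sketch only, not the experimental code, and the helper names are ours:

```python
import numpy as np

def proj_ball(v, r):
    """Euclidean projection onto the ball S = {x : ||x|| <= r}."""
    nv = np.linalg.norm(v)
    return v if nv <= r else (r / nv) * v

def qp_step(x_hat, y_hat, A, b, mu, lam, gamma, r):
    """One (x, y) update for problem (4.3) with D_{phi_1} the squared
    Euclidean distance.  The x-subproblem
        min_{x in S} (mu/2)||x - y_hat||^2 + (gamma/2)||x - x_hat||^2
    amounts to projecting the weighted average of y_hat and x_hat onto S,
    and the y-update uses the closed form
        y = (mu*x + lam*y_hat - A@y_hat - b)/(mu + lam)."""
    w = (mu * y_hat + gamma * x_hat) / (mu + gamma)
    x_next = proj_ball(w, r)
    y_next = (mu * x_next + lam * y_hat - A @ y_hat - b) / (mu + lam)
    return x_next, y_next
```

One can check that the returned $y_{k+1}$ satisfies the first-order optimality condition $\mu(y_{k+1}-x_{k+1})+A\hat{y}_k+b+\lambda(y_{k+1}-\hat{y}_k)=0$ of the $y$-subproblem.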
Although the theoretical bound on the extrapolation parameters does not permit $\alpha_k+\beta_k$ to exceed $1$, which the dynamically updated choice eventually violates, convergence is still observed in this case, with better performance. For Algorithm \ref{alg2}, we set $\alpha_0=0.3, \beta_0=0.2$ as the initial extrapolation parameters and $t=1.2, \beta_{\max}=0.499$. We use ``Alg. 1-i" and ``Alg. 2-i" to denote Algorithm \ref{alg1} and Algorithm \ref{alg2} with $\phi_1 (x)=\varphi_i(x)(1\le i\le 2)$, respectively, where the extrapolation parameters are $\alpha_k=0.3, \beta_k=0.2$. We use ``Alg. 1-i(F)" and ``Alg. 2-i(F)" to denote Algorithm \ref{alg1} and Algorithm \ref{alg2} with $\phi_1 (x)=\varphi_i(x)(1\le i\le 2)$, respectively, where the extrapolation parameters are $\alpha_k=\beta_k=\frac{k-1}{k+2}$. \par In Table \ref{table1}, we list the number of iterations, the CPU time and the number of extrapolation steps of the above algorithms for different Bregman distances. In Figure \ref{fig_sim}, (a) and (b) report the results for different extrapolation parameters, and (c) reports the results for different Bregman distances. It can be seen that the Itakura-Saito distance has a computational advantage over the squared Euclidean distance for Algorithm \ref{alg1} and Algorithm \ref{alg2} in terms of the number of iterations and the CPU time. Compared with one-step extrapolation and the original algorithm, two-step extrapolation performs much better. Moreover, Algorithm \ref{alg2} with adaptive extrapolation parameters performs best among all the algorithms. \begin{table}[!ht] \centering \caption{Numerical results of different Bregman distances with different extrapolation parameters}\label{table1} \begin{tabular}{lllllllll} \hline \multicolumn{4}{c}{Itakura-Saito distance} & & \multicolumn{4}{c}{squared Euclidean distance} \\ \cline{1-4} \cline{6-9} Algorithm & Iter. & Time(s) & Extrapolation & &Algorithm & Iter.
& Time(s) & Extrapolation \\ \hline ASAP & 192 &0.5906 & \multicolumn{1}{c}{191} &&ASAP & 202 &0.6456 &\multicolumn{1}{c}{201} \\ aASAP & 138 &0.4127 & \multicolumn{1}{c}{137} &&aASAP & 147 &0.4590 &\multicolumn{1}{c}{146} \\ Alg. 1-1 & 81 &0.3169 & \multicolumn{1}{c}{72} &&Alg. 1-2 & 98 &0.2725 &\multicolumn{1}{c}{96} \\ Alg. 2-1 & 28 &0.0156 & \multicolumn{1}{c}{26} &&Alg. 2-2& 33 &0.0469 &\multicolumn{1}{c}{28} \\ Alg. 1-1(F) & 44&0.0898& \multicolumn{1}{c}{31} &&Alg. 1-2(F) & 48 &0.1013 &\multicolumn{1}{c}{34} \\ \hline \end{tabular} \end{table} \begin{figure*} \centering \caption{The value of $\|x_{k+1}-x_k\|+\|y_{k+1}-y_k\|$ against the number of iterations for the nonconvex quadratic programming problem.} \label{fig_sim} \end{figure*} \subsection{Sparse logistic regression} \ \par In this subsection, we apply our algorithms to solve the sparse logistic regression problem. It is an attractive extension of logistic regression, as it can reduce overfitting and perform feature selection simultaneously. We consider the Capped-$l_1$ regularized logistic regression problem \cite{Z}, defined as \begin{equation} \label{(4.4)}\ \underset{x\in \mathbb{R}^d}{\min}\ \ \frac{1}{n} \sum_{i=1}^{n} \log(1+\exp(-b_ia_{i}^{T} x))+\lambda\sum_{j=1}^{d}\min(\left | x_j \right |,\theta ),\ \ \theta >0, \end{equation} where $x\in \mathbb{R}^d$, $a_i\in \mathbb{R}^d$, $b_i\in \left \{ -1,1 \right \}, i=1,\dots,n.$ \par By the same method as in the former example, we transform \eqref{(4.4)} into \begin{equation} \aligned \label{(4.5)}\ \underset{x,y\in \mathbb{R}^d}{\min}\ \ &\frac{1}{n} \sum_{i=1}^{n} \log(1+\exp(-b_ia_{i}^{T} x))+\lambda\sum_{j=1}^{d}\min(\left | y_j \right |,\theta )\\ &s.t. \ \ x=y, \endaligned \end{equation} which can be written as \begin{equation} \label{(4.6)}\ \underset{x,y\in \mathbb{R}^d}{\min}\ \ \frac{1}{n} \sum_{i=1}^{n} \log(1+\exp(-b_ia_{i}^{T} x))+\lambda\sum_{j=1}^{d}\min(\left | y_j \right |,\theta )+\frac{\mu}{2}\left \| x-y \right \|^2.
\end{equation} \par Let \begin{equation} \aligned \label{(4.7)}\ &f(x)=\frac{1}{n} \sum_{i=1}^{n} \log(1+\exp(-b_ia_{i}^{T} x)),\ g(y)\equiv0,\\ &Q(x,y)=\lambda\sum_{j=1}^{d}\min(\left | y_j \right |,\theta )+\frac{\mu}{2}\left \| x-y \right \|^2. \endaligned \end{equation} \par It is easy to verify that all the functions satisfy the assumptions. However, when $n$ is large, $L_{\nabla f}$ is difficult to compute, so we cannot determine the range of the strong convexity modulus of the Bregman function. Therefore we use the backtracking strategy to solve the problem. Taking $\phi_2(y)=\frac{\eta }{2} \left \| y\right \|^{2}$, we can write out the $x$-subproblem and the $y$-subproblem of our algorithms as follows. \par The $x$-subproblem in \eqref{(4.6)} corresponds to the following optimization problem $$ \aligned x_{k+1} &\in\arg\min_{ x\in \mathbb{R}^d}\left \{ Q(x,\hat{y}_k )+\left \langle \nabla f(\hat{x}_k),x \right \rangle+D_{\phi_1}(x,\hat{x}_k) \right \} \\ &=\arg\min_{ x\in \mathbb{R}^d}\left \{ \frac{\mu}{2}\left \| x-\hat{y}_k \right \|^{2}+\left \langle \nabla f(\hat{x}_k),x \right \rangle+D_{\phi_1}(x,\hat{x}_k) \right \}.
\\ \endaligned $$ The $y$-subproblem in \eqref{(4.6)} corresponds to $$ \aligned y_{k+1} &\in\arg\min_{ y\in \mathbb{R}^d}\left \{ Q(x_{k+1},y)+\left \langle \nabla g(\hat{y}_k),y \right \rangle+D_{\phi_2}(y,\hat{y}_k) \right \} \\ &=\arg\min_{ y\in \mathbb{R}^d}\left \{ \frac{\mu}{2}\left \| x_{k+1}-y \right \|^{2}+\lambda\sum_{j=1}^{d}\min(\left | y_j \right |,\theta )+ \frac{\eta}{2}\left \| y-\hat{y}_k \right \|^{2} \right \} \\ &=\arg\min_{ y\in \mathbb{R}^d}\left \{ \lambda\sum_{j=1}^{d}\min(\left | y_j \right |,\theta )+ \frac{\mu+\eta}{2}\left \| y-\frac{1}{\mu+\eta}(\eta \hat{y}_k+\mu x_{k+1}) \right \|^{2} \right \} \\ &=\arg\min_{ y\in \mathbb{R}^d}\left \{ \frac{\lambda}{\mu+\eta}\sum_{j=1}^{d}\min(\left | y_j \right |,\theta )+ \frac{1}{2}\left \| y-\frac{1}{\mu+\eta}(\eta \hat{y}_k+\mu x_{k+1}) \right \|^{2} \right \}. \endaligned $$ \par For convenience, we set \begin{equation} \label{(4.8)}\ h(t,y,u_k)=\frac{\lambda}{t}\sum_{j=1}^{d}\min(\left | y_j \right |,\theta )+ \frac{1}{2}\left \| y-u_k \right \|^{2}, \end{equation} where $t=\mu+\eta$, $u_k=\frac{1}{\mu+\eta}(\eta \hat{y}_k+\mu x_{k+1}).$ Then $h(t,y,u_k)$ has a separable structure. It can be decomposed into $d$ one-dimensional subproblems. Let $\left ( u_{k} \right )_j$ be the $j$-th component of $u_k$. Then each component of $y_{k+1}$ can be calculated from \begin{equation} \label{(4.9)}\ \left ( y_{k+1} \right )_j=\arg\min_{ y_j\in \mathbb{R}}\left \{\frac{1}{2}\left( y_j-\left ( u_{k} \right )_j\right )^{2}+\frac{\lambda}{t}\min(\left | y_j \right |,\theta ) \right \},\ j=1,\dots, d, \end{equation} where $\left ( y_{k+1} \right )_j$ denotes the $j$-th component of $y_{k+1}$. 
Then \eqref{(4.9)} can be solved by comparing the minimizers over the two regions $\left | y_j \right |\ge\theta$ and $\left | y_j \right |\le\theta$: \begin{equation*} \tilde{y} _j=\underset{\left | y_j \right |\ge \theta}{\arg\min}\ \frac{1}{2}\left( y_j-\left ( u_{k} \right )_j\right )^{2}+\frac{\lambda}{t}\theta, \qquad \hat{y} _j=\underset{\left | y_j \right |\le \theta}{\arg\min}\ \frac{1}{2}\left( y_j-\left ( u_{k} \right )_j\right )^{2}+\frac{\lambda}{t}\left | y_j \right |. \end{equation*} These candidates have the closed forms \begin{equation*} \tilde{y} _j=\text{sign}(\left( u_{k} \right)_j)\max(\theta , | \left( u_{k} \right)_j | ), \qquad \hat{y} _j=\text{sign}(\left( u_{k} \right)_j)\min(\theta ,\max(0, | \left ( u_{k} \right )_j | -\tfrac{\lambda}{t})). \end{equation*} Set $h_j(t,y_j,\left( u_{k} \right)_j)=\frac{\lambda}{t}\min(\left | y_j \right |,\theta )+ \frac{1}{2}\left ( y_j-\left ( u_{k} \right )_j \right )^{2}$. Then $\left ( y_{k+1} \right )_j$ can be expressed as $$\left ( y_{k+1} \right )_j=\begin{cases} \tilde{y} _j& \text{ if } h_j(t,\tilde{y} _j,\left ( u_{k} \right )_j)\le h_j(t,\hat{y} _j,\left( u_{k} \right)_j), \\ \hat{y}_j& \text{ otherwise.} \end{cases}$$ \par In the experiment, we take $n = 500, d=200$, and the problem parameters are set as $\lambda =10^{-3},\theta =0.1\lambda $, the same setting as in \cite{GZL}. The backtracking parameters of the algorithm are set as $\rho =2,\delta =10^{-5}$. We select the starting point for all algorithms randomly and use $$E_k=\|x_{k+1}-x_k\|+\|y_{k+1}-y_k\|<10^{-5}$$ as the stopping criterion. In the numerical results, ``Iter." denotes the number of iterations, ``Time" denotes the CPU time, and ``Extrapolation" records the number of extrapolation steps taken.
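The componentwise capped-$l_1$ proximal step derived above can be sketched in NumPy (a minimal illustration with generic parameter values; the function name is ours, not the paper's):

```python
import numpy as np

def prox_capped_l1(u, lam_over_t, theta):
    """Componentwise solution of  min_y 0.5*(y - u)^2 + (lam/t)*min(|y|, theta)
    via the two-candidate comparison described in the text."""
    u = np.asarray(u, dtype=float)
    # candidate on {|y| >= theta}: project u onto that region
    y_tilde = np.sign(u) * np.maximum(theta, np.abs(u))
    # candidate on {|y| <= theta}: clipped soft-thresholding
    y_hat = np.sign(u) * np.minimum(theta, np.maximum(0.0, np.abs(u) - lam_over_t))

    def h(y):  # the objective h_j evaluated componentwise
        return 0.5 * (y - u) ** 2 + lam_over_t * np.minimum(np.abs(y), theta)

    return np.where(h(y_tilde) <= h(y_hat), y_tilde, y_hat)
```

Since the objective is separable, the same comparison is applied independently to every coordinate of $u_k$.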
To show the effectiveness of the proposed algorithms, we compare our algorithms with ASAP \cite{NT} and aASAP \cite{XX}. The extrapolation parameter of aASAP is $\alpha_k=0.3$. In Algorithm \ref{alg1}, we set $\alpha_k=0.3, \beta_k=0.2$, and we also consider the dynamically updated extrapolation parameters $\alpha_k=\beta_k=\frac{k-1}{k+2}$. For Algorithm \ref{alg2}, we set $\alpha_0=0.3, \beta_0=0.2$ as the initial extrapolation parameters and $t=1.5, \beta_{\max}=0.499$. As before, we use ``Alg. 1-i" and ``Alg. 2-i" to denote Algorithm \ref{alg1} and Algorithm \ref{alg2} with $\phi_1 (x)=\varphi_i(x)$ $(1\le i\le 2)$, respectively, where the extrapolation parameters are $\alpha_k=0.3, \beta_k=0.2$, and ``Alg. 1-i(F)" and ``Alg. 2-i(F)" for the variants with $\alpha_k=\beta_k=\frac{k-1}{k+2}$. Figure \ref{fig_sim2} shows the performance of the different algorithms. We also list the number of iterations, the CPU time, and the number of extrapolation steps of each algorithm in Table \ref{table2}. Panels (a) and (b) of Figure \ref{fig_sim2} report the results for different extrapolation parameters; Algorithm \ref{alg2} with adaptive extrapolation parameters performs best among all algorithms. Panel (c) of Figure \ref{fig_sim2} reports the results for different Bregman distances; the Itakura-Saito distance again has a computational advantage over the squared Euclidean distance. We also examine whether the BB rule improves the computational efficiency of each algorithm. The symbol ``$\sim$BB" in Table \ref{table3} and Figure \ref{fig_sim3} means that the algorithm adopts the BB rule with lower bound $t_{\min}=1.3$ in the backtracking process. The results show that using the BB rule to initialize the stepsize improves the efficiency of all algorithms.
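A standard BB-type initialization of the backtracking parameter can be sketched as follows (a generic illustration of the Barzilai-Borwein idea; the paper's exact variant may differ, and the function name is ours):

```python
import numpy as np

def bb_initial_stepsize(x, x_prev, g, g_prev, t_min=1.3):
    """Barzilai-Borwein-type initialization for the backtracking parameter:
    s'y / s's estimates the local curvature (a local Lipschitz constant),
    clipped below by the lower bound t_min as in the text."""
    s = np.asarray(x, dtype=float) - np.asarray(x_prev, dtype=float)
    y = np.asarray(g, dtype=float) - np.asarray(g_prev, dtype=float)
    sy = float(s @ y)
    ss = float(s @ s)
    if sy <= 0.0 or ss == 0.0:  # no positive curvature information: fall back
        return t_min
    return max(sy / ss, t_min)
```

For a quadratic $f(x)=\frac12 x^TAx$ the quotient $s^Ty/s^Ts$ is a Rayleigh quotient of $A$, so the estimate always lies between the extreme eigenvalues.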
\begin{table}[!ht] \centering \caption{Numerical results of different Bregman distances with different extrapolation parameters}\label{table2} \begin{tabular}{lllllllll} \hline \multicolumn{4}{c}{Itakura-Saito distance} & & \multicolumn{4}{c}{squared Euclidean distance} \\ \cline{1-4} \cline{6-9} Algorithm & Iter. & Time(s) & Extrapolation & &Algorithm & Iter. & Time(s) & Extrapolation \\ \hline ASAP & 39 &9.3456 & \multicolumn{1}{c}{38} & &ASAP& 71 &18.3569 &\multicolumn{1}{c}{70} \\ aASAP & 35 &8.3227 & \multicolumn{1}{c}{34} & &aASAP& 56 &14.0903 &\multicolumn{1}{c}{55} \\ Alg. 1-1 & 29 &6.9867 & \multicolumn{1}{c}{21} & &Alg. 1-2& 43 &11.0023 &\multicolumn{1}{c}{39} \\ Alg. 2-1 & 15 &3.4024 & \multicolumn{1}{c}{7} & &Alg. 2-2& 23 &6.1576 &\multicolumn{1}{c}{16} \\ Alg. 1-1(F) & 19&5.5435& \multicolumn{1}{c}{8} & & Alg. 1-2(F)& 33 &7.9012 &\multicolumn{1}{c}{20} \\ \hline \end{tabular} \end{table} \begin{figure*} \centering \caption{The value of $\|x_{k+1}-x_k\|+\|y_{k+1}-y_k\|$ against the number of iterations for the sparse logistic regression problem.} \label{fig_sim2} \end{figure*} \begin{table}[!ht] \centering \caption{Numerical results with the BB rule for different Bregman distances}\label{table3} \begin{tabular}{lllllllll} \hline \multicolumn{4}{c}{Itakura-Saito distance} & & \multicolumn{4}{c}{squared Euclidean distance} \\ \cline{1-4} \cline{6-9} Algorithm & Iter. & Time(s) & Extra. & &Algorithm & Iter. & Time(s) & Extra. \\ \hline ASAP$\sim$BB & 15 &4.3476 & \multicolumn{1}{c}{14} & &ASAP$\sim$BB& 25 &6.4702 &\multicolumn{1}{c}{24} \\ aASAP$\sim$BB & 14 &4.0324 & \multicolumn{1}{c}{13} & &aASAP$\sim$BB& 21 &6.0604 &\multicolumn{1}{c}{20} \\ Alg. 1-1$\sim$BB & 13 &3.9198 & \multicolumn{1}{c}{7} & &Alg. 1-2$\sim$BB& 19 &5.3523 &\multicolumn{1}{c}{10} \\ Alg. 2-1$\sim$BB & 9 &2.9745 & \multicolumn{1}{c}{5} & &Alg. 2-2$\sim$BB& 15 &4.1270 &\multicolumn{1}{c}{8} \\ Alg. 1-1(F)$\sim$BB & 11&3.5435& \multicolumn{1}{c}{6} & & Alg.
1-2(F)$\sim$BB&17 &4.9352 &\multicolumn{1}{c}{9} \\ \hline \end{tabular} \end{table} \begin{figure*} \centering \caption{The value of $\|x_{k+1}-x_k\|+\|y_{k+1}-y_k\|$ against the number of iterations with BB initialization.} \label{fig_sim3} \end{figure*} \section{Conclusion} In this paper, we introduced a two-step inertial Bregman alternating structure-adapted proximal gradient descent algorithm for solving nonconvex, nonsmooth, nonseparable optimization problems. Under suitable assumptions, we proved that the algorithm is a descent method in the sense of objective function values and that every cluster point of the generated sequence is a critical point of the objective function. Convergence of the proposed algorithms is established when the objective function satisfies the Kurdyka--{\L}ojasiewicz property. Furthermore, when the desingularizing function has a special form, we established linear and sublinear convergence rates for the sequence of function values generated by the algorithm. In addition, we proposed a backtracking strategy with the BB method to make our algorithms more flexible when the Lipschitz constant is unknown or difficult to compute. In the numerical experiments, we applied different Bregman distances to nonconvex quadratic programming and sparse logistic regression problems. The reported numerical results demonstrate the effectiveness of the proposed algorithms. \end{document}
\begin{document} \title[A criterion for zero averages and support of ergodic measures]{A criterion for zero averages and full support of ergodic measures} \date{Jan 26, 2012} \author[Ch.~Bonatti]{Christian Bonatti} \address{Institut de Math\'ematiques de Bourgogne} \email{[email protected]} \author[L.~J.~D\'\i az]{Lorenzo J.~D\'\i az} \address{Departamento de Matem\'atica, Pontif\'{\i}cia Universidade Cat\'olica do Rio de Janeiro} \email{[email protected]} \author[J.~Bochi]{Jairo Bochi} \address{Facultad de Matem\'aticas, Pontificia Universidad Cat\'olica de Chile} \email{[email protected]} \begin{abstract} Consider a homeomorphism $f$ defined on a compact metric space $X$ and a continuous map $\phi\colon X \to \mathbb{R}$. We provide an abstract criterion, called \emph{control at any scale with a long sparse tail}, for a point $x\in X$ and the map $\phi$, that guarantees that any weak$\ast$ limit measure $\mu$ of the Birkhoff averages of Dirac measures $\frac1n\sum_0^{n-1}\delta(f^i(x))$ is such that $\mu$-almost every point $y$ has a dense orbit in $X$ and the Birkhoff average of $\phi$ along the orbit of $y$ is zero. As an illustration of the strength of this criterion, we prove that the diffeomorphisms with nonhyperbolic ergodic measures form a $C^1$-open and dense subset of the set of robustly transitive partially hyperbolic diffeomorphisms with one-dimensional nonhyperbolic central direction. We also obtain applications for nonhyperbolic homoclinic classes. \end{abstract} \begin{thanks}{This research has been supported [in part] by CAPES - Ci\^encia sem fronteiras, CNE-Faperj, and CNPq-grants (Brazil), EU Marie-Curie IRSES ``Brazilian-European partnership in Dynamical Systems" (FP7-PEOPLE-2012-IRSES 318999 BREUDS), and Fondecyt project 1140202 (Chile). The authors acknowledge the hospitality of PUC-Rio and IMB.
LJD thanks the hospitality and support of ICERM - Brown University during the thematic semester ``Fractal Geometry, Hyperbolic Dynamics, and Thermodynamical Formalism".} \end{thanks} \keywords{Birkhoff average, ergodic measure, Lyapunov exponent, nonhyperbolic measure, partial hyperbolicity, transitivity} \subjclass[2000]{ 37D25, 37D35, 37D30, 28D99 } \maketitle \section{Introduction} \subsection{Motivation and general setting} This work is part of a long-term project to attack the following general problem, which rephrases the opening question in \cite{GIKN} from a different perspective: \emph{To what extent does ergodic theory detect the nonhyperbolicity of a dynamical system?} More precisely, we say that a diffeomorphism $f$ is \emph{nonhyperbolic} if its non-wandering set is nonhyperbolic. We aim to know if such $f$ possesses \emph{nonhyperbolic ergodic measures} (i.e., with some zero \emph{Lyapunov exponent}) and if some of them fully reflect the nonhyperbolic behaviour of $f$. For instance, we would like to know \begin{itemize} \item what is their support, \item what is their entropy, and \item how many Lyapunov exponents of the measures are zero. \end{itemize} In this generality, the answer to this question is negative. There are simple examples of (even analytic) nonhyperbolic dynamical systems whose invariant measures are all hyperbolic, even with Lyapunov exponents uniformly far from zero; see for instance the logistic map $t\mapsto 4t(1-t)$ or the surgery examples in \cite{BBS}, where a saddle of a uniformly hyperbolic set is replaced by non-uniformly hyperbolic sets, among others (more examples of a different nature can be found in \cite{CLR,LOR}). Nevertheless, these examples are very specific and fragile.
Thus, one hopes that the ``great majority" of nonhyperbolic systems have nonhyperbolic ergodic measures which detect and truly reflect the nonhyperbolic behaviour of the dynamics. Concerning this sort of question, a first wave of results, initiated with \cite{GIKN}, continued in \cite{DG,BDG}, and culminating in \cite{CCGWY}, shows that the existence of nonhyperbolic ergodic measures for nonhyperbolic dynamical systems is quite general in the $C^1$-setting: for $C^1$-generic diffeomorphisms, every nonhyperbolic homoclinic class supports a nonhyperbolic ergodic measure; furthermore, under quite natural hypotheses the support of the measure is the whole homoclinic class\footnote{See \cite[Main Theorem]{CCGWY} and also \cite[Theorem B and Proposition 1.1]{CCGWY}. This last result states that the support of the nonhyperbolic measure in a nonhyperbolic homoclinic class of a saddle is the whole homoclinic class. This result requires neither that the stable/unstable splitting of the saddle extends to a dominated splitting on the class (compare with \cite{BDG}) nor that the homoclinic class contains saddles of different type of hyperbolicity (compare with \cite{DG}).}. Given a periodic point $p$ of a diffeomorphism $f$, denote by $\mu_{\cO(p)}$ the unique $f$-invariant measure supported on the orbit of $p$. We say that such a measure is {\emph{periodic}}. The previous works follow the strategy of periodic approximations in \cite{GIKN} for constructing a nonhyperbolic ergodic measure as a weak$\ast$ limit of periodic measures $\mu_{\cO(p_n)}$ supported on orbits $\cO(p_n)$ of hyperbolic periodic points $p_n$ with decreasing ``amount of hyperbolicity". The main difficulty is to obtain the ergodicity of the limit measure. \cite{GIKN} provides a criterion for ergodicity, summarised in rough terms as follows.
Each periodic orbit $\mathcal{O}(p_n)$ consists of two parts: a ``shadowing part" where $\mathcal{O}(p_n)$ closely shadows the previous orbit $\mathcal{O}(p_{n-1})$ and a ``tail" where the orbit is far from the previous one. To get an ergodic limit measure one needs some balance between the ``shadowing" and the ``tail" parts of the orbits. The ``tail part" is used to decrease the amount of hyperbolicity of a given Lyapunov exponent (see \cite{GIKN}) and also to spread the support of the limit measure (see \cite{BDG}). Nonhyperbolic measures seem very fragile, as small perturbations may turn the zero Lyapunov exponent into a nonzero one. However, \cite{KN} obtains (using the method in \cite{GIKN}) certain $C^1$-open sets of diffeomorphisms having nonhyperbolic ergodic measures. Bearing this result in mind, it is natural to ask if the existence of nonhyperbolic measures is a $C^1$-open and dense property in the space of nonhyperbolic diffeomorphisms. In this direction, \cite[Theorem 4]{BoBoDi2} formulates an abstract criterion called \emph{control at any scale}\footnote{This construction also involves the so-called {\emph{flip-flop families}}; we will review these notions below, as they play an important role in our constructions.} that leads to the following result (see \cite[Theorems 1 and 3]{BoBoDi2}): \emph{The $C^1$-interior of the set of diffeomorphisms having a nonhyperbolic ergodic measure contains an open and dense subset of the set of $C^1$-diffeomorphisms having a pair of hyperbolic periodic points of different indices robustly in the same chain recurrence class.} The method in \cite{BoBoDi2} provides a partially hyperbolic invariant set with positive topological entropy whose central Lyapunov exponent vanishes uniformly. This set only supports nonhyperbolic measures, and the existence of a measure with positive entropy is a consequence of the variational principle for entropy~\cite{walters}.
A drawback of this method is that the ``completely" nonhyperbolic nature of the (obtained) set where a Lyapunov exponent vanishes uniformly prevents the measures from having full support in nonhyperbolic chain recurrence classes. This shows that, in some sense, the criterion in \cite{BoBoDi2} may be ``too demanding" and ``rigid". The aim of this paper is to introduce a new criterion that relaxes the ``control at any scale" criterion and allows us to get nonhyperbolic measures with ``full support" (in the appropriate ambient space: homoclinic class, chain recurrence class, or the whole manifold, according to the case). To be a bit more precise, given a point $x$ and a diffeomorphism $f$, consider the {\emph{empirical measures}} $\mu_n(x)$, $n\in \NN$, associated to $x$, defined as the averages of the Dirac measures $\delta(f^i(x))$ along the orbit segment $\{x,\dots,f^{n-1}(x)\}$, \begin{equation} \label{e.empiricalmeasure} \mu_n(x)\eqdef \frac{1}{n}\, \sum_{i=0}^{n-1} \delta (f^i(x)). \end{equation} The criterion in this paper, called {\emph{control at any scale with a long sparse tail}} with respect to a continuous map $\varphi$ of a point $x$, allows us to construct ergodic measures with full support (in the appropriate ambient space) and a prescribed average with respect to $\varphi$, see Theorem~\ref{t.accumulation}. This construction involves two main aspects of different nature: density of the orbits of $\mu$-generic points and control of averages. The existence of ergodic measures satisfying both properties is a consequence of the construction. An especially interesting case occurs when the map $\varphi$ is the derivative of a diffeomorphism with respect to a continuous one-dimensional center direction (taking positive and negative values).
In such a case we get that every measure $\mu$ that is a weak$\ast$ limit of a sequence of empirical measures of $x$ is such that $\mu$-almost every point has a zero Lyapunov exponent and a dense orbit (in the corresponding ambient space), see Theorems~\ref{t.cycle} and \ref{t.ctail}. To state the dynamical consequences of the criterion more precisely, let us introduce some notation (the precise definitions can be found below). In what follows we consider a boundaryless compact Riemannian manifold $M$ and the following two $C^1$-open subsets of diffeomorphisms: \begin{itemize} \item The set $\cR\cT(M)$ of all {\emph{robustly transitive diffeomorphisms}}\footnote{A diffeomorphism is called transitive if it has a dense orbit. The diffeomorphism is $C^1$-robustly transitive if $C^1$-nearby diffeomorphisms are also transitive.} with a partially hyperbolic splitting with one-dimensional (nonhyperbolic) center, \item The set $\cZ(M)$ defined as the $C^1$-interior of the set of $C^1$-diffeomorphisms having a nonhyperbolic ergodic measure with full support in $M$. \end{itemize} As an application of our criterion we get that the set $\cZ(M)\cap \cR\cT(M)$ is $C^1$-open and $C^1$-dense in $\cR\cT(M)$, see Theorem~\ref{t.c.openanddense}. We also get semi-local versions of this result formulated in terms of nonhyperbolic homoclinic classes and/or chain recurrence classes, see Theorems~\ref{t.cycle} and~\ref{t.ctail}. These results turn the $C^1$-generic statements in \cite{BDG} into $C^1$-open and $C^1$-dense ones. We observe that a similar result involving different methods was announced in \cite{BJ}\footnote{\label{fn}The construction in \cite{BJ} combines the criteria of periodic approximations in \cite{GIKN} and of the control at any scale in \cite{BoBoDi2} with a shadowing lemma by Gan-Liao, \cite{G}.}.
Applications of the criterion in hyperbolic-like contexts, for instance full shifts and horseshoes, are discussed in Section~\ref{ss.medias}. In this paper we restrict ourselves to the control of the support and the averages of the measures, omitting questions related to the entropy of these measures. Nevertheless, it seems that our method is well suited to construct nonhyperbolic ergodic measures with positive entropy and full support. This is the next step of an ongoing project whose ingredients involve tools of a very different nature, beyond the scope of this paper. In the dynamical applications we focus on partially hyperbolic diffeomorphisms with a one-dimensional center bundle, and therefore the measures may have at most one zero Lyapunov exponent. Here we do not consider the case of higher-dimensional central bundles and the possible occurrence of multiple zero exponents. Up to now, there are quite few results on multiple zero Lyapunov exponents. The simultaneous control of several exponents is much more difficult, essentially due to the non-commutativity of $\mathrm{GL}(n,\mathbb{R})$ for $n>1$. We refer to \cite{BoBoDi} for examples of ($C^1$ and $C^2$) robust existence of ergodic measures with multiple zero exponents in the context of iterated function systems. Recently, \cite{WZ} announced the locally $C^1$-generic vanishing of several Lyapunov exponents in homoclinic classes of diffeomorphisms. We now describe our methods and results in a more detailed way. \subsection{A criterion for controlling averages of continuous maps} \label{ss.abstractcriterion} Consider a compact metric space $(X,d)$, a homeomorphism $f$ defined on $X$, and a continuous map $\varphi\colon X \to \mathbb{R}$. Given a point $x\in X$, consider the set of {\emph{empirical measures}} $\mu_n(x)$ associated to $x$ defined as in \eqref{e.empiricalmeasure}.
Consider the following notation for finite Birkhoff averages of $\varphi$, \begin{equation}\label{e.birkhoffaverages} \varphi_n(x) \eqdef \frac{1}{n}\, \sum_{i=0}^{n-1} \varphi (f^i(x)), \end{equation} and limit averages of $\varphi$, \begin{equation} \label{e.notationBirkhoff} \varphi_\infty (x) \eqdef \lim_{n\to+\infty}\ \varphi_n (x)= \lim_{n\to+\infty}\frac 1n\sum_{i=0}^{n-1}\varphi(f^i(x)), \end{equation} if such a limit exists. Consider a measure $\mu$ that is a weak$\ast$ limit of empirical measures of $x$ and a subsequence $\mu_{n_k}(x)$ with $\mu_{n_k}(x)\to \mu$ in the weak$\ast$ topology. The convergence of the sequence of Birkhoff averages $\int \varphi \, d\mu_{n_k}(x)$ to some limit $\alpha$ implies that $\int \varphi\, d\mu=\alpha$. But since $\mu$ may be non-ergodic, this does not provide any information about the Birkhoff averages $\varphi_n(y)$ of $\mu$-generic points $y$. We aim for a criterion guaranteeing that $\mu$-generic points have the same limit average as $x$. Naively, in \cite{BoBoDi2} the way to get this property is to require that ``all large orbit intervals of the forward orbit of $x$ have average close to the limit average (say) $\alpha$". This was formalised in the criterion \emph{control of Birkhoff averages at any scale} of a point $x$ with respect to a map $\varphi$ in \cite{BoBoDi2}. This criterion implies that there are sequences of times $t_n\to\infty$ and of ``errors" $\varepsilon_n\to0$ such that every orbit interval with length $t\ge t_n$ of the forward orbit of $x$ has $\varphi$-Birkhoff average in $[\alpha-\varepsilon_n, \alpha+\varepsilon_n]$. When the $\varphi$-Birkhoff averages are controlled at any scale, the $\varphi$-Birkhoff averages of any $\omega$-limit point of $x$ converge uniformly to $\alpha$ (see \cite[Lemma 2.2]{BoBoDi2}).
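As a toy numerical illustration of the finite Birkhoff averages $\varphi_n(x)$ above (our example, not from the paper): for a uniquely ergodic system such as an irrational circle rotation, the averages converge for every point to the space average, which is $0$ for $\varphi(x)=\cos(2\pi x)$.

```python
import math

def birkhoff_average(f, phi, x, n):
    """Finite Birkhoff average phi_n(x) = (1/n) * sum_{i<n} phi(f^i(x))."""
    total = 0.0
    for _ in range(n):
        total += phi(x)
        x = f(x)
    return total / n

# Irrational rotation x -> x + alpha (mod 1), alpha the golden mean,
# with the observable phi(x) = cos(2*pi*x) of zero mean.
alpha = (math.sqrt(5) - 1) / 2
rotation = lambda x: (x + alpha) % 1.0
phi = lambda x: math.cos(2 * math.pi * x)
```

For hyperbolic or nonhyperbolic systems the averages depend on the point, which is exactly why a criterion controlling them along a single orbit is needed.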
To get a limit measure whose support is the whole ambient space, the requirement that ``all long orbit intervals satisfy the limit average property" is extremely restrictive. Roughly, in the criterion in this paper we only require that ``most large orbit intervals of the forward orbit of $x$ have average close to the limit average $\alpha$". Let us explain this rough idea a little more precisely. If the limit measure has full support, then the orbit of the point $x$ must necessarily visit ``all regions" of the ambient space, and these visits require an arbitrarily large time. Moreover, to get limit measures whose generic points have dense orbits in the ambient space, these ``long visits" must occur with some frequency. During these long visits the control of the averages can be lost. To control simultaneously the Birkhoff averages and the support of the limit measure, one needs some ``balance" between the part of the orbit where there is a ``good control of the averages" and the part of the orbit used for spreading the support of the measure to get its density (roughly, these parts play the roles of the ``shadowing" and ``tail" parts of the method in \cite{GIKN}). The criterion in this paper formalizes an abstract notion for this balance that we call \emph{control at any scale with a long sparse tail with respect to $\varphi$ and $X$} (see Definitions~\ref{d.alphacontrol} and \ref{d.controledtail}). Our main technical result is that this criterion provides ergodic measures having simultaneously a prescribed average and a prescribed support. \begin{theo}\label{t.accumulation} Let $(X,d)$ be a compact metric space, $f\colon X\to X$ a homeomorphism, and $\varphi \colon X\to \mathbb{R}$ a continuous map.
Consider \begin{itemize} \item a point $x_0\in X$ that is controlled at any scale with a long sparse tail with respect to $\varphi$ and $X$ and \item a measure $\mu$ that is a weak$\ast$ limit of the sequence of empirical measures $(\mu_n(x_0))_n$ of $x_0$. \end{itemize} Then for $\mu$-almost every point $x$ the following holds: \begin{enumerate} \item the forward orbit of $x$ for $f$ is dense in $X$ and \item $\lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \varphi (f^i(x))= \int \varphi \, d\mu$. \end{enumerate} In particular, these two assertions hold for almost every ergodic measure of the ergodic decomposition of $\mu$. \end{theo} We now exhibit some dynamical configurations where the criterion holds. Indeed, we will see that such configurations are quite ``frequent". \subsection{Flip-flop families with sojourns: control at any scale with a long sparse tail} \label{ss.flipflps} To present a mechanism providing orbits controlled at any scale, we borrow the following definition from \cite{BoBoDi2}: \begin{defi}[Flip-flop family]\label{d.flipflop} Let $(X,d)$ be a compact metric space, $f\colon X \to X$ a homeomorphism, and $\varphi\colon X \to \mathbb{R}$ a continuous function. A \emph{flip-flop family} associated to $\varphi$ and $f$ is a family $\mathfrak{F}=\mathfrak{F}^+\bigsqcup\mathfrak{F}^-$ of compact subsets of $X$ such that there are $\alpha>0$ and a sequence of numbers $(\zeta_n)_n$, $\zeta_n>0$ and $\zeta_n\to 0$ as $n\to \infty$, such that: \begin{enumerate} \item\label{i.flipflop1} for every $D\in\mathfrak{F}^+$ (resp. $D\in\mathfrak{F}^-$) and every $x\in D$ it holds $\varphi(x)\geq\alpha$ (resp.
$\varphi(x)\leq -\alpha$); \item\label{i.flipflop2} for every $D\in \mathfrak{F}$, there are sets $D^+\in \mathfrak{F}^+$ and $D^-\in\mathfrak{F}^-$ contained in $f(D)$; \item\label{i.flipflop33} for every $n>0 $ and every family of sets $D_i\in \mathfrak{F}$, $i\in\{0,\dots ,n\}$, with $D_{i+1}\subset f(D_i)$ it holds $$ d(f^{n-i}(x),f^{n-i}(y))\leq\zeta_i\cdot d(f^n(x), f^n(y)) $$ for every $i\in\{0,\dots,n\}$ and every pair of points $x,y\in f^{-n}(D_n)$. \end{enumerate} We call {\emph{plaques}}\footnote{We pay special attention to the case when the sets of the flip-flop family are discs tangent to a strong unstable cone field. This justifies the name.} the sets of the flip-flop family $\cF$. \end{defi} With the notation in Definition~\ref{d.flipflop}, \cite[Theorem 2.1]{BoBoDi2} claims that for every number $t\in (-\alpha,\alpha)$ and every set $D\in \mathfrak{F}$ there is a point $x_t\in D$ whose orbit is controlled at any scale for the function $\varphi_t=\varphi-t$. Hence the Birkhoff average of $\varphi$ along the orbit of any point $y\in \omega(x_t)$ is $t$. Furthermore, the $\omega$-limit set of $x_t$ has positive topological entropy. Since we aim to obtain measures with full support, we need to relax the control of the averages. For that we introduce a ``sojourn condition" for the returns of the sets of the flip-flop family (item~\eqref{i.defff0} in the definition below). These ``sojourns" will be used to get dense orbits and to spread the support of the measures, and play a role similar to the ``tails" in \cite{GIKN}. \begin{defi}[Flip-flop family with sojourns]\label{d.flipfloptail} Let $(X,d)$ be a compact metric space, $Y$ a compact subset of $X$, $f\colon X \to X$ a homeomorphism, and $\varphi\colon X \to \mathbb{R}$ a continuous function.
Consider a flip-flop family $\frak{m}athfrak{F}=\frak{m}athfrak{F}^+\bigsqcup\frak{m}athfrak{F}^-$ associated to $\varphi$ and $f$. We say that the {\varepsilon}mph{flip-flop family $\frak{m}athfrak{F}$ has sojourns along $Y$} (or that {\varepsilon}mph{$\frak{m}athfrak{F}$ sojourns along $Y$}) if for every $\deltalta>0$ there is an integer $N=N_\deltalta$ such that every plaque $D\in\frak{m}athfrak{F}$ contains subsets $\widehat D^+, \widehat D^-$ such that: \begin{enumerate} \item\lambdabel{i.defff0} for every $x\in \widehat D^+\cup \widehat D^-$ the orbit segment $\{x,\dots, f^N(x)\}$ is $\deltalta$-dense in $Y$ (i.e., the $\deltalta$-neighbourhood of the orbit segment contains $Y$); \item\lambdabel{i.defff1} $f^N(\widehat D^+)\in \frak{m}athfrak{F}^+$ and $f^N(\widehat D^-)\in\frak{m}athfrak{F}^-$; \item\lambdabel{i.defff2} for every $i\in\{0,\dots, N\}$ and every pair of points $x,y\in \widehat D^+$ or $x, y\in \widehat D^-$ it holds $$ d(f^{N-i}(x),f^{N-i}(y))\leq\zeta_i\cdot d(f^N(x), f^N(y)), $$ where $(\zeta_i)_i$ is a sequence as in Definition~\ref{d.flipflop}. {\varepsilon}nd{enumerate} {\varepsilon}nd{defi} The conditions in Definition~\ref{d.flipfloptail} are depicted in Figure~\ref{f.ffs}. \begin{figure} \begin{minipage}[h]{\linewidth} \centering \begin{overpic}[scale=.60, ]{P_L3.pdf} \put(9,33){\small$\frak{m}athfrak{F}^+$} \put(88,33){\small$\frak{m}athfrak{F}^-$} \put(18,31){\small$D$} \put(18,17){\small$D^+$} \put(20,9){\small$D^-$} \put(18.5,23.5){\small$\widehat D^-$} \put(46,28){\small$Y$} {\varepsilon}nd{overpic} \caption{A flip-flop family with sojourns: the plaques $D^+$, $D^-$, and $\widehat D^-$ (the plaque $\widehat D^+$ is omitted for visual simplicity).
} \lambdabel{f.ffs} {\varepsilon}nd{minipage} {\varepsilon}nd{figure} The next theorem corresponds to \cite[Theorem 2.1]{BoBoDi2} in our setting: \begin{theo}\lambdabel{t.flipfloptail} Let $(X,d)$ be a compact metric space, $Y$ a compact subset of $X$, $f\colon X \to X$ a homeomorphism, and $\varphi\colon X \to \frak{m}athbb{R}$ a continuous function. Consider a flip-flop family $\frak{m}athfrak{F}$ associated to $\varphi$ and $f$ having sojourns along $Y$. Then every plaque $D\in \frak{m}athfrak{F}$ contains a point $x_D\in D$ that is controlled at any scale with a long sparse tail with respect to $\varphi$ and $Y$. {\varepsilon}nd{theo} As a corollary of Theorems~\ref{t.accumulation} and \ref{t.flipfloptail} we get (recall the notation for Birkhoff limits in {\varepsilon}qref{e.notationBirkhoff}): \begin{mcor}\lambdabel{c.flipfloptail} Under the hypotheses of Theorem~\ref{t.flipfloptail} and with the same notation, any measure $\frak{m}u$ that is a weak$\ast$ limit of the empirical measures $(\frak{m}u_n(x_D))_n$ satisfies the following properties: \begin{itemize} \item the orbit of $\frak{m}u$-almost every point is dense in $Y$ and \item for $\frak{m}u$-almost every point $x$ it holds $\varphi_\infty (x)=0$. {\varepsilon}nd{itemize} As a consequence, almost every measure $\nu$ in the ergodic decomposition of $\frak{m}u$ has full support in $Y$ and satisfies $\int\varphi\, d\nu=0$. {\varepsilon}nd{mcor} We now explore some consequences of the results above. \subsection{Birkhoff averages in homoclinic classes}\lambdabel{ss.medias} An important property of our methods is that they can be used in nonhyperbolic and non-Markovian settings. We now present two applications of our criteria in the ``hyperbolic'' setting of a mixing sub-shift of finite type that are, as far as we are aware, unknown. The key point of Proposition~\ref{p.r.hyperboliclike} is that it only requires continuity of the potential $\varphi$.
When the potential is H\"older continuous this sort of result is well-known\footnote{For instance, techniques from multifractal analysis provide the following: Given a H\"older continuous function $\varphi$, there is a parametrised family of Gibbs states $\frak{m}u_t$, $t\in(\alpha,\beta)$, where $\alpha,\beta$ are as above, such that $\int \varphi \,d\frak{m}u_t=t$. Each $\frak{m}u_t$ has full support and positive entropy. The conclusion in this statement is stronger than the one in b) as it also guarantees positive entropy. For a survey of this topic see for instance \cite{PW}.}. \begin{mprop}\lambdabel{p.r.hyperboliclike} Let $\sigma\colon \Sigma\to\Sigma$ be a mixing sub-shift of finite type and $\varphi\colon\Sigma\to\frak{m}athbb{R}$ a continuous function. Let $\alpha$ and $\beta$ be the infimum and supremum, respectively, of $\int \varphi\, d\frak{m}u$ over the set of $\sigma$-invariant probability measures $\frak{m}u$ (or equivalently of the Birkhoff averages along periodic orbits). Then for every $t\in(\alpha,\beta)$ the following holds: \begin{enumerate} \item {{\varepsilon}m (Application of the criterion in \cite{BoBoDi2})} There is a $\sigma$-invariant compact set $K_t$ with positive topological entropy such that the Birkhoff average of $\varphi$ along the orbit of any point in $K_t$ is $t$. \item {{\varepsilon}m (Application of the new criterion)} There is an ergodic measure $\frak{m}u_t$ with full support in $\Sigma$ such that $\int \varphi\, d\frak{m}u_t=t$. {\varepsilon}nd{enumerate} {\varepsilon}nd{mprop} This proposition deals with systems satisfying {\varepsilon}mph{specification properties}. An important property of our two criteria is that they do not involve and do not depend on specification-like properties. Indeed, they are introduced to control averages of functions in partially hyperbolic settings where specification fails. We now present an application of our criterion in settings without specification properties.
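Before moving on, the averaging mechanism behind part (2) of Proposition~\ref{p.r.hyperboliclike} can be made concrete on the full $2$-shift. The sketch below is a toy illustration (the potential, the words, and the numbers are our own illustrative choices, not part of the proposition): for a potential depending only on the $0$th symbol, periodic orbits already realise every rational average between the extreme values $\alpha$ and $\beta$.

```python
from fractions import Fraction

def birkhoff_average(word, phi):
    """Birkhoff average of phi along the periodic orbit coded by `word`
    (for a potential depending only on the 0th symbol this is just the
    average of phi over one period)."""
    return Fraction(sum(phi(s) for s in word), len(word))

# phi = +1 on the cylinder [1] and -1 on the cylinder [0], so the
# extremes alpha = -1 and beta = +1 are realised by the fixed points
# 0^infinity and 1^infinity
phi = lambda s: 1 if s == 1 else -1

# a period-4 word with three 1s and one 0 realises the average 1/2
print(birkhoff_average((0,), phi), birkhoff_average((1, 1, 1, 0), phi))  # -1 1/2
```

Realising such an intermediate average by an *ergodic measure with full support* is exactly the extra content of the new criterion.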
In what follows let $M$ be a boundaryless compact Riemannian manifold and $\operatorname{Diff}^1(M)$ the space of $C^1$-diffeomorphisms endowed with the standard $C^1$-topology. The {{\varepsilon}mph{homoclinic class}} of a hyperbolic periodic point $q$ of a diffeomorphism $f\in\operatorname{Diff}^1(M)$, denoted by $H(q,f)$, is the closure of the set of transverse intersection points of the stable and unstable manifolds of the orbit of $q$. Two hyperbolic periodic points $p$ and $q$ of $f$ are {\varepsilon}mph{homoclinically related} if the stable and unstable manifolds of their orbits intersect cyclically and transversely. The homoclinic class of $q$ can also be defined as the closure of the periodic points of $f$ that are homoclinically related to $q$. A homoclinic class is a {\varepsilon}mph{transitive set} (existence of a dense orbit) whose periodic points form a dense subset of it. Homoclinic classes are in many cases the ``elementary pieces of the dynamics'' of a diffeomorphism and are used to structure its dynamics, playing a role similar to that of the basic sets of the hyperbolic theory (indeed each basic set is a homoclinic class); for a discussion see the survey \cite{B}. The {{\varepsilon}mph{${\frak{m}athrm{u}}$-index}} of a hyperbolic periodic point is the dimension of its unstable bundle. We analogously define the {{\varepsilon}mph{${\frak{m}athrm{s}}$-index}}. Two saddles which are homoclinically related necessarily have the same ${\frak{m}athrm{u}}$- and ${\frak{m}athrm{s}}$-indices. However, two saddles with {\varepsilon}mph{different indices} (it is not necessary to specify the index type) may be in the same homoclinic class. In such a case the class is necessarily nonhyperbolic. Indeed, the property of a homoclinic class containing saddles of different indices is a typical feature in the nonhyperbolic dynamics studied in this paper (see also \cite{Sh,Mda,BD-robtran}).
The next result is a generalisation of the second part of Proposition~\ref{p.r.hyperboliclike} to a not necessarily hyperbolic context; observe that we do not require hyperbolicity of the homoclinic class. Recall that if $p$ is a periodic point of $f$ we denote by $\frak{m}u_{\cO(p)}$ the $f$-invariant probability measure supported on the orbit of $p$. \begin{theo}\lambdabel{t.homoclinic} Let $f\colon M\to M$ be a $C^1$-diffeomorphism defined on a boun\-dary\-less compact manifold and $\varphi\colon M\to \frak{m}athbb{R}$ a continuous function. Consider a pair of hyperbolic periodic points $p$ and $q$ of $f$ that are homoclinically related and satisfy $$ a_p{\varepsilon}qdef\int \varphi\, d\frak{m}u_{\cO(p)}<\int \varphi \,d\frak{m}u_{\cO(q)} {\varepsilon}qdef a_q. $$ Then for every $t\in(a_p,a_q)$ there is an ergodic measure $\frak{m}u_t$ whose support is the whole homoclinic class $H(p,f)=H(q,f)$ and satisfies $\int\varphi\, d\frak{m}u_t=t.$ {\varepsilon}nd{theo} Note that the hypotheses in the theorem are $C^1$-open. Observe that the difficulty in the theorem is to get simultaneously the three properties {\varepsilon}mph{ergodicity}, {\varepsilon}mph{prescribed average}, and {\varepsilon}mph{full support}. It is easier (and also known) to build measures satisfying simultaneously only two of these properties. We also aim to apply the criterion in Theorem~\ref{t.accumulation} to saddles $p$ and $q$ that have different indices and are in the same homoclinic class (or, more generally, chain recurrence class) and thus the saddles are not homoclinically related. Before stating the next corollary let us recall the definition of a chain recurrence class. Given $\varepsilonilon>0$, a finite sequence of points $(x_i)_{i=0}^n$ is an {{\varepsilon}mph{$\varepsilonilon$-pseudo-orbit}} of a diffeomorphism $f$ if $d(f(x_i),x_{i+1})<\varepsilonilon$ for every $i=0,\dots,n-1$ (here $d$ denotes the distance in $M$).
A point $x$ is {{\varepsilon}mph{chain recurrent}} for $f$ if for every $\varepsilonilon>0$ there is an $\varepsilonilon$-pseudo-orbit $(x_i)_{i=0}^n$ with $x_0=x=x_n$. The {{\varepsilon}mph{chain recurrent set}} of $f$, denoted by $\cR(f)$, is the union of the chain recurrent points of $f$. The {\varepsilon}mph{chain recurrence class} $C(x,f)$ of a point $x\in \cR(f)$ is the set of points $y$ such that for every $\varepsilonilon>0$ there are $\varepsilonilon$-pseudo-orbits joining $x$ to $y$ and $y$ to $x$. Two chain recurrence classes are either disjoint or equal. Thus the set $\cR(f)$ is the union of pairwise disjoint chain recurrence classes. Let us observe that two points in the same homoclinic class are also in the same chain recurrence class (the converse is false in general, although $C^1$-generically homoclinic classes and chain recurrence classes of periodic points coincide, see \cite{BC}). Thus if $p$ is a hyperbolic periodic point then $H(p,f)\subseteq C(p,f)$. \begin{mcor}\lambdabel{c.function} Let $M$ be a boundaryless compact manifold and $\cU$ be a $C^1$-open set in $\operatorname{Diff}^1(M)$ such that every $f\in \cU$ has a pair of hyperbolic periodic orbits $p_f$ and $q_f$ of different indices depending continuously on $f$ whose chain recurrence classes are equal. Let $\varphi\colon M\to\frak{m}athbb{R}R$ be a continuous function such that $$ \int \varphi\, d\frak{m}u_{\cO(p_f)}<0<\int \varphi \,d\frak{m}u_{\cO(q_f)}, \quad \frak{m}box{for every $f\in \cU$}. $$ Then there are two $C^1$-open sets $\cV_p$ and $\cV_q$ whose union is $C^1$-dense in $ \cU$ such that every $f\in\cV_p$ (resp. $f\in\cV_q$) has an ergodic measure $\frak{m}u_f$ whose support is the homoclinic class $H(p_f,f)$ (resp. $H(q_f,f)$) and satisfies $\int \varphi\, d\frak{m}u_f=0$. {\varepsilon}nd{mcor} Note that the saddles in the corollary cannot be homoclinically related and hence Theorem ~\ref{t.homoclinic} cannot be applied. 
We bypass this difficulty by transferring the desired averages to pairs of homoclinically related periodic points (then the proof follows from Theorem~\ref{t.homoclinic}), see Section~\ref{ss.proofofcorollarycfunction} for the proof of the corollary. \begin{rema} \lambdabel{r.bdpr} By \cite[Theorem E]{BDPR}, if in Corollary~\ref{c.function} we assume that the chain recurrence class is partially hyperbolic with one-dimensional center (see definition below) then there is a $C^1$-open and dense subset $\cV$ of $\cU$ such that $H(p_g,g)=H(q_g,g)$ for all $g\in \cV$. Without this extra hypothesis the equality of the homoclinic classes is only guaranteed for a residual subset of $\cU$, see \cite{BC}. {\varepsilon}nd{rema} \subsection{Nonhyperbolic ergodic measures with full support}\lambdabel{ss.nonhyperbolicfull} In what follows we focus on {\varepsilon}mph{partially hyperbolic diffeomorphisms with one-dimensional center}. Our aim is to get results as above when $\varphi$ is the ``logarithm of the center derivative". This will allow us to obtain nonhyperbolic ergodic measures with large support in quite general nonhyperbolic settings. Before going to the details we need some definitions. Given a diffeomorphism $f$ we say that a compact $f$-invariant set $\Lambdambda$ is {\varepsilon}mph{partially hyperbolic with one-dimensional center} if there is a $Df$-invariant dominated\footnote{A $Df$-invariant splitting $T_\Lambda M=F\oplus E$ is {\varepsilon}mph{dominated} if there are constants $C>0$ and $\lambdambda<1$ such that $|| Df^{-n} F_{f^n(x)}||\, || Df^n E_x|| < C \lambdambda^n$ for all $x\in\Lambdambda$ and $n\in \NN$. 
In our case domination means that the bundles $E^{\frak{m}athrm{uc} } \oplus E^{{\frak{m}athrm{s}}s}$ and $E^{{\frak{m}athrm{u}}u} \oplus E^{\frak{m}athrm{cs}}$ are both dominated, where $E^\frak{m}athrm{uc}= E^{{\frak{m}athrm{u}}u} \oplus E^{{\frak{m}athrm{c}}}$ and $E^\frak{m}athrm{cs}= E^{{\frak{m}athrm{c}}} \oplus E^{{\frak{m}athrm{s}}s}$.} splitting with three non-trivial bundles \begin{equation}\lambdabel{e.ph} T_\Lambdambda M = E^{{\frak{m}athrm{u}}u} \oplus E^{{\frak{m}athrm{c}}} \oplus E^{{\frak{m}athrm{s}}s} {\varepsilon}nd{equation} such that $E^\frak{m}athrm{uu}$ is uniformly expanding, $E^{\frak{m}athrm{c}}$ has dimension~$1$, and $E^\frak{m}athrm{ss}$ is uniformly contracting. We say that $E^{{\frak{m}athrm{u}}u}$ and $E^{{\frak{m}athrm{s}}s}$ are the {\varepsilon}mph{strong unstable} and {\varepsilon}mph{strong stable bundles}, respectively, and that $E^{\frak{m}athrm{c}}$ is the {\varepsilon}mph{central bundle}. We denote by $d^{\frak{m}athrm{u}}u$ and $d^{\frak{m}athrm{s}}s$ the dimensions of $E^{{\frak{m}athrm{u}}u}$ and $E^{{\frak{m}athrm{s}}s}$, respectively. Given an ergodic measure $\frak{m}u$ of a diffeomorphism $f$, Oseledets' Theorem gives numbers $\chi_1(\frak{m}u) \ge \chi_2(\frak{m}u) \ge \cdots \ge \chi_d(\frak{m}u)$, the {{\varepsilon}mph{Lyapunov exponents}}, and a $Df$-invariant splitting $E_1\oplus E_2\oplus \cdots \oplus E_d$, the {{\varepsilon}m{Oseledets' splitting}}, where $d =\operatorname{dim} (M)$, with the following property: for $\frak{m}u$-almost every point $$ \lim_{n \to \pm \infty} \frac{\log \|Df^n_x (v)\|}{n} = \chi_i(\frak{m}u), \quad \frak{m}box{for every $i$ and $v\in E_i\smallsetminus \{\bar 0\}$}.
$$ If the measure is supported on a partially hyperbolic set with one-dimensional center as above then $$ E^{{\frak{m}athrm{u}}u}=E_1\oplus \cdots \oplus E_{d^{\frak{m}athrm{u}}u}, \quad E^{{\frak{m}athrm{c}}}=E_{d^{\frak{m}athrm{u}}u+1}, \quad E^{{\frak{m}athrm{s}}s}=E_{d^{\frak{m}athrm{u}}u+2}\oplus \cdots \oplus E_{d}, $$ and $\chi_{d^{\frak{m}athrm{u}}u}(\frak{m}u)>0> \chi_{d^{\frak{m}athrm{u}}u+2}(\frak{m}u)$. Let $\chi_{d^{\frak{m}athrm{u}}u+1}(\frak{m}u){\varepsilon}qdef \chi_{{\frak{m}athrm{c}}}(\frak{m}u)$; we say that $\chi_{{\frak{m}athrm{c}}}(\frak{m}u)$ is the {\varepsilon}mph{central exponent} of $\frak{m}u$. In this partially hyperbolic setting the {\varepsilon}mph{logarithm of the center derivative} map \begin{equation}\lambdabel{e.logmap} \frak{m}athrm{J}_f^{{\frak{m}athrm{c}}}(x) {\varepsilon}qdef \log | Df_x |_{E^{\frak{m}athrm{c}} (x)}| {\varepsilon}nd{equation} is well defined and continuous, therefore the central Lyapunov exponent of the measure is given by the integral $$ \chi_{{\frak{m}athrm{c}}}(\frak{m}u)= \int \frak{m}athrm{J}_f^{\frak{m}athrm{c}} \, d\frak{m}u. $$ This equality allows us to use the methods in the previous sections to construct and control nonhyperbolic ergodic measures. Let us explain some relevant points of our study. A (new) difficulty, compared with Theorem~\ref{t.homoclinic}, is that the logarithm of the center derivative $ \frak{m}athrm{J}_f^{\frak{m}athrm{c}}$ cannot take values with different signs at homoclinically related periodic points (by definition, such points have the same indices and thus the sign of $ \frak{m}athrm{J}_f^{\frak{m}athrm{c}}$ is the same). To recover this sign property we consider chain recurrence classes containing saddles of different indices.
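To fix ideas, the identity $\chi_c(\mu)=\int J^c_f \, d\mu$ can be checked by hand in the simplest linear model. The sketch below (a step skew product over the full $2$-shift with linear fibre maps; the maps, the rates, and the Bernoulli measure are illustrative choices of ours, not taken from the paper) exhibits a measure whose central exponent vanishes, i.e.\ a nonhyperbolic measure.

```python
import math

def central_exponent(p, lam):
    """chi_c of the Bernoulli(p) base measure for a step skew product
    whose fibre map over the symbol s is x -> lam[s] * x: the integral
    of the logarithm of the centre derivative reduces to
    p*log(lam[1]) + (1-p)*log(lam[0])."""
    return p * math.log(lam[1]) + (1 - p) * math.log(lam[0])

lam = {0: 0.5, 1: 4.0}          # one contracting, one expanding fibre map
# p = 1/3 exactly balances contraction and expansion: chi_c = 0,
# so the corresponding measure is nonhyperbolic
print(abs(central_exponent(1/3, lam)) < 1e-12)  # True
```

Varying $p$ moves the central exponent continuously through $0$, which is the one-dimensional shadow of the averaging arguments used below.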
\begin{theo}\lambdabel{t.cycle} Let $M$ be a boundaryless compact manifold and $\cU$ a $C^1$-open set of $\operatorname{Diff}^1(M)$ such that every $f\in \cU$ has hyperbolic periodic orbits $p_f$ and $q_f$ such that: \begin{itemize} \item they have different indices and depend continuously on $f\in \cU$, \item their chain recurrence classes $C(p_f,f)$ and $C(q_f,f)$ are equal and have a partially hyperbolic splitting with one-dimensional center. {\varepsilon}nd{itemize} Then there is a $C^1$-open and dense subset $\cV\subset \cU$ such that every diffeomorphism $f\in\cV$ has a nonhyperbolic ergodic measure $\frak{m}u_f$ whose support is the homoclinic class $H(p_f,f)=H(q_f,f)$. {\varepsilon}nd{theo} Let us first observe that Theorem~\ref{t.cycle} can be rephrased in terms of robust cycles instead of periodic points in the same chain recurrence class. For that we need to review the definition of a {\varepsilon}mph{robust cycle}. Recall that a hyperbolic set $\Lambdambda_f$ of $f\in \operatorname{Diff}^1(M)$ has a well-defined hyperbolic continuation $\Lambda_g$ for every $g$ close to $f$. Two transitive hyperbolic basic sets $\Lambdambda_f$ and $\Gammamma_f$ of a diffeomorphism $f$ have a {{\varepsilon}mph{$C^1$-robust (heterodimensional) cycle}} if these sets have different indices and if there is a $C^1$-neighbourhood $\cU_f$ of $f$ such that for every $g\in\cU_f$ the invariant sets of $\Lambda_g$ and $\Gamma_g$ intersect cyclically. As discussed in \cite{BoBoDi2}, the dynamical scenarios of ``dynamics with $C^1$-robust cycles'' and ``dynamics with chain recurrence classes containing $C^1$-robustly saddles of different indices'' are essentially equivalent (they coincide in a $C^1$-open and dense subset of $\operatorname{Diff}^1(M)$).
We now describe explicitly the open and dense subset $\cV$ of $\cU$ in Theorem~\ref{t.cycle} using {\varepsilon}mph{dynamical blenders} and {\varepsilon}mph{flip-flop configurations} introduced in \cite{BoBoDi2}, see Remark~\ref{r.described}. Naively, a {\varepsilon}mph{dynamical blender} is a hyperbolic and partially hyperbolic set together with a {{\varepsilon}mph{strictly invariant family of discs}} (i.e., the image of any disc of the family contains another disc of the family) almost tangent to its strong unstable direction, see Definition~\ref{d.dynamicalblender}. In very rough terms, a {\varepsilon}mph{flip-flop configuration} of a diffeomorphism $f$ and a continuous function $\varphi$ is a $C^1$-robust cycle associated to a hyperbolic periodic point $q$ and a dynamical blender $\Lambda$ such that $\varphi$ is larger than some $\alpha>0$ on the blender $\Lambdambda$ and smaller than $-\alpha$ on the orbit of $q$. Important properties of flip-flop configurations are their $C^1$-robustness, that they occur $C^1$-open and densely in the set $\cU$ in Theorem~\ref{t.cycle}, and that they yield flip-flop families. The latter allows us to apply our criterion for zero averages. The set $\cV$ in Theorem~\ref{t.cycle} is described in the remark below. \begin{rema}[The set $\cV$ in Theorem~\ref{t.cycle}] \lambdabel{r.described} The set $\cV$ is the subset of $\cU$ of diffeomorphisms with flip-flop configurations ``containing'' the saddle $q_g$. {\varepsilon}nd{rema} To state our next result recall that a {\varepsilon}mph{filtrating region} of a diffeomorphism $f$ is the intersection of an attracting region and a repelling region of $f$. Let $U$ be a filtrating region of $f$ endowed with a strictly forward invariant unstable cone field of index $i$ and a strictly backward invariant cone field of index $\operatorname{dim}(M)-i-1$, see Section~\ref{ss.invariantconefields} for the precise definitions.
Then the maximal $f$-invariant set in $U$ has a partially hyperbolic splitting $E^{{\frak{m}athrm{u}}u}\oplus E^{{\frak{m}athrm{c}}} \oplus E^{{\frak{m}athrm{s}}s}$, with $\operatorname{dim} (E^{\frak{m}athrm{c}})=1$. As above this allows us to define the logarithm of the center derivative $ \frak{m}athrm{J}_f^{{\frak{m}athrm{c}}}$ of $f$. We have the following ``variation'' of Theorem~\ref{t.cycle}. \begin{theo}\lambdabel{t.ctail} Let $M$ be a boundaryless compact manifold. Consider $f\in \operatorname{Diff}^1(M)$ with a filtrating region $U$ endowed with a strictly $Df$-invariant unstable cone field of index $i$ and a strictly $Df^{-1}$-invariant cone field of index $\operatorname{dim}(M)-i-1$. Assume that $f$ has a flip-flop configuration associated to a dynamical blender and a hyperbolic periodic point $q$ both contained in $U$. Then there is a $C^1$-neighbourhood $\cV_f$ of $f$ such that every $g\in \cV_f$ has a nonhyperbolic ergodic measure whose support is the whole homoclinic class $H(q_g,g)$ of the continuation $q_g$ of $q$. {\varepsilon}nd{theo} The hypotheses of this theorem imply that the blender and the saddle in the flip-flop configuration are in the same chain recurrence class. With the terminology of robust cycles, they have a $C^1$-robust cycle. Note that Theorem~\ref{t.ctail} is not a perturbation result: it holds for every diffeomorphism with such a flip-flop configuration. Moreover, and more importantly, the hypotheses in Theorem~\ref{t.ctail} are open (the set $U$ is also a filtrating set for every $g$ sufficiently close to $f$, hence the homoclinic class $ H(q_g,g)$ is contained in $U$ and partially hyperbolic, and flip-flop configurations are robust). Thus Theorem~\ref{t.ctail} holds for the homoclinic class of the continuation of $q$ for diffeomorphisms $g$ close to $f$. \begin{rema}\lambdabel{r.newbdg} Theorem~\ref{t.ctail} does not require the continuous variation of the homoclinic class $H(q_g,g)$ with respect to $g$.
Note also that, in general, homoclinic classes only depend lower semi-continuously on the diffeomorphism. As a consequence, the partial hyperbolicity of a homoclinic class is not (in general) a robust property. The relevant assumption is that the homoclinic classes are contained in a partially hyperbolic filtrating neighbourhood, which guarantees the robust partial hyperbolicity of the homoclinic class. We can change the hypotheses in the theorem, omitting that $U$ is a filtrating neighbourhood and considering homoclinic classes depending continuously on the diffeomorphism (this occurs in a residual subset of diffeomorphisms). Then, by continuity, the class is robustly contained in the partially hyperbolic region and we can apply the previous arguments. {\varepsilon}nd{rema} \subsection{Applications to robustly nonhyperbolic transitive diffeomorphisms} \lambdabel{ss.aplication} A diffeomorphism $f\in \operatorname{Diff}^1(M)$ is {\varepsilon}mph{transitive} if it has a dense orbit. The diffeomorphism $f$ is {\varepsilon}mph{$C^1$-robustly transitive} if any diffeomorphism $g$ that is $C^1$-close to $f$ is also transitive. In other words, a diffeomorphism is $C^1$-robustly transitive if it belongs to the $C^1$-interior of the set of transitive diffeomorphisms. We denote by $\cR\cT(M)$ the ($C^1$-open) subset of $\operatorname{Diff}^1(M)$ consisting of diffeomorphisms $f$ such that: \begin{itemize} \item $f$ is robustly transitive, \item $f$ has a pair of hyperbolic periodic points of different indices, \item $f$ has a partially hyperbolic splitting $TM=E^{\frak{m}athrm{u}}u \oplus E^{\frak{m}athrm{c}}\oplus E^{\frak{m}athrm{s}}s$, where $E^{\frak{m}athrm{u}}u$ is uniformly expanding, $E^{\frak{m}athrm{s}}s$ is uniformly contracting, and $E^{\frak{m}athrm{c}}$ is one-dimensional.
{\varepsilon}nd{itemize} Note that the last condition implies that the hyperbolic periodic points of $f$ have either ${\frak{m}athrm{u}}$-index $\operatorname{dim} (E^{\frak{m}athrm{u}}u)$ or $\operatorname{dim} (E^{\frak{m}athrm{u}}u)+1$. Note also that our assumptions imply that $\operatorname{dim}(M)\ge 3$ (in lower dimensions $\cR\cT(M)=\varnothing$, see \cite{PuSa}). In dimension $\ge 3$ and depending on the type of manifold $M$, the set $\cR\cT(M)$ contains interesting examples. Chronologically, the first examples of such partially hyperbolic robustly transitive diffeomorphisms were obtained in \cite{Sh}, considering diffeomorphisms of $\TT^4$ obtained as skew products of Anosov diffeomorphisms on $\TT^2$ and derived from Anosov diffeomorphisms on $\TT^2$ ($\TT^i$ stands for the $i$-dimensional torus). Later, \cite{Mda} provides examples in $\TT^3$ considering derived from Anosov diffeomorphisms. Finally, \cite{BD-robtran} gives examples that include perturbations of time-one maps of transitive Anosov flows and perturbations of skew products of Anosov diffeomorphisms and isometries. \begin{theo} \lambdabel{t.c.openanddense} There is a $C^1$-open and dense subset $\cZ(M)$ of $\cR\cT(M)$ such that every $f\in \cZ(M)$ has an ergodic nonhyperbolic measure whose support is the whole manifold $M$. {\varepsilon}nd{theo} Let us mention some related results. First, by \cite{BoBoDi2}, there is a $C^1$-open and dense subset of $\cR\cT(M)$ formed by diffeomorphisms with an ergodic nonhyperbolic measure with positive entropy, but the support of these measures is not the whole ambient manifold. By \cite{BDG}, there is a residual subset of $\cR\cT(M)$ of diffeomorphisms with an ergodic nonhyperbolic measure with full support. Finally, a result similar to our theorem is stated in \cite{BJ}, see Footnote \ref{fn}. Recall that given a periodic point $p$ of $f$ the measure $\frak{m}u_{\cO(p)}$ is the unique $f$-invariant measure supported on the orbit of $p$.
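Concretely, for a periodic point $p$ of period $k$ the quantity $\frak{m}u_{\cO(p)}(\varphi)$ appearing in the next corollary is just the average of $\varphi$ over the finite orbit $p, f(p), \dots, f^{k-1}(p)$. A minimal sketch (the doubling map and the potential below are our own illustrative choices, not from the paper):

```python
import math

def periodic_orbit_average(f, phi, p, period):
    """mu_{O(p)}(phi): the average of phi over the orbit
    p, f(p), ..., f^{period-1}(p) of a period-`period` point p."""
    total, x = 0.0, p
    for _ in range(period):
        total += phi(x)
        x = f(x)
    return total / period

# illustrative example: doubling map on [0,1) with phi(x) = cos(2*pi*x);
# p = 1/3 is periodic of period 2 (1/3 -> 2/3 -> 1/3)
f = lambda x: (2 * x) % 1.0
phi = lambda x: math.cos(2 * math.pi * x)
print(periodic_orbit_average(f, phi, 1/3, 2))   # approximately -0.5
```

The sign hypothesis $\frak{m}u_{\cO(p)}(\varphi)>0>\frak{m}u_{\cO(q)}(\varphi)$ below is checked by exactly this kind of finite computation.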
\begin{mcor} \lambdabel{c.averages} Consider a continuous map $\varphi \colon M \to \frak{m}athbb{R}$. Suppose that $f\in \cR\cT(M)$ has two hyperbolic periodic orbits $p$ and $q$ such that $$ \frak{m}u_{\cO(p)} (\varphi)> 0> \frak{m}u_{\cO(q)} (\varphi). $$ Then there are a $C^1$-neighbourhood $\cV_f$ of $f$ and a $C^1$-open and dense subset $\cO_f$ of $\cV_f$ such that every $g\in \cO_f$ has an ergodic measure $\frak{m}u_g$ with full support on $M$ such that $$ \int \varphi \, d\frak{m}u_g=0. $$ {\varepsilon}nd{mcor} \begin{mrem} By \cite[Proposition 1.4]{C}, for diffeomorphisms in $\cZ(M)$ every hyperbolic ergodic measure $\frak{m}u$ is the weak$\ast$ limit of periodic measures supported on points whose orbits tend (in the Hausdorff topology) to the support of the measure $\frak{m}u$. Thus, Corollary~\ref{c.averages} holds after replacing the hypothesis $\frak{m}u_{\cO(p)} (\varphi)>0> \frak{m}u_{\cO(q)} (\varphi)$ by the existence of two hyperbolic ergodic measures $\nu^+$ and $\nu^-$ such that $\int \varphi \, d\nu^+>0>\int \varphi\, d\nu^-$. {\varepsilon}nd{mrem} \subsection{Organization of the paper}\lambdabel{ss.organization} In Section~\ref{s.criterion} we introduce the concepts involved in the criterion of control at any scale with a long sparse tail and prove Theorem~\ref{t.accumulation}. In Section~\ref{s.patterns} we introduce the notion of a pattern and see how they are induced by long tails of scales. We study the {{\varepsilon}mph{concatenations of plaques}} of flip-flop families (associated to a map $\varphi$) and the control of the averages of $\varphi$ corresponding to these concatenations, see Theorem~\ref{t.p.flipfloppattern}. In Section~\ref{s.flipfloptail} we prove Theorem~\ref{t.flipfloptail}, Corollary~\ref{c.flipfloptail}, and Proposition~\ref{p.r.hyperboliclike}. In Section~\ref{s.flip-flophomoclinic} we prove Theorem~\ref{t.homoclinic} involving flip-flop families and homoclinic relations. 
In Section~\ref{s.flipflopph} we review some key ingredients, such as dynamical blenders and flip-flop configurations, and prove Theorems~\ref{t.cycle} and \ref{t.ctail}. Finally, in Section~\ref{s.applications} we apply our methods to construct nonhyperbolic ergodic measures with full support for some robustly transitive diffeomorphisms, proving Theorem~\ref{t.c.openanddense} and Corollary~\ref{c.averages}. \section{A criterion for zero averages: control at any scale up to a long sparse tail} \lambdabel{s.criterion} The construction that we present for controlling averages is probably too rigid, but it is enough to achieve our goals; certain constraints could perhaps be relaxed. However, at the current state of the art, we do not aim for full generality but prefer to present the ingredients of the construction in as simple a way as possible. One may aim to extract a general conceptual principle behind the construction, but this is beyond the focus of this paper. In Sections~\ref{ss.scalesandtails} and \ref{ss.controlatanyscale} we introduce the concepts involved in the criterion for controlling averages and in Section~\ref{ss.proofoftaccumulation} we prove Theorem~\ref{t.accumulation}. \subsection{Scales and long sparse tails} \lambdabel{ss.scalesandtails} In what follows we introduce the definitions of scales and long sparse tails. \begin{defi}[Scale]\lambdabel{d.scale} A sequence $\cT=(T_n)_{n\in \NN}$ of strictly positive natural numbers is called {\varepsilon}mph{a scale} if there is a sequence $\bar \kappa=(\kappa_n)_{n\ge 1}$ (the {\varepsilon}mph{sequence of factors} of the scale) of natural numbers with $\kappa_{n}\ge 3$ for every $n$ such that \begin{itemize} \item $T_{n}=\kappa_{n} \, T_{n-1}$ for every $n\ge 1$; \item $\kappa_{n+1}/\kappa_n \to \infty$. {\varepsilon}nd{itemize} We assume that the number $T_0$, and hence every $T_n$, is a multiple of $3$. {\varepsilon}nd{defi} We now introduce some notation.
In what follows, given $a,b\in \frak{m}athbb{R}$ we let $$ [a,b]_{\NN}{\varepsilon}qdef[a,b]\cap \NN. $$ Given a subset ${\mathbb M}$ of $\NN$, a {\varepsilon}mph{component} of ${\mathbb M}$ is an interval of integers $[a,b]_{\NN}\subset {\mathbb M}$ such that $a,b\in {\mathbb M}$ and $a-1, b+1\not\in {\mathbb M}$. \begin{defi}[Controlling sequence]\lambdabel{d.controllingsequence} Let $\bar\varepsilon=(\varepsilon_n)_{n\in\NN}$ be a sequence of positive numbers converging to $0$. We say that $\bar\varepsilon$ is a {\varepsilon}mph{controlling sequence} if $$ \sum_n \varepsilon_n<+\infty \quad \frak{m}box{and} \quad \prod_n (1-\varepsilon_n) >0.$$ {\varepsilon}nd{defi} \begin{rema}\lambdabel{r.basicfact} For a sequence $(\varepsilon_n)_{n\in\NN}$ of numbers with $\varepsilonilon_n\in(0,1)$ one has $$ \sum_n \varepsilon_n<+\infty\quad \Longleftrightarrow \quad \prod_n (1-\varepsilon_n) >0. $$ {\varepsilon}nd{rema} \begin{rema}\lambdabel{r.controling} Let $\cT=(T_n)_{n\in \NN}$ be a scale, $\kappa_{n+1}=\frac{T_{n+1}}{T_n}$, and $\varepsilon_n =\frac{2}{\kappa_n}$. Then the sequence $(\varepsilon_n)_{n\in \NN}$ is a controlling one. {\varepsilon}nd{rema} \begin{defi}[Long sparse tail] \lambdabel{d.tail} Consider a scale $\cT=(T_n)_{n\in \NN}$ and a controlling sequence $\bar \varepsilon=(\varepsilon_n)_{n\in \NN}$.
A set $R_\infty\subset \NN$ is a {\varepsilon}mph{$\cT$-long $\bar\varepsilon$-sparse tail} if the following properties hold: \begin{enumerate} \item[a)]\lambdabel{i.component} Every component of $R_\infty$ is of the form $[k\,T_n, (k+1)\, T_n-1]_{\NN}$, for some $k$ and $n$ (we say that such a component has size $T_n$). {\varepsilon}nd{enumerate} Let $R_n$ be the union of the components of $R_\infty$ of size $T_n$ and let $$ R_{n,\infty}{\varepsilon}qdef \bigcup_{i\ge n} R_i, $$ the union of the components of $R_\infty$ of size larger than or equal to $T_n$. \begin{enumerate} \item[b)]\lambdabel{i.0} $0\notin R_\infty$, in particular $[0,T_n-1]_{\NN} \not\subset R_n.$ \item[c)]\lambdabel{i.center} Consider an interval $I$ of natural numbers of the form $$ I=[kT_n,(k+1) T_n -1]_{\NN}, \quad \frak{m}box{for some $n\ge 1$ and $k\geq 0$,} $$ that is not contained in any component of $R_\infty$. Then the following properties hold: \begin{itemize} \item{{\varepsilon}mph{center position:}} $$ \left[ kT_n, kT_n +\frac{T_n}{3}\right]\cap R_{n-1}=\varnothing= \left[\big((k+1)T_n-1\big)-\frac{T_n}{3}, \big( (k+1)T_n-1\big)\right]\cap R_{n-1}. $$ \item {{\varepsilon}mph{$\bar\varepsilon$-sparseness:}} $$ 0< \frac{ \#( R_{n-1}\cap I)}{T_n}<\varepsilon_n. $$ {\varepsilon}nd{itemize} {\varepsilon}nd{enumerate} {\varepsilon}nd{defi} The conditions in Definition~\ref{d.tail} are depicted in Figure~\ref{f.dlongsparsetail}.
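A finite prefix of a set satisfying conditions a)--c) can be produced by a simple greedy rule: in every good $T_n$-interval place exactly one component of size $T_{n-1}$, aligned to the $T_{n-1}$-grid and as close to the centre as possible. The sketch below is our own illustrative construction, not the inductive one used to prove Lemma~\ref{l.tailexistence}; for factors $\kappa_n$ that are reasonably large multiples of $3$ the centre-position and sparseness constraints can then be checked directly on the output.

```python
def build_tail(T0, kappas):
    """Finite prefix of a candidate long sparse tail on [0, T_N).

    T0 must be a multiple of 3 and kappas are the scale factors.  Going
    from the top level down, every 'good' T_n-interval receives exactly
    one component of size T_{n-1}, aligned to the T_{n-1}-grid near its
    centre; the remaining T_{n-1}-subintervals stay good and are treated
    at the next level.  Returns the scale values T, the set R of covered
    integers, and the list of components (start, size)."""
    T = [T0]
    for k in kappas:
        T.append(T[-1] * k)
    N = len(kappas)
    components = []
    good = [0]                        # starts of the good T_N-intervals
    for n in range(N, 0, -1):
        new_good = []
        for a in good:                # good interval [a, a + T[n])
            slot = a + (T[n] // T[n - 1] // 2) * T[n - 1]
            components.append((slot, T[n - 1]))
            for j in range(T[n] // T[n - 1]):
                b = a + j * T[n - 1]
                if b != slot:         # the component's cell becomes bad
                    new_good.append(b)
        good = new_good
    R = set()
    for s, size in components:
        R.update(range(s, s + size))
    return T, R, components

# T = [3, 27, 1215]: one size-27 component sits in the middle third of
# [0, 1215), and each remaining 27-interval carries a size-3 component
T, R, comps = build_tail(3, [9, 45])
print(0 not in R, (594, 27) in comps, (12, 3) in comps)  # True True True
```

With the factors $(9,45)$ the single size-$27$ component gives density $1/45<2/45=\varepsilon_2$ at the top level, and each size-$3$ component gives density $1/9<2/9=\varepsilon_1$ in its interval, matching the $\bar\varepsilon$-sparseness bound.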
\begin{figure}
\begin{minipage}[h]{\linewidth}
\centering
\begin{overpic}[scale=.75, ]{P_L1.pdf}
\put(-2,4){\small$k\, T_n$}
\put(92,4){\small$(k+1)\, T_n-1$}
\put(41,6){\small$R_{n-1}$}
\put(53,6){\small$R_{n-1}$}
\put(48.5,2){\small$\frac{T_n}3$}
\end{overpic}
\caption{A long sparse tail}
\label{f.dlongsparsetail}
\end{minipage}
\end{figure}

\begin{defi}[Good and bad intervals]\label{d.goodandbad}
With the notation of Definition~\ref{d.tail}, an interval $I$ of the form $I=[kT_n,(k+1)T_n-1]_{\NN}$ is called \emph{$n$-bad} if $I\subset R_{n,\infty}\subset R_\infty$. The interval is called \emph{$n$-good} if $I\cap R_{n,\infty}=\varnothing$.
\end{defi}

\begin{rema}[On the definition of a $\cT$-long $\bar\varepsilon$-sparse tail] \label{r.commentson} $\,$
\begin{enumerate}
\item It is assumed that $0\notin R_\infty$. This implies that for every $n\geq 0$ the initial interval $[0,T_n-1]_{\NN}$ is not a component of $R_\infty$ of size $T_n$. Therefore $[0,T_n-1]$ is disjoint from $R_n$ and thus from $R_m$ for every $m\geq n$. In other words, the interval $[0,T_n-1]$ is $n$-good, that is,
$$
[0,T_n-1]\cap R_{n,\infty}=\varnothing.
$$
\item Let $I=[kT_n,(k+1) T_n -1]$ be an interval as in Item~(c) of Definition~\ref{d.tail}. By Item~(a) the interval $I$ is either contained in a component of $R_\infty$ whose size is larger than or equal to $T_n$ or is disjoint from $R_{n,\infty}$. Thus, Item~(c) considers the case where the interval is disjoint from $R_{n,\infty}$. Now any component of $R_\infty$ of size less than $T_n$ is either disjoint from $I$ or contained in $I$: just note that such a component has length $T_m$, $m<n$, that it starts at a multiple of $T_m$, and that $T_n$ is a multiple of $T_m$. Item~(c) describes the position and quantity of the components of size $T_{n-1}$ in the interval $I$. For that, one splits the interval $I$ into three parts of equal length $\frac{T_n}{3}$.
The following properties are required:
\begin{itemize}
\item Every component of size $T_{n-1}$ contained in $I$ is contained in the middle third interval;
\item The middle third interval contains at least one component of size $T_{n-1}$. Thus the intersection $I\cap R_{n-1}$ is not empty, but the set $R_{n-1}$ has a small density in the interval $I$, upper bounded by $\varepsilon_n$.
\end{itemize}
\item Item~(c) does not consider the case $n=0$. For $n=0$ and an interval $I$ of the form $[kT_0,(k+1)T_0-1]$ there are two possibilities: either $I$ is a component of $R_\infty$ (i.e., contained in $R_0$) or $I$ is disjoint from $R_\infty$.
\item Given any interval $I$ of the form $I=[kT_n,(k+1)T_n-1]_{\NN}$ there are two possibilities:
\begin{itemize}
\item either $I\subset R_\infty$, and then $I \subset R_{n,\infty}$ and $I$ is $n$-bad;
\item or $I\cap R_{n,\infty}=\varnothing$, and then $I$ is $n$-good.
\end{itemize}
\end{enumerate}
\end{rema}

The definition of a long sparse tail involves many properties and conditions, so its existence is not obvious. We settle this issue in the next lemma.

\begin{lemm}[Existence of long sparse tails]\label{l.tailexistence}
Consider a scale $\cT=(T_n)_{n\in \NN}$ and its sequence of factors $\bar \kappa=(\kappa_n)_{n\ge 1}$. Write $\varepsilon_n =\frac{2}{\kappa_n}$ and let $\bar\varepsilon=(\varepsilon_n)_{n\ge 1}$. Then there is a $\cT$-long $\bar\varepsilon$-sparse tail $R_\infty$.
\end{lemm}

\begin{proof}
First note that, by Remark~\ref{r.controling}, the sequence $\bar \varepsilon$ is a controlling one. The construction of the set $R_\infty$ is done inductively. For each $n\in\NN$ we define the intersection of the set $R_{\infty}$ with the interval $[0,T_n-1]$. We denote this intersection by $R_{\infty}(T_n)$. For $n=0$, we let $R_{\infty}(T_0)=\varnothing$.
Fix now $n>0$ and suppose that the set $R_{\infty}(T_{n-1})$ has been constructed satisfying (in restriction to the interval $[0,T_{n-1}-1]$) the properties in Definition~\ref{d.tail}. We now proceed to define the set $R_{\infty}(T_n)$. For any $i\le n-1$ we denote by $R_{i,n-1}$ the union of the components of $R_{\infty} (T_{n-1})$ of length $T_i$. We next define the family of subsets $\{R_{j,n}, \, j=0,\dots, n\}$ of $[0,T_n-1]$ by decreasing induction on $j$ as follows. We let
$$
R_{n,n}=\varnothing \qquad \mbox{and} \qquad R_{n-1,n}= \left[ \frac{T_n}{3}, \frac{T_n}{3}+T_{n-1}-1 \right]_{\NN}.
$$
Let $j<n$ and assume that the sets $R_{i,n}$ are defined for every $n\ge i>j$. The set $R_{j,n}$ is defined as follows:
\begin{itemize}
\item if $[kT_{j+1}, (k+1)T_{j+1}-1]_{\NN} \subset \bigcup_{i>j} R_{i,n}$ then
$$
R_{j,n}\cap [kT_{j+1}, (k+1)T_{j+1}-1]=\varnothing,
$$
\item otherwise we let
\begin{equation}\label{e.Rjn}
R_{j,n}\cap [kT_{j+1}, (k+1)T_{j+1}-1]_{\NN} =\left[ \Big( k+\frac{1}{3} \Big) T_{j+1}, \Big(k+\frac{1}{3} \Big) T_{j+1} +T_j-1 \right]_{\NN}.
\end{equation}
\end{itemize}
Note that, by construction,
$$
R_{\infty}(T_n)=\bigcup_{i=0}^n R_{i,n}.
$$
\begin{clai}
The set $R_{\infty}(T_n)$ satisfies (in restriction to the interval $[0,T_n-1]$) the conditions of Definition~\ref{d.tail}.
\end{clai}
\begin{proof}
Property (a) in the definition follows from the construction: the components of $R_{i,n}$ have size $T_i$ and are not adjacent to the components of $\bigcup_{j=i+1}^n R_{j,n}$. For Property (b) one checks inductively that $0\notin R_{i,n}$ for every $i$ and $n$. Property (c) is a consequence of \eqref{e.Rjn}. If the set $R_{i,n}$ intersects a segment $[k T_{i+1},(k+1)T_{i+1}-1]$ then it is contained in its middle third interval, implying the center position condition.
For the sparseness note that, by construction and the definition of $\varepsilon_i$, for each $i$ and each interval $I$ as in Item~(c) of Definition~\ref{d.tail} it holds
$$
0< \frac{ \#( R_{i-1}\cap I)}{T_i}= \frac{T_{i-1}}{T_{i}}=\frac{1}{\kappa_i} <\varepsilon_i.
$$
This completes the proof of the claim.
\end{proof}

Our construction also immediately provides the following properties: for every $i<n$ it holds:
\begin{itemize}
\item if $m\ge n$ then $R_{i,m}\cap [0,T_n-1]= R_{i,n}$,
\item if $m\geq n$ then $R_{\infty}(T_m)\cap [0,T_n-1]=R_{\infty}(T_n)$, and
\item $R_{i,n}\subset R_{i,n+1}$.
\end{itemize}
The tail is now defined by
\begin{equation} \label{e.RinftyRi}
R_\infty\eqdef \bigcup_{i=0}^\infty R_i, \quad \mbox{where} \quad R_i= \bigcup _{n>i} R_{i,n}.
\end{equation}
By construction, the set $R_{\infty}$ is a $\cT$-long $\bar\varepsilon$-sparse tail.
\end{proof}

\subsection{Control at any scale with a long sparse tail}
\label{ss.controlatanyscale}

In this section we give the definition of controlled points.

\begin{defi} \label{d.alphacontrol}
Let $X$ be a compact metric space, $f\colon X\to X$ a homeomorphism, and $\varphi \colon X\to \mathbb{R}$ a continuous map. Consider
\begin{itemize}
\item a scale $\cT$, a controlling sequence $\bar\varepsilon$, and a $\cT$-long $\bar\varepsilon$-sparse tail $R_\infty$;
\item decreasing sequences of positive numbers $\bar{\delta}=(\delta_n)_{n\in\NN}$ and $\bar{\alpha}=(\alpha_n)_{n\in\NN}$, converging to $0$.
\end{itemize}
The $f$-orbit of a point $x\in X$ is \emph{$\bar\delta$-dense along the tail $R_\infty$} if for every component $I$ of $R_\infty$ of length $T_n$ the segment of orbit $\{f^i(x), \, i\in I\}$ is $\delta_n$-dense in $X$.
The \emph{Birkhoff averages of $\varphi$ along the orbit of $x$ are $\bar\alpha$-controlled for the scale $\cT$ with the tail $R_\infty$} if for every interval
$$
I=[k \, T_n, (k+1) \, T_n-1]_\NN
$$
such that $I\not \subset R_{n+1,\infty}$ (i.e., $I$ is either $n$-good or is a component of $R_n$) it holds
$$
\varphi_I(x)\eqdef \frac{1}{T_n} \sum_{i\in I} \varphi (f^i(x)) \in [-\alpha_n,\alpha_n].
$$
\end{defi}

\begin{defi}\label{d.controledtail}
Let $X$ be a compact metric space, $f\colon X\to X$ a homeomorphism, and $\varphi \colon X\to \mathbb{R}$ a continuous map. A point $x\in X$ is \emph{controlled at any scale with a long sparse tail with respect to $X$ and $\varphi$} if there are a scale $\cT$, a controlling sequence $\bar\varepsilon$, a $\cT$-long $\bar\varepsilon$-sparse tail $R_\infty$, and sequences of positive numbers $\bar\delta$ and $\bar\alpha$ converging to $0$, such that
\begin{itemize}
\item the $f$-orbit of $x$ is $\bar\delta$-dense along the tail $R_\infty$ and
\item the Birkhoff averages of $\varphi$ along the orbit of $x$ are $\bar\alpha$-controlled for the scale $\cT$ with the tail $R_\infty$.
\end{itemize}
In this definition we say that $\bar \delta$ is the \emph{density forcing sequence}, $\bar \alpha$ is the \emph{average forcing sequence}, and the point $x$ is \emph{$(\bar\delta,\bar \alpha, \bar\varepsilon, \mathcal{T}, R_\infty)$-controlled}.
\end{defi}

\subsection{Proof of Theorem \ref{t.accumulation}}
\label{ss.proofoftaccumulation}

In this section we prove Theorem~\ref{t.accumulation}; thus we use the assumptions and notation of its statement. Consider a point $x_0\in X$ that is controlled at any scale with a long sparse tail for $X$ and $\varphi$.
Let
\begin{itemize}
\item $\cT=(T_n)_{n\in \NN}$ be the scale;
\item $R_\infty$ the $\cT$-long $\bar \varepsilon$-sparse tail; and
\item $\bar \delta=(\delta_n)_{n\ge 1}$ the density forcing sequence and $\bar \alpha=(\alpha_n)_{n\ge 1}$ the average forcing sequence.
\end{itemize}
Let $\mu$ be a measure that is a weak$\ast$ limit of the empirical measures $(\mu_n(x_0))_{n\in \NN}$. As $x_0$ remains fixed, let us write $\mu_n\eqdef \mu_n(x_0)$. We need to prove that for $\mu$-almost every point $x$ it holds:
\begin{enumerate}
\item \label{i.ass1} the forward orbit of $x$ is dense in $X$;
\item \label{i.ass2} the Birkhoff averages of $x$ satisfy $\lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \varphi (f^i(x))=0$.
\end{enumerate}
Proposition~\ref{p.l.deltadensity} below immediately implies item~\eqref{i.ass1} (item \eqref{i.ass2} follows from Proposition~\ref{p.l.alpha}).

\begin{prop}\label{p.l.deltadensity}
Under the assumptions above, for every $k$ the (forward) orbit of $\mu$-almost every point is $2\,\delta_k$-dense in $X$.
\end{prop}

\begin{proof}
Fix $k$. For any given $t>0$ and $\delta>0$ consider the set
$$
X(t,\delta)\eqdef \Big\{x\in X \colon \{x, \dots ,f^t(x)\} \, \mbox{is $\delta$-dense in $X$} \Big\}
$$
and let
$$
P_{\infty,t} \eqdef \liminf_{n\to\infty} P_{n,t}, \quad \mbox{where} \quad P_{n,t}\eqdef \mu_n(X(t,\delta_k)).
$$
\begin{lemm}\label{l.c.pinfty}
$\lim_{t\to \infty} P_{\infty,t}=1.$
\end{lemm}
We postpone the proof of this lemma and deduce the proposition from it. Just note that the interior of $X(t,2\delta_k)$ contains the closure of $X(t,\delta_k)$ for every $t$. Thus $\mu (X(t,2\delta_k))\ge P_{\infty,t}$. Taking the limit as $t\to \infty$ proves the proposition.
\end{proof}

\begin{proof}[Proof of Lemma~\ref{l.c.pinfty}]
Fix $k$ and take $t>T_{k+1}$.
\begin{clai} \label{cl.biblioteca}
The set of times $i\in\NN$ such that $f^i(x_0)\not\in X(t,\delta_k)$ is contained in the set
$$
\bigcup_{j=0}^\infty \Big( R_{m_t+j}\cup (R_{m_t+j}-T_{k+1})\Big),
$$
where
\begin{itemize}
\item $m_t\eqdef \inf \{ m \ge k+1 \colon T_m+2\,T_{k+1}>t\}$ and
\item $R_{k+j}-T_{k+1}\eqdef \{ i-T_{k+1} \colon i\in R_{k+j}\}$.
\end{itemize}
\end{clai}
\begin{proof}
Take $i$ such that $f^i(x_0)\notin X(t,\delta_k)$. Then the set $\{f^i(x_0),\dots, f^{i+t}(x_0)\}$ is not $\delta_k$-dense. Let $I=[i,i+t]_{\NN}$. Recalling Definition~\ref{d.alphacontrol}, we see that $I$ cannot contain any component of $R_\infty$ of size greater than or equal to $T_k$. This implies that
\begin{itemize}
\item the interval $I$ does not contain any $\ell$-bad interval for $\ell\ge k$,
\item as a consequence of the sparseness property in item (c) of Definition~\ref{d.tail}, the interval $I$ does not contain any $(\ell+1)$-good interval for $\ell\ge k$ (i.e., an interval disjoint from $R_{\ell+1,\infty}$).
\end{itemize}
Thus necessarily the interval $I$ intersects some bad interval $J=[r_m^-, r_m^+]_{\NN}\subset R_m$, $m>k$, such that
$$
I\subset [r_m^--T_{k+1}, r_m^++T_{k+1}].
$$
Otherwise $I$ would contain a $(k+1)$-good interval. Observe that this implies that
$$
T_m+2\,T_{k+1}>t,
$$
otherwise the segment of orbit $\{f^{i+j}(x_0)\}_{j=0}^{t}$ would be $\delta_k$-dense, a contradiction. Hence $m\geq m_t$. Recall that $T_{k+1}<t$, hence $i\in [r_m^--T_{k+1}, r_m^+]$. Thus
$$
i\in J\cup (J-T_{k+1})\subset R_m\cup (R_m-T_{k+1})
$$
for some $m\geq m_t$. This ends the proof of the claim.
\end{proof}

In view of Claim~\ref{cl.biblioteca}, to prove the lemma it is enough to see the following:

\begin{clai}\label{c.semnome}
$$
\lim_{t\to +\infty} \,\lim_{n\to+\infty} \frac1n \, \# \left( [0,n]\cap \bigcup_{j=0}^\infty \Big( R_{m_t+j}\cup (R_{m_t+j}-T_{k+1}) \Big) \right)=0.
$$
\end{clai}
\begin{proof}
Note that the components of the set $R_{m_t+j}\cup (R_{m_t+j}-T_{k+1})$ are intervals of length $T_{m_t+j}+T_{k+1} < 2\, T_{m_t+j}$. Thus the claim is a direct consequence of the next fact (recall the definition of $R_{m_t,\infty}$ in Definition~\ref{d.tail}).
\begin{fact}\label{f.c.proportion}
$$
\lim_{t\to \infty} \limsup_{n\to \infty} \dfrac{1}{n} \#\, \big(R_{m_t,\infty} \cap [0,n] \big)= 0.
$$
\end{fact}
\begin{proof}
We need to estimate the proportion
$$
\varrho(m,n)\eqdef \frac{\# \, \big( R_m\cap [0,n] \big) }{n}
$$
of the set $R_m$ in $[0,n]_{\NN}$. We claim that $\varrho(m,n)< 3\,\varepsilon_{m+1}$. There are three cases:
\begin{itemize}
\item $T_{m+1} \le n$: Let $k\,T_{m+1}\le n< (k+1)\, T_{m+1}$, where $k\in \NN$ and $k\ge 1$. By the sparseness condition we have
$$
\# \big( R_m \cap [0, (k+1)\, T_{m+1}] \big) < (k+1)\, \varepsilon_{m+1}\, T_{m+1}.
$$
Therefore
$$
\frac{\#\,\big( R_m \cap [0, n] \big)}n \le \frac{\#\,\big( R_m \cap [0, n] \big) }{k\, T_{m+1}} < \frac{(k+1)\, \varepsilon_{m+1}}{k}\le 2\, \varepsilon_{m+1}.
$$
\item $T_m\leq n< T_{m+1}$: Since $[0,T_{m+1}-1]$ is an $(m+1)$-good interval, the sets $[0,T_{m+1}/3]$ and $R_m$ are disjoint. If the proportion is $0$ we are done. Otherwise, by the center position condition, $n>\frac{T_{m+1}}3$. Therefore
$$
\frac{\# \, \big( R_m\cap [0,n]\big) }{n}< 3\, \frac{\# \, \big( R_m\cap [0,T_{m+1}]\big)}{T_{m+1}}<3\, \varepsilon_{m+1},
$$
where the last inequality follows from the sparseness condition.
\item $n<T_m$: In this case, by condition (b) in Definition~\ref{d.tail}, $R_m\cap[0,n]=\varnothing$.
\end{itemize}
Since $\varrho (m,n)<3\,\varepsilon_{m+1}$ for every $n$ we get
$$
\frac{1}{n} \# \, \big( R_{m_t,\infty} \cap [0,n] \big)< 3\, \sum_{m=m_t}^\infty \varepsilon_m.
$$
Since, by definition, $\sum_{m=0}^\infty \varepsilon_m<+\infty$ this implies
$$
\lim_{t\to \infty}\, \limsup_{n\to \infty} \frac{1}{n}\, \# (R_{m_t,\infty} \cap [0,n])\leq 3\, \lim_{t\to\infty}\sum_{m=m_t}^\infty \varepsilon_m = 0,
$$
proving the fact.
\end{proof}
This ends the proof of Claim~\ref{c.semnome}.
\end{proof}
The proof of Lemma~\ref{l.c.pinfty} is now complete.
\end{proof}

Proposition~\ref{p.l.deltadensity} gives the density of orbits in Theorem~\ref{t.accumulation}. To end the proof of the theorem it remains to prove the part relative to the averages. This is an immediate consequence of the next proposition. Recall the notation for finite Birkhoff averages $\varphi_n(x)$ and limit averages $\varphi_\infty(x)$ of a function $\varphi$ in \eqref{e.birkhoffaverages} and \eqref{e.notationBirkhoff}. Recall also that $\mu$ is a weak$\ast$ limit of the empirical measures $\mu_n=\mu_n(x_0)$.

\begin{prop}\label{p.l.alpha}
Fix $k\in \NN$. For $\mu$-almost every point $x$ the limit average $\varphi_\infty(x)$ is well defined and belongs to $[-3\alpha_k,3\alpha_k]$.
\end{prop}

\begin{proof}
Let $B$ be the set of points such that the limit average $\varphi_\infty (x)$ is well defined. By Birkhoff's theorem it holds $\mu(B)=1$. Therefore it is enough to prove that for every $x\in B$ there is a sequence $n_j=n_j(x) \to \infty$ such that $\varphi_{n_j}(x)\in [-3\,\alpha_k,3\,\alpha_k]$ for every $j$. For $t\in \NN$ define the number
$$
q_t\eqdef \liminf_{n\to+\infty}\, q_{t,n}, \quad \mbox{where} \quad q_{t,n}\eqdef \mu_n \big(\left\{x \colon \varphi_t(x) \in [-2\,\alpha_k,2\,\alpha_k] \right\}\big).
$$
\begin{lemm}\label{l.c.alpha}
$\lim_{t\to\infty} q_t=1.$
\end{lemm}
Let us postpone the proof of this lemma and conclude the proof of the proposition assuming it. By definition of $\mu$,
$$
\mu (\{x \colon \varphi_t(x) \in [-3\,\alpha_k,3\,\alpha_k]\})\ge q_t.
$$
By Lemma~\ref{l.c.alpha}, $q_t\to 1$, thus there is a subsequence $(q_{t_i})$ such that
$$
\sum_{i=0}^\infty(1- q_{t_i}) <+\infty.
$$
Fix the sequence $(q_{t_i})_i$ and define the sets
$$
Y_N\eqdef \bigcap_{j>N}\{x \colon \varphi_{t_j}(x) \in [-3\, \alpha_k,3\, \alpha_k]\}.
$$
By definition
$$
\mu(Y_N) \ge 1-\left(\sum_{j=N}^\infty (1-q_{t_j})\right)
$$
and hence
$$
\mu(Y)=1, \quad \mbox{where} \quad Y\eqdef \bigcup_N Y_N.
$$
We have that $\mu(B\cap Y)=1$ and that every point $x\in Y\cap B$ has Birkhoff average $\varphi_\infty (x)\in [-3\, \alpha_k,3\, \alpha_k]$. This ends the proof of the proposition (assuming Lemma~\ref{l.c.alpha}).

\begin{proof}[Proof of Lemma~\ref{l.c.alpha}]
Recall that $x_0\in X$ is controlled at any scale with a long sparse tail for $X$ and $\varphi$. Recall also the choices of $\cT$, $R_\infty$, $\bar \varepsilon$, and $\bar \alpha$. Fix $k$ and take a number $t>2\,T_k$. Pick a time interval $I=[i,i+t-1]_{\NN}$. For each $j\ge k$ consider the intervals $H$ of the form $[rT_j,(r+1)T_j-1]_{\NN}$ contained in $I$ which are either $j$-good or components of the set $R_j$. By definition of the average forcing sequence $\bar \alpha$ (Definition~\ref{d.alphacontrol}), in each of these intervals $H$ the average satisfies
$$
\varphi_H (x_0)\in [-\alpha_j,\alpha_j] \subset [-\alpha_k,\alpha_k].
$$
We call these subintervals $H$ of $I$ \emph{$\alpha_k$-controlled}.

\begin{clai}\label{c.JI}
For every $i$ and $t$ the union of the $\alpha_k$-controlled intervals contained in $I=[i,i+t-1]_{\NN}$ is a (possibly empty) interval $J=J_I$ such that $\varphi_J(x_0)\in [-\alpha_k,\alpha_k]$.
\end{clai}
\begin{proof}
Let us first define auxiliary intervals $A=A_I$ and $B=B_I$. The interval $A$ is defined as follows:
\begin{itemize}
\item If $i\in R_m$ for some $m\geq k$ then $A$ is the intersection of the component of $R_m$ containing $i$ with $I$;
\item otherwise we let $A=[i,jT_k-1]$, where $j$ is the least integer $r$ with $i\leq r \, T_k$ (note that $A$ is empty if $i=jT_k$).
\end{itemize}
The interval $B$ is defined symmetrically as follows:
\begin{itemize}
\item if $i+t\in R_m$ for some $m\geq k$ then $B$ is the intersection of the component of $R_m$ containing $i+t$ with $I$;
\item otherwise we let $B=[\ell T_k, i+t]$, where $\ell$ is the largest integer $r$ with $r\, T_k\leq i+t+1$ (note that $B$ is empty if $i+t+1=\ell\, T_k$).
\end{itemize}
\begin{fact} \label{f.setJ}
$J=[i,i+t]\smallsetminus(A\cup B)$.
\end{fact}
\begin{proof}
Just note that by construction every component of $R_m$ intersecting $J$ is contained in $J$. A similar inclusion holds for every $m$-good interval intersecting $J$. These two inclusions imply the fact.
\end{proof}
It remains to see that $J$ is the union of pairwise disjoint $\alpha_k$-controlled intervals. By Fact~\ref{f.setJ} and the construction, the components of $\bigcup_{m\ge k} R_m$ intersecting $J$ are contained in $J$. These components are pairwise disjoint and their complement is a union of good $T_k$-intervals. This implies that $J$ is a disjoint union of intervals $H$ where the average satisfies $\varphi_H (x_0)\in [-\alpha_k,\alpha_k]$. Hence the average $\varphi_J (x_0)$ of $\varphi$ on $J$ belongs to $[-\alpha_k,\alpha_k]$, ending the proof of the claim.
\end{proof}

\begin{clai} \label{c.filldensity}
Fix $k$.
Given any $m\in \NN$ there is $t_m$ such that for every $t\geq t_m$ and every $i\in \NN$ with
$$
\frac 1t \, \sum_{j=0}^{t-1} \varphi(f^{i+j}(x_0))\notin [-2\,\alpha_k,2\,\alpha_k],
$$
either $i$ or $i+t$ belongs to $\bigcup_{\ell \geq m} R_\ell$.
\end{clai}
\begin{proof}
Pick an interval $I=[i,i+t]_{\NN}$ and associate to it the interval $J=J_I$ from Claim~\ref{c.JI} and the intervals $A=A_I$ and $B=B_I$ from its proof. As $\varphi_J(x_0)\in [-\alpha_k,\alpha_k]$, in order to have $|\varphi_I (x_0)|> 2\, \alpha_k$ the set $I\smallsetminus J=A\cup B$ must fill a relatively large proportion (depending on $\alpha_k$ and $\sup|\varphi|$ but independent of $t$) of the interval $I$. In other words, there is a constant $C>0$ such that
\begin{equation}\label{e.contradiction}
\frac{\#(A\cup B)}{t}>C.
\end{equation}
Fix $m$ and let
$$
t_m\eqdef \frac{2\,T_m}{C}+1.
$$
Take any $t\ge t_m$. The proof is by contradiction: if $i,i+t\notin \bigcup_{\ell \geq m} R_\ell$ then $|A|, |B|\le T_m$ and therefore
$$
\frac{\#(A\cup B)}{t}\le \frac{2\, T_m}{t}<C,
$$
contradicting \eqref{e.contradiction}. The proof of the claim is complete.
\end{proof}

We are now ready to conclude the proof of the lemma. Claim~\ref{c.filldensity} implies that for $t>t_m$ the number $1-q_t$ is less than twice the density of the set $\bigcup_{\ell \geq m} R_\ell$ in $[0,t]$. Fact~\ref{f.c.proportion} implies that this density goes to $0$ as $t\to \infty$, proving the lemma.
\end{proof}
The proof of Proposition~\ref{p.l.alpha} is now complete.
\end{proof}

\section{Patterns, concatenations, flip-flop families, and control of averages}\label{s.patterns}

In this section we introduce the notion of a pattern (Section~\ref{ss.patterns}) and explain its relation with the scales and tails of the previous section.
In Section~\ref{ss.tailsandpatterns} we see that a $\cT$-long tail of a scale $\cT$ induces patterns in its good intervals. Patterns will be used to codify certain orbits (the orbit follows some distribution pattern). This naive idea is formalised in the notion of a concatenation of sets following a pattern, see Section~\ref{ss.concatenations}. We are interested in concatenations of plaques of a flip-flop family (associated to a map $\varphi$) and in the control of the averages of $\varphi$ corresponding to these concatenations, see Section~\ref{ss.distortionfobirkhoffaverages}. The main result in this section is Theorem~\ref{t.p.flipfloppattern}, which gives the control of averages for concatenations, see Section~\ref{ss.flip-flopfamiliesofplaques}. In what follows we make these notions precise.

\subsection{Patterns}\label{ss.patterns}

A scale $\mathcal{T}=(T_n)_{n\ge 0}$ induces, for each $n$, a partition of $\NN$ consisting of intervals of the form $[\ell \, T_n,(\ell+1)\, T_n-1]_{\NN}$. A pattern is a partition of these intervals respecting some compatibility rules given by the scale.

\begin{defi}[Pattern] \label{d.pattern}
Let $\mathcal{T}=(T_n)_{n\ge 0}$ be a scale and $I\subset \NN$ an interval of the form $I=[\ell \, T_n,(\ell+1)\, T_n-1]_{\NN}$ for some $\ell\in \NN$. A $T_n$-pattern $\mathfrak{P}=\mathfrak{P}(I)$ of the interval $I$ consists of a partition $\cP=\{I_i\}_{i=1}^r$ of $I$ into intervals $I_i=[k\, T_{\ell(i)}, (k+1)\, T_{\ell(i)}-1]_{\NN}$, where $\ell(i)\in \{0,\dots, n\}$, and a map $\iota\colon\cP \to \{r,w\}$ such that
\begin{itemize}
\item either $\ell(i)\ne 0$ and then $\iota (I_i)=w$,
\item or $\ell(i)=0$ and then $\iota (I_i)\in\{r,w\}$.
\end{itemize}
We write $\mathfrak{P}=(\cP,\iota)$.
A subinterval $[kT_i, (k+1)T_i-1]$ of $I$ that is not strictly contained in an interval of the partition $\cP$ is called \emph{$\mathfrak{P}$-admissible (of length $T_i$)}.
\end{defi}

In this definition, the letter $w$ refers to ``walk'' and $r$ to ``rest''.

\begin{rema}\label{r.subpattern}$\,$
\begin{enumerate}
\item If $[kT_i, (k+1)T_i-1]$ is a $\mathfrak{P}$-admissible subinterval of $I$, then the restrictions of the partition $\cP$ and of the map $\iota$ induce a $T_i$-pattern in $[kT_i, (k+1)T_i-1]$, which is called a \emph{subpattern} of $\mathfrak{P}$.
\item A $T_n$-pattern either consists of a unique interval of $w$-type or is a ``concatenation'' of $T_{n-1}$-patterns.
\end{enumerate}
\end{rema}

Consider an interval $I$ and a pattern $\mathfrak{P}$ of it as in Definition~\ref{d.pattern}. A point $j=k\,T_i\in \NN$ is \emph{$i$-initial for the pattern $\mathfrak{P}$} if the interval $[j, j+T_i-1]_{\NN}$ is admissible. A point $j\in \NN$ is \emph{$\mathfrak{P}$-initial} if it is $i$-initial for some $i$. We denote the set of initial points of $\mathfrak{P}$ by $I(\mathfrak{P})$. The set of \emph{$\mathfrak{P}$-marked points} of the $T_n$-pattern $\mathfrak{P}$, denoted by $M(\mathfrak{P})$, is the union of the point $\{(\ell+1)T_n\}$ and the set of all initial points of $\mathfrak{P}$.

\subsection{Tails and patterns}
\label{ss.tailsandpatterns}

We now see that given a scale $\cT$ and a $\cT$-long sparse tail $R_\infty$, the tail induces patterns in its good intervals $I$ (i.e., intervals with $I\cap R_{n,\infty}=\varnothing$; recall Definition~\ref{d.goodandbad}). In this subsection the sparseness of the tail is not relevant.

\begin{lemm}[Pattern induced by a tail] \label{l.tailpattern}
Let $\cT=(T_n)_{n\in \NN}$ be a scale and $R_{\infty}$ a $\cT$-long sparse tail.
Let
$$
I=[\ell \, T_n,(\ell+1)\,T_n-1]_{\NN}
$$
be an $n$-good interval of $R_{\infty}$ and consider the partition $\cP$ of $I$ and the map $\iota\colon \cP\to \{r,w\}$ defined as follows:
\begin{itemize}
\item the intervals $J$ of $\cP$ with $\iota(J)=w$ are the components of $R_\infty$ contained in $[\ell\, T_n,(\ell+1)\,T_n-1]$;
\item the complement of $R_\infty$ in $[\ell \, T_n,(\ell+1)\, T_n-1]_{\NN}$ can be written as the union of intervals $J$ of the type $[k\, T_0,(k+1)\, T_0-1]$; these intervals are the elements of the partition $\cP$ with $\iota(J)=r$.
\end{itemize}
Then $\mathfrak{P}=(\cP,\iota)$ defines a $T_n$-pattern in $I$.
\end{lemm}

\begin{proof}
To prove the lemma it is enough to recall that, by the definition of an $n$-good interval, there is no component of $R_\infty$ containing the interval $[\ell \, T_n,(\ell+1)\, T_n-1]_{\NN}$ and that every $T_k$ is a multiple of $T_0$.
\end{proof}

The pattern $\mathfrak{P}$ in Lemma~\ref{l.tailpattern} is called \emph{the pattern induced by the tail $R_\infty$} in the good interval $[\ell \,T_n,(\ell+1)\, T_n-1]$ and is denoted by $\mathfrak{P}_{n,\ell}$, or by $\mathfrak{P}_{n,\ell}(R_\infty)$ (to emphasise the role of the tail). The next remark associates a sequence of patterns to the tail $R_\infty$.

\begin{rema}[Initial patterns for a long tail]\label{r.initial}
With the notation of Lemma~\ref{l.tailpattern}, by the definition of the tail $R_\infty$, the initial interval of length $T_n$, $[0,T_n-1]_{\NN}$, is a good interval. We let $\mathfrak{P}_n=\mathfrak{P}_{n,0}$ and call it \emph{the initial $T_n$-pattern of $R_\infty$}.
The set of initial points of $\mathfrak{P}_n$ consists of the following points:
$$
\Big( \{\mbox{origins of the components of $R_\infty$}\} \cup \{ kT_0\not\in R_\infty\}\Big) \cap [0, T_n-1].
$$
\end{rema}

\begin{rema}[Compatibility of induced patterns]\label{r.initialcompatibility}
For every $i<n$ the restriction of the initial pattern $\mathfrak{P}_n$ to the interval $[0,T_{i}-1]_{\NN}$ is the initial pattern $\mathfrak{P}_i$. In other words, $\mathfrak{P}_i$ is the \emph{initial $T_i$-subpattern} of $\mathfrak{P}_n$.
\end{rema}

\subsection{Concatenations and controlled plaque-segments}
\label{ss.concatenations}

Consider a compact metric space $(X,d)$, a homeomorphism $f\colon X\to X$, and an open set $U$ of $X$. Consider a family $\cD$ of compact sets contained in $U$. We call the elements of $\cD$ plaques\footnote{In our applications, the elements of $\cD$ are sets in a flip-flop family, recall Definition~\ref{d.flipflop}.}. Given a pair of plaques $D_0,D_1\in \cD$ we say that $(D_0,D_1)$ is a \emph{plaque-segment of length $T$ relative to $U$ and $\cD$} if:
\begin{itemize}
\item $f^{-i}(D_1)\subset U$ for every $i\in\{0,\dots, T\}$ and
\item $f^{-T}(D_1)\subset D_0$.
\end{itemize}
We say that $D_0$ is the \emph{origin of the segment} and $D_1$ is the \emph{end of the segment}.

\begin{figure}
\begin{minipage}[h]{\linewidth}
\centering
\begin{overpic}[scale=.4, ]{P_L4.pdf}
\put(5,63){\small$D_0$}
\put(36,63){\small$D_1$}
\put(69,63){\small$D_2$}
\put(100,63){\small$D_3$}
\put(18,36){\small$L_0$}
\put(50,36){\small$L_1$}
\put(82,36){\small$L_2$}
\put(10,18){\tiny$(D_0,D_1)$}
\put(42,18){\tiny$(D_1,D_2)$}
\put(75,18){\tiny$(D_2,D_3)$}
\put(18,12){\tiny$(D_0,D_2)$}
\put(40,6){\tiny$(D_0,D_3)$}
\end{overpic}
\caption{Concatenations}
\label{f.concatenations}
\end{minipage}
\end{figure}

Let $(D_0,D_1)$ and $(D_1,D_2)$ be two plaque-segments of lengths $L_0$ and $L_1$, respectively, relative to $U$ and $\cD$. Then $(D_0,D_2)$ is a plaque-segment of length $L_0+L_1$, called \emph{the concatenation of $(D_0,D_1)$ and $(D_1,D_2)$}. We use the notation $(D_0,D_2)\eqdef (D_0,D_1)\ast(D_1,D_2)$. See Figure~\ref{f.concatenations}.

\begin{defi}\label{d.plaquecontrolled}
Let $(X,d)$ be a compact metric space, $f\colon X\to X$ a homeomorphism, $U$ an open set of $X$, $\varphi\colon U\to \mathbb{R}$ a continuous map, and $\cD$ a family of plaques contained in $U$. Consider $T\in \NN$ and a subset $J\subset\mathbb{R}$. A plaque-segment $(D_0,D_1)$ of length $T>0$ relative to $U$ and $\cD$ is called \emph{$(J,T)$-controlled} if
$$
\varphi_T(x)=\frac 1T\sum_{i=0}^{T-1}\varphi(f^i(x))\in J, \quad \mbox{for every $x\in f^{-T}(D_1) \subset D_0$.}
$$
\end{defi}
When there is no ambiguity on the pair $U$ and $\cD$ the dependence on these sets will be omitted.

\begin{defi}
Let $(X,d)$ be a compact metric space, $f\colon X\to X$ a homeomorphism, $U$ an open set of $X$, $\varphi\colon U\to \mathbb{R}$ a continuous map, and $\cD$ a family of plaques contained in $U$.
Consider a scale $\cT=(T_n)_{n\in \NN}$, a $T_n$-pattern $\mathfrak{P}$ of $[\ell\, T_n,(\ell+1)\, T_n-1]_{\NN}$, $n>0$, the set $M(\mathfrak{P})$ of its marked points, and a family of subsets $\cJ=(J_i)_{i\in \NN}$ of $\mathbb{R}$. A family $\{D_i\}_{i\in M(\mathfrak{P})}$ of plaques of $\cD$ is called \emph{$(\cJ,\mathfrak{P})$-controlled} (relative to $U$ and $\cD$) if:
\begin{itemize}
\item For every $\mathfrak{P}$-admissible interval $[k\, T_i,(k+1)\, T_i-1]$ the pair $(D_{kT_i},D_{(k+1)T_i})$ is a plaque-segment of length $T_i$ that is $(J_i,T_i)$-controlled (relative to $U$ and $\cD$).
\item For any $i,j\in M(\mathfrak{P})$, $i<j$, the pair $(D_i,D_j)$ is a plaque-segment of length $j-i$ (relative to $U$ and $\cD$).
\end{itemize}
\end{defi}

\subsection{Distortion of Birkhoff averages and concatenations in flip-flop families}
\label{ss.distortionfobirkhoffaverages}

The following result is a translation of \cite[Lemma 2.4]{BoBoDi2} to the context of flip-flop families with sojourns. Recall the notation for Birkhoff averages in \eqref{e.birkhoffaverages}.

\begin{lemm}[Small distortion of Birkhoff averages over long concatenations] \label{l.distortion}
Let \newline $f \colon X \to X$ be a homeomorphism, $\varphi\colon X\to \mathbb{R}$ a continuous function, and $\mathfrak{F}$ a flip-flop family associated to $\varphi$ and $f$ with sojourns in a compact set $Y$. Then for every $\alpha>0$ there exists $t = t(\alpha) \in \NN$ with the following property: Consider any $T\ge t$ and any family of plaques $\{D_i\}_{0\leq i\leq T}$ of $\mathfrak{F}$ such that, for every $i=0, \dots, T-1$, the pair $(D_i, D_{i+1})$ is a plaque-segment of length $L_i$.
Then the plaque-segment $$ (D_0,D_T)= (D_0,D_1)\ast(D_1,D_2)\ast \cdots \ast (D_{T-1},D_T) $$ satisfies $$ \left| \varphi_L (x) - \varphi_L(y) \right| < \alpha, \quad \mbox{where} \quad L\eqdef \sum_{i=0}^{T-1} L_i, $$ for every pair of points $x,y\in D_0$ such that $$ f^{L_{i-1}'}(x), f^{L_{i-1}'}(y) \in D_i, \quad \mbox{where} \quad L_{i-1}^\prime \eqdef\sum_{j=0}^{i-1} L_j, $$ for every $i=0,\dots, T-1$. \end{lemm} The proof is the same as the one of \cite[Lemma 2.4]{BoBoDi2}; the key ingredient is the expansion property in item \eqref{i.flipflop33} in Definitions~\ref{d.flipflop} and \ref{d.flipfloptail}. We omit this proof and refer to \cite{BoBoDi2}. \subsection{Flip-flop families and concatenations} \label{ss.flip-flopfamiliesofplaques} The aim of this section is to prove the following theorem. \begin{theor}\label{t.p.flipfloppattern} Let $(X,d)$ be a compact metric space, $Y$ a compact subset of $X$, $U$ an open subset of $X$, $f\colon X \to X$ a homeomorphism, $\varphi\colon U \to \mathbb{R}$ a continuous function, and $\mathfrak{F}$ a flip-flop family with sojourns along $Y$ associated to $\varphi$ and $f$ whose plaques are contained in $U$. Consider sequences $(\delta_n)_{n\in \NN}, (\alpha_n)_{n\in \NN}$, and $(\beta_n)_{n\in \NN}$ of positive numbers such that: $$ (\delta_n)_n \to 0 \quad \mbox{and} \quad \alpha_{n+1}< \dfrac{\alpha_n}{4}<\beta_n<\dfrac{\alpha_n}{2}.
$$ Then there is a scale $\cT=(T_n)_{n\in \NN}$ satisfying the following properties: For every plaque $D\in \mathfrak{F}$, every $T_n$-pattern $\mathfrak P= (\cP, \iota)$, and every $\omega \in \{+,- \}$ there is a family of plaques $\cD_{\mathfrak{P} } = \{D_a\}_{a\in M(\mathfrak{P})}$ of $\mathfrak{F}$ such that: \begin{enumerate}[{(I}1{)}] \item \label{i.I1} $D_0= D$; \item \label{i.I2} the family $\{D_a\}_{a\in M(\mathfrak{P})}$ is $(\cJ_n,\mathfrak{P})$-controlled (relative to $U$ and $\mathfrak{F}$), where $\cJ_n=\{J_i\}_{i\in\{0,\dots,n\}}$ and $$ J_i\eqdef \left[ -\alpha_i, -\frac{\alpha_i}2 \right] \cup \left[\frac{\alpha_i}2, \alpha_i\right] \mbox{ for } i<n \mbox{ and } J_n\eqdef \omega\, \left[ \frac{\alpha_n}2, \alpha_n\right]; $$ \item \label{i.I3} if $[a,a+T_{i}-1]$ is an interval of the partition {$\cP$} of $w$-type, then for every point $x\in f^{-T_i}(D_{a+T_i})$ the segment of orbit $\{x,\dots, f^{T_i}(x)\}$ is $\delta_i$-dense in $Y$. \end{enumerate} \end{theor} We say that the family of plaques $\cD_{\mathfrak{P} } = \{D_a\}_{a\in M(\mathfrak{P})}$ in Theorem~\ref{t.p.flipfloppattern} is \emph{$(\cJ_n,\mathfrak P)$-controlled and starts at $D_0$}. \begin{proof} The construction of the scale $\cT$ in the theorem is done by induction on $n$ (assuming that $T_i$ is defined for $i\leq n$, we will define $T_{n+1}$). The proof considers two cases: either the pattern is trivial or it is a concatenation of $T_n$-patterns. Proposition~\ref{p.l.induction1} deals with trivial patterns while Proposition~\ref{p.l.induction2} deals with concatenations. Note that Theorem~\ref{t.p.flipfloppattern} claims the existence of a scale $\cT=(T_n)_{n\in \NN}$ that works for every plaque $D\in \mathfrak{F}$.
In the proofs of Propositions~\ref{p.l.induction1} and \ref{p.l.induction2} we get such a number depending on the plaque $D\in \mathfrak{F}$ and uniformly bounded. To uniformize this number over all plaques in $\mathfrak{F}$ we will use Lemma~\ref{l.necessario} below. Recall the definition of the distortion time $t(\alpha)$ associated to $\alpha>0$ in Lemma~\ref{l.distortion}. \begin{lemm}[Uniformization] \label{l.necessario} Take a scale $(T_n)_{n\in \NN}$ and a sequence $(\alpha_n)_{n\in \NN}$ as in Theorem~\ref{t.p.flipfloppattern}. Then for every plaque $D_0\in \mathfrak{F}$, every $\omega_0 \in\{-,+\}$, and every $$ t\ge \tau_n \eqdef \max \left\{ t\left(\frac{\alpha_{n+1}}{6}\right), 6 \, \frac{\alpha_n\, T_n}{\alpha_{n+1}}\right\} $$ the following property holds: Let $D_1\in\mathfrak{F}$ be a plaque such that $(D_0,D_1)$ is a $(\omega_0 \, [\frac{\alpha_{n+1}}2,\alpha_{n+1}],t)$-controlled plaque-segment. Then there is $\omega \in \{-,+\}$ such that if $(D_1, D_2)$ is a plaque-segment that is $(\omega \, [\frac{\alpha_{n}}2,\alpha_{n}],T_n)$-controlled, then the concatenation $(D_0,D_2)= (D_0,D_1)\ast (D_1,D_2)$ is $(\omega_0 \, [\frac{\alpha_{n+1}}2,\alpha_{n+1}],t+T_n)$-controlled. \end{lemm} \begin{proof} We prove the lemma for $\omega_0=+$; the case $\omega_0=-$ is analogous. Note first that the choice of $t$ implies that the distortion of $\varphi_t$ on $f^{-t}(D_1)$ is bounded by $\frac{\alpha_{n+1}}{6}$. Moreover, as the plaque-segment $(D_0,D_1)$ is $([\frac{\alpha_{n+1}}2,\alpha_{n+1}],t)$-controlled, we have $$ \varphi_{t}(f^{-t}(D_1))\subset\left[ \frac{\alpha_{n+1}}2,\alpha_{n+1} \right].
$$ Now there are two cases: \begin{enumerate}[(a)] \item if $\displaystyle{ \max_{x\in f^{-t}(D_1)}\varphi_t(x)}\leq \frac{5\, \alpha_{n+1}}{6}$, we choose $\omega=+$; \item otherwise $\displaystyle{ \min_{x\in f^{-t}(D_1)}\varphi_t(x)\geq \frac{4\, \alpha_{n+1}}{6}}$ and we choose $\omega=-$. \end{enumerate} In the first case, as $\omega=+$, one easily checks that \begin{equation} \label{e.easytocheck} \varphi_{t+T_n}(x)>\varphi_t(x) \quad \mbox{for every $x\in f^{-(t+T_n)}(D_2)$.} \end{equation} Moreover, as $t>6 \, \frac{\alpha_n\, T_n}{\alpha_{n+1}}$ and $(D_1, D_2)$ is $([\frac{\alpha_{n}}2,\alpha_{n}],T_n)$-controlled, we have $$ \varphi_{t+T_n}(x)<\dfrac{\frac 56\, \alpha_{n+1}\,t +T_n\,\alpha_n}{t+T_n}< \frac{t \, \alpha_{n+1}}{t+T_n}< \alpha_{n+1} \quad \mbox{for every $x\in f^{-(t+T_n)}(D_2)$.} $$ Finally, recalling that $(D_0,D_1)$ is $([\frac{\alpha_{n+1}}2,\alpha_{n+1}],t)$-controlled and using \eqref{e.easytocheck}, we get $$ \frac{\alpha_{n+1}}{2}< \varphi_t(x)< \varphi_{t+T_n}(x)< \alpha_{n+1}, $$ ending the proof in the first case. Case (b) is analogous and hence omitted. The proof of the lemma is now complete. \end{proof} \begin{prop}[Trivial patterns] \label{p.l.induction1} Under the assumptions of Theorem~\ref{t.p.flipfloppattern}, assume that for every $i=0, \dots, n$ there are defined natural numbers $T_i$ such that the conclusions in the theorem hold for $T_i$-patterns. Then there is $\widetilde T_{n+1}$ such that for every $T> \widetilde{T}_{n+1}$, every plaque $D\in \mathfrak{F}$, and every $\omega\in \{+,-\}$ there is a plaque $D_0\in \mathfrak{F}$ such that \begin{itemize} \item $f^{-T}(D_0)\subset D$, \item for every $x\in f^{-T}(D_0)$ the set $\{x,f(x),\dots, f^{T}(x)\}$ is $\delta_{n+1}$-dense in $Y$, and \item $(D_0,D)$ is $(\omega \, [\frac{\alpha_{n+1}}2, \alpha_{n+1}],T)$-controlled.
\end{itemize} \end{prop} \begin{proof} We only present the proof for the case $\omega=+$; the case $\omega=-$ is analogous and thus omitted. Let $N=N_{\delta_{n+1}}$ be as in Definition~\ref{d.flipfloptail}. Then for every plaque $D\in \mathfrak{F}$ there is a $\widetilde D_0\in \mathfrak{F}$ such that \begin{itemize} \item {$f^{-i}(\widetilde D_0)\subset U$ for all $i=0,\dots, N$,} \item $f^{-N} (\widetilde D_0)\subset D$, and \item for every $x\in f^{-N} (\widetilde D_0)$ the set $\{x,f(x),\dots, f^N(x)\}$ is $\delta_{n+1}$-dense in $Y$. \end{itemize} In what follows we focus on the control of the Birkhoff averages. Note that the average of $\varphi_N$ on $f^{-N} (\widetilde D_0)$ is uniformly upper bounded by the maximum of $|\varphi |$ on $X$, denoted by $ \max |\varphi |$. Recall the definition of $t(\alpha)$ in Lemma~\ref{l.distortion} and fix $k_0$ large enough such that \begin{equation} \label{e.k0first} k_0 > t \left( \frac{\alpha_{n+1}}{6}\right)\, \frac{1}{T_n}. \end{equation} Since $\beta_n<\frac{\alpha_n}{2}$, if $k_0$ is large enough then for every $k\ge k_0$ it holds that \begin{equation} \label{e.k0second} \beta_n< \frac{- N \max |\varphi| + k \, T_n \dfrac{\alpha_n}{2}}{N+ k\, T_n}< \frac{N \max |\varphi| + k \, T_n\, \alpha_n}{N+ k \, T_n}< 3\, \frac{\alpha_n}{2}. \end{equation} \begin{clai}\label{c.icerm} Consider $k_0$ satisfying equations \eqref{e.k0first} and \eqref{e.k0second}. Then for every $k\ge k_0$ there is a plaque $\widetilde D_1\in \mathfrak{F}$ such that \begin{enumerate}[(a)] \item $f^{-kT_n} (\widetilde D_1)\subset \widetilde D_0$; \item for every $x\in f^{-k\,T_n-N} (\widetilde D_1)$ it holds that $$ \frac{1}{N+k\, T_n}\, \sum_{j=0}^{N+k\, T_n-1} \varphi (f^j(x))\in \, \left[ \beta_{n}, \frac{3\,\alpha_{n}}2 \right]; $$ \item for every $x,y\in f^{-kT_n-N} (\widetilde D_1)$,
$$ \frac{1}{N+k \, T_n}\, \sum_{j=0}^{N+k\, T_n-1} |\varphi (f^j(x))-\varphi (f^j(y))| < \frac{\alpha_{n+1}}{6}. $$ \end{enumerate} \end{clai} \begin{proof} To prove the first item we consider the concatenation of $k$ plaque-segments $(\widehat{D}_i, \widehat D_{i+1})$, $i=0,\dots, k-1$, of length $T_n$ associated to $\omega_i=+$ given by the induction hypothesis and such that $\widehat D_0=\widetilde D_0$. These pairs are obtained inductively: assuming the pair $(\widehat D_i,\widehat D_{i+1})$ is defined, we apply the induction hypothesis to the final plaque $\widehat D_{i+1}$. To conclude it is enough to take $\widetilde D_1=\widehat D_{k}$. See Figure~\ref{f.claimicerm}. To prove item (b) note that each plaque-segment $(\widehat D_i, \widehat D_{i+1})$ is $([\frac{\alpha_{n}}2,\alpha_{n}],T_n)$-controlled. Hence the averages over these segments belong to $[\frac{\alpha_n}{2},\alpha_n]$. Now the control of the averages follows from the choice of $k_0$ in equation~\eqref{e.k0second}. Item (c) follows from the distortion Lemma~\ref{l.distortion} and the choice of $k_0$ in equation~\eqref{e.k0first}. This ends the proof of the claim. \end{proof} \begin{figure} \begin{minipage}[h]{\linewidth} \centering \begin{overpic}[scale=.35]{P_L5.pdf} \put(-3,43){\small$D$} \put(88,43){\small$\widetilde{D}_1$} \put(24,43){\small$\widetilde{D}_0$} \put(18,23.5){\small$N$} \put(60,23.5){\small$k \, T_n$} \end{overpic} \caption{Proof of Claim \ref{c.icerm}} \label{f.claimicerm} \end{minipage} \end{figure} The proof of the proposition now follows by arguing exactly as in the concatenation result \cite[Lemma~2.12]{BoBoDi2}. For completeness, we recall the main arguments involved in this proof.
Note that in Claim~\ref{c.icerm} we can assume that the constant $k_0$ is such that \begin{equation}\label{e.choiceofk} k_0> \max\left\{ {\dfrac{6\,\alpha_n}{\alpha_{n+1}}}, t \left( \frac{\alpha_{n+1}}{6}\right)\,\frac{1}{T_n} \right\}. \end{equation} \begin{lemm} \label{l.inextremis} For every $k\ge k_0$ there are $i_0$ and a plaque $D_1\in \mathfrak{F}$ such that $(D,D_1)$ is $(\big[ \frac{1}{2} \alpha_{n+1},\alpha_{n+1} \big], \widetilde T_{n+1} )$-controlled, where $$ \widetilde T_{n+1}=N+(k+i_0)T_n. $$ \end{lemm} \begin{proof} We just describe the main steps of the proof. Take $\widehat D_0\eqdef \widetilde D_1$, where $\widetilde D_1$ is the plaque given by Claim~\ref{c.icerm}. By the induction hypothesis, given any $i\in \{0, \dots, n\}$ there is a family of $([-\alpha_n,-\frac{\alpha_n}{2}], T_n)$-controlled plaque-segments $(\widehat D_{j},\widehat D_{j+1})$ for $j=0,\dots, k+i-1$. This implies that $$ \varphi_{T_n} \big( f^{-T_n}(\widehat D_{j+1}) \big) \subset \left[-\alpha_n,-\frac{\alpha_n}2\right]. $$ Consider the plaque-segment $(D,\widehat D_{k+i})$ of length $N+(k+i)\,T_n$ obtained by concatenating $(D,\widehat D_0=\widetilde D_1)$ and $(\widehat D_j,\widehat D_{j+1})$, $j=0,\dots, k+i-1$, that is, $$ (D,\widehat D_{k+i})=(D,\widehat D_0) \ast (\widehat D_0,\widehat D_{1})\ast \cdots \ast (\widehat D_{k+i-1},\widehat D_{k+i}). $$ The choice of $k_0$ in \eqref{e.choiceofk} ($k_0\,\alpha_{n+1}> 6 \, \alpha_n$) immediately implies that every $x\in f^{-N-(k+i)T_n}(\widehat D_{k+i})$ satisfies the following implication: \begin{equation}\label{e.thefollowingholds} \varphi_{N+(k+i)T_n}(x)\geq \frac 56 \alpha_{n+1} \quad \implies \quad \varphi_{N+(k+i+1)T_n}(x)\geq \frac 46\alpha_{n+1}. \end{equation} Let \begin{equation*} \ell_0\eqdef \dfrac{3\, (N+k\, T_n) \, \alpha_n}{T_n\, (\alpha_n+\alpha_{n+1})}.
\end{equation*} By item (b) in Claim~\ref{c.icerm} we have that $$ \beta_n \le \varphi_{N+k\, T_n}(x)\le \frac{3\,\alpha_n}{2} \quad \mbox{for every $x\in f^{-N-k\,T_n}(\widehat D_0)$.} $$ A simple calculation gives that for every $\ell > \ell_0$ it holds that \begin{equation}\label{e.needtoexplain} \varphi_{N+(k+\ell)\,T_n}(x)<\frac 12\, \alpha_{n+1} \quad \mbox{for every} \quad x\in f^{-N-(k+\ell)\, T_n}(\widehat D_{k+\ell}). \end{equation} Equation \eqref{e.needtoexplain} implies that there is a first $i_0$ with $\varphi_{N+(k+i_0)\, T_n} (\bar x) \le \frac{5}{6} \, \alpha_{n+1}$ for some $\bar x \in f^{-N-(k+i_0)\,T_n}(\widehat D_{k+i_0})$. From Equation \eqref{e.thefollowingholds} we get \begin{equation} \label{e.previousterca} \varphi_{N+(k+i_0)\,T_n}(\bar x)\in \left[ \frac 46\alpha_{n+1},\frac 56\alpha_{n+1} \right]. \end{equation} By the choice of $k_0$ in~\eqref{e.k0first}, the distortion of $\varphi_{N+(k+i_0)T_n}$ on $f^{-N-(k+i_0)\, T_n}(\widehat D_{k+i_0})$ is strictly less than $\frac 16 \alpha_{n+1}$. Equation~\eqref{e.previousterca} now implies that $$ \varphi_{N+(k+i_0)\,T_n}(x)\in\left[ \frac 12\alpha_{n+1},\alpha_{n+1}\right] \quad \mbox{for every $x\in f^{-N-(k+i_0)\,T_n}(\widehat D_{k+i_0})$.} $$ The lemma follows taking $D_1=\widehat D_{k+i_0}$. \end{proof} Take $i_0$ as in Lemma~\ref{l.inextremis}, $k_0$ as in \eqref{e.choiceofk}, $k \ge k_0$ sufficiently large, and define $$ \widetilde T_{n+1}\eqdef N+(k+i_0)\, T_n> \tau_n, $$ where $\tau_n$ is as in Lemma~\ref{l.necessario}. Consider the plaque $D_1$ given by Lemma~\ref{l.inextremis} such that $(D,D_1)$ is $(\big[ \frac{1}{2} \alpha_{n+1},\alpha_{n+1} \big], \widetilde T_{n+1} )$-controlled.
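For completeness, the simple calculation behind \eqref{e.needtoexplain} in the proof of Lemma~\ref{l.inextremis} can be sketched as follows (using the upper bound $\frac{3\alpha_n}{2}$ from item (b) of Claim~\ref{c.icerm} and the fact that each of the $\ell$ additional blocks of length $T_n$ has averages at most $-\frac{\alpha_n}{2}$): splitting the Birkhoff sum, one gets
$$
\varphi_{N+(k+\ell)\,T_n}(x)\le \frac{(N+k\,T_n)\,\frac{3\,\alpha_n}{2}-\ell\, T_n\,\frac{\alpha_n}{2}}{N+(k+\ell)\,T_n},
$$
and the right-hand side is smaller than $\frac{\alpha_{n+1}}{2}$ exactly when $\ell\, T_n\,(\alpha_n+\alpha_{n+1})>(N+k\,T_n)(3\,\alpha_n-\alpha_{n+1})$, which holds for every $\ell>\ell_0$ because $3\,\alpha_n-\alpha_{n+1}<3\,\alpha_n$.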
Using the induction hypothesis, consider a plaque $D_2$ such that $(D_1,D_2)$ is $(\big[ \frac{1}{2} \alpha_{n},\alpha_{n} \big], T_n )$-controlled. As $\widetilde T_{n+1}>\tau_n$, Lemma~\ref{l.necessario} implies that $(D,D_2)$ is $(\big[\frac{1}{2} \alpha_{n+1},\alpha_{n+1} \big], \widetilde T_{n+1}+T_n)$-controlled. Repeating this last argument $j$ times (for any $j$), we get a plaque $D_2(j)$ such that $(D,D_2(j))$ is $(\big[\frac{1}{2} \alpha_{n+1},\alpha_{n+1} \big], \widetilde T_{n+1}+j\,T_n)$-controlled. This completes the average control in the proposition and ends its proof. \end{proof} Given a pattern $\mathfrak{P}$ and its set of marked points $M(\mathfrak{P})$, we denote by $e_{ M(\mathfrak{P})}$ the right extreme of $M(\mathfrak{P})$. \begin{prop}[Concatenation of patterns] \label{p.l.induction2} Under the assumptions of Theorem~\ref{t.p.flipfloppattern}, assume that for every $i=0,\dots, n$ there is defined $T_i\in \NN$ satisfying the conclusions in the theorem. Then there is a constant $k_0$ such that for every $k\ge k_0$, every family $\{ \mathfrak{P}_i\}_{i=1}^k$ of $T_n$-patterns, every $\omega \in \{-,+\}$, and every plaque $D\in \mathfrak{F}$ there is a sequence of symbols $(\omega_i)_{i=1,\dots,k}$, $\omega_i\in \{-,+\}$, satisfying the following property: Consider the family of sets $\cJ_i=\{J_{i,j}\}$, $i=1,\dots,k$ and $j=1,\dots,n$, defined by \begin{itemize} \item $J_{i,j}\eqdef \big[-\alpha_j, -\frac{1}{2}\,\alpha_j\big] \cup \big[\frac{1}{2}\, \alpha_j, \alpha_j\big]$ if $j<n$ and \item $J_{i,n}\eqdef \omega_i\, \big[ \frac{1}{2}\, \alpha_n, \alpha_n \big]$. \end{itemize} Let $\cD_{\mathfrak{P}_1}=\{D_{1,j}\}_{j\in M(\mathfrak{P}_1)}$ be a family of $(\cJ_1,\mathfrak{P}_1)$-controlled plaques given by the induction hypothesis associated to the plaque $D_{1,0}=D$.
Let $\cD_{\mathfrak{P}_i}=\{D_{i,j}\}_{j\in M(\mathfrak{P}_i)}$ be the family of $(\cJ_i,\mathfrak{P}_i)$-controlled plaques associated to the final plaque $D_{i-1,e_{ M(\mathfrak{P}_{i-1})}}=D_{i,0}$ of the family $\cD_{\mathfrak{P}_{i-1}}=\{D_{i-1,j}\}_{j\in M(\mathfrak{P}_{i-1})}$ given by the induction hypothesis. Then the plaque-segment $(D, D_{k,e_{ M(\mathfrak{P}_k)}})$ is $(\omega\, [ \frac{1}{2}\, \alpha_{n+1}, \alpha_{n+1} ],k\,T_n)$-controlled. \end{prop} \begin{proof} The proof follows by arguing as in \cite[Lemma~2.12]{BoBoDi2}. We now recall the strategy for $\omega=+$ (the case $\omega = -$ is analogous and thus omitted). We start by taking a sequence of signs $\omega_i=+$ for every $i$. This sequence is taken long enough to guarantee a distortion smaller than $\frac{1}{6}\alpha_{n+1}$ on the pre-images of the plaques. We stop at some large $i_0$. Note that the averages are in $\big[ \frac{1}{2}\alpha_n, \alpha_n \big]$. Thereafter, for $i>i_0$, we consider a sequence of signs $\omega_i=-$ as in the proof of the previous lemma. In this way the averages of $\varphi$ start to decrease. We stop when this average at some point of the plaque belongs to $\big[ \frac{4}{6}\alpha_{n+1},\frac{5}{6}\alpha_{n+1} \big]$ (the key point is that the averages cannot jump from above $\frac{5}{6}\alpha_{n+1}$ to below $\frac{4}{6}\alpha_{n+1}$: for large $i$ the variation of the averages in one concatenation step is small compared with the width of $\big[ \frac{4}{6}\alpha_{n+1},\frac{5}{6}\alpha_{n+1} \big]$, which is exactly the argument in \cite{BoBoDi2}). The distortion control implies that the average of $\varphi$ over the whole pre-image of the plaque is contained in $\big[\frac{1}{2} \alpha_{n+1},\alpha_{n+1}\big]$. We conclude the proof using Lemma~\ref{l.necessario}: we can continue concatenating plaque-segments keeping the averages in $\big[ \frac{1}{2}\alpha_{n+1},\alpha_{n+1} \big]$. This completes the sketch of the proof of the proposition.
\end{proof} \noindent{\emph{End of the proof of Theorem~\ref{t.p.flipfloppattern}.}} The definition of the scale $(T_n)_n$ is done inductively on $n$. Assuming that $T_i$ is defined for $i\leq n$, we define $T_{n+1}$ as follows. Take $\widetilde T_{n+1}$ as in Lemma~\ref{l.inextremis} and $k_0$ as in Claim~\ref{c.icerm}. Then define $T_{n+1}$ as a multiple of $T_n$ such that $$ T_{n+1}\ge \max\{ \widetilde T_{n+1},k_0 \, T_n\} \quad \mbox{and} \quad \frac{T_{n+1}}{T_n}\ge (n+1)\, \frac{T_n}{T_{n-1}}. $$ Take now a $T_{n+1}$-pattern $\mathfrak P= (\cP, \iota)$, $\omega \in \{+,- \}$, and a plaque $D\in \mathfrak{F}$. There are two cases to consider. If $\cP$ is the trivial $T_{n+1}$-pattern, we just need to apply Proposition~\ref{p.l.induction1}. Otherwise, the $T_{n+1}$-pattern $\mathfrak{P}$ is a concatenation of a sequence of at least $k_0$ $T_{n}$-patterns. The theorem follows by applying Proposition~\ref{p.l.induction2} to this family of $T_n$-patterns. \end{proof} Bearing in mind Remark~\ref{r.initialcompatibility}, we are interested in obtaining plaque-segments associated to $T_n$-patterns respecting the plaque-segments associated to their initial subpatterns. A slight variation of the proof of Proposition~\ref{p.l.induction2} implies the following addendum: \begin{adde}[Extension of initial subpatterns] \label{p.extension} With the hypotheses of Theorem~\ref{t.p.flipfloppattern}, the scale $\cT=(T_n)_{n\in \NN}$ can be chosen satisfying the following additional property: Let $\mathfrak P_1$ be a $T_{n}$-pattern and $\mathfrak P_2$ a $T_{n+1}$-pattern such that $\mathfrak P_1$ is the initial $T_n$-subpattern of $\mathfrak P_2$. Consider a flip-flop family $\mathfrak{F}$ and a plaque $D_0$ of $\mathfrak{F}$.
Let $\{D_a\}_{a\in M(\mathfrak P_1)}$ be a family of plaques associated to the pattern $\mathfrak P_1$ starting at $D_0$ given by Theorem~\ref{t.p.flipfloppattern}. Then for every $\omega\in \{+,-\}$ there is a family $\{\widetilde D_a\}_{a\in M(\mathfrak P_2)}$ of plaques associated to the pattern $\mathfrak P_2$ satisfying the conclusion of Theorem~\ref{t.p.flipfloppattern}, starting at $D_0$, and such that $$ \widetilde D_a= D_a \quad \mbox{for every $a\in M(\mathfrak P_1)$}. $$ \end{adde} This result allows us to choose the family of plaques associated to a pattern extending the ones associated to its initial subpatterns. \section{Flip-flop families with sojourns: proof of Theorem~\ref{t.flipfloptail} }\label{s.flipfloptail} In this section we prove Theorem~\ref{t.flipfloptail}, Corollary~\ref{c.flipfloptail}, and Proposition~\ref{p.r.hyperboliclike}. \subsection{Proof of Theorem~\ref{t.flipfloptail}} Consider a homeomorphism $f\colon X \to X$ defined on a compact metric space $(X,d)$, a continuous function $\varphi\colon X \to \mathbb{R}$, and a flip-flop family $\mathfrak{F}=\mathfrak{F}^+\bigsqcup\mathfrak{F}^-$ with sojourns along a compact subset $Y$ of $X$ associated to $\varphi$ and $f$. We need to see that every plaque $D\in \mathfrak{F}$ contains a point $x_D$ that is controlled at any scale with a long sparse tail with respect to $\varphi$ and $Y$. Fix sequences of strictly positive numbers $(\alpha_n)_n$ and $(\beta_n)_n$ such that $$ 0<\alpha_{n+1}< \frac{\alpha_n}{4}< \beta_n<\frac{\alpha_n}2. $$ Associated to these sequences we consider the family of control intervals $\cJ_n=\{J_i\}_{i\in\{0,\dots,n\}}$ defined as in Theorem~\ref{t.p.flipfloppattern}. We also take an arbitrary sequence of positive numbers $(\delta_n)_n$ converging to $0$. Denote by $\cT=(T_n)_{n\in \NN}$ the scale associated to these sequences given by Theorem~\ref{t.p.flipfloppattern}.
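To illustrate the constraints (this explicit choice is ours and plays no special role), one admissible choice of the sequences is $\alpha_n=5^{-n}$, $\beta_n=\frac{\alpha_n}{3}$, and $\delta_n=2^{-n}$, since then
$$
\alpha_{n+1}=\frac{\alpha_n}{5}<\frac{\alpha_n}{4}<\beta_n=\frac{\alpha_n}{3}<\frac{\alpha_n}{2}
\quad\mbox{and}\quad \delta_n\to 0.
$$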
By Lemma~\ref{l.tailexistence} there are a sequence $\bar \varepsilon=(\varepsilon_n)_n$ and a $\cT$-long $\bar \varepsilon$-sparse tail $R_\infty$. Let $\mathfrak{P}_n$ be the sequence of initial patterns associated to the tail $R_\infty$ given by Remark~\ref{r.initial}. Let $M (R_\infty)$ be the set of marked points of the components of $\mathfrak{P}_n$, $$ M(R_\infty) \eqdef \bigcup_{n=0}^{\infty} M (\mathfrak{P}_n). $$ \begin{lemm}\label{l.infinitecontrol} For every plaque $D_0\in\mathfrak{F}$ there is a sequence $(D_a)_{a\in M(R_\infty)}$ of plaques of $\mathfrak{F}$ such that for every $n$ the subfamily $(D_a)_{a\in M( \mathfrak{P}_n)}$ is $(\cJ_n, \mathfrak{P}_n)$-controlled. \end{lemm} \begin{proof} Apply first Theorem~\ref{t.p.flipfloppattern} to construct the family associated to the pattern $\mathfrak{P}_0$. Thereafter, inductively apply Addendum~\ref{p.extension} to construct the family of sets associated to $\mathfrak{P}_{n+1}$ extending the family constructed for $\mathfrak{P}_n$. \end{proof} By the expansion property \eqref{i.defff2} in Definition~\ref{d.flipfloptail} of a flip-flop family with sojourns we have that \begin{equation}\label{e.justapoint} \bigcap_{a\in M (R_\infty)} f^{-a}(D_a)=\{x_D\}\subset D_0. \end{equation} By construction, the point $x_D$ is controlled at any scale with long sparse tail $R_\infty$ with respect to $\varphi$ and $Y$, proving Theorem~\ref{t.flipfloptail}. \qed \subsection{Proof of Corollary~\ref{c.flipfloptail}} Let $\mu$ be any weak$\ast$-accumulation point of the family of empirical measures $(\mu_n(x_D))_n$. By Theorem~\ref{t.accumulation}, for $\mu$-almost every point $x$ the Birkhoff average $\varphi_\infty(x)$ is zero and the orbit of $x$ is dense in $X$.
This immediately implies that almost every component $\nu$ of the ergodic decomposition of $\mu$ has full support and satisfies $\int \varphi \,d\nu=0$. \qed \subsection{Proof of Proposition~\ref{p.r.hyperboliclike}} Given $t\in (\alpha, \beta)$, consider $\alpha< \alpha_t <t< \beta_t <\beta$. Consider small cylinders $C(\alpha_t)$ and $C(\beta_t)$ where the map $\varphi$ is less than $\alpha_t$ and bigger than $\beta_t$, respectively. Consider now unstable subsets of these cylinders (i.e., the intersections of the cylinders with unstable sets). For sufficiently large $m$ these sets form a flip-flop family relative to $f^m$. Now it is enough to apply either the criterion in \cite{BoBoDi2} (to get item (a)) or Corollary~\ref{c.flipfloptail} (to get item (b)). \section{Proof of Theorem~\ref{t.homoclinic}: Flip-flop families and homoclinic relations} \label{s.flip-flophomoclinic} The goal of this section is to prove Theorem~\ref{t.homoclinic}. Consider $f\in \operatorname{Diff}^1(M)$, a pair of hyperbolic periodic points $p$ and $q$ of $f$ that are homoclinically related, and a continuous function $\varphi\colon M\to \mathbb{R}$ such that $\int \varphi\, d\mu_{\cO(p)} <t< \int \varphi \, d\mu_{\cO(q)} $ (recall that $\mu_{\cO(p)}$ and $\mu_{\cO(q)}$ are the unique $f$-invariant probability measures supported on the orbits of $p$ and $q$, respectively). For notational simplicity, let us assume that the periodic points $p$ and $q$ are fixed points. In this case the assumption above just means $\varphi(p)<t<\varphi(q)$. After replacing $\varphi$ by the map $\varphi_t=\varphi-t$, to prove the theorem it is enough to get an ergodic measure $\mu_t$ whose support is $H(p,f)$ such that $\int \varphi_t\, d\mu_t=0$. Thus, in what follows, we can assume that $t=0$ and hence $\varphi(p)<0<\varphi(q)$.
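The reduction above is a one-line computation: for every $f$-invariant probability measure $\mu$,
$$
\int \varphi_t\, d\mu = \int (\varphi-t)\, d\mu = \int \varphi\, d\mu - t,
$$
so $\int \varphi_t\, d\mu_t=0$ if and only if $\int \varphi\, d\mu_t=t$; moreover, $\varphi_t(p)=\varphi(p)-t<0<\varphi(q)-t=\varphi_t(q)$, so the sign hypotheses are preserved.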
\subsection{Flip-flop families obtained from homoclinic relations} \label{ss.flip-flopfromhomoclinic} To prove the theorem we construct a flip-flop family associated to $\varphi$ and $f^n$ for some $n>0$. We begin by recalling some constructions from \cite{BoBoDi2}. \subsubsection{The space of discs $\cD^i(M)$} \label{sss.spaceofdiscs} Recall that $M$ is a closed and compact Riemannian manifold; let $\operatorname{dim} (M)=d$. For each $i\in\{1,\dots,d-1\}$ denote by $\cD^i(M)$ the set of $i$-dimensional (closed) discs $C^1$-embedded in $M$. On the space $\cD^i(M)$ the standard $C^1$-topology is defined as follows: given a disc $D\in \cD^i(M)$, its neighbourhoods are of the form $\{f(D) \colon f \in \cW\}$, where $\cW$ is a neighbourhood of the identity map in $\operatorname{Diff}^1(M)$. In \cite{BoBoDi2} the following metric $\mathfrak{d}$ on the space $\cD^i(M)$ is introduced: $$ (D_1,D_2) \mapsto \mathfrak{d}(D_1,D_2)\eqdef d_\mathrm{Haus}(TD_1,TD_2)+d_\mathrm{Haus}(T\partial D_1,T\partial D_2), $$ where $D_1,D_2\in \cD^i(M)$, the tangent bundles $TD_i$ and $T\partial D_i$ are considered as compact subsets of the corresponding Grassmannian bundles, and $d_\mathrm{Haus}$ denotes the corresponding Hausdorff distances. The distance $\mathfrak{d}$ behaves nicely under composition with diffeomorphisms: if $D$ and $D^\prime$ are close, then the same holds for $f(D)$ and $f(D^\prime)$; see \cite[Proposition 3.1]{BoBoDi2}. Given a family of discs $\mathfrak{D}\subset \cD^i(M)$ and $\eta>0$, we denote by $\cV^\mathfrak{d}_\eta(\mathfrak{D})$ the open $\eta$-neighbourhood of $\mathfrak{D}$ with respect to the distance $\mathfrak{d}$, $$ \cV^\mathfrak{d}_\eta(\mathfrak{D})\eqdef \{D\in \cD^i(M)\colon \mathfrak{d}(D,\mathfrak{D})<\eta\}.
$$ \subsubsection{Proof of Theorem~\ref{t.homoclinic}} Since $\varphi(p)<0<\varphi(q)$, there are small local unstable manifolds $W^{\mathrm{u}}_{loc}(p,f)$ and $W^{\mathrm{u}}_{loc}(q,f)$ of $p$ and $q$ such that $\varphi$ is strictly negative on $W^{\mathrm{u}}_{loc}(p,f)$ and strictly positive on $W^{\mathrm{u}}_{loc}(q,f)$. Similarly for the local stable manifolds $W^{\mathrm{s}}_{loc}(p,f)$ and $W^{\mathrm{s}}_{loc}(q,f)$ of $p$ and $q$. Since $p$ and $q$ are homoclinically related, there are $\ell_0\ge 0$ and small discs $\Delta^{\mathrm{s}}_p \subset W^{\mathrm{s}}(p,f)$ and $\Delta^{\mathrm{s}}_q\subset W^{\mathrm{s}}(q,f)$ such that the intersections $\Delta^{\mathrm{s}}_p\cap f^{\ell_0}(W^{\mathrm{u}}_{loc}(q,f))$ and $\Delta^{\mathrm{s}}_q\cap f^{\ell_0}(W^{\mathrm{u}}_{loc}(p,f))$ are both transverse and consist of just one point. For $\varrho>0$ consider the $\varrho$-neighbourhoods $\cV^{\mathfrak{d}}_\varrho(p)\eqdef \cV^{\mathfrak{d}}_\varrho(W^{\mathrm{u}}_{loc}(p,f))$ and $\cV^{\mathfrak{d}}_\varrho(q)\eqdef \cV^{\mathfrak{d}}_\varrho(W^{\mathrm{u}}_{loc}(q,f))$ of the local unstable manifolds of $p$ and $q$ for the distance $\mathfrak{d}$. For $\varrho$ small enough, every disc in $\cV^{\mathfrak{d}}_\varrho(p)$ intersects $\Delta^{\mathrm{s}}_p$ transversely. We also have that $\varphi$ is uniformly negative (say less than $-\alpha<0$) on every disc in this neighbourhood. Finally, $Df$ is uniformly expanding in restriction to this family of discs. There are similar assertions for the discs in $\cV^{\mathfrak{d}}_\varrho(q)$: every disc of this neighbourhood meets $\Delta^{\mathrm{s}}_q$ transversely, $\varphi$ is larger than $\alpha>0$ on the discs, and $Df$ is a uniform expansion.
\begin{rema}\label{r.contraction} If $\varrho>0$ is small enough, then there is $\ell_1$ such that for every $\ell>\ell_1$ the image $f^{\ell}(D)$ of any disc $D\in \cV^\mathfrak{d}_\varrho (p)$ contains discs in $ \cV^\mathfrak{d}_\varrho(p)$ and in $\cV^\mathfrak{d}_\varrho(q)$. The same holds (with the same constant) for discs in $\cV^\mathfrak{d}_\varrho(q)$. This is a well-known fact and is the basis of the proof of the existence of unstable manifolds using a graph transformation. In what follows we assume that $\varrho$ satisfies this property. \end{rema} We consider the following family $\mathfrak{F}=\mathfrak{F}^+\bigsqcup \mathfrak{F}^-$ of discs: \begin{itemize} \item $\mathfrak{F}^-$ is the family of discs in $\cV^\mathfrak{d}_\varrho(p)$ contained in $W^{\mathrm{u}}(p,f)\cup W^{\mathrm{u}}(q,f)$; \item $\mathfrak{F}^+$ is the family of discs in $\cV^\mathfrak{d}_\varrho(q)$ contained in $W^{\mathrm{u}}(p,f)\cup W^{\mathrm{u}}(q,f)$. \end{itemize} Note that, as $p$ and $q$ are homoclinically related, these two families are both infinite. \begin{prop}\label{p.flipflophomoclinic} There is $n$ such that the family $\mathfrak{F}$ is a flip-flop family associated to $\varphi$ and $f^n$ and has sojourns (for $f$) along the homoclinic class $H(p,f)$. \end{prop} We postpone the proof of Proposition~\ref{p.flipflophomoclinic} and first prove the theorem. \begin{proof}[Proof of Theorem~\ref{t.homoclinic}] Consider the flip-flop family $\mathfrak{F}$ with sojourns along $H(p,f)$ given by Proposition~\ref{p.flipflophomoclinic}.
Exactly as in the proof of Theorem~\ref{t.flipfloptail} in Section~\ref{s.flipfloptail}, we use Theorem~\ref{t.p.flipfloppattern}, Addendum~\ref{p.extension}, and Lemma~\ref{l.tailexistence} to construct a scale $\cT$, a tail $R_\infty$, a sequence of increasing patterns $\mathfrak{P}_n$, and a family of discs $D_a$, $a\in M(R_{\infty})$, such that the restriction of this family to the marked sets $M(\mathfrak{P}_n)$ is controlled by the pattern $\mathfrak{P}_n$ for every $n$ (exactly as in Lemma~\ref{l.infinitecontrol}). The expansion property of the flip-flop family implies that (recall equation~\eqref{e.justapoint}) $$ \bigcap_{a\in M(R_\infty)} f^{-a}(D_a)= \{x_\infty\}. $$ \begin{clai}$x_\infty \in H(p,f)$. \end{clai} \begin{proof} Every disc $D_a$ belongs to $\mathfrak{F}$, hence it is contained in $W^{\mathrm{u}} (p,f)\cup W^{\mathrm{u}} (q,f)$ and intersects $W^{\mathrm{s}}(p,f)\cup W^{\mathrm{s}}(q,f)$ transversely. Thus $D_a$ contains a point of $H(p,f)$. The $f$-invariance of $H(p,f)$ implies that the same holds for $f^{-a}(D_a)$. The compactness of $H(p,f)$ now implies that $x_\infty\in H(p,f)$. \end{proof} By construction, the point $x_\infty$ is controlled at any scale with a long sparse tail for $\varphi$ and $f$ (the ambient space here is $H(p,f)$). The theorem now follows from Theorem~\ref{t.accumulation}. \end{proof} \subsection{Proof of Proposition~\ref{p.flipflophomoclinic}} \label{ss.postponed} We split the proof of the proposition into two parts: \subsubsection{$\mathfrak{F}=\mathfrak{F}^+\cup \mathfrak{F}^-$ is a flip-flop family}\label{ss.flipflop} By construction, the map $\varphi$ is less than $-\alpha<0$ on the discs of $\cV^{\mathfrak{d}}_\varrho ( p)$ and bigger than $\alpha>0$ on the discs of $\cV^{\mathfrak{d}}_\varrho(q)$.
The definition of $\mathfrak{F}^\pm$ implies \eqref{i.flipflop1} in Definition~\ref{d.flipflop}. Recall the choice of $\ell_0$ above and that, by construction, the image $f^{\ell_0}(D)$ of any disc $D\in \mathfrak{F}$ intersects transversely the compact parts $\Delta^{\mathrm{s}}_p$ of $W^{\mathrm{s}}(p,f)$ and $\Delta^{\mathrm{s}}_q$ of $W^{\mathrm{s}}(q,f)$. Thus the $\lambda$-lemma (inclination lemma) and the invariance of $W^{\mathrm{u}}(p,f)\cup W^{\mathrm{u}}(q,f)$ imply the existence of $n_0>0$ such that for every $n>n_0$ and every disc $D\in\mathfrak{F}$ the set $f^n(D)$ contains a disc $D^+\in\mathfrak{F}^+$ and a disc $D^-\in \mathfrak{F}^-$. This proves item \eqref{i.flipflop2} in Definition~\ref{d.flipflop}. It remains to get the expansion property in item \eqref{i.flipflop33} of Definition~\ref{d.flipflop}. We need to get $n$ such that for every $D\in \mathfrak{F}$ the disc $f^n(D)$ contains a disc $D'$ such that $f^n\colon f^{-n}(D')\to D'$ is a uniform expansion. For that it is enough to take $n$ sufficiently large (independently of $D$). To see why this is so, recall first that $f^{n_0}(D)$ contains a disc $D_{n_0}\in \mathfrak{F}^+$. Now Remark~\ref{r.contraction} provides a sequence of discs $D_{n_0+i\ell_0}$ in $\mathfrak{F}^\pm$ such that $D_{n_0+(i+1)\ell_0}\subset f^{\ell_0}(D_{n_0+i\ell_0})$ and $f^{\ell_0}\colon f^{-\ell_0}(D_{n_0+(i+1)\ell_0})\to D_{n_0+(i+1)\ell_0}$ is a uniform expansion.
This implies that for $i$ large enough (independent of $D$) we get the announced expansion for $f^{n_0+i\ell_0}\colon f^{-(n_0+i\ell_0)}(D_{n_0+i\ell_0})\to D_{n_0+i\ell_0}$: just note that the $i\,\ell_0$ additional iterates in the ``expanding part'' compensate any contraction introduced by the first $n_0$ iterates (if $c\in(0,1)$ bounds from below the contraction of the first $n_0$ iterates and $\lambda>1$ is the expansion rate of each block of $\ell_0$ iterates, the composition expands as soon as $c\,\lambda^{i}>1$, that is, for $i>-\log c/\log \lambda$). \subsubsection{The family $\mathfrak{F}$ sojourns along the homoclinic class $H(p,f)$}\label{ss.sojournsalong} Consider any $\delta>0$. We need to prove that there is $N>0$ such that every disc $D\in\mathfrak{F}$ contains a pair of discs $\widehat D^+$, $\widehat D^-$ such that for every $x\in \widehat D^\pm$ the segment of orbit $\{x,\dots, f^N(x)\}$ is $\delta$-dense in $H(p,f)$ (item \eqref{i.defff0}), $f^N(\widehat D^+)\in\mathfrak{F}^+$ and $f^N(\widehat D^-)\in\mathfrak{F}^-$ (item \eqref{i.defff1}), and $f^N\colon \widehat D^\pm\to f^N(\widehat D^\pm)$ is expanding (item \eqref{i.defff2}). We need the following property of $H(p,f)$, which is a direct consequence of the density of the transverse homoclinic intersection points of $W^{\mathrm{u}}(p,f)\cap W^{\mathrm{s}}(p,f)$ in $H(p,f)$ and the existence of (hyperbolic) horseshoes associated to these points. \begin{rema}\label{r.existenceofperiodicorbits} For every $\epsilon>0$ there is a hyperbolic periodic point $r_\epsilon\in H(p,f)$ that is homoclinically related to $p$ and $q$ and whose orbit is $\epsilon/2$-dense in $H(p,f)= H(q,f)$. \end{rema} To prove item \eqref{i.defff0} consider the point $r=r_{\frac{\delta}{2}}\in H(p,f)$ given by Remark~\ref{r.existenceofperiodicorbits}. As the points $r$, $p$, and $q$ are pairwise homoclinically related, the stable manifold of the orbit of $r$, $W^{\mathrm{s}}(\cO(r),f)$, accumulates on those of $p$ and $q$.
Hence there are compact discs $\Delta^{\mathrm{s}}_{r,p}, \Delta^{\mathrm{s}}_{r,q}\subset W^{\mathrm{s}}(\cO(r),f)$ such that any disc in $\cV^{\mathfrak{d}}_\varrho(p)\cup \cV^{\mathfrak{d}}_\varrho (q)$ meets transversely $\Delta^{\mathrm{s}}_{r,p}\cup \Delta^{\mathrm{s}}_{r,q}$. Let $\pi$ be the period of $r$. As in Remark~\ref{r.contraction}, for each $i=0,\dots, \pi-1$, we fix a small local unstable manifold $W^{\mathrm{u}}_{loc}(f^i(r),f)$ and a small $C^1$-neighbourhood $\cV_\eta^\mathfrak{d} (f^i(r))\eqdef \cV_\eta^\mathfrak{d} (W^{\mathrm{u}}_{loc}(f^i(r),f))$ such that the image $f(D)$ of any disc $D\in \cV^\mathfrak{d}_\eta(f^i(r))$ contains a disc in $\cV_\eta^\mathfrak{d} (f^{i+1}(r))$ (for $i=\pi-1$ we take ``$\pi=0$''). Take now $D=D_0$ any disc in $\cV^\mathfrak{d}_\eta(r)$, let $D_1$ be a sub-disc of $f(D_0)$ in $\cV^\mathfrak{d}_\eta (f(r))$, and inductively define $D_{i+1}$ as a disc in $\cV^\mathfrak{d}_\eta (f^{i+1}(r))$ contained in $f(D_i)$. Assuming that the local unstable manifolds and their neighbourhoods are small enough, we have that every point in a disc of $\cV_\eta^\mathfrak{d} (f^i(r))$, $i=0,\dots, \pi-1$, is at distance less than $\frac{\delta}2$ from the orbit of $r$. Since the orbit of $r$ is $\frac{\delta}2$-dense in $H(p,f)$, for every $x\in f^{-\pi }(D_\pi)\subset D$ we have that the segment of orbit $\{ x,\dots , f^{\pi}(x)\}$ is $\delta$-dense in $H(p,f)$. Consider now a disc $D\in\mathfrak{F}$. By construction, this disc intersects transversely $W^{\mathrm{s}}(\cO(r),f)$ in some point of $\Delta^{\mathrm{s}}_{r,p}\cup \Delta^{\mathrm{s}}_{r,q}$.
By the $\lambda$-lemma there is $j_0$ (independent of $D$) such that $f^{j_0}(D)$ contains a disc $D_{0}$ in $\cV_\eta^\mathfrak{d}(r)$. The argument above provides a sequence of discs $D_{i}\in \cV_\eta^\mathfrak{d} (f^{i}(r))$, $i\in\{0,\dots,\pi\}$, with $D_{i+1}\subset f(D_{i})$ and such that for every $x\in f^{-\pi}(D_{\pi})\subset D_0\subset f^{j_0}(D)$ its orbit segment $\{x,\dots, f^{\pi}(x)\}$ is $\delta$-dense in $H(p,f)$. A new application of the $\lambda$-lemma provides a uniform $j_1>0$ such that $f^{j_1}(D_\pi)$ contains discs $\widetilde D^\pm \in \mathfrak{F}^\pm$ (recall that the initial disc $D$ belongs to $\mathfrak{F}$ and therefore is contained in $W^{\mathrm{u}}(p,f) \cup W^{\mathrm{u}}(q,f)$). Now it is enough to take $$ N\eqdef j_0+\pi+j_1 \quad \mbox{and} \quad \widehat D^\pm\eqdef f^{-j_0-\pi-j_1}(\widetilde D^\pm) \subset D. $$ By construction the orbit segment $\{y,\dots, f^{N}(y)\}$ of any point $y\in \widehat D^\pm$ is $\delta$-dense in $H(p,f)$, proving item \eqref{i.defff0} in Definition~\ref{d.flipfloptail}. By construction, $f^N(\widehat D^\pm)=\widetilde D^\pm \in \mathfrak{F}^\pm$, proving item \eqref{i.defff1} in Definition~\ref{d.flipfloptail}. Note that the discs $\widehat D^\pm\subset D$ satisfy the density in $H(p,f)$ and the return to $\mathfrak{F}^\pm$ properties; however, they can fail to satisfy the expansion property in \eqref{i.defff2} in Definition~\ref{d.flipfloptail}. To additionally get such an expansion one considers further iterates of the disc in an ``expanding'' region near $p$ or $q$. The expansion is obtained using Remark~\ref{r.contraction} and arguing exactly as in Section~\ref{ss.flipflop}. The proof of Proposition~\ref{p.flipflophomoclinic} is now complete.
\qed \subsection{Proof of Corollary~\ref{c.function}} \label{ss.proofofcorollarycfunction} By hypothesis, the saddles $p_f$ and $q_f$ have different ${\mathrm{u}}$-indices (say $i$ and $j$, $i<j$) that depend continuously on $f$ and whose chain recurrence classes coincide for every diffeomorphism $f$ in a $C^1$-open set $\cU$. As in the proof of Theorem~\ref{t.homoclinic}, let us assume that $t=0$ and hence the Birkhoff average of $\varphi$ is negative on $\cO(p_f)$ and positive on $\cO(q_f)$. According to \cite{ABCDW}, after restricting to a $C^1$-open and dense subset of $\cU$, we can assume that for every $k\in [i,j]$ every diffeomorphism $f\in \cU$ has a periodic point $r_f$ of ${\mathrm{u}}$-index $k$ that is $C^1$-robustly in $C(p_f,f)$. Therefore, after replacing $p_f$ and $q_f$ by other periodic points, we can assume that the ${\mathrm{u}}$-indices of $p_f$ and $q_f$ are consecutive. Following Propositions 3.7 and 3.10 in \cite{ABCDW}, an arbitrarily small $C^1$-perturbation of $f$ gives a diffeomorphism $h$ with a periodic point $r_h$ having a (unique) center eigenvalue equal to $1$ that is robustly in $C(p_h,h)$. This means that this (non-hyperbolic) periodic point $r_h$ admits a continuation $r_g\in C(p_g,g)=C(q_g,g)$ for some $g$ arbitrarily close to $f$. Consider the average of $\varphi$ along the orbit of $r_h$ and assume first that it is different from zero, for example negative. Then, after an arbitrarily small perturbation, we can transform $r_h$ into a hyperbolic point $r_g$ of $g$ of the same index as $q_g$ and homoclinically related to $q_g$ (for this last step we use the version of Hayashi's connecting lemma~\cite{Ha} for chain recurrence classes in \cite{BC}\footnote{This result guarantees that given two saddles in the same chain recurrence class there is an arbitrarily small $C^1$-perturbation of the diffeomorphism that gives an intersection between the invariant manifolds of these saddles.
If the saddles belong $C^1$-robustly to the same class then one can repeat the previous argument, interchanging the roles of the saddles, to get that the invariant manifolds of these saddles meet cyclically. Finally, if the saddles have the same index one can turn these intersections into transverse ones; thus the two saddles are homoclinically related and hence they are $C^1$-robustly in the same homoclinic class.}). The diffeomorphism $g$ belongs to $\cU$, the saddles $r_g$ and $q_g$ are homoclinically related, and the averages of $\varphi$ along these orbits have different signs. The corollary now follows from Theorem~\ref{t.homoclinic}. In the case when the average of $\varphi$ along the orbit of $r_g$ is zero one needs a slight modification of the previous argument. Let us sketch this construction. Arguing as above, we can assume that, after an arbitrarily small perturbation, the point $r_g$ is hyperbolic of the same index as $p_g$ (with center derivative arbitrarily close to one) and that $r_g$ and $p_g$ are homoclinically related. Using the homoclinic relation between $r_g$ and $p_g$ we get a point $\bar r_g$ with some center eigenvalue arbitrarily close to one and with negative average for $\varphi$. Next, arguing as above and after a small perturbation, we change the index of the point $\bar r_g$ and generate transverse cyclic intersections between $\bar r_g$ and $q_g$ (i.e., we put the saddle $\bar r_g$ in the homoclinic class of $q_g$). We are now in the previous case and prove the corollary using the saddles $\bar r_g$ and $q_g$. \qed \section{Flip-flop families in partially hyperbolic dynamics} \label{s.flipflopph} In this section we prove Theorems~\ref{t.cycle} and \ref{t.ctail}. For that we borrow and adapt some constructions from \cite{BoBoDi2}. In Section~\ref{ss.dynamicalblenders} we recall the definition of a dynamical blender and its main properties. Section~\ref{ss.ffconf} is dedicated to the study of flip-flop configurations.
In Section~\ref{ss.flipflopfamilieswithtails} we see how flip-flop configurations yield flip-flop families with sojourns. In Section~\ref{ss.controlofaveragesflipflop} we analyse the control of averages in flip-flop configurations. Finally, in Section~\ref{ss.proofoftheoremtcycleandother} we conclude the proofs of Theorems~\ref{t.cycle} and \ref{t.ctail}. \subsection{Dynamical blenders}\label{ss.dynamicalblenders} The definition of a \emph{dynamical blender} in \cite{BoBoDi2} involves three main ingredients: the distance on the space of $C^1$-discs (Section~\ref{sss.spaceofdiscs}), strictly invariant families of discs (Section~\ref{ss.invariantfamilies}), and invariant cone fields (Section~\ref{ss.invariantconefields}). We now describe succinctly these ingredients. \subsubsection{Strictly invariant families of discs} \label{ss.invariantfamilies} Recall the notation $\cD^i(M)$ for the set of $i$-dimensional (closed) discs $C^1$-embedded in $M$ and the definitions of the distance $\mathfrak{d}$ and the open neighbourhood $\cV^\mathfrak{d}_\eta(\mathfrak{D})$ of a family of discs $\mathfrak{D}$ with respect to $\mathfrak{d}$ in Section~\ref{sss.spaceofdiscs}. \begin{defi}[Strictly $f$-invariant families of discs] \label{d.strict} Let $f\in \operatorname{Diff}^1(M)$. A family of discs $\mathfrak{D}\subset \cD^i(M)$ is \emph{strictly $f$-invariant} if there is $\varepsilon>0$ such that for every disc $D_0\in\cV^\mathfrak{d}_\varepsilon(\mathfrak{D})$ there is a disc $D_1\in \mathfrak{D}$ with $D_1\subset f(D_0)$.
\end{defi} The existence of a strictly invariant family of discs is a $C^1$-robust property: if the family $\mathfrak{D}$ is strictly $f$-invariant then there are $\mu, \eta>0$ such that the family $\mathfrak{D}_\mu= \cV^\mathfrak{d}_\mu(\mathfrak{D})$ is strictly $g$-invariant for every $g\in \operatorname{Diff}^1(M)$ that is $\eta$-$C^1$-close to $f$, see \cite[Lemma 3.8]{BoBoDi2}. \subsubsection{Invariant cone fields} \label{ss.invariantconefields} Given a finite-dimensional vector space $E$, we say that a subset $C$ of $E$ is a \emph{cone of index $i$} if there are a splitting $E = E_1 \oplus E_2$ with $\operatorname{dim} (E_1) = i$ and a norm $\| \mathord{\cdot} \|$ defined on $E$ such that $$ C = \{v=v_1 + v_2 \colon v_i \in E_i, \ \|v_2\| \le \|v_1\|\}. $$ A cone $C'$ is \emph{strictly contained} in the cone $C$ above if there is $\alpha>1$ such that $$ C' \subset C_\alpha=\{v_1 + v_2 \colon v_i \in E_i, \ \|v_2\| \le \alpha^{-1} \|v_1\|\} \subset C. $$ A \emph{cone field of index $i$} defined on a subset $V$ of a compact manifold $M$ is a continuous map $x\mapsto \cC(x)\subset T_x M$ that associates to each point $x\in V$ a cone $\cC(x)$ of index $i$. We denote this cone field by $\cC=\{\cC(x)\}_{x\in V}$. Given a diffeomorphism $f\in \operatorname{Diff}^1(M)$ and a cone field $\cC=\{\cC(x)\}_{x\in V}$, we say that $\cC$ is \emph{strictly $Df$-invariant} if $Df(x)(\cC(x))$ is strictly contained in $\cC(f(x))$ for every $x \in V \cap f^{-1}(V)$. The following result is a standard lemma about the persistence of invariant cone fields (see for instance \cite[Lemma 3.9]{BoBoDi2}). \begin{lemm}\label{l.cones} Let $f\in \operatorname{Diff}^1(M)$, $V$ a compact subset of $M$, and $\cC$ a strictly $Df$-invariant cone field defined on $V$.
Then there is a $C^1$-neighbourhood $\cU$ of $f$ such that $\cC$ is strictly $Dg$-invariant for every $g\in \cU$. \end{lemm} \subsubsection{Dynamical blenders} \label{ss.manyblenders} We are now ready to define a \emph{dynamical blender} and recall its main properties. \begin{defi}[Dynamical blender, \cite{BoBoDi2}] \label{d.dynamicalblender} Let $f\in \operatorname{Diff}^1(M)$. A compact $f$-invariant set $\Gamma \subset M$ is a \emph{dynamical $\mathrm{cu}$-blender} \emph{of $\mathrm{uu}$-index $i$} if the following properties hold: \begin{enumerate} \item there is an open neighbourhood $V$ of $\Gamma$ such that $$ \Gamma=\bigcap_{n\in\ZZ} f^n(\overline V); $$ \item the set $\Gamma$ is transitive; \item the set $\Gamma$ is (uniformly) hyperbolic with $\mathrm{u}$-index strictly larger than $i$; \item there is a strictly $Df$-invariant cone field $\cC^{\mathrm{uu}}$ of index $i$ defined on $\overline V$; and \item there are a strictly $f$-invariant family of discs $\mathfrak{D}\subset \cD^i(M)$ and $\varepsilon>0$ such that every disc in $\cV^\mathfrak{d}_\varepsilon(\mathfrak{D})$ is contained in $V$ and tangent to $\cC^{\mathrm{uu}}$. \end{enumerate} We say that $V$ is the \emph{domain of the blender}, $\cC^{\mathrm{uu}}$ is its \emph{strong unstable cone field}, and $\mathfrak{D}$ is its \emph{strictly invariant family of discs}. To emphasise the role of these objects we write $(\Gamma, V,\cC^{\mathrm{uu}}, \mathfrak{D})$. \end{defi} \begin{rema}\label{r.blendersph} Let $\Gamma$ be a hyperbolic set of $\mathrm{u}$-index $j$ that is also a $\mathrm{cu}$-blender of $\mathrm{uu}$-index $i$.
By definition, the set $\Gamma$ has a partially hyperbolic splitting (recall \eqref{e.ph}) of the form $$ T_\Gamma M= E^\mathrm{uu} \oplus E^\mathrm{cu} \oplus E^\mathrm{s}, $$ where $\operatorname{dim}(E^\mathrm{uu})=i$, $\operatorname{dim} (E^\mathrm{cu})=j-i\ge 1$, and $E^{\mathrm{u}}=E^\mathrm{uu}\oplus E^\mathrm{cu}$. Here $E^\mathrm{s}$ and $E^\mathrm{u}$ are the stable and unstable bundles of $\Gamma$. We also define the bundle $E^{\mathrm{cs}}\eqdef E^\mathrm{cu}\oplus E^\mathrm{s}$. \end{rema} The next lemma states that blenders have well-defined continuations. \begin{lemm} [Lemma 3.8 and Scholium 3.14 in \cite{BoBoDi2}] \label{l.robust} Let $(\Gamma,V,\cC^{\mathrm{uu}},\mathfrak{D})$ be a dynamical blender of $f\in \operatorname{Diff}^1(M)$. Then there are a $C^1$-neighbourhood $\cU$ of $f$ and $\varepsilon>0$ such that for every diffeomorphism $g\in\cU$ the $4$-tuple $(\Gamma_g,V,\cC^{\mathrm{uu}},\cV^\mathfrak{d}_\varepsilon(\mathfrak{D}))$ is a dynamical blender, where $\Gamma_g$ is the hyperbolic continuation of $\Gamma$ for $g$. Moreover, every disc $D\in \cV^\mathfrak{d}_\varepsilon(\mathfrak{D})$ meets the local stable manifold of $\Gamma_g$ defined by $$ W^{\mathrm{s}}_{loc}(\Gamma_g)\eqdef \{x\in V\colon g^i(x)\in V\, \mbox{for every $i\ge 0$}\}. $$ \end{lemm} \subsection{Flip-flop configurations and partial hyperbolicity} \label{ss.ffconf} We now recall the definition of a flip-flop configuration and borrow some results from \cite{BoBoDi2}.
\begin{defi}[Flip-flop configuration] \label{d.flip} Consider $f\in \operatorname{Diff}^1(M)$ having a dynamical $\mathrm{cu}$-blender $(\Gamma, V, \cC^\mathrm{uu}, \mathfrak{D})$ of $\mathrm{uu}$-index $i$ and a hyperbolic periodic point $q$ of $\mathrm{u}$-index $i$. We say that $(\Gamma, V, \cC^\mathrm{uu}, \mathfrak{D})$ and $q$ form a \emph{flip-flop configuration} if there are: \begin{itemize} \item a disc $\Delta^\mathrm{u}$ contained in the unstable manifold $W^\mathrm{u}(q,f)$ and \item a compact submanifold with boundary $\Delta^\mathrm{s} \subset V \cap W^\mathrm{s}(q,f)$ \end{itemize} such that: \begin{enumerate} \item\label{i.FFC0} The disc $\Delta^\mathrm{u}$ belongs to the interior of the family $\mathfrak{D}$. \item\label{i.FFC1} $f^{-n}(\Delta^\mathrm{u}) \cap \overline V = \varnothing$ for all $n>0$. \item\label{i.FFC2} There is $N>0$ such that $f^{n}(\Delta^\mathrm{s}) \cap \overline V = \varnothing$ for every $n>N$. Moreover, if $x\in \Delta^\mathrm{s}$ and $j>0$ are such that $f^j(x)\notin V$ then $f^i(x)\notin \overline V$ for every $i\ge j$. \item\label{i.FFC3} $T_y W^\mathrm{s}(q,f) \cap \cC^\mathrm{uu}(y) = \{0\}$ for every $y \in \Delta^\mathrm{s}$. \item\label{i.FFC4} There are a compact set $K$ in the relative interior of $\Delta^\mathrm{s}$ and $\epsilon>0$ such that for every $D\in\mathfrak{D}$ there exists $x$ such that $K\cap D=\{x\}$ and $d(x,\partial D)>\epsilon$. \end{enumerate} The sets $\Delta^{\mathrm{u}}$ and $\Delta^{\mathrm{s}}$ are called the \emph{unstable and stable connecting sets} of the flip-flop configuration, respectively. \end{defi} \cite[Proposition 4.2]{BoBoDi2} asserts that flip-flop configurations are $C^1$-robust.
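The index bookkeeping implicit in this definition is perhaps worth spelling out (this only restates the conditions in Definitions~\ref{d.dynamicalblender} and \ref{d.flip}, no extra hypothesis is added): $$ \operatorname{dim} (\Delta^{\mathrm{u}}) \,=\, i \,=\, \operatorname{dim} W^{\mathrm{u}}(q,f) \,<\, \operatorname{dim} W^{\mathrm{u}}(x,f) \quad \mbox{for every $x\in \Gamma$.} $$ Thus the unstable connecting disc $\Delta^{\mathrm{u}}$ of $q$ has exactly the dimension of the discs of the family $\mathfrak{D}$, while the blender has at least one further unstable (center) direction: a direction that is expanded on $\Gamma$ but contracted along $\cO(q)$.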
The next lemma claims that flip-flop configurations yield partially hyperbolic dynamics. Recall Remark~\ref{r.blendersph} and the definition of the center unstable bundle $E^{\mathrm{cu}}$ of a blender. \begin{lemm}[Lemma~4.6 in \cite{BoBoDi2}]\label{l.Laranjeiras} Consider $f\in \operatorname{Diff}^1(M)$ having a hyperbolic periodic point $q$ and a dynamical blender $(\Gamma,V,\cC^\mathrm{uu},\mathfrak{D})$ in a flip-flop configuration with connecting sets $\Delta^\mathrm{u}\subset W^\mathrm{u}(q,f)$ and $\Delta^\mathrm{s}\subset W^\mathrm{s}(q,f)$. Consider the closed set $$ \Delta\eqdef \mathcal{O}(q) \, \cup \, \overline{V} \, \cup \, \bigcup_{k\ge 0} f^k(\Delta^\mathrm{s}) \, \cup \, \bigcup_{k\le 0} f^k(\Delta^\mathrm{u}). $$ Then there is a compact neighbourhood $U$ of $\Delta$, called a \emph{partially hyperbolic neighbourhood of the flip-flop configuration}, such that the maximal invariant set $\Gamma(U)$ of $f$ in $U$, $$ \Gamma(U)\eqdef \bigcap_{i\in \ZZ} f^i(U), $$ has a partially hyperbolic splitting $$ T_{\Gamma(U)} M = \widetilde E^\mathrm{uu} \oplus \widetilde E^\mathrm{cs}, $$ where $\widetilde E^\mathrm{uu}$ is uniformly expanding and $\widetilde E^\mathrm{uu}$ and $\widetilde E^\mathrm{cs}$ extend the bundles $E^\mathrm{uu}$ and $E^\mathrm{cs}$, respectively, defined over~$\Gamma$. Moreover, there is a strictly $Df$-invariant cone field over $U$ that extends the cone field $\cC^\mathrm{uu}$ defined on $\overline V$ (we also denote this cone field by $\cC^\mathrm{uu}$) whose vectors are uniformly expanded by $Df$. \end{lemm} \subsection{Flip-flop families with sojourns in homoclinic classes} \label{ss.flipflopfamilieswithtails} \cite[Proposition 4.9]{BoBoDi2} claims that flip-flop configurations yield flip-flop families.
These configurations are enough to construct measures with controlled averages. However, they do not provide control of the support of the obtained measure. In this paper we want to get measures with ``full support''. Bearing this in mind, we defined flip-flop families with sojourns (Definition~\ref{d.flipfloptail}). These ``sojourns'' guarantee ``density'' of orbits in the ambient space. \begin{theor}\label{t.p.flipfloptail} Consider $f\in \operatorname{Diff}^1(M)$ with a hyperbolic periodic point $q$ and a dynamical blender $\Gamma$ in a flip-flop configuration. Let $\varphi\colon M\to\RR$ be a continuous function that is positive on the blender $\Gamma$ and negative on the orbit of $q$. Then there are $N\geq1$ and a flip-flop family $\mathfrak{F}$ with respect to $\varphi_N$ and $f^N$ which sojourns along the homoclinic class $H(q,f)$ (for $f$). Moreover, given any $\delta>0$ the flip-flop family $\mathfrak{F}$ can be chosen such that: \begin{itemize} \item every $D\in \mathfrak{F}$ is contained in a $\delta$-neighbourhood of $\Gamma\cup \cO(q)$, \item every $D\in \mathfrak{F}$ transversely intersects $W^{\mathrm{s}}(q,f^N)$, and \item there is $D\in \mathfrak{F}$ contained in $W^{\mathrm{u}}_{loc}(q,f^N)$. \end{itemize} \end{theor} To prove this theorem we need to recall the construction of flip-flop families in \cite{BoBoDi2}. As the families in \cite{BoBoDi2} do not have sojourns, we need to adapt this construction to our context. \subsubsection{Flip-flop families associated to flip-flop configurations} We now borrow the following result from \cite{BoBoDi2} and recall some steps of its proof. \begin{prop}[Proposition 4.9 in \cite{BoBoDi2}] \label{p.flipflopconf} Let $f\in \operatorname{Diff}^1(M)$ be a diffeomorphism with a hyperbolic periodic point $q$ and a dynamical blender $\Gamma$ in a flip-flop configuration.
Let $U$ be a partially hyperbolic neighbourhood of this configuration and $\varphi\colon U\to\RR$ a continuous function that is positive on the blender and negative on the orbit of $q$. Then there are $N\geq1$ and a flip-flop family $\mathfrak{F}=\mathfrak{F}^+\bigsqcup \mathfrak{F}^-$ with respect to $\varphi_N$ and $f^N$. Moreover, given any $\delta>0$ the flip-flop family can be chosen such that the plaques in $\mathfrak{F}^+$ are contained in a $\delta$-neighbourhood of $\Gamma$ and the plaques in $\mathfrak{F}^-$ are contained in a $\delta$-neighbourhood of $q$. \end{prop} Note that Theorem~\ref{t.p.flipfloptail} is just the proposition above with additional sojourns. Observe also that the map $\varphi_N$ is only defined on $\bigcap_{i=0}^{N-1} f^{-i}(U)$ and that the plaques of $\mathfrak{F}$ are contained in that set. We now review the construction in \cite{BoBoDi2}. For simplicity let us suppose that $q$ is a fixed point. The definition of the family $\mathfrak{F}$ in Proposition~\ref{p.flipflopconf} involves a preliminary family of discs $\mathfrak{D}_q$ satisfying the following properties (see \cite[Lemma 4.11]{BoBoDi2}): \begin{enumerate} \item[(p1)] \label{i.1} The family of discs $\mathfrak{D}_q$ forms a small $C^1$-neighbourhood (in the metric $\mathfrak{d}$) of the local unstable manifold $W^{\mathrm{u}}_{loc}(q,f)$. This neighbourhood can be taken arbitrarily small. \item[(p2)] \label{i.2} The sets of the family $\mathfrak{F}^-$ are contained in discs in $\mathfrak{D}_q$. \item[(p3)] \label{i.3} Each disc in $\mathfrak{D}_q$ contains a plaque of $\mathfrak{F}^-$. \item[(p4)] \label{i.4} The image by $f^N$ of any plaque of $\mathfrak{F}$ contains a disc of $\mathfrak{D}_q$.
\end{enumerate} We have the following direct consequences of the properties above: \begin{enumerate} \item[(p5)] As $\mathfrak{D}_q$ can be taken contained in an arbitrarily small neighbourhood of $W^{\mathrm{u}}_{loc}(q,f)$, we can assume that $W^{\mathrm{s}}_{loc}(q,f)$ meets transversely every disc in $\mathfrak{D}_q$. \item[(p6)] As $W^{\mathrm{u}}_{loc}(q,f)$ can be chosen arbitrarily small, we can assume that $f^N$ expands uniformly the vectors tangent to the discs in $\mathfrak{D}_q$ (see also Remark~\ref{r.contraction}). \item[(p7)] As a consequence of items~(p2), (p3), and (p4), the image by $f^N$ of any disc in $\mathfrak{D}_q$ contains a disc in $\mathfrak{D}_q$. \end{enumerate} We say that the flip-flop family $\mathfrak{F}$ is \emph{prepared with an adapted family $\mathfrak{D}_q$} if $\mathfrak{F}$ and $\mathfrak{D}_q$ satisfy properties (p1)--(p7) above. \subsubsection{Proof of Theorem~\ref{t.p.flipfloptail}} Since a flip-flop configuration yields a prepared flip-flop family, Theorem~\ref{t.p.flipfloptail} is a consequence of the following proposition. \begin{prop} \label{p.l.flipfloptail} Let $f\in \operatorname{Diff}^1(M)$ be a diffeomorphism with a hyperbolic periodic point $q$ and a dynamical blender $\Gamma$ in a flip-flop configuration, let $U$ be a partially hyperbolic neighbourhood of this configuration, and let $\varphi\colon U\to\RR$ be a continuous function that is positive on the blender and negative on the orbit of $q$. Let $N\ge 1$ and $\mathfrak{F}$ be a prepared flip-flop family with an adapted family of discs $\mathfrak{D}_q$ with respect to $\varphi_N$ and $f^N$. Then the flip-flop family $\mathfrak{F}$ sojourns along the homoclinic class $H(q, f^N)$.
\end{prop} \begin{proof} We need the following lemma, whose proof is similar to the one of Proposition~\ref{p.flipflophomoclinic} and uses the partial hyperbolicity in the set $U$. \begin{lemm} \label{l.preparar} For every $\delta>0$ there is $L\in \NN$ such that every disc $D\in\mathfrak{D}_q$ contains a sub-disc $\widehat D$ such that: \begin{itemize} \item for every $x\in \widehat D$ the segment of orbit $\{x,\dots, f^L(x)\}$ is $\delta$-dense in $H(q,f^N)$, \item $f^{L}(\widehat D)$ contains a disc of $\mathfrak{D}_q$, \item for every $i\in\{0,\dots, L\}$ and every pair of points $x,y\in \widehat D$ it holds $$ d(f^{L-i}(x),f^{L-i}(y))\leq\zeta \, \alpha^i\, d(f^L(x), f^L(y)), $$ for some constants $\zeta>0$ and $0<\alpha<1$ (independent of the points and the discs). \end{itemize} \end{lemm} \begin{proof} Consider a hyperbolic periodic point $r_\delta\in H(q,f^N)$ homoclinically related to $q$ and whose orbit is $\delta$-dense in $H(q,f^N)$ (recall Remark~\ref{r.existenceofperiodicorbits}). The $\lambda$-lemma implies that a compact part of $W^{\mathrm{s}}(r_\delta,f)$ intersects transversely every disc in $\mathfrak{D}_q$. Thus, again by the $\lambda$-lemma, iterations of any disc $D\in \mathfrak{D}_q$ accumulate on $W^{\mathrm{u}}_{loc}(\cO(r_\delta),f)$. Again the $\lambda$-lemma and (p7) in the definition of a prepared family imply that further iterations of $D$ contain a disc in $\mathfrak{D}_q$. Since the number of iterates involved can be taken uniform, considering the corresponding pre-image one gets the disc $\widehat D$ satisfying the first two items of the lemma. Finally, exactly as at the end of the proof of Proposition~\ref{p.flipflophomoclinic}, further iterations provide the uniform expansion property. This ends the proof of the lemma.
\end{proof} To end the proof of the proposition recall that, by condition (p4), the image of any plaque $D\in \mathfrak{F}$ contains a disc in $\mathfrak{D}_q$. This provides the ``sojourns property'' for $\mathfrak{F}$ (maybe one needs to add some additional ``final'' iterates to recover the expansion). \end{proof} \subsection{Control of averages in flip-flop configurations} \label{ss.controlofaveragesflipflop} As a first consequence of Theorem~\ref{t.p.flipfloptail} we get measures with controlled averages and full support in a homoclinic class. \begin{theor}\label{t.p.tail} Let $f\in \operatorname{Diff}^1(M)$ be a diffeomorphism and $\varphi\colon M\to \RR$ be a continuous map. Suppose that $f$ has a dynamical blender $\Gamma$ and a hyperbolic periodic point $q$ that are in a flip-flop configuration with respect to $\varphi$ and $f$. Then there is an ergodic measure $\nu$ whose support is the whole homoclinic class $H(q,f)$ such that $$ \int \varphi \, d\nu=0. $$ \end{theor} \begin{proof} Under the hypotheses of the theorem, Theorem~\ref{t.p.flipfloptail} provides a flip-flop family $\mathfrak{F}$ associated to $f^N$ and $\varphi_N$ which sojourns in $H(q,f^N)$. By Theorem~\ref{t.flipfloptail} every plaque $D\in \mathfrak{F}$ contains a point $x_D$ that is controlled at any scale with a long sparse tail with respect to $\varphi_N$ and $H(q, f^N)$. Note that $W^{\mathrm{u}}_{loc}(q,f)$ contains a plaque $\Delta\in \mathfrak{F}$. Let $x_\Delta\in \Delta$ be the controlled point given by Theorem~\ref{t.flipfloptail}. \begin{clai} $x_\Delta\in H(q,f^N)$. \end{clai} \begin{proof} By construction, there is a sequence of discs $D_k$ and numbers $n_k\to \infty$ such that $D_{k+1}\subset f^{n_k}(D_k)$ and $D_0=\Delta$.
Thus $f^{-n_k}(D_k)\subset \Delta$ is a decreasing nested sequence of compact sets that satisfies $$ x_\Delta= \bigcap_{k\in \NN} f^{-n_k}(D_k). $$ The fact that this intersection is just a point follows from the expansion property in the definition of a flip-flop family. Since $D_k\subset W^{\mathrm{u}}(q,f^N)$ and intersects transversely $W^{\mathrm{s}}(q,f)$, we have that $D_k$ contains a point $x_k\in H(q,f^N)$. Hence $y_k=f^{-n_k}(x_k)\in H(q,f^N)\cap f^{-n_k}(D_k)$. Thus $y_k\to x_\Delta$ and $x_\Delta\in H(q, f^N)$. \end{proof} Take any measure $\mu$ that is an accumulation point of the measures $$ \mu_n\eqdef \frac1n\sum_{i=0}^{n-1}\delta_{f^{iN}(x_\Delta)}. $$ As the point $x_\Delta$ is controlled at any scale with a long sparse tail with respect to $\varphi$ and $H(q,f^N)$, Theorem~\ref{t.accumulation} implies that $\mu$-almost every point $y$ has a dense orbit in $H(q,f^N)$ and the average of $\varphi_N$ along the $f^N$-orbit of $y$ is zero. To conclude the proof of the theorem consider the measure $$ \eta \eqdef \frac 1N\sum_{i=0}^{N-1} f^i_*(\mu). $$ Now the $f$-orbit of $\eta$-almost every point $y$ is dense in $H(q,f)$ and satisfies $\varphi_\infty(y)=0$. By construction, any ergodic component $\nu$ of $\eta$ satisfies the conclusion of the theorem. \end{proof} \begin{rema} Let us compare Theorem~\ref{t.p.tail} with Corollary~\ref{c.function}. In both cases there is a continuous map $\varphi$ with a ``positive'' and a ``negative'' region. Corollary~\ref{c.function} is a perturbation result while Theorem~\ref{t.p.tail} does not involve perturbations. The corollary provides a (locally) open and dense subset of diffeomorphisms $f$ having an ergodic measure $\mu_f$ with $\int \varphi \, d\mu_f=0$.
In the theorem the mere existence of the flip-flop configuration for $f$ and $\varphi$ gives an ergodic measure $\mu_f$ of $f$ with $\int \varphi\, d\mu_f=0$. In the corollary the support of the ergodic measure is not completely determined (it is either $H(p,f)$ or $H(q,f)$), while in the theorem the support is $H(q,f)$ ($q$ is the saddle in the flip-flop configuration). \end{rema} \subsection{Ergodic non-hyperbolic measures with full support} \label{ss.proofoftheoremtcycleandother} In this section we conclude the proofs of Theorems~\ref{t.cycle} and \ref{t.ctail}. \subsubsection{Proof of Theorem~\ref{t.cycle}} \label{sss.proofoftheoremtcycle} Consider a $C^1$-open set $\cU\subset \operatorname{Diff}^1(M)$ consisting of diffeomorphisms $f$ having hyperbolic periodic points $p_f$ and $q_f$, depending continuously on $f$, of different indices, whose chain recurrence classes $C(p_f,f)$ and $C(q_f,f)$ coincide and have a partially hyperbolic splitting with one-dimensional center. This implies that the $\mathrm{u}$-indices of $p_f$ and $q_f$ are $j+1$ and $j$ for some $j$. Consider $f\in \cU$. We prove that there are a neighbourhood $\cV_f$ of $f$ and an open and dense subset $\cZ_f$ of $\cV_f$ where the conclusion of the theorem holds (every $g\in\cZ_f$ has a nonhyperbolic ergodic measure $\mu_g$ whose support is $H(p_g,g)=H(q_g,g)$). The theorem follows considering the set $\cV=\bigcup_{f\in \cU} \cZ_f$, which is, by construction, open and dense in $\cU$. Thus, in what follows, we fix $f\in \cU$ and study a local problem in a neighbourhood of $f$. The partial hyperbolicity of $C(p_f,f)$ gives neighbourhoods $V$ of $C(p_f,f)$ and $\cV_f$ of $f$ such that for every $g\in \cV_f$ the maximal invariant set of $g$ in $V$ \begin{equation} \label{e.upsilon} \Upsilon_g \eqdef \bigcap_{i\in \ZZ} g^i(V) \end{equation} has a partially hyperbolic splitting with one-dimensional center.
Since chain recurrence classes depend upper semi-continuously on the diffeomorphism, after shrinking $\cV_f$, if necessary, we can assume that $C(p_g,g)\subset V$ for every $g\in \cV_f$. Thus $H(p_g,g)=H(q_g,g)\subset C(p_g,g)\subset \Upsilon_g$ and these sets are partially hyperbolic with one-dimensional center. Hence \cite[Theorem E]{BDPR} (see Remark~\ref{r.bdpr}) gives an open and dense subset $\cW_f$ of $\cV_f$ such that for every $g\in \cW_f$ the homoclinic classes of $p_g$ and $q_g$ are equal (here the hypothesis on the one-dimensional center is essential). To prove the theorem it is enough to get an open and dense subset $\cZ_f$ of $\cW_f$ (thus of $\cV_f$) consisting of diffeomorphisms $g$ having a nonhyperbolic ergodic measure $\mu_{g}$ whose support is $H(p_g,g)=H(q_g,g)$. By \cite{BD-cycles} there is an open and dense subset $\cC_f$ of $\cW_f$ such that every $g\in \cC_f$ has transitive hyperbolic sets $\Lambda^p_g$ and $\Lambda^q_g$, with $p_g\in \Lambda^p_g$ and $q_g\in \Lambda^q_g$, having a robust cycle (i.e., there is a neighbourhood of $g$ consisting of diffeomorphisms $h$ such that the invariant manifolds of $\Lambda^p_h\ni p_h$ and $\Lambda^q_h\ni q_h$ meet cyclically). Now \cite[Proposition 5.2]{BoBoDi2} (Robust cycles yield spawners) and \cite[Proposition 5.3]{BoBoDi2} (Spawners yield split flip-flop configurations) give an open and dense subset $\cZ_f$ of $\cC_f$ such that every $g\in\cZ_f$ has a dynamical blender $\Gamma_g$ that is in a flip-flop configuration with $q_g$. Moreover, the blender $\Gamma_g$ and the saddle $p_g$ are \emph{homoclinically related} (their invariant manifolds intersect cyclically and transversely). As in the case of a homoclinic relation between periodic points, this implies that $\Gamma_g \subset C(p_g,g)$.
For $g\in \cZ_f$ consider a continuous function $\mathrm{J}^c_g$ defined on the partially hyperbolic set $\Upsilon_g$ as the logarithm of the center derivative of $g$, recall \eqref{e.logmap}. Note that $\Gamma_g\cup \cO(q_g)\subset \Upsilon_g$. Up to considering an adapted metric, we can assume that the map $\mathrm{J}^c_g$ is positive on the blender $\Gamma_g$ and negative on the orbit of $q_g$. We extend the map $\mathrm{J}^c_g$ to a continuous map defined on the whole manifold (with a slight abuse of notation, we denote this new map also by $\mathrm{J}^c_g$). For diffeomorphisms $g\in \cZ_f$ Theorem~\ref{t.p.tail} gives an ergodic measure $\mu_g$ whose support is $H(q_g,g)=H(p_g,g)$ and such that $\int \mathrm{J}^c_g\, d\mu_g=0$. As $\mu_g$ is supported on $H(q_g,g)\subset \Upsilon_g$, the function $\mathrm{J}^c_g$ coincides with the logarithm of the center derivative of $g$ on the support of $\mu_g$. Thus the center Lyapunov exponent of $\mu_g$ is $\int \mathrm{J}^c_g\, d\mu_g=0$. This concludes the proof of the theorem. \qed \subsubsection{Proof of Theorem~\ref{t.ctail}} \label{sss.proofoftctail} In the previous sections we dealt with averages of continuous functions. For the analysis of Lyapunov exponents let us recall that in \cite{BoBoDi2} the partial hyperbolicity of the set guarantees the continuity of the central (one-dimensional) derivative on a locally maximal invariant set (these maps are continuously extended to a neighbourhood of the set). Here we argue as in the previous sections keeping in mind the following three facts: (1) The existence of a flip-flop configuration is a hypothesis. (2) The fact that the neighbourhood is filtrating implies that it contains the homoclinic classes.
(3) The existence of invariant cone fields in the filtrating neighbourhood gives the partial hyperbolicity, with one-dimensional center, of the maximal invariant set in $U$ and thus of the homoclinic classes. \section{Applications to robustly transitive diffeomorphisms} \label{s.applications} In this section we prove Theorem~\ref{t.c.openanddense} and Corollary~\ref{c.averages}. Recall that $\cR\cT(M)$ is the (open) subset of $\operatorname{Diff}^1(M)$ of diffeomorphisms that are robustly transitive and have a pair of hyperbolic periodic points of different indices and a partially hyperbolic splitting with one-dimensional center. We prove the following proposition: \begin{prop}\label{p.l.h(q)} There is a $C^1$-open and dense subset $\cI(M)$ of $\cR\cT(M)$ such that for every $f\in \cI(M)$ there are hyperbolic periodic points $p_f$ and $q_f$ of different indices such that $$ H(p_f,f)=H(q_f,f)=M. $$ \end{prop} In view of this proposition, Theorem~\ref{t.c.openanddense} is a direct consequence of Theorem~\ref{t.cycle} (in $\cR\cT(M)$ the unique chain recurrence class is the whole manifold $M$) and Corollary~\ref{c.averages} is a direct consequence of Corollary~\ref{c.function}. \begin{proof}[Proof of Proposition~\ref{p.l.h(q)}] The diffeomorphisms $f\in \cR\cT(M)$ have a partially hyperbolic splitting with one-dimensional center $TM=E^{\mathrm{uu}}\oplus E^{\mathrm{c}}\oplus E^{\mathrm{ss}}$, where $E^{\mathrm{uu}}$ is uniformly expanding, $E^{\mathrm{ss}}$ is uniformly contracting, and $\operatorname{dim} (E^{\mathrm{c}})=1$. This implies that there is $j\in \{1,\dots,\operatorname{dim}(M)-2\}$ such that every hyperbolic periodic point $p$ of $f$ has $\mathrm{s}$-index either $j$ or $j+1$, where $j=\operatorname{dim} (E^{\mathrm{ss}})$. We define $\cR\cT_{j}(M)$ as the open subset of $\cR\cT(M)$ consisting of diffeomorphisms whose saddles have $\mathrm{s}$-indices either $j$ or $j+1$.
The next lemma is a consequence of the ergodic closing lemma in \cite{Ma}; for an explicit formulation of this result see \cite[Theorem on page 4]{DPU}. \begin{lemm} The set $\bigcup_{j=1}^{\operatorname{dim} (M)-2} \cR\cT_{j}(M)$ is open and dense in $\cR\cT(M)$. \end{lemm} In view of this lemma the proposition is a consequence of the following result: \begin{lemm}\label{l.l.h(q)} Let $j\in \{1,\dots,\operatorname{dim}(M)-2\}$. There is an open and dense subset $\cI_j(M)$ of $\cR\cT_j(M)$ such that every $f\in \cI_j(M)$ has hyperbolic periodic points $p_f$ and $q_f$ of different indices such that $$ H(p_f,f)=H(q_f,f)=M. $$ \end{lemm} \begin{proof} For the diffeomorphisms $f\in \cR\cT_j(M)$ there are defined the strong stable foliation $\cF^{\mathrm{ss}}_f$ of dimension $j$ and the strong unstable foliation $\cF^{\mathrm{uu}}_f$ of dimension $\operatorname{dim}(M)-j-1$. Recall that $\cF^{\mathrm{ii}}_f$, $\mathrm{i}=\mathrm{s},\,\mathrm{u}$, is the only $f$-invariant foliation of dimension $\operatorname{dim} (E^{\mathrm{ii}})$ tangent to $E^{\mathrm{ii}}$. The foliation $\cF^{\mathrm{ii}}_f$ is \emph{minimal} if every leaf $F^{\mathrm{ii}}_f(x)$ of $\cF^{\mathrm{ii}}_f$ is dense in $M$. The foliation $\cF^{\mathrm{ii}}_f$ is \emph{$C^1$-robustly minimal} if there is a $C^1$-neighbourhood $\cV_f$ of $f$ such that for every $g\in \cV_f$ the foliation $\cF^{\mathrm{ii}}_g$ is minimal. We denote by $\cM^{\mathrm{i}}_j(M)$, $\mathrm{i}=\mathrm{s,u}$, the open subset of $\cR\cT_j(M)$ of diffeomorphisms such that $\cF^{\mathrm{ii}}_f$ is robustly minimal. Let $$ \cM_j(M) \eqdef \cM^{\mathrm{s}}_j(M) \cup \cM^{\mathrm{u}}_j(M). $$ \begin{lemm}[\cite{BDU,RHU}] \label{l.minimal} The set $\cM_j(M)$ is open and dense in $\cR\cT_j(M)$. \end{lemm} We need the following property.
\begin{clai}$\,$ \label{cl.sacocheio} \begin{itemize} \item Let $f\in \cM^{\mathrm{u}}_{j}(M)$. Then $H(q,f)=M$ for every saddle $q$ of $\mathrm{s}$-index $j+1$. \item Let $f\in \cM^{\mathrm{s}}_{j}(M)$. Then $H(q,f)=M$ for every saddle $q$ of $\mathrm{s}$-index $j$. \end{itemize} \end{clai} \begin{proof} We prove the first item; the second one is analogous and thus omitted. Fix any hyperbolic periodic point $q$ of $\mathrm{s}$-index $j+1$. Then the unstable manifold of $q$ is a leaf of $\cF_f^{\mathrm{uu}}$, hence it is dense in $M$. The minimality of $\cF_f^{\mathrm{uu}}$ and the fact that $W^{\mathrm{s}}(q,f)$ contains a disc of dimension $j+1$ transverse to $\cF_f^{\mathrm{uu}}$ imply that there is $K>0$ such that $W^{\mathrm{s}}(q,f)$ intersects transversely every strong unstable disc of radius larger than $K$. Take now any point $x\in M$ and any $\varepsilon>0$. We claim that the ball $B_\varepsilon (x)$ intersects $H(q,f)$; since this holds for any $x\in M$ and $\varepsilon>0$ and $H(q,f)$ is closed, this implies $H(q,f)=M$. The density of $W^{\mathrm{u}}(q,f)$ implies that there is a disc $\Delta\subset W^{\mathrm{u}}(q,f)$ of dimension $\operatorname{dim} (E^{\mathrm{uu}})$ contained in $B_\varepsilon(x)$. Since $Df$ expands the vectors tangent to $E^{\mathrm{uu}}$, there is $n>0$ such that $f^n(\Delta)$ has radius at least $K$. Thus $f^n(\Delta)$ meets transversely $W^{\mathrm{s}}(q,f)$, and hence $\Delta$ contains a point of the homoclinic class of $q$. This implies the claim. \end{proof} By \cite[Theorem E]{BDPR} (see also Remark~\ref{r.bdpr}), in this partially hyperbolic setting with one-dimensional center, there is an open and dense subset $\cP_j(M)$ of $\cR\cT_j(M)$ such that for every $f\in \cP_j(M)$ and every pair of saddles $p_f$ and $q_f$ of $f$ it holds that $H(p_f,f)=H(q_f,f)$.
Note that this claim is only relevant when the saddles have different indices. In view of Claim~\ref{cl.sacocheio}, to prove Lemma~\ref{l.l.h(q)} it is enough to take $$ \cI_j(M)=\cP_j(M)\cap \cM_j(M), $$ which is open and dense in $\cR\cT_j(M)$ (recall Lemma~\ref{l.minimal}). \end{proof} The proof of Proposition~\ref{p.l.h(q)} is now complete. \end{proof} \begin{thebibliography}{ABCDW} \bibitem[ABCDW]{ABCDW} Abdenur, F.; Bonatti, C.; Crovisier, S.; D\'{i}az, L.J.; Wen, L. \emph{Periodic points and homoclinic classes.} Ergodic Theory Dynam.\ Systems 27 (2007), no.\ 1, 1--22. \bibitem[BBS]{BBS} Baladi, V.; Bonatti, C.; Schmitt, B. \emph{Abnormal escape rates from nonuniformly hyperbolic sets.} Ergodic Theory Dynam.\ Systems 19 (1999), no.\ 5, 1111--1125. \bibitem[BBD]{BoBoDi} Bochi, J.; Bonatti, C.; D\'{i}az, L.J. \emph{Robust vanishing of all Lyapunov exponents for iterated function systems.} Math.\ Z.\ 276 (2014), no.\ 1-2, 469--503. \bibitem[BBD2]{BoBoDi2} Bochi, J.; Bonatti, C.; D\'{i}az, L.J. \emph{Robust criterion for the existence of nonhyperbolic ergodic measures.} Comm.\ Math.\ Phys.\ (2016), no.\ 3, 751--795. \bibitem[B]{B} Bonatti, C. \emph{Towards a global view of dynamical systems, for the $C^1$-topology.} Ergodic Theory Dynam.\ Systems 31 (2011), no.\ 4, 959--993. \bibitem[BC]{BC} Bonatti, C.; Crovisier, S. \emph{R\'ecurrence et g\'en\'ericit\'e.} Invent.\ Math.\ 158 (2004), no.\ 1, 33--104. \bibitem[BD1]{BD-robtran} Bonatti, C.; D\'{i}az, L.J. \emph{Persistent nonhyperbolic transitive diffeomorphisms.} Ann.\ of Math.\ 143 (1996), no.\ 2, 357--396. \bibitem[BD2]{BD-cycles} Bonatti, C.; D\'{i}az, L.J. \emph{Robust heterodimensional cycles and $C^1$-generic dynamics.} J.\ Inst.\ Math.\ Jussieu 7 (2008), no.\ 3, 469--525. \bibitem[BDG]{BDG} Bonatti, C.; D\'{i}az, L.J.; Gorodetski, A.
\emph{Non-hyperbolic ergodic measures with large support.} Nonlinearity 23 (2010), no.\ 3, 687--705. \bibitem[BDPR]{BDPR} Bonatti, C.; D\'{i}az, L.J.; Pujals, E.R.; Rocha, J. \emph{Robustly transitive sets and heterodimensional cycles.} Geometric methods in dynamics. I. Ast\'erisque 286 (2003), 187--222. \bibitem[BDU]{BDU} Bonatti, C.; D\'{i}az, L.J.; Ures, R. \emph{Minimality of strong stable and unstable foliations for partially hyperbolic diffeomorphisms.} J.\ Inst.\ Math.\ Jussieu 1 (2002), 513--541. \bibitem[BZ]{BJ} Bonatti, C.; Zhang, J. \emph{On the existence of non-hyperbolic ergodic measures as the limit of periodic measures.} Preprint arXiv:1606.06119. \bibitem[CLR]{CLR} Cao, Y.; Luzzatto, S.; Rios, I. \emph{Some non-hyperbolic systems with strictly non-zero Lyapunov exponents for all invariant measures: horseshoes with internal tangencies.} Discrete Contin.\ Dyn.\ Syst.\ 15 (2006), no.\ 1, 61--71. \bibitem[CCGWY]{CCGWY} Cheng, C.; Crovisier, S.; Gan, S.; Wang, X.; Yang, D. \emph{Hyperbolicity versus non-hyperbolic ergodic measures inside homoclinic classes.} Preprint arXiv:1507.08253. \bibitem[C]{C} Crovisier, S. \emph{Partial hyperbolicity far from homoclinic bifurcations.} Adv.\ Math.\ 226 (2011), 673--726. \bibitem[DG]{DG} D\'{i}az, L.J.; Gorodetski, A. \emph{Non-hyperbolic ergodic measures for non-hyperbolic homoclinic classes.} Ergodic Theory Dynam.\ Systems 29 (2009), no.\ 5, 1479--1513. \bibitem[DPU]{DPU} D\'{i}az, L.J.; Pujals, E.R.; Ures, R. \emph{Partial hyperbolicity and robust transitivity.} Acta Math.\ 183 (1999), no.\ 1, 1--43. \bibitem[G]{G} Gan, S. \emph{A generalized shadowing lemma.} Discrete Contin.\ Dyn.\ Syst.\ 8 (2002), no.\ 3, 627--632. \bibitem[GIKN]{GIKN} Gorodetski, A.; Ilyashenko, Yu.S.; Kleptsyn, V.A.; Nalsky, M.B. \emph{Nonremovability of zero Lyapunov exponents.
}(Russian) Funktsional.\ Anal.\ i Prilozhen.\ 39 (2005), no.\ 1, 27--38; translation in Funct.\ Anal.\ Appl.\ 39 (2005), no.\ 1, 21--30. \bibitem[H]{Ha} Hayashi, S. \emph{Connecting invariant manifolds and the solution of the $C^1$ stability and $\Omega$-stability conjectures for flows.} Ann.\ of Math.\ 145 (1997), no.\ 1, 81--137. \bibitem[KN]{KN} Kleptsyn, V.A.; Nalsky, M.B. \emph{Stability of the existence of nonhyperbolic measures for $C^1$-diffeomorphisms.} Funktsional.\ Anal.\ i Prilozhen.\ 41 (2007), no.\ 4, 30--45; translation in Funct.\ Anal.\ Appl.\ 41 (2007), no.\ 4, 271--283. \bibitem[LOR]{LOR} Leplaideur, R.; Oliveira, K.; Rios, I. \emph{Equilibrium states for partially hyperbolic horseshoes.} Ergodic Theory Dynam.\ Systems 31 (2011), no.\ 1, 179--195. \bibitem[M1]{Mda} Ma\~n\'e, R. \emph{Contributions to the stability conjecture.} Topology 17 (1978), no.\ 4, 383--396. \bibitem[M2]{Ma} Ma\~n\'e, R. \emph{An ergodic closing lemma.} Ann.\ of Math.\ 116 (1982), no.\ 3, 503--540. \bibitem[PW]{PW} Pesin, Y.; Weiss, H. \emph{The multifractal analysis of Gibbs measures: Motivation, mathematical foundation, and examples.} Chaos 7 (1997), no.\ 1, 89--106. \bibitem[PS]{PuSa} Pujals, E.R.; Sambarino, M. \emph{Homoclinic tangencies and hyperbolicity for surface diffeomorphisms.} Ann.\ of Math.\ 151 (2000), no.\ 3, 961--1023. \bibitem[RH$^2$U]{RHU} Rodriguez Hertz, F.; Rodriguez Hertz, M.A.; Ures, R. \emph{Some results on the integrability of the center bundle for partially hyperbolic diffeomorphisms.} Partially hyperbolic dynamics, laminations, and Teichm\"uller flow, 103--109, Fields Inst.\ Commun., 51, Amer.\ Math.\ Soc., Providence, RI, 2007. \bibitem[S]{Sh} Shub, M. \emph{Topologically transitive diffeomorphisms on $T^4$.} Lecture Notes in Math.\ 206 (1971), 39--40. \bibitem[W]{walters} Walters, P.
\emph{An Introduction to Ergodic Theory}, Graduate Texts in Mathematics 79, Springer, 1981. \bibitem[WZ]{WZ} Wang, X.; Zhang, J. \emph{Ergodic measures with multi-zero Lyapunov exponents inside homoclinic classes.} Preprint arXiv:1604.03342. \end{thebibliography} \end{document}
\begin{document} \title{ \sffamily On circumcenter mappings induced by nonexpansive operators} \author{ Heinz H.\ Bauschke\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \href{mailto:[email protected]}{\texttt{[email protected]}}.},~ Hui\ Ouyang\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \href{mailto:[email protected]}{\texttt{[email protected]}}.},~ and Xianfu\ Wang\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \href{mailto:[email protected]}{\texttt{[email protected]}}.} } \date{November 27, 2018} \maketitle \begin{abstract} \noindent We introduce the circumcenter mapping induced by a set of (usually nonexpansive) operators. One prominent example of a circumcenter mapping is the celebrated Douglas--Rachford splitting operator. Our study is motivated by the Circumcentered--Douglas--Rachford method recently introduced by Behling, Bello Cruz, and Santos in order to accelerate the Douglas--Rachford method for solving certain classes of feasibility problems. We systematically explore the properness of the circumcenter mapping induced by reflectors or projectors. Numerous examples are presented. We also present a version of Browder's demiclosedness principle for circumcenter mappings. \end{abstract} {\small \noindent {\bfseries 2010 Mathematics Subject Classification:} {Primary 47H09,47H04; Secondary 41A50, 90C25 } \noindent{\bfseries Keywords:} Browder's demiclosedness principle, circumcenter, circumcenter mapping, nonexpansive, projector, reflector. } \section{Introduction} Throughout this paper, we assume that \begin{empheq}[box = \mybluebox]{equation*} \text{$\mathcal{H}$ is a real Hilbert space} \end{empheq} with inner product $\innp{\cdot,\cdot}$ and induced norm $\|\cdot\|$. Let $m \in \mathbb{N} \smallsetminus \{0\}$, and let $T_{1}, \ldots, T_{m-1}, T_{m}$ be operators from $\mathcal{H}$ to $\mathcal{H}$. 
Set \begin{empheq}{equation*} \mathcal{S}=\{ T_{1}, \ldots, T_{m-1}, T_{m} \}, \end{empheq} and denote the power set of $\mathcal{H}$ by $2^{\mathcal{H}}$. The associated set-valued operator $\mathcal{S}: \mathcal{H} \rightarrow 2^{\mathcal{H}}$ is defined by \begin{empheq}{equation*} (\forall x \in \mathcal{H}) \quad \mathcal{S}(x)=\{ T_{1}x, \ldots, T_{m-1}x, T_{m}x\}. \end{empheq} Unless otherwise specified, we assume that \begin{empheq}[box = \mybluebox]{equation*} U_{1}, \ldots, U_{m}~\text{are closed affine subspaces of}~\mathcal{H}, \text{ with }\bigcap^{m}_{i=1} U_{i} \neq \varnothing. \end{empheq} In this paper, we introduce the circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ induced by $\mathcal{S}$, which maps every element $x \in \mathcal{H}$ to either the empty set or the (unique, if it exists) circumcenter of the finitely many elements in the nonempty set $\mathcal{S}(x)$. In fact, the circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ induced by $\mathcal{S}$ is the composition $\ensuremath{\operatorname{C}}CO{} \circ \mathcal{S}$, where $\ensuremath{\operatorname{C}}CO{}$ is the circumcenter operator defined in \cite{BOyW2018}. The domain of $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is defined to be $\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \{ x \in \mathcal{H} ~|~ \ensuremath{\operatorname{C}}C{\mathcal{S}}x \neq \varnothing\}$. We say the circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is \emph{proper} if $\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \mathcal{H}$. Properness is an important property for algorithms where one wishes to consider sequences of the form $(\ensuremath{\operatorname{C}}C{\mathcal{S}}^{k}x)_{k \in \mathbb{N}}$. \emph{The goal of this paper is to explore conditions sufficient for the circumcenter mapping to be proper.
We also connect the circumcenter mapping to the celebrated demiclosedness principle by Felix Browder.} The CRM (Circumcentered--Reflection Method) operator $C$ recently investigated by Behling, Bello Cruz, and Santos in {\rm \cite[page~159]{BCS2018}} is a particular instance of a proper circumcenter mapping. The C--DRM (Circumcentered--Douglas--Rachford Method) operator $C_{T}$ defined by Behling et al.\ in {\rm \cite[Section~2]{BCS2017}} is the CRM operator associated with only two linear subspaces. Hence $C_{T}$ is a special case of our proper circumcenter mapping as well. Behling et al.\ introduced in \cite{BCS2017} the C--DRM, which generates iterates by taking the intersection of bisectors of reflection steps, to accelerate the Douglas--Rachford method for solving certain classes of feasibility problems. Our paper \cite{BOyW2018} and this paper are motivated by \cite{BCS2017}. The proof of one of our main results, \cref{thm:CCS:proper}, is inspired by {\rm\cite[Lemma~2]{BCS2017}}. We now discuss further results and the organization of this paper. In \cref{sec:Preliminaries}, we collect various results for subsequent use. In particular, facts on the circumcenter operator defined in {\rm \cite[Definition~3.4]{BOyW2018}} are reviewed in Section~\ref{Sec:Subsec:CircOpe}. In \cref{sec:CircumMapping}, we introduce the circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ induced by a set of operators $\mathcal{S}$. Based on some known results on the circumcenter operator, we derive some sufficient conditions for the circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ to be proper. When $\mathcal{S}$ consists of only three operators, we provide a necessary and sufficient condition for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ to be proper. We also obtain conditions sufficient for continuity. Examples illustrating the tightness of our assumptions are provided as well.
\cref{subsec:Demiclosedness} contains the demiclosedness principle for certain circumcenter mappings. In \cref{sec:CircumMappingReflectors}, we consider the circumcenter of finite subsets drawn from the affine hull of compositions of reflectors. Inspired by {\rm \cite[Lemma~2]{BCS2017}}, we prove the properness of a certain class of circumcenter mappings induced by reflectors. We also provide improper examples. Two particular instances of $\ensuremath{\operatorname{C}}C{\mathcal{S}}$, one of which belongs to the class of C--DRM operators from \cite{BCS2017} while the other is new, are considered. Compared to the Douglas--Rachford Method (DRM) and the Method of Alternating Projections (MAP), we find in preliminary numerical explorations that $(\ensuremath{\operatorname{C}}C{\mathcal{S}}^{k}x)_{k \in \mathbb{N}}$ can be used to solve best approximation problems. It is interesting that in general $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is neither continuous nor linear. In \cref{sec:CircumMappingProjectots}, the operators in $\mathcal{S}$ are chosen from the affine hull of the set of compositions of projectors. We provide both proper and improper examples of the corresponding circumcenter mappings. The final \cref{sec:MoreImproper} deals with reflectors and reflected resolvents. Let us turn to notation. Let $K$ and $C$ be subsets of $\mathcal{H}$, $z \in \mathcal{H}$, and $\lambda \in \mathbb{R}$. Then $K+C =\{x +y ~|~ x \in K, y \in C \}$, $K +z = K + \{z\}$, and $\lambda K =\{\lambda x ~|~ x \in K\}$. The cardinality of the set $K$ is denoted by $\ensuremath{\operatorname{card}}(K)$. The intersection of all the linear subspaces of $\mathcal{H}$ containing $K$ is called the \emph{span} of $K$ and is denoted by $\ensuremath{{\operatorname{span}}}~K$.
A nonempty subset $K$ of $\mathcal{H}$ is an \emph{affine subspace} of $\mathcal{H}$ if $(\forall \rho\in\mathbb{R})$ $\rho K + (1-\rho)K = K$; moreover, the smallest affine subspace containing $K$ is the \emph{affine hull} of $K$, denoted $\ensuremath{\operatorname{aff}} K$. Assume that $C$ is a nonempty closed convex subset of $\mathcal{H}$. We denote by $\ensuremath{\operatorname{P}}_{C}$ the \emph{projector} onto $C$. $\ensuremath{\operatorname{R}}_{C} := 2\ensuremath{\operatorname{P}}_{C} -\ensuremath{\operatorname{Id}} $ is the \emph{reflector} associated with $C$. Let $T:\mathcal{H} \rightarrow \mathcal{H}$. The set of \emph{fixed points} of $T$ is $\ensuremath{\operatorname{Fix}} T =\{x \in \mathcal{H} ~|~ x= Tx\}$. Let $(x_{k})_{k \in \mathbb{N}}$ be a sequence in $\mathcal{H}$ and let $x \in \mathcal{H}$. We use $x_{k} \rightharpoonup x$ to indicate that $(x_{k})_{k \in \mathbb{N}}$ converges weakly to $x$. The set $\mathbf{B}[x;r] := \{y \in \mathcal{H} ~|~ \norm{y-x} \leq r\}$ is the closed ball centered at $x$ of radius $r\geq 0$. For other notation not explicitly defined here, we refer the reader to \cite{BC2017}. \section{Auxiliary results} \label{sec:Preliminaries} In this section, we provide various results that will be useful in the sequel. We start with some facts about affine subspaces. \subsection{Affine subspaces and related concepts} \begin{definition} {\rm \cite[page~4]{R1970}} An affine subspace $C$ is said to be \emph{parallel} to an affine subspace $ M $ if $C = M +a $ for some $ a \in \mathcal{H}$. \end{definition} \begin{fact} {\rm \cite[Theorem~1.2]{R1970}} \label{fac:AffinePointLinearSpace} Every affine subspace $C$ is parallel to a unique linear subspace $L$, which is given by \begin{align*} (\forall y \in C) \quad L = C - y = C - C . \end{align*} \end{fact} \begin{definition} {\rm \cite[page~4]{R1970}} The \emph{dimension} of a nonempty affine subspace is defined to be the dimension of the linear subspace parallel to it.
\end{definition} \begin{fact} {\rm \cite[page~7]{R1970}} \label{fac:AffSubsExpre} Let $x_{1}, \ldots, x_{m} \in \mathcal{H}$. Then the affine hull is given by \begin{align*} \ensuremath{\operatorname{aff}}\{x_{1}, \ldots, x_{m}\} =\Big\{\lambda_{1}x_{1}+\cdots +\lambda_{m}x_{m} ~\Big|~\lambda_{1},\ldots,\lambda_{m} \in \mathbb{R} ~\text{and}~\sum^{m}_{i=1} \lambda_{i}=1 \Big\}. \end{align*} \end{fact} \begin{fact} {\rm \cite[Lemma~2.6]{BOyW2018}} \label{fact:AffineHull} Let $x_{1}, \ldots,x_{m} \in \mathcal{H}$, where $m \geq 2$. Then for every $i_{0} \in \{2, \ldots, m\}$, we have \begin{align*} \ensuremath{\operatorname{aff}}\{x_{1}, \ldots, x_{m}\} &~=x_{1} + \ensuremath{{\operatorname{span}}} \{x_{2}-x_{1}, \ldots, x_{m}-x_{1}\}\\ &~=x_{i_{0}}+\ensuremath{{\operatorname{span}}}\{x_{1}-x_{i_{0}},\ldots,x_{i_{0}-1}-x_{i_{0}}, x_{i_{0}+1}-x_{i_{0}},\ldots,x_{m}-x_{i_{0}}\}. \end{align*} \end{fact} \begin{definition} {\rm \cite[page~6]{R1970}} Let $x_{0}, x_{1}, \ldots, x_{m} \in \mathcal{H}$. The $m+1$ vectors $x_{0}, x_{1}, \ldots, x_{m}$ are said to be affinely independent if $\ensuremath{\operatorname{aff}} \{x_{0}, x_{1}, \ldots, x_{m}\}$ is $m$-dimensional. We will also say $(x_{0},x_{1},\ldots, x_{m}) = (x_{i})_{i \in \{0,1,\ldots,m\}}$ is affinely independent. \end{definition} \begin{fact}{\rm \cite[page~7]{R1970}} \label{fac:AffinIndeLineInd} Let $x_{1}, x_{2}, \ldots, x_{m} \in \mathcal{H}$. Then $x_{1}, x_{2}, \ldots,x_{m}$ are affinely independent if and only if $ x_{2}-x_{1}, \ldots, x_{m}-x_{1}$ are linearly independent. \end{fact} \subsection{Projectors and reflectors} Our first result follows easily from the definitions. \begin{lemma} \label{lem:PU:RUIdempotent} Let $C$ be a nonempty closed convex subset of $\mathcal{H}$. Then \begin{enumerate} \item \label{lem:PUIdempotent} $ \ensuremath{\operatorname{P}}_{C} \ensuremath{\operatorname{P}}_{C}=\ensuremath{\operatorname{P}}_{C}$. 
\item \label{lem:RUIdempotent} $\ensuremath{\operatorname{R}}_{C}\ensuremath{\operatorname{R}}_{C} = \ensuremath{\operatorname{Id}}$. \end{enumerate} \end{lemma} \begin{fact} {\rm \cite[Theorem~5.8]{D2012}} \label{MetrProSubs8} Let $C$ be a closed linear subspace of $\mathcal{H}$. Then \begin{enumerate} \item \label{MetrProSubs8:ii} $\ensuremath{\operatorname{Id}} =\ensuremath{\operatorname{P}}_{C}+\ensuremath{\operatorname{P}}_{C^{\perp}}$. \item \label{MetrProSubs8:iv} $C^{\perp}=\{x \in \mathcal{H}~|~ \ensuremath{\operatorname{P}}_{C}(x)=0\}$ and $C=\{x \in \mathcal{H}~|~ \ensuremath{\operatorname{P}}_{C^{\perp}}(x)=0\}=\{x \in \mathcal{H}~|~ \ensuremath{\operatorname{P}}_{C}(x)=x \}$. \end{enumerate} \end{fact} The following result is a mild extension of {\rm \cite[Proposition~1]{BCS2017}}, and it is useful in the proof of \cref{thm:CCS:proper}. \begin{proposition} \label{prop:PR} Let $C$ be a closed affine subspace of $\mathcal{H}$. Then the following hold: \begin{enumerate} \item \label{prop:PR:Affine} The projector $\ensuremath{\operatorname{P}}_{C}$ and the reflector $\ensuremath{\operatorname{R}}_{C}$ are affine operators. \item \label{prop:PR:Characterization} Let $x$ be in $\mathcal{H}$ and let $p$ be in $\mathcal{H}$. Then \begin{align*} p=\ensuremath{\operatorname{P}}_{C} x \Longleftrightarrow p \in C \quad \mbox{and} \quad (\forall v \in C) ~ (\forall w \in C) \quad \innp{x-p,v-w}=0. \end{align*} \item \label{prop:PR:Pythagoras} $(\forall x \in \mathcal{H})$ $(\forall v \in C)$ $\norm{x-\ensuremath{\operatorname{P}}_{C} x }^{2} + \norm{v-\ensuremath{\operatorname{P}}_{C} x}^{2}=\norm{x-v }^{2}$. \item \label{prop:PR:isometry} $(\forall x \in \mathcal{H})$ $(\forall y \in \mathcal{H})$ $\norm{x-y}=\norm{\ensuremath{\operatorname{R}}_{C}x-\ensuremath{\operatorname{R}}_{C}y}$. \item \label{prop:PR:ReflectorEquidist} $(\forall x \in \mathcal{H})$ $(\forall v \in C)$ $\norm{x-v}=\norm{\ensuremath{\operatorname{R}}_{C}x-v}$.
\end{enumerate} \end{proposition} \begin{proof} \cref{prop:PR:Affine}: $\ensuremath{\operatorname{P}}_{C}$ is affine by {\rm \cite[Corollary~3.22(ii)]{BC2017}}; this implies that $\ensuremath{\operatorname{R}}_{C}=2\ensuremath{\operatorname{P}}_{C}- \ensuremath{\operatorname{Id}}$ is affine as well. \cref{prop:PR:Characterization}: {\rm \cite[Corollary~3.22(i)]{BC2017}}. \cref{prop:PR:Pythagoras}: Indeed, for every $x \in \mathcal{H}$ and $v \in C$, \begin{align*} \norm{x-v }^{2}&=\norm{x-\ensuremath{\operatorname{P}}_{C}x-(v-\ensuremath{\operatorname{P}}_{C}x)}^{2}\\ &=\norm{x-\ensuremath{\operatorname{P}}_{C} x }^{2} -2 \innp{x-\ensuremath{\operatorname{P}}_{C}x, v -\ensuremath{\operatorname{P}}_{C}x} + \norm{v - \ensuremath{\operatorname{P}}_{C} x}^{2}\\ &=\norm{x-\ensuremath{\operatorname{P}}_{C} x }^{2} + \norm{v - \ensuremath{\operatorname{P}}_{C} x }^{2}. \quad (\text{by \cref{prop:PR:Characterization}}) \end{align*} \cref{prop:PR:isometry}: For every $x\in \mathcal{H}$, and for every $y \in \mathcal{H}$, by \cref{prop:PR:Characterization}, \begin{align*} & \quad \quad ~~ \innp{\ensuremath{\operatorname{P}}_{C}x-\ensuremath{\operatorname{P}}_{C}y,\ensuremath{\operatorname{P}}_{C}x-x }- \innp{\ensuremath{\operatorname{P}}_{C}x-\ensuremath{\operatorname{P}}_{C}y,\ensuremath{\operatorname{P}}_{C}y-y }=0 \\ & \Longleftrightarrow \innp{\ensuremath{\operatorname{P}}_{C}x-\ensuremath{\operatorname{P}}_{C}y,\ensuremath{\operatorname{P}}_{C}x-\ensuremath{\operatorname{P}}_{C}y -(x-y) }=0 \\ & \Longleftrightarrow \norm{x-y}^{2} = 4 \norm{\ensuremath{\operatorname{P}}_{C}x-\ensuremath{\operatorname{P}}_{C}y }^{2} -4\innp{\ensuremath{\operatorname{P}}_{C}x-\ensuremath{\operatorname{P}}_{C}y,x-y } +\norm{x-y}^{2}\\ & \Longleftrightarrow \norm{x-y}^{2} = \norm{ (2 \ensuremath{\operatorname{P}}_{C}x-x) -(2 \ensuremath{\operatorname{P}}_{C}y -y)}^{2}\\ & \Longleftrightarrow \norm{x-y}=\norm{\ensuremath{\operatorname{R}}_{C}x-\ensuremath{\operatorname{R}}_{C}y}. 
\quad (\text{by}~\ensuremath{\operatorname{R}}_{C}=2\ensuremath{\operatorname{P}}_{C}- \ensuremath{\operatorname{Id}}) \end{align*} \cref{prop:PR:ReflectorEquidist}: Notice that $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{R}}_{C} =C$ and then use \cref{prop:PR:isometry}. \end{proof} \subsection{Circumcenters} \label{Sec:Subsec:CircOpe} In the whole subsection, \begin{empheq}[box=\mybluebox]{equation*} \mathcal{P}(\mathcal{H})~\text{is the set of all nonempty subsets of}~\mathcal{H}~\text{containing \emph{finitely many} elements}. \end{empheq} By {\rm \cite[Proposition~3.3]{BOyW2018}}, we know that for every $K \in \mathcal{P}(\mathcal{H})$, there is at most one point $p \in \ensuremath{\operatorname{aff}} (K) $ such that $\{\norm{p-x} ~|~x \in K \}$ is a singleton. Hence, the following notion is well-defined. \begin{definition}[circumcenter operator] {\rm \cite[Definition~3.4]{BOyW2018}} \label{defn:Circumcenter} The \emph{circumcenter operator} is \begin{empheq}[box=\mybluebox]{equation*} \ensuremath{\operatorname{C}}CO{} \colon \mathcal{P}(\mathcal{H}) \to \mathcal{H} \cup \{ \varnothing \} \colon K \mapsto \begin{cases} p, \quad ~\text{if}~p \in \ensuremath{\operatorname{aff}} (K)~\text{and}~\{\norm{p-x} ~|~x \in K \}~\text{is a singleton};\\ \varnothing, \quad~ \text{otherwise}. \end{cases} \end{empheq} In particular, when $\ensuremath{\operatorname{C}}CO(K) \in \mathcal{H}$, that is, $\ensuremath{\operatorname{C}}CO(K) \neq \varnothing$, we say that the circumcenter of $K$ exists and we call $\ensuremath{\operatorname{C}}CO(K)$ the circumcenter of $K$. \end{definition} \begin{fact}{\rm \cite[Example~3.6]{BOyW2018}} \label{fact:CircForTwoPoints} Let $x_1,x_2$ be in $\mathcal{H}$. Then \begin{align*} \ensuremath{\operatorname{C}}CO{\big(\{x_1,x_2\}\big)}=\frac{x_{1} + x_{2}}{2}. 
\end{align*} \end{fact} \begin{fact} {\rm \cite[Theorem~4.1]{BOyW2018}} \label{fact:unique:LinIndpPformula} Let $K=\{x_{1}, \ldots, x_{m}\} \in \mathcal{P}(\mathcal{H})$, where $x_{1}, \ldots, x_{m}$ are affinely independent. Then $\ensuremath{\operatorname{C}}CO(K) \in \mathcal{H}$, which means that $\ensuremath{\operatorname{C}}CO(K)$ is the unique point satisfying the following two conditions: \begin{enumerate} \item \label{prop:unique:i} $\ensuremath{\operatorname{C}}CO(K) \in \ensuremath{\operatorname{aff}} (K)$, and \item \label{prop:unique:ii} $\{ \norm{\ensuremath{\operatorname{C}}CO(K)-y}~|~y \in K \}$ is a singleton. \end{enumerate} Moreover, \begin{align*} \ensuremath{\operatorname{C}}CO(K) = x_{1}+\frac{1}{2}(x_{2}-x_{1},\ldots,x_{m}-x_{1}) G( x_{2}-x_{1},\ldots,x_{m}-x_{1})^{-1} \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} \\ \vdots\\ \norm{x_{m}-x_{1}}^{2} \\ \end{pmatrix}, \end{align*} where $G( x_{2}-x_{1},\ldots,x_{m-1}-x_{1},x_{m}-x_{1})$ is the \emph{Gram matrix} of $x_{2}-x_{1},\ldots,x_{m-1}-x_{1},x_{m}-x_{1}$, i.e., \begin{align*} &\quad ~ G( x_{2}-x_{1},\ldots, x_{m-1}-x_{1},x_{m}-x_{1})\\ &= \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} &\innp{x_{2}-x_{1},x_{3}-x_{1}} & \cdots & \innp{x_{2}-x_{1}, x_{m}-x_{1}} \\ \vdots & \vdots & ~~& \vdots \\ \innp{x_{m-1}-x_{1},x_{2}-x_{1}} & \innp{x_{m-1}-x_{1}, x_{3}-x_{1}} & \cdots & \innp{x_{m-1}-x_{1},x_{m}-x_{1}} \\ \innp{x_{m}-x_{1},x_{2}-x_{1}} & \innp{x_{m}-x_{1},x_{3}-x_{1}} & \cdots & \norm{x_{m}-x_{1}}^{2} \\ \end{pmatrix}. \end{align*} \end{fact} \begin{fact} {\rm \cite[Theorem~8.1]{BOyW2018}} \label{fact:clform:three} Suppose that $K=\{x,y,z\} \in \mathcal{P}(\mathcal{H})$ and that $\ensuremath{\operatorname{card}} (K) =3$. Then $x, y, z$ are affinely independent if and only if $\ensuremath{\operatorname{C}}CO(K) \in \mathcal{H}$. \end{fact} Combining \cref{fact:CircForTwoPoints} and \cref{fact:clform:three}, we obtain the following two results. 
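The closed form of \cref{fact:unique:LinIndpPformula} is directly computable. The following minimal Python sketch (the helper names \texttt{solve} and \texttt{circumcenter} are ours, purely for illustration, and are not taken from the cited sources) forms the Gram matrix of $x_{2}-x_{1},\ldots,x_{m}-x_{1}$ and evaluates the formula; for two points it reproduces the midpoint formula of \cref{fact:CircForTwoPoints}.

```python
def solve(G, b):
    # Solve the small nonsingular linear system G a = b by Gaussian
    # elimination with partial pivoting (sufficient for a Gram matrix
    # of affinely independent differences).
    n = len(b)
    A = [list(G[i]) + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (A[r][n] - sum(A[r][c] * a[c] for c in range(r + 1, n))) / A[r][r]
    return a


def circumcenter(points):
    # Circumcenter of affinely independent points x_1, ..., x_m via
    #   CC(K) = x_1 + (1/2) D G^{-1} (||x_2-x_1||^2, ..., ||x_m-x_1||^2)^T,
    # where D has columns x_i - x_1 and G = D^T D is their Gram matrix.
    x1, rest = points[0], points[1:]
    diffs = [[xi - x1i for xi, x1i in zip(x, x1)] for x in rest]
    G = [[sum(u * v for u, v in zip(di, dj)) for dj in diffs] for di in diffs]
    rhs = [sum(u * u for u in d) for d in diffs]
    a = solve(G, rhs)
    return [x1j + 0.5 * sum(a[i] * d[j] for i, d in enumerate(diffs))
            for j, x1j in enumerate(x1)]


# Midpoint of two points, and the circumcenter of a right triangle in R^2.
print(circumcenter([(1.0, 3.0), (5.0, 7.0)]))              # [3.0, 5.0]
print(circumcenter([(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]))  # [1.0, 1.0]
```

The returned point is equidistant from all inputs and lies in their affine hull, as required by \cref{defn:Circumcenter}.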
\begin{corollary} \label{cor:x1x2x3Dom} Let $K=\{x_{1}, x_{2}, x_{3}\} \in \mathcal{P}(\mathcal{H})$. Then $\ensuremath{\operatorname{C}}CO(K) \in \mathcal{H}$ if and only if exactly one of the following cases holds. \begin{enumerate} \item $\ensuremath{\operatorname{card}} \{x_{1}, x_{2},x_{3} \} \leq 2$. \item $\ensuremath{\operatorname{card}} \{x_{1}, x_{2},x_{3}\}=3$ and if there is $\{\alpha, \beta\} \subseteq \mathbb{R}$ such that $\alpha (x_{2} -x_{1}) +\beta (x_{3}-x_{1})=0$, then $\alpha =0$ and $\beta =0$. \end{enumerate} \end{corollary} \begin{corollary} \label{cor:mathbbRcard3} Let $a,b,c$ be in $\mathbb{R}$. Then there exists no $x \in \mathbb{R}$ such that $|x-a|=|x-b|=|x -c|$ if and only if $\ensuremath{\operatorname{card}} \{a,b,c\} =3$. \end{corollary} \begin{fact}[scalar multiples] {\rm \cite[Proposition~6.1]{BOyW2018}} \label{fact:CircumHomoge} Let $K \in \mathcal{P}(\mathcal{H})$ and $\lambda \in \mathbb{R} \smallsetminus \{0\}$. Then $\ensuremath{\operatorname{C}}CO(\lambda K)=\lambda \ensuremath{\operatorname{C}}CO(K)$. \end{fact} \begin{fact}[translations] {\rm \cite[Proposition~6.3]{BOyW2018}} \label{fact:CircumSubaddi} Let $K \in \mathcal{P}(\mathcal{H})$ and $y \in \mathcal{H}$. Then $\ensuremath{\operatorname{C}}CO(K+y)=\ensuremath{\operatorname{C}}CO(K)+y$. \end{fact} \begin{fact} {\rm \cite[Lemma~4.2]{BOyW2018}} \label{fact:unique:BasisPformula} Let $K \in \mathcal{P}(\mathcal{H})$ and let $M \subseteq K$ be such that $\ensuremath{\operatorname{aff}}(M)=\ensuremath{\operatorname{aff}}(K)$. Suppose that $\ensuremath{\operatorname{C}}CO(K) \in \mathcal{H}$. Then $\ensuremath{\operatorname{C}}CO(K)=\ensuremath{\operatorname{C}}CO(M)$. \end{fact} \begin{fact} {\rm \cite[Theorem~7.1]{BOyW2018}} \label{fact:CCO:LimitCont} Let $K=\{x_{1}, \ldots, x_{m}\} \in \mathcal{P}(\mathcal{H})$. Suppose that $\ensuremath{\operatorname{C}}CO(K) \in \mathcal{H}$. Then the following hold. 
\begin{enumerate} \item \label{prop:CCO:LimitCont:Linear} Set $t=\dim \big( \ensuremath{{\operatorname{span}}}\{x_{2}-x_{1}, \ldots, x_{m}-x_{1}\} \big)$, and let $\widetilde{K}=\{x_{1}, x_{i_{1}}, \ldots, x_{i_{t}}\} \subseteq K$ be such that $x_{i_{1}} -x_{1}, \ldots, x_{i_{t}}-x_{1}$ is a basis of $\ensuremath{{\operatorname{span}}}\{x_{2}-x_{1}, \ldots, x_{m}-x_{1}\}$. Furthermore, let $\big( (x^{(k)}_{1}, x^{(k)}_{i_{1}}, \ldots, x^{(k)}_{i_{t}}) \big)_{k \geq 1}$ $\subseteq$ $\mathcal{H}^{t+1}$ with $\lim_{k\rightarrow \infty}( x^{(k)}_{1}, x^{(k)}_{i_{1}}, \ldots, x^{(k)}_{i_{t}})=(x_{1}, x_{i_{1}}, \ldots, x_{i_{t}})$, and set $(\forall k \geq 1)$ $\widetilde{K}^{(k)} = \{x^{(k)}_{1}, x^{(k)}_{i_{1}}, \ldots, x^{(k)}_{i_{t}}\}$. Then there exists $N \in \mathbb{N}$ such that for every $k \geq N$, $\ensuremath{\operatorname{C}}CO(\widetilde{K}^{(k)}) \in \mathcal{H}$ and \begin{align*} \lim_{k \rightarrow \infty} \ensuremath{\operatorname{C}}CO(\widetilde{K}^{(k)})= \ensuremath{\operatorname{C}}CO(\widetilde{K})=\ensuremath{\operatorname{C}}CO(K). \end{align*} \item \label{prop:CCO:LimitCont:LinearIndep} Suppose that $ x_{1}, \ldots, x_{m-1}, x_{m}$ are affinely independent, and let $ \big( (x^{(k)}_{1}, \ldots, x^{(k)}_{m-1}, x^{(k)}_{m}) \big)_{k \geq 1}$ $\subseteq$ $\mathcal{H}^{m} $ satisfy $\lim_{k\rightarrow \infty}( x^{(k)}_{1}, \ldots,x^{(k)}_{m-1}, x^{(k)}_{m})=(x_{1}, \ldots, x_{m-1},x_{m})$. Set $(\forall k \geq 1)$ $K^{(k)}=\{x^{(k)}_{1}, \ldots, x^{(k)}_{m-1}, x^{(k)}_{m}\}$. Then \begin{align*} \lim_{k \rightarrow \infty} \ensuremath{\operatorname{C}}CO( K^{(k)} )= \ensuremath{\operatorname{C}}CO( K ). \end{align*} \end{enumerate} \end{fact} \begin{fact} {\rm \cite[Example~7.6]{BOyW2018}} \label{fact:CounterExampleContinuity} Suppose that $\mathcal{H}=\mathbb{R}^{2}$. Let $x_{1}=(-2,0)$ and $x_{2}=x_{3}=(2,0)$. Let $(\forall k \geq 1)$ $\big(x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\big)=\big( (-2,0), (2,0),(2-\frac{1}{k},\frac{1}{4k})\big)$.
Then \begin{align*} (\forall k \geq 1) \quad \ensuremath{\operatorname{C}}CO(\{x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}) = \big(0, -8+\tfrac{2}{k}+\tfrac{1}{8k}\big). \end{align*} \end{fact} \section{Circumcenter mappings induced by operators} \label{sec:CircumMapping} Suppose that $T_{1}, \ldots, T_{m-1}, T_{m}$ are operators from $\mathcal{H}$ to $\mathcal{H}$, with $m \in \mathbb{N} \smallsetminus \{0\}$, and that \begin{empheq}[box=\mybluebox]{equation*} \mathcal{S}=\{ T_{1}, \ldots, T_{m-1}, T_{m} \} \quad \text{and} \quad (\forall x \in \mathcal{H}) \quad \mathcal{S}(x)=\{ T_{1}x, \ldots, T_{m-1}x, T_{m}x\}. \end{empheq} \subsection{Definition} \begin{definition}[induced circumcenter mapping] \label{def:cir:map} The \emph{circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ induced by $\mathcal{S}$} is \begin{empheq}[box=\mybluebox]{equation*} \ensuremath{\operatorname{C}}C{\mathcal{S}} \colon \mathcal{H} \to \mathcal{H} \cup \{ \varnothing \} \colon x \mapsto \ensuremath{\operatorname{C}}CO(\mathcal{S}(x)), \end{empheq} that is, $\ensuremath{\operatorname{C}}C{\mathcal{S}} = \ensuremath{\operatorname{C}}CO{} \circ \mathcal{S}$. The \emph{domain} of $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is \begin{align*} \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \{ x \in \mathcal{H} ~|~ \ensuremath{\operatorname{C}}C{\mathcal{S}}x \neq \varnothing \}. \end{align*} In particular, if $\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \mathcal{H}$, then we say that the circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ induced by $\mathcal{S}$ is \emph{proper}; otherwise, we call $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ \emph{improper}.
\end{definition} \begin{remark} \label{rem:cir:map} By Definitions \ref{def:cir:map} and \ref{defn:Circumcenter}, for every $x \in \mathcal{H}$, if the circumcenter of the set $\mathcal{S}(x)$ defined in \cref{defn:Circumcenter} does not exist in $\mathcal{H}$, then $\ensuremath{\operatorname{C}}C{\mathcal{S}}x= \varnothing $. Otherwise, $\ensuremath{\operatorname{C}}C{\mathcal{S}}x$ is the unique point satisfying the two conditions below: \begin{enumerate} \item $\ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \ensuremath{\operatorname{aff}}(\mathcal{S}(x))=\ensuremath{\operatorname{aff}}\{T_{1}x, \ldots, T_{m-1}x, T_{m}x\}$, and \item $\norm{\ensuremath{\operatorname{C}}C{\mathcal{S}}x -T_{1}x}=\cdots =\norm{\ensuremath{\operatorname{C}}C{\mathcal{S}}x -T_{m-1}x}=\norm{\ensuremath{\operatorname{C}}C{\mathcal{S}}x -T_{m}x}$. \end{enumerate} \end{remark} \subsection{Basic properties} We start with some examples. \begin{proposition} \label{prop:form:m2:Oper} Assume $\mathcal{S}=\{T_{1}, T_{2}\}$. Then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. Moreover, \begin{align*} (\forall x \in \mathcal{H}) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}}x = \frac{T_{1}x +T_{2}x}{2}. \end{align*} \end{proposition} \begin{proof} Clear from \cref{fact:CircForTwoPoints} and \cref{def:cir:map}. \end{proof} \begin{corollary} \label{cor:T1T2T3Dom} Let $\mathcal{S}=\{T_{1}, T_{2}, T_{3} \}$ and let $x \in \mathcal{H}$. Then $x \not \in \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$ if and only if $\ensuremath{\operatorname{card}} \{T_{1}x, T_{2}x, T_{3}x \}=3$ and there exists $(\alpha, \beta) \in \mathbb{R}^{2} \smallsetminus \{(0,0)\}$ such that $\alpha (T_{2}x -T_{1}x) +\beta (T_{3}x -T_{1}x)=0$. \end{corollary} \begin{proof} This follows from \cref{cor:x1x2x3Dom}. \end{proof} \begin{example} Assume that $\mathcal{H}=\mathbb{R}^{2}$. Set $U_{1}=\mathbb{R}\cdot (1,0)$, $U_{2} = \mathbb{R} \cdot (0,1)$, and let $\alpha \in \mathbb{R}$. 
Set $\mathcal{S} = \{\alpha \ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\}$. Then the following hold: \begin{enumerate} \item If $\alpha=0$, then $\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \{(0,0)\}$. \item If $\alpha=1$ or $\alpha=-1$, then $\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}} =\mathbb{R}^{2}$, i.e., $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \item If $\alpha \in \mathbb{R} \smallsetminus \{0, 1, -1\}$, then $\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \big( \mathbb{R}^{2} \smallsetminus (U_{1} \cup U_{2}) \big) \cup \{(0,0)\}$. \end{enumerate} \end{example} \begin{proposition}\label{thm:CW:ExistWellDefined} Suppose that for every $x \in \mathcal{H}$, there exists a point $p(x) \in \mathcal{H}$ such that \begin{enumerate} \item $p(x) \in \ensuremath{\operatorname{aff}} \{T_{1}x, \ldots, T_{m-1}x, T_{m}x\}$, and \item $\norm{p(x)-T_{1}x} =\cdots =\norm{p(x)-T_{m-1}x}=\norm{p(x)-T_{m}x}$. \end{enumerate} Then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper and \begin{align*} (\forall x \in \mathcal{H}) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}}x=p(x). \end{align*} \end{proposition} \begin{proof} This follows from \cref{rem:cir:map}. \end{proof} \begin{proposition}\label{prop:CW:AffineIndepWellDefined} Suppose that for every $x \in \mathcal{H}$, there exists $\ensuremath{\operatorname{I}}(x) \subseteq \ensuremath{\operatorname{I}}:=\{1, \ldots, m\}$ such that $\ensuremath{\operatorname{card}} \big(\ensuremath{\operatorname{I}}(x) \big) = \ensuremath{\operatorname{card}} \big( \mathcal{S}(x) \big)$ and $(T_{i}x)_{i \in \ensuremath{\operatorname{I}}(x) }$ is affinely independent. Then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \end{proposition} \begin{proof} Let $x \in \mathcal{H}$.
Since $\ensuremath{\operatorname{I}}(x) \subseteq \ensuremath{\operatorname{I}}$, we have $\{ T_{i}x \}_{i \in \ensuremath{\operatorname{I}}(x) } \subseteq \mathcal{S}(x)$. The affine independence of $(T_{i}x)_{i \in \ensuremath{\operatorname{I}}(x) }$ yields $\ensuremath{\operatorname{card}} \big( \{ T_{i}x \}_{i \in \ensuremath{\operatorname{I}}(x) } \big) = \ensuremath{\operatorname{card}} \big(\ensuremath{\operatorname{I}}(x) \big)$. Combining with $\ensuremath{\operatorname{card}} \big(\ensuremath{\operatorname{I}}(x) \big) =\ensuremath{\operatorname{card}} \big( \mathcal{S}(x) \big)$, we obtain that $\{ T_{i}x \}_{i \in \ensuremath{\operatorname{I}}(x) } =\mathcal{S}(x)$, which implies that \begin{align} \label{eq:prop:CW:AffineIndepWellDefined} \ensuremath{\operatorname{C}}C{\mathcal{S}}x = \ensuremath{\operatorname{C}}CO{ \big( \mathcal{S}(x) \big)} = \ensuremath{\operatorname{C}}CO{} \big( \{ T_{i}x \}_{i \in \ensuremath{\operatorname{I}}(x) } \big). \end{align} Using the assumption that $(T_{i}x)_{i \in \ensuremath{\operatorname{I}}(x) }$ is affinely independent again, by \cref{fact:unique:LinIndpPformula}, we deduce that $\ensuremath{\operatorname{C}}CO{} \big( \{ T_{i}x \}_{i \in \ensuremath{\operatorname{I}}(x) } \big) \in \mathcal{H} $. Combining with \cref{eq:prop:CW:AffineIndepWellDefined}, we deduce that $(\forall x \in \mathcal{H})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathcal{H}$, i.e., $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \end{proof} The following example illustrates that the converse of \cref{prop:CW:AffineIndepWellDefined} is not true in general. \begin{example} \label{exam:ProjecCCSProper} Let $U$ be a closed linear subspace of $\mathcal{H}$ with $\{0\} \neq U \varsubsetneqq \mathcal{H}$. Denote by $0$ also the zero operator: $(\forall x \in \mathcal{H})$ $0(x)=0$. Set $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{P}}_{U}, \ensuremath{\operatorname{P}}_{U^{\perp}}, 0\}$. 
Then the following hold: \begin{enumerate} \item \label{exam:ProjecCCSProper:0} $(\forall x \in \mathcal{H})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}}x= \frac{x}{2}$; consequently, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \item \label{exam:ProjecCCSProper:ii} $\big(\forall x \in \mathcal{H} \smallsetminus ( U\cup U^{\perp}) \big)$ $x, \ensuremath{\operatorname{P}}_{U}x, \ensuremath{\operatorname{P}}_{U^{\perp}}x, 0 (x)$ are pairwise distinct. \item \label{exam:ProjecCCSProper:iii} $(\forall x \in \mathcal{H})$ $\ensuremath{\operatorname{Id}} x, \ensuremath{\operatorname{P}}_{U}x, \ensuremath{\operatorname{P}}_{U^{\perp}}x, 0( x)$ are affinely dependent. \end{enumerate} \end{example} \begin{proof} \cref{exam:ProjecCCSProper:0}: Let $x \in \mathcal{H}$. By \cref{prop:PR}\cref{prop:PR:Pythagoras} and by $0 \in U$ and $\ensuremath{\operatorname{P}}_{U}x \in U$, we deduce that $\norm{\frac{x}{2}-\ensuremath{\operatorname{P}}_{U}\frac{x}{2} }^{2} +\norm{\ensuremath{\operatorname{P}}_{U}\frac{x}{2}}^{2} =\norm{ \frac{x}{2}}^{2}$ and that $\norm{\frac{x}{2}-\ensuremath{\operatorname{P}}_{U}\frac{x}{2} }^{2} +\norm{\ensuremath{\operatorname{P}}_{U}x-\ensuremath{\operatorname{P}}_{U}\frac{x}{2}}^{2} =\norm{ \frac{x}{2}-\ensuremath{\operatorname{P}}_{U}x}^{2}$. Combining with the linearity of $\ensuremath{\operatorname{P}}_{U}$, we obtain \begin{align} \label{eq:exam:ProjecCCSProper:x} \Norm{ \frac{x}{2}}=\Norm{ \frac{x}{2}-\ensuremath{\operatorname{P}}_{U}x}. \end{align} Similarly, by \cref{prop:PR}\cref{prop:PR:Pythagoras} again, replace $U$ in the above analysis by $U^{\perp}$ to yield that \begin{align} \label{eq:exam:ProjecCCSProper:x2} \Norm{ \frac{x}{2}}=\Norm{ \frac{x}{2}-\ensuremath{\operatorname{P}}_{U^{\perp}}x}. 
\end{align} Combining \cref{eq:exam:ProjecCCSProper:x} with \cref{eq:exam:ProjecCCSProper:x2}, we obtain that \begin{align} \label{eq:exam:ProjecCCSProper:all} \Norm{\frac{x}{2}}=\Norm{\frac{x}{2}-0(x)}=\Norm{\frac{x}{2}-x }=\Norm{ \frac{x}{2}-\ensuremath{\operatorname{P}}_{U}x} =\Norm{ \frac{x}{2}-\ensuremath{\operatorname{P}}_{U^{\perp}}x}. \end{align} Since $\frac{x}{2} = \frac{x}{2} + \frac{0}{2} \in \ensuremath{\operatorname{aff}}\{x, \ensuremath{\operatorname{P}}_{U}x, \ensuremath{\operatorname{P}}_{U^{\perp}}x, 0 (x)\}$, \cref{eq:exam:ProjecCCSProper:all} yields that $(\forall x \in \mathcal{H})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}}x=\frac{x}{2}$. \cref{exam:ProjecCCSProper:ii}: In fact, by \cref{MetrProSubs8}\cref{MetrProSubs8:iv}, \begin{subequations} \label{eq:exam:ProjecCCSProper:subeq} \begin{align} &x = \ensuremath{\operatorname{P}}_{U}x \Longleftrightarrow x \in U;\\ &x = \ensuremath{\operatorname{P}}_{U^{\perp}}x \Longleftrightarrow x \in U^{\perp};\\ &U \cap U^{\perp} =\{0\}. \end{align} \end{subequations} In addition, combining \cref{eq:exam:ProjecCCSProper:subeq} with \cref{MetrProSubs8}\cref{MetrProSubs8:ii}, we know that \begin{align*} \ensuremath{\operatorname{P}}_{U}x= \ensuremath{\operatorname{P}}_{U^{\perp}}x \Longrightarrow \ensuremath{\operatorname{P}}_{U}x= \ensuremath{\operatorname{P}}_{U^{\perp}}x=0 \Longrightarrow x=\ensuremath{\operatorname{P}}_{U}x+ \ensuremath{\operatorname{P}}_{U^{\perp}}x=0 \in U\cup U^{\perp}. \end{align*} Hence, for every $x \in \mathcal{H} \smallsetminus ( U\cup U^{\perp})$, $x, \ensuremath{\operatorname{P}}_{U}x, \ensuremath{\operatorname{P}}_{U^{\perp}}x, 0 (x)$ are pairwise distinct.
\cref{exam:ProjecCCSProper:iii}: Now for every $x \in \mathcal{H}$, \begin{align*} & \quad ~~~ x=\ensuremath{\operatorname{P}}_{U}x+ \ensuremath{\operatorname{P}}_{U^{\perp}}x\\ & \Rightarrow x, \ensuremath{\operatorname{P}}_{U}x, \ensuremath{\operatorname{P}}_{U^{\perp}}x ~\text{are linearly dependent}\\ & \Leftrightarrow x -0, \ensuremath{\operatorname{P}}_{U}x-0, \ensuremath{\operatorname{P}}_{U^{\perp}}x-0 ~\text{are linearly dependent}\\ & \Leftrightarrow 0( x), \ensuremath{\operatorname{Id}} x, \ensuremath{\operatorname{P}}_{U}x, \ensuremath{\operatorname{P}}_{U^{\perp}}x ~\text{are affinely dependent}. \quad (\text{by \cref{fac:AffinIndeLineInd}}) \end{align*} The proof is complete. \end{proof} The following theorem provides a way to verify the properness of $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ when $\mathcal{S}$ contains three operators. \begin{theorem} \label{thm:Proper3} Suppose that $\mathcal{S}=\{T_{1}, T_{2}, T_{3} \}$. Then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper if and only if for every $x \in \mathcal{H}$ with $\ensuremath{\operatorname{card}} \big( \mathcal{S}(x) \big) =3$, the vectors $T_{1}x, T_{2}x, T_{3}x $ are affinely independent. \end{theorem} \begin{proof} By \cref{fact:clform:three}, for every $x \in \mathcal{H}$ with $\ensuremath{\operatorname{card}} \big( \mathcal{S}(x) \big) =3$, \begin{align} \label{eq:thm:Proper3:AffinelyIndep} \ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathcal{H} \Longleftrightarrow T_{1}x, T_{2}x, T_{3}x ~\mbox{are affinely independent.} \end{align} \enquote{$\Longrightarrow$}: It follows directly from \cref{eq:thm:Proper3:AffinelyIndep}. \enquote{$\Longleftarrow$}: Assume that for every $x \in \mathcal{H}$ with $\ensuremath{\operatorname{card}} \big( \mathcal{S}(x)\big)=3$, $T_{1}x, T_{2}x, T_{3}x $ are affinely independent in $\mathcal{H}$. Let $x \in \mathcal{H}$.
If $\ensuremath{\operatorname{card}} \big( \mathcal{S}(x)\big)=3$, then by \cref{eq:thm:Proper3:AffinelyIndep} and the assumption, $\ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathcal{H}$. If $\ensuremath{\operatorname{card}} \big( \mathcal{S}(x)\big) \leq 2$, then by \cref{prop:form:m2:Oper}, $\ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathcal{H}$. Altogether, $(\forall x \in \mathcal{H})$, $\ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathcal{H} $, which means that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \end{proof} \begin{proposition} \label{prop:CW:FixPointSet} Suppose that $\mathcal{S}=\{ T_{1}, \ldots, T_{m-1}, T_{m} \}$. Then the following hold: \begin{enumerate} \item \label{prop:CW:FixPointSet:inclu} $\cap^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i} \subseteq \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$. \item \label{prop:CW:FixPointSet:equa} If $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} \subseteq \cup^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i}$, then $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} =\cap^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i} $. \item \label{prop:Id:FixPointSet:equa} If $T_{1} =\ensuremath{\operatorname{Id}}$, then $\cap^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i} =\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$. \end{enumerate} \end{proposition} \begin{proof} \cref{prop:CW:FixPointSet:inclu}: Let $x \in \cap^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i} $. Then \begin{align} \label{eq:prop:CW:FixPointSet:fx} (\forall i \in \{1, \ldots, m-1,m\}) \quad T_{i}x=x, \end{align} which yields that $\ensuremath{\operatorname{aff}} \{T_{1}x, \ldots, T_{m-1}x,T_{m}x\}=\ensuremath{\operatorname{aff}} \{x\}=\{x\}$. In addition, by \cref{eq:prop:CW:FixPointSet:fx}, \begin{align*} \norm{x-T_{1}x}=\cdots=\norm{x-T_{m-1}x}=\norm{x-T_{m}x}=0.
\end{align*} Therefore, we obtain that $\ensuremath{\operatorname{C}}C{\mathcal{S}}x=x$, which means that $x \in \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$. Hence, $\cap^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i} \subseteq \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$. \cref{prop:CW:FixPointSet:equa}: Let $x \in \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$. By the assumption, there is $i_{0} \in \{1,\ldots,m\}$ such that \begin{align}\label{eq:i0:prop:CW:FixPointSet} x = T_{i_{0}}x \end{align} Now $x \in \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$, i.e., $x=\ensuremath{\operatorname{C}}C{\mathcal{S}}x$, implies that \begin{align} \label{eq:Tx:prop:CW:FixPointSet} \norm{x-T_{1}x}=\cdots=\norm{x-T_{m-1}x}=\norm{x-T_{m}x}. \end{align} Combining \cref{eq:Tx:prop:CW:FixPointSet} with \cref{eq:i0:prop:CW:FixPointSet}, we obtain that \begin{align*} \norm{x-T_{1}x}=\cdots=\norm{x-T_{m-1}x}=\norm{x-T_{m}x}=0, \end{align*} which means that $x \in \cap^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i}$. Hence, $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} \subseteq \cap^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i}$. Combining with \cref{prop:CW:FixPointSet:inclu}, we deduce that $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} =\cap^{m}_{i=1} \ensuremath{\operatorname{Fix}} T_{i} $. \cref{prop:Id:FixPointSet:equa}: If $T_1=\ensuremath{\operatorname{Id}}$, then $\ensuremath{\operatorname{Fix}} T_1 = \mathcal{H}$ and the result follows from \cref{prop:CW:FixPointSet:equa}. \end{proof} \begin{example} Assume that $\mathcal{H} =\mathbb{R}^{2}$. Set $T_{1} = \ensuremath{\operatorname{P}}_{\mathbf{B}[(-2,0);1]}$, $T_{2} = \ensuremath{\operatorname{P}}_{\mathbf{B}[(0,2);1]}$, $T_{3} = \ensuremath{\operatorname{P}}_{\mathbf{B}[(2,0);1]}$, and $\mathcal{S} = \{T_{1}, T_{2},T_{3}\}$. 
Then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. Moreover, $\varnothing = \cap^{3}_{i=1} \ensuremath{\operatorname{Fix}} T_{i} \subsetneqq \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \{(0,0)\} $. \end{example} \begin{proof} The properness of $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ follows from \cref{thm:Proper3}, while the rest is a consequence of elementary manipulations. \end{proof} \subsection{Continuity} \label{subsec:ContinuCircumMapping} \begin{proposition} \label{prop:OperContIndep} Assume that the elements of $\mathcal{S}=\{T_{1}, \ldots, T_{m-1}, T_{m}\}$ are continuous operators and that $x \in \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$. Then the following hold: \begin{enumerate} \item \label{prop:OperContIndep:i} Let $\widetilde{\mathcal{S}}_{x}=\{T_{1}, T_{i_{1}}, \ldots, T_{i_{d_{x}}} \} \subseteq \mathcal{S}$ be such that\footnotemark ~ $T_{i_{1}}x-T_{1}x, \ldots, T_{i_{d_{x}}} x-T_{1}x$ is a basis of $\ensuremath{{\operatorname{span}}}\{T_{2}x-T_{1}x, \ldots, T_{m}x-T_{1}x \}$. Then for every $(x^{(k)})_{k\in \mathbb{N}}\subseteq \mathcal{H}$ satisfying $\lim_{k \rightarrow \infty} x^{(k)}=x$, there exists $N \in \mathbb{N}$ such that for every $k \geq N$, $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}_{x}}(x^{(k)}) \in \mathcal{H}$. Moreover, \begin{align} \label{eq:prop:OperContIndep:first} \lim_{k \rightarrow \infty}\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}_{x}}(x^{(k)})=\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}_{x}}x=\ensuremath{\operatorname{C}}C{\mathcal{S}}x. \end{align} \item \label{prop:OperContIndep:ii} If $T_{1}x, \ldots, T_{m-1}x, T_{m}x$ are affinely independent, then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is continuous at $x$.
\end{enumerate} \end{proposition} \footnotetext{When $\mathcal{S}(x)$ is a singleton, then $\widetilde{\mathcal{S}}_{x}= \{T_{1}\}$ by the standard convention that $ \varnothing $ is a basis of $\{0\}$.} \begin{proof} \cref{prop:OperContIndep:i}: Let $(x^{(k)})_{k\in \mathbb{N}}\subseteq \mathcal{H}$ be such that $\lim_{k \rightarrow \infty} x^{(k)}=x$. Now \begin{align*} &\mathcal{S}= \{ T_{1}, \ldots, T_{m-1}, T_{m}\},\quad \mathcal{S}(x)=\{T_{1}x, \ldots, T_{m-1}x, T_{m}x\},\\ &\widetilde{\mathcal{S}}_{x} = \{ T_{1}, T_{i_{1}}, \ldots, T_{i_{d_{x}}}\}, \quad \widetilde{\mathcal{S}}_{x}(x)=\{T_{1}x, T_{i_{1}}x, \ldots, T_{i_{d_{x}}}x \}, \quad \widetilde{\mathcal{S}}_{x}(x^{(k)})=\{T_{1}x^{(k)}, T_{i_{1}}x^{(k)}, \ldots, T_{i_{d_{x}}}x^{(k)} \}. \end{align*} By \cref{def:cir:map}, $\ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathcal{H}$ means that $\ensuremath{\operatorname{C}}CO(\mathcal{S}(x))\in \mathcal{H}$. By assumption, \begin{align*} T_{i_{1}}x-T_{1}x, \ldots, T_{i_{d_{x}}} x-T_{1}x ~\text{ is a basis of }~\ensuremath{{\operatorname{span}}} \{T_{2}x-T_{1}x, \ldots, T_{m}x-T_{1}x \}. \end{align*} Substituting the $K$, $\widetilde{K}$ and $\widetilde{K}^{(k)}$ in \cref{fact:CCO:LimitCont}\cref{prop:CCO:LimitCont:Linear} by the above $\mathcal{S}(x)$, $\widetilde{\mathcal{S}}_{x}(x)$ and $\widetilde{\mathcal{S}}_{x}(x^{(k)})$ respectively, we obtain the desired results. \cref{prop:OperContIndep:ii}: This follows easily from \cref{prop:OperContIndep:i}. \end{proof} The next result summarizes conditions under which the proper circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is continuous at a point $x$. \begin{proposition} \label{prop:CircumMapContinu:Allcase} Assume that the elements of $\mathcal{S}=\{T_{1}, \ldots, T_{m-1}, T_{m}\}$ are continuous operators and that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. Let $x \in \mathcal{H}$.
The following assertions hold: \begin{enumerate} \item \label{thm:CircumMapContinu:Allcase:affineIndep} If $T_{1}x, \ldots, T_{m-1}x, T_{m}x$ are affinely independent, then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is continuous at $x$. \item \label{thm:CircumMapContinu:Allcase:affineDep} If $T_{1}x, \ldots, T_{m-1}x, T_{m}x$ are affinely dependent and $m\leq 2$, then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is continuous at $x$. \end{enumerate} \end{proposition} \begin{proof} \cref{thm:CircumMapContinu:Allcase:affineIndep} follows from \cref{prop:OperContIndep}\cref{prop:OperContIndep:ii}, while \cref{thm:CircumMapContinu:Allcase:affineDep} is a consequence of \cref{prop:form:m2:Oper}. \end{proof} The following examples show that even when $T_{1}x, \ldots, T_{m-1}x, T_{m}x$ are affinely dependent and $m \geq 3$, the mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ may still be continuous at $x$. \begin{example} \label{exam:ProperCirMapContinuous:UUperp} Suppose that $U$ is a closed linear subspace of $\mathcal{H}$ such that $\{0\}\subsetneqq U \varsubsetneqq \mathcal{H}$. Set $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U}, \ensuremath{\operatorname{R}}_{U^{\perp}}\}$. Then the following hold: \begin{enumerate} \item \label{exam:ProperCirMapContinuous:UUperp:i} The vectors $x,\ensuremath{\operatorname{R}}_{U}x,\ensuremath{\operatorname{R}}_{U^\perp}x$ are affinely dependent for every $x\in U \cup U^\perp$. \item \label{exam:ProperCirMapContinuous:UUperp:ii} $\ensuremath{\operatorname{C}}C{\mathcal{S}}\equiv 0$, which is thus proper and continuous on $\mathcal{H}$.
\end{enumerate} \end{example} \begin{proof} \cref{exam:ProperCirMapContinuous:UUperp:i}: For every $x \in U$ (respectively $x \in U^{\perp}$), $\ensuremath{\operatorname{R}}_{U}x=x $ (respectively $\ensuremath{\operatorname{R}}_{U^{\perp}}x=x$), which implies that $x, \ensuremath{\operatorname{R}}_{U}x, \ensuremath{\operatorname{R}}_{U^{\perp}}x$, that is, $x, x, \ensuremath{\operatorname{R}}_{U^{\perp}}x$ (respectively $x, \ensuremath{\operatorname{R}}_{U}x, x$), are affinely dependent. \cref{exam:ProperCirMapContinuous:UUperp:ii}: Since $\ensuremath{\operatorname{Id}}=\ensuremath{\operatorname{P}}_{U}+\ensuremath{\operatorname{P}}_{U^{\perp}}$ and $\ensuremath{\operatorname{R}}_{U}=2\ensuremath{\operatorname{P}}_{U}-\ensuremath{\operatorname{Id}} $, we have \begin{align*} \frac{\ensuremath{\operatorname{R}}_{U}+\ensuremath{\operatorname{R}}_{U^{\perp}}}{2} =\frac{(2\ensuremath{\operatorname{P}}_{U}- \ensuremath{\operatorname{Id}})+(2\ensuremath{\operatorname{P}}_{U^{\perp}} - \ensuremath{\operatorname{Id}})}{2} =\frac{1}{2}\Big(2\ensuremath{\operatorname{P}}_{U} - \ensuremath{\operatorname{Id}}+2(\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{P}}_{U})-\ensuremath{\operatorname{Id}} \Big)=0. \end{align*} Let $x \in \mathcal{H}$. Then $0 =\frac{\ensuremath{\operatorname{R}}_{U}x+\ensuremath{\operatorname{R}}_{U^{\perp}}x }{2} \in \ensuremath{\operatorname{aff}} \{x ,\ensuremath{\operatorname{R}}_{U}x, \ensuremath{\operatorname{R}}_{U^{\perp}}x\}$. In addition, clearly $0 \in U \cap U^{\perp}$. Applying \cref{prop:PR}\cref{prop:PR:ReflectorEquidist} with $C=U$ and $v=0$, we get $\norm{x}=\norm{\ensuremath{\operatorname{R}}_{U}x}$. Similarly, applying \cref{prop:PR}\cref{prop:PR:ReflectorEquidist} with $C=U^{\perp}$ and $v=0$, we get $\norm{x}=\norm{\ensuremath{\operatorname{R}}_{U^{\perp}}x}$.
Hence, we have \begin{enumerate}[label=(\alph*)] \item $0 \in \ensuremath{\operatorname{aff}} \{x ,\ensuremath{\operatorname{R}}_{U}x, \ensuremath{\operatorname{R}}_{U^{\perp}}x\}$ and \item $\norm{0-x}=\norm{ 0-\ensuremath{\operatorname{R}}_{U}x}=\norm{0-\ensuremath{\operatorname{R}}_{U^{\perp}}x} $, \end{enumerate} which means that $(\forall x \in \mathcal{H})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}}(x)=0$. \end{proof} \begin{example} \label{exam:ProperCirMapContinuous} Assume that $\mathcal{H}=\mathbb{R}^{2}$ and $\mathcal{S}=\{T_{1},T_{2},T_{3}\}$, where for every $(x,y) \in \mathbb{R}^{2}$, \begin{align*} T_{1}(x,y)=(x,y);\quad T_{2}(x,y)=(-x,y);\quad T_{3}(x,y)=\big(x,-\tfrac{1}{4}(x-2)\big). \end{align*} Then \begin{enumerate} \item \label{exam:ProperCirMapContinuous:i} $T_{1}(x,y),T_{2}(x,y),T_{3}(x,y)$ are affinely independent if and only if $2x \big( -\tfrac{1}{4}(x-2)-y\big) \neq 0$; \item \label{exam:ProperCirMapContinuous:ii} $\big(\forall (x,y) \in \mathbb{R}^{2}\big) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}} (x,y)=\big(0, \frac{1}{2}\big(y -\frac{1}{4}(x-2)\big)\big)$. \end{enumerate} Consequently, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper and continuous. \end{example} The following example shows that even if the operators in $\mathcal{S}$ are continuous, we generally have \begin{align*} \ensuremath{\operatorname{C}}C{\mathcal{S}} ~\text{is proper} \nRightarrow \ensuremath{\operatorname{C}}C{\mathcal{S}} ~\text{is continuous}. \end{align*} \begin{example} \label{exam:ProperCirMapDiscontinuous} Assume that $\mathcal{H}=\mathbb{R}^{2}$ and $\mathcal{S}= \{T_{1}, T_{2}, T_{3}\}$, where for every $(x,y) \in \mathbb{R}^{2}$, \begin{align*} T_{1}(x,y)=(2,0);\quad T_{2}(x,y)=(-2,0);\quad T_{3}(x,y)=\big(x,-\tfrac{1}{4}(x-2)\big).
\end{align*} Then \begin{enumerate} \item \label{exam:ProperCirMapDiscontinuous:i} $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper; \item \label{exam:ProperCirMapDiscontinuous:ii} Let $(\forall k \geq 1)$ $(x^{(k)},y^{(k)})=(2-\frac{1}{k},0)$. Then $ \lim_{k \rightarrow \infty} \ensuremath{\operatorname{C}}C{\mathcal{S}}(x^{(k)},y^{(k)}) =(0,-8)\neq (0,0)=\ensuremath{\operatorname{C}}C{\mathcal{S}}(2,0)$. Consequently, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is not continuous at the point $(2,0)$. \end{enumerate} \end{example} \begin{proof} \cref{exam:ProperCirMapDiscontinuous:i}: Let $(x,y) \in \mathbb{R}^{2}$. Now by \cref{fac:AffinIndeLineInd}, \begin{align*} &T_{1}(x,y),T_{2}(x,y),T_{3}(x,y)~\text{are affinely independent}\\ \Longleftrightarrow & T_{2}(x,y)- T_{1}(x,y),T_{3}(x,y)-T_{1}(x,y) ~\text{are linearly independent} \\ \Longleftrightarrow & (-4,0), \big(x-2, -\tfrac{1}{4}(x-2) \big) ~\text{are linearly independent}\\ \Longleftrightarrow & \det(A) \neq 0, ~\text{where}~A= \begin{pmatrix} -4 & x-2 \\ 0 & -\frac{1}{4}(x-2) \\ \end{pmatrix}\\ \Longleftrightarrow & x-2 \neq 0. \end{align*} Hence, by \cref{cor:x1x2x3Dom}, when $x-2 \neq 0$, we have $\ensuremath{\operatorname{C}}C{\mathcal{S}}(x,y)\in \mathcal{H}$. On the other hand, when $x-2=0$, that is, $x=2$, for every $y \in \mathbb{R}$, \begin{align*} T_{1}(2,y)=(2,0), T_{2}(2,y)=(-2,0),T_{3}(2,y)=\big(2,-\tfrac{1}{4}(2-2)\big)=(2,0). \end{align*} By \cref{prop:form:m2:Oper}, we know that $\ensuremath{\operatorname{C}}C{\mathcal{S}}(x,y)=(0,0)\in \mathcal{H}$. Hence, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \cref{exam:ProperCirMapDiscontinuous:ii}: Let $(\overline{x},\overline{y})=(2,0)$, and $(\forall k \geq 1)$ $(x^{(k)},y^{(k)})=(2-\frac{1}{k},0)$. By the analysis in \cref{exam:ProperCirMapDiscontinuous:i} above, we know \begin{align} \label{eq:exam:barxValue} \ensuremath{\operatorname{C}}C{\mathcal{S}}(\overline{x},\overline{y})=(0,0).
\end{align} On the other hand, since \begin{align*} \mathcal{S}(x^{(k)},y^{(k)})=\Big\{ T_{1}(x^{(k)},y^{(k)}), T_{2}(x^{(k)},y^{(k)}),T_{3}(x^{(k)},y^{(k)}) \Big\} =\Big\{ (2,0), (-2,0), \big(2-\tfrac{1}{k},\tfrac{1}{4k}\big) \Big\}, \end{align*} and since, by \cref{def:cir:map}, $ \ensuremath{\operatorname{C}}C{\mathcal{S}}(x^{(k)},y^{(k)})= \ensuremath{\operatorname{C}}CO(\mathcal{S}(x^{(k)},y^{(k)}) ) $, we deduce that, by \cref{fact:CounterExampleContinuity}, \begin{align} \label{eq:exam:xkValue} \ensuremath{\operatorname{C}}C{\mathcal{S}}(x^{(k)},y^{(k)}) =\big(0, -8+\tfrac{2}{k} +\tfrac{1}{8k}\big). \end{align} Hence, \begin{align*} \lim_{k \rightarrow \infty} \ensuremath{\operatorname{C}}C{\mathcal{S}}(x^{(k)},y^{(k)}) =(0,-8) \neq (0,0) \stackrel{\cref{eq:exam:barxValue}}{=} \ensuremath{\operatorname{C}}C{\mathcal{S}}(2,0) \end{align*} and we are done. \end{proof} \subsection{The Demiclosedness Principle for circumcenter mappings} \label{subsec:Demiclosedness} Let $T\colon \ensuremath{{\mathcal{H}}}\to \ensuremath{{\mathcal{H}}}$ be nonexpansive. Then \begin{equation} \label{e:demiclosed} \left. \begin{array}{c} x_k\ensuremath{{\;\operatorname{\rightharpoonup}\;}} x \\ x_k-Tx_k\to 0 \end{array} \right\} \;\; \ensuremath{\operatorname{R}}ightarrow \;\; x\in\ensuremath{\operatorname{Fix}} T. \end{equation} This well known implication (see \cite[Theorem~3(a)]{B1968}) is \emph{Browder's Demiclosedness Principle}; it is a powerful tool in the study of nonexpansive mappings. (Technically speaking, \eqref{e:demiclosed} states that $\ensuremath{\operatorname{Id}}-T$ is demiclosed at $0$, but because a shift of a nonexpansive mapping is still nonexpansive, it is demiclosed everywhere.) For the sake of brevity, we shall simply say that \begin{center} \enquote{the demiclosedness principle holds for $T$} whenever \eqref{e:demiclosed} holds. 
\end{center} Clearly, the demiclosedness principle holds whenever $T$ is weak-to-strong continuous, which is the case when $T$ is continuous and $\ensuremath{{\mathcal{H}}}$ is finite-dimensional. The demiclosedness principle also holds for so-called subgradient projectors; see \cite[Lemma~5.1]{BCW2015} for details. We now obtain a condition sufficient for the circumcenter mapping to satisfy the demiclosedness principle. Throughout, we assume $T_1,\ldots,T_m$ are mappings from $\ensuremath{{\mathcal{H}}}$ to $\ensuremath{{\mathcal{H}}}$. \begin{theorem} \label{thm:Main:Demi} Suppose that the demiclosedness principle holds for each element in $\mathcal{S} = \{T_{1}, T_{2}, \ldots, T_{m}\}$. In addition, assume that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper and that the implication \begin{align} \label{eq:assum:thm:Main:Demi} x_{k} - \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} \rightarrow 0 \;\;\ensuremath{\operatorname{R}}ightarrow\;\; (\forall i \in \{1, \ldots, m\}) ~\ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} -T_{i}x_{k} \rightarrow 0 \end{align} holds. Then the demiclosedness principle holds for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ and $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \bigcap^{m}_{i=1}\ensuremath{\operatorname{Fix}} T_{i}$. \end{theorem} \begin{proof} Let $x_{k} \rightharpoonup x $ and \begin{align} \label{eq:assum:T:thm:Main:Demi} x_{k}- \ensuremath{\operatorname{C}}C{\mathcal{S}} x_{k} \rightarrow 0. \end{align} By \cref{eq:assum:T:thm:Main:Demi} and \cref{eq:assum:thm:Main:Demi}, \begin{align} \label{eq:Ti:thm:Main:Demi} (\forall i \in \{1, \ldots, m\}) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} -T_{i}x_{k} \rightarrow 0. 
\end{align} Hence, \begin{align} (\forall i \in \{1, \ldots, m\}) \quad \norm{x_{k} - T_{i}x_{k} } \leq \norm{x_{k} -\ensuremath{\operatorname{C}}C{\mathcal{S}} x_{k} } + \norm{ \ensuremath{\operatorname{C}}C{\mathcal{S}} x_{k} -T_{i}x_{k}} \rightarrow 0. \end{align} Because the demiclosedness principle holds for each $T_{i}$, we deduce that $x \in \cap^{m}_{i=1}\ensuremath{\operatorname{Fix}} T_{i} \subseteq \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$, where the last inclusion follows from \cref{prop:CW:FixPointSet}\cref{prop:CW:FixPointSet:inclu}. Therefore, $x -\ensuremath{\operatorname{C}}C{\mathcal{S}}x =0$, which shows that the demiclosedness principle holds for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$. To verify the remaining assertion, let $\bar{x} \in \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$ and consider the constant sequence defined by $x_{k} := \bar{x}$ for every $k \in \mathbb{N}$. Since $\bar{x} - \ensuremath{\operatorname{C}}C{\mathcal{S}}\bar{x} =0$, assumption \cref{eq:assum:thm:Main:Demi} yields $(\forall i \in \{1, \ldots, m\})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}}\bar{x} -T_{i}\bar{x}=0$. Combining with $(\forall i \in \{1, \ldots, m\})$ $\norm{\bar{x} - T_{i}\bar{x} } \leq \norm{\bar{x} -\ensuremath{\operatorname{C}}C{\mathcal{S}} \bar{x} } + \norm{ \ensuremath{\operatorname{C}}C{\mathcal{S}} \bar{x} -T_{i}\bar{x}}$, we obtain that $\bar{x} \in \cap^{m}_{i=1}\ensuremath{\operatorname{Fix}} T_{i}$. Hence, $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} \subseteq \cap^{m}_{i=1}\ensuremath{\operatorname{Fix}} T_{i}$. Therefore, the desired result follows from \cref{prop:CW:FixPointSet}\cref{prop:CW:FixPointSet:inclu}. \end{proof} \begin{corollary} \label{cor:Main:Demi:Assum} Suppose that $T_{1} =\ensuremath{\operatorname{Id}}$ and that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper.
Then the implication \begin{align} \label{eq:cor:assum:Main:Demi} x_{k} - \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} \rightarrow 0 \;\;\ensuremath{\operatorname{R}}ightarrow \;\; (\forall i \in \{1, \ldots, m\}) ~\ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} -T_{i}x_{k} \rightarrow 0 \end{align} holds. \end{corollary} \begin{proof} Since $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper, by \cref{rem:cir:map}, $\norm{\ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} -x_{k} } = \norm{\ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} - T_{2}x_{k} }=\cdots = \norm{\ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} - T_{m}x_{k} }$. Hence, if $x_{k} - \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} \rightarrow 0$, then all of these (equal) norms tend to $0$, which shows that \cref{eq:cor:assum:Main:Demi} is true. \end{proof} \begin{proposition} \label{prop:demi:IdId} Suppose that $T_{1} =\ensuremath{\operatorname{Id}}$, that for every $i \in \{2,\ldots,m\}$, the demiclosedness principle holds for $T_{i}$, that $\mathcal{S} = \{T_{1}, T_{2}, \ldots, T_{m}\}$, and that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. Then the demiclosedness principle holds for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ and $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \cap^{m}_{i=1}\ensuremath{\operatorname{Fix}} T_{i}$. \end{proposition} \begin{proof} Combine \cref{thm:Main:Demi} with \cref{cor:Main:Demi:Assum}. \end{proof} We are now ready for the main result of this section. \begin{theorem}[a demiclosedness principle for circumcenter mappings] \label{prop:demi:Nonexpan} Suppose that $T_{1} =\ensuremath{\operatorname{Id}}$, that each operator in $\mathcal{S} = \{T_{1}, T_{2}, \ldots, T_{m}\}$ is nonexpansive, and that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. Then the demiclosedness principle holds for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ and $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \cap^{m}_{i=1}\ensuremath{\operatorname{Fix}} T_{i}$.
\end{theorem} \begin{proof} Combine Browder's Demiclosedness Principle with \cref{prop:demi:IdId}. \end{proof} We now present (omitting its easy proof) another consequence of \cref{prop:demi:IdId}. \begin{corollary} \label{prop:FiniteIdresult} Suppose that $\mathcal{H}$ is finite-dimensional, that $\mathcal{S} = \{T_{1}, T_{2}, \ldots, T_{m}\}$, where $T_{1} = \ensuremath{\operatorname{Id}}$ and $T_j$ is continuous for every $j \in \{2, \ldots, m\}$, and that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. Then the demiclosedness principle holds for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$. In particular, \begin{align} \label{eq:prop:sadresult:assum} \left. \begin{array}{c} x_{k} \to \overline{x}\\ x_{k} - \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} \to 0 \end{array} \right\} \;\; \ensuremath{\operatorname{R}}ightarrow \;\; \overline{x} \in \cap^{m}_{j=1}\ensuremath{\operatorname{Fix}} T_{j} = \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}. \end{align} \end{corollary} We now provide an example where the demiclosedness principle does not hold for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$. \begin{example} \label{exam:DemicloseTRUE3} Suppose that $\mathcal{H} =\mathbb{R}^{2}$. Set $L=\big\{(u,v)\in\ensuremath{{\mathcal{H}}} ~\big|~v=-\frac{1}{4}u+\frac{1}{2}\big\}$. Assume that $\mathcal{S} = \{T_{1}, T_{2}, T_{3}\}$, where \begin{align*} \big(\forall (u,v) \in \ensuremath{{\mathcal{H}}} \big) \quad T_{1}(u,v) =(-2,0),\; T_{2}(u,v) =(2,0) \quad \text{and} \quad T_{3}(u,v) = \ensuremath{\operatorname{P}}_{L}(u,v). \end{align*} Set $\overline{x} =(0,-8)$ and $(\forall k \in \mathbb{N} \smallsetminus \{0\})$ $x_{k}=\big(\tfrac{1}{k},- \tfrac{1}{4k}-8\big)$. Then the following hold. \begin{enumerate} \item \label{exam:DemicloseTRUE3:proper} $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. 
\item \label{exam:DemicloseTRUE3:fix} $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} =\varnothing$. \item \label{exam:DemicloseTRUE3:converg} $\displaystyle \lim_{k \rightarrow \infty}\ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} = \overline{x} =\lim_{k \rightarrow \infty} x_{k} $; consequently, $\displaystyle \lim_{k \rightarrow \infty} (x_{k} - \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k}) =0$. (See also \cref{Demiconti.png}.) \item \label{exam:DemicloseTRUE3:overlinex} $\overline{x} \not\in \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$; consequently, the demiclosedness principle does not hold for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$. \end{enumerate} \end{example} \begin{proof} \cref{exam:DemicloseTRUE3:proper}: Let $x \in \mathcal{H} $. If $ T_{3}x \in \mathbb{R}\cdot(1,0)$, then $T_{3}x=(2,0)$ and so $ \ensuremath{\operatorname{C}}C{\mathcal{S}}x =(0,0)$. Now assume that $ T_{3}x \not\in \mathbb{R}\cdot(1,0)$. Then $T_{1}x, T_{2}x, T_{3}x $ are affinely independent. Hence, by \cref{thm:Proper3}, $ \ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathcal{H}$. Altogether, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \cref{exam:DemicloseTRUE3:fix}: Since $T_{1}x=(-2,0)$ and $T_{2}x=(2,0)$, by definition of circumcenter mapping, \begin{align*} \ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathbb{R} \cdot (0,1), \end{align*} which implies if $x \in \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}}$, then $x \in \mathbb{R} \cdot (0,1)$. Since $T_{3}(0,-8) = \ensuremath{\operatorname{P}}_{L}(0,-8) = (2,0)$, by \cref{prop:form:m2:Oper}, $ \ensuremath{\operatorname{C}}C{\mathcal{S}}(0,-8) =(0,0) \neq (0,-8)$. Hence, $(0,-8) \not\in \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} $. Let $x:=(0,v) \in \big( \mathbb{R} \cdot (0,1) \big)\smallsetminus \{(0,-8)\}$. 
As seen in the proof of \cref{exam:DemicloseTRUE3:proper}, the vectors $T_{1}x, T_{2}x, T_{3}x$ are affinely independent. Hence, by the definition of the circumcenter mapping, in this case \begin{align*} \ensuremath{\operatorname{C}}C{\mathcal{S}}x ~\text{is the intersection of }~\mathbb{R} \cdot (0,1) ~\text{and the perpendicular bisector of the two points}~T_{2}x, T_{3}x. \end{align*} Write $\ensuremath{\operatorname{C}}C{\mathcal{S}}x =: (0,w)$. An easy calculation yields that if $v > -8$, then $w > v$, while if $v < -8$, then $w < v$; in either case $\ensuremath{\operatorname{C}}C{\mathcal{S}}x \neq x$. Altogether, $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} =\varnothing$. \cref{exam:DemicloseTRUE3:converg}: Let $k \in \mathbb{N} \smallsetminus \{0\}$. Since $x_{k}=\big(\tfrac{1}{k},- \tfrac{1}{4k}-8\big)$, by the definition of $T_{3}$, \begin{align*} T_{3}x_{k} = \ensuremath{\operatorname{P}}_{L}x_{k} = \big(2+\tfrac{1}{k}, -\tfrac{1}{4k}\big). \end{align*} Hence \begin{align*} \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} = CC \big\{(-2,0), (2,0),\big(2+\tfrac{1}{k}, -\tfrac{1}{4k}\big) \big\}. \end{align*} A computation analogous to that in \cref{fact:CounterExampleContinuity} gives $\ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} = \big(0, -8-\tfrac{2}{k}-\tfrac{1}{8k}\big)$, so that \begin{align*} \lim_{k \rightarrow \infty} \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} =(0,-8)=\overline{x} =\lim_{k \rightarrow \infty} x_{k}. \end{align*} \cref{exam:DemicloseTRUE3:overlinex}: By \cref{exam:DemicloseTRUE3:fix}, $\overline{x} \not\in \ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} $. Therefore, the demiclosedness principle does not hold for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$. \end{proof} \begin{figure} \caption{Illustration of \cref{exam:DemicloseTRUE3}.} \label{Demiconti.png} \end{figure} \begin{remark} \label{rem:Demiclosedness:holds:discon} Consider \cref{exam:DemicloseTRUE3} where each $T_i$ is a projector and thus firmly nonexpansive but $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} = \varnothing$.
Is it possible to obtain an example where the demiclosedness principle does not hold and yet $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{C}}C{\mathcal{S}} \neq \varnothing$? We do not know the answer to this question. \end{remark} \section{Circumcenter mappings induced by reflectors} \label{sec:CircumMappingReflectors} Recall that $m \in \mathbb{N} \smallsetminus \{0\}$ and that $U_{1}, \ldots, U_{m}$ are closed affine subspaces in the real Hilbert space $\mathcal{H}$ with $\cap^{m}_{i=1} U_{i} \neq \varnothing$. In the whole section, denote \begin{empheq}[box = \mybluebox]{equation} \label{eq:DefinitionOmega} \Omega = \Big\{ \ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}} ~\Big|~ r \in \mathbb{N}, ~\mbox{and}~ i_{1}, i_{2},\ldots, i_{r} \in \{1, \ldots,m\} \Big\}. \end{empheq} By the empty product convention, $\prod^{0}_{j=1}\ensuremath{\operatorname{R}}_{U_{i_{j}}} =\ensuremath{\operatorname{Id}}$. So, when $r=0$ in \cref{eq:DefinitionOmega}, $\ensuremath{\operatorname{Id}} \in \Omega$. Hence, $\Omega$ is the set consisting of the identity operator $\ensuremath{\operatorname{Id}}$ and all finite compositions of the reflectors $\ensuremath{\operatorname{R}}_{U_{1}}, \ldots, \ensuremath{\operatorname{R}}_{U_{m}}$. Throughout this section, we assume that \begin{empheq}[box = \mybluebox]{equation*} \ensuremath{\operatorname{Id}} \in \mathcal{S} \subseteq \ensuremath{\operatorname{aff}} \Omega. \end{empheq} \subsection{Proper circumcenter mappings induced by reflectors} Note that for every $T$ in $\mathcal{S}$, where $ \mathcal{S} \subseteq \Omega$, there exist $r \in \mathbb{N}$ and $i_{1}, \ldots, i_{r} \in \{1, \ldots, m\}$ such that $T=\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}$.
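The key geometric facts behind such compositions of reflectors, used repeatedly in what follows, are that every element of $\Omega$ fixes each point of $\cap^{m}_{i=1} U_{i}$ and preserves distances to such points. As a quick numerical sanity check (a sketch only, not part of the formal development; it assumes NumPy and two concrete linear subspaces of $\mathbb{R}^{3}$ chosen for illustration), one can build the reflectors $\ensuremath{\operatorname{R}}_{U}=2\ensuremath{\operatorname{P}}_{U}-\ensuremath{\operatorname{Id}}$ explicitly and test one word in $\Omega$:

```python
import numpy as np

def proj(A):
    # Orthogonal projector onto ran(A); the columns of A are assumed independent.
    return A @ np.linalg.solve(A.T @ A, A.T)

def reflect(A):
    # Reflector R_U = 2 P_U - Id through the subspace U = ran(A).
    P = proj(A)
    return 2.0 * P - np.eye(P.shape[0])

# Illustrative data: U1 = xy-plane and U2 = yz-plane in R^3; U1 ∩ U2 = y-axis.
U1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # columns span U1
U2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # columns span U2
R1, R2 = reflect(U1), reflect(U2)

u = np.array([0.0, 2.0, 0.0])   # a point of U1 ∩ U2
x = np.array([3.0, -1.0, 2.0])  # an arbitrary point of R^3

T = R2 @ R1 @ R2  # one word in Omega; any finite composition behaves the same
print(np.allclose(T @ u, u))        # True: T fixes the intersection pointwise
print(np.isclose(np.linalg.norm(x - u), np.linalg.norm(T @ x - u)))  # True
```

The two checks mirror the argument of the lemma-style reasoning in this section: each reflector fixes every point of $\cap^{m}_{i=1} U_{i}$ and, one factor at a time, leaves the distance to such a point unchanged.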
Therefore, from now on we assume \begin{empheq}[box = \mybluebox]{equation*} \ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}~\text{is a representative element of the set}~\mathcal{S},~\text{where}~ \ensuremath{\operatorname{Id}} \in \mathcal{S} \subseteq \Omega. \end{empheq} We start with a useful lemma. \begin{lemma} \label{lem:EquDistS} Assume that $\ensuremath{\operatorname{Id}} \in \mathcal{S} \subseteq \Omega$. Let $x \in \mathcal{H}$. Then for every $\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}} \in \mathcal{S}$, \begin{align*} (\forall u \in \cap^{m}_{i=1} U_{i}) \quad \norm{x- u}=\norm{\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-u}. \end{align*} \end{lemma} \begin{proof} Let $u \in \cap^{m}_{i=1} U_{i}$. Because $U_{1}, \ldots ,U_{m}$ are closed affine subspaces and $u \in \cap^{m}_{i=1} U_{i} \subseteq \cap^{r}_{j=1} U_{i_{j}} $, by \cref{prop:PR}\cref{prop:PR:ReflectorEquidist}, we have \begin{align*} \norm{x-u} &= \norm{\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-u} ~~~~~~(\text{by}~ u \in U_{i_{1}})\\ \norm{\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-u} &= \norm{\ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-u} ~~~~~~(\text{by}~ u \in U_{i_{2}})\\ &\cdots \\ \norm{\ensuremath{\operatorname{R}}_{U_{i_{r-1}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-u} &= \norm{\ensuremath{\operatorname{R}}_{U_{i_{r}}} \ensuremath{\operatorname{R}}_{U_{i_{r-1}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-u} ~~~~~~(\text{by}~ u \in U_{i_{r}}), \end{align*} which yield \begin{align*} \norm{x-u}=\norm{\ensuremath{\operatorname{R}}_{U_{i_{r}}}\ensuremath{\operatorname{R}}_{U_{i_{r-1}}}\cdots 
\ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-u}. \end{align*} \end{proof} \begin{proposition} \label{thm:CCS:proper:Lem} Assume that $\ensuremath{\operatorname{Id}} \in \mathcal{S} \subseteq \Omega$. Let $x \in \mathcal{H}$. Then for every $u \in \cap^{m}_{i=1} U_{i}$, \begin{enumerate} \item \label{thm:CCS:proper:belong} $\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))}(u) \in \ensuremath{\operatorname{aff}} (\mathcal{S}(x))$, and \item \label{thm:CCS:proper:EquaDistance} for every $\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}} \in \mathcal{S}$, \begin{align*} \norm{\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))}(u) -x } =\norm{\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))}(u) -\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x}. \end{align*} \end{enumerate} \end{proposition} \begin{proof} \cref{thm:CCS:proper:belong}: Let $u \in \cap^{m}_{i=1} U_{i}$. Because $\ensuremath{\operatorname{aff}} (\mathcal{S}(x))$ is the translate of a finite-dimensional linear subspace, $\ensuremath{\operatorname{aff}} (\mathcal{S}(x))$ is a closed affine subspace. Hence, we know $\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))}(u)$ is well-defined. Clearly, $\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))}(u) \in \ensuremath{\operatorname{aff}} (\mathcal{S}(x))$, i.e., \cref{thm:CCS:proper:belong} is true. \cref{thm:CCS:proper:EquaDistance}: Take an arbitrary but fixed element $\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}$ in $\mathcal{S}$. 
Since $\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}} \in \mathcal{S}$, we know $x, \ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x \in \mathcal{S}(x) \subseteq \ensuremath{\operatorname{aff}} (\mathcal{S}(x))$. Denote $p = \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))}(u)$. Substituting $C=\ensuremath{\operatorname{aff}} (\mathcal{S}(x))$, $x=u$ and $v=x$ in \cref{prop:PR}\cref{prop:PR:Pythagoras}, we deduce \begin{align}\label{thm:eq:1} \norm{u-p}^{2}+\norm{x-p}^{2} &= \norm{u-x}^{2}. \end{align} Similarly, substitute $C=\ensuremath{\operatorname{aff}} (\mathcal{S}(x))$, $x=u$ and $v=\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x$ in \cref{prop:PR}\cref{prop:PR:Pythagoras} to obtain \begin{align} \label{thm:eq:2} \norm{u-p}^{2}+\norm{\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-p}^{2} &= \norm{u-\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x}^{2}. \end{align} On the other hand, by \cref{lem:EquDistS}, we know \begin{align} \label{thm:eq:RRxu} \norm{x-u}=\norm{\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x-u}. \end{align} Combining \cref{thm:eq:RRxu} with \cref{thm:eq:1} and \cref{thm:eq:2}, we obtain \begin{align*} \norm{p-x} =\norm{p-\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}}x}.
\end{align*} Since $\ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}} \in \mathcal{S}$ was arbitrary, \cref{thm:CCS:proper:EquaDistance} holds. \end{proof} Combining \cref{thm:CW:ExistWellDefined} with \cref{thm:CCS:proper:Lem}, we deduce the theorem below, which is one of the main results in this paper. \begin{theorem} \label{thm:CCS:proper} Assume that $\ensuremath{\operatorname{Id}} \in \mathcal{S} \subseteq \Omega$. Then the following hold: \begin{enumerate} \item \label{thm:CCS:proper:prop} The circumcenter mapping $\ensuremath{\operatorname{C}}C{\mathcal{S}} : \mathcal{H} \rightarrow \mathcal{H}$ induced by $\mathcal{S}$ is \emph{proper}, i.e., for every $x \in \mathcal{H}$, $\ensuremath{\operatorname{C}}C{\mathcal{S}}x$ is the unique point satisfying the two conditions below: \begin{enumerate} \item \label{set:CW:i} $\ensuremath{\operatorname{C}}C{\mathcal{S}}x\in \ensuremath{\operatorname{aff}} (\mathcal{S}(x))$, and \item \label{set:CW:ii} $(\forall \ensuremath{\operatorname{R}}_{U_{i_{k}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{1}}} \in \mathcal{S})$ $\norm{\ensuremath{\operatorname{C}}C{\mathcal{S}}x-x} =\norm{\ensuremath{\operatorname{C}}C{\mathcal{S}}x-\ensuremath{\operatorname{R}}_{U_{i_{k}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{1}}}x}$. \end{enumerate} \item \label{thm:CCS:AllU} $(\forall x \in \mathcal{H})$ $(\forall u \in \cap^{m}_{i=1} U_{i})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}}x= \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))}(u)$. \item \label{thm:CCS:PaffU} $(\forall x \in \mathcal{H})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}}x= \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))}(\ensuremath{\operatorname{P}}_{\cap^{m}_{i=1} U_{i}} x)$.
\end{enumerate} \end{theorem} \begin{proof} \cref{thm:CCS:proper:prop} and \cref{thm:CCS:AllU}: The required results follow from \cref{thm:CW:ExistWellDefined} and \cref{thm:CCS:proper:Lem}. \cref{thm:CCS:PaffU}: Since $\ensuremath{\operatorname{P}}_{\cap^{m}_{i=1} U_{i}} x \in \cap^{m}_{i=1} U_{i}$, the desired result comes from \cref{thm:CCS:AllU}. \end{proof} We now list several proper circumcenter mappings induced by reflectors; the properness of some of these mappings is derived from \cref{thm:CCS:proper}. \begin{example} \label{ex:m:m} Assume that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}, \ldots, \ensuremath{\operatorname{R}}_{U_{m}}\}$. By \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \end{example} \begin{example} \label{ex:m:mm} Assume that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{3}}\ensuremath{\operatorname{R}}_{U_{2}}, \ldots, \ensuremath{\operatorname{R}}_{U_{m}}\ensuremath{\operatorname{R}}_{U_{m-1}}, \ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{m}}\}$. By \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \end{example} \begin{example}[Behling et al.\ \cite{BCS2017}] \label{ex:2:3} Assume that $m=2$ and that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} \}$. Then, by \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. 
\end{example} \begin{example}[Behling et al.\ \cite{BCS2018}] \label{ex:m:mm:R} Assume that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}, \ldots, \ensuremath{\operatorname{R}}_{U_{m}}\cdots \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}$. Then, by \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \end{example} \begin{remark} In fact, the C--DRM operator $C_{T}$ defined in {\rm \cite[Section~2]{BCS2017}} is the $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ operator of \cref{ex:2:3} while the CRM operator $C$ defined in {\rm \cite[page~159]{BCS2018}} is the operator $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ from \cref{ex:m:mm:R}. \end{remark} \begin{example} \label{ex:2:2} Assume that $m=2$ and that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} \}$. By \cref{prop:form:m2:Oper}, \begin{align*} \ensuremath{\operatorname{C}}C{\mathcal{S}} =\frac{\ensuremath{\operatorname{Id}}+\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}}{2}, \end{align*} which is the well-known Douglas--Rachford splitting operator. Clearly, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. \end{example} \begin{example} \label{ex:m:2:Proj} Assume that $m=2$ and that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\}$. Then $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. Moreover, \begin{align*} (\forall x \in U_{1}) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}}x=\ensuremath{\operatorname{P}}_{U_{2}}x \quad \text{and} \quad (\forall x \in U_{2}) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}}x=\ensuremath{\operatorname{P}}_{U_{1}}x. 
\end{align*} \end{example} \begin{proof} The first assertion follows from \cref{ex:m:m}. As for the remaining ones, note that \begin{align} \label{eq:ex:m:2:Proj} (\forall x \in U_{1}) \quad \mathcal{S}(x)=\{x, \ensuremath{\operatorname{R}}_{U_{2}}x\} \quad \text{and} \quad (\forall x \in U_{2}) \quad \mathcal{S}(x)=\{x, \ensuremath{\operatorname{R}}_{U_{1}}x\}. \end{align} Combining \cref{eq:ex:m:2:Proj} with \cref{prop:form:m2:Oper}, we obtain that \begin{align*} (\forall x \in U_{1}) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}}x=\frac{x+\ensuremath{\operatorname{R}}_{U_{2}}x}{2}=\ensuremath{\operatorname{P}}_{U_{2}}x \quad \text{and} \quad (\forall x \in U_{2}) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}}x=\frac{x+\ensuremath{\operatorname{R}}_{U_{1}}x}{2}=\ensuremath{\operatorname{P}}_{U_{1}}x. \end{align*} The proof is complete. \end{proof} \begin{example} \label{exam:OurCCS} Assume that $m=2$ and that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} \}$. Let $x \in \mathcal{H}$ and set $l = \ensuremath{\operatorname{card}} \{ x, \ensuremath{\operatorname{R}}_{U_{1}}x,\ensuremath{\operatorname{R}}_{U_{2}}x, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x \}$. Then exactly one of the following cases occurs. \begin{enumerate} \item \label{thm:SymForm:1} $l=1$ and $\ensuremath{\operatorname{C}}C{\mathcal{S}}x =x$. \item \label{thm:SymForm:2} $l=2$, say $\mathcal{S}(x)=\{x_{1},x_{2}\}$, where $x_{1}$ and $x_{2}$ are two distinct elements in $\mathcal{S}(x)$, and $\ensuremath{\operatorname{C}}C{\mathcal{S}}x=\frac{x_{1}+x_{2}}{2}$.
\item \label{thm:SymForm:3} $l=3$, say $\mathcal{S}(x)=\{x_{1},x_{2},x_{3}\}$, where $x_{1}$, $x_{2}$, $x_{3}$ are pairwise distinct elements in $\mathcal{S}(x)$, and \begin{align*} \scalemath{0.9}{\ensuremath{\operatorname{C}}C{\mathcal{S}}x = \frac{\norm{x_{2}-x_{3}}^{2} \innp{x_{1}-x_{3},x_{1}-x_{2}}x_{1}+ \norm{x_{1}-x_{3}}^{2} \innp{x_{2}-x_{3},x_{2}-x_{1}}x_{2}+ \norm{x_{1}-x_{2}}^{2} \innp{x_{3}-x_{1},x_{3}-x_{2}}x_{3} }{2(\norm{x_{2}-x_{1}}^{2}\norm{x_{3}-x_{1}}^{2}- \innp{x_{2}-x_{1}, x_{3}-x_{1}}^{2})}}. \end{align*} \item \label{thm:SymForm:4} $l=4$ and \begin{align*} \ensuremath{\operatorname{C}}C{\mathcal{S}}x= x_{1}+\frac{1}{2}(x_{2}-x_{1},\ldots,x_{t_{x}}-x_{1}) G( x_{2}-x_{1},\ldots,x_{t_{x}}-x_{1})^{-1} \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} \\ \vdots\\ \norm{x_{t_{x}}-x_{1}}^{2} \\ \end{pmatrix}, \end{align*} where $\{x_{1},x_{2},\ldots,x_{t_{x}}\}=\mathcal{S}(x)$, and $x_{1},x_{2},\ldots,x_{t_{x}}$ are affinely independent and \begin{align*} &\quad \quad G( x_{2}-x_{1},\ldots, x_{t_{x}-1}-x_{1}, x_{t_{x}}-x_{1})\\ &= \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} &\innp{x_{2}-x_{1},x_{3}-x_{1}} & \cdots & \innp{x_{2}-x_{1}, x_{t_{x}}-x_{1}} \\ \vdots & \vdots & ~~& \vdots \\ \innp{x_{t_{x}-1}-x_{1},x_{2}-x_{1}} & \innp{x_{t_{x}-1}-x_{1}, x_{3}-x_{1}} & \cdots & \innp{x_{t_{x}-1}-x_{1},x_{t_{x}}-x_{1}} \\ \innp{x_{t_{x}}-x_{1},x_{2}-x_{1}} & \innp{x_{t_{x}}-x_{1},x_{3}-x_{1}} & \cdots & \norm{x_{t_{x}}-x_{1}}^{2} \\ \end{pmatrix}. \end{align*} \end{enumerate} \end{example} \begin{proof} By \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}, $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper. The rest follows from \cref{fact:CircForTwoPoints} and \cref{fact:unique:LinIndpPformula}. \end{proof} We now turn to the properness of $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ when $\ensuremath{\operatorname{Id}} \in \widetilde{\mathcal{S}} \subseteq \ensuremath{\operatorname{aff}} \Omega$. \begin{proposition} \label{prop:proper:affRU} Let $\alpha \in \mathbb{R}$.
Assume that
\begin{align}
\label{eq:prop:proper:affRU:assu}
\widetilde{\mathcal{S}} = \{\ensuremath{\operatorname{Id}}, (1-\alpha) \ensuremath{\operatorname{Id}} + \alpha \ensuremath{\operatorname{R}}_{U_{1}}, \ldots, (1-\alpha) \ensuremath{\operatorname{Id}} + \alpha \ensuremath{\operatorname{R}}_{U_{m}} \},
\end{align}
and that
\begin{align}
\label{eq:prop:proper:affRU:S}
\mathcal{S} = \{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ldots, \ensuremath{\operatorname{R}}_{U_{m}}\}.
\end{align}
Then $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ is proper. Moreover,
\begin{align}
\label{eq:prop:proper:affRU:resul}
(\forall x \in \mathcal{H}) \quad \ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}x =\alpha \ensuremath{\operatorname{C}}C{\mathcal{S}}x + (1 - \alpha )x \in \mathcal{H}.
\end{align}
\end{proposition}
\begin{proof}
If $\alpha=0$, then $\widetilde{\mathcal{S}} = \{\ensuremath{\operatorname{Id}} \}$ and, by \cref{def:cir:map},
\begin{align*}
(\forall x \in \mathcal{H}) \quad \ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}x =x =0 \ensuremath{\operatorname{C}}C{\mathcal{S}}x +(1-0)x \in \mathcal{H}.
\end{align*}
Now assume $\alpha \neq 0$. Let $x \in \mathcal{H}$.
Then
\begin{align*}
\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}x & = \ensuremath{\operatorname{C}}CO ( \widetilde{\mathcal{S} }(x)) \quad (\text{by \cref{def:cir:map}})\\
& = \ensuremath{\operatorname{C}}CO { \Big ( \big \{ x, (1-\alpha)x + \alpha \ensuremath{\operatorname{R}}_{U_{1}} x, \ldots, (1-\alpha) x + \alpha \ensuremath{\operatorname{R}}_{U_{m}} x \big\} \Big )} \quad (\text{by \cref{eq:prop:proper:affRU:assu}})\\
& = \ensuremath{\operatorname{C}}CO {\Big ( \big \{ 0, \alpha (\ensuremath{\operatorname{R}}_{U_{1}} x-x), \ldots, \alpha (\ensuremath{\operatorname{R}}_{U_{m}} x-x) \big\} +x \Big )} \\
& = \ensuremath{\operatorname{C}}CO {\Big ( \big \{ 0, \alpha (\ensuremath{\operatorname{R}}_{U_{1}} x-x), \ldots, \alpha (\ensuremath{\operatorname{R}}_{U_{m}} x-x) \big\} \Big )}+x \quad (\text{by \cref{fact:CircumSubaddi}})\\
& = \alpha \ensuremath{\operatorname{C}}CO {\Big ( \big \{ 0, \ensuremath{\operatorname{R}}_{U_{1}} x-x, \ldots, \ensuremath{\operatorname{R}}_{U_{m}} x-x \big\} \Big )}+x \quad (\text{by \cref{fact:CircumHomoge} and } \alpha \neq 0)\\
& = \alpha \ensuremath{\operatorname{C}}CO {\Big ( \big \{ x, \ensuremath{\operatorname{R}}_{U_{1}} x, \ldots, \ensuremath{\operatorname{R}}_{U_{m}} x \big\} -x \Big )}+x\\
& = \alpha \ensuremath{\operatorname{C}}CO {\Big ( \big \{ x, \ensuremath{\operatorname{R}}_{U_{1}} x, \ldots, \ensuremath{\operatorname{R}}_{U_{m}} x \big\} \Big )}- \alpha x +x \quad (\text{by \cref{fact:CircumSubaddi}}) \\
& = \alpha \ensuremath{\operatorname{C}}CO {\big(\mathcal{S}(x) \big)} +(1- \alpha)x \quad (\text{by \cref{eq:prop:proper:affRU:S}}) \\
& = \alpha \ensuremath{\operatorname{C}}C{\mathcal{S}}x + (1 - \alpha )x \in \mathcal{H}. \quad (\text{by \cref{def:cir:map} and \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}})
\end{align*}
The proof is complete.
\end{proof} \begin{proposition} \label{prop:TT:CWhat} Assume that $\mathcal{S}= \{ \ensuremath{\operatorname{Id}} , \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} \}$, set $T=\frac{\ensuremath{\operatorname{Id}} + \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}}{2}$, which is the Douglas--Rachford splitting operator, and set $\widetilde{\mathcal{S}} =\{\ensuremath{\operatorname{Id}}, T, T^{2}\}$. Then the following hold: \begin{enumerate} \item \label{prop:TT:CWhat:AffS} $\ensuremath{\operatorname{aff}} \{\ensuremath{\operatorname{Id}}, T, T^{2}\} =\ensuremath{\operatorname{aff}} \mathcal{S}$. \item \label{prop:TT:CWhat:Prop} $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}} $ is proper. \end{enumerate} \end{proposition} \begin{proof} \cref{prop:TT:CWhat:AffS}: By \cref{fact:AffineHull}, \begin{align} & \ensuremath{\operatorname{aff}} \{ \ensuremath{\operatorname{Id}}, T, T^{2}\} =\ensuremath{\operatorname{aff}} (\mathcal{S}) \nonumber \\ \Longleftrightarrow & \ensuremath{\operatorname{Id}} + \ensuremath{{\operatorname{span}}}\{T-\ensuremath{\operatorname{Id}}, T^{2}-\ensuremath{\operatorname{Id}} \} =\ensuremath{\operatorname{Id}} + \ensuremath{{\operatorname{span}}}\{\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}- \ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} - \ensuremath{\operatorname{Id}}\}. 
\label{eq:prop:TT:CWhat:affspn} \end{align} On the other hand, \begin{align} \label{eq:TT:CWhat:1} T-\ensuremath{\operatorname{Id}} = \frac{\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}+\ensuremath{\operatorname{Id}}}{2}-\ensuremath{\operatorname{Id}} =\frac{\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}-\ensuremath{\operatorname{Id}}}{2}, \end{align} and \begin{align} \label{eq:TT:CWhat:2} T^{2} - \ensuremath{\operatorname{Id}} & = T^{2}- T +T -\ensuremath{\operatorname{Id}}= (T-\ensuremath{\operatorname{Id}})T + (T-\ensuremath{\operatorname{Id}}) \nonumber \\ & = \frac{\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}-\ensuremath{\operatorname{Id}}}{2} \Big(\frac{\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}+\ensuremath{\operatorname{Id}}}{2} \Big)+\frac{\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} - \ensuremath{\operatorname{Id}}}{2}\nonumber \\ & = \frac{1}{4}(\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}- \ensuremath{\operatorname{Id}})+\frac{1}{2}(\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}- \ensuremath{\operatorname{Id}}), \end{align} which result in \begin{align} \label{eq:TwoVectNonsingMatrix} \begin{pmatrix} T - \ensuremath{\operatorname{Id}} &T^{2}-\ensuremath{\operatorname{Id}} \end{pmatrix} = \begin{pmatrix} \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} - \ensuremath{\operatorname{Id}} &\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}-\ensuremath{\operatorname{Id}} \end{pmatrix}\begin{pmatrix} \frac{1}{2} &\frac{1}{2} \\ 0 & \frac{1}{4} \end{pmatrix}. \end{align} Set $A= \begin{pmatrix} \frac{1}{2} &\frac{1}{2} \\ 0 & \frac{1}{4} \end{pmatrix}$. 
Since $\det(A) =\frac{1}{8} \neq 0$, \cref{eq:TwoVectNonsingMatrix} yields
\begin{align}
\label{eq:prop:TT:CWhat:spn}
\ensuremath{{\operatorname{span}}}\{T-\ensuremath{\operatorname{Id}}, T^{2}-\ensuremath{\operatorname{Id}} \} = \ensuremath{{\operatorname{span}}}\{\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}- \ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} - \ensuremath{\operatorname{Id}}\}.
\end{align}
Altogether, \cref{eq:prop:TT:CWhat:spn} and \cref{eq:prop:TT:CWhat:affspn} show that \cref{prop:TT:CWhat:AffS} is true.
\cref{prop:TT:CWhat:Prop}: If $x, Tx, T^{2}x$ are affinely independent, then by \cref{fact:unique:LinIndpPformula}, $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}x \in \mathcal{H}$. Suppose $x, Tx, T^{2}x$ are affinely dependent. In this case, by \cref{eq:TwoVectNonsingMatrix} and $\det (A) \neq 0$, the points $x, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x$ are affinely dependent. Applying \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}, we know $\ensuremath{\operatorname{C}}C{\mathcal{S}}x \in \mathcal{H}$. Hence, \cref{fact:clform:three} yields that
\begin{align}
\label{eq:prop:TT:CWhat:cardSx}
\ensuremath{\operatorname{card}} \Big(\{ x, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x \} \Big) = \ensuremath{\operatorname{card}} \big(\mathcal{S}(x) \big) \leq 2.
\end{align}
If $Tx-x=0$, then by \cref{prop:form:m2:Oper}, $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}} x=\frac{x+T^{2}x}{2}$.
Now suppose $Tx-x \neq 0$. By \cref{eq:TT:CWhat:1}, $\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x \neq x$. Therefore, by \cref{eq:prop:TT:CWhat:cardSx} and $\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x \neq x$, either $\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x=\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x$ or $\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x=x$. Suppose first that $\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x=\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x$. Multiplying both sides by $\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}$ and using \cref{lem:PU:RUIdempotent}\cref{lem:RUIdempotent}, we deduce $\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x=x$, which contradicts $\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x \neq x$. Now suppose $\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}x = x$. Then \cref{eq:TT:CWhat:1} and \cref{eq:TT:CWhat:2} imply $Tx=T^{2}x$, and by \cref{prop:form:m2:Oper}, we obtain $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}x=\frac{x+Tx}{2} \in \mathcal{H}$. In conclusion, $(\forall x \in \mathcal{H})$ $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}x \in \mathcal{H}$, which means \cref{prop:TT:CWhat:Prop} holds.
\end{proof}

\subsection{Improper circumcenter mappings induced by reflectors}
\label{subsec:ImpcircuMappingRefle}
Propositions \ref{prop:proper:affRU} and \ref{prop:TT:CWhat} naturally prompt the following question: Is $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ proper for every $\widetilde{\mathcal{S}}$ with $\ensuremath{\operatorname{Id}} \in \widetilde{\mathcal{S}} \subseteq \ensuremath{\operatorname{aff}} \Omega$? The following examples provide negative answers.
\begin{example}
\label{exam:IMp:affRU}
Assume that $m=2$, that $ U:=U_{1} = U_{2} \subsetneqq \mathcal{H}$, and that $\{ \alpha_{1}, \alpha_{2} \} \subseteq \mathbb{R}$. Assume further that $\widetilde{\mathcal{S}} =\{\ensuremath{\operatorname{Id}}, (1-\alpha_{1})\ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{R}}_{U}, (1- \alpha_{2})\ensuremath{\operatorname{Id}} + \alpha_{2} \ensuremath{\operatorname{R}}_{U}\}$. Then $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ is improper if and only if $\alpha_{1} \neq 0, \alpha_{2} \neq 0$ and $\alpha_{2} \neq \alpha_{1}$.
\end{example}
\begin{proof}
By \cref{prop:form:m2:Oper}, if $\alpha_{1} =0$ or $\alpha_{2} =0$, then $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ is proper. For every $x \in \mathcal{H}$, if $\alpha_{1} \neq 0$,
\begin{align*}
\big( (1-\alpha_{2})x + \alpha_{2} \ensuremath{\operatorname{R}}_{U}x \big )-x = \alpha_{2} (\ensuremath{\operatorname{R}}_{U}x -x) =\frac{\alpha_{2}}{\alpha_{1}} \alpha_{1} (\ensuremath{\operatorname{R}}_{U}x -x) = \frac{\alpha_{2}}{\alpha_{1}} \Big( \big( (1-\alpha_{1})x + \alpha_{1} \ensuremath{\operatorname{R}}_{U}x \big )-x \Big),
\end{align*}
which implies that, by \cref{fac:AffinIndeLineInd},
\begin{align}
\label{eq:exam:IMp:affRU:affind}
x, (1-\alpha_{1})x + \alpha_{1} \ensuremath{\operatorname{R}}_{U}x, (1- \alpha_{2})x + \alpha_{2} \ensuremath{\operatorname{R}}_{U}x ~\text{ are affinely dependent}.
\end{align}
On the other hand, if $x \in \mathcal{H} \smallsetminus U$, then since $\ensuremath{\operatorname{Fix}} \ensuremath{\operatorname{R}}_{U} =U$, we obtain that
\begin{subequations}
\label{eq:exam:IMp:affRU:card}
\begin{align}
& \quad \quad \ensuremath{\operatorname{card}} \big \{x, (1-\alpha_{1})x + \alpha_{1} \ensuremath{\operatorname{R}}_{U}x, (1- \alpha_{2})x + \alpha_{2} \ensuremath{\operatorname{R}}_{U}x \big\} =3 \\
& \Longleftrightarrow \alpha_{1} \neq 0, \alpha_{2} \neq 0 ~~\text{and}~~\alpha_{2} \neq \alpha_{1}.
\end{align}
\end{subequations}
Combining \cref{cor:T1T2T3Dom} with \cref{eq:exam:IMp:affRU:affind} and \cref{eq:exam:IMp:affRU:card}, we deduce the required result.
\end{proof}
\begin{example}
\label{exam::IMp:affRURU}
Assume that $m=2$, that $ U:=U_{1} = U_{2} \subsetneqq \mathcal{H}$, and that $\{\alpha_{1}, \alpha_{2} \} \subseteq \mathbb{R}$. Assume further that $\widetilde{\mathcal{S}} = \{ \ensuremath{\operatorname{Id}}, (1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U}, \big( (1-\alpha_{2})\ensuremath{\operatorname{Id}} +\alpha_{2} \ensuremath{\operatorname{R}}_{U} \big) \circ \big((1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U} \big)\}$. Then $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ is improper if and only if $\alpha_{1} \neq 0, \alpha_{1} \neq \frac{1}{2}, \alpha_{2} \neq 0$ and $\alpha_{2} \neq \frac{\alpha_{1}}{2 \alpha_{1}-1}$.
\end{example}
\begin{proof}
By \cref{prop:form:m2:Oper}, if $\alpha_{1} =0$ or $\alpha_{2} =0$, then $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ is proper.
Note that \begin{subequations} \label{eq:exam::IMp:affRURU:thirditem} \begin{align} & \quad~ \big( (1-\alpha_{2})\ensuremath{\operatorname{Id}} +\alpha_{2} \ensuremath{\operatorname{R}}_{U} \big) \circ \big((1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U} \big) \\ & =(1- \alpha_{1}-\alpha_{2}+\alpha_{2}\alpha_{1} )\ensuremath{\operatorname{Id}} + (\alpha_{1} -\alpha_{2}\alpha_{1})\ensuremath{\operatorname{R}}_{U} +(\alpha_{2} -\alpha_{2}\alpha_{1})\ensuremath{\operatorname{R}}_{U} +\alpha_{2}\alpha_{1}\ensuremath{\operatorname{R}}_{U}\ensuremath{\operatorname{R}}_{U} \\ & =(1- \alpha_{1}-\alpha_{2}+2 \alpha_{2}\alpha_{1} ) \ensuremath{\operatorname{Id}} + (\alpha_{1}+\alpha_{2}-2 \alpha_{2}\alpha_{1}) \ensuremath{\operatorname{R}}_{U} \quad (\text{by \cref{lem:PU:RUIdempotent}\cref{lem:RUIdempotent}}) \end{align} \end{subequations} For every $x \in \mathcal{H}$, if $\alpha_{1} \neq 0$, \begin{align*} & \quad~\Big( \big( (1-\alpha_{2})\ensuremath{\operatorname{Id}} +\alpha_{2} \ensuremath{\operatorname{R}}_{U} \big) \circ \big((1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U} \big)x \Big) -x \\ & = (\alpha_{1}+\alpha_{2}-2 \alpha_{2}\alpha_{1}) ( \ensuremath{\operatorname{R}}_{U}x-x ) \quad (\text{by \cref{eq:exam::IMp:affRURU:thirditem}}) \\ & = \frac{\alpha_{1}+\alpha_{2}-2 \alpha_{2}\alpha_{1}}{ \alpha_{1}} \alpha_{1} ( \ensuremath{\operatorname{R}}_{U}x-x ) \\ & = \frac{\alpha_{1}+\alpha_{2}-2 \alpha_{2}\alpha_{1}}{ \alpha_{1}} \Big( \big( (1-\alpha_{1}) x+\alpha_{1}\ensuremath{\operatorname{R}}_{U}x \big) - x \Big), \end{align*} which implies, by \cref{fac:AffinIndeLineInd}, that \begin{align} \label{eq:exam::IMp:affRURU:AffiInd} x, (1-\alpha_{1}) x +\alpha_{1}\ensuremath{\operatorname{R}}_{U}x, \big( (1-\alpha_{2})\ensuremath{\operatorname{Id}} +\alpha_{2} \ensuremath{\operatorname{R}}_{U} \big) \circ \big((1-\alpha_{1}) \ensuremath{\operatorname{Id}} 
+\alpha_{1}\ensuremath{\operatorname{R}}_{U} \big)x \text{ are affinely dependent.} \end{align} On the other hand, assume now $x \in \mathcal{H} \smallsetminus U$. Then \begin{subequations} \label{eq:exam::IMp:affRURU:card} \begin{align} & \quad \quad \ensuremath{\operatorname{card}} \Big\{x, (1-\alpha_{1}) x +\alpha_{1}\ensuremath{\operatorname{R}}_{U}x, \big( (1-\alpha_{2})\ensuremath{\operatorname{Id}} +\alpha_{2} \ensuremath{\operatorname{R}}_{U} \big) \circ \big((1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U} \big)x \Big\} =3 \\ & \Longleftrightarrow \alpha_{1} \neq 0, \alpha_{1} \neq \frac{1}{2}, \alpha_{2} \neq 0 ~~\text{and}~~\alpha_{2} \neq \frac{\alpha_{1}}{2 \alpha_{1}-1}. \end{align} \end{subequations} Combining \cref{cor:T1T2T3Dom} with \cref{eq:exam::IMp:affRURU:AffiInd} and \cref{eq:exam::IMp:affRURU:card}, we infer the desired result. \end{proof} The following example is a special case of \cref{exam::IMp:affRURU}. \begin{example} \label{exam:Count:prop:TT:CWhat} Assume that $m=2$, that $U_{1} \varsubsetneqq U_{2} = \mathcal{H}$, and that $\{\alpha_{1}, \alpha_{2} \} \subseteq \mathbb{R}$. Assume further that $\widetilde{\mathcal{S}} = \{ \ensuremath{\operatorname{Id}}, (1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}, \big( (1-\alpha_{2})\ensuremath{\operatorname{Id}} +\alpha_{2} \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} \big) \circ \big((1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}} \big)\}$. Then $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ is improper if and only if $\alpha_{1} \neq 0, \alpha_{1} \neq \frac{1}{2}, \alpha_{2} \neq 0$ and $\alpha_{2} \neq \frac{\alpha_{1}}{2 \alpha_{1}-1}$. 
\end{example}
\begin{proof}
Since $\ensuremath{\operatorname{R}}_{U_{2}} = \ensuremath{\operatorname{R}}_{\mathcal{H}} =\ensuremath{\operatorname{Id}}$, we deduce that
\begin{align*}
\widetilde{\mathcal{S}} = \{ \ensuremath{\operatorname{Id}}, (1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U_{1}}, \big( (1-\alpha_{2})\ensuremath{\operatorname{Id}} +\alpha_{2} \ensuremath{\operatorname{R}}_{U_{1}} \big) \circ \big((1-\alpha_{1}) \ensuremath{\operatorname{Id}} +\alpha_{1}\ensuremath{\operatorname{R}}_{U_{1}} \big)\}.
\end{align*}
The desired result follows directly from \cref{exam::IMp:affRURU}.
\end{proof}
Notice that in \cref{prop:TT:CWhat} we showed that for $\widetilde{\mathcal{S}} =\{\ensuremath{\operatorname{Id}}, T, T^{2}\} = \{\ensuremath{\operatorname{Id}},\frac{\ensuremath{\operatorname{Id}} + \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}}{2}, \frac{\ensuremath{\operatorname{Id}} + \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}}{2} \circ \frac{\ensuremath{\operatorname{Id}} + \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}}{2} \}$, the mapping $\ensuremath{\operatorname{C}}C{\widetilde{\mathcal{S}}}$ is proper. The example above shows that this result is not a coincidence.

\subsection{Particular circumcenter mappings in finite-dimensional spaces}
\subsubsection{Application to best approximation}
Suppose that
\begin{align*}
\mathcal{S}_{1}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\} \quad \text{and} \quad \mathcal{S}_{2} = \{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}.
\end{align*}
By \cref{ex:m:m} and \cref{ex:2:3}, we know that $\ensuremath{\operatorname{C}}C{S_{1}}$ and $\ensuremath{\operatorname{C}}C{S_{2}}$ are proper.
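As a small illustration (the data here are chosen purely for concreteness and are not part of the experiments that follow), consider $\mathcal{H}=\mathbb{R}^{2}$, $U_{1}=\mathbb{R}\cdot(1,0)$, $U_{2}=\mathbb{R}\cdot(1,1)$ and $x=(2,1)$. Then $\ensuremath{\operatorname{R}}_{U_{1}}x=(2,-1)$ and $\ensuremath{\operatorname{R}}_{U_{2}}x=(1,2)$, and since
\begin{align*}
\norm{(2,1)-(0,0)}=\norm{(2,-1)-(0,0)}=\norm{(1,2)-(0,0)}=\sqrt{5},
\end{align*}
the point $(0,0)$ is equidistant from the three affinely independent points of $\mathcal{S}_{1}(x)$, whose affine hull is all of $\mathbb{R}^{2}$. Consequently, $\ensuremath{\operatorname{C}}C{S_{1}}x=(0,0)=\ensuremath{\operatorname{P}}_{U_{1}\cap U_{2}}x$, because $U_{1}\cap U_{2}=\{(0,0)\}$; in this instance a single circumcenter step already returns the best approximation point.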
Hence, for every $x \in \mathcal{H}$, we are able to generate the iterations $(\ensuremath{\operatorname{C}}C{S_{1}}^{k}x)_{k\in \mathbb{N}}$ and $(\ensuremath{\operatorname{C}}C{S_{2}}^{k}x)_{k\in \mathbb{N}}$. In the following two examples, we choose two linear subspaces, $U_{1}$ and $U_{2}$, in $\mathbb{R}^{3}$ and one point $x_{0} \in \mathbb{R}^{3}$. Then we count the number of iterations needed by each of the four algorithms: the shadow sequence of the Douglas--Rachford method (DRM) (see \cite{BCNPW2014} for details), the sequence generated by the method of alternating projections (MAP), and the sequences generated by iterating $\ensuremath{\operatorname{C}}C{S_{1}}$ and $\ensuremath{\operatorname{C}}C{S_{2}}$, to find the best approximation point $\overline{x} = \ensuremath{\operatorname{P}}_{U_{1} \cap U_{2}}x_{0}$.
\begin{example}
\label{exam:3D:LinePlane}
Assume that $\mathcal{H} =\mathbb{R}^{3}$, that $U_{1}$ is the line passing through the points $(0,0,0)$ and $(1,0,0)$, and that $U_{2}$ is the plane $\{(x,y,z) ~|~ x+y+z=0\}$. Let $x_{0} = (0.5,0,0)$. As \cref{table:Lineandplane} shows, both $\ensuremath{\operatorname{C}}C{S_{1}}$ and $\ensuremath{\operatorname{C}}C{S_{2}}$ are faster than DRM and MAP. (The results were obtained using \texttt{GeoGebra}.)
\begin{table}[H]
\centering
\begin{tabu} to 0.8\textwidth { m{7cm} m{6cm} }
\hline
Algorithm & Iterations needed to find $\ensuremath{\operatorname{P}}_{U_{1} \cap U_{2}}x_{0}$ \\
\hline
Douglas--Rachford method & 12\\
Method of alternating projections & 12 \\
Circumcenter method induced by $\mathcal{S}_{1}$ & 1 \\
Circumcenter method induced by $\mathcal{S}_{2}$ & 1 \\
\hline
\end{tabu}
\caption{Iterations needed for each algorithm.
See \cref{exam:3D:LinePlane} for details.}
\label{table:Lineandplane}
\end{table}
\end{example}
\begin{figure}
\caption{\cref{exam:3D:LinePlane}.}
\label{3DLINE_PLANE}
\end{figure}
\begin{example}
\label{exam:3D:PlanePlane}
Assume that $\mathcal{H} =\mathbb{R}^{3}$, that $U_{1}=\{(x,y,z) ~|~ x+y+z=0\}$, and that $U_{2}=\{(x,y,z) ~|~ -x+2y+2z=0\}$. Set $x_{0} = (-1,0.5,0.5)$. As \cref{table:planeandplane} illustrates, $\ensuremath{\operatorname{C}}C{S_{2}}$ is faster than the other methods, and $\ensuremath{\operatorname{C}}C{S_{1}}$ performs no worse than DRM or MAP. (The results were obtained using \texttt{GeoGebra}.)
\begin{table}[H]
\centering
\begin{tabu} to 0.8\textwidth { m{7cm} m{6cm} }
\hline
Algorithm & Iterations needed to find $\ensuremath{\operatorname{P}}_{U_{1} \cap U_{2}}x_{0}$ \\
\hline
Douglas--Rachford method & 5\\
Method of alternating projections & 6 \\
Circumcenter method induced by $\mathcal{S}_{1}$ & 5 \\
Circumcenter method induced by $\mathcal{S}_{2}$ & 2 \\
\hline
\end{tabu}
\caption{Iterations needed for each algorithm. See \cref{exam:3D:PlanePlane} for details.}
\label{table:planeandplane}
\end{table}
\end{example}
\begin{figure}
\caption{\cref{exam:3D:PlanePlane}.}
\label{3DPLANE_PLANE.png}
\end{figure}

\subsubsection{Counterexamples}
The following two examples show that the circumcenter mapping induced by reflectors is in general neither linear nor continuous.
\begin{example}[Discontinuity]
\label{exam:discontinuity}
Suppose that $\ensuremath{{\mathcal{H}}}=\mathbb{R}^2$, set $U_{1}=\mathbb{R}\cdot (1,0)$, and set $U_{2}=\mathbb{R}\cdot(1,1)$. Suppose that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\}$ or that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}$.
Let $\overline{x} =(1,0)$ and let $(\forall k \in \mathbb{N})$ $x_{k}=(1,\frac{1}{k+1})$. As \cref{fig:CCSDiscontinuous.png} illustrates, $\ensuremath{\operatorname{C}}C{\mathcal{S}}\overline{x} = \big(\frac{1}{2},\frac{1}{2}\big)$ and $(\forall k \in \mathbb{N})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} = (0,0)$. Hence,
\begin{align*}
\lim_{k \to \infty} \ensuremath{\operatorname{C}}C{\mathcal{S}}x_{k} = (0,0) \neq \big(\tfrac{1}{2},\tfrac{1}{2}\big)= \ensuremath{\operatorname{C}}C{\mathcal{S}}\overline{x},
\end{align*}
which implies that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is not continuous at $\overline{x}$. By \cref{prop:FiniteIdresult}, the demiclosedness principle holds for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$.
\begin{figure}
\caption{\cref{exam:discontinuity}.}
\label{fig:CCSDiscontinuous.png}
\end{figure}
\end{example}
\begin{example}[Nonlinearity]
\label{exam:not:linear}
Suppose that $\ensuremath{{\mathcal{H}}}=\mathbb{R}^2$, set $U_{1}=\mathbb{R}\cdot (1,0)$ and set $U_{2}=\mathbb{R}\cdot(1,1)$. Suppose that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\}$ or that $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}$. Let $x =(1,0)$ and $y=(1,-1)$. As \cref{fig:Not_linear.png} illustrates,
\begin{align*}
\ensuremath{\operatorname{C}}C{\mathcal{S}}x + \ensuremath{\operatorname{C}}C{\mathcal{S}}y =\big(\tfrac{1}{2},\tfrac{1}{2} \big) + (0,0) \neq (0,0) = \ensuremath{\operatorname{C}}C{\mathcal{S}}(x+y),
\end{align*}
which shows that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is not linear. By \cref{prop:FiniteIdresult}, the demiclosedness principle holds for $\ensuremath{\operatorname{C}}C{\mathcal{S}}$.
\begin{figure}
\caption{\cref{exam:not:linear}.}
\label{fig:Not_linear.png}
\end{figure}
\end{example}

\section{Circumcenter mappings induced by projectors}
\label{sec:CircumMappingProjectots}
In this section, we retain the notation
\begin{empheq}[box=\mybluebox]{equation*}
\Omega = \Big\{ \ensuremath{\operatorname{R}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{R}}_{U_{i_{2}}}\ensuremath{\operatorname{R}}_{U_{i_{1}}} ~\Big|~ r \in \mathbb{N}, ~\mbox{and}~ i_{1}, \ldots, i_{r} \in \{1, \ldots,m\} \Big\} \quad \text{and} \quad \ensuremath{\operatorname{Id}} \in \mathcal{S} \subseteq \Omega.
\end{empheq}
In addition, set
\begin{empheq}[box=\mybluebox]{equation*}
\Theta = \Big\{ \ensuremath{\operatorname{P}}_{U_{i_{r}}}\cdots \ensuremath{\operatorname{P}}_{U_{i_{2}}}\ensuremath{\operatorname{P}}_{U_{i_{1}}} ~\Big|~ r \in \mathbb{N}, ~\mbox{and}~ i_{1}, \ldots, i_{r} \in \{1, \ldots,m\} \Big\}.
\end{empheq}
By the empty product convention, $\prod^{0}_{j=1}\ensuremath{\operatorname{P}}_{U_{i_{j}}} =\ensuremath{\operatorname{Id}}$. Hence $\ensuremath{\operatorname{Id}} \in \Theta$. Specifically, we assume that
\begin{empheq}[box=\mybluebox]{equation*}
\ensuremath{\operatorname{Id}} \in \widehat{\mathcal{S}} \subseteq \ensuremath{\operatorname{aff}} \Theta.
\end{empheq}

\subsection{Proper circumcenter mappings induced by projectors}
First, we present some cases in which $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ is proper.
\begin{proposition}
\label{prop:CWProLinComWellDefine}
Let $\alpha \in \mathbb{R}$.
Assume that \begin{align*} \widehat{\mathcal{S}}=\{\ensuremath{\operatorname{Id}}, (1-\alpha) \ensuremath{\operatorname{Id}} + \alpha \ensuremath{\operatorname{P}}_{U_{1}}, \ldots, (1-\alpha) \ensuremath{\operatorname{Id}} + \alpha \ensuremath{\operatorname{P}}_{U_{m}} \}, \end{align*} and that \begin{align*} \mathcal{S} = \{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ldots, \ensuremath{\operatorname{R}}_{U_{m}}\}. \end{align*} Then $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ is proper. Moreover, \begin{align} \label{eq:prop:CWProLinComWellDefine} (\forall x \in \mathcal{H}) \quad \ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}x = \tfrac{\alpha}{2} \ensuremath{\operatorname{C}}C{\mathcal{S}}x + \big(1 - \tfrac{\alpha}{2}\big)x \in \mathcal{H}. \end{align} \end{proposition} \begin{proof} Apply \cref{prop:proper:affRU} with $\alpha$ replaced by $\frac{\alpha}{2}$. \end{proof} Taking $\alpha =1$ in \cref{prop:CWProLinComWellDefine}, we deduce the next result. \begin{corollary} \label{cor:CWProLinComWellDefine} Assume that $\widehat{\mathcal{S}}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{P}}_{U_{1}}, \ldots, \ensuremath{\operatorname{P}}_{U_{m-1}},\ensuremath{\operatorname{P}}_{U_{m}} \}$. 
Then $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ is proper; that is, for every $x \in \mathcal{H}$, there exists a unique $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}x \in \mathcal{H}$ satisfying
\begin{enumerate}
\item $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}(x) \in \ensuremath{\operatorname{aff}}\{x, \ensuremath{\operatorname{P}}_{U_{1}}(x), \ldots, \ensuremath{\operatorname{P}}_{U_{m-1}}(x), \ensuremath{\operatorname{P}}_{U_{m}}(x)\}$;
\item $\norm{\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}(x)-x}=\norm{\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}(x)-\ensuremath{\operatorname{P}}_{U_{1}}(x)}=\cdots =\norm{\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}(x)-\ensuremath{\operatorname{P}}_{U_{m-1}}(x)}=\norm{\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}(x)-\ensuremath{\operatorname{P}}_{U_{m}}(x)}$.
\end{enumerate}
\end{corollary}
\begin{proposition}
\label{prop:PU1PU2PU1:Proper}
Assume that $U_{2}$ is linear and that $\widehat{\mathcal{S}}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{P}}_{U_{1}}, \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}} \}$. Then $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ is proper.
\end{proposition}
\begin{proof}
Let $x \in \mathcal{H}$. If $\ensuremath{\operatorname{card}} \big( \widehat{\mathcal{S}}(x) \big) \leq 2$, then by \cref{prop:form:m2:Oper}, $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}x \in \mathcal{H}$. Now assume $\ensuremath{\operatorname{card}} \big( \widehat{\mathcal{S}}(x) \big) =3$. If $x, \ensuremath{\operatorname{P}}_{U_{1}}x, \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x$ are affinely independent, then by \cref{fact:clform:three}, $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}x \in \mathcal{H}$.
Assume that
\begin{align}
\label{eq:prop:PU1PU2PU1:Properaffde}
x, \ensuremath{\operatorname{P}}_{U_{1}}x, \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x~\text{are affinely dependent}.
\end{align}
Note that $\ensuremath{\operatorname{card}} \big( \widehat{\mathcal{S}}(x) \big) =3$ implies that $\ensuremath{\operatorname{P}}_{U_{1}}x -x \neq 0$; moreover, \cref{eq:prop:PU1PU2PU1:Properaffde} yields that there exists $\alpha \neq 1$ such that
\begin{align}
\label{eq:prop:PU1PU2PU1:Proper:LineDep}
\ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x -x = \alpha (\ensuremath{\operatorname{P}}_{U_{1}}x -x).
\end{align}
Because $U_{2}$ is a linear subspace, $\ensuremath{\operatorname{P}}_{U_{2}}$ is linear. Applying to both sides of \cref{eq:prop:PU1PU2PU1:Proper:LineDep} the projector $\ensuremath{\operatorname{P}}_{U_{2}}$, we obtain
\begin{align}
&~\ensuremath{\operatorname{P}}_{U_{2}} \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x - \ensuremath{\operatorname{P}}_{U_{2}}x = \alpha ( \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x - \ensuremath{\operatorname{P}}_{U_{2}}x) \nonumber \\
\Longrightarrow &~ \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x - \ensuremath{\operatorname{P}}_{U_{2}}x = \alpha ( \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x - \ensuremath{\operatorname{P}}_{U_{2}}x) \quad (\text{by \cref{lem:PU:RUIdempotent}\cref{lem:PUIdempotent}}) \nonumber \\
\Longrightarrow &~ (1-\alpha) \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x = (1-\alpha) \ensuremath{\operatorname{P}}_{U_{2}}x \nonumber \\
\Longrightarrow &~ \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x = \ensuremath{\operatorname{P}}_{U_{2}}x.
\quad ( \alpha \neq 1) \label{eq:PU2PU1ALPHA} \end{align} Combining $\ensuremath{\operatorname{card}} \big( \widehat{\mathcal{S}}(x) \big) =3$ with \cref{eq:prop:PU1PU2PU1:Properaffde} and \cref{eq:PU2PU1ALPHA}, we deduce that $x, \ensuremath{\operatorname{P}}_{U_{1}}x, \ensuremath{\operatorname{P}}_{U_{2}}x$ are pairwise distinct and affinely dependent. Applying \cref{cor:CWProLinComWellDefine} with $m=2$, we obtain $\ensuremath{\operatorname{C}}CO{(\{x, \ensuremath{\operatorname{P}}_{U_{1}}x, \ensuremath{\operatorname{P}}_{U_{2}}x \})} \in \mathcal{H}$. But this contradicts \cref{fact:clform:three}. Therefore, $\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}} = \mathcal{H}$. \end{proof} \begin{proposition} \label{prop:PU1U2:Proper} Assume that $U_{2}$ is linear and that $\widehat{\mathcal{S}}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{P}}_{U_{2}}, \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}} \}$. Then $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ is proper. \end{proposition} \begin{proof} Let $x \in \mathcal{H}$. Similarly to the proof of \cref{prop:PU1PU2PU1:Proper}, we arrive at a contradiction for the case where $\ensuremath{\operatorname{card}} \big( \widehat{\mathcal{S}}(x) \big) =3$ and there exists $\alpha \neq 1$ such that \begin{align} \label{eq:prop:PU1U2:Proper:LineDep} \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x-x=\alpha ( \ensuremath{\operatorname{P}}_{U_{2}}x-x ). \end{align} As in the proof of \cref{prop:PU1PU2PU1:Proper}, we apply the projector $\ensuremath{\operatorname{P}}_{U_{2}}$ to both sides of \cref{eq:prop:PU1U2:Proper:LineDep}.
Then \begin{align*} & \ensuremath{\operatorname{P}}_{U_{2}} \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x-\ensuremath{\operatorname{P}}_{U_{2}}x =\alpha (\ensuremath{\operatorname{P}}_{U_{2}} \ensuremath{\operatorname{P}}_{U_{2}}x- \ensuremath{\operatorname{P}}_{U_{2}} x) \\ \Longrightarrow &~ \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x-\ensuremath{\operatorname{P}}_{U_{2}}x= \alpha (\ensuremath{\operatorname{P}}_{U_{2}}x-\ensuremath{\operatorname{P}}_{U_{2}}x) =0 \quad (\text{by \cref{lem:PU:RUIdempotent}\cref{lem:PUIdempotent}})\\ \Longrightarrow &~ \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x=\ensuremath{\operatorname{P}}_{U_{2}}x, \end{align*} which contradicts $\ensuremath{\operatorname{card}} \big( \widehat{\mathcal{S}}(x) \big) =3$. \end{proof} \subsection{Improper circumcenter mappings induced by projectors} \label{Sec:Subsec:ImproProjec} Propositions \ref{prop:CWProLinComWellDefine}, \ref{prop:PU1PU2PU1:Proper}, and \ref{prop:PU1U2:Proper} prompt the following question: \begin{question} \label{ques:AffComb:Projec} Suppose that $\{ \alpha_{1}, \alpha_{2} \} \subseteq \mathbb{R} \smallsetminus \{0, 1\}$ and that at least one of $\alpha_{1}, \alpha_{2}$ is not $2$.\footnotemark ~Assume that $\widehat{\mathcal{S}} = \Big\{ \ensuremath{\operatorname{Id}}, (1 - \alpha_{1}) \ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{P}}_{U_{1}}, \big( (1 -\alpha_{2}) \ensuremath{\operatorname{Id}} + \alpha_{2}\ensuremath{\operatorname{P}}_{U_{2}} \big) \circ \big( (1 - \alpha_{1}) \ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{P}}_{U_{1}} \big)\Big\}$ or $\widehat{\mathcal{S}} = \Big\{ \ensuremath{\operatorname{Id}}, (1 - \alpha_{1}) \ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{P}}_{U_{2}}, \big( (1 -\alpha_{2}) \ensuremath{\operatorname{Id}} + \alpha_{2}\ensuremath{\operatorname{P}}_{U_{2}} \big) \circ \big( (1 - \alpha_{1}) \ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{P}}_{U_{1}} \big)\Big\}$. Is $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ proper? \end{question} \footnotetext{For $i \in \{1,2\}$, when $\alpha_{i}$ is $0, 1$, or $2$, then $(1 - \alpha_{i}) \ensuremath{\operatorname{Id}} + \alpha_{i} \ensuremath{\operatorname{P}}_{U_{1}}$ is $\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{P}}_{U_{1}}$, or $\ensuremath{\operatorname{R}}_{U_{1}}$ respectively. In these special cases, the answer to \cref{ques:AffComb:Projec} is positive (see \cref{prop:CWProLinComWellDefine} and \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}). } The following example demonstrates that the answer to \cref{ques:AffComb:Projec} is negative. \begin{example} \label{exam:UUPUIMPROP} Assume that $m=2$ and that $U:=U_{1} = U_{2} \subsetneqq \mathcal{H}$ and $\{\alpha_{1}, \alpha_{2} \} \subseteq \mathbb{R}$. Assume further that $\widehat{\mathcal{S}} = \big\{ \ensuremath{\operatorname{Id}}, (1 - \alpha_{1}) \ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{P}}_{U}, \big( (1 -\alpha_{2}) \ensuremath{\operatorname{Id}} + \alpha_{2}\ensuremath{\operatorname{P}}_{U} \big) \circ \big( (1 - \alpha_{1}) \ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{P}}_{U} \big)\big\}$. Then $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ is improper if and only if $\alpha_{1} \neq 0, \alpha_{1} \neq 1, \alpha_{2} \neq 0$ and $\alpha_{2} \neq \frac{\alpha_{1}}{\alpha_{1} -1}$.
\end{example} \begin{proof} Since $\ensuremath{\operatorname{R}}_{U} =2 \ensuremath{\operatorname{P}}_{U} -\ensuremath{\operatorname{Id}} $, we deduce that \begin{align*} \widehat{\mathcal{S}} & = \Big\{ \ensuremath{\operatorname{Id}}, (1 - \alpha_{1}) \ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{P}}_{U}, \big ( (1 -\alpha_{2}) \ensuremath{\operatorname{Id}} + \alpha_{2}\ensuremath{\operatorname{P}}_{U} \big) \circ \big( (1 - \alpha_{1}) \ensuremath{\operatorname{Id}} + \alpha_{1} \ensuremath{\operatorname{P}}_{U} \big) \Big\} \\ & = \Big\{ \ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{Id}} + \alpha_{1} ( \ensuremath{\operatorname{P}}_{U} - \ensuremath{\operatorname{Id}} ), \big( \ensuremath{\operatorname{Id}} + \alpha_{2} ( \ensuremath{\operatorname{P}}_{U} -\ensuremath{\operatorname{Id}} ) \big) \circ \big( \ensuremath{\operatorname{Id}} + \alpha_{1} ( \ensuremath{\operatorname{P}}_{U}- \ensuremath{\operatorname{Id}} ) \big) \Big\} \\ & = \Big\{ \ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{Id}} + \frac{\alpha_{1}}{2} ( \ensuremath{\operatorname{R}}_{U} - \ensuremath{\operatorname{Id}} ), \big( \ensuremath{\operatorname{Id}} + \frac{\alpha_{2}}{2} ( \ensuremath{\operatorname{R}}_{U} -\ensuremath{\operatorname{Id}} ) \big) \circ \big( \ensuremath{\operatorname{Id}} + \frac{\alpha_{1}}{2} ( \ensuremath{\operatorname{R}}_{U}- \ensuremath{\operatorname{Id}} ) \big) \Big\}. \end{align*} The result now follows from the assumptions above and \cref{exam::IMp:affRURU}. \end{proof} Next, we present further improper instances of $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$, where $\ensuremath{\operatorname{Id}} \in \widehat{\mathcal{S}} \subseteq \ensuremath{\operatorname{aff}} \Theta$. \begin{example} \label{exam:ImproperColinear} Assume that $\mathcal{H}=\mathbb{R}^{2}$, that $m=2$, that $U_{1} = \mathbb{R} \cdot (1,0)$, and that $U_{2}= \mathbb{R} \cdot (1,2)$. 
Assume further that $\widehat{\mathcal{S}}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}, \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}\ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}\}$. Take $x=(2, 4) \in U_{2}$. As \cref{fig:Pro_ContExam_Nonexist3Point.png} illustrates, $x$, $\ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x$, and $\ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}\ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x$ are pairwise distinct and collinear. By \cref{thm:Proper3}, $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ is improper. \begin{figure} \caption{\cref{exam:ImproperColinear}} \label{fig:Pro_ContExam_Nonexist3Point.png} \end{figure} \end{example} \begin{example} \label{exam:Improper:Noncolinear} Assume that $\mathcal{H}=\mathbb{R}^{2}$, that $m=2$, that $U_{1} = \mathbb{R} \cdot (1,0)$, and that $U_{2} = \mathbb{R} \cdot (1,1)$. Assume further that $\widehat{\mathcal{S}}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{P}}_{U_{1}}, \ensuremath{\operatorname{P}}_{U_{2}}, \ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}\}$. Take $x=(4,2)$ and set $\mathcal{K}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{P}}_{U_{1}}, \ensuremath{\operatorname{P}}_{U_{2}}\}$. Clearly, $\ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x-x \in \mathbb{R}^{2}= \ensuremath{{\operatorname{span}}} \{ \ensuremath{\operatorname{P}}_{U_{1}}x-x, \ensuremath{\operatorname{P}}_{U_{2}}x-x \}$, which implies that $\ensuremath{\operatorname{aff}} (\mathcal{K}(x)) = \ensuremath{\operatorname{aff}} (\widehat{\mathcal{S}}(x))$.
By \cref{fact:unique:BasisPformula}, if $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}x \in \mathcal{H}$, then $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}x = \ensuremath{\operatorname{C}}C{\mathcal{K}}x$. As \cref{fig:Pro_ContExam_Nonexist.png} shows, \begin{align*} \norm{\ensuremath{\operatorname{C}}C{\mathcal{K}}x-x}=\norm{\ensuremath{\operatorname{C}}C{\mathcal{K}}x-\ensuremath{\operatorname{P}}_{U_{1}}x}=\norm{\ensuremath{\operatorname{C}}C{\mathcal{K}}x-\ensuremath{\operatorname{P}}_{U_{2}}x}\neq \norm{\ensuremath{\operatorname{C}}C{\mathcal{K}}x-\ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x}. \end{align*} Hence $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}x = \varnothing$, which implies that $\ensuremath{\operatorname{C}}C{\widehat{\mathcal{S}}}$ is improper. \begin{figure} \caption{\cref{exam:Improper:Noncolinear}} \label{fig:Pro_ContExam_Nonexist.png} \end{figure} \end{example} \section{More improper circumcenter mappings induced by reflectors} \label{sec:MoreImproper} In \cref{thm:CCS:proper}\cref{thm:CCS:proper:prop}, to prove that $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is proper, we required that \begin{align} \label{eq:Cond:CCS:Prop} U_{1}, \ldots, U_{m}~\text{are closed affine subspaces in}~\mathcal{H}~\text{with}~\cap^{m}_{i=1} U_{i} \neq \varnothing, \end{align} and that \begin{align} \label{eq:Cond:CCS:Prop:SIDOME} \ensuremath{\operatorname{Id}} \in \mathcal{S} \subseteq \Omega . \end{align} In \cref{subsec:ImpcircuMappingRefle}, we have already seen that when the condition $\mathcal{S} \subseteq \Omega$ fails, the circumcenter mapping induced by reflectors $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ may be improper.
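The improper instance in \cref{exam:Improper:Noncolinear} is easy to check numerically. The following Python sketch (our illustration, not part of the formal development; the helper names \texttt{proj\_line} and \texttt{circumcenter} are ours) solves the defining equidistance conditions as a linear system in the affine hull of the given points:

```python
import numpy as np

def proj_line(d):
    """Orthogonal projector onto the line R*d in R^2."""
    d = np.asarray(d, float)
    return lambda x: (x @ d) / (d @ d) * d

def circumcenter(points, tol=1e-9):
    """The point of aff(points) equidistant to all points,
    or None if no such point exists (the improper case)."""
    P = np.asarray(points, float)
    p0, V = P[0], P[1:] - P[0]
    G = V @ V.T                      # Gram matrix of the directions
    # equidistance to p0 and p_i reduces to 2 G t = (|v_i|^2)_i
    t, *_ = np.linalg.lstsq(2 * G, np.sum(V * V, axis=1), rcond=None)
    c = p0 + t @ V
    dists = np.linalg.norm(P - c, axis=1)
    return c if np.ptp(dists) < tol else None

P1 = proj_line((1.0, 0.0))           # P_{U_1}
P2 = proj_line((1.0, 1.0))           # P_{U_2}
x = np.array([4.0, 2.0])
S_x = [x, P1(x), P2(x), P2(P1(x))]   # {x, P_{U_1}x, P_{U_2}x, P_{U_2}P_{U_1}x}

print(circumcenter(S_x[:3]))         # circumcenter of K(x): [2. 1.]
print(circumcenter(S_x))             # no equidistant point: None
```

For $x=(4,2)$ the circumcenter of $\mathcal{K}(x)$ is $(2,1)$, at distance $\sqrt{5}$ from $x$, $\ensuremath{\operatorname{P}}_{U_{1}}x=(4,0)$ and $\ensuremath{\operatorname{P}}_{U_{2}}x=(3,3)$, but at distance $1$ from $\ensuremath{\operatorname{P}}_{U_{2}}\ensuremath{\operatorname{P}}_{U_{1}}x=(2,2)$, so no point of the affine hull is equidistant to all four points.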
In the remaining part of this section, we consider two circumcenter mappings induced by reflectors, where $m=2$ and $\mathcal{S}= \{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\}$ or $\mathcal{S}= \{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}$. We construct additional improper circumcenter mappings for which the conditions in \cref{eq:Cond:CCS:Prop} are not satisfied, which shows that the conditions \cref{eq:Cond:CCS:Prop} and \cref{eq:Cond:CCS:Prop:SIDOME} are sharp. \subsection{Inconsistent cases} \label{sec:FailPropInconsis} In this subsection, we focus on the case when $\cap^{m}_{i=1} U_{i} = \varnothing$. Let $U$ and $V$ be two nonempty, closed, convex (possibly nonintersecting) subsets of $\mathcal{H}$. A \emph{best approximation pair} relative to $(U,V)$ is \begin{empheq}{equation*} \quad (a,b) \in U\times V \quad \text{such that} \quad \norm{a-b}=\inf~ \norm{U-V}. \end{empheq} In \cite{BCL2004}, the authors used the Douglas--Rachford splitting operator $T = \frac{\ensuremath{\operatorname{R}}_{V}\ensuremath{\operatorname{R}}_{U} +\ensuremath{\operatorname{Id}}}{2}$ to find a best approximation pair relative to $(U,V)$. \begin{fact} {\rm \cite[Theorem~3.13 and Remark~3.14(ii)]{BCL2004}} \label{fac:BesApproxPair} Let $U$ be a closed affine subspace and let $V$ be a nonempty, closed, convex set in $\mathcal{H}$ ($U, V$ are possibly non-intersecting). Suppose that best approximation pairs relative to $(U,V)$ exist. Set $T:= \frac{\ensuremath{\operatorname{R}}_{V}\ensuremath{\operatorname{R}}_{U} +\ensuremath{\operatorname{Id}}}{2}$. Let $x_{0} \in \mathcal{H}$ and set $x_{n} =T^{n}x_{0}$, for all $n \in \mathbb{N}$.
Then \begin{align*} \big( (\ensuremath{\operatorname{P}}_{V}\ensuremath{\operatorname{R}}_{U}x_{n},\ensuremath{\operatorname{P}}_{U}x_{n}) \big)_{n \in \mathbb{N}} \quad \text{and} \quad \big( (\ensuremath{\operatorname{P}}_{V}\ensuremath{\operatorname{P}}_{U}x_{n},\ensuremath{\operatorname{P}}_{U}x_{n}) \big)_{n \in \mathbb{N}} \end{align*} both converge weakly to best approximation pairs relative to $(U,V)$. \end{fact} The following examples show that, even if both $U_{1}$ and $U_{2}$ are closed affine subspaces, when $U_{1} \cap U_{2} = \varnothing$ the operator $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ may fail to be proper, where $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\}$ or $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}$. (Notice that in \cref{exam:PointLine}, $U_{1}$ is even a compact set.) Hence, we cannot directly generalize \cref{fac:BesApproxPair} to the circumcenter mapping induced by reflectors. The results of the following examples in this section follow easily from \cref{cor:T1T2T3Dom}, and the proofs are omitted. \begin{example} \label{exam:PointLine} Assume that $\mathcal{H} =\mathbb{R}^{2}$, that $U_{1}=\{(2,0)\}$, and that $U_{2} = \mathbb{R} \cdot (0,1)$. Set $\mathcal{S}_{1}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\}$ and $\mathcal{S}_{2}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}$.
Then \begin{align*} & \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{1}} =\big( \mathbb{R}^{2} \smallsetminus \mathbb{R}\cdot(1,0) \big ) \cup \big\{(2,0), (0,0) \big\}, \\ & \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{2}} =\big( \mathbb{R}^{2} \smallsetminus \mathbb{R}\cdot(1,0) \big ) \cup \big\{(2,0), (4,0)\big\}. \end{align*} \end{example} \subsection{Non-affine cases} \label{sec:FailPropNon-affin} One of the charming aspects of the Douglas--Rachford method is that it can be used for general convex sets. In this subsection, we assume that \begin{align*} \mathcal{S}_{1}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\} \quad \text{or} \quad \mathcal{S}_{2}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}. \end{align*} We shall present examples in which the operator $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ is improper, with at least one of $U_{1}$ and $U_{2}$ not being an affine subspace while $U_{1} \cap U_{2} \neq \varnothing$. \begin{example} \label{exam:ConvConeLin} Assume that $\mathcal{H} = \mathbb{R}^{2}$, that $U_{1}= \mathbb{R}^{2}_{+}$, and that $U_{2}= (2,0)+\mathbb{R} \cdot (0,1)$. Then \begin{align*} & \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{1}} = \mathbb{R}^{2} \smallsetminus \{(x,y) ~|~x<0 ~\text{and}~y \geq 0\}, \\ & \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{2}} = \big( \mathbb{R}^{2} \smallsetminus \{(x,y) ~|~ x<0 ~\text{and}~y \geq 0 \} \big) \cup \{(-2,y) ~|~y \geq0 \} . \end{align*} \end{example} In the remainder of this subsection, we revisit the examples used in \cite{BCS2017} to show the potential of the Circumcentering Douglas--Rachford method, which are the iterations of the operator $\ensuremath{\operatorname{C}}C{\mathcal{S}_{2}}$. 
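The domain computations above can be probed numerically. The following Python sketch (ours; the function names are hypothetical) builds the reflectors for \cref{exam:ConvConeLin}, where $U_{1}=\mathbb{R}^{2}_{+}$ and $U_{2}=(2,0)+\mathbb{R}\cdot(0,1)$, and tests whether $\mathcal{S}_{1}(x)$ admits a circumcenter at two sample points, one outside and one inside $\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{1}}$:

```python
import numpy as np

def P_quadrant(x):          # projection onto U_1 = R^2_+
    return np.maximum(x, 0.0)

def P_vline(x):             # projection onto U_2 = (2,0) + R*(0,1)
    return np.array([2.0, x[1]])

def reflect(P, x):          # reflector R_U = 2 P_U - Id
    return 2.0 * P(x) - x

def has_circumcenter(points, tol=1e-9):
    """True iff the points admit an equidistant point in their affine hull."""
    P = np.asarray(points, float)
    p0, V = P[0], P[1:] - P[0]
    G = V @ V.T
    t, *_ = np.linalg.lstsq(2 * G, np.sum(V * V, axis=1), rcond=None)
    c = p0 + t @ V
    return bool(np.ptp(np.linalg.norm(P - c, axis=1)) < tol)

# (-1, 1) lies in {(x,y) : x < 0, y >= 0}: its three points are distinct
# and collinear, so no circumcenter; (-1, -1) is in dom CC_{S_1}
for x in (np.array([-1.0, 1.0]), np.array([-1.0, -1.0])):
    S1_x = [x, reflect(P_quadrant, x), reflect(P_vline, x)]
    print(x, has_circumcenter(S1_x))
```

For $x=(-1,1)$ the set $\mathcal{S}_{1}(x)$ consists of $(-1,1)$, $(1,1)$ and $(5,1)$, which are pairwise distinct and collinear, matching the excluded set $\{(x,y)~|~x<0~\text{and}~y\geq 0\}$ in \cref{exam:ConvConeLin}.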
\begin{example} \label{exam:BallLine} Assume that $\mathcal{H} = \mathbb{R}^{2}$, that $U_{1}=\mathbf{B}[(0,0);1]$, and that $U_{2}=(1,0)+\mathbb{R} \cdot (0,1)$. Then \begin{align*} & \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{1}} = \mathbb{R}^{2} \smallsetminus \{(x,0) ~|~ x <-1 \},\\ & \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{2}} = \mathbb{R}^{2} \smallsetminus \{(x,0) ~|~ x<-3 ~\text{or}~ -3<x <-1\} . \end{align*} \end{example} \begin{example} \label{exam:Ball_LineMore} Assume that $\mathcal{H} = \mathbb{R}^{2}$, that $U_{1}=\mathbf{B}[(0,0);1]$, and that $U_{2}=\mathbb{R} \cdot (0,1)$. Then \begin{align*} & \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{1}} = \mathbb{R}^{2} \smallsetminus \{(x,0) ~|~ x <-1 ~\text{or}~x >1 \} ,\\ & \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{2}} = \mathbb{R}^{2} \smallsetminus \{(x,0) ~|~ x<-2 ~\text{or}~ -2<x <-1 ~\text{or}~ 1 <x < 2 ~\text{or}~ x >2\} . \end{align*} \end{example} \begin{example} \label{exam:BallBall:dom} Assume that $\mathcal{H}= \mathbb{R}^{2}$, that $U_{1}=\mathbf{B}[(-1,0);1]$, and that $U_{2}=\mathbf{B}[(1,0);1]$. Then \begin{equation*} \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{1}} = \mathbb{R}^{2} \smallsetminus \{(x,0) ~|~ x <-2 ~\text{or}~x >2 \}, \end{equation*} \begin{equation*} \{(x,0) ~|~ -6 \leq x \leq -4~\text{or}~ x \geq -2\} \subseteq \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{2}}, \end{equation*} \begin{equation*} \{(x,0) ~|~x<-6 ~\text{or}~ -4 <x<-2\} \subseteq \mathbb{R}^2\smallsetminus \big( \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{2}} \big). \end{equation*} \end{example} \begin{example} \label{exam:Ball_Ball:dom} Assume that $\mathcal{H} = \mathbb{R}^{2}$, that $U_{1}=\mathbf{B}[(-1,0);2]$, and that $U_{2}=\mathbf{B}[(1,0);2]$. 
Then \begin{equation*} \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{1}} = \mathbb{R}^{2} \smallsetminus \{(x,0) ~|~ x <-3 ~\text{or}~x >3 \}, \end{equation*} \begin{equation*} \{(x,0) ~|~ -9 \leq x \leq -5~\text{or}~ -3 \leq x \leq 3\} \subseteq \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{2}}, \end{equation*} \begin{equation*} \{(x,0) ~|~x<-9 ~\text{or}~ -5 <x<-3 ~\text{or}~x >3\} \subseteq \mathbb{R}^2\smallsetminus \big( \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}_{2}} \big). \end{equation*} \end{example} Finally, consider $U_{1}=\{(x,y) \in \mathbb{R}^{2} ~|~ (x+1)^{2}+y^{2} = 4\}$ and $U_{2}=\{(x,y) \in \mathbb{R}^{2} ~|~ (x-1)^{2}+y^{2} = 4\}$. Note that neither $U_{1}$ nor $U_{2}$ is convex. For $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}$ or $\mathcal{S}=\{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\}$, one can show that $ \ensuremath{\operatorname{dom}} \ensuremath{\operatorname{C}}C{\mathcal{S}} \subsetneqq \mathbb{R}^2$. \subsection{Impossibility to extend to maximally monotone operators} \label{sec:FailGenMaxiMono} Assume that $\mathcal{S}= \{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}} \}$ or $\mathcal{S}= \{\ensuremath{\operatorname{Id}}, \ensuremath{\operatorname{R}}_{U_{1}}, \ensuremath{\operatorname{R}}_{U_{2}}\ensuremath{\operatorname{R}}_{U_{1}}\}$. To show by counterexample that the definition of $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ cannot be directly generalized to the theory of maximally monotone operators, we need the definition and facts below. \begin{definition} {\rm \cite[Definition~23.1]{BC2017}} Let $A : \mathcal{H} \rightarrow 2^{\mathcal{H}}$.
The \emph{resolvent} of $A$ is \begin{align*} J_{A} = (\ensuremath{\operatorname{Id}} + A)^{-1}. \end{align*} \end{definition} \begin{fact} {\rm \cite[Corollary~23.11]{BC2017}} Let $A : \mathcal{H} \rightarrow 2^{\mathcal{H}}$ be maximally monotone and let $\gamma \in \mathbb{R}_{++}$. Then the following hold. \begin{enumerate} \item $J_{\gamma A} : \mathcal{H} \rightarrow \mathcal{H}$ and $\ensuremath{\operatorname{Id}} - J_{\gamma A} : \mathcal{H} \rightarrow \mathcal{H}$ are firmly nonexpansive and maximally monotone. \item The \emph{reflected resolvent} \begin{align*} R_{\gamma A}: \mathcal{H} \rightarrow \mathcal{H} : x \mapsto 2 J_{\gamma A} x -x \end{align*} is nonexpansive. \end{enumerate} \end{fact} \begin{fact} {\rm \cite[Corollary~20.28]{BC2017}} \label{fact:ContMaxim} Let $A : \mathcal{H} \rightarrow \mathcal{H}$ be monotone and continuous. Then $A$ is maximally monotone. \end{fact} \begin{fact} {\rm \cite[Corollary~20.26]{BC2017}} Let $C$ be a nonempty closed convex subset of $\mathcal{H}$. Then $N_{C}$ is maximally monotone. \end{fact} \begin{fact} {\rm \cite[Corollary~23.4]{BC2017}} \label{fac:JNCPC} Let $C$ be a nonempty closed convex subset of $\mathcal{H}$. Then \begin{align*} J_{N_{C}}=(\ensuremath{\operatorname{Id}} +N_{C})^{-1}= \ensuremath{\operatorname{P}}_{C}. \end{align*} \end{fact} By \cref{fac:JNCPC}, $\ensuremath{\operatorname{R}}_{U_{1}}= 2 \ensuremath{\operatorname{P}}_{U_{1}} -\ensuremath{\operatorname{Id}} = 2 J_{N_{U_{1}}} -\ensuremath{\operatorname{Id}}$ and $\ensuremath{\operatorname{R}}_{U_{2}}= 2 \ensuremath{\operatorname{P}}_{U_{2}} -\ensuremath{\operatorname{Id}} = 2 J_{N_{U_{2}}} -\ensuremath{\operatorname{Id}}$. In these special cases, the reflectors are consistent with the corresponding reflected resolvents.
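As a one-dimensional sanity check of \cref{fac:JNCPC} (a sketch of ours, with hypothetical helper names): for an interval $C=[a,b]\subseteq\mathbb{R}$, inverting $\ensuremath{\operatorname{Id}}+N_{C}$ directly from the definition of the normal cone recovers the projection $\ensuremath{\operatorname{P}}_{C}$.

```python
def P_interval(y, a=0.0, b=1.0):
    """Projection of y onto C = [a, b]."""
    return min(max(y, a), b)

def J_normal_cone(y, a=0.0, b=1.0):
    """Resolvent (Id + N_C)^{-1} of the normal cone of C = [a, b] in R,
    computed from the definition: x solves y in x + N_C(x)."""
    if y < a: return a      # N_C(a) = (-inf, 0] absorbs y - a < 0
    if y > b: return b      # N_C(b) = [0, +inf) absorbs y - b > 0
    return y                # N_C(x) = {0} for x in (a, b)

for y in (-2.0, 0.3, 5.0):
    assert J_normal_cone(y) == P_interval(y)
print("J_{N_C} = P_C on C = [0,1]: verified")
```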
In the following examples, we replace the two maximally monotone operators $N_{U_{1}}, N_{U_{2}}$ in the set $\mathcal{S} = \{\ensuremath{\operatorname{Id}}, 2 J_{N_{U_{1}}} -\ensuremath{\operatorname{Id}}, 2 J_{N_{U_{2}}} -\ensuremath{\operatorname{Id}}\}$ or $\mathcal{S} = \{\ensuremath{\operatorname{Id}}, 2 J_{N_{U_{1}}} -\ensuremath{\operatorname{Id}}, (2 J_{N_{U_{2}}} -\ensuremath{\operatorname{Id}} ) \circ (2 J_{N_{U_{1}}} -\ensuremath{\operatorname{Id}})\}$ by $\alpha \ensuremath{\operatorname{Id}}$ and $\beta \ensuremath{\operatorname{Id}}$ respectively, with $\alpha \geq 0$ and $\beta \geq 0$. By \cref{fact:ContMaxim}, since $\alpha \geq 0$ and $\beta \geq 0$, we obtain that $\alpha \ensuremath{\operatorname{Id}}$ and $\beta \ensuremath{\operatorname{Id}}$ are maximally monotone operators. We shall characterize the improperness of the new operator $\ensuremath{\operatorname{C}}C{\mathcal{S}}$. \begin{example} \label{exam:MaxiMonoFail} Assume that $\{0\}\subsetneqq \mathcal{H}$. Set $A=\alpha \ensuremath{\operatorname{Id}} $ and $B=\beta \ensuremath{\operatorname{Id}} $, where $\alpha \geq 0$ and $\beta \geq 0$. Further set \begin{align*} \mathcal{S}_{1}=\{\ensuremath{\operatorname{Id}}, R_{A}, R_{B}\} \quad \text{and} \quad \mathcal{S}_{2} = \{\ensuremath{\operatorname{Id}}, R_{A}, R_{B}R_{A}\}. \end{align*} Then $\ensuremath{\operatorname{C}}C{\mathcal{S}_{1}}$ is improper if and only if $\alpha \neq 0$, $\beta \neq 0$ and $\alpha \neq \beta$. Moreover, $\ensuremath{\operatorname{C}}C{\mathcal{S}_{2}}$ is improper if and only if $\alpha \neq 0$, $\alpha \neq 1$, $\beta \neq 0$ and $\alpha \neq - \beta$. 
\end{example} \begin{proof} The definitions yield \begin{align*} &J_{A} =(A+\ensuremath{\operatorname{Id}})^{-1}=\big((\alpha +1) \ensuremath{\operatorname{Id}}\big)^{-1}=\frac{1}{\alpha +1} \ensuremath{\operatorname{Id}}; \quad R_{A}=2 J_{A}- \ensuremath{\operatorname{Id}}= \frac{2}{\alpha +1} \ensuremath{\operatorname{Id}} -\ensuremath{\operatorname{Id}}= \frac{1-\alpha}{\alpha +1} \ensuremath{\operatorname{Id}}; \\ &J_{B} =(B+\ensuremath{\operatorname{Id}})^{-1}=\big((\beta +1) \ensuremath{\operatorname{Id}}\big)^{-1}=\frac{1}{\beta +1} \ensuremath{\operatorname{Id}}; \quad R_{B}=2 J_{B}- \ensuremath{\operatorname{Id}}= \frac{2}{\beta +1} \ensuremath{\operatorname{Id}} -\ensuremath{\operatorname{Id}}= \frac{1-\beta}{\beta +1} \ensuremath{\operatorname{Id}}. \end{align*} Let $x \in \mathcal{H} \smallsetminus \{0\}$. Now \begin{subequations} \label{eq:xrarbrba} \begin{align} & x=R_{A}x \Longleftrightarrow x = \frac{1-\alpha}{\alpha +1}x \Longleftrightarrow 1= \frac{1-\alpha}{\alpha +1} \Longleftrightarrow \alpha =0;\\ & x= R_{B}x \Longleftrightarrow x= \frac{1-\beta}{\beta +1}x \Longleftrightarrow \beta =0;\\ & R_{A}x =R_{B}x \Longleftrightarrow \frac{1-\alpha}{\alpha +1}x = \frac{1-\beta}{\beta +1}x \Longleftrightarrow (1-\alpha)(\beta+1)=(\alpha+1)(1-\beta)\Longleftrightarrow \alpha =\beta ;\\ & x =R_{B}R_{A}x \Longleftrightarrow x= \frac{1-\alpha}{\alpha +1} \frac{1-\beta}{\beta +1}x \Longleftrightarrow (\alpha+1)(\beta+1)= (1-\alpha) (1-\beta) \Longleftrightarrow \alpha = -\beta;\\ & R_{A}x =R_{B}R_{A}x \Longleftrightarrow \frac{1-\alpha}{\alpha +1}x = \frac{1-\alpha}{\alpha +1} \frac{1-\beta}{\beta +1}x \Longleftrightarrow \alpha =1 ~\text{or}~ 1= \frac{1-\beta}{\beta +1} \Longleftrightarrow \alpha =1 ~\text{or}~ \beta =0. \end{align} \end{subequations} \enquote{$\Longrightarrow$}: According to the previous analysis, in both assertions, the contrapositives of the required results follow from \cref{prop:form:m2:Oper}.
\enquote{$\Longleftarrow$}: Assume $\alpha \neq 0$, $\beta \neq 0$ and $\alpha \neq \beta$, and let $x \in \mathcal{H} \smallsetminus \{0\}$. Then \begin{align*} \ensuremath{\operatorname{aff}} (\mathcal{S}_{1}(x)) & = \ensuremath{\operatorname{aff}} \{x, R_{A}x, R_{B}x\}= x +\ensuremath{{\operatorname{span}}} \{R_{A}x-x, R_{B}x-x\} \\ & =x + \ensuremath{{\operatorname{span}}}\Big\{\frac{-2\alpha}{\alpha +1}x,\frac{-2\beta}{\beta+1}x \Big\} \\ & = \mathbb{R}\cdot x. \end{align*} We observe that \begin{subequations} \label{eq:exam:MaxiMonoFail:yS1} \begin{align} & \Big(\exists y \in \ensuremath{\operatorname{aff}} (\mathcal{S}_{1}(x))\Big) \quad \norm{y- x} =\norm{y-R_{A}x}= \norm{y-R_{B}x} \\ \Longleftrightarrow & (\exists t \in \mathbb{R}) \quad \norm{t x- x} = \norm{t x- \frac{1-\alpha}{\alpha +1}x} =\norm{t x- \frac{1-\beta}{\beta +1}x} \\ \Longleftrightarrow & (\exists t \in \mathbb{R}) \quad |t -1|= \Big|t - \frac{1-\alpha}{\alpha +1} \Big|= \Big| t - \frac{1-\beta}{\beta +1} \Big|. \quad (\text{by}~x \neq 0) \end{align} \end{subequations} On the other hand, combining the assumptions with \cref{cor:mathbbRcard3} and \cref{eq:xrarbrba}, we obtain that \begin{align} \label{eq:exam:MaxiMonoFail:not} (\cancel{\exists} t \in \mathbb{R}) \quad |t -1|= \Big|t - \frac{1-\alpha}{\alpha +1} \Big|= \Big| t - \frac{1-\beta}{\beta +1} \Big|. \end{align} Hence, \begin{align*} (\forall x \in \mathcal{H} \smallsetminus \{0\}) \quad \ensuremath{\operatorname{C}}C{\mathcal{S}_{1}}x = \varnothing. \end{align*} Assume $\alpha \neq 0$, $\alpha \neq 1$, $\beta \neq 0$ and $\alpha \neq - \beta$. A similar proof shows that for every $x \in \mathcal{H} \smallsetminus \{0\}$, there is no point $y \in \ensuremath{\operatorname{aff}} (\mathcal{S}_{2}(x))$ such that $\norm{y- x} =\norm{y-R_{A}x}= \norm{y-R_{B}R_{A}x}$, which implies that $(\forall x \in \mathcal{H} \smallsetminus \{0\})$ $\ensuremath{\operatorname{C}}C{\mathcal{S}_{2}}x = \varnothing$.
\end{proof} Arguing as in the proof of the previous result, we also obtain the following: \begin{example} \label{exam:MaxiMonoConstFail} Assume that $\{0\} \subsetneqq \mathcal{H}$. Let $\{a,b\} \subseteq \mathbb{R}$. Set $A \equiv a $, i.e., $(\forall x \in \mathcal{H})$ $Ax =a$, and $B \equiv b $. Furthermore, set \begin{align*} \mathcal{S}_{1}=\{\ensuremath{\operatorname{Id}}, R_{A}, R_{B}\} \quad \text{and} \quad \mathcal{S}_{2} = \{\ensuremath{\operatorname{Id}}, R_{A}, R_{B}R_{A}\}. \end{align*} Then $\ensuremath{\operatorname{C}}C{\mathcal{S}_{1}}$ is improper if and only if $a \neq 0$, $b \neq 0$ and $a \neq b$. Moreover, $\ensuremath{\operatorname{C}}C{\mathcal{S}_{2}}$ is improper if and only if $a \neq 0$, $b \neq 0$ and $a \neq - b$. \end{example} The example above shows that there is no direct way to generalize the definition of $\ensuremath{\operatorname{C}}C{\mathcal{S}}$ to the theory of maximally monotone operators. \section*{Acknowledgments} HHB and XW were partially supported by NSERC Discovery Grants. \addcontentsline{toc}{section}{References} \end{document}
\begin{document} \title{Random walks on torus and random interlacements: macroscopic coupling and phase transition} \author[J. Černý]{Jiří Černý} \address{Jiří Černý,\newline Faculty of Mathematics, University of Vienna,\newline Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria} \email{[email protected]} \author[A. Teixeira]{Augusto Teixeira} \address{Augusto Teixeira, \newline Instituto Nacional de Matem\'atica Pura e Aplicada -- IMPA,\newline Estrada Dona Castorina 110, 22460-320, Rio de Janeiro, Brazil} \email{[email protected]} \date{\today} \begin{abstract} For $d\ge 3$ we construct a new coupling of the trace left by a random walk on a large $d$-dimensional discrete torus with the random interlacements on $\mathbb{Z}^d$. This coupling has the advantage of working up to \emph{macroscopic} subsets of the torus. As an application, we show a sharp phase transition for the diameter of the component of the vacant set on the torus containing a given point. The threshold where this phase transition takes place coincides with the critical value $u_\star(d)$ of random interlacements on $\mathbb Z^d$. Our main tool is a variant of the \emph{soft-local time} coupling technique of \cite{PT12}. \end{abstract} \maketitle \section{Introduction} \label{s:introduction} In this paper we study the trace of a simple random walk $X_n$ on a large $d$-dimensional discrete torus $\mathbb T_N^d= (\mathbb Z/N\mathbb Z)^d$ for $d \geq 3$. In particular, we investigate the percolative properties of its vacant set \begin{equation} \label{e:VNu} \mathcal V_N^u = \mathbb T_N^d \setminus \{X_0,\dots,X_{\lfloor uN^d \rfloor}\}, \end{equation} for a fixed $u\in[0,\infty)$ as $N$ tends to infinity. Intuitively speaking, the parameter $u$ plays the role of a density of the random walk trace. More precisely, for small values of $u$ and as $N$ grows, the vacant set occupies a large proportion of the torus.
Therefore, $\mathcal{V}^u_N$ should consist of a single large cluster together with small finite components. In contrast, for large values of $u$, the asymptotic density of $\mathcal{V}^u_N$ should be small and it should be fragmented into small pieces. In analogy with Bernoulli percolation, it is actually expected that there is a phase transition. Namely, there is a critical value $u_c(d)$ such that the first behavior holds true for all $u<u_c(d)$ and the second for all $u>u_c(d)$, with high probability as $N$ tends to infinity. The percolative properties of $\mathcal V^u_N$ have been studied in several recent works. In \cite{BS08}, the authors showed that, for large dimensions $d$ and small enough $u > 0$, the vacant set has a (unique, to some extent) connected component with a non-negligible density. In order to understand the vacant set $\mathcal V^u_N$ in more detail, Sznitman introduced in \cite{Szn10} a model of random interlacements, which can be viewed as an analogue of the random walk trace in the torus, but constructed on the infinite lattice $\mathbb{Z}^d$. In \cite{Szn10,SS09}, it was then shown that the vacant set of random interlacements exhibits a percolation phase transition at some level $u_\star(d)\in (0,\infty)$. It is believed that the critical threshold of the torus, $u_c(d)$, coincides with $u_\star(d)$. Later, in \cite{Win08}, it was established that as $N$ grows, the set $\mathcal{V}_N^u$ converges locally in law to the vacant set of random interlacements $\mathcal{V}^u$, but this did not have immediate consequences for the percolative behavior of $\mathcal V_N^u$. In \cite{TW11}, a more quantitative control of $\mathcal V^u_N$ in terms of $\mathcal V^u$ improved our understanding of the behavior of the largest connected component $\mathcal C^{\mathrm{max}}_{u,N}$ of $\mathcal V_N^u$.
In particular, it was shown that, for any dimension $d \geq 3$, with high probability as $N$ goes to infinity: \begin{itemize} \item for $u$ small enough, there is $\varepsilon >0$ such that \begin{equation*} |\mathcal C_{u,N}^{\mathrm{max}}|\ge \varepsilon N^d, \end{equation*} \item for $u>u_\star(d)$, \begin{equation*} |\mathcal C_{u,N}^{\mathrm{max}}|=o(N^d), \end{equation*} \item for $u$ large enough, for some $\lambda(u) >0$, \begin{equation*} |\mathcal C_{u,N}^{\mathrm{max}}|=O(\log^\lambda N). \end{equation*} \end{itemize} Note that this implies the existence of a certain transition in the asymptotic behavior of $\mathcal{V}^u_N$ as $u$ varies. However, it was not known until now where this transition occurs, whether it is sharp, or whether it is related to the model of random interlacements. The results of this paper shed more light on this question. Unfortunately, we are not able to control directly the volume of the largest connected component $\mathcal C_{u,N}^{\mathrm{max}}$. We thus define another observable that is better suited to our analysis. To this end, we let $P$ stand for the law of the simple random walk $(X_n)_{n\ge 0}$ on $\mathbb T_N^d$ started from its invariant distribution (which is uniform on $\mathbb T_N^d$), and write $\mathcal C_N(u)$ for the connected component of $\mathcal V_N^u$ containing some given point, say $0\in \mathbb T_N^d$. We define the observable \begin{equation} \eta_N(u) = P[\mathop{\mathrm{diam}}\nolimits \mathcal C_N(u) \ge N/4], \end{equation} where the diameter is understood in the Euclidean sense, not in the graph metric of $\mathcal C_N(u)$. Let us point out that the observable $\eta_N(u)$ is \emph{macroscopic}, that is, it depends on the properties of the vacant set $\mathcal V_N^u$ in a box of size comparable with $N$. The next theorem establishes a phase transition for this observable and gives its asymptotic behavior in terms of related quantities for random interlacements. 
\begin{theorem} \label{t:phasetransition} The observable $\eta_N(u)$ exhibits a phase transition at $u_\star(d)$. More precisely, for $u>u_\star (d)$, \begin{equation} \label{e:phasetransitionsubcritical} \lim_{N\to\infty} \eta_N(u)=0, \end{equation} and for $u < u_\star(d)$, \begin{equation} \label{e:phasetransitionsupercritical} \lim_{N\to\infty} \eta_N(u)=\eta (u)>0, \end{equation} where $\eta (u)$ is the probability that $0\in \mathbb Z^d$ is contained in the infinite component of the vacant set $\mathcal V^u$ of random interlacements at level $u$. \end{theorem} The main ingredient of the proof of Theorem~\ref{t:phasetransition} is a new coupling between $\mathcal V_N^u$ and $\mathcal V^u$ in macroscopic boxes of the torus, which is of independent interest. This is stated precisely in the following result. \begin{theorem} \label{t:toruscouplingweak} Let $\mathcal B_N = [0, (1-\delta )N]^d$ for some $\delta >0$. Then for every $u \ge 0$ and $\varepsilon >0$ there exist couplings $\mathbb Q_N$ of the random walk on $\mathbb T_N^d$ with the random interlacements such that \begin{equation} \lim_{N\to\infty} \mathbb Q_N\big[(\mathcal V^{u(1+\varepsilon )}\cap \mathcal B_N) \subset (\mathcal V_N^u \cap \mathcal B_N) \subset (\mathcal V^{u(1-\varepsilon )}\cap \mathcal B_N)\big] = 1. \end{equation} \end{theorem} We give a more quantitative version of this theorem later (see Theorem~\ref{t:toruscoupling}). Observe again that the box $\mathcal B_N$ is macroscopic, and that $|\mathcal B_N|/N^d$ can be made arbitrarily close to one. Theorem~\ref{t:toruscouplingweak} thus considerably improves on the best previously known coupling of these objects, which works in boxes of size $N^{1-\varepsilon}$, see \cite{TW11} (cf.~also \cite{Bel13} for another related coupling). 
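The observable $\eta_N(u)$ itself can be estimated by direct simulation. The sketch below (ours, purely illustrative; the parameters are arbitrary and the pairwise diameter computation is naive) repeatedly simulates the walk, explores the component of $0$ in the vacant set by breadth-first search, and measures its Euclidean diameter with distances taken around the torus:

```python
import math
import random
from collections import deque

def eta_estimate(N, d, u, trials=10, rng=None):
    """Monte Carlo estimate of eta_N(u) = P[diam C_N(u) >= N/4]."""
    rng = rng or random.Random(1)
    hits = 0
    for _ in range(trials):
        # trace of the walk started from the uniform distribution
        x = tuple(rng.randrange(N) for _ in range(d))
        visited = {x}
        for _ in range(int(u * N**d)):
            i = rng.randrange(d)
            x = x[:i] + ((x[i] + rng.choice((-1, 1))) % N,) + x[i + 1:]
            visited.add(x)
        origin = (0,) * d
        if origin in visited:
            continue                       # C_N(u) is empty for this trial
        # BFS exploration of the component of 0 in the vacant set
        comp, queue = {origin}, deque([origin])
        while queue:
            y = queue.popleft()
            for i in range(d):
                for s in (-1, 1):
                    z = y[:i] + ((y[i] + s) % N,) + y[i + 1:]
                    if z not in visited and z not in comp:
                        comp.add(z)
                        queue.append(z)
        # Euclidean distance measured around the torus
        def dist(a, b):
            return math.sqrt(sum(min(abs(p - q), N - abs(p - q)) ** 2
                                 for p, q in zip(a, b)))
        if any(dist(a, b) >= N / 4 for a in comp for b in comp):
            hits += 1
    return hits / trials
```

Even on very small tori one expects the estimate to be near one for small $u$ and near zero for large $u$, in line with the theorem above (although the sharp threshold is of course only visible as $N\to\infty$).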
The principal tool for the construction of the above coupling is a streamlined version of the technique of \emph{soft local times}, which was recently developed in \cite{PT12} in order to prove new decorrelation inequalities for random interlacements. This technique allows one to couple two Markov chains so that their ranges almost coincide. Our formulation, stated as Theorem~\ref{t:couplingmc} below, provides more explicit bounds on the probability that the coupling fails, and more importantly, it is well adapted to situations where one can estimate the mixing time of the chains in question. See the introduction of Section~\ref{s:coupling} for more details. Let us now briefly describe the organization of this paper. In Section~\ref{s:notation} we introduce some basic notation and recall several useful known results. In Section~\ref{s:coupling}, we extend the soft local times method and prove our main technical result on the coupling of ranges of Markov chains. The precise version of Theorem~\ref{t:toruscouplingweak}, giving a coupling between the random walk on $\mathbb{T}^d_N$ and the vacant set of random interlacements, is stated in Theorem~\ref{t:toruscoupling} in Section~\ref{s:toruscoupling}. Sections \ref{s:rwproperties}--\ref{s:number} provide estimates on the simple random walk, equilibrium measures, mixing times and the number of excursions of the walker which are needed in order to apply the results of Section~\ref{s:coupling}. Finally, Section~\ref{s:proofs} contains the proofs of our main results. In the appendix we include a suitable version of the classic Chernoff bounds on the concentration of additive functionals of Markov chains. \section{Notation and some results} \label{s:notation} Let us first introduce some basic notation to be used in the sequel. We consider the torus $\mathbb T_N^d=(\mathbb Z/N\mathbb Z)^d$, which we identify, for the sake of concreteness, with the set $\{0,\dots,N-1\}^d \subset \mathbb Z^d$. 
On $\mathbb Z^d$, we respectively denote by $|\,\cdot\,|$ and $|\,\cdot\,|_\infty$ the Euclidean and $\ell^\infty$-norms. For any $x\in\mathbb Z^d$ and $r\ge 0$, we let $B(x,r)=\{y\in \mathbb Z^d:|y-x|\le r\}$ stand for the Euclidean ball centered at $x$ with radius $r$. Given $K,U\subset \mathbb Z^d$, $K^c=\mathbb Z^d\setminus K$ stands for the complement of $K$ in $\mathbb Z^d$ and $\mathop{\mathrm{dist}}\nolimits(K,U)=\inf\{|x-y|:x\in K, y\in U\}$ for the Euclidean distance of $K$ and $U$. Finally, we define the inner boundary of $K$ to be the set $\partial K=\{x\in K:\exists y\in K^c, |y-x|=1\}$, and the outer boundary of $K$ as $\partial_e K= \partial (K^c)$. Analogous notation is used on $\mathbb T_N^d$. We endow $\mathbb Z^d$ and $\mathbb T_N^d$ with the nearest-neighbor graph structure. We write $P_x$ for the law on $(\mathbb T_N^d)^{\mathbb N}$ of the canonical simple random walk on $\mathbb T_N^d$ started at $x\in \mathbb T_N^d$, and denote the canonical coordinate process by $X_n$, $n\ge 0$. We use $P$ to denote the law of the random walk with a uniformly chosen starting point, that is $P=\sum_{x\in \mathbb T_N^d} N^{-d}P_x$. We write $P_x^{\mathbb Z^d}$ for the canonical law of the simple random walk on $\mathbb Z^d$ started from $x$, and (with a slight abuse of notation) $X_n$ for its coordinate process as well. Finally, $\theta_k$ denotes the canonical shift of the walk, defined on either $(\mathbb T_N^d)^{\mathbb N}$ or $(\mathbb Z^d)^{\mathbb N}$, \begin{equation} \theta_k (x_0,x_1,\dots) = (x_k,x_{k+1},\dots). \end{equation} Throughout the text we denote by $c$ positive finite constants whose value might change during the computations, and which may depend on the dimension $d$. Starting from Section~\ref{s:rwproperties}, the constants may additionally depend on the parameters $\gamma$ and $\alpha$ which we will introduce later (this will be mentioned again when appropriate). 
Given two sequences $a_N, b_N$, we write $a_N \asymp b_N$ to mean that $c^{-1} a_N \leq b_N \leq c a_N$ for some constant $c\ge 1$. For $K\subset \mathbb Z^d$ finite, as well as for $K\subset \mathbb T_N^d$, we use $H_K$, $\tilde H_K$ to denote the entrance and hitting times of $K$, \begin{equation} H_K=\inf\{k\ge 0: X_k\in K\}, \qquad \tilde H_K=\inf\{k\ge 1: X_k\in K\}. \end{equation} For $K\subset \mathbb Z^d$ we define the equilibrium measure of $K$ by \begin{equation} e_K(x) = P_x^{\mathbb Z^d}[\tilde H_K = \infty] \boldsymbol 1 \{x\in K\}, \qquad x\in \mathbb Z^d, \end{equation} and the capacity of $K$ by \begin{equation} \mathop{\mathrm{cap}}\nolimits(K) = e_K(K). \end{equation} For every finite $K$, $\mathop{\mathrm{cap}}\nolimits(K) < \infty$, which allows us to introduce the normalized equilibrium measure \begin{equation} \label{e:bare} \bar e_K(\cdot) = (\mathop{\mathrm{cap}}\nolimits(K))^{-1} e_K(\cdot). \end{equation} Finally, we give an explicit construction of the vacant set of random interlacements intersected with a finite set $K\subset \mathbb Z^d$. We build on some auxiliary probability space an i.i.d.~sequence $X^{(i)}$, $i\ge 1$, of simple random walks on $\mathbb Z^d$ with the initial distribution $\bar e_K$, and an independent Poisson process $(J_u)_{u\ge 0}$ with intensity $\mathop{\mathrm{cap}}\nolimits(K)$. The vacant set of the random interlacements (viewed as a process in $u\ge 0$) when intersected with $K$ has the law characterized by \begin{equation} \label{e:ridef} (\mathcal V^u\cap K)_{u\ge 0} \overset{\mathrm{law}}= \Big(K\setminus \bigcup_{1\le i \le J_u} \bigcup_{k\ge 0} \{X^{(i)}_k\}\Big)_{u\ge 0}, \end{equation} see, for instance, Proposition 1.3 and below (1.42) in \cite{Szn10}. \section{Coupling the ranges of Markov chains} \label{s:coupling} In this section we construct a coupling of two Markov chains so that their ranges almost coincide. 
A method to construct such couplings was recently introduced in \cite{PT12}, based on the so-called \emph{soft local times}. We will use the same method to construct the coupling, but propose a new way to estimate the probability that the coupling fails. This is necessary since the estimates in \cite{PT12} rely heavily on the fact that the Markov chains in consideration have `very strong renewals': the trajectory of the chain can easily be decomposed into i.i.d.~blocks (of possibly random length). This, together with bounds on the moment generating function corresponding to one block, allows them to obtain very good bounds on the error of the coupling, that is, on the probability that the ranges of the Markov chains are considerably different. In the present paper, we have in mind an application where this `very strong renewal' structure is not present. We hence need to find new estimates on the error of the coupling. These techniques combine the method of soft local times with quantitative Chernoff-type estimates on deviations of additive functionals of Markov chains. An estimate of this type suitable for our purposes is proved in the appendix. Similarly as in \cite{PT12}, we will use the regularity of the transition probabilities of the Markov chain to improve the bounds on the error of the coupling. In contrast to \cite{PT12}, this regularity will not be expressed by comparing the transition probability with indicator functions of large balls (see Theorem~4.9 of \cite{PT12}), but by controlling the variance of the transition probability. Note also that the estimates on the error of the coupling provided by Theorems~\ref{t:coupling}, \ref{t:couplingmc} are weaker than the ones obtained by the techniques of \cite{PT12}, when both techniques apply. This is due to the fact that the Chernoff-type estimates mentioned above give worst-case asymptotics and are not optimal in many situations. Let us now make the setting of this section precise. 
Let $\Sigma $ be a finite state space, $P=(p(x,y))_{x,y\in \Sigma }$ a Markov transition matrix, and $\nu $ a distribution on $\Sigma $. We assume that $P$ is irreducible, so there exists a unique $P$-invariant distribution $\pi $ on $\Sigma $. The mixing time $T$ corresponding to $P$ is defined by \begin{equation} T=\min\big\{n\ge 0:\max_{x\in \Sigma }\|P^n(x,\cdot)-\pi (\cdot)\|_{TV}\le \tfrac 14\big\}, \end{equation} where $\|\cdot\|_{TV}$ denotes the total variation distance, $\lVert \nu - \nu' \rVert_{TV} := (1/2) \sum_x |\nu(x) - \nu'(x)|$. We set \begin{equation} \pi_\star = \min_{z\in \Sigma } \pi (z). \end{equation} Let $\mu $ be an \textit{a priori} measure on $\Sigma $ with full support. (This measure is introduced for convenience only; it will simplify some formulas later. The estimates that we obtain do not depend on the choice of $\mu $.) Let $g:\Sigma \to [0,\infty)$ be the density of $\pi $ with respect to $\mu $, \begin{equation} \label{e:defg} g(x)=\frac {\pi (x)}{\mu (x)}, \qquad x\in \Sigma , \end{equation} and let further $\rho :\Sigma^2 \to [0,\infty)$ be the `transition density' with respect to $\mu $, \begin{equation} \label{e:transdensity} \rho(x,y)=\frac{p(x,y)}{\mu (y)}, \qquad x,y\in \Sigma . \end{equation} We use $\rho_y$ to denote the function $x \mapsto \rho (x,y)$, giving the arrival probability density at $y$ as we vary the starting point. For any function $f:\Sigma \to \mathbb R$, let $\pi (f) = \sum_{x\in \Sigma } \pi (x) f(x)$, and $\mathop{\mathrm{Var}}\nolimits_\pi f = \pi \big((f - \pi (f))^2\big)$. The following theorem provides a coupling of a Markov chain with transition matrix $P$ with an i.i.d.~sequence so that their ranges almost coincide. 
\begin{theorem} \label{t:coupling} There exists a probability space $(\Omega , \mathcal F, \mathbb Q)$ where one can construct a Markov chain $(Z_i)_{i\ge 0}$ with transition matrix $P$ and initial distribution $\nu $ and an i.i.d.~sequence $(U_i)_{i\ge 0}$ with marginal $\pi $ such that for any $\varepsilon$ satisfying \begin{equation} \label{e:epsas} 0<\varepsilon \le \frac 12 \wedge \min_{z\in \Sigma } \frac{\mathop{\mathrm{Var}}\nolimits_\pi \rho_z} {2\|\rho_z\|_\infty g(z)} \end{equation} and for any $n \ge 2 k(\varepsilon) T$ we have \begin{equation} \label{e:coupling} \mathbb Q\big[\mathcal G(n ,\varepsilon )^c\big] \le C\sum_{z\in \Sigma } \Big(e^{-c n\varepsilon ^2} + e^{-c n \varepsilon \frac{\pi (z)}{\nu (z)}}+ \exp\Big\{-\frac{c \varepsilon ^2 g(z)^2}{\mathop{\mathrm{Var}}\nolimits_\pi \rho_z }\,\frac{n}{k(\varepsilon )T}\Big\} \Big), \end{equation} where $c,C\in (0,\infty)$ are absolute constants, $\mathcal G(n,\varepsilon )$ is the `good' event \begin{equation} \mathcal G =\mathcal G(n,\varepsilon ) = \big\{\{U_i\}_{i=0}^{n(1-\varepsilon)} \subset \{Z_i\}_{i=0}^n \subset \{U_i\}_{i=0}^{n(1+\varepsilon) } \big\}, \end{equation} and \begin{equation} k(\varepsilon)=-\min_{z\in \Sigma } \log_2 \frac{\pi_\star \varepsilon^2 g(z)^2}{6 \mathop{\mathrm{Var}}\nolimits_\pi (\rho_z)}. \end{equation} \end{theorem} \begin{proof} To construct the coupling, we use the same procedure as in \cite{PT12}. Let $(\Omega , \mathcal F, \mathbb Q)$ be a probability space on which we are given a Poisson point process $\eta = (z_i ,v_i )_{i\ge 1}$ on $\Sigma \times [0,\infty)$ with intensity measure $\mu\otimes dx$. On this probability space we now construct a Markov chain $(Z_i)_{i\ge 0}$ and an i.i.d.~sequence $(U_i)_{i\ge 0}$ with the required properties. For a more detailed explanation of this construction, see \cite{PT12}. 
Let $G_{-1}(z)=0$, $z\in \Sigma $, and define inductively random variables $\xi_k\ge 0$, $Z_k\in \Sigma $, $V_k\ge 0$, and random functions $G_k:\Sigma \to [0,\infty)$, $k\ge 0$, \begin{align} \label{e:ZVa} \xi_k &= \inf\{ t\ge 0: \exists (z, v ) \in \eta \setminus \{(Z_i,V_i)\}_{i=0}^{k-1} \text{ s.t. } G_{k-1}(z )+ t \rho(Z_{k-1},z ) \ge v \},\\ G_k(z) &= G_{k-1}(z) + \xi_k \rho(Z_{k-1},z),\\ (Z_k,V_k) &= \text{the unique point $(z,v)\in \eta $ such that $G_k(z)=v$}, \label{e:ZV} \end{align} where we use the convention $\rho (Z_{-1},z)=\nu (z)/\mu (z)$. If the point satisfying $G_k(z)=v$ in \eqref{e:ZV} is not unique, we pick one arbitrarily. The details of the choice are unimportant, as this occurs with zero probability. Using a similar construction, on the same probability space, we further define random variables $U_k \in \Sigma $, $\tilde \xi_k\ge 0$, $W_k\ge 0$ and random functions $\tilde G_k:\Sigma \to [0,\infty)$, $k \ge 0$, \begin{align} \tilde\xi_k &= \inf\{ t\ge 0: \exists (z, v) \in \eta \setminus \{(U_i,W_i)\}_{i=0}^{k-1} \text{ s.t. } \tilde G_{k-1}(z)+ t g(z) \ge v \},\\ \tilde G_k(z) &= \tilde G_{k-1}(z) + \tilde\xi_k g(z),\\ (U_k,W_k) &= \text{the unique point $(z,v)\in \eta $ such that $\tilde G_k(z)=v$}, \end{align} where again $\tilde G_{-1} \equiv 0$. It follows from \cite[Section 4]{PT12} that $Z=(Z_k)_{k\ge 0}$ is a Markov chain with the required distribution, and $U = (U_k)_{k\ge 0}$ an i.i.d.~sequence with marginal $\pi $. Moreover, the sequences $(\xi_k)$ and $(\tilde \xi_k)$ are i.i.d.~with mean-one exponential marginals. The sequence $(\xi_k)$ is independent of $(Z_k)$, and similarly $(\tilde \xi_k) $ is independent of $(U_k)$. We now estimate the probability of $\mathcal G(n,\varepsilon )^c$. From the above construction it follows that $\mathbb Q$-a.s. 
\begin{equation} \begin{split} \label{e:ranges} \{Z_i\}_{i=0}^k &= \{z \in \Sigma: \text{ there exists $(z,v)\in \eta$ with } G_k(z)\ge v\},\\ \{U_i\}_{i=0}^k &= \{z \in \Sigma: \text{ there exists $(z,v)\in \eta$ with } \tilde G_k(z)\ge v\}. \end{split} \end{equation} Consider the following events \begin{equation} \begin{split} A^- &= \big\{\tilde G_{n(1-\varepsilon )} < (1-\tfrac \varepsilon 2)ng\big\},\\ A^+ &= \big\{\tilde G_{n(1+\varepsilon )} > (1+\tfrac \varepsilon 2)ng\big\},\\ B &= \big\{n (1-\tfrac \varepsilon 2)g \le G_n \le (1+\tfrac \varepsilon 2)ng\big\}. \end{split} \end{equation} Using \eqref{e:ranges}, it follows that $\mathcal G(n,\varepsilon )^c\subset (A^+)^c\cup (A^-)^c\cup B^c$. To bound the probability of the events $(A^\pm)^c$ and $B^c$, observe first that, by construction, $\tilde G_n = g \sum_{i=0}^n \tilde \xi_i$. As the $\tilde \xi_i$'s are i.i.d., a standard application of the exponential Chebyshev inequality yields the estimate \begin{equation} \label{e:poisson} \mathbb Q\big[ (A^{\pm})^c\big]\le e^{-c n\varepsilon ^2}. \end{equation} To estimate $\mathbb Q[B^c]$, we write $G_n(z)$ as \begin{equation} \label{e:additive} G_n(z)= \xi_0 \frac{\nu (z)}{\mu (z)}+\sum_{i=1}^n \xi_i \rho_z(Z_{i-1}) =\xi_0 \frac{\nu (z)}{\mu (z)}+\int_0^{\tau_n} \rho_z(\bar Z_t) dt, \end{equation} where $(\bar Z_t)_{t\ge 0}$ is a continuous-time Markov chain following the same trajectory as $Z$ with mean-one exponential waiting times, and $\tau_n$ is the time of the $n$-th jump of $\bar Z$. It follows that $\mathbb Q[B^c]$ can be estimated with the help of quantitative estimates on the deviations of additive functionals of Markov chains. An estimate suitable for our purposes is proved in the appendix. 
To apply this estimate we write \begin{equation} \begin{split} \mathbb Q[B^c] \le \sum_{z\in \Sigma } \bigg\{& \mathbb Q\big[\tfrac{\xi_0\nu (z)}{\mu (z)} \ge \tfrac 14\varepsilon n g(z)\big] + \mathbb Q\big[|\tau_n-n|\ge\tfrac 14 n\varepsilon \big] \\&+ \mathbb Q\Big[\int_0^{n(1+\varepsilon /4)} \rho_{z}(\bar Z_t) dt - n\big(1+\tfrac \varepsilon 4\big) g(z) \ge \tfrac 14 n\varepsilon g(z)\Big] \\&+ \mathbb Q\Big[\int_0^{n(1-\varepsilon /4)} \rho_{z}(\bar Z_t) dt - n\big(1-\tfrac \varepsilon 4\big) g(z) \le -\tfrac 14n\varepsilon g(z)\Big]\bigg\}. \end{split} \end{equation} The first term satisfies \begin{equation} \mathbb Q[\xi_0 \nu (z)/\mu(z)\ge \varepsilon n g(z)/4]=e^{-cn\varepsilon \frac{\pi(z)}{\nu(z)}}. \end{equation} The second term can be bounded using a large deviation argument as in \eqref{e:poisson}. The last two terms can be bounded using \eqref{e:apph} with $\delta = \varepsilon/(4\pm \varepsilon )$, $t=n(1\pm \varepsilon /4)$ and $f= \pm \rho_z$, using also the obvious identity $\pi (\rho _z)=g(z)$. The theorem then follows directly; the condition \eqref{e:epsas} is a direct consequence of the assumption \eqref{e:apphcond} of \eqref{e:apph}. \end{proof} The same technique can trivially be adapted to couple the ranges of two Markov chains: Let $P^1$, $P^2$ be transition matrices of two Markov chains on a common finite state space $\Sigma $ with respective mixing times $T^1$, $T^2$, but with the same invariant distribution $\pi $. Let further $\nu^1$, $\nu^2$ be two initial probability distributions on $\Sigma$. Similarly as above, we fix an a~priori measure $\mu $, and define $g(x)=\pi (x)/\mu (x)$, $\rho^i(x,y)=\mu (y)^{-1} p^i(x,y)$, $i=1,2$. 
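Before stating the two-chain version, we note that the inductive construction \eqref{e:ZVa}--\eqref{e:ZV} is short to implement. The sketch below is ours and purely illustrative; it uses the observation (justified by the analysis of \cite[Section 4]{PT12}) that, since the profile $G$ only grows, it suffices to track the lowest unused Poisson point above each site, generated lazily from exponential gaps:

```python
import random

def soft_local_time_chain(rho, nu, mu, sites, n, rng=None):
    """Sample Z_0, ..., Z_{n-1} of the Markov chain with transition density
    rho (w.r.t. the a-priori measure mu) and initial distribution nu, via
    the soft-local-time construction: the profile G grows by multiples of
    rho(Z_{k-1}, .) until it absorbs a new point of a Poisson process of
    intensity mu(z) dv on sites x [0, infinity)."""
    rng = rng or random.Random(2)
    G = {z: 0.0 for z in sites}
    # lowest unused Poisson height above each site (Exp(mu(z)) gaps)
    height = {z: rng.expovariate(mu(z)) for z in sites}
    chain, prev = [], None
    for _ in range(n):
        # convention rho(Z_{-1}, z) = nu(z)/mu(z) for the first step
        dens = (lambda z: nu(z) / mu(z)) if prev is None \
            else (lambda z: rho(prev, z))
        # xi_k: minimal t such that G(z) + t*dens(z) reaches an unused point
        xi, nxt = min(((height[z] - G[z]) / dens(z), z)
                      for z in sites if dens(z) > 0)
        for z in sites:
            G[z] += xi * dens(z)
        height[nxt] += rng.expovariate(mu(nxt))   # consume the point at nxt
        chain.append(nxt)
        prev = nxt
    return chain
```

As a sanity check, for a small irreducible chain the empirical occupation frequencies of the sampled trajectory should match its stationary distribution.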
\begin{theorem} \label{t:couplingmc} There exists a probability space $(\Omega ,\mathcal F, \mathbb Q)$ where one can define Markov chains $Z^1$, $Z^2$ with respective transition matrices $P^1$, $P^2$ and starting distributions $\nu^1$, $\nu^2$ such that for every $\varepsilon $ satisfying \begin{equation} \label{e:epsasmc} 0<\varepsilon \le \frac 12 \wedge \min_{i=1,2}\min_{z\in \Sigma } \frac{\mathop{\mathrm{Var}}\nolimits_\pi \rho^i_z} {2\|\rho^i_z\|_\infty g(z)} \end{equation} and $n\ge 2k(\varepsilon )(T^1\vee T^2)$ we have \begin{equation} \label{e:couplingmc} \mathbb Q\big[\tilde{\mathcal G}(n ,\varepsilon )^c\big] \le C\sum_{i=1,2}\sum_{z\in \Sigma } \Big(e^{-c n\varepsilon ^2} +e^{-c n\varepsilon \frac{\pi (z)}{\nu^i(z)}}+ \exp\Big\{-\frac{c \varepsilon ^2 g(z)^2}{\mathop{\mathrm{Var}}\nolimits_\pi \rho^i_z } \frac{n}{k(\varepsilon )T^i}\Big\} \Big), \end{equation} where $c,C\in (0,\infty)$ are absolute constants, $\tilde{\mathcal G}(n,\varepsilon )$ is the event \begin{equation} \label{e:goodmc} \tilde {\mathcal G}(n,\varepsilon ) = \big\{ \{Z^1_i\}_{i=1}^{n(1-\varepsilon )} \subset \{Z^2_i\}_{i=1}^n \subset \{Z^1_i\}_{i=1}^{n(1+\varepsilon )} \big\}, \end{equation} and \begin{equation} k(\varepsilon)=-\min_{i=1,2}\min_{z\in \Sigma } \log_2 \frac{\pi_\star \varepsilon^2 g(z)^2}{6 \mathop{\mathrm{Var}}\nolimits_\pi (\rho^i_z)}. \end{equation} \end{theorem} \section{Coupling the vacant sets} \label{s:toruscoupling} In this section we state the quantitative version of Theorem~\ref{t:toruscouplingweak} giving the coupling between the vacant sets of the random walk and the random interlacements in macroscopic subsets of the torus. We then show the connection between Theorem~\ref{t:couplingmc} and our main result by defining the relevant finite state space Markov chains. For technical reasons we will work with `rounded boxes' instead of the usual ones. 
Their advantage is that the common potential-theoretic quantities, like equilibrium measure and hitting probabilities, are smoother on them; similar smoothing was used in \cite[Section 7]{PT12}. Let \begin{equation} \label{e:gammachi} \gamma\in\Big(\frac 1 {d-1},1\Big) \quad\text{and}\quad \alpha \in \Big(0,\frac 14\Big) \end{equation} be two constants that remain fixed throughout the paper. Set $L=2N^\gamma +\alpha N$, and define the box $B$ with rounded corners \begin{equation} \label{e:BN} B=B_N=\bigcup_{x\in[L,N-L]^d\cap \mathbb Z^d} B(x,\alpha N). \end{equation} Let further $\Delta$ be the set of points at distance at least $N^\gamma$ from $B$, \begin{equation} \label{e:DeltaN} \Delta = \Delta_N=\Big(\bigcup_{x\in B_N} B(x,N^\gamma)\Big)^c, \end{equation} see Figure~\ref{f:BN} for an illustration. We view $B$ and $\Delta$ as subsets of $\mathbb Z^d$ as well as of $\mathbb T_N^d$ (identified with $\{0,\dots,N-1\}^d$). \begin{figure} \caption{The rounded box $B_N$ (dark gray), the `security zone' of width $N^\gamma $ (white), and the set $\Delta_N$ (light gray) in the torus $\mathbb T_N^d$.} \label{f:BN} \end{figure} We can now state the quantitative version of Theorem~\ref{t:toruscouplingweak}. \begin{theorem} \label{t:toruscoupling} Let $u>0$ and $\varepsilon_N$ be a sequence satisfying $\varepsilon_N\in(0,c_0)$ with $c_0$ sufficiently small. Set $\kappa = \gamma (d-1)-1>0$ and assume that $\varepsilon_N^2 \ge c N^{\delta-\kappa } $ for some $\delta >0$. 
Then there exists a coupling $\mathbb Q$ of $\mathcal V_N^u$ with $\mathcal V^{u(1\pm \varepsilon_N)}$ such that for every $N$ large enough \begin{equation} \begin{split} \label{e:c1} \mathbb Q\big[(\mathcal V^{u(1-\varepsilon_N)} \cap B_N) \supset (\mathcal V_N^{u} \cap B_N) \supset (\mathcal V^{u(1+\varepsilon_N)} \cap B_N)\big] \ge 1- C_1 e^{-C_2 N^{\delta '}} \end{split} \end{equation} for some constants $\delta '>0$ and $C_1,C_2\in (0,\infty)$ depending on $u$, $\delta $, $\gamma $ and $\alpha $. \end{theorem} Theorem~\ref{t:toruscoupling} will be proved with the help of Theorem~\ref{t:couplingmc}. To this end we now introduce the relevant Markov chains which will be coupled together later. The first Markov chain encodes the excursions of the random walk on the torus into the rounded box $B$. More precisely, let $R_i$, $D_i$ be the successive excursion times between $B$ and $\Delta $ of the random walk $X_n$ on $\mathbb T_N^d$ defined by $D_0=H_\Delta $ and for $i\ge 1$ inductively \begin{equation} \begin{split} \label{e:excursions} R_i&=H_B\circ \theta_{D_{i-1}} + D_{i-1},\\ D_i&=H_\Delta \circ \theta_{R_i} + R_i. \end{split} \end{equation} We define the process $Y_i=(X_{R_i},X_{D_i})\in \partial B \times \partial\Delta =:\Sigma $, $i\ge 1$. By the strong Markov property of $X$, $(Y_i)_{i\ge 1}$ is a Markov chain on $\Sigma $ with transition probabilities \begin{equation} \label{e:Y_i} P[Y_{n+1}= \vec y | Y_n=\vec x] = P_{x_2}[X_{H_B }=y_1]P_{y_1}[X_{H_\Delta } = y_2], \end{equation} for every $\vec x=(x_1,x_2)$ and $\vec y=(y_1,y_2)\in \Sigma$, and with initial distribution \begin{equation} \label{e:nuY} \nu_Y(\vec x)=P[X_{R_1}=x_1, X_{D_1}=x_2] = P[X_{R_1}=x_1]P_{x_1}[X_{H_\Delta }=x_2]. \end{equation} The second Markov chain, encoding the behavior of the random interlacements in $B$, is defined similarly by considering separately the excursions of every random walk trajectory of the random interlacements which enters $B$, cf.~\eqref{e:ridef}. 
Let $(X^{(i)})_{i\ge 1}$ be a $P^{\mathbb Z^d}_{\bar e_B}$-distributed i.i.d.~sequence, where $\bar e_B$ is the normalized equilibrium measure of $B$ introduced in \eqref{e:bare}. For every $i\ge 1$, set $R_1^{(i)}=0$ and define $D_j^{(i)}$, $R_j^{(i)}$, $j\ge 1$, analogously to \eqref{e:excursions} to be the successive departure and return times between $B$ and $\Delta $ of the random walk $X^{(i)}$. Set \begin{equation} \label{e:Ti} T^{(i)}=\sup\{j:R^{(i)}_j<\infty\} \end{equation} to be the number of excursions of $X^{(i)}$ between $B$ and $\Delta $, which is a.s.~finite. Finally, let $(Z_k)_{k\ge 1}$ be the sequence of the starting and ending points of these excursions, \begin{equation} \label{e:riexc} Z_k=(X^{(i)}_{R^{(i)}_j},X^{(i)}_{D^{(i)}_j}) \qquad \text{for $i\ge 1$ and $1\le j \le T^{(i)}$ given by } k = \sum_{n=1}^{i-1}T^{(n)}+j. \end{equation} The strong Markov property for the $X^{(i)}$'s and their independence imply that $Z_k$ is a Markov chain on $\Sigma $ with transition probabilities \begin{equation} \begin{split} \label{e:distZ} P & [Z_{n+1}= \vec y | Z_n=\vec x]\\ & = \big(P^{\mathbb Z^d}_{x_2}[H_B<\infty, X_{H_B}=y_1] + P^{\mathbb Z^d}_{x_2}[H_B=\infty]\bar e_B(y_1)\big) P^{\mathbb Z^d}_{y_1}[X_{H_\Delta }=y_2] \end{split} \end{equation} for every $\vec x,\vec y\in \Sigma $, and with initial distribution \begin{equation} \nu_Z(\vec x)=\bar e_B(x_1)P^{\mathbb Z^d}_{x_1}[X_{H_\Delta}=x_2]. \end{equation} To apply Theorem~\ref{t:couplingmc}, we need to estimate all relevant quantities for the Markov chains $Y$ and $Z$. This is the content of the following four sections. \emph{From now on, all constants $c$ appearing in the text may depend on the dimension~$d$ and on the constants $\alpha$ and $\gamma$ defined in \eqref{e:gammachi}.} \section{Technical estimates} \label{s:rwproperties} In this section we show several estimates on potential-theoretic quantities related to rounded boxes. 
Let $\bar e_B^\Delta $ be the normalized equilibrium measure on $B$ for the walk killed on~$\Delta $, \begin{equation} \label{e:eDelta} \bar e_B^\Delta (x) = \frac {\boldsymbol 1 _{x\in \partial B}} { \mathop{\mathrm{cap}}\nolimits_\Delta(B) } P_x[\tilde H_B > H_\Delta ], \end{equation} where \begin{equation} \label{e:capDelta} \mathop{\mathrm{cap}}\nolimits_\Delta(B) = \sum_{x\in \partial B} P_x[\tilde H_B > H_\Delta] \end{equation} is the associated capacity. We first show that $\bar e_B^\Delta $ is comparable with the uniform distribution on $\partial B$ and give the order of $\mathop{\mathrm{cap}}\nolimits_\Delta(B)$. \begin{lemma} \label{l:baredelta} There is $c\in (0,1)$ such that \begin{equation} c N^{d-1-\gamma } \le \mathop{\mathrm{cap}}\nolimits_\Delta(B) \le c^{-1}N^{d-1-\gamma }, \end{equation} and for every $x\in \partial B$ \begin{equation} c N^{1-d}\le \bar e^\Delta_B(x) \le c^{-1} N^{1-d}. \end{equation} \end{lemma} \begin{proof} In view of \eqref{e:eDelta}, \eqref{e:capDelta}, to prove the lemma it is sufficient to show that, uniformly in $x\in \partial B$, \begin{equation} \label{e:pp} c N^{-\gamma } \le P_x[\tilde H_B> H_\Delta ] \le c^{-1}N^{-\gamma }. \end{equation} For the lower bound, let $\mathcal H_x$ be the $(d-1)$-dimensional hyperplane `tangent' to $\partial B$ containing $x$, and let $\mathcal H'_x$ be the hyperplane parallel to $\mathcal H_x$ tangent to $\partial \Delta $ (see Figure~\ref{f:Hx}). Then \begin{equation} P_x[\tilde H_B> H_\Delta ] \ge P_x[\tilde H_{\mathcal H_x}> H_{\mathcal H'_x}] \ge c N^{-\gamma } \end{equation} where the last inequality follows by observing the projection of $X$ on the direction perpendicular to $\mathcal H_x$ and the usual martingale argument. \begin{figure} \caption{The hyperplanes $\mathcal H_x$ and $\mathcal H'_x$.} \label{f:Hx} \end{figure} The upper bound in \eqref{e:pp} is proved similarly. 
We consider a ball $\mathcal G_x$ contained in $B$ with radius $\alpha N$ tangent to $\partial B$ at $x$, and another ball $\mathcal G'_x$ with radius $\alpha N+N^\gamma $ concentric with $\mathcal G_x$. Then \begin{equation} P_x[\tilde H_B> H_\Delta ] \le P_x[\tilde H_{\mathcal G_x}> H_{\mathcal G'_x}] \le c N^{-\gamma }, \end{equation} using \cite[Proposition~1.5.10]{Law91} and the fact that $\alpha N\gg N^\gamma $. This completes the proof. \end{proof} For the usual equilibrium measure we have similar estimates. \begin{lemma} \label{l:ebar} There is a constant $c$ such that for every $x\in \partial B$ \begin{equation} \label{e:ebari} c N^{1-d}\le \bar e_B(x) \le c^{-1} N^{1-d}, \end{equation} and \newconstant{c:exit_B} \begin{equation} \label{e:ebarii} \inf_{y\in \partial \Delta } P^{\mathbb Z^d}_y[H_B=\infty]\ge \useconstant{c:exit_B} N^{\gamma -1}. \end{equation} \end{lemma} \begin{proof} Since $\mathop{\mathrm{cap}}\nolimits(B_N) \asymp N^{d-2}$ (see \cite{Law91}, (2.16) p.53), in order to prove the lower bound in \eqref{e:ebari} we need to show that $P_x[{\tilde H_B=\infty}]\ge c N^{-1}$. This can be proved by similar arguments as above. We fix the hyperplane $\mathcal H_x$ as previously, and let $\mathcal H'_x$ be the hyperplane parallel to $\mathcal H_x$ at distance $N$. Then \begin{equation} \label{e:ebarlb} P_x[\tilde H_B = \infty] \ge P_x[\tilde H_{\mathcal H_x}> H_{\mathcal H'_x}] \cdot \inf_{y\in \mathcal H'_x} P_y[H_B=\infty]. \end{equation} By the same reasoning as above, the first term is bounded from below by $c N^{-1}$, and the second term is of constant order, as follows easily from \cite[Proposition~1.5.10]{Law91} again. To prove the upper bound of \eqref{e:ebari}, we need to show that $P_x[\tilde H_B=\infty]\le c N^{-1}$. To this end fix $\mathcal G_x$ as in the previous proof. 
Then \begin{equation} P_x[\tilde H_B=\infty]\le P_x[\tilde H_{\mathcal G_x}=\infty]\le c N^{-1} \end{equation} by e.g.~\cite[Lemma~7.5]{PT12}. Finally, using the same notation as in \eqref{e:ebarlb}, for $y \in \partial \Delta$, \begin{equation} P_y[H_B=\infty]\ge P_y[H_{\mathcal H_x}> H_{\mathcal H'_x}] \, \inf_{z\in \mathcal H'_x} P_z[H_B=\infty]. \end{equation} The first term is larger than $c N^{\gamma -1}$ by a martingale argument, and the second is of constant order, which proves \eqref{e:ebarii} and completes the proof. \end{proof} Finally, we control hitting probabilities of boundary points of $B$. \begin{lemma} \label{l:hittingprobas} There is a $c<\infty$ such that for every $x\in \partial \Delta $ and $y\in \partial B$ \begin{align} \label{e:ubtorus} P_x[X_{H_B}=y]&\le c N^{-\gamma (d-1)},\\ \label{e:ubzd} P^{\mathbb Z^d}_x[X_{H_B}=y]&\le c N^{-\gamma (d-1)}. \end{align} In addition, for every $y\in \partial B$, there are at least $c^{-1}N^{\gamma (d-1)}$ points $x\in \partial \Delta $ such that \begin{align} \label{e:lbtorus} P_x[X_{H_B}=y]&\ge c^{-1} N^{-\gamma (d-1)},\\ \label{e:lbzd} P^{\mathbb Z^d}_x[X_{H_B}=y]&\ge c^{-1} N^{-\gamma (d-1)}. \end{align} \end{lemma} \begin{proof} The lower bounds \eqref{e:lbtorus}, \eqref{e:lbzd} follow directly from \cite[Lemma 7.6(ii)]{PT12} by taking $s=N^\gamma $. The upper bound \eqref{e:ubzd} is a consequence of \cite[Lemma 7.6(i)]{PT12}. Finally, to show \eqref{e:ubtorus}, let $y_1, y_2\in \partial B$ be two points at distance smaller than $\delta N^{\gamma }$ for some sufficiently small $\delta $. By \cite[Proposition 7.7]{PT12}, there is a `surface' $\hat D=\hat D(y_1,y_2)$ in $\mathbb Z^d$ separating $\{y_1,y_2\}$ from $x$ so that for every $z\in \hat D\setminus B$ \begin{equation} \label{e:comparable} c P_z[X_{H_B}=y_1]\le P_z[X_{H_B}=y_2]\le c^{-1}P_z[X_{H_B}=y_1] \end{equation} for some sufficiently small $c$ independent of $y_1$, $y_2$. 
Since every path in $\mathbb T_N^d\setminus B$ from $x$ to $\{y_1,y_2\}$ must pass through $\hat D\setminus B$, using the strong Markov property on $H_{\hat D}$, it follows that $z$ can be replaced by $x$ in \eqref{e:comparable}. As a consequence, for every $y\in \partial B$ there are at least $c(\delta N^{\gamma })^{d-1}$ points $y'$ on $\partial B$ with \begin{equation} P_x[X_{H_B}=y']\ge c P_x[X_{H_B}=y], \end{equation} from which \eqref{e:ubtorus} easily follows. \end{proof} \section{Equilibrium measure} \label{s:equilibrium} In this section we show that the equilibrium measures of the Markov chains $Y$ and $Z$ that we defined in Section~\ref{s:toruscoupling} coincide, as required by Theorem~\ref{t:couplingmc}. This may sound surprising at first, since the periodic boundary conditions in the torus are felt in the exit probabilities of macroscopic boxes. \begin{lemma} \label{l:invariantmeasure} Let $\pi $ be the probability measure on $\Sigma $ given by \begin{equation} \label{e:pi} \pi (\vec x) = \bar e_B^\Delta (x_1) P_{x_1}[X_{H_\Delta} =x_2], \qquad \vec x=(x_1,x_2)\in \Sigma . \end{equation} Then $\pi $ is the invariant measure for both $Y$ and $Z$. \end{lemma} \begin{proof} To see that $\pi $ is invariant for $Y$ consider the stationary random walk $(X_i)_{i\in \mathbb Z}$ (note the doubly infinite time indices) on $\mathbb T_N^d$.
Let $\mathcal R$ be the set of `returns to $B$' for this walk, \begin{equation} \mathcal R = \{n\in \mathbb Z:X_n\in B, \exists m<n, X_m\in \Delta , \{X_{m+1},\dots,X_{n-1}\}\subset (B\cup \Delta)^c\}, \end{equation} $\mathcal D$ the set of `departures' \begin{equation} \mathcal D = \{n\in \mathbb Z: X_n\in \Delta , \exists m \in \mathcal R, m<n, \{X_m,\dots,X_{n-1}\}\subset \Delta^c\}, \end{equation} and write $\mathcal R=\{\bar R_i\}_{i\in \mathbb Z}$, $\mathcal D = \{\bar D_i\}_{i\in \mathbb Z}$ so that $\bar R_i< \bar D_i < \bar R_{i+1}$, $i\in \mathbb Z$, and \begin{equation} \label{e:RDbarRD} \bar R_0 < \inf\{i\ge 0: X_i\in \Delta \} < \bar R_1. \end{equation} Observe that by this convention the sequence $(\bar R_i,\bar D_i)_{i\ge 1}$ agrees with $(R_i,D_i)_{i\ge 1}$ defined in \eqref{e:excursions}. Remark also that $\bar R_0$ might be non-negative in general, but $\bar R_{-1}<0$. Due to the stationarity and the reversibility of $X$, for every $\vec x= (x_1,x_2)$, \begin{equation} \begin{split} \label{e:calR} P&[n\in \mathcal R, X_n = x_1] \\&= P[X_n=x_1, \exists m<n, X_m\in \Delta, \{X_{m+1},\dots,X_{n-1}\}\subset (B\cup \Delta )^c] \\& = N^{-d} P_{x_1}[\tilde H_B > H_\Delta ]. \end{split} \end{equation} By the ergodic theorem, the stationary measure $\pi_Y$ of $Y$ satisfies \begin{equation} \pi_Y (\{x_1\}\times \partial \Delta ) = \lim_{k\to \infty}\frac 1k \sum_{i=1}^k \boldsymbol 1 \{X_{R_i}=x_1\} = \lim_{m\to\infty} \frac{m^{-1}\sum_{n=1}^m \boldsymbol 1 \{n\in \mathcal R, X_n=x_1\}} {m^{-1}\sum_{n=1}^m \boldsymbol 1 \{n\in \mathcal R\}}, \end{equation} where we used the observation below \eqref{e:RDbarRD} for the last equality. Applying the ergodic theorem for the numerator and denominator separately and using \eqref{e:calR} yields \begin{equation} \pi_Y (\{x_1\}\times \partial \Delta ) = \frac{P_{x_1}[\tilde H_B>H_\Delta ]} {\sum_{y\in \partial B}P_{y}[\tilde H_B>H_\Delta ]} = \bar e_B^\Delta (x_1).
\end{equation} By the strong Markov property, $\pi_Y(\vec x) = \pi_Y(\{x_1\}\times \partial \Delta ) P_{x_1}[X_{H_\Delta}=x_2]$ and thus $\pi_Y=\pi $ as claimed. We now consider the Markov chain $Z$. This chain is defined from the i.i.d.~sequence of random walks $X^{(i)}$. Each of these random walks gives rise to a random-length block of excursions distributed as $\{(X^{(1)}_{R^{(1)}_i},X^{(1)}_{D^{(1)}_i}):i=1,\dots,T^{(1)}\}$. The invariant measure $\pi_Z$ of $Z$ can thus be written as \begin{equation} \pi_Z(\vec x) = \frac 1 {E^{\mathbb Z^d}_{\bar e_B} T^{(1)}} E^{\mathbb Z^d}_{\bar e_B}\bigg[\sum_{i=1}^{T^{(1)}} \boldsymbol 1 _{X^{(1)}_{R^{(1)}_i}=x_1}\bigg] {P_{x_1}[X_{H_\Delta }=x_2]}, \qquad \vec x = (x_1,x_2). \end{equation} To show that $\pi_Z=\pi $ it is thus sufficient to show that the middle term is proportional to $P_{x_1}[\tilde H_B > H_\Delta ]$, since the first term will then be the correct normalizing factor. To simplify the notation we write $X$, $T$, $R_j$ for $X^{(1)}$, $T^{(1)}$, $R_j^{(1)}$, and extend $X$ to a two-sided random walk on $\mathbb Z^d$ by requiring the law of $(X_{-i})_{i\ge 0}$ to be $P^{\mathbb Z^d}_{X_0}[\, \cdot \, | \tilde H_B =\infty]$, conditionally independent of $(X_i)_{i\ge 0}$. We denote by $L=\sup\{n:X_n\in B\}$ the time of the last visit of~$X$ to~$B$. Then, \begin{equation} \begin{split} \label{e:Tia} &E^{\mathbb Z^d}_{\bar e_B}\Big[\sum_{j=1}^{T} \boldsymbol 1 _{X_{R_j}=x_1}\Big] = \sum_{y\in \partial B}\sum_{z\in \partial B} \bar e_B(y) E^{\mathbb Z^d}_y\Big[\boldsymbol 1 _{X_L=z}\sum_{j=1}^{T} \boldsymbol 1 _{X_{R_j}=x_1}\Big] \\&= \sum_{y\in \partial B}\sum_{z\in \partial B} \sum_{n=0}^\infty \bar e_B(y) P^{\mathbb Z^d}_y\bigg[ \begin{aligned} &X_n=x_1, X_L=z,\\&\exists m\in \mathbb Z: m<n, X_m\in \Delta , \{X_{m+1},\dots,X_{n-1}\}\subset (B\cup \Delta)^c \end{aligned} \bigg].
\end{split} \end{equation} According to \cite[Proposition~1.8]{Szn12}, under $P^{\mathbb Z^d}_{\bar e_B}$, $X_L$ also has distribution $\bar e_B$. Hence, by reversibility, this equals \begin{equation} \begin{split} &= \sum_{y\in \partial B}\sum_{z\in \partial B} \sum_{n=0}^\infty \bar e_B(z) P^{\mathbb Z^d}_z\bigg[ \begin{aligned} &X_n=x_1, X_L=y,\\&\exists m>n: X_m\in \Delta , \{X_{n+1},\dots,X_{m-1}\}\subset (B\cup \Delta)^c \end{aligned} \bigg] \\&= \sum_{z\in \partial B} \sum_{n=0}^\infty \bar e_B(z) P^{\mathbb Z^d}_z\bigg[ \begin{aligned} &X_n=x_1, \\&\exists m>n: X_m\in \Delta , \{X_{n+1},\dots,X_{m-1}\}\subset (B\cup \Delta)^c \end{aligned} \bigg] \\&= \sum_{z\in \partial B} \sum_{n=0}^\infty \bar e_B(z) P^{\mathbb Z^d}_z[X_n=x_1]P_{x_1}[\tilde H_B > H_\Delta ]. \end{split} \end{equation} Introducing the Green function $g(x,y)=\sum_{n=0}^\infty P^{\mathbb Z^d}_x[X_n=y]$ and using the identity $\sum_{z} e_B(z)g(z,x)=1$ (see \cite[Proposition 1.8]{Szn12}), this equals \begin{equation} \label{e:Tib} = \sum_{z\in \partial B} \bar e_B(z) g(z,x_1) P_{x_1}[\tilde H_B > H_\Delta ] = P_{x_1}[\tilde H_B > H_\Delta ]/\mathop{\mathrm{cap}}\nolimits(B). \end{equation} This shows the required proportionality and completes the proof of the lemma. \end{proof} We will need the following estimate on the measure $\pi $. \begin{lemma} \label{l:pi_dB} For every $y\in \partial \Delta $ \begin{equation} \pi (\partial B \times \{y\}) \le C N^{1-d}. \end{equation} \end{lemma} \begin{proof} By similar arguments as in the proof of Lemma~\ref{l:invariantmeasure}, using the same notation, \begin{equation} \begin{split} P[n\in \mathcal D, X_n=y]&=P\big[X_n=y,\exists\,m<n,X_m\in B, \{X_{m+1},\dots,X_{n-1}\}\subset (B \cup \Delta )^c\big] \\&=N^{-d} P_y[\tilde H_\Delta > H_B] \\&\le c N^{-d-\gamma }, \end{split} \end{equation} since, by the same argument as in the proof of Lemma~\ref{l:baredelta}, $P_y[\tilde H_\Delta > H_B]\le c N^{-\gamma }$.
Further, \begin{equation} P(n\in \mathcal D)=P(n\in \mathcal R)=\sum_{x\in \partial B} N^{-d} P_x[\tilde H_B > H_\Delta ] \asymp N^{-\gamma -1}, \end{equation} by the estimates in the proof of Lemma~\ref{l:baredelta} again. Therefore, \begin{equation} \pi (\partial B \times \{y\})=P[X_n=y|n\in \mathcal D] \le c N^{1-d}, \end{equation} which completes the proof. \end{proof} \section{Mixing times} \label{s:mixing} The next ingredients of Theorem~\ref{t:couplingmc} are the mixing times $T_Y$ and $T_Z$ of the Markov chains $Y$ and $Z$. They are estimated in the following lemma. \begin{lemma} \label{l:mixingtimes} There is a constant $c$ such that \begin{align} T_Z &\le c N^{1-\gamma },\label{e:CPRI} \\ T_Y &\le c N^{1-\gamma}. \label{e:CPRW} \end{align} \end{lemma} \begin{proof} To bound the mixing times we repeatedly use the following lemma, which can be found e.g.~in \cite[Corollary~5.3]{LPW09}. \begin{lemma} \label{l:couplingmixing} Let $(\mathcal X_i)_{i\ge 0}$ be an arbitrary Markov chain on a finite state space $\Sigma $. Assume that for every $x,y\in \Sigma $ there exists a coupling $Q_{x,y}$ of two copies $\mathcal X, \mathcal X'$ of the chain starting respectively from $x$ and $y$, such that \begin{equation} \label{e:boundmix} \max_{x,y \in \Sigma } Q_{x,y} [\mathcal X_n \neq \mathcal X'_n] \leq 1/4. \end{equation} Then $T_{\mathcal X} \le n$. \end{lemma} To show \eqref{e:CPRI}, we thus consider two copies $Z_i$, $Z_i'$ of the chain $Z$ starting respectively in $\vec x, \vec x'\in \Sigma $ and define the coupling $Q_{\vec x,\vec x'}$ between them as follows. Let $(\xi_i)_{i\ge 0}$ be a sequence of i.i.d. Bernoulli random variables with $P[\xi_i=1]=\useconstant{c:exit_B} N^{\gamma -1}=:p_N$, where the constant $\useconstant{c:exit_B}$ is as in~\eqref{e:ebarii}. Given $Z_i=\vec x_i$, $Z'_i=\vec x_i'$, and given $\xi_i=1$ we choose $Z_{i+1}=Z'_{i+1}$ distributed as $\nu(\vec x) =\bar e_B(x_1)P_{x_1}[X_{H_\Delta } = x_2]$.
On the other hand, when $\xi_i=0$, we choose $Z_{i+1}$ and $Z'_{i+1}$ independently with respective distributions $\mu_{\vec x_i} $ and $\mu_{\vec x_i'}$, where (cf.~\eqref{e:distZ}) \begin{equation} \mu_{\vec x}(\vec y) = \big\{P^{\mathbb Z^d}_{x_2}[H_B<\infty, X_{H_B}=y_1] + (P^{\mathbb Z^d}_{x_2}[H_B=\infty]-p_N)\bar e_B(y_1)\big\} \frac{P^{\mathbb Z^d}_{y_1}[X_{H_\Delta }=y_2]}{1-p_N}. \end{equation} The bound \eqref{e:ebarii} ensures that this is a well-defined probability distribution. If $Z_i=Z'_i$ for some $i$, then we let them move together, $Z_j=Z'_j$ for all $j\ge i$. It follows that \begin{equation} \max_{\vec x,\vec x'}Q_{\vec x,\vec x'}[Z_i\neq Z'_i]\le \mathbb P[\xi_j =0 \,\forall j<i] = (1-p_N)^i. \end{equation} Choosing now $i=c N^{1-\gamma }$ with $c$ sufficiently large and using Lemma~\ref{l:couplingmixing} yields \eqref{e:CPRI}. To show \eqref{e:CPRW}, let $G=G_N=\{x\in B_N: \mathop{\mathrm{dist}}\nolimits(x,\partial B_N)\ge \alpha N/2\}$. Intuitively, the excursions of the random walk into $G$ will play the same role as the `excursions of the random interlacements to infinity' played in the proof of \eqref{e:CPRI}. We need two technical claims. \begin{claim} \label{c:YregenA} For some constant $c_1>0$ and all $N$ large, \begin{equation} \inf_{x\in \partial B}P_x[H_G<H_\Delta ]\ge c_1 N^{\gamma -1}. \end{equation} \end{claim} \begin{proof} Similarly as in Section~\ref{s:rwproperties}, let $\mathcal G_x$ be the ball with radius $\alpha N$ contained in $B$ tangent to $\partial B$ at $x$, and let $\mathcal G^1_x$, $\mathcal G^2_x$ be the balls concentric with $\mathcal G_x$ with radius $\alpha N/2$ and $\alpha N + N^\gamma $, respectively. Then $\mathcal G^2_x\subset \mathbb T_N^d\setminus \Delta$, and $\mathcal G^1_x\subset G$. Hence, using again \cite[Proposition~1.5.10]{Law91}, \begin{equation} P_x[H_G<H_\Delta]\ge P_x[H_{\mathcal G_x^1}<H_{\mathcal G_x^2}]\ge c N^{\gamma -1}, \end{equation} which shows the claim.
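In more quantitative terms, the order $N^{\gamma -1}$ can be read off from the Green-function asymptotics behind \cite[Proposition~1.5.10]{Law91}; a sketch for $d\ge 3$, writing $|x|=\alpha N$ for the distance of $x$ to the common centre of $\mathcal G^1_x$ and $\mathcal G^2_x$: \begin{equation*} P_x[H_{\mathcal G_x^1}<H_{\mathcal G_x^2}] \asymp \frac{|x|^{2-d}-(\alpha N+N^\gamma )^{2-d}}{(\alpha N/2)^{2-d}-(\alpha N+N^\gamma )^{2-d}} \asymp \frac{N^{\gamma }(\alpha N)^{1-d}}{(\alpha N)^{2-d}} \asymp N^{\gamma -1}, \end{equation*} where the numerator is estimated by the mean value theorem, using $N^\gamma \ll N$.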
\end{proof} \begin{claim} \label{c:YregenB} For some $c_2<\infty$ and all $N$ large, \begin{equation} \sup_{x\in \partial G}P_x[X_{H_\Delta } = y] \le c_2 \inf_{x\in \partial G}P_x[X_{H_\Delta } = y] \qquad \text{for all $y\in \partial \Delta $.} \end{equation} \end{claim} \begin{proof} For every $y\in \partial \Delta$, the function $x\mapsto P_x[X_{H_\Delta}=y]$ is harmonic on $\mathbb T_N^d\setminus \Delta$. The claim then follows by the Harnack principle, see e.g.~\cite[Theorem~1.7.6]{Law91}. \end{proof} We continue the proof of \eqref{e:CPRW}. For $x\in \partial B$, let $\nu_x(\cdot)=P_x[X_{H_{G\cup \Delta}}\in {}\cdot{}]$. By Claim~\ref{c:YregenA}, $\nu_x(\partial G)\ge c_1 N^{\gamma -1}$, so we can find a sub-probability $\nu^\circ_x$ on $\partial G$ such that $\nu^\circ_x(\partial G)=c_1 N^{\gamma -1}$ and $\nu^\circ_x \le \nu_x$. For any $x\in \mathbb T_N^d$, let $\mu_x(\cdot) = P_x[X_{H_\Delta} \in {}\cdot{}]$, and let $\mu $ be the sub-probability on $\partial \Delta $ given by $\mu (y)=\inf_{x\in \partial G} \mu_x(y)$. It follows from Claim~\ref{c:YregenB} that $\mu (\partial \Delta)\ge c_2^{-1}$. For any non-trivial sub-probability measure $\kappa $, we denote by $\overline\kappa$ the probability measure obtained by normalizing $\kappa $. We can now construct the coupling required for the application of Lemma~\ref{l:couplingmixing}. Let $\vec x,\vec x'\in \Sigma $ and define the coupling $Q_{\vec x, \vec x'}$ of two copies $Y$, $Y'$ of the chain as follows. Let $Y_0=\vec x$, $Y'_0=\vec x'$, and let $(\xi_i)_{i\ge 0}$, $(\tilde \xi_i)_{i\ge 0}$ be two independent sequences of i.i.d.~Bernoulli random variables with $P[\xi_i=1]=c_1 N^{\gamma -1}$ and $P[\tilde \xi_i=1]=\mu (\partial \Delta)$.
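Before listing the steps, we record why these splittings are consistent; a short check, with the shorthand $r_N:=c_1 N^{\gamma -1}$ and $q:=\mu (\partial \Delta )$ for the success probabilities of the $\xi_i$ and $\tilde \xi_i$: since $\nu_x-\nu^\circ_x$ has total mass $1-r_N$ and $(\mu_U-\mu )(\partial \Delta )=1-q$ for every $U\in \partial G$, \begin{equation*} (1-r_N)\,\overline{\nu_x-\nu^\circ_x} + r_N\,\overline{\nu^\circ_x} = \nu_x \qquad \text{and} \qquad (1-q)\,\overline{\mu_U-\mu } + q\,\overline{\mu } = \mu_U, \end{equation*} so randomizing over $\xi_i$, $\tilde \xi_i$ below reproduces the correct transition kernel for each copy of $Y$.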
Now continue inductively through the following steps. \begin{enumerate} \item Given $Y_{k-1}=(Y_{k-1,1},Y_{k-1,2})$ and $Y'_{k-1}=(Y'_{k-1,1},Y'_{k-1,2})$, $k\ge 1$, choose $Y_{k,1}$, resp.~$Y'_{k,1}$, independently from $P_{Y_{k-1,2}}[X_{H_B}\in {}\cdot{}]$, resp.~$P_{Y'_{k-1,2}}[X_{H_B}\in {}\cdot{}]$. \item If $\xi_k=0$, choose $U_k$ according to $\overline{\nu_{Y_{k,1}} - \nu^\circ_{Y_{k,1}}}$, then $Y_{k,2}$ according to $\mu_{U_k}$, and analogously $U'_k$ according to $\overline{\nu_{Y'_{k,1}} - \nu^\circ_{Y'_{k,1}}}$ and then $Y'_{k,2}$ according to $\mu_{U'_k}$, independently. \item Otherwise, if $\xi_k=1$, choose $U_k$ according to $\overline{\nu_{Y_{k,1}}^\circ}$, and $U'_k$ according to $\overline{\nu_{Y'_{k,1}}^\circ}$, independently. If, in addition, $\tilde \xi_k=1$, choose $Y_{k,2}=Y'_{k,2}$ according to $\overline \mu$. Otherwise, if $\tilde\xi_k=0$, choose $Y_{k,2}$ according to $\overline{\mu_{U_k}-\mu }$, and $Y'_{k,2}$ according to $\overline{\mu_{U'_k}-\mu }$, independently. \item Finally, if for some $k$, $Y_{k,2}=Y'_{k,2}$, let $Y$ and $Y'$ follow the same trajectory after $k$. \end{enumerate} It can be checked easily that these steps construct two copies of $Y$ started from $\vec x$ and $\vec x'$, respectively. Moreover, \begin{equation} Q_{\vec x,\vec x'}[Y_k \neq Y'_k] \le \mathbb P[\xi_i\tilde \xi_i=0 \,\forall i<k] = (1-c_1N^{\gamma -1}\mu (\partial \Delta))^{k-1}. \end{equation} Observing that $\mu (\partial \Delta)\ge c_2^{-1}$, \eqref{e:CPRW} follows by taking $k = c N^{1-\gamma }$ with $c$ large enough and using Lemma~\ref{l:couplingmixing}. \end{proof} \section{Variance estimate} \label{s:variance} We continue to estimate the ingredients for the application of Theorem~\ref{t:couplingmc}. Due to the form of the equilibrium measure $\pi $ introduced in \eqref{e:pi}, it is convenient to fix the base measure $\mu $ on $\Sigma $ as \begin{equation} \mu (\vec x)=P_{x_1}[X_{H_\Delta }=x_2], \qquad \vec x=(x_1,x_2)\in \Sigma .
\end{equation} Then (cf.~\eqref{e:defg}, \eqref{e:transdensity} for the notation) \begin{align} g(\vec x)&= \bar e_B^\Delta (x_1), \\ \label{e:rhoY} \rho^Y(\vec x,\vec y) &= P_{x_2}[X_{H_B} =y_1]=:\tilde \rho^Y(x_2,y_1), \\ \label{e:rhoZ} \rho^Z(\vec x, \vec y) &= P^{\mathbb Z^d}_{x_2}[X_{H_B} =y_1] + P^{\mathbb Z^d}_{x_2}[H_B =\infty] \bar e_B(y_1) =:\tilde \rho^Z(x_2,y_1). \end{align} Recall that $\rho^\circ_{\vec x}$ denotes the function $\vec y \mapsto \rho^\circ(\vec y,\vec x)$; we use $\circ$ to stand for either $Y$ or $Z$. \begin{lemma} \label{l:varrho} There exist constants $c,C\in (0,\infty)$ such that for every $\vec x\in \Sigma $ \begin{equation} \label{e:81} c N^{1-d} N^{-\gamma (d-1)} \le \mathop{\mathrm{Var}}\nolimits_\pi \rho^\circ_{\vec x} \le C N^{1-d} N^{-\gamma (d-1)}. \end{equation} \end{lemma} \begin{proof} An easy computation yields, using Lemma~\ref{l:pi_dB} for the last inequality, \begin{equation} \label{e:gbound} \begin{split} \mathop{\mathrm{Var}}\nolimits_\pi \rho^\circ_{\vec x} &\le \sum_{\vec x'\in \Sigma } \pi (\vec x') \rho^\circ(\vec x',\vec x)^2 \\&= \sum_{x'_2 \in \partial \Delta } \pi (\partial B\times \{x'_2\}) \tilde \rho^\circ(x'_2,x_1)^2 \\&\le C N^{1-d}\sum_{x'_2\in \partial \Delta } \tilde \rho^\circ(x'_2,x_1)^2. \end{split} \end{equation} Using Lemmas~\ref{l:ebar}, \ref{l:hittingprobas} in \eqref{e:rhoY} and \eqref{e:rhoZ}, we obtain that \begin{equation} \label{e:maxrho} \max_{x\in \partial B, y\in \partial \Delta }\tilde \rho ^\circ(y,x)\le c N^{-\gamma(d-1)} \end{equation} for both chains $\circ \in \{Y,Z\}$. Therefore \begin{equation} \mathop{\mathrm{Var}}\nolimits_\pi \rho^\circ_{\vec x} \le C N^{1-d} \sup\Big\{\sum_{z\in \partial B} h^2(z)\,:\, {h\!:\!\partial B \to [0,cN^{-\gamma(d-1)}]},\sum_{z\in \partial B} h(z)=1\Big\} . \end{equation} By a convexity argument, the supremum is achieved by a function $h$ that takes the maximal value $cN^{-\gamma (d-1)}$ at as many points as possible.
Hence, \begin{equation} \mathop{\mathrm{Var}}\nolimits_\pi \rho^\circ_{\vec x}\le CN^{1-d} N^{\gamma (d-1)} (N^{-\gamma (d-1)})^2, \end{equation} and the upper bound follows. Finally, by Lemma~\ref{l:hittingprobas} and \eqref{e:rhoY}, \eqref{e:rhoZ}, for every $x\in \partial B$ there are at least $c N^{\gamma (d-1)}$ points ${y\in \partial \Delta} $ such that $\tilde \rho^\circ(y,x)\ge c'N^{-\gamma (d-1)}$. Hence, $\pi \big((\rho_{\vec x}^\circ)^2\big)$ is larger than the left-hand side of \eqref{e:81}. Moreover, since $\pi$ is invariant for both Markov chains, it follows that $\big(\pi (\rho_{\vec x}^\circ)\big)^2=g(\vec x)^2 \asymp N^{2(1-d)} $, by Lemma~\ref{l:baredelta}. Combining the last two claims, the lower bound follows. \end{proof} \section{Number of excursions} \label{s:number} The final ingredient needed for Theorem~\ref{t:couplingmc} is an estimate on the number of excursions that the random walk typically makes before time $uN^d$, as well as on the corresponding quantity for the random interlacements at level $u$. Consider first the random walk on the torus. Define \begin{equation} \mathcal N(t)=\sup\{i:R_i < t\} \end{equation} to be the number of excursions starting before $t$. We show that $\mathcal N(t)$ concentrates around its expectation. \begin{proposition} \label{p:concentr_Nt} Let $u > 0$ be fixed. There exist constants $c,C$ depending only on $\gamma $ and $\alpha $ such that for every $N\ge 1$ \begin{equation} P\big[ \big|\mathcal{N}(u N^d) - u \mathop{\mathrm{cap}}\nolimits_\Delta(B)\big| > \eta \mathop{\mathrm{cap}}\nolimits_\Delta (B)\big] \leq C \exp\{-c \eta^2 N^c\}. \end{equation} \end{proposition} \begin{proof} To prove the proposition we first compute the expectation of $\mathcal N(t)$. \begin{lemma} For every $t\in \mathbb N$, \begin{equation} \label{e:ENt} |E \mathcal N(t) - t N^{-d} \mathop{\mathrm{cap}}\nolimits_\Delta (B)|\le 1.
\end{equation} Moreover, when starting from $\bar{e}^\Delta_B$, the stationary measure of the $X_{R_i}$'s, we have \begin{equation} \label{e:ERk} E_{\bar{e}^\Delta_B}(R_1) = \frac{N^d}{\mathop{\mathrm{cap}}\nolimits_\Delta(B)}. \end{equation} \end{lemma} \begin{proof} Recall from the proof of Lemma~\ref{l:invariantmeasure} that $(\bar R_i,\bar D_i)$ denote the returns and departures of the stationary random walk $(X_n)_{n\in \mathbb Z}$. Let $\bar{\mathcal N}(t)= \sup\{i:\bar R_i<t\}$. By the observation below \eqref{e:RDbarRD}, $|\bar{\mathcal N}(t)-\mathcal N(t)|\le 1$. It is thus sufficient to show that $E\bar{\mathcal N}(t)=tN^{-d}\mathop{\mathrm{cap}}\nolimits_\Delta (B)$. To this end recall equality \eqref{e:calR}. Summing it over $x_1\in \partial B$, we obtain \begin{equation} P[{k=\bar R_j}\text{ for some }j]=N^{-d}\mathop{\mathrm{cap}}\nolimits_\Delta (B),\qquad k\ge 0. \end{equation} The required claim follows by summation over $0 \le k < t$. The second claim of the lemma is a consequence of the first claim, the fact that every $X_{R_k}$ is $\bar e_B^\Delta $-distributed at stationarity, and the ergodic theorem. \end{proof} We proceed with proving Proposition~\ref{p:concentr_Nt}. It is more convenient to show a concentration result for the return times $R_i$ instead of $\mathcal{N}(t)$. Observing that for any $t > 0$ and $b > 0$, \begin{equation} \big\{|\mathcal{N}(t) - E(\mathcal{N}(t))| > b \big\} \subseteq \big\{ R_{\lceil E(\mathcal{N}(t)) - b \rceil} > t \big\} \cup \big\{ R_{\lfloor E(\mathcal{N}(t)) + b \rfloor} < t \big\}, \end{equation} we obtain easily that \begin{equation} \label{e:97} P\big[ \big|\mathcal{N}(u N^d) - u \mathop{\mathrm{cap}}\nolimits_\Delta (B) \big| > \eta \mathop{\mathrm{cap}}\nolimits_\Delta (B) \big] \leq P[R_{k_-} > uN^d] + P[R_{k_+} < uN^d], \end{equation} where $k_- = \lceil (u-\eta) \mathop{\mathrm{cap}}\nolimits_\Delta (B)\rceil$ and $k_+ = \lfloor(u+\eta) \mathop{\mathrm{cap}}\nolimits_\Delta (B)\rfloor$.
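As a quick sanity check on the orders of magnitude entering \eqref{e:97} (a sketch, using $\mathop{\mathrm{cap}}\nolimits_\Delta (B)\asymp N^{d-1-\gamma }$ from Lemma~\ref{l:baredelta} together with \eqref{e:ERk}), one has $k_\pm \asymp u\,\mathop{\mathrm{cap}}\nolimits_\Delta (B)$ and, at stationarity, \begin{equation*} E_{\bar e^\Delta_B}(R_{k_\pm}) \approx k_\pm \, \frac{N^d}{\mathop{\mathrm{cap}}\nolimits_\Delta (B)} \approx (u\pm \eta )N^d, \end{equation*} so the two probabilities on the right-hand side of \eqref{e:97} concern deviations of $R_{k_\pm}$ of order $\eta N^d$ from their means.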
Let $\varepsilon > 0$ be a small constant that will be fixed later, and set $\ell = \lfloor N^{\varepsilon} T_Y \rfloor$, where $T_Y$ stands for the mixing time of the chain $Y$ estimated in \eqref{e:CPRW}. In order to estimate the right-hand side of \eqref{e:97}, we study the typical size of $R_{m_{\pm}\ell}$ where \begin{equation} \label{e:98} m_- = \left\lceil\ell^{-1}(u-\eta) \mathop{\mathrm{cap}}\nolimits_\Delta (B) \right\rceil \quad \text{and} \quad m_+ = \left\lfloor\ell^{-1}(u+\eta) \mathop{\mathrm{cap}}\nolimits_\Delta (B)\right\rfloor. \end{equation} From Lemma~\ref{l:baredelta} and \eqref{e:CPRW}, it follows that \begin{equation} \label{e:m_lower} m_\pm \geq c N^{d - 2 - \varepsilon}. \end{equation} Let $\mathcal G_i=\sigma (X_j:j\le R_{i\ell})$. Using the standard properties of the mixing time (see e.g.~\cite[Section~4.5]{LPW09}) and the strong Markov property, it is easy to see that \begin{equation} \|P[(X_{R_{i\ell}},X_{D_{i\ell}})\in \,\cdot\,|\mathcal G_{i-1}] - \pi (\cdot)\|_{TV} \le 2^{-N^{\varepsilon }}. \end{equation} By Lemma~\ref{l:baredelta}, $\pi (\{y\}\times \partial \Delta ) = \bar e_B^\Delta (y)\asymp N^{1-d}$ uniformly in $y\in\partial B$, and thus \begin{equation} \label{e:910} \bigg|\frac{P[X_{R_{i\ell}}=y|\mathcal G_{i-1}]}{\bar e_B^\Delta (y)}-1\bigg| \le c 2^{-N^{\varepsilon /2}}, \qquad i\ge 1. \end{equation} For $m$ standing for $m_+$ or $m_-$, we write \begin{equation} R_{m\ell} = \sum_{j=1}^{m} Z_j, \text{ where } Z_j = R_{j\ell}-R_{(j-1)\ell} \text{ and } R_0:=0. \end{equation} For every $j\ge 2$, by \eqref{e:910}, \begin{equation} P[Z_j > t|\mathcal G_{j-2}] \leq (1+c 2^{-N^{\varepsilon /2}}) P_{\bar e_B^\Delta }[R_\ell > t] \leq 2 \ell P_{\bar e_B^\Delta }[R_1 > t/\ell]. \end{equation} By the invariance principle, $P[R_1>N^2]\le c <1$.
Using this and the Markov property iteratively yields $P[R_1>N^{2+\delta }]\le e^{-c N^\delta }$ for any $\delta >0$, and thus \begin{equation} \label{e:915} P[Z_j > \ell N^{2+\delta}|\mathcal G_{j-2}] \leq 2\ell P_{\bar e_B^\Delta }[R_1 > N^{2+\delta}] \leq c \exp\{-N^{c' \delta}\}. \end{equation} Analogous reasoning also proves that \begin{equation} P[Z_1\ge \ell N^{2+\delta }] \leq c \exp\{-N^{c' \delta}\}. \end{equation} Observe also that for $j\ge 2$, by \eqref{e:910} again, \begin{equation} |E[Z_j]-E[Z_j|\mathcal G_{j-1}]|\le c 2^{-N^{\varepsilon /2}}E(Z_j). \end{equation} Hence, \begin{equation} \begin{split} \label{e:917} & P[|R_{m\ell} - E(R_{m\ell})| > \eta E(R_{m\ell})] =P\Big[ \Big|\sum_{j=1}^{m}\big(Z_j - E[Z_j]\big)\Big| > \eta E(R_{m\ell}) \Big] \\ & \le P[Z_1\ge \eta E(R_{m\ell})/4] + \sum_{n \in \{0,1\}} P\Big[ \Big| \sum_{\substack{1 \leq j \le m\\ j \text{ mod } 2 = n}} \big(Z_{j} - E[Z_{j}|\mathcal G_{j-2}]\big)\Big| > \eta E(R_{m\ell})/4 \Big]. \end{split} \end{equation} Setting $\hat Z_j = Z_j \wedge \ell N^{2 + \delta}$, which by \eqref{e:915} satisfies \begin{equation} |E[\hat Z_j|\mathcal G_{j-2}] - E[Z_j|\mathcal G_{j-2}]| = \int_{\ell N^{2 + \delta}}^\infty P[Z_j > t|\mathcal G_{j-2}] dt \leq c \exp\{- N^{c'\delta }\}, \end{equation} the right-hand side of \eqref{e:917} can be bounded by \begin{equation} \leq c m \exp\{-N^{c' \delta}\} + \sum_{n\in\{0,1\}}P\Big[ \Big| \sum_{\substack{1 \leq j \le m\\ j \text{ mod } 2 = n}} \big(\hat{Z}_{j} - E[\hat Z_{j}|\mathcal G_{j-2}]\big)\Big| > \eta E(R_{m\ell})/4 \Big].
\end{equation} Azuma's inequality together with $E[R_{m\ell}]\asymp N^d$, \eqref{e:98}, \eqref{e:m_lower}, and Lemma~\ref{l:baredelta} then yield \begin{equation} \begin{split} & \leq c m \exp\{-N^{c' \delta}\} + 4 \exp\Big\{ - \frac{2 c(\eta E(R_{m\ell}))^2}{m (\ell N^{2 + \delta})^2} \Big\}\\ & \leq c m \exp\{-N^{c'\delta}\} + 4 \exp\Big\{ - c\eta^2 \frac{m N^{2d - 4 - 2\delta}}{\mathop{\mathrm{cap}}\nolimits_\Delta(B)^2} \Big\}\\ & \leq c m \exp\{-N^{c' \delta}\} + 4 \exp\Big\{ - c \eta^2 N^{d-4 + 2\gamma - \varepsilon - 2\delta} \Big\}. \end{split} \end{equation} For every $d\ge 3$ and $\gamma $ as in \eqref{e:gammachi}, it is possible to fix $\delta $ and $\varepsilon $ sufficiently small so that the exponent of $N$ on the right-hand side of the last display is positive. Therefore the above decays at least as fast as $C \exp\{-c \eta^2 N^c\}$ as $N$ tends to infinity, finishing the proof of the proposition. \end{proof} We now count the number of excursions of random interlacements at level $u$ into $B$. Let $J_u^N$ be the Poisson process with intensity $\mathop{\mathrm{cap}}\nolimits (B_N)$ driving the excursions of random interlacements to $B_N$, cf.~\eqref{e:ridef}. From Section~\ref{s:toruscoupling}, recall the definition \eqref{e:Ti} of the random variables $T^{(i)}$ giving the number of excursions of the $i$-th random walk between $B$ and $\Delta $. Given those, denote by $\mathcal N'(u)$ the number of steps of the Markov chain $Z$ corresponding to the level $u$ of random interlacements, \begin{equation} \mathcal N'(u) = \sum_{i=1}^{J^N_u} T^{(i)}. \end{equation} \begin{proposition} \label{p:nri} There exist constants $c$, $C$ depending only on $\gamma $ and $u$ such that for every $N\ge 1$ \begin{equation} P\big[|\mathcal N'(u) - u \mathop{\mathrm{cap}}\nolimits_\Delta (B)|\ge \eta u \mathop{\mathrm{cap}}\nolimits_\Delta (B)\big]\le C\exp\{-c \eta^2 N^c\}.
\end{equation} \end{proposition} \begin{proof} By definition of random interlacements, $J^N_u$ is a Poisson random variable with parameter $u \mathop{\mathrm{cap}}\nolimits (B) \asymp u N^{d-2}$, and thus, by a Chernoff estimate, \begin{equation} \label{e:cria} P\big[|J^N_u - u \mathop{\mathrm{cap}}\nolimits (B)|\ge \eta u \mathop{\mathrm{cap}}\nolimits (B)\big]\le C\exp\{-c \eta^2 N^{d-2}\}. \end{equation} The random variables $T^{(i)}$ are i.i.d.~and stochastically dominated by the geometric distribution with parameter $\inf_{y\in \partial \Delta }P^{\mathbb Z^d}_y[H_B=\infty] \asymp N^{\gamma -1}$, by Lemma~\ref{l:ebar}. Moreover, by summing \eqref{e:Tia}--\eqref{e:Tib} over $x_1\in\partial B$ we obtain \begin{equation} E^{\mathbb Z^d}_{\bar e_B} T^{(i)} = \frac{\mathop{\mathrm{cap}}\nolimits_\Delta (B)}{\mathop{\mathrm{cap}}\nolimits (B)}. \end{equation} Applying the Chernoff bound again for $v=(1\pm\frac \eta 2)u\mathop{\mathrm{cap}}\nolimits (B)$, \begin{equation} \label{e:crib} P\Big[\Big|\sum_{i=1}^v T^{(i)} - \frac{v \mathop{\mathrm{cap}}\nolimits_\Delta (B)}{\mathop{\mathrm{cap}}\nolimits (B)}\Big| \ge \frac \eta 2 \, \frac{v \mathop{\mathrm{cap}}\nolimits_\Delta (B)}{\mathop{\mathrm{cap}}\nolimits (B)} \Big]\le C \exp\{-c \eta^2 N^c\} \end{equation} for some constants $C$ and $c$ depending on $\gamma $ and $u$. The proof is completed by combining \eqref{e:cria} and \eqref{e:crib}. \end{proof} \section{Proofs of the main results} \label{s:proofs} We can now finally show our main results: Theorem~\ref{t:toruscoupling}, giving the coupling between the vacant sets of the random walk and the random interlacements in macroscopic subsets of the torus, and Theorem~\ref{t:phasetransition}, implying the phase transition in the behavior of the radius of the connected cluster of the vacant set of the random walk containing the origin.
\begin{proof}[Proof of Theorem~\ref{t:toruscoupling}] As already announced several times, Theorem~\ref{t:couplingmc} is the key ingredient of this proof. Recall the definitions and transition probabilities of the Markov chains $Y=(Y_i)_{i\ge 1}$ and $Z=(Z_i)_{i\ge 1}$ from Section~\ref{s:coupling}. The state space $\Sigma $ of these Markov chains is finite, so we can apply Theorem~\ref{t:couplingmc} to construct a coupling of those two chains on some probability space $(\Omega_N , \mathcal F_N, \mathbb Q_N)$ carrying a Poisson point process with intensity $\mu \otimes dx$ on $\Sigma \times [0,\infty)$, so that their ranges coincide in the sense of \eqref{e:goodmc}. We will apply this theorem with \begin{equation} \begin{aligned} n&= u \mathop{\mathrm{cap}}\nolimits_\Delta (B) \asymp N^{d-1-\gamma }, &&\text{(cf.~Lemma~\ref{l:baredelta}, Propositions~\ref{p:concentr_Nt}, \ref{p:nri})} \\|\Sigma|& \asymp N^{2(d-1)}, \\ T_Y,T_Z&\le c N^{1-\gamma }, &&\text{(Lemma~\ref{l:mixingtimes})} \\g(\vec z)&=\bar e_B^\Delta (z_1)\asymp N^{1-d},\qquad &&\text{(Lemma~\ref{l:baredelta})} \\\mathop{\mathrm{Var}}\nolimits \rho_{\vec z}^Y,\mathop{\mathrm{Var}}\nolimits \rho_{\vec z}^Z&\asymp N^{1-d}N^{-\gamma (d-1)}, &&\text{(Lemma~\ref{l:varrho})} \\\|\rho^Y_{\vec z} \|_\infty, \|\rho^Z_{\vec z}\|_\infty&\asymp N^{-\gamma (d-1)}. &&\text{(Lemma~\ref{l:hittingprobas}, cf.~\eqref{e:maxrho} and below)} \end{aligned} \end{equation} In addition, it follows from Claims~\ref{c:YregenA} and~\ref{c:YregenB} that $\pi^\star$ decays polynomially with $N$, and thus \begin{equation} k(\varepsilon_N ) \sim c \log N - c'\log \varepsilon_N. \end{equation} Substituting those into condition \eqref{e:epsasmc} of Theorem~\ref{t:couplingmc} shows that this condition is satisfied whenever $\varepsilon_N < c_0=c_0(d,\gamma ,\alpha)$, as assumed in Theorem~\ref{t:toruscoupling}.
If, in addition, $\varepsilon_N$ satisfies $\varepsilon_N^2 \ge c N^{\delta-\kappa }$ for $\kappa = \gamma (d-1)-1>0$ and $\delta >0$, then, after some algebra, we obtain \begin{equation} \label{e:couplingZY} \mathbb Q_N\Big[\bigcup_{i\le (1-\varepsilon_N)n} Z_i \subset \bigcup_{i\le n} Y_i \subset \bigcup_{i\le (1+\varepsilon_N)n} Z_i\Big]\ge 1- c_1 e^{-c_2 N^{\delta '}} \end{equation} for some $\delta' $ as in Theorem~\ref{t:toruscoupling}. We now re-decorate $Y$ and $Z$ to obtain a coupling of the vacant sets restricted to $B$. Let $\Gamma $ be the space of all finite-length nearest-neighbor paths on $\mathbb T_N^d$. For $\gamma \in \Gamma $ we use $\ell(\gamma )$ to denote its length and write $\gamma $ as $(\gamma_0,\dots,\gamma_{\ell(\gamma )})$. To construct the vacant set of the random walk, we define on the same probability space $(\Omega_N , \mathcal F_N, \mathbb Q_N)$ (by possibly enlarging it) two sequences of `excursions' $(\mathcal E_i)_{i\ge 1}$ and $(\tilde{\mathcal E}_i)_{i\ge 0}$, whose distribution is uniquely determined by the following properties. \begin{itemize} \item Given a realization of $Y=((Y_{i,1},Y_{i,2}))_{i\ge 1}$ and $Z=((Z_{i,1},Z_{i,2}))_{i\ge 1}$, $(\mathcal E_i)$ and $(\tilde{\mathcal E}_i)$ are conditionally independent sequences of conditionally independent random variables. \item For every $i\ge 1$, the random variable $\mathcal E_i$ is $\Gamma $-valued and for every $\gamma \in \Gamma$, \begin{equation} \mathbb Q_N[\mathcal E_i=\gamma|Y,Z]= P_{Y_{i,1}}[H_\Delta=\ell(\gamma ),X_j=\gamma_j\,\forall j\le \ell(\gamma ) |X_{H_\Delta }=Y_{i,2}]. \end{equation} \item For every $i\ge 1$, the random variable $\tilde{\mathcal E}_i$ is $\Gamma $-valued and for every $\gamma \in \Gamma$, \begin{equation} \mathbb Q_N[\tilde{\mathcal E}_i=\gamma|Y,Z]= P_{Y_{i,2}}[H_B=\ell(\gamma ),X_j=\gamma_j\,\forall j\le \ell(\gamma ) |X_{H_B}=Y_{i+1,1}].
\end{equation} \item The random variable $\tilde{\mathcal E}_0$ is $\Gamma $-valued and \begin{equation} \mathbb Q_N[\tilde{\mathcal E}_0=\gamma|Y,Z]= P[R_1=\ell(\gamma ), X_j=\gamma_j\,\forall j\le \ell(\gamma )|X_{R_1}=Y_{1,1}]. \end{equation} \end{itemize} With slight abuse of notation, we construct on $(\Omega_N , \mathcal F_N, \mathbb Q_N)$ a process $(X_n)_{n\ge 0}$ defined by concatenation of $\tilde{\mathcal E}_0, \mathcal E_1, \tilde{\mathcal E}_1, \mathcal E_2, \dots$. From the construction it follows easily that $X$ is a simple random walk on $\mathbb T_N^d$ started from the uniform distribution. We then write $R_1=\ell(\tilde{\mathcal E}_0)$, $D_1=\ell(\tilde{\mathcal E}_0)+\ell(\mathcal E_1)$, \dots, which is consistent with the previous notation, and set, as before, $\mathcal N(uN^d)=\sup\{i:R_i<uN^d\}$. Finally, we fix an arbitrary constant $\beta >0$ and define the vacant set of the random walk on $(\Omega_N, \mathcal F_N, \mathbb Q_N)$ by \begin{equation} \label{e:cVNu} \mathcal V^u_N = \mathbb T_N^d \setminus \{X_{\beta N^d},\dots,X_{(\beta+u)N^d }\}, \end{equation} which has the same distribution as the vacant set introduced in \eqref{e:VNu}, since $(X_i)$ is a stationary Markov chain. To construct the vacant set of random interlacements intersected with $B$, let $\mathcal I_0=\emptyset$ and for $i\ge 1$ inductively \begin{equation} \begin{split} \label{e:matchingEs} \iota_i &= \inf\{j\ge 1: j\notin \mathcal I_{i-1}, Y_j=Z_i\}, \\\mathcal E_i^{\mathrm{RI}}&= \mathcal E_{\iota_i}, \\\mathcal I_i &= \mathcal I_{i-1}\cup \{\iota_i\}. \end{split} \end{equation} Let further $(U_i)_{i\ge 1}$ be a sequence of conditionally independent Bernoulli random variables with (cf.~\eqref{e:distZ}) \begin{equation} P[U_i=1]=\frac{P^{\mathbb Z^d}_{Z_{i,2}}[H_B=\infty]\bar e_B(Z_{i+1,1})} {P^{\mathbb Z^d}_{Z_{i,2}}[H_B<\infty, X_{H_B}=Z_{i+1,1}] + P^{\mathbb Z^d}_{Z_{i,2}}[H_B=\infty]\bar e_B(Z_{i+1,1})}.
\end{equation} The event $\{U_i=1\}$ heuristically corresponds to the event ``after the excursion $Z_i$ the random walk leaves to infinity and the excursion of random interlacements corresponding to $Z_{i+1}$ is a part of another random walk trajectory''. We set $V_0=0$ and, inductively for $i\ge 1$, $V_i=\inf\{j>V_{i-1}:U_j=1\}$. Then, by construction, for every $i\ge 1$, $(\mathcal E_j^{\mathrm{RI}})_{V_{i-1}<j\le V_i}$ has the same distribution as the sequence of excursions of the random walk $X^{(i)}$ into $B$, cf.~\eqref{e:ridef}, \eqref{e:riexc}. Next, as in \eqref{e:ridef}, we let $(J^N_u)_{u\ge 0}$ stand for a Poisson process with intensity $\mathop{\mathrm{cap}}\nolimits (B)$, defined on $(\Omega_N, \mathcal F_N, \mathbb Q_N)$, independent of all previous randomness, and set \begin{equation} \mathcal N'(u)=V_{J_u^N}. \end{equation} This is again consistent with previous notation. Finally, for $\beta $ as above, we can construct the random variables having the law of the vacant set of random interlacements at levels $u+\varepsilon_N$ and $u-\varepsilon_N$ intersected with $B$, \begin{equation} \label{e:cVu} \mathcal V^{u\pm\varepsilon_N} = B\setminus \bigcup_{i=\mathcal N'(\beta\mp\varepsilon_N/2) } ^{\mathcal N'(\beta + u \pm \varepsilon_N/2)} \text{Range}(\mathcal E^{\mathrm{RI}}_i). \end{equation} Denoting $\mathcal K_N=\mathop{\mathrm{cap}}\nolimits_\Delta (B)$, by Proposition~\ref{p:concentr_Nt} the set $\mathcal V_N^u$ of \eqref{e:cVNu} satisfies \begin{equation} \label{e:erra} \mathbb Q_N\bigg[ B_N \setminus \bigcup_{i=(\beta -\varepsilon_N/4)\mathcal K_N} ^{(\beta +u+\varepsilon_N/4)\mathcal K_N} \text{Range}(\mathcal E_i) \subset \mathcal V_N^u \subset B_N\setminus \bigcup_{i=(\beta +\varepsilon_N/4)\mathcal K_N} ^{(\beta +u-\varepsilon_N/4)\mathcal K_N} \text{Range}(\mathcal E_i)\bigg] \ge 1-Ce^{-c\varepsilon_N^2 N^{c}}.
\end{equation} Combining \eqref{e:couplingZY} and \eqref{e:matchingEs} yields \begin{equation} \begin{split} &\mathbb Q_N\bigg[ B_N \setminus \bigcup_{i=(\beta -\varepsilon_N/4)\mathcal K_N} ^{(\beta +u+\varepsilon_N/4)\mathcal K_N} \text{Range}(\mathcal E_i) \supset B_N \setminus \bigcup_{i=(\beta -\varepsilon_N/3)\mathcal K_N} ^{(\beta +u+\varepsilon_N/3)\mathcal K_N} \text{Range}(\mathcal E^{RI}_i) \bigg] \ge 1-Ce^{-c_2N^{\delta '}}, \\&\mathbb Q_N\bigg[ B_N \setminus \bigcup_{i=(\beta +\varepsilon_N/4)\mathcal K_N} ^{(\beta +u-\varepsilon_N/4)\mathcal K_N} \text{Range}(\mathcal E_i) \subset B_N \setminus \bigcup_{i=(\beta +\varepsilon_N/3)\mathcal K_N} ^{(\beta +u-\varepsilon_N/3)\mathcal K_N} \text{Range}(\mathcal E^{RI}_i) \bigg] \ge 1-Ce^{-c_2N^{\delta '}}. \end{split} \end{equation} Finally, by Proposition~\ref{p:nri}, for vacant sets as in \eqref{e:cVu}, \begin{equation} \begin{split} \label{e:errc} &\mathbb Q_N\bigg[\mathcal V^{u+\varepsilon_N/2}\cap B \subset B_N \setminus \bigcup_{i=(\beta -\varepsilon_N/3)\mathcal K_N} ^{(\beta +u+\varepsilon_N/3)\mathcal K_N} \text{Range}(\mathcal E^{RI}_i)\bigg] \ge 1-Ce^{- c\varepsilon_N^2 N^c}, \\&\mathbb Q_N\bigg[\mathcal V^{u-\varepsilon_N/2}\cap B \supset B_N \setminus \bigcup_{i=(\beta +\varepsilon_N/3)\mathcal K_N} ^{(\beta +u-\varepsilon_N/3)\mathcal K_N} \text{Range}(\mathcal E^{RI}_i)\bigg] \ge 1-Ce^{-c \varepsilon_N^2 N^c }. \end{split} \end{equation} Theorem~\ref{t:toruscoupling} then follows by combining \eqref{e:erra}--\eqref{e:errc}. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:phasetransition}] Let us first introduce some simple notation. If $\mathcal{C}$ is a random subset of either $\mathbb{T}^d$ or $\mathbb{Z}^d$, let $A_N(\mathcal{C})$ stand for the event $[\mathop{\mathrm{diam}}\nolimits(\mathcal{C}) > N/4]$, which appears in the definition of $\eta_N(u)$.
We also denote by $\mathcal{C}_0(u)$ the connected component containing the origin of $\mathbb{Z}^d$ for random interlacements at level $u$. We now turn to the proof of \eqref{e:phasetransitionsubcritical}. Fix $u > u_\star(d)$. Letting $u' \in (u_\star, u)$ and writing $u'=(1-\varepsilon )u$, we estimate \begin{equation} P[A_N(\mathcal{C}_N(u))] \leq 1 - \mathbb{Q}_N\Big[(\mathcal V_N^u \cap \mathcal B_N) \subset (\mathcal V^{u(1-\varepsilon )} \cap \mathcal B_N)\Big] + P\big[A_N(\mathcal{C}_0(u'))\big], \end{equation} which tends to zero by Theorem~\ref{t:toruscouplingweak} and the fact that $u' > u_\star$. Now let us treat the supercritical case in \eqref{e:phasetransitionsupercritical}. Given $u < u_\star$ and $\varepsilon > 0$, we use the continuity of $\eta $ on $[0,u_\star)$, see Corollary~1.2 of \cite{Tei09b}, to find $u'$ and $u''$ such that \begin{equation} (1-\varepsilon ) u \le u' < u < u'' \le (1+\varepsilon )u \quad \text{and} \quad \eta(u') - \eta(u'') < \varepsilon. \end{equation} We now observe that for $N > c$ we have $|\eta(u') - P[A_N(\mathcal C_0(u'))] | < \varepsilon$. Therefore, since $\eta $ is a non-increasing function, \begin{equation} \begin{split} \big|P&[A_N(\mathcal{C}_N(u))] - \eta(u)\big| \\&\leq \varepsilon + \big(P[A_N(\mathcal{C}_N(u))] - \eta(u'')\big)_- + \big(P[A_N(\mathcal{C}_N(u))] - \eta(u')\big)_+\\ & \overset{{N > c}}\leq 2\varepsilon + \big( \mathbb{Q}[A_N(\mathcal C_N(u))] - \mathbb{Q} [A_N(\mathcal{C}_0(u''))]\big)_- + \big( \mathbb{Q}[A_N(\mathcal C_N(u))] - \mathbb{Q} [A_N(\mathcal{C}_0(u'))]\big)_+\\ & \leq 2\varepsilon + 1 - \mathbb Q_N\big[(\mathcal V^{u(1+\varepsilon )}\cap \mathcal B_N) \subset (\mathcal V_N^u \cap \mathcal B_N) \subset (\mathcal V^{u(1-\varepsilon )}\cap \mathcal B_N)\big].
\end{split} \end{equation} Since the limsup of the right-hand side of the above equation is at most $2 \varepsilon$ by Theorem~\ref{t:toruscouplingweak} and $\varepsilon > 0$ is arbitrary, we have proved \eqref{e:phasetransitionsupercritical} and consequently Theorem~\ref{t:phasetransition}. \end{proof} \appendix \section{A Chernoff-type estimate for additive functionals of Markov chains} We show here a simple variant of the Chernoff bound for additive functionals of Markov chains. Many such bounds were obtained previously, but they do not suit our purposes. E.g.,~Lezaud \cite{Lez98} (see also Theorems~2.1.8, 2.1.9 in \cite{Sal97}) provides such bounds in terms of the spectral gap of the Markov chain. Since the spectral gap of non-reversible Markov chains is not easy to estimate, and, more importantly, it does not always reflect the mixing properties of the chain, it seems preferable to use the mixing time of the chain as the input. This idea was applied e.g.~in \cite{CLLM12}, whose bounds, in contrast to \cite{Lez98}, do not use information about the variance of the additive functional under the equilibrium measure, and thus give worse estimates in the case where this variance is known. The theorems below can be viewed as a combination of those two results. We consider discrete-time Markov chains first. \begin{theorem} \label{t:appdisc} Let $(X_n)_{n\ge 0}$ be a discrete-time Markov chain on a finite state space $\Sigma $ with transition matrix $P$, initial distribution $\nu $, mixing time $T$, and invariant distribution~$\pi $.
Then, for every $n\ge 1$, every function $f:\Sigma \to [-1,1]$ satisfying $\pi (f)=0$ and $\pi (f^2)\le \sigma ^2$, and every $\gamma \le \sigma ^2\wedge \frac 12$, \begin{equation} \label{e:chernovdiscrete} \mathbb P \Big[ \sum_{i<n} f(X_i) \ge n\gamma \Big] \le 4 \exp\Big\{- \Big\lfloor\frac n {k(\gamma )T}-1\Big\rfloor \frac{\gamma^2}{ 6\sigma^2 } \Big\}, \end{equation} with \begin{equation} k(\gamma )= -\log_2 ( \pi_\star \gamma^2/(6\sigma^2)) \end{equation} and $\pi_\star = \min_{x\in \Sigma } \pi (x)$. \end{theorem} \begin{proof} Let $\tau = k(\gamma )T$. From \cite[Section 4.5]{LPW09} it follows that, for any initial distribution $\nu $, \begin{equation} \label{e:taustep} (1-\varepsilon ) \pi (x) \le \mathbb P [X_\tau = x] \le (1+\varepsilon ) \pi (x), \end{equation} with $\varepsilon \le \gamma^2/(6\sigma^2)$. For $0\le k < \tau $, define $X^{(k)}_j= X_{k+\tau j}$, $j\ge 0$. For every $k$, $(X^{(k)}_j)_{j\ge 0}$ is a Markov chain with transition matrix $P^\tau $ and invariant distribution $\pi $. In view of \eqref{e:taustep}, $(X^{(k)}_j)_{j\ge 1}$ are close to being i.i.d.~with marginal $\pi $; the distribution of $X^{(k)}_0$ cannot be controlled in general. Writing $Y^{(k)}_n=\sum_{0\le i<(n-k)/\tau } f(X^{(k)}_i)$, with the help of Jensen's inequality and the exponential Chebyshev bound, we have for every $\lambda >0$ \begin{equation} \label{e:excessa} \mathbb P \Big[\sum_{j<n} f(X_j)\ge \gamma n\Big]\le \exp\big\{-\lambda \gamma n \tau^{-1} \big\} \frac 1 \tau \sum_{k<\tau } \mathbb E \big[\exp\{\lambda Y^{(k)}_n\}\big]. \end{equation} Using \eqref{e:taustep}, the Markov property recursively, and the fact that $f\le 1$ for the summand $f(X_0^{(k)})$, \begin{equation} \mathbb E\big[\exp\{\lambda Y_n^{(k)}\}\big]\le e^\lambda \exp\Big\{ \Big\lfloor \frac{n-k}{\tau }\Big\rfloor \big(\log (\pi (e^{\lambda f}))+\log (1+\varepsilon )\big)\Big\}, \end{equation} for all $0\le k<\tau $. By Bennett's lemma (see e.g.
\cite[Lemma~2.4.1]{DZ98}), \begin{equation} \pi(e^{\lambda f})\le \frac{1}{1+\sigma^2}\, e^{-\lambda \sigma^2} + \frac{\sigma^2}{1+\sigma^2}\, e^{\lambda }. \end{equation} Inserting this bound back into \eqref{e:excessa} and optimizing over $\lambda $ as in \cite[Corollary 2.4.7]{DZ98}, which amounts to choosing \begin{equation} e^\lambda = \frac 1 {\sigma ^2}\cdot \frac{\gamma +\sigma ^2}{1-\gamma } \le 4, \end{equation} we obtain \begin{equation} \mathbb P \Big[\sum_{j<n} f(X_j)\ge \gamma n\Big]\le 4 \exp\Big\{ - \Big\lfloor \frac{n}{\tau }-1\Big\rfloor \Big( H\Big(\frac{\gamma +\sigma ^2}{1+\sigma ^2}\Big|\frac{\sigma ^2}{1+\sigma ^2}\Big) - \log (1+\varepsilon)\Big)\Big\}, \end{equation} where $H(x|p)=x\log\frac xp + (1-x)\log \frac{1-x}{1-p}$. Observing finally that for every $\sigma^2 \in (0,1)$ and $\gamma \in (0,\sigma^2)$ \begin{equation} H\Big(\frac{\gamma +\sigma ^2}{1+\sigma ^2}\Big|\frac{\sigma ^2}{1+\sigma ^2}\Big) \ge \frac {\gamma^2}{3\sigma ^2} \end{equation} and $\log (1+\varepsilon ) \le \varepsilon \le \gamma^2/(6\sigma^2)$, we obtain the claim of the theorem. \end{proof} For continuous-time Markov chains we have an analogous statement. \begin{corollary} \label{t:appcont} Let $(X_t)_{t\ge 0}$ be a continuous-time Markov chain on a finite state space $\Sigma $ with generator $L$, initial distribution $\nu $, mixing time $T$, and invariant distribution $\pi $. Then for every $t>0$, every function $f:\Sigma \to [-1,1]$ with $\pi (f)=0$ and $\pi (f^2)\le \sigma ^2$, and for $\gamma \le \sigma ^2\wedge \frac 12$, \begin{equation} \mathbb P \Big[ \int_0^t f(X_s)\, \mathrm{d} s \ge \gamma t \Big] \le 4 \exp\Big\{- \Big\lfloor\frac t {k(\gamma )T}-1\Big\rfloor \frac{\gamma^2}{6\sigma^2 } \Big\}, \end{equation} with $k(\gamma )$ as in Theorem~\ref{t:appdisc}. \end{corollary} \begin{proof} The proof is a discretization argument: consider the discrete-time Markov chain $Y^\delta_{n} = X_{\delta n}$.
The mixing time $T(\delta )$ of $Y^\delta $ satisfies $T(\delta )=T \delta^{-1}(1+o(1))$ as $\delta \to 0$. The previous theorem, applied with $n=\delta^{-1} t$, then implies \begin{equation} \mathbb P \Big[\delta \sum_{j< t \delta^{-1}} f(X_{j\delta })\ge \gamma t\Big] \le 4 \exp\Big\{- \Big\lfloor\frac t {k(\gamma )T}-1\Big\rfloor \frac{\gamma^2}{ 6\sigma^2 } \Big\}. \end{equation} Taking $\delta \to 0$ and using the fact that $\Sigma $ is finite (that is, the transition rates are bounded from below) yields the claim. \end{proof} Finally, let $h:\Sigma \to \mathbb R$ be an arbitrary function such that $\mathop{\mathrm{Var}}\nolimits_\pi (h)\le \sigma ^2$. Set \begin{equation} f = (h - \pi(h))/ (2 \| h \|_\infty), \end{equation} so that $\|f\|_\infty \le 1$, $\pi(f)=0$ and $\pi (f^2) \le \sigma ^2/(4 \|h\|_\infty^2)$. The corollary applied with $\gamma = \delta \pi(h)/(2 \| h \|_\infty)$ then directly implies \begin{equation} \label{e:apph} \mathbb P \Big[\int_0^t h(X_s)\, \mathrm{d} s - t \pi (h) \ge \delta t \pi (h)\Big] \le 4 \exp\Big\{- \Big\lfloor\frac t {k'(\delta )T}-1\Big\rfloor \frac{\delta^2 \pi (h)^2}{6 \sigma^2 } \Big\} \end{equation} with \begin{equation} k'(\delta ) = -\log_2\big(\delta^2 \pi (h)^2 \pi_\star/(6 \sigma^2)\big) \end{equation} whenever \begin{equation} \label{e:apphcond} \delta \le \frac{\sigma^2}{2 \pi (h) \|h\|_\infty} \wedge 1. \end{equation} \end{document}
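As an aside (added here; it is not part of the original text), the concentration phenomenon behind the Chernoff-type bound above is easy to observe numerically. The following Python sketch uses an illustrative two-state chain of our own choosing (flip probability $0.1$, so $\pi=(1/2,1/2)$) with $f=+1$ on one state and $f=-1$ on the other, so that $\pi(f)=0$ and $\pi(f^2)=1$; the empirical mean of the additive functional concentrates near $\pi(f)=0$.

```python
import random

random.seed(0)

# Illustrative two-state chain (not from the paper): stay with prob. 0.9,
# flip with prob. 0.1, so the invariant distribution is pi = (1/2, 1/2).
# Take f = +1 in state 0 and f = -1 in state 1, so pi(f) = 0, pi(f^2) = 1.
n = 100_000
state = 0
total = 0
for _ in range(n):
    if random.random() < 0.1:
        state = 1 - state
    total += 1 if state == 0 else -1

# (1/n) * sum_{i<n} f(X_i); concentrates around pi(f) = 0 as n grows
empirical_mean = total / n
print(empirical_mean)
```

The slower the chain mixes (flip probability closer to $0$), the larger $n$ must be for the same accuracy, in line with the appearance of the mixing time $T$ in the exponent of the bound.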
\begin{document} \title{ \sffamily Bregman circumcenters: basic theory } \author{ Hui\ Ouyang\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \href{mailto:[email protected]}{\texttt{[email protected]}}.}~ and Xianfu\ Wang\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \href{mailto:[email protected]}{\texttt{[email protected]}}.} } \date{April 5, 2021} \maketitle \begin{abstract} \noindent Circumcenters play an important role in the design and analysis of accelerated versions of various iterative methods in optimization. In this work, we propose Bregman (pseudo-)circumcenters associated with finite sets. We show the existence of, and give explicit formulae for, the unique backward and forward Bregman pseudo-circumcenters of finite sets. Moreover, we use duality to establish connections between backward and forward Bregman (pseudo-)circumcenters. Various examples are presented to illustrate the backward and forward Bregman (pseudo-)circumcenters of finite sets. Our general framework for circumcenters paves the way for the development of accelerated iterative methods based on Bregman circumcenters. \end{abstract} {\small \noindent {\bfseries 2020 Mathematics Subject Classification:} { Primary 90C48, 47H04, 47H05; Secondary 90C25, 52A41. } \noindent{\bfseries Keywords:} Bregman distance, Legendre function, backward Bregman projection, forward Bregman projection, backward Bregman (pseudo-)circumcenter, forward Bregman (pseudo-)circumcenter. } \section{Introduction} The circumcenter is a classical concept in geometry. Recently, circumcenters have been used to accelerate iterative methods in optimization, such as the Douglas-Rachford method and the method of alternating projections; see e.g., \cite{BBCS2020CRMbetter, BBCS2017, BBCS2018, BBCS2019, BBCS2020ConvexFeasibility}.
Compared with the classic Douglas-Rachford method and the method of alternating projections, the circumcentered methods exhibit much better performance for solving best approximation problems and feasibility problems. In \cite{BOyW2018}, we provided a systematic study of the circumcenter of a finite set in a Hilbert space. This allowed us to investigate circumcentered methods of nonexpansive mappings, isometries, and best approximation mappings; see, e.g., \cite{BOyW2019Isometry, BOyW2019LinearConvergence, BOyW2020BAM}. Bregman distances have also been widely studied in optimization; see \cite{censor, BBC2003, BC2003, BD2002, reich, wen, laude} and references therein. A natural question is: can one define circumcenters using Bregman distances and use them to accelerate iterative methods? However, up to now a study of circumcenters in the framework of Bregman distances has been missing from the literature. \emph{In this work, under general Bregman distances, we introduce appropriate definitions of circumcenters of finitely many points in a Hilbert space, investigate the existence and uniqueness of circumcenters of finite sets, and present explicit formulae for the unique circumcenters. One of the distinguishing features is that, while the classical circumcenter might fail to exist, the Bregman (pseudo-)circumcenters can exist. Our work sets up the theoretical foundation for utilizing Bregman circumcenters to accelerate iterative methods in optimization. } In general, Bregman distances are neither symmetric nor defined on the whole space; this causes many technical challenges. On the one hand, we have to introduce backward Bregman (pseudo-)circumcenters and forward Bregman (pseudo-)circumcenters; on the other hand, appropriate affine subspaces are needed to explore the uniqueness of Bregman circumcenters.
Our main results in this work are the following: \begin{itemize} \item[\textbf{R1:}] \cref{theor:affine:character:P} characterizes backward and forward Bregman projectors onto affine subspaces. \item[\textbf{R2:}] We provide sufficient conditions for the existence of backward and forward Bregman circumcenters of finite sets and give the corresponding circumcenters in \Cref{theorem:formualCCS:Pleft,theorem:forwardCCS} by using the backward and forward Bregman projections onto affine subspaces. Moreover, the unique backward and forward Bregman pseudo-circumcenters are characterized in \Cref{theorem:formualCCS,theorem:psuCCS:forward} by using the Euclidean projections onto affine subspaces, respectively. \item[\textbf{R3:}] Some dual expressions of the backward and forward Bregman (pseudo-)circumcenters are demonstrated in \cref{theor:CCS:Rel}. \end{itemize} The paper is organized as follows. Some fundamental results on Bregman distances and projections are presented in \cref{sec:Preliminaries} for subsequent usage. In \Cref{sec:BackwardBregmancircumcenters,sec:ForwardBregmancircumcenters}, we systematically investigate backward and forward Bregman (pseudo-)circumcenters, respectively. In particular, we give backward and forward Bregman circumcenters of finite sets via Bregman projections; show the uniqueness of backward and forward Bregman pseudo-circumcenters; state equivalent expressions of backward and forward Bregman pseudo-circumcenters; and provide concrete examples of backward and forward Bregman (pseudo-)circumcenters. In \cref{sec:Miscellaneous}, we establish some duality correspondences between backward and forward Bregman pseudo-circumcenters. Section~\ref{s:compare} compares the Bregman (pseudo-)circumcenters with the classical circumcenter by examples. While the classical circumcenter might not exist, the Bregman circumcenters can. Section~\ref{s:conclude} finishes the paper.
\section{Preliminaries} \label{sec:Preliminaries} In this section, we review some results on Bregman distances and characterize forward and backward Bregman projections onto affine subspaces, which are essential to our later analysis. \subsection*{Notation} Throughout the work, we assume that $\mathbb{N}=\{0,1,2,\ldots\}$ and $\{ m, n \} \subseteq \mathbb{N} \smallsetminus \{0\}$, and that \begin{empheq}[box=\mybluebox]{equation*} \mathcal{H} \text{ is a real Hilbert space with inner product } \innp{\cdot, \cdot} \text{ and induced norm } \norm{\cdot}. \end{empheq} $\Gamma_{0} (\mathcal{H}) $ is the set of proper closed convex functions from $\mathcal{H}$ to $\left]-\infty, +\infty\right]$. Let $f:\mathcal{H} \to \left]-\infty, +\infty\right]$ be proper. The \emph{domain} (\emph{conjugate function, gradient}, respectively) of $f$ is denoted by $\ensuremath{\operatorname{dom}} f$ ($f^{*}$, $\nabla f$, respectively). Let $C$ be a nonempty subset of $\mathcal{H}$. Its \emph{interior} and \emph{boundary} are abbreviated by $\ensuremath{\operatorname{int}} C$ and $\ensuremath{\operatorname{bd}} C$, respectively. $C$ is an \emph{affine subspace} of $\mathcal{H}$ if $C \neq \varnothing$ and $(\forall \rho\in\mathbb{R})$ $\rho C + (1-\rho)C = C$. The smallest affine subspace of $\mathcal{H}$ containing $C$ is denoted by $\ensuremath{\operatorname{aff} \,} C$ and called the \emph{affine hull} of $C$. The \emph{orthogonal complement of $C$} is the set $ C^{\perp} :=\{x \in \mathcal{H}~:~ (\forall y \in C) ~\innp{x,y}=0\}$. The \emph{best approximation operator} (or \emph{projector}) onto $C$ under the Euclidean distance is denoted by $\ensuremath{\operatorname{P}}_{C}$, that is, $(\forall x \in \mathcal{H} )$ $\ensuremath{\operatorname{P}}_{C}x := \ensuremath{\operatorname{argmin}}_{y \in C} \norm{x-y}$. 
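As a small computational illustration of the projector $\ensuremath{\operatorname{P}}_{C}$ just defined (a sketch added here; it is not part of the original text), when $\mathcal{H}=\mathbb{R}^{n}$ and $C$ is an affine subspace $U=\{a+Dt : t\in\mathbb{R}^{k}\}$ spanned by the columns of a matrix $D$, the minimization $\ensuremath{\operatorname{argmin}}_{y \in U} \norm{x-y}$ reduces to an ordinary least-squares problem in $t$:

```python
import numpy as np

def proj_affine(x, a, D):
    """Euclidean projection of x onto U = {a + D @ t : t in R^k}.

    argmin_{y in U} ||x - y|| reduces to least squares in t.
    """
    t, *_ = np.linalg.lstsq(D, np.asarray(x, dtype=float) - a, rcond=None)
    return a + D @ t

# Project (1, 1) onto the horizontal line {(s, 2) : s in R}.
a = np.array([0.0, 2.0])
D = np.array([[1.0], [0.0]])          # one direction vector, as a 2x1 matrix
p = proj_affine(np.array([1.0, 1.0]), a, D)
# p equals (1, 2); the residual x - p is orthogonal to the direction (1, 0).
```

The orthogonality of the residual $x-\ensuremath{\operatorname{P}}_{U}x$ to $U-a$ is exactly the Euclidean special case of the variational characterizations developed below.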
Given the points $ a_{1}, \ldots, a_{m} $ in $ \mathcal{H}$, the \emph{Gram matrix} $G(a_{1}, \ldots, a_{m})$ is defined as \begin{align*} G(a_{1}, a_{2}, \ldots, a_{m}) := \begin{pmatrix} \norm{a_{1}}^{2} &\innp{a_{1},a_{2}} & \cdots & \innp{a_{1}, a_{m}} \\ \innp{a_{2},a_{1}} & \norm{a_{2}}^{2} & \cdots & \innp{a_{2},a_{m}} \\ \vdots & \vdots & \ddots & \vdots \\ \innp{a_{m},a_{1}} & \innp{a_{m},a_{2}} & \cdots & \norm{a_{m}}^{2} \\ \end{pmatrix}. \end{align*} For every $x \in \mathcal{H}$ and $\delta \in \mathbb{R}_{++}$, $B [x; \delta] $ is the \emph{closed ball with center at $x$ and with radius $\delta$}. Let $A : \mathcal{H} \to 2^{\mathcal{H}}$ and let $x \in \mathcal{H}$. Then $A$ is \emph{locally bounded at $x$} if there exists $\delta \in \mathbb{R}_{++}$ such that $A(B[x;\delta] )$ is bounded. For convenience, if $A(x)$ is a singleton for some $x \in \mathcal{H}$, by a slight abuse of notation we allow $A(x)$ to stand for its unique element. For other notation not explicitly defined here, we refer the reader to \cite{BC2017}. \subsection*{Legendre functions and Bregman distances} Legendre functions are instrumental for our analysis. \begin{definition} {\rm \cite[Definition~5.2 and Theorem~5.6]{BBC2001}} \label{def:Legendre} Suppose that $f \in \Gamma_{0} (\mathcal{H}) $. We say $f$ is: \begin{enumerate} \item \emph{essentially smooth}, if $\ensuremath{\operatorname{dom}} \partial f =\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$, $f$ is G\^ateaux differentiable on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, and $\norm{\nabla f (x_{k})} \to +\infty$ for every sequence $(x_{k})_{k \in \mathbb{N}}$ in $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$ converging to some point in $\ensuremath{\operatorname{bd}} \ensuremath{\operatorname{dom}} f$.
\item \label{def:Legendre:convex} \emph{essentially strictly convex}, if $(\partial f)^{-1}$ is locally bounded on its domain and $f$ is strictly convex on every convex subset of $\ensuremath{\operatorname{dom}} \partial f$. \item \emph{Legendre}, if $f$ is both essentially smooth and essentially strictly convex. \end{enumerate} \end{definition} \begin{fact}{\rm \cite[Lemma~7.3(vii)]{BBC2001}} \label{fact:PropertD} Suppose that $f \in \Gamma_{0} (\mathcal{H}) $ with $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$, that $f$ is G\^ateaux differentiable on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, and that $f$ is essentially strictly convex. Let $x$ and $y$ be in $ \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$. Then $D_{f}(x, y) = D_{f^{*}}(\nabla f (y),\nabla f (x))$. \end{fact} \begin{fact} {\rm \cite[Corollary~5.5 and Theorem~5.10]{BBC2001}} \label{fact:nablaf:nablaf*:id} Suppose that $f \in \Gamma_{0} (\mathcal{H}) $. Then $f$ is Legendre if and only if $f^{*}$ is. In this case, the gradient mapping $\nabla f: \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \to \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$ is bijective, with inverse $(\nabla f)^{-1} =\nabla f^{*} : \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*} \to \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$. Moreover, the gradient mappings $\nabla f$, $\nabla f^{*}$ are both norm-to-weak continuous and locally bounded on their respective domains.
\end{fact} \begin{definition} {\rm \cite[Definitions~7.1 and 7.7]{BBC2001}} \label{defn:BregmanDistance} Suppose that $f \in \Gamma_{0} (\mathcal{H}) $ with $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$ and that $f$ is G\^ateaux differentiable on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$. The \emph{Bregman distance $\ensuremath{\operatorname{D}}_{f}$ associated with $f$} is defined by \begin{align*} \ensuremath{\operatorname{D}}_{f}: \mathcal{H} \times \mathcal{H} \to \left[0, +\infty\right] : (x,y) \mapsto \begin{cases} f(x) -f(y) -\innp{\nabla f(y), x-y}, \quad &\text{if } y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f;\\ +\infty, \quad &\text{otherwise}. \end{cases} \end{align*} Moreover, let $C$ be a nonempty subset of $\mathcal{H}$. For every $(x,y) \in \ensuremath{\operatorname{dom}} f \times \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, define the \emph{backward Bregman projection} of $y$ onto $C$ and the \emph{forward Bregman projection} of $x$ onto $C$, respectively, as \begin{align*} &\overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{C}(y):= \left\{ u \in C \cap \ensuremath{\operatorname{dom}} f ~:~ (\forall c \in C)~ \ensuremath{\operatorname{D}}_{f} \left( u,y \right) \leq \ensuremath{\operatorname{D}}_{f} (c,y) \right\}, \text{ and}\\ &\overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{C}(x) := \left\{ v \in C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f ~:~ (\forall c \in C)~ \ensuremath{\operatorname{D}}_{f} \left( x,v \right) \leq \ensuremath{\operatorname{D}}_{f} (x, c) \right\}. \end{align*} \end{definition} Clearly, one recovers the Euclidean distance $\ensuremath{\operatorname{D}} : \mathcal{H} \times \mathcal{H} \to \left[0, +\infty\right] : (x,y) \mapsto \frac{1}{2} \norm{x-y}^{2}$ by setting $f = \frac{1}{2} \norm{\cdot}^{2}$ in \cref{defn:BregmanDistance}.
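To make the Bregman distance concrete, the following sketch (added here; it is not part of the original text) evaluates $\ensuremath{\operatorname{D}}_{f}$ directly from the defining formula for two standard choices of $f$ on $\mathbb{R}^{n}$: the energy $f=\frac{1}{2}\norm{\cdot}^{2}$, which recovers the Euclidean case, and the negative entropy, which yields a Kullback--Leibler-type distance that is not symmetric in its arguments.

```python
import numpy as np

def bregman(f, grad_f, x, y):
    """D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>, for y in int dom f."""
    return f(x) - f(y) - float(np.dot(grad_f(y), x - y))

# Energy f = (1/2)||.||^2: D_f(x, y) = (1/2)||x - y||^2 (the Euclidean case).
energy = lambda x: 0.5 * float(np.dot(x, x))
energy_grad = lambda x: x

# Negative entropy f(x) = sum_i x_i ln(x_i) - x_i on [0, +inf)^n:
# D_f(x, y) = sum_i x_i ln(x_i / y_i) - x_i + y_i, a Kullback-Leibler-type
# distance, which is not symmetric in (x, y).
negent = lambda x: float(np.sum(x * np.log(x) - x))
negent_grad = lambda x: np.log(x)

x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
d_energy = bregman(energy, energy_grad, x, y)   # 0.5*((1-3)^2 + (2-1)^2) = 2.5
d_kl = bregman(negent, negent_grad, x, y)       # 1 + 2 ln 2 - ln 3
d_kl_rev = bregman(negent, negent_grad, y, x)   # differs: D_f is asymmetric
```

The asymmetry visible in the last two lines is precisely why the backward and forward Bregman projections (and, later, circumcenters) must be treated as distinct notions.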
\subsection*{Bregman projections} \Cref{fact:charac:PleftCf,fact:charac:PrightCf} play an essential role in the characterizations of Bregman projections onto affine subspaces. \begin{fact} \label{fact:charac:PleftCf} Suppose that $f \in \Gamma_{0} (\mathcal{H}) $ is Legendre, that $C$ is a closed convex subset of $\mathcal{H}$ with $C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$, and that $y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f $. Then for every $z \in \mathcal{H}$, \begin{align} \label{eq:fact:charac:PleftCf} z= \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{C}(y) \Leftrightarrow \left[ z \in C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \quad \text{and} \quad (\forall c \in C)~ \innp{c-z, \nabla f (y) -\nabla f (z)} \leq 0 \right]; \end{align} equivalently, \begin{align}\label{eq:fact:charac:PleftCf:D} z= \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{C}(y) \Leftrightarrow \left[ z \in C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \quad \text{and} \quad (\forall c \in C)~ \ensuremath{\operatorname{D}}_{f} (c,y) \geq \ensuremath{\operatorname{D}}_{f} (c,z) +\ensuremath{\operatorname{D}}_{f} (z,y) \right]. \end{align} \end{fact} \begin{proof} Note that although \cite{BB1997Legendre} is set in $\mathbb{R}^{n}$, the proof of \cite[Proposition~3.16]{BB1997Legendre} works in the Hilbert space setting as well. Hence, mimic the proof of \cite[Proposition~3.16]{BB1997Legendre} and apply \cite[Corollary~7.9]{BBC2001} to obtain \cref{eq:fact:charac:PleftCf}.
In addition, in view of \cref{eq:fact:charac:PleftCf} and \cref{defn:BregmanDistance}, we have \begin{align*} z= \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{C}(y) \Rightarrow \left[ z \in C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \quad \text{and} \quad (\forall c \in C)~ \ensuremath{\operatorname{D}}_{f} (c,y) \geq \ensuremath{\operatorname{D}}_{f} (c,z) +\ensuremath{\operatorname{D}}_{f} (z,y) \right]. \end{align*} Furthermore, assume that $z \in C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$ and $ (\forall c \in C)$ $\ensuremath{\operatorname{D}}_{f} (c,y) \geq \ensuremath{\operatorname{D}}_{f} (c,z) +\ensuremath{\operatorname{D}}_{f} (z,y) $. Then \begin{align*} (\forall c \in C) \quad \ensuremath{\operatorname{D}}_{f} (c,y) \geq \ensuremath{\operatorname{D}}_{f} (c,z) +\ensuremath{\operatorname{D}}_{f} (z,y) \geq \ensuremath{\operatorname{D}}_{f} (z,y), \end{align*} which, by \cref{defn:BregmanDistance}, implies that $z= \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{C}(y)$. Altogether, \cref{eq:fact:charac:PleftCf:D} is true. \end{proof} \begin{definition} {\rm \cite[Definition~2.4]{BC2003}} Suppose that $f \in \Gamma_{0} (\mathcal{H}) $ is Legendre such that $\ensuremath{\operatorname{dom}} f^{*}$ is open. We say \emph{the function $f$ allows forward Bregman projections} if it satisfies the following properties. \begin{enumerate} \item $\nabla^{2} f$ exists and is continuous on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$. \item $\ensuremath{\operatorname{D}}_{f}$ is convex on $(\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f)^{2}$. \item For every $x \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, $D_{f}(x, \cdot)$ is strictly convex on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$.
\end{enumerate} \end{definition} \begin{fact} {\rm \cite[Fact~2.6]{BC2003}} \label{fact:charac:PrightCf} Suppose that $\mathcal{H} =\mathbb{R}^{n}$ and $f \in \Gamma_{0} (\mathcal{H}) $ is Legendre such that $\ensuremath{\operatorname{dom}} f^{*}$ is open, that $f$ allows forward Bregman projections, and that $C $ is a closed convex subset of $\mathcal{H}$ with $C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$. Then for every $y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, the forward Bregman projection of $y$ onto $C$ exists and is unique, and for every $z \in \mathcal{H}$, \begin{align*} z =\overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{C}(y) \Leftrightarrow \left[ z \in C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \quad \text{and} \quad (\forall c \in C) ~ \innp{c-z, \nabla^{2} f (z) (y - z)} \leq 0 \right]; \end{align*} equivalently, \begin{align*} z =\overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{C}(y) \Leftrightarrow \left[ z \in C \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \quad \text{and} \quad (\forall c \in C)~ \ensuremath{\operatorname{D}}_{f} (c,y) \geq \ensuremath{\operatorname{D}}_{f} (c,z) +\ensuremath{\operatorname{D}}_{\ensuremath{\operatorname{D}}_{f}} \left((c,c), (y , z)\right) \right]. \end{align*} Moreover, the \emph{forward Bregman projector} $\overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{C}$ is continuous on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$. \end{fact} Note that not every Legendre function allows forward Bregman projections and that the backward and forward Bregman projections are different notions (see, e.g., \cite{BB1997Legendre}, \cite{BC2003} and \cite{BD2002}). Below we list particular functions satisfying the conditions required by \cref{fact:charac:PleftCf} or \cref{fact:charac:PrightCf}.
\begin{fact} {\rm \cite[Examples~2.1 and 2.7]{BC2003}} \label{fact:examplef} Suppose that $\mathcal{H} =\mathbb{R}^{n}$. For every $x \in \mathcal{H}$, write $x= (x_{i})^{n}_{i=1} $. Then the following functions are Legendre and the domains of their conjugates are open. Moreover, the energy, negative entropy, and Fermi-Dirac entropy allow forward Bregman projections.\footnotemark \footnotetext{Here and elsewhere, we use the convention that $0 \ln (0) =0$.} \begin{enumerate} \item $($energy$)$ $f :x \mapsto \frac{1}{2} \norm{x}^{2} =\frac{1}{2} \sum^{n}_{i=1} \abs{x_{i}}^{2}$, with $\ensuremath{\operatorname{dom}} f =\mathbb{R}^{n}$. \item $($negative entropy$)$ $f :x \mapsto \sum^{n}_{i=1} x_{i} \ln (x_{i}) -x_{i}$, with $\ensuremath{\operatorname{dom}} f =\left[0, +\infty\right[^{n} $. \item $($Fermi-Dirac entropy$)$ $f :x \mapsto \sum^{n}_{i=1} x_{i} \ln (x_{i}) + (1-x_{i} ) \ln(1-x_{i} )$, with $\ensuremath{\operatorname{dom}} f =\left[0, 1\right]^{n} $. \item $($Burg entropy$)$ $f :x \mapsto -\sum^{n}_{i=1} \ln (x_{i}) $, with $\ensuremath{\operatorname{dom}} f =\left]0, +\infty\right[^{n} $. \item $f :x \mapsto - \sum^{n}_{i=1} \sqrt{x_{i} }$, with $\ensuremath{\operatorname{dom}} f =\left[0, +\infty\right[^{n} $. \end{enumerate} \end{fact} Bregman projections onto affine subspaces are characterized in the following result. Notice that \cref{theor:affine:character:P}\cref{theor:affine:character:P:Lback} with $f(x)= \sum^{n}_{i=1} x_{i}\log x_{i}$, where $x:= (x_{1}, \ldots, x_{n}) \in \mathbb{R}^{n}$, reduces to \cite[Corollary~2.1]{Teboulle1992}. \begin{theorem} \label{theor:affine:character:P} Suppose that $f \in \Gamma_{0} (\mathcal{H}) $ is Legendre, and that $U $ and $L$ are, respectively, closed affine and linear subspaces of $\mathcal{H}$ with $U \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$ and $L \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$.
Let $y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$ and $z \in \mathcal{H}$. Then the following assertions hold. \begin{enumerate} \item \label{theor:affine:character:PleftCf} The following three statements are equivalent. \begin{enumerate} \item \label{theor:affine:character:PleftCf:=}$z = \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{U} (y) $. \item \label{theor:affine:character:PleftCf:innp}$z \in U \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f $ and $(\forall u \in U) $ $ \innp{\nabla f(y) -\nabla f ( z), u- z } = 0 $. \item \label{theor:affine:character:PleftCf:D}$z \in U \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f $ and $(\forall u \in U) $ $\ensuremath{\operatorname{D}}_{f} (u,y) = \ensuremath{\operatorname{D}}_{f} (u,z) +\ensuremath{\operatorname{D}}_{f} (z,y)$. \end{enumerate} \item \label{theor:affine:character:P:Lback} $z = \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{L} (y) \Leftrightarrow \left[ z \in L \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f ~\text{and} ~ \left( \nabla f(y) -\nabla f ( z) \right) \in L^{\perp} \right].$ \item \label{theor:affine:character:PrightCf} Suppose that $\mathcal{H} =\mathbb{R}^{n}$, that $\ensuremath{\operatorname{dom}} f^{*}$ is open, and that $f$ allows forward Bregman projections. Then the following three statements are equivalent. \begin{enumerate} \item \label{theor:affine:character:PrightCf:=} $z =\overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{U}(y)$. \item \label{theor:affine:character:PrightCf:innp} $z \in U \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f $ and $(\forall u \in U) $ $ \innp{u-z, \nabla^{2} f (z) (y - z)} = 0 $. 
\item \label{theor:affine:character:PrightCf:D} $z \in U \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f $ and $(\forall u \in U) $ $\ensuremath{\operatorname{D}}_{f} (u,y) = \ensuremath{\operatorname{D}}_{f} (u,z) +\ensuremath{\operatorname{D}}_{\ensuremath{\operatorname{D}}_{f}} \left((u,u), (y , z)\right)$. \end{enumerate} \item \label{theor:affine:character:P:Lfor} Suppose that $\mathcal{H} =\mathbb{R}^{n}$, that $\ensuremath{\operatorname{dom}} f^{*}$ is open, and that $f$ allows forward Bregman projections. Then $z =\overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{L}(y) \Leftrightarrow \left[ z \in L \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f ~ \text{and} ~ \nabla^{2} f (z) (y - z) \in L^{\perp} \right]. $ \end{enumerate} \end{theorem} \begin{proof} \cref{theor:affine:character:PleftCf}: Suppose that $z = \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{U} (y) $. According to \cref{fact:charac:PleftCf}, $z \in U \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f $ and \begin{align} \label{eq:PCf:leq} (\forall u \in U ) \quad \innp{\nabla f(y) -\nabla f ( z ), u-z } \leq 0. \end{align} Because $z \in U$ and $U $ is affine, we have that $(\forall u \in U)$ $z +(z-u) =2z-u \in U $. Hence, \begin{align} \label{eq:PCf:geq} (\forall u \in U ) \quad \innp{\nabla f(y) -\nabla f ( z ), z-u } =\innp{\nabla f(y) -\nabla f ( z ), z +(z-u)-z } \stackrel{\cref{eq:PCf:leq}}{\leq} 0. \end{align} Combining \cref{eq:PCf:leq} and \cref{eq:PCf:geq} yields the required \cref{theor:affine:character:PleftCf:=}$\Leftrightarrow$\cref{theor:affine:character:PleftCf:innp}. On the other hand, similarly to the last part of our proof of \cref{fact:charac:PleftCf}, \cref{theor:affine:character:PleftCf:innp} $\Leftrightarrow$ \cref{theor:affine:character:PleftCf:D} follows from \cref{defn:BregmanDistance} and \cref{theor:affine:character:PleftCf:=}$\Leftrightarrow$\cref{theor:affine:character:PleftCf:innp}.
\cref{theor:affine:character:P:Lback}: This is clear from \cref{theor:affine:character:PleftCf:=} $\Leftrightarrow$ \cref{theor:affine:character:PleftCf:innp} above. \cref{theor:affine:character:PrightCf}: The proof of \cref{theor:affine:character:PrightCf:=} $\Leftrightarrow $ \cref{theor:affine:character:PrightCf:innp} is similar to the proof of \cref{theor:affine:character:PleftCf:=} $ \Leftrightarrow$ \cref{theor:affine:character:PleftCf:innp}, except that this time we apply \cref{fact:charac:PrightCf} instead of \cref{fact:charac:PleftCf}. Let $u \in U$. By \cref{defn:BregmanDistance}, \begin{align*} &\ensuremath{\operatorname{D}}_{f} (u,y) = \ensuremath{\operatorname{D}}_{f} (u,z) +\ensuremath{\operatorname{D}}_{\ensuremath{\operatorname{D}}_{f}} \left((u,u), (y , z)\right) \\ \Leftrightarrow & \ensuremath{\operatorname{D}}_{f} (u,y) = \ensuremath{\operatorname{D}}_{f} (u,z) +\ensuremath{\operatorname{D}}_{f}(u,u) -\ensuremath{\operatorname{D}}_{f}(y , z) - \innp{ \Big(\nabla f (y) - \nabla f(z), -\nabla^{2} f (z)(y-z) \Big), (u-y,u-z)}\\ \Leftrightarrow & \ensuremath{\operatorname{D}}_{f} (u,y) = \ensuremath{\operatorname{D}}_{f} (u,z) -\ensuremath{\operatorname{D}}_{f}(y , z) - \innp{\nabla f (y) - \nabla f(z), u-y } +\innp{\nabla^{2} f (z) (y - z),u-z}, \end{align*} which, by the three point identity in \cite[Proposition~2.3(ii)]{BBC2003}, yields \cref{theor:affine:character:PrightCf:innp} $\Rightarrow$ \cref{theor:affine:character:PrightCf:D}. On the other hand, by \cref{defn:BregmanDistance}, we know that \cref{theor:affine:character:PrightCf:D} $\Rightarrow$ \cref{theor:affine:character:PrightCf:=}. Altogether, we obtain that \cref{theor:affine:character:PrightCf:=} $\Leftrightarrow$ \cref{theor:affine:character:PrightCf:innp} $\Leftrightarrow$ \cref{theor:affine:character:PrightCf:D}.
\cref{theor:affine:character:P:Lfor}: This follows easily from \cref{theor:affine:character:PrightCf:=} $\Leftrightarrow$ \cref{theor:affine:character:PrightCf:innp}. \end{proof} \subsection*{Generalizations of formulae of circumcenters} The following result is a generalized version of \cite[Theorem~4.1]{BOyW2018}, which presents an explicit formula of the circumcenter of finite sets. \begin{theorem} \label{thm:LinIndpPformula} Let $z_{0}, z_{1}, \ldots, z_{m}$ be affinely independent points in $\mathcal{H}$, and let $(\lambda_{1}, \ldots, \lambda_{m}) \in \mathbb{R}^{m}$. Set $\ensuremath{\operatorname{I}}:=\{1, \ldots, m \}$ and \begin{align*} p:= z_{0}+ (z_{1}-z_{0},\ldots,z_{m}-z_{0}) G( z_{1}-z_{0},\ldots,z_{m}-z_{0})^{-1} \begin{pmatrix} \lambda_{1}-\innp{z_{0},z_{1}-z_{0}}\\ \vdots\\ \lambda_{m}-\innp{z_{0},z_{m}-z_{0}} \\ \end{pmatrix}, \end{align*} where $G( z_{1}-z_{0},\ldots,z_{m}-z_{0})$ is the Gram matrix defined as \begin{align*} G( z_{1}-z_{0},\ldots, z_{m}-z_{0}) := \begin{pmatrix} \norm{z_{1}-z_{0}}^{2} & \cdots & \innp{z_{1}-z_{0}, z_{m}-z_{0}} \\ \vdots & \cdots & \vdots \\ \innp{z_{m}-z_{0},z_{1}-z_{0}} & \cdots & \norm{z_{m}-z_{0}}^{2} \\ \end{pmatrix}. \end{align*} Then $\left\{ x \in \ensuremath{\operatorname{aff} \,} ( \{z_{0}, z_{1}, \ldots, z_{m} \} ) ~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \innp{x , z_{i} -z_{0}} = \lambda_{i} \right\} = \{p\}$. \end{theorem} \begin{proof} Denote by $\Omega:=\left\{ x \in \ensuremath{\operatorname{aff} \,} ( \{z_{0}, z_{1}, \ldots, z_{m} \} ) ~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \innp{x , z_{i} -z_{0}} = \lambda_{i} \right\} $. According to the assumption and \cite[Fact~2.8]{BOyW2018}, $z_{1}-z_{0}, \ldots, z_{m}-z_{0} $ are linearly independent. Then by \cite[Fact~2.13]{BOyW2018}, the Gram matrix $G(z_{1}-z_{0}, \ldots, z_{m}-z_{0})$ is invertible. 
Set \begin{align*} \begin{pmatrix} \alpha_{1} \\ \vdots\\ \alpha_{m} \\ \end{pmatrix} := G(z_{1}-z_{0}, \ldots, z_{m}-z_{0})^{-1} \begin{pmatrix} \lambda_{1}-\innp{z_{0},z_{1}-z_{0}} \\ \vdots\\ \lambda_{m}-\innp{z_{0},z_{m}-z_{0}} \\ \end{pmatrix}. \end{align*} So, $p = z_{0}+\alpha_{1}(z_{1}-z_{0})+\alpha_{2}(z_{2}-z_{0})+\cdots+\alpha_{m}(z_{m}-z_{0}) \in \ensuremath{\operatorname{aff} \,} ( \{z_{0}, z_{1}, \ldots, z_{m} \} )$ is well-defined. On the other hand, using the definitions of $G(z_{1}-z_{0}, \ldots, z_{m}-z_{0})$, $(\alpha_{1}, \cdots, \alpha_{m})^{\intercal}$ and $p$, we see that \begin{align*} &G(z_{1}-z_{0}, \ldots, z_{m}-z_{0}) \begin{pmatrix} \alpha_{1} \\ \vdots\\ \alpha_{m} \\ \end{pmatrix} = \begin{pmatrix} \lambda_{1}-\innp{z_{0},z_{1}-z_{0}} \\ \vdots\\ \lambda_{m}-\innp{z_{0},z_{m}-z_{0}} \\ \end{pmatrix}\\ \Leftrightarrow & \begin{cases} \innp{ \alpha_{1}(z_{1}-z_{0})+ \cdots +\alpha_{m}(z_{m}-z_{0}), z_{1}-z_{0} } = \lambda_{1}-\innp{z_{0},z_{1}-z_{0}} \\ ~~~~~~~~~~\vdots \\ \innp{\alpha_{1}(z_{1}-z_{0})+ \cdots +\alpha_{m}(z_{m}-z_{0}), z_{m}-z_{0} } = \lambda_{m}-\innp{z_{0},z_{m}-z_{0}} \end{cases}\\ \Leftrightarrow & \begin{cases} \innp{p-z_{0},z_{1}-z_{0}} =\lambda_{1}-\innp{z_{0},z_{1}-z_{0}}\\ ~~~~~~~~~~\vdots \\ \innp{p-z_{0}, z_{m}-z_{0}} =\lambda_{m}-\innp{z_{0},z_{m}-z_{0}} \end{cases}\\ \Leftrightarrow & (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{p, z_{i}-z_{0}} = \lambda_{i}. \end{align*} Hence, $p \in \Omega $. In addition, assume $q \in \Omega$. Denote by $L:=\ensuremath{{\operatorname{span} \,}} \{ z_{1} -z_{0}, \ldots, z_{m}-z_{0} \}$. Then $\{q, p\} \subseteq \Omega \subseteq \ensuremath{\operatorname{aff} \,} ( \{z_{0}, z_{1}, \ldots, z_{m} \} )$ implies that $q-p \in L$.
Moreover, $\{q, p\} \subseteq \Omega $ yields that \begin{align*} (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{ q , z_{i} -z_{0}} = \lambda_{i} \quad \text{and} \quad \innp{ p , z_{i} -z_{0}} = \lambda_{i}, \end{align*} which entails $q-p \in L^{\perp}$. Therefore, $q-p \in L \cap L^{\perp} =\{0\}$, that is, $q=p$. Altogether, the proof is complete. \end{proof} \section{Backward Bregman circumcenters of finite sets} \label{sec:BackwardBregmancircumcenters} Throughout this section, suppose that $f \in \Gamma_{0} (\mathcal{H}) $ with $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$, that $f$ is G\^ateaux differentiable on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, and that \begin{empheq}[box=\mybluebox]{equation} \label{eq:SBack} \mathcal{S}:= \{q_{0}, q_{1}, \ldots, q_{m} \} \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \text{ and } \mathcal{S} \text{ is nonempty}. \end{empheq} Set $\ensuremath{\operatorname{I}} := \{1, \ldots, m \}$, \begin{empheq}[box=\mybluebox]{equation} \label{eq:Ef} \overleftarrow{E}_{f}(\mathcal{S}):= \{ x \in \ensuremath{\operatorname{dom}} f ~:~ \ensuremath{\operatorname{D}}_{f} (x,q_{0}) =\ensuremath{\operatorname{D}}_{f} (x, q_{1}) =\cdots =\ensuremath{\operatorname{D}}_{f} (x, q_{m}) \}, \end{empheq} \begin{align} \label{eq:L} L:= \ensuremath{\operatorname{aff} \,} ( \nabla f(\mathcal{S}) ) - \ensuremath{\operatorname{aff} \,} ( \nabla f(\mathcal{S}) ) = \ensuremath{{\operatorname{span} \,}} \{ \nabla f(q_{1})-\nabla f(q_{0}), \ldots, \nabla f(q_{m})-\nabla f(q_{0}) \}, \end{align} and \begin{align} \label{eq:b} (\forall i \in \ensuremath{\operatorname{I}} )\quad \beta_{i}:= \innp{\nabla f(q_{i}), q_{i}} -f(q_{i}) - (\innp{\nabla f(q_{0}), q_{0}} - f(q_{0}) ). \end{align} \subsection*{Backward Bregman (pseudo-)circumcenter operators} To introduce backward Bregman (pseudo-)circumcenter operators, we need the following lemmas.
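Before turning to these lemmas, we note that the formula of \cref{thm:LinIndpPformula} lends itself to a direct numerical sanity check. The following sketch is a non-authoritative illustration (it assumes NumPy is available and uses randomly generated data, which are almost surely affinely independent); it verifies that the point $p$ built from the Gram matrix satisfies $\innp{p, z_{i}-z_{0}} = \lambda_{i}$ for all $i$.

```python
import numpy as np

# Check of Theorem thm:LinIndpPformula: with affinely independent
# z_0, ..., z_m and targets lambda_i, the unique point p of
# aff{z_0, ..., z_m} with <p, z_i - z_0> = lambda_i is
#   p = z_0 + Z G^{-1} (lambda_i - <z_0, z_i - z_0>)_i,
# where Z has columns z_i - z_0 and G = Z^T Z is the Gram matrix.
rng = np.random.default_rng(0)
m, n = 3, 5
z0 = rng.standard_normal(n)
Z = rng.standard_normal((n, m))   # columns play the role of z_i - z_0
lam = rng.standard_normal(m)

G = Z.T @ Z                       # Gram matrix G(z_1-z_0, ..., z_m-z_0)
p = z0 + Z @ np.linalg.solve(G, lam - Z.T @ z0)

# p satisfies all m linear constraints <p, z_i - z_0> = lambda_i.
assert np.allclose(Z.T @ p, lam)
```

By construction $p$ lies in the affine hull of the data, so this confirms both membership and the constraint system of the theorem on this random instance.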
\begin{lemma} \label{lemma:xinEp1pm} We have $ x \in \overleftarrow{E}_{f}(\mathcal{S}) \Leftrightarrow \left[ x \in \ensuremath{\operatorname{dom}} f \text{ and } (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{\nabla f(q_{i}) -\nabla f(q_{0}), x} = \beta_{i}\right]$. \end{lemma} \begin{proof} Let $x \in \ensuremath{\operatorname{dom}} f$. According to \cref{defn:BregmanDistance}, for every $\{ i, j \} \subseteq \{0,1, \ldots, m \}$, we have that \begin{subequations} \label{eq:lemma:xinEp1pm} \begin{align} \ensuremath{\operatorname{D}}_{f} (x, q_{i}) =\ensuremath{\operatorname{D}}_{f} (x,q_{j}) & \Leftrightarrow f(x) -f(q_{i}) -\innp{\nabla f(q_{i}), x-q_{i}} = f(x) -f(q_{j}) -\innp{\nabla f(q_{j}), x-q_{j}} \\ & \Leftrightarrow \innp{\nabla f(q_{j}) -\nabla f(q_{i}), x} =\innp{\nabla f(q_{j}), q_{j}} - f(q_{j}) - ( \innp{\nabla f(q_{i}), q_{i}} -f(q_{i}) ). \end{align} \end{subequations} Hence, \begin{subequations} \begin{align*} x \in \overleftarrow{E}_{f}(\mathcal{S}) & \stackrel{\cref{eq:Ef}}{\Leftrightarrow} ~ (\forall i \in \ensuremath{\operatorname{I}} ) ~ \ensuremath{\operatorname{D}}_{f} (x, q_{i}) =\ensuremath{\operatorname{D}}_{f} (x,q_{0}) \\ & \stackrel{\cref{eq:lemma:xinEp1pm}}{\Leftrightarrow} (\forall i \in \ensuremath{\operatorname{I}} ) ~ \innp{\nabla f(q_{i}) -\nabla f(q_{0}), x} =\innp{\nabla f(q_{i}), q_{i}}-f(q_{i}) -(\innp{\nabla f(q_{0}), q_{0}} -f(q_{0}) )\\ & \stackrel{\cref{eq:b}}{\Leftrightarrow} (\forall i \in \ensuremath{\operatorname{I}} ) ~ \innp{\nabla f(q_{i}) -\nabla f(q_{0}), x} =\beta_{i}. \end{align*} \end{subequations} \end{proof} \begin{lemma} \label{lemma:CCSpseudoB} Suppose that $\nabla f(q_{0}), \nabla f(q_{1}), \ldots, \nabla f(q_{m})$ are affinely independent. 
Set \begin{align*} p := \nabla f(q_{0})+\alpha_{1} \left(\nabla f(q_{1})-\nabla f(q_{0}) \right)+ \cdots+ \alpha_{m} \left(\nabla f(q_{m})-\nabla f(q_{0})\right), \end{align*} where \begin{align*} \begin{pmatrix} \alpha_{1} \\ \vdots\\ \alpha_{m } \\ \end{pmatrix} := G( \nabla f(q_{1})-\nabla f(q_{0}),\ldots,\nabla f(q_{m})-\nabla f(q_{0}))^{-1} \begin{pmatrix} \beta_{1} -\innp{\nabla f(q_{0}),\nabla f(q_{1})-\nabla f(q_{0})}\\ \vdots\\ \beta_{m}-\innp{\nabla f(q_{0}),\nabla f(q_{m})-\nabla f(q_{0})} \\ \end{pmatrix}. \end{align*} Then $ \left\{ x \in \ensuremath{\operatorname{aff} \,} ( \{\nabla f(q_{0}), \nabla f(q_{1}), \ldots,\nabla f(q_{m}) \} ) ~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \innp{x , \nabla f(q_{i}) -\nabla f(q_{0})} = \beta_{i} \right\} = \{p\}$. \end{lemma} \begin{proof} This follows immediately from \cref{thm:LinIndpPformula} by setting $z_{0}=\nabla f(q_{0})$, $z_{1}=\nabla f(q_{1})$, $\ldots$, $z_{m}=\nabla f(q_{m})$. \end{proof} \begin{lemma} \label{lemma:x-y:Lperp} Let $x $ and $y$ be in $\overleftarrow{E}_{f}(\mathcal{S}) $. Then $x-y \in L^{\perp}$. Consequently, $\left(\forall z \in \overleftarrow{E}_{f}(\mathcal{S}) \right)$ $\overleftarrow{E}_{f}(\mathcal{S}) =\ensuremath{\operatorname{dom}} f \cap \left( z+L^{\perp} \right)$. \end{lemma} \begin{proof} Because $ \{x,y \} \subseteq \overleftarrow{E}_{f}(\mathcal{S}) \subseteq \ensuremath{\operatorname{dom}} f$, due to \cref{lemma:xinEp1pm}, for every $i \in \ensuremath{\operatorname{I}}$, \begin{subequations} \label{eq:x:y:affine} \begin{align} \innp{\nabla f(q_{i }) -\nabla f(q_{0}), x} = \beta_{i} \label{eq:x:y:affine:x},\\ \innp{\nabla f(q_{i }) -\nabla f(q_{0}), y} =\beta_{i} \label{eq:x:y:affine:y}. 
\end{align} \end{subequations} Then for every $i \in\ensuremath{\operatorname{I}}$, subtract \cref{eq:x:y:affine:y} from \cref{eq:x:y:affine:x} to observe that \begin{align*} (\forall i \in \ensuremath{\operatorname{I}} ) \quad \innp{\nabla f(q_{i }) -\nabla f(q_{0}), x-y} =0, \end{align*} which, by \cref{eq:L}, yields that $x-y\in L^{\perp}$. \end{proof} \begin{lemma} \label{lemma:CCSsingletonOrempty} The set $ \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S}) ) \cap \overleftarrow{E}_{f}(\mathcal{S})$ is either empty or a singleton. \end{lemma} \begin{proof} Suppose that $ \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S}) ) \cap \overleftarrow{E}_{f}(\mathcal{S}) \neq \varnothing$ and that $\{ x_{1}, x_{2}\} \subseteq \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S}) ) \cap \overleftarrow{E}_{f}(\mathcal{S})$. Then \begin{align} \label{eq:theorem:CCSsingletonOrempty:L} x_{1}- x_{2} \in \ensuremath{\operatorname{aff} \,} ( \nabla f(\mathcal{S}) ) - \ensuremath{\operatorname{aff} \,} ( \nabla f(\mathcal{S}) ) \stackrel{\cref{eq:L}}{=} L. \end{align} On the other hand, by \cref{lemma:x-y:Lperp}, $x_{1}- x_{2} \in L^{\perp}$. Combining this with \cref{eq:theorem:CCSsingletonOrempty:L}, we obtain that $x_{1}- x_{2} \in L \cap L^{\perp} =\{0\}$. Therefore, the claimed result is true. \end{proof} We are now ready to define backward Bregman circumcenters and pseudo-circumcenters. \begin{definition} \label{defn:CCS:Bregman:left} Let $ \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f)$ be the set of all nonempty finite subsets of $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$.
For every $K \in \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) $, set \begin{empheq}[box=\mybluebox]{equation*} \overleftarrow{E}_{f}(K):= \{ p \in \ensuremath{\operatorname{dom}} f ~:~ \{ \ensuremath{\operatorname{D}}_{f} (p,y) ~:~ y \in K \} \text{ is a singleton}\}. \end{empheq} \begin{enumerate} \item \label{defn:CCS:Bregman:left:} Define the \emph{backward Bregman circumcenter operator $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}$ w.r.t.\,$f$} as \begin{empheq}[box=\mybluebox]{equation*} \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f} : \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) \to 2^{\mathcal{H}} : K \mapsto \ensuremath{\operatorname{aff} \,} (K )\cap \overleftarrow{E}_{f}( K ). \end{empheq} \item \label{defn:CCS:Bregman:left:ps} Define the \emph{backward Bregman pseudo-circumcenter operator $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}$ w.r.t.\,$f$} as \begin{empheq}[box=\mybluebox]{equation*} \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} : \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) \to 2^{\mathcal{H}} : K \mapsto \ensuremath{\operatorname{aff} \,} ( \nabla f (K ))\cap \overleftarrow{E}_{f}( K ). \end{empheq} \end{enumerate} In particular, for every $K \in \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) $, we call the elements of $ \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(K) $ and $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} (K)$ the backward Bregman circumcenter and the backward Bregman pseudo-circumcenter of $K$, respectively. \end{definition} By \cref{lemma:CCSsingletonOrempty}, we know that $\left( \forall K \in \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) \right)$ $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(K)$ is either a singleton or an empty set.
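As a numerical illustration of \cref{defn:CCS:Bregman:left} and \cref{lemma:xinEp1pm}, the following sketch (a non-authoritative check assuming NumPy; the data are those of \cref{exam:Back:Rn}\cref{exam:Back:Rn:negative:} below) verifies that the claimed backward Bregman circumcenter for the negative entropy is Bregman-equidistant from $\mathcal{S}$ and satisfies the linear characterization $\innp{\nabla f(q_{i}) -\nabla f(q_{0}), p} = \beta_{i}$.

```python
import numpy as np

# Negative entropy f(x) = sum_i x_i ln(x_i) - x_i (Fact fact:examplef),
# with grad f(x) = (ln x_1, ..., ln x_n) on ]0, +oo[^n.
def f(x):
    return float(np.sum(x * np.log(x) - x))

def grad_f(x):
    return np.log(x)

def bregman(u, v):
    # D_f(u, v) = f(u) - f(v) - <grad f(v), u - v>
    return f(u) - f(v) - grad_f(v) @ (u - v)

# S = {(1,1,1), (1,2,1), (1,1,2)} and the circumcenter claimed in the example.
q0, q1, q2 = map(np.array, [(1.0, 1.0, 1.0), (1.0, 2.0, 1.0), (1.0, 1.0, 2.0)])
alpha = (1.0 - np.log(2.0)) / np.log(2.0)
p = q0 + alpha * (q1 - q0) + alpha * (q2 - q0)

# p lies in the Bregman-equidistance set of eq:Ef ...
dists = [bregman(p, q) for q in (q0, q1, q2)]
assert np.allclose(dists, dists[0])

# ... equivalently (Lemma lemma:xinEp1pm):
# <grad f(q_i) - grad f(q_0), p> = beta_i, with beta_i as in eq:b.
for qi in (q1, q2):
    beta_i = (grad_f(qi) @ qi - f(qi)) - (grad_f(q0) @ q0 - f(q0))
    assert np.isclose((grad_f(qi) - grad_f(q0)) @ p, beta_i)
```

Since $p$ also lies in $\ensuremath{\operatorname{aff} \,} (\mathcal{S})$ by construction, the check confirms that $p$ is the backward Bregman circumcenter of $\mathcal{S}$ for this instance.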
\subsection*{Existence of backward Bregman circumcenters} \begin{proposition} \label{prop:formualCCS:Pleft:matrixEQ} Set \begin{align*} &A:= \begin{pmatrix} \innp{\nabla f (q_{1}) -\nabla f (q_{0}), q_{1} -q_{0} } & \ldots& \innp{\nabla f (q_{1}) -\nabla f (q_{0}), q_{m} -q_{0} }\\ \vdots & \ldots & \vdots\\ \innp{\nabla f (q_{m}) -\nabla f (q_{0}), q_{1} -q_{0} } & \ldots& \innp{\nabla f (q_{m}) -\nabla f (q_{0}), q_{m} -q_{0} }\\ \end{pmatrix}, \\ &B:= \begin{pmatrix} \beta_{1} - \innp{\nabla f (q_{1}) -\nabla f (q_{0}),q_{0} }\\ \vdots\\ \beta_{m} - \innp{\nabla f (q_{m}) -\nabla f (q_{0}),q_{0} } \end{pmatrix} \text{ and}\\ & \Lambda := \left\{ q_{0} +\sum^{m}_{i=1} \alpha_{i} \left( q_{i} -q_{0} \right) ~:~ (\alpha_{1}, \ldots, \alpha_{m} )^{\intercal} \in \mathbb{R}^{m} \text{ s.t. } A\alpha =B \right\}. \end{align*} Then $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \Lambda \cap \ensuremath{\operatorname{dom}} f$. \end{proposition} \begin{proof} According to \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:} and \cref{lemma:xinEp1pm}, $p \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) $ if and only if there exists $(\alpha_{1}, \ldots, \alpha_{m} )^{\intercal} \in \mathbb{R}^{m}$ such that $ p =q_{0} +\sum^{m}_{j=1} \alpha_{j} \left( q_{j} -q_{0} \right) \in \ensuremath{\operatorname{dom}} f $ and \begin{align*} (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{\nabla f(q_{i}) -\nabla f(q_{0}), q_{0} +\sum^{m}_{j=1} \alpha_{j} \left( q_{j} -q_{0} \right) } = \beta_{i}, \end{align*} which entails the desired result. \end{proof} \begin{theorem} \label{theorem:formualCCS:Pleft} Assume that $\overleftarrow{E}_{f}(\mathcal{S}) \neq \varnothing$.
Let $z \in \overleftarrow{E}_{f}(\mathcal{S})$. Then the following hold. \begin{enumerate} \item\label{theorem:formualCCS:Pleft:EQ} $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) =\ensuremath{\operatorname{aff} \,} (\mathcal{S}) \cap \ensuremath{\operatorname{dom}} f \cap (z+L^{\perp})$. \item \label{theorem:formualCCS:Pleft:P} Suppose that $\mathcal{H} =\mathbb{R}^{n}$, that $f$ is Legendre such that $\ensuremath{\operatorname{dom}} f^{*}$ is open, that $f$ allows forward Bregman projections, that $\ensuremath{\operatorname{aff} \,} (\mathcal{S}) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, and that $\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) $ is closed and convex such that $(\forall i \in \ensuremath{\operatorname{I}})$ $\pm \left( \nabla f(q_{i }) -\nabla f(q_{0}) \right)\in \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) )-\overleftarrow{\ensuremath{\operatorname{P}}}^{f^{*}}_{\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) } (\nabla f (z)) $ $($e.g.\,$ \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) $ is a closed affine subspace$)$. Then $ \overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (z) \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})$. \end{enumerate} \end{theorem} \begin{proof} \cref{theorem:formualCCS:Pleft:EQ}: This is clear from \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:} and \cref{lemma:x-y:Lperp}. 
\cref{theorem:formualCCS:Pleft:P}: Note that, by \cref{eq:SBack} and \cref{fact:charac:PrightCf}, $\varnothing \neq \mathcal{S} \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \cap \ensuremath{\operatorname{aff} \,} (\mathcal{S})$ and $\overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (z) \in \ensuremath{\operatorname{aff} \,} (\mathcal{S}) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$. Hence, in view of \cref{theorem:formualCCS:Pleft:EQ} above and \cref{eq:L}, it remains to show that \begin{align} \label{eq:theorem:formualCCS:Pleft:P:betai} (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{\nabla f( q_{i }) -\nabla f(q_{0}), \overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (z) -z} =0. \end{align} Employ \cref{fact:nablaf:nablaf*:id} and $\ensuremath{\operatorname{aff} \,} (\mathcal{S}) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$ to deduce that $\nabla f \left( \ensuremath{\operatorname{aff} \,} (\mathcal{S}) \right) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$.
Then apply \cref{fact:charac:PleftCf} with $f =f^{*}$, $C=\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) $, and $y=\nabla f (z)$ to obtain that $\overleftarrow{\ensuremath{\operatorname{P}}}^{f^{*}}_{\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) } (\nabla f (z)) \in \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$ and that \begin{align*} & \innp{\nabla f^{*}(\nabla f (z)) -\nabla f^{*} \left( \overleftarrow{\ensuremath{\operatorname{P}}}^{f^{*}}_{\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) } (\nabla f (z)) \right), \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) -\overleftarrow{\ensuremath{\operatorname{P}}}^{f^{*}}_{\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) } (\nabla f (z)) } \leq 0\\ \Leftrightarrow &~ \innp{z -\nabla f^{*} \left( \overleftarrow{\ensuremath{\operatorname{P}}}^{f^{*}}_{\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) } (\nabla f (z)) \right), \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) -\overleftarrow{\ensuremath{\operatorname{P}}}^{f^{*}}_{\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) } (\nabla f (z)) } \leq 0 \quad (\text{by \cref{fact:nablaf:nablaf*:id}})\\ \Leftrightarrow &~ \innp{z -\overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )}(z), \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) -\overleftarrow{\ensuremath{\operatorname{P}}}^{f^{*}}_{\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) } (\nabla f (z)) } \leq 0, \quad (\text{by \cite[Proposition~7.1]{BWYY2009}}) \end{align*} which, combined with the assumption that $\pm \left( \nabla f(q_{i }) -\nabla f(q_{0}) \right)\in \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}
)-\overleftarrow{\ensuremath{\operatorname{P}}}^{f^{*}}_{\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) } (\nabla f (z)) $, forces \cref{eq:theorem:formualCCS:Pleft:P:betai}. \end{proof} Notice that the condition \enquote{$ \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) $ is a closed affine subspace} in \cref{theorem:formualCCS:Pleft}\cref{theorem:formualCCS:Pleft:P} does not imply that $ \nabla f$ is affine. Because in both \cref{exam:Back:Rn}\cref{exam:Back:Rn:negative:} and \cref{exam:Back:Rn:FD:} below the points of $\mathcal{S}$ share the same first coordinate and $\nabla f$ is surjective, $ \nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) $ is a closed affine subspace in both cases. \begin{example}\label{exam:Back:Rn} Suppose that $\mathcal{H} =\mathbb{R}^{n}$. Denote by $(\forall x \in \mathcal{H} )$ $x= (x_{i})^{n}_{i=1} $. Then the following statements hold. \begin{enumerate} \item \label{exam:Back:Rn:negative} Suppose that $(\forall x \in \left[0, +\infty\right[^{n} )$ $f(x) =\sum^{n}_{i=1} x_{i} \ln( x_{i}) -x_{i}$. \begin{enumerate} \item \label{exam:Back:Rn:negative:xy} Suppose that $\mathcal{S}:= \{x,y \} \subseteq \left]0, +\infty\right[^{n}$ such that $x$ and $y$ are distinct. Let $p \in \mathcal{H} $. Then $ p \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) $ if and only if $p=\alpha x + (1-\alpha) y \in \left[0, +\infty\right[^{n}$, where $\alpha:= \frac{\sum^{n}_{i=1} y_{i}-x_{i} -y_{i} \ln \left(\frac{y_{i}}{x_{i}} \right) }{\sum^{n}_{i=1}(x_{i}-y_{i}) \ln \left(\frac{y_{i}}{x_{i}} \right) }$. \item \label{exam:Back:Rn:negative:xyz} Suppose that $\mathcal{S}:= \{x,y,z\} \subseteq \left]0, +\infty\right[^{n}$ such that $x$, $y$, and $ z$ are pairwise distinct. Let $p \in \mathcal{H} $.
Then $ p \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) $ if and only if $p=x + \alpha (y-x) +\beta (z-x) \in \left[0, +\infty\right[^{n}$, where $(\alpha, \beta)^{\intercal} \in \mathbb{R}^{2}$ solves the following equation \begin{align*} \begin{pmatrix} \sum^{n}_{i=1}(y_{i} -x_{i}) \ln(\frac{y_{i}}{x_{i}}) & \sum^{n}_{i=1}(z_{i} -x_{i}) \ln(\frac{y_{i}}{x_{i}})\\ \sum^{n}_{i=1}(y_{i} -x_{i}) \ln(\frac{z_{i}}{x_{i}}) & \sum^{n}_{i=1}(z_{i} -x_{i}) \ln(\frac{z_{i}}{x_{i}}) \end{pmatrix} \begin{pmatrix} \alpha\\ \beta \end{pmatrix} = \begin{pmatrix} \sum^{n}_{i=1}y_{i} -x_{i} -x_{i} \ln(\frac{y_{i}}{x_{i}})\\ \sum^{n}_{i=1} z_{i} -x_{i} -x_{i} \ln(\frac{z_{i}}{x_{i}}) \end{pmatrix}. \end{align*} \item \label{exam:Back:Rn:negative:} Suppose that $\mathcal{H}=\mathbb{R}^{3}$ and that $\mathcal{S}:= \{(1,1,1), (1,2,1), (1,1,2)\} $. Then \begin{align*} \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \left\{ (1,1,1) + \frac{1-\ln 2}{\ln 2} (0,1,0) +\frac{1-\ln 2}{\ln 2} (0,0,1) \right\}. \end{align*} \end{enumerate} \item \label{exam:Back:Rn:FD} Suppose that $(\forall x \in \left[0, 1\right]^{n} )$ $f(x) =\sum^{n}_{i=1} x_{i} \ln( x_{i}) + (1-x_{i}) \ln (1-x_{i})$. \begin{enumerate} \item \label{exam:Back:Rn:FD:xyz} Suppose that $\mathcal{S}:= \{x,y,z\} \subseteq \left]0, 1\right[^{n} $ such that $x$, $y$, and $ z$ are pairwise distinct. Let $p \in \mathcal{H} $.
Then $ p \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) $ if and only if $p=x + \alpha (y-x) +\beta (z-x) \in \left[ 0, 1\right]^{n} $, where $(\alpha, \beta)^{\intercal}$ solves the following equation \begin{align*} \begin{pmatrix} \sum^{n}_{i=1} (y_{i} -x_{i} ) \ln \left( \frac{x_{i}}{y_{i}} \frac{1-y_{i}}{1-x_{i}} \right) & \sum^{n}_{i=1} (z_{i} -x_{i} ) \ln \left( \frac{x_{i}}{y_{i}} \frac{1-y_{i}}{1-x_{i}} \right)\\ \sum^{n}_{i=1} (y_{i} -x_{i} ) \ln \left( \frac{x_{i}}{z_{i}} \frac{1-z_{i}}{1-x_{i}} \right) & \sum^{n}_{i=1} (z_{i} -x_{i} ) \ln \left( \frac{x_{i}}{z_{i}} \frac{1-z_{i}}{1-x_{i}} \right) \end{pmatrix} \begin{pmatrix} \alpha\\ \beta \end{pmatrix} = \begin{pmatrix} \sum^{n}_{i=1} \ln \left( \frac{1-y_{i}}{1-x_{i}} \right) -x_{i} \ln \left( \frac{x_{i}}{y_{i}} \frac{1-y_{i}}{1-x_{i}} \right)\\ \sum^{n}_{i=1} \ln \left( \frac{1-z_{i}}{1-x_{i}} \right) -x_{i} \ln \left( \frac{x_{i}}{z_{i}} \frac{1-z_{i}}{1-x_{i}} \right) \end{pmatrix}. \end{align*} \item \label{exam:Back:Rn:FD:} Suppose that $\mathcal{H}=\mathbb{R}^{3}$ and that $\mathcal{S}:= \{(\frac{1}{4},\frac{1}{4},\frac{1}{4}), (\frac{1}{4},\frac{1}{2},\frac{1}{4}), (\frac{1}{4},\frac{1}{4},\frac{1}{2})\} $. Then \begin{align*} \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \left\{ \left(\frac{1}{4},\frac{1}{4},\frac{1}{4}\right) +\frac{3\ln 3 -4 \ln 2 }{\ln 3} \left(0,\frac{1}{4},0\right) +\frac{3\ln 3 -4 \ln 2 }{\ln 3} \left(0,0,\frac{1}{4}\right) \right\}. \end{align*} \end{enumerate} \end{enumerate} \end{example} \begin{proof} \cref{exam:Back:Rn:negative}: According to \cite[Proposition~3.5]{BB1997Legendre} and \cref{defn:BregmanDistance}, \begin{align}\label{eq:exam:back:Rn} \left(\forall \{u,v\} \subseteq \left[0, +\infty\right[^{n} \right) \quad D_{f} (u,v) =\sum^{n}_{i=1} u_{i} (\ln (u_{i}) -\ln (v_{i})) +v_{i} -u_{i} .
\end{align} \cref{exam:Back:Rn:negative:xy}: This is clear from \cref{eq:exam:back:Rn} and \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:}. \cref{exam:Back:Rn:negative:xyz}: In view of \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:}, $ p \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) $ if and only if $p=x + \alpha (y-x) +\beta (z-x) \in \left[0, +\infty\right[^{n}$ such that \begin{align*} & \ensuremath{\operatorname{D}}_{f} (p, x) =\ensuremath{\operatorname{D}}_{f} (p, y)=\ensuremath{\operatorname{D}}_{f} (p, z) \\ \Leftrightarrow & \begin{cases} \sum^{n}_{i=1} p_{i} \ln (\frac{y_{i}}{x_{i}}) = \sum^{n}_{i=1}(y_{i} -x_{i})\\ \sum^{n}_{i=1} p_{i} \ln (\frac{z_{i}}{x_{i}}) = \sum^{n}_{i=1}(z_{i} -x_{i}) \end{cases}\\ \Leftrightarrow & \begin{cases} \alpha\sum^{n}_{i=1}(y_{i} -x_{i}) \ln(\frac{y_{i}}{x_{i}}) +\beta \sum^{n}_{i=1}(z_{i} -x_{i}) \ln(\frac{y_{i}}{x_{i}}) = \sum^{n}_{i=1}\left(y_{i} -x_{i} -x_{i} \ln(\frac{y_{i}}{x_{i}}) \right)\\ \alpha \sum^{n}_{i=1}(y_{i} -x_{i}) \ln(\frac{z_{i}}{x_{i}}) + \beta \sum^{n}_{i=1}(z_{i} -x_{i}) \ln(\frac{z_{i}}{x_{i}}) = \sum^{n}_{i=1}\left(z_{i} -x_{i} -x_{i} \ln(\frac{z_{i}}{x_{i}}) \right) \end{cases}, \end{align*} which yields the required result. \cref{exam:Back:Rn:negative:}: The desired result follows clearly from \cref{exam:Back:Rn:negative:xyz} above.
\cref{exam:Back:Rn:FD:xyz}: Employing \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:}, \cite[Proposition~3.5]{BB1997Legendre} and \cref{eq:exam:back:R:negative:xyz:D}, we observe that in this case, $ p \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) $ if and only if $p=x + \alpha (y-x) +\beta (z-x) \in \left[0, 1\right]^{n}$ such that \begin{align*} &D_{f} (p,x) =D_{f} (p,y)=D_{f} (p,z) \\ \Leftrightarrow &\begin{cases} \sum^{n}_{i=1} p_{i} \ln (\frac{p_{i}}{x_{i}}) + (1 -p_{i})\ln (\frac{1-p_{i}}{1-x_{i}}) = \sum^{n}_{i=1} p_{i} \ln (\frac{p_{i}}{y_{i}}) + (1 -p_{i})\ln (\frac{1-p_{i}}{1-y_{i}}) \\ \sum^{n}_{i=1} p_{i} \ln (\frac{p_{i}}{x_{i}}) + (1 -p_{i})\ln (\frac{1-p_{i}}{1-x_{i}}) = \sum^{n}_{i=1} p_{i} \ln (\frac{p_{i}}{z_{i}}) + (1 -p_{i})\ln (\frac{1-p_{i}}{1-z_{i}}) \end{cases}\\ \Leftrightarrow &\begin{cases} \sum^{n}_{i=1} p_{i} \ln \left( \frac{x_{i}}{y_{i}} \frac{1-y_{i}}{1-x_{i}} \right) = \sum^{n}_{i=1} \ln \left( \frac{1-y_{i}}{1-x_{i}} \right) \\ \sum^{n}_{i=1} p_{i} \ln \left( \frac{x_{i}}{z_{i}} \frac{1-z_{i}}{1-x_{i}} \right) = \sum^{n}_{i=1} \ln \left( \frac{1-z_{i}}{1-x_{i}} \right) \end{cases}\\ \Leftrightarrow &\begin{cases} \alpha \sum^{n}_{i=1} (y_{i} -x_{i} ) \ln \left( \frac{x_{i}}{y_{i}} \frac{1-y_{i}}{1-x_{i}} \right) + \beta \sum^{n}_{i=1} (z_{i} -x_{i} ) \ln \left( \frac{x_{i}}{y_{i}} \frac{1-y_{i}}{1-x_{i}} \right) = \sum^{n}_{i=1} \ln \left( \frac{1-y_{i}}{1-x_{i}} \right) -x_{i} \ln \left( \frac{x_{i}}{y_{i}} \frac{1-y_{i}}{1-x_{i}} \right)\\ \alpha \sum^{n}_{i=1} (y_{i} -x_{i} ) \ln \left( \frac{x_{i}}{z_{i}} \frac{1-z_{i}}{1-x_{i}} \right) + \beta \sum^{n}_{i=1} (z_{i} -x_{i} ) \ln \left( \frac{x_{i}}{z_{i}} \frac{1-z_{i}}{1-x_{i}} \right) = \sum^{n}_{i=1} \ln \left( \frac{1-z_{i}}{1-x_{i}} \right) -x_{i} \ln \left( \frac{x_{i}}{z_{i}} \frac{1-z_{i}}{1-x_{i}} \right) \end{cases}, \end{align*} which implies the required result. 
\cref{exam:Back:Rn:FD:}: This follows from \cref{exam:Back:Rn:FD:xyz} and some easy algebra. \end{proof} \begin{example} \label{exam:back:R} Suppose that $\mathcal{H}=\mathbb{R}$. Then the following statements hold. \begin{enumerate} \item \label{exam:back:R:negative} Suppose that $(\forall x \in \left[0, +\infty\right[ )$ $f(x) =x \ln( x) -x$, and that $\mathcal{S}:= \{x,y,z\} \subseteq \left]0, +\infty\right[$ such that $x$, $y$, and $ z$ are pairwise distinct. Then $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \varnothing$. \item \label{exam:back:R:FD} Suppose that $(\forall x \in \left[0, 1\right] )$ $f(x) =x \ln( x) + (1-x) \ln (1-x)$, and that $\mathcal{S}:= \{x,y,z\} \subseteq \left]0, 1\right[$ such that $x$, $y$, and $ z$ are pairwise distinct. Then $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \varnothing$. \end{enumerate} \end{example} \begin{proof} \cref{exam:back:R:negative}: Let $p \in \left[0, +\infty\right[ $. According to \cref{eq:exam:back:Rn} with $n=1$, it is easy to see that \begin{align} \label{eq:exam:back:R:negative:xyz} D_{f} (p,x) =D_{f} (p,y)=D_{f} (p,z) \Leftrightarrow p =\frac{y-x}{\ln (y) -\ln (x)} =\frac{z-x}{\ln (z) -\ln (x)}. \end{align} Set $g:\left]0, +\infty\right[ \to \mathbb{R} : t \mapsto \frac{t-x}{\ln(t) -\ln(x)} $. Then clearly, $\left(\forall t \in \left]0, +\infty\right[ \smallsetminus \{x\} \right) $ $g(t) \in \mathbb{R}_{++}$, and \begin{align*} \left(\forall t \in \left]0, +\infty\right[ \smallsetminus \{x\} \right) \quad g'(t) =\frac{(\frac{x}{t} -\ln \frac{x}{t}) -1}{( \ln (t) -\ln (x))^{2} } >0, \end{align*} which implies that $g$ is strictly increasing. This, combined with the assumption that $x$, $y$, and $z$ are pairwise distinct, implies that there is no $p \in \left[0, +\infty\right[$ satisfying \cref{eq:exam:back:R:negative:xyz}. 
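As a quick numerical check of this monotonicity argument (with values chosen purely for illustration): if $x=1$, $y=2$, and $z=4$, then \begin{align*} \frac{y-x}{\ln (y) -\ln (x)} = \frac{1}{\ln 2} \approx 1.44 \quad \text{while} \quad \frac{z-x}{\ln (z) -\ln (x)} = \frac{3}{2\ln 2} \approx 2.16, \end{align*} so indeed no single $p$ satisfies \cref{eq:exam:back:R:negative:xyz}. 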
Therefore, by \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:}, $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \varnothing$. \cref{exam:back:R:FD}: Now, in view of \cref{defn:BregmanDistance}, \begin{align} \label{eq:exam:back:R:negative:xyz:D} (\forall \{u,v\} \subseteq \left]0, 1\right[ ) \quad D_{f} (u,v) =u \ln (\frac{u}{v}) + (1 -u)\ln (\frac{1-u}{1-v}), \end{align} and \begin{subequations}\label{eq:exam:back:R:FD} \begin{align} &D_{f} (p,x) =D_{f} (p,y)=D_{f} (p,z) \\ \Leftrightarrow &p = \left(\ln \frac{1-y}{1-x} \right) \left( \ln \frac{1-y}{1-x} -\ln \frac{y}{x} \right)^{-1} =\left(\ln \frac{1-z}{1-x} \right) \left( \ln \frac{1-z}{1-x} -\ln \frac{z}{x} \right)^{-1}. \end{align} \end{subequations} Set $g:\left]0, 1\right[ \to \mathbb{R} : t \mapsto \left(\ln \frac{1-t}{1-x} \right) \left( \ln \frac{1-t}{1-x} -\ln \frac{t}{x} \right)^{-1} $. Then some algebra shows that $(\forall t \in\left]0, 1\right[ \smallsetminus \{x\} )$ $g(t) \in \left[ 0,1 \right]$ and $g'(t) >0$, which, combined with the assumption that $x$, $y$, and $z$ are pairwise distinct, implies that there exists no $p \in \left[0, 1\right] $ satisfying \cref{eq:exam:back:R:FD}. Therefore, by \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:}, $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \varnothing$. 
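As a quick numerical check of the monotonicity of $g$ here (with values chosen purely for illustration): if $x=\frac{1}{4}$, $y=\frac{1}{2}$, and $z=\frac{3}{4}$, then \begin{align*} g(y) = \left(\ln \tfrac{2}{3} \right) \left( \ln \tfrac{2}{3} -\ln 2 \right)^{-1} \approx 0.37 \quad \text{while} \quad g(z) = \left(\ln \tfrac{1}{3} \right) \left( \ln \tfrac{1}{3} -\ln 3 \right)^{-1} = \tfrac{1}{2}, \end{align*} so the two expressions in \cref{eq:exam:back:R:FD} indeed differ. 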
\end{proof} \subsection*{Explicit formula of backward Bregman pseudo-circumcenters} \begin{proposition} \label{prop:formualCCS:matrixEQ} Set \begin{align*} &A:= \begin{pmatrix} \ensuremath{\operatorname{I}}nnp{\nabla f (q_{1}) -\nabla f (q_{0}), \nabla f (q_{1}) -\nabla f (q_{0}) } & \ldots& \ensuremath{\operatorname{I}}nnp{\nabla f (q_{1}) -\nabla f (q_{0}), \nabla f (q_{m}) -\nabla f (q_{0}) }\\ \vdots & \ldots & \vdots\\ \ensuremath{\operatorname{I}}nnp{\nabla f (q_{m}) -\nabla f (q_{0}), \nabla f (q_{1}) -\nabla f (q_{0})} & \ldots& \ensuremath{\operatorname{I}}nnp{\nabla f (q_{m}) -\nabla f (q_{0}), \nabla f (q_{m}) -\nabla f (q_{0}) }\\ \end{pmatrix},\\ &B:= \begin{pmatrix} \beta_{1} - \ensuremath{\operatorname{I}}nnp{\nabla f (q_{1}) -\nabla f (q_{0}), \nabla f (q_{0}) }\\ \vdots\\ \beta_{m} - \ensuremath{\operatorname{I}}nnp{\nabla f (q_{m}) -\nabla f (q_{0}), \nabla f (q_{0}) } \end{pmatrix} \text{ and } \\ & \Omega := \left\{ \nabla f (q_{0}) +\sum^{m}_{i=1} \alpha_{i} \left( \nabla f (q_{i}) -\nabla f (q_{0})\right) ~:~ (\alpha_{1}, \ldots, \alpha_{m} )^{\ensuremath{\operatorname{int}}rcal} \in \mathbb{R}^{m} \text{ s.t. } A\alpha =B \right\}. \end{align*} Then $ \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) = \Omega \cap \ensuremath{\operatorname{dom}} f$. \end{proposition} \begin{proof} The proof is similar to that of \cref{prop:formualCCS:Pleft:matrixEQ}, but this time we exploit \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:ps} and \cref{lemma:xinEp1pm}. \end{proof} \begin{example}\label{exam:Backpseudo:Rn} Suppose that $\mathcal{H} =\mathbb{R}^{n}$. Denote by $(\forall x \in \mathcal{H} )$ $x= (x_{i})^{n}_{i=1} $. Suppose that $(\forall x \in \left[0, +\infty\right[^{n} )$ $f(x) =\sum^{n}_{i=1} x_{i} \ln( x_{i}) -x_{i}$, and that $\mathcal{S}:= \{x,y \} \subseteq \left]0, +\infty\right[^{n}$ such that $x$ and $y$ are distinct. Let $p \in \mathcal{H} $. 
Then $ p \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) $ if and only if $p=\Big( \alpha \ln (x_{1}) + (1-\alpha) \ln (y_{1}) , \ldots ,\alpha \ln (x_{n}) + (1-\alpha) \ln (y_{n}) \Big)^{\ensuremath{\operatorname{int}}rcal} \in \left[0, +\infty\right[^{n}$, where $\alpha:= \frac{ \sum^{n}_{i=1} x_{i}-y_{i} + \ln (y_{i}) \ln \left(\frac{y_{i}}{x_{i}} \right) }{\sum^{n}_{i=1} \left(\ln \frac{y_{i}}{x_{i}} \right)^{2 }}$. \end{example} \begin{proof} This is clear by substituting $m=1$, $q_{1}=x$ and $q_{0} =y$ in \cref{prop:formualCCS:matrixEQ}. \end{proof} \begin{theorem} \label{theorem:formualCCS} Suppose that $\overleftarrow{E}_{f}(\mathcal{S}) \neq \varnothing$. Let $x \in \overleftarrow{E}_{f}(\mathcal{S}) $. Then the following statements hold. \begin{enumerate} \item \label{theorem:formualCCS:id} $ \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) = \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S}) ) \cap \overleftarrow{E}_{f}(\mathcal{S}) = \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S}) ) \cap \ensuremath{\operatorname{dom}} f \cap \left( x+L^{\perp} \right)$. \item \label{theorem:formualCCS:EucP} Suppose that $ \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) )} (x ) \in \ensuremath{\operatorname{dom}} f$ $($e.g., $\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) ) \subseteq \ensuremath{\operatorname{dom}} f$ or $\ensuremath{\operatorname{dom}} f =\mathcal{H}$$)$. Then $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})=\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) )} (x) $. \end{enumerate} \end{theorem} \begin{proof} \cref{theorem:formualCCS:id}: This is a direct result of \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:ps} and \cref{lemma:x-y:Lperp}. 
\cref{theorem:formualCCS:EucP}: Notice that $\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) )} (x) \in \ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) ) \cap \ensuremath{\operatorname{dom}} f$. Furthermore, \begin{align*} x - \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) )} (x) & = x - \ensuremath{\operatorname{P}}_{\nabla f(q_{0}) +L} (x) \quad (\text{by \cref{eq:L}})\\ &= x-\nabla f(q_{0}) - \ensuremath{\operatorname{P}}_{ L} (x-\nabla f(q_{0}) ) \quad (\text{by \cite[Proposition~3.19]{BC2017}})\\ &= \ensuremath{\operatorname{P}}_{ L^{\perp}} (x-\nabla f(q_{0}) ) \in L^{\perp} . \quad (\text{by \cite[Theorem~5.8]{D2012}}) \end{align*} Altogether, $\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) )} (x) \in \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S}) ) \cap \ensuremath{\operatorname{dom}} f \cap \left( x+L^{\perp} \right)$, which, combined with \cref{theorem:formualCCS:id} above and \cref{lemma:CCSsingletonOrempty}, yields the required result. \end{proof} \begin{corollary} \label{cor:equi:Ef:CCfS} Suppose that $ \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) )} (\overleftarrow{E}_{f}(\mathcal{S}) ) \subseteq \ensuremath{\operatorname{dom}} f$ $($e.g., $\ensuremath{\operatorname{dom}} f =\mathcal{H}$$)$. Then $\overleftarrow{E}_{f}(\mathcal{S}) \neq \varnothing$ if and only if $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) \neq \varnothing$. \end{corollary} \begin{proof} It is easy to verify the equivalence by \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:ps} and \cref{theorem:formualCCS}\cref{theorem:formualCCS:EucP}. \end{proof} The following result provides an explicit formula for backward Bregman pseudo-circumcenters. 
\begin{theorem} \label{thm:unique:LinIndpPformula} Suppose that $\nabla f(q_{0}), \nabla f(q_{1}), \ldots, \nabla f(q_{m})$ are affinely independent and that $\ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S}) ) \subseteq \ensuremath{\operatorname{dom}} f$. Then $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) \neq \varnothing$. Moreover, \begin{align} \label{eq:thm:unique:LinIndpPformula} \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) = \nabla f(q_{0})+\alpha_{1} \left(\nabla f(q_{1})-\nabla f(q_{0}) \right)+ \cdots+ \alpha_{m} \left(\nabla f(q_{m})-\nabla f(q_{0})\right), \end{align} where \begin{align*} \begin{pmatrix} \alpha_{1} \\ \vdots\\ \alpha_{m } \\ \end{pmatrix} := G( \nabla f(q_{1})-\nabla f(q_{0}),\ldots,\nabla f(q_{m})-\nabla f(q_{0}))^{-1} \begin{pmatrix} \beta_{1} -\innp{\nabla f(q_{0}),\nabla f(q_{1})-\nabla f(q_{0})}\\ \vdots\\ \beta_{m}-\innp{\nabla f(q_{0}),\nabla f(q_{m})-\nabla f(q_{0})} \\ \end{pmatrix}. \end{align*} \end{theorem} \begin{proof} According to the assumption and \cite[Fact~2.8]{BOyW2018}, $\nabla f(q_{1})-\nabla f(q_{0}), \ldots, \nabla f(q_{m})-\nabla f(q_{0}) $ are linearly independent. Then by \cite[Fact~2.13]{BOyW2018}, the Gram matrix $G\left(\nabla f(q_{1})-\nabla f(q_{0}), \ldots, \nabla f(q_{m})-\nabla f(q_{0}) \right)$ is invertible. Therefore, the required result follows immediately from \cref{prop:formualCCS:matrixEQ}. 
\end{proof} \section{Forward Bregman circumcenters of finite sets}\label{sec:ForwardBregmancircumcenters} Throughout this section, suppose that $f \in \Gamma_{0} (\mathcal{H}) $ with $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$ and that $f$ is G\^ateaux differentiable on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, and that \begin{empheq}[box=\mybluebox]{equation} \label{eq:Sfor} \mathcal{S}:= \{p_{0}, p_{1}, \ldots, p_{m} \} \subseteq \ensuremath{\operatorname{dom}} f \text{ and } \mathcal{S} \text{ is nonempty}. \end{empheq} Denote by $\ensuremath{\operatorname{I}} := \{1, \ldots, m\}$, \begin{empheq}[box=\mybluebox]{equation} \label{eq:Eright} \overrightarrow{E}_{f}(\mathcal{S} ):= \{ y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f~:~ \ensuremath{\operatorname{D}}_{f} (p_{0},y) =\ensuremath{\operatorname{D}}_{f} (p_{1},y) =\cdots =\ensuremath{\operatorname{D}}_{f} (p_{m},y) \}, \end{empheq} \begin{align} \label{eq:M} M :=\ensuremath{\operatorname{aff} \,} ( \mathcal{S} ) - \ensuremath{\operatorname{aff} \,} ( \mathcal{S} ) = \ensuremath{{\operatorname{span} \,}} \{ p_{1} -p_{0}, \ldots, p_{m} -p_{0} \}, \end{align} and \begin{align} \label{eq:eta} (\forall i \in \ensuremath{\operatorname{I}} )\quad \eta_{i}:= f(p_{i}) -f (p_{0}). \end{align} \subsection*{Forward Bregman (pseudo-)circumcenter operators} The following lemmas are helpful in this section. \begin{lemma} \label{lemma:xinEp1pm:right} We have $ y \in \overrightarrow{E}_{f}(\mathcal{S} ) \Leftrightarrow \left[ y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \text{ and } (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{\nabla f(y) , p_{i }-p_{0}} = \eta_{i} \right]$ . \end{lemma} \begin{proof} Let $y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$. Then $f(y) \in \mathbb{R}$. 
Hence, for every $\{ i, j \} \subseteq \ensuremath{\operatorname{I}}$, by \cref{defn:BregmanDistance}, \begin{subequations}\label{eq:DfpixDfpjx} \begin{align} \ensuremath{\operatorname{D}}_{f} (p_{i}, y) =\ensuremath{\operatorname{D}}_{f} ( p_{j}, y) & \Leftrightarrow f(p_{i}) - f(y) -\innp{\nabla f(y), p_{i} -y} = f(p_{j}) - f(y) -\innp{\nabla f(y), p_{j} -y} \\ & \Leftrightarrow \innp{\nabla f(y) , p_{j} - p_{i} } = f(p_{j}) -f(p_{i}). \end{align} \end{subequations} Therefore, \begin{align*} y\in \overrightarrow{E}_{f}(\mathcal{S} ) \stackrel{\cref{eq:Eright}}{\Leftrightarrow} & (\forall i \in \ensuremath{\operatorname{I}}) \quad \ensuremath{\operatorname{D}}_{f} (p_{i},y) =\ensuremath{\operatorname{D}}_{f} (p_{0},y ) \\ \stackrel{\cref{eq:DfpixDfpjx}}{\Leftrightarrow} & (\forall i \in\ensuremath{\operatorname{I}}) \quad \innp{\nabla f(y) , p_{i} -p_{0} } = f(p_{i}) -f(p_{0})\\ \stackrel{\cref{eq:eta}}{\Leftrightarrow} & (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{\nabla f(y) , p_{i} -p_{0} } =\eta_{i}. \end{align*} \end{proof} \begin{lemma} \label{lemma:CCSpsF} Suppose that $p_{0}, p_{1}, \ldots, p_{m}$ are affinely independent. Set \begin{align*} q: = p_{0}+ (p_{1}-p_{0},\ldots,p_{m}-p_{0}) G( p_{1}-p_{0},\ldots,p_{m}-p_{0})^{-1} \begin{pmatrix} \eta_{1}-\innp{p_{0},p_{1}-p_{0}}\\ \vdots\\ \eta_{m}-\innp{p_{0},p_{m}-p_{0}} \\ \end{pmatrix}. \end{align*} Then $ \left\{ x \in \ensuremath{\operatorname{aff} \,} ( \{p_{0}, p_{1}, \ldots, p_{m} \} ) ~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \innp{x , p_{i} -p_{0}} = \eta_{i} \right\} = \{q\}$. \end{lemma} \begin{proof} Apply \cref{thm:LinIndpPformula} with $z_{0}=p_{0}$ and $(\forall i \in \ensuremath{\operatorname{I}})$ $z_{i}=p_{i}$ and $\lambda_{i}=\eta_{i}$ to deduce the desired result. \end{proof} \begin{lemma} \label{lemma:nablafyi:in:Mperp} Let $x$ and $y$ be in $ \overrightarrow{E}_{f}(\mathcal{S} )$. Then $\nabla f (y) -\nabla f (x) \in M^{\perp}$. 
\end{lemma} \begin{proof} Notice that $\{ x,y\} \subseteq \overrightarrow{E}_{f}(\mathcal{S})$ implies that $\{x,y\} \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, and that, by \cref{lemma:xinEp1pm:right}, \begin{subequations} \begin{align} (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{\nabla f(x) , p_{i }-p_{0}} = \eta_{i}, \label{lemma:nablafyi:in:Mperp:y1}\\ (\forall i \in \ensuremath{\operatorname{I}}) \quad \innp{\nabla f(y) , p_{i}-p_{0}} = \eta_{i}. \label{lemma:nablafyi:in:Mperp:y2} \end{align} \end{subequations} Subtract \cref{lemma:nablafyi:in:Mperp:y1} from \cref{lemma:nablafyi:in:Mperp:y2} to deduce that $ (\forall i \in \ensuremath{\operatorname{I}}) $ $ \innp{\nabla f(y ) -\nabla f(x), p_{i }-p_{0}} = 0$, which, combined with the definition of $M$ in \cref{eq:M}, yields the desired result. \end{proof} \begin{lemma} \label{lem:CCOfrS} Suppose that $f$ is Legendre. Then $\left( \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} \mathcal{S} \right) \right) \cap \overrightarrow{E}_{f}(\mathcal{S}) $ is either an empty set or a singleton. \end{lemma} \begin{proof} Suppose that $ \left( \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} \mathcal{S} \right) \right) \cap \overrightarrow{E}_{f}(\mathcal{S}) \neq \varnothing$ and that $\{y_{1}, y_{2} \} \subseteq \left( \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} \mathcal{S} \right) \right) \cap \overrightarrow{E}_{f}(\mathcal{S}) $. Then, by \cref{fact:nablaf:nablaf*:id} and \cref{eq:M}, \begin{align} \label{eq:lem:CCOfrS:M} \nabla f (y_{1} )-\nabla f (y_{2}) \in \ensuremath{\operatorname{aff} \,} (\mathcal{S}) -\ensuremath{\operatorname{aff} \,} (\mathcal{S}) =M. \end{align} Because $\{y_{1}, y_{2} \} \subseteq \overrightarrow{E}_{f}(\mathcal{S}) $, \cref{lemma:nablafyi:in:Mperp} implies that \begin{align}\label{eq:lem:CCOfrS:Mperp} \nabla f (y_{1} )-\nabla f (y_{2}) \in M^{\perp}. 
\end{align} Combine \cref{eq:lem:CCOfrS:M} and \cref{eq:lem:CCOfrS:Mperp} to deduce that $\nabla f (y_{1} )-\nabla f (y_{2}) \in M \cap M^{\perp} =\{0\}$, that is, \begin{align}\label{eq:lem:CCOfrS:EQ} \nabla f (y_{1} )= \nabla f (y_{2}). \end{align} Apply $\nabla f^{*} $ to both sides of \cref{eq:lem:CCOfrS:EQ} and utilize \cref{fact:nablaf:nablaf*:id} to obtain that $y_{1} =y_{2}$. \end{proof} We are now ready to define forward Bregman circumcenters and pseudo-circumcenters. \begin{definition} \label{defn:CCS:Bregman:forward} Let $\mathcal{P} (\ensuremath{\operatorname{dom}} f)$ be the set of all nonempty subsets of $\ensuremath{\operatorname{dom}} f$ containing finitely many elements. For every $K \in \mathcal{P} (\ensuremath{\operatorname{dom}} f)$, set \begin{empheq}[box=\mybluebox]{equation*} \overrightarrow{E}_{f}(K):= \{ q \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f~:~ \{ \ensuremath{\operatorname{D}}_{f} (x,q) ~:~ x \in K \} \text{ is a singleton} \}. \end{empheq} \begin{enumerate} \item \label{defn:CCS:Bregman:forward:} Define the \emph{forward Bregman circumcenter operator w.r.t.\,$f$} as \begin{empheq}[box=\mybluebox]{equation*} \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f} : \mathcal{P} (\ensuremath{\operatorname{dom}} f) \to 2^{\mathcal{H}} : K \mapsto \ensuremath{\operatorname{aff} \,} ( K ) \cap \overrightarrow{E}_{f}(K). \end{empheq} \item \label{defn:CCS:Bregman:forward:ps} Define the \emph{forward Bregman pseudo-circumcenter operator w.r.t.\,$f$} as \begin{empheq}[box=\mybluebox]{equation*} \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} : \mathcal{P} (\ensuremath{\operatorname{dom}} f) \to 2^{\mathcal{H}} : K \mapsto \left( \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} K \right) \right) \cap \overrightarrow{E}_{f} (K). 
\end{empheq} \end{enumerate} In particular, for every $K \in \mathcal{P} (\ensuremath{\operatorname{dom}} f)$, we call the element in $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(K) $ and $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(K)$ forward Bregman circumcenter and forward Bregman pseudo-circumcenter of $K$, respectively. \end{definition} In view of \cref{lem:CCOfrS}, if $f$ is Legendre, then $\left( \forall K \in \mathcal{P} (\ensuremath{\operatorname{dom}} f) \right)$ $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(K)$ is either a singleton or an empty set. \subsection*{Existence of forward Bregman circumcenters} \begin{theorem} \label{theorem:forwardCCS} Suppose that $f$ is Legendre and that $\overrightarrow{E}_{f}(\mathcal{S} ) \neq \varnothing$. Let $y \in \overrightarrow{E}_{f}(\mathcal{S} )$. Then the following hold. \begin{enumerate} \item \label{theorem:forwardCCS:EQ} $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \ensuremath{\operatorname{aff} \,} (\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \cap \nabla f^{*} \left( \nabla f(y) +M^{\perp} \right)$. \item \label{theorem:forwardCCS:P} Suppose that $ \ensuremath{\operatorname{aff} \,} (\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$ $($e.g., $\ensuremath{\operatorname{dom}} f =\mathcal{H}$ or $ \mathcal{S} \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$$)$. Then $ \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,}(\mathcal{S})}(y) \in \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})$. \end{enumerate} \end{theorem} \begin{proof} \cref{theorem:forwardCCS:EQ}: This is clear from \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:}, \cref{lemma:nablafyi:in:Mperp}, and \cref{fact:nablaf:nablaf*:id}. 
\cref{theorem:forwardCCS:P}: Because $y \in \overrightarrow{E}_{f}(\mathcal{S}) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, $\ensuremath{\operatorname{aff} \,} (\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$, and $\ensuremath{\operatorname{aff} \,} (\mathcal{S})$ is a closed affine subspace, apply \cref{theor:affine:character:P}\cref{theor:affine:character:PleftCf} with $U=\ensuremath{\operatorname{aff} \,} (\mathcal{S})$ to obtain that \begin{subequations} \begin{align} &\overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,} (\mathcal{S})} (y) \in \ensuremath{\operatorname{aff} \,} (\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f, \text{ and} \label{eq:theorem:leftP:in:rightCCS:P}\\ & \ensuremath{\operatorname{I}}nnp{\nabla f(y) -\nabla f \left( \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,} (\mathcal{S})} (y) \right), \ensuremath{\operatorname{aff} \,} (\mathcal{S})-\overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,} (\mathcal{S})} (y) } = 0.\label{eq:theorem:leftP:in:rightCCS:innp} \end{align} \end{subequations} Employ $\overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,} (\mathcal{S})} (y) \in \ensuremath{\operatorname{aff} \,} (\mathcal{S}) $ and \cref{eq:M} to see that $ \ensuremath{\operatorname{aff} \,} (\mathcal{S})-\overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,} (\mathcal{S})} (y) =M$. 
Combine this with \cref{eq:theorem:leftP:in:rightCCS:innp} and \cref{fact:nablaf:nablaf*:id} to observe that \begin{align*} \nabla f(y) -\nabla f \left( \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,} (\mathcal{S})} (y) \right) \in M^{\perp} \Leftrightarrow \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,} (\mathcal{S})} (y) \in ( \nabla f)^{-1} \left( \nabla f(y) +M^{\perp} \right) =\nabla f^{*} \left( \nabla f(y) +M^{\perp} \right), \end{align*} which, combined with \cref{eq:theorem:leftP:in:rightCCS:P} and \cref{theorem:forwardCCS:EQ}, yields the desired result. \end{proof} \begin{example}\label{exam:forward:Rn} Suppose that $\mathcal{H} =\mathbb{R}^{n}$. Denote by $(\forall x \in \mathcal{H} )$ $x= (x_{i})^{n}_{i=1} $. Then the following statements hold. \begin{enumerate} \item \label{exam:forward:Rn:negative} Suppose that $(\forall x \in \left[0, +\infty\right[^{n} )$ $f(x) =\sum^{n}_{i=1} x_{i} \ln( x_{i}) -x_{i}$. \begin{enumerate} \item \label{exam:forward:Rn:negative:xyz} Suppose that $\mathcal{S}:= \{x,y,z\} \subseteq \left]0, +\infty\right[^{n}$ such that $x$, $y$ and $ z$ are pairwise distinct. Let $p \in \mathcal{H} $. Then $ p \in \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) $ if and only if $p=x + \alpha (y-x) +\beta (z-x) \in \left]0, +\infty\right[^{n}$, where $(\alpha, \beta)^{\ensuremath{\operatorname{int}}rcal} \in \mathbb{R}^{2}$ solves the following system of equations \begin{align*} \begin{cases} \sum^{n}_{i=1} (y_{i}-x_{i}) \ln\left( x_{i} +\alpha (y_{i} -x_{i}) +\beta (z_{i} -x_{i}) \right)=f(y) -f(x)\\ \sum^{n}_{i=1} (z_{i}-x_{i}) \ln\left( x_{i} +\alpha (y_{i} -x_{i}) +\beta (z_{i} -x_{i}) \right)=f(z) -f(x) \end{cases}. \end{align*} \item \label{exam:forward:Rn:negative:} Suppose that $\mathcal{S}:= \{(1,1,1), (1,2,1), (1,1,2)\} $. 
Then \begin{align*} \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \left\{ (1,1,1) + \left( \frac{4}{\rm e} -1\right) (0,1,0) +\left( \frac{4}{\rm e} -1\right) (0,0,1) \right\}. \end{align*} \end{enumerate} \item \label{exam:forward:Rn:FD} Suppose that $(\forall x \in \left[0, 1\right]^{n} )$ $f(x) =\sum^{n}_{i=1} x_{i} \ln( x_{i}) + (1-x_{i}) \ln (1-x_{i})$. \begin{enumerate} \item \label{exam:forward:Rn:FD:xyz} Suppose that $\mathcal{S}:= \{x,y,z\} \subseteq \left]0, 1\right[^{n} $ such that $x$, $y$ and $ z$ are pairwise distinct. Let $p \in \mathcal{H} $. Then $ p \in \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) $ if and only if $p=x + \alpha (y-x) +\beta (z-x) \in \left]0, 1\right[^{n}$, where $(\alpha, \beta)^{\ensuremath{\operatorname{int}}rcal} \in \mathbb{R}^{2}$ solves the following system of equations \begin{align*} \begin{cases} \sum^{n}_{i=1} (y_{i}-x_{i}) \ln\left( \frac{ x_{i} +\alpha (y_{i} -x_{i}) +\beta (z_{i} -x_{i}) }{1-x_{i} -\alpha (y_{i} -x_{i}) -\beta (z_{i} -x_{i}) } \right)=f(y) -f(x)\\ \sum^{n}_{i=1} (z_{i}-x_{i}) \ln\left( \frac{ x_{i} +\alpha (y_{i} -x_{i}) +\beta (z_{i} -x_{i}) }{1-x_{i} -\alpha (y_{i} -x_{i}) -\beta (z_{i} -x_{i}) } \right)=f(z) -f(x) \end{cases}. \end{align*} \item \label{exam:forward:Rn:FD:} Suppose that $\mathcal{S}:= \{(\frac{1}{4},\frac{1}{4},\frac{1}{4}), (\frac{1}{4},\frac{1}{2},\frac{1}{4}), (\frac{1}{4},\frac{1}{4},\frac{1}{2})\} $. Then \begin{align*} \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})= \left\{ \left(\frac{1}{4},\frac{1}{4},\frac{1}{4}\right) +\frac{21}{43} \left(0,\frac{1}{4},0\right)+\frac{21}{43} \left(0,0,\frac{1}{4}\right) \right\}= \left\{ \left(\frac{1}{4},\frac{16}{43},\frac{16}{43}\right) \right\}. 
\end{align*} \end{enumerate} \end{enumerate} \end{example} \begin{proof} \cref{exam:forward:Rn:negative:xyz}: As a consequence of \cref{defn:BregmanDistance}, \begin{align*} D_{f} (x,p) =D_{f} (y,p)=D_{f} (z,p) &\Leftrightarrow \begin{cases} \innp{\nabla f(p), y-x} =f(y) -f(x)\\ \innp{\nabla f(p),z-x} =f(z) -f(x) \end{cases}\\ & \Leftrightarrow \begin{cases} \sum^{n}_{i=1} (y_{i}-x_{i}) \ln(p_{i}) =f(y) -f(x)\\ \sum^{n}_{i=1} (z_{i}-x_{i}) \ln(p_{i}) =f(z) -f(x) \end{cases}\\ & \Leftrightarrow \begin{cases} \sum^{n}_{i=1} (y_{i}-x_{i}) \ln\left( x_{i} +\alpha (y_{i} -x_{i}) +\beta (z_{i} -x_{i}) \right)=f(y) -f(x)\\ \sum^{n}_{i=1} (z_{i}-x_{i}) \ln\left( x_{i} +\alpha (y_{i} -x_{i}) +\beta (z_{i} -x_{i}) \right)=f(z) -f(x) \end{cases}. \end{align*} \cref{exam:forward:Rn:negative:}: This follows directly from \cref{exam:forward:Rn:negative:xyz} above; indeed, here $f(y) -f(x) =f(z) -f(x) =2\ln 2 -1$, so $\ln (p_{2}) =\ln (p_{3}) =2\ln 2 -1$ and hence $p_{2} =p_{3} =\frac{4}{\rm e}$. \cref{exam:forward:Rn:FD}: The proof is similar to that of \cref{exam:forward:Rn:negative} and is omitted here. \end{proof} \begin{example} \label{exam:forward:R} Suppose that $\mathcal{H}=\mathbb{R}$. Then the following statements hold. \begin{enumerate} \item \label{exam:forward:R:negative} Suppose that $(\forall x \in \left[0, +\infty\right[ )$ $f(x) =x \ln( x) -x$. \begin{enumerate} \item \label{exam:forward:R:negative:xy} Suppose $\mathcal{S}:= \{x,y\} \subseteq \left[0, +\infty\right[$ with $x \neq y$. Then $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \left\{ \exp \left( \frac{y \ln(y) -x\ln(x) +x -y}{y-x} \right) \right\}$. \item \label{exam:forward:R:negative:xyz} Suppose $\mathcal{S}:= \{x,y,z\} \subseteq \left[0, +\infty\right[$ such that $x$, $y$, and $ z$ are pairwise distinct. Then $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \varnothing$. \end{enumerate} \item \label{exam:forward:R:FD} Suppose that $(\forall x \in \left[0, 1\right] )$ $f(x) =x \ln( x) + (1-x) \ln (1-x)$. 
\begin{enumerate} \item \label{exam:forward:R:FD:xy} Suppose $\mathcal{S}:= \{x,y\} \subseteq \left[0, 1\right] $ with $x \neq y$. Then \begin{align*} \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \left\{ \left( \exp \left( \frac{ x\ln(x) + (1-x) \ln(1-x) - y\ln(y) -(1-y) \ln(1-y) }{y-x} \right) +1 \right)^{-1} \right\} . \end{align*} \item \label{exam:forward:R:FD:xyz} Suppose $\mathcal{S}:= \{x,y,z\} \subseteq \left[0, 1\right] $ such that $x$, $y$, and $ z$ are pairwise distinct. Then $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \varnothing$. \end{enumerate} \end{enumerate} \end{example} \begin{proof} This comes from \cref{defn:BregmanDistance} and some easy calculus and algebra. In particular, the proofs of \cref{exam:forward:R:negative:xyz} and \cref{exam:forward:R:FD:xyz} are similar to that of \cref{exam:back:R}. \end{proof} \subsection*{Explicit formula of forward Bregman pseudo-circumcenters} \begin{proposition} \label{prop:psuCCS:forward:matrix} Set \begin{align*} &A:= \begin{pmatrix} \ensuremath{\operatorname{I}}nnp{ p_{1} -p_{0}, p_{1} -p_{0} } & \ldots& \ensuremath{\operatorname{I}}nnp{p_{1} -p_{0},p_{m} -p_{0}}\\ \vdots & \ldots & \vdots\\ \ensuremath{\operatorname{I}}nnp{p_{m} -p_{0}, p_{1} -p_{0}} & \ldots& \ensuremath{\operatorname{I}}nnp{p_{m} -p_{0}, p_{m} -p_{0} }\\ \end{pmatrix} \text{ and } B:= \begin{pmatrix} \eta_{1} - \ensuremath{\operatorname{I}}nnp{p_{0}, p_{1}-p_{0}}\\ \vdots\\ \eta_{m} - \ensuremath{\operatorname{I}}nnp{p_{0}, p_{m}-p_{0}} \end{pmatrix}. \end{align*} Then $ \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) = \left\{ q \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f ~:~ \exists (\alpha_{1}, \ldots, \alpha_{m} )^{\ensuremath{\operatorname{int}}rcal} \in \mathbb{R}^{m} \text{ s.t. } \nabla f (q) =p_{0} +\sum^{m}_{i=1} \alpha_{i} \left( p_{i} -p_{0} \right) \text{ and } A\alpha =B \right\}$. 
\end{proposition} \begin{proof} The proof is similar to that of \cref{prop:formualCCS:Pleft:matrixEQ}, but this time we utilize \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:ps} and \cref{lemma:xinEp1pm:right}. \end{proof} \begin{theorem} \label{theorem:psuCCS:forward} Suppose that $f$ is Legendre and $ \overrightarrow{E}_{f}(\mathcal{S}) \neq \varnothing$. Let $y \in\overrightarrow{E}_{f}(\mathcal{S}) $. Then the following hold. \begin{enumerate} \item \label{theorem:psuCCS:forward:cap} $ \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) = \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} (\mathcal{S})\right) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \cap \nabla f ^{*}(\nabla f (y) + M^{\perp})$. \item \label{theorem:psuCCS:forward:P} Suppose that $ \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} \left( \nabla f (y) \right) \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$ $($e.g., $\ensuremath{\operatorname{aff} \,}( \mathcal{S} ) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$ or $\ensuremath{\operatorname{dom}} f^{*} =\mathcal{H}$$)$. Then $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})= \nabla f^{*} \left( \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (\nabla f(y)) \right)$. \end{enumerate} \end{theorem} \begin{proof} \cref{theorem:psuCCS:forward:cap}: This is clear from \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:ps}, \cref{fact:nablaf:nablaf*:id} and \cref{lemma:nablafyi:in:Mperp}. 
\cref{theorem:psuCCS:forward:P}: Exploit \cref{fact:nablaf:nablaf*:id} to observe that $ \nabla f^{*} \left( \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (\nabla f(y)) \right) \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \cap \nabla f^{*} (\ensuremath{\operatorname{aff} \,}(\mathcal{S}))$. Moreover, \begin{align*} \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (\nabla f(y)) -\nabla f(y) &= \ensuremath{\operatorname{P}}_{p_{0} +M} (\nabla f(y)) - \nabla f(y) \quad (\text{by \cref{eq:M}})\\ &= p_{0} + \ensuremath{\operatorname{P}}_{ M} (\nabla f(y)-p_{0} ) -\nabla f(y) \quad (\text{by \cite[Proposition~3.19]{BC2017}})\\ &=- \ensuremath{\operatorname{P}}_{ M^{\perp}} (\nabla f(y)-p_{0} ) \in M^{\perp}, \quad (\text{by \cite[Theorem~5.8]{D2012}}) \end{align*} which, due to \cref{eq:M}, implies that $\nabla f^{*} \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (\nabla f(y)) \in \nabla f ^{*}(\nabla f (y) + M^{\perp})$. Altogether, the desired result follows from \cref{theorem:psuCCS:forward:cap} above. \end{proof} \begin{corollary} \label{cor:equi:Ef:CCfS:right} Suppose that $f$ is Legendre and that $ \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} \circ \nabla f \left( \overrightarrow{E}_{f}(\mathcal{S}) \right) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$ $($e.g., $\ensuremath{\operatorname{dom}} f^{*} =\mathcal{H}$$)$. Then $\overrightarrow{E}_{f}(\mathcal{S}) \neq \varnothing$ if and only if $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) \neq \varnothing$. \end{corollary} \begin{proof} This follows easily from \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:ps} and \cref{theorem:psuCCS:forward}\cref{theorem:psuCCS:forward:P}. \end{proof} The following result provides an explicit formula for forward Bregman pseudo-circumcenters. 
\begin{theorem} \label{thm:LinIndpPformula:TpseudoCCS} Suppose that $f$ is Legendre, that $p_{0}, p_{1}, \ldots, p_{m}$ are affinely independent and that $\ensuremath{\operatorname{aff} \,} ( \mathcal{S}) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$. Then $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) \neq \varnothing$. Moreover, \begin{align} \label{EQ:thm:LinIndpPformula:TpseudoCCS:formula} \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})= \nabla f^{*} \left( p_{0}+\alpha_{1}(p_{1}-p_{0} )+ \cdots+ \alpha_{m}(p_{m}-p_{0}) \right), \end{align} where \begin{align} \label{eq:thm:LinIndpPformula:TpseudoCCS} \begin{pmatrix} \alpha_{1} \\ \vdots\\ \alpha_{m} \\ \end{pmatrix} := G( p_{1}-p_{0} ,\ldots,p_{m}-p_{0} )^{-1} \begin{pmatrix} \eta_{1} -\innp{p_{0}, p_{1}-p_{0} }\\ \vdots\\ \eta_{m}-\innp{p_{0}, p_{m}-p_{0}} \\ \end{pmatrix}. \end{align} \end{theorem} \begin{proof} Denote by $\bar{z} :=p_{0}+\alpha_{1}(p_{1}-p_{0} )+ \cdots+ \alpha_{m}(p_{m}-p_{0})$, where $(\alpha_{1}, \ldots, \alpha_{m})^{\intercal}$ is defined in \cref{eq:thm:LinIndpPformula:TpseudoCCS} above. According to \cref{lemma:CCSpsF}, \begin{align} \label{eq:thm:LinIndpPformula:TpseudoCCS:S} \left\{ x \in \ensuremath{\operatorname{aff} \,} ( \mathcal{S}) ~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \innp{x , p_{i} -p_{0}} = \eta_{i} \right\} = \{\bar{z}\}. \end{align} Notice that, by \cref{fact:nablaf:nablaf*:id}, $\bar{z} \in \ensuremath{\operatorname{aff} \,} ( \mathcal{S}) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$ implies that $\nabla f^{*} (\bar{z}) \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$ and $\bar{z} =\nabla f \left( \nabla f^{*} (\bar{z}) \right)$.
Because $\nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} ( \mathcal{S}) \right) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f = \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} ( \mathcal{S}) \right)$, applying \cref{fact:nablaf:nablaf*:id} and \cref{eq:thm:LinIndpPformula:TpseudoCCS:S}, we obtain that \begin{align}\label{eq:thm:LinIndpPformula:TpseudoCCS:y} \left\{ y \in \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} ( \mathcal{S}) \right) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \innp{\nabla f (y) , p_{i} -p_{0}} = \eta_{i} \right\} = \{ \nabla f^{*} (\bar{z}) \}. \end{align} On the other hand, due to \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:ps} and \cref{lemma:xinEp1pm:right}, \begin{align}\label{eq:thm:LinIndpPformula:TpseudoCCS:CCS} \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})= \left\{ y \in \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} ( \mathcal{S}) \right) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \innp{\nabla f (y) , p_{i} -p_{0}} = \eta_{i} \right\}. \end{align} Clearly, \cref{eq:thm:LinIndpPformula:TpseudoCCS:y} and \cref{eq:thm:LinIndpPformula:TpseudoCCS:CCS} entail the required result. \end{proof} \section{Duality correspondence} \label{sec:Miscellaneous} Duality is the key for connections between backward and forward Bregman (pseudo-)circumcenters. Let $f \in \Gamma_{0} (\mathcal{H}) $ be G\^ateaux differentiable on $\ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$. Suppose that \begin{empheq}[box=\mybluebox]{equation*} \mathcal{S}:= \{q_{0}, q_{1}, \ldots, q_{m} \} \subseteq \ensuremath{\operatorname{dom}} f \text{ and } \mathcal{S} \text{ is nonempty}. 
\end{empheq} Set \begin{align} \label{eq:R:Eright} \overrightarrow{E}_{f}(\mathcal{S} ):= \{ y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f~:~ \ensuremath{\operatorname{D}}_{f} (q_{0},y) =\ensuremath{\operatorname{D}}_{f} (q_{1},y) =\cdots =\ensuremath{\operatorname{D}}_{f} (q_{m},y) \}. \end{align} If $\mathcal{S} \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, we set \begin{align} \label{eq:R:Ef} \overleftarrow{E}_{f}(\mathcal{S}):= \{ x \in \ensuremath{\operatorname{dom}} f ~:~ \ensuremath{\operatorname{D}}_{f} (x,q_{0}) =\ensuremath{\operatorname{D}}_{f} (x, q_{1}) =\cdots =\ensuremath{\operatorname{D}}_{f} (x, q_{m}) \}. \end{align} \begin{theorem} \label{theor:CCS:Rel} Suppose that $f$ is Legendre and that $\mathcal{S} \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$. Then the following hold. \begin{enumerate} \item \label{theor:CCS:Rel:E} $\overleftarrow{E}_{f}(\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f= \nabla f^{*} \left( \overrightarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S} \right)\right) \right)$. \item \label{theor:CCS:Rel:forwardE} $\overrightarrow{E}_{f}(\mathcal{S}) = \nabla f^{*} \left( \overleftarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S}\right) \right) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*} \right)$. \item \label{theor:CCS:Rel:CCS} $ \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} ( \mathcal{S} ) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f = \nabla f^{*} \left( \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f^{*}} \left( \nabla f (\mathcal{S} )\right) \right)$. 
\item \label{theor:CCS:Rel:forwardCCS} $ \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \ensuremath{\operatorname{aff} \,} (\mathcal{S}) \cap \nabla f^{*} \left( \overleftarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S}\right) \right) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*} \right) = \nabla f^{*} \left( \nabla f \left( \ensuremath{\operatorname{aff} \,} (\mathcal{S}) \right) \cap \overleftarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S}\right) \right) \right) $. \end{enumerate} \end{theorem} \begin{proof} \cref{theor:CCS:Rel:E}: Because $f$ is Legendre, due to \cref{def:Legendre} and \cref{fact:PropertD}, \begin{align} \label{prop:Rel:E:D} (\forall x \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) ~(\forall i \in \ensuremath{\operatorname{I}} \cup \{0\}) \quad \ensuremath{\operatorname{D}}_{f} (x,q_{i}) = D_{f^{*}}(\nabla f (q_{i}),\nabla f (x)). \end{align} On the other hand, by \cref{eq:R:Eright}, \begin{align}\label{prop:Rel:E:f*} \overrightarrow{E}_{f^{*}} \left( \nabla f \left(\mathcal{S} \right) \right)= \left\{ y \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \ensuremath{\operatorname{D}}_{f^{*}} \left(\nabla f (q_{0}),y \right) =\ensuremath{\operatorname{D}}_{f^{*}} \left(\nabla f (q_{i}),y \right) \right\}. \end{align} Let $x \in \mathcal{H}$. 
Then \begin{align*} & x \in \overleftarrow{E}_{f}(\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \\ \Leftrightarrow & x \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \text{ and } (\forall i \in \ensuremath{\operatorname{I}}) ~ \ensuremath{\operatorname{D}}_{f} (x,q_{0}) =\ensuremath{\operatorname{D}}_{f} (x, q_{i}) \quad (\text{by \cref{eq:R:Ef}})\\ \Leftrightarrow & x \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \text{ and } (\forall i \in \ensuremath{\operatorname{I}}) ~D_{f^{*}}(\nabla f (q_{0}),\nabla f (x)) =D_{f^{*}}(\nabla f (q_{i}),\nabla f (x)) \quad (\text{by \cref{prop:Rel:E:D}})\\ \Leftrightarrow & \nabla f (x) \in \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*} \text{ and } (\forall i \in \ensuremath{\operatorname{I}}) ~ D_{f^{*}}(\nabla f (q_{0}),\nabla f (x)) =D_{f^{*}}(\nabla f (q_{i}),\nabla f (x)) \quad (\text{by \cref{fact:nablaf:nablaf*:id}})\\ \Leftrightarrow & \nabla f (x) \in \overrightarrow{E}_{f^{*}} \left( \nabla f \left(\mathcal{S} \right) \right) \quad (\text{by \cref{prop:Rel:E:f*}})\\ \Leftrightarrow & x \in \nabla f^{*} \left( \overrightarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S} \right)\right) \right), \quad (\text{by \cref{fact:nablaf:nablaf*:id}}) \end{align*} which verifies \cref{theor:CCS:Rel:E}. \cref{theor:CCS:Rel:forwardE}: The proof is similar to that of \cref{theor:CCS:Rel:E}.
\cref{theor:CCS:Rel:CCS}: Based on \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:ps} and \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:ps}, \begin{align*} \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} ( \mathcal{S} ) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f&= \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S} ))\cap \overleftarrow{E}_{f}( \mathcal{S} ) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f\\ &= \ensuremath{\operatorname{aff} \,} ( \nabla f ( \mathcal{S} ))\cap \nabla f^{*} \left( \overrightarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S} \right)\right) \right) \quad (\text{by \cref{theor:CCS:Rel:E} above})\\ &= \nabla f^{*} \left( \nabla f \left( \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S} )) \right) \right) \cap \nabla f^{*} \left( \overrightarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S} \right)\right) \right) \quad \left(\text{\cref{fact:nablaf:nablaf*:id} implies $\nabla f^{*}\nabla f =\ensuremath{\operatorname{Id}}$}\right)\\ &= \nabla f^{*} \left( \nabla f \left( \ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S} )) \right) \cap \overrightarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S} \right)\right) \right) \quad \left(\text{\cref{fact:nablaf:nablaf*:id} states $\nabla f^{*}$ is bijective}\right)\\ &= \nabla f^{*} \left( \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f^{*}} ( \nabla f (\mathcal{S} )) \right). \end{align*} \cref{theor:CCS:Rel:forwardCCS}: This follows immediately from \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:}, \cref{fact:nablaf:nablaf*:id}, and \cref{theor:CCS:Rel:forwardE} above. \end{proof} \begin{corollary} \label{cor:CCS:Rel} Suppose that $f$ is Legendre with $\ensuremath{\operatorname{dom}} f$ and $\ensuremath{\operatorname{dom}} f^{*} $ being open. Then the following hold.
\begin{enumerate} \item \label{cor:CCS:Rel:E} $\overleftarrow{E}_{f}(\mathcal{S}) = \nabla f^{*} \left( \overrightarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S} \right)\right) \right)$. \item \label{cor:CCS:Rel:Eforward} $\overrightarrow{E}_{f}(\mathcal{S}) = \nabla f^{*} \left( \overleftarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S}\right) \right) \right)$. \item \label{cor:CCS:Rel:CCS} $ \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} ( \mathcal{S} ) = \nabla f^{*} \left( \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f^{*}} \left( \nabla f (\mathcal{S} )\right) \right)$. \end{enumerate} \begin{proof} Taking \cref{eq:R:Ef} and \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:ps} into account, we know that under the hypothesis that $\ensuremath{\operatorname{dom}} f$ and $\ensuremath{\operatorname{dom}} f^{*} $ are open, $\overleftarrow{E}_{f}(\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f = \overleftarrow{E}_{f}(\mathcal{S})$, $\overleftarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S}\right) \right) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*} = \overleftarrow{E}_{f^{*}}\left( \nabla f \left( \mathcal{S}\right) \right)$, and $ \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} ( \mathcal{S} ) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f = \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} ( \mathcal{S} ) $. Therefore, the result follows immediately from \cref{theor:CCS:Rel}. \end{proof} Although there are further generalizations of the classical circumcenter under Bregman distances, the following results suggest that they can likely be deduced from the Bregman (pseudo-)circumcenters defined in \cref{defn:CCS:Bregman:left,defn:CCS:Bregman:forward}. \begin{proposition} \label{prop:newCCS} Suppose that $f$ is Legendre.
Define \begin{align*} \stackrel{\mathlarger{\twoheadleftarrow}}{\ensuremath{\operatorname{C}}CO{}}_{f} : \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) \to 2^{\mathcal{H}} : \mathcal{S} \mapsto \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} \left( \nabla f (\mathcal{S}) \right) \right) \cap \overleftarrow{E}_{f}\left(\mathcal{S} \right). \end{align*} Then $\left( \forall \mathcal{S} \in \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) \right)$ $\stackrel{\mathlarger{\twoheadleftarrow}}{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \nabla f^{*} \left( \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f^{*}}\left( \nabla f (\mathcal{S}) \right) \right)$. \end{proposition} \begin{proof} Employing \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:}, \cref{fact:nablaf:nablaf*:id}, and \cref{theor:CCS:Rel}\cref{theor:CCS:Rel:E}, we observe that for every $\mathcal{S} \in \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) $, \begin{align*} \nabla f^{*} \left( \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f^{*}}\left( \nabla f (\mathcal{S}) \right) \right) &~=~ \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} \left( \nabla f (\mathcal{S}) \right) \cap \overrightarrow{E}_{f^{*}}\left( \nabla f (\mathcal{S}) \right) \right) \\ & ~=~ \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} \left( \nabla f (\mathcal{S}) \right) \right) \cap \nabla f^{*} \left( \overrightarrow{E}_{f^{*}}\left( \nabla f (\mathcal{S}) \right) \right)\\ & ~=~ \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} \left( \nabla f (\mathcal{S}) \right) \right) \cap \overleftarrow{E}_{f}(\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f\\ &= \nabla f^{*} \left( \ensuremath{\operatorname{aff} \,} \left( \nabla f (\mathcal{S}) \right) \right) \cap \overleftarrow{E}_{f}(\mathcal{S}) \\ & ~=~ \, 
\stackrel{\mathlarger{\twoheadleftarrow}}{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}). \end{align*} \end{proof} \begin{proposition} \label{prop:newCCSps} Suppose that $f$ is Legendre. Define \begin{align*} \stackrel{\mathlarger{\twoheadleftarrow}}{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} : \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) \to 2^{\mathcal{H}} : \mathcal{S} \mapsto \ensuremath{\operatorname{aff} \,} (\mathcal{S})\cap \overleftarrow{E}_{f^*}( \nabla f (\mathcal{S}) ). \end{align*} Then $\stackrel{\mathlarger{\twoheadleftarrow}}{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) = \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f^*}(\nabla f(\mathcal{S}))$. \end{proposition} \begin{proof} According to \cref{fact:nablaf:nablaf*:id}, $\left(\forall \mathcal{S} \in \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) \right)$ $ \mathcal{S}= \nabla f^* \left( \nabla f(\mathcal{S}) \right)$. Hence, by \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:ps}, we obtain that \begin{align*} \left(\forall \mathcal{S} \in \mathcal{P}( \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f) \right) \quad \stackrel{\mathlarger{\twoheadleftarrow}}{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f} (\mathcal{S}) = \ensuremath{\operatorname{aff} \,} \left( \nabla f^* ( \nabla f(\mathcal{S})) \right) \cap \overleftarrow{E}_{f^*}( \nabla f(\mathcal{S})) = \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f^*} (\nabla f(\mathcal{S})). \end{align*} \end{proof} \section{A comparison of classical circumcenters and Bregman circumcenters}\label{s:compare} Under simplified assumptions, we summarize results in Section~\ref{sec:ForwardBregmancircumcenters} and Section~\ref{sec:BackwardBregmancircumcenters} on the existence and uniqueness of backward and forward Bregman (pseudo-)circumcenters. \begin{corollary} The following assertions hold. 
\begin{enumerate} \item Suppose that $\mathcal{H} =\mathbb{R}^{n}$, that $f$ is Legendre such that $\ensuremath{\operatorname{dom}} f^{*}$ is open, that $\overleftarrow{E}_{f}(\mathcal{S}) \neq \varnothing$, that $f$ allows forward Bregman projections, that $\ensuremath{\operatorname{aff} \,} (\mathcal{S}) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f$, and that $\nabla f (\ensuremath{\operatorname{aff} \,} (\mathcal{S}) ) $ is a closed affine subspace. Then $\left( \forall z \in \overleftarrow{E}_{f}(\mathcal{S}) \right)$ $ \overrightarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (z) \in \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})$. $($See \cref{theorem:formualCCS:Pleft}\cref{theorem:formualCCS:Pleft:P}.$)$ \item Suppose that $\overleftarrow{E}_{f}(\mathcal{S}) \neq \varnothing$ and that $\mathcal{S} \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f $ and $\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) ) \subseteq \ensuremath{\operatorname{dom}} f$. Then $\left( \forall z \in \overleftarrow{E}_{f}(\mathcal{S}) \right)$ $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})=\ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}(\nabla f( \mathcal{S} ) )} (z) $. $($See \cref{theorem:formualCCS}\cref{theorem:formualCCS:EucP}.$)$ \item Suppose that $\mathcal{S} \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f $ and $\ensuremath{\operatorname{aff} \,} ( \nabla f (\mathcal{S}) ) \subseteq \ensuremath{\operatorname{dom}} f$, and that $\nabla f(q_{0}), \nabla f(q_{1}), \ldots, \nabla f(q_{m})$ are affinely independent. Then $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) $ uniquely exists and has the explicit formula \cref{eq:thm:unique:LinIndpPformula}.
$($See \cref{thm:unique:LinIndpPformula}.$)$ \item Suppose that $f$ is Legendre, that $ \ensuremath{\operatorname{aff} \,} (\mathcal{S}) \cap \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f \neq \varnothing$, and that $\overrightarrow{E}_{f}(\mathcal{S} ) \neq \varnothing$. Then $\left( \forall z \in \overrightarrow{E}_{f}(\mathcal{S} ) \right)$ $ \overleftarrow{\ensuremath{\operatorname{P}}}^{f}_{\ensuremath{\operatorname{aff} \,}(\mathcal{S})}(z) \in \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})$. $($See \cref{theorem:forwardCCS}\cref{theorem:forwardCCS:P}.$)$ \item Suppose that $f$ is Legendre, and that $\ensuremath{\operatorname{aff} \,}( \mathcal{S} ) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$ and $\overrightarrow{E}_{f}(\mathcal{S} ) \neq \varnothing$. Then $\left( \forall z \in \overrightarrow{E}_{f}(\mathcal{S} ) \right)$ $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})= \nabla f^{*} \left( \ensuremath{\operatorname{P}}_{\ensuremath{\operatorname{aff} \,}( \mathcal{S} )} (\nabla f(z)) \right)$. $($See \cref{theorem:psuCCS:forward}\cref{theorem:psuCCS:forward:P}.$)$ \item Suppose that $f$ is Legendre, that $\ensuremath{\operatorname{aff} \,} ( \mathcal{S}) \subseteq \ensuremath{\operatorname{int}} \ensuremath{\operatorname{dom}} f^{*}$, and that $q_{0}, q_{1}, \ldots, q_{m}$ are affinely independent. Then $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) $ uniquely exists and has the explicit formula \cref{EQ:thm:LinIndpPformula:TpseudoCCS:formula}. $($See \cref{thm:LinIndpPformula:TpseudoCCS}.$)$ \end{enumerate} \end{corollary} Note that backward (resp.\ forward) Bregman pseudo-circumcenters are nonempty whenever $\overleftarrow{E}_{f}(\mathcal{S}) \neq \varnothing$ (resp. $\overrightarrow{E}_{f}(\mathcal{S} ) \neq \varnothing$); see \cref{cor:equi:Ef:CCfS} (resp. \cref{cor:equi:Ef:CCfS:right}).
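The items above reduce a backward Bregman pseudo-circumcenter to a small linear system: expanding $\ensuremath{\operatorname{D}}_{f}(w,q_{0}) = \ensuremath{\operatorname{D}}_{f}(w,q_{i})$ shows that $\overleftarrow{E}_{f}(\mathcal{S})$ is the affine set $\innp{\nabla f(q_{i}) - \nabla f(q_{0}), w} = f(q_{0}) - f(q_{i}) - \innp{\nabla f(q_{0}), q_{0}} + \innp{\nabla f(q_{i}), q_{i}}$ for $i \in \ensuremath{\operatorname{I}}$, which is then intersected with $\ensuremath{\operatorname{aff} \,}(\nabla f(\mathcal{S}))$. The following Python sketch (ours, not part of the paper; the function name is hypothetical) implements this reduction, assuming the gradient images are affinely independent; for the energy $f = \frac{1}{2}\norm{\cdot}^{2}$ it recovers the classical circumcenter, consistent with \cref{cor:characterCCS:right}.

```python
import numpy as np

def backward_pseudo_circumcenter(S, f, grad_f):
    """Intersect aff(grad_f(S)) with the equal-Bregman-distance set E_f(S).

    Expanding D_f(w, q_0) = D_f(w, q_i) gives the affine equations
      <grad_f(q_i) - grad_f(q_0), w>
        = f(q_0) - f(q_i) - <grad_f(q_0), q_0> + <grad_f(q_i), q_i>,
    so the pseudo-circumcenter solves a small linear system.
    """
    g = [grad_f(q) for q in S]
    A = np.array([gi - g[0] for gi in g[1:]])   # rows span the direction of aff(grad_f(S))
    c = np.array([f(S[0]) - f(q) - g[0] @ S[0] + gi @ q
                  for q, gi in zip(S[1:], g[1:])])
    # Parametrize w = g0 + A^T t; the constraints A w = c give (A A^T) t = c - A g0.
    # A A^T is invertible when the gradient images are affinely independent.
    t = np.linalg.solve(A @ A.T, c - A @ g[0])
    return g[0] + A.T @ t

# Energy f = 1/2 ||.||^2: the routine reduces to the classical circumcenter.
S = [np.array(p) for p in [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]]
w = backward_pseudo_circumcenter(S, lambda x: 0.5 * x @ x, lambda x: x)
print(w)  # [1. 1.], equidistant from all three points
```

With the negative entropy $f(x) = \sum_i x_i \ln(x_i) - x_i$ and $\mathcal{S} = \{(1,1,1), (1,2,1), (1,1,2)\}$, the same routine returns $(0, 1/\ln 2, 1/\ln 2)$, matching the value reported in \cref{example:CCS}.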
Let $\ensuremath{\operatorname{C}}CO{}$ be the classical circumcenter operator defined in \cite[Definition~3.4]{BOyW2018} under the Euclidean distance, i.e., $f:= \frac{1}{2} \norm{\cdot}^{2}$. Then all backward and forward Bregman (pseudo-)circumcenters reduce to the classical circumcenter. \begin{corollary} \label{cor:characterCCS:right} Suppose that $ f :=\frac{1}{2} \norm{\cdot}^{2}$. Then the following statements hold. \begin{enumerate} \item \label{cor:characterCCS:right:eq}$\ensuremath{\operatorname{C}}CO (\mathcal{S}) =\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) =\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) =\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) =\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})$. \item \label{cor:characterCCS:right:norm} $\overleftarrow{E}_{f}(\mathcal{S}) = \overrightarrow{E}_{f}(\mathcal{S}) =\Big\{ y \in \mathcal{H} ~:~ (\forall i \in \ensuremath{\operatorname{I}}) ~ \norm{q_{i} -y} =\norm{q_{0} -y} \Big\}$. Consequently, $\ensuremath{\operatorname{C}}CO (\mathcal{S}) \neq \varnothing$ if and only if there exists $x \in \mathcal{H}$ such that $ \norm{x -q_{0}} =\norm{x -q_{1}} =\cdots =\norm{x -q_{m}}$, that is, $q_{0},q_{1} \ldots, q_{m}$ lie on a sphere with center $x \in \mathcal{H}$. \end{enumerate} \end{corollary} The following example illustrates the existence of the backward Bregman pseudo-circumcenter, while the classical circumcenter does not exist. \begin{example} \label{example:Existence} Suppose that $\mathcal{H} =\mathbb{R}^{3}$ and $\mathcal{S}:= \{(1,2,1), (0.5,1.5,0.5), (1.5,2.5,1.5)\} $. Denote by $(\forall x \in \mathbb{R}^{3} )$ $x= (x_{i})^{3}_{i=1} $. Suppose that $(\forall x \in \left]0, +\infty\right[^{3} )$ $f (x)= -\sum^{3}_{i=1} \ln (x_{i}) $, with $\ensuremath{\operatorname{dom}} f =\left]0, +\infty\right[^{3} $. Then the following assertions hold. 
\begin{enumerate} \item \label{example:Existence:no} The circumcenter under the Euclidean distance does not exist. \item \label{example:Existence:yes} $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) \approx ( 0.7641, 0.8744, 0.7641)$. \end{enumerate} \end{example} \begin{proof} Denote by $x:=(1,2,1)$, $y:= (0.5,1.5,0.5)$, and $z:=(1.5,2.5,1.5)$. \cref{example:Existence:no}: Because $z-x = - (y-x)$, i.e., $x, y$ and $z$ are affinely dependent, due to \cite[Theorem~8.1]{BOyW2018}, \cref{example:Existence:no} is true. \cref{example:Existence:yes}: Because $(\forall u \in \left]0, +\infty\right[^{3} )$ $\nabla f(u) = \left( -\frac{1}{u_{1}} , -\frac{1}{u_{2}}, -\frac{1}{u_{3}} \right)^{\intercal} $, we know that $\nabla f(x) =-(1, \frac{1}{2}, 1)^{\intercal} $, $\nabla f(y) =-(2, \frac{2}{3}, 2)^{\intercal} $, and $\nabla f(z) =-(\frac{2}{3}, \frac{2}{5}, \frac{2}{3})^{\intercal} $. It is easy to see that $\nabla f(x) $, $\nabla f(y) $, and $\nabla f(z) $ are affinely independent. Hence, the desired result follows easily from \cref{prop:formualCCS:matrixEQ}. \end{proof} \cref{fig:BackwardBregmanpseudoCCS} below illustrates \cref{example:Existence}. The red intersection point of the three green, blue and yellow Bregman balls, located on the affine subspace $\ensuremath{\operatorname{aff} \,} \nabla f (\mathcal{S})$ and labeled as ps-CC(S), is $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})$. However, the points $x, y$ and $z$ in \cref{fig:BackwardBregmanpseudoCCS} are collinear, which implies that the classical circumcenter does not exist.
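The dichotomy driving \cref{example:Existence} -- affinely dependent points whose gradient images are affinely independent -- admits a quick numerical check. The sketch below (ours, not part of the paper) uses the gradient $\nabla f(u) = -\left(\frac{1}{u_{1}}, \frac{1}{u_{2}}, \frac{1}{u_{3}}\right)^{\intercal}$ from the proof and compares the ranks of the corresponding difference vectors.

```python
import numpy as np

# Points of S from the example; f(u) = -sum(ln u_i), so grad f(u) = -1/u componentwise.
x, y, z = (np.array(p) for p in [(1.0, 2.0, 1.0), (0.5, 1.5, 0.5), (1.5, 2.5, 1.5)])
grad = lambda u: -1.0 / u

# x, y, z are affinely dependent: the difference vectors span only a line ...
primal_rank = np.linalg.matrix_rank(np.array([y - x, z - x]))
# ... while grad f(x), grad f(y), grad f(z) are affinely independent.
dual_rank = np.linalg.matrix_rank(np.array([grad(y) - grad(x), grad(z) - grad(x)]))
print(primal_rank, dual_rank)  # 1 2
```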
\begin{figure} \caption{Backward Bregman pseudo-circumcenter exists but the classical circumcenter does not.} \label{fig:BackwardBregmanpseudoCCS} \end{figure} The following \cref{example:CCS}\cref{example:CCS:Consist} and \cref{example:CCS}\cref{example:CCS:Different} show, respectively, the agreement and the difference of the backward and forward Bregman (pseudo-)circumcenters with the classical circumcenter. \begin{example} \label{example:CCS} Suppose that $\mathcal{H} =\mathbb{R}^{3}$ and $\mathcal{S}:= \{(1,1,1), (1,2,1), (1,1,2)\} $. Denote by $(\forall x \in \mathbb{R}^{3} )$ $x= (x_{i})^{3}_{i=1} $. We consider backward and forward Bregman (pseudo-)circumcenters of $\mathcal{S}$ w.r.t. the energy and negative entropy functions. \begin{enumerate} \item \label{example:CCS:Consist} If $ f :=\frac{1}{2} \norm{\cdot}^{2}$, in view of \cref{cor:characterCCS:right}\cref{cor:characterCCS:right:eq}, we have $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) =\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) =\overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) =\overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S})=\ensuremath{\operatorname{C}}CO{(\mathcal{S})} = (1, \frac{3}{2}, \frac{3}{2})^{\intercal}$. \item \label{example:CCS:Different} Suppose that $(\forall x \in \left[0, +\infty\right[^{3} )$ $f(x) =\sum^{3}_{i=1} x_{i} \ln( x_{i}) -x_{i}$. Then $f(1,1,1)=-3$, $f(1,2,1)=2\ln(2) -4$, $f(1,1,2)=2\ln(2) -4$. Moreover, $(\forall x \in \left]0, +\infty\right[^{3} )$ $\nabla f(x) = \left( \ln(x_{1}), \ln(x_{2}), \ln(x_{3}) \right)^{\intercal} $, and, in view of \cite[Proposition~13.30]{BC2017} and \cite[Example~6.5]{BB1997Legendre}, $(\forall x \in \mathbb{R}^{3} )$ $f^{*}(x) =\sum^{3}_{i=1} \exp (x_{i} )$ and $\nabla f^{*}(x) =\left( \exp (x_{1} ), \exp (x_{2} ), \exp (x_{3} ) \right)^{\intercal}$.
\begin{enumerate} \item \label{example:CCS:Different:a} \cref{exam:Back:Rn}\cref{exam:Back:Rn:negative:} and \cref{exam:forward:Rn}\cref{exam:forward:Rn:negative:} imply that \begin{align*} \overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \left\{ \left(1, \frac{1}{\ln 2} , \frac{1}{\ln 2} \right) \right\} \quad \text{and} \quad \overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S}) = \left\{ \left(1, \frac{4}{\rm e} , \frac{4}{\rm e} \right) \right\}. \end{align*} \item \cref{thm:unique:LinIndpPformula} and \cref{thm:LinIndpPformula:TpseudoCCS} imply that \begin{align*} \overleftarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) = \left(0, \frac{1}{\ln2}, \frac{1}{\ln2} \right) \quad \text{and} \quad \overrightarrow{\ensuremath{\operatorname{C}}CO{}}^{ps}_{f}(\mathcal{S}) =\left( {\rm e}, \frac{4}{\rm e}, \frac{4}{\rm e} \right). \end{align*} \end{enumerate} \end{enumerate} \end{example} To visualize \cref{example:CCS}\cref{example:CCS:Different}, put $x:=(1,1,1), y:= (1,2,1)$, and $z:= (1,1,2)$ so that $\mathcal{S}=\{x,y,z\}$. \cref{fig:BFBregmanCircumcenters} below illustrates the difference between the sets $\overleftarrow{E}_{f}(\mathcal{S})$ and $\overrightarrow{E}_{f}(\mathcal{S})$ caused by the asymmetry of the general Bregman distance. The red and magenta lines are $\overleftarrow{E}_{f}(\mathcal{S})$ and $\overrightarrow{E}_{f}(\mathcal{S})$, respectively. The backward circumcenter $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})$ (resp. the forward circumcenter $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})$) is the intersection of the red (resp. magenta) line and the cyan plane. \begin{figure} \caption{Difference of the backward and forward Bregman circumcenters} \label{fig:BFBregmanCircumcenters} \end{figure} In \cref{fig:BFBregmanBalls}(a) (resp.\ \cref{fig:BFBregmanBalls}(b)), the green, blue and yellow sets are \enquote{Bregman balls}.
Because the intersections of the corresponding three balls in \cref{fig:BFBregmanBalls} happen to be located on the affine subspace $\ensuremath{\operatorname{aff} \,} \{x,y,z\}$, via \cref{defn:CCS:Bregman:left}\cref{defn:CCS:Bregman:left:} and \cref{defn:CCS:Bregman:forward}\cref{defn:CCS:Bregman:forward:}, these red intersection points labeled as $CC(S)$ on the left and right pictures below are $\overleftarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})$ and $\overrightarrow{\ensuremath{\operatorname{C}}CO{}}_{f}(\mathcal{S})$, respectively. \begin{figure} \caption{Backward and forward Bregman balls} \label{fig:BFBregmanBalls} \end{figure} \section{Conclusions}\label{s:conclude} In this work, we presented the basic theory of Bregman circumcenters. We introduced backward and forward Bregman (pseudo-)circumcenters. We also explored the existence and uniqueness of backward and forward Bregman (pseudo-)circumcenters, and presented explicit formulae for backward and forward Bregman pseudo-circumcenters. Various examples were given to illustrate these Bregman circumcenters. The connections between backward and forward Bregman (pseudo-)circumcenters were also established by duality. We shall pursue applications of Bregman circumcenters in accelerating iterative methods in optimization in the future. \end{document}
\begin{document} \title{Abelian varieties not isogenous to Jacobians over global fields} \author{Ananth N. Shankar and Jacob Tsimerman} \date{} \maketitle \begin{abstract} We prove the existence of abelian varieties not isogenous to Jacobians over characteristic $p$ function fields. Our methods involve studying the action of degree $p$ Hecke operators on hypersymmetric points, as well as their effect on the formal neighborhoods using Serre--Tate coordinates. We moreover use our methods to provide another proof over number fields, as well as proving a version of this result over finite fields. \end{abstract} \section{Introduction} This paper concerns the following question: \emph{Given an algebraically closed field $k$ and $g\geq 4$, does there exist an abelian variety over $k$ of dimension $g$ which is not isogenous to the Jacobian of a stable curve?} The history of the question, as far as we know, is that it was first asked by Nicholas Katz and Frans Oort in the setting of $k=\ol{\mathbb{Q}}$, and then posed more generally by Ching-Li Chai and Oort in \cite{chaioortjacobian}. We mention that an observation of Poonen (unpublished, but credited by Chai and Oort), which helped clarify the situation, was to formulate the question more generally for an arbitrary subvariety of $\mathcal{A}_g$ in place of the Torelli locus. Of course, one can also generalize from $\mathcal{A}_g$ to an arbitrary Shimura variety. In the case of $k=\ol{\mathbb{Q}}$, the question has been answered in the affirmative: Chai-Oort \cite{chaioortjacobian} proved it conditional on the Andre-Oort conjecture for $\mathcal{A}_g$, and then the second-named author made their proof unconditional \cite{jacobabvarjacob}. Since then the Andre-Oort conjecture itself has been proven \cite{jacobandreoort} in this case, removing the need for \cite{jacobabvarjacob}.
Recently, Masser-Zannier \cite{masserzannier} proved a stronger result, producing abelian varieties satisfying the condition which are defined over low-degree fields, and in fact showing that `most' such abelian varieties satisfy the condition, in a precise sense. We now turn our attention to the case of finite characteristic, about which much less is known. Over $\ol{\mathbb{F}_p}$, the authors \cite{ananthjacob} made conjectures in previous work, based on arithmetic statistics, which suggest that the statement should not have an affirmative answer for all $g\geq 4$. In fact, \cite{ananthjacob} suggests that if $D\subset \mathcal{A}_g$ is a generically ordinary divisor defined over $\ol{\mathbb{F}_p}$ (with $g>1$), then every $x\in \mathcal{A}_g(\ol{\mathbb{F}_p})$ should be isogenous to some $y\in D(\ol{\mathbb{F}_p})$. In the way of unconditional results, there is only a result of Chai-Oort \cite[\S4]{chaioortjacobian} which deals with the analogous situation of a curve inside $X(1)^2$ (which produces non-ordinary points) and a result of the authors \cite[Thm 4.1]{ananthjacob} for a specific hypersurface in $X(1)^N$ for $N\geq 270$, using additive combinatorics. Relatedly, work of the first author, Asvin G. and Q.~He \cite{hilbertsurfaces} shows that if $C_1,C_2$ are two generically ordinary curves contained in suitable mod $p$ Hilbert modular surfaces, then there are infinitely many closed points on $C_1$ isogenous to some point on $C_2$. The most analogous setting to number fields is function fields in one variable, and thus the natural analogue to $\ol{\mathbb{Q}}$ is $\ol{\mathbb{F}_p(T)}$. The main goal of this paper is to deal with precisely that case. Our main theorem is as follows: \begin{theorem}\label{thm:main1} Let $k$ be a finite field, let $F=\ol{k(t)}$, and let $g>1$. Let $D\subset\mathcal{A}_g/F$ be a divisor. There exists an $F$-valued point of $\mathcal{A}_g$ which is not isogenous to any $F$-point of $D$.
\end{theorem} As a stepping stone to our main theorem, we first prove the following: \begin{theorem}\label{thm:main} Let $k$ be a finite field, and $g>1$. Let $D\subset\mathbb{A}g/k$ be a divisor. Let $F=\ol{k(t)}$. There exists an $F$-valued point of $\mathbb{A}g$ which is not isogenous to any $F$-point of $D$. \end{theorem} The heuristics offered in \cite{ananthjacob} suggest that Theorem \ref{thm:main} is the strongest possible theorem that is true in positive characteristic; namely, that $\ol{k(t)}$ is the smallest algebraically closed field over which one can expect the existence of an abelian variety not isogenous to any point on $D$. Of course, one may formulate the above theorem instead in terms of curves contained in $\mathbb{A}g$ by using the familiar translation between curves and function fields. \subsection{Other results} The method of proof of Theorem \ref{thm:main} yields yet another proof of the number field version of Theorem \ref{thm:main} (namely, the existence of abelian varieties over number fields not isogenous to Jacobians). Our method also yields the following result over finite fields: \begin{theorem}\label{finitefields} Let $D\subset \mathbb{A}g$ denote a divisor over $\mathbb{F}pbar$. Then, there exists an ordinary $x\in \mathbb{A}g(\mathbb{F}pbar)$ such that $T(x) \not\subset D$ for every prime-to-$p$ Hecke correspondence $T$. \end{theorem} Theorem \ref{finitefields} acts as a substitute over $\mathbb{F}pbar$, because as stated before, we expect that every ordinary $x\in \mathbb{A}g(\mathbb{F}pbar)$ is isogenous to some point in $D(\mathbb{F}pbar)$. We remark that Theorem \ref{finitefields} also highlights one of the differences/difficulties in handling the $\mathbb{F}pbar$ case: the Galois orbits are just much smaller, so that $T(x)$ cannot be a single orbit, unlike the case of number fields or function fields. \subsection{Ideas of Proofs} \subsubsection{Proof of Theorem \ref{thm:main}} Fix a divisor $D\subset(\mathbb{A}g)_{\ol k}$.
Our first observation is that in the function field case one can impose very strong `local' conditions\footnote{It is unclear to us whether one can impose analogously strong local conditions over number fields.} on our $F$-point, which rule out being contained in $D$. Thinking of $F$-valued points as curves $C\subset\mathbb{A}g$, this amounts to making a curve $C$ which is highly singular at a given point, such that for example its formal neighborhood contains much of the formal neighborhood of $\mathbb{A}g$. There are some difficulties with implementing this strategy, namely: \begin{enumerate} \item While the local structure of prime-to-$p$ Hecke operators is very well understood, $p$-power Hecke operators behave poorly in characteristic $p$. These are not etale (or even finite) and strongly distort the local structure. \item It is a priori unclear how to insist that every \emph{irreducible} Hecke translate of $C$ also has this property. A priori, different branches of $C$ through a point $x$ could separate upon applying a Hecke operator. \end{enumerate} We overcome the second difficulty by insisting that the curve has maximal monodromy, using Lefschetz theorems in conjunction with Bertini-style theorems due to Poonen and Charles-Poonen. The first difficulty is the main one, and our idea in overcoming it is to use \emph{hypersymmetric} abelian varieties (see Definition \ref{hypersymmetricabvars} for the definition). The $p$-power Hecke orbit of a hypersymmetric point consists of points defined over the same field as the original point (similar in spirit to the prime-to-$p$ Hecke orbit of a supersingular point). In particular, the orbit of a hypersymmetric point under $p$-power isogenies is finite. This prevents `escape to infinity' and allows us to impose certain local conditions on this whole finite set at once, which we can then control.
To impose our local conditions, we use the bilinear structure on the tangent space (and in fact the entire formal neighborhood) inherited from the Serre-Tate coordinates to isolate certain favored directions which behave stably under $p$-power isogenies. \subsubsection{Proof of Theorem \ref{thm:main1}} It turns out that the hardest case of this theorem is Theorem \ref{thm:main}. Our approach is essentially to notice that $\ol{k}(T)$ contains many distinct subfields - in fact, copies of itself: $\ol{k}(P(T))$. One can show that if $D$ is defined over some function field $E$ of $F$, then we may find a smaller subfield $E'\subset E$ such that the intersection of $D$ with its own Galois conjugates over $E'$ is defined over $\ol{k}$. But if $C$ is a curve defined over $E'$, then $C\subset D$ implies that $C$ is contained in the intersection of $D$ with all its Galois conjugates over $E'$. One must be a bit careful, and we work with curves instead of function fields, but this idea can essentially be pushed through to reduce to Theorem \ref{thm:main}. \subsubsection{Proofs of Additional Results} This same idea can be carried out (with a lot less difficulty, as we no longer face any characteristic $p$ issues) to find a curve $C\subset \mathbb{A}g$ none of whose Hecke translates are contained in $D$, where $C$ is now a curve defined over a \emph{number field}. We use the existence of such a curve, along with a ``big monodromy'' result due to Zywina, to provide an intersection-theoretic proof of the existence of number field-valued points of $\mathbb{A}g$ that are not isogenous to any point of $D$. This method also goes through to prove Theorem \ref{finitefields}. \subsection{Structure of Paper} In Section 2 we show how to construct Abelian varieties over curves with prescribed local and monodromy conditions. In Section 3 we review the theory of Serre-Tate coordinates, introduce the notion of primitive Serre-Tate directions and study how they behave under isogenies.
This is the technical heart of the paper. We then go on to prove Theorem \ref{thm:main}, and then upgrade it to Theorem \ref{thm:main1}. Finally, in Section 4 we give a couple of other results: we explain how our methods can give a rather short proof of the same result over number fields (although relying on a strong monodromy theorem of Zywina) and we prove Theorem \ref{finitefields}. \section{Constructing families of Abelian varieties with specified local conditions} In what follows, for an irreducible curve $C$, when referring to the \emph{geometric monodromy} of $C$ we mean $\pi_1(C_\eta)$ where $\eta$ is the generic point. We fix a prime\footnote{The purpose of $r$-torsion is simply to rigidify the moduli problem.} $r>6p$ which is distinct from the characteristic $p$. We fix a smooth compactification of $\mathcal{A}_g[r]$, with boundary divisor $D$. Note that over $\mathcal{A}_g[r]$ we have the $n$-torsion local systems $\mathcal{L}_n$ for $n$ prime to $p$, as well as the $p^m$-torsion local systems $\mathcal{L}_{p^m}$ over the ordinary locus $\mathcal{A}_g^{\ord}[r]$, defined as the dual of the connected component of the $p^m$-torsion of the universal Abelian scheme. We shall need the following result: \begin{theorem}\label{congmain} Fix $g>1$, and an auxiliary prime $r>6p$.
For any finite set of $\Spec\mathbb{F}pbar[[t]]$ points $x_1,\dots,x_n$ whose differentials are injective in $\mathcal{A}^{\ord}_g[r]$, and positive integers $m_1,\dots,m_n$, there exists a curve $C\subset\mathcal{A}^{\ord}_g[r]$ such that \begin{enumerate} \item $C$ admits $\Spec\mathbb{F}pbar[[t]]$ points specializing to each of the $x_i$ mod $t^{m_i}$, and \item The geometric monodromy of $C$ surjects onto $\mathbb{G}amma_g(\mathbb{Z}_r)\times\prod_{\ell\neq p,r}\mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))\times\mathbb{G}L_g(\mathbb{Z}_p)$, \end{enumerate} where $$\mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell)):=\Sp_{2g}(\mathbb{Z}_\ell)/\mathbb{Z}^\times_\ell$$ \end{theorem} The idea for the proof is twofold: for the prime-to-$p$ part we use tame Lefschetz theorems (see \cite[Theorem 7.3]{Esnault} and \cite{EKin}), and for the $p$-part we simply pick enough $\mathbb{F}_q$ points whose Frobenius images generate. \begin{lemma}\label{tame} For a positive integer $n$ prime to $p$, the $n$-torsion local system $\mathcal{L}_n$ over $\mathcal{A}_g[r]$ is tamely ramified at $D$. \end{lemma} \begin{proof} The image of inertia at the boundary in $\mathbb{G}Sp_{2g}(\mathbb{Z}/n\mathbb{Z})$ is well-known to be unipotent, and therefore has order prime to $p$. \end{proof} Next, let $k$ be a finite field over which the union of all the $x_i$ is defined. We pick finitely many points in $\mathcal{A}_g[r](k)$ whose Frobenius images contain every conjugacy class in $\mathbb{G}L_g(\mathbb{Z}/p^2\mathbb{Z})$. Call the union of all those $R$. It follows that every curve defined over $k$ containing $R$ as a smooth divisor has fundamental group surjecting onto $\mathbb{G}L_g(\mathbb{Z}/p^2\mathbb{Z})$ and hence onto all of $\mathbb{G}L_g(\mathbb{Z}_p)$. We moreover claim that such a curve $C$ has \emph{geometric} fundamental group surjecting onto $\mathbb{G}L_g(\mathbb{Z}/p^2\mathbb{Z})$.
Indeed, pick a point $Q\in R\subset C$ such that the Frobenius at $Q$ maps trivially to $\mathbb{G}L_g(\mathbb{Z}/p^2\mathbb{Z})$. Then from the sequence $$1\rightarrow\pi_1(C_{\ol{k}},\ol{Q})\rightarrow\pi_1(C_k,\ol{Q})\rightarrow \pi_1(k)\rightarrow 1$$ the claim follows. Next, as we will be applying Goursat's lemma to join the $p$-part and prime-to-$p$ part of the monodromy, we shall need to consider quotients of $\mathbb{G}amma_g(\mathbb{Z}_r)\times\prod_{\ell\neq p,r}\mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))$ which can occur as quotients of $\mathbb{G}L_g(\mathbb{Z}_p)$. Since the Chevalley groups are simple (with finitely many possible exceptions) most factors don't contribute, and so it is easy to see that all such quotients factor through a single finite quotient group $H$ of $\mathbb{G}amma_g(\mathbb{Z}_r)\times\prod_{\ell\neq p,r}\mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))$ and $H'$ of $\mathbb{G}L_g(\mathbb{Z}_p)$. Now, by possibly increasing $k$, we increase $R$ to contain points in $\mathcal{A}^{\ord}_g[r](k)$ whose Frobenius images contain every conjugacy class in $H\times H'$. As above, every curve defined over $k$ and containing $R$ as a smooth divisor has geometric fundamental group surjecting onto $H\times H'$. We moreover insist that $R$ is distinct from the closed points in the image of the $x_i$. We are now ready to complete the proof. \begin{proof}[Proof of Theorem \ref{congmain}] First, consider a series of blowups at points of $\mathbb{A}g[r]$ to obtain a smooth variety $B$ in which all the $x_i$ separate. Now we apply Poonen's result \cite[Thm 1.3]{poonen} to obtain a smooth curve $C_0$ which is obtained from $B$ by repeatedly intersecting with the ample class, such that $C_0$ is smooth, intersects $D$ transversally, is defined over $k$, and contains the lifts of the $x_i$ mod $t^{m_i}$. Note that by \cite[Thm 1.1]{charlespoonen} we may insist that $C_0$ is moreover irreducible.
We will define $C$ to be the image of $C_0$, so this already implies that condition (1) is satisfied. Next, we will moreover ensure (using the same result) that $C_0$ contains the points above $R$ as smooth divisors, which will imply that the geometric monodromy of $C_0$ surjects onto $\mathbb{G}L_g(\mathbb{Z}_p)$. Separately, the Lefschetz theorem proven by Esnault-Kindler together with Lemma \ref{tame} implies that the fundamental group of $C_0$ surjects onto $\mathbb{G}amma_g(\mathbb{Z}_r)\times\prod_{\ell\neq p,r}\mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))$. Finally, Goursat's lemma together with the fact that the geometric monodromy of $C_0$ surjects onto $H\times H'$ implies that the fundamental group of $C_0$ surjects onto all of $\mathbb{G}amma_g(\mathbb{Z}_r)\times\prod_{\ell\neq p,r}\mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))\times\mathbb{G}L_g(\mathbb{Z}_p)$ as desired. \end{proof} \section{Hecke translates in characteristic $p$} \subsection{Background on Serre-Tate coordinates} We will briefly recall Serre-Tate coordinates and describe the local action of $p$-power Hecke operators in terms of these coordinates. For a thorough treatment of Serre-Tate coordinates, see \cite{katzserretate} or \cite{chaioort}. Let $\mathscr{G}$ denote an ordinary $p$-divisible group over an algebraically closed field $k$ with dimension $g$ and height $2g$. We have that $\mathscr{G} \simeq \mathscr{G}^{\mult} \times \mathscr{G}^{\et}$, where $\mathscr{G}^{\mult} \simeq \mu_{p^{\infty}}^g$ and $\mathscr{G}^{\et} \simeq (\mathbb{Q}_p/\mathbb{Z}_p)^g$. The $p$-divisible groups $\mathscr{G}^{\mult}$ and $\mathscr{G}^{\et}$ are rigid, and the functoriality of the connected-\'{e}tale exact sequence implies that deformations of $\mathscr{G}$ to an Artin local $k$-algebra $R$ are in bijection with extensions of $\mathscr{G}^{\et}\times \Spec R$ by $\mathscr{G}^{\mult}\times \Spec R$. In particular, the deformation space of $\mathscr{G}$ has a natural group structure.
Let $T_p(\mathscr{G})$ denote the Tate module of $\mathscr{G}^{\et}$, and let $T_p(\mathscr{G}^{\vee})$ denote the Tate module of $(\mathscr{G}^{\mult})^\vee$, where $^\vee$ denotes taking the Cartier dual. Then, the formal deformation space of $\mathscr{G}$, which we have already seen has the structure of a group, actually has the structure of a formal torus, and is canonically isomorphic to $\Hom(T_p(\mathscr{G})\otimes T_p(\mathscr{G}^{\vee}),\mathbb{G}mhat)$. It will later also be convenient to use the canonical identification of this torus with $(T_p(\mathscr{G})^* \otimes T_p(\mathscr{G}^\vee)^*) \otimes \mathbb{G}mhat$. Here, $^*$ denotes taking the linear dual of a $\mathbb{Z}_p$-module. Specifically, let $R$ denote any Artin local $k$-algebra, with maximal ideal $\mathfrak{m}$. Then, the deformations of $\mathscr{G}$ to $R$ are in bijection with bilinear maps $q: T_p(\mathscr{G}) \times T_p(\mathscr{G}^{\vee}) \rightarrow 1+\mathfrak{m}$. Therefore, a $\mathbb{Z}_p$-basis $\{e_1,\hdots, e_g\}$ of $T_p(\mathscr{G})$ and a basis $\{m_1,\hdots, m_g\}$ of $T_p(\mathscr{G}^{\vee})$ yield coordinates $Q=\{q_{ij}\}$ on the deformation space, i.e. the characteristic $p$ deformation space is isomorphic to $\Spf k[[q_{ij}-1]]$, and an $R$-valued point of the deformation space corresponds to a choice of $g^2$ elements in $1 + \mathfrak{m}$, which is the same data as a bilinear map $q: T_p(\mathscr{G}) \times T_p(\mathscr{G}^{\vee}) \rightarrow 1+\mathfrak{m}$. Changing the coordinates $e_i$ by $A\in \mathbb{G}L_g(\mathbb{Z}_p)$ and $m_j$ by $B\in \mathbb{G}L_g(\mathbb{Z}_p)$ yields a new set of coordinates $Q' = \{q'_{ij}\}$, with $Q' = A^{-1} Q B^{T}$. The same analysis holds when $R$ is a complete local $k$-algebra. Here we think of $Q$ as an element of $M_g(1+\mathfrak{m})$, which inherits the structure of an $M_g(\mathbb{Z}_p)$ bi-module from the $\mathbb{Z}_p$-module structure of $1+\mathfrak{m}$.
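For instance, when $g=1$ there is a single coordinate $q\in 1+\mathfrak{m}$, and rescaling the bases by units $a,b\in\mathbb{Z}_p^{\times}$ (so $A=(a)$ and $B=(b)$) transforms the coordinate as
\[
q' = q^{a^{-1}b},
\]
where the exponent acts through the $\mathbb{Z}_p$-module structure of $1+\mathfrak{m}$.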
Let $\mathscr{G}'$ denote another $p$-divisible group and let $\phi: \mathscr{G}\rightarrow \mathscr{G}'$ be an isogeny. We pick bases $\{e'_i,m'_j\}$ for $T_p(\mathscr{G}'),T_p(\mathscr{G}^{\prime \vee})$, and let $X$ and $Y$ denote the matrices (necessarily with non-zero determinant) of $\phi: T_p(\mathscr{G})\rightarrow T_p(\mathscr{G}')$ and $\phi^{\vee}: T_p(\mathscr{G}^{\prime \vee})\rightarrow T_p(\mathscr{G}^\vee)$ in these coordinates. Let $\tilde{\pdiv}/k[[t]]$ be a deformation of $\mathscr{G}$, and let $Q = \{q_{ij}(t)\}$ denote its Serre-Tate coordinates. The following result is well known (see for example \cite[Theorem 2.1]{katzserretate} or \cite[Proposition 2.23]{chaioort}). \begin{proposition}\label{prop:coordinatesafterisog} Notation as above, and consider the matrix $Q' = X^{-1}QY$. Note that a priori the entries of $Q'$ are valued only in $k((t^{\frac{1}{p^{\infty}}}))$. \begin{enumerate} \item Suppose that the entries of $Q'$ are valued in $1+tk[[t]]$. Then, $\phi$ lifts uniquely to an isogeny $\tilde{\pdiv} \rightarrow \tilde{\pdiv}'$ over $k[[t]]$ (where $\tilde{\pdiv}'$ is necessarily a deformation of $\mathscr{G}'$) and the Serre-Tate coordinates (with respect to $\{e'_i,m'_j \}$) of $\tilde{\pdiv}'$ are $Q'$. \item If the entries of $Q'$ aren't valued in $1 + tk[[t]]$ (e.g.\ if some entry looks like $(1+t)^{\frac{1}{p}}$), then $\phi$ doesn't lift to an isogeny over $k[[t]]$ with source $\tilde{\pdiv}$. However, let $n=n(\tilde{\pdiv},\phi)$ denote the smallest positive integer such that the entries of $Q'(t^{p^n})$ are valued in $1+tk[[t]]$. Then, $\phi$ lifts uniquely to an isogeny $\tilde{\pdiv} \times_{\Spf k[[t]]} \Spf k[[s]] \rightarrow \tilde{\pdiv}'$, where the map $\Spf k[[s]] \rightarrow \Spf k[[t]]$ is given by $t\mapsto s^{p^n}$, and $\tilde{\pdiv}'$ is a deformation of $\mathscr{G}'$ to $k[[s]]$.
The Serre-Tate coordinates of $\tilde{\pdiv} \times_{\Spf k[[t]]} \Spf k[[s]]$ are $Q(s^{p^n})$, and of $\tilde{\pdiv}'$ are $Q'(s) = X^{-1}Q(s^{p^n})Y$. \end{enumerate} In either case, we will let $\tilde{\phi}$ denote the isogeny that lifts $\phi$, which is already defined over $\Spf k[[t]]$ in the first case, but which is only defined over $\Spf k[[s]] = \Spf k[[t^{\frac{1}{p^n}}]]$ in the second case. \end{proposition} Suppose now that $\mathscr{G}$ is equipped with a principal polarization, $\lambda$. This yields canonical isomorphisms $T_p(\mathscr{G})\rightarrow T_p(\mathscr{G}^{\vee})$ and $T_p(\mathscr{G})^*\rightarrow T_p(\mathscr{G}^\vee)^*$, both of which we will also denote by $\lambda$. Suppose that we have chosen coordinates such that $m_i = \lambda(e_i)$ (which yield dual bases $\mu_i = \lambda(\epsilon_i)$ of $T_p(\mathscr{G})^*,T_p(\mathscr{G}^\vee)^*$). Then, $\lambda$ lifts to a deformation of $\mathscr{G}$ precisely when $q_{ij} = q_{ji}$ (\cite[Corollary 2.2.4]{chaioort}). The deformation space of $(\mathscr{G},\lambda)$ is $\Hom^{\Sym}(T_p(\mathscr{G})\otimes T_p(\mathscr{G}^\vee),\mathbb{G}mhat) \subset \Hom(T_p(\mathscr{G})\otimes T_p(\mathscr{G}^\vee),\mathbb{G}mhat)$, the set of all symmetric maps. This space is canonically isomorphic to $\big(T_p(\mathscr{G})^*\otimes T_p(\mathscr{G}^\vee)^*\big)^{\Sym} \otimes \mathbb{G}mhat$, where $(T_p(\mathscr{G})^*\otimes T_p(\mathscr{G}^\vee)^*)^{\Sym}$ is the $\mathbb{Z}_p$-span of $\{\epsilon_i \otimes \mu_j + \epsilon_j\otimes \mu_i, \epsilon_i\otimes \mu_i \}$. In coordinates, this space equals $\Spf k[[q_{ij}]]/(\{q_{ij} - q_{ji} \})$. \subsection{Description of $p$-power Hecke operators on $\mathbb{A}g \mod p$} We now briefly describe $p$-power Hecke operators on ${\mathbb{A}g}^{\ord} \mod p$. For further details, the interested reader may consult \cite[Chapter VII Section 4]{chaifaltings}.
Let $(\mathscr{G},\lambda),(\mathscr{G}',\lambda')$ denote ordinary principally polarized $p$-divisible groups as above, and let $\phi: \mathscr{G}\rightarrow \mathscr{G}'$ denote an isogeny such that $\phi^*(\lambda') = p^n\lambda$. Suppose that we choose bases for $T_p(\mathscr{G}),T_p(\mathscr{G}^\vee),T_p(\mathscr{G}')$ and $T_p(\mathscr{G}^{\prime,\vee})$ which are compatible with $\lambda,\lambda'$. The choice of bases induces isomorphisms $\mathscr{G},\ \mathscr{G}' \rightarrow (\mathbb{Q}_p/\mathbb{Z}_p)^g \times (\mu_{p^{\infty}})^g$. The isogeny $\phi$ acts by multiplication by a matrix $X \in M_g(\mathbb{Z}_p)$ on the first summand, and by $Y \in M_g(\mathbb{Z}_p)$ on the second summand, and the condition $\phi^{*}\lambda' = p^n\lambda$ is equivalent to $X\cdot Y^{T} = p^n\mathbb{I}d$. We say that the matrix of $\phi$ is $(X,Y)$. Now, let $(n:n_1,n_2,\hdots n_g)$ denote a sequence of integers such that $n\geq n_1\geq n_2 \hdots \geq n_g\geq 0$. A $p$-power Hecke operator $\tau$ is said to have type $(n:n_1,n_2,\hdots n_g)$ if it parameterizes all ordinary isogenies $\{\phi: \mathscr{G} \rightarrow \mathscr{G}' \}$ such that the matrix of $\phi$ is $(A,B)$, where $AB^T = p^n\mathbb{I}d$ and \[ A \in \mathbb{G}L_g(\mathbb{Z}_p) \left[ \begin{array}{cccc} p^{n_1}&&&\\ &p^{n_2}&&\\ &&\ddots&\\ &&&p^{n_g}\\ \end{array} \right] \mathbb{G}L_g(\mathbb{Z}_p). \]\label{hecketype} We note that the Hecke correspondence $\tau$ is Frobenius precisely when it has type $(1:0,0,\hdots 0)$ and is Verschiebung precisely when it has type $(1:1,1,\hdots 1)$. Further, $\tau$ factors through Frobenius precisely when $n > n_1$, and factors through Verschiebung when $n_g > 0$. The Hecke correspondence $\tau$ induces a closed subvariety $\mathbb{A}g^{\ord}[\tau]\subset \mathbb{A}g^{\ord}\times \mathbb{A}g^{\ord}$, with the two canonical projections $\Pr_1(\tau),\Pr_2(\tau)$ to $\mathbb{A}g$. The maps $\Pr_i(\tau)$ are usually inseparable.
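To illustrate the above in the simplest case $g=1$, consider the Verschiebung correspondence, of type $(1:1)$; with suitable bases the corresponding isogeny has matrix $(X,Y)=((p),(1))$. For a deformation with Serre-Tate coordinate $q(t)=1+t$, the matrix $Q'=X^{-1}QY$ has the single entry
\[
(1+t)^{\frac{1}{p}} = 1+t^{\frac{1}{p}},
\]
which is not valued in $1+tk[[t]]$. In the notation of Proposition \ref{prop:coordinatesafterisog}(2) we thus have $n=1$, and after the base change $t=s^p$ the lifted isogeny has target with coordinate $(1+s^p)^{\frac{1}{p}} = 1+s$, since $(1+s)^p = 1+s^p$ in characteristic $p$.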
Now, let $C \subset \mathbb{A}g$ denote a reduced generically ordinary curve, and let $C^{\ord}$ be the intersection of $C$ with the ordinary locus of $\mathbb{A}g$. We define $\tau(C^{\ord}) \subset \mathbb{A}g^{\ord}$ to be the unique reduced subscheme corresponding to $\Pr_1(\tau)_*\Pr_2(\tau)^*C^{\ord}$, and we define $\tau(C)$ to be the Zariski closure of $\tau(C^{\ord})$ in $\mathbb{A}g$. We make the following observation that will be used below: let $\tau$ denote a Hecke correspondence which doesn't factor through Frobenius, and suppose the Serre-Tate coordinates of some branch of the completion of $C$ at an ordinary point are $Q=\{q_{ij}\}$, where none of the $q_{ij}$ are $p$th powers. Then the Serre-Tate coordinates of $\tau$ applied to this branch are calculated using the recipe outlined in Proposition \ref{prop:coordinatesafterisog} (2), i.e. the Serre-Tate coordinates of the various branches equal $A^{-1}Q^{p^n}B^T$, where $A$ is as in \ref{hecketype} and $B$ satisfies $AB^T = p^n\mathbb{I}d$. \subsection{$p$-power Hecke correspondences on Serre-Tate coordinates} We will now describe the action of $p$-power Hecke correspondences locally in terms of Serre-Tate coordinates. For a point $x\in \mathbb{A}g$, we will denote the associated principally polarized abelian variety by $A_x$ and its associated $p$-divisible group by either $A_x[p^{\infty}]$ or $\mathscr{G}_x$. We note that the formal neighborhood $\Hat{\mathbb{A}g}^x$ of $\mathbb{A}g$ at an ordinary point $x$ is canonically isomorphic to the polarized deformation space of $A_x$ (which, by the Serre-Tate lifting theorem, is the same as the polarized deformation space of $\mathscr{G}_x$). Therefore, this space canonically has the structure of a formal torus, and in coordinates, is isomorphic to $\Hom^{\Sym}(T_p(\mathscr{G}_x)\otimes T_p(\mathscr{G}_x^\vee),\mathbb{G}mhat)$.
\begin{defn}\label{defnserretatedirection} Let $x\in \mathbb{A}g$ denote an ordinary point, and let $\ell \subset T_x\mathbb{A}g$ denote a one-dimensional subspace. We say that $\ell$ is a \emph{Serre-Tate} direction if it equals the tangent space of a rank-1 formal subtorus of $\Hat{\mathbb{A}g}^x$, i.e. $L\otimes \mathbb{G}mhat$ where $L\subset (T_p(\mathscr{G})^*\otimes T_p(\mathscr{G}^\vee)^*)^{\Sym}$ is a saturated rank-1 $\mathbb{Z}_p$-submodule. We say that $\ell$ is a \emph{Primitive Serre-Tate} direction if in addition $L$ is spanned by a primitive tensor in $(T_p(\mathscr{G})^*\otimes T_p(\mathscr{G}^\vee)^*)^{\Sym}$. Analogously, we define $T_x^k\mathbb{A}g$ to be the $k$th order neighborhood of $x$ in $\mathbb{A}g$. We define a primitive $k$th order Serre-Tate direction to be the base change of a rank-1 formal subtorus of $\Hat{\mathbb{A}g}^x$ to this neighborhood. \end{defn} We note that for any positive integer $k$, there are only finitely many $k$th order Serre-Tate directions. \begin{defn}\label{hypersymmetricabvars} An ordinary abelian variety $A/\mathbb{F}_q$ is said to be \textbf{hypersymmetric} if $A$ is isogenous to $E^g$, where $E/\mathbb{F}_q$ is an ordinary elliptic curve. \end{defn} Let $C\subset \mathbb{A}g$ denote a generically ordinary reduced irreducible curve that has the following properties: \begin{enumerate}\label{dataC} \item $C$ passes through every $\mathbb{F}_q$-rational ordinary hypersymmetric point $x$.\label{dataC1} \item For any such $x \in \mathbb{A}g(\mathbb{F}_q)$, $C$ has nodal singularities at $x$, and for every $k$th order primitive Serre-Tate direction $\ell_k$ at $x$, there exists a formally smooth local branch $C_{\ell_k,x}$ of $C$ at $x$ whose tangent space equals $\ell_k$. \label{dataC2} \end{enumerate} We have the following key theorem: \begin{theorem}\label{thm:ppowerheckesmooth} Let $\tau$ denote any $p$-power Hecke correspondence, and let $C$ be as above.
Then $\tau(C)$ contains every ordinary $x \in \mathbb{A}g(\mathbb{F}_q)$ which is hypersymmetric. Further, $\tau(C)$ contains every $k$th order primitive Serre-Tate direction in $T_x^k\mathbb{A}g$. \end{theorem} \begin{lemma}\label{lemma:hypersymmetric} Let $x\in \mathbb{A}g(\mathbb{F}_q)$ be ordinary and hypersymmetric. Then, $\tau(x) \subset \mathbb{A}g(\mathbb{F}_q)$. \end{lemma} \begin{proof} Every subgroup of $A_x[p^\infty]$ has the form $G_{\et}\times G_{\mult}$, where $G_{\et} \subset A_x[p^{\infty}]_{\et}$, and $G_{\mult}\subset A_x[p^{\infty}]_{\mult}$. Since $A_x$ is hypersymmetric, the Galois representation on $T_p(A_x[p^{\infty}]_{\et})$ acts through a scalar (because this statement is trivially true for $E$). It follows that every subgroup of $A_x[p^{\infty}]$ is defined over $\mathbb{F}_q$, whence the result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:ppowerheckesmooth}] Let $x\in \mathbb{A}g(\mathbb{F}_q)$ denote some fixed hypersymmetric point, let $\ell_k$ denote a $k$th order primitive Serre-Tate direction, and let $C_{x,\ell_k} = \Spf \mathbb{F}[[t]]$ denote the formally smooth local branch of $C$ at $x$ that realizes $\ell_k$. We fix polarization-compatible bases $\{e_i \}$ of $T_p(\mathscr{G}_x)$ and $\{m_j \}$ of $T_p(\mathscr{G}_x^{\vee})$, with dual bases $\{\epsilon_i \} $ and $\{ \mu_j\}$, such that $\ell_k$ is realized by the sub-torus corresponding to the span of $\epsilon_1\otimes \mu_1$. Let $y\in \tau^{\vee}(x)$ be such that the associated isogeny $\phi^{\vee}:A_x \rightarrow A_y$ is obtained by quotienting $A_x$ by the diagonal subgroup with type $(n:n'_1,\hdots n'_g)$. By Lemma \ref{lemma:hypersymmetric}, $y\in \mathbb{A}g(\mathbb{F}_q)$, and therefore our hypothesis implies that $C$ passes through $y$, and there exists a formally smooth branch of $C$ through $y$ realizing any fixed $k$th order primitive Serre-Tate direction.
Let $e'_i \in T_p(\mathscr{G}_y)$ be defined by $e'_i = \frac{1}{p^{n'_i}}\phi^\vee(e_i)$, and let $m'_j \in T_p(\mathscr{G}_y^{\vee})$ be defined analogously -- $\{e'_i\}$ and $\{ m'_j \}$ are bases of $T_p(A_y)$ and $T_p(A_y^{\vee})$ respectively. Let $\{\epsilon'_i \}$ and $\{ \mu'_j\}$ denote dual bases. Consider the branch $C_{y,\ell'_k}$, where $\ell'_k$ is realized by $\epsilon_1' \otimes \mu_1'$. The Serre-Tate coordinates of $C_{y,\ell'_k}$ are given by functions $q'_{ij}(t)$ where $q'_{11}(t) \equiv 1+t \mod t^{k+1}$ and $q'_{ij}(t) \equiv 1 \mod t^{k+1}$ otherwise. Let $\phi:A_y\rightarrow A_x$ denote the dual of $\phi^{\vee}$. We now lift $\phi$ to an isogeny $\tilde{\phi}$ whose source is the abelian scheme $A_{C_{y,\ell'_k}}$. By Proposition \ref{prop:coordinatesafterisog}, the Serre-Tate coordinates of $\tilde{\phi}(A_{C_{y,\ell'_k}})$ are given by $Q(s) = X^{-1}Q'(s^{p^n})Y$, where $X$ and $Y$ are diagonal matrices with $i$th entry $p^{n_i}$ and $p^{n-n_i}$ respectively. It follows that the $k$th order tangent direction of $\tilde{\phi}(A_{C_{y,\ell'_k}})$ is equal to $\ell_k$. As $x$, $\ell_k$ and $\tau$ were arbitrary, the result follows. \end{proof} \subsection{Zariski-density of rank-1 formal branches} \begin{proposition}\label{notalltori} Let $x$ be a hypersymmetric point, and suppose $D\subset \mathbb{A}g$ is a variety such that the formal completion of $D$ at $x$ contains all primitive rank-1 subtori. Then $D=\mathbb{A}g$. \end{proposition} \begin{proof} We will in fact prove the stronger claim that the union of the primitive rank-1 formal tori is Zariski dense in $\Sym^{2g}(\mathbb{Z}_p)\otimes\Hat{\mathbb{G}m}$. Letting $q_{ij}$ be the coordinates as before, we consider the element $v\in\mathbb{Z}_p^g$ such that $v_i=p^{N^i}$, where $N$ will be chosen to be a large integer. Then the corresponding primitive rank 1 torus has coordinates $Q_{ij}(t)=t^{p^{N^i+N^j}}$. Now let $f\in k[[q_{ij}]]$ be a non-zero power series which vanishes on all such primitive rank 1 tori.
By Dickson's lemma, there are finitely many monomials occurring in $f$ which are minimal with respect to divisibility, and every monomial of $f$ is divisible by one of them. Note that $\displaystyle \prod_{i,j}q_{ij}^{r_{ij}}$ evaluates at $\vec{Q}(t)$ to $t^{M}$, where $M=\displaystyle\sum_{i,j}r_{ij}p^{N^i+N^j}$. For large enough $N$ the exponents $M$ arising from the minimal monomials are pairwise distinct, which means there is a single term of $f(\vec{Q}(t))$ of smallest degree, which is a contradiction. \end{proof} \subsection{Algebraicity of primitive Serre-Tate directions} As in \cite[Prop 5.4]{Chai}, we may consider the completion along the diagonal in $D\times D$ as a formal scheme $D^\wedge$ over $D$, and we may also view this as sitting inside the pull back $i^*\mathbb{A}g^\wedge$. Now for any integer $n>0$ we may also take the corresponding $n$th order infinitesimal subscheme $D_n$ sitting inside $i^*(\mathbb{A}g)_n$. Now, if we go up to a level cover ${\mathbb{A}g}^{\ord}_{p^n}$ where the $p$-adic Tate module has been trivialized mod $p^n$, then the infinitesimal scheme $(\mathbb{A}g)_n$ can be trivialized as $\Sym^2(\mathbb{Z}/p^n\mathbb{Z})^g\otimes {\mathbb{G}m}_n$ and there is a closed subscheme consisting of the primitive Serre-Tate directions. Hence the same is true in $\mathbb{A}g^{\ord}$. Thus, pulling back to $D$, we see that there is a closed subvariety $R_n\subset i^*(\mathbb{A}g)_n$ whose fibers at closed points consist of the unions of all primitive Serre-Tate directions to order $n$. Moreover, $R_n$ is etale-locally a product, thus its restriction to any closed subvariety of $D$ is irreducible. It follows that there is a Zariski-closed subscheme $D_{st,n}$ of $D$ consisting of all the points at which $D$ contains all the primitive Serre-Tate directions to order $n$. \subsection{Proof of Theorem \ref{thm:main}} Suppose that $D\subsetneq \mathbb{A}g$ is a subvariety, and let $x\in\mathbb{A}g(\ol{k})$ be a hypersymmetric point. If $D$ does not contain any point isogenous to $x$, the result follows simply by picking a curve $C$ passing through $x$.
Thus, at the cost of increasing the finite field $k$, we assume $D$ contains a $k$-point isogenous to $x$. By Proposition \ref{notalltori} it follows that $D$ does not contain all formal subtori passing through $x$. It follows that there is some positive integer $M$ such that $T^M_xD$ does not contain all $M$th order primitive Serre-Tate directions. Thus, by the discussion in the previous subsection, $x\notin D_{st,M}$. By Noetherianity, the descending tower $D_{st,M}$ stabilizes, and thus we may choose $M$ for which the above statement is true for all hypersymmetric points. We now pick a prime $r>6p$ and increase $k$ so that the pre-images of $x$ in $\mathbb{A}g[r]$ are defined over $k$. We now use Theorem \ref{congmain} to produce a curve $C\subset\mathbb{A}g[r]$ such that: \begin{enumerate} \item For every hypersymmetric point $x'$ isogenous to $x$, and every $M$th order primitive Serre-Tate direction $\eta$ through $x'$, $C$ admits a $\ol{k}[[t]]$ point specializing to $\eta$, and \item The geometric monodromy of $C$ surjects onto $\mathbb{G}amma_g(\mathbb{Z}_r)\times\prod_{\ell\neq p,r} \mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))\times \mathbb{G}L_g(\mathbb{Z}_p)$. \end{enumerate} By condition 2 above, every Hecke translate $TC$ of $C$ is irreducible. Since the curves isogenous to $C$ correspond to irreducible components of its Hecke translates, it suffices to show that $TC\not\subset D$ for every Hecke operator $T$. We factor $T=T_eT_p$ for $T_e$ a prime-to-$p$ Hecke correspondence and $T_p$ a $p$-power Hecke correspondence. By Theorem \ref{thm:ppowerheckesmooth}, the curve $T_pC$ also satisfies condition 1 above. Now since $TC$ is irreducible, it follows that for each point $y\in T_e x$ the formal neighborhood of $TC=T_e(T_pC)$ at $y$ contains the image of the formal neighborhood of $T_pC$ at $x$. Since $T_e$ is etale, it follows that $TC$ satisfies condition 1 above at $y$.
Therefore, $TC$ cannot be contained in $D.$ The result about function field valued points follows immediately by considering the generic point of $C$. $ \square$ \subsection{Proof of Theorem 1.1} We now suppose that $D\subsetneq (\mathbb{A}g)_{F}$ is a subvariety. Identifying $k(T)$ with the function field of $\mathbf{P}^1_{k}$ we may realize $D$ as the generic point of a (strict) subvariety $\mathcal{D}\subset (\mathbb{A}g\times\mathbf{P}^1)_k$, no generic points of which are contained in a fiber. We let $E\subset(\mathbb{A}g)_{\ol k}$ be the constant part of $D$, so that $E_F\subset D$ and $E$ is the maximum such subvariety\footnote{One may construct $E$ by intersecting all base changes of $D$ along automorphisms of $F$ over $\ol k$.}. By increasing $k$ we assume $E$ is defined over $k$. We are looking for a curve $C\subset (\mathbb{A}g\times\mathbf{P}^1)_k$ such that no Hecke translate of $C$ is contained inside $\mathcal{D}$. As a first step, we pick $N$ sufficiently large so that $A_N$ is not a sub-quotient of $\prod_{\ell\neq p} \mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))\times \mathbb{G}L_g(\mathbb{Z}_p)$, and let $\phi:\mathbf{P}^1\rightarrow\mathbf{P}^1$ be a degree $N$ map whose Galois group is $A_N$. We shall need the following two lemmas: \begin{lemma}\label{galois} Let $K\subset L$ be a finite separable extension of fields, and let $G$ be the Galois group of the normal closure of $L$ over $K$. Suppose that $H$ is a profinite group with no irreducible constituent in common with $G$, and let $\phi:\mathbb{G}al_K\rightarrow H$ be a group homomorphism. Then $\phi(\mathbb{G}al_L)=\phi(\mathbb{G}al_K)$. \end{lemma} \begin{proof} By increasing $L$ we may as well assume $L/K$ is Galois. Then $\mathbb{G}al_L\subset\mathbb{G}al_K$ is normal, and so we get a map $G\rightarrow\phi(\mathbb{G}al_K)/\phi(\mathbb{G}al_L)$, which must be trivial by our assumptions. The statement follows.
\end{proof} \begin{lemma}\label{uniform} Let $\mathcal{D}\subset (\mathbb{A}g\times\mathbf{P}^1)_k$ be as above. There is an integer $M$ such that for any $M$ distinct points $x_1,\dots,x_M$ of $\mathbf{P}^1(\ol k)$, we have $\displaystyle\bigcap_{i=1}^M \mathcal{D}_{x_i}=E$. \end{lemma} \begin{proof} We will base change to an uncountable field $k$, and prove the statement there (from which the original statement clearly follows). For any $M$, consider the set $S_M$ of points $x_1$ for which one can find distinct $x_2,\dots,x_M$ such that $\displaystyle\bigcap_{i=1}^M \mathcal{D}_{x_i}\neq E$. This is clearly a constructible set, so it is either finite or co-finite. Moreover, the set $S_M$ is clearly descending (perhaps non-strictly) with $M$. It follows that if $S_M$ is not eventually empty, then $\bigcap_M S_M\neq\emptyset.$ Thus we may find a point $x_1$ which is contained in this intersection. We may now iterate with $x_2$, etc., to produce a countably infinite sequence of points $x_i$ such that $\displaystyle\bigcap_{i=1}^m \mathcal{D}_{x_i}\neq E$ for any positive integer $m$. By Noetherianity this intersection eventually stabilizes to a subvariety $E'$ properly containing $E$. This $E'$ is contained in infinitely many fibers of $\mathcal{D}$ and hence in all of them. Thus $E'_F\subset D$, contradicting the definition of $E$. \end{proof} We now construct our curve $C$ as follows. Fix a prime $r>6p$. Following Theorem \ref{thm:main} we next construct an irreducible curve $C_0\subset \mathbb{A}g[r]$ such that \begin{enumerate} \item No Hecke translate of $C_0$ is contained in $E$. \item The geometric monodromy of $C_0\cap\mathbb{A}g^{\ord}[r]$ surjects onto $\mathbb{G}amma_g(\mathbb{Z}_r)\times\prod_{\ell\neq p,r} \mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))\times \mathbb{G}L_g(\mathbb{Z}_p)$. \end{enumerate} We do this the same way as in the proof of Theorem \ref{thm:main1}.
Next, we pick a map $\psi:C_0\rightarrow\mathbf{P}^1$ arbitrarily, and let $C_1$ be the graph of this map in $\mathbb{A}g[r]\times\mathbf{P}^1$. Finally, we let $C$ be the base change of this map under $\phi\times\mathbb{I}d_{\mathbb{A}g[r]}$, where we take $N>\deg\psi$. It follows that $C$ is irreducible, since $A_N$ does not admit a non-trivial homomorphism to $S_{\deg\psi}$. By Lemma \ref{galois}, the monodromy group of $C$ surjects onto $\mathbb{G}amma_g(\mathbb{Z}_r)\times\prod_{\ell\neq p,r} \mathbf{P}(\Sp_{2g}(\mathbb{Z}_\ell))\times \mathbb{G}L_g(\mathbb{Z}_p)$. It follows that $TC$ is irreducible for each Hecke operator $T$. Suppose that $TC\subset \mathcal{D}$. For a generic point $s\in\mathbf{P}^1$, the fiber $\phi^{-1}(s)$ consists of $N$ points $x_1,\dots,x_N$, and so it follows from Lemma \ref{uniform} that $$(TC)_{x_j}=\bigcap_{i=1}^N (TC)_{x_i}\subset\bigcap_{i=1}^N \mathcal{D}_{x_i}=E.$$ It follows that $TC\subset E$. Since $E$ is constant, by projecting to $\mathbb{A}g$, this in fact means that $TC_0\subset E$, which is a contradiction. \section{Additional results} \subsection{The case of number fields} Let $H\subset \mathbb{A}g$ denote the Hodge line bundle. For any proper curve $C\subset \mathbb{A}g$, we write $(C\cdot H)$ for the degree of this line bundle pulled back to $C$, and set $\deg(C)=(C\cdot H)$. \begin{lemma}\label{ellheckedegree} Let $T$ denote a Hecke operator on $\mathbb{A}g$. Then, for any proper curve $C\subset \mathbb{A}g$ we have $\deg T(C) = \deg(T) \cdot \deg(C)$. \end{lemma} \begin{proof} This follows directly from the fact that the two different pull-backs of $H$ to the graph of the Hecke correspondence are equal. \end{proof} \begin{defn} Let $X$ be some variety defined over a field $L$, and let $K/L$ denote a finite extension. We say that a point $x\in X(K)$ is \emph{primitive} if $K$ is the smallest field of definition of $x$.
\end{defn} \begin{lemma}\label{pointlargedegreelargemonodromy} Let $C\subset \mathcal{A}_g$ denote a curve defined over $\mathbb{Q}$ with maximal monodromy. There exists a constant $c \in \mathbb{Z}$ such that for any positive integer $n$, there exists a number field $K/\mathbb{Q}$ which is Galois and has degree $\geq n$, and a primitive point $x\in C(K)$ such that the monodromy of $A_x$ has index bounded by $c$ in $\mathbb{G}Sp_{2g}(\hat{\mathbb{Z}})$. \end{lemma} \begin{proof} Let $\pi: C\rightarrow \mathbb{P}^1$ denote a finite map of degree $d$, and consider $B = \mathbb{R}es_{C/\mathbb{P}^1}(A)$, which is an abelian scheme of relative dimension $gd$ over an open subset $U\subset \mathbb{P}^1$. Work of Zywina \cite[Theorem 1.1]{Zywina} implies that for any number field $L$, there exists a constant $c_L$ such that for a density 1 set of points $y\in U(L)$, the monodromy of $B_y$ has index bounded by $c_L$ in the monodromy of $B_U$. In Section 1.2 of \emph{loc. cit.}, Zywina explicitly describes the constant $c_L$. Indeed, if $L/\mathbb{Q}$ is a Galois extension with the property that the monodromy of $B_{U_L}$ is the same as the monodromy of $B_{U_{\mathbb{Q}}}$, then the constant $c_L$ equals $c_{\mathbb{Q}}$. It is easy to see that there exist fields $L$ with this property with $[L:\mathbb{Q}]$ arbitrarily large; for example, by choosing $L$ so that $\mathbb{G}al(L/\mathbb{Q}) = G(\mathbb{Z}/\ell^n\mathbb{Z})$, where $G/\mathbb{Z}_{\ell}$ is some exceptional simple group and $\ell$ is any prime. We now use the above discussion to deduce the existence of number fields $K$ with arbitrarily large degree and points $x\in C(K)$ with monodromy as claimed. Let $y\in U(L)$ satisfy the conclusions of the previous paragraph. By Hilbert irreducibility, we may also assume that $\pi^{-1}(y) = \{x_1,\dots,x_d\}$, where $x_i\in C(K_i)$, the $K_i/L$ are conjugate degree-$d$ extensions of $L$, and the points $x_i$ are conjugate to each other.
We have that $\prod_{i=1}^d A_{x_i}$ descends to $L$ and equals $ B_y$, and that the monodromy of $(\prod_{i=1}^d A_{x_i})_{K_1}$ equals the intersection of the monodromies of $B_C$ and $B_y$ inside the monodromy of $B_U$. It then follows that the monodromy of $(\prod_{i=1}^d A_{x_i})_{K_1}$ has index bounded by $c = c_{\mathbb{Q}}$ inside the monodromy of $B_C$, and projecting onto the first factor yields that the monodromy of $A_{x_1}$ has index bounded by $c$ in the monodromy of $A_C$, which is equal to $\mathbb{G}Sp_{2g}(\hat{\mathbb{Z}})$ by our assumption on the curve $C$. \end{proof} \begin{theorem}\label{mainnumberfield} Let $V\subsetneq \mathcal{A}_g$ denote a closed subvariety. Then, there exists an abelian variety over a number field which is not isogenous to any point of $V$. \end{theorem} \begin{proof} Without loss of generality, we suppose that $V$ is a divisor, and by replacing $V$ by the union of its Galois conjugates, we may assume that $V$ is defined over $\mathbb{Q}$. We may proceed as in the positive characteristic setting to find a curve $C\subset \mathbb{A}g$ having maximal monodromy which satisfies $T(C) \not\subset V$ for every Hecke correspondence $T$. In fact, by using the characteristic-zero Lefschetz theorem detailed in \cite[Theorem 1.2]{GorMacMorse} for smooth quasi-projective varieties, we may find a proper curve $C\subset \mathbb{A}g$ defined over $\mathbb{Q}$ with maximal monodromy which satisfies $T(C) \not \subset V$ for every Hecke correspondence $T$. We first note that for a proper curve $C\subset \mathbb{A}g$, we have \begin{equation}\label{equationupperbound} (T(C) \cdot V) = \deg(T) (C \cdot V). \end{equation} This is true when $V = H$ by Lemma \ref{ellheckedegree}. In general, we note that $\mathbb{A}g$ has Picard group $\mathbb{Z}$, and hence $(C'\cdot V) = n_{V}(C'\cdot H)$ for any proper curve $C'\subset \mathbb{A}g$, where the divisor classes $[H],[V]$ satisfy $n_V[H] = [V]$ in $\textrm{Pic}(\mathbb{A}g)$.
By Lemma \ref{pointlargedegreelargemonodromy}, there exists a Galois extension $K/\mathbb{Q}$ of degree $\geq n$ and a primitive point $x\in C(K)$ whose monodromy has index bounded by $c$ in $\mathbb{G}Sp_{2g}(\hat{\mathbb{Z}})$, where we now choose $n > c\,(C\cdot V).$ For any fixed Hecke correspondence, we have that $T(x)$ is stabilized by $\mathbb{G}al(\overline{K}/K)$, and the monodromy assumptions on $A_x$ imply that each orbit has at least $\frac{\deg(T)}{c}$ elements. The same statement holds for $\sigma(x)$ with $\sigma \in \mathbb{G}al(K/\mathbb{Q})$, and further, we have that the sets $T(x)$ and $T(\sigma(x))$ are in the same $\mathbb{G}al(\overline{\mathbb{Q}}/\mathbb{Q})$ orbit. Suppose now that $T(x)$ intersected $V$ non-trivially for some Hecke operator $T$. As we have assumed $V$ is defined over $\mathbb{Q}$, it follows that $T(x) \cap V$ is stable under $\mathbb{G}al(\overline{\mathbb{Q}}/\mathbb{Q})$. The previous paragraph implies that $T(\sigma(x)) \cap V$ has size $\geq \frac{\deg(T)}{c}$ for every $\sigma\in \mathbb{G}al(K/\mathbb{Q})$. Further, even if the same point $y$ is contained\footnote{This can happen if $\sigma_1(x)$ is isogenous to $\sigma_2(x)$.} in $T(\sigma_1(x)) \cap V$ and $T(\sigma_2(x)) \cap V$, this point contributes twice to the intersection $T(C) \cdot V$, because this implies that $T(C)$ must be singular at $y$, with branches corresponding to the branches of $C$ through $\sigma_1(x)$ and through $\sigma_2(x)$. It therefore follows that \begin{equation}\label{equationlowerbound} T(C)\cdot V \geq \frac{\deg(T)}{c}\cdot n. \end{equation} But our initial choice of $n$ was such that $n> c\,(C\cdot V)$, and so Equations \eqref{equationupperbound} and \eqref{equationlowerbound} cannot both hold simultaneously. It therefore follows that $T(x)$ must be disjoint from $V$ for every Hecke correspondence $T$, and the result follows.
\end{proof} \subsection{An analogue over $\mathbb{F}pbar$.} We will now prove that for $D\subset \mathbb{A}g$ a divisor, there exists an ordinary $x\in \mathbb{A}g(\mathbb{F}pbar)$ such that $T(x) \not\subset D$ for every prime-to-$p$ Hecke operator $T$. \begin{proof}[Proof of Theorem \ref{finitefields}] By the main result of this paper, there exists a generically ordinary curve $C\subset \mathbb{A}g$ that satisfies $T(C)\not\subset D$ for every Hecke operator $T$. An identical intersection-theoretic argument as in the proof of Theorem \ref{mainnumberfield} yields that, for a point $x\in C(\mathbb{F}pbar)$ whose minimal field of definition $\mathbb{F}_q$ has $q$ a large enough power of $p$, we have $T(x)\not\subset D$ for every prime-to-$p$ Hecke operator $T$. The result follows. \end{proof} \end{document}
\begin{document} \title{Five Equivalent Representations of a Phylogenetic Tree} \begin{abstract} A phylogenetic tree is a tree with a fixed set of leaves that has no vertices of degree two. In this paper, we axiomatically define four other discrete structures on the set of leaves. We prove that each of these structures is an equivalent representation of a phylogenetic tree. \end{abstract} \section*{Introduction} This paper is concerned with the explanation and proof of the following result. \begin{theorem}\label{thm:intro} Let $n\ge 3$ be a natural number. The following sets are in one-to-one correspondence with each other. \begin{enumerate} \item The set of trees without vertices of degree two and leaves labelled from $1$ to $n$ (also known as ``phylogenetic trees''). \item The set of collections of set-partitions of $\{1,\dots,n\}$ into at least three subsets such that every singleton (cardinality-one subset) occurs in one partition, every subset occurs in at most one partition, and the set of non-singleton subsets is closed under taking the complement. \item The set of sets of cuts of $\{1,\dots,n\}$, i.e., set-partitions into two subsets $A$, $B$, such that for any two cuts $(A_1,B_1)$ and $(A_2,B_2)$, at least one (actually exactly one) of the four sets $A_1\cap A_2$, $A_1\cap B_2$, $B_1\cap A_2$, $B_1\cap B_2$ is empty. \item The set of sets of pairs of disjoint two-element subsets of $\{1,\dots,n\}$ that fulfill axioms (X1), (X2), and (X3) in Section~\ref{ss:x}. \item The set of equivalence relations of three-element subsets of $\{1,\dots,n\}$ whose equivalence classes fulfill axioms (D1) and (D2) in Section~\ref{ss:3}. \end{enumerate} \end{theorem} \begin{figure} \caption{Starting from a tree without vertices of degree two, we can obtain a set of cuts (i.e.
set-partitions into two subsets) of its set of leaves (left subfigure), and a set of set-partitions into at least three subsets (right subfigure).} \label{fig:5tree} \end{figure} Figure~\ref{fig:5tree} shows the construction of a set of cuts and the construction of a set of set-partitions into at least three subsets, for a fixed phylogenetic tree. To obtain a set of pairs of disjoint two-element subsets, take a set of cuts, and pick, for each pair, a two-element subset on the left and a two-element subset on the right of one of the cuts. To construct an equivalence relation on three-element subsets, we recall the following property of trees: for any three distinct leaves, there is a unique vertex such that the three paths from this vertex to the three leaves are edge-disjoint (see Lemma~\ref{lem:unique_vertex}). There is a natural way to use this fact in order to define an equivalence relation on three-element subsets: two three-element subsets are equivalent if and only if the unique vertex is the same for the two triples. For larger $n$, it is not so clear that all these constructions are in bijection, i.e., that the phylogenetic tree can be uniquely recovered from its set of cuts, or from any of the other three structures. We will prove it in full detail. The proofs are combinatorial, but not all trivial. We offer an informal taxonomy of the four equivalent representations of phylogenetic trees. The collections of partitions and the sets of cuts are {\em macroscopic} pictures, in the sense that they are composed of elements of bigger scales. In contrast, the equivalence relations on three-element subsets and the crossing relations are composed of smaller-scale elements; they are {\em microscopic} pictures: the crossing relations are just quaternary relations on the set of leaves, and the equivalence relations on three-element subsets can be considered as 6-ary relations on the set of leaves.
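The tree-to-cuts construction sketched above (remove an internal edge and collect the leaves on either side) is easy to make concrete. The following Python sketch is ours, not the paper's; the edge list is a hypothetical encoding of a small tree with leaves $1,\dots,9$ and internal vertices labelled a to e.

```python
# Hypothetical edge list of a phylogenetic tree: leaves are the integers
# 1..9, internal (non-leaf) vertices are the strings 'a'..'e'.
EDGES = [(1, 'a'), (2, 'a'), (3, 'a'), ('a', 'b'), (4, 'b'), (5, 'b'),
         ('b', 'c'), ('c', 'd'), (6, 'd'), (7, 'd'), ('c', 'e'), (8, 'e'), (9, 'e')]
LEAVES = set(range(1, 10))

def component(vertex, edges):
    """Collect all vertices reachable from `vertex` using `edges`."""
    seen, stack = {vertex}, [vertex]
    while stack:
        v = stack.pop()
        for a, b in edges:
            for u, w in ((a, b), (b, a)):
                if u == v and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen

def cuts(edges, leaves):
    """One cut per internal edge (an edge with two non-leaf endpoints):
    removing the edge splits the leaves into the two clusters of the cut."""
    result = set()
    for e in edges:
        a, b = e
        if a in leaves or b in leaves:
            continue  # pendant edge: removing it only splits off one leaf
        rest = [f for f in edges if f != e]
        side = frozenset(component(a, rest) & leaves)
        result.add(frozenset({side, frozenset(leaves - side)}))
    return result

print(len(cuts(EDGES, LEAVES)))  # one cut per internal edge: 4
```

The four resulting cuts of this hypothetical tree are exactly those of the form $(1,2,3\mid 4,\dots,9)$, $(1,\dots,5\mid 6,\dots,9)$, $(1,\dots,5,8,9\mid 6,7)$, and $(1,\dots,7\mid 8,9)$.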
The partitions and the equivalence relations focus on the vertices of the phylogenetic tree, in particular the non-leaves, while the cuts and the crossing relations focus on its edges. The relevance of these four characterizations of phylogenetic trees lies in their applications to the construction of the Knudsen-Mumford moduli space of $n$-marked curves of genus zero. In fact, they allow us to give a very explicit and even elementary construction of this moduli space (in joint research in progress with H. Hauser), in contrast to the high-flown machinery in the original papers \cite{KnudsenMumford1976,Knudsen1983} and to simpler, but still quite algebraic, constructions in \cite{Keel1992,Kapranov:93,Keel_Tevelev:09,Monin_Rana:17}. Connections between moduli spaces and phylogenetic trees have also been observed in \cite{GibneyMaclagan2010,Luca}, and the cuts induced by a phylogenetic tree also play a role in \cite{SturmfelsSullivant08,GibneyMaclagan2010}. This paper has two sections. In the first section, we give the definitions of these five representations and some constructions from one to another. The second section contains the more involved constructions and the proof that all constructions are in bijection. \section{Structures and Axioms} \label{sec:axioms} In this section, we axiomatically define five discrete structures on a fixed finite set $N$ of cardinality at least 3: phylogenetic trees, collections of partitions, sets of cuts, crossing relations, and equivalence relations of triples. We will also introduce some functions converting one of these structures to another, and use them to construct examples. Other (more involved) conversion functions will be introduced in Section~\ref{sec:conversions}. \subsection{Trees} Recall that an unrooted tree is an undirected graph that is connected and has no cycles.
\begin{definition} A {\em phylogenetic tree} with leaf set $N$ is an unrooted tree $(V,E)$ without vertices of degree~2 such that $N\subset V$ is the set of leaves. We say that two phylogenetic trees $(V_1,E_1)$ and $(V_2,E_2)$ with leaf set $N$ are isomorphic if and only if there is a graph isomorphism that restricts to the identity on the subset of leaves. The set ${\cal T}_N$ is defined as the set of all isomorphism classes of phylogenetic trees with leaf set $N$. \end{definition} \begin{figure} \caption{This is a phylogenetic tree with leaf set $N=\{1,\dots,9\}$.} \label{fig:tree} \end{figure} An example of a phylogenetic tree with leaf set $N=\{1,\dots,9\}$ can be seen in Figure~\ref{fig:tree}. \subsection{Sets of Partitions} Recall that a partition of $N$ is a set of disjoint and non-empty subsets of $N$ such that their union is $N$. \begin{definition} A collection/set of partitions of $N$ is {\em phylogenetic} if it fulfills the following axioms: \begin{description} \item[(P1)] Every partition has at least 3 subsets; we also call these subsets the {\em parts}. \item[(P2)] Every one-element subset of $N$ is a part of some partition. \item[(P3)] Every subset of $N$ is a part of at most one partition. \item[(P4)] For every part $A\subset N$ of cardinality bigger than one, the complement $N\setminus A$ is also a part (necessarily of a different partition, by Axiom~(P1)). \end{description} We denote as ${\cal P}_N$ the set of all phylogenetic collections of partitions of $N$. The set $\overline{{\cal P}_N}$ is the set of all collections of partitions of $N$. \end{definition} \begin{example} \label{ex:parts} For $N=\{1,\dots,9\}$, let $P=\{ p_a,p_b,p_c,p_d,p_e\}$ be the collection of the partitions \[ p_a = \{ \{1\}, \{2\}, \{3\}, \{4,5,6,7,8,9\} \} ,\] \[ p_b = \{ \{1,2,3\}, \{4\}, \{5\}, \{6,7,8,9\} \} , \] \[ p_c = \{ \{1,2,3,4,5\}, \{6,7\}, \{8,9\} \} ,\] \[ p_d = \{ \{1,2,3,4,5,8,9\}, \{6\}, \{7\} \} , \] \[ p_e = \{ \{1,2,3,4,5,6,7\}, \{8\}, \{9\} \} .
\] It can be checked that the axioms (P1),(P2),(P3),(P4) are fulfilled. Therefore the collection $P$ is phylogenetic. \end{example} The above example can be constructed from the tree in Figure~\ref{fig:tree} in a systematic way, which is described in the following definition. \begin{definition} For every phylogenetic tree $T=(V,E)$ with leaf set $N$, we define a collection of partitions which is in bijection with the set $V\setminus N$ of non-leaves, as follows. For each non-leaf vertex $v$, and for each edge $e$ incident to~$v$, we have a subset of leaves consisting of the leaves $i$ such that the unique path from $v$ to $i$ begins with $e$. For each non-leaf vertex $v$, these subsets are the parts of a partition of $N$. The collection of partitions of $N$ is denoted by $P_T$. The function ${\cal T}_N\to \overline{{\cal P}_N}$ mapping the class of $T$ to $P_T$ is denoted by $t_{TP}$ and we call it the transformation from trees to partition collections. \end{definition} \begin{proposition} For every phylogenetic tree $T=(V,E)$ with leaf set $N$, the collection $P_T$ is phylogenetic. \end{proposition} \begin{proof} Every non-leaf has at least 3 incident edges, hence (P1) holds. Every leaf has a unique neighbor, which must be a non-leaf, because otherwise the tree would only have two vertices (which violates our assumption on the cardinality of $N$); this implies (P2). Uniqueness of the neighbor also implies (P3) for the special case of cardinality-one parts. Let $A$ be a part of $P_T$ such that $2\le |A|\le |N|-2$. Then there are non-leaves $a$, $b$ and an edge $e=\{a,b\}\in E$ such that $A$ is equal to the set of leaves $i$ such that the unique path from $a$ to $i$ contains $e$. And then $N\setminus A$ is the set of leaves $j$ such that the unique path from $b$ to $j$ contains $e$, hence $N\setminus A$ is a part of the partition corresponding to the non-leaf $b$. This shows (P4). It remains to prove Axiom (P3) for parts of cardinality at least two.
Suppose that $A$ belongs to two distinct partitions of $P_T$, corresponding to non-leaves $a$ and $a'$ respectively. Let $e=\{a,b\}$ and $e'=\{a',b'\}$ be the two edges corresponding to $A$ in the partitions corresponding to $a$ and $a'$ respectively. There is a unique path $p_{a,a'}$ between $a$ and $a'$. In the sequel, we do case distinctions on whether $e$ and $e'$ belong to the path $p_{a,a'}$ or not. \begin{enumerate} \item Neither $e$ nor $e'$ belongs to $p_{a,a'}$. In this case, we remove the edge $e$, and we obtain two components $T_b$ and $T_a$, where $T_b$ contains the elements of $A$ and $T_a$ contains the elements of $N\setminus A$. Now we remove $e'$ in $T_a$, and we further obtain two components $T_{b'}$ and $T_{a'}$, where $T_{b'}$ contains the elements from $A$. However, $T_{b'}$ and $T_b$ are distinct components, so they cannot both contain the elements from $A$. This is a contradiction. \item Both $e$ and $e'$ belong to $p_{a,a'}$. In this case, we can argue analogously, by interchanging the roles of $a$ with $b$, and the roles of $a'$ with $b'$. We remove edge $e$, obtaining components $T_a$ which contains elements in $N\setminus A$, and $T_b$ which contains elements in $A$. Similarly, there must be a unique path from $b$ to $b'$ connecting the edges $e$ and $e'$. Hence, $e'$ is in $T_b$. Now, in component $T_b$, we remove the edge $e'$, obtaining components $T_{a'}$ and $T_{b'}$, where $T_{b'}$ is the component containing elements in $N\setminus A$. However, $T_{b'}$ and $T_a$ are distinct components, hence they cannot both contain elements in $N\setminus A$. This is a contradiction. \item Only one of $e$ and $e'$ belongs to $p_{a,a'}$. Without loss of generality, assume that $e'$ belongs to the path $p_{a,a'}$ and $e$ does not. We remove edge $e$ from the tree $T$, obtaining $T_b$ and $T_a$, where the set of leaves in $T_b$ intersected with $N$ is $A$.
We remove edge $e'$ from the tree $T$, obtaining $T_{b'}$ and $T_{a'}$, where the set of leaves in $T_{b'}$ intersected with $N$ is $A$. Because $T$ is phylogenetic (so every non-leaf has degree at least~3), there is an edge $e''$ not contained in the path $p_{a,a'}$, but incident with some vertex on this path. Following this edge, we eventually arrive at some leaf $l$. Then we have that $l$ is in $T_{b'}\cap N$ but not in $T_b\cap N$. This is a contradiction. \end{enumerate} Hence, Axiom (P3) holds. \end{proof} \subsection{Sets of Cuts} In this structure, we are particularly interested in partitions that violate (P1). \begin{definition} A {\em cut} of $N$ is a partition of $N$ into two subsets $A,B$ of cardinality larger than one. The subsets $A$, $B$ are called {\em clusters}. We denote such a cut as $(A\mid B)=(B\mid A)$, and omit the curly brackets when $A$ and $B$ are given by the enumeration of their elements. The axiom for a set $C$ of cuts of $N$ to be {\itshape phylogenetic} is as follows; we denote by $cl(C)$ the set of all clusters of $C$. \begin{description} \item[(C)] For any two cuts $(A_1\mid B_1)$, $(A_2\mid B_2)$ in $C$, at least one of the following four sets is empty: $A_1\cap A_2$, $A_1\cap B_2$, $B_1\cap A_2$, $B_1\cap B_2$. \end{description} Denote as ${\cal C}_N$ the set of all phylogenetic sets of cuts of $N$. The set $\overline{{\cal C}_N}$ is the set of all sets of cuts of $N$. \end{definition} \begin{remark} One can check that we can actually replace ``at least'' by ``exactly'' in the above statement: when the axiom holds, it cannot happen that two of those four sets are empty at the same time. \end{remark} \begin{example} \label{ex:cuts} For $N=\{1,\dots,9\}$, let $C=\{ c_x,c_y,c_z,c_w\}$ be the set of cuts \[ c_x = (1,2,3\mid 4,5,6,7,8,9), \] \[ c_y = (1,2,3,4,5\mid 6,7,8,9), \] \[ c_z = (1,2,3,4,5,8,9\mid 6,7), \] \[ c_w = (1,2,3,4,5,6,7\mid 8,9). \] It can be checked that axiom (C) is fulfilled, hence $C$ is phylogenetic.
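The check left to the reader can be automated. The following Python sketch (ours, not part of the paper) tests axiom (C), in the strengthened ``exactly one'' form of the preceding remark, for the four cuts of this example.

```python
def is_phylogenetic(cuts):
    """Axiom (C) for a set of cuts, each given as a pair of frozensets:
    for any two distinct cuts, exactly one of the four intersections is empty."""
    for (A1, B1) in cuts:
        for (A2, B2) in cuts:
            if {A1, B1} == {A2, B2}:
                continue  # the same cut, possibly with sides swapped
            empties = sum(1 for S in (A1 & A2, A1 & B2, B1 & A2, B1 & B2)
                          if not S)
            if empties != 1:
                return False
    return True

c_x = (frozenset({1, 2, 3}),             frozenset({4, 5, 6, 7, 8, 9}))
c_y = (frozenset({1, 2, 3, 4, 5}),       frozenset({6, 7, 8, 9}))
c_z = (frozenset({1, 2, 3, 4, 5, 8, 9}), frozenset({6, 7}))
c_w = (frozenset({1, 2, 3, 4, 5, 6, 7}), frozenset({8, 9}))

print(is_phylogenetic([c_x, c_y, c_z, c_w]))  # True
```

For contrast, the two cuts $(1,2\mid 3,\dots,9)$ and $(1,3\mid 2,4,\dots,9)$ have all four intersections non-empty, so any set containing both fails the axiom.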
Note that the clusters in these cuts are exactly the parts that appear in some partition in Example~\ref{ex:parts} and have cardinality bigger than one. Every cut corresponds to an internal edge of the phylogenetic tree, i.e., an edge connecting two non-leaves. The clusters are just the sets of leaves of the two connected components which arise when the corresponding edge is removed. \end{example} \begin{definition} Let $T$ be a phylogenetic tree with leaf set $N$. Then $C_T$ is the set of cuts of $N$ that correspond to internal edges of $T$, with clusters being the two sets of leaves of the two components that arise when the corresponding edge is removed. The function ${\cal T}_N\to \overline{{\cal C}_N}$, $[T]\mapsto C_T$ is denoted by $t_{TC}$ and we call it the transformation from trees to sets of cuts. Let $P$ be a phylogenetic collection of partitions. The set of cuts whose clusters are exactly the parts of cardinality at least~2 is denoted by $C_P$. The function ${\cal P}_N\to \overline{{\cal C}_N}$, $P\mapsto C_P$ is denoted by $t_{PC}$ and we call it the transformation from partition collections to sets of cuts. \end{definition} \begin{proposition} For every tree $T$, we have $C_{P_T}=C_T$; in other words, we have $t_{PC}\circ t_{TP}=t_{TC}$. \end{proposition} \begin{proof} Straightforward. \end{proof} \subsection{Crossing Relations} \label{ss:x} \begin{definition} \label{def:x} A {\em crossing relation} is a set $X$ of unordered pairs of disjoint cardinality-two subsets of $N$. We write its elements as $(i,j\mid k,l)$; this notation carries the information that $i,j,k,l$ are pairwise distinct. We call such an element a {\itshape cross} (of $X$); we may interchange $i$ and $j$, or $k$ and $l$, or the two sets $\{i,j\}$ and $\{k,l\}$, without changing the element. Axioms for a crossing relation $X$ to be {\itshape phylogenetic} are as follows.
\begin{description} \item[(X1)] If $(i,j\mid k,l)$, then not $(i,k\mid j,l)$, i.e., $(i,k\mid j,l)\notin X$. \item[(X2)] If $(i,j\mid k,l)$ and $(i,j\mid k,m)$ and $l\ne m$, then $(i,j\mid l,m)$. \item[(X3)] If $(i,j\mid k,l)$ and $m$ is distinct from $i,j,k,l$, then $(i,j\mid k,m)$ or $(i,m\mid k,l)$. Note that this ``or'' means that at least one of the two relations holds, and it may happen that both hold. \end{description} Denote as ${\cal X}_N$ the set of all phylogenetic crossing relations on $N$. Denote as $\overline{{\cal X}_N}$ the set of all crossing relations on $N$. \begin{example} \label{ex:cross} Let $N:=\{1,\dots,9\}$. We define a crossing relation as follows. For any $i,j,k,l$ that are pairwise distinct, the relation $(i,j\mid k,l)$ holds if and only if one of the following statements is true: \begin{itemize} \item $i,j\in \{1,2,3\}$ and $k,l\in \{4,5,6,7,8,9\}$ ($45$ crosses); \item $i,j\in \{1,2,3,4,5\}$ and $k,l\in \{6,7,8,9\}$ ($60$ crosses); \item $i,j\in \{1,2,3,4,5,8,9\}$ and $\{k,l\}=\{6,7\}$ ($21$ crosses); \item $i,j\in \{1,2,3,4,5,6,7\}$ and $\{k,l\}=\{8,9\}$ ($21$ crosses). \end{itemize} (Its relation with Example~\ref{ex:cuts} is apparent.) In total, this crossing relation consists of $108$ crosses; the total is smaller than $45+60+21+21=147$ because some crosses fulfill more than one of the four conditions. We will see later that this crossing relation is phylogenetic. \end{example} The following definition is a generalization of the construction in Example \ref{ex:cross}. \begin{definition} For every set $C$ of cuts, we define a crossing relation $X_C$ as follows: $(i,j\mid k,l)$ is in $X_C$ if and only if $C$ contains a cut $(A\mid B)$ such that $i,j\in A$ and $k,l\in B$. The function ${\cal C}_N\to \overline{{\cal X}_N}$, $C\mapsto X_C$ is denoted by $t_{CX}$; we call it the transformation from sets of cuts to crossing relations. \end{definition} \begin{remark} The moduli space of $n$-pointed stable curves of genus zero has a natural decomposition into strata that correspond to phylogenetic trees with leaf set of cardinality $n$.
For any such stratum $T$, the crossing relation is then exactly the set of $(i,j\mid k,l)$ such that the cross ratio of the four marked points $p_i,p_j,p_k,p_l$ has value~1. As we will see, it is possible to transform a phylogenetic crossing relation into a phylogenetic tree. In the context of moduli spaces, this is equivalent to saying that we can recover the dual graph of the $n$-pointed curve from the values of its cross ratios. \end{remark} \subsection{Equivalences of Triples} \label{ss:3} A {\em triple} in $N$ is a 3-element subset of $N$. We denote the set of triples in $N$ by ${N\choose 3}$. A set $S\subset{N\choose 3}$ of triples is called {\em diverse} if it is non-empty and it fulfills the following two axioms: \begin{description} \item[(D1)] If $\{i,j,k\}\in S$ and $l\in N$, then $S$ also contains one of the triples $\{ i,j,l\}$, $\{ i,k,l\}$, or $\{j,k,l\}$. \item[(D2)] Let $a,b,c,x,y,z\in N$. If $S$ contains the triples $\{ a,x,y\}$, $\{ b,y,z\}$, and $\{ c,x,z\}$, then it also contains $\{ x,y,z\}$. \end{description} We say that an equivalence relation on ${N\choose 3}$ is {\itshape phylogenetic} if and only if the following axiom is fulfilled: \begin{description} \item[(E0)] Each class of the equivalence relation is diverse. \end{description} Denote as ${\cal E}_N$ the set of all phylogenetic equivalence relations on the triples of $N$. Denote as $\overline{{\cal E}_N}$ the set of all equivalence relations on the triples of $N$. \begin{example} \label{ex:equivsmall} Let $N=\{1,2,3,4,5\}$. We define an equivalence relation with three classes as follows: \[ \{ 1,2,3\}\sim\{ 1,2,4\}\sim\{ 1,2,5\} , \] \[ \{ 1,4,5\}\sim\{ 2,4,5\}\sim\{ 3,4,5\} , \] \[ \{ 1,3,4\}\sim\{ 1,3,5\}\sim\{ 2,3,4\}\sim\{ 2,3,5 \}. \] It can be checked that the axioms (D1) and (D2) are fulfilled in each class, hence the equivalence relation is phylogenetic. \end{example} In order to construct interesting equivalence relations of triples, we need a lemma on trees.
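The verification of axioms (D1) and (D2) for Example~\ref{ex:equivsmall} is mechanical; the following brute-force Python sketch (ours, not part of the paper) performs it on the three classes.

```python
from itertools import combinations

N = range(1, 6)

def diverse(S, N):
    """Check axioms (D1) and (D2) for a non-empty set S of triples."""
    S = {frozenset(t) for t in S}
    if not S:
        return False
    # (D1): for every triple t in S and every element l outside t, replacing
    # one of the three elements of t by l yields another triple in S.
    for t in S:
        for l in N:
            if l not in t and not any(frozenset(set(p) | {l}) in S
                                      for p in combinations(t, 2)):
                return False
    # (D2): if {a,x,y}, {b,y,z}, {c,x,z} all lie in S, then {x,y,z} does too.
    for x, y, z in combinations(N, 3):
        if frozenset({x, y, z}) in S:
            continue
        for a in N:
            for b in N:
                for c in N:
                    if (frozenset({a, x, y}) in S and
                            frozenset({b, y, z}) in S and
                            frozenset({c, x, z}) in S):
                        return False
    return True

E1 = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}]
E2 = [{1, 4, 5}, {2, 4, 5}, {3, 4, 5}]
E3 = [{1, 3, 4}, {1, 3, 5}, {2, 3, 4}, {2, 3, 5}]
print(all(diverse(E, N) for E in (E1, E2, E3)))  # True
```

Note that in (D2) it suffices to try each unordered triple $\{x,y,z\}$ once, since permuting $x,y,z$ only permutes the three pairs $\{x,y\}$, $\{y,z\}$, $\{x,z\}$, and $a,b,c$ are quantified independently.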
\begin{lemma}\label{lem:unique_vertex} Let $T=(V,E)$ be a tree, and let $i,j,k\in N$ be pairwise distinct leaves. Then there is a unique vertex $v\in V\setminus N$ such that the three paths from $v$ to $i$, $j$, and $k$ are edge-disjoint. \end{lemma} \begin{proof} Let $\Pi_{ij}$ be the unique path connecting $i$ and $j$, and let $\Pi_{ik}$ be the unique path connecting $i$ and $k$. The common edges of $\Pi_{ij}$ and $\Pi_{ik}$ also form a path, which connects $i$ to some non-leaf, and this is the vertex $v$ we are looking for. Indeed, the edges that are in $\Pi_{ij}$ but not in $\Pi_{ik}$ connect $j$ with $v$, and the edges that are in $\Pi_{ik}$ but not in $\Pi_{ij}$ connect $v$ with $k$. The property of $v$ which is claimed in the lemma implies that the common edges of $\Pi_{ij}$ and $\Pi_{ik}$ form a path from $i$ to $v$, and this implies uniqueness. \end{proof} \begin{definition}\label{def:tree_to_triples} Let $T=(V,E)\in {\cal T}_N$ be a phylogenetic tree. We define an equivalence relation $\sim_T$ on ${N\choose 3}$ as follows: $\{ i,j,k\}\sim_T \{l,m,n\}$ holds if and only if the unique non-leaf $v$ such that the three paths from $v$ to $i$, $j$, and $k$ are edge-disjoint is equal to the unique non-leaf $w$ such that the three paths from $w$ to $l$, $m$, and $n$ are edge-disjoint. The function ${\cal T}_N\to \overline{{\cal E}_N}$, $[T]\mapsto \sim_T$ is denoted by $t_{TE}$; we call it the transformation from trees to equivalence relations. \end{definition} \begin{example} \label{ex:tb} Let $N=\{1,\dots,9\}$, and let $T=(V,E)$ be the phylogenetic tree in Figure~\ref{fig:tree}.
Then the equivalence relation $\sim_T$ in ${N\choose 3}$ has five equivalence classes $E_a,\dots,E_e$, corresponding to the five non-leaves in $V\setminus N$: \begin{itemize} \item The class $E_a$ consists of all triples $\{i,j,k\}$ such that ($i,j\in\{1,2,3\}$ and $k\in\{ 4,5,6,7,8,9\}$) or ($i=1$, $j=2$, and $k=3$). These are $18+1=19$ triples. \item The class $E_b$ consists of all triples $\{i,j,k\}$ such that ($i\in\{1,2,3\}$ and $j\in\{ 4,5\}$ and $k\in\{ 6,7,8,9\}$), or ($i=4$ and $j=5$ and $k\in\{ 1,2,3,6,7,8,9\}$). These are $24+7=31$ triples. \item The class $E_c$ consists of all triples $\{i,j,k\}$ such that $i\in\{ 1,2,3,4,5\}$, $j\in\{ 6,7\}$ and $k\in\{ 8,9\}$. These are 20 triples. \item The class $E_d$ consists of all triples $\{i,6,7\}$ such that $i\in\{ 1,2,3,4,5,8,9\}$. These are 7 triples. \item The class $E_e$ consists of all triples $\{i,8,9\}$ such that $i\in\{ 1,2,3,4,5,6,7\}$. These are 7 triples. \end{itemize} Note that $19+31+20+7+7=84={9\choose 3}$, which indicates that we did not make a mistake --- every triple occurs in exactly one class. \end{example} In the next section, we will introduce the transformation between equivalences of triples and crossing relations. \section{Conversions} \label{sec:conversions} In this section, we prove that the five structures introduced in Section~\ref{sec:axioms} are equivalent. In Section~\ref{sec:axioms}, we already introduced the maps shown in Figure~\ref{fig:diag1}, and we have seen that the triangle contained in this diagram is commutative. We still have to show that the images of $t_{TC}$, $t_{CX}$, $t_{TE}$ are phylogenetic, we have to construct more conversion maps so that the diagram has directed paths between any two vertices, and we have to show that the enlarged diagram commutes.
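The class counts in Example~\ref{ex:tb} can also be confirmed by brute force. The tree topology below is an assumption reconstructed from the class descriptions (the figure itself is not reproduced here): non-leaves $a,\dots,e$, with leaves $1,2,3$ at $a$, leaves $4,5$ at $b$, leaves $6,7$ at $d$, leaves $8,9$ at $e$, and internal edges $ab$, $bc$, $cd$, $ce$. The vertex of Lemma~\ref{lem:unique_vertex} is the unique common vertex of the three pairwise paths:

```python
from itertools import combinations
from collections import defaultdict

# Tree reconstructed from the class descriptions (an assumption, since the
# figure is not shown): leaves 1..9, non-leaves 'a'..'e'.
edges = [(1, 'a'), (2, 'a'), (3, 'a'), ('a', 'b'), (4, 'b'), (5, 'b'),
         ('b', 'c'), ('c', 'd'), (6, 'd'), (7, 'd'), ('c', 'e'),
         (8, 'e'), (9, 'e')]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def path(u, v):
    """Vertex sequence of the unique u-v path in the tree (DFS)."""
    stack = [(u, [u])]
    while stack:
        x, p = stack.pop()
        if x == v:
            return p
        for y in adj[x]:
            if len(p) < 2 or y != p[-2]:   # do not walk back to the parent
                stack.append((y, p + [y]))

def median(i, j, k):
    """The unique vertex lying on all three pairwise paths."""
    (m,) = set(path(i, j)) & set(path(j, k)) & set(path(i, k))
    return m

sizes = defaultdict(int)
for t in combinations(range(1, 10), 3):
    sizes[median(*t)] += 1
# sizes maps each non-leaf to the cardinality of its class of triples.
```

Under this reconstruction the five counts come out as $19,31,20,7,7$, matching the example.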
\begin{figure} \caption{This diagram shows the conversion maps between different types of structures that have been defined in Section~\ref{sec:axioms}.} \label{fig:diag1} \end{figure} \subsection{Trees and Partitions} \begin{definition} For every set $P=\{p_1,\dots,p_m\}$ of partitions of $N$, we define the graph $G_P$ as follows. The vertex set is $N\cup P$. Two vertices $p,q\in P$ are connected by an edge if and only if there is a cut $(A\mid B)$ such that $A\in p$ and $B\in q$. A vertex $p\in P$ and a vertex $i\in N$ are connected if $\{i\}\in p$. There is no edge connecting two vertices in $N$. \end{definition} In the following, we will show that $G_P$ is a phylogenetic tree whenever $P$ is phylogenetic, and that the construction $P\to G_P$ is the inverse of $t_{TP}$. \begin{theorem} \label{thm:hh} Assume that $P$ is a phylogenetic set of partitions of $N$. Then $G_P$ is a phylogenetic tree. \end{theorem} \begin{proof}[Proof (H. Hauser)] Let $i\in N$ and $p\in P$. We claim that there is a path connecting $i$ and $p$. Let $A$ be the part in the partition $p$ that contains $i$. If $A=\{i\}$, then there is an edge connecting $i$ and $p$. If $A$ has cardinality bigger than 1, then there is a unique partition $q$ containing $N\setminus A$. It also has a unique part $B$ that contains $i$, which must be a strict subset of $A$. By induction on the cardinality of $A$, there is a path connecting $i$ and $q$. This shows the existence of a path connecting $i$ and $p$. It follows that the graph $G_P$ is connected. In order to show that $G_P$ is a tree, it suffices to show that it has no cycle. The vertices of such a cycle would have to be in $P$, because the vertices in $N$ have degree 1. Let $(p_1,\dots,p_k,p_{k+1}=p_1)$ be a cycle. For $r=1,\dots,k$, there is a unique part $A_r\in p_r$ that contains $i$. For the edge $e=p_1p_2$ with corresponding cut $(I\mid J)$, one of the two parts must contain $i$, and it must be either $A_1$ or $A_2$.
If it is $A_1$, then we have $A_{r+1}\subsetneq A_r$ for $r=1,\dots, k$ because of Axiom (P3). If it is $A_2$, then we have that $A_r\subsetneq A_{r+1}$ for $r=1,\dots, k$. Both cases lead to $A_1\subsetneq A_1$, which is a contradiction. The degree of any vertex in $P$ is equal to the number of its parts, which is at least three. Therefore, the tree $G_P$ is phylogenetic. \end{proof} If $T$ is a phylogenetic tree, then it is straightforward to check that $G_{P_T}$ is isomorphic to $T$. Also, if $P$ is a phylogenetic set of partitions, then $P_{G_P}=P$. Hence the construction $P\to G_P$ provides the inverse to $t_{TP}:{\cal T}_N\to{\cal P}_N$. \subsection{Trees and Cuts} \begin{proposition} For every phylogenetic tree $T=(V,E)$ with leaf set $N\subset V$, the set $C_T$ of cuts is phylogenetic. \end{proposition} \begin{proof} Let $(A_1\mid B_1)$ and $(A_2\mid B_2)$ be two arbitrary cuts in $C_T$, and let $e_1$ and $e_2$ be their corresponding edges. If we remove both edges from the graph, then we get exactly three components; correspondingly, we obtain three leaf sets. Each of the four sets $A_1\cap A_2$, $A_1\cap B_2$, $B_1\cap A_2$, $B_1\cap B_2$ equals one of these leaf sets, if it is not empty. Also, note that these four sets are pairwise disjoint. Therefore, at least one of these four sets must be empty. Since the two cuts were chosen arbitrarily, it follows that Axiom~(C) is fulfilled, and $C_T$ is phylogenetic. \end{proof} For the construction of the transformation $t_{CT}$ from cuts to trees, recall the following concept: if $(P,\le)$ is a finite partially ordered set, then the {\em Hasse diagram} of $(P,\le)$ is the directed graph with vertex set $P$, with an edge from vertex $a$ to vertex $b$ if and only if $a<b$ and, for all $c$ such that $a\le c\le b$, we have $a=c$ or $b=c$. We call a set with exactly one element a {\em singleton}. \begin{definition}\label{def:construction_cuts_tree} Let $C$ be a phylogenetic set of cuts.
Let $c=(A\mid B)$ be a cut in $C$. Let $V_A$ be the set of all clusters or singletons that are subsets of $A$ (including $A$ itself). Let $G_A=(V_A,E_A)$ be the Hasse diagram of $V_A$ ordered by inclusion. Let $V_B$ be the set of all clusters or singletons that are subsets of $B$ (including $B$ itself). Let $G_B=(V_B,E_B)$ be the Hasse diagram of $V_B$ ordered by inclusion. Let $G_{C,c}=(V_{C,c},E_{C,c})$ be the undirected graph with $V_{C,c}:=V_A\cup V_B$, where $E_{C,c}$ is equal to the union of $E_A$ and $E_B$ --- forgetting the direction --- plus one extra edge connecting $A$ and $B$. We call $G_{C,c}$ the {\em cut graph of $c$}. \end{definition} \begin{figure} \caption{For the set of cuts in Example~\ref{ex:cuts}: the Hasse diagrams $G_A$ and $G_B$ and the cut graph $G_{C,c_y}$.} \label{fig:hasse} \end{figure} \begin{example} Let $N=\{1,\dots,9\}$. Let $C$ be the set of cuts in Example~\ref{ex:cuts}. Recall that $c_y=(A\mid B)$ where $A=\{1,2,3,4,5\}$ and $B=\{6,7,8,9\}$. Figure~\ref{fig:hasse} shows the Hasse diagrams $G_A$ and $G_B$ and the graph $G_{C,c_y}$. \end{example} \begin{lemma} \label{lem:cut_to_tree_phylogenetic} Let $C$ be a phylogenetic set of cuts of $N$, and let $c=(A\mid B)\in C$. Then $G_{C,c}$ is a phylogenetic tree with leaf set $\{ \{i\}\mid i\in N\}$. \end{lemma} \begin{proof} Since the partially ordered sets $V_A$ and $V_B$ have a largest element, the two Hasse diagrams are connected. The extra edge connects the two Hasse diagrams, therefore $G_{C,c}$ is connected. Let $v,w\in V_A$. By Axiom~(C), the set of clusters $u$ such that $v\le u\le w$ is totally ordered by inclusion. Therefore, the Hasse diagram $G_A$ has no cycle. The same holds for the Hasse diagram $G_B$. Hence both Hasse diagrams are trees. $G_{C,c}$ is obtained by connecting two trees with one extra edge, and so $G_{C,c}$ is also a tree. Its leaves are the minimal elements of the two partial orders, which are exactly the singletons of $N$. Assume, for the sake of contradiction, that $G_{C,c}$ has a vertex of degree two.
Then there is a cluster $D$ of $C$ such that the partially ordered set $V_D$ has a ``second largest element'' $D'$, i.e., an element which is largest in the subset of elements not equal to $D$. Then $D\setminus D'$ is not empty, hence there is a singleton $\{a\}$ such that $\{a\}\subseteq D$ but $\{a\}\not\subseteq D'$, which is a contradiction. \end{proof} \begin{proposition} \label{prop:T2C} Let $T=(V,E)$ be a phylogenetic tree with leaf set $N\subset V$. Let $e=\{u,v\}\in E$ be an internal edge and $c=(A\mid B) \in C_T$ be its corresponding cut. Then the phylogenetic tree $G_{C_T,c}=(V_G,E_G)$ is isomorphic to $T$. \end{proposition} \begin{proof} We obtain two components $T_u$ and $T_v$ after removing $e$ from $T$. The set of leaves of $T_u$ is $A\cup\{u\}$, and the set of leaves of $T_v$ is $B\cup\{v\}$. We define a map $f:V\to V_G$. Let $w\in V$. \begin{enumerate} \item If $w$ is a leaf of $T$, then $f(w):=\{w\}$. \item If $w$ is an internal vertex of $T$ contained in $T_u$, then let $e'$ denote the first edge on the unique path in $T$ from $w$ to $v$. Let $(A'\mid B')$ be the corresponding cut, and assume without loss of generality that $A'\subset A$. We set $f(w):=A'$. \item Analogously, if $w$ is an internal vertex of $T$ contained in $T_v$, then let $e'$ denote the first edge on the unique path in $T$ from $w$ to $u$. Let $(A'\mid B')$ be the corresponding cut, and assume without loss of generality that $B'\subset B$. We set $f(w):=B'$. \end{enumerate} Injectivity of $f$ is a consequence of the fact that $P_T$ satisfies Axiom~(P3). We claim that $f$ is also surjective. Let $x$ be a vertex of $V_G$. If $x=\{i\}$ is a singleton, then $f(i)=x$. If $x=A_1$ is a cluster contained in $A$, then let $e_1$ be the edge corresponding to the cut $(A_1\mid N\setminus A_1)$. Then $f$ maps one of the two vertices of $e_1$ to $x$. Similarly, we can find a preimage if $x=A_1$ is a cluster contained in $B$. Therefore, $f$ provides a bijection between $V$ and $V_G$.
We claim that $f$ is a graph isomorphism. Let $v_1,v_2\in V$. If $v_1,v_2$ are both leaves, then $\{v_1,v_2\}$ is not an edge of $T$ and $\{f(v_1),f(v_2)\}$ is not an edge of $E_G$. Assume $v_1\in N$ and $v_2\not\in N$. Then $\{v_1,v_2\}$ is an edge of $T$ if and only if $v_2$ is the unique vertex adjacent to $v_1$, and this is true if and only if $f(v_2)$ is the unique minimal cluster contained in $A$ and containing $v_1$, and this is true if and only if $\{f(v_1),f(v_2)\}$ is an edge in $E_G$. The case $v_1\in B$ is treated analogously. Next, suppose that $v_1$ and $v_2$ are non-leaves of $T$ contained in $T_u$, and assume that $e'=\{v_1,v_2\}$ is an edge of $T$. Let $(A'\mid B')$ be the corresponding cut; without loss of generality, assume $A'\subset A$ and $A'\cup \{v_1\}$ is the set of leaves of one of the two components that we get when we remove $e'$ from $T$. Let $\{v_2,v_3\}$ be the first edge on the path from $v_2$ to $u$ and let $(A''\mid B'')$ be its corresponding cut such that $A''\subset A$. Then we see that $f(v_1)=A'$ and $f(v_2)=A''$. Also we know that $A'\subset A''\subset A$ and there is no other cluster of $C_T$ in between $A''$ and $A'$ with respect to inclusion. This implies that $\{f(v_1),f(v_2)\}$ is an edge in $E_G$. Conversely, if $\{f(v_1),f(v_2)\}$ is an edge in $E_G$, then the two edges corresponding to the cuts $(f(v_1)\mid N\setminus f(v_1))$ and $(f(v_2)\mid N\setminus f(v_2))$ have to be equal, and the corresponding edge is the edge $\{v_1,v_2\}$ in $T$. The case where both $v_1$ and $v_2$ are non-leaves of $T$ contained in $T_v$ is similar. If $v_1\in T_u$ and $v_2\in T_v$, then $\{v_1,v_2\}$ is an edge of $T$ if and only if $v_1=u$ and $v_2=v$, and this is true if and only if $\{f(v_1),f(v_2)\}$ is an edge in $E_G$. \end{proof} \begin{corollary} Let $T$ be a phylogenetic tree. For any two cuts $c_1,c_2\in C_T$, the cut graphs $G_{C_T,c_1}$ and $G_{C_T,c_2}$ and $T$ are all isomorphic.
\end{corollary} \begin{proof} Immediate consequence of Proposition \ref{prop:T2C}. \end{proof} \begin{lemma} Let $C$ be a phylogenetic set of cuts of $N$. Let $c=(A\mid B)\in C$ be a cut. Then $C$ is equal to $C_{G_{C,c}}$. \end{lemma} \begin{proof} Denote the last added edge in the construction of $G_{C,c}$ as $e=\{v,u\}$ and assume without loss of generality that $A$ is the leaf set of the component $T_v$ when we remove edge $e$ from $G_{C,c}$. Let $c'=(A'\mid B')$ be an arbitrary cut in $C$. If $c'=c$, then it is equal to the cut in $C_{G_{C,c}}$ corresponding to the edge $e$, hence $c'\in C_{G_{C,c}}$. Assume $c'\ne c$. Because $C$ is phylogenetic, we know that exactly one of the following four statements $A'\subset A$, $A'\subset B$, $B'\subset A$, $B'\subset B$ is true. Without loss of generality, assume that $A'\subset A$. Then $A'\in V_A$. Let $w$ be the first vertex on the unique path from $A'$ to $u$. Then we see that the corresponding cut for the edge $\{A', w\}$ is $(A'\mid B')$. Hence $(A'\mid B')\in C_{G_{C,c}}$. Because $c'$ was chosen arbitrarily, we conclude that $C\subset C_{G_{C,c}}$. Now, take any cut $c'=(A'\mid B')\in C_{G_{C,c}}$. If $c'$ is the cut corresponding to the edge $e$, then $c'=(A\mid B)=c$, which implies $c'\in C$. Assume $c'$ corresponds to some edge $e'=\{v',u'\}$ in $T_v$. Without loss of generality, we may suppose that $u'$ is on the unique path from $v'$ to $u$. Then $c'=(v'\mid N\setminus v')$. Since $v'$ is a cluster of $C$, we obtain that $c'\in C$. If $c'$ corresponds to some edge in $T_u$, we proceed analogously. Therefore we get $C_{G_{C,c}}\subset C$ and consequently $C= C_{G_{C,c}}$. \end{proof} For any phylogenetic set of cuts $C$, we can choose a cut $c\in C$. The isomorphism class of $G_{C,c}$ does not depend on the choice of $c$, so this construction provides an inverse to $t_{TC}:{\cal T}_N\to {\cal C}_N$.
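The construction of Definition~\ref{def:construction_cuts_tree} is easy to carry out mechanically. The sketch below is an illustration with four cuts read off from the tree of Example~\ref{ex:tb} (an assumption, since Example~\ref{ex:cuts} is stated elsewhere in the paper); it builds the cut graph for $c=(\{1,\dots,5\}\mid\{6,\dots,9\})$ and confirms that the result is a tree on $14$ vertices:

```python
N = frozenset(range(1, 10))
# Each cut is stored as one side; the other side is the complement.
cut_sides = [frozenset({1, 2, 3}), frozenset({1, 2, 3, 4, 5}),
             frozenset({6, 7}), frozenset({8, 9})]
clusters = set(cut_sides) | {N - s for s in cut_sides}

def hasse(side):
    """Vertices and cover edges of the Hasse diagram for one side of a cut."""
    V = ({c for c in clusters if c <= side}
         | {frozenset({i}) for i in side} | {side})
    # Cover relation of inclusion: x < y with nothing strictly in between.
    E = [(x, y) for x in V for y in V
         if x < y and not any(x < z < y for z in V)]
    return V, E

A, B = frozenset({1, 2, 3, 4, 5}), frozenset({6, 7, 8, 9})
VA, EA = hasse(A)
VB, EB = hasse(B)
V = VA | VB
E = EA + EB + [(A, B)]           # the extra edge joining the two diagrams
is_tree = len(E) == len(V) - 1   # connected by construction, so this suffices
```

With these cuts, $V_A$ and $V_B$ have $7$ vertices each, and the $13$ edges make the cut graph a tree, in accordance with Lemma~\ref{lem:cut_to_tree_phylogenetic}.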
\subsection{Cuts and Crossings} \begin{proposition} For every phylogenetic set $C$ of cuts, the crossing relation $X_C$ is phylogenetic. \end{proposition} \begin{proof} Assume that $i,j,k,l,m$ are pairwise distinct (but otherwise arbitrary) elements of $N$. Assume, for the sake of contradiction, that $(i,j\mid k,l)$ and $(i,k\mid j,l)$ are both in $X_C$. Then there is a cut $(A_1\mid B_1)$ such that $i,j\in A_1$ and $k,l\in B_1$, and another cut $(A_2\mid B_2)$ such that $i,k\in A_2$ and $j,l\in B_2$. Then all four sets $A_1\cap A_2$, $A_1\cap B_2$, $B_1\cap A_2$, and $B_1\cap B_2$ are not empty. This contradicts Axiom~(C). Hence the assumption must have been wrong, which proves that Axiom~(X1) is fulfilled. Now assume that $(i,j\mid k,l)$ and $(i,j\mid l,m)$ are both in $X_C$. By Axiom~(C), the set of all clusters that contain both $i$ and $j$ and that do not contain $l$ is totally ordered by inclusion. Let $A$ be the smallest such cluster. Then $A$ does not contain $k$ and does not contain $m$, since $A$ is contained in the cluster given by the first cut (which avoids $k$) and in the cluster given by the second cut (which avoids $m$). Then $(A\mid N\setminus A)$ is a cut with $i,j$ on the left side and $k,m$ on the right side. Hence $(i,j\mid k,m)$ is in $X_C$, and it follows that Axiom~(X2) is fulfilled. Now assume that $(i,j\mid k,l)$ is in $X_C$. Then there is a cut $(A\mid B)$ such that $i,j\in A$ and $k,l\in B$. If $m\in A$, then $(i,m\mid k,l)$ is in $X_C$, and if $m\in B$, then $(i,j\mid l,m)$ is in $X_C$. It follows that Axiom~(X3) is fulfilled, and that $X_C$ is phylogenetic. \end{proof} The following definition is only needed for the proof of Lemma~\ref{lem:partial}. \begin{definition} A {\em partial cut} of $N$ is a cut of some subset of $N$. Fix a phylogenetic crossing relation $X$. We say that a cut or a partial cut $(A\mid B)$ is {\em compatible} with $X$ if and only if for any distinct $i,j\in A$ and distinct $k,l\in B$, there is a crossing relation $(i,j\mid k,l)\in X$.
\end{definition} \begin{example} \label{ex:ibase} If $(i,j\mid k,l)\in X$, then the cross itself, considered as a partial cut, is compatible with $X$. \end{example} \begin{lemma} \label{lem:partial} Let $X$ be a phylogenetic crossing relation on $N$. Then, for any $(i,j\mid k,l)\in X$, there exists a cut $(A\mid B)$ compatible with $X$ such that $i,j\in A$ and $k,l\in B$. \end{lemma} \begin{proof} We prove that for any $n$ such that $4\le n\le |N|$, there is a partial cut $(A\mid B)$ compatible with $X$ such that $i,j\in A$ and $k,l\in B$, and $|A|+|B|=n$. We proceed by induction on $n$. For $n=4$, the statement holds by Example~\ref{ex:ibase}. Assume $5\le n\le |N|$. By induction hypothesis, there exists a compatible partial cut $(C\mid D)$ such that $i,j\in C$, $k,l\in D$, and $|C|+|D|=n-1$. Let $m\in N\setminus (C\cup D)$. We claim that either $(C\cup \{m\}\mid D)$ or $(C\mid D\cup\{m\})$ is compatible with $X$. Assume, for the sake of contradiction, that this claim is wrong. Then there exist $a,b,p\in C$ and $c,r,s\in D$ such that $a\ne b$, $r\ne s$, and the relations $(a,b\mid c,m)$ and $(p,m\mid r,s)$ do not hold. We may also assume $a\ne p$ and $r\ne c$. Since $(C\mid D)$ is compatible, it follows that $(a,p\mid r,s)$ holds. By Axiom~(X3), it follows that $(a,p\mid r,m)$ or $(p,m\mid r,s)$ holds --- but we have that $(p,m\mid r,s)$ does not hold, hence we have $(a,p\mid r,m)$. If $b=p$, then $(a,b\mid r,m)$ holds. If $b\ne p$, then we use $(b,p\mid r,s)$ and Axiom~(X3) and get $(b,p\mid r,m)$, since $(p,m\mid r,s)$ does not hold. Then, from $(a,p\mid r,m)$ and $(b,p\mid r,m)$, we obtain $(a,b\mid r,m)$ by Axiom~(X2). By the compatibility of $(C\mid D)$, we get $(a,b\mid r,c)$. By Axiom~(X2), from $(a,b\mid r,c)$ and $(a,b\mid r,m)$, we get $(a,b\mid c,m)$. This contradicts the assumption. \end{proof} We can now define the transformation $t_{XC}$ from crossing relations to sets of cuts.
\begin{definition} For any phylogenetic crossing relation $X$ on $N$, we define $t_{XC}(X)$ as the set of all cuts that are compatible with $X$ and denote it as $C_X$. \end{definition} \begin{proposition} For any phylogenetic crossing relation $X$, the set $C_X$ of compatible cuts is phylogenetic. \end{proposition} \begin{proof} Assume, for the sake of contradiction, that $C_X$ does not fulfill Axiom~(C). Then there exist cuts $(A_1\mid B_1)$, $(A_2\mid B_2)$ in $C_X$ and four elements $i\in A_1\cap A_2$, $j\in A_1\cap B_2$, $k\in B_1\cap A_2$, and $l\in B_1\cap B_2$. Because $(A_1\mid B_1)$ is compatible, we have $(i,j\mid k,l)\in X$. Because $(A_2\mid B_2)$ is compatible, we have $(i,k\mid j,l)\in X$. This contradicts Axiom~(X1). \end{proof} \begin{theorem} The two sets ${\cal X}_N$ and ${\cal C}_N$ are in bijection: the function $t_{XC}:{\cal X}_N\to {\cal C}_N$ is the inverse of the function $t_{CX}:{\cal C}_N\to {\cal X}_N$. \end{theorem} \begin{proof} Let $X\in {\cal X}_N$ and let $i,j,k,l\in N$ be pairwise distinct. If $(i,j\mid k,l)$ is in $X$, then Lemma~\ref{lem:partial} implies that there is a cut $(A\mid B)$ in $C_X$ such that $i,j\in A$ and $k,l\in B$. Therefore $(i,j\mid k,l)$ is also in $X_{C_X}$. Conversely, if $(i,j\mid k,l)$ is in $X_{C_X}$, then there is a cut $(A\mid B)$ in $C_X$ such that $i,j\in A$ and $k,l\in B$. Since $(A\mid B)$ is compatible with $X$, it follows that $(i,j\mid k,l)$ is in $X$. Let $C\in {\cal C}_N$ be a phylogenetic set of cuts. Let $(A\mid B)$ be a cut. If $(A\mid B)$ is in $C$, then all quadruples $(i,j\mid k,l)$ with $i,j\in A$ and $k,l\in B$ are in $X_C$. Hence $(A\mid B)$ is also in $C_{X_C}$. Conversely, assume that $(A\mid B)$ is in $C_{X_C}$. We pick elements $a\in A$ and $b\in B$. Let $\alpha$ be the set of clusters of $C$ that contain $a$ but not $b$, and let $\beta$ be the set of clusters of $C$ that contain $b$ but not $a$. By Axiom~(C), both sets $\alpha$ and $\beta$ are totally ordered by set inclusion.
For any $i\in A\setminus\{a\}$ and $j\in B\setminus\{b\}$, the quadruple $(a,i\mid b,j)$ is in $X_C$ because $(A\mid B)$ is compatible with $X_C$. Then, there must exist a cut $(A'\mid B')\in C$ such that $a,i\in A'$ and $b,j\in B'$. Consequently, we have a cluster $A'\in\alpha$ containing $i$ for every element $i\in A\setminus\{a\}$, and therefore the largest cluster of $\alpha$ is a superset of $A$. Similarly, we can show that the largest cluster of $\beta$ is a superset of $B$. Let $A''$ be the smallest cluster of $\alpha$ that is still a superset of $A$. We claim that $A''=A$; if this claim is true, then $(A\mid B)$ would be in $C$, which would finish the proof of the converse and of the whole theorem. To prove the claim, we assume, for the sake of contradiction, that there is an element $c\in A''\setminus A=A''\cap B$. Let $\alpha'$ be the set of clusters of $C$ that contain $a$ but neither $b$ nor $c$. This set is also totally ordered by set inclusion. For any choice of an element $k\in A\setminus\{a\}$, the quadruple $(a,k\mid b,c)$ is in $X_C$, because $(A\mid B)$ is compatible with $X_C$ and $c\in B$. Hence there exists a cut $(A'''\mid B''')$ in $C$ such that $a,k\in A'''$ and $b,c\in B'''$. In particular, $A'''$ is in $\alpha'$. Since we can vary $k$, it follows that the largest cluster of $\alpha'$ is a superset of $A$. Hence there is a superset of $A$ in $\alpha$ that does not contain $c$, which is a contradiction to the fact that the smallest cluster of $\alpha$ that contains $A$, namely $A''$, does contain $c$. \end{proof} \subsection{Partitions and Equivalences} In order to prepare for the conversion between partitions and equivalences, we prove a result which could be considered as a kind of converse of Lemma~\ref{lem:tree}. Let us say that a partition {\em separates} a triple $\{a,b,c\}\in {N\choose 3}$ if and only if $a$, $b$, and $c$ are in three pairwise distinct parts of the partition.
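The separation property can be illustrated concretely. Consider the following three partitions of $N=\{1,\dots,5\}$ (they arise from the three classes of Example~\ref{ex:equivsmall}); a brute-force check confirms that every triple is separated by exactly one of them, which is the hypothesis of the theorem below:

```python
from itertools import combinations

N = {1, 2, 3, 4, 5}
# Three partitions of N; each has at least three parts, so (P1) holds.
P = [
    [{1}, {2}, {3, 4, 5}],
    [{1, 2, 3}, {4}, {5}],
    [{1, 2}, {3}, {4, 5}],
]

def separates(p, triple):
    """True iff the three elements lie in three pairwise distinct parts."""
    return len({next(i for i, part in enumerate(p) if x in part)
                for x in triple}) == 3

unique_separation = all(
    sum(separates(p, t) for p in P) == 1
    for t in combinations(sorted(N), 3)
)
```

For instance, $\{3,4,5\}$ is separated only by the second partition, and $\{1,2,3\}$ only by the first.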
\begin{theorem} \label{thm:pisp} Let $P$ be a collection of partitions of $N$ satisfying (P1) such that for every triple in ${N\choose 3}$, there is a unique partition separating it. Then $P$ is phylogenetic. \end{theorem} \begin{remark} \label{rem:converse} The converse is also true: if a collection of partitions is phylogenetic, then it is equal to $P_T$ for some phylogenetic tree, by Theorem~\ref{thm:hh}. By Lemma~\ref{lem:unique_vertex}, it follows that every triple is separated by a unique partition in $P_T$. \end{remark} In order to prove Theorem~\ref{thm:pisp}, we need the following lemma. \begin{lemma} \label{lem:unique_pair} Let $P$ be a collection of partitions of $N$ satisfying Axiom (P1) such that for every triple in ${N\choose 3}$, there is a unique partition separating it. Let $p_1,p_2$ be two distinct partitions. Then there is a unique pair of parts $A_1\in p_1$ and $A_2\in p_2$ such that $A_1\cup A_2=N$. Moreover, if $B_1\in p_1$ is any part of $p_1$ distinct from $A_1$, and $B_2\in p_2$ is any part of $p_2$ distinct from $A_2$, then $B_1\subset A_2$, $B_2\subset A_1$, and $B_1\cap B_2 = \emptyset$. \end{lemma} \begin{proof} For $i=1,2$, let $a_i,b_i,c_i\in N$ be elements from three different parts of $p_i$. The partition separating $\{a_i,b_i,c_i\}$ is unique, therefore at least two of $a_1,b_1,c_1$ must be in the same part of $p_2$; without loss of generality, we may assume that $a_1$ and $b_1$ are in the same part. We choose $A_2$ to be this part. Similarly, we may assume that $a_2$ and $b_2$ are in the same part of $p_1$, and we choose $A_1$ to be this part. Suppose that $A_1\cup A_2\subsetneq N$. Take any $x\in N\setminus (A_1\cup A_2)$. Assume, without loss of generality, that $x$ is not in the same part with $a_1$ in $p_1$ --- otherwise, we exchange $a_1$ and $b_1$. Analogously we may assume that $x$ is not in the same part with $a_2$ in $p_2$. Then we see that the triple $\{x,a_1,a_2\}$ is separated by both partitions $p_1$ and $p_2$.
We have our contradiction. It follows that $A_1\cup A_2=N$. The second statement is a consequence of $A_1\cup A_2=N$ and the fact that any part of $p_i$ different from $A_i$ is a subset of $N\setminus A_i$, for $i=1,2$. \end{proof} \begin{remark} \label{rem:C} As a consequence of Lemma~\ref{lem:unique_pair}, the set of all parts of all partitions in $P$ fulfills an axiom that is similar to the cluster axiom (C): any two parts, whether they are in the same partition or not, are either contained one in the other, or disjoint, or their union is $N$. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:pisp}] We already know that $P$ satisfies Axiom~(P1), by assumption. Axiom (P2): let $a\in N$ be arbitrary. Let $b\in N$ be such that $b\ne a$. By Remark~\ref{rem:C}, the set of all parts of any partition that contain $a$ but not $b$ is totally ordered by inclusion. Let $A$ be the smallest such part. We claim that $A=\{a\}$. Assume, for the sake of contradiction, that $A$ contains a second element $c\ne a$. Then the triple $\{a,b,c\}$ cannot be separated by any partition, contradicting the assumption. Axiom (P3): for the sake of contradiction, assume that $A$ is a part with $|A|\geq 2$ that shows up in two distinct partitions $p_1,p_2\in P$. By Lemma \ref{lem:unique_pair}, there exists $B\in p_2$ such that $A\cup B=N$. This implies that $|p_2|=2$, which violates Axiom (P1). Axiom (P4): Suppose, for the sake of contradiction, that (P4) is not fulfilled. Let $A$ be a part of some partition $p$ of $P$, with cardinality at least 2, such that $N\setminus A$ is not a part of any partition. Let $a$ and $b$ be two distinct elements of $A$. Let ${\cal X}$ be the set of all parts that are supersets of $N\setminus A$ and do not contain $b$. To show that ${\cal X}$ is not empty, we pick an element $e\in N\setminus A$. There must be a partition $q$ separating the triple $\{a,b,e\}$. By Lemma~\ref{lem:unique_pair}, there are parts $F\in p$ and $E\in q$ such that $E\cup F=N$.
The part $A\in p$ has non-empty intersection with at least two parts of $q$, namely with the part containing $a$ and with the part containing $b$. Neither part can be a superset of $A$. Then, by Remark~\ref{rem:C}, both are subsets of $A$. This implies $F=A$ and therefore $E\cup A=N$. Then we get $e\in E$ and $b\not\in E$. So, $E\in{\cal X}$. By Remark~\ref{rem:C}, ${\cal X}$ is totally ordered by inclusion. Let $C$ be the smallest element of ${\cal X}$. Since $N\setminus A$ is not a part, there exists an element $d$ in $C\setminus (N\setminus A)=A\cap C$. Now we repeat the argument that we used above with the triple $\{d,b,e\}$. Note that $d$ and $b$ are two distinct elements in $A$, since $d$ is in some element of ${\cal X}$ while $b$ is not. Hence there must be a partition $q'$ separating the triple $\{d,b,e\}$. By Lemma~\ref{lem:unique_pair}, there are parts $F'\in p$ and $E'\in q'$ such that $F'\cup E'=N$. Then, with the analogous reasoning, we obtain that the part containing $d$ and the part containing $b$ in $q'$ are both subsets of $A$. Hence we have that $F'=A$ and $E'\cup A=N$. Also we observe that $b\notin E'$. Therefore, $E'\in {\cal X}$ and $d\notin E'$. Since ${\cal X}$ is totally ordered by inclusion and $C$ is the smallest element of ${\cal X}$, we have that $C\subset E'$. This implies $d\in E'$, which is a contradiction. \end{proof} \begin{remark} It is a fun question to ask what happens if we replace ``triples'' by ``quadruples'', ``quintuples'' etc. The second author conjectures that there are almost no collections of partitions such that each partition has four parts, and every quadruple is separated by a unique partition. More precisely, any such collection has only a single partition, where every part is a singleton. \end{remark} In order to convert triples to partitions, we also need one more definition. \begin{definition} For any partition $p$ of $N$, let $S_p$ be the set of all triples that are separated by $p$.
For any set $S$ of triples, let $G_S$ be the graph with vertex set $N$, and an edge between $i,j\in N$ if and only if no triple of $S$ contains both $i$ and $j$. Let $p_S$ be the partition of $N$ defined by the connected components of $G_S$. \end{definition} \begin{example} In Figure~\ref{fig:partition_graph} we see the graphs $G_S$ when $S$ is one of the three equivalence classes of triples in Example \ref{ex:equivsmall}. \begin{figure} \caption{The graphs $G_S$ defined by the three equivalence classes in Example~\ref{ex:equivsmall}.} \label{fig:partition_graph} \end{figure} \end{example} \begin{example} Let $p=p_a$ be as in Example~\ref{ex:parts}. Then $S_{p_a}$ is exactly the equivalence class of triples $E_a$ in Example~\ref{ex:tb}. Moreover, $p_{E_a}$ is the partition $p_a$. \end{example} \begin{lemma}\label{lem:complete_graph} Let $S$ be a diverse set of triples. Then $G_S$ is a disjoint union of complete graphs. \end{lemma} \begin{proof} Let $i,j,k\in N$ be three distinct vertices of $G_S$. Suppose that $\{i,j\}$, $\{j,k\}$ are edges of $G_S$, but $\{i,k\}$ is not an edge. Then there exists $l\in N\setminus \{i,j,k\}$ such that $\{i,k,l\}\in S$. By Axiom (D1), one of the triples $\{i,j,l\}$, $\{j,k,l\}$, or $\{i,j,k\}$ is in $S$. This contradicts the fact that both $\{i,j\}$ and $\{j,k\}$ are edges. This shows that any two vertices in the same connected component of $G_S$ are connected by an edge; the graph is a disjoint union of complete graphs. \end{proof} \begin{lemma} \label{lem:div2part} For any partition $p$ with at least three parts, we have that $S_p$ is diverse and $p_{S_p}=p$. For any diverse set $S$ of triples, the partition $p_S$ has at least three parts, and we have $S_{p_S}=S$. \end{lemma} \begin{proof} Let $p$ be a partition with at least three parts. Then we know that $S_p$ is non-empty. Assume that the triple $\{i,j,k\}$ is in $S_p$, which means that $i$, $j$, and $k$ are in three distinct parts.
A fourth element $l\in N$ can be in at most one of these three parts, hence $p$ separates the triple formed by $l$ and two of the elements $i$, $j$, $k$. Therefore $S_p$ satisfies (D1). Let $a,b,c,x,y,z\in N$. Assume that $p$ separates the triples $\{ a,x,y\}$, $\{ b,y,z\}$, and $\{ c,x,z\}$. Then $x$, $y$, and $z$ are in pairwise distinct parts, so $p$ also separates $\{x,y,z\}$. Therefore $S_p$ satisfies (D2), hence $S_p$ is diverse. Moreover, $i,j$ are in the same part of $p$ if and only if no triples in $S_p$ contain both $i$ and $j$ if and only if $i$ and $j$ are in the same component of $G_{S_p}$ if and only if $i$ and $j$ are in the same part of $p_{S_p}$. Hence $p_{S_p}=p$. Let $S$ be any diverse set of triples. If a triple $\{i,j,k\}$ is in $S$, then none of $\{i,j\}$, $\{i,k\}$ or $\{j,k\}$ is an edge of $G_S$. By Lemma \ref{lem:complete_graph}, we obtain that $i,j,k$ are in pairwise distinct components in $G_S$. Hence, $i,j,k$ are in pairwise distinct parts of $p_S$. Therefore, $\{i,j,k\}\in S_{p_S}$. For the other direction, let $\{i,j,k\}$ be any triple in $S_{p_S}$. This means that $i,j,k$ are in three pairwise distinct parts in $p_S$, i.e., $i,j,k$ are in three pairwise distinct components of the graph $G_S$. Therefore, none of $\{i,j\}$, $\{i,k\}$, $\{j,k\}$ is an edge of $G_S$. Hence, $S$ contains triples $\{i,j,a\}$, $\{i,k,b\}$ and $\{j,k,c\}$ for some $a,b,c\in N$. By Axiom (D2), $\{i,j,k\}\in S$. Hence $S_{p_S}=S$. \end{proof} Now we define the conversion from equivalences on triples to collections of partitions. For any phylogenetic equivalence relation $E$ of triples in $N$, we define $P_E$ as the collection of all partitions $p_U$ for any equivalence class $U$ of $E$. The function from ${\cal E}_N$ to $\overline{{\cal P}_N}$ that maps $E$ to $P_E$ is denoted by $t_{EP}$. \begin{lemma} Let $E$ be a phylogenetic equivalence relation of triples in $N$. Then $P_E$ is a phylogenetic set of partitions.
\end{lemma} \begin{proof} Every equivalence class $U$ contains at least one triple. This triple is separated by $p_U$, and it follows that $p_U$ must have at least three parts. Therefore $P_E$ satisfies Axiom~(P1). Every triple $\tau=\{i,j,k\}\in {N\choose 3}$ is in a unique equivalence class $U$, and $p_U$ separates $\tau$ by Lemma~\ref{lem:div2part}. Moreover, if $V$ is any equivalence class such that $p_V$ separates $\tau$, then $\tau\in S_{p_V}=V$, and therefore $U=V$. This implies that every triple is separated by a unique partition in $P_E$. By Theorem \ref{thm:pisp}, $P_E$ is phylogenetic. \end{proof} For any phylogenetic collection $P$ of partitions, each triple $\tau\in {N\choose 3}$ is separated by a unique partition in $P$, by Remark~\ref{rem:converse}. We define $E_P$ as follows: Two triples $\tau_1$ and $\tau_2$ are equivalent if and only if the unique partition separating $\tau_1$ is the same as the unique partition separating $\tau_2$. The function from ${\cal P}_N$ to ${\cal E}_N$ that maps $P$ to $E_P$ is denoted by $t_{PE}$. It is straightforward to see that $t_{TE}=t_{PE}\circ t_{TP}$. \begin{lemma} Let $P$ be a phylogenetic set of partitions of $N$. Then $E_P$ is a phylogenetic equivalence relation of triples in ${N\choose 3}$. \end{lemma} \begin{proof} For any partition $p$ in $P$, the set of triples $S_p$ is diverse by Lemma~\ref{lem:div2part}; hence Axiom~(E0) is fulfilled. \end{proof} \begin{theorem} The two sets ${\cal P}_N$ and ${\cal E}_N$ are in bijection: the function $t_{EP}:{\cal E}_N\to {\cal P}_N$, $E\mapsto P_E$, is the inverse of the function $t_{PE}:{\cal P}_N\to {\cal E}_N$, $P\mapsto E_P$. \end{theorem} \begin{proof} For any $E\in {\cal E}_N$, every class $U$ of $E$ is diverse. Then, by Lemma \ref{lem:div2part}, we have that $S_{p_U}=U$. We see that $\{S_{p_U}\}_{U\in E}$ is exactly $E_{P_E}$ --- where the subscript $U\in E$ means that $U$ is a class of $E$. Hence, we have $E_{P_E}=E$.
For any $P\in {\cal P}_N$, each partition $p\in P$ has at least three parts. By Lemma~\ref{lem:div2part}, we know that $p_{S_p}=p$. Since $\{p_{S_p}\}_{p\in P}$ is exactly the partition collection $P_{E_P}$, we obtain that $P_{E_P}=P$. \end{proof} In Figure~\ref{fig:diag2}, we display the diagram consisting of all conversion maps defined in this paper. \begin{figure} \caption{This diagram shows all conversion maps between the different types of structures that have been defined in this paper. As shown above, the triangles of the diagram commute.} \label{fig:diag2} \end{figure} \end{document}
\begin{document} \pagestyle{plain} \title{Subgraph enumeration in massive graphs\thanks{Part of this work was done while the author was working at the University of Padova. It is supported in part by University of Padova project CPDA121378, MIUR of Italy project AMANDA, and by the European Research Council grant 614331. }} \begin{abstract} We consider the problem of enumerating all instances of a given pattern graph in a large data graph. Our focus is on determining the input/output (I/O) complexity of this problem. Let $E$ be the number of edges in the data graph, $k=O(1)$ be the number of vertices in the pattern graph, $B$ be the block length, and $M$ be the main memory size. The main results of the paper are two algorithms that enumerate all instances of the pattern graph. The first one is a deterministic algorithm that exploits a suitable independent set of the pattern graph of size $1\leq s \leq k/2$ and requires $O\left(E^{k-s}/\left(BM^{k-s-1}\right)\right)$ I/Os. The second algorithm is a randomized algorithm that enumerates all instances in $O\left(E^{k/2}/\left(BM^{k/2-1}\right)\right)$ expected I/Os; the same bound also applies with high probability under some assumptions. A lower bound shows that the deterministic algorithm is optimal for some pattern graphs with $s=k/2$ (e.g., paths and cycles of even length, meshes of even side), while the randomized algorithm is optimal for a wide class of pattern graphs, called the Alon class (e.g., cliques, cycles and every graph with a perfect matching). \end{abstract} \section{Introduction} This paper targets the problem of enumerating all subgraphs of an input \emph{data graph} that are isomorphic to a given \emph{pattern graph}. Subgraph enumeration is a tool for analyzing the structural and functional properties of networks (see, e.g.,~\cite{KairamWL12,GregoriLM13}), and typical pattern graphs are cliques (e.g., triangles), cycles and paths.
Subgraph enumeration is also strictly related to the evaluation of conjunctive queries or multiway joins on a single large relation~\cite{AfratiDU13}. The aim of this paper is to assess the input/output (I/O) complexity of the enumeration problem when the data graph does not fit in the main memory. The main results of the paper are external memory (EM) algorithms for subgraph enumeration. In particular, we provide a deterministic algorithm which exploits a \emph{matched independent set} (MIS) of the pattern graph $H$, which is an independent set $S$ such that each vertex in $S$ can be matched with a vertex not in $S$. Let $E$ be the number of edges in the input data graph, $k=O(1)$ be the number of vertices in the pattern graph, $B$ be the block length, and $M$ be the main memory size. Our results are the following: \begin{enumerate} \item We give a deterministic algorithm for subgraph enumeration that exploits a MIS $S$ of the pattern graph of size $s=|S|$, with $1\leq s \leq k/2$. Its I/O complexity is $O\left(( E^{k-s} \log_M E )/(B M^{k-s-1})\right)$. As an example, let $M=\Omega(E^\epsilon)$ for some constant $\epsilon>0$: we get $O\left(E^{k-1}/(B M^{k-2})\right)$ I/Os if the pattern graph is a $k$-clique ($s=1$), and $O\left(E^{k/2}/(B M^{k/2-1})\right)$ I/Os if the pattern graph is an even length path or cycle, or a mesh of even side ($s=k/2$). \item We propose a randomized algorithm for subgraph enumeration. It exploits the random coloring technique in~\cite{PaghS13} for decomposing the problem into smaller subproblems that are solved with the above deterministic algorithm. Its expected I/O complexity is $O\left(E^{k/2}/\left(B M^{k/2-1}\right)\right)$. We show that the claimed I/O complexity is also achieved with high probability when $M=\Omega(\sqrt{E}\log E)$ by adjusting the coloring process.
We remark that the deterministic algorithm is a crucial component of the randomized one, and cannot be replaced by state-of-the-art techniques without increasing the I/O complexity. \item We discuss some related issues. We first show that the enumeration of $T$ instances of a pattern graph in the Alon class~\cite{AfratiSSU13} requires, even in the best case, $\Omega\left({T}/\left({B M^{ k/2-1}}\right)+{T^{2/k}}/{B}\right)$ I/Os. The Alon class includes important graphs like cliques, cycles and, more generally, every graph with a perfect matching. This lower bound implies that the randomized algorithm is optimal in the worst case, since a clique with $\sqrt{E}$ vertices contains $T=\Theta(E^{k/2})$ instances of any pattern graph. It also shows that the deterministic algorithm is optimal for some sparse pattern graphs (e.g., even length paths and cycles, meshes of even side) if $M=\Omega(E^\epsilon)$ for some constant $\epsilon>0$. Finally, we analyze the work complexity of our algorithms: for pattern graphs in the Alon class, the deterministic and randomized algorithms require respectively $\tilde{O}(E^{k-s}/M^{k/2-s})$ and $\tilde{O}(E^{k/2})$ total work, where the latter is within a polylogarithmic factor of the optimal bound. \end{enumerate} The assumption $k=O(1)$ is quite natural since it covers the most relevant case; however, the analyses of our algorithms do not assume $k$ to be constant and clearly state the dependency of the I/O complexities on $k$. Moreover, this paper focuses on the enumeration of edge-induced subgraphs which are isomorphic to the pattern graph; however, we claim that our algorithms can be extended even to the enumeration of vertex-induced subgraphs (see Appendix~\ref{app:induced} for more details). We do not require our algorithms to \emph{list} all instances of the pattern graph, that is, to store all instances on the external memory.
We simply consider algorithms that \emph{enumerate} instances: that is, for each instance, they call a function \texttt{emit}$(\cdot)$ with the instance as input parameter. \iffalse This is a natural assumption in external memory since it reduces the I/O complexity, and it is satisfied by many applications where instances are intermediate results pipelined to a subsequent computation and are not required to be permanently stored. As an example, consider an application for searching instances that satisfy an arbitrary user-defined function: since the properties of the function are unknown, the algorithm has to consider all instances and then ignore those that do not satisfy the function. \fi Nevertheless, our upper and lower bounds can be easily adapted to list all instances by increasing the I/O complexity by an unavoidable additive $\Theta(T/B)$ term, where $T$ is the number of instances. \section{Related work and comparison with our results} To the best of our knowledge, this is the first paper to deal with the I/O complexity of the enumeration of a generic pattern graph. Previous works have targeted the I/O complexity of triangle enumeration. An optimal algorithm requiring $O(\text{sort}(E))$ I/Os for graphs with constant arboricity is given in~\cite{GoodrichP11}; this algorithm however does not efficiently scale with larger arboricity. The works~\cite{ChuC12,HuTC13} propose algorithms for a generic data graph incurring $O(E^2/(BM))$ I/Os. In the special case where the pattern graph is a triangle, our deterministic algorithm recalls the one proposed in~\cite{HuTC13}, but it does not need to treat differently vertices of the data graph with degree $\leq M$ and with degree $>M$. The previous bound is improved to an optimal $\Theta\left(E^{3/2}/\left(B\sqrt{M}\right)\right)$ (expected) I/O complexity in~\cite{PaghS13,HuTQ15}, which respectively provide randomized and deterministic algorithms.
Our randomized algorithm extends to a generic pattern graph the random vertex coloring technique introduced in~\cite{PaghS13}. However, this paper substantially differs from~\cite{PaghS13} since novel and non-trivial results are proposed: besides specific technicalities required for the generalization of the coloring technique, we give the new deterministic algorithm based on a MIS, which is crucial for solving the small subproblems generated by the coloring technique, and we show that the I/O complexity of the randomized algorithm holds even with high probability. An algorithm for the enumeration of $k$-cliques, for a given $k\geq 3$, is given in~\cite{ChibaN85} for the RAM model, but it requires $\Omega(E^{k/2}/B)$ I/Os in a memory hierarchy. Multiway-join is a problem from database theory related to subgraph enumeration: however, the most relevant algorithms (e.g.,~\cite{NgoPRR12}) ignore the memory hierarchy and do not efficiently translate into our settings (a generous analysis would give $\Omega(E^{k/2}/B)$ I/Os). Algorithms for detecting the existence of a given pattern graph and/or for counting the number of its instances have also been widely studied (e.g.,~\cite{KolountzakisMPT12,KaneMSS12,WilliamsW13}). However, these works rely on techniques (e.g., sampling, sketches, fast matrix multiplication) that allow detecting/counting instances without explicitly materializing them, and hence cannot be used for enumeration. Subgraph enumeration has also been targeted in MapReduce. An algorithm for clique enumeration is given in~\cite{FinocchiFF14}, but it does not translate into an I/O efficient algorithm since the subproblem size cannot be tuned to fit the internal memory (unless $M=E$). Triangle and general pattern graph enumerations are targeted in~\cite{AfratiSSU13,SuriV11} and in~\cite{AfratiDU13}, respectively.
Although these results are based on partitioning techniques similar to the one used by our randomized algorithm, they assume a random input and provide weak bounds with an arbitrary input. Better worst case bounds are provided in~\cite{ParkSKP14} by exploiting the random partitioning in~\cite{PaghS13}. We remark that the I/O complexity of previous algorithms for general subgraph enumeration is $\Omega(E^{k/2}/B)$, which becomes a performance bottleneck (i.e., it dominates the $O(E^{k/2})$ work complexity) as soon as reading a memory block in external memory is $\Omega(B)$ times slower than a CPU operation. In contrast, our randomized algorithm requires a smaller amount of I/Os without increasing the work complexity, and avoids the I/O performance bottleneck even for slower external memories (i.e., until an I/O requires $O(BM^{k/2-1})$ CPU operations). \section{Preliminaries} \subsection{Models} We study our algorithms in the \emph{external memory model}, which has been widely adopted in the literature (see, e.g., the survey by Vitter~\cite{Vitter08}). The model consists of an internal memory of $M$ words and an external memory of unbounded size. The processor can only use data stored in the internal memory and move data between the two memories in blocks of $B$ consecutive words. We suppose each vertex and edge to require one memory word. The \emph{I/O complexity} of an algorithm is defined as the number of input/output blocks moved between the two memories by the algorithm. Our algorithms are aware of the memory hierarchy parameters, and can be straightforwardly adapted to a memory-cache hierarchy with an automatic replacement policy (e.g., LRU). \subsection{Notation} We denote with $G=(V, E)$ the simple and undirected input data graph. For notational convenience, whenever the context is clear we use $E$ as a shorthand for the size of set $E$ (and similarly for other sets). We denote with $\deg(v)$ the degree of a vertex $v\in V$.
We assume that the sizes of $V$ and $E$ are known, that all vertices in $V$ are labeled with a unique identifier, and that the edge set $E$ is represented with adjacency lists which are stored in consecutive memory positions and sorted by identifier. We observe that these assumptions can be guaranteed by suitably sorting and scanning the input edges without asymptotically affecting the I/O complexity of our algorithms. \begin{wrapfigure}{r}{0.25\textwidth} \centering \includegraphics[scale=.5]{mis_image.pdf} \caption{The MIS for a $3\times 3$ mesh with vertices $h_1,\ldots,h_9$ and probe indices $P(1)=6$, $P(2)=7$, $P(3)=8$, $P(4)=9$, $P(5)=3$. Grey nodes denote the MIS; dashed lines are the probe edges. Since $k=4$, the probe of $h_5$ is $h_3$, which is not in the MIS.\label{fig:pattern}} \end{wrapfigure} We denote with $H=(V_H, E_H)$ the simple and undirected pattern graph that we are looking for in the input graph $G$. Let $k=\vert V_H\vert$ and $V_H=\{h_1, \ldots, h_ k\}$. An \textit{instance} of $H$ in $G$ is a tuple $(v_1,\ldots, v_k)$ of $k$ distinct vertices of $G$ such that $(v_i, v_j)\in E$ for each edge $(h_i,h_j)\in E_H$. An instance is \textit{induced} if $(v_i, v_j)\in E$ if and only if $(h_i,h_j) \in E_H$. Namely, instances are edge-induced subgraphs of $G$, while induced instances are vertex-induced subgraphs of $G$. For a given instance we say that vertex $h_i$ (resp., edge $(h_i,h_j)$) is \textit{mapped} onto $v_i$ (resp., $(v_i,v_j)$). An instance is enumerated by calling a function \texttt{emit}$(v_1,\ldots, v_k)$, and each call performs no I/Os and requires $O(1)$ operations. \iffalse \footnote{For simplicity, we assume that the \texttt{emit}$(\cdot)$ function requires only the vertices of an instance.
However our algorithms easily adapt to the case where even the edges are required as input.} \fi We define a \emph{matched independent set $S$} (MIS) of the pattern graph $H$ to be an independent set of $H$ for which there exists in $E_H$ a matching between the $s$ vertices in $S$ and $s$ vertices in $V_H \setminus S$, with $s=|S|$. We have $1\leq s \leq k/2$. The maximum size of a MIS is $s=\lfloor k/2\rfloor$ for a cycle of length $k$ or a mesh of size $\sqrt{k}\times \sqrt{k}$, while it is $s=1$ for a $k$-clique. For a given MIS $S$, we let $h_{ k-s+1},\ldots, h_{ k}$ denote the vertices of $H$ in $S$ and assume that $h_i\in V_H\setminus S$ is matched with $h_{k-s+i}\in S$ for every $1\leq i \leq s$. Finally, we define the \emph{probe vertex} of vertex $h_i$, with $1\leq i \leq k-s$, as follows: it is $h_{k-s+i}$ if $1\leq i \leq s$ (i.e., a vertex in $S$ is the probe vertex of its companion in the matching); otherwise, it is an arbitrary neighbor vertex in $S$ if $s+1 \leq i\leq k-s$. If $h_j$ is the probe vertex of $h_i$, then we say that the \emph{probe index of $i$}, denoted with $P(i)$, is $j$ and that the \emph{probe edge} of $h_i$ is $(h_i, h_j)$ (see the example in Figure~\ref{fig:pattern}). Since we are interested in pattern graphs with a very small number of nodes, we suppose that an exhaustive search on the pattern graph is used to find a MIS of largest size; we leave as an open problem the design of an efficient algorithm for extracting a large MIS. \section{Deterministic EM Algorithm}\label{sec:det} In this section we describe the deterministic algorithm for enumerating all instances of the pattern graph $H$ by exploiting a MIS $S$ of $H$. The algorithm works for any $S$; however, the best performance is reached when $S$ is a maximum MIS. For the sake of simplicity, we assume that $s<k/2$, and hence that there exists at least one vertex in $V_H\setminus S$, say $h_{k-s}$, not matched with a vertex in $S$.
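Since $k=O(1)$, the exhaustive search for a maximum MIS mentioned above is affordable. The following Python sketch (our own illustration; the paper gives no pseudocode for this step, and the name \texttt{maximum\_mis} is hypothetical) tries candidate independent sets in decreasing size and checks the required matching into $V_H\setminus S$ with plain augmenting paths.

```python
from itertools import combinations

def maximum_mis(vertices, edges):
    """Exhaustive search for a maximum matched independent set (MIS):
    an independent set S such that E_H matches each vertex of S with a
    distinct vertex outside S.  Intended for k = O(1) pattern graphs."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def has_matching(S):
        # Bipartite matching S -> V_H \ S via augmenting paths (Kuhn's algorithm).
        match = {}  # vertex outside S -> matched vertex of S

        def augment(u, seen):
            for w in adj[u]:
                if w in S or w in seen:
                    continue
                seen.add(w)
                if w not in match or augment(match[w], seen):
                    match[w] = u
                    return True
            return False

        return all(augment(u, set()) for u in S)

    k = len(vertices)
    for s in range(k // 2, 0, -1):  # a MIS has size 1 <= s <= k/2
        for S in combinations(vertices, s):
            S = set(S)
            independent = all(v not in adj[u] for u, v in combinations(S, 2))
            if independent and has_matching(S):
                return S
    return set()
```

For a cycle of length $4$ this returns a MIS of size $2$, and for a $3$-clique a MIS of size $1$, matching the sizes stated in the text.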
The case $s=k/2$, covered in Appendix~\ref{app:detalgspecialcase}, is based on the same approach but requires some minor technicalities that increase the I/O complexity by a multiplicative factor $O(\log_M E)$. This factor is asymptotically negligible as soon as $M=\Omega( E^\epsilon)$ for some constant $\epsilon>0$. We first provide a simple high level explanation of the algorithm, and then give a more detailed description. We observe that an instance of $H$ in $G$ is uniquely defined by the mapping of the $k-s-1$ probe edges associated with $h_1, \ldots, h_{k-s-1}$ and of vertex $h_{k-s}$, since such a mapping automatically fixes the mapping of all vertices of $H$. As an example, consider again Figure~\ref{fig:pattern}: any instance of the pattern graph is uniquely given by the mapping of the probe edges $(h_1,h_6),(h_2,h_7),(h_3,h_8),(h_4,h_9)$ and of vertex $h_5$. The opposite direction is not true: a mapping may not denote an instance of $H$, since a non-probe edge of $H$ may be mapped on an edge not in $G$. The deterministic algorithm exploits these facts: it generates all mappings of the $k-s-1$ probe edges and of vertex $h_{k-s}$, and then verifies which mappings denote real instances of $H$ in $G$. The generation of all mappings is done with an I/O-efficient exhaustive search. We assume that the edges of $G$ are split into $\phi=\Theta(Ek/ M)$ chunks. Specifically, the adjacency lists of $G$ are split into $\phi$ consecutive chunks $C_i$ of size in the range $(M/(8k),M/(4 k)]$, where $1\leq i \leq \phi$ and $\phi\in [4 k E/M, 8kE/M)$. A vertex whose adjacency list is completely contained in a chunk is called \textit{complete}, and \textit{incomplete} otherwise. We require each chunk to contain at most one incomplete vertex. It can be proved that such a partition exists and can be constructed by scanning the edge set $E$.
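The chunking policy just described can be sketched as follows. This is a hedged illustration, not the paper's construction: the greedy below caps chunks at $M/(4k)$ edges and splits an adjacency list only when closing the chunk early would leave it at or below $M/(8k)$, and it elides the extra bookkeeping needed to guarantee that every chunk contains at most one incomplete vertex.

```python
def chunk_adjacency_lists(adj, M, k):
    """Split adjacency lists (dict: vertex -> sorted neighbor list) into
    chunks of at most M/(4k) edges, preferring to cut at list boundaries.
    A vertex whose list is split across chunks is 'incomplete'."""
    cap, low = M // (4 * k), M // (8 * k)
    chunks, current = [], []
    for v, neighbors in adj.items():
        edges = [(v, w) for w in neighbors]
        while edges:
            room = cap - len(current)
            if len(edges) <= room:        # the whole (remaining) list fits
                current += edges
                edges = []
            elif len(current) > low:      # close the chunk at a list boundary
                chunks.append(current)
                current = []
            else:                         # split the list: v becomes incomplete
                current += edges[:room]
                edges = edges[room:]
                chunks.append(current)
                current = []
    if current:
        chunks.append(current)
    return chunks
```

Every produced chunk except possibly the last has size in $(M/(8k), M/(4k)]$, and a list longer than the cap is spread over several consecutive chunks, mirroring how a high-degree vertex becomes the single incomplete vertex of each chunk it spans.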
The algorithm works in $\phi^{k-s-1}$ \emph{rounds}, which run over all possible ways of selecting (with repetitions) $k-s-1$ chunks from the $\phi$ chunks. In each round, the following operations are done (step numbers refer to the pseudocode in the next page): the $k-s-1$ selected chunks are loaded into internal memory (steps 1-2); by scanning the entire edge list of $G$, all edges connecting two incomplete vertices of the loaded chunks are inserted in memory, if not already in a chunk (step 3); finally, all instances of $H$ where the $i$-th probe edge is mapped on an edge in the $i$-th chunk are enumerated (step 4). This last operation proceeds in \emph{iterations} that run over all possible ways of mapping $h_{k-s}$ to a vertex $v\in V$ (note that this vertex is not fixed by the mapping of the probe edges). In each iteration (steps 4.a-4.c), the algorithm scans the adjacency list of $v$ and checks if there exists an instance where $h_{k-s}$ is mapped on $v$ and the $i$-th probe edge is mapped on an edge in the $i$-th chunk; the function \texttt{emit}$(\cdot)$ is called for each existing instance. We now provide a more detailed description of the deterministic algorithm. Consider a generic round and denote with $C_{\ell_1},\ldots, C_{\ell_{ k-s-1}}$, for suitable values of $\ell_1,\ldots, \ell_{ k-s-1}$, the $k-s-1$ selected chunks. The algorithm uses the support sets $E'$, $E''$, $E_i$ for each $1\leq i \leq k-s-1$, and $V_i$ for each $1\leq i \leq k$, which we suppose to be stored in internal memory and initially empty. Each round performs the following operations: \begin{enumerate} \item \label{s1} For each $1\leq i \leq k-s-1$, we load in memory $C_{\ell_i}$, and fill $V_i$ and $E_i$ with the vertices and edges that are contained in $C_{\ell_i}$.
Specifically, we add to $V_i$ all vertices whose adjacency list is (partially) contained in $C_{\ell_i}$, and add to $E_i$ all edges $(u,v)$ where $u\in V_i$ and $(u,v)$ appears in the (part of the) adjacency list of $u$ in $C_{\ell_i}$. \item \label{s2} For each $1\leq i\leq s$, we add to $V_{k-s+i}$ all vertices of $G$ on which $h_{k-s+i}\in S$ can be mapped assuming that the probe edge of $h_i$ is mapped onto an edge in $E_i$. Formally, each vertex $u\in V$ is added to $V_{k-s+i}$ if and only if there exists a vertex $v\in V_i$ such that $(v,u)\in E_i$. No I/Os are needed in this step since the operation can be performed by reading the chunks in internal memory.\footnote{Note that at this point all sets $V_i$, with $i \neq k-s$, are not empty because $s<k/2$. Indeed, when $s=k/2$, $V_k$ is not filled since $h_k$ is the probe vertex of $h_{k/2}$.} \item \label{s3} Edge set $E'$ is filled with all edges of $G$ connecting vertices in $(\cup_{i=1}^{k-s-1} V_i) \cup (\cup_{i=k-s+1}^k V_i)$ that are not already available in internal memory but are required for correctly enumerating instances. Formally, for each $(h_i,h_j)\in E_H$ with $1\leq i,j\leq k$ and $i,j\neq k-s$, each edge $(v,v')\in E$ is added to $E'$ if and only if $v\in V_{i}$, $v'\in V_{j}$, but $(v,v')\notin E_{i}\cup E_{j}$. (We note that an edge can be added to $E'$ although it is contained in $E_l$ for some $l\neq i,j$.) This operation can be performed by scanning once the adjacency lists of $G$. \item \label{s4} Enumerate all instances of $H$ in $G$ where vertex $h_i$ is mapped onto a vertex in $V_i$ and its probe edge onto an edge in $E_i$, for each $1\leq i \leq k-s-1$. The enumeration proceeds in $V$ iterations.
In an iteration, we set $V_{k-s}=\{v\}$, for any possible value of $v\in V$, and then the following operations are done: \begin{enumerate} \item \label{s4a} Let $E''$ be the edge set containing all edges between $v$ and vertices in $(\cup_{i=1}^{k-s-1} V_i) \cup (\cup_{i=k-s+1}^k V_i)$ which are not already in internal memory. Formally, each edge $(v,v')$ is added to $E''$ if and only if $v'\in V_i$ but $(v,v')\notin E_i$. (We note that an edge can be added to $E''$ although it is contained in $E_l$ for some $l\neq i$.) This step requires a scan of the adjacency list of $v$. \item \label{s4b} Using a naive approach (see Section~\ref{sec:ext}), enumerate in main memory all instances of $H$ in the subgraph $(\cup_{i=1}^{k} V_i, E'\cup E'' \cup (\cup_{i=1}^{k-s-1} E_i))$ of $G$ where vertex $h_{k-s}$ is mapped onto $v$, and the probe edge of $h_i$ is mapped onto an edge in $E_i$ for each $1\leq i \leq k-s-1$. \item \label{s4c} Empty the sets $V_{k-s}$ and $E''$. \end{enumerate} \item \label{s5} Empty the sets $E'$, $V_{i}$ for each $1\leq i \leq k$, and $E_{i}$ for each $1\leq i \leq k-s-1$. \end{enumerate} Correctness and I/O complexity are stated in the following theorem: \begin{theorem}\label{th:det} The above algorithm correctly enumerates all instances of a given pattern graph $H$, and its I/O complexity is $O\left((8k)^{k-s-1}\frac{E^{k-s}}{B M^{k-s-1}}\right).$ \end{theorem} \begin{proof} \emph{(Sketch)} In order to prove the correctness of the algorithm, it is necessary to prove that all instances are emitted once. As already mentioned, all instances are uniquely defined by the mapping of the probe edges of $h_1,\ldots, h_{k-s-1}$ and of the vertex $h_{k-s}$. Standard combinatorial arguments show that each one of these mappings is generated once during the execution of the algorithm.
The scan of $E$ performed at the beginning of each round and the scan of the adjacency list of vertex $v$ at the beginning of each iteration guarantee that all edges necessary for verifying that a mapping gives a correct instance of $H$ in $G$ are available in the internal memory. The amount of internal memory used in each round is at most $M$ since there are $k-s-1$ chunks of size at most $M/(4k)$ and at most $O(k^2)$ edges are added in steps 3 and 4.a. The naive enumeration in step 4.b then does not require any I/O. The I/O complexity of each round is therefore dominated by the two scans of the adjacency lists of $E$ (in step 3, and in the $V$ iterations of step 4.a). Since there are $\phi^{k-s-1} \leq (8k E/M)^{k-s-1}$ rounds, the claim follows. \qed \end{proof} \begin{proof} We first prove the correctness of the algorithm. Consider an instance $(v_1,\ldots, v_ k)$ of the pattern graph $H$ in $G$. For each $1\leq i\leq k-s-1$, let $C_{\ell_i}$ be the chunk containing $(v_i, v_{P(i)})$ with $v_i\in V_i$ (we recall that $P(i)$ is the probe index of $i$, that is, $h_{P(i)}$ is the probe vertex of $h_i$). Consider the unique round where the chunks $C_{\ell_1},\ldots, C_{\ell_{ k-s-1}}$ are loaded in memory in this order. Then, $(v_1,\ldots, v_ k)$ is correctly enumerated in the iteration where $V_{ k-s}$ is set to $v_{ k-s}$.
Indeed, all vertices and edges are available in internal memory: Step~\ref{s1} guarantees that $v_i\in V_i$ for $1\leq i \leq k-s-1$; Step~\ref{s2} adds $v_{k-s+i}$ to $V_{k-s+i}$ for $1\leq i \leq s$, since the edge $(v_i, v_{k-s+i}) \in E_i$ by assumption and $P(i)={k-s+i}$ (we note that this would not happen for $V_{k}$ when $s=k/2$, since $V_{k/2}$ is empty at this point); by Step~\ref{s3}, all edges connecting vertices in $\{v_1, \ldots, v_{k-s-1}, v_{k-s+1},\ldots, v_k\}$ are in memory (more specifically, all edges between complete vertices are already in internal memory after Step~\ref{s1}); finally, Step~\ref{s4a} guarantees that all edges between $v_{k-s}$ and $\{v_1, \ldots, v_{k-s-1}, v_{k-s+1},\ldots, v_k\}$ are in memory. The instance $(v_1,\ldots, v_ k)$ is enumerated once: indeed, the instance can be enumerated only in the unique round where the chunks $C_{\ell_1},\ldots, C_{\ell_{ k-s-1}}$ are loaded in memory in this order (a different order may enumerate an automorphism but not the same instance), and in the unique iteration where $V_{k-s}$ is set to $v_{k-s}$ (clearly, the naive approach for enumeration in Step~\ref{s4b} must emit each instance once). We now show that the total amount of required internal memory is at most $M$. The sets $V_i$ and $E_i$, for each $i\neq k-s$, have sizes at most $M/(4k)$ each, and thus at most $M(k-1)/(2k)$ memory words are required (note that the chunks $C_{\ell_1},\ldots, C_{\ell_{ k-s-1}}$ can be removed from the internal memory after Step~\ref{s1}). The size of $V_{k-s}$ is clearly one memory word. The size of $E'$ is at most $(k-1)^2$ words: indeed, an edge $(v,v')\in E$ is added to $E'$ if and only if $v\in V_{i}$, $v'\in V_{j}$, and $(v,v')\notin E_{i}\cup E_{j}$; this implies that $v$ and $v'$ are incomplete vertices, otherwise $(v,v')$ would be in $E_{i}\cup E_{j}$; then, there being at most one incomplete vertex per chunk, the claim follows.
Similarly, we have that $E''$ has size at most $k-1$ words. Then, the total amount of space is $M(k-1)/(2k)+k^2$, which is not larger than $M$ since $k\ll M$. Finally, we analyze the I/O complexity of the algorithm. The I/O cost for enumerating instances in Step~\ref{s4b} is negligible since the problem fits in memory and all operations are performed in main memory. Then the I/O complexity of each round is asymptotically upper bounded by a constant number of scans of the whole edge set $E$. Since there are $\phi^{k-s-1}\leq (8 k E/ M)^{k-s-1}$ rounds, the claimed I/O complexity follows. \qed \end{proof} \section{Randomized EM Algorithm}\label{sec:rand} We are now ready to introduce the randomized algorithm. The algorithm, by making use of the random coloring technique in~\cite{PaghS13}, decomposes the problem into small subproblems of expected size $O(M)$, which are then solved with the previous deterministic algorithm. We assume that the maximum degree of $G$ is $\sqrt{EM}$; however, in Section~\ref{sec:degree}, we show how this assumption can be removed by increasing the I/O complexity by a multiplicative factor $k^{O(k)}$. We first prove the expected I/O complexity, and then show how to obtain the bound with high probability under some assumptions in Section~\ref{sec:whp}. Let $\xi: V \rightarrow \{1, \ldots, c\}$, with $c=\sqrt{E/M}$, be a vertex coloring chosen uniformly at random from a $2(k-s+1)$-wise independent family of functions. The coloring $\xi$ partitions the edge set $E$ into $c^2$ sets of expected size $M$. For each pair of colors $\tau_1,\tau_2\in \{1, \ldots, c\}$ with $\tau_1\leq \tau_2$, we denote with $E_{\tau_1,\tau_2}$ the set containing the edges colored with $\tau_1$ and $\tau_2$, that is, $E_{\tau_1,\tau_2}=\{(u,v)\in E \vert \min\{\xi(u),\xi(v)\}=\tau_1, \max\{\xi(u),\xi(v)\}=\tau_2\}$.
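The edge partition induced by $\xi$ can be sketched as follows. For illustration we use Python's fully random generator in place of a $2(k-s+1)$-wise independent family (full independence is stronger than required), and the function name is our own.

```python
import random

def color_and_partition(vertices, edges, c, seed=0):
    """Color each vertex with a color in {1,...,c} and partition the edge
    set into buckets E_{t1,t2} keyed by the sorted pair of endpoint colors
    (t1 <= t2), as in the definition of E_{tau_1,tau_2}."""
    rng = random.Random(seed)
    xi = {v: rng.randint(1, c) for v in vertices}  # stand-in for a t-wise family
    buckets = {}
    for u, v in edges:
        t1, t2 = sorted((xi[u], xi[v]))
        buckets.setdefault((t1, t2), []).append((u, v))
    return xi, buckets
```

Every edge lands in exactly one bucket, so the buckets form a partition of $E$ into at most $\binom{c+1}{2}$ sets, each of expected size $\Theta(M)$ when $c=\sqrt{E/M}$.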
Each instance $(v_1,\ldots, v_k)$ of the pattern graph can be colored by $\xi$ in $c^k$ ways, and it is said to be \emph{$(\tau_1,\ldots,\tau_{k})$-colored} if $\xi(v_i)=\tau_i$ for each $1\leq i\leq k$. The randomized algorithm enumerates all instances by decomposing the problem into $c^k$ subproblems. Each subproblem finds all $(\tau_1,\ldots,\tau_{k})$-colored instances according to a given $k$-tuple of colors using the previous deterministic algorithm on the edge set $\cup_{\tau_i\leq \tau_j} E_{\tau_i,\tau_j}$. The algorithm is organized as follows: \begin{enumerate} \item Randomly select a coloring $\xi$ from a $2(k-s+1)$-wise independent family of functions. \item Using sorting, store the edges in $E_{\tau_1,\tau_2}$ in consecutive positions, for each color pair $(\tau_1,\tau_2)$. \item For each $k$-tuple of colors $(\tau_1,\ldots,\tau_{k})$, enumerate all $(\tau_1,\ldots,\tau_{k})$-colored instances using the algorithm in Section~\ref{sec:det} on the sets $E_{\tau_i,\tau_j}$, for each $\tau_i\leq \tau_j$. \end{enumerate} In order to bound the I/O complexity of the randomized algorithm, we introduce the following technical lemma that upper bounds the expected number $X_t$ of possible tuples of $t$ edges in $E$ that are colored in the same way by $\xi$. A closed form of this quantity is $X_t=\sum_{\tau_1\leq \tau_2, E_{\tau_1,\tau_2}\geq t} \frac{E_{\tau_1,\tau_2}!}{(E_{\tau_1, \tau_2}-t)!}$ (note that sets $E_{\tau_1,\tau_2}$ with fewer than $t$ edges do not contribute). \begin{lemma}\label{lem:fact} Let $\xi:V\rightarrow \{1,\ldots, c\}$ be chosen uniformly at random from a $2t$-wise independent family of hash functions, where $c=\sqrt{E/M}$. If $M=\Omega(t^2)$ and the maximum vertex degree in $G$ is $\sqrt{EM}$, then $\E{X_t}\leq (2t)^{t-1}EM^{t-1}$. \end{lemma} \begin{proof} We prove the claim by induction on $t$. The claim is verified for $t=1$ since $\E{X_1}=E$.
For each tuple $\mathbf{e}=(e_1,\ldots, e_t)$ of $t$ distinct edges in $E$ and for each $2\leq i\leq t$, let $Y^\mathbf{e}_i=1$ if $e_i$ lies in the same set $E_{\tau_1, \tau_2}$, for some colors $\tau_1, \tau_2$, as the edges $e_{1},\ldots, e_{i-1}$, and $Y^\mathbf{e}_i=0$ otherwise. Set $Y^\mathbf{e}_1=1$. We get $X_t=\sum_{\mathbf{e}} Y^{\mathbf{e}}_t$. Since there are at most $2t$ vertices involved and $\xi$ is $2t$-wise independent, we get $$ \Pr{Y^\mathbf{e}_t=1}\leq \left\{ \begin{array}{ll} \Pr{Y^\mathbf{e}_{t-1}=1} /c^2 & \text{if $e_t$ is not adjacent to $e_1,\ldots, e_{t-1}$}\\ \Pr{Y^\mathbf{e}_{t-1}=1}/c & \text{if $e_t$ is adjacent to $e_1,\ldots, e_{t-1}$ on one vertex}\\ \Pr{Y^\mathbf{e}_{t-1}=1} & \text{if $e_t$ is adjacent to $e_1,\ldots, e_{t-1}$ on two vertices} \end{array} \right. $$ Each $(t-1)$-tuple $\mathbf{e}'$ can be extended by at most $E$ edges that are not adjacent to $\mathbf{e}'$, by at most $2(t-1)\sqrt{EM}$ edges that are adjacent to $\mathbf{e}'$ on exactly one vertex (recall that the maximum degree of a vertex is $\sqrt{EM}$), and by at most $(t-1)(2t-3)$ edges that are adjacent to $\mathbf{e}'$ on two vertices. Therefore, we get \begin{align*} \E{X_t}&=\sum_{\mathbf{e}} \Pr{Y^\mathbf{e}_t=1} \leq \E{X_{t-1}}\left(\frac{E}{c^2} + 2(t-1)\frac{\sqrt{EM}}{c}+ (t-1)(2t-3)\right). \end{align*} Since the right-hand side is upper bounded by $2tM \E{X_{t-1}}$, the lemma follows.\qed \end{proof} We are now ready to show the correctness and I/O complexity of the randomized algorithm. \begin{theorem}\label{thm:randalg} The above randomized algorithm enumerates all instances of a given pattern graph $H$. If the maximum vertex degree of $G$ is $\sqrt{EM}$, then the expected I/O complexity of the algorithm is $ O\big((8k)^{4(k-s+1)}{E^{k/2}}/({B M^{k/2-1}})\big).
$ \end{theorem} \begin{proof} Correctness easily follows since each instance is colored with a unique color tuple $(\tau_1,\ldots,\tau_k)$ and is enumerated only in the subproblem associated with this tuple. The cost of each subproblem is given by Theorem~\ref{th:det}; for simplicity, we upper bound the cost of the deterministic algorithm by $O\big((8k)^{k-s} E^{k-s+1}/(BM^{k-s})\big)$ in order to get rid of the logarithmic term. The I/O complexity $Q(E)$ of the algorithm is upper bounded by the sum of the costs of all $c^k$ subproblems. Then, \begin{align*} Q(E) =& O\Bigg(\frac{ (8k)^{ k-s}}{B M^{k-s}}\sum_{(\tau_1,\ldots, \tau_{k})}{\Big(\sum_{\tau_i\leq \tau_j} E_{\tau_i,\tau_j}\Big)^{k-s+1}}\Bigg)\\ \leq& O\Bigg(\frac{(8k)^{2( k-s+1)}}{B M^{ k-s}}\sum_{(\tau_1,\ldots, \tau_{k})}\sum_{\tau_i\leq \tau_j}E_{\tau_i,\tau_j}^{ k-s+1}\Bigg) \\ \leq & O\Bigg(\frac{ c^{k-2} (8k)^{2(k-s+1)}}{B M^{k-s}}\sum_{\tau_1\leq \tau_2}E_{\tau_1,\tau_2}^{ k-s+1}\Bigg)\\ \leq & O\Bigg(\frac{ c^{k-2} (8k)^{3(k-s+1)}}{B M^{k-s}}\sum_{\tau_1\leq \tau_2,\, E_{\tau_1,\tau_2}\geq k-s+1}\frac{E_{\tau_1,\tau_2}!}{(E_{\tau_1,\tau_2}- k+s-1)!}\Bigg)\\ \leq & O\Bigg(\frac{ c^{k-2} (8k)^{3(k-s+1)}}{B M^{k-s}}X_{k-s+1}\Bigg). \end{align*} By the linearity of expectation, we get $ \E{Q(E)}=O\Big(\frac{ c^{k-2} (8k)^{3(k-s+1)}}{B M^{k-s}}\E{X_{k-s+1}}\Big). $ Then, by Lemma~\ref{lem:fact} and the $2(k-s+1)$-wise independence of $\xi$, the claimed result follows. \qed \end{proof} We remark that our deterministic algorithm is crucial for obtaining the claimed I/O complexity. Indeed, the algorithm used in the subproblems must require $O(M/B)$ I/Os for solving subproblems of size $\Theta(M)$ (note that subproblems may not perfectly fit the memory size). Using existing enumeration algorithms, which require $\Omega(M^{k/2}/B)$ I/Os for solving subproblems of size $\Theta(M)$, would increase the total I/O complexity by a multiplicative factor $\Omega(M^{k/2-1})$.
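To make the decomposition concrete, the subproblem driver can be sketched as follows. This is an illustrative Python sketch with our own naming; \texttt{solve\_subproblem} stands in for the deterministic algorithm of Section~\ref{sec:det}, restricted to instances colored exactly $(\tau_1,\ldots,\tau_k)$.

```python
from itertools import product

def randomized_enumerate(buckets, k, c, solve_subproblem):
    """One subproblem per k-tuple of colors; each subproblem only sees the
    edge sets E_{tau_i,tau_j} for the colors occurring in its tuple.
    buckets maps a sorted color pair to its edge list."""
    out = []
    for taus in product(range(1, c + 1), repeat=k):
        # union of E_{tau_i,tau_j} over pairs of colors in the tuple
        keys = {(min(a, b), max(a, b)) for a in taus for b in taus}
        sub_edges = [e for key in sorted(keys) for e in buckets.get(key, [])]
        out.extend(solve_subproblem(sub_edges, taus))
    return out
```

Because each instance has exactly one color tuple, it is reported by exactly one subproblem, which is the correctness argument of Theorem~\ref{thm:randalg} in miniature.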
\subsection{Getting the high probability}\label{sec:whp} If $M=\Omega(\sqrt{E}\log E)$, the randomized coloring process can be slightly modified so that the claimed I/O complexity holds with probability $1-1/\Theta(E)$. For the sake of simplicity we assume the maximum degree to be $\sqrt{EM}$, although it is possible to remove this assumption even for higher degrees by adapting the procedure described in Section~\ref{sec:degree}.\footnote{For $k=O(1)$, the procedure in Section~\ref{sec:degree} consists of repeating the randomized algorithm a constant number of times. Then, by a union bound, we get the claimed complexity.} A vertex $v\in V$ has \emph{high degree} if $ \sqrt{E}\leq \deg(v) \leq \sqrt{EM}$ and \emph{low degree} if $ \deg(v) < \sqrt{E}$. The coloring process is modified as follows. The colors of low degree vertices are assigned independently and uniformly at random. The colors of high degree vertices are set by partitioning these vertices into $c$ groups so that the sum of the degrees within each group is in $[\sqrt{EM}, 2\sqrt{EM})$, and then assigning color $i$ to the high degree vertices in the $i$-th group (this operation requires $O(1)$ sorts). Our argument relies on the technique by Janson~\cite[Theorem 2.3]{Janson04} for obtaining a strong deviation bound for sums of dependent random variables, which we recall here for completeness. Let $X = \sum_{i=1}^p Y_i$ where each $Y_i$ is a random variable with $Y_i - \E{Y_i}\leq 1$, and let $\psi=\sum_{i=1}^p \mathrm{Var}(Y_i)$. Denote by $\Delta$ the maximum degree of the dependency graph of $Y_1, \ldots, Y_p$: this is a graph with vertex set $Y=\{1, \ldots, p\}$ such that if $B \subset Y$ and $i\in Y$ is not connected to any vertex in $B$, then $Y_i$ is independent of $\{Y_j\}_{j \in B}$. Then, for any $d >0$, we have $ \Pr{X \geq (1+d) \E{X}}\leq e^{-\frac{8 d^2 \E{X}^2}{25 \Delta (\psi+d \E{X} /3)}}$. \iffalse \begin{lemma} (Theorem 2.3 in~\cite{Janson04}.)
\label{lem:janson} Let $X = \sum_{i=1}^p Y_i$ where each $Y_i$ is a random variable with $Y_i - \E{Y_i}\leq 1$, and let $\psi=\sum_{i=1}^p \mathrm{Var}(Y_i)$. Then, for any $d >0$, $ \Pr{X \geq (1+d) \E{X}}\leq e^{-\frac{8 d^2 \E{X}^2}{25 \Delta (\psi+d \E{X} /3)}}. $ \end{lemma} We are now ready to prove the theorem. \fi \begin{theorem}\label{th:whp} Let $M=\Omega(\sqrt{E} \log E)$ and let the maximum vertex degree of $G$ be $\sqrt{EM}$. Then, the I/O complexity of the above algorithm is $O\big((8k)^{6(k-s)}{E^{k/2}}/({B M^{k/2-1}})\big)$ with probability at least $1-1/E$. \end{theorem} \begin{proof} Let $E^L$ be the set of edges in $E$ connecting two low degree vertices. We also define $E^H=E\setminus E^L$, $E_{\tau_1, \tau_2}^L = E_{\tau_1, \tau_2} \cap E^L$, and $E_{\tau_1, \tau_2}^H = E_{\tau_1, \tau_2} \cap E^H$. We first show that the size of $E_{\tau_1, \tau_2}^L$, for any color pair $\tau_1, \tau_2$, is smaller than $2M$ with probability at least $1-1/(2E)$. Assume for simplicity that $\vert E^L\vert =\vert E\vert $. For each edge $e\in E^L$, define the random variable $Y_e$ to be $1$ if edge $e$ is in $E_{\tau_1, \tau_2}^L$, and $0$ otherwise. We thus have $E_{\tau_1, \tau_2}^L = \sum_{e\in E} Y_e$. Each random variable $Y_e$ depends on the at most $2\sqrt{E}$ variables associated with edges adjacent to $e$, while it is independent of the remaining ones.\footnote{Note that this would not be the case if low degree vertices were colored with $2(k-s)$-wise independent hash functions.} Since $Y_e-\E{Y_e}<1$, we use the aforementioned result by Janson with $p=E$, $\E{E_{\tau_1, \tau_2}^L}=M$, $\psi=E(1/c^2-1/c^4)<M$, $d=1$, and $\Delta= 2\sqrt{E}$. Then we get $ \Pr{E_{\tau_1, \tau_2}^L \geq 2M}\leq e^{-\frac{4 M}{25 \sqrt{E}}}. $ By a union bound, the probability that $E_{\tau_1,\tau_2}^L$ is smaller than $2M$ for every color pair is at least $1- c^2 e^{-\frac{4 M}{25 \sqrt{E}}}\geq 1-1/(2E)$ when $M=\Omega(\sqrt{E} \log E)$.
We now show that each set $E_{\tau_1, \tau_2}^H$ has size at most $16M$ with probability at least $1-1/(2E)$. There are at most $2\sqrt{M}$ high degree vertices colored with a given color. Then, there cannot be more than $4M$ edges connecting two high degree vertices in $E_{\tau_1, \tau_2}^H$. Consider now the set $E^{H*}$ of edges connecting high degree vertices of colors $\tau_1$ or $\tau_2$ to low degree vertices. We have $E^{H*}\leq 4\sqrt{EM}$. For each $e\in E^{H*}$, define the random variable $Y_e$ to be 1 if the low degree vertex gets color $\tau_1$ or $\tau_2$, and $0$ otherwise. We have $E_{\tau_1, \tau_2}^H\leq 4M + \sum_{e\in E^{H*}} Y_e$. Since the random variables may be dependent, we apply again the result by Janson with $p=4\sqrt{EM}$, $\E{\sum_{e\in E^{H*}} Y_e}\leq 8M$, $\psi=\sum_{e\in E^{H*}}\mathrm{Var}(Y_e)\leq 8M$, $d=1/2$, and $\Delta = 2 \sqrt{E}$ (since only low degree vertices are randomly colored). Then, $ \Pr{\sum_{e\in E^{H*}} Y_e \geq 12 M}\leq e^{-\frac{2 M}{25 \sqrt{E}}}. $ Then, the probability that $E_{\tau_1,\tau_2}^H$ is smaller than $16M$ for every color pair is at least $1- c^2 e^{-\frac{2 M}{25 \sqrt{E}}}\geq 1-1/(2E)$ when $M=\Omega(\sqrt{E} \log E)$. Therefore, each $E_{\tau_1,\tau_2}$ has size at most $18M$ with probability at least $1-1/E$. Since each subproblem receives at most $ k^2$ edge sets, the I/O complexity of a subproblem is $O\big((18k^2)^{k-s}(8k)^{4(k-s-1)} M/B\big)$. Since there are $c^k$ subproblems, the claimed I/O complexity follows. \qed \end{proof} We note that it is possible to color the low degree vertices with a coloring from a $2(k-s)$-wise independent family and still get the claimed I/O complexity with probability $1-1/E^\epsilon$, for $0\leq \epsilon \leq 1/4$, as soon as $M\geq E^{3/4+\epsilon}$. It suffices to use a technique by Gradwohl and Yehudayoff~\cite[Corollary 3.2]{GradwohlY08} in our argument instead of the aforementioned result by Janson~\cite[Theorem 2.3]{Janson04}.
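The deterministic grouping of high degree vertices can be sketched as follows (an illustrative Python sketch, our naming). A greedy pass closes a group as soon as its degree sum reaches $\sqrt{EM}$; since every individual high degree is at most $\sqrt{EM}$, each completed group lands in $[\sqrt{EM}, 2\sqrt{EM})$.

```python
import math

def color_high_degree(high_deg, E, M):
    """Assign colors to high degree vertices (sqrt(E) <= deg <= sqrt(EM)).
    high_deg maps vertex -> degree; returns vertex -> color (1-based)."""
    threshold = math.sqrt(E * M)
    colors, cur_sum, color = {}, 0, 1
    for v, deg in high_deg.items():
        colors[v] = color
        cur_sum += deg
        if cur_sum >= threshold:   # close the group: its sum is in [sqrt(EM), 2*sqrt(EM))
            color, cur_sum = color + 1, 0
    return colors
```

In the external-memory setting the same pass is realized with $O(1)$ sorts of the vertex set by degree, as stated in the text.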
\subsection{Removing the degree assumption}\label{sec:degree} Although the assumption in the randomized algorithm that the maximum degree in $G$ is at most $\sqrt{EM}$ is reasonable for real datasets, it can be removed by increasing the I/O complexity by a multiplicative $k^{O(k)}$ factor. We use the previous randomized algorithm as a black box and exploit a coloring technique that should not be confused with the one used inside the randomized algorithm. We denote by $V_H$ the set of \emph{very high degree} vertices in $G$ (i.e., of degree larger than $\sqrt{EM}$), and by $V_L = V\setminus V_H$ the remaining low degree vertices. We let $G_L=(V_L, E_L)$ denote the subgraph of $G$ induced by $V_L$. Let $p\in[0,k]$. Consider the following simpler problem: enumerate all instances of $H$ where $p$ given vertices of $H$, say for notational simplicity $h_{1}, \ldots, h_{p}$, are respectively mapped onto $p$ given very high degree vertices $v'_1, \ldots, v'_p$, and where the remaining vertices of $H$ are mapped onto vertices in $V_L$. Since the mapping of the first $p$ vertices is given, we assume that if $(h_i,h_j)\in E_H$ then $(v'_i, v'_j)\in E$ for any $1\leq i,j\leq p$ (this can be checked in scanning complexity). We now show that this problem reduces to the enumeration in $G_L$ of a suitable \emph{colored} pattern graph with $k'=k-p$ vertices, which can be solved with the previous randomized algorithm. Suppose that each vertex in $V_L$ is colored with a $p$-bit color, initially set to $0$. Then, for each $i\in [1,p]$ and for each vertex $v\in V_L$ adjacent to $v'_i$, we update the color of $v$ by setting its $i$-th bit to 1 (note that at the end of this operation, a vertex color can have several bits set to 1). Define the color tuple $d=(d_1, \ldots, d_{k'})$ as follows: set each term to 0; then, for each $1\leq i \leq p$ and for each $h_{p+j}$ adjacent to $h_i$ in $H$, set the $i$-th bit of $d_j$ to 1.
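The bit-coloring above can be sketched in a few lines of Python (illustrative, our naming; vertices are 0-indexed here, so $h_i$ in the text corresponds to index $i-1$):

```python
def bit_colors(V_L, neighbors_of_high, p):
    """p-bit color per low-degree vertex: bit i is set iff the vertex is
    adjacent to the i-th pinned very high degree vertex v'_i."""
    color = {v: 0 for v in V_L}
    for i in range(p):
        for v in neighbors_of_high[i]:
            if v in color:            # ignore neighbors outside V_L
                color[v] |= 1 << i
    return color

def target_color_tuple(H_edges, k, p):
    """Color tuple d for the k-p residual pattern vertices: bit i of d_j is
    set iff pattern vertex h_{p+j} is adjacent to h_i (0-indexed)."""
    d = [0] * (k - p)
    for a, b in H_edges:
        a, b = min(a, b), max(a, b)
        if a < p <= b:                # an edge between a pinned and a residual vertex
            d[b - p] |= 1 << a
    return d
```

A residual instance is then accepted only if its $j$-th vertex carries exactly the color $d_j$, which is precisely the compatibility filter used by the adapted randomized algorithm.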
Let $H'$ be the subgraph of $H$ induced by $h_{p+1},\ldots, h_k$. Then, the problem can be solved by emitting the instances $(v'_1, \ldots, v'_p, v''_1,\ldots, v''_{k'})$, where $(v''_1,\ldots, v''_{k'})$ ranges over the instances of $H'$ in $G_L$ whose vertices are colored according to $d$ (i.e., the $i$-th vertex of the instance has color $d_i$). The colored instances of $H'$ can be obtained by adapting the previous randomized algorithm to throw away instances that are not compatible with the coloring $d$. By iterating the previous technique over all values of $p$ and over all matchings of $p$ vertices of $H$ with $p$ very high degree vertices, we get the claimed result. \begin{theorem}\label{thm:rem} The above algorithm enumerates all instances of a given pattern graph $H$ and its expected I/O complexity is $ O\big(k^{5(k-s+1)}{E^{k/2}}/({B M^{k/2-1}})\big). $ \end{theorem} \begin{proof} We first show that the technique correctly enumerates all instances where $h_{1}, \ldots, h_{p}$ are respectively mapped onto the very high degree vertices $v'_1, \ldots, v'_p$, and where the remaining vertices of $H$ are mapped onto vertices in $V_L$. Consider an emitted tuple $(v'_1, \ldots, v'_p, v''_1,\ldots, v''_{k'})$. We now prove that this instance satisfies the required properties. Clearly, $v''_i\in V_L$ by construction. We now show that if $(h_i, h_j)\in E_H$ then the edge is correctly mapped onto an edge of $E$. If $1\leq i,j\leq p$ or $p+1\leq i,j\leq k$, the claim follows, respectively, from the initial assumption on $v'_1, \ldots, v'_p$ and from the correctness of the randomized algorithm. Suppose $1\leq i\leq p$ and $p+1\leq j\leq k$ (the opposite case is symmetric). Color $d_{j-p}$ must have its $i$-th bit set to $1$ since $(h_i, h_j)\in E_H$. Since the instance must respect the coloring tuple, vertex $v''_{j-p}$ has color $d_{j-p}$ and is then adjacent to $v'_i$ since the $i$-th bit is 1.
Vice versa, it can be similarly shown that all instances satisfying the desired properties are correctly enumerated. Let $r\leq 2\sqrt{E/M}$ be the number of very high degree vertices. Since, for a given $p$, the technique is invoked $r^p \frac{k!}{(k-p)!}$ times, the expected I/O complexity can be upper bounded as follows: $$ O\Bigg(\sum_{p=0}^k r^p \frac{k!}{(k-p)!} k^{4(k-p-s+1)} \frac{E^{(k-p)/2}}{BM^{(k-p)/2-1}}\Bigg)= O\Bigg(k^{5(k-s+1)}\frac{E^{k/2}}{BM^{k/2-1}} \Bigg). $$ \qed \end{proof} We note that the subsequent lower bound does not hold for the technique proposed for removing the degree assumption. Indeed, information on graph connectivity is encoded in the coloring bits, but the lower bound requires at least one memory word for each vertex or edge. However, if $k$ is a small constant, the lower bound still applies by using a memory word instead of a single bit. \section{Further Extensions}\label{sec:ext} \subsubsection{Lower Bound on I/O Complexity} We now describe a lower bound on the I/O complexity of any algorithm that enumerates $T$ instances of a pattern graph in the class of graphs named the \emph{Alon class}~\cite{AfratiSSU13}. A graph in the Alon class has the property that its vertices can be partitioned into disjoint sets such that the subgraph induced by each partition is either a single edge or contains an odd-length Hamiltonian cycle. As in previous works~\cite{HuTC13,PaghS13} on triangle enumeration, we assume that each edge or vertex requires at least one memory word. That is, at any point in time there can be at most $M$ edges/vertices in memory, and an I/O can move at most $B$ edges/vertices to or from memory. This assumption is similar to the indivisibility assumption, which is common in lower bounds on the I/O complexity. By mimicking the argument in~\cite{PaghS13} for triangle enumeration, it can be proved that the enumeration requires $\Omega\big({T}/({B M^{ k/2-1}})+{T^{2/k}}/{B}\big)$ I/Os.
The claim follows from the fact that there cannot be more than $\Theta(m^{k/2})$ instances of a subgraph in the Alon class in a graph with $m$ edges~\cite{Alon81}. (For the sake of completeness, we provide the proof in Appendix~\ref{app:lb}.) When $k=O(1)$, the lower bound shows that our randomized algorithm is optimal for any pattern graph, while the deterministic algorithm is optimal if $s=k/2$ and $M=E^\epsilon$ for some constant $\epsilon>0$. Indeed, if the data graph is a complete graph with $\sqrt{E}$ vertices, there exist $T=\Theta(E^{k/2})$ instances of any pattern graph with $k$ vertices. \subsubsection{Work Complexity} We analyze the work complexity when the pattern graph is in the Alon class and $k=O(1)$. By using the ideas in~\cite[Theorem~6.2]{AfratiDU13}, the enumeration (in internal memory) within each iteration of the deterministic algorithm can be performed in $\tilde{O}(M^{k/2-1})$ work. Then the total work of the deterministic algorithm is $\tilde{O}(E^{k-s}/M^{k/2-s})$. As a consequence, the expected work of the randomized algorithm becomes $\tilde{O}(E^{k/2})$, which is just a polylog factor away from the optimum, since instances in the Alon class (e.g., cliques) can appear $\Theta(E^{k/2})$ times in the worst case. To the best of our knowledge, the only algorithm for enumerating a generic pattern graph which does not belong to the Alon class is a brute-force approach. In this case, the deterministic algorithm requires $\tilde{O}(E^{k-s})$ work, since Step~\ref{s4b} can be performed in $\tilde{O}(M^{k-s-1})$ work using the brute-force approach; the expected work of the randomized algorithm then becomes $\tilde{O}(E^{k/2}M^{k/2-s})$. In this case the work may become the main bottleneck in a practical implementation. \section{Conclusion} \iffalse For the sake of simplicity, we do not exploit automorphisms in the pattern graph.
A technique for exploiting automorphisms in subgraph enumeration is proposed in~\cite{AfratiDU13} and can be applied to our algorithms, improving the exponent of the $k^{O(k)}$ terms in the I/O complexities. ---- We have proposed upper and lower bounds on the I/O complexity of enumerating all instances of a given pattern graph with $k$ vertices. In particular, we have given a randomized algorithm requiring $O(E^{k/2}/(BM^{k/2-1}))$ expected I/Os when $k=O(1)$. A nice property of this algorithm is that it decomposes the problem into a large number of independent subproblems. The algorithm can thus be easily parallelized in shared-memory (e.g., parallel external memory~\cite{ArgeGNS08}, multicores~\cite{ChowdhuryRSB13}) and distributed-memory (e.g., MapReduce~\cite{PietracaprinaPRSU12}) models. \fi The worst case complexities of our algorithms have an exponential dependency on the number of vertices $k$ of the pattern graph, and they are thus mainly of theoretical interest. The lower bound shows that this is the best result in the worst case under standard assumptions. However, some experiments~\cite{ParkSKP14} on related MapReduce algorithms for triangle enumeration show interesting performance and seem to suggest that the analysis of our algorithms can be improved by expressing the complexities as a function of some properties of the input graph (e.g., arboricity) or of the output. An output sensitive algorithm for triangle enumeration has recently been proposed by Bj{\"o}rklund et al.~\cite{BjorklundPWZ14} in the RAM model; however, the problem remains open in the external memory model for the enumeration of an arbitrary subgraph, as well as for triangle enumeration. \textbf{Acknowledgments.} The author would like to thank Rasmus Pagh and Andrea Pietracaprina for useful discussions.
\section*{Appendix} \subsection{Deterministic EM Algorithm when $s=k/2$}\label{app:detalgspecialcase} We now explain how to extend the algorithm to the case $s=k/2$, that is, when all vertices in $V_H\setminus S$ are matched with a vertex in $S$. Note that in this case $k$ must be even. We recall that, with our notation, $h_i$ is matched with $h_{k/2+i}$ under $S$ for each $1\leq i \leq k/2$. Let $\Gamma_k$ denote the set of indices of vertices in $V_H\setminus \{h_{k/2}\}$ adjacent to $h_k$ (i.e., $i\in \Gamma_k$ if and only if $i<k/2$ and $(h_i,h_{k})\in E_H$). We observe that in the previous algorithm the vertex set $V_{k}$ is empty in Step~\ref{s4b}, since $h_k$ is the probe vertex of $h_{k/2}$ and thus $V_k$ is not filled in Step~\ref{s2}. If there are no incomplete vertices in each chunk, then the previous algorithm can be fixed by filling $V_k$ in Step~\ref{s2} with the vertices that are connected to a vertex in $V_j$ for every $j\in \Gamma_k$. Indeed, these are the only possible vertices onto which $h_{k}$ can be mapped when each vertex $h_i$ with $1\leq i < k/2$ is mapped onto a vertex in $V_i$. This operation requires no I/Os, since all adjacency lists in each chunk are completely contained in internal memory, and hence the upper bound in Theorem~\ref{th:det} still applies. Instead of proving this claim, we propose a more general approach that holds even with incomplete vertices. Two major changes are required in the deterministic algorithm. The first one allows us to correctly enumerate all instances where at least one vertex in $\{h_i : i\in \Gamma_k\}$ is mapped onto a complete vertex. The second change, which is more involved, allows us to enumerate all instances where each vertex in $\{h_i : i\in \Gamma_k\}$ is mapped onto an incomplete vertex. \emph{First change.} In Step~\ref{s2}, we add to $V_k$ all vertices $v$ which are neighbors of complete vertices in $V_j$ for some $j\in \Gamma_k$.
Specifically, for each edge $(u,v)$ in $E_j$ with $j\in \Gamma_k$, $u\in V_j$ and $u$ complete, vertex $v$ is added to $V_k$. As we will see in the main proof, this change allows us to enumerate all instances where at least one vertex in $\{h_i : i\in \Gamma_k\}$ is mapped onto a complete vertex. For clarity, consider the following example. Let $h_1$ be adjacent to $h_k$ and let $v$ be a complete vertex in $V_1$. If $h_1$ is mapped onto $v$, the possible vertices onto which $h_k$ can be mapped are given by the adjacency list of $v$, which is entirely in memory (thus, $V_k$ is set to these vertices). Then, to complete the enumeration in the current round, we have to insert into $E'$ each edge connecting an incomplete vertex in some $V_j$, with $j\in \Gamma_k$, to a vertex in $V_k$ (this operation is performed by Step~\ref{s3} without further modifications). \emph{Second change.} We add a new operation before Step~\ref{s5}, but outside the iteration loop in Step~\ref{s4}. This operation is performed only if there exists an incomplete vertex in each chunk $C_{\ell_i}$ with $i\in \Gamma_k$; let $v'_1, \ldots, v'_{|\Gamma_k|}$ be these vertices (otherwise, there would be no instances where each vertex in $\{h_i : i\in \Gamma_k\}$ is mapped onto an incomplete vertex, and this modification would be useless). The algorithm computes a set $V'$, stored in external memory since it may exceed the internal memory size, containing all vertices that are connected to all the vertices $v'_1, \ldots, v'_{|\Gamma_k|}$ in $G$; this set can be computed by merging the adjacency lists of $v'_1, \ldots, v'_{|\Gamma_k|}$ and keeping only the vertices that appear $|\Gamma_k|$ times. Then, using sorting, we compute a new edge list $\hat E$ containing all edges with at least one endpoint in $V'$. For each edge in $\hat E$, we call the vertex in $V'$ \emph{linked}.
We denote by $\hat V$ the vertices in $\hat E$ and require $\hat E$ to be stored as a collection of adjacency lists. Subsequently, the algorithm enumerates all instances where vertex $h_{k/2}$ is mapped onto a vertex in $\hat V$, its probe edge onto an edge in $\hat E$, $h_{k}$ onto the linked vertex of this edge (i.e., onto a vertex in $V'$), and $h_i$ onto the incomplete vertex in $V_i$ for each $i\in \Gamma_k$. The enumeration is performed in iterations as in the previous algorithm; in each iteration the algorithm maps $h_{k/2}$ onto a vertex of $\hat V$ and loads its adjacency list into memory (if the adjacency list is too long, we split it into segments of size $M/(8k)$). (Clearly, as for every instance enumerated in the current round, we also require that vertex $h_i$ is mapped onto a vertex in $V_i$ and its probe edge onto an edge in $E_i$, for any $1\leq i \leq k/2-1$.) We observe that there is no need to load into memory the adjacency lists of the incomplete vertices in $\{V_i : i\in \Gamma_k\}$, since each vertex in $V'$ is connected to all of them by construction. The I/O complexity of the algorithm is bounded by the following theorem. \begin{theorem} The above algorithm correctly enumerates all instances of a given pattern graph $H$ and its I/O complexity is $O\big((8k)^{k-s-1}\frac{E^{k-s}}{B M^{k-s-1}} \log_M E\big).$ \end{theorem} \begin{proof} Consider the first change. In Step~\ref{s3}, we load into memory each edge between incomplete vertices in $\cup_{i\in \Gamma_k} V_i$ and vertices in $V_k$. By mimicking the proof of Theorem~\ref{th:det}, it can be shown that the algorithm correctly enumerates all instances where vertex $h_i$ is mapped onto a vertex in $V_i$ and its probe edge onto an edge in $E_i$, for any $1\leq i \leq k/2-1$, \emph{and} at least one vertex in $\{h_i : i\in \Gamma_k\}$ is mapped onto a complete vertex.
However, it may happen that instances where all vertices in $\{h_i : i\in \Gamma_k\}$ are mapped onto incomplete vertices are not enumerated, since some edges could be missing in $E'$. This is fixed by the second change. Indeed, the construction of $\hat E$ guarantees that, for each $(u,v)\in \hat E$ where $v$ is marked as linked, the vertex $v$ is connected to every incomplete vertex in $\{V_i : i\in \Gamma_k\}$. Therefore, as soon as $h_i$ is mapped onto the incomplete vertex in $V_i$, with $i\in \Gamma_k$, and $h_k$ is mapped onto $v$, the edge dependencies are correctly enumerated (even if the edge information is not currently available in internal memory). Finally, we note that the first change does not increase the I/O complexity and loads into memory at most $|\Gamma_k| \cdot M/(2k)\leq M/4$ additional edges/vertices. The second change requires $O((E/B) \log_M E)$ I/Os per round (i.e., sorting complexity) and loads into memory at most $M/(8k)$ edges per iteration. Since the space used by the first change can be deallocated before the operations required by the second change start, the total amount of internal memory never exceeds $M$ (recall that, as shown in the previous theorem, the base algorithm requires about $M(k-1)/(2k)$ words of internal memory). The claimed I/O complexity easily follows. \qed \end{proof} We observe that the deterministic algorithm requires a MIS $S$ of the pattern graph (i.e., each vertex in $S$ is matched with a vertex not in $S$) in order to correctly enumerate instances with incomplete vertices. As an example, consider the following case. Let the pattern graph be a path of length 3, let $h_1$ be adjacent to the vertices $h_{0}$ and $h_{2}$, and let $S=\{h_0, h_2\}$ be a standard independent set of $H$ (note that it is not matched). Suppose that there exists an instance where the vertices $v, v', v''$ are mapped onto $h_0, h_{1}, h_{2}$, respectively.
If $v'$ is incomplete and the edges $(v,v')$ and $(v',v'')$ are in distinct chunks, then the two edges may never be in internal memory at the same time, and thus the instance cannot be emitted. This problem disappears if the maximum degree of the input data graph is $O(M/k)$, since then there are no incomplete vertices and all edges connected to a vertex are available within a single chunk. In this case, it can be proved that the I/O complexity reduces to $O((8k)^{k-s'-1}E^{k-s'}/(BM^{k-s'-1}))$ I/Os, where $s'$ is the size of a traditional independent set $S'$ of the pattern graph. This implies that it is possible to go below the $O(E^{k/2}/(BM^{k/2-1}))$ bound if $s'\geq k/2$, as in stars, paths of odd length, or meshes with odd side. Clearly, for these patterns the lower bound in Section~\ref{sec:ext} does not apply, since they are not in the Alon class. \subsection{Lower bound on the I/O Complexity}\label{app:lb} The proof mimics the argument in~\cite{PaghS13} for triangle enumeration, but exploits the fact that there cannot be more than $\Theta(m^{k/2})$ instances of a subgraph in the Alon class in a graph with $m$ edges~\cite{Alon81}. The execution of an algorithm on a memory of size $M$ can be simulated, without increasing the I/O complexity, on a memory of size $2M$ so that the computation proceeds in rounds. In each round (with the possible exception of the last round) there are $\Theta(M/B)$ I/Os, and memory blocks are read from (resp., written to) the external memory only at the beginning (resp., end) of a round. (We refer to~\cite{PaghS13} for more details on the simulation.) By the aforementioned result on the Alon class, $\Theta(M^{k/2})$ instances can be enumerated in a round, since there are at most $2M$ edges in memory. Then, there must be at least $\lfloor T/\Theta(M^{k/2})\rfloor$ rounds. Since each round needs $\Theta(M/B)$ I/Os, we get the first part of the claim.
The second term follows since $\Omega(T^{2/k})$ input edges must be read to enumerate $T$ distinct instances. \qed \subsection{Enumeration of Induced Subgraphs}\label{app:induced} The deterministic and randomized algorithms can be easily adapted to enumerate all \emph{induced} instances of a given subgraph. The I/O complexity of the deterministic algorithm increases asymptotically, while the I/O complexity of the randomized algorithm shows only a small increase in the exponent of the $k^{O(k)}$ term. It suffices to run the deterministic algorithm as if the subgraph were a $k$-clique (in this case $s=1$, and hence we can use the simple deterministic algorithm bounded in Theorem~\ref{th:det}). In each iteration, the algorithm then keeps in memory all edges of $E$ between any pair of vertices in $\cup_{i=1}^k V_i$. Then, all instances of $H$ are found, but only the induced instances are enumerated. This is possible since all edges between the vertices of an instance are available in memory. The I/O complexity of the algorithm then becomes $O((8k)^{k-2} E^{ k-1}/(BM^{ k-2}))$. By using this algorithm for solving the subproblems in the randomized algorithm, we get an enumeration algorithm for induced subgraphs requiring $O((8k)^{4k} E^{k/2}/(B M^{k/2-1}))$ I/Os, assuming that the maximum vertex degree is $\sqrt{EM}$. The high probability result applies as well. \end{document}
\begin{document} \title{Superconducting microfabricated ion traps} \author{Shannon X. Wang} \email[]{[email protected]} \author{Yufei Ge} \author{Jaroslaw Labaziewicz} \affiliation{Center for Ultracold Atoms, Research Laboratory of Electronics and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA} \author{Eric Dauler}\affiliation{MIT Lincoln Laboratory, 244 Wood St., Lexington, Massachusetts, 02420, USA} \author{Karl Berggren} \author{Isaac L. Chuang} \affiliation{Center for Ultracold Atoms, Research Laboratory of Electronics and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA} \date{\today} \begin{abstract} \noindent We fabricate superconducting ion traps with niobium and niobium nitride and trap single $^{88}$Sr ions at cryogenic temperatures. The superconducting transition is verified and characterized by measuring the resistance and critical current using a 4-wire measurement on the trap structure, and by observing a change in the rf reflection. The lowest observed heating rate is 2.1(3) quanta/sec at 800~kHz at 6~K and shows no significant change across the superconducting transition, suggesting that anomalous heating is primarily caused by noise sources on the surface. This demonstration of superconducting ion traps opens up possibilities for integrating trapped ions and molecular ions with superconducting devices. \end{abstract} \maketitle Microfabricated surface electrode ion traps have significantly advanced the capabilities of trapped ion systems for quantum information processing \cite{Chiaverini:05, Stick:06, Steane:07}, by enabling increased precision, density, and system integration. The ions trapped in these devices represent quantum bits and are confined by oscillating electric fields.
While typical ion traps currently employ aluminum, gold, or doped semiconductor as the electrode material \cite{Seidelin:06, Leibrandt:09, Stick:06}, the anomalous electric field noise \cite{Turchette:00} affecting such traps provides significant motivation to explore qualitatively different materials for microfabricated ion traps, such as superconductors. In particular, the fact that a superconductor expels electric fields provides an opportunity to test the theoretical understanding that anomalous noise results from surface patch potentials \cite{Turchette:00, Dubessy:09}, rather than from sources in the bulk, since the bulk noise sources would be screened by the superconductor. A similar approach was taken for neutral atoms, where it was found that in superconducting traps magnetic near-field noise is suppressed, resulting in lower heating rates and longer spin-flip lifetimes \cite{Fermani:10, Kasch:10}. For a thin film superconducting ion trap, blue lasers are typically employed for Doppler cooling and state detection of trapped ions, and the short (279$-$422~nm \cite{James:98a}) wavelengths may create quasiparticles in the superconductor, driving it into a normal state. Therefore, verifying that the superconductor employed is actually superconducting during an experiment is required. Here, we demonstrate the operation of several superconducting microfabricated ion traps made with niobium and niobium nitride, describe how superconductivity is verified during trap operation, and apply these traps to test the physical mechanisms of anomalous noise. The demonstration of superconducting ion traps opens up possibilities for integrating trapped ions and molecular ions with superconducting devices, such as photon counting detectors, microwave resonators \cite{Schuster:09}, and circuit-QED systems \cite{Tian:04}. The ion traps used in this experiment consist of Nb or NbN on a sapphire substrate.
One Nb and one NbN trap are identical to a prior five-electrode design \cite{Labaziewicz:08}. An additional Nb trap (Nb-g) includes a thin wire structure \cite{Wang:09} on the center ground electrode that is electrically connected in a 4-wire configuration to measure the resistivity of the electrode. The thinnest part of the wire is 10~$\mu$m wide.

The fabrication procedure is as follows. A 400~nm layer of Nb is grown by DC magnetron sputtering of a niobium target in Ar gas; NbN is grown by adding N$_2$ gas during sputtering. Electrodes are defined by optical lithography using NR9-3000P photoresist, exposed through a chrome mask and developed in RD6 developer. A reactive ion etch with CF$_4$ and O$_2$ is used to etch the exposed metal. Gold contact pads for wirebonding are then defined by optical lithography using S1813 or NR9-3000PY photoresist, deposited by evaporation, and patterned with a lift-off process. After the initial Nb sputtering, the trap is maintained below a temperature of 90$^\circ$C during all steps of the fabrication and packaging process to minimize oxide formation on the surface. For trap Nb-g, a surface-mount resistor (0603, 1~k$\Omega$) is glued to one trap corner and used as a heater for controlling the trap temperature.

The trap is operated in a bath cryostat, and we estimate the trap surface temperature to be $\sim6$~K \cite{Antohi:09}. A single $^{88}$Sr ion is trapped 100~$\mu$m above the trap surface, loaded via photoionization of a thermal vapor. The typical ion lifetime is several hours with the cooling lasers on, the same as for traps made of normal metals. Typical axial trap frequencies are 2$\pi \times$ 0.8$-$1.3~MHz. The 5S$_{1/2} \rightarrow $4D$_{5/2}$ optical transition is used for sideband cooling and temperature readout, addressed via a narrow 674~nm diode laser locked to a stable external cavity.

We verify that the traps are superconducting by observing three variables: resistance, critical current, and reflected rf power.
Resistance is measured on the wire structure in Nb-g as the trap cools or warms up. Figure \ref{Fig:SCtrans}a) shows the resistance as a function of measured baseplate temperature (with a Lakeshore RX103 calibrated diode) during a slow warm-up of the cryostat. The trap is heated to above $T_c$ during ion loading, but cools to below $T_c$ within 5$-$10 minutes. Superconductivity is maintained on the trap when 150~V (amplitude) of rf drive is applied to the trap rf electrodes to create the trapping potential. This corresponds to $\sim$250~mA of current on the rf electrodes, given a capacitance to ground of $\sim$8~pF. The critical current of the wire structure, both with and without the trapping lasers, is 180(1)~mA. This corresponds to a critical current density of 4$\times 10^6$~A/cm$^2$, typical in order of magnitude for thin-film Nb. Based on this measured critical current density and the electrode dimensions (400~nm $\times$ 150~$\mu$m), the calculated current limit on the rf electrodes is 2.7~A, well above what is needed for typical trap operations. The superconducting transition is also observed by looking at the reflected rf power in the NbN trap, which is more resistive immediately above $T_c$. The reflected rf power is measured with a directional coupler mounted before the helical resonator, which is inside the cryostat. As shown in Figure \ref{Fig:SCtrans}b), almost all power is reflected back on resonance below $T_c$. These methods confirm that the traps are superconducting while ions are trapped in the presence of lasers and rf current drive. When the wire structure is current biased with 1$-$10~mA less than the critical current, the 405~nm, 422~nm, and 460~nm lasers at grazing incidence on the trap cause it to transition to the normal state. However, under normal trapping conditions, the lasers have no effect on the measured critical current.
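The quoted current figures are mutually consistent, as a short numerical check shows. The geometry (10~$\mu$m wire width, 400~nm film, 150~$\mu$m rf electrode) is from the text; the rf drive frequency is not stated in the paper, so the $\sim$35~MHz used for the capacitive-current estimate is an assumption typical of such traps:

```python
import math

# Check of the current figures quoted in the text (dimensions from the text).
I_c = 0.180                      # measured critical current of the wire, A
wire_xsec = 10e-6 * 400e-9       # thinnest wire: 10 um wide x 400 nm film, m^2
J_c = I_c / wire_xsec            # critical current density, A/m^2
J_c_per_cm2 = J_c * 1e-4         # convert to A/cm^2
assert 4e6 <= J_c_per_cm2 <= 5e6         # "4 x 10^6 A/cm^2" (one significant figure)

rf_xsec = 400e-9 * 150e-6        # rf electrode cross-section, m^2
I_limit = J_c * rf_xsec          # current limit implied by J_c
assert abs(I_limit - 2.7) < 0.05         # "2.7 A" in the text

# rf current drawn by the capacitive load; 35 MHz drive is an ASSUMED value.
f_rf, C, V = 35e6, 8e-12, 150.0
I_rf = 2 * math.pi * f_rf * C * V
assert 0.2 < I_rf < 0.3                  # consistent with "~250 mA"
```

The 180~mA critical current through the 10~$\mu$m constriction gives $4.5\times10^6$~A/cm$^2$, and scaling that density to the rf electrode cross-section reproduces the 2.7~A limit.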
\begin{figure}
\caption{\label{Fig:SCtrans} a) Resistance of the wire structure in trap Nb-g as a function of baseplate temperature during a slow warm-up of the cryostat. b) Rf power reflected from the NbN trap on resonance, above and below $T_c$.}
\end{figure}

The ion heating rates in all traps are obtained by measuring the average number of motional quanta with a varied delay after ground state cooling on the S-D optical transition. The number of motional quanta is determined by probing the sidebands of the transition using the shelving technique and comparing the ratio of the shelving probabilities on the two sidebands \cite{Turchette:00}. The measured heating rate is weakly dependent on the rf voltage and dc compensation voltages, so these parameters are varied during measurement to find the operating point that gives the lowest heating rate. This value can still vary between different days depending on the trap's processing history, temperature cycling, and other unknown factors, but it is typically within the same order of magnitude. The heating rates of the Nb and NbN traps are comparable to the lowest heating rates of traps of the same design, tested in the same cryogenic experiment, but made with normal metals including Au, Ag, and Al, as listed in Table \ref{Tbl:allheating}.

\begin{table}[tb]
\begin{tabular}{ccc}
\hline\hline
Trap &\quad heating rate (q/s) &\quad $S_E$ ($10^{-15}$ V$^2$/m$^2$/Hz)\\
\hline\hline
NbN & 16(1) & 192(12) \\
Nb & 2.1(3) & 25(4) \\
Nb-g & 4.2(8) & 48(12) \\
\hline
Au$^{\rm a}$ & 2.1(4) & 25(5) \\
Ag$^{\rm b}$ & 2.1(2) & 25(3)\\
Al & 7.0(8) & 84(10)\\
\hline\hline
\end{tabular}
\begin{flushleft}
$^{\rm a}$Ref. \textup{ \cite{Labaziewicz:08b} } \\
$^{\rm b}$Ref. \textup{ \cite{Labaziewicz:08} }
\end{flushleft}
\caption{\label{Tbl:allheating} Heating rates in quanta/second of traps made of superconducting and normal metals, measured at cryogenic temperatures. The conversion to electric field noise $S_E$ is scaled to 1~MHz assuming $1/f$ scaling \cite{Labaziewicz:08}.}
\end{table}

We measured the heating rate above and below the superconducting transition in the Nb-g trap.
For the data above $T_c$, 3~mA of current is driven through the 1~k$\Omega$ resistor to heat the trap just past $T_c$, as observed by monitoring the resistance of the wire structure, corresponding to 9~mW of power dissipated on the trap. In a subsequent cooldown, we mounted RuO$_2$ temperature sensors on the trap \cite{Labaziewicz:08b} and estimate that the operating temperature in the normal state is $\sim$2~K above $T_c$. The trap heating rate is measured immediately before and after this change, as shown in Figure \ref{Fig:Nbgheating}. Measurements above and below $T_c$ are interleaved and taken in quick succession, and they are found to be comparable. All data were taken within one cooldown over two days.

\begin{figure}
\caption{\label{Fig:Nbgheating} Heating rate of the Nb-g trap measured immediately above and below the superconducting transition.}
\end{figure}

The negligible change in heating rate across $T_c$ suggests that buried defects have little effect on anomalous heating. First, it is useful to note that at cryogenic temperatures, the expected level of Johnson noise is on the order of 10$^{-20}$ V$^2$/m$^2$/Hz, while the field noise as measured by the ion is on the order of 10$^{-14}$ V$^2$/m$^2$/Hz. Thus, it is not surprising that removing the Johnson noise may not have much effect on anomalous heating. The remaining explanation is that anomalous heating is predominantly a surface effect and is unrelated to resistivity. The distinction between surface and bulk is given by the London penetration depth, which in Nb is about an order of magnitude less than the 400~nm film thickness. The results here are still consistent with the current theory of patch potentials on metal surfaces. For superconductors, a recent theory proposed that surface plasmons can be an additional source of electromagnetic noise \cite{Henkel:08}. In one Nb trap we tested, the heating rate was measured multiple times over a period of more than one year.
During this time, the trap was installed in and removed from the cryostat multiple times and exposed to air in between, with no processing or cleaning. The lowest heating rate obtained during any data run shows no significant variation over the year. In contrast, in many of our electroplated gold traps the heating rate can change by an order of magnitude between temperature cyclings, and after a few months in storage, increased surface roughness and color changes along the electrode edges become apparent under an optical microscope. In conclusion, we have demonstrated superconducting ion traps that show good trapping stability and low heating rates. The heating rate does not change appreciably across $T_c$, indicating that anomalous heating is primarily a surface effect unrelated to bulk resistivity. Though the anomalous heating was not reduced by superconductivity, the consistency of the low anomalous heating through temperature cycling and exposure to air is an advantage over other materials such as electroplated gold. The feasibility of superconducting ion traps invites the possibility of integrating trapped ions with superconducting devices such as Josephson junctions and SQUIDs, though the compatibility of such devices is open to investigation. Recent progress in using the ion as an extremely sensitive detector of forces and charges \cite{Maiwald:09, Biercuk:10, Harlander:10} also suggests the possibility of detecting superconducting vortices with trapped ions. Magnetic flux trapped in vortices would modify the magnetic field above the superconductor. The vortex density is determined by the applied external field during cooling across the superconducting transition. The resulting change in local magnetic field can be detected by the ion via the Ramsey method on a narrow transition.
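A rough feasibility estimate supports this: far above the surface, the field of a single vortex is approximately that of a magnetic monopole carrying one flux quantum $\Phi_0$, so at the ion height $z$ it scales as $\Phi_0/(2\pi z^2)$. This monopole approximation is our illustrative model, not a calculation from the paper; the sensitivity figure is the one quoted in the text:

```python
import math

# Rough estimate (monopole model is an ASSUMPTION, not from the paper):
# far from the surface a single vortex looks like a magnetic monopole of
# flux Phi0, so at height z the field is of order B ~ Phi0 / (2*pi*z**2).
Phi0 = 2.068e-15          # magnetic flux quantum, Wb
z = 100e-6                # ion height above the trap surface, m
B_vortex = Phi0 / (2 * math.pi * z**2)   # ~3e-8 T

sensitivity = 1.1e-11     # ion field sensitivity, T/sqrt(Hz), quoted in the text
# one second of integration resolves a single vortex with a large margin
assert B_vortex / sensitivity > 1000
```

Under these assumptions, a single vortex produces a field of order $3\times10^{-8}$~T at the ion, three orders of magnitude above the one-second sensitivity limit.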
An estimate of the ion's sensitivity to magnetic field\cite{Maiwald:09} of 1.1$\times 10^{-11}$ T/$\sqrt{\tau/{\rm Hz}}$ and typical ion height of 100~$\mu$m are comparable to parameters in early experiments that demonstrated vortex detection using SQUIDs \cite{Minami:91, Mathai:92}. Such coupling to superconducting vortices have been demonstrated with trapped neutral atoms in a recent experiment\cite{Muller:10}. We thank Adam McCaughan for assistance in trap fabrication. This work was supported by the COMMIT Program with funding from IARPA, the DARPA Quest program, and the NSF Center for Ultracold Atoms. \begin{thebibliography}{23} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} 
\providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Chiaverini}\ \emph {et~al.}(2005)\citenamefont {Chiaverini}, \citenamefont {Blakestad}, \citenamefont {Britton}, \citenamefont {Jost}, \citenamefont {Langer}, \citenamefont {Leibfried}, \citenamefont {Ozeri},\ and\ \citenamefont {Wineland}}]{Chiaverini:05} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chiaverini}}, \bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Blakestad}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Britton}}, \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Jost}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Langer}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibfried}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ozeri}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. 
Comput.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {419} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stick}\ \emph {et~al.}(2006)\citenamefont {Stick}, \citenamefont {Hensinger}, \citenamefont {Olmschenk}, \citenamefont {Madsen}, \citenamefont {Schwab},\ and\ \citenamefont {Monroe}}]{Stick:06} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Stick}}, \bibinfo {author} {\bibfnamefont {W.~K.}\ \bibnamefont {Hensinger}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Olmschenk}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Madsen}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Schwab}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {36} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Steane}(2007)}]{Steane:07} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Steane}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. 
Comput.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {171} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Seidelin}\ \emph {et~al.}(2006)\citenamefont {Seidelin}, \citenamefont {Chiaverini}, \citenamefont {Reichle}, \citenamefont {Bollinger}, \citenamefont {Leibfried}, \citenamefont {Britton}, \citenamefont {Wesenberg}, \citenamefont {Blakestad}, \citenamefont {Epstein}, \citenamefont {Hume}, \citenamefont {Jost}, \citenamefont {Langer}, \citenamefont {Ozeri}, \citenamefont {Shiga},\ and\ \citenamefont {Wineland}}]{Seidelin:06} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Seidelin}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chiaverini}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Reichle}}, \bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Bollinger}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibfried}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Britton}}, \bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont {Wesenberg}}, \bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Blakestad}}, \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Epstein}}, \bibinfo {author} {\bibfnamefont {D.~B.}\ \bibnamefont {Hume}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Jost}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Langer}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ozeri}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Shiga}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {253003} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Leibrandt}\ \emph {et~al.}(2009)\citenamefont {Leibrandt}, \citenamefont {Labaziewicz}, \citenamefont {Clark}, \citenamefont {Chuang}, \citenamefont {Epstein}, \citenamefont {Ospelkaus}, \citenamefont {Wesenberg}, \citenamefont {Bollinger}, \citenamefont {Leibfried}, \citenamefont {Wineland}, \citenamefont {Stick}, \citenamefont {Stick}, \citenamefont {Monroe}, \citenamefont {Pai}, \citenamefont {Low}, \citenamefont {Frahm},\ and\ \citenamefont {Slusher}}]{Leibrandt:09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont {Leibrandt}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Labaziewicz}}, \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Clark}}, \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}}, \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Epstein}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ospelkaus}}, \bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont {Wesenberg}}, \bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Bollinger}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibfried}}, \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Stick}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Stick}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}}, \bibinfo {author} {\bibfnamefont {C.-S.}\ \bibnamefont {Pai}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Low}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Frahm}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~E.}\ \bibnamefont {Slusher}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. 
Comput.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {0901} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Turchette}\ \emph {et~al.}(2000)\citenamefont {Turchette}, \citenamefont {Kielpinski}, \citenamefont {King}, \citenamefont {Leibfried}, \citenamefont {Meekhof}, \citenamefont {Myatt}, \citenamefont {Rowe}, \citenamefont {Sackett}, \citenamefont {Wood}, \citenamefont {Itano}, \citenamefont {Monroe},\ and\ \citenamefont {Wineland}}]{Turchette:00} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.~A.}\ \bibnamefont {Turchette}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kielpinski}}, \bibinfo {author} {\bibfnamefont {B.~E.}\ \bibnamefont {King}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibfried}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Meekhof}}, \bibinfo {author} {\bibfnamefont {C.~J.}\ \bibnamefont {Myatt}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Rowe}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Sackett}}, \bibinfo {author} {\bibfnamefont {C.~S.}\ \bibnamefont {Wood}}, \bibinfo {author} {\bibfnamefont {W.~M.}\ \bibnamefont {Itano}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {61}},\ \bibinfo {pages} {063418} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dubessy}, \citenamefont {Coudreau},\ and\ \citenamefont {Guidoni}(2009)}]{Dubessy:09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Dubessy}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Coudreau}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Guidoni}},\ }\href {\doibase 10.1103/PhysRevA.80.031402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {031402} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fermani}\ \emph {et~al.}(2010)\citenamefont {Fermani}, \citenamefont {M\"{u}ller}, \citenamefont {Zhang}, \citenamefont {Lim},\ and\ \citenamefont {Dumke}}]{Fermani:10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fermani}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {M\"{u}ller}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Lim}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Dumke}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. Phys. B}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {095002} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kasch}\ \emph {et~al.}(2010)\citenamefont {Kasch}, \citenamefont {Hattermann}, \citenamefont {Cano}, \citenamefont {Judd}, \citenamefont {Scheel}, \citenamefont {Zimmermann}, \citenamefont {Kleiner}, \citenamefont {Koelle},\ and\ \citenamefont {Fort\'agh}}]{Kasch:10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Kasch}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Hattermann}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Cano}}, \bibinfo {author} {\bibfnamefont {T.~E.}\ \bibnamefont {Judd}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Scheel}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Zimmermann}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kleiner}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Koelle}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Fort\'agh}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {065024} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {James}(1998)}]{James:98a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~F.~V.}\ \bibnamefont {James}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Appl. Phys. B}\ }\textbf {\bibinfo {volume} {66}},\ \bibinfo {pages} {181} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schuster}\ \emph {et~al.}(2009)\citenamefont {Schuster}, \citenamefont {Bishop}, \citenamefont {Chuang}, \citenamefont {DeMille},\ and\ \citenamefont {Schoelkopf}}]{Schuster:09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~I.}\ \bibnamefont {Schuster}}, \bibinfo {author} {\bibfnamefont {L.~S.}\ \bibnamefont {Bishop}}, \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {DeMille}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Schoelkopf}},\ }\href@noop {} {\enquote {\bibinfo {title} {Cavity qed in a molecular ion trap},}\ } (\bibinfo {year} {2009}),\ \bibinfo {note} {quant-ph:0903.3552}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tian}\ \emph {et~al.}(2004)\citenamefont {Tian}, \citenamefont {Rabl}, \citenamefont {Blatt},\ and\ \citenamefont {Zoller}}]{Tian:04} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Tian}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Rabl}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {247902} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Labaziewicz}\ \emph {et~al.}(2008{\natexlab{a}})\citenamefont {Labaziewicz}, \citenamefont {Ge}, \citenamefont {Antohi}, \citenamefont {Leibrandt}, \citenamefont {Brown},\ and\ \citenamefont {Chuang}}]{Labaziewicz:08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Labaziewicz}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ge}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Antohi}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibrandt}}, \bibinfo {author} {\bibfnamefont {K.~R.}\ \bibnamefont {Brown}}, \ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} {013001} (\bibinfo {year} {2008}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2009)\citenamefont {Wang}, \citenamefont {Labaziewicz}, \citenamefont {Ge}, \citenamefont {Shewmon},\ and\ \citenamefont {Chuang}}]{Wang:09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~X.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Labaziewicz}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ge}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Shewmon}}, \ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Appl. Phys. 
Letters}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {094103} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Antohi}\ \emph {et~al.}(2009)\citenamefont {Antohi}, \citenamefont {Schuster}, \citenamefont {Akselrod}, \citenamefont {Labaziewicz}, \citenamefont {Ge}, \citenamefont {Lin}, \citenamefont {Bakr},\ and\ \citenamefont {Chuang}}]{Antohi:09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~B.}\ \bibnamefont {Antohi}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Schuster}}, \bibinfo {author} {\bibfnamefont {G.~M.}\ \bibnamefont {Akselrod}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Labaziewicz}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ge}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Lin}}, \bibinfo {author} {\bibfnamefont {W.~S.}\ \bibnamefont {Bakr}}, \ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href {\doibase 10.1063/1.3058605} {\bibfield {journal} {\bibinfo {journal} {Rev. Sci. Instr.}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {eid} {013103} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Labaziewicz}\ \emph {et~al.}(2008{\natexlab{b}})\citenamefont {Labaziewicz}, \citenamefont {Ge}, \citenamefont {Leibrandt}, \citenamefont {Wang}, \citenamefont {Shewmon},\ and\ \citenamefont {Chuang}}]{Labaziewicz:08b} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Labaziewicz}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ge}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibrandt}}, \bibinfo {author} {\bibfnamefont {S.~X.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Shewmon}}, \ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {180602} (\bibinfo {year} {2008}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Henkel}\ and\ \citenamefont {Horovitz}(2008)}]{Henkel:08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Henkel}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Horovitz}},\ }\href {\doibase 10.1103/PhysRevA.78.042902} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {042902} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Maiwald}\ \emph {et~al.}(2009)\citenamefont {Maiwald}, \citenamefont {Leibfried}, \citenamefont {Britton}, \citenamefont {Bergquist}, \citenamefont {Leuchs},\ and\ \citenamefont {Wineland}}]{Maiwald:09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Maiwald}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibfried}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Britton}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Bergquist}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Leuchs}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {551} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Biercuk}\ \emph {et~al.}(2010)\citenamefont {Biercuk}, \citenamefont {Uys}, \citenamefont {Britton}, \citenamefont {VanDevender},\ and\ \citenamefont {Bollinger}}]{Biercuk:10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Biercuk}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Uys}}, \bibinfo {author} {\bibfnamefont {J.~W.}\ \bibnamefont {Britton}}, \bibinfo {author} {\bibfnamefont {A.~P.}\ \bibnamefont {VanDevender}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont 
{Bollinger}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Nanotech.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {646} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Harlander}\ \emph {et~al.}(2010)\citenamefont {Harlander}, \citenamefont {Brownnutt}, \citenamefont {H\"ansel},\ and\ \citenamefont {Blatt}}]{Harlander:10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Harlander}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Brownnutt}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {H\"ansel}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\href {http://stacks.iop.org/1367-2630/12/i=9/a=093035} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {093035} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Minami}\ \emph {et~al.}(1992)\citenamefont {Minami}, \citenamefont {Geng}, \citenamefont {Chihara}, \citenamefont {Yuyama},\ and\ \citenamefont {Goto}}]{Minami:91} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Minami}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Geng}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Chihara}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Yuyama}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Goto}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Cryogenics}\ }\textbf {\bibinfo {volume} {32}},\ \bibinfo {pages} {648 } (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mathai}\ \emph {et~al.}(1992)\citenamefont {Mathai}, \citenamefont {Song}, \citenamefont {Gim},\ and\ \citenamefont {Wellstood}}]{Mathai:92} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mathai}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Song}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Gim}}, \ and\ 
\bibinfo {author} {\bibfnamefont {F.~C.}\ \bibnamefont {Wellstood}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Appl. Phys. Letters}\ }\textbf {\bibinfo {volume} {61}},\ \bibinfo {pages} {598} (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {M\"{u}ller}\ \emph {et~al.}(2010)\citenamefont {M\"{u}ller}, \citenamefont {Zhang}, \citenamefont {Fermani}, \citenamefont {Chan}, \citenamefont {Wang}, \citenamefont {Zhang}, \citenamefont {Lim},\ and\ \citenamefont {Dumke}}]{Muller:10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {M\"{u}ller}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fermani}}, \bibinfo {author} {\bibfnamefont {K.~S.}\ \bibnamefont {Chan}}, \bibinfo {author} {\bibfnamefont {Z.~W.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {C.~B.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Lim}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Dumke}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {043016} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document}
\preprint{APS/123-QED}
\title{Carrying an arbitrarily large amount of information using a single quantum particle}
\author{Li-Yi Hsu}
\affiliation{\footnotesize Department of Physics, Chung Yuan Christian University, Chungli 32023, Taiwan}
\author{Ching-Yi Lai} \email{[email protected]}
\affiliation{\footnotesize Institute of Communications Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan}
\author{You-Chia Chang}
\affiliation{\footnotesize Department of Photonics and Institute of Electro-Optical Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan}
\author{Chien-Ming Wu}
\affiliation{\footnotesize Institute of Photonics Technologies, National Tsing Hua University, Hsinchu 30013, Taiwan}
\author{Ray-Kuang Lee}
\affiliation{\footnotesize Institute of Photonics Technologies, National Tsing Hua University, Hsinchu 30013, Taiwan}
\date{\today}
\begin{abstract} Theoretically speaking, a photon can travel arbitrarily far before it enters a detector, producing a click. How much information can a photon carry? We study a bipartite asymmetric ``two-way signaling'' protocol as an extension of that proposed by Del Santo and Daki\ifmmode \acute{c}\else \'{c}\fi{}. Suppose that Alice and Bob are distant from each other and each of them has an $n$-bit string. They are tasked to exchange the information of their local $n$-bit strings with each other, using only a single photon during the communication. It has been shown that the superposition of different spatial locations in a Mach-Zehnder (MZ) interferometer enables bipartite local encodings. We show that, after a photon travels through a cascade of $n$-level MZ interferometers in our protocol, whichever of Alice and Bob observes the detector click can access the other's full $n$-bit string, while the other can gain one bit of information.
That is, wave-particle duality makes two-way signaling possible, and a single photon can carry an arbitrarily large (but finite) amount of information. \end{abstract} \maketitle \section{Introduction} Communication is a process of sending and receiving messages from one party to another~\cite{Shannon48}. More precisely, communication is a physical process in which physical information carriers are transmitted without violating any physical principle. For instance, electromagnetic waves used in wireless communication are governed by Maxwell's equations in classical physics. As a consequence of special relativity, faster-than-light communication is impossible. Also, the unavoidable energy consumption in Maxwell's demon and Landauer's erasure indicates that information is physical~\cite{Lan91}, and the link between thermodynamics and information has the potential to deliver new insights in physics and biology. The role of information in physical theory has been extensively investigated. For example, it has been proposed that quantum theory can be derived and reconstructed from purely informational principles~\cite{Har01,DB10,CDP11,MMA+13}. The effect of the uncertainty relation in information processing can be stated in terms of the information content principle~\cite{CHHH17} and the No-Disturbance-Without-Uncertainty principle~\cite{SZY19}. Therein, a fundamental and interesting concern is the channel capacity in communication. {According to the no-signaling principle, there is no information gain without classical or quantum communication;} the transmission of a message is the cause of any increase in information. It is well known that, in the dense-coding protocol, two bits of information can be carried in one qubit with preshared entanglement~\cite{BW92}. For the receiver to obtain $n$ bits of information, at least a total of $n$ qubits have to be exchanged and at least $n/2$ qubits have to be sent from the sender~\cite{Hol73,CDNT99,DHW04,NS06,HW10,DH11,LC18c}.
As a generalization of the no-signaling principle {respected in both classical and quantum physics}, information causality states that one cannot gain more information than the number of bits sent via classical communication~\cite{PPK+09}. Note that the protocols mentioned above are proposed for one-way communication, and quantum entanglement as a physical resource is initially distributed between the sender and receiver. Photons as flying qubits are usually exploited in quantum communication. Given a photon as an information carrier, its wave-particle duality makes two-way communication possible. Very recently, a variant of the ``guess your neighbor's input'' game~\cite{ABB+10} was studied by Del Santo and Daki\'{c}~\cite{SD18}, which we call the SD game in this article. They proposed a protocol ({SD protocol}) to win the SD game with certainty, while a classical strategy can win with probability at most $50\%$. We review the SD game as follows. Two distant agents Alice and Bob are given two input bits $x,y\in\{0,1\}$, respectively, which are drawn uniformly at random, and they are asked to output two bits $a,b\in\{0,1\}$, respectively. They win the game if both of them output a bit that is equal to the other's input (i.e., $a=y$ and $b=x$). {A restriction here is that only one information carrier, classical or quantum, can be manipulated. Obviously, they cannot win with certainty using simply a classical information carrier, since it can transmit only a single bit of information within the specified time limit.} Using a photon, on the other hand, enables two-way signaling so that they can win the SD game with certainty~\cite{MMD+18}. Notably, one of them can gain one bit of information even if none of his or her detectors clicks. According to Renninger's negative-result experiment~\cite{Ren53, Bae05} or the bomb-testing problem~\cite{Elitzur1993}, even if there is no interaction between the quantum object and the measuring device, one still learns definite knowledge of the quantum state.
The concept of the {{SD protocol}} is explained as follows, in terms of the (level 1) optical implementation shown in~Fig.~\ref{fig:SD_game}. \begin{figure}[h!] \centering \includegraphics[width=0.5\linewidth] {Fig1.png} \caption{ Optical implementation of the SD protocol. The photon is initially injected into the level-1 MZ interferometer. After traveling through the first $50/50$ beam splitter (BS1), the incident photon is half-reflected and half-transmitted in a coherent way. Alice and Bob can locally access the halves depicted in the red solid line and blue dashed line, respectively. For the local encoding at level~1, Alice inserts the $\pi$-phase modulator (PM A) if $x_{1}=0$ and does nothing if $x_{1}=1$; similarly, Bob inserts the $\pi$-phase modulator (PM B) if $y_{1}=0$ and does nothing if $y_{1}=1$. After the interference of these two coherent halves at the second $50/50$ beam splitter (BS2), the photon enters one of the two detectors~\cite{SD18}.} \label{fig:SD_game} \end{figure} A photon is emitted from a referee source and then injected into the first beam splitter (BS1) of a Mach-Zehnder (MZ) interferometer. Consequently, this single photon is coherently superposed over two different spatial locations. Hence the two local agents Alice and Bob can each (i) perform local operations on the incoming parts of the photon as information encoding, and (ii) access a detector to detect the photon within a certain time window later. According to (i), Alice and Bob encode their bits in the phase of the photon before it reaches the second beam splitter (BS2). With a delicate design, the parity of the two input bits completely determines the path of the photon leaving BS2. Consequently, one knows with certainty which detector will detect this photon while the other will detect nothing. For example, Alice's (Bob's) detector clicks if $x=y$ $(x\neq y)$ in the ideal case.
Once Alice's detector does not receive any photon in a certain time window (no interaction between the quantum object and the measuring device), she knows that $x\neq y$ and outputs the bit $a=x+1 \mod 2$. {As a result, using the spatial superposition of a single photon, Alice and Bob can communicate a total of two bits of information within a specified time window and hence win the game with certainty. In contrast, to win the game with certainty using classical communication, the time window would be too short to complete two one-way classical transmissions~\cite{SD18}.} In this paper, we characterize the power of a single photon as an information carrier. Our concerns are twofold: how much information a single photon can carry, and how much information an agent can obtain even if an interaction-free measurement occurs (no photon is detected by the detectors at hand). {We will design a generalized Santo and Daki\'{c} (GSD) game and show that, using one single photon, one can win the game with certainty and learn a total of $(n+1)$ bits of information in an $n$-level GSD game, while one learns only $n$ bits of information by classical communication. When $n$ is arbitrarily large, this suggests that a single photon can carry an arbitrarily large amount of information. We would like to mention that in a related work~\cite{HD19}, Horvat and Daki\'{c} showed that a single particle can be used to communicate simultaneously with $n$ parties and achieve the so-called genuine $n$-way signaling.} Note that the photon as an information carrier here can be replaced by any quantum particle whose coherence is under sufficient experimental control. The remainder of this paper is organized as follows: In Sec.~\ref{sec:GSD}, we introduce the GSD game. The experimental setup of the $n$-level circuit is proposed. We characterize and then optimize the total information gains for Alice and Bob. Several specific cases are studied.
In Sec.~\ref{sec:discussion}, we investigate the physics concerning the information gains. Finally, in Sec.~\ref{sec:implementation} we estimate the performance of the GSD game in a physical realization. \section{Generalized SD game}\label{sec:GSD} \subsection{Experimental setup} We consider a generalized Santo and Daki\'{c} (GSD) game as follows. Alice and Bob are assigned two independent input strings ${\mathbf x}=x_1 \cdots x_{n},$ ${\mathbf y}=y_1 \cdots y_{n}\in\{0,1\}^n$, respectively, and they are asked to output bit strings ${\mathbf a}=a_1 \cdots a_{n}$ and ${\mathbf b}=b_1 \cdots b_{n}$, respectively. {They win the game if (i) one of them comes to know the other's input string and (ii) the other gains at least one bit of information. Only a single information carrier is allowed for the communication task within a specific time window.} It will be shown that, with a single photon in the GSD game, {there is} a total of $n+1$ bits of information gain {for Alice and Bob} as a result of two-way signaling {in a time window $\tau$.} {However, if the information carrier is classical, they can exchange a total of at most $n$ bits of information in the same time window.} \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth] {Fig2.png} \caption{Left: The optical details of a two-level evolution circuit. According to the local encoding at level~1, the photon leaving BS2 is injected into one of the two fibers (F1 and F2), and enters one of the two MZ interferometers at level~2. Similarly, Alice (Bob) inserts PMs into these two MZ interferometers at level~2 if $x_{2}=1$ $(y_{2}=1)$ and does nothing if $x_{2}=0$ $(y_{2}=0)$. Right: The topological unfolding of the 2-level circuit as a full 2-level binary tree. The nodes therein denote the MZ interferometers, where the photon is spatially superposed, and the directed edges between nodes indicate the possible traveling paths of the photon.
} \label{fig:SD_game_l2} \end{figure} First consider a two-level {circuit as the} extension of the SD protocol, as shown in Fig.~\ref{fig:SD_game_l2}. The two detectors in Fig.~\ref{fig:SD_game} are replaced by MZ interferometers, followed by four detectors. One of the four detectors will click according to the parities of $(x_1,y_1)$ and $(x_2, y_2)$. A $(k+1)$-level circuit can be constructed by (i) replacing the detectors in the $k$-level circuit by MZ interferometers, and (ii) putting $2^{k+1}$ detectors at the outputs of the interferometers. {{Naturally}, an $n$-level circuit for the GSD game can be {recursively} constructed.} Our protocol for the GSD game is explained as follows with the experimental setup shown in Fig.~\ref{fig:1b}, which can be schematically depicted as a perfect $n$-level binary tree. A detector is placed at each leaf node, and an MZ interferometer is placed at each parent node. According to the input bits $x_{i}$ and $y_{i}$, Alice and Bob perform phase encoding by inserting a phase modulator (bit value $=0$) or not (bit value $=1$) into each of the $2^{i-1}$ MZ interferometers at level $i$. Hence a single photon injected into the root will travel through one of the $2^n$ light paths. Therein, after leaving an MZ interferometer at level $k$, the photon goes along either the \textit{even-$k$} $(x_{k}=y_{k})$ path or the \textit{odd-$k$} $(x_{k}\neq y_{k})$ one, and then enters an MZ interferometer at level $(k+1)$. Note that there are $2^{k-1}$ even-$k$ and $2^{k-1}$ odd-$k$ paths. Consequently, the photon's complete path is determined by the parity relations of the $n$ bit pairs $(x_{1}, y_{1}),$ \dots, $(x_{n}, y_{n})$, and it finally flies into one of the $2^{n}$ detectors, $D_{1},$ \dots, $D_{2^n}$, which are locally accessible to either Alice or Bob. (Note that it is not necessary that Alice and Bob have an equal number of detectors.) The local agent whose detector clicks can learn these $n$ parities and hence knows the other's $n$ input bits exactly.
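Under the ideal, lossless assumptions above, the correspondence between the $n$ parities and the clicking detector, and the resulting recovery of the other agent's string, can be sketched in a few lines of Python. This is a verification sketch only; the function names and the bit-ordering convention (level 1 as the most significant bit) are our illustrative choices, not part of the optical protocol.

```python
# Verification sketch of the ideal GSD path logic (no losses, no dark counts).

def detector_index(x, y):
    """The photon leaves level k on the even-k or odd-k path according to the
    parity x[k] XOR y[k]; reading the n parities as a binary number (level 1 as
    the most significant bit) selects one of the 2**n detectors, 0-indexed."""
    idx = 0
    for xk, yk in zip(x, y):
        idx = (idx << 1) | (xk ^ yk)
    return idx

def recover_other_bits(own_bits, idx):
    """Whoever owns the clicking detector knows all n parities, and hence
    recovers the other agent's full input string from his or her own bits."""
    n = len(own_bits)
    parities = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
    return [own_bits[k] ^ parities[k] for k in range(n)]

x = [1, 0, 1]                    # Alice's input string
y = [0, 0, 1]                    # Bob's input string
idx = detector_index(x, y)       # parities 1, 0, 0 -> detector index 0b100 = 4
assert recover_other_bits(y, idx) == x   # the clicking party learns x ...
assert recover_other_bits(x, idx) == y   # ... or y, symmetrically
```

The agent who does not see a click learns only which parities are excluded, which is the one-bit gain quantified in the next subsection.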
\begin{figure}[!h] \centering \includegraphics[width=0.950\linewidth] {Fig3.png} \caption{ The unfolded layout of the $n$-level circuit as a perfect $n$-level binary tree with detectors. There are $2^{i-1}$ MZ interferometers at level $i$. The photon is initially injected into the level-1 MZ interferometer. Then the parity of the bit pair $(x_{1}, y_{1})$ completely determines which one of the two level-2 MZ interferometers the photon will enter. Without loss of generality, let the photon go to the right MZ interferometer at level 2 if $x_{1} = y_{1}$, and to the left MZ interferometer otherwise. More optical details are explained in Fig.\,\ref{fig:SD_game}. Similarly, the parity of $(x_{2}, y_{2})$ determines the next target interferometer at level 3. This process is continued through a cascade of $n$ levels of MZ interferometers. As a result, the light path of the photon completely depends on the $n$ bit pairs $(x_{1}, y_{1})$, \dots, $(x_{n}, y_{n})$. Finally, the photon flies into one of the $2^{n}$ detectors, each of which is held by Alice (A) or Bob (B). } \label{fig:1b} \end{figure} { Next we discuss the physical settings under which Alice and Bob can exchange a single information carrier a total of $n$ times. Let Alice and Bob be located at a distance $d$ from each other, and, for simplicity, assume that an information carrier, classical or quantum, travels at the speed $c$. Suppose that an information carrier carries one bit of information in classical one-way communication; then it takes a time of roughly $nd/c$ to transmit $n$ bits of information with a single carrier. On the other hand, let the length of an odd-$k$ or even-$k$ path be $\delta$ {for $k<n$}.
{In other words, in Fig.~\ref{fig:SD_game_l2}, {the photon travels a distance $d$ between BS1 and BS2, and the length of F1 or F2 is $\delta$.} {In the experimental setup, we let} $\delta\ll d$ by choosing a sufficiently large $d$; however, {this} is not reflected in the scale of our plots.} Thus it takes a time $((nd+(n-1)\delta)/c)$ to implement our protocol in Fig.~\ref{fig:1b}. As a result, we allow a specific time window $\tau$ such that $((nd+(n-1)\delta)/c)\leq \tau \leq ((nd+(n-1)\delta)/c)+\epsilon$, where $\epsilon\geq 0$ is a small constant such that $((nd+(n-1)\delta)/c)+\epsilon< (n+1)d/c$. This choice of time window $\tau$ allows Alice and Bob to exchange a total of $n+1$ bits of information (shown in the next subsection) using our protocol, while the same time window allows a classical scheme to exchange only $n$ bits of information. } An example with $n=2$ is illustrated in Fig.~\ref{fig:time}. \begin{table*}[t!] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline &$D_1$& $D_2$& $D_3$& $D_4$& Bob's knowledge of the bit-pair relations& Bob's information gain \\ \hline Case (1) &A&A&B&B& $x_1\neq y_1$ &1\\ \hline Case (2) &B&B&A&A& $x_1= y_1$&1 \\ \hline Case (3) &A&B&A&B& $x_2\neq y_2$&1 \\ \hline Case (4) &B&A&B&A& $x_2= y_2$ &1 \\ \hline Case (5) &B&A&A&B& either $x_1\neq y_1$ and $x_2\neq y_2$, or $x_1=y_1$ and $x_2=y_2$ &1 \\ \hline Case (6) &A&B&B&A& either $x_1\neq y_1$ and $x_2= y_2$, or $x_1=y_1$ and $x_2\neq y_2$&1 \\ \hline Case (7) &A&B&B&B& $x_1\neq y_1$ and $x_2\neq y_2$ & $2$ \\ \hline Case (8) &B&A&A&A& $(x_1,x_2)\neq (y_1,y_2)$ & $2-\log_2 3$ \\ \hline \end{tabular} \caption{Some detector assignments and Bob's corresponding information gains, given that one of Alice's detectors clicks. In the cases (1) and (2) ((3) and (4)), Bob knows $x_1$ ($x_2$) with certainty.
In the cases (5) and (6), Bob knows $x_1$ and $x_2$ simultaneously with probability 0.5, which indicates that Bob can gain one bit of information on average. In case (7), Bob knows that $x_1\neq y_1$ and $x_2\neq y_2$, and his information gain is $2$ (bits). In case (8), Bob knows that $(x_1,x_2)\neq (y_1,y_2)$, and hence his information gain is $2-\log_2 3$. } \label{tb:1} \end{table*} \begin{figure}[!h] \centering \includegraphics[width=1.03\linewidth] {Fig4.png} \caption{ { The time window in the $n=2$ case. (a) Assume that Alice holds the left two detectors ($D_1$ and $D_2$ in Fig.~\ref{fig:SD_game_l2}). Alice and Bob perform the local encoding operations for $x_{1}$ at $t=0$ and for $y_{1}$ at $t=(d+\delta)/c$, respectively. Finally, the photon flies into one of the detectors accessible to Alice at a time $(2d+\delta)/c\leq \tau\leq (2d+\delta)/c+\epsilon$. (b) In the same window $\tau$, Alice and Bob can only exchange some $x_{j}$ and $y_{i}$ by classical communication. (c) To finish the same task as in (a) using a classical information carrier, a time window $3d/c \leq \tau' \leq 3d/c +\epsilon$ is required. } } \label{fig:time} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.99\linewidth] {Fig5.png} \caption{An effective circuit with two detectors. All the left paths have a time delay $\delta$, while the right paths have additional delays, denoted by longer optical fibers. } \label{fig:2detectors} \end{figure} The implementation of Fig.~\ref{fig:1b} can be refined in the case that the detectors at the left and right leaves belong to Alice and Bob, respectively, assuming that $2^n\delta \ll d$. {That is, $n$ cannot be arbitrarily large: $n=O(\log(d/\delta ))$.} Specifically, we can use only two detectors (one for Alice and the other for Bob) and add a time-domain coordinate to save the massive number of $2^n$ detectors otherwise required.
This is done as shown in Fig.~\ref{fig:2detectors}, where the left path at level $i$ has a time delay $\delta$, the right path at level $i$ has an additional delay of $2^{n-i-1}\delta$, and both the left and right paths at level $n$ have delay $\delta$. Consequently, the $2^n$ light paths from left to right in Fig.~\ref{fig:1b} will have delays $n\delta, n\delta, (n+1)\delta, (n+1)\delta, \dots, (n+2^{n-1}-1)\delta, (n+2^{n-1}-1)\delta$ in Fig.~\ref{fig:2detectors}, respectively. An important observation is that at the same level each of Alice and Bob has the same input bit and applies the same PMs to the corresponding light paths. Also, a beam splitter has two input ports, which allows us to connect both branches to the same beam splitter. Therefore, from the time of the click, one can deduce the corresponding light path in the circuit of Fig.~\ref{fig:1b} and learn the $n$-bit string. As a comparison, the previous scheme by Del Santo and Daki\'{c}~\cite{SD18} uses one single-photon source and two detectors to exchange two bits of information. Our scheme is able to transmit more information at the cost of additional fibers, MZ interferometers, and beam splitters. \subsection{Information gains} Let us quantify how many detectors Bob should have to optimize his information gain $I(X;B|Y)$, where $I(X;B|Y)= H(B|Y)-H(B|X,Y)$ is the \textit{mutual information} between Alice's input variable $X$ and Bob's output variable $B$ conditioned on Bob's input variable $Y$; $H(B|Y)$ is the \textit{conditional Shannon entropy}, and $H(X)= -\sum_{{\mathbf x}} p_{\mathbf x} \log p_{\mathbf x}$ is the \textit{Shannon entropy}. Let $m$ be the number of detectors that belong to Bob. Since $X$ and $Y$ are independent, it is clear that \begin{align*} I(X;B|Y)\leq & H(B) =H\left(\left\{ \underbrace{{1}/{2^n},\dots, {1}/{2^n}}_m,1- {m}/{2^n}\right\}\right)\\ =&n-\left(1-\frac{m}{2^n}\right)\log\left(2^n-m\right).
\end{align*} The total information gain of Alice and Bob is \begin{align*} &I(Y;A|X)+I(X;B|Y) =H(A)+ H(B)\\ &=2n-\frac{m}{2^n}\log m- \left(1-\frac{m}{2^n}\right)\log\left(2^n-m\right) \leq n+1, \end{align*} where the equality holds when $m=2^{n-1}$. The main result can be stated as follows: { \textit{The optimal total information gain is $n+1$.} } To reach the optimal total information gain, Alice and Bob should each access half of the $2^n$ detectors. It does not matter which detectors Alice or Bob holds, since the one with a clicking detector can learn $n$ bits of information, while the other learns one bit of information. As an illustration, we analyze Bob's information gain in the case of $n=2$ as shown in Fig.~\ref{fig:SD_game_l2}. Various detector assignments are listed in Table~\ref{tb:1}, assuming that one of Alice's (Bob's) detectors always clicks (never clicks). {Note that, to win the GSD game with certainty, the one who cannot learn the other's $n$ input bits must know one bit of information.} With a delicate initial assignment of these $2^n$ detectors between Alice and Bob, they can exchange the input bit pair $(x_{k}, y_{k})$ with certainty for some specific~$k$. Specifically, denote two detector sets by $\Delta_{k}^{E}$ and $\Delta_{k}^{O}$. For all $i=1,\dots ,2^n$, the detector $D_{i}\in\Delta_{k}^{E}$ $(D_{i}\in \Delta_{k}^{O})$ if $D_{i}$ receives a photon traveling through an even-$k$ (odd-$k$) path. Let all $2^{n-1}$ detectors in $\Delta_{k}^{E}$ $(\Delta_{k}^{O})$ be completely accessible to Alice (Bob). In this case, once none of the detectors belonging to Bob clicks, Bob learns that $x_{k}=y_{k}$ and hence {he outputs the bit $b_{k}=x_{k}$ with certainty.} For example, as shown in Table~\ref{tb:1}, Bob can always learn $x_1$ using the detector assignment in Case (1) or (2), or learn $x_2$ using the detector assignment in Case (3) or (4). It is noteworthy to mention the following detector assignment.
Let Alice occupy only one detector and Bob occupy the other $2^n-1$ ones. With probability $2^{-n}$, Alice's detector receives the photon. In this case, the absence of a click on Bob's side lets him exclude $2^n-1$ possible sets of parity relations and hence learn the input string ${\mathbf x}$. In other words, if Alice and Bob are tasked to output ${\mathbf a}={\mathbf y}$ and ${\mathbf b}={\mathbf x}$, respectively, in the GSD game, they can win the game with probability $2^{-n}$. {On the other hand, if one of Bob's detectors clicks, Alice still learns $n-\log_2(2^n-1)$ bits of information.} \section{Discussion}\label{sec:discussion} A lesson learned from dense coding is that sending one qubit is equivalent to sending two classical bits; another lesson, from information causality, is that, if there is no quantum communication, the information gain is equal to the amount of classical communication. Notably, dense coding and random access codes (i) are one-way communication protocols, and (ii) exploit quantum entanglement as a physical resource. To the best of our knowledge, the protocols of the SD and GSD games are the first ones for two-way signaling quantum communication. Therein, the spatial coherent superposition and wave-particle duality can be regarded as physical resources. From the two-way signaling aspect, these two quantum properties of a photon are more beneficial than quantum entanglement. In the proposed two-way signaling protocol, sending a photon through an $n$-level circuit is equivalent to sending $n+1$ bits, where $n$ can be arbitrarily large. Which agent can obtain the other's information depends on the local bit strings ${\mathbf x}$ and ${\mathbf y}$, and on the pre-assignment of these $2^{n}$ detectors to Alice or Bob.
{In any case, there is always a detector that clicks, which indicates that either $I(Y;A|X)= n$ or $I(X;B|Y)= n$ must hold, and hence we can conclude that $n\leq I(Y;A|X) +I(X;B|Y)\leq n+1$.} From the causal perspective, the optimal information gain in the GSD game can be explained in a two-fold way. Firstly, an information carrier is consumed therein. Notably, regarding the classical communication, information causality states that the information gain cannot exceed the amount of classical communication. Thus sending-and-receiving a photon can result in a one-bit information gain. Secondly, the two distant local operations at the same level fully determine which way the photon enters the next level, and this contributes one bit of information. In other words, only when the coherently superposed parts meet at BS2 {of an MZ interferometer in every level}, as shown in Fig.~\ref{fig:1b}, does the which-way uncertainty between the two beam splitters of the MZ interferometer vanish, consequently producing one bit of information. That is, each level contributes a one-bit information gain. In total, at most $(n+1)$ bits of information can be generated as a photon passes through an $n$-level circuit. For example, in the Elitzur-Vaidman bomb tester, a single photon is emitted, but one of its coherent parts is blocked and there is no interference at the second beam splitter of the MZ interferometer~\cite{Elitzur1993}. In this case, only one bit of information (whether the bomb explodes) is accessible. On the other hand, in the simple one-way SD game, assume that the bit $y_{1}=1$ is public, and the bit $x_{1}$ is unknown to Bob~\cite{SD18}. To inform Bob, Alice performs local operations on the accessible coherent superposed part. It is the interference at the second beam splitter that brings Bob the bit value of $x_{1}$.
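The entropy bookkeeping behind the bound $I(Y;A|X)+I(X;B|Y)\leq n+1$ of Sec.~\ref{sec:GSD} can be checked numerically. The following sketch (the helper name and the sample size $n=4$ are our own choices for illustration) evaluates $2n-\frac{m}{2^n}\log_2 m-\bigl(1-\frac{m}{2^n}\bigr)\log_2(2^n-m)$ over all detector splits and confirms that the even split $m=2^{n-1}$ attains the optimum $n+1$:

```python
import math

def total_gain(n, m):
    """H(A) + H(B) = 2n - (m/2^n) log2(m) - (1 - m/2^n) log2(2^n - m),
    where Bob holds m of the 2^n detectors and Alice the rest (0 < m < 2^n)."""
    N = 2 ** n
    return 2 * n - (m / N) * math.log2(m) - (1 - m / N) * math.log2(N - m)

n = 4
gains = {m: total_gain(n, m) for m in range(1, 2 ** n)}
best_m = max(gains, key=gains.get)
assert best_m == 2 ** (n - 1)                # the even split is optimal
assert abs(gains[best_m] - (n + 1)) < 1e-12  # the optimum equals n + 1
```

The function is symmetric under $m \leftrightarrow 2^n-m$, reflecting that exchanging the roles of Alice and Bob leaves the total gain unchanged.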
\section{Implementation}\label{sec:implementation} Since the complexity of the $n$-level circuit grows exponentially in $n$ (or linearly in $n$ if the scheme of Fig.~\ref{fig:2detectors} is used), it is impossible to realize the optical circuit for arbitrarily large $n$ with imperfect devices. Noisy components, such as the photon source, beam splitters, and detectors, will cause the photon to decay and hence limit the possible circuit level. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth] {Fig6.png} \caption{The success rate of our GSD protocol versus the number of circuit levels. } \label{fig:loss-rate} \end{figure} Here we estimate the performance of the protocol when it is implemented under realistic experimental conditions. We consider the following error sources. A realistic pulsed single-photon source has a photon-number probability $P(n)$ of generating $n$ photons per pulse. A quantum-dot single-photon source can achieve $P(1)=0.72$ \cite{SSW17}. The beam splitters in experiments may not have a perfectly even split ratio between transmission and reflection, but an uneven split ratio can be compensated with experimental techniques, such as using wave plates together with polarizing beam splitters. Therefore, we assume the split ratio is perfectly even. We also assume the phase errors introduced by phase shifters/modulators are negligible compared to other error sources. This is justifiable when using piezoelectric phase shifters, which can achieve a phase accuracy better than $2\pi /500$. We consider the optical loss to be the dominating error source; it can result from the non-$100\%$ reflectivity of mirrors and the imperfect anti-reflection (AR) coatings of all transmissive optical components. We estimate the optical loss $\epsilon$ per stage to be $1.5\%$: for example, a wave plate has two AR-coated surfaces and a beam splitter one AR-coated surface, each contributing $0.5\%$ loss.
We assume the detection efficiency $\eta_D$ of the detectors to be $85\%$, which is achievable using superconducting nanowire single-photon detectors (SNSPDs). The contribution from the dark counts of the detectors is negligible when using low-dark-count detectors such as SNSPDs or applying gating techniques. Using these numbers, we obtain the success rate of our protocol for $n$ stages to be $P(1)\left(1-\epsilon\right)^n \eta_D$, as shown in Fig.~\ref{fig:loss-rate}. \emph{Acknowledgments.}---\, CYL was supported by the Young Scholar Fellowship Program of the Ministry of Science and Technology (MOST) in Taiwan, under Grant\,MOST108-2636-E-009-004. YCC was supported by MOST108-2218-E-009-035-MY3. \begin{thebibliography}{25} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Shannon}(1948)}]{Shannon48} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~E.}\ \bibnamefont {Shannon}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Bell System Technical Journal}\ }\textbf {\bibinfo {volume} {27}},\ \bibinfo {pages} {379} (\bibinfo {year} {1948})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Landauer}}(1991)}]{Lan91} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {{Landauer}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Today}\ }\textbf {\bibinfo {volume} {44}},\ \bibinfo {pages} {23} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hardy}(2001)}]{Har01} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Hardy}},\ }\href@noop {} {\ (\bibinfo {year} {2001})},\ \eprint {http://arxiv.org/abs/arXiv:quant-ph/0101012} {arXiv:quant-ph/0101012} \BibitemShut {NoStop} \bibitem [{\citenamefont {Daki\'{c}}\ and\ \citenamefont {\v{C}. Brukner}(2011)}]{DB10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Daki\'{c}}}\ and\ \bibinfo {author} {\bibnamefont {\v{C}.
Brukner}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Deep Beauty: Understanding the Quantum World through Mathematical Innovation}\ ,\ \bibinfo {pages} {365}} (\bibinfo {year} {2011})},\ \eprint {http://arxiv.org/abs/arXiv:0911.0695} {arXiv:0911.0695} \BibitemShut {NoStop} \bibitem [{\citenamefont {Chiribella}\ \emph {et~al.}(2011)\citenamefont {Chiribella}, \citenamefont {D'Ariano},\ and\ \citenamefont {Perinotti}}]{CDP11} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Chiribella}}, \bibinfo {author} {\bibfnamefont {G.~M.}\ \bibnamefont {D'Ariano}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Perinotti}},\ }\href {\doibase 10.1103/PhysRevA.84.012311} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {012311} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Masanes}\ \emph {et~al.}(2013)\citenamefont {Masanes}, \citenamefont {M{\"u}ller}, \citenamefont {Augusiak},\ and\ \citenamefont {P{\'e}rez-Garc{\'\i}a}}]{MMA+13} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Masanes}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {M{\"u}ller}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Augusiak}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {P{\'e}rez-Garc{\'\i}a}},\ }\href {\doibase 10.1073/pnas.1304884110} {\bibfield {journal} {\bibinfo {journal} {Proceedings of the National Academy of Sciences}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {16373} (\bibinfo {year} {2013})},\ \eprint {http://arxiv.org/abs/https://www.pnas.org/content/110/41/16373.full.pdf} {https://www.pnas.org/content/110/41/16373.full.pdf} \BibitemShut {NoStop} \bibitem [{\citenamefont {Czekaj}\ \emph {et~al.}(2017)\citenamefont {Czekaj}, \citenamefont {Horodecki}, \citenamefont {Horodecki},\ and\ \citenamefont {Horodecki}}]{CHHH17} \BibitemOpen \bibfield
{author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Czekaj}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Horodecki}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Horodecki}},\ }\href {\doibase 10.1103/PhysRevA.95.022119} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {022119} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sun}\ \emph {et~al.}(2019)\citenamefont {Sun}, \citenamefont {Zhou},\ and\ \citenamefont {Yu}}]{SZY19} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.-L.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Zhou}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Yu}},\ }\href@noop {} {\ (\bibinfo {year} {2019})},\ \bibinfo {note} {arXiv:1906.11807}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}\ and\ \citenamefont {Wiesner}(1992)}]{BW92} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Bennett}}\ and\ \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Wiesner}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {69}},\ \bibinfo {pages} {2881} (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Holevo}(1973)}]{Hol73} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Holevo}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Probl. Peredachi Inf.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {3} (\bibinfo {year} {1973})},\ \bibinfo {note} {{E}nglish translation \emph{Problems Inform. Transmission}, vol. 9, no. 
3, pp.177--183, 1973}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cleve}\ \emph {et~al.}(1999)\citenamefont {Cleve}, \citenamefont {van Dam}, \citenamefont {Nielsen},\ and\ \citenamefont {Tapp}}]{CDNT99} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Cleve}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {van Dam}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Nielsen}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Tapp}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Quantum Computing and Quantum Communications}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {C.~P.}\ \bibnamefont {Williams}}}\ (\bibinfo {publisher} {Springer Berlin Heidelberg},\ \bibinfo {address} {Berlin, Heidelberg},\ \bibinfo {year} {1999})\ pp.\ \bibinfo {pages} {61--74}\BibitemShut {NoStop} \bibitem [{\citenamefont {Devetak}\ \emph {et~al.}(2004)\citenamefont {Devetak}, \citenamefont {Harrow},\ and\ \citenamefont {Winter}}]{DHW04} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Devetak}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Harrow}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Winter}},\ }\href {\doibase 10.1103/PhysRevLett.93.230504} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {230504} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nayak}\ and\ \citenamefont {Salzman}(2006)}]{NS06} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Nayak}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Salzman}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. 
ACM}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {184} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Hsieh}}\ and\ \citenamefont {{Wilde}}(2010)}]{HW10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{Hsieh}}}\ and\ \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont {{Wilde}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {IEEE Trans. Inf. Theory}\ }\textbf {\bibinfo {volume} {56}},\ \bibinfo {pages} {4705} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Datta}\ and\ \citenamefont {Hsieh}(2011)}]{DH11} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Datta}}\ and\ \bibinfo {author} {\bibfnamefont {M.-H.}\ \bibnamefont {Hsieh}},\ }\href {\doibase 10.1088/1367-2630/13/9/093042} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {093042} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lai}\ and\ \citenamefont {Chung}(2019)}]{LC18c} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lai}}\ and\ \bibinfo {author} {\bibfnamefont {K.-M.}\ \bibnamefont {Chung}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Proc. IEEE Intl. Symp. Inf. 
Theory}}}\ (\bibinfo {year} {2019})\ p.\ \bibinfo {pages} {2997},\ \bibinfo {note} {arXiv:1809.10694}\BibitemShut {NoStop} \bibitem [{\citenamefont {Paw{\l}owski}\ \emph {et~al.}(2009)\citenamefont {Paw{\l}owski}, \citenamefont {Paterek}, \citenamefont {Kaszlikowski}, \citenamefont {Scarani}, \citenamefont {Winter},\ and\ \citenamefont {\.{Z}ukowski}}]{PPK+09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Paw{\l}owski}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Paterek}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kaszlikowski}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Scarani}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Winter}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {\.{Z}ukowski}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {461}},\ \bibinfo {pages} {1101--1104} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Almeida}\ \emph {et~al.}(2010)\citenamefont {Almeida}, \citenamefont {Bancal}, \citenamefont {Brunner}, \citenamefont {Ac\'{\i}n}, \citenamefont {Gisin},\ and\ \citenamefont {Pironio}}]{ABB+10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~L.}\ \bibnamefont {Almeida}}, \bibinfo {author} {\bibfnamefont {J.-D.}\ \bibnamefont {Bancal}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Brunner}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ac\'{\i}n}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pironio}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
Lett.}\ }\textbf {\bibinfo {volume} {104}},\ \bibinfo {pages} {230404} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Del~Santo}\ and\ \citenamefont {Daki\ifmmode \acute{c}\else \'{c}\fi{}}(2018)}]{SD18} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Del~Santo}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Daki\ifmmode \acute{c}\else \'{c}\fi{}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {060503} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Massa}}\ \emph {et~al.}(2018)\citenamefont {{Massa}}, \citenamefont {{Moqanaki}}, \citenamefont {{Del Santo}}, \citenamefont {{Dakic}},\ and\ \citenamefont {{Walther}}}]{MMD+18} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {{Massa}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {{Moqanaki}}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {{Del Santo}}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {{Dakic}}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {{Walther}}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {2018 Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR)}}}\ (\bibinfo {year} {2018})\ pp.\ \bibinfo {pages} {1--2}\BibitemShut {NoStop} \bibitem [{\citenamefont {Renninger}(1953)}]{Ren53} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Renninger}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Zeitschrift für Physik}\ }\textbf {\bibinfo {volume} {136}},\ \bibinfo {pages} {251} (\bibinfo {year} {1953})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Baere}(2005)}]{Bae05} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont {Baere}},\ }\href@noop {} {\ (\bibinfo {year} {2005})},\ \bibinfo {note} {arXiv:quant-ph/0504031}\BibitemShut {NoStop} \bibitem
[{\citenamefont {Elitzur}\ and\ \citenamefont {Vaidman}(1993)}]{Elitzur1993} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Elitzur}}\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Vaidman}},\ }\href {\doibase 10.1007/BF00736012} {\bibfield {journal} {\bibinfo {journal} {Foundations of Physics}\ }\textbf {\bibinfo {volume} {23}},\ \bibinfo {pages} {987} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horvat}\ and\ \citenamefont {Daki\'{c}}(2019)}]{HD19} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Horvat}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Daki\'{c}}},\ }\href@noop {} {\ (\bibinfo {year} {2019})},\ \Eprint {http://arxiv.org/abs/arXiv:2003.12114} {arXiv:2003.12114} \BibitemShut {NoStop} \bibitem [{\citenamefont {Senellart}\ \emph {et~al.}(2017)\citenamefont {Senellart}, \citenamefont {Solomon},\ and\ \citenamefont {White}}]{SSW17} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Senellart}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Solomon}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {White}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Nanotechnology}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {1026} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \title{Questions on surface braid groups} \author{Paolo Bellingeri} \author{Eddy Godelle} \address{Univ. Pisa, Dipartimento di Matematica, 56205 Pisa, Italy} \email{[email protected]} \address{Univ. Caen, Laboratoire de Math\'ematiques Nicolas Oresme, 14032 Caen, France} \email{[email protected]} \begin{abstract} We provide new group presentations for surface braid groups which are positive. We study some properties of such presentations and we solve the conjugacy problem in a particular case. \end{abstract} \maketitle \section{Introduction and Motivation} Let $\Sigma_{g,p}$ be an orientable surface of genus $g$ with $p$ boundary components. For instance, $\Sigma_{0,0}$ is the 2-sphere, $\Sigma_{1,0}$ is the torus, and $\Sigma_{0,1}$ corresponds to the disk. A \emph{geometric braid} on $\Sigma_{g,p}$ based at a finite set $\mathcal{P}=\{P_1, \dots, P_n\}$ of distinct points of $\Sigma_{g,p}$ is a collection $B=(\psi_1, \dots, \psi_n)$ of $n\geq 2$ paths from $[0,1]$ to $\Sigma_{g,p}$ such that $\psi_i(0)= P_i$, $\psi_i(1) \in \mathcal{P}$ and the points $\psi_1(t), \dots, \psi_n(t)$ are pairwise distinct for all $t \in [0, 1]$. Two braids are considered equivalent if they are isotopic. The usual product of paths defines a group structure on the set of equivalence classes of braids. This group does not depend, up to isomorphism, on the choice of $\mathcal{P}$. It is called \emph{the (surface) braid group on $n$ strands} on $\Sigma_{g,p}$ and denoted by $B_n(\Sigma_{g,p})$. The group $B_n(\Sigma_{0,1})$ is the classical braid group $B_n$ on $n$ strings. Some elements of $B_n(\Sigma_{g,p})$ are shown in Figure \ref{gen:sb1}. The braid $\sigma_i$ corresponds to the standard generator of $B_n$ and it can be represented by a geometric braid on $\Sigma_{g,p}$ where all the strands are trivial except the $i$-th one and the $(i+1)$-th one. The $i$-th strand goes from $P_i$ to $P_{i+1}$ and the $(i+1)$-th strand goes from $P_{i+1}$ to $P_{i}$ according to Figure 1.
The loops $\delta_1, \dots, \delta_{p+2g-1}$ based at $P_1$ in Figure 1 represent standard generators of $\pi_1(\Sigma_{g,p})$. By definition, $B_1(\Sigma_{g,p})$ is isomorphic to $\pi_1(\Sigma_{g,p})$. We can also consider $\delta_1, \dots, \delta_{p+2g-1}$ as braids on $n$ strands on $\Sigma_{g,p}$, where the last $n-1$ strands are trivial. It is well known since E. Artin (\cite{Art}) that the braid group $B_n$ has a positive presentation (see for instance \cite{MurKur} Chapter 2 Theorem 2.2), i.e. a group presentation which involves only generators and not their inverses. Hence one can associate a (braid) monoid $B_n^+$ with the same presentation, but as a monoid presentation. It turns out that the braid monoid $B_n^+$ is a Garside monoid (see \cite{Deh}), that is a monoid with a good divisibility structure, and that the braid group $B_n$ is the group of fractions of the monoid $B_n^+$. As a consequence, the natural morphism of monoids from $B_n^+$ to $B_n$ is into, and we can solve the word problem, the conjugacy problem and obtain normal forms in $B_n$ (see \cite{BrS,Cha,Deh,Del,Gar}). These results extend to Artin-Tits groups of spherical type, which are a well-known algebraic generalization of the braid group $B_n$ (\cite{BrS,Cha,Deh,Del}).\\ In the case of surface braid groups $B_n(\Sigma_{g,p})$, some group presentations are known but they are not positive. Furthermore, questions such as the conjugacy problem are not solved in the general case. The word problem in surface braid groups is known to be solvable (see \cite{Gon}), even if the algorithms are not as efficient as those proposed for the braid group $B_n$.\\ In this note we provide positive presentations for $B_n(\Sigma_{g,p})$ and we address questions related to the conjugacy problem of surface braid groups. We do not discuss the case of $B_n(\Sigma_{0,0})$, the braid group on the $2$-sphere; this is a particular case with specific properties.
For instance, if $\Sigma$ is an oriented surface, the surface braid group $B_n(\Sigma)$ has torsion elements if and only if $\Sigma$ is the $2$-sphere (see \cite{GilBus} page 277, \cite{FadBus} page 255, and \cite{ParRol} Proposition 1.5).\\ In Sections 2 and 3 we focus on braid groups on surfaces with and without boundary components, respectively. In Section 4, we investigate the special case of $B_2(\Sigma_{1,0})$ and we solve the word problem and the conjugacy problem for this group. \vspace{5pt} \epsfysize=7truecm \epsfxsize=13truecm \centerline{\epsfbox{surface.eps}} \label{gen:sb1} \centerline{Figure 1: some braid elements} \section{Braid groups on surfaces with boundary components} In this section we investigate braid groups on oriented surfaces with a positive number of boundary components. Our first objective is to prove Theorem \ref{thm:presbg1}: \begin{thm} \label{thm:presbg1} Let $n$ and $p$ be positive integers. Let $g$ be a non-negative integer. Then, the group $B_n(\Sigma_{g,p})$ admits the following group presentation:\\ $\bullet$ Generators: $\sigma_1, \dots, \sigma_{n-1},\delta_1,\cdots , \delta_{2g+p-1}$;\\ $\bullet$ Relations:\\ -Braid relations:\\ \begin{tabular}{lll} $(BR1)$ &$\sip \, \sjp = \sjp \, \sip$&for $| i-j | \ge 2$;\\ $(BR2)$ &$\sip \, \siip \, \sip = \siip \, \sip \, \siip$&for $1 \le i\le n-2$.\end{tabular}\\ - Commutative relations between surface braids:\\ \begin{tabular}{lll} $(CR1)$ &$\delta_r \sigma_i= \sigma_i \delta_r$ &for $i\not= 1$; $1 \le r \le 2g+p-1$;\\ $(CR2)$ &$\delta_{r} \sigma_1 \delta_{r} \sigma_1 = \sigma_1 \delta_{r} \sigma_1 \delta_{r}$& $1\leq r\leq 2g+p-1$;\\ $(CR3)$&$\delta_r \sigma_1 \delta_r \delta_{s} \sigma_1 = \sigma_1 \delta_r\delta_{s} \sigma_1 \delta_r$ & for $1 \le r<s \le 2g+p-1$ with\\&& $(r,s)\neq (p+2i, p+2i+1)$, $0\leq i \leq g-1$.
\end{tabular}\\ - Skew commutative relations on the handles:\\ $(SCR1)$ $\sigma_1 \delta_{r+1} \sigma_1 \delta_{r} \sigma_1= \delta_{r} \sigma_1 \delta_{r+1}$ for $r = p+2i$ where $0 \le i \le g-1$. \end{thm} The above presentation can be compared to the presentation of $B_{g,n}$ given in \cite{HaeLam} page 18. \proof Let us denote by $\widetilde{B_n(\Sigma_{g,p})}$ the group defined by the presentation given in Theorem~\ref{thm:presbg1}. We prove that the group $\widetilde{B_n(\Sigma_{g,p})}$ is isomorphic to the group $B_n(\Sigma_{g,p})$ using the presentation given in Theorem~\ref{thm:pres}. Let $\psi:\{\gend\}\to \{\sigma_1, \dots, \sigma_{n-1},\delta_1,\cdots , \delta_{2g+p-1}\}$ be the set-map defined by $\psi(\sigma_i)=\sigma_i$ for $i=1, \dots, n-1$, $\psi(a_r)=\delta_{p+2(r-1)}^{-1}$, $\psi(b_r)=\delta_{p+2(r-1)+1}^{-1}$ for $r=1, \dots, g$ and $\psi(z_j)=\delta_j^{-1}$ for $j=1, \dots, p-1$. We claim that $\psi$ extends to a homomorphism of groups $\psi: B_n(\Sigma_{g,p}) \to \widetilde{B_n(\Sigma_{g,p})}$. We have to verify that the images by $\psi$ of the braid relations and of the relations of type (R1)-(R8) hold in $\widetilde{B_n(\Sigma_{g,p})}$. It is enough to verify that \stepcounter{equation} \begin{eqnarray} \label{eqaprouver}\sigma_1^{-1} \delta_r^{-1} \sigma_1 \delta_s^{-1} = \delta_s^{-1} \sigma_1^{-1} \delta_r^{-1} \sigma_1\end{eqnarray} for $1\leq r<s\leq 2g+p-1$ and $(r,s)\neq (p+2k,p+2k+1)$, which corresponds to the image by $\psi$ in $\widetilde{B_n(\Sigma_{g,p})}$ of the relations of type (R3), (R6) and (R7); the other cases are true as they are relations of the presentation of $\widetilde{B_n(\Sigma_{g,p})}$.\\ The relations of type (CR3) can be written $\delta_r \sigma_1 \delta_r \delta_s = \sigma_1 \delta_r \delta_s \sigma_1 \delta_r \sigma_{1}^{-1}$.
From the relations of type (CR1) we deduce that, in $\widetilde{B_n(\Sigma_{g,p})}$, the equalities $\delta_r\sigma_1 \delta_r\delta_s =\delta_r\sigma_1\delta_r\sigma_1\sigma_{1}^{-1}\delta_s = \sigma_1 \delta_r\sigma_1\delta_r\sigma_{1}^{-1}\delta_s$ hold. Hence we obtain $\sigma_1\delta_r\delta_s\sigma_1\delta_r\sigma_{1}^{-1}=\sigma_1\delta_r\sigma_1\delta_r \sigma_{1}^{-1}\delta_s$. From this equality, we derive that $\delta_s \sigma_1 \delta_r \sigma_{1}^{-1}=\sigma_1 \delta_r \sigma_{1}^{-1} \delta_s$, and finally we get the relations (\ref{eqaprouver}). On the other hand, consider the set-map $\overline{\psi}$ defined from $\{\sigma_1, \dots, \sigma_{n-1},\delta_1,\cdots , \delta_{2g+p-1}\}$ to $\{\genh\}$ by $\overline{\psi}(\sigma_i)=\sigma_i$ for $i=1, \dots, n-1$, $\overline{\psi}(\delta_j)=z_j^{-1}$ for $j=1, \dots, p-1$, $\overline{\psi}(\delta_{p+2(r-1)})=a_r^{-1}$ and $\overline{\psi}(\delta_{p+2(r-1)+1})=b_r^{-1}$ for $r=1, \dots, g$. We prove that $\overline{\psi}$ extends to a homomorphism of groups from $\widetilde{B_n(\Sigma_{g,p})}$ to $B_n(\Sigma_{g,p})$. Since the braid relations and the images by $\overline{\psi}$ of the relations of type $(ER)$, $(CR1)$ and $(CR2)$ are verified, it suffices to check that the equalities corresponding to relations of type $(CR3)$ hold in $B_n(\Sigma_{g,p})$. We verify that the equality $a_r^{-1} \sigma_1 a_r^{-1} a_{s}^{-1} \sigma_1 = \sigma_1 a_r^{-1} a_{s}^{-1} \sigma_1 a_r^{-1}$ for $1 \le s < r \le g$ holds in $B_n(\Sigma_{g,p})$. The other cases can easily be verified by the reader. From the relations of type (R2), it follows that $ a_r^{-1} \sigma_1 a_r^{-1} = \sigma_1 a_r^{-1} \sigma_1 a_r^{-1} \sigma_1^{-1}$; thus we have $a_r^{-1} \sigma_1 a_r^{-1} a_{s}^{-1} \sigma_1 = \sigma_1 a_r^{-1} \sigma_1 a_r^{-1} \sigma_1^{-1} a_{s}^{-1} \sigma_1$.
Applying relations of type (R3) we deduce that $ a_r^{-1} \sigma_1^{-1} a_{s}^{-1} \sigma_1= \sigma_1^{-1} a_{s}^{-1} \sigma_1 a_r^{-1}$ and therefore the equalities $a_r^{-1} \sigma_1 a_r^{-1} a_{s}^{-1} \sigma_1 = \sigma_1 a_r^{-1} a_{s}^{-1} \sigma_1 a_r^{-1}$ hold in $B_n(\Sigma_{g,p})$. Then, the morphism $\overline{\psi}$ from $\widetilde{B_n(\Sigma_{g,p})}$ to $B_n(\Sigma_{g,p})$ is well defined and it is the inverse of $\psi$. Hence, $B_n(\Sigma_{g,p})$ is isomorphic to $\widetilde{B_n(\Sigma_{g,p})}$. \hspace*{\fill}$\Box$ \vspace{10pt} We remark that the presentation given in Theorem \ref{thm:presbg1} is positive and has fewer types of relations than the presentation given in Theorem \ref{thm:pres}. \begin{lem} \label{lem:reduction} Let $G$ be a group and let $\sigma,\delta,\delta'$ be in $G$.\\(i) If (a) $\delta(\sigma\delta\delta'\sigma) = (\sigma\delta\delta'\sigma)\delta$, (b) $\sigma\delta\sigma\delta = \delta\sigma\delta\sigma$ and (c) $\sigma\delta'\sigma\delta' = \delta'\sigma\delta'\sigma$, then $\delta'(\sigma\delta\delta'\sigma) = (\sigma\delta\delta'\sigma)\delta'$.\\ (ii) If (a) $\delta(\sigma\delta\delta'\sigma) = \sigma(\delta\delta'\sigma\delta)$, (b) $\delta'(\sigma\delta\delta'\sigma) = (\sigma\delta\delta'\sigma)\delta'$ and (c) $\sigma\delta'\sigma\delta' = \delta'\sigma\delta'\sigma$, then $\sigma\delta\sigma\delta = \delta\sigma\delta\sigma$. \\ (iii) In the presentation of Theorem \ref{thm:presbg1}, we can replace relation $(CR3)$ by: \\ $(CR3')$ $\delta_s \sigma_1 \delta_r \delta_{s} \sigma_1 = \sigma_1 \delta_r\delta_{s} \sigma_1 \delta_s$ \\ for $1 \le r<s \le 2g+p-1$ with $(r,s)\neq (p+2i, p+2i+1)$, $0\leq i \leq g-1$. \end{lem} \proof (i) Assume (a) $\delta\sigma\delta\delta'\sigma = \sigma\delta\delta'\sigma\delta$, (b) $\sigma\delta\sigma\delta = \delta\sigma\delta\sigma$ and (c) $\sigma\delta'\sigma\delta' = \delta'\sigma\delta'\sigma$.
Then, $\delta'\sigma\delta\delta'\sigma = \delta^{-1}\sigma^{-1}\sigma\delta\delta'\sigma\delta\delta'\sigma = \delta^{-1}\sigma^{-1}\delta\sigma\delta\delta'\sigma\delta'\sigma = \delta^{-1}\sigma^{-1}\delta\sigma\delta\sigma\delta'\sigma\delta' = \delta^{-1}\sigma^{-1} \sigma\delta\sigma\delta \delta'\sigma\delta' = \sigma\delta \delta'\sigma\delta'$. \\ (ii) Assume (a) $\delta(\sigma\delta\delta'\sigma) = \sigma(\delta\delta'\sigma\delta)$, (b) $\delta'(\sigma\delta\delta'\sigma) = (\sigma\delta\delta'\sigma)\delta'$ and (c) $\sigma\delta'\sigma\delta' = \delta'\sigma\delta'\sigma$. Then $$\displaylines {\sigma\delta\sigma\delta = \sigma\delta\delta'\sigma\delta(\delta'\sigma\delta)^{-1}\sigma\delta = \delta\sigma\delta\delta'\sigma {\delta}^{-1}\sigma^{-1}{\delta'}^{-1}\sigma\delta = \delta\sigma\delta(\sigma\delta'\sigma\delta'\sigma^{-1}{\delta'}^{-1}) {\delta}^{-1}\sigma^{-1}{\delta'}^{-1}\sigma\delta = \cr \delta\sigma\delta\sigma\delta'\sigma\delta'{\delta'}^{-1}\sigma^{-1}{\delta'}^{-1}{\delta}^{-1}\sigma^{-1}\sigma\delta = \delta\sigma\delta\sigma.}$$\noindent (iii) is a consequence of (i). \hspace*{\fill}$\Box$ \vspace{10pt} Since the relations of the presentation of $B_n(\Sigma_{g,p})$ are positive, one can define a monoid with the same presentation, but as a monoid presentation. It is easy to see that the monoid we obtain does not inject in $B_n(\Sigma_{g,p})$, even if we add the relations of type $(CR3')$ to the presentation given in Theorem \ref{thm:presbg1}.
In fact the following relations, $$(CR3)_{k} \ \ \delta_r \sigma_1 \delta_r \delta^k_{s} \sigma_1 = \sigma_1 \delta_r\delta^k_{s} \sigma_1 \delta_r$$ for $1 \le r<s \le 2g+p-1$ with $(r,s)\neq (p+2i, p+2i+1)$, $0\leq i \leq g-1$ and $k\in \mathbb{N}^*$, and $$(CR3')_k \ \ \delta_s \sigma_1 \delta^k_r \delta_{s} \sigma_1 = \sigma_1 \delta^k_r\delta_{s} \sigma_1 \delta_s$$ for $1 \le r<s \le 2g+p-1$ with $(r,s)\neq (p+2i, p+2i+1)$, $0\leq i \leq g-1$ and $k\in \mathbb{N}^*$, are true in $B_n(\Sigma_{g,p})$ for each positive integer $k$, but they are false in the monoid for $k$ greater than 1: no relation of the presentation can be applied to the left side of the equalities. Then, starting from the left side of the equality for $k > 1$, we cannot obtain the right side of the equality by using the relations of the monoid presentation only. \begin{quest} Let $B^*_n(\Sigma_{g,p})$ be the monoid defined by the presentation of Theorem \ref{thm:presbg1} with the extra relations $(CR3)_k$, $(CR3')_k$ for $k\in\mathbb{N}^*$. Is the canonical homomorphism $\varphi$ from $B^*_n(\Sigma_{g,p})$ to $B_n(\Sigma_{g,p})$ into? \end{quest} We remark that we can define a length function $\ell$ on $B^*_n(\Sigma_{g,p})$: if $F^*$ is the free monoid based on $\sigma_1,\cdots,\sigma_{n-1},\delta_1,\cdots, \delta_{2g+p-1}$, if $l: F^* \to \mathbb{N}$ is the canonical length function and if $w\mapsto \overline{w}$ is the canonical morphism from $F^*$ onto $B^*_n(\Sigma_{g,p})$ then, for each $g$ in $B^*_n(\Sigma_{g,p})$, one has $\sup\{l(w)\mid w\in F^*,\ \overline{w} = g\} < +\infty$; furthermore, if we set $\ell(g) = \sup\{l(w)\mid w\in F^*,\ \overline{w} = g\}$, then for $g_1,g_2$ in $B^*_n(\Sigma_{g,p})$ we have $\ell(g_1g_2) \leq \ell(g_1)+\ell(g_2)$.\\ Now, let us consider the particular case of planar surfaces. \begin{prop}\label{propallcocksuite} Let $n,p$ be positive integers with $n\geq p-1$.
Let $I\subset \{1,\cdots, n\}$ with $Card(I) = p-1$.\\ Then $B_n(\Sigma_{0,p})$ admits the following presentation:\\ $\bullet$ Generators: $\sigma_1,\cdots,\sigma_{n-1}$ and $\rho_i$ for $i\in I$;\\$\bullet$ Relations:\\ \begin{tabular}{lll} $(BR1)$ &$\sip \, \sjp = \sjp \, \sip$&for $1 \le i,j\le n-1$ with $| i-j | \ge 2$;\\ $(BR1)'$ &$\rho_r\rho_s = \rho_s\rho_r$&$r,s\in I$, $r\neq s$;\\ $(BR1)''$ &$\rho_r\sigma_i = \sigma_i\rho_r$&$r\in I$; $1 \le i\le n-1$, $i\ne r-1,r$.\\ $(BR2)$ &$\sip \, \siip \, \sip = \siip \, \sip \, \siip$&for $1 \le i\le n-2$;\\ $(BR3)$ &$\sigma_i\rho_r\sigma_i\rho_r = \rho_r\sigma_i\rho_r\sigma_i$&$r\in I$, $i = r,r-1$;\\ $(BR3)'$ &$(\sigma_{r-1}\sigma_r)\rho_r\sigma_{r-1}\rho_r = \rho_r(\sigma_{r-1}\sigma_r)\rho_r\sigma_{r-1}$&$r\in I$; $r\neq 1,n$. \end{tabular}\end{prop} \proof Consider the presentation of Theorem \ref{thm:presbg1}. Let $I = \{r_1<r_2<\cdots <r_{p-1}\}$ and set $\rho_{r_j} = (\sigma_{r_j-1}\cdots \sigma_1)\delta_{p-j}(\sigma_{r_j-1}\cdots \sigma_1)^{-1}$. Relations $(BR1)''$ are equivalent to relations $(CR1)$ with $i\neq r$, by using braid relations. Using $(CR2)$, we get $(BR1)'\iff (CR3)$. Using braid relations $(BR1)$ and $(BR2)$, the relations $(CR2)$ are equivalent to relations $(BR3)$ by conjugation by $(\sigma_{r-1}\sigma_r)\cdots(\sigma_1\sigma_2)$ and $(\sigma_{r-2}\sigma_{r-1})\cdots(\sigma_1\sigma_2)$ when $i = r-1$ and $i = r$ respectively. Now consider the relation $(CR1)$ for $i = r$. By conjugation by $\sigma_{r-1}\cdots \sigma_1$, we get $\sigma_{r-1}\sigma_r\sigma_{r-1}^{-1}\rho_r = \rho_r\sigma_{r-1}\sigma_r\sigma_{r-1}^{-1}$ and then $\sigma_{r-1}\sigma_r\rho_r\sigma_{r-1} = \sigma_r\rho_r\sigma_{r-1}\sigma_r$. It follows that the relation of type $(CR1)$ for $i = r$ is equivalent to the relation $\sigma_{r-1}\sigma_r\rho_r\sigma_{r-1} = \sigma_r\rho_r\sigma_{r-1}\sigma_r$.
This last relation is equivalent to relation $(BR3)'$ using relation $(BR3)$: $$\displaylines{\sigma_{r-1}\sigma_r\rho_r\sigma_{r-1} = \sigma_r\rho_r\sigma_{r-1}\sigma_r \iff \sigma_{r-1}\sigma_r\rho_r\sigma_{r-1}\rho_r\sigma_{r-1} = \sigma_r\rho_r\sigma_{r-1}\sigma_r\rho_r\sigma_{r-1} \iff \cr \sigma_{r-1}\sigma_r\sigma_{r-1}\rho_r\sigma_{r-1}\rho_r = \sigma_r\rho_r\sigma_{r-1}\sigma_r\rho_r\sigma_{r-1} \iff \sigma_r \sigma_{r-1}\sigma_r\rho_r\sigma_{r-1} \rho_r= \sigma_r\rho_r\sigma_{r-1}\sigma_r\rho_r\sigma_{r-1} \iff\cr \sigma_{r-1}\sigma_r\rho_r\sigma_{r-1}\rho_r= \rho_r\sigma_{r-1}\sigma_r\rho_r\sigma_{r-1}.}$$ \hspace*{\fill}$\Box$ \vspace{10pt} \begin{cor}(\cite{All} Table 1.1)\\ $B_n(\Sigma_{0,3})$ is isomorphic to the Affine Artin group of type $\tilde{B}(n+1)$ for $n\geq 2$. \end{cor} \proof We apply Proposition \ref{propallcocksuite} with $I = \{1,n\}$. \hspace*{\fill}$\Box$ \vspace{10pt} Recall that a monoid $M$ is cancellative if the property ``$\forall x,y,z,t \in M, (xyz = xtz)\Rightarrow (y=t)$'' holds in $M$. \begin{quest} Let $B^+_n(\Sigma_{0,p})$ be the monoid defined by the presentation given in Proposition \ref{propallcocksuite}, considered as a monoid presentation.\\(i) Is the monoid $B^+_n(\Sigma_{0,p})$ cancellative?\\(ii) Is the natural homomorphism from $B^+_n(\Sigma_{0,p})$ to $B_n(\Sigma_{0,p})$ injective? \end{quest} For $p = 1$ and $p=2$, the groups $B_n(\Sigma_{0,p})$ are isomorphic to the braid group $B_n$ and the Artin-Tits group of type $B$ respectively. Hence, the answers to the above questions are positive. In the case of $B_n(\Sigma_{0,3})$, the answers are also positive (see \cite{Cor} and \cite{Par}). Note that the relations of the presentation of $B_n(\Sigma_{0,p})$ are homogeneous. Therefore we can define a length function $\ell$ on $B^+_n(\Sigma_{0,p})$ such that $\ell(g_1g_2) = \ell(g_1)+\ell(g_2)$ for every $g_1,g_2\in B^+_n(\Sigma_{0,p})$.
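The homogeneity of the relations above can be checked mechanically: both sides of every defining relation of $B_n(\Sigma_{0,p})$ are positive words of the same length. The following sketch does this for an illustrative instance (the choice $n=4$, $I=\{1,3\}$ and the encoding of generators as strings are our assumptions, not part of the presentation itself).

```python
# Sanity check: every relation of the presentation of B_n(Sigma_{0,p})
# in Proposition \ref{propallcocksuite} is homogeneous (equal word lengths
# on both sides), which is what makes the additive length function \ell
# well defined on the associated monoid.
def relations(n, I):
    """Defining relations as pairs of positive words (lists of generators)."""
    s = {i: f"s{i}" for i in range(1, n)}   # sigma_1, ..., sigma_{n-1}
    r_ = {i: f"r{i}" for i in I}            # rho_i for i in I
    rels = []
    # (BR1): sigma_i sigma_j = sigma_j sigma_i for |i-j| >= 2
    for i in range(1, n):
        for j in range(i + 2, n):
            rels.append(([s[i], s[j]], [s[j], s[i]]))
    # (BR1)': rho_r rho_t = rho_t rho_r
    for r in I:
        for t in I:
            if r < t:
                rels.append(([r_[r], r_[t]], [r_[t], r_[r]]))
    # (BR1)'': rho_r sigma_i = sigma_i rho_r for i != r-1, r
    for r in I:
        for i in range(1, n):
            if i not in (r - 1, r):
                rels.append(([r_[r], s[i]], [s[i], r_[r]]))
    # (BR2): braid relations
    for i in range(1, n - 1):
        rels.append(([s[i], s[i + 1], s[i]], [s[i + 1], s[i], s[i + 1]]))
    # (BR3): sigma_i rho_r sigma_i rho_r = rho_r sigma_i rho_r sigma_i
    for r in I:
        for i in (r, r - 1):
            if 1 <= i <= n - 1:
                rels.append(([s[i], r_[r], s[i], r_[r]],
                             [r_[r], s[i], r_[r], s[i]]))
    # (BR3)': for r in I, r != 1, n
    for r in I:
        if r not in (1, n):
            rels.append(([s[r - 1], s[r], r_[r], s[r - 1], r_[r]],
                         [r_[r], s[r - 1], s[r], r_[r], s[r - 1]]))
    return rels

homogeneous = all(len(u) == len(v) for u, v in relations(4, {1, 3}))
```

Running the same check for other admissible pairs $(n, I)$ gives the same answer, since each relation type is homogeneous by inspection.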
\section{Braid groups on closed surfaces} In this section, we consider braid groups on closed surfaces, that is, surfaces without boundary components. In particular, we prove Corollaries \ref{thm:presbg2} and \ref{thm:presbg3}. \begin{prop} \label{pressansbord} Let $n,g$ be positive integers. The group $B_n(\Sigma_{g,0})$ admits the following presentation:\\$\bullet$ Generators: $\sigma_1,\cdots,\sigma_{n-1},\delta_1,\cdots,\delta_{2g}$;\\ $\bullet$ Relations:\\ -Braid relations:\\ \begin{tabular}{lll} $(BR1)$&$\sg_{i} \sg_{j} = \sg_{j} \sg_{i}$ & for $| i-j | \ge 2$;\\ $(BR2)$&$\sg_{i} \sg_{i+1} \sg_{i} = \sg_{i+1} \sg_{i} \sg_{i+1}$&$1\le i\le n-2$; \end{tabular}\\ -Commutative relations between surface braids:\\ \begin{tabular}{lll} $(CR1)$&$\sigma_i\delta_r = \delta_r\sigma_i$&$3\le i\le n-1$; $1\le r\le 2g$;\\$(CR4)$&$\sigma_1\delta^2_r = \delta^2_r\sigma_1$&$1\le r\le 2g$;\\ &$\sigma_2\delta_{2r-1}\sigma_2 = \delta_{2r-1}\sigma_2\sigma_1$&$1\le r\le g$;\\&$\sigma_1\delta_{2r}\sigma_2= \sigma_2\delta_{2r}\sigma_1$&$1\le r\le g$;\end{tabular}\\ -Skew commutative relations on the handles:\\ $(SCR2)$ $\sigma_1\delta_r\delta_{r+2s}\sigma_1 =\delta_{r+2s}\delta_{r}$ for $1 \le r <r+2s\le 2g$;\\ $(SCR3)$ $\delta_{2r}\sigma_1\delta_{2s-1}\delta_{2r}\sigma_1 = \delta_{2s-1}\delta_{2r}^2$ for $1\le s\le r\le g$;\\ $\sigma_1\delta_{2s}\delta_{2r-1}\sigma_1\delta_{2s} = \delta^2_{2s}\delta_{2r-1}$ for $1\le s< r\le g$;\\ -Relation associated to the fundamental group of the surface:\\ $(FGR)$ $(\sigma_2\cdots\sigma_{n-2}\sigma_{n-1}^2\sigma_{n-2}\cdots \sigma_2)\sigma_1\delta_1\delta_2\cdots\delta_{2g}\sigma_1 = \delta_{2g}\cdots\delta_2\delta_1$. \end{prop} \proof Starting from the presentation of Theorem \ref{deuxiemepresentation}, we set $\sigma_i = \theta_i^{-1}$, $\delta_{2r} = b_{2r}\theta_1^{-1}$ and $\delta_{2r-1} = \theta_1b^{-1}_{2r-1}$; we easily obtain the required presentation.
\hspace*{\fill}$\Box$ \vspace{10pt} \begin{cor} \label{thm:presbg2} Let $n$ and $g$ be positive integers with $g\geq 2$. Then, the group $B_n(\Sigma_{g,0})$ admits the following group presentation:\\ $\bullet$ Generators: $\sigma_1, \dots, \sigma_{n-1},\delta_1,\cdots , \delta_{2g}$;\\ $\bullet$ Relations:\\ -Braid relations:\\ \begin{tabular}{lll} $(BR1)$ &$\sip \, \sjp = \sjp \, \sip$&for $1 \le i,j\le n-1$ with $| i-j | \ge 2$;\\ $(BR2)$ &$\sip \, \siip \, \sip = \siip \, \sip \, \siip$&for $1 \le i\le n-2$.\end{tabular}\\ - Commutative relations between surface braids:\\ \begin{tabular}{lll} $(CR1)$&$\delta_r \sigma_i= \sigma_i \delta_r$ &for $2<i$; $1 \le r \le 2g$;\\ $(CR4)$&$\delta^2_r \sigma_1= \sigma_1 \delta^2_r$ &for $1 \le r \le 2g$;\\ &$\sigma_2\delta_{2r-1}\sigma_2 = \delta_{2r-1}\sigma_2\sigma_1$;\\ &$\sigma_1\delta_{2r}\sigma_2 = \sigma_2\delta_{2r}\sigma_1$;\\ $(CR5)$&$(\delta_{2r}\sigma_1)(\delta_{2s-1}\delta_{2s}) = (\delta_{2s-1}\delta_{2s})(\delta_{2r}\sigma_1) $&$1\le s<r\le g$;\\ &$(\sigma_1\delta_{2r})(\delta_{2s}\delta_{2s-1}) = (\delta_{2s}\delta_{2s-1})(\sigma_1\delta_{2r}) $&$1\le r<s\le g$;\\ &$(\delta_{2r}\sigma_1)(\delta_{2r-1}\delta_{2s}) = (\delta_{2r-1}\delta_{2s})(\delta_{2r}\sigma_1) $&$1\le s<r\le g$;\\ &$(\sigma_1\delta_{2r-1})(\delta_{2s-1}\delta_{2r}) = (\delta_{2s-1}\delta_{2r})(\sigma_1\delta_{2r-1}) $&$1\le r<s\le g$; \end{tabular}\\ -Skew commutative relations on the handles:\\ \begin{tabular}{lll}$(SCR2)$ &$\sigma_1\delta_r\delta_{r+2s}\sigma_1 =\delta_{r+2s}\delta_{r}$ &$1 \le r <r+2s\le 2g$;\end{tabular}\\ -Relation associated to the fundamental group of the surface:\\ \begin{tabular}{lll}$(FGR)$ $(\sigma_2\cdots\sigma_{n-2}\sigma_{n-1}^2\sigma_{n-2}\cdots \sigma_2)\sigma_1\delta_1\delta_2\cdots\delta_{2g}\sigma_1 = \delta_{2g}\cdots\delta_2\delta_1$.\end{tabular} \end{cor} \begin{cor} \label{thm:presbg3} Let $n$ be a positive integer.
Then, the group $B_n(\Sigma_{1,0})$ admits the following group presentation:\\ $\bullet$ Generators: $\sigma_1, \dots, \sigma_{n-1},\delta_1, \delta_{2}$;\\ $\bullet$ Relations:\\ -Braid relations:\\ \begin{tabular}{lll} $(BR1)$ &$\sip \, \sjp = \sjp \, \sip$&for $| i-j | \ge 2$;\\ $(BR2)$ &$\sip \, \siip \, \sip = \siip \, \sip \, \siip$&for $1 \le i\le n-2$.\end{tabular}\\ -Commutative relations between surface braids:\\ \begin{tabular}{lll} $(CR1)$&$\delta_r \sigma_i= \sigma_i \delta_r$ &for $2<i$; $r = 1,2$;\\ $(CR4)$&$\delta^2_r \sigma_1= \sigma_1 \delta^2_r$ &$r = 1,2$;\\ &$\sigma_2\delta_1\sigma_2 = \delta_1\sigma_2\sigma_1$;\\ &$\sigma_1\delta_2\sigma_2 = \sigma_2\delta_2\sigma_1$;\end{tabular}\\ -Skew commutative relations on the handles:\\ $(SCR4)$ $\delta_2\sigma_1\delta_1\delta_2\sigma_1 = \delta_1\delta_2^2$;\\ -Relation associated to the fundamental group of the surface:\\ $(FGR)$ $(\sigma_2\cdots\sigma_{n-2}\sigma_{n-1}^2\sigma_{n-2}\cdots \sigma_2)\sigma_1\delta_1\delta_2\sigma_1 = \delta_2\delta_1$. \end{cor} Corollary \ref{thm:presbg3} is a special case of Proposition \ref{pressansbord} when $g = 1$. When $g\geq 2$, Corollary \ref{thm:presbg2} follows from Proposition \ref{pressansbord} using Lemma \ref{lemmebelow} below. \begin{lem}\label{lemmebelow} Let $G$ be a group and $\sigma,\delta_1,\delta_2,\delta_1',\delta_2'$ be in $G$ such that a) $\sigma\delta_i^2 = \delta_i^2\sigma$ for $i = 1,2$; b) $\sigma{\delta'}_i^2 = {\delta'}_i^2\sigma$ for $i = 1,2$; c) $\sigma\delta'_2\delta_2\sigma = \delta_2\delta'_2$ and d) $\sigma\delta'_1\delta_1\sigma = \delta_1\delta'_1$.
Then,\\ (i) $\delta_2\sigma\delta'_1\delta_2\sigma = \delta'_1\delta_2^2 \iff \delta_2\sigma \delta'_1\delta'_2= \delta'_1\delta'_2\delta_2\sigma\iff \delta_1\delta_2\sigma\delta'_1 = \sigma\delta'_1\delta_1\delta_2$.\\(ii) $\delta_2\sigma\delta_1\delta_2\sigma = \delta_1\delta_2^2 \iff \delta_2\sigma\delta_1\delta'_2 = \delta_1\delta'_2\delta_2\sigma$.\\(iii) $\delta'_2\sigma\delta'_1\delta'_2\sigma = \delta'_1{\delta'}_2^2 \iff \delta_1\delta'_2\sigma\delta'_1 = \sigma\delta'_1\delta_1\delta'_2$. \end{lem} \proof We prove, under the hypotheses, that $\delta_2\sigma\delta'_1\delta_2\sigma = \delta'_1\delta_2^2 \iff \delta_2\sigma \delta'_1\delta'_2= \delta'_1\delta'_2\delta_2\sigma$. The other cases are similar. $$\displaylines{\delta_2\sigma\delta'_1\delta_2\sigma = \delta'_1\delta_2^2 \iff \delta_2\sigma\delta'_1\delta_2 = \delta'_1\delta_2^2\sigma^{-1}\iff \delta_2\sigma\delta'_1\delta_2 = \delta'_1\sigma^{-1}\delta_2^2\iff \delta_2\sigma\delta'_1 = \delta'_1\sigma^{-1}\delta_2\iff \cr \delta_2\sigma\delta'_1\delta'_2 = \delta'_1\sigma^{-1}\delta_2\delta'_2\iff \delta_2\sigma \delta'_1\delta'_2= \delta'_1\delta'_2\delta_2\sigma.}$$ \hspace*{\fill}$\Box$\vspace{10pt} \begin{lem} Let $n,g$ be positive integers. Consider the group $B_n(\Sigma_{g,0})$ and the presentation of Proposition \ref{pressansbord}. Then, for every $1\le r,s\le g$, $$\delta_{2s-1}\delta^2_{2r}\delta_{2s-1} = \delta_{2r}\delta^2_{2s-1}\delta_{2r}.$$ \end{lem} \proof Let $1\le s\le r\le g$. From the relations of type $(SCR3)$ and the relations of the first type of $(CR4)$, it follows that $\delta_{2s-1}\delta_{2r}\sigma_1\delta_{2s-1}\delta_{2r}\sigma_1 = \delta_{2s-1}^2\delta_{2r}^2 = \sigma_1\delta_{2s-1}\delta_{2r}\sigma_1\delta_{2s-1}\delta_{2r}$.
Hence, $\delta_{2s-1}^2\delta_{2r}\delta_{2s-1}^{-1} = \sigma_1\delta_{2s-1} \delta_{2r} \sigma_1 = \delta_{2r}^{-1}\delta_{2s-1}\delta_{2r}^2$ and then $\delta_{2s-1}\delta^2_{2r}\delta_{2s-1} = \delta_{2r}\delta^2_{2s-1}\delta_{2r}$.\\If $1\le r < s \le g$ then we proceed in the same way, using that $\sigma_1\delta_{2s}\delta_{2r-1}\sigma_1\delta_{2s} = \delta^2_{2s}\delta_{2r-1}$. \hspace*{\fill}$\Box$\vspace{10pt} \begin{quest} Consider the monoids defined by the presentations given in Proposition \ref{pressansbord}, Corollary \ref{thm:presbg2} or Corollary \ref{thm:presbg3}. Are they cancellative? Do they embed in $B_n(\Sigma_{g,0})$? \end{quest} \section{Braid group on two strands on the torus} \subsection{The word problem and the conjugacy problem} In this section we solve the word problem and the conjugacy problem for the special case when $g = 1$ and $n = 2$ by using a presentation derived from the one obtained in the previous section.\\ As a consequence of Corollary \ref{thm:presbg3} we have: \begin{cor} \label{proppresb210} $B_2(\Sigma_{1,0})$ admits the group presentation: $$B_2(\Sigma_{1,0}) = \langle \sigma_1,\delta_1, \delta_{2}\mid \delta^2_1 \sigma_1= \sigma_1 \delta^2_1; \delta^2_2 \sigma_1= \sigma_1 \delta^2_2; \delta_2\sigma_1\delta_1\delta_2\sigma_1 = \delta_1\delta_2^2; \sigma_1\delta_1\delta_2\sigma_1 = \delta_2\delta_1\rangle.$$ \end{cor} \begin{lem} \label{lempresb210} Let $G$ be a group and $\sigma_1,\delta_1, \delta_{2}$ be in $G$ such that a) $\delta^2_1 \sigma_1= \sigma_1 \delta^2_1$~; b) $\delta^2_2 \sigma_1= \sigma_1 \delta^2_2$~; c) $\sigma_1\delta_1\delta_2\sigma_1 = \delta_2\delta_1$. Then $\delta_2\sigma_1\delta_1\delta_2\sigma_1 = \delta_1\delta_2^2 \iff (\delta^2_1 \delta_2= \delta_2 \delta^2_1$ and $\delta^2_2 \delta_1 = \delta_1 \delta^2_2)$.
\end{lem} \begin{cor}\label{lemmurKur}(\cite{MurKur}, Chapter 11, Exercises 5.2 and 6.3) The group $B_2(\Sigma_{1,0})$ admits the two following group presentations: $$ B_2(\Sigma_{1,0}) = \langle \sigma_1,\delta_1, \delta_{2}\mid \delta^2_1 \sigma_1= \sigma_1 \delta^2_1~; \delta^2_2 \sigma_1= \sigma_1 \delta^2_2~; \delta^2_1 \delta_2= \delta_2 \delta^2_1~; \delta^2_2 \delta_1 = \delta_1 \delta^2_2~; \sigma_1\delta_1\delta_2\sigma_1 = \delta_2\delta_1 \rangle.$$ \begin{eqnarray}\label{preb2} B_2(\Sigma_{1,0}) = \langle a,b,c\mid a^2b= ba^2; b^2a = ab^2; a^2c= ca^2; b^2c = cb^2;a^2b^2 = c^2 \rangle.\end{eqnarray}\end{cor} \proof The first presentation follows from Corollary \ref{proppresb210} and Lemma \ref{lempresb210}. For the second, we set $a = \delta_2$, $b = \delta_1$ and $c = \delta_2\delta_1\sigma_1^{-1}$ as suggested in \cite{MurKur} Chapter 11, Exercise 6.3. \hspace*{\fill}$\Box$\vspace{10pt} Using the presentation (\ref{preb2}), we are able to solve the word problem and the conjugacy problem in $B_2(\Sigma_{1,0})$. Considering (\ref{preb2}), for $x = a$ or $x = b$, we can define a weight homomorphism of groups $\ell_{\hat{x}} : B_2(\Sigma_{1,0}) \to \mathbb{Z}$ such that $\ell_{\hat{x}}(x) = 0$ and $\ell_{\hat{x}}(y) = 1$ for $y\in\{a,b,c\}$ and $y\neq x$.\\ In the following we denote by $F(a,b,c)$ the free group based on $\{a,b,c\}$. We denote by $W(a,b,c)$ the Coxeter group associated to $F(a,b,c)$ and defined by $W(a,b,c) = \langle a,b,c\mid a^2 = b^2 = c^2 = 1\rangle$. If $w$ is in $F(a,b,c)$ we denote by $\overline{w}$ its image in $B_2(\Sigma_{1,0})$. Considering (\ref{preb2}), there exists a morphism $p : B_2(\Sigma_{1,0})\to W(a,b,c)$ that sends $x\in \{a,b,c\}$ to $x$. Note that the canonical morphism from $F(a,b,c)$ onto $W(a,b,c)$ factorises through $p$.\\ We denote by $L_{a,b}$ the set-map from $B_2(\Sigma_{1,0})$ to $F(a,b,c)$ defined by $L_{a,b}(g) = a^{\ell_{\hat{b}}(g)}b^{\ell_{\hat{a}}(g)}$ for $g$ in $B_2(\Sigma_{1,0})$.
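For illustration only, the invariants $\ell_{\hat{a}}$, $\ell_{\hat{b}}$ and the projection $p$ are mechanically computable, which is what makes the criteria of this section effective. The following Python sketch (the function names are ours, not part of the paper) computes them for a word given as a list of signed letters; the image under $p$ is obtained by dropping signs and cancelling adjacent equal letters, since $W(a,b,c)$ is a free product of three copies of $\mathbb{Z}/2\mathbb{Z}$.

```python
# Illustrative sketch (helper names are ours): computing the invariants
# ell_hat_x and the image under p in W(a,b,c) for a word in F(a,b,c).
# A word is a list of (letter, exponent) pairs with exponent +1 or -1.

def ell_hat(word, x):
    """Weight homomorphism: signed count of the occurrences of letters != x."""
    return sum(e for (y, e) in word if y != x)

def coxeter_image(word):
    """Image under p in W(a,b,c) = <a,b,c | a^2 = b^2 = c^2 = 1>, computed
    by dropping signs and cancelling adjacent equal letters with a stack."""
    out = []
    for (y, _) in word:
        if out and out[-1] == y:
            out.pop()          # y*y = 1 in W(a,b,c)
        else:
            out.append(y)
    return out

def is_trivial(word):
    """w represents 1 in B_2(Sigma_{1,0}) iff both weights vanish
    and the image in W(a,b,c) is trivial."""
    return (ell_hat(word, 'a') == 0 and ell_hat(word, 'b') == 0
            and coxeter_image(word) == [])

# The defining relation a^2 b^2 = c^2 yields the trivial word aabbc^{-1}c^{-1}:
w = [('a', 1), ('a', 1), ('b', 1), ('b', 1), ('c', -1), ('c', -1)]
```

On this word $w$, both weights vanish and the image in $W(a,b,c)$ reduces to the empty word, so the triviality test succeeds, as it must for a defining relation of (\ref{preb2}).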
If $w$ is in $F(a,b,c)$, we write, by abuse of notation, $p(w)$ and $L_{a,b}(w)$ for $p(\overline{w})$ and $L_{a,b}(\overline{w})$ respectively. \begin{prop} (i) The center $Z(B_2(\Sigma_{1,0}))$ of the group $B_2(\Sigma_{1,0})$ is a free Abelian group based on $a^2$ and $b^2$. Furthermore, for each element $g$ of $Z(B_2(\Sigma_{1,0}))$, the word $L_{a,b}(g)$ is a representing element of $g$.\\ (ii) The group $B_2(\Sigma_{1,0})$ is a central extension of $W(a,b,c)$.\\ In other words, the sequence $1\to Z(B_2(\Sigma_{1,0})) \to B_2(\Sigma_{1,0}) \to W(a,b,c)\to 1$ is exact. \end{prop} \proof We remark that the presentation (\ref{preb2}) implies that $a^2$, $b^2$ and $c^2$ are in $Z(B_2(\Sigma_{1,0}))$ and that $W(a,b,c)$ is the quotient of $B_2(\Sigma_{1,0})$ by the relations $a^2 = b^2 = 1$. Since the center of $W(a,b,c)$ is trivial, (ii) follows. As a consequence, we get that $a^2$ and $b^2$ generate the Abelian group $Z(B_2(\Sigma_{1,0}))$. Now, let $g$ belong to $Z(B_2(\Sigma_{1,0}))$. Each word $w$ that represents $g$ and is written in the letters $a^2$, $b^2$ and their inverses can be modified into $L_{a,b}(g)$ by using the relation $a^2b^2 = b^2a^2$ and the relations of $a^2$ and $b^2$ with their respective inverses. Hence, $Z(B_2(\Sigma_{1,0}))$ is a free Abelian group based on $a^2$ and $b^2$. \hspace*{\fill}$\Box$\vspace{10pt} \begin{cor}\label{propfinalpourcasparticuliergroup} Let $w$ be in $F(a,b,c)$; then $\overline{w} = 1\iff (\ L_{a,b}(w) = 1$ and $p(w) = 1\ )$.\end{cor} \proof Assume $L_{a,b}(w) = 1$ and $p(w) = 1$. Since $p(\overline{w}) = 1$, the element $\overline{w}$ is in the center of $B_2(\Sigma_{1,0})$. But in that case the word $L_{a,b}(w) = 1$ represents $\overline{w}$, hence $\overline{w} = 1$. \hspace*{\fill}$\Box$\vspace{10pt} \begin{cor} The word problem in $B_2(\Sigma_{1,0})$ is solvable.\end{cor} \proof The word problem is solvable in the free group $F(a,b,c)$ and in the Coxeter group $W(a,b,c)$.
Then the claim follows from Corollary \ref{propfinalpourcasparticuliergroup}.\hspace*{\fill}$\Box$\vspace{10pt} \begin{cor}\label{lempourconj} Let $g,h$ be in $F(a,b,c)$; then, $\overline{g} = \overline{h} \iff p(g) = p(h)$ and $L_{a,b}(g) = L_{a,b}(h)$. \end{cor} \proof Assume $p(g) = p(h)$ and $L_{a,b}(g) = L_{a,b}(h)$. Then the element $\overline{g}\overline{h}^{-1}$ is in the center of $B_2(\Sigma_{1,0})$. Since $L_{a,b}(g) = L_{a,b}(h)$ it follows that $\ell_{\hat{a}}(g)= \ell_{\hat{a}}(h)$ and $\ell_{\hat{b}}(g)= \ell_{\hat{b}}(h)$. Therefore $L_{a,b}(gh^{-1})=1$, and thus $\overline{g}=\overline{h}$ by Corollary \ref{propfinalpourcasparticuliergroup}. \hspace*{\fill}$\Box$\vspace{10pt} We denote by $F^+(a,b,c)$ the free monoid based on $\{a,b,c\}$. It is a submonoid of $F(a,b,c)$. If $w$ is in $W(a,b,c)$, there exists a unique element $[w]$ in $F^+(a,b,c)$ of minimal length such that its image in $W(a,b,c)$ is $w$. By construction, for each $w$ in $W(a,b,c)$ the element $p(\overline{[w]})$ is equal to $w$. As a consequence, the map sending each $w$ in $W(a,b,c)$ to $\overline{[w]}$ is injective. For short, we will write $[w]$ for $\overline{[w]}$. \begin{cor} Let $g,h$ be in $B_2(\Sigma_{1,0})$; then,$$(\exists r\in B_2(\Sigma_{1,0}), rgr^{-1} = h )\iff ( L_{a,b}(g) = L_{a,b}(h)\textrm{ and } \exists w\in W(a,b,c), wp(g)w^{-1} = p(h)).$$ Furthermore, if the right side holds, then $[w]g[w]^{-1} = h$. \end{cor} \proof The side ``$\Rightarrow$'' is clear with $w = p(r)$. Assume conversely that $L_{a,b}(g) = L_{a,b}(h)$ and $\exists w\in W(a,b,c),\ wp(g)w^{-1} = p(h)$. Since $wp(g)w^{-1} = p(h)$, we have $p([w]g[w]^{-1}) = p(h)$. But $L_{a,b}([w]g[w]^{-1}) = L_{a,b}(g) = L_{a,b}(h)$, so $[w]g[w]^{-1} = h$ by Corollary \ref{lempourconj}.
\hspace*{\fill}$\Box$\vspace{10pt} \begin{cor}The conjugacy problem in $B_2(\Sigma_{1,0})$ is solvable.\end{cor} \proof The conjugacy problem is solvable in each Coxeter group (see \cite{Kra}).\hspace*{\fill}$\Box$\vspace{10pt} \subsection{The Garside method and complete presentation} In order to solve the word problem and the conjugacy problem in $B_2(\Sigma_{1,0})$, we can try to use Garside's method, that is, to find a Garside structure for $B_2(\Sigma_{1,0})$. Let us remark that surface braid groups on surfaces of genus greater than $1$ have trivial center (see \cite{ParRol}) and hence cannot be Garside groups. Recall that in a monoid $M$ we say that $a$ left-divides $b$ if $b = ac$ for some $c$ in $M$. We say, in a similar way, that $a$ right-divides $b$ when $b = ca$ for some $c$ in $M$. An element $\Delta$ of $M$ is said to be balanced when its set of left-divisors is equal to its set of right-divisors. We denote by $B^+_2(\Sigma_{1,0})$ the monoid defined by the presentation (\ref{preb2}), but considered as a monoid presentation. Then in $B^+_2(\Sigma_{1,0})$ the element $c^2$ is balanced. Furthermore its set $D(c^2)$ of divisors generates $B^+_2(\Sigma_{1,0})$. Nevertheless, $B^+_2(\Sigma_{1,0})$ fails to be a Garside monoid with $c^2$ as Garside element (see \cite{Deh} for a definition) because it is not a lattice for left-divisibility: $a$ and $b$ have two distinct minimal common multiples, namely $ab^2$ and $ba^2$. However, as shown in \cite{Deh} Section 8, part of the results established for Garside groups still hold, as we will see in Lemma \ref{corpropfinalpouras}. Let $\iota: B_2^+(\Sigma_{1,0})\to B_2(\Sigma_{1,0})$ be the canonical homomorphism of monoids. By abuse of notation, we denote by $\ell_{\hat{a}}$ and $\ell_{\hat{b}}$ the morphisms $\ell_{\hat{a}}\circ \iota$ and $\ell_{\hat{b}}\circ \iota$ respectively. We remark that $z\mapsto\overline{z}$ factorises through $\iota$.
By abuse of notation, we denote by $z\mapsto\overline{z}$ and by $w\mapsto\overline{[w]}$ the factorizations. Then we have $\overline{w} =\iota(\overline{w})$. As before, we write $[w]$ for $\overline{[w]}\in B_2^+(\Sigma_{1,0})$. We remark that Corollaries \ref{propfinalpourcasparticuliergroup} and \ref{lempourconj} still hold if we consider $\overline{w}$ in $B_2^+(\Sigma_{1,0})$ and $g,h$ in $B_2^+(\Sigma_{1,0})$. As a consequence, we have the following result: \begin{lem} \label{corpropfinalpouras}(i) $ B^+_2(\Sigma_{1,0})$ is cancellative and the canonical morphism $\iota : B^+_2(\Sigma_{1,0}) \to B_2(\Sigma_{1,0})$ is injective. \\ (ii) $\forall G \in B_2(\Sigma_{1,0})$, $\exists ! j\in\mathbb{Z}, \exists ! g\in B^+_2(\Sigma_{1,0})$ such that $G = c^{-2j} \iota(g)$ and $c^2$ does not divide $g$. \end{lem} \proof (i) Let $h,g_1,g_2$ be in $B^+_2(\Sigma_{1,0})$ such that $hg_1 = hg_2$. Then $\ell_{\hat{a}}(hg_1) = \ell_{\hat{a}}(h)+ \ell_{\hat{a}}(g_1)$ and $\ell_{\hat{a}}(hg_2) = \ell_{\hat{a}}(h)+ \ell_{\hat{a}}(g_2)$. Hence we have $\ell_{\hat{a}}(g_1) = \ell_{\hat{a}}(g_2)$. In the same way we get $\ell_{\hat{b}}(g_1) = \ell_{\hat{b}}(g_2)$, and also $p(g_1) = p(g_2)$ in the group $W(a,b,c)$. Then $g_1 = g_2$. We proceed in the same way if $g_1h = g_2h$. The other results are consequences of the Garside-like structure as proved in Proposition 8.10 of \cite{Deh}. \hspace*{\fill}$\Box$\vspace{10pt} In the following, we identify $ B^+_2(\Sigma_{1,0})$ with its image in $ B_2(\Sigma_{1,0})$. In order to solve the word problem in $B_2(\Sigma_{1,0})$, it is then enough to solve the word problem in $B^+_2(\Sigma_{1,0})$. Then, using the following proposition, we obtain another solution to the word problem. \begin{prop}\label{propfinalpourcasparticulier} Let $g$ be in $B^+_2(\Sigma_{1,0})$; then, there exists a unique pair $(k,l)$ in $\mathbb{N}^2$, and a unique $w$ in $W(a,b,c)$, such that $g = a^{2k}b^{2l}[w]$.
\end{prop} \proof Since $c^2 = a^2b^2$ and $a^2$, $b^2$ are in the center $Z(B^+_2(\Sigma_{1,0}))$, it follows that we can write $g = a^{2k}b^{2l}[w]$ with $k,l$ in $\mathbb{N}$ and $w$ in $W(a,b,c)$. Assume that $g = a^{2i}b^{2j}[z]$ for some $i,j$ in $\mathbb{N}$ and $z$ in $W(a,b,c)$. We have (1) $w = p(g) = z$ and then $[w] = [z]$; (2) $k = \frac{\ell_{\hat{b}}(g) - \ell_{\hat{b}}([w])}{2}= i$; (3) $l = \frac{\ell_{\hat{a}}(g) - \ell_{\hat{a}}([w])}{2} = j$. Then the decomposition $g = a^{2k}b^{2l}[w]$ is unique.\hspace*{\fill}$\Box$\vspace{10pt} If we want to solve the conjugacy problem by using the idea of Garside, we need to understand the normal form of $B^+_2(\Sigma_{1,0})$ as defined in Definition 7.2 of \cite{Deh}. This leads us to the notion of complete presentation as defined in \cite{Deh2}. Let $S$ be a finite set, and $S^*$ be the free monoid based on $S$. We denote by $\epsilon$ the empty word. Let $B$ be a monoid with presentation $(S,\mathcal{R})$. We write $w\equiv w'$ if the words $w,w'$ of $S^*$ have the same image in $B$. Let $w,w'$ be two words in $(S\cup S^{-1})^*$, where $S^{-1}$ is a disjoint copy of $S$. We say that $w$ reverses to $w'$, and write $w\curvearrowright w'$, if $w'$ is obtained from $w$ by a finite sequence of the following steps: deleting some $u^{-1}u$ for some $u\in S$, or replacing some subword $u^{-1}v$, where $u,v$ are in $S$, with a word $v'{u'}^{-1}$ such that $uv' = vu'$ is a relation of $\mathcal{R}$. \begin{df}[\cite{Deh2} Definition 2.1 and Proposition 3.3] Let $B$ be a monoid with presentation $(S,\mathcal{R})$. We say that the presentation $(S,\mathcal{R})$ is complete if $$\forall u,v\in S^*,\ (u\equiv v \iff u^{-1}v\curvearrowright \epsilon).$$ \end{df} For instance, the classical presentation of each Artin-Tits monoid is complete. The definition of a complete presentation is easy to understand.
Nevertheless, it is not easy to verify that a given presentation is complete. In \cite{Deh2}, Dehornoy gives a semi-algorithmic method to decide whether a given presentation is complete. Semi-algorithmic means that the process gives an answer when it terminates, but it may not terminate. We do not explain this technical method, named {\it the cube condition}, but refer to Definition 3.1 and Figure 3.1 of \cite{Deh2}. Applying the cube condition process, it is quite clear that the presentation (\ref{preb2}) of the monoid $B_2(\Sigma_{1,0})$ is not complete and that we must add to the presentation the relation ``$b^2a^2 = c^2$'' if we expect the presentation to be complete. \begin{quest} Is the presentation \begin{eqnarray}\langle a,b,c\mid a^2b= ba^2; b^2a = ab^2; \label{presentcomplete} a^2c= ca^2; b^2c = cb^2 ; a^2b^2 = c^2 ; b^2a^2 = c^2 \rangle^+ \end{eqnarray} a complete presentation of the monoid $B_2(\Sigma_{1,0})$? In other words, does this presentation satisfy the cube condition? \end{quest} \begin{quest} Is the monoid presentation $\langle a,b\mid ab^2 = b^2a ; ba^2 = a^2b\rangle^+$ complete? In other words, does this presentation satisfy the cube condition? \end{quest} A positive answer to the last question seems to be crucial in order to establish the interest of the completeness method. \appendix \section{Presentations of surface braid groups} \begin{thm}[\cite{Bel} Theorem 1.1]\label{thm:pres} Let $n$, $p$ be positive integers and $g$ a nonnegative integer. The group $B_n(\Sigma_{g,p})$ admits the following group presentation: \noindent $\bullet$ Generators: $\gend$.
\noindent $\bullet$ Relations: \begin{itemize} \item braid relations: \noindent \begin{tabular}{lll} $(BR1)$&$\sg_{i} \sg_{j} = \sg_{j} \sg_{i}$ & for $| i-j | \ge 2$;\\ $(BR2)$&$\sg_{i} \sg_{i+1} \sg_{i} = \sg_{i+1} \sg_{i} \sg_{i+1}$&$1\le i\le n-2$; \end{tabular}\\ \item mixed relations: \noindent\begin{tabular}{lll} (R1) &$a_r \sigma_i=\sigma_i a_r$&for $1 \le r \le g;\; i\not= 1$;\\ &$b_r \sigma_i=\sigma_i b_r$ &for $1 \le r \le g$; $i\not= 1$; \\ (R2)&$\sigma_1^{-1} a_r \sigma_1^{-1} a_{r}= a_r \sigma_1^{-1} a_{r}\sigma_1^{-1}$ &for $1 \le r \le g$; \\ &$\sigma_1^{-1} b_r \sigma_1^{-1} b_{r}= b_r \sigma_1^{-1} b_{r}\sigma_1^{-1}$&for $1 \le r \le g$;\\ (R3)&$\sigma_1^{-1} a_{s} \sigma_1 a_r = a_r \sigma_1^{-1} a_{s} \sigma_1$&for $s < r$; \\ &$\sigma_1^{-1} b_{s} \sigma_1 b_r = b_r \sigma_1^{-1} b_{s} \sigma_1$&for $s < r$;\\ &$\sigma_1^{-1} a_{s} \sigma_1 b_r = b_r \sigma_1^{-1} a_{s} \sigma_1$&for $s < r$; \\ &$\sigma_1^{-1} b_{s} \sigma_1 a_r = a_r \sigma_1^{-1} b_{s} \sigma_1$&for $s < r$;\\ (R4)&$\sigma_1^{-1} a_r \sigma_1^{-1} b_{r} = b_{r} \sigma_1^{-1} a_r \sigma_1$ &for $1 \le r \le g$;\\ (R5)&$z_j\sigma_i= \sigma_i z_j$ &for $i \not=1, j=1, \dots, p-1$;\\ (R6)&$\sigma_1^{-1} z_i \sigma_1 a_r =a_r\sigma_1^{-1} z_i\sigma_1$&for $1 \le r \le g;\; i=1, \dots, p-1$; \\ &$\sigma_1^{-1} z_i \sigma_1 b_r =b_r\sigma_1^{-1} z_i\sigma_1$ &for $1 \le r \le g;\; i=1, \dots, p-1$;\\ (R7)&$\sigma_{1}^{-1} z_j \sigma_{1} z_l=z_l\sigma_{1}^{-1} z_j \sigma_{1}$ &for $j=1, \dots, p-1 $, $j<l$;\\ (R8)&$\sigma_{1}^{-1} z_j \sigma_{1}^{-1} z_j=z_j\sigma_{1}^{-1} z_j \sigma_{1}^{-1}$&for $ j=1, \dots, p-1$. \end{tabular} \end{itemize} \end{thm} \begin{thm}[\cite{Bel} Theorem A.4]\label{deuxiemepresentation} Let $\Sigma_{g,0}$ be a closed surface of positive genus $g$.
Then $B_n(\Sigma_{g,0})$ admits the following presentation:\\$\bullet$ Generators: $\theta_1,\cdots,\theta_{n-1}$, $b_1,\cdots,b_{2g}$;\\$\bullet$ Relations:\\ \noindent \begin{tabular}{lll} $(BR1)$&$\theta_i\theta_j=\theta_j\theta_i$ &for $2\le | i-j |$.\\ $(BR2)$&$\theta_i \theta_{i+1} \theta_i= \theta_{i+1} \theta_i \theta_{i+1}$&for $1\le i\le n-2$.\end{tabular}\\ \noindent \begin{tabular}{lll} $(R1)$&$b_r\theta_i = \theta_i b_r$&$1\le r \le 2g$; $i\neq 1$;\\ $(R2)$&$b_s\theta_1^{-1}b_r\theta_1^{-1} = \theta_1 b_r\theta_1^{-1}b_s$&$1\le s<r \le 2g$;\\ $(R3)$&$b_r\theta_1^{-1}b_r\theta_1^{-1} = \theta_1^{-1}b_r\theta_1^{-1}b_r$&$1\le r\le 2g$; \end{tabular}\\ $(TR)$ $b_1b_2^{-1}\cdots b_{2g-1}b_{2g}^{-1}b_1^{-1}b_2\cdots b_{2g-1}^{-1}b_{2g} = \theta_1\cdots \theta_{n-2}\theta^2_{n-1}\theta_{n-2}\cdots\theta_1$. \end{thm} \end{document}
\begin{document} \title[Locally-Facet-Based MIP Formulations for UC]{Multiple-Periods Locally-Facet-Based MIP Formulations for the Unit Commitment Problem} \author*[1,2]{\fnm{Linfeng} \sur{Yang}}\email{[email protected]} \author[3]{\fnm{Shifei} \sur{Chen}}\email{[email protected]} \author[4]{\fnm{Zhaoyang} \sur{Dong}}\email{[email protected]} \affil[1]{\orgdiv{School of Computer Electronics and Information}, \orgname{Guangxi University}, \orgaddress{\city{Nanning}, \postcode{530004}, \country{China}}} \affil*[2]{\orgdiv{The Guangxi Key Laboratory of Multimedia Communication and Network Technology}, \orgname{Guangxi University}, \orgaddress{\city{Nanning}, \postcode{530004}, \country{China}}} \affil[3]{\orgdiv{School of Electrical Engineering}, \orgname{Guangxi University}, \orgaddress{\city{Nanning}, \postcode{530004}, \country{China}}} \affil[4]{\orgdiv{School of Electrical and Electronics Engineering}, \orgname{Nanyang Technological University}, \orgaddress{\city{Singapore}, \postcode{639798}, \country{Singapore}}} \maketitle {\noindent{\textbf{Abstract}\\} The thermal unit commitment (UC) problem has historically been formulated as a mixed-integer quadratic programming (MIQP) problem, which is difficult to solve efficiently, especially for large-scale systems. A tighter formulation reduces the search space and therefore, as a natural consequence, significantly reduces the computational burden. In the literature, many tightened formulations for a single unit with subsets of the constraints have been reported without an explicit presentation of how they were derived. In this paper, a systematic approach is developed to derive tight formulations. The idea is to use more binary variables to represent the state of the unit so as to obtain the tightest upper bounds of the power generation limits and ramping constraints for a single unit. In this way, we propose a multi-period formulation based on sliding windows, which may have different sizes for each unit in the system.
Furthermore, a multi-period model taking the historical status into consideration is obtained. Besides, necessary and sufficient conditions for the facets of the single-unit constraint polytope are provided and redundant inequalities are eliminated. The proposed models and three other state-of-the-art models are tested on 73 instances with a scheduling horizon of 24 hours. The number of generators in the test systems ranges from 10 to 1080. The simulation results show that our proposed multi-period formulations are tighter than the other three state-of-the-art models when the window size of the multi-period formulation is greater than 2.} {\noindent{\bf{Keywords} } Unit Commitment, High-dimensional, Tight, Compact, Locally Ideal, Polytope, Facet, Convex Hull} \section{Introduction}\label{sec1} The unit commitment (UC) problem has been receiving significant attention from both industry and academia. In general, the UC problem is formulated as a mixed-integer nonlinear programming (MINLP) problem \cite{anjos2017unit} to determine the operational schedule of the generating units at each time period with varying loads under different operating constraints and environments. Mixed-integer programming (MIP) problems can be handled with commercial solvers. The UC problem can be formulated as a mixed-integer quadratic programming (MIQP) problem and solved directly by such solvers \cite{yang2017novel}. We can also convert the quadratic objective function of the UC problem into a piecewise linear function by accurately approximating the quadratic production cost function with a set of piecewise blocks \cite{carrion2006computationally}. We then obtain a mixed-integer linear programming (MILP) problem which can be solved by MILP solvers. The numerical results reported in \cite{frangioni2009computational} show that solving the approximate MILP problem is more competitive than simply solving the MIQP problem directly.
The quality of the MIP model, which mainly depends on the tightness and compactness of the model, seriously affects the performance of the solver \cite{williams2013model}. The tightness of an MIP formulation is defined as the difference between the optimal values of the MIP problem and its continuous relaxation \cite{wolsey2020integer}. Tightening an MIP formulation, usually by adding cutting planes, can reduce the search space that the solver needs to explore in order to find the optimal integer solution \cite{wolsey2003strong}. The compactness of an MIP formulation refers to the quantity of data that must be processed when solving the problem. A more compact formulation can speed up the search for the optimal solution. How to build a high-quality MIP model has become a hot topic in recent years. Researchers have made many efforts to construct better formulations for the UC problem in terms of tightness or compactness. To describe the physical constraints of a unit more accurately, various sets of binary variables have been used to express the unit state. \cite{garver1962power} first proposes using three binary variables to represent the commitment, startup and shutdown status of the unit. \cite{carrion2006computationally} and \cite{frangioni2009tighter} omit two sets of binary variables from the three-binary-variable formulation presented in \cite{garver1962power} and present a more compact formulation with only the commitment variable. \cite{yang2017novel} proposes a two-binary-variable MIQP formulation for the UC problem considering compactness and tightness simultaneously. Since the optimal solution of an MILP problem can always be found at a vertex of the convex hull of its feasible region, we could obtain the optimal solution of the MILP problem directly by solving the continuous relaxation if the relaxation described this convex hull.
We should note that describing the convex hull of an MILP problem’s feasible set may need an enormous number of inequalities. Finding the explicit convex hull representation for the feasible set of an entire UC problem could be difficult and maybe even unrealistic. \cite{padberg1998location} provides the definition of the locally ideal MILP formulation. Many existing UC studies provide a locally ideal or locally tighter formulation for a subset of the constraints of the UC problem. \cite{rajan2005minimum} successfully describes the convex hull of a minimum up/down time polytope. \cite{gentile2017tight} provides a convex hull for the polytope subject to minimum up/down time and generation limits. \cite{damci2016polyhedral} provides a convex hull for a two-period ramping polytope and some facets of the multi-period ramping polytope. \cite{frangioni2015new} describes the convex hull of the feasible solutions of a single unit satisfying minimum up/down time constraints, minimum and maximum power output, and ramping constraints (including start-up and shut-down limits). The main contributions of this paper are: \begin{enumerate} \item We borrow the variable definition method of \cite{frangioni2015new} to describe the state of a unit so that we can easily construct stronger valid inequalities for single-unit constraints within any period. In this way, we extend the modeling method of \cite{yang2021two} to multiple periods. A new and more direct method is proposed to construct strong valid inequalities (facets) for the feasible region of the UC problem satisfying minimum up/down time constraints, generation limits and ramping limits simultaneously. Actually, the proposed method can be extended to construct facets for other UC constraints, for instance, startup/shutdown cost constraints. \item We provide more complete theoretical results on facets of the multi-period polytope which is subject to a subset of constraints of the UC problem.
The constraints obtained by the proposed method are all proved to be facets under certain conditions. We provide necessary and sufficient conditions for the facets of the multi-period polytope and eliminate redundant inequalities. In addition, we provide improved versions of the formulations when the generator’s history is taken into consideration, and discuss the tightness of the formulations. Besides, the convex hull of feasible solutions of the UC problem with a subset of the constraints, together with some related theory, is given. \item Based on sliding windows, a method for building a tight multi-period formulation of the UC problem in a high-dimensional space is provided. The window size of each unit in the system can be different and adjusted according to actual needs. Furthermore, a history-dependent multi-period formulation is proposed. In order to improve the efficiency of the model, we make a series of transformations to the model. \end{enumerate} The remainder of this paper is organized as follows. In Section \ref{sec2}, we recall three recent well-performing MIQP formulations of the UC problem. In Section \ref{sec3}, we extend the method of constructing inequalities in Section \ref{sec2} to multiple periods and present the procedure for constructing multi-period locally-facet constraints of a single unit based on sliding windows. In Section \ref{sec4}, we improve the formulations of Section \ref{sec3} and propose some history-dependent multi-period formulations. In Section \ref{sec5}, we perform computational experiments to demonstrate the behavior of our models and of three recent well-performing formulations on data sets used in other studies, and we shed further light on the classes of instances on which our formulation outperforms the others. In Section \ref{sec6}, we draw the conclusions of our study and suggest future research.
\section{State-of-the-art UC formulations for two-period and three-period}\label{sec2} In this section, we review three state-of-the-art formulations of the UC problem. $N$ and $T$ represent the total number of units in the system and the number of scheduling periods, respectively. Most of the constraints for a single unit described in this article apply to every generator $i \in \{1,\cdots,N\}$ and every period $t \in \{1,\cdots,T\}$; exceptions will be pointed out explicitly. \subsection{State-of-the-art three-binary formulations for UC}\label{subsec1} Let $P^i_t$ be the power output of unit $i$ in period $t$, and $u^i_t$ be the binary variable denoting the schedule of unit $i$ in period $t$. The objective function is usually defined as the minimization of power system operation costs, which mainly depend on the production cost and the startup cost. \begin{equation} F_c=\sum\nolimits_{i=1}^{N}\sum\nolimits_{t=1}^{T}[f_i(P^i_t)+S^i_t]\label{eq:objective-function} \end{equation} where $f_i(P^i_t)=\alpha_iu^i_t+\beta_iP_t^i+\gamma_i(P_t^i )^2$ is the production cost ($\alpha_i,\beta_i,\gamma_i$ are the coefficients), and $S_t^i$ is the startup cost. One of the most common system-wide constraints linking the schedules of different units in the system is the power balance constraint. Let $P_{D,t}$ be the system load demand in period $t$. \begin{equation} \sum\nolimits_{i=1}^{N}P^i_t-P_{D,t}=0\label{eq:power-balance} \end{equation} Another common system-wide constraint is the system spinning reserve requirement. Let $\overline{P}^i$ be the maximum power output of unit $i$, and $R_t$ be the spinning reserve requirement in period $t$. \begin{equation} \sum\nolimits_{i=1}^Nu^i_t\overline{P}^i\ge P_{D,t}+R_t\label{eq:spinning-reserve} \end{equation} Let $v^i_t$ be 1 if unit $i$ is started up in period $t$ (i.e., $u^i_t=1$ and $u^i_{t-1}=0$), and $w^i_t$ be 1 if unit $i$ is shut down in period $t$ (i.e., $u^i_t=0$ and $u^i_{t-1}=1$).
\cite{garver1962power} uses three binary variables ($u^i_t$, $v^i_t$, $w^i_t$) to represent the unit status and introduces a logical constraint relating them. \begin{equation} v^i_t-w^i_t=u^i_t-u^i_{t-1}\label{eq:logical} \end{equation} Let $C^i_{hot}$ and $C^i_{cold}$ be the hot and cold startup costs of unit $i$, respectively. Furthermore, let $\underline{T}^i_{on}$ and $\underline{T}^i_{off}$ be the minimum up and down times of unit $i$, respectively. $T_{cold}^i$ represents the cold startup time of unit $i$. $T_0^i$ is the number of periods unit $i$ has been online (+) or offline (-) prior to the first period of the scheduling time span. We define the operator $[\cdot]^+$ as $\max(0,\cdot)$. \cite{jabr2012tight} and \cite{atakan2018state} express the startup cost $S^i_t$ as an MILP formulation: \begin{align} &S^i_t\ge C^i_{hot}v^i_t\label{eq:hot-start}\\ &S^i_t\ge C^i_{cold}[v^i_t-\sum\nolimits_{\pi=\max(1,t-\underline{T}^i_{off}-T^i_{cold})}^{t-1}w^i_{\pi}-m^i_t]\label{eq:cold-start} \end{align} where $m_t^i=1$ if $t-\underline{T}_{off}^i-T_{cold}^i\le 0$ and ${[-T_0^i ]}^+<\lvert t-\underline{T}_{off}^i-T_{cold}^i-1\rvert+1$; $m_t^i=0$ otherwise. Rajan and Takriti \cite{rajan2005minimum} use $v^i_t$ and $w^i_t$ to formulate the minimum up/down time constraints. \begin{align} &\sum\nolimits_{\overline{\omega}=[t-\underline{T}^i_{on}]^++1}^tv^i_{\overline{\omega}}\le u^i_t, \qquad t \in [W^i+1,T]\label{eq:min-up-time}\\ &\sum\nolimits_{\overline{\omega}=[t-\underline{T}^i_{off}]^++1}^tw^i_{\overline{\omega}}\le 1-u^i_t, \qquad t \in [L^i+1,T]\label{eq:min-down-time} \end{align} where $W^i=[\min(T,u^i_0(\underline{T}^i_{on}-T^i_0))]^+$ and $L^i=[\min(T,(1-u^i_0)(\underline{T}^i_{off}+T^i_0))]^+$.
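The windows $W^i$ and $L^i$ above depend only on the initial conditions, so they can be precomputed before the model is built. A small Python sketch (with illustrative parameter values):

```python
def initial_windows(T, u0, T_on_min, T_off_min, T0):
    """Compute W = [min(T, u0*(T_on_min - T0))]^+ and
    L = [min(T, (1 - u0)*(T_off_min + T0))]^+ from the unit's initial state."""
    pos = lambda x: max(0, x)
    W = pos(min(T, u0 * (T_on_min - T0)))
    L = pos(min(T, (1 - u0) * (T_off_min + T0)))
    return W, L

# unit online for 2 periods before the horizon (T0 = 2), minimum up time 4:
# it must stay on for W = 2 more periods; L = 0 since it is not offline
print(initial_windows(T=24, u0=1, T_on_min=4, T_off_min=3, T0=2))  # (2, 0)
```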
With the minimum up/down time constraints, taking the generator's history into account, the initial status of the unit is subject to the following constraints \cite{yang2017novel}: \begin{equation} u^i_t=u^i_0, \qquad t \in [0,W^i+L^i]\label{eq:initial-status} \end{equation} Let $\underline{P}^i$ be the minimum power output of unit $i$. The simplest generation limits are provided by \cite{carrion2006computationally}. \begin{align} &u^i_t{\underline{P}}^i\le P^i_t\label{eq:min-output}\\ &P^i_t\le u^i_t{\overline{P}}^i\label{eq:max-output} \end{align} If startup $(P^i_{start})$ and shutdown $(P^i_{shut})$ ramping limits are taken into account, the upper bounds can be tightened \cite{morales2013tight}. For unit $i \in \mathcal{J}^{>1}:=\{i\vert\underline{T}^i_{on}>1\}$: \begin{equation} P^i_t\le u^i_t{\overline{P}}^i-v^i_t({\overline{P}}^i-P^i_{start})-w^i_{t+1}({\overline{P}}^i-P^i_{shut}), \qquad t \in [1,T-1]\label{eq:max-output-12} \end{equation} And for unit $i \in \mathcal{J}^1:=\{i\vert\underline{T}^i_{on}=1\}$, \cite{gentile2017tight} proposes tighter upper bound constraints: \begin{align} &P^i_t\le u^i_t{\overline{P}}^i-v^i_t({\overline{P}}^i-P^i_{start})-w^i_{t+1}[P^i_{start}-P^i_{shut}]^+, \qquad t \in [1,T-1]\label{eq:max-output-21}\\ &P^i_t\le u^i_t{\overline{P}}^i-w^i_{t+1}({\overline{P}}^i-P^i_{shut})-v^i_t[P^i_{shut}-P^i_{start}]^+, \qquad t \in [1,T-1]\label{eq:max-output-22} \end{align} Let $P^i_{up}$ and $P^i_{down}$ be the ramp-up and ramp-down limits of unit $i$, respectively. \cite{damci2016polyhedral} provides ramping constraints which are proved to be facets of the two-period ramping polytope.
\begin{align} &P^i_t-P^i_{t-1} \le u^i_t(P^i_{up}+\underline{P}^i)-u^i_{t-1}\underline{P}^i+v^i_t(P^i_{start}-P^i_{up}-\underline{P}^i)\label{eq:ramp-up-1}\\ &P^i_{t-1}-P^i_t\le u^i_{t-1}(P^i_{down}+\underline{P}^i)-u^i_t\underline{P}^i+w^i_t(P^i_{shut}-P^i_{down}-\underline{P}^i)\label{eq:ramp-down-1} \end{align} For unit $i \in \mathcal{J}^{>1}\cap\mathcal{L}$, where $\mathcal{L}:=\{i\vert P_{up}^i>P_{shut}^i-\underline{P}^i\}$, \cite{ostrowski2012tight} proposes a strengthened ramp-up inequality. \begin{align} &P^i_t-P^i_{t-1} \le u^i_tP^i_{up}-w^i_t\underline{P}^i-w^i_{t+1}(P^i_{up}-P^i_{shut}+\underline{P}^i)+v^i_t(P^i_{start}-P^i_{up}),\notag\\ &t \in [1,T-1]\label{eq:ramp-up-2} \end{align} A strengthened ramp-down inequality for unit $i \in \mathcal{J}^{>1}\cap\underline{\mathcal{L}}$, where $\underline{\mathcal{L}}:=\{i\vert P_{down}^i>P_{start}^i-\underline{P}^i\}$, is also proposed. \begin{align} P^i_{t-1}-P^i_t \le &u^i_tP^i_{down}+w^i_tP^i_{shut}-v^i_{t-1}(P^i_{down}-P^i_{start}+\underline{P}^i)\notag\\ &-v^i_t(P^i_{down}+\underline{P}^i), \qquad t \in [2,T]\label{eq:ramp-down-2} \end{align} For unit $i \in \mathcal{J}^{>1}\cap\underline{\mathcal{J}}^{>1}\cap\mathcal{L}$, a further ramping constraint, spanning three periods, is \begin{align} P^i_{t+1}-P^i_{t-1}\le &2u^i_{t+1}P^i_{up}-w^i_t\underline{P}^i-w^i_{t+1}\underline{P}^i+v^i_t(P^i_{start}-P^i_{up})+\notag\\ &v^i_{t+1}(P^i_{start}-2P^i_{up}), \qquad t \in [1,T-1]\label{eq:ramp-up-3} \end{align} where $\underline{\mathcal{J}}^{>1}:=\{i\vert\underline{T}^i_{off}>1\}$.
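A feasibility check of the two-period ramp-up inequality (\ref{eq:ramp-up-1}) on concrete numbers illustrates how the startup term relaxes the bound. The parameter values in this Python sketch are illustrative only:

```python
def ramp_up_ok(P_t, P_prev, u_t, u_prev, v_t, P_up, P_start, P_min):
    """Check the two-period ramp-up inequality:
    P_t - P_prev <= u_t*(P_up + P_min) - u_prev*P_min + v_t*(P_start - P_up - P_min)."""
    rhs = u_t * (P_up + P_min) - u_prev * P_min + v_t * (P_start - P_up - P_min)
    return P_t - P_prev <= rhs

# running unit (u_prev = u_t = 1, v_t = 0): the increase is limited by P_up
print(ramp_up_ok(P_t=120, P_prev=100, u_t=1, u_prev=1, v_t=0,
                 P_up=30, P_start=60, P_min=50))  # True (20 <= 30)
# startup period (u_prev = 0, u_t = v_t = 1): the output is limited by P_start
print(ramp_up_ok(P_t=70, P_prev=0, u_t=1, u_prev=0, v_t=1,
                 P_up=30, P_start=60, P_min=50))  # False (70 > 60)
```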
When compactness and tightness are taken into account simultaneously, the state-of-the-art two-period MIQP UC formulation \cite{damci2016polyhedral}, denoted as the 2-period model (2P), is \begin{align} &\min \quad(\ref{eq:objective-function})\notag\\ &s.t.\left\{ \begin{aligned} &(\ref{eq:power-balance})(\ref{eq:spinning-reserve})(\ref{eq:logical})(\ref{eq:hot-start})(\ref{eq:cold-start})(\ref{eq:min-up-time})(\ref{eq:min-down-time})\\ &(\ref{eq:initial-status})(\ref{eq:min-output})(\ref{eq:max-output})(\ref{eq:ramp-up-1})(\ref{eq:ramp-down-1})\\ &u^i_t,v^i_t,w^i_t\in \{0,1\};P^i_t,S^i_t\in R_+ \end{aligned} \right. \end{align} The state-of-the-art UC formulation with the tightest unit generation and ramping constraints within three periods \cite{gentile2017tight}\cite{damci2016polyhedral}\cite{morales2013tight}\cite{ostrowski2012tight}, denoted as the 3-period model (3P), is \begin{align} &\min \quad(\ref{eq:objective-function})\notag\\ &s.t.\left\{ \begin{aligned} &(\ref{eq:power-balance})(\ref{eq:spinning-reserve})(\ref{eq:logical})(\ref{eq:hot-start})(\ref{eq:cold-start})(\ref{eq:min-up-time})(\ref{eq:min-down-time})(\ref{eq:initial-status})(\ref{eq:min-output})\\ &(\ref{eq:max-output})\text{ for } t=T;(\ref{eq:max-output-12})\text{ for } i \in \mathcal{J}^{>1};(\ref{eq:max-output-21})(\ref{eq:max-output-22})\text{ for } i \in \mathcal{J}^1\\ &(\ref{eq:ramp-up-1}) \text{ for } i \in (\mathbb{N}-(\mathcal{J}^{>1}\cap\mathcal{L}))\cup((\mathcal{J}^{>1}\cap\mathcal{L})\land(t=T))\\ &(\ref{eq:ramp-down-1}) \text{ for } i \in (\mathbb{N}-(\mathcal{J}^{>1}\cap\underline{\mathcal{L}}))\cup((\mathcal{J}^{>1}\cap\underline{\mathcal{L}})\land(t=1))\\ &(\ref{eq:ramp-up-2}) \text{ for } i \in \mathcal{J}^{>1}\cap\mathcal{L};(\ref{eq:ramp-down-2}) \text{ for } i \in \mathcal{J}^{>1}\cap\underline{\mathcal{L}}\\ &(\ref{eq:ramp-up-3})\text{ for } i \in \mathcal{J}^{>1}\cap\underline{\mathcal{J}}^{>1}\cap\mathcal{L}\\ &u^i_t,v^i_t,w^i_t\in \{0,1\};P^i_t,S^i_t\in R_+ \end{aligned} \right.
\end{align} where $\mathbb{N}:=\{1,\cdots,N\}$. \subsection{Three-period locally ideal model}\label{subsec2} In order to simplify the proofs, \cite{yang2017novel} proposes to project $P_t^i$ onto $[0,1]$. \begin{equation} \widetilde{P}^i_t=\frac{P^i_t-u^i_t\underline{P}^i}{\overline{P}^i-\underline{P}^i} \end{equation} where $\widetilde{P}^i_t \in [0,1]$. In this way, the semi-continuous variable can be eliminated from the model. Similarly, we let $\widetilde{P}^i_{up}=\frac{P^i_{up}}{\overline{P}^i-\underline{P}^i}$, $\widetilde{P}^i_{down}=\frac{P^i_{down}}{\overline{P}^i-\underline{P}^i}$, $\widetilde{P}^i_{start}=\frac{P^i_{start}-\underline{P}^i}{\overline{P}^i-\underline{P}^i}$, $\widetilde{P}^i_{shut}=\frac{P^i_{shut}-\underline{P}^i}{\overline{P}^i-\underline{P}^i}$. The production cost $f_i(P^i_t)$ is transformed to $\widetilde{f}_i(\widetilde{P}^i_t)=\widetilde{\alpha}_iu^i_t+\widetilde{\beta}_i\widetilde{P}_t^i+\widetilde{\gamma}_i(\widetilde{P}_t^i )^2$, where $\widetilde{\alpha}_i=\alpha_i+\beta_i\underline{P}^i+\gamma_i(\underline{P}^i)^2$, $\widetilde{\beta}_i=(\overline{P}^i-\underline{P}^i)(\beta_i+2\gamma_i\underline{P}^i)$, $\widetilde{\gamma}_i=\gamma_i(\overline{P}^i-\underline{P}^i)^2$ \cite{yang2017novel}. The power balance constraint can be reformulated as \cite{yang2017novel}: \begin{equation} \sum\nolimits_{i=1}^{N}[\widetilde{P}^i_t(\overline{P}^i-\underline{P}^i)+u^i_t\underline{P}^i]-P_{D,t}=0\label{eq:new-power-balance} \end{equation} \cite{yang2017novel} introduces $\widetilde{S}_t^i$ to represent the part of the startup cost that exceeds $C_{hot}^i$, so that (\ref{eq:hot-start}) can be discarded and (\ref{eq:cold-start}) is reformulated as follows \cite{yang2021two}.
\begin{equation} \widetilde{S}^i_t\ge (C^i_{cold}-C^i_{hot})[v^i_t-\sum\nolimits_{\pi=\max(1,t-\underline{T}^i_{off}-T^i_{cold})}^{t-1}w^i_{\pi}-m^i_t]\label{eq:start-cost} \end{equation} \cite{yang2021two} proves that (\ref{eq:start-cost}) is more compact than (\ref{eq:hot-start})-(\ref{eq:cold-start}). The objective function (\ref{eq:objective-function}) can then be reformulated as \begin{equation} F_c=\sum\nolimits_{i=1}^{N}\sum\nolimits_{t=1}^{T}[\widetilde{f}_i(\widetilde{P}^i_t)+C^i_{hot}v^i_t+\widetilde{S}^i_t]\label{eq:new-objective-function} \end{equation} For notational simplicity, we drop the superscript $i$ for the generator when considering single-unit constraints in the remainder of this paper. \cite{yang2021two} introduces eight binary variables $\tau_{i,t}^1\sim\tau_{i,t}^8$, illustrated in Table \ref{tab:state-variable}, to represent the commitment of a single unit within three periods. According to the linear relations between $u_{t-1}^i$, $u_t^i$, $u_{t+1}^i$, $v_t^i$, $v_{t+1}^i$, $w_t^i$, $w_{t+1}^i$ and $\tau_{i,t}^1\sim\tau_{i,t}^8$, all of the new variables except $\tau^3_{i,t}$ can be eliminated from the constraints of this section.
$\tau^3_{i,t}$ can be determined by the following inequalities: \begin{align} &\tau^3_t\ge v_t+w_{t+1}-u_t, \qquad t \in [1,T-1]\label{eq:tao3-in1} \\ &\tau^3_t\le w_{t+1}, \qquad t \in [1,T-1]\label{eq:tao3-in2}\\ &\tau^3_t\le v_t, \qquad t \in [1,T-1]\label{eq:tao3-in3} \end{align} \begin{table}[t] \begin{center} \begin{minipage}{\textwidth} \caption{Illustration for state variables.}\label{tab:state-variable} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccc} \toprule $u_{t-1}$ & $u_t$ & $u_{t+1}$ & $\tau_t^1$ & $\tau_t^2$ & $\tau_t^3$ & $\tau_t^4$ & $\tau_t^5$ & $\tau_t^6$ & $\tau_t^7$ & $\tau_t^8$ & $\tau^{t-1}_{t-1,t-1}$ & $\tau^{t-1}_{t-1,t}$ & $\tau^{t-1}_{t-1,t+1}$ & $\tau^{t-1}_{t,t}$ & $\tau^{t-1}_{t,t+1}$ & $\tau^{t-1}_{t+1,t+1}$ \\ \midrule 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ \botrule \end{tabular} } \end{minipage} \end{center} \end{table} With the introduction of the new state variables, \cite{yang2021two} obtains a strong valid inequality for the unit generation limits.
\begin{align} \widetilde{P}_{t-1}\le& u_{t-1}-w_t(1-\widetilde{P}_{shut})-w_{t+1}(1-\widetilde{P}_{down}-\widetilde{P}_{shut})\notag\\&+\tau^3_t(1-\widetilde{P}_{down}-\widetilde{P}_{shut}), \qquad t \in [1,T-1]\label{eq:max-output-31}\\ \widetilde{P}_t\le& u_t-v_t(1-\widetilde{P}_{start})-w_{t+1}(1-\widetilde{P}_{shut})\notag\\&+\tau^3_t[1-\max(\widetilde{P}_{start},\widetilde{P}_{shut})], \qquad t \in [1,T-1]\label{eq:max-output-32}\\ \widetilde{P}_{t+1}\le& u_{t+1}-v_t(1-\widetilde{P}_{up}-\widetilde{P}_{start})-v_{t+1}(1-\widetilde{P}_{start})\notag\\&+\tau^3_t(1-\widetilde{P}_{up}-\widetilde{P}_{start}), \qquad t \in [1,T-1]\label{eq:max-output-33} \end{align} \cite{yang2021two} proves that (\ref{eq:max-output-31})-(\ref{eq:max-output-33}) are facets of the three-period ramping polytope under the assumption that $\widetilde{P}_{shut}+\widetilde{P}_{down}<1$ and $\widetilde{P}_{start}+\widetilde{P}_{up}<1$. Similar to the upper bound limits for the unit generation, \cite{yang2021two} provides tighter ramping constraints taking the startup and shutdown ramping limits into account.
\begin{align} \widetilde{P}_t-\widetilde{P}_{t-1}\le& v_t(\widetilde{P}_{start}-\widetilde{P}_{up})+\tau^3_t([\widetilde{P}_{up}-\widetilde{P}_{shut}]^+-[\widetilde{P}_{start}-\widetilde{P}_{shut}]^+)\notag\\ &+u_t\widetilde{P}_{up}-w_{t+1}[\widetilde{P}_{up}-\widetilde{P}_{shut}]^+, \qquad t \in [1,T-1]\label{eq:ramp-up-31}\\ \widetilde{P}_{t+1}-\widetilde{P}_t\le& u_{t+1}\widetilde{P}_{up}+v_{t+1}(\widetilde{P}_{start}-\widetilde{P}_{up}), \qquad t \in [1,T-1]\label{eq:ramp-up-32}\\ \widetilde{P}_{t+1}-\widetilde{P}_{t-1}\le& 2u_{t+1}\widetilde{P}_{up}+v_t(\widetilde{P}_{start}-\widetilde{P}_{up})+v_{t+1}(\widetilde{P}_{start}-2\widetilde{P}_{up})\notag\\ &+\tau^3_t(\widetilde{P}_{up}-\widetilde{P}_{start}), \qquad t \in [1,T-1]\label{eq:ramp-up-33}\\ \widetilde{P}_{t-1}-\widetilde{P}_t\le& u_{t-1}\widetilde{P}_{down}+w_t(\widetilde{P}_{shut}-\widetilde{P}_{down})\label{eq:ramp-down-31}\\ \widetilde{P}_t-\widetilde{P}_{t+1}\le& w_{t+1}(\widetilde{P}_{shut}-\widetilde{P}_{down})+\tau^3_t([\widetilde{P}_{down}-\widetilde{P}_{start}]^+-[\widetilde{P}_{shut}-\widetilde{P}_{start}]^+)\notag\\ &+u_t\widetilde{P}_{down}-v_t[\widetilde{P}_{down}-\widetilde{P}_{start}]^+, \qquad t \in [1,T-1]\label{eq:ramp-down-32}\\ \widetilde{P}_{t-1}-\widetilde{P}_{t+1}\le& 2u_{t-1}\widetilde{P}_{down}+w_t(\widetilde{P}_{shut}-2\widetilde{P}_{down})+w_{t+1}(\widetilde{P}_{shut}-\widetilde{P}_{down})\notag\\ &+\tau^3_t(\widetilde{P}_{down}-\widetilde{P}_{shut}), \qquad t \in [1,T-1]\label{eq:ramp-down-33} \end{align} In \cite{yang2021two}, (\ref{eq:ramp-up-31})-(\ref{eq:ramp-down-32}) are proved to be facets of three-period ramping polytope on the assumption that $\widetilde{P}_{shut}+\widetilde{P}_{down}<1$, $\widetilde{P}_{start}+\widetilde{P}_{up}<1$, $2\widetilde{P}_{down}<1$, $2\widetilde{P}_{up}<1$. 
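The normalized quantities $\widetilde{P}$ used in the constraints above come from the projection of $P^i_t$ onto $[0,1]$ introduced earlier, and the transformed cost coefficients satisfy $\widetilde{f}_i(\widetilde{P}^i_t)=f_i(P^i_t)$ for an online unit. This identity can be checked numerically; a minimal Python sketch (all parameter values illustrative):

```python
def normalize(P, u, P_min, P_max):
    """Project the output onto [0,1]: P~ = (P - u*P_min) / (P_max - P_min)."""
    return (P - u * P_min) / (P_max - P_min)

def transformed_cost_coeffs(alpha, beta, gamma, P_min, P_max):
    """Coefficients of f~(P~) = a*u + b*P~ + c*P~^2 induced by the projection."""
    d = P_max - P_min
    a = alpha + beta * P_min + gamma * P_min ** 2
    b = d * (beta + 2 * gamma * P_min)
    c = gamma * d ** 2
    return a, b, c

# illustrative unit: P in [50, 200], quadratic cost coefficients as below
alpha, beta, gamma, P_min, P_max = 100.0, 20.0, 0.05, 50.0, 200.0
P, u = 125.0, 1
P_tilde = normalize(P, u, P_min, P_max)
a, b, c = transformed_cost_coeffs(alpha, beta, gamma, P_min, P_max)
f_original = alpha * u + beta * P + gamma * P ** 2
f_transformed = a * u + b * P_tilde + c * P_tilde ** 2
print(abs(f_original - f_transformed) < 1e-9)  # True: the costs agree
```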
Then the tight and compact MIQP UC formulation within three periods \cite{yang2021two}, denoted as the 3-period high dimensional model (3P-HD), is \begin{align} &\min \quad(\ref{eq:new-objective-function})\notag\\ &s.t.\left\{ \begin{aligned} &(\ref{eq:spinning-reserve})(\ref{eq:logical})(\ref{eq:min-up-time})(\ref{eq:min-down-time})(\ref{eq:initial-status})(\ref{eq:new-power-balance})(\ref{eq:start-cost})\\ &(\ref{eq:tao3-in1})(\ref{eq:tao3-in2})(\ref{eq:tao3-in3})(\ref{eq:max-output-31})(\ref{eq:max-output-32})(\ref{eq:max-output-33})\\ &(\ref{eq:ramp-up-31})(\ref{eq:ramp-up-33})(\ref{eq:ramp-down-32})(\ref{eq:ramp-down-33})\\ &(\ref{eq:ramp-up-32}) \text{ for } t=T-1,(\ref{eq:ramp-down-31}) \text{ for } t=1\\ &\tau^3_t=0 \text{ for } i \in \mathcal{J}^{>1}\\ &u^i_t,v^i_t,w^i_t,{\tau^3_t}^i\in \{0,1\},\widetilde{P}^i_t\in [0,1],\widetilde{S}^i_t\in R_+ \end{aligned} \right. \end{align} \section{Multi-period locally ideal model based on sliding window}\label{sec3} Much progress has been made on the tightness of UC models over the last few decades. Several papers give facets of the feasible set of the UC problem under a subset of the constraints. However, most of them only state the facet expressions without presenting the construction method. The papers that do present construction methods treat only the two-period or three-period case, and these methods are difficult to generalize to multiple periods. In this paper, we extend the inequality-construction method used in \cite{yang2021two} to multiple periods, thereby obtaining facets of the ramping polytope over an arbitrary number of periods, together with the corresponding theoretical results. We use $\textup{UB}(\cdot)$ to denote an upper bound on ``$\cdot$", $\textup{LB}(\cdot)$ to denote a lower bound on ``$\cdot$", $\textup{RHS}$ for the right hand side, and $\textup{LHS}$ for the left hand side. Consider the following inequality for UC.
\begin{equation} \textup{LHS expression}\le \textup{RHS expression}\label{eq:inequality-expression} \end{equation} Traditionally, the LHS expression is constructed from the physical significance of the UC problem. For instance, according to the physical significance of the generation limit, the ``LHS expression" can be set to ``$\widetilde{P}^i_t$"; similarly, it can be set to ``$\widetilde{P}^i_t-\widetilde{P}^i_{t-1}$" for ramping constraints. An ``LHS expression" without a clear physical significance can also be used to construct tight UC constraints; for instance, constraint (13) in \cite{pan2016polyhedral} uses ``$P^i_{t-1}-P^i_t+P^i_{t+1}$" as the LHS expression of a ramping constraint. The ``RHS expression" of (\ref{eq:inequality-expression}) can likewise be constructed from the physical characteristics of the thermal unit. The key is to compress the upper bound on the left hand side (LHS) of (\ref{eq:inequality-expression}) as much as possible: by bringing the other physical constraints into this procedure and setting the ``RHS expression" equal to $\textup{UB}(\textup{LHS})$, the strongest possible inequality is obtained. Based on this method, we next improve the single-unit constraints, spanning an arbitrary number of periods, for every unit in the system. Note that in this section we consider an arbitrary M-period without taking the history status into consideration. \subsection{Sliding window}\label{subsec3} As illustrated in Fig. \ref{fig:sliding-window}, the scheduling span (including the $T$ periods in the planning horizon and one period prior to the planning horizon) of each unit $i$ can be divided into $T-M^i+2$ overlapping M-periods, referred to as sliding windows. $M^i$ represents the size of the sliding window of unit $i$; obviously, $M^i\in [2,T+1]$. Note that the window size can differ from unit to unit.
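Enumerating the windows is straightforward: the M-periods start at $m=0,\dots,T-M+1$, giving $T-M+2$ windows in total. A Python sketch:

```python
def sliding_windows(T, M):
    """Start indices m of the T - M + 2 overlapping M-periods; window m
    covers periods [m, m + M - 1], with period 0 being the one prior to
    the planning horizon."""
    return list(range(0, T - M + 2))

print(sliding_windows(T=5, M=3))        # [0, 1, 2, 3]
print(len(sliding_windows(T=24, M=4)))  # 22
```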
\begin{figure} \caption{Sliding window for the scheduling span} \label{fig:sliding-window} \end{figure} By introducing new binary variables to represent the feasible combinations of unit commitments in periods $\{t-1,t,t+1\}$, \cite{yang2021two} provides a method to construct facets of the three-period polytope. Theoretically, this method can be extended directly to multiple periods, but the number of state variables that would need to be introduced grows as $O(2^T)$. In order to reduce the number of state variables, we draw on the variable definition of \cite{frangioni2015new} to present our high-dimensional formulation of the UC problem. We introduce new state variables $\tau_{h,k}^{i,m}$ to denote the schedule of unit $i$ in periods $[h,k]$ of the M-period ranked $m$. For any M-period $m$, we let $\tau_{h,k}^{i,m}$ be 1 if unit $i$ is turned on at time period $h$ (i.e., $u^i_{h-1}=0$ when $h>m$, $u^i_h=1$) and remains operational until the end of time period $k$ (i.e., $u^i_k=1$, $u^i_{k+1}=0$ when $k<m+M^i-1$), and 0 otherwise. It is worth noting that in \cite{frangioni2015new} and \cite{bacci2019new} the generator is offline in periods $h-1$ and $k+1$, whereas in our formulation the commitments of the generator in periods $h-1$ and $k+1$ are left unspecified when $h=m$ and $k=m+M-1$, respectively. We let $A^i_m$ be the set of all feasible continuous operating intervals of unit $i$ in M-period $m$, defined as $A^i_m:=\{[h,k]\vert h=m,k\in [m,m+M^i-1],\text{ or }h\in[m+1,m+M^i-1],k\in [\min(h+\underline{T}^i_{on}-1,m+M^i-1),m+M^i-1]\}$, and let $\lvert A^i_m\rvert$ be the number of elements of $A^i_m$. For example, let $m=t-1$, $M^i=3$, $\underline{T}^i_{on}=1$. As illustrated in Table \ref{tab:state-variable}, \cite{anjos2017unit} introduces eight state variables ${\tau_t^1}^i\sim{\tau_t^8}^i$ to represent the unit state and thus constructs some stronger inequalities for generation limits and ramping limits.
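The set $A^i_m$ defined above can be enumerated directly from its definition. The following Python sketch builds $A_m$ (superscript $i$ dropped); for the example $m=0$, $M=3$, $\underline{T}_{on}=1$ it yields the six interval variables appearing in the last six columns of Table \ref{tab:state-variable}:

```python
def feasible_intervals(m, M, T_on_min):
    """A_m: feasible continuous operating intervals [h, k] within window m.
    h = m allows a unit that is already on at the window start; an interior
    startup at h must respect the minimum up time unless the interval is
    truncated by the window end m + M - 1."""
    A = [(m, k) for k in range(m, m + M)]               # h = m, any k
    for h in range(m + 1, m + M):                        # startup inside window
        k_lo = min(h + T_on_min - 1, m + M - 1)
        A.extend((h, k) for k in range(k_lo, m + M))
    return A

A = feasible_intervals(m=0, M=3, T_on_min=1)
print(len(A))  # 6
print(A)       # [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
```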
Here we replace ${\tau_t^1}^i\sim{\tau_t^8}^i$ with $\tau^{i,m}_{h,k}$. According to the definition of $\tau^m_{h,k}$, we can easily represent $u_t$, $v_t$, $w_t$ in terms of $\tau^m_{h,k}$. \begin{align} &\sum\nolimits_{\{[h,k] \in A_m,t \in [h,k]\}}\tau^m_{h,k}=u_t, \qquad t \in [m,m+M-1] \label{eq:m-taou} \\ &\sum\nolimits_{\{[h,k] \in A_m,h=t\}}\tau^m_{h,k}=v_t, \qquad t \in [m+1,m+M-1] \label{eq:m-taov} \\ &\sum\nolimits_{\{[h,k] \in A_m,k=t-1\}}\tau^m_{h,k}=w_t, \qquad t \in [m+1,m+M-1] \label{eq:m-taow} \end{align} \eqref{eq:logical} can be obtained from constraints \eqref{eq:m-taou}-\eqref{eq:m-taow}. Within each M-period, every period $t$ falls either into a continuous online interval or into a continuous offline interval. When the minimum down time constraints are taken into consideration, according to the physical significance of $\tau_{h,k}^m$, we obtain the following inequalities \cite{knueven2018ramping}. \begin{equation} \sum\nolimits_{\{[h,k] \in A_m,t \in [h,k+\underline{T}_{off}]\}}\tau^m_{h,k}\le 1, \qquad t \in [m,m+M-1] \label{eq:inequality-taotao} \end{equation} Two adjacent M-periods are not independent but overlap, so the values of $u_t,v_t,w_t$ in adjacent M-periods $m$ and $m+1$ must be consistent. According to (\ref{eq:m-taou})-(\ref{eq:m-taow}), we have the following equalities.
\begin{align} &\sum\nolimits_{\{[h,k] \in A_m,t \in [h,k]\}}\tau^m_{h,k}=\sum\nolimits_{\{[h,k] \in A_{m+1},t \in [h,k]\}}\tau^{m+1}_{h,k},\notag\\ &\qquad m \in [0,T-M],t \in [m+1,m+M-1] \label{eq:m-taou-m} \\ &\sum\nolimits_{\{[h,k] \in A_m,h=t\}}\tau^m_{h,k}=\sum\nolimits_{\{[h,k] \in A_{m+1},h=t\}}\tau^{m+1}_{h,k},\notag\\ &\qquad m \in [0,T-M],t \in [m+2,m+M-1] \label{eq:m-taov-m} \\ &\sum\nolimits_{\{[h,k] \in A_m,k=t-1\}}\tau^m_{h,k}=\sum\nolimits_{\{[h,k] \in A_{m+1},k=t-1\}}\tau^{m+1}_{h,k},\notag\\ &\qquad m \in [0,T-M],t \in [m+2,m+M-1] \label{eq:m-taow-m} \end{align} Either of (\ref{eq:m-taov-m}) and (\ref{eq:m-taow-m}) can be derived from the other together with (\ref{eq:m-taou-m}). In this way, the new binary variables $\tau^m_{h,k}$ can replace the state variables $u_t,v_t,w_t$ in the UC formulation. We let $\prod$ denote the Cartesian product throughout this paper. \begin{theorem}\label{binary-integer-0} For any M-period $m$ we define the polyhedron $\mathcal{B}^R_m=\{\prod_{[h,k]\in A_m}\tau^m_{h,k}\in[0,1]^{\lvert A_m\rvert}:(\ref{eq:inequality-taotao})\}$. Let $\mathcal{B}^I_m=\mathcal{B}^R_m\cap\{\prod_{[h,k]\in A_m}\tau^m_{h,k}\in\{0,1\}^{\lvert A_m\rvert}\}$. Then $\mathcal{B}^R_m=\textup{conv}(\mathcal{B}^I_m)$. \end{theorem} \begin{proof}[\textbf{Proof}] The block of columns of the coefficient matrix of (\ref{eq:inequality-taotao}) associated with intervals starting at period $h$ is $$ \bm{C_h}=\begin{blockarray}{ccccc} \tau^m_{h,k} & \tau^m_{h,k+1} & \cdots & \tau^m_{h,m+M-1} & \\ \begin{block}{(cccc)c} 0 & 0 & \cdots & 0 & t=m \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & t=h-1 \\ 1 & 1 & \cdots & 1 & t=h \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & 1 & \cdots & 1 & t=k+\underline{T}_{off} \\ 0 & 1 & \cdots & 1 & t=k+\underline{T}_{off}+1 \\ \vdots & \ddots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & 1 & t=m+M-1 \\ \end{block} \end{blockarray} $$ We let $\bm{B_m}$ be the coefficient matrix of inequalities (\ref{eq:inequality-taotao}) for M-period $m$, so that $\bm{B_m}=[\bm{C_m},\bm{C_{m+1}},\cdots,\bm{C_{m+M-1}}]$.
For any column, if there exist time periods $t_1$, $t_2$ with $t_1+1<t_2$, $t_1,t_2\in [m,m+M-1]$, $t_1\in [h,k+\underline{T}_{off}]$, $t_2\in [h,k+\underline{T}_{off}]$ (i.e., the entries of the rows corresponding to $t=t_1$ and $t=t_2$ in this column are equal to 1), then $t\in [h,k+\underline{T}_{off}]$ for all $t$ with $t_1<t<t_2$. Hence $\bm{B_m}$ is an interval matrix (Definition 2.2 of §$\uppercase\expandafter{\romannumeral3}.1.2$ in \cite{wolsey1988integer}). According to Corollary 2.10 of §$\uppercase\expandafter{\romannumeral3}.1.2$ in \cite{wolsey1988integer}, $\bm{B_m}$ is totally unimodular. Let $\bm{d'}=\bm{0}$, $\bm{d}=\bm{1}$, $\bm{b'}=\bm{0}$, $\bm{b}=\bm{1}$; then $\mathcal{B}^R_m$ is an integral polyhedron by Proposition 2.3 of §$\uppercase\expandafter{\romannumeral3}.1.2$ in \cite{wolsey1988integer}. An analogous theorem, with a different proof technique, is given in \cite{knueven2018ramping}. \end{proof} \subsection{The multi-period locally-facet-based constraints}\label{subsec4} In this subsection we discuss the generation limits and ramping constraints in any M-period $m\in [0,T-M+1]$. Facets are an important concept in the description of polyhedra and address the question of what it means for an inequality to be strong. If $P$ is a polyhedron, $F=\{x\in P\vert\pi^Tx=\pi_0\}$ defines a face of the polyhedron $P$ if $\pi^Tx\le\pi_0$ is a valid inequality for $P$. Furthermore, $F$ is a facet of $P$ if $F$ is a face of $P$ and $\textup{dim}(F)=\textup{dim}(P)-1$. Proposition 9.1 in \cite{wolsey2020integer} points out that, if $P$ is full-dimensional, a valid inequality $\pi^Tx\le\pi_0$ is necessary in the description of $P$ if and only if it defines a facet of $P$. However, directly constructing facets this way is unrealistic in most cases, because an enormous number of affinely independent points and state variables would be needed to describe a facet of an MILP problem's feasible set.
Since UC is an NP-hard problem, finding the explicit facet-based representation of the entire UC problem could be difficult and perhaps even unrealistic. Alternatively, one may seek a facet-based formulation for a well-structured subset of the original MILP problem's feasible set. We borrow the term ``locally" from \cite{padberg1998location}, and use ``locally" to indicate that the facet-based formulation is sought for a specifically selected portion of the problem, not the entire mathematical program. Although including such a locally ideal MILP formulation does not guarantee a facet-based formulation of the entire UC problem, it can help tighten the lower bound and in turn reduce the search effort of the branch\&cut algorithm. We now show the details of our method for constructing the locally-facet-based expressions for unit generation limits and ramping constraints. \begin{table}[t] \begin{center} \begin{minipage}{\textwidth} \caption{Illustration for upper bound of generation limits.}\label{tab:max-output} \begin{tabular*}{\textwidth}{ccc} \toprule case & $\tau^m_{h,k}=1$ & $\textup{UB}(\widetilde{P}_t)$ \\ \midrule 1 & $m<h\le t\le k<m+M-1$ & $\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$\\ 2 & $m<h\le t\le k=m+M-1$ & $\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$\\ 3 & $m=h\le t\le k<m+M-1$ & $\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$\\ 4 & $m=h\le t\le k=m+M-1$ & 1\\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table} First we consider the generation limits in M-period $m$. When the ramping limits are taken into consideration, we list $\textup{UB}(\widetilde{P}_t)$ in Table \ref{tab:max-output}. If period $t$ is not in any operating interval, the maximum power output in period $t$ is clearly zero.
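The case analysis in Table \ref{tab:max-output} can be written compactly: the startup-side bound is active only when $h>m$ and the shutdown-side bound only when $k<m+M-1$. A Python sketch with illustrative normalized limits:

```python
def ub_power(t, h, k, m, M, P_start, P_shut, P_up, P_down):
    """UB(P~_t) for a period t inside the operating interval [h, k] of
    window m (normalized units, so 1.0 is the unconditional cap)."""
    bounds = [1.0]
    if h > m:                  # startup is observed inside the window
        bounds.append(P_start + (t - h) * P_up)
    if k < m + M - 1:          # shutdown is observed inside the window
        bounds.append(P_shut + (k - t) * P_down)
    return min(bounds)

# case 1 of the table: both startup and shutdown visible in the window
print(ub_power(t=2, h=1, k=3, m=0, M=5,
               P_start=0.2, P_shut=0.3, P_up=0.25, P_down=0.25))  # 0.45
# case 4: the interval spans the whole window, only the cap applies
print(ub_power(t=2, h=0, k=4, m=0, M=5,
               P_start=0.2, P_shut=0.3, P_up=0.25, P_down=0.25))  # 1.0
```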
According to Table \ref{tab:max-output}, the maximum power output in period $t$ varies with the continuous operating interval in which $t$ is located. We thus obtain the following upper bound limit for the power of the unit: \begin{align} \widetilde{P}_t\le &\sum\nolimits_{\{[h,k] \in A_m,m<h\le t\le k<m+M-1\}}\tau^m_{h,k}\times\notag\\ &\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k]\in A_m,m<h\le t\le k=m+M-1\}}\tau^m_{h,k}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]\notag\\ &+\sum\nolimits_{\{[h,k]\in A_m,m=h\le t\le k<m+M-1\}}\tau^m_{h,k}\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k]\in A_m,m=h\le t\le k=m+M-1\}}\tau^m_{h,k},\quad t\in[m,m+M-1]\label{eq:tight-max-output-0} \end{align} Analogous results are proposed in \cite{bacci2019new}, but our formulation takes all possible cases into account and is more general; our inequalities are strictly tighter than (55)-(56) in \cite{bacci2019new} whenever some $[h,k] \in A_m$ exists such that $\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}<\widetilde{P}_{shut}$ or $\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}<\widetilde{P}_{start}$. When the startup/shutdown ramping limits and generation limits are taken into consideration, similar to the construction of the unit generation limits, we list $\textup{UB}(\widetilde{P}_t-\widetilde{P}_{t-a})$ in Table \ref{tab:ramp}.
According to Table \ref{tab:ramp}, we can obtain the following ramp-up constraints: \begin{align} \widetilde{P}_t-\widetilde{P}_{t-a}\le &\sum\nolimits_{\{[h,k]\in A_m,t-a<h\le t\le k=m+M-1\}}\tau^m_{h,k}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,t-a<h\le t\le k<m+M-1\}}\tau^m_{h,k}\times\notag\\ &\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,h\le t-a<t\le k<m+M-1\}}\tau^m_{h,k}\times\notag\\ &\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,h\le t-a<t\le k=m+M-1\}}\tau^m_{h,k}\min(1,a\widetilde{P}_{up}),\notag\\ &t\in[m+1,m+M-1],a\in[1,t-m]\label{eq:tight-ramp-up-0} \end{align} Similarly, we have the following ramp-down constraints: \begin{align} \widetilde{P}_{t-a}-\widetilde{P}_t\le &\sum\nolimits_{\{[h,k]\in A_m,m=h\le t-a\le k<t\}}\tau^m_{h,k}\min[1,\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,m<h\le t-a\le k<t\}}\tau^m_{h,k}\times\notag\\ &\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,m<h\le t-a<t\le k\}}\tau^m_{h,k}\times\notag\\ &\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,m=h\le t-a<t\le k\}}\tau^m_{h,k}\min(1,a\widetilde{P}_{down}),\notag\\ &t\in[m+1,m+M-1],a\in[1,t-m]\label{eq:tight-ramp-down-0} \end{align} Analogous results with $a=1$ are proposed in \cite{bacci2019new}, but our formulation also takes the generation limits into consideration and is more general, since ramping constraints spanning more than two periods (i.e., $a>1$) are covered.
(\ref{eq:tight-ramp-up-0}) is strictly tighter than (50) in \cite{bacci2019new} whenever some $[h,k] \in A_m$ exists such that $\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}<\max(\widetilde{P}_{up},\widetilde{P}_{start})$. (\ref{eq:tight-ramp-down-0}) is strictly tighter than (51) in \cite{bacci2019new} whenever some $[h,k]\in A_m$ exists such that $\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up}<\max(\widetilde{P}_{down},\widetilde{P}_{shut})$. \begin{table}[t] \begin{center} \begin{minipage}{\textwidth} \caption{Illustration for upper bound of ramping limits}\label{tab:ramp} \resizebox{\textwidth}{!}{ \begin{tabular}{ccc} \toprule case & $\tau^m_{h,k}=1$ & $\textup{UB}(\widetilde{P}_t-\widetilde{P}_{t-a})$ \\ \midrule 1 & $h\le t-a<t\le k<m+M-1$ & $\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$ \\ 2 & $h\le t-a<t\le k=m+M-1$ & $\min(1,a\widetilde{P}_{up})$ \\ 3 & $t-a<h\le t\le k<m+M-1$ & $\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$ \\ 4 & $t-a<h\le t\le k=m+M-1$ & $\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$ \\ \botrule \end{tabular} } \resizebox{\textwidth}{!}{ \begin{tabular}{ccc} \toprule case & $\tau^m_{h,k}=1$ & $\textup{UB}(\widetilde{P}_{t-a}-\widetilde{P}_t)$ \\ \midrule 1 & $m<h\le t-a<t\le k$ & $\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$\\ 2 & $m=h\le t-a<t\le k$ & $\min(1,a\widetilde{P}_{down})$\\ 3 & $m<h\le t-a\le k<t$ & $\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$\\ 4 & $m=h\le t-a\le k<t$ & $\min[1,\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$\\ \botrule \end{tabular} } \end{minipage} \end{center} \end{table} We note that similar expressions have been proposed in \cite{bacci2019new}.
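The ramp-up part of Table \ref{tab:ramp} admits the same compact reading: the bound depends on whether the startup period $h$ lies inside the ramping span $(t-a,t]$ and whether the shutdown is visible before the window end. A Python sketch with illustrative normalized limits:

```python
def ub_ramp_up(t, a, h, k, m, M, P_start, P_shut, P_up, P_down):
    """UB(P~_t - P~_{t-a}) for the operating interval [h, k] of window m,
    following the four ramp-up cases of the table (normalized units)."""
    bounds = [1.0]
    if h > t - a:              # unit starts up within the a ramping periods
        bounds.append(P_start + (t - h) * P_up)
    else:                      # unit already on at t - a: plain a-period ramp
        bounds.append(a * P_up)
    if k < m + M - 1:          # shutdown is observed before the window end
        bounds.append(P_shut + (k - t) * P_down)
    return min(bounds)

# startup inside the span, no visible shutdown (case 4 of the table)
print(ub_ramp_up(t=3, a=2, h=2, k=4, m=0, M=5,
                 P_start=0.2, P_shut=0.3, P_up=0.25, P_down=0.25))  # 0.45
# unit already on, shutdown visible (case 1): the a*P_up term binds
print(ub_ramp_up(t=3, a=1, h=1, k=3, m=0, M=5,
                 P_start=0.2, P_shut=0.3, P_up=0.25, P_down=0.25))  # 0.25
```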
Inspired by the dynamic programming (DP) algorithm, \cite{bacci2019new} presents a new formulation for the single-unit commitment problem and proves that it describes the convex hull of the solutions of the single-unit commitment problem. The authors then transform the DP convex hull model to obtain several similar expressions for unit generation limits and ramping constraints, formed as linear combinations of some constraints of the DP model. However:
\begin{enumerate}
\item Our procedure for constructing the expressions is more direct and simpler than that of \cite{bacci2019new}, and a comprehensive and rigorous facet theory is proved in this paper.
\item The expressions in \cite{bacci2019new} were obtained by relaxing the DP model, and the theoretical tightness of the resulting expressions was not established.
\item As mentioned above, our model is tighter than that in \cite{bacci2019new} because we consider the constraints more comprehensively; for instance, we take the upper bounds of the power generation of the generator into account when constructing the ramping constraints.
\item \cite{bacci2019new} constructs constraint inequalities on the solution space of the whole UC problem, while our model constructs tight constraint inequalities on multi-period single-unit constraint polytopes for the UC problem, based on the ``locally ideal" principle, and then combines them together.
\end{enumerate}
Therefore, the model constructed in this paper satisfies all the typical constraints for thermal units: minimum up and down time, power generation limits, and ramping (including ramp-up, ramp-down, start-up and shut-down) limits, which are the constraints considered when constructing the convex hull of the polytope for the UC problem in most of the literature \cite{damci2016polyhedral,frangioni2015new,pan2016polyhedral}.
In fact, by introducing new state variables $\varrho_{c,d}$, indicating that unit $i$ remains off in periods $[c,d]$, the method proposed above can be extended to multi-period constraint polytopes with other single-unit constraints such as start-up/shut-down costs. For lack of space, we do not treat them further here.
\subsection{Discussion of tightness of proposed constraints}\label{subsec5}
We call a facet of a polytope that considers a subset of the constraints of the UC model (i.e., minimum up and down time, power generation limits, ramping limits) in only a few periods of time a ``locally-facet". We derived tight expressions for a subset of the constraints of the unit commitment problem in the previous subsection. Here we discuss the tightness of these inequalities. We refer to the feasible set defined by constraints (\ref{eq:inequality-taotao})(\ref{eq:tight-max-output-0}) as $\mathcal{P}^R_m$, i.e. $\mathcal{P}^R_m=\{(\prod_{[h,k]\in A_m}\tau^m_{h,k},\widetilde{P}_m,\cdots,\widetilde{P}_{m+M-1})\in [0,1]^{\lvert A_m\rvert +M}:(\ref{eq:inequality-taotao})(\ref{eq:tight-max-output-0})\}$. Let $\mathcal{P}^I_m=\mathcal{P}^R_m\cap\{(\prod_{[h,k]\in A_m}\tau^m_{h,k},\widetilde{P}_m,\cdots,\widetilde{P}_{m+M-1})\in\{0,1\}^{\lvert A_m\rvert}\times[0,1]^M\}$.
\begin{proposition}\label{prp1} Inequality (\ref{eq:tight-max-output-0}) defines a facet of $\textup{conv}(\mathcal{P}^I_m)$. \end{proposition}
\begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof}
\begin{theorem} $\mathcal{P}^R_m=\textup{conv}(\mathcal{P}^I_m)$. \end{theorem}
\begin{proof}[\textbf{Proof}] We have proved in Theorem \ref{binary-integer-0} that $\mathcal{B}^R_m$ is an integral polyhedron. According to Lemma 4 in \cite{gentile2017tight}, it is not difficult to prove $\mathcal{P}^R_m=\textup{conv}(\mathcal{P}^I_m)$.
\end{proof}
We refer to the feasible set defined by constraints (\ref{eq:inequality-taotao})(\ref{eq:tight-max-output-0})-(\ref{eq:tight-ramp-down-0}) as $\mathcal{Q}^R_m$, i.e. $\mathcal{Q}^R_m=\{(\prod_{[h,k]\in A_m}\tau^m_{h,k},\widetilde{P}_m,\cdots,\widetilde{P}_{m+M-1})\in [0,1]^{\lvert A_m\rvert+M}:(\ref{eq:inequality-taotao})(\ref{eq:tight-max-output-0})-(\ref{eq:tight-ramp-down-0})\}$. Let $\mathcal{Q}^I_m=\mathcal{Q}^R_m\cap\{(\prod_{[h,k]\in A_m}\tau^m_{h,k},\widetilde{P}_m,\cdots,\widetilde{P}_{m+M-1})\in\{0,1\}^{\lvert A_m\rvert}\times[0,1]^M\}$.
\begin{proposition}\label{prp2} Inequality (\ref{eq:tight-max-output-0}) defines a facet of $\textup{conv}(\mathcal{Q}^I_m)$. \end{proposition}
\begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof}
\begin{proposition}\label{prp3} Inequality (\ref{eq:tight-ramp-up-0}) defines a facet of $\textup{conv}(\mathcal{Q}^I_m)$ if and only if $a<\frac{1}{\widetilde{P}_{up}}$. \end{proposition}
\begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof}
\begin{proposition}\label{prp4} Inequality (\ref{eq:tight-ramp-down-0}) defines a facet of $\textup{conv}(\mathcal{Q}^I_m)$ if and only if $a<\frac{1}{\widetilde{P}_{down}}$. \end{proposition}
\begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof}
We let $G_1=\min(\lceil\frac{1}{\widetilde{P}_{up}}\rceil,M)$ and $G_2=\min(\lceil\frac{1}{\widetilde{P}_{down}}\rceil,M)$. Our work identifies facets and removes a large number of redundant inequalities (roughly $\frac{2M(M-1)-(G_1-1)(2M-G_1)-(G_2-1)(2M-G_2)}{2M(M-1)}\times100\%$ of them), as shown in Fig. \ref{fig:facet-constraints} in the numerical experiment, which can significantly improve the speed of modeling and solving.
\subsection{Multi-period locally-facet-based UC formulation}\label{subsec6}
Based on the sliding window, we can obtain the multi-period locally-facet-based UC formulation. Since we omit the state variable $u_t^i$, we have to transform some of the original constraints.
\begin{equation}
\sum\nolimits_{\{[h,k] \in A_{\min(t,T-M+1)},t \in [h,k]\}}\tau^{\min(t,T-M+1)}_{h,k}=u_0,\qquad t\in[0,W+L]\label{eq:new-initial-status-0}
\end{equation}
We should note that the equalities (\ref{eq:new-initial-status-0}) can replace (\ref{eq:initial-status}). For convenience of processing, we classify the units as follows. We let $J^{>x}:=\{i\vert\underline{T}_{on}^i>x\}$ and $K^{>x}:=\{i\vert\underline{T}_{off}^i>x\}$, where $x\in[1,T]$. We should note that the minimum up time constraints are actually implicit in the definition of $A^i_m$; however, they are not complete in the global space. According to the linear relations (\ref{eq:m-taou})-(\ref{eq:m-taow}), we can reformulate the minimum up/down time constraints (\ref{eq:min-up-time})(\ref{eq:min-down-time}) as follows:
\begin{align}
&\sum\nolimits_{\overline{\omega}=[t-\underline{T}_{on}]^++1}^t\sum\nolimits_{\{[\overline{\omega},k] \in A_{\max(0,\overline{\omega}-M+1)}\}}\tau^{\max(0,\overline{\omega}-M+1)}_{\overline{\omega},k}\le\notag\\
&\sum\nolimits_{\{[h,k] \in A_{\min(t,T-M+1)},t \in [h,k]\}}\tau^{\min(t,T-M+1)}_{h,k}, \quad t \in [W+1,T]\label{eq:tao-min-up-time}\\
&\sum\nolimits_{\overline{\omega}=[t-\underline{T}_{off}]^++1}^t\sum\nolimits_{\{[h,\overline{\omega}-1] \in A_{\min(\overline{\omega}-1,T-M+1)}\}}\tau^{\min(\overline{\omega}-1,T-M+1)}_{h,\overline{\omega}-1}\le\notag\\
&1-\sum\nolimits_{\{[h,k] \in A_{\min(t,T-M+1)},t \in [h,k]\}}\tau^{\min(t,T-M+1)}_{h,k}, \quad t \in [L+1,T]\label{eq:tao-min-down-time}
\end{align}
If $\underline{T}^i_{on}<M^i$, then (\ref{eq:tao-min-up-time}) are redundant and can be removed. If $\underline{T}^i_{off}<M^i$, then (\ref{eq:tao-min-down-time}) are redundant and can be removed.
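The in-window interval sets can also be enumerated directly, which verifies the interval counts used later. Below is a minimal Python sketch, under the assumption (consistent with the remark above that the minimum up time is implicit in $A^i_m$ but incomplete globally) that intervals touching a window boundary are exempt from the in-window minimum up time, since the run may continue outside the window; the helper names are ours:

```python
def enumerate_A(m, M, T_on):
    """Admissible on-intervals [h, k] inside the window [m, m+M-1].

    Assumption: boundary-touching intervals (h = m or k = m+M-1) are kept
    regardless of length, because the on-run may extend beyond the window;
    interior intervals must respect the minimum up time T_on.
    """
    A = []
    for h in range(m, m + M):
        for k in range(h, m + M):
            boundary = (h == m) or (k == m + M - 1)
            if boundary or (k - h + 1 >= T_on):
                A.append((h, k))
    return A

def size_A(M, T_on):
    # closed form: 2M-1 boundary intervals plus the interior intervals
    # of length >= T_on, i.e. [M-T_on-1]^+ (M-T_on) / 2 of them
    return 2 * M - 1 + max(M - T_on - 1, 0) * (M - T_on) // 2
```

The closed form matches the per-window factor of $n_2$ discussed in the next subsection.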
Using (\ref{eq:m-taou})-(\ref{eq:m-taow}), we can reformulate (\ref{eq:spinning-reserve}), (\ref{eq:new-power-balance}) and (\ref{eq:start-cost}) as:
\begin{align}
&\widetilde{S}_t\ge (C_{cold}-C_{hot})[\sum\nolimits_{\{[t,k] \in A_{\max(0,t-M+1)}\}}\tau^{\max(0,t-M+1)}_{t,k}-\notag\\
&\sum\nolimits_{\pi=\max(1,t-\underline{T}_{off}-T_{cold})}^{t-1}\sum\nolimits_{\{[h,\pi-1] \in A_{\min(\pi-1,T-M+1)}\}}\tau^{\min(\pi-1,T-M+1)}_{h,\pi-1}-m_t]\label{eq:tao-start-cost}\\
&\sum\nolimits_{i=1}^{N}[\widetilde{P}^i_t(\overline{P}^i-\underline{P}^i)+\underline{P}^i\sum\nolimits_{\{[h,k] \in A^i_{\min(t,T-M^i+1)},t \in [h,k]\}}\tau^{i,\min(t,T-M^i+1)}_{h,k}]\notag\\
&-P_{D,t}=0\label{eq:tao-power-balance}\\
&\sum\nolimits_{i=1}^N\overline{P}^i\sum\nolimits_{\{[h,k] \in A^i_{\min(t,T-M^i+1)},t \in [h,k]\}}\tau^{i,\min(t,T-M^i+1)}_{h,k}\ge P_{D,t}+R_t\label{eq:tao-spinning-reserve}
\end{align}
We reformulate $\widetilde{f}^i(\widetilde{P}^i_t)$ as $\widetilde{f}^i(\widetilde{P}^i_t)=\widetilde{\alpha}_i\sum\nolimits_{\{[h,k]\in A^i_m,t\in[h,k]\}}\tau^{i,m}_{h,k}+\widetilde{\beta}_i\widetilde{P}_t^i+\widetilde{\gamma}_i(\widetilde{P}_t^i)^2$, where $m=\min(t,T-M^i+1)$. A multi-period tight MIQP UC formulation, denoted as MP-1, that contains only the binary variables $\tau^{i,m}_{h,k}$ can be obtained directly.
\begin{align}
&\min \quad\sum_{i=1}^{N}\sum_{t=1}^{T}[\widetilde{f}_i(\widetilde{P}^i_t)+C^i_{hot}\sum\nolimits_{\{[t,k] \in A^i_{\max(0,t-M^i+1)}\}}\tau^{i,\max(0,t-M^i+1)}_{t,k}+\widetilde{S}^i_t]\notag\\
&s.t.\left\{
\begin{aligned}
&(\ref{eq:inequality-taotao})(\ref{eq:m-taou-m})(\ref{eq:m-taov-m})(\ref{eq:m-taow-m})(\ref{eq:tight-max-output-0})(\ref{eq:new-initial-status-0})(\ref{eq:tao-spinning-reserve})(\ref{eq:tao-power-balance})(\ref{eq:tao-start-cost})\\
&(\ref{eq:tight-ramp-up-0})\text{ for }m=T-M^i+1\text{ or }a=t-m\\
&(\ref{eq:tight-ramp-down-0})\text{ for }m=0\text{ or }t=m+M^i-1\\
&(\ref{eq:tao-min-up-time})\text{ for } i\in J^{>M^i-1},(\ref{eq:tao-min-down-time})\text{ for } i\in K^{>M^i-1}\\
&\widetilde{P}^i_t\in [0,1],\widetilde{S}^i_t\in R_+\\
&\tau^{i,m}_{h,k}\in \{0,1\},[h,k]\in A^i_m,m\in [0,T-M^i+1]
\end{aligned}
\right.
\end{align}
We should note that $M^i$ can take any value in $[2,T+1]$. The variables $u^i_t, v^i_t, w^i_t$ of the UC formulation are replaced with $\tau^{i,m}_{h,k}$ by using (\ref{eq:m-taou})-(\ref{eq:m-taow}) and adding (\ref{eq:m-taou-m})-(\ref{eq:m-taow-m}) to the model. In this way, the number of non-zero elements in the formulation increases dramatically. For any unit $i$ in $M^i$ periods, we note that, when considering only the state variable constraints, minimum up/down time constraints, and unit generation limits, our model gives the ideal formulation (the convex hull of the solutions of the single-unit commitment problem). When the ramping constraints are additionally considered, our model gives the facets of the unit generation limits and ramping constraints. In addition, compared to the DP-based model, our model can segment the UC problem based on the sliding window. For example, for a 24-period scheduling span, we can take every six periods as a group, build the multi-period model presented in this paper for each group, and then connect the groups together.
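The sliding-window segmentation can be sketched in a few lines; a minimal Python sketch (the helper names are ours, with $m\in[0,T-M+1]$ as in the formulation):

```python
def window_index(t, T, M):
    """Window whose constraints cover period t: m = min(t, T-M+1),
    as used in the tau^{m}_{h,k} summations of the multi-period model."""
    return min(t, T - M + 1)

def windows(T, M):
    """All sliding M-period windows [m, m+M-1] with m in [0, T-M+1]."""
    return [(m, m + M - 1) for m in range(T - M + 2)]
```

For a 24-period span with $M=6$ this yields the $T-M+2=20$ windows $(0,5),(1,6),\dots,(19,24)$.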
As for the size of the sliding window, it must be chosen carefully to obtain the most efficient model.
\begin{table}[t] \begin{center} \begin{minipage}{\textwidth} \caption{Number of variables and constraints.}\label{tab:variable-num} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccccccccccccccc} \toprule M & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 \\ \midrule $u_t,v_t,w_t$ & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 & 72 \\ $n_1$ & 72 & 138 & 220 & 315 & 420 & 532 & 648 & 765 & 880 & 990 & 1092 & 1183 & 1260 & 1320 & 1360 & 1377 & 1368 & 1330 & 1260 & 1155 & 1012 & 828 & 600 & 325 \\ $n_2$ & 72 & 115 & 154 & 189 & 220 & 247 & 270 & 289 & 304 & 315 & 322 & 325 & 324 & 319 & 310 & 297 & 280 & 259 & 234 & 205 & 172 & 135 & 94 & 49 \\ Upper bounds & 47 & 68 & 87 & 104 & 119 & 132 & 143 & 152 & 159 & 164 & 167 & 168 & 167 & 164 & 159 & 152 & 143 & 132 & 119 & 104 & 87 & 68 & 47 & 24 \\ Ramping constraints & 48 & 138 & 264 & 420 & 600 & 798 & 1008 & 1224 & 1440 & 1650 & 1848 & 2028 & 2184 & 2310 & 2400 & 2448 & 2448 & 2394 & 2280 & 2100 & 1848 & 1518 & 1104 & 600 \\ \botrule \end{tabular} } \end{minipage} \end{center} \end{table}
If the minimum up/down time constraints and the generator's history state are not taken into consideration in the definition of $A^i_m$, the number of variables $\tau^{i,m}_{h,k}$ is equal to $n_1=\frac{M^i(M^i+1)(T-M^i+2)}{2}$. As the value of $M^i$ increases, the number of variables first increases and then decreases. If we take the minimum up time constraints into account, the number of variables $\tau^{i,m}_{h,k}$ is equal to $n_2=\{2M^i-1+\frac{{[M^i-\underline{T}^i_{on}-1]}^+(M^i-\underline{T}^i_{on})}{2}\}(T-M^i+2)$. With $M^i$ fixed, $n_2$ reaches its minimum when $\underline{T}^i_{on}\ge M^i-1$.
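The closed forms for $n_1$ and $n_2$ can be cross-checked against Table \ref{tab:variable-num}; a minimal Python sketch (the helper names are ours; $T=24$ and $\underline{T}_{on}=M-1$ as in the table):

```python
def n1(M, T):
    """Variables tau^m_{h,k} without minimum up/down time in A_m:
    all M(M+1)/2 intervals per window, over T-M+2 windows."""
    return M * (M + 1) * (T - M + 2) // 2

def n2(M, T, T_on):
    """Variables tau^m_{h,k} with the in-window minimum up time:
    2M-1 boundary intervals plus interior intervals of length >= T_on."""
    interior = max(M - T_on - 1, 0) * (M - T_on) // 2
    return (2 * M - 1 + interior) * (T - M + 2)
```

With $\underline{T}_{on}=M-1$ the interior term vanishes, so $n_2=(2M-1)(T-M+2)$, reproducing the $n_2$ row of the table.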
The numbers of generation limit constraints and ramping constraints are equal to $M^i(T-M^i+2)-1$ and $M^i(M^i-1)(T-M^i+2)$, respectively. Here we let $T=24$ and $\underline{T}^i_{on}=M^i-1$, and list the number of variables and constraints for a single unit in Table \ref{tab:variable-num}. In order to construct a compact and tight model, we can adjust the value of $M^i$ for each unit $i$. When $M^i\ge \max(\lceil\frac{1-\widetilde{P}^i_{start}}{\widetilde{P}^i_{up}}\rceil+1,\lceil\frac{1-\widetilde{P}^i_{shut}}{\widetilde{P}^i_{down}}\rceil+1,\lceil\frac{1}{\widetilde{P}^i_{up}}\rceil,\lceil\frac{1}{\widetilde{P}^i_{down}}\rceil,2)$, we obtain the tightest generation upper bounds and ramping constraints in the global space.
\subsection{Improvements of multi-period UC formulation}\label{subsec7}
The number of nonzero elements of model MP-1 increases because all of the variables $u^i_t,v^i_t,w^i_t$ in the model are replaced with $\tau^{i,m}_{h,k}$. By adding the variables $u^i_t,v^i_t,w^i_t$, model MP-1 can be transformed to
\begin{align}
&\min \quad(\ref{eq:new-objective-function})\notag\\
&s.t.\left\{
\begin{aligned}
&(\ref{eq:spinning-reserve})(\ref{eq:logical})(\ref{eq:min-up-time})(\ref{eq:min-down-time})(\ref{eq:initial-status})(\ref{eq:new-power-balance})(\ref{eq:start-cost})(\ref{eq:m-taou})(\ref{eq:m-taov})(\ref{eq:m-taow})(\ref{eq:tight-max-output-0})\\
&(\ref{eq:tight-ramp-up-0})\text{ for } m=T-M^i+1\text{ or }a=t-m\\
&(\ref{eq:tight-ramp-down-0})\text{ for } m=0\text{ or }t=m+M^i-1\\
&u^i_t,v^i_t,w^i_t\in\{0,1\},\widetilde{P}^i_t\in [0,1],\widetilde{S}^i_t\in R_+\\
&\tau^{i,m}_{h,k}\in\{0,1\},[h,k]\in A^i_m,m\in[0,T-M^i+1]
\end{aligned}
\right.
\end{align}
which is denoted as MP-2. In general, models that are more compact and tighter are more computationally efficient. However, once the tightness of the model reaches a certain level, further tightening is of little help to the improvement of its computational efficiency.
On the other hand, too many binary variables also increase the computational burden. This formulation is not compact enough because it contains too many variables. Since the variables $u_t,v_t,w_t,\tau^m_{h,k}$ are linearly dependent, some of these variables can be represented by linear combinations of the rest. In order to reduce the number of variables without adding too many non-zero elements, $u_t,v_t,w_t$ should be preserved and some of the $\tau^m_{h,k}$ should be omitted instead. For this purpose, we represent some of the $\tau^m_{h,k}$ in terms of $u_t,v_t,w_t$ and the remaining $\tau^m_{h,k}$ in the following. We assume, without loss of generality, that $\underline{T}_{on}=1$. Firstly, $\tau^m_{h,k}\in\{\tau^m_{h,k}\vert h \in [m+1,m+M-2],k \in [h,m+M-2]\}$ can be viewed as the basic variables.
\begin{align}
&\left\{
\begin{aligned}
&\sum\nolimits_{\{t \in [h,k],h \in [m,m+M-1],k \in [h,m+M-1]\}}\tau^m_{h,k}=u_t, \quad t \in [m,m+M-1] \\
&\sum\nolimits_{\{h=t,h \in [m,m+M-1],k \in [h,m+M-1]\}}\tau^m_{h,k}=v_t, \quad t \in [m+1,m+M-1] \\
&\tau^m_{h,k}=\tau^m_{h,k}, \quad h \in [m+1,m+M-2],k \in [h,m+M-2]
\end{aligned}
\right.\notag\\
&\Longrightarrow\qquad \bm{E}\bm{x}=\bm{b}
\end{align}
where $\bm{E}$ is an $\frac{M(M+1)}{2}\times\frac{M(M+1)}{2}$ matrix, $\bm{x}=(\tau^m_{m,m},\cdots,\tau^m_{m,m+M-1},\\\tau^m_{m+1,m+M-1},\cdots,\tau^m_{m+M-1,m+M-1}, \tau^m_{m+1,m+1},\cdots,\tau^m_{m+M-2,m+M-2})$, \\$\bm{b}=(u_m,\cdots,u_{m+M-1},v_{m+1},\cdots,v_{m+M-1},\tau^m_{m+1,m+1},\cdots,\tau^m_{m+M-2,m+M-2})$. Matrix $\bm{E}$ can be partitioned into four blocks:
\begin{align}
\bm{E}=
\left[\begin{array}{c|c}
\bm{C} & \bm{B} \\
\hline
\bm{0} & \bm{I}
\end{array}\right]
\end{align}
where $\bm{I}$ is an identity matrix, and matrix $\bm{C}$ has the following structure.
$$
C=\begin{blockarray}{cccccccccc}
1 & 2 & \cdots & M-1 & M & M+1 & \cdots & 2M-2 & 2M-1 & \\
\begin{block}{(ccccccccc)c}
1 & 1 & \cdots & 1 & 1 & 0 & \cdots & 0 & 0 & 1 \\
0 & 1 & \cdots & 1 & 1 & 1 & \ddots & \vdots & \vdots & 2 \\
\vdots & 0 & \ddots & \vdots & \vdots & \vdots & \ddots & 0 & \vdots & \vdots \\
0 & \vdots & \ddots & 1 & 1 & 1 & \cdots & 1 & 0 & M-1 \\
0 & 0 & \cdots & 0 & 1 & 1 & \cdots & 1 & 1 & M \\
0 & 0 & \cdots & 0 & 0 & 1 & 0 & \cdots & 0 & M+1 \\
\vdots & \vdots & \ddots & \vdots & \vdots & 0 & \ddots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 0 & \vdots & \ddots & 1 & 0 & 2M-2 \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 1 & 2M-1 \\
\end{block}
\end{blockarray}
$$
Matrix $\bm{E}$ can easily be transformed into the identity matrix by elementary row operations, i.e., $\bm{E}$ is equivalent to the identity matrix. Hence, $\bm{E}$ is nonsingular. It is not difficult to verify that the state variables $u_m$, $\cdots$, $u_{m+M-1}$, $v_{m+1}$, $\cdots$, $v_{m+M-1}$, $\tau^m_{h,k}\in \{\tau^m_{h,k}\vert m<h\le k<m+M-1\}$ are linearly independent, and that all the new state variables $\tau^m_{h,k}$ can be expressed as linear combinations of these basic variables:
\begin{equation}
\bm{x}=\bm{E}^{-1}\bm{b}\label{eq:tao-taobase}
\end{equation}
According to the definition of $\tau^m_{h,k}$, we have the following inequalities:
\begin{align}
\bm{E}^{-1}\bm{b}\ge \bm{0}\label{eq:taom-in-0}\\
\bm{E}^{-1}\bm{b}\le \bm{1}\label{eq:taom-in-1}
\end{align}
The inequalities corresponding to the variables $\tau^m_{h,k}\in \{\tau^m_{h,k}\vert m<h\le k<m+M-1\}$ in the inequality groups (\ref{eq:taom-in-0})(\ref{eq:taom-in-1}) can be deleted. If $\underline{T}_{on}>1$, (\ref{eq:taom-in-0})(\ref{eq:taom-in-1}) still hold with $\tau^m_{h,k}=0,k\in[h,h+\underline{T}_{on}-2]$.
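The nonsingularity argument can be checked mechanically: $\bm{C}$ is upper triangular with unit diagonal, so $\det\bm{C}=1$, and the block structure of $\bm{E}$ then gives $\det\bm{E}=\det\bm{C}\cdot\det\bm{I}=1$. A minimal Python sketch constructing $\bm{C}$ (the helper names are ours):

```python
def build_C(M):
    """(2M-1) x (2M-1) block C: row i (1..M) has ones in columns i..i+M-1,
    rows M+1..2M-1 are unit rows (0-based indices below)."""
    n = 2 * M - 1
    C = [[0] * n for _ in range(n)]
    for i in range(M):                  # banded rows 1..M
        for j in range(i, i + M):
            C[i][j] = 1
    for i in range(M, n):               # unit rows M+1..2M-1
        C[i][i] = 1
    return C

def is_unit_upper_triangular(C):
    """True iff C has unit diagonal and zeros below it, hence det(C) = 1."""
    return all(
        C[i][i] == 1 and all(C[i][j] == 0 for j in range(i))
        for i in range(len(C))
    )
```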
Now, we present our multi-period tight and compact MIQP UC formulation, denoted as the M-period tight model (MP-3), in high-dimensional space:
\begin{align}
&\min \quad(\ref{eq:new-objective-function})\notag\\
&s.t.\left\{
\begin{aligned}
&(\ref{eq:spinning-reserve})(\ref{eq:logical})(\ref{eq:min-up-time})(\ref{eq:min-down-time})(\ref{eq:initial-status})(\ref{eq:new-power-balance})(\ref{eq:start-cost})(\ref{eq:tight-max-output-0})(\ref{eq:taom-in-0})(\ref{eq:taom-in-1})\\
&(\ref{eq:tight-ramp-up-0})\text{ for } m=T-M^i+1\text{ or }a=t-m\\
&(\ref{eq:tight-ramp-down-0})\text{ for } m=0\text{ or }t=m+M^i-1\\
&u^i_t,v^i_t,w^i_t\in\{0,1\},\widetilde{P}^i_t\in [0,1],\widetilde{S}^i_t\in R_+,m\in[0,T-M^i+1]\\
&\tau^{i,m}_{h,k}\in\{0,1\},[h,k]\in A^i_m, m<h\le k<m+M^i-1
\end{aligned}
\right.
\end{align}
We should note that all of the variables $\tau^{i,m}_{h,k}\in \{\tau^{i,m}_{h,k}\vert m\le h\le k\le m+M^i-1,h=m\text{ or }k=m+M^i-1\text{ or }[h,k]\notin A^i_m\}$ in this model have been eliminated by using (\ref{eq:tao-taobase}). In fact, our 3P-3 formulation is equivalent to the 3P-HD formulation; however, the 3P-HD formulation is simplified by using (\ref{eq:logical}) to make the model more compact with fewer nonzeros.
\section{History-dependent multi-period formulation}\label{sec4}
In the discussion in Section \ref{sec3}, we did not consider the influence of the initial state on the definition of $A^i_m$, the upper bound of power output, and the ramping constraints. Although the constraints obtained in this way are ideal within each M-period, they are not tight enough in the global space. Therefore, we will make some improvements to them.
\subsection{Tighter constraints based on historical status}\label{subsec8}
If unit $i$ has been offline for $-T^i_0$ periods prior to the first period of the time span, then $A^i_m:=\{[h,k]\vert h=m\ge L^i+1,k\in [\min(\max(m,L^i+\underline{T}^i_{on}),m+M^i-1),m+M^i-1],\text{ or }h\in[\max(m+1,L^i+1),m+M^i-1],k\in [\min(h+\underline{T}^i_{on}-1,m+M^i-1),m+M^i-1]\}$. With ramping limits taken into consideration, we let $U^i=\min\{T,u^i_0[\max(\lceil \frac{\max(0,\widetilde{P}^i_0-\widetilde{P}^i_{shut})}{\widetilde{P}^i_{down}}\rceil,\underline{T}^i_{on}-\underline{T}^i_0)]\}$. If unit $i$ has been online for $T^i_0$ periods prior to the first period of the time span, then $A^i_m:=\{[h,k]\vert h=m,k\in [\min(\max(m,U^i),m+M^i-1),m+M^i-1],\text{ or }h\in [\max(m+1,U^i+\underline{T}^i_{off}+1),m+M^i-1],k\in [\min(h+\underline{T}^i_{on}-1,m+M^i-1),m+M^i-1]\}$. If unit $i$ has been online prior to the first period of the time span, there must be exactly one continuous operating interval $[h,k]$ in each M-period $m$ with $m\le\min(U^i,T-M^i+1)$. Then we have
\begin{equation}
\sum\nolimits_{[m,k]\in A_m}\tau^m_{m,k}=1,\qquad m\in[0,\min(U,T-M+1)]\label{eq:new-initial-status-1}
\end{equation}
In the following discussion of this section, we adopt this new definition of $A^i_m$. Similar to Theorem \ref{binary-integer-0}, we can derive the following theorem.
\begin{theorem}\label{binary-integer-1} For any M-period $m$ we define the polyhedron $\widetilde{\mathcal{B}}^R_m=\{\\\prod_{[h,k]\in A_m}\tau^m_{h,k}\in [0,1]^{\lvert A_m\rvert}:(\ref{eq:inequality-taotao})(\ref{eq:new-initial-status-1})\}$. Let $\widetilde{\mathcal{B}}^I_m=\widetilde{\mathcal{B}}^R_m\cap\{\prod_{[h,k]\in A_m}\tau^m_{h,k}\in\{0,1\}^{\lvert A_m\rvert}\}$. Then $\widetilde{\mathcal{B}}^R_m=\textup{conv}(\widetilde{\mathcal{B}}^I_m)$.
\end{theorem}
\begin{proof}[\textbf{Proof}] If $m\in[0,\min(U,T-M+1)]$, we get $\sum\nolimits_{[m,k]\in A_m}\tau^m_{m,k}\le 1$ and \\$-\sum\nolimits_{[m,k]\in A_m}\tau^m_{m,k}\le -1$ from (\ref{eq:new-initial-status-1}). We let $\bm{B_m}$ be the coefficient matrix of the inequalities (\ref{eq:inequality-taotao}), and $\bm{D_m}$ be the coefficient matrix of the inequalities obtained from (\ref{eq:new-initial-status-1}).
$$\bm{E_m}=\begin{bmatrix} \bm{B_m} \\ \bm{D_m} \\ \end{bmatrix} \textup{can be obtained by a series of pivot operations on} \begin{bmatrix} \bm{B_m} \\ \bm{0} \\ \end{bmatrix}$$
We have shown that $\bm{B_m}$ is totally unimodular in Theorem \ref{binary-integer-0}. According to Proposition 2.1 of §$\uppercase\expandafter{\romannumeral3}.1.2$ in \cite{wolsey1988integer}, $\bm{E_m}$ is totally unimodular. Let $\bm{d'}=\bm{0}$, $\bm{d}=\bm{1}$, $\bm{b'}=(\bm{0},\bm{-1})$, $\bm{b}=(\bm{1},\bm{-1})$; then we know that $\widetilde{\mathcal{B}}^R_m$ is an integral polyhedron according to Proposition 2.3 of §$\uppercase\expandafter{\romannumeral3}.1.2$ in \cite{wolsey1988integer}.
\end{proof}
We define $\mathcal{D}:=\{i\vert u_0^i=0\}$, $\mathcal{U}:=\{i\vert u_0^i=1\}$, and $\mathcal{V}:=\{i\vert U^i+\underline{T}^i_{off}<m\}$. The theorems and propositions in Section \ref{sec3} still hold with the new definition of $A^i_m$ for units $i\in \mathcal{D}\cup(\mathcal{U}\cap\mathcal{V})$ in any M-period $m$; hence, we do not repeat them here. For units $i\in (\mathbb{N}-(\mathcal{D}\cup(\mathcal{U}\cap\mathcal{V})))$, we can further tighten (\ref{eq:tight-max-output-0})-(\ref{eq:tight-ramp-down-0}) and the lower bound of the generation limits by taking $\widetilde{P}_0$ into account.
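For illustration, the history-dependent generation bounds for a unit that is online at the start of the horizon can be sketched numerically; the helper names and normalized sample values below are our own illustration:

```python
def lb_power(P0, P_down, t):
    """Lower bound on P_t while the initial on-run continues: starting from
    P_0, the unit can ramp down by at most P_down per period."""
    return max(0.0, P0 - t * P_down)

def ub_power_initial_run(P0, P_up, P_shut, P_down, t, k, window_end):
    """Upper bound on P_t when the initial run lasts until period k
    (tau^m_{m,k} = 1): limited by the ramp-up path from P_0 and, if the
    shut-down at k+1 is visible inside the window (k < window_end),
    by the ramp-down to P_shut."""
    terms = [1.0, P0 + t * P_up]
    if k < window_end:
        terms.append(P_shut + (k - t) * P_down)
    return min(terms)
```

With, say, $\widetilde{P}_0=0.5$ and $\widetilde{P}_{down}=0.25$, the lower bound decays to $0$ after two periods, matching the intuition that the initial state only constrains the first few periods.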
\begin{table}[t] \begin{center} \begin{minipage}{\textwidth} \caption{Illustration for upper bound of generation limits with historical status.}\label{tab:max-output-1} \resizebox{\textwidth}{!}{ \begin{tabular}{cc} \toprule $\tau_{h,k}=1$ & $\textup{UB}(\widetilde{P}_t)$ \\ \midrule $m=h\le t\le k<m+M-1$ & $\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$\\ $m=h\le t\le k=m+M-1$ & $\min(1,\widetilde{P}_0+t\widetilde{P}_{up})$\\ $m<h\le t\le k<m+M-1$ & $\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$\\ $m<h\le t\le k=m+M-1$ & $\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$\\ \botrule \end{tabular} } \end{minipage} \end{center} \end{table}
Taking the historical status into consideration, the lower bound of the generation limits for the unit in period $t$ is:
\begin{equation}
\textup{LB}(\widetilde{P}_t)=\sum\nolimits_{\{[m,k] \in A_m,t\le k\}}\tau^m_{m,k}\max(0,\widetilde{P}_0-t\widetilde{P}_{down})
\end{equation}
Then, we obtain the following inequalities:
\begin{align}
\widetilde{P}_t\ge\sum\nolimits_{\{[m,k] \in A_m,t\le k\}}\tau^m_{m,k}\max(0,\widetilde{P}_0-t\widetilde{P}_{down}),\quad t\in[m,m+M-1]\label{eq:tight-min-output-1}
\end{align}
As illustrated in Table \ref{tab:max-output-1}, we get tighter inequalities for the upper bound of the generation limits.
\begin{align} \widetilde{P}_t\le &\sum\nolimits_{\{[h,m+M-1] \in A_m,m<h\le t\}}\tau^m_{h,m+M-1}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,m<h\le t\le k<m+M-1\}}\tau^m_{h,k}\times\notag\\ &\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[m,k] \in A_m,t\le k<m+M-1\}}\tau^m_{m,k}\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[m,m+M-1]\in A_m\}}\tau^m_{m,m+M-1}\min(1,\widetilde{P}_0+t\widetilde{P}_{up}),\notag\\ &t\in[m,m+M-1]\label{eq:tight-max-output-1} \end{align} Similarly, we obtain tighter inequalities for ramping constraints according to Table \ref{tab:tight-ramp-1}. \begin{align} \widetilde{P}_t-\widetilde{P}_{t-a}\le &\sum\nolimits_{\{[h,m+M-1]\in A_m,t-a<h\le t\}}\tau^m_{h,m+M-1}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,t-a<h\le t\le k<m+M-1\}}\tau^m_{h,k}\times\notag\\ &\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,m<h\le t-a<t\le k<m+M-1\}}\tau^m_{h,k}\times\notag\\ &\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,m+M-1] \in A_m,m<h\le t-a\}}\tau^m_{h,m+M-1}\min(1,a\widetilde{P}_{up})\notag\\ &+\sum\nolimits_{\{[m,k] \in A_m,t\le k<m+M-1\}}\tau^m_{m,k}\min\{a\widetilde{P}_{up},1-\max[0,\widetilde{P}_0-\notag\\ &(t-a)\widetilde{P}_{down}],\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}\notag\\ &+\sum\nolimits_{\{[m,m+M-1] \in A_m\}}\tau^m_{m,m+M-1}\times\notag\\ &\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up}\}\notag\\ &-\sum\nolimits_{\{[m,k] \in A_m,t-a\le k<t\}}\tau^m_{m,k}\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],\notag\\ 
&t\in[m+1,m+M-1],a\in[1,t-m]\label{eq:tight-ramp-up-1}\\ \widetilde{P}_{t-a}-\widetilde{P}_t\le &\sum\nolimits_{\{[h,k] \in A_m,m<h\le t-a<t\le k\}}\tau^m_{h,k}\times\notag\\ &\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[h,k] \in A_m,m<h\le t-a\le k<t\}}\tau^m_{h,k}\times\notag\\ &\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[m,k] \in A_m,t\le k\}}\tau^m_{m,k}\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]\notag\\ &+\sum\nolimits_{\{[m,k] \in A_m,t-a\le k<t\}}\tau^m_{m,k}\times\notag\\ &\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}],\notag\\ &t\in[m+1,m+M-1],a\in[1,t-m]\label{eq:tight-ramp-down-1} \end{align} \begin{table}[t] \begin{center} \begin{minipage}{\textwidth} \caption{Illustration for upper bound of ramping limits with historical status.}\label{tab:tight-ramp-1} \resizebox{\textwidth}{!}{ \begin{tabular}{cc} \toprule $\tau_{h,k}=1$ & $\textup{UB}(\widetilde{P}_t-\widetilde{P}_{t-a})$ \\ \midrule $m=h\le t-a<t\le k<m+M-1$ & $\min[1-\textup{LB}(\widetilde{P}_{t-a}),a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}-\textup{LB}(\widetilde{P}_{t-a})]$ \\ $m=h\le t-a<t\le k=m+M-1$ & $\min[1-\textup{LB}(\widetilde{P}_{t-a}),a\widetilde{P}_{up}]$ \\ $m=h\le t-a\le k<t$ & $\textup{UB}(\widetilde{P}_t)-\textup{LB}(\widetilde{P}_{t-a})$ \\ $m<h\le t-a<t\le k<m+M-1$ & $\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$ \\ $m<h\le t-a<t\le k=m+M-1$ & $\min(1,a\widetilde{P}_{up})$ \\ $t-a<h\le t\le k<m+M-1$ & $\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]-\textup{LB}(\widetilde{P}_{t-a})$ \\ $t-a<h\le t\le k=m+M-1$ & $\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]-\textup{LB}(\widetilde{P}_{t-a})$ \\ \botrule \end{tabular} } \resizebox{\textwidth}{!}{ 
\begin{tabular}{cc} \toprule $\tau_{h,k}=1$ & $\textup{UB}(\widetilde{P}_{t-a}-\widetilde{P}_t)$ \\ \midrule $m=h\le t-a<t\le k$ & $\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]$\\ $m=h\le t-a\le k<t$ & $\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$\\ $m<h\le t-a<t\le k$ & $\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$\\ $m<h\le t-a\le k<t$ & $\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$\\ \botrule \end{tabular} } \end{minipage} \end{center} \end{table}
Next, we discuss the tightness of (\ref{eq:tight-min-output-1})-(\ref{eq:tight-ramp-down-1}). We let $\widetilde{\mathcal{P}}^R_m:=\{(\prod_{[h,k]\in A_m}\tau^m_{h,k},\widetilde{P}_{\max(1,m)},\cdots,\widetilde{P}_{m+M-1})\in {[0,1]}^{\lvert A_m\rvert+M+\min(0,m-1)}:(\ref{eq:inequality-taotao})(\ref{eq:new-initial-status-1})(\ref{eq:tight-min-output-1})(\ref{eq:tight-max-output-1})\}$. Let $\widetilde{\mathcal{P}}^I_m:=\widetilde{\mathcal{P}}^R_m\cap\{(\prod_{[h,k]\in A_m}\tau^m_{h,k},\widetilde{P}_{\max(1,m)},\cdots,\widetilde{P}_{m+M-1})\in\{0,1\}^{\lvert A_m\rvert }\times[0,1]^{M+\min(0,m-1)}\}$.
\begin{proposition}\label{prp5} Inequality (\ref{eq:tight-min-output-1}) defines a facet of $\textup{conv}(\widetilde{\mathcal{P}}^I_m)$. \end{proposition}
\begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof}
\begin{proposition}\label{prp6} Inequality (\ref{eq:tight-max-output-1}) defines a facet of $\textup{conv}(\widetilde{\mathcal{P}}^I_m)$. \end{proposition}
\begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof}
\begin{theorem} $\widetilde{\mathcal{P}}^R_m=\textup{conv}(\widetilde{\mathcal{P}}^I_m)$. \end{theorem}
\begin{proof}[\textbf{Proof}] We have proved in Theorem \ref{binary-integer-1} that $\widetilde{\mathcal{B}}^R_m$ is an integral polyhedron.
According to Lemma 4 in \cite{gentile2017tight}, it is not difficult to prove $\widetilde{\mathcal{P}}^R_m=\textup{conv}(\widetilde{\mathcal{P}}^I_m)$.
\end{proof}
We refer to the feasible set defined by constraints (\ref{eq:inequality-taotao})(\ref{eq:new-initial-status-1})(\ref{eq:tight-min-output-1})-(\ref{eq:tight-ramp-down-1}) as $\widetilde{\mathcal{Q}}^R_m$, i.e. $\widetilde{\mathcal{Q}}^R_m:=\{(\prod_{[h,k]\in A_m}\tau^m_{h,k},\widetilde{P}_{\max(1,m)},\cdots,\widetilde{P}_{m+M-1})\in{[0,1]}^{\lvert A_m\rvert+M+\min(0,m-1)}:(\ref{eq:inequality-taotao})(\ref{eq:new-initial-status-1})(\ref{eq:tight-min-output-1})-(\ref{eq:tight-ramp-down-1})\}$. Let $\widetilde{\mathcal{Q}}^I_m:=\widetilde{\mathcal{Q}}^R_m\cap\{(\prod_{[h,k]\in A_m}\tau^m_{h,k},\widetilde{P}_{\max(1,m)},\cdots,\widetilde{P}_{m+M-1})\in\{0,1\}^{\lvert A_m\rvert }\times[0,1]^{M+\min(0,m-1)}\}$ and $K=\min\{T,u_0[\max(\max(0,\lfloor \frac{\widetilde{P}_0-\widetilde{P}_{shut}}{\widetilde{P}_{down}}\rfloor+1),\underline{T}_{on}-\underline{T}_0)]\}$.
\begin{proposition}\label{prp7} Inequality (\ref{eq:tight-min-output-1}) defines a facet of $\textup{conv}(\widetilde{\mathcal{Q}}^I_m)$ if and only if one of the following conditions holds:
\begin{enumerate}
\item $t=\max(1,m)$.
\item $\min(K,\frac{\widetilde{P}_0}{\widetilde{P}_{down}})<t$.
\end{enumerate}
\end{proposition}
\begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof}
\begin{proposition}\label{prp8} Inequality (\ref{eq:tight-max-output-1}) defines a facet of $\textup{conv}(\widetilde{\mathcal{Q}}^I_m)$ if and only if one of the following conditions holds:
\begin{enumerate}
\item $t=\max(1,m)$.
\item $\min(K,\frac{1-\widetilde{P}_0}{\widetilde{P}_{up}})<t$.
\item $K<m+M-1,\max(\frac{\widetilde{P}_{shut}-\widetilde{P}_0}{\widetilde{P}_{up}},\frac{K\widetilde{P}_{down}+\widetilde{P}_{shut}-\widetilde{P}_0}{\widetilde{P}_{up}+\widetilde{P}_{down}})<t<m+M-1$.
\end{enumerate} \end{proposition} \begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof} \begin{proposition}\label{prp9} We say that condition $\mathcal{M}_1$ holds if one of the following conditions holds: \begin{enumerate} \item $a=1$. \item $t>K$. \item $\min(\frac{1}{\widetilde{P}_{up}},\frac{1-\widetilde{P}_0+t\widetilde{P}_{down}}{\widetilde{P}_{up}+\widetilde{P}_{down}})<a$. \item $\max(t,K)<m+M-1,\min\{\frac{\widetilde{P}_{shut}+[K-t]^+ \widetilde{P}_{down}}{\widetilde{P}_{up}},\frac{\widetilde{P}_{shut}+\max(t,K)\widetilde{P}_{down}-\widetilde{P}_0}{\widetilde{P}_{up}+\widetilde{P}_{down}}\}<a$. \item $\min(\underline{T}_{on},\frac{1-\widetilde{P}_{start}}{\widetilde{P}_{up}}+1)<a\le t-U-\underline{T}_{off}$. \item $t<m+M-1,\max[\frac{\widetilde{P}_{shut}-\widetilde{P}_{start}}{\widetilde{P}_{up}},\frac{(\underline{T}_{on}-1)\widetilde{P}_{down}+\widetilde{P}_{shut}-\widetilde{P}_{start}}{\widetilde{P}_{up}+\widetilde{P}_{down}},t+\underline{T}_{on}-m-M]+1<a\le t-U-\underline{T}_{off}$. \end{enumerate} We say that condition $\mathcal{M}_2$ holds if one of the following conditions holds: \begin{enumerate} \item $t-a=0$. \item $a<\min(\frac{1}{\widetilde{P}_{up}},\frac{1-\widetilde{P}_0+t\widetilde{P}_{down}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$. \item $a<\min(\frac{1}{\widetilde{P}_{up}},t-U-\underline{T}_{off})$. \end{enumerate} Inequality (\ref{eq:tight-ramp-up-1}) defines a facet of $\textup{conv}(\widetilde{\mathcal{Q}}^I_m)$ if and only if conditions $\mathcal{M}_1$ and $\mathcal{M}_2$ both hold. \end{proposition} \begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof} \begin{proposition}\label{prp10} We say that condition $\mathcal{M}_3$ holds if one of the following conditions holds: \begin{enumerate} \item $a=1$. \item $a\le t-U-\underline{T}_{off}$. \item $\min(\frac{1}{\widetilde{P}_{down}},\frac{\widetilde{P}_0+t\widetilde{P}_{up}}{\widetilde{P}_{up}+\widetilde{P}_{down}})<a$.
\item $U<t,\min(\frac{1-\widetilde{P}_{shut}}{\widetilde{P}_{down}}+1,\frac{\widetilde{P}_0+t\widetilde{P}_{up}+\widetilde{P}_{down}-\widetilde{P}_{shut}}{\widetilde{P}_{up}+\widetilde{P}_{down}})<a$. \end{enumerate} We say that condition $\mathcal{M}_4$ holds if one of the following conditions holds: \begin{enumerate} \item $t-a=0$. \item $a<\min(\frac{1}{\widetilde{P}_{down}},\frac{\widetilde{P}_0+t\widetilde{P}_{up}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$. \item $a<\min(\frac{1}{\widetilde{P}_{down}},t-U-\underline{T}_{off},\frac{\widetilde{P}_{start}+(t-U-\underline{T}_{off}-1)\widetilde{P}_{up}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$. \end{enumerate} Inequality (\ref{eq:tight-ramp-down-1}) defines a facet of $\textup{conv}(\widetilde{\mathcal{Q}}^I_m)$ if and only if conditions $\mathcal{M}_3$ and $\mathcal{M}_4$ both hold. \end{proposition} \begin{proof}[\textbf{Proof}] See ``Appendix 2". \end{proof} Then, model MP-3 can be transformed into a history-dependent multi-period tight MIQP UC formulation that takes the generator's history into account, denoted as the M-period tight model (MP-Ti): \begin{align} &\min \quad(\ref{eq:new-objective-function})\notag\\ &s.t.\left\{ \begin{aligned} &(\ref{eq:spinning-reserve})(\ref{eq:logical})(\ref{eq:min-up-time})(\ref{eq:min-down-time})(\ref{eq:initial-status})(\ref{eq:new-power-balance})(\ref{eq:start-cost})(\ref{eq:taom-in-0})(\ref{eq:taom-in-1})\\ &(\ref{eq:tight-min-output-1})\text{ for } i\in (\mathbb{N}-(\mathcal{D}\cup(\mathcal{U}\cap\mathcal{V})))\\ &(\ref{eq:tight-max-output-0})\text{ for } i\in \mathcal{D}\cup(\mathcal{U}\cap\mathcal{V}),(\ref{eq:tight-max-output-1})\text{ for } i\in (\mathbb{N}-(\mathcal{D}\cup(\mathcal{U}\cap\mathcal{V})))\\ &(\ref{eq:tight-ramp-up-0})\text{ for } i\in \mathcal{D}\cup(\mathcal{U}\cap\mathcal{V}),m=T-M^i+1\text{ or }a=t-m\\ &(\ref{eq:tight-ramp-down-0})\text{ for } i\in \mathcal{D}\cup(\mathcal{U}\cap\mathcal{V}),m=0\text{ or }t=m+M^i-1\\ &(\ref{eq:tight-ramp-up-1})\text{ for }
i\in (\mathbb{N}-(\mathcal{D}\cup(\mathcal{U}\cap\mathcal{V}))),m=T-M^i+1\text{ or }a=t-m\\ &(\ref{eq:tight-ramp-down-1})\text{ for } i\in (\mathbb{N}-(\mathcal{D}\cup(\mathcal{U}\cap\mathcal{V}))),m=0\text{ or }t=m+M^i-1\\ &u^i_t,v^i_t,w^i_t\in\{0,1\},\widetilde{P}^i_t\in [0,1],\widetilde{S}^i_t\in R_+,m\in[0,T-M^i+1]\\ &\tau^{i,m}_{h,k}\in\{0,1\},[h,k]\in A^i_m, m<h\le k<m+M^i-1 \end{aligned} \right. \end{align} \subsection{MILP approximations}\label{subsec9} Since MILPs are generally easier to solve than MIQPs, a popular approach is to approximate the MIQP as an MILP and then solve the UC problem with an MILP solver. Let $L$ be a given parameter and set $p^i_l=\underline{P}^i+l(\overline{P}^i-\underline{P}^i)/L$ for $l=0,1,2,\cdots,L$. For 2P and 3P, we replace $\gamma_i(P^i_t)^2$ in the objective function with a corresponding new variable $z^i_t$ and add the following linear constraints to the formulation: \begin{equation} z^i_t\ge 2\gamma_ip^i_lP^i_t-\gamma_i(p^i_l)^2 \end{equation} This yields the MILP UC models that approximate the original MIQP models. Similarly, let $\widetilde{p}^i_l=l/L$. Replacing $\widetilde{\gamma_i}(\widetilde{P}^i_t)^2$ in the 3P-HD and MP models with $z^i_t$ and adding the following constraints: \begin{equation} z^i_t\ge 2\widetilde{\gamma_i}\widetilde{p}^i_l\widetilde{P}^i_t-\widetilde{\gamma_i}(\widetilde{p}^i_l)^2 \end{equation} yields the MILP approximations of the 3P-HD and MP models. \section{Numerical results and analysis}\label{sec5} In this section, two data sets are used to test the tightness and computational performance of the new formulations proposed in Section \ref{sec3} and Section \ref{sec4}, compared with the state-of-the-art 2-period and 3-period formulations. The first data set stems from the synthetic instances of \cite{ostrowski2012tight}, which are replicated data from \cite{carrion2006computationally}.
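The tangent-cut linearization of the quadratic generation cost in Section \ref{subsec9} can be sketched numerically. The following Python fragment (an illustration only; the values of \texttt{gamma} and \texttt{L} are arbitrary examples, not tied to any instance or solver model) builds the $L+1$ cuts $z\ge 2\gamma p_l P-\gamma p_l^2$ on the normalized range $[0,1]$ and checks that their maximum under-estimates $\gamma P^2$:

```python
import numpy as np

def tangent_cuts(gamma, L):
    """Tangent cuts z >= 2*gamma*p_l*P - gamma*p_l**2 taken at the
    breakpoints p_l = l/L of the normalized output range [0, 1].
    Returns (slopes, intercepts) so that each cut reads z >= s*P + c."""
    p = np.arange(L + 1) / L
    return 2.0 * gamma * p, -gamma * p ** 2

def linearized_cost(gamma, L, P):
    """Value of z forced by the cuts at output P: the tightest
    (largest) tangent value, which under-estimates gamma*P**2."""
    slopes, intercepts = tangent_cuts(gamma, L)
    return float(np.max(slopes * P + intercepts))

# The approximation is exact at every breakpoint and stays below the
# quadratic in between (worst-case error gamma/(4*L**2) at midpoints).
gamma, L = 3.0, 4
for P in np.linspace(0.0, 1.0, 101):
    assert linearized_cost(gamma, L, P) <= gamma * P ** 2 + 1e-12
```

Since each cut is the tangent of the convex function $\gamma P^2$ at $p_l$, the pointwise maximum is a piecewise-linear under-estimator, which is why minimizing over $z$ with these cuts approximates the MIQP objective from below.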
The second data set, which includes 5 different instances each for the 10-, 20-, 50-, 75-, 100- and 150-unit systems and 12 different instances for the 200-unit system, is published at \url{http://groups.di.unipi.it/optimize/Data/UC.html}. In total, seventy-three realistic instances with 10-1080 units running for a time span of 24 hours are used in our experiments. The machine on which we perform all of our computations is a desktop with an Intel i7-8700K 3.7 GHz CPU and 8 GB of RAM, running MS-Windows 10 (64-bit) and MATLAB 2016b. We use MATLAB to call GUROBI 9.1.1 to solve the MILP problems. The time limit for the solver is set to 3600 seconds. All the codes and instances for the simulations in this article are available from \url{https://github.com/linfengYang/multi-period-UC-model}. For the MP-1 (MP-2, MP-3, MP-Ti) formulations, we choose four different sizes of sliding windows to construct the models, where $M$ equals 2, 3, $H^i=\max(\lceil\frac{1-\widetilde{P}^i_{start}}{\widetilde{P}^i_{up}}\rceil+1,\lceil\frac{1-\widetilde{P}^i_{shut}}{\widetilde{P}^i_{down}}\rceil+1,\lceil\frac{1}{\widetilde{P}^i_{up}}\rceil,\lceil\frac{1}{\widetilde{P}^i_{down}}\rceil,2)$ and $T+1$, respectively; the resulting models are named 2P-1 (2P-2, 2P-3, 2P-Ti), 3P-1 (3P-2, 3P-3, 3P-Ti), HP-1 (HP-2, HP-3, HP-Ti) and TP-1 (TP-2, TP-3, TP-Ti), respectively. We made a series of improvements to the computational efficiency of our models. First, we eliminate some redundant inequalities by using Propositions \ref{prp3}-\ref{prp4}. ``MP-1-all" denotes the models that retain all proposed inequalities, while ``MP-1" denotes the models that exclude the redundant inequalities. In order to reflect the difference in computational efficiency of each model, we introduce the relative time ``rTime", defined as $(\textup{time}-\textup{reftime})/\textup{reftime}$. In this expression, ``time" represents the solving time of each model, and ``reftime" represents the solving time of the reference model.
Here, we choose the 3P model as the reference model. We compare the MP-1-all and MP-1 models in terms of rTime in Fig. \ref{fig:facet-runtime}. There is no obvious difference between the MP-1-all and MP-1 models for the 2-period, 3-period and H-period cases. However, the TP-1 model generally performs better than the TP-1-all model. \begin{figure} \caption{Comparison of MP-1, MP-1-all and the other three state-of-the-art MILP formulations in terms of rTime.} \label{fig:facet-runtime} \end{figure} We adopt performance profiles to show the difference in computational efficiency of each model more clearly \cite{dolan2002benchmarking}. We define $t_{p,s}$ as the computing time required to solve problem $p$ by model $s$, and \begin{equation} \rho_s(\tau)=\frac{1}{n_p}\textup{size}\{p\in\mathscr{P}:\frac{t_{p,s}}{\min\{t_{p,s}:s\in\mathscr{S}\}}\le\tau\} \end{equation} where $\mathscr{S}$ is the set of models, $\mathscr{P}$ is the test set, and $n_p$ is the total number of problems. As shown in Fig. \ref{fig:facet-profile}, model MP-1 performs excellently for the T-period case. \begin{figure} \caption{Performance profiles on CPU time for MP-1 and MP-1-all formulations.} \label{fig:facet-profile} \end{figure} We define ``redu\_con" as the percentage decrease in the number of constraints of model MP-1 compared to model MP-1-all. As shown in Fig. \ref{fig:facet-constraints}, compared with the MP-1-all model, the number of constraints in the MP-1 model decreases most significantly for the T-period case. \begin{figure} \caption{The redu\_con of four MP-1 formulations.} \label{fig:facet-constraints} \end{figure} When we used the TP-1-all model to solve the continuous relaxations of the MIP problems with more than 900 units, the solving process was frequently interrupted due to insufficient memory, and the solutions were finally obtained only after several attempts.
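The performance-profile measure $\rho_s(\tau)$ defined above can be computed directly from a matrix of solve times $t_{p,s}$. A minimal sketch (the times below are made-up illustrative values, not our measured results):

```python
import numpy as np

def performance_profile(times, taus):
    """rho_s(tau): fraction of problems p whose performance ratio
    t_{p,s} / min over models of t_{p,s'} is at most tau, per model s.
    `times` is an (n_problems, n_models) array of solve times."""
    t = np.asarray(times, dtype=float)
    ratios = t / t.min(axis=1, keepdims=True)   # ratio to best model per problem
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Illustrative times (seconds) for 3 problems and 2 models.
times = [[1.0, 2.0],
         [3.0, 1.5],
         [2.0, 2.0]]
rho = performance_profile(times, taus=[1.0, 2.0])
# rho[0]: share of problems each model solves fastest (ties count for both);
# rho[1]: share of problems solved within a factor 2 of the best model.
```

Plotting $\rho_s(\tau)$ against $\tau$ for each model produces curves like those in the performance-profile figures: higher curves indicate more robustly fast models.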
To simplify the models, for the lower/upper bounds of the generation limits and the ramping constraints, we select only the inequalities that satisfy the conditions of Propositions \ref{prp3}-\ref{prp4} for models MP-1, MP-2 and MP-3, and only the inequalities that satisfy the conditions of Propositions \ref{prp7}-\ref{prp10} for models MP-Ti. Next, considering the compactness of the models, we improved the MP-1 models with respect to binary variables and obtained two further groups of MP models, MP-2 and MP-3. As shown in Fig. \ref{fig:Variable-runtime} and Fig. \ref{fig:Variable-performance-profile}, the MP-3 models always perform best. \begin{figure} \caption{Comparison of MP-1, MP-2, MP-3 and the other three state-of-the-art MILP formulations in terms of rTime.} \label{fig:Variable-runtime} \end{figure} \begin{figure} \caption{Performance profiles on CPU time for MP-1, MP-2 and MP-3 formulations.} \label{fig:Variable-performance-profile} \end{figure} Finally, we took the historical status into consideration in order to make the models tighter and obtained MP-Ti. We compare the compactness of the MP-3 and MP-Ti MILP formulations in Table \ref{tab:HistoryCompactness1} and Table \ref{tab:HistoryCompactness2}. The columns ``pre\_cons", ``pre\_vars" and ``pre\_nozs" represent the numbers of constraints, variables and nonzeros, respectively, for the corresponding problem after being presolved by Gurobi. The columns ``redu\_con", ``redu\_var" and ``redu\_noz" represent the percentage decreases in the numbers of constraints, variables and nonzeros, respectively, of MP-Ti compared to MP-3. As seen in Table \ref{tab:HistoryCompactness1} and Table \ref{tab:HistoryCompactness2}, there is little difference in the numbers of constraints and variables between MP-3 and MP-Ti. The number of nonzeros in the TP-Ti model, however, increases greatly compared with the TP-3 model.
\begin{sidewaystable} \begin{center} \begin{minipage}{\textheight} \caption{Comparison of MP-3 and MP-Ti MILP formulations in terms of compactness for the first data set.}\label{tab:HistoryCompactness1} \resizebox{\textheight}{!}{ \begin{tabular}{@{\extracolsep{\fill}}lccccccccccccccccccccccccc@{\extracolsep{\fill}}} \toprule &&\multicolumn{6}{@{}c@{}}{2P-Ti}&\multicolumn{6}{@{}c@{}}{3P-Ti}&\multicolumn{6}{@{}c@{}}{HP-Ti}&\multicolumn{6}{@{}c@{}}{TP-Ti}\\ \cmidrule(lr){3-8}\cmidrule(lr){9-14}\cmidrule(lr){15-20}\cmidrule(lr){21-26} Case & Units & pre\_cons & redu\_con & pre\_vars & redu\_var & pre\_nozs & redu\_noz & pre\_cons & redu\_con & pre\_vars & redu\_var & pre\_nozs & redu\_noz & pre\_cons & redu\_con & pre\_vars & redu\_var & pre\_nozs & redu\_noz & pre\_cons & redu\_con & pre\_vars & redu\_var & pre\_nozs & redu\_noz \\ & & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) \\ \midrule 1 & 28 & 7213 & 0.00 & 3378 & 0.00 & 48395 & 0.00 & 7749 & 0.09 & 3378 & 0.00 & 51491 & 0.05 & 7773 & 0.10 & 3378 & 0.00 & 51686 & 0.07 & 7023 & 0.10 & 4689 & 0.00 & 43711 & -1.05 \\ 2 & 35 & 9153 & 0.00 & 4199 & 0.00 & 62161 & 0.00 & 10146 & 0.39 & 4219 & 0.00 & 68071 & 0.23 & 10205 & 0.45 & 4199 & 0.00 & 68933 & 0.29 & 9074 & 0.44 & 5590 & 0.00 & 55975 & -0.90 \\ 3 & 44 & 11317 & 0.00 & 5352 & 0.00 & 73226 & 0.00 & 12499 & 0.26 & 5412 & 0.00 & 79896 & 0.16 & 12418 & 0.31 & 5352 & 0.00 & 80219 & 0.21 & 11330 & 0.29 & 7924 & 0.00 & 73420 & -0.78 \\ 4 & 45 & 11216 & 0.00 & 5554 & 0.00 & 70109 & 0.00 & 12183 & 0.23 & 5614 & 0.00 & 75410 & 0.15 & 12091 & 0.26 & 5554 & 0.00 & 75618 & 0.18 & 11309 & 0.25 & 8956 & 0.00 & 69607 & -0.56 \\ 5 & 49 & 12605 & 0.00 & 5958 & 0.00 & 81200 & 0.00 & 13855 & 0.38 & 5978 & 0.00 & 88834 & 0.24 & 13959 & 0.44 & 5958 & 0.00 & 90060 & 0.29 & 12686 & 0.42 & 8793 & 0.00 & 81600 & -0.64 \\ 6 & 50 & 12406 & 0.00 & 6215 & 0.00 & 74658 & 0.00 & 13740 & 0.44 & 6315 & 0.00 & 82175 & 0.30 & 13626 & 
0.51 & 6215 & 0.00 & 82882 & 0.37 & 12847 & 0.47 & 10544 & 0.00 & 79070 & -0.24 \\ 7 & 51 & 12813 & 0.00 & 6262 & 0.00 & 81373 & 0.00 & 13919 & 0.09 & 6342 & 0.00 & 87179 & 0.06 & 13724 & 0.11 & 6262 & 0.00 & 86823 & 0.07 & 12781 & 0.10 & 9854 & 0.00 & 81959 & -0.76 \\ 8 & 51 & 12854 & 0.00 & 6293 & 0.00 & 79658 & 0.00 & 14212 & 0.35 & 6433 & 0.00 & 78675 & 0.25 & 14148 & 0.41 & 6293 & 0.00 & 88143 & 0.28 & 13192 & 0.38 & 10246 & 0.00 & 85132 & -0.37 \\ 9 & 52 & 13296 & 0.00 & 6354 & 0.00 & 84563 & 0.00 & 14872 & 0.40 & 6454 & 0.00 & 93478 & 0.25 & 14749 & 0.46 & 6354 & 0.00 & 94121 & 0.31 & 13530 & 0.43 & 9760 & 0.00 & 86867 & -0.58 \\ 10 & 54 & 13485 & 0.00 & 6693 & 0.00 & 81980 & 0.00 & 14918 & 0.29 & 6813 & 0.00 & 89800 & 0.20 & 14701 & 0.35 & 6693 & 0.00 & 89915 & 0.24 & 13837 & 0.32 & 11202 & 0.00 & 84870 & -0.46 \\ 11 & 132 & 33269 & 0.00 & 16194 & 0.00 & 213763 & 0.00 & 36893 & 0.22 & 16514 & 0.00 & 232707 & 0.14 & 36189 & 0.26 & 16194 & 0.00 & 232026 & 0.18 & 33479 & 0.25 & 25173 & 0.00 & 198776 & -0.29 \\ 12 & 156 & 39199 & 0.00 & 19174 & 0.00 & 247908 & 0.00 & 43093 & 0.24 & 19434 & 0.00 & 269364 & 0.16 & 42664 & 0.29 & 19174 & 0.00 & 269915 & 0.19 & 39602 & 0.26 & 30362 & 0.00 & 233730 & -0.21 \\ 13 & 156 & 39423 & 0.00 & 19162 & 0.00 & 249182 & 0.00 & 43794 & 0.32 & 19462 & 0.00 & 273556 & 0.21 & 43355 & 0.38 & 19162 & 0.00 & 274711 & 0.26 & 40083 & 0.35 & 30183 & 0.00 & 237059 & -0.31 \\ 14 & 160 & 40802 & 0.00 & 19537 & 0.00 & 261028 & 0.00 & 45397 & 0.47 & 19717 & 0.00 & 288411 & 0.30 & 45523 & 0.54 & 19537 & 0.00 & 292484 & 0.36 & 41564 & 0.51 & 29701 & 0.00 & 245005 & -0.05 \\ 15 & 165 & 43033 & 0.00 & 19866 & 0.00 & 286426 & 0.00 & 47677 & 0.45 & 19886 & 0.00 & 315410 & 0.27 & 48279 & 0.51 & 19866 & 0.00 & 320907 & 0.33 & 43086 & 0.49 & 27296 & 0.00 & 250126 & -0.05 \\ 16 & 167 & 42225 & 0.00 & 20486 & 0.00 & 266401 & 0.00 & 46847 & 0.41 & 20726 & 0.00 & 293251 & 0.26 & 46736 & 0.47 & 20486 & 0.00 & 296259 & 0.32 & 43089 & 0.45 & 32174 & 0.00 & 
254236 & -0.12 \\ 17 & 172 & 43252 & 0.00 & 21118 & 0.00 & 274647 & 0.00 & 47392 & 0.12 & 21458 & 0.00 & 296262 & 0.07 & 46557 & 0.14 & 21118 & 0.00 & 294721 & 0.09 & 43326 & 0.13 & 33312 & 0.00 & 253008 & -0.27 \\ 18 & 182 & 46249 & 0.00 & 22271 & 0.00 & 294854 & 0.00 & 51338 & 0.42 & 22511 & 0.00 & 324558 & 0.26 & 51285 & 0.48 & 22271 & 0.00 & 328095 & 0.33 & 47021 & 0.45 & 34210 & 0.00 & 274945 & -0.16 \\ 19 & 182 & 46075 & 0.00 & 22272 & 0.00 & 293823 & 0.00 & 51196 & 0.48 & 22492 & 0.00 & 324165 & 0.30 & 51309 & 0.55 & 22272 & 0.00 & 328773 & 0.37 & 47041 & 0.52 & 34414 & 0.00 & 276345 & -0.12 \\ 20 & 183 & 46255 & 0.00 & 22402 & 0.00 & 295570 & 0.00 & 51159 & 0.39 & 22642 & 0.00 & 323946 & 0.25 & 51072 & 0.45 & 22402 & 0.00 & 327177 & 0.31 & 46941 & 0.43 & 34619 & 0.00 & 274911 & -0.20 \\ 21 & 187 & 47074 & 0.00 & 22961 & 0.00 & 297810 & 0.00 & 52051 & 0.45 & 23181 & 0.00 & 327159 & 0.29 & 52132 & 0.52 & 22961 & 0.00 & 331455 & 0.35 & 48052 & 0.48 & 36066 & 0.00 & 281953 & -0.03 \\ 22 & 560 & 143367 & 0.00 & 67560 & 0.00 & 967982 & 0.00 & 154087 & 0.09 & 67560 & 0.00 & 1029904 & 0.05 & 154567 & 0.10 & 67560 & 0.00 & 1033804 & 0.07 & 139567 & 2.15 & 93780 & 0.00 & 927750 & 19.23 \\ 23 & 700 & 182167 & 0.00 & 83980 & 0.00 & 1243299 & 0.00 & 202027 & 0.39 & 84380 & 0.00 & 1361500 & 0.23 & 203207 & 0.45 & 83980 & 0.00 & 1378731 & 0.29 & 184387 & 0.43 & 111800 & 0.00 & 1465883 & 0.21 \\ 24 & 880 & 225447 & 0.00 & 107040 & 0.00 & 1464515 & 0.00 & 249087 & 0.26 & 108240 & 0.00 & 1597920 & 0.16 & 247467 & 0.31 & 107040 & 0.00 & 1604367 & 0.21 & 229787 & 0.30 & 158480 & 0.00 & 1865371 & 0.14 \\ 25 & 900 & 223427 & 0.00 & 111080 & 0.00 & 1402179 & 0.00 & 242767 & 0.23 & 112280 & 0.00 & 1508200 & 0.15 & 240927 & 0.26 & 111080 & 0.00 & 1512347 & 0.19 & 228787 & 0.34 & 179120 & 0.00 & 1917874 & 0.17 \\ 26 & 980 & 251207 & 0.00 & 119160 & 0.00 & 1623991 & 0.00 & 276207 & 0.38 & 119560 & 0.00 & 1776680 & 0.24 & 278287 & 0.44 & 119160 & 0.00 & 1801171 & 0.29 & 257187 & 0.45 
& 175860 & 0.00 & 2075852 & 0.23 \\ 27 & 1000 & 247227 & 0.00 & 124300 & 0.00 & 1493160 & 0.00 & 273907 & 0.44 & 126300 & 0.00 & 1643500 & 0.30 & 271627 & 0.51 & 124300 & 0.00 & 1657623 & 0.37 & 259427 & 0.55 & 210880 & 0.00 & 2159739 & 1.62 \\ 28 & 1020 & 255367 & 0.00 & 125240 & 0.00 & 1627460 & 0.00 & 277487 & 0.09 & 126840 & 0.00 & 1743580 & 0.06 & 273587 & 0.11 & 125240 & 0.00 & 1736423 & 0.08 & 259007 & 0.17 & 197080 & 0.00 & 2144855 & 0.64 \\ 29 & 1020 & 256187 & 0.00 & 125860 & 0.00 & 1593160 & 0.00 & 287327 & 0.35 & 128660 & 0.00 & 1761340 & 0.23 & 282067 & 0.41 & 125860 & 0.00 & 1762851 & 0.28 & 267007 & 0.40 & 204920 & 0.00 & 2158174 & 3.51 \\ 30 & 1040 & 265027 & 0.00 & 127080 & 0.00 & 1691247 & 0.00 & 296547 & 0.40 & 129080 & 0.00 & 1869560 & 0.25 & 294087 & 0.46 & 127080 & 0.00 & 1882407 & 0.31 & 274247 & 0.44 & 195200 & 0.00 & 2245151 & 0.22 \\ 31 & 1080 & 268807 & 0.00 & 133860 & 0.00 & 1639600 & 0.00 & 297467 & 0.29 & 136260 & 0.00 & 1796000 & 0.20 & 293127 & 0.35 & 133860 & 0.00 & 1798275 & 0.24 & 279727 & 0.38 & 224040 & 0.00 & 2307866 & 2.10 \\ \botrule \end{tabular} } \end{minipage} \end{center} \end{sidewaystable} \begin{sidewaystable} \begin{center} \begin{minipage}{\textheight} \caption{Comparison of MP-3 and MP-Ti MILP formulations in terms of compactness for the second data set.}\label{tab:HistoryCompactness2} \resizebox{\textheight}{!}{ \begin{tabular}{@{\extracolsep{\fill}}lccccccccccccccccccccccccc@{\extracolsep{\fill}}} \toprule &&\multicolumn{6}{@{}c@{}}{2P-Ti}&\multicolumn{6}{@{}c@{}}{3P-Ti}&\multicolumn{6}{@{}c@{}}{HP-Ti}&\multicolumn{6}{@{}c@{}}{TP-Ti}\\ \cmidrule(lr){3-8}\cmidrule(lr){9-14}\cmidrule(lr){15-20}\cmidrule(lr){21-26} Case & Units & pre\_cons & redu\_con & pre\_vars & redu\_var & pre\_nozs & redu\_noz & pre\_cons & redu\_con & pre\_vars & redu\_var & pre\_nozs & redu\_noz & pre\_cons & redu\_con & pre\_vars & redu\_var & pre\_nozs & redu\_noz & pre\_cons & redu\_con & pre\_vars & redu\_var & pre\_nozs & redu\_noz \\ & 
& & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) & & (\%) \\ \midrule 32 & 10 & 2207 & 0.05 & 953 & 0.00 & 13295 & 0.03 & 2714 & 2.30 & 952 & 0.00 & 17098 & 2.75 & 2887 & 2.40 & 950 & 0.21 & 18560 & 2.80 & 2580 & 2.68 & 1636 & 0.00 & 33061 & -83.61 \\ 33 & 10 & 2148 & 0.00 & 937 & -0.11 & 12308 & 0.00 & 2669 & 1.33 & 934 & 0.00 & 15642 & 0.79 & 2847 & 1.39 & 936 & -0.21 & 17143 & 0.96 & 2575 & 1.79 & 1846 & -0.44 & 46400 & -144.75 \\ 34 & 10 & 2439 & 0.08 & 1070 & 0.00 & 15234 & 0.05 & 3009 & 1.86 & 1069 & 0.00 & 20764 & 2.10 & 3207 & 2.23 & 1068 & 0.19 & 22498 & 2.58 & 2846 & 2.43 & 2055 & -0.15 & 44978 & -118.57 \\ 35 & 10 & 2306 & 0.00 & 1053 & 0.00 & 12574 & 0.27 & 3079 & 1.75 & 1052 & -0.10 & 19458 & 2.16 & 3573 & 1.79 & 1253 & -0.24 & 23750 & 1.99 & 2839 & 2.20 & 2234 & -0.27 & 24328 & -9.62 \\ 36 & 10 & 2180 & 0.00 & 978 & 0.00 & 10473 & 0.00 & 2918 & 2.18 & 977 & 0.00 & 15008 & 2.38 & 3200 & 1.99 & 1078 & 0.09 & 19873 & 1.84 & 2728 & 2.29 & 2124 & -0.14 & 33775 & -55.80 \\ 37 & 20 & 4535 & 0.00 & 2017 & 0.00 & 28566 & 0.00 & 5619 & 1.21 & 2010 & 0.00 & 36195 & 0.69 & 5984 & 1.42 & 2006 & 0.30 & 39547 & 0.68 & 5363 & 1.83 & 3803 & -0.13 & 74438 & -100.33 \\ 38 & 20 & 4576 & 0.00 & 2017 & 0.00 & 26416 & 0.00 & 5689 & 1.32 & 2013 & 0.00 & 35159 & 1.04 & 6057 & 1.78 & 2011 & 0.15 & 38367 & 1.48 & 5407 & 1.71 & 3621 & -0.19 & 59189 & -63.15 \\ 39 & 20 & 4446 & 0.00 & 1979 & 0.00 & 26253 & 0.00 & 5495 & 1.52 & 1972 & 0.00 & 33520 & 0.85 & 5856 & 1.70 & 1970 & 0.20 & 36759 & 0.96 & 5318 & 1.70 & 4039 & -0.40 & 102512 & -183.06 \\ 40 & 20 & 4349 & 0.05 & 2052 & 0.00 & 23762 & 0.03 & 5904 & 0.64 & 2049 & -0.39 & 32995 & 0.34 & 6492 & 1.96 & 2259 & 1.40 & 43474 & 1.79 & 5557 & 2.23 & 4893 & 2.76 & 134478 & -204.25 \\ 41 & 20 & 4461 & 0.02 & 2073 & 0.00 & 23723 & 0.01 & 6063 & 1.16 & 2072 & 0.00 & 32671 & 0.91 & 7079 & 4.45 & 2495 & 4.41 & 44191 & 5.57 & 5644 & 5.81 & 4703 & 11.91 & 118228 & -153.01 \\ 42 & 50 & 11536 
& 0.00 & 5179 & 0.00 & 57211 & 0.36 & 14287 & 1.22 & 5159 & 0.00 & 79867 & 0.96 & 15215 & 1.67 & 5141 & 0.39 & 89946 & 1.04 & 13694 & 1.60 & 10301 & -0.23 & 217748 & -136.76 \\ 43 & 50 & 11656 & 0.00 & 5225 & 0.00 & 56654 & 0.08 & 14418 & 1.50 & 5208 & 0.00 & 79097 & 1.14 & 15350 & 2.02 & 5193 & 0.33 & 89278 & 1.42 & 13895 & 1.40 & 10661 & -0.46 & 247907 & -168.32 \\ 44 & 50 & 11328 & -0.01 & 5091 & 0.00 & 56373 & 0.34 & 13990 & 1.54 & 5070 & 0.00 & 78473 & 1.34 & 14888 & 1.94 & 5053 & 0.34 & 88387 & 1.49 & 13483 & 1.61 & 10487 & -0.46 & 249359 & -170.76 \\ 45 & 50 & 11050 & -0.01 & 5229 & 0.00 & 48980 & 0.30 & 14944 & 1.33 & 5218 & -0.04 & 75564 & 1.18 & 17443 & 1.78 & 6271 & 0.21 & 95646 & 3.04 & 13963 & 1.90 & 12420 & -0.07 & 290565 & -166.55 \\ 46 & 50 & 11311 & -0.02 & 5329 & 0.00 & 50313 & 0.08 & 15254 & 1.49 & 5319 & 0.00 & 76549 & 1.42 & 17790 & 1.90 & 6369 & 0.22 & 97586 & 3.21 & 14253 & 2.07 & 12635 & -0.09 & 299209 & -169.54 \\ 47 & 75 & 16935 & 0.00 & 7677 & 0.00 & 79713 & 0.08 & 20983 & 1.13 & 7637 & 0.00 & 111907 & 1.37 & 22281 & 1.67 & 7601 & 0.47 & 127657 & 1.13 & 20327 & 1.04 & 16416 & -0.37 & 394974 & -182.75 \\ 48 & 75 & 17350 & 0.00 & 7832 & 0.00 & 81491 & 0.14 & 21476 & 1.17 & 7796 & 0.00 & 113951 & 1.06 & 22835 & 1.90 & 7768 & 0.36 & 129339 & 1.51 & 20719 & 1.46 & 16431 & -0.31 & 385202 & -173.77 \\ 49 & 75 & 17224 & 0.01 & 7744 & 0.00 & 81162 & 0.11 & 21313 & 1.81 & 7714 & 0.00 & 113002 & 1.65 & 22714 & 1.98 & 7692 & 0.29 & 128408 & 1.50 & 20491 & 1.60 & 15696 & -0.34 & 352952 & -156.95 \\ 50 & 75 & 17194 & 0.00 & 7733 & 0.00 & 80472 & 0.23 & 21269 & 1.84 & 7706 & 0.00 & 112069 & 1.80 & 22636 & 2.08 & 7683 & 0.30 & 127357 & 1.68 & 20496 & 1.58 & 15975 & -0.48 & 373117 & -173.00 \\ 51 & 75 & 16589 & 0.00 & 7890 & 0.00 & 69644 & 0.34 & 22497 & 1.20 & 7872 & -0.04 & 107831 & 1.18 & 26327 & 1.56 & 9476 & 0.21 & 142681 & 1.54 & 21038 & 2.09 & 19129 & -0.07 & 489668 & -195.24 \\ 52 & 100 & 23057 & 0.00 & 10375 & 0.00 & 105032 & 0.18 & 28471 & 1.68 
& 10378 & 0.27 & 146059 & 1.57 & 30277 & 2.10 & 10334 & 0.60 & 166452 & 1.57 & 27391 & 1.74 & 21220 & -0.25 & 490400 & -170.12 \\ 53 & 100 & 22744 & 0.00 & 10256 & 0.00 & 103288 & 0.10 & 28125 & 1.53 & 10242 & 0.16 & 144024 & 1.58 & 29903 & 1.95 & 10195 & 0.49 & 164430 & 1.49 & 27096 & 1.59 & 21090 & -0.30 & 478637 & -165.05 \\ 54 & 100 & 22878 & 0.00 & 10301 & 0.00 & 103417 & 0.04 & 28248 & 1.58 & 10315 & 0.38 & 143736 & 1.50 & 30039 & 1.99 & 10276 & 0.58 & 163693 & 1.63 & 27274 & 1.68 & 21609 & -0.38 & 514915 & -180.28 \\ 55 & 100 & 22758 & -0.00 & 10244 & 0.00 & 103415 & 0.22 & 28119 & 1.54 & 10242 & 0.29 & 144454 & 1.51 & 29917 & 1.91 & 10199 & 0.56 & 164639 & 1.40 & 27151 & 1.55 & 21150 & -0.31 & 490836 & -169.01 \\ 56 & 100 & 22949 & -0.00 & 10325 & 0.00 & 104097 & 0.29 & 28399 & 1.59 & 10310 & 0.14 & 145324 & 1.65 & 30220 & 1.98 & 10269 & 0.39 & 165785 & 1.43 & 27286 & 1.51 & 20628 & -0.31 & 427577 & -137.37 \\ 57 & 150 & 34234 & -0.01 & 15562 & 0.00 & 149229 & 0.20 & 42342 & 1.82 & 15462 & 0.46 & 206781 & 1.67 & 45046 & 2.19 & 15397 & 0.73 & 236090 & 1.72 & 40768 & 1.90 & 31574 & -0.29 & 694568 & -160.55 \\ 58 & 150 & 34164 & -0.01 & 15559 & 0.00 & 149167 & 0.07 & 42317 & 1.61 & 15457 & 0.35 & 207201 & 1.44 & 45016 & 2.00 & 15390 & 0.59 & 237072 & 1.50 & 40818 & 1.57 & 32023 & -0.38 & 758207 & -182.89 \\ 59 & 150 & 34007 & -0.01 & 15501 & 0.00 & 148146 & 0.11 & 42098 & 1.64 & 15403 & 0.46 & 205463 & 1.55 & 44793 & 2.08 & 15336 & 0.65 & 235073 & 1.59 & 40650 & 1.65 & 32105 & -0.35 & 773638 & -186.52 \\ 60 & 150 & 34142 & -0.01 & 15561 & 0.00 & 149225 & 0.06 & 42293 & 1.48 & 15452 & 0.41 & 207869 & 1.39 & 44961 & 1.93 & 15379 & 0.74 & 238184 & 1.41 & 40887 & 1.59 & 32472 & -0.29 & 780618 & -187.18 \\ 61 & 150 & 33841 & -0.01 & 15450 & 0.00 & 147221 & 0.18 & 41888 & 1.56 & 15344 & 0.50 & 204771 & 1.42 & 44557 & 1.97 & 15271 & 0.77 & 234371 & 1.50 & 40557 & 1.49 & 32330 & -0.42 & 780435 & -189.98 \\ 62 & 200 & 44731 & 0.00 & 21357 & 0.00 & 167475 & 0.04 & 60483 
& 1.30 & 21263 & 0.30 & 257349 & 1.39 & 70586 & 1.54 & 25410 & 0.33 & 370863 & 1.42 & 56629 & 2.01 & 50463 & -0.12 & 1133378 & -163.07 \\ 63 & 200 & 44203 & 0.00 & 21186 & 0.00 & 165529 & 0.05 & 59849 & 1.17 & 21093 & 0.37 & 255332 & 1.20 & 69903 & 1.51 & 25267 & 0.42 & 369229 & 1.40 & 56190 & 1.95 & 51025 & -0.09 & 1223444 & -182.20 \\ 64 & 200 & 43858 & 0.00 & 20993 & 0.00 & 164254 & 0.08 & 59467 & 1.25 & 20897 & 0.25 & 254063 & 1.33 & 69485 & 1.42 & 25059 & 0.35 & 367603 & 1.30 & 55708 & 1.82 & 50023 & -0.08 & 1154743 & -169.64 \\ 65 & 200 & 45840 & 0.01 & 20808 & 0.01 & 195239 & 0.10 & 56815 & 1.69 & 20674 & 0.26 & 272026 & 1.55 & 60454 & 2.06 & 20594 & 0.52 & 312792 & 1.52 & 54588 & 1.67 & 41449 & -0.32 & 872550 & -145.99 \\ 66 & 200 & 45750 & -0.01 & 20821 & 0.00 & 194451 & 0.11 & 56714 & 1.57 & 20688 & 0.30 & 271524 & 1.42 & 60331 & 1.97 & 20606 & 0.57 & 312305 & 1.51 & 54706 & 1.52 & 42760 & -0.39 & 996296 & -177.77 \\ 67 & 200 & 45314 & -0.01 & 20642 & 0.00 & 192382 & 0.20 & 56153 & 1.47 & 20506 & 0.29 & 268985 & 1.36 & 59694 & 1.94 & 20414 & 0.59 & 309434 & 1.43 & 54209 & 1.44 & 42641 & -0.30 & 985777 & -175.81 \\ 68 & 200 & 45113 & -0.00 & 20507 & 0.00 & 191704 & 0.05 & 55933 & 1.48 & 20372 & 0.17 & 267790 & 1.37 & 59454 & 1.93 & 20286 & 0.49 & 307691 & 1.47 & 53868 & 1.47 & 41497 & -0.31 & 908161 & -158.94 \\ 69 & 200 & 45847 & -0.00 & 20891 & 0.00 & 194728 & 0.07 & 56802 & 1.64 & 20759 & 0.35 & 271331 & 1.46 & 60426 & 2.05 & 20666 & 0.60 & 311804 & 1.55 & 54818 & 1.58 & 43272 & -0.35 & 1020516 & -183.33 \\ 70 & 200 & 43877 & 0.00 & 21039 & 0.00 & 164491 & 0.09 & 59510 & 1.19 & 20945 & 0.26 & 254759 & 1.26 & 69495 & 1.39 & 25121 & 0.37 & 368583 & 1.27 & 55828 & 1.85 & 50702 & -0.09 & 1199085 & -177.97 \\ 71 & 200 & 44706 & 0.00 & 21348 & 0.00 & 167841 & 0.04 & 60528 & 1.16 & 21254 & 0.16 & 258721 & 1.25 & 70615 & 1.41 & 25418 & 0.20 & 372703 & 1.26 & 56612 & 1.77 & 50409 & -0.11 & 1094816 & -152.40 \\ 72 & 200 & 44226 & 0.00 & 21235 & 0.00 & 165590 & 
0.04 & 59929 & 1.16 & 21137 & 0.31 & 255669 & 1.08 & 70049 & 1.48 & 25332 & 0.39 & 369875 & 1.37 & 56203 & 1.95 & 51323 & -0.08 & 1279255 & -194.51 \\ 73 & 200 & 44347 & 0.00 & 21187 & 0.00 & 165972 & 0.04 & 59975 & 1.26 & 21091 & 0.35 & 255558 & 1.33 & 69983 & 1.46 & 25234 & 0.30 & 368469 & 1.30 & 56191 & 1.90 & 50077 & -0.15 & 1053273 & -143.82 \\ \botrule \end{tabular} } \end{minipage} \end{center} \end{sidewaystable} The tightness of an MIP formulation is usually characterized by the relative integrality gap $(Z_{MIP}-Z_{CR})/Z_{MIP}$ \cite{wolsey2020integer}, denoted as ``iGap". In this expression, $Z_{CR}$ represents the optimal value of the continuous relaxation of the MIP problem, which varies with the formulation we construct for the MIP problem, and $Z_{MIP}$ represents the optimal value of the MIP problem itself. Due to limitations of the solving algorithm and the solving time, it is usually difficult to obtain the true optimal value of an MIP problem; the solving process usually terminates with an optimal value satisfying the preset optimality tolerance. To ensure fairness, we use the same $Z_{MIP}$ for all of the MILP formulations, equal to the minimum of the optimal values obtained by using GUROBI with an accuracy of 0.1\% among the aforementioned MILP formulations. In this way, the difference in iGap accurately reflects the difference in tightness of each MIP model's continuous relaxation. Fig. \ref{fig:History-iGap} shows the iGap for MP-3, MP-Ti and the other three state-of-the-art MILP formulations on all of the instances in comparison with 3P (using ratios), where the ratio for the 3P model is always 100\%. The differences in gap among the MP-3 and MP-Ti models are large for the first data set, but small for the second data set.
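The iGap computation described above reduces to two one-line formulas. The sketch below (with hypothetical $Z_{MIP}$ and $Z_{CR}$ values, not taken from our experiments) also shows the ratio-to-3P normalization, in which the reference model is always reported as 100\%:

```python
def igap(z_mip, z_cr):
    """Relative integrality gap (Z_MIP - Z_CR) / Z_MIP."""
    return (z_mip - z_cr) / z_mip

def igap_ratio(z_mip, z_cr_model, z_cr_ref):
    """A model's iGap as a percentage of the reference model's iGap.
    The same Z_MIP is shared by all formulations, as described above."""
    return 100.0 * igap(z_mip, z_cr_model) / igap(z_mip, z_cr_ref)

# Hypothetical values for one minimization instance: a tighter
# relaxation gives a larger Z_CR and hence a smaller iGap ratio.
z_mip = 1_000_000.0
print(igap_ratio(z_mip, z_cr_model=995_000.0, z_cr_ref=990_000.0))  # prints 50.0
```

Because the shared $Z_{MIP}$ cancels in the ratio's denominator structure, differences in the plotted percentages are driven entirely by the relaxation bounds $Z_{CR}$ of the competing formulations.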
In addition, we have the following tightness relationships among the eleven formulations: \begin{align} &2P\precsim 3P\precsim 2P-3\precsim 2P-Ti\\ &2P\precsim 3P\precsim 2P-3\precsim 3P-HD=3P-3\precsim 3P-Ti\\ &2P\precsim 3P\precsim 2P-3\precsim 3P-HD\precsim HP-3\precsim HP-Ti\\ &2P\precsim 3P\precsim 2P-3\precsim 3P-HD\precsim TP-3\precsim TP-Ti \end{align} It is worth mentioning that the tightness relationship between the 2P-Ti formulation and the 3P-HD formulation is ambiguous. As shown in Fig. \ref{fig:History-iGap}, the 2P-Ti formulation is tighter than the 3P-HD formulation for the instances in the first data set. However, the 3P-HD formulation is tighter than the 2P-Ti formulation for all but three instances in the second data set. The main reason is that the MP-Ti formulations improve tightness mainly by considering the historical status in the lower/upper bounds of the generation limits and the ramping constraints (as analyzed in Section \ref{sec4}), and the historical status plays a key role in some instances. \begin{figure} \caption{Comparison of MP-3, MP-Ti and the other three state-of-the-art MILP formulations in terms of iGap.} \label{fig:History-iGap} \end{figure} As shown in Fig. \ref{fig:History-runtime} and Fig. \ref{fig:History-profile-1}, the performance of the MP-Ti models is slightly better than that of the MP-3 models for the first data set. Fig. \ref{fig:History-runtime} and Fig. \ref{fig:History-profile-2} show that the performance of the MP-Ti models is slightly worse than that of the MP-3 models for the 2-period, 3-period and H-period cases. Furthermore, the MP-3 models perform significantly better than MP-Ti for the T-period case.
\begin{figure} \caption{Comparison of MP-3, MP-Ti and the other three state-of-the-art MILP formulations in terms of rTime.} \label{fig:History-runtime} \end{figure} \begin{figure} \caption{Performance profiles on CPU time for MP-3 and MP-Ti formulations for the first data set.} \label{fig:History-profile-1} \end{figure} \begin{figure} \caption{Performance profiles on CPU time for MP-3 and MP-Ti formulations for the second data set.} \label{fig:History-profile-2} \end{figure} Fig. \ref{fig:MP3iGap} shows the iGap for MP-3 and the other three state-of-the-art MILP formulations on all of the instances in comparison with 3P (using ratios). Fig. \ref{fig:MPTiiGap} shows the iGap for MP-Ti and the other three state-of-the-art MILP formulations on all of the instances in comparison with 3P (using ratios). Our TP-3 and TP-Ti models always provide the best gaps. In addition, we have the following tightness relationships among the models: \begin{align} 2P\precsim 3P\precsim 2P-3\precsim 3P-HD=3P-3\precsim HP-3\precsim TP-3\\ 2P\precsim 3P\precsim 2P-Ti\precsim3P-Ti\precsim HP-Ti\precsim TP-Ti \end{align} \begin{figure} \caption{Comparison of MP-3 and the other three state-of-the-art MILP formulations in terms of iGap.} \label{fig:MP3iGap} \end{figure} \begin{figure} \caption{Comparison of MP-Ti and the other three state-of-the-art MILP formulations in terms of iGap.} \label{fig:MPTiiGap} \end{figure} It can be seen from Fig. \ref{fig:MP3PB} that the 2P and 3P models always give the lowest proportion of integers among the binary variables in the relaxation solution. In contrast, the TP-3 model always provides the highest one. For data set 1, the differences among 2P-3, 3P-3, HP-3 and 3P-HD are not significant. For data set 2, the HP-3 model performs better than the other three. Fig. \ref{fig:MPTiPB} shows the difference in the proportion of integers among the binary variables for MP-Ti and the other three state-of-the-art MILP formulations.
The differences among 2P-Ti, 3P-Ti, HP-Ti and 3P-HD are slight for the first data set.
\begin{figure} \caption{Comparison of MP-3 and the other three state-of-the-art MILP formulations in terms of the proportion of integers in binary variables.} \label{fig:MP3PB} \end{figure}
\begin{figure} \caption{Comparison of MP-Ti and the other three state-of-the-art MILP formulations in terms of the proportion of integers in binary variables.} \label{fig:MPTiPB} \end{figure}
Fig. \ref{fig:MP3Runtime} shows the rTime of MP-3 and the other three state-of-the-art MILP formulations for all test instances. Overall, the 2P model performs worst, followed by the 3P model and then the TP-3 model; the other models perform equally well. In particular, the TP-3 model performs well for instances 24--31 in the first data set, with units ranging from 560 to 1080. Fig. \ref{fig:MPTiRuntime} shows the rTime of MP-Ti and the other three state-of-the-art MILP formulations for all test instances.
\begin{figure} \caption{Comparison of MP-3 and the other three state-of-the-art MILP formulations in terms of rTime.} \label{fig:MP3Runtime} \end{figure}
\begin{figure} \caption{Comparison of MP-Ti and the other three state-of-the-art MILP formulations in terms of rTime.} \label{fig:MPTiRuntime} \end{figure}
As shown in Fig. \ref{fig:performance-profile}, our MP-3 and MP-Ti formulations, as well as 3P-HD, perform very well compared with the other two state-of-the-art MILP formulations on the first data set. For the second data set, the 3P model performs better than TP-Ti. The poor convergence of TP-Ti on the second data set is mainly due to its large number of variables and constraints. Fig. \ref{fig:performance-profile-clear} shows the differences among the well-performing models more precisely.
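The performance profiles in these figures can be computed with the standard Dolan--Mor\'{e} construction: for each instance, each formulation's run time is divided by the best run time on that instance, and the profile reports the fraction of instances solved within a factor $\tau$ of the best. A minimal sketch with hypothetical timing data (not the paper's code):

```python
import numpy as np

def performance_profile(times, taus):
    """Dolan-More performance profile.

    times: (n_problems, n_solvers) array of run times.
    Returns rho with shape (len(taus), n_solvers): the fraction of
    problems each solver solves within a factor tau of the fastest.
    """
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)   # best time per problem
    ratios = times / best                     # r_{p,s} = t_{p,s} / min_s t_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical timings: 3 instances, 2 formulations.
t = [[1.0, 2.0],
     [4.0, 2.0],
     [3.0, 3.0]]
rho = performance_profile(t, taus=[1.0, 2.0])
# rho[0] gives, for each formulation, the fraction of instances on which
# it is (tied-)fastest; rho at larger tau is monotonically nondecreasing.
```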
\begin{figure} \caption{Performance profiles on CPU time for MP-3, MP-Ti and the other three state-of-the-art MILP formulations.} \label{fig:performance-profile} \end{figure}
\begin{figure} \caption{Performance profiles on CPU time for MP-3, MP-Ti and 3P-HD formulations.} \label{fig:performance-profile-clear} \end{figure}
In Table \ref{tab:data1} and Table \ref{tab:data2}, we report more detailed results for the two data sets, respectively, where ``Units'' denotes the number of units in the system, ``time'' denotes the solution time, and ``$N_b$'' denotes the proportion of integers among the binary variables in the solution of the continuous relaxation problem. All test instances can be solved to optimality within the time limit of 3600 seconds. \begin{sidewaystable} \begin{center} \begin{minipage}{\textheight} \caption{Comparison of MP-3 and the other three state-of-the-art MILP formulations in computational performance for the first data set.}\label{tab:data1} \resizebox{\textheight}{!}{ \begin{tabular}{@{}lcccccccccccccccccccccc@{}} \toprule &&\multicolumn{3}{@{}c@{}}{2P}&\multicolumn{3}{@{}c@{}}{2P-3}&\multicolumn{3}{@{}c@{}}{3P}&\multicolumn{3}{@{}c@{}}{3P-3}&\multicolumn{3}{@{}c@{}}{3P-HD}&\multicolumn{3}{@{}c@{}}{HP-3}&\multicolumn{3}{@{}c@{}}{TP-3}\\ \cmidrule(lr){3-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}\cmidrule(lr){12-14}\cmidrule(lr){15-17}\cmidrule(lr){18-20}\cmidrule(lr){21-23} Case&Units&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)\\ \midrule 1 & 28 & 79.66 & 0.67 & 3.39 & 80.75 & 0.47 & 2.67 & 80.65 & 0.57 & 3.17 & 79.66 & 0.45 & 1.83 & 84.66 & 0.45 & 1.66 & 79.71 & 0.45 & 1.56 & 93.83 & 0.44 & 2.06 \\ 2 & 35 & 74.60 & 0.63 & 6.98 & 79.09 & 0.44 & 1.31 & 77.94 & 0.51 & 8.09 & 77.74 & 0.43 & 3.67 & 83.37 & 0.43 & 3.16 & 78.69 & 0.43 & 3.66 & 93.52 & 0.42 & 6.06 \\ 3 & 44 & 73.58 & 0.64 & 5.00 &
76.36 & 0.42 & 1.48 & 77.37 & 0.48 & 7.81 & 80.38 & 0.38 & 2.64 & 84.93 & 0.38 & 2.34 & 79.86 & 0.38 & 3.42 & 94.51 & 0.38 & 1.97 \\ 4 & 45 & 83.06 & 0.57 & 4.67 & 86.05 & 0.40 & 1.50 & 84.72 & 0.44 & 3.06 & 87.13 & 0.37 & 1.28 & 90.08 & 0.37 & 1.56 & 85.37 & 0.37 & 1.67 & 95.98 & 0.36 & 1.87 \\ 5 & 49 & 74.46 & 0.73 & 8.69 & 76.87 & 0.50 & 2.03 & 77.78 & 0.54 & 3.01 & 79.92 & 0.40 & 2.11 & 84.64 & 0.40 & 1.58 & 83.39 & 0.40 & 1.86 & 95.33 & 0.39 & 2.22 \\ 6 & 50 & 80.22 & 0.84 & 2.44 & 80.58 & 0.52 & 1.25 & 80.64 & 0.56 & 2.75 & 86.14 & 0.37 & 1.39 & 88.88 & 0.37 & 1.27 & 86.67 & 0.36 & 1.42 & 95.32 & 0.34 & 2.27 \\ 7 & 51 & 82.82 & 0.56 & 4.53 & 82.76 & 0.39 & 1.28 & 81.40 & 0.44 & 10.04 & 85.12 & 0.37 & 1.66 & 89.12 & 0.37 & 1.97 & 85.27 & 0.37 & 1.48 & 95.60 & 0.37 & 2.50 \\ 8 & 51 & 79.98 & 0.76 & 3.17 & 80.64 & 0.53 & 1.37 & 81.13 & 0.58 & 2.52 & 83.69 & 0.43 & 1.86 & 87.08 & 0.43 & 1.11 & 85.16 & 0.42 & 1.12 & 96.23 & 0.40 & 2.70 \\ 9 & 52 & 74.57 & 0.72 & 10.70 & 76.52 & 0.48 & 1.73 & 76.90 & 0.53 & 2.83 & 79.87 & 0.39 & 1.67 & 84.51 & 0.39 & 1.41 & 84.13 & 0.39 & 1.67 & 95.58 & 0.37 & 2.67 \\ 10 & 54 & 78.16 & 0.92 & 7.59 & 80.35 & 0.63 & 1.37 & 79.45 & 0.67 & 9.83 & 83.38 & 0.47 & 1.80 & 86.98 & 0.47 & 1.83 & 83.87 & 0.46 & 1.92 & 95.90 & 0.43 & 3.73 \\ 11 & 132 & 81.52 & 0.55 & 37.08 & 81.28 & 0.37 & 13.70 & 81.61 & 0.43 & 35.40 & 82.68 & 0.35 & 14.28 & 86.49 & 0.35 & 10.47 & 84.13 & 0.35 & 9.75 & 95.36 & 0.34 & 10.84 \\ 12 & 156 & 75.78 & 0.57 & 50.63 & 79.03 & 0.38 & 12.39 & 78.37 & 0.43 & 24.67 & 80.55 & 0.35 & 13.48 & 84.74 & 0.35 & 13.87 & 83.91 & 0.35 & 13.32 & 95.37 & 0.35 & 15.25 \\ 13 & 156 & 78.21 & 0.62 & 24.21 & 78.86 & 0.42 & 12.18 & 79.66 & 0.47 & 25.79 & 81.89 & 0.37 & 12.90 & 86.03 & 0.37 & 13.00 & 86.44 & 0.37 & 12.43 & 95.99 & 0.37 & 25.43 \\ 14 & 160 & 76.63 & 0.70 & 41.94 & 77.75 & 0.46 & 13.79 & 78.37 & 0.50 & 26.81 & 79.94 & 0.37 & 14.67 & 84.74 & 0.37 & 16.07 & 85.22 & 0.37 & 14.18 & 95.99 & 0.35 & 17.65 \\ 15 & 165 & 69.30 & 0.64 
& 54.03 & 71.77 & 0.40 & 18.79 & 72.66 & 0.45 & 34.24 & 72.13 & 0.35 & 34.49 & 79.09 & 0.35 & 20.62 & 78.51 & 0.35 & 24.04 & 94.22 & 0.34 & 18.85 \\ 16 & 167 & 78.03 & 0.72 & 52.02 & 79.14 & 0.47 & 12.62 & 79.67 & 0.51 & 26.74 & 82.23 & 0.37 & 15.32 & 86.28 & 0.37 & 14.61 & 85.15 & 0.37 & 13.67 & 96.11 & 0.36 & 18.95 \\ 17 & 172 & 78.75 & 0.55 & 40.01 & 78.64 & 0.38 & 26.01 & 78.91 & 0.42 & 32.68 & 80.63 & 0.35 & 17.62 & 85.37 & 0.35 & 15.31 & 82.19 & 0.35 & 26.43 & 95.14 & 0.35 & 16.39 \\ 18 & 182 & 77.47 & 0.69 & 63.09 & 78.58 & 0.46 & 18.34 & 79.49 & 0.51 & 35.65 & 81.86 & 0.39 & 18.53 & 85.88 & 0.39 & 16.59 & 85.45 & 0.38 & 18.70 & 95.63 & 0.37 & 22.23 \\ 19 & 182 & 77.15 & 0.63 & 64.03 & 78.66 & 0.42 & 20.96 & 79.59 & 0.46 & 29.77 & 83.69 & 0.36 & 19.59 & 87.45 & 0.36 & 18.78 & 86.63 & 0.35 & 19.79 & 95.31 & 0.34 & 22.10 \\ 20 & 183 & 77.99 & 0.59 & 38.32 & 81.45 & 0.40 & 18.00 & 79.54 & 0.44 & 26.21 & 84.40 & 0.37 & 24.85 & 88.21 & 0.37 & 20.09 & 88.56 & 0.36 & 21.10 & 96.43 & 0.36 & 21.03 \\ 21 & 187 & 79.39 & 0.64 & 39.33 & 81.11 & 0.43 & 17.95 & 82.04 & 0.47 & 27.74 & 82.04 & 0.36 & 19.62 & 86.28 & 0.36 & 16.86 & 86.40 & 0.36 & 21.09 & 95.61 & 0.35 & 22.46 \\ 22 & 560 & 80.98 & 0.53 & 583.02 & 81.84 & 0.34 & 293.73 & 81.38 & 0.43 & 648.07 & 80.34 & 0.32 & 279.95 & 85.30 & 0.32 & 222.98 & 80.31 & 0.32 & 273.94 & 94.14 & 0.31 & 320.83 \\ 23 & 700 & 75.47 & 0.52 & 827.01 & 80.41 & 0.33 & 412.00 & 79.46 & 0.40 & 645.93 & 78.95 & 0.32 & 319.83 & 84.25 & 0.32 & 296.57 & 79.77 & 0.32 & 321.24 & 93.82 & 0.31 & 409.06 \\ 24 & 880 & 74.28 & 0.61 & 938.12 & 77.20 & 0.39 & 392.27 & 76.52 & 0.45 & 820.82 & 80.25 & 0.35 & 335.37 & 85.22 & 0.35 & 443.24 & 80.59 & 0.35 & 379.96 & 94.69 & 0.35 & 267.25 \\ 25 & 900 & 82.97 & 0.54 & 757.57 & 86.44 & 0.37 & 388.36 & 85.40 & 0.41 & 599.64 & 87.57 & 0.34 & 377.10 & 90.40 & 0.34 & 357.37 & 86.62 & 0.34 & 374.68 & 96.17 & 0.34 & 272.59 \\ 26 & 980 & 75.55 & 0.68 & 990.17 & 77.10 & 0.44 & 419.98 & 78.11 & 0.49 & 872.87 & 80.04 & 
0.35 & 490.98 & 84.78 & 0.35 & 494.16 & 83.51 & 0.35 & 579.74 & 95.40 & 0.33 & 162.63 \\ 27 & 1000 & 80.79 & 0.81 & 912.14 & 80.81 & 0.49 & 369.38 & 80.88 & 0.53 & 563.73 & 86.39 & 0.34 & 488.71 & 89.31 & 0.34 & 505.82 & 86.85 & 0.33 & 440.66 & 95.41 & 0.31 & 244.96 \\ 28 & 1020 & 83.73 & 0.53 & 1431.49 & 83.26 & 0.36 & 434.44 & 81.84 & 0.40 & 3013.43 & 85.74 & 0.34 & 428.87 & 88.79 & 0.34 & 393.72 & 85.85 & 0.34 & 467.47 & 95.69 & 0.33 & 229.24 \\ 29 & 1020 & 80.24 & 0.71 & 871.92 & 81.00 & 0.48 & 445.75 & 81.29 & 0.53 & 712.96 & 83.84 & 0.38 & 438.08 & 87.24 & 0.38 & 381.91 & 85.01 & 0.37 & 458.86 & 96.54 & 0.35 & 86.09 \\ 30 & 1040 & 75.48 & 0.68 & 1244.44 & 76.73 & 0.44 & 516.05 & 77.16 & 0.49 & 326.78 & 80.01 & 0.35 & 482.78 & 84.31 & 0.35 & 579.05 & 84.49 & 0.35 & 530.72 & 95.67 & 0.33 & 195.88 \\ 31 & 1080 & 78.52 & 0.88 & 1108.14 & 81.11 & 0.58 & 491.57 & 79.86 & 0.62 & 204.61 & 83.63 & 0.43 & 460.09 & 87.19 & 0.43 & 531.69 & 84.08 & 0.41 & 425.42 & 96.05 & 0.38 & 197.05 \\ \botrule \end{tabular} } \end{minipage} \end{center} \end{sidewaystable} \begin{sidewaystable} \begin{center} \begin{minipage}{\textheight} \caption{Comparison of MP-3 and the other three state-of-the-art MILP formulations in computational performance for the second data set.}\label{tab:data2} \resizebox{\textheight}{!}{ \begin{tabular}{@{}lcccccccccccccccccccccc@{}} \toprule &&\multicolumn{3}{@{}c@{}}{2P}&\multicolumn{3}{@{}c@{}}{2P-3}&\multicolumn{3}{@{}c@{}}{3P}&\multicolumn{3}{@{}c@{}}{3P-3}&\multicolumn{3}{@{}c@{}}{3P-HD}&\multicolumn{3}{@{}c@{}}{HP-3}&\multicolumn{3}{@{}c@{}}{TP-3}\\ \cmidrule(lr){3-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}\cmidrule(lr){12-14}\cmidrule(lr){15-17}\cmidrule(lr){18-20}\cmidrule(lr){21-23} Case&Units&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)&$N_b$(\%)&iGap(\%)&time(s)\\ \midrule 32 & 10 & 76.81 & 1.49 & 0.34 & 84.58 & 0.88 & 0.22 & 
82.78 & 1.05 & 0.39 & 89.17 & 0.63 & 0.25 & 91.58 & 0.63 & 0.23 & 90.14 & 0.51 & 0.25 & 96.47 & 0.51 & 0.64 \\ 33 & 10 & 70.69 & 2.35 & 0.58 & 83.47 & 1.59 & 0.39 & 80.56 & 1.84 & 0.36 & 86.25 & 1.31 & 0.53 & 90.32 & 1.31 & 0.41 & 88.33 & 1.18 & 0.48 & 96.62 & 1.17 & 0.95 \\ 34 & 10 & 72.36 & 1.91 & 0.48 & 78.61 & 1.10 & 0.27 & 78.19 & 1.36 & 0.45 & 81.94 & 0.75 & 0.33 & 86.53 & 0.75 & 0.31 & 86.67 & 0.66 & 0.34 & 95.31 & 0.65 & 0.56 \\ 35 & 10 & 71.67 & 2.24 & 0.50 & 78.61 & 1.57 & 0.28 & 77.36 & 1.80 & 0.42 & 85.75 & 1.31 & 0.36 & 87.47 & 1.31 & 0.31 & 88.57 & 1.14 & 0.42 & 95.55 & 1.10 & 0.41 \\ 36 & 10 & 71.25 & 1.95 & 0.36 & 79.31 & 1.22 & 0.25 & 77.64 & 1.47 & 0.34 & 85.15 & 0.98 & 0.28 & 86.74 & 0.98 & 0.25 & 89.81 & 0.93 & 0.36 & 95.86 & 0.93 & 0.45 \\ 37 & 20 & 73.54 & 2.06 & 2.16 & 85.63 & 1.36 & 0.84 & 83.75 & 1.44 & 2.37 & 88.54 & 1.04 & 2.12 & 92.37 & 1.04 & 1.86 & 90.97 & 0.88 & 1.69 & 96.65 & 0.88 & 4.89 \\ 38 & 20 & 75.56 & 1.59 & 1.00 & 82.29 & 0.95 & 0.48 & 82.57 & 1.05 & 0.67 & 90.49 & 0.66 & 0.53 & 93.05 & 0.66 & 0.56 & 92.71 & 0.54 & 0.81 & 97.47 & 0.53 & 0.84 \\ 39 & 20 & 72.92 & 1.58 & 1.73 & 80.14 & 0.78 & 0.78 & 77.71 & 1.05 & 1.11 & 86.67 & 0.53 & 1.23 & 89.95 & 0.53 & 1.05 & 89.24 & 0.46 & 1.89 & 96.21 & 0.46 & 1.97 \\ 40 & 20 & 79.10 & 1.87 & 1.52 & 84.86 & 1.19 & 0.59 & 82.50 & 1.31 & 1.83 & 90.60 & 0.92 & 0.89 & 91.58 & 0.92 & 0.77 & 93.24 & 0.70 & 1.52 & 97.85 & 0.70 & 4.23 \\ 41 & 20 & 73.40 & 1.85 & 0.91 & 81.11 & 1.12 & 0.36 & 79.10 & 1.25 & 0.62 & 85.69 & 0.79 & 0.58 & 87.58 & 0.79 & 0.48 & 91.57 & 0.63 & 0.89 & 96.66 & 0.61 & 1.66 \\ 42 & 50 & 74.06 & 1.49 & 9.06 & 83.72 & 0.71 & 4.26 & 81.22 & 0.88 & 6.12 & 88.14 & 0.40 & 2.76 & 91.28 & 0.40 & 4.91 & 90.86 & 0.27 & 2.17 & 97.67 & 0.26 & 4.03 \\ 43 & 50 & 76.00 & 1.13 & 4.01 & 85.72 & 0.49 & 1.70 & 84.58 & 0.65 & 2.17 & 91.58 & 0.26 & 1.61 & 93.64 & 0.26 & 1.75 & 93.44 & 0.13 & 1.39 & 97.91 & 0.13 & 1.78 \\ 44 & 50 & 76.25 & 1.21 & 4.95 & 83.31 & 0.45 & 2.55 & 80.25 & 0.64 & 3.47 
& 90.42 & 0.19 & 2.51 & 92.67 & 0.19 & 2.42 & 93.58 & 0.11 & 1.41 & 97.91 & 0.11 & 2.00 \\ 45 & 50 & 75.08 & 1.28 & 6.45 & 84.50 & 0.60 & 2.22 & 81.47 & 0.75 & 4.19 & 90.20 & 0.33 & 5.05 & 91.58 & 0.33 & 4.12 & 94.72 & 0.18 & 6.53 & 98.28 & 0.18 & 8.22 \\ 46 & 50 & 73.89 & 1.22 & 5.25 & 83.94 & 0.54 & 2.62 & 82.31 & 0.65 & 3.59 & 91.54 & 0.27 & 3.62 & 92.61 & 0.27 & 3.61 & 94.70 & 0.15 & 3.26 & 98.35 & 0.15 & 6.48 \\ 47 & 75 & 80.07 & 1.31 & 11.86 & 87.57 & 0.57 & 5.44 & 86.59 & 0.74 & 4.95 & 90.94 & 0.29 & 2.53 & 93.24 & 0.29 & 3.08 & 95.02 & 0.12 & 2.05 & 98.11 & 0.12 & 3.39 \\ 48 & 75 & 75.54 & 1.46 & 8.08 & 84.85 & 0.68 & 3.45 & 82.63 & 0.88 & 4.44 & 89.63 & 0.39 & 3.48 & 92.36 & 0.39 & 2.78 & 92.22 & 0.22 & 2.84 & 97.86 & 0.21 & 3.45 \\ 49 & 75 & 75.80 & 1.27 & 11.95 & 85.93 & 0.54 & 6.48 & 82.06 & 0.70 & 5.17 & 90.33 & 0.25 & 4.16 & 92.72 & 0.25 & 3.89 & 94.39 & 0.12 & 2.37 & 98.01 & 0.12 & 2.78 \\ 50 & 75 & 75.89 & 1.31 & 8.95 & 85.70 & 0.59 & 4.92 & 82.35 & 0.77 & 5.25 & 89.98 & 0.33 & 3.87 & 92.21 & 0.33 & 5.37 & 93.72 & 0.14 & 4.17 & 98.13 & 0.14 & 3.06 \\ 51 & 75 & 74.41 & 1.38 & 9.89 & 83.54 & 0.62 & 4.92 & 79.65 & 0.80 & 9.22 & 90.02 & 0.32 & 3.31 & 91.17 & 0.32 & 3.05 & 93.90 & 0.17 & 3.12 & 97.77 & 0.17 & 5.01 \\ 52 & 100 & 75.00 & 1.29 & 21.78 & 84.93 & 0.56 & 5.61 & 82.24 & 0.68 & 8.40 & 90.08 & 0.25 & 6.01 & 92.80 & 0.25 & 7.98 & 92.88 & 0.14 & 4.51 & 97.51 & 0.13 & 6.28 \\ 53 & 100 & 74.69 & 1.39 & 18.39 & 85.64 & 0.61 & 5.39 & 82.81 & 0.81 & 10.44 & 89.40 & 0.34 & 9.48 & 92.29 & 0.34 & 9.51 & 93.01 & 0.19 & 9.90 & 97.82 & 0.18 & 9.04 \\ 54 & 100 & 76.71 & 1.30 & 12.12 & 86.44 & 0.55 & 15.78 & 83.28 & 0.72 & 8.45 & 90.79 & 0.25 & 12.97 & 93.13 & 0.25 & 10.28 & 93.25 & 0.13 & 11.18 & 97.86 & 0.13 & 8.72 \\ 55 & 100 & 78.36 & 1.27 & 12.53 & 87.19 & 0.53 & 8.47 & 84.63 & 0.71 & 10.70 & 90.82 & 0.24 & 9.04 & 93.11 & 0.24 & 8.79 & 93.86 & 0.10 & 8.28 & 97.86 & 0.09 & 5.67 \\ 56 & 100 & 74.71 & 1.24 & 10.47 & 87.06 & 0.51 & 6.81 & 83.75 & 0.70 & 7.39 & 
90.39 & 0.25 & 5.98 & 92.72 & 0.25 & 8.44 & 93.31 & 0.10 & 4.16 & 98.11 & 0.10 & 4.47 \\ 57 & 150 & 77.22 & 1.18 & 32.74 & 87.06 & 0.51 & 9.47 & 84.09 & 0.61 & 16.11 & 91.76 & 0.25 & 7.92 & 93.84 & 0.25 & 9.67 & 94.95 & 0.10 & 7.15 & 98.18 & 0.10 & 7.09 \\ 58 & 150 & 76.28 & 1.22 & 35.51 & 86.70 & 0.53 & 17.92 & 83.81 & 0.68 & 13.92 & 91.32 & 0.26 & 10.33 & 93.66 & 0.26 & 13.31 & 93.58 & 0.13 & 12.22 & 97.95 & 0.13 & 8.86 \\ 59 & 150 & 78.24 & 1.21 & 32.87 & 86.89 & 0.53 & 12.39 & 83.82 & 0.68 & 15.68 & 90.94 & 0.27 & 9.98 & 93.42 & 0.27 & 10.26 & 93.29 & 0.13 & 9.34 & 98.06 & 0.13 & 11.51 \\ 60 & 150 & 75.48 & 1.27 & 25.24 & 86.74 & 0.54 & 9.20 & 83.01 & 0.75 & 18.15 & 91.09 & 0.29 & 7.84 & 93.52 & 0.29 & 10.50 & 93.69 & 0.14 & 7.86 & 97.72 & 0.14 & 10.61 \\ 61 & 150 & 77.01 & 1.30 & 26.79 & 85.68 & 0.52 & 16.45 & 82.85 & 0.73 & 15.54 & 91.30 & 0.25 & 10.25 & 93.35 & 0.25 & 11.84 & 94.06 & 0.11 & 10.73 & 97.68 & 0.11 & 11.14 \\ 62 & 200 & 76.25 & 1.16 & 40.99 & 86.58 & 0.48 & 16.57 & 83.45 & 0.61 & 20.79 & 91.66 & 0.23 & 20.17 & 92.75 & 0.23 & 19.89 & 96.15 & 0.10 & 18.34 & 98.29 & 0.09 & 14.40 \\ 63 & 200 & 76.59 & 1.17 & 42.08 & 86.45 & 0.47 & 14.81 & 83.10 & 0.63 & 18.89 & 92.26 & 0.21 & 23.59 & 93.28 & 0.21 & 21.79 & 95.72 & 0.09 & 24.38 & 98.22 & 0.08 & 16.07 \\ 64 & 200 & 76.38 & 1.17 & 50.52 & 86.24 & 0.46 & 17.90 & 82.93 & 0.64 & 22.10 & 92.43 & 0.21 & 20.39 & 93.54 & 0.21 & 23.26 & 96.53 & 0.10 & 18.14 & 98.41 & 0.10 & 14.86 \\ 65 & 200 & 76.42 & 1.26 & 57.39 & 85.63 & 0.55 & 22.95 & 82.63 & 0.67 & 24.57 & 90.82 & 0.26 & 21.10 & 93.30 & 0.26 & 21.96 & 93.80 & 0.10 & 16.18 & 98.06 & 0.10 & 13.29 \\ 66 & 200 & 75.20 & 1.37 & 41.12 & 85.06 & 0.60 & 14.68 & 82.38 & 0.76 & 26.99 & 89.81 & 0.30 & 17.54 & 92.35 & 0.30 & 20.28 & 93.83 & 0.16 & 14.06 & 98.03 & 0.16 & 16.59 \\ 67 & 200 & 77.85 & 1.24 & 38.12 & 86.03 & 0.54 & 18.53 & 83.54 & 0.71 & 25.85 & 91.27 & 0.28 & 15.76 & 93.49 & 0.28 & 21.04 & 94.56 & 0.14 & 14.78 & 98.10 & 0.14 & 14.00 \\ 68 & 200 & 76.36 & 
1.30 & 58.19 & 85.81 & 0.57 & 21.17 & 83.55 & 0.72 & 22.62 & 90.99 & 0.26 & 17.87 & 93.30 & 0.26 & 15.68 & 93.08 & 0.13 & 16.42 & 97.70 & 0.13 & 17.40 \\ 69 & 200 & 77.27 & 1.25 & 30.18 & 85.85 & 0.53 & 18.48 & 82.74 & 0.69 & 30.71 & 91.17 & 0.24 & 26.31 & 93.58 & 0.24 & 15.89 & 93.61 & 0.10 & 15.57 & 97.93 & 0.10 & 13.31 \\ 70 & 200 & 75.38 & 1.28 & 40.58 & 85.76 & 0.53 & 15.25 & 81.98 & 0.75 & 34.35 & 91.12 & 0.28 & 24.88 & 92.43 & 0.28 & 18.87 & 94.65 & 0.14 & 37.96 & 97.76 & 0.14 & 42.40 \\ 71 & 200 & 76.69 & 1.26 & 61.69 & 85.53 & 0.49 & 18.21 & 83.28 & 0.64 & 20.29 & 91.42 & 0.23 & 19.75 & 92.56 & 0.23 & 18.28 & 95.78 & 0.09 & 28.51 & 98.40 & 0.08 & 16.40 \\ 72 & 200 & 76.32 & 1.23 & 53.02 & 85.00 & 0.47 & 19.14 & 82.14 & 0.65 & 28.34 & 90.98 & 0.20 & 33.60 & 92.29 & 0.20 & 19.32 & 95.04 & 0.07 & 31.27 & 98.17 & 0.07 & 45.08 \\ 73 & 200 & 77.39 & 1.13 & 40.24 & 86.60 & 0.46 & 12.83 & 84.05 & 0.57 & 25.28 & 92.18 & 0.20 & 18.07 & 93.25 & 0.20 & 18.56 & 95.95 & 0.08 & 18.01 & 98.35 & 0.08 & 13.61 \\ \botrule \end{tabular} } \end{minipage} \end{center} \end{sidewaystable}
\section{Conclusion}\label{sec6}
In this paper, tighter upper bounds on the generation limits and ramping constraints are constructed by introducing additional binary variables to represent unit states. A multi-period locally ideal formulation based on a sliding window is proposed. It is worth noting that different window sizes can be set for different units in the system. In general, a larger sliding window yields a tighter formulation. However, the number of variables and constraints increases dramatically for large windows, which increases the computational burden. In practical applications, an appropriate window size can be chosen for each unit in the system, so as to obtain an efficient model that finds a high-quality solution in a relatively short time. A method to identify the facets of the constraint polytope is also provided in this paper.
For models with a large window, only the facet constraints and the necessary physical constraints are kept. This reduces memory requirements and improves computational efficiency without reducing tightness. To improve the compactness of the MP-1 models, we make a series of adjustments to the binary variables of MP-1 and finally obtain the compact models MP-3, which use as few binary variables as possible without reducing the tightness of the models. The experimental results show that the MP-3 models are computationally very efficient. To further tighten the model, the historical status is taken into account in the upper bounds of the generation limits and ramping constraints. This plays an important role in the tightness of the models for the first data set. However, the addition of the historical status does little to improve the tightness of the models for the second data set, while greatly reducing compactness. Although our MP-3 formulations are much more complex than the state-of-the-art MILP formulations, they are computationally very efficient: even when the system contains more than 1000 units, they can still obtain an optimal solution satisfying the accuracy requirement within 600 seconds. On the other hand, studying the influence of the sliding step size of the sliding window on the computational efficiency of the model is one of our future research directions. Although we have found tight multi-period models with high efficiency, we are still unable to represent the convex hull of feasible solutions for the single-unit problem. Representing the convex hull requires a large number of (probably $O(2^M)$) inequalities without explicit physical significance, such as ramping constraints with ``$P_{i,t-1}-P_{i,t}+P_{i,t+1}$'' as the left-hand-side expression.
{\noindent{\bf{Acknowledgments} } The work of LF Yang and SF Chen was supported by the Natural Science Foundation of Guangxi (2020GXNSFAA297173, 2020GXNSFDA238017, 2021GXNSFBA075012), by the Natural Science Foundation of China (51767003), and in part by the Thousands of Young and Middle-Aged Backbone Teachers Training Program for Guangxi Higher Education [Education Department of Guangxi (2017)]. The work of ZY Dong was partially supported by funding from the UNSW Digital Grid Futures Institute, UNSW, Sydney, under a cross-disciplinary fund scheme. We would like to thank Wei Li of Guangxi University for his valuable discussions.}
\section*{Appendix 1: Some feasible points of the M-period polytope with partial constraints}\label{appendix1}
Theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer} provides a way to identify whether a valid inequality is a facet. This technique is also used in \cite{damci2016polyhedral}. First, we construct some feasible points of the M-period polytope with partial constraints. Let $\varepsilon$ be a sufficiently small positive number and let $[r_1,r_2]$ be a member of the set $A_m$. According to equality (\ref{eq:new-initial-status-1}), there is exactly one $[m,k]\in A_m$ for $m\in [0,\min(U,T-M+1)]$ such that $\tau^m_{m,k}=1$. All variables whose values are not specified below are set to zero.
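For the reader's convenience, the facet criterion behind this construction can be restated as follows (given here for the full-dimensional case; this is a paraphrase, not a quotation of \cite{wolsey1988integer}):

```latex
% Facet characterization via affinely independent points
% (full-dimensional case).
Let $P \subseteq \mathbb{R}^n$ be a full-dimensional polyhedron and let
$\pi x \le \pi_0$ be a valid inequality for $P$. Then
$F = \{x \in P : \pi x = \pi_0\}$ is a facet of $P$ if and only if there
exist $n$ affinely independent points $x^1,\dots,x^n \in P$ with
$\pi x^i = \pi_0$ for $i = 1,\dots,n$.
```

The feasible points constructed below are used to supply such affinely independent points for the candidate inequalities.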
\setcounter{equation}{0} \renewcommand{\theequation}{A\arabic{equation}} \begin{footnotesize} \begin{align} &\tau^m_{h,k}=0,\forall [h,k]\in A_m;\widetilde{P}_r=0,\forall r\in [m,m+M-1]\label{p:1}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_r=1,\forall r\in[m,m+M-1]\label{p:2}\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_r=0,\forall r\in[r_1,r_2]\label{p:3}\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_0}=\varepsilon;\widetilde{P}_r=0,\forall r\in[r_1,r_2],r\ne r_0\label{p:4}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in [m,U];\tau^m_{r_1,r_2}=1;\widetilde{P}_r=0,\forall r\in [r_1,r_2]\label{p:5}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in [m,U];\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_0}=\varepsilon,r_1\le r_0\le r_2\label{p:6}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in [m,r_2]\label{p:7}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\notag\\ &\times\frac{\widetilde{P}_{r_3}-\widetilde{P}_0}{r_3}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in [m,r_2]\label{p:8}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}-\max(0,r_3-r_0)\notag\\ &\times\frac{\widetilde{P}_{r_3}-\widetilde{P}_0}{r_3}-\max(0,r_0-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3}-\varepsilon\rvert;\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\notag\\
&\times\frac{\widetilde{P}_{r_3}-\widetilde{P}_0}{r_3}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in [m,r_2],r\ne r_0\label{p:9}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in [m,r_1];\widetilde{P}_r=\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-(r-r_1)\notag\\ &\times\frac{\max[0,\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-\widetilde{P}_{shut}]}{r_2-r_1},\forall r\in [r_1,r_2]\label{p:10}\\ &\widetilde{P}_{r_0}=\lvert\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-(r_0-r_1)\times\frac{\max[0,\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-\widetilde{P}_{shut}]}{r_2-r_1}-\varepsilon\rvert;\notag\\ &\tau^m_{m,r_2}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,r_1],r\ne r_0;\widetilde{P}_r=\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-\notag\\ &(r-r_1)\times\frac{\max[0,\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-\widetilde{P}_{shut}]}{r_2-r_1},\forall r\in [r_1,r_2],r\ne r_0\label{p:11}\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\notag\\ &\forall r\in [r_1,r_2]\label{p:12}\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}-\notag\\ &\max(0,r_3-r_0)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1}-\max(0,r_0-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3}-\varepsilon\rvert;\notag\\ 
&\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\notag\\ &\forall r\in [r_1,r_2],r\ne r_0\label{p:13}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\tau^m_{r_1,r_2}=1;\notag\\ &\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\notag\\ &\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in [r_1,r_2]\label{p:14}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+\notag\\ &(r_3-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}-\max(0,r_3-r_0)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1}\notag\\ &-\max(0,r_0-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3}-\varepsilon\rvert;\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\notag\\ &\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in [r_1,r_2],r\ne r_0\label{p:15}\\ &\tau^m_{m,U}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(U-r_3)\widetilde{P}_{down}];\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\notag\\ &\times\frac{\widetilde{P}_{r_3}-\widetilde{P}_0}{r_3}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{U-r_3},\forall r\in[m,U];\tau^m_{r_1,r_2}=1;\notag\\ 
&\widetilde{P}_{r_4}=\min[1,\widetilde{P}_{start}+(r_4-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_4)\widetilde{P}_{down}];\widetilde{P}_r=\widetilde{P}_{r_4}-\max(0,r_4-r)\notag\\ &\times\frac{\max(0,\widetilde{P}_{r_4}-\widetilde{P}_{start})}{r_4-r_1}-\max(0,r-r_4)\times\frac{\max(0,\widetilde{P}_{r_4}-\widetilde{P}_{shut})}{r_2-r_4},\forall r\in [r_1,r_2]\label{p:16}\\ &\tau^m_{m,U}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(U-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{r_3}-\widetilde{P}_0}{r_3}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{U-r_3},\forall r\in[m,U];\notag\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_r=0,\forall r\in [r_1,r_2]\label{p:17}\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in[r_1,m+M-1]\label{p:18}\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}];\notag\\ &\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}-\max(0,r_3-r_0)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1}-\varepsilon\rvert;\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in[r_1,m+M-1]\label{p:19}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\notag\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in[r_1,m+M-1]\label{p:20}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall 
r\in[m,U];\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}];\notag\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}-\max(0,r_3-r_0)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1}-\varepsilon\rvert;\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in[r_1,m+M-1],r\ne r_0\label{p:21}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up});\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{r_3}-\widetilde{P}_0}{r_3},\forall r\in[m,m+M-1]\label{p:22}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up});\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}-\max(0,r_3-r_0)\times\frac{\widetilde{P}_{r_3}-\widetilde{P}_0}{r_3}-\varepsilon\rvert;\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{r_3}-\widetilde{P}_0}{r_3},\forall r\in[m,m+M-1],r\ne r_0\label{p:23}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_r=\max[0,1-\max(0,r_3-r)\times\widetilde{P}_{up}],\forall r\in[m,m+M-1]\label{p:24}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_r=\max[0,1-\max(0,r_3-r)\times\widetilde{P}_{up}-\varepsilon],\forall r\in[m,m+M-1]\label{p:25}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_r=\max[0,1-\max(0,r-r_3)\times\widetilde{P}_{down}],\forall r\in[m,m+M-1]\label{p:26}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_r=\max[0,1-\max(0,r-r_3)\times\widetilde{P}_{down}-\varepsilon],\forall r\in[m,m+M-1]\label{p:27}\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in [r_4,r_2];\notag\\ 
&\widetilde{P}_r=\widetilde{P}_{r_4}-(r_4-r)\times\frac{\max(0,\widetilde{P}_{r_4}-\widetilde{P}_{start})}{r_4-r_1},r\in [r_1,r_4-1]\label{p:28}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\notag\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in [r_4,r_2];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_4}-(r_4-r)\times\frac{\max(0,\widetilde{P}_{r_4}-\widetilde{P}_{start})}{r_4-r_1},\forall r\in [r_1,r_4-1]\label{p:29}\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}-\max(0,r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in [r_1,r_4];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_4}-(r-r_4)\times\frac{\max(0,\widetilde{P}_{r_4}-\widetilde{P}_{shut})}{r_2-r_4},\forall r\in [r_4+1,r_2]\label{p:30}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\notag\\ &\tau^m_{r_1,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}-\max(0,r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in [r_1,r_4];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_4}-\max(0,r-r_4)\times\frac{\max(0,\widetilde{P}_{r_4}-\widetilde{P}_{shut})}{r_2-r_4},\forall r\in [r_4+1,r_2]\label{p:31}\\ 
&\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}];\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in [r_1,m+M-1]\label{p:32}\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}]-\varepsilon;\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in [r_1,m+M-1]\label{p:33}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\notag\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}];\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in [r_1,m+M-1]\label{p:34}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\notag\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}]-\varepsilon;\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in [r_1,m+M-1]\label{p:35}\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}];\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-\max(0,r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in [r_3+1,m+M-1];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-(r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in[r_1,r_3-1]\label{p:36}\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}]-\varepsilon;\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-\max(0,r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in [r_3+1,m+M-1];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-(r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall 
r\in[r_1,r_3-1]\label{p:37}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\notag\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}];\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-\max(0,r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in [r_3+1,m+M-1];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-(r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in[r_1,r_3-1]\label{p:38}\\ &\tau^m_{m,U}=1;\widetilde{P}_r=\max(0,\widetilde{P}_0-r\widetilde{P}_{down}),\forall r\in[m,U];\notag\\ &\tau^m_{r_1,m+M-1}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{start}+(r_3-r_1)\widetilde{P}_{up}]-\varepsilon;\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-\max(0,r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in [r_3+1,m+M-1];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-(r_3-r)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{start})}{r_3-r_1},\forall r\in[r_1,r_3-1]\label{p:39}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r-r_3)\times\frac{\widetilde{P}_{ramp}}{a},\forall r\in[m,r_4];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_4}-(r-r_4)\times\frac{\max(0,\widetilde{P}_{r_4}-\widetilde{P}_{shut})}{r_2-r_4},\forall r\in[r_4+1,r_2]\label{p:40}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in[r_4,r_2];\notag\\ &\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_4}}{r_4},\forall r\in[m,r_4-1]\label{p:41}\\ 
&\tau^m_{m,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}-\max(0,r_3-r_0)\notag\\ &\times\frac{\widetilde{P}_{ramp}}{a}-\max(0,r_0-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3}-\varepsilon\rvert,r_4<r_0\le r_2;\widetilde{P}_r=\widetilde{P}_{r_3}-\notag\\ &\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{r_2-r_3},\forall r\in[r_4,r_2],r\ne r_0;\notag\\ &\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_4}}{r_4},\forall r\in[1,r_4-1]\label{p:42}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(r_2-r_3)\widetilde{P}_{down}];\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-(r-r_3)\notag\\ &\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in[r_3+1,m+M-1];\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3},\forall r\in[m,r_3-1]\label{p:43}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up});\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a},\notag\\ &\forall r\in[r_4,m+M-1];\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_4}}{r_4},\forall r\in[m,r_4-1]\label{p:44}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up});\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}-\max(0,r_3-r_0)\times\frac{\widetilde{P}_{ramp}}{a}-\varepsilon\rvert,\notag\\ &r_4<r_0\le m+M-1;\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a},\forall r\in[r_4,m+M-1],r\ne r_0;\notag\\ &\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_4}}{r_4},\forall r\in[m,r_4-1],r\ne r_0\label{p:45}\\ 
&\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up})-\varepsilon;\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a},\notag\\ &\forall r\in[r_4,m+M-1];\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_4}}{r_4},\forall r\in[m,r_4-1]\label{p:46}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up})-\varepsilon;\widetilde{P}_{r_0}=\widetilde{P}_{r_3}-\max(0,r_3-r_0)\times\frac{\widetilde{P}_{ramp}}{a}-\varepsilon,\notag\\ &r_4<r_0\le m+M-1;\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a},\forall r\in[r_4,m+M-1],r\ne r_0;\notag\\ &\widetilde{P}_{r_0}=\widetilde{P}_0-r_0\times\frac{\widetilde{P}_0-\widetilde{P}_{r_4}}{r_4}-\varepsilon,m\le r_0<r_4;\notag\\ &\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_4}}{r_4},\forall r\in[m,r_4-1],r\ne r_0\label{p:47}\\ &\tau^m_{m,U}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(U-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}-\max(0,r_3-r)\times\frac{\widetilde{P}_{ramp}}{a}-\max(0,r-r_3)\times\frac{\max(0,\widetilde{P}_{r_3}-\widetilde{P}_{shut})}{U-r_3},\forall r\in[r_4,U];\notag\\ &\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_4}}{r_4},\forall r\in[m,r_4-1];\tau^m_{r_1,r_2}=1;\widetilde{P}_r=0,\forall r\in[r_1,r_2]\label{p:48}\\ &\tau^m_{m,U}=1;\widetilde{P}_{r_3}=\min[1,\widetilde{P}_0+r_3\widetilde{P}_{up},\widetilde{P}_{shut}+(U-r_3)\widetilde{P}_{down}];\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-(r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in[r_3,U];\notag\\ &\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3},\forall r\in[m,r_3-1];\tau^m_{r_1,r_2}=1;\widetilde{P}_r=0,\forall r\in[r_1,r_2]\label{p:49}\\ 
&\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up});\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-(r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in[r_3+1,r_4];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_4},\forall r\in[r_4+1,m+M-1];\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3},\forall r\in[m,r_3-1]\label{p:50}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up});\widetilde{P}_{r_0}=\max[0,\widetilde{P}_{r_3}-(r_0-r_3)\times\frac{\widetilde{P}_{ramp}}{a}]+\varepsilon,\notag\\ &r_3<r_0<r_4;\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-(r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in[r_3+1,r_4],r\ne r_0;\notag\\ &\widetilde{P}_{r_0}=\widetilde{P}_{r_4}+\varepsilon,r_4<r_0\le m+M-1;\widetilde{P}_r=\widetilde{P}_{r_4},\forall r\in[r_4+1,m+M-1],r\ne r_0;\notag\\ &\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3},\forall r\in[m,r_3-1]\label{p:51}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up})-\varepsilon;\notag\\ &\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-(r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\forall r\in[r_3+1,r_4];\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_4},\forall r\in[r_4+1,m+M-1];\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3},\forall r\in[m,r_3-1]\label{p:52}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\min(1,\widetilde{P}_0+r_3\widetilde{P}_{up})-\varepsilon;\widetilde{P}_r=\max[0,\widetilde{P}_{r_3}-(r-r_3)\times\frac{\widetilde{P}_{ramp}}{a}],\notag\\ &\forall r\in[r_3+1,r_4];\widetilde{P}_r=\widetilde{P}_{r_4},\forall r\in[r_4+1,m+M-1];\widetilde{P}_{r_0}=\widetilde{P}_0-r_0\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3}-\varepsilon,\notag\\ &m\le r_0<r_3;\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3},\forall r\in[m,r_3-1],r\ne r_0\label{p:53}\\ 
&\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\max(0,\widetilde{P}_0-r_3\widetilde{P}_{down});\notag\\ &\widetilde{P}_r=\widetilde{P}_{r_3}+\max(0,r_3-r)\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3},\forall r\in[m,m+M-1]\label{p:54}\\ &\tau^m_{m,m+M-1}=1;\widetilde{P}_{r_3}=\max(0,\widetilde{P}_0-r_3\widetilde{P}_{down});\widetilde{P}_{r_0}=\lvert\widetilde{P}_{r_3}+\max(0,r_3-r_0)\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3}\notag\\ &-\varepsilon\rvert;\widetilde{P}_r=\widetilde{P}_{r_3}+\max(0,r_3-r)\times\frac{\widetilde{P}_0-\widetilde{P}_{r_3}}{r_3},\forall r\in[m,m+M-1],r\ne r_0\label{p:55}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\max(0,\widetilde{P}_0-\widetilde{P}_{shut})}{r_2},\forall r\in[m,r_2]\label{p:56}\\ &\tau^m_{m,r_2}=1;\widetilde{P}_{r_0}=\lvert\widetilde{P}_0-r_0\times\frac{\max(0,\widetilde{P}_0-\widetilde{P}_{shut})}{r_2}-\varepsilon\rvert;\notag\\ &\widetilde{P}_r=\widetilde{P}_0-r\times\frac{\max(0,\widetilde{P}_0-\widetilde{P}_{shut})}{r_2},\forall r\in[m,r_2],r\ne r_0\label{p:57} \end{align} \end{footnotesize} \section*{Appendix 2: Proofs of the propositions for the history-dependent polytope} \setcounter{equation}{0} \renewcommand{\theequation}{B\arabic{equation}} \setcounter{proposition}{0} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp1}}] If all points of $\mathcal{P}_m^I$ that are tight at inequality (\ref{eq:tight-max-output-0}) satisfy \begin{equation} \sum^{m+M-1}_{j=m}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:1} \end{equation} then \begin{enumerate} \item $\lambda_0=0$.\\ Point (\ref{p:1}) satisfies inequality (\ref{eq:tight-max-output-0}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:1}), then $\lambda_0=0$. \item $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$.\\ Consider the following two cases: \begin{enumerate} \item $m\le j<t$.\\ Consider point (\ref{p:3}) and point (\ref{p:4}) with $r_1=m$, $r_2=t-1$, $r_0=j$. 
Both points are valid and satisfy inequality (\ref{eq:tight-max-output-0}) at equality. Because both points satisfy equality (\ref{pr:1}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \item $t<j\le m+M-1$.\\ Consider point (\ref{p:3}) and point (\ref{p:4}) with $r_1=t+1$, $r_2=m+M-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-max-output-0}) at equality. Because both points satisfy equality (\ref{pr:1}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \end{enumerate} \item $\varphi_{h,k}=0,t\notin[h,k]$.\\ If $[h,k]\in A_m$ exists such that $t\notin[h,k]$, we consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. If Point (\ref{p:3}) satisfies equality (\ref{pr:1}), then $\varphi_{h,k}=\lambda_0$. We have shown that $\lambda_0=0$ in part 1. Hence $\varphi_{h,k}=0$. \item $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],m<h\le t\le k<m+M-1$.\\ If $[h,k]\in A_m$ exists such that $m<h\le t\le k<m+M-1$, we consider point (\ref{p:12}) with $r_1=h$, $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:12}) satisfies equality (\ref{pr:1}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}],m<h\le t$.\\ If $[h,m+M-1]\in A_m$ exists such that $m<h\le t$, we consider point (\ref{p:18}) with $r_1=h$, $r_3=t$. 
Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:18}) satisfies equality (\ref{pr:1}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. \item $\varphi_{m,k}=-\omega_t\times\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],t\le k<m+M-1$.\\ If $[m,k]\in A_m$ exists such that $t\le k<m+M-1$, we consider point (\ref{p:7}) with $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:7}) satisfies equality (\ref{pr:1}), then we get $\omega_t\times\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=-\omega_t\times\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $\varphi_{m,m+M-1}=-\omega_t$.\\ If $[m,m+M-1]\in A_m$ exists, we consider point (\ref{p:2}). Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:2}) satisfies equality (\ref{pr:1}), then we get $\omega_t+\varphi_{m,m+M-1}=\lambda_0$. Hence, $\varphi_{m,m+M-1}=-\omega_t$. \end{enumerate} According to Theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer}, we get Proposition \ref{prp1}. 
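The cancellation behind part 2 can be written out once, as a sketch: the two tight points taken from (\ref{p:3}) and (\ref{p:4}) coincide in every coordinate except $r_0=j$, where their values are $0$ and $\varepsilon$, respectively, so subtracting the corresponding instances of (\ref{pr:1}) cancels all common terms and leaves
\[
\omega_j\cdot 0-\omega_j\cdot\varepsilon=0
\quad\Longrightarrow\quad
\omega_j\,\varepsilon=0
\quad\Longrightarrow\quad
\omega_j=0.
\]
The same subtraction of an unperturbed point against its $\varepsilon$-perturbed counterpart yields the corresponding coefficient identities in the remaining proofs.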
\end{proof} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp2}}] If all points of $\mathcal{Q}_m^I$ that are tight at inequality (\ref{eq:tight-max-output-0}) satisfy \begin{equation} \sum^{m+M-1}_{j=m}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:2} \end{equation} then \begin{enumerate} \item $\lambda_0=0$.\\ Point (\ref{p:1}) satisfies inequality (\ref{eq:tight-max-output-0}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:2}), then $\lambda_0=0$. \item $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$.\\ Consider the following two cases: \begin{enumerate} \item $m\le j<t$.\\ Consider point (\ref{p:3}) and point (\ref{p:4}) with $r_1=m$, $r_2=t-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-max-output-0}) at equality. Because both points satisfy equality (\ref{pr:2}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \item $t<j\le m+M-1$.\\ Consider point (\ref{p:3}) and point (\ref{p:4}) with $r_1=t+1$, $r_2=m+M-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-max-output-0}) at equality. Because both points satisfy equality (\ref{pr:2}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \end{enumerate} \item $\varphi_{h,k}=0,t\notin[h,k]$.\\ If $[h,k]\in A_m$ exists such that $t\notin[h,k]$, we consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. If Point (\ref{p:3}) satisfies equality (\ref{pr:2}), then $\varphi_{h,k}=\lambda_0$. We have shown that $\lambda_0=0$ in part 1. Hence $\varphi_{h,k}=0$. \item $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],m<h\le t\le k<m+M-1$.\\ If $[h,k]\in A_m$ exists such that $m<h\le t\le k<m+M-1$, we consider point (\ref{p:12}) with $r_1=h$, $r_2=k$, $r_3=t$. 
Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:12}) satisfies equality (\ref{pr:2}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}],m<h\le t$.\\ If $[h,m+M-1]\in A_m$ exists such that $m<h\le t$, we consider point (\ref{p:18}) with $r_1=h$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:18}) satisfies equality (\ref{pr:2}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. \item $\varphi_{m,k}=-\omega_t\times\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],t\le k<m+M-1$.\\ If $[m,k]\in A_m$ exists such that $t\le k<m+M-1$, we consider point (\ref{p:7}) with $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:7}) satisfies equality (\ref{pr:2}), then we get $\omega_t\times\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=-\omega_t\times\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $\varphi_{m,m+M-1}=-\omega_t$.\\ If $[m,m+M-1]\in A_m$ exists, we consider point (\ref{p:2}). Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-0}) at equality. 
We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:2}) satisfies equality (\ref{pr:2}), then we get $\omega_t+\varphi_{m,m+M-1}=\lambda_0$. Hence, $\varphi_{m,m+M-1}=-\omega_t$. \end{enumerate} According to Theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer}, we get Proposition \ref{prp2}. \end{proof} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp3}}] Necessity: Suppose, for contradiction, that $a\ge \frac{1}{\widetilde{P}_{up}}$. We have $\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]=\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$ and $\min(1,a\widetilde{P}_{up})=1$. Then inequality (\ref{eq:tight-ramp-up-0}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le \sum\nolimits_{\{[h,k]\in A_m,h\le t-a<t\le k<m+M-1\}}\tau^m_{h,k}\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\sum\nolimits_{\{[h,m+M-1]\in A_m,h\le t-a\}}\tau^m_{h,m+M-1}+\sum\nolimits_{\{[h,k]\in A_m,t-a<h\le t\le k<m+M-1\}}\tau^m_{h,k} \min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\sum\nolimits_{\{[h,m+M-1]\in A_m,t-a<h\le t\}}\tau^m_{h,m+M-1}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. It is therefore dominated by inequalities (\ref{eq:tight-max-output-0}) and $\widetilde{P}_{t-a}\ge 0$, a contradiction.\\ Sufficiency: If all points of $\mathcal{Q}_m^I$ that are tight at inequality (\ref{eq:tight-ramp-up-0}) satisfy \begin{equation} \sum^{m+M-1}_{j=m}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:3} \end{equation} then \begin{enumerate} \item $\lambda_0=0$.\\ Point (\ref{p:1}) satisfies inequality (\ref{eq:tight-ramp-up-0}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:3}), then $\lambda_0=0$. 
\item $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$.\\ Consider the following two cases: \begin{enumerate} \item $j\in[m,t-a-1]\cup[t-a+1,t-1]$.\\ Consider point (\ref{p:3}) and point (\ref{p:4}) with $r_1=m$, $r_2=t-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-0}) at equality. Because both points satisfy equality (\ref{pr:3}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \item $j\in[t+1,m+M-1]$.\\ Consider point (\ref{p:3}) and point (\ref{p:4}) with $r_1=t+1$, $r_2=m+M-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-0}) at equality. Because both points satisfy equality (\ref{pr:3}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \end{enumerate} \item $\omega_{t-a}=-\omega_t$.\\ If $a<\frac{1}{\widetilde{P}_{up}}$, we get $a\widetilde{P}_{up}<1$. We consider point (\ref{p:24}) and point (\ref{p:25}) with $r_3=t$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-0}) at equality. We have shown that $\omega_j=0$, $j\in[m,m+M-1]\backslash\{t-a,t\}$ in part 2. Because both points satisfy equality (\ref{pr:3}), we get $\omega_{t-a} (1-a\widetilde{P}_{up})+\omega_t=\omega_{t-a} (1-a\widetilde{P}_{up}-\varepsilon)+\omega_t (1-\varepsilon)$. Hence $\omega_{t-a}=-\omega_t$. \item $\varphi_{h,k}=0,t\notin[h,k]$.\\ If $[h,k]\in A_m$ exists such that $t\notin[h,k]$, we consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-0}) at equality. If Point (\ref{p:3}) satisfies equality (\ref{pr:3}), then $\varphi_{h,k}=\lambda_0$. We have shown that $\lambda_0=0$ in part 1. Hence $\varphi_{h,k}=0$. 
\item $\varphi_{h,k}=-\omega_t\times\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],h\le t-a<t\le k<m+M-1$.\\ If $[h,k]\in A_m$ exists such that $h\le t-a<t\le k<m+M-1$, we consider point (\ref{p:28}) with $r_1=h$, $r_2=k$, $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above. If Point (\ref{p:28}) satisfies equality (\ref{pr:3}), then we get $\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]-\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\}+\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $\varphi_{h,m+M-1}=-\omega_t\times\min(1,a\widetilde{P}_{up}),h\le t-a$.\\ If $[h,m+M-1]\in A_m$ exists such that $h\le t-a$, we consider point (\ref{p:32}) with $r_1=h$, $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min(1,a\widetilde{P}_{up})$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above. If Point (\ref{p:32}) satisfies equality (\ref{pr:3}), then we get $\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]-\min(1,a\widetilde{P}_{up})\}+\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min(1,a\widetilde{P}_{up})$. 
\item $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],t-a<h\le t\le k<m+M-1$.\\ If $[h,k]\in A_m$ exists such that $t-a<h\le t\le k<m+M-1$, we consider point (\ref{p:12}) with $r_1=h$, $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$ in part 2. If Point (\ref{p:12}) satisfies equality (\ref{pr:3}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}],t-a<h\le t$.\\ If $[h,m+M-1]\in A_m$ exists such that $t-a<h\le t$, we consider point (\ref{p:18}) with $r_1=h$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$ in part 2. If Point (\ref{p:18}) satisfies equality (\ref{pr:3}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. \end{enumerate} According to Theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer}, we get Proposition \ref{prp3}. \end{proof} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp4}}] Necessity: Suppose, for contradiction, that $a\ge \frac{1}{\widetilde{P}_{down}}$. We have $\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]=\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up}]$ and $\min(1,a\widetilde{P}_{down})=1$. 
Then inequality (\ref{eq:tight-ramp-down-0}) can be written as $\widetilde{P}_{t-a}-\widetilde{P}_t\le \sum\nolimits_{\{[h,k]\in A_m,m<h\le t-a<t\le k\}}\tau^m_{h,k}\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up}]+\sum\nolimits_{\{[m,k]\in A_m,t\le k\}}\tau^m_{m,k}+\sum\nolimits_{\{[h,k]\in A_m,m<h\le t-a\le k<t\}}\tau^m_{h,k} \min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\sum\nolimits_{\{[m,k]\in A_m,t-a\le k<t\}}\tau^m_{m,k}\min[1,\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$. It is dominated by inequalities (\ref{eq:tight-max-output-0}) and $\widetilde{P}_t\ge 0$.\\ Sufficiency: If all points of $\mathcal{Q}_m^I$ that are tight at inequality (\ref{eq:tight-ramp-down-0}) satisfy \begin{equation} \sum^{m+M-1}_{j=m}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:4} \end{equation} then \begin{enumerate} \item $\lambda_0=0$.\\ Point (\ref{p:1}) satisfies inequality (\ref{eq:tight-ramp-down-0}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:4}), then $\lambda_0=0$. \item $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$.\\ Consider the following two cases: \begin{enumerate} \item $j\in[t-a+1,t-1]\cup[t+1,m+M-1]$.\\ Consider point (\ref{p:3}) and point (\ref{p:4}) with $r_1=t-a+1$, $r_2=m+M-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-0}) at equality. Because both points satisfy equality (\ref{pr:4}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \item $j\in[m,t-a-1]$.\\ Consider point (\ref{p:3}) and point (\ref{p:4}) with $r_1=m$, $r_2=t-a-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-0}) at equality. Because both points satisfy equality (\ref{pr:4}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \end{enumerate} \item $\omega_{t-a}=-\omega_t$.\\ If $a<\frac{1}{\widetilde{P}_{down}}$, we get $a\widetilde{P}_{down}<1$. 
We consider point (\ref{p:26}) and point (\ref{p:27}) with $r_3=t-a$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-0}) at equality. We have shown that $\omega_j=0$, $j\in[m,m+M-1]\backslash\{t-a,t\}$ in part 2. Because both points satisfy equality (\ref{pr:4}), we get $\omega_{t-a} +\omega_t(1-a\widetilde{P}_{down})=\omega_{t-a}(1-\varepsilon)+\omega_t (1-a\widetilde{P}_{down}-\varepsilon)$. Hence $\omega_{t-a}=-\omega_t$. \item $\varphi_{h,k}=0,t-a\notin[h,k]$.\\ If $[h,k]\in A_m$ exists such that $t-a\notin[h,k]$, we consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-0}) at equality. If Point (\ref{p:3}) satisfies equality (\ref{pr:4}), then $\varphi_{h,k}=\lambda_0$. We have shown that $\lambda_0=0$ in part 1. Hence $\varphi_{h,k}=0$. \item $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}],m<h\le t-a<t\le k$.\\ If $[h,k]\in A_m$ exists such that $m<h\le t-a<t\le k$, we consider point (\ref{p:30}) with $r_1=h$, $r_2=k$, $r_3=t-a$, $r_4=t$, $\widetilde{P}_{ramp}=\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above. If Point (\ref{p:30}) satisfies equality (\ref{pr:4}), then we get $\omega_t\{\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]-\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]\}+\omega_{t-a}\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$. 
\item $\varphi_{m,k}=\omega_t\times\min(1,a\widetilde{P}_{down}),t\le k$.\\ If $[m,k]\in A_m$ exists such that $t\le k$, we consider point (\ref{p:40}) with $r_2=k$, $r_3=t-a$, $r_4=t$, $\widetilde{P}_{ramp}=\min(1,a\widetilde{P}_{down})$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above. If Point (\ref{p:40}) satisfies equality (\ref{pr:4}), then we get $\omega_t\{\min[1,\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]-\min(1,a\widetilde{P}_{down})\}+\omega_{t-a}\times\min[1,\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=\omega_t\times\min(1,a\widetilde{P}_{down})$. \item $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}],m<h\le t-a\le k<t$.\\ If $[h,k]\in A_m$ exists such that $m<h\le t-a\le k<t$, we consider point (\ref{p:12}) with $r_1=h$, $r_2=k$, $r_3=t-a$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-0}) at equality. We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above. If Point (\ref{p:12}) satisfies equality (\ref{pr:4}), then we get $\omega_{t-a}\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$. \item $\varphi_{m,k}=\omega_t\times\min[1,\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}],t-a\le k<t$.\\ If $[m,k]\in A_m$ exists such that $t-a\le k<t$, we consider point (\ref{p:7}) with $r_2=k$, $r_3=t-a$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-0}) at equality. 
We have shown that $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above. If Point (\ref{p:7}) satisfies equality (\ref{pr:4}), then we get $\omega_{t-a}\times\min[1,\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$. \end{enumerate} According to theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer}, we get Proposition \ref{prp4}. \end{proof} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp5}}] If all points of $\widetilde{\mathcal{P}}_m^I$ that are tight at inequality (\ref{eq:tight-min-output-1}) satisfy \begin{equation} \sum^{m+M-1}_{j=\max(1,m)}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:5} \end{equation} then \begin{enumerate} \item $\lambda_0=0$ when $m>U$.\\ If $m>U$, point (\ref{p:1}) is valid and satisfies inequality (\ref{eq:tight-min-output-1}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:5}), then $\lambda_0=0$. \item $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$.\\ Consider point (\ref{p:54}) and point (\ref{p:55}) with $r_3=t$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-min-output-1}) at equality. Because both points satisfy equality (\ref{pr:5}), we get $\omega_j[\widetilde{P}_t+\max(0,t-j)\times\frac{\widetilde{P}_0-\widetilde{P}_t}{t}]=\omega_j\lvert\widetilde{P}_t+\max(0,t-j)\times\frac{\widetilde{P}_0-\widetilde{P}_t}{t}-\varepsilon\rvert$. Hence $\omega_j=0$. \item $\varphi_{m,k}=\lambda_0,t\notin[m,k]$.\\ If $[m,k]\in A_m$ exists such that $t\notin[m,k]$, we consider point (\ref{p:10}) with $r_1=r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:10}) satisfies equality (\ref{pr:5}), then $\varphi_{m,k}=\lambda_0$. 
\item $\varphi_{m,k}=\lambda_0-\omega_t\times\max(0,\widetilde{P}_0-t\widetilde{P}_{down}),t\in[m,k]$.\\ If $[m,k]\in A_m$ exists such that $t\in[m,k]$, we consider point (\ref{p:10}) with $r_1=r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:10}) satisfies equality (\ref{pr:5}), then $\varphi_{m,k}=\lambda_0-\omega_t\times\max(0,\widetilde{P}_0-t\widetilde{P}_{down})$. \item $\varphi_{h,k}=0,h\ne m$.\\ If $[h,k]\in A_m,h\ne m$ exists, we consider the following two cases: \begin{enumerate} \item $t\le U$.\\ Consider point (\ref{p:5}) with $r_1=h$, $r_2=k$. Clearly this point satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0-\omega_t\times\max(0,\widetilde{P}_0-t\widetilde{P}_{down})$, $t\in[m,k]$ above. If Point (\ref{p:5}) satisfies equality (\ref{pr:5}), then we get $\omega_t\times\max(0,\widetilde{P}_0-t\widetilde{P}_{down})+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \item $t>U$.\\ If $m\le U$, we consider point (\ref{p:5}) with $r_1=h$, $r_2=k$. Clearly this point satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0$, $t\notin[m,k]$ above. If Point (\ref{p:5}) satisfies equality (\ref{pr:5}), then we get $\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. If $m>U$, we consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\lambda_0=0$ when $m>U$ in part 1. If Point (\ref{p:3}) satisfies equality (\ref{pr:5}), then we get $\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. 
\end{enumerate} \end{enumerate} According to theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer}, we get Proposition \ref{prp5}. \end{proof} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp6}}] If all points of $\widetilde{\mathcal{P}}_m^I$ that are tight at inequality (\ref{eq:tight-max-output-1}) satisfy \begin{equation} \sum^{m+M-1}_{j=\max(1,m)}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:6} \end{equation} then \begin{enumerate} \item $\lambda_0=0$ when $m>U$.\\ If $m>U$, point (\ref{p:1}) is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:6}), then $\lambda_0=0$. \item $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$.\\ Consider point (\ref{p:22}) and point (\ref{p:23}) with $r_3=t$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-max-output-1}) at equality. Because both points satisfy equality (\ref{pr:6}), we get $\omega_j[\widetilde{P}_t-\max(0,t-j)\times\frac{\widetilde{P}_t-\widetilde{P}_0}{t}]=\omega_j\lvert\widetilde{P}_t-\max(0,t-j)\times\frac{\widetilde{P}_t-\widetilde{P}_0}{t}-\varepsilon\rvert$. Hence $\omega_j=0$. \item $\varphi_{m,k}=\lambda_0,t\notin[m,k]$.\\ If $[m,k]\in A_m$ exists such that $t\notin[m,k]$, we consider point (\ref{p:10}) with $r_1=m$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:10}) satisfies equality (\ref{pr:6}), then $\varphi_{m,k}=\lambda_0$. \item $\varphi_{m,k}=\lambda_0-\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],m\le t\le k<m+M-1$.\\ If $[m,k]\in A_m$ exists such that $m\le t\le k<m+M-1$, we consider point (\ref{p:8}) with $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. 
We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:8}) satisfies equality (\ref{pr:6}), then we get $\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=\lambda_0-\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $\varphi_{m,m+M-1}=\lambda_0-\omega_t\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})$.\\ We consider point (\ref{p:22}) with $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:22}) satisfies equality (\ref{pr:6}), then we get $\omega_t\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})+\varphi_{m,m+M-1}=\lambda_0$. Hence, $\varphi_{m,m+M-1}=\lambda_0-\omega_t\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})$. \item $\varphi_{h,k}=0,h\ne m,t\notin[h,k]$.\\ If $[h,k]\in A_m,h\ne m$ exists such that $t\notin[h,k]$, we consider the following two cases: \begin{enumerate} \item $t\le U$.\\ Consider point (\ref{p:16}) with $r_1=h$, $r_2=r_4=k$, $r_3=t$. Clearly this point satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0-\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$, $m\le t\le k<m+M-1$ above. If Point (\ref{p:16}) satisfies equality (\ref{pr:6}), then we get $\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down} ]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \item $t>U$.\\ If $m\le U$, we consider point (\ref{p:16}) with $r_1=h$, $r_2=r_4=k$, $r_3=U$. Clearly this point satisfies inequality (\ref{eq:tight-max-output-1}) at equality. 
We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0$, $t\notin[m,k]$ above. If Point (\ref{p:16}) satisfies equality (\ref{pr:6}), then we get $\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. If $m>U$, we consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\lambda_0=0$ when $m>U$ in part 1. If Point (\ref{p:3}) satisfies equality (\ref{pr:6}), then we get $\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \end{enumerate} \item $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],m<h\le t\le k<m+M-1$.\\ If $[h,k]\in A_m$ exists such that $m<h\le t\le k<m+M-1$, we consider the following two cases: \begin{enumerate} \item $m\le U$.\\ Consider point (\ref{p:16}) with $r_1=h$, $r_2=k$, $r_3=U$, $r_4=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0$, $t\notin[m,k]$ above. If Point (\ref{p:16}) satisfies equality (\ref{pr:6}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $m>U$.\\ Consider point (\ref{p:12}) with $r_1=h$, $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\lambda_0=0$ when $m>U$ above. 
If Point (\ref{p:12}) satisfies equality (\ref{pr:6}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \end{enumerate} \item $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}],m<h\le t$.\\ If $[h,m+M-1]\in A_m$ exists such that $m<h\le t$, we consider the following two cases: \begin{enumerate} \item $m\le U$.\\ Consider point (\ref{p:20}) with $r_1=h$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0$, $t\notin[m,k]$ above. If Point (\ref{p:20}) satisfies equality (\ref{pr:6}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{m,U}+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. \item $m>U$.\\ Consider point (\ref{p:18}) with $r_1=h$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\lambda_0=0$ when $m>U$ above. If Point (\ref{p:18}) satisfies equality (\ref{pr:6}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. \end{enumerate} \end{enumerate} According to theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer}, we get Proposition \ref{prp6}. \end{proof} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp7}}] Necessity: Suppose, for contradiction, that none of the conditions is satisfied. Then we have: \begin{enumerate} \item $t>\max(1,m)$. 
\item $t\le K$. \item $\widetilde{P}_0-t\widetilde{P}_{down}\ge 0$. \end{enumerate} We have $\widetilde{P}_0-(t-1)\widetilde{P}_{down}\ge\widetilde{P}_{shut}$ or $t\le W$. If $\widetilde{P}_0-(t-1)\widetilde{P}_{down}=\widetilde{P}_{shut}$, the unit must remain operational at least until time period $t-1$ (i.e., $t-1\le U$). If $\widetilde{P}_0-(t-1)\widetilde{P}_{down}>\widetilde{P}_{shut}$ or $t\le W$, the unit must remain operational at least until time period $t$ (i.e., $t\le U$). According to the definition of $A_m$, inequalities (\ref{eq:tight-min-output-1}) and (\ref{eq:tight-ramp-down-1}) can be written as $\widetilde{P}_t\ge\sum\nolimits_{\{[m,k] \in A_m,t\le k\}}\tau^m_{m,k}(\widetilde{P}_0-t\widetilde{P}_{down})$ and $\widetilde{P}_{t-a}-\widetilde{P}_t\le\sum\nolimits_{\{[m,k] \in A_m,t-a<t\le k\}}\tau^m_{m,k}a\widetilde{P}_{down}+\sum\nolimits_{\{[m,t-1] \in A_m\}}\tau^m_{m,t-1}[\widetilde{P}_{shut}+(a-1)\widetilde{P}_{down}]$ respectively. Inequality (\ref{eq:tight-min-output-1}) with $t>m$ is dominated by inequalities (\ref{eq:tight-min-output-1}) with $t=m$ and (\ref{eq:tight-ramp-down-1}) with $t-a=m$. It should be noted that $K$ is equal to $U$ or $U+1$.\\ Sufficiency: If all points of $\widetilde{\mathcal{P}}_m^I$ that are tight at inequality (\ref{eq:tight-min-output-1}) satisfy \begin{equation} \sum^{m+M-1}_{j=\max(1,m)}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:7} \end{equation} then \begin{enumerate} \item $\lambda_0=0$ when $m>U$.\\ If $m>U$, point (\ref{p:1}) is valid and satisfies inequality (\ref{eq:tight-min-output-1}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:7}), then $\lambda_0=0$. \item $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$.\\ \begin{enumerate} \item $t<j\le m+M-1$.\\ Consider point (\ref{p:54}) and point (\ref{p:55}) with $r_3=t$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-min-output-1}) at equality. 
Because both points satisfy equality (\ref{pr:7}), we get $\omega_j\widetilde{P}_t=\omega_j\lvert\widetilde{P}_t-\varepsilon\rvert$. Hence $\omega_j=0$. \item $\max(1,m)\le j<t$.\\ If $t=m$, this case need not be considered. Otherwise, consider the following two cases: \begin{enumerate} \item $K<t$.\\ We get $\frac{\widetilde{P}_0-\widetilde{P}_{shut}}{t-1}<\widetilde{P}_{down}$, $t-1\ge U$. Consider point (\ref{p:56}) and point (\ref{p:57}) with $r_2=t-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-min-output-1}) at equality. Because both points satisfy equality (\ref{pr:7}), we get $\omega_j[\widetilde{P}_0-j\times\frac{\max(0,\widetilde{P}_0-\widetilde{P}_{shut})}{t-1}]=\omega_j\lvert\widetilde{P}_0-j\times\frac{\max(0,\widetilde{P}_0-\widetilde{P}_{shut})}{t-1}-\varepsilon\rvert$. Hence $\omega_j=0$. \item $\frac{\widetilde{P}_0}{\widetilde{P}_{down}}<t$.\\ We get $\widetilde{P}_0-t\widetilde{P}_{down}<0$. Consider point (\ref{p:54}) and point (\ref{p:55}) with $r_3=t$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-min-output-1}) at equality. Because both points satisfy equality (\ref{pr:7}), we get $\omega_j[(t-j)\times\frac{\widetilde{P}_0}{t}]=\omega_j\lvert(t-j)\times\frac{\widetilde{P}_0}{t}-\varepsilon\rvert$. Hence $\omega_j=0$. \end{enumerate} \end{enumerate} \item $\varphi_{m,k}=\lambda_0,t\notin[m,k]$.\\ If $[m,k]\in A_m$ exists such that $t\notin[m,k]$, we consider point (\ref{p:10}) with $r_1=r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:10}) satisfies equality (\ref{pr:7}), then $\varphi_{m,k}=\lambda_0$. \item $\varphi_{m,k}=\lambda_0-\omega_t\times\max(0,\widetilde{P}_0-t\widetilde{P}_{down}),t\in[m,k]$.\\ If $[m,k]\in A_m$ exists such that $t\in[m,k]$, we consider point (\ref{p:10}) with $r_1=r_2=k$. 
Clearly this point is valid and satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:10}) satisfies equality (\ref{pr:7}), then $\varphi_{m,k}=\lambda_0-\omega_t\times\max(0,\widetilde{P}_0-t\widetilde{P}_{down})$. \item $\varphi_{h,k}=0,h\ne m$.\\ If $[h,k]\in A_m,h\ne m$ exists, we consider the following two cases: \begin{enumerate} \item $t\le U$.\\ Consider point (\ref{p:5}) with $r_1=h$, $r_2=k$. Clearly this point satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0-\omega_t\times\max(0,\widetilde{P}_0-t\widetilde{P}_{down})$, $t\in[m,k]$ above. If Point (\ref{p:5}) satisfies equality (\ref{pr:7}), then we get $\omega_t\times\max(0,\widetilde{P}_0-t\widetilde{P}_{down})+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \item $t>U$.\\ If $m\le U$, we consider point (\ref{p:5}) with $r_1=h$, $r_2=k$. Clearly this point satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0$, $t\notin[m,k]$ above. If Point (\ref{p:5}) satisfies equality (\ref{pr:7}), then we get $\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. If $m>U$, we consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point satisfies inequality (\ref{eq:tight-min-output-1}) at equality. We have shown that $\lambda_0=0$ when $m>U$ in part 1. If Point (\ref{p:3}) satisfies equality (\ref{pr:7}), then we get $\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \end{enumerate} \end{enumerate} According to theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer}, we get Proposition \ref{prp7}. 
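The perturbation step used repeatedly in part 2 can be made explicit: the two tight points differ only by $\varepsilon$ in coordinate $j$, so equality (\ref{pr:7}) yields $\omega_jX=\omega_j\lvert X-\varepsilon\rvert$ for every sufficiently small $\varepsilon>0$, where $X>0$ denotes the common coordinate value. Then $\lvert X-\varepsilon\rvert=X-\varepsilon$, and
\[
\omega_jX=\omega_j(X-\varepsilon)\quad\Longrightarrow\quad\omega_j\varepsilon=0\quad\Longrightarrow\quad\omega_j=0.
\]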
\end{proof} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp8}}] Necessity: Suppose, for contradiction, that none of the conditions is satisfied. Then we have: \begin{enumerate} \item $t>\max(1,m)$. \item $t\le K$. \item $\widetilde{P}_0+t\widetilde{P}_{up}\le 1$. \item $K\ge m+M-1$ or $\widetilde{P}_{shut}+(K-t)\widetilde{P}_{down}\ge\widetilde{P}_0+t\widetilde{P}_{up}$. \end{enumerate} We have $\widetilde{P}_0-(t-1)\widetilde{P}_{down}\ge\widetilde{P}_{shut}$ or $t\le W$. If $\widetilde{P}_0-(t-1)\widetilde{P}_{down}=\widetilde{P}_{shut}$, the unit must remain operational at least until time period $t-1$ (i.e., $t-1\le U$). If $\widetilde{P}_0-(t-1)\widetilde{P}_{down}>\widetilde{P}_{shut}$ or $t\le W$, the unit must remain operational at least until time period $t$ (i.e., $t\le U$). It should be noted that $K$ is equal to $U$ or $U+1$. We consider the following three cases: \begin{enumerate} \item $K\ge m+M-1,U\ge m+M-1$.\\ According to the definition of $A_m$ and (\ref{eq:new-initial-status-1}), inequality (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le a\widetilde{P}_{up}$, and (\ref{eq:tight-max-output-1}) can be written as $\widetilde{P}_t\le\widetilde{P}_0+t\widetilde{P}_{up}$. Inequality (\ref{eq:tight-max-output-1}) with $t>m$ is dominated by inequalities (\ref{eq:tight-max-output-1}) with $t=m$ and (\ref{eq:tight-ramp-up-1}) with $t-a=m$. \item $K\ge m+M-1,U=m+M-2$.\\ We have $\widetilde{P}_{shut}+U\widetilde{P}_{down}=\widetilde{P}_0$. According to the definition of $A_m$, inequalities (\ref{eq:tight-max-output-1}) and (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t\le\tau^m_{m,U}[\widetilde{P}_{shut}+(U-t)\widetilde{P}_{down}]+\tau^m_{m,m+M-1}(\widetilde{P}_0+t\widetilde{P}_{up})$ and $\widetilde{P}_t-\widetilde{P}_{t-a}\le-\tau^m_{m,U}a\widetilde{P}_{down}+\tau^m_{m,m+M-1}a\widetilde{P}_{up}$ respectively when $t\le U$. 
Inequalities (\ref{eq:tight-max-output-1}) and (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t\le\tau^m_{m,m+M-1}(\widetilde{P}_0+t\widetilde{P}_{up})$ and $\widetilde{P}_t-\widetilde{P}_{t-a}\le\tau^m_{m,m+M-1}a\widetilde{P}_{up}-\tau^m_{m,U}[\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$ respectively when $t>U$. Inequality (\ref{eq:tight-max-output-1}) with $t>m$ is dominated by inequalities (\ref{eq:tight-max-output-1}) with $t=m$ and (\ref{eq:tight-ramp-up-1}) with $t-a=m$. \item $\widetilde{P}_{shut}+(K-t)\widetilde{P}_{down}\ge\widetilde{P}_0+t\widetilde{P}_{up}$.\\ We have $(K-1)\widetilde{P}_{down}>\widetilde{P}_0-\widetilde{P}_{shut}$. Thus, $t\le U=K=W$. According to the definition of $A_m$ and (\ref{eq:new-initial-status-1}), inequality (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le a\widetilde{P}_{up}$, and (\ref{eq:tight-max-output-1}) can be written as $\widetilde{P}_t\le\widetilde{P}_0+t\widetilde{P}_{up}$. Inequality (\ref{eq:tight-max-output-1}) with $t>m$ is dominated by inequalities (\ref{eq:tight-max-output-1}) with $t=m$ and (\ref{eq:tight-ramp-up-1}) with $t-a=m$. \end{enumerate} Sufficiency: If all points of $\widetilde{\mathcal{Q}}_m^I$ that are tight at inequality (\ref{eq:tight-max-output-1}) satisfy \begin{equation} \sum^{m+M-1}_{j=\max(1,m)}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:8} \end{equation} then \begin{enumerate} \item $\lambda_0=0$ when $m>U$.\\ If $m>U$, point (\ref{p:1}) is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:8}), then $\lambda_0=0$. \item $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$.\\ Consider the following two cases: \begin{enumerate} \item $t<j\le m+M-1$.\\ Consider point (\ref{p:22}) and point (\ref{p:23}) with $r_3=t$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-max-output-1}) at equality. 
Because both points satisfy equality (\ref{pr:8}), we get $\omega_j\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})=\omega_j\lvert\min(1,\widetilde{P}_0+t\widetilde{P}_{up})-\varepsilon\rvert$. Hence $\omega_j=0$. \item $\max(1,m)\le j<t$.\\ If $t=m$, this case need not be considered. Otherwise, consider the following two cases: \begin{enumerate} \item $\min(K,\frac{1-\widetilde{P}_0}{\widetilde{P}_{up}})<t$.\\ If $t>K$, we get $\frac{\widetilde{P}_0-\widetilde{P}_{shut}}{t-1}<\widetilde{P}_{down}$. Consider point (\ref{p:10}) and point (\ref{p:11}) with $r_1=m$, $r_2=t-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-max-output-1}) at equality. Because both points satisfy equality (\ref{pr:8}), we get $\omega_j\{\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-(j-r_1)\times\frac{\max[0,\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-\widetilde{P}_{shut}]}{t-m-1}\}=\omega_j\lvert\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-(j-r_1)\times\frac{\max[0,\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-\widetilde{P}_{shut}]}{t-m-1}-\varepsilon\rvert$. Hence $\omega_j=0$. If $\frac{1-\widetilde{P}_0}{\widetilde{P}_{up}}<t$, we get $1<\widetilde{P}_0+t\widetilde{P}_{up}$. Consider point (\ref{p:22}) and point (\ref{p:23}) with $r_3=t$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-max-output-1}) at equality. Because both points satisfy equality (\ref{pr:8}), we get $\omega_j[1-(t-j)\times\frac{1-\widetilde{P}_0}{t}]=\omega_j \lvert1-(t-j)\times\frac{1-\widetilde{P}_0}{t}-\varepsilon\rvert$. Hence $\omega_j=0$. \item $K<m+M-1,\max(\frac{\widetilde{P}_{shut}-\widetilde{P}_0}{\widetilde{P}_{up}},\frac{K\widetilde{P}_{down}+\widetilde{P}_{shut}-\widetilde{P}_0}{\widetilde{P}_{up}+\widetilde{P}_{down}})<t<m+M-1$.\\ We get $\max(K,t)<m+M-1$, $\widetilde{P}_{shut}+[\max(K,t)-t]\widetilde{P}_{down}<\widetilde{P}_0+t\widetilde{P}_{up}$. Consider point (\ref{p:8}) and point (\ref{p:9}) with $r_2=\max(K,t)$, $r_0=j$, $r_3=t$. 
Both points are valid and satisfy inequality (\ref{eq:tight-max-output-1}) at equality. Because both points satisfy equality (\ref{pr:8}), we get $\omega_j[\widetilde{P}_t-(t-j)\times\frac{\widetilde{P}_t-\widetilde{P}_0}{t}]=\omega_j [\widetilde{P}_t-(t-j)\times\frac{\widetilde{P}_t-\widetilde{P}_0}{t}-\varepsilon]$. Hence $\omega_j=0$. \end{enumerate} \end{enumerate} \item $\varphi_{m,k}=\lambda_0,t\notin[m,k]$.\\ If $[m,k]\in A_m$ exists such that $t\notin[m,k]$, we consider point (\ref{p:10}) with $r_1=m$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:10}) satisfies equality (\ref{pr:8}), then $\varphi_{m,k}=\lambda_0$. \item $\varphi_{m,k}=\lambda_0-\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],m\le t\le k<m+M-1$.\\ If $[m,k]\in A_m$ exists such that $m\le t\le k<m+M-1$, we consider point (\ref{p:8}) with $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. If Point (\ref{p:8}) satisfies equality (\ref{pr:8}), then we get $\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=\lambda_0-\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $\varphi_{m,m+M-1}=\lambda_0-\omega_t\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})$.\\ We consider point (\ref{p:22}) with $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ in part 2. 
If Point (\ref{p:22}) satisfies equality (\ref{pr:8}), then we get $\omega_t\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})+\varphi_{m,m+M-1}=\lambda_0$. Hence, $\varphi_{m,m+M-1}=\lambda_0-\omega_t\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})$. \item $\varphi_{h,k}=0,h\ne m,t\notin[h,k]$.\\ If $[h,k]\in A_m,h\ne m$ exists such that $t\notin[h,k]$, we consider the following two cases: \begin{enumerate} \item $t\le U$.\\ Consider point (\ref{p:16}) with $r_1=h$, $r_2=r_4=k$, $r_3=t$. Clearly this point satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0-\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$, $m\le t\le k<m+M-1$ above. If Point (\ref{p:16}) satisfies equality (\ref{pr:8}), then we get $\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t)\widetilde{P}_{down} ]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \item $t>U$.\\ If $m\le U$, we consider point (\ref{p:16}) with $r_1=h$, $r_2=r_4=k$, $r_3=U$. Clearly this point satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0$, $t\notin[m,k]$ above. If Point (\ref{p:16}) satisfies equality (\ref{pr:8}), then we get $\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. If $m>U$, we consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\lambda_0=0$ when $m>U$ in part 1. If Point (\ref{p:3}) satisfies equality (\ref{pr:8}), then we get $\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. 
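In both subcases the previously determined coefficients force the cancellation: in case (a), part 4 with $k=U$ (assuming $[m,U]\in A_m$, as the construction requires) gives $\varphi_{m,U}=\lambda_0-\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t)\widetilde{P}_{down}]$, so
\[
\varphi_{h,k}=\lambda_0-\varphi_{m,U}-\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t)\widetilde{P}_{down}]=0,
\]
while in case (b) either $\varphi_{m,U}=\lambda_0$ (part 3, since $t>U$ implies $t\notin[m,U]$) or $\lambda_0=0$ (part 1) directly forces $\varphi_{h,k}=0$.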
\end{enumerate} \item $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],m<h\le t\le k<m+M-1$.\\ If $[h,k]\in A_m$ exists such that $m<h\le t\le k<m+M-1$, we consider the following two cases: \begin{enumerate} \item $m\le U$.\\ Consider point (\ref{p:16}) with $r_1=h$, $r_2=k$, $r_3=U$, $r_4=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0$, $t\notin[m,k]$ above. If Point (\ref{p:16}) satisfies equality (\ref{pr:8}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \item $m>U$.\\ Consider point (\ref{p:12}) with $r_1=h$, $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\lambda_0=0$ when $m>U$ above. If Point (\ref{p:12}) satisfies equality (\ref{pr:8}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. \end{enumerate} \item $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}],m<h\le t$.\\ If $[h,m+M-1]\in A_m$ exists such that $m<h\le t$, we consider the following two cases: \begin{enumerate} \item $m\le U$.\\ Consider point (\ref{p:20}) with $r_1=h$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. 
We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\varphi_{m,k}=\lambda_0$, $t\notin[m,k]$ above. If Point (\ref{p:20}) satisfies equality (\ref{pr:8}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{m,U}+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. \item $m>U$.\\ Consider point (\ref{p:18}) with $r_1=h$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-max-output-1}) at equality. We have shown that $\omega_j=0,j\in[\max(1,m),m+M-1]\backslash\{t\}$ and $\lambda_0=0$ when $m>U$ above. If Point (\ref{p:18}) satisfies equality (\ref{pr:8}), then we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. \end{enumerate} \end{enumerate} According to theorem 3.6 of §$I.4.3$ in \cite{wolsey1988integer}, we get Proposition \ref{prp8}. \end{proof} \begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp9}}] Necessity: Suppose, for contradiction, that at least one of conditions $\mathcal{M}_1$ and $\mathcal{M}_2$ is not satisfied.\\ If condition $\mathcal{M}_1$ is not satisfied, we have: \begin{enumerate} \item $a>1$. \item $t\le K$. \item $a\widetilde{P}_{up}\le 1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$. \item $K\ge m+M-1$ or $a\widetilde{P}_{up}\le\widetilde{P}_{shut}+(K-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$. \item $a\le\underline{T}_{on}$ or $t-a<U+\underline{T}_{off}$. \item $\widetilde{P}_{start}+(a-1)\widetilde{P}_{up}\le1$ or $t-a<U+\underline{T}_{off}$. \item $t\ge m+M-1$ or $t-a<U+\underline{T}_{off}$ or $\widetilde{P}_{start}+(a-1)\widetilde{P}_{up}\le\widetilde{P}_{shut}+[\max(t,t-a+\underline{T}_{on})-t]\widetilde{P}_{down}$. 
\end{enumerate} We have $\widetilde{P}_0-(t-1)\widetilde{P}_{down}\ge\widetilde{P}_{shut}$ or $t\le W$. If $\widetilde{P}_0-(t-1)\widetilde{P}_{down}=\widetilde{P}_{shut}$, the unit must remain operational at least until time period $t-1$ (i.e., $t-1\le U$). If $\widetilde{P}_0-(t-1)\widetilde{P}_{down}>\widetilde{P}_{shut}$ or $t\le W$, the unit must remain operational at least until time period $t$ (i.e., $t\le U$). It should be noted that $K$ is equal to $U$ or $U+1$. We have $t-a<t\le K\le U+1\le U+\underline{T}_{off}$. We consider the following three cases: \begin{enumerate} \item $K\ge m+M-1,U\ge m+M-1$.\\ According to the definition of $A_m$ and (\ref{eq:new-initial-status-1}), inequality (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le a\widetilde{P}_{up}$. Inequality (\ref{eq:tight-ramp-up-1}) with $a>1$ is dominated by inequalities (\ref{eq:tight-ramp-up-1}) with $a=1$. \item $K\ge m+M-1,U=m+M-2$.\\ We have $\widetilde{P}_{shut}+U\widetilde{P}_{down}=\widetilde{P}_0$. According to the definition of $A_m$, inequality (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le-\tau^m_{m,U}a\widetilde{P}_{down}+\tau^m_{m,m+M-1}a\widetilde{P}_{up}$ when $t\le U$. Inequality (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le\tau^m_{m,m+M-1}a\widetilde{P}_{up}-\tau^m_{m,U}[\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$ when $t>U$. Inequality (\ref{eq:tight-ramp-up-1}) with $a>1$ is dominated by inequalities (\ref{eq:tight-ramp-up-1}) with $a=1$. \item $a\widetilde{P}_{up}\le\widetilde{P}_{shut}+(K-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$.\\ We have $(K-1)\widetilde{P}_{down}>\widetilde{P}_0-\widetilde{P}_{shut}$. Thus, $t\le U=K=W$. According to the definition of $A_m$ and (\ref{eq:new-initial-status-1}), inequality (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le a\widetilde{P}_{up}$. 
Inequality (\ref{eq:tight-ramp-up-1}) with $a>1$ is dominated by inequalities (\ref{eq:tight-ramp-up-1}) with $a=1$.
\end{enumerate}
If condition $\mathcal{M}_2$ is not satisfied, we have
\begin{enumerate}
\item $t-a>0$.
\item $a\widetilde{P}_{up}\ge 1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$.
\item $a\widetilde{P}_{up}\ge 1$ or $t-a\le U+\underline{T}_{off}$.
\end{enumerate}
If $a\widetilde{P}_{up}\ge 1$, inequality (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le\sum\nolimits_{\{[m,k] \in A_m,t\le k<m+M-1\}}\tau^m_{m,k}\{\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}+\sum\nolimits_{\{[m,m+M-1] \in A_m\}}\tau^m_{m,m+M-1}\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}-\sum\nolimits_{\{[m,k] \in A_m,t-a\le k<t\}}\tau^m_{m,k}\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]+\sum\nolimits_{\{[h,k] \in A_m,t-a<h\le t\le k<m+M-1\}}\tau^m_{h,k}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\sum\nolimits_{\{[h,m+M-1]\in A_m,t-a<h\le t\}}\tau^m_{h,m+M-1}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\sum\nolimits_{\{[h,k] \in A_m,U+\underline{T}_{off}<h\le t-a<t\le k<m+M-1\}}\tau^m_{h,k}\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\sum\nolimits_{\{[h,m+M-1] \in A_m,U+\underline{T}_{off}<h\le t-a\}}\tau^m_{h,m+M-1}$.
If $t-a\le U+\underline{T}_{off}$, inequality (\ref{eq:tight-ramp-up-1}) can be written as $\widetilde{P}_t-\widetilde{P}_{t-a}\le\sum\nolimits_{\{[m,k] \in A_m,t\le k<m+M-1\}}\tau^m_{m,k}\{\min[1,\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}+\sum\nolimits_{\{[m,m+M-1] \in A_m\}}\tau^m_{m,m+M-1}\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}-\sum\nolimits_{\{[m,k] \in A_m,t-a\le k<t\}}\tau^m_{m,k}\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]+\sum\nolimits_{\{[h,k] \in A_m,t-a<h\le t\le k<m+M-1\}}\tau^m_{h,k}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\sum\nolimits_{\{[h,m+M-1]\in A_m,t-a<h\le t\}}\tau^m_{h,m+M-1}\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. Inequality (\ref{eq:tight-ramp-up-1}) is dominated by inequalities (\ref{eq:tight-max-output-1}) and (\ref{eq:tight-min-output-1}).\\ Sufficiency: If all points of $\widetilde{\mathcal{Q}}_m^I$ that are tight at inequality (\ref{eq:tight-ramp-up-1}) satisfy \begin{equation} \sum^{m+M-1}_{j=\max(1,m)}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:9} \end{equation} then \begin{enumerate} \item $\lambda_0=0$ when $m>U$.\\ If $m>U$, point (\ref{p:1}) satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We get $\widetilde{P}_t=0$. If Point (\ref{p:1}) satisfies equality (\ref{pr:9}), then $\lambda_0=0$. \item $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$.\\ There are three cases to be considered. \begin{enumerate} \item $t<j\le m+M-1$.\\ Consider point (\ref{p:44}) and point (\ref{p:45}) with $r_3=t$, $r_4=t-a$, $r_0=j$, $\widetilde{P}_{ramp}=\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up}\}$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. 
Because both points satisfy equality (\ref{pr:9}), we get $\omega_j\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})=\omega_j\lvert\min(1,\widetilde{P}_0+t\widetilde{P}_{up})-\varepsilon\rvert$. Hence $\omega_j=0$. \item $t-a<j<t$.\\ If $a=1$, we don't need to consider this case. Otherwise, consider the following six cases: \begin{enumerate} \item $t>K$.\\ We get $\frac{\widetilde{P}_0-\widetilde{P}_{shut}}{t-1}<\widetilde{P}_{down}$. Consider point (\ref{p:10}) and point (\ref{p:11}) with $r_1=t-a$, $r_2=t-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. Because both points satisfy equality (\ref{pr:9}), we get $\omega_j[\widetilde{P}_{t-a}-(j-t+a)\times\frac{\max(0,\widetilde{P}_{t-a}-\widetilde{P}_{shut})}{a-1}]=\omega_j\lvert\widetilde{P}_{t-a}-(j-t+a)\times\frac{\max(0,\widetilde{P}_{t-a}-\widetilde{P}_{shut})}{a-1}-\varepsilon\rvert$. Hence $\omega_j=0$. \item $\min(\frac{1}{\widetilde{P}_{up}},\frac{1-\widetilde{P}_0+t\widetilde{P}_{down}}{\widetilde{P}_{up}+\widetilde{P}_{down}})<a$.\\ We get $1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]<a\widetilde{P}_{up}$. Consider point (\ref{p:44}) and point (\ref{p:45}) with $r_3=t$, $r_4=t-a$, $r_0=j$, $\widetilde{P}_{ramp}=1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. Because both points satisfy equality (\ref{pr:9}), we get $\omega_j\{1-(t-j)\times\frac{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]}{a}\}=\omega_j\lvert1-(t-j)\times\frac{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]}{a}-\varepsilon\rvert$. Hence $\omega_j=0$. \item $\underline{T}_{on}<a\le t-U-\underline{T}_{off}$.\\ If $m\le U$, we consider point (\ref{p:5}) and point (\ref{p:6}), if $m>U$, we consider point (\ref{p:3}) and point (\ref{p:4}). We let $r_1=t-a+1$, $r_2=t-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. 
Because both points satisfy equality (\ref{pr:9}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \item $\frac{1-\widetilde{P}_{start}}{\widetilde{P}_{up}}+1<a\le t-U-\underline{T}_{off}$.\\ We get $1<\widetilde{P}_{start}+(a-1)\widetilde{P}_{up}$, $U+\underline{T}_{{off}}\le t-a$. If $m\le U$, we consider point (\ref{p:20}) and point (\ref{p:21}), if $m>U$, we consider point (\ref{p:18}) and point (\ref{p:19}). We let $r_1=t-a+1$, $r_3=t$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. Because both points satisfy equality (\ref{pr:9}), we get $\omega_j\times[1-(t-j)\times\frac{1-\widetilde{P}_{start}}{a-1}]=\omega_j\times\lvert1-(t-j)\times\frac{1-\widetilde{P}_{start}}{a-1}-\varepsilon\rvert$. Hence $\omega_j=0$. \item $\max(t,K)<m+M-1$,\\ $\min\{\frac{\widetilde{P}_{shut}+[K-t]^+ \widetilde{P}_{down}}{\widetilde{P}_{up}},\frac{\widetilde{P}_{shut}+\max(t,K)\times\widetilde{P}_{down}-\widetilde{P}_0}{\widetilde{P}_{up}+\widetilde{P}_{down}}\}<a$.\\ We get $\widetilde{P}_{shut}+[\max(t,K)-t]\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]<a\widetilde{P}_{up}$. Consider point (\ref{p:41}) and point (\ref{p:42}) with $r_2=\max(t,K)$, $r_3=t$, $r_4=t-a$, $r_0=j$, $\widetilde{P}_{ramp}=\min\{1,\widetilde{P}_{shut}+[\max(t,K)-t]\widetilde{P}_{down}\}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]>-a\widetilde{P}_{down}$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. Because both points satisfy equality (\ref{pr:9}), we get $\omega_j[\widetilde{P}_t-(t-j)\times\frac{\widetilde{P}_{ramp}}{a}]=\omega_j\lvert\widetilde{P}_t-(t-j)\times\frac{\widetilde{P}_{ramp}}{a}-\varepsilon\rvert$. Hence $\omega_j=0$. 
\item $t<m+M-1,\max[\frac{\widetilde{P}_{shut}-\widetilde{P}_{start}}{\widetilde{P}_{up}}+1,\frac{(\underline{T}_{on}-1)\widetilde{P}_{down}+\widetilde{P}_{shut}-\widetilde{P}_{start}}{\widetilde{P}_{up}+\widetilde{P}_{down}}+1,t+\underline{T}_{on}-m-M+1]<a\le t-U-\underline{T}_{off}$.\\ We get $\max(t,t-a+\underline{T}_{on})<m+M-1$, $t-a\ge U+\underline{T}_{off}$, $\widetilde{P}_{shut}+[\max(t,t-a+\underline{T}_{on})-t]\widetilde{P}_{down}<\widetilde{P}_{start}+(a-1)\widetilde{P}_{up}$. If $m\le U$, we consider point (\ref{p:14}) and point (\ref{p:15}), if $m>U$, we consider point (\ref{p:12}) and point (\ref{p:13}). We let $r_1=t-a+1$, $r_2=\max(t,t-a+\underline{T}_{on})$, $r_3=U$, $r_4=t$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. Because both points satisfy equality (\ref{pr:9}), we get $\omega_j\times[\widetilde{P}_t-(t-j)\times\frac{\max(0,\widetilde{P}_t-\widetilde{P}_{start})}{a-1}]=\omega_j\times\lvert\widetilde{P}_t-(t-j)\times\frac{\max(0,\widetilde{P}_t-\widetilde{P}_{start})}{a-1}-\varepsilon\rvert$. Hence $\omega_j=0$. \end{enumerate} \item $\max(1,m)\le j<t-a$.\\ If $t-a=0$ or $t-a=\max(1,m)$, we don’t need to consider this case. Otherwise, consider the following two cases: \begin{enumerate} \item $a<\min(\frac{1}{\widetilde{P}_{up}},\frac{1-\widetilde{P}_0+t\widetilde{P}_{down}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$.\\ We get $a\widetilde{P}_{up}<1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$. Consider point (\ref{p:46}) and point (\ref{p:47}) with $r_3=t$, $r_4=t-a$, $r_0=j$, $\widetilde{P}_{ramp}=a\widetilde{P}_{up}$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. Because both points satisfy equality (\ref{pr:9}), we get $\omega_j (\widetilde{P}_0-j\times\frac{\widetilde{P}_0-\widetilde{P}_{t-a}}{t-a})=\omega_j (\widetilde{P}_0-j\times\frac{\widetilde{P}_0-\widetilde{P}_{t-a}}{t-a}-\varepsilon)$. Hence $\omega_j=0$. 
\item $a<\min(\frac{1}{\widetilde{P}_{up}},t-U-\underline{T}_{off})$.\\
We get $t-a>U+\underline{T}_{off}\ge U+1\ge K$. Then we have $\frac{\widetilde{P}_0-\widetilde{P}_{shut}}{t-a-1}<\widetilde{P}_{down}$. Consider point (\ref{p:10}) and point (\ref{p:11}) with $r_1=m$, $r_2=t-a-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. Because both points satisfy equality (\ref{pr:9}), we get $\omega_j\{\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-(j-r_1)\times\frac{\max[0,\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-\widetilde{P}_{shut}]}{t-a-m-1}\}=\omega_j\lvert\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-(j-r_1)\times\frac{\max[0,\max(0,\widetilde{P}_0-r_1\widetilde{P}_{down})-\widetilde{P}_{shut}]}{t-a-m-1}-\varepsilon\rvert$. Hence $\omega_j=0$.
\end{enumerate}
\end{enumerate}
\item $\omega_{t-a}=-\omega_t$ when $t-a\ne 0$.\\
Consider the following two cases:
\begin{enumerate}
\item $a<\min(\frac{1}{\widetilde{P}_{up}},\frac{1-\widetilde{P}_0+t\widetilde{P}_{down}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$.\\
We get $a\widetilde{P}_{up}<1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$. Consider point (\ref{p:44}) and point (\ref{p:46}) with $r_3=t$, $r_4=t-a$, $r_0=j$, $\widetilde{P}_{ramp}=a\widetilde{P}_{up}$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ in part 2. Because both points satisfy equality (\ref{pr:9}), we get $\omega_{t-a}[\min(1,\widetilde{P}_0+t\widetilde{P}_{up})-a\widetilde{P}_{up}]+\omega_t\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})=\omega_{t-a}[\min(1,\widetilde{P}_0+t\widetilde{P}_{up})-\varepsilon-a\widetilde{P}_{up}]+\omega_t[\min(1,\widetilde{P}_0+t\widetilde{P}_{up})-\varepsilon]$. Hence $\omega_{t-a}=-\omega_t$.
\item $a<\min(\frac{1}{\widetilde{P}_{up}},t-U-\underline{T}_{off})$.\\
We get $a\widetilde{P}_{up}<1$, $t-a\ge U+\underline{T}_{off}+1$. If $m\le U$, we consider point (\ref{p:34}) and point (\ref{p:35}); if $m>U$, we consider point (\ref{p:32}) and point (\ref{p:33}). We let $r_1=U+\underline{T}_{off}+1$, $r_3=t$, $r_4=t-a$, $r_0=j$, $\widetilde{P}_{ramp}=a\widetilde{P}_{up}$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ in part 2. Because both points satisfy equality (\ref{pr:9}), we get $\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-U-\underline{T}_{off}-1)\widetilde{P}_{up}]-a\widetilde{P}_{up}\}+\omega_t\times\min[1,\widetilde{P}_{start}+(t-U-\underline{T}_{off}-1)\widetilde{P}_{up}]=\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-U-\underline{T}_{off}-1)\widetilde{P}_{up}]-\varepsilon-a\widetilde{P}_{up}\}+\omega_t\{\min[1,\widetilde{P}_{start}+(t-U-\underline{T}_{off}-1)\widetilde{P}_{up}]-\varepsilon\}$. Hence $\omega_{t-a}=-\omega_t$.
\end{enumerate}
\item $\varphi_{m,k}=\lambda_0,k<t-a$.\\
If $[m,k]\in A_m$ exists such that $k<t-a$, consider point (\ref{p:10}) with $r_1=r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ above. Because point (\ref{p:10}) satisfies equality (\ref{pr:9}), we get $\varphi_{m,k}=\lambda_0$.
\item $\varphi_{m,k}=\lambda_0+\omega_t\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],t-a\le k<t$.\\
If $[m,k]\in A_m$ exists such that $t-a\le k<t$, consider point (\ref{p:10}) with $r_1=r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\},\omega_{t-a}=-\omega_t$ above.
Because point (\ref{p:10}) satisfies equality (\ref{pr:9}), we get $\omega_{t-a}\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=\lambda_0+\omega_t\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$.
\item $\varphi_{m,k}=\lambda_0-\omega_t\times\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\},t\le k<m+M-1$.\\
If $[m,k]\in A_m$ exists such that $t\le k<m+M-1$, we consider point (\ref{p:41}) with $r_2=k$, $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}$. Clearly this point satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above. If Point (\ref{p:41}) satisfies equality (\ref{pr:9}), then we get $\omega_{t-a}\times\{\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]-\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}\}+\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=\lambda_0-\omega_t\times\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}$.
\item $\varphi_{m,m+M-1}=\lambda_0-\omega_t\times\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up}\}$.\\
If $[m,m+M-1]\in A_m$ exists, we consider point (\ref{p:44}) with $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up}\}$. Clearly this point satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above. If Point (\ref{p:44}) satisfies equality (\ref{pr:9}), then we get $\omega_{t-a}\times\{\min(1,\widetilde{P}_0+t\widetilde{P}_{up})-\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up}\}\}+\omega_t\times\min(1,\widetilde{P}_0+t\widetilde{P}_{up})+\varphi_{m,m+M-1}=\lambda_0$. Hence, $\varphi_{m,m+M-1}=\lambda_0-\omega_t\times\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up}\}$.
\item $\varphi_{h,k}=0$, $h\ne m$, $t\notin[h,k]$.\\
If $[h,k]\in A_m$, $h\ne m$ exists such that $t\notin[h,k]$, we consider the following three cases:
\begin{enumerate}
\item $U<t-a$.\\
If $m\le U$, consider point (\ref{p:16}) with $r_1=h$, $r_2=r_4=k$, $r_3=U$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\varphi_{m,k}=\lambda_0$, $k<t-a$ above. Because point (\ref{p:16}) satisfies equality (\ref{pr:9}), we get $\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. If $m>U$, consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\lambda_0=0$ when $m>U$ above. Because point (\ref{p:3}) satisfies equality (\ref{pr:9}), we get $\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$.
\item $t-a\le U<t$.\\
Consider point (\ref{p:14}) with $r_1=h$, $r_2=r_3=k$.
Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0+\omega_t\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$, $t-a\le k<t$ above. Because point (\ref{p:14}) satisfies equality (\ref{pr:9}), we get $\omega_{t-a}\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$.
\item $t-a<t\le U$.\\
Consider point (\ref{p:48}) with $r_1=h$, $r_2=k$, $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0-\omega_t\times\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}$, $m\le t-a<t\le k<m+M-1$ above. Because point (\ref{p:48}) satisfies equality (\ref{pr:9}), we get $\omega_{t-a}\times\{\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t)\widetilde{P}_{down}]-\min\{1-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}],a\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t)\widetilde{P}_{down}-\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]\}\}+\omega_t\times\min[1,\widetilde{P}_0+t\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$.
\end{enumerate}
\item $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}],t-a<h\le t\le k<m+M-1$.\\
If $[h,k]\in A_m$ exists such that $t-a<h\le t\le k<m+M-1$, we consider the following two cases:
\begin{enumerate}
\item $U<t-a$.\\
If $m\le U$, consider point (\ref{p:14}) with $r_1=h$, $r_2=k$, $r_3=t$. We get $\widetilde{P}_t=\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\varphi_{m,k}=\lambda_0$, $k<t-a$ above. Because point (\ref{p:14}) satisfies equality (\ref{pr:9}), we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. If $m>U$, consider point (\ref{p:12}) with $r_1=h$, $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\lambda_0=0$ when $m>U$ above. Because point (\ref{p:12}) satisfies equality (\ref{pr:9}), we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$.
\item $t-a\le U$.\\
Consider point (\ref{p:14}) with $r_1=h$, $r_2=k$, $r_3=t$. We get $\widetilde{P}_t=\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality.
We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0+\omega_t\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$, $t-a\le k<t$ above. Because point (\ref{p:14}) satisfies equality (\ref{pr:9}), we get $\omega_{t-a}\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]+\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$.
\end{enumerate}
\item $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}],t-a<h\le t$.\\
If $[h,m+M-1]\in A_m$ exists such that $t-a<h\le t$, we consider the following two cases:
\begin{enumerate}
\item $U<t-a$.\\
If $m\le U$, consider point (\ref{p:20}) with $r_1=h$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\varphi_{m,k}=\lambda_0$, $k<t-a$ above. Because point (\ref{p:20}) satisfies equality (\ref{pr:9}), we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{m,U}+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. If $m>U$, consider point (\ref{p:18}) with $r_1=h$, $r_2=k$, $r_3=t$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\lambda_0=0$ when $m>U$ above. Because point (\ref{p:18}) satisfies equality (\ref{pr:9}), we get $\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$.
\item $t-a\le U$.\\
Consider point (\ref{p:20}) with $r_1=h$, $r_3=t$. We get $\widetilde{P}_t=\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0+\omega_t\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]$, $t-a\le k<t$ above. Because point (\ref{p:20}) satisfies equality (\ref{pr:9}), we get $\omega_{t-a}\times\max[0,\widetilde{P}_0-(t-a)\widetilde{P}_{down}]+\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{m,U}+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]$.
\end{enumerate}
\item $\varphi_{h,k}=-\omega_t\times\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$, $m<h\le t-a<t\le k<m+M-1$.\\
If $[h,k]\in A_m$ exists such that $m<h\le t-a<t\le k<m+M-1$, we consider the following two cases:
\begin{enumerate}
\item $m\le U$.\\
We consider point (\ref{p:29}) with $r_1=h$, $r_2=k$, $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. Clearly this point satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0$, $k<t-a$ above. If Point (\ref{p:29}) satisfies equality (\ref{pr:9}), then we get $\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]-\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\}+\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$.
Hence, $\varphi_{h,k}=-\omega_t\times\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$.
\item $m>U$.\\
We consider point (\ref{p:28}) with $r_1=h$, $r_2=k$, $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$. Clearly this point satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\lambda_0=0$ when $m>U$ above. If Point (\ref{p:28}) satisfies equality (\ref{pr:9}), then we get $\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]-\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]\}+\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=-\omega_t\times\min[1,a\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t)\widetilde{P}_{down}]$.
\end{enumerate}
\item $\varphi_{h,m+M-1}=-\omega_t\times\min(1,a\widetilde{P}_{up})$, $m<h\le t-a$.\\
If $[h,m+M-1]\in A_m$ exists such that $m<h\le t-a$, we consider the following two cases:
\begin{enumerate}
\item $m\le U$.\\
We consider point (\ref{p:34}) with $r_1=h$, $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min(1,a\widetilde{P}_{up})$. Clearly this point satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0$, $k<t-a$ above. If Point (\ref{p:34}) satisfies equality (\ref{pr:9}), then we get $\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]-\min(1,a\widetilde{P}_{up})\}+\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{m,U}+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min(1,a\widetilde{P}_{up})$.
\item $m>U$.\\
We consider point (\ref{p:32}) with $r_1=h$, $r_3=t$, $r_4=t-a$, $\widetilde{P}_{ramp}=\min(1,a\widetilde{P}_{up})$. Clearly this point satisfies inequality (\ref{eq:tight-ramp-up-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\lambda_0=0$ when $m>U$ above. If Point (\ref{p:32}) satisfies equality (\ref{pr:9}), then we get $\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]-\min(1,a\widetilde{P}_{up})\}+\omega_t\times\min[1,\widetilde{P}_{start}+(t-h)\widetilde{P}_{up}]+\varphi_{h,m+M-1}=\lambda_0$. Hence, $\varphi_{h,m+M-1}=-\omega_t\times\min(1,a\widetilde{P}_{up})$.
\end{enumerate}
\end{enumerate}
According to Theorem 3.6 of \S I.4.3 in \cite{wolsey1988integer}, we get Proposition \ref{prp9}.
\end{proof}
\begin{proof}[\textbf{Proof of Proposition \upshape\ref{prp10}}]
Necessity: Suppose, for contradiction, that at least one of conditions $\mathcal{M}_3$ and $\mathcal{M}_4$ is not satisfied.\\
If condition $\mathcal{M}_3$ is not satisfied, we have
\begin{enumerate}
\item $a>1$.
\item $t-a<U+\underline{T}_{off}$.
\item $a\widetilde{P}_{down}\le\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]$.
\item $t\le U$ or $\widetilde{P}_{shut}+(a-1)\widetilde{P}_{down}\le \min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]$.
\end{enumerate}
We consider the following two cases:
\begin{enumerate}
\item $t\le U$.\\
According to the definition of $A_m$ and (\ref{eq:new-initial-status-1}), inequality (\ref{eq:tight-ramp-down-1}) can be written as $\widetilde{P}_{t-a}-\widetilde{P}_t\le a\widetilde{P}_{down}$. Inequality (\ref{eq:tight-ramp-down-1}) with $a>1$ is dominated by inequalities (\ref{eq:tight-ramp-down-1}) with $a=1$.
\item $\widetilde{P}_{shut}+(a-1)\widetilde{P}_{down}\le \min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]$.\\
According to the definition of $A_m$ and (\ref{eq:new-initial-status-1}), inequality (\ref{eq:tight-ramp-down-1}) can be written as $\widetilde{P}_{t-a}-\widetilde{P}_t\le\sum\nolimits_{\{[m,k]\in A_m,t\le k\}}\tau^m_{m,k}a\widetilde{P}_{down}+\sum\nolimits_{\{[m,k] \in A_m,t-a\le k<t\}}\tau^m_{m,k}[\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$. Inequality (\ref{eq:tight-ramp-down-1}) with $a>1$ is dominated by inequalities (\ref{eq:tight-ramp-down-1}) with $a=1$.
\end{enumerate}
If condition $\mathcal{M}_4$ is not satisfied, we have
\begin{enumerate}
\item $t-a>0$.
\item $a\widetilde{P}_{down}\ge \min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]$.
\item $a\widetilde{P}_{down}\ge\min[1,\widetilde{P}_{start}+(t-a-U-\underline{T}_{off}-1)\widetilde{P}_{up}]$ or $t-a\le U+\underline{T}_{off}$.
\end{enumerate}
If $a\widetilde{P}_{down}\ge\min[1,\widetilde{P}_{start}+(t-a-U-\underline{T}_{off}-1)\widetilde{P}_{up}]$, inequality (\ref{eq:tight-ramp-down-1}) can be written as $\widetilde{P}_{t-a}-\widetilde{P}_t\le\sum\nolimits_{\{[h,k] \in A_m,U+\underline{T}_{off}<h\le t-a<t\le k\}}\tau^m_{h,k}\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up}]+\sum\nolimits_{\{[h,k] \in A_m,U+\underline{T}_{off}<h\le t-a\le k<t\}}\tau^m_{h,k}\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\sum\nolimits_{\{[m,k] \in A_m,t\le k\}}\tau^m_{m,k}\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]+\sum\nolimits_{\{[m,k] \in A_m,t-a\le k<t\}}\tau^m_{m,k}\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$.
If $t-a\le U+\underline{T}_{off}$, inequality (\ref{eq:tight-ramp-down-1}) can be written as $\widetilde{P}_{t-a}-\widetilde{P}_t\le\sum\nolimits_{\{[m,k] \in A_m,t\le k\}}\tau^m_{m,k}\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]+\sum\nolimits_{\{[m,k] \in A_m,t-a\le k<t\}}\tau^m_{m,k}\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$. Inequality (\ref{eq:tight-ramp-down-1}) is dominated by inequalities (\ref{eq:tight-max-output-1}) and (\ref{eq:tight-min-output-1}).\\
Sufficiency: If all points of $\widetilde{\mathcal{Q}}_m^I$ that are tight at inequality (\ref{eq:tight-ramp-down-1}) satisfy
\begin{equation}
\sum^{m+M-1}_{j=\max(1,m)}\omega_j\widetilde{P}_j+\sum\nolimits_{[h,k]\in A_m}\varphi_{h,k}\tau^m_{h,k}=\lambda_0\label{pr:10}
\end{equation}
then
\begin{enumerate}
\item $\lambda_0=0$ when $m>U$.\\
If $m>U$, point (\ref{p:1}) satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. If Point (\ref{p:1}) satisfies equality (\ref{pr:10}), then $\lambda_0=0$.
\item $\omega_j=0,j\in[m,m+M-1]\backslash\{t-a,t\}$.\\
There are three cases to be considered.
\begin{enumerate}
\item $t<j\le m+M-1$.\\
Consider point (\ref{p:50}) and point (\ref{p:51}) with $r_3=t-a$, $r_4=t$, $r_0=j$, $\widetilde{P}_{ramp}=\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-1}) at equality. Because both points satisfy equality (\ref{pr:10}), we get $\omega_j\widetilde{P}_t=\omega_j(\widetilde{P}_t+\varepsilon)$. Hence $\omega_j=0$.
\item $t-a<j<t$.\\
If $a=1$, we don't need to consider this case. Otherwise, consider the following three cases:
\begin{enumerate}
\item $a\le t-U-\underline{T}_{off}$.\\
If $m\le U$, we consider point (\ref{p:5}) and point (\ref{p:6}); if $m>U$, we consider point (\ref{p:3}) and point (\ref{p:4}). We let $r_1=t-a+1$, $r_2=m+M-1$, $r_0=j$.
Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-1}) at equality. Because both points satisfy equality (\ref{pr:10}), we get $\omega_j\times 0=\omega_j\varepsilon$. Hence $\omega_j=0$. \item $\min(\frac{1}{\widetilde{P}_{down}},\frac{\widetilde{P}_0+t\widetilde{P}_{up}}{\widetilde{P}_{up}+\widetilde{P}_{down}})<a$.\\ We get $\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]<a\widetilde{P}_{down}$. Consider point (\ref{p:50}) and point (\ref{p:51}) with $r_3=t-a$, $r_4=t$, $r_0=j$, $\widetilde{P}_{ramp}=\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-1}) at equality. Because both points satisfy equality (\ref{pr:10}), we get $\omega_j\times\max[0,\widetilde{P}_{t-a}-(j-t+a)\times\frac{\widetilde{P}_{ramp}}{a}]=\omega_j\{\max[0,\widetilde{P}_{t-a}-(j-t+a)\times\frac{\widetilde{P}_{ramp}}{a}]+\varepsilon\}$. Hence $\omega_j=0$. \item $U<t,\min\{\frac{1-\widetilde{P}_{shut}}{\widetilde{P}_{down}}+1,\frac{\widetilde{P}_0+t\widetilde{P}_{up}+\widetilde{P}_{down}-\widetilde{P}_{shut}}{\widetilde{P}_{up}+\widetilde{P}_{down}}\}<a$.\\ We get $\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]<\widetilde{P}_{shut}+(a-1)\widetilde{P}_{down}$. Consider point (\ref{p:8}) and point (\ref{p:9}) with $r_2=t-1$, $r_3=t-a$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-1}) at equality. Because both points satisfy equality (\ref{pr:10}), we get $\omega_j[\widetilde{P}_{t-a}-(j-t+a)\times\frac{\max(0,\widetilde{P}_{t-a}-\widetilde{P}_{shut})}{a-1}]=\omega_j\lvert\widetilde{P}_{t-a}-(j-t+a)\times\frac{\max(0,\widetilde{P}_{t-a}-\widetilde{P}_{shut})}{a-1}-\varepsilon\rvert$. Hence $\omega_j=0$. \end{enumerate} \item $\max(1,m)\le j<t-a$.\\ If $t-a=0$ or $t-a=\max(1,m)$, we don’t need to consider this case. 
Otherwise, consider the following two cases: \begin{enumerate} \item $a<\min(\frac{1}{\widetilde{P}_{down}},\frac{\widetilde{P}_0+t\widetilde{P}_{up}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$.\\ We get $a\widetilde{P}_{down}<\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]$. Consider point (\ref{p:52}) and point (\ref{p:53}) with $r_3=t-a$, $r_4=t$, $r_0=j$, $\widetilde{P}_{ramp}=a\widetilde{P}_{down}$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-1}) at equality. Because both points satisfy equality (\ref{pr:10}), we get $\omega_j (\widetilde{P}_0-j\times\frac{\widetilde{P}_0-\widetilde{P}_{t-a}}{t-a})=\omega_j (\widetilde{P}_0-j\times\frac{\widetilde{P}_0-\widetilde{P}_{t-a}}{t-a}-\varepsilon)$. Hence $\omega_j=0$. \item $a<\min(\frac{1}{\widetilde{P}_{down}},t-U-\underline{T}_{off},\frac{\widetilde{P}_{start}+(t-U-\underline{T}_{off}-1)\widetilde{P}_{up}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$.\\ We get $t-a>U+\underline{T}_{off}\ge U+1\ge K$. Then we have $\frac{\widetilde{P}_0-\widetilde{P}_{shut}}{t-a-1}<\widetilde{P}_{down}$. Consider point (\ref{p:10}) and point (\ref{p:11}) with $r_1=m$, $r_2=t-a-1$, $r_0=j$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-1}) at equality. Because both points satisfy equality (\ref{pr:10}), we get $\omega_j[\widetilde{P}_0-j\times\frac{\max(0,\widetilde{P}_0-\widetilde{P}_{shut})}{t-a-1}]=\omega_j\lvert\widetilde{P}_0-j\times\frac{\max(0,\widetilde{P}_0-\widetilde{P}_{shut})}{t-a-1}-\varepsilon\rvert$. Hence $\omega_j=0$. \end{enumerate} \end{enumerate} \item $\omega_{t-a}=-\omega_t$ when $t-a\ne 0$.\\ Consider the following two cases: \begin{enumerate} \item $a<\min(\frac{1}{\widetilde{P}_{down}},\frac{\widetilde{P}_0+t\widetilde{P}_{up}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$.\\ We get $a\widetilde{P}_{down}<\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]$. 
Consider point (\ref{p:50}) and point (\ref{p:52}) with $r_3=t-a$, $r_4=t$, $r_0=j$, $\widetilde{P}_{ramp}=a\widetilde{P}_{down}$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ in part 2. Because both points satisfy equality (\ref{pr:10}), we get $\omega_{t-a}\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]+\omega_t\{\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]-a\widetilde{P}_{down}\}=\omega_{t-a}\{\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]-\varepsilon\}+\omega_t\{\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up}]-a\widetilde{P}_{down}-\varepsilon\}$. Hence $\omega_{t-a}=-\omega_t$. \item $a<\min(\frac{1}{\widetilde{P}_{down}},t-U-\underline{T}_{off},\frac{\widetilde{P}_{start}+(t-U-\underline{T}_{off}-1)\widetilde{P}_{up}}{\widetilde{P}_{up}+\widetilde{P}_{down}})$.\\ We get $a\widetilde{P}_{down}<\min[1,\widetilde{P}_{start}+(t-a-U-\underline{T}_{off}-1)\widetilde{P}_{up}]$ and $t-a\ge U+\underline{T}_{off}+1$. If $m\le U$, we consider point (\ref{p:38}) and point (\ref{p:39}). If $m>U$, we consider point (\ref{p:36}) and point (\ref{p:37}). We let $r_1=U+\underline{T}_{off}+1$, $r_3=t-a$, $r_4=t$, $r_0=j$, $\widetilde{P}_{ramp}=a\widetilde{P}_{down}$. Both points are valid and satisfy inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ in part 2. Because both points satisfy equality (\ref{pr:10}), we get $\omega_{t-a}\times\min[1,\widetilde{P}_{start}+(t-a-U-\underline{T}_{off}-1)\widetilde{P}_{up}]+\omega_t\times\{\min[1,\widetilde{P}_{start}+(t-a-U-\underline{T}_{off}-1)\widetilde{P}_{up}]-a\widetilde{P}_{down}\}=\omega_{t-a}\{\min[1,\widetilde{P}_{start}+(t-a-U-\underline{T}_{off}-1)\widetilde{P}_{up}]-\varepsilon\}+\omega_t\times\{\min[1,\widetilde{P}_{start}+(t-a-U-\underline{T}_{off}-1)\widetilde{P}_{up}]-a\widetilde{P}_{down}-\varepsilon\}$. Hence $\omega_{t-a}=-\omega_t$.
\end{enumerate} \item $\varphi_{m,k}=\lambda_0,k<t-a$.\\ If $[m,k]\in A_m$ exists such that $k<t-a$, consider point (\ref{p:10}) with $r_1=r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ above. Because point (\ref{p:10}) satisfies equality (\ref{pr:10}), we get $\varphi_{m,k}=\lambda_0$. \item $\varphi_{m,k}=\lambda_0+\omega_t\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}],t-a\le k<t$.\\ If $[m,k]\in A_m$ exists such that $t-a\le k<t$, consider point (\ref{p:8}) with $r_2=k$, $r_3=t-a$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, and $\omega_{t-a}=-\omega_t$ above. Because point (\ref{p:8}) satisfies equality (\ref{pr:10}), we get $\omega_{t-a}\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=\lambda_0+\omega_t\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$. \item $\varphi_{m,k}=\lambda_0+\omega_t\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}],m\le t-a<t\le k$.\\ If $[m,k]\in A_m$ exists such that $m\le t-a<t\le k$, we consider point (\ref{p:43}) with $r_2=k$, $r_3=t-a$, $r_4=t$, $\widetilde{P}_{ramp}=\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$ and $\omega_{t-a}=-\omega_t$ above.
Because point (\ref{p:43}) satisfies equality (\ref{pr:10}), we get $\omega_{t-a}\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\omega_t\{\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]-\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]\}+\varphi_{m,k}=\lambda_0$. Hence, $\varphi_{m,k}=\lambda_0+\omega_t\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]$. \item $\varphi_{h,k}=0$, $h\ne m$, $t-a\notin[h,k]$.\\ If $[h,k]\in A_m$, $h\ne m$ exists such that $t-a\notin[h,k]$, we consider the following three cases: \begin{enumerate} \item $U<t-a$.\\ If $m\le U$, consider point (\ref{p:5}) with $r_1=h$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\varphi_{m,k}=\lambda_0$, $k<t-a$ above. Because point (\ref{p:5}) satisfies equality (\ref{pr:10}), we get $\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. If $m>U$, consider point (\ref{p:3}) with $r_1=h$, $r_2=k$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\lambda_0=0$ when $m>U$ above. Because point (\ref{p:3}) satisfies equality (\ref{pr:10}), we get $\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \item $t-a\le U<t$.\\ Consider point (\ref{p:17}) with $r_1=h$, $r_2=k$, $r_3=t-a$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0+\omega_t\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$, $t-a\le k<t$ above.
Because point (\ref{p:17}) satisfies equality (\ref{pr:10}), we get $\omega_{t-a}\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t+a)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \item $t-a<t\le U$.\\ Consider point (\ref{p:49}) with $r_1=h$, $r_2=k$, $r_3=t-a$, $r_4=t$, $\widetilde{P}_{ramp}=\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0+\omega_t\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]$, $m\le t-a<t\le k\le m+M-1$ above. Because point (\ref{p:49}) satisfies equality (\ref{pr:10}), we get $\omega_{t-a}\times\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t+a)\widetilde{P}_{down}]+\omega_t\{\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},\widetilde{P}_{shut}+(U-t+a)\widetilde{P}_{down}]-\min[1,\widetilde{P}_0+(t-a)\widetilde{P}_{up},a\widetilde{P}_{down}]\}+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=0$. \end{enumerate} \item $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}],m<h\le t-a\le k<t$.\\ If $[h,k]\in A_m$ exists such that $m<h\le t-a\le k<t$, we consider the following two cases: \begin{enumerate} \item $m\le U$.\\ Consider point (\ref{p:14}) with $r_1=h$, $r_2=k$, $r_3=t-a$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\varphi_{m,k}=\lambda_0$, $k<t-a$ above.
Because point (\ref{p:14}) satisfies equality (\ref{pr:10}), we get $\omega_{t-a}\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$. \item $m>U$.\\ Consider point (\ref{p:12}) with $r_1=h$, $r_2=k$, $r_3=t-a$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\lambda_0=0$ when $m>U$ above. Because point (\ref{p:12}) satisfies equality (\ref{pr:10}), we get $\omega_{t-a}\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]$. \end{enumerate} \item $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$, $m<h\le t-a<t\le k\le m+M-1$.\\ If $[h,k]\in A_m$ exists such that $m<h\le t-a<t\le k\le m+M-1$, we consider the following two cases: \begin{enumerate} \item $m\le U$.\\ We consider point (\ref{p:31}) with $r_1=h$, $r_2=k$, $r_3=t-a$, $r_4=t$, $\widetilde{P}_{ramp}=\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\varphi_{m,k}=\lambda_0$, $k<t-a$ above.
Because point (\ref{p:31}) satisfies equality (\ref{pr:10}), we get $\omega_t\{\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]-\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]\}+\omega_{t-a}\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{m,U}+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$. \item $m>U$.\\ We consider point (\ref{p:30}) with $r_1=h$, $r_2=k$, $r_3=t-a$, $r_4=t$, $\widetilde{P}_{ramp}=\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$. Clearly this point is valid and satisfies inequality (\ref{eq:tight-ramp-down-1}) at equality. We have shown that $\omega_j=0$, $j\in[\max(1,m),m+M-1]\backslash\{t-a,t\}$, $\omega_{t-a}=-\omega_t$, $\lambda_0=0$ when $m>U$ above. Because point (\ref{p:30}) satisfies equality (\ref{pr:10}), we get $\omega_t\{\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]-\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]\}+\omega_{t-a}\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},\widetilde{P}_{shut}+(k-t+a)\widetilde{P}_{down}]+\varphi_{h,k}=\lambda_0$. Hence, $\varphi_{h,k}=\omega_t\times\min[1,\widetilde{P}_{start}+(t-a-h)\widetilde{P}_{up},a\widetilde{P}_{down}]$. \end{enumerate} \end{enumerate} According to Theorem 3.6 of \S I.4.3 in \cite{wolsey1988integer}, we get Proposition \ref{prp10}. \end{proof} \end{document}
\begin{document} \setcounter{page}{1} \title{Special topic} \centerline{\begin{minipage}{12cm} \begin{center} Abstract \end{center} We briefly discuss the notion of the Lagrange multiplier for a linear constraint in the Hilbert space setting, and we prove that the pressure $p$ appearing in the stationary Stokes equations is the Lagrange multiplier of the constraint $\mathrm{div}\, u =0$. \end{minipage}} \section*{Introduction} For the equations modelling incompressible fluid flows it is frequently remarked that the pressure term acts as a Lagrange multiplier enforcing the incompressibility constraint. Here we prove this rigorously in the case of the stationary Stokes equations. Namely, we show that the $p$ appearing in the equations is the Lagrange multiplier corresponding to the constraint $\mathrm{div}\, u =0$ in the variational formulation of the equations, see Section \ref{p_in_SE_section}. For this purpose, in the next section we briefly discuss preliminary concepts and present some simple variational problems which use Lagrange multipliers. We then generalise the concept of the Lagrange multiplier to the general Hilbert space setting (Section \ref{lagrange_multiplier_section}) and apply it to the stationary Stokes equations. \section{Preliminaries}\label{prelims_section} Let $H$ be a Hilbert space, and $J\colon H\to \mathbb{R}$ a convex functional that is Fr\'echet differentiable (that is, for each $u\in H$ there exists $\nabla J(u) \in H^*$ such that, for all $v\in H$, $J(v) = J(u) + \langle \nabla J(u) , v-u \rangle + o(||v-u ||)$, where $\langle u^* , u \rangle $ denotes the duality pairing between a linear functional $u^* \in H^*$ and a point $u\in H$, and $o(x)$ is any function such that $o(x)/x \stackrel{x\to 0^+}{\longrightarrow }0$). Let $K$ denote a closed subspace of $H$.
\begin{lem}\label{lemma1} If $J: H \to \mathbb{R}$ is convex and differentiable at $u\in K$ then \[ J(u) = \min_{v\in K} J(v) \, \Leftrightarrow \, \nabla J(u) \in K^\circ , \] where $K^\circ \coloneqq \{ u^* \in H^* \colon \langle u^* , v \rangle =0 \text{ for } v\in K \}$ denotes the annihilator of $K$. \begin{proof}$\mbox{}$ \begin{enumerate} \item[$(\Rightarrow )$] If $u$ is a minimiser of $J$ over $K$ then, for every $v\in K$ with $||v||=1$ and every $t > 0$, we have $J(u+tv ) \geq J(u)$ and so \[ t \langle \nabla J (u) , v \rangle + o(||tv||) \geq 0 \Rightarrow \langle \nabla J (u) , v \rangle \geq 0 .\] Replacing $v$ by $-v$, we similarly obtain $\langle \nabla J (u) , v \rangle \leq 0$. Hence $\langle \nabla J (u) , v \rangle = 0$ for all $v\in K$, that is $\nabla J(u)\in K^\circ $. \item[$(\Leftarrow )$] From convexity, we have \[ J(u+t(v-u)) = J((1-t)u + tv) \leq t J(v) + (1-t) J(u) \qquad \forall v\in K, \, \forall t\in (0,1). \] Subtracting $J(u)$ and dividing by $t$, we get \[ \frac{1}{t}\left( J(u+t(v-u)) - J(u) \right) \leq J(v) - J(u) . \] The assumption $\nabla J(u) \in K^\circ$ gives $\langle \nabla J (u) , v-u \rangle =0$ (note $v-u\in K$) and so the left hand side is equal to $\frac{o(||t(v-u)||)}{t}$. Taking the limit $t\to 0^+$ we get $0\leq J(v) - J(u)$ for all $v\in K$.\qedhere \end{enumerate} \end{proof} \end{lem} \begin{ex}\label{ex1} Let $a_1,\ldots , a_M$ be orthonormal vectors in $H$ and let \begin{equation}\label{V} V = \left\lbrace v\in H \, | \, (a_i , v )=0 \, \, \forall i=1,\ldots , M \right\rbrace , \end{equation} a finite intersection of hyperplanes. Consider the minimisation problem: Find $u\in V$ such that \[ J(u) = \min_{v\in V} J(v), \] where $J:H \to \mathbb{R}$ is convex and differentiable. Then Lemma \ref{lemma1} gives that $u\in V$ is the minimiser if and only if $\nabla J(u) \in V^\perp = \text{span} \{ a_1, \ldots , a_M \} $. Therefore there exist unique $\lambda_i$'s, $i=1,\ldots , M$, such that $\nabla J(u) = \sum_{i=1}^M \lambda_i a_i$.
These $\lambda_i$'s are called \emph{Lagrange multipliers}. \end{ex} \begin{ex}\label{ex2} (Elliott \cite{optimisation}, p. 87) Let $A \in \mathbb{R}^{N\times N}$ be a symmetric, positive definite matrix, $b\in \mathbb{R}^N$, and $C\in \mathbb{R}^{M\times N}$, where $M<N$, be of full rank. Consider the minimisation problem \begin{equation}\label{ex2prob} \min_{x\in \text{Ker}\, C} J(x), \end{equation} where $J(x):= \frac{1}{2} (x,Ax) - (b,x)$. Note that this example is a special case of Example \ref{ex1} with $H\coloneqq \mathbb{R}^N$ and $C = [ a_1 , \ldots , a_M ]^T$ and with a special form of $J$. Hence $\nabla J(x) = \sum_{i=1}^M \lambda_i a_i$. However $a_i = C^T e_i$, where $\{ e_i \}_{i=1, \ldots , M}$ is the standard basis of $\mathbb{R}^M$, and a direct computation shows that $\nabla J(x) = Ax -b$. Therefore \begin{equation}\label{ex2row} Ax -b = \sum_{i=1}^M \lambda_i C^T e_i = C^T \Lambda , \end{equation} where $\Lambda \coloneqq (\lambda_1 , \ldots , \lambda_M )^T$. We call $\Lambda $ the \emph{Lagrange multiplier} of problem \eqref{ex2prob}. Rewriting the above equality together with the constraint $x\in \text{Ker}\, C$ in a compact form we obtain \[ \left[ \begin{array}{cc} A & -C^T \\ C & 0 \end{array} \right] \left[ \begin{array}{c} x \\ \Lambda \end{array} \right] = \left[ \begin{array}{c} b \\ 0 \end{array} \right] . \] Since $A$ is invertible and $C$ is of full rank, the solution $(x, \Lambda )$ to this system exists and is unique. This example illustrates the role of the Lagrange multiplier $\Lambda $: it is a ``redundant variable'' which ``fills out the columns of the system'' and hence makes it solvable for $x$. \end{ex} \section{The Lagrange multiplier}\label{lagrange_multiplier_section} We now generalise the examples to a general Hilbert space setting.
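Before doing so, we note that Example \ref{ex2} is easy to check numerically. The following sketch (an illustration only, assuming NumPy; the data $A$, $C$, $b$ are randomly generated for the example) assembles the block system above and verifies both the constraint $Cx=0$ and the multiplier relation \eqref{ex2row}:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 2

# Symmetric positive definite A and full-rank C (illustrative data).
G = rng.standard_normal((N, N))
A = G @ G.T + N * np.eye(N)
C = rng.standard_normal((M, N))
b = rng.standard_normal(N)

# Assemble the saddle-point system  [A  -C^T; C  0] [x; Lam] = [b; 0].
K = np.block([[A, -C.T], [C, np.zeros((M, M))]])
rhs = np.concatenate([b, np.zeros(M)])
sol = np.linalg.solve(K, rhs)
x, Lam = sol[:N], sol[N:]

assert np.allclose(C @ x, 0)              # x lies in Ker C
assert np.allclose(A @ x - b, C.T @ Lam)  # gradient condition Ax - b = C^T Lambda
```

Solvability of the block system is exactly the invertibility argument given above: $A$ is positive definite and $C$ has full rank.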
Let $M$ be another Hilbert space, let $T\colon H\to M^*$ be a bounded linear operator, and let $T^*\colon M\to H^*$ denote the dual operator of $T$ (that is, $\langle T^*q,u \rangle =\langle Tu ,q \rangle $, where $\langle \cdot , \cdot \rangle $ denotes the duality pairing in turn between $H^*$ and $H$ and between $M^*$ and $M$). \begin{theorem}\label{mainthm} Suppose that the operator $T$ satisfies the condition \begin{equation}\label{c} || T^* q ||_{H^*} \geq C || q ||_M \qquad \text{ for all } q\in M \end{equation} for some $C>0$ and consider the minimisation problem: Find $u\in \text{Ker}\, T$ such that \begin{equation}\label{minimise} J(u) = \min_{v\in \text{Ker} \, T} J(v). \end{equation} Then $u\in \text{Ker} \, T$ is a solution to (\ref{minimise}) if and only if there exists $p\in M$ such that \[ T^*p=\nabla J(u) .\] Moreover, if such $p$ exists, it is unique. \end{theorem} \begin{df} This $p\in M$ is the \emph{Lagrange multiplier} of the problem (\ref{minimise}). \end{df} Note that by the Fundamental Theorem of the Mixed Finite Element Method (see, for example, Lemma 4.1 in Girault \& Raviart \cite{gv}, Chapter I) condition \eqref{c} is equivalent to \[ \| T v \|_{M^*} \geq C \| v \|_H \qquad \text{ for all } v\in ( \mathrm{Ker}\, T)^\perp. \] Let us also point out that Example \ref{ex1} is a special case of the theorem above by setting $T=P$, where $P\colon H \to V^\perp$ is the orthogonal projection with respect to the inner product of $H$ (here we identify $H^*$ with $H$). The condition \eqref{c} follows for such $T$ by noting that $M=V^\perp = (\mathrm{Ker} \, T)^\perp$ and by writing \[ \| T^* q \|_H = \sup_{v\in H} \frac{(T^* q, v)}{\| v \|_H} = \sup_{v\in H} \frac{(Pv ,q)}{\| v \|_H} \geq \frac{(Pq, q)}{\| q \|_H} = \| q\|_H \qquad \text{ for all } q\in M .
\] \begin{proof}(of Theorem \ref{mainthm}) \begin{enumerate} \item[$(\Leftarrow )$] Since \[ \langle \nabla J(u) , v \rangle = \langle T^* p , v \rangle = \langle T v , p \rangle = 0 \qquad \text{ for } v\in \text{Ker}\, T , \] we see, using Lemma \ref{lemma1}, that $u \in \text{Ker}\, T$ is a solution of (\ref{minimise}). \item[$(\Rightarrow )$] From (\ref{c}) we see that $T^*$ is injective. Therefore $T^*$ has a bounded inverse $T ^{-*} : \mathcal{R} (T^*) \to M$ and $\| T ^{-*} \| \leq \frac{1}{C}$. Hence $T^* : M \to \mathcal{R} (T^*) $ is an isomorphism. In particular $\mathcal{R} (T^* )$ is closed in $H^*$. From the Banach Closed Range Theorem (see, for example, Yosida \cite{yosida}, pp. 205-208) we get \[ \mathcal{R} (T^* ) = \left( \text{Ker} \, T \right)^\circ . \] Hence, if $u \in \text{Ker}\, T$ is a solution of (\ref{minimise}), then Lemma \ref{lemma1} gives $\nabla J(u) \in (\mathrm{Ker} \, T)^\circ = \mathcal{R} (T^*)$, that is there exists a unique $ p \in M$ such that $\nabla J(u) = T^* p$. \qedhere \end{enumerate} \end{proof} \section{Pressure function in the stationary Stokes equations}\label{p_in_SE_section} We now turn to the stationary Stokes equations, \begin{eqnarray*} -\Delta {u} + \nabla p &=& f \qquad \text{ in } \Omega ,\\ \text{div} \, {u} &=& 0\qquad \text{ in } \Omega , \\ u&=&0 \qquad \text{ on } \partial \Omega , \end{eqnarray*} where $\Omega \subset \mathbb{R}^n$ is a smooth domain, ${u} : \Omega \to \mathbb{R}^n$ denotes the velocity of the fluid, $p: \Omega \to \mathbb{R}$ denotes the pressure and $f: \Omega \to \mathbb{R}^n$ is the density of forces acting on the fluid (e.g. the gravitational force). The stationary Stokes equations govern the steady flow of a viscous, incompressible fluid.
The weak formulation of this problem is to find $u\in V$ and $p\in L^2_0 (\Omega )$ such that \begin{equation}\label{stokes} (\nabla u , \nabla v ) - (p, \mathrm{div} \, v ) = (f,v) \qquad \text{ for all } v\in H^1_0 (\Omega ), \end{equation} where \[ V\coloneqq \left\lbrace {v} \in H_0^1 (\Omega ) \colon \mathrm{div} \, {v} =0 \right\rbrace , \quad L^2_0 (\Omega ) \coloneqq \left\lbrace q\in L^2 (\Omega ) \, \colon \, \int_\Omega q =0 \right\rbrace \] and $(\cdot , \cdot )$ denotes the $L^2$ inner product (for either scalar, vector or matrix functions). We will show that the problem \eqref{stokes} is equivalent to finding a minimiser $u\in V$ of the problem \begin{equation}\label{minstokes} J(u) = \min_{v \in V} J(v), \end{equation} where \begin{equation}\label{J} J(v )\coloneqq \frac{1}{2} ( \nabla {v}, \nabla v ) - (f , {v}) . \end{equation} (Note that this formulation does not include $p$.) Moreover we will show that the pressure function $p$ is the Lagrange multiplier of the problem \eqref{minstokes}. Indeed, letting $H\coloneqq H^1_0(\Omega )$ and $M\coloneqq L^2_0 (\Omega )$ we see that $J$ is a convex and differentiable functional on $H$ with \[\nabla J(v) = (\nabla {v} ,\nabla ( \cdot ) ) - (f , \cdot ) \in H^* \] for all $v$. Furthermore letting \[ T\colon H \to M^* \cong L^2_0,\qquad \langle Tv , q \rangle \coloneqq (\mathrm{div } \, v , q ) \quad \text{ for } v\in H, q\in M \] we see that $T$ is a bounded linear operator and $V=\mathrm{Ker}\, T$. Moreover $T^* \colon M\to H^*$ is such that $\langle T^* q, {v} \rangle = \langle T {v} , q \rangle = ( \text{div} \, {v} , q ) = - \langle \nabla q , v \rangle $ for $q \in M$, ${v} \in H$, that is \[ T^* q = -\nabla q \quad \text{ as an element of } H^* . \] The condition \eqref{c} follows for such $T^*$ from the well-known inequality $\| q \|_{L^2} \leq C \| \nabla q \|_{H^{-1} }$ for $q\in L^2_0 (\Omega )$ (see, for example Temam \cite{temam}, pp. 10-11).
Therefore Theorem \ref{mainthm} gives that $u\in V$ is a solution to the minimisation problem \eqref{minstokes} if and only if there exists $p\in L^2_0 (\Omega )$ such that \[ T^* p = \nabla J(u) = (\nabla {u} ,\nabla ( \cdot ) ) - (f , \cdot ), \] that is, the weak formulation \eqref{stokes} of the steady Stokes equations holds. Note also the similarity of the last equality with \eqref{ex2row}. \end{document}
\begin{document} \title{Stabilizer Quantum Error Correction with Qubus Computation} \author{Casey R. Myers}\email{[email protected]} \affiliation{Department of Physics and Astronomy, and Institute for Quantum Computing, University of Waterloo, ON, N2L 3G1, Canada} \author{Marcus Silva}\email{[email protected]} \affiliation{Department of Physics and Astronomy, and Institute for Quantum Computing, University of Waterloo, ON, N2L 3G1, Canada} \author{Kae Nemoto} \affiliation{ National Institute of Informatics, Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan} \author{William J. Munro} \affiliation{Hewlett-Packard Laboratories, Filton Road, Stoke Gifford, Bristol, BS34 8QZ, UK} \affiliation{ National Institute of Informatics, Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan} \begin{abstract} In this paper we investigate stabilizer quantum error correction codes using controlled phase rotations of strong coherent probe states. We explicitly describe two methods to measure the Pauli operators which generate the stabilizer group of a quantum code. First, we show how to measure a Pauli operator acting on physical qubits using a single coherent state with large average photon number, displacement operations, and photon detection. Second, we show how to measure the stabilizer operators fault-tolerantly by the deterministic preparation of coherent cat states along with one-bit teleportations between a qubit-like encoding of coherent states and physical qubits. \end{abstract} \maketitle The question of which physical system is best suited for quantum information processing is still open, each implementation proposal having strengths and weaknesses. In some systems (such as optics) it is difficult to make qubits interact, so that the two-qubit gates needed for universal computation are difficult to implement. One scheme, proposed by Gottesman {\em et al.}~\cite{Gottesman99}, circumvents the need to make qubits interact directly by using a modified teleportation protocol.
A generalization of this leads to the cluster state proposal of Raussendorf {\em et al.}~\cite{Raussendorf01}, where a large entangled state is prepared offline, and computation is performed by a sequence of single qubit measurements which depend on the outcomes of previous measurements. A different scheme that bypasses the need for qubits to interact directly was proposed by Nemoto {\em et al.}~\cite{Nemoto04,Nemoto05}. This scheme shows that, by inducing a phase on a large coherent state bus mode which depends on the logical state of the physical qubits, one can implement a near-deterministic CNOT gate between the qubits. Coherent states are particularly useful because of the ease with which they may be produced, e.g. with lasers or Bose-Einstein condensates. Further developments have shown more direct methods to perform two-qubit gates with bus modes, termed {\em qubus computation}~\cite{Spiller06}. If qubus computation is to be seriously considered for physical implementation, a full analysis of the propagation of errors should be undertaken. The starting point for these considerations is whether we can perform quantum error correction (QEC) on qubits efficiently. In particular, can one measure the syndromes for a given stabilizer code directly with controlled rotations (CRs) and strong coherent probe beams? Recent work by Yamaguchi {\em et al.}~\cite{Yamaguchi05} demonstrates how to measure the syndromes for {\em some} stabilizer codes using these tools. They show that the stabilizers for the three-qubit bit-flip code can be measured directly with CRs and a single strong coherent bus mode. The stabilizers for Shor's 9-qubit code~\cite{Shor96} can also be measured, showing that it is possible to correct for any error on a single qubit in an encoded block. The purpose of this paper is to generalise the results of Yamaguchi {\em et al.} and demonstrate how CRs can be used to implement quantum error correction with {\em any} possible stabilizer code.
We will describe two schemes to measure the syndromes of an arbitrary weight-$n$ Pauli operator, using the stabilizer operators of the seven-qubit code as a concrete example for each one of these schemes. The first scheme uses a single strong coherent probe beam, a quadratic number of CRs, a linear number of coherent displacements, and a photon number measurement. This scheme can be modified to use homodyne measurement at the cost of a slightly larger number of CRs and coherent displacements. The second scheme we describe is a fault-tolerant approach to the measurement of the Pauli operators, which requires a linear number of strong coherent pulses, CRs and detectors. Although we focus on the 7-qubit code -- which has stabilizer generators with weight 4 -- for each of these schemes we describe how to generalise to Pauli operators of weight $n$. {\em Background ---} In~\cite{Yamaguchi05} it was shown that the stabilizers for the 3-qubit bit-flip code ($\ket{0}$$\to$$\ket{000}$, $\ket{1}$$\to$$\ket{111}$) could be measured with the parity gate depicted in Fig.~\ref{YamaguchiGHZ}a. \begin{figure} \caption{\footnotesize {\bf (a)}\label{YamaguchiGHZ}} \end{figure} It can be seen that this circuit is a parity gate when we consider its effect on the input state $\ket{\psi_{\text{in}}}= \bigl(a_{0}\ket{00}+a_{1}\ket{01}+ a_{2}\ket{10}+a_{3}\ket{11}\bigr)\ket{\alpha}$, where $\ket{\alpha}$ is a coherent bus mode. The effect of the CRs is to apply a phase to the coherent beam if our data qubit is $\ket{1}$ and leave it alone otherwise: $\bigl(a\ket{0}+b\ket{1}\bigr)\ket{\alpha}\to a\ket{0}\ket{\alpha}+b\ket{1}\ket{\alpha e^{i\theta}}$. Before the detector $D_{2}$ in Fig.~\ref{YamaguchiGHZ}a, the state $\ket{\psi_{\text{in}}}$ has evolved to $\bigl(a_{0}\ket{00}+a_{3}\ket{11}\bigr)\ket{\alpha}+ a_{1}\ket{01}\ket{\alpha e^{-i\theta}}+a_{2}\ket{10}\ket{\alpha e^{i\theta}}$.
When $D_{2}$ is a homodyne detection along the $x$-quadrature we are able to distinguish $a_{0}\ket{00}+a_{3}\ket{11}$ from $a_{1}\ket{01}+a_{2}\ket{10}$, since a homodyne measurement of $\ket{\alpha}$ along the $x$-quadrature is equivalent to the projection $\bk{x}{\alpha}$. That is, $\ket{\alpha e^{\pm i\theta}}$ are indistinguishable when we homodyne detect along the $x$-quadrature. This is the basis of the CNOT shown in~\cite{Nemoto04}. With two parity gates we can measure the Pauli operators $ZZI$ and $IZZ$. That is, one parity gate is applied to qubits 1 and 2 to measure $ZZI$ while the second parity gate is applied to qubits 2 and 3 to measure $IZZ$, as shown in Fig.~\ref{YamaguchiGHZ}b. The state before the application of the parity gates is $\ket{\overline{\psi}_{\text{in}}}= \bigl(c_{0}\ket{000}+c_{1}\ket{111}\bigr)\ket{\alpha}\ket{\alpha}$. There are four cases to consider: no error, $\ket{\overline{\psi}_{\text{in}}}$; an error on qubit 1, $XII\ket{\overline{\psi}_{\text{in}}}$; an error on qubit 2, $IXI\ket{\overline{\psi}_{\text{in}}}$; an error on qubit 3, $IIX\ket{\overline{\psi}_{\text{in}}}$. We can see what the effect of a bit flip error on each of the modes is by considering the state $\ket{abc}\ket{\alpha}\ket{\alpha}$, where $a,b,c\in \{0,1\}$. Directly before homodyne detection in Fig.~\ref{YamaguchiGHZ}b, $\ket{abc}\ket{\alpha}\ket{\alpha}$ becomes $\ket{abc}\ket{\alpha e^{i(a-b)\theta}}\ket{\alpha e^{i(c-b)\theta}}$. When we measure the probe states to be $\ket{\alpha e^{\pm im\theta}}\ket{\alpha e^{\pm in\theta}}$, where $m,n\in\{0,\pm1\}$, we know whether there was no error ($m,n=0$) or a single bit-flip error, the location of the bit flip also being identified by the values of $m$ and $n$. Similar methods can be applied to measure the stabilizer operators for Shor's 9-qubit code. The natural question that arises is: can we use techniques similar to those above to measure the syndromes for an arbitrary stabilizer code?
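As a sanity check on the syndrome bookkeeping above, the phases picked up by the two bus modes can be tracked classically, since for computational basis states only the integers $m=a-b$ and $n=c-b$ matter. A minimal sketch (illustrative only; the quantum states themselves are not simulated) confirms that the pair $(m,n)$ distinguishes "no error" from a bit flip on each of the three qubits:

```python
def syndrome(a, b, c):
    """Phase multiples (m, n) read off the two bus modes:
    mode 1 acquires (a-b)*theta, mode 2 acquires (c-b)*theta."""
    return a - b, c - b

def flip(bits, i):
    """Apply a bit flip (Pauli X) to qubit i of a classical bit string."""
    e = list(bits)
    e[i] ^= 1
    return tuple(e)

# Both codewords of the bit-flip code give (0, 0); each single-qubit
# flip gives a distinctive |m|, |n| pattern locating the error.
for word in [(0, 0, 0), (1, 1, 1)]:
    assert syndrome(*word) == (0, 0)
    for i in range(3):
        m, n = syndrome(*flip(word, i))
        assert (abs(m), abs(n)) == {0: (1, 0), 1: (1, 1), 2: (0, 1)}[i]
```

The same classical bookkeeping extends directly to the parity patterns used for Shor's 9-qubit code.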
{\em Larger codes ---} As a concrete example, consider the $[[7,1,3]]$ stabilizer code~\cite{Steane}. This code can correct a single arbitrary quantum error in any of the 7 qubits, and it has been used extensively in studies of fault-tolerance in quantum computers due to the fact that it allows for simple constructions of fault-tolerant encoded gates~\cite{Got98}. In order to detect which error has corrupted the data, one must measure six multiqubit Pauli operators which, up to qubit permutations and local unitaries, are equivalent to the Pauli operator $ZZZZ$, or the measurement of {\em only} the parity of 4 qubits. For an arbitrary stabilizer code, various multiqubit Pauli operators must be measured, each of which is always equivalent to a measurement of only the parity of a subset of qubits; thus it is sufficient to consider only multiqubit parity measurements in order to perform quantum error correction with stabilizer codes. {\em Single Coherent State Pulse ---} In order to measure $ZZZZ$ with CRs, we can start with the encoded state $\bigl(c_{0}\ket{0_{L}}+c_{1}\ket{1_{L}}\bigr)\ket{\alpha}$, and design a circuit that gives us $\ket{\alpha_{1}}$ when there was no error (even parity) and $\ket{\alpha_{2}}$ when there was an error (odd parity), where $\alpha_{1}\neq \alpha_{2}$. Ideally we would want to do this with just one coherent probe beam, four CRs and a single homodyne detection, following a direct analogy with the circuit depicted in Fig.~\ref{YamaguchiGHZ}a. However, this is not possible. The best we can do is have some even states go to $\ket{\alpha}$ and the rest go to $\ket{\alpha e^{\pm2i\theta}}$ while the odd states go to $\ket{\alpha e^{\pm i\theta}}$. The circuit that performs this is shown in Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}a.
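This five-point structure follows from simple bookkeeping: with the alternating $\theta,-\theta,\theta,-\theta$ pattern of CRs, a basis state $\ket{abcd}$ imprints the total phase $(a-b+c-d)\theta$ on the probe. A small sketch (classical tracking of the accumulated phase multiple only) confirms that even-parity states reach only the multiples $\{0,\pm2\}$ of $\theta$ and odd-parity states only $\{\pm1\}$:

```python
from itertools import product

# Controlled rotations with the alternating pattern theta, -theta,
# theta, -theta give the probe a total phase (a - b + c - d)*theta.
signs = [1, -1, 1, -1]

even, odd = set(), set()
for bits in product([0, 1], repeat=4):
    k = sum(s * x for s, x in zip(signs, bits))
    (even if sum(bits) % 2 == 0 else odd).add(k)

assert even == {-2, 0, 2}   # even parity: |alpha>, |alpha e^{+-2i theta}>
assert odd == {-1, 1}       # odd parity:  |alpha e^{+-i theta}>
```

This is why a single homodyne measurement at this stage cannot work: the even states occupy three distinct points, so distinguishing them would partially decode the logical state.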
\begin{figure} \caption{\footnotesize {\bf (a)}} \label{4qubitNoDisp4qubitNoDispPhase} \end{figure} Notice that in phase space we would have five points -- three for the even states ($\ket{\alpha}, \ket{\alpha e^{\pm2i\theta}}$) and two for the odd states ($\ket{\alpha e^{\pm i\theta}}$) -- as can be seen in Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}b. If we were to homodyne detect the probe beam at this stage, we would partially decode our encoded state $c_{0}\ket{0_{L}}+c_{1}\ket{1_{L}}$ since we can distinguish the state $\ket{\alpha}$ from $\ket{\alpha e^{\pm2i\theta}}$. The problem now becomes determining what operations must be done before we homodyne detect so that we only distinguish between states of different parity in the first four qubits, and nothing more. It turns out that either homodyne or photon number detection can be used, depending on the operations applied before the measurement. {\em Photon number measurement ---} If we incorporate displacements along with Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}a we can take the five points in phase space to just three. Displacements of a state can be easily implemented by mixing the state with a large coherent state on a weak beam splitter, with the coherent state's amplitude and the beam splitter reflectivity determining the displacement~\cite{Displ}. If we have three displacements and three applications of Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}a, as in Fig.~\ref{4qubit2DispPhase}a, we find that $\ket{\text{odd}}\to\ket{-4\alpha\sin^{2}(\theta/2)(2 \sin^{2}(\theta)+\cos(\theta))}$ and $\ket{\text{even}}\to\ket{\pm 2\alpha\sin^{2}(\theta)(2\cos(\theta)-1)}$, as depicted in Fig.~\ref{4qubit2DispPhase}b. The displacements that accomplish this are $D(\beta_{1})=D( -4\alpha \cos^{2}(\theta/2)\bigl(2\cos(\theta)-1\bigr))$, $D(\beta_{2})=D(\alpha\bigl(1+2\cos(\theta)+2\cos(3\theta)\bigr))$ and $D(\beta_{3})=D(\alpha\bigl(\cos(2\theta)-\cos(3\theta)-\cos(\theta)-1\bigr))$.
\begin{figure} \caption{\footnotesize {\bf (a)}} \label{4qubit2DispPhase} \end{figure} Notice that the red and black circles in Fig.~\ref{4qubit2DispPhase}b are equidistant from the p-axis. We can thus perform a photon number measurement on the probe beam to determine whether we had an odd or even state. In order for a photon number measurement to distinguish the odd from the even states we require $\alpha\theta^{2}\gg 1/\sqrt{3}$. We can use this method to measure the parity for a state of any size. If we have $n$ qubits then we can have at best $n+1$ points in phase space, using the $\theta, -\theta, \theta, -\theta$ pattern for the CRs shown in Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}a. Using displacements and a photon number detector we are able to measure the parity. In general, if $n$ is even we need $n-1$ displacements and $n^{2}-n$ CRs with a photon number measurement. When $n$ is odd, after the application of the circuit analogous to Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}a of size $n$, we will have the point $\ket{\alpha e^{i(2n-1)\theta}}$ in phase space without the point $\ket{\alpha e^{-i(2n-1)\theta}}$. So we need an extra displacement to move the non-symmetric point. If $n$ is odd we need $n$ displacements and $n^{2}$ CRs with a photon number measurement. For this method to work we need the use of a number discriminating photo-detector. In practice it is well known that homodyne detection is much more precise than number discriminating photo-detectors. For this reason, we describe how to measure a Pauli operator using homodyne detection. {\em Homodyne detection ---} Consider the $ZZZZ$ case again. After applying Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}a we have five points in phase space. Ideally we want $\ket{\alpha}$ and $\ket{\alpha e^{\pm2i\theta}}$ to become one point in phase space, say $R_{1}+iR_{2}$, and $\ket{\alpha e^{\pm i\theta}}$ to become one point, say $R_{3}+iR_{4}$. If this were possible, homodyne detection could be used.
This can be done with five displacements and six applications of Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}a, requiring 10 simultaneous equations to be solved. Without loss of generality we set $R_{2}=R_{4}=R_{1}=0$. The equations to be solved are $e^{4i\beta\theta}A+e^{3i\beta\theta}B+e^{2i\beta\theta}C+e^{i\beta\theta}D+E=e^{-i\beta\theta}\alpha_{\beta}-e^{5i\beta\theta}$, where $\alpha_{0,\pm2}=0$ and $\alpha_{\pm 1}=R_{3}$. After solving these equations we find that $A$, $B$, $C$, $D$ and $E$ scale as $-R_{3}/\theta^{4}$. We are free to choose the distance between the origin and $R_{3}$ to be arbitrarily large, at the expense of using arbitrarily large displacements. We can also use the above method to distinguish the parity of any given state of $n$ qubits. If we have $n$ qubits we have $n+1$ points in phase space, using a circuit similar to Fig.~\ref{4qubitNoDisp4qubitNoDispPhase}a. In order to distinguish the parity we need $n+1$ displacements and $n(n+2)$ CRs. {\em Fault-Tolerance ---} These two methods to measure weight $n$ Pauli operators cannot be used for fault-tolerant quantum computation. If there is an error on the coherent probe mode during one of the CRs, say photon loss, it would be transferred to a phase error in each of the physical qubits it interacts with afterwards -- that is, a single fault can cause a number of errors which is greater than the number of errors the code can correct. For this reason we now look at measuring the syndromes of stabilizers fault-tolerantly. Shor~\cite{Shor96} first described how to fault-tolerantly measure the generators of the stabilizer group of a quantum error correcting code using ancilla GHZ states $\bigl(\ket{0}^{\otimes n}+\ket{1}^{\otimes n}\bigr)/\sqrt{2}$, CNOT's and Hadamards. For example, in order to measure the Pauli operator $ZZZZ$ (which is equivalent to measuring the parity of 4 qubits and nothing else), we would use the circuit shown in Fig.~\ref{ShorCirc}a. 
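The linear system above is easy to probe numerically. The following sketch (ours, taking the amplitudes in units of $\alpha$ and $R_1=R_2=R_4=0$, with the equation transcribed as printed) solves for $A,\dots,E$ at the five points $\beta\in\{0,\pm1,\pm2\}$ and confirms that the required displacements blow up rapidly as $\theta$ shrinks, consistent with the stated $-R_{3}/\theta^{4}$ scaling:

```python
# Sketch: solve e^{4 i b t} A + e^{3 i b t} B + e^{2 i b t} C + e^{i b t} D + E
#           = alpha_b e^{-i b t} - e^{5 i b t},   b = -2,...,2,
# with alpha_{0,+-2} = 0 and alpha_{+-1} = R3 (amplitudes in units of alpha).
import numpy as np

R3 = 1.0
betas = np.arange(-2, 3)
alpha = np.array([0, R3, 0, R3, 0])           # alpha_beta for beta = -2,...,2

def system(theta):
    x = np.exp(1j * betas * theta)
    return np.vander(x, 5), alpha / x - x**5  # columns x^4, x^3, x^2, x, 1

def coeffs(theta):
    M, b = system(theta)
    return np.linalg.solve(M, b)              # (A, B, C, D, E)

M, b = system(0.1)
assert np.allclose(M @ coeffs(0.1), b)        # all five equations are satisfied
# Halving theta makes |A| grow by roughly 2^4 = 16, as the 1/theta^4 scaling predicts.
assert abs(coeffs(0.05)[0]) > 10 * abs(coeffs(0.1)[0])
```

This also illustrates the trade-off mentioned in the text: pushing $R_{3}$ far from the origin, or working at small $\theta$, forces arbitrarily large displacements.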
\begin{figure} \caption{\footnotesize {\bf(a)}} \label{ShorCirc} \end{figure} To fault-tolerantly measure the stabilizer group generators of a QEC code with CRs we make three modifications to Fig.~\ref{ShorCirc}a. First, instead of using $\ket{0}$ and $\ket{1}$ for the ancilla, we use the coherent states $\ket{\alpha}$ and $\ket{\alpha e^{i\theta}}$, respectively. In that case, the ancilla GHZ state becomes $\bigl(\ket{\alpha}^{\otimes n}+\ket{\alpha e^{i\theta}}^{\otimes n}\bigr)/\sqrt{2}$. Second, we replace the CNOT's with CRs, which will cause a phase shift if the physical state is $\ket{1}$ and do nothing otherwise, i.e. $\ket{1}\ket{\alpha}\to\ket{1}\ket{\alpha e^{-i\theta}}$ and $\ket{1}\ket{\alpha e^{i\theta}}\to\ket{1}\ket{\alpha}$. We also need to replace the Hadamards with some quantum operation ${\widetilde{H}}$ which will perform the mapping ${\widetilde{H}}\ket{\alpha}\approx \bigl(\ket{\alpha}+\ket{\alpha e^{i\theta}}\bigr)/\sqrt{2}$ and ${\widetilde{H}}\ket{\alpha e^{i\theta}}\approx \bigl(\ket{\alpha}-\ket{\alpha e^{i\theta}}\bigr)/\sqrt{2}$. Third, we replace the qubit measurements with some sort of optical measurement that distinguishes between $\ket{\alpha}$ and $\ket{\alpha e^{\pm i\theta}}$ but not between $\ket{\alpha e^{i\theta}}$ and $\ket{\alpha e^{-i\theta}}$ -- this is what we call a ${\widetilde{Z}}$ measurement. This new circuit is depicted in Fig.~\ref{ShorCirc}b. The ${\widetilde{Z}}$ measurements can be performed directly by homodyne detection, or by displacements followed by photon counting detectors -- in both cases, using techniques outlined earlier in this paper. What remains to be specified is the preparation of the coherent cat state $(\ket{\alpha}^{\otimes n}+\ket{\alpha e^{i\theta}}^{\otimes n})/\sqrt{2}$ and the implementation of the ${\widetilde{H}}$ operation.
One solution for the cat state preparation is to use one bit teleportations~\cite{Zhou00} which translate states from the $\ket{0}/\ket{1}$ basis to the $\ket{\alpha}/\ket{\alpha e^{\pm i\theta}}$ basis. Preparation of the cat state is done by using the one bit teleportation in Fig.~\ref{one-teleWNL}a to prepare $\ket{\sqrt{n}\alpha}+\ket{\sqrt{n}\alpha e^{i\theta}}$ from the state $\bigl(\ket{0}+\ket{1}\bigr)/\sqrt{2}$ and the coherent state $\ket{\sqrt{n}\alpha}$, and then sending this state into an $n$-port symmetric beam-splitter~\cite{Gilchrist04,Ralph03}. In principle, we are required to correct the state before the beam-splitter by applying the transformation ${\widetilde{Z}}$ such that ${\widetilde{Z}}\ket{\alpha}\approx\ket{\alpha}$ while ${\widetilde{Z}}\ket{\alpha e^{i\theta}}\approx-\ket{\alpha e^{i\theta}}$. However, we can avoid explicitly applying this transformation by keeping track of this necessary correction -- what is called the {\em Pauli frame}~\cite{PauliFrame} -- and compensating for it in subsequent measurements. Similarly, to perform the ${\widetilde{H}}$ (the approximate Hadamard on coherent state logic) we first use Fig.~\ref{one-teleWNL}b to teleport the quantum state from the bus to a qubit, then perform the Hadamard transformation and finally teleport back to the coherent state logic using the circuit shown in Fig.~\ref{one-teleWNL}a. \begin{figure} \caption{\footnotesize Approximate one bit teleportation protocols~\cite{Zhou00}} \label{one-teleWNL} \end{figure} These teleportations, when performed back-to-back to teleport a qubit state to another qubit, can also be used as {\em leakage reduction units} to reduce leakage faults to regular faults~\cite{AT07}. The resources required to measure a weight $n$ Pauli operator are $3n+1$ CRs, $n+1$ ancillary qubits, $2n$ ${\widetilde{Z}}$ measurements and $n+1$ qubit measurements.
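The resource counts just stated can be tabulated as a function of the operator weight. This is a trivial bookkeeping sketch (ours), useful when comparing against the quadratic-in-$n$ counts of the single-probe schemes:

```python
# Sketch: resource counts for the fault-tolerant measurement of a weight-n
# Pauli operator, as stated above: 3n+1 CRs, n+1 ancillary qubits,
# 2n Z-tilde measurements, and n+1 qubit measurements.

def ft_resources(n):
    """Return (CRs, ancillary qubits, Z-tilde measurements, qubit measurements)."""
    return 3 * n + 1, n + 1, 2 * n, n + 1

# For the ZZZZ measurement (n = 4):
assert ft_resources(4) == (13, 5, 8, 5)
# Every count grows only linearly in n, unlike the n^2 CRs of the
# single-probe schemes described earlier.
assert all(r <= 3 * 50 + 1 for r in ft_resources(50))
```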
{\em Noisy ancillas --- } If the probability of error at each gate is bounded by $\epsilon$, transversal operations and encoding can ensure that the probability of an uncorrectable error is $O(\epsilon^2)$ instead of $O(\epsilon)$. An error during cat state preparation may lead to correlated $X$-like errors in the cat state with probability $O(\epsilon)$, which can lead to uncorrectable errors in the encoded data during the measurement of the Pauli operator, thus defeating the purpose of encoding the data for fault-tolerant quantum computation. In order to avoid this, one can verify the integrity of the cat state via non-destructive state measurement~\cite{Pre97,AGP}. When using CRs and coherent beam probes, this translates to preparing an extra copy of the cat state, which remains in coherent state logic, interacting with the qubit GHZ state transversally with controlled $-\theta$ rotations, and ${\widetilde{Z}}$ measuring each mode of the ancillary cat state. By performing classical error correction on the measurement outcomes, one can deduce the locations of $X$-like errors in either the GHZ state or the ancillary cat state. If the data is encoded in a code that can correct a single error, repeating this procedure with another ancillary cat state allows for the inference of which locations in the qubit GHZ state have $X$ errors with high enough probability to ensure uncorrectable errors are only introduced into the data with probability $O(\epsilon^2)$~\cite{Pre97}, so that Pauli measurements with a verified ancilla can be used for fault-tolerant quantum computation. Overall, the overhead for each attempt of measuring a weight $n$ Pauli operator consists of $2(n+1)$ CRs, $2$ ancillary qubit preparations and measurements, and $2n$ ${\widetilde{Z}}$ measurements. 
$Z$-like errors (including dephasing of coherent superpositions, one of the consequences of photon loss in the CRs) do not lead to errors in the encoded data, just errors in the outcome of the Pauli operator measurement. If error correction is to be performed, the Pauli operator measurement must be repeated $3$ times, and a majority vote of the outcomes is taken, in order to ensure that the measurement outcome is reliable~\cite{Pre97}. Some of the systematic errors in the probe beams, such as phase rotation or attenuation (also consequences of photon loss in the CRs), can be partially compensated for by additional linear-optics elements and by adjusting the ${\widetilde{Z}}$ measurements individually to minimize additional $X$ errors. Moreover, errors in the transversal operations during the preparation of the cat state are independent, and thus do not need special consideration during this verification stage -- they do contribute to $\epsilon$, however, and are thus crucial for fault-tolerance threshold calculations. {\em Discussion ---} We have shown two schemes to measure the syndromes of an arbitrary weight $n$ Pauli operator. The first scheme uses a single strong coherent probe beam, a quadratic number of CRs, a linear number of coherent displacements, and a photon number or homodyne measurement -- however, this scheme is not fault-tolerant. The second scheme we described is fault-tolerant, and the amount of resources scales linearly with the weight of the Pauli operator. This demonstrates how it is in principle possible to perform general fault-tolerant quantum computation in the qubus architecture. It is worth noting that we could have easily used controlled displacements in the place of CRs in the methods presented here. {\em Acknowledgments ---} We would like to thank P. Aliferis and R. Van Meter for valuable discussions. We are supported in part by NSERC, ARO, CIAR, MITACS, the Lazaridis Fellowship, MEXT in Japan and the EU project QAP. \end{document}
\begin{document} \begin{frontmatter} \title{Size of edge-critical uniquely 3-colorable planar graphs \footnote{Supported by 973 Program of China 2013CB329601, 2013CB329603, National Natural Science Foundation of China Grant 61309015 and National Natural Science Foundation of China Special Equipment Grant 61127005. }} \author[PKU]{Zepeng Li}\ead{[email protected]} \author[PKU]{Enqiang Zhu}\ead{[email protected]} \author[CDU1,CDU2]{Zehui Shao}\ead{[email protected]} \author[PKU]{Jin Xu}\ead{[email protected]} \address[PKU]{ Key Laboratory of High Confidence Software Technologies, Peking University, Beijing, 100871, China} \address[CDU1]{Key Laboratory of Pattern Recognition and Intelligent Information Processing, Institutions of Higher Education of Sichuan Province, China} \address[CDU2]{School of Information Science and Technology, Chengdu University, Chengdu, 610106, China} \begin{abstract} A graph $G$ is \emph{uniquely k-colorable} if the chromatic number of $G$ is $k$ and $G$ has only one $k$-coloring up to permutation of the colors. A uniquely $k$-colorable graph $G$ is edge-critical if $G-e$ is not a uniquely $k$-colorable graph for any edge $e\in E(G)$. Mel'nikov and Steinberg [L. S. Mel'nikov, R. Steinberg, One counterexample for two conjectures on three coloring, Discrete Math. 20 (1977) 203-206] asked to find an exact upper bound for the number of edges in an edge-critical 3-colorable planar graph with $n$ vertices. In this paper, we give some properties of edge-critical uniquely 3-colorable planar graphs and prove that if $G$ is such a graph with $n(\geq6)$ vertices, then $|E(G)|\leq \frac{5}{2}n-6 $, which improves the upper bound $\frac{8}{3}n-\frac{17}{3}$ given by Matsumoto [N. Matsumoto, The size of edge-critical uniquely 3-colorable planar graphs, Electron. J. Combin. 20 (3) (2013) $\#$P49].
Furthermore, we find some edge-critical uniquely 3-colorable planar graphs which have $n(=10,12, 14)$ vertices and $\frac{5}{2}n-7$ edges. \end{abstract} \begin{keyword} planar graph; unique coloring; uniquely $3$-colorable planar graph; edge-critical \MSC 05C15 \end{keyword} \end{frontmatter} \section{Introduction} A graph $G$ is \emph{uniquely k-colorable} if $\chi(G)=k$ and $G$ has only one $k$-coloring up to permutation of the colors, where the coloring is called a unique $k$-coloring. In other words, all $k$-colorings of $G$ induce the same partition of $V(G)$ into $k$ independent sets. In addition, uniquely colorable graphs may be defined in terms of their chromatic polynomials, whose study was initiated by Birkhoff \cite{Birkhoff1912} for planar graphs in 1912 and, for general graphs, by Whitney \cite{Whitney1932} in 1932: a graph $G$ is uniquely $k$-colorable if and only if its chromatic polynomial satisfies $P(G,k)=k!$. For a discussion of chromatic polynomials, see Read \cite{Read1968}. A uniquely $k$-colorable graph $G$ is \emph{edge-critical} if $G-e$ is not uniquely $k$-colorable for any edge $e\in E(G)$. Uniquely colorable graphs were first defined and studied by Harary and Cartwright \cite{Harary1968} in 1968. They proved the following theorem. \begin{theorem}(Harary and Cartwright \cite{Harary1968}) \label{theorem1.1} Let $G$ be a uniquely $k$-colorable graph. Then for any unique $k$-coloring of $G$, the subgraph induced by the union of any two color classes is connected. \end{theorem} As a corollary of Theorem \ref{theorem1.1}, it can be seen that a uniquely $k$-colorable graph $G$ has at least $(k-1)|V(G)|-{k \choose 2}$ edges. Furthermore, if a uniquely $k$-colorable graph $G$ has exactly $(k-1)|V(G)|-{k \choose 2}$ edges, then $G$ is edge-critical. There are many references on uniquely colorable graphs. For example see Chartrand and Geller \cite{Chartrand1969}, Harary, Hedetniemi and Robinson \cite{Harary1969} and Bollob\'{a}s \cite{Bollob¨¢s1978}.
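The definitions above are easy to test by brute force on small graphs. The following sketch (our own illustration, not from the paper) counts the distinct partitions of $V(G)$ induced by proper 3-colorings, and checks the edge bound $(k-1)|V(G)|-\binom{k}{2}$ from Theorem 1.1 for $k=3$ on two triangles glued along an edge:

```python
# Sketch: brute-force unique 3-colorability.  A graph is uniquely 3-colorable
# iff all proper 3-colorings induce the same partition of the vertex set.

from itertools import product

def proper_partitions(n, edges, k=3):
    """Set of vertex partitions induced by proper k-colorings of {0,...,n-1}."""
    parts = set()
    for col in product(range(k), repeat=n):
        if all(col[u] != col[v] for u, v in edges):
            parts.add(frozenset(frozenset(v for v in range(n) if col[v] == c)
                                for c in range(k)))
    return parts

# Two triangles sharing the edge {1,2}: uniquely 3-colorable, and it meets the
# lower bound (k-1)|V| - C(k,2) = 2*4 - 3 = 5 edges, hence it is edge-critical.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
assert len(proper_partitions(4, edges)) == 1
assert len(edges) == 2 * 4 - 3
```

Dropping the edge $(2,3)$ leaves a graph with two distinct coloring partitions, illustrating edge-criticality.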
Chartrand and Geller \cite{Chartrand1969} in 1969 started to study uniquely colorable planar graphs. They proved that uniquely 3-colorable planar graphs with at least 4 vertices contain at least two triangles, uniquely 4-colorable planar graphs are maximal planar graphs, and uniquely 5-colorable planar graphs do not exist. Aksionov \cite{Aksionov1977} in 1977 improved the lower bound for the number of triangles in a uniquely 3-colorable planar graph. He proved that a uniquely 3-colorable planar graph with at least 5 vertices contains at least 3 triangles and gave a complete description of uniquely 3-colorable planar graphs containing exactly 3 triangles. For an edge-critical uniquely $k$-colorable planar graph $G$, if $k=2$, then it is easy to deduce that $G$ is a tree and has exactly $|V(G)|-1$ edges. If $k=4$, then $G$ is a maximal planar graph and has exactly $3|V(G)|-6$ edges by Euler's Formula. Therefore, it is sufficient to consider the size of uniquely $3$-colorable planar graphs. We denote by $\mathcal{U}_E$ the set of all edge-critical uniquely $3$-colorable planar graphs and by $size(n)$ the upper bound of the size of edge-critical uniquely $3$-colorable planar graphs with $n$ vertices. In 1977 Aksionov \cite{Aksionov1977} conjectured that $size(n)=2n-3$. However, in the same year, Mel'nikov and Steinberg \cite{Mel'nikov1977} disproved the conjecture by constructing a counterexample $H$, which has 16 vertices and 30 edges. Moreover, they proposed the following problem: \begin{problem}(Mel'nikov and Steinberg \cite{Mel'nikov1977})\label{problem1} Find an exact upper bound for the number of edges in an edge-critical 3-colorable planar graph with $n$ vertices. Is it true that $size(n)=\frac{9}{4}n-6$ for any $n\geq 12$? \end{problem} Recently, Matsumoto \cite{Matsumoto2013} constructed an infinite family of edge-critical uniquely 3-colorable planar graphs with $n$ vertices and $\frac{9}{4}n-6$ edges, where $n\equiv 0 (\textrm{mod}~4)$.
He also gave a non-trivial upper bound $\frac{8}{3}n-\frac{17}{3}$ for $size(n)$. In this paper, we give some properties of edge-critical uniquely 3-colorable planar graphs with $n$ vertices and improve the upper bound of $size(n)$ given by Matsumoto \cite{Matsumoto2013} to $\frac{5}{2}n-6$, where $n\geq 6$. Moreover, we give some edge-critical uniquely 3-colorable planar graphs which have $n(=10,12, 14)$ vertices and $\frac{5}{2}n-7$ edges. It follows that the conjecture of Mel'nikov and Steinberg \cite{Mel'nikov1977} is false because $\frac{5}{2}n-7> \frac{9}{4}n-6$ if $n\geq 12$. \section{Notation} Only finite, undirected and simple graphs are considered in this paper. For a planar graph $G=(V(G), E(G),F(G))$, $V(G)$, $E(G)$ and $F(G)$ are the sets of vertices, edges and faces of $G$, respectively. We denote by $\delta(G)$ and $\Delta(G)$ the \emph{minimum degree} and \emph{maximum degree} of a graph $G$, respectively. The degree of a vertex $v \in V(G)$, denoted by $d_{G}(v)$, is the number of neighbors of $v$ in $G$. The degree of a face $f\in F(G)$, denoted by $d_G(f)$, is the number of edges in its boundary, cut edges being counted twice. When no confusion can arise, $d_{G}(v)$ and $d_{G}(f)$ are simplified to $d(v)$ and $d(f)$, respectively. A face $f$ is a \emph{$k$-face} if $d_G(f)=k$ and a \emph{$\geq$k-face} if $d_G(f)\geq k$. Similar notation is used for cycles. We denote by $V_i(G)$ the set of vertices of $G$ with degree $i$ and by $V_{\geq i}(G)$ the set of vertices of $G$ with degree at least $i$, where $\delta(G)\leq i \leq \Delta(G)$. Similar notation is used for the set of faces of $G$. A $k$-wheel is the graph consisting of a single vertex $v$, a cycle $C$ with $k$ vertices, and $k$ edges from $v$ to each vertex of $C$. A planar (resp. outerplanar) graph $G$ is \emph{maximal} if $G+uv$ is not planar (resp. outerplanar) for any two nonadjacent vertices $u$ and $v$ of $G$.
Let $V_1$ and $V_2$ be two disjoint subsets of $V(G)$; we use $e(V_1,V_2)$ to denote the number of edges of $G$ with one end in $V_1$ and the other in $V_2$. In particular, if $V_1=\{v\}$ or $V_2=\{v\}$, we simply write $e(v,V_2)$ or $e(V_1,v)$ for $e(V_1,V_2)$, respectively. To \emph{contract} an edge $e$ of a graph $G$ is to delete the edge and then identify its ends. The resulting graph is denoted by $G/e$. Two faces $f_1$ and $f_2$ of $G$ are \emph{adjacent} if they have at least one common edge. A $k$-cycle $C$ is said to be a \emph{separating} $k$-\emph{cycle} in $G$ if the removal of $C$ disconnects the graph $G$. A \emph{k-coloring} of $G$ is an assignment of $k$ colors to $V(G)$ such that no two adjacent vertices are assigned the same color. Naturally, a $k$-coloring can be viewed as a partition $\{V_1,V_2,\cdots,V_k\}$ of $V(G)$, where $V_i$, the set of vertices assigned color $i$, is called a \emph{color class} of the coloring, $i=1,2,\cdots,k$. Two $k$-colorings $f$ and $f^\prime$ of $G$ are said to be \emph{distinct} if they produce two distinct partitions of $V(G)$ into $k$ color classes. A graph $G$ is \emph{k-colorable} if there exists a $k$-coloring of $G$, and the \emph{chromatic number} of $G$, denoted by $\chi(G)$, is the minimum number $k$ such that $G$ is $k$-colorable. Notation and terminology not mentioned here can be found in \cite{Bondy2008}. \section{Properties of edge-critical uniquely $3$-colorable planar graphs} Let $G$ be a 3-colorable planar graph and $f$ be a 3-coloring of $G$. It is easy to see that the restriction of $f$ to $G-e$ is a 3-coloring of $G-e$, where $e\in E(G)$. For convenience, we also say $f$ is a 3-coloring of $G-e$. If there exists a 3-coloring $f'$ of $G-uv$ such that $f'(u)\neq f'(v)$, then we say that $f'$ can be \emph{extended} to a 3-coloring of $G$. \begin{theorem}\label{theorem2.1} Let $G$ be a uniquely 3-colorable planar graph.
Then $G \in \mathcal{U}_E$ if and only if $G/ e$ is 3-colorable for any edge $e\in E(G)$. \end{theorem} \begin{prof} Suppose that $G \in \mathcal{U}_E$. Then, by definition, $G-e$ has at least two distinct 3-colorings for each $e=uv\in E(G)$. Since $G$ is uniquely 3-colorable, we conclude that there exists a 3-coloring $f$ of $G-e$ such that $f(u)=f(v)$. Hence $G/ e$ is 3-colorable. Conversely, suppose that $G \notin \mathcal{U}_E$. Then there exists an edge $e'=uv\in E(G)$ such that $G-e'$ is also a uniquely 3-colorable planar graph. Obviously, for any unique 3-coloring $f$ of $G$, we have $f(u)\neq f(v)$. So $G/ e'$ is not 3-colorable. This establishes Theorem \ref{theorem2.1}. \qed \end{prof} The following result is obtained by Theorem \ref{theorem2.1}. \begin{corollary}\label{corollary2.2} Let $G \in \mathcal{U}_E$ and $v\in V(G)$. If $v$ is incident with exactly one 4-face and all other faces incident with $v$ are triangular, then $d(v)$ is even. \end{corollary} \begin{prof} Suppose that the result is not true. Let $v_1, v_2, \cdots, v_{2k+1}$ be the neighbors of $v$ and $v_1, v, v_{2k+1}$ and $u$ be the vertices of the 4-face. Then the graph $G/ uv_1$ contains a $(2k+1)$-wheel. Hence $G/ uv_1$ is not 3-colorable, contradicting Theorem \ref{theorem2.1}. \qed \end{prof} \begin{theorem}\label{theorem2.3} Suppose that $G \in \mathcal{U}_E$ and $G_0$ is a subgraph of $G$. If $G_0$ is uniquely 3-colorable, then we have \begin{description} \item[(i)] $G_0 \in \mathcal{U}_E$; \item[(ii)] For any vertex $v\in V(G)\setminus V(G_0)$, $e(v,V(G_0))\leq 2$. \end{description} \end{theorem} \begin{prof} (i) Suppose that $G_0 \notin \mathcal{U}_E$. Then there exists an edge $e=uv\in E(G_0)$ such that $G_0-e$ is also uniquely 3-colorable. Let $f$ be a unique 3-coloring of $G$. Since $G \in \mathcal{U}_E$, the graph $G-e$ has a 3-coloring $f'$ which is distinct from $f$. Since $f(u)\neq f(v)$, we have $f'(u)\neq f'(v)$.
Thus, $f'$ can be extended to a 3-coloring of $G$. So $G$ has two distinct 3-colorings $f$ and $f'$, which contradicts $G \in \mathcal{U}_E$. (ii) Suppose that there exists a vertex $v\in V(G)\setminus V(G_0)$ such that $e(v,V(G_0))=3$. Let $f$ be a unique 3-coloring of $G$ and $v_1,v_2,v_3\in V(G_0)$ be the three neighbors of $v$ in $G$. Then at least two vertices among $v_1,v_2$ and $v_3$ receive the same color. We assume w.l.o.g. that $f(v_1)=f(v_2)$. Since $G \in \mathcal{U}_E$, the graph $G-vv_1$ has a 3-coloring $f'$ which is distinct from $f$. Since $G_0$ is uniquely 3-colorable and $f(v)\neq f(v_2)$, we have $f'(v)\neq f'(v_1)$. Thus, $f'$ can be extended to a 3-coloring of $G$. This is a contradiction. \qed \end{prof} \begin{corollary}\label{corollary2.4} Suppose that $G \in \mathcal{U}_E$ contains a sequence $T_1,T_2,\cdots,T_t$ of triangles satisfying that $T_i$ and $T_{i+1}$ have a common edge, where $i=1,2,\cdots, t-1$ and $t\geq2$. Let $v$ and $u$ be the vertices in $V(T_1)\backslash V(T_2)$ and $V(T_t)\backslash V(T_{t-1})$, respectively. Then $v\neq u$ and $vu\notin E(G)$. \end{corollary} \begin{prof} Let $v_1,v_2$ be the neighbors of $v$ in $T_1$. Since the subgraph of $G$ consisting of the $t-1$ triangles $T_2,T_3,\cdots,T_{t}$ is uniquely 3-colorable, by Theorem \ref{theorem2.3}, we know that $u$ is not adjacent to $v_1$ or $v_2$ in $G$. Thus, $v\neq u$. Similarly, since the subgraph of $G$ consisting of the $t$ triangles $T_1,T_2,\cdots,T_{t}$ is uniquely 3-colorable, we have $vu\notin E(G)$. \qed \end{prof} By Corollary \ref{corollary2.4}, we have the following result. \begin{corollary}\label{corollary2.5} Suppose that $G \in \mathcal{U}_E$ has no separating 3-cycles. Let $H$ be a subgraph of $G$ that consists of a sequence of triangles $T_1,T_2,\cdots,T_t$ such that each $T_j$ has a common edge with $T_{i}$ for some $i\in\{1,2,\cdots,j-1\}$, where $j=2,3,\cdots, t$. Then $G[V(H)]$ is a maximal outerplanar graph.
\end{corollary} For a planar graph $G \in \mathcal{U}_E$, if $G$ has no separating 3-cycles, we call the subgraph $H$ in Corollary \ref{corollary2.5} a \emph{triangle-subgraph} of $G$. Note that a triangle is a triangle-subgraph of $G$. Therefore, any $G \in \mathcal{U}_E$ has at least one triangle-subgraph. A triangle-subgraph $H$ of $G$ is \emph{maximal} if there is no maximal outerplanar subgraph $H'$ of $G$ such that $H\subset H'$. In other words, the graph $H$ consists of the longest sequence $T_1,T_2,\cdots$ of triangles such that each $T_j$ $(j\geq2)$ has a common edge with $T_{i}$ for some $i\in\{1,2,\cdots,j-1\}$. \begin{theorem}\label{theorem2.6} Suppose that $G \in \mathcal{U}_E$ has no separating 3-cycles. Let $G_0$ be a uniquely 3-colorable subgraph and $H_1,H_2$ be any two maximal triangle-subgraphs of $G$. If $E(G_0)\cap E(H_i)=\emptyset$, $i=1,2$, then we have \begin{description} \item[(i)] $G_0$ and $H_1$ have at most one common vertex; \item[(ii)] If $G_0$ and $H_1$ have a common vertex $v$, then $e(V(G_0-v),V(H_1-v))\leq 1$; otherwise, $e(V(G_0),V(H_1))\leq 3$; \item[(iii)] If $H_1$ and $H_2$ have a common vertex $v$, $G_0$ and $H_i$ have a common vertex $v_i$ and $v\neq v_i$, $i=1,2$, then the union of $G_{0},H_{1}$ and $H_{2}$ is uniquely 3-colorable. \end{description} \end{theorem} \begin{prof} Let $f$ be a unique 3-coloring of $G$. (i) Suppose, to the contrary, that $G_0$ and $H_1$ have two common vertices $v_1$ and $v_2$. Since $E(G_0)\cap E(H_1)=\emptyset$, $v_1$ and $v_2$ are not adjacent in either $H_1$ or $G_0$. Indeed, if $v_1v_2\in E(G_0)\backslash E(H_1)$, this contradicts Corollary \ref{corollary2.4}; if $v_1v_2\in E(H_1)\backslash E(G_0)$, then $G_0+ v_1v_2$ is uniquely 3-colorable but not edge-critical, a contradiction with Theorem \ref{theorem2.3}.
By the definition of a triangle-subgraph, we know that there exists a sequence $T_1,T_2,\cdots,T_t$ of triangles in $H_1$ such that $T_i$ and $T_{i+1}$ have a common edge and $\{v_1\}=V(T_1)\backslash V(T_2)$, $\{v_2\}=V(T_t)\backslash V(T_{t-1})$, where $i=1,2,\cdots, t-1$. First suppose that $f(v_1)=f(v_2)$, and let $v_3$ be a neighbor of $v_1$ in $T_1$; then $v_3\in V(T_2)$. Since $G \in \mathcal{U}_E$, the graph $G-v_1v_3$ has a 3-coloring $f'$ which is distinct from $f$. Note that both $G_0$ and the subgraph of $H_1$ consisting of the $t-1$ triangles $T_2,\cdots,T_t$ are uniquely 3-colorable, and $f(v_2)\neq f(v_3)$. So $f'(v_1)= f'(v_2)$ and $f'(v_2)\neq f'(v_3)$, namely $f'(v_1)\neq f'(v_3)$. Therefore, $f'$ can be extended to a 3-coloring of $G$ which is distinct from $f$. This contradicts $G \in \mathcal{U}_E$. Now suppose that $f(v_1)\neq f(v_2)$, and let $v_4$ be a neighbor of $v_1$ in $T_1$ satisfying $f(v_4)=f(v_2)$. Since $G \in \mathcal{U}_E$, the graph $G-v_1v_4$ has a 3-coloring $f'$ which is distinct from $f$. Since both $G_0$ and the subgraph of $H_1$ consisting of the $t-1$ triangles $T_2,\cdots,T_t$ are uniquely 3-colorable, we have $f'(v_1)\neq f'(v_4)$. Therefore, $f'$ can be extended to a 3-coloring of $G$ which is distinct from $f$. It is a contradiction. (ii) \textbf{Case 1}. $G_0$ and $H_1$ have a common vertex $v$. Suppose that $e(V(G_0-v),V(H_1-v))\geq 2$ and $u_1v_1, u_2v_2$ are two edges with $u_1, u_2 \in V(G_0-v)$ and $v_1, v_2 \in V(H_1-v)$. If there exists a vertex $u \in \{u_1, v_1, u_2,v_2\}$ such that $f(v)=f(u)$, we assume w.l.o.g. that $u=u_1$; then $f(v)\neq f(v_1)$. Since $G \in \mathcal{U}_E$, $G-u_1v_1$ has a 3-coloring $f'$ which is distinct from $f$. Since both $G_0$ and $H_1$ are uniquely 3-colorable, we have $f'(v)=f'(u_1)$ and $f'(v)\neq f'(v_1)$. Thus $f'(u_1)\neq f'(v_1)$, and then $f'$ can be extended to a 3-coloring of $G$ which is distinct from $f$. If $f(v)\neq f(w)$ for any $w \in \{u_1, v_1, u_2,v_2\}$, then $\{f(u_1),f(v_1)\}=\{f(u_2),f(v_2)\}$.
Thus, we have either $f(u_1)=f(u_2)$ and $f(v_1)=f(v_2)$, or $f(u_1)= f(v_2)$ and $f(u_2)= f(v_1)$. Since $G-u_2v_2$ has a 3-coloring $f'$ which is distinct from $f$, and $G_0$ and $H_1$ are uniquely 3-colorable, we have $f'(u_2)\neq f'(v_2)$. Therefore, $f'$ can be extended to a 3-coloring of $G$ which is distinct from $f$. \textbf{Case 2}. $G_0$ and $H_1$ have no common vertex. Suppose that $e(V(G_0),V(H_1))\geq 4$ and $u_1v_1, u_2v_2, u_3v_3, u_4v_4$ are 4 edges with $u_i \in V(G_0)$ and $v_i \in V(H_1)$, $i=1,2,3,4$. Then there exist two edges, say $u_1v_1$ and $u_2v_2$, such that $\{f(u_1),f(v_1)\}=\{f(u_2),f(v_2)\}$. By using a similar argument to Case 1, we can obtain a 3-coloring $f'$ of $G-u_1v_1$, which is distinct from $f$ and can be extended to a 3-coloring of $G$. It is a contradiction. (iii) By the definition of $H_1$, there exists a sequence $T_{1},T_{2},\cdots,T_{t}$ of triangles in $H_{1}$ such that $T_{i}$ and $T_{i+1}$ have a common edge and $\{v\}=V(T_{1})\backslash V(T_{2})$, $\{v_1\}=V(T_t)\backslash V(T_{t-1})$, where $i=1,2,\cdots, t-1$. Suppose that $|\{f(v),f(v_1),f(v_2)\}|=1$. Let $u$ be an arbitrary neighbor of $v$ in $V(T_{1})$. Then $f(v_1)\neq f(u)$. Since $G \in \mathcal{U}_E$, $G-vu$ has a 3-coloring $f'$ which is distinct from $f$. Since $G_0$, $H_{2}$ and the subgraph of $H_1$ consisting of the $t-1$ triangles $T_2,\cdots,T_t$ are uniquely 3-colorable, we have $f'(v)=f'(v_2)=f'(v_1)$ and $f'(v_1)\neq f'(u)$. Therefore, $f'$ can be extended to a 3-coloring of $G$ which is distinct from $f$. This contradicts $G \in \mathcal{U}_E$. Now suppose that $|\{f(v),f(v_1),f(v_2)\}|=2$; then there exists $i\in \{1,2\}$ such that $f(v)\neq f(v_i)$. We assume w.l.o.g. that $f(v)\neq f(v_1)$. Let $u$ be a neighbor of $v$ in $V(T_{1})$ satisfying $f(u)=f(v_1)$. If $f(v)=f(v_2)$, then $f(v_1)\neq f(v_2)$. Since $G \in \mathcal{U}_E$, $G-vu$ has a 3-coloring $f'$ which is distinct from $f$.
Since $G_0$, $H_{2}$ and the subgraph of $H_1$ consisting of the $t-1$ triangles $T_2,\cdots,T_t$ are uniquely 3-colorable, we have $f'(v)=f'(v_2)$, $f'(v_1)=f'(u)$ and $f'(v_1)\neq f'(v_2)$. Thus, $f'(v)\neq f'(u)$. Therefore, $f'$ can be extended to a 3-coloring of $G$ which is distinct from $f$. This contradicts $G \in \mathcal{U}_E$. If $f(v)\neq f(v_2)$, then $f(v_1)=f(v_2)$. Since $G \in \mathcal{U}_E$, $G-vu$ has a 3-coloring $f'$ which is distinct from $f$. Since $G_0$, $H_{2}$ and the subgraph of $H_1$ consisting of the $t-1$ triangles $T_2,\cdots,T_t$ are uniquely 3-colorable, we have $f'(u)=f'(v_1)=f'(v_2)$ and $f'(v)\neq f'(v_2)$. Thus, $f'(v)\neq f'(u)$. This contradicts $G \in \mathcal{U}_E$. Suppose that $|\{f(v),f(v_1),f(v_2)\}|=3$. Using the fact that, for each $G'\in \{G_0,H_1,H_2\}$, any coloring $f'$ of two vertices $u,w \in V(G')$ with $f'(u)\neq f'(w)$ can be extended uniquely to a 3-coloring of $G'$, we obtain that the union of $G_0$, $H_{1}$ and $H_{2}$ is uniquely 3-colorable. \qed \end{prof} \section{Size of edge-critical uniquely $3$-colorable planar graphs} In this section, we consider the upper bound of $size(n)$ for edge-critical uniquely 3-colorable planar graphs with $n(\geq 6)$ vertices. Suppose that $G \in \mathcal{U}_E$ and $G$ has no separating 3-cycles. Let $H_1,H_2,\cdots,H_k$ be all of the maximal triangle-subgraphs of $G$. For two maximal triangle-subgraphs $H_{i}$ and $H_{j}$ having a common vertex $v$, if there exists $H_{\ell}$ such that $H_{i},H_{j}$ and $H_{\ell}$ satisfy the condition of Case (iii) in Theorem \ref{theorem2.6}, namely $H_{i}$ and $H_{\ell}$ have a common vertex (say $v_i$), $H_{j}$ and $H_{\ell}$ have a common vertex (say $v_j$) and $ v_i\neq v \neq v_{j}$, then we say that $H_{i}$ and $H_{j}$ \emph{satisfy} Property \textbf{P}. Let $G'=H_1\cup H_2\cup \cdots\cup H_k$. (We will use such notation without further mention in what follows.)
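As an illustration only (not part of the paper), Property \textbf{P} can be tested mechanically once each maximal triangle-subgraph is represented by its vertex set; the function name and this representation are our own assumptions.

```python
# Illustrative sketch: test Property P for two maximal triangle-subgraphs
# H_i and H_j (given as vertex sets) sharing the vertex v.  Property P holds
# if some other subgraph H_l meets H_i in a vertex v_i != v and H_j in a
# vertex v_j != v.
def satisfies_P(Hi, Hj, v, others):
    """True if some H_l in `others` witnesses Property P for (H_i, H_j, v)."""
    for Hl in others:
        vi = (Hi & Hl) - {v}   # common vertices of H_i and H_l other than v
        vj = (Hj & Hl) - {v}   # common vertices of H_j and H_l other than v
        if vi and vj:
            return True
    return False

Hi, Hj = {1, 2, 3}, {3, 4, 5}           # common vertex v = 3
Hl = {2, 4, 6}                          # meets H_i in 2 and H_j in 4
print(satisfies_P(Hi, Hj, 3, [Hl]))     # True
```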
Now we analyse the relationship between $|F_{\geq 4}(G')|$, the number of $\geq$4-faces of $G'$, and $k$. For a vertex $u\in V(G')$, we use $D(u)$ to denote the number of maximal triangle-subgraphs of $G$ that contain $u$. First we construct a new graph $H_G$ from $G'$ with $V(H_G)=\{h_1,h_2,\cdots,h_k\}$, where $h_i$ in $H_G$ corresponds to $H_i$ in $G'$ for any $i \in\{1,2,\cdots,k\}$. The edges of $H_G$ are constructed in the following two steps. \\ \textbf{ Step 1:} For every $u\in V(G')$ with $D(u)=2$, add the edge $h_{i_1}h_{i_2}$ to $H_G$ if both $H_{i_1}$ and $H_{i_2}$ contain $u$ (see e.g. Fig. \ref{figure1}). \\ \textbf{ Step 2:} For every $u\in V(G')$ with $D(u)\geq 3$, let $H_{i_1},H_{i_2}, \cdots, H_{i_{D(u)}}$ be the maximal triangle-subgraphs containing $u$, listed in clockwise order around $u$. For any $1\leq j< k\leq D(u)$ such that $H_{i_j}$ and $H_{i_k}$ satisfy Property \textbf{P}, add the edge $h_{i_j}h_{i_k}$ to $H_G$. Let $G_u$ be the subgraph of $H_G$ with vertex set $V(G_u)=\{h_{i_1},h_{i_2}, \cdots, h_{i_{D(u)}}\}$ and edge set $E(G_u)=\{h_{i_j}h_{i_k}: \textrm{$H_{i_j}$ and $H_{i_k}$ satisfy Property \textbf{P}}, 1\leq j< k\leq D(u)\}$. Then we add some edges in $\{h_{i_\ell}h_{i_{\ell+1}}: \ell=1,2,\cdots, D(u)\}$ to $G_u$ such that the resulting graph, denoted by $G_{\langle u \rangle}$, is connected and has the minimum number of edges. This completes the construction of the edges of $H_G$. (See e.g. Fig. \ref{figure1}: we first join the edges $h_4h_9$, $h_7h_8$ and $h_8h_{11}$, and then join the edges $h_4h_5$, $h_5h_6$, $h_6h_7$, $h_9h_{10}$ and $h_8h_{12}$.) \begin{figure} \caption{An example of a graph $G'$ and the corresponding graph $H_G$.} \label{figure1} \end{figure} \textbf{Remark}. By the definition of $H_G$, if $h_{i}h_{j}\in E(H_G)$, then $H_{i}$ and $H_{j}$ must have a common vertex.
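Step 1 of the construction admits a short computational sketch. The following Python fragment is illustrative only (it is not from the paper, and the representation of each $H_i$ by its vertex set is our own assumption); it computes the edges $h_{i_1}h_{i_2}$ added for the vertices $u$ with $D(u)=2$.

```python
# Illustrative sketch of Step 1: each maximal triangle-subgraph H_i is given
# by its vertex set, and h_i is represented by the index i.  For every vertex
# u contained in exactly two subgraphs H_{i1}, H_{i2} (i.e. D(u) = 2) we add
# the edge h_{i1} h_{i2}.
from collections import defaultdict

def step1_edges(subgraph_vertex_sets):
    """Return the edge set added in Step 1, given the vertex sets of H_1..H_k."""
    containing = defaultdict(list)          # u -> indices i with u in V(H_i)
    for i, vs in enumerate(subgraph_vertex_sets, start=1):
        for u in vs:
            containing[u].append(i)
    edges = set()
    for idx in containing.values():
        if len(idx) == 2:                   # D(u) = 2
            edges.add(tuple(sorted(idx)))
    return edges

# Three triangle strips: H_1, H_2 share vertex 3, and H_2, H_3 share vertex 5.
H = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}]
print(sorted(step1_edges(H)))               # [(1, 2), (2, 3)]
```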
For an edge-critical uniquely 3-colorable graph $G$, if $D(u)\leq 2$ for any $u\in V(G')$, then the graph $H_G$ obtained by the above construction is unique; otherwise, $H_G$ is not unique. Furthermore, we have Theorem \ref{theorem3.1}. \begin{theorem}\label{theorem3.1} Suppose that $G \in \mathcal{U}_E$ has no separating 3-cycles. Let $H_1,H_2,\cdots,H_k$ be all of the maximal triangle-subgraphs of $G$. Then $H_G$ is a simple planar graph and $|F(H_G)|=|F_{\geq 4}(G')|$. \end{theorem} \begin{prof} By Corollary \ref{corollary2.5} and Theorem \ref{theorem2.6}(i), we know that $H_G$ has no loops or parallel edges, so $H_G$ is a simple graph. Note that $G'=H_1\cup H_2\cup \cdots\cup H_k$ is a planar graph. For any $u\in V(G')$ with $D(u)\geq 3$ and any $1\leq j< k\leq D(u)$ such that $H_{i_j}$ and $H_{i_k}$ satisfy Property \textbf{P}, the graph $G_u$ is planar and there exist no edges $h_{i_a}h_{i_b},h_{i_c}h_{i_d}\in E(G_u)$ such that $i_a \in \{i_c+1,\cdots,i_d-1\}$ and $i_b \in \{i_d+1,\cdots,i_c-1\}$, where $a,b,c,d\in \{1,2,\cdots,D(u)\}$ and the subscripts are taken modulo $D(u)$. Now we prove that $G_u$ is a forest. If $h_{i'_1}h_{i'_2},h_{i'_1}h_{i'_3}\in E(G_u)$, then, by the definition of $G_u$ and Theorem \ref{theorem2.6}(iii), we know that there exist $H_{\ell_1}$ and $H_{\ell_2}$ such that the graph $H_{i'_1}\cup H_{i'_2}\cup H_{i'_3}\cup H_{\ell_1}\cup H_{\ell_2}$ is uniquely 3-colorable. Thus, $H_{i'_2}$ and $H_{i'_3}$ do not satisfy Property \textbf{P}, namely $h_{i'_2}h_{i'_3}\notin E(G_u)$. Therefore, $G_u$ is a forest. By the definition of $G_{\langle u \rangle}$, it is easy to see that $G_{\langle u \rangle}$ is a tree. By the definition of $H_G$, we can conclude that $H_G$ is a planar graph. For any distinct faces $f_1$ and $f_2$ of $H_G$, by the definition of $H_G$, it can be seen that there exist two distinct $\geq$4-faces of $G'$ corresponding to $f_1$ and $f_2$, respectively.
Conversely, for any $\geq$4-face $f'$ of $G'$, let $H_{j_1},H_{j_2}, \cdots, H_{j_{t}}$ be all of the maximal triangle-subgraphs such that $H_{j_\ell}$ and $f'$ have common edges, $\ell=1,2,\cdots, t$. Let $u_\ell$ be the common vertex of $H_{j_\ell}$ and $H_{j_{\ell+1}}$. Since each $G_{\langle u_\ell\rangle}$ is a tree, there exists a unique face of $H_G$ incident with $h_{j_1},h_{j_2}, \cdots, h_{j_{t}}$. Thus, $|F(H_G)|=|F_{\geq 4}(G')|$. \qed \end{prof} \begin{theorem}\label{theorem3.2} Suppose that $G \in \mathcal{U}_E$ has no separating 3-cycles. Let $f_0,f_1,\cdots,f_t$ be a sequence of faces in $H_G$ such that, for each $\ell=1,2,\cdots,t$, the face $f_\ell$ is adjacent to $f_{m}$ for some $m\in \{0,1,\cdots,\ell-1\}$. If $d(f_0)=3$ and $d(f_\ell)=4$ for $\ell=1,2,\cdots, t$, let $h_{i_1},h_{i_2},\cdots, h_{i_s}$ be all of the vertices incident with the faces $f_0,f_1,\cdots,f_{t}$. Then $|V(f_t)\setminus \bigcup_{\ell=0}^{t-1}V(f_{\ell})|=2$ and $H_{i_1}\cup H_{i_2}\cup\cdots\cup H_{i_s}$ is uniquely 3-colorable, where $V(f)$ denotes the set of the vertices incident with $f$. \end{theorem} \begin{prof} The proof is by induction on $t$. Let $V(f_0)=\{h_{i_1},h_{i_2},h_{i_3}\}$ and let $h_{i_{s-r}},\cdots, h_{i_s}$ be the vertices incident with $f_{t}$ but not incident with $f_0,f_1,\cdots,f_{t-1}$. If $t=0$, since $d(f_0)=3$, by Theorem \ref{theorem2.6} (iii), we know that $H_{i_1}\cup H_{i_2}\cup H_{i_3}$ is uniquely 3-colorable. Suppose that $t\geq 1$. By the induction hypothesis, $H_{i_1}\cup H_{i_2}\cup\cdots\cup H_{i_{s-r-1}}$ is uniquely 3-colorable. Since $d(f_t)=4$, by Theorem \ref{theorem2.6} (i), we have $r=1$, namely $|V(f_t)\setminus \bigcup_{\ell=0}^{t-1} V(f_{\ell})|=2$. Therefore, by Theorem \ref{theorem2.6} (iii), we obtain that $H_{i_1}\cup H_{i_2}\cup\cdots\cup H_{i_s}$ is uniquely 3-colorable. \qed \end{prof} For a planar graph $G$, let $C$ and $C'$ be two cycles of $G$.
We say that $C$ and $C'$ are \emph{dependent} if there exists a sequence $C_1(=C),C_2,\cdots,C_t(=C')$ of cycles of $G$ such that $C_\ell$ and $C_{\ell+1}$ have a common edge and $|V(C_s)|=4$, where $\ell=1,2,\cdots, t-1$ and $s=2,3,\cdots, t-1$. Obviously, if $C$ and $C'$ have a common edge, then they are dependent. \begin{lemma}\label{lemma3.3} Let $G$ be a planar graph with $|V(G)|\geq 4$. If any $i$-cycle of $G$ is dependent with at most $i-3$ 3-cycles for $3\leq i \leq5$ and with at most $i-2$ 3-cycles for $i\geq 6$, then $|V(G)|\geq |F(G)|+2$. \end{lemma} \begin{prof} The proof is by contradiction. Let $G$ be a smallest counterexample to the lemma; then $G$ satisfies the conditions of the lemma and $|V(G)|< |F(G)|+2$. Suppose that $G$ is not connected, and let $G_1$ be a connected component of $G$. If $|V(G_1)|\leq 3$ and $|V(G-V(G_1))|\leq 3$, it is easy to see that $|V(G)|\geq |F(G)|+2$, a contradiction. Otherwise, we assume w.l.o.g. that $|V(G-V(G_1))|\geq 4$. Since any $i$-cycle of $G$ is dependent with at most $i-3$ 3-cycles for $3\leq i \leq5$ and with at most $i-2$ 3-cycles for $i\geq 6$, the same is true of $G_1$ and $G-V(G_1)$. By the minimality of $G$, we have $|V(G-V(G_1))|\geq |F(G-V(G_1))|+2$. Furthermore, if $|V(G_1)|\geq 4$, then $|V(G_1)|\geq |F(G_1)|+2$; otherwise, $|V(G_1)|\geq |F(G_1)|$. Therefore, $|V(G)|=|V(G_1)|+|V(G-V(G_1))|\geq |F(G_1)|+|F(G-V(G_1))|+2= |F(G)|+3$, a contradiction. Suppose that $G$ is connected. If $G$ contains a cut vertex $u$, let $V_1,V_2,\cdots,V_r$ be the vertex sets of the connected components of $G-u$, and let $G_j=G[\{u\}\cup V_j]$, $j=1,2,\cdots,r$. Obviously, each $G_j$ satisfies the conditions of the lemma. If $|V(G_j)|\geq 4$, then, by the minimality of $G$, $|V(G_j)|\geq |F(G_j)|+2$; otherwise, $|V(G_j)|\geq |F(G_j)|+1$, $j=1,2,\cdots,r$. Therefore, $|V(G)|=\sum_{j=1}^{r} |V(G_j)|-(r-1)\geq \sum_{j=1}^{r} (|F(G_j)|+1)-(r-1)= |F(G)|+(r-1)+1\geq |F(G)|+2$, a contradiction. Now we assume that $G$ is 2-connected.
If $G$ contains no 3-faces, then $2|E(G)|=\sum_{f\in F(G)} d(f)\geq 4|F(G)|$. Thus $|E(G)|\geq 2|F(G)|$. By Euler's Formula, we have $|V(G)|\geq |F(G)|+2$. This contradicts the choice of $G$. If $G$ contains exactly one 3-face, then $G$ contains at least one $\geq$5-face. Thus, $2|E(G)|=\sum_{f\in F(G)} d(f)\geq 4|F(G)|$, and then $|V(G)|\geq |F(G)|+2$. If $G$ contains at least two 3-faces, then each 3-face is dependent with at least two $\geq$5-faces, because $G$ is 2-connected. We claim that $|E(G)|\geq 2|F(G)|$, namely $\sum_{f\in F(G)}[d(f)-4]\geq 0$. For any face $f\in F(G)$, we set the \emph{initial charge} of $f$ to be $ch(f)=d(f)-4$. We now use the discharging procedure, leading to the final charge $ch'$, defined by applying the following rule: \textbf{RULE}. Each 3-face receives $\frac{1}{2}$ from each dependent $\geq$5-face. For any face $f\in F(G)$, if $d(f)=3$, since $f$ is dependent with at least two $\geq$5-faces, then $ch'(f)\geq ch(f)+2\times \frac{1}{2}=0$. If $d(f)=4$, then $ch'(f)=ch(f)=0$. If $d(f)=5$, then $ch'(f)\geq ch(f)-\frac{1}{2}\cdot [d(f)-3]=0$. If $d(f)\geq 6$, then, by hypothesis, $ch'(f)\geq ch(f)-\frac{1}{2}\cdot [d(f)-2]\geq \frac{1}{2}\cdot d(f)-3\geq 0$. Therefore, $\sum_{f\in F(G)}ch(f)=\sum_{f\in F(G)}ch'(f)\geq 0$. Thus, by Euler's Formula, we have $|V(G)|\geq |F(G)|+2$. This contradicts the choice of $G$. \qed \end{prof} \begin{theorem}\label{theorem3.4} Suppose that $G \in \mathcal{U}_E$ has no separating 3-cycles. If $G$ has $k$ maximal triangle-subgraphs $H_1,H_2,\cdots,H_k$ and $k\geq 4$, then $|F(H_G)|\leq |V(H_G)|-2$. \end{theorem} \begin{prof} By Lemma \ref{lemma3.3}, it suffices to prove that any $i$-cycle of $H_G$ is dependent with at most $i-3$ 3-cycles if $3\leq i \leq5$ and with at most $i-2$ 3-cycles if $i\geq 6$. The proof is by contradiction. Let $C$ be an $i$-cycle of $H_G$ that is dependent with at least $i-2$ 3-cycles ($3\leq i \leq5$) or with at least $i-1$ 3-cycles ($i\geq 6$).
If $i=3$ or $4$, by Theorems \ref{theorem2.6} (i) and \ref{theorem3.2}, it is easy to see that no two 3-cycles are dependent and that any 4-cycle is dependent with at most one 3-cycle. This contradicts the hypothesis. Suppose that $i\geq 5$, and let $r=i-2$ if $i=5$ and $r=i-1$ if $i\geq 6$. Let $C=C_{j,0},C_{j,1},\cdots,C_{j,t_j}$ be a sequence of cycles of $H_G$ such that $C_{j,\ell}$ and $C_{j,\ell+1}$ have a common edge, $|V(C_{j,s})|=4$ and $|V(C_{j,t_j})|=3$, where $\ell=0,1,\cdots, t_j-1$, $s=1,2,\cdots, t_j-1$ and $j=1,2,\cdots, r$. Then for any $a\in \{1,2,\cdots, t_1\}$ and $b\in \{1,2,\cdots, t_2\}$, the cycles $C_{1,a}$ and $C_{2,b}$ are not dependent; otherwise, $C_{1,a}$ would be dependent with the two 3-cycles $C_{1,t_1}$ and $C_{2,t_2}$. Therefore, each pair of 4-cycles in $\{C_{1,1},C_{2,1},\cdots,C_{r,1}\}$ has no common edge. Moreover, $C$ and $C_{j,1}$ have exactly one common edge because $r\geq i-2$, $j=1,2,\cdots, r$. If $i=5$, then $r=3$. We assume w.l.o.g. that $V(C)=\{h_1,h_2,\cdots,h_5\}$, and let $V_j=\bigcup_{\ell=1}^{t_j}V(C_{j,\ell})$. Now we consider the following two cases: \textbf{Case 1}. $C_{1,1}\cup C_{2,1}\cup C_{3,1}$ contains 4 vertices of $C$; see Fig.\ref{figure2} (a). Assume w.l.o.g. that $h_1\notin V(C_{1,1}\cup C_{2,1}\cup C_{3,1})$ and $V_1\cup V_2 \cup V_3=\{h_2,h_3,\cdots,h_p\}$. By Theorem \ref{theorem3.2}, we can conclude that $G_0=H_2\cup H_3\cup\cdots\cup H_p$ is uniquely 3-colorable. Since $H_1$ and $G_0$ have two common vertices, this contradicts Theorem \ref{theorem2.6} (i). \textbf{Case 2}. $C_{1,1}\cup C_{2,1}\cup C_{3,1}$ contains 5 vertices of $C$; see Fig.\ref{figure2} (b). Assume w.l.o.g. that $V_2\cup V_3=\{h_3,h_4,\cdots,h_{p}\}$, $V_1=\{h_1,h_2,h_{p+1},\cdots,h_{p'}\}$ and $h_{p'}\in V(C_{1,t_1})\setminus V(C_{1,t_{1}-1})$. By Theorem \ref{theorem3.2}, we can conclude that $G_0=H_3\cup H_4\cup\cdots\cup H_p$ is uniquely 3-colorable.
Then, by Theorem \ref{theorem2.6} (iii), we obtain that $G_1=G_0\cup H_1\cup H_2\cup H_{p+1}\cup \cdots\cup H_{p'-1}$ is uniquely 3-colorable. Since $H_{p'}$ and $G_1$ have two common vertices, this contradicts Theorem \ref{theorem2.6} (i). \begin{figure} \caption{Two cases of a $5$-cycle dependent with three 3-cycles.} \label{figure2} \end{figure} If $i\geq 6$, then $r=i-1$. Assume w.l.o.g. that $V(C)=\{h_1,h_2,\cdots,h_i\}$. In this case, there exists only one edge, say $h_1h_i$, of $C$ that is not in $C_{1,1}\cup C_{2,1}\cup\cdots \cup C_{r,1}$. Suppose that $C_{j,1}$ contains the edge $h_{j}h_{j+1}$, $j=1,2,\cdots,r-1$, and that $V_2\cup V_3 \cup\cdots \cup V_{r-1}=\{h_2,h_3,\cdots,h_p\}$. By Theorem \ref{theorem3.2}, we can conclude that $G_0=H_2\cup H_3\cup\cdots\cup H_p$ is uniquely 3-colorable. Since $H_1$ and $G_0$ have two common vertices, this contradicts Theorem \ref{theorem2.6} (i). \qed \end{prof} By Theorems \ref{theorem3.1} and \ref{theorem3.4}, we obtain the following Corollary \ref{corollary3.5}. \begin{corollary}\label{corollary3.5} Suppose that $G \in \mathcal{U}_E$ has no separating 3-cycles. If $G$ has $k$ maximal triangle-subgraphs $H_1,H_2,\cdots,H_k$ and $k\geq 4$, then $|F_{\geq 4}(G')|\leq k-2$. \end{corollary} \begin{theorem}\label{theorem3.6} Let $G \in \mathcal{U}_E$ with $|V(G)|\geq 6$. Then $|E(G)|\leq \frac{5}{2}|V(G)|-6$. \end{theorem} \begin{prof} The proof is by induction on $n=|V(G)|$. It is easy to check that the theorem is true for $n=6$. Suppose that the theorem is true for all edge-critical uniquely 3-colorable planar graphs with $p$ vertices, where $6\leq p \leq n-1$ and $n\geq 7$. Let $G\in \mathcal{U}_E$ and $|V(G)|=n$. We consider the following two cases: \textbf{Case 1}. $G$ contains a separating 3-cycle $C$. Let $G_1$ (resp. $G_2$) be the subgraph of $G$ consisting of $C$ together with its interior (resp. exterior). Then both $G_1$ and $G_2$ are uniquely 3-colorable planar graphs.
Otherwise, suppose that $G_1$ has two distinct 3-colorings; then each of them can be extended to a 3-coloring of $G$, so $G$ has two distinct 3-colorings. This contradicts $G \in \mathcal{U}_E$. By Theorem \ref{theorem2.3}, we have $G_1,G_2 \in \mathcal{U}_E$. If $|V(G_i)|\geq 6$ for $i=1,2$, then, by induction, $|E(G_i)|\leq \frac{5}{2}|V(G_i)|-6$. Thus, $|E(G)|=|E(G_1)|+|E(G_2)|-3\leq \frac{5}{2}|V(G_1)|-6+\frac{5}{2}|V(G_2)|-6-3\leq \frac{5}{2}(|V(G)|+3)-15< \frac{5}{2}|V(G)|-6$. If $|V(G_1)|\leq 5$ or $|V(G_2)|\leq 5$, since $G_1,G_2 \in \mathcal{U}_E$, there exists a vertex $v$ in $V(G_1)\setminus V(C)$ or $V(G_2)\setminus V(C)$ such that $d_G(v)=2$. Therefore, $G- v$ is uniquely 3-colorable, and then $G- v \in \mathcal{U}_E$. By induction, $|E(G- v)|\leq \frac{5}{2}|V(G- v)|-6$. Thus, $|E(G)|=|E(G- v)|+2\leq \frac{5}{2}(|V(G)|-1)-4< \frac{5}{2}|V(G)|-6$. \textbf{Case 2}. $G$ contains no separating 3-cycles. Using the fact that every planar graph with $n$ vertices is a subgraph of a maximal planar graph with the same vertices, we may assume that $G_{max}$ is a maximal planar graph with $n$ vertices and $G$ is a subgraph of $G_{max}$. Let $q=|E(G_{max})|-|E(G)|$; then $|E(G)|=3n-6-q$ and $|F(G)|=2n-4-q$. In this case, we prove the theorem by showing that $q\geq \frac{n}{2}$. Let $H_1,H_2,\cdots,H_k$ be all of the maximal triangle-subgraphs of $G$, let $G'=H_1\cup H_2\cup \cdots\cup H_k$, and let $H_i$ contain $t_i$ 3-faces, $i=1,2,\cdots,k$. Then $|V(H_i)|=t_i+2$, $|E(H_i)|=2t_i+1$ and $|F_3(G)|=\sum_{i=1}^{k}t_i$. Moreover, $|E(G')|=\sum_{i=1}^{k}|E(H_i)|=\sum_{i=1}^{k}(2t_i+1)=k+2|F_3(G)|$. Let $G^*$ be the dual graph of $G$ and $G^*_0$ be the subgraph of $G^*$ induced by $V_{\geq4}(G^*)$, the set of vertices of degree at least 4 in $G^*$. By Euler's Formula, we have $|E(G^*_0)|=|V(G^*_0)|+|F(G^*_0)|-\omega(G^*_0)-1$, where $\omega(G^*_0)$ is the number of connected components of $G^*_0$.
By the definition of $G^*$, we have $|V(G^*_0)|=|F_{\geq4}(G)|$, $|E(G^*_0)|=|E(G)|-|E(G')|$ and $|F(G^*_0)|=|V(G)|-|V(G')|+\omega(G')$. Since $G$ contains no separating 3-cycles, we have $\omega(G^*_0)=|F_{\geq 4}(G')|$. Therefore, \begin{equation}\label{equ1} \begin{split} |E(G)| &= |E(G')|+|E(G_0^*)|\\ &= k+ 2|F_3(G)|+|F_{\geq4}(G)|+|V(G)|-|V(G')|+\omega(G')-|F_{\geq 4}(G')|-1\\ &= 2n-4-q +|F_3(G)|+(k-|F_{\geq 4}(G')|)+(n-|V(G')|)+(\omega(G')-1). \end{split} \end{equation} Note that $n-|V(G')|\geq 0$ and $\omega(G')-1\geq 0$ in Formula (\ref{equ1}). Because $G_{max}$ has $2n-4$ 3-faces by Euler's Formula and removing an edge decreases the number of 3-faces by at most two, we have \begin{equation}\label{equ2} |F_3(G)|\geq 2n-4-2q. \end{equation} Suppose that $k=1$. Then $|F_{\geq 4}(G')|=\omega(G')=1$, $H_1$ is a maximal outerplanar graph and $H_1=G[V(H_1)]$ by Corollary \ref{corollary2.5}. If $|V(G')|=n$, then $G=H_1$. In this case, $|E(G)|=2n-3<\frac{5}{2}n-6$ since $n\geq 7$. If $|V(G')|=n-1$, then, by Theorem \ref{theorem2.3}, $|E(G)|=|E(H_1)|+2=2(n-1)-3+2<\frac{5}{2}n-6$. If $|V(G')|\leq n-2$, then, by Formula (\ref{equ1}), we have $|E(G)|\geq 2n-4-q +2n-4-2q+2=4n-3q-6$. Since $|E(G)|=3n-6-q$, we have $q\geq \frac{n}{2}$. Therefore, $|E(G)|\leq \frac{5}{2}n-6$. Suppose that $k=2$. By Theorem \ref{theorem2.6}(i), we have $|F_{\geq 4}(G')|=1$ and $\omega(G')\leq 2$. If $\omega(G')=1$ and $|V(G')|=n$, then $H_1$ and $H_2$ have a common vertex. By Theorem \ref{theorem2.6}(ii), there exists at most one edge in $E(G)\setminus (E(H_1)\cup E(H_2))$. Therefore, $|E(G)|\leq |E(H_1)|+|E(H_2)|+1=(2t_1+1)+(2t_2+1)+1=2(n+1)-5<\frac{5}{2}n-6$. If $\omega(G')=2$ or $|V(G')|\leq n-1$, then, by Formula (\ref{equ1}), we have $|E(G)|\geq 2n-4-q +2n-4-2q+1+1=4n-3q-6$. Similarly, we can obtain $q\geq \frac{n}{2}$, and hence $|E(G)|\leq \frac{5}{2}n-6$. Suppose that $k=3$. By Theorem \ref{theorem2.6}(i) and (iii), we have $|F_{\geq 4}(G')|\leq 2$ and $\omega(G')\leq 3$.
If $|F_{\geq 4}(G')|=1$, then, by Formula (\ref{equ1}), we have $|E(G)|\geq 2n-4-q +2n-4-2q+2=4n-3q-6$. Therefore, $q\geq \frac{n}{2}$ and then $|E(G)|\leq \frac{5}{2}n-6$. If $|F_{\geq 4}(G')|=2$, then, by Theorem \ref{theorem2.6}(iii), we know that $G'$ is uniquely 3-colorable. In this case, if $|V(G')|=n$, then $G=G'$ and $|E(G)|=|E(H_1)|+|E(H_2)|+|E(H_3)|=(2t_1+1)+(2t_2+1)+(2t_3+1)=2(n+3)-9<\frac{5}{2}n-6$. If $|V(G')|\leq n-1$, then, by Formula (\ref{equ1}), we have $|E(G)|\geq 2n-4-q +2n-4-2q+1+1=4n-3q-6$. Therefore, $q\geq \frac{n}{2}$ and then $|E(G)|\leq \frac{5}{2}n-6$. Suppose that $k\geq 4$. By Corollary \ref{corollary3.5}, we have $k-|F_{\geq 4}(G')|\geq 2$. By Formula (\ref{equ1}), we have $|E(G)|\geq 2n-4-q +2n-4-2q+2=4n-3q-6$. Therefore, $q\geq \frac{n}{2}$ and then $|E(G)|\leq \frac{5}{2}n-6$. \qed \end{prof} \section{Concluding Remarks} In this section we give some edge-critical uniquely $3$-colorable planar graphs which have $n(=10,12,14)$ vertices and $\frac{5}{2}n-7$ edges. Fig. \ref{figure3} shows an edge-critical uniquely $3$-colorable planar graph $G_1$, which has $10$ vertices and 18 edges, together with a unique 3-coloring of $G_1$. \begin{figure} \caption{An edge-critical uniquely $3$-colorable planar graph $G_1$.} \label{figure3} \end{figure} Fig. \ref{figure4} shows two edge-critical uniquely $3$-colorable planar graphs $G_2$ and $G_3$, both of which have 12 vertices and 23 edges, together with their unique 3-colorings. \begin{figure} \caption{Two edge-critical uniquely $3$-colorable planar graphs $G_2$ and $G_3$.} \label{figure4} \end{figure} \begin{figure} \caption{Two edge-critical uniquely $3$-colorable planar graphs $G_4$ and $G_5$.} \label{figure5} \end{figure} Fig. \ref{figure5} shows two edge-critical uniquely $3$-colorable planar graphs $G_4$ and $G_5$, both of which have $14$ vertices and 28 edges, together with their unique 3-colorings.
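The stated vertex and edge counts of the example graphs can be checked arithmetically against the bound of Theorem \ref{theorem3.6}; the following fragment is an illustrative check only, not part of the paper.

```python
# Quick arithmetic check: each example graph G_i has 5n/2 - 7 edges, one less
# than the upper bound 5n/2 - 6 of Theorem 3.6.  (n, m) = (vertices, edges).
examples = {"G_1": (10, 18), "G_2": (12, 23), "G_3": (12, 23),
            "G_4": (14, 28), "G_5": (14, 28)}
for name, (n, m) in examples.items():
    assert m == 5 * n // 2 - 7          # edge count stated in the text
    assert m <= 5 * n / 2 - 6           # bound of Theorem 3.6
print("all examples lie one edge below the bound of Theorem 3.6")
```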
Note that for $G_1$, we have $k(G_1)-|F_{\geq 4}(G'_1)|=2$, where $k(G_1)$ is the number of maximal triangle-subgraphs of $G_1$. For $i\in \{2,4,5\}$, $|V(G'_i)|=|V(G_i)|$; for $i\in \{2,4\}$, $\omega(G'_i)=1$. Furthermore, we have $|F_3(G_i)|= 2|V(G_i)|-4-2q$ for $i\in \{1,2,3,4,5\}$, namely equality holds in Formula (\ref{equ2}) for each $G_i$. \section*{References} \end{document}
\begin{document} \begin{frontmatter} \begin{fmbox} \dochead{Research} \title{What is a quantum simulator?} \author[ addressref={aff1,aff2,aff3,aff4}, corref={aff1}, email={[email protected]} ]{\inits{THJ}\fnm{Tomi H} \snm{Johnson}} \author[ addressref={aff2,aff1,aff3} ]{\inits{SR}\fnm{Stephen R} \snm{Clark}} \author[ addressref={aff2,aff1,aff3} ]{\inits{D}\fnm{Dieter} \snm{Jaksch}} \address[id=aff1]{ \orgname{Centre for Quantum Technologies, National University of Singapore}, \street{3 Science Drive 2}, \postcode{117543} \city{Singapore}, \cny{Singapore} } \address[id=aff2]{ \orgname{Clarendon Laboratory, University of Oxford}, \street{Parks Road}, \postcode{OX1 3PU} \city{Oxford}, \cny{UK} } \address[id=aff3]{ \orgname{Keble College, University of Oxford}, \street{Parks Road}, \postcode{OX1 3PG} \city{Oxford}, \cny{UK} } \address[id=aff4]{ \orgname{Institute for Scientific Interchange}, \street{Via Alassio 11/c}, \postcode{10126} \city{Torino}, \cny{Italy} } \begin{artnotes} \end{artnotes} \end{fmbox} \begin{abstractbox} \begin{abstract} Quantum simulators are devices that actively use quantum effects to answer questions about model systems and, through them, real systems. Here we expand on this definition by answering several fundamental questions about the nature and use of quantum simulators. Our answers address two important areas. First, the difference between an operation termed simulation and another termed computation. This distinction is related to the purpose of an operation, as well as our confidence in and expectation of its accuracy. Second, the threshold between quantum and classical simulations. Throughout, we provide a perspective on the achievements and directions of the field of quantum simulation. 
\end{abstract} \begin{keyword} \kwd{Quantum} \kwd{Simulation} \kwd{Computation} \kwd{Definition} \kwd{Requirements} \kwd{Perspective} \end{keyword} \begin{keyword}[class=AMS] \kwd{03.65.-w} \kwd{03.67.Ac} \kwd{03.67.Lx} \end{keyword} \end{abstractbox} \end{frontmatter} \section{Introduction} Simulating models of the physical world is instrumental in advancing scientific knowledge and developing technologies. Accordingly, the task has long been at the heart of science. For example, orreries have been used for millennia to simulate models of the motions of celestial objects~\cite{Brewster1830}. More recently, differential analysers or mechanical integrators were developed to solve differential equations modelling e.g.\ heat flow and transmission lines~\cite{Thomson1876,Cairns1944}. Unfortunately, simulation is not always easy. There are numerous important questions to which simulations would provide answers but which remain beyond current technological capabilities. These span a multitude of scientific research areas, from high-energy~\cite{Barcelo2011,Jordan2012}, nuclear, atomic~\cite{You2011} and condensed matter physics~\cite{Lewenstein2007,Lewenstein2012} to thermal rate constants~\cite{Lidar2009} and molecular energies~\cite{Wang2008,Whitfield2011} in chemistry~\cite{Kassal2011,Lu2012}. An exciting possibility is that the first simulation devices capable of answering some of these questions may be quantum, not classical, with this distinction to be clarified below. 
The types of quantum hardware proposed to perform such simulations are as hugely varied as the problems they aim to solve: trapped ions \cite{Porras2004,Kim2010,Gerritsma2010,Lanyon2011,Blatt2012}, cold atoms in optical lattices \cite{Jaksch2005,Dalibard2011,Bloch2012}, liquid and solid-state NMR \cite{Peng2009,Peng2010,Du2010,Li2011,Zhang2011}, photons \cite{Angelakis2007,Angelakis2008,Lu2009,Lanyon2009,Peruzzo2010,Aspuru-Guzik2012}, quantum dots \cite{Manousakis2002,Smirnov2007,Byrnes2008}, superconducting circuits \cite{You2005,Clarke2008,You2011,Houck2012}, and NV centres~\cite{Wang2014}. At the time of writing, astonishing levels of control in proof-of-principle experiments (cf.\ the above references and citations within) suggest that quantum simulation is transitioning from a theoretical dream into a credible possibility. Here we complement recent reviews of quantum simulation~\cite{Kendon2010,Cirac2012,Hauke2012,Schaetz2013,Buluta2009,Georgescu2013} by providing our answers to several fundamental but non-trivial and often contentious questions about quantum simulators, highlighting whenever there is a difference of opinion within the community. In particular, we discuss how quantum simulations are defined, the role they play in science, and the importance that should be given to verifying their accuracy. \section{What are simulators?} \label{sec:whatsim} \begin{figure} \caption{A quantum simulator reveals information about an abstract mathematical function relating to a physical model. However, it is important to consider the typical purpose and context of such a simulation. By comparing its results to a real system of interest, a simulation is used to decide whether or not the model accurately represents that system. If the representation is thought to be accurate, the quantum simulator can then loosely be considered as a simulator for the system of interest.
We represent this in the figure by a feedback loop from the quantum device back to the system of interest.} \label{fig:fig1} \end{figure} Both simulators and computers are physical devices that reveal information about a mathematical function. Whether we call a device a simulator or a computer depends not only on the device, but also on what is supposed about the mathematical function and the intended use of the information obtained. If the function is interpreted as part of a physical model then we are likely to call the device a simulator. However, this brief definition neglects the typical purpose and context of a simulation (see \fir{fig:fig1}). As will become clear below, a simulation is usually the first step in a two-step process, with the second being the comparison of the physical model with a real physical system (see \secr{sec:howused} `How are simulators used?'). This then makes simulation part of the usual scientific method. This context is why some loosely state that simulation is the use of one physical device to tell us about another real physical system~\cite{Feynman1982}. It also affects the level of trust that can be reasonably demanded of the simulation (see \secr{sec:whentrust} `When are quantum simulators trustworthy?'). If the accuracy with which a device simulates a model can be arbitrarily controlled and guaranteed then it is often elevated to the status of a computer, a name that reflects our trust in the device. A consequence of this guaranteed accuracy is that it allows assured interpretation of the results of the operation, the information obtained about a mathematical function, without reference to some real system. Thus, as well as implying accuracy, the term computer is more often used to describe calculations that relate to more abstract mathematical functions, unconnected to a physical system, and that are used outside of the scientific method.
It is interesting to apply our definition of a simulator to well-known situations in which the term is used. The majority of experimental devices advertised as quantum simulators are so-called analogue simulators~\cite{Kendon2010,Cirac2012,Hauke2012,Schaetz2013,Buluta2009,Georgescu2013}. They are devices whose Hamiltonians can be engineered to approximate those of a subset of models put forward to describe a real system. This closely fits our definition of simulators, and their usual purpose and context outlined above. Another different type of device is Lloyd's digital quantum simulator~\cite{Lloyd1996}. This replicates universal unitary evolution by mapping it, via Trotter decompositions, to a circuit, which can then be made arbitrarily accurate by the use of error correction. Whilst going by the name simulator, it is effectively a universal quantum computer. From our arguments above, we would also describe this as a computer: error correction ensures the result applying to the modelled system can be interpreted without comparison to a real physical system, thus playing the role of a computation. Finally, the company D-Wave has developed a device to find the ground state of the classical Ising model~\cite{Johnson2011b}. While this is a device that returns a property of a physical model, it is advertised as a computer. We would agree, since its primary use seems to be in solving optimisation problems embedded in the Ising ground state, rather than learning about a real physical system. \section{What are quantum simulators?} \label{sec:whatquant} To complete the definition of a {\em quantum} simulator we need to define what is meant by a quantum device. This problem is also faced by quantum biology \cite{Davies2004,Abbott2008,Lambert2013} and other quantum technologies. It is complicated by the fact that, at some level, quantum mechanics describes the structure and dynamics of all physical objects.
Quantumness may be structural and inert, e.g.\ merely responsible for the available single-particle modes. Or quantumness may be active, e.g.\ exploiting entanglement between modes, potentially achieving functionality more efficiently than a classical device (see \secr{sec:whyneedquant} `Why do we need quantum simulators?'). To this end, we must distinguish between devices for which, during the operation of the simulator, the particular degrees of freedom doing the simulating do or do not behave classically. We choose here to define classical as when there is some single-particle basis in which the density operator $\hat{\rho}(t)$ describing the relevant degrees of freedom is, for the purposes of the simulation, diagonal at all times $t$. This is written \begin{equation} \hat{\rho}(t) = \sum_{\left \{ N_{s,i} \right \}} p( \left \{ N_{s,i} \right \} , t) \ket{\left \{ N_{s,i} \right \} ,t} \bra{\left \{ N_{s,i} \right \} ,t} . \nonumber \end{equation} Here $\ket{\left \{ N_{s,i} \right \} ,t}$ is a Fock state in which $N_{s,i}$ particles of species $s$ occupy mode $i$. The mode annihilation operator is $\an{a}_{s,i}(t) = \int \mathrm{d} \mathbf{r} \an{\Psi}_s (\mathbf{r}) \chi^\ast_{s,i} (\mathbf{r},t)$, with $\chi_{s,i} (\mathbf{r},t)$ the corresponding single-particle modefunction and $\an{\Psi}_s (\mathbf{r})$ the field operator for species $s$. The diagonal elements $p( \left \{ N_{s,i} \right \} , t)$ are the probabilities of the different occupations. This condition ensures there is always a single-particle basis in which dephasing would have no effect. This invariance under dephasing is a common way to define classicality~\cite{Meznaric2013}. The condition also disallows entanglement between different single-particle modes, as would be expected for a condition of classicality. It does allow the natural entanglement between identical particles in the same mode due to symmetrisation.
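The dephasing-invariance criterion can be made concrete with a toy example (ours, not from the article; it assumes NumPy): a state is classical with respect to a given basis exactly when its density matrix has no off-diagonal elements in that basis.

```python
import numpy as np

def is_diagonal_in(rho, U, tol=1e-10):
    """Check whether density matrix rho is diagonal in the basis given by
    the columns of the unitary U, i.e. invariant under dephasing there."""
    rho_b = U.conj().T @ rho @ U
    off = rho_b - np.diag(np.diag(rho_b))
    return np.linalg.norm(off) < tol

# A classical (dephased) mixture in the computational basis ...
rho_mix = np.diag([0.7, 0.3])
# ... and a coherent superposition |+><+|, which is not diagonal there
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_plus = np.outer(plus, plus)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard basis change
print(is_diagonal_in(rho_mix, np.eye(2)))   # True
print(is_diagonal_in(rho_plus, np.eye(2)))  # False
print(is_diagonal_in(rho_plus, H))          # True: |+> is a basis state of H
```

The last line illustrates the point in the text: whether a state counts as classical depends on the existence of *some* basis in which it is diagonal, not on any one fixed basis.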
Such entanglement can be mapped to entanglement between modes by operations that themselves do not contribute entanglement~\cite{Killoran2014}. However, if such operations are never applied, it is reasonable to consider the device to be classical. In other words, we are less concerned with the potential of entanglement as a resource than with how this resource is manifested during the operation of the device. Let us build confidence in our definition by using it to classify well-known devices as classical or quantum. Reassuringly, the operation of the room-temperature semi-conductor devices used to perform every-day computing is classical according to the definition. The relevant properties of inhomogeneous semi-conductors are captured by a model in which the degrees of freedom are valence (quasi) electrons that incoherently occupy single-particle states $\chi_i (\mathbf{r})$ of the Bloch type~\cite{Ashcroft1976}. Next, consider two devices for preparing the ground state of a classical Ising model, classical annealing~\cite{Kirkpatrick1983} and quantum annealing~\cite{Farhi1998,Farhi2000,Santoro2006,Das2008,Biamonte2011b}. Classical annealing by coupling the Ising spins to a cooling environment is not quantum since at all times the thermal density matrix of the system is diagonal in the computational basis, a single-particle basis. However, preparing that same state by quantum annealing, adiabatically quenching a transverse field, is expected to be quantum. This is due to the fact that in the middle of the quench, which forms the main part of the simulation, the Ising spins will usually become entangled. Since these are particles in distinguishable modes, the device cannot behave classically at all times. Finally, consider a Bose-Einstein condensate~\cite{Leggett2001,Pitaevskii2003} that is accurately described by many bosons in the same single-particle mode $\chi_0 (t)$.
Alternatively, consider a Poissonian mixture of different occupation numbers or equivalently a coherent number superposition of unknown phase, both of which are well approximated by $N_0$ bosons occupying $\chi_0 (t)$, for large mean occupation $N_0$. In these cases, the single occupied modefunction evolves according to the Gross-Pitaevskii equation. Devices based on this condensate evolution have been proposed to simulate some gravitational models~\cite{Garay2000}, and we would accordingly label these simulations as classical. However, simulating a related model featuring cosmological particle production by exploiting coherent Bogoliubov excitations above the condensate is sensitive to off-diagonal elements in the density operator, in a single-particle basis, and is thus quantum \cite{Jain2007}. Our chosen boundary between quantum and classical is one of many possibilities; and indeed defining the quantumness of the simulation entirely in terms of the device is not common. Many others~\cite{Buluta2009,Georgescu2013} take the quantum in quantum simulator to relate to the model being simulated as well as to the simulating device. In common with definitions of quantum computation, our assignment of the quantum in quantum simulator based only on the device avoids the assumption that only simulating quantum models is hard enough to potentially benefit from a quantum device. This is not so: finding the ground state of even a classical Ising model is NP-hard and thus thought to be inefficient on both a classical and quantum device~\cite{Barahona1982,Bernstein1997}. \section{How are simulators used?} \label{sec:howused} A common perception (that goes right back to the language used at the conception of quantum simulation~\cite{Feynman1982}) is that the purpose of a simulator is purely to reveal information about another real system. 
We pick an idealised model describing a system of interest, and then simulate that model, taking the output to describe not only the model but the system of interest. As long as the idealised model is a `good' description of the system of interest then it is inferred that the simulator is a `good' simulator of the system. While this inference is correct, it misses an important purpose of a simulator. This other crucial purpose of a simulator is to reveal information about a model and compare this to the behaviour of the real system of interest. This then allows us to infer whether or not the model provides a `good' description of the system in the first place and whether or not the results bear any relevance to the real world. For example, simulating the Fermi-Hubbard model would be hugely important if it turned out that this model captures the behaviour of some high-$T_c$ superconductors (as suggested by some~\cite{Anderson1987,Rice1988,Anderson2004,LeHur2009}), but it may be that the main conclusion of simulations will be to rule this out (as expected by others~\cite{Laughlin2002,Leggett2006,Laughlin2014}). Only when we have developed confidence in a model accurately representing a system can we use the simulator of the model to inform us about the system. \section{Why do we need simulators?} \label{sec:whyneed} Above we have stated that simulators are used to find properties of a model, assess whether the model is relevant to and accurately describes the real system of interest, and, if so, learn about that system. Are there other ways to learn about a system without simulation? Do we need simulators? There are, of course, many examples of scientists making progress without simulation. Over a century ago, the phenomenon of superconductivity was discovered and later its properties analysed by experimental investigation largely unguided by analytical or numerical simulation~\cite{Ginzburg2004}. 
Today, in cases where detailed simulation is not possible, we successfully design drugs largely by trial and error on a mass scale~\cite{Madsen2002}. These two examples, however, also show why simulation is crucial. Computer-aided drug design~\cite{Cohen1996,Zhou2010} exploits the simulation of molecular systems to drastically speed up and thus lower the cost of the design process. Similarly, if we wish to manufacture materials with enhanced superconducting properties, e.g. increase the critical temperature $T_c$, then we might benefit from some understanding directing that manufacture, as would be provided by a model and a means of simulating it~\cite{Fausti2011,Kaiser2012}. Simulation can also be a convenience: in 2014 the USA bobsleigh team won Olympic bronze with a machine designed almost entirely virtually~\cite{Bobsleigh}. Simulation was used to optimise the aerodynamic performance without the need for a wind tunnel. \section{Why do we need quantum simulators?} \label{sec:whyneedquant} While the idea of simulations is centuries old \cite{Brewster1830,Thomson1876}, the suggestion that a quantum device would make for a better mimic of some models than a classical device is commonly attributed to Feynman in 1982 \cite{Feynman1982}. He noted that calculating properties of an arbitrary quantum model on a classical device is a seemingly very inefficient thing to do (taking a time that scales exponentially with the number of particles in the model being simulated), but a quantum device might be able to do this efficiently (taking a time that scales at most polynomially with particle number~\cite{Lloyd1996}). This does not of course prohibit the simulation of many quantum models from being easy using classical devices and thus not in need of a quantum simulator. 
The classical numerical tools usually employed include exact calculations, mean-field~\cite{Stanley1971} and dynamical mean-field theory~\cite{Georges1996,Kotliar2006,Aoki2013}, tensor network theory~\cite{Verstraete2008,Cirac2009,Johnson2010,Schollwock2011,Biamonte2011,Evenbly2011,Johnson2013,Orus2013}, density functional theory (DFT)~\cite{Hohenberg1964,Kohn1965,Parr1983,Martin2004,Burke2005} or quantum Monte Carlo algorithms~\cite{Foulkes2001,Gull2011,Pollet2012,Austin2012}, which all have their limitations. Exact calculations are only possible for small Hilbert spaces. Mean-field-based methods are only applicable when the correlations between the constituent parts of the system being modelled are weak. Tensor network methods are only applicable if there is a network structure to the Hilbert space and often fail in the presence of strong entanglement between contiguous bipartite subspaces~\cite{Eisert2010}, with this sensitivity to entanglement being much greater with two- or higher-dimensional models. For DFT, the functionals describing strong correlations are, in general, not believed to be efficient to find~\cite{Schuch2009}. Quantum Monte Carlo struggles, for example, with Fermionic statistics or frustrated models, due to the sign problem~\cite{Loh1990,Troyer2005}. For the above reasons, quantum devices are expected to be crucial for large network (e.g. lattice) models, featuring Fermions or frustration and strong entanglement, or non-network based many-body models featuring states with strong correlations that are difficult to describe with DFT. Strong entanglement can arise, for example, near a phase transition, or after a non-equilibrium evolution~\cite{Trotzky2012}. It must be stated, however, that there is no guarantee that a classical device or algorithm will not sometime in the future be devised to efficiently study some subset of the above quantum models. 
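The exponential cost of exact classical calculation is easy to demonstrate. The sketch below (our illustration; it assumes NumPy) builds the transverse-field Ising Hamiltonian on an $n$-spin chain from Kronecker products; the matrix dimension doubles with each added spin, which is why exact methods are limited to small Hilbert spaces.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_on(site_op, site, n):
    """Embed a single-site operator at position `site` in an n-spin chain."""
    ops = [site_op if k == site else I2 for k in range(n)]
    return reduce(np.kron, ops)

def ising_hamiltonian(n, J=1.0, h=0.5):
    """Transverse-field Ising chain: H = -J sum_k Z_k Z_{k+1} - h sum_k X_k."""
    H = np.zeros((2**n, 2**n))
    for k in range(n - 1):
        H -= J * op_on(Z, k, n) @ op_on(Z, k + 1, n)
    for k in range(n):
        H -= h * op_on(X, k, n)
    return H

for n in (2, 4, 8):
    H = ising_hamiltonian(n)
    print(n, H.shape)  # the dimension doubles with every added spin
```

Storing the state vector alone already costs $2^n$ complex numbers, which is the scaling Feynman's observation refers to.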
In addition to the widely-accepted need for quantum devices for the quantum models discussed above, there are calls and proposals for quantum devices to simulate classical models~\cite{Yung2010,Sinha2010}, for example, molecular dynamics~\cite{Harris2010} and lattice gas models~\cite{Boghosian1998,Meyer2002}. This also applies to any simulation that reduces to solving an eigenvalue equation~\cite{Abrams1999} or a set of linear equations~\cite{Harrow2009}. As with quantum models, many of these simulations, for example solving a set of linear equations, can be solved without much trouble on a classical device for small to medium simulations. The benefit of a quantum device is that the size of problems that can be tackled in a reasonable time grows significantly more quickly with the size of the simulating device than it does for a classical device, thus it is envisaged that quantum devices will one day be able to solve larger problems than their classical counterparts. It is clear from this last point that the scaling of classical and quantum simulators must be treated carefully, taking into account the sizes of problems that can be tackled by current or future devices. It is possible that the experimental difficulty of scaling up quantum simulation hardware might cause an overhead such that a quantum device does not surpass the accuracy obtained by a classical algorithm that in principle does not scale as well but runs on ever-improving hardware obeying Moore's law. \section{When are quantum simulators trustworthy?} \label{sec:whentrust} \begin{figure*} \caption{Consider the displacement of a spring due to the pressure of a gas (far left), or the time taken for a dropped ball to fall (middle left). Simple models can be proposed to describe either system. The former might be modelled as an ideal gas trapped in a box by a frictionless piston held in place by a perfect spring. The latter as a frictionless body moving with uniform acceleration. 
Calculating the quantity of interest within either system, displacement or time, respectively, reduces within the model to calculating a square root. We thus consider four methods to perform this simulation: building an approximation to either model system (analogue simulation), or using an abacus (middle right) or a calculator (far right) (digital simulation). \newline \newline With today's knowledge, in the parlance used in this article, we would elevate the status of the latter two simulations to computations, because of the guaranteed accuracy with which each calculation reproduces the model. Meanwhile, the former two simulators are not so easily verified. Importantly, they are falsifiable, e.g.\ by comparing one to the other. This is similar to the state of analogue quantum simulators currently used to perform large-scale quantum simulations. \newline \newline However, the confidence in each simulator is a matter of perspective. It is not objective. Many centuries ago, we would only have trusted the abacus to perform such a calculation, since its principles were well understood and square-root algorithms with assured convergence were known even to the Babylonians. Once Galileo began the development of mechanics, we might have considered the method of dropping a ball. Confidence in the simulation could have been established by testing the analogue simulator against the abacus. Nearly two centuries ago, when we first began to understand equilibrium thermodynamics, we might have preferred the gas-piston-spring method. Nowadays, we would all choose the calculator or a solid-state equivalent. This confidence is partly a result of testing the calculator against some known results, but also largely because, after the development of quantum mechanics, we feel we understand the components of solid-state systems to such a high level that we are willing to extrapolate this confidence to unknown territory.
In a century, our confidence could well be placed most strongly in another system. } \label{fig:fig2} \end{figure*} So far we are yet to address perhaps the most difficult and important aspect of simulation, upon which its success rests. How can we assess whether the quantum simulator represents the model? How rigorous an assessment is needed? For this discussion we focus on analogue quantum simulators, because they are the most easily scaled quantum simulators and so are likely to be used in the near future to simulate large systems. They also most closely follow our definition of a simulator, as opposed to a computer (see \secr{sec:whatsim} `What are simulators?'). The topic of falsifying bad quantum simulators has received some attention. In certain parameter regimes there may be efficiently calculable exact analytical results or it might be possible to perform a trusted classical simulation, against which the quantum simulator results may be compared~\cite{Trotzky2012}. Often there are bounds that some measurable quantities are known to obey, and this too can be tested~\cite{Hauke2012}. Alternatively, it might be possible to check known relationships between two different simulations. For example, in an Ising model, flipping the direction of the magnetic field is equivalent to flipping the sign of the component of the spins along that field, thus giving two simulations whose results are expected to have a clear relationship. A natural extension of this strategy is to compare many quantum simulations realised by different devices, perhaps each with a slightly different source of error, trusting only the aspects of the results shared by all devices~\cite{Leibfried2010}. If any of the above tests fail beyond an acceptable accuracy, then we do not trust the simulation results. If a simulator passes all tests, then we may take this as support for the accuracy of that simulator. It would be incorrect, however, to say that such tests verify the accuracy of a simulator.
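The square-root iteration with assured convergence mentioned in the caption of \fir{fig:fig2} is simple enough to state explicitly. A minimal sketch (ours; here in Python) of the Babylonian, or Heron's, method: starting from any positive guess, repeatedly averaging $x$ with $a/x$ converges to $\sqrt{a}$.

```python
def babylonian_sqrt(a, x0=1.0, tol=1e-12, max_iter=100):
    """Heron's (Babylonian) iteration: x <- (x + a/x) / 2 converges to sqrt(a)."""
    x = x0
    for _ in range(max_iter):
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

print(babylonian_sqrt(2.0))  # approximately 1.4142135623731
```

The convergence is quadratic near the root, which is precisely the kind of guaranteed accuracy that, in the language of this article, elevates the abacus user from simulator to computer.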
A simulator could have significant errors yet pass these tests. It might be that the simulator is accurate in the regimes in which we have accurate analytical or classical numerical results, but is more sensitive to errors in regimes that are difficult to treat with other methods, e.g.\ near phase transitions, perhaps for the same reason. In fact, Hauke {\em et al.}\ gave an example of exactly this phenomenon in the transverse Ising model~\cite{Hauke2012}. The danger with comparing simulations, even realised by different devices~\cite{Leibfried2010}, is that there may be similar sources of error, or errors in the two simulations may manifest in the results in the same way. Although this makes simulation difficult to assess, it does not invalidate it; it would be unreasonably harsh to demand verification of all simulators. The reason for this is that, as illustrated in \fir{fig:fig1}, simulators are usually the first step in a two-step process: first a device is devised to simulate a model, and second the model is employed to study a real system (see \secr{sec:howused} `How are they used?'). It might be unreasonable to demand a more rigorous testing of the first part of this process than the second. In the second part, when we devise a model to reproduce the behaviour of a physical system, we only demand that the model be falsifiable~\cite{Popper1963}. We seek as many fail-able tests as possible of the model, and to the extent that it passes these tests, we retain the model. It is difficult for experiments to verify a particular use of the model, rather successful experiments merely declare the model `not yet false'. This is the scientific method. We should not, therefore, demand anything more or less when going in the other direction, devising a physical device to reproduce the behaviour of a model. All we can do is test our simulators as much as possible, and slowly build confidence in accordance with the passing of these tests. 
If the capability of performing such tests lags behind the development of the simulator, then so naturally must our confidence. It becomes clear that the purpose of the device is crucial to how it is assessed, explaining our highlighting the purpose of a simulator alongside its definition. If we were using a device to provide information about a model without any additional motivation, as with a computer, then it would be reasonable to search for a means of verification and guarantees of accuracy, as with a computer. Eventually, quantum technologies might develop to a stage where large simulations of this type are feasible, e.g.\ via Lloyd's digital simulator~\cite{Lloyd1996}, but it is likely to be in the more distant future. It must be noted, however, that many of the devices we use regularly for computation are unverifiable in the strictest sense. Not every transistor in the classical computers we use (for instance to simulate quantum systems) can be verified to be functioning as desired~\cite{May1992}. We instead develop an understanding of the sources of error, perform some tests to check for obvious errors, and use the devices with caution. The words `trust' and `confidence' in the preceding paragraphs are chosen deliberately. They indicate that, since for simulation we do not always have verifiability, we are not discussing objective properties of devices, but our understanding of them. This will change in time (see an example of this in \fir{fig:fig2}). Further, confidence depends on the eventual goal of our use of the simulator. Some properties of a system may be too sensitive to Hamiltonian parameters to be realistically captured by a simulator, while other properties may be statistically robust against parameter variations~\cite{Walschaers2013}. In this sense trustworthiness is not a clear-cut topic that is established upon the initial development of a simulator. Instead, it is the result of a complex, time-consuming process in the period that follows. 
It is the responsibility of critics not to be overly harsh and unfairly demanding of new simulators to provide immediate proof of their trustworthiness, but it is also the responsibility of proponents not to declare trustworthiness before their simulator has earned it. \section{Where next for quantum simulation?} \label{sec:wherenext} The majority of the current effort on quantum simulation is, firstly, in matching models of interest to a suitable quantum device with which to perform a simulation~\cite{Buluta2009,Georgescu2013}. Secondly, experimentalists demonstrate a high level of control and flexibility with a simulator, performing some of the simple fail-able tests mentioned above~\cite{Aspuru-Guzik2012,Blatt2012,Bloch2012}. This is very much along the lines of the five goals set out by Cirac and Zoller in 2012 \cite{Cirac2012}, and great successes have led to claims that we are now able to perform simulations on a quantum device that we are unable to do on a classical device. In the future, the main direction of inquiry will continue to be along these lines. However, it is the very fact that the simulation capabilities of quantum devices are beginning to surpass those of classical devices that should prompt a more forceful investigation into the best approach to establishing confidence in quantum simulators. Hauke {\em et al.}\ proposed a set of requirements for a quantum simulator, an alternative to Cirac and Zoller's, that focuses on establishing the reliability and efficiency of a simulator, and the connection between these two properties \cite{Hauke2012}. As we move to classically unsimulable system sizes and regimes where there is no clear expected behaviour, trustworthiness and falsifiability should no longer be an afterthought. In fact, they should be primary objectives of experimental and theoretical work, since quantum simulators cannot truly be useful until some level of trust is established. 
Can we predict in advance where the results of quantum simulators are more sensitive to errors? How does this overlap with the regimes of classical simulability? Are there even some results that will be exponentially sensitive to the Hamiltonian parameters and not expected to ever be simulable in a strict sense? These are difficult but important questions to answer, and progress on them will be exciting and thought provoking. \begin{backmatter} \section*{Competing interests} The authors declare that they have no competing interests. \section*{Author's contributions} All authors conceived of the study, participated in its design and coordination, helped to draft the manuscript, and read and approved the final manuscript. THJ undertook the majority of the writing. \end{backmatter} \end{document}
\begin{document} \setlength{\baselineskip}{1.25\baselineskip} \begin{center} {\large\bf Restricted Routing and Wide Diameter of the Cycle Prefix Network} \end{center} \begin{center} \parbox{15cm}{\begin{center} \baselineskip=1.1\baselineskip William Y. C. Chen, $\;$ Vance Faber $\;$ and $\;$ Emanuel Knill {\em C-3, Mail Stop B265\\ Los Alamos National Laboratory\\ Los Alamos, New Mexico 87545} \end{center} } \end{center} \begin{abstract} The cycle prefix network is a Cayley coset digraph based on sequences over an alphabet which has been proposed as a vertex symmetric communication network. This network has been shown to have many remarkable communication properties such as a large number of vertices for a given degree and diameter, simple shortest path routing, Hamiltonicity, optimal connectivity, and others. These considerations for designing symmetric and directed interconnection networks are well justified in practice and have been widely recognized in the research community. Among the important properties of a good network, efficient routing is probably one of the most important. In this paper, we further study routing schemes in the cycle prefix network. We confirm an observation first made from computer experiments regarding the diameter change when certain links are removed in the original network, and we completely determine the wide diameter of the network. The wide diameter of a network is now perceived to be even more important than the diameter. We show by construction that the wide diameter of the cycle prefix network is very close to the ordinary diameter. This means that routing in parallel in this network costs little extra time compared to ordinary single path routing. \end{abstract} \begin{center} Suggested Running Title: Cycle Prefix Network
\end{center} \newsection{Introduction} The cycle prefix network is a vertex symmetric, directed graph which has recently been proposed for use as a communication network [\ref{FMC93}]. It has been shown that the cycle prefix network has many remarkable communication properties such as a large number of vertices for a given degree and diameter, simple shortest path routing, Hamiltonicity, optimal connectivity, and others [\ref{FMC93},\ref{CFK93},\ref{JR92}]. These considerations for designing symmetric and directed networks are well justified in practice and have been widely recognized in the research community. In the search for highly efficient network models and in the study of communication algorithms on these networks, Cayley graph techniques have been used successfully in discovering new models and in analyzing network efficiencies. Another interesting idea in network design which is used for the construction of many networks is to represent nodes as certain sequences over an alphabet with links represented by suitable operations on sequences. The hypercube, de Bruijn, Kautz, star and pancake networks can all be constructed in this fashion. In the case of cycle prefix digraphs, both the idea of Cayley graphs (Cayley coset digraphs, to be precise) and that of sequences over an alphabet can be used as the underlying representation, and each has its own advantage. The former idea was the point of departure for the discovery of the cycle prefix network, motivated by the fundamental theorem of Sabidussi [\ref{Sab69}] which shows that all vertex symmetric digraphs are Cayley coset digraphs. The sequence representation of a cycle prefix network is more useful for studying its properties and for implementing it in practice. For this reason, we shall utilize the sequence representation of the cycle prefix digraph throughout the paper. In the performance evaluation of networks, the diameter and routing efficiency are among the most critical concerns. 
In this paper we shall further study these issues for the cycle prefix network. The paper has two objectives. The first objective is to confirm an observation first made on the basis of computer experiments concerning the diameter change when certain links are removed. By proving a reachability theorem we show that in the case of cycle prefix digraphs, the resulting networks often possess better degree-diameter properties than the original one. Using the method of Conway and Guy [\ref{CG}], one may construct large symmetric networks with small degree and diameter. In a recent paper [\ref{CF92}], Comellas and Fiol describe a routing scheme which implies an upper bound on the diameter of the link-deleted cycle prefix digraphs. However, they left open the question of whether the bound is exact. We settle this question by exhibiting vertices that achieve the diameter bound. The second objective of this paper is concerned with the recently introduced notion of the wide diameters of networks. Mathematically, the notion of the wide diameter of a graph naturally stems from the classical theorem of Menger relating connectivity to disjoint paths. However, such a notion has not been studied in graph theory until very recently when it became relevant from an engineering point of view. D. F. Hsu gives a vivid account of the background of wide diameters [\ref{Hsu**}]: ``The concept of the wide diameter of a graph $G$ arises naturally from the study of routing, reliability, randomized routing, fault tolerance, and other communication protocols (such as the byzantine algorithm) in parallel architecture and distributed computer networks.
By considering both the width and the length of a container, we are able to give a global and systematic treatment on the interconnection network for various distributed systems.'' ``Although the concept of a container and the notion of wide diameter have been discussed and used in practical applications, the graph theory questions suggested have not, at least until recently, been studied as extensively as the questions in the hardware and software design, development, and implementation of distributed computing systems.'' In the case of the cycle prefix network, we completely determine its wide diameter through an explicit construction. It turns out that the wide diameter is very close to the ordinary diameter of the network. In other words, routing in parallel in the cycle prefix network costs little extra time compared to ordinary single path routing. This property undoubtedly increases the usefulness of the cycle prefix network. \newsection{The Cycle Prefix Digraphs $\Gamma_\Delta(D)$ and $\Gamma_\Delta(D, -r)$} The cycle prefix digraph $\Gamma_\Delta(D)$ $(\Delta\geq D)$ is defined as a digraph whose vertex set consists of sequences $x_1x_2\cdots x_D$ over an alphabet $\{ 1, 2, \ldots, \Delta+1\}$, where $x_1, x_2, \ldots, x_D$ are distinct. Such sequences are called partial permutations or $D$-permutations. The adjacency relations for a vertex $x_1x_2\cdots x_D$ are described as follows: \[ x_1x_2\cdots x_D \Rightarrow \left\{ \begin{array}{ll} x_k x_1\cdots x_{k-1} x_{k+1} \cdots x_D, \quad & \mbox{for $2\leq k \leq D$}, \\ \equsep y x_1 x_2 \cdots x_{D-1}, & \mbox{for $y \not= x_1, x_2, \ldots, x_D$}. \end{array} \right.
\] We say that the vertex $x_kx_1\cdots x_{k-1} x_{k+1} \cdots x_D$ is obtained from $x_1x_2\cdots x_D$ via a rotation on the prefix $x_1x_2\cdots x_k$, denoted \[ x_k x_1\cdots x_{k-1} x_{k+1} \cdots x_D = R_k (x_1x_2\cdots x_D)\; .\] In particular, the operation $R_D$ is called a full rotation, and $x_D x_1 x_2\cdots x_{D-1}$ is called a full rotation of $x_1x_2\cdots x_D$. The rotations $R_k$ for $k< D$ are called partial rotations. Similarly, we say that the sequence $yx_1x_2\cdots x_{D-1}$ is obtained from $x_1x_2\cdots x_D$ via a shift, denoted \[ yx_1 x_2\cdots x_{D-1} = S_y ( x_1x_2\cdots x_D).\] Since the term ``adjacent'' is somewhat ambiguous for a directed graph, when $(u, v)$ is an arc in a digraph we shall say that $u$ is {\em adjacent to} $v$ while $v$ is {\em next to} $u$. The term ``adjacent from'' is used by some authors to distinguish it from ``adjacent to''. From the above sequence definition of $\Gamma_\Delta(D)$, it is easily seen to be vertex symmetric, a fact that immediately follows from the Cayley coset digraph definition of $\Gamma_\Delta(D)$ [\ref{FMC93}]. We notice that some authors prefer the sequence shift in the direction from right to left, like the shift for de Bruijn digraphs: $(x_1, x_2, \ldots, x_n) \Rightarrow (x_2, \ldots, x_n, z)$. For the sake of consistency, we shall follow the notation in [\ref{FMC93}], and use the left-to-right shift, which reflects the rotations on the prefixes. A left-handed notation is however adopted by Comellas and Fiol [\ref{CF92}]. In the original study of cycle prefix digraphs $\Gamma_\Delta(D)$, Faber and Moore observed from computational experiments that if one rules out the double arcs in $\Gamma_\Delta(D)$, then the diameter of the resulting digraph, denoted $\Gamma_\Delta(D,-1)$, increases only by one. They then considered a new construction based on the cycle prefix digraph $\Gamma_\Delta(D)$.
Suppose $\Gamma_\Delta(D,-r)$ is the digraph obtained from $\Gamma_\Delta(D)$ by deleting the arcs represented by the partial rotations $R_2, R_3, \ldots, R_{r+1}$. Formally speaking, $\Gamma_\Delta(D,-r)$ has the same vertex set as $\Gamma_\Delta(D)$, and the adjacency relations for a vertex $x_1x_2\cdots x_D$ are described by \[ x_1x_2\cdots x_D \Rightarrow \left\{ \begin{array}{ll} x_k x_1\cdots x_{k-1} x_{k+1} \cdots x_D, \quad & \mbox{for $r+2\leq k \leq D$}, \\ \equsep y x_1 x_2 \cdots x_{D-1}, & \mbox{for $y \not= x_1, x_2, \ldots, x_D$}. \end{array} \right. \] Note that the degree of $\Gamma_\Delta(D,-r)$ decreases by $r$ compared with $\Gamma_\Delta(D)$. There is an intuitive reason to surmise that the diameter of $\Gamma_\Delta(D,-r)$ would increase by the same amount. Recently, Comellas and Fiol [\ref{CF92}] have shown that in most cases the diameter increase of $\Gamma_\Delta(D,-r)$ is in fact bounded by $r$. However, they did not settle the question of whether this bound is exact. We will fill this gap by proving the exactness of the diameter bound. Moreover, we shall study the reachability property of the digraph $\Gamma_\Delta(D,-r)$ (this property for $\Gamma_\Delta(D)$ has been studied in [\ref{CF92}]). A vertex symmetric digraph with the reachability property is of great use in constructing new classes of vertex symmetric digraphs with small degree and diameter, as proposed by Conway and Guy [\ref{CG}]. The details are presented in the next section. \newsection{Restricted Routing for $\Gamma_\Delta(D,-r)$} For the sake of easier presentation, we shall start with the reachability of the digraph $\Gamma_\Delta(D,-r)$. It is easy to see that $\Gamma_\Delta(D,-r)$ is vertex symmetric and that one may choose the {\em standard origin} $12\cdots D$ while considering routing between any two vertices. For simplicity the vertex $X$ is always referred to as $x_1x_2\cdots x_D$.
\begin{de}[$k$-Reachable digraphs] A digraph is said to be $k$-reachable if for any two vertices $u$ and $v$, which are not necessarily distinct, there exists a path (with repeated vertices and arcs allowed) from $u$ to $v$ of length $k$. \end{de} Comellas and Fiol [\ref{CF92}] have shown that the digraph $\Gamma_\Delta(D)$ is $D$-reachable for $\Delta \geq D \geq 3$. Here we will present a stronger result for $\Gamma_\Delta(D,-r)$. \begin{thm} Suppose $r\geq 0$ and $\Delta\geq D \geq 2r+3$. Then the vertex symmetric digraph $\Gamma_\Delta(D,-r)$ is $(D+r)$-reachable. \label{thm-reach} \end{thm} The key to the above theorem is the following observation about {\em dead angles}. Given a vertex $X=x_1x_2\cdots x_D$, the prefix $x_1x_2\cdots x_{r+1}$ is called the dead angle of $X$ in $\Gamma_\Delta(D,-r)$. We say that a letter $z$ is in the dead angle of $X$ if $z=x_i$ for some $1\leq i \leq r+1$. \begin{lem}[Dead Angle Principle] Let $X$ be a vertex in $\Gamma_\Delta(D,-r)$. Then there exists a vertex $Y$ next to $X$ that begins with a letter $z$ if and only if $z$ is not in the dead angle of $X$. \end{lem} The proof of the above lemma is straightforward. It is based on a property of $\Gamma_\Delta(D)$ regarding how the rotation and shift operations complement each other: Suppose $z$ is not in the dead angle of $X$. If $z$ is indeed in $X$, then a rotation operation on $X$ may put $z$ back at the beginning of the sequence; otherwise, a shift operation can achieve the same goal with ease. For this reason, one sees that the two operations are coherent with each other, although they look rather unrelated. Moreover, if $Y$ is next to $X$ in $\Gamma_\Delta(D,-r)$, then $Y$ is determined by its first element. We now give the proof of Theorem \ref{thm-reach}. {\em Proof.} Let $I=12\cdots D$ be the standard origin and $X=x_1x_2\cdots x_D$ be the destination.
It suffices to show that there is a directed path of length $D+r$ from $I$ to $X$. We first consider the case when $x_D\not=1$. Let $A$ be the set $\{x_{1}, x_{2}, \ldots, x_{D-r-1}\}$. Since the dead angle of $I$ contains $r+1$ elements, and $D-(r+1) > r+1$, there exists an element $y_1$ in $A$ that is not in the dead angle of $I$. By the dead angle principle, $I$ is adjacent to a vertex $P_1$ with prefix $y_1 1\, 2 \cdots (r+1)$. Let $A_1=A \backslash \{y_1\}$. Considering the dead angle of $P_1$, the same condition on $D$ and $r$ ensures that there exists an element $y_{2}$ in $A_1$ that is not in the dead angle of $P_1$ (implying that $y_1$ and $y_2$ are distinct). Hence $P_1$ is adjacent to a vertex $P_2$ with prefix $y_{2} y_1 1\, 2 \cdots r$. Repeating the above procedure, one may reach a vertex $P_r$ such that $P_r$ has prefix $y_r\cdots y_2y_1 \,1$ and $y_1, y_2, \ldots,y_r$ come from $A$. It has already taken $r$ steps to get to $P_r$ from $I$. Since $x_D\not= 1$, we may construct a path of length $D$ from $P_r$ to $X$, and display it by showing the prefixes: \[ y_r \cdots y_2 y_1 1 \quad \Rightarrow \quad x_D y_r\cdots y_2 y_1 \quad \Rightarrow \quad x_{D-1} x_D y_r\cdots y_2 y_1 \quad \Rightarrow \quad \cdots \quad \] \[ \Rightarrow \quad x_{D-r} \cdots x_{D-1} x_D y_r\cdots y_2 y_1 \quad \Rightarrow \quad x_{D-r-1} x_{D-r} \cdots x_{D-1} x_D \quad \Rightarrow \quad \cdots \quad \] \[\Rightarrow \quad x_1 x_2\cdots x_D\;. \] We next consider the case when $x_D=1$. Let $A=\{x_1, x_2, \ldots, x_{D-r-2}\}$ and $B=\{ 2, 3, \ldots, r+1\}$. Since $D-r-2 >r$, there exists an element $y_1$ such that $y_1 \in A$ but $y_1\not\in B$. Note that $1\not\in A, B$. Hence by the dead angle principle, $I$ is adjacent to a vertex $P_1$ with prefix $y_1 1 \,2\, \cdots (r+1)$. Let $A_1=A\backslash \{y_1\}$, $B_1=B\backslash \{r+1\}$. The same condition on $D$ and $r$ ensures that there exists $y_2\in A_1$, but $y_2\not\in B_1$.
It follows that $P_1$ is adjacent to a vertex $P_2$ with prefix $y_2y_1 \, 1\, 2\, \cdots \, r$. Repeating this procedure, one ends up with a vertex $P_r$ having prefix $y_ry_{r-1}\cdots y_1 1$, where $y_i\in A$. We continue with the following path of length $r+2$ starting from $P_r$ (with only prefixes shown): \begin{eqnarray*} y_r y_{r-1} \cdots y_1 1 & \Rightarrow & x_{D-r-1} y_r y_{r-1} \cdots y_1 1 \\ & \Rightarrow & x_{D-1} x_{D-r-1} y_r y_{r-1} \cdots y_1 1 \\ & \Rightarrow & x_{D-2} x_{D-1} x_{D-r-1} y_r y_{r-1} \cdots y_1 1 \\ & \cdots & \\ & \Rightarrow & x_{D-r} \cdots x_{D-1} x_{D-r-1} y_r y_{r-1} \cdots y_1 1 \\ & \Rightarrow & x_{D-r-1} x_{D-r} \cdots x_{D-1} y_r y_{r-1} \cdots y_1 1 \end{eqnarray*} The last vertex is labeled $P_{2r+2}$ according to its distance from $I$. Since each $y_i \in A$, we claim that there is a path from $P_{2r+2}$ to $X$ of the following form: \[ P_{2r+2} \quad \Rightarrow \quad x_{D-r-2} x_{D-r-1} \cdots x_{D-1} \cdots 1 \quad \Rightarrow \quad x_{D-r-3} \cdots x_{D-1} \cdots 1 \quad \Rightarrow \quad \cdots \] \[ \quad \Rightarrow \quad x_1x_2 \cdots x_{D-1} 1 =X\, , \] because each $y_i \in A$, so at each step it is impossible to bump $1$ out of the sequence, and the last vertex has to be $X$. Summing up all the segments, we get a path of length $D+r$. \qed Specializing the above theorem to $r=0$ recovers the $D$-reachability of $\Gamma_\Delta(D)$ first observed in [\ref{CF92}]. Moreover, using the method of Conway and Guy [\ref{CG}], one may construct large symmetric digraphs with small degree and diameter based on $\Gamma_\Delta(D,-r)$. Since the digraph $\Gamma_\Delta(D,-r)$ in some cases has more vertices than $\Gamma_{\Delta-r}(D+r)$, one may use the above theorem in constructing new symmetric digraphs. However, we will not discuss this aspect here. The rest of this section is concerned with the diameter of $\Gamma_\Delta(D,-r)$.
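For small parameters, the reachability theorem can be verified by brute force. The Python sketch below (ours, not from the paper; names are our own) implements the restricted adjacency of $\Gamma_\Delta(D,-r)$ and checks, for $\Delta=D=5$ and $r=1$ (so that $D\geq 2r+3$ holds), that every vertex is the endpoint of a walk of length exactly $D+r$ from the standard origin $12\cdots D$; by vertex symmetry this confirms $(D+r)$-reachability.

```python
from itertools import permutations

def neighbors_r(v, delta, r):
    """Out-neighbors in Gamma_delta(D, -r): the partial rotations
    R_2, ..., R_{r+1} are deleted, leaving R_k for r+2 <= k <= D."""
    D = len(v)
    out = [(v[k - 1],) + v[:k - 1] + v[k:] for k in range(r + 2, D + 1)]
    out += [(y,) + v[:-1] for y in range(1, delta + 2) if y not in v]
    return out

delta, D, r = 5, 5, 1                     # D >= 2r + 3, as the theorem requires
verts = set(permutations(range(1, delta + 2), D))
layer = {tuple(range(1, D + 1))}          # endpoints of walks of length 0 from 12...D
for _ in range(D + r):                    # grow the walks one arc at a time
    layer = {w for v in layer for w in neighbors_r(v, delta, r)}
assert layer == verts                     # every vertex reached in exactly D + r steps
print(len(verts))
```

The check uses walks (repeated vertices allowed), matching the definition of $k$-reachability above.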
It is shown in [\ref{CF92}] that the diameter of $\Gamma_\Delta(D,-r)$ does not exceed $D+r$ for $\Delta\geq D \geq 2r+2$. This upper bound is established by a construction that is much simpler than the one used for the reachability of $\Gamma_\Delta(D,-r)$. Note that the reachability result requires a slightly stronger condition on the parameters of $\Gamma_\Delta(D,-r)$. Let us give an outline. Let $I=12\cdots D$ be the standard origin, and $X=x_1x_2\cdots x_D$ be any vertex in $\Gamma_\Delta(D,-r)$. For the case $x_D\not=1$, one may first try to reach from $I$ a vertex with prefix $y_ry_{r-1}\cdots y_1 1$, where $y_i \not= x_{D-1}, x_{D-2}, \ldots, x_{D-r}$. Then one may continue with vertices having prefixes $x_D$, $x_{D-1}x_D$, etc. For the case $x_D=1$, one may get to a vertex with prefix $y_r y_{r-1}\cdots y_1 1$ such that $y_i \in \{ x_1, x_2, \ldots, x_{D-r-2}\}$. Then one may get to $X$ via vertices with prefixes $x_{D-1}$, $x_{D-2}x_{D-1}$, etc. The last element $x_D=1$ will eventually take care of itself for the reason given in the proof of the reachability theorem. It is harder to show that the above diameter bound is exact. For this purpose, we find a class of vertices that achieve the bound. A vertex $X$ in $\Gamma_\Delta(D,-r)$ is called a remote vertex if $x_{D-1}=1$ and $x_D=D$, and there exists $x_i>D$ for some $1\leq i \leq D-2$. We shall use the common notation $d(X, Y)$ to denote the distance from $X$ to $Y$ in a digraph. Then we have the following theorem: \begin{thm} Let $\Delta\geq D \geq 2r+2$, and let $X$ be a remote vertex in $\Gamma_\Delta(D,-r)$. Then the distance from the standard origin $I=12\cdots D$ to $X$ equals $D+r$. \end{thm} {\em Proof.} The diameter upper bound is already established, so it suffices to show that $d(I,X)\geq D+r$. Since $X=x_1x_2\cdots x_{D-2}1D$ and there exists $x_i>D$ for some $i$, reaching $X$ from $I$ requires at least one shift operation.
Thus, the element $D$ in $I$ cannot remain in the last position during the process of reaching $X$ from $I$. Since $D$ is in the destination vertex $X$, it is either moved back to the beginning position at some point, or removed from the sequence by a shift operation and then put back at the beginning by another shift operation. Let $I\Rightarrow P_1 \Rightarrow P_2 \Rightarrow \cdots \Rightarrow P_m =X$ be a shortest path from $I$ to $X$. Since either a shift or a rotation operation on a vertex, say $Y=y_1y_2\cdots y_D$, moves the elements in the dead angle to positions further right, the vertex $P_{r+1}$ must have the prefix $z_{r+1} \cdots z_2 z_1 1$. If $D$ does not appear in $z_{r+1} \cdots z_2 z_1$, then it will take at least $D$ steps to reach $X$ from $P_{r+1}$ as far as the last element of $X$ is concerned. This contradicts the upper bound $D+r$ on the diameter of $\Gamma_\Delta(D,-r)$. We now assume that $z_i=D$ for some $i$. It follows that $P_{r+i+1}$ has a prefix of the form $w_i \cdots w_2 w_1 \, z_{r+1} \cdots z_{i+1} D$. Consider the following two cases: Case 1. The element 1 appears in $w_i \cdots w_2 w_1$. If $D$ is shifted out of a vertex after the vertex $P_{r+i+1}$, then it will take at least $D$ steps to put $D$ to the last position of $P_m$, a contradiction. Thus, $D$ has to remain in the vertices $P_{r+i+1}$, $P_{r+i+2}$, $\ldots$, $P_m$. Moreover, $D$ will never be put back to the beginning of a vertex by a rotation, because after that rotation one needs at least $D-1$ steps to move $D$ to the last position of $P_m$, which is also impossible. Hence, in the path from $P_{r+i+1}$ to $P_m$, $1$ has to remain in all the vertices on this path segment, and $1$ is always to the left of $D$. We define $\delta(Y)$ to be the number of elements between $1$ and $D$. Let $f$ be the number of operations used to reach $P_m$ from $P_{r+i+1}$ that move the element $D$ to the right, and $g$ be the number of the remaining operations used on the same path.
For a rotation or a shift operation $T$ on $P_j$ $(r+i+1 \leq j \leq m-1)$, if $T$ moves $D$ to the right, then $T$ leaves the value of $\delta(P_j)$ unchanged; otherwise $T$ may reduce the value of $\delta(P_j)$ by at most one. It follows that \[m-r-i-1 = f+g \geq (D-r-2) + (r-i+1) = D-i-1\, .\] Hence $m \geq D+r$. Case 2. The element $1$ does not appear in $w_i \cdots w_2 w_1$. As in Case 1, the element $D$ has to remain in the vertices on the path from $P_{r+i+1}$ to $P_m$, and $D$ is never moved back to the beginning of any vertex on the path. It is clear that at some step, $1$ has to be put back to the beginning of a vertex on the aforementioned path either by a rotation or a shift operation. Suppose this happens to $P_j= 1 \cdots$ $(j\geq r+i+2)$. Since $D$ is never moved back to the beginning of a vertex, in $P_j$ there are at least $r+1$ elements between $1$ and $D$. Thus, we need at least $r+1$ steps to move $1$ next to $D$. It follows that \[ m \geq (r+i+1) + (D-r-2)+(r+1) = D+r+i \geq D+r \; .\] This completes the proof. \qed We remark that when $r\geq 1$ the digraph $\Gamma_\Delta(D,-r)$ does not have the unique shortest path property of $\Gamma_\Delta(D)$. For example, in $\Gamma_4(4,-1)$ there are two shortest paths from 1234 to 5214, shown below: \[ \begin{array}{ccccccccccc} 1234 & \Rightarrow & 4123 & \Rightarrow & 5412 & \Rightarrow & 1542 & \Rightarrow & 2154 & \Rightarrow & 5214, \\ \equsep 1234 & \Rightarrow & 5123 & \Rightarrow & 4512 & \Rightarrow & 1452 & \Rightarrow & 2145 & \Rightarrow & 5214. \end{array} \] \newsection{The Wide Diameter of $\Gamma_\Delta(D)$} Connectivity considerations for a network were primarily motivated by its fault tolerance capabilities, while the diameter is a measure of routing efficiency along a single path. Interestingly, the recent notion of wide diameter unifies the diameter and the connectivity via the classical theorem of Menger.
This notion also has a strong practical background. Let $G$ be a digraph of connectivity $k$ and diameter $D$. By Menger's theorem, between any two distinct vertices $x$ and $y$ in $G$ there are $k$ vertex-disjoint paths. Such a set of disjoint paths, denoted by $C(x, y)$, is called a container, and its length is defined as the maximum length among the paths in the container. The wide distance from $x$ to $y$ is then defined to be the minimum length of the containers from $x$ to $y$, and the wide diameter is the maximum wide distance among all pairs of distinct vertices. As we have mentioned before, the wide diameter of a network has a solid practical background, which we shall not discuss further here. Clearly, the wide diameter of a digraph is at least as large as the ordinary diameter. However, it is rather remarkable that for most of the popular interconnection networks, such as the hypercube, de Bruijn, Kautz, and star networks, the wide diameter is only a small constant larger than the ordinary diameter. The main result of this section is to show that such a remarkable phenomenon also occurs in the cycle prefix network. For a vertex $X$ in $\Gamma_\Delta(D)$, we shall use $N(X)$ to denote the set of vertices next to $X$, and $M(X)$ the set of vertices adjacent to $X$. We shall use the notation $i\circ X$ to denote the vertex next to $X$ that is obtained by rotating the element $i$ to the beginning position if $i$ is in $X$, or by shifting $i$ into $X$ and bumping the last element out of $X$, namely $S_i(X)$ in the previous notation. If $i=x_1$, let $i\circ X = X$. Note that $N(X)$ consists of the vertices $X_i = i\circ X$ for $i\not= x_1$, and, for $Y=12\cdots D$, $M(Y)$ consists of the vertices \[ Y_i = \left\{ \begin{array}{ll} 2\,3\,\cdots \, i \, 1 \, (i+1)\, \cdots \, D \, , \quad & \mbox{if}\quad 2\leq i \leq D\, .
\\ \equsep 2\, 3\, \cdots \, D \, i\,, \quad & \mbox{if}\quad D < i \leq \Delta+1\, ,\\ \equsep Y&\mbox{if}\quad i = 1\,. \end{array} \right. \] \begin{thm} The wide diameter of $\Gamma_\Delta(D)$ is at most $D+2$. It is exactly $D+2$ for $D\geq 4$. Specifically, if $X$ and $Y$ are distinct vertices in $\Gamma_\Delta(D)$, then there is a bijection $\theta$ of $N(X)\setminus\{Y\}$ to $M(Y)\setminus\{X\}$ such that the shortest paths from $Z$ to $\theta(Z)$ are vertex disjoint and do not contain either $X$ or $Y$. \end{thm} Since the proof of the above theorem heavily depends on the unique shortest path property, we here give a brief review of the shortest path routing in $\Gamma_\Delta(D)$. Given two vertices $X$ and $Y$ in $\Gamma_\Delta(D)$, a tail of $Y$ with respect to $X$ (as the origin) is a suffix $y_{k+1}\cdots y_D$ such that it forms a subsequence of $X$, say $x_{i_1} x_{i_2} \cdots x_{i_{D-k}}$, and all the elements $x_{1}, x_{2}, \ldots, x_{i_{D-k}}$ occur in $Y$. Note that a tail can be the empty sequence. A header of $Y$ with respect to $X$ is a prefix $y_1\cdots y_{k}$ such that the complementary suffix $y_{k+1}\cdots y_D$ is a tail. It is proved in [\ref{FMC93}] that the distance from $X$ to $Y$ is the length of the shortest header of $Y$ with respect to $X$. Suppose $y_1y_2\cdots y_{k}$ is the shortest header of $Y$ with respect to $X$; then the shortest path from $X$ to $Y$ is determined by the following prefixes: \[ X \quad \Rightarrow \quad y_k *** \quad \Rightarrow \quad y_{k-1}y_{k} *** \quad \Rightarrow \quad \cdots \quad \Rightarrow \quad y_1\cdots y_k *** = Y\, ,\] where $***$ is the usual wild-card notation meaning ``some sequence'', used to fill the gap in the notation of a sequence. Without loss of generality $Y$ can be assumed to be the standard origin. For clarity we list the following conditions which together are equivalent to $d(X,Y)=k$ for $k<D$: \begin{itemize} \item[(a).] $y_D$ appears in $X$, say $x_j=y_D$. \item[(b).]
$x_1, x_2, \ldots, x_j$ are all in $Y$ (but they do not necessarily form a subsequence). \item[(c).] $y_{k+1}\cdots y_D$ is a subsequence of $X$, but $y_k y_{k+1}$ is not. \end{itemize} When $d(X, Y)=D$, it is equivalent to the following statement: \begin{itemize} \item[(a).] either $y_D$ is not in $X$, \item[(b).] or $y_D$ is in $X$, say $x_j=y_D$, but there exists $x_r$ with $r<j$ that is not in $Y$. \end{itemize} For example, let $X=47285136$ and $Y=82164753$; the shortest header of $Y$ with respect to $X$ is illustrated by $8216\,|\,4753$. The following property is helpful in understanding the routing scheme in $\Gamma_\Delta(D)$ and it implies the uniqueness of the shortest path. \begin{prop} Given $X=x_1x_2\cdots x_D$ and $Y=y_1y_2\cdots y_D$ in $\Gamma_\Delta(D)$, suppose $k=d(X,Y)$. Then $d(i\circ X, Y) \geq d(X,Y)$ unless $i=y_k$, in which case $d(i\circ X, Y)= d(X,Y)-1$. \end{prop} In order to reach the conclusion in the above theorem concerning the wide diameter of $\Gamma_\Delta(D)$, we shall start with the easiest case $x_1=1$. We give a complete treatment of this case. For the other cases, we only give an outline of the proof. The details are similar to the case $x_1 = 1$ but more tedious. In this regard, we hope that a simpler construction will be found with a better understanding of the wide diameter of $\Gamma_\Delta(D)$. There is no doubt that the construction given in this paper is {\em ad hoc}, although it does give the best bound. For the case $x_1=1$, the mapping $\theta$ is defined by \[ \theta(X_i) = Y_i\,, \quad (2\leq i \leq \Delta+1).\] The following is an example for $\Delta=5, D=4$ and $X=1325$.
\[ \begin{array}{lllllllllll} X_2= & 2135 & \rightarrow & {\underline 4} 213 & \rightarrow & \underline{34}21 & \rightarrow & \underline{134} 2 & \rightarrow & \underline{2134} & = Y_2 \\ \equsep X_3= & 3125 & \rightarrow & \underline{4}312 & \rightarrow & \underline{14}32 & \rightarrow & \underline{314}2 & \rightarrow & \underline{2314} & = Y_3 \\ \equsep X_4= & 4132 & \rightarrow & \underline{3}412 & \rightarrow & & & & & \underline{23}41 & = Y_4 \\ \equsep X_5= & 5132 & \rightarrow & \underline{4}513 & \rightarrow & \underline{34}51 & \rightarrow & & & \underline{234}5 & = Y_5 \\ \equsep X_6= & 6132 & \rightarrow & \underline{4}613 & \rightarrow & \underline{34}61 & \rightarrow & & & \underline{234}6 & = Y_6 \end{array}\] The following lemma gives the distance from $X_i$ to $Y_i$, from which the shortest path routing is determined in terms of the shortest header. \begin{lem} Let $X=x_1x_2\cdots x_D$ be a vertex in $\Gamma_\Delta(D)$ such that $x_1=1$. Suppose the distance from $X$ to $Y=12\cdots D$ is $k$. Then the distance from $X_i$ to $Y_i$ is given by \[ d(X_i, Y_i) = \left\{ \begin{array}{ll} k, \quad & \mbox{if} \quad 1 < i < k , \\ \equsep k-2, \quad & \mbox{if} \quad i= k , \\ \equsep i-2, \quad & \mbox{if} \quad k < i \leq D , \\ \equsep D-1, \quad & \mbox{if} \quad i > D\, . \end{array}\right. \] \end{lem} {\em Proof.} We first consider the case when $k<D$. Note that $d(X, Y)=k$ is equivalent to the above conditions (a), (b) and (c) altogether. We need to check the same conditions for the corresponding distances in various cases. For $1<i<k$, the verification for $d(i\circ X, Y_i)=k$ is divided into the following three steps: (a). The last element of $Y_i$, namely $D$, appears in $i\circ X$: Suppose $i\circ X$ does not contain $D$. Since $D$ is in $X$, it must be the last element in $X$ and $i\circ X$ is obtained from $X$ via a shift operation. By condition (b) for $d(X,Y)=k < D$, $x_1, x_2, \ldots, x_D$ are all in $Y$, implying that $X$ is a permutation on $1, 2, \ldots, D$.
Thus, $i\circ X$ is obtained from $X$ via a rotation, which is a contradiction. (b). Suppose $x_j=D$. If $i \circ X$ is next to $X$ via a rotation, then every element prior to $D$ in $i\circ X$ is in $Y_i$ since $Y_i$ is a permutation of $Y$. If $i\circ X$ is next to $X$ via a shift operation, the above argument for (a) shows that $D$ cannot be the last element of $X$. It also follows that every element prior to $D$ in $i\circ X$ is still in $Y_i$. (c). Since $(k+1, k+2, \ldots, D)$ is a subsequence of $X$, it follows that it is also a subsequence of $i\circ X$ because $i<k$ and $D$ stays in $i\circ X$. Clearly, $(k, k+1)$ cannot be a subsequence of $i\circ X$ because it is not a subsequence of $X$. Combining (a), (b) and (c) one sees that $d(X_i, Y_i)=k$, and the shortest header of $Y_i$ with respect to $X_i$ is illustrated as follows: \[ 2 \, 3 \cdots i \, 1 \,(i+1) \, \cdots k \, | \, (k+1) \cdots D\, .\] For the case $i=k$, $d(X_k, Y_k)=k-2$: The verification of conditions (a) and (b) is the same as for the previous case. The only catch for condition (c) is that $(k, 1, k+1, \ldots, D)$ is a subsequence of $k\circ X$. Since $k$ is the first element of $k\circ X$, it follows that $d(X_k,Y_k)=k-2$ and the shortest header of $Y_k$ is illustrated below: \[ 2 \, 3\, \cdots \, (k-1) \, | \, k \, 1\, (k+1) \cdots D\, .\] For the case $k< i \leq D$, $d(X_i, Y_i)=i-2$: The arguments for conditions (a) and (b) remain the same. Noticing that the first two elements of $i\circ X$ are $i\,1$ and that $(i, 1, i+1, \cdots, D)$ is a subsequence of $i\circ X$, it follows that the shortest header of $Y_i$ is illustrated below: \[ 2\,3\, \cdots \, (i-1) \, | \, i \, 1\, (i+1)\, \cdots \, D\, .\] Now comes the last subcase: $D< i \leq \Delta+1$.
The tail containing the single element $i$ of $Y_i$ is clearly the longest tail with respect to $X_i$, and it is illustrated below: \[ 2\, 3\, \cdots \, D \, | \, i \, .\] We finally finish up the main case $k=D$, for which $Y$ is not a closed vertex with respect to $X$. For $1<i<D$, if $D$ is not in $X$, then $D$ is not in $i\circ X$ either. Suppose $x_j=D$ and there exists $x_r$ with $r<j$ that is not in $Y$. If $i\circ X$ does not contain $D$, then we are done. If $i\circ X$ contains $D$, then $x_r$ stays in $i\circ X$, but it is not in $Y_i$. Hence we still have $d(X_i, Y_i)=D$. For $i=D$, we have $D\circ X = D\, 1\, ***$ and $Y_D= 2\, 3\, \cdots \, D \,1$. Clearly $d(X_D, Y_D)=D-2$. For $i>D$, this is an easy matter, and the same as for the case $k<D$. This completes all the cases. \qed The shortest path routing from $X_i$ to $Y_i$ easily follows from the above lemma. Our next goal is to show that all the shortest paths from $X_i$ to $Y_i$ are vertex-disjoint. To this end, we need to define two statistics on a vertex so that they can be used to distinguish the vertices along the shortest paths from $X_i$ to $Y_i$. Given $X=x_1x_2\cdots x_D$, suppose $d(X, Y)=k$ where $Y=12\cdots D$. Define $\alpha(X)$ to be the first element $x_i$ such that $x_i\not\in \{ k+1, k+2,\ldots, D+1\}$. Let $\beta(X,i) = j+1$, where $j$ is obtained as follows: Let $Y'$ be the second to the last vertex on the shortest path from $X$ to $i\circ Y$. Let $j$ be the element immediately preceding $i$ in $Y'$ or, if $i$ does not occur in $Y'$, the last element of $Y'$. Equivalently, $\beta(X,i)$ is the minimum of $D+2$ and the smallest $x > k$ such that $x$ is to the right of $i$ in $X$ or not in $X$. Let $\beta(X) = \beta(X,\alpha(X))$. For example, suppose $X=531624$ and $Y=123456$. Then $d(X,Y)=4$, $\alpha(X)=3$, $\beta(X)=6$, $\beta(X,2) = 7$. We call $(\alpha(X), \beta(X))$ the {\em characteristic pair} of $X$.
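The shortest-header distance and the statistics $\alpha$ and $\beta$ can be made concrete in a short Python sketch (ours; the helper names are hypothetical). It computes $d(X,Y)$ as the length of the shortest header and evaluates $\beta$ via the equivalent description above; since the letters of a vertex are distinct, each suffix of $Y$ has at most one embedding in $X$, so a greedy subsequence check suffices.

```python
def is_subseq(s, X):
    """True if s is a subsequence of X (letters are distinct)."""
    it = iter(X)
    return all(c in it for c in s)   # membership tests consume the iterator

def dist(X, Y):
    """d(X, Y): length of the shortest header of Y with respect to X."""
    D = len(X)
    if Y[-1] not in X:                    # y_D not in X: only the empty tail
        return D
    j = X.index(Y[-1])
    if any(x not in Y for x in X[:j + 1]):
        return D                          # some x_r with r < j is missing from Y
    k = D
    for kk in range(D - 1, -1, -1):       # extend the suffix while it stays
        if is_subseq(Y[kk:], X):          # a subsequence of X
            k = kk
        else:
            break
    return k

def alpha(X, k):
    """First element of X outside {k+1, ..., D+1}."""
    D = len(X)
    return next(x for x in X if not (k + 1 <= x <= D + 1))

def beta(X, i, k):
    """min(D+2, smallest x > k to the right of i in X or absent from X)."""
    D = len(X)
    pos = {x: p for p, x in enumerate(X)}
    for x in range(k + 1, D + 2):
        if x not in pos or pos[x] > pos[i]:
            return x
    return D + 2

X, Y = (5, 3, 1, 6, 2, 4), (1, 2, 3, 4, 5, 6)   # the worked example above
k = dist(X, Y)
print(k, alpha(X, k), beta(X, alpha(X, k), k), beta(X, 2, k))  # 4 3 6 7
```

The same `dist` reproduces the earlier example $X=47285136$, $Y=82164753$ with $d(X,Y)=4$.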
It turns out that the characteristic pair of a vertex on the shortest path from $X_i$ to $Y_i$ can be easily determined along with the routing. The following table illustrates the shortest path $P_i$ from $X_i$ to $Y_i$ in various cases, together with the characteristic pairs from which one sees that all the vertices are indeed distinct. The notation $j\rightarrow$ means the operation of getting $j\circ Z$ from $Z=z_1z_2\cdots z_D$ for $j\not= z_1$. \begin{table} \caption{ Case 1: $x_1 = 1$.} \label{table:case1} \[\begin{array}{|llr|c|c|p{75pt}|} \hline i\hspace{20pt}\mbox{} & P_i&& \alpha(v) & \beta(v) & Notes \\ \hline \multicolumn{2}{|l}{\mbox{(a)} 1<i<k:}&&&&\\& \begin{array}{l} i\rightarrow\\ k\rightarrow\\ \vdots\\ i+1\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& i& k+1 & $i\not=1$ \\ &\begin{array}{l} 1\rightarrow\\ i\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\} & 1 & i+1 & $i+1\not=k+1$ \\ \hline \multicolumn{2}{|l}{\mbox{(b) } i=k:}&&&&\\& \begin{array}{l} k \rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\end{array}\right\} & 1 & k+1 & \\ \hline \multicolumn{2}{|l}{\mbox{(c)} k+1\leq i\leq D+1:}&&&&\\& \begin{array}{l} i\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\end{array}\right\}& 1 & i+1 & \\ \hline \multicolumn{2}{|l}{\mbox{(d)} D+1<i\leq \Delta+1:}&&&&\\& \begin{array}{l} i\rightarrow\\ D\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& i & D+1 & \\ \hline \end{array} \] \end{table} The cases other than $x_1=1$ are more tedious. The critical part is to construct the mapping from $N(x)$ to $M(Y)$. Recall that $d(X,Y)=k$. 
For $x_1=k+1$, we have \[ \theta(i\circ X) = \left\{ \begin{array}{ll} Y_k, \quad & \mbox{if} \quad i=1\, ,\\ \equsep Y_i, \quad & \mbox{if} \quad 1< i < k\; , \\ \equsep Y_{\beta(X,1)-1}, \quad & \mbox{if} \quad i=k\, ,\\ \equsep Y_{i-1}, \quad & \mbox{if} \quad k+1\leq i<\beta(X,1)\, ,\\ \equsep Y_i, \quad & \mbox{if} \quad \beta(X,1)\leq i \leq \Delta+1\,. \end{array} \right. \] It is straightforward to see that $\theta$ is a bijection. The distances from $i\circ X$ to $\theta(i\circ X)$ are given below: \[ d(i\circ X, \theta(i \circ X)) = \left\{ \begin{array}{ll} k-1, \quad & \mbox{if} \quad i=1\, ,\\ \equsep k-1, \quad & \mbox{if} \quad 1< i < k\; , \\ \equsep k-1, \quad & \mbox{if} \quad i=k\, ,\\ \equsep i, \quad & \mbox{if} \quad k+1\leq i<\beta(X,1)\, ,\\ \equsep D-1, \quad & \mbox{if} \quad \beta(X,1)\leq i \leq \Delta+1\,. \end{array} \right. \] We omit the detailed verification of the above distances. Based on these distances, we have the following table which illustrates the shortest path routing from $i\circ X$ to $\theta(i\circ X)$. In addition to the characteristic pairs, we need one more characteristic to distinguish the vertices. For ease of description, we assume that the elements in $X$ that are greater than $D$ occur in increasing order $D+1, D+2, \ldots$, because a permutation on the set $\{ D+1, D+2, \ldots, \Delta+1\}$ can map the vertex $X$ into this form without affecting the destination vertex or the other elements in $X$ that do not exceed $D$.
\begin{table} \caption{ Case 2: $x_1 = k+1$.} \label{table:case2} \[\begin{array}{|llr|c|c|c|p{75pt}|} \hline i\hspace{20pt}\mbox{} & P_i&& \alpha(v) & \beta(v) &\beta(v, 1)& Notes\\ \hline \multicolumn{2}{|l}{\mbox{(a)} i = 1:}&&&&&\\& \begin{array}{l} 1\rightarrow\\ k \rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& 1& k+1 & k+1 & Ignore if $k+1 = 2$.\\ \hline \multicolumn{2}{|l}{\mbox{(b)} 1<i<k:}&&&&&\\& \begin{array}{l} i \rightarrow\\ k \rightarrow\\ \vdots\\ i+1\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& i& k & \beta(x,1)&\\& \begin{array}{l} 1\rightarrow\\ i\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& 1& i+1& i+1& $i+1\not=k+1$\\ \hline \multicolumn{2}{|l}{\mbox{(c)} i = k:}&&&&&\\& \begin{array}{l} k\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\end{array}\right\}& \not=k+1 & k+1&\beta(x, 1)&Ignore if $k+1 = 2$.\\ \hline \multicolumn{2}{|l}{\mbox{(d)} k+1< i < \beta(x,1):} &&&&&\\& \begin{array}{l} i\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\end{array}\right\}& k+1 & i+1 & \beta(x,1) & \\ & \begin{array}{l} 1\rightarrow\\ i-1\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& 1&i&i&\\ \hline \multicolumn{2}{|l}{\mbox{(e)} \beta(x,1)\leq i\leq D+1:}&&&&&\\& \begin{array}{l} i\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\end{array}\right\}& &i+1&i+1&$i+1\not=\beta(x, 1)$\\ \hline \multicolumn{2}{|l}{\mbox{(f)} D+1<i\leq \Delta+1, i\not=k+1:}&&&&&\\& \begin{array}{l} i\rightarrow\\ D\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& i&D+1&D+1&\\ \hline \end{array}\] \end{table} For $x_1\not= 1$ or $k+1$, we have \[ \theta(i\circ X) = \left\{ \begin{array}{ll} Y_{\beta(X,1)-1}, \quad & \mbox{if} \quad i=1\, ,\\ \equsep Y_i, \quad & \mbox{if} \quad 1< i 
< k\; , \\ \equsep Y_{\alpha(X)} \quad & \mbox{if} \quad i=k\, ,\\ \equsep Y_{i-1} \quad & \mbox{if} \quad k+1\leq i<\beta(X,1)\, ,\\ \equsep Y_i \quad & \mbox{if} \quad \beta(X,1)\leq i \leq \Delta+1\,. \end{array} \right. \] In this case, the distances are given below: \[ d(i\circ X, \theta(i \circ X)) = \left\{ \begin{array}{ll} \beta(X,1)-1, \quad & \mbox{if} \quad i=1\, ,\\ \equsep k-1, \quad & \mbox{if} \quad 1< i < k\; , i\not= x_1, \\ \equsep k-2, \quad & \mbox{if} \quad i=k\, ,\\ \equsep i, \quad & \mbox{if} \quad k+1\leq i<\beta(X,1)\, ,\\ \equsep i-2, \quad & \mbox{if} \quad \beta(X,1)\leq i \leq D\,,\\ \equsep D-1 \quad & \mbox{if} \quad D < i \leq \Delta+1, i\not=k+1\,. \end{array} \right. \] Note that the assumption on the elements in $X$ that are greater than $D$ implies that $x_1<k$. The detailed information on the shortest path routing from $i\circ X$ to $\theta(i\circ X)$ is given in the table below, which also includes the statistic $\beta(V,1)$ to distinguish the vertices. 
\begin{table} \caption{ Case 3.} \label{table:case3} \[\begin{array}{|llr|c|c|c|p{75pt}|} \hline \hspace{20pt}\mbox{} & P_i&& \alpha(v) & \beta(v) & \beta(v,1)& Notes\\ \hline \multicolumn{2}{|l}{\mbox{(a)} i=1:}&&&&&\\& \begin{array}{l} 1\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\end{array}\right\}& 1 & k+1 & k+1 & \\& \begin{array}{l} \beta(x,1)-1\rightarrow\\ \vdots\\ 2 \rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\end{array}\right\}& 1 & \beta(x,1) & \beta(x,1)&\\ \hline \multicolumn{2}{|l}{\mbox{(b) } 1 < i < k, i\not=x_1:}&&&&&\\& \begin{array}{l} i\rightarrow\\ k\rightarrow\\ \vdots\\ i+1\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& i & k+1 &\beta(x,1)&$i\not=x_1$\\& \begin{array}{l} 1\rightarrow\\ i\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& 1& i+1&i+1&$i+1\not=k+1$, $i+1\not = x_1+1$\\ \hline \multicolumn{2}{|l}{\mbox{(c) } i = k:}&&&&&\\& \begin{array}{l} k \rightarrow\\ \vdots\\ x_1 + 1\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\end{array}\right\}& x_1& k+1 &\beta(x,1)& $d(v,Y)<k$\\& \begin{array}{l} 1\rightarrow\\ x_1\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& 1& x_1+1& x_1+1 &$x_1<k$\\ \hline \multicolumn{2}{|l}{\mbox{(d) } k+1 \leq i <\beta(x,1):}&&&&&\\& \begin{array}{l} i\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\end{array}\right\}& x_1& i+1&\beta(x,1)&$i+1\not=k+1$\\& \begin{array}{l} 1\rightarrow\\ i-1\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& 1&i&i&\\ \hline \multicolumn{2}{|l}{\mbox{(e) } \beta(x,1)\leq i\leq D+1:}&&&&&\\& \begin{array}{l} i\rightarrow\\ \vdots\\ 2\rightarrow\\ \end{array}&\left.\begin{array}{r}\\\\\\\end{array}\right\}& x_1&i+1&i+1&\\ \hline \multicolumn{2}{|l}{\mbox{(f) }D+1<i\leq \Delta+1, i\not=k+1:}&&&&&\\& \begin{array}{l} i\rightarrow\\ D\rightarrow\\ \vdots\\ 2\rightarrow\\ 
\end{array}&\left.\begin{array}{r}\\\\\\\\\end{array}\right\}& i&D+1&D+1&\\ \hline \end{array}\] \end{table} After completing the above case-by-case analysis, we arrive at the conclusion that the wide diameter of $\Gamma_\Delta(D)$ does not exceed $D+2$. To determine when the wide diameter is exactly $D+2$, let $X=(\Delta+1, \Delta, \ldots, \Delta-D+2)$ and $Y=12\cdots D$. The distance from $i\circ Y$ is $D$ except for $i=D$. If $1< i < D$, the unique shortest path from $i\circ X$ to $Y$ goes through $23\cdots D \, (\Delta+1)$. Provided that $D-2\geq 2$, this implies that, of any $\Delta$ disjoint paths from $X$ to $Y$, at least one must have length at least $D+2$. \qed We have left open the problem of determining the wide diameter of $\Gamma_\Delta(D)$ exactly, which would be of great interest. \noindent {\small {\large \bf Acknowledgments.} This work was performed under the auspices of the U. S. Department of Energy. We thank D. F. Hsu for helpful discussions. } \noindent {\Large\bf References} \newcounter{paper} \setcounter{paper}{0} \begin{list} {[\arabic{paper}] }{\usecounter{paper} \setlength{\leftmargin=1.5cm}{\labelsep=2mm}} \item \label{AK89} S. B. Akers and B. Krishnamurthy, A group-theoretic model for symmetric interconnection networks, IEEE Trans. on Computers, 38 (1989), 555-566. \item \label{Ber92} J.-C. Bermond, ed., Interconnection Networks, Special Issue in Discrete Applied Mathematics, Vol. 37/38 (1992). \item \label{CFK93} W. Y. C. Chen, V. Faber and E. Knill, A new routing scheme for cycle prefix graphs, LAUR-93-3576, Los Alamos National Laboratory, 1993. \item \label{CG} J. H. Conway and M. J. T. Guy, Message graphs, Ann. of Discrete Math., 13 (1982), 61-64. \item \label{FMC93} V. Faber, J. Moore and W. Y. C. Chen, Cycle prefix digraphs for symmetric interconnection networks, Networks, 23 (1993), 641-649. \item \label{CF92} F. Comellas and M. A. Fiol, Vertex symmetric digraphs with small diameter, preprint, 1992. \item \label{Hsu93} D. F.
Hsu, ed., Interconnection Networks and Algorithms, Special Issue in Networks, to appear. \item \label{Hsu**} D. F. Hsu, On container width and length in graphs, groups, and networks, to appear. \item \label{JR92} M. Jiang and F. Ruskey, Determining the Hamilton-Connectedness of certain vertex-transitive graphs, Technical Report, DCS-202-IR, Department of Computer Science, University of Victoria, Victoria, B.C., Canada, 1992. \item \label{Sab69} G. Sabidussi, Vertex-transitive graphs, Monatsh. Math. 68 (1964), 426-438. \end{list} \end{document}
\begin{document} \title{Module decompositions using pairwise comaximal ideals} \author{Gary F. Birkenmeier} \address{Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504 USA} \email{[email protected]} \author{C. Edward Ryan} \address{Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504 USA} \email{[email protected]} \subjclass[2010]{16D70} \keywords{ring; module; decomposition; annihilator; comaximal ideal} \begin{abstract} In this paper we show that for a given set of pairwise comaximal ideals $\{X_i\}_{i\in I}$ in a ring $R$ with unity and any right $R$-module $M$ with generating set $Y$ and $C(X_i)=\sum\limits_{k\in\mathbb{N}}\underline{\ell}_M(X_i^{k})$, $M=\oplus_{i\in I}C(X_i)$ if and only if for every $y\in Y$ there exist a nonempty finite subset $J\subseteq I$ and positive integers $k_j$ such that $\bigcap\limits_{j\in J}X_j^{k_j}\subseteq\underline{r}_R(yR)$. We investigate this decomposition for a general class of modules. Our main theorem can be applied to a large class of rings including semilocal rings $R$ with the Jacobson radical of $R$ equal to the prime radical of $R$, left (or right) perfect rings, piecewise prime rings, and rings with ACC on ideals and satisfying the right AR property on ideals. This decomposition generalizes the decomposition of a torsion abelian group into a direct sum of its $p$-components. We also develop a torsion theory associated with sets of pairwise comaximal ideals. \end{abstract} \maketitle \section{Introduction} Throughout this paper $R$ denotes a ring, not necessarily commutative, with identity, and $M$ denotes a unital right $R$-module. Recall that if $M$ is a torsion abelian group, then $M=\oplus C(P_i)$, where $C(P_i)$ is the $p$-component of $M$. Also, if $R$ is semisimple Artinian, then $M=\oplus\underline{\ell}_M(P_i)$, where $P_i$ is a maximal ideal of $R$ and $\underline{\ell}_M(P_i)=\{m\in M\mid mP_i=0\}$ is the homogeneous component of $P_i$ in $M$.
It is natural to investigate a general decomposition theory that includes the aforementioned decomposition results as special cases. In Section~\ref{sec:main} we provide such a result: a decomposition theorem which expresses a module as a direct sum of fully invariant submodules. Our main result, Theorem~\ref{thm:infinite}, decomposes a right $R$-module $M$ into a direct sum of annihilator submodules using a set of pairwise comaximal ideals of $R$. Section~\ref{sec:nilary} extends the primary decomposition of a finitely generated torsion module over a Dedekind domain to certain kinds of noncommutative rings (cf. Theorem~\ref{thm:nilarydecomposition}). We develop a preradical $\gamma$ and its radical closure $\bar{\gamma}$ based on Theorem~\ref{thm:infinite} in Section~\ref{sec:torsion}. Our main goal in this section is to obtain a decomposition of a module into a direct sum of a torsion module and a torsion-free module (cf. Proposition~\ref{prop:splitting}) under $\gamma$. We write $K\subseteq M$ and $K\leq M$ to denote subsets and submodules of $M$, respectively. We say that a submodule $N\leq M$ is \textit{essential} in $M$, denoted $N\mathop{\leq^{\rm ess}\!}M$, if $N\cap K\neq0$ for every nonzero submodule $K\leq M$. A submodule $N\leq M$ is \textit{fully invariant} in $M$, denoted $N\trianglelefteq M$, if and only if $f(N)\subseteq N$ for every $f\in\rm{End}_R(M)$, where $\rm{End}_R(M)=\{h:M\longrightarrow M\mid h\text{ is an }R\text{-homomorphism}\}$. If $X\subseteq R$, then the \textit{left annihilator} of $X$ in $M$ is $\underline{\ell}_M(X)=\{m\in M\mid mx=0\text{ for all }x\in X\}$. If $N\subseteq M$, then the \textit{right annihilator} of $N$ in $R$ is $\underline{r}_R(N)=\{r\in R\mid nr=0\text{ for all }n\in N\}$. The \textit{singular submodule} of $M$, denoted by $Z(M)$, is $Z(M)=\{m\in M\mid\underline{r}_R(m)\mathop{\leq^{\rm ess}\!}R\}$.
As in \cite[p.123]{Row-1991}, we say a nonempty set of ideals $\{X_i\}_{i\in I}$ of $R$ is \textit{pairwise comaximal} if and only if $X_i+X_j=R$ for all $i,j\in I$ with $i\neq j$. \section{Decomposition Theorem}\label{sec:main} In this section we develop a decomposition theorem, Theorem~\ref{thm:infinite}, which provides a basic decomposition of a module into a direct sum of fully invariant annihilator submodules. This result also generalizes the well-known result that a torsion abelian group is a direct sum of its $p$-components. We make repeated and implicit use of the following lemma, which lists some properties of a finite collection of pairwise comaximal ideals of a ring. These properties are well known. \begin{lemma}\label{lem:props} \begin{enumerate} \item If $\{X_i\}_{i=1}^n$, $n>1$, is a set of pairwise comaximal ideals of $R$, then $\sum\limits_{i=1}^n\bigl(\bigcap\limits_{j\neq i}X_j\bigr)=R=\sum\limits_{i=1}^n\bigl(\prod\limits_{j\neq i}X_j\bigr)$. \item If $\{X_i\}_{i=1}^n$ is a set of pairwise comaximal ideals of $R$, then $X_i+\bigcap\limits_{j\neq i}X_j=R$. In particular, if each $X_i\neq R$, then $\bigcap\limits_{j\neq i}X_j\neq0$ for all $i$. \item If $\{Y_i\}_{i=1}^n$ is a set of ideals of $R$ such that $\sum\limits_{i=1}^nY_i=R$, and we define $X_i=\sum\limits_{j\neq i}Y_j$, then $\bigcap\limits_{i=1}^nX_i=\sum\limits_{\sigma\in S_n}X_{\sigma(1)}X_{\sigma(2)}\dotsb X_{\sigma(n)}$, where $S_n$ is the set of permutations on $n$ letters. \item If $X,Y\trianglelefteq R$ are such that $X+Y=R$, then $X^m+Y^n=R$ for each pair of positive integers $m,n$. \end{enumerate} \end{lemma} \begin{definition} Let $X$ be an ideal of $R$, and $M$ be an $R$-module. We define the \textit{component} of $X$ in $M$ to be $C(X)=\sum\limits_{k\in\mathbb{N}}\underline{\ell}_M(X^k)$. \end{definition} \begin{theorem}\label{thm:infinite} Let $\mathcal{X}=\{X_i\}_{i\in I}$ be a nonempty collection of pairwise comaximal ideals of $R$, and let $Y$ be a generating set for $M$.
Then: \begin{enumerate} \item for each $y\in Y$, there exists a nonempty finite subset $J\subseteq I$ such that $\bigcap\limits_{j\in J}X_j\subseteq\underline{r}_R(yR)$ if and only if $M=\oplus_{i\in I}\underline{\ell}_M(X_i)$; \item for each $y\in Y$, there exist a nonempty finite subset $J\subseteq I$ and positive integers $k_j$, $j\in J$, such that $\bigcap\limits_{j\in J}X_j^{k_j}\subseteq\underline{r}_R(yR)$ if and only if $M=\oplus_{i\in I}C(X_i)$. \end{enumerate} \end{theorem} \begin{proof} (1) The proof is similar to that of (2). (2) ($\Rightarrow$) Let $0\neq y\in Y$. Then $\underline{r}_R(yR)\supseteq\bigcap\limits_{i=1}^nX_i^{k_i}$ for some $X_i\in\mathcal{X}$ and $k_i\geq1$. We can write $y$ as $y=yx_{2,1}x_{3,1}\dotsb x_{n,1}+yx_{1,2}x_{3,2}\dotsb x_{n,2}+\dotsb+yx_{1,n}x_{2,n}\dotsb x_{n-1,n}$, where $x_{i,j}\in X_i^{k_i}$ and $x_{2,1}x_{3,1}\dotsb x_{n,1}+\dotsb+x_{1,n}x_{2,n}\dotsb x_{n-1,n}=1$, since $\{X_i^{k_i}\}_{i=1}^n$ is pairwise comaximal. Note that the $i^{th}$ term of the sum is an element of $\underline{\ell}_M(X_i^{k_i})$. Thus $y\in\sum\limits_{X\in\mathcal{X}}C(X)$, so $M=\sum\limits_{i\in I}C(X_i)$. To show that $\{C(X)\mid X\in\mathcal{X}\}$ is an independent set, take $X\in\mathcal{X}$ and a finite subset $\{X_i\}_{i=1}^n$ of $\mathcal{X}$ not containing $X$. Notice that $\underline{r}_R\bigl(\underline{\ell}_M(X^k)\cap\sum\limits_{i=1}^n\underline{\ell}_M(X_i^{k_i})\bigr)\supseteq\underline{r}_R\bigl(\underline{\ell}_M(X^k)\bigr)+\bigcap\limits_{i=1}^n\underline{r}_R\underline{\ell}_M(X_i^{k_i})\supseteq X^k+\bigcap\limits_{i=1}^nX_i^{k_i}=R$ for any $k,k_i\geq1$. Thus $\underline{\ell}_M(X^k)\cap\sum\limits_{i=1}^n\underline{\ell}_M(X_i^{k_i})=0$ for $k,k_i\geq1$, so $C(X)\cap\sum\limits_{X'\neq X}C(X')=0$. Therefore $\{C(X)\mid X\in\mathcal{X}\}$ is an independent set. ($\Leftarrow$) Let $y\in Y$.
Then $y=y_1+y_2+\cdots+y_n$ for some $y_i\in C(X_i)$ with $X_i\in\mathcal{X}$, $1\leq i\leq n$. For each $i$, there is a minimum power $k_i$ for which $y_i\in\underline{\ell}_M(X_i^{k_i})$. Thus we have $\underline{r}_R(yR)\supseteq\underline{r}_R\bigl(\oplus_{i=1}^n\underline{\ell}_M(X_i^{k_i})\bigr)=\bigcap\limits_{i=1}^n\underline{r}_R\underline{\ell}_M(X_i^{k_i})\supseteq\bigcap\limits_{i=1}^nX_i^{k_i}$. \end{proof} Note that in the above result, each $\underline{\ell}_M(X_i)\trianglelefteq M$, and each $C(X_i)\trianglelefteq M$. Hence if $M=\oplus_{i\in I}\underline{\ell}_M(X_i)$ or $M=\oplus_{i\in I}C(X_i)$, then $\rm{End}(M_R)=\prod\rm{End}\bigl(\underline{\ell}_M(X_i)_R\bigr)$ or $\rm{End}(M_R)=\prod\rm{End}\bigl(C(X_i)_R\bigr)$, respectively. For another immediate example illustrating Theorem~\ref{thm:infinite}, let $\{X_i\}_{i\in\mathbb{N}}$ be a set of pairwise comaximal ideals of $R$ and $M=\oplus_{i\in\mathbb{N}}R/X_i^{i}$. For any $m\in M$, there exists a finite nonempty subset $J\subseteq\mathbb{N}$ such that $\bigcap\limits_{j\in J}X_j^j\subseteq\underline{r}_R(mR)$. By Theorem~\ref{thm:infinite}, $M=\oplus_{i\in\mathbb{N}}C(X_i)$. Note that $C(X_i)=R/X_i^i$ for each $i\in\mathbb{N}$. \begin{corollary}\label{thm:main} Let $\{X_i\}_{i=1}^n$, $n\geq2$, be a set of pairwise comaximal ideals of $R$, and let $A=\bigcap\limits_{i=1}^nX_i$. Then: \begin{enumerate} \item $A\subseteq\underline{r}_R(M)$ if and only if $M=\underline{\ell}_M(X_1)\oplus\dotsb\oplus\underline{\ell}_M(X_n)$; \item $\bar{M}=M/MA=\oplus_{i=1}^n\underline{\ell}_{\bar{M}}(X_i)$. \end{enumerate} \end{corollary} Note that Corollary~\ref{thm:main} can also be proven using the Chinese Remainder Theorem (cf., e.g., \cite[p.131]{Hun-1974}). Recall that a prime Goldie ring $R$ in which each nonzero ideal of $R$ is invertible is called an \textit{Asano order} (or an \textit{Asano prime ring} \cite[pp.146--150]{MR-2001}). For example, a Dedekind domain is an Asano order.
The next result is an application of Theorem~\ref{thm:infinite} to rings with sets of commuting pairwise comaximal ideals. Note that in an Asano order, multiplication of maximal ideals is commutative, and every nonzero ideal is a unique product of maximal ideals. \begin{corollary} Suppose that $\{X_i\}_{i\in I}$ is a set of commuting pairwise comaximal ideals of $R$ (i.e., $X_iX_j=X_jX_i$ for all $i,j\in I$). Let $M$ be a nonzero $R$-module and $Y$ a generating set of $M$. If, for every $y\in Y$, there exist a nonempty finite subset $J\subseteq I$ and positive integers $k_j$ such that $\prod\limits_{j\in J}X_j^{k_j}\subseteq\underline{r}_R(yR)$, then $M=\oplus_{i\in I}C(X_i)$. \end{corollary} \begin{proof} The corollary follows from Lemma~\ref{lem:props} and Theorem~\ref{thm:infinite}. \end{proof} \begin{corollary}\label{cor:Asano} Let $R$ be an Asano prime ring and $\{X_i\}_{i\in I}$ be the set of maximal ideals of $R$, and let $Y$ be a generating set of $M$. If $\underline{r}_R(yR)\neq0$ for all $y\in Y$, then $M=\oplus_{i\in I}C(X_i)$. \end{corollary} Note that Corollary~\ref{cor:Asano} generalizes the well-known theorem that every torsion abelian group is the direct sum of its $p$-components. That is, when $R=\mathbb{Z}$ and $M$ is a torsion abelian group, Theorem~\ref{thm:infinite} yields the decomposition of $M$ into its $p$-components. A natural question to ask is: ``Under what conditions can we guarantee that each annihilator direct summand of the decomposition afforded by Theorem~\ref{thm:infinite} or Corollary~\ref{thm:main} is nonzero?'' For example, take $R=\mathbb{Z}$, and let $M=\mathbb{Z}_2\oplus\mathbb{Z}_3$. Consider $\{2\mathbb{Z},3\mathbb{Z},5\mathbb{Z}\}$. Then the conditions of Corollary~\ref{thm:main}(1) are satisfied, so $M=\underline{\ell}_M(2\mathbb{Z})\oplus\underline{\ell}_M(3\mathbb{Z})\oplus\underline{\ell}_M(5\mathbb{Z})$. But $\underline{\ell}_M(5\mathbb{Z})=0$.
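To see the $p$-component decomposition of Corollary~\ref{cor:Asano} concretely, the following Python sketch (ours, not taken from the paper; the function and variable names are hypothetical) computes the components $C(2\mathbb{Z})$ and $C(3\mathbb{Z})$ of the torsion $\mathbb{Z}$-module $M=\mathbb{Z}/12\mathbb{Z}$ and checks that they give a direct sum:

```python
# Sketch (our own illustration): verify the p-component decomposition
# of the torsion Z-module M = Z/12Z, i.e., M = C(2Z) (+) C(3Z).

def component(n, p):
    """Elements of Z/nZ annihilated by some power of p, i.e., C(pZ)."""
    return {m for m in range(n)
            if any((p**k * m) % n == 0 for k in range(1, n + 1))}

n = 12
C2 = component(n, 2)   # {0, 3, 6, 9}, a copy of Z/4Z
C3 = component(n, 3)   # {0, 4, 8},    a copy of Z/3Z

# The components intersect trivially and their sum is all of M,
# as Theorem thm:infinite predicts for R = Z.
assert C2 & C3 == {0}
assert {(a + b) % n for a in C2 for b in C3} == set(range(n))
```

Replacing $12$ by any $n$ and $\{2,3\}$ by the prime divisors of $n$ reproduces the classical decomposition of a finite abelian group into its $p$-components.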
The next theorem gives a set of conditions which ensures the non-triviality of the direct summands. \begin{theorem}\label{thm:minimal} Let $\{X_i\}_{i\in I}$ be a family of pairwise comaximal ideals of $R$. Then $M=\oplus_{i\in I}\underline{\ell}_M(X_i)$ and each $\underline{\ell}_M(X_i)\neq0$ if and only if \begin{enumerate} \item for every $m\in M$, there exists a nonempty finite subset $J\subseteq I$ such that $\bigcap\limits_{i\in J}X_i\subseteq\underline{r}_R(mR)$; and \item for every $X_j$, $j\in I$, there exists $m\in M$ such that for some nonempty finite subset $J\subseteq I$ with $j\in J$, $\bigcap\limits_{i\in J}X_i\subseteq\underline{r}_R(mR)$ and $\bigcap\limits_{i\in J-\{j\}}X_i\nsubseteq\underline{r}_R(mR)$. \end{enumerate} \end{theorem} \begin{proof} ($\Rightarrow$) Suppose that $M=\oplus_{i\in I}\underline{\ell}_M(X_i)$ and each $\underline{\ell}_M(X_i)\neq0$. By Theorem~\ref{thm:infinite}, for every $m\in M$, there exists a nonempty finite subset $J\subseteq I$ such that $\bigcap\limits_{i\in J}X_i\subseteq\underline{r}_R(mR)$. Let $j\in I$. Since $\underline{\ell}_M(X_j)\neq0$, we can find a nonzero $m\in\underline{\ell}_M(X_j)$. Then $\underline{r}_R(mR)\supseteq\underline{r}_R\underline{\ell}_M(X_j)\supseteq X_j$. Hence condition (2) holds with $J=\{j\}$: we have $\bigcap\limits_{i\in J}X_i=X_j\subseteq\underline{r}_R(mR)$, while $\bigcap\limits_{i\in J-\{j\}}X_i=R\nsubseteq\underline{r}_R(mR)$ because $m\neq0$. ($\Leftarrow$) By Theorem~\ref{thm:infinite}, $M=\oplus_{i\in I}\underline{\ell}_M(X_i)$. If $\underline{\ell}_M(X_j)=0$ for some $j\in I$, then for every $m\in M$ there exists a subset $J\subseteq I$ with $j\notin J$ such that $\underline{r}_R(mR)\supseteq\bigcap\limits_{i\in J}X_i$, which contradicts condition (2). \end{proof} The following examples illustrate some of the results of this section. \begin{examples}\label{section3examples} (1) Let $X,Y,Z\trianglelefteq R$ such that $X+Y+Z=R$.
\\Let $T=\begin{bmatrix} R& 0& 0\\0& R& Z\\0& 0& R\end{bmatrix}$, and let $M=\begin{bmatrix} \frac{R}{X+Z}& \frac{R}{Y+Z}& \frac{R}{X+Y}\\ \frac{R\vphantom{3^{3^3}}}{X+Y\vphantom{3_{3_3}}}& \frac{R}{Y+Z}& \frac{R}{X+Y}\\ \frac{R}{Y+Z}& \frac{R}{X+Z}& \frac{R}{Y+Z}\end{bmatrix}$, where $\frac{R}{X+Y}$ denotes the factor ring of $R$ by $X+Y$. Then $$\underline{r}_T(M)=\begin{bmatrix} (X+Y)\cap(X+Z)\cap(Y+Z)& 0& 0\\0& (X+Z)\cap(Y+Z)& Z\\0& 0& (X+Y)\cap(Y+Z)\end{bmatrix}.$$ Let $P_1=\begin{bmatrix} X+Y& 0& 0\\0& X+Z& Z\\0& 0& X+Y\end{bmatrix}$, $P_2=\begin{bmatrix} X+Z& 0& 0\\0& Y+Z& 0\\0& 0& Y+Z\end{bmatrix}$, and \\$P_3=\begin{bmatrix} Y+Z& 0& 0\\0& R& Z\\0& 0& R\end{bmatrix}$. Then $P_i+P_j=T$ for all $i\neq j$, and $P_1\cap P_2\cap P_3\subseteq\underline{r}_T(M)$; so by Corollary~\ref{thm:main}, \begin{align*} M & =\underline{\ell}_M(P_1)\oplus\underline{\ell}_M(P_2)\oplus\underline{\ell}_M(P_3) \\ & =\begin{bmatrix} 0& 0& \frac{R}{X+Y}\\ \frac{R\vphantom{3^{3^3}}}{X+Y\vphantom{3_{3_3}}}& 0& \frac{R}{X+Y}\\0& \frac{R}{X+Z}& 0 \end{bmatrix} \oplus\begin{bmatrix} \frac{R}{X+Z}& \frac{R}{Y+Z}& 0\\0& \frac{R\vphantom{3^{3^3}}}{Y+Z\vphantom{3_{3_3}}}& 0\\0& 0& \frac{R}{Y+Z}\end{bmatrix} \oplus\begin{bmatrix} 0& 0& 0\\0& 0& 0\\ \frac{R}{Y+Z}& 0& 0\end{bmatrix}. \end{align*} (2) Let $R$ be a Dedekind domain and $\{P_1,P_2\}$ a family of distinct nonzero prime ideals of $R$. Suppose that $M=R/P_1^2\oplus R/P_1P_2$. Then $X_1=P_1^2$ and $X_2=P_2$ are two ideals of $R$ such that \begin{inparaenum} \item $P_1^2P_2=\underline{r}_R(M)=X_1\cap X_2$ and \item $X_1+X_2=R$. \end{inparaenum} Therefore, $M=\underline{\ell}_M(X_1)\oplus\underline{\ell}_M(X_2)$, where $\underline{\ell}_M(X_1)=R/P_1^2\oplus P_2/P_1P_2$ and $\underline{\ell}_M(X_2)=0\oplus P_1/P_1P_2$. 
\end{examples} \begin{lemma}\label{lem:semiqBaer} Consider the following conditions on $R$: \begin{enumerate} \item $R/P(R)$ is quasi-Baer, \item Every prime ideal of $R$ contains a unique minimal prime ideal, and \item Every pair of distinct minimal prime ideals is comaximal. \end{enumerate} Then (2)$\Longleftrightarrow$(3). Moreover, if $R$ has only finitely many minimal prime ideals, then (1)$\Longleftrightarrow$(2). \end{lemma} \begin{proof} Suppose every prime ideal contains a unique minimal prime ideal. Assume that there exist two minimal prime ideals $P_1, P_2$ of $R$ such that $P_1+P_2\subsetneq R$. Then there exists a maximal ideal $N$ such that $P_1+P_2\subseteq N$. Since $N$ is maximal, $N$ is a prime ideal and $P_1,P_2\subseteq N$, which is a contradiction. Suppose that every pair of distinct minimal prime ideals is comaximal. Assume $P$ is a prime ideal such that $P_1, P_2$ are distinct minimal prime ideals of $R$ contained in $P$. Then $R=P_1+P_2\subseteq P$, which contradicts $P$ being a prime ideal. Thus $P$ contains a unique minimal prime ideal. The equivalence of (1) and (2) in the case that $R$ has only finitely many minimal prime ideals is established in \cite[Proposition 4]{BKP-2011}. \end{proof} Observe that Lemma~\ref{lem:semiqBaer} allows us to decompose a large class of modules over such rings. Note that semilocal rings $R$ with the Jacobson radical of $R$ equal to the prime radical of $R$, left (or right) perfect rings, and piecewise prime rings (\cite{BKP-2003} or \cite{BPR-2013}) are examples of rings $R$ such that $R/P(R)$ is quasi-Baer and $R$ has only finitely many minimal prime ideals. \begin{theorem}\label{thm:semiqBaer} Let $\mathcal{P}=\{P_i\}_{i\in I}$ be the set of minimal prime ideals of $R$, and let $M$ and $K$ be right $R$-modules. \begin{enumerate} \item Suppose that every prime ideal of $R$ contains a unique minimal prime ideal.
Then $\underline{r}_R(mR)$ contains a nonempty finite intersection of elements of $\mathcal{P}$ for each $m\in M$ if and only if $M=\oplus_{i\in I}\underline{\ell}_M(P_i)$. \item Suppose that $R/P(R)$ is a quasi-Baer ring and $I=\{1,2,\dots,n\}$. Then: \begin{enumerate} \item $\bigcap\limits_{i=1}^nP_i^{k_i}\subseteq\underline{r}_R(M)$ for some $k_i\geq1$ if and only if $M=\underline{\ell}_M(P_1^{k_1})\oplus\dotsb\oplus\underline{\ell}_M(P_n^{k_n})$; \item if $M=K/KP(R)$, then $M=\underline{\ell}_M(P_1)\oplus\dotsb\oplus\underline{\ell}_M(P_n)$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} The proof of (1) follows from Lemma~\ref{lem:semiqBaer} and Theorem~\ref{thm:infinite}, and (2) follows from Corollary~\ref{thm:main}. \end{proof} Note that for any ring $R$ with only finitely many minimal prime ideals, $R/P(R)$ has a right ring of quotients which is quasi-Baer (e.g., its quasi-Baer hull) with only finitely many minimal prime ideals \cite[Theorem 3.13]{BPR-2009}. Moreover, if $R$ is quasi-Baer and $P$ is a prime ideal, then either $P=eR$ for some $e=e^2\in R$ or $P_R\mathop{\leq^{\rm ess}\!}R_R$ (cf. \cite[Proposition 2.2]{BKP-2003}). The following corollary is an application of Theorem~\ref{thm:semiqBaer}. \begin{corollary}\label{cor:semiqBaer} Let $T$ be an $n\times n$ generalized upper triangular matrix ring with $R_\alpha$ the ring in the $\alpha$-th diagonal entry, and let $P_\alpha$ be the set of matrices in $T$ with $0$ in the $\alpha$-th diagonal entry. Take $A$ to be the intersection of the $P_\alpha$, and let $K$ be a right $T$-module with $M=K/KA$. Then $M=\underline{\ell}_M(P_1)\oplus\dotsb\oplus\underline{\ell}_M(P_n)$. Moreover, if each $R_\alpha$ is a prime ring, then $T/P(T)$ is quasi-Baer and each $P_\alpha$ is a minimal prime ideal of $T$.
\end{corollary} Note that if $R$ is a quasi-Baer ring of $T$-dimension $n$, then $R$ is ring isomorphic to an $n\times n$ generalized triangular matrix ring $T$ with prime rings on the diagonal and with minimal prime ideals $P_\alpha$, as in Corollary~\ref{cor:semiqBaer} (cf. \cite[Theorem 4.4]{BHKP-2000} or \cite[p.162]{BPR-2013}). Also, for each $\alpha$, either $P_\alpha=e_\alpha T$ for some left semicentral idempotent $e_\alpha\in T$ or $P_\alpha$ is essential in $T$. Thus either $\underline{\ell}_M(P_\alpha)=M(1-e_\alpha)$ or $\underline{\ell}_M(P_\alpha)$ is a submodule of $Z(M)$. In particular, note that any piecewise prime ring satisfies these conditions. Also observe that any right hereditary right noetherian ring is piecewise prime. \section{Strongly p-Nilary Decompositions}\label{sec:nilary} The main result of this section, Theorem~\ref{thm:nilarydecomposition}, extends the characterization of finitely generated torsion modules over Dedekind domains to a large class of noncommutative rings. From \cite{BKP-2013}, we use the following generalization of primary ideals from commutative ring theory and related concepts. Let $I$ be an ideal of $R$. The \textit{pseudo-radical of} $I$, denoted $\sqrt{I}$, is defined as $\sqrt{I}=\sum\{V\trianglelefteq R\mid V^n\subseteq I$ for some $n\geq1\}$. The ideal $I$ is a \textit{strongly p-nilary ideal} of $R$ if and only if $\sqrt{I}$ is a prime ideal of $R$. Note that $\{0\}$ is a strongly p-nilary ideal of $R$ if and only if the sum of all nilpotent ideals is a prime ideal of $R$. Also, a set of strongly p-nilary ideals $Q_1,Q_2,\dots,Q_n$ of $R$ such that $I=Q_1\cap Q_2\cap\cdots\cap Q_n$ forms a \textit{minimal strongly p-nilary decomposition} of $I$ if and only if \begin{inparaenum} \item for each $i$, $1\leq i\leq n$, $I\neq\bigcap\limits_{j\neq i}Q_j$, and \item for any subset $S\subseteq\{1,\dots,n\}$ with $\lvert S\rvert\geq2$, the ideal $\bigcap\limits_{s\in S}Q_s$ is not strongly p-nilary.
\end{inparaenum} The following lemma can be found in \cite[Theorem 2.15]{BKP-2013}. \begin{lemma}\label{lem:ACCnilary} Suppose that $R$ has ACC on ideals. Then the following conditions are equivalent: \begin{enumerate} \item For each pair of ideals $A,B\trianglelefteq R$, there exists a positive integer $k$ such that $A^k\cap B^k\subseteq AB$. \item Each $I\trianglelefteq R$ has a minimal strongly p-nilary decomposition. \end{enumerate} \end{lemma} \begin{theorem}\label{thm:nilarydecomposition} Suppose that $R$ has ACC on ideals, every pair of incomparable prime ideals of $R$ is comaximal, and for any ideals $A,B\trianglelefteq R$, $A^k\cap B^k\subseteq AB$ for some positive integer $k$. Let $M$ be a nonzero right $R$-module. \begin{enumerate} \item If $\underline{r}_R(M)\neq0$, then there exists a set $\{P_i\}_{i=1}^n$ of prime ideals and positive integers $k_i$, $1\leq i\leq n$, such that $\bigcap\limits_{i=1}^nP_i^{k_i}\subseteq\underline{r}_R(M)$, each $\underline{\ell}_M(P_i^{k_i})\neq0$, and $M=\oplus_{i=1}^n\underline{\ell}_M(P_i^{k_i})$. \item Suppose that $\{Q_i\}_{i\in I}$ is the set of minimal prime ideals of $R$, with $\lvert I\rvert\geq2$, and $Y$ a set of generators of $M$. If $\underline{r}_R(yR)\neq0$ for each $y\in Y$, then $M=\oplus_{i\in I}C(Q_i)$. \end{enumerate} \end{theorem} \begin{proof} (1) The proof is similar to that of (2) and follows from Corollary~\ref{thm:main}. (2) By Lemma~\ref{lem:ACCnilary}, there exists a minimal strongly p-nilary decomposition $\underline{r}_R(yR)=\bigcap\limits_{i=1}^nX_i$, where $\{X_i\}_{i=1}^n$ is a set of strongly p-nilary ideals of $R$. Since $R$ has ACC on ideals, for each $i$ there exist a nonzero prime ideal $P_i=\sqrt{X_i}$ and a positive integer $k_i$ such that $P_i^{k_i}\subseteq X_i\subseteq P_i$. Note that $P_i$ contains a minimal prime ideal, say $Q_i$, and $Q_i^{k_i}\subseteq P_i^{k_i}$.
Then $\bigcap\limits_{i=1}^nQ_i^{k_i}\subseteq\bigcap\limits_{i=1}^nX_i=\underline{r}_R(yR)$, so Theorem~\ref{thm:infinite} yields $M=\oplus_{i\in I}C(Q_i)$. Because $\underline{r}_R(yR)=\bigcap\limits_{i=1}^nX_i$ is a minimal strongly p-nilary decomposition, it follows from Theorem~\ref{thm:minimal} that $C(Q_i)\neq0$ for each $i$. \end{proof} Note that the class of rings which satisfy the condition that every incomparable pair of prime ideals is comaximal is closed under taking direct products, matrix rings, and generalized triangular matrix rings. Also, this condition implies that distinct minimal prime ideals are pairwise comaximal, so that Theorem~\ref{thm:semiqBaer}(1) may be applicable. \begin{examples} Finite direct sums of matrix rings over the following rings are examples of rings satisfying the hypotheses of Theorem~\ref{thm:nilarydecomposition}: \begin{enumerate} \item any ring $R$ such that $R$ has ACC on ideals, every nonzero prime ideal of $R$ is maximal, and $R$ has the right AR-property for ideals (cf. \cite[pp. 190--193]{GW-2000}). In particular, any Dedekind domain; \item right duo rings with ACC on ideals such that every nonzero prime ideal is maximal (cf. \cite[Theorem 2.15 and Proposition 2.16]{BKP-2013}). For example, any generalized ZPI ring \cite[pp.469--477]{Gil-1972} in which minimal primes are pairwise comaximal (e.g., $R=\mathbb{Z}\oplus\mathbb{Z}_{p^n}$); \item local rings with nilpotent Jacobson radicals and ACC on ideals (cf. [7, Corollary 3.18]). In particular, any semisimple Artinian ring is a finite direct sum of such rings. \end{enumerate} \end{examples} \section{Torsion Theory Induced by Pairwise Comaximal Ideals}\label{sec:torsion} In this section, we develop a preradical $\gamma$ and its radical closure $\bar{\gamma}$ based on Theorem~\ref{thm:infinite}. Our main goal is to obtain a decomposition of a given module into a direct sum of a torsion module and a torsion-free module using the torsion theory that is developed.
The torsion modules are defined to have the decomposition of Theorem~\ref{thm:infinite} or at least essentially contain such a decomposition. For this section, we need basic terminology and facts of torsion theory. The definitions and results can be found in \cite[Chapter VI]{Sten-1975} or \cite[Chapters I, II]{BKN-1982}. We denote the category of all right $R$-modules by $\mathcal{M}_R$. Given a nonempty set $\mathcal{X}$ of pairwise comaximal ideals of a ring, we define a preradical $\gamma_{\mathcal{X}}$ corresponding to $\mathcal{X}$, and list some basic properties. \begin{definition} Let $\mathcal{X}=\{X_i\}_{i\in I}$ be a fixed set of pairwise comaximal ideals of $R$. Define $\gamma_{\mathcal{X}}(M)=\bigl\{m\in M\mid \bigcap\limits_{i\in J}X_i^{k_i}\subseteq\underline{r}_R(mR)\text{ for some nonempty finite subset }J\subseteq I\text{ and positive integers }k_i\bigr\}$. We omit the subscript $\mathcal{X}$ when the context is clear. \end{definition} Note that $\gamma(M)$ is a submodule of $M$, and if $\mathcal{X}=\{X_i\}_{i=1}^n$ is finite, then $\gamma(M)=\sum\{N\leq M\mid\bigcap\limits_{i=1}^nX_i^{k_i}\subseteq\underline{r}_R(N)\text{ for some }k_i\geq1\}$. \begin{proposition}\label{prop:radical} Let $\{X_i\}_{i\in I}$ be a set of pairwise comaximal ideals of $R$. Then: \begin{enumerate} \item $\gamma$ is a left exact preradical. \item $\gamma(M)=\oplus_{i\in I}C(X_i)$. In particular, $M$ is pretorsion-free if and only if $\underline{\ell}_M(X_i^{k_i})=0$ for each $k_i\geq1$. 
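As a concrete illustration (ours; this particular setup is an assumption, not an example from the paper), the preradical $\gamma_{\mathcal{X}}$ can be computed directly for $R=\mathbb{Z}$, $\mathcal{X}=\{2\mathbb{Z},5\mathbb{Z}\}$ and $M=\mathbb{Z}/20\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}$; the Python sketch below finds $\gamma(M)=\mathbb{Z}/20\mathbb{Z}\oplus 0$, so that $M$ splits as $\gamma(M)\oplus F$ with $\gamma$-torsion-free part $F=0\oplus\mathbb{Z}/3\mathbb{Z}$:

```python
# Sketch (our own illustration) of gamma for R = Z, X = {2Z, 5Z},
# and M = Z/20Z (+) Z/3Z, with module elements stored as pairs (a, b).

def annihilated_by(m, r):
    """True if the module element m = (a, b) satisfies m * r = 0."""
    a, b = m
    return (a * r) % 20 == 0 and (b * r) % 3 == 0

def in_gamma(m):
    """m lies in gamma(M) iff some product 2^j * 5^k annihilates m."""
    return any(annihilated_by(m, 2**j * 5**k)
               for j in range(6) for k in range(3))

M = [(a, b) for a in range(20) for b in range(3)]
gamma_M = [m for m in M if in_gamma(m)]

# gamma(M) is exactly the summand Z/20Z (+) 0: the Z/3Z coordinate
# can never be killed by a product of powers of 2 and 5.
assert gamma_M == [(a, 0) for a in range(20)]
```

The exponent bounds in `in_gamma` are an assumption sufficient for this module ($20=2^2\cdot5$); in general one would search all finite products of powers of the chosen ideals.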
\item If either of the following conditions hold: \begin{enumerate} \item $\bigl(\bigcap\limits_{i\in J}X_i^{k_i}\bigr)^k$ is finitely generated for every nonempty finite subset $J\subseteq I$, and $k,k_i\geq1$, or \item $I$ is finite and for every $k_i\geq1$, $\bigl(\bigcap\limits_{i\in I}X_i^{k_i}\bigr)^k=\bigl(\bigcap\limits_{i\in I}X_i^{k_i}\bigr)^{k+1}$ for some $k\geq1$, \end{enumerate} then $\rho(M)=\bigl\{m\in M\,\Bigm|\,\bigl(\bigcap\limits_{i\in J}X_i^{k_i}\bigr)^n\subseteq\underline{r}_R(mR)\text{ for some }k_i,n\geq1$ and \\ nonempty finite subset $J\subseteq I\bigr\}$, is the smallest radical larger than $\gamma$ (i.e., $\rho=\bar{\gamma}$). \end{enumerate} \end{proposition} \begin{proof} (1) To show that $\gamma$ is a preradical let $f:M\longrightarrow N$ be an $R$-module homomorphism, and $m\in\gamma(M)$. Then $\underline{r}_R(mR)\supseteq\bigcap\limits_{i\in J}X_i^{k_i}$ for some nonempty finite set $J\subseteq I$ and $k_i\geq1$, so $\bigl(f(m)R\bigr)\bigl(\bigcap\limits_{i\in J}X_i^{k_i}\bigr)=f\bigl(m(\bigcap\limits_{i\in J}X_i^{k_i})\bigr)=f(0)=0$. Thus $f(m)\in\gamma(N)$. Therefore $\gamma$ is a preradical. The proof that $\gamma$ is left exact is straightforward. (2) Since $\underline{\ell}_{\gamma(M)}(X_i)\subseteq\underline{\ell}_M(X_i)$ for all $i$, by Theorem~\ref{thm:infinite} we have $\gamma(M)\subseteq\oplus_{i\in I}C(X_i)$. Now let $m\in\oplus_{i\in I}C(X_i)$. Then there are a finite set $J\subseteq I$ and positive integers $k_j\geq1$ with $mR\subseteq\oplus_{j\in J}\underline{\ell}_M(X_j^{k_j})$. Hence $\bigcap\limits_{j\in J}X_j^{k_j}\subseteq\underline{r}_R\bigl(\oplus_{j\in J}\underline{\ell}_M(X_j^{k_j})\bigr)\subseteq\underline{r}_R(mR)$. So $m\in\gamma(M)$. Therefore $\gamma(M)=\oplus_{i\in I}C(X_i)$. (3a) Observe that $\rho$ is a left exact preradical. We show that $\gamma(M/\rho(M))=0$. Suppose that $\bigl(kR+\rho(M)\bigr)/\rho(M)\leq\gamma\bigl(M/\rho(M)\bigr)$ for some $k\in M$. 
Then $\bigcap\limits_{i\in J}X_i^{k_i}\subseteq\underline{r}_R\bigl(k+\rho(M)\bigr)$ for some nonempty finite subset $J\subseteq I$, $k_i\geq1$. Since $\bigcap\limits_{i\in J}X_i^{k_i}$ is finitely generated, so is $k\bigcap\limits_{i\in J}X_i^{k_i}$. Note that $k\bigcap\limits_{i\in J}X_i^{k_i}\subseteq\rho(M)$ and $k\bigcap\limits_{i\in J}X_i^{k_i}$ finitely generated imply that there exist a nonempty finite subset $J'\subseteq I$ and $k'_j,m\geq1$ such that $k(\bigcap\limits_{i\in J}X_i^{k_i})(\bigcap\limits_{j\in J'}X_j^{k'_j})^m=0$. Then $k(\bigcap\limits_{i\in J\cup J'}X_i^{k_i})^m=0$, so $k\in\rho(M)$. Thus $kR\subseteq\rho(M)$. Therefore $\gamma\bigl(M/\rho(M)\bigr)=0$, so $\gamma(M)\subseteq\rho(M)$. The method of proof that $\rho$ is a radical (i.e., that $\rho\bigl(M/\rho(M)\bigr)=0$) is similar to the argument above. Suppose that $\tau$ is a radical containing $\gamma$. Consider $\rho\bigl(M/\tau(M)\bigr)$. Suppose that $nR/\tau(M)\leq M/\tau(M)$ such that there exist a nonempty finite subset $J\subseteq I$ and $k_i,m\geq1$ for which $nR(\bigcap\limits_{i\in J}X_i^{k_i})^m\subseteq \tau(M)$ and $nR(\bigcap\limits_{i\in J}X_i^{k_i})^{m-1}\nsubseteq\tau(M)$. Then $0\neq nR(\bigcap\limits_{i\in J}X_i^{k_i})^{m-1}/\tau(M)\subseteq\gamma\bigl(M/\tau(M)\bigr)=0$, which is a contradiction. Thus $\rho(M/\tau(M))=0$, which implies that $\rho(M)\subseteq \tau(M)$. Therefore $\rho$ is the smallest radical containing $\gamma$, so $\rho=\bar{\gamma}$. (3b) The method of proof is similar to that of (3a). \end{proof} For example, if $R$ is a ring such that every maximal right ideal contains a maximal ideal (for example, if $R$ is a right quasi-duo ring \cite{Yu-1995}), and if $\mathcal{X}$ is the set of maximal ideals of $R$, then $\rm{Soc}(M)\subseteq\gamma(M)$. Thus, in this case, semisimple modules are $\gamma$-torsion modules. Next we find conditions on $M$ and $R$ so that $M$ splits or essentially splits in $\gamma$ or $\bar{\gamma}$. 
Finding such conditions is of central importance in torsion theory. We begin with the following technical lemma. \begin{lemma}\label{lem:equalannihilators} Let $M$ be an $R$-module such that $Z(M)=0$ and let $S$ and $K$ be submodules of $M$ such that $S\mathop{\leq^{\rm ess}\!}K$. If $\underline{\ell}_R\underline{r}_R(S)\subseteq\underline{r}_R\underline{r}_R(S)$, then $\underline{r}_R(S)=\underline{r}_R(K)$. \end{lemma} \begin{proof} Since annihilation is order-reversing, $\underline{r}_R(K)\subseteq\underline{r}_R(S)$. Let $a\in\underline{r}_R(S)$, $k\in K$, and $L=\{x\in R\mid kax=0\}$. We show that $L_R\mathop{\leq^{\rm ess}\!}R_R$. Suppose that $t\in R-L$. Then $kat\neq0$. Since $S\mathop{\leq^{\rm ess}\!}K$, there exists $v\in R$ such that $0\neq katv\in S$. If $tv\underline{r}_R(S)=0$, then $tv\in\underline{\ell}_R\underline{r}_R(S)$. Hence $\underline{r}_R(S)tv=0$, so $katv=0$, a contradiction. So $tv\underline{r}_R(S)\neq0$. Then there exists $b\in\underline{r}_R(S)$ such that $tvb\neq0$; since $katv\in S$ and $b\in\underline{r}_R(S)$, we also have $katvb=0$. Thus $0\neq tvb\in L$, so $L\mathop{\leq^{\rm ess}\!}R$. Hence $kaL=0$ implies that $ka\in Z(M)=0$. Therefore $\underline{r}_R(S)=\underline{r}_R(K)$. \end{proof} From \cite{BMR-2002}, an \textit{FI-extending module} is an $R$-module $M$ such that every fully invariant submodule is essential in a direct summand of $M$. Observe that for an FI-extending module $\gamma(M)\mathop{\leq^{\rm ess}\!}\bar{\gamma}(M)\mathop{\leq^{\rm ess}\!}D$, where $D$ is a direct summand of $M$. Note that FI-extending modules are quite numerous, since every finitely generated projective module over a semiprime ring has an FI-extending hull which, in general, is properly contained in its injective hull \cite[Theorem 6]{BPR-2009B}. \begin{proposition}\label{prop:splitting} Let $M$ be an FI-extending $R$-module such that $Z(M)=0$. If $R$ is commutative or semiprime, then $M=\gamma(M)\oplus F$ for some submodule $F\leq M$ such that $\gamma(F)=\bar{\gamma}(F)=0$.
\end{proposition} \begin{proof} Since $M$ is FI-extending, $\gamma(M)$ is essential in a direct summand, say $N$. Recall that in a semiprime ring $\underline{\ell}_R(I)=\underline{r}_R(I)$ for all $I\trianglelefteq R$. Hence $\underline{\ell}_R\underline{r}_R\bigl(\gamma(M)\bigr)=\underline{r}_R\underline{r}_R\bigl(\gamma(M)\bigr)$. Similarly, if $R$ is commutative, $\underline{\ell}_R\underline{r}_R\bigl(\gamma(M)\bigr)=\underline{r}_R\underline{r}_R\bigl(\gamma(M)\bigr)$. From Lemma~\ref{lem:equalannihilators}, $\underline{r}_R\bigl(\gamma(M)\bigr)=\underline{r}_R(N)$. By the definition of $\gamma(M)$, $N=\gamma(M)$, so $\gamma(M)$ is a direct summand of $M$. Since $\gamma(F)=F\cap\gamma(M)$, we have that $\gamma(F)=\bar{\gamma}(F)=0$ for any direct complement of $\gamma(M)$. \end{proof} Observe that in Proposition~\ref{prop:splitting}, $\gamma(M)=\bar{\gamma}(M)$, since $\gamma(M)$ is a closed submodule (i.e., $\gamma(M)$ has no nontrivial essential extension) of $M$. To illustrate Proposition~\ref{prop:splitting}, our next result provides a large class of rings for which every projective module is nonsingular and FI-extending. Recall that an $AW^*$-algebra is a $C^*$-algebra which is a Baer ring \cite[Preface]{Kap-1968}. For example, any von Neumann algebra is an $AW^*$-algebra. \begin{proposition}\label{prop:projective} If $R$ is a right nonsingular semiprime quasi-Baer ring (e.g., right nonsingular prime rings, commutative Baer rings, and AW*-algebras), then every projective module is nonsingular and FI-extending. \end{proposition} \begin{proof} From \cite[Theorem 4.7]{BMR-2002}, $R$ is right and left FI-extending. By \cite[Proposition 1.5 and Corollary 3.4]{BPR-2002}, every projective module is FI-extending. Clearly, every projective module is also nonsingular. \end{proof} Our next result is an application of Propositions~\ref{prop:splitting} and \ref{prop:projective} to operator theory. 
\begin{corollary}\label{cor:operatortheory} If $R$ is an AW*-algebra, then every finitely generated Hilbert C*-module $M$ is nonsingular and FI-extending (thus $\gamma(M)$ is a direct summand of $M$). \end{corollary} \begin{proof} From \cite[Theorem 8.1.27 and p.352]{BL-2004}, every finitely generated Hilbert C*-module is projective. The remainder of the proof follows from Propositions~\ref{prop:splitting} and \ref{prop:projective}. \end{proof} Another condition which guarantees that an FI-extending module $M$ splits in $\bar{\gamma}$ is that $\gamma$ be stable. So we look for conditions that ensure stability of $\gamma$. The following proposition gives conditions on $R$ that are sufficient for $\gamma$ to be stable. \begin{proposition}\label{prop:stable} Let $\{X_i\}_{i\in I}$ be a set of pairwise comaximal ideals of $R$. If, for each $L_R\mathop{\leq^{\rm ess}\!}R_R$, finite set $J\subseteq I$, and positive integers $k_j\geq1$, $L\bigl(\bigcap\limits_{j\in J}X_j^{k_j}\bigr)=\bigcap\limits_{j\in J}X_j^{k_j}$, then $\gamma$ is stable and $\gamma=\bar{\gamma}$. \end{proposition} \begin{proof} Assume that $\gamma(M)=M$ and $w\in E(M)$. Then there exist a finite subset $J\subseteq I$ and positive integers $k_j\geq1$ so that $\bigcap\limits_{j\in J}X_j^{k_j}\subseteq\underline{r}_R(wR)$. Since $M\mathop{\leq^{\rm ess}\!}E(M)$, $L=\{r\in R\mid wr\in M\}\mathop{\leq^{\rm ess}\!}R_R$. So $w\bigl(\bigcap\limits_{j\in J}X_j^{k_j}\bigr)=wL\bigl(\bigcap\limits_{j\in J}X_j^{k_j}\bigr)=0$. Thus $w\in\gamma\bigl(E(M)\bigr)$. Therefore $\gamma$ is stable. From \cite[pp.142, 152--153]{Sten-1975}, $\gamma=\bar{\gamma}$. \end{proof} Note that if $\mathrm{Soc}(R_R)\bigl(\bigcap\limits_{i=1}^nX_i\bigr)=\bigcap\limits_{i=1}^nX_i$ or if $\bigcap\limits_{i=1}^nX_i=\bigl(\bigcap\limits_{i=1}^nX_i\bigr)^2\subseteq\mathrm{Soc}(R_R)$ (e.g., if $\bigcap\limits_{i=1}^nX_i=0$), then $L\bigl(\bigcap\limits_{i=1}^nX_i\bigr)=\bigcap\limits_{i=1}^nX_i$ for all $L_R\mathop{\leq^{\rm ess}\!}R_R$.
\begin{proposition}\label{prop:stablesplitting} If $M$ is an FI-extending $R$-module and the torsion theory associated with $\bar{\gamma}$ is stable, then $M=\bar{\gamma}(M)\oplus F$. \end{proposition} \begin{proof} This result is a direct consequence of \cite[p.153, Proposition 7.2]{Sten-1975}. \end{proof} We conclude Section~\ref{sec:torsion} with some examples and an open question regarding the torsion theory developed. \begin{examples} \begin{enumerate} \item Let $R=T_n(A)$ be the ring of $n$-by-$n$ upper triangular matrices with entries in $A$, where $A$ is a ring with unity, and $\mathcal{X}=\left\{X_i\right\}_{i=1}^n$, where $X_i=\left\{(a_{ij})\in R\mid a_{ii}=0\right\}$ for each $i$, $1\leq i\leq n$. Note that $\mathcal{X}$ is a set of pairwise comaximal ideals in $R$. Also, $\bigl(\bigcap\limits_{i=1}^nX_i\bigr)^k=0$ for any $k\geq n$. Hence $\bar{\gamma}(M)=M$, so $\gamma(M)\mathop{\leq^{\rm ess}\!}M$ for any right $R$-module $M$. \item Let $R=\mathbb{Z}$, and $M=\mathbb{Z}_{p_1^\infty}\oplus\mathbb{Z}_{p_2^\infty}\oplus\cdots\oplus\mathbb{Z}_{p_n^\infty}\oplus\mathbb{Z}^{\omega}$, where each $p_i$ is a distinct prime and $\omega$ is any ordinal. Let $X_i=p_i\mathbb{Z}$ for each $i$, $1\leq i\leq n$. Then $\gamma(M)=\mathbb{Z}_{p_1^\infty}\oplus\mathbb{Z}_{p_2^\infty}\oplus\cdots\oplus\mathbb{Z}_{p_n^\infty}$. Note that $M=\bar{\gamma}(M)\oplus\mathbb{Z}^{\omega}$ and $\mathbb{Z}^{\omega}$ is $\bar{\gamma}$-torsion free. \end{enumerate} \end{examples} Open Question: Characterize the radical closure of $\gamma$. \end{document}
\begin{document} \newtheorem{pf}{Proof} \newtheorem{pot}{Proof of Theorem} \title{A minimization approach to conservation laws with random initial conditions and non-smooth, non-strictly convex flux} \author{Carey Caginalp\affil{}\corrauth} \shortauthors{the Author(s)} \address{University of Pittsburgh, Mathematics Department, 301 Thackeray Hall, Pittsburgh PA\ 15260} \corraddr{Email: carey\[email protected].} \begin{abstract} We obtain solutions to conservation laws under any random initial conditions that are described by Gaussian stochastic processes (in some cases discretized). We analyze the generalization of Burgers' equation for a smooth flux function $\mathcal{H}\left( p\right) =\left\vert p\right\vert ^{j}$ for $j\geq2$ under random initial data. We then consider a piecewise linear, non-smooth and non-convex flux function paired with general discretized Gaussian stochastic process initial data. By partitioning the real line and restricting to a finite number of points, we obtain an exact expression for the solution of this problem. From this we can also find exact and approximate formulae for the density of shocks in the solution profile at a given time $t$ and spatial coordinate $x$. We discuss the simplification of these results in specific cases, including Brownian motion and Brownian bridge, for which the inverse covariance matrix and corresponding eigenvalue spectrum have some special properties. We calculate the transition probabilities between various cases and examine the variance of the solution $w\left(x,t\right)$ in both $x$ and $t$. We also describe how results may be obtained for a non-discretized version of a Gaussian stochastic process by taking the continuum limit as the partition becomes finer.
\end{abstract} \keywords{conservation laws; random initial conditions; Lax-Oleinik; shocks; variational problems in differential equations; minimization; stochastic processes \newline \textbf{Mathematics Subject Classification:} 35F20, 35F31} \maketitle \section{Introduction} The conservation law in the form \begin{align} w_{t}+\left( \mathcal{H}\left( w\right) \right) _{x} & =0\text{ in }\mathbb{R\times}\text{ }\left( 0,\infty\right) \nonumber\\ w\left( x,0\right) & =g^{\prime}\left( x\right) \text{ on } \mathbb{R}\times\left\{ t=0\right\} \label{cl} \end{align} and its related Hamilton-Jacobi problem \begin{align} u_{t}+\mathcal{H}\left( u_{x}\right) & =0\text{ in }\mathbb{R\times}\text{ }\left( 0,\infty\right) \nonumber\\ u\left( x,0\right) & =g\left( x\right) \text{ on }\mathbb{R} \times\left\{ t=0\right\} \label{hj} \end{align} have a wide array of applications in fluid mechanics, shock waves, and turbulence. An interesting feature is that even for a smooth flux function $\mathcal{H}$ and smooth initial data, discontinuous solutions to the conservation law (\ref{cl}) may occur \cite{BG, CD, ERS, EV, FM, GR, HA, HP, KR, KA, LA1, LA2, M, MP, MS, SL1, SH}. In particular, this is observed in the prototypical case of Burgers' equation, for which $\mathcal{H}\left( w\right) =w^{2}/2$. The work of Dafermos \cite{D1} was instrumental in constructing an approximation for this case and, more generally, for locally Lipschitz $\mathcal{H}$, by using a series of piecewise linear flux functions to construct the solution for deterministic initial conditions. Using our methods for random initial conditions, one can also construct the continuum limit for the deterministic problem as a special case. In this paper, we consider the introduction of randomness into the initial conditions for the conservation law above.
The classical approach for solving the conservation law (\ref{cl}) entails using variational methods to derive the Lax-Oleinik formula \cite{EV, HP, LA1, LA2} \begin{equation} w\left( x,t\right) :=\frac{\partial}{\partial x}\left[ \min_{y\in \mathbb{R}}\left\{ t\mathcal{L}\left( \frac{x-y}{t}\right) +g\left( y\right) \right\} \right] , \label{clsol} \end{equation} requiring assumptions on smoothness, superlinearity, and strict convexity of the flux function. In a previous paper \cite{CA}, we showed that (\ref{clsol}) will still hold when these conditions are relaxed, with the only other assumption being that of Lipschitz continuity on the initial condition $g^{\prime }\left( x\right) $. These methods thus enable us to derive a number of results for the case of non-deterministic initial data. When dealing with randomness, an important notion is that of a stochastic process $X\left( t\right) $. The process is a set of random variables linked together by a law of probability with the set of possible paths given by some state space $\Omega$. In general this is far too broad a class to attack analytically, so attempts to derive exact solutions are often restricted to one specific stochastic process, for example Brownian motion \cite{FM}. In this work, we derive results for an entire family of stochastic processes. In Section 2 we provide an introduction and references to some of the key properties of such a family known as \textit{Gaussian stochastic processes}. The motivation for this class of process is not only that it has sufficient structural properties to obtain closed-form results but also has the breadth to analyze a number of distinct cases. We also utilize this approach in calculating transition probabilities between different states and examining the variance in space as time increases. Consider the basic problem $w_{t} + |w|_{x} = 0$ subject to continuous initial conditions $w(x,0)=g'(x)$. 
Application of our results gives $g^{\prime}(x-t)$ as the solution if the minimum of $g(y)$ is at the left endpoint of the interval, $g^{\prime}(x+t)$ if the minimum is on the right endpoint, and $0$ for an interior minimum. Thus, for $g'$ given by Brownian motion, one minimizes integrated Brownian motion over a finite interval and obtains the shock statistics. This example can provide a pathway to considering the behavior of solutions to more complicated flux functions under random initial data, for example when one has not one but a number of affine segments linked together. This sort of construction is denoted as a \emph{polygonal flux function} and is thus particularly suited to analysis under the Hopf-Lax minimization approach \cite{HP, LA1, LA2} discussed below. Together with the results proven by Dafermos \cite{D1}, this methodology can be the basis for a general class of flux and random initial data. Notably, both Brownian motion and Brownian bridge initial conditions for $g^{\prime}\left( x\right) $ will satisfy the requisite regularity properties, as well as the additional property of being Gaussian stochastic processes. Furthermore, when integrated to obtain integrated Brownian motion and integrated Brownian bridge, they remain Gaussian stochastic processes, leading to useful properties of both $g$ and $g^{\prime}$ that aid in the analysis and computation of minimizers as specified in (\ref{clsol}). As is shown in \cite{CA}, we have the key relationship for the solution $w$ given by \begin{equation} w\left( x,t\right) =g^{\prime}\left( y^{\ast}\left( x,t\right) \right) \label{introkr} \end{equation} (when $g'\left(y^{\ast}\left(x,t\right)\right)$ exists) expressed as a function of the greatest minimizer, \begin{equation} y^{\ast}=\arg^{+}\min_{y\in\mathbb{R}}\left\{ t\mathcal{L}\left( \frac {x-y}{t}\right) +g\left( y\right) \right\} .
\label{ystardef} \end{equation} For the case of Burgers' equation, one uses the relation $g^{\prime}\left( y^{\ast}\left( x,t\right) \right) =\frac{x-y^{\ast}\left( x,t\right) } {t}$ to obtain a special case of (\ref{introkr}) given by \begin{equation} w\left( x,t\right) =\frac{x-y^{\ast}\left( x,t\right) }{t} \label{introkrb} \end{equation} When one introduces Brownian motion initial data, (\ref{introkr}) leads to the following result for the variance of the solution expressed in terms of the variance of the minimizer in (\ref{ystardef}), for a flux function given by $\mathcal{H}\left( p\right) =\frac{1}{j}\left\vert p\right\vert ^{j}$ \begin{equation} Var\left( w\left( x,t\right) \right) =t^{-\frac{2}{j-1}}Var\left( sgn\left( x-y^{\ast}\left( x,t\right) \right) \left\vert x-y^{\ast}\left( x,t\right) \right\vert ^{\frac{1}{j-1}}\right) \end{equation} which for Burgers' equation reduces to \begin{equation} Var\left( w\left( x,t\right) \right) =\frac{1}{t^{2}}Var\left( x-y^{\ast }\left( x,t\right) \right) . \label{varburg} \end{equation} Later on, we specialize to the case $x=0$. In Section 3 we discuss extensions of (\ref{varburg}) to more general flux functions. For example, with the flux $\mathcal{H}\left( p\right) =p^{2n}$, we have a generalization of (\ref{varburg}), providing an understanding of the propagation of variance over the sample space $\Omega$. We also consider a related case of $\mathcal{H}\left( p\right) =\left\vert p\right\vert $, in light of its Legendre transform \begin{equation} \mathcal{L}\left( q\right) =\left\{ \begin{array} [c]{c} 0\\ +\infty \end{array} \right. \begin{array} [c]{c} \left\vert q\right\vert \leq1\\ \left\vert q\right\vert >1 \end{array} \end{equation} for which the solution has a very simple probabilistic interpretation. In particular, one can show formally that the frequency of jumps in $w\left( x,t\right) $ decreases in time. 
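The minimization in (\ref{ystardef}) is straightforward to evaluate numerically. The following sketch (our own illustration, using a deterministic datum $g(y)=y^{2}/2$ rather than a random one) computes the greatest minimizer on a grid for Burgers' equation, where the exact entropy solution of $w_{t}+ww_{x}=0$ with $w(x,0)=x$ is $w(x,t)=x/(1+t)$, attained at $y^{\ast}=x/(1+t)$; the identity (\ref{introkrb}) can then be checked directly.

```python
import numpy as np

# Sketch (ours, not the paper's code): Lax-Oleinik formula for Burgers'
# equation, H(p) = p^2/2, hence L(q) = q^2/2, with the deterministic datum
# g(y) = y^2/2 (so g'(y) = y).  The exact solution is w(x, t) = x/(1 + t).

def y_star(x, t, g, L, grid):
    """Greatest minimizer of y -> t*L((x - y)/t) + g(y) over a grid."""
    vals = t * L((x - grid) / t) + g(grid)
    # np.argmin returns the first minimizer; scanning the reversed array
    # yields the greatest one, matching the arg^+ min convention
    return grid[len(grid) - 1 - int(np.argmin(vals[::-1]))]

g = lambda y: y ** 2 / 2
L = lambda q: q ** 2 / 2
grid = np.linspace(-10.0, 10.0, 200001)   # grid spacing 1e-4

x, t = 2.0, 3.0
ys = y_star(x, t, g, L, grid)
w = (x - ys) / t                          # formula (introkrb)
print(ys, w)                              # both close to x/(1 + t) = 0.5
```

The same routine applies verbatim to a sampled random path for $g$, which is how the shock statistics discussed above can be explored numerically.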
The minimum defined on the right-hand side of (\ref{ystardef}) is of course unique under the classical assumptions that $\mathcal{H}$ is smooth, superlinear, and uniformly convex \cite{EV}. However, the analog of the result (\ref{introkr}) remarkably continues to hold even when these assumptions are relaxed, i.e. when the flux function has a lesser degree of regularity, the convexity of $\mathcal{H}$ is not uniform or even strict, and when it is no longer superlinear. Even if the initial condition $g^{\prime}$ is discontinuous, one can prove a modified form of (\ref{introkr}) by accounting for the vertices of $g$. A particularly interesting and yet broad range of cases results from studying a Gaussian stochastic process, $X\left( r\right) $, a random process completely determined by its second order statistics. In other words, such a process at arbitrary times $\left\{ r_{i}\right\} _{i=1}^{n}$ is completely determined by the means $\mathbb{E}\left\{ X\left( r_{i}\right) \right\} $ and the corresponding covariance matrix $\Sigma$ defined by $\Sigma_{ij}=Cov\left( X\left( r_{i}\right) ,X\left( r_{j}\right) \right) $. We denote the inverse covariance matrix by $A:=\Sigma^{-1}$. In Section 4, we consider a Gaussian stochastic process $X\left( r\right) $ and partition the real line into sets $\zeta_{i}$ with measure $\nu\left( \zeta_{i}\right) >\varepsilon>0$, and take a finite number of points, among which we compute the minimum. We pair this initial condition with a piecewise linear flux function. The Legendre transform $L$ of this flux function only takes a finite number of slopes, each on an interval of finite measure. By taking the intersection of our partition with this interval, we obtain a finite covering for it, and thus a finite number of points at which to analyze the quantity minimized in (\ref{ystardef}).
We want to calculate the expected solution $\mathbb{E}\left\{ w\left( x,t\right) \right\} $, which we have shown in \cite{CA} to depend only on the derivative of $L$ or $g$ at the point of the minimizer. By considering first the local minima of this quantity for each of the finite number of segments of $L$, and then taking the minimum over those segments, we obtain a remarkable closed-form expression for the solution. Among the results we can obtain from this is the expectation of the solution at a point $\left(x,t\right)$, \begin{align} \mathbb{E}\left\{ w\left( x,t\right) \right\} =-\sum_{i=1}^{N}p_{i}c_{N+1-i} + \sum_{i=N+1}^{2N+1}p_{i}\mathbb{E}\left\{g'(y^{*}(x,t))\right\}\label{Thm5} \end{align} where the probabilities $p_{i}$ are given by \begin{align} p_{i} & = \sum_{j=2}^{n_{i}} \left( 2\pi\right) ^{-\frac{n}{2}}\left\vert A\right\vert ^{\frac{1}{2}}\int_{-\infty}^{\infty}dx_{i,j} {\displaystyle\prod\limits_{\left(m,l\right)\neq \left(i,j\right)}}\int_{x_{i,j}}^{\infty} dx_{m,l}\, e^{-\frac{1}{2}\left( \bar{x}-\tilde{\mu}\right) ^{T}A\left( \bar{x}-\tilde{\mu}\right)} \quad\text{for }1\le i\le N,\nonumber\\ p_{i} & = \mathbb{P}\left\{R_{i-N}\right\} \quad\text{for }N+1\leq i\le2N+1. \label{introSolution} \end{align} These probabilities are expressed in terms of integrals of nested error functions, and the sets $\left\{R_{i}\right\}$ refer to the event that a minimum occurs at a vertex of $L$ (discussed further in (\ref{Thm4})). The quantity $A_{i}$ in the first term of (\ref{introSolution}) denotes the inverse covariance matrix corresponding to the $i$th affine segment of $L$, and $A$ the same quantity over the whole domain. The vector $\mu$ is given by the discrete points of $\mu=\mathbb{E}\left\{ g\left( y\right) +tL\left( \frac{x-y}{t}\right) \right\} $, where the second term is deterministic. The quantity $\tilde{\mu}$ refers to the analogous quantity in the $i$th segment.
The integers $n_{i}$ correspond to technical details of partitioning the intervals for which $L$ has different slopes, as described in Section 5. An important aspect of this evolution is that for Burgers' equation with random initial conditions, one has an increasing variance in the probabilistic sense for the solution $w\left( x,t\right) $ as a function of $t$ for a given $x$, but formal calculations suggest that it has decreasing variance in the sense of $TV\left( w\left( x,t\right) \right) $ as a function of time. In this manner, it is almost paradoxical that conservation laws which lead to shocks from smooth solutions can also act in such a way as to smooth out random initial conditions. A key issue in kinetic theory is the distribution and formation of discontinuities in the solution, known as \textit{shocks}. In particular, for a Gaussian stochastic process paired with a piecewise linear flux function, we can expand on the results of Section 5, including (\ref{introSolution}). In doing so, we derive expressions (both exact and approximate) for the distribution of these shocks based on the minimizer switching from a point on one segment of $L$ to another. From this result, we also compute the total density of shocks. The expressions obtained in Sections 5 and 6 hold for \textit{any} Gaussian stochastic process. Some cases of particular interest include that of Brownian motion or Brownian bridge initial conditions, as well as Ornstein-Uhlenbeck (\cite{PK}, p. 439). In Section 6, we apply our results to special cases and observe the simplifications for these specific processes. Since many of our results are exact and in terms of error functions, this also provides a pathway to further numerical computation. By taking the limit of a finer partition of the real line, we also explain how one may be able to pass to a limit and obtain results for the continuum problem in addition to the discretized one. 
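The probabilities entering (\ref{introSolution}) have a simple Monte Carlo counterpart. The sketch below (our own, with an arbitrary discretization and a fixed seed; it is not the paper's construction) estimates, for the flux $\mathcal{H}(p)=\left\vert p\right\vert$ and Brownian-motion data $g^{\prime}$, the probabilities that the minimum of the integrated path over $[x-t,x+t]$ falls at the left endpoint, in the interior, or at the right endpoint, the discrete analogue of the events discussed above.

```python
import numpy as np

# Monte Carlo sketch (ours; the discretization parameters are illustrative):
# for H(p) = |p|, the solution w(x, t) is determined by where the minimum of
# the integrated Brownian motion g falls on [x - t, x + t].

rng = np.random.default_rng(0)
n, t, trials = 200, 1.0, 4000
dr = 2.0 * t / n                                   # grid step on [x - t, x + t]
counts = {"left": 0, "interior": 0, "right": 0}
for _ in range(trials):
    incr = rng.normal(0.0, np.sqrt(dr), size=n)    # Brownian increments
    bm = np.concatenate(([0.0], np.cumsum(incr)))  # discretized g' on the grid
    g = np.concatenate(([0.0], np.cumsum(bm[:-1] * dr)))  # integrated path g
    i = int(np.argmin(g))
    key = "left" if i == 0 else ("right" if i == n else "interior")
    counts[key] += 1
probs = {k: v / trials for k, v in counts.items()}
print(probs)                                       # three estimates summing to 1
```

Refining the grid and increasing the number of trials is one way to approach the continuum limit described above.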
\section{Background on Gaussian stochastic processes} \subsection{Stochastic processes} In considering randomness for the initial conditions along the entire real line $\mathbb{R}$, it is important to have a well-defined construction. A \textit{stochastic process} $X\left( s\right) $ is a collection of random variables on a probability space $\left( \Omega,\mathcal{F},\mathbb{P} \right) $. The space $\Omega$ can be thought of as the set of all possible outcomes for the random variable, $\mathcal{F}$ as a set of reasonable sets of outcomes (i.e. \textit{measurable} sets), and $\mathbb{P}$ is a measure with total mass $1$ assigning the probabilities with which an outcome or set of outcomes in $\mathcal{F}$ can occur. The expectation $\mathbb{E}\left\{ X\right\} $ of a random variable $X$ is defined as $\mathbb{E}\left\{ X\right\} =\int_{\Omega}X\left( \omega\right) d\mathbb{P}\left( \omega\right) $. We also require that the measure $\mathbb{P}$ be countably additive, i.e. $\mathbb{P}\left( \cup_{i\in\mathbb{N}}F_{i}\right) =\sum_{i=1}^{\infty }\mathbb{P}\left( F_{i}\right) $ for $\left\{ F_{i}\right\} $ disjoint. \begin{example} \label{Ex 2.2}Consider a collection of random variables $\left\{ X_{i}\right\} _{i\in\mathbb{N}}$ such that $X_{i} \sim\mathcal{N}\left( 0,1\right) $. That is, take independent, identically distributed random variables that are normally distributed with mean $0$ and variance $1$. Define \begin{equation} S\left( n\right) =\sum_{i=1}^{n}X_{i}. \end{equation} The process $S\left( n\right) $ is known as a random walk and is a stochastic process, indexed by $n\in\mathbb{N}$, on the probability space $\left( \Omega,\mathcal{F} ,\mathbb{P}\right) $, where $\mathcal{F}$ is the set of Borel sets on $\mathbb{R}$ and $\mathbb{P}$ is the usual probability measure.
\end{example} \subsection{Gaussian stochastic processes} Let $\left\{ R_{i}\right\} _{i=1}^{m}$ be a collection of independent, identically distributed random variables with standard normal distribution, i.e. $R_{i}\sim\mathcal{N}\left( 0,1\right) $ as in Example \ref{Ex 2.2}. If a set of random variables $\left\{ X_{i}\right\} _{i=1}^{n}$ satisfies \begin{align} X_{1} & =a_{11}R_{1}+...+a_{1m}R_{m}+\mu_{1}\nonumber\\ & ...\nonumber\\ X_{n} & =a_{n1}R_{1}+...+a_{nm}R_{m}+\mu_{n} \end{align} for constants $\left\{ a_{ij}\right\} ,$ $\left\{ \mu_{i}\right\} $, we say that $\left( X_{1},...,X_{n}\right) $ form a \textit{multivariate normal distribution}. We also define the \textit{covariance} of two random variables below. \begin{definition} Let $X_{i}$ and $X_{j}$ be random variables. We define the \textit{covariance} of the random variables by \begin{align} Cov\left[ X_{i},X_{j}\right] & =\mathbb{E}\left[ \left( X_{i}-\mathbb{E}\left[ X_{i}\right] \right) \left( X_{j}-\mathbb{E}\left[ X_{j}\right] \right) \right] \nonumber\\ & =\mathbb{E}\left[ X_{i}X_{j}\right] -\mathbb{E}\left[ X_{i}\right] \mathbb{E}\left[ X_{j}\right] . \end{align} \end{definition} This is the basis of the definition for a Gaussian stochastic process.
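The construction above is easy to exercise numerically. The following sketch (ours, with an arbitrary coefficient matrix, means, and a fixed seed) draws samples of $X=AR+\mu$ from iid standard normals $R$ and compares the empirical covariance with the exact value $Cov(X)=AA^{T}$ implied by the definition.

```python
import numpy as np

# Sketch (ours, not from the paper): realize (X_1, X_2) = A R + mu from iid
# standard normals R, as in the multivariate normal construction above, and
# check the empirical covariance against the exact Cov(X) = A A^T.

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0],
              [0.5, 2.0]])                 # illustrative coefficients a_ij
mu = np.array([1.0, -2.0])                 # illustrative means mu_i

R = rng.standard_normal((400_000, 2))      # rows of iid N(0, 1) draws
X = R @ A.T + mu                           # samples of the multivariate normal

emp = np.cov(X, rowvar=False)
exact = A @ A.T                            # equals [[1.0, 0.5], [0.5, 4.25]]
print(np.max(np.abs(emp - exact)))         # small sampling error
```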
In particular, one has the following \cite{RO}: \begin{theorem} If $\left( X_{1},...,X_{n}\right) $ have a multivariate normal distribution with density given by \begin{equation} f_{X}\left(X_{1},...,X_{n}\right)=\frac{e^{-\frac{1}{2}\left(\overrightarrow{x}-\overrightarrow{\mu}\right)^{T} \Sigma^{-1}\left(\overrightarrow{x}-\overrightarrow{\mu}\right)}} {\sqrt{\left(2\pi\right)^{n}\left\vert\Sigma\right\vert}}, \end{equation} where $\overrightarrow{\mu}$ is the mean and $\Sigma$ the covariance matrix, then the joint moment generating function is given by \begin{equation} \phi\left( \lambda_{1},...,\lambda_{n}\right) =\exp\left\{ \sum_{i=1} ^{n}\lambda_{i}\mu_{i}+\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\lambda _{i}\lambda_{j}Cov\left[ X_{i},X_{j}\right] \right\} . \end{equation} \end{theorem} This is easily proven using the definition together with algebraic manipulation and standard properties of characteristic functions. We can now define what we mean by a Gaussian stochastic process: \begin{definition} \label{Def 2.1}Let $\left\{ t_{i}\right\} _{i=1}^{n}$ be arbitrary and, without loss of generality, increasing. We call a stochastic process $\left\{ X\left( t\right) \right\} $ a \textit{Gaussian stochastic process} if $\left( X\left( t_{1}\right) ,...,X\left( t_{n}\right) \right) $ has a multivariate normal distribution. \end{definition} Definition \ref{Def 2.1} implies that a stochastic process that is Gaussian is completely characterized by its mean and covariance matrix. A natural question to ask is what sort of processes might satisfy this requirement, and how one proves this property. We next give several examples relating to cases of particular interest in the broader literature. More details can be found in \cite{RO}. \begin{definition} \label{Def 2.2}Let $W\left( t\right) $ be a stochastic process.
We call $W\left( t\right) $ Brownian motion if (i) $W\left( 0\right) =0$, (ii) $W\left( t\right) $ is almost surely continuous, and (iii)\ $W\left( t\right) $ has independent increments with a normal distribution, i.e. \begin{equation} W\left( t\right) -W\left( s\right) \sim\mathcal{N}\left( 0,t-s\right) . \end{equation} \end{definition} \begin{definition} \label{Def 2.3}We call a Brownian motion conditioned on $W\left( T\right) =0$, denoted $\left\{ W_{b}\left( t\right) \right\} _{0\leq t\leq T}$, a \textit{Brownian bridge}. An equivalent definition is to set \begin{equation} W_{b}\left( t\right) =W\left( t\right) -\frac{t}{T}W\left( T\right) \text{.} \end{equation} In particular, we note that $W_{b}\left( 0\right) =W_{b}\left( T\right) =0$. \end{definition} \begin{definition} We call a process $W_{o}\left( t\right) $ stationary Ornstein-Uhlenbeck if it is defined by \begin{equation} W_{o}\left( t\right) =e^{-\alpha t}W\left( e^{2\alpha t}\right) . \end{equation} \end{definition} The stationary Ornstein-Uhlenbeck process has the important property that its associated covariance matrix is invariant under translation in time. A basic result is the following. \begin{lemma} \label{Prop 2.3}(a)\ Let $\left\{ W\left( t\right) \right\} _{t\geq0}$ be a Brownian motion. Then $W\left( t\right) $ is a Gaussian stochastic process. (b)\ Let the \textit{integrated Brownian motion} $S\left( t\right) $ be given by $S\left( t\right) :=\int_{0}^{t}W\left( r\right) dr$. Then $S\left( t\right) $ is a Gaussian stochastic process. Further, defining $Z\left( t\right) $ as integrated Brownian motion together with a drift, $Z\left( t\right) $ is a Gaussian stochastic process. (c) Let $T>0$ be given and $W\left( t\right) $ be a Brownian motion on $\left[ 0,T\right] $. The stochastic process $W_{b}\left( t\right) =W\left( t\right) -\frac{t}{T}W\left( T\right) $, called \textit{Brownian bridge}, is a Gaussian stochastic process. (d) Define $Z\left( t\right) =\int_{0}^{t}W_{b}\left( r\right) dr$.
Then $Z\left( t\right) $ is also a Gaussian stochastic process. \end{lemma} Part (b) is proven in the Appendix. A basic result that follows from the standard theory is given below. \begin{theorem} \label{Thm 2.1}(\textbf{Covariance Matrices for Brownian Motion and Brownian Bridge}). Let $T$ and $\left\{ t_{i}\right\} _{i=1}^{N}\in\left[ 0,T\right] $ be given. (a) Let $\left\{ S\left( t\right) \right\} _{t>0}$ be an integrated Brownian motion with piecewise linear drift. Then the covariance at times $0\leq s\leq t$ is given by \begin{equation} Cov\left[ S\left( t\right) ,S\left( s\right) \right] =s^{2}\left( \frac{t}{2}-\frac{s}{6}\right) . \label{covMatrixIBM} \end{equation} (b) For an integrated Brownian bridge $Z\left( t\right) =\int_{0} ^{t}\left( W\left( r\right) -\frac{r}{T}W\left( T\right) \right) dr$, the covariance matrix at times $0\leq s\leq t\leq T$ is given by \begin{equation} Cov\left[ Z\left( t\right) ,Z\left( s\right) \right] =s^{2}\left( \frac{t}{2}-\frac{s}{6}\right) -\frac{1}{T}\frac{t^{2}s^{2}}{4}. \label{covMatrixIBB} \end{equation} For $s$ and $t$ outside of $\left[ 0,T\right] $, the variance of $Z\left( t\right) $ is constant but nonzero. \end{theorem} An outline of the proof of Theorem \ref{Thm 2.1} is provided in the Appendix. \begin{remark} Later on, we consider Brownian motion on the interval $\left(0,1\right)$. In order to use our results on a different interval, one can instead consider Brownian motion on a translated and stretched interval, i.e. $\left(-M,M\right)$, where $W\left(-M\right)=0$ is the starting point instead of $0$ as is typically used.\label{BMextend} \end{remark} \section{Generalizations of Burgers' equation with random initial data} We consider a generalization of Burgers' equation by taking the flux function for $j\geq2$ as follows: \begin{equation} \mathcal{H}\left( p\right) =\frac{p^{j}}{j}. \label{genfluxfn} \end{equation} In this way, we obtain the following generalization of (\ref{varburg}).
\begin{lemma} \label{Prop 2.1}With $\mathcal{H}$ defined by (\ref{genfluxfn}), consider the conservation law \begin{align} w_{t}+\left( \mathcal{H}\left( w\right) \right) _{x} & =0\text{ in }\mathbb{R\times}\text{ }\left( 0,\infty\right) \nonumber\\ w\left( x,0\right) & =g^{\prime}\left( x\right) \text{ on } \mathbb{R}\times\left\{ t=0\right\}. \end{align} The solution is then \begin{equation} w\left( x,t\right) =sgn\left( \frac{x-y^{\ast}\left( x,t\right) } {t}\right) \left\vert \frac{x-y^{\ast}\left( x,t\right) }{t}\right\vert ^{1/\left( j-1\right) }. \label{Propgenburgers} \end{equation} \end{lemma} \begin{proof} The Legendre transform of the flux function (\ref{genfluxfn}) is \begin{align} \mathcal{L}\left( q\right) & =\sup_{p}\left\{ pq-\frac{p^{j}}{j}\right\} =\left\vert q\right\vert ^{j/\left( j-1\right) }-\frac{1}{j}\left\vert q\right\vert ^{j/\left( j-1\right) }\nonumber\\ & =\frac{j-1}{j}\left\vert q\right\vert ^{j/\left( j-1\right) }, \end{align} and its derivative is given by \begin{equation} \mathcal{L}^{\prime}\left( q\right) =sgn\left( q\right) \left\vert q\right\vert ^{1/\left( j-1\right) }. \end{equation} The solution $w\left( x,t\right) $ to the conservation law (\ref{cl}) can then be expressed as \[ w\left( x,t\right) =\mathcal{L}^{\prime}\left( \frac{x-y^{\ast}\left( x,t\right) }{t}\right) =sgn\left( \frac{x-y^{\ast}\left( x,t\right) } {t}\right) \left\vert \frac{x-y^{\ast}\left( x,t\right) }{t}\right\vert ^{1/\left( j-1\right) }. \] \end{proof} \begin{remark} For Burgers' equation, $j=2$ and the relation (\ref{Propgenburgers}) reduces to \[ w\left( x,t\right) =\frac{x-y^{\ast}\left( x,t\right) }{t}, \] i.e., the result (\ref{introkrb}). \end{remark} \begin{lemma} \label{Prop 2.2}Consider the conservation law as in Lemma \ref{Prop 2.1}, now with $\mathcal{H}\left( p\right) =\left\vert p\right\vert $ and initial data $g^{\prime}$.
Then the solution is given by \begin{align} w\left( x,t\right) & =g^{\prime}\left( y^{\ast}\left( x,t\right) \right) \text{ at points where }g^{\prime}\text{ is continuous,}\nonumber\\ w\left( x,t\right) & =0\text{ elsewhere,} \end{align} where $y^{\ast}$ is the value of the minimizer as before. \end{lemma} \begin{proof} The Legendre transform of $\mathcal{H}$ is simply \begin{equation} L\left( q\right) =\left\{ \begin{array} [c]{c} 0\\ +\infty \end{array} \right. \begin{array} [c]{c} \left\vert q\right\vert \leq1\\ \left\vert q\right\vert >1 \end{array} . \end{equation} The result then follows immediately from the solution formula (\ref{clsol}). \end{proof} This result is of particular interest, as it shows, for the special case of the flux function $\mathcal{H}\left( p\right) =\left\vert p\right\vert $, that the solution of the conservation law reduces to finding the minimum of the integral of the initial data, i.e.\ the minimum of $g$. For example, if $g^{\prime}$ is given by a Brownian motion, the solution at given $x$ and $t$ is obtained by finding the minimum of an integrated Brownian motion on a finite interval (whose endpoints depend on $x$ and $t$). The integrated Brownian motion is a Gaussian stochastic process (Lemma \ref{Prop 2.3} (b)), so the minimizer $y^{\ast}$ can easily be approximated by using the mean and covariance matrix (Theorem \ref{Thm 2.1} (a)). As another example, if the initial condition $g^{\prime}$ consists of $\pm1$ with probabilities $p$ and $1-p$, then one has $w\left( x,t\right) \in\left\{ 0,g^{\prime}\left( x-t\right) ,g^{\prime }\left( x+t\right) \right\} $. The value taken by $w$ depends on whether the minimum is attained in the interior, at the left endpoint, or at the right endpoint of the interval $\left[ x-t,x+t\right] $. 
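As a concrete illustration of this three-case structure (a minimal numerical sketch of our own, not part of the paper's computations; the grid, random seed, and use of \texttt{numpy} are arbitrary choices), one can evaluate $w(x,t)$ for a piecewise constant $\pm1$ initial datum by locating the minimum of its integral $g$ on $[x-t,x+t]$:

```python
import numpy as np

# Hypothetical example: g' is a +/-1 sequence on a uniform grid (p = 1/2),
# g its cumulative integral.  For H(p) = |p|, the value w(x, t) is read off
# from where g attains its minimum on [x - t, x + t].
rng = np.random.default_rng(0)
grid = np.linspace(-2.0, 2.0, 401)
h = grid[1] - grid[0]
gprime = rng.choice([-1.0, 1.0], size=grid.size)
g = np.concatenate(([0.0], np.cumsum(gprime[:-1] * h)))   # g(grid[k]) ~ integral of g'

def w(x, t):
    idx = np.flatnonzero(np.abs(grid - x) <= t)
    k = idx[np.argmin(g[idx])]            # minimizer y* of g on [x - t, x + t]
    if k == idx[0] or k == idx[-1]:
        return gprime[k]                  # minimum at an endpoint: w = g'(x -/+ t)
    return 0.0                            # interior minimum (a kink of g): w = 0

values = {w(x, 0.5) for x in np.linspace(-1.0, 1.0, 41)}
assert values <= {-1.0, 0.0, 1.0}         # only the three predicted values occur
```

For $\pm1$ data an interior minimum of $g$ sits at a kink where $g^{\prime}$ jumps from $-1$ to $+1$, which is why the interior case returns $0$.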
\begin{remark} \label{Rmk Range}Since the Legendre transform $L$ takes the value $+\infty$ for $\left\vert q\right\vert >1$, for a given $\left( x,t\right) $, the minimizer $y^{\ast}\left( x,t\right) $ takes its value in the domain \begin{equation} \left\vert x-y\right\vert \leq t. \end{equation} This is a manifestation of the finite domain of influence, which is strictly enforced: the solution at $\left( x,t\right) $ depends only on the initial data within the cone bounded by $y=x\pm t$. \end{remark} \begin{theorem} \label{Thm 2.2}Consider the conservation law (\ref{cl}) with flux function given by (\ref{genfluxfn}), i.e. $\mathcal{H}\left( p\right) =\frac{p^{j} }{j}$, $j\geq2$. For Brownian motion initial conditions one has, with $y^{\ast}$ depending on $\left( x,t\right) $, \begin{equation} Var\left( w\left( x,t\right) \right) =t^{-\frac{2}{j-1}}Var\left( sgn\left( x-y^{\ast}\right) \left\vert x-y^{\ast}\right\vert ^{\frac{1} {j-1}}\right), \label{Sec3me} \end{equation} noting that $y^{*}$ is short for $y^{*}\left(x,t\right)$. \end{theorem} \begin{proof} For a flux function of the form (\ref{genfluxfn}), a minimum is attained when \begin{equation} \mathcal{L}^{\prime}\left( \frac{x-y^{\ast}}{t}\right) =sgn\left( \frac{x-y^{\ast}}{t}\right) \left\vert \frac{x-y^{\ast}}{t}\right\vert ^{\frac {1}{j-1}}=g^{\prime}\left( y^{\ast}\right) =w\left( x,t\right) . \end{equation} One then writes \begin{align} \mathbb{E}\left\{ w\left( x,t\right) \right\} & =t^{-\frac{1}{j-1} }\mathbb{E}\left\{ sgn\left( \frac{x-y^{\ast}}{t}\right) \left\vert x-y^{\ast}\right\vert ^{\frac{1}{j-1}}\right\} ,\nonumber\\ \mathbb{E}\left\{ w\left( x,t\right) ^{2}\right\} & =t^{-\frac{2}{j-1} }\mathbb{E}\left\{ \left\vert x-y^{\ast}\right\vert ^{\frac{2}{j-1}}\right\} . 
\end{align} Therefore \begin{align} Var\left( w\left( x,t\right) \right) & =t^{-\frac{2}{j-1}} \{\mathbb{E}\left\{ \left\vert x-y^{\ast}\right\vert ^{\frac{2}{j-1} }\right\} \\ & -\mathbb{E}\left\{ sgn\left( \frac{x-y^{\ast}}{t}\right) \left\vert x-y^{\ast}\right\vert ^{\frac{1}{j-1}}\right\} ^{2}\} \end{align} \end{proof} \begin{remark} To obtain the result for Burgers' equation, one sets $j=2$, and observes that Theorem \ref{Thm 2.2} then yields the result (\ref{varburg}): \begin{equation} Var\left( w\left( x,t\right) \right) =\frac{1}{t^{2}}Var\left( x- y^{\ast }\left( x,t\right) \right) . \end{equation} Further, for the case of Brownian motion and for an arbitrary time $t$, $Var\left(g'\left( y^{\ast}\left( x,t\right)\right) \right) $ is increasing in $x$ since $y^{\ast}\left(x,t\right)$ is increasing in $x$ (see \cite{CA}), and thus the variance of $w\left( x,t\right)$ also increases. \end{remark} \begin{remark} From (\ref{Sec3me}) for Burgers' equation at $x=0$, using Theorem \ref{Thm 2.2} we have \begin{equation} Var\left( w\left( 0,t\right) \right) \text{ }\sim\text{ }\frac{C}{t^{2} }\cdot Var\left[y^{*}\left(0,t\right)\right]\label{Varw0} \end{equation} Using the solution formula, we also have \begin{align} w\left(x,t\right)=g'\left(y^{\ast}\left(x,t\right)\right)\text{ so } Var\left[w\left(x,t\right)\right]=Var\left[g'\left(y^{\ast}\left(x,t\right)\right)\right] \label{solform} \end{align} Combining (\ref{Varw0}) and (\ref{solform}) yields \begin{align} t^{2}Var\left[g'\left(y^{\ast}\left(0,t\right)\right)\right]=Var\left[y^{\ast}\left(0,t\right)\right]. \label{var1} \end{align} Since $g'$ is Brownian motion, one has $Var\left[g'\left(z\right)\right]=z$ for any $z>0$. 
Thus, one can formally write \begin{align} Var\left[g'\left(y^{\ast}\left(0,t\right)\right)\right]\sim\mathbb{E}y^{\ast}\left(0,t\right) \label{var2} \end{align} Combining (\ref{var1}) and (\ref{var2}) then yields \begin{align} \mathbb{E}y^{\ast}\left(0,t\right)\sim\frac{C}{t^{2}}Var\left[y^{\ast}\left(0,t\right)\right] \end{align} Further, if we are willing to make the conjecture that \begin{equation} Var\left[y^{*}\left(0,t\right)\right]\sim\mathbb{E}\left[y^{*}\left(0,t\right)\right]^{p}, \end{equation} for some power $p\in\mathbb{N}$, then we can deduce \begin{equation} Var\left( y^{\ast}\right) =t^{2}Var\left( g^{\prime}\left( y^{\ast }\right) \right) \sim t^{2}\mathbb{E}\left[ y^{\ast}\right]^{p} \sim Ct^{2}\cdot t^{p}. \end{equation} Then the result (\ref{Varw0}) can be further extended to obtain \begin{equation} Var\left( w\left( 0,t\right) \right)\sim Ct^{3p-2}, \end{equation} which formally suggests that the variance of the solution at $x=0$ increases as time increases. These formal predictions can also be explored numerically with a finite difference scheme, e.g. \cite{SS}. \end{remark} \subsection{Spatial density of shocks} We now present a formal argument that the density of shocks in the solution $w\left( x,t\right) $ in $x$ decreases as $t$ increases for the case $H\left( p\right) =\left\vert p\right\vert $ with any random initial data. A shock can only form when there is a transition from Region I to II or III, or from Region II to III (see Figure \ref{Fig MS}). As noted in Remark \ref{Rmk Range}, the range of the minimization problem is restricted to the interval $\left\vert x-y\right\vert \leq t$, so that we have \begin{equation} w\left( x,t\right) =\partial_{x}\min_{y\in\left[ x-t,x+t\right] }g\left( y\right) =g^{\prime}\left( y^{\ast}\left( x,t\right) \right) . \end{equation} The minimum of $g\left( y\right) $ is either attained in the interior or at one of the endpoints, $x\pm t$. 
If it occurs in the interior, we must have $g^{\prime}\left( y^{\ast}\right) =0$. However, in the event of a minimum at $x+t$, one may have $g^{\prime}\left( y^{\ast}\right) =g^{\prime}\left( x+t\right) <0$. The case $g^{\prime}\left( x+t\right) =0$ will occur on a set of measure zero, and thus we have $w\left( x,t\right) <0$ for this case. Similarly, if the minimum is attained at $y^{\ast}=x-t,$ we will have $w\left( x,t\right) =g^{\prime}\left( x-t\right) >0$. We want to examine probabilities and consider how minima can change with a small change in the spatial coordinate. Let time $t$ be fixed and consider a small change in $x$ from $x$ to $x+\Delta x$. If the minimizer does not change, then the value of $w\left( x,t\right) $ also does not change by the argument above. If it does change, then so does $g^{\prime}$, and consequently, $w\left( x,t\right) $. We calculate the probability that there is a transition in $w$, i.e. the probability of a lower minimum on $\left( x+\Delta x-t,x+\Delta x+t\right) $ compared to that on $\left( x-t,x+t\right) $. This is given by \begin{equation} \mathbb{P}\left\{ \min\left\{ g\left( y\right) :\left\vert x-y\right\vert <t\right\} >\min\left\{ g\left( y\right) :\left\vert x+\Delta x-y\right\vert <t\right\} \right\} . \label{Sec3pt} \end{equation} In order for the probability in (\ref{Sec3pt}) to be nonzero, there must be a minimum outside the region of overlap, i.e.\ one of two events must occur. In the first, a new minimum exists on the segment $\left(x+t,x+\Delta x+t\right)$, which we term region III, whose value is below the original minimum. In the second case, the original minimum fell in the range $\left( x-t,x+\Delta x-t\right) $ (region I), which is excluded when moving from $x$ to $x+\Delta x$ as illustrated in Figure \ref{Fig MS}. 
In the latter case, the minimum attained on this segment fails to be in the new interval when $x$ is increased by $\Delta x$, so the new minimum is a consequence of the original no longer being contained in the domain of consideration. In the former case (with the new minimum attained on the right), the minimum moves lower, so $w\left( x,t\right) =\partial_{x}\min\left\{ g\left( y\right) \right\} $ jumps from zero to a negative value. We express this probability as \begin{align} & \mathbb{P}\left\{ w\left( x_{-},t\right) >w\left( x_{+},t\right) \right\} \nonumber\\ & =\mathbb{P}\left\{ \min\left\{ g\left( y\right) :\left\vert x-y\right\vert <t\right\} >\min\left\{ g\left( y\right) :x+t<y<x+\Delta x+t\right\} \right\} \label{Sec3pt'} \end{align} In the latter case, the minimum on the left gives rise to a larger minimum in the central region, so $w$ jumps from a positive value to zero. The inequalities above are valid for any $x$ and $\Delta x$. In particular, we could write a set of inequalities for a partition $\left\{ x_{-2} ,x_{-1},x_{0},x_{1},x_{2},...\right\} $ of $\mathbb{R}$ with corresponding $\Delta x_{i}:=x_{i}-x_{i-1}$. We now perform the same computation for any $x$ but with time $t+\Delta t$ to obtain the following: \begin{align} & \mathbb{P}\left\{ w\left( x,t+\Delta t\right) >w\left( x+\Delta x,t+\Delta t\right) \right\} \nonumber\\ & =\mathbb{P}\left\{ \begin{array} [c]{c} \min\left\{ g\left( y\right) :\left\vert x-y\right\vert <t+\Delta t\right\} >\\ \min\left\{ g\left( y\right) :x+t+\Delta t<y<x+\Delta x+t+\Delta t\right\} \end{array} \right\} \label{Sec3dt} \end{align} \begin{figure} \caption{When moving from $\left( x,t\right) $ to $\left( x+\Delta x,t\right) $, the interval of minimization is shifted $\Delta x$ to the right. 
To have different minimizers at points $x$ and $x+\Delta x,$ at least one of two events must occur: (i) the minimizer was attained in the leftmost segment $\left( x-t,x-t+\Delta x\right) $ which is omitted for the minimization problem at $x+\Delta x$ and hence lost; (ii) a new, lower minimum is introduced by considering the segment $\left( x+t,x+t+\Delta x\right) $ on the right and hence gained.} \label{Fig MS} \end{figure} \noindent We can now evaluate the probability in (\ref{Sec3dt}) to obtain the probability of a transition for $\left( x-\Delta t,t+\Delta t\right) \,$, i.e. obtain \begin{align} & \mathbb{P}\left\{ w\left( x-\Delta t,t+\Delta t\right) >w\left( x_{+}-\Delta t,t+\Delta t\right) \right\} \nonumber\\ & =\mathbb{P}\left\{ \begin{array} [c]{c} \min\left\{ g\left( y\right) :x-t-2\Delta t<y<x+t\right\} >\\ \min\left\{ g\left( y\right) :x+t<y<x+\Delta x+t\right\} \end{array} \right\} .\label{Sec3dt''} \end{align} We now compare (\ref{Sec3dt''}) with (\ref{Sec3pt'}) noting that the second terms are identical. One has the inequality \begin{equation} \min\left\{ g\left( y\right) :\left\vert x-y\right\vert <t\right\} \geq\min\left\{ g\left( y\right) :x-t-2\Delta t<y<x+t\right\} . \end{equation} In other words, in the notation of uniformly spaced points $\left\{ x_{i}\right\} _{i\in\mathbb{Z}}$, we have \begin{equation} \mathbb{P}\left\{ w\left( x_{i},t\right) >w\left( x_{i+1},t\right) \right\} \geq\mathbb{P}\left\{ w\left( x_{i+1},t+\Delta t\right) >w\left( x_{i+1}-\Delta t,t+\Delta t\right) \right\} \label{downshock} \end{equation} and \begin{equation} \mathbb{P}\left\{ w\left( x_{i+1},t\right) >w\left( x_{i},t\right) \right\} \geq\mathbb{P}\left\{ w\left( x_{i},t+\Delta t\right) >w\left( x_{i}+\Delta t,t+\Delta t\right) \right\} .\label{upshock} \end{equation} By combining (\ref{downshock}) and (\ref{upshock}) and then summing over all $i$, one sees that the density of jumps is decreasing in time, i.e. 
\begin{align} & \sum_{i}\mathbb{P}\left\{ \text{jump between }x_{i}\text{ and } x_{i+1}\right\} \\ & \geq\sum_{i}\left\{ \begin{array} [c]{c} \mathbb{P}\left\{ w\left( x_{i+1},t+\Delta t\right) >w\left( x_{i+1}-\Delta t,t+\Delta t\right) \right\} \\ +\mathbb{P}\left\{ w\left( x_{i+1},t+\Delta t\right) >w\left( x_{i+1}+\Delta t,t+\Delta t\right) \right\} \end{array} \right\} . \end{align} Further, if the process is stationary, one has a stronger result in that the local variation at a given $x_{i}$ is also nonincreasing. In the limit, one may also write \begin{equation} \mathbb{P}\left\{ w\left( x,t\right) >w\left( x_{+},t\right) \right\} \geq\mathbb{P}\left\{ w\left( x_{+},t+\Delta t\right) >w\left( \left[ x-\Delta t\right] _{+},t+\Delta t\right) \right\} \end{equation} and \begin{equation} \mathbb{P}\left\{ w\left( x,t\right) <w\left( x_{-},t\right) \right\} \geq\mathbb{P}\left\{ w\left( x_{-},t+\Delta t\right) >w\left(\left[ x+\Delta t\right] _{-},t+\Delta t\right)\right\} . \end{equation} Thus, we can bound the probabilities that the right and left-hand limits of the solution at a particular point $x$ in space change as time is increased slightly for a transition of the type from Region I or II to III. In addition to this case, there are potential transitions from Region I to II. In order for such a transition to occur, we would need \begin{equation} \min\left\{g\left(y\right):x-t<y<x+\Delta x - t\right\} < \min\left\{g\left(y\right) : x+\Delta x -t<y<x+t\right\}. \label{2to3i} \end{equation} Considering the same probabilities at $x+\Delta t$ and $t+\Delta t$, we have \begin{align} \min\left\{g\left(y\right):x+\Delta t - \left(t+\Delta t\right)<y<x+\Delta t+\Delta x -\left(t+\Delta t \right)\right\} \nonumber\\ < \min\left\{g\left(y\right):x+\Delta t+\Delta x - \left(t+\Delta t\right)<y<x+\Delta t+t+\Delta t\right\}, \end{align} i.e., \begin{equation} \min\left\{g\left(y\right):x-t<y<x+\Delta x-t\right\}<\min\left\{g\left(y\right): x+\Delta x-t<y<x+t+2\Delta t\right\}. 
\label{2to3ii} \end{equation} Then if a sample Brownian motion path $\omega$ satisfies (\ref{2to3ii}), it will satisfy (\ref{2to3i}), so one has \begin{equation} \mathbb{P}\left\{\text{Transition I}\rightarrow\text{II at }\left(x,t\right) \right\}\geq\mathbb{P}\left\{\text{Transition I}\rightarrow\text{II at } \left(x+\Delta x,t+\Delta t\right)\right\}, \end{equation} and similarly, one shows that the density of shocks arising from these transitions is also decreasing in time. \section{Application to a prototype case} As discussed earlier, an important application of our methodology involves the case where the flux function is given simply as $H\left( p\right) =\left\vert p\right\vert $. We will first consider this case for (\ref{cl}) to illustrate the main ideas and a convergence proof before proceeding to the more general case in the next section. As we shall see, though the formulation of this problem is simple, it raises a number of interesting points and the solution under our methods presents a clear geometric interpretation. Even for an initial condition given by a stochastic process such as Brownian motion, the essence of computing the solution profile reduces to determining in which of three regions the integrated Brownian motion attains its minimum. To calculate the solution $w(x,t)$, one needs to determine whether the minimum of the integrated Brownian motion is at $x-t$, $x+t$, or within $(x-t,x+t) $. The more general case is based on the same principles but involves more calculation. To ease notation, we will present the following prescription as to how one determines the value of the solution profile $w\left( x,t\right) $ for a particular realization of one Brownian path and for one specific point $\left( x,t\right) \in\mathbb{R\times}(0,\infty)$. Implicit will be the existence of an underlying, appropriate partition of the real line, the technical details of which are omitted here and are to be outlined in the next section. 
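As a rough numerical illustration of this reduction (our own Monte Carlo sketch with \texttt{numpy}; for simplicity the Brownian path is restarted at the left endpoint of the window, which changes the law of the minimizer but not the structure of the computation), one can estimate the probabilities of the three cases:

```python
import numpy as np

# Monte Carlo sketch: probability that the minimum of an integrated Brownian
# motion over a window is attained at the left endpoint, in the interior, or
# at the right endpoint.  Sample sizes and the seed are arbitrary choices.
rng = np.random.default_rng(5)
n_paths, n_steps = 2000, 400
h = 1.0 / n_steps
counts = np.zeros(3)                       # [left endpoint, interior, right endpoint]
for _ in range(n_paths):
    W = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n_steps) * np.sqrt(h))))
    I = np.concatenate(([0.0], np.cumsum(W[:-1] * h)))    # integrated path
    k = np.argmin(I)
    counts[0 if k == 0 else (2 if k == n_steps else 1)] += 1
probs = counts / n_paths
assert abs(probs.sum() - 1.0) < 1e-12 and np.all(probs >= 0.0)
```

The three empirical frequencies are exactly the quantities that the nested Gaussian integrals of the next section compute in closed form.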
First note that the Legendre transform of the flux function $H$ given above will be $0$ inside a compact interval and $+\infty$ elsewhere. More specifically, we have \begin{equation} tL\left( \frac{y-x}{t}\right) =\left\{ \begin{array} [c]{c} 0\\ +\infty \end{array} \right. \begin{array} [c]{c} y\in\left[ x-t,x+t\right] \\ y\in\left( -\infty,x-t\right) \cup\left( x+t,\infty\right) \end{array} . \end{equation} We then consider the quantity \begin{equation} tL\left( \frac{y-x}{t}\right) +g\left( y\right) .\label{qmin} \end{equation} One then has the remarkable simplification that minimizing the expression (\ref{qmin}) over the entire real line is equivalent to computing the minimum of $g\left( y\right) $ in the interval $\left[ x-t,x+t\right] $. Although finding the minimum of $g\left( y\right) $ for a stochastic process, say integrated Brownian motion, may not be so simple, for our purposes it will suffice to determine which of three distinct cases occurs. Indeed, either the minimum of $g\left( y\right) $ will occur at one of the endpoints or somewhere in the interior. In our setup, we discretize Brownian motion in the following way. Without loss of generality (see Remark \ref{BMextend}), consider the interval $\left[ 0,1\right] $ and let $M\in\mathbb{N}$ be given. Divide the interval into $2^{M}$ pieces, resulting in a partition $\left\{ r_{n}^{\left( M\right) }\right\} $, and choose random variables so that the discretized Brownian motion is constant on each subinterval, with its values prescribed by a random walk. Our goal is to show that the solution with initial conditions given by the discretized version of the integrated Brownian motion indeed converges to integrated Brownian motion as we let $M\rightarrow\infty$. We define our setup of the discretization more formally in Definition \ref{Def 1} before introducing our main results. We will denote our discretized Brownian motion by $W_{M}$, and the integrated process as $I_{M}=\int W_{M}$. 
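The discretization just described can be sketched as follows (a minimal illustration of our own, assuming \texttt{numpy}; the level $M$ and the seed are arbitrary): sample $W$ at the points $r_{n}=n2^{-M}$, hold the value on each subinterval, and integrate the resulting step function exactly.

```python
import numpy as np

def brownian_on_grid(M, rng):
    # sample W at the dyadic points r_n = n 2^{-M}, with W(0) = 0
    r = np.linspace(0.0, 1.0, 2 ** M + 1)
    increments = rng.standard_normal(2 ** M) * np.sqrt(np.diff(r))
    W = np.concatenate(([0.0], np.cumsum(increments)))
    return r, W

def integrate_step_function(r, W):
    # I_M(r_k) = sum_{i<k} W(r_i) (r_{i+1} - r_i): the exact integral of the
    # piecewise constant interpolant W_M
    return np.concatenate(([0.0], np.cumsum(W[:-1] * np.diff(r))))

rng = np.random.default_rng(2)
r, W = brownian_on_grid(8, rng)
I = integrate_step_function(r, W)
assert I.shape == r.shape and I[0] == 0.0 and I[1] == 0.0   # W(0) = 0, so I_M is flat at first
```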
Note that our procedure superficially resembles the classical construction of Brownian motion as a limit of interpolated (and thus continuous) random variables. However, using the interpolated random walk would not be as useful in our case, as the minima of its integral will not in general be at the specified grid points (vertices of $W_{M}$ and $L$). We define \begin{equation} g_{M}^{\prime}\left( r\right) =W_{M}\left( r\right) :=W\left( r_{i}\right) \text{ \ for }r\in\lbrack r_{i},r_{i+1}). \end{equation} We then integrate and define \begin{equation} I_{M}\left( t\right) =\int_{0}^{t}W_{M}\left( s\right) ds,\label{disIBM} \end{equation} where (\ref{disIBM}) defines our discretized integrated Brownian motion. The initial value problem we want to consider then becomes (using superscripts for $M$ to ease notation) \begin{equation} w_{t}^{M}+\left\vert w^{M}\right\vert _{x}=0,\text{ }w^{M}\left( x,0\right) =g_{M}^{\prime}\left( x\right) .\label{Ninitial} \end{equation} We construct a solution to this discretized problem. A key result, derived in Section 3.1, is that \begin{equation} w^{M}\left( x,t\right) =\left\{ \begin{array} [c]{c} 0\\ g_{M}^{\prime}\left( x-t\right) \\ g_{M}^{\prime}\left( x+t\right) \end{array} \right. \begin{array} [c]{c} \text{if interior minimum}\\ \text{if minimum is left endpoint}\\ \text{if minimum is right endpoint} \end{array} .\label{Nsol} \end{equation} The events that the minimum occurs in these different cases (i.e., either somewhere in the interior of $\left[ x-t,x+t\right] $ or at one of the endpoints) can be described as Gaussian integrals. In other words, the value of the solution is completely determined by the value of the initial condition depending on whether the minimum is attained at the left endpoint, in the interior, or at the right endpoint. The details are presented in the next section for the more general case, but take the form of a series of nested Gaussian integrals. 
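The prescription above can be cross-checked against the penalized formulation of the previous paragraphs (a self-contained sketch of our own, with \texttt{numpy}; the grid and seed are arbitrary): minimizing $tL\left((y-x)/t\right)+g(y)$ over the whole grid must select the same value as minimizing $g$ over $[x-t,x+t]$, since the penalty vanishes there and is $+\infty$ outside.

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.linspace(-3.0, 3.0, 1201)
g = np.cumsum(rng.standard_normal(y.size)) * 0.05     # a generic random path for g

def penalized_min(x, t):
    # t * L((y - x)/t) for H(p) = |p|: zero on [x - t, x + t], +inf outside
    penalty = np.where(np.abs(y - x) <= t, 0.0, np.inf)
    return np.min(penalty + g)

def window_min(x, t):
    return np.min(g[np.abs(y - x) <= t])

for x, t in [(0.0, 0.5), (1.0, 1.5), (-0.7, 0.2)]:
    assert penalized_min(x, t) == window_min(x, t)    # the two formulations agree
```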
A second set of questions involves whether the solution converges, and if so in what sense, to solutions of the limiting problem, i.e. \begin{equation} w_{t}+\left\vert w\right\vert _{x}=0,\text{ }w\left( x,0\right) =g^{\prime }\left( x\right) \label{IBMinitial} \end{equation} where $g^{\prime}$ is given by a Brownian motion. Indeed, one can show that we have convergence in probability of $w^{M}$ to $w$ in the following sense. Given $A=B+C$, where $\mathbb{P}\left(\vert C \vert>\alpha\right)$ will be small, we will want to bound the probabilities of $\left\{ A>0 \right\}$ and $\left\{A<0\right\}$ in terms of sets only involving $B$. One may write \begin{equation} \mathbb{P}\left(B\leq-\alpha\right)-\mathbb{P}\left(C\geq\alpha\right) \leq\mathbb{P}\left(A<0\right)\leq\mathbb{P}\left(B\leq\alpha\right) +\mathbb{P}\left(C\leq-\alpha\right). \end{equation} We use Billingsley Theorem 37.9, p. 534 \cite{BI}. Let $M\in\mathbb{N}$ be fixed and let $D$ be the dyadic rationals \begin{equation} D:=\left\{k2^{-M}:k\in\mathbb{N}\right\}. \end{equation} Then we have for Brownian motion, $W$, with $\delta:=2^{-M}$, the bound for any $\alpha>0$ \begin{equation} \mathbb{P}\left[\sup_{t\in D\cap\left[0,1\right]}\left\vert W\left(t+\delta \right) -W\left(t \right)\right\vert >\alpha\right]\leq K\frac{\delta^{2}}{\alpha^{4}}, \end{equation} where $K$ is a positive real number. By continuity of Brownian motion, we can extend this inequality to all $t$ (not just dyadic). We consider for simplicity the interval $\left[0,1\right]$ so $k\in\left\{1,2,...,2^{M}\right\}$ and let $Q:=2^{M}$, $\left(r_{0}, ..., r_{Q}\right)$ be a set of equally spaced points with $r_{0}:=0$ and $r_{Q}:=1$ with spacing \begin{equation} \Delta r=\delta=2^{-M}. \end{equation} We then have, for any $r\in\left[0,1\right]$, the bound \begin{equation} \left\vert I\left(r\right)-I_{M}\left(r\right)\right\vert\leq\sup_{s\in\left[0,1\right]}\left\vert W\left(s\right)-W_{M}\left(s\right)\right\vert. 
\end{equation} Now define the following: \begin{align} \tilde{A}\left(s\right):=I\left(x-t\right)-I\left(s\right)\text{, } \tilde{B}\left( s\right) :=I_{M}\left( x-t\right) -I_{M}\left( s\right) \nonumber\\ \tilde{C}\left(s\right):=\left\{I\left(x-t\right)-I_{M}\left(x-t\right)\right\} -\left\{I\left(s\right)-I_{M}\left(s\right)\right\}, \end{align} so that $\tilde{C}=\tilde{A}-\tilde{B}$, and the real numbers \begin{align} A:=\sup_{s\in\left[ x-t,x+t\right] }\tilde{A}\left( s\right), \ B:=\sup_{s\in\left[ x-t,x+t\right] }\tilde{B}\left( s\right),\nonumber\\ C:=\sup_{s\in\left[ x-t,x+t\right] }\tilde{A}\left( s\right) -\sup_{s\in\left[ x-t,x+t\right] }\tilde{B}\left( s\right)\label{Def ABC} \end{align} Thus, we have $A=B+C$, and $C$ has the property \begin{equation} \vert C\vert\leq\sup_{s\in\left[x-t,x+t\right]}\vert\tilde{C}\left(s\right)\vert. \end{equation} Using the above, one can then write \begin{equation} \mathbb{P}\left\{\omega:\vert C\vert>\alpha\right\}\leq\mathbb{P}\left\{ \omega:\sup_{s\in\left[0,1\right]}\vert I\left(s\right)-I_{M}\left(s\right)\vert>\alpha\right\} \leq K\frac{M^{-2}}{\alpha^{4}} \end{equation} and recalling that $\delta=\frac{1}{Q}=2^{-M}$ by definition, so that $\delta^{2}=2^{-2M}\leq M^{-2}$, we have \begin{equation} \mathbb{P}\left(B<-\alpha\right)-K\frac{M^{-2}}{\alpha^{4}}\leq\mathbb{P}\left(A<0\right) \leq\mathbb{P}\left(B<\alpha\right)+K\frac{M^{-2}}{\alpha^{4}} \end{equation} Then, by choosing $\alpha$ and $M$ appropriately, one has \begin{equation} \mathbb{P}\left( B<-M^{-\frac{2}{5}}\right) -KM^{-\frac{2}{5}} <\mathbb{P}\left( A<0\right) <\mathbb{P}\left( B<M^{-\frac{2}{5}}\right) +KM^{-\frac{2}{5}}\label{absconv} \end{equation} for a pure number $K$ independent of $M$. We use these inequalities in conjunction with the convergence theorems below. \subsection{Properties of the Brownian motion discretization} In particular, we want to prove that $I_{N}$ defined by (\ref{disIBM}) is a Cauchy sequence in probability. 
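Before proceeding, we note that the sandwich bound $\mathbb{P}\left(B\leq-\alpha\right)-\mathbb{P}\left(C\geq\alpha\right)\leq\mathbb{P}\left(A<0\right)\leq\mathbb{P}\left(B\leq\alpha\right)+\mathbb{P}\left(C\leq-\alpha\right)$ for $A=B+C$ follows from set inclusions alone and therefore holds for any joint distribution; a quick empirical sanity check (our own sketch with \texttt{numpy}; the Gaussian choices for $B$ and $C$ are arbitrary):

```python
import numpy as np

# Empirical check of the sandwich bound for A = B + C.  Because the bound
# follows from set inclusions, it holds exactly for the empirical measure too.
rng = np.random.default_rng(3)
n = 200_000
B = rng.standard_normal(n)
C = 0.05 * rng.standard_normal(n)      # C is small with high probability
A = B + C
alpha = 0.1
lower = np.mean(B <= -alpha) - np.mean(C >= alpha)
upper = np.mean(B <= alpha) + np.mean(C <= -alpha)
assert lower <= np.mean(A < 0) <= upper
```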
Given an $\varepsilon>0$ we choose $J$ such that $\Delta r_{N_{J}}\leq\varepsilon.$ Let $t\in\lbrack r_{i}^{\left( J\right) },r_{i+1}^{\left( J\right) }).$ This means that for all $j,k>J$ we have, for any particular $\omega$ in the underlying sample space, the inequality \begin{equation} \left\vert W_{N_{j}}\left( t\right) -W_{N_{k}}\left( t\right) \right\vert \leq\sup_{t_{1},t_{2}\in\lbrack r_{i}^{\left( J\right) },r_{i+1}^{\left( J\right) })}\left\vert W\left( t_{1}\right) -W\left( t_{2}\right) \right\vert . \end{equation} It follows that for any real number $\alpha$ one has \begin{align} & P\left\{ \omega:\left\vert W_{N_{j}}\left( t\right) -W_{N_{k}}\left( t\right) \right\vert >\alpha\right\} \nonumber\\ & \leq P\left\{ \omega:\sup_{t_{1},t_{2}\in\lbrack r_{i}^{\left( J\right) },r_{i+1}^{\left( J\right) })}\left\vert W\left( t_{1}\right) -W\left( t_{2}\right) \right\vert >\alpha\right\} \ . \end{align} A similar bound is obtained for the integrals of the discretized Brownian motion. Using the definitions above we write, for $t\in\left[ 0,1\right] $ and any particular $\omega$, so that $W_{N_{j}}$ and $W_{N_{k}}$ are bounded, piecewise constant functions, \begin{align} \left\vert I_{N_{j}}\left( t\right) -I_{N_{k}}\left( t\right) \right\vert & \leq\int_{0}^{t}\left\vert W_{N_{j}}\left( s\right) -W_{N_{k}}\left( s\right) \right\vert ds\nonumber\\ & \leq\sup_{s\in\left[ 0,1\right] }\left\vert W_{N_{j}}\left( s\right) -W_{N_{k}}\left( s\right) \right\vert \nonumber\\ & \leq\sup_{s\in\left[ 0,1\right] }\sup_{r\in\left[ 0,\Delta r^{\left( J\right) }\right] }\left\vert W\left( s+r\right) -W\left( s\right) \right\vert \end{align} so that for any $\omega$ and $\alpha\in\mathbb{R}$ we have \begin{equation} \left\vert I_{N_{j}}\left( t\right) -I_{N_{k}}\left( t\right) \right\vert \geq\alpha\ \text{ implies }\ \sup_{s\in\left[ 0,1\right] }\sup_{r\in\left[ 0,\Delta r^{\left( J\right) }\right] }\left\vert W\left( s+r\right) -W\left( s\right) \right\vert \geq\alpha\ . 
\end{equation} Hence we have \begin{align} \left\{ \omega:\left\vert I_{N_{j}}\left( t\right) -I_{N_{k}}\left( t\right) \right\vert \geq\alpha\right\} & \subset\left\{ \omega :\sup_{s\in\left[ 0,1\right] }\sup_{r\in\left[ 0,\Delta r^{\left( J\right) }\right] }\left\vert W\left( s+r\right) -W\left( s\right) \right\vert \geq\alpha\right\} \nonumber\\ P\left\{ \left\vert I_{N_{j}}\left( t\right) -I_{N_{k}}\left( t\right) \right\vert \geq\alpha\right\} & \leq P\left\{ \sup_{s\in\left[ 0,1\right] }\sup_{r\in\left[ 0,\Delta r^{\left( J\right) }\right] }\left\vert W\left( s+r\right) -W\left( s\right) \right\vert \geq \alpha\right\} . \label{I} \end{align} Next, we claim that if one has, for some $C\in\mathbb{R}$ and all $s$ and $r$ in their respective closed, bounded intervals, the inequality \begin{equation} P\left\{ \left\vert W\left( s+r\right) -W\left( s\right) \right\vert \geq\alpha\right\} \leq C \end{equation} then one also has \begin{equation} P\left\{ \sup_{s\in\left[ 0,1\right] }\sup_{r\in\left[ 0,\Delta r^{\left( J\right) }\right] }\left\vert W\left( s+r\right) -W\left( s\right) \right\vert \geq\alpha\right\} \leq C\ . \label{sup} \end{equation} To prove this, note that for each $\omega$ the Brownian motion $W\left( s;\omega\right) $ is continuous, so the increment $\left\vert W\left( s+r\right) -W\left( s\right) \right\vert $ is also continuous in $\left( r,s\right) $ and attains its maximum and minimum on $\left[ 0,\Delta r^{\left( J\right) }\right] \times\left[ 0,1\right] $. Let $\left( r^{\ast},s^{\ast}\right) $ be a point of maximum. Then there is a sequence $\left( r_{n},s_{n}\right) $ converging to $\left( r^{\ast},s^{\ast }\right) .$ We have thus \begin{equation} \sup_{s\in\left[ 0,1\right] }\sup_{r\in\left[ 0,\Delta r^{\left( J\right) }\right] }\left\vert W\left( s+r\right) -W\left( s\right) \right\vert =\left\vert W\left( s^{\ast}+r^{\ast}\right) -W\left( s^{\ast}\right) \right\vert . 
\end{equation} Let $R_{n}:=\left\vert W\left( s_{n}+r_{n}\right) -W\left( s_{n}\right) \right\vert $ and $R^{\ast}:=\left\vert W\left( s^{\ast}+r^{\ast}\right) -W\left( s^{\ast}\right) \right\vert .$ For each $\omega$ we have, by continuity of $W\left( s;\omega\right) $ in $s$, the pointwise convergence $R_{n}\rightarrow R^{\ast}.$ That is, for any $\varepsilon>0$ we have convergence with probability $1,$ also expressed as (\cite{BI}, p.~75) \begin{equation} P\left[ \left\vert R_{n}-R^{\ast}\right\vert \geq\varepsilon\ i.o.\right] =0. \end{equation} This implies convergence in probability, which implies convergence in distribution, i.e., for any $\alpha$ such that $P\left[ R^{\ast} =\alpha\right] =0,$ which in our case is all $\alpha$ by continuity, one has \begin{equation} \lim_{n\rightarrow\infty}P\left[ R_{n}>\alpha\right] =P\left[ R^{\ast}>\alpha\right] , \end{equation} i.e., one has \begin{align} \lim_{n\rightarrow\infty}P\left[ \left\vert W\left( s_{n}+r_{n}\right) -W\left( s_{n}\right) \right\vert >\alpha\right] & =P\left[ \left\vert W\left( s^{\ast}+r^{\ast}\right) -W\left( s^{\ast}\right) \right\vert >\alpha\right] \nonumber\\ & =P\left[ \sup_{s}\sup_{r}\left\vert W\left( s+r\right) -W\left( s\right) \right\vert >\alpha\right] . \end{align} Hence, the assumption that $P\left[ \left\vert W\left( s_{n}+r_{n}\right) -W\left( s_{n}\right) \right\vert >\alpha\right] \leq C$ implies the conclusion $\left( \ref{sup}\right) ,$ proving the claim. We use these in conjunction with a basic bound on Brownian motion to prove the following lemma. \begin{lemma} For any $\alpha >0,$ one has \begin{equation} P\left\{ \left\vert I_{N_{j}}\left( s\right) -I_{N_{k}}\left( s\right) \right\vert >\alpha \right\} \leq 3\frac{\left[ \Delta r^{\left( J\right) }\right] ^{2}}{\alpha ^{4}}\text{ \ for any }s\in \left[ 0,1\right] . 
\label{6} \end{equation} \label{lemmabound} \end{lemma} \begin{proof} A random variable $X$ with finite $k$th moment satisfies Markov's inequality for any $k\in\mathbb{N}$ \begin{equation} P\left\{ \left\vert X\right\vert >\alpha\right\} \leq\frac{E\left[ \left\vert X\right\vert ^{k}\right] }{\alpha^{k}}. \end{equation} We apply this to $X=W\left( s+r\right) -W\left( s\right) $ with $k:=4.$ Note that $X$ is a Gaussian with mean $0$ and variance $\sigma^{2}=r.$ The fourth moment of this Gaussian is then $3\sigma ^{4}=3r^{2}.$ Thus, we can write, for $r\in\left[ 0,\Delta r^{\left( J\right) }\right] ,$ the inequality \begin{equation} P\left\{ \left\vert W\left( s+r\right) -W\left( s\right) \right\vert >\alpha\right\} \leq\frac{3r^{2}} {\alpha^{4}}\leq\frac{3\left[ \Delta r^{\left( J\right) }\right] ^{2}}{\alpha^{4}}. \label{M} \end{equation} From this inequality, one obtains \begin{equation} P\left\{ \sup_{s\in\left[ 0,1\right] }\sup_{r\in\left[ 0,\Delta r^{\left( J\right) }\right] }\left\vert W\left( s+r\right) -W\left( s\right) \right\vert >\alpha\right\} \leq\frac{3\left[ \Delta r^{\left( J\right) }\right] ^{2}}{\alpha^{4}}. \end{equation} Combining this with the inequality $\left( \ref{I}\right) ,$ for $j,k>J$ one has for any $t\in\left[ 0,1\right] $ \begin{align} P\left\{ \left\vert I_{N_{j}}\left( t\right) -I_{N_{k}}\left( t\right) \right\vert >\alpha\right\} & \leq P\left\{ \sup_{s\in\left[ 0,1\right] }\sup_{r\in\left[ 0,\Delta r^{\left( J\right) }\right] }\left\vert W\left( s+r\right) -W\left( s\right) \right\vert >\alpha\right\} \nonumber\\ & \leq\frac{3\left[ \Delta r^{\left( J\right) }\right] ^{2}}{\alpha^{4}}. \end{align} \end{proof} Our next lemma entails passing to a subsequence to achieve the convergence in the desired sense. We state this without proof as it follows easily from standard analysis. 
\begin{lemma} There is a measurable function $I^{\ast }\left( \omega ,s\right) $ such that $I_{N_{j}}\rightarrow I^{\ast }$ in probability and a subsequence converges $a.u.$ in $\omega $ and uniformly in $s.$\label{meassubseq} \end{lemma} With the definition \begin{equation} B_{M}:=\sup_{s\in \left[ x-t,x+t\right] }\left\{ I_{M}\left( x-t\right) -I_{M}\left( s\right) \right\} \end{equation} one has from (\ref{absconv}) the bounds \begin{align} P\left\{ B_{M}<-M^{-2/5}\right\} -KM^{-2/5}&<P\left\{ A\leq 0\right\}\nonumber\\ &<P\left\{ B_{M}<M^{-2/5}\right\} +KM^{-2/5}. \label{7} \end{align} By Lemma \ref{meassubseq} we know that there is a subsequence of $\left\{ I_{N_{j}}\right\} $ converging a.u., although the convergence in measure should be adequate. We consider only the subsequence that converges almost uniformly in $\omega $ and also in measure in $\omega $ uniformly in the $s$ variable. We will just refer to this subsequence as $ \left\{ I_{j}\right\} $ for brevity rather than $\left\{ I_{N_{j}}\right\} .$ By containment of sets in conjunction with (\ref{7}), for $j\in\mathbb{N}$ with $j^{-2/5}<\varepsilon$, one has the inequalities \begin{eqnarray} P\left( B_{j}<-\varepsilon \right) -Kj^{-2/5} &<&P\left\{ B_{j}<-j^{-2/5}\right\} -Kj^{-2/5} \nonumber \\ &<&P\left\{ A\leq 0\right\} \nonumber \\ &<&P\left\{ B_{j}<j^{-2/5}\right\} +Kj^{-2/5}<P\left( B_{j}<\varepsilon \right) +Kj^{-2/5} \label{8} \end{eqnarray} Define, analogous to $B_{M}$ above, the quantity \[ B^{\ast }:=\sup_{s\in \left[ x-t,x+t\right] }\left\{ I^{\ast }\left( x-t\right) -I^{\ast }\left( s\right) \right\} . \] Since Lemma \ref{lemmabound} says that $\left\{ I_{N_{j}}\left( \omega ,s\right) \right\} $ is fundamental in measure uniformly in $s$, we know that there is an $ I^{\ast }$ to which $I_{N_{j}}$ converges in measure. 
Then, using $I_{j}$ in place of $I_{N_{j}}$, we have \[ I_{j}\left( x-t\right) -I_{j}\left( s\right) \rightarrow I^{\ast }\left( x-t\right) -I^{\ast }\left( s\right) \text{ in probability, uniformly in }s. \] Using Lemma \ref{meassubseq} we then have that \begin{eqnarray*} \sup_{s}\left\{ I_{j}\left( x-t\right) -I_{j}\left( s\right) \right\} &\rightarrow &\sup_{s}\left\{ I^{\ast }\left( x-t\right) -I^{\ast }\left( s\right) \right\} \text{ in probability, i.e.,} \\ B_{j} &\rightarrow &B^{\ast }\text{ in probability.} \end{eqnarray*} Convergence in probability implies convergence in distribution, so that we have \[ P\left\{ B_{j}<\alpha \right\} \rightarrow P\left\{ B^{\ast }<\alpha \right\} . \] Using this in $\left( \ref{8}\right) $, we take limits and obtain \begin{equation} P\left\{ B^{\ast }<-\varepsilon \right\} \leq P\left\{ A\leq 0\right\} \leq P\left\{ B^{\ast }<\varepsilon \right\} . \label{9} \end{equation} If we consider only the subsequence that converges almost uniformly, then for any $\delta >0$ there is a set $F$ such that $P\left( F\right) <\delta $ and on $\Omega \ \backslash \ F$ one has uniform convergence of the continuous functions $B_{j}$, so that $B^{\ast }$ is continuous. Hence we can take the limit as $\varepsilon \rightarrow 0$ and obtain from $\left( \ref{9} \right)$ that \begin{equation} P\left\{ B^{\ast }\leq 0\right\} \leq P\left\{ A\leq 0\right\} \leq P\left\{ B^{\ast }\leq 0\right\} . \end{equation} Thus we have the following theorem. \begin{theorem} In the limit $M\rightarrow\infty$, we have that $\mathbb{P}\left\{ w_{M}\left(x,t\right)=g_{M}^{\prime}\left(x-t\right) \right\}\rightarrow\mathbb{P}\left\{w\left(x,t\right) =g^{\prime}\left(x-t\right)\right\}$. In other words, we have convergence in probability of the solution to the discrete problem to that of the continuous problem (\ref{cl}). \end{theorem} Recalling that $A<0$ is equivalent to $I\left(s\right)$ having its minimum at the left endpoint, i.e.
(\ref{Def ABC}), we see that the probability that the minimum is attained at the left endpoint is a limit of probabilities involving the discretizations $I_{M}$. While we have developed these ideas in the context of $H\left(p\right)= \left\vert p\right\vert$, the same methodology can be implemented for general polygonal flux, as the latter simply introduces deterministic drift, and the bounds on Brownian motion remain valid. \section{Analysis of the piecewise linear flux function with Gaussian stochastic process initial data} We now consider the flux function in (\ref{cl}). To be precise, let the flux function $H$ be piecewise linear with slopes given by $\left\{ m_{i}\right\} _{i=1}^{N+1}$ in increasing order and break points $\left\{ c_{i}\right\} _{i=1}^{N}$. The Legendre transform $L$ will consequently have slopes $\left\{ c_{i}\right\} _{i=1}^{N}$ and break points $\left\{ m_{i}\right\} _{i=1}^{N+1}$, with slope $-c_{N+1-i}$ in the interval $I_{i}:=\left[ m_{i},m_{i+1}\right] $, and infinite value outside the interval $\left[ m_{1},m_{N+1}\right] $. When evaluated at the argument $\frac{x-y}{t}$, the function exhibits limiting behavior for small or large time, leading to interesting results in these limits for the solution of the minimization problem. For small time $t$, the function $L$ is infinite outside a small interval, making it more likely that a minimum is obtained at a vertex of $L$. For large $t$, $L$ can be approximated as a single linear segment. In this case the minimum is then likely to be at a vertex of $g$ rather than at a vertex of $L$. In numerical studies, this can be illustrated by comparing the variance in the spatial variable $x$ at given fixed times $t_{1}$, $t_{2}$, etc. As is apparent in Figure \ref{Fig BM}, computed using \cite{SS}, the total variation tends to decrease as a function of $t$. This augments the formal calculations shown in Section 3.
This is despite the fact that the probabilistic variance at a point $x$ increases in time. \begin{figure} \caption{By performing numerical simulations using a finite difference scheme on Burgers' equation for Brownian motion initial conditions, one can see that the longer time behavior exhibits less variance in $x$ and appears smoother, in accordance with the analytic results.} \label{Fig BM} \end{figure} In addition to the intuitive interpretation of the asymptotic limiting behavior of the solutions, we prove a rigorous result for a broad class of initial conditions. We consider a discretized approximation to a Gaussian stochastic process as an initial condition, paired with the piecewise linear flux function. The virtue of this approach is that the result holds for any stochastic process in this class, and is exact without relying on the particular structure of, say, Brownian motion or Brownian bridge. We recall a result from \cite{CA}. \begin{theorem} \label{Thm 3.1}Let $L$ be polygonal convex, as described above, and $g^{\prime }$ be piecewise constant. Then \begin{align} w\left( x,t\right) & =g^{\prime}\left( y^{\ast}\left( x,t\right) \right) \text{ when the minimizer }y^{\ast}\text{ is at a vertex of }L\nonumber\\ & =L^{\prime}\left( \frac{x-y^{\ast}\left( x,t\right) }{t}\right) \text{ when the minimizer }y^{\ast}\text{ is at a vertex of }g \label{Thm 3.1a} \end{align} a.e. in $x$ (for fixed $t>0$) is a solution to (\ref{cl}). Should a minimum occur at a point where both $g$ and $L$ have a vertex, one has a discontinuity in the solution, with $w\left(x-,t\right)$ taking the value of the left limit $g'\left(x-,t\right)$ and $w\left(x,t\right)$ having the value $g'\left(x+,t\right)$. Note that for a fixed $t>0$ and a.e. $x$, one of the cases in (\ref{Thm 3.1a}) occurs. \end{theorem} This theorem will be needed as the final step in proving our main result of this section.
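The variational formula behind Theorem \ref{Thm 3.1} can be evaluated directly on a grid. The following Python sketch is an illustration only: it uses the special case $H(p)=\left\vert p\right\vert$ from Section 3 and a deterministic toy datum $g(y)=y^{2}$ in place of the stochastic initial condition. It first forms the Legendre transform $L(q)=\sup_{p}\left(qp-H(p)\right)$ numerically, confirming that $L$ vanishes on $[m_{1},m_{2}]=[-1,1]$ and blows up outside, and then locates the minimizer $y^{\ast}$ of $tL\left(\frac{x-y}{t}\right)+g(y)$ by grid search, reading off $w(x,t)=g^{\prime}(y^{\ast})$ for an interior minimizer.

```python
import numpy as np

def legendre(H, p_grid, q):
    """Numerical Legendre transform L(q) = sup_p (q*p - H(p)) over a grid."""
    return np.max(q * p_grid - H(p_grid))

H = np.abs                       # flux of Section 3: slopes m1 = -1, m2 = +1
p = np.linspace(-50.0, 50.0, 100_001)

# L is finite (indeed 0) inside [m1, m2] = [-1, 1]; outside, the grid
# supremum grows with the grid radius, signalling L = +infinity there.
inside = legendre(H, p, 0.5)
outside = legendre(H, p, 2.0)
assert abs(inside) < 1e-9 and outside > 40.0

def y_star(g, x, t, n=20_001):
    """Minimize t*L((x-y)/t) + g(y); since L = 0 on [-1, 1] and +inf
    elsewhere, the search reduces to y in [x-t, x+t] where L vanishes."""
    y = np.linspace(x - t, x + t, n)
    return y[np.argmin(g(y))]

g = lambda y: y ** 2             # toy deterministic datum, g'(y) = 2*y
ys = y_star(g, x=0.05, t=0.2)    # interior minimizer near y = 0
w = 2.0 * ys                     # w(x,t) = g'(y*) when the minimizer is interior
```

For the stochastic problem studied below, the same grid search is applied to each realization of $g$, which is exactly the discretization underlying Theorem \ref{Thm 3.2}.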
Before stating the theorem, we clarify the choice of discrete points at which we will sample the stochastic process. This partition will depend on the values of $x$ and $t$. Given $x$ and $t$, the Legendre transform $L$ of the flux function will only be finite on a finite interval, and therein $L^{\prime},$ the quantity of interest, will take $N$ different values. We partition each of these intervals into pieces at which we sample the values of $L'$ and $g'$. In addition, we will need to consider the value at each of the vertices of $L$. One is then left with a \textit{finite} number of points, and computing the minimum amongst these is a far more manageable task than calculating the minimum along a continuum. Indeed, our main result presents an explicit formula for computing this minimum, and thus an exact expression for the expectation value of the solution $w\left( x,t\right) $. One can then consider refining such partitions and taking a limit to the continuum. In a computational context, it may be necessary to first fix a partition, and restrict oneself to that same grid, but we leave the technical details to future works. \begin{definition} \label{Def 1} On the interval $\left[x-m_{N+1}t, x-m_{1}t\right]$ we set grid points for a fixed $x$ and $t$ as follows. Note that $L$ is divided into $N$ segments on which it takes finite values, and has $N+1$ associated vertices. Starting from the leftmost segment, we label the grid points in order as $r_{1,1}, r_{1,2}, ..., r_{1,n_{1}}, r_{2,1}, ..., r_{i,1}, ..., r_{N+1,1}$, where $r_{i,1}$ denotes the $i$th vertex. We number the points that lie between consecutive vertices sequentially using the second index, pairing them with the first index representing the vertex most immediately to the left, i.e. $r_{1,2}, ..., r_{1,n_{1}}$ for the points between the first and second vertex. We denote the total number of points (vertices and points between) by $n$, and define the intervals $I_{i}^{x,t}:=(r_{i,1},r_{i+1,1})$.
\end{definition} We illustrate the definition of these partitions in Figure \ref{Fig PI}. \begin{figure} \caption{Illustration of the partition defined in Definition \ref{Def 1}.} \label{Fig PI} \end{figure} \begin{definition} Let $x$ and $t$ be given and choose partitions of $I_{i}^{x,t}$ as described in Definition \ref{Def 1}. Let the number of such points be denoted by $n_{i} $, and define $\sum_{i}n_{i}=:n$. Define $A_{i,j}$ as the inverse of the covariance matrix of $X\left( s\right) $ with respect to these points $\left\{ s_{i,j}\right\} $ in the subpartition $\left \{r_{j,2}, ...,r_{j,n_{j}}\right \}$ and suppress the second index of $A,s,$ etc. Denote the eigenvalues of $A_{i}$ as $\tilde{\lambda}_{i}$, and define $\tilde{\mu}_{i}$ as the means of each random variable, i.e. \begin{equation} \tilde{\mu}_{i,j}=\mathbb{E}\left\{ g\left( r_{i,j}\right) \right\} + tL\left( \frac{x-r_{i,j}}{t}\right) \quad \text{for } j\neq1, \ 1\leq i\leq N. \end{equation} The matrix $A_{i}$ can be diagonalized by \begin{equation} A_{i}=U_{i}D_{i}U_{i}^{T}. \end{equation} Similarly, define $A,\left\{ \lambda_{i}\right\} ,\left\{ \mu_{i}\right\} $ etc. as the inverse covariance matrix, its eigenvalues, and the means of the random variables for the entire process within the range $(x-t,x+t)$. Define also $Q_{i}$ as follows: \begin{equation} Q_{i}=Q\left(i,x,t\right) :=\min_{y\in I_{i}^{x,t} }\left\{ tL\left( \frac{x-y}{t}\right) +g\left( y \right) \right\} , \label{Thm1} \end{equation} i.e. the minimum on the $i$th segment of $L$. \end{definition} \begin{theorem} \label{Thm 3.2}Let the flux function $H$ in the conservation law (\ref{cl}) be piecewise linear and convex, with $N$ segments and $N+1$ vertices. Let the initial condition $g^{\prime}_{\tilde{N}}\left( s\right)$ be given by a discretized Gaussian stochastic process, as outlined in Definition \ref{Def 1}. The probability that the minimum occurs on the $i$th segment, i.e.
\begin{equation} p_{i}:=\mathbb{P}\left\{ Q\left( y^{\ast}\left( x,t\right) \right) =Q\left( y\left( i,x,t\right) \right) \right\}\text{, i.e., } w(x,t)=c_{N+1-i} \label{Thm3} \end{equation} is given by \begin{align} p_{i} & = \sum_{j=2}^{n_{i}} \left( 2\pi\right) ^{-\frac{n}{2}}\left\vert A\right\vert ^{\frac{1}{2}}\int_{-\infty}^{\infty}dx_{i,j} {\displaystyle\prod\limits_{\left(m,l\right)\neq \left(i,j\right)}}\int_{x_{i,j}}^{\infty} dx_{m,l} e^{-\frac{1}{2}\left( \bar{x}-\tilde{\mu}\right) ^{T}A\left( \bar{x}-\tilde{\mu}\right) }\nonumber\\ &\qquad\text{for } 1\le i \le N.\label{Thm4} \end{align} The probability of such a minimum occurring at a vertex point of $L$ is given by an expression similar to (\ref{Thm4}), with the integral over the $j$th vertex as the outermost integral (treating the vertex as a segment with only one point). Finally, define the event that the minimum occurs at the $j$th vertex of $L$ by $R_{j}$ and set (for notational convenience) $p_{N+i}=\mathbb{P} \left\{R_{i}\right\}$. Then the expected value of the solution is given by \begin{align} \mathbb{E}\left\{ w\left( x,t\right) \right\} = \mathbb{E} \left\{ L^{\prime}\left( \frac{x-y^{\ast}\left( x,t\right) }{t}\right) \right\} \nonumber\\ + \sum_{i=1}^{N+1} \mathbb{P}\left\{ R_{i}\right\} \mathbb{E}\left\{g'(y^{*}(x,t))\right\} \nonumber\\ =-\sum_{i=1}^{N}p_{i}c_{N+1-i} + \sum_{i=N+1}^{2N+1}p_{i}\mathbb{E}\left\{g'(y^{*}(x,t))\right\}\label{Thm5} \end{align} a.e., that is, an average of the slopes of $L$ weighted by the probability of the minimum occurring on the $i$th segment of $L$, plus terms at vertices of $L$. \end{theorem} \begin{remark} In Theorem \ref{Thm 3.2}, we present closed-form expressions for the probability that the solution takes each of a number of values for a given $x$ and $t$, as well as for the expectation of the solution at this point.
In fact, we can also write an expression for the distribution of the local minimum on segment $i$ as a function of its value $s$ by instead taking the last integral in (\ref{Thm4}) up to $s$ rather than $\infty$. We may also write expressions for the cdf of the value of $g+tL$ at a vertex, and for the cdf of the minimum value obtained on just one segment. Consider first, for given $i\in\left\{ 1,\dots,N\right\} $, the domain $I_{i}^{x,t}$ (with parameters $x,t$) where $L$ is a purely affine function. Then the cumulative distribution function (cdf) for the minimum on this $i$th segment, i.e. the quantity $Q_{i}$, is given by \begin{equation} F_{Q_{i}}\left( s\right) :=\left( 2\pi\right) ^{-\frac{n_{i}}{2} }\left\vert A_{i}\right\vert ^{\frac{1}{2}}\int_{-\infty}^{s}...\int_{-\infty }^{s}e^{-\frac{1}{2}\left( \bar{x}-\bar{\mu}\right) ^{T}A_{i}\left( \bar {x}-\bar{\mu}\right) }dx_{i,n_{i}}...dx_{i,1},\label{Thm2} \end{equation} where $\bar{\mu}$ is the vector given by $\mathbb{E}\left\{ g\left( \bar{y}\right) +tL\left( \frac{\bar{x}-\bar{y} }{t}\right) \right\} $ evaluated at the discrete set of points. Furthermore, the cdf of a potential minimum that occurs at the $j$th of the $N+1$ vertices of $L$ is given by \begin{equation} F_{v_{j}}(s) = \left(2\pi\sigma_{j}^{2}\right)^{-\frac{1}{2}} \int_{-\infty}^{s}e^{-\frac{\left(x-\mu_{j}\right)^{2}}{2\sigma_{j}^{2}}}dx, \end{equation} where $\mu_{j}$ and $\sigma_{j}^{2}$ denote the mean and variance of $g+tL$ at the $j$th vertex. \end{remark} \begin{remark} The case of the Gaussian stochastic process includes the deterministic case, i.e. when $p_{i}=1$ for some $i$ dependent on $x$ and $t$ and $p_{j}=0$ for $j\not =i$. \end{remark} \begin{proof} [Proof of Theorem \ref{Thm 3.2}]We consider an initial condition given by the discretized Gaussian stochastic process $g'_{\tilde{N}}(x)$.
The key feature is that, using our methods, the problem of finding the minimum over the entire interval $(x-t,x+t)$ is reduced to finding a minimum over $n$ points, $n_{i}$ of which are on each segment, and $N+1$ of which are at the vertices of $L$. Given $x$ and $t$, these $n$ points are fixed. Since we have a joint probability density for the values of this function at any of these points, it is a matter of integrating over the appropriate region(s) to obtain the desired probabilities. From there we can calculate not only the expectation value $\mathbb{E} w(x,t)$ but the entire distribution of the solution $w$. To this end, let $ \left\{ Y_{j}\right\}_{j=1}^{n}$ be a set of random variables with a multivariate probability density given by \begin{equation} f\left( x;\mu,A_{i}\right) =\left( 2\pi\right) ^{-\frac{n_{i}}{2}}\left\vert A_{i}\right\vert ^{\frac{1}{2}}e^{-\frac{1}{2}\left( \bar{x}-\bar{\mu }\right) ^{T}A_{i}\left( \bar{x}-\bar{\mu}\right) }, \end{equation} where $A_{i}:=\Sigma_{i}^{-1}$, $\Sigma_{i}$ is the covariance matrix of the random variables in the subpartition $\zeta^{i}$, and $\bar{x},\bar{\mu} \in\mathbb{R}^{n_{i}}$ are vectors. Here the components of $\bar{\mu}$ and $\bar{x}$ are related by $\mu_{j}=tL\left( \frac{x-y_{j}}{t}\right) +\mathbb{E}\, g\left( y_{j}\right) $. Thus, $f$ contains all the information we will need to compute the minimum along the $i$th segment of $L$. For simplicity, we write out the probability that the values at the points $r_{i,2}, ..., r_{i,n_{i}}$ are less than or equal to the levels $s_{2},\dots,s_{n_{i}}$.
This is expressed as \begin{align} \mathbb{P}\left\{ Y_{2}\leq s_{2},\dots,Y_{n_{i}}\leq s_{n_{i}}\right\} & =\mathbb{P}\left\{ Y_{j}\in {\displaystyle\bigotimes\limits_{j=2}^{n_{i}}} (-\infty,s_{j}]\right\} \nonumber\\ & =\int_{-\infty}^{s_{2}}...\int_{-\infty}^{s_{n_{i}}}\frac{e^{-\frac{1} {2}\left( \bar{x}-\tilde{\mu}\right) ^{T}A_{i}\left( \bar{x}-\tilde{\mu }\right) }}{\left( 2\pi\right) ^{\frac{n_{i}-1}{2}}\left\vert \Sigma_{i}\right\vert ^{\frac{1}{2}}}dx_{n_{i}}...dx_{2}.\label{onesegprob} \end{align} Note that we have suppressed the first index in $Y_{i,j}$ (which would be $i$) in these quantities over segment $i$ for ease of notation. To find the probability that the minimum on one segment is below a value $s$, we can use the result (\ref{onesegprob}) and sum over all the (mutually exclusive) events that each random variable $Y_{j}$ is the minimum, as follows: \begin{align} \mathbb{P}\left\{ Y_{i}\in(-\infty,s)^{n_{i}}\right\} = \sum_{j=2}^{n_{i}} \left( 2\pi\right) ^{-\frac{n_{i}-1}{2}}\left\vert A_{i}\right\vert ^{\frac{1}{2}}\nonumber\\ \left\{\int_{-\infty }^{s}...\int_{-\infty}^{s}e^{-\frac{1}{2}\left( \bar{x}-\tilde{\mu}\right) ^{T}A_{i}\left( \bar{x}-\tilde{\mu}\right) }dx_{n_{i}}...dx_{2}\right\} ,\label{exprmaxall} \end{align} where we have $(n_{i}-1)$-fold integrals from $-\infty$ to $s$ on the right-hand side of (\ref{exprmaxall}). A similar construction then yields the result (\ref{Thm4}). We now want to use equation (\ref{Thm 3.1a}) from Theorem \ref{Thm 3.1}, which states that the solution $w\left( x,t\right) $ takes the value of $L^{\prime}$ at any point where the minimum is achieved at a vertex of $g$, and the value of $g^{\prime}$ when it is achieved at a vertex of $L$. By accounting for both the possibility of a minimum on one of the $N$ segments and at one of the vertices of $L$, we can write not only the expectation value of the solution at a given point, but its entire distribution.
The expectation value is consequently \begin{align} \mathbb{E}\left\{ w\left( x,t\right) \right\} = \mathbb{E} \left\{ L^{\prime}\left( \frac{x-y^{\ast}\left( x,t\right) }{t}\right) \right\} \nonumber\\ + \sum_{j=1}^{N+1} \mathbb{P}\left\{R_{j}\right\} \mathbb{E}\left\{g'(y^{*}(x,t))\right\} \nonumber\\ =-\sum_{i=1}^{N}p_{i}c_{N+1-i} + \sum_{i=N+1}^{2N+1}p_{i}\mathbb{E}\left\{g'(y^{*}(x,t))\right\}, \end{align} i.e., it is given by the average over these possibilities. The event of a minimum at a vertex of $L$ may be more than a set of measure zero at the discrete level, but one might expect convergence to zero as we take the limit $\tilde{N}\rightarrow \infty$. The situation in regards to these two cases is illustrated in Figure \ref{Fig MinL}. \end{proof} \begin{figure} \caption{We seek the minimum of $tL\left( \frac{x-y}{t}\right) +g\left( y\right) $.} \label{Fig MinL} \end{figure} For a specific realization of the stochastic process $g_{\tilde{N}}\left( x\right) $, the values of the solution $w$ lie within the range $\left\{ c_{i}\right\} $. The solution $w$ takes the value $c_{N+1-i}$ when the minimum is achieved along the $i$th segment. If the minimum is achieved at a vertex of $L$, the expectation of the corresponding term is zero, since the process has mean zero by assumption, so the second set of terms drops out. \begin{remark} As an aside, we note that one can transform the integrals to a different coordinate system and simplify the exponentials to contain only square terms. However, the domain of integration is transformed in such a way that the integration is nontrivial and results in a nested sequence of integrated error functions. By construction, the matrix $A$ is symmetric and real, so the eigenvalues are real and there exists a matrix $U$ such that $U^{T}AU=D$, where $D$ is the diagonal matrix consisting of the eigenvalues. Any repeated eigenvalues are assigned to $\left\{ \lambda_{i}\right\} $ as needed, i.e. $\lambda_{k}=\lambda_{k+1}$ is permitted.
Furthermore, we know $U^{-1}=U^{T}$, so we can write \begin{equation} \left( \bar{x}-\bar{\mu}\right) ^{T}A\left( \bar{x}-\bar{\mu}\right) =\left( \bar{x}-\bar{\mu}\right) ^{T}UDU^{T}\left( \bar{x}-\bar{\mu}\right) =z^{T}Dz=\sum_{i=1}^{n}\lambda_{i}z_{i}^{2}, \end{equation} where \begin{equation} \bar{z}=U^{T}\left( \bar{x}-\bar{\mu}\right) \end{equation} for an orthogonal matrix $U$, so that $\left\vert U\right\vert ^{2}=1$. Hence, given any vector $\mu$ and a real, symmetric matrix $A$, we can write \begin{equation} -\frac{1}{2}\left( \bar{x}-\bar{\mu}\right) ^{T}A\left( \bar{x}-\bar{\mu}\right) =-\frac{1}{2}\sum_{i=1}^{n_{i} }\lambda_{i}z_{i}^{2}. \end{equation} Thus, one has \begin{equation} f\left( z_{j},\left\{ \lambda_{j}\right\} \right) =\left( 2\pi\right) ^{-n/2}\left\vert \Sigma\right\vert ^{-\frac{1}{2}}e^{-\frac{1}{2}\sum _{i=1}^{n_{i}}\lambda_{i}z_{i}^{2}} \end{equation} or equivalently \begin{equation} \mathbb{P}\left\{ Z_{1}\leq z_{1},...,Z_{n_{i}}\leq z_{n_{i}}\right\} =\int...\int_{\Omega}\frac{e^{-\frac{1}{2}\sum_{i=1}^{n}\lambda_{i}\tilde {z}_{i}^{2}}}{\left( 2\pi\right) ^{\frac{n}{2}}\left\vert \Sigma\right\vert ^{\frac{1}{2}}}d\tilde{z}_{n_{i}}...d\tilde{z}_{1}\label{coordChangeOneSeg} \end{equation} over the transformed domain $\Omega$ (rotated and scaled from the original region $\left( -\infty,s\right] ^{n_{i}}$). \end{remark} \section{Exact and approximate results for the density of shocks} We now consider the density of shocks in the solution $w\left( x,t\right) $, i.e. a spatial coordinate $x$ where \begin{equation} w\left( x_{-},t\right) \not =w\left( x_{+},t\right) . \end{equation} This was discussed in the special case $H\left( p\right) =\left\vert p\right\vert $ in Section 3. Recall from Section 4 the notation $I_{i}^{x,t} =(r_{i,1},r_{i+1,1})$ to track the intervals along which $L$ has slope $d_{i}$. From the arguments above, it is clear that at a point $x$ where a shock occurs, the minimizer $y^{\ast}\left( x,t\right)$ jumps from one segment of $L$ to another.
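The orthogonal change of variables used in the remark above is easy to verify numerically. The following sketch (with a randomly generated symmetric positive definite matrix standing in for $A$, purely for illustration) checks that $(\bar{x}-\bar{\mu})^{T}A(\bar{x}-\bar{\mu})=\sum_{i}\lambda_{i}z_{i}^{2}$ with $z=U^{T}(\bar{x}-\bar{\mu})$.

```python
import numpy as np

rng = np.random.default_rng(1)

# a random symmetric positive definite matrix standing in for A
B = rng.normal(size=(5, 5))
A = B @ B.T + 5 * np.eye(5)

lam, U = np.linalg.eigh(A)       # A = U diag(lam) U^T with orthogonal U
x = rng.normal(size=5)
mu = rng.normal(size=5)

z = U.T @ (x - mu)               # rotated coordinates
quad_direct = (x - mu) @ A @ (x - mu)
quad_diag = np.sum(lam * z**2)   # decoupled sum of squares
assert np.isclose(quad_direct, quad_diag)
```

The same rotation is what turns the Gaussian exponent into a product of one-dimensional factors, at the cost of rotating the rectangular domain of integration.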
By continuity, we can assume that for sufficiently small $\varepsilon>0$, $y^{\ast}\left( x-\varepsilon,t\right) $ is located within the same segment in the limit. In other words, one writes \begin{equation} \left\{y^{\ast}\left( x-\varepsilon,t\right)\text{ }|\ \varepsilon<\varepsilon_{0} \right\} \subset I_{i}^{x,t} \backslash\partial I_{i}^{x,t}. \end{equation} Let $\Delta x>0$ and consider the following. Suppose that $y_{1}:=y^{\ast }\left( x-\Delta x,t\right) $ is located where $L$ has slope $d_{1}$ and $y_{2}:=y^{\ast}\left( x,t\right) $ where $L$ has slope $d_{2}$. When we shift the spatial coordinate by the amount $\Delta x$, the value of the quantity to be minimized changes by $R_{1}=-d_{1}\Delta x$ at $y_{1}$ and by $R_{2}=-d_{2}\Delta x$ at $y_{2}$. Thus, the net change is a decrease of $\left( d_{2}-d_{1}\right) \Delta x$ at $y_{2}$ relative to $y_{1}$. This means a shock from this limit can only occur when $d_{2}>d_{1}$. This argument is illustrated in Figure \ref{Min2Segments}. If we let $\Delta x$ be negative, i.e. we consider the points $x$ and $x+\Delta x$ instead of $x-\Delta x$ and $x$, we can have a shock for the case $d_{1}>d_{2}$. We now want to find an expression for the probability density of such shocks. To be more precise, we let $a:=\Delta x\left( d_{2}-d_{1}\right) .$ We want to compute the probability that the minimum shifts from one segment to another. First consider the simpler problem with only two segments, and the probability of the minimum shifting from a particular point on the first segment to a particular point on the second segment. To illustrate this in a simple way, consider the points $r_{1,2}$ and $r_{2,2}$ among the $n_{1}+n_{2}+1$ points in the system. We also use the notation that the random variable $Y_{i,j}$ corresponds to the point $r_{i,j}$, suppressing the subscript $r$.
We compute the probability that the variable $Y_{1,2}$ associated with the point $r_{1,2}$ is at least $s$, $Y_{2,2}$ is in the range between $s$ and $s+a$, and $Y_{1,i}$, $Y_{2,i}$, and $Y_{3,1}$ are all at least $s+a$ for all $i\neq 2$, as follows: \begin{align} & \mathbb{P}\left\{ Y_{1,2}=s_{1,2},Y_{2,2}\in\left( s_{1,2},s_{1,2}+a\right) ,Y_{j,i}\geq s_{1,2}+a\text{ for all }i\neq 2, 1\leq j \leq 3 \right\} \nonumber\\ & = \int_{s_{1,2}}^{s_{1,2}+a}ds_{2,2} {\displaystyle\prod\limits_{\left(m,n\right)\neq\left(1,2\right),\left(2,2\right)}} \int_{s_{1,2}+a}^{\infty}ds_{m,n}\, f\left(s_{1,1},s_{1,2},...,s_{3,1}\right), \end{align} where $f$ is the joint distribution for the variables $s_{1,1},...,s_{3,1}$ at the $n_{1}+n_{2}+1$ points, and we have fixed $s_{1,2}$. Now, we want to divide by $a$ and take the limit. When we differentiate with respect to $x$, the integral with respect to $s_{2,2}$ (in the $(n_{1}+1)$-th argument of $f$) drops out due to the Fundamental Theorem of Calculus, assuming sufficient continuity properties. The probability of transitioning from a minimum on one segment to the other can be written explicitly as follows: \begin{align} \mathbb{P}_{\left(1,2\right),\left(2,2\right)}:&= \lim_{\Delta x \rightarrow0}\frac{1}{\Delta x}\mathbb{P}\left\{ \text{min at }\left(1,2\right)\text{ at }\left(x,t\right)\text{ and min at } \left(2,2\right)\text{ at }\left(x+\Delta x,t\right)\right\}\nonumber\\ &=\left( d_{2}-d_{1}\right)\int_{-\infty}^{\infty}ds_{1,2} {\displaystyle\prod\limits_{(u,v) \neq (1,2), (2,2)}}\int_{s_{1,2}}^{\infty}ds_{u,v}\, f\left(s_{1,1}, s_{1,2}, ..., s_{2,1},s_{2,2}, ..., s_{3,1} \right), \end{align} i.e. the $s_{2,2}$ entry is evaluated at $s_{1,2}$ in $f$. This can be generalized to computing the probability of a transition from one segment to another, which will entail $n_{i}n_{j}$ terms.
These terms correspond to a jump from a minimum at $s$ attained at one of the $n_{i}$ possible points from segment $i$ to one of the $n_{j}$ possible points on segment $j$. Now we extend this computation from one point to another, to all points in a segment $i$ to another segment $j$. Let \begin{align} \tilde{P}_{i,j}\left( s; a\right) :=\mathbb{P} \Big\{ & \min_{2\leq k\leq n_{i}}Y_{i,k}=s,\text{ }\min_{2\leq k\leq n_{j}} Y_{j,k} \in\left(s,s+a\right), \nonumber\\ & \min_{m\neq i,j\text{ or }n=1}Y_{m,n}\geq s+a\Big\},\label{switchsegments} \end{align} that is, the probability that the minimum on segment $i$ equals a given value $s$, that the minimum on a different segment $j$ falls slightly above this same value $s$, and that the minimum over all of the other segments, excluding $i$ and $j$, is above the value $s+a$. Then, as $x$ is changed by an infinitesimal amount, the value of the solution $w\left( x,t\right) $ will flip from $d_{1}$ to $d_{2}$. In the limit $a\downarrow0$ (or equivalently, $\Delta x \downarrow0$), the density of shocks is obtained. One may write an exact expression and take this limit to obtain \begin{figure} \caption{We are interested in conditions under which the global minimizer $y^{\ast}$ moves from one segment of $L$ to another.} \label{Min2Segments} \end{figure} \begin{align} \mathbb{P}_{i,j}:&= \lim_{\Delta x \rightarrow 0}\frac{1}{\Delta x} \mathbb{P}\left\{w\left(x,t\right)=d_{i}\text{ and }w\left(x+\Delta x,t\right)=d_{j}\right\} \nonumber\\ &=\sum_{k=1}^{n_{i}}\sum_{k^{\prime}=1} ^{n_{j}}\left( d_{k^{\prime}}-d_{k}\right)\int_{-\infty}^{\infty}ds {\displaystyle\prod\limits_{(m,n) \neq (i,k), (j,k')}}\nonumber\\ & \int_{s}^{\infty}ds_{m,n}f\left(s_{1,1}, ..., s_{N+1,1} \right)|_{s_{i,k}=s_{j,k'}=s} \label{switchsegmentsall} \end{align} for the density of any transitions of a minimum between segments $i$ and $j$. The joint probability distribution function $f$ is as outlined in (\ref{Thm2}). One can diagonalize this matrix.
Indeed, denoting the eigenvalues of $A$ by $\left\{ \lambda_{m}\right\} _{m=1}^{n}$, $A$ can thus be diagonalized by $U^{T}AU=D$, where the diagonal matrix $D$ is given by $D _{jk}=\delta_{jk}\lambda_{j}$. We may then rewrite the integrand $f\left( \cdot\right) $ in (\ref{switchsegmentsall}) as $\prod_{l=1}^{n-2} e^{-\lambda_{l}z_{l}^{2}}$ and have, similar to the proof of Theorem \ref{Thm 3.2}: \begin{equation} P_{i,j}\left( s\right) =\left( 2\pi\right) ^{-\frac {n}{2}}\sum_{k=1}^{n_{i}}\sum_{k^{\prime}=1}^{n_{j} }\left( d_{k^{\prime}}-d_{k}\right) \left\vert A\right\vert ^{\frac{1}{2}}\int_{\Omega_{(i,k),(j,k')}} {\displaystyle\prod\limits_{l=1}^{n-2}} e^{-y_{l}^{2}}dy_{l},\label{transitionDensityas} \end{equation} where the transformed domains of integration depend upon $k$ and $k^{\prime}$. The density of transitions in a range $\left( s,s+ds\right) $ can be expressed using these results. The integration over the domain $\Omega$ may appear complicated due to the high dimension of the space and the nested nature of the integrals, but one can make some simplifications. For example, as is further illustrated in the next section, the eigenvalue spectrum of $A$ falls off rapidly. Thus, only the smallest few eigenvalues are relevant, and many of the integrands can be well-approximated as $\delta$-functions about $y_{l}=0$. \subsection{Transitions between vertex and non-vertex points} Above, we have considered the question of a transition of the overall minimum from a segment $i$ to a different segment $j$. There is also the possibility that a minimum obtained on one of the $N$ segments of $L$ is replaced by a minimum at a vertex, and vice versa. We will quantify the probability of such an occurrence. First, we comment that without discretization, we would have a transition from $w(x,t)=d_{i}$ to $w(x,t)=g'(r_{j,1})$ if $Y_{i,k}=s$ and $Y_{j,1}\in(s,s-\Delta x (d_{i}+g'(r_{j,1})))$.
Now, with the discrete model, we will evaluate exactly the probability that if $Y_{i,k}=s$, then $Y_{j,1} \in(s,s-\Delta x (d_{i}+g'(r_{j,1})))$, and vice versa. We assume for the first case that $d_{i}+g'\left(r_{j,1}\right)<0$. Recall that the grid we place is for a fixed $x$ and $t$. In this section, we will consider changing $x$ (or similarly, $t$) by such a small amount $\Delta x$ that all of the original grid points within $(x-t,x+t)$ remain, and no new grid points are introduced into the interval $(x+\Delta x -t, x+\Delta x +t)$. Thus, the points $r_{k,l}$ remain unchanged. We also assume that the minimum attained at $(x,t)$ occurs at the vertex $r_{j,1}$ and make the approximation that $L$ still nearly has a vertex at $r_{j,1}$ despite the $\Delta x$ shift. The first step is to identify the condition for the local minimum at the vertex $r_{j,1}$ to overtake $Y_{i,k}$, the absolute minimum over $\mathbb{R}$ located at some $k$th point on the $i$th segment. We use $Y_{i,k}^{new}$ to denote the new local minimum when $L$ is shifted by $\Delta x$. We calculate \begin{align} Y_{i,k}^{new}:=Y_{i,k}-\Delta xd_{i}>Y_{j,1}+g'(r_{j,1})\Delta x =: Y_{j,1}^{new}. \label{gToL} \end{align} In other words, (\ref{gToL}) is the requirement for a transition from a minimum at the vertex of $g$ at $r_{i,k}$ to the vertex of $L$ originally at $r_{j,1}$ that has now shifted. Of course, we also need the condition \begin{align} Y_{i,k}<Y_{j,1}.\label{cond1} \end{align} The condition (\ref{gToL}) is equivalent to \begin{align}\label{cond2} Y_{i,k}-\Delta x(d_{i}+g'(r_{j,1}))>Y_{j,1}. \end{align} Once we have established conditions (\ref{cond1})--(\ref{cond2}), we can calculate this probability using the discrete setup, in which we deal with only the points $\{r_{i,1}, r_{i,2}, ...\}$, i.e. perform the calculation using Gaussian integrals as follows.
Let $\mathbb{P}_{i,j}^{\Delta x}$ be defined as the probability of a transition from segment $i$ of $L$ to the vertex point $r_{j,1}$ of $L$, so that \begin{align} \mathbb{P}_{i,j}^{\Delta x} &:= \sum_{k=1}^{n_{i}}\int_{-\infty}^{\infty}ds\,\mathbb{P}\left\{ Y_{i,k}=s, \ Y_{j,1}\in\left(s,s-\left(d_{i}+g'\left(r_{j,1}\right)\right)\Delta x\right)\right\} \nonumber\\ &=\sum_{k=1}^{n_{i}}\int_{-\infty}^{\infty}ds\int_{s}^{s-\left(d_{i} +g'\left(r_{j,1}\right)\right)\Delta x}dx_{j,1}\nonumber\\ &{\displaystyle\prod\limits_{\left(m,n\right)\neq\left(i,k\right),\left(j,1\right)}} \int_{s}^{\infty}dx_{m,n}\, f\left(x_{1,1},...,x_{N+1,1}\right)\vert_{x_{i,k}=s},\nonumber\\ \mathbb{P}_{i,j} &:= \lim_{\Delta x \rightarrow0} \frac{\mathbb{P}_{i,j}^{\Delta x}}{\Delta x} =-\left(d_{i}+g'\left(r_{j,1}\right)\right)\sum_{k=1}^{n_{i}} \int_{-\infty}^{\infty}ds\nonumber\\ &{\displaystyle\prod\limits_{\left(m,n\right)\neq \left(i,k\right),\left(j,1\right)}}\int_{s}^{\infty}dx_{m,n}\, f\left(...\right)\vert_{x_{i,k}=s,x_{j,1}=s}\nonumber\\ &=\mathbb{P}\left\{w\left(x_{-},t\right)=d_{i}\text{ and }w\left(x_{+},t\right) =g'\left(r_{j,1}\right)\right\}. \end{align} The calculation for the opposite transition, which can only occur when $d_{i}+ g'\left(r_{j,1}\right)$ is positive, is \begin{align} \mathbb{P}_{i,j}:&=\mathbb{P}\left\{w\left(x_{-},t\right)=g'\left(r_{j,1}\right) \text{ and }w\left(x_{+},t\right)=d_{i}\right\}\nonumber\\ &=\lim_{\Delta x\rightarrow0}\frac{1}{\Delta x}\sum_{k=1}^{n_{i}}\int_{-\infty}^{\infty} ds\,\mathbb{P}\left\{Y_{j,1}=s,Y_{i,k}\in\left(s,s+\left(d_{i}+g'\left(r_{j,1}\right)\right)\Delta x\right) \right\}\nonumber\\ &=\left(d_{i}+g'\left(r_{j,1}\right)\right)\sum_{k=1}^{n_{i}}\int_{-\infty}^{\infty}ds {\displaystyle\prod\limits_{\left(m,n\right)\neq\left(i,k\right),\left(j,1\right)}} \int_{s}^{\infty}dx_{m,n}\,f\left(...\right)\vert_{x_{i,k}=x_{j,1}=s}.
\end{align} \section{Applications to selected examples and open problems} The results of the preceding sections apply in broad generality for any kind of randomness that falls under the umbrella of Gaussian stochastic processes. In addition to the general results proven for an arbitrary Gaussian stochastic process, it is also interesting to analyze the behavior of the solution to the conservation law in a few specific cases. In particular, we will illustrate the results for the cases of Brownian motion and Brownian bridge. As these processes correspond to the initial condition $g^{\prime}$, we must also note that $g\left( x\right) $ will be a Gaussian stochastic process for both of these cases, termed integrated Brownian motion and integrated Brownian bridge, respectively. For each of these cases, the covariance matrix takes the special form described in Section 2. \subsection{Brownian motion and Brownian bridge as initial conditions} In cases such as Brownian motion, Brownian bridge, or the Ornstein--Uhlenbeck process, one observes specific structure in the matrices and integrals entailed in potential applications of Theorem \ref{Thm 3.2}, which may make it somewhat more tractable for numerical computation than an arbitrary Gaussian stochastic process. For example, in some cases the relevant inverses of covariance matrices can be neglected aside from a narrow band of elements close to the diagonal. For example, consider an integrated Brownian motion $S\left( t\right) $ and let $\left\{ t_{i}\right\} _{i=1}^{n}$ be given. Observe that the covariance matrix $\Sigma$ for integrated Brownian motion from (\ref{covMatrixIBM}) can be written as \cite{RO} \begin{equation} \Sigma_{ij}=\frac{1}{N^{3}}\min\left( i,j\right) ^{2}\left( \frac{\max\left( i,j\right) }{2}-\frac{\min\left( i,j\right) }{6}\right), \end{equation} so it is symmetric with respect to permuting $i$ and $j$. One of the main quantities of interest in our results is of course the inverse $A$ of this matrix.
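The covariance matrix above and the spectrum of its inverse can be generated directly. The following sketch (with the grid size $N=200$ chosen arbitrarily for illustration) builds $\Sigma$ from the displayed formula, inverts it, and checks that the precision matrix $A$ is positive definite with eigenvalues spread over many orders of magnitude, consistent with the rapid falloff of the Gaussian factors exploited below.

```python
import numpy as np

def ibm_covariance(N):
    """Covariance of integrated Brownian motion sampled at t_i = i/N:
    Sigma_ij = min(i,j)^2 * (max(i,j)/2 - min(i,j)/6) / N^3."""
    i = np.arange(1, N + 1)
    lo = np.minimum.outer(i, i)
    hi = np.maximum.outer(i, i)
    return lo**2 * (hi / 2 - lo / 6) / N**3

N = 200
Sigma = ibm_covariance(N)
A = np.linalg.inv(Sigma)              # precision matrix
eig = np.linalg.eigvalsh(A)           # eigenvalues in ascending order

assert np.allclose(Sigma, Sigma.T)    # symmetric in i and j
assert eig[0] > 0                     # A is positive definite
assert eig[-1] / eig[0] > 1e6         # spectrum spans many orders of magnitude
```

The terminal entry $\Sigma_{NN}=1/3$ recovers the variance of $\int_{0}^{1}W(s)\,ds$, a quick sanity check on the formula.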
Upon taking $N$ large, one observes an interesting property: a substantial fraction of the diagonal elements of the matrix $A$ are concentrated at one value. Numerical computation indicates that this number is close to $14.354...$, independent of $N$. Furthermore, the smallest eigenvalues start off at one value, with many concentrated near it, and then increase sharply, as illustrated in Figure \ref{Fig EV}. In addition, the numerically significant parts of the columns of the matrix $A$ are repetitive. We denote the smallest eigenvalue by $\lambda_{1}$. Since the integrands contain factors of the form \begin{equation} \int e^{-\lambda_{n}z_{n}^{2}}dz_{n}, \end{equation} there is a rapid falloff as $\lambda_{n}$ increases. As a first-order approximation, one may assume that the first $N^{\prime}$ eigenvalues take the value $\lambda_{1}$ exactly, and ignore the rest, as they do not make a meaningful contribution to the solution. From the minimization perspective, we are simply neglecting the terms corresponding to points at which the probability of attaining the minimum is very small. This leads to a simplification in which one sets $\lambda_{i}=\lambda_{1}$ for the first $N^{\prime}$ eigenvalues and ignores the subsequent ones. For the shock densities, one observes a similar phenomenon, and expects a transition probability of the form \begin{align} P_{i,i^{\prime}} & \approx\left( 2\pi\right) ^{-\frac{n} {2}}\int_{-\infty}^{\infty}\sum_{k=1}^{n_{i}}\sum_{k^{\prime}=1} ^{n_{i^{\prime}}}\left( d_{k^{\prime}}-d_{k}\right) \int_{-\infty}^{s }...\int_{-\infty}^{s}e^{-\frac{1}{2}\left( \bar{s}-\tilde{\mu}\right) ^{T}A_{i}\left( \bar{s}-\tilde{\mu}\right) }\nonumber\\ & \cdot e^{-\left( \frac{\lambda_{l}}{2}\left[ U^{T}\left( \bar{s} -\bar{\mu}\right) \right] _{k}\right) ^{2}-\left( \frac{\lambda_{l}} {2}\left[ U^{T}\left( \bar{s}-\bar{\mu}\right) \right] _{k^{\prime} }\right) ^{2}}ds_{N+1,1}...ds_{1,1}ds.
\end{align} where $\tilde{\mu}$ and $A$ now correspond to the mean and inverse covariance matrix on segments $i$ and $i^{\prime}$ together. As noted in Section 2, the covariance matrix for the Brownian bridge, in the range where it is defined, is very similar to that of Brownian motion, despite some key differences between the processes. Outside of $\left[ 0,T\right]$ (under linear transformation), the variance of the integrated Brownian bridge is constant. In applying this initial condition, one must set $g^{\prime}\left( x\right) =g\left( x\right) =0$ outside of a finite range $\left[ 0,T\right]$. Indeed, one observes the same qualitative behavior for the eigenvalues of its inverse covariance matrix, which leads to a result similar to (\ref{Thm4}) for the solution to the conservation law with Brownian bridge initial data. Another interesting application is that of a stationary Ornstein--Uhlenbeck process, i.e. one with a covariance matrix invariant under translation, which may facilitate computations for further results. The advantage of this process is its constant variance, which makes it suitable for physical models in which the randomness is fairly homogeneous, as opposed to Brownian motion, where the process is pinned at $0$ and the variance grows with time. \begin{figure} \caption{When $N$ is large, the diagonal elements of the inverse covariance matrix $A$ are concentrated very close to a particular value, computed to be approximately 14.3... The eigenvalues are also concentrated near one value, and then increase sharply. Larger eigenvalues can be neglected due to the asymptotic behavior of the error function.} \label{Fig EV} \end{figure} \subsection{Approximation to smooth flux} A classical result by Dafermos \cite{D1}, based on shock techniques, describes how solutions to conservation laws with smooth flux can be approximated by those with polygonal flux.
Our theorems can be combined with Dafermos' work in order to use polygonal flux as a building block for an arbitrary smooth flux function, with randomness given by any Gaussian stochastic process. One may also take the limit $N\rightarrow\infty$ and make the partitions $\left\{ r_{i,j}\right\}$ increasingly fine in our solution (\ref{Thm4}). In this case, one should recover the result of \cite{FM}, in which Burgers' equation is considered with Brownian initial conditions and exact solutions are computed. Our approach thus provides an extension of exact, closed-form results that does not rely on the particular properties of a specific flux function. \section{Appendix: results for Gaussian stochastic processes} Here we present sketches of the proofs of several of the results cited in Section 2. \begin{proof} [Proof of Proposition \ref{Prop 2.3}](b) Defining $\left\{ t_{i}\right\} _{i=1}^{n}$ as before, we need to show that we can write \begin{align} S\left( t_{1}\right) & =a_{11}R_{1}+...+a_{1m}R_{m}+\mu_{1}\nonumber\\ & ...\nonumber\\ S\left( t_{n}\right) & =a_{n1}R_{1}+...+a_{nm}R_{m}+\mu_{n}. \end{align} We can write the integral as an approximate sum with interpolation points $r_{j}:=\frac{j}{N}$, so that \begin{equation} S\left( t\right) \approx S_{N}\left( t\right) :=\sum_{j=1}^{N_{t}} \frac{1}{N}W\left( \frac{j}{N}\right) \text{, }\quad\frac{N_{t}}{N}\leq t<\frac{N_{t}+1}{N}. \end{equation} By definition, then, \begin{align} NS_{N}\left( t_{1}\right) & =W\left( \frac{1}{N}\right) +W\left( \frac{2}{N}\right) +...+W\left( \frac{N_{t_{1}}}{N}\right) \nonumber\\ & ...\nonumber\\ NS_{N}\left( t_{n}\right) & =W\left( \frac{1}{N}\right) +W\left( \frac{2}{N}\right) +...+W\left( \frac{N_{t_{n}}}{N}\right) , \label{nsumibm} \end{align} i.e. the only difference between the identities in (\ref{nsumibm}) is where the sum terminates.
Since $W\left( t\right)$ is a Gaussian stochastic process by part (a), we may write \begin{equation} W\left( \frac{i}{N}\right) =b_{i1}R_{1}+...+b_{im}R_{m}+\tilde{\mu}_{i}. \label{bmmultivariate} \end{equation} By substituting (\ref{bmmultivariate}) into (\ref{nsumibm}) and defining new constants \[ a_{1j}:=\frac{b_{1j}+b_{2j}+...+b_{N_{t_{1}},j} }{N} \] (and similarly for the remaining rows), we obtain the result that integrated Brownian motion is also a Gaussian stochastic process by using the Central Limit Theorem for functions (\cite{PK}, p. 399). We do not prove (c) and (d) here, as the ideas are similar. In particular, since a Brownian bridge is a linear combination of $W\left( t\right)$ and $W\left( T\right)$, it can easily be expressed in terms of independent, identically distributed normal random variables. \end{proof} \begin{proof} [Proof of Theorem \ref{Thm 2.1}](a) We first consider the covariance matrix for times $s\leq t$. Since a piecewise drift does not alter the covariance matrix, we assume without loss of generality that this term is zero. Letting $W\left( t\right)$ be a Brownian motion, one observes \begin{align} Cov\left[ \int_{0}^{s}W\left( r\right) dr,\int_{0}^{t}W\left( r\right) dr\right] & =\mathbb{E}\left\{ \int_{0}^{s}W\left( r\right) dr\int _{0}^{t}W\left( r^{\prime}\right) dr^{\prime}\right\} \label{covIBM}\\ & =\int_{0}^{s}\int_{0}^{t}\mathbb{E}\left\{ W\left( r\right) W\left( r^{\prime}\right) \right\} dr^{\prime}dr\nonumber\\ & =\int_{0}^{s}\int_{0}^{t}\min\left\{ r,r^{\prime}\right\} dr^{\prime}dr\nonumber\\ & =s^{2}\left( \frac{t}{2}-\frac{s}{6}\right) . \end{align} The cross term, $\mathbb{E}\left\{ \int_{0}^{s}W\left( r\right) dr\right\} \mathbb{E}\left\{ \int_{0}^{t}W\left( r^{\prime}\right) dr^{\prime }\right\} $, vanishes since $\mathbb{E}\left\{ W\left( t\right) \right\} =0$ for Brownian motion. Using a discretization argument as in \cite{RO}, one may also construct (\ref{covIBM}) from first principles.
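The closed form $s^{2}\left(\frac{t}{2}-\frac{s}{6}\right)$ can be verified directly by evaluating the double integral of $\min\{r,r'\}$ numerically. A minimal sketch (the midpoint rule and the sample times are our own choices for illustration):

```python
# Minimal numerical check of Cov[S(s), S(t)] = int_0^s int_0^t min(r, r') dr' dr
#                                            = s^2 (t/2 - s/6)   for s <= t.
def cov_by_quadrature(s, t, n=500):
    """Midpoint-rule approximation of the double integral of min(r, r')."""
    hr, hq = s / n, t / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * hr
        for j in range(n):
            q = (j + 0.5) * hq
            total += min(r, q)
    return total * hr * hq

def cov_closed_form(s, t):
    """Closed-form covariance of integrated Brownian motion, s <= t."""
    return s * s * (t / 2.0 - s / 6.0)

if __name__ == "__main__":
    s, t = 0.6, 1.0
    assert abs(cov_by_quadrature(s, t) - cov_closed_form(s, t)) < 1e-3
```

The midpoint rule converges here despite the kink of $\min\{r,r'\}$ along the diagonal, since the integrand is Lipschitz.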
(b) Let $Z\left( t\right)$ be an integrated Brownian bridge constructed as in Definition \ref{Def 2.3}. As in part (a), we consider the sum approximation to the Brownian bridge assuming no drift, and consider first only two points $0\leq s\leq t\leq T$. We make note of (\ref{covMatrixIBM}) and also note \begin{equation} Cov\left[ W\left( T\right) ,\int_{0}^{t}W\left( r\right) dr\right] =Cov\left[ W\left( t\right) ,\int_{0}^{t}W\left( r\right) dr\right] \label{covW1} \end{equation} since $W\left( T\right) -W\left( t\right)$ is independent of events at time $t$ and before. Continuing from (\ref{covW1}), we have \begin{align} Cov\left[ W\left( T\right) ,\int_{0}^{t}W\left( r\right) dr\right] & =\mathbb{E}\left\{ W\left( t\right) \int_{0}^{t}W\left( r\right) dr\right\} \nonumber\\ & =\int_{0}^{t}\mathbb{E}\left\{ W\left( t\right) W\left( r\right) \right\} dr\nonumber\\ & =\int_{0}^{t}\mathbb{E}\left\{ \left( W\left( t\right) -W\left( r\right) +W\left( r\right) \right) W\left( r\right) \right\} dr\nonumber\\ & =\int_{0}^{t}\mathbb{E}\left\{ W\left( r\right) ^{2}\right\} dr=\frac{t^{2}}{2}, \label{covMatrixW1Wr} \end{align} noting that the second term in the definition of the covariance vanishes since $\mathbb{E}\left\{ W\left( t\right) \right\} =0$. Expanding the definition of $Z\left( t\right)$ and collecting terms from (\ref{covW1}) and (\ref{covMatrixW1Wr}), one then obtains \begin{equation} Cov\left[ Z\left( s\right) ,Z\left( t\right) \right] =s^{2}\left( \frac{t}{2}-\frac{s}{6}\right) -\frac{1}{T}\frac{t^{2}s^{2}}{4} \label{covMatrixIBMdiscrete} \end{equation} for $0\leq s\leq t\leq T$. Now consider $0\leq s\leq T<t$. Since $Z$ is constant past $T$, we then have \[ Cov\left[ Z\left( s\right) ,Z\left( t\right) \right] =Cov\left[ S\left( s\right) -\frac{s^{2}}{2T}W\left( T\right) ,S\left( T\right) -\frac{T}{2}W\left( T\right) \right]. \] One obtains similar results by (stochastically) reflecting the process to consider it on the interval $\left[ -T,T\right]$ for symmetry.
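Formula (\ref{covMatrixIBMdiscrete}) can also be checked by integrating the bridge kernel directly: for the Brownian bridge $B(r) = W(r) - \frac{r}{T}W(T)$ one has $Cov[B(r),B(r')] = \min(r,r') - \frac{rr'}{T}$, and a double integral of this kernel must reproduce the closed form. A minimal numerical sketch (the quadrature rule and sample times are our own choices):

```python
# Check that the integrated-bridge covariance  s^2 (t/2 - s/6) - s^2 t^2 / (4T)
# equals the double integral over [0,s] x [0,t] of the bridge kernel
# Cov[B(r), B(q)] = min(r, q) - r q / T.
def bridge_cov_by_quadrature(s, t, T, n=500):
    """Midpoint-rule approximation of the double integral of the bridge kernel."""
    hr, hq = s / n, t / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * hr
        for j in range(n):
            q = (j + 0.5) * hq
            total += min(r, q) - r * q / T
    return total * hr * hq

def bridge_cov_closed_form(s, t, T):
    """Closed form (covMatrixIBMdiscrete), valid for 0 <= s <= t <= T."""
    return s * s * (t / 2.0 - s / 6.0) - s * s * t * t / (4.0 * T)

if __name__ == "__main__":
    s, t, T = 0.5, 0.8, 1.0
    assert abs(bridge_cov_by_quadrature(s, t, T) - bridge_cov_closed_form(s, t, T)) < 1e-3
```

The $-\frac{s^2t^2}{4T}$ correction arises from $\int_0^s\!\int_0^t \frac{rq}{T}\,dq\,dr = \frac{s^2t^2}{4T}$, consistent with the derivation above.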
One can also obtain these results by discretizing the process, writing the approximation of the integral, and then passing to the continuum limit using the Central Limit Theorem for functions. We also note that although the process $Z\left( s\right)$ is constant outside the interval $\left[ 0,T\right]$ (or $\left[ -T,T\right]$, depending on the definition), $Z\left( s\right)$ does not vanish for $s>T$ with probability $1$. \end{proof} \section*{Acknowledgments} The author thanks the referee for the review of the manuscript. \section*{Conflict of interest} The author declares no conflicts of interest in this paper. \end{document}
\begin{document} \title{A generalized Calder\'on formula for open-arc diffraction problems: theoretical considerations} \begin{abstract} We deal with the general problem of scattering by open arcs in two-dimensional space. We show that this problem can be solved by means of certain second-kind integral equations of the form $\tilde{N} \tilde{S}[\varphi] = f$, where $\tilde{N}$ and $\tilde{S}$ are first-kind integral operators whose composition gives rise to a generalized Calder\'on formula of the form $\tilde{N} \tilde{S} = \tilde{J}^{\tau}_0 + \tilde{K}$ in a {\em weighted, periodized} Sobolev space. (Here $\tilde{J}^{\tau}_0$ is a continuous and continuously invertible operator and $\tilde{K}$ is a compact operator.) The $\tilde{N} \tilde{S}$ formulation provides, for the first time, a second-kind integral equation for the open-arc scattering problem with Neumann boundary conditions. Numerical experiments show that, for both the Dirichlet and Neumann boundary conditions, our second-kind integral equations have spectra that are bounded away from zero and infinity as $k\to \infty$; to the authors' knowledge these are the first integral equations for these problems that possess this desirable property. This situation is in stark contrast with that arising from the related {\em classical} open-surface hypersingular and single-layer operators $\mathbf{N}$ and $\mathbf{S}$, whose composition $\mathbf{NS}$ maps, for example, the function $\varphi =1$ into a function that is not even square integrable. Our proofs rely on three main elements: 1)~Algebraic manipulations enabled by the presence of integral weights; 2)~Use of the classical result of continuity of the Ces\`aro operator; and 3)~Explicit characterization of the point spectrum of $\tilde{J}^{\tau}_0$, which, interestingly, can be decomposed into the union of a countable set and an open set, both of which are tightly clustered around $-\frac{1}{4}$.
As shown in a separate contribution, the new approach can be used to construct simple spectrally-accurate numerical solvers and, when used in conjunction with Krylov-subspace iterative solvers such as GMRES, it gives rise to dramatic reductions of iteration numbers vs. those required by other approaches. \end{abstract} \section{Introduction\label{intro}} The field of Partial Differential Equations (PDEs) with boundary values prescribed on open surfaces has a long and important history, including significant contributions in the theory of diffraction by open screens, elasticity problems in solids containing cracks, and fluid flow past plates; solutions to such problems have a significant impact on present-day technologies such as wireless transmission, electronics and photonics. From a mathematical point of view, besides techniques applicable to simple geometries, existing solution methods include special adaptations of finite-element and boundary-integral methods that account in some fashion for the singular character of the PDE solutions at edges. Despite much progress in the area over the last sixty years, the field remains challenging: typically only low-frequency open-surface problems can be treated with any accuracy by previous approaches. In this paper we focus on the problem of electromagnetic and acoustic scattering by open arcs. In particular, we introduce certain first-kind integral operators $\mathbf{N}_\omega$ and $\mathbf{S}_\omega$ whose composition gives rise, after an appropriate change to periodic variables, to a generalized Calder\'on formula $\tilde{N}\tilde{S} = \tilde{J}^{\tau}_0 + \tilde{K}$---where $\tilde{J}^{\tau}_0$ is a continuous and continuously invertible operator and where $\tilde{K}$ is a compact operator---together with associated {\em second-kind open-surface integral equations} of the form $\tilde{N} \tilde{S}[\varphi] = f$.
This approach enables, for the first time, treatment of open-arc scattering problems with Neumann boundary conditions by means of second-kind equations. Further, a wide range of numerical experiments~\cite{BrunoLintner2} indicates that, for both the Dirichlet and Neumann boundary conditions, our second-kind integral equations have spectra that are bounded away from zero and infinity as $k\to \infty$, and give rise to high accuracies and dramatic reductions of Krylov-subspace iteration numbers vs. those required by other approaches. These methods and results were first announced in~\cite{BrunoOberwolfach}; succinct proofs of the open-arc Calder\'on formulae, further, were presented in~\cite{BrunoLintner2}. Integral equation methods provide manifold advantages over other methodologies: they do not suffer from the well known pollution~\cite{BabuskaSauterStefan} and dispersion~\cite{Jameson} errors characteristic of finite element and finite difference methods, they automatically enforce the condition of radiation at infinity (without use of absorbing boundary conditions), and they lend themselves to (typically iterative) acceleration techniques~\cite{BleszynskiBleszynskiJaroszewicz,BrunoKunyansky,Rokhlin}---which can effectively take advantage of the reduced dimensionality arising from boundary-integral equations, even for problems involving very high frequencies. Special difficulties inherent in open-surface boundary-integral formulations arise from the solution's edge singularity~\cite{Maue,Stephan,Costabel}. Such difficulties have typically been tackled by incorporating the singularity explicitly in both Galerkin~\cite{Stephan,StephanWendland,StephanWendland2} and Nystr\"om~\cite{AtkinsonSloan,RokhlinJiang,Monch} integral solvers; with one exception (introduced in the contributions~\cite{AtkinsonSloan,RokhlinJiang} and discussed below in some detail), in all of these cases integral equations of the first kind were used.
While providing adequate discretizations of the problem, first-kind integral equations can be poorly conditioned and, for high frequencies, they require large numbers of iterations and long computing times when the accelerated iterative solvers mentioned above are used. (The literature on the singular behavior of open-arc solutions is quite rich and interesting from a historical perspective: it includes the early analysis~\cite{Sommerfeld}, corrections~\cite{BouwkampReview,BouwkampOnBethe} to early contributions~\cite{MeixnerOld,Bethe}, the well-known finite-energy condition introduced in~\cite{Meixner}, the integral equation formulation~\cite{Maue} and subsequent treatments of integral approaches for these problems, leading to the first regularity proof~\cite{Stephan} and the comprehensive treatment~\cite{Costabel}, which establishes, in particular, that for $C^\infty$ open surfaces with $C^\infty$ edges, the integral equation solution of the Dirichlet (resp. Neumann) open-edge problem equals a $C^\infty$ function times an unbounded (resp. bounded) canonical edge-singular function.) As mentioned above, iterative solvers based on first-kind integral equations often require large numbers of iterations and long computing times. Attempts have been made over the years to obtain second-kind open-surface equations and, indeed, second-kind equations for open surfaces were developed previously by exploiting the diagonal character of the logarithmic single layer, at least for the case of the {\em Dirichlet problem for the Laplace equation}~\cite{AtkinsonSloan,RokhlinJiang}. Unfortunately, as shown in~\cite{BrunoLintner2}, direct generalization of such approaches to high-frequency problems gives rise to numbers of iterations that can in fact be much larger than those inherent in first-kind formulations. Efforts were also made to obtain second-kind equations on the basis of the well known Calder\'on formula.
The Calder\'on identity relates the classical single-layer and hypersingular operators $\mathbf{S}_c$ and $\mathbf{N}_c$ that are typically associated with the Dirichlet and Neumann problems on a closed surface $\Gamma_c$: for such closed surfaces the Calder\'on formula reads $\mathbf{N}_c\mathbf{S}_c = -\mathbf{I}/4 + \mathbf{K}_c$, where $\mathbf{K}_c$ is a compact operator in a suitable Sobolev space. Attempts to extend this idea to open surfaces were pursued in~\cite{PovznerSuharesvki,ChristiansenNedelec}. As first shown in~\cite{PovznerSuharesvki}, there indeed exists a related identity for the corresponding single-layer and hypersingular operators $\mathbf{N}$ and $\mathbf{S}$ on an open surface $\Gamma$. As in the closed-surface case, we have $\mathbf{N}\mathbf{S} = -\mathbf{I}/4 + \mathbf{K}$; unfortunately, however, a useful functional setting for the operator $\mathbf{N}\mathbf{S}$ does not appear to exist ($\mathbf{K}$ is not compact). As shown in Appendix~\ref{NSOne}, for example, the composition $\mathbf{N}\mathbf{S}$ maps the constant function $\varphi =1$ on an open surface into a function that tends to infinity at the boundary of $\Gamma$ like $1/d$, where $d$ denotes the distance to the curve edge. In particular, the formulation $\mathbf{N}\mathbf{S}$ cannot be placed in the functional framework put forth in~\cite{Stephan,StephanWendland,StephanWendland2} and embodied by equations~\eqref{Smapping},~\eqref{Nmapping} and Definition~\ref{htil_def} below: a function with $1/d$ edge asymptotics is not an element of $H^{-\frac{1}{2}}(\Gamma)$.
In view of the aforementioned regularity result~\cite{Costabel}---which can be fully exploited numerically through use of Chebyshev expansions~\cite{AtkinsonSloan,Monch} in two dimensions and appropriate extensions~\cite{BrunoLintner3} of the high-order integration methods~\cite{BrunoKunyansky} in three dimensions---and on account of the results of this paper, use of the combination $\mathbf{N}_\omega \mathbf{S}_\omega$ enables low-iteration-number, second-kind, super-algebraically accurate solution of open-surface scattering problems---and thus gives rise to a highly efficient numerical solver for open-surface scattering problems in two and three dimensions; see~\cite{BrunoLintner2,BrunoLintner3}. This paper is organized as follows. After introduction of necessary notations and preliminaries, the main result of this paper, Theorem~\ref{th_1}, is stated in Section~\ref{sec_2}. Section~\ref{three} contains the main elements of the theory leading to a proof of Theorem~\ref{th_1}. Necessary uniqueness and regularity results for the single-layer and hyper-singular weighted operators under a cosine change of variables, which have mostly been known for a number of years (see~\cite[Ch. 11]{VainikkoSaranen} and the extensive literature cited therein), are presented in Section~\ref{SGeneral}; the inclusion of these results renders our text essentially self-contained, and it establishes a direct link between our context and that represented by references~\cite{Stephan,StephanWendland,StephanWendland2}. Building on constructions presented in earlier sections, the proof of Theorem~\ref{th_1} is given in Section~\ref{five}.
Two appendices complete our contribution: Appendix~\ref{Nfact_appendix} presents a version, adequate to our context, of a known expression linking the hypersingular operator and an integro-differential operator containing only tangential derivatives; Appendix~\ref{NSOne}, finally, demonstrates that the image of the composition $\mathbf{N}\mathbf{S}$ of the un-weighted operators $\mathbf{N}$ and $\mathbf{S}$ is not contained in $H^{-\frac{1}{2}}$. \section{Preliminaries\label{sec_2}} Throughout this paper $\Gamma$ is assumed to be a smooth open arc in two-dimensional space. \subsection{Background} As is well known~\cite{StephanWendland,StephanWendland2,VainikkoSaranen}, the Dirichlet and Neumann boundary-value problems for the Helmholtz equation \begin{equation}\label{b_conds} \left\{ \begin{array}{llll} \Delta u +k^2 u = 0 \quad\mbox{outside}\quad \Gamma , & u|_{\Gamma} = f, &f \in H^\frac{1}{2}(\Gamma) & \quad\mbox{(Dirichlet)}\\ \Delta v +k^2 v = 0 \quad\mbox{outside}\quad \Gamma, & \frac{\partial v}{\partial n}|_{\Gamma} = g, & g\in H^{-\frac{1}{2}}(\Gamma) &\quad\mbox{(Neumann)} \end{array}\right. \end{equation} admit unique radiating solutions $u,v\in H^1_\text{loc}(\mathbb{R}^2\setminus \Gamma)$ which can be expressed in terms of single- and double-layer potentials, respectively: \begin{equation}\label{single_l} u(\mathbf{r})= \int_\Gamma G_k(\mathbf{r},\mathbf{r}')\mu(\mathbf{r}') d\ell' \end{equation} and \begin{equation} v(\mathbf{r})= \int_\Gamma \frac{\partial G_k(\mathbf{r},\mathbf{r}')}{\partial \textbf{n}_{\mathbf{r}'}}\nu(\mathbf{r}') d\ell' \end{equation} for $\mathbf{r}$ outside $\Gamma$.
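The layer potentials above solve the PDE away from $\Gamma$ because the kernel itself does. As a quick sanity check, the following sketch verifies by finite differences that the Laplace kernel $G_0(\mathbf{r},\mathbf{r}') = -\frac{1}{2\pi}\ln|\mathbf{r}-\mathbf{r}'|$ (the $k=0$ branch of the kernel $G_k$ used here) is harmonic away from the source point; the evaluation point and step size are our own choices, and the $k>0$ case (which involves the Hankel function) is not checked:

```python
# Finite-difference check that G_0(r, 0) = -(1/2*pi) * ln|r| satisfies
# Laplace's equation away from the singularity at the origin.
import math

def g0(x, y):
    """Two-dimensional Laplace free-space kernel with source at the origin."""
    return -math.log(math.hypot(x, y)) / (2.0 * math.pi)

def laplacian(f, x, y, h=1e-3):
    """Standard 5-point finite-difference Laplacian."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h ** 2

if __name__ == "__main__":
    # away from the origin the discrete Laplacian is O(h^2)-small
    assert abs(laplacian(g0, 0.7, 0.3)) < 1e-3
```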
Here $\textbf{n}_{\mathbf{r}'}$ is a unit vector normal to $\Gamma$ at the point $\mathbf{r}'\in\Gamma$ (we assume, as we may, that $\textbf{n}_{\mathbf{r}'}$ is a smooth function of $\mathbf{r}'\in\Gamma$), and, letting $H_0^{(1)}$ denote the Hankel function of the first kind and order zero, \begin{equation}\label{Gk_def} G_k(\mathbf{r},\mathbf{r}')=\left\{\begin{array}{ll} \frac{i}{4}H_0^{(1)}(k |\mathbf{r}-\mathbf{r}'|),& k > 0 \\ -\frac{1}{2\pi}\ln|\mathbf{r}-\mathbf{r}'|,&k=0 \end{array}\right. , \end{equation} and \begin{equation} \frac{\partial G_k(\mathbf{r},\mathbf{r}')}{\partial \mathbf{n}_{\mathbf{r}'}}=\mathbf{n}_{\mathbf{r}'}\cdot\nabla_{\mathbf{r}'}G_k(\mathbf{r},\mathbf{r}'). \end{equation} Denoting by $\mathbf{S}$ and $\mathbf{N}$ the single-layer and hypersingular operators \begin{equation}\label{Sdef} \mathbf{S}[\mu](\mathbf{r})= \int_\Gamma G_k(\mathbf{r},\mathbf{r}')\mu(\mathbf{r}') d\ell'\quad , \quad \mathbf{r}\in\Gamma, \end{equation} and \begin{equation}\label{Ndef} \begin{split} \mathbf{N}[\nu](\mathbf{r})= &\; \frac{\partial }{\partial \textbf{n}_{\mathbf{r}}}\int_\Gamma \frac{\partial G_k(\mathbf{r},\mathbf{r}')}{\partial \textbf{n}_{\mathbf{r}'}}\nu(\mathbf{r}') d\ell'\\ \stackrel{\mathrm{def}}{=}&\lim\limits_{z\rightarrow 0^+} \frac{\partial }{\partial z}\int_\Gamma \frac{\partial G_k(\mathbf{r}+z\mathbf{n}_{\mathbf{r}},\mathbf{r}')}{\partial \textbf{n}_{\mathbf{r}'}}\nu(\mathbf{r}') d\ell'\quad , \quad \mathbf{r}\in\Gamma, \end{split} \end{equation} the densities $\mu$ and $\nu$ are the unique solutions of the first-kind integral equations \begin{equation}\label{Sbad} \mathbf{S}[\mu]=f \end{equation} and \begin{equation}\label{Nbad} \mathbf{N}[\nu]=g.
\end{equation} As shown in~\cite{Stephan,StephanWendland,StephanWendland2}, the operators $\mathbf{S}$ and $\mathbf{N}$ define bounded and continuously invertible mappings \begin{equation}\label{Smapping} \mathbf{S}:\; \tilde{H}^{-\frac{1}{2}}(\Gamma) \rightarrow H^{\frac{1}{2}}(\Gamma),\quad\mbox{and} \end{equation} \begin{equation}\label{Nmapping} \mathbf{N}:\; \tilde{H}^{\frac{1}{2}}(\Gamma) \rightarrow H^{-\frac{1}{2}}(\Gamma), \end{equation} where, for $s\in \mathbb{R}$, the space $\tilde{H}^s(\Gamma)$ is defined below. \begin{definition}\label{htil_def} Let $G_1$ be a domain in the plane with a smooth boundary $\partial G_1$, let $s\in\mathbb{R}$, and assume $\partial G_1$ contains the smooth open curve $\Gamma$. The Sobolev space $\tilde{H}^{s}(\Gamma)$ is defined as the set of all elements $f\in H^{s}(\partial G_1)$ satisfying $\mathrm{supp}(f)\subseteq\overline\Gamma$. \end{definition} \begin{remark}\label{inv_is_cont} As is well known~\cite[Corollary 2.7]{brezis2010functional}, the inverse $L^{-1}$ of a continuous and invertible (one-to-one and surjective) operator $L$ between two Banach spaces (and, in particular, between two Hilbert spaces such as the Sobolev spaces considered in this text) is also continuous. In view of this fact, above and throughout this text the terms ``invertible continuous operator'', ``invertible bounded operator'', ``bicontinuous operator'', ``continuous operator with continuous inverse'', etc., are used as interchangeable synonyms.
\end{remark} The mapping results~\eqref{Smapping},~\eqref{Nmapping} provide an extension of classical closed-surface results: for a closed Lipschitz surface $\Gamma_c$ and for any $s\in\mathbb{R}$, the closed-surface single-layer and hypersingular operators $\mathbf{S}_c$ and $\mathbf{N}_c$ define bounded mappings \begin{equation}\label{ScmappingHs} \mathbf{S}_c:\; H^{s}(\Gamma_c) \rightarrow H^{s+1}(\Gamma_c), \end{equation} \begin{equation}\label{NcmappingHs} \mathbf{N}_c:\; H^{s+1}(\Gamma_c) \rightarrow H^{s}(\Gamma_c), \end{equation} see e.g.~\cite{Costabel_Old,Nedelec,Kress}. Additionally, the closed-surface potentials satisfy the classical Calder\'on relation \begin{equation}\label{Calderon} \mathbf{N}_c\mathbf{S}_c= -\frac{\mathbf{I}}{4} +\mathbf{K}_c \end{equation} in $H^{s}(\Gamma_c)$, where $\mathbf{K}_c$ is a compact operator. While~\eqref{ScmappingHs} and~\eqref{NcmappingHs} do not apply to open surfaces, the solutions $\mu$ and $\nu$ of the open-surface integral equations~\eqref{Sbad} and~\eqref{Nbad} enjoy significant regularity properties. In particular, letting $d=d(\mathbf{r})$ denote any non-negative smooth function defined on $\Gamma$ which for $\mathbf{r}$ in a neighborhood of each end-point equals the Euclidean distance from $\mathbf{r}$ to the corresponding end-point, and letting $\omega$ denote any function defined on $\Gamma$ such that $\omega/\sqrt{d}$ is $C^\infty$ up to the endpoints, the recent results~\cite{Costabel} establish that if the arc $\Gamma$ and the right-hand-side functions $f$ and $g$ in~(\ref{Sbad}) and~(\ref{Nbad}) are infinitely differentiable we have \begin{equation}\label{CostabelExp1} \mu = \frac{\alpha}{\omega} \end{equation} and \begin{equation}\label{CostabelExp2} \nu = \beta\cdot\omega, \end{equation} where $\alpha$ and $\beta$ are $C^\infty$ functions throughout $\Gamma$.
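On the model arc $\Gamma=[-1,1]$ one may take $\omega(t)=\sqrt{1-t^2}$, so that the Dirichlet density behaves like $1/\sqrt{1-t^2}$ at the edges. The sketch below (node counts are our own choices) illustrates two points used repeatedly in what follows: this edge singularity is integrable, and the substitution $t=\cos\theta$ removes it entirely, turning $\int_{-1}^1 (1-t^2)^{-1/2}dt$ into $\int_0^\pi d\theta = \pi$:

```python
# Integrability of the edge-singular weight 1/sqrt(1 - t^2) on [-1, 1],
# and the flattening effect of the cosine change of variables t = cos(theta).
import math

def naive_midpoint(n=200000):
    """Midpoint rule applied directly in t: converges, but slowly near t = +-1."""
    h = 2.0 / n
    return sum(h / math.sqrt(1.0 - (-1.0 + (i + 0.5) * h) ** 2) for i in range(n))

def cosine_substitution(n=1000):
    """With t = cos(theta) the integrand becomes identically 1 on [0, pi]."""
    h = math.pi / n
    return sum(h for _ in range(n))

if __name__ == "__main__":
    assert abs(cosine_substitution() - math.pi) < 1e-9   # exact up to rounding
    assert abs(naive_midpoint() - math.pi) < 1e-2        # slow endpoint convergence
```

This is precisely the mechanism exploited in the next subsection, where the weighted operators are rewritten in the periodic variable $\theta$.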
The singular behavior in these solutions is thus fully characterized by the factors $d^{-1/2}$ and $d^{1/2}$ in equations~\eqref{CostabelExp1} and~\eqref{CostabelExp2}, respectively. \subsection{Generalized Calder\'on Formula\label{well_cond}} In view of equations~\eqref{CostabelExp1} and~\eqref{CostabelExp2}, for any non-vanishing function $\omega(\mathbf{r})>0$ such that $\omega/\sqrt{d}$ is $C^\infty$ up to the endpoints of $\Gamma$, we define the weighted operators \begin{equation}\label{Somega} \mathbf{S}_\omega[\alpha]= \mathbf{S}\left[\frac{\alpha}{\omega}\right] \end{equation} and \begin{equation}\label{Nomega} \mathbf{N}_\omega[\beta] = \mathbf{N}\left[\beta\cdot\omega\right], \end{equation} and we consider the weighted versions \begin{equation}\label{Sgood} \mathbf{S}_\omega[\alpha]= f \end{equation} and \begin{equation}\label{Ngood} \mathbf{N}_\omega[\beta] = g \end{equation} of the integral equations~\eqref{Sbad} and~\eqref{Nbad}; clearly, in view of the discussion of the previous section, for smooth $\Gamma$ and smooth right-hand sides $f$ and $g$, the solutions $\alpha$ and $\beta$ of~\eqref{Sgood} and~\eqref{Ngood} are smooth up to the endpoints of $\Gamma$. Without loss of generality we use a smooth parametrization $\mathbf{r}(t)=\left(x(t),y(t)\right)$ of $\Gamma$ defined in the interval $[-1,1]$, for which $\tau(t)=|\frac{d \mathbf{r}(t)}{dt}|$ is never zero. For definiteness and simplicity, throughout the rest of the paper we select $\omega$, as we may, in such a way that \begin{equation}\label{canon_sq} \omega(\mathbf{r}(t))= \sqrt{1-t^2}.
\end{equation} The operators $\mathbf{S}_\omega$ and $\mathbf{N}_\omega$ thus induce the parameter-space operators \begin{equation}\label{sOmegaDef} S_\omega[\varphi](t)=\int_{-1}^1 G_k\left(\mathbf{r}(t),\mathbf{r}(t')\right)\frac{ \varphi(t')}{\sqrt{1-t'^2}}\;\tau(t') dt', \end{equation} and \begin{equation}\label{NOmegaDef} N_\omega[\psi](t)=\lim\limits_{z\rightarrow 0^+}\frac{\partial}{\partial z}\int_{-1}^1 \frac{\partial}{\partial \textbf{n}_{\mathbf{r}(t')}}G_k\left(\mathbf{r}(t)+z\textbf{n}_{\mathbf{r}(t)},\mathbf{r}(t')\right) \psi (t') \tau(t')\sqrt{1-t'^2}dt', \end{equation} defined on functions $\varphi$ and $\psi$ of the variable $t$, $-1\leq t\leq 1$; clearly, for $\varphi(t) = \alpha(\mathbf{r}(t))$ and $\psi(t) = \beta(\mathbf{r}(t))$ we have \begin{equation}\label{S_param} \mathbf{S}_\omega[\alpha](\mathbf{r}(t)) = S_\omega[\varphi](t) \end{equation} and \begin{equation}\label{N_param} \mathbf{N}_\omega[\beta](\mathbf{r}(t)) = N_\omega[\psi](t). \end{equation} In order to proceed we further transform our integral operators: using the changes of variables $t=\cos\theta$ and $t'=\cos\theta'$, defining $\textbf{n}_\theta= \textbf{n}_{\mathbf{r}(\cos\theta)}$, and using~\eqref{S_param} and~\eqref{N_param}, we re-express equations~\eqref{Sgood} and~\eqref{Ngood} in the forms \begin{equation}\label{Stilde} \tilde{S}[\tilde{\varphi}]= \tilde f \end{equation} and \begin{equation}\label{Ntilde} \tilde{N}[\tilde{\psi}] = \tilde g, \end{equation} where $\tilde{S}$ and $\tilde{N}$ denote the operators \begin{equation}\label{ssdef} \tilde{S}[\gamma](\theta)=\int_{0}^\pi G_k(\mathbf{r}(\cos\theta),\mathbf{r}(\cos\theta'))\gamma(\theta')\tau( \cos\theta')d\theta' \end{equation} and \begin{equation}\label{NNdef} \tilde{N}[\gamma](\theta)=\lim\limits_{z\rightarrow 0^+}\frac{\partial}{\partial z}\int_{0}^\pi \frac{\partial}{\partial \textbf{n}_{\theta'}}G_k(\mathbf{r}(\cos\theta)+z\textbf{n}_{\theta},\mathbf{r}(\cos\theta'))\gamma(\theta')
\tau(\cos\theta')\sin^2\theta'\,d\theta', \end{equation} and where \begin{equation}\label{fg_theta} \tilde f(\theta) = f(\mathbf{r}(\cos\theta))\quad,\quad \tilde g(\theta) = g(\mathbf{r}(\cos\theta)); \end{equation} clearly, the solutions of equations~\eqref{Stilde} and~\eqref{Ntilde} are related to those of~\eqref{Sgood} and~\eqref{Ngood} by \begin{equation}\label{phi_psi_theta} \tilde \varphi(\theta) = \varphi(\cos\theta)\quad,\quad \tilde \psi(\theta) = \psi(\cos\theta). \end{equation} In view of the symmetries induced by the $\cos\theta$ dependence in equations~\eqref{ssdef} through \eqref{fg_theta}, it is natural to study the properties of these operators and equations in appropriate Sobolev spaces $H^s_e(2\pi)$ of $2\pi$-periodic and even functions defined below; cf. \cite{YanSloan,BrunoHaslam}. \begin{definition} Let $s\in \mathbb{R}$. The Sobolev space $H^s_e(2\pi)$ is defined as the completion of the space of infinitely differentiable $2\pi$-periodic and even functions defined on the real line with respect to the norm \begin{equation} \|v\|_{H^s_e(2\pi)}^2 = |a_0|^2 + 2\sum\limits_{m=1}^\infty m^{2s}|a_m|^2 , \end{equation} where $a_m$ denotes the $m$-th cosine coefficient of $v$: \begin{equation} v(\theta)=\frac{1}{2}a_0 + \sum\limits_{m=1}^\infty a_m \cos( m\theta ). \end{equation} \end{definition} Clearly the set $\{\cos(n\theta):n\in \mathbb{N} \}$ is a basis of the Hilbert space $H^s_e(2\pi)$ for all $s$. For notational convenience we also introduce corresponding discrete sequence spaces $h^s$, $s\geq 0$, and $\ell^2$. \begin{definition} Let $s\geq 0$. The Hilbert space $h^s$ is defined as the space of all sequences $a=(a_n)_{n\in\mathbb{N}} $ of complex numbers with finite norm $||a||_{h^s}<\infty$, where the discrete $s$-norm $||\cdot ||_{h^s}$ is defined by \begin{equation} ||a||^2_{h^s} = |a_0|^2+2\sum\limits_{n=1}^\infty |a_n|^2 n^{2s}, \end{equation} and which carries the natural associated scalar product. We also define $\ell^2 = h^0$.
\end{definition} The main purpose of this paper is to establish the following theorem. \begin{theorem}\label{NStheorem}\label{th_1} The composition $\tilde{N}\tilde{S}$ defines a bicontinuous operator from $H^s_e(2\pi)$ to $H^s_e(2\pi)$ for all $s>0$. Further, this operator satisfies a generalized Calder\'on formula \begin{equation}\label{NSfact} \tilde{N}\tilde{S}= \tilde J^\tau_0 +\tilde K, \end{equation} where ${\tilde K}: H^s_e(2\pi) \rightarrow H^s_e(2\pi)$ is a compact operator, and where $\tilde J^\tau_0: H^s_e(2\pi) \rightarrow H^s_e(2\pi)$ is a bicontinuous operator, independent of $k$, with point spectrum equal to the union of the discrete set $\Lambda_\infty=\{\lambda_0=-\frac{\ln2}{4},\; \lambda_n=-\frac{1}{4}-\frac{1}{4n}:n>0\}$ and a certain open set $\Lambda_s$ which is bounded away from zero and infinity. The sets $\Lambda_s$ are nested and form a decreasing sequence, and they satisfy $\bigcap_{s>0}\bar{\Lambda}_s=\{-\frac{1}{4}\}$, where $\bar{\Lambda}_s$ denotes the closure of $\Lambda_s$. In addition, the operators \begin{equation} \tilde{S}:\; H^s_e(2\pi) \rightarrow H^{s+1}_e(2\pi)\quad\mbox{and} \end{equation} \begin{equation}\label{Ndomain_3} \tilde{N}:\; H^{s+1}_e(2\pi) \rightarrow H^{s}_e(2\pi) \end{equation} are bicontinuous. \end{theorem} We thus see that, through introduction of the weight $\omega$ and use of spaces of even and $2\pi$-periodic functions, a picture emerges for the open-surface case that resembles closely the one found for closed-surface configurations: the generalized Calder\'on relation~\eqref{NSfact} is analogous to the Calder\'on formula~\eqref{Calderon}, and mapping properties in terms of the complete range of Sobolev spaces are recovered for $\tilde{S}$ and $\tilde{N}$, in close analogy to the framework embodied by equations~\eqref{ScmappingHs} and~\eqref{NcmappingHs}. In the remainder of this paper we present a proof of Theorem~\ref{NStheorem}.
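As a side illustration of the sequence spaces introduced above (not part of the development that follows), the discrete $s$-norm can be evaluated directly from cosine coefficients. The following Python sketch uses the arbitrary sample decay rate $a_n=(n+1)^{-2}$, for which membership in $h^s$ holds precisely for $s<3/2$; the truncation length and the example coefficients are illustrative choices, not taken from the text.

```python
import numpy as np

def hs_norm(a, s):
    """Discrete s-norm: ||a||_{h^s}^2 = |a_0|^2 + 2 * sum_{n>=1} |a_n|^2 n^{2s}."""
    a = np.asarray(a, dtype=float)
    n = np.arange(1, len(a))
    return np.sqrt(a[0] ** 2 + 2.0 * np.sum(np.abs(a[1:]) ** 2 * n ** (2.0 * s)))

# Sample coefficients a_n = (n+1)^{-2}: the corresponding cosine series lies
# in h^s (hence the associated function in H^s_e(2*pi)) exactly for s < 3/2.
a = (np.arange(0, 20000) + 1.0) ** -2.0
norm0 = hs_norm(a, 0.0)   # finite
norm1 = hs_norm(a, 1.0)   # finite and larger: the norms increase with s
```

The monotonicity of the norms in $s$ reflects the nesting $h^s\subset h^{s'}$ for $s>s'$.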
This proof is based on a number of elements, the first one of which, presented in Section~\ref{three}, concerns the operator $\tilde J^\tau_0$ in~\eqref{NSfact}---which corresponds, in fact, to the zero-frequency/straight-arc version of Theorem~\ref{NStheorem}. \section{Straight arc at zero frequency: operators $\tilde J_0$ and $\tilde J^\tau_0$\label{three}} \subsection{Preliminary properties of the operators $\tilde{S}_0$, $\tilde{N}_0$ and other related operators\label{prelim_prop}} In the case in which $\Gamma$ is the straight arc $[-1,1]$ and $k=0$, $\tilde S$ reduces to Symm's operator~\cite{YanSloan,BrunoHaslam} \begin{equation}\label{S_0} \tilde{S}_0 [\tilde \varphi] (\theta ) = -\frac{1}{2\pi}\int_0^\pi \ln|\cos\theta-\cos\theta'| \tilde \varphi( \theta' ) d\theta', \end{equation} for which the following lemma holds. \begin{lemma}\label{lemma1} The operator $\tilde{S}_0$ maps $H^s_e(2\pi)$ into $H^{s+1}_e(2\pi)$, and \begin{equation}\label{S0reg} \tilde{S}_0:\quad H^s_e(2\pi) \rightarrow H^{s+1}_e(2\pi)\quad \mbox{is bicontinuous for all }s\geq 0. \end{equation}\end{lemma} \begin{proof} It follows from the weak singularity of the kernel in equation~\eqref{S_0} that \begin{equation}\label{S0cont} \tilde{S}_0:\quad H^0_e(2\pi) \rightarrow H^{0}_e(2\pi)\quad \mbox{is a continuous operator}. \end{equation} Furthermore, taking into account the well documented diagonal property~\cite{MasonHandscomb} \begin{equation}\label{S0Diag} \tilde{S}_0[e_n] = \lambda_n e_n, \quad \lambda_n= \left\{ \begin{array}{cc}\frac{\ln 2}{2},& n=0\\ \frac{1}{2n}, & n\geq 1 \end{array} \right.
\end{equation} of Symm's operator in the basis $\{e_n: n\geq 0\}$ of $H^s_e(2\pi)$ defined by \begin{equation}\label{basis} e_n(\theta) = \cos n\theta, \end{equation} we see that, for every basis element $e_n, n\geq 0$, the operator $\tilde{S}_0$ coincides with the diagonal operator defined by \begin{equation} W[f]=\sum\limits_{n\geq 0}\lambda_n f_n e_n \quad \mbox{for} \quad f =\sum\limits_{n\geq0} f_n e_n \in H^s_e(2\pi). \end{equation} Clearly the operator $W:\quad H^s_e(2\pi) \rightarrow H^{s+1}_e(2\pi)$ is bicontinuous for all $s\geq 0$, and it is in particular a continuous operator from $H^0_e(2\pi)$ into $H^0_e(2\pi)$. The continuous operators $\tilde{S}_0$ and $W$ thus coincide on the dense set $\{e_n \}$ of $H^0_e(2\pi)$, and they are therefore equal throughout $H^0_e(2\pi)$. It follows that $\tilde{S}_0 = W$ maps $H^s_e(2\pi)$ into $H^{s+1}_e(2\pi)$ bicontinuously, and the proof is complete. \end{proof} The corresponding zero-frequency straight-arc version $\tilde{N}_0$ of the operator $\tilde N$, in turn, is given by \begin{equation}\label{N0def0} \tilde{N_0}[\tilde{\psi}](\theta)=\frac{1}{4\pi}\lim\limits_{z\rightarrow 0}\frac{\partial^2}{\partial z^2}\int_{0}^\pi \ln| (\cos\theta - \cos\theta')^2 + z^2 | \tilde{\psi}(\theta')\sin^2\theta' d\theta', \end{equation} which, following~\cite{Kress,ColtonKress1,Monch} we express in the form \begin{equation}\label{N0Def} \tilde{N}_0 = \tilde D_0 \tilde{S}_0 \tilde T_0 \end{equation} where \begin{equation}\label{D0Def} \tilde D_0[\tilde \varphi](\theta) = \frac{1}{\sin\theta}\frac{d\tilde \varphi (\theta)}{d\theta} \end{equation} and \begin{equation}\label{T0Def} \tilde T_0[\tilde \varphi](\theta) = \frac{d}{d \theta}\left(\tilde\varphi(\theta) \sin\theta\right). \end{equation} (The general curved-arc arbitrary-frequency version of this relation is presented in Lemma~\ref{Nfact_lemma} below and, for the sake of completeness, a derivation of the general relation is provided in Appendix~\ref{Nfact_appendix}.) 
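The diagonal relation~\eqref{S0Diag} can be checked numerically by direct quadrature of the logarithmic kernel in~\eqref{S_0}. The following Python sketch does this with a plain midpoint rule; the grid size, evaluation point and tolerance are ad hoc choices (not taken from the text), and the weakly singular integrand makes this simple rule converge slowly but sufficiently for the check.

```python
import numpy as np

def symm_S0(n, theta, M=200000):
    """Midpoint-rule quadrature of Symm's operator applied to e_n = cos(n*theta),
    i.e. of -(1/(2*pi)) * int_0^pi ln|cos(theta)-cos(theta')| cos(n*theta') dtheta'."""
    tp = (np.arange(M) + 0.5) * np.pi / M          # midpoints of [0, pi]
    integrand = np.log(np.abs(np.cos(theta) - np.cos(tp))) * np.cos(n * tp)
    return -integrand.sum() * (np.pi / M) / (2.0 * np.pi)

theta = 1.1                                        # arbitrary evaluation point
lam = [np.log(2) / 2] + [1.0 / (2 * n) for n in range(1, 6)]
errs = [abs(symm_S0(n, theta) - lam[n] * np.cos(n * theta)) for n in range(6)]
```

Agreement to a few digits is obtained already with this crude rule, since the logarithmic singularity is integrable.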
Note that, in contrast with the closed-arc case~\cite[p. 117]{Kress}, the expressions~\eqref{N0Def} through~\eqref{T0Def} contain the vanishing factor $\sin\theta$ and the singular factor $1/\sin\theta$; in particular, for example, it is not immediately clear that the operator $\tilde{N}_0$ maps $H^{s+1}_e(2\pi)$ into $H^{s}_e(2\pi)$. This result is presented in Corollary~\ref{coro-3}. In preparation for our proofs of that and other straight-arc zero-frequency results in the following sections, in the remainder of this section we establish a preliminary continuity result for the operator $\tilde{D}_0$. \begin{lemma}\label{D0lemma} The operator $\tilde{D}_0$ defines a bounded mapping from $H^2_e(2\pi)$ into $H^0_e(2\pi)$. \end{lemma} \begin{proof} Let $\tilde\varphi=\sum_{n=0}^\infty \tilde\varphi_n e_n$ be an element of $H^2_e(2\pi)$. We assume at first that $\tilde\varphi_{2p+1} = 0$ for all integers $p\geq 0$. Let $P>0$ and $\tilde\varphi^P=\sum_{p=0}^P\tilde\varphi_{2p}e_{2p}$; clearly $\tilde\varphi^P$ converges to $\tilde\varphi$ in $H^2_e(2\pi)$ as $P\to\infty$. We have \begin{equation} \tilde D_0[\tilde\varphi^P](\theta)=-\sum\limits_{p=0}^P 2p \tilde \varphi_{2p} \frac{\sin(2p\theta)}{\sin \theta}. \end{equation} In view of the identity \begin{equation}\label{Un} \frac{\sin(n+1)\theta}{\sin \theta}=\left\{\begin{array}{ll} \sum\limits_{k=0}^p (2-\delta_{0k})\cos2k\theta, & n=2p \\ 2\sum\limits_{k=0}^p \cos(2k+1)\theta, & n=2p+1, \end{array}\right. \end{equation} (which, expressed in terms of Chebyshev polynomials of the first and second kind, is given e.g. as equation (40) in~\cite[p. 187]{Bateman} and problem~3 in~\cite[p. 36]{MasonHandscomb}), we obtain \begin{equation}\label{D0dec} \tilde D_0[\tilde\varphi^P]=-2\sum\limits_{k=1}^{P} \left(\sum\limits_{p = k}^\infty 2p\tilde\varphi_{2p}^P\right)e_{2k-1} \end{equation} where \[ \tilde\varphi_{2p}^P = \left\{\begin{array}{cc} \tilde\varphi_{2p}, & p\leq P \\ 0, & p> P\end{array}\right. .
\] The quantity in parenthesis on the right-hand-side of equation~\eqref{D0dec} can be expressed in terms of the adjoint $C^*$ of the discrete Ces\`aro operator $C$, where $C$ and $C^*$ are given by \begin{equation} C[g](n)=\frac{1}{n+1}\sum\limits_{k=0}^n g_k, \quad C^*[g](k)=\sum\limits_{p=k}^\infty \frac{g_p}{p+1}; \end{equation} as there follows from~\cite{BrownHalmosShields}, $C$ and $C^*$ define bounded operators from $\ell^2$ into $\ell^2$. We thus re-express equation~\eqref{D0dec} as \begin{equation}\label{D0dec2} \tilde{D}_0[\tilde\varphi^P]=-2\sum\limits_{k=1}^{P} C^*[g^P](k)e_{2k-1} , \end{equation} where the sequence $g^P$ is given by $g^P_p=2p(p+1)\tilde{\varphi}^P_{2p}$. Clearly $g^P$ is an element of $\ell^2$ and we have \begin{equation}\label{gtophi} ||g^P||_{\ell^2} =\left(\sum\limits_{p=1}^P |g_p^P|^2\right)^{\frac 12} \leq \left(\sum\limits_{p=1}^P (2p)^4 |\tilde\varphi_{2p}^P|^2\right)^{\frac 12}\leq 2^{1/2} ||\tilde\varphi^P||_{H^2_e(2\pi)}. \end{equation} In view of the boundedness of $C^*$ as an operator from $\ell^2$ into $\ell^2$ we obtain \begin{equation} ||\tilde{D}_0[\tilde\varphi^P]||_{H^0_e(2\pi)}\leq 2 ||C^*||_{\ell^2} ||g^P||_{\ell^2}, \end{equation} and thus, in view of~\eqref{gtophi}, \begin{equation} ||\tilde{D}_0[\tilde\varphi^P]||_{H^0_e(2\pi)}\leq 2^{3/2}||C^*||_{\ell^2} ||\tilde \varphi^P||_{H^2_e(2\pi)}. \end{equation} A similar manipulation on odd-termed sequences can be performed and, in all, it follows that for any $P>0$ and any element $\tilde\varphi=\sum_{n=0}^\infty \tilde\varphi_n e_n\in H^2_e(2\pi)$ the corresponding finite-term truncation $\tilde \varphi^P=\sum_{n=0}^P \tilde \varphi_n e_n$ satisfies \begin{equation}\label{D0bound} ||\tilde{D}_0[\tilde\varphi^P]||_{H^0_e(2\pi)}\leq K ||\tilde \varphi^P||_{H^2_e(2\pi)} \end{equation} for some constant $K$ which does not depend on $P$. 
Since $\tilde \varphi^P$ converges in $H^2_e(2\pi)$ to $\tilde \varphi$ as $P\to\infty$, it follows that $\tilde \varphi^P$ is a Cauchy sequence in $H^2_e(2\pi)$: \begin{equation}\label{cauchy} ||\tilde \varphi^P - \tilde \varphi^Q||_{H^2_e(2\pi)}\to 0 \quad \mbox{as} \quad P,\, Q\to\infty. \end{equation} Now, $(\tilde \varphi^P - \tilde \varphi^Q)$ can be viewed as a finite-term truncation of an element of $H^2_e(2\pi)$ and, thus, the estimate~\eqref{D0bound} applies to it: we obtain \begin{equation}\label{D0bound2} ||\tilde{D}_0[\tilde\varphi^P] - \tilde{D}_0[\tilde\varphi^Q]||_{H^0_e(2\pi)}\leq K ||\tilde \varphi^P - \tilde \varphi^Q||_{H^2_e(2\pi)}. \end{equation} Equations~\eqref{cauchy}-\eqref{D0bound2} show that $\tilde{D}_0[\tilde\varphi^P]$ is a Cauchy sequence in $H^0_e(2\pi)$. It follows that the sequence $\tilde{D}_0[\tilde\varphi^P]$ converges in that space as $P\to\infty$ and, in particular, that $\tilde{D}_0[\tilde\varphi]$ is an element of $H^0_e(2\pi)$. Taking the limit as $P\to\infty$ in equation~\eqref{D0bound}, finally, yields the inequality \[ ||\tilde{D}_0[\tilde\varphi]||_{H^0_e(2\pi)}\leq K ||\tilde\varphi||_{H^2_e(2\pi)} \] which establishes the needed boundedness of the operator $\tilde{D}_0$. The proof is now complete. \end{proof} \begin{corollary}\label{N0cor} The operator $\tilde{N}_0$ defines a bounded mapping from $H^2_e(2\pi)$ into $H^0_e(2\pi)$. \end{corollary} \begin{proof} The proof follows from Lemma~\ref{D0lemma}, the decomposition~\eqref{N0Def}, the continuity of $\tilde{S}_0$ established in~\eqref{S0reg}, and the easily verified observation that $\tilde{T}_0$ defines a continuous operator from $H^{s+1}_e(2\pi)$ to $H^s_e(2\pi)$: \begin{equation}\label{T0map} \tilde{T}_0:\quad H^{s+1}_e(2\pi) \rightarrow H^s_e(2\pi). \end{equation} \end{proof} A preliminary boundedness result for the composite operator $\tilde J_0 = \tilde{N}_0\tilde{S}_0$ follows from this Corollary.
\begin{corollary}\label{J0cor} The straight-arc zero-frequency version of the composite operator $\tilde N \tilde S$, which is given by \begin{equation}\label{j0def} \tilde J_0 = \tilde N_0\tilde S_0, \end{equation} defines a bounded operator from $H^{1}_e(2\pi)$ into $H^0_e(2\pi)$. \end{corollary} In the following section we show that, as stated in Theorem~\ref{th_1} for the related operator $\tilde J^\tau_0$ (cf. Section~\ref{J_0_tau}), not only does $\tilde J_0$ define a continuous operator from $H^{1}_e(2\pi)$ into $H^0_e(2\pi)$ (Corollary~\ref{J0cor}): $\tilde J_0$ can also be viewed as a continuous operator from $H^s_e(2\pi)$ into $H^s_e(2\pi)$ for all $s\geq 0$. \subsection{Boundedness of $\tilde{J}_0$ in $H^s_e$ and link with the continuous Ces\`aro operator\label{cos_basis}} The continuity proof presented in this section is based in part on the following lemma, whose proof relies on use of a certain operator $\tilde{C}$ related to the {\em continuous} Ces\`aro operator. (Note that the constructions in Section~\ref{prelim_prop} invoke properties of the {\em discrete} Ces\`aro operator, instead). \begin{lemma}\label{Cprop} For all $s\geq 0$ the integral operator \begin{equation}\label{Cfact} {\tilde C}[\tilde \varphi](\theta)= \frac{\theta(\pi-\theta)}{\pi\sin\theta}\left[ \frac{1}{\theta}\int_0^\theta \tilde \varphi(u) du - \frac{1}{\pi-\theta}\int_\theta^\pi \tilde \varphi(u) du \right] \end{equation} maps $H^s_e(2\pi)$ continuously to itself. Furthermore, \begin{equation}\label{Cdef} {\tilde C}[e_n] (\theta ) = \left\{\begin{array}{ll} 0& \text{for $n=0$}\\ \frac{\sin n\theta}{n\sin\theta} & \text{for $n>0$}. \end{array}\right. \end{equation} \end{lemma} \begin{proof} The relation~\eqref{Cdef} results from simple manipulations.
The integral operator on the right-hand side of equation~\eqref{Cfact} in turn, can be expressed in terms of the continuous Ces\`aro operator \begin{equation}\label{expr_ces} C[f](x)=\frac{1}{x}\int_0^x f(u)du=\int_0^1f(xu)du. \end{equation} As is known~\cite{BrownHalmosShields}, $C$ is a bounded operator from $L^2[0,b]$ into $L^2[0,b]$ (the space of square-integrable functions over $[0,b]$) for all $b>0$. In view of the relations~\eqref{Cdef} it follows that the operator $\tilde C$ can be extended in a unique manner as a bounded operator from $H^0(2\pi)$ to $H^0(2\pi)$. Taking into account equation~\eqref{expr_ces}, further, for each $f\in C^\infty_0[0,b]$, $m\in \mathbb{N}$ and $x\in[0,b]$ we obtain \begin{equation} \left|\frac{\partial^m {C}[f](x)}{\partial x^m}\right|^2\leq \left(\int_0^1 \left|u^mf^{(m)}(xu)\right|du\right)^2 \leq \left(\int_0^1 \left|f^{(m)}(xu)\right|du\right)^2 = \left(C\left[g\right](x)\right)^2 \end{equation} where $g=\left|f^{(m)}\right|$. Integrating this inequality with respect to $x$ and taking into account the boundedness of $C$ as an operator from $L^2$ to $L^2$ there results \begin{equation} \int_0^{2\pi}\left|\frac{\partial^m {C}[f](x)}{\partial x^m}\right|^2 dx\leq M||f^{(m)}||_{L_2[0,2\pi]}^2 \end{equation} for some constant $M$. It follows easily from this inequality that $\tilde{C}$ is a continuous operator from $H^m_e(2\pi)$ into $H^m_e(2\pi)$ for all non-negative integers $m$. Letting $H^m(2\pi)$ be the space of $2\pi$ periodic functions whose derivatives of order $k$ are square integrable in any bounded set of the line for all integers $k\leq m$ (c.f.~\cite{Kress}) we see that $\tilde{C}$ equals the restriction to $H^m_e(2\pi)$ of some continuous operator $\tilde{P}: H^m(2\pi)\to H^m(2\pi)$: we may simply take, for example, $\tilde{P}$ to equal $\tilde{C}$ on the subspace of even functions and to equal 0 on the space of odd functions. 
In view of the Sobolev interpolation result (see e.g.~\cite[Theorem 8.13]{Kress}), $\tilde{P}$ defines a continuous operator from $H^s(2\pi)$ to $H^s(2\pi)$ for all $s\geq 0$, and thus, by restriction of $\tilde{P}$ to the subspace of even and periodic functions we see that $\tilde{C}$ is a continuous operator from $H^s_e(2\pi)$ to $H^s_e(2\pi)$ for all $s\geq 0$. \end{proof} Our main result concerning the operator $\tilde{J}_0$ is given in the following lemma. \begin{lemma}\label{J0bounded} The composition $\tilde J_0= \tilde N_0\tilde S_0$ defines a bounded operator from $H^s_e(2\pi)$ into $H^s_e(2\pi)$ for all $s \geq 0$. \end{lemma} \begin{proof} We first evaluate the action of $\tilde J_0$ on the basis $\{e_n:n\geq 0\}$. The case $n=0$ is straightforward: in view of~\eqref{S0Diag} and~\eqref{N0Def} we have \begin{equation}\label{0case} \tilde J_0[e_0](\theta) =-\frac{\ln2}{4}. \end{equation} For $n\geq 1$, in turn, expanding~(\ref{T0Def}) we obtain \begin{equation}\label{T0cos} \begin{split} \tilde T_0[e_n](\theta) &= \cos\theta \cos n\theta-n\sin n\theta \sin \theta \\ &=\frac{\cos(n+1)\theta+\cos(n-1)\theta}{2}+n\frac{\cos(n+1)\theta -\cos(n-1)\theta}{2}, \end{split} \end{equation} which, for $n\geq 2$, in view of~\eqref{S0Diag} yields, upon application of $\tilde{S}_0$, \begin{equation}\label{S0T0} \tilde{S}_0 \tilde T_0[e_n](\theta) = \frac{\cos(n+1)\theta}{4(n+1)}+\frac{\cos(n-1)\theta}{4(n-1)}+n\left(\frac{\cos(n+1)\theta}{4(n+1)} -\frac{\cos(n-1)\theta}{4(n-1)}\right)\quad,\quad (n\geq 2). \end{equation} In view of~\eqref{N0Def}-\eqref{D0Def}, for $n \geq 2$ we thus obtain the relation \begin{equation}\label{N0en} \tilde{N}_0[e_n](\theta)=-\cos\theta\frac{\sin n\theta}{2\sin\theta}-\frac{n}{2}\cos n\theta, \end{equation} which, as is easily verified, also holds for $n=1$.
Using this relation in conjunction with~\eqref{S0Diag} and~\eqref{0case} it follows that \begin{equation}\label{N0S0cos} \tilde J_0[e_n](\theta)=\left\{\begin{array}{ll}-\frac{\ln 2}{4}, & n =0 \\ -\cos\theta\frac{\sin n \theta}{4n\sin \theta} -\frac{\cos n\theta}{4},& n>0.\end{array}\right. \end{equation} It can be easily verified that the operator $\tilde W_0$ defined by \begin{equation}\label{Jfact} \tilde W_0[\tilde\varphi](\theta)=-\frac{\tilde \varphi(\theta)}{4} - \frac{\cos\theta}{4}{\tilde C[\tilde \varphi](\theta)}+\frac{1-\ln 2 }{4\pi}\int_0^\pi\tilde\varphi(\theta')d\theta', \end{equation} reduces to the right-hand side of~\eqref{N0S0cos} when evaluated on the basis functions: $\tilde{J}_0[e_n]=\tilde W_0[e_n]$ for all $n\geq 0$ (the last term in equation~\eqref{Jfact} is obtained by collecting the zero-th order terms, and explicitly expressing the zero-th order coefficient of $\tilde{\varphi}$ as an integral). In view of Lemma~\ref{Cprop} we see that the operator $\tilde W_0$ defines a bounded mapping from $H^s_e(2\pi)$ into $H^s_e(2\pi)$ for all $s\geq 0$. We conclude the equality of $\tilde W_0$ and $\tilde{J}_0$ from the continuity of $\tilde{J}_0$ established in Corollary~\ref{J0cor}, and the lemma follows. \end{proof} From the relationship $\tilde{N}_0 = \tilde{J}_0 \tilde{S}_0^{-1}$ we immediately obtain the following corollary. \begin{corollary} \label{coro-3} The operator $\tilde{N}_0:\quad H^{s+1}_e(2\pi) \rightarrow H^s_e(2\pi)$ is continuous. \end{corollary} \begin{remark}\label{rem_2} The decomposition~\eqref{Jfact} superficially resembles the classical closed-surface Calder\'on formula~\eqref{Calderon}, as it expresses the operator $\tilde W_0 = \tilde J_0 = \tilde N_0\tilde S_0$ as the sum of $-I/4$ and an additional operator.
As shown in Section~\ref{point_sp} below, however, the operator ${\tilde C}: H^s_e(2\pi) \to H^s_e(2\pi)$ which appears in~\eqref{Jfact} is not compact---and thus the Fredholm theory cannot be applied to establish the continuous invertibility of $\tilde J_0 = \tilde W_0$ merely on the basis of the decomposition~\eqref{Jfact}. The results of Section~\ref{inv_j0} nevertheless do establish that the operator $\tilde J_0 = \tilde N_0\tilde S_0$ is bicontinuous. Section~\ref{point_sp} then provides a description of the spectrum of $\tilde J_0$, and, in preparation for the proof of Theorem~\ref{th_1}, Section~\ref{J_0_tau} extends all of these results to the operator $\tilde{J}_0^\tau$. \end{remark} \subsection{Invertibility of $\tilde J_0$ \label{inv_j0}} We now proceed to show that the continuous operator $\tilde J_0 : H^s_e(2\pi)\to H^s_e(2\pi)$ admits a (bounded) inverse. Noting that the decomposition~\eqref{N0Def} is not directly invertible on a term-by-term basis ($\tilde T_0$ and $\tilde D_0$ are not invertible), we first state and prove two lemmas concerning the mapping properties of the operators $\tilde T_0$ and $\tilde D_0$. \begin{lemma} The operators $\tilde{C}$ and $\tilde{T}_0$ satisfy \begin{equation}\label{T0C} \tilde{T}_0 \tilde C[ e_n ]= \left\{\begin{array}{ll} 0,& n=0 \\ e_n,& n> 0, \end{array}\right. \end{equation} and \begin{equation}\label{CT0} \tilde{C} \tilde T_0[\tilde \varphi]= \tilde \varphi\quad \mbox{for all}\quad \tilde \varphi\in H^s_e(2\pi). \end{equation} \end{lemma} \begin{proof} In view of~\eqref{T0Def} and~\eqref{Cdef} we clearly obtain equation~\eqref{T0C}, while equation~\eqref{CT0} follows immediately from~\eqref{Cfact}. \end{proof} \begin{lemma} For all $s\geq 2$ the operator $\tilde{D}_0$ can be expressed in the form \begin{equation}\label{DDinv} \tilde{D}_0 = -\frac{1}{4}\tilde{C}\left( \tilde{S}_0^{-1}\right)^2.
\end{equation} \end{lemma} \begin{proof} In view of~\eqref{D0Def} we have \begin{equation} \tilde{D}_0[e_n](\theta)=\left\{ \begin{array}{cc} 0, &n=0\\ -n\frac{\sin n \theta}{\sin\theta}, & n\geq 1. \end{array} \right. \end{equation} In view of the continuity of the various operators involved in relevant Sobolev spaces (Lemmas~\ref{lemma1},~\ref{D0lemma} and~\ref{Cprop}) and the density of the basis $\{e_n\}$ in $H^s_e(2\pi)$, equation~\eqref{DDinv} follows from~\eqref{S0Diag} and~\eqref{Cdef}. \end{proof} \begin{corollary} For all $s>0$, the operator $\tilde{D}_0$ defines a bounded mapping from $H^{s+2}_e(2\pi)$ into $H^s_e(2\pi)$. \end{corollary} \begin{lemma} \label{alt-lemma} For each integer $n\geq 2$ we have \begin{equation}\label{CST} \tilde{C}\tilde{S}_0\tilde{T}_0[e_n](\theta) = \frac {\cos\theta}{2} \tilde{C}\left[\frac{n}{1-n^2}\ e_n\right](\theta) -\frac {n\cos n\theta}{2(1-n^2)}. \end{equation} \end{lemma} \begin{proof} From the easily established identity \[ \tilde{S}_0\tilde{T}_0[e_n] = \frac 14 ( e_{n+1} - e_{n-1})\quad (n\geq 1), \] using equation~\eqref{Cdef} we obtain the relation \begin{equation}\label{cancellation} \tilde{C}\tilde{S}_0\tilde{T}_0[e_n] = \frac 12 \left [ \frac{\sin n\theta}{\sin \theta}\frac{\cos \theta}{1-n^2} -\frac {n\cos n\theta}{1-n^2}\right]\quad (n\geq 2) \end{equation} which, via an additional application of equation~\eqref{Cdef} yields the desired equation~\eqref{CST}. \end{proof} \begin{corollary} \label{alt-cor} The composition $\tilde{C}\tilde{S}_0\tilde{T}_0$ which, in view of Lemma~\ref{lemma1}, Lemma~\ref{Cprop} and equation~\eqref{T0Def}, defines a continuous operator from $H^s_e(2\pi)$ to $H^s_e(2\pi)$ for $s\geq 1$, can in fact be extended in a unique fashion to an operator defined on $H^s_e(2\pi)$ for each $s\geq 0$. For all $s\geq 0$, further, these extended maps enjoy additional regularity: they can be viewed as continuous operators from $H^s_e(2\pi)$ into $H^{s+1}_e(2\pi)$. 
\end{corollary} \begin{proof} The proof follows by consideration of equation~\eqref{cancellation}. A cancellation of leading-order terms (the contributions $n\sin n\theta\cos\theta$ arising from the expansions of $(n-1)\sin(n+1)\theta$ and of $(n+1)\sin(n-1)\theta$ cancel each other), which occurs in the process of evaluation of the right-hand side of equation~\eqref{cancellation}, underlies the additional regularity of the mapping $\tilde{C}\tilde{S}_0\tilde{T}_0$. \end{proof} We can now obtain the inverse of the operator $\tilde{J}_0$. \begin{lemma}\label{J0Inverse} The continuous operator $\tilde J_0: H^s_e(2\pi) \to H^s_e(2\pi)$ ($s\geq 0$) which, according to equations~\eqref{N0Def} and~\eqref{j0def}, is given by \begin{equation}\label{J_exp} \tilde J_0=\tilde D_0 \tilde{S}_0 \tilde T_0\tilde{S}_0, \end{equation} is bijective, with (continuous) inverse $\tilde J_0^{-1}:H^s_e(2\pi) \to H^s_e(2\pi)$ given by \begin{equation}\label{I0Def} \tilde J_0^{-1}=-4\tilde{S}_0^{-1}\tilde{C}\tilde{S}_0\tilde{T}_0 \end{equation} for $s\geq 2$, and given by the unique continuous extension of the right-hand side of this equation for $2 > s\geq 0$. \end{lemma} \begin{proof} Since the rightmost factor $\tilde{S}_0$ in equation~\eqref{J_exp} is a diagonal operator, we consider the next operator from the right in this product, namely, $\tilde T_0$, which, in view of equations~\eqref{T0C} and~\eqref{CT0}, admits $\tilde{C}$ as a ``partial'' inverse. Since the next factor $\tilde{S}_0$ from the right is, once again, a diagonal operator, we consider next the leftmost factor in equation~\eqref{J_exp}: the operator $\tilde D_0$, a decomposition of which was provided in equation~\eqref{DDinv}. In sum, to obtain the inverse of $\tilde J_0$ we proceed as follows: multiplying $\tilde J_0$ on the right by $\tilde{S}_0^{-1}\tilde{C}$ we obtain an operator that maps $e_0$ to $0$ and $e_n$ to $ \tilde{D}_0\tilde{S}_0[e_n]$.
Thus, considering~\eqref{CT0} and~\eqref{DDinv}, we further multiply on the right by $-4\tilde{S}_0\tilde T_0$ and we obtain the operator \begin{equation}\label{inv_left} -4\tilde J_0\tilde{S}_0^{-1}\tilde{C}\tilde{S}_0\tilde T_0 \end{equation} which, in view of the fact that the image of $\tilde{S}_0\tilde T_0$ is orthogonal to $e_0$ (as it follows easily from equations~\eqref{S0Diag} and~\eqref{T0Def}) maps $e_n$ to $-4\tilde{D}_0\left( \tilde{S}_0\right)^2\tilde T_0[e_n]$ for all $n\geq 0$. But, in view of~\eqref{DDinv}, this quantity equals $\tilde{C} \tilde T_0[e_n]$ which, according to~\eqref{CT0}, equals $e_n$. In other words, the operator~\eqref{inv_left}, which is a continuous operator from $H^s_e(2\pi)$ to $H^{s-1}_e(2\pi)$ ($s\geq 1$), maps $e_n$ to $e_n$ for $n=0,1,2\dots$---and, thus, \begin{equation}\label{I0-2} \tilde{I}_0 = -4\tilde{S}_0^{-1}\tilde{C}\tilde{S}_0\tilde T_0 \end{equation} is a right inverse of $\tilde J_0$, that is \begin{equation}\label{rght-inv-sgr1} \tilde J_0\tilde{I}_0 = I, \end{equation} at least for $s\geq 1$. Conversely, since in view of equations~\eqref{J_exp} and~\eqref{DDinv} $\tilde{J}_0$ can be expressed, for $s\geq 1$, in the form \begin{equation}\label{J0dense} \tilde{J}_0=-\frac{1}{4}\tilde{C}\tilde{S}_0^{-1}\tilde{T}_0\tilde{S}_0, \end{equation} for $s\geq 2$ we have \begin{equation}\label{I0J0} \tilde I_0 \tilde J_0 = \tilde S_0^{-1} \tilde C \tilde S_0 \tilde T_0 \tilde C \tilde S_0^{-1} \tilde T_0\tilde{S}_0. \end{equation} Now, as noted above, the image of $\tilde{T}_0$ is orthogonal to $e_0$, and thus, since $\tilde S_0$ is a diagonal operator, the same is true of the operator $\tilde S_0^{-1} \tilde T_0 \tilde S_0$. Equation~\eqref{T0C} can therefore be used directly to obtain \begin{equation} \tilde T_0 \tilde C \tilde S_0^{-1} \tilde T_0 \tilde S_0[ e_n ]= \tilde S_0^{-1} \tilde T_0 \tilde S_0[ e_n ], \quad \mbox{for all}\quad n \geq 0. 
\end{equation} Clearly then, equation~\eqref{I0J0} can be reduced to \begin{equation} \tilde{I}_0\tilde{J}_0=\tilde{S}_0^{-1}\tilde C \tilde{T}_0\tilde{S}_0, \end{equation} and making use of~\eqref{CT0}, we finally obtain \begin{equation}\label{lft-inv} \tilde{I}_0\tilde{J}_0=I, \end{equation} as desired, thus establishing the invertibility of $\tilde{J}_0$ at least for $s\geq 2$. The boundedness of $\tilde{I}_0 = \tilde{J}_0^{-1}$ for $s\geq 2$ follows in view of Remark~\ref{inv_is_cont} (continuity of inverses of continuous linear maps) or, otherwise, directly from equation~\eqref{I0Def}, Corollary~\ref{alt-cor} and Lemma~\ref{lemma1}. To treat the case $2>s\geq 0$, finally, we note that by Corollary~\ref{alt-cor} and Lemma~\ref{lemma1} $\tilde{I}_0$ can be extended in a unique fashion as a continuous mapping from $H^s_e(2\pi)$ to $H^s_e(2\pi)$ for all $s\geq 0$, and that by Lemma~\ref{J0bounded} $\tilde{J}_0$ is a continuous mapping from $H^s_e(2\pi)$ to $H^s_e(2\pi)$ for all $s\geq 0$. The $s\geq 2$ relations~\eqref{rght-inv-sgr1} and~\eqref{lft-inv} thus extend to all $s\geq 0$ by density of $H^2_e(2\pi)$ in $H^s_e(2\pi)$ ($2>s\geq 0$), and the proof is thus complete. \end{proof} \begin{corollary}\label{N0mapping} For all $s\geq 0$, the operator $\tilde{N}_0=\tilde{J}_0\tilde{S}_0^{-1}$ defines a bicontinuous mapping from $H^{s+1}_e(2\pi)$ to $H^s_e(2\pi)$. \end{corollary} \begin{proof} This follows directly from equation~\eqref{S0reg}, equation~\eqref{j0def}, and Lemmas~\ref{J0bounded} and~\ref{J0Inverse}. \end{proof} \subsection{Point Spectrum of $\tilde J_0$\label{point_sp}} Having established boundedness and invertibility, we conclude our study of the operator $\tilde J_0$ by computing its eigenvalues.
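Before turning to the eigenvalue computation, the operator algebra of the preceding subsection can be sanity-checked in coefficient space: all of $\tilde S_0$, $\tilde T_0$, $\tilde D_0$ and $\tilde C$ act explicitly on the cosine basis, so finite matrix truncations must reproduce~\eqref{rght-inv-sgr1} and~\eqref{lft-inv} up to truncation effects in the last few columns. The following Python sketch (the truncation order $N$ is an arbitrary choice, and the script is illustrative rather than part of the development) builds the matrices and checks the inverse relations:

```python
import numpy as np

N = 60                      # truncation order of the cosine basis {e_n}
n_idx = np.arange(N)

def sin_over_sin(n, N):
    """Cosine-basis coefficients of sin(n*theta)/sin(theta), via identity (Un)."""
    c = np.zeros(N)
    if n % 2 == 1:          # n odd: 1 + 2cos(2 theta) + ... + 2cos((n-1) theta)
        c[0:n:2] = 2.0
        c[0] = 1.0
    else:                   # n even: 2cos(theta) + 2cos(3 theta) + ... + 2cos((n-1) theta)
        c[1:n:2] = 2.0
    return c

# S0 is diagonal (S0Diag); T0, D0 and C act column-by-column on the basis.
S0 = np.diag(np.concatenate(([np.log(2) / 2], 1.0 / (2.0 * n_idx[1:]))))
T0 = np.zeros((N, N))       # T0[e_n] = (1+n)/2 e_{n+1} + (1-n)/2 e_{n-1}
T0[1, 0] = 1.0
for n in range(1, N):
    if n + 1 < N:
        T0[n + 1, n] = (1 + n) / 2.0
    T0[n - 1, n] = (1 - n) / 2.0
D0 = np.zeros((N, N))       # D0[e_n] = -n sin(n theta)/sin(theta)
C = np.zeros((N, N))        # C[e_n] = sin(n theta)/(n sin(theta)), C[e_0] = 0
for n in range(1, N):
    D0[:, n] = -n * sin_over_sin(n, N)
    C[:, n] = sin_over_sin(n, N) / n

J0 = D0 @ S0 @ T0 @ S0                        # equation (J_exp)
I0 = -4.0 * np.linalg.inv(S0) @ C @ S0 @ T0   # equation (I0Def)
M = N - 2                   # the last columns are corrupted by truncation
```

The upper-left $M\times M$ blocks of $J_0I_0$ and $I_0J_0$ reproduce the identity to machine precision, and the diagonal of $J_0$ reproduces the set $\Lambda_\infty$ of Theorem~\ref{th_1}.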
\begin{lemma}\label{Jeigenvalues} For any $s>0$, the point spectrum $\sigma_s$ of $\tilde J_0 : H^s_e(2\pi) \to H^s_e(2\pi)$ can be expressed as the union \begin{equation} \sigma_s = \Lambda_s \cup\Lambda_\infty, \end{equation} where $\Lambda_\infty$ is the discrete set \begin{equation}\label{lambdaInfinity} \Lambda_\infty=\left\{\lambda_n: n=0,1,2,\dots\right\},\; \lambda_n=\left\{\begin{array}{ll}-\frac{\ln 2 }{4} , & n = 0 \\ -\frac{1}{4} - \frac{1}{4n},& n>0 , \end{array} \right. \end{equation} and where $\Lambda_s$ is the open bounded set \begin{equation}\label{lambda_s} \Lambda_s=\left\{ \lambda=(\lambda_x+i\lambda_y)\in\mathbb{C}\; : \;4s+2 < \frac{-\left(\lambda_x+\frac{1}{4}\right)}{(\lambda_x+\frac{1}{4})^2+\lambda_y^2}\right\}. \end{equation} \end{lemma} \begin{proof} We start by re-expressing equation~\eqref{N0S0cos} as \begin{equation}\label{Jc0} \tilde J_0[e_n](\theta)=\left\{\begin{array}{ll}-\frac{\ln 2}{4}&n=0\\-\frac{\sin (n+1)\theta}{4n\sin\theta}+\frac{\cos n\theta}{4n}-\frac{\cos n \theta}{4},& n>0. \end{array}\right. \end{equation} Then, making use again of~\eqref{Un} we obtain \begin{equation}\label{J0exp} \tilde J_0[ e_n] = \left\{ \begin{array}{ll} \lambda_n e_n - \frac{1}{2n}\sum\limits_{k=0}^{p-1}(1-\frac{\delta_{0k}}{2})e_{2k},& n = 2p,\; p\geq1\\ \lambda_n e_n - \frac{1}{2n}\sum\limits_{k=0}^{p-1}e_{2k+1},& n = 2p+1,\; p\geq0\\ \end{array}\right., \end{equation} where the diagonal elements $\lambda_n$ are defined in equation~\eqref{lambdaInfinity}. Clearly, $\tilde{J}_0$ takes the form of an upper-triangular (infinite) matrix whose diagonal terms $\lambda_n$ define eigenvalues associated with eigenvectors $v_n$, each one of which can be expressed as a finite linear combination of the basis functions $e_0,\dots,e_n$: $v_n=\sum_{k=0}^n c_k^n e_k$. In particular, for all $n\in \mathbb{N}$, $v_n \in H^s_e(2\pi)$ for all $s > 0$.
This shows that the set $\Lambda_\infty$ of diagonal elements defined in equation~\eqref{lambdaInfinity} is indeed contained in $\sigma_s$ for all $s>0$. As is well known, an upper triangular operator in an infinite-dimensional space can have eigenvalues beyond those represented by diagonal elements. As shown in \cite[Th. 2]{BrownHalmosShields}, for instance, the point spectrum of the upper-triangular bounded operator \begin{equation} C^*[a](n) = \sum\limits_{k=n}^\infty \frac{a_k}{k+1}, \end{equation} (the adjoint of the discrete Ces\`aro operator $C$) is the open disc $|\lambda-1|<1$. A similar situation arises for our operator $\tilde J_0$. To obtain the full point spectrum of the operator $\tilde J_0$ let $\lambda \in \mathbb{C}$ and $f=\sum_{k=0}^\infty f_n e_n$ be such that $\tilde{J}_0[f]=\lambda f$. It follows from~\eqref{J0exp} that the coefficients $f_n$ satisfy the relation \begin{equation}\label{recfn} (-\frac{1}{4}-\frac{1}{4n})f_{n} -\frac{1}{2} \sum\limits_{k=1}^\infty \frac{f_{n+2k}}{n+2k}=\lambda f_{n}\quad,\quad n \geq 1, \end{equation} along with \begin{equation} (-\frac{\ln 2}{4})f_{0} -\frac{1}{4} \sum\limits_{k=1}^\infty \frac{f_{2k}}{2k}=\lambda f_{0}\quad,\quad n = 0. \end{equation} Equation~\eqref{recfn} is equivalent to \begin{equation} \frac{1}{2}\sum\limits_{k=1}^\infty \frac{f_{n+2k}}{n+2k} = f_n( -\frac{1}{4n}-\frac{1}{4}-\lambda),\quad n\geq 1, \end{equation} which, by subtraction, gives \begin{equation} \frac{1}{2}\frac{f_{n+2}}{n+2} = f_n( -\frac{1}{4n}-\frac{1}{4}-\lambda)-f_{n+2}(- \frac{1}{4(n+2)}-\frac{1}{4}-\lambda),\quad n \geq 1. \end{equation} Therefore, the coefficients of $f$ must satisfy \begin{equation}\label{fRec} \left\{\begin{array}{rlc} f_{n+2}=&f_n\left(\frac{ \frac{z}{2} + \frac{1}{n} }{\frac{z}{2} -\frac{1}{(n+2)} }\right),&\quad n \geq 1\\ \frac{1}{4}\sum\limits_{k=1}^\infty \frac{f_{2k}}{2k} =& f_0( -\frac{\ln 2}{4}-\lambda),&\quad n=0. \end{array}\right. 
\end{equation} where, in order to simplify the notation, we write \begin{equation}\label{zdef} z = 8\lambda +2. \end{equation} It is clear from equation~\eqref{fRec} that the zero-th coefficient is determined by the coefficients of even positive orders, and that the sequence $f_n$ for $n\geq 1$ is entirely determined by $f_1$ and $f_2$. Clearly, there are no elements of the point spectrum for which ${\rm Re}(z) \geq 0 $, since for such values of $z$ the resulting sequence $f_n$ is not square summable (that is, $\sum |f_n|^2=\infty$). Note that the vectors $\{v_n\}$ associated with the discrete eigenvalues $\lambda_n=-\frac{1}{4}-\frac{1}{4n}$, in turn, are recovered by setting $z=-\frac{2}{n}$. To determine all of the elements of the point spectrum with ${\rm Re}(z)<0$ we study separately the odd and even terms in the sequence~\eqref{fRec}. We start with the sequence $q_n=f_{2n}$, which satisfies the recurrence relationship \begin{equation} q_{n+1}=q_n \left( \frac{ z + \frac{1}{n}}{z -\frac{1}{n+1}}\right),\quad n \geq 1. \end{equation} Let $z= -x+iy$ with $x > 0$, and assume, without loss of generality, that $q_1 = 1 $. Then \begin{equation} q_n = \left(\frac{z-1}{z-\frac{1}{n}}\right)\prod\limits_{k=1}^{n-1} \left( \frac{z + \frac{1}{k}}{z -\frac{1}{k}} \right),\quad n\geq 1, \end{equation} and it follows that \begin{equation}\label{logqn} \begin{split} \ln|q_n| &= \ln\left|\frac{z-1}{z- \frac{1}{n}}\right|+\frac{1}{2}\sum\limits_{k=1}^{n-1} \ln\left( \frac{(x-\frac{1}{k})^2 + y^2}{ (x+\frac{1}{k})^2 + y^2} \right)\\ &= \ln\left|\frac{z-1}{z - \frac{1}{n}}\right|+\frac{1}{2}\sum\limits_{k=1}^{n-1}\ln\left( \frac{1-r(x,y,k)}{1+r(x,y,k)} \right) \end{split} \end{equation} where \begin{equation}\label{rxyk} r(x,y,k)=\frac{2x}{k(x^2+y^2+\frac{1}{k^2})}.
\end{equation} For large $k$, we have \begin{equation} \ln \left( \frac{1-r(x,y,k)}{1+r(x,y,k)}\right)= -\frac{4x}{k(x^2 + y^2) }+O\left(\frac{1}{k^3}\right),\; k \rightarrow \infty, \end{equation} and thus \begin{equation}\label{qnAsympt} \ln|q_n|= -\frac{2x}{x^2+y^2}\ln n + M + O\left(\frac{1}{n}\right), \end{equation} where $M$ is a constant. The absolute value of $q_n$ is thus given asymptotically by \begin{equation} |q_n| = e^M\, n^{-\frac{2x}{x^2+y^2}}\left(1 + O\left(\frac{1}{n}\right)\right) \end{equation} as $n\to\infty$. It follows that, for any $s>0$, the set of points $(x,y)$ in the half plane for which the series $\sum n^{2s} |q_n|^2$ converges is exactly characterized by the inequality \begin{equation}\label{convergence_condition} 2s - \frac{4x}{x^2 + y^2 } < -1. \end{equation} The analysis for the odd-term sequence $p_n=f_{2n+1}$ can be carried out similarly, since \begin{equation} p_{n+1}=p_n\left(\frac{z+\frac{1}{n+\frac{1}{2}}}{z-\frac{1}{n+\frac{3}{2}}}\right),\end{equation} which essentially amounts to replacing $k$ by $k+\frac{1}{2}$ in equations~\eqref{logqn} and~\eqref{rxyk}. The convergence condition~\eqref{convergence_condition} thus applies to $p_n$ as well, and it follows, in view of equation~\eqref{zdef}, that the set $\Lambda_s$ defined by~\eqref{lambda_s} contains all the eigenvalues of $\tilde{J}_0$ not contained in $\Lambda_\infty$. \end{proof} \begin{corollary} The operator $\tilde{C}\; :H^s_e(2\pi) \to H^s_e(2\pi)$ is not compact. \end{corollary} \begin{proof} This follows from the decomposition~\eqref{Jfact} of $\tilde{J}_0$ and the fact that the point spectrum of $\tilde{J}_0$ is not discrete. \end{proof} \begin{remark} Using polar coordinates $(r,\theta)$ around the point $(-\frac{1}{4},0)$ it is easy to check that \begin{equation}\label{polarLambda} \Lambda_s = \left\{ (\lambda_x+i\lambda_y)\in\mathbb{C}: \lambda_x+\frac{1}{4}=r\cos\theta,\, \lambda_y= r\sin\theta,\, 0<r < -\frac{\cos \theta}{4s +2 },\, \theta \in \left[\frac{\pi}{2}, \frac{3\pi}{2}\right] \right\}.
\end{equation} Clearly then, for $s>s'$, $\Lambda_s \varsubsetneq \Lambda_{s'}$, and we have $\bigcap_{s>0} \Lambda_s =\varnothing$, while the intersection of the closures is given by $\bigcap_{s>0} \bar{\Lambda}_s= \{-\frac{1}{4}\}$. Also, for all $s>0$, ${\rm dist}( \sigma_s,0)=\frac{1}{4}$, and $\max_{\lambda\in\sigma_s}|\lambda| \leq \frac{3}{4}$. It therefore follows that $\sigma_s$ is bounded away from zero and infinity. In view of Theorem~\ref{th_1} and Section~\ref{J_0_tau}, this is a fact of great significance in connection with the numerical solution of equations~\eqref{Stilde} and~\eqref{Ntilde} by means of Krylov-subspace iterative linear-algebra techniques; see~\cite{BrunoLintner2} for details. \end{remark} \subsection{The operator $\tilde J^\tau_0$\label{J_0_tau}} In our proof of Theorem~\ref{NStheorem} we need to consider not $\tilde J_0$ but a closely related operator, namely \begin{equation}\label{j_0_tau_def} \tilde J^\tau_0 = \tilde N^\tau_0 \tilde S^\tau_0 \end{equation} where, defining (in a manner consistent with equation~\eqref{Z_op} below) $\tilde{Z}_0[\gamma](\theta)=\gamma(\theta)\tau(\cos\theta)$, we have set \begin{equation}\label{s_tau_def} \tilde{S}_0^\tau[\gamma]=\tilde{S}_0\tilde{Z}_0[\gamma ], \end{equation} and \begin{equation}\label{n0_tau_def} \tilde{N}^\tau_0[\gamma] = \tilde{Z}_0^{-1}\tilde{N}_0[\gamma]. \end{equation} It is easy to generalize equation~\eqref{S0reg}, Corollary~\ref{N0mapping} and Lemmas~\ref{J0bounded} through~\ref{Jeigenvalues} to obtain the corresponding results needed for $\tilde{S}_0^\tau$, $\tilde{N}_0^\tau$ and $\tilde J^\tau_0$; these are given in the following theorem. \begin{theorem}\label{j_0_tau_thm} Let $s\geq 0$. Then, \begin{itemize} \item[(i)] The operator $\tilde{S}_0^\tau\;\,:\; H^{s}_e(2\pi)\to H^{s+1}_e(2\pi)$ is bicontinuous.
\item[(ii)] The operator $\tilde{N}_0^\tau\;\,:\; H^{s+1}_e(2\pi)\to H^s_e(2\pi)$ is bicontinuous. \item[(iii)] The operator $\tilde J^\tau_0\;\,:\;H^s_e(2\pi)\to H^s_e(2\pi)$ is bicontinuous. \item[(iv)] The point spectrum of $\tilde J^\tau_0:\; H^s_e(2\pi)\to H^s_e(2\pi)$ is equal to the point spectrum $\sigma_s$ of $\tilde J_0$. \end{itemize} \end{theorem} \begin{proof} In view of~\eqref{s_tau_def} and~\eqref{n0_tau_def}, the ensuing relation \begin{equation}\label{j_0_tau_gen} \tilde J^\tau_0 = \tilde{Z}_0^{-1}\tilde J_0\tilde{Z}_0, \end{equation} and the fact that $\tau$ is smooth and non-vanishing, the proof of points~{\em(i)},~{\em(ii)} and~{\em(iii)} is immediate. Equation~\eqref{j_0_tau_gen} also shows that $(\lambda,v)$ is an eigenvalue-eigenvector pair for $\tilde J_0$ if and only if $(\lambda,\tilde{Z}_0^{-1}[v])$ is an eigenvalue-eigenvector pair for $\tilde{J}_0^\tau$, and point~{\em(iv)} follows as well. \end{proof} \section{General Properties of the Operators $\tilde S$ and $\tilde N$ \label{SGeneral}} The proof of Theorem~\ref{NStheorem} results from a perturbation argument involving Theorem~\ref{j_0_tau_thm} and the results established in this section on the regularity and invertibility of the operators $\tilde S$ and $\tilde N$ defined by equations~\eqref{ssdef} and~\eqref{NNdef}. \subsection{Bicontinuity of the operator $\tilde{S}$} We seek to show that for all $s\geq 0$ the operator $\tilde{S}$ defined in equation~\eqref{ssdef} is a bicontinuous mapping from $H^s_e(2\pi)$ into $H^{s+1}_e(2\pi)$. This is done in Lemmas~\ref{S_bounded} and~\ref{Sinverse} below. \begin{lemma}\label{S_bounded} Let $s\geq 0$. Then $\tilde{S}$ defines a bounded mapping from $H^s_e(2\pi)$ into $H^{s+1}_e(2\pi)$. Further, the difference $\tilde{S} - \tilde{S}^\tau_0$ (see equation~\eqref{s_tau_def}) defines a continuous mapping from $H^s_e(2\pi)$ into $H^{s+3}_e(2\pi)$.
\end{lemma} \begin{proof} In view of equation~\eqref{Gk_def} and the expression \[ H_0^{(1)}(z) = \frac{2i}{\pi}J_0(z)\ln(z) + R(z) \] for the Hankel function in terms of the Bessel function $J_0(z)$, the logarithmic function and a certain entire function $R$, the kernel of the operator $S_\omega$ (equation~\eqref{sOmegaDef}) can be cast in the form \begin{equation} G_k(\mathbf{r}(t),\mathbf{r}(t'))=A_1(t,t')\ln|t-t'|+A_2(t,t'), \end{equation} where $A_1(t,t')$ and $A_2(t,t')$ are smooth functions. Further, since $J_0(z)$ is given by a series in powers of $z^2$, it follows that for all $m\in \mathbb{N}$, the function $A_1$ can be expressed in the form \[ A_1(t,t')=-\frac{1}{2\pi} + \sum_{n=2}^{m+3} a_n(t)(t'-t)^n + (t-t')^{m+4}\Lambda_{m+3}(t,t'), \] where $\Lambda_{m+3}(t,t')$ is a smooth function of $t$ and $t'$. The operator $\tilde{S}$ in equation~\eqref{ssdef} can thus be expressed in the form \begin{equation}\label{Sfact} \begin{split} \tilde{S}[\tilde \varphi](\theta)=\tilde{S}_0^\tau[\tilde\varphi](\theta) + \sum\limits_{n=2}^{m+3} a_n(\cos\theta)\int_0^\pi(\cos\theta'- \cos\theta)^n \ln|\cos\theta-\cos\theta'|\tilde\varphi(\theta')\tau(\cos\theta')d\theta'\\+\int_0^\pi A_3(\cos\theta,\cos\theta')\tilde\varphi(\theta')\tau(\cos\theta')d\theta', \end{split} \end{equation} where $A_3(\cos\theta,\cos\theta')$, which contains a logarithmic factor, belongs to $C^{m+3}([0,2\pi]\times [0,2\pi])$. Clearly, for $n\geq 2$, the second derivative $d^2/d\theta^2$ of the product $(\cos\theta'- \cos\theta)^n \ln|\cos\theta-\cos\theta'|$ can be expressed in the form $P_1(\cos\theta,\cos\theta')\ln|\cos\theta-\cos\theta'|+ P_2(\cos\theta,\cos\theta')$, where $P_1(t,t')$ and $P_2(t,t')$ are polynomials.
Collecting terms with the common factor $\cos^\ell\theta'$ we then obtain \begin{equation}\label{Sfact_2} \frac{d^2}{d\theta^2}\left(\tilde{S} - \tilde{S}_0^\tau\right)[\tilde\varphi](\theta) =\sum\limits_{\ell=0}^{m+1} b_\ell(\cos\theta)\tilde S_0 \tilde Z_\ell [\tilde \varphi](\theta) + \int_0^\pi A_4(\cos\theta,\cos\theta')\tilde\varphi(\theta')\tau(\cos\theta')d\theta', \end{equation} where $b_\ell(\cos\theta)$ is an even smooth function, where the operator $\tilde Z_\ell: H^s_e(2\pi)\to H^s_e(2\pi)$ ($s\in\mathbb{R}$) is given by \begin{equation}\label{Z_op} \tilde Z_\ell[\gamma](\theta')=\cos^\ell\theta'\;\tau(\cos\theta')\; \gamma(\theta'), \end{equation} and where $A_4(\cos\theta,\cos\theta')\in C^{m+1}([0,2\pi]\times [0,2\pi])$. Now, in view of equation~\eqref{S0reg}, the first term on the right-hand side of equation~\eqref{Sfact_2} defines a bounded operator from $H^s_e(2\pi)$ into $H^{s+1}_e(2\pi)$. On the other hand, the derivatives of orders $k\leq (m+1)$ of the second term on the right-hand side of~\eqref{Sfact_2} all reduce to integral operators with bounded kernels, and thus map $L^2[0,2\pi]$ continuously into $L^2[0,2\pi]$. It follows that the second term itself maps continuously $H^0_e(2\pi)$ (and hence $H^{m}_e(2\pi)$) into $H^{m+1}_e(2\pi)$, and the lemma follows for integer values $s=m$. The extension for real values $s > 0$ follows directly by interpolation~\cite[Theorem 8.13]{Kress}. \end{proof} The following lemma and its corollary provide a direct link, needed for our proof of Lemma~\ref{Sinverse}, between the spaces $H^s_e(2\pi)$ under consideration here and the original space $\tilde{H}^{-\frac{1}{2}}(\Gamma)$ appearing in equation~\eqref{Smapping}. \begin{lemma}\label{LemmaStephanSpace0} Let $s>0$, and assume $\tilde\varphi\in H^s_e(2\pi)$. Then the function \begin{equation} w(\xi)=\frac{1}{\pi}\int_0^\pi \tilde\varphi(\theta) e^{-i\xi\cos\theta}d\theta
\end{equation} satisfies \begin{equation} \int_\mathbb{R}\frac{|w(\xi)|^2}{(1+|\xi|^2)^\frac{1}{2}}d\xi < \infty. \end{equation} \end{lemma} \begin{proof} Using the $L^2[0,\pi]$-convergent cosine expansion \begin{equation} \tilde\varphi(\theta) = \sum_{n=0}^\infty a_n\cos n\theta \end{equation} we obtain \begin{equation}\label{fhat2} w(\xi)=\sum_{n=0}^\infty \frac{a_n}{\pi}\int_0^\pi \cos n\theta e^{-i\xi\cos\theta}d\theta. \end{equation} Since \begin{equation} \int_0^\pi \cos n\theta e^{-i\xi\cos\theta}d\theta = \frac{1}{2}\int_{-\pi}^\pi e^{in\theta}e^{-i\xi\cos\theta}d\theta = \frac{1}{2}e^{\frac{in\pi}{2}}\int_{-\pi}^\pi e^{-in\theta}e^{-i\xi\sin\theta}d\theta = \pi i^n J_n(-\xi), \end{equation} (where, denoting by $J_n(\xi)$ the Bessel function of order $n$, the last identity follows from~\cite[8.411 p. 902]{Gradshteyn}), we see that equation~\eqref{fhat2} can be re-expressed in the form \begin{equation} w(\xi ) = \sum\limits_{n=0}^\infty i^n a_n J_n(-\xi) = \sum\limits_{n=0}^\infty \left( \sqrt{1 +n^{2s}} \; i^n a_n \right)\; \left( \frac{J_n(-\xi )}{\sqrt{1 +n^{2s}}}\right). \end{equation} In view of the Cauchy--Schwarz inequality we thus obtain \begin{equation} \left|w(\xi)\right|^2 \leq \left(\sum\limits_{n= 0}^\infty (1 + n^{2s})|a_n|^2\right) \left(\sum\limits_{n=0}^\infty \frac{|J_n(\xi)|^2}{1+ n^{2s}}\right) \leq \left( \sum\limits_{n=1}^\infty\frac{|J_n(\xi)|^2}{n^{2s}} +|J_0(\xi)|^2\right)\|\tilde\varphi \|^2_s.
\end{equation} Since $0\leq |\xi|/(1+|\xi|^2)^{1/2}\leq 1$, it follows that \begin{equation}\begin{split} \int_\mathbb{R}\frac{|w(\xi)|^2}{(1 + |\xi|^2)^\frac{1}{2}} d\xi\leq \left( \sum\limits_{n=1}^\infty \left(\frac{1}{n^{2s}}\int_\mathbb{R}\frac{|J_n(\xi)|^2}{(1 + |\xi|^2)^\frac{1}{2}}d\xi\right)+\int_\mathbb{R}\frac{|J_0(\xi)|^2}{(1+|\xi|^2)^\frac{1}{2}}d\xi\right)\|\tilde\varphi \|^2_s \\\leq \left(\sum\limits_{n=1}^\infty \left(\frac{1}{n^{2s}}\int_\mathbb{R}\frac{|J_n(\xi)|^2}{|\xi|}d\xi\right)+\int_\mathbb{R}\frac{|J_0(\xi)|^2}{(1+|\xi|^2)^\frac{1}{2}}d\xi\right)\|\tilde\varphi \|^2_s. \end{split} \end{equation} Further, in view of \cite[6.574, eq 2.]{Gradshteyn}, the integral involving $J_n$ can be computed exactly for $n\geq 1$: \begin{equation} \int_\mathbb{R} \frac{|J_n(\xi)|^2}{|\xi|}d\xi =\frac{1}{n}. \end{equation} It thus follows that \begin{equation} \int_\mathbb{R} \frac{| w(\xi) |^2}{(1 +|\xi |^2)^\frac{1}{2}}d\xi \leq C_s \|\tilde\varphi \|^2_s < \infty \end{equation} where \begin{equation} C_s= \sum\limits_{n=1}^\infty \frac{1}{n^{1+2s}} + \int_\mathbb{R}\frac{|J_0(\xi)|^2}{(1+|\xi|^2)^\frac{1}{2}}d\xi. \end{equation} \end{proof} \begin{corollary}\label{LemmaStephanSpace1} Let $s>0$, $\tilde\varphi\in H^{s}_e(2\pi)$, $\varphi(t) = \tilde\varphi(\arccos(t))$, $\varphi:[-1,1]\to\mathbb{C}$, $\alpha(\mathbf{p}) = \varphi(\mathbf{r}^{-1}(\mathbf{p}))$ and $W(\mathbf{p}) = \omega(\mathbf{r}^{-1}(\mathbf{p}))$. Then, the function $F=\frac{\alpha}{W}$ is an element of $\tilde{H}^{-\frac{1}{2}}(\Gamma)$. \end{corollary} \begin{proof} It suffices to show that $f = \varphi/\omega \in \tilde{H}^{-\frac{1}{2}}[-1,1]$ for the case $\Gamma = [-1,1]$.
Extending $f$ by $0$ outside the interval $[-1,1]$, the Fourier transform of $f$ is given by \begin{equation} \hat{f}(\xi)=\int_{-\infty}^\infty f(t)e^{-i\xi t }dt=\int_{-1}^1 \frac{\varphi(t)e^{-i\xi t }}{\omega(t)} dt=\int_0^\pi \tilde\varphi(\theta) e^{-i\xi\cos\theta}d\theta, \end{equation} since $\omega(t)=\sqrt{1-t^2}$ in the present case. The corollary now follows from Lemma~\ref{LemmaStephanSpace0}. \end{proof} \begin{lemma}\label{Sinverse} For all $s > 0$ the operator $\tilde{S}: H^s_e(2\pi)\to H^{s+1}_e(2\pi)$ is invertible, and the inverse $\tilde{S}^{-1}: H^{s+1}_e(2\pi)\to H^s_e(2\pi)$ is a bounded operator. \end{lemma} \begin{proof} Let $s> 0$ be given. From Lemma~\ref{lemma1} we know $\tilde{S}_0: H^s_e(2\pi)\to H^{s+1}_e(2\pi)$ is a continuously invertible operator. The same clearly holds for $\tilde{S}_0^\tau$ as well, and we may write \begin{equation}\label{s_tau} \tilde{S} =\tilde{S}_0^\tau\left( I + \left(\tilde{S}_0^\tau\right)^{-1}(\tilde{S}-\tilde{S}_0^\tau)\right). \end{equation} It follows from Lemma~\ref{S_bounded} that the operator $(\tilde{S}_0^\tau)^{-1}(\tilde{S}-\tilde{S}_0^\tau)$ is bounded from $H^s_e(2\pi)$ into $H^{s+1}_e(2\pi)$, and therefore, in view of the Sobolev embedding theorem, it defines a compact mapping from $H^s_e(2\pi)$ into itself. Further, in view of Corollary~\ref{LemmaStephanSpace1} and the injectivity of the mapping~\eqref{Smapping} it follows that the operator $\tilde{S}:H^s_e(2\pi)\to H^{s+1}_e(2\pi)$ is injective, and therefore, so is \begin{equation}\label{FredholmReduction} \left(\tilde{S}_0^\tau \right)^{-1} \tilde{S} = I+\left(\tilde{S}_0^\tau\right)^{-1}(\tilde{S}-\tilde{S}_0^\tau): H^s_e(2\pi)\to H^{s}_e(2\pi). \end{equation} A direct application of the Fredholm theory thus shows that the operator~\eqref{FredholmReduction} is continuously invertible, and the lemma follows.
\end{proof} \subsection{Bicontinuity of the operator $\tilde{N}$} To study the mapping properties of the operator $\tilde{N}$ we rely on Lemma~\ref{Nfact_lemma} below where, as in~\cite{Monch}, the operator $\tilde{N}$ is re-cast in terms of an expression which involves tangential differential operators (cf. also~\cite[Th. 2.23]{ColtonKress1} for the corresponding result for closed surfaces). The relationships between normal vectors, tangent vectors and parametrizations used throughout are laid down in the following definition. \begin{definition}\label{tangent} For a given (continuous) selection of the normal vector $\textbf{n}=\textbf{n}(\textbf{r})$ on $\Gamma$, the tangent vector $\textbf{t}(\textbf{r})$ is the unit vector that results from a $90^\circ$ clockwise rotation of $\textbf{n}(\textbf{r})$. Throughout this paper it is further assumed that the parametrization $\textbf{r} = \textbf{r}(t)$ of the curve $\Gamma$ has been selected in such a way that \begin{equation} \frac{d\textbf{r}}{dt}(t) = \left| \frac{d\textbf{r}}{dt} \right| \textbf{t}(\textbf{r}(t)). \end{equation} \end{definition} \begin{lemma}\label{Nfact_lemma} For $\varphi \in C^\infty(\Gamma)$, and for $t \in (-1,1)$, the quantity $N_\omega[\varphi](t)$ defined by equation~\eqref{NOmegaDef} can be expressed in the form \begin{equation}\label{NomegaFact} N_\omega[\varphi](t)=N_\omega^g[\varphi](t)+N_\omega^{pv}[\varphi](t) \end{equation} where \begin{equation}\label{NomegaG} N_\omega^g[\varphi](t)=k^2\int_{-1}^1 G_k(\mathbf{r}(t),\mathbf{r}(t'))\;\varphi(t')\; \tau(t')\;\sqrt{1-t'^2}\;\mathbf{n}_t \cdot \mathbf{n}_{t'}\;dt',\end{equation} and where \begin{equation}\label{NomegaPV} N_\omega^{pv}[\varphi](t)=\frac{1}{\tau(t)}\frac{d}{dt}\left(\int_{-1}^1 G_k(\mathbf{r}(t),\mathbf{r}(t'))\;\frac{d}{dt'} \left( \varphi(t')\;\sqrt{1-t'^2}\right)dt'\right). \end{equation} \end{lemma} \begin{proof} See Appendix~\ref{Nfact_appendix}, cf.~\cite{Kress,ColtonKress1,Monch}.
\end{proof} In order to continue with our treatment of the operator $\tilde{N}$ we note that, using the changes of variables $t=\cos\theta$ and $t'=\cos\theta'$ in equations~\eqref{NomegaG} and~\eqref{NomegaPV} together with the notation~\eqref{phi_psi_theta}, for $\varphi\in C^\infty(\Gamma)$ and for $\theta\in (0,\pi)$ we obtain \begin{equation}\label{Nfact} \tilde{N}[\tilde\varphi]=\tilde{N}^{g}[\tilde\varphi]+\tilde{N}^{pv}[\tilde\varphi], \end{equation} where \begin{equation}\label{Ng} \tilde{N}^g[\tilde\varphi](\theta)=k^2\int_{0}^\pi G_k(\mathbf{r}(\cos\theta),\mathbf{r}(\cos\theta')) \;\tilde\varphi(\theta')\; \tau(\cos\theta')\;\sin^2\theta'\; \mathbf{n}_\theta \cdot \mathbf{n}_{\theta'}\;d\theta', \end{equation} and where, taking into account equations~\eqref{D0Def} and~\eqref{T0Def}, \begin{equation}\label{Npv} \tilde{N}^{pv}[\tilde\varphi](\theta) = \frac{1}{\tau(\cos\theta)}\left(\tilde D_0 \tilde{S} \tilde T_0^\tau\right)[\tilde\varphi](\theta), \end{equation} with \begin{equation}\label{t_0_tau} \tilde T_0^\tau[\tilde\varphi](\theta)=\frac{1}{\tau(\cos\theta)}\tilde T_0[\tilde \varphi](\theta). \end{equation} \begin{lemma}\label{N_pv_bounded} Let $s \geq 0$. The operator $\tilde{N}^{pv}$ defines a bounded mapping from $H^{s+1}_e(2\pi)$ to $H^s_e(2\pi)$. Further, the difference $( \tilde{N}^{pv}-\tilde{N}_0^\tau )$ (see equation~\eqref{n0_tau_def}) defines a bounded mapping from $H^{s+1}_e(2\pi)$ into $H^{s+1}_e(2\pi)$. \end{lemma} \begin{proof} Using~\eqref{N0Def},~\eqref{n0_tau_def} and~\eqref{Npv} we obtain \begin{equation}\label{pv_0} {\tilde N}^{pv}[\tilde \varphi] =\tilde{N}^\tau_0[\tilde \varphi] + \frac{1}{\tau(\cos\theta)} \tilde D_0 (\tilde{S}-\tilde{S}_0^\tau) \tilde T_0^\tau [\tilde \varphi]. \end{equation} As shown in Theorem~\ref{j_0_tau_thm}, the operator $\tilde{N}_0^\tau : H^{s+1}_e(2\pi)\to H^s_e(2\pi)$ on the right-hand side of this equation is bounded.
To establish the continuity of the second term on the right-hand side of equation~\eqref{pv_0} we first note that, in view of equation~\eqref{T0Def}, the operator $\tilde T_0 : H^{s+1}_e(2\pi)\to H^s_e(2\pi)$ is bounded, and therefore, so is $\tilde T_0^\tau$. Further, as shown in Lemma~\ref{S_bounded}, the operator $(\tilde{S}-\tilde{S}^\tau_0)$ maps continuously $H^s_e(2\pi)$ into $H^{s+3}_e(2\pi)$, so that, to complete the proof, it suffices to show that the operator $\tilde D_0$ maps continuously $H^{s+3}_e(2\pi)$ into $H^{s+1}_e(2\pi)$. But, for $\tilde\psi\in H^{s+3}_e(2\pi)$ ($s>0$) we can write \[ \tilde D_0[\tilde \psi](\theta) = \frac{1}{\sin\theta}\int_0^\theta \frac{d^2\tilde\psi}{du^2}(u)\, du, \] and since the zero-th order term in the cosine expansion of $\frac{d^2\tilde\psi}{d\theta^2}$ vanishes, in view of~\eqref{Cdef} we have \[ \tilde D_0[\tilde \psi] = \tilde{C}\left[ \frac{d^2 \tilde \psi}{d\theta^2}\right]. \] It therefore follows from Lemma~\ref{Cprop} that the second term in~\eqref{pv_0}, which equals the difference $\tilde{N}^{pv}-\tilde{N}_0^\tau$, defines a continuous map from $H^{s+1}_e(2\pi)$ into $H^{s+1}_e(2\pi)$, as claimed. \end{proof} \begin{corollary}\label{N_bounded} For all $s\geq 0$ the operator $\tilde{N}$ can be extended to a continuous linear map from $H^{s+1}_e(2\pi)$ to $H^s_e(2\pi)$. Further, the difference $\tilde{N}-\tilde{N}_0^\tau$ defines a continuous operator from $H^{s+1}_e(2\pi)$ to $H^{s+1}_e(2\pi)$. \end{corollary} \begin{proof} From equation~\eqref{Ng} we see that $\tilde{N}^g$ has the same mapping properties as $\tilde{S}$ (Lemma~\ref{S_bounded}), namely \begin{equation}\label{ng_cont} \tilde N^g: H^{s+1}_e(2\pi)\to H^{s+2}_e(2\pi) \quad \mbox{ is continuous}.
\end{equation} In view of Lemma~\ref{N_pv_bounded} it therefore follows that the right-hand side of equation~\eqref{Nfact}, \begin{equation}\label{Npvg} \tilde{N}^g+\tilde{N}^{pv} : H^{s+1}_e(2\pi)\to H^{s}_e(2\pi), \end{equation} is a bounded operator for all $s\geq 0$. Equation~\eqref{Nfact} was established for functions $\tilde\varphi$ of the form~\eqref{phi_psi_theta} with $\varphi\in C^\infty(\Gamma)$. But the set of such functions $\tilde\varphi$ is dense in $H^{s+1}_e(2\pi)$ for all $s>0$---as can be seen by considering, e.g., that the Chebyshev polynomials span a dense set in $H^{s+1}[-1,1]$. It follows that $\tilde N$ can be uniquely extended to a continuous operator from $H^{s+1}_e(2\pi)$ to $H^{s}_e(2\pi)$, as claimed. Finally, $\tilde N- \tilde{N}_0^\tau = \tilde{N}^g +(\tilde N^{pv}-\tilde{N}^\tau_0)$ is continuous from $H^{s+1}_e(2\pi)$ into $H^{s+1}_e(2\pi)$, in view of equation~\eqref{ng_cont} and Lemma~\ref{N_pv_bounded}. \end{proof} The following lemma establishes a link, needed for our proof of Lemma~\ref{Ninverse}, between the domain of the unweighted hypersingular operator $\mathbf{N}$ considered in~\cite{Stephan} (equation~\eqref{Nmapping} above) and the corresponding possible domains of the weighted operator $\tilde{N}$ (equation~\eqref{Ndomain_3}); cf. also Corollary~\ref{LemmaStephanSpace1} where the corresponding result for the domains of the operators $\mathbf{S}$ and $\tilde{S}$ is given. \begin{lemma}\label{LemmaStephanSpace3} Let $\tilde\psi$ belong to $H^{s+1}_e(2\pi)$ for $s>0$, $\psi(t)=\tilde\psi( \arccos t)$, $\psi:\; [-1,1]\rightarrow \mathbb{C}$, $\beta(\mathbf{p})=\psi\left( \mathbf{r}^{-1}(\mathbf{p})\right)$, $W(\mathbf{p})=\omega\left( \mathbf{r}^{-1}(\mathbf{p})\right)$. Then the function $G=W\beta$ is an element of $\tilde{H}^{\frac{1}{2}}(\Gamma)$. \end{lemma} \begin{proof} It suffices to show that $g=\omega\psi\in \tilde{H}^{\frac{1}{2}}[-1,1]$ for the case $\Gamma = [-1,1]$.
Extending $g$ by $0$ outside the interval $[-1,1]$, the Fourier transform of $g$ is given by \begin{equation}\label{gxi0} \hat{g}(\xi)=\int_{-1}^1 \psi(t)e^{-i\xi t} \omega(t) dt=\int_0^\pi \psi(\cos\theta)e^{-i\xi\cos\theta}\sin^2\theta d\theta, \end{equation} since $\omega(t)=\sqrt{1-t^2}$ in the present case. Integrating by parts (the boundary terms vanish since $\sin\theta=0$ at $\theta=0,\pi$) we obtain \begin{equation}\label{gxi1} \hat{g}(\xi)=-\frac{1}{i\xi}\int_0^\pi \frac{\partial }{\partial \theta}\left\{\psi(\cos\theta)\sin\theta\right\} e^{-i\xi\cos\theta} d\theta. \end{equation} It is easy to check that $\frac{\partial}{\partial\theta}\{ \psi(\cos\theta) \sin\theta\}=\frac{\partial}{\partial\theta}\{ \tilde\psi(\theta) \sin\theta\}$ is an element of $H^s_e(2\pi)$ and, thus, in view of equation~\eqref{gxi1} together with Lemma~\ref{LemmaStephanSpace0} we obtain \begin{equation}\label{gxi} \int_\mathbb{R} \frac{|\hat{g}(\xi)|^2\xi^2}{(1+\xi^2)^\frac{1}{2}}d\xi <\infty. \end{equation} It thus follows that the second term on the right-hand side of the identity \begin{equation} \int_\mathbb{R} |\hat{g}(\xi)|^2(1 + |\xi|^2)^\frac{1}{2}d\xi = \int_\mathbb{R} \frac{|\hat{g}(\xi)|^2}{(1+\xi^2)^\frac{1}{2}}d\xi + \int_\mathbb{R} \frac{|\hat{g}(\xi)|^2\xi^2}{(1+\xi^2)^\frac{1}{2}}d\xi \end{equation} is finite. The first term is also finite, as can be seen by applying Lemma~\ref{LemmaStephanSpace0} directly to equation~\eqref{gxi0}. The function $g$ thus belongs to $\tilde{H}^\frac{1}{2}[-1,1]$, and the proof is complete. \end{proof} \begin{lemma}\label{Ninverse} For all $s > 0$ the operator $\tilde{N}: H^{s+1}_e(2\pi)\to H^{s}_e(2\pi)$ is invertible, and the inverse $\tilde{N}^{-1}: H^{s}_e(2\pi)\to H^{s+1}_e(2\pi)$ is a bounded operator.
\end{lemma} \begin{proof} In view of Theorem~\ref{j_0_tau_thm}, the operator $\tilde{N}_0^\tau\;\,:\; H^{s+1}_e(2\pi)\to H^s_e(2\pi)$ is bicontinuous, and we may thus write \begin{equation} \tilde{N}=\tilde{N}_0^\tau\left( I +\left(\tilde{N}^\tau_0\right)^{-1}(\tilde{N}-\tilde{N}^\tau_0)\right). \end{equation} Since, by Corollary~\ref{N_bounded}, the difference $\tilde{N}-\tilde{N}^\tau_0$ defines a bounded mapping from $H^{s+1}_e(2\pi)$ into $H^{s+1}_e(2\pi)$, it follows that the operator $\left(\tilde{N}^\tau_0\right)^{-1}(\tilde{N}-\tilde{N}^\tau_0)$ is bounded from $H^{s+1}_e(2\pi)$ into $H^{s+2}_e(2\pi)$ and, in view of the Sobolev embedding theorem, it is also compact from $H^{s+1}_e(2\pi)$ into $H^{s+1}_e(2\pi)$. The Fredholm theory can thus be applied to the operator \begin{equation}\label{id_p_comp} I+\left(\tilde{N}^\tau_0\right)^{-1}(\tilde{N}-\tilde{N}^\tau_0). \end{equation} This operator is also injective, in view of Lemma~\ref{LemmaStephanSpace3} and the bicontinuity of the map $ \mathbf{N}$ in equation~\eqref{Nmapping}, and it is therefore invertible. The lemma then follows from the bicontinuity of the operator $\tilde{N}_0^\tau$. \end{proof} \section{Generalized Calder\'on Formula: Proof of Theorem~\ref{NStheorem}}\label{five} Collecting results presented in previous sections, we can now present a proof of Theorem~\ref{NStheorem}. \begin{proof} The bicontinuity of the operators $\tilde{S}$, $\tilde{N}$ and $\tilde{N}\tilde{S}$ follows directly from Lemmas~\ref{S_bounded},~\ref{Sinverse},~\ref{Ninverse} and Corollary~\ref{N_bounded}.
To establish equation~\eqref{NSfact}, on the other hand, we write \begin{equation} \tilde{N}\tilde{S}=\tilde{N}_0^\tau\tilde{S}_0^\tau + \tilde K =\tilde{J}_0^\tau + \tilde K \end{equation} where, as shown in Theorem~\ref{j_0_tau_thm}, $\tilde{J}_0^\tau$ is bicontinuous, and where \begin{equation} \tilde K=\tilde{N}(\tilde{S} - \tilde{S}_0^\tau) + (\tilde{N}- \tilde{N}_0^\tau)\tilde{S}_0^\tau. \end{equation} In view of Lemma~\ref{S_bounded}, Corollary~\ref{N_bounded} and Theorem~\ref{j_0_tau_thm}, the operator $\tilde K$ maps $H^s_e(2\pi)$ into $H^{s+1}_e(2\pi)$ and is therefore compact from $H^s_e(2\pi)$ into $H^s_e(2\pi)$. The proof is now complete. \end{proof} \section*{Acknowledgments} The authors gratefully acknowledge support from the National Science Foundation and the Air Force Office of Scientific Research. \appendix \section{Proof of Lemma~\ref{Nfact_lemma}\label{Nfact_appendix}} \begin{proof} Assuming $\varphi\in C^\infty(\Gamma)$, we define the weighted double layer potential by \begin{equation} \mathbf{D}_\omega[\alpha](\textbf{r})=\int_\Gamma \frac{\partial G(\textbf{r},\textbf{r}')}{\partial \textbf{n}_{\textbf{r}'}}\alpha(\textbf{r}')\omega(\textbf{r}')d\ell',\quad \mbox{\textbf{r} outside } \Gamma, \end{equation} which, following the related closed-surface calculation presented in \cite[Theorem 2.23]{ColtonKress1}, we rewrite as \begin{equation}\label{Ndiv} \mathbf{D}_\omega[\alpha](\textbf{r})=-\mathrm{div}_{\textbf{r}} \textbf{E}[\alpha](\textbf{r}) \end{equation} where \begin{equation}\label{vecE} \textbf{E}[\alpha](\textbf{r}) = \int_\Gamma G_k(\textbf{r},\textbf{r}')\alpha(\textbf{r}')\omega(\textbf{r}')\textbf{n}(\textbf{r}')d\ell'.
\end{equation} Since $\textbf{E}[\alpha] = \textbf{E}=(E_x,E_y)$ satisfies $\Delta\textbf{E}+k^2\textbf{E} = 0$, the two-dimensional gradient of its divergence can be expressed in the form \begin{equation} \mathrm{grad}\;\mathrm{div}\textbf{E}=-k^2\textbf{E}+ \left(\frac{\partial }{\partial y} \mathrm{curl}\;\textbf{E}, -\frac{\partial}{\partial x}\mathrm{curl}\;\textbf{E} \right), \end{equation} where the {\em scalar rotational} of a two-dimensional vector field $\textbf{A} = (A_x,A_y)$ is defined by $\mathrm{curl}\;\textbf{A} = (\frac{\partial A_y}{\partial x} -\frac{\partial A_x}{\partial y})$. Since ${\rm curl}_{\textbf{r}}\left(\textbf{n}(\textbf{r}') G(\textbf{r},\textbf{r}') \right )$ equals $-\textbf{t}(\textbf{r}')\cdot \nabla_{\textbf{r}'} G(\textbf{r},\textbf{r}') $ (see definition~\ref{tangent}), we obtain \begin{equation}\label{Fexp} {\rm curl}_{\textbf{r}}\textbf{E}[\alpha](\textbf{r})=-\int_{-1}^1 \frac{d G_k(\textbf{r},\textbf{r}(t'))}{dt'}\alpha(\mathbf{r}(t'))\omega(\mathbf{r}(t'))dt'. \end{equation} Therefore, taking the gradient of~\eqref{Ndiv}, letting $\varphi(t') = \alpha(\mathbf{r}(t'))$, using~\eqref{canon_sq}, integrating~\eqref{Fexp} by parts, and noting that the boundary terms vanish identically (since $\sqrt{1-t'^2} =0$ for $t' =\pm 1$), we see that \begin{equation}\label{gradDomega} \mbox{grad}\; \mathbf{D}_\omega[\alpha](\textbf{r})=k^2\int_\Gamma G(\textbf{r},\textbf{r}')\alpha(\textbf{r}')\textbf{n}(\textbf{r}')\omega(\mathbf{r}')d\ell'-\left(\frac{\partial A}{\partial y},-\frac{\partial A}{\partial x}\right), \end{equation} where \begin{equation}\label{A} A(\textbf{r})= \int_{-1}^1G(\textbf{r},\textbf{r}(t'))\frac{d}{dt'} \big(\varphi(t')\sqrt{1-t'^2}\big)dt'.
\end{equation} In view of the continuity of the tangential derivatives of single layer potentials across the integration surface (e.g.~\cite[Theorem 2.17]{ColtonKress1}), in the limit as $\textbf{r}\to\textbf{r}(t)\in \Gamma$ we obtain \begin{equation} \left(\frac{\partial A(\textbf{r}(t))}{\partial y},-\frac{\partial A(\textbf{r}(t))}{\partial x}\right)\cdot\textbf{n}(\textbf{r}(t))=-\frac{1}{\tau(t)}\frac{dA(\textbf{r}(t))}{dt}, \end{equation} and the decomposition~(\ref{NomegaFact}) results. \end{proof} \section{Asymptotic behavior of $\mathbf{N}\mathbf{S}[1]$\label{NSOne}} In this section we demonstrate the poor quality of the composition $\mathbf{N}\mathbf{S}$ of the unweighted hypersingular and single-layer operators by means of an example: we consider the flat arc $[-1,1]$ at zero frequency ($\mathbf{N}\mathbf{S}=\mathbf{N}_0\mathbf{S}_0$). In detail, in Section~\ref{app_b1} we show that the image of $\mathbf{S}$ is not contained in the domain of $\mathbf{N}$ (and, thus, the formulation $\mathbf{N}\mathbf{S}$ cannot be placed in the functional framework~\cite{Stephan,StephanWendland,StephanWendland2}), and in Section~\ref{app_b2} we study the edge asymptotics of the function $\mathbf{N}\mathbf{S}[1]$ which show, in particular, that the function $1$ (which itself lies in $H^s[-1,1]$ for arbitrarily large values of $s$) is mapped by the operator $\mathbf{N}\mathbf{S}$ into a function which does not belong to the Sobolev space $H^{-\frac{1}{2}}[-1,1]$, nor, therefore, to any space $H^s[-1,1]$ with $s\geq -1/2$. We thus consider the {\em unweighted} single-layer and hypersingular operators which, in the present flat-arc, zero-frequency case, take particularly simple forms.
In view of~\eqref{Sdef}, the {\em parameter-space form} of the unweighted single-layer operator (which is defined in a manner analogous to that inherent in equation~\eqref{sOmegaDef} and related text) is given by \begin{equation}\label{S00} S_0[\varphi](x)=-\frac{1}{2\pi}\int_{-1}^1\ln|x-s|\varphi(s)ds. \end{equation} With regard to the parameter-space form $N_0$ of the hypersingular operator~\eqref{Ndef} we note, with reference to that equation, that in the present zero-frequency flat-arc case we have $\mathbf{r}=(x,0)$, $z\textbf{n}_{\mathbf{r}} = (0,z)$ and $-d/d\textbf{n}_{\mathbf{r}'}=d/dz$. Since, additionally, the single layer potential~\eqref{single_l} yields a solution of the Laplace equation in the variables $(x,z)$, we have \begin{equation}\label{N0zlimit} 4\pi N_0[\varphi](x)=-\lim_{z\rightarrow 0 } \frac{d^2}{dx^2}\int_{-1}^1 \varphi(s) \ln((x-s)^2+z^2)ds, \end{equation} or equivalently, \begin{equation}\label{N0tangeant} N_0[\varphi](x)=\frac{1}{4\pi}\lim_{z\rightarrow 0 } \frac{d}{dx}\int_{-1}^1 \varphi(s) \frac{d}{ds} \ln((x-s)^2+z^2)ds. \end{equation} Note that, in view of the classical regularity theory for the Laplace equation, letting $z$ tend to zero for $-1<x<1$ in equation~\eqref{N0zlimit} we also obtain, for smooth $\varphi$, \begin{equation}\label{N00} N_0[\varphi](x)=\frac{d^2}{dx^2}S_0[\varphi](x),\quad -1<x<1. \end{equation} \subsection{The operator $S_0$\label{app_b1}} Integrating~\eqref{S00} by parts we obtain \begin{equation} -2\pi S_0[1](x)=\int_{-1}^1 \frac{d(s-x)}{ds}\ln|s-x|ds = (1-x)\ln(1-x) + (1+x)\ln(1+x) -2, \end{equation} and therefore \begin{equation}\label{S0base} S_0[1](x)=\frac{1}{2\pi}\big( 2 - (1-x)\ln(1-x) -(1+x)\ln(1+x)\big). \end{equation} Incidentally, this expression shows that the unweighted single-layer operator does not map $C^\infty$ functions into $C^\infty$ functions up to the edge; a more general version of this result is given in~\cite[p. 182]{Tricomi}.
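The closed form~\eqref{S0base} is easily validated against direct numerical quadrature of~\eqref{S00} applied to $\varphi=1$. The following minimal Python sketch (the function names and the midpoint-rule discretization are ours, not part of the formulation above) agrees with the closed form to about three decimal places; the midpoint grid is chosen so as never to land on the integrable logarithmic singularity at $s=x$:

```python
import math

def s0_one_exact(x):
    # Closed form S_0[1](x) = (2 - (1-x)ln(1-x) - (1+x)ln(1+x)) / (2 pi), |x| < 1.
    return (2.0 - (1.0 - x) * math.log(1.0 - x)
                - (1.0 + x) * math.log(1.0 + x)) / (2.0 * math.pi)

def s0_one_quad(x, n=100_000):
    # Composite midpoint rule for S_0[1](x) = -(1/2pi) int_{-1}^{1} ln|x - s| ds.
    # The midpoints s_k = -1 + (k + 1/2)h avoid s = x for the x values used below,
    # and the logarithmic singularity is integrable, so the rule converges.
    h = 2.0 / n
    acc = 0.0
    for k in range(n):
        s = -1.0 + (k + 0.5) * h
        acc += math.log(abs(x - s))
    return -acc * h / (2.0 * math.pi)

for x in (0.3, -0.7):
    assert abs(s0_one_exact(x) - s0_one_quad(x)) < 1e-3
```

A Gauss or graded-mesh rule would converge much faster; the midpoint rule is used here only because it handles the singular integrand with no special treatment.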
The following two lemmas provide details on certain mapping properties of the operator $S_0$. \begin{lemma}\label{S0reg1} The image $S_0[1]$ of the constant function $1$ by the operator~\eqref{S00} is an element of $H^\frac{1}{2}[-1,1]$. \end{lemma} \begin{proof} Let $\Gamma_1$ be a closed, smooth curve which includes the segment $[-1,1]$. Clearly, the function \begin{equation}\label{f1} f_1(s)=\left\{ \begin{array}{ll}1,& s \in [-1,1]\\ 0,& s\in \Gamma_1\backslash [-1,1]\end{array}\right. \end{equation} belongs to $L^2(\Gamma_1)$ and therefore to $H^{-\frac{1}{2}}(\Gamma_1)$, so that, according to Definition~\ref{htil_def}, the constant 1 is in the space $\tilde{H}^{-\frac{1}{2}}[-1,1]$. In view of equation~\eqref{Smapping}, it follows that $S_0[1]\in H^{\frac{1}{2}}[-1,1]$. \end{proof} \begin{lemma}\label{S0reg_ex} The image $S_0[1]$ of the constant function $1$ by the operator~\eqref{S00} is not an element of $\tilde H^\frac{1}{2}[-1,1]$. \end{lemma} \begin{proof} In view of~\eqref{S0base} and the fact that $S_0[1](x)$ is an even function of $x$, integration by parts yields \begin{equation}\label{S0IBP} \int_{-1}^1 e^{-i\xi x}S_0[1](x)dx = \frac{2\sin \xi}{\xi}S_0[1](1)+ \frac{1}{2\pi i\xi}\int_{-1}^1 e^{-i\xi x }\ln\frac{1-x}{1+x}dx.\end{equation} Taking into account the identities \cite[eq. 4.381, p 577]{Gradshteyn} \begin{equation}\label{sinlogint} \left\{\begin{array}{ll} \int_0^1\ln x \cos \xi x\, dx &= -\frac{1}{\xi}\left[\mathrm{si}(\xi) + \frac{\pi}{2} \right], \\ \int_0^1\ln x \sin \xi x\, dx &= -\frac{1}{\xi}\left[\mathbf{C} +\ln \xi - \mathrm{ci}(\xi) \right],\end{array} \right.
\quad \xi>0 \end{equation} where $\mathbf{C}$ is the Euler constant, and where $\mathrm{si}(\xi)$ and $\mathrm{ci}(\xi)$ are the sine and cosine integrals respectively (both of which are bounded functions of $\xi$ as $|\xi|$ tends to infinity), it is easily verified that the second term in~\eqref{S0IBP} behaves asymptotically as $\frac{\ln(\xi)}{\xi^2}$ as $\xi$ tends to infinity. Clearly, the first term of~\eqref{S0IBP} decays as $O(\frac{1}{\xi})$, and therefore \begin{equation}\label{FT_S0} \left|\int_{-1}^1 e^{-i\xi x}\left(S_0[1](x)\right) dx \right|^2 = O\left(\frac{1}{\xi^2}\right), \quad \xi \rightarrow \infty. \end{equation} Equation~\eqref{FT_S0} (whose decay rate is sharp on average, since $S_0[1](1)=\frac{1-\ln 2}{\pi}\neq 0$) tells us that the function $\varphi:\mathbb{R}\to \mathbb{R}$ which equals $S_0[1](x)$ for $x$ in the interval $[-1,1]$ and equals zero in the complement of this interval does not belong to $H^\frac{1}{2}\left(\mathbb{R} \right)$, and, thus, $S_0[1]\notin\tilde H^\frac{1}{2}[-1,1]$, as claimed. \end{proof} \begin{remark}\label{S0remark} Lemmas~\ref{S0reg1} and~\ref{S0reg_ex} demonstrate that, as pointed out in Section~\ref{intro}, the formulation $\mathbf{N}\mathbf{S}$ of the open-curve boundary-value problems under consideration cannot be placed in the functional framework put forth in~\cite{Stephan,StephanWendland,StephanWendland2} and embodied by equations~\eqref{Smapping},~\eqref{Nmapping} and Definition~\ref{htil_def}: the image of the operator $\mathbf{S}$ is not contained in the domain of definition of the operator $\mathbf{N}$; see equations~\eqref{Smapping} and~\eqref{Nmapping}. \end{remark} \subsection{The combination $N_0S_0$\label{app_b2}} While, as pointed out in the previous section, $S_0[1]$ does not belong to the domain of definition of $N_0$ (as set up by the formulation~\eqref{Smapping},~\eqref{Nmapping}), the quantity $N_0S_0[1](x)$ can be evaluated pointwise for $|x|<1$, and it is instructive to study its asymptotics as $x\to \pm 1$.
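The strength of the edge singularity can be anticipated heuristically (this brief sketch is ours and is not needed for the proofs below): by~\eqref{N00} and~\eqref{S0base},
\begin{equation*}
N_0[1](x)=\frac{d^2}{dx^2}S_0[1](x)=-\frac{1}{\pi(1-x^2)} \qquad\mbox{and}\qquad S_0[1](\pm 1)=\frac{1-\ln 2}{\pi},
\end{equation*}
so the action of $N_0$ on the constant endpoint value of $S_0[1]$ alone already contributes the term $-\frac{1}{\pi(1-x^2)}\cdot\frac{1-\ln 2}{\pi}=\frac{\ln 2-1}{\pi^2(1-x^2)}$, which fails to be square integrable near $x=\pm 1$. The computations that follow show that this is in fact the exact strength of the singularity of $N_0S_0[1]$.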
\begin{lemma}\label{N0L2} $N_0S_0[1]$ can be expressed in the form \begin{equation}\label{N0S01} N_0S_0[1](x)=\frac{\ln2-1}{\pi^2(1-x^2)} + \mathcal{L}(x), \end{equation} where $\mathcal{L}\in L^2[-1,1]$. \end{lemma} \begin{proof} In view of~\eqref{S0base} we have \begin{equation}\label{N0S0_start} N_0S_0[1](x)=\frac{1}{\pi}N_0[1](x) - \frac{1}{2\pi}N_0[g](x), \end{equation} where \begin{equation} g(x)=(1-x)\ln(1-x) + (1+x)\ln(1+x). \end{equation} For the first term on the right-hand side of this equation we obtain from~\eqref{N00} and~\eqref{S0base} \begin{equation}\label{N0_1} N_0[1](x)= -\frac{1}{\pi(1-x^2)}\, . \end{equation} To evaluate the second term $N_0[g]$ in equation~\eqref{N0S0_start}, in turn, we first integrate equation~\eqref{N0tangeant} by parts and take the limit as $z\to 0$, thus obtaining \begin{equation} \begin{split} N_0[g](x)=\frac{1}{2\pi}\left(\frac{d}{dx}\left(\big[\ln|x-s|g(s)\big]_{-1}^1\right) - \frac{d}{dx}\int_{-1}^1 \ln|x-s| \frac{d}{ds}g(s)ds\right)\\ =\frac{\ln 2}{\pi}\frac{d}{dx}\left(\ln\left(\frac{1-x}{1+x}\right) \right) + \frac{1}{2\pi}\frac{d}{dx}\int_{-1}^1 \ln|x-s| \ln\left(\frac{1-s}{1+s}\right)ds, \end{split} \end{equation} or \begin{equation}\label{N0pv} N_0[g](x)=\frac{-2\ln2}{\pi(1-x^2)}-\frac{1}{2\pi}p.v.\int_{-1}^1\ln\left(\frac{1-s}{1+s}\right)\frac{1}{s-x}ds. \end{equation} Clearly, to complete the proof it suffices to establish that the functions \begin{equation} \mathcal{L}^+(x)=p.v.\int_{-1}^1\frac{\ln(1-s)}{s-x}ds\quad\mbox{and}\quad \mathcal{L}^-(x)=p.v.\int_{-1}^1\frac{\ln(1+s)}{s-x}ds \end{equation} are elements of $L^2[-1,1]$. Let us consider the function $\mathcal{L}^+$ for $x\geq 0$ first.
Re-expressing $\mathcal{L}^+(x)$ as the sum of the integral over the interval $[x-(1-x), x + (1-x)] = [2x-1,1]$ (which is symmetric with respect to $x$) plus the integral over $[-1,2x-1]$, and using a simple change of variables, we obtain \begin{equation}\label{L0} \mathcal{L}^+(x)= \int_0^{1-x}\frac{\ln(1-x-u)-\ln(1-x+u)}{u}du + \int_{1-x}^{1+x}\frac{\ln(1-x+u)}{u}du. \end{equation} Letting $z = 1-x$ and $v = \frac{u}{z}$, we see that the first integral in~\eqref{L0} is a constant function of $x$: \begin{equation}\label{FirstInt} \int_0^{z}\frac{\ln(z-u)-\ln(z+u)}{u}du = \int_0^1\frac{\ln(1-v)-\ln(1+v)}{v}dv = \mbox{const}. \end{equation} For the second integral in~\eqref{L0}, on the other hand, we write \begin{equation}\label{SecondInt} \begin{split} \int_{1-x}^{1+x}\frac{\ln(1+u-x)}{u}du =\int_{1-x}^{1+x}\frac{\ln(1+\frac{u}{1-x})}{u}du + \ln(1-x)\int_{1-x}^{1+x}\frac{du}{u}\\ =\int_1^{\frac{1+x}{1-x}}\frac{\ln(1+v)}{v}dv + \ln(1-x)\ln\left(\frac{1+x}{1-x}\right)\\ =\int_1^{\frac{1+x}{1-x}}\frac{\ln(1+v)}{1+v}dv +\int_1^{\frac{1+x}{1-x}}\frac{\ln(1+v)}{v(1+v)}dv + \ln(1-x)\ln\left(\frac{1+x}{1-x}\right)\\ =\frac{1}{2}\left(\ln^2\left(\frac{2}{1-x}\right)-\ln^2 2\right) +\int_1^{\frac{1+x}{1-x}}\frac{\ln(1+v)}{v(1+v)}dv + \ln(1-x)\ln\left(\frac{1+x}{1-x}\right). \end{split} \end{equation} Since the second term on the last line of equation~\eqref{SecondInt} is bounded for $0\leq x<1$, it follows that, in this interval, the function $\mathcal{L}^+(x)$ equals a bounded function plus a sum of logarithmic terms and is thus an element of $L^2[0,1]$. Using a similar calculation it is easily shown that $\mathcal{L}^+(x)$ is bounded for $-1\leq x <0$, and it thus follows that $\mathcal{L}^+\in L^2[-1,1]$, as desired. Analogously, we have $\mathcal{L}^-\in L^2[-1,1]$, and the lemma follows. \end{proof} \begin{corollary} Let $\Gamma = [-1,1]$.
Then $\mathbf{N}\mathbf{S}[1]$ does not belong to the codomain $H^{-\frac{1}{2}}[-1,1]$ of the operator $\mathbf{N}$ in equation~\eqref{Nmapping}. \end{corollary} \begin{proof} In view of Lemma~\ref{N0L2} it suffices to show that the function $h(x)=\frac{1}{1-x^2}$ does not belong to $H^{-\frac{1}{2}}[-1,1]$, or, equivalently, that the primitive $k(x)=-\frac{1}{2}\ln\frac{1-x}{1+x}$ of $h$ does not belong to $H^\frac{1}{2}[-1,1]$. Clearly, to establish that $k\notin H^\frac{1}{2}[-1,1]$ it suffices to show that the function $\ell(x)=p(x)\ln(x)$ is not an element of $H^\frac{1}{2}(0,\infty)$, where $p$ is a smooth auxiliary function defined for $x\geq 0$ which equals 1 in the interval $[0,1]$ and which vanishes outside the interval $[0,2]$. To do this we appeal to the criterion~\cite[p. 54]{LionsMagenes} \[ \ell\in H^\frac{1}{2}(0,\infty) \iff \ell\in L^2(0,\infty) \quad \mbox{and}\quad \int_0^\infty t^{-2}dt\int_0^\infty \left|\ell(x+t) - \ell(x) \right|^2dx< \infty. \] To complete the proof of the corollary it thus suffices to show that the integral \[ I = \int_0^\infty t^{-2}dt\int_0^1 \left|\ln(x+t) - \ln(x) \right|^2dx \] is infinite. But, using the change of variables $u=\frac{t}{x}$, we obtain \begin{equation} I=\int_0^{\infty}\frac{1}{t}dt\left(\int_t^\infty \frac{\left|\ln(1+u)\right|^2}{u^2}du\right)=\infty \end{equation} (since the inner integral tends to a finite positive limit as $t\to 0^+$, the integrand is bounded below by a positive multiple of $\frac{1}{t}$ near the origin), and the corollary follows. \end{proof} \end{document}
\begin{document} \title{Basic quasi-Hopf algebras over cyclic groups} \author{Iv\'an Ezequiel Angiono} \address{Facultad de Matem\'atica, Astronom\'\i a y F\'\i sica \newline \indent Universidad Nacional de C\'ordoba \newline \indent CIEM -- CONICET \newline \indent (5000) Ciudad Universitaria, C\'ordoba, Argentina} \email{[email protected]} \date{\today} \thanks{ {\it Key words and phrases:} Quasi-Hopf algebras, finite tensor categories, pointed categories, pointed Hopf algebras. } \begin{abstract} Let $m$ be a positive integer not divisible by $2,3,5,7$. We generalize the classification of basic quasi-Hopf algebras over cyclic groups of prime order given in \cite{EG3} to the case of cyclic groups of order $m$. To this end, we introduce a family of non-semisimple radically graded quasi-Hopf algebras $A(H,s)$, constructed as subalgebras of Hopf algebras twisted by a quasi-Hopf twist, which are not twist equivalent to Hopf algebras. Any basic quasi-Hopf algebra over a cyclic group of order $m$ is either semisimple, or is twist equivalent to a Hopf algebra or a quasi-Hopf algebra of type $A(H,s)$. \end{abstract} \maketitle \section{Introduction}\label{section:introduction} A finite dimensional associative algebra is \emph{basic} if all its irreducible representations are 1-dimensional. Dually, we obtain pointed coalgebras. Thus, the problem of classification of basic Hopf algebras up to isomorphism is equivalent to the problem of classification of finite dimensional pointed Hopf algebras up to isomorphism. When the group $G(H)$ of grouplike elements of a finite dimensional pointed Hopf algebra $H$ is abelian of order not divisible by $2,3,5,7$, this problem was solved by Andruskiewitsch and Schneider, see \cite{AS4}. One of the main difficulties is, once one knows all the coradically graded pointed Hopf algebras which are finite dimensional, to obtain all the liftings; i.e.
for any coradically graded $H_0$, to find all the Hopf algebras $H$ whose associated graded Hopf algebra is $H_0$. The result of Andruskiewitsch-Schneider also yields a classification of pointed finite tensor categories with abelian groups of grouplike elements of order not divisible by $2,3,5,7$ which have a fiber functor, as the categories of comodules over such pointed Hopf algebras (see \cite{EO}). Moreover, by Masuoka's Theorem \cite[Thm. A1]{Ma}, the equivalence classes of such categories reduce to the graded case, because the category of comodules over a lifting $H$ of $H_0$ is equivalent to the category of comodules over $H_0$. In what follows, $\mathbf{k}$ will denote an algebraically closed field of characteristic zero. All the algebras and tensor categories considered in this work are over $\mathbf{k}$. The general problem of classification of pointed finite tensor categories (not necessarily having a fiber functor) reduces to the classification of basic quasi-Hopf algebras up to twist, and it is closely related to the classification of pointed Hopf algebras. The first approach to this problem was suggested by Etingof and Gelaki in a series of papers, in which they classified pointed finite tensor categories whose group of invertible objects has prime order (see \cite{EG3} for a complete answer). To do so, they considered the quasi-Hopf algebras $A(q)$ with non-trivial associator constructed in \cite{G}. This family completes the list of such categories, together with the categories of representations of Hopf algebras and the semisimple ones. In this work we classify pointed tensor categories such that their group of invertible objects is cyclic, of order not divisible by $2,3,5,7$. This restriction on the order comes mainly from the classification Theorem for pointed Hopf algebras over abelian groups in \cite{AS4}.
In fact, we can classify basic radically graded quasi-Hopf algebras over cyclic groups of odd order, but restrict as above when we consider liftings of these algebras. Therefore the main Theorem could still hold for any cyclic group of odd order if one can extend the theory of liftings to any group of odd order. This family of categories has a subfamily corresponding to non-semisimple quasi-Hopf algebras $A(H,s)$, constructed in a similar way to the family of quasi-Hopf algebras $A(q)$ of Gelaki, from radically graded Hopf algebras. Consider $H=\oplus_{n \geq 0} H(n)$ a radically graded Hopf algebra, generated by a group-like element $\chi$ of order $m^{2}$ and skew primitive elements $x_1,...,x_{\theta}$ such that: \begin{equation}\label{skewprimitives} \chi x_i\chi^{-1} = q^{d_i}x_i, \quad \Delta(x_i)= x_i \otimes \chi^{b_i} + 1 \otimes x_i, \end{equation} where $q$ is a root of unity of order $m^{2}$. Call \begin{equation}\label{solutionsHforpossiblecocycles} \Upsilon(H):= \left\{ s \in \{1, \ldots, m-1\}: b_i \equiv sd_i \pmod{m}, \, 1 \leq i \leq \theta \right\}. \end{equation} Consider its subalgebra $A(H,s)$ generated by $\sigma:=\chi^{m}$ and $x_1,...,x_{\theta}$. Modifying the coalgebra structure of $H$ by a twist $J_s \in H\otimes H$ (there exists one $J_s$ for each $s\in \Upsilon(H)$), we shall prove that $A(H,s)$, with the coalgebra structure induced by restriction, is a quasi-Hopf algebra which is not twist equivalent to a Hopf algebra. As we will prove that liftings of quasi-Hopf algebras $A(H,s)$ come from de-equivariantizations of liftings of Hopf algebras, Masuoka's Theorem simplifies the classification problem: we can restrict to the radically graded case.
The main result of this work is the following: \begin{thm}\label{thm:classificationqHA} Let $A$ be a quasi-Hopf algebra such that its radical is a quasi-Hopf ideal, and $A/\operatorname{Rad} A \cong \mathbf{k}[{\mathbb Z}_m]$ as algebras, for some $m\in \mathbb{N}$ not divisible by primes $\leq 7$. Then $A$ is equivalent by a twist to one of the following: \begin{enumerate} \item a radically graded finite-dimensional Hopf algebra $A$ such that $$ A/ \operatorname{Rad} A \cong \mathbf{k} [{\mathbb Z}_m] \mbox{, or}$$ \item a semisimple quasi-Hopf algebra $\mathbf{k}[{\mathbb Z}_m]$, with associator given by $\omega_s \in H^3({\mathbb Z}_m, \mathbf{k}^{\times})$, $s\in \{1,...,m-1\}$, or \item a quasi-Hopf algebra $A(H,s)$, where $H$ is a radically graded Hopf algebra such that $H/ \operatorname{Rad} H \cong \mathbf{k} [{\mathbb Z}_{m^2}]$, and $s \in \Upsilon(H)$. \end{enumerate} \end{thm} We will give the proof in Subsection \ref{subsection:proofmainthm}. Recall that a tensor category $\mathcal{C}$ is \emph{pointed} if every simple object of $\mathcal{C}$ is invertible. Invertible objects form a group. The previous Theorem implies the corresponding statement for pointed finite tensor categories. \begin{cor} Let $\mathcal{C}$ be a pointed finite tensor category whose simple objects form a cyclic group of order $m$, where $m$ is not divisible by $2,3,5,7$. Then $\mathcal{C}$ is equivalent to one of the following: \begin{enumerate} \item the category of finite dimensional $H$-modules, for $H$ a radically graded finite-dimensional Hopf algebra such that $H/ \operatorname{Rad} H \cong \mathbf{k} [{\mathbb Z}_m]$, or \item a semisimple category $\operatorname{Rep}_{\omega_s}({\mathbb Z}_m)$, or \item the category of finite dimensional $A(H,s)$-modules, for some radically graded Hopf algebra $H$ such that $H/ \operatorname{Rad} H \cong \mathbf{k} [{\mathbb Z}_{m^2}]$, and $s \in \Upsilon(H)$.
\end{enumerate} \end{cor} \begin{proof} The fact that the category is pointed implies that its objects have integer Frobenius-Perron dimension, so by \cite{EO} it is the category of finite dimensional modules of some quasi-Hopf algebra $A$. This quasi-Hopf algebra is basic (because $\mathcal{C}$ is pointed), so the result follows from Theorem \ref{thm:classificationqHA}. \end{proof} The organization of this paper is the following. In Section \ref{section:preliminaries} we describe some tools which we use in the rest of the work. The two key results are the classification of pointed Hopf algebras over abelian groups given by Andruskiewitsch-Schneider, and the equivariantization procedure. In Section \ref{section:gradedqHA} we construct basic radically graded quasi-Hopf algebras over ${\mathbb Z}_m$ as a generalization of the family $A(q)$ in \cite{G}. Using some methods in Etingof-Gelaki's works, we prove that these are all the basic radically graded quasi-Hopf algebras over ${\mathbb Z}_m$ up to twist equivalence. After that, we consider liftings of these graded algebras in Section \ref{section:liftings}. We prove that each basic quasi-Hopf algebra whose associated radically graded quasi-Hopf algebra has trivial associator is a Hopf algebra, as in \cite{EG3}. For each non-semisimple basic radically graded quasi-Hopf algebra with non-trivial associator, we prove that any lifting $A$ can be extended to a Hopf algebra $H$ as in the graded case, so $\operatorname{Rep} H$ is the equivariantization of $\operatorname{Rep} A$ for some action of ${\mathbb Z}_m$; for an analogous procedure see \cite{EG4}. In this way we can describe such $A$ using the inverse procedure, the de-equivariantization of $\operatorname{Rep} H$ for an inclusion of $\operatorname{Rep} {\mathbb Z}_m$ (a result of Masuoka in \cite{Ma} reduces it to the graded case), and we complete the classification.
In Section \ref{section:classificationZpn}, we apply the previous classification to the case $m=p^n$ for some prime $p$ and some $n \in \mathbb{N}$. Such a description is important for the general case, where we reduce some results to the case $p^n$. \textbf{Acknowledgments.} The author's work was supported by CONICET, and was done mainly during his visit to MIT in February-May 2009, supported by Banco Santander Rio SA. The author thanks MIT for its warm hospitality, and especially his host Professor Pavel Etingof for posing the problem, for his guidance and explanations about tensor categories, and for many important suggestions that influenced this work. He is also grateful to Cesar Galindo for many stimulating discussions. \section{Preliminaries}\label{section:preliminaries} For any Hopf algebra $H$, $\Delta$, $\epsilon$ and $S$ will denote the coproduct, counit and antipode, respectively. For the coproduct we will use Sweedler notation: for any element $c$ of a coalgebra $C$, $\Delta(c)= c_1 \otimes c_2$. For each tensor category $\mathcal{C}$ we denote by $\mathcal{Z}(\mathcal{C})$ the Drinfeld center of $\mathcal{C}$. For each object $X$ of $\mathcal{C}$, we denote by $\operatorname{FPdim} X$ the Frobenius-Perron dimension of $X$, see \cite{EO}. To begin with, we describe the background material we rely on. First we give a brief introduction to Yetter-Drinfeld modules over a Hopf algebra $H$. Second, we consider the equivariantization and de-equivariantization procedures; for a better description see \cite{dgno}, \cite{EG4} and \cite{ENO2}. We also consider the lifting theory for pointed Hopf algebras and its main results, see \cite{AS4}. For these Hopf algebras, we will give a brief characterization of their duals, which gives rise to basic Hopf algebras; i.e. their radical is a Hopf ideal, and $H/\operatorname{Rad} H \cong \operatorname{Fun} G$ for some finite group $G$.
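Since associators $\omega_s \in H^3({\mathbb Z}_m, \mathbf{k}^{\times})$ play a central role in what follows, we also recall a standard explicit choice of representatives (this is the classical normalization for cyclic groups, stated here for orientation only; the representatives used later may differ from it by a coboundary):
\begin{equation*}
\omega_s(a,b,c)=\exp\left(\frac{2\pi i\, s}{m^{2}}\, a\left(b+c-\overline{b+c}\right)\right), \qquad a,b,c\in\{0,1,\ldots,m-1\},
\end{equation*}
where $\overline{b+c}$ denotes the remainder of $b+c$ modulo $m$. These cocycles represent the $m$ classes of $H^{3}({\mathbb Z}_m,\mathbf{k}^{\times})\cong {\mathbb Z}_m$, and $\omega_s$ is a coboundary precisely when $s\equiv 0 \pmod{m}$.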
\subsection{Yetter-Drinfeld modules and Drinfeld center of Hopf algebras.} We recall the definition of a Yetter-Drinfeld module over a Hopf algebra in order to fix the formulas defining this notion. \begin{defn} Let $H$ be a Hopf algebra. A left Yetter-Drinfeld module $M$ over $H$ is a left $H$-module $M$, with action denoted by $\cdot: H\otimes M \rightarrow M$, which is also a left $H$-comodule, with coaction $\delta:M \rightarrow H\otimes M$, $\delta(m)=m_{(-1)}\otimes m_{(0)}$, satisfying: \begin{equation}\label{YDrelation} \delta(h\cdot m) = h_1m_{(-1)}S(h_3) \otimes h_2 \cdot m_{(0)}, \qquad h\in H, m \in M. \end{equation} Morphisms of Yetter-Drinfeld modules are $H$-linear morphisms which also preserve the comodule structure. The category of left Yetter-Drinfeld modules over $H$ is denoted by ${}^{H}_{H}\mathcal{YD}$: it is a tensor category, where the tensor product carries the action inherited as $H$-modules, and the coaction $\delta_{M\otimes N}: M\otimes N \rightarrow H\otimes M\otimes N$, \begin{equation}\label{YDtensorcoaction} \delta_{M\otimes N}(m\otimes n)=m_{(-1)}n_{(-1)} \otimes m_{(0)} \otimes n_{(0)}, \qquad m\in M, \, n\in N. \end{equation} This category is braided, where the braiding for each pair $M,N \in {}^{H}_{H}\mathcal{YD}$ is given by $c_{M\otimes N}: M\otimes N \rightarrow N\otimes M$, \begin{equation}\label{YDbraiding} c_{M\otimes N}(m\otimes n)= m_{(-1)}\cdot n \otimes m_{(0)}, \qquad m \in M, n \in N. \end{equation} \end{defn} \begin{rem} \eqref{YDrelation} is equivalent to the following: \begin{equation}\label{YDrelation2} (h_1\cdot m)_{(-1)}h_2 \otimes (h_1\cdot m)_{(0)} = h_1m_{(-1)} \otimes h_2 \cdot m_{(0)}, \qquad h\in H, m \in M.
\end{equation} \end{rem} The category ${}^{H}_{H}\mathcal{YD}$ is equivalent to the category $\operatorname{Rep}(D(H))=\mathcal{Z}(\operatorname{Rep}(H))$. \subsection{Equivariantization and de-equivariantization.} Let $\mathcal{C}$ be a finite tensor category. Denote by $\underline{\operatorname{Aut}}\,\mathcal{C}$ the category whose objects are the tensor auto-equivalences of $\mathcal{C}$, and whose morphisms are isomorphisms of tensor functors. It is a monoidal category, whose tensor product is the composition of tensor functors. For any group $\Gamma$ denote by $\underline{\Gamma}$ the category whose objects are the elements of $\Gamma$, whose morphisms are just the identities on each object, and whose tensor product corresponds to the multiplication in $\Gamma$. \begin{defn} An \emph{action} of a group $\Gamma$ on a finite tensor category $\mathcal{C}$ is a monoidal functor $\mathcal{F}:\underline{\Gamma} \rightarrow \underline{\operatorname{Aut}}\, \mathcal{C}$. \end{defn} In this way, we have a collection of functors $\{F_g: g\in \Gamma\} \subset \underline{\operatorname{Aut}}\, \mathcal{C}$, and isomorphisms $$ \gamma_{g,h}: \xymatrix{ F_g \circ F_h \ar[r]^{\sim} & F_{gh} }, \quad g,h \in \Gamma, $$ defining the tensor structure of the functor $\mathcal{F}$. \begin{defn} Let $\Gamma$ be a finite group acting on a finite tensor category $\mathcal{C}$. A $\Gamma$\emph{-equivariant object} of $\mathcal{C}$ is an object $X\in \mathcal{C}$ together with a family of isomorphisms $u_g: F_g(X) \rightarrow X$ such that for all pairs $g,h \in \Gamma$ the following diagram commutes: $$ \xymatrix@C=.9in{ F_g(F_h( X )) \ar[r]^{F_g(u_h)}\ar[d]^{\gamma_{g,h}} & F_g(X) \ar[d]^{u_g} \\ F_{gh}(X) \ar[r]^{u_{gh}} & X. } $$ A \emph{morphism of equivariant} objects $\beta: (X,(u_g)_{g\in \Gamma}) \rightarrow (Y,(v_g)_{g\in \Gamma})$ is a morphism $\beta:X \rightarrow Y$ in $\mathcal{C}$ such that for all $g\in \Gamma$, $\beta \circ u_g=v_g \circ F_g(\beta)$.
The category of $\Gamma$-equivariant objects is called the \emph{equivariantization} of $\mathcal{C}$, and will be denoted $\mathcal{C}^{\Gamma}$. \end{defn} For such a category, we have a natural inclusion $\iota: \operatorname{Rep} \Gamma \rightarrow \mathcal{C}^{\Gamma}$. We now consider the inverse procedure. Consider a finite tensor category $\mathcal{D}$ such that $\mathcal{Z}(\mathcal{D})$ contains a Tannakian subcategory $\operatorname{Rep}\Gamma$ for some finite group $\Gamma$, and the composition $\operatorname{Rep} \Gamma \rightarrow \mathcal{Z}(\mathcal{D}) \rightarrow \mathcal{D}$ is an inclusion. The algebra $\operatorname{Fun}(\Gamma)$ of functions $\Gamma \rightarrow \mathbf{k}$ is an algebra in the tensor category $\operatorname{Rep} \Gamma$: the group $\Gamma$ acts on $\operatorname{Fun}(\Gamma)$ by left translations. In this way $\operatorname{Fun}(\Gamma)$ is an algebra in the braided category $\mathcal{Z}(\mathcal{D})$. \begin{defn} The category of $\operatorname{Fun}(\Gamma)$-modules in $\mathcal{D}$ is called the \emph{de-equivariantization} of $\mathcal{D}$, and will be denoted by $\mathcal{D}_{\Gamma}$. It is a tensor category. \end{defn} We will use the following result about equivariantization and de-equivariantization. For a complete reference and proofs of these facts, see \cite{dgno}. \begin{thm}\label{thm:equiv-deequiv} \emph{(i)} Let $\Gamma$ be a finite group acting on a finite tensor category $\mathcal{C}$. Then $\operatorname{Rep} \Gamma$ is a Tannakian subcategory of $\mathcal{Z}(\mathcal{C}^{\Gamma})$ (that is, the braiding of $\mathcal{Z}(\mathcal{C}^{\Gamma})$ restricts to the symmetric braiding of $\operatorname{Rep} \Gamma$), and the composition of $\operatorname{Rep} \Gamma \rightarrow \mathcal{Z}(\mathcal{C}^{\Gamma})$ with the forgetful functor $\mathcal{Z}(\mathcal{C}^{\Gamma}) \rightarrow \mathcal{C}^{\Gamma}$ is the natural inclusion $\iota$. \emph{(ii)} The procedures of equivariantization and de-equivariantization are inverse to each other.
\end{thm} \begin{exa}\label{example:semisimple-equiv} We describe here an example over pointed semisimple categories. Although we shall work over non-semisimple categories, we shall consider the semisimple part of some pointed ones, and this will be useful in what follows. Consider an action of a group $\Gamma$ on the category $\mathcal{C}=Vec_{K,\omega}$, where $K$ is an abelian group and $\omega \in H^3(K, \mathbf{k}^{\times})$. We will denote the simple objects of $\mathcal{C}$ simply by the elements of $K$. We assume that the action on the objects is trivial; that is, $F_\gamma (X)=X$ for every object $X$ and all $\gamma \in \Gamma$. In this way, following the description in \cite[Section 7]{T} and using that the action is trivial on objects, the action is described by an element $\psi \in H^2(\Gamma,\hat{K})$: $$ \psi(\gamma_1, \gamma_2): F_{\gamma_1} \circ F_{\gamma_2} \rightarrow F_{\gamma_1 \gamma_2}, \quad \gamma_i \in \Gamma.$$ From the tensor structure of each $F_{\gamma}$ we have an element $\xi \in H^2(K,\hat{\Gamma})$, $$ \xi(k_1,k_2)(\gamma): F_{\gamma}(k_1) \otimes F_{\gamma}(k_2)=k_1+k_2 \rightarrow F_{\gamma}(k_1+k_2)=k_1+k_2,$$ where $k_i \in K, \gamma \in \Gamma$. We want to describe actions such that $\mathcal{C}^{\Gamma}$ is pointed: this can be derived from \cite{N}, since by \cite{Ni} we have $\mathcal{C}^{\Gamma} \cong (\mathcal{C} \rtimes \Gamma)^*_{\mathcal{C}}$. For our context, we derive that $\Gamma$ is abelian (notice also that we have an inclusion of $\operatorname{Rep} \Gamma$ in $\mathcal{C}^{\Gamma}$) and so $\operatorname{Rep} \Gamma \cong Vec_{\hat{\Gamma}}$. For a description as in \cite{N}, the action of $\hat{\Gamma}$ on $K$ is trivial, and $\mathcal{C} \rtimes \Gamma \cong Vec_{K \times \hat{\Gamma}}$. As $ \operatorname{FPdim} (\mathcal{C}^{\Gamma})= |\Gamma| \operatorname{FPdim} \mathcal{C} = |\Gamma| |K|$, $\mathcal{C}^{\Gamma}$ has $|\Gamma| |K|$ non-isomorphic simple objects.
Such objects are pairs $\left(k,(u_{\gamma})_{\gamma \in \Gamma}\right)$, for scalars $u_{\gamma} \in \mathbf{k}^{\times}$ satisfying $u_{\gamma_1}u_{\gamma_2}= \psi(\gamma_1,\gamma_2)(k) u_{\gamma_1 + \gamma_2}$. Therefore two simple objects $\left(k,(u_{\gamma})_{\gamma \in \Gamma} \right)$ and $\left(k,(v_{\gamma})_{\gamma \in \Gamma}\right)$ are related by an element $f \in \hat{\Gamma}$ such that $v_{\gamma}=u_{\gamma}f(\gamma)$ for all $\gamma \in \Gamma$, and they are isomorphic if and only if $f=1$. In this way we identify the simple objects of $\mathcal{C}^{\Gamma}$ with pairs $(k,f) \in K\times \hat{\Gamma}$. Also, for any fixed $k$, there exist $|\Gamma|$ elements $\left(k,(u_{\gamma})_{\gamma \in \Gamma}\right)$, and $$ \psi(\gamma_1, \gamma_2)(k) = u_{\gamma_1} u_{\gamma_2}u_{\gamma_1+\gamma_2}^{-1} = \psi(\gamma_2, \gamma_1)(k). $$ Therefore $\psi(\gamma_1, \gamma_2) = \psi(\gamma_2, \gamma_1)$ for all $\gamma_i \in \Gamma$, and from the relation given in \cite{T} we derive that $\xi(k_1,k_2)=\xi(k_2,k_1)$ for all $k_i \in K$. The elements of $H^2(K,\hat{\Gamma})$ parameterize central extensions of $K$ by $\hat{\Gamma}$, and if $L$ is the one corresponding to $\xi$, then $L$ is abelian and we can identify $\mathcal{C}^{\Gamma}= Vec_{L,\widetilde{\omega}}$ for some $\widetilde{\omega} \in H^3(L,\mathbf{k}^{\times})$, because, under the previous considerations, the tensor product in $\mathcal{C}^{\Gamma}$ satisfies: $$ (k_1,f_1) \otimes (k_2 ,f_2)=(k_1+k_2,f_1+f_2+\xi(k_1,k_2) ), \qquad k_i \in K, f_i \in \hat{\Gamma}. $$ Such $\widetilde{\omega}$ is the pullback of $\omega$ under the projection $\pi:L \rightarrow K$ corresponding to the extension, because the forgetful functor $\mathcal{C}^{\Gamma} \rightarrow \mathcal{C}$ is a tensor functor. This can also be derived from Naidu's work. Also $\psi \in H^2(\Gamma, \hat{K})$ is the element corresponding to the dual extension $\hat{L}$ of $\Gamma$ by $\hat{K}$.
Note that, given a morphism $T: \hat{\Gamma} \rightarrow \hat{L}$ such that for all $f_1,f_2 \in \hat{\Gamma}$, $$ \langle T(f_1), (0,f_2)\rangle = 1, $$ the function $\omega: K^3 \rightarrow \mathbf{k}^{\times}$ given by $$ \omega(k_1,k_2,k_3)= \langle T\left( \xi(k_2,k_3) \right), (k_1,0) \rangle, \qquad k_i \in K, $$ defines an element in $H^3(K, \mathbf{k}^{\times})$, which will also be denoted by $\omega$. The pullback $\widetilde{\omega}$ of such an element is trivial in $H^3(L,\mathbf{k}^{\times})$. Indeed, if $\alpha: L^2 \rightarrow \mathbf{k}^{\times}$ is the function $$ \alpha \left( (k_1,f_1), (k_2,f_2) \right)= \langle T(f_2), (k_1,f_1) \rangle = \langle T(f_2), (k_1,0) \rangle ,$$ then $\delta^2(\alpha)=\widetilde{\omega}$. In this way, $\mathcal{C}^{\Gamma}\cong Vec_L$, and we have an inclusion $$ \operatorname{Rep} \Gamma \cong Vec_{\hat{\Gamma}} \hookrightarrow Z(Vec_L) \cong Vec_{L \oplus \hat{L}}, $$ which composed with the forgetful functor to $\mathcal{C}^{\Gamma}=Vec_L$ gives the canonical inclusion $\operatorname{Rep} \Gamma \hookrightarrow Vec_L$, so we have an inclusion of groups $\hat{\Gamma} \hookrightarrow L \oplus \hat{L}$, which composed with the projection to the first component gives the inclusion $\hat{\Gamma} \hookrightarrow L$. \end{exa} \begin{exa}\label{example:semisimple-deequiv} Consider now the de-equivariantization of $\mathcal{D}=Vec_L$, given by an inclusion of $\operatorname{Rep} \Gamma$ as a Tannakian subcategory of $Z(\mathcal{D})$, which factorizes through the center $Z(Vec_L) \cong Vec_{L \oplus \hat{L}}$; $\Gamma$ and $L$ are abelian groups as before, and we call $K$ the corresponding quotient group, which we also assume abelian.
Therefore we have an inclusion $\iota: \hat{\Gamma} \rightarrow L$, and a morphism $T: \hat{\Gamma} \rightarrow \hat{L}$, such that for all $f_1,f_2 \in \hat{\Gamma}$, $ \langle T(f_1), (0,f_2)\rangle = 1$. Such $T$ parameterizes the natural morphisms $c_{V,-}: V\otimes - \rightarrow - \otimes V$ for each $V \in \operatorname{Rep} \Gamma$ viewed as an element of $\mathcal{D}$. Consider $L$ as an extension of $K$ by $\hat{\Gamma}$, in such a way that the inclusion $\iota$ is the canonical one; it corresponds to an element $\xi \in H^2(K,\hat{\Gamma})$. The algebra $A=\operatorname{Fun} \Gamma$ is just the sum $A=\oplus_{f \in \hat{\Gamma}} f $ as an element of $Vec_{\hat{\Gamma}}$, with the canonical product, so we consider $A=\oplus_{f \in \hat{\Gamma}} (0,f) $ inside $\mathcal{D}$. By the previous considerations, we obtain $\mathcal{D}_\Gamma \cong Vec_{K, \omega}$ for some $\omega \in H^3(K,\mathbf{k}^{\times})$. The functor $F: \mathcal{D}\rightarrow \mathcal{D}_{\Gamma}$, $F(X)=A \otimes X$, is a monoidal functor, where the natural isomorphisms $$ J_{X,Y}: F(X) \otimes_A F(Y) \rightarrow F(X\otimes Y) $$ are given by the natural isomorphisms induced by $T$ followed by the multiplication in $A$. Considering the monoidal functor axiom we deduce that $\omega(k_1,k_2,k_3)= \langle T\left( \xi(k_2,k_3) \right), (k_1,0) \rangle$. \end{exa} \subsection{Pointed Hopf algebras and liftings.} We recall the Andruskiewitsch-Schneider Classification Theorem for pointed Hopf algebras over abelian groups whose order is not divisible by $2,3,5,7$, and a result about their categories of comodules, due to Masuoka. \begin{defn}[\cite{AS4}] Let $\Gamma$ be an abelian group.
A \emph{datum} of finite Cartan type over $\Gamma$, $$ \mathcal{D}=\mathcal{D} \left(\Gamma, (g_i)_{i=1,...,\theta}, (\chi_i)_{i=1,...,\theta}, A=(a_{ij})_{i,j=1,...,\theta} \right), $$ consists of elements $g_i \in \Gamma$, $\chi_i \in \widehat{\Gamma}$ and a Cartan matrix of finite type $A$ satisfying, for all $i,j$, $$ q_{ij}q_{ji}=q_{ii}^{a_{ij}}, \quad q_{ii}\neq 1, $$ where we define $q_{ij}:=\chi_j(g_i)$. \end{defn} Now call $\Phi$ the root system of the Cartan matrix $A$, $\mathcal{X}$ the set of connected components of the corresponding Dynkin diagram and $\alpha_1,...,\alpha_{\theta}$ a set of simple roots; we write $i\sim j$ if $\alpha_i,\alpha_j$ are in the same connected component. For each $J\in \mathcal{X}$, $\Phi_J$ denotes the root system of the component $J$. Fix a datum $\mathcal{D}$. For each $\alpha= \sum_{i=1}^{\theta} k_i\alpha_i \in \Phi^+$, we define \begin{equation}\label{formulagchialpha} g_{\alpha}:=\prod_{i=1}^{\theta} g_i^{k_i}, \qquad \chi_{\alpha}:=\prod_{i=1}^{\theta} \chi_i^{k_i}. \end{equation} For our purposes, we consider $q_{ii}$ of odd order, and coprime with 3 if $\alpha_i$ belongs to a connected component of type $G_2$. In such case the order of $q_{ii}$ is constant on each connected component $J\in \mathcal{X}$, and we define $N_J$ as the order of any $q_{ii}$. We introduce now two families of parameters. First we consider a family $$\lambda= (\lambda_{ij})_{i,j \in \{1,..., \theta\}, i \nsim j}$$ of elements of $\mathbf{k}$ satisfying the condition: \begin{equation}\label{lambdacondition} \mbox{if } g_ig_j=1 \mbox{ or }\chi_i\chi_j \neq \epsilon ,\mbox{then } \lambda_{ij} =0.
\end{equation} The second family is $\mu= (\mu_{\alpha})_{\alpha \in \Phi^+}$, whose elements are also in $\mathbf{k}$, satisfying the condition: \begin{equation}\label{mucondition} \mbox{if } g_{\alpha}^{N_J}=1 \mbox{ or }\chi_{\alpha}^{N_J} \neq \epsilon ,\mbox{then } \mu_{\alpha} =0, \quad \forall\alpha \in \Phi_J^+, J\in \mathcal{X}. \end{equation} In \cite{AS4}, for any family $\mu$ and any $\alpha \in \Phi$, an element $u_{\alpha}(\mu) \in \mathbf{k} [\Gamma]$ is introduced, which belongs to the augmentation ideal of $\mathbf{k}[g_i^{N_i}]$. An important fact for our work is that $u_{\alpha}(0)=0$ for all $\alpha \in \Phi^+$, where $\mu=0$ denotes the family consisting of all parameters equal to 0. Also there exist elements $x_{\alpha}, \alpha \in \Phi^+$, which determine a PBW basis (see \cite{AS4} and the references therein). \begin{defn}[\cite{AS4}] The Hopf algebra $u(\mathcal{D},\lambda, \mu)$ is generated by $\Gamma$ and $x_1, \ldots, x_{\theta}$, with the following relations: \begin{eqnarray} gx_ig^{-1} &=& \chi_i(g)x_i, \quad i=1,...,\theta, \, g\in \Gamma; \label{groupaction}\\ ad_c(x_i)^{1-a_{ij}}x_j &=& 0, \quad i\neq j, i \sim j; \label{qserre} \\ ad_c(x_i)x_j &=& \lambda_{ij}(1-g_ig_j), \quad i<j, i \nsim j; \label{linkingrelation}\\ x_{\alpha}^{N_J} &=& u_{\alpha}(\mu), \quad \alpha \in \Phi_J^+, \, J\in \mathcal{X}. \label{powerrootlinfting} \end{eqnarray} \end{defn} \begin{rem} \begin{enumerate} \item In \cite{AS4} it is proved that $u(\mathcal{D},\lambda, \mu)$ is a Hopf algebra, where the coproduct is defined by $\Delta(g)=g\otimes g$ for all $g\in \Gamma$, and $\Delta(x_i)=x_i \otimes 1+g_i \otimes x_i$. Its group of group-like elements is $ G\left(u(\mathcal{D},\lambda, \mu) \right) = \Gamma$. \item The graded case (trivial lifting) corresponds to $\mu=0, \lambda=0$.
\end{enumerate} \end{rem} \begin{thm}[\cite{AS4}]\label{thm:liftingsAS} Let $H$ be a finite dimensional pointed Hopf algebra, with group of group-like elements $\Gamma=G(H)$. Assume that the order of $\Gamma$ is not divisible by primes $\leq 7$. Then there exist a datum $\mathcal{D}$ and families $\lambda, \mu$ such that $H\cong u(\mathcal{D},\lambda,\mu)$. \end{thm} \begin{defn}[See \cite{AS2} and references therein] Let $H$ be a bialgebra. A 2-cocycle on $H$ is a bilinear map $\sigma: H \times H \rightarrow \mathbf{k}$ which satisfies the following conditions: \begin{eqnarray} \sigma(a_1,b_1) \sigma(a_2b_2, c) &=& \sigma(b_1, c_1)\sigma(a, b_2c_2), \\ \sigma(a,1) &=& \sigma(1,a)=\epsilon(a), \end{eqnarray} for all $a,b,c \in H$. Given an invertible (with respect to the convolution product) 2-cocycle, we define a new product on $H$ by $$ a \cdot_\sigma b:= \sigma(a_1,b_1)a_2b_2 \sigma^{-1}(a_3,b_3), \qquad a,b \in H. $$ Then $H$ with this product, the same unit and the same coproduct is a new bialgebra. We denote it by $H_\sigma$. If $H$ is a Hopf algebra with antipode $S$, define $$ S_ \sigma (a)= \sigma \left(a_1, S(a_2) \right) S(a_3) \sigma^{-1} \left( S(a_4), a_5 \right), \qquad a \in H. $$ Then $S_\sigma$ is an antipode for $H_\sigma$, so $H_\sigma$ is a Hopf algebra. \end{defn} The following property of these liftings of coradically graded pointed Hopf algebras will help us when describing the categories of representations of duals of pointed Hopf algebras. \begin{thm}[\cite{Ma}]\label{thm:masuokacocycle} Given a datum $\mathcal{D}$ and families $\lambda, \mu$, the Hopf algebra $u(\mathcal{D},\lambda, \mu)$ is a cocycle deformation of the associated graded Hopf algebra $u(\mathcal{D},0, 0)$.
\end{thm} \begin{rem}\label{rem:tensorequivwithgraded} By the previous theorem, the category of $u(\mathcal{D},\lambda, \mu)$-comodules is tensor equivalent to the category of $u(\mathcal{D},0, 0)$-comodules, see \cite{S1}. Consider now a basic Hopf algebra $H$ such that $H/\operatorname{Rad} H \cong \operatorname{Fun} G$, where $G$ is an abelian group as in the Andruskiewitsch-Schneider classification theorem. Denote by $H_0$ its associated radically graded Hopf algebra. Then $H^*$ is a pointed Hopf algebra isomorphic to some $u(\mathcal{D},\lambda, \mu)$, and its associated coradically graded Hopf algebra is $H_0^*$, which is isomorphic to $u(\mathcal{D},0, 0)$. Therefore, $\operatorname{Rep} H$ is tensor equivalent to $\operatorname{Rep} H_0$, because these categories are equivalent to the categories of comodules over the corresponding duals. \end{rem} \subsection{Duals of pointed Hopf algebras} Recall the following result: \begin{prop}[\cite{B}]\label{prop:Beattidual} Let $\Gamma$ be a finite abelian group, and $V \in {}^{\mathbf{k} \Gamma}_{\mathbf{k} \Gamma} \mathcal{YD}$, with basis $v_1,...,v_{\theta}$ where $v_i \in V^{\chi_i}_{g_i}$ for some $g_i \in \Gamma$, $\chi_i \in \widehat{\Gamma}$, such that $\mathcal{B}(V)$ is finite dimensional. Then, for $H=\mathcal{B}(V)\#\mathbf{k}\Gamma$, we have $H^* \cong \mathcal{B}(W) \# \mathbf{k}\widehat{\Gamma}$, where we consider $W \in {}^{\mathbf{k} \widehat{\Gamma}}_{\mathbf{k} \widehat{\Gamma}} \mathcal{YD}$ with a basis $w_i \in W^{g_i}_{\chi_i}$. \end{prop} \begin{rem} The corresponding braiding matrices of $V$ and $W$ coincide: $(\chi_i(g_j))_{1 \leq i,j \leq \theta}$. \end{rem} We will describe duals of non-trivial liftings of Hopf algebras $u(\mathcal{D},\lambda, \mu)$ (see \cite{B} for the case $A_1\times \cdots \times A_1$, that is, quantum linear spaces).
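The compatibility conditions in the definition of a datum of finite Cartan type, and the braiding matrix $(\chi_i(g_j))$ of the remark above, can be checked numerically. Below is a minimal sketch (a hypothetical example, not taken from the cited references) for a type $A_2$ datum over $\Gamma = {\mathbb Z}_5 \times {\mathbb Z}_5$, with all roots of unity handled through exponent arithmetic mod $5$:

```python
# Hypothetical datum of finite Cartan type A_2 over Gamma = Z_5 x Z_5,
# with q a primitive 5th root of unity realized via exponents mod 5.
N = 5
A = [[2, -1], [-1, 2]]          # Cartan matrix of type A_2
g = [(1, 0), (0, 1)]            # g_i in Gamma = Z_N x Z_N
# chi_j(x1, x2) = q^(A[0][j]*x1 + A[1][j]*x2); characters stored as exponent vectors
chi = [(A[0][j], A[1][j]) for j in range(2)]

def q_exp(i, j):
    """Exponent e with q_{ij} = chi_j(g_i) = q^e, reduced mod N."""
    return (chi[j][0] * g[i][0] + chi[j][1] * g[i][1]) % N

for i in range(2):
    assert q_exp(i, i) % N != 0                      # q_ii != 1
    for j in range(2):
        # q_ij q_ji = q_ii^{a_ij}  <=>  the exponents agree mod N
        assert (q_exp(i, j) + q_exp(j, i)) % N == (q_exp(i, i) * A[i][j]) % N
print("datum conditions verified")
```

Here the choice of $g_i$ and $\chi_i$ is one convenient symmetric realization; any datum satisfying the same exponent identities works equally well.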
Consider the coradically graded Hopf algebra $H_0=\mathcal{B}(V) \# \mathbf{k} [\Gamma]$, for some abelian group $\Gamma$ and some $V\in {}^{\mathbf{k}[\Gamma]}_{\mathbf{k}[\Gamma]}\mathcal{YD}$ such that $V$ is a diagonal braided vector space of Cartan type; if $A=(a_{ij})$ is the associated Cartan matrix of finite type, consider a basis $y_1,..., y_{\theta}$ with $y_i \in V^{\chi_i}_{g_i}$ ($g_i\in \Gamma, \chi_i \in \widehat{\Gamma}$) satisfying $\chi_i(g_j)\chi_j(g_i)=\chi_i(g_i)^{a_{ij}}$ for all $i \neq j$. Liftings (in the coradical sense) $H$ are characterized as in Theorem \ref{thm:liftingsAS}, with the linking relations \eqref{linkingrelation} and the power root vector relations \eqref{powerrootlinfting}. Such an $H$ has a basis $\{ hy: h \in \Gamma, y \in B\}$, where $B= \{ x_{\alpha_1}^{n_1}\cdots x_{\alpha_k}^{n_k}: \alpha_1 > \ldots >\alpha_k, \, 0 \leq n_j \leq N_{\alpha_j}-1 \}$ for a fixed order of $\Phi^+$, and $N_{\alpha}=N_J$ for each $\alpha \in \Phi_J^+$, $J \in \mathcal{X}$. Define for each $\gamma \in \widehat{\Gamma}$ and each $y\in B$ the element $f_{\gamma,y} \in H^*$, which satisfies: \begin{equation}\label{dualbasis} f_{\gamma,y}(gy')=\gamma(g)\delta_{y,y'}, \qquad g\in \Gamma, y'\in B. \end{equation} In this way, $\{ f_{\gamma,y}: \gamma \in \widehat{\Gamma},y\in B \}$ is a basis of $H^*$. We call $x_i:=f_{\epsilon,y_i}$ for any $i\in \{1,...,\theta \}$, and identify $\gamma=f_{\gamma,1}$ for any $\gamma \in \widehat{\Gamma}$. \begin{lem}\label{lemma:productdual} Consider $H, x_1,...,x_{\theta}$ as above. Then, \begin{enumerate} \item $\widehat{\Gamma} \cup \{x_1,...,x_{\theta} \}$ generates $H^*$ as an algebra. \item $\operatorname{Rad} H^*$ is the ideal generated by $\{x_1,...,x_{\theta} \}$. \item In $H^*$, $\gamma x_i = \gamma(g_i)x_i \gamma$ for any $i\in \{1,...,\theta\}, \gamma \in \widehat{\Gamma}$. \end{enumerate} \end{lem} \begin{proof} (1) This follows from \cite[Lemma 2.1]{EG1}.
(2) Remember that $\operatorname{Corad}(H)^{\perp}=\operatorname{Rad}(H^*)$, so the radical of $H^*$ is the ideal generated by $x_1,...,x_{\theta}$, and $H^*/\operatorname{Rad}(H^*) \cong \mathbf{k}\widehat{\Gamma}$ by (1). (3) We calculate this explicitly for each $g\in \Gamma, z\in B$, \begin{eqnarray*} (\gamma x_i)(gz) &=& (\gamma \otimes x_i)\Delta(gz)= (\gamma \otimes x_i)(gz \otimes g + gg_i\otimes z+ \widetilde{\Delta}(z)) \\ &=& \gamma(gg_i) \delta_{z,y_i}= \gamma(g_i) (x_i \otimes \gamma)(gz \otimes g + gg_i\otimes z+ \widetilde{\Delta}(z)) \\ &=& \gamma(g_i) (x_i \otimes \gamma)\Delta(gz) = \left( \gamma(g_i)x_i\gamma \right) (gz), \end{eqnarray*} where $\widetilde{\Delta}(z)$ is a sum of terms whose first or second tensor factor vanishes under the evaluation. \end{proof} \begin{lem}\label{lemma:coproductdual} Consider $H, x_1,...,x_{\theta}$ as above. Call \begin{equation}\label{grouplikedual} \Omega:= \left\{ \gamma \in \widehat{\Gamma}: \gamma\left(\mu_i(g_i^{N_i}-1) \right)= \gamma\left(\lambda_{ij}(g_ig_j-1) \right)=0, \, 1 \leq i \nsim j \leq \theta \right\}. \end{equation} \begin{enumerate} \item For the coproduct on $H^*$ we have: \begin{eqnarray}\label{coproductdual} \Delta(x_i)-x_i\otimes \chi_i- 1 \otimes x_i &\in& \operatorname{Rad}(H^*)\otimes \operatorname{Rad}(H^*), \\ \Delta(\chi)-\chi \otimes \chi &\in& \operatorname{Rad}(H^*)\otimes \operatorname{Rad}(H^*). \end{eqnarray} \item $\Omega$ is the group of group-like elements of $H^*$. \end{enumerate} \end{lem} \begin{proof} (1) Note that $\Delta(f)(x,y)=f(xy)$ for all $f\in H^*, x,y \in H$.
Now for any $\chi \in \widehat{\Gamma}$ we note that $$ \Delta(\chi) - \chi \otimes \chi \in \left( \widehat{\Gamma} \otimes \operatorname{Rad} H^* \right) \oplus \left( \operatorname{Rad} H^* \otimes \widehat{\Gamma} \right) \oplus \left( \operatorname{Rad} H^* \otimes \operatorname{Rad} H^* \right) $$ (it is straightforward that $\Delta(\chi)$ contains $\chi \otimes \chi$ as the component in $\widehat{\Gamma} \otimes \widehat{\Gamma}$). Evaluating in $(g, g'y)$ and $(gy, g')$ for $g,g' \in \Gamma$, $y\in B$, we deduce that there are no components in $\widehat{\Gamma} \otimes \operatorname{Rad} H^* \oplus \operatorname{Rad} H^* \otimes \widehat{\Gamma}$, because $gyg'=q \, gg'y$ for some $q\in \mathbf{k}^{\times}$. In a similar way we deduce the formula for $\Delta(x_i)$. Note that for each $z,z' \in B$, we express $zz'$ as a sum of elements of $B$, replacing $x_{\alpha}x_{\beta}$ by a sum of elements of $B$ of the same degree (where degree means length of words, i.e. viewing these in the tensor algebra of $V$), or replacing some powers $x_{\alpha}^{N_{\alpha}}$ by $u_{\alpha}(\mu)$, or replacing $x_ix_j$ by $\chi_j(g_i)x_jx_i+\lambda_{ij}(1-g_ig_j)$. So after reordering terms, $zz'$ is a sum of terms of the same degree in $B$, or of terms of lower degree which have $u_{\alpha}(\mu)$ or $\lambda_{ij}(1-g_ig_j)$ as a factor. Then, $$ \Delta(x_i)(gz, g'z')= x_i(q\,gg'zz')=0, \quad \forall (z,z') \neq (x_i,1), (1,x_i), $$ because $u_{\alpha}(\mu),\lambda_{ij}(1-g_ig_j) \in \ker \epsilon$. Also, \begin{align*} \Delta(x_i)(gx_i, g')= x_i(\chi_i(g')gg'x_i)= \chi_i(g') , \\ \Delta(x_i)(g, g'x_i)= x_i(gg'x_i)=1 , \end{align*} which proves \eqref{coproductdual}. (2) It follows from the previous analysis of the expression of $zz',\, z,z' \in B$; see also \cite{B}.
\end{proof} \section{Basic Graded quasi-Hopf algebras}\label{section:gradedqHA} In what follows, $m$ will denote a positive integer. Consider a finite dimensional radically graded quasi-Hopf algebra $H=\oplus_{i \geq 0} H[i]$, where $I:= \operatorname{Rad} H = \oplus_{i \geq 1} H[i]$, $I^k= \oplus_{i \geq k} H[i]$. In such case, $H[0]$ is semisimple and $H$ is generated by $H[0]$ and $H[1]$ (\cite[Lemma 2.1]{EG1}). Observe that if $H$ is also basic, then $H[0]=\operatorname{Fun}(\Gamma)$ for some finite group $\Gamma$, where the associator (being in degree 0) corresponds to a class in $H^3(\Gamma, \mathbf{k}^{\times})$. Also, by \cite{S2}, $H[1]$ is a free module over $H[0]$. Consider now the case $\Gamma={\mathbb Z}_{m}$. Let $\sigma,\chi$ be generators of ${\mathbb Z}_{m}, {\mathbb Z}_{m^{2}}$, respectively, related by the condition $\chi^{m}=\sigma$ (considering the canonical inclusion ${\mathbb Z}_{m} \subseteq {\mathbb Z}_{m^{2}}$). Let $\{1_b: 0\leq b \leq m^{2}-1 \}$ be the set of idempotents of $\mathbf{k} [{\mathbb Z}_{m^{2}}]$, defined by the condition $\chi 1_b=q^{b}1_b$ ($q$ a primitive root of unity of order $m^{2}$). Also, let $\{\mathbf{1}_s: 0\leq s \leq m-1 \}$ be the set of idempotents of $\mathbf{k} [{\mathbb Z}_{m}]$: as noted in \cite{G}, \begin{equation}\label{idemp} \sum_{0 \leq i \leq m-1} 1_{mi+s} = \mathbf{1}_s, \qquad 0 \leq s \leq m-1. \end{equation} Also by \cite{G}, $\{\omega_s: 0\leq s \leq m-1\}=H^3({\mathbb Z}_{m}, \mathbf{k}^{\times})$, where $\omega_s: ({\mathbb Z}_{m})^3 \rightarrow \mathbf{k}^{\times}$ is defined by \begin{equation}\label{3-cocycle} \omega_s(i,j,k) = q^{si(j+k-(j+k)')}, \qquad (i' \mbox{ denotes the remainder of the division by }m).
\end{equation} In consequence, if $H$ is basic radically graded, the associator (being in degree zero) is \begin{equation}\label{assoc} \Phi_s:= \sum_{i,j,k=0}^{m-1} \omega_s(i,j,k) \mathbf{1}_i \otimes \mathbf{1}_j \otimes \mathbf{1}_k, \end{equation} for some $0\leq s \leq m-1$, which is trivial if and only if $s=0$. Let $J_s= \sum_{i,j=0}^{m^{2}-1} c(i,j)^s 1_i\otimes 1_j$, where $c(i,j):= q^{i(j-j')}$. As proved in \cite{G}, $J_s$ is invertible and satisfies: \begin{equation}\label{conditionsJs} (\varepsilon \otimes \mathrm{id})(J_s)= (\mathrm{id} \otimes \varepsilon)(J_s)=1, \qquad \Phi_s=d\, J_s. \end{equation} \subsection{Quasi-Hopf algebras $A(H,s)$} Given a radically graded Hopf algebra $H=\oplus_{n \geq 0} H(n)$ generated by a group-like element $\chi$ of order $m^{2}$ and skew primitive elements $x_1,...,x_{\theta}$ satisfying \eqref{skewprimitives}, we have $H=R \# \mathbf{k} [\Gamma]$, where $R \in {}^{\mathbf{k}[\Gamma]}_{\mathbf{k}[\Gamma]}\mathcal{YD}$ is the algebra of coinvariants. If $\dim H$ is finite, $m^2$ does not divide $b_id_i$ (because $q^{b_id_i} \neq 1$). We will define a quasi-Hopf algebra $A(H,s)$ for each $s \in \Upsilon(H)$ (recall the definition of $\Upsilon(H)$ given in Section \ref{section:introduction}), such that $A(H, s)/ \operatorname{Rad} A(H,s) \cong \mathbf{k}[{\mathbb Z}_{m}]$, with associator given by $\omega_s \in H^3({\mathbb Z}_m,\mathbf{k}^{\times})$. Consider the twisted quasi-Hopf algebra $(H_{J_s}, \Delta_{J_s}, \varepsilon, \Phi_{J_s}, S_{J_s}, \alpha_{J_s}\beta_{J_s}, 1)$ and its subalgebra $A(H,s)$ generated by $\sigma:=\chi^{m}$ and $x_1,...,x_{\theta}$. Note that if $H$ is finite dimensional, $$\dim A(H,s) = \dim H/ m= m \dim R.$$ \begin{prop}\label{prop:examples} $(A(H,s), \Delta_{J_s}, \varepsilon, \Phi_{J_s}, S_{J_s}, \sigma^{-s}, 1)$ is a quasi-Hopf algebra, which is not twist equivalent to a Hopf algebra.
\end{prop} \begin{proof} To simplify notation, we simply write $A=A(H,s)$. First of all, $\Phi_s \in A\otimes A\otimes A$. Using that $1_zx_i=x_i1_{z-d_i}$, \begin{eqnarray*} \Delta_{J_s}(x_i) &=& \sum_{z,y=0}^{m^{2}-1} \frac{c(z,y)^s}{c(z-d_i,y)^s}q^{b_iy} x_i 1_{z-d_i} \otimes 1_{y} + \frac{c(z,y)^s}{c(z,y-d_i)^s} 1_{z} \otimes x_i 1_{y-d_i} \\ &=& \sum_{y=0}^{m-1} q^{b_iy} \left( \sum_{k=0}^{m-1} q^{mk(b_i-sd_i)} x_i \otimes 1_{y+km} \right) \\ &&+ \sum_{z,y=0}^{m^{2}-1} q^{z((y+d_i)'-d_i-y')} 1_{z} \otimes x_i 1_{y} \\ &=& \sum_{y=0}^{m-1} q^{b_iy} x_i \otimes \mathbf{1}_{y} + \sum_{k=0}^{m-1} \left(\sum_{j=0}^{m-d_i'-1} q^{sk(d_i'-d_i)} \mathbf{1}_{k} \otimes x_i \mathbf{1}_{j} \right. \\ && \left. + \sum_{j=m-d_i'}^{m-1} q^{sk(d_i'+m-d_i)} \mathbf{1}_{k} \otimes x_i \mathbf{1}_{j} \right). \end{eqnarray*} Therefore $\Delta_{J_s}(x_i) \in A \otimes A$, and $(A, \Delta_{J_s}, \varepsilon, \Phi_{J_s})$ is a quasi-bialgebra. Now, $\alpha_{J_s}= \sum_{z=0}^{m^{2}-1} c(-z,z)^s 1_z, \, \beta_{J_s}= \sum_{z=0}^{m^{2}-1} c(z,-z)^s 1_z$, so \begin{eqnarray*} \alpha_{J_s}\beta_{J_s} &=& \sum_{z=0}^{m^{2}-1} c(-z,z)^s c(z,-z)^s 1_z = \sum_{z=0}^{m^{2}-1} q^{smz} 1_z \\ &=& \sum_{k=0}^{m-1} q^{smk} \mathbf{1}_k = \left( \sum_{k=0}^{m-1} q^{k} \mathbf{1}_k \right)^{ms} = \sigma^{-s}. \end{eqnarray*} Remember that $S(x_i)=-x_i \chi^{b_i}$, so \begin{eqnarray*} S_{J_s}(x_i) &=& \beta_{J_s} S(x_i) \beta_{J_s}^{-1}= -x_i \left( \sum_{y=0}^{m^{2}-1} \frac{c(y+d_i,-y-d_i)^s}{c(y,-y)^s}q^{-b_iy} 1_y \right) \\ &=& -x_i \left( \sum_{y=0}^{m^{2}-1} q^{-b_iy+s(y+d_i)(m-(y+d_i)'+y+d_i)-sy(m-y'+y)} 1_y \right) \\ &=& -x_i \left( \sum_{k,l=0}^{m-1} q^{-b_il+s(l+d_i)(m-(l+d_i)'+l+d_i)-slm+km(sd_i-b_i)} 1_{km+l} \right) \\ &=& -x_i \left( \sum_{l=0}^{m-1} q^{-b_il+s(l+d_i)(m-(l+d_i)'+l+d_i)-slm} \mathbf{1}_{l} \right) , \end{eqnarray*} where we use again that $m$ divides $b_i-sd_i$.
\end{proof} \subsection{Radically graded quasi-Hopf algebras as subalgebras of twisted Hopf algebras.} We now prove that any radically graded quasi-Hopf algebra over ${\mathbb Z}_m$ looks like the quasi-Hopf algebras of the previous subsection. This fact gives a characterization of all such quasi-Hopf algebras, which we will use to classify them. \begin{thm}\label{thm:projection} Let $A=\oplus_{n \geq 0} A[n]$ be a finite dimensional radically graded quasi-Hopf algebra over ${\mathbb Z}_m$, with associator $\Phi_s$ for some $s$ and $A[1] \neq 0$. Then there exists a finite dimensional radically graded Hopf algebra $H$ as above, where $H= \mathcal{B}(V) \# {\mathbb Z}_{m^2}$ for some Yetter-Drinfeld module $V$ over ${\mathbb Z}_{m^2}$, and a graded quasi-Hopf algebra epimorphism $\pi: A \twoheadrightarrow \bar{A}:= A(H,s)$, which is the identity restricted to degrees 0 and 1. \end{thm} \begin{proof} This proof is similar to that of Theorem 3.1 of \cite{EG1}. Decompose $A[1]= \oplus_{0 \leq r < m} A_r[1]$, where $$A_r[1]= \{x \in A[1]: \sigma x\sigma^{-1}=Q^rx\},$$ $Q=q^{m}$ a primitive root of unity of order $m$. Note that if $x \in A_r[1]$, then $\mathbf{1}_ix=x\mathbf{1}_{i-r}$. Also, by \cite{EO}, we have that $A_0[1]=0$. Let $\hat{A}$ be the tensor algebra of $A[1]$ over $A[0]$: it is a quasi-Hopf algebra, and we have a canonical surjective homomorphism $\pi_1: \hat{A} \twoheadrightarrow A$. Let $\gamma$ be the automorphism of $\hat{A}$ defined by $$ \gamma|_{A[0]}= \mathrm{id}, \qquad \gamma|_{A_r[1]}= q^r\mathrm{id}. $$ Consider $L$ the sum of all quasi-Hopf ideals of $\hat{A}$ contained in $\oplus_{i \geq 2} \hat{A}[i]$. Therefore $\ker \pi_1 \subseteq L$, and $\gamma(L)=L$, so $\gamma$ acts on $\bar{A}:=\hat{A}/L$.
We define $\bar{H}$ as the quasi-Hopf algebra generated by $\bar{A}$ and a group-like element $\chi$, where $\chi^{m}=\sigma$ ($\chi$ has order $m^2$), and $\chi z\chi^{-1}=\gamma(z)$ for all $z\in \bar{A}$. Note that $\operatorname{Ad}(\sigma)=\gamma^{m}$, so this is well defined, and $\chi$ generates a group isomorphic to ${\mathbb Z}_{m^{2}}$. We consider the twist $H:= \bar{H}^{J^{-1}}$, which is a finite dimensional radically graded Hopf algebra. In such case, it is of the form $H=R\# {\mathbb Z}_{m^2}$, for some braided graded Hopf algebra $R$ in the category of Yetter-Drinfeld modules over ${\mathbb Z}_{m^2}$. We consider skew primitive elements $x_1,...,x_k \in H[1]$ which are eigenvectors of $\operatorname{Ad}(\chi)$: $$ \chi x_i\chi^{-1}=q^{d_i}x_i, \quad \Delta(x_i)=x_i \otimes 1 +\chi^{b_i}\otimes x_i , \qquad b_i,d_i \in {\mathbb Z}_{m^{2}}.$$ Therefore, $\sigma x_i\sigma^{-1}=q^{d_im}x_i$; as $A_0[1]=0$, $m \nmid d_i$. If we denote by $\bar{\Delta}$ the coproduct of $\bar{H}$, then $\bar{\Delta}(x_i) \in \bar{A}\otimes \bar{A}$, because $\bar{A}$ is a quasi-Hopf subalgebra of $\bar{H}$. As $\bar{\Delta}(x_i)=J\Delta(x_i)J^{-1}$, we have \begin{eqnarray*} \bar{\Delta}(x_i) &=& \sum_{z,y=0}^{m^{2}-1} \frac{c(z,y)^s}{c(z-d_i,y)^s}q^{b_iy} x_i 1_{z-d_i} \otimes 1_{y} + \frac{c(z,y)^s}{c(z,y-d_i)^s} 1_{z} \otimes x_i 1_{y-d_i} \\ &=& \sum_{y=0}^{m^{2}-1} q^{sd_i(y'-y)+b_iy} x_i\otimes 1_{y} + \sum_{z,y=0}^{m^{2}-1} q^{sz(y'-d_i-(y-d_i)')} 1_{z} \otimes x_i 1_{y-d_i} \\ &=& \sum_{y=0}^{m-1} q^{b_iy} \left( \sum_{k=0}^{m-1} q^{mk(b_i-sd_i)} x_i \otimes 1_{y+km} \right) \\ && + \sum_{z,y=0}^{m-1} q^{sz((y+d_i)'-d_i-y)} \mathbf{1}_{z} \otimes x_i \mathbf{1}_{y}. \end{eqnarray*} The first summand belongs to $\bar{A}\otimes \bar{A}$, so $b_i \equiv sd_i \pmod{m}$.
Now, the braided graded Hopf algebra $R$ in the category of Yetter-Drinfeld modules over ${\mathbb Z}_{m^2}$ is generated in degree 1; call $V:=R[1]$. Therefore there exists an epimorphism of Hopf algebras $H \twoheadrightarrow \mathcal{B}(V) \# {\mathbb Z}_{m^2}$, which induces by twisting and restriction (note that the kernel of this map is generated in degree $\geq 2$) a surjective morphism $\bar{A}=A(H, s) \twoheadrightarrow A(\mathcal{B}(V)\#{\mathbb Z}_{m^2}, s)$. As both algebras have the same degree 0 and 1 parts and $\bar{A}$ has no proper quasi-Hopf ideals generated in degree $\geq 2$, this surjective map is an isomorphism, and $H=\mathcal{B}(V) \# {\mathbb Z}_{m^2}$. \end{proof} \subsection{Generation in degree 1} In what follows, consider $m$ odd. Strictly speaking, we consider radically graded Hopf algebras, which are duals of coradically graded Hopf algebras. Although $H$ and $H^*$ are of the same type (as groups, ${\mathbb Z}_m \cong \widehat{{\mathbb Z}}_m$ canonically), to be consistent with the notation we consider a braided vector space of diagonal type $W$ as above and fix a basis $x_1,...,x_{\theta}$, where $x_i \in W_{\chi_i}^{g_i}$ for some $g_i \in {\mathbb Z}_{m^2}$, $\chi_i \in \widehat{{\mathbb Z}}_{m^2}$, so the braiding matrix is $(\chi_i(g_j))_{1 \leq i,j \leq \theta}$. Call $\mathcal{X}$ the set of connected components of the associated Dynkin diagram. By Heckenberger's classification, on each connected component: \begin{itemize} \item it is a braiding of Cartan type (see \cite{H}): there exists a Cartan matrix $A=(a_{ij})$ such that for all $i,j$, $q_{ii}^{a_{ij}}=q_{ij}q_{ji}$, or \item 3 divides $m$ and $V$ is of type \eqref{hatB2}, \eqref{hatB3} (see Section \ref{nichols}). \end{itemize} Such a Nichols algebra is ${\mathbb Z}^{\theta}$-graded, where each $x_i$ has degree $e_i$. \emph{Consider first the Cartan case.} Let $\Delta_+$ be the set of positive roots of $A$.
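As an illustration of the root-system bookkeeping used below, the following sketch (a hypothetical type $A_2$ example, not taken from the references) computes $\Delta_+$ in the simple-root basis by closing the simple roots under the reflections $s_i$, and evaluates the dimension count $\prod_{\alpha \in \Delta_+} N = N^{|\Delta_+|}$ for a single connected component of height $N$:

```python
# Hypothetical illustration: positive roots of the Cartan matrix A (type A_2)
# in the simple-root basis, and the PBW dimension count N^{|Delta_+|}.
A = [[2, -1], [-1, 2]]      # Cartan matrix of type A_2
rank = len(A)

def reflect(i, v):
    # s_i(v) = v - <v, alpha_i^vee> alpha_i, with <alpha_j, alpha_i^vee> = a_ij
    c = sum(A[i][j] * v[j] for j in range(rank))
    w = list(v)
    w[i] -= c
    return tuple(w)

# close the simple roots under all simple reflections
roots = {tuple(1 if j == i else 0 for j in range(rank)) for i in range(rank)}
changed = True
while changed:
    changed = False
    for v in list(roots):
        for i in range(rank):
            w = reflect(i, v)
            if w not in roots:
                roots.add(w)
                changed = True

pos = sorted(v for v in roots if all(c >= 0 for c in v))
N = 5                        # hypothetical common order N_J on the component
print(pos, N ** len(pos))    # Delta_+ and the dimension of the PBW span
```

For $A_2$ this yields $\Delta_+=\{\alpha_1,\alpha_2,\alpha_1+\alpha_2\}$, so the PBW basis built from the $x_\alpha$ with height $N$ spans a space of dimension $N^3$.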
We know that for each $\alpha \in \Delta_+$, there exists an element $x_{\alpha}\in \mathcal{B}(V)$ of degree $\alpha$, such that the $x_{\alpha}$'s determine a PBW basis, with height $N_I$ determined by $I \in \mathcal{X}$ if $\alpha \in I$ (here it is important that $2,3$ do not divide $m$); see \cite{AS3} and the references therein. Moreover, \begin{thm}[\cite{AS4}, Thm. 5.5, see also \cite{A}] The algebra $\mathcal{B}(W)$ is presented by generators $x_1, \ldots, x_{\theta}$ and relations \begin{eqnarray} ad_c(x_i)^{1-a_{ij}}x_j &=& 0, \qquad \label{quantumserre} \\ x_{\alpha}^{N_I} &=& 0. \qquad \label{heightroot} \end{eqnarray} \end{thm} Therefore, the algebra $\tilde{A}$ is presented by the same relations, together with \begin{equation}\label{groupelement} \sigma^m=1, \qquad \sigma x_i\sigma^{-1}=q^{d_im}x_i \, \, (i=1,..., \theta), \end{equation} if $\sigma$ denotes the generator of ${\mathbb Z}_m \cong \widehat{{\mathbb Z}}_m$ (because the multiplication is not changed by twisting). Our goal now is to prove that $\pi:A \twoheadrightarrow \bar{A}$ as above is actually an isomorphism. In order to do that, we will prove that relations \eqref{quantumserre} and \eqref{heightroot} hold in $A$ (relations \eqref{groupelement} are satisfied because $\pi$ is an isomorphism in degrees 0 and 1, and a morphism of algebras). \begin{prop}\label{prop:quantumserre} Let $\pi:A\twoheadrightarrow \bar{A}$ be as in Theorem \ref{thm:projection}, with $A$ finite dimensional. Then, for all $i \neq j$, $ad_c(x_i)^{1-a_{ij}}x_j=0$ holds in $A$. \end{prop} \begin{proof} Suppose that $z_{ij}:=ad_c(x_i)^{1-a_{ij}}x_j \neq 0$ in $A$. Then $x_i,x_j, z_{ij}$ are linearly independent (because they are linearly independent in $H$). By the previous construction, $\hat{A}= A(T(V)\#{\mathbb Z}_{m^2}, \omega_s)$, so we read off the coproduct in $A$ from the corresponding one in $\hat{A}$ by projecting.
In $\hat{H}:=T(V)\#{\mathbb Z}_{m^2}$, $z_{ij}$ is skew primitive: $\Delta(z_{ij})=z_{ij} \otimes 1+\chi^{(1-a_{ij})b_i+b_j} \otimes z_{ij}$, because $z_{ij}$ is primitive in $T(V)$. So the subalgebra $B$ generated by $\sigma, x_i,x_j$ and $z_{ij}$ in $\hat{A}$ is a quasi-Hopf algebra, because $\Delta_{\hat{A}}(z_{ij}) =J\Delta(z_{ij})J^{-1}$. Applying Theorem \ref{thm:projection} to $B$, there exists a projection $B \twoheadrightarrow \bar{B}= A(\mathcal{B}(V_1), s)$, so $\bar{B}$ is also finite dimensional. As the braiding of $V_1$ is independent of the basis with respect to which it is calculated (it is of diagonal type, see \cite{AS1}), we can calculate it with respect to the basis $y_1=x_i$, $y_2=x_j$, $y_3= z_{ij}$. Let $(Q_{st})_{s,t=1,2,3}$ be the corresponding matrix. Using that $V$ is of Cartan type, we have \begin{align*} &Q_{11}=q_{ii}, &Q_{12}Q_{21}=q_{ij}q_{ji}, \\ &Q_{22}=q_{jj} , &Q_{13}Q_{31}= q_{ii}^{2-a_{ij}} , \\ &Q_{33}=q_{ii}^{1-a_{ij}}q_{jj}, &Q_{23}Q_{32}=q_{ii}^{a_{ij}(1-a_{ij})}q_{jj}^2. \end{align*} This braiding is of Cartan type, or standard $\hat{B}_2 \times A_1$, or as in \eqref{hatB3}, because the orders of the elements in the diagonal are odd and $\bar{B}$ is finite dimensional. In any case, there exists a matrix $(m_{st})$ as in \cite{A} associated with the braiding $(Q_{st})$. Therefore at least two vertices are not connected (there exist $s \neq t$ such that $m_{st}=m_{ts}=0$): \begin{itemize} \item If $Q_{23}Q_{32}=1$, then $q_{jj}^2=q_{ii}^{a_{ij}(a_{ij}-1)}=q_{jj}^{a_{ji}(a_{ij}-1)}$, so $\operatorname{ord} q_{jj}$ divides $2-a_{ij}a_{ji}+a_{ji}$. This is a contradiction because $\operatorname{ord} q_{jj}$ is odd and greater than 1. \item If $Q_{12}Q_{21}=1$, then $a_{ij}=a_{ji}=0$ and $q_{ii}^{-m_{13}+2}=Q_{11}^{-m_{13}}Q_{13}Q_{31}=1$. The unique possibility is $m_{13}=3$, in which case $m_{23}=m_{32}=0$, but this contradicts the previous item.
\item If $Q_{13}Q_{31}=1$, then $\operatorname{ord} q_{ii}$ divides $2-a_{ij}$, which cannot happen by a similar argument. \end{itemize} From this contradiction, $z_{ij}=0$ in $A$. \end{proof} \begin{prop}\label{prop:heightroots} Let $\pi:A\twoheadrightarrow \bar{A}$ be as in Theorem \ref{thm:projection}, with $A$ finite dimensional. Then, for all $\alpha \in I$, $I \in \mathcal{X}$, $x_{\alpha}^{N_I}=0$ holds in $A$. \end{prop} \begin{proof} Following the notation in \cite{AS3}, consider $\widetilde{\mathcal{B}}(V)$ the algebra generated by $V$, where the $x_i$'s are primitive, and where relation \eqref{quantumserre} holds for all $i \neq j$: that is, consider the quotient of the tensor algebra $T(V)$ by the braided Hopf biideal generated by the quantum Serre relations. Call $H_1:= \widetilde{\mathcal{B}}(V) \# {\mathbb Z}_{m^2}$. For $A_1:= A(H_1,s)$ we have a surjective map of algebras $A_1\twoheadrightarrow A$, because of Proposition \ref{prop:quantumserre}, which is a map of quasi-Hopf algebras because they have the same structure in degrees 0, 1 and they are generated by these components. So we have the following picture: $$ \xymatrix{ H_1 \ar@{->>}[rd] & \ar@{->>}[l]\hat{H} \ar@{->>}[d] \\ & H } \quad \leftrightsquigarrow \quad \xymatrix{ \hat{A}=A(\hat{H}, \omega_s) \ar@{->>}[r] \ar@{->>}[d] \ar@{->>}[rd] & \ar@{->>}[ld]A_1=A(H_1, \omega_s) \ar@{->>}[d] \\ \bar{A}=A(\bar{H}, \omega_s) & \ar@{->>}[l] A. } $$ Call $\mathcal{K}(V)$ the subalgebra generated by the $x_{\alpha}^{N_I}$ in $\widetilde{\mathcal{B}}(V)$: by Proposition 4.7 in \cite{AS3}, it is a braided Hopf subalgebra of $\widetilde{\mathcal{B}}(V)$. In consequence, by twisting and restriction, the algebra $\mathcal{K}$ generated by $\sigma$ and the $x_{\alpha}^{N_I}$ in $A_1$ is a quasi-Hopf subalgebra of $A_1$. Suppose that at least one of the $x_{\alpha}^{N_I}$ is non-zero in $A$.
Therefore, the subalgebra of $A$ generated by $\sigma$ and the $x_{\alpha}^{N_I}$ is a non-zero quasi-Hopf subalgebra of $A$ (it is the image of $\mathcal{K}$). Consider then $x_{\alpha}^{N_I}$ a non-zero element of minimal degree: it is a non-zero primitive element by the degree consideration. Therefore the subalgebra $\mathcal{K}'$ generated by $\sigma$ and $x_{\alpha}^{N_I}$ in $A$ is finite dimensional, and admits a projection onto a finite dimensional quasi-Hopf algebra $\mathcal{K}''= A(R\#{\mathbb Z}_{m^2}, \omega_s)$. Looking at $x_{\alpha}^{N_I}$ as an element of $\widetilde{\mathcal{B}}(V)$, we have $c (x_{\alpha}^{N_I} \otimes x_{\alpha}^{N_I}) = x_{\alpha}^{N_I} \otimes x_{\alpha}^{N_I}$, because $N_I$ is the order of $q_{\alpha}$, where $q_{\alpha}$ is the scalar such that $c (x_{\alpha} \otimes x_{\alpha}) = q_{\alpha} x_{\alpha} \otimes x_{\alpha}$ (it depends just on the ${\mathbb Z}^{\theta}$-graduation). Call $z= x_{\alpha}^{N_I}$. In $R$, since $\Delta$ is an algebra morphism (inside the category of Yetter-Drinfeld modules) and $\Delta(z)= z\otimes 1+1\otimes z$, we obtain inductively $$ \Delta(z^k)= \sum_{j=0}^k \binom{k}{j} z^j \otimes z^{k-j} $$ (here we use that $c(z\otimes z)= z \otimes z$). As $R$ is finite dimensional (because $\mathcal{K}''$ is finite dimensional), there exists $k$ such that $z^k=0$. Considering the minimal such $k$, we derive that $z=0$, because the field is of characteristic 0. From this, $R=0$, which contradicts that some $x_{\alpha}^{N_I}$ is non-zero. \end{proof} \emph{Consider now $V$ of standard type \eqref{hatB2}} (see \cite{A} for the definition of standard braiding; we do not consider the case \eqref{hatB3} at the moment, because it does not appear for ${\mathbb Z}_{m}$, as we shall prove in Section \ref{nichols}).
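Behind the behavior of the powers $x_\alpha^{N_I}$ sits a standard fact (used implicitly above, not stated in this form in the text): when $q$ has exact order $N$, the quantum binomial coefficients $\binom{N}{j}_q$ vanish for $0<j<N$. The following sketch verifies this with exact integer arithmetic, reducing modulo the cyclotomic polynomial for the hypothetical choice $N=5$ (a prime, so $\Phi_5 = 1+q+q^2+q^3+q^4$):

```python
# q-binomial vanishing at roots of unity: for q of exact order N (N = 5 prime),
# the Gaussian binomials [N choose j]_q vanish for 0 < j < N.  Polynomials in q
# are stored as integer coefficient lists and reduced modulo Phi_5.
N = 5

def padd(p, r):
    out = [0] * max(len(p), len(r))
    for i, c in enumerate(p): out[i] += c
    for i, c in enumerate(r): out[i] += c
    return out

def pshift(p, k):               # multiply by q^k
    return [0] * k + list(p)

def qbinom(n, k):
    # Pascal-type recursion: [n,k]_q = q^k [n-1,k]_q + [n-1,k-1]_q
    if k == 0 or k == n:
        return [1]
    return padd(pshift(qbinom(n - 1, k), k), qbinom(n - 1, k - 1))

def mod_phi(p):
    # reduce modulo Phi_N = 1 + q + ... + q^{N-1} (valid since N is prime),
    # using q^d = -(q^{d-1} + ... + q^{d-N+1}) for d >= N-1
    p = list(p)
    for d in range(len(p) - 1, N - 2, -1):
        c = p[d]
        if c:
            p[d] = 0
            for i in range(d - N + 1, d):
                p[i] -= c
    return p

for j in range(1, N):
    assert all(c == 0 for c in mod_phi(qbinom(N, j)))
print("all [5 choose j]_q vanish at primitive 5th roots of unity")
```

By contrast, $\binom{4}{2}_q$ does not reduce to zero, confirming that the reduction is not trivially collapsing everything; this vanishing is what makes $N$-th powers of root vectors behave like (skew-)primitive elements.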
By \cite{A}, we know that $\mathcal{B}(V)$ is presented by generators $x_1,x_2$ and relations: \begin{eqnarray} x_1^3 &=& x_2^N=0, \label{powerB2} \\ \left( ad_c(x_1)x_2 \right)^3 &=& \left( ad_c(x_1)^2x_2 \right)^{N'}=0 ,\label{powerrootsB2} \\ ad_c(x_1)^3x_2 &=& ad_c(x_2)^2x_1 =0 , \label{qserreB2}\\ \left[ad_c(x_1)^2x_2, ad_c(x_1)x_2 \right]_c &=& 0, \label{newqserreB2} \end{eqnarray} where $N, N'$ denote the orders of $\zeta, \zeta\xi^{-1}$, respectively. \begin{prop}\label{prop:standardrelations} Let $A$ be a finite dimensional quasi-Hopf algebra such that, as in Theorem \ref{thm:projection}, we have $\pi: A \twoheadrightarrow \bar{A}= A(\mathcal{B}(V) \# {\mathbb Z}_{m^2},s)$ for such a 2-dimensional standard braided vector space. Then \eqref{powerB2}, \eqref{powerrootsB2}, \eqref{qserreB2} and \eqref{newqserreB2} hold in $A$. \end{prop} \begin{proof} The strategy is to consider algebras $H_1$ as in the proof of Proposition \ref{prop:heightroots} such that the corresponding $A_1= A(H_1, s)$ projects onto $A$, in order to obtain quasi-Hopf subalgebras of $A$, then apply Theorem \ref{thm:projection} and derive a contradiction if these relations are non-zero. \textsf{(i)} Relations \eqref{powerB2} are easily proved, because $x_1^3, x_2^N$ are primitive in $T(V)$ as a braided Hopf algebra in $^{\k \Z_m}_{\k \Z_m}\mathcal{YD}$. \textsf{(ii)} The second relation in \eqref{qserreB2} holds as in Proposition \ref{prop:quantumserre}, because $q_{22}q_{21}q_{12}=1$ as in the Cartan case, and this is what is used in the proof of that Proposition. For the first relation, it is better to consider $H_1$ as the quotient by the Hopf ideal generated by $x_1^3$, because in that case $ad_c(x_1)^3x_2$ is skew-primitive by \cite[Lemma 5.7]{A}.
In that case, we work as in Proposition \ref{prop:quantumserre}, considering the braiding matrix $(Q_{ij})_{i,j=1,2,3}$ with respect to $y_1=x_1, y_2=x_2, y_3= ad_c(x_1)^3x_2$: \begin{align*} &Q_{11}=\xi, &Q_{12}Q_{21}=\zeta^{-1}, \\ &Q_{22}=\zeta , &Q_{13}Q_{31}= \zeta^{-1} , \\ &Q_{33}=\zeta^{-2}, &Q_{23}Q_{32}=\zeta^{-1}. \end{align*} This diagonal braiding is not associated with a finite dimensional Nichols algebra of diagonal type, because all the vertices are connected but all the $Q_{ii}$'s have odd order. We have a contradiction, so \eqref{qserreB2} holds in $A$. In a similar way, the left hand side of \eqref{newqserreB2}, which we call $y_3$, is skew-primitive by \cite[Lemma 5.9]{A}. Considering the braiding matrix $(Q_{ij})_{i,j=1,2,3}$ with respect to $y_1=x_1, y_2=x_2, y_3$: \begin{align*} &Q_{11}=\xi, &Q_{12}Q_{21}=\zeta^{-1}, \\ &Q_{22}=\zeta , &Q_{13}Q_{31}= \zeta^{-2}, \\ &Q_{33}=\zeta^{-2}, &Q_{23}Q_{32}=\zeta. \end{align*} We have a contradiction again, so \eqref{newqserreB2} also holds in $A$. \textsf{(iii)} Now consider $\widetilde{\mathcal{B}}(V)$ the quotient of $T(V)$ by the braided Hopf biideal generated by all the relations except \eqref{powerrootsB2}, and $H_1 =\widetilde{\mathcal{B}}(V) \# {\mathbb Z}_{m^2}$. As before, let $A_1= A(H_1, s)$. If $\mathcal{B}(V)=T(V)/I(V)$, the ideal $I(V)$ is generated in consequence by \eqref{powerB2}--\eqref{newqserreB2}, so $\left( ad_c(x_1)x_2 \right)^3$ is primitive in $\widetilde{\mathcal{B}}(V)$, because it belongs to the kernel of the surjection onto $\mathcal{B}(V)$ and is of minimal degree. Therefore $\left( ad_c(x_1)x_2 \right)^3=0$ in $A$ by a proof analogous to that of Proposition \ref{prop:heightroots}.
If now $\widetilde{\mathcal{B}}(V)$ denotes the quotient of $T(V)$ by the braided Hopf biideal generated by all the relations except $\left( ad_c(x_1)^2x_2 \right)^{N'}=0$, in this algebra $\left( ad_c(x_1)^2x_2 \right)^{N'}$ is primitive, and again this implies that $\left( ad_c(x_1)^2x_2 \right)^{N'}=0$ in $A$. \end{proof} \subsection{Classification.} With the previous results we can describe all the radically graded quasi-Hopf algebras over ${\mathbb Z}_m$ for $m$ odd. We summarize this in the following result. \begin{thm}\label{thm:classificationgradedqHA} Let $A$ be a radically graded finite dimensional quasi-Hopf algebra such that, for some odd integer $m$, $$A/\operatorname{Rad}(A)= \mathbf{k}[{\mathbb Z}_{m}].$$ Then $A$ is twist equivalent to one of the following quasi-Hopf algebras: \begin{enumerate} \item radically graded Hopf algebras $A$ such that $A/\operatorname{Rad}(A) \cong \mathbf{k} [{\mathbb Z}_{m}]$, \item semisimple quasi-Hopf algebras $\mathbf{k}[{\mathbb Z}_m]$ with associator given by $\omega_s \in H^3({\mathbb Z}_m,\mathbf{k}^{\times})$, for some $s\in \{0,1,\dots,m-1\}$, \item an algebra $A(H,s)$, for some radically graded Hopf algebra $H$ such that $H/\operatorname{Rad}(H) \cong \mathbf{k} [{\mathbb Z}_{m^2}]$, and some $s \in \Upsilon(H)$. \end{enumerate} \end{thm} \begin{proof} Given a radically graded quasi-Hopf algebra, if its associator is trivial, it corresponds to a Hopf algebra whose dual is coradically graded with coradical $\mathbf{k}[{\mathbb Z}_m]$. But this family is self-dual. Consider now radically graded quasi-Hopf algebras $A$ with non-trivial associator $\Phi_s$. If the rank of $A[1]$ over $A[0]$ is zero, $A$ is semisimple, so $A= \mathbf{k}[{\mathbb Z}_m]$. If the rank of $A[1]$ over $A[0]$ is greater than 0, by Theorem \ref{thm:projection} there exists a coradically graded Hopf algebra $H$ with coradical $\mathbf{k}[{\mathbb Z}_{m^2}]$ and a projection of quasi-Hopf algebras $\pi: A \twoheadrightarrow A(H,s)$.
If $H$ is of Cartan type, by Propositions \ref{prop:quantumserre} and \ref{prop:heightroots}, the relations defining $A(H,s)$ are satisfied in $A$, so $\pi$ is an isomorphism. If $H$ is not of Cartan type, by Heckenberger's classification of diagonal braidings \cite{H}, it is of standard type with some connected component of the generalized Dynkin diagram not of Cartan type, and by Proposition \ref{prop:standardrelations} the relations defining $A(H,s)$ are satisfied in $A$, so again $\pi$ is an isomorphism. This completes the proof. \end{proof} \section{Liftings of quasi-Hopf algebras over $\mathbf{k}[{\mathbb Z}_{m}]$}\label{section:liftings} In this section, for any radically graded quasi-Hopf algebra $A_0$ with associator $\Phi_s$ such that $A_0/\operatorname{Rad}(A_0) \cong \mathbf{k}[{\mathbb Z}_{m}]$, we consider the possible liftings: that is, all the non-semisimple quasi-Hopf algebras $A$ such that the associated graded quasi-Hopf algebra (with respect to the radical filtration) is $A_0$. By the previous section, such $A_0$ are related to radically graded Hopf algebras $H_0$ such that $H_0/\operatorname{Rad}(H_0) \cong \mathbf{k}[{\mathbb Z}_{m^{2}}]$: $A_0=A(H_0,s)$. We will relate the liftings $A$ of $A_0$ to liftings $H$ of $H_0$. We will use the same notion of deformation as in \cite{EG3}: a deformation of a map $f_0$ is a map $f$ obtained by adding terms in degree higher than some degree $d$. \textbf{We restrict to the case $m$ not divisible by primes $\leq 7$:} at the moment, pointed Hopf algebras over abelian groups are completely classified for those groups whose order is not divisible by $2,3,5,7$ (see \cite{AS4}). \subsection{Lifting of quasi-Hopf algebras with trivial associator.} We begin with quasi-Hopf algebras whose radical is a quasi-Hopf ideal, such that the corresponding graded quasi-Hopf algebra is a Hopf algebra; i.e., the corresponding associator is trivial.
Recall the following result: \begin{prop}[\cite{EG3}]\label{prop:liftingtrivialassoc} Let $A$ be a finite dimensional quasi-Hopf algebra whose radical is a quasi-Hopf ideal, such that the corresponding graded algebra $A_0=gr(A)$ is a Hopf algebra. If $H^3(A_0^*, \mathbf{k})=0$, then $A$ is twist equivalent to a Hopf algebra. \end{prop} Fix a radically graded Hopf algebra $A_0$ and consider a set of skew-primitive elements $x_i$ and a group-like element $\gamma$ as in Section \ref{section:gradedqHA}, which generate $A_0$ as an algebra. Write $m=p_1^{\alpha_1} \cdots p_k^{\alpha_k}$ for the decomposition of $m$ as a product of primes, with $p_i >7$ by hypothesis. Define \begin{equation}\label{definitionVpi} V(p_i):= \left\{ j\in \{1,\dots,\theta\}: b_jd_j \not\equiv 0 (p_i^{\alpha_i}) \right\}. \end{equation} As we consider finite dimensional Hopf algebras, $q^{b_id_i} \neq 1$, or equivalently $m$ does not divide $b_id_i$. Therefore, $\cup_l V(p_l)=\{ 1,\dots,\theta\}$. Also, for each pair $i\sim j$, we have $a_{ij},a_{ji} \in \{1,2,3\}$ and $a_{ij}b_id_i \equiv a_{ji}b_jd_j (m)$, so $i \in V(p_l)$ iff $j\in V(p_l)$. In this way each $V(p_l)$ is a union of connected components of the Dynkin diagram associated with the diagonal braiding of $A_0$. \begin{lem} Let $p_l, V(p_l)$ be as above. Then $|V(p_l)| \leq 2$. \end{lem} \begin{proof} Let $A_{p_l}=(a_{ij})_{i,j \in V(p_l)}$ be the Cartan matrix obtained by restriction of $A$: it is another finite Cartan matrix. We have $$ a_{ij}b_id_i \equiv b_id_j+b_jd_i \equiv a_{ji}b_jd_j (p_l^{\alpha_l}).
$$ If $m'=\prod_{k\neq l}p_k^{\alpha_k}$ and $\bar{q}=q^{m'}$, then $\bar{q}$ is a root of unity of order $p_l^{\alpha_l}$, and the braiding $(\bar{q}^{b_id_j})_{i,j \in V_l}$ is of finite Cartan type, associated to a braided vector space $W$ with a basis $\bar{x}_i$, such that we can fix a generator $\bar{\sigma}$ of ${\mathbb Z}_{p_l^{\alpha_l}}$ satisfying: $$ \bar{\sigma}\bar{x}_i\bar{\sigma}^{-1}=\bar{q}^{d_i}\bar{x}_i, \quad \Delta(\bar{x}_i) = \bar{x}_i\otimes 1+ \bar{\sigma}^{b_i} \otimes \bar{x}_i. $$ Therefore it is one of the braided vector spaces of Section \ref{nichols}, and $|V(p_l)| =\mathrm{dim} W \leq 2$. \end{proof} Now we state a result analogous to \cite[Thm. 1.3]{EG3}, and adapt the proof. \begin{thm}\label{thm:trivialassoc} Let $A$ be a finite dimensional quasi-Hopf algebra whose radical is a quasi-Hopf ideal, such that $A/\operatorname{Rad}(A) \cong \mathbf{k}[{\mathbb Z}_{m}]$ and the associated graded quasi-Hopf algebra $A_0=gr(A)$ is a Hopf algebra. Then $A$ is twist equivalent to a Hopf algebra. \end{thm} \begin{proof} By Proposition \ref{prop:liftingtrivialassoc}, it is enough to prove that $H^3(A_0^*, \mathbf{k})=0$. Note that $A_0^*$ is a coradically graded Hopf algebra with $G(A_0^*)={\mathbb Z}_{m}$. By the results in Section \ref{classifnichols}, it is of the form $A_0^*= \mathbf{k}[{\mathbb Z}_{m}] \ltimes \mathcal{B}(V)$, where $V$ is a braided vector space of Cartan type and $q$ is a root of unity of order $m^2$: $$ H^\bullet(A_0^*, \mathbf{k}) = H^\bullet(\mathcal{B}(V), \mathbf{k})^{{\mathbb Z}_m}= H^\bullet(\mathfrak{u}_q^+,\mathbf{k})^{{\mathbb Z}_m}. $$ The last equality is proved in \cite{EG3} (although $\mathcal{B}(V)$ and $\mathfrak{u}_q^+$ can be non-isomorphic as algebras, we have $H^\bullet(\mathcal{B}(V), \mathbf{k})= H^\bullet(\mathfrak{u}_q^+,\mathbf{k})$).
In \cite{GK} it is proved that $H^\bullet(\mathfrak{u}_q^+,\mathbf{k})=\sum_{w\in W} \mathbb{C} \eta_w \otimes S(\mathfrak{n}_+)$, where $W$ is the Weyl group, each $\eta_w$ has degree the length of $w$ (which we denote $\ell(w)$), and $S(\mathfrak{n}_+)$ is the symmetric algebra of the positive part of the associated Lie algebra, sitting in degree 2. Define $\rho:= \frac{1}{2} \sum_{\alpha \in \Delta_+} \alpha$. A generator $\sigma$ of ${\mathbb Z}_m$ acts trivially on $S(\mathfrak{n}_+)$, and by a scalar $\lambda_w$ on each $\eta_w$. Such a scalar is $$ \lambda_w:=q^{-\sum_i n_id_i} \qquad \mbox{where }\sum_i n_i \alpha_i = \gamma_w:=\sum_{\alpha \in \Delta_+: w(\alpha)< 0} \alpha=\rho-w(\rho).$$ As ${\mathbb Z}_m$ acts trivially on $S(\mathfrak{n}_+)$ and by $q^{d_i} \neq 1$ on $w=s_i$, in order to prove that $H^3(\mathfrak{u}_q^+,\mathbf{k})^{{\mathbb Z}_m}=0$ it is enough to prove that $\lambda_w \neq 1$ for any $w$ such that $\ell(w)=3$. Write $w=s_{i_1}s_{i_2}s_{i_3}$, where at least two of the indices are different. Assume first that $i_1,i_2,i_3 \in V(p_l)$ for some prime $p_l$ dividing $m$. Then two of them are equal, because such a connected component is contained in one of the sets $V(p_l)$, and any of them has at most two elements by the above Lemma; assume $i_2=i_3$, so $V(p_l)=\{i_1,i_2\}$. Such a set corresponds to a subdiagram of type $A_1\times A_1$, $A_2$, $B_2$ or $G_2$. The condition $\lambda_w=1$ is equivalent to $\sum_i n_id_i \equiv 0 (m)$, which we can consider just modulo $p_l^{\alpha_l}$. Using the characterization in Section \ref{nichols} and a computation analogous to the one in \cite[Prop. 5.1]{EG3}, we conclude that $\lambda_w \neq 1$ in this case. Assume now that not all of them belong to the same $V(p_l)$. Then all the $i_j$ are different, and we fix for simplicity $i_j=j$. In this way there exists one of them which is in a different component: assume $3\in V(p_l)$ and $1,2 \notin V(p_l)$; i.e.
$p_l^{\alpha_l}$ divides $b_1d_1$, $b_2d_2$, but it does not divide $b_3d_3$. Write $b_i=p_l^{\beta_i}a_i$, $d_i=p_l^{\gamma_i}c_i$, where $p_l$ does not divide $a_i$, $c_i$. Then $\beta_1+\gamma_1$, $\beta_2+\gamma_2 \geq \alpha_l$, but $\beta_3+\gamma_3< \alpha_l$. Also, $b_1d_2+b_2d_1 \equiv 0 (p_l^{\alpha_l})$: it follows because $b_1d_2+b_2d_1 \equiv 0 (m)$ if $1 \nsim 2$, or because we consider $1,2 \notin V(p_l)$ if $1 \sim 2$. Therefore $\min\{\beta_1+\gamma_2, \beta_2+\gamma_1\} \geq \alpha_l$. From all these equations, $$ \min \{\beta_1, \beta_2\}+\min \{\gamma_1, \gamma_2\} \geq \alpha_l. $$ Suppose now that $\lambda_w=1$. Therefore $d_3 \equiv -n_1d_1-n_2d_2 (m)$, where at least one of $n_1,n_2$ lies in $\{1,2,3\}$ ($n_3=1$ because $3$ is in a different connected component of the Dynkin diagram). In this way, as $p_l^{\alpha_l}$ does not divide $d_3$ (because it does not divide $b_3d_3$), we deduce that $\min \{ \gamma_1,\gamma_2 \} \leq \gamma_3$. Also, as $3$ is not connected with $1,2$, we have $ b_1d_3+b_3d_1 \equiv b_2d_3+b_3d_2 \equiv 0 (p_l^{\alpha_l})$, so $$ b_3d_3 \equiv -b_3(n_1d_1+n_2d_2) \equiv d_3 (n_1b_1+n_2b_2) (p_l^{\alpha_l}).$$ It follows that $\beta_3+\gamma_3 \geq \min \{\beta_1,\beta_2\}+\gamma_3 \geq \min \{\beta_1, \beta_2\}+\min \{\gamma_1, \gamma_2\} \geq \alpha_l$, which contradicts the hypothesis $3 \in V(p_l)$. Therefore $\lambda_w \neq 1$ also in this case. From all these computations, $H^3(\mathfrak{u}_q^+,\mathbf{k})^{{\mathbb Z}_m}=0$. \end{proof} \subsection{Equivariantization of liftings of quasi-Hopf algebras.} We want to obtain, from each quasi-Hopf algebra $A$ which is a lifting of $A_0$, a Hopf algebra $H$ which is a lifting of $H_0$. \begin{thm}\label{thm:equivariantization} Let $A$ be a quasi-Hopf algebra such that $\operatorname{Rad}(A)$ is a quasi-Hopf ideal, and $gr(A)=A_0=A(H_0, s)$.
There exists an action of $\Gamma={\mathbb Z}_{m}$ on the category $\mathcal{C}=\operatorname{Rep} (A)$ which fixes the simple objects of $\mathcal{C}$, such that the equivariantization $\mathcal{C}^{\Gamma}$ is tensor equivalent to $\operatorname{Rep}(H)$, for some Hopf algebra $H$. Such a Hopf algebra is a lifting of $H_0$, and there exists an inclusion of quasi-Hopf algebras $A \hookrightarrow H^J$, for some twist $J \in H\otimes H$. \end{thm} \begin{proof} The first step is to construct $H$. The idea is to `extend' $A$ as in the radically graded case, following the steps in \cite{EG3}. We recall the main steps in order to see that the proof still holds in our context. Consider the automorphism $\chi=S^2$ of the algebra $A$. Define $\Delta_{\chi}(x):= (\chi \otimes \chi)(\Delta(\chi^{-1}(x)))$ for each $x \in A$. By \cite{D}, Proposition 1.2, there exists a twist $K$ such that $\Delta_{\chi}(x)=K\Delta(x)K^{-1}$ for all $x\in A$. Call $K_0$ the degree zero part of $K$, which commutes with $\Delta_0(x)$ for all $x$ ($\Delta_0$ denotes the coproduct of $A_0$), so $K_0=1$, and hence $K=1+hdt$ (where $hdt$ denotes terms of higher degree). Note that \cite[Lemma 4.1]{EG3} holds in our setting: it uses just the fact that the character $\lambda$, which determines the isomorphism of tensor functors $V \rightarrow \lambda \otimes V^{****} \otimes \lambda^{-1}$ in $\operatorname{Rep} A$, has order $m$. Therefore we conclude that $S^{2m}$ is an inner automorphism: there exists $b=\sigma+hdt \in A$ ($\sigma \in A_0$ is the fixed group element) such that $S^{2m}(x)=bxb^{-1}$ for all $x\in A$. Also, \cite[Lemma 4.2]{EG3} applies here (it uses the fact that $A$ is a lifting of $A_0$ with $A_0/\operatorname{Rad}(A_0)={\mathbb Z}_{m}$, but not the particular structure of $A_0$), and then we can choose $b$ satisfying the relation \begin{equation}\label{conditionb} K^{-1} \chi^{\otimes 2}(K^{-1})\cdots (\chi^{n-1})^{\otimes 2} (K^{-1}) = \Delta(b)(b^{-1} \otimes b^{-1}). \end{equation} Using the construction of the semidirect product explained in \cite[Section 3]{EG3}, we can define the quasi-Hopf algebra $$ \widetilde{H}:= (\mathbf{k}[\chi,\chi^{-1}] \ltimes A)/(\chi^n -b). $$ Note that this algebra is characterized by the multiplication of $A$, $\chi a\chi^{-1}= \chi(a)=S^2(a)$ and the relation $\chi^n=b$ established by the quotient. Considering a lifting $J \in \widetilde{H} \otimes \widetilde{H}$ of $J_s$, $H:= \widetilde{H}^{J^{-1}}$ is a quasi-Hopf lifting of $H_0$. As is shown in \cite[Theorem 1.3]{EG3} (we really use a generalization of this proof for ${\mathbb Z}_{m}$ in place of ${\mathbb Z}_p$, see the proof of Theorem \ref{thm:trivialassoc}), $H^3(H^*_0, \mathbf{k})=0$ when $H_0$ corresponds to Nichols algebras of Cartan type for ${\mathbb Z}_{m}$, so as in \cite[Theorem 4.3]{EG3} we can change $J$ to $JF$ for some $F=1+hdt$ such that $H$ is a Hopf algebra (still a lifting of $H_0$). Note that $\operatorname{Rep} \widetilde{H} \cong \operatorname{Rep} H$. To complete the proof, we define the action of ${\mathbb Z}_m$ on $\mathcal{C}=\operatorname{Rep} A$ (this is analogous to the proof of \cite[Thm. 4.2]{EG4}): we call $h$ a generator of ${\mathbb Z}_m$, to distinguish it from $\chi \in \widetilde{H}$. To do this, we have to define a collection of functors $\{F_k:=F_{h^k} \}_{k=0,1,\dots,m-1} \subset \operatorname{Aut}(\mathcal{C})$. For each $(V,\pi_V)\in \operatorname{Rep} A$, consider $F_k(V)=V$, and $\pi_{F_k(V)}(a)=\pi_V(S^{2k}(a))$ for all $a\in A$. The natural isomorphism $\gamma_{i,j}: F_i(F_j(V)) \rightarrow F_{(i+j)'}(V)$ is given by the action of $b^{\frac{(i+j)'-i-j}{n}} \in A$: explicitly, $F_k \circ F_j=F_{k+j}$ if $j+k <n$, and $F_k\circ F_j$ is related to $F_{j+k-n}$ up to the action of $b^{-1}$.
For this action, a $\Gamma$-equivariant object of $\mathcal{C}$ is an object $X\in \mathcal{C}$ together with a collection of linear isomorphisms $u_k: F_k(X)=X \rightarrow X$ such that \begin{align*} u_k(a\cdot v) &= S^{2k}(a)u_k(v), \qquad a\in A, v \in V; \\ u_k u_j &=u_{(k+j)'}b^{\frac{(k+j)'-k-j}{n}}. \end{align*} These are exactly the relations defining $H$, as we have seen, with $u_k=\chi^k$, so we have an equivalence of categories between $\mathcal{C}^{\Gamma}$ and $\operatorname{Rep}(H)$; the tensor product of these representations is the same as for representations of $H^{J}=\widetilde{H}$. This completes the proof. \end{proof} \subsection{De-equivariantization of $\operatorname{Rep}(H)$ for liftings of $H_0$.} We want to obtain $A$ from $H$ for each lifting $H$ of a Hopf algebra $H_0$ as above; this is possible because the de-equivariantization procedure is inverse to equivariantization. But then we want to know for which liftings $H$ of $H_0$ we can apply it, in order to obtain all the quasi-Hopf liftings $A$. That is, we want to know all inclusions $\operatorname{Rep}({\mathbb Z}_m) \rightarrow \mathcal{Z}(\operatorname{Rep}(H)) \cong \operatorname{Rep}(D(H))$ which factorize the inclusion $\operatorname{Rep}({\mathbb Z}_m) \rightarrow \operatorname{Rep}(H)$. We begin by characterizing such functors. Consider a radically graded Hopf algebra $H_0$ over ${\mathbb Z}_{m^2}$ such that $\Upsilon(H_0) \neq \emptyset$. We first prove a technical lemma which we need in what follows. \begin{lem}\label{lemma:grouplikeorderm} Fix $H$ a lifting of $H_0$ as in Section \ref{section:gradedqHA}. Then $\sigma=\chi^m \in G(H)$. \end{lem} \begin{proof} To prove that $\chi^m$ is a group-like element, it is equivalent to prove that $\chi^m(g_ig_j)=\chi^m(g_{\alpha}^{N_{\alpha}})=1$ for each pair $i,j$ such that $\lambda_{ij}\neq 0$ and for each positive root $\alpha$ such that $\mu_{\alpha} \neq 0$, by Lemma \ref{lemma:coproductdual}.
Consider $i \nsim j$ such that $\lambda_{ij} \neq 0$. Therefore $\varepsilon= \chi_i\chi_j= \chi^{d_i+d_j}$, so $d_i+d_j \equiv 0 (m^2)$. Now, for $s \in \Upsilon(H_0)$, $b_k \equiv sd_k (m)$ for all $k$. Then $b_i+b_j \equiv s(d_i+d_j) \equiv 0 (m)$, so $$ \chi^m(g_ig_j)=\chi^m(g^{b_i+b_j})=q^{m(b_i+b_j)}=1. $$ Now consider a positive root $\alpha=\sum n_j\alpha_j$ such that $\mu_{\alpha}\neq 0$. Therefore $$ \varepsilon= \chi_{\alpha}^{N_{\alpha}}= \left( \prod_{j=1}^{\theta}\chi_j^{n_j} \right)^{N_{\alpha}} =\chi^{N_{\alpha}(\sum n_id_i)}. $$ Then $N_{\alpha}(\sum n_id_i) \equiv 0 (m^2)$. Consider $s$ as above, so we have $$ N_{\alpha}(\sum n_ib_i) \equiv N_{\alpha}s(\sum n_id_i) \equiv 0 (m), $$ which implies that $\chi^m(g_{\alpha}^{N_{\alpha}})=1$. \end{proof} \begin{prop}\label{prop:inclusionZm} Fix $H$ a lifting of $H_0$ as in Section \ref{section:gradedqHA}. There is a bijection between: \begin{enumerate} \item functors $F:\operatorname{Rep}({\mathbb Z}_m) \rightarrow \mathcal{Z}(\operatorname{Rep}(H))$ such that $\operatorname{Rep}({\mathbb Z}_m)$ is a Tannakian subcategory of $\mathcal{Z}(\operatorname{Rep}(H))$ and the composition $\operatorname{Rep}({\mathbb Z}_m) \rightarrow \mathcal{Z}(\operatorname{Rep}(H)) \rightarrow \operatorname{Rep}(H)$ is the canonical inclusion, \item integers $s \in \Upsilon(H_0)$. \end{enumerate} \end{prop} \begin{proof} As before, $\chi$ denotes a generator of $\widehat{{\mathbb Z}_{m^2}}={\mathbb Z}_{m^2}$, which satisfies $\chi(g)=q$ for our fixed root of unity of order $m^2$. Consider a functor as in (1): as a functor to $\operatorname{Rep}(H)$, it is given by the projection $H \twoheadrightarrow H/\operatorname{Rad}(H) \cong \mathbf{k}[{\mathbb Z}_{m^2}] \twoheadrightarrow \mathbf{k}[{\mathbb Z}_m]$, where both projections are the canonical ones.
In this way, we have the element $\gamma \in H \setminus \operatorname{Rad}(H)$, which is a preimage of the generator of $\widehat{{\mathbb Z}_{m^2}}={\mathbb Z}_{m^2}$, as defined in Section \ref{section:preliminaries}. $\operatorname{Rep} {\mathbb Z}_m$ is semisimple, and we essentially have to identify the simple ${\mathbb Z}_m$-modules $M_i= \mathbf{k} v_i$ (we fix a non-zero vector $v_i$ of this one-dimensional vector space), $i=0,1,\dots,m-1$. By the equivalence $\mathcal{Z}(\operatorname{Rep} H) \cong {}^{H}_{H}\mathcal{YD}$, we consider $M_i \in {}^{H}_{H}\mathcal{YD}$, where the action should be given by $\gamma \cdot v_i=q^{mi}v_i$. We have to define a structure of $H$-comodule on each $M_i$, $\delta:M_i \rightarrow H\otimes M_i$. As $\mathrm{dim} M_i=1$, it is determined by a group-like element $\chi_i \in H$ for each $i=0,1,\dots,m-1$, such that $\delta(v_i)=\chi_i \otimes v_i$. In $\operatorname{Rep} {\mathbb Z}_m$, $M_i \otimes M_j \cong M_{(i+j)'}$, and we want a tensor inclusion. By \eqref{YDtensorcoaction}, this means that $\chi_i\chi_j=\chi_{(i+j)'}$, so $\chi_1=\chi^{ms}$ for some $s\in \{0,1,\dots,m-1\}$, and this determines $\chi_i=\chi^{msi}$ for all $i=0,1,\dots,m-1$. By Lemma \ref{lemma:grouplikeorderm}, all the $\chi^{msi}$ are group-like elements. This action and coaction should satisfy \eqref{YDrelation2}. As $\chi, x_1,\dots,x_{\theta}$ generate $H$ as an algebra, it is enough to prove this relation for these generators. We use here Lemma \ref{lemma:coproductdual}. When $h=\chi$ and $m=v_i$, as $\operatorname{Rad} H$ acts by 0, both sides of \eqref{YDrelation2} are equal to $q^{mi}\, \chi^{msi+1} \otimes v_i$. When $h=x_k$ and $m=v_i$, as $x_k$ acts by 0, the left and the right-hand sides of \eqref{YDrelation2} are, respectively, \begin{align*} & (x_k\cdot v_i)_{(-1)}1 \otimes (x_k \cdot v_i)_{(0)}+ (\chi^{b_k}\cdot v_i)_{(-1)}x_k \otimes (\chi^{b_k} \cdot v_i)_{(0)}= q^{b_k mi} \, \chi^{msi}x_k \otimes v_i, \\ & x_k \chi^{msi} \otimes 1 \cdot v_i + \chi^{b_k}\chi^{msi} \otimes x_k \cdot v_i = q^{d_k s mi} \, \chi^{msi}x_k \otimes v_i. \end{align*} Therefore the action satisfies \eqref{YDrelation2} if and only if $b_k \equiv sd_k (m)$ for all $k$. Also, the braiding of ${}^{H}_{H}\mathcal{YD}$ restricts to the canonical symmetric braiding of $\operatorname{Rep} {\mathbb Z}_{m}$. In fact, for each pair $k,j$, the braiding $c_{M_k,M_j}: M_k \otimes M_j \rightarrow M_j \otimes M_k$ is, by \eqref{YDbraiding}, $$ c(v_k \otimes v_j)= (v_k)_{(-1)} \cdot v_j \otimes (v_k)_{(0)}= \chi^{msk} \cdot v_j \otimes v_k = q^{m^2skj} v_j \otimes v_k=v_j \otimes v_k. $$ Conversely, consider $s \in \Upsilon(H_0)$. Define $\mathcal{F}_s:\operatorname{Rep} {\mathbb Z}_m \rightarrow {}^{H}_{H}\mathcal{YD}$ as the functor induced, on modules, by the projection $H \twoheadrightarrow H/\operatorname{Rad}(H) \cong \mathbf{k}[{\mathbb Z}_{m^2}] \twoheadrightarrow \mathbf{k}[{\mathbb Z}_m]$, and for each $M_i= \mathbf{k} v_i$ define as before $\delta: M_i \rightarrow H\otimes M_i$, $\delta(v_i)=\chi^{msi} \otimes v_i$. By the previous computations, these structures satisfy the compatibility condition \eqref{YDrelation2}, so $\mathcal{F}_s$ sends objects to objects. For morphisms, the semisimplicity of $\operatorname{Rep} {\mathbb Z}_m$ gives a canonical definition of $\mathcal{F}_s$, preserving the abelian structures of the categories. As above, $\mathcal{F}_s$ is tensorial, and moreover braided, if we consider the canonical symmetric braiding of $\operatorname{Rep} {\mathbb Z}_{m}$. This completes the proof. \end{proof} \begin{lem}\label{lemma:sneqt} Suppose that $s \in \Upsilon(H_0)$.
Then the de-equivariantization $(\operatorname{Rep} H)_{{\mathbb Z}_m}$ induced by the inclusion $\mathcal{F}_s$ is $\operatorname{Rep} A$ for some basic quasi-Hopf algebra $A$ such that $A/\operatorname{Rad} A \cong \mathbf{k}[{\mathbb Z}_m]$, with associator given by $\omega_s \in H^3({\mathbb Z}_m,\mathbf{k}^{\times})$. \end{lem} \begin{proof} By \cite[Corollary 4.27]{dgno}, the category $(\operatorname{Rep} H)_{{\mathbb Z}_m}$ is integral, so it corresponds to $\operatorname{Rep} A$ for some quasi-Hopf algebra $A$. We apply the computations in Example \ref{example:semisimple-deequiv} to the semisimple part of these categories, and we obtain that the semisimple part of $\operatorname{Rep} A$ is $\operatorname{Vec}_{{\mathbb Z}_m, \omega}$, where $\omega$ is given as in that example. As here we have $T: {\mathbb Z}_m \rightarrow \widehat{{\mathbb Z}}_{m^2}$, $T(a)(b)= q^{sab}$ for $q$ a root of unity of order $m^2$ as above, and the 2-cocycle defining ${\mathbb Z}_{m^2}$ as an extension of ${\mathbb Z}_m$ by $\widehat{{\mathbb Z}}_m \cong {\mathbb Z}_m$ is $$ \xi: {\mathbb Z}_m \times {\mathbb Z}_m \rightarrow {\mathbb Z}_m \cong \langle m \rangle \subseteq {\mathbb Z}_{m^2}, \quad \xi(j,k)= (j+k)'-j-k, $$ we deduce that $\omega=\omega_s$. \end{proof} In order to classify all the liftings of quasi-Hopf algebras, we have to classify all the possible inclusions $\operatorname{Rep} {\mathbb Z}_m \rightarrow \operatorname{Rep} D(H)$ for liftings $H$ of Hopf algebras which satisfy the conditions in Section \ref{section:gradedqHA}, and consider their de-equivariantizations. \begin{lem}\label{lemma:de-equivgradedcase} Let $H_0$ be a radically graded Hopf algebra such that $H_0/\operatorname{Rad} H_0 \cong \mathbf{k}[{\mathbb Z}_{m^2}]$. For each integer $s \in \Upsilon(H_0)$, the de-equivariantization of $\operatorname{Rep} H_0$ corresponding to the functor $\mathcal{F}_s$ is $\operatorname{Rep} A(H_0,s)$.
\end{lem} \begin{proof} By the proofs of Theorem \ref{thm:projection} and Theorem \ref{thm:classificationgradedqHA}, we can extend each $A(H_0,s)$ to $H^{J_s}$ in such a way that we obtain $\operatorname{Rep} H_0$ as an equivariantization of $\operatorname{Rep} A(H_0,s)$ by an action of ${\mathbb Z}_m$ fixing the invertible elements, see the proof of Theorem \ref{thm:equivariantization}. By Theorem \ref{thm:equiv-deequiv}, each $\operatorname{Rep} A(H_0,s)$ is in consequence a de-equivariantization of $\operatorname{Rep} H_0$ by an inclusion of $\operatorname{Rep} {\mathbb Z}_m$. The result follows by the previous Lemma. \end{proof} \subsection{Proof of Theorem \ref{thm:classificationqHA}.}\label{subsection:proofmainthm} For $A$ a quasi-Hopf algebra as in the Theorem, consider its associated radically graded quasi-Hopf algebra $A_0$. If $A_0$ has trivial associator, then it is a Hopf algebra, and $A$ is twist equivalent to a Hopf algebra by Theorem \ref{thm:trivialassoc}. The dual algebra has coradical isomorphic to $\mathbf{k}[{\mathbb Z}_m]$, because dualizing the radical filtration we obtain the coradical filtration, see \cite{Mo}. By Theorem \ref{thm:liftingsAS}, its dual is a lifting $u(\mathcal{D}, \lambda,\mu)$ for a datum $\mathcal{D}$ over ${\mathbb Z}_m$, but by Theorem \ref{thm:masuokacocycle} it is a cocycle deformation of $A_0^*= u(\mathcal{D},0,0)$, so $A$ is twist equivalent to $A_0$, which is also of type $u(\mathcal{D}',0,0)$ for some datum $\mathcal{D}'$ over ${\mathbb Z}_m$. Consider now the case when $A_0$ has non-trivial associator: by Theorem \ref{thm:classificationgradedqHA}, it is semisimple with non-trivial associator, or it is of the form $A(H_0,\omega_s)$ for some radically graded (and in consequence also coradically graded) Hopf algebra $H_0$ with group of group-like elements ${\mathbb Z}_{m^2}$ and $s \in \Upsilon(H_0)$. In the first case we are done, so consider the second.
By Theorem \ref{thm:equivariantization}, the category $\operatorname{Rep} A$ admits an action of ${\mathbb Z}_m$ whose equivariantization is $\operatorname{Rep} H$, where $H$ is a lifting (in the radical sense) of the Hopf algebra $H_0$. On the other hand, $\operatorname{Rep} H$ is tensor equivalent to $\operatorname{Rep} H_0$ by Remark \ref{rem:tensorequivwithgraded}, and this tensor equivalence induces an equivalence between the corresponding centers which commutes with the forgetful functors. So an inclusion of $\operatorname{Rep} {\mathbb Z}_m$ in $\operatorname{Rep} H$ factorizing through the center of $\operatorname{Rep} H$ corresponds uniquely to an inclusion in $\operatorname{Rep} H_0$ factorizing through the center of $\operatorname{Rep} H_0$, and that tensor equivalence also induces a tensor equivalence between the categories of $\operatorname{Fun}({\mathbb Z}_m)$-modules in such categories. That is, de-equivariantizations of $\operatorname{Rep} H$ are tensor equivalent to de-equivariantizations of $\operatorname{Rep} H_0$. In consequence, we reduce the problem to the graded case, and $\operatorname{Rep} H_0$ admits as many inclusions $\operatorname{Rep} {\mathbb Z}_m \rightarrow \operatorname{Rep} D(H_0)$ as there are integers $s \in \Upsilon(H_0)$. But each $s$ corresponds to a de-equivariantization $\operatorname{Rep} A(H_0,\omega_s)= (\operatorname{Rep} H)_{{\mathbb Z}_m}$ by Lemma \ref{lemma:de-equivgradedcase}, so they correspond to all the de-equivariantizations. As these procedures are inverse to each other, $\operatorname{Rep} A \cong \operatorname{Rep} A(H_0,\omega_s)$ for some $s$, and in consequence $A$ is equivalent to $A(H_0,\omega_s)$. \section{Explicit description of quasi-Hopf algebras over ${\mathbb Z}_{p^n}$, $p$ prime}\label{section:classificationZpn} As an example of the previous result, we will describe all the basic finite-dimensional quasi-Hopf algebras $A$ such that $A/\operatorname{Rad} A \cong \mathbf{k} [{\mathbb Z}_{p^n}]$, for $p$ a prime greater than 7 and any $n\in\mathbb{N}$.
This is based on the classification of pointed Hopf algebras over ${\mathbb Z}_{p^n}$, so first of all we describe all the possible finite dimensional Nichols algebras over $\mathbf{k}[{\mathbb Z}_{p^n}]$. This is done as in \cite{AS1} for $n=1$, and we shall obtain here an analogous description for the general case. Moreover, we can classify radically graded quasi-Hopf algebras over ${\mathbb Z}_{p^n}$ for any odd prime $p$ (by the general Theorem \ref{thm:classificationgradedqHA}). Then we restrict our attention to the case $p >7$, because of Theorem \ref{thm:liftingsAS}. \subsection{Nichols algebras over ${\mathbb Z}_{p^n}$}\label{nichols} We consider the possible Nichols algebras over ${\mathbb Z}_{p^n}$, following the description in \cite{AS1}. We fix $q$ a primitive root of unity of order $p^n$ and $g$ a generator of ${\mathbb Z}_{p^n}$, $p$ odd. As in that work, we consider a basis $x_1,\dots,x_l$, where $x_i \in V_{g_i}^{\chi_i}$ for some $g_i \in {\mathbb Z}_{p^n}$ and some characters $\chi_i$. The characters are determined by $\chi_i(g)=q^{d_i}$, and we write $g_i=g^{b_i}$. Consider $a_i,c_i \in \mathbb{N}$ not divisible by $p$, and $\alpha_i, \gamma_i \geq 0$ such that $b_i=p^{\alpha_i}a_i$, $d_i=p^{\gamma_i}c_i$. By \cite{H}, the braiding matrix $(q_{ij}:=\chi_i(g_j))$ is of Cartan type, or $p=3$ and the braiding is one of the following: \begin{align}\label{hatB2} \hat{B}_2: &\xymatrix{\circ^{\xi} \ar@{-}[r]^{\zeta^{-1}} & \circ^{\zeta}}, \qquad \xi \in \mathbb{G}_3, \zeta \in \mathbf{k}^{\times} \setminus \{ 1, \xi, \xi^2 \}, \\& \xymatrix{ \circ^{\zeta} \ar@{-}[r]^{\zeta^{-1}} & \circ^{\zeta} \ar@{-}[r]^{\zeta^{-1}} & \circ^{\zeta^{-3}} }, \quad \xymatrix{ \circ^{\zeta} \ar@{-}[r]^{\zeta^{-1}} & \circ^{\zeta^{-4}} \ar@{-}[r]^{\zeta^4} & \circ^{\zeta^{-3}} }, \qquad \zeta \in \mathbb{G}_9 , \label{hatB3} \end{align} where $\mathbb{G}_k$ denotes the set of primitive roots of unity of order $k$.
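The Cartan-type condition on the braiding matrix can be tested mechanically. The following sketch is our own illustration (not from the paper); it assumes the convention $q_{ij}q_{ji}=q_{ii}^{-a_{ij}}$ with off-diagonal Cartan entries $-a_{ij}$, $a_{ij}\in\{0,1,2,3\}$, and all function names are ours.

```python
# Illustrative helper (ours, not from the paper): for a rank-2 diagonal braiding
# q_{ij} = q^{b_i d_j}, with q of order p^n, recover the Cartan type, assuming
# the convention q_{ij} q_{ji} = q_{ii}^{-a_{ij}} with a_{ij} in {0,1,2,3}.

def cartan_entry(i, j, b, d, mod):
    """Return a in {0,1,2,3} with b_i*d_j + b_j*d_i = -a*b_i*d_i (mod mod), or None."""
    lhs = (b[i] * d[j] + b[j] * d[i]) % mod
    for a in range(4):
        if (-a * b[i] * d[i]) % mod == lhs:
            return a
    return None

def rank2_cartan_type(b, d, mod):
    """Classify the 2x2 braiding by its pair of off-diagonal Cartan entries."""
    a12 = cartan_entry(0, 1, b, d, mod)
    a21 = cartan_entry(1, 0, b, d, mod)
    types = {(0, 0): "A1xA1", (1, 1): "A2",
             (1, 2): "B2", (2, 1): "B2", (1, 3): "G2", (3, 1): "G2"}
    return types.get((a12, a21))

# Sample exponents: q of order 7 gives an A2 braiding, q of order 5 a B2 braiding.
print(rank2_cartan_type(b=[1, 2], d=[1, 4], mod=7))   # A2
print(rank2_cartan_type(b=[1, 1], d=[2, 1], mod=5))   # B2
```

The sample parameters agree with the constraints derived below: $A_2$ occurs for $7\equiv 1\,(3)$ and $B_2$ for $5\equiv 1\,(4)$.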
If $l=1$, there is nothing to check, except that $q^{b_1d_1} \neq 1$; that is, $p^{n}$ does not divide $b_1d_1$. When $l=2$, the braiding is of Cartan type $A_1 \times A_1$, $A_2$, $B_2$, $G_2$, or non-Cartan of type $\hat{B}_2$. $\mathbf{A_1\times A_1}$: We have $1=q_{12}q_{21}=q^{b_1d_2+b_2d_1}$, so $b_1d_2+b_2d_1 \equiv 0 \ (p^n)$. First of all, $\alpha_1+\gamma_2=\alpha_2+\gamma_1$, because both numbers are less than $n$; we call this number $m$. Also, $a_1c_2+a_2c_1 \equiv 0 \ (p^{n-m})$. We can then describe the set of solutions by choosing $\alpha_i \leq m <n$ and $a_1,a_2,c_1,c_2$ not divisible by $p$ such that $a_1c_2+a_2c_1 \equiv 0 \ (p^{n-m})$ (three of them can be chosen freely, determining the fourth), and defining $\gamma_i=m-\alpha_i$. $\mathbf{A_2}$: Now, $b_1d_1 \equiv b_2d_2 \equiv -b_1d_2-b_2d_1 \ (p^{n})$, so $$ \alpha_1+\gamma_1 = \alpha_2+\gamma_2 = \min \{\alpha_1+\gamma_2, \alpha_2+\gamma_1 \}.$$ From this, $\alpha_1=\alpha_2$ and $\gamma_1=\gamma_2$. Also, $a_1c_1 \equiv a_2c_2 \equiv -a_1c_2-a_2c_1 \ (p^{n-m})$, where $m= \alpha_i+\gamma_i$. Therefore, $a_1^2+a_1a_2+a_2^2 \equiv 0 \ (p^{n-m})$, so either $p=3$, $m=n-1$ and $a_1 \equiv a_2 \ (3)$, or $a_1a_2^{-1} \not\equiv 1 \ (p^{n-m})$ is a cube root of unity, in which case $p \equiv 1 \ (3)$. $\mathbf{B_2}$: We have $b_1d_1 \equiv 2b_2d_2 \equiv -b_1d_2-b_2d_1 \ (p^{n})$, and then $ \alpha_1+\gamma_1 = \alpha_2+\gamma_2 = \min \{\alpha_1+\gamma_2, \alpha_2+\gamma_1 \}$. As before, $\alpha_1=\alpha_2$ and $\gamma_1=\gamma_2$. Also, $a_1c_1 \equiv 2a_2c_2 \equiv -a_1c_2-a_2c_1 \ (p^{n-m})$, where $m= \alpha_i+\gamma_i$. In this case, $a_1^2+2a_1a_2+2a_2^2 \equiv 0 \ (p^{n-m})$, so as in \cite{AS1}, this equation has a solution if and only if $-1$ is a square modulo $p$, which implies $p \equiv 1 \ (4)$. $\mathbf{G_2}$: In this case, $b_1d_1 \equiv 3b_2d_2 \equiv -b_1d_2-b_2d_1 \ (p^{n})$.
If $p=3$, then $ \alpha_1+\gamma_1 = \alpha_2+\gamma_2 +1 = \min \{\alpha_1+\gamma_2, \alpha_2+\gamma_1 \}$, which is a contradiction. If $p\neq 3$, then $ \alpha_1+\gamma_1 = \alpha_2+\gamma_2 = \min \{\alpha_1+\gamma_2, \alpha_2+\gamma_1 \}$. Therefore, $\alpha_1=\alpha_2$ and $\gamma_1=\gamma_2$. Also, $a_1c_1 \equiv 3a_2c_2 \equiv -a_1c_2-a_2c_1 \ (p^{n-m})$ for $m= \alpha_i+\gamma_i$. Therefore, $a_1^2+3a_1a_2+3a_2^2 \equiv 0 \ (p^{n-m})$, so this equation has a solution if and only if $-3$ is a square modulo $p$, which implies $p \equiv 1 \ (3)$. $\mathbf{\hat{B}_2}$: In this case, $p=3$ and $\zeta$ is a primitive root of unity of order $3^k$ for $2 \leq k \leq n$, so $\xi=\zeta^{\pm 3^{k-1}}$ (note that if $k=1$, then the braiding is of Cartan type $A_2$ or $B_2$). Changing $q$, we can assume $\zeta=q^{3^{n-k}}$, so we have $$ b_1d_1 \equiv \pm 3^{n-1} \ (3^n), \quad b_2d_2 \equiv 3^{n-k} \ (3^n), \quad b_1d_2+b_2d_1 \equiv -3^{n-k} \ (3^n). $$ From these equations, $\alpha_1+\gamma_1=n-1, \, \alpha_2+\gamma_2=n-k$ and $\min \{\alpha_1+\gamma_2,\alpha_2+\gamma_1\}=n-k$, so $\alpha_1=\alpha_2$ (in which case $\gamma_1=\gamma_2+k-1$), or $\gamma_1=\gamma_2$ (in which case $\alpha_1=\alpha_2+k-1$), and $$ a_1c_1 \equiv \pm 1 \ (3), \quad a_2c_2 \equiv 1 \ (3^k), \quad \left\{ \begin{array}{ll} a_1c_2+3^{k-1}a_2c_1 \equiv -1 \ (3^k), & \alpha_1=\alpha_2; \\ 3^{k-1}a_1c_2+a_2c_1 \equiv -1 \ (3^{k}) , & \gamma_1=\gamma_2. \end{array} \right. $$ Consider the first case; the second is analogous. Multiplying by the invertible element $c_2$ (modulo $3^k$) and using $a_2c_2 \equiv 1 \ (3^k)$, we obtain $$ a_1c_2^2+c_2+3^{k-1}c_1 \equiv 0 \ (3^k). $$ This equation has a solution if and only if its discriminant $1-4a_1c_13^{k-1} \equiv 1 \pm 3^{k-1} \ (3^k)$ is a quadratic residue. Note that $$ (\pm 3^{k-1}\pm 1)^2 \equiv \pm 2\cdot 3^{k-1}+1 \equiv 1 \mp 3^{k-1} \ (3^k),$$ so $1 \pm 3^{k-1}$ are quadratic residues. This provides the possible structures of Yetter-Drinfeld modules of this kind, reconstructing the $b_i,d_i$.
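As a quick sanity check of the rank-two description (an illustrative example of ours, not taken from \cite{AS1}), consider type $A_1\times A_1$ with $p=5$, $n=2$ and $m=0$. Choosing
\begin{equation*}
\alpha_1=\alpha_2=\gamma_1=\gamma_2=0, \qquad a_1=a_2=c_1=1, \qquad c_2=24,
\end{equation*}
we get $a_1c_2+a_2c_1=25\equiv 0 \ (5^2)$; that is, $b_1=b_2=d_1=1$ and $d_2=24$, and indeed $q_{12}q_{21}=q^{b_1d_2+b_2d_1}=q^{25}=1$, while $q_{11}=q$ and $q_{22}=q^{24}$ are nontrivial.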
Now we are ready to prove the analogue of Proposition 5.1 of \cite{AS1} for $p^n$. \begin{prop}\label{classifnichols} Let $V$ be a Yetter-Drinfeld module over ${\mathbb Z}_{p^n}$ of finite Cartan type with $\mathrm{dim}\, V \geq 3$. Then $p=3$ and $V$ is of type $A_2\times A_1$ or $A_2\times A_2$. \end{prop} \begin{proof} Consider $V$ of dimension 3; we first discard the non-Cartan cases: they are $\hat{B}_2 \times A_1$ or as in \eqref{hatB3}. In all cases, we can consider vertices 1,2 determining a subdiagram of type $\hat{B}_2$, vertices 1,3 not connected, and vertices 2,3 determining a subdiagram of type $A_2$ or $A_1 \times A_1$. From the first condition, $\alpha_1+\gamma_1 > \alpha_2+\gamma_2$; from the second, $\alpha_1+\gamma_1 = \alpha_3+\gamma_3$; and from the last, $\alpha_2+\gamma_2 = \alpha_3+\gamma_3$. But this is a contradiction. Consider then $V$ of Cartan type. As in \cite{AS1}, it is not of type $A_1\times A_1\times A_1$, so we can assume that vertices 1 and 2 of the corresponding Dynkin diagram are connected, and vertices 1 and 3 are disconnected. Moreover, we can assume that if there exists a multiple arrow, it is the one between vertices 1 and 2. That is, \begin{itemize} \item $b_1d_3 \equiv -b_3d_1 \ (p^n)$; \item $b_1d_1 \equiv m\, b_2d_2 \equiv -b_1d_2-b_2d_1 \ (p^n)$ for some $m=1,2,3$, in which case $b_2d_3 \equiv -b_3d_2 \ (p^n)$ (cases $X_2 \times A_1$ for $X=A,B,G$), or $b_3d_3 \equiv b_2d_2 \equiv -b_3d_2-b_2d_3 \ (p^n)$ and $m=1,2$ (cases $A_3,B_3$), or \item $2b_1d_1 \equiv b_2d_2 \equiv -b_1d_2-b_2d_1 \ (p^n)$, in which case $b_3d_3 \equiv b_2d_2 \equiv -b_3d_2-b_2d_3 \ (p^n)$ (case $C_3$). \end{itemize} As the corresponding submatrices must be of finite Cartan type, we use the previous description for rank $2$. After reducing the powers of $p$ involved in each equation, we are left with an equation modulo $p$ in the $a_i,c_i$, which are not divisible by $p$.
A detailed study as in \cite{AS1} leaves as the unique remaining case $A_2\times A_1$, in which case $p=3$. Thus, if we consider $\mathrm{dim}\, V \geq 4$, each subdiagram of three vertices is of type $A_2\times A_1$, so we have only one possibility: $A_2 \times A_2$ and $p=3$. In this case, we can describe such $V$ as follows: \begin{align*} & b_1=b_2=3^{\alpha}a, \quad d_1=d_2=3^{\gamma}c & \alpha+\gamma=n-1, \, 3 \nmid ac,\\ & b_3=b_4=3^{\alpha'}a', \quad d_3=d_4=3^{\gamma'}c' & \alpha'+\gamma'=n-1, \, 3 \nmid a'c', \\ & \alpha+\gamma'=\alpha'+\gamma \Longrightarrow & \alpha=\alpha', \, \gamma=\gamma', \\ & ac'+a'c \equiv 0 \ (3). \end{align*} \end{proof} \subsection{Basic quasi-Hopf algebras over ${\mathbb Z}_{p^n}$} \begin{prop}\label{rank1} Let $A$ be a basic radically graded quasi-Hopf algebra with $A[0]=\mathbf{k}[{\mathbb Z}_{p^n}]$ and associator $\Phi_s$, where $p$ does not divide $s$. Then the rank of $A[1]$ over $A[0]$ is $\leq 1$. \end{prop} \begin{proof} Suppose there exists such an $A$ with rank of $A[1]$ over $A[0]$ at least $2$, and consider $A$ of minimal possible dimension. By Theorem \ref{thm:projection}, $A=A(H, s)$ with $H=R\# {\mathbb Z}_{p^{2n}}$ for some Nichols algebra $R$ of diagonal type, $\mathrm{dim}\, R[1]=2$, and the braiding is given by $(q^{b_id_j})_{i,j=1,2}$. By Heckenberger's classification \cite{H}, it is of Cartan type: \begin{itemize} \item if it is of type $A_2$, $B_2$ or $G_2$, then \begin{center} $b_1d_1+b_1d_2+b_2d_1 \equiv m \, b_2d_2+b_1d_2+b_2d_1 \equiv 0 \ (p^{2n})$, $m=1,2,3$ respectively; \end{center} \item if it is of type $A_1 \times A_1$, then $b_1d_2+b_2d_1 \equiv 0 \ (p^{2n})$; \end{itemize} or $p=3$ and \begin{itemize} \item it is of standard $\hat{B}_2$ type, with conditions as in Section \ref{classifnichols}. \end{itemize} We write $b_i=p^{\alpha_i}a_i$, $d_i=p^{\gamma_i}c_i$, where $p$ does not divide $a_ic_i$. As $b_i \equiv s d_i \ (p^n)$, we have $\alpha_i=\gamma_i$.
For the cases $X_2$, note that $p^{2\alpha_1}a_1c_1 \equiv p^{2\alpha_2}ma_2c_2 \ (p^{2n})$, so $\alpha_1 =\alpha_2$ (we simply call them $\alpha$). Therefore $a_1c_1 \equiv m a_2c_2 \equiv-a_1c_2-a_2c_1 \ (p^{2n-2\alpha})$ and $a_i \equiv sc_i \ (p^{n-\alpha})$. These equations imply $a_1^2 \equiv -2a_1a_2 \ (p^{n-\alpha})$, so $a_1 \equiv -2a_2 \ (p^{n-\alpha})$, and $ma_2^2 \equiv a_1^2 \equiv 4a_2^2 \ (p^{n-\alpha})$. That is, $p^{n-\alpha} \mid (4-m)a_2^2$. It follows that $p=3$, $\alpha=n-1$ and $m=1$. But in this case, $$ (a_1-a_2)(c_1-c_2) \equiv 3a_1c_1 \ (9) .$$ As $a_1 \equiv -2a_2 \equiv a_2 \ (3)$ and $a_i \equiv sc_i \ (3)$, both factors on the left-hand side are divisible by $3$, so $9 \mid 3a_1c_1$, a contradiction. For the case $A_1 \times A_1$, $\alpha_1 =\alpha_2$ as above, and $a_1c_2+a_2c_1 \equiv 0 \ (p^{2n-2\alpha})$. It follows that $2a_1a_2 \equiv 0 \ (p^{n-\alpha})$, which is a contradiction. From the previous contradictions, the rank of $A[1]$ over $A[0]$ is $\leq 1$. \end{proof} \begin{rem} Note that for any $m$, $A(q)=A(\mathcal{B}(V)\# {\mathbb Z}_{m^2}, \omega_1)$, where $V$ is the diagonal braided vector space of dimension 1 with braiding given by $q$. \end{rem} The question now is what happens when $p$ divides $s$ and we consider the associator given by $\omega_s \in H^3({\mathbb Z}_m,\mathbf{k}^{\times})$. Consider the quasi-Hopf algebras $A(H,s)$, for $H=R\# {\mathbb Z}_{p^{2n}}$, and write $s=p^{\theta}t$, where $\theta \geq 1$ and $p$ does not divide $t$. Consider $\alpha_i, \gamma_i \geq 0$ such that $b_i=p^{\alpha_i}a_i$, $d_i=p^{\gamma_i}c_i$. \emph{When $R$ has rank one}, the only condition is $b_1 \equiv sd_1 \ (p^n)$, which is possible choosing any $d_1$ with $\gamma_1 <n$: if $\theta+\gamma_1<n$, then $b_1$ is uniquely determined modulo $p^n$; if $\theta+\gamma_1 \geq n$, simply choose $b_1$ such that $p^n$ divides $b_1$. \emph{When $R$ has rank two}, $R$ is of Cartan type $A_1 \times A_1$, $A_2$, $B_2$ or $G_2$, or $p=3$ and it is of standard type $\hat{B}_2$.
In the first case, as in Section \ref{nichols}, it is determined by $\alpha_i \leq m <n$ and $a_1,a_2,c_1,c_2$ not divisible by $p$ such that $a_1c_2+a_2c_1 \equiv 0 \ (p^{2n-m})$, defining $\gamma_i=m-\alpha_i$. As also $b_i \equiv sd_i \ (p^n)$, if we suppose that $p^n$ does not divide $b_1$ (that is, $\alpha_1<n$), then $\alpha_1=\theta+\gamma_1$ and $a_1 \equiv tc_1 \ (p)$. But in that case, $\alpha_2= \theta+\gamma_2$ and $a_2 \equiv tc_2 \ (p)$, so $2tc_1c_2 \equiv 0 \ (p)$, which is a contradiction. Therefore, $p^n$ divides $b_i$, and we have $\gamma_i+\theta \geq n$. Consequently, the only restriction is to choose $\gamma_i, \alpha_i$ such that $n \leq \gamma_i+\theta$ and $\gamma_i+\alpha_i < 2n$. In the other cases, the condition $b_i \equiv sd_i \ (p^n)$ gives a contradiction if we suppose that $p^n$ does not divide $b_i$, in a similar way to the previous case, so we consider $p^n \mid b_i$ for all $i$. The other restrictions are given in Section \ref{nichols}. This condition is implicit when we consider the special case for $A_2$ and $p=3$, where $\alpha_i+\gamma_i=2n-1$. \emph{When $R$ has rank greater than 2}, we know that $p=3$ and $R$ is of type $A_2\times A_1$ or $A_2\times A_2$. In this case, $\alpha_i+\gamma_i=2n-1$, so any of those examples such that $p^n$ divides $b_i$ gives such an $H$: we have $\alpha_i=n$, $\gamma_i=n-1$, and the fact that $p$ divides $s$ implies that $b_i \equiv sd_i \ (p^n)$ holds trivially. From Theorem \ref{thm:classificationgradedqHA}, we have proved: \begin{cor}\label{corollary:gradedqHApn} Let $A={\oplus}_{n \geq 0} A[n]$ be a finite-dimensional radically graded quasi-Hopf algebra over ${\mathbb Z}_{p^n}$, with associator $\Phi_s$ for some $s$, such that the rank of $A[1]$ over $A[0]$ is $\theta \geq 1$. Then $A= A(\mathcal{B}(V)\# {\mathbb Z}_{p^{2n}},\omega_s)$ for some Yetter-Drinfeld module $V$ over ${\mathbb Z}_{p^{2n}}$ of dimension $\theta=2,3,4$.
Moreover, $\theta=3,4$ if and only if $p=3$ and $V$ is of type $A_2 \times A_1$, $A_2 \times A_2$, respectively. \end{cor} There also exist quasi-Hopf algebras $H(p^n,s)$, $1 \leq s\leq p^n-1$, generated by a group-like element $\sigma$ of order $p^n$, with non-trivial associator $\Phi_s$, distinguished elements $\alpha_s=\sigma^{-s}$, $\beta=1$, and $S(\sigma)=\sigma^{-1}$. In a similar way as for $n=1$, here an automorphism preserves the power of $p$ which divides $s$, so we have $2(n-1)$ equivalence classes up to isomorphism: if $s_0$ is a quadratic non-residue coprime with $p$, then these classes are $$ H_+(p^n,m):=H(p^n,p^m), \quad H_-(p^n,m):=H(p^n,s_0p^m) \qquad (1 \leq m \leq n-1).$$ Moreover, these classes are not twist equivalent. We restrict our attention to the case $p>7$ as above. As a consequence of Theorem \ref{thm:classificationqHA} we have: \begin{thm}\label{thm:classificationpn} Let $A$ be a finite-dimensional quasi-Hopf algebra such that $$A/\mathbb{R}ad A \cong \mathbf{k}[{\mathbb Z}_{p^n}],$$ for some prime $p >7$. Then $A$ is twist equivalent to one of the following quasi-Hopf algebras: \begin{enumerate} \item radically graded Hopf algebras $u(\mathcal{D},0,0)$ for some datum $\mathcal{D}$ of type $A_2$, $B_2$ or $G_2$ over ${\mathbb Z}_{p^n}$, \item the semisimple quasi-Hopf algebras $H_{\pm}(p^n,m)$ $(1 \leq m \leq n-1)$, \item the algebras $A(H,\omega_s)$, where $H \cong u(\mathcal{D},0,0)$ for some datum $\mathcal{D}$ of type $A_2$, $B_2$ or $G_2$ over ${\mathbb Z}_{p^{2n}}$ and some $s \in \Upsilon(H)$. \end{enumerate} \end{thm} \begin{thebibliography}{DGNO} \bibitem[AS1]{AS1} N. Andruskiewitsch and H.-J. Schneider, Finite quantum groups and Cartan matrices, \emph{Adv. Math.} \textbf{154} (2000), 1--45. \bibitem[AS2]{AS2} N. Andruskiewitsch and H.-J. Schneider, Pointed Hopf algebras, in: \emph{Recent developments in Hopf algebra theory, MSRI Publications} \textbf{43} (2002), 1--68, Cambridge Univ. Press.
\bibitem[AS3]{AS3} N. Andruskiewitsch and H.-J. Schneider, Finite quantum groups over abelian groups of prime exponent, \emph{Ann. Sci. Ec. Norm. Super.} \textbf{35} (2002), 1--26. \bibitem[AS4]{AS4} N. Andruskiewitsch and H.-J. Schneider, On the classification of finite-dimensional pointed Hopf algebras, preprint math.QA/0502157, \emph{Ann. Math.}, accepted, 43 pp. \bibitem[A]{A} I. Angiono, On Nichols algebras with standard braiding, \emph{Algebra and Number Theory} \textbf{3} (2009), no. 1, 35--106. \bibitem[B]{B} M. Beattie, Duals of pointed Hopf algebras, \emph{Journal of Algebra} \textbf{262} (2003), no. 1, 54--76. \bibitem[D]{D} V. Drinfeld, Quasi-Hopf algebras (Russian), \emph{Algebra i Analiz} \textbf{1} (1989), no. 6, 114--148; translation in \emph{Leningrad Math. J.} \textbf{1} (1990), no. 6, 1419--1457. \bibitem[DGNO]{dgno} V. Drinfeld, S. Gelaki, D. Nikshych, and V. Ostrik, On braided fusion categories I, preprint arXiv:0906.0620. \bibitem[EG1]{EG1} P. Etingof and S. Gelaki, Finite-dimensional quasi-Hopf algebras with radical of codimension 2, \emph{Mathematical Research Letters} \textbf{11} (2004), 685--696. \bibitem[EG2]{EG2} P. Etingof and S. Gelaki, On radically graded finite-dimensional quasi-Hopf algebras, \emph{Mosc. Math. J.} \textbf{5} (2005), no. 2, 371--378. \bibitem[EG3]{EG3} P. Etingof and S. Gelaki, Liftings of graded quasi-Hopf algebras with radical of prime codimension, \emph{J. Pure Appl. Algebra} \textbf{205} (2006), no. 2, 310--322. \bibitem[EG4]{EG4} P. Etingof and S. Gelaki, The small quantum group as a quantum double, preprint arXiv:0902.0332. \bibitem[ENO2]{ENO2} P. Etingof, D. Nikshych and V. Ostrik, Weakly group-theoretical and solvable fusion categories, preprint arXiv:0809.3031. \bibitem[EO]{EO} P. Etingof and V. Ostrik, Finite tensor categories, \emph{Moscow Mathematical Journal} \textbf{4} (2004), no. 3, 627--654, 782--783. \bibitem[G]{G} S.
Gelaki, Basic quasi-Hopf algebras of dimension $n^3$, \emph{Journal of Pure and Applied Algebra} \textbf{198} (2005), 165--174. \bibitem[GK]{GK} V. Ginzburg and S. Kumar, Cohomology of quantum groups at roots of unity, \emph{Duke Math. J.} \textbf{69} (1993), no. 1, 179--198. \bibitem[H]{H} I. Heckenberger, Classification of arithmetic root systems, \emph{Adv. Math.} \textbf{220} (2009), 59--124. \bibitem[Ma]{Ma} A. Masuoka, Abelian and non-abelian second cohomologies of quantized enveloping algebras, \emph{J. Algebra} \textbf{320} (2008), 1--47. \bibitem[Mo]{Mo} S. Montgomery, Hopf algebras and their actions on rings, \emph{CBMS Lecture Notes} \textbf{82}, Amer. Math. Soc., 1993. \bibitem[N]{N} D. Naidu, Categorical Morita equivalence for group-theoretical categories, \emph{Communications in Algebra} \textbf{35} (2007), no. 11, 3544--3565. \bibitem[Ni]{Ni} D. Nikshych, Non-group-theoretical semisimple Hopf algebras from group actions on fusion categories, \emph{Selecta Mathematica} \textbf{14} (2008), no. 1, 145--161. \bibitem[R]{R} D. Radford, The structure of Hopf algebras with a projection, \emph{J. Algebra} \textbf{92} (1985), no. 2, 322--347. \bibitem[S1]{S1} P. Schauenburg, Hopf Bigalois extensions, \emph{Comm. Algebra} \textbf{24} (1996), 3797--3825. \bibitem[S2]{S2} P. Schauenburg, A quasi-Hopf algebra freeness theorem, \emph{Proc. Amer. Math. Soc.} \textbf{132} (2004), no. 4, 965--972. \bibitem[T]{T} D. Tambara, Invariants and semi-direct products for finite group actions on tensor categories, \emph{J. Math. Soc. Japan} \textbf{53} (2001), no. 2, 429--456. \end{thebibliography} \end{document}
\begin{document} \mbox{} \begin{center} \rule{17cm}{1.5pt} {\Large \bf Translating solitons of the mean curvature flow in the space $\mathbb{H}^2\times\mathbb{R}$ \footnote{\hspace{-.75cm} Mathematics Subject Classification: 53A10, 53C42}}\\ {\large Antonio Bueno}\\ \rule{17cm}{1.5pt} \end{center} Departamento de Geometría y Topología, Universidad de Granada, E-18071 Granada, Spain. \\ e-mail: [email protected] \begin{abstract} In this paper we study the theory of translating solitons of the mean curvature flow of immersed surfaces in the product space $\mathbb{H}^2\times\mathbb{R}$. We relate this theory to the theory of manifolds with density, and exploit this relation by regarding these translating solitons as minimal surfaces in a conformal metric space. Explicit examples of these surfaces are constructed, and we study the asymptotic behavior of the existing rotationally symmetric examples. Finally, we prove some uniqueness and non-existence theorems. \end{abstract} \section{Introduction} Let $M$ be an orientable, immersed surface in the product space $\mathbb{H}^2\times\mathbb{R}$. We will say that $M$ is a \emph{translating soliton} if the mean curvature $H_M$ of $M$ satisfies, at each $p\in M$, \begin{equation}\label{hyperbolicsoliton} H_M(p)\eta_p=\partial_z^\bot. \end{equation} Here, $(\cdot)^\bot$ denotes the \emph{normal component}, $\partial_z$ is the unit vertical Killing vector field in $\mathbb{H}^2\times\mathbb{R}$ and $\eta$ is a unit normal vector field defined on $M$. Throughout this paper we will denote by $\langle\cdot,\cdot\rangle=g_{\mathbb{H}^2}+dz^2$ the product metric in $\mathbb{H}^2\times\mathbb{R}$; here $g_{\mathbb{H}^2}$ is the metric in $\mathbb{H}^2$ of constant curvature $-1$.
Notice that the above equation can be rewritten as \begin{equation}\label{mediapresc} H_M(p)=\langle\eta_p,\partial_z\rangle, \end{equation} where the scalar quantity $\langle \eta, \partial_z\rangle$ is the \emph{angle function} of the surface $M$, computed with respect to $\eta$ and the direction $\partial_z$; it will be denoted by $\nu(p):=\langle \eta_p, \partial_z\rangle$ for all $p\in M$. Our objective in this paper is to take as a starting point the well-studied theory of \emph{translating solitons} of the mean curvature flow (MCF for short) in the Euclidean space $\mathbb{R}^3$, and to use some of the known examples of translating solitons in $\mathbb{H}^2\times\mathbb{R}$ to obtain uniqueness and non-existence theorems; see \cite{CSS,Hu,HuSi,Il,MPGSHS,MSHS,Sm,SpXi} for relevant works regarding translating solitons of the MCF in $\mathbb{R}^3$. First, we recall some basic notions about the MCF in $\mathbb{R}^3$. Let $\psi:M\rightarrow\mathbb{R}^3$ be an immersion of an orientable surface $M$ in the Euclidean space $\mathbb{R}^3$, and let $\psi_t(\cdot)=\psi(\cdot,t):M\times [0,T)\rightarrow\mathbb{R}^3$ be a smooth variation of $M$, where $T>0$. We say that the variation $\psi_t$ evolves by MCF if \begin{equation}\label{MCF} \bigg(\frac{\partial\psi_t}{\partial t}(p,t)\bigg)^\bot=H_t(\psi_t(p))(\eta_t)_{\psi_t(p)},\ \forall p\in M,t\in [0,T), \end{equation} where $\eta_t:M_t\rightarrow\mathbb{R}^3$ is a unit normal vector field of $M_t=\psi_t(M)$ and $H_t$ is the mean curvature of the surface $M_t$ computed with respect to $\eta_t$. A surface $M$ in $\mathbb{R}^3$ is a translating soliton of the MCF if it is a solution of Equation \eqref{MCF} for the particular variation given by Euclidean translations $\psi_t(p)=\psi(p)+tv$, where $v\in\mathbb{R}^3$ is a fixed vector called the \emph{translating vector}.
As a matter of fact, $\eta_t=\eta$ and $H_t=H_M$, and thus Equation \eqref{MCF} reduces to \begin{equation}\label{solitonMCF} H_M(p)=\langle\eta_p,v\rangle. \end{equation} In \cite{HuSi} the authors proved that translating solitons in $\mathbb{R}^3$ arise in the singularity theory of the MCF as limit flows obtained by a proper blow-up procedure near type II singular points. Since $\mathbb{R}^3$ is \emph{isotropic} and no direction is \emph{privileged}, after a Euclidean change of coordinates, which leaves the problem invariant, we may suppose that $v$ is the vertical vector $e_3$. Equation \eqref{solitonMCF} shows that translating solitons of the MCF in $\mathbb{R}^3$ can be seen as a \emph{prescribed curvature problem}, involving only the angle that a unit normal field on the surface makes with a unit Killing vector field of the ambient space. Among the best known examples of translating solitons in $\mathbb{R}^3$, the ones invariant under the $SO(2)$ action of rotations around a fixed axis parallel to the direction of the flow are of special interest. The complete, rotational translating solitons are classified as follows: there exists a complete, strictly convex translating soliton which is an entire graph over $\mathbb{R}^2$, called the \emph{bowl soliton}, and there exists a 1-parameter family of properly embedded annuli called the \emph{translating catenoids} or \emph{wing-like solitons}; see \cite{AlWu,CSS,MSHS,SpXi} for relevant works regarding their asymptotic behavior at infinity as well as characterizations of these examples. Concretely, in \cite{CSS} the authors proved that the bowl soliton and the translating catenoids are \emph{asymptotic at infinity} when expressed as graphs outside a compact set.
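For intuition, it is worth recording the graphical form of this equation (a standard computation in the translator literature, e.g. \cite{CSS}, not used in the later sections of this paper): when $M$ is a vertical graph $x_3=u(x_1,x_2)$ with upward unit normal and $v=e_3$, Equation \eqref{solitonMCF} becomes the quasilinear elliptic equation
\begin{equation*}
\mathrm{div}\left(\frac{Du}{\sqrt{1+|Du|^2}}\right)=\frac{1}{\sqrt{1+|Du|^2}},
\end{equation*}
so graphical translators are precisely the solutions of this prescribed mean curvature equation; the bowl soliton is its rotationally invariant entire solution.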
In recent years, the space $\mathbb{H}^2\times\mathbb{R}$ has been considered as a major framework in which to extend the classical theory of minimal surfaces and surfaces of non-vanishing constant mean curvature in the Euclidean space $\mathbb{R}^3$. Many geometers have focused on this space, developing a fruitful theory of immersed surfaces in $\mathbb{H}^2\times\mathbb{R}$; see \cite{NeRo1,NeRo2} for some remarkable works regarding this space. The structure of this paper is the following: in \textbf{Section \ref{examples}}, we study the first properties of translating solitons in $\mathbb{H}^2\times\mathbb{R}$, taking as main motivation the well-studied theory of translating solitons of the MCF in $\mathbb{R}^3$. We introduce the two models of the space $\mathbb{H}^2\times\mathbb{R}$ that we are going to work with. In Theorem \ref{car3} we characterize translating solitons in $\mathbb{H}^2\times\mathbb{R}$ as minimal surfaces both in a conformal space and in a space with density. In particular, we show that these solitons are critical points for the weighted area functional, as introduced by Gromov in \cite{Gr}. This point of view of translating solitons as minimal surfaces allows us to prove the tangency principle in Theorem \ref{tangency}, and to solve the Dirichlet problem in Proposition \ref{dirichlet}. The simplest examples of surfaces to study are those invariant under a 1-parameter group of isometries of $\mathbb{H}^2\times\mathbb{R}$. In \textbf{Section \ref{rotacionales}}, the 1-parameter group of isometries considered consists of rotations around a vertical axis, and the examples arising are quite similar to the ones in the theory of translating solitons of the MCF. These rotationally symmetric examples were constructed by E. Kocakusakli, M. A. Lawn and M. Ortega, see \cite{KoOr,LaOr}, in the semi-Riemannian setting, and in particular in the spaces $\mathbb{H}^n\times\mathbb{R}$.
Here we will prove the existence of such examples by using a phase space analysis in the same fashion as in \cite{BGM}, where the authors studied immersed surfaces in $\mathbb{R}^3$ whose mean curvature is given by a prescribed function on the sphere $\mathbb{S}^2$, depending on the Gauss map. The main idea is that the ODE satisfied by the coordinates of the \emph{generating curve} of a rotationally symmetric translating soliton can be expressed as a first order autonomous system. With these tools, in Section \ref{exbowl} we prove the existence of the \emph{bowl soliton}, and in Section \ref{excat} we prove the existence of the \emph{translating catenoids}, also called wing-like examples. Both the bowl soliton and the family of translating catenoids are analogous to the rotationally symmetric translating solitons of the MCF in $\mathbb{R}^3$. In \textbf{Section \ref{asintotico}}, motivated by the graphical computations of the rotationally symmetric solitons, we study the behavior at infinity of the bowl soliton and the translating catenoids. In Lemma \ref{asin} we obtain the behavior of a rotational, graphical translating soliton when approaching infinity. In particular, the bowl soliton and the ends of each translating catenoid have the same asymptotic behavior when expressed as graphs outside a compact set. Lastly, in \textbf{Section \ref{uniqueness}} we use the examples defined in Section \ref{examples}, their asymptotic behavior at infinity described in Section \ref{asintotico}, and Theorem \ref{tangency} to prove some uniqueness and non-existence theorems. Most of the theorems obtained in this section, motivated by the thesis manuscript \cite{Pe}, are proved by comparing with a suitable translating soliton among the ones previously introduced, and then invoking Theorem \ref{tangency} in order to arrive at a contradiction.
The main result here is Theorem \ref{unicidadbowl}, where we prove that an immersed translating soliton with finite topology and one end which is $C^1$-asymptotic to the bowl soliton has to be a vertical translation of the bowl. A similar characterization of the bowl soliton of the MCF in $\mathbb{R}^3$ was first obtained in \cite{MSHS}. {\bf Acknowledgements} The author is grateful to the referee for helpful comments that highly improved the final version of the paper. \section{Preliminaries on translating solitons in the product space $\mathbb{H}^2\times\mathbb{R}$}\label{examples} Throughout this paper we will use two models of the hyperbolic plane $\mathbb{H}^2$: \begin{itemize} \item Let $\mathbb{L}^3$ denote the usual Lorentz-Minkowski flat space with global coordinates $(x_1,x_2,x_3)$, endowed with the metric $dx_1^2+dx_2^2-dx_3^2$. The hyperbolic plane can be regarded as the hyperquadric $$ \mathbb{H}^2=\{(x_1,x_2,x_3)\in\mathbb{L}^3;\ x_1^2+x_2^2-x_3^2=-1,\ x_3>0\}, $$ endowed with the restriction of the ambient metric. \item Consider the disk $\mathbb{D}=\{(x_1,x_2)\in\mathbb{R}^2;\ x_1^2+x_2^2<1\}$ endowed with the metric $ds^2=\lambda^2(dx_1^2+dx_2^2)$, where \begin{equation}\label{conformalfact} \lambda=\frac{2}{1-x_1^2-x_2^2}. \end{equation} Then, the space $(\mathbb{D},ds^2)$ is isometric to $\mathbb{H}^2$ and is known as the \emph{Poincaré disk model} of $\mathbb{H}^2$. In this model we have global coordinates $(x_1,x_2,z)$, where $(x_1,x_2)\in\mathbb{D}$ and $z\in\mathbb{R}$, and a global orthonormal frame given by $$ E_1=\frac{1}{\lambda}\partial_{x_1},\hspace{.5cm} E_2=\frac{1}{\lambda}\partial_{x_2},\hspace{.5cm} E_3=\partial_z. $$ \end{itemize} The product space $\mathbb{H}^2\times\mathbb{R}$ is defined as the Riemannian product of the hyperbolic plane $\mathbb{H}^2$ and the real line $\mathbb{R}$, endowed with the usual product metric, which we will denote by $\langle\cdot,\cdot\rangle$.
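One can check directly that the conformal factor \eqref{conformalfact} does yield curvature $-1$ (a routine verification, included here for completeness): for a conformal metric $\lambda^2(dx_1^2+dx_2^2)$ on a planar domain, the Gaussian curvature is $K=-\lambda^{-2}\Delta\log\lambda$, and in our case
\begin{equation*}
\Delta \log\lambda = \Delta\big(-\log(1-x_1^2-x_2^2)\big)
= \frac{4}{(1-x_1^2-x_2^2)^2} = \lambda^2,
\qquad\text{hence}\qquad
K=-\frac{\Delta\log\lambda}{\lambda^2}=-1.
\end{equation*}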
In the space $\mathbb{H}^2\times\mathbb{R}$ we have the two usual projections $\pi_1:\mathbb{H}^2\times\mathbb{R}\rightarrow\mathbb{H}^2$ and $\pi_2:\mathbb{H}^2\times\mathbb{R}\rightarrow\mathbb{R}$. The \emph{height function} of $\mathbb{H}^2\times\mathbb{R}$ is defined to be the second projection $\pi_2$, and is commonly denoted by $h(p):=\pi_2(p)$ for all $p\in\mathbb{H}^2\times\mathbb{R}$. The gradient of the height function is a vertical, unit Killing vector field, commonly denoted in the literature by $\partial_z$. Let us point out two key properties that translating solitons in $\mathbb{R}^3$ satisfy. One of the main tools in this theory is the fact that they can be regarded as minimal surfaces in the conformal space $\big(\mathbb{R}^3,e^{x_3}g_{euc}\big)$, where $x_3$ stands for the third coordinate of a point and $g_{euc}$ is the usual Euclidean metric. The conformal metric $e^{x_3}g_{euc}$ is known in the literature as the \emph{Ilmanen metric}; see \cite{Il} for more details. Consequently, every translator in $\mathbb{R}^3$ is a minimal surface in $\big(\mathbb{R}^3,e^{x_3}g_{euc}\big)$ and vice versa. The theory of translating solitons in the Euclidean space is also related to the theory of manifolds with density as follows: let $(\mathcal{N},g,\phi)$ be a manifold with a density function $\phi\in C^\infty(\mathcal{N})$. For manifolds with density, Gromov \cite{Gr} defined the \emph{weighted mean curvature} of an oriented hypersurface $M\subset(\mathcal{N},g,\phi)$ by \begin{equation}\label{weightedmc} H_\phi:=H_M-g(\nabla\phi,\eta), \end{equation} where $H_M$ is the mean curvature of $M$ in $(\mathcal{N},g)$, $\eta$ is a unit normal vector field along $M$ and $\nabla$ is the gradient computed in the ambient space $(\mathcal{N},g)$; see also \cite{BCMR}.
For the particular case of the weighted space $\big(\mathbb{R}^3,g_{euc},x_3\big)$, from \eqref{weightedmc} we obtain $$ H_{x_3}=H-g_{euc}(\eta,\nabla x_3)=H-g_{euc}(\eta,e_3). $$ Thus, an immersed surface in $\mathbb{R}^3$ is a translating soliton if and only if it is a minimal surface in $\mathbb{R}^3$ measured with the density $x_3$. The next theorem proves that translating solitons in $\mathbb{H}^2\times\mathbb{R}$ inherit these same properties. \begin{teo}\label{car3} Let $M$ be an immersed surface in $\mathbb{H}^2\times\mathbb{R}$. Then, the following are equivalent: \begin{itemize} \item[1.] The surface $M$ is a translating soliton in $\mathbb{H}^2\times\mathbb{R}$. \item[2.] The surface $M$ is minimal in the conformal space $\big(\mathbb{H}^2\times\mathbb{R},e^{h}\langle\cdot,\cdot\rangle\big)$. \item[3.] The surface $M$ is weighted minimal in the density space $\big(\mathbb{H}^2\times\mathbb{R},\langle\cdot,\cdot\rangle,h\big)$. \end{itemize} \end{teo} \begin{proof} $\underline{1.\Longleftrightarrow 2.}$ A well-known formula states that, given a three-dimensional Riemannian manifold $(\mathcal{N},g)$ and a conformal metric $\overline{g}=e^{2\phi}g$, where $\phi$ is a smooth function on $\mathcal{N}$, the mean curvatures $\overline{H}_M$ and $H_M$ of $M$ with respect to the metrics $\overline{g}$ and $g$, respectively, are related by the formula $$ \overline{H}_M=e^{-\phi}\big(H_M-2g(\nabla\phi,\eta)\big), $$ where $\nabla$ is the gradient operator with respect to the metric $g$. In our situation, the conformal factor is the exponential of the height function $h$, that is, $\phi=h/2$. As the gradient of the height function in $\mathbb{H}^2\times\mathbb{R}$ is none other than the vertical Killing vector field $\partial_z$, we obtain $$ \overline{H}_M=e^{-h/2}\big(H_M-\langle\partial_z,\eta\rangle\big)=e^{-h/2}\big(H_M-\nu\big).
$$ Thus, $M$ is a translating soliton if and only if the mean curvature $\overline{H}_M$ vanishes identically, proving the equivalence between the first two items. $\underline{1.\Longleftrightarrow 3.}$ From Equation \eqref{weightedmc} for the density space $\big(\mathbb{H}^2\times\mathbb{R},\langle\cdot,\cdot\rangle,h\big)$, we obtain $$ H_h=H_M-\langle\partial_z,\eta\rangle=H_M-\nu. $$ This proves that $M$ is a translating soliton in $(\mathbb{H}^2\times\mathbb{R},\langle\cdot,\cdot\rangle)$ if and only if $M$ is weighted minimal in $\big(\mathbb{H}^2\times\mathbb{R},\langle\cdot,\cdot\rangle,h\big)$. \end{proof} The importance of Item $3$ in the previous theorem is that translating solitons can be characterized as critical points of the weighted area functional in the following way. Let $(\mathcal{N},g,\phi)$ be a density manifold. For a measurable subset $\Omega\subset\mathcal{N}$ with boundary $M=\partial\Omega$ and inward unit normal $\eta$, we can define the \emph{weighted area} of $M$ as $$ A_\phi(M)=\int_M e^\phi dv_M, $$ where $dv_M$ stands for the area element with respect to the metric $g$. Consider a compactly supported variation $\{\Psi_t\}$ of an immersed surface $M$ with $\Psi'(0)=V+\omega\eta$, where $V$ is a tangent vector field along $M$ and $\omega$ is a smooth function with compact support on $M$. By Bayle's variational formula in \cite{Ba}, we have $$ \frac{d}{dt}\bigg|_{t=0}A_\phi(\Psi_t(M))=\int_M H_\phi\omega e^\phi dv_M, $$ and thus weighted minimal surfaces in density spaces are critical points of the weighted area functional.
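In our setting this first variation can be made fully explicit. The display below is a direct specialization of Bayle's formula to the density $\phi=h$, combined with the identity $H_h=H_M-\nu$ obtained in the proof of Theorem \ref{car3}:

```latex
% First variation of the weighted area A_h for the density \phi = h:
\frac{d}{dt}\bigg|_{t=0}A_h(\Psi_t(M))
   =\int_M H_h\,\omega\, e^{h}\, dv_M
   =\int_M \big(H_M-\nu\big)\,\omega\, e^{h}\, dv_M .
```

Hence $M$ is a critical point of $A_h$ for every compactly supported normal variation $\omega$ if and only if $H_M=\nu$, that is, if and only if $M$ is a translating soliton.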
In particular, Item 3 in Theorem \ref{car3} ensures that \emph{\textbf{translating solitons are critical points for the weighted area functional under compactly supported variations.}} The minimality of a translating soliton in the conformal space $(\mathbb{H}^2\times\mathbb{R},e^{h}\langle\cdot,\cdot\rangle)$ given by Item $2$ in Theorem \ref{car3} allows us to formulate the \emph{tangency principle}, which is reminiscent of the case of minimal surfaces in $\mathbb{R}^3$ and is a consequence of Hopf's maximum principle for elliptic PDEs. \begin{teo}[Tangency principle]\label{tangency} Let $M_1$ and $M_2$ be two connected translating solitons in $\mathbb{H}^2\times\mathbb{R}$ with possibly non-empty boundaries $\partial M_1,\ \partial M_2$. Suppose that one of the following statements holds: \begin{itemize} \item There exists $p\in int(M_1)\cap int (M_2)$ with $(\eta_1)_p=(\eta_2)_p$, where $\eta_i:M_i\rightarrow\mathbb{S}^2$ is the unit normal of $M_i$, respectively. \item There exists $p\in\partial M_1\cap\partial M_2$ with $(\eta_1)_p=(\eta_2)_p$ and $(\xi_1)_p=(\xi_2)_p$, where $\xi_i$ is the interior unit conormal of $\partial M_i$. \end{itemize} Assume that $M_1$ lies locally around $p$ at one side of $M_2$. Then, in either situation, both surfaces agree in a neighbourhood of $p$. Moreover, if both surfaces $M_i$ are complete, then $M_1=M_2$. \end{teo} We also focus our attention on solving the \emph{Dirichlet problem} for graphical translating solitons. The next result is a consequence of Theorem 1.1 in \cite{CHH}, and gives conditions for the existence of graphical translating solitons in $\mathbb{H}^2\times\mathbb{R}$. \begin{pro}\label{dirichlet} Let $\Omega\subset\mathbb{H}^2$ be a bounded $C^2$ domain with $C^{2,\alpha}$ boundary, and consider $\varphi\in C^{2,\alpha}(\partial\Omega)$ for $\alpha\in (0,1)$.
Suppose that $H_{\partial\Omega}\geq 2$, where $H_{\partial\Omega}$ stands for the inward geodesic curvature of $\partial\Omega$. Then, the Dirichlet problem \begin{equation}\label{divergencia} \left\{\begin{array}{ll} 2H_M=\displaystyle{\frac{2}{\sqrt{1+|\nabla u|^2}}}=\mathrm{div}\Bigg(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\Bigg) & \text{in }\Omega\\ u=\varphi & \text{on }\partial\Omega \end{array}\right. \end{equation} has a unique solution $u\in C^{2,\alpha}(\overline{\Omega})$. \end{pro} \begin{proof} We will check that our hypotheses agree with those of Theorem 1.1 in \cite{CHH}, which is formulated in a more general setting; there, $\Omega$ is an open subset of a complete, non-compact manifold $M$, and Equation \eqref{divergencia} has the expression $$ \mathrm{div}\Bigg(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\Bigg)=\langle\overline{\nabla} f,\eta\rangle, $$ where $\eta$ is the unit normal of the graph, $f$ is a smooth function defined on the product manifold $M\times\mathbb{R}$, and $\overline{\nabla}$ is the gradient operator computed with respect to the product metric. In our setting, the function $f$ defined on $\mathbb{H}^2\times\mathbb{R}$ is just $f(p)=2h(p)$, where $h$ denotes as usual the height function of a point $p\in\mathbb{H}^2\times\mathbb{R}$, and thus $\overline{\nabla} f=2\partial_z$. In the same spirit as in Theorem 1.1, we define $F=\underset{\overline{\Omega\times\mathbb{R}}}{\sup}|\overline{\nabla} f|$; in our particular case, $F=2$. Now, in order to apply Theorem 1.1, three conditions must hold: \begin{itemize} \item[1.] $F<\infty$, which is trivial in our case. \item[2.] $Ric_\Omega\geq -F$. In our case, $\Omega\subset\mathbb{H}^2$, which has constant curvature equal to $-1$; its Ricci curvature agrees with its Gauss curvature, so $Ric_\Omega=-1\geq -2=-F$ and this condition holds. \item[3.] $H_{\partial\Omega}\geq F$. This is just the hypothesis stated in the formulation of Proposition \ref{dirichlet}.
\end{itemize} In this situation, Theorem 1.1 in \cite{CHH} ensures that there exists a function $u\in C^{2,\alpha}(\overline{\Omega})$, such that the graph defined by $u$ solves Equation \eqref{divergencia}. This completes the proof of Proposition \ref{dirichlet}. \end{proof} To end this section, we will give the first examples of solutions of Equation \eqref{hyperbolicsoliton}, which are minimal surfaces of $\mathbb{H}^2\times\mathbb{R}$. If a translating soliton is also a minimal surface, then the translating vector $\partial_z$ that defines the movement of the translating soliton must satisfy \begin{equation}\label{hypvertical} \partial_z^\bot=H_M(p)\eta_p=0, \end{equation} that is, $\partial_z$ has to be tangential to the translator $M$ at each $p\in M$. This happens for vertical planes $\gamma\times\mathbb{R}$, where $\gamma\subset\mathbb{H}^2$ is a geodesic, which are minimal surfaces of $\mathbb{H}^2\times\mathbb{R}$ everywhere tangential to $\partial_z$, and thus translating solitons. \section{Rotationally symmetric translating solitons}\label{rotacionales} This section is devoted to the study of translating solitons which are invariant under the isometric $SO(2)$-action of rotations around a vertical axis. These examples were already obtained in \cite{KoOr,LaOr} for translating solitons immersed in semi-Riemannian manifolds. An alternative proof will be given in this paper, and the existence, uniqueness and properties of these rotationally symmetric translators will be analysed by means of a phase space study, inspired by the ideas developed in Section 3 in \cite{BGM}. \textbf{Throughout this section, the model used for the space $\mathbb{H}^2$ will be the Lorentz-Minkowski hyperboloid in $\mathbb{L}^3$.} After an ambient translation, we may suppose that the vertical axis is the one passing through the origin. 
Let $\alpha(t)=(\sinh r(t),0,\cosh r(t),w(t)),\ t\in I\subset \mathbb{R}$, be an arc-length parametrized curve in the space $\mathbb{H}^2\times\mathbb{R}$. In this situation, we can make $\alpha$ rotate around the vertical axis passing through the origin under the isometric $SO(2)$-action of a circle $\phi(s)=(\cos s,\sin s)$. Bearing this in mind, the parametrization given by \begin{equation}\label{parrot} \psi(t,\theta)=(\sinh r(t)\cos \theta,\sinh r(t)\sin \theta,\cosh r(t),w(t)) \end{equation} generates an immersed surface $M$, rotationally symmetric with respect to the vertical axis passing through the origin. With this parametrization the angle function is given, up to a change of the orientation, by $\nu(\psi(t,\theta))=r'(t)$. The principal curvatures of $M$ at each $\psi(t,\theta)$ are given by \begin{equation}\label{curvaturasprinc} \kappa_1=\kappa_\alpha=r'(t)w''(t)-r''(t)w'(t),\ \ \ \ \ \ \ \kappa_2=w'(t)\coth r(t), \end{equation} where $\kappa_\alpha$ is the geodesic curvature of $\alpha(t)$. The mean curvature of a rotationally symmetric surface in $\mathbb{H}^2\times\mathbb{R}$ has the expression $$ 2H_M=r'(t)w''(t)-r''(t)w'(t)+w'(t)\coth r(t). $$ By hypothesis, $M$ is a translating soliton and thus $H_M(\psi(t,\theta))=r'(t)$. This implies that the coordinates $r(t),w(t)$ of an arc-length parametrized curve, generating a rotationally symmetric translating soliton given by Equation \eqref{parrot}, satisfy the system \begin{equation}\label{ODErot} \left\{\begin{array}{rll} r'(t)&=& \cos\theta(t)\\ w'(t)&=& \sin\theta(t)\\ \theta'(t)&=& 2\cos\theta(t)-\sin\theta(t)\coth r(t). \end{array} \right.
\end{equation} From now on, we will suppress the dependence on the variable $t$ and just write $r\equiv r(t)$, and so on. Differentiating the first equation of \eqref{ODErot} and substituting the third one gives $r''=-\sin\theta\,\theta'=(1-r'^2)\coth r-2r'w'$; together with the arc-length condition $r'^2+w'^2=1$, which yields $w'=\varepsilon\sqrt{1-r'^2}$ with $\varepsilon=\mathrm{sign}(w')$, this implies that the function $r$ is a solution of the autonomous second order ODE \begin{equation}\label{ODErot2} r''=(1-r'^2)\coth r-2\varepsilon r'\sqrt{1-r'^2},\ \ \ \ \varepsilon=\mathrm{sign}(w'), \end{equation} on every subinterval $J\subset I$ where $w'\neq 0$. The change $r'=y$ transforms \eqref{ODErot2} into the first order autonomous system \begin{equation}\label{1ordersys} \left(\begin{array}{c} r'\\ y' \end{array}\right)=\left(\begin{array}{c} y\\ (1-y^2)\coth r -2\varepsilon y\sqrt{1-y^2} \end{array}\right)=F(r,y). \end{equation} We define the \emph{phase space} of \eqref{1ordersys} as the half-strip $\Theta_\varepsilon:=(0,\infty)\times (-1,1)$, with coordinates $(r,y)$ denoting, respectively, the distance to the rotation axis and the angle function. It will also be useful to define the sets $\Theta_\varepsilon^+:=\Theta_\varepsilon\cap\{y>0\}$ and $\Theta_\varepsilon^-:=\Theta_\varepsilon\cap\{y<0\}$. The \emph{equilibrium points}, if they exist, correspond to points at constant distance to the axis of rotation, and are characterized by the condition $F(r_0,y_0)=0$. Since $F(r_0,y_0)=0$ forces $y_0=0$ and $\coth r_0=0$, which is impossible for $r_0>0$, no equilibrium points exist, and thus there are no translating solitons that can be considered as rotational vertical cylinders. A straightforward consequence of the uniqueness of the Cauchy problem is that the orbits $\gamma(t):=(r(t),y(t))$ form a foliation of $\Theta_\varepsilon$ by \emph{\textbf{regular proper $C^2$ curves}}.
This properness condition will be applied throughout this paper, and should be interpreted as follows: an orbit $\gamma(t)$ cannot have as endpoint a finite point of the form $(x_0,y_0)$ with $x_0\neq 0$ and $y_0\neq\pm 1$, since at these points Equation \eqref{1ordersys} has local existence and uniqueness, and thus any orbit around such a point $(x_0,y_0)$ can be extended. This properness condition implies that any orbit $\gamma(t)$ is a maximal curve inside $\Theta_\varepsilon$ which has its endpoints at the boundary of $\Theta_\varepsilon$, that is, at the segment $\{0\}\times[-1,1]$ or at the lines $y=\pm 1$. The points in $\Theta_\varepsilon$ where $y'=0$ are those lying at the horizontal graph \begin{equation} r=\Gamma_\varepsilon(y)=\mathrm{arctanh}\bigg(\frac{\sqrt{1-y^2}}{2\varepsilon y}\bigg). \end{equation} We will denote by $\Gamma_\varepsilon$ the intersection of this graph with $\Theta_\varepsilon$. It is immediate to observe that the values $t\in J$ where the profile curve $\alpha$ has vanishing geodesic curvature are those where $y'=0$, i.e. the points where $(r(t),y(t))\in\Gamma_\varepsilon$. Notice that, as the function $\mathrm{arctanh}$ is defined only for values lying in the interval $(-1,1)$, $\Gamma_\varepsilon(y)$ is only defined for values $y$ satisfying the bound $$ -1<\frac{\sqrt{1-y^2}}{2y}<1\Longleftrightarrow |y|>\frac{1}{\sqrt{5}}. $$ That is, the curve $\Gamma_\varepsilon$ has the lines $y=\pm 1/\sqrt{5}$ as asymptotes. As $\Gamma_\varepsilon$ only appears in $\Theta_\varepsilon$ when $\varepsilon y>0$, for $\varepsilon=1$ (resp. $\varepsilon=-1$) $\Gamma_1$ only appears in $\Theta_1$ for $y\in(1/\sqrt{5},1)$ (resp. $\Gamma_{-1}$ only appears in $\Theta_{-1}$ for $y\in (-1, -1/\sqrt{5})$). This implies that $\Gamma_\varepsilon$ and the axis $y=0$ divide $\Theta_\varepsilon$ into three connected components where both $r'$ and $y'$ are monotonous and $\alpha$ has non-vanishing geodesic curvature.
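As a quick numerical sanity check of the discussion above (not part of the original argument), one can verify that the second component of the vector field $F$ in \eqref{1ordersys} vanishes exactly along the graph $r=\Gamma_\varepsilon(y)$, and that $\Gamma_\varepsilon(y)$ is only defined for $|y|>1/\sqrt{5}$. The sketch below is in plain Python; the function names are ours, not from the paper.

```python
import math

def y_prime(r, y, eps=1):
    # Second component of F in (1ordersys): y' = (1-y^2) coth r - 2 eps y sqrt(1-y^2)
    return (1 - y**2) / math.tanh(r) - 2 * eps * y * math.sqrt(1 - y**2)

def gamma(y, eps=1):
    # r = Gamma_eps(y) = arctanh( sqrt(1-y^2) / (2 eps y) ), defined only for |y| > 1/sqrt(5)
    return math.atanh(math.sqrt(1 - y**2) / (2 * eps * y))

# On the curve Gamma_1 the vertical component of F vanishes identically:
for y0 in (0.5, 0.6, 0.8, 0.95):
    assert abs(y_prime(gamma(y0), y0)) < 1e-12

# Below the threshold |y| = 1/sqrt(5) ~ 0.447 the argument of arctanh leaves (-1,1):
try:
    gamma(0.4)
    raise AssertionError("expected a domain error")
except ValueError:
    pass
```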
It will be useful for the sake of clarity to name each of these \emph{monotonicity regions}; recall that $\Theta_\varepsilon^+=\Theta_\varepsilon\cap\{y>0\}$ and $\Theta_\varepsilon^-=\Theta_\varepsilon\cap\{y<0\}$. When $\varepsilon=1$, $\Gamma_1$ is contained entirely in $\Theta_1^+$. We define \begin{equation}\label{regionesmonotoniamas} \begin{array}{l} \Lambda_1^-=\big(\Theta_1^+\cap\{y\leq 1/\sqrt{5}\}\big)\cup\{(r,y);\ y>1/\sqrt{5},\ r<\Gamma_1(y)\}, \\ \Lambda_1^+=\{(r,y);\ y>1/\sqrt{5},\ r>\Gamma_1(y)\}, \end{array} \end{equation} which are, along with $\Theta_1^-$, the three monotonicity regions in $\Theta_1$, see Fig. \ref{fases}, left. Likewise, if $\varepsilon=-1$ then $\Gamma_{-1}$ is contained in $\Theta^-_{-1}$. Now we define \begin{equation}\label{regionesmonotoniamenos} \begin{array}{l} \Lambda_{-1}^-=\big(\Theta_{-1}^-\cap\{y\geq -1/\sqrt{5}\}\big)\cup\{(r,y);\ y<-1/\sqrt{5},\ r<\Gamma_{-1}(y)\}, \\ \Lambda_{-1}^+=\{(r,y);\ y<-1/\sqrt{5},\ r>\Gamma_{-1}(y)\}. \end{array} \end{equation} In this situation the three monotonicity regions of $\Theta_{-1}$ are $\Theta_{-1}^+,\ \Lambda_{-1}^-$ and $\Lambda_{-1}^+$, see Fig. \ref{fases}, right. \begin{figure} \caption{Left: the phase space $\Theta_1$ and its monotonicity regions. Right: the phase space $\Theta_{-1}$ and its monotonicity regions.} \label{fases} \end{figure} We should emphasize that the signs of the principal curvatures given by Equation \eqref{curvaturasprinc} at each point $\alpha(t)$ are given by \begin{equation}\label{signk} \mathrm{sign}(\kappa_1)=\mathrm{sign}(-\varepsilon y'(t)),\qquad \mathrm{sign}(\kappa_2)=\mathrm{sign}(\varepsilon). \end{equation} In each of these monotonicity regions we can view the orbits as functions $y=y(r)$ wherever possible, i.e. at points with $y\neq 0$, and thus we have \begin{equation}\label{yfuncx} y\frac{dy}{dr}=(1-y^2)\coth r-2\varepsilon y\sqrt{1-y^2}.
\end{equation} In particular, in each monotonicity region the sign of $yy'$ is constant. As a consequence, the signs of $y_0$ and $r_0-\Gamma_\varepsilon(y_0)$ (for $y_0\neq 0$) determine the behavior of the orbit of \eqref{1ordersys} seen as a function $(r,y(r))$ in each component. The possible behaviors are summarized in the following lemma: \begin{lem}\label{comportamiento} In the above setting, for any $(r_0,y_0)\in\Theta_\varepsilon$ with $y_0\neq 0$, the following properties hold: \begin{itemize} \item If $r_0>\Gamma_\varepsilon(y_0)$ (resp. $r_0<\Gamma_\varepsilon(y_0)$) and $y_0>0$, then $y(r)$ is strictly decreasing (resp. increasing) at $r_0$. \item If $r_0>\Gamma_\varepsilon(y_0)$ (resp. $r_0<\Gamma_\varepsilon(y_0)$) and $y_0<0$, then $y(r)$ is strictly increasing (resp. decreasing) at $r_0$. \item If $y_0=0$, then the orbit passing through $(r_0,0)$ is orthogonal to the $r$-axis. \item If $r_0=\Gamma_\varepsilon(y_0)$, then $y'(r_0)=0$ and $y(r)$ has a local extremum at $r_0$. \end{itemize} \end{lem} For any $(x_0,y_0)\in\Theta_\varepsilon$, the existence and uniqueness theorem for the Cauchy problem guarantees a unique orbit of system \eqref{1ordersys} passing through $(x_0,y_0)$. However, Equation \eqref{1ordersys} has a singularity at the points with $x_0=0$, and thus we cannot apply the existence and uniqueness of the Cauchy problem in order to guarantee the existence of an orbit having the points $(0,\pm 1)$ as endpoints. To overcome this difficulty we may solve the Dirichlet problem of Proposition \ref{dirichlet} in order to ensure the existence of a translating soliton in $\mathbb{H}^2\times\mathbb{R}$ which is rotational around the vertical axis passing through the origin and meets this axis orthogonally at some point.
\begin{lem}\label{existeeje} There exists a disk $\Omega\subset\mathbb{H}^2$ centered at the origin of $\mathbb{H}^2$ and a function $u:\Omega\rightarrow\mathbb{R}$ such that the surface defined by $M=\mathrm{graph}(u)$ is a translating soliton in $\mathbb{H}^2\times\mathbb{R}$ which is rotationally symmetric with respect to the vertical axis passing through the origin and meets this axis orthogonally at some $p\in M$. Moreover, $M$ is unique among all the graphical translating solitons over $\Omega$ with constant Dirichlet data. \end{lem} \begin{proof} We present the argument for \emph{upwards-oriented} graphs; the case of \emph{downwards-oriented} graphs is similar. By Proposition \ref{dirichlet}, we can solve the Dirichlet problem in Equation \eqref{divergencia} for upwards-oriented graphs in a small enough disk $\Omega\subset\mathbb{H}^2$ centered at the origin with constant Dirichlet data on the boundary, obtaining a $C^{2,\alpha}$ function $u:\Omega\rightarrow\mathbb{R}$ that solves Equation \eqref{divergencia}. Let us define $M:=\mathrm{graph}(u)$. Since the domain and the Dirichlet data are rotationally symmetric and the solution of \eqref{divergencia} is unique, $u$ is invariant under rotations around the origin, and thus $M$ is a rotational surface. The uniqueness of $M$ comes from the maximum principle, as the divergence equation is invariant under adding constants to $u$. \end{proof} \subsection{The bowl soliton}\label{exbowl} The following theorem proves the existence of the analogue of the bowl soliton in $\mathbb{R}^3$. \begin{teo}\label{teobowl} There exists an upwards-oriented, rotational translating soliton in $\mathbb{H}^2\times\mathbb{R}$ that is an entire, vertical graph. \end{teo} \begin{proof} According to Lemma \ref{existeeje}, there exists an upwards-oriented, rotational translating soliton $M$, generated by rotating an arc-length parametrized curve $\alpha(t)$ which is a solution of \eqref{ODErot}.
As $M$ is upwards-oriented, at $p_0=M\cap l$, where $l$ is the vertical line passing through the origin, we have $H_M(p_0)=\nu(p_0)=1>0$. By the mean curvature comparison principle, the height function $w(t)$ of $\alpha(t)$ satisfies $w'(t)>0$ for $t>0$ close enough to zero, and thus the orbit $\gamma(t):=(r(t),y(t))$ starts at the point $(0,1)$ in $\Theta_1$ for $t>0$ small enough. Moreover, for $t>0$ small enough the curve $\alpha(t)$ has positive geodesic curvature, and thus the orbit $\gamma$ lies in $\Lambda_1^+$ at points near $(0,1)$ in $\Theta_1$. The monotonicity properties imply that the whole orbit $\gamma$ is contained in $\Lambda_1^+$ and $\kappa_{\alpha(t)}>0$ for all $t>0$. By monotonicity and properness, we can see $\gamma$ as a graph $y=f(r)$, for a certain $f\in C^2([0,\infty))$ satisfying $f(0)=1,\ f'(0)=0,\ f(r)>1/\sqrt{5}$ and $f'(r)<0$ for all $r>0$, see Fig. \ref{bowlperfil}. This implies that the translating soliton $M$ generated by rotating $\alpha$ with respect to the axis $l$ is an entire, vertical graph in $\mathbb{H}^2\times\mathbb{R}$, concluding the proof. \begin{figure} \caption{Left: the phase space with the orbit corresponding to the bowl soliton plotted in red. Right: the profile of the bowl soliton in the model $\mathbb{D}\times\mathbb{R}$.} \label{bowlperfil} \end{figure} \end{proof} This entire graph is called the \emph{bowl soliton}, and will be denoted throughout this paper by $\mathcal{B}$ (see Fig. \ref{bowl}). The \emph{vertex} is the lowest point of $\mathcal{B}$, which is also the unique point in $\mathcal{B}$ that intersects the axis of rotation.
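The qualitative behavior used in the proof, namely that orbits in $\Lambda_1^+$ have decreasing $y$-coordinate converging to $1/\sqrt{5}$, can also be observed numerically. The following sketch integrates system \eqref{1ordersys} with a standard fourth-order Runge--Kutta scheme from a point of $\Lambda_1^+$; the initial data are our own choice, and the script is an illustration, not a substitute for the proof.

```python
import math

def field(r, y, eps=1):
    # Right-hand side of (1ordersys): r' = y, y' = (1-y^2) coth r - 2 eps y sqrt(1-y^2)
    return y, (1 - y**2) / math.tanh(r) - 2 * eps * y * math.sqrt(1 - y**2)

def rk4(r, y, t_end, dt=1e-3):
    # Classical RK4 integration of the planar autonomous system
    t = 0.0
    while t < t_end:
        k1 = field(r, y)
        k2 = field(r + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
        k3 = field(r + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
        k4 = field(r + dt*k3[0], y + dt*k3[1])
        r += dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        y += dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        t += dt
    return r, y

# Start inside Lambda_1^+ (here r_0 = 0.3 > Gamma_1(0.95) ~ 0.166) and flow forward:
r_end, y_end = rk4(0.3, 0.95, t_end=40.0)
assert y_end > 1/math.sqrt(5)                 # the orbit stays above the asymptote
assert abs(y_end - 1/math.sqrt(5)) < 1e-2     # and converges to y = 1/sqrt(5)
```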
\begin{figure} \caption{The bowl soliton in the Poincaré model $\mathbb{D}\times\mathbb{R}$.} \label{bowl} \end{figure} The main difference with the \emph{bowl soliton} in the theory of translating solitons in $\mathbb{R}^3$ is that, although the Euclidean bowl soliton has angle function tending to zero (and thus mean curvature tending to zero), here the bowl soliton has angle function tending to $1/\sqrt{5}$, and thus its mean curvature at infinity is non-zero. However, in the space $\mathbb{H}^2\times\mathbb{R}$ no constant mean curvature spheres exist for values of the mean curvature $H\leq 1/2$. In particular, this behavior of $\mathcal{B}$ does not contradict the mean curvature comparison theorem for constant mean curvature spheres whose mean curvature approaches $1/2$. \subsection{A one parameter family of immersed annuli: the translating catenoids}\label{excat} The following theorem proves the existence of the analogues of the wing-like catenoids in $\mathbb{R}^3$. \begin{teo}\label{catenoids} There exists a one parameter family of properly immersed translating solitons, each one with the topology of an annulus. Each end of the annulus points in the $\partial_z$ direction, and is a vertical graph outside a compact set. These examples, denoted by $\{\mathcal{C}_r\}_{r>0}$, are called the translating catenoids, or wing-like solutions. \end{teo} \begin{proof} Let $M$ be the rotational translating soliton in $\mathbb{H}^2\times\mathbb{R}$ generated by the rotation of an arc-length parametrized curve $\alpha(t)$ given by Equation \eqref{parrot}, with initial conditions $r(0)=r_0,\ w'(0)=1$, for an arbitrary $r_0>0$. The orbit $\gamma=(r(t),y(t))$ passing through $(r_0,0)$ belongs to the phase space $\Theta_1$ for $t$ close enough to zero, i.e. $\varepsilon=1$ in \eqref{1ordersys}. In this situation, we know that there are three monotonicity regions in $\Theta_1$.
For $t>0$ small enough, $\gamma$ stays in $\Lambda_1^-$, and by Lemma \ref{comportamiento} we can see the second coordinate of $\gamma$, $y(t)$, as an increasing function $y(r)$ until $\gamma$ intersects $\Gamma_1$, where $y(r)$ attains a maximum. Then, $\gamma$ enters $\Lambda_1^+$ and stays in it, and the coordinate $y(t)$ can be seen as a decreasing function $y(r)$ converging to $y=1/\sqrt{5}$, see Fig. \ref{fasecat1}, left. With this procedure we obtain the first component $M^+$, which is a graph over the exterior of the disk $D(0,r_0)$ of $\mathbb{H}^2$; indeed, the only point with $y=0$, i.e. with vertical tangency, occurs at $t=0$. This component has the topology of $\mathbb{S}^1\times[0,\infty)$, and $\mathbb{S}^1\times\{0\}$ is just the circumference at $t=0$. The height function $w(t)$ satisfies $w'(t)>0$ for every $t>0$. If we denote by $t_0>0$ the instant at which $\gamma$ intersects $\Gamma_1$, then as $\gamma\subset\Lambda_1^+$ for all $t>t_0$, we conclude that $M^+$ is a graph outside a compact set. By properness, the height of $M^+$ is unbounded as $t\rightarrow\infty$. \begin{figure} \caption{The phase space $\Theta_1$: behavior of the orbit $\gamma$ for $t>0$ (left) and for decreasing $t<0$ (right).} \label{fasecat1} \end{figure} Now we decrease the parameter $t$ from $t=0$; the orbit $\gamma$ now lies in the region $\Theta_1^-$ and the coordinate $y(t)$ can be expressed as a decreasing graph $y(r)$. We let $t$ decrease until $\gamma$ intersects the line $y=-1$ at a point $(r_1,-1)$, where $r_1>r_0$, see Fig. \ref{fasecat1}, right. This implies that the generating curve $\alpha(t)$ has a point of horizontal tangency away from the axis of rotation. Then the phase space changes to $\Theta_{-1}$, and $\gamma$ starts from the point $(r_1,-1)$ contained in $\Lambda_{-1}^+$.
Decreasing $t$ further, by Lemma \ref{comportamiento} we ensure that the coordinate $y$ of $\gamma$ can be seen as an increasing graph $y(r)$ that lies entirely in $\Lambda_{-1}^+$ and converges to $y=-1/\sqrt{5}$, see Fig. \ref{fasecat2}, obtaining the second component $M^-$. Similar arguments ensure that $M^-$ is a graph for all $t<0$, homeomorphic to $\mathbb{S}^1\times [0,\infty)$. The height function $w(t)$ increases as $t\rightarrow -\infty$. Again, by properness the height of $M^-$ is unbounded. By uniqueness of the solution of the Cauchy problem for graphs, we can deduce that both components can be smoothly glued together along their boundary circumferences, where their unit normals agree, obtaining a complete surface $M$. \begin{figure} \caption{The phase space $\Theta_{-1}$ and the orbit $\gamma$ for decreasing $t$.} \label{fasecat2} \end{figure} These examples are the translating catenoids, also known as wing-like solutions. They form a one parameter family of immersed annuli $\{\mathcal{C}_r\}_r$, where the parameter $r$ denotes the distance of each $\mathcal{C}_r$ to the axis of rotation. From the above discussion, for each $r_0>0$ the vertical cylinder $C(0,r_0)$ of radius $r_0$ and centered at the axis of rotation intersects $\mathcal{C}_{r_0}$ at a unique circumference of radius $r_0$, which will be called the \emph{neck of the translating catenoid}. Moreover, each $\mathcal{C}_{r_0}$ lies entirely inside the non-compact component of $\overline{\big(\mathbb{H}^2\times\mathbb{R}\setminus C(0,r_0)\big)}$. In Fig. \ref{catenoid} we can see, on the left, the profile of one translating catenoid and, on the right, that catenoid rotated around the vertical axis passing through the origin, both plotted in the Poincaré disk model of $\mathbb{H}^2\times\mathbb{R}$. The points located at the circumference where the minimum height is achieved, which have horizontal tangent plane, are those where the phase space changes from $\Theta_1$ to $\Theta_{-1}$.
\end{proof} \begin{figure} \caption{Left: the profile of a translating catenoid, with each component plotted in red and orange, respectively. Right: the translating catenoid in the Poincaré model $\mathbb{D}\times\mathbb{R}$.} \label{catenoid} \end{figure} \section{The asymptotic behavior of the rotational examples}\label{asintotico} Inspired by the ideas developed in \cite{CSS}, this section is devoted to studying the behavior of the bowl soliton $\mathcal{B}$ and the translating catenoids $\mathcal{C}_r$ at infinity. Let $M$ be a rotational translating soliton in $\mathbb{H}^2\times\mathbb{R}$ that is a vertical graph. Such a surface can be parametrized by \begin{equation}\label{parrotgraph} \psi(r,\theta)=\big(\sinh r\cos\theta,\sinh r\sin\theta,\cosh r,f(r)\big),\ r\in I,\ \theta\in (0,2\pi), \end{equation} for a $C^2$ function $f:I\rightarrow\mathbb{R}$. The angle function $\nu$ of a surface parametrized by Equation \eqref{parrotgraph} is constant in $\theta$, and is given by \begin{equation}\label{anglerot} \nu(r)=\frac{1}{\sqrt{1+f'(r)^2}}. \end{equation} As $M$ is a translating soliton, the mean curvature of $M$ satisfies $H_M(\psi(r,\theta))=\nu(r)$. This condition reads \begin{equation}\label{Hgrafo} \frac{2}{\sqrt{1+f'(r)^2}}=2H_M(\psi(r,\theta))=\frac{f''(r)}{(1+f'(r)^2)^{3/2}}+\frac{f'(r)}{\sqrt{1+f'(r)^2}}\coth r, \end{equation} and after the change $f'(r)=\varphi(r)$, \begin{equation}\label{cambio} \varphi'(r)=(1+\varphi(r)^2)(2-\varphi(r)\coth r). \end{equation} An exhaustive analysis of Equation \eqref{cambio} allows us to study the asymptotic behavior at infinity of a rotational soliton.
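Before carrying out this analysis, the reader may find it instructive to integrate Equation \eqref{cambio} numerically: for any positive initial slope the solution $\varphi(r)=f'(r)$ rapidly approaches the value $2$, anticipating the asymptotic behavior established below. The following is an illustrative sketch in plain Python with our own choice of initial data.

```python
import math

def phi_prime(r, phi):
    # Equation (cambio): phi' = (1 + phi^2) (2 - phi coth r)
    return (1 + phi**2) * (2 - phi / math.tanh(r))

def integrate(phi, r=1.0, r_end=10.0, dr=1e-3):
    # Explicit fourth-order Runge-Kutta in the radial variable r
    while r < r_end:
        k1 = phi_prime(r, phi)
        k2 = phi_prime(r + 0.5*dr, phi + 0.5*dr*k1)
        k3 = phi_prime(r + 0.5*dr, phi + 0.5*dr*k2)
        k4 = phi_prime(r + dr, phi + dr*k3)
        phi += dr*(k1 + 2*k2 + 2*k3 + k4)/6
        r += dr
    return phi

# Different initial slopes at r = 1 all converge to the asymptotic slope 2:
for phi0 in (0.1, 1.0, 3.0):
    assert abs(integrate(phi0) - 2.0) < 1e-3
```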
\begin{lem}\label{asin} For any $R>0$ and $\varphi_0>0$, there exists a unique smooth solution $\varphi(r)$ on $[R,\infty)$ of the initial value problem \begin{equation}\label{sistemaasin} \left\{\begin{array}{l} \varphi'(r)=(1+\varphi(r)^2)(2-\varphi(r)\coth r)\\ \varphi(R)=\varphi_0. \end{array}\right. \end{equation} Moreover, $\lim_{r\rightarrow\infty}\varphi(r)=2$. \end{lem} \begin{proof} The proof is an adaptation of Lemma 2.1 in \cite{CSS}, and for the sake of clarity a similar notation will be used. First, notice that fixing $(R,\varphi_0)$ is just fixing initial conditions $(x_0=R,y_0)$, where $y_0=1/\sqrt{1+\varphi^2_0}$, in the phase space $\Theta_1^+$. Thus, existence and uniqueness of the Cauchy problem ensures the existence of an orbit $\gamma(t)=(r(t),y(t))\subset\Theta_1^+$, with the property that $y(t)\searrow 1/\sqrt{5}$ as $t\rightarrow\infty$. This gives us the existence of a translating soliton, which is a rotational graph outside a compact set. In particular, the condition $y(t)\searrow 1/\sqrt{5}$ implies that for $t$ big enough the angle function of the solution decreases to the value $1/\sqrt{5}$. This translating soliton $M$ can be parametrized by Equation \eqref{parrotgraph} for a $C^2$ function $f:[R,\infty)\rightarrow\mathbb{R}$. In particular, as the angle function $\nu(r)$ is given by Equation \eqref{anglerot}, for $r>r_0$ with $r_0>R$ big enough we have $\nu(r)>1/\sqrt{5}$, and thus $\varphi(r)<2$. Moreover, as the angle function is decreasing, Equation \eqref{anglerot} implies that $\varphi(r)$ is an increasing function converging to the value $2$. This implies that for $r>r_0$, $\varphi'(r)>0$, and according to Equation \eqref{sistemaasin}, $2-\varphi(r)\coth r$ is positive and remains so. Therefore, $\varphi(r)\leq 2\tanh r$ for $r>r_0$.
In particular, the solution exists for all $r>R$, and by properness of the orbit $\gamma$ the solution cannot become infinite at a finite point. Now we claim that for every $\varepsilon>0$ and $\Lambda>0$, there exists $r_1>\Lambda$ such that \begin{equation}\label{mayor} \varphi(r_1)\geq 2(1-\varepsilon)\tanh(r_1). \end{equation} If not, then we substitute the inequality in \eqref{cambio} and obtain, for $r>r_0$, $$ \varphi'(r)\geq (1+\varphi(r)^2)2\varepsilon, $$ which after integration yields $$ \varphi(r)\geq \tan(2\varepsilon r+c) $$ for some constant $c$. For $r$ close enough to the value where the argument reaches $\pi/2$, the right-hand side tends to infinity, a contradiction since $\varphi(r)$ is defined for all values of $r>R$. Let $\varepsilon>0$ and consider the function $\xi(r)=2(1-\varepsilon)\tanh r$. It is a straightforward fact that the function $\xi(r)$ satisfies \begin{equation}\label{desig} \xi'(r)\leq (1+\xi^2(r))\big(2-\xi(r)\coth r\big), \end{equation} for $r>r_2$ and $r_2$ sufficiently large. Now, Equation \eqref{mayor} ensures the existence of some $r_3>r_2$ such that $\varphi(r_3)=\xi(r_3)$. Substituting in \eqref{desig} yields $\xi'(r_3)\leq \varphi'(r_3)$. In this situation, $\varphi(r)$ and $\xi(r)$ are two increasing functions with $\xi(r_3)=\varphi(r_3)$ and $\varphi'(r)\geq \xi'(r)$ for $r>r_3$. This implies that $\varphi(r)\geq\xi(r)$ for all $r>r_3$. In particular, for $r$ close enough to infinity the function $\varphi(r)$ satisfies the bound $$ 2(1-\varepsilon)\tanh r\leq\varphi(r)\leq 2\tanh r. $$ Since this is true for every $\varepsilon>0$ and every $r$ large enough, we conclude that $\varphi(r)$ has the asymptotic behavior \begin{equation}\label{asin1} \varphi(r)=2\tanh r+o(\tanh r).
\end{equation} Because $\tanh r$ is a bounded function, the term $o(\tanh r)$ in Equation \eqref{asin1} can be replaced by a negative function $\psi(r)$ tending to zero. Moreover, as we are only interested in the asymptotic behavior, we can suppose without loss of generality that $\psi'(r)>0$ for $r>r_4$, with $r_4$ big enough. Now let us determine the asymptotic expression of $\psi(r)$. First of all, observe that because $\psi(r)$ is a negative function tending to zero, $\psi'(r)$ also tends to zero as $r\rightarrow\infty$; if not, $\psi(r)$ would become positive, a contradiction. If we substitute $\varphi'(r)=2/\cosh^2 r+\psi'(r)$ into the ODE given in Equation \eqref{sistemaasin}, we get that $\psi(r)$ satisfies \begin{equation}\label{deripsi} \psi'(r)=-\frac{\psi(r)}{\tanh r}\big(1+(2\tanh r+\psi(r))^2\big)-\frac{2}{\cosh^2 r}. \end{equation} As $\psi'(r)>0$ for $r>r_4$, we get the first bound \begin{equation}\label{cotainf} -\psi(r)> \frac{2\tanh r}{\cosh^2 r(1+4\tanh^2 r)},\ \forall r>r_4. \end{equation} On the other hand, $-\psi(r)$ decreases and tends to zero, and thus $-\psi'(r)$ is a negative function tending to zero. This implies that for every $\varepsilon_0>0$, there exists $r_5>0$ large enough such that for every $r>r_5$ we have $$ -\varepsilon_0<-\psi'(r)=\frac{\psi(r)}{\tanh r}\big(1+(2\tanh r+\psi(r))^2\big)+\frac{2}{\cosh^2 r}, $$ which yields $$ -\psi(r)\big(1+(2\tanh r+\psi(r))^2\big)<\varepsilon_0\tanh r +\frac{2\tanh r}{\cosh^2 r}.
$$ As $\psi(r)>-\varepsilon_0$ for $r>r_5$ big enough, we also obtain $$ -\psi(r)\big(1+(2\tanh r-\varepsilon_0)^2\big)<-\psi(r)\big(1+(2\tanh r+\psi(r))^2\big), $$ which yields the other bound for $\psi$: \begin{equation}\label{cotasup} -\psi(r)<\frac{2\tanh r}{\cosh^2 r\big(1+(2\tanh r-\varepsilon_0)^2\big)}+\varepsilon_0\frac{\tanh r}{1+(2\tanh r-\varepsilon_0)^2}. \end{equation} As inequality \eqref{cotasup} holds for every $\varepsilon_0>0$, and because $\tanh r/\big(1+(2\tanh r-\varepsilon_0)^2\big)$ is a bounded function, joining \eqref{cotainf} and \eqref{cotasup} we conclude that $-\psi(r)$ is asymptotic to the function $$ \frac{2\tanh r}{\cosh^2 r\,(1+4\tanh^2 r)}. $$ Since we defined $\varphi(r)=f'(r)$, we conclude that a rotational translating soliton that is a graph tending to infinity is asymptotic to the rotational translating soliton generated by the graph $$ f(r)=2\log\cosh r+\frac{1}{4}\log\frac{\cosh^2 r}{5\cosh(2r)-3},\hspace{.5cm} r>\max\{r_i\}_{i=0,\dots,5}. $$ On the one hand, the function $\log\cosh r$ satisfies $$ \lim_{r\rightarrow\infty}\frac{\log\cosh r}{r}=1. $$ On the other hand, the function $\log\big(\cosh^2 r/(5\cosh(2r)-3)\big)$ is bounded; in fact, its limit is $-\log 10$. Thus, the function $f$ has the asymptotic expansion at infinity $$ f(r)=2r+k,\quad k\in\mathbb{R}. $$ As Equation \eqref{Hgrafo} is invariant under the addition of constants to $f(r)$, we conclude that, up to vertical translations, the function $f(r)$ has the asymptotic expression $f(r)=2r$ at infinity.
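The profile $f$ above can be checked in closed form against the asymptotic expression of $-\psi$ just obtained; the following computation is a consistency check added for the reader's convenience, not part of the original argument.

```latex
% With t = \tanh r, so that dt = (1-t^2)\,dr and \cosh^2 r = 1/(1-t^2),
% the asymptotic expression of -\psi integrates to
\begin{align*}
\int \frac{2\tanh r}{\cosh^2 r\,(1+4\tanh^2 r)}\,dr
  = \int \frac{2t}{1+4t^2}\,dt
  = \frac{1}{4}\log\big(1+4\tanh^2 r\big) + C.
\end{align*}
% On the other hand, 5\cosh(2r)-3 = 10\cosh^2 r - 8 and
% 1+4\tanh^2 r = (5\cosh^2 r-4)/\cosh^2 r give
\begin{align*}
\frac{1}{4}\log\frac{\cosh^2 r}{5\cosh(2r)-3}
  = -\frac{1}{4}\log\big(2\,(1+4\tanh^2 r)\big),
\end{align*}
% hence f'(r) = 2\tanh r - \frac{2\tanh r}{\cosh^2 r\,(1+4\tanh^2 r)},
% matching \varphi(r) = 2\tanh r + \psi(r) term by term.
```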
\end{proof} We want to finish this section by remarking on some similarities and differences between the asymptotic behavior of translating solitons and of minimal surfaces in $\mathbb{H}^2\times\mathbb{R}$. Consider the family of translating catenoids $\{\mathcal{C}_r\}_r$ in $\mathbb{H}^2\times\mathbb{R}$. Then it can be proved that, as $r\rightarrow 0$, $\{\mathcal{C}_r\}_r$ smoothly converges to a double covering of $\mathcal{B}-\{\textbf{v}\}$, where $\textbf{v}$ is the vertex of $\mathcal{B}$. The same happens in minimal surface theory in $\mathbb{H}^2\times\mathbb{R}$: if we consider a vertical axis of rotation $L$, then the minimal surfaces of revolution around $L$ are the totally geodesic copies of $\mathbb{H}^2$ orthogonal to $L$, which are in particular minimal, together with a one-parameter family of rotationally symmetric, properly embedded annuli, the minimal catenoids $\{C_\lambda\}_\lambda$. Here, the parameter $\lambda$ also indicates the distance to $L$, and corresponds to the smallest circumference contained in each catenoid, which is also known as the neck. These minimal catenoids are symmetric bi-graphs over a minimal plane $\Pi$ orthogonal to $L$, and when their neck-sizes converge to zero they converge to a double covering of $\Pi-(\Pi\cap L)$. In this situation, it is natural to relate minimal planes with the bowl soliton, and minimal catenoids with translating catenoids. Both the translating catenoids and the minimal catenoids stay at bounded distance from the bowl soliton and the minimal plane, respectively. In particular, this bound on the distance from the translating catenoids to the bowl prevents us from applying the same ideas as in \cite{HoMe} in order to obtain \emph{half-space theorems} for properly immersed translating solitons lying on one side of the bowl.
\section{Uniqueness and non-existence theorems for translating solitons}\label{uniqueness} Most of the results obtained in this section will be proved with the same method: we will use the translating solitons studied in Section \ref{rotacionales} as \emph{canonical surfaces} to compare with, and Theorem \ref{tangency} to arrive at contradictions if some interior tangency point exists. \subsection{The uniqueness of the bowl soliton} The aim of this first theorem is to give a result analogous to Theorem A in Section 3 of \cite{MSHS}: a complete, embedded translating soliton in the Euclidean space $\mathbb{R}^n$ with a single end smoothly asymptotic to the translating bowl must be a vertical translation of the bowl. Their proof uses the Alexandrov reflection technique with respect to vertical planes coming from infinity, and the asymptotic behavior at infinity of the bowl to ensure that the Alexandrov reflection technique can start from points close to infinity. In this section we consider the Poincaré disk model of $\mathbb{H}^2$, as introduced in the beginning of Section \ref{examples}. The hyperbolic distance from a point $(x_1,x_2,0)$ to the origin $(0,0,0)$ will be denoted by $r(x_1,x_2)$. We next give the following definition: \begin{defi} Let $M$ be a properly immersed translating soliton. We say that $M$ is $C^1$-asymptotic to the bowl soliton $\mathcal{B}$ if for all $\varepsilon>0$ there exists $R>0$ big enough such that $M\cap \big(\mathbb{H}^2\times\mathbb{R}-B^3(0,R)\big)$ can be expressed as the graph of a function $g_R:\big(\mathbb{H}^2-B^2(0,R)\big)\rightarrow\mathbb{R}$ such that \begin{equation}\label{smoothlyasympt} |g_R(x_1,x_2)-2r(x_1,x_2)|<\varepsilon,\hspace{.3cm} |D_vg_R(x_1,x_2)-2D_vr(x_1,x_2)|<\varepsilon,\ \forall\ r(x_1,x_2)>R,\ \forall\ v\in \mathbb{H}^2,\ |v|=1. \end{equation} \end{defi} Notice that in Lemma \ref{asin1} we already proved that the radial graph defined by the function $f(r)=2r$ converges asymptotically to the bowl soliton.
Thus, Equation \eqref{smoothlyasympt} implies that $M$ converges to $\mathcal{B}$ not only in distance, but also in its first derivative. \begin{teo}\label{unicidadbowl} Let $M$ be a properly immersed translating soliton with a single end that is $C^1$-asymptotic to the bowl soliton $\mathcal{B}$. Then, $M$ is a vertical translation of $\mathcal{B}$. \end{teo} \begin{proof} Let $M$ be a properly immersed translating soliton with a single end that is $C^1$-asymptotic to the bowl soliton $\mathcal{B}$. Given a unit vector $v\in\mathbb{H}^2$ and the horizontal direction $(v,0)$ in $\mathbb{H}^2\times\mathbb{R}$, the vertical plane orthogonal to $v$ passing through the origin is the totally geodesic surface of $\mathbb{H}^2\times\mathbb{R}$ given by the product $\gamma\times\mathbb{R}$, where $\gamma=\gamma(s)\subset\mathbb{H}^2\times\{0\}$ is the arc-length parametrized horizontal geodesic of $\mathbb{H}^2\times\mathbb{R}$ such that $\gamma(0)=(0,0,0)$ and $\gamma'(0)\bot v$. This surface will be denoted by $\Pi_v$. Let $\sigma(s)$ be the horizontal geodesic such that $\sigma(0)=(0,0,0)$ and $\sigma'(0)=v$; recall that $\sigma$ and $\gamma$ differ by a rotation, and thus their arc-length parameters coincide. Consider the 1-parameter family of hyperbolic translations $\{T_s\}$ along the geodesic $\sigma$ such that $T_s(\sigma(0))=\sigma(s)$ for all $s$. We define $\{\Pi_v(s):=T_s(\Pi_v)\}_s$ as the family of vertical planes in $\mathbb{H}^2\times\mathbb{R}$ at distance $s$ to $\Pi_v$ and orthogonal to $\sigma'(s)$ at $\sigma(s)$. We denote by $\Pi_v(s)^+$ (resp. $\Pi_v(s)^-$) the closed half-space $\bigcup_{\lambda\geq s}\Pi_v(\lambda)$ (resp. $\bigcup_{\lambda\leq s}\Pi_v(\lambda)$), and by $M_+(s)$ (resp. $M_-(s)$) the intersection $M\cap\Pi_v(s)^+$ (resp. $M\cap\Pi_v(s)^-$). The reflection of $M_+(s)$ with respect to the plane $\Pi_v(s)$ will be denoted by $M_+^*(s)$.
If $p\in M$ and $p^*\in M^*$ denotes the reflection of $p$ in the reflected surface $M^*$ of $M$, then $\nu^*(p^*)=\nu(p)$, where $\nu^*$ is the angle function of $M^*$. In particular, reflections with respect to vertical planes are isometries of $\mathbb{H}^2\times\mathbb{R}$ that send translating solitons into translating solitons. Denote by $\mathfrak{p}$ the projection onto the plane $\Pi_v$ defined as follows: let $x\in\mathbb{H}^2\times\mathbb{R}$ and consider the curve $\alpha_x(s)=T_s(x)$ given as the flow of $T_s$ passing through $x$. Then, we define $\mathfrak{p}(x)$ as the intersection of $\alpha_x(s)$ with the plane $\Pi_v$. This intersection is unique, and thus $\mathfrak{p}(x)$ is well defined. Moreover, after a translation in the arc-length parameter of $\alpha_x(s)$, we will suppose henceforth that $\alpha_x(0)=\mathfrak{p}(x)$. For a point $x\in\mathbb{H}^2\times\mathbb{R}$, let us denote by $I(x)$ the instant of time $s_0$ such that $\alpha_x(s_0)=x$. We say that $A$ is on the \emph{right hand side} of $B$ if for every $x\in\Pi_v$ such that $$ \mathfrak{p}^{-1}(x)\cap A\neq\varnothing,\ \ \mathrm{and}\ \ \mathfrak{p}^{-1}(x)\cap B\neq\varnothing, $$ we have $$ \inf\big\{I\big(\mathfrak{p}^{-1}(x)\cap A\big) \big\}\geq\sup\big\{I\big(\mathfrak{p}^{-1}(x)\cap B\big) \big\}. $$ The condition \emph{$A$ is on the right hand side of $B$} is denoted by $A\geq B$, see Figure \ref{proj}. \begin{figure} \caption{Left: the definition of the projection $\mathfrak{p}$. Right: the relation $A\geq B$.} \label{proj} \end{figure} Now we define the set $$ \mathcal{A}=\{s\geq 0;\ M_+(s)\ \mathrm{is\ a\ graph\ over}\ \Pi_v,\ \mathrm{and}\ M_+^*(s)\geq M_-(s)\}. $$ First, we show that $\mathcal{A}\neq\varnothing$. As $M$ is $C^1$-asymptotic to $\mathcal{B}$, for an arbitrarily small $\varepsilon>0$ we can choose $s_0>R$ big enough so that the intersection $M\cap\{z> s_0\}$ has the topology of an annulus and has distance at most $\varepsilon$ to $\mathcal{B}$.
The reflection $M_+^*(s_0)$ is also asymptotic to $\mathcal{B}_+^*(s_0)$ with distance less than $\varepsilon$, and thus does not intersect the surface $M_-(s_0)$ at any interior or boundary point. Moreover, as $\mathcal{B}_+(s_0)$ is a graph over the plane $\Pi_v$ and $M$ is $C^1$-asymptotic to $\mathcal{B}$, by Equation \eqref{smoothlyasympt} there exists $r_0>R$ such that $D_vg_R (x_1,x_2)>0$ for any $(x_1,x_2)\in\mathbb{H}^2$ such that $r(x_1,x_2)>r_0$. Consequently, increasing $s_0>r_0$ if necessary, if $\eta$ is a unit normal vector field on $M$ then $\langle\eta,v\rangle>0$ at any point of $M_+(s_0)$. This implies that $M_+(s_0)$ is a graph over $\Pi_v$. As $\mathcal{B}_+^*(s_0)$ lies inside the interior domain bounded by $\mathcal{B}$, increasing $s_0$ again if necessary, we can suppose that the distance between $\mathcal{B}_+^*(s_0)$ and $\mathcal{B}_-(s_0)\cap\{z> s_0\}$ is greater than $2\varepsilon$. This implies that $M_+^*(s_0)$ is on the right hand side of $M_-(s_0)$, and thus $s_0\in\mathcal{A}$, proving that $\mathcal{A}$ is a non-empty set. Moreover, we claim that if $s_0\in\mathcal{A}$, then $[s_0,\infty)\subset\mathcal{A}$. If the assertion fails, then there exists $s_*>s_0$ such that $s_*\notin\mathcal{A}$, and this holds necessarily because either $M_+^*(s_*)$ and $M_-(s_*)$ have non-empty intersection, or $M_+(s_*)$ is not a graph over the plane $\Pi_v$. Thus, there exists $s_1\in(s_0,s_*]$ such that $M_+^*(s_1)$ intersects $M_-(s_1)$ for the first time at an interior point, or $M_+^*(s_1)$ and $M_-(s_1)$ have unit normals agreeing at the boundary. The tangency principle in Theorem \ref{tangency}, in its interior or boundary version, ensures that $M_+^*(s_1)$ and $M_-(s_1)$ agree, and so the plane $\Pi_v(s_1)$ would be a plane of reflection symmetry of $M$.
But $\mathcal{B}_+^*(s_1)$ and $\mathcal{B}_-(s_1)$ stay at a positive distance from each other, since $\Pi_v(s_1)$ is not a plane of reflection symmetry of $\mathcal{B}$. This contradicts the fact that $M$ is $C^1$-asymptotic to $\mathcal{B}$, since the reflected component $M_+^*(s_1)$, which agrees with $M_-(s_1)$, stays at positive distance to $\mathcal{B}_-(s_1)$. This concludes the proof that $[s_0,\infty)\subset\mathcal{A}$. The next step is proving that $\mathcal{A}$ is a closed subset of the interval $[0,\infty)$. Indeed, let $\{s_n\}$ be a sequence of points in $\mathcal{A}$ converging to some $s_0$. According to the previous discussion, we have $(s_0,\infty)\subset\mathcal{A}$. First, suppose that $M_+(s_0)$ is not a graph over the plane $\Pi_v$. Then there exist points $p\neq q\in M_+(s_0)$ such that $\mathfrak{p}(p)=\mathfrak{p}(q)$ and $I(q)>I(p)$. Notice that $I(p)=s_0$, since $s\in\mathcal{A}$ for all $s>s_0$. Let $I(q)=s_1>s_0$. If we consider the plane $\Pi_v\big({\big(3s_0+s_1\big)\big/4}\big)$, then $M_+^*\big((3s_0+s_1)/4\big)$ cannot be on the right hand side of $M_-\big((3s_0+s_1)/4\big)$ since $I(q^*)<I(p)$, contradicting the fact that $s_0<\big(3s_0+s_1\big)/4\in\mathcal{A}$. Hence $M_+(s_0)$ is a graph over $\Pi_v$, and the continuity of the graphical condition yields $M_+^*(s_0)\geq M_-(s_0)$; thus $s_0\in\mathcal{A}$. Now we will prove that the minimum of the set $\mathcal{A}$ is $0$. To prove this, we suppose that $\min\mathcal{A}=s_*>0$, and will arrive at a contradiction. Indeed, if $s_*>0$ then one of the following items must hold: \begin{itemize} \item There exists a point $p\in M\cap\Pi_v(s_*)$ such that $M_+(s_*)$ is not a graph at $p$ over the plane $\Pi_v$. This implies that $\langle\eta_p,v\rangle=0$, where $\eta$ is a unit normal for the surface $M$. \item There exists a point $p\in M$ such that $M_+^*(s_*)$ and $M_-(s_*)$ have non-empty intersection at some $p$, and $M_+^*(s_*)$ lies on the right hand side of $M_-(s_*)$.
\end{itemize} Notice that in the second item the intersection between $M_+^*(s_*)$ and $M_-(s_*)$ must be tangential; otherwise, for $\varepsilon>0$ small enough, $M_+^*(s_*+\varepsilon)$ and $M_-(s_*+\varepsilon)$ would still have a transversal intersection, contradicting that $s_*=\min\mathcal{A}$. In any case, Theorem \ref{tangency}, in its interior or boundary version, ensures that the plane $\Pi_v({s_*})$ is a plane of reflection symmetry of $M$. As the $z$-axis is the axis of rotation of the bowl soliton $\mathcal{B}$, and every plane of reflection symmetry of $\mathcal{B}$ contains the $z$-axis, the symmetrized $\mathcal{B}_+^*(s_*)$ is on the right hand side of $\mathcal{B}_-(0)$ and lies at a positive distance $d>0$ from it. By hypothesis, $M_+^*(s_*)=M_-(s_*)$ has distance to $\mathcal{B}_+^*(s_*)$ tending to zero. Thus, $M_-(s_*)$ has distance to $\mathcal{B}_-(0)$ bounded from below, contradicting the fact that $M$ is $C^1$-asymptotic to $\mathcal{B}$. This implies that $\min\mathcal{A}=0$, and thus we have $M_+^*(0)\geq M_-(0)$. If we repeat this argument by defining $$ \mathcal{A_-}=\{s\leq 0;\ M_-(s)\ \mathrm{is\ a\ graph\ over}\ \Pi_v\ \mathrm{and}\ M_-^*(s)\leq M_+(s)\}, $$ then we conclude that $M_-^*(0)\leq M_+(0)$. By reflecting again we obtain $M_-(0)\geq M_+^*(0)$, and so $M_-(0)=M_+^*(0)$; that is, the plane $\Pi_v$ is a plane of reflection symmetry of the surface $M$. As $v$ was chosen as an arbitrary horizontal vector, we conclude that $M$ is rotationally symmetric around the $z$-axis. By uniqueness, $M$ is a vertical translation of the bowl soliton $\mathcal{B}$, completing the proof. \end{proof} The following proposition concerning the height function of a translating soliton will be useful: \begin{pro}\label{hmaximo} Let $M$ be a compact translating soliton with boundary. Then, the height function of $M$ cannot attain a local maximum at any interior point of $M$. \end{pro} \begin{proof} The proof will be done by contradiction.
Suppose that the height function has a local maximum at some $p\in M$. This implies that there exists a neighbourhood $U_p$ of $p$ in $M$ such that $h(U_p)\leq h(p)=p_3$, where $h$ is the height function of $M$. In this situation, $U_p$ lies below the horizontal plane $\mathbb{H}^2\times\{p_3\}$. Let $\eta$ be a unit normal vector field to $M$. By hypothesis, the normal at $p$ is vertical, so the angle function satisfies $\nu(p)=\pm 1$. If $\nu(p)=1$, then $M$ has positive mean curvature equal to $1$ at $p$. But $M$ lies locally below $\mathbb{H}^2\times\{p_3\}$, which is a minimal surface and can be oriented upwards without changing the mean curvature. This contradicts the mean curvature comparison principle. If $\nu(p)=-1$, we orient $\mathbb{H}^2\times\{p_3\}$ downwards and arrive at the same contradiction. \end{proof} Notice that the height function of a translating soliton can achieve a local (or global) minimum; see for example the bowl soliton or the translating catenoids. The last theorem in this section also has a counterpart for constant mean curvature surfaces in $\mathbb{R}^3$, where it is an important open problem. It is known that if a compact surface $M\subset\mathbb{R}^3$ with constant mean curvature $H$ and circular boundary $\partial M$ lies on one side of the plane $P$ containing the boundary, then $M$ is invariant under rotations around the axis orthogonal to $P$ through the center of $\partial M$, and thus is a part of a sphere of radius $1/H$. However, if the hypothesis that the surface lies on one side of the plane containing its boundary fails, it is not known whether the result still holds. According to Proposition \ref{hmaximo} this situation cannot happen for translating solitons, and thus compact pieces of the bowl soliton are unique in the following sense: \begin{teo} Let $\Gamma\subset\mathbb{H}^2\times\{t_0\}$ be a closed, embedded curve invariant under rotations around a vertical axis $l$ of $\mathbb{H}^2\times\mathbb{R}$.
Let $M$ be a compact, embedded translating soliton with boundary $\partial M=\Gamma$. Then, $M$ is rotationally symmetric and, up to translations, is a piece of the bowl soliton. \end{teo} \begin{proof} The proof will be done by using the Alexandrov reflection technique with respect to vertical planes. Without loss of generality, after a translation that sends the vertical axis $l$ to the vertical axis $(0,0,t),\ t\in\mathbb{R}$, we may suppose that $\Gamma$ is a circumference centred at the origin with a certain radius. From Proposition \ref{hmaximo}, the translating soliton $M$ lies below the horizontal plane $\mathbb{H}^2\times\{t_0\}$. This observation is the key that allows us to apply the Alexandrov reflection technique, since in the constant mean curvature framework the first contact point may occur between an interior point and a boundary point. The same notation as in the proof of Theorem \ref{unicidadbowl} will be used here. Let $v$ be an arbitrary unit horizontal vector, and $\Pi_v$ the vertical plane orthogonal to $v$ passing through the origin. For $s$ big enough, $\Pi_v(s)\cap M=\varnothing$. We start decreasing $s$ until $\Pi_v(s_0)$ intersects $M$ for the first time at a point $p\in M\cap\Pi_v(s_0)$, for some $s_0>0$. Decreasing the parameter $s$ further, for $s$ close enough to $s_0$ the reflection $M_+^*(s)$ lies inside the interior domain enclosed by $M$. The Alexandrov reflection technique stops at some instant $s_1\geq 0$ such that either $M_+^*(s_1)$ is tangent to $M$ at an interior point with the same unit normal, or the intersection between $M_+^*(s_1)$ and $M$ occurs at a boundary point, where their inner conormals agree. In any case, Theorem \ref{tangency} ensures that the plane $\Pi_v(s_1)$ is a plane of reflection symmetry of the surface $M$.
As the boundary is a circle, the plane $\Pi_v(s_1)$ has to be a plane of reflection symmetry of $\partial M$ as well, and thus $\Pi_v(s_1)$ passes through the center of $\partial M$, which yields $s_1=0$ and $\Pi_v(s_1)\equiv\Pi_v$. Repeating this procedure with all the horizontal directions $v$, we obtain that the surface $M$ is rotationally symmetric around the vertical axis passing through the origin, and intersects this axis orthogonally. By uniqueness, $M$ is a compact piece of the translating bowl, as desired. \end{proof} \subsection{Non-existence theorems for translating solitons} In this last section we prove non-existence theorems for translating solitons under certain geometric obstructions. The first non-existence result is a straightforward consequence of the divergence theorem, see \cite{Lo}: \begin{pro} There do not exist closed (compact without boundary) translating solitons. \end{pro} \begin{proof} Suppose, to the contrary, that $M$ is a closed translating soliton. It is known that the height function $h:M\rightarrow\mathbb{R}$ on an immersed surface $M$ in $\mathbb{H}^2\times\mathbb{R}$ satisfies the PDE $\Delta_M h=2 H_M \nu$, where $\Delta_M$ is the Laplace-Beltrami operator on $M$. As $M$ is a translating soliton, the mean curvature is equal to $H_M=\nu$. Integrating and applying the divergence theorem yields $$ 0=\int_M \Delta_M h=2\int_M \nu^2, $$ and thus $\nu(p)=0$ for every $p\in M$. This implies that $M$ is contained in a vertical plane, contradicting the fact that $M$ is closed. \end{proof} \begin{obs} The previous Proposition can also be proved by considering the closed soliton and a vertical plane tangent to the soliton (such a plane exists by compactness) as minimal surfaces in the conformal space $\big(\mathbb{H}^2\times\mathbb{R},e^h\langle\cdot,\cdot\rangle\big)$, and then arriving at a contradiction by applying Theorem \ref{tangency}.
However, applying this theorem requires Hopf's maximum principle, which is a more powerful tool than the divergence theorem. \end{obs} Now we prove a height estimate for compact translating solitons with boundary contained in a horizontal plane. Before stating the result, we introduce some notation that will be useful. Let $\sigma$ be a positive constant. We will denote by $\mathcal{B}(\sigma)$ the compact piece of the bowl soliton that has the circumference $C(0,\sigma)$, centred at the origin and with radius $\sigma$, as boundary. It suffices to intersect $\mathcal{B}$ with a solid vertical cylinder with axis passing through the origin and radius $\sigma$, and then translate that compact piece so that the boundary lies inside the horizontal plane $\mathbb{H}^2\times\{0\}$. The distance from the vertex of $\mathcal{B}(\sigma)$ to the horizontal plane $\mathbb{H}^2\times\{0\}$ will be denoted by $\tau(\sigma)$. Denote by $\mathcal{Z}_s$ the flow of the vertical Killing vector field $\partial_z$, which consists of vertical translations, and let $\mathcal{B}(\sigma,s)$ denote the image of $\mathcal{B}(\sigma)$ under $\mathcal{Z}_s$. We also state a proposition of independent interest; for the sake of clarity, we give its proof separately from the main theorem. \begin{pro}\label{inslab} Let $M$ be a compact translating soliton with boundary $\Gamma=\partial M$. If $\Gamma$ lies between two vertical planes, then the whole soliton $M$ lies between those planes. \end{pro} \begin{proof} Suppose that $M$ has points outside one of the parallel planes, call it $P$. Denote by $P^+$ the component such that $\Gamma\subset P^+$, and by $P^-$ the other component. As $\Gamma\subset P^+$, the set $M^-=M\cap P^-$ is a compact surface with boundary contained in $P$. Let $p\in M^-$ be a point of $M^-$ at maximal distance from $P$. On the one hand, it is clear that $\nu(p)=0$ and thus $H_M(p)=0$.
On the other hand, consider vertical planes $P_\lambda$ parallel to $P$, contained in $P^-$, and such that $P_\lambda\cap M^-=\varnothing$. Then we move the parameter $\lambda$ in such a way that the planes $P_\lambda$ move towards $P$, until there exists a first instant $\lambda_0$ such that $P_{\lambda_0}$ intersects $M^-$, precisely at $p$. By Theorem \ref{tangency}, since both $P_{\lambda_0}$ and $M$ are minimal surfaces in the conformal space $\big(\mathbb{H}^2\times\mathbb{R},e^h\langle\cdot,\cdot\rangle\big)$, they should agree, contradicting the fact that $M$ is compact. \end{proof} We are now in a position to formulate the height estimate for compact translating solitons. \begin{teo} Let $\Gamma$ be a closed curve of diameter $\sigma$ contained in a horizontal plane $\mathbb{H}^2\times\{t_0\}$, and let $M$ be a compact, connected translating soliton with boundary $\partial M=\Gamma$. Then, for all $p\in M$, the distance from $p$ to $\mathbb{H}^2\times\{t_0\}$ is at most $\tau(2\sigma)$, where $\tau(\sigma)$ is the constant defined above. \end{teo} \begin{proof} Let $\sigma$ be the diameter of $\Gamma$. We can apply a translation $T$ to $\Gamma$ such that $T\big(\Gamma\big)$ lies inside the disk $D\big(0,2\sigma\big)$. To save notation, we will still denote $T\big(\Gamma\big)$ by $\Gamma$. Consider now the compact piece $\mathcal{B}(2\sigma)$. By Proposition \ref{hmaximo} we know that $M$ lies below the horizontal plane $\mathbb{H}^2\times\{0\}$. As $\Gamma\subset D\big(0,2\sigma\big)$, initially the surface $M$ is inside the mean convex region enclosed by $\mathcal{B}(2\sigma)$. We assert that the entire surface $M$ lies strictly in this region. Indeed, suppose that $\mathcal{B}(2\sigma)$ and $M$ have non-empty intersection.
This intersection has to be transversal, since otherwise we would have by Theorem \ref{tangency} that $M=\mathcal{B}(2\sigma)$ and thus $\Gamma=C\big(0,2\sigma\big)$, contradicting the fact that $\Gamma$ has diameter $\sigma$. Translate the graph $\mathcal{B}(2\sigma)$ downwards until $\mathcal{B}(2\sigma,t_0)\cap M=\varnothing$, for $t_0$ small enough. Then move $\mathcal{B}(2\sigma,t_0)$ upwards by increasing $t_0$ until we reach a first contact point, which has to be an interior point, at some horizontal plane $\mathbb{H}^2\times\{t_1\}$. Then, Theorem \ref{tangency} ensures that $M$ and $\mathcal{B}(2\sigma,t_1)$ must agree, and thus $\Gamma=C\big((0,0,t_1),2\sigma\big)$. But at the instant when both surfaces coincide, their boundaries lie in different planes, which is absurd. Thus, $M$ lies inside the mean convex side of $\mathcal{B}(2\sigma,0)$, and we obtain the desired height estimate. \end{proof} The last two theorems give geometric obstructions for the existence of certain translating solitons. \begin{teo} There do not exist properly immersed translating solitons in $\mathbb{H}^2\times\mathbb{R}$ contained inside a compact vertical cylinder. \end{teo} \begin{proof} Suppose that $M$ is a properly immersed translating soliton lying inside a vertical cylinder of radius $r_0$, and denote the cylinder by $C(r_0)$. After a translation we can suppose that the axis of the cylinder is the straight line $l=(0,0,t),\ t\in\mathbb{R}$. Consider the family of translating catenoids $\{\mathcal{C}_r\}_r$ rotated around $l$. For $r>r_0$, each $\mathcal{C}_r$ lies inside the non-compact component of $\overline{\mathbb{H}^2\times\mathbb{R}-C(r_0)}$. Now we start decreasing the parameter $r$ until we reach an interior tangency point between $M$ and $\mathcal{C}_{r_1}$, for some $r_1\leq r_0$. This contradicts Theorem \ref{tangency}, since $M$ and $\mathcal{C}_{r_1}$ would agree, but no $\mathcal{C}_r$ lies inside a vertical cylinder.
\end{proof} \noindent The author was partially supported by MICINN-FEDER, Grant No. MTM2016-80313-P and Junta de Andalucía Grant No. FQM325. \end{document}
\begin{document} \title{Reduced Basis {\em A Posteriori} Error Bounds for the Instationary Stokes Equations} \begin{abstract} We present reduced basis approximations and rigorous {\em a posteriori} error bounds for the instationary Stokes equations. We shall discuss both a method based on the standard formulation and a method based on a penalty approach, which combine techniques developed in \cite{Gerner:2011fk,Gerner:2012fk} and \cite{veroy10:_stokes_penalty} with current reduced basis techniques for parabolic problems. The analysis then shows how time integration affects the development of reduced basis {\em a posteriori} error bounds as well as the construction of computationally efficient reduced basis approximation spaces. To demonstrate their performance in practice, the methods are applied to a Stokes flow in a two-dimensional microchannel with a parametrized rectangular obstacle; evolution in time is induced by a time-dependent velocity profile on the inflow boundary. Numerical results illustrate (i)~the rapid convergence of reduced basis approximations, (ii)~the performance of {\em a posteriori} error bounds with respect to sharpness, and (iii)~computational efficiency. \end{abstract} \begin{keywords} Instationary Stokes equations; incompressible fluid flow; saddle point problem; model order reduction; reduced basis method; {\em a posteriori} error bounds \end{keywords} \begin{AMS} 65N12, 65N15, 65N30, 76D07 \end{AMS} \pagestyle{myheadings} \thispagestyle{plain} \markboth{\sc A.-L.~Gerner and K.~Veroy}{\sc Reduced Basis Methods for the Instationary Stokes Equations} \section*{Introduction} Designed for the real-time and many-query context of parameter estimation, optimization, and control, the reduced basis (RB) method permits the efficient yet reliable approximation of input-output relationships induced by parametrized partial differential equations.
The essential ingredients are: (i) dimension reduction, through Galerkin projection onto a low-dimensional RB space; (ii) certainty, through rigorous {\em a posteriori} bounds for the errors in the RB approximations; (iii) computational efficiency, through an Offline-Online computational strategy; and (iv) effectiveness, through a greedy sampling approach. In this paper, we demonstrate how RB techniques presented in \cite{Gerner:2011fk,Gerner:2012fk,veroy10:_stokes_penalty} for parametrized saddle point problems may be extended to the time-dependent setting. To this end, we consider the instationary Stokes equations. We shall discuss both a method based on the standard formulation and a method based on a penalty approach (see also \cite{Gerner:2012uq} for initial results), which combine techniques developed in \cite{Gerner:2011fk,Gerner:2012fk} and \cite{veroy10:_stokes_penalty} with current RB techniques for parabolic problems (see, e.g., \cite{Grepl:2005kx,grepl05,haasdonk08:_reduc}). The analysis then shows how time integration affects the development of RB {\em a posteriori} error bounds as well as the construction of computationally efficient RB approximation spaces. Starting from the standard mixed formulation of the instationary Stokes equations, we develop rigorous {\em a posteriori} error bounds for the RB velocity approximations. As in the stationary case presented in \cite{Gerner:2011fk,Gerner:2012fk}, they involve the (Online-) estimation of coercivity, continuity, and inf-sup stability constants associated with the diffusion term and incompressibility constraint; in addition, they now also depend on continuity constants associated with the mass term. Employing a penalty formulation, we obtain rigorous upper bounds for the errors in both the velocity and pressure approximations.
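In a finite-dimensional truth discretization, each of the continuity constants mentioned above reduces to the spectral norm of a preconditioned form matrix. The sketch below illustrates this (matrix names are hypothetical; the inner-product matrix is assumed symmetric positive definite):

```python
import numpy as np

def continuity_constant(M, X):
    """Discrete continuity constant
        gamma = sup_{u,v} u^T M v / (||u||_X ||v||_X),   ||u||_X = sqrt(u^T X u).
    Writing X = L L^T (Cholesky) and u = L^{-T} w reduces this to the
    spectral norm gamma = ||L^{-1} M L^{-T}||_2."""
    L = np.linalg.cholesky(X)
    Linv = np.linalg.inv(L)  # acceptable for a small sketch; use solves in practice
    return np.linalg.norm(Linv @ M @ Linv.T, 2)

# Example: Euclidean inner product, diagonal form matrix.
gamma = continuity_constant(np.diag([1.0, 2.0, 3.0]), np.eye(3))  # -> 3.0
```

In an actual RB implementation these constants are not computed exactly Online but estimated, e.g., by the successive constraint method; the sketch only makes the definition concrete.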
As in the stationary case presented in \cite{veroy10:_stokes_penalty}, they are computationally very efficient since they do not involve the estimation of inf-sup stability constants but depend only on coercivity constants associated with the diffusion and penalty terms; however, they again also depend on the penalty parameter such that associated effectivities increase as we approach the nonpenalized problem. To construct efficient RB approximation spaces, we consider a POD-greedy procedure (see \cite{Grepl:2005kx,Haasdonk:2011ffk,haasdonk08:_reduc}) that is coupled with adaptive stabilization techniques developed in \cite{Gerner:2011fk}. To demonstrate their performance in practice, the methods are then applied to a Stokes flow in a parametrized domain where evolution in time is induced by a time-dependent velocity profile on the inflow boundary. This paper is organized as follows: In \S \ref{s:general_problem_statement}, we introduce the general problem formulation and its ``truth'' approximation upon which our RB approximation will subsequently be built. We start directly from a time-discrete framework, which allows us to recover the settings discussed in \cite{Gerner:2011fk,Gerner:2012fk} and \cite{veroy10:_stokes_penalty}; we thus have a saddle point problem associated with each time step. The time discretization scheme is given by a backward Euler method. Section \ref{s:RB_method} then describes our RB method. In \S \ref{ss:Galerkin_projection}, we define the RB approximation as the Galerkin projection onto a low-dimensional RB approximation space. We develop rigorous {\em a posteriori} error bounds in \S \ref{ss:error_estimation}. Both RB approximations and error bounds can be computed Online-efficiently as summarized in \S \ref{ss:offline_online}. This enables us to employ adaptive sampling processes for constructing computationally efficient RB approximation spaces, which shall be outlined in \S \ref{ss:construction_RB_spaces}.
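The POD-greedy sampling just mentioned can be sketched in a few lines. The sketch below is illustrative only: it uses a plain Euclidean inner product and the exact projection error of the snapshots as the error indicator, standing in for the Online-efficient {\em a posteriori} bound used in the actual method.

```python
import numpy as np

def pod_greedy(trajectories, n_iter=2, modes_per_iter=1):
    """POD-greedy sketch: 'trajectories' maps each parameter mu to its
    snapshot matrix (dofs x timesteps).  Each iteration picks the worst
    parameter according to an error indicator (here: the true projection
    error onto the current basis), POD-compresses the residual trajectory
    in time, and enriches the reduced basis with the leading modes."""
    n_dofs = next(iter(trajectories.values())).shape[0]
    V = np.zeros((n_dofs, 0))  # reduced basis, orthonormal columns
    for _ in range(n_iter):
        def indicator(S):
            return np.linalg.norm(S - V @ (V.T @ S))
        mu_star = max(trajectories, key=lambda mu: indicator(trajectories[mu]))
        # POD of the residual trajectory of the selected parameter
        E = trajectories[mu_star] - V @ (V.T @ trajectories[mu_star])
        U, _, _ = np.linalg.svd(E, full_matrices=False)
        V = np.linalg.qr(np.hstack([V, U[:, :modes_per_iter]]))[0]
    return V
```

With an actual truth solver, `indicator` would be replaced by the rigorous error bound evaluated over a training set of parameters, so that no truth trajectories beyond the selected ones need to be computed.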
In \S \ref{s:model_problem}, we introduce our instationary Stokes model problem. Numerical results in \S \ref{s:numerical_results} then illustrate (i)~the rapid convergence of RB approximations, (ii)~the performance of {\em a posteriori} error bounds with respect to sharpness, and (iii)~computational efficiency. Finally, in \S \ref{s:conclusion}, we give some concluding remarks. \section{General Problem Statement} \label{s:general_problem_statement} \subsection{Formulation} \label{ss:pspp} Let ${X_{\rm e}}$ and ${Y_{\rm e}}$ be two Hilbert spaces with inner products $(\cdot,\cdot)_{{X_{\rm e}}}$, $(\cdot,\cdot)_{{Y_{\rm e}}}$ and associated norms $\|\cdot\|_{{X_{\rm e}}} = \sqrt{(\cdot,\cdot)_{{X_{\rm e}}}}$, $\|\cdot\|_{{Y_{\rm e}}}=\sqrt{(\cdot,\cdot)_{{Y_{\rm e}}}}$, respectively.\footnote{Here and in the following, the subscript e denotes ``exact''.} We define the product space ${Z_{\rm e}} \equiv {X_{\rm e}} \times {Y_{\rm e}}$, with inner product $(\cdot,\cdot)_{{Z_{\rm e}}} \equiv (\cdot,\cdot)_{{X_{\rm e}}} + (\cdot,\cdot)_{{Y_{\rm e}}}$ and norm $\|\cdot\|_{{Z_{\rm e}}} = \sqrt{(\cdot,\cdot)_{{Z_{\rm e}}}}$. The associated dual spaces are denoted by ${X'_{\rm e}}$, ${Y'_{\rm e}}$, and $Z'_{\rm e}$. Furthermore, let ${\mathcal D} \subset \mathbb{R}^{n}$ be a prescribed $n$-dimensional, compact parameter set. 
For any parameter $\mu \in {\mathcal D}$, we then consider the continuous bilinear forms $m(\cdot,\cdot;\mu):{X_{\rm e}}\times {X_{\rm e}} \to\mathbb{R}$, $a(\cdot,\cdot;\mu):{X_{\rm e}}\times {X_{\rm e}}\rightarrow \mathbb{R}$, and $b(\cdot,\cdot;\mu):{X_{\rm e}}\times {Y_{\rm e}} \rightarrow \mathbb{R}$,\footnote{For clarity of exposition, we suppress the obvious requirement of nonzero elements in the denominators.} \begin{align} \label{eq:gamma_m_e} \gamma_m^{\rm e}(\mu) & \equiv \sup_{u\in {X_{\rm e}}} \sup_{v\in {X_{\rm e}}}\frac{m(u,v;\mu)}{\|u\|_{{X_{\rm e}}}\|v\|_{{X_{\rm e}}}} < \infty \quad\forall\;\mu\in{\mathcal D},\\ \label{eq:gamma_a_e} \gamma_a^{\rm e}(\mu) & \equiv \sup_{u\in {X_{\rm e}}} \sup_{v\in {X_{\rm e}}}\frac{a(u,v;\mu)}{\|u\|_{{X_{\rm e}}}\|v\|_{{X_{\rm e}}}} < \infty \quad\forall\;\mu\in{\mathcal D},\\ \label{eq:gamma_b_e} \gamma_b^{\rm e}(\mu) &\equiv \sup_{q\in {Y_{\rm e}}} \sup_{v\in {X_{\rm e}}} \frac{b(v,q;\mu)}{\|q\|_{{Y_{\rm e}}}\|v\|_{{X_{\rm e}}}} < \infty \quad\forall\;\mu\in{\mathcal D}, \end{align} as well as $c(\cdot,\cdot;\mu):{Y_{\rm e}}\times{Y_{\rm e}}\to\mathbb{R}$, \begin{align} \label{eq:gamma_c_e} \gamma_c^{\rm e}(\mu) & \equiv \sup_{p\in {Y_{\rm e}}} \sup_{q\in {Y_{\rm e}}}\frac{c(p,q;\mu)}{\|p\|_{{Y_{\rm e}}}\|q\|_{{Y_{\rm e}}}} < \infty \quad\forall\;\mu\in{\mathcal D}. 
\end{align} We moreover assume that $a(\cdot,\cdot;\mu)$ and $c(\cdot,\cdot;\mu)$ are coercive on ${X_{\rm e}}$ and ${Y_{\rm e}}$, respectively, \begin{align} \label{eq:alpha_a_e} \alpha_a^{\rm e}(\mu) &\equiv \inf_{v\in {X_{\rm e}}} \frac{a(v,v;\mu)}{\|v\|^2_{{X_{\rm e}}}} > 0 \quad\forall\;\mu\in{\mathcal D},\\ \label{eq:alpha_c_e} \alpha_c^{\rm e}(\mu) &\equiv \inf_{q\in {Y_{\rm e}}} \frac{c(q,q;\mu)}{\|q\|^2_{{Y_{\rm e}}}} > 0 \quad\forall\;\mu\in{\mathcal D}, \end{align} that $m(\cdot,\cdot;\mu)$ is symmetric and positive definite, \begin{align}\label{eq:pos_def_m_e} m(v,v;\mu) &> 0 \quad\forall\; 0\neq v\in{X_{\rm e}} \quad\forall\;\mu\in{\mathcal D}, \end{align} and that $b(\cdot,\cdot;\mu)$ satisfies the inf-sup condition \begin{equation} \label{eq:brezzi_infsup_e} \beta^{\rm e}(\mu) \equiv \inf_{q\in {Y_{\rm e}}} \sup_{v\in {X_{\rm e}}} \frac{b(v,q;\mu)}{\|q\|_{{Y_{\rm e}}}\|v\|_{{X_{\rm e}}}}>0\quad\forall\; \mu\in{\mathcal D}. \end{equation} By (\ref{eq:gamma_a_e}), (\ref{eq:alpha_a_e}) and (\ref{eq:gamma_c_e}), (\ref{eq:alpha_c_e}), the bilinear forms $a(\cdot,\cdot;\mu)$ and $c(\cdot,\cdot;\mu)$ induce energy norms $\|\cdot\|_{{X_{\rm e}},\mu}\equiv\sqrt{a(\cdot,\cdot;\mu)}$ and $\|\cdot\|_{{Y_{\rm e}},\mu}\equiv\sqrt{c(\cdot,\cdot;\mu)}$ on ${X_{\rm e}}$ and ${Y_{\rm e}}$, respectively, which are equivalent to $\|\cdot\|_{X_{\rm e}}$ and $\|\cdot\|_{Y_{\rm e}}$ for any $\mu\in{\mathcal D}$; note that, to this end, $a(\cdot,\cdot;\mu)$ and $c(\cdot,\cdot;\mu)$ do not necessarily have to be symmetric. Furthermore, as a symmetric and positive definite bilinear form, $m(\cdot,\cdot;\mu)$ defines an inner product on ${X_{\rm e}}$ for any parameter $\mu\in{\mathcal D}$; the associated norm shall be denoted by $\|\cdot\|_\mu \equiv \sqrt{m(\cdot,\cdot;\mu)}$. 
We further assume that we are given a time interval $[0,T]$, $T>0$, and linear functionals $f(\cdot;\mu)\in C^0(0,T;{X'_{\rm e}})$ and $g(\cdot;\mu)\in C^0(0,T;{Y'_{\rm e}})$ for all $\mu\in{\mathcal D}$; for a vector space $V$, $C^0(0,T;V)$ here denotes the space of $V$-valued functions of class $C^0$ with respect to $t\in [0,T]$. Throughout this work, we directly consider a time-discrete framework: We divide the time interval $[0,T]$ into $K$ subintervals of equal length $\Delta t \equiv T/K$, and define $t^k\equiv k\Delta t$ for all $k=0,\ldots, K$; for notational convenience, we also introduce ${\mathbb{K}} \equiv \{1,\ldots, K\}$ and ${\mathbb{K}}_0 \equiv {\mathbb{K}} \cup \{0\}$. We then set $f^k(\cdot;\mu) \equiv f(t^k;\mu)\in {X'_{\rm e}}$ and $g^k(\cdot;\mu) \equiv g(t^k;\mu)\in {Y'_{\rm e}}$ for all $k\in{\mathbb{K}}_0$, $\mu\in{\mathcal D}$. For $\varepsilon\geq 0$, we now consider the following ``exact''---more precisely, semi-discrete---problem resulting from a backward Euler method (see, e.g., \cite{Ern:2004jl,Reusken:2011fk,Quarteroni:2008fk,Thomee:1997fk}): For any given parameter $\mu\in{\mathcal D}$, we find $u^{\varepsilon,k}_{\rm e}(\mu)\in {X_{\rm e}}$ and $p^{\varepsilon,k}_{\rm e}(\mu)\in {Y_{\rm e}}$, $k\in{\mathbb{K}}$, such that $u^{\varepsilon,0}_{\rm e}(\mu)=0$\footnote{We here assume zero initial conditions for simplicity; note that nonzero initial conditions can be handled as well without much difficulty (see \cite{grepl05}).} and \begin{align}\nonumber &{\textstyle \frac{1}{\Delta t}}\, m(u^{\varepsilon,k}_{\rm e}(\mu)-u^{\varepsilon,k-1}_{\rm e}(\mu),v;\mu) \\ \label{eq:exact_scheme} &\qquad\begin{split} + \; a(u^{\varepsilon,k}_{\rm e}(\mu),v;\mu) + b(v,p^{\varepsilon,k}_{\rm e}(\mu);\mu) &= f^k(v;\mu) \quad\forall\;v\in {X_{\rm e}},\\ b(u^{\varepsilon,k}_{\rm e}(\mu),q;\mu) -\varepsilon\,c(p^{\varepsilon,k}_{\rm e}(\mu),q;\mu)&= g^k(q;\mu) \hspace{0.3ex}\quad\forall\;q\in {Y_{\rm e}}, \end{split} \quad k\in{\mathbb{K}}. 
\end{align} Even though we here use a common notation for simplicity in exposition, we point out that (\ref{eq:exact_scheme}) states very different problems for $\varepsilon=0$ and $\varepsilon>0$, respectively. For $\varepsilon=0$, we also denote $u^k_{\rm e}(\mu)\equiv u^{0,k}_{\rm e}(\mu)$, $k\in{\mathbb{K}}_0$, and $p^k_{\rm e}(\mu)\equiv p^{0,k}_{\rm e}(\mu)$, $k\in{\mathbb{K}}$, for all $\mu\in{\mathcal D}$. For $\varepsilon>0$, corresponding to our discussions in \cite{veroy10:_stokes_penalty}, (\ref{eq:exact_scheme}) can be considered as a perturbed or regularized version of the problem associated with $\varepsilon=0$; in this case, we therefore also call $(u^{\varepsilon,k}_{\rm e}(\mu), p^{\varepsilon,k}_{\rm e}(\mu)), k\in{\mathbb{K}}$, the penalty solution. Since these problems differ considerably in their general nature (cf.~\cite{Gerner:2011fk} and \cite{veroy10:_stokes_penalty}), we shall often treat them separately in the following analysis and explicitly distinguish between the two cases $\varepsilon=0$ and $\varepsilon>0$. From (\ref{eq:alpha_a_e}) and (\ref{eq:pos_def_m_e}), the bilinear form $\frac{1}{\Delta t}m(\cdot,\cdot;\mu) + a(\cdot,\cdot;\mu)$ is coercive on ${X_{\rm e}}$ for any $\mu\in{\mathcal D}$. The problem (\ref{eq:exact_scheme}) is thus uniquely solvable for $(u^{k}_{\rm e}(\mu), p^{k}_{\rm e}(\mu)), k\in{\mathbb{K}}$, and $(u^{\varepsilon,k}_{\rm e}(\mu), p^{\varepsilon,k}_{\rm e}(\mu)), k\in{\mathbb{K}}$, as a saddle point problem according to \cite{Gerner:2011fk} and \cite{veroy10:_stokes_penalty}, respectively. \subsection{Truth Approximation} \label{ss:truth} We now introduce a high-fidelity ``truth'' approximation upon which our RB approximation will subsequently be built. To this end, let $X$ and $Y$ denote finite-dimensional subspaces of ${X_{\rm e}}$ and ${Y_{\rm e}}$, respectively. We define the product space $Z \equiv X \times Y$ and denote by ${\mathcal N}$ the dimension of $Z$. 
We emphasize that the dimension ${\mathcal N}$ is typically very large. These ``truth'' approximation subspaces inherit the inner products and norms of the exact spaces: $(\cdot,\cdot)_X \equiv (\cdot,\cdot)_{{X_{\rm e}}}$, $\|\cdot\|_X\equiv \|\cdot\|_{{X_{\rm e}}}$, $(\cdot,\cdot)_Y \equiv (\cdot,\cdot)_{{Y_{\rm e}}}$, $\|\cdot\|_Y\equiv \|\cdot\|_{{Y_{\rm e}}}$, and $(\cdot,\cdot)_Z \equiv (\cdot,\cdot)_{{Z_{\rm e}}}$, $\|\cdot\|_Z\equiv \|\cdot\|_{{Z_{\rm e}}}$. Clearly, the continuity properties (\ref{eq:gamma_m_e}), (\ref{eq:gamma_a_e}), (\ref{eq:gamma_b_e}), and (\ref{eq:gamma_c_e}) are passed on to the ``truth'' approximation spaces, \begin{align} \label{eq:gamma_m} \gamma_m(\mu) &\equiv \sup_{u\in X} \sup_{v\in X} \frac{m(u,v;\mu)}{\|u\|_X \|v\|_X} < \infty \quad\forall\;\mu\in{\mathcal D},\\ \label{eq:gamma_a} \gamma_a(\mu) &\equiv \sup_{u\in X} \sup_{v\in X} \frac{a(u,v;\mu)}{\|u\|_X \|v\|_X} < \infty \quad\forall\;\mu\in{\mathcal D},\\ \label{eq:gamma_b} \gamma_b(\mu) &\equiv \sup_{q\in Y} \sup_{v\in X} \frac{b(v,q;\mu)}{\|q\|_Y \|v\|_X} < \infty\quad\forall\;\mu\in{\mathcal D},\\ \label{eq:gamma_c} \gamma_c(\mu) &\equiv \sup_{p\in Y} \sup_{q\in Y} \frac{c(p,q;\mu)}{\|p\|_Y \|q\|_Y} < \infty\quad\forall\;\mu\in{\mathcal D}; \end{align} so are the coercivity properties (\ref{eq:alpha_a_e}) and (\ref{eq:alpha_c_e}), \begin{align} \label{eq:alpha_a} \alpha_a(\mu) &\equiv \inf_{v\in X} \frac{a(v,v;\mu)}{\|v\|^2_X} > 0 \quad\forall\;\mu\in{\mathcal D},\\ \label{eq:alpha_c} \alpha_c(\mu) &\equiv \inf_{q\in Y} \frac{c(q,q;\mu)}{\|q\|^2_Y} > 0 \quad\forall\;\mu\in{\mathcal D}, \end{align} as well as the inner product $m(\cdot,\cdot;\mu)$ and associated norm $\|\cdot\|_\mu$, \begin{equation} \label{eq:pos_def_m} m(v,v;\mu) > 0 \quad\forall\; 0\neq v\in X \quad\forall\;\mu\in{\mathcal D}. 
\end{equation} Thus, $\|\cdot\|_{X,\mu}\equiv\|\cdot\|_{{X_{\rm e}},\mu}$ and $\|\cdot\|_{Y,\mu}\equiv\|\cdot\|_{{Y_{\rm e}},\mu}$ define norms on $X$ and $Y$, respectively, which are equivalent to $\|\cdot\|_X$ and $\|\cdot\|_Y$ for any $\mu\in{\mathcal D}$. We further assume that the approximation spaces $X$ and $Y$ are chosen such that they satisfy the Ladyzhenskaya--Babu\v{s}ka--Brezzi (LBB) inf-sup condition (see, e.g., \cite{Brezzi:1991fk}) \begin{equation} \label{eq:LBB} \beta(\mu) \equiv \inf_{q\in Y} \sup_{v\in X} \frac{b(v,q;\mu)}{\|q\|_{Y} \|v\|_X} \geq \beta^0(\mu) > 0 \quad \forall\; \mu\in{\mathcal D}, \end{equation} where $\beta^0(\mu)$ is a constant independent of the dimension ${\mathcal{N}}$. Our high-fidelity ``truth'' discretization for (\ref{eq:exact_scheme}) now reads as follows: For $\varepsilon\geq 0$ and any given $\mu\in{\mathcal D}$, we find $u^{\varepsilon,k}(\mu)\in X$ and $p^{\varepsilon,k}(\mu)\in Y$, $k\in{\mathbb{K}}$, such that $u^{\varepsilon,0}(\mu)=0$ and \begin{align}\nonumber &{\textstyle \frac{1}{\Delta t}}\, m(u^{\varepsilon,k}(\mu)-u^{\varepsilon,k-1}(\mu),v;\mu) \\ \label{eq:truth_scheme} &\qquad\begin{split} + \; a(u^{\varepsilon,k}(\mu),v;\mu) + b(v,p^{\varepsilon,k}(\mu);\mu) &= f^k(v;\mu) \quad\forall\;v\in X,\\ \hspace{2ex} b(u^{\varepsilon,k}(\mu),q;\mu) -\varepsilon\,c(p^{\varepsilon,k}(\mu),q;\mu) &= g^k(q;\mu) \hspace{0.2ex}\quad\forall\;q\in Y, \end{split} \quad k\in{\mathbb{K}}. \end{align} In case of $\varepsilon=0$, we also denote $u^k(\mu)\equiv u^{0,k}(\mu)$, $k\in{\mathbb{K}}_0$, and $p^k(\mu)\equiv p^{0,k}(\mu)$, $k\in{\mathbb{K}}$. As for the exact problem in \S \ref{ss:pspp}, the problem (\ref{eq:truth_scheme}) is uniquely solvable for $(u^{k}(\mu),p^{k}(\mu)), k\in{\mathbb{K}}$, and $(u^{\varepsilon,k}(\mu), p^{\varepsilon,k}(\mu)), k\in{\mathbb{K}}$, according to \cite{Gerner:2011fk} and \cite{veroy10:_stokes_penalty}, respectively. 
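For concreteness, the time-stepping structure of the ``truth'' scheme (\ref{eq:truth_scheme}) can be sketched algebraically: each time step requires the solution of one block saddle point system whose matrix is independent of $k$, with the previous velocity entering the right-hand side. The following is a minimal sketch only; the matrices are illustrative stand-ins, not an actual Stokes discretization.

```python
import numpy as np

# Minimal sketch of the backward Euler saddle point iteration: at each step k,
# solve the block system
#   [ M/dt + A   B^T  ] [u^k]   [ f^k + M u^{k-1}/dt ]
#   [ B         -eps C] [p^k] = [ g^k                ]     with u^0 = 0.
# M, A, B, C are illustrative stand-ins for assembled truth matrices.
rng = np.random.default_rng(0)
n, m_dim, K, dt, eps = 8, 3, 10, 0.1, 1e-3

Q = rng.standard_normal((n, n))
M = Q @ Q.T + n * np.eye(n)           # mass matrix: symmetric positive definite
A = np.eye(n)                         # diffusion term: coercive
B = rng.standard_normal((m_dim, n))   # incompressibility constraint
C = np.eye(m_dim)                     # penalty bilinear form

def backward_euler(f, g):
    """Run the K backward Euler steps; return the list of (u^k, p^k)."""
    S = np.block([[M / dt + A, B.T], [B, -eps * C]])
    u = np.zeros(n)
    traj = []
    for k in range(1, K + 1):
        rhs = np.concatenate([f(k) + M @ u / dt, g(k)])
        sol = np.linalg.solve(S, rhs)
        u, p = sol[:n], sol[n:]
        traj.append((u.copy(), p.copy()))
    return traj

traj = backward_euler(lambda k: np.ones(n), lambda k: np.zeros(m_dim))
```

Since the system matrix is time-independent, a single factorization could be reused across all $K$ steps.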
\begin{remark} \label{rmrk:truth_LBB} We note that in case of $\varepsilon>0$, the LBB inf-sup condition (\ref{eq:LBB}) is in fact not a compulsory requirement for the system (\ref{eq:truth_scheme}) to be well-posed (see, e.g., \cite{Brezzi:1991fk}). However, if the problem is considered as a perturbation of the problem associated with $\varepsilon=0$, the condition is needed for the solution $(u^{\varepsilon,k}(\mu), p^{\varepsilon,k}(\mu)), k\in{\mathbb{K}}$, to converge to $(u^{k}(\mu),p^{k}(\mu)), k\in{\mathbb{K}}$, as $\varepsilon$ tends to zero (see, e.g., \cite{Bercovier:1978kx}). For further details in this context, we refer the reader also to \cite[\S 4]{Gerner:2012ag}. \end{remark} \section{The Reduced Basis Method} \label{s:RB_method} We now turn to the RB method, discussing the approximation procedure, rigorous {\em a posteriori} error estimators, and the construction of stable approximation spaces that capture the causality associated with the parameter dependence {\em as well as} with evolution in time. \subsection{Galerkin Projection}\label{ss:Galerkin_projection} Suppose that we are given a set of nested, low-dimensional RB approximation subspaces $X_N\subset X_{N+1}\subset X$ and $Y_N\subset Y_{N+1}\subset Y$, $N\in {\mathbb{N}_{\rm max}} \equiv\{1,\ldots,N_{\rm max}\}$. We denote by $N_X$ and $N_Y$ the dimensions of $X_N$ and $Y_N$, respectively, and the total dimension of $Z_N\equiv X_N\times Y_N$ by $N_Z \equiv N_X + N_Y$. The subspaces $X_N$, $Y_N$, and $Z_N$ again inherit all inner products and norms of $X$, $Y$, and $Z$, respectively. 
The RB approximation is then defined as the Galerkin projection with respect to the truth problem (\ref{eq:truth_scheme}) onto these low-dimensional subspaces: For $\varepsilon\geq 0$ and any given $\mu\in {\mathcal D}$, we find $u^{\varepsilon,k}_N(\mu)\in X_N$ and $p^{\varepsilon,k}_N(\mu)\in Y_N$, $k\in{\mathbb{K}}$, such that $u^{\varepsilon,0}_N(\mu)=0$ and \begin{align}\nonumber &{\textstyle\frac{1}{\Delta t}}\, m(u^{\varepsilon,k}_N(\mu)-u^{\varepsilon,k-1}_N(\mu),v_N;\mu)\\ \label{eq:rb_scheme} &\begin{split} + \; a(u_{N}^{\varepsilon,k}(\mu),v_N;\mu) + b(v_N,p^{\varepsilon,k}_{N}(\mu);\mu) &= f^k(v_N;\mu)\quad\forall\;v_N\in X_N,\\ b(u^{\varepsilon,k}_{N}(\mu),q_N;\mu) - \varepsilon\, c(p^{\varepsilon,k}_{N}(\mu),q_N;\mu) & = g^k(q_N;\mu)\hspace{0.3ex}\quad\forall\;q_N\in Y_N, \end{split} \quad k\in{\mathbb{K}}. \end{align} Again, we denote $u^k_N(\mu)\equiv u^{0,k}_N(\mu)$, $k\in{\mathbb{K}}_0$, and $p^k_N(\mu)\equiv p^{0,k}_N(\mu)$, $k\in{\mathbb{K}}$. The discrete RB system now essentially behaves as in the stationary case: We recall (see \cite{Gerner:2011fk}) that a pair $(X_N,Y_N)$ of RB approximation spaces is called {\em stable} if it satisfies the inf-sup condition \begin{equation}\label{eq:rb_infsup} \beta_N(\mu)\equiv \inf_{q_N\in Y_N} \sup_{v_N \in X_N} \frac{b(v_N,q_N;\mu)}{\|q_N\|_Y\|v_N\|_X}>0 \quad\forall\;\mu\in{\mathcal D}. \end{equation} In case of $\varepsilon=0$, (\ref{eq:rb_scheme}) is then uniquely solvable for $(u^k_N(\mu),p^k_{N}(\mu)), k\in{\mathbb{K}}$, if and only if the RB approximation spaces $X_N, Y_N$ are stable; in case of $\varepsilon>0$, corresponding to our comments on the truth problem in Remark~\ref{rmrk:truth_LBB}, (\ref{eq:rb_scheme}) is uniquely solvable for $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$, for any choice of $X_N, Y_N$ (see \cite{Brezzi:1991fk,Gerner:2012ag}). 
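Algebraically, the Galerkin projection (\ref{eq:rb_scheme}) amounts to projecting the truth matrices onto the RB bases and running the same backward Euler loop in dimension $N_X+N_Y$ instead of ${\mathcal N}$. The following is a minimal sketch under simplifying assumptions: random orthonormal bases stand in for snapshot-based RB bases, and the matrices are illustrative, not an assembled Stokes discretization.

```python
import numpy as np

# Sketch of the RB Galerkin projection: project truth operators onto
# low-dimensional bases Zx (velocity) and Zy (pressure), then time-step
# the small reduced saddle point system.
rng = np.random.default_rng(1)
n, m_dim, Nx, Ny, K, dt, eps = 40, 12, 6, 3, 5, 0.05, 1e-2

M = np.eye(n); A = 2.0 * np.eye(n)       # illustrative truth matrices
B = rng.standard_normal((m_dim, n))
C = np.eye(m_dim)

Zx, _ = np.linalg.qr(rng.standard_normal((n, Nx)))      # orthonormal RB velocity basis
Zy, _ = np.linalg.qr(rng.standard_normal((m_dim, Ny)))  # orthonormal RB pressure basis

# Reduced operators: (Nx+Ny) x (Nx+Ny) instead of (n+m_dim) x (n+m_dim).
M_N, A_N = Zx.T @ M @ Zx, Zx.T @ A @ Zx
B_N, C_N = Zy.T @ B @ Zx, Zy.T @ C @ Zy

S_N = np.block([[M_N / dt + A_N, B_N.T], [B_N, -eps * C_N]])
uN = np.zeros(Nx)
for k in range(1, K + 1):
    f_N = Zx.T @ np.ones(n)        # reduced right-hand sides
    g_N = Zy.T @ np.zeros(m_dim)
    sol = np.linalg.solve(S_N, np.concatenate([f_N + M_N @ uN / dt, g_N]))
    uN, pN = sol[:Nx], sol[Nx:]

u_approx = Zx @ uN                 # lift the reduced velocity back to the truth space
```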
\subsection{{\em A Posteriori} Error Estimation} \label{ss:error_estimation} We now develop upper bounds for the errors in our RB approximations that are rigorous, sharp, and computationally efficient. In this context, symmetric problems shall be discussed as a special case in which these bounds can be further sharpened. In this section, we assume that the low-dimensional RB spaces $X_N, Y_N$ are constructed such that for any given $\mu\in{\mathcal D}$, a solution $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu))\in X_N\times Y_N$, $k\in{\mathbb{K}}$, to (\ref{eq:rb_scheme}) exists (see \S \ref{ss:Galerkin_projection}). For $\mu\in{\mathcal D}$, we then consider the errors \begin{align} \nonumber e^u_N(\mu) &\equiv (e^{u,k}_N(\mu))_{k\in{\mathbb{K}}}, \text{ where } e^{u,k}_N(\mu) \equiv u^{\varepsilon,k}(\mu) - u^{\varepsilon,k}_N(\mu) \in X, \;k\in{\mathbb{K}},\\ \label{eq:def_errors} e^p_N(\mu) &\equiv (e^{p,k}_N(\mu))_{k\in{\mathbb{K}}}, \text{ where } e^{p,k}_N(\mu) \equiv p^{\varepsilon,k}(\mu) -p^{\varepsilon,k}_N(\mu) \in Y,\; k\in{\mathbb{K}},\\ \nonumber e^\varepsilon_N(\mu) &\equiv (e^{\varepsilon,k}_N(\mu))_{k\in{\mathbb{K}}}, \text{ where } e^{\varepsilon,k}_N(\mu) \equiv (e^{u,k}_N(\mu),e^{p,k}_N(\mu))\in Z, \;k\in{\mathbb{K}}, \end{align} in the RB approximations $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$, with respect to the truth solution $(u^{\varepsilon,k}(\mu),p^{\varepsilon,k}(\mu)), k\in{\mathbb{K}}$; we note that in particular $e^{u,0}_N(\mu)\equiv u^{\varepsilon,0}(\mu) - u^{\varepsilon,0}_N(\mu)=0$ from our initial conditions. 
To formulate our RB {\em a posteriori} error bounds, we rely on the residuals associated with the RB approximation $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$, \begin{align} \nonumber r^{1,k}_{N}(\cdot;\mu) &\equiv f^k(v;\mu) - {\textstyle \frac{1}{\Delta t}}\,m(u^{\varepsilon,k}_{N}(\mu)-u^{\varepsilon,k-1}_{N}(\mu),v;\mu)\\ \label{eq:residual_1_k} &\hspace{26ex} - a(u^{\varepsilon,k}_{N}(\mu),v;\mu) - b(v,p^{\varepsilon,k}_{N}(\mu);\mu) \in X',\\ \label{eq:residual_2_k} r^{2,k}_{N}(\cdot;\mu) &\equiv g^k(q;\mu) - b(u^{\varepsilon,k}_{N}(\mu),q;\mu)+ \varepsilon\, c(p^{\varepsilon,k}_{N}(\mu),q;\mu) \in Y' \end{align} for $k\in{\mathbb{K}}$ and $\mu\in{\mathcal D}$. In the following analysis, we distinguish between the cases $\varepsilon=0$ and $\varepsilon>0$. \subsubsection{$\varepsilon=0$} \label{sss:errbnds} We here derive rigorous upper bounds for the error $e^{u}_N(\mu)$ measured in the ``spatio-temporal'' energy norm \begin{equation} \label{eq:energy_norm} \|(v^j)_{j\in{\mathbb{K}}}\|_{\ell^2(0,k;X)} \equiv \Bigg(\|v^k\|^2_\mu + \Delta t \sum_{j=1}^k \|v^j\|^2_{X,\mu}\Bigg)^{1/2},\quad (v^j)_{j\in{\mathbb{K}}}\subseteq X, \quad k\in{\mathbb{K}}. 
\end{equation} Our RB {\em a posteriori} error bounds shall be formulated in terms of the dual norms of the residuals (\ref{eq:residual_1_k}) and (\ref{eq:residual_2_k}), and (Online-)efficient lower and upper bounds to the truth continuity, coercivity, and inf-sup constants (\ref{eq:gamma_m}), (\ref{eq:gamma_a}), (\ref{eq:alpha_a}), and (\ref{eq:LBB}), \begin{align} \label{eq:constants_LB_UB} \setlength\arraycolsep{2pt} \begin{array}{ccccc} \gamma_m^{\rm LB}(\mu) & \leq & \gamma_m(\mu) & \leq & \gamma_m^{\rm UB}(\mu),\\[0.5ex] \gamma_a^{\rm LB}(\mu) & \leq & \gamma_a(\mu) & \leq & \gamma_a^{\rm UB}(\mu),\\[0.5ex] \alpha_a^{\rm LB}(\mu) & \leq & \alpha_a(\mu) & \leq & \alpha_a^{\rm UB}(\mu),\\[0.5ex] \beta^{\rm LB}(\mu) & \leq & \beta(\mu) & \leq & \beta^{\rm UB}(\mu), \end{array} \quad\forall\;\mu\in{\mathcal D}. \end{align} We can now state the following result. \begin{proposition} \label{prpstn:errbnd} For any given $\mu\in{\mathcal D}$, $N\in{\mathbb{N}_{\rm max}}$, $k\in{\mathbb{K}}$, and $\alpha_a^{\rm LB}(\mu)$, $\gamma^{\rm UB}_a(\mu)$, $\beta^{\rm LB}(\mu)$, $\gamma_m^{\rm UB}(\mu)$ satisfying (\ref{eq:constants_LB_UB}), we define \begin{multline} \label{eq:energy_errbnd} \Delta^{k}_N(\mu) \equiv \Bigg[ \Delta t \sum_{j=1}^k \frac{\|r^{1,j}_N(\cdot;\mu)\|^2_{X'}}{{\alpha^{\rm LB}_a(\mu)}} + \frac{2}{\beta^{\rm LB}(\mu)}\Bigg(1+{\frac{\gamma^{\rm UB}_a(\mu)}{\alpha^{\rm LB}_a(\mu)}}\Bigg) \|r^{1,j}_N(\cdot;\mu)\|_{X'}\|r^{2,j}_N(\cdot;\mu)\|_{Y'}\\ + \Bigg(\frac{\gamma^{\rm UB}_m(\mu)}{\Delta t} + \frac{(\gamma^{\rm UB}_a(\mu))^2}{\alpha^{\rm LB}_a(\mu)}\Bigg) \frac{\|r^{2,j}_N(\cdot;\mu)\|^2_{Y'}}{(\beta^{\rm LB}(\mu))^2}\Bigg]^{1/2}. 
\end{multline} Then, $\Delta^{k}_N(\mu)$ represents an upper bound for the error $e^{u}_N(\mu)$ measured in the ``spatio-temporal'' energy norm (\ref{eq:energy_norm}), \begin{equation} \label{eq:rig_energy_errbnd} \|e^{u}_N(\mu)\|_{\ell^2(0,k;X)} \leq \Delta^{k}_N(\mu) \quad\forall\; k\in{\mathbb{K}}, \;\mu\in{\mathcal D}, \;N\in{\mathbb{N}_{\rm max}}. \end{equation} \end{proposition} \begin{proof} Let $\mu$ be any parameter in ${\mathcal D}$, $N\in{\mathbb{N}_{\rm max}}$, and $k\in{\mathbb{K}}$. For clarity of exposition, we suppress the argument $\mu$ in this proof. Take any $1\leq j\leq k$. From (\ref{eq:residual_1_k}), (\ref{eq:residual_2_k}), and (\ref{eq:truth_scheme}), the errors $e^{u,j}_N\in X$ and $e^{p,j}_N\in Y$ satisfy the equations \begin{align} \label{eq:error_mom} {\textstyle \frac{1}{\Delta t}}\, m(e^{u,j}_N-e^{u,j-1}_N,v) + a(e^{u,j}_N,v) + b(v,e^{p,j}_N) &= r^{1,j}_N(v) \quad\forall\;v\in X,\\ \label{eq:error_cont} b(e^{u,j}_N,q) &= r^{2,j}_N(q) \quad\forall\;q\in Y. \end{align} By the LBB inf-sup condition (\ref{eq:LBB}) and (\ref{eq:error_mom}), we have \begin{align} \nonumber \beta \|e^{p,j}_N\|_Y &\leq \sup_{v\in X} \frac{b(v, e^{p,j}_N)}{\|v\|_X} = \sup_{v\in X} \frac{ r^{1,j}_N(v) - a(e^{u,j}_N,v)-\frac{1}{\Delta t} m(e^{u,j}_N-e^{u,j-1}_N,v)}{\|v\|_X}\\ \label{eq:bound_ep_j} &\leq \|r^{1,j}_N\|_{X'} + {\gamma_a} \|e^{u,j}_N\|_{X} + \frac{\sqrt{\gamma_m}}{\Delta t} \|e^{u,j}_N-e^{u,j-1}_N\|_\mu, \end{align} where the last inequality follows from the Cauchy--Schwarz inequality for the inner product $m(\cdot,\cdot)$, (\ref{eq:gamma_m}), and (\ref{eq:gamma_a}). We then set $v=e^{u,j}_N$, $q=e^{p,j}_N$ in (\ref{eq:error_mom}), (\ref{eq:error_cont}) and subtract the second from the first equation such that \begin{align*} {\textstyle \frac{1}{\Delta t}}\, m(e^{u,j}_N-e^{u,j-1}_N,e^{u,j}_N) + \|e^{u,j}_N\|^2_{X,\mu} & = r^{1,j}_N(e^{u,j}_N) - r^{2,j}_N(e^{p,j}_N) \\ &\leq \|r^{1,j}_N\|_{X'} \|e^{u,j}_N\|_X + \|r^{2,j}_N\|_{Y'}\|e^{p,j}_N\|_Y. 
\end{align*} Applying now (\ref{eq:bound_ep_j}) and (\ref{eq:alpha_a}) yields \begin{multline*} {\textstyle \frac{1}{\Delta t}}\,m(e^{u,j}_N-e^{u,j-1}_N,e^{u,j}_N) + \|e^{u,j}_N\|^2_{X,\mu}\\ \leq \frac{1}{\beta} \|r^{1,j}_N\|_{X'}\|r^{2,j}_N\|_{Y'}+ \bigg(\|r^{1,j}_N\|_{X'} + \frac{{\gamma_a}}{\beta}\|r^{2,j}_N\|_{Y'}\bigg) \frac{\|e^{u,j}_N\|_{X,\mu}}{\sqrt{\alpha_a}} \\ + \frac{1}{\Delta t}\frac{\sqrt{\gamma_m}}{\beta} \|r^{2,j}_N\|_{Y'} \|e^{u,j}_N-e^{u,j-1}_N\|_\mu, \end{multline*} which can be further bounded from Young's inequality by \begin{multline*} \leq \frac{1}{\beta} \|r^{1,j}_N\|_{X'}\|r^{2,j}_N\|_{Y'}+ \frac{1}{2\alpha_a}\bigg(\|r^{1,j}_N\|_{X'} + \frac{{\gamma_a}}{\beta}\|r^{2,j}_N\|_{Y'}\bigg)^2 + \frac{1}{2} \|e^{u,j}_N\|^2_{X,\mu} \\ +\frac{1}{2\Delta t}\frac{\gamma_m}{\beta^2} \|r^{2,j}_N\|^2_{Y'} + \frac{1}{2\Delta t} \|e^{u,j}_N-e^{u,j-1}_N\|^2_\mu. \end{multline*} Rearranging terms, the inequality now reads \begin{multline*} \frac{1}{\Delta t} \left(\|e^{u,j}_N\|^2_\mu - \|e^{u,j-1}_N\|^2_\mu \right) + \|e^{u,j}_N\|^2_{X,\mu}\\ \leq \frac{\|r^{1,j}_N\|^2_{X'} }{\alpha_a}+ \frac{2}{\beta} \bigg(1+ {\frac{\gamma_a}{\alpha_a}}\bigg) \|r^{1,j}_N\|_{X'}\|r^{2,j}_N\|_{Y'} + \bigg(\frac{\gamma_m}{\Delta t} + \frac{\gamma_a^2}{\alpha_a} \bigg)\frac{\|r^{2,j}_N\|^2_{Y'}}{\beta^2}, \end{multline*} and the result follows from applying the sum $\sum_{j=1}^k$, $e^{u,0}_N=0$, and (\ref{eq:constants_LB_UB}). \end{proof} In the special case of a symmetric problem, the error bounds given in Proposition~\ref{prpstn:errbnd} can be improved (see also \cite{Gerner:2012fk}). We may then derive the following result. \begin{proposition} \label{prpstn:errbnd_sym} Let $a(\cdot,\cdot;\mu)$ be symmetric for all $\mu\in{\mathcal D}$. 
For any given $\mu\in{\mathcal D}$, $N\in{\mathbb{N}_{\rm max}}$, $k\in{\mathbb{K}}$, and $\alpha_a^{\rm LB}(\mu)$, $\gamma^{\rm UB}_a(\mu)$, $\beta^{\rm LB}(\mu)$, $\gamma_m^{\rm UB}(\mu)$ satisfying (\ref{eq:constants_LB_UB}), we define \begin{multline} \label{eq:energy_errbnd_sym} \Delta^{{\rm sym},k}_N(\mu) \equiv \Bigg[ \Delta t \sum_{j=1}^k \frac{\|r^{1,j}_N(\cdot;\mu)\|^2_{X'}}{{\alpha^{\rm LB}_a(\mu)}}\\ + \frac{2}{\beta^{\rm LB}(\mu)}\Bigg(1+\sqrt{\frac{\gamma^{\rm UB}_a(\mu)}{\alpha^{\rm LB}_a(\mu)}}\Bigg) \|r^{1,j}_N(\cdot;\mu)\|_{X'}\|r^{2,j}_N(\cdot;\mu)\|_{Y'}\\ + \Bigg(\frac{\gamma^{\rm UB}_m(\mu)}{\Delta t} + {\gamma^{\rm UB}_a(\mu)}\Bigg) \frac{\|r^{2,j}_N(\cdot;\mu)\|^2_{Y'}}{(\beta^{\rm LB}(\mu))^2}\Bigg]^{1/2}. \end{multline} Then, $\Delta^{{\rm sym},k}_N(\mu)$ represents an upper bound for the error $e^{u}_N(\mu)$ measured in the ``spatio-temporal'' energy norm (\ref{eq:energy_norm}), \begin{equation} \label{eq:rig_energy_errbnd_sym} \|e^{u}_N(\mu)\|_{\ell^2(0,k;X)} \leq \Delta^{{\rm sym},k}_N(\mu) \quad\forall\; k\in{\mathbb{K}}, \;\mu\in{\mathcal D}, \;N\in{\mathbb{N}_{\rm max}}. \end{equation} \end{proposition} \begin{proof} Following the lines of the previous proof, we may now apply the Cauchy--Schwarz inequality for the inner product $a(\cdot,\cdot)$ to obtain \begin{equation*} \beta \|e^{p,j}_N\|_Y \leq \|r^{1,j}_N\|_{X'} + \sqrt{\gamma_a} \|e^{u,j}_N\|_{X,\mu} + \frac{\sqrt{\gamma_m}}{\Delta t} \|e^{u,j}_N-e^{u,j-1}_N\|_\mu, \end{equation*} instead of (\ref{eq:bound_ep_j}). 
Proceeding as before, this yields \begin{multline*} {\textstyle \frac{1}{\Delta t}}\,m(e^{u,j}_N-e^{u,j-1}_N,e^{u,j}_N) + \|e^{u,j}_N\|^2_{X,\mu}\\ \leq \frac{1}{\beta} \|r^{1,j}_N\|_{X'}\|r^{2,j}_N\|_{Y'}+ \bigg(\frac{\|r^{1,j}_N\|_{X'}}{\sqrt{\alpha_a}} + \frac{\sqrt{\gamma_a}}{\beta}\|r^{2,j}_N\|_{Y'}\bigg) \|e^{u,j}_N\|_{X,\mu} \\ + \frac{1}{\Delta t}\frac{\sqrt{\gamma_m}}{\beta} \|r^{2,j}_N\|_{Y'} \|e^{u,j}_N-e^{u,j-1}_N\|_\mu, \end{multline*} and the statement again follows from applying Young's inequality, the sum $\sum_{j=1}^k$, {$e^{u,0}_N=0$}, and (\ref{eq:constants_LB_UB}). \end{proof} \subsubsection{$\varepsilon>0$} \label{sss:errbnds_penalty} We here derive rigorous upper bounds for the error $e^{\varepsilon}_N(\mu)$ measured in the ``spatio-temporal'' energy norm \begin{align} \label{eq:energy_norm_penalty} \|(v^j,q^j)_{j\in{\mathbb{K}}}\|_{\ell^2(0,k;Z)} \equiv \Bigg(\|v^k\|^2_\mu + \Delta t \sum_{j=1}^k \|v^j\|^2_{X,\mu} + \varepsilon\, \|q^j\|^2_{Y,\mu}\Bigg)^{1/2}, \end{align} where $(v^j,q^j)_{j\in{\mathbb{K}}}\subseteq Z$, $k\in{\mathbb{K}}$. In addition to the dual norms of the residuals (\ref{eq:residual_1_k}) and (\ref{eq:residual_2_k}), we here also rely on (Online-)efficient lower (and upper) bounds to the truth coercivity constants (\ref{eq:alpha_a}) and (\ref{eq:alpha_c}), \begin{align} \label{eq:alpha_LB} \setlength\arraycolsep{2pt} \begin{array}{ccccc} \alpha_a^{\rm LB}(\mu) & \leq & \alpha_a(\mu) & \leq & \alpha_a^{\rm UB}(\mu),\\[0.5ex] \alpha_c^{\rm LB}(\mu) & \leq & \alpha_c(\mu) & \leq & \alpha_c^{\rm UB}(\mu), \end{array} \quad\forall\;\mu\in{\mathcal D}, \end{align} to formulate our RB {\em a posteriori} error bounds. To demonstrate the differences to the case where $\varepsilon=0$, we recall the following result together with its proof (see \cite{Gerner:2012uq}). 
\begin{proposition} \label{prpstn:errbnds_penalty} For any given $\mu\in{\mathcal D}$, $N\in{\mathbb{N}_{\rm max}}$, $k\in{\mathbb{K}}$, and $\alpha_a^{\rm LB}(\mu)$, $\alpha^{\rm LB}_c(\mu)$ satisfying (\ref{eq:alpha_LB}), we define \begin{equation} \label{eq:energy_errbnd_penalty} \Delta^{\varepsilon,k}_N(\mu) \equiv \Bigg(\Delta t \sum_{j=1}^k \frac{\|r^{1,j}_{N}(\cdot;\mu)\|^2_{X'}}{\alpha^{\rm LB}_a(\mu)} + \frac{\|r^{2,j}_{N}(\cdot;\mu)\|^2_{Y'}}{\varepsilon \alpha^{\rm LB}_c(\mu)}\Bigg)^{1/2}. \end{equation} Then, $\Delta^{\varepsilon,k}_N(\mu)$ represents an upper bound for the error $e^{\varepsilon}_N(\mu)$ measured in the ``spatio-temporal'' energy norm (\ref{eq:energy_norm_penalty}), \begin{equation} \label{eq:rig_energy_errbnd_penalty} \|e^{\varepsilon}_N(\mu)\|_{\ell^2(0,k;Z)} \leq \Delta^{\varepsilon,k}_N(\mu) \quad\forall\; k\in{\mathbb{K}}, \;\mu\in{\mathcal D}, \;N\in{\mathbb{N}_{\rm max}}. \end{equation} \end{proposition} \begin{proof} Let $\mu$ be any parameter in ${\mathcal D}$, $N\in{\mathbb{N}_{\rm max}}$, and $k\in{\mathbb{K}}$. For clarity of exposition, we suppress the argument $\mu$ in this proof. Take any $1\leq j\leq k$. From (\ref{eq:residual_1_k}), (\ref{eq:residual_2_k}), and (\ref{eq:truth_scheme}), the errors $e^{u,j}_N\in X$ and $e^{p,j}_N\in Y$ satisfy the equations \begin{align*} {\textstyle \frac{1}{\Delta t}}\, m(e^{u,j}_N-e^{u,j-1}_N,v) + a(e^{u,j}_{N},v) + b(v,e^{p,j}_{N}) &= r^{1,j}_{N}(v)\quad \forall\;v\in X,\\ b(e^{u,j}_{N},q) -\varepsilon\, c(e^{p,j}_{N},q) & = r^{2,j}_{N}(q) \quad\forall\; q\in Y. 
\end{align*} Setting here $v=e^{u,j}_{N}$, $q=e^{p,j}_{N}$ and subtracting the second from the first equation, we obtain \begin{multline}\label{eq:inst_penalty_inequ_1} {\textstyle\frac{1}{\Delta t}}\, m(e^{u,j}_N-e^{u,j-1}_N,e^{u,j}_{N}) + \|e^{u,j}_{N}\|^2_{X,\mu} + \varepsilon\, \|e^{p,j}_{N}\|^2_{Y,\mu} = r^{1,j}_{N}(e^{u,j}_{N}) - r^{2,j}_{N}(e^{p,j}_{N})\\ \leq \|r^{1,j}_{N}\|_{X'} \|e^{u,j}_{N}\|_X + \|r^{2,j}_{N}\|_{Y'} \|e^{p,j}_{N}\|_Y. \end{multline} On the right-hand side, we now use (\ref{eq:alpha_a}), (\ref{eq:alpha_c}), and Young's inequality so that \begin{multline*} \|r^{1,j}_{N}\|_{X'} \|e^{u,j}_{N}\|_X + \|r^{2,j}_{N}\|_{Y'} \|e^{p,j}_{N}\|_Y\\ \leq \frac{\|r^{1,j}_{N}\|_{X'}}{\sqrt{\alpha_a}}\|e^{u,j}_{N}\|_{X,\mu} + \frac{\|r^{2,j}_{N}\|_{Y'}}{\sqrt{\alpha_c}}\|e^{p,j}_{N}\|_{Y,\mu}\\ \leq \frac{1}{2}\left(\frac{\|r^{1,j}_{N}\|^2_{X'}}{\alpha_a} + \|e^{u,j}_{N}\|^2_{X,\mu} + \frac{\|r^{2,j}_{N}\|^2_{Y'}}{\varepsilon \alpha_c} + \varepsilon\,\|e^{p,j}_{N}\|^2_{Y,\mu}\right); \end{multline*} on the left-hand side, we use the Cauchy--Schwarz inequality for the inner product $m(\cdot,\cdot)$ followed by Young's inequality so that \begin{align*} m(e^{u,j}_N-e^{u,j-1}_N, e^{u,j}_N) \geq \|e^{u,j}_N\|^2_\mu - \|e^{u,j-1}_N\|_\mu\|e^{u,j}_N\|_\mu \geq \frac{1}{2} \left(\|e^{u,j}_N\|^2_\mu-\|e^{u,j-1}_N\|^2_\mu\right). \end{align*} Rearranging terms, the inequality (\ref{eq:inst_penalty_inequ_1}) finally reads \begin{equation*} {\frac{1}{\Delta t}}\Big(\|e^{u,j}_N\|^2_\mu - \|e^{u,j-1}_N\|^2_\mu\Big) + \|e^{u,j}_{N}\|^2_{X,\mu} + \varepsilon\, \|e^{p,j}_{N}\|^2_{Y,\mu} \leq \frac{\|r^{1,j}_{N}\|^2_{X'}}{\alpha_a} + \frac{\|r^{2,j}_{N}\|^2_{Y'}}{\varepsilon \alpha_c}, \end{equation*} and the statement follows from applying the sum $\sum_{j=1}^k$, $e^{u,0}_N=0$, and (\ref{eq:alpha_LB}). \end{proof} Through the introduction of the penalty term, we thus obtain {\em a posteriori} error bounds that do not depend on inf-sup constants. 
However, we note that they depend on the penalty parameter $\varepsilon$: As $\varepsilon$ decreases and we approach the nonperturbed problem, (\ref{eq:energy_errbnd_penalty}) suggests growth of order $O(\textstyle \frac{1}{\sqrt{\varepsilon}})$. \subsection{Offline-Online Computational Procedure} \label{ss:offline_online} The efficiency of the RB method relies on an Offline-Online computational decomposition strategy. As this is by now standard, we shall only provide a brief summary at this point and refer the reader to, e.g., \cite{Grepl:2005kx,Rozza:2008fv} for further details. The procedure requires that all involved operators can be affinely expanded with respect to the parameter $\mu$. All $\mu$-independent quantities are formed and stored within a computationally expensive Offline stage, which is performed only once and whose cost depends on the large finite element dimension ${\mathcal N}$. For any given parameter $\mu\in{\mathcal D}$, the RB approximation $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$, is then computed within a highly efficient Online stage; the cost does not depend on ${\mathcal N}$ but only on the much smaller dimension of the RB approximation space. The computation of the {\em a posteriori} error bounds consists of two components: the calculation of the residual dual norms $\|r^{1,k}_{N}(\cdot;\mu)\|_{X'}$, $\|r^{2,k}_{N}(\cdot;\mu)\|_{Y'}$, $k\in{\mathbb{K}}$, and the calculation of the required lower and upper bounds (\ref{eq:constants_LB_UB}) and (\ref{eq:alpha_LB}), respectively, to the involved constants. The former is again an application of now standard RB techniques that can be found in \cite{Grepl:2005kx,Rozza:2008fv}. The latter is achieved by a successive constraint method (SCM) as proposed in \cite{DBP-Huynh:2007ez}; we also refer the reader to \cite{Gerner:2011fk} for details in our saddle point context. 
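To illustrate the Offline-Online split, suppose an operator admits an affine expansion $a(\cdot,\cdot;\mu)=\sum_{q=1}^{Q}\theta^q(\mu)\,a^q(\cdot,\cdot)$. The reduced component matrices are then formed once Offline, and each Online query assembles only a $Q$-term sum of small matrices. The following is a minimal sketch; the coefficient functions $\theta^q$ and all matrices are hypothetical.

```python
import numpy as np

# Offline-Online split for an affinely parametrized operator: the
# mu-independent reduced matrices A_N^q are precomputed once (Offline, cost
# depends on n); each new mu requires only a Q-term sum of N x N matrices
# (Online, independent of n).
rng = np.random.default_rng(2)
n, N, Q = 500, 8, 3

A_q = [np.diag(rng.uniform(1.0, 2.0, n)) for _ in range(Q)]  # truth components
Z, _ = np.linalg.qr(rng.standard_normal((n, N)))             # RB basis

# Offline: project each affine component once.
A_N_q = [Z.T @ Aq @ Z for Aq in A_q]

def theta(mu):
    # illustrative affine coefficient functions theta^q(mu)
    return [1.0, mu, mu**2]

def assemble_online(mu):
    # Online: O(Q * N^2) work, no n-dimensional operation.
    return sum(t * AN for t, AN in zip(theta(mu), A_N_q))

mu = 0.7
A_N_online = assemble_online(mu)
# Direct projection of the assembled truth operator, for comparison only:
A_N_direct = Z.T @ sum(t * Aq for t, Aq in zip(theta(mu), A_q)) @ Z
```

By linearity of the projection, both assembly routes yield the same reduced matrix; only their cost differs.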
\subsection{Construction of Reduced Basis Approximation Spaces} \label{ss:construction_RB_spaces} We now turn to the construction of the RB approximation spaces $X_N, Y_N$, $N\in{\mathbb{N}_{\rm max}}$. The low-dimensional spaces $X_N, Y_N$ are constructed by exploiting the parametric structure of the problem: According to the so-called Lagrange approach, basis functions are essentially given by truth solutions associated with several chosen parameter snapshots. However, in our time-dependent setting, $X_N$ and $Y_N$ not only have to appropriately represent the submanifold induced by the parametric dependence but also need to capture the causality associated with evolution in time to provide accurate approximations $(u^{\varepsilon,k}_N(\mu), p^{\varepsilon,k}_N(\mu))$ for $(u^{\varepsilon,k}(\mu),p^{\varepsilon,k}(\mu))$, $k\in{\mathbb{K}}$, for any parameter query. Keeping computational cost to a minimum, we aim to achieve this with as few basis functions as possible. The POD greedy procedure represents an adaptive sampling process for parabolic problems that properly accounts for temporal and parametric causality: It combines the proper orthogonal decomposition (POD) method in $k$ (see \cite{Kunisch:2001fk,Kunisch:2002bh}) with the greedy procedure in $\mu$ (see \cite{dahmen10:_greedy,Buffa:kx} and \cite{Gerner:2011fk}). To begin with, we briefly recall the optimality property of the POD as described in \cite{Kunisch:2001fk,Kunisch:2002bh}. 
For a given finite set $\mathcal{X}_\mathcal{I}\equiv \{\chi_1,\ldots,\chi_{\mathcal{I}}\}\subseteq X$ and $M_X\leq {\rm dim}({\rm span}(\mathcal{X}_{\mathcal{I}}))$, the POD basis of rank $M_X$ consists of $M_X$ $(\cdot,\cdot)_X$-orthonormal basis functions that approximate $\mathcal{X}_{\mathcal{I}}$ best in the sense that \begin{equation*} {\rm span}({\rm POD}_X(\mathcal{X}_{\mathcal{I}},M_X)) = \arg \inf_{\substack{\mathcal{X}\,\subseteq\, {\rm span}(\mathcal{X}_{\mathcal{I}})\\ {\rm dim}(\mathcal{X})=M_X}} \left(\frac{1}{\mathcal{I}}\sum_{i=1}^{\mathcal{I}} \inf_{\chi \in \mathcal{X}} \|\chi_i-\chi\|^2_X\right)^{1/2}; \end{equation*} analogously, we denote by ${\rm POD}_Y(\mathcal{Y}_{\mathcal{I}},M_Y)$ the POD basis of rank $M_Y$ for a finite set \mbox{$\mathcal{Y}_{\mathcal{I}}\subseteq Y$}, $M_Y\leq {\rm dim}({\rm span}(\mathcal{Y}_{\mathcal{I}}))$. Assuming that we are given a current pair $(X_{N-1}, Y_{N-1})$ of RB approximation spaces, the POD greedy algorithm now proceeds as follows: In compliance with the greedy approach, it detects the parameter $\mu_N$ for which the (Online-)efficient RB error bound attains its maximum over an exhaustive sample $\Sigma\subset {\mathcal D}$. For a prescribed $\Delta N\in{\mathbb{K}}$, we then compute the POD bases of rank $\Delta N$ associated with the truth solutions $u^{\varepsilon,k}(\mu_N)$ and $p^{\varepsilon,k}(\mu_N)$, $k\in{\mathbb{K}}$; more specifically, we compute ${\rm POD}_X(E^u,\Delta N)$ and ${\rm POD}_Y(E^p,\Delta N)$ for \begin{align*} E^u &\equiv \{\, u^{\varepsilon,k}(\mu_N) - \Pi_{X_{N-1}} u^{\varepsilon,k}(\mu_N) \mid k\in{\mathbb{K}} \,\},\\ E^p &\equiv \{\, p^{\varepsilon,k}(\mu_N) - \Pi_{Y_{N-1}} p^{\varepsilon,k}(\mu_N) \mid k\in{\mathbb{K}} \,\}, \end{align*} where $\Pi_{X_{N-1}}$ and $\Pi_{Y_{N-1}}$ refer to the $(\cdot,\cdot)_X$- and $(\cdot,\cdot)_Y$-orthogonal projections on the current RB approximation spaces $X_{N-1}$ and $Y_{N-1}$, respectively. 
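The POD bases in this step can be computed by the method of snapshots: the leading eigenvectors of the snapshot correlation matrix yield the optimal subspace. A minimal sketch, with the Euclidean inner product standing in for $(\cdot,\cdot)_X$:

```python
import numpy as np

def pod(snapshots, M):
    """POD basis of rank M for the columns of `snapshots`
    (method of snapshots): solve the correlation eigenproblem
    C v = lam v with C = S^T S / I and lift the M leading
    eigenvectors back to orthonormal modes."""
    S = np.asarray(snapshots, dtype=float)
    I = S.shape[1]
    C = S.T @ S / I
    lam, V = np.linalg.eigh(C)              # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:M]         # pick the M largest
    modes = S @ V[:, idx]
    return modes / np.linalg.norm(modes, axis=0)

# Two snapshots spanning a one-dimensional subspace:
# a single mode reproduces them exactly.
S = np.array([[1.0, 2.0], [1.0, 2.0], [0.0, 0.0]])
Phi = pod(S, 1)
```

For a general inner product $(\cdot,\cdot)_X$ one would replace $S^TS$ by $S^T X S$ with the Gram matrix $X$; the structure of the computation is unchanged.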
Finally, the $\Delta N$ POD basis functions are appended to $X_{N-1}$ and $Y_{N-1}$, and we obtain a subsequent pair $(X_N,Y_N)$. This process is then repeated until a prescribed error tolerance is satisfied. We refer the reader to \cite{Grepl:2005kx,Haasdonk:2011ffk,haasdonk08:_reduc} for a detailed discussion of the POD greedy procedure, and to \cite{knezevic09:_reduc_basis_approx_poster_error,knezevic10:_certif_reduc_basis_method_fokker} for an application to the Boussinesq and Fokker--Planck equations. \begin{algorithm}[!bp] \caption{Adaptive Sampling Procedure for $\varepsilon=0$} \label{alg:modified_POD} \begin{algorithmic}[1] \STATE Choose $\Sigma \subset {\mathcal D}$, $\delta_{\rm tol}, \delta^\beta_{\rm tol}\in(0,1)$, $\Delta N\in{\mathbb{K}}$, and $\mu_1 \in\Sigma$ \STATE Set $N\leftarrow 0$, ${\mathcal D}_N\leftarrow \{ \}$, ${\mathcal D}'\leftarrow \{ \}$, $N_Y \leftarrow 0$, $Y_N \leftarrow \{ \}$, $N_X \leftarrow 0$, $X_N\leftarrow \{ \}$ \REPEAT \STATE $N \leftarrow N+1$, ${\mathcal D}_{N} \leftarrow {\mathcal D}_{N-1} \cup \{\mu_N\}$ \STATE $E^p = \{\, p^{k}(\mu_N) - \Pi_{Y_{N-1}} p^{k}(\mu_N) \mid k\in{\mathbb{K}} \,\}$ \STATE $N_Y \leftarrow N_Y+\Delta N$, $Y_N \leftarrow Y_{N-1} \oplus {\rm span}({\rm POD}_Y(E^p,\Delta N))$ \IF {$\mu_N\notin {\mathcal D}'$,} \STATE $E^u = \{\, u^{k}(\mu_N) - \Pi_{X_{N-1}} u^{k}(\mu_N) \mid k\in{\mathbb{K}} \,\}$ \STATE $N_X \leftarrow N_X+\Delta N$, $X_N \leftarrow X_{N-1} \oplus {\rm span}({\rm POD}_X(E^u,\Delta N))$ \ENDIF \WHILE {({\bf true})} \FORALL {$\mu \in \Sigma$} \STATE Compute $(u^k_N(\mu),p^k_N(\mu))$, $k\in{\mathbb{K}}$, $\Delta_N(\mu)$, and \STATE $\hat d^\beta_N(\mu) \equiv \max \left\{\,\frac{\beta^{\rm UB}(\mu) - \beta_N(\mu)}{\beta^{\rm UB}(\mu)},0\,\right\}$ (cf.~(\ref{eq:dist_betaN})) \ENDFOR \STATE $ \mu'_{N} \equiv \arg \max_{\mu \in \Sigma}\; \Delta_N(\mu)$, $ \mu^* \equiv \arg \max_{\mu\in\Sigma} \hat d^\beta_N(\mu)$ \IF {$\hat d^\beta_N(\mu^*) < \delta^\beta_{\rm tol}$,} \STATE 
$\mu_{N+1} \equiv \mu'_N$ \STATE {\bf break} \ENDIF \IF {$ \min_{\mu\in {\mathcal D}'\cup{\mathcal D}_N} \frac{|\mu'_N-\mu|}{|\mu|} \geq 0.1\%$,} \STATE ${\mathcal D}'\leftarrow {\mathcal D}' \cup \{\mu'_N\}$ \STATE $E^u = \{\, u^{k}(\mu'_N) - \Pi_{X_{N}} u^{k}(\mu'_N) \mid k\in{\mathbb{K}} \,\}$ \STATE $N_X\leftarrow N_X+\Delta N$, $X_N \leftarrow X_{N} \oplus {\rm span}({\rm POD}_X(E^u,\Delta N))$ \ELSE \STATE $N_X\leftarrow N_X+1$, $X_N \leftarrow X_{N}\oplus {\rm span}\{\,T_{\mu^*} \varrho_N(\mu^*) \,\}$ (see (2.28), (2.36) in \cite{Gerner:2011fk}) \ENDIF \ENDWHILE \UNTIL {$\Delta_N(\mu_{N+1}) < \delta_{\rm tol}$} \STATE $N_{\rm max} \leftarrow N$ \end{algorithmic} \end{algorithm} \begin{algorithm}[!bp] \caption{Adaptive Sampling Procedure for $\varepsilon>0$} \label{alg:modified_POD_penalty} \begin{algorithmic}[1] \STATE Choose $\Sigma \subset {\mathcal D}$, $\delta_{\rm tol}\in(0,1)$, $\delta^\kappa_{\rm tol}>0$, $\Delta N\in{\mathbb{K}}$, and $\mu_1 \in\Sigma$ \STATE Set $N\leftarrow 0$, ${\mathcal D}_N\leftarrow \{ \}$, ${\mathcal D}'\leftarrow \{ \}$, $N_Y \leftarrow 0$, $Y_N \leftarrow \{ \}$, $N_X \leftarrow 0$, $X_N\leftarrow \{ \}$ \REPEAT \STATE $N \leftarrow N+1$, ${\mathcal D}_{N} \leftarrow {\mathcal D}_{N-1} \cup \{\mu_N\}$ \STATE $E^p = \{\, p^{\varepsilon,k}(\mu_N) - \Pi_{Y_{N-1}} p^{\varepsilon,k}(\mu_N) \mid k\in{\mathbb{K}} \,\}$ \STATE $N_Y \leftarrow N_Y+\Delta N$, $Y_N \leftarrow Y_{N-1} \oplus {\rm span}({\rm POD}_Y(E^p,\Delta N))$ \IF {$\mu_N\notin {\mathcal D}'$,} \STATE $E^u = \{\, u^{\varepsilon,k}(\mu_N) - \Pi_{X_{N-1}} u^{\varepsilon,k}(\mu_N) \mid k\in{\mathbb{K}} \,\}$ \STATE $N_X \leftarrow N_X+\Delta N$, $X_N \leftarrow X_{N-1} \oplus {\rm span}({\rm POD}_X(E^u,\Delta N))$ \ENDIF \WHILE {({\bf true})} \FORALL {$\mu \in \Sigma$} \STATE Compute $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu))$, $k\in{\mathbb{K}}$, $\Delta_N(\mu)$, and $\kappa^\varepsilon_N(\mu)$ (see (\ref{eq:kappa_N})) \ENDFOR \STATE $ \mu'_{N} \equiv \arg 
\max_{\mu \in \Sigma}\; \Delta_N(\mu)$, $ \mu^* \equiv \arg \max_{\mu\in\Sigma} \kappa^\varepsilon_N(\mu)$ \IF {$\kappa^\varepsilon_N(\mu^*) < \delta^\kappa_{\rm tol}$,} \STATE $\mu_{N+1} \equiv \mu'_N$ \STATE {\bf break} \ENDIF \IF {$ \min_{\mu\in {\mathcal D}'\cup{\mathcal D}_N} \frac{|\mu'_N-\mu|}{|\mu|} \geq 0.1\%$,} \STATE ${\mathcal D}'\leftarrow {\mathcal D}' \cup \{\mu'_N\}$ \STATE $E^u = \{\, u^{\varepsilon,k}(\mu'_N) - \Pi_{X_N} u^{\varepsilon,k}(\mu'_N) \mid k\in{\mathbb{K}} \,\}$ \STATE $N_X\leftarrow N_X+\Delta N$, $X_N \leftarrow X_{N} \oplus {\rm span}({\rm POD}_X(E^u,\Delta N))$ \ELSE \STATE $N_X\leftarrow N_X+1$, $X_N \leftarrow X_{N}\oplus {\rm span}\{\,T_{\mu^*} \varrho_N(\mu^*) \,\}$ (see (2.28), (2.36) in \cite{Gerner:2011fk}) \ENDIF \ENDWHILE \UNTIL {$\Delta_N(\mu_{N+1}) < \delta_{\rm tol}$} \STATE $N_{\rm max} \leftarrow N$ \end{algorithmic} \end{algorithm} For our saddle point problems, we now couple the above procedure with stabilization techniques developed in \cite{Gerner:2011fk}; here (see also \cite{Gerner:2012fk}), best convergence results were achieved by Algorithm~3 that aims to stabilize $X_N, Y_N$ adaptively through an enrichment of the primal RB approximation space with additional truth solutions. According to these observations, we now apply the sampling procedures presented in Algorithm~\ref{alg:modified_POD} and Algorithm~\ref{alg:modified_POD_penalty}. In case of $\varepsilon=0$, we use the distance $d^\beta_N(\mu)$ (see~\cite{Gerner:2011fk}), \begin{equation}\label{eq:dist_betaN} d^\beta_N(\mu) \equiv \max\left\{\frac{\beta(\mu)-\beta_N(\mu)}{\beta(\mu)},0\right\}, \quad\mu\in{\mathcal D}, \end{equation} of the inf-sup constants $\beta_N(\mu)$ to the truth inf-sup constants $\beta(\mu)$ as an indicator whether a current pair of RB approximation spaces needs to be stabilized; the exact procedure is given in Algorithm~\ref{alg:modified_POD}. 
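The relative-distance test $\min_{\mu\in {\mathcal D}'\cup{\mathcal D}_N} |\mu'_N-\mu|/|\mu| \geq 0.1\%$ appearing in both sampling algorithms prevents near-duplicate parameters from triggering a full POD enrichment; a minimal sketch for a scalar parameter:

```python
def needs_full_enrichment(mu_new, visited, rel_tol=1e-3):
    """Relative-distance test from the sampling algorithms: enrich with a
    full POD of truth snapshots only if mu_new is at least 0.1% (relative)
    away from every previously visited parameter; otherwise a single
    stabilizing basis function is appended instead."""
    return min(abs(mu_new - mu) / abs(mu) for mu in visited) >= rel_tol

visited = [1.0, 2.0, 4.0]
```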
In case of $\varepsilon>0$, numerical results in \cite{Gerner:2012ag} showed that the inf-sup constants $\beta_N(\mu)$ may not be appropriate indicators of an ill-conditioned system; instead, an adaptive sampling process should rather be based on the condition number $\kappa^\varepsilon_N(\mu)$, \begin{equation} \label{eq:kappa_N} \kappa^\varepsilon_N(\mu) \equiv \frac{\sigma^{\varepsilon,\rm max}_N(\mu)}{\sigma^{\varepsilon,\rm min}_N(\mu)},\quad \varepsilon>0,\quad\mu\in{\mathcal D},\; N\in{\mathbb{N}_{\rm max}}; \end{equation} here, $\sigma^{\varepsilon,\rm max}_N(\mu)$ and $\sigma^{\varepsilon,\rm min}_N(\mu)$ denote the maximum and minimum singular values of the corresponding RB system matrix, respectively. Algorithm~\ref{alg:modified_POD_penalty} presents one possible realization of this approach. \section{Model Problem} \label{s:model_problem} We consider a Stokes flow in a two-dimensional microchannel with an obstacle as introduced in \cite{veroy10:_stokes_penalty}; evolution in time is now induced by a time-dependent velocity profile on the inflow boundary. Let $\mu$ be any parameter in ${\mathcal D}$.
For the physical domain $\tilde\Omega$ and a given time interval $[0,T]$, $T>0$, we now seek to find the (inhomogeneous) velocity $\tilde u_{\rm e, inh}:\tilde\Omega\times (0,T)\to\mathbb{R}^2$ and the pressure $\tilde p_{\rm e}:\tilde\Omega\times (0,T)\to\mathbb{R}$ satisfying \begin{align} \label{eq:strong_inh_mom} \frac{\partial \tilde u_{\rm e, inh}}{\partial t} - \tilde\Delta \tilde u_{\rm e, inh} + \tilde\nabla \tilde p_{\rm e} &= 0 \quad\text{in }\tilde\Omega\times (0,T),\\ \label{eq:strong_inh_cont} \tilde \nabla\cdot \tilde u_{\rm e,inh} &= 0 \quad\text{in }\tilde\Omega\times (0,T), \end{align} subject to initial conditions $\tilde u_{\rm e, inh}(\cdot,0) = 0$ and with boundary conditions \begin{align} \label{eq:inh_bc} \begin{split} &\tilde u_{\rm e, inh}(\tilde x,t) = H(t) h(\tilde x) \quad\text{on } \Gamma_{\rm in} \times (0,T),\quad \tilde{u}_{\rm e, inh}=0 \quad\text{on }\tilde\Gamma_0 \times (0,T), \\ &\hspace{20ex}\frac{\partial \tilde u_{\rm e, inh}}{\partial \tilde n} = \tilde p_{\rm e} \tilde n \quad\text{on }\Gamma_{\rm out} \times (0,T); \end{split} \end{align} here, $\tilde\Delta$ and $\tilde\nabla$ denote the Laplacian and gradient operator over the physical domain $\tilde\Omega$, $\tilde n$ is the unit outward normal, $h:\mathbb{R}^2\to\mathbb{R}^2$ is given by $h(x) \equiv (4x_2(1-x_2),0)$ for all $x=(x_1,x_2)\in \mathbb{R}^2$, and we choose $H:[0,T]\to\mathbb{R}$ with $H(t) \equiv t (\sin(2\pi t)+1)$ for all $t\in[0,T]$. According to the setting introduced in \cite{veroy10:_stokes_penalty}, we also consider the following perturbation of the problem (\ref{eq:strong_inh_mom})--(\ref{eq:inh_bc}): For a sufficiently small $\varepsilon>0$, we introduce a penalty term into the continuity equation (\ref{eq:strong_inh_cont}) such that \begin{equation}\label{eq:strong_inh_cont_penalty} \tilde \nabla\cdot \tilde u^\varepsilon_{\rm e,inh} = -\varepsilon\, p^\varepsilon_{\rm e} \quad\text{in }\tilde\Omega\times (0,T). 
\end{equation} We now follow the steps discussed in \cite{veroy10:_stokes_penalty}: We choose the lifting function $\tilde u^H_{\rm L} \equiv H \tilde u_{\rm L}$ where $\tilde u_{\rm L}$ is defined as in \cite{veroy10:_stokes_penalty}, and transform the problem statement for the homogeneous velocity $\tilde u^\varepsilon_{\rm e} \equiv \tilde u^\varepsilon_{\rm e, inh}-\tilde u^H_{\rm L}$ to an equivalent problem posed over the reference domain $\Omega$. Furthermore, as required for the time-discrete setting introduced in \S \ref{s:general_problem_statement}, we divide the time interval $[0,T]$ into $K$ subintervals of equal length $\Delta t \equiv T/K$, and consider a backward Euler method for time integration. The problems (\ref{eq:strong_inh_mom})--(\ref{eq:inh_bc}) and (\ref{eq:strong_inh_mom}), (\ref{eq:strong_inh_cont_penalty}), (\ref{eq:inh_bc}) may thus be written as a parametrized saddle point problem of the form (\ref{eq:exact_scheme}). Here, for any $\mu\in{\mathcal D}$, the bilinear forms $a(\cdot,\cdot;\mu)$, $b(\cdot,\cdot;\mu)$, and $c(\cdot,\cdot;\mu)$ are given as in \cite{veroy10:_stokes_penalty}; accordingly, the bilinear form $m(\cdot,\cdot;\mu):{X_{\rm e}}\times {X_{\rm e}} \to\mathbb{R}$ represents the $L^2$-inner product for vector functions over the physical domain $\tilde\Omega$ formulated on the reference domain $\Omega$, \begin{equation*} m(u,v;\mu) = \sum_{s=1}^S \frac{1}{|{\rm det}(A^s(\mu))|} \int_{\Omega^s} u\cdot v\,dx \quad\forall\; u,v\in {X_{\rm e}}, \end{equation*} and the linear functionals $f(\cdot;\mu)$ and $g(\cdot;\mu)$ are given by \begin{align*} f(v,t;\mu) &= f(v,t) = -H'(t)\int_{\Omega_{\rm L}} u_{\rm L}\cdot v\,dx - H(t) \int_{\Omega_{\rm L}}\frac{\partial u_{{\rm L}i}}{\partial x_j} \frac{\partial v_i}{\partial x_j}\,dx,\\ g(q,t;\mu) &= g(q,t) = H(t) \int_{\Omega_{\rm L}} q \frac{\partial u_{{\rm L}i}}{\partial x_i}\,dx \end{align*} for all $v\in{X_{\rm e}}$, $q\in{Y_{\rm e}}$, $t\in[0,T]$. 
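With backward Euler in time, each time level then requires the solution of one penalized saddle point system. A schematic sketch with generic small matrices standing in for the truth finite element operators (sign conventions as in the error analysis of \S \ref{s:RB_method}; the matrices below are illustrative, not the actual discretization):

```python
import numpy as np

def backward_euler_step(M, A, B, C, u_prev, f, g, dt, eps):
    """One backward Euler step of the penalized scheme:
        (M/dt + A) u + B^T p = f + M u_prev / dt,
        B u - eps C p        = g.
    The eps-block makes the system invertible without inf-sup arguments."""
    n = A.shape[0]
    K = np.block([[M / dt + A, B.T],
                  [B, -eps * C]])
    rhs = np.concatenate([f + M @ u_prev / dt, g])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Illustrative 2-velocity / 1-pressure system.
M = np.eye(2); A = 2 * np.eye(2)
B = np.array([[1.0, 1.0]]); C = np.eye(1)
u, p = backward_euler_step(M, A, B, C, np.zeros(2),
                           np.ones(2), np.zeros(1), dt=0.1, eps=1e-2)
```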
We recall that the bilinear forms $a(\cdot,\cdot;\mu)$, $b(\cdot,\cdot;\mu)$, and $c(\cdot,\cdot;\mu)$ then satisfy the assumptions (\ref{eq:gamma_a_e})--(\ref{eq:brezzi_infsup_e}), (\ref{eq:gamma_c_e}), and (\ref{eq:alpha_c_e}). For all $\mu\in{\mathcal D}$, $m(\cdot,\cdot;\mu)$ defines an inner product on ${X_{\rm e}}$ such that (\ref{eq:pos_def_m_e}) holds true; moreover, there exists a constant $C^{\rm e}(\mu)>0$ from the Poincar\'{e} inequality (see, e.g., \cite{Quarteroni:2008fk}) such that \begin{equation*} m(v,v;\mu) \leq C^{\rm e}(\mu) \, a(v,v;\mu) \quad\forall\; v\in {X_{\rm e}} \quad\forall\;\mu\in{\mathcal D}, \end{equation*} and thus (\ref{eq:gamma_m_e}) is satisfied with \begin{equation*} \gamma^{\rm e}_m(\mu) \equiv \sup_{u\in X_{\rm e}}\sup_{v\in {X_{\rm e}}} \frac{m(u,v;\mu)}{\|u\|_{X_{\rm e}} \|v\|_{X_{\rm e}}} \leq C^{\rm e}(\mu) \,\gamma^{\rm e}_a(\mu) < \infty \quad\forall\;\mu\in{\mathcal D}. \end{equation*} Choosing the truth approximation spaces $X$ and $Y$ as the standard conforming $\mathbb{P}_2$-$\mathbb{P}_1$ Taylor--Hood finite element approximation subspaces \cite{Taylor:1973fk} over the regular triangulation ${\mathcal T}_\Omega$, we ensure that also (\ref{eq:LBB}) is satisfied (see, e.g., \cite{Brezzi:1991fk,Ern:2004jl,Raviart:1986uq,Quarteroni:2008fk}) and therefore recover the situation described in \S \ref{ss:truth}. \section{Numerical Results} \label{s:numerical_results} We now apply the RB methodology developed in \S \ref{s:RB_method} to our model problem introduced in \S \ref{s:model_problem}. We set $T=1$ and consider a constant time step size $\Delta t$ corresponding to $K=100$ time levels. The truth discretization is based on a fine mesh with a total of ${\mathcal N} =$ 72,076 velocity and pressure degrees of freedom. 
In this section, all numerical results are attained using the open source software \texttt{rbOOmit}~\cite{Knezevic:2011fk}, an implementation of the RB framework within the C++ parallel finite element library \texttt{libMesh}~\cite{libMeshPaper}. \subsection{$\varepsilon=0$} We first turn to the coercivity, continuity, and inf-sup constants required for our RB procedure. We obtain (Online-)efficient lower and upper bounds to $\alpha_a(\mu)$, $\gamma_a(\mu)$, and $\beta(\mu)$ by using the SCM (see \S \ref{ss:offline_online}) with the configurations specified in \cite{Gerner:2011fk}. To estimate the continuity constants $\gamma_m(\mu)$, we apply the method for $M_\alpha=\infty$, $M_+=0$, an exhaustive sample $\Xi\subset{\mathcal D}$ of size $|\Xi|$ = 4,225, and the SCM tolerance $\epsilon=0.01$ (see \cite{DBP-Huynh:2007ez}). We then obtain accurate (Online-)efficient lower and upper bounds $\gamma_m^{\rm LB}(\mu)$ and $\gamma^{\rm UB}_m(\mu)$ with $K_{\rm max}=5$. We now turn to the RB approximation. To build our low-dimensional RB approximation spaces $X_N, Y_N$, $N\in{\mathbb{N}_{\rm max}}$, we apply the POD greedy procedure described in Algorithm~\ref{alg:modified_POD} (see \S \ref{ss:construction_RB_spaces}). The sampling process is based on an exhaustive random sample $\Sigma\subset{\mathcal D}$ of size $|\Sigma|=$ 4,900, $\Delta N=2$, and $\delta^\beta_{\rm tol}=0.1$; since our Stokes model problem is clearly symmetric, we here in particular use the relative RB {\em a posteriori} error bound $\Delta_N(\mu)\equiv \Delta^{{\rm sym},K}_N(\mu) / \|(u^j_N(\mu))_{j\in{\mathbb{K}}}\|_{\ell^2(0,K;X)}$ (see (\ref{eq:energy_norm}), (\ref{eq:energy_errbnd_sym})). 
\begin{figure}
\caption{Maximum error $\|e^{u}_N(\mu)\|_{\ell^2(0,K;X)}$ in the RB velocity approximations and associated error bounds $\Delta^{{\rm sym},K}_N(\mu)$ and $\Delta^{K}_N(\mu)$ as functions of the dimension $N_Z$.}
\label{fig:err_errbnd_K}
\end{figure}
\begin{figure}
\caption{Maximum error $\|e^{u}_N(\mu)\|_{\ell^2(0,k;X)}$ and associated error bounds $\Delta^{{\rm sym},k}_N(\mu)$, $\Delta^{k}_N(\mu)$ as functions of $k\in{\mathbb{K}}$ for several values of $N_Z$.}
\label{fig:err_errbnd_N}
\end{figure}
\begin{table}[htp] \centering \footnotesize
\begin{tabular}{@{}c|c|c|c|c|c|c|c@{}}
\multicolumn{8}{l}{(a) Effectivities $\eta^{{\rm sym},k}_N(\mu)$ associated with $\Delta^{{\rm sym},k}_N(\mu)$}\\[1ex]
\hline
$N$ & $N_Z$ & $k=10$ & $k=20$ & $k=40$ & $k=60$ & $k=80$ & $k=100$ \\
\hline
$5$ & $36$ & $28.90$ & $30.38$ & $31.30$ & $31.61$ & $31.63$ & $31.13$\\
$10$ & $63$ & $30.41$ & $31.35$ & $32.20$ & $32.37$ & $32.16$ & $31.91$\\
$15$ & $93$ & $25.83$ & $28.05$ & $29.39$ & $29.51$ & $29.44$ & $29.38$\\
$20$ & $109$ & $23.31$ & $24.25$ & $26.79$ & $27.23$ & $27.08$ & $27.21$\\
$25$ & $142$ & $25.29$ & $28.15$ & $29.54$ & $29.73$ & $29.66$ & $29.64$\\
$30$ & $177$ & $26.28$ & $26.05$ & $28.77$ & $30.58$ & $30.70$ & $30.60$\\
$35$ & $201$ & $24.77$ & $24.86$ & $26.68$ & $27.36$ & $27.51$ & $27.81$\\
$40$ & $222$ & $24.18$ & $23.96$ & $24.19$ & $25.03$ & $25.51$ & $25.54$\\
\hline
\end{tabular}\\[1ex]
\begin{tabular}{@{}c|c|c|c|c|c|c|c@{}}
\multicolumn{8}{l}{(b) Effectivities $\eta^{k}_N(\mu)$ associated with $\Delta^k_N(\mu)$}\\[1ex]
\hline
$N$ & $N_Z$ & $k=10$ & $k=20$ & $k=40$ & $k=60$ & $k=80$ & $k=100$ \\
\hline
$5$ & $36$ & $39.58$ & $41.51$ & $42.76$ & $43.20$ & $43.22$ & $42.53$\\
$10$ & $63$ & $39.90$ & $41.66$ & $42.82$ & $43.21$ & $43.00$ & $42.90$\\
$15$ & $93$ & $38.58$ & $43.21$ & $44.64$ & $45.13$ & $45.14$ & $45.07$\\
$20$ & $109$ & $32.59$ & $34.10$ & $37.11$ & $37.60$ & $37.17$ & $37.19$\\
$25$ & $142$ & $35.52$ & $39.26$ & $42.93$ & $42.73$ & $42.63$ & $43.55$\\
$30$ & $177$ & $34.31$ & $34.27$ & $36.41$ & $37.31$ & $39.67$ & $40.64$\\
$35$ & $201$ & $32.86$ & $33.59$ & $36.52$ & $37.39$ & $36.65$ & $37.54$\\
$40$ & $222$ & $33.67$ & $33.76$ & $35.25$ & $35.42$ & $34.78$ & $35.17$\\
\hline
\end{tabular}\\[1ex]
\caption{Maximum effectivities (a)~$\eta^{{\rm sym},k}_N(\mu)\equiv
\Delta^{{\rm sym},k}_N(\mu)/\|e^{u}_N(\mu)\|_{\ell^2(0,k;X)}$ (see (\ref{eq:rig_energy_errbnd_sym})) and (b)~$\eta^{k}_N(\mu)\equiv \Delta^{k}_N(\mu)/\|e^{u}_N(\mu)\|_{\ell^2(0,k;X)}$ (see (\ref{eq:rig_energy_errbnd})) for several values of $k\in{\mathbb{K}}$ and $N$; the maximum is taken over 25 parameter values.} \label{tbl:effectivities} \end{table} Figure~\ref{fig:err_errbnd_K} now shows the maximum error $\|e^{u}_N(\mu)\|_{\ell^2(0,K;X)}$ (see (\ref{eq:def_errors})) in the RB velocity approximations and associated error bounds $\Delta^{{\rm sym},K}_N(\mu)$ and $\Delta^{K}_N(\mu)$ (see (\ref{eq:energy_errbnd})) as functions of the dimension $N_Z$; Figure~\ref{fig:err_errbnd_N} presents the maximum error $\|e^{u}_N(\mu)\|_{\ell^2(0,k;X)}$ and associated error bounds $\Delta^{{\rm sym},k}_N(\mu)$, $\Delta^k_N(\mu)$ as functions of $k\in{\mathbb{K}}$ for several values of $N_Z$. First, we observe that the RB error and error bounds are roughly uniform in time (see Fig.~\ref{fig:err_errbnd_N}) and decrease rapidly as $N_Z$ increases (see Fig.~\ref{fig:err_errbnd_K}). We obtain stable, rapidly convergent RB approximations, and rigorous {\em a posteriori} error bounds that reflect the behavior of the error very accurately. Second, the error bounds are tight. To quantify this statement, we present in Table~\ref{tbl:effectivities} maximum effectivities associated with $\Delta^{{\rm sym},k}_N(\mu)$ and $\Delta^k_N(\mu)$ for several values of $k$ and $N$. We notice that their values remain more or less constant with $k$. Moreover, as in the stationary case (see \cite{Gerner:2012fk}), we benefit from exploiting the symmetry of the problem: Effectivities range from 33 to 45 in case of $\Delta^k_N(\mu)$ (see Table~\ref{tbl:effectivities}(b)) and improve in case of $\Delta^{{\rm sym},k}_N(\mu)$ by roughly 10 (see Table~\ref{tbl:effectivities}(a)). 
We emphasize at this point that the error bound formulations in (\ref{eq:energy_errbnd}) and (\ref{eq:energy_errbnd_sym}) in fact suggest a growth in time. In practice, this behavior appears rather weak (see Table~\ref{tbl:effectivities}) and may be investigated in greater detail in future work. We now discuss the Online computation times for the proposed method. For comparison, once the $\mu$-independent parts in the affine expansions of the involved operators have been formed (see \S \ref{ss:offline_online}), direct computation of the truth approximation $(u^k(\mu),p^k(\mu)), k\in{\mathbb{K}}$, (i.e., assembly and solution of (\ref{eq:truth_scheme})) requires roughly 30 seconds on a 2.66 GHz Intel Core 2 Duo processor. We initially take a total RB dimension of $N_Z=226$. Once the database has been loaded, the Online calculation of $(u^k_N(\mu), p^k_N(\mu)), k\in{\mathbb{K}}$, (i.e., assembly and solution of (\ref{eq:rb_scheme})) and $\Delta^{{\rm sym},k}_N(\mu), k\in{\mathbb{K}}$, for any new value of $\mu\in{\mathcal D}$ takes on average 27.97 and 80.76 milliseconds, respectively, which is in total roughly 270 times faster than direct computation of the truth approximation. Thus, even for this large value of $N_Z$, we obtain significant Online savings. In practice, however, we often need not use such a large value of $N_Z$; our rigorous and inexpensive error bounds $\Delta^{{\rm sym},k}_N(\mu)$, $k\in{\mathbb{K}}$, allow us to choose the RB dimension just large enough to obtain a desired accuracy. To achieve a prescribed accuracy of at least $1\%$ (resp., $0.1\%$) in the RB approximations $u^k_N(\mu), k\in{\mathbb{K}}$, we need $N_Z = 76$ (resp., $N_Z = 109$) (see Fig.~\ref{fig:err_errbnd_K}).
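This certified choice of $N_Z$ reduces to scanning the Online-cheap bounds for the smallest dimension meeting the target accuracy; schematically (the bound values below are hypothetical, chosen only to mirror the $1\%$/$0.1\%$ thresholds above):

```python
def smallest_certified_dim(history, tol):
    """Return the smallest RB dimension whose certified relative
    a posteriori error bound falls below `tol`, or None."""
    for n_z, bound in sorted(history):
        if bound < tol:
            return n_z
    return None

# Hypothetical (N_Z, relative bound) pairs from a convergence study.
history = [(40, 1.2e-1), (76, 9.0e-3), (109, 8.0e-4), (150, 1.0e-5)]
```

Since the bound is rigorous, the dimension returned is guaranteed to deliver the prescribed accuracy, not merely expected to.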
Again, once the database has been loaded, the Online calculation of $(u^k_N(\mu), p^k_N(\mu)), k\in{\mathbb{K}}$, and $\Delta^{{\rm sym},k}_N(\mu), k\in{\mathbb{K}}$, for any new value of $\mu\in{\mathcal D}$ then takes on average 4.41 (resp., 7.62) and 24.47 (resp., 33.75) milliseconds, respectively, which is in total roughly 1,000 times (resp., 700 times) faster than direct computation of the truth approximation. \subsection{$\varepsilon>0$} Again, the SCM (see \S \ref{ss:offline_online}) enables the (Online-)efficient estimation of the coercivity constants $\alpha_a(\mu)$ and $\alpha_c(\mu)$; as we here use the same configurations, we refer the reader to \cite{Gerner:2012ag} for details in this context. To build our low-dimensional RB approximation spaces $X_N, Y_N$, $N\in{\mathbb{N}_{\rm max}}$, we apply the POD greedy procedure described in Algorithm~\ref{alg:modified_POD_penalty} (see \S \ref{ss:construction_RB_spaces}). The sampling process is based on an exhaustive random sample $\Sigma\subset{\mathcal D}$ of size $|\Sigma|=$ 4,900, $\Delta N=2$, $\delta^\kappa_{\rm tol}=10^3$, and the relative RB {\em a posteriori} error bound $\Delta_N(\mu)=\Delta^{\varepsilon,K}_N(\mu)/\|(u^{\varepsilon,j}_N(\mu))_{j\in{\mathbb{K}}}\|_{\ell^2(0,K;Z)}$ (see (\ref{eq:energy_norm_penalty}), (\ref{eq:energy_errbnd_penalty})).
\begin{figure}
\caption{Maximum error $\|e^{\varepsilon}_N(\mu)\|_{\ell^2(0,K;Z)}$ in the RB velocity and pressure approximations and associated error bound $\Delta^{\varepsilon,K}_N(\mu)$ as functions of the dimension $N_Z$ for different values of $\varepsilon$.}
\label{fig:errbnd_alg6_penalty}
\end{figure}
Figure~\ref{fig:errbnd_alg6_penalty} now shows the maximum error $\|e^\varepsilon_N(\mu)\|_{\ell^2(0,K;Z)}$ (see (\ref{eq:def_errors})) in the RB velocity and pressure approximations together with the associated error bound $\Delta^{\varepsilon,K}_N(\mu)$ as functions of the dimension $N_Z$ for different values of $\varepsilon$.
Figure~\ref{fig:err_errbnd_N_penalty} then presents the maximum error $\|e^\varepsilon_N(\mu)\|_{\ell^2(0,k;Z)}$ and associated error bound $\Delta^{\varepsilon,k}_N(\mu)$ as functions of $k\in{\mathbb{K}}$ for several values of $N$; note that the latter are chosen as the values for which the error bounds $\Delta^{\varepsilon,K}_N(\mu)$ guarantee a prescribed accuracy of at least $1\%$ and $0.1\%$ in the RB approximations. First, we again observe that the RB error and error bounds are roughly uniform in time (see Fig.~\ref{fig:err_errbnd_N_penalty}) and decrease rapidly as $N_Z$ increases (see Fig.~\ref{fig:errbnd_alg6_penalty}). We obtain stable RB approximations whose rapid convergence is not affected by the penalty parameter, and {\em a posteriori} error bounds that are meaningful and rigorous. Second, using the condition numbers $\kappa^\varepsilon_N(\mu)$ as an indicator for an ill-conditioned system, Algorithm~\ref{alg:modified_POD_penalty} guarantees stability by properly accounting for the effects of the penalty term: For $\varepsilon=10^{-2}$, the sampling process recognizes that the RB approximation spaces $X_N,Y_N$ do not have to be stabilized to provide accurate approximations; taking smaller values of $\varepsilon$ and thus approaching the nonpenalized problem, an additional enrichment of the RB approximation space for the velocity becomes more and more necessary. Third, we see that the error bounds are tight for $\varepsilon=10^{-2}$ but become less sharp as we decrease $\varepsilon$ and our perturbed truth approximation becomes more accurate. However, effectivities exhibit a similar $O\big(\frac{1}{\sqrt{\varepsilon}}\big)$-dependence on the penalty parameter as observed in the stationary case (see \cite{veroy10:_stokes_penalty}) and remain reasonably small for relatively small values of $\varepsilon$. 
To further quantify this statement, we present in Table~\ref{tbl:effectivities_penalty} the effectivities associated with $\Delta^{\varepsilon,k}_N(\mu)$ for different values of $k$, $N$, and $\varepsilon$. We note that their values are fairly constant with $k$ and $N$ and confirm the $O\big(\frac{1}{\sqrt{\varepsilon}}\big)$-dependence indicated by (\ref{eq:energy_errbnd_penalty}) as well as Fig.~\ref{fig:errbnd_alg6_penalty} and Fig.~\ref{fig:err_errbnd_N_penalty}. The effects of the penalty parameter on the effectivities are thus relatively benign and we obtain useful bounds for reasonably small values of $\varepsilon$.
\begin{figure}
\caption{Maximum error $\|e^{\varepsilon}_N(\mu)\|_{\ell^2(0,k;Z)}$ and associated error bound $\Delta^{\varepsilon,k}_N(\mu)$ as functions of $k\in{\mathbb{K}}$ for several values of $N$.}
\label{fig:err_errbnd_N_penalty}
\end{figure}
\begin{table}[htp] \centering \footnotesize
\begin{tabular}{@{}c|c|c|c|c|c|c|c@{}}
\multicolumn{8}{l}{(a)~$\varepsilon = 10^{-2}$}\\[1ex]
\hline
$N$ & $N_Z$ & $k=10$ & $k=20$ & $k=40$ & $k=60$ & $k=80$ & $k=100$\\
\hline
& & & & & & &\\[-2.3ex]
$5$ & $20$ & $1.145\cdot 10^1$ & $1.282\cdot 10^1$ & $1.297\cdot 10^1$ & $1.296\cdot 10^1$ & $1.295\cdot 10^1$ & $1.293\cdot 10^1$\\
$15$ & $60$ & $1.202\cdot 10^1$ & $1.251\cdot 10^1$ & $1.292\cdot 10^1$ & $1.284\cdot 10^1$ & $1.281\cdot 10^1$ & $1.289\cdot 10^1$\\
$25$ & $100$ & $1.154\cdot 10^1$ & $1.154\cdot 10^1$ & $1.235\cdot 10^1$ & $1.248\cdot 10^1$ & $1.239\cdot 10^1$ & $1.227\cdot 10^1$\\
$35$ & $140$ & $1.132\cdot 10^1$ & $1.126\cdot 10^1$ & $1.163\cdot 10^1$ & $1.171\cdot 10^1$ & $1.159\cdot 10^1$ & $1.159\cdot 10^1$\\
$45$ & $180$ & $1.241\cdot 10^1$ & $1.235\cdot 10^1$ & $1.226\cdot 10^1$ & $1.206\cdot 10^1$ & $1.201\cdot 10^1$ & $1.194\cdot 10^1$\\
\hline
\end{tabular}\\[1ex]
\begin{tabular}{@{}c|c|c|c|c|c|c|c@{}}
\multicolumn{8}{l}{(b)~$\varepsilon = 10^{-3}$}\\[1ex]
\hline
$N$ & $N_Z$ & $k=10$ & $k=20$ & $k=40$ & $k=60$ & $k=80$ & $k=100$\\
\hline
& & & & & & &\\[-2.3ex]
$4$ & $22$ & $2.691\cdot 10^1$ & $3.139\cdot 10^1$ & $3.273\cdot 10^1$ & $3.293\cdot 10^1$ & $3.275\cdot 10^1$ &
$3.227\cdot 10^1$\\ $12$ & $66$ & $2.725\cdot 10^1$ & $2.969\cdot 10^1$ & $3.071\cdot 10^1$ & $3.166\cdot 10^1$ & $3.155\cdot 10^1$ & $3.101\cdot 10^1$\\ $19$ & $103$ & $2.358\cdot 10^1$ & $2.367\cdot 10^1$ & $2.593\cdot 10^1$ & $2.641\cdot 10^1$ & $2.645\cdot 10^1$ & $2.710\cdot 10^1$\\ $26$ & $145$ & $2.965\cdot 10^1$ & $2.933\cdot 10^1$ & $2.945\cdot 10^1$ & $2.973\cdot 10^1$ & $2.960\cdot 10^1$ & $2.983\cdot 10^1$\\ $34$ & $183$ & $2.983\cdot 10^1$ & $2.902\cdot 10^1$ & $2.903\cdot 10^1$ & $2.900\cdot 10^1$ & $2.850\cdot 10^1$ & $2.826\cdot 10^1$\\ \hline \end{tabular}\\[1ex] \begin{tabular}{@{}c|c|c|c|c|c|c|c@{}} \multicolumn{8}{l}{(c)~$\varepsilon = 10^{-4}$}\\[1ex] \hline $N$ & $N_Z$ & $k=10$ & $k=20$ & $k=40$ & $k=60$ & $k=80$ & $k=100$\\ \hline & & & & & & &\\[-2.3ex] $3$ & $18$ & $8.804\cdot 10^1$ & $8.882\cdot 10^1$ & $8.986\cdot 10^1$ & $9.150\cdot 10^1$ & $9.159\cdot 10^1$ & $9.113\cdot 10^1$\\ $10$ & $60$ & $6.726\cdot 10^1$ & $8.104\cdot 10^1$ & $9.668\cdot 10^1$ & $9.737\cdot 10^1$ & $9.639\cdot 10^1$ & $9.608\cdot 10^1$\\ $17$ & $101$ & $9.326\cdot 10^1$ & $1.010\cdot 10^2$ & $1.074\cdot 10^2$ & $1.055\cdot 10^2$ & $1.053\cdot 10^2$ & $1.072\cdot 10^2$\\ $24$ & $141$ & $1.058\cdot 10^2$ & $1.055\cdot 10^2$ & $1.043\cdot 10^2$ & $1.033\cdot 10^2$ & $9.825\cdot 10^1$ & $9.822\cdot 10^1$\\ $33$ & $181$ & $8.304\cdot 10^1$ & $8.290\cdot 10^1$ & $8.592\cdot 10^1$ & $8.635\cdot 10^1$ & $8.573\cdot 10^1$ & $8.706\cdot 10^1$\\ \hline \end{tabular}\\[1ex] \begin{tabular}{@{}c|c|c|c|c|c|c|c@{}} \multicolumn{8}{l}{(d)~$\varepsilon = 10^{-5}$}\\[1ex] \hline $N$ & $N_Z$ & $k=10$ & $k=20$ & $k=40$ & $k=60$ & $k=80$ & $k=100$\\ \hline & & & & & & &\\[-2.3ex] $3$ & $18$ & $2.706\cdot 10^2$ & $2.919\cdot 10^2$ & $3.311\cdot 10^2$ & $3.365\cdot 10^2$ & $3.199\cdot 10^2$ & $3.198\cdot 10^2$\\ $10$ & $55$ & $2.240\cdot 10^2$ & $2.510\cdot 10^2$ & $2.684\cdot 10^2$ & $2.699\cdot 10^2$ & $2.698\cdot 10^2$ & $2.701\cdot 10^2$\\ $17$ & $99$ & $2.696\cdot 10^2$ & 
$2.860\cdot 10^2$ & $3.103\cdot 10^2$ & $3.115\cdot 10^2$ & $3.107\cdot 10^2$ & $3.129\cdot 10^2$\\ $24$ & $138$ & $2.563\cdot 10^2$ & $2.950\cdot 10^2$ & $3.214\cdot 10^2$ & $3.206\cdot 10^2$ & $3.147\cdot 10^2$ & $3.210\cdot 10^2$\\ $32$ & $183$ & $2.786\cdot 10^2$ & $2.765\cdot 10^2$ & $3.206\cdot 10^2$ & $3.324\cdot 10^2$ & $3.306\cdot 10^2$ & $3.368\cdot 10^2$\\ \hline \end{tabular}\\[1ex] \caption{Maximum effectivities $\eta^{\varepsilon,k}_N(\mu)\equiv \Delta^{\varepsilon,k}_N(\mu)/\|e^{\varepsilon}_N(\mu)\|_{\ell^2(0,k;Z)}$ (see (\ref{eq:rig_energy_errbnd_penalty})) for several values of $k\in{\mathbb{K}}$ and $N$ for (a)~$\varepsilon=10^{-2}$, (b)~$\varepsilon=10^{-3}$, (c)~$\varepsilon = 10^{-4}$, and (d)~$\varepsilon=10^{-5}$; the maximum is taken over 25 parameter values.} \label{tbl:effectivities_penalty} \end{table} \begin{table}[htp] \centering \footnotesize \begin{tabular}{@{}c|c|c|c|c|c@{}} \hline & & & & & \\[-2.4ex] $\varepsilon$ & $N_Z$ & $N$ & $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$ & $\Delta^{\varepsilon,k}_N(\mu), k\in{\mathbb{K}}$ & Total\\ & & & & & \\[-2.3ex] \hline & & & & & \\[-2.3ex] $ 10^{-2}$ & 68 (112) & 17 (28) & 3.71 (7.65) & 14.53 (26.95) & 18.25 (34.60)\\ \hline & & & & & \\[-2.3ex] $ 10^{-3}$ & 70 (107) & 13 (20) & 3.99 (7.43) & 17.19 (28.25) & 21.18 (35.67)\\ \hline & & & & & \\[-2.3ex] $ 10^{-4}$ & 79 (150) & 14 (25) & 4.73 (13.59) & 20.28 (44.70) & 25.01 (58.29)\\ \hline & & & & & \\[-2.3ex] $ 10^{-5}$ & 121 (174) & 21 (31) & 9.19 (17.47) & 33.81 (54.08) & 43.01 (71.55)\\ \hline \end{tabular}\\[1ex] \caption{Average computation times in milliseconds for the Online evaluation of $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$, (assembly and solution of (\ref{eq:rb_scheme})) and the error bounds $\Delta^{\varepsilon,k}_N(\mu), k\in{\mathbb{K}}$, (see (\ref{eq:energy_errbnd_penalty})) for different values of $\varepsilon$ with a prescribed accuracy of at least 1\% (resp., 
0.1\%) for the RB approximations $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$.} \label{tbl:computation_times_penalty} \end{table} We close this section by discussing the Online computation times. For comparison, once the $\mu$-independent parts in the affine expansions of the involved operators have been formed (see \S \ref{ss:offline_online}), direct computation of the truth approximation $(u^{\varepsilon,k}(\mu),p^{\varepsilon,k}(\mu)), k\in{\mathbb{K}}$, (i.e., assembly and solution of (\ref{eq:truth_scheme})) requires roughly 23 seconds on a 2.66 GHz Intel Core 2 Duo processor. Again, our rigorous and inexpensive RB {\em a posteriori} error bounds enable us to choose the RB dimension just large enough to obtain a desired accuracy. Choosing $\varepsilon=10^{-2}$, the error bounds $\Delta^{\varepsilon,k}_N(\mu)$ are sharp with effectivities of approximately $12$ (see Table~\ref{tbl:effectivities_penalty}(a)) and prescribe a dimension of $N_Z= 68$ to achieve an accuracy of at least $1\%$ in the RB approximations $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$ (see Fig.~\ref{fig:errbnd_alg6_penalty}). Once the database has been loaded, the Online calculation of $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$, (i.e., assembly and solution of (\ref{eq:rb_scheme})) and $\Delta^{\varepsilon,k}_N(\mu), k\in{\mathbb{K}}$, for any new value of $\mu\in{\mathcal D}$ then takes on average 3.71 and 14.53 milliseconds, respectively, which is in total roughly 1,200 times faster than direct computation of the truth approximation. Choosing smaller values for $\varepsilon$, the error bounds become more pessimistic and thus dictate a larger system dimension at which they guarantee the same order of accuracy. 
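The quoted speedup factors follow directly from the timings reported in the text and in Table~\ref{tbl:computation_times_penalty}; a quick arithmetic check (the numbers below are the reported times, not new measurements):

```python
# Sanity check of the quoted Online speedup factors: truth solve ~23 s,
# Online totals (RB solve + error bound) taken from Table 2, in milliseconds.
truth_time_ms = 23_000.0

online_total_ms = {
    1e-2: 3.71 + 14.53,   # eps = 1e-2: 18.24 ms total
    1e-5: 9.19 + 33.81,   # eps = 1e-5: 43.00 ms total
}

for eps, t in sorted(online_total_ms.items()):
    print(f"eps = {eps:g}: speedup ~ {truth_time_ms / t:.0f}x")
# eps = 1e-2 gives ~1261x ("roughly 1,200 times faster");
# eps = 1e-5 gives ~535x ("roughly 500 times faster").
```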
For $\varepsilon=10^{-5}$, we need $N_Z=121$ to achieve a prescribed accuracy of at least $1\%$ in the RB approximations (see Fig.~\ref{fig:errbnd_alg6_penalty}); the Online calculation of $(u^{\varepsilon,k}_N(\mu),p^{\varepsilon,k}_N(\mu)), k\in{\mathbb{K}}$, and $\Delta^{\varepsilon,k}_N(\mu), k\in{\mathbb{K}}$, then takes on average 9.19 and 33.81 milliseconds, respectively, which is in total roughly 500 times faster than direct computation of the truth approximation. Thus, even for small penalty parameters $\varepsilon$, accurate approximations are guaranteed at significant Online savings. Detailed computation times for different values of $\varepsilon$ are given in Table~\ref{tbl:computation_times_penalty}. \section{Concluding Remarks} \label{s:conclusion} In this paper, we present new RB methods for the instationary Stokes equations. Combining techniques developed in \cite{Gerner:2011fk,Gerner:2012fk} with current approaches for parabolic problems, we derive new rigorous {\em a posteriori} bounds for the errors in the RB velocity approximations and a POD greedy procedure that properly accounts for temporal and parametric causality as well as stability. The method provides rapidly convergent RB approximations that are highly efficient and whose accuracy is certified by sharp and inexpensive {\em a posteriori} error bounds. An approximation by penalty or regularization allows for significant Offline savings at the expense of a less accurate truth approximation. Due to the introduced penalty term, an additional enrichment of the RB velocity approximation space is not always necessary to obtain stable approximations; moreover, we obtain {\em a posteriori} error bounds that do not involve the expensive computation of inf-sup stability constants. 
As in the stationary case (see \cite{veroy10:_stokes_penalty}), the method provides RB approximations and meaningful {\em a posteriori} error bounds that are computed very easily; nevertheless, drawbacks such as the disadvantageous dependence of the error bounds on the penalty parameter remain. Time integration is achieved through a backward Euler method. Clearly, other time integration schemes may also be used. Using a Crank--Nicolson method, often preferred in practice due to its second-order accuracy, we may develop a penalty approach that is very similar to the one presented in this paper (see \cite{Gerner:2012ag}); in the case $\varepsilon=0$, useful RB {\em a posteriori} error bounds could not yet be derived and may therefore be---as well as {\em a posteriori} error bounds for the RB pressure approximations---part of future work. \section*{Acknowledgments} We would like to thank Prof.~Arnold Reusken and Prof.~Martin Grepl of RWTH Aachen University for numerous very helpful suggestions and comments. We are also very grateful to Dr.~David J. Knezevic of Harvard University for his invaluable support on {\tt rbOOmit}~\cite{Knezevic:2011fk}, and Mark K\"{a}rcher of RWTH Aachen University for a very careful reading of the manuscript as well as many valuable comments and discussions. Financial support from the Deutsche Forschungsgemeinschaft (German Research Foundation) through grant GSC 111 is gratefully acknowledged. \end{document}
\begin{document} \sloppy \def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{0.8} \title{\bf Collective Anomaly Detection in High-dimensional VAR Models} \author{Hyeyoung Maeng, Idris Eckley and Paul Fearnhead\hspace{.2cm} \\ Lancaster University, United Kingdom} \maketitle \bigskip \begin{abstract} There is increasing interest in detecting collective anomalies: potentially short periods of time where the features of data change before reverting back to normal behaviour. We propose a new method for detecting a collective anomaly in VAR models. Our focus is on situations where the change in the VAR coefficient matrix at an anomaly is sparse, i.e.\ a small number of entries of the VAR coefficient matrix change. To tackle this problem, we propose a test statistic for a local segment that is built on the lasso estimator of the change in model parameters. This enables us to detect a sparse change more efficiently and our lasso-based approach becomes especially advantageous when the anomalous interval is short. We show that the new procedure controls Type 1 error and has asymptotic power tending to one. The practicality of our approach is demonstrated through simulations and two data examples, involving New York taxi trip data and EEG data. \end{abstract} \noindent {\it Keywords:} Collective anomaly; high-dimensional time series; lasso; sparse changes; epidemic change \spacingset{1.35} \section{Introduction} \label{sec1} There is a growing need for modelling and analysis of high-dimensional time series, as such series have become increasingly common in many application areas.
Applications include forecasting using a large panel of time series in economics \citep{de2008forecasting, banbura2010large}, estimating causal relationships among genes and constructing gene regulatory networks \citep{shojaie2010discovering}, identifying the monetary transmission mechanism from macroeconomic time series \citep{bernanke2005measuring}, discovering causal interactions in Neuroimaging \citep{smith2012future, seth2015granger}, analysing housing markets for home-price estimation and forecasting \citep{rapach2007forecasting, calomiris2008foreclosure, stock2008evolution} and analysing the network structure of volatility interconnections in the Standard \& Poor’s 100 data \citep{barigozzi2017network}. The majority of existing methods are built on the assumption of stationary and stable time series. If there is either a structural change or a period of anomalous behaviour in a time series, detecting the location of the change/anomaly is not only an important task in itself, but also useful for a follow-up analysis after detection. It is indeed a problem of significant interest in many applications. For example, \citet{chen1997testing} detects multiple change-points in variance of weekly stock prices and \citet{cribben2017estimating} study a network change-point detection problem for resting state functional magnetic resonance imaging data. \citet{ombao2005slex} propose a way of segmenting multivariate nonstationary time series and analyse time-varying electroencephalogram data that is recorded during an epileptic seizure. 
Other examples include detecting changes that have occurred in a sparse subset of time series \citep{cho2015multiple, cho2016change, wang2018high}, covariance change-point detection for multivariate or high-dimensional time series \citep{inclan1994use, aue2009break, wang2017optimal} and detecting change-points under the factor model framework \citep{breitung2011testing, chen2014detecting, baltagi2017identification, barigozzi2018simultaneous}. One of the most popular models for high-dimensional time series is the vector autoregressive (VAR) model \citep{sims1980macroeconomics, lutkepohl2005new}, due to its ability to capture complex temporal and cross-sectional relationships. However, the estimation of the coefficient matrix becomes challenging as the number of parameters increases quadratically with the number of time series. To overcome this, structured sparsity of the VAR coefficients is often assumed, as this assumption dramatically reduces the number of model parameters. For example, \citet{song2011large} use lasso-type, that is $\ell_1$, penalties to encourage sparsity in the estimates of the VAR coefficients. \citet{davis2016sparse} propose a two-stage approach to fit sparse VAR models and provide numerical evidence that a log-likelihood based loss function improves the forecasting performance compared to a least squares based one, as the former includes information on the error covariance matrix. \citet{basu2015regularized} investigate the theoretical properties of $\ell_1$-penalised estimators for a Gaussian VAR model and show consistency results, while \citet{lin2017regularized} generalise the results by considering a general norm instead of being restricted to the $\ell_1$-norm for the penalty.
Recently, more complex structures have been studied in the literature: \citet{basu2019low} study the low-rank and structured sparse VAR model and \citet{nicholson2020high} impose a hierarchical structure on VAR coefficient matrices according to the lag order, thus addressing both the dimensionality and the lag selection issues at the same time. Despite the large body of literature on VAR models, detecting a structural break has rarely been studied. \citet{kirch2015detection} consider two scenarios, detecting an at-most-one-change and an epidemic change in the model parameters of multivariate time series, in a framework not restricted to VAR models. \citet{safikhani2020joint} consider the multiple change-point setting for the VAR coefficient matrix under a high-dimensional regime and propose a three-stage procedure that returns consistent estimators of both change-points and parameters. \citet{wang2019localizing} also study the same setting (i.e.\ when the model parameters are piecewise constant over time) and use a dynamic programming approach for localising change-points and improving the corresponding error rates. \citet{bai2020multiple} study the multiple change-point setting but assume the low-rank plus sparse structure on the VAR coefficient matrices and consider the case where only the sparse structure changes over time, while the low-rank parts remain constant. We will explain how our proposal is different from these existing works later in this section. In contrast to these earlier works, we focus on settings where we have plenty of information about the current or normal behaviour of our time-series, and wish to detect periods of different or abnormal behaviour. This can arise when detecting collective anomalies \citep{fisch2018linear,tveten2020scalable} or epidemic changepoints \citep{yao1993tests} -- where we have a, potentially short, period of time where the behaviour of our model changes before it reverts back to current behaviour.
This also arises with sequential change detection \citep{lai1995sequential}, when we observe data in real-time and wish to detect any change away from the current behaviour as quickly as possible. For ease of presentation we focus primarily on detecting collective anomalies/epidemic changes, and use the terminology collective anomaly from now on. We show how our method can be extended to the online framework in Section \ref{sec5}. The key feature of these problems is that we have substantially more information about the current or normal behaviour than about the anomaly. This suggests that we should potentially use different procedures to estimate the parameters of the VAR model for the normal behaviour than for the anomaly. We do this by assuming that it is the change in VAR parameters that is sparse. We focus on improving the detection power when the difference between the coefficient matrices at the anomaly is sparse (i.e.\ a small number of entries of the VAR coefficient matrix change). To tackle this problem, we propose a test statistic for a local segment which is built on the lasso estimator of the change in model parameters. This enables us to detect a sparse change more efficiently, as the sparsity of the change is taken into account in constructing the test statistic. Moreover, our lasso-based approach becomes more advantageous than, say, the standard likelihood-ratio test statistic for shorter anomalous intervals: for shorter intervals we have fewer observations with which to estimate the new VAR coefficient matrix, and the problem becomes more like a high-dimensional one where the number of observations is similar to or less than the number of parameters to estimate. In Section \ref{sec4}, our approach is compared with a method built on estimating the change in the VAR matrix using the ordinary least squares estimator, and the results show that our method outperforms it when detecting a sparse change.
As we consider a setting in which the region of normal behaviour is considerably longer than the anomalous one, it is reasonable to assume that the underlying VAR coefficient matrix can be estimated well. Thus, we first develop our method when the normal behaviour is assumed to be known and then extend it to the case where an appropriate estimator of the VAR coefficient matrix is used instead. Our theory in Section \ref{sec3} shows the validity of this approach provided that the estimator of the VAR coefficient matrix is close enough to the true one. Although our main focus is on single anomaly detection, we show that the new method can be extended to detecting multiple anomalies in Section \ref{sec2.3}. Among the relevant works already introduced earlier in this section, those of \citet{safikhani2020joint} and \citet{bai2020multiple} are most closely related to ours, in that they also control the change in VAR parameters with a lasso penalty in their objective functions; however, their approaches differ from our method in several respects. To obtain the initial estimate of change-points before screening, \citet{safikhani2020joint} use a fused lasso penalty on a full model considering all time points as candidate change-points. Thus their objective function controls the sparsity of the VAR parameters and the sparsity of their differences at the same time. \citet{bai2020multiple} follow a similar procedure to \citet{safikhani2020joint} under the multiple change-point framework. They use a block fused lasso penalty by assuming that the model parameters in a block are fixed, while our objective function controls only the sparsity of the change in building a test statistic and searches over many segments to find an anomalous interval.
Also, \citet{safikhani2020joint} and \citet{bai2020multiple} assume that the $\ell_2$-norm of a change in the VAR parameters is bounded away from zero, whereas our assumption on the $\ell_2$-norm of a change is related to the sparsity of the change, which is in line with the assumptions used in \citet{wang2019localizing}. Although those change-point detection methods are not exactly designed for the anomaly setting we consider in this paper, we compare our performance with theirs and present results in the supplementary material. Our method works better especially when the underlying VAR coefficient matrix is dense but the change is sparse, and surprisingly even in the case where the VAR coefficient matrix has a low rank plus sparse structure and only a sparse component changes. Full details can be found in the supplementary material. The remainder of the article is organised as follows. Section \ref{sec2} gives a full description of our procedure and the relevant theoretical results are presented in Section \ref{sec3}. The supporting simulation studies are described in Section \ref{sec4}. Our methodology is illustrated through two datasets in Section \ref{sec5} and we end with additional discussion in Section \ref{sec6}. The proofs of our main theoretical results are in the supplementary material. \section{Methodology} \label{sec2} \subsection{Problem setting} \label{sec2.1} We consider a zero-mean stationary ${p}$-dimensional multivariate time series $\boldsymbol{x}_t = (x_{1t}, \ldots, x_{pt})'$ generated by a VAR(1) model: \begin{equation} \label{model} \boldsymbol{x}_t = \boldsymbol{A}_t \boldsymbol{x}_{t-1} + {\boldsymbol{\varepsilon}}_t, \quad \boldsymbol{\varepsilon}_t \stackrel{\text{i.i.d.}}{\sim} N(\boldsymbol{0}, \Sigma_{\varepsilon}), \quad t=1, \ldots, T, \end{equation} where each $\boldsymbol{A}_{t}$ is a $p \times p$ matrix and $\Sigma_{\varepsilon}$ is a positive definite matrix.
We assume that the high-dimensional VAR model shows an anomalous behaviour at $t \in [\eta_1, \eta_2]$ such that \begin{equation} 0=\eta_0 < \eta_1 < \eta_2 < \eta_3=T, \end{equation} which gives the sets \begin{equation} \label{sets} \boldsymbol{\mathfrak{x}}_1 = \{\boldsymbol{x}_1, \ldots, \boldsymbol{x}_{\eta_1-1}\}, \quad \boldsymbol{\mathfrak{x}}_2 = \{\boldsymbol{x}_{\eta_1}, \ldots, \boldsymbol{x}_{\eta_2}\}, \quad \boldsymbol{\mathfrak{x}}_3 = \{\boldsymbol{x}_{\eta_2+1}, \ldots, \boldsymbol{x}_{T}\} \end{equation} and the sequence $\{\boldsymbol{A}_t\}_{t=1}^T$ forms piecewise-constant coefficient matrices as follows, \begin{align*} \boldsymbol{A}^{(1)} = \boldsymbol{A}_{1} = \cdots = \boldsymbol{A}_{\eta_1-1}, \quad \boldsymbol{A}^{(2)} = \boldsymbol{A}_{\eta_1} =\cdots = \boldsymbol{A}_{\eta_2}, \quad \boldsymbol{A}^{(1)} = \boldsymbol{A}_{\eta_2+1} = \cdots = \boldsymbol{A}_{T}, \end{align*} where $\boldsymbol{A}^{(1)} \neq \boldsymbol{A}^{(2)}$ and $\boldsymbol{A}^{(1)}, \boldsymbol{A}^{(2)} \in \mathbb{R}^{p \times p}$.
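As a concrete illustration of this data-generating mechanism, the following minimal simulation generates a VAR(1) path with one collective anomaly. All numerical choices ($p=5$, $T=200$, $[\eta_1,\eta_2]=[120,150]$, a change in a single entry of the coefficient matrix) are hypothetical and ours, chosen only so that both regimes are stable:

```python
import numpy as np

rng = np.random.default_rng(0)
p, T = 5, 200
eta1, eta2 = 120, 150           # anomalous interval [eta1, eta2]

# Base coefficient matrix A^(1), scaled so the VAR(1) process is stable.
A1 = 0.3 * np.eye(p)
A2 = A1.copy()
A2[0, 1] += 0.5                 # sparse change: exactly one entry differs

# Generate x_t = A_t x_{t-1} + eps_t with A_t = A2 on the anomaly.
x = np.zeros((T + 1, p))
for t in range(1, T + 1):
    A = A2 if eta1 <= t <= eta2 else A1
    x[t] = A @ x[t - 1] + rng.standard_normal(p)

print(x.shape)                   # (201, 5)
```

Since $A^{(2)}$ is upper triangular with diagonal $0.3$, its spectral radius is $0.3 < 1$, so both regimes are stationary.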
The model in equation \eqref{model} can be represented as the following linear regression, \begin{equation} \label{model1} \renewcommand*{\arraystretch}{0.8} \begin{pmatrix} \boldsymbol{x}_1^\prime \\ \boldsymbol{x}_2^\prime \\ \vdots \\ \boldsymbol{x}_T^\prime \end{pmatrix}_{T \times p} = \begin{pmatrix} \boldsymbol{x}_0^\prime & 0 \\ \vdots & \vdots \\ \boldsymbol{x}_{\eta_1-2}^\prime & 0 \\ \boldsymbol{x}_{\eta_1-1}^\prime & \boldsymbol{x}_{\eta_1-1}^\prime \\ \vdots & \vdots \\ \boldsymbol{x}_{\eta_2-1}^\prime & \boldsymbol{x}_{\eta_2-1}^\prime \\ \boldsymbol{x}_{\eta_2}^\prime & 0 \\ \vdots & \vdots \\ \boldsymbol{x}_{T-1}^\prime & 0 \end{pmatrix}_{T \times 2p} \begin{pmatrix} {\boldsymbol{\theta}^{(1)}}^{\prime} \\ {\boldsymbol{\theta}^{(2)}}^{\prime} \end{pmatrix}_{2p \times p} + \begin{pmatrix} \varepsilon_1^\prime \\ \varepsilon_2^\prime \\ \vdots \\ \varepsilon_T^\prime \end{pmatrix}_{T \times p}, \end{equation} where \begin{align*} \boldsymbol{\theta}^{(1)} = \boldsymbol{A}^{(1)}, \; \boldsymbol{\theta}^{(2)} = \boldsymbol{A}^{(2)}-\boldsymbol{A}^{(1)}. \end{align*} The model, as written in equation \eqref{model1}, is a linear regression of the form $\mathcal{Y} = \mathcal{X} \Theta + {E}.$ As such, it can be represented as \begin{equation} \label{model4} \boldsymbol{Y}_{Tp \times 1}=\boldsymbol{X}_{Tp \times 2p^2}\boldsymbol{\Theta}_{2p^2 \times 1} +\boldsymbol{E}_{Tp \times 1}, \end{equation} where $\boldsymbol{X} = \mathit{I}_{p} \otimes \mathcal{X}$ and $\otimes$ is the Kronecker product of two matrices. Now our interest is in estimating the collective anomaly $[\eta_1, \eta_2]$. Our motivation is for scenarios where there is substantial information about the normal or pre-change behaviour of the data. Thus, for ease of presentation, we will first assume that $\boldsymbol{\theta}^{(1)}$ in \eqref{model1} is known.
In practice we will use an estimate of $\boldsymbol{\theta}^{(1)}$, and our theory shows that our approach has good asymptotic properties if we plug in a suitably accurate estimate of $\boldsymbol{\theta}^{(1)}$ in the following procedure. We assume that the change $\boldsymbol{\theta}^{(2)}$ is sparse in that it has a small number of nonzero entries; this will be formalised in a later section. Assuming the base coefficient matrix $\boldsymbol{A}^{(1)}$ is known, we can rewrite the model as \begin{equation} \label{model_a1} \renewcommand*{\arraystretch}{0.8} \begin{pmatrix} \boldsymbol{x}_1^\prime \\ \boldsymbol{x}_2^\prime \\ \vdots \\ \boldsymbol{x}_T^\prime \end{pmatrix}_{T \times p} - \begin{pmatrix} \boldsymbol{x}_0^\prime{\boldsymbol{\theta}^{(1)}}^{\prime} \\ \vdots \\ \boldsymbol{x}_{\eta_1-2}^\prime{\boldsymbol{\theta}^{(1)}}^{\prime} \\ \boldsymbol{x}_{\eta_1-1}^\prime{\boldsymbol{\theta}^{(1)}}^{\prime} \\ \vdots \\ \boldsymbol{x}_{\eta_2-1}^\prime{\boldsymbol{\theta}^{(1)}}^{\prime} \\ \boldsymbol{x}_{\eta_2}^\prime{\boldsymbol{\theta}^{(1)}}^{\prime} \\ \vdots \\ \boldsymbol{x}_{T-1}^\prime{\boldsymbol{\theta}^{(1)}}^{\prime} \end{pmatrix}_{T \times p} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ \boldsymbol{x}_{\eta_1-1}^\prime \\ \vdots \\ \boldsymbol{x}_{\eta_2-1}^\prime \\ 0 \\ \vdots \\ 0 \end{pmatrix}_{T \times p} \begin{pmatrix} {\boldsymbol{\theta}^{(2)}}^{\prime} \end{pmatrix}_{p \times p} + \begin{pmatrix} \varepsilon_1^\prime \\ \varepsilon_2^\prime \\ \vdots \\ \varepsilon_T^\prime \end{pmatrix}_{T \times p}, \end{equation} which can be represented as $\mathcal{Y} - \mathcal{X}^{(1)}{\boldsymbol{\theta}^{(1)}}^{\prime} = \mathcal{X}^{(2)}{\boldsymbol{\theta}^{(2)}}^{\prime} + {E}.$ With slight abuse of notation as we are using different definitions of $\boldsymbol{Y}$, $\boldsymbol{X}$ and $\boldsymbol{\Theta}$, this can be re-written as \begin{equation} \label{model_a2} \boldsymbol{Y}
_{Tp \times 1}=\boldsymbol{X} _{Tp \times p^2}\boldsymbol{\Theta} _{p^2 \times 1} +\boldsymbol{E} _{Tp \times 1}, \end{equation} where $\boldsymbol{X} = \mathit{I}_{p} \otimes \mathcal{X}^{(2)}$. \subsection{Lasso-based approach} \label{sec2.2} To detect a collective anomaly we derive a test for whether data in an interval of time is anomalous, and then apply this test to data from a set of suitably chosen intervals, $\mathbb{J}_{T, p} (L)$. To help with the presentation of theory in Section \ref{sec3}, we parameterise this set by the length, $L$, of the smallest interval it contains. For any interval $J \in \mathbb{J}_{T, p} (L)$, by extracting the corresponding rows from each matrix in \eqref{model_a1}, the linear regression form can be rewritten as: $\mathcal{Y}_{J} - \mathcal{X}^{(1)}_{J}{\boldsymbol{\theta}^{(1)}}^{\prime} = \mathcal{X}^{(2)}_{J}{\boldsymbol{\theta}^{(2)}}^{\prime} + {E}_{J},$ which can be vectorised in the form \begin{equation} \boldsymbol{Y}_{J}=\boldsymbol{X}_{J}\boldsymbol{\Theta} +\boldsymbol{E}_{J}, \end{equation} as in \eqref{model_a2}. One of the standard ways to detect changes or epidemic changes in regression models is to use a likelihood ratio test \citep{kim1989likelihood, siegmund1995using, yau2016inference, baranowski2019narrowest, dette2020likelihood}, and these methods can be applied in the VAR setting.
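The vectorisation used above is the standard identity $\operatorname{vec}(\mathcal{X}\Theta) = (\mathit{I}_{p} \otimes \mathcal{X})\operatorname{vec}(\Theta)$, with $\operatorname{vec}$ stacking columns. A quick numerical check (dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
T, p = 8, 3
X = rng.standard_normal((T, p))       # design block (one regime, q = 1)
Theta = rng.standard_normal((p, p))   # coefficient matrix

def vec(M):
    """Column-stacking vectorisation (Fortran order)."""
    return M.reshape(-1, order="F")

lhs = vec(X @ Theta)                  # vec of the fitted matrix
rhs = np.kron(np.eye(p), X) @ vec(Theta)
print(np.allclose(lhs, rhs))          # True
```

The block-diagonal structure of $\mathit{I}_{p} \otimes \mathcal{X}$ is also why, with an entrywise penalty, the regression decouples across the $p$ response coordinates.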
To detect a collective anomaly using a set of intervals, our procedure involves calculating the likelihood ratio statistic for each interval $J \in \mathbb{J}_{T, p} (L)$ as \begin{equation} \label{lrt} -2\bigg\{\sum_{s\in J} l_s\big(\boldsymbol{\Theta}=0, \Sigma_{\varepsilon} \big) - \sum_{s\in J} l_s\big(\hat{\boldsymbol{\Theta}}, \Sigma_{\varepsilon} \big)\bigg\}, \end{equation} where $\hat{\boldsymbol{\Theta}}$ is the maximum likelihood estimator and the likelihood function has the form of \begin{equation*} \sum_{s\in J} l_s \big(\boldsymbol{\Theta}, \Sigma_{\varepsilon} \big) = - \frac{1}{2}\bigg\{ |J|p \log(2 \pi) + |J| \log|\Sigma_{\varepsilon}| + (\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}})^\top (\Sigma_{\varepsilon}^{-1} \otimes \mathit{I}) (\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}}) \bigg\}. \end{equation*} As we consider only ${\boldsymbol{\Theta}}$ varying, the first two terms are constant and will cancel in the test statistic. It is common to assume $\Sigma_{\varepsilon}$ is the identity matrix, in which case the maximum likelihood estimator of $\boldsymbol{\Theta}$ is the ordinary least squares (OLS) estimator. Alternatively we can estimate the variance from the residuals obtained when estimating the parameters of the VAR model on training data. For ease of presentation, we will assume $\Sigma_{\varepsilon}$ is the identity matrix from now on, but our theoretical results are still valid if this assumption is not correct. Furthermore, the theory can be extended to situations where we assume either that $\Sigma_{\varepsilon}$ is any positive definite matrix or that an estimate of $\Sigma_{\varepsilon}$ is used.
We now give details of the likelihood ratio statistic and our suggested improvement based on penalised estimation of the change in the VAR coefficients. \begin{comment} In general, the likelihood function has the form of \begin{align*} \log \mathbf{L}({{\Theta}}, \Sigma) =& \mathbf{\ell} ({{\Theta}}, \Sigma) \\ =& -\frac{Tp}{2} \log(2 \pi) -\frac{T}{2} \log|\Sigma| - \frac{1}{2} tr[(\mathcal{Y} -\mathcal{X}{\Theta}) \Sigma^{-1} (\mathcal{Y}-\mathcal{X}{\Theta})^\top]\\ =& -\frac{Tp}{2} \log(2 \pi) -\frac{T}{2} \log|\Sigma| - \frac{1}{2} tr[\Sigma^{-1} (\mathcal{Y}-\mathcal{X}{\Theta})^\top (\mathcal{Y}-\mathcal{X}{\Theta}) ] \end{align*} thus the profile likelihood function is \begin{align*} \mathbf{\ell} (\hat{{\Theta}}, \Sigma) = -\frac{Tp}{2} \log(2 \pi) -\frac{T}{2} \log|\Sigma| - \frac{1}{2} tr[\Sigma^{-1} (\mathcal{Y}-\mathcal{X}\hat{{\Theta}})^\top (\mathcal{Y}-\mathcal{X}\hat{{\Theta}})], \end{align*} \begin{itemize} \item When $\Sigma$ is known, the likelihood ratio statistic is \begin{align*} -2 \big\{\mathbf{\ell} ({\boldsymbol{\Theta}}=0, {\Sigma}) - \mathbf{\ell} (\hat{\boldsymbol{\Theta}}, {\Sigma})\big\} &= tr[\mathcal{Y} \Sigma^{-1}\mathcal{Y}^\top] - tr[(\mathcal{Y}-\mathcal{X}\hat{\Theta}) \Sigma^{-1} (\mathcal{Y}-\mathcal{X}\hat{\Theta})^\top] \\ &= tr[(\mathcal{X}\hat{\Theta}) \Sigma^{-1} (\mathcal{X}\hat{\Theta})^\top] , \end{align*} which follows a $\chi^2_{p^2}$ distribution.
\item When $\Sigma$ is unknown, the likelihood ratio statistic is as follows, \begin{align} \label{eq_sigma_unknown} -2 \big\{\mathbf{\ell} ({\boldsymbol{\Theta}}=0, \hat{\Sigma}_0) - \mathbf{\ell} (\hat{\boldsymbol{\Theta}}, \hat{\Sigma})\big\} = T \log|\hat{\Sigma}_0| - T \log|\hat{\Sigma}|, \end{align} where $\hat{\Sigma} = \frac{1}{T} (\boldsymbol{Y}-\boldsymbol{X}\hat{\boldsymbol{\Theta}}) (\boldsymbol{Y}-\boldsymbol{X}\hat{\boldsymbol{\Theta}})^\top$ is the maximum likelihood estimator. The LRT statistic in \eqref{eq_sigma_unknown} is a monotone increasing function of \begin{align} tr[(\mathcal{X}\hat{\Theta}) S^{-1} (\mathcal{X}\hat{\Theta})^\top] \end{align} that is the Hotelling's $T^2(T-1)$ statistic. As the Hotelling's $T^2(T-1)$ is approximated by a $\chi^2$ distribution as the number of observations goes to $\infty$, it might work in showing Theorem 3. Also, this does not affect Theorem 1 as what we need is just the fact that $\hat{\Theta}=0$. However, the form of the LRT statistic in \eqref{eq_sigma_unknown} is not applicable for building Theorem 2 as it contains $\log$ terms. We may need the assumption that $\Sigma$ is known and positive definite, so that $\Sigma^{-1/2}$ can be obtained and we work with $\Sigma^{-1/2}\mathcal{Y} = \Sigma^{-1/2}\mathcal{X}{{\Theta}} + \Sigma^{-1/2} E$. \end{itemize} \end{comment} \paragraph{The OLS method} Before introducing the lasso-based approach, we consider the test statistic based on the least squares estimator, which we refer to as the OLS method. The OLS estimator has been popularly used in the change point detection literature; e.g., in a linear model setup, CUSUM-type approaches built on the least squares estimator are studied by \citet{horvath2004monitoring}, \citet{aue2006change}, \citet{chen2010modified} and \citet{fremdt2015page}.
For any interval $J \in \mathbb{J}_{T, p} (L)$, the test statistic of the OLS method takes the form, \begin{align*} T(J) &= \|\boldsymbol{Y}_{J} \|_2^2 - \min_{\boldsymbol{\Theta}} \big\{ \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J} {\boldsymbol{\Theta}} \|_2^2\big\}\\ \numberthis \label{naive} &= \|\boldsymbol{Y}_{J} \|_2^2- \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}\hat{\boldsymbol{\Theta}} \|_2^2, \end{align*} which is the same as the likelihood ratio statistic in \eqref{lrt} when $\Sigma_{\varepsilon}$ is the identity matrix. $T(J)$ has a $\chi^2_{p^2}$ distribution under the null, $\boldsymbol{\Theta}=\boldsymbol{0}$. The classical least squares estimator $\hat{\boldsymbol{\Theta}}$ in \eqref{naive} cannot be used when the dimension $p$ is greater than $T$. Note that $\hat{\boldsymbol{\Theta}}$ also depends on $J$ but this is suppressed in the notation for simplicity. \paragraph{The Lasso method} To handle more effectively the case where $\boldsymbol{\Theta}$ is sparse, we propose a test statistic based on a lasso estimator: \begin{equation} \label{lasso} T^{\text{lasso}}(J) = \|\boldsymbol{Y}_{J} \|_2^2 - \min_{\boldsymbol{\Theta}} \big\{ \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J} {\boldsymbol{\Theta}} \|_2^2 + \lambda \|\boldsymbol{\Theta}\|_1 \big\}. \end{equation} To detect a collective anomaly, we calculate this test statistic for a collection of intervals, $\mathbb{J}_{T, p} (L)$. We detect an anomaly if the maximum value of these test statistics is above a pre-determined threshold. If we detect an anomaly, we estimate its location as the interval in $\mathbb{J}_{T, p} (L)$ with the largest test-statistic value. The detailed procedure is given in Algorithm \ref{algo1}. \begin{center} \begin{algorithm}[h!]
\setstretch{1} \SetAlgoLined \textbf{INPUT}: $\boldsymbol{X}$ matrix in \eqref{model_a2}, $L$, $\lambda^\textsuperscript{thr}$ \begin{enumerate} \item[] \textbf{Step 1}: Set a collection of intervals $\mathbb{J}_{T, p} (L)$ where $L$ is the minimum length of intervals. \item[] \textbf{Step 2}: For any interval $J \in \mathbb{J}_{T, p} (L)$, calculate $T^{\text{lasso}}(J)$ as in \eqref{lasso}. \item[] \textbf{Step 3}: Using a pre-specified threshold $\lambda^\textsuperscript{thr}$, pick the candidate set \begin{equation*} \mathbb{I}^* = \Big\{ J \in \mathbb{J}_{T, p} (L) : T^{\text{lasso}}(J) > \lambda^\textsuperscript{thr} \Big\}. \end{equation*} \end{enumerate} If $\mathbb{I}^* \neq \emptyset$, reject the null hypothesis (no anomaly exists) and save the estimator of the anomaly interval, \begin{equation} \label{ihat} \hat{I} = \argmax_{ J \in \mathbb{J}_{T, p} (L)} T^{\text{lasso}}(J). \end{equation} \textbf{OUTPUT}: $\hat{I}$. \caption{Single anomaly detection} \label{algo1} \end{algorithm} \end{center} For setting the collection of intervals $\mathbb{J}_{T, p} (L)$ in Step 1, there are two general methods: randomly generated intervals \citep{fryzlewicz2014wild, baranowski2019narrowest} and deterministic construction of intervals \citep{kovacs2020seeded}. In this paper, we use both construction methods and compare their performance in Section \ref{sec4}. \subsection{Extension to detecting multiple anomalies} \label{sec2.3} Following the ideas in \citet{fryzlewicz2014wild} and \citet{kovacs2020seeded}, to deal with multiple anomalies, we repeatedly update the candidate set by removing the intervals that overlap with any detected anomalies. The detailed procedure is given in Algorithm \ref{algo2}.
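The penalised statistic in \eqref{lasso} can be sketched directly: because the squared loss decouples across the $p$ response coordinates, the minimisation splits into $p$ independent lasso regressions. The sketch below uses scikit-learn's \texttt{Lasso}, which minimises $\frac{1}{2n}\|y-Xw\|_2^2+\alpha\|w\|_1$, so $\alpha=\lambda/(2n)$ matches the penalty above; the data and the value of $\lambda$ are hypothetical, not tuned:

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_stat(Y, X, lam):
    """Sketch of T^lasso(J): ||Y||^2 minus the lasso-penalised RSS.

    Y (n, p): responses on interval J, already corrected for the known
    baseline theta^(1); X (n, p): lagged observations on J. The loss
    decouples over the p response coordinates, so we solve p separate
    lasso problems sharing the penalty lam.
    """
    n = X.shape[0]
    pen_obj = 0.0
    for j in range(Y.shape[1]):
        y = Y[:, j]
        # alpha = lam/(2n) turns sklearn's objective into
        # ||y - Xw||^2 + lam * ||w||_1 (up to the constant factor 1/(2n)).
        w = Lasso(alpha=lam / (2 * n), fit_intercept=False).fit(X, y).coef_
        pen_obj += np.sum((y - X @ w) ** 2) + lam * np.abs(w).sum()
    return np.sum(Y ** 2) - pen_obj

rng = np.random.default_rng(2)
n, p = 40, 5
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, p))           # pure noise: no anomaly on J
print(lasso_stat(Y, X, lam=10.0) >= 0.0)  # True: Theta = 0 is feasible
```

Since $\boldsymbol{\Theta}=0$ attains the unpenalised value $\|\boldsymbol{Y}_J\|_2^2$, the statistic is always non-negative.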
\begin{center} \begin{algorithm}[ht] \setstretch{1} \SetAlgoLined \textbf{INPUT}: $\boldsymbol{X}$ matrix in \eqref{model_a2}, $L$, $\lambda^{\text{thr}}$ \begin{enumerate} \item[] \textbf{Step 1}: Set a collection of intervals $\mathbb{J}_{T, p} (L)$, where $L$ is the minimum length of the intervals. \item[] \textbf{Step 2}: For every interval $J \in \mathbb{J}_{T, p} (L)$, calculate $T^{\text{lasso}}(J)$ as in \eqref{lasso}. \item[] \textbf{Step 3}: Using a pre-specified threshold $\lambda^{\text{thr}}$, pick the candidate set \begin{equation*} \label{candidate} \mathbb{I}^* = \Big\{ J \in \mathbb{J}_{T, p} (L) : T^{\text{lasso}}(J) > \lambda^{\text{thr}} \Big\}. \end{equation*} \end{enumerate} If $\mathbb{I}^* \neq \emptyset$, reject the null hypothesis (no anomaly exists). Set $\mathbb{I}^{(1)} = \mathbb{I}^*$, $j=1$ and proceed with the following steps. \\ \While{$\mathbb{I}^{(j)} \neq \emptyset$}{ \begin{enumerate} \item[] \textbf{Step 4}: Save the estimator of the anomaly interval, \begin{equation*} \hat{I}_j = \argmax_{ J \in \mathbb{I}^{(j)} } T^{\text{lasso}}(J), \end{equation*} and update the candidate set as \begin{equation*} \mathbb{I}^{(j+1)} = \mathbb{I}^{(j)} \setminus \{J: J \in \mathbb{I}^{(j)}, J \cap \hat{I}_j \neq \emptyset\}. \end{equation*} \item[] \textbf{Step 5}: Set $j=j+1$. \end{enumerate} } \textbf{OUTPUT}: $\hat{I} = \{ \hat{I}_1, \hat{I}_2, \cdots\}$.
\caption{Multiple anomaly detection} \label{algo2} \end{algorithm} \end{center} \subsection{Extension to the VAR($q$) model} \label{sec2.4} The VAR process of order $1$ presented in Section \ref{sec2.1} can easily be extended to VAR($q$) as follows, \begin{equation} \label{model_varq} \boldsymbol{x}_t = \boldsymbol{A}_{t, 1} \boldsymbol{x}_{t-1} + \cdots + \boldsymbol{A}_{t, q} \boldsymbol{x}_{t-q} + {\boldsymbol{\varepsilon}}_t, \quad \boldsymbol{\varepsilon}_t \stackrel{\text{i.i.d.}}{\sim} N(\boldsymbol{0}, \Sigma_{\varepsilon}), \quad t=1, \ldots, T, \end{equation} where each $\boldsymbol{A}_{t, k}$, $k=1, \ldots, q$, is a $p \times p$ matrix and $\Sigma_{\varepsilon}$ is assumed to be a positive definite matrix. With a slight abuse of notation, the piecewise-constant coefficient matrices are as follows, \begin{align*} &\boldsymbol{A}^{(1)} = (\boldsymbol{A}_{t', 1}, \ldots, \boldsymbol{A}_{t', q}) \in \mathbb{R}^{p \times pq}, \quad \text{ for any } t'=1, \ldots, {\eta_1-1}, {\eta_2+1}, \ldots, T, \\ &\boldsymbol{A}^{(2)} = (\boldsymbol{A}_{t', 1}, \ldots, \boldsymbol{A}_{t', q}) \in \mathbb{R}^{p \times pq}, \quad \text{ for any } t'= {\eta_1}, \ldots, {\eta_2}, \end{align*} and the model \eqref{model_varq} can be represented as \begin{equation} \label{model1_varq} \renewcommand*{\arraystretch}{0.6} \begin{pmatrix} \boldsymbol{x}_q^\prime \\ \boldsymbol{x}_{q+1}^\prime \\ \vdots \\ \boldsymbol{x}_{T}^\prime \end{pmatrix}_{(T-q+1) \times p} = \begin{pmatrix} \boldsymbol{x}_{q-1}^\prime & \cdots & \boldsymbol{x}_0^\prime & 0 & \cdots & 0\\ \vdots & & \vdots & \vdots & & \vdots \\ \boldsymbol{x}_{\eta_1+q-3}^\prime & \cdots & \boldsymbol{x}_{\eta_1-2}^\prime & 0 & \cdots & 0\\ \boldsymbol{x}_{\eta_1+q-2}^\prime & \cdots & \boldsymbol{x}_{\eta_1-1}^\prime & \boldsymbol{x}_{\eta_1+q-2}^\prime & \cdots & \boldsymbol{x}_{\eta_1-1}^\prime \\ \vdots & & \vdots & \vdots & & \vdots \\
\boldsymbol{x}_{\eta_2+q-2}^\prime & \cdots & \boldsymbol{x}_{\eta_2-1}^\prime & \boldsymbol{x}_{\eta_2+q-2}^\prime & \cdots & \boldsymbol{x}_{\eta_2-1}^\prime\\ \boldsymbol{x}_{\eta_2+q-1}^\prime & \cdots & \boldsymbol{x}_{\eta_2}^\prime & 0 & \cdots & 0\\ \vdots & & \vdots & \vdots & & \vdots \\ \boldsymbol{x}_{T-1}^\prime & \cdots & \boldsymbol{x}_{T-q}^\prime & 0 & \cdots & 0\\ \end{pmatrix}_{(T-q+1) \times 2pq} \begin{pmatrix} {\boldsymbol{\theta}^{(1)}}^{\prime} \\ {\boldsymbol{\theta}^{(2)}}^{\prime} \end{pmatrix}_{2pq \times p} + \begin{pmatrix} \boldsymbol{\varepsilon}_q^\prime \\ \boldsymbol{\varepsilon}_{q+1}^\prime \\ \vdots \\ \boldsymbol{\varepsilon}_T^\prime \end{pmatrix}_{(T-q+1) \times p}, \end{equation} where $\boldsymbol{\theta}^{(1)} = \boldsymbol{A}^{(1)}$ and $\boldsymbol{\theta}^{(2)} = \boldsymbol{A}^{(2)}-\boldsymbol{A}^{(1)}$. With the larger dimension of the parameters, the same arguments carry over to the VAR process of order $q$ by following the logic from \eqref{model4}. \section{Theoretical results} \label{sec3} In this section, we explore the asymptotic behaviour of the proposed method. We show that our method controls the familywise error under the null (i.e., when no anomaly exists) with an appropriate threshold, and we give conditions under which the asymptotic power of the method tends to 1. These results are based upon the following assumptions. \begin{Assume} \label{asmpt1} For each $j=1, 2$, let $\Gamma_j(\ell)$ be the population version of the lag-$\ell$ covariance matrix of $\boldsymbol{\mathfrak{x}}_j$, where $\boldsymbol{\mathfrak{x}}_j$ is as in \eqref{sets}. For $\kappa \in [-\pi, \pi]$, the spectral density matrices exist, \begin{equation*} f_j(\kappa) = \frac{1}{2\pi}\sum_{\ell \in \mathbb{Z}} \Gamma_j(\ell) \, e^{-\sqrt{-1}\kappa \ell}.
\end{equation*} In addition, \begin{equation*} \max_j \mathcal{M}(f_j) = \max_j \Big\{ \text{ess}\sup_{\kappa\in [-\pi, \pi]} \Lambda_{\max}(f_j(\kappa)) \Big\} < +\infty, \end{equation*} and \begin{equation*} \min_j \boldsymbol{\mathfrak{m}}(f_j) = \min_j \Big\{ \text{ess}\inf_{\kappa\in [-\pi, \pi]} \Lambda_{\min}(f_j(\kappa)) \Big\} > 0, \end{equation*} where $\Lambda_{\max}(A)$ and $\Lambda_{\min}(A)$ are the largest and the smallest eigenvalues of the symmetric matrix $A$, respectively. \end{Assume} This first condition is needed to control the stability properties of the VAR models. It is a spectral density condition that is valid not only for VAR models but also for a large class of general linear processes. \citet{basu2015regularized} use the same assumption in a stable VAR setting without anomalies, while we extend it to the single collective anomaly setting by assuming a spectral density for the common and the anomalous segments separately. In order to bound the power of our method from below, we need conditions on the size and length of any anomaly and on the set of intervals we use -- essentially, we will need at least one interval of sufficient length to be contained within the anomaly. To this end we introduce the following: \begin{Assume} \label{asmpt2} There exists at least one interval $J \in \mathbb{J}_{T, p} (L)$ such that $J \subseteq [\eta_1, \eta_2]$, and the choice of $L$ for a set of intervals $\mathbb{J}_{T, p} (L)$ satisfies the following condition as $T, p \rightarrow \infty$, \begin{equation*} \frac{\log(T \lor p)}{L} \rightarrow 0, \end{equation*} where every interval $J \in \mathbb{J}_{T, p} (L)$ has length at least $L$. \end{Assume} \begin{Assume} \label{asmpt3} The sparsity of the change is fixed; $\|\boldsymbol{\Theta} \|_0=d_0$.
\end{Assume} \begin{Assume} \label{asmpt4} For any $\xi>0$, $L \cdot \|\boldsymbol{\Theta}\|_2^2 > C_2 \cdot d_0^2 \cdot \log^{1+\xi}{(T \lor p)}$, where $C_2>0$ is a constant. \end{Assume} Assumption \ref{asmpt3} gives the condition on the number of non-zero entries of the coefficient matrix, where the sparsity parameter $d_0$ affects the signal-to-noise ratio condition in Assumption \ref{asmpt4}. Assumption \ref{asmpt4} is similar to the conditions required in other change-point problems in high-dimensional VAR models. For example, \citet{wang2019localizing} study a multiple change-point setting and their signal-to-noise ratio assumption coincides with ours in the case when a single change-point is considered, while \citet{safikhani2020joint} assume that $\|\boldsymbol{\Theta}\|_2$ is bounded away from zero. Our final assumption is used to extend our results to the case where we estimate $\boldsymbol{\theta}^{(1)}$. \begin{Assume} \label{asmpt5} It holds for the estimator $\hat{\boldsymbol{\theta}}^{(1)}$ that $\big\|\boldsymbol{\theta}^{(1)}-\hat{\boldsymbol{\theta}}^{(1)} \big\|_\infty < C \sqrt{\frac{\log({T \lor p})}{L}}$ with probability approaching $1$ as $T \rightarrow \infty$ and $p \rightarrow \infty$, where $C>0$ is a constant. \end{Assume} Assumption \ref{asmpt5} states the condition on the estimation error bound in the $\ell_\infty$-norm. This is in line with the estimation errors presented in Proposition 4.1 of \citet{basu2015regularized} and Lemma 15 of \citet{wang2019localizing}, in which a sparsity assumption is imposed on the VAR coefficient matrices.
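The quantities $\mathcal{M}(f_j)$ and $\boldsymbol{\mathfrak{m}}(f_j)$ in Assumption \ref{asmpt1} can be evaluated numerically for a given stable VAR(1) segment via the standard spectral density formula $f(\kappa) = (2\pi)^{-1}(I - A e^{-\sqrt{-1}\kappa})^{-1}\,\Sigma\,(I - A e^{-\sqrt{-1}\kappa})^{-*}$; a grid-based sketch (function names are ours):

```python
import numpy as np

def var1_spectral_density(A, Sigma, kappa):
    """Spectral density f(kappa) of a stable VAR(1) x_t = A x_{t-1} + eps_t."""
    p = A.shape[0]
    H = np.linalg.inv(np.eye(p) - A * np.exp(-1j * kappa))
    return (H @ Sigma @ H.conj().T) / (2 * np.pi)

def spectral_bounds(A, Sigma, n_grid=256):
    """Grid approximation of M(f) and m(f): the extreme eigenvalues of
    f(kappa) over kappa in [-pi, pi]."""
    lam_max, lam_min = -np.inf, np.inf
    for kappa in np.linspace(-np.pi, np.pi, n_grid):
        eig = np.linalg.eigvalsh(var1_spectral_density(A, Sigma, kappa))
        lam_max = max(lam_max, eig[-1])
        lam_min = min(lam_min, eig[0])
    return lam_max, lam_min
```

For $A = 0.5\,I$ and $\Sigma = I$, the density is $f(\kappa) = (2\pi)^{-1}|1-0.5e^{-\sqrt{-1}\kappa}|^{-2} I$, so both bounds are strictly positive and finite, consistent with Assumption \ref{asmpt1}.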
For instance, when $\boldsymbol{\theta}^{(1)}$ is assumed to be sparse with $\|\boldsymbol{\theta}^{(1)}\|_0=k$, its lasso estimator $\hat{\boldsymbol{\theta}}^{(1)}$ satisfies $\big\|\boldsymbol{\theta}^{(1)}-\hat{\boldsymbol{\theta}}^{(1)} \big\|_2 \leq c \sqrt{k} \sqrt{\frac{\log ({T \lor p})}{T}}$, where $\hat{\boldsymbol{\theta}}^{(1)}$ is obtained from a sample of size $T$. When the sparsity $k$ is fixed, this estimation error bound in the $\ell_2$-norm implies Assumption \ref{asmpt5}. We now present our main theoretical results. The following theorem gives conditions on the lasso penalty that ensure the procedure asymptotically controls the familywise error when there is no anomaly. \begin{Thm} \label{theorem1} Let Assumptions \ref{asmpt1}-\ref{asmpt2} hold. If no anomaly exists, then for a tuning parameter $\lambda = C_3 \sqrt{L (2\log{p} + \log{T})}$ with a constant $C_3$ large enough, we have \begin{align*} P\bigg(\max_{ J \in \mathbb{J}_{T, p} (L)} T^{\text{lasso}}(J) \leq \lambda^{\text{thr}} \bigg) & \geq P\bigg(\max_{ J \in \mathbb{J}_{T, p} (L)} T^{\text{lasso}}(J) = 0\bigg) \\ & \geq 1-C_4 \exp(-C_5 (2\log{p} + \log{T})), \end{align*} where $C_4, C_5 >0$, $\lambda^{\text{thr}}$ is strictly positive and $\lambda$ is the tuning parameter controlling the penalty term of the lasso regression in \eqref{lasso}. \end{Thm} Theorem \ref{theorem1} makes it clear that our result applies to any positive threshold $\lambda^{\text{thr}}$. In the proof of Theorem \ref{theorem1}, which can be found in the supplementary material, we show that the familywise error is controlled under an appropriate tuning parameter $\lambda$. We now turn to the asymptotics of the test statistic under the alternative. \begin{Thm} \label{theorem2} Let Assumptions \ref{asmpt1}-\ref{asmpt4} hold.
If there exists an anomaly, then with a tuning parameter $\lambda = C_2 \sqrt{L (2\log{p} + \log{T})}$ for a large enough $C_2$, as $T \rightarrow \infty$, \begin{align*} P\bigg(\max_{ J \in \mathbb{J}_{T, p} (L)} T^{\text{lasso}}(J) \leq \lambda^{\text{thr}} \bigg) \rightarrow 0 \end{align*} and \begin{equation*} P(\hat{I} \cap [\eta_1, \eta_2] \neq \emptyset) \rightarrow 1, \end{equation*} where the threshold $\lambda^{\text{thr}}$ is of order $\sqrt{L\cdot\log (p \lor T)}$, the estimated anomaly $\hat{I}$ is as in \eqref{ihat} and $\lambda$ is the tuning parameter controlling the penalty term of the lasso regression in \eqref{lasso}. \end{Thm} Theorem \ref{theorem2} states that the test statistic of an interval in the candidate set exceeds the pre-specified threshold if the interval is located within the true anomaly. In other words, it shows that the individual test has asymptotic power one. The following theorem shows that our method has greater power to detect a sparse collective anomaly. \begin{Thm} \label{thm2-1} Assume that $\boldsymbol{x}_t$ follows \eqref{model_a1} and let Assumptions \ref{asmpt1}-\ref{asmpt4} hold. Let the null hypothesis hold; then for any $\{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset \}$, the test statistic of the OLS method in \eqref{naive} follows a $\chi^2_{p^2}$ distribution. Consequently, we have an asymptotic level-$\alpha$ test if the null hypothesis is rejected for $T(J) > \chi^2_{p^2; (1-\alpha)}$, where $\chi^2_{p^2; (1-\alpha)}$ is the $(1-\alpha)$-quantile of the chi-square distribution with $p^2$ degrees of freedom.
Under the alternative, for any $J \in \mathbb{J}_{T, p} (L)$ such that $J \subseteq [\eta_1, \eta_2]$, the upper bound on the power of the OLS method is given by \begin{equation} \label{upperbound} \frac{E\big( \|\boldsymbol{Y}_{J} \|_2^2 - \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2\big)}{W_p}, \end{equation} where $W_p=O_p(p)$. \end{Thm} Note that $W_p$ in \eqref{upperbound} is linked to the false positive rate, as it approximates $\chi^2_{p^2; (1-\alpha)} - p^2$. See the proof in the supplementary material for further details. Theorem \ref{thm2-1} describes the asymptotic behaviour of the test statistic of the OLS method under both the null and the alternative hypothesis. Furthermore, Theorem \ref{thm2-1} implies that the test statistic built on the lasso estimator can detect weaker anomalies than the one built on the OLS estimator when the change is sparse. The intuition behind this is that the test statistic of the OLS method in \eqref{naive} can be written as \begin{equation} \label{naive_alt} \|\boldsymbol{Y}_{J} \|_2^2 - \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 + \big\{ \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 - \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}\hat{\boldsymbol{\Theta}} \|_2^2 \big\}, \end{equation} and $E(\|\boldsymbol{Y} \|_2^2-\|\boldsymbol{Y}-\boldsymbol{X}{\boldsymbol{\Theta}} \|_2^2)$ needs to be at least as large as $O_p(p)$ to achieve high power.
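The level-$\alpha$ OLS test of Theorem \ref{thm2-1} is straightforward to implement when $\Sigma_{\varepsilon}$ is the identity; a sketch (function names are ours, and the statistic matches \eqref{naive}):

```python
import numpy as np
from scipy.stats import chi2

def ols_stat(Y, X):
    """T(J) = ||Y||^2 - ||Y - X Theta_hat||^2 with the least squares fit."""
    theta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ theta_hat
    return np.sum(Y ** 2) - np.sum(resid ** 2)

def ols_test(Y, X, alpha=0.05):
    """Reject when T(J) exceeds the (1 - alpha)-quantile of chi2 with p^2 df."""
    p = Y.shape[1]
    return ols_stat(Y, X) > chi2.ppf(1 - alpha, df=p ** 2)
```

Because $T(J)$ equals the squared norm of the projection of $\boldsymbol{Y}_J$ onto the column space of $\boldsymbol{X}_J$, it is always nonnegative, and it equals $\|\boldsymbol{Y}_J\|_2^2$ under a perfect fit.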
By comparison, if we denote the lasso estimator of $\boldsymbol{\Theta}$ by $\hat{\boldsymbol{\Theta}}$, then the test statistic of the lasso method in \eqref{lasso} can be written as \begin{align*} \numberthis \label{lasso_alt} &\|\boldsymbol{Y}_{J} \|_2^2 -\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 - \lambda\|\boldsymbol{\Theta}\|_1 + \big\{ \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 + \lambda\|\boldsymbol{\Theta}\|_1 - \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}\hat{\boldsymbol{\Theta}} \|_2^2 - \lambda\|\hat{\boldsymbol{\Theta}}\|_1 \big\}. \end{align*} Noting that the term in braces in \eqref{lasso_alt} is nonnegative, the lasso-based test statistic requires that $\|\boldsymbol{Y} \|_2^2-\|\boldsymbol{Y}-\boldsymbol{X}{\boldsymbol{\Theta}} \|_2^2$ should be at least as large as $O_p(\lambda\|\boldsymbol{\Theta}\|_1)$ with $\lambda = C_2 \sqrt{L(2\log{p} + \log{T})}$. The following two corollaries state that the assertions in Theorems \ref{theorem1}-\ref{theorem2} remain true if $\boldsymbol{\theta}^{(1)}$ is replaced by an estimator ${\hat{\boldsymbol{\theta}}^{(1)}}$ that satisfies the condition in Assumption \ref{asmpt5}. \begin{Cor} \label{cor1} Theorem \ref{theorem1} holds with a different constant if ${\hat{\boldsymbol{\theta}}^{(1)}}$ is used in calculating the test statistic instead of the true parameter $\boldsymbol{\theta}^{(1)}$, where ${\hat{\boldsymbol{\theta}}^{(1)}}$ is an estimator fulfilling Assumption \ref{asmpt5}. \end{Cor} \begin{Cor} \label{cor2} Theorem \ref{theorem2} holds with a different constant if ${\hat{\boldsymbol{\theta}}^{(1)}}$ is used in calculating the test statistic instead of the true parameter $\boldsymbol{\theta}^{(1)}$, where ${\hat{\boldsymbol{\theta}}^{(1)}}$ is an estimator fulfilling Assumption \ref{asmpt5}.
\end{Cor} The proofs of Theorems \ref{theorem1}-\ref{thm2-1} and Corollaries \ref{cor1}-\ref{cor2} can be found in the supplementary material. \section{Simulation study} \label{sec4} \subsection{Preliminaries} We compare the performance of our lasso-based approach with the OLS method described in Section \ref{sec2.2}. Whilst there are other methods for detecting changes in a VAR model, such as those of \citet{safikhani2020joint} and \citet{bai2020multiple}, they are not designed for the collective anomaly setting that we consider. For completeness, we compare their performance with ours in the supplementary material. Perhaps because they are not designed for the collective anomaly setting, these alternative methods perform substantially worse than ours, particularly when the underlying matrix $A^{(1)}$ is dense but the change is sparse. In practice, the underlying parameter $A^{(1)}$ is often unknown and needs to be estimated. As the accuracy of our method then depends on how accurately we can estimate $A^{(1)}$, we consider two extreme cases that give upper and lower bounds on its performance: $A^{(1)}$ is known, and $A^{(1)}$ is estimated from a relatively small amount of data. The threshold of each test is selected as the $99\%$ quantile of the test statistics obtained from 100 simulation runs performed under the null. For the error variance, we set $\Sigma_{\varepsilon}$ to be the identity matrix. In the following sections, we report the results for the case when $\Sigma_{\varepsilon}$ is known; the results for the case when $\Sigma_{\varepsilon}$ is estimated can be found in the supplementary material. We also look at how the choice of the set of intervals, $\mathbb{J}_{T, p} (L)$, affects performance. We vary both the number of intervals, which we denote by $s$, and the way we choose the intervals, randomly or deterministically, with a pre-determined minimum interval length.
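Both ways of building $\mathbb{J}_{T,p}(L)$ can be sketched in a few lines. The deterministic variant below is a simplified version in the spirit of the seeded construction of \citet{kovacs2020seeded}; the exact shift rule and function names are ours:

```python
import numpy as np

def seeded_intervals(T, L, decay=1.1):
    """Deterministic collection: geometrically decaying lengths (factor 1/decay),
    half-length shifts, all lengths >= L."""
    intervals = set()
    length = T
    while length >= L:
        step = max(1, int(np.ceil(length / 2)))  # half-overlap shifts
        for start in range(0, T - length + 1, step):
            intervals.add((start, start + length))
        length = int(length / decay)
    return sorted(intervals)

def random_intervals(T, L, s, seed=0):
    """Randomly generated collection of s intervals of length at least L."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < s:
        a, b = sorted(rng.integers(0, T + 1, size=2))
        if b - a >= L:
            out.append((a, b))
    return out
```

The seeded collection always contains the full interval $(0, T)$ and has deterministic size, whereas the random collection's coverage of a short anomaly depends on the draw.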
For the deterministic construction of intervals, we use the technique proposed in Definition 1 of \citet{kovacs2020seeded} with the decay parameter $1/a = 1.1, 1.2$. Regardless of how the intervals are chosen, we force the minimum interval length to be greater than $p$ in order to be able to compare our approach with the OLS method. In the following sections, we present the simulation results for two scenarios: (1) $A^{(1)}$ is dense and (2) $A^{(1)}$ is sparse, i.e., the number of non-zero elements is large in (1) and small in (2). Note that, when $A^{(1)}$ is assumed to be unknown, it is estimated from the null region with a ridge or lasso penalty, depending on the given sparsity of $A^{(1)}$. \subsection{Dense $A^{(1)}$} \label{denseA1} We first consider the case when all entries of $A^{(1)}$ are non-zero. The coefficient matrix is randomly generated using the algorithm proposed by \citet{ansley1986note} and implemented in the R package \code{gmvarkit}, which forces the resulting VAR model to be stationary; the range of the entries of $A^{(1)}$ is $[-0.67, 0.58]$. We set $T=500$ and $p=10$, where only a few (five or ten) entries of the VAR coefficient matrix undergo a change in the anomalous interval. We investigate both cases: (1) $A^{(1)}$ is assumed to be known and (2) $A^{(1)}$ is estimated from training data with a ridge penalty. In the latter case, the training data contains the same amount of data as the test data in which we search for an anomaly. Our lasso-based method is implemented using the tuning parameter $\lambda$ presented in Theorems \ref{theorem1}-\ref{theorem2} with the constant $C=0.15$. In the following sections, we consider the single-anomaly and the multiple-anomaly case.
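A simulation design of this kind can be mimicked in a few lines; note that the rescaled-spectral-radius construction below is a simplified stand-in for the Ansley--Kohn algorithm in \code{gmvarkit}, not the method actually used in our experiments:

```python
import numpy as np

def random_stable_var1(p, rho=0.6, seed=0):
    """Random dense VAR(1) coefficient matrix, rescaled so that its spectral
    radius equals rho < 1 (hence the process is stationary)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((p, p))
    return rho * A / np.max(np.abs(np.linalg.eigvals(A)))

def simulate_anomalous_var1(A1, A2, T, eta1, eta2, seed=1):
    """Simulate x_t = A_t x_{t-1} + eps_t with A_t = A2 on [eta1, eta2]
    and A_t = A1 elsewhere, eps_t ~ N(0, I)."""
    p = A1.shape[0]
    rng = np.random.default_rng(seed)
    x = np.zeros((T, p))
    for t in range(1, T):
        At = A2 if eta1 <= t <= eta2 else A1
        x[t] = At @ x[t - 1] + rng.standard_normal(p)
    return x
```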
\subsubsection{Single anomaly} \label{sec.s.a} We consider a single anomaly interval located in the middle of the series, with different lengths; the details can be found in Table \ref{Tab:single_a} and the coefficient matrices are presented in Figure \ref{fig:single_a}. The non-zero entries of $A^{(2)}-A^{(1)}$ are all equal to $\Delta = 0.35$, and $A^{(2)}$ is obtained by adding $\Delta$ to the ten smallest positive entries of $A^{(1)}$. \begin{figure}[h!] \begin{center} \includegraphics[width=12cm, height=5cm]{Amat_1.eps} \end{center} \caption{The underlying coefficient matrices, $(A^{(1)}, A^{(2)}, A^{(1)})$, for the simulation setting in Section \ref{sec.s.a}, where $A^{(2)}$ corresponds to an anomaly.} \label{fig:single_a} \end{figure} \begin{table}[h!] \centering \begin{tabular}{ccccccc} \hline & T & p & $[\eta_1, \eta_2]$ & $\eta_2-\eta_1$ & $\Delta$ & $\|\boldsymbol{\Theta}\|_0$ \\ \hline case 1 & 500 & 10 & $[T(5/11), T(6/11)]$ & $45$ & 0.35 & 10 \\ case 2 & 500 & 10 & $[T(7/15), T(8/15)]$ & $33$ & 0.35 & 10 \\ \hline \end{tabular} \caption{Simulation settings for two cases considered in Section \ref{sec.s.a}, where $\Delta$ is the size of non-zero entries of $\boldsymbol{\Theta}$ and $\|\boldsymbol{\Theta}\|_0$ is the number of non-zero elements of $\boldsymbol{\Theta}$.} \label{Tab:single_a} \end{table} \begin{table}[h!]
\centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{6}{*}{case 1} & random & OLS & 100 & 82 \\ & (s = 1029) & Lasso & 100 & \textbf{94} \\ \cline{2-5} & deterministic & OLS & 100 & 85 \\ & (s = 1029) & Lasso & 100 & \textbf{95}\\ \cline{2-5} & deterministic & OLS & 100 & 85 \\ & (s = 540) & Lasso & 100 & \textbf{94} \\ \hline \multirow{6}{*}{case 2} & random & OLS & 98 & 46 \\ & (s = 1029) & Lasso & \textbf{100} & \textbf{66} \\ \cline{2-5} & deterministic & OLS & 98 & 56\\ & (s = 1029) & Lasso & \textbf{99} & \textbf{75}\\ \cline{2-5} & deterministic & OLS & 98 & 54 \\ & (s = 540) & Lasso & \textbf{99} & \textbf{72} \\ \hline \end{tabular} \caption{Empirical power ($\%$) from 100 simulation runs for two methods in all cases described in Section \ref{sec.s.a}, where $s$ is the number of intervals examined.} \label{tab:single_a_count} \end{table} \begin{table}[h!] \centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{6}{*}{case 1} & random & OLS & 0.45 (0.31) & 10.50 (16.66) \\ & (s = 1029) & Lasso & 0.40 (0.00) & 5.24 (10.34) \\ \cline{2-5} & deterministic & OLS & 0.39 (0.25) & 7.18 (16.31) \\ & (s = 1029) & Lasso & 0.35 (0.16) & 2.63 (9.96) \\ \cline{2-5} & deterministic & OLS & 0.39 (0.33) & 7.15 (16.32) \\ & (s = 540) & Lasso & 0.32 (0.22) & 3.06 (10.85) \\ \hline \multirow{6}{*}{case 2} & random & OLS & 3.42 (6.49) & 28.38 (21.04) \\ & (s = 1029) & Lasso & 2.20 (0.91) & 19.08 (20.54) \\ \cline{2-5} & deterministic & OLS & 1.32 (6.57) & 21.31 (23.20)\\ & (s = 1029) & Lasso & 0.82 (4.67) & 12.46 (20.57) \\ \cline{2-5} & deterministic & OLS & 1.27 (6.58) & 22.17 (23.34) \\ & (s = 540) & Lasso & 0.76 (4.68) & 13.76 (21.34) \\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs for two methods in all cases described in Section \ref{sec.s.a}, where $s$ is the number of
intervals examined.} \label{tab:single_a_meansd} \end{table} As shown in Table \ref{tab:single_a_count}, the lasso-based method tends to detect an anomaly more often than the OLS-based approach in all cases, regardless of how the intervals are chosen and whether $A^{(1)}$ is known or unknown. As expected, compared to the results when the true $A^{(1)}$ is known, both the OLS and the lasso method perform less well when $\hat{A}^{(1)}$ is used. Comparing the randomly and the deterministically chosen intervals with the same size $s=1029$, for both the OLS and the lasso method, the deterministic construction tends to give a slightly lower power when $A^{(1)}$ is known but a similar or slightly larger power when $A^{(1)}$ is estimated. Note that, when $A^{(1)}$ is estimated, the deterministically chosen intervals with the smaller sample size ($s=540$) show a similar or larger power than the randomly chosen intervals with sample size $s=1029$ for both methods, and the difference grows as the anomalous interval becomes shorter (from case 1 to case 2, as presented in Table \ref{Tab:single_a}). Table \ref{tab:single_a_meansd} shows that the lasso method also outperforms the OLS method in terms of the distance between the estimated and the true anomaly and its variance. \subsubsection{Two collective anomalies} \label{ta} We now consider two collective anomalies, $[\eta_1, \eta_2]$ and $[\eta_3, \eta_4]$, where the corresponding coefficient matrices are $A^{(2)}$ and $A^{(3)}$, respectively. $A^{(2)}$ and $A^{(3)}$ are obtained by adding $\Delta_1$ and $\Delta_2$, respectively, to the five smallest positive entries of $A^{(1)}$, and the true coefficient matrices are presented in Figure \ref{fig:two_a}. Two different cases are considered and the details are provided in Table \ref{table_ta}.
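The localisation accuracy reported in Tables \ref{tab:single_a_meansd} and \ref{tab:two_a_meansd} is measured by the Hausdorff distance; assuming the standard definition on the sets of time points making up the estimated and the true anomalous intervals, it can be computed as:

```python
import numpy as np

def hausdorff(I_hat, I_true):
    """Hausdorff distance between two finite sets of time points:
    max over points in one set of the distance to the nearest point
    in the other set, taken in both directions."""
    a = np.asarray(sorted(I_hat), dtype=float)
    b = np.asarray(sorted(I_true), dtype=float)
    d = np.abs(a[:, None] - b[None, :])  # pairwise distance matrix
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The distance is zero exactly when the two point sets coincide, and it grows with both missed and spurious portions of the estimated interval.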
\begin{figure}[!ht] \begin{center} \includegraphics[width=16cm, height=4.5cm]{Amat_2.eps} \end{center} \caption{The underlying coefficient matrices, $(A^{(1)}, A^{(2)}, A^{(1)}, A^{(3)}, A^{(1)})$, for the simulation setting in Section \ref{ta}, where $A^{(2)}$ and $A^{(3)}$ correspond to the anomalies.} \label{fig:two_a} \end{figure} \begin{table}[!ht] \centering \begin{tabular}{cccccccccc} \hline & T & p & $[\eta_1, \eta_2]$ & $[\eta_3, \eta_4]$ & $\eta_2-\eta_1$ & $\eta_4-\eta_3$ & $\Delta_1$ & $\Delta_2$ & $\|\boldsymbol{\Theta}\|_0$ \\ \hline case 1 & 500 & 10 & $[133, 166]$ & $[333, 366]$ & $33$ & $33$ & $0.6$ & $0.6$ & $5$ \\ case 2 & 500 & 10 & $[33, 66]$ & $[433, 466]$ & $33$ & $33$ & $0.5$ & $0.5$ & $5$ \\ \hline \end{tabular} \caption{Simulation settings for two cases considered in Section \ref{ta}, where $\Delta_1=|A^{(2)}-A^{(1)}|$, $\Delta_2=|A^{(3)}-A^{(1)}|$ and $\|\boldsymbol{\Theta}\|_0$ is the number of non-zero elements of $\boldsymbol{\Theta}$.} \label{table_ta} \end{table} \begin{table}[h!]
\centering \begin{tabular}{cccccccccccc} \hline & & & \multicolumn{4}{c}{$A^{(1)}$ is known} & & \multicolumn{4}{c}{$A^{(1)}$ is estimated} \\ \cmidrule(lr){4-7} \cmidrule(lr){9-12} & & & 0 & 1 & \textbf{2} & 3 & & 0 & 1 & \textbf{2} & 3 \\ \hline \multirow{6}{*}{case 1} & random & OLS & 0 & 27 & \textbf{73} & 0 && 3 & \textbf{82} & 15 & 0 \\ & (s = 1944) & Lasso & 0 & 24 & \textbf{76} & 0 && 0 & 48 & \textbf{52} & 0 \\ \cline{2-12} & deterministic & OLS & 0 & 24 & \textbf{76} & 0 && 1 & \textbf{70} & 29 & 0 \\ & (s = 1944) & Lasso & 0 & 12 & \textbf{86} & 2 && 0 & 35 & \textbf{65} & 0 \\ \cline{2-12} & deterministic & OLS & 0 & 26 & \textbf{74} & 0 && 2 & \textbf{72} & 26 & 0 \\ & (s = 1029) & Lasso & 0 & 21 & \textbf{77} & 2 && 0 & 40 & \textbf{60} & 0 \\ \hline \multirow{6}{*}{case 2} & random & OLS & 0 & 4 & \textbf{94} & 2 && 36 & \textbf{58} & 6 & 0 \\ & (s = 1944) & Lasso & 0 & 2 & \textbf{98} & 0 && 7 & \textbf{50} & 43 & 0 \\ \cline{2-12} & deterministic & OLS & 0 & 4 & \textbf{95} & 1 && 31 & \textbf{60} & 9 & 0 \\ & (s = 1944) & Lasso & 0 & 1 & \textbf{96} & 3 && 4 & 43 & \textbf{53} & 0 \\ \cline{2-12} & deterministic & OLS & 0 & 4 & \textbf{95} & 1 && 34 & \textbf{58} & 8 & 0 \\ & (s = 1029) & Lasso & 0 & 1 & \textbf{98} & 1 && 8 & 40 & \textbf{52} & 0 \\ \hline \end{tabular} \caption{Distribution of the number of detected anomalies for two methods in all cases described in Section \ref{ta} over 100 simulation runs, where $s$ is the number of intervals examined.} \label{tab:two_a_count} \end{table} \begin{table}[h!]
\centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{6}{*}{case 1} & random & OLS & 3.01 (2.89) & 15.90 (12.21) \\ & (s = 1944) & Lasso & 2.82 (2.83) & 7.38 (8.56)\\ \cline{2-5} & deterministic & OLS & 2.46 (2.62) & 12.45 (13.04)\\ & (s = 1944) & Lasso & 1.97 (2.68) & 4.41 (7.26) \\ \cline{2-5} & deterministic & OLS & 2.59 (2.57) & 13.07 (13.06) \\ & (s = 1029) & Lasso & 2.43 (2.66) & 5.11 (7.77)\\ \hline \multirow{6}{*}{case 2} & random & OLS & 3.93 (5.11) & 12.26 (2.83)\\ & (s = 1944) & Lasso & 2.50 (1.88) & 8.38 (5.44) \\ \cline{2-5} & deterministic & OLS & 1.92 (4.19) & 11.92 (3.76)\\ & (s = 1944) & Lasso & 1.50 (3.80) & 6.66 (6.22) \\ \cline{2-5} & deterministic & OLS & 1.98 (4.22) & 12.02 (3.58) \\ & (s = 1029) & Lasso & 1.44 (3.68) & 6.79 (6.17) \\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs for two methods in all cases described in Section \ref{ta}, where $s$ is the number of intervals examined.} \label{tab:two_a_meansd} \end{table} To detect multiple anomalies, the procedure presented in Algorithm \ref{algo2} is applied. From Table \ref{tab:two_a_count}, we draw similar conclusions to those from the single anomaly case in Section \ref{sec.s.a}. When the true $A^{(1)}$ is known, both the OLS and the lasso method give better results than when $A^{(1)}$ is assumed to be unknown and estimated. For both methods and in both cases ($A^{(1)}$ known and estimated), the deterministic settings ($s=1029$ and $s=1944$) tend to return better results than the random setting with sample size $s=1944$. In Table \ref{tab:two_a_meansd}, the lasso method returns a smaller mean and standard deviation of the Hausdorff distance, regardless of how the segments are chosen and whether $A^{(1)}$ is known or not.
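The pruning loop of Algorithm \ref{algo2} used in this experiment reduces to a short greedy pass over the candidate set; a sketch with intervals represented as closed pairs $(a, b)$:

```python
def detect_multiple(intervals, stats, thr):
    """Greedy pruning of Algorithm 2: repeatedly keep the interval with the
    largest statistic above thr, then drop every candidate overlapping it."""
    cand = [(iv, s) for iv, s in zip(intervals, stats) if s > thr]
    detected = []
    while cand:
        best, _ = max(cand, key=lambda t: t[1])
        detected.append(best)
        # keep only candidates disjoint from the newly detected anomaly
        cand = [(iv, s) for iv, s in cand
                if iv[1] < best[0] or iv[0] > best[1]]
    return detected
```

Each pass removes at least the detected interval itself, so the loop terminates, and the detected anomalies are pairwise disjoint by construction.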
\subsection{Sparse $A^{(1)}$} \label{sec4.2.1} \begin{table}[!ht] \centering \begin{tabular}{cccccccc} \hline & T & p & $[\eta_1, \eta_2]$ & $\eta_2-\eta_1$ & $\|A^{(1)}\|_\infty$ & $\|A^{(2)}\|_\infty$ & $\|\boldsymbol{\Theta}\|_0$ \\ \hline case 1 & 500 & 20 & $[T(4/9), T(5/9)]$ & $55$ & $0.6$ & $0.05$ & $19$\\ case 2 & 500 & 20 & $[T(6/13), T(7/13)]$ & $39$ & $0.6$ & $0.05$ & $19$\\ \hline \end{tabular} \caption{Simulation setting for two cases considered in Section \ref{sec4.2.1}, where $\|A^{(1)}\|_\infty$ and $\|A^{(2)}\|_\infty$ are the size of non-zero elements in $A^{(1)}$ and $A^{(2)}$, respectively, and $\|\boldsymbol{\Theta}\|_0$ is the number of non-zero elements of $\boldsymbol{\Theta}$.} \label{Tab:single_a_sps} \end{table} \begin{figure}[!ht] \begin{center} \includegraphics[width=12cm, height=5cm]{Amat_3.eps} \end{center} \caption{The underlying coefficient matrices, $(A^{(1)}, A^{(2)}, A^{(1)})$, for the simulation setting in Section \ref{sec4.2.1}, where $A^{(2)}$ corresponds to an anomaly.} \label{fig:single_a_sps} \end{figure} \begin{table}[!ht] \centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{4}{*}{case 1} & random & OLS & 100 & 90 \\ & (s = 499) & Lasso & 100 & \textbf{100} \\ \cline{2-5} & deterministic & OLS & 100 & 95 \\ & (s = 499) & Lasso & 100 & \textbf{100}\\ \hline \multirow{4}{*}{case 2} & random & OLS & 100 & 30 \\ & (s = 499) & Lasso & 100 & \textbf{58} \\ \cline{2-5} & deterministic & OLS & 100 & 50 \\ & (s = 499) & Lasso & 100 & \textbf{92}\\ \hline \end{tabular} \caption{Empirical power ($\%$) from 100 simulation runs for two methods in all cases described in Section \ref{sec4.2.1}, where $s$ is the number of intervals examined.} \label{Tab:single_a_sps_1} \end{table} \begin{table}[h!]
\centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{4}{*}{case 1} & random & OLS & 1.43 (0.53) & 6.00 (13.03) \\ & (s = 499) & Lasso & 1.31 (0.45) & 1.28 (0.21) \\ \cline{2-5} & deterministic & OLS & 0.43 (0.24) & 2.70 (9.71)\\ & (s = 499) & Lasso & 0.34 (0.12) & 0.37 (0.13)\\ \hline \multirow{4}{*}{case 2} & random & OLS & 3.26 (1.19) & 32.80 (20.89) \\ & (s = 499) & Lasso & 3.21 (1.18) & 20.12 (22.48) \\ \cline{2-5} & deterministic & OLS & 0.43 (0.56) & 23.39 (23.12)\\ & (s = 499) & Lasso & 0.30 (0.19) & 4.03 (12.56)\\ \hline \end{tabular} \caption{The mean (standard deviation) of the Hausdorff distance from 100 simulation runs for the two methods in all cases described in Section \ref{sec4.2.1}, where $s$ is the number of intervals examined.} \label{Tab:single_a_sps_2} \end{table} We now consider the case when $A^{(1)}$ is sparse, i.e.\ only a small number of entries are non-zero. Similar to the settings used in \citet{safikhani2020joint}, the 1-off diagonal values of the coefficient matrix are non-zero as shown in Figure \ref{fig:single_a_sps}. The details of the simulation setting are given in Table \ref{Tab:single_a_sps}. Tables \ref{Tab:single_a_sps_1} and \ref{Tab:single_a_sps_2} allow for similar interpretations to those given in Section \ref{denseA1}. \section{Data analysis} \label{sec5} \subsection{Yellow cab demand in New York City} \begin{figure}[ht!] \begin{center} \includegraphics[width=15cm, height=15cm]{nyc_taxi_1.eps} \end{center} \caption{(Top) The differenced yellow taxi pickups recorded from March 11, 2019 to March 6, 2020 in Manhattan. (Middle) The 20 largest test statistics with the corresponding intervals. The blue horizontal dashed line indicates the threshold. (Bottom) The portion of the top plot indicated with dashed green vertical lines. Red vertical lines show the estimated anomaly, [Nov $3, 2019$ $00:00:00$, Nov $3, 2019$ $06:30:00$].} \label{fig:nyc_taxi} \end{figure} To demonstrate the usefulness of our method, we now turn to real data applications. In our first example, we apply our method to the yellow taxi trip data that was previously analysed by \citet{safikhani2020joint}. The data can be downloaded from the New York City Taxi and Limousine Commission (TLC) Database (\url{https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page}). This data consists of the number of yellow taxi pick-ups recorded from 10 randomly selected zones in Manhattan, a borough in New York City. We aggregate the number of yellow taxi pick-ups every 30 minutes from March 11, 2019 to March 6, 2020, which results in 17376 time points. We seek to detect whether a collective anomaly exists after removing the first-order nonstationarity from the data. We use the differenced version of the time series, using the first 4344 data points to estimate the underlying VAR coefficient $A^{(1)}$ by applying a lasso penalty.
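The bookkeeping of this split after first differencing can be sketched as follows; the function name and interface are illustrative only, not part of our implementation:

```python
def split_series(n_raw, n_train=4344, n_thr=4344):
    """Difference a length-n_raw series and split it into the three parts
    used in the taxi analysis: coefficient estimation, threshold
    calibration, and anomaly detection. Returns the segment lengths
    (a sketch of the bookkeeping only; the data themselves are not needed)."""
    n_diff = n_raw - 1                  # first differencing loses one point
    n_test = n_diff - n_train - n_thr   # remaining points are used for detection
    return n_train, n_thr, n_test
```

With 17376 raw observations, this yields the 4344/4344/8687 split used below.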
The next 4344 data points are used to obtain a threshold, where the threshold is selected as the 99$\%$ quantile of the test statistics from 100 deterministically chosen intervals. Then we detect a single anomaly using the remaining 8687 data points. \begin{figure}[ht!] \begin{center} \includegraphics[width=14cm, height=10cm]{locid.eps} \end{center} \caption{Taxi demand (Top) and differenced taxi demand (Bottom) for the 2nd (black), 4th (red) and 6th (green) zones in Manhattan recorded from October 24, 2019 to November 5, 2019.} \label{fig:nyc_taxi_1} \end{figure} The top plot in Figure \ref{fig:nyc_taxi} shows that two consecutive spikes are observed between October 7 and November 6 in 2019, where the interval within the green vertical lines is enlarged in the bottom plot. From the middle plot, we see that the largest test statistic is obtained for a small interval which includes the spikes shown in the top plot. The bottom plot shows that the spikes occur between 12am and 2am on November 3, 2019, and our method detects an anomaly between 12am and 6:30am on November 3, 2019. From Figure \ref{fig:nyc_taxi_1}, we see that a sudden high demand occurred at the $4^{\text{th}}$ and $6^{\text{th}}$ zones located in Downtown Manhattan (also known as Lower Manhattan). This anomaly seems to be related to traffic management for the 2019 New York City Marathon, which took place on November 3, 2019 in New York City. We can interpret that there was a sudden high demand in Downtown Manhattan, where the marathon route did not pass through, and this changes the relationship between the 10 zones we investigate. \subsection{EEG Data} \begin{figure}[ht!] \begin{center} \includegraphics[width=16cm, height=11cm]{eeg_online.eps} \end{center} \caption{(Top) EEG data recorded at 18 different channels. The blue solid vertical line is the time at which the neurologist thinks the seizure starts and the red dashed vertical line is the anomaly detected in the online setting.
(Bottom) The maximum test statistic at each time point obtained through Algorithm \ref{algo_online}, which stops when the anomaly is detected. The horizontal red line presents the pre-specified threshold.} \label{fig:eeg_online} \end{figure} We now show how our method can be used as an online changepoint detection method. We demonstrate this on electroencephalogram (EEG) data collected from an epileptic patient. Other ways of analysing this dataset can be found in \citet{ombao2001automatic}, \citet{ombao2005slex} and \citet{schroder2019fresped}. The data consist of brain electrical potentials recorded by placing electrodes on 18 locations on the scalp of a patient. The EEG signals are recorded during an epileptic seizure, thus there exists a visible change in the data as shown in Figure \ref{fig:eeg_online}. The brain wave patterns are recorded over $500$ seconds at a sampling rate of 100 Hz (i.e.\ 100 points per second). As done in \citet{safikhani2020joint}, to speed up computation, we use 2 observations per second, which reduces the number of time points to $T=1000$. We separate the data into a training set of size $T_1=600$ and a test set of size $T_2=400$. The first half of the training set is used to estimate the underlying VAR coefficient $A^{(1)}$ by applying a lasso penalty and the second half is used to obtain a threshold, chosen as the 99$\%$ quantile of the test statistics computed from 327 deterministically chosen intervals. Then we perform the single anomaly detection using the test set. As mentioned in Section \ref{sec1}, here we show how our method can be applied in the online framework. We refer the reader to \citet{fisch2020real} and \citet{yu2021optimal} for recent work on online detection algorithms for change-points or anomalies. In the online setting, we make sequential decisions about the occurrence of an anomaly whenever a new observation is obtained.
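The sequential scan over dyadic look-back windows can be sketched as follows; here `stat(s, t)` is a placeholder for the interval test statistic $T^{\text{lasso}}$ on $[s, t]$, and the function name is ours, not part of any released implementation:

```python
import math

def online_detect(stat, threshold, t_max, t0=10):
    """At each new time t, scan the look-back windows [t - 2^(j-1), t]
    for j = 1, ..., floor(log2(t)), and stop at the first time some
    window's test statistic exceeds the pre-specified threshold.
    Returns that time t, or None if nothing is flagged by t_max."""
    for t in range(t0 + 1, t_max + 1):
        n_layers = int(math.floor(math.log2(t)))
        for j in range(1, n_layers + 1):
            s = t - 2 ** (j - 1)          # window start: look back 2^(j-1) points
            if stat(s, t) > threshold:    # flag at the first exceedance
                return t
    return None
```

With an idealised statistic that fires as soon as the window touches the anomaly, detection is immediate; in practice the statistic needs a few points of signal to accumulate, which is the source of the delay observed below.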
Our algorithm for online anomaly detection is similar to Algorithm 2 of \citet{yu2021optimal}. The detailed procedure is given in Algorithm \ref{algo_online}, where we set $t_0=10$. As shown in Figure \ref{fig:eeg_online}, an anomaly is estimated at $t=119$, which has a detection delay of $5$ time points compared to $t=114$, at which the neurologist states that a seizure takes place. \begin{center} \begin{algorithm}[H] \setstretch{1} \SetAlgoLined \textbf{INPUT}: $\boldsymbol{X}$, $\lambda^{\text{thr}}$, $t_0$ \\ $t \leftarrow t_0$ \\ FLAG $\leftarrow 0$\\ \While{\normalfont FLAG $= 0$}{ $t \leftarrow t+1$ \\ $J \leftarrow \floor{\frac{\log t}{\log 2}}$ \\ $ j \leftarrow 1$\\ \While{\normalfont FLAG = 0 and $j \leq J$}{ $s_j \leftarrow t - 2^{j-1} $\\ $I \leftarrow [s_j, t] $\\ FLAG $\leftarrow \mathbb{1} \{ T^{\text{lasso}}(I) > \lambda^{\text{thr}} \}$\\ $j \leftarrow j+1$\\ } } \textbf{OUTPUT} : $t$. \caption{Online anomaly detection} \label{algo_online} \end{algorithm} \end{center} \section{Discussion} \label{sec6} Our lasso-based approach is motivated by data where we have substantially more information about the current or normal behaviour of the time series than about any anomaly or epidemic change.
Thus it is natural to model the change as sparse, and hence a lasso-based test is more appropriate than a standard likelihood-ratio or OLS-based test. We provide numerical evidence that our method outperforms existing competitors in detecting a sparse change when $A^{(1)}$ is either dense or sparse. Our method searches a set of local segments to detect an anomalous interval, whereas the existing change detection methodologies for the VAR model perform global optimisation. As illustrated in the real data examples, the local optimisation aspect of our method gives the flexibility to extend it to the online setting. \section{Technical proofs} \label{pf} We first give a preparatory lemma and then move on to the proofs of the main theorems and corollaries presented in Section \ref{sec3}. \begin{Lem} \label{lem1-1} Let Assumptions \ref{asmpt1}-\ref{asmpt4} hold. For any $J \in \mathbb{J}_{T, p} (L)$ such that $J \subseteq [\eta_1, \eta_2]$ and any $|J| \succsim c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_1 \log p$, with probability at least $1-T^{-6}$, we have \begin{equation*} {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \boldsymbol{X}_{J} {\boldsymbol{\Theta}} \geq c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2 |J| \cdot \|{\boldsymbol{\Theta}}\|^2_2 - c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_3 \log(p) \cdot \|{\boldsymbol{\Theta}}\|^2_1, \end{equation*} where $c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_1, c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2, c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_3 >0$ are constants depending on $\boldsymbol{\mathfrak{m}}$ and $\mathcal{M}$. \end{Lem} \paragraph{Proof of Lemma \ref{lem1-1}} The argument follows the proof of Lemma 13-(b) of \citet{wang2019localizing}. \paragraph{Proof of Theorem \ref{theorem1}} By the construction of the candidate set $\mathbb{I}^*$, it is sufficient to show that $T^{\text{lasso}}(J) \rightarrow 0$ under the null, where $T^{\text{lasso}}(J)$ is as in \eqref{lasso}.
The KKT conditions for the lasso problem in \eqref{lasso} state that $\hat{\boldsymbol{\Theta}}$ is optimal if and only if there exists a subgradient $\hat{s}$ such that \begin{equation} \label{kkt} \boldsymbol{X}_{J}^\top \Big(\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}\hat{\boldsymbol{\Theta}}\Big) = \lambda \hat{s}_{J}, \end{equation} where $\hat{s}_{J} = \partial\big|\hat{\boldsymbol{\Theta}}\big|_1$ is a subgradient of the $l_1$ norm evaluated at $\hat{\boldsymbol{\Theta}}$, which takes the form \begin{equation} \label{subgrad} \hat{s}_{J} = \text{sgn}(\hat{\boldsymbol{\Theta}}) \; \text{for} \; \hat{\boldsymbol{\Theta}} \neq 0, \quad |\hat{s}_{J}| \leq 1 \; \text{otherwise}. \end{equation} As $\boldsymbol{Y}=\boldsymbol{E}$ under the null, \eqref{kkt} and \eqref{subgrad} give a condition on $\boldsymbol{X}$ and $\boldsymbol{E}$ which ensures that we estimate $\boldsymbol{\Theta}=\boldsymbol{0}$, as follows: for any $J \in \mathbb{J}_{T, p} (L)$ such that $J \cap [\eta_1, \eta_2] = \emptyset$, \begin{equation} \label{toshow} \max_{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset} \Big\|\boldsymbol{X}_{J}^\top\boldsymbol{E}_{J}\Big\|_\infty \leq \lambda.
\end{equation} Recall that $\mathcal{X}_{J}$ contains the unvectorised covariates as follows: \begin{equation*} \mathcal{X}_{J} =\begin{pmatrix} 0 \\ 0\\ \vdots \\ 0 \\ \boldsymbol{x}_{t+1}^\prime \\ \vdots \\ \boldsymbol{x}_{t+h-1}^\prime \end{pmatrix}_{(2h-1) \times p} \end{equation*} and note that \begin{align*} \max_{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset} \Bigg\|\frac{\boldsymbol{X}_{J}^\top\boldsymbol{E}_{J}}{|J|}\Bigg\|_\infty & = \max_{\{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset \}, 1 \leq i, j \leq p} \Bigg| e^{\prime}_i \bigg( \frac{\mathcal{X}^{\prime}_{J} E_{J}}{|J|}\bigg) e_j\Bigg|, \\ \numberthis \label{choices} &\leq \max_{\{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset \}, 1 \leq i, j \leq p} \Bigg| e^{\prime}_i \bigg( \frac{\mathcal{X}^{\prime}_{J} E_{J}}{L}\bigg) e_j\Bigg|, \end{align*} where $e_i \in \mathbb{R}^p$ is the vector whose $i$-th element equals $1$ and whose other elements are zero. Similar to the argument used in Proposition $2.4(b)$ of \citet{basu2015regularized}, for fixed $i, j, J$, there exist $k_1, k_2 >0$ such that for all $\gamma >0$: \begin{equation} \label{bm} P\Bigg(\bigg| e^{\prime}_i \bigg( \mathcal{X}^{\prime}_{J} E_{J}\bigg) e_j\bigg| > k_1 L \gamma \Bigg) \leq 6 \exp (-k_2 L \min (\gamma, \gamma^2)). \end{equation} As the number of intervals contained in $\mathbb{J}_{T, p} (L)$ is of the order $O(T)$ when they are constructed through the seeded interval idea of \citet{kovacs2020seeded}, we consider the union over $p^2\cdot T$ possible choices of $i, j, J$ in \eqref{choices}. Then the result follows by setting $\gamma = k_3 \sqrt{\frac{2\log{p} + \log{T}}{L}}$ for a large enough $k_3 > 0$.
Therefore, with probability at least $1-C_4 \exp(-C_5 (2\log{p} + \log{T}))$, we have \begin{equation} \label{ub} \max_{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset} \Big\|\boldsymbol{X}_{J}^\top\boldsymbol{E}_{J}\Big\|_\infty \leq C_3 \sqrt{L \log({T \lor p})}, \end{equation} where $C_4>0$ and $C_5>0$. Taking $\lambda = C_3 \sqrt{L \log({T \lor p})}$ with a large enough $C_3 > 0$ in \eqref{toshow}, we obtain $\hat{\boldsymbol{\Theta}}=\boldsymbol{0}$ with probability at least $1-C_4 \exp(-C_5 (2\log{p} + \log{T}))$. Therefore, under the null, $T^{\text{lasso}}(J) \rightarrow 0$ for all $J \in \mathbb{J}_{T, p} (L)$ such that $ J \cap [\eta_1, \eta_2] = \emptyset$ with probability at least $1-C_4 \exp(-C_5 (2\log{p} + \log{T}))$, where $C_4, C_5 >0$. We emphasise that \eqref{ub} applies to any serially uncorrelated Gaussian errors $\boldsymbol{\varepsilon}_t \stackrel{\text{i.i.d.}}{\sim} N(\boldsymbol{0}, \Sigma_{\varepsilon})$, as the constant $k_1$ in \eqref{bm} presented in Proposition $2.4(b)$ of \citet{basu2015regularized} has the form \begin{equation*} k_1 = 2 \pi \Lambda_{\max}(\Sigma_{\varepsilon}) \bigg( 1+ \frac{1+\mu_{\max}(\mathcal{A})}{\mu_{\min}(\mathcal{A})}\bigg), \end{equation*} where $\Lambda_{\max}(\Sigma_{\varepsilon})$ is the maximum eigenvalue of $\Sigma_{\varepsilon}$, $\mu_{\max}(\mathcal{A})=\max_{|z|=1} \Lambda_{\max} (\mathcal{A}^*(z) \mathcal{A}(z))$, $\mu_{\min}(\mathcal{A})=\min_{|z|=1} \Lambda_{\min} (\mathcal{A}^*(z) \mathcal{A}(z))$ and $\mathcal{A}(z)=\mathit{I}_p - \mathbf{A}^{(1)}z$ for the VAR(1) model and $\mathcal{A}(z)=\mathit{I}_p - \sum_{d=1}^{q} \mathbf{A}^{(1)}_d z^d$ for the VAR(q) model. Therefore, even if $\Sigma_{\varepsilon}$ is not an identity matrix, we can obtain \eqref{ub} with a different constant $C_3$ which depends on the maximum eigenvalue of $\Sigma_{\varepsilon}$.
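The KKT argument above hinges on a standard property of the lasso: once the penalty dominates the correlation between the design and the noise, the penalised fit returns the zero matrix. A minimal coordinate-descent sketch of this effect, for the plain objective $\|y - X\theta\|_2^2 + \lambda\|\theta\|_1$ (a generic illustration, not the paper's exact estimator):

```python
def soft_threshold(z, a):
    # S(z, a) = sign(z) * max(|z| - a, 0)
    return (z - a) if z > a else (z + a) if z < -a else 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for  ||y - X theta||_2^2 + lam * ||theta||_1.
    A self-contained pure-Python sketch on lists of floats."""
    n, p = len(X), len(X[0])
    theta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * theta[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = 2.0 * sum(X[i][j] * r[i] for i in range(n))
            sq = 2.0 * sum(X[i][j] ** 2 for i in range(n))
            theta[j] = soft_threshold(zj, lam) / sq if sq > 0 else 0.0
    return theta
```

Under the null (pure-noise response), choosing $\lambda$ above $2\|X^\top y\|_\infty$ forces the estimate to be exactly zero, mirroring \eqref{toshow}.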
\paragraph{Proof of Theorem \ref{theorem2}} It is sufficient to prove that, for any $J \in \mathbb{J}_{T, p} (L)$ such that $ J \subseteq [\eta_1, \eta_2]$, with probability approaching $1$ as $T \rightarrow \infty$, \begin{equation} \label{toshowlem2} \big\|\boldsymbol{Y}_{J} \big\|_2^2 - \big\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \big\|_2^2 > \lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}. \end{equation} This is because the other part in equation \eqref{lasso_alt}, \begin{equation} \label{toshowlem2_0} \Big\{ \big\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \big\|_2^2 + \lambda\|\boldsymbol{\Theta}\|_1 - \big\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}\hat{\boldsymbol{\Theta}} \big\|_2^2 - \lambda\|\hat{\boldsymbol{\Theta}}\|_1 \Big\}, \end{equation} is always positive and the left-hand side of \eqref{toshowlem2} dominates \eqref{toshowlem2_0}. We can simplify \eqref{toshowlem2} as \begin{equation} \label{toshowlem2_1} {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \boldsymbol{X}_{J} {\boldsymbol{\Theta}} + 2 {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \boldsymbol{E}_{J} > \lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}. \end{equation} The left-hand side of \eqref{toshowlem2_1} is a Gaussian variable that can be written as $\boldsymbol{\nu}_{J}^\top\boldsymbol{\nu}_{J} + 2 \boldsymbol{\nu}_{J}^\top\boldsymbol{E}_{J} \sim N(\boldsymbol{\nu}_{J}^\top\boldsymbol{\nu}_{J}, 4 \boldsymbol{\nu}_{J}^\top \Sigma_{\varepsilon} \boldsymbol{\nu}_{J})$, where $\boldsymbol{\nu}_{J} = \boldsymbol{X}_{J} {\boldsymbol{\Theta}}$ and $\boldsymbol{\nu}_{J}^\top\boldsymbol{\nu}_{J} \rightarrow \infty$.
Then, for any $g(J) = o(\sqrt{\boldsymbol{\nu}_{J}^\top\boldsymbol{\nu}_{J}})$ that goes to $\infty$, we have the following bound with probability approaching $1$: \begin{equation} \label{bound_vv} \boldsymbol{\nu}_{J}^\top\boldsymbol{\nu}_{J} + 2 \boldsymbol{\nu}_{J}^\top\boldsymbol{E}_{J} \geq \boldsymbol{\nu}_{J}^\top\boldsymbol{\nu}_{J} - g(J) \sqrt{4 \gamma \boldsymbol{\nu}_{J}^\top \boldsymbol{\nu}_{J}}, \end{equation} where $\gamma$ is the maximum eigenvalue of $\Sigma_{\varepsilon}$. The right-hand side of \eqref{bound_vv} is of order $\boldsymbol{\nu}_{J}^\top\boldsymbol{\nu}_{J}$, thus it remains to show that ${\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \boldsymbol{X}_{J} {\boldsymbol{\Theta}}$ exceeds $\lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}$ with probability tending to $1$. From Lemma \ref{lem1-1}, with probability approaching $1$, we have \begin{align*} {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \boldsymbol{X}_{J} {\boldsymbol{\Theta}} &\geq c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2 |J| \cdot \|{\boldsymbol{\Theta}}\|^2_2 - c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_3 \log(p) \cdot \|{\boldsymbol{\Theta}}\|^2_1 \\ &\geq c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2 L \cdot \|{\boldsymbol{\Theta}}\|^2_2 - c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_3 \log(p) \cdot \|{\boldsymbol{\Theta}}\|^2_1, \end{align*} where $c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2, c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_3 >0$, thus it suffices to show \begin{equation} \label{pf_thm2_1} c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2 \|{\boldsymbol{\Theta}}\|^2_2 > c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_3 \frac{\log(p)}{L} \cdot \|{\boldsymbol{\Theta}}\|^2_1 + \frac{\lambda}{L}\|\boldsymbol{\Theta}\|_1 + \frac{\lambda^{\text{thr}}}{L}, \end{equation} as $T, p \rightarrow \infty$.
We can obtain \eqref{pf_thm2_1} as $T, p \rightarrow \infty$ by combining \begin{align*} & (a) \;c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2 \|{\boldsymbol{\Theta}}\|^2_2 > c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_3 \frac{\log(p)}{L} \cdot \|{\boldsymbol{\Theta}}\|^2_1,\\ & (b) \; c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2 \|{\boldsymbol{\Theta}}\|^2_2 > \frac{\lambda}{L}\|\boldsymbol{\Theta}\|_1, \\ & (c) \; c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2 \|{\boldsymbol{\Theta}}\|^2_2 > \frac{\lambda^{\text{thr}}}{L}, \end{align*} where (a) can be shown by using $d_0 \|\boldsymbol{\Theta}\|_2^2 \geq \|\boldsymbol{\Theta}\|_1^2$ from Assumption \ref{asmpt3} and $\frac{\log p}{L} \rightarrow 0$ from Assumption \ref{asmpt2}. By using $d_0 \|\boldsymbol{\Theta}\|_2^2 \geq \|\boldsymbol{\Theta}\|_1^2$, (b) becomes $c^{\boldsymbol{\mathfrak{m}}, \mathcal{M}}_2 \|{\boldsymbol{\Theta}}\|_2 > \frac{\lambda}{L} \sqrt{d_0}$, which can be achieved from ${\frac{\lambda}{L}} = \sqrt{\frac{ C_3 \log(T \lor p)}{L}}$ and $\|\boldsymbol{\Theta}\|_2^2 > C_2 \cdot \frac{\log^{1+\xi}{(T \lor p)}}{L}$ in Assumption \ref{asmpt4}. Similarly, (c) can be obtained from $ \frac{\lambda^{\text{thr}}}{L} = O\Bigg(\sqrt{\frac{\log(T \lor p)}{L}}\Bigg)$ and Assumption \ref{asmpt4}.
We now consider the case where $\Sigma_\varepsilon$ is not an identity matrix.
In that case, \eqref{toshowlem2} becomes \begin{equation*} \boldsymbol{Y}_{J}^\top \Sigma_{\varepsilon}^{-1} \boldsymbol{Y}_{J} - (\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}})^\top \Sigma_{\varepsilon}^{-1} (\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}}) > \lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}, \end{equation*} thus \eqref{toshowlem2_1} becomes \begin{equation} \label{pf_thm2_sigma_eq1} {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \Sigma_{\varepsilon}^{-1} \boldsymbol{X}_{J} {\boldsymbol{\Theta}} + 2 {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top\Sigma_{\varepsilon}^{-1} \boldsymbol{E}_{J} > \lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}, \end{equation} which holds by following the same argument used above with $\boldsymbol{\nu}_{J} = \Sigma_{\varepsilon}^{-1/2} \boldsymbol{X}_{J} {\boldsymbol{\Theta}}$ and different constants, as the left-hand side of \eqref{pf_thm2_sigma_eq1} is a Gaussian random variable bounded below by a term of order $\boldsymbol{\nu}_{J}^\top\boldsymbol{\nu}_{J}$. Lastly, without repeating all the proofs, we argue that the theory we present for known $\Sigma_{\varepsilon}$ can be applied to the case when an estimate of ${\Sigma_{\varepsilon}}$ is used.
If $\hat{\Sigma}_{\varepsilon}$ is used instead of ${\Sigma_{\varepsilon}}$, the left-hand side of \eqref{pf_thm2_sigma_eq1} can be rewritten as \begin{equation} \label{pf_thm2_sigma_eq3} {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \Sigma_{\varepsilon}^{-1} \boldsymbol{X}_{J} {\boldsymbol{\Theta}} + 2 {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top\Sigma_{\varepsilon}^{-1} \boldsymbol{E}_{J}+ {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top (\hat{\Sigma}_{\varepsilon}^{-1}-\Sigma_{\varepsilon}^{-1}) \boldsymbol{X}_{J} {\boldsymbol{\Theta}} + 2 {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top (\hat{\Sigma}_{\varepsilon}^{-1}-\Sigma_{\varepsilon}^{-1}) \boldsymbol{E}_{J}, \end{equation} thus the test depends on the eigenvalues of the measure of the distance between $\hat{\Sigma}^{-1}_{\varepsilon}$ and $\Sigma^{-1}_{\varepsilon}$. If $\hat{\Sigma}^{-1}_{\varepsilon}$ converges to $\Sigma^{-1}_{\varepsilon}$ as the number of observations increases, the last two terms in \eqref{pf_thm2_sigma_eq3} remain under control, thus the same argument goes through with extra constant terms. \paragraph{Proof of Theorem \ref{thm2-1}} It is straightforward that the test statistic of the OLS method in \eqref{naive} has a $\chi^2_{p^2}$ distribution, where the degrees of freedom $p^2$ come from the difference in dimensionality of ${\Theta}_0$ and $\hat{\Theta}$.
Therefore, we get an asymptotic level-$\alpha$ test if the null hypothesis is rejected for $T(J) > \chi^2_{p^2; (1-\alpha)}$, where $\chi^2_{p^2; (1-\alpha)}$ is the $(1-\alpha)$-quantile of the chi-square distribution with $p^2$ degrees of freedom. Using the threshold established above, under the alternative, an upper bound on the power of the OLS method can be obtained as \begin{align*} & P\bigg( T(J) > \chi^2_{p^2; (1-\alpha)} \bigg) \\ & = P\bigg(\|\boldsymbol{Y}_{J} \|_2^2 -\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 + \Big\{ \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 -\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}\hat{\boldsymbol{\Theta}}\|_2^2 \Big\} \geq \chi^2_{p^2; (1-\alpha)}\bigg) \\ \numberthis \label{pf_tm3_1} & = P\bigg(\|\boldsymbol{Y}_{J} \|_2^2 -\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 + \Big\{ \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 -\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}\hat{\boldsymbol{\Theta}}\|_2^2 \Big\} - p^2 \geq \chi^2_{p^2; (1-\alpha)} - p^2 \bigg) \\ \numberthis \label{pf_tm3_2} & \leq \frac{E(\|\boldsymbol{Y}_{J} \|_2^2 -\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2)}{\chi^2_{p^2; (1-\alpha)} - p^2} \\ \numberthis \label{pf_tm3_3} & \approx \frac{E(\|\boldsymbol{Y}_{J} \|_2^2 -\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2)}{\frac{1}{2}\Big(z_{1-\alpha} + \sqrt{2p^2-1}\Big)^2 - p^2}, \end{align*} where $z_{1-\alpha}$ is the $(1-\alpha)$-quantile of the standard Gaussian distribution.
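The final step uses the normal approximation $\chi^2_{k;q} \approx \tfrac{1}{2}(z_q + \sqrt{2k-1})^2$ for large degrees of freedom $k = p^2$. A quick numerical sanity check of this approximation; the Wilson--Hilferty cube-root rule is included only as an independent classical cross-check, not as anything from the paper:

```python
from statistics import NormalDist

def chi2_quantile_sqrt_approx(k, q):
    """chi^2_{k;q} ~ (z_q + sqrt(2k - 1))^2 / 2, the approximation used
    in the power bound (accurate for large degrees of freedom k)."""
    z = NormalDist().inv_cdf(q)
    return 0.5 * (z + (2 * k - 1) ** 0.5) ** 2

def chi2_quantile_wilson_hilferty(k, q):
    """Classical Wilson-Hilferty cube-root approximation, used here
    purely as an independent cross-check of the formula above."""
    z = NormalDist().inv_cdf(q)
    return k * (1 - 2 / (9 * k) + z * (2 / (9 * k)) ** 0.5) ** 3
```

For $p = 20$ (so $k = p^2 = 400$) and $\alpha = 0.05$, the two approximations agree to well under one percent, and both quantiles sit just above the mean $p^2$, which is what makes the denominator of the bound positive.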
The equality in \eqref{pf_tm3_1} is obtained by subtracting $E\Big\{ \|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}{\boldsymbol{\Theta}} \|_2^2 -\|\boldsymbol{Y}_{J}-\boldsymbol{X}_{J}\hat{\boldsymbol{\Theta}}\|_2^2 \Big\} = p^2$ from both sides, the inequality in \eqref{pf_tm3_2} is obtained by using Markov's inequality, and \eqref{pf_tm3_3} follows from the approximation of the chi-square quantile, $\chi^2_{p^2; (1-\alpha)} \approx \frac{1}{2}\Big(z_{1-\alpha} + \sqrt{2p^2-1}\Big)^2$. Therefore, the upper bound on the power of the OLS method can be obtained as in \eqref{upperbound}, which implies that $E(\|\boldsymbol{Y} \|_2^2-\|\boldsymbol{Y}-\boldsymbol{X}{\boldsymbol{\Theta}}\|_2^2)$ needs to be at least $O_p(p)$ to have power approaching $1$. \paragraph{Proof of Corollary \ref{cor1}} As we have $\boldsymbol{Y}=\boldsymbol{E} + \text{vec} \big( \mathcal{X}^{(1)}(\boldsymbol{\theta}^{(1)}-{\hat{\boldsymbol{\theta}}^{(1)}}^{\prime} )\big)$ under the null rather than $\boldsymbol{Y}=\boldsymbol{E}$, the right-hand side of the inequality in \eqref{choices} can be represented as \begin{align*} &\max_{\{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset \}, 1 \leq i, j \leq p} \Bigg| e^{\prime}_i \Bigg( \frac{{\mathcal{X}^{(2)}_{J}}^{\prime} \Big(E_{J}+\mathcal{X}_{J}^{(1)}\big({\boldsymbol{\theta}^{(1)}}^{\prime}-{\hat{\boldsymbol{\theta}}^{(1)}}^{\prime} \big) \Big) }{L}\Bigg) e_j\Bigg| \\ \numberthis \label{a1hat_choices_1} & \leq \max_{\{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset \}, 1 \leq i, j \leq p} \Bigg| e^{\prime}_i \Bigg( \frac{{\mathcal{X}^{(2)}_{J}}^{\prime} E_{J} }{L}\Bigg) e_j\Bigg| + \max_{\{J: J \in \mathbb{J}_{T, p} (L), J \cap [\eta_1, \eta_2] = \emptyset \}, 1 \leq i, j \leq p} \Bigg| e^{\prime}_i \Bigg( \frac{{\mathcal{X}^{(2)}_{J}}^{\prime}
\mathcal{X}_{J}^{(1)}\big({\boldsymbol{\theta}^{(1)}}^{\prime}-{\hat{\boldsymbol{\theta}}^{(1)}}^{\prime} \big) }{L}\Bigg) e_j\Bigg|. \end{align*} It is sufficient to show that both terms in \eqref{a1hat_choices_1} are less than or equal to $C_3 \lambda$ with probability approaching 1. The condition for the first term is obtained from the proof of Theorem \ref{theorem1} and the one for the second term is obtained from Assumption \ref{asmpt5} and from the fact that $\frac{\mathcal{X}^{(2)\prime}_{J}\mathcal{X}_{J}^{(1)}}{|J|}$ converges as $T \rightarrow \infty$. \paragraph{Proof of Corollary 2} It is sufficient to prove that \eqref{toshowlem2} still holds when $\boldsymbol{Y}_{J} = \text{vec} \big(\mathcal{Y}_{J} - \mathcal{X}^{(1)}_{J}{{\boldsymbol{\theta}}^{(1)}}^{\prime} \big)$ is replaced by $\boldsymbol{Y}_{J}^{\prime} = \text{vec} \big(\mathcal{Y}_{J} - \mathcal{X}^{(1)}_{J}{\hat{\boldsymbol{\theta}}^{(1)}}^{\prime} \big)$. The left-hand side of \eqref{toshowlem2} can be simplified as \begin{equation} \label{toshow2lem2_a1hat} {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \boldsymbol{X}_{J} {\boldsymbol{\Theta}} + 2 {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \boldsymbol{E}_{J} + 2 {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \text{vec}\Big(\mathcal{X}_{J}^{(1)}\big({\boldsymbol{\theta}^{(1)}}^{\prime}-{\hat{\boldsymbol{\theta}}^{(1)}}^{\prime} \big) \Big) > \lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}.
\end{equation} As shown in the proof of Theorem 2, it is sufficient to show that ${\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \boldsymbol{X}_{J} {\boldsymbol{\Theta}}$ dominates $\lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}$ with probability tending to $1$, as the last component of the left-hand side of \eqref{toshow2lem2_a1hat}, \begin{equation*} 2 {\boldsymbol{\Theta}}^\top \boldsymbol{X}_{J}^\top \text{vec}(\mathcal{X}_{J}^{(1)}\big({\boldsymbol{\theta}^{(1)}}^{\prime}-{\hat{\boldsymbol{\theta}}^{(1)}}^{\prime} \big)) = 2 \boldsymbol{\Theta}^\top \text{vec}\Big({\mathcal{X}^{(2)}_{J}}^{\prime} \mathcal{X}_{J}^{(1)} \big({\boldsymbol{\theta}^{(1)}}^{\prime}-{\hat{\boldsymbol{\theta}}^{(1)}}^{\prime} \big) \Big), \end{equation*} is less than $\lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}$ with probability approaching $1$ from Assumption 5 and from the fact that $\frac{{\mathcal{X}^{(2)}_{J}}^{\prime} \mathcal{X}_{J}^{(1)}}{|J|}$ converges as $T \rightarrow \infty$. Following the same logic presented in the proof of Theorem 2, it can be shown that the first component of the left-hand side of \eqref{toshow2lem2_a1hat} is greater than $\lambda\|\boldsymbol{\Theta}\|_1 + \lambda^{\text{thr}}$ with probability approaching $1$, which completes the proof. \section{Additional Simulation Results} \label{ASR} In this section, additional simulation results are reported. As mentioned in the main paper, our method is compared with the one proposed in \citet{safikhani2020joint}, available from \url{https://github.com/abolfazlsafikhani/SBDetection}. We first present a new simulation scenario that is similar to the one used in \citet{safikhani2020joint}, then give additional results for the scenarios examined in the main paper.
Regarding the tuning parameters for \citet{safikhani2020joint}, we follow the recommendation of their paper by using the default ones in Section \ref{sim_stronger}, where the simulation setting is a slightly modified version of scenario 1 of \citet{safikhani2020joint}. However, the anomalies presented in Section \ref{sigma_estimated} are harder to detect, as the size of the change in the coefficient matrix is smaller and the noise has a larger variance compared to the setting in Section \ref{sim_stronger}. Thus, to improve the performance of their method, we adjust the tuning parameters rather than using the default ones; the details can be found in Section \ref{sigma_estimated}. \pagebreak \subsection{Stronger signal-to-noise ratio and larger change size} \label{sim_stronger} \subsubsection{Sparse $A^{(1)}$} \label{sim_stronger_1} \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm, height=4cm]{sbd_A1.eps} \end{center} \caption{The underlying coefficient matrices, $(A^{(1)}, A^{(2)}, A^{(1)})$, for the simulation setting in Section \ref{sim_stronger_1}, where $A^{(2)}$ corresponds to an anomaly.} \label{fig:sbd_A1} \end{figure} \begin{table}[!ht] \centering \begin{tabular}{cccccccc} \hline T & p & $[\eta_1, \eta_2]$ & $\eta_2-\eta_1$ & $\text{nzr}(A^{(1)})$ & $\text{nzr}(A^{(2)})$ & $\|\boldsymbol{\Theta}\|_0$ & $\Sigma_{\varepsilon}$ \\ \hline 500 & 20 & $[T(1/3), T(2/3)]$ & $167$ & $-0.6$ & $0.75$ & $19$ & $0.01\textbf{\textit{I}}_p$\\ \hline \end{tabular} \caption{Simulation setting for Section \ref{sim_stronger_1}, where $\text{nzr}(A^{(1)})$ and $\text{nzr}(A^{(2)})$ are the non-zero elements of $A^{(1)}$ and $A^{(2)}$, respectively, and $\|\boldsymbol{\Theta}\|_0$ is the number of non-zero elements of $\boldsymbol{\Theta}$.} \label{Tab:stronger_setting} \end{table} We borrow the simulation setting of scenario 1 used in \citet{safikhani2020joint}.
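As an illustration of this kind of data generation, a VAR(1) series whose coefficient matrix switches from $A^{(1)}$ to $A^{(2)}$ on an anomaly interval can be sketched as follows (a minimal sketch; the function name, the superdiagonal placement of the non-zero entries and the seed are our assumptions, not code from \citet{safikhani2020joint}):

```python
import numpy as np

def simulate_var1_with_anomaly(T, A1, A2, interval, sigma=0.1, seed=0):
    """Simulate X_t = A X_{t-1} + eps_t, with A = A2 on the anomaly interval."""
    rng = np.random.default_rng(seed)
    p = A1.shape[0]
    X = np.zeros((T, p))
    for t in range(1, T):
        A = A2 if interval[0] <= t <= interval[1] else A1
        X[t] = A @ X[t - 1] + sigma * rng.standard_normal(p)
    return X

# Toy version of the setting above: 19 non-zero (1-off-diagonal) entries,
# noise covariance 0.01 * I_p, i.e. standard deviation 0.1.
p, T = 20, 500
A1 = np.diag(np.full(p - 1, -0.6), k=1)   # baseline entries -0.6
A2 = np.diag(np.full(p - 1, 0.75), k=1)   # anomaly entries 0.75
X = simulate_var1_with_anomaly(T, A1, A2, interval=(T // 3, 2 * T // 3))
```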
To create the single-anomaly setting, we slightly modify the original setting by changing the size of the non-zero coefficients to $(-0.6, 0.75, -0.6)$ for the intervals divided by an anomaly, whereas \citet{safikhani2020joint} consider two change points with the corresponding sizes of non-zero coefficients $(-0.6, 0.75, -0.8)$. The details of the anomaly are given in Table \ref{Tab:stronger_setting}; the size of the change is larger, the length of the anomaly is longer and the signal-to-noise ratio is larger than in the simulation setting of Section \ref{sec4.2.1}. \begin{table}[h!] \centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{4}{*}{$\Sigma_{\varepsilon}$ is known} & random & OLS & 100 & 100 \\ & (s = 499) & Lasso & 100 & 100 \\ \cline{2-5} & deterministic & OLS & 100 & 100 \\ & (s = 499) & Lasso & 100 & 100 \\ \hline \multirow{4}{*}{$\Sigma_{\varepsilon}$ is unknown} & random & OLS & 100 & 100 \\ & (s = 469) & Lasso & 100 & 100 \\ \cline{2-5} & deterministic & OLS & 100 & 100 \\ & (s = 469) & Lasso & 100 & 100 \\ \hline \end{tabular} \caption{Empirical power ($\%$) from 100 simulation runs for the settings described in Section \ref{sim_stronger_1}, where $s$ is the number of intervals examined.} \label{Tab:stronger_setting_ours1} \end{table} Comparing our simulation results with those of \citet{safikhani2020joint}, Tables \ref{Tab:stronger_setting_ours1}--\ref{Tab:stronger_setting_ss} show that all methods detect one anomaly in all $100$ runs (this is shown as ``one'' anomaly for the default and the lasso methods and ``two'' change points for the method of \citet{safikhani2020joint}). In terms of localisation, the default and the lasso methods work better than \citet{safikhani2020joint}, as they have smaller mean and standard deviation of Hausdorff distance.
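The Hausdorff distance used as the localisation criterion throughout these tables can be computed with a straightforward helper such as the following (our own sketch; any rescaling convention used by the compared methods is omitted):

```python
def hausdorff(est, true):
    """Hausdorff distance between two finite sets of change-point locations."""
    if not est or not true:
        return float("inf")
    d_et = max(min(abs(e - t) for t in true) for e in est)  # est -> true
    d_te = max(min(abs(e - t) for e in est) for t in true)  # true -> est
    return max(d_et, d_te)

# e.g. true boundaries at (167, 333), estimates slightly off:
hausdorff([165, 340], [167, 333])  # max(|165-167|, |340-333|) = 7
```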
We emphasise that the anomaly presented in this section is easier to detect than those used in the main paper, in the sense that the size of the change in the coefficient matrix is larger, the width of the anomaly is longer and the noise variance is smaller. \begin{table}[h!] \centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{4}{*}{$\Sigma_{\varepsilon}$ is known} & random & OLS & 1.07 (0.10) & 0.80 (0.00) \\ & (s = 499) & Lasso & 1.08 (0.10) & 0.80 (0.00) \\ \cline{2-5} & deterministic & OLS & 0.40 (0.02) & 0.40 (0.03) \\ & (s = 499) & Lasso & 0.40 (0.02) & 0.39 (0.04)\\ \hline \multirow{4}{*}{$\Sigma_{\varepsilon}$ is unknown} & random & OLS & 1.14 (0.09) & 0.80 (0.00) \\ & (s = 469) & Lasso & 1.03 (0.08) & 0.80 (0.00) \\ \cline{2-5} & deterministic & OLS & 0.39 (0.03) & 0.40 (0.03) \\ & (s = 469) & Lasso & 0.39 (0.03) & 0.39 (0.05)\\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs for the settings described in Section \ref{sim_stronger_1}, where $s$ is the number of intervals examined.} \label{Tab:stronger_setting_ours2} \end{table} \begin{table}[h!] \centering \begin{tabular}{ccc} \hline Empirical power ($\%$) & & mean (sd) of Hausdorff distance\\ \cmidrule(lr){1-1} \cmidrule(lr){3-3} 100 & & 2.19 (1.55) \\ \hline \end{tabular} \caption{Simulation results of \citet{safikhani2020joint} under the setting in Section \ref{sim_stronger_1}.} \label{Tab:stronger_setting_ss} \end{table} \subsubsection{Low rank + sparse $A^{(1)}$} \label{LS_a1} \begin{figure}[h!]
\begin{center} \includegraphics[width=10cm, height=4cm]{Amat_4.eps} \end{center} \caption{The underlying coefficient matrices, $(A^{(1)}, A^{(2)}, A^{(1)})$, for the simulation setting in Section \ref{LS_a1}, where $A^{(2)}$ corresponds to an anomaly.} \label{fig:single_a_sps} \end{figure} \begin{table}[!ht] \centering \begin{tabular}{cccccccc} \hline T & p & $[\eta_1, \eta_2]$ & $\eta_2-\eta_1$ & $\text{nzr}(A^{(1)}_\text{sps})$ & $\text{nzr}(A^{(2)}_\text{sps})$ & Range(low-rank) & $\Sigma_{\varepsilon}$ \\ \hline 300 & 20 & $[T(1/3), T(2/3)]$ & $100$ & $-0.8682672$ & $0.8682672$ & $(-0.217, 0.212)$ & $0.01\textbf{\textit{I}}_p$\\ \hline \end{tabular} \caption{Simulation setting for Section \ref{LS_a1}, where $\text{nzr}(A^{(1)}_\text{sps})$ and $\text{nzr}(A^{(2)}_\text{sps})$ are the non-zero elements in the sparse part of $A^{(1)}$ and $A^{(2)}$, respectively, and Range(low-rank) is the range of non-zero elements in the low-rank part.} \label{Tab:LS_setting} \end{table} In this section, we borrow the simulation setting of scenario $A.2$ used in \citet{bai2020multiple} to compare their performance with ours. The underlying VAR coefficient matrix has the low rank plus sparse structure, and only the sparse part (i.e. 1-off diagonal in this setting) undergoes a change at the anomaly. The true coefficient matrices are presented in Figure \ref{fig:single_a_sps}. As all elements of $A^{(1)}$ are non-zero, we estimate $A^{(1)}$ by using the ridge penalty when $A^{(1)}$ is assumed to be unknown. \begin{table}[h!]
\centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{4}{*}{$\Sigma_{\varepsilon}$ is known} & random & OLS & 100 & 100 \\ & (s = 845) & Lasso & 100 & 100 \\ \cline{2-5} & deterministic & OLS & 100 & 100 \\ & (s = 845) & Lasso & 100 & 100 \\ \hline \multirow{4}{*}{$\Sigma_{\varepsilon}$ is unknown} & random & OLS & 100 & 100 \\ & (s = 941) & Lasso & 100 & 100 \\ \cline{2-5} & deterministic & OLS & 100 & 100 \\ & (s = 941) & Lasso & 100 & 100 \\ \hline \end{tabular} \caption{Empirical power ($\%$) from 100 simulation runs for the settings described in Section \ref{LS_a1}, where $s$ is the number of intervals examined.} \label{Tab:LS_setting_ours1} \end{table} \begin{table}[h!] \centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{4}{*}{$\Sigma_{\varepsilon}$ is known} & random & OLS & 0.67 (0.00) & 1.47 (0.52) \\ & (s = 845) & Lasso & 0.67 (0.00) & 1.55 (0.66) \\ \cline{2-5} & deterministic & OLS & 0.33 (0.00) & 0.33 (0.00)\\ & (s = 845) & Lasso & 0.33 (0.00) & 0.34 (0.07) \\ \hline \multirow{4}{*}{$\Sigma_{\varepsilon}$ is unknown} & random & OLS & 0.67 (0.00) & 1.74 (0.63) \\ & (s = 941) & Lasso & 0.67 (0.00) & 1.31 (0.55) \\ \cline{2-5} & deterministic & OLS & 0.61 (0.13) & 0.65 (0.08) \\ & (s = 941) & Lasso & 0.38 (0.12) & 0.44 (0.15) \\ \hline \multicolumn{3}{c}{\citet{bai2020multiple}} & \multicolumn{2}{c}{2.11 (1.59)} \\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs for the settings described in Section \ref{LS_a1}, where $s$ is the number of intervals examined.} \end{table} \begin{table}[h!]
\centering \begin{tabular}{cc} \hline \multicolumn{2}{c}{$\#$ of estimated change points from 100 runs} \\ \hline 2 & 3 \\ \hline 80 & 20 \\ \hline \end{tabular} \caption{Simulation results of \citet{bai2020multiple} under the setting in Section \ref{LS_a1}.} \label{Tab:stronger_setting_LS} \end{table} \pagebreak \subsection{Simulation results when $\Sigma_{\varepsilon}$ is estimated} \label{sigma_estimated} In this section, we repeat the simulation settings introduced in Section \ref{sec4} for the case when $\Sigma_{\varepsilon}$ is unknown. We use the maximum likelihood estimator for $\hat{\Sigma}_{\varepsilon}$ and compare the performance of our method with the change-point detection technique proposed by \citet{safikhani2020joint}. As mentioned earlier, the default tuning parameters recommended by \citet{safikhani2020joint} do not work well in the simulation settings presented in the following sections. Among the three tuning parameters $\lambda_1$, $\lambda_2$ and $\omega$, we adjust $\lambda_1$ and $\omega$; a larger range is examined for finding the optimal $\lambda_1$ in the initial break detection stage, and a larger value of $\omega=4 \log T \log p$ is used (instead of the default constant $1/1.75$) to allow a smaller number of break points to be selected in the screening stage. \subsubsection{Dense $A^{(1)}$, Single anomaly} \begin{table}[h!]
\centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{6}{*}{case 1} & random & OLS & 25 & 13 \\ & (s = 1969) & Lasso & \textbf{100} & \textbf{78} \\ \cline{2-5} & deterministic & OLS & 20 & 29 \\ & (s = 1969) & Lasso & \textbf{100} & \textbf{83}\\ \cline{2-5} & deterministic & OLS & 19 & 28 \\ & (s = 981) & Lasso & \textbf{100} & \textbf{82} \\ \hline \multirow{6}{*}{case 2} & random & OLS & 7 & 6 \\ & (s = 1969) & Lasso & \textbf{99} & \textbf{34} \\ \cline{2-5} & deterministic & OLS & 10 & 14\\ & (s = 1969) & Lasso & \textbf{100} & \textbf{34}\\ \cline{2-5} & deterministic & OLS & 10 & 13 \\ & (s = 981) & Lasso & \textbf{100} & \textbf{32} \\ \hline \end{tabular} \caption{Empirical power ($\%$) from 100 simulation runs for the settings described in Section \ref{sec.s.a}, where $s$ is the number of intervals examined.} \label{tab:single_a_count_sigmahat} \end{table} \begin{table}[h!] \centering \begin{tabular}{rrrrrrrrrrrrrrr} \hline & \multicolumn{14}{c}{\# of estimated change points}\\ \cline{2-15} & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 17 \\ \hline case 1 & 0 & 1 & 4 & 6 & 16 & $\textbf{19}$ & 11 & 10 & 13 & 10 & 4 & 3 & 1 & 2 \\ case 2 & 1 & 1 & 7 & 14 & 11 & $\textbf{25}$ & 10 & 15 & 7 & 4 & 3 & 1 & 0 & 1 \\ \hline \end{tabular} \caption{Distribution of the number of estimated change points by \citet{safikhani2020joint} under the simulation setting described in Section \ref{sec.s.a} over 100 simulation runs.} \end{table} \begin{table}[h!]
\centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{6}{*}{case 1} & random & OLS & 35.41 (18.24) & 43.00 (8.79) \\ & (s = 1969) & Lasso & 2.67 (3.85) & 18.28 (17.75) \\ \cline{2-5} & deterministic & OLS & 37.72 (16.39) & 35.73 (17.66) \\ & (s = 1969) & Lasso & 1.84 (4.32) & 12.23 (18.11) \\ \cline{2-5} & deterministic & OLS & 38.33 (15.92) & 36.62 (16.94) \\ & (s = 981) & Lasso & 2.00 (4.33) & 13.95 (19.05) \\ \hline \multirow{6}{*}{case 2} & random & OLS & 43.86 (11.50) & 45.83 (6.16) \\ & (s = 1969) & Lasso & 3.74 (6.07) & 36.70 (17.51) \\ \cline{2-5} & deterministic & OLS & 42.58 (13.38) & 42.47 (13.23) \\ & (s = 1969) & Lasso & 2.50 (5.15) & 36.80 (18.29) \\ \cline{2-5} & deterministic & OLS & 42.80 (13.06) & 42.97 (12.47) \\ & (s = 981) & Lasso & 2.28 (4.70) & 37.15 (18.01) \\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs for the settings described in Section \ref{sec.s.a}, where $s$ is the number of intervals examined.} \label{tab:single_a_meansd_sigmahat} \end{table} \begin{table}[h!] \centering \begin{tabular}{cc} \hline & mean (sd) of Hausdorff distance \\ \hline case 1 & 22.84 (4.64)\\ case 2 & 23.46 (4.45)\\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs obtained by \citet{safikhani2020joint} under the simulation setting described in Section \ref{sec.s.a}.} \end{table} \pagebreak \subsubsection{Dense $A^{(1)}$, Two anomalies} \begin{table}[h!]
\centering \begin{tabular}{rrrrrrrrrrr} \hline & \multicolumn{10}{c}{\# of estimated change points}\\ \cline{2-11} & $\mathbf{4}$ & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \\ \hline case 1 & 5 & 5 & 16 & 18 & $\mathbf{21}$ & 18 & 12 & 4 & 1 & 0 \\ case 2 & 4 & 12 & 11 & 21 & $\mathbf{24}$ & 13 & 7 & 5 & 2 & 1 \\ \hline \end{tabular} \caption{Distribution of the number of estimated change points by \citet{safikhani2020joint} under the simulation setting described in Section \ref{ta} over 100 simulation runs.} \end{table} \begin{table}[h!] \centering \begin{tabular}{cccccccccccc} \hline & & & \multicolumn{4}{c}{$A^{(1)}$ is known} & & \multicolumn{4}{c}{$A^{(1)}$ is estimated} \\ \cmidrule(lr){4-7} \cmidrule(lr){9-12} & & & 0 & 1 & \textbf{2} & 3 & & 0 & 1 & \textbf{2} & 3 \\ \hline \multirow{6}{*}{case 1} & random & OLS & \textbf{55} & 45 & 0 & 0 && 48 & \textbf{49} & 3 & 0 \\ & (s = 1969) & Lasso & 0 & \textbf{69} & 31 & 0 && 5 & \textbf{94} & 1 & 0 \\ \cline{2-12} & deterministic & OLS & 23 & \textbf{65} & 12 & 0 && 37 & \textbf{54} & 9 & 0 \\ & (s = 1969) & Lasso & 0 & \textbf{65} & 33 & 2 && 5 & \textbf{92} & 3 & 0 \\ \cline{2-12} & deterministic & OLS & 28 & \textbf{61} & 11 & 0 && 42 & \textbf{55} & 3 & 0 \\ & (s = 981) & Lasso & 0 & \textbf{66} & 34 & 0 && 5 & \textbf{91} & 4 & 0 \\ \hline \multirow{6}{*}{case 2} & random & OLS & \textbf{72} & 28 & 0 & 0 && \textbf{86} & 14 & 0 & 0 \\ & (s = 1969) & Lasso & 0 & 5 & \textbf{93} & 2 && 38 & \textbf{62} & 0 & 0 \\ \cline{2-12} & deterministic & OLS & \textbf{71} & 23 & 6 & 0 && \textbf{75} & 23 & 2 & 0 \\ & (s = 1969) & Lasso & 0 & 5 & \textbf{95} & 0 && 36 & \textbf{64} & 0 & 0 \\ \cline{2-12} & deterministic & OLS & \textbf{81} & 16 & 3 & 0 && \textbf{82} & 17 & 1 & 0 \\ & (s = 981) & Lasso & 0 & 6 & \textbf{94} & 0 && 36 & \textbf{64} & 0 & 0 \\ \hline \end{tabular} \caption{Distribution of the number of detected anomalies for two methods in all cases described in Section \ref{ta} over 100
simulation runs, where $s$ is the number of intervals examined.} \label{tab:two_a_count_sigmahat} \end{table} \begin{table}[h!] \centering \begin{tabular}{cc} \hline & mean (sd) of Hausdorff distance \\ \hline case 1 & 31.42 (3.70) \\ case 2 & 20.25 (7.08) \\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs obtained by \citet{safikhani2020joint} under the simulation setting described in Section \ref{ta}.} \end{table} \begin{table}[h!] \centering \begin{tabular}{ccccc} \hline & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{6}{*}{case 1} & random & OLS & 29.74 (8.38) & 30.01 (7.85) \\ & (s = 1969) & Lasso & 7.27 (4.38) & 19.02 (9.87)\\ \cline{2-5} & deterministic & OLS & 28.51 (10.91) & 29.69 (9.50)\\ & (s = 1969) & Lasso & 5.95 (3.33) & 17.81 (12.08) \\ \cline{2-5} & deterministic & OLS & 29.02 (10.45) & 30.96 (7.39) \\ & (s = 981) & Lasso & 5.72 (3.16) & 18.05 (12.16) \\ \hline \multirow{6}{*}{case 2} & random & OLS & 13.60 (0.18) & 13.54 (0.72)\\ & (s = 1969) & Lasso & 3.42 (5.13) & 12.41 (1.68) \\ \cline{2-5} & deterministic & OLS & 13.14 (2.77) & 13.44 (1.20) \\ & (s = 1969) & Lasso & 2.35 (2.71) & 12.72 (1.64) \\ \cline{2-5} & deterministic & OLS & 13.27 (1.76) & 13.41 (1.17) \\ & (s = 981) & Lasso & 2.47 (2.89) & 12.62 (1.76) \\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs for the settings described in Section \ref{ta}, where $s$ is the number of intervals examined.} \label{tab:two_a_meansd_sigmahat} \end{table} \pagebreak \subsubsection{Sparse $A^{(1)}$, Single anomaly} \begin{table}[h!]
\centering \begin{tabular}{ccccc} \hline & & & \multicolumn{2}{c}{$\#$ anomaly detection from 100 runs} \\ \cline{4-5} & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{4}{*}{case 1} & random & OLS & 2 & 3 \\ & (s = 469) & Lasso & \textbf{100} & \textbf{99} \\ \cline{2-5} & deterministic & OLS & 73 & 73 \\ & (s = 469) & Lasso & \textbf{100} & \textbf{100}\\ \hline \multirow{4}{*}{case 2} & random & OLS & 2 & 1 \\ & (s = 469) & Lasso & \textbf{85} & \textbf{33} \\ \cline{2-5} & deterministic & OLS & 57 & \textbf{66} \\ & (s = 469) & Lasso & \textbf{100} & 52 \\ \hline \end{tabular} \caption{Empirical power ($\%$) from 100 simulation runs for the settings described in Section \ref{sec4.2.1}, where $s$ is the number of intervals examined.} \end{table} \begin{table}[h!] \centering \begin{tabular}{rrrrrrr} \hline & \multicolumn{6}{c}{\# of estimated change points}\\ \cline{2-7} & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline case 1 & 1 & 34 & \textbf{51} & 12 & 1 & 1 \\ case 2 & 1 & 29 & \textbf{56} & 12 & 1 & 1 \\ \hline \end{tabular} \caption{Distribution of the number of estimated change points by \citet{safikhani2020joint} under the simulation setting described in Section \ref{sec4.2.1} over 100 simulation runs.} \end{table} \begin{table}[h!]
\centering \begin{tabular}{ccccc} \hline & & & \multicolumn{2}{c}{mean (sd) of Hausdorff distance} \\ \cline{4-5} & & & $A^{(1)}$ is known & $A^{(1)}$ is estimated\\ \hline \multirow{4}{*}{case 1} & random & OLS & 44.44 (2.55) & 43.92 (6.08) \\ & (s = 469) & Lasso & 1.50 (0.67) & 1.75 (4.36) \\ \cline{2-5} & deterministic & OLS & 20.72 (19.56) & 23.43 (19.20)\\ & (s = 469) & Lasso & 0.32 (0.15) & 0.32 (0.15)\\ \hline \multirow{4}{*}{case 2} & random & OLS & 45.93 (3.47) & 46.35 (0.50) \\ & (s = 469) & Lasso & 12.97 (14.19) & 31.43 (21.44) \\ \cline{2-5} & deterministic & OLS & 29.41 (19.33) & 30.27 (17.77) \\ & (s = 469) & Lasso & 0.46 (0.15) & 22.50 (23.08) \\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs for the settings described in Section \ref{sec4.2.1}, where $s$ is the number of intervals examined.} \end{table} \begin{table}[h!] \centering \begin{tabular}{cc} \hline & mean (sd) of Hausdorff distance \\ \hline case 1 & 45.65 (1.84)\\ case 2 & 45.82 (2.46)\\ \hline \end{tabular} \caption{The mean (standard deviation) of Hausdorff distance from 100 simulation runs obtained by \citet{safikhani2020joint} under the simulation setting described in Section \ref{sec4.2.1}.} \end{table} \end{document}
\begin{document} \date{} \title{Low-Switching Policy Gradient with Exploration \\ via Online Sensitivity Sampling} \begin{abstract} Policy optimization methods are powerful algorithms in Reinforcement Learning (RL) owing to their flexibility in dealing with policy parameterization and their ability to handle model misspecification. However, these methods usually suffer from slow convergence rates and poor sample complexity. Hence it is important to design provably sample-efficient algorithms for policy optimization. Yet, recent advances for this problem have only been successful in the tabular and linear settings, whose benign structures cannot be generalized to non-linearly parameterized policies. In this paper, we address this problem by leveraging recent advances in value-based algorithms, including bounded eluder dimension and online sensitivity sampling, to design a low-switching, sample-efficient policy optimization algorithm, \emph{LPO}, with general non-linear function approximation. We show that our algorithm obtains an $\varepsilon$-optimal policy with only $\widetilde{O}(\frac{\text{poly}(d)}{\varepsilon^3})$ samples, where $\varepsilon$ is the suboptimality gap and $d$ is a complexity measure of the function class approximating the policy. This drastically improves the previously best-known sample complexity bound for policy optimization algorithms, $\widetilde{O}(\frac{\text{poly}(d)}{\varepsilon^8})$. Moreover, we empirically test our theory with deep neural nets to show the benefits of the theoretical inspiration. \end{abstract} \section{Introduction} Reinforcement learning (RL) has achieved great success in many practical areas by adopting policy gradient methods with deep neural networks \citep{schulman2015trust, schulman2017proximal, haarnoja2018soft}. These policy optimization methods are among the most classic approaches to RL \citep{williams1992simple, konda1999actor}.
Although their theoretical convergence properties have been established in \citep{geist2019theory, abbasip, agarwal2020optimality, bhandari2019global} under the assumption that the state space is already well explored, this is usually not the case in practice. To resolve this issue, policy-based approaches with active exploration of the environment have been proposed for tabular~\citep{shani2020optimistic}, linear function approximation~\citep{cai2020provably, agarwal2020pc} and general function approximation \citep{feng2021provably} models. Among these exploration-based approaches, \citet{agarwal2020pc} and \citet{feng2021provably} are specifically designed to handle model misspecification more robustly than existing value-based approaches \citep{jin2020provably,wang2020reinforcement} by performing policy gradient methods to solve a sequence of optimistic MDPs. However, the robustness of both \citet{agarwal2020pc} and \citet{feng2021provably} comes at a huge price: to obtain an $\varepsilon$-optimal policy, \citet{agarwal2020pc} requires $\widetilde{O}(1/\varepsilon^{11})$ samples and \citet{feng2021provably} requires $\widetilde{O}(1/\varepsilon^8)$ samples. Recently, \citet{zanette2021cautiously} designed a low-switching (i.e., reducing the number of policy changes) policy-based algorithm with linear function approximation, which largely reduces the sample complexity of \citet{agarwal2020pc}. However, it is still unknown how to improve the sample complexity of policy-based algorithms with good robustness in the non-linear setting. As for value-based methods, low-switching techniques \citep{bai2019provably, gao2021provably, kong2021online} have been utilized to reduce the number of policy changes of the algorithm. Among them, \citet{kong2021online} proposed a novel notion of online sensitivity score, which measures the importance of a data point relative to a given dataset over some \emph{general} function class.
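To convey the idea behind a sensitivity score, the following toy finite-class rendering (our own simplification for illustration, not the exact definition used by \citet{kong2021online}) scores a point by how much a pair of functions can still disagree at it, relative to their disagreement on the data already collected:

```python
def sensitivity(z, Z, F, lam=1.0):
    """Simplified sensitivity of point z w.r.t. dataset Z over a finite class F:
    worst-case disagreement at z, normalized by disagreement on Z, capped at 1."""
    score = 0.0
    for f in F:
        for g in F:
            num = (f(z) - g(z)) ** 2
            den = sum((f(x) - g(x)) ** 2 for x in Z) + lam
            score = max(score, num / den)
    return min(score, 1.0)

# A tiny class of linear functions f_a(x) = a * x:
F = [lambda x, a=a: a * x for a in (0.0, 0.5, 1.0)]
Z = [1.0, 2.0]
sensitivity(0.0, Z, F)   # all functions agree at 0 -> score 0.0
sensitivity(10.0, Z, F)  # large residual disagreement at 10 -> score capped at 1.0
```

Points with high scores are the ones worth keeping in a sub-sample, since the class is not yet pinned down there; low-score points add little information.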
By using this sensitivity score, \citet{kong2021online} established an online sub-sampling technique which greatly reduces the average \emph{computing time} of previous work \citep{wang2020reinforcement}. Nevertheless, it is unknown whether such low-switching techniques can be applied to save \emph{samples} in policy-based approaches. In this paper, we present a low-switching policy-based algorithm, \textbf{LPO} (\emph{\textbf{L}ow-Switching \textbf{P}olicy Gradient and Exploration via \textbf{O}nline Sensitivity Sampling}), which leverages techniques from policy-based approaches, such as \citep{feng2021provably, zanette2021cautiously}, and value-based approaches, such as \citep{kong2021online}, to establish efficient policy gradient over non-linear function classes while preserving the low-switching property to save samples and running time. Our algorithm follows an actor-critic framework, where the critic guides the exploration of the policy via exploration bonuses derived from the non-linear function class, and policy gradient (PG) updates the policy to guarantee robustness and stability. The low-switching technique is applied primarily to derive a slowly updating critic while preserving the quality of learning. Since one of the major terms in the sample complexity originates from the PG policy update, a slowly updating critic can drastically reduce the sample complexity, as it requires only a few policy updates. Concretely, our approach updates the policy only $\sim \log T$ times over $T$ rounds of the algorithm, whereas existing approaches, e.g., \citep{feng2021provably}, which also target policy-based exploration with non-linear function approximation, take at least $\sim T$ policy updates. We also derive new PG approaches aware of the structure of the non-linear function class to further save samples in updating the policy.
\paragraph{Our Contribution} \begin{itemize} \item We design a new policy-based exploration algorithm, \textbf{LPO}, with non-linear function approximation. The algorithm enjoys the same stability guarantee with respect to model misspecification as existing approaches \citep{feng2021provably}. It leverages efficient value-based techniques (online sensitivity sampling) to update its policy slowly and thus enjoys a sample complexity of $\widetilde{O}(\text{poly}(d)/\varepsilon^3)$, whereas existing approaches take at least $\widetilde{O}(\text{poly}(d)/\varepsilon^8)$ samples to obtain an $\varepsilon$-optimal policy, where $d$ is related to the eluder dimension, measuring the complexity of the function class. \item While enjoying a theoretical guarantee in the special cases where the function class has bounded complexity, the algorithm itself can be implemented using neural networks. We further empirically test the theoretical inspiration of online sensitivity sampling within existing deep RL frameworks. The experimental results demonstrate the efficacy of our approach. \end{itemize} \paragraph{Related Work} Regarding exploration methods in RL, there are many provable results in the tabular \citep{kearns2002near, brafman2002r, kearns89, jin2018q} and linear \citep{yang2019sample,yang2020reinforcement, jin2020provably} settings for value-based or model-based methods. Recent papers \citep{shani2020optimistic, cai2020provably, agarwal2020pc} have developed policy-based methods in the tabular and linear settings as well, and \citet{zanette2021cautiously} greatly reduces the sample complexity of \citet{agarwal2020pc}, mainly by using a doubling trick for the determinant of the empirical cumulative covariance. However, relatively few provable results exist in the non-linear setting. For general function approximation, complexity measures are essential for non-linear function classes, and \citet{russo2013eluder} proposed the concept of eluder dimension.
Recent papers have extended it to more general frameworks (e.g., Bellman eluder dimension \citep{jin2021bellman}, Decision-Estimation Coefficient \citep{foster2021statistical}, Admissible Bellman Characterization \citep{chen2022general}). However, the use of the eluder dimension admits computationally tractable optimization methods. Based on the eluder dimension, the value-based work of \citet{wang2020reinforcement} describes a UCB-VI style algorithm that explores the environment driven by a well-designed width function, and \citet{kong2021online} devises an online sub-sampling method which largely reduces the average computation time of \citet{wang2020reinforcement}. For policy-based methods in the general setting, \citet{feng2021provably} proposes a model-free algorithm with abundant exploration of the environment using the indicator of the width function. Moreover, it has better robustness to model misspecification compared to (misspecified) linear MDPs \citep{jin2020provably}. However, \citet{feng2021provably} suffers from a huge sample complexity. In this paper, instead of directly finding a non-linear analogue of the determinant used in the linear setting \citep{zanette2021cautiously}, we adopt an online sensitivity-sampling method to quantify the sensitivity of newly arriving data obtained from the environment. Moreover, the importance of designing a sophisticated and efficient reward bonus is noted in \citep{zanette2021cautiously}, and we significantly generalize this approach to the non-linear setting by combining the width function and its indicator; our reward bonuses save samples and computing time compared to \citep{feng2021provably}. \paragraph{Notations} We use $[n]$ to represent the index set $\{1,\cdots, n\}$. For $x \in \mathbb{R}$, $\lfloor x \rfloor$ represents the largest integer not exceeding $x$ and $\lceil x \rceil$ represents the smallest integer not less than $x$.
Given $a,b \in \mathbb{R}^d$, we denote by $a^\top b$ the inner product between $a$ and $b$ and by $\|a\|_2$ the Euclidean norm of $a$. Given a matrix $A$, we use $\|A\|_2$ for the spectral norm of $A$, and for a positive definite matrix $\Sigma$ and a vector $x$, we define $\|x\|_{\Sigma}=\sqrt{x^\top \Sigma x}$. We abbreviate the Kullback--Leibler divergence to \textbf{KL}, use $O$ for leading orders in asymptotic upper bounds, and use $\widetilde{O}$ to additionally hide polylog factors. For a finite set $\mathcal{A}$, we denote the cardinality of $\mathcal{A}$ by $|\mathcal{A}|$, the set of all distributions over $\mathcal{A}$ by $\Delta (\mathcal{A})$, and the uniform distribution over $\mathcal{A}$ by $\text{Unif} (\mathcal{A})$. For a function $f: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, define $$ \|f\|_{\infty}=\max _{(s, a) \in \mathcal{S} \times \mathcal{A}}|f(s, a)| . $$ Similarly, for a function $v: \mathcal{S} \rightarrow \mathbb{R}$, define $$ \|v\|_{\infty}=\max _{s \in \mathcal{S}}|v(s)| . $$ For a set of state-action pairs $\mathcal{Z} \subseteq \mathcal{S} \times \mathcal{A}$ and a function $f: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, we define the $\mathcal{Z}$-norm of $f$ as $$ \|f\|_{\mathcal{Z}}=\left(\sum_{(s, a) \in \mathcal{Z}}(f(s, a))^{2}\right)^{1 / 2}. $$ \section{Preliminaries} \label{sec 2} \paragraph{Markov Decision Process} In this paper, we consider a discounted Markov decision process (MDP) environment, which can be specified by a tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}$ is a possibly infinite state space, $\mathcal{A}$ is a finite action space with $A=|\mathcal{A}|$, $P: \mathcal{S} \times \mathcal{A} \rightarrow \Delta(\mathcal{S})$ specifies a transition kernel and is unknown to the learner, $r: \mathcal{S} \times \mathcal{A} \rightarrow[0,1]$ is a reward function, and $\gamma \in(0,1)$ is a discount factor that discounts rewards received in future time steps.
Suppose an RL agent chooses an action $a\in\mathcal{A}$ at the current state $s$; the environment then brings the agent to a new state $s'$ with the unknown probability $P\left(s^{\prime} \mid s, a\right)$ and the agent receives an instant reward $r(s, a)$. The goal for a learner is to find a policy\footnote{We here only consider stationary policies, as one can always find a stationary optimal policy in a discounted MDP \citep{puterman2014markov}.} $\pi: \mathcal{S} \rightarrow \Delta(\mathcal{A})$ such that the expected long-term reward is maximized. In particular, the quality of a policy can be measured by the $Q$-value function $Q^{\pi}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, defined as $$ Q^{\pi}(s, a):=\mathbb{E}^{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} r\left(s_{t}, a_{t}\right) \mid s_{0}=s, a_{0}=a\right], $$ where the expectation is taken over the trajectory following $\pi$ -- this measures the expected discounted total return of playing action $a$ at state $s$ and then following policy $\pi$ (for an indefinite amount of time). Taking the expectation over the action space, we obtain the value function $V^{\pi}(s):=$ $\mathbb{E}_{a \sim \pi(\cdot \mid s)}\left[Q^{\pi}(s, a)\right]$, which measures the total expected discounted return of playing policy $\pi$ starting from state $s$. From $V^{\pi}$ and $Q^{\pi}$, we can further define the advantage function of $\pi$ as $A^{\pi}(s, a)=Q^{\pi}(s, a)- V^{\pi}(s)$, which measures whether playing action $a$ improves upon following $\pi$ at state $s$. Moreover, if a policy $\pi^*$ is optimal, then the Bellman equation \citep{puterman2014markov} implies that $A^{\pi^*}(s,a)\le 0$ for all $s, a$ and $\mathbb{E}_{a\sim \pi^*(\cdot|s)}[A^{\pi^*}(s, a)] = 0$. In practice, we may also restrict the policy space being considered to some class $\Pi$ (which may be parameterized by a function class).
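As a concrete reading of these definitions, $Q^{\pi}(s,a)$ can be approximated by truncated Monte Carlo rollouts. The following toy sketch (the environment and all names are ours, purely for illustration) estimates $Q^{\pi}$ on a trivial chain where every step yields reward $1$, so $Q^{\pi} \approx \sum_t \gamma^t = 1/(1-\gamma)$:

```python
import random

def mc_q(step, policy, s, a, gamma=0.9, n_rollouts=1000, horizon=100, seed=0):
    """Monte Carlo estimate of Q^pi(s, a) = E[sum_t gamma^t r_t | s_0=s, a_0=a]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rollouts):
        state, action, ret, disc = s, a, 0.0, 1.0
        for _ in range(horizon):
            state, reward = step(rng, state, action)  # environment transition
            ret += disc * reward                      # accumulate gamma^t * r_t
            disc *= gamma
            action = policy(rng, state)               # next action from pi
        total += ret
    return total / n_rollouts

# Deterministic toy chain: every action yields reward 1 from every state,
# so Q^pi(s, a) is close to 1 / (1 - gamma) = 10 for gamma = 0.9.
step = lambda rng, s, a: (s, 1.0)
policy = lambda rng, s: 0
q = mc_q(step, policy, s=0, a=0)
```

$V^{\pi}$ is then the average of such estimates over $a \sim \pi(\cdot\mid s)$, and the advantage is their difference.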
We also define the discounted state-action distribution $d_{\tilde{s}}^{\pi}(s, a)$ induced by $\pi$ as $$ d_{\tilde{s}}^{\pi}(s, a)=(1-\gamma) \sum_{t=0}^{\infty} \gamma^{t} \operatorname{Pr}^{\pi}\left(s_{t}=s, a_{t}=a \mid s_{0}=\tilde{s}\right), $$ where $\operatorname{Pr}^{\pi}\left(s_{t}=s, a_{t}=a \mid s_{0}=\tilde{s}\right)$ is the probability of reaching $(s, a)$ at the $t$-th step when starting from $\tilde{s}$ and following $\pi$. Analogously, $d_{\tilde{s}, \tilde{a}}^{\pi}(s, a)$ denotes the distribution over state-actions when the agent starts from state $\tilde{s}$ and first selects action $\tilde{a}$. For any initial state-action distribution $\nu \in \Delta(\mathcal{S} \times \mathcal{A})$, we write $d_{\nu}^{\pi}(s, a):=\mathbb{E}_{(\tilde{s}, \tilde{a}) \sim \nu}\left[d_{(\tilde{s}, \tilde{a})}^{\pi}(s, a)\right]$ and $d_{\nu}^{\pi}(s):=\sum_{a} d_{\nu}^{\pi}(s, a)$. Given an initial state distribution $\rho \in \Delta(\mathcal{S})$, we define $V_{\rho}^{\pi}:=\mathbb{E}_{s_{0} \sim \rho}\left[V^{\pi}\left(s_{0}\right)\right]$. With these notations, the reinforcement learning (RL) problem with respect to the policy class $\Pi$ reduces to solving the optimization problem $$ \underset{\pi \in \Pi}{\operatorname{maximize}}\ V_{\rho_{0}}^{\pi}, $$ for some initial distribution $\rho_0$. Without loss of generality\footnote{Otherwise, we can modify the MDP by adding a dummy state $s_0$ whose transition distribution is $\rho_0$ for every action played at $s_0$.}, we further assume that $\rho_0$ is a singleton on some state $s_{0}$. \paragraph{Policy Space and Width Function} We now formally define the policy parameterization class, which is compatible with a neural network implementation.
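A standard way to draw a sample from $d^{\pi}_{\tilde s}$ is to roll out $\pi$ for a $\mathrm{Geometric}(1-\gamma)$ number of steps and return the last visited pair; a minimal sketch on a hypothetical deterministic chain (illustrative only):

```python
# Sketch: sampling from d^pi. With probability gamma we take another step,
# so the pair visited after t steps is returned with probability
# (1 - gamma) * gamma^t, matching the definition of d^pi. Toy deterministic
# chain dynamics and a trivial policy, for illustration only.
import random

GAMMA = 0.5

def step(s, a):            # toy dynamics: move right along a chain, cap at 5
    return min(s + 1, 5)

def pi(s):                 # toy policy: always action 0
    return 0

def d_sample(s0, rng):
    s, a = s0, pi(s0)
    while rng.random() < GAMMA:    # continue with probability gamma
        s = step(s, a)
        a = pi(s)
    return (s, a)

rng = random.Random(0)
samples = [d_sample(0, rng) for _ in range(20000)]
frac_start = sum(1 for z in samples if z == (0, 0)) / len(samples)
# In expectation, d^pi_0(0, 0) = 1 - gamma = 0.5.
```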
For a set of functions $\mathcal{F} \subseteq\{f: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}\}$, we consider the policy space $\Pi_{\mathcal{F}}:= \{ \pi_f,\ f\in\mathcal{F}\}$ obtained by applying the softmax transform to functions in $\mathcal{F}$, i.e., for any $f\in \mathcal{F}$, $$ \pi_f(a|s) := \frac{\exp(f(s,a))}{\sum_{a'\in\mathcal{A}}\exp(f(s,a'))}. $$ Given $\mathcal{F}$, we define its function difference class $\Delta \mathcal{F} := \{\Delta f\ |\ \Delta f = f- f',\ f,f'\in \mathcal{F}\}$ and the width function on $\Delta \mathcal{F}$ at a state-action pair $(s, a)$ as \label{width} $$ \omega(\Delta \mathcal{F}, s, a)=\sup _{\Delta f \in \Delta \mathcal{F} }|\Delta f(s, a)|. $$ As we will show shortly, this width function is used to design exploration bonuses for our algorithm. \section{Algorithms} \label{sec 3} \begin{algorithm}[t] \caption{\textbf{LPO}} \label{A1} \begin{algorithmic}[1] \STATE \textbf{Input}: Function class $\mathcal{F}$ \STATE \textbf{Hyperparameters}: $N,\delta,\beta$ \STATE For all $s \in S$, initialize $\pi^{0}(\cdot|s)=\text{Unif}(\mathcal{A})$, $\widehat{\mathcal{Z}}^0=\emptyset$ \FOR{$n=1,2,\cdots,N$} \STATE Update policy cover $\pi_{cov}^{n}=\pi^{0:n-1}$ \STATE ${\widehat{\mathcal{Z}}}^n \leftarrow \textbf{S-Sampling} \big(\mathcal{F},\widehat{\mathcal{Z}}^{n-1},(s_{n-1},a_{n-1}),\delta\big)$ \IF{${\widehat{\mathcal{Z}}}^n \neq {\widehat{\mathcal{Z}}}^{\underline{n}}$ \textbf{or} $n=1$} \STATE Update the known set and bonus function \STATE ${\mathcal{K}}^n=\{(s,a)\ |\ \omega({\widehat{\mathcal{F}}}^n,s,a)< \beta\}$ \STATE $b^n(s,a)=\frac{3}{1-\gamma}\cdot\textbf{1}\{\omega({\widehat{\mathcal{F}}}^n,s,a)\geq \beta\}+\frac{2}{\beta}\cdot\omega({\widehat{\mathcal{F}}}^n,s,a)\cdot\textbf{1}\{\omega({\widehat{\mathcal{F}}}^n,s,a) <\beta\}$ \STATE Set $\underline{n}\leftarrow n$ \STATE $\pi^n\leftarrow \textbf{Policy Update}(\pi_{cov}^n,b^n,\mathcal{K}^n)$ \ELSE \STATE $\pi^n \leftarrow \pi^{\underline{n}},\mathcal{K}^n \leftarrow
\mathcal{K}^{\underline{n}},b^n \leftarrow b^{\underline{n}}$ \ENDIF \STATE $(s_n,a_n)\leftarrow \textbf{d-sampler}(\pi^{\underline{n}},\nu)$ \ENDFOR \STATE \textbf{Output}: Unif($\pi^0,\pi^1,\cdots,\pi^{N-1}$) \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{\textbf{S-Sampling (Sensitivity-Sampling)}} \label{A2} \begin{algorithmic}[1] \STATE \textbf{Input}: Function class $\mathcal{F}$, current sensitivity dataset $\widehat{\mathcal{Z}}$, a new state-action pair $z$, failure probability $\delta$. \STATE Compute the sensitivity factor of $z$ relative to the dataset $\widehat{\mathcal{Z}}$: $$ s_z=C\cdot\operatorname{sensitivity}_{\widehat{\mathcal{Z}},\mathcal{F}}(z)\cdot\log\big(N\mathcal{N}(\mathcal{F},\sqrt{\delta/64N^3})/\delta\big) $$ \STATE Let $\widehat{z}$ be the neighbor of $z$ satisfying condition \eqref{E2} \IF {$s_z\geq 1$} \STATE Add 1 copy of $\widehat{z}$ into $\widehat{\mathcal{Z}}$ \ELSE \STATE Let $n_z = \lfloor\frac{1}{s_z}\rfloor$ and, with probability $\frac{1}{n_z}$, add $n_z$ copies of $\widehat{z}$ into $\widehat{\mathcal{Z}}$ \ENDIF \STATE \textbf{return} $\widehat{\mathcal{Z}}$.
\end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{\textbf{Policy Update}} \label{A3} \begin{algorithmic}[1] \STATE \textbf{Input:} $\pi_{cov},b,\mathcal{K}$ \STATE \textbf{Initialize:} $\pi_0(\cdot|s)=\text{Unif}(\mathcal{A})$ if $s \in \mathcal{K}$, and \STATE $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \pi_0(\cdot|s)=\text{Unif}(\{a\,|\,(s,a)\notin\mathcal{K}\})$ if $s\notin\mathcal{K}$ \FOR{$k=0,1,\cdots,K-1$} \IF{$k-\underline{k}>\kappa$ \textbf{or} $k=0$} \STATE $\underline{k} \leftarrow k$ \STATE $D \leftarrow \textbf{Behaviour Policy Sampling}(\pi_{\underline{k}},\pi_{\text{cov}})$ \ENDIF \STATE $\widehat{Q}_k \leftarrow$ \textbf{Policy Evaluation Oracle}$(\pi_k,D,\pi_{\underline{k}},b)$ \STATE Update policy: $\forall s\in \mathcal{K}$ \STATE $\pi_{k+1}(\cdot|s)\propto\pi_{k}(\cdot|s)e^{\eta\widehat{Q}_k(s,\cdot)}$ \ENDFOR \STATE \textbf{return:}$\ \pi_{0:K-1}=\text{Unif}\{\pi_0,\cdots,\pi_{K-1}\}$ \end{algorithmic} \end{algorithm} In this section, we present our algorithm \emph{Low-Switching Policy Gradient and Exploration via Online Sensitivity Sampling} \textbf{(LPO)}. The algorithm takes a function class $\mathcal{F}$ as input and interacts with the environment to produce a near-optimal policy. The complete pseudocode is in \cref{A1}. We first give an overview of our algorithm before describing the details of our improvements. \subsection{Overview of our Algorithm} Our algorithm \textbf{LPO} (\cref{A1}) has two loops. The outer loop produces a series of carefully designed optimistic MDPs, obtained by adding a reward bonus and choosing an initial state distribution, which are then solved via regression in the inner loop. These optimistic MDPs encourage the agent to explore unseen parts of the environment. In \textbf{LPO}, we construct the initial state distribution as the uniform mixture of previously trained policies (also called the policy cover).
Specifically, at the beginning of the $n$-th iteration, we have already collected a sample $(s_{n-1},a_{n-1})$ using the last policy $\pi^{\underline{n}}$. At iteration $n$, we use \textbf{S-Sampling} (i.e., \textbf{Sensitivity-Sampling}, \cref{A2}) to measure the change that the new sample brings to the dataset. If the current sample provides sufficiently new information relative to the current dataset, then, with high probability, we store this data point and invoke the inner loop to update the policy. Otherwise, we discard it and continue to collect data under $\pi^{\underline{n}}$. Through this process, a policy cover $\pi_{cov}^n=$ Unif($\pi^0,\pi^1,\cdots,\pi^{n-1}$) is constructed to provide an initial distribution for the inner routine. Based on this cover, we define the known state-actions $\mathcal{K}^n$, which can be visited with high probability under $\pi_{cov}^n$. Using $\mathcal{K}^n$ and a reward bonus $b^n$, we create an optimistic MDP that encourages the agent to explore outside $\mathcal{K}^n$ as well as to refine its estimates inside $\mathcal{K}^n$. In the inner routine, the algorithm \textbf{Policy Update} (\cref{A3}) finds an approximately optimal policy $\pi^n$ in the optimistic MDP through general function approximation. This policy $\pi^n$ then produces new samples, which are measured in the next iteration. As the algorithm proceeds, the policy cover gains sufficient coverage over the state-action space and the bonus shrinks. Therefore, the near-optimal policies in the optimistic MDPs eventually behave well in the original MDP. Next, we describe the details of each part of our algorithm. \subsection{Outer Loop} We now describe the three important parts of the outer loop.
\paragraph{Lazy Updates of Optimistic MDPs via Online Sensitivity-Sampling} The lazy, or infrequent, updates of the optimistic MDPs in \textbf{LPO} play a crucial role in improving the sample complexity: they reduce the number of \textbf{Policy Update} invocations from $O(N)$ to $O(\text{poly}(\log N))$. For the linear case, \citet{zanette2021cautiously} achieve this result by monitoring the determinant of the empirical cumulative covariance matrix. However, in our general setting, we can no longer rely on linear features. Instead, we introduce our online sensitivity sampling technique, a variant of which also appears in \citep{wang2020reinforcement, kong2021online}. We now describe the procedure for constructing the sensitivity dataset $\widehat{\mathcal{Z}}^n$. At the beginning of iteration $n$, the algorithm receives the current sensitivity dataset $\widehat{\mathcal{Z}}^{n-1}$ and the new data point $(s_{n-1},a_{n-1})$ from the last iteration. We first calculate the online sensitivity score in \eqref{E1} to measure the importance of $(s_{n-1},a_{n-1})$ relative to $\widehat{\mathcal{Z}}^{n-1}$. \begin{equation}\label{E1} \begin{aligned} &\operatorname{sensitivity}_{\widehat{\mathcal{Z}}^{n}, \mathcal{F}}\left(z\right) \\ =&\sup_{f_1,f_2\in \mathcal{F}}\frac{{\left(f_1(z)-f_2(z)\right)}^2}{\min\{{\|f_1-f_2\|}_{\widehat{\mathcal{Z}}^n}^2,4NW^2\}+1} \end{aligned} \end{equation} The algorithm then adds $(s_{n-1},a_{n-1})$ to $\widehat{\mathcal{Z}}^{n-1}$ with a probability determined by its online sensitivity score. Intuitively, the more important the new sample is, the more likely it is to be added. If added, the algorithm also sets the number of copies added to the dataset according to the sampling probability.
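For a finite function class, the score \eqref{E1} and the sampling step can be sketched directly (toy scalar class; the constant $C$ and the log factor from \cref{A2} are dropped for clarity):

```python
# Sketch: online sensitivity score of a new point z relative to a dataset Z,
# for a small finite function class F, followed by the add-with-copies step.
# Illustrative only; C and the covering-number log factor are omitted.
import random

W, N = 1.0, 10   # value bound and (toy) horizon appearing in eq. (E1)

def z_norm_sq(df, Z):
    return sum(df(z) ** 2 for z in Z)

def sensitivity(F, Z, z):
    best = 0.0
    for f1 in F:
        for f2 in F:
            df = lambda x: f1(x) - f2(x)
            num = df(z) ** 2
            den = min(z_norm_sq(df, Z), 4 * N * W * W) + 1
            best = max(best, num / den)
    return best

def s_sampling(F, Z, z, rng):
    s_z = sensitivity(F, Z, z)
    if s_z >= 1:
        Z.append(z)                    # highly novel points are always kept
    else:
        n_z = int(1 / s_z)
        if rng.random() < 1.0 / n_z:   # keep n_z copies with prob. 1/n_z
            Z.extend([z] * n_z)
    return Z

F = [lambda x: 0.0, lambda x: float(x)]  # toy class on scalar "state-actions"
Z = s_sampling(F, [], 1.0, random.Random(0))
```

On an empty dataset the score of the first point is $1$, so it is kept deterministically; adding it once already halves the score of a repeated query, which is exactly the mechanism that makes later switches rare.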
In addition, due to a technical obstacle in the analysis, we need to replace the original data point $z$ with the point $\widehat{z}\in\mathcal{C}\big(\mathcal{S}\times\mathcal{A},\frac{1}{16}\sqrt{\delta/(64N^3)}\big)$ in the $\varepsilon$-cover set (defined in Assumption \ref{ass 4.2}), which satisfies: \begin{equation}\label{E2} \sup_{f\in\mathcal{F}}|f(z)-f(\widehat{z})|\leq\frac{1}{16}\sqrt{\delta/(64N^3)} \end{equation} Furthermore, the chance that a new sample is sensitive becomes smaller once the dataset has accumulated enough samples, which means that the policy will not change frequently in the later period of training. As will be demonstrated later, the number of switches (i.e., the number of policy changes during the run of the outer loop) is logarithmic. Consequently, the size of the sensitivity dataset is bounded, and the dataset provides a good approximation to the original dataset, which is a benign property for the analysis. \paragraph{Known and Unknown State-Actions} According to the value of the width function (defined in \cref{width}) under the sensitivity dataset, the state-action space $\mathcal{S}\times\mathcal{A}$ is divided into two sets, namely the known set $\mathcal{K}^n$ in \eqref{E3} and its complement, the unknown set: \begin{equation}\label{E3} \begin{aligned} \mathcal{K}^n=&\{(s,a)\in\mathcal{S}\times\mathcal{A}\,|\ \omega(\widehat{\mathcal{F}}^n,s,a)<\beta\} \end{aligned} \end{equation} where $$ {\widehat{\mathcal{F}}}^n=\{\Delta f \in \Delta \mathcal{F}\,|\ \|\Delta f\|_{\widehat{\mathcal{Z}}^n}^2\leq \epsilon \}. $$ In fact, the width function $\omega(\widehat{\mathcal{F}}^n,s,a)$ serves as a prediction error for a new state-action pair $(s,a)$ when the training data is $\widehat{\mathcal{Z}}^n$; it generalizes $\|\phi(s,a)\|_{(\widehat{\Sigma}^n)^{-1}}$ from the linear case. Therefore, the known set $\mathcal{K}^n$ represents the state-action pairs that are easily visited under the policy cover $\pi_{\text{cov}}^n$.
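For a finite toy class, the width function, the known set, and the bonus used in \cref{A1} can be sketched as follows (all numerical values hypothetical):

```python
# Sketch: width over the data-consistent difference class, the known set K^n,
# and the bonus b^n built from them. Finite toy class on scalar inputs,
# illustrative only.

GAMMA, BETA, EPS = 0.9, 0.5, 1.0

def width(diffs, z):
    """omega(F_hat, z) = sup over surviving differences of |df(z)|."""
    return max(abs(df(z)) for df in diffs) if diffs else 0.0

def surviving_diffs(delta_F, Z):
    """Keep differences consistent with the data: ||df||_Z^2 <= eps."""
    return [df for df in delta_F if sum(df(z) ** 2 for z in Z) <= EPS]

def bonus(diffs, z, known):
    if z in known:
        return (2.0 / BETA) * width(diffs, z)   # refine inside K^n
    return 3.0 / (1.0 - GAMMA)                  # drive exploration outside

delta_F = [lambda x: 0.0, lambda x: 0.1 * x, lambda x: x]
Z = [2.0]                                # observed data
diffs = surviving_diffs(delta_F, Z)      # x -> x is eliminated (||.||^2 = 4)
known = {z for z in [0.0, 1.0, 2.0, 10.0] if width(diffs, z) < BETA}
```

Points far from the data keep a large width, fall outside the known set, and receive the maximal bonus $\frac{3}{1-\gamma}$; nearby points receive the small width-proportional bonus.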
If all the actions at a state are in $\mathcal{K}^n$, we say this state is known; with a slight abuse of notation, we also write $\mathcal{K}^n$ for the set of known states: \begin{equation} \mathcal{K}^n=\{s\in\mathcal{S}\,|\ \forall a\in\mathcal{A},\ \omega(\widehat{\mathcal{F}}^n,s,a)<\beta\} \end{equation} \paragraph{Bonus Function} In a more refined form, \textbf{LPO} devises a bonus function on both the known and unknown sets. \begin{equation}\label{E5} \begin{aligned} b^n(s,a)&=2b_w^n(s,a)+b_{\text{1}}^n(s,a),\ \text{where} \\ b_w^n(s,a)&=\frac{1}{\beta}\ \omega(\widehat{\mathcal{F}}^n,s,a)\textbf{1}\{s\in\mathcal{K}^n\},\ \text{and}\ \\ b_{\text{1}}^n(s,a)&=\frac{3}{1-\gamma}\textbf{1}\{s\notin\mathcal{K}^n\} \end{aligned} \end{equation} On the unknown state-actions, the bonus is the constant $\frac{3}{1-\gamma}$, which dominates the total discounted reward of any trajectory in the original MDP. This forces the agent out of the known set to explore the unknown parts of the MDP. On the known state-actions, the uncertainty is measured by the width function. Notice that our algorithm explores the environment in a much more sophisticated and efficient way than \citet{feng2021provably} do. Our algorithm \textbf{LPO} not only explores the unknown part using the indicator $b_{\text{1}}^n(s,a)$, but also takes the uncertainty information $b_w^n(s,a)$ in the known set into account. Consequently, \textbf{LPO} keeps exploring a state-action pair in the known set until it is sufficiently well understood. Moreover, since the size of the sensitivity dataset is bounded by $O(d\log N)$, where $d$ is the eluder dimension, the average computation time of our bonus function is greatly reduced. \subsection{Inner Loop} In the inner routine, \textbf{Policy Update} initializes the policy to a uniform distribution that encourages exploration of the unknown state-actions. We then adopt an online learning algorithm of actor-critic type to update the policy.
This update rule is equivalent to the Natural Policy Gradient (NPG) algorithm for log-linear policies \citep{kakade2001natural, agarwal2020optimality}, where we fit the critic with Monte Carlo samples and update the actor using exponential weights. As mentioned in \citet{agarwal2020optimality}, Monte Carlo sampling has the advantage of better robustness to model misspecification, but it is a major source of sample complexity. \paragraph{Sample-efficient policy evaluation oracle via importance sampling.} In the \textbf{Policy Update} routine, the policy obtained in each iteration needs to be evaluated. The most direct approach is to interact with the environment via Monte Carlo sampling and estimate the $Q$-function separately for each policy, which wastes samples. To improve the sample complexity of \textbf{Policy Update} while keeping the robustness property, we design a sample-efficient policy evaluation oracle by applying trajectory-level importance sampling to past Monte Carlo return estimates \citep{precup2000eligibility}. Specifically, at iteration $\underline{k}$ of the inner loop, the agent collects data via the routine \textbf{Behaviour Policy Sampling} (\cref{A4}), and the dataset obtained in this iteration is reused for the next $\kappa$ iterations. At iteration $k$ ($k\leq \underline{k}+\kappa $), the \textbf{Policy Evaluation Oracle} (\cref{A5}) estimates the Monte Carlo return for the current policy $\pi_k$ by reweighting the samples with importance sampling. With the reweighted random returns, the oracle fits the critic via least-squares regression and outputs an estimate $\widehat{Q}_k$ for policy $\pi_k$. Finally, we update the policy by following the NPG rule.
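The trajectory-level reweighting used by the oracle can be sketched as follows (toy two-action trajectories; all names illustrative):

```python
# Sketch: trajectory-level importance sampling for policy evaluation. A
# rollout collected under behaviour policy pi_b is reweighted by the product
# of per-step likelihood ratios so that its return estimates the value under
# the current policy pi_k. Toy single-state trajectories, illustrative only.

def is_weight(traj, pi_k, pi_b):
    """lambda_i = prod over steps t >= 2 of pi_k(a_t|s_t) / pi_b(a_t|s_t)."""
    w = 1.0
    for (s, a) in traj[1:]:        # the first action is fixed by the query
        w *= pi_k[s][a] / pi_b[s][a]
    return w

def reweighted_return(trajs, returns, pi_k, pi_b):
    ws = [is_weight(t, pi_k, pi_b) for t in trajs]
    return sum(w * g for w, g in zip(ws, returns)) / len(trajs)

pi_b = {0: [0.5, 0.5]}             # behaviour: uniform over two actions
pi_k = {0: [1.0, 0.0]}             # target: always plays action 0
trajs = [[(0, 0), (0, 0)], [(0, 0), (0, 1)]]
returns = [1.0, 3.0]
est = reweighted_return(trajs, returns, pi_k, pi_b)
```

The second trajectory is impossible under the target policy and so receives weight zero, while the first is up-weighted by $2$; the estimate is unbiased for the target policy's return.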
Although the technique above can largely reduce the number of interactions with the environment (from $K$ to $\lceil\frac{K}{\kappa}\rceil$), the choice of $\kappa$ greatly influences the variance of the importance sampling estimator, which ultimately threatens the robustness property. In fact, we need to set $\kappa = \widetilde{O}(\sqrt{K})$ in order to keep the variance of the estimator stable (see Lemma \ref{E.3} and Remark \ref{kappa} for details). \begin{remark} If we had a dataset with full coverage over the state-action space (e.g., collected using a bonus-driven reward \citep{jin2020provably, wang2020reward}), the policy evaluation oracle could evaluate all policies from this single dataset using only $\widetilde{O}(\frac{1}{\varepsilon^2})$ samples. However, such an oracle depends on the ($\zeta$-approximate) linear MDP, which is a stronger assumption than ours. \end{remark} \paragraph{Pessimistic critic to produce one-sided errors} From line~\ref{L9} of \cref{A5}, we see that the critic is fit to the Monte Carlo return minus the initial bonus. An intuitive form for the critic estimate would therefore be $\widehat{Q}_{b^n}^k(s,a)=f_k(s,a)+b^n(s,a)$. However, in line~\ref{L10} of \cref{A5}, we only partially compensate for the subtracted term and define the critic estimate as $\widehat{Q}_{b^n}^k(s,a)=f_k(s,a)+\frac{1}{2}b^n(s,a)$. This introduces a negative bias into the estimate. Nevertheless, as the later proof shows, $\widehat{Q}_{b^n}^k(s,a)$ is still optimistic relative to $Q^k(s,a)$. This one-sided error property plays an essential role in bounding the gap between $Q_{b^n}^{k,*}$ and $\widehat{Q}_{b^n}^k(s,a)$, which ultimately improves the sample complexity. \section{Theoretical Guarantee} \label{sec 4} In this section, we provide the main result for \textbf{LPO} under several assumptions. The main theorem (\cref{thm:bigtheorem body}) is presented in this section, and the complete proof is in the appendix.
First of all, the sample complexity of algorithms with function approximation depends on the complexity of the function class. To measure this complexity, we adopt the notion of eluder dimension, first introduced in \citet{russo2013eluder}. \begin{definition} \label{def:inj} (Eluder dimension). Let $\varepsilon \geq 0$ and let $\mathcal{Z}=\left\{\left(s_{i}, a_{i}\right)\right\}_{i=1}^{n} \subseteq \mathcal{S} \times \mathcal{A}$ be a sequence of state-action pairs. \begin{itemize} \item{A state-action pair $(s, a) \in \mathcal{S} \times \mathcal{A}$ is $\varepsilon$-dependent on $\mathcal{Z}$ with respect to $\mathcal{F}$ if any $f, f^{\prime} \in \mathcal{F}$ satisfying $\left\|f-f^{\prime}\right\|_{\mathcal{Z}} \leq \varepsilon$ also satisfies $\left|f(s, a)-f^{\prime}(s, a)\right| \leq \varepsilon$.} \item{A pair $(s, a)$ is $\varepsilon$-independent of $\mathcal{Z}$ with respect to $\mathcal{F}$ if $(s, a)$ is not $\varepsilon$-dependent on $\mathcal{Z}$.} \item{The eluder dimension $\operatorname{dim}_{E}(\mathcal{F}, \varepsilon)$ of a function class $\mathcal{F}$ is the length of the longest sequence of elements in $\mathcal{S} \times \mathcal{A}$ such that, for some $\varepsilon^{\prime} \geq \varepsilon$, every element is $\varepsilon^{\prime}$-independent of its predecessors.} \end{itemize} \end{definition} \begin{remark} \citet{russo2013eluder} derive bounds on the eluder dimension in several special cases. For example, when $\mathcal{F}$ is the class of linear functions, i.e., $f_{\theta}(s,a)=\theta^\top\phi(s,a)$ with a given feature map $\phi: \mathcal{S}\times\mathcal{A} \rightarrow \mathbb{R}^d$, or the class of generalized linear functions of the form $f_{\theta}(s,a)=g(\theta^\top\phi(s,a))$ with $g$ differentiable and strictly increasing, then $\operatorname{dim}_{E}(\mathcal{F}, \varepsilon)=O(d\log(1/\varepsilon))$.
Recently, more general complexity measures for non-linear function classes have been proposed in \citet{jin2021bellman, foster2021statistical, chen2022general}. However, adopting the eluder dimension allows us to use computation-friendly optimization methods (e.g., dynamic programming, policy gradient), whereas those measures do not directly imply computationally tractable and implementable approaches. If only statistical complexity is considered, we believe our techniques could be extended to their complexity measures. \end{remark} Next, we assume that the function class $\mathcal{F}$ and the state-action space $\mathcal{S}\times\mathcal{A}$ have bounded covering numbers. \begin{assumption} \label{ass 4.2} ($\varepsilon$-cover). For any $\varepsilon>0$, the following holds: 1.\ there exists an $\varepsilon$-cover $\mathcal{C}(\mathcal{F}, \varepsilon) \subseteq \mathcal{F}$ with size $|\mathcal{C}(\mathcal{F}, \varepsilon)| \leq \mathcal{N}(\mathcal{F}, \varepsilon)$ such that for any $f \in \mathcal{F}$, there exists $f^{\prime} \in \mathcal{C}(\mathcal{F}, \varepsilon)$ with $\left\|f-f^{\prime}\right\|_{\infty} \leq \varepsilon$; 2.\ there exists an $\varepsilon$-cover $\mathcal{C}(\mathcal{S} \times \mathcal{A}, \varepsilon)$ with size $|\mathcal{C}(\mathcal{S} \times \mathcal{A}, \varepsilon)| \leq \mathcal{N}(\mathcal{S} \times \mathcal{A}, \varepsilon)$ such that for any $(s, a) \in \mathcal{S} \times \mathcal{A}$, there exists $\left(s^{\prime}, a^{\prime}\right) \in \mathcal{C}(\mathcal{S} \times \mathcal{A}, \varepsilon)$ with $\max _{f \in \mathcal{F}}\left|f(s, a)-f\left(s^{\prime}, a^{\prime}\right)\right| \leq \varepsilon$. \end{assumption} \begin{remark} Assumption \ref{ass 4.2} is rather standard. Since the complexity of our algorithm depends only logarithmically on $\mathcal{N}(\mathcal{F},\cdot)$ and $\mathcal{N}(\mathcal{S} \times \mathcal{A},\cdot)$, even exponentially large covering numbers are acceptable.
\end{remark} Next, we impose a bound on the values of the function class: \begin{assumption} \label{ass 4.3} (Regularity). We assume that $\sup _{f \in \mathcal{F}}\|f\|_{\infty} \leq W$. \end{assumption} Assumption \ref{ass 4.3} is mild, as nearly all function classes used in practice have bounded magnitude on the input domain of interest. In general, one should not expect an arbitrary function class to provide good guarantees for approximating a policy. We use the following completeness condition to characterize whether the function class $\mathcal{F}$ is fit to solve RL problems. \begin{assumption} \label{ass 4.4} ($\mathcal{F}$-closedness). Define the Bellman operator $\mathcal{T}^{\pi}$ by $$ \mathcal{T}^{\pi} f(s, a):=\mathbb{E}^{\pi}\left[r(s, a)+\gamma f\left(s^{\prime}, a^{\prime}\right) \mid s, a\right] . $$ We assume that for all $\pi \in\{\mathcal{S} \rightarrow \Delta(\mathcal{A})\}$ and $g: \mathcal{S} \times \mathcal{A} \rightarrow\left[0, W\right]$, we have $\mathcal{T}^{\pi} g \in \mathcal{F}$. \end{assumption} \begin{remark} Assumption \ref{ass 4.4} is a closedness assumption on $\mathcal{F}$, which guarantees enough representational power for critic fitting. In some special cases, such as linear $f$ in a linear MDP \citep{yang2019sample, jin2020provably}, this assumption always holds. If it fails, meaning the $Q$-function may not be realizable in our function class $\mathcal{F}$, then an $\epsilon_{\text{bias}}$ term arises when we approximate the true $Q$-function. This misspecified setting is discussed in Assumption \ref{B.2}. \end{remark} With the above assumptions, we have the following sample complexity result for \textbf{LPO}. \begin{theorem} \label{thm:bigtheorem body} (Main Results: Sample Complexity of LPO).
Under Assumptions \ref{ass 4.2}, \ref{ass 4.3}, and \ref{ass 4.4}, for any comparator $\widetilde{\pi}$, a fixed failure probability $\delta$, eluder dimension $d =\operatorname{dim}_{E}(\mathcal{F},1/N)$, a suboptimality gap $\varepsilon$, and appropriate input hyperparameters $N \geq \widetilde{O}\left(\frac{{d}^2}{(1-\gamma)^4\varepsilon^2}\right), K = \widetilde{O}\left(\frac{\ln |\mathcal{A}| W^{2}}{(1-\gamma)^{2} \varepsilon^{2}}\right), M \geq \widetilde{O}\left(\frac{{d}^2}{(1-\gamma)^4\varepsilon^2}\right), \eta = \widetilde{O}\left(\frac{\sqrt{\ln |\mathcal{A}|}}{\sqrt{K} W}\right), \kappa = \widetilde{O}\left(\frac{1-\gamma}{\eta W}\right)$, our algorithm returns a policy $\pi^{\text{LPO}}$ satisfying $$ \left(V^{\widetilde{\pi}}-V^{\pi^{\text{LPO}}}\right)\left(s_{0}\right) \leq \varepsilon $$ with probability at least $1-\delta$, after taking at most $\widetilde{O}\left(\frac{{d}^3}{(1-\gamma)^{8} \varepsilon^{3}}\right)$ samples. \end{theorem} \begin{remark} The complete proof of our main theorem is presented in \cref{M-thm}. In the model misspecified case, i.e., when Assumption \ref{ass 4.4} does not hold, an $\epsilon_{\text {bias }}$ term appears in our regret bound (see Lemma \ref{B.10} for details). \end{remark} \section{Practical Implementation and Experiments} \label{sec 5} In this section, we introduce a practical approach to implementing our proposed theoretical algorithm and present our experimental results. \subsection{Practical Implementation of \textbf{LPO}} \begin{figure*} \caption{Performance of \textbf{PPO}, \textbf{ENIAC}, and our \textbf{LPO} on the sparse-reward MuJoCo environments.} \label{mainexperiment} \label{fig:1} \end{figure*} The width function in our theoretical framework enables our policy gradient algorithm to explore efficiently. The value of the width function should be large on novel state-action pairs and shrink on familiar ones. Intuitively, the width function should be at its maximum over all state-action pairs at the beginning of learning and decrease along the way.
To this end, we leverage the random network distillation technique proposed by \citet{burda2018exploration}. We randomly initialize two neural networks $f$ and $f'$ that map from $\mathcal{S} \times \mathcal{A}$ to $\mathbb{R}^d$ with the same architecture (but different parameters). Our bonus $b(s, a)$ is defined as $b(s, a) = \Vert f(s, a) - f'(s, a) \Vert^2$. During training, we fix $f'$ and train $f$ to minimize $\Vert f(s, a) - f'(s, a) \Vert^2$ with gradient descent over the state-action pairs that the agent has seen during the sampling procedure, i.e., we distill the randomly initialized network $f'$ into the trained one $f$. Over time, this residual-based bonus decreases on state-action pairs that the agent commonly visits. With the bonus calculated as described above, at each Monte Carlo session we compute an intrinsic advantage estimate $\hat{A}^{(int)}$, which drives our policy gradient together with the extrinsic advantage estimate $\hat{A}^{(ext)}$. The total advantage used in the gradient of the policy parametrized by $\phi$ is given by: \begin{equation}\label{totaladvantage} \hat{A}^{(total)}_t = \alpha \hat{A}^{(ext)}_t + \beta \hat{A}^{(int)}_t \end{equation} where $\alpha$ and $\beta$ are hyperparameters that control how much we emphasize exploration; in our experiments, we use $\alpha = 2$ and $\beta = 1$.
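A scalar caricature of this distillation bonus (plain Python functions standing in for the neural networks; purely illustrative) is:

```python
# Sketch: random network distillation bonus b(x) = (f(x) - f'(x))^2, with
# the predictor f trained to imitate the frozen random target f'. Tiny 1-d
# linear "networks" stand in for the real architectures; illustrative only.
import random

rng = random.Random(0)
w_target = rng.uniform(-1, 1)      # frozen random target  f'(x) = w' * x
w_pred = rng.uniform(-1, 1)        # trainable predictor    f(x) = w  * x

def bonus(x):
    return (w_pred * x - w_target * x) ** 2

visited = [1.0, 2.0, 0.5]          # inputs the agent saw while sampling
b_before = bonus(1.0)

for _ in range(500):               # distill f' into f on the visited data
    for x in visited:
        grad = 2 * (w_pred * x - w_target * x) * x   # d/dw of squared error
        w_pred -= 0.05 * grad

b_after = bonus(1.0)               # bonus shrinks on familiar inputs
```

After distillation the bonus on visited inputs is essentially zero, while inputs far outside the training data would still produce a large residual; this is the mechanism that lets the bonus track novelty.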
The estimates $\hat{A}^{(ext)}$ and $\hat{A}^{(int)}$ are computed with generalized advantage estimation \citep{schulman2015high}; over a time horizon of $T$, they are given by: \begin{equation}\label{advcalculation} \begin{aligned} \hat{A}^{(ext)}_t & = \sum_{i=t}^T (\gamma^{(ext)} \lambda)^{i - t} [r_i + \gamma^{(ext)} V(s_{i+1}) - V(s_i)] \\ \hat{A}^{(int)}_t & = \sum_{i=t}^T (\gamma^{(int)} \lambda)^{i - t} [b_i + \gamma^{(int)} V^{(int)}(s_{i+1}) \\ & - V^{(int)}(s_i)] \\ \end{aligned} \end{equation} where $V$ and $V^{(int)}$ are two parameterized value estimators that predict extrinsic and intrinsic values separately. We train the value functions to fit the Monte Carlo estimates of the values of the current policy. In our theoretical framework, the sensitivity computed via \eqref{E1} is designed to obtain boundedness in the final theoretical guarantee, not for practical implementation. We overcome this issue by introducing a coarse yet effective approximation of sensitivity sampling: gradually increasing the number of samples we collect for Monte Carlo estimation. For the sampling procedure at time $t$, we collect $\max\{N, (1 + \frac{1}{T})^tN\}$ samples, where $N$ is the number of samples collected in the first sampling procedure. This simple mechanism makes the agent collect more and more samples in each training loop, which allows the agent to explore more freely at the early stage of learning and forces it to explore more carefully at the late stage, since using more data per training loop shrinks the value of the width function in a more stable way. The algorithm is shown in Algorithm \ref{ea}.
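The estimates in \eqref{advcalculation} can be computed with the usual reverse-pass recursion; a short sketch with toy rewards and values (illustrative only):

```python
# Sketch: generalized advantage estimation,
#   A_t = sum_{i=t}^T (gamma*lam)^(i-t) * [r_i + gamma*V(s_{i+1}) - V(s_i)],
# computed via the equivalent backward recursion A_t = delta_t +
# gamma*lam*A_{t+1}. The same routine serves the extrinsic (reward) and
# intrinsic (bonus) streams. Toy values only.

def gae(rewards, values, gamma, lam):
    """values has length len(rewards) + 1 (bootstrap value at the end)."""
    adv, running = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

rewards = [1.0, 0.0]
values = [0.0, 0.0, 0.0]
adv = gae(rewards, values, gamma=0.9, lam=0.95)
# Combined advantage as in the text: alpha * A_ext + beta * A_int.
total = [2.0 * a_e + 1.0 * a_i for a_e, a_i in zip(adv, [0.0, 0.0])]
```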
\begin{algorithm}[t] \caption{\textbf{LPO (Practical Implementation)}} \label{ea} \begin{algorithmic}[1] \STATE \textbf{Input}: Width function $(f, f')$, Policy $\pi_{\phi_0}$, Value networks $(V^{(ext)}$, $V^{(int)})$ \STATE \textbf{Hyperparameters}: $N, K, \gamma, \lambda,\alpha, \beta$ \STATE For all $s \in S$, initialize $\pi^{0}(\cdot|s)=\text{Unif}(\mathcal{A})$ \FOR{ $k = 0,1,2,3,...,K$ } \STATE $T \gets \lceil (1 + \frac{1}{K})^k N \rceil$ \STATE Run policy $\pi_{\phi}$ for $T$ steps $\rightarrow D_k$ \STATE Calculate $A^{(ext)}$, $A^{(int)}$ using \eqref{advcalculation} with $\lambda$ \STATE Calculate $A^{(total)}$ using \eqref{totaladvantage} with $\alpha$, $\beta$ \STATE Optimize $\pi_{\phi}$, $(V^{(ext)}$, $V^{(int)})$ using \textbf{PPO} with $A^{(total)}$ as the advantage estimate \STATE Optimize $f$ to fit $f'$ w.r.t. $D_k$ \ENDFOR \STATE \textbf{Output}: Unif($\pi^0,\pi^1,\cdots,\pi^{K}$) \end{algorithmic} \end{algorithm} \subsection{Experiments} To further illustrate the effectiveness of our width function and our proposed sensitivity sampling, we compare our proposed \textbf{LPO} with \textbf{PPO} \citep{schulman2017proximal} and \textbf{ENIAC} \citep{feng2021provably} in sparse-reward MuJoCo environments \citep{6386109}. We re-implement \textbf{ENIAC} \citep{feng2021provably} with the random network distillation method, as we found the original implementation of the width function numerically unstable. More details are given in the discussion section. The sparse MuJoCo environments differ from the usual locomotion tasks, where rewards are dense and given according to the speed of the robot: in sparse environments, a reward of $+1$ is given only when the robot exceeds a certain speed, and no reward is given otherwise. Such environments are more difficult than their dense-reward counterparts in the sense that the agent needs to explore the environment strategically to find a good policy. \textbf{PPO} manages to find rewards in SparseHopper and SparseWalker2d, but fails in SparseHalfCheetah.
Although \textbf{ENIAC} \citep{feng2021provably} also uses intrinsic rewards for strategic exploration, it still fails in the SparseWalker2d environment. This might be because the intrinsic reward of \textbf{ENIAC} switches too fast, so the exploration is not persistent enough for the agent to find the reward. In contrast, our method succeeds in all three environments; the results are shown in Figure \ref{fig:1}. \subsection{Limitations of Previous Implementations} Note that we do not compare our method directly with the implementations in \citet{agarwal2020pc, feng2021provably}, as we discovered some limitations in their implementations. We discuss these limitations, concerning batch normalization and mis-implementations of core components, in more detail in Section~\ref{App F}. We illustrate the issue by directly running their published code. As shown in Figure \ref{fig:2}, our approach and the other baseline approaches \citep{stable-baselines3} successfully solve the problem in a few epochs, while their implementations fail to achieve similar performance. \begin{figure} \caption{Performance comparison between the original implementations of \textbf{ENIAC} and \textbf{PC-PG}, our approach, and the baselines.} \label{fig:2} \end{figure} \section{Conclusion} \label{sec 6} In this paper, we establish a low-switching, sample-efficient policy optimization algorithm with general function approximation using online sensitivity sampling and data reuse techniques. Our algorithm substantially improves the sample complexity of \citet{feng2021provably} while retaining its robustness to model misspecification. Our numerical experiments also empirically demonstrate the efficiency of the low-switching technique in policy gradient algorithms.
\appendix \onecolumn \begin{algorithm}[tb] \caption{\textbf{Behaviour Policy Sampling}} \label{A4} \begin{algorithmic}[1] \STATE \textbf{Input:} Behaviour Policy $\pi$, Policy Cover $\pi^{1:n}$ \STATE $D=\emptyset$ \FOR{$i=1,\cdots,M$} \STATE Sample $j$ uniformly at random from $\{1,\cdots,n\}$ \STATE $(s,a)\leftarrow \textbf{d-sampler}(\pi_j,\nu)$ \STATE Sample $h\geq1$ with probability $\gamma^{h-1}(1-\gamma)$ \STATE Continue the rollout from $(s,a)$ by executing $\pi$ for $h-1$ steps \STATE Store the rollout $D[i]\leftarrow \{(s_1,a_1,\cdots,s_h,a_h)\}$ where $(s_1,a_1)=(s,a)$ \ENDFOR \STATE \textbf{return:} $D$ \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{\textbf{Policy Evaluation Oracle}} \label{A5} \begin{algorithmic}[1] \STATE \textbf{Input}: Policy to evaluate $\pi$, Dataset $D$, Behaviour policy $\underline{\pi}$, Bonus function $b$ \FOR{$i=1,2,\cdots,M$} \STATE $\lambda_i\leftarrow\prod\limits_{\tau=2}^{|D[i]|}\frac{\pi(a_{\tau}|s_{\tau})}{\underline{\pi}(a_{\tau}|s_{\tau})}$ \STATE $(s_i^F,a_i^F)\leftarrow D[i]$'s first sample \STATE $(s_i^L,a_i^L)\leftarrow D[i]$'s last sample \STATE $G_i \leftarrow \frac{1}{1-\gamma}[r(s_i^L,a_i^L)+b(s_i^L,a_i^L)]$ \ENDFOR \STATE $\widehat{f}\leftarrow \underset{f \in \mathcal{F}}{\text{argmin}}\sum_{i=1}^{M}\big{(}f(s_i^F,a_i^F)-(\lambda_i G_i-b(s_i^F,a_i^F))\big{)}^2$ \label{L9} \STATE \textbf{return:} $\widehat{Q}(s,a)=\widehat{f}(s,a)+\frac{1}{2}b(s,a)$ if $s \in \mathcal{K}^n$, and $\widehat{Q}(s,a)=\widehat{f}(s,a)+b(s,a)$ otherwise \label{L10} \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{\textbf{d-sampler}} \label{A6} \begin{algorithmic}[1] \STATE \textbf{Input}: $\nu\in\Delta(S\times A),\pi$. \STATE Sample $s_0,a_0\sim\nu$. \STATE Sample $\tau\geq1$ with probability $\gamma^{\tau-1}(1-\gamma)$. \STATE Execute $\pi$ for $\tau-1$ steps, giving state $s$. \STATE Sample action $a\sim\pi(\cdot|s)$. \STATE \textbf{return} $(s,a)$.
\end{algorithmic} \end{algorithm} \section{Remaining Algorithm Pseudocodes} We provide the remaining algorithms, including Behaviour Policy Sampling (\cref{A4}), the Policy Evaluation Oracle (\cref{A5}), and the visitation distribution sampler (\cref{A6}). \section{Main Analysis} \label{App B} In this section, we provide the main guarantees for \textbf{LPO}. \subsection{Proof Setup} \paragraph{Bonus and auxiliary MDP} To begin with, we introduce the optimistic (bonus-added) and auxiliary MDPs, which also appear in \citep{agarwal2020pc,feng2021provably}. However, we design these MDPs with very different bonus functions. For each epoch $n\in[N]$, we consider an arbitrary fixed comparator policy $\widetilde{\pi}$ (e.g., an optimal policy) and three MDPs: the original MDP $\mathcal{M}:=(\mathcal{S},\mathcal{A},P,r,\gamma)$, the bonus-added MDP $\mathcal{M}_{b^n}:=(\mathcal{S},\mathcal{A},P,r+b^n,\gamma)$, and an auxiliary MDP $\mathcal{M}^n:=(\mathcal{S},\mathcal{A}\cup\{{a^\dagger}\},P^n,r^n,\gamma)$, where $a^\dagger$ is an extra action which is only available for $s\notin \mathcal{K}^n$. For all $(s,a)\in\mathcal{S}\times\mathcal{A}$, $$ P^n(\cdot|s,a)=P(\cdot|s,a),\ r^n(s,a)=r(s,a)+b^n(s,a). $$ For $s\notin\mathcal{K}^n$, $$ P^n(s|s,a^\dagger)=1,\quad r^n(s,a^\dagger)=3. $$ In other words, outside the known set the action $a^\dagger$ is absorbing and collects the maximum per-step reward. For the reader's convenience, we recall the definitions of the bonus function and the known states.
The bonus function $b^n:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ is defined as \begin{equation} \begin{aligned} b_w^n(s,a)&=\frac{1}{\beta}\ \omega(\widehat{\mathcal{F}}^n,s,a)\textbf{1}\{s\in\mathcal{K}^n\}\\b_{\text{1}}^n(s,a)&=\frac{3}{1-\gamma}\textbf{1}\{s\notin\mathcal{K}^n\}\\b^n(s,a)&=2b_w^n(s,a)+b_{\text{1}}^n(s,a) \end{aligned} \end{equation} The known states and known state-action pairs (both denoted $\mathcal{K}^n$) are \begin{equation} \begin{aligned} \mathcal{K}^n=&\{s\in\mathcal{S}|\ \forall a\in\mathcal{A},\omega(\widehat{\mathcal{F}}^n,s,a)<\beta\}\\\mathcal{K}^n=&\{(s,a)\in\mathcal{S}\times\mathcal{A}|\ \omega(\widehat{\mathcal{F}}^n,s,a)<\beta\} \end{aligned} \end{equation} \paragraph{} Given the auxiliary MDP $\mathcal{M}^n$, we define $\widetilde{\pi}^n(\cdot|s)=\widetilde{\pi}(\cdot|s)\ \text{for} \ s\in \mathcal{K}^n \ \text{and} \ \widetilde{\pi}^n(a^\dagger|s)=1\ \text{for}\ s\notin\mathcal{K}^n$. We denote by $\widetilde{d}_{\mathcal{M}^n}$ the state-action distribution induced by $\widetilde{\pi}^n$ on $\mathcal{M}^n$, and by $d^{\widetilde{\pi}}$ the state-action distribution induced by $\widetilde{\pi}$ on $\mathcal{M}$. \paragraph{} Given a policy $\pi$, we denote by $V_{b^n}^{\pi},Q_{b^n}^{\pi},\text{and}\ A_{b^n}^{\pi}$ the state-value, Q-value, and advantage function of $\pi$ on $\mathcal{M}_{b^n}$, and by $V_{\mathcal{M}^n}^{\pi}, Q_{\mathcal{M}^n}^{\pi}$, and $A_{\mathcal{M}^n}^{\pi}$ the counterparts on $\mathcal{M}^n$; we also define $Q^{\pi}(s,\widetilde{\pi}):=\mathbb{E}_{a\sim\widetilde{\pi}(\cdot|s)}[Q^{\pi}(s,a)]$. \newline Based on the above definitions and notations, we have the following lemmas. \begin{lemma} \label{B.5} (Distribution Dominance) \citep{feng2021provably}. For any state $s \in \mathcal{K}^{n}$, we have $$ \tilde{d}_{\mathcal{M}^{n}}(s, a) \leq d^{\tilde{\pi}}(s, a), \quad \forall a \in \mathcal{A} . $$ \end{lemma} \begin{proof} We prove the claim by induction over the horizon step $h$.
We denote the state-action distribution at the $h_{\mathrm{th}}$ step following $\tilde{\pi}^{n}$ on $\mathcal{M}^{n}$ as $\tilde{d}_{\mathcal{M}^{n}, h}$. Starting at $h=0$, if $s_{0} \in \mathcal{K}^{n}$, then $\tilde{\pi}^{n}\left(\cdot \mid s_{0}\right)=\tilde{\pi}\left(\cdot \mid s_{0}\right)$ and $$ \tilde{d}_{\mathcal{M}^{n}, 0}\left(s_{0}, a\right)=d_{0}^{\tilde{\pi}}\left(s_{0}, a\right), \quad \forall a \in \mathcal{A} . $$ Now we assume that at step $h$, for all $s \in \mathcal{K}^{n}$, it holds that $$ \tilde{d}_{\mathcal{M}^{n}, h}(s, a) \leq d_{h}^{\tilde{\pi}}(s, a), \quad \forall a \in \mathcal{A} . $$ Then, for step $h+1$, by definition we have that for $s \in \mathcal{K}^{n}$ $$ \begin{aligned} \tilde{d}_{\mathcal{M}^{n}, h+1}(s) &=\sum_{s^{\prime}, a^{\prime}} \tilde{d}_{\mathcal{M}^{n}, h}\left(s^{\prime}, a^{\prime}\right) P_{\mathcal{M}^{n}}\left(s \mid s^{\prime}, a^{\prime}\right) \\ &=\sum_{s^{\prime}, a^{\prime}} 1\left\{s^{\prime} \in \mathcal{K}^{n}\right\} \tilde{d}_{\mathcal{M}^{n}, h}\left(s^{\prime}, a^{\prime}\right) P_{\mathcal{M}^{n}}\left(s \mid s^{\prime}, a^{\prime}\right) \\ &=\sum_{s^{\prime}, a^{\prime}} 1\left\{s^{\prime} \in \mathcal{K}^{n}\right\} \tilde{d}_{\mathcal{M}^{n}, h}\left(s^{\prime}, a^{\prime}\right) P\left(s \mid s^{\prime}, a^{\prime}\right) \end{aligned} $$ where the second line holds because if $s^{\prime} \notin \mathcal{K}^{n}$, then $\tilde{\pi}^{n}$ deterministically picks $a^{\dagger}$ and $P_{\mathcal{M}^{n}}\left(s \mid s^{\prime}, a^{\dagger}\right)=0 .$ On the other hand, for $d_{h+1}^{\tilde{\pi}}(s, a)$, it holds that for $s \in \mathcal{K}^{n}$, $$ \begin{aligned} d_{h+1}^{\tilde{\pi}}(s) &=\sum_{s^{\prime}, a^{\prime}} d_{h}^{\tilde{\pi}}\left(s^{\prime}, a^{\prime}\right) P\left(s \mid s^{\prime}, a^{\prime}\right) \\ &=\sum_{s^{\prime}, a^{\prime}} 1\left\{s^{\prime} \in \mathcal{K}^{n}\right\} d_{h}^{\tilde{\pi}}\left(s^{\prime}, a^{\prime}\right) P\left(s \mid s^{\prime},
a^{\prime}\right)+\sum_{s^{\prime}, a^{\prime}} 1\left\{s^{\prime} \notin \mathcal{K}^{n}\right\} d_{h}^{\tilde{\pi}}\left(s^{\prime}, a^{\prime}\right) P\left(s \mid s^{\prime}, a^{\prime}\right) \\ & \geq \sum_{s^{\prime}, a^{\prime}} 1\left\{s^{\prime} \in \mathcal{K}^{n}\right\} d_{h}^{\tilde{\pi}}\left(s^{\prime}, a^{\prime}\right) P\left(s \mid s^{\prime}, a^{\prime}\right) \\ & \geq \sum_{s^{\prime}, a^{\prime}} 1\left\{s^{\prime} \in \mathcal{K}^{n}\right\} \tilde{d}_{\mathcal{M}^{n}, h}\left(s^{\prime}, a^{\prime}\right) P\left(s \mid s^{\prime}, a^{\prime}\right) \\ &=\tilde{d}_{\mathcal{M}^{n}, h+1}(s) . \end{aligned} $$ Using the fact that $\tilde{\pi}^{n}(\cdot \mid s)=\tilde{\pi}(\cdot \mid s)$ for $s \in \mathcal{K}^{n}$, we conclude that the inductive hypothesis holds at $h+1$ as well. Using the definition of the average state-action distribution, we conclude the proof. \end{proof} \begin{lemma} \label{B.8} (Partial optimism) \citep{zanette2021cautiously}. Fix a policy $\widetilde{\pi}$ that never takes $a^\dagger$. In any episode $n$ it holds that $$ V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(\widetilde{s})-V^{\widetilde{\pi}}(\widetilde{s})\geq\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{\widetilde{s}}^{\widetilde{\pi}}}2b_\omega^n(s,\widetilde{\pi}) $$ \end{lemma} \begin{proof} Notice that if $s\notin\mathcal{K}^n$, then $V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s)=\frac{3}{1-\gamma}$ as the policy self-loops in $s$ by taking $a^\dagger$ there. Otherwise, \begin{equation} \begin{aligned} V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s)&=\mathbb{E}_{a\sim\widetilde{\pi}^n(\cdot|s)}Q_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s,a)\\ &= \frac{1}{1-\gamma}\mathbb{E}_{a\sim\tilde{\pi}^n(\cdot|s)}\mathbb{E}_{(s',a')\sim\widetilde{d}_{\mathcal{M}^n}|(s,a)}r^n(s',a')\\ &\leq \frac{3}{1-\gamma} \end{aligned} \end{equation} Therefore, $V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s)\leq\frac{3}{1-\gamma}$. 
Using the performance difference lemma of \citep{kakade2001natural}, we get: \begin{equation} \begin{aligned} &(1-\gamma)(V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(\widetilde{s})-V_{\mathcal{M}^n}^{\widetilde{\pi}}(\widetilde{s}))=\mathbb{E}_{(s,a)\sim d_{\widetilde{s}}^{\widetilde{\pi}}}[-A_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s,a)]\\&=\mathbb{E}_{(s,a)\sim d_{\widetilde{s}}^{\widetilde{\pi}}}[V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s)-Q_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s,a)]\\&=\mathbb{E}_{s\sim d_{\widetilde{s}}^{\widetilde{\pi}}}[Q_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s,\widetilde{\pi}^n)-Q_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s,\widetilde{\pi})]\\&=\mathbb{E}_{s\sim d_{\widetilde{s}}^{\widetilde{\pi}}}[\left(Q_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s,\widetilde{\pi}^n)-Q_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s,\widetilde{\pi})\right)\textbf{1}\{s\notin\mathcal{K}^n\}]\\&=\mathbb{E}_{s\sim d_{\widetilde{s}}^{\widetilde{\pi}}}[\left(\frac{3}{1-\gamma}-r(s,\widetilde{\pi})-2b_{\omega}^n(s,\widetilde{\pi})-b_{\text{1}}^n(s,\widetilde{\pi})-\gamma\mathbb{E}_{s'\sim P(s,\widetilde{\pi})}V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s')\right)\textbf{1}\{s\notin\mathcal{K}^n\}] \end{aligned} \end{equation} Since $r(s,\widetilde{\pi})+2b_{\omega}^n(s,\widetilde{\pi})+\gamma\mathbb{E}_{s'\sim P(s,\widetilde{\pi})}V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(s')\leq 1+2+\frac{3\gamma}{1-\gamma} \leq\frac{3}{1-\gamma}$, the term inside the last expectation is bounded below by $-b_{\text{1}}^n(s,\widetilde{\pi})$.\newline Thus, \begin{equation} \begin{aligned} V_{\mathcal{M}^n}^{\widetilde{\pi}^n}(\widetilde{s})&\geq V_{\mathcal{M}^n}^{\widetilde{\pi}}(\widetilde{s})-\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{\widetilde{s}}^{\widetilde{\pi}}}b_1^n(s,\widetilde{\pi})\\&=V^{\widetilde{\pi}}(\widetilde{s})+\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{\widetilde{s}}^{\widetilde{\pi}}}b^n(s,\widetilde{\pi})-\frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{\widetilde{s}}^{\widetilde{\pi}}}b_1^n(s,\widetilde{\pi})\\&=V^{\widetilde{\pi}}(\widetilde{s})+\frac{1}{1-\gamma}\mathbb{E}_{s\sim
d_{\widetilde{s}}^{\widetilde{\pi}}}2b_{\omega}^n(s,\widetilde{\pi}) \end{aligned} \end{equation} \end{proof} \begin{lemma} \label{B.9} (Negative Advantage) \citep{zanette2021cautiously}. $$ A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi}^n)\textbf{1}\{s\notin\mathcal{K}^n\}\leq 0 $$ \end{lemma} \begin{proof} Assume $s\notin \mathcal{K}^n$; then $Q^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi}^n)=3+\gamma V_{\mathcal{M}^n}^{{\pi}^n}(s)$, since $\widetilde{\pi}^n$ takes the self-looping action $a^\dagger$ there. Note that if $s\notin \mathcal{K}^n$, then $\pi^n$ takes an action $a\not= a^{\dagger}$ whose bonus satisfies $b_1^n(s,a)=\frac{3}{1-\gamma}$; therefore, $V_{\mathcal{M}^n}^{{\pi}^n}(s)\geq\frac{3}{1-\gamma}$.\newline Combining the two expressions, we obtain that, in any state $s\notin \mathcal{K}^n$, $$ A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi}^n)=Q^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi}^n)- V_{\mathcal{M}^n}^{{\pi}^n}(s)=3+\gamma V_{\mathcal{M}^n}^{{\pi}^n}(s)- V_{\mathcal{M}^n}^{{\pi}^n}(s)=3-(1-\gamma)V_{\mathcal{M}^n}^{{\pi}^n}(s)\leq 0 $$ \end{proof} \subsection{Proof Sketch} In order to bound the value gap between the output policy $\pi^{\text{LPO}}=$Unif($\pi^0,\pi^1,\cdots,\pi^{N-1}$) and the comparator $\widetilde{\pi}$, we need to analyze the gap between $V^{\widetilde{\pi}}$ and $V^{\pi^n}$ for each $n \in [N]$.
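Averaging over $n$ suffices because the output policy is a uniform mixture (the mixing index is drawn once at the start), so its value is the mean of the component values:
$$
V^{\pi^{\text{LPO}}}(s_0)=\frac{1}{N}\sum_{n=0}^{N-1} V^{\pi^n}(s_0)
\quad\Longrightarrow\quad
\left(V^{\widetilde{\pi}}-V^{\pi^{\text{LPO}}}\right)(s_0)=\frac{1}{N}\sum_{n=0}^{N-1}\left(V^{\widetilde{\pi}}-V^{\pi^n}\right)(s_0).
$$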
With the three lemmas above, which rely on the structure of the carefully designed MDPs, we obtain the following regret decomposition (see details in Lemma \ref{B.10} (Regret decomposition)): \begin{equation} \begin{aligned} \left(V^{\widetilde{\pi}}-V^{{\pi}^n}\right)(s_0)& \leq \frac{1}{1-\gamma}\left[\underbrace{\mathop{\text{sup}}\limits_{s\in\mathcal{K}^n}\widehat{A}^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})\textbf{1}\{s\in\mathcal{K}^n\}}_{\text{term 1}}+\underbrace{\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})-A_{\mathcal{M}^n}^*(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\}}_{\text{term 2}}\right.\\&\ \ \ \ \ \ \ \ \ \ \ \ +\left.\underbrace{\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}[A^*_{\mathcal{M}^n}(s,\widetilde{\pi})-\widehat{A}_{\mathcal{M}^n}^{{\pi}^n}(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\}}_{\text{term 3}}-\underbrace{\mathbb{E}_{s\sim d^{\widetilde{\pi}}|s_0}2b_{\omega}^n(s,\widetilde{\pi})}_{\text{term 4}}+\underbrace{\mathbb{E}_{s\sim d^n|s_0}b^n(s,\pi^n)}_{\text{term 5}}\right] \end{aligned} \end{equation} Now we discuss each term in detail. \begin{itemize} \item term 1 represents the \emph{Solver Error}, which measures the performance of policy $\pi^n$ in terms of the empirical advantage function on known states. This term can be bounded by Lemma \ref{B.7} (NPG Analysis). \item term 2 represents the \emph{Approximation Error}, which arises when our function class $\mathcal{F}$ is not expressive enough for critic fitting; it can be controlled with Assumption \ref{B.2} (Bounded Transfer Error) and Lemma \ref{B.6}. \item term 3 represents the \emph{Statistical Error}, which is the average error between the empirical and optimal advantage functions on known states. This term can be bounded by term 4 (the expectation of the width function) according to Lemma \ref{B.5} and Lemma \ref{E.6}.
\item term 5 is the expectation of the bonus function under policy $\pi^n$; the bonuses are bounded in Lemma \ref{D.2}. \end{itemize} \subsection{Analysis of LPO} In this part, we follow the proof steps above to establish our main theorem. First, we introduce some notions and assumptions to handle model misspecification. These notions have been discussed in \citep{agarwal2020pc, feng2021provably}. \begin{definition} \label{B.1} (Transfer error). Given a target function $g:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$, we define the critic loss function $L(f;d;g)$ with $d\in\Delta(\mathcal{S}\times\mathcal{A})$ as: $$ L(f;d;g):=\mathbb{E}_{(s,a)\sim d }\left[(f(s,a)-g(s,a))^2\right] $$ \end{definition} We let $Q_{b^n}^{{\pi}^n}$ and $Q_{b^n}^{{\pi}_k}$ be the $Q$-functions in the bonus-added MDP for a given outer iteration $n$ and inner iteration $k$. Then the transfer error for a fixed comparator $\widetilde{\pi}$ is defined as $$ \epsilon_k^n:=\inf _{f \in \mathcal{F}_{k}^{n}} L\left(f ; \tilde{d}, Q_{b^{n}}^{\pi_k}-b^{n}\right), $$ where $\mathcal{F}_{k}^{n}:=\operatorname{argmin}_{f \in \mathcal{F}} L\left(f ; \rho_{c o v}^{n}, Q_{b^{n}}^{\pi_k}-b^{n}\right)$ and $\tilde{d}(s, a):=d_{s_{0}}^{\tilde{\pi}}(s) \circ \operatorname{Unif}(\mathcal{A})$. \begin{assumption} \label{B.2} (Bounded Transfer Error). For the fixed comparator policy $\tilde{\pi}$, for every epoch $n \in[N]$ and every iteration $k$ inside epoch $n$, we assume that there exists a constant $\epsilon_{\text {bias }}$ such that $$ \epsilon_k^n \leq \epsilon_{\text {bias }}. $$ \end{assumption} Notice that $\epsilon_{\text {bias }}$ measures both the approximation error and the distribution shift error.
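As a concrete reading of Definition \ref{B.1}, the critic loss is simply a squared-error regression objective; a minimal Monte Carlo sketch (function and argument names are ours):

```python
def critic_loss(f, g, samples):
    """Monte Carlo estimate of L(f; d; g) = E_{(s,a)~d}[(f(s,a) - g(s,a))^2],
    where `samples` is a list of (s, a) pairs drawn from d."""
    return sum((f(s, a) - g(s, a)) ** 2 for s, a in samples) / len(samples)
```

In the transfer-error definition, $g$ plays the role of $Q_{b^n}^{\pi_k}-b^n$ and the sampling distribution $d$ is either the cover distribution $\rho_{cov}^n$ or the comparator distribution $\tilde d$.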
By the definition of the transfer error, we can select a function $\tilde{f}_{k}^{n} \in \mathcal{F}_{k}^{n}$ such that $$ L\left(\tilde{f}_{k}^{n}; \tilde{d}; Q_{b^{n}}^{\pi_k}-b^{n}\right) \leq 2 \epsilon_{\text {bias }} $$ \begin{assumption} \label{B.3} For the same loss $L$ as in Definition \ref{B.1} and the fit $\tilde{f}_{k}^{n}$ from Assumption \ref{B.2}, we assume that there exist some $C \geq 1$ and $\epsilon_{0} \geq 0$ such that for any $f \in \mathcal{F}$, $$ \mathbb{E}_{(s, a) \sim \rho_{c o v}^{n}}\left[\left(f(s, a)-\tilde{f}_{k}^{n}(s, a)\right)^{2}\right] \leq C \cdot\left(L\left(f ; \rho_{c o v}^{n}, Q_{b^{n}}^{\pi_k}-b^{n}\right)-L\left(\tilde{f}_{k}^{n} ; \rho_{c o v}^{n}, Q_{b^{n}}^{\pi_k}-b^{n}\right)\right)+\epsilon_{0} $$ for $n \in[N]$ and $0 \leq k \leq K-1$. \end{assumption} \begin{remark} \label{remark app 1} Under Assumption \ref{ass 4.4}, for every $n \in [N]$ and $k \in [K]$, $Q_{b^{n}}^{\pi_k}(s,a)-b^{n}(s,a)=\mathbb{E}^{\pi_{k}^{n}}\left[r(s, a)+\gamma Q_{b^{n}}^{\pi_k}\left(s^{\prime}, a^{\prime}\right)|s,a\right] \in \mathcal{F}$. Thus, $\epsilon_{\text {bias }}$ can take the value 0 and $\tilde{f}_{k}^{n}=Q_{b^{n}}^{\pi_k}-b^{n}$. Further, in Assumption \ref{B.3}, we have $$ \mathbb{E}_{(s, a) \sim \rho_{c o v}^{n}}\left[\left(f(s, a)-\tilde{f}_{k}^{n}(s, a)\right)^{2}\right]=L\left(f ; \rho_{c o v}^{n}, Q_{b^{n}}^{\pi_k}-b^{n}\right) . $$ Hence, $C$ can take the value 1 and $\epsilon_{0}=0$. If $Q_{b^{n}}^{\pi_k}-b^{n}$ is not realizable in $\mathcal{F}$, then $\epsilon_{\text {bias }}$ and $\epsilon_{0}$ could be strictly positive. Hence, the above two assumptions are generalized versions of the closedness condition that account for model misspecification. Next, we define the optimal regression fits with respect to the loss function, together with their corresponding advantage functions.
\end{remark} \begin{definition} \label{B.4} \begin{equation} \begin{aligned} &f^{n,*}\in\mathop{\text{argmin}}\limits_{f\in\mathcal{F}}L(f;\rho^n,Q_{b^n}^{{\pi}^n}-b^n),\ f_k^*\in\mathop{\text{argmin}}\limits_{f\in\mathcal{F}}L(f;\rho^n,Q_{b^n}^{{\pi}_k}-b^n)\\&A_{b^n}^*(s,a)=f^{n,*}(s,a)+b^n(s,a)-\mathbb{E}_{a'\sim\pi^n(\cdot|s)}\left[f^{n,*}(s,a')+b^n(s,a')\right]\\&A_{b^n}^{k,*}(s,a)=f_k^*(s,a)+b^n(s,a)-\mathbb{E}_{a'\sim\pi_k(\cdot|s)}\left[f_k^*(s,a')+b^n(s,a')\right]\\ \end{aligned} \end{equation} In the later proof, we select $f^{n,*}$ and $f_k^*$ not only to be optimal solutions of the above loss functions, but also to satisfy the inequality in Assumption \ref{B.3}, just like $\tilde{f}_{k}^{n}$. \end{definition} \begin{lemma} \label{B.6} (Advantage Transfer Error decomposition). We have that $$ \mathbb{E}_{(s, a) \sim \tilde{d}_{\mathcal{M}^{n}}}\left(A_{b^{n}}^{k}-A_{b^{n}}^{k, *}\right) \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}<4 \sqrt{|\mathcal{A}| \epsilon_{\text {bias }}} .
$$ \end{lemma} \begin{proof} We have $$ \begin{aligned} &\mathbb{E}_{(s, a) \sim \tilde{d}_{\mathcal{M}^{n}}}\left(A_{b^{n}}^{k}-A_{b^{n}}^{k, *}\right) \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\} \\ &=\mathbb{E}_{(s, a) \sim \tilde{d}_{\mathcal{M}^{n}}}\left(Q_{b^{n}}^{k}-f_{k}^{*}-b^{n}\right) \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}-\mathbb{E}_{s \sim \tilde{d}_{\mathcal{M}^{n}}, a \sim \pi_{k}(\cdot \mid s)}\left(Q_{b^{n}}^{k}-f_{k}^{*}-b^{n}\right) \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\} \\ &\leq \sqrt{\mathbb{E}_{(s, a) \sim \tilde{d}_{\mathcal{M}^{n}}}\left(Q_{b^{n}}^{k}-f_{k}^{*}-b^{n}\right)^{2} \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}}+\sqrt{\mathbb{E}_{s \sim \tilde{d}_{\mathcal{M}^{n}}, a \sim \pi_{k}(\cdot \mid s)}\left(Q_{b^{n}}^{k}-f_{k}^{*}-b^{n}\right)^{2} \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}} \\ &\leq \sqrt{\mathbb{E}_{(s, a) \sim d^{\tilde{\pi}}}\left(Q_{b^{n}}^{k}-f_{k}^{*}-b^{n}\right)^{2} \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}}+\sqrt{\mathbb{E}_{s \sim d^{\tilde{\pi}}, a \sim \pi_{k}(\cdot \mid s)}\left(Q_{b^{n}}^{k}-f_{k}^{*}-b^{n}\right)^{2} \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}} \\ &=\sqrt{\mathbb{E}_{(s, a) \sim \tilde{d}}|\mathcal{A}| \tilde{\pi}(a \mid s) \cdot\left(Q_{b^{n}}^{k}-f_{k}^{*}-b^{n}\right)^{2} \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}}+\sqrt{\mathbb{E}_{(s, a) \sim \tilde{d}}|\mathcal{A}| \pi_{k}(a \mid s) \cdot\left(Q_{b^{n}}^{k}-f_{k}^{*}-b^{n}\right)^{2} \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}} \\ &<4 \sqrt{|\mathcal{A}| \epsilon_{\mathrm{bias}}}, \end{aligned} $$ where the first inequality is by the Cauchy-Schwarz inequality, the second inequality is by distribution dominance (Lemma \ref{B.5}), and the last two lines follow from the bounded transfer error assumption and the definition of $f_{k}^{*}$. \end{proof} Next, we provide the NPG analysis. \begin{lemma} \label{B.7} (NPG Analysis) \citep{agarwal2020pc}.
Take stepsize $\eta=\sqrt{\frac{\log (|\mathcal{A}|)}{16 W^{2} K}}$ in the algorithm. Then, for any $n \in[N]$, we have $$ \sum_{k=0}^{K-1} \mathbb{E}_{(s, a) \sim \tilde{d}_{\mathcal{M}^{n}}}\left[\widehat{A}_{b^{n}}^{k}(s, a) \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}\right] \leq 8 W \sqrt{\log (|\mathcal{A}|) K} $$ \end{lemma} \begin{proof} Here we omit the index $n$ for simplicity. From the NPG update rule, $$ \begin{aligned} \pi_{k+1}(\cdot \mid s) & \propto \pi_{k}(\cdot \mid s) e^{\eta \widehat{Q}_{k}(s, \cdot)} \\ & \propto \pi_{k}(\cdot \mid s) e^{\eta \widehat{Q}_{k}(s, \cdot)} e^{-\eta \widehat{V}_{k}(s)} \\ &=\pi_{k}(\cdot \mid s) e^{\eta \widehat{A}_{k}(s, \cdot)} \end{aligned} $$ If we define the normalizer $$ z_{k}(s)=\sum_{a^{\prime}} \pi_{k}\left(a^{\prime} \mid s\right) e^{\eta \widehat{A}_{k}\left(s, a^{\prime}\right)} $$ then the update can be written as $$ \pi_{k+1}(\cdot \mid s)=\frac{\pi_{k}(\cdot \mid s) e^{\eta \widehat{A}_{k}(s, \cdot)}}{z_{k}(s)} $$ Then for any $s \in \mathcal{K}^{n}$, $$ \begin{aligned} &\mathbf{K L}\left(\tilde{\pi}(\cdot \mid s), \pi_{k+1}(\cdot \mid s)\right)-\mathbf{K L}\left(\widetilde{\pi}(\cdot \mid s), \pi_{k}(\cdot \mid s)\right) \\ &=\sum_{a} \tilde{\pi}(a \mid s) \log \frac{\tilde{\pi}(a \mid s)}{\pi_{k+1}(a \mid s)}-\sum_{a} \tilde{\pi}(a \mid s) \log \frac{\tilde{\pi}(a \mid s)}{\pi_{k}(a \mid s)} \\ &=\sum_{a} \tilde{\pi}(a \mid s) \log \frac{\pi_{k}(a \mid s)}{\pi_{k+1}(a \mid s)} \\ &=\sum_{a} \tilde{\pi}(a \mid s) \log \left(z_{k} e^{-\eta \widehat{A}_{k}(s, a)}\right) \\ &=-\eta \sum_{a} \widetilde{\pi}(a \mid s) \widehat{A}_{k}(s, a)+\log z_{k}(s) \end{aligned} $$ Since $\left|\widehat{A}_{k}(s, a)\right| \leq 4 W$, and since $\eta<1 /(4 W)$ when $K>\log (|\mathcal{A}|)$, we have $\eta \widehat{A}_{k}(s, a) \leq 1$. By the inequalities $\exp (x) \leq 1+x+x^{2}$ for $x \leq 1$ and $\log (1+x) \leq x$ for $x>-1$, $$ \log \left(z_{k}(s)\right) \leq \eta \sum_{a} \pi_{k}(a \mid s) \widehat{A}_{k}(s, a)+16
\eta^{2} W^{2}=16 \eta^{2} W^{2}, $$ where the equality uses $\sum_{a} \pi_{k}(a \mid s) \widehat{A}_{k}(s, a)=0$. Thus for $s \in \mathcal{K}^{n}$ we have $$ \mathbf{K L}\left(\tilde{\pi}(\cdot \mid s), \pi_{k+1}(\cdot \mid s)\right)-\mathbf{K L}\left(\tilde{\pi}(\cdot \mid s), \pi_{k}(\cdot \mid s)\right) \leq-\eta \mathbb{E}_{a \sim \tilde{\pi}^{n}(\cdot \mid s)}\left[\widehat{A}_{k}(s, a)\right]+16 \eta^{2} W^{2} $$ Summing over $k$ and telescoping, we get $$ \begin{aligned} & \sum_{k=0}^{K-1} \mathbb{E}_{(s, a) \sim \tilde{d}_{\mathcal{M}^{n}}}\left[\widehat{A}_{k}(s, a) \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}\right] \\ \leq & \frac{1}{\eta} \mathbb{E}_{s \sim \tilde{d}_{\mathcal{M}^{n}}}\left[\left(\mathbf{K L}\left(\tilde{\pi}(\cdot \mid s), \pi_{0}(\cdot \mid s)\right)-\mathbf{K L}\left(\tilde{\pi}(\cdot \mid s), \pi_{K}(\cdot \mid s)\right)\right) \mathbf{1}\left\{s \in \mathcal{K}^{n}\right\}\right]+16 \eta K W^{2} \\ \leq & \log (|\mathcal{A}|) / \eta+16 \eta K W^{2} \\ \leq & 8 W \sqrt{\log (|\mathcal{A}|) K} . \end{aligned} $$ \end{proof} Now we are ready to analyze the regret decomposition. \begin{lemma} \label{B.10} (Regret decomposition). With probability at least $1-\delta$, it holds that \begin{equation} \begin{aligned} \frac{1}{N}\sum_{n=1}^N\left(V^{\widetilde{\pi}}-V^{{\pi}^n}\right)(s_0)\leq \frac{\mathcal{R}(K)}{(1-\gamma)K}+ \frac{2\sqrt{2|\mathcal{A}|\epsilon_{\text{bias}}}}{1-\gamma}+\frac{1}{\sqrt{N}}\widetilde{O}\left(\frac{\sqrt{{d}^2\epsilon}}{(1-\gamma)^2\beta}\right) \end{aligned} \end{equation} \end{lemma} \begin{proof} Fix a policy $\widetilde{\pi}$ on $\mathcal{M}$.
Consider the following decomposition for an outer episode $n$: \begin{equation} \begin{aligned} \left(V^{\widetilde{\pi}}-V^{{\pi}^n}\right)(s_0)&=\underbrace{V^{\widetilde{\pi}}(s_0)+\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{\widetilde{\pi}}|s_0}2b_{\omega}^n(s,\widetilde{\pi})}_{\leq V_{\mathcal{M}^n}^{\tilde{\pi}^n}(s_0) \ \text{by Lemma \ref{B.8}}}+\underbrace{\left(-V^{{\pi}^n}(s_0)-\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^n|s_0}b^n(s,\pi^n)\right)}_{=-V_{b^n}^{\pi^n}(s_0)}\\&+\frac{1}{1-\gamma}\underbrace{\left[-\mathbb{E}_{s\sim d^{\widetilde{\pi}}|s_0}2b_{\omega}^n(s,\widetilde{\pi})+\mathbb{E}_{s\sim d^n|s_0}b^n(s,\pi^n)\right]}_{\overset{\text{def}}{=}B^n} \end{aligned} \end{equation} We put the term $B^n$ aside for a moment and use the performance difference lemma to obtain \begin{equation} \begin{aligned} V_{\mathcal{M}^n}^{\tilde{\pi}^n}(s_0)-V_{b^n}^{\pi^n}(s_0)&=V_{\mathcal{M}^n}^{\tilde{\pi}^n}(s_0)-V_{\mathcal{M}^n}^{\pi^n}(s_0)\\&=\frac{1}{1-\gamma}\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\left[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi}^n)\right]\\&=\frac{1}{1-\gamma}\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\left[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi}^n)\textbf{1}\{s\in\mathcal{K}^n\}+\underbrace{A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi}^n)\textbf{1}\{s\notin\mathcal{K}^n\}}_{\leq0 \ \text{by Lemma \ref{B.9}}}\right]\\&\leq \frac{1}{1-\gamma}\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\left[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})\textbf{1}\{s\in\mathcal{K}^n\}\right] \end{aligned} \end{equation} where the last step holds because on states $s\in\mathcal{K}^n$ we have $\widetilde{\pi}^n(\cdot|s)=\widetilde{\pi}(\cdot|s)$.
\newline Define \begin{equation} \begin{aligned} \widehat{A}_{b^n}^k(s,a)&=\widehat{Q}_{b^n}^k(s,a)-\widehat{V}_{b^n}^k(s)\\&=f_k(s,a)+b_{\omega}^n(s,a)-\mathbb{E}_{a'\sim\pi_k(\cdot|s)}\left[f_k(s,a')+b_{\omega}^n(s,a')\right] \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \widehat{A}_{b^n}^{\pi^n}(s,a)=\frac{1}{K}\sum_{k=0}^{K-1}\widehat{A}_{b^n}^k(s,a) \end{aligned} \end{equation} Then we get \begin{equation} \begin{aligned} &\frac{1}{1-\gamma}\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\left[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})\textbf{1}\{s\in\mathcal{K}^n\}\right]\\&=\frac{1}{1-\gamma}\left[\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\widehat{A}^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})\textbf{1}\{s\in\mathcal{K}^n\}+\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}\left[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})-\widehat{A}^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})\right]\textbf{1}\{s\in\mathcal{K}^n\}\right]\\ &\leq\frac{1}{1-\gamma}\left[\underbrace{\mathop{\text{sup}}\limits_{s\in\mathcal{K}^n}\widehat{A}^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})\textbf{1}\{s\in\mathcal{K}^n\}}_{\text{term 1}}+\underbrace{\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})-A_{\mathcal{M}^n}^*(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\}}_{\text{term 2}}\right.\\&\ \ \ \ \ \ \ \ \ \ \ \ +\left.\underbrace{\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}[A^*_{\mathcal{M}^n}(s,\widetilde{\pi})-\widehat{A}_{\mathcal{M}^n}^{{\pi}^n}(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\}}_{\text{term 3}}\right] \end{aligned} \end{equation} The first term can be bounded by Lemma \ref{B.7} (the NPG lemma): \begin{equation} \begin{aligned} \mathop{\text{sup}}\limits_{s\in\mathcal{K}^n}\widehat{A}^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})\textbf{1}\{s\in\mathcal{K}^n\}= \mathop{\text{sup}}\limits_{s\in\mathcal{K}^n}\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}_{a\sim\tilde{\pi}(\cdot|s)}\widehat{A}_{b^n}^k(s,a)\textbf{1}\{s\in\mathcal{K}^n\}\leq\frac{\mathcal{R}(K)}{K} \end{aligned} \end{equation} The second term can be bounded by Lemmas \ref{B.5} and \ref{B.6}:
\begin{equation} \begin{aligned} &\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})-A_{\mathcal{M}^n}^*(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\}\\ &\leq \mathbb{E}_{s\sim d^{\tilde{\pi}}}[A^{\pi^n}_{\mathcal{M}^n}(s,\widetilde{\pi})-A_{\mathcal{M}^n}^*(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\}\\ &\leq 2\sqrt{2|\mathcal{A}|\epsilon_{\text{bias}}} \end{aligned} \end{equation} The third term can be bounded by Lemma \ref{E.6}, which ensures that with probability at least $1-\frac{\delta}{2}$ it holds that \begin{equation} \begin{aligned} \forall n\in [N],\ \forall k\in\{0,\cdots,K-1\},\ \forall (s,a)\in\mathcal{K}^n: \ 0\leq Q_{b^n}^{k,*}(s,a)-\widehat{Q}_{b^n}^{k}(s,a)\leq2b_{\omega}^n(s,a) \end{aligned} \end{equation} Then, for all $n\in[N]$ and $(s,a)\in\mathcal{K}^n$, \begin{equation} \begin{aligned} A^*_{\mathcal{M}^n}(s,a)-\widehat{A}_{\mathcal{M}^n}^{{\pi}^n}(s,a)&=\frac{1}{K}\sum_{k=0}^{K-1}\left[\left( Q_{b^n}^{k,*}(s,a)-\widehat{Q}_{b^n}^{k}(s,a)\right)-\underbrace{\left(Q_{b^n}^{k,*}(s,\pi_k^n)-\widehat{Q}_{b^n}^{k}(s,\pi_k^n)\right)}_{\geq0}\right]\\ &\leq Q^*_{\mathcal{M}^n}(s,a)-\widehat{Q}_{\mathcal{M}^n}^{{\pi}^n}(s,a) \end{aligned} \end{equation} Applying Lemma \ref{B.5}, we have \begin{equation} \begin{aligned} \mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}[A^*_{\mathcal{M}^n}(s,\widetilde{\pi})-\widehat{A}_{\mathcal{M}^n}^{{\pi}^n}(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\}&\leq\mathbb{E}_{s\sim \widetilde{d}_{\mathcal{M}^n}}[Q^*_{\mathcal{M}^n}(s,\widetilde{\pi})-\widehat{Q}_{\mathcal{M}^n}^{{\pi}^n}(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\}\\ &\leq \mathbb{E}_{s\sim d^{\widetilde{\pi}}}[Q^*_{\mathcal{M}^n}(s,\widetilde{\pi})-\widehat{Q}_{\mathcal{M}^n}^{{\pi}^n}(s,\widetilde{\pi})]\textbf{1}\{s\in\mathcal{K}^n\} \end{aligned} \end{equation} As a result, \begin{equation} \begin{aligned}
\left(V^{\widetilde{\pi}}-V^{\pi^n}\right)(s_0)&\leq\frac{1}{1-\gamma}\left[\frac{\mathcal{R}(K)}{K}+ 2\sqrt{2|\mathcal{A}|\epsilon_{\text{bias}}}+\mathbb{E}_{s\sim d^{\widetilde{\pi}}}2b_{\omega}^n(s,\widetilde{\pi})\textbf{1}\{s\in\mathcal{K}^n\}+B^n\right]\\&=\frac{1}{1-\gamma}\left[\frac{\mathcal{R}(K)}{K}+ 2\sqrt{2|\mathcal{A}|\epsilon_{\text{bias}}}+\mathbb{E}_{s\sim d^n}b^n(s,\pi^n)\right] \end{aligned} \end{equation} Finally, using the concentration of bonuses (Lemma \ref{D.2}), we get \begin{equation} \begin{aligned} \frac{1}{N}\sum_{n=1}^N\left(V^{\widetilde{\pi}}-V^{\pi^n}\right)(s_0)&\leq\frac{\mathcal{R}(K)}{(1-\gamma)K}+ \frac{2\sqrt{2|\mathcal{A}|\epsilon_{\text{bias}}}}{1-\gamma}+\frac{1}{N(1-\gamma)}\sum_{n=1}^N\mathbb{E}_{s\sim d^n}b^n(s,\pi^n)\\ &\leq \frac{\mathcal{R}(K)}{(1-\gamma)K}+ \frac{2\sqrt{2|\mathcal{A}|\epsilon_{\text{bias}}}}{1-\gamma}+\frac{1}{\sqrt{N}}\widetilde{O}\left(\frac{\sqrt{{d}^2\epsilon}}{(1-\gamma)^2\beta}\right) \end{aligned} \end{equation} \end{proof} Combining all previous lemmas, we have the following theorem about the sample complexity of LPO. \begin{theorem} \label{thm:bigtheorem} \label{M-thm} (Main Results: Sample Complexity of LPO). Under Assumptions \ref{ass 4.2}, \ref{ass 4.3}, and \ref{ass 4.4}, for any comparator $\widetilde{\pi}$, a fixed failure probability $\delta$, eluder dimension $d = \text{dim}(\mathcal{F},1/N)$, a suboptimality gap $\varepsilon$, and appropriate input hyperparameters: $N \geq \widetilde{O}\left(\frac{{d}^2}{(1-\gamma)^4\varepsilon^2}\right), K = \widetilde{O}\left(\frac{\ln |\mathcal{A}| W^{2}}{(1-\gamma)^{2} \varepsilon^{2}}\right), M \geq \widetilde{O}\left(\frac{{d}^2}{(1-\gamma)^4\varepsilon^2}\right), \eta = \widetilde{O}\left(\frac{\sqrt{\ln |\mathcal{A}|}}{\sqrt{K} W}\right), \kappa = \widetilde{O}\left(\frac{1-\gamma}{\eta W}\right)$, our algorithm returns a policy $\pi^{\text{LPO}}$ satisfying $$ \left(V^{\widetilde{\pi}}-V^{\pi^{\text{LPO}}}\right)\left(s_{0}\right) \leq \varepsilon.
$$ with probability at least $1-\delta$ after taking at most $\widetilde{O}\left(\frac{{d}^3}{(1-\gamma)^{8} \varepsilon^{3}}\right)$ samples. \end{theorem} \begin{proof} First, let's consider Lemma \ref{B.10} (Regret decomposition). We need to ensure $$ \frac{\mathcal{R}(K)}{(1-\gamma) K}=\frac{8 W}{(1-\gamma)} \sqrt{\frac{\ln |\mathcal{A}|}{K}} \leq \frac{\varepsilon}{2} \quad \longrightarrow \quad K=\widetilde{O}\left(\frac{\ln |\mathcal{A}| W^{2}}{(1-\gamma)^{2} \varepsilon^{2}}\right) $$ This gives the inner iteration complexity. Next, $\beta$ can be any constant between 0 and 1, and recall that $\epsilon$ is the parameter that controls the width function \eqref{E3}. We set $\epsilon$ in the following form (see Lemma \ref{E.4} for justification) $$ \epsilon=100\left(\frac{3}{2}C_1N\cdot\epsilon_{\text{stat}}+20NW\epsilon_1+\frac{1}{2}C_2\cdot\ln\left(\frac{N\mathcal{N}(\Delta\mathcal{F},2\epsilon_1)}{\delta'}\right)\right) $$ and $$ \epsilon_{\text{stat}}=\frac{500 C \cdot W^{4} \cdot \log \left(\frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}\right)}{M}+13 W^{2} \cdot \epsilon_{2} $$ where $\epsilon_1$ and $\epsilon_2$ denote the function-cover radii. Since the complexity of our algorithm depends only logarithmically on the covering numbers, we can set the cover radii to be any polynomial in $\varepsilon$. In fact, with $\epsilon_1=O(\varepsilon^3)$, $\epsilon_2=O(\varepsilon^3)$, and $\epsilon=O(\log N)$, $$ \begin{aligned} \frac{1}{\sqrt{N}}\widetilde{O}\left(\frac{\sqrt{{d}^2\epsilon}}{(1-\gamma)^2\beta}\right)\leq\frac{\varepsilon}{2}\longrightarrow M=N\geq\widetilde{O}\left(\frac{{d}^2}{(1-\gamma)^4\varepsilon^2}\right) \end{aligned} $$ gives the outer iteration complexity and the number of samples collected by a single Monte Carlo trajectory. Under Assumption \ref{ass 4.4}, which means the $Q$-function is realizable in our function class $\mathcal{F}$, we have $\epsilon_{\text{bias}}=0$ (see Remark \ref{remark app 1} for justification).
After setting the hyperparameters above, with probability at least $1-\delta$, we have $$ \frac{1}{N}\sum_{n=1}^N\left(V^{\widetilde{\pi}}-V^{\pi^n}\right)(s_0)\leq \varepsilon $$ Recall that our algorithm outputs a uniform mixture of the policy cover $\pi^{\text{LPO}}=$Unif($\pi^0,\pi^1,\cdots,\pi^{N-1}$), so we have $$ \left(V^{\widetilde{\pi}}-V^{\pi^{\text{LPO}}}\right)\left(s_{0}\right) \leq \varepsilon. $$ Next, we count the total number of samples collected over the course of the algorithm. Each time the bonus switches, \cref{A3} is invoked and runs for $K$ iterations. From Lemma \ref{E.3} we know that once data are collected, they can be reused for the next $\kappa$ policies. Therefore, we actually run \cref{A4} $\left\lceil\frac{K}{\kappa}\right\rceil$ times, and each invocation of \cref{A4} requires $M$ Monte Carlo samples. Let $S$ denote the number of bonus switches, bounded in Proposition \ref{C.7} (Number of Switches). In total, the sample complexity of our algorithm is $$ \begin{aligned} &\underbrace{S}\limits_{\text{number of inner loops invoked}}\ \ \times\ \ \underbrace{\left\lceil\frac{K}{\kappa}\right\rceil}\limits_{\text{the inner iteration}}\ \ \times\ \ \underbrace{M}\limits_{\text{Monte Carlo trajectories}} \\ &=\widetilde{O}\left(d\times\frac{2 \ln (1 / \delta)\left(\frac{\sqrt{\ln |\mathcal{A}|}}{\sqrt{K} W}\right)(B+W)}{(1-\gamma) \ln 2} \times K\times M\right) \\&=\widetilde{O}\left(d\left(\frac{B}{W}+1\right) \frac{\sqrt{K}}{1-\gamma}M\right) \\ &=\widetilde{O}\left(\frac{d}{1-\gamma} \times \frac{W}{(1-\gamma) \varepsilon}\times \frac{{d}^2}{(1-\gamma)^4\varepsilon^2}\right) \\ &=\widetilde{O}\left(\frac{{d}^3}{(1-\gamma)^{8} \varepsilon^3}\right) \end{aligned} $$ This completes the proof of our main theorem. \end{proof} \section{The Number of Switches} \label{App C} In this section, we give a bound on the number of policy switches in the outer loop.
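The switching machinery analyzed in this appendix rests on online sensitivity scores: each arriving point $z$ is kept in the subsampled dataset $\widehat{\mathcal{Z}}^n$ with probability roughly proportional to its sensitivity, and kept points are reweighted by the inverse probability. As a reader aid, the following is a minimal numerical sketch of that scheme for a small finite function class; the helper names (`sensitivity`, `subsample`, `c_log`) are ours, and the rounding step and the precise log factors of the actual algorithm are omitted.

```python
import itertools
import random

def sensitivity(z, dataset, functions, N, W):
    """Online sensitivity score of z w.r.t. a finite function class:
    sup over (f1, f2) of (f1(z)-f2(z))^2 / (min(||f1-f2||_Z^2, 4NW^2) + 1)."""
    best = 0.0
    for f1, f2 in itertools.product(functions, repeat=2):
        num = (f1(z) - f2(z)) ** 2
        norm_sq = sum((f1(x) - f2(x)) ** 2 for x in dataset)
        best = max(best, num / (min(norm_sq, 4 * N * W ** 2) + 1.0))
    return best

def subsample(stream, functions, N, W, c_log=1.0, seed=0):
    """Keep each arriving point with probability p ~ sensitivity * log-factor,
    storing importance weight 1/p for kept points (the role of Z-hat^n here)."""
    rng = random.Random(seed)
    kept, seen = [], []
    for z in stream:
        p = min(1.0, c_log * sensitivity(z, seen, functions, N, W))
        if rng.random() < p:
            kept.append((z, 1.0 / p))  # reweight by inverse keep probability
        seen.append(z)
    return kept
```

With a large `c_log`, every point is kept with probability one and the subsampled dataset coincides with the full one; the analysis below controls how far the two can drift apart when points are dropped.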
Recall that the width function is $$ \omega({\widehat{\mathcal{F}}}^n,s,a) = \mathop{\text{sup}}\limits_{f_1,f_2\in\mathcal{F},||f_1-f_2||_{\widehat{\mathcal{Z}}^n}^2\leq\epsilon}|f_1(s,a)-f_2(s,a)| $$ The parameter $\epsilon$ will be defined later in \eqref{eq 34}. In fact, we will show that $\epsilon= {O}(\log{N})$ in Lemmas \ref{E.4} and \ref{E.5}. First, we need to show that for every $n \in [N]$, the sensitivity dataset ${\widehat{\mathcal{Z}}}^n$ approximates the original dataset ${\mathcal{Z}}^n$. Our approach is inspired by \citep{kong2021online}. For all $n \in[N]$ and $\alpha \in[\epsilon,+\infty)$, we define the following quantities $$ \begin{aligned} \underline{\mathcal{B}}^{n}(\alpha) &:=\left\{\left(f_{1}, f_{2}\right) \in \mathcal{F} \times \mathcal{F} \mid\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2} \leq \alpha / 100\right\} \\ \mathcal{B}^{n}(\alpha) &:=\left\{\left(f_{1}, f_{2}\right) \in \mathcal{F} \times \mathcal{F} \mid \min \left\{\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2}, 4 N W^{2}\right\} \leq \alpha\right\} \\ \overline{\mathcal{B}}^{n}(\alpha) &:=\left\{\left(f_{1}, f_{2}\right) \in \mathcal{F} \times \mathcal{F} \mid\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2} \leq 100 \alpha\right\} \end{aligned} $$ For each $n \in[N]$, we use $\mathcal{E}^{n}(\alpha)$ to denote the event that $$ \underline{\mathcal{B}}^{n}(\alpha) \subseteq \mathcal{B}^{n}(\alpha) \subseteq \overline{\mathcal{B}}^{n}(\alpha) $$ Furthermore, we denote $$ \mathcal{E}^{n}:=\bigcap_{j=0}^{\infty} \mathcal{E}^{n}\left(100^{j} \epsilon\right). $$ Our goal is to show that the event $\mathcal{E}^{n}$ holds with high probability, which means that ${\widehat{\mathcal{Z}}}^n$ is a good approximation of ${\mathcal{Z}}^n$. \\ Before presenting the proof, we need the following concentration inequality, proved in \citep{freedman1975tail}.
\begin{lemma} \label{C.1} Consider a real-valued martingale $\left\{Y_{k}: k=0,1,2, \cdots\right\}$ with difference sequence $\left\{X_{k}: k=0,1,2, \cdots\right\}$. Assume that the difference sequence is uniformly bounded: $$ \left|X_{k}\right| \leq R \quad \text { almost surely for } \quad k=1,2,3, \cdots $$ For a fixed $n \in \mathbb{N}$, assume that $$ \sum_{k=1}^{n} \mathbb{E}_{k-1}\left(X_{k}^{2}\right) \leq \sigma^{2} $$ almost surely. Then for all $t \geq 0$, $$ P\left\{\left|Y_{n}-Y_{0}\right| \geq t\right\} \leq 2 \exp \left\{-\frac{t^{2} / 2}{\sigma^{2}+R t / 3}\right\} $$ \end{lemma} Furthermore, we need a bound on the number of elements in the sensitivity dataset, established in \citep{kong2021online}. \begin{lemma} \label{C.2} With probability at least $1-\delta / 64 N$, $$ \left|\widehat{\mathcal{Z}}^{n}\right| \leq 64 N^{3} / \delta \quad \forall n \in[N] . $$ \end{lemma} The following lemma, also proved in \citep{kong2021online}, shows that if $\mathcal{E}^{n}$ occurs, then ${\widehat{\mathcal{Z}}}^n$ is a good approximation of ${\mathcal{Z}}^n$. \begin{lemma} \label{C.3} If $\mathcal{E}^{n}$ occurs, then $$ \frac{1}{10000}\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2} \leq \min \left\{\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2}, 4 N W^{2}\right\} \leq 10000\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}, \quad \forall\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}>100 \epsilon $$ and $$ \min \left\{\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2}, 4 N W^{2}\right\} \leq 10000 \epsilon, \quad \forall\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2} \leq 100 \epsilon $$ \end{lemma} To establish our result, we need the following lemma. The proof follows the approach of \citep{kong2021online}; we present it here for completeness.
\begin{lemma} \label{C.4} For $\alpha \in\left[\epsilon, 4 N W^{2}\right]$ $$ \operatorname{Pr}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1}\left(\mathcal{E}^{n}(\alpha)\right)^{c}\right) \leq \delta /\left(32 N^{2}\right) $$ \end{lemma} \begin{proof} We use $\overline{\mathcal{Z}}^{n}$ to denote the dataset without rounding, i.e., we replace every element $\hat{z}$ with $z$. Denote $C_{1}=C \cdot \log \left(N \cdot \mathcal{N}\left(\mathcal{F}, \sqrt{\delta / 64 N^{3}}\right) / \delta\right)$, where $C$ will be chosen appropriately later. We consider any fixed pair $\left(f_{1}, f_{2}\right) \in \mathcal{C}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right) \times \mathcal{C}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right)$. For each $i \geq 2$, define $$ Z_{i}=\max \left\{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{i}}^{2}, \min \left\{\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{i-1}}^{2}, 4 N W^{2}\right\}\right\} $$ and $$ Y_{i}= \begin{cases}\frac{1}{p_{z_{i-1}}}\left(f_{1}\left(z_{i-1}\right)-f_{2}\left(z_{i-1}\right)\right)^{2} & z_{i-1} \text { is added into } \overline{\mathcal{Z}}^{i} \text { and } Z_{i} \leq 2000000 \alpha \\ 0 & z_{i-1} \text { is not added into } \overline{\mathcal{Z}}^{i} \text { and } Z_{i} \leq 2000000 \alpha \\ \left(f_{1}\left(z_{i-1}\right)-f_{2}\left(z_{i-1}\right)\right)^{2} & Z_{i}>2000000 \alpha\end{cases} $$ Note that $Z_{i}$ is constant under $\mathcal{F}_{i-1}$ and $Y_{i}$ is adapted to the filtration $\mathcal{F}_{i}$; thus $$ \mathbb{E}_{i-1}\left[Y_{i}\right]=\left(f_{1}\left(z_{i-1}\right)-f_{2}\left(z_{i-1}\right)\right)^{2} $$ Now we bound $Y_{i}$ and its variance in order to apply Freedman's inequality. If $p_{z_{i-1}}=1$ or $Z_i>2000000 \alpha$, then $Y_{i}-\mathbb{E}_{i-1}\left[Y_{i}\right]=\operatorname{Var}_{i-1}\left[Y_{i}-\mathbb{E}_{i-1}\left[Y_{i}\right]\right]=0$.
Otherwise, by the definition of $p_{z}$, we have $$ \begin{aligned} \left|Y_{i}-\mathbb{E}_{i-1}\left[Y_{i}\right]\right| & \leq\left(\min \left\{\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{i-1}}^{2}, 4 N W^{2}\right\}+1\right) / C_{1} \\ & \leq 3000000 \alpha / C_{1} \end{aligned} $$ and $$ \begin{aligned} \operatorname{Var}_{i-1}\left[Y_{i}-\mathbb{E}_{i-1}\left[Y_{i}\right]\right] & \leq \frac{1}{p_{z_{i-1}}}\left(f_{1}\left(z_{i-1}\right)-f_{2}\left(z_{i-1}\right)\right)^{4} \\ & \leq\left(f_{1}\left(z_{i-1}\right)-f_{2}\left(z_{i-1}\right)\right)^{2} \cdot 3000000 \alpha / C_{1} \end{aligned} $$ Summing over $i$ yields $$ \sum_{i=2}^{n} \operatorname{Var}_{i-1}\left[Y_{i}-\mathbb{E}_{i-1}\left[Y_{i}\right]\right] \leq(3000000 \alpha)^{2} / C_{1} $$ By Freedman's inequality, we have $$ \begin{aligned} &\mathbb{P}\left\{\left|\sum_{i=2}^{n}\left(Y_{i}-\mathbb{E}_{i-1}\left[Y_{i}\right]\right)\right| \geq \alpha / 100\right\} \\ &\leq 2 \exp \left\{-\frac{(\alpha / 100)^{2} / 2}{(3000000 \alpha)^{2} / C_{1}+\alpha \cdot 3000000 \alpha / 3 C_{1}}\right\} \\ &\leq\left(\delta / 64 N^{2}\right) /\left(\mathcal{N}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right)\right)^{2} \end{aligned} $$ where the last inequality is guaranteed by taking $C$ appropriately large. Taking a union bound over all $\left(f_{1}, f_{2}\right) \in \mathcal{C}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right) \times \mathcal{C}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right)$, with probability at least $1-\delta /\left(64 N^{2}\right)$, we have $$ \left|\sum_{i=2}^{n}\left(Y_{i}-\mathbb{E}_{i-1}\left[Y_{i}\right]\right)\right| \leq \alpha / 100 . $$ In the rest of the proof, we condition on the event above and the event defined in Lemma \ref{C.2}.
\paragraph{Part 1} $\left(\underline{\mathcal{B}}^{n}(\alpha) \subseteq \mathcal{B}^{n}(\alpha)\right)$ : Consider any pair $f_{1}, f_{2} \in \mathcal{F}$ with $\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2} \leq \alpha / 100$. From the definition we know that there exist $\left(\hat{f}_{1}, \hat{f}_{2}\right) \in \mathcal{C}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right) \times \mathcal{C}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right)$ such that $\left\|\hat{f}_{1}-f_{1}\right\|_{\infty},\left\|\hat{f}_{2}-f_{2}\right\|_{\infty} \leq \sqrt{\delta /\left(64 N^{3}\right)}$. Then we have that $$ \begin{aligned} \left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n}}^{2} & \leq\left(\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}+\left\|f_{1}-\hat{f}_{1}\right\|_{\mathcal{Z}^{n}}+\left\|\hat{f}_{2}-f_{2}\right\|_{\mathcal{Z}^{n}}\right)^{2} \\ & \leq\left(\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}+2 \cdot \sqrt{\left|\mathcal{Z}^{n}\right|} \cdot \sqrt{\delta /\left(64 N^{3}\right)}\right)^{2} \\ & \leq \alpha / 50 \end{aligned} $$ We consider the $Y_{i}$ 's which correspond to $\hat{f}_{1}$ and $\hat{f}_{2}$. Because $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n}}^{2} \leq \alpha / 50$, we also have that $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n-1}}^{2} \leq \alpha / 50$. From $\mathcal{E}^{n-1}$ we know that $\min \left\{\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\widehat{\mathcal{Z}}^{n-1}}^{2}, 4 N W^{2}\right\} \leq 10000 \alpha$. 
Then from the definition of $Y_{i}$ we have $$ \left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^n}^{2}=\sum_{i=2}^{n} Y_{i} $$ Then $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^n}^{2}$ can be bounded in the following manner: $$ \begin{aligned} \left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^{n}}^{2} &=\sum_{i=2}^{n} Y_{i} \\ & \leq \sum_{i=2}^{n} \mathbb{E}_{i-1}\left[Y_{i}\right]+\alpha / 100 \\ &=\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n}}^{2}+\alpha / 100 \\ & \leq 3 \alpha / 100 \end{aligned} $$ As a result, $\left\|f_{1}-f_{2}\right\|_{\overline{\mathcal{Z}}^n}^{2}$ can also be bounded: $$ \begin{aligned} \left\|f_{1}-f_{2}\right\|_{\overline{\mathcal{Z}}^{n}}^{2} & \leq\left(\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^{n}}+\left\|f_{1}-\hat{f}_{1}\right\|_{\overline{\mathcal{Z}}^{n}}+\left\|f_{2}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^{n}}\right)^{2} \\ & \leq\left(\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^{n}}+2 \cdot \sqrt{\left|\overline{\mathcal{Z}}^{n}\right|} \cdot \sqrt{\delta /\left(64 N^{3}\right)}\right)^{2} \\ & \leq \alpha / 25 \end{aligned} $$ Finally, we can bound $\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2}$ : $$ \begin{aligned} \left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2} & \leq\left(\left\|f_{1}-f_{2}\right\|_{\overline{\mathcal{Z}}^{n}}+\sqrt{64 N^{3} / \delta} /\left(8 \sqrt{64 N^{3} / \delta}\right)\right)^{2} \\ & \leq \alpha \end{aligned} $$ We conclude that for any pair $f_{1}, f_{2} \in \mathcal{F}$ with $\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2} \leq \alpha / 100$, it holds that $\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2} \leq \alpha$. Thus we must have $\underline{\mathcal{B}}^{n}(\alpha) \subseteq \mathcal{B}^{n}(\alpha)$.
\paragraph{Part 2} $\left(\mathcal{B}^{n}(\alpha) \subseteq \overline{\mathcal{B}}^{n}(\alpha)\right)$ : Consider any pair $f_{1}, f_{2} \in \mathcal{F}$ with $\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}>100 \alpha$. From the definition we know that there exist $\left(\hat{f}_{1}, \hat{f}_{2}\right) \in \mathcal{C}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right) \times \mathcal{C}\left(\mathcal{F}, \sqrt{\delta /\left(64 N^{3}\right)}\right)$ such that $\left\|\hat{f}_{1}-f_{1}\right\|_{\infty},\left\|\hat{f}_{2}-f_{2}\right\|_{\infty} \leq \sqrt{\delta /\left(64 N^{3}\right)}$. Then we have that $$ \begin{aligned} \left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n}}^{2} & \geq\left(\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}-\left\|f_{1}-\hat{f}_{1}\right\|_{\mathcal{Z}^{n}}-\left\|\hat{f}_{2}-f_{2}\right\|_{\mathcal{Z}^{n}}\right)^{2} \\ & \geq\left(\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}-2 \cdot \sqrt{\left|\mathcal{Z}^{n}\right|} \cdot \sqrt{\delta /\left(64 N^{3}\right)}\right)^{2} \\ &>50 \alpha \end{aligned} $$ Thus we have $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n}}^{2}>50 \alpha$. We consider the $Y_{i}$ 's which correspond to $\hat{f}_{1}$ and $\hat{f}_{2}$. We want to prove that $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2}>40 \alpha$. For the sake of contradiction, we assume that $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2} \leq 40 \alpha$. \paragraph{Case 1}: $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n}}^{2} \leq 2000000 \alpha$.
From the definition of $Y_{i}$ we have $$ \left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^{n}}^2=\sum_{i=2}^{n} Y_{i} $$ Combined with the former result, we conclude that $$ \left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^{n}}^{2}=\sum_{i=2}^{n} Y_{i} \geq \sum_{i=2}^{n} \mathbb{E}_{i-1}\left[Y_{i}\right]-\alpha / 100=\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n}}^{2}-\alpha / 100>50 \alpha-\alpha / 100>49 \alpha $$ Then we have $$ \begin{aligned} & \left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2} \geq\left(\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\overline{\mathcal{Z}}^{n}}-\sqrt{64 N^{3} / \delta} /\left(8 \sqrt{64 N^{3} / \delta}\right)\right)^{2} \\ & >40 \alpha \end{aligned} $$ which leads to a contradiction. \paragraph{Case 2}: $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n-1}}^{2}>1000000 \alpha$. From $\mathcal{E}^{n-1}$ we deduce that $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\widehat{\mathcal{Z}}^{n-1}}^{2}>100 \alpha$ which directly leads to a contradiction. \paragraph{Case 3}: $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n}}^{2}>2000000 \alpha$ and $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\mathcal{Z}^{n-1}}^{2} \leq 1000000 \alpha$. It is clear that $\left(\hat{f}_{1}\left(z_{n-1}\right)-\hat{f}_{2}\left(z_{n-1}\right)\right)^{2}>$ $1000000 \alpha$. From the definition of sensitivity we know that $z_{n-1}$ will be added into $\overline{\mathcal{Z}}^{n}$ almost surely, which leads to a contradiction. We conclude that $\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2}>40 \alpha$. 
Finally, we can bound $\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2}$ : $$ \begin{aligned} \left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2} & \geq\left(\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}-\left\|f_{1}-\hat{f}_{1}\right\|_{\widehat{\mathcal{Z}}^{n}}-\left\|\hat{f}_{2}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}\right)^{2} \\ & \geq\left(\left\|\hat{f}_{1}-\hat{f}_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}-2 \cdot \sqrt{\left|\widehat{\mathcal{Z}}^{n}\right|} \cdot \sqrt{\delta /\left(64 N^{3}\right)}\right)^{2} \\ &>\alpha \end{aligned} $$ We conclude that for any pair $f_{1}, f_{2} \in \mathcal{F}$ with $\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}>100 \alpha$, it holds that $\left\|f_{1}-f_{2}\right\|_{\widehat{\mathcal{Z}}^{n}}^{2}>\alpha$. This implies that $\mathcal{B}^{n}(\alpha) \subseteq \overline{\mathcal{B}}^{n}(\alpha)$. \end{proof} Next, we give a bound on the sum of the online sensitivity scores. \begin{lemma} \label{C.5} (Bound of sensitivity scores). We have $$ \sum_{n=1}^{N-1} \operatorname{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z^{n}\right) \leq C \cdot \operatorname{dim}_{E}(\mathcal{F}, 1 / 4N) \log_2 \left(4 N W^{2}\right) \log N $$ for some absolute constant $C>0$. \end{lemma} \begin{proof} Note that $\mathcal{Z}^{n}=\left\{\left(s_{i}, a_{i}\right)\right\}_{i \in[n-1]}$, so $\left|\mathcal{Z}^{n}\right| \leq N$.
Notice that $$ \begin{aligned} \operatorname{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z^{n}\right)&=\mathop{\text{sup}}\limits_{f_1,f_2\in \mathcal{F}}\frac{{\left(f_1(z)-f_2(z)\right)}^2}{\text{min}\{{||f_1-f_2||}_{\mathcal{Z}^n}^2,4NW^2\}+1}\\ &= \mathop{\text{sup}}\limits_{f_1,f_2\in \mathcal{F}}\frac{\left(f_{1}\left(z_{n}\right)-f_{2}\left(z_{n}\right)\right)^{2}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}+1} \end{aligned} $$ For each $n \in[N-1]$, let $f_{1}, f_{2} \in \mathcal{F}$ be an arbitrary pair of functions, such that $$ \frac{\left(f_{1}\left(z_{n}\right)-f_{2}\left(z_{n}\right)\right)^{2}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}+1} $$ is maximized, and we define $L\left(z_{n}\right)=\left(f_{1}\left(z_{n}\right)-f_{2}\left(z_{n}\right)\right)^{2}$ for such $f_{1}, f_{2}$. Note that $0 \leq L\left(z_{n}\right) \leq 4 W^{2}$. Let $\mathcal{Z}^{N}=\cup_{\alpha=0}^{\left\lfloor\log _{2}\left(4 W^{2} N\right)\right\rfloor} \mathcal{Z}_{\alpha} \cup \mathcal{Z}_{\infty}$ be a dyadic decomposition with respect to $L(\cdot)$, where for each $0 \leq \alpha \leq\left\lfloor\log _{2}\left(4 W^{2} N\right)\right\rfloor$, we define $$ \mathcal{Z}_{\alpha}=\left\{z_{n} \in \mathcal{Z}^{N} \mid L\left(z_{n}\right) \in\left(4 W^{2} \cdot 2^{-(\alpha+1)}, 4 W^{2} \cdot 2^{-\alpha}\right]\right\} $$ and $$ \mathcal{Z}_{\infty}=\left\{z_{n} \in \mathcal{Z}^{N} \mid L\left(z_{n}\right) \leq 4 W^{2} \cdot 2^{-\left\lfloor\log _{2}\left(4 W^{2} N\right)\right\rfloor-1}\right\} $$ Therefore, for any $z_{n} \in \mathcal{Z}_{\infty},\ \text{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) \leq 4 W^{2} \cdot 2^{-\left\lfloor\log _{2}\left(4 W^{2} N\right)\right\rfloor-1}<1 / N$, and thus $$ \sum_{z_{n} \in \mathcal{Z}_{\infty}} \text{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) \leq N \cdot \frac{1}{N}=1 $$ For each $\alpha$, let $N_{\alpha}=\left|\mathcal{Z}_{\alpha}\right| / \operatorname{dim}_{E}\left(\mathcal{F}, 4 
W^{2} \cdot 2^{-(\alpha+1)}\right)$, and we decompose $\mathcal{Z}_{\alpha}$ into $\left(N_{\alpha}+1\right)$ disjoint subsets, i.e., $\mathcal{Z}_{\alpha}=\cup_{j=1}^{N_{\alpha}+1} \mathcal{Z}_{\alpha}^{j}$, by the following procedure: Initialize $\mathcal{Z}_{\alpha}^{j}=\{\}$ for all $j$ and consider each $z_{n} \in \mathcal{Z}_{\alpha}$ sequentially. For each $z_{n} \in \mathcal{Z}_{\alpha}$, find the smallest $1 \leq j \leq N_{\alpha}$ such that $z_{n}$ is $4 W^{2} \cdot 2^{-(\alpha+1)}$-independent of $\mathcal{Z}_{\alpha}^{j}$ with respect to $\mathcal{F}$. Set $j=N_{\alpha}+1$ if no such $j$ exists, use $j\left(z_{n}\right) \in\left[N_{\alpha}+1\right]$ to denote the choice of $j$ for $z_{n}$, and add $z_{n}$ into $\mathcal{Z}_{\alpha}^{j}$. Now, for each $z_{n} \in \mathcal{Z}_{\alpha}$, $z_{n}$ is $4 W^{2} \cdot 2^{-(\alpha+1)}$-dependent on each of $\mathcal{Z}_{\alpha}^{1}, \mathcal{Z}_{\alpha}^{2}, \cdots, \mathcal{Z}_{\alpha}^{j\left(z_{n}\right)-1}$. Next, we show that for each $z_{n} \in \mathcal{Z}_{\alpha}$, $$ \text{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) \leq \frac{4}{j\left(z_{n}\right)} $$ For any $z_{n} \in \mathcal{Z}_{\alpha}$, let $f_{1}, f_{2} \in \mathcal{F}$ be an arbitrary pair of functions such that $$ \frac{\left(f_{1}\left(z_{n}\right)-f_{2}\left(z_{n}\right)\right)^{2}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}+1} $$ is maximized. Since $z_{n} \in \mathcal{Z}_{\alpha}$, we must have $\left(f_{1}\left(z_{n}\right)-f_{2}\left(z_{n}\right)\right)^{2}>4 W^{2} \cdot 2^{-(\alpha+1)}$. Since $z_{n}$ is $4 W^{2} \cdot 2^{-(\alpha+1)}$-dependent on each of $\mathcal{Z}_{\alpha}^{1}, \mathcal{Z}_{\alpha}^{2}, \cdots, \mathcal{Z}_{\alpha}^{j\left(z_{n}\right)-1}$, for each $1 \leq t<j\left(z_{n}\right)$ we have $\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}_{\alpha}^{t}}^{2} \geq 4 W^{2} \cdot 2^{-(\alpha+1)}$. Note that $\mathcal{Z}_{\alpha}^{1}, \mathcal{Z}_{\alpha}^{2}, \cdots, \mathcal{Z}_{\alpha}^{j\left(z_{n}\right)-1} \subset \mathcal{Z}^{n}$ due to the design of the partition procedure. Thus, $$ \begin{aligned} & \text{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) \leq \frac{\left(f_{1}\left(z_{n}\right)-f_{2}\left(z_{n}\right)\right)^{2}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}+1} \leq \frac{4 W^{2} \cdot 2^{-\alpha}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}^{n}}^{2}} \leq \frac{4 W^{2} \cdot 2^{-\alpha}}{\sum_{t=1}^{j(z_{n})-1}\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}_{\alpha}^{t}}^{2}} \\ & \leq \frac{4 W^{2} \cdot 2^{-\alpha}}{\left(j\left(z_{n}\right)-1\right) \cdot 4 W^{2} \cdot 2^{-(\alpha+1)}} \leq \frac{2}{j\left(z_{n}\right)-1} \end{aligned} $$ Therefore, $$ \text{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) \leq \min \left\{\frac{2}{j\left(z_{n}\right)-1}, 1\right\} \leq \frac{4}{j\left(z_{n}\right)} $$ In addition, by the definition of $4 W^{2} \cdot 2^{-(\alpha+1)}$-independence, we have $\left|\mathcal{Z}_{\alpha}^{j}\right| \leq \operatorname{dim}_{E}\left(\mathcal{F}, 4 W^{2} \cdot 2^{-(\alpha+1)}\right)$ for all $1 \leq j \leq N_{\alpha}$.
Therefore, $$ \begin{aligned} \sum_{z_{n} \in \mathcal{Z}_{\alpha}} \text { sensitivity }_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) & \leq \sum_{1 \leq j \leq N_{\alpha}}\left|\mathcal{Z}_{\alpha}^{j}\right| \cdot \frac{4}{j}+\sum_{z \in \mathcal{Z}_{\alpha}^{N_{\alpha}+1}} \frac{4}{N_{\alpha}} \\ & \leq 4 \operatorname{dim}_{E}\left(\mathcal{F}, 4 W^{2} \cdot 2^{-(\alpha+1)}\right) \cdot\left(\ln \left(N_{\alpha}\right)+1\right)+\left|\mathcal{Z}_{\alpha}\right| \cdot \frac{4 \operatorname{dim}_{E}\left(\mathcal{F}, 4 W^{2} \cdot 2^{-(\alpha+1)}\right)}{\left|\mathcal{Z}_{\alpha}\right|} \\ &=4 \operatorname{dim}_{E}\left(\mathcal{F}, 4 W^{2} \cdot 2^{-(\alpha+1)}\right) \cdot\left(\ln \left(N_{\alpha}\right)+2\right) \\ & \leq 8 \operatorname{dim}_{E}\left(\mathcal{F}, 4 W^{2} \cdot 2^{-(\alpha+1)}\right) \cdot \ln N \end{aligned} $$ Now, by the monotonicity of the eluder dimension, it follows that $$ \begin{aligned} \sum_{n=1}^{N-1} \text { sensitivity }_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) & \leq \sum_{\alpha=0}^{\left\lfloor\log _{2}\left(4 W^{2} N\right)\right\rfloor} \sum_{z_{n} \in \mathcal{Z}_{\alpha}} \text { sensitivity }_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right)+\sum_{z_{n} \in \mathcal{Z}_{\infty}} \text { sensitivity }_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) \\ & \leq 8\left(\left\lfloor\log _{2}\left(4 W^{2} N\right)\right\rfloor+1\right) \operatorname{dim}_{E}(\mathcal{F}, 1 / 4 N) \ln N+1 \\ & \leq 9\left(\left\lfloor\log _{2}\left(4 W^{2} N\right)\right\rfloor+1\right) \operatorname{dim}_{E}(\mathcal{F}, 1 / 4 N) \ln N \end{aligned} $$ \end{proof} The following proposition verifies that $\bigcap_{n=1}^{\infty} \mathcal{E}^{n}$ happens with high probability.
\begin{proposition} \label{C.6} $$ \mathbb{P}\left(\bigcap_{n=1}^{\infty} \mathcal{E}^{n}\right) \geq 1-\delta / 32 $$ \end{proposition} \begin{proof} For all $n \in[N]$ it holds that $$ \begin{aligned} & \mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1}\right)-\mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n}\right) \\ =& \mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1}\left(\mathcal{E}^{n}\right)^{c}\right) \\ =& \mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1}\left(\bigcap_{j=0}^{\infty} \mathcal{E}^{n}\left(100^{j} \epsilon\right)\right)^{c}\right) \\ =& \mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1} \bigcup_{j=0}^{\infty}\left(\mathcal{E}^{n}\left(100^{j} \epsilon\right)\right)^{c}\right) \\ \leq & \sum_{j=0}^{\infty} \mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1}\left(\mathcal{E}^{n}\left(100^{j} \epsilon\right)\right)^{c}\right) \\ =& \sum_{j \geq 0,100^{j} \epsilon \leq 4 N W^{2}}\mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1}\left(\mathcal{E}^{n}\left(100^{j} \epsilon\right)\right)^{c}\right) \end{aligned} $$ where the last equality holds because $\mathbb{P}\left(\mathcal{E}^{n}(\alpha)\right)=1$ when $\alpha>4 N W^{2}$.
Combining this with Lemma \ref{C.4} yields $$ \mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1}\right)-\mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n}\right) \leq \delta /\left(32 N^{2}\right) \cdot\left(\log \left(4 N W^{2} / \epsilon\right)+2\right) \leq \delta /(32 N) $$ Thus $$ \begin{aligned} & \mathbb{P}\left(\bigcap_{n=1}^{N} \mathcal{E}^{n}\right) \\ =& 1-\sum_{n=1}^{N}\left(\mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n-1}\right)-\mathbb{P}\left(\mathcal{E}^{1} \mathcal{E}^{2} \ldots \mathcal{E}^{n}\right)\right) \\ \geq & 1-N \cdot(\delta / 32 N) \\ =& 1-\delta / 32 \end{aligned} $$ \end{proof} With Lemma \ref{C.5}, we are now ready to prove the following proposition. \newline \begin{proposition} \label{C.7} With probability at least $1-\delta / 8$, the following statements hold: (i) The subsampled dataset ${\widehat{\mathcal{Z}}}^n$ changes at most $$ S_{\max }=C \cdot \log \left(N \mathcal{N}\left(\mathcal{F}, \sqrt{\delta / 64 N^{3}}\right) / \delta\right) \cdot \operatorname{dim}_{E}(\mathcal{F}, 1 / N) \cdot \log ^{2} N $$ times, where $C>0$ is some absolute constant. (ii) For any $n \in[N],\left|\widehat{\mathcal{Z}}^{n}\right| \leq 64 N^{3} / \delta$. \end{proposition} \begin{proof} Conditioning on $\mathcal{E}^{n}$, we have $$ \mathbb{I}\left\{\mathcal{E}^{n}\right\} \cdot \text{sensitivity}_{\widehat{\mathcal{Z}}^{n}, \mathcal{F}}\left(z_{n}\right)\leq C\cdot\text{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) $$ for some constant $C>0$, by Lemma \ref{C.3}.
By definition of $p_{z}$ we have $$ p_{z} \lesssim \operatorname{sensitivity}_{\widehat{\mathcal{Z}}, \mathcal{F}}(z) \cdot \log \left(N \mathcal{N}\left(\mathcal{F}, \sqrt{\delta / 64 N^{3}}\right) / \delta\right) $$ thus by Lemma \ref{C.5} we have $$ \begin{aligned} \sum_{n=1}^{N-1} \mathbb{I}\left\{\mathcal{E}^{n}\right\} \cdot p_{z_{n}} & \lesssim \sum_{n=1}^{N-1} \mathbb{I}\left\{\mathcal{E}^{n}\right\} \cdot \text{sensitivity}_{\widehat{\mathcal{Z}}^{n}, \mathcal{F}}\left(z_{n}\right) \cdot \log \left(N \mathcal{N}\left(\mathcal{F}, \sqrt{\delta / 64 N^{3}}\right) / \delta\right) \\ & \lesssim \sum_{n=1}^{N-1} \operatorname{sensitivity}_{\mathcal{Z}^{n}, \mathcal{F}}\left(z_{n}\right) \cdot \log \left(N \mathcal{N}\left(\mathcal{F}, \sqrt{\delta / 64 N^{3}}\right) / \delta\right) \\ & \lesssim \log \left(N \mathcal{N}\left(\mathcal{F}, \sqrt{\delta / 64 N^{3}}\right) / \delta\right) \operatorname{dim}_{E}(\mathcal{F}, 1 / N) \log ^{2} N \end{aligned} $$ and by choosing $C$ in the proposition appropriately, we may assume that $$ \sum_{n=1}^{N-1} \mathbb{I}\left\{\mathcal{E}^{n}\right\} \cdot p_{z_{n}} \leq S_{\max } / 3 $$ For $2 \leq n \leq N$, define random variables $\left\{X_{n}\right\}$ as $$ X_{n}= \begin{cases}\mathbb{I}\left\{\mathcal{E}^{n-1}\right\} & \hat{z}_{n-1} \text { is added into } \widehat{\mathcal{Z}}^{n} \\ 0 & \text { otherwise }\end{cases} $$ Then $X_{n}$ is adapted to the filtration $\mathcal{F}_{n}$. We have that $\mathbb{E}_{n-1}\left[X_{n}\right]=p_{z_{n-1}} \cdot \mathbb{I}\left\{\mathcal{E}^{n-1}\right\}$ and $\mathbb{E}_{n-1}\left[\left(X_{n}-\mathbb{E}_{n-1}\left[X_{n}\right]\right)^{2}\right]=\mathbb{I}\left\{\mathcal{E}^{n-1}\right\} \cdot p_{z_{n-1}}\left(1-p_{z_{n-1}}\right)$. 
Note that $X_{n}-\mathbb{E}_{n-1}\left[X_{n}\right]$ is a martingale difference sequence with respect to $\mathcal{F}_{n}$ and $$ \begin{aligned} \sum_{n=2}^{N} \mathbb{E}_{n-1}\left[\left(X_{n}-\mathbb{E}_{n-1}\left[X_{n}\right]\right)^{2}\right]&=\sum_{n=2}^{N} \mathbb{I}\left\{\mathcal{E}^{n-1}\right\} p_{z_{n-1}}\left(1-p_{z_{n-1}}\right) \leq \sum_{n=2}^{N} \mathbb{I}\left\{\mathcal{E}^{n-1}\right\} \cdot p_{z_{n-1}} \leq S_{\max } / 3 \\ \sum_{n=2}^{N} \mathbb{E}_{n-1}\left[X_{n}\right]&=\sum_{n=2}^{N} p_{z_{n-1}} \mathbb{I}\left\{\mathcal{E}^{n-1}\right\} \leq S_{\max } / 3 \end{aligned} $$ Thus, by applying Freedman's inequality (Lemma \ref{C.1}), we deduce that $$ \begin{aligned} & \mathbb{P}\left\{\sum_{n=2}^{N} X_{n} \geq S_{\max }\right\} \\ \leq & \mathbb{P}\left\{\left|\sum_{n=2}^{N}\left(X_{n}-\mathbb{E}_{n-1}\left[X_{n}\right]\right)\right| \geq 2 S_{\max } / 3\right\} \\ \leq & 2 \exp \left\{-\frac{\left(2 S_{\max } / 3\right)^{2} / 2}{S_{\max } / 3+2 S_{\max } / 9}\right\} \\ \leq & \delta /(32 N) \end{aligned} $$ By a union bound, with probability at least $1-\delta / 32$, $$ \sum_{n=2}^{N} X_{n}<S_{\max } $$ We condition on the event above and $\bigcap_{n=1}^{N} \mathcal{E}^{n}$. In this case, we add elements into $\widehat{\mathcal{Z}}^{n}$ at most $S_{\max }$ times. Combining the result above with Lemma \ref{C.2} completes the proof. \end{proof} \section{Concentration of Bonuses} \label{App D} Before bounding the bonuses, we need the following concentration inequality, proved in \citep{beygelzimer2011contextual}. \begin{lemma} \label{D.3} (Bernstein for Martingales). Consider a sequence of random variables $X_1,X_2,\cdots, X_T$. Assume that for all $t$, $X_t\leq R$ and $\mathbb{E}_t[X_t]\overset{def}{=}\mathbb{E}[X_t|X_1,\cdots,X_{t-1}]=0$.
Then for any $\delta>0$, there exist constants $c_1,c_2$ such that $$ \mathbf{P}\left(\sum_{t=1}^{T} X_{t} \leq c_1 \times \sqrt{\sum_{t=1}^{T} \mathbb{E}_t[X_t^2]\ln \frac{1}{\delta}}+c_2 \times \ln \frac{1}{\delta}\right) \geq 1-\delta $$ \end{lemma} \begin{lemma} \label{D.1} (Bound of Indicators). For any episode $n$ during the execution of the algorithm, with probability $1-\delta/2$, \begin{equation} \begin{aligned} \sum_{n=1}^N \mathbb{E}_{(s,a)\sim d^{n}}b_1^n(s,a)\leq \widetilde{O}\left(\frac{\sqrt{N{d}^2\epsilon}}{(1-\gamma)\beta}\right) \end{aligned} \end{equation} where $d =\text{dim}_E(\mathcal{F},1/N)$. \end{lemma} \begin{proof} \begin{equation} \begin{aligned} \sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^{n}}b_1^n(s,a)&\leq\frac{3}{1-\gamma}\sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^{n}}\textbf{1}\{\omega(\widehat{\mathcal{F}}^n,s,a)\geq\beta\}\\ &=\frac{3}{1-\gamma}\sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^{n}}\textbf{1}\left\{\frac{1}{\beta}\omega(\widehat{\mathcal{F}}^n,s,a)\geq1\right\}\\& \leq\frac{3}{1-\gamma}\cdot\frac{1}{\beta}\sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^{n}}\omega(\widehat{\mathcal{F}}^n,s,a)\\ &\leq \widetilde{O}\left(\frac{\sqrt{N{d}^2\epsilon}}{(1-\gamma)\beta}\right)\ (\text{by Lemma \ref{D.2}}) \end{aligned} \end{equation} \end{proof} \begin{lemma} \label{D.2} (Bound of Bonuses). For any episode $n$ during the execution of the algorithm, with probability $1-\delta/2$, \begin{equation} \begin{aligned} \sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^{n}}\omega(\widehat{\mathcal{F}}^n,s,a)\leq O\left(\sqrt{N{d}^2\epsilon}+\ln\left(\frac{2}{\delta}\right)\right)=\widetilde{O}\left(\sqrt{N{d}^2\epsilon}\right) \end{aligned} \end{equation} where $d =\text{dim}_E(\mathcal{F},1/N)$. \end{lemma} \begin{proof} We define the random dataset $\mathcal{D}_{1:n}$ to represent all the information available at the beginning of iteration $n$ of the algorithm.
Then we define $$ \xi_n=\mathbb{E}_{(s,a)\sim d^{n}}[\omega(\widehat{\mathcal{F}}^n,s,a)|\mathcal{D}_{1:n}]-\omega(\widehat{\mathcal{F}}^n,s_n,a_n) $$ and let $$ A=\sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^{n}}[\omega(\widehat{\mathcal{F}}^n,s,a)|\mathcal{D}_{1:n}]=\sum_{n=1}^N\omega(\widehat{\mathcal{F}}^n,s_n,a_n)+\sum_{n=1}^N\xi_n $$ Now we bound $\sum_{n=1}^N\omega(\widehat{\mathcal{F}}^n,s_n,a_n)$.\newline Conditioning on the event in Lemma \ref{C.6}, we have $$ \omega(\widehat{\mathcal{F}}^n,s,a)\leq \mathop{\text{sup}}\limits_{f_1,f_2\in\mathcal{F},||f_1-f_2||_{\mathcal{Z}^n}^2\leq100\epsilon}|f_1(s,a)-f_2(s,a)|\overset{def}{=}\bar{b}^n(s,a) $$ For any given $\alpha>0$, let $\mathcal{L}=\{(s_n,a_n)|n\in[N],\bar{b}^n(s_n,a_n)>\alpha\}$ and let $|\mathcal{L}|=L$.\newline Next we show that there exists $z_k:=(s_k,a_k)\in\mathcal{L}$ such that $(s_k,a_k)$ is $\alpha$-dependent on at least $N_0=L/\text{dim}_E(\mathcal{F},\alpha)-1$ disjoint subsequences in $\mathcal{Z}^k\cap\mathcal{L}$. We decompose $\mathcal{L}=\cup_{j=1}^{N_0+1}\mathcal{L}^j$ by the following procedure. We initialize $\mathcal{L}^j=\{\}$ for all $j$ and consider $z_k\in\mathcal{L}$ sequentially. For each $z_k\in\mathcal{L}$, we find the smallest $j\ (1\leq j\leq N_0)$ such that $z_k$ is $\alpha$-independent of $\mathcal{L}^j$ with respect to $\mathcal{F}$. If no such $j$ exists, we set $j=N_0+1$. We then add $z_k$ to $\mathcal{L}^j$. When the decomposition of $\mathcal{L}$ is finished, $\mathcal{L}^{N_0+1}\neq\emptyset$, as $\mathcal{L}^j$ contains at most $\text{dim}_E(\mathcal{F},\alpha)$ elements for $j\in [N_0]$. For any $z_k\in\mathcal{L}^{N_0+1}$, $z_k$ is $\alpha$-dependent on at least $N_0=L/\text{dim}_E(\mathcal{F},\alpha)-1$ disjoint subsequences in $\mathcal{Z}^k\cap\mathcal{L}$. \newline On the other hand, there exist $f_1,f_2\in\mathcal{F}$ with $||f_1-f_2||_{\mathcal{Z}^k}^2\leq100\epsilon$ such that $|f_1(s_k,a_k)-f_2(s_k,a_k)|>\alpha$.
So we have: $$ (L/\text{dim}_E(\mathcal{F},\alpha)-1)\cdot\alpha^2\leq||f_1-f_2||_{\mathcal{Z}^k}^2\leq100\epsilon $$ which implies $$ L\leq(\frac{100\epsilon}{\alpha^2}+1)\text{dim}_E(\mathcal{F},\alpha) $$ Now let $b_1\geq b_2\geq\cdots\geq b_N$ be a permutation of $\{\bar{b}^n(s_n,a_n)\}_{n=1}^N$. For any $b_n\geq\frac{1}{N}$, we have $$ n\leq(\frac{100\epsilon}{b_n^2}+1)\text{dim}_E(\mathcal{F},b_n)\leq(\frac{100\epsilon}{b_n^2}+1)\text{dim}_E(\mathcal{F},\frac{1}{N}) $$ which implies that $$ b_n\leq\left(\frac{n}{\text{dim}_E(\mathcal{F},\frac{1}{N})}-1\right)^{-\frac{1}{2}}\sqrt{100\epsilon},\ \text{when}\ b_n\geq 1/N $$ Moreover, we have $b_n\leq 2W$, so \begin{equation} \begin{aligned} \sum_{n=1}^N b_n&\leq N\cdot\frac{1}{N}+2W\cdot\text{dim}_E(\mathcal{F},1/N)+\sum_{\text{dim}_E(\mathcal{F},1/N)<n\leq N}\left(\frac{n}{\text{dim}_E(\mathcal{F},\frac{1}{N})}-1\right)^{-\frac{1}{2}}\sqrt{100\epsilon}\\ &\leq1+2W\cdot\text{dim}_E(\mathcal{F},1/N)+C\cdot\sqrt{\text{dim}_E(\mathcal{F},1/N)\cdot N\cdot\epsilon} \end{aligned} \end{equation} For simplicity, we denote $d:=\text{dim}_E(\mathcal{F},1/N)$; then $\sum_{n=1}^N\omega(\widehat{\mathcal{F}}^n,s_n,a_n)\leq O(\sqrt{N{d}^2\epsilon})$.
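The greedy decomposition of $\mathcal{L}$ into $\alpha$-independent subsequences used in the argument above can be sketched in code. This is a hedged illustration only: the predicate \texttt{is\_independent} is an abstract stand-in for $\alpha$-independence with respect to $\mathcal{F}$, and the toy capacity-based test merely mimics the fact that each bucket holds at most $\text{dim}_E(\mathcal{F},\alpha)$ points; none of these names come from a released implementation.

```python
def decompose(points, num_buckets, is_independent):
    """Split `points` into buckets L^1, ..., L^{num_buckets+1}: each point
    goes into the first bucket it is independent of; the last bucket
    collects points dependent on every earlier bucket."""
    buckets = [[] for _ in range(num_buckets + 1)]
    for z in points:
        for j in range(num_buckets):
            if is_independent(z, buckets[j]):
                buckets[j].append(z)
                break
        else:  # z was dependent on all of L^1, ..., L^{num_buckets}
            buckets[num_buckets].append(z)
    return buckets

# Toy independence notion: a point counts as "independent" of a bucket
# while the bucket holds fewer than dim_E points (capacity bound only).
dim_E = 3
toy_independent = lambda z, bucket: len(bucket) < dim_E
buckets = decompose(list(range(10)), num_buckets=2, is_independent=toy_independent)
print([len(b) for b in buckets])  # [3, 3, 4]
```

With capacity $3$ and ten points, the first two buckets fill to capacity and the remaining four points overflow into the last bucket, mirroring the pigeonhole step of the proof.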
Next, we bound the sum of the noise terms: \begin{equation} \begin{aligned} \sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^n}[\xi_n^2|\mathcal{D}_{1:n}]&\leq\sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^n}[\omega^2(\widehat{\mathcal{F}}^n,s,a)|\mathcal{D}_{1:n}]\\ &\leq 2W\cdot \sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^n}[\omega(\widehat{\mathcal{F}}^n,s,a)|\mathcal{D}_{1:n}] \end{aligned} \end{equation} Now, Lemma \ref{D.3} (Bernstein for Martingales) gives, with probability at least $1-\frac{\delta}{2}$ and for some constant $c$, \begin{equation} \begin{aligned} \sum_{n=1}^N\xi_n &\leq c\times\left(\sqrt{2\sum_{n=1}^N\mathbb{E}_{(s,a)\sim d^n}[\xi_n^2|\mathcal{D}_{1:n}]\ln(2/\delta)}+\frac{\ln(2/\delta)}{3}\right)\\&\leq c\times\left(\sqrt{4WA\ln(2/\delta)}+\frac{\ln(2/\delta)}{3}\right) \end{aligned} \end{equation} Solving for $A$ finally gives, with high probability, \begin{equation} \begin{aligned} A=O\left(\sqrt{N{d}^2\epsilon}+\ln(\frac{2}{\delta})\right) \end{aligned} \end{equation} \end{proof} \section{Analysis of Policy Evaluation Oracle} \label{App E} In this section, we provide the theoretical guarantee of our policy evaluation oracle using the importance sampling technique. \begin{definition} \label{E.1} (Importance Sampling Estimator). Let $t$ be a positive discrete random variable with probability mass function $\mathbf{P}(t=\tau)=\gamma^{\tau-1}(1-\gamma)$, and let $\left\{\left(s_{\tau}, a_{\tau}, r_{\tau}\right)\right\}_{\tau=1, \ldots, t}$ be a random trajectory of length $t$ obtained by following a fixed ``behavioral'' policy $\underline{\pi}$ from $(s, a)$. The importance sampling estimator of the target policy $\pi$ is: $$ \left(\Pi_{\tau=2}^{t} \frac{\pi\left(s_{\tau}, a_{\tau}\right)}{\underline{\pi}\left(s_{\tau}, a_{\tau}\right)}\right) \frac{r_{t}}{1-\gamma} . $$ \end{definition} Notice that our inner loop solves the bonus-added MDP problem, so $r_t$ is replaced by $G_t$ in the following formula.
$$ G_t=\left\{ \begin{aligned} &\frac{1}{1-\gamma}[r_t+b(s_t,a_t)], \ \text{if} \ t \geq 2 \\ &\frac{1}{1-\gamma}[r_t], \ \text{if}\ t=1 \end{aligned} \right. $$ \begin{definition} \label{E.2} We define $B=\frac{3}{1-\gamma}$, $G_{\text{max}}=\frac{2+B}{1-\gamma}$, and $\delta_1=\gamma^{\alpha}$. \end{definition} \begin{remark} From the definition of the bonus function, we know that $0\leq b(\cdot,\cdot)\leq B$. In addition, the random return from a single Monte Carlo trajectory $\frac{G_t}{1-\gamma}$ has a deterministic upper bound $G_{\text{max}}$. For a concise bound, we can assume $2G_{\text{max}}\leq W$ in the following proof. \end{remark} \begin{lemma} (Stability of Importance Sampling Estimator) \label{E.3} When $$ k-\underline{k} \leq \kappa \stackrel{\text { def }}{=} \frac{(1-\gamma) \ln 2}{2 \ln \left(1 / \delta_1\right) \eta(B+W)}, $$ then with probability $1-\delta_1$, $$ \left(\Pi_{\tau=2}^{t} \frac{\pi\left(s_{\tau}, a_{\tau}\right)}{\underline{\pi}\left(s_{\tau}, a_{\tau}\right)}\right) \frac{G_{t}}{1-\gamma}\leq 2 G_{\text{max}} $$ \end{lemma} \begin{remark} \label{kappa} This lemma indicates that if we want a stable importance sampling estimator during the policy evaluation process, $\kappa$ should be $O(\sqrt{K})$ \ (since $\eta$ has an order of $O(\frac{1}{\sqrt{K}})$ by Lemma \ref{B.7}). \end{remark} \begin{proof} This lemma combines the results of Appendix G in \citep{zanette2021cautiously}. First, we derive the form of the policy on the known set. In fact, we have the following conclusion.
$$ \forall(s, a), \quad \pi_k(a \mid s)=\pi_{\underline{k}}(a \mid s) \times \frac{e^{c(s, a)}}{\sum_{a^{\prime}} \pi_{\underline{k}}\left(a^{\prime} \mid s\right) e^{c\left(s, a^{\prime}\right)}} $$ where $$ c(s,a)=\eta\cdot\sum_{i=\underline{k}}^{k-1}[b(s,a)+f_i(s,a)] $$ We assume $k>\underline{k}$; then, according to the algorithm, $$ \begin{aligned} \pi_k(\cdot|s)&\propto\pi_{k-1}(\cdot|s)e^{\eta[f_{k-1}(s,\cdot)+b(s,\cdot)]}\\ &\propto \pi_{\underline{k}}(\cdot|s)e^{\eta\sum_{i=\underline{k}}^{k-1}[f_i(s,\cdot)+b(s,\cdot)]} \end{aligned} $$ With $c(s,a)$ as defined above, the desired result is obtained.\\ To simplify the notation, we use $c$ to denote $\sup_{(s,a)}|c(s,a)|$. Then the following chain of inequalities is true. $$ e^{-2 c} \leq \frac{e^{-c}}{\sum_{a^{\prime}} \underline{\pi}\left(a^{\prime} \mid s\right) e^{c}} \leq \frac{\pi(a \mid s)}{\underline{\pi}(a \mid s)}\leq \frac{e^{c(s, a)}}{\sum_{a^{\prime}} \underline{\pi}\left(a^{\prime} \mid s\right) e^{c\left(s, a^{\prime}\right)}}\leq \frac{e^{c}}{\sum_{a^{\prime}} \underline{\pi}\left(a^{\prime} \mid s\right) e^{-c}} =e^{2c} $$ So we can bound the policy ratio: $$ e^{-2 c} \leq \sup _{(s, a)} \frac{\pi(a \mid s)}{\underline{\pi}(a \mid s)} \leq e^{2 c} $$ Notice that $$ \eta\cdot\kappa\cdot(B+W)\geq \sup_{(s,a)}|c(s,a)| $$ Then we have $$ c=\sup_{(s,a)}|c(s,a)|\leq \frac{(1-\gamma)\ln2}{2\ln(1/\delta_1)} $$ Remember that $t$ is small with high probability: $$ \begin{aligned} \mathbf{P}(t>\alpha) &=\sum_{t=\alpha+1}^{\infty} \gamma^{t-1}(1-\gamma) \\ &=\gamma^{\alpha} \sum_{t=0}^{\infty} \gamma^{t}(1-\gamma) \\ &=\gamma^{\alpha} \stackrel{\text { def }}{=} \delta_1 .
\end{aligned} $$ This implies $$ \alpha=\frac{\ln \delta_1 }{\ln \gamma}=\frac{\ln 1 / \delta_1 }{\ln 1 / \gamma} \leq \frac{\ln 1 / \delta_1 }{1-\gamma} $$ In the complement of the above event: $$ \left(\sup _{(s, a)} \frac{\pi(a \mid s)}{\underline{\pi}(a \mid s)}\right)^{t-1} \leq e^{2(\alpha-1) c}\leq e^{\frac{(\alpha-1)(1-\gamma)\ln2}{\ln(1/\delta_1)}}\leq 2 . $$ Then, with probability at least $1-\delta_1$, the importance sampling ratio is upper bounded: $$ \Pi_{\tau=2}^{t} \frac{\pi\left(s_{\tau}, a_{\tau}\right)}{\underline{\pi}\left(s_{\tau}, a_{\tau}\right)} \leq\left(\sup _{(s, a)} \frac{\pi(a \mid s)}{\underline{\pi}(a \mid s)}\right)^{t-1} \leq 2 $$ Moreover, $\frac{G_{t}}{1-\gamma}$ is bounded by $G_{\text{max}}$ in absolute value, which yields the result. \end{proof} \begin{lemma} \label{E.4} (Concentration of Width Function). If we set \begin{equation} \label{eq 34} \begin{aligned} \frac{1}{100}\epsilon=\frac{3}{2}C_1N\cdot\epsilon_{\text{stat}}+20NW\epsilon_1+\frac{1}{2}C_2\cdot\ln\left(\frac{N\mathcal{N}(\Delta\mathcal{F},2\epsilon_1)}{\delta}\right) \end{aligned} \end{equation} where $\epsilon_1$ denotes the function cover radius, $C_1$ and $C_2$ are constants defined in the proof below, and $\epsilon_{\text{stat}}$ will be determined in Lemma \ref{E.5}, then with probability at least $1-\frac{1}{2}\delta$, for all $n\in [N]$, \begin{equation} \begin{aligned} ||\Delta f_k||_{\widehat{\mathcal{Z}}^n}^2\leq \epsilon \end{aligned} \end{equation} \end{lemma} \begin{proof} Conditioned on Proposition \ref{C.6}, it suffices to prove \begin{equation} \begin{aligned} ||\Delta f_k||_{\mathcal{Z}^n}^2\leq 100\epsilon \end{aligned} \end{equation} Let $\mathcal{C}\left(\Delta \mathcal{F}, 2 \epsilon_{1}\right)$ be a cover set of $\Delta \mathcal{F}$. Then for every $\Delta f \in \Delta \mathcal{F}$, there exists a $\Delta g \in \mathcal{C}\left(\Delta \mathcal{F}, 2 \epsilon_{1}\right)$ such that $\|\Delta f-\Delta g\|_{\infty} \leq 2 \epsilon_{1}$.
For a fixed $\Delta g \in \mathcal{C}\left(\Delta \mathcal{F}, 2 \epsilon_{1}\right)$, we define $n$ random variables: $$ X_i=\frac{1}{8W^2}\left(\left(\Delta g\left(s_i,a_i\right)\right)^{2}-\mathbb{E}_{(s, a) \sim d^{\pi^i}}\left[(\Delta g(s, a))^{2}\right]\right), i \in[n] $$ Notice that for all $i\in [n]$, $X_i\leq 1$, $\mathbb{E}_i[X_i]=0$, and $$ \mathbb{E}_i[X_i^2]\leq \mathbb{E}_i[|X_i|]\leq \mathbb{E}_i\left[\frac{\left(\Delta g(s_i,a_i)\right)^2}{4W^2}\right]=\frac{1}{4W^2}\mathbb{E}_{(s,a)\sim d^{\pi^i}}\left(\Delta g(s,a)\right)^2 $$ Then, by using Lemma \ref{D.3} (Bernstein for Martingales), we have the following inequality: with probability at least $1-\delta_2$, $$ \sum_{i=1}^n X_i\leq c_1\times \sqrt{\frac{\ln \frac{1}{\delta_2}}{4W^2}\sum_{i=1}^n\mathbb{E}_{(s,a)\sim d^{\pi^i}}\left(\Delta g(s,a)\right)^2}+c_2\times \ln\frac{1}{\delta_2} $$ which means $$ \frac{1}{n}\sum_{i=1}^n\left[\left(\Delta g(s_i,a_i)\right)^2-\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta g(s, a))^{2}\right]\leq c\times\left(\sqrt{\frac{\ln \frac{1}{\delta_2}}{n^2}\sum_{i=1}^n\mathbb{E}_{(s,a)\sim d^{\pi^i}}\left(\Delta g(s,a)\right)^2}+\frac{1}{n}\ln\frac{1}{\delta_2}\right) $$ We now prove that if $\lambda=C\cdot\ln\left(\frac{1}{\delta_2}\right)$, where $C$ is a sufficiently large constant, then $$ c\times\left(\sqrt{\frac{\ln \frac{1}{\delta_2}}{n^2}\sum_{i=1}^n\mathbb{E}_{(s,a)\sim d^{\pi^i}}\left(\Delta g(s,a)\right)^2}+\frac{1}{n}\ln\frac{1}{\delta_2}\right)\leq \frac{1}{2}\left(\frac{1}{n}\left(\sum_{i=1}^n\mathbb{E}_{(s,a)\sim d^{\pi^i}}\left(\Delta g(s,a)\right)^2\right)+\frac{\lambda}{n}\right) $$ To simplify the notation, we define $A=\sum_{i=1}^n\mathbb{E}_{(s,a)\sim d^{\pi^i}}\left(\Delta g(s,a)\right)^2$.
\paragraph{Case 1: $A\leq \lambda$} By the choice of $\lambda$, there exist sufficiently small constants $c',c''$ such that $$ \frac{1}{n}\ln\left(\frac{1}{\delta_2}\right)\leq c'\cdot\left(\frac{\lambda}{n}\right) $$ $$ \sqrt{\frac{\lambda}{n^2}\ln\left(\frac{1}{\delta_2}\right)}\leq c''\cdot\left(\frac{\lambda}{n}\right) $$ Then $$ \textbf{LHS}\leq c\times\left(\sqrt{\frac{\lambda}{n^2}\ln\left(\frac{1}{\delta_2}\right)}+\frac{1}{n}\ln\left(\frac{1}{\delta_2}\right)\right)\leq c\times (c'+c'')\left(\frac{\lambda}{n}\right)\leq \frac{1}{2}\left(\frac{\lambda}{n}+\frac{A}{n}\right)=\textbf{RHS} $$ \paragraph{Case 2: $A\geq \lambda$} There also exist sufficiently small constants $c',c''$ such that $$ \frac{1}{n}\ln\left(\frac{1}{\delta_2}\right)\leq c'\left(\frac{A}{n}\right) $$ $$ \sqrt{\frac{A}{n^2}\ln\left(\frac{1}{\delta_2}\right)}\leq c''\cdot\left(\frac{A}{n}\right) $$ Then $$ \textbf{LHS}\leq c\times\left(\sqrt{\frac{A}{n^2}\ln\left(\frac{1}{\delta_2}\right)}+\frac{1}{n}\ln\left(\frac{1}{\delta_2}\right)\right)\leq c\times (c'+c'')\left(\frac{A}{n}\right)\leq \frac{1}{2}\left(\frac{\lambda}{n}+\frac{A}{n}\right)=\textbf{RHS} $$ After taking the union bound over the function cover $\mathcal{C}(\Delta\mathcal{F},2\epsilon_1)$, we have the following result: with probability at least $1-N\mathcal{N}(\Delta\mathcal{F},2\epsilon_1)\delta_2\overset{def}{=}1-\frac{1}{8}\delta$, by setting $\lambda=C\cdot\ln\left(\frac{N\mathcal{N}(\Delta\mathcal{F},2\epsilon_1)}{\delta}\right)$, we have $$ \forall n, \forall \Delta g \in \mathcal{C}(\Delta\mathcal{F},2\epsilon_1),\sum_{i=1}^n\left[\left(\Delta g(s_i,a_i)\right)^2-\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta g(s, a))^{2}\right]\leq \frac{1}{2}\left(\sum_{i=1}^n\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta g(s, a))^{2}+\lambda \right) $$ Next, we extend the bound to an arbitrary function $\Delta f \in \Delta \mathcal{F}$.
Since there exists a $\Delta g \in \mathcal{C}\left(\Delta \mathcal{F}, 2 \epsilon_{1}\right)$ such that $\left\|\Delta f-\Delta g\right\|_{\infty} \leq 2 \epsilon_{1}$, we have that for all $i \in[n]$, $$ \begin{aligned} &\left|\left(\Delta f\left(s_i, a_i\right)\right)^{2}-\left(\Delta g\left(s_i, a_i\right)\right)^{2}\right| \\ &=\left|\Delta f\left(s_i, a_i\right)-\Delta g\left(s_i, a_i\right)\right| \cdot\left|\Delta f\left(s_i, a_i\right)+\Delta g\left(s_i, a_i\right)\right| \leq 8 W \epsilon_{1} \end{aligned} $$ and $$ \begin{aligned} &\left|\mathbb{E}_{(s, a) \sim d^{{\pi}^i}}\left[\left(\Delta f(s, a)\right)^{2}\right]-\mathbb{E}_{(s, a) \sim d^{{\pi}^i}}\left[(\Delta g(s, a))^{2}\right]\right| \\ &\leq \mathbb{E}_{(s, a) \sim d^{{\pi}^i}}\left|\Delta f(s, a)-\Delta g(s, a)\right| \cdot\left|\Delta f(s, a)+\Delta g(s, a)\right| \leq 8 W \epsilon_{1} \end{aligned} $$ Therefore, $$ \begin{aligned} &\sum_{i=1}^n\left[\left(\Delta f(s_i,a_i)\right)^2-\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta f(s, a))^{2}\right]\\ \leq & \left|\sum_{i=1}^n\left[\left(\Delta f(s_i, a_i)\right)^{2}-\left(\Delta g(s_i, a_i)\right)^{2}\right]\right|+\left|\sum_{i=1}^n\left[\left(\Delta g(s_i, a_i)\right)^{2}-\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta g(s, a))^{2}\right]\right| \\ + & \left|\sum_{i=1}^n\left[\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta f(s, a))^{2}-\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta g(s, a))^{2}\right]\right| \\ \leq & \frac{1}{2}\left(\sum_{i=1}^n\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta g(s, a))^{2}+\lambda\right)+16nW\epsilon_1 \\ \leq & \frac{1}{2}\left(\sum_{i=1}^n\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta f(s, a))^{2}+8nW\epsilon_1+\lambda\right)+16nW\epsilon_1 \end{aligned} $$ It follows that $$ \forall n \in [N], \forall \Delta f \in \Delta \mathcal{F}, ||\Delta f||_{\mathcal{Z}^n}^2\leq \frac{3}{2}\sum_{i=1}^n\mathbb{E}_{(s, a) \sim d^{\pi^i}}(\Delta f(s, a))^{2}+\frac{1}{2}\lambda+20nW\epsilon_1 $$ Hence, with probability at least
$1-\frac{1}{8}\delta$, $$ \left\|\Delta f_{k}\right\|_{\mathcal{Z}^{n}}^{2} \leq \frac{3}{2}n \cdot \mathbb{E}_{\rho_{\mathrm{cov}}^{n}}\left[\left(\Delta f_{k}\right)^{2}\right]+20nW\epsilon_1+\frac{1}{2}\lambda, \ \ \forall n \in [N] $$ By Assumption 6, $$ \begin{aligned} \mathbb{E}_{\rho_{\mathrm{cov}}^{n}}\left[\left(\Delta f_{k}\right)^{2}\right] &=\mathbb{E}_{(s, a) \sim \rho_{\mathrm{cov}}^{n}}\left[\left(f_{k}^{*}(s,a)-f_{k}(s,a)\right)^{2}\right] \\ &\leq C \cdot\left(L\left(f_{k} ; \rho_{\mathrm{cov}}^{n}, Q_{b^{n}}^{k}-b^{n}\right)-L\left(f_{k}^{*} ; \rho_{\mathrm{cov}}^{n}, Q_{b^{n}}^{k}-b^{n}\right)\right)\\ & \leq C \cdot \epsilon_{\mathrm{stat}}\ \ \text{(by Lemma \ref{E.5})} \end{aligned} $$ By the choice of $\epsilon$, $\left\|\Delta f_{k}\right\|_{\mathcal{Z}^{n}}^{2} \leq 100 \epsilon,\ \forall n \in [N]$ with probability at least $1-\frac{1}{4}\delta$. Combining the above result with Lemma \ref{C.7} finishes the proof of Lemma \ref{E.4}. Next, we give an explicit form of $\epsilon_{\text {stat }}$ in the following lemma. \end{proof} \begin{lemma} \label{E.5} (Concentration of Statistical Error). Following the same notation as in Lemma \ref{E.4}, it holds with probability at least $1-\frac{1}{8}\delta$ that $$ L\left(f_{k} ; \rho_{c o v}^{n}, Q_{b^{n}}^{k}-b^{n}\right)-L\left(f_{k}^{*} ; \rho_{c o v}^{n}, Q_{b^{n}}^{k}-b^{n}\right) \leq \frac{500 C \cdot W^{4} \cdot \log \left(\frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}\right)}{M}+13 W^{2} \cdot \epsilon_{2}, $$ where $C, \epsilon_{0}$ are defined in Lemma \ref{B.3}, and $\epsilon_{2}>0$ denotes the function cover radius, which will be determined later. \end{lemma} \begin{proof} This proof builds on \citet{feng2021provably}'s Lemma C.4, but deals with the concentration of the importance sampling estimator.
First note that in the loss function, the expectation has a nested structure: the outer expectation is taken over $(s, a) \sim \rho_{\text {cov }}^{n}$ and the inner conditional expectation is $Q_{b^{n}}^{k}(s, a)=$ $\mathbb{E}^{\pi_{k}}\left[\sum_{h=0}^{\infty} \gamma^{h}\left(r\left(s_{h}, a_{h}\right)+b^{n}\left(s_{h}, a_{h}\right)\right) \mid\left(s_{0}, a_{0}\right)=(s, a)\right]$ given a sample of $(s, a) \sim \rho_{\text {cov }}^{n}$. To simplify the notation, we use $x$ to denote $(s, a)$, $y \mid x$ for an unbiased sample of $Q_{b^{n}}^{k}(s, a)-b^{n}(s, a)$ obtained through importance sampling, and $\nu$ for $\rho_{\text {cov }}^{n}$, the marginal distribution over $x$. Then the loss function can be recast as $$ \begin{aligned} &\mathbb{E}_{x \sim \nu}\left[\left(f_{k}(x)-\mathbb{E}[y \mid x]\right)^{2}\right]:=L\left(f_{k} ; \rho_{\text {cov }}^{n}, Q_{b^{n}}^{k}-b^{n}\right) \\ &\mathbb{E}_{x \sim \nu}\left[\left(f_{k}^{*}(x)-\mathbb{E}[y \mid x]\right)^{2}\right]:=L\left(f_{k}^{*} ; \rho_{\text {cov }}^{n}, Q_{b^{n}}^{k}-b^{n}\right) \end{aligned} $$ In particular, $f_{k}$ can be rewritten as $$ f_{k} \in \underset{f \in \mathcal{F}}{\operatorname{argmin}} \sum_{i=1}^{M}\left(f\left(x_{i}\right)-y_{i}\right)^{2}, $$ where $\left(x_{i}, y_{i}\right)$ are drawn i.i.d.: $x_{i}$ is generated following the marginal distribution $\nu$ and $y_{i}$ is generated conditioned on $x_{i}$. \\ Note that $y_i$ is collected by the importance sampling estimator, so it does not necessarily come from Monte Carlo sampling. However, at the most recent time the agent interacted with the environment, the samples were drawn i.i.d., which guarantees the same property for the importance sampling process.
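To make the generation of a sample $y \mid x$ concrete, the following sketch produces one importance sampling sample in the spirit of Definition \ref{E.1}, with $r_t$ replaced by the bonus-added return $G_t$. Everything here is illustrative: the single-state environment, the policies, and the zero bonus are toy stand-ins of our own naming, not the paper's experimental setup.

```python
import random

def sample_is_estimate(pi, pi_b, step, reward, bonus, s0, a0, gamma, rng):
    """One importance-sampling sample of the bonus-added value at (s0, a0):
    roll the behavior policy pi_b for a Geometric(1-gamma) horizon t, then
    reweight G_t/(1-gamma) by the product of policy ratios for tau >= 2."""
    # sample the horizon: P(t = tau) = gamma^(tau-1) * (1 - gamma)
    t = 1
    while rng.random() < gamma:
        t += 1
    s, a, ratio = s0, a0, 1.0
    for _ in range(t - 1):                     # steps tau = 2, ..., t
        s = step(s, a, rng)
        actions = list(pi_b[s])
        a = rng.choices(actions, weights=[pi_b[s][x] for x in actions])[0]
        ratio *= pi[s][a] / pi_b[s][a]         # importance weight
    # G_t includes the bonus only when t >= 2, matching the display above
    g_t = (reward(s, a) + (bonus(s, a) if t >= 2 else 0.0)) / (1 - gamma)
    return ratio * g_t / (1 - gamma)

# Toy single-state MDP where behavior and target policies coincide, so
# every sample equals G_t/(1-gamma), which is exactly 4.0 when gamma = 0.5.
rng = random.Random(0)
pi = pi_b = {'s': {'L': 0.5, 'R': 0.5}}
est = sample_is_estimate(pi, pi_b, lambda s, a, r: 's',
                         lambda s, a: 1.0, lambda s, a: 0.0,
                         's', 'L', 0.5, rng)
print(est)  # 4.0
```

In this degenerate example the policy ratios are all $1$ and the reward is constant, so the estimate is deterministic; in general, averaging $M$ such samples gives the $(x_i, y_i)$ pairs used in the regression above.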
For $f_{k}$ (and similarly for $f_{k}^{*}$), we have: $$ \begin{aligned} & \mathbb{E}_{x, y}\left[\left(f_{k}(x)-y\right)^{2}\right] \\ =& \mathbb{E}_{x, y}\left[\left(f_{k}(x)-\mathbb{E}[y \mid x]\right)^{2}\right]+\mathbb{E}_{x, y}\left[(\mathbb{E}[y \mid x]-y)^{2}\right]+2 \mathbb{E}_{x, y}\left[\left(f_{k}(x)-\mathbb{E}[y \mid x]\right)(\mathbb{E}[y \mid x]-y)\right] \\ =& \mathbb{E}_{x, y}\left[\left(f_{k}(x)-\mathbb{E}[y \mid x]\right)^{2}\right]+\mathbb{E}_{x, y}\left[(\mathbb{E}[y \mid x]-y)^{2}\right], \end{aligned} $$ where the last step follows from the cross term being zero. Thus we can rewrite the generalization error as $$ \begin{aligned} & \mathbb{E}_{x}\left[\left(f_{k}(x)-\mathbb{E}[y \mid x]\right)^{2}\right]-\mathbb{E}_{x}\left[\left(f_{k}^{*}(x)-\mathbb{E}[y \mid x]\right)^{2}\right] \\ =& \mathbb{E}_{x, y}\left(f_{k}(x)-y\right)^{2}-\mathbb{E}_{x, y}\left(f_{k}^{*}(x)-y\right)^{2} . \end{aligned} $$ Next, we establish a concentration bound on $f_{k}$. Since $f_{k}$ depends on the training set $\left\{\left(x_{i}, y_{i}\right)\right\}_{i=1}^{M}$, as discussed in Lemma \ref{B.6}, we use a function cover on $\mathcal{F}$ for a uniform convergence argument. We denote by $\mathcal{F}_{k}^{n}$ the $\sigma$-algebra generated by the randomness before epoch $n$, iteration $k$. Recall that $f_{k}^{*} \in \operatorname{argmin}_{f \in \mathcal{F}} L\left(f ; \rho_{\mathrm{cov}}^{n}, Q_{b^{n}}^{k}-b^{n}\right)$. Conditioning on $\mathcal{F}_{k}^{n}$, the quantities $\rho_{\mathrm{cov}}^{n}$, $Q_{b^{n}}^{k}-b^{n}$, and $f_{k}^{*}$ are all deterministic. For any $f \in \mathcal{F}$, we define $$ Z_{i}(f):=\left(f\left(x_{i}\right)-y_{i}\right)^{2}-\left(f_{k}^{*}\left(x_{i}\right)-y_{i}\right)^{2}, \quad i \in[M] $$ Then $Z_{1}(f), \ldots, Z_{M}(f)$ are i.i.d. random variables; notice that $y_i$ is drawn from the importance sampling estimator. From Lemma \ref{E.3}, we know that with probability at least $1-M\delta_1$, $y_i\leq 2 G_{\text{max}}\leq W$ for all $i\in [M]$.
Conditioned on this event, we have $$ \begin{aligned} \mathbb{V}\left[Z_{i}(f) \mid \mathcal{F}_{k}^{n}\right] & \leq \mathbb{E}\left[Z_{i}(f)^{2} \mid \mathcal{F}_{k}^{n}\right] \\ &=\mathbb{E}\left[\left(\left(f\left(x_{i}\right)-y_{i}\right)^{2}-\left(f_{k}^{*}\left(x_{i}\right)-y_{i}\right)^{2}\right)^{2} \mid \mathcal{F}_{k}^{n}\right] \\ &=\mathbb{E}\left[\left(f\left(x_{i}\right)-f_{k}^{*}\left(x_{i}\right)\right)^{2} \cdot\left(f\left(x_{i}\right)+f_{k}^{*}\left(x_{i}\right)-2 y_{i}\right)^{2} \mid \mathcal{F}_{k}^{n}\right] \\ & \leq 36 W^{4} \cdot \mathbb{E}\left[\left(f\left(x_{i}\right)-f_{k}^{*}\left(x_{i}\right)\right)^{2} \mid \mathcal{F}_{k}^{n}\right] \\ & \leq 36 W^{4} \cdot\left(C \cdot \mathbb{E}\left[Z_{i}(f) \mid \mathcal{F}_{k}^{n}\right]\right) \end{aligned} $$ where the last inequality is by Lemma \ref{B.3}. Next, we apply Bernstein's inequality on the function cover $\mathcal{C}\left(\mathcal{F}, \epsilon_{2}\right)$ and take the union bound. Specifically, with probability at least $1-\delta_3$, for all $g \in \mathcal{C}\left(\mathcal{F}, \epsilon_{2}\right)$, $$ \begin{aligned} & \mathbb{E}\left[Z_{i}(g) \mid \mathcal{F}_{k}^{n}\right]-\frac{1}{M} \sum_{i=1}^{M} Z_{i}(g) \\ & \leq \sqrt{\frac{2 \mathbb{V}\left[Z_{i}(g) \mid \mathcal{F}_{k}^{n}\right] \cdot \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M}}+\frac{12 W^{4} \cdot \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M} \\ & \leq \sqrt{\frac{72 W^{4}\left(C \cdot \mathbb{E}\left[Z_{i}(g) \mid \mathcal{F}_{k}^{n}\right]\right) \cdot \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M}}+\frac{12 W^{4} \cdot \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M} .
\end{aligned} $$ For $f_{k}$, there exists $g \in \mathcal{C}\left(\mathcal{F}, \epsilon_{2}\right)$ such that $\left\|f_{k}-g\right\|_{\infty} \leq \epsilon_{2}$ and $$ \begin{aligned} \left|Z_{i}\left(f_{k}\right)-Z_{i}(g)\right| &=\left|\left(f_{k}\left(x_{i}\right)-y_{i}\right)^{2}-\left(g\left(x_{i}\right)-y_{i}\right)^{2}\right| \\ &=\left|f_{k}\left(x_{i}\right)-g\left(x_{i}\right)\right| \cdot\left|f_{k}\left(x_{i}\right)+g\left(x_{i}\right)-2 y_{i}\right| \leq 6 W^{2} \epsilon_{2} . \end{aligned} $$ Therefore, with probability at least $1-\delta_3$, $$ \begin{aligned} & \mathbb{E}\left[Z_{i}\left(f_{k}\right) \mid \mathcal{F}_{k}^{n}\right]-\frac{1}{M} \sum_{i=1}^{M} Z_{i}\left(f_{k}\right) \\ &\leq \mathbb{E}\left[Z_{i}(g) \mid \mathcal{F}_{k}^{n}\right]-\frac{1}{M} \sum_{i=1}^{M} Z_{i}(g)+12 W^{2} \epsilon_{2} \\ &\leq \sqrt{\frac{72 W^{4}\left(C \cdot \mathbb{E}\left[Z_{i}(g) \mid \mathcal{F}_{k}^{n}\right]\right) \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M}}+\frac{12 W^{4} \log \frac{\mathcal{N}\left(\mathcal{F},\epsilon_{2}\right)}{\delta_3}}{M}+12 W^{2} \epsilon_{2} \\ &\leq \sqrt{\frac{72 W^{4}\left(C \cdot \mathbb{E}\left[Z_{i}(f_k) \mid \mathcal{F}_{k}^{n}\right]+6CW^2\epsilon_2\right) \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M}}+\frac{12 W^{4} \log \frac{\mathcal{N}\left(\mathcal{F},\epsilon_{2}\right)}{\delta_3}}{M}+12 W^{2} \epsilon_{2}. \end{aligned} $$ Since $f_{k}$ is an empirical minimizer, we have $\frac{1}{M} \sum_{i=1}^{M} Z_{i}\left(f_{k}\right) \leq 0$.
Thus, $\mathbb{E}\left[Z_{i}\left(f_{k}\right) \mid \mathcal{F}_{k}^{n}\right] \leq \sqrt{\frac{72 W^{4}\left(C \cdot \mathbb{E}\left[Z_{i}\left(f_{k}\right) \mid \mathcal{F}_{k}^{n}\right]+6 C W^{2} \epsilon_{2}\right) \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M}}+\frac{12 W^{4} \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M}+12 W^{2} \epsilon_{2} .$ Solving the above inequality with the quadratic formula and using $\sqrt{a+b} \leq \sqrt{a}+\sqrt{b}$ and $\sqrt{a b} \leq a / 2+b / 2$ for $a>0, b>0$, we obtain $$ \mathbb{E}\left[Z_{i}\left(f_{k}\right) \mid \mathcal{F}_{k}^{n}\right] \leq \frac{500 C \cdot W^{4} \cdot \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M}+13 W^{2} \cdot \epsilon_{2} $$ Since the right-hand side is a constant, taking another expectation gives $$ \mathbb{E}\left[Z_{i}\left(f_{k}\right)\right] \leq \frac{500 C \cdot W^{4} \cdot \log \frac{\mathcal{N}\left(\mathcal{F}, \epsilon_{2}\right)}{\delta_3}}{M}+13 W^{2} \cdot \epsilon_{2}. $$ Notice that $\mathbb{E}\left[Z_{i}\left(f_{k}\right)\right]=L\left(f_{k} ; \rho_{\mathrm{cov}}^{n}, Q_{b^{n}}^{k}-b^{n}\right)-L\left(f_{k}^{*} ; \rho_{\mathrm{cov}}^{n}, Q_{b^{n}}^{k}-b^{n}\right)$. Finally, we choose $\delta_1$ and $\delta_3$ such that $(1-M\delta_1)(1-\delta_3)\geq 1-\frac{1}{8}\delta$, so the desired result is obtained. \end{proof} \begin{lemma} \label{E.6} (One-sided error).
With probability at least $1-\frac{\delta}{2}$, it holds that \begin{equation} \begin{aligned} \forall n\in [N],\ \forall k\in\{0,\cdots,K-1\},\ \forall (s,a)\in\mathcal{K}^n: \ 0\leq Q_{b^n}^{k,*}(s,a)-\widehat{Q}_{b^n}^{k}(s,a)\leq2b_{\omega}^n(s,a) \end{aligned} \end{equation} \end{lemma} \begin{proof} When $(s,a)\in \mathcal{K}^n$, $$ \widehat{Q}_{b^n}^{k}(s,a)=f_k(s,a)+b_{\omega}^n(s,a) $$ $$ Q_{b^n}^{k,*}(s,a)=f_k^*(s,a)+b^n(s,a)=f_k^*(s,a)+2b_{\omega}^n(s,a) $$ Then, $$ |Q_{b^n}^{k,*}(s,a)-\widehat{Q}_{b^n}^{k}(s,a)-b_{\omega}^n(s,a)|=|f_k^*(s,a)-f_k(s,a)|=|\Delta f_k(s,a)| $$ According to Lemma \ref{E.4}, with probability at least $1-\frac{1}{2} \delta$, $||\Delta f_k||_{\widehat{\mathcal{Z}}^n}^2\leq \epsilon,\ \forall n \in [N]$.\\ Using the definition of $b_{\omega}^n(s,a)$, we have $$ |\Delta f_k(s,a)|\leq \omega(\widehat{\mathcal{F}}^n,s,a)\leq b_{\omega}^n(s,a)\ (\text{since } \beta<1) $$ This concludes the proof of Lemma \ref{E.6}. \end{proof} \section{Limitations of Previous Implementations} \label{App F} Note that we do not compare our method directly with the implementations in \citep{agarwal2020pc, feng2021provably}, as we discovered some limitations in those implementations. We present our insights in this section and provide an empirical evaluation of the quality of our implementation and previous ones. Observation normalization is crucial for on-policy algorithms, but it is missing from those implementations. For the MountainCar environment, we find that the difficulty stems not from the exploration problem, but from the ill-shaped observations. In their experiments, \textbf{PPO}-based exploration algorithms take up to 10k episodes to learn a near-optimal policy in the MountainCar environment. However, with running mean-std observation normalization, PPO-based algorithms learn the task within a few episodes.
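The running mean-std observation normalization referred to above can be sketched as follows. This is a minimal illustration in the spirit of a RunningMeanStd-style wrapper, assuming Welford's online update; the clipping threshold and the epsilon are illustrative choices of ours, not values taken from the cited implementations.

```python
import math

class RunningNormalizer:
    """Track a running mean and variance of scalar observations and
    return a clipped z-score; vector observations would apply this
    per dimension."""

    def __init__(self, eps=1e-8, clip=10.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.eps, self.clip = eps, clip

    def update(self, x):
        # Welford's online update of the mean and unnormalized variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        var = self.m2 / max(self.n, 1)
        z = (x - self.mean) / math.sqrt(var + self.eps)
        return max(-self.clip, min(self.clip, z))

norm = RunningNormalizer()
for obs in [-1.2, -0.5, 0.3, 0.6]:   # e.g. a stream of MountainCar positions
    norm.update(obs)
print(round(norm.mean, 3))  # -0.2
```

Feeding the policy `norm.normalize(obs)` instead of the raw observation is the whole intervention; the statistics keep updating as new observations stream in.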
Furthermore, both of their implementations strictly follow the theoretical algorithms and use a ``Roll-In'' mechanism in order to obtain the previous distribution $\rho$. Although a recent study \citep{li2022understanding} shows evidence for leveraging the ``Roll-In'' mechanism with off-policy algorithms in single-task RL problems, it remains unknown whether such a mechanism benefits on-policy algorithms in this setting. In our experiments, we find that \textbf{PC-PG} and \textbf{ENIAC} with ``Roll-In'' are no more efficient than their variants without it. We hypothesize that this is because the stochasticity of \textbf{PPO} and the environment is enough for the policy itself to recover the state distribution, so the additionally introduced ``Roll-In'' is not needed. Additionally, experiments from previous works \citep{agarwal2020pc, feng2021provably} compared exploration capability with \textbf{RND}, the current state-of-the-art algorithm on Montezuma's Revenge \citep{bellemare13arcade, burda2018exploration}. However, we find some discrepancy between their implementation and the original implementation of \textbf{RND}. Most importantly, their implementation does not use the next state $s'$ to determine the intrinsic reward of the state-action pair $(s, a)$. This is crucial because using $s'$ captures the novelty of the pair $(s, a)$, while using only $s$ loses the information about the action. To demonstrate our point, we tested the original implementations of \citep{agarwal2020pc, feng2021provably} on MountainCarContinuous with running observation normalization (applied to all running algorithms). With observation normalization, our implemented algorithms easily learn the task within 10000 steps, significantly better than the results reported in \citep{agarwal2020pc, feng2021provably}. Moreover, we also test their implementations with observation normalization.
The performance of their implementations does not improve much over the course of 10000 steps, which supports our point that the ``Roll-In'' mechanism may not improve efficiency. Our implementations \citep{stable-baselines3}, including \textbf{RND} and \textbf{PPO}, succeed in finding rewards in the environments, while implementations from previous works do not. The result is shown in Figure \ref{fig:2}. \section{Hyperparameters} We implemented our method based on the open source package \citep{stable-baselines3}, and the performance of \textbf{PPO} is obtained by running its built-in \textbf{PPO} implementation. Following \citep{burda2018exploration}, we use a smaller batch size (compared to 64 in standard MuJoCo environments \citep{schulman2017proximal}), specifically 32 in SparseHopper and 16 in SparseWalker2d and SparseHalfCheetah. The detailed hyperparameters are shown in Table \ref{hypertable}. \begin{table}[h!]\label{hypertable} \centering \begin{tabular}{||c c c||} \hline Hyperparameter & \textbf{Value} (\textbf{LPO}, \textbf{ENIAC}) & \textbf{Value} (\textbf{PPO})\\ [0.5ex] \hline\hline $N$ & 2048 & 2048 \\ $T$ & 2e6 & 2e6 \\ $\lambda$ & 0.95 & 0.95\\ $\gamma^{(int)}$ & 0.999 & -\\ $\gamma^{(ext)}$ & 0.99 & 0.99\\ $\alpha$ & 2 & -\\ $\beta$ & 1 & - \\ Learning rate & 1e-4 & 1e-4 \\ Batch size & 32, 16 & 32, 16 \\ Number of epochs per iteration & 10 & 10 \\ [1ex] \hline \end{tabular} \end{table} \end{document}
\begin{document} \title{Optimal Ancilla-free Pauli+V Circuits for Axial Rotations} \author{Andreas Blass} \affiliation{Mathematics, University of Michigan, Ann Arbor, MI, USA} \author{Alex Bocharov} \author{Yuri Gurevich} \affiliation{Microsoft Research, Redmond, WA, USA} \begin{abstract} Recently Neil Ross and Peter Selinger analyzed the problem of approximating $z$-rotations by means of single-qubit Clifford+T circuits. Their main contribution is a deterministic-search technique which allowed them to make approximating circuits shallower. We adapt the deterministic-search technique to the case of Pauli+V circuits and prove similar results. Because of the relative simplicity of the Pauli+V framework, we use much simpler geometric methods. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} For any universal basis $\mathcal B$ for single-qubit circuits, this natural problem arises: Given a single-qubit gate $G$ and real $\varepsilon>0$, construct an ancilla-free $\mathcal B$-circuit that approximates $G$ with precision $\varepsilon$. There is a well-known elementary reduction of this problem to its special case where $G$ is a $z$-rotation $R_z(\theta)= \begin{sm}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{sm}$ [Ref.~\onlinecite{QCQI}]. The reduction does not work for all bases but it works for Clifford+T, for Pauli+V and for most other universal single-qubit bases in the literature. We restrict attention to the special case. The matrix of the approximating $\mathcal B$-circuit has the form $\begin{sm}u&-v^*\\ v&u^*\end{sm}$. In many cases, including those of Clifford+T and Pauli+V, the circuit can be efficiently constructed from the matrix. The problem becomes just to find an appropriate pair $(u,v)$ of complex numbers. In article [Ref.~\onlinecite{Selinger}], Peter Selinger introduced a randomized-search technique for finding a desired pair $(u,v)$ in the Clifford+T framework. 
The result was an efficient probabilistic circuit-synthesis algorithm for the Clifford+T basis. ``Under a mild hypothesis on the distribution of primes, the expected running time of the probabilistic algorithm is polynomial in $\log(1/\varepsilon)$, and the depth of the resulting approximating circuit is $O(1/\varepsilon)$. If the gate to be approximated is a $z$-rotation, the T-count of the approximating circuit is $4\log_2(1/\varepsilon) + O(1)$.'' Let V be the set of 3 unitary operators \[ V_1 = (I + 2iX)/\sqrt{5},\quad V_2 = (I + 2iY)/\sqrt{5},\quad V_3 = (I + 2iZ)/\sqrt{5} \] where $I$ is the identity matrix and $X,Y,Z$ are the single-qubit Pauli matrices. The group generated by the three operators was introduced and studied in [Refs.~\onlinecite{LPSI,LPSII}]. The use of the V basis for quantum computing was initiated and studied in [Ref.~\onlinecite{HRC}]. Using the randomized-search technique, two of the present authors and Krysta Svore developed a probabilistic algorithm analogous to Selinger's for Pauli+V (and thus for Clifford+V) circuits [Ref.~\onlinecite{BGS}]. Under a conjecture on the distribution of primes, the expected running time of their algorithm is polynomial in $\log(1/\varepsilon)$, and the depth of the resulting circuit is $4\log_5(1/\varepsilon) + O(1)$. The conjecture is rather credible and purely number-theoretic. Later, also in the framework of Clifford+T, Ross and Selinger replaced the randomized-search technique with an even more efficient deterministic-search technique [Ref.~\onlinecite{RoSelinger}]. Under a hypothesis on the distribution of primes, the expected running time of the new circuit-synthesis algorithm is polynomial in $\log(1/\varepsilon)$, and the T-count of the approximating circuit is $3\log_2(1/\varepsilon) + O(\log\log(1/\varepsilon))$. If an oracle to factor integers is available (e.g. Shor's factoring algorithm), the approximating circuit has the minimal possible depth. 
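As a quick numerical sanity check of the three V operators above (a sketch, not part of the paper; it assumes numpy is available), one can verify that each $V_j$ is unitary with determinant $1$, and that the inverse is obtained by flipping the sign of the Pauli part:

```python
import numpy as np

# Identity and the single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# V_1, V_2, V_3 as defined in the text
V = [(I + 2j * P) / np.sqrt(5) for P in (X, Y, Z)]

for Vj in V:
    assert np.allclose(Vj @ Vj.conj().T, I)   # unitary: V V† = I
    assert np.isclose(np.linalg.det(Vj), 1)   # special: det V = 1

# V_1^{-1} = (I - 2iX)/sqrt(5), as stated for the inverses
assert np.allclose(np.linalg.inv(V[0]), (I - 2j * X) / np.sqrt(5))
```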
Ross and Selinger suggested that some other universal quantum bases may be amenable to a similarly optimal synthesis. We show in this paper that this suggestion is realized in the case of the Pauli+V basis. The deterministic-search technique simplifies quite substantially in the Pauli+V case. We present two circuit-synthesis algorithms using credible number-theoretic conjectures. The first synthesis algorithm runs in expected polynomial time with respect to $\log\frac1\varepsilon$ and constructs a Pauli+V circuit of depth at most $3\log_5\frac1\varepsilon + O(\log\log\frac1\varepsilon)$ that approximates the axial rotation $R_z(\theta)$ within $\varepsilon$. The second synthesis algorithm takes an additional input parameter, namely an error tolerance $\delta>0$. It runs in time polynomial in $\log\frac1\varepsilon$ and $\log\frac1\delta$, and it is in fact fast and practical. It returns Nil or constructs a Pauli+V circuit of depth at most $3\log_5\frac1\varepsilon + O(\log\log\frac1\varepsilon)$ that approximates the axial rotation $R_z(\theta)$ within $\varepsilon$. The probability of returning Nil is at most $\delta$. It may be important to find a similar solution in the framework of Fibonacci circuits described e.g. in [Ref.~\onlinecite{KBS}]. \noindent{\bf Related Work}\quad After our result was pre-announced in [Ref.~\onlinecite{BSR}], Neil Ross published another confirmation that the deterministic-search technique works for the Pauli+V basis [Ref.~\onlinecite{Ross}]. Our solution is simpler, conceptually and algorithmically. \section{Preliminaries} \label{sec:prelim} \subsection{Geometry}\label{sub:geometry} Consider the complex plane $\mathbb C$. The real and imaginary axes meet at point $0$ which is the origin of the coordinate system and will be denoted $O$. Every nonzero point $z\in\mathbb C$ can be viewed as a vector from the origin $O$ to point $z$. It will be clear from the context when a point is treated as a vector. 
If $u,v$ are distinct points of the plane then $[u,v]$ is the straight-line segment between $u$ and $v$, and $|u,v|$ is the length of $[u,v]$. If, in addition, points $u,v$ are on the unit circle around $O$ and are not the opposites of each other, let ${\rm Arc}(u,v)$ be the shorter arc of the unit circle between $u$ and $v$. Given a positive real $\varepsilon < 1$, consider a circular segment, or meniscus, $M_0 = \{u:\: |u|\le1,\ 1-\varepsilon^2 < \Re(u) \le 1\}$ of the unit disk, centered around point $1$. Given a real number $\theta$, rotate $M_0$ by the angle $-\theta/2$ around $O$; the resulting meniscus is centered around the point $e^{-i\theta/2}$ and will be denoted $M_\varepsilon(\theta)$ so that $M_0 = M_\varepsilon(0)$. Menisci play an important role in our approximation problem. Given a meniscus $M = M_\varepsilon(\theta)$, centered around the point $z = e^{-i\,\theta/2}$, let $z_1,z_2$ be the two corner points of the meniscus. Let $z_0 = (z_1+z_2)/2$ and let $z_3$ be the intersection point of the tangent lines to the unit circle at $z_1$ and $z_2$. The following terminology will be useful. \begin{itemize} \item The chord $[z_1,z_2]$ is the \emph{base} of $M$, and the vectors $z_1-z_2$ and $z_2 - z_1$ are the \emph{base vectors} of $M$. \item ${\rm Arc}(z_1,z_2)$ is the \emph{arc} of $M$. \item The vector $z - z_0$ is the \emph{handle} of $M$. Note that the handle uniquely defines the meniscus. \item The isosceles triangle formed by points $z_1, z_2, z_3$ is the \emph{enclosing triangle} of $M$. The base of $M$ is also the base of the triangle, and $[z_0,z_3]$ is the \emph{median} of the triangle. \end{itemize} \begin{lemma}\label{lem:arc} Let $M$ be a meniscus $M_\varepsilon(\theta)$ with base $b$ and handle $h$, and let $\mu$ and $s$ be the median and one of the two equal sides of the enclosing triangle of $M$. Then \begin{itemize} \item $|h| = \varepsilon^2$ and $|b|\approx 2\varepsilon\sqrt2$. \item ${\rm Arc}(M) \approx 2\varepsilon\sqrt2$.
\item $|\mu| \approx 2\varepsilon^2$. \item $|s| \approx \varepsilon\sqrt2$. \end{itemize} \end{lemma} The approximate equalities mean that higher powers of $\varepsilon$ are ignored. \begin{proof} Let the points $z,z_0,z_1,z_2,z_3$ be as above. The first claim follows from the definition of the meniscus. Let $x = {\rm Arc}(M)/2$. To prove the second claim, note that $\cos x = |O,z_0|/|O,z_2| = |O,z_0| = 1 - \varepsilon^2$. Since the Taylor series for $\cos x$ is $1 - x^2/2 + \dots$, we have $x \approx \varepsilon\sqrt2$ and ${\rm Arc}(M) \approx 2\varepsilon\sqrt2 = O(\varepsilon)$. Let $\delta = |z,z_3|$. We have $1-\varepsilon^2 = \cos x = |O,z_2|/|O,z_3| = 1/(1+\delta)$, so $1+\delta = 1/(1-\varepsilon^2) = 1 + \varepsilon^2 + \varepsilon^4 + \dots$ and $\delta\approx \varepsilon^2$. Since $\mu = [z_0,z_3]$, we have $|\mu| \approx 2\varepsilon^2$. By definition of $s$, we have $|s|^2 = |\mu|^2 + (|b|/2)^2 \approx (|b|/2)^2$, so $|s| \approx |b|/2$. \end{proof} Recall that the trace distance ${\rm TD}(U,V)$ between unitary operators $U,V$ (up to phase factors) of the Hilbert space $\mathbb C^2$ is $\sqrt{1-|{\rm Tr}(UV^\dagger)|/2}$. \begin{lemma} \label{lem:meniscus} Let $U$ be a unitary operator $\begin{sm}u&-v^*\\ v&u^*\end{sm}$ on $\mathbb C^2$, $R$ be a $z$-rotation $R_z(\theta) = \begin{sm}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{sm}$, and $M$ be the meniscus $M_\varepsilon(\theta)$. Then \[ {\rm TD}(U,R)<\varepsilon \iff u_M\in M \] where \begin{align*} u_M &= \begin{cases} u & \text{if } \Re(ue^{i\theta/2}) \ge0 \\ -u & \text{otherwise} \end{cases}\\ \end{align*} \end{lemma} \begin{proof} Note that ${\rm Tr}(UR^\dagger) = (ue^{i\theta/2}) + (ue^{i\theta/2})^* = 2\Re(ue^{i\theta/2})$. 
\begin{align*} {\rm TD}(U,R) < \varepsilon &\iff \sqrt{1-|{\rm Tr}(UR^\dagger)|/2} < \varepsilon \\ &\iff 1-|{\rm Tr}(UR^\dagger)|/2 < \varepsilon^2 \\ &\iff 2(1-\varepsilon^2) < |{\rm Tr}(UR^\dagger)| \\ &\iff 1 - \varepsilon^2 < |\Re(u e^{i\theta/2})|\\ &\iff 1 - \varepsilon^2 < \Re(u_M e^{i\theta/2})\\ &\iff 1 - \varepsilon^2 < \Re(u_M e^{i\theta/2}) \le 1 \\ &\iff u_M e^{i\theta/2} \in M_\varepsilon(0) \\ &\iff u_M \in M_\varepsilon(\theta) \end{align*} The equivalence that introduces the upper bound $\le1$ uses the fact that $|u|\le1$, which is true because $u$ occurs in a unitary matrix. \end{proof} \begin{corollary}\label{cor:meniscus} If $u\in M_\varepsilon(\theta)$ and there exists a complex number $v$ satisfying the norm equation $|u|^2 + |v|^2 = 1$, then the matrix $U = \begin{sm}u&-v^*\\ v&u^*\end{sm}$ is at trace distance $<\varepsilon$ from the $z$-rotation $R = R_z(\theta)$. \end{corollary} \begin{proof} Since $u\in M_\varepsilon(\theta)$, we have $ue^{i\theta/2}\in M_\varepsilon(0)$, $\Re(u e^{i\theta/2}) \ge0$, and $u_M=u$. Now use Lemma~\ref{lem:meniscus}. \end{proof} \subsection{Pauli+V}\label{sub:V} We use the same letter for a linear operator on $\mathbb C^2$ and its matrix in the standard basis. Unitary operators \[ V_1 = (I + 2iX)/\sqrt{5},\quad V_2 = (I + 2iY)/\sqrt{5},\quad V_3 = (I + 2iZ)/\sqrt{5} \] generate a group $V$ that is dense in the special unitary group ${\rm SU}(2)$ [Ref.~\onlinecite{LPSI}]. Here $I$ is the identity matrix, and $X,Y,Z$ are the single-qubit Pauli matrices. We have \[ V_1^{-1} = (I - 2iX)/\sqrt{5},\quad V_2^{-1} = (I - 2iY)/\sqrt{5},\quad V_3^{-1} = (I - 2iZ)/\sqrt{5}. \] The group $V$ is in fact freely generated by $V_1, V_2, V_3$. This fact appears without proof in [Ref.~\onlinecite{LPSI}]. For completeness, we prove it here; see Corollary~\ref{cor:free} below. In [Ref.~\onlinecite{BGS}], we explored a slightly larger group $W$ generated by the three operators $V_j$ and three Pauli operators $X, Y, Z$.
Because of $Y(I+2iX) = (I-2iX)Y$ and similar relations, every product of operators in $W$ easily reduces to a normal form \begin{equation}\label{normal} \begin{aligned} A_1 A_2 \cdots A_t B\quad \text{where}\quad& \sqrt5A_i\in \{I\pm2iX, I\pm2iY, I\pm2iZ\},\\ &\ A_1 A_2 \cdots A_t \text{ is reduced, and}\\ &\ B\in \{\pm I,\pm X, \pm Y, \pm Z\}; \end{aligned} \end{equation} in the process the number of V factors does not change but the number of Pauli factors becomes $\le1$. The product $A_1 A_2 \cdots A_t$ is reduced in the sense that no $A_{j+1}$ is the inverse of $A_j$. By Theorem 1 in [Ref.~\onlinecite{BGS}], an ${\rm SU}(2)$ operator is in $W$ if and only if it can be given in the form \begin{equation}\label{exact} \frac{1}{(\sqrt{5})^{t}} \left(\begin{matrix} u & -v^* \\ v & u^* \end{matrix}\right) \end{equation} where $u,v$ are Gaussian integers. The next theorem will relate the exponent $t$ in this formula to the number of factors in the normal form of an element of $W$. It will also provide similar information about the images in ${\rm SO}(3)$ of the Pauli+V matrices. Recall how matrices in ${\rm SU}(2)$ act as rotations on three-dimensional Euclidean space. They act by conjugation on the 3-dimensional vector space of traceless Hermitian matrices \[ xX+yY+zZ= \begin{pmatrix} z&x-iy\\x+iy&-z \end{pmatrix}, \] where $X$, $Y$, and $Z$ are the Pauli matrices, and where we regard the real numbers $x,y,z$ as coordinates in $\mathbb R^3$. Furthermore, this conjugation action preserves the Euclidean norm \[ x^2+y^2+z^2=-\det\begin{pmatrix} z&x-iy\\x+iy&-z \end{pmatrix}, \] so we get a homomorphism of ${\rm SU}(2)$ into the orthogonal group $\text{O}(3)$. Because ${\rm SU}(2)$ is connected, the homomorphism actually maps into ${\rm SO}(3)$.
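The rotation matrix of any special unitary $U$ can be read off this action via $R_{jk} = \tfrac12\,{\rm Tr}(\sigma_j U \sigma_k U^\dagger)$, since ${\rm Tr}(\sigma_j\sigma_k)=2\delta_{jk}$. A short numerical sketch (assuming numpy; the helper name `so3_image` is ours, not from the paper) computes the image of $V_1$ and checks that it is a rotation by $\arccos(-3/5)$ about the $x$-axis with all denominators dividing $5$:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = (X, Y, Z)

def so3_image(U):
    """SO(3) image of U: R[j, k] = Tr(sigma_j U sigma_k U†) / 2."""
    return np.array([[np.trace(s_j @ U @ s_k @ U.conj().T).real / 2
                      for s_k in sigma] for s_j in sigma])

V1 = (I + 2j * X) / np.sqrt(5)
R = so3_image(V1)
assert np.allclose(R @ R.T, np.eye(3))          # orthogonal
assert np.isclose(np.linalg.det(R), 1)          # in SO(3)
assert np.allclose(5 * R, np.round(5 * R))      # entries have denominator 5
assert np.allclose(R[:, 0], [1, 0, 0])          # x-axis is fixed
assert np.isclose((np.trace(R) - 1) / 2, -3/5)  # rotation angle arccos(-3/5)
```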
Under this homomorphism, the $V$-matrices correspond to rotations by $\arccos(-3/5)$ about the three coordinate axes, namely \[ R_1= \begin{pmatrix} 1&0&0\\0&-\frac35&\frac45\\0&-\frac45&-\frac35 \end{pmatrix},\quad R_2= \begin{pmatrix} -\frac35&0&-\frac45\\0&1&0\\\frac45&0&-\frac35 \end{pmatrix},\quad R_3= \begin{pmatrix} -\frac35&\frac45&0\\-\frac45&-\frac35&0\\0&0&1 \end{pmatrix}. \] \begin{theorem} \label{thm:free} \begin{enumerate} \item Any matrix obtained as a reduced product of $t$ factors taken from $\{{R_1}^{\pm1},{R_2}^{\pm1},{R_3}^{\pm1}\}$ has at least one entry which, when written as a fraction in lowest terms, has denominator $5^t$. \item Any matrix obtained as a reduced product of $t$ factors taken from $\{{V_1}^{\pm1},{V_2}^{\pm1},{V_3}^{\pm1}\}$ has the form \eqref{exact}, and it cannot be written in that form with $t$ replaced by a smaller exponent. \item Any matrix in the normal form described above, with $t$ factors taken from $\{{V_1}^{\pm1},{V_2}^{\pm1},{V_3}^{\pm1}\}$ followed by one factor from $\{\pm I,\pm X,\pm Y,\pm Z\}$, has the form \eqref{exact}, and it cannot be written in that form with $t$ replaced by a smaller exponent. \end{enumerate} \end{theorem} \begin{proof} The proof of item (1) in the theorem is rather long and is therefore given in an appendix. We give here the easy deductions of items~(2) and (3) from item~(1). To prove (2), consider any product $M=A_1\cdots A_t$ of $t$ factors, each of which is in $\{{V_1}^{\pm1},{V_2}^{\pm1},{V_3}^{\pm1}\}$. Each factor is thus a matrix of Gaussian integers divided by $\sqrt 5$, so the product $M$ is a matrix of Gaussian integers divided by $\sqrt{5^t}$. We need to show that no lesser power of $\sqrt 5 $ can serve as the denominator for $M$. So suppose, toward a contradiction, that $M$ is a matrix of Gaussian integers divided by $\sqrt{5^r}$ with $r<t$. Then the same is true of the conjugate transpose of $M$, which is also the inverse of $M$ because $M$ is unitary.
Thus, when $M$ acts by conjugation on the three-dimensional space of traceless Hermitian matrices, the denominators are (at most) $5^r$, namely a factor $\sqrt{5^r}$ from $M$ and another factor $\sqrt{5^r}$ from $M^{-1}$. This means that the image of $M$ in ${\rm SO}(3)$, which is given by this conjugation action on traceless Hermitian matrices, involves denominators of at most $5^r$. But this element of ${\rm SO}(3)$ is obtained by multiplying the $R$ matrices corresponding to the $V$ matrices that produced $M$. So we would have a reduced product of $t$ factors from $\{{R_1}^{\pm1},{R_2}^{\pm1},{R_3}^{\pm1}\}$ with only $5^r$ in the denominator. This contradicts item~(1). Finally, for item~(3), we must show that what we just proved about products of the form $A_1\cdots A_t$ is also valid for $A_1\cdots A_tB$ where $B\in\{\pm I,\pm X,\pm Y, \pm Z\}$. But this is easy, since the entries of $B$ (and of $B^{-1}$, because $B^{-1}=B$) are Gaussian integers, so multiplication by $B$ has no effect on the number of $\sqrt 5$ factors needed in the denominator. \end{proof} \begin{corollary}\label{cor:free} \begin{enumerate} \item The matrices $R_1, R_2,$ and $R_3$ are free generators of the subgroup of ${\rm SO}(3)$ that they generate. \item The matrices $V_1,V_2,$ and $V_3$ are free generators of the subgroup of ${\rm SU}(2)$ that they generate. \end{enumerate} \end{corollary} \begin{proof} Both parts follow immediately from the corresponding parts of the theorem. The identity matrix cannot be represented by a nonempty reduced word in the given generators and their inverses, because it has no 5 or $\sqrt 5$ in the denominator. \end{proof} \begin{proposition} The normal forms $A_1\cdots A_tB$ all represent distinct matrices, so that every element of $W$ has a unique normal form.
\end{proposition} \begin{proof} Part~(3) of Theorem~\ref{thm:free} immediately implies that two normal forms with distinct lengths represent distinct matrices, for they have different powers of $\sqrt5$ in their simplest forms. So we need only consider normal forms of one length $t$ at a time. Fix $t$ for the rest of this proof. Combining Part~(3) of Theorem~\ref{thm:free} with Theorem 1 in [Ref.~\onlinecite{BGS}], we find that every matrix in $W$ whose simplest form is \eqref{exact} (with our fixed $t$) is represented by a normal form $A_1\cdots A_tB$ (again with our fixed $t$). To show that this representation is unique, it suffices to show that the number of such matrices equals the number of such normal forms. To count the relevant matrices \eqref{exact}, write $u=a+bi$ and $v=c+di$, where $a,b,c,$ and $d$ are ordinary integers, and observe that the matrix \eqref{exact} is in ${\rm SU}(2)$ if and only if \[ a^2+b^2+c^2+d^2=|u|^2+|v|^2=5^t. \] Thus, the number of matrices in ${\rm SU}(2)$ of the form \eqref{exact} is the number of representations of $5^t$ as a sum of four squares of integers. The number of such matrices for which this is the simplest form is then obtained by subtracting the number of such four-square representations in which all of $a,b,c,$ and $d$ are divisible by 5. By Jacobi's four-square theorem, every positive odd integer $n$ has $8\sum_{d|n}d$ representations as a sum of four squares of integers. In particular, for $n=5^t$, there are \[ 8(1+5+\cdots+5^t)=8\frac{5^{t+1}-1}{5-1}=2(5^{t+1}-1) \] representations of $5^t$ as a sum $a^2+b^2+c^2+d^2$. As noted above, we must subtract the number of these representations in which all of $a,b,c,$ and $d$ are divisible by 5. Dividing these four integers by 5, we obtain the representations of $5^{t-2}$ as a sum of four squares, so the number to be subtracted is $2(5^{t-1}-1)$. Therefore, the number of matrices whose simplest form is \eqref{exact} is \[ 2(5^{t+1}-1)-2(5^{t-1}-1)=2\cdot 5^{t-1}\cdot(25-1)=48\cdot 5^{t-1}.
\] Now, let us count the number of normal forms $A_1\cdots A_tB$. We have 6 choices for $A_1$ (namely any of the ${V_i}^{\pm1}$), 5 choices for each subsequent $A_j$ (namely any of the ${V_i}^{\pm1}$ except the inverse of the immediately preceding $A_{j-1}$), and 8 choices for $B$. That makes $48\cdot 5^{t-1}$ normal forms. Since this count agrees with the count of matrices above, the proof of the proposition is complete. \end{proof} The V-\emph{count} of a $W$-operator $U$ is $t$ if $U$ has a normal form $A_1\cdots A_t B$. Every $W$-circuit implementing $U$ contains at least $t$ $V$-gates. \subsection{Diophantine approximations} \label{sub:da} We presume that the reader is familiar with continued fractions, and we use Khinchin's book [Ref.~\onlinecite{Khinchin}] as our reference on continued fractions. Every rational number $q$ has a unique continued-fraction representation $[a_0; a_1, a_2, \dots, a_n]$ where all $a_i$ are integers, $a_1,a_2,\dots$ are positive and $a_n > 1$. Every irrational number has a unique continued-fraction representation $[a_0; a_1, a_2, \dots]$ where all $a_i$ are integers, and $a_1,a_2,\dots$ are positive. If $[a_0; a_1,\dots,a_k]$ is an initial segment of the continued-fraction representation of $\gamma$ then the reduced fraction $p_k/q_k$ represented by $[a_0; a_1,\dots,a_k]$ is the $k^{th}$ approximant (or convergent) of $\gamma$. By a theorem of Dirichlet [Ref.~\onlinecite[Theorem~25]{Khinchin}], for any real number $\gamma$ and any integer $r\ge1$, there exist relatively prime integers $x$ and $y$ such that \[ |\gamma y - x| < 1/r \quad\text{and}\quad 1\le y\le r. \] The proof of Theorem~25 in [Ref.~\onlinecite{Khinchin}] includes this claim: If $p_k/q_k$ is the approximant of $\gamma$ such that $q_k\le r$ and either $r< q_{k+1}$ or else $\frac{p_k}{q_k}=\gamma$ then $|\gamma q_k - p_k| < 1/r$. 
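For a rational input, the approximant in this claim can be computed exactly with the standard recurrences $p_k = a_kp_{k-1}+p_{k-2}$ and $q_k = a_kq_{k-1}+q_{k-2}$: take the last convergent with $q_k\le r$; it satisfies either $r<q_{k+1}$ or $p_k/q_k=\gamma$, so by the claim above $|\gamma q_k - p_k| < 1/r$. A sketch (function names are ours, assuming Python's `fractions`):

```python
from fractions import Fraction

def convergents(g: Fraction):
    """Yield the continued-fraction convergents (p_k, q_k) of a rational g."""
    p_km2, q_km2 = 0, 1          # (p_{-2}, q_{-2})
    p_km1, q_km1 = 1, 0          # (p_{-1}, q_{-1})
    x = g
    while True:
        a = x.numerator // x.denominator            # a_k = floor(x)
        p, q = a * p_km1 + p_km2, a * q_km1 + q_km2
        yield p, q
        p_km2, q_km2, p_km1, q_km1 = p_km1, q_km1, p, q
        if x == a:                                  # expansion terminated
            return
        x = 1 / (x - a)

def dirichlet_pair(g: Fraction, r: int):
    """Last convergent with q_k <= r; then |g*q - p| < 1/r by the claim above."""
    best = None
    for p, q in convergents(g):
        if q > r:
            break
        best = (p, q)
    return best

p, q = dirichlet_pair(Fraction(31415926, 10**7), 500)
assert 1 <= q <= 500
assert abs(Fraction(31415926, 10**7) * q - p) < Fraction(1, 500)
```

For this decimal approximation of $\pi$ the returned pair is the familiar convergent $355/113$.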
\begin{lemma}\label{lem:rational} There is a polynomial-time algorithm that, given a rational $g$ and integer $r\ge1$, computes an approximant $p_k/q_k$ of $g$ such that $|g q_k - p_k| < 1/r$ and $1\le q_k\le r$. \end{lemma} \begin{proof} The desired algorithm is recursive. Let $g_0 = g$, $a_0 = \lfloor g_0 \rfloor$, $p_0=a_0$, $q_0=1$, and suppose we have already computed $g_j,a_j,p_j,q_j$. If $g_j -a_j < 1/r$, stop and output $p_j/q_j$. Otherwise let $g_{j+1} = 1/ (g_j-a_j)$, $a_{j+1} = \lfloor g_{j+1} \rfloor$ and \begin{align*} p_{j+1} &= a_{j+1}p_j + p_{j-1}\\ q_{j+1} &= a_{j+1}q_j + q_{j-1} \end{align*} where $p_{-1}=1$ and $q_{-1}=0$. To estimate the running time of the algorithm, use the fact that any $q_n\ge 2^{(n-1)/2}$ [Ref.~\onlinecite[Theorem~12]{Khinchin}]. If the output is $p_k/q_k$, we have $2^{(k-1)/2}\le q_k\le r$ and $k \le 1 +2\log_2 r$. \end{proof} \begin{proviso} Every real number $\gamma$, used as input to an algorithm in the present paper, comes with an oracle that, given the unary notation for an integer $m\ge0$, produces the part $\sum_{n=0}^m d_n/10^n$ of the decimal notation $\sum_{n=0}^\infty d_n/10^n$ for $\gamma$, where every $d_n$ is an integer and $d_1,d_2,\dots$ are in $\{0,1,\dots,9\}$. \end{proviso} \begin{lemma}\label{lem:real} There is a polynomial-time algorithm that, given a real $\gamma$ and integer $r\ge1$, computes a reduced fraction $x/y$ such that $|\gamma y - x| < 1/r$ and $y\le 2r$. \end{lemma} \begin{proof} Use the oracle companion of $\gamma$ to compute a rational $g$ such that $|\gamma - g| \le\frac1{4r^2}$. Use the algorithm of Lemma~\ref{lem:rational}, with $2r$ in the role of $r$, to compute a fraction $x/y$ such that $1\le y\le 2r$ and $|g y -x| <\frac1{2r}$. Since $y\le 2r$, we have \[ |\gamma - \frac xy| \le |\gamma - g| + |g-\frac xy| < \frac1{4r^2} + \frac1{2ry} \le \frac1{2ry} + \frac1{2ry} = \frac1{ry} \] and so $|\gamma y - x| < \frac1r$.
\end{proof} \subsection{Sums of squares} \label{sub:squares} We recall some well-known facts related to the problem of representing a given (rational) integer $n\ge0$ as a sum of two squares of integers. All variables will range over the integers. For brevity, we say that $n$ is S2S if it is a sum of two squares. Every prime number of the form $4m+1$ is S2S. Given any such prime $p$, the Rabin-Shallit algorithm finds an S2S representation $x^2+y^2$ of $p$ in expected time $O(\log p)$ [Ref.~\onlinecite[Theorem~11]{RS}]. The S2S property is multiplicative. Indeed, if $m = a^2+b^2 = |a+bi|^2$ and $n = x^2+y^2 = |x+yi|^2$ then $mn = |(a+bi)(x+yi)|^2 = (ax-by)^2 + (ay+bx)^2$. \begin{lemma}\label{lem:S2S} There is an algorithm that, given the representation of any number $n$ as a product of powers of distinct primes, decides whether $n$ is S2S and, if yes, produces an S2S representation $x^2+y^2$ of $n$. The algorithm works in expected polynomial time. \end{lemma} \begin{proof} A number $n$ is S2S if and only if every prime factor $q$ of $n$ of the form $4m + 3$ has an even exponent in the representation of $n$ as a product of powers of distinct primes [Ref.~\onlinecite[Theorem~366]{HW}]. This criterion allows one to decide whether $n$ is S2S or not. If $n$ is S2S, use the Rabin-Shallit algorithm and the multiplicativity of S2S to find an S2S representation of $n$. We illustrate this part with an example. Suppose that $n = p_1p_2^3q^2$ where $p_1,p_2$ are primes of the form $4m+1$ and $q$ is a prime of the form $4m+3$. Use the Rabin-Shallit algorithm to represent $p_1,p_2$ as sums of squares. Use the algorithm of the paragraph preceding the lemma to represent $p_1p_2$ as $c^2+d^2$. Then $n = (cp_2q)^2 + (dp_2q)^2$. \end{proof} By the prime number theorem, the number $\pi(n)$ of primes $\le n$ is asymptotically equal to $\frac n{\log n}$ where $\log$ means the natural logarithm.
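The factor-by-factor procedure in this proof can be illustrated as follows (a sketch only: trial-division factoring and a brute-force two-square search stand in for the given factorization and for the Rabin-Shallit algorithm):

```python
from math import isqrt

def factorize(n):
    """Trial division; the lemma assumes the factorization is given."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def two_square_prime(p):
    """Brute force; the paper uses the Rabin-Shallit algorithm here."""
    for x in range(isqrt(p) + 1):
        y = isqrt(p - x * x)
        if x * x + y * y == p:
            return x, y

def s2s(n):
    """Return (x, y) with x^2 + y^2 == n, or None (criterion of [HW, Thm 366])."""
    a, b = 1, 0                          # running Gaussian integer a + bi
    for p, e in factorize(n).items():
        if p % 4 == 3:
            if e % 2:                    # odd exponent of a 4m+3 prime
                return None
            a, b = a * p ** (e // 2), b * p ** (e // 2)
        else:                            # p = 2 or p of the form 4m+1
            x, y = two_square_prime(p)
            for _ in range(e):           # multiplicativity of S2S
                a, b = a * x - b * y, a * y + b * x
    return abs(a), abs(b)

x, y = s2s(5 * 13**3 * 3**2)             # the example n = p1 * p2^3 * q^2
assert x * x + y * y == 5 * 13**3 * 3**2
assert s2s(21) is None                   # 3 and 7 occur with odd exponents
```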
The number of primes of the form $4m+1$ that are $\le n$ is asymptotically equal to $\frac n{2\log n}$ and thus is $\Omega(\frac{n}{\log n})$. It follows that the fraction of S2S numbers $\le n$ is $\Omega(\frac1{\log n})$. \begin{proposition}\label{pro:R2} There is an algorithm that, given a positive integer $n$ of the form $4m+1$ and a real $\delta>0$, works in time $O((\log n)\log\frac1\delta)$ and returns an S2S representation of $n$ or Nil. If $n$ is prime then, with probability $>1-\delta$, the algorithm returns an S2S representation of $n$. \end{proposition} \begin{proof} Let $A$ be the Rabin-Shallit algorithm for finding an S2S representation of a given prime number $n=4m+1$. $A$ works in two stages. At stage~1, it solves the equation $x^2=-1$ in the field ${\mathbb Z}_n$. That solution is used at stage~2 to produce an S2S representation of $n$. Stage~2 is performed in linear time. Stage~1 is a while loop. At the $j^{th}$ round of the loop, $A$ randomly chooses a residue $b_j\mod n$ and computes, in linear time, the greatest common divisor $d_j(x) = x-a_j$ of polynomials $(x-b_j)^2 + 1$ and $x^{2m}-1$. With probability $1/2$, the residue $a_j-b_j$ solves $x^2=-1$; if this happens, call the round successful. The expected number of the rounds is 2. So $A$ works in expected linear time. Let $A'$ be the modification of $A$ that takes an additional input $\delta>0$ and replaces the while loop with the following for-loop where $J = \lceil \log_2(1/\delta) \rceil$. For $j=0$ to $J$ do: \begin{enumerate} \item Perform one round of $A$'s while loop. \item If the round is successful, go to stage~2. \item If $j<J$ then increment $j$ else stop and return Nil. \end{enumerate} The probability that $A'$ outputs Nil is $1/2^{J+1}<\delta$. The worst-case running time of $A'$ is $O((\log n)\log\frac1\delta)$. The desired algorithm $A''$ is the modification of $A'$ where the input integer $n=4m+1$ is not necessarily prime.
$A''$ simulates $A'$ on the given $n$ and $\delta$. If $A'$ returns an alleged S2S representation $c^2+d^2$ then $A''$ checks whether the representation is genuine and returns the same S2S representation of $n$ if it is genuine. In all other cases, $A''$ returns Nil. \end{proof} \section{Adjusting a meniscus} \label{sec:adjust} \begin{comment} Enumeration of the points in $C_t \cap M_\varepsilon(\theta)$ may seem deceptively simple. Consider scaled out meniscus $\sqrt{5}^t\,M_\varepsilon(\theta)$ and its projection $P_t(\varepsilon,\theta)$ on the horizontal axis $\Im z=0$. For any point $u/\sqrt{5}^t \in C_t \cap M_\varepsilon(\theta)$, $\Re(u)$ must be an integer $a \in P_t(\varepsilon,\theta)$. It is easy to compute the intersection of the vertical straight line $\Re(z)=a$ with $\sqrt{5}^t\,M_\varepsilon(\theta)$ and to enumerate all Gaussian integers within that intersection. The one and only problem with this direct approach is that $|P_t(\varepsilon,\theta)| = O(\sqrt{5}^t\, \varepsilon)$ may be too large for efficient compilation since it implies the inspection of $O(\sqrt{5}^t\, \varepsilon)$ vertical lines. In this subsection we propose a simple transformation to resolve this inefficiency. \end{comment} We present an algorithm that, given a meniscus $M_\varepsilon(\theta)$ of the unit disk, constructs an operator $\tau\in {\rm SL}(2,{\mathbb Z})$ such that $\tau M$ resides in a vertical band of width $O(\varepsilon^{3/2})$. Call a complex number $z$ \emph{quasi-rational} if $\Re(z)=0$ or $\Im(z)/\Re(z)$ is rational. Every nonzero quasi-rational $z$ has a unique \emph{reduced} presentation of the form $\mu(a+bi)$ where $\mu$ is real and positive and where $a,b$ are relatively prime integers; if $\Re(z)\ne0$ then $b/a$ is the reduced form of the fraction $\Im(z)/\Re(z)$. Observe that a vector $r$ is orthogonal to a given quasi-rational vector $\mu(a+bi)$ if and only if $r$ is quasi-rational of the form $\nu(b-ai)$.
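The reduced presentation is easily computed from the slope $\Im(z)/\Re(z)$. A sketch for the case $\Re(z)\ne0$ (the helper name `reduced` is ours, assuming Python's `fractions`):

```python
from fractions import Fraction
from math import gcd

def reduced(re: Fraction, im: Fraction):
    """Reduced presentation mu*(a+bi) of a quasi-rational z = re + im*i, re != 0:
    mu is a positive real and a, b are relatively prime integers."""
    slope = im / re                      # b/a, put in lowest terms by Fraction
    a, b = slope.denominator, slope.numerator
    mu = re / a
    if mu < 0:                           # flip signs so that mu > 0
        a, b, mu = -a, -b, -mu
    return mu, a, b

mu, a, b = reduced(Fraction(6), Fraction(-4))    # z = 6 - 4i
assert (mu, a, b) == (2, 3, -2)
assert mu > 0 and gcd(a, abs(b)) == 1
assert mu * a == 6 and mu * b == -4              # mu*(a + bi) recovers z
assert 6 * b + (-4) * (-a) == 0                  # z is orthogonal to b - ai
```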
\begin{lemma}\label{lem:tau} Let $q$ be a nonzero quasi-rational with reduced presentation $\mu(a+bi)$. For any nonzero complex number $r$ orthogonal to $q$, there is $\tau\in {\rm SL}(2,{\mathbb Z})$ such that \begin{enumerate} \item $\tau q = \mu i$ and $|\Im(\tau r)| < |\Re(\tau r)|$, \item $|\tau q| = |q|/\sqrt{a^2+b^2}$, \item $|\Re(\tau r)| = \sqrt{a^2+b^2}\, |r|$. \end{enumerate} \end{lemma} \begin{proof}[Proof of the lemma] \noindent 1. Use the extended Euclidean algorithm to find integers $u,v$ with $ua + vb =1$ and let $\tau_0 = \begin{sm}b&-a\\u&v\end{sm}$. Then $\tau_0q = \mu i$ and $\tau_0r$ is some complex number $\alpha+\beta i$. Since $q$ and $r$ are non-collinear, so are $\tau_0 q$ and $\tau_0 r$, and therefore $\alpha\ne0$. If $|\beta|< |\alpha|$, the desired $\tau = \tau_0$. Otherwise $\tau = \tau'\tau_0$ where $\tau' = \begin{sm}1&0\\k&1\end{sm}$ and $k = \lceil -\frac \beta\alpha \rceil = -\frac \beta\alpha + \delta$ for some $\delta$ such that $0\le \delta < 1$. We have $\tau q = \tau'(\mu i) = \mu i$, $\Re(\tau r) = \Re(\tau'(\alpha+\beta i)) = \alpha$, and $\Im(\tau r) = \Im(\tau'(\alpha + \beta i)) = k\alpha + \beta = \delta\alpha$, so that indeed $|\Im(\tau r)| < |\Re(\tau r)|$. \noindent 2. By the first part of Claim~1, $|\tau q| = |\mu| = |q|/\sqrt{a^2+b^2}$. \noindent 3. $\tau$ maps the rectangle with sides $q$ and $r$ to a parallelogram of the same area because $\tau\in{\rm SL}(2,{\mathbb Z})$. The parallelogram has a vertical side $\tau q = \mu i$ and side $\tau r$ whose horizontal component is $\Re(\tau r)$, so its area is $|\mu|\cdot |\Re(\tau r)|$. The original rectangle had area $|q|\cdot|r| = |\mu|\sqrt{a^2+b^2}\,|r|$. Equating the two areas and cancelling $|\mu|$, we get the claim. \end{proof} \begin{corollary} Let $M$ be a meniscus with a quasi-rational base vector $q$ of reduced form $\mu(a+bi)$ and with handle $h$.
There is $\tau\in {\rm SL}(2,{\mathbb Z})$ such that $\tau q$ is vertical, $|\tau q| = |q|/\sqrt{a^2+b^2}$ and $|\Re(\tau h)| = \sqrt{a^2+b^2}\, |h|$. \end{corollary} Originally, to achieve the goal of this section, we intended to show that, for every meniscus $M$ of the unit disk, there exists a slightly bigger meniscus $L\supseteq M$ with a base vector $q$ and a handle $h$ and there exists an operator $\tau\in{\rm SL}(2,{\mathbb Z})$ such that $\tau q$ is vertical and $|\Re(\tau h)| = O(\varepsilon^{3/2})$. The intent ran into difficulties with Diophantine approximations; see \S\ref{sub:da} in this connection. Fortunately there is another way. \begin{comment} For our purposes, making $\mathcal Base(M)$ nearly horizontal is as good as making it nearly vertical. Note that the corollary remains true if ``vertical'' is replaced with ``horizontal;'' just compose $\tau$ with a $90$-degree rotation. \end{comment} \begin{theorem}\label{thm:tau} Let $M$ be a meniscus $M_\varepsilon(\theta)$ of the unit disk. There is $\tau\in {\rm SL}(2,{\mathbb Z})$ such that $\tau M$ resides in a vertical band of width $O(\varepsilon^{3/2})$. Moreover, there is a polynomial-time algorithm that, given $\varepsilon$ and $\theta$, constructs the desired $\tau$. \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:tau}] Since the isometry $\begin{sm}0&-1\\ 1&0\end{sm} \in {\rm SL}(2,{\mathbb Z})$ makes horizontal bands vertical, it suffices to prove the version of the theorem where ``vertical'' is replaced with ``vertical or horizontal.'' Let $z_1, z_2$ be the two corner points of $M$ and $r = \alpha + \beta i$ be the base vector $z_2-z_1$. We assume that $\alpha\beta\ne0$; otherwise we have nothing to do. Without loss of generality, $z_1$ is the left one of the two corner points of $M$, so that $\alpha>0$. The enclosing triangle $Z$ of $M$ is formed by points $z_1,z_2$ and the intersection point, call it $z_3$, of the tangent lines to the unit circle at $z_1$ and $z_2$.
Without loss of generality, $|\beta|\le|\alpha|$; otherwise, instead of making $\mathcal Base(M)$ nearly vertical, we'll make it nearly horizontal (even though this may look a bit unnatural: if the base is closer to horizontal then we adjust it to become vertical, and if it's closer to vertical then we adjust it to become horizontal.) We are going to construct an operator $\tau\in {\rm SL}(2,{\mathbb Z})$ such that the horizontal projections of all sides of the triangle $\tau Z$ are $O(\varepsilon^{3/2})$. Let $\gamma = \beta/\alpha$ and $n = \lceil 1/\sqrt\varepsilon\, \rceil$. Apply the algorithm of Lemma~\ref{lem:real} to construct a reduced fraction $b/a$ such that $|\gamma a - b| < 1/n \le \sqrt\varepsilon$ and $1\le a \le 2n = 2\lceil 1/\sqrt\varepsilon\,\rceil$. Since $|\gamma a - b| < 1$ and $b$ is an integer, $b$ and $\gamma a$ cannot have opposite signs. Hence $b=0$ or $b$ has the sign of $\gamma a$ which is also the sign of $\gamma$ and $\beta$. Recall that $\alpha\ge|\beta|$, so $|\gamma|\le1$. We claim that $a \ge|b|$. Indeed, \[ |b|-a = |b| - a|\gamma| - a(1 - |\gamma|) \le |b| - a|\gamma| \le |b - \gamma a| < \sqrt\varepsilon < 1 \] and thus $|b|\le a$. Let $q$ be the quasi-rational $\alpha + \frac ba \alpha i = \frac\alpha a (a + bi)$. Recall from \S\ref{sub:geometry} that $|r| = \Theta(\varepsilon)$. We have $r - q = (\beta - \frac ba\alpha) i$ and $ |r - q| = |\frac\alpha a (\gamma a - b)| \le \frac{|r|}a |\gamma a -b| = O(\varepsilon^{3/2}/a)$. Consider the meniscus $L = M_\delta(\eta)\supseteq M$ such that $\mathcal Base(L)$ is parallel to $q$ and touches $\mathcal Base(M)$ at $z_1$ or $z_2$. We consider only the case $z_1\in\mathcal Base(L)$; the other case is similar. Let $y_1=z_1$ and $y_2$ be the other end of $\mathcal Base(L)$. The enclosing triangle of $L$ is formed by points $y_1,y_2$ and the intersection point, call it $y_3$, of the tangent lines to the unit circle at $y_1$ and $y_2$.
We have ${\rm Arc}(y_1,y_2) = {\rm Arc}(z_1,z_2) + {\rm Arc}(z_2,y_2) = O(\sqrt\varepsilon /a)$ and, by Lemma~\ref{lem:arc}, $\delta = O(\sqrt\varepsilon /a)$. Let $h$ be the handle of $L$. By Lemma~\ref{lem:arc}, $|h| = \delta^2$. By Lemma~\ref{lem:tau}, there is $\tau\in {\rm SL}(2,{\mathbb Z})$ that makes $\mathcal Base(L)$ vertical and such that $|\Re(\tau h)| = \Theta(a\cdot|h|) =\Theta(a\delta^2)$. Furthermore, a particular $\tau = \tau' \tau_0$ is constructed in the proof of the lemma; we are going to take advantage of that. The horizontal projection of $[\tau z_1, \tau z_2]$ is $\Re(\tau r)$. We claim that the length of the horizontal projection is $O(\varepsilon^{3/2})$. Since $\tau'$ preserves the real part of any vector, it suffices to show that $|\Re(\tau_0 r)| = O(\varepsilon^{3/2})$. We have \[ \Re\left[\begin{sm}b&-a\\u&v\end{sm} \begin{sm}\alpha\\ \beta\end{sm}\right] = b\alpha - a\beta = \alpha(b - a\gamma). \] So $|\Re(\tau_0 r)| \le |r|\cdot |a\gamma - b| = O(\varepsilon^{3/2})$. It remains to show that the length of the horizontal projection of $[\tau z_1, \tau z_3]$ is $O(\varepsilon^{3/2})$. Let $z = z_3 - z_1$ and $y = y_3 - y_1$. By Lemma~\ref{lem:arc}, $|z| \approx \varepsilon\sqrt2$ and $|y| \approx \delta\sqrt2$; so $\frac{|z|} {|y|} \approx \frac\varepsilon\delta$. Since the vectors $y,z$ are collinear and both $\tau$ and the horizontal projection are linear, $\frac{|\Re(\tau z)|}{|\Re(\tau y)|} \approx \frac\varepsilon\delta$. Since $\Re(\tau y) = \Re(\tau h)$, we have \[ |\Re(\tau z)| \approx \frac\varepsilon\delta |\Re(\tau h)| = \Theta\left( \frac\varepsilon\delta a\delta^2\right) = \Theta(a\varepsilon\delta) = O(\varepsilon^{3/2}). \] It remains to estimate the running time of our algorithm. To this end we only need to estimate the time needed to compute the integers $a,b$ and then integers $u,v$ such that $au+bv=1$; the remaining work takes constant time. By Lemma~\ref{lem:real}, the integers $a,b$ are computed in polynomial time.
The Euclidean algorithm that computes $u,v$ runs in polynomial time as well. \end{proof} \section{Deterministic search} \label{sec:detsearch} We explain the deterministic search: how it works and why. We do that essentially on the Pauli+V example. But, to simplify the exposition and minimize distractions, we abstract away some details. It would be easy to abstract away more details at the price of making the exposition a little more involved. Consider a finite universal basis $\mathcal B$ for single-qubit gates. $\mathcal B$ may contain gates considered negligible; in such a case the depth of a $\mathcal B$-circuit is the number of non-negligible gates in the circuit. In the Pauli+V case, the Pauli gates may be considered negligible, because they are relatively cheap to implement and because at most one Pauli gate occurs in the normal form \eqref{normal}. Assume that $\mathcal B$ comes with a partial function ${\rm Level}(u)$ that assigns nonnegative integers to some complex numbers and with an equation ${\rm NE}_t(u,v)$ such that the following constraints C1--C4$'$ hold. \begin{enumerate} \item[C1.] If $u,v$ are of levels $\le t$ and ${\rm NE}_t(u,v)$ holds, then the matrix $\begin{sm}u&-v^*\\ v&u^*\end{sm}$ is unitary and exactly realizable by a $\mathcal B$ circuit of depth $\le t$. \end{enumerate} The ``NE'' in ``${\rm NE}_t(u,v)$'' alludes to the fact that the condition is typically expressed as a norm equation on $u$ and $v$, with parameter $t$. Whether ${\rm NE}_t(u,v)$ is a pure norm equation or not, we will refer to it as the \emph{norm equation}. The reader may wonder how Pauli+V fits the general scheme, in particular what the levels and the norm equation are in the Pauli+V case; these questions will be addressed in the next section. Recall that we are seeking to approximate a given $z$-rotation $R_z(\theta) = \begin{sm}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{sm}$ to a given precision $\varepsilon$. Let $M=M_\varepsilon(\theta)$.
If $u\in M$ and ${\rm Level}(u)\le t$, call $u$ a \emph{candidate of level $\le t$}. By Corollary~\ref{cor:meniscus}, if $u$ is a candidate of level $\le t$ and $v$ is a complex number of level $\le t$ such that ${\rm NE}_t(u,v)$ holds, then the matrix $U = \begin{sm}u&-v^*\\ v&u^*\end{sm}$ is at a distance $<\varepsilon$ from $R_z(\theta)$. Accordingly, call a candidate $u$ of level $\le t$ a \emph{winning candidate of level $\le t$} (or simply a \emph{winning candidate} if $t$ is clear from the context) if there is a complex number $v$ of level $\le t$ such that ${\rm NE}_t(u,v)$ holds. Let $C_t$ be the set of all candidates of levels $\le t$, and let $W_t$ be the subset of all winning candidates of levels $\le t$. Assume that the following constraints C2--C4 hold. By default, $\log$ means natural logarithm. \begin{enumerate} \item[C2.] There is an efficient algorithm that, given $t$, enumerates the candidates of level $t$. \item[C3.] There exist a real $a>1$ and an integer $t_0$ such that $|C_{t_0}| >1$, and $|C_t|\le1$ for all $t<t_0$, and $|C_t| \ge a^{t-t_0}$ for all $t>t_0$ such that $t-t_0$ is even. Here $t_0$ depends on $\varepsilon$ and $\theta$ while $a$ depends on neither. \item[C4.] $|W_t| = \Omega(|C_t|/t)$ as $t\to\infty$, uniformly with respect to $\varepsilon$. \end{enumerate} The requirement that $t-t_0$ be even in C3 reflects a peculiarity of the Pauli+V case. \begin{lemma}\label{lem:explore} $|W_t|\ge k$ for some $t = t_0 + c\,\log t_0$ where $c\ge0$ depends on $k$ and $\varepsilon$ but not $\theta$. \end{lemma} \begin{proof} Let $a,t_0$ be as in C3. By C4, there exist a real $b>0$ and an integer $t_1\ge1$, independent of $k$, $\varepsilon$ or $\theta$, such that $|W_t| \ge b|C_t|/t$ for all $t\ge t_1$. Let $t\ge t_0$, $t\ge t_1$ and $t-t_0$ be even. Then $|W_t|\ge b|C_t|/t \ge ba^{t-t_0}/t$, and so $|W_t|\ge k$ if $a^{t-t_0} \ge kt/b$. 
If $t = t_0 + x\,\log t_0$ then \begin{equation}\label{ktest} |W_t|\ge k \text{ if } (a^{\log t_0})^x \ge \frac{kt_0}b + \frac{k\log t_0}b x. \end{equation} The exponential function $(a^{\log t_0})^x$ of $x$ quickly outgrows the linear function $\frac{kt_0}b + \frac{k\log t_0}b x$ of $x$. The desired $t$ is $t_0 + c\,\log t_0$ where $c$ is the least nonnegative real such that $t_0 + c\log t_0\ge t_1$, $c\log t_0$ is an even integer and the premise of the implication~\eqref{ktest} holds. \end{proof} Our goal is to (efficiently) find a winning candidate, preferably of low level. Our ability to do this depends on our ability to tell whether a given candidate is winning, and in this connection we consider two scenarios. \noindent\emph{Scenario~1}\quad We have a deterministic decision procedure that, given an integer $t\ge0$ and a candidate $u\in C_t$, decides in polynomial time whether $u\in W_t$. Then the following obvious deterministic search finds a winning candidate of the minimal possible level. Explore candidates of level $0$, then candidates of level $1$, etc.\ until a winning candidate of some level is found. The efficiency of such a deterministic search crucially depends on the efficiency of the decision procedure. \noindent\emph{Scenario~2}\quad We have a randomizing procedure that, given an integer $t\ge0$ and a candidate $u\in C_t$, decides in expected polynomial time whether $u$ belongs to a subset $W'_t$ of $W_t$ subject to the following constraint. \begin{enumerate} \item[C4$'$] $|W'_t| = \Omega(|C_t|/t)$ as $t\to\infty$, uniformly with respect to $\varepsilon$. \end{enumerate} Then the following randomizing search finds a candidate in $W'_t$ of the minimal possible level. Explore candidates of level $0$, then candidates of level $1$, etc.\ until, for some $t$, a member of $W'_t$ is found. \begin{lemma}\label{lem:explore'} $|W'_t|\ge k$ for some $t = t_0 + c\,\log t_0$ where $c\ge0$ depends on $k$ and $\varepsilon$ but not $\theta$.
\end{lemma} \begin{proof} Just replace the reference to C4 with a reference to C4$'$ in the proof of Lemma~\ref{lem:explore}. \end{proof} \section{Optimal Pauli+V circuits} \subsection{Pauli+V candidates} \label{sub:candidates} We specialize \S\ref{sec:detsearch} to the Pauli+V case. All the assumptions made in \S\ref{sec:detsearch} need to be justified. Define a complex number $u$ to be of level $\le t$ if $\sqrt{5^s}u$ is a Gaussian integer for some nonnegative integer $s\le t$. The norm equation ${\rm NE}_t(u,v)$ has a particularly simple form in the Pauli+V case: $|u|^2 + |v|^2 = 1$. If $\sqrt{5^t} u, \sqrt{5^t} v$ are Gaussian integers $a+bi, c+di$ respectively then the norm equation becomes $a^2 + b^2 + c^2 + d^2 = 5^t$. By Theorem~1 in [Ref.~\onlinecite{BGS}], mentioned in \S~\ref{sub:V}, constraint C1 holds. Toward verifying C2, construct a linear transformation $\tau$ as in Theorem~\ref{thm:tau}. The transformation $\tau$ maps straight lines into straight lines and ellipses into ellipses; it preserves areas and convexity. The elliptical meniscus $\tau M$ is enclosed in a vertical band of width $O(\varepsilon^{3/2})$. The inflated elliptical meniscus $\sqrt{5^t} \tau M$ is enclosed in a vertical band that projects onto the real segment $[l_t, r_t]$ with $r_t-l_t = O(\sqrt{5^t} \varepsilon^{3/2})$. $\sqrt{5^t} \tau M$ is bounded by segments \begin{align*} & a + i f(a):\ l_t \le a \le r_t\\ & a + i g(a):\ l_t \le a \le r_t \end{align*} of a straight line and of an ellipse respectively. To simplify the exposition, we consider only the case where the straight line segment is above the ellipse segment. Each Gaussian integer in $\sqrt{5^t} \tau M$ belongs to a vertical segment $[n+ig(n),n+if(n)]$ where $n$ is an integer in the segment $[l_t,r_t]$; see Figure \ref{fig:menuscus:enumeration}.
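The segment-walking enumeration just described can be sketched as follows, under the text's assumptions: `f` and `g` stand for the upper and lower boundary functions of the inflated meniscus, and the function name is ours.

```python
from math import ceil, floor

def gaussian_integers_in_band(f, g, l, r):
    """Enumerate the Gaussian integers a+bi with l <= a <= r and
    g(a) <= b <= f(a), walking the vertical segments
    [n + i*g(n), n + i*f(n)] for each integer n in [l, r]."""
    points = []
    for n in range(ceil(l), floor(r) + 1):
        # All integer heights b on the n-th vertical segment.
        for b in range(ceil(g(n)), floor(f(n)) + 1):
            points.append(complex(n, b))
    return points
```

Since the band projects onto a segment of length $O(\sqrt{5^t}\varepsilon^{3/2})$, the outer loop runs over only that many segments.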
This allows us to enumerate efficiently all Gaussian integers $a+bi$ in $\sqrt{5^t} \tau M$ and thus to enumerate efficiently all candidates $\frac {\tau^{-1}(a+bi)} {\sqrt{5^t}}$ of levels $\le t$. Constraint C2 holds. \begin{figure} \caption{The Gaussian integers of $\sqrt{5^t}\tau M$ lie on the vertical segments $[n+ig(n),\,n+if(n)]$ with $n$ an integer in $[l_t,r_t]$.} \label{fig:menuscus:enumeration} \end{figure} Constraint C3 follows from the following claim based on an observation in [Ref.~\onlinecite{RoSelinger}]. \begin{claim} \label{clm:exp_growth} If $|C_t| \ge2$ then for any integer $k \geq 0$ we have $|C_{t+2k}| \ge 1+5^k$. \end{claim} \begin{proof} Suppose that $z_1,z_2$ are Gaussian integers and the candidates $\frac{z_1}{\sqrt{5^t}}, \frac{z_2}{\sqrt{5^t}}$ of levels $\le t$ belong to $M$. For any $k$, these candidates are also of levels $\le t+2k$. Since $M$ is convex, it contains also the following intermediate candidates of levels $\le t+2k$: $$ \frac{5^k-j}{5^k}\cdot\frac{z_1}{\sqrt{5^t}} + \frac j{5^k}\cdot\frac{z_2}{\sqrt{5^t}} \quad \text{where } j=1,\dots,5^k-1. $$ \end{proof} Note that a candidate $u = \sqrt{5^{-t}}(a+bi)$ of level $\le t$ is a winning candidate of level $\le t$ if and only if $5^t-a^2-b^2$ is a sum $c^2+d^2$ of two squares. Here $a,b,c,d$ are all integers. Constraint C4 follows from the following number-theoretic conjecture of [Ref.~\onlinecite{BGS}]. \begin{conjecture}\label{con:1} Let $A$ be the area of the meniscus $\sqrt{5^t}M_\varepsilon(\theta)$ of the disk of complex numbers of norm $\le\sqrt{5^t}$, and let $S$ be the set of Gaussian integers $a+bi$ in $\sqrt{5^t}M_\varepsilon(\theta)$ such that $5^t-a^2-b^2$ is a sum of two squares (of rational integers). Then $|S| = \Omega(A/t)$. \end{conjecture} Actually, instead of $\Omega$, one finds $\Theta$ in [Ref.~\onlinecite{BGS}], but only the lower bound is used there and also here. Define $W'_t$ to be the set of members $u = \sqrt{5^{-t}}(a+bi)$ of $W_t$ such that $5^t-a^2-b^2$ is a prime of the form $4m+1$. Every such prime is a sum of two squares; see \S\ref{sub:squares}.
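The winning-candidate test just stated is easy to prototype for small $t$: by Fermat's two-squares criterion, a nonnegative integer is a sum of two squares iff every prime $\equiv 3 \pmod 4$ divides it to an even power. A minimal sketch using trial division (not the efficient procedures $P_1,P_2$ of the synthesis algorithms; function names are ours):

```python
def is_sum_of_two_squares(n):
    """Fermat's criterion: n >= 0 is a sum of two squares iff every prime
    p = 3 (mod 4) occurs to an even power in n. Trial division; for
    illustration only, fine for small n."""
    if n < 0:
        return False
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if p % 4 == 3 and e % 2 == 1:
                return False
        p += 1
    # Any leftover factor is prime (or 1); it must not be = 3 (mod 4).
    return n % 4 != 3

def winning(t, a, b):
    """Is u = (a+bi)/sqrt(5^t) a winning candidate, i.e. is
    5^t - a^2 - b^2 a sum c^2 + d^2 of two squares?"""
    return is_sum_of_two_squares(5**t - a * a - b * b)
```

For instance, $u = (2+i)/\sqrt5$ is winning at level $1$ since $5 - 4 - 1 = 0 = 0^2 + 0^2$.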
To justify C4$'$, we need an additional number-theoretic conjecture. \begin{conjecture}\label{con:2} Let $A,S$ be as in Conjecture~\ref{con:1}, and let $S'$ consist of the numbers $a+bi\in S$ such that $5^t-a^2-b^2$ is a prime of the form $4m+1$. Then $|S'|=\Omega(A/t)$. \end{conjecture} Both conjectures were found credible by the experts in analytic number theory that we consulted. The conjectures are also supported by experimentation. The intuition behind the conjectures is that there is no correlation between the sets $S,S'$ on the one side and prime numbers on the other. As far as the sets $S$ and $S'$ are concerned, the distribution of prime numbers could be random. ``It is evident that the primes are randomly distributed but, unfortunately, we don't know what `random' means'' quipped the number theorist Bob Vaughan in 1990 [Ref.~\onlinecite{MSE}]. \subsection{The first circuit-synthesis algorithm} \label{sub:syn1} Our first circuit-synthesis algorithm is presented in Figure \ref{fig:syn1}, where $P$ is a procedure that takes a positive integer $n$ as input and returns a complex number $c+di$ with $c^2+d^2 =n$ or returns Nil. We give two variants of the synthesis algorithm that correspond to the two scenarios of \S\ref{sec:detsearch}. The two variants differ only in their versions $P_1, P_2$ of the procedure $P$. \begin{figure} \caption{First synthesis algorithm} \label{fig:syn1} \end{figure} \noindent\emph{Procedure~$P_1$}\\ This (and only this) version of $P$ presumes the availability of a quantum computer. \begin{enumerate} \item Use Shor's algorithm [Ref.~\onlinecite{Shor}] to factor the given number $n$. Shor's algorithm works in polynomial time on a quantum computer. \item Return Nil if some prime factor of $n$ of the form $4m+3$ has an odd exponent in the representation of $n$ as a product of powers of distinct primes. \item Use the algorithm of Lemma~\ref{lem:S2S} to find a representation $n=c^2+d^2$ of $n$ as a sum of two squares; return $c+d i$.
\end{enumerate} \noindent\emph{Procedure~$P_2$} \begin{enumerate} \item Check whether $n$ has the form $4m+1$, and return Nil if not. \item Use the Lenstra-Pomerance version of the AKS primality test that runs in time $(\log n)^6\cdot (2+\log\log n)^c$ for some real constant $c$ [Ref.~\onlinecite{LP}]. (Alternatively use Miller's primality test that runs in time $O((\log n)^4)$ but assumes the Extended Riemann Hypothesis [Ref.~\onlinecite{Miller}].) Return Nil if $n$ is composite. \item Use the Rabin-Shallit algorithm to find a representation $n=c^2+d^2$ of $n$; return $c+d i$. \end{enumerate} The first variant of the synthesis algorithm runs in expected polynomial time and produces an approximating circuit of the least possible depth. \begin{theorem}\label{thm:syn2} Given a target angle $\theta$ and real $\varepsilon>0$, the second variant of the synthesis algorithm works in expected polynomial time and constructs a Pauli+V circuit of depth at most $3\log_5\frac1\varepsilon + O(\log\log\frac1\varepsilon)$ that approximates the axial rotation $R_z(\theta)$ within $\varepsilon$. \end{theorem} \begin{proof}\mbox{} \noindent\emph{Correctness}\quad It should be obvious at this point that the algorithm produces a circuit that approximates $R_z(\theta)$ within $\varepsilon$. \noindent\emph{Circuit depth}\quad By Lemma~\ref{lem:explore'}, the synthesis algorithm produces a circuit of depth at most $t_0 + O(\log t_0)$ where $t_0$ is the least index $t$ such that $|C_t|\ge2$. It suffices to prove the following lemma. \begin{lemma}\label{lem:t0} $t_0 \le 3\log_5(1/\varepsilon)$ for sufficiently small positive $\varepsilon$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:t0}] Let $M= M_\varepsilon(\theta)$. By Lemma~\ref{lem:arc}, ${\rm Area}(M) \ge c\varepsilon^3$ where $c\approx 2\sqrt2$. Let $s = 3\log_5(1/\varepsilon)$. Then \[ {\rm Area}(\sqrt{5^s}M) \ge 5^s\cdot c\varepsilon^3 = (1/\varepsilon)^3 c\varepsilon^3 = c > 2.
\] The number $N(t)$ of Gaussian integers in $\sqrt{5^t}M$ is asymptotically equal to ${\rm Area}(\sqrt{5^t}M)$ as $t\to\infty$ [Ref.~\onlinecite{Huxley}]. So, for sufficiently small positive $\varepsilon$, $N(s)>2$ and $\sqrt{5^s}M$ contains at least 2 Gaussian integers. \end{proof} \noindent\emph{Running time}\quad By Lemma~\ref{lem:explore'}, $W'_t$ contains at least two elements for some $t$ of the form $t_0 + b\log t_0$ where $b\ge0$. Accordingly, the synthesis algorithm needs to explore only candidates of levels $\le t$. Each such candidate has the form $\frac u{\sqrt{5^t}}$ where $u$ is a Gaussian integer in $\sqrt{5^t}M$. The number of such Gaussian integers is asymptotically equal to ${\rm Area}(\sqrt{5^t}M) = 5^t{\rm Area}(M) \le 5^t4\sqrt2\varepsilon^3$ where the inequality follows from Lemma~\ref{lem:arc}. By Lemma~\ref{lem:t0}, $t= 3\log_5(1/\varepsilon) + d\log_5\log(1/\varepsilon)$ for some $d\ge0$, so that the algorithm needs to explore at most \[ (1/\varepsilon)^3 \log^d(1/\varepsilon) 4\sqrt2 \varepsilon^3 = \Theta(\log^d(1/\varepsilon)) \] candidates, that is, only a polynomial number of candidates. In the previous subsection, we explained how to enumerate all the candidates in $C_t$. To this end we have to walk through all the vertical segments $[n+ig(n),n+if(n)]$ where $l_t\le n\le r_t$. There are only polynomially many of those vertical segments, namely \begin{equation}\label{half3} \sqrt{5^t}\varepsilon^{3/2} = (1/\varepsilon)^{3/2} \log^{d/2}(1/\varepsilon) \varepsilon^{3/2} = \log^{d/2}(1/\varepsilon). \end{equation} Each vertical segment contains only polynomially many Gaussian integers, and the exploration of one candidate involves a procedure $P$ that runs in expected polynomial time. The whole algorithm runs in expected polynomial time. \end{proof} \begin{remark} The reader may wonder whether the transformation $\tau$ was necessary. It was. The original $M$ is enclosed in a vertical band of width that may be of the order of $\varepsilon$.
Replacing $\varepsilon^{3/2}$ with $\varepsilon$ gives us exponentially many vertical segments. \end{remark} \subsection{The second circuit-synthesis algorithm} \label{sub:syn2} The only essential difference between our second synthesis algorithm SA2 and the first one is its version of procedure $P$, whose input includes a positive real $\delta$ in addition to an integer. But this influences the forms of input and output of SA2. The input of SA2 comprises three components: a target angle $\theta$, a precision $\varepsilon>0$ and an error tolerance $\delta>0$. SA2 returns either an approximation to the rotation $R_z$ or Nil. The probability of returning Nil is at most $\delta$. SA2 is presented in Figure~\ref{fig:syn2}. \begin{figure} \caption{Second synthesis algorithm} \label{fig:syn2} \end{figure} But, before we describe the new version of $P$, let's address Rabin's primality test [Ref.~\onlinecite{RabinPrime}]. It is an efficient polynomial-time primality test with a parameter $k$. If $n$ is prime then the test result is always correct. For a composite $n$ the test may declare $n$ to be prime, but the probability of such an error is tiny. ``The algorithm produces and employs certain random integers $0 < b_1,\dots,b_k < n$\dots'' writes Rabin in [Ref.~\onlinecite{RabinPrime}], ``the probability that a composite number will be erroneously asserted to be prime is smaller than $\frac1{2^{2k}}$. If, say, $k = 50$, then the probability of error is at most $\frac1{2^{100}}$.'' Now we are ready to describe the new version of procedure $P$. \begin{enumerate} \item Check whether $n$ has the form $4m+1$, and return Nil if not. \item Use Rabin's primality test with $k\ge \frac12 \log_2\frac1\delta$, so that the error probability $2^{-2k}$ is at most $\delta$. Return Nil if $n$ is found to be composite. \item Apply the algorithm of Proposition~\ref{pro:R2} to $n$ and $\delta$. \end{enumerate} \begin{corollary} The new procedure $P$ returns an S2S representation of $n$ or Nil. The probability of returning Nil is at most $\delta$.
\end{corollary} \begin{theorem}\label{thm:syn3} Given a target angle $\theta$ and real $\varepsilon,\delta>0$, the second synthesis algorithm SA2 works in polynomial time and returns Nil or constructs a Pauli+V circuit of depth at most $3\log_5\frac1\varepsilon + O(\log\log\frac1\varepsilon)$ that approximates the axial rotation $R_z(\theta)$ within $\varepsilon$. The probability of returning Nil is at most $\delta$. \end{theorem} \begin{proof} If the primality test works as intended, SA2 works essentially like the first one. If the primality test errs, which happens with probability at most $\delta$, SA2 returns Nil or constructs a Pauli+V circuit of depth at most $3\log_5(1/\varepsilon) + O(\log\log\frac1\varepsilon)$ that approximates the axial rotation $R_z(\theta)$ within $\varepsilon$. \end{proof} \appendix \section{Three Free Rotations} \label{three:free:rotation} In this appendix, we prove item~(1) of Theorem~\ref{thm:free}. We shall be concerned with reduced words $w$ built from the formal symbols (letters) $l_1$, $l_2$, $l_3$, and their (formal) inverses. (As in the theorem, ``reduced'' means that no $l_i$ occurs next to its own inverse in $w$.) We write $w[R]$ for the product obtained by replacing each formal symbol $l_i$ by the corresponding matrix $R_i$ exhibited just before the statement of Theorem~\ref{thm:free}, and replacing ${l_i}^{-1}$ by the inverse matrix (also the transpose, as the matrices are orthogonal). We must show that, if a reduced word $w$ has length $t$, then the corresponding matrix $w[R]$ contains at least one entry which, when written as a fraction in lowest terms, has denominator $5^t$. We shall prove this result by induction on the length $t$ of the word. It is clearly true for $t=0$ and $t=1$. For the induction step, we begin with some preliminary considerations to simplify the problem. 
Each of the matrices $R_i$ and each of their inverses can be written as $1/5$ times an integer matrix $S_i$, namely \[ S_1= \begin{pmatrix} 5&0&0\\0&-3&4\\0&-4&-3 \end{pmatrix},\quad S_2= \begin{pmatrix} -3&0&-4\\0&5&0\\4&0&-3 \end{pmatrix},\quad S_3= \begin{pmatrix} -3&4&0\\-4&-3&0\\0&0&5 \end{pmatrix}. \] and $S_1^\top, S_2^\top,S_3^\top$. Note that we are factoring $1/5$ out of each $R_i$ and each of the inverses $R_i^{-1}=R_i^\top$, so the remaining factors are the matrices $S_i$ and $S_i^\top$, not $S_i^{-1}$ (which differs from $S_i^\top$ by a factor 25). Thus, if a reduced word $w$ has length $t$, then $w[R]=(1/5^t)w[S]$, where $w[S]$ is obtained from $w$ by replacing the letters $l_i$ and their inverses $l_i^{-1}$ in $w$ by the matrices $S_i$ and their transposes $S_i^\top$. What we must prove is that, in such a product $w[S]$ of $S$ matrices, we never have all the entries divisible by 5; then, in any entry that is not divisible by 5, the overall factor of $1/5^t$ provides the denominator required for our result. We have thus reduced our task to showing that, if we take a reduced word $w$ and substitute for each letter $l_i$ the corresponding $S_i$ and for each ${l_i}^{-1}$ the transpose $S_i^\top$ of the corresponding $S_i$, then the product matrix $w[S]$ will not have all its entries divisible by 5. We can reduce the task further. Since we are interested only in divisibility by 5, we can reduce all entries in the $S$ matrices, and their transposes, and their products, modulo 5. That is, we can perform the whole calculation using matrices with entries in $\mathbb Z/5$, namely the matrices \[ T_1= \begin{pmatrix} 0&0&0\\0&2&4\\0&1&2 \end{pmatrix},\quad T_2= \begin{pmatrix} 2&0&1\\0&0&0\\4&0&2 \end{pmatrix},\quad T_3= \begin{pmatrix} 2&4&0\\1&2&0\\0&0&0 \end{pmatrix} \] obtained by reducing the $S$ matrices modulo 5. 
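The key claim — that for a reduced word the product of $S$ matrices never has all entries divisible by 5 — can be checked by brute force for short words. A small sketch (function names are ours; capital letters denote inverse letters, realized by transposes since $S_i^\top = 25\,S_i^{-1}$):

```python
from itertools import product

# The integer matrices S_i from the text; capital keys are the transposes,
# which play the role of the inverse letters.
S = {
    'a': [[5, 0, 0], [0, -3, 4], [0, -4, -3]],
    'b': [[-3, 0, -4], [0, 5, 0], [4, 0, -3]],
    'c': [[-3, 4, 0], [-4, -3, 0], [0, 0, 5]],
}
S.update({k.upper(): [list(row) for row in zip(*m)] for k, m in list(S.items())})

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def some_entry_not_divisible_by_5(word):
    """Multiply the S-matrices of a word over a,b,c,A,B,C and check that
    some entry of the product is not divisible by 5."""
    M = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for ch in word:
        M = matmul(M, S[ch])
    return any(x % 5 != 0 for row in M for x in row)

def reduced(word):
    """No letter stands next to its own inverse."""
    return all(x != y.swapcase() for x, y in zip(word, word[1:]))
```

As a sanity check in the opposite direction, the non-reduced word `aA` gives $S_1 S_1^\top = 25I$, all of whose entries are divisible by 5.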
As with the $S$ matrices, we use the notation $w[T]$ for the result of taking a word $w$, replacing each $l_i$ and $l_i^{-1}$ with $T_i$ and the transpose $T_i^\top$, and multiplying the resulting matrices. Thus, $w[T]$ is a $3\times 3$ matrix over the 5-element field $\mathbb Z/5$. Our goal, that the product $w[S]$ of $S$ matrices obtained from a reduced word $w$ does not have all entries divisible by 5, can now be reformulated as saying that the product $w[T]$ of $T$ matrices obtained from a reduced word is not the zero matrix. Not only are the $T$ matrices singular (because each has a column of zeros) but they have rank only 1, because, in each of them, the two non-zero columns are proportional. The same goes for the transposes of these matrices (either by similar inspection or because transposing a matrix doesn't change its rank). Let us write $L_i$ for the one-dimensional subspace of $(\mathbb Z/5)^3$ that is the range of $T_i$ and $L_i'$ for the range of the transpose $T_i^\top$. Thus, $L_i$ (resp.\ $L_i'$) is generated as a vector space by either of the non-zero columns (resp.\ rows) of $T_i$. With these preparations, we are ready for the induction step in the proof. Suppose the claim is true for reduced words of a certain length $t\geq1$, and suppose we are given a reduced word of length $t+1$, say $Qw$, where $Q$ is the first letter of our given word and $w$ is all the rest, i.e., a word of length $t$. The first letter of $w$ (which exists as $t\geq1$) is either some $l_i$ or some $l_i^{-1}$. Let us consider first the case that the first letter of $w$ is $l_i$. (The case of an inverse $l_i^{-1}$ will be similar.) By induction hypothesis, the matrix $w[T]$ is not the zero matrix. Its range is included in the range $L_i$ of $T_i$ (because the range of any matrix product $JK$ is included in the range of the left factor $J$) and must therefore be all of $L_i$ (since $L_i$ has dimension only 1).
To show that $Qw$ also corresponds to a non-zero matrix, it therefore suffices to show that $Q[T]$ does not annihilate $L_i$. (As $Q$ is a single letter or inverse, $Q[T]$ is a single $T$ matrix or transpose.) Here it is important that $Q$ is not $l_i^{-1}$, because $Qw$ is a reduced word. So we need only check that the range $L_i$ of $T_i$ is not annihilated by any of the $T_j$'s or their transposes, except for $T_i^\top$, and this is just a matter of inspection (the many 0's in the matrices make the computations trivial). In the case where the first letter of $w$ is some $l_i^{-1}$, we similarly reduce the problem to showing that the range $L_i'$ of $T_i^\top$ is not annihilated by any of the $T_j$'s or their transposes except for $T_i$. This again can be done by inspection, thereby completing the induction and thus the proof of Theorem~\ref{thm:free}(1). \end{document}
\begin{document} \preprint{AIP/123-QED} \title[]{ Generalized Time-bin Quantum Random Number Generator with Uncharacterized Devices } \author{Hamid Tebyanian} \address{Department of Mathematics, University of York, Heslington, York, YO10 5DD, United Kingdom} \author{Mujtaba Zahidy} \affiliation{ Centre of Excellence for Silicon Photonics for Optical Communications (SPOC), Department of Electrical and Photonics Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark } \author{Ronny M\"uller} \affiliation{ Centre of Excellence for Silicon Photonics for Optical Communications (SPOC), Department of Electrical and Photonics Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark } \author{S{\o}ren Forchhammer} \affiliation{ Centre of Excellence for Silicon Photonics for Optical Communications (SPOC), Department of Electrical and Photonics Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark } \author{Davide Bacco} \affiliation{ Department of Physics and Astronomy, University of Florence, Via Sansone 1, Sesto Fiorentino, 50019, Italy } \author{Leif K. Oxenl{\o}we} \affiliation{ Centre of Excellence for Silicon Photonics for Optical Communications (SPOC), Department of Electrical and Photonics Engineering, Technical University of Denmark, Kgs. Lyngby, Denmark } \begin{abstract} Random number generators (RNG) based on quantum mechanics are captivating due to their security and unpredictability compared to conventional generators, such as pseudo-random number generators and hardware random number generators. This work analyzes how the extractable amount of randomness evolves as the Hilbert space dimension, the state preparation subspace, or the measurement subspace increases, in a class of semi-device-independent quantum RNGs built on the prepare-and-measure scheme, in which bounding the states' overlap is the core assumption. We further discuss the effect of these factors on the complexity and draw a conclusion on the optimal scenario.
We investigate the generic case of the time-bin encoding scheme, define various input (state preparation) and outcome (measurement) subspaces, and discuss the optimal scenarios for obtaining maximum entropy. Several input designs are experimentally tested and analyzed for their conceivable outcome arrangements. We evaluate their performance by considering the device's imperfections, particularly the after-pulsing effect and dark counts of the detectors. Finally, we demonstrate that this approach can boost the system entropy, resulting in more extractable randomness. \end{abstract} \maketitle \section{Introduction} \label{Sec::Intro} Randomness is indispensable for simulation, gambling, and numerous cryptographic applications, e.g., quantum key distribution (QKD) \cite{QKD,Zahidy_2021_OAM}, where the protocol's security is guaranteed by random selections of the encoding and measurement bases \cite{Acin2016}. Traditional randomness generators rely on deterministic processes, which are, in principle, predictable. However, unlike the deterministic evolution of classical systems, quantum mechanics grants the ability to generate genuine randomness based on quantum measurement outcomes that are entirely unpredictable \cite{rev_QRNG,julio_RNG_Tests}. A random number generator (RNG), in general, should deliver unpredictable and secure random numbers by exploiting effective instruments, aiming to make it performant, high-rate, and commercially affordable. A quantum RNG (QRNG) can be an outstanding choice in satisfying the needs for security, practicality, and affordability; nevertheless, any imperfection in the physical realization may cause information leakage, which an eavesdropper could use to predict the QRNG's outcome \cite{Stipcevic12,Herrero-Collantes2017}. Nowadays, QRNGs are commercially available, symbolizing one of the most successful developments of quantum technologies. In device-dependent (DD) QRNGs, the user must trust the device's performance.
This type of QRNG requires a detailed understanding of the functioning of the in-use devices to constrain the output's randomness \cite{zahidy2021quantum,DD_IDQ,DD_marco,QRNG_gen-new}. Although the randomness of DD QRNGs is guaranteed by quantum theory, any gap between theory and real-world implementation, such as experimental errors, device imperfections, or dishonest producers, may enable an adversary to predict the QRNG's outcomes and thus endanger the system's security \cite{DI_roger,colbeck2011quantum,Colbeck2011,DI_me,rutvij_DI}. At the other extreme, in device-independent (DI) protocols, one can certify randomness without relying on assumptions about the device's performance. These protocols utilize the non-local property of quantum theory to guarantee the output's randomness. DI QRNGs are therefore highly secure, with no assumptions made about the eavesdropper. Implementing DI QRNGs, nevertheless, can be demanding, as it involves conducting a loophole-free Bell test, which is a challenging experimental task with a typically low generation rate \cite{DI_new}. In contrast to DD and DI QRNGs, semi-device-independent QRNGs are based on protocols that allow for high-rate generation, acceptable security, and simplicity of implementation \cite{semi-DI,MDI_new,semi_DI_new,SDI_prx}. In this class, the performance is boosted by making a few assumptions on the working principle of the experimental apparatus, e.g., trusting the measurement \cite{sdi_neww,SDI_Avesani2022} or the preparation device \cite{MDI_new,MDIII}, or weaker hypotheses such as bounding the energy or the overlap \cite{Tebyanian_2021,Brask2017} of the generated states, while guaranteeing security by accounting for all possible attacks within these assumptions \cite{Ma2016}. \begin{figure*} \caption{Possible input-output configurations for the four time-bin case.
Inputs: Sets I, II, and III show input configurations where one, two, and three weak coherent states are positioned in time-bins, respectively. Subsets I and II are subsections of set II where the overlap is mixed. Outputs: the 16 possible outcome configurations for the four time-bin case, where some are theoretically impossible, e.g., obtaining four detection events, yet real-world errors such as the detectors' dark counts make them possible.} \label{Fig::PermutStates} \end{figure*} This work studies a class of semi-DI QRNGs based on bounding the states' overlap, employing a time-bin encoding scheme and single-photon detection. The overlap bound guarantees that the prepared states are non-orthogonal and hence that no measurement can perfectly distinguish them \cite{VanHimbeeck2017semidevice, Brask2017}. The user's inability to predict the measurement outcome is the source of randomness, while the indistinguishability of the states is the source of security from the perspective of the measurement apparatus. The entropy and extractable randomness are optimized and compared with the help of semi-definite programming (SDP). We discuss the improvement in entropy and randomness generation rate as the number of time-bins or input states increases. The main contribution of this work is to investigate the impact of increasing or adjusting the number of time bins on the extractable amount of randomness and the system's generation rate under the security assumption. We found an upper bound on the number of input-output configurations for a general number of time bins and showed that the system's entropy improves with an increasing number of time bins. We also discuss the experimental challenges from both the state preparation and measurement points of view.
Similarly, we demonstrate that the generation rate increases by optimally dispersing the weak coherent state (WCS) in time-bin configurations, which can significantly enhance this approach's performance for practical applications. \section{Protocol} \label{SEC::Protocol} The QRNG protocol introduced here is based on the prepare-and-measure scenario, where the prepared states' overlap is bounded while no other assumptions are required on the rest of the setup \cite{tpsk_QPSK,Tebyanian_2021,avesani2020}. \subsection{Preparation and Measurement Stages} \label{SubSEC::PrepMeas} Quantum mechanics does not allow any measurement to distinguish non-orthogonal states perfectly \cite{Barnett:09}. This feature can be used to generate random numbers without trusting the measurement apparatus. Here, we address a general case of non-orthogonal states in a time-bin encoding with $n$ bins and $m$ distributed weak coherent pulses $| \alpha \rangle$. The states $\ket{\psi_i}$, \begin{equation} \ket{\psi_i} = |0\rangle^{\otimes (n-m)} \otimes | \alpha \rangle^{\otimes m} = |0\rangle \otimes | \alpha \rangle \otimes ... \otimes |\alpha\rangle\otimes |0\rangle, \end{equation} are formed by permuting the $m$ WCSs in the $n$ bins, where the remaining bins are filled with vacuum states (VS). The states $\ket{\psi_i}$ are required to respect an overlap condition that satisfies the protocol's assumption: \begin{equation} |\bra{\psi_i}\ket{\psi_j}| \geq \delta,\quad \forall \; i\neq j, \label{EQ::OverlapCond} \end{equation} where $\delta$ is the overlap bound. The non-zero overlap guarantees that no measurement can perfectly distinguish the states, hence allowing one to generate secure randomness from the ambiguity therein \cite{Barnett:09}. A simple illustration of state formation in time-bin encoding can be found in \cite{Tebyanian_2021}.
In this scenario, the general case is defined by allowing the number of time-bins $n$ to increase without limit, as well as the number of WCSs $m$, where $1 \leq m < n$. We denote a configuration of $n$ time-bins and $m$ WCSs as an \textit{$(n,m)$-configuration}. The number of states in an $(n,m)$-configuration is given by the binomial coefficient $C_n^m=n!/(m!(n-m)!)$, formed by all possible combinations of placing $m$ WCSs in $n$ time-bins. However, not all groups of states in a configuration respect the overlap bound, Eq. (\ref{EQ::OverlapCond}). A careful examination of the combinations shows that in an $(n,m)$-configuration, there are subsets of states with specific overlaps. Each subset is then divided into groups of states that are equivalent w.r.t. the overlap value. Fig. (\ref{Fig::PermutStates}) shows the $(4,2)$-configuration and its subsets with different overlap values. Note that while the four groups of subset I are not closed w.r.t. each other, adding any element of another group to any of them violates the overlap bound. It is easy to show that the number of subsets is equal to \[ \begin{cases} m & \quad \text{if } \quad 2m-n \leq 0 \\ n-m & \quad \text{if } \quad 2m-n > 0. \end{cases} \] Consequently, an $(n,m)$-configuration can have a total overlap value of the form \begin{equation} \begin{split} |\bra{\psi_i}\ket{\psi_j}| & = \langle 0 | 0 \rangle^{n-2m+s} \langle 0 | \alpha \rangle^{2(m-s)} \langle \alpha | \alpha \rangle^s \\ & = \langle 0 | \alpha \rangle^{2(m-s)}, \end{split} \label{over} \end{equation} where $s$ is the number of coinciding WCSs (each contributing a factor $\langle \alpha | \alpha \rangle = 1$). We denote an $(n,m)$-configuration with $s$ coinciding WCSs as $n_{m,s}$, with $n > m \geq s$. In the following, we will only consider the case of equality in Eq. \eqref{EQ::OverlapCond}.
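As a quick sanity check of this counting, the number of states, the number of overlap subsets, and the overlap value of Eq. \eqref{over} can be sketched as follows (a minimal illustration; the function names are ours, and we use $\langle 0 | \alpha \rangle = e^{-\mu/2}$ for a coherent state with mean photon number $\mu = |\alpha|^2$):

```python
from math import comb, exp

def num_states(n: int, m: int) -> int:
    # Total number of states in an (n,m)-configuration: C(n,m) = n!/(m!(n-m)!)
    return comb(n, m)

def num_subsets(n: int, m: int) -> int:
    # Number of overlap subsets, following the case distinction in the text
    return m if 2 * m - n <= 0 else n - m

def overlap(mu: float, m: int, s: int) -> float:
    # |<psi_i|psi_j>| = <0|alpha>^{2(m-s)}, with <0|alpha> = exp(-mu/2)
    return exp(-mu * (m - s))

# The (4,2)-configuration of the figure: 6 states split into 2 subsets (s = 0, 1)
print(num_states(4, 2), num_subsets(4, 2))  # 6 2
```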
We denote by $\mathcal{B}(n,m,s)$ the maximum number of states in any subset $\mathcal{S}$ of the $(n,m)$-configuration such that all elements in $\mathcal{S}$ pairwise share the same value of $s$, with $s$ defined as in Eq. \eqref{over}. It is important to know $\mathcal{B}$ for any configuration, as it defines the number of inputs and possible outputs in our prepare-and-measure QRNG protocol. This question is closely related to \textit{constant weight binary codes}. To see this, we can identify bins that contain a WCS with `1' and bins that contain the vacuum state with `0', so that each state in an $(n,m)$-configuration is identified with a binary vector of length $n$ and weight $m$. Each subset $\mathcal{S}$ can then be directly identified with a code of length $n$, Hamming distance $d$, and weight $m$, where the Hamming distance and $s$ are related by $d=2(m-s)$. Eq. \eqref{over} can then be written as $|\bra{\psi_i}\ket{\psi_j}| = \langle 0 | \alpha \rangle^d$. In the context of constant weight binary codes, there exists the well-known but open question of determining the maximum number of codewords $\mathcal{A}(n,m,d_{\text{min}})$, where $d_{\text{min}}$ refers to the minimum distance of the code. $\mathcal{B}(n,m,s)$ can be upper-bounded by $\mathcal{A}(n,m,2(m-s))$, which in turn can be upper-bounded by different theoretical bounds \cite{1057714, upp, schri}. Lower bounds on $\mathcal{A}$, typically obtained by explicit construction \cite{59932, montemanni}, cannot be applied to $\mathcal{B}$, as the codes can contain state-pairs with $d>d_{\text{min}}$, which translates into a violation of Eq. \eqref{EQ::OverlapCond} since $\delta = \langle 0|\alpha \rangle^d$; increasing $d$ reduces the overlap value and therefore the ambiguity in their measurement. Instead, we show here an explicit lower bound $C$ by a simple construction: for $2m-n\leq0$, all codewords share $s$ `1's at the same positions.
Distribute the remaining $m-s$ ones in the remaining $n-s$ slots so that no ones coincide, and fill the $R=n- \lfloor \frac{n-s}{m-s} \rfloor (m-s)-s$ leftover columns with zeros.\\ \[ \mathcal{B}= \begin{bmatrix} 1 & ... & 1 & \coolover{n-s}{ 1 & ... & 1 & 0 & ... & 0 & 0 & ... & 0 & 0 &...&0}\\ ... & ... & 1 & 0 & ... & 0 & 1 & ... & 1 & 0 & ... & 0&0&...&0 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & 0 & ... & 0 &0&...&0\\ \coolunder{s}{1&...&1} & \coolunder{m-s}{0&...&0} & \coolunder{m-s}{0&...&0} & \coolunder{m-s}{1&...&1} & \coolunder{R}{0&...&0}\\ \end{bmatrix} \] This results in $C = \lfloor \frac{n-s}{m-s} \rfloor$ different states. If instead $2m-n>0$, all codewords share $n+s-2m$ zero positions and the remaining $2m-s$ slots are divided into sections with $m-s$ zeros. $\mathcal{B}$ can therefore be lower-bounded by \begin{equation} \mathcal{B}(n,m,s) \geq C(n,m,s) = \begin{cases} 1 & \text{if} \quad n+s-2m < 0\\ \lfloor \frac{n-s}{m-s} \rfloor & \text{if} \quad 2m \leq n\\ \lfloor \frac{2m-s}{m-s} \rfloor & \text{if} \quad 2m > n \end{cases} \end{equation} In the absence of noise or errors, the number of all possible outcomes, $B$, follows from the click or no-click events when a state is sent. For an $n_{m,s}$-configuration, the number of distinct outcomes is obtained as \begin{equation} B = C (2^m - 1)- 2^{m-s} + 1. \label{EQ::NumOutcomes} \end{equation} In the simplest case, only one WCS is placed ($m=1$) in each state, regardless of the number of bins; see Fig. \ref{Fig::PermutStates} (set I). There are always $B=n+1$ possible outcomes in this case: one for each input plus one for the no-click (indeterminate) event, which occurs randomly, suggesting that the entropy should be minimal. Fig. \ref{Fig::PermutStates} (Sets II and III) shows the cases with $m=2$ and $m=3$, respectively. Note that the case with $m=2$ WCSs has two subsets, with 1 and 2 coinciding WCSs; there are 4 equivalent groups for $m=2$ and 3 for $m=3$.
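The case distinction for the lower bound $C(n,m,s)$ and the outcome count of Eq. \eqref{EQ::NumOutcomes} translate directly into code (a sketch with our own function names, assuming $s < m$ so that the overlap in Eq. \eqref{over} is non-trivial):

```python
def lower_bound_C(n: int, m: int, s: int) -> int:
    # Explicit lower bound C(n,m,s) on the subset size B(n,m,s); assumes s < m
    if n + s - 2 * m < 0:
        return 1
    if 2 * m <= n:
        return (n - s) // (m - s)
    return (2 * m - s) // (m - s)

def num_outcomes(c: int, m: int, s: int) -> int:
    # Noise-free number of distinct outcomes: B = C(2^m - 1) - 2^(m-s) + 1
    return c * (2 ** m - 1) - 2 ** (m - s) + 1

# (4,2)-configuration with one coinciding WCS (s = 1)
c = lower_bound_C(4, 2, 1)
print(c, num_outcomes(c, 2, 1))  # 3 8
```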
In the ideal situation, the number of outcomes follows Eq. (\ref{EQ::NumOutcomes}). However, in a real implementation, due to noise, dark counts, or after-pulsing, all $B=2^n$ outcomes, shown in Fig. \ref{Fig::PermutStates} for $n=4$ (Outputs), are possible, although some occur with negligible probability. These errors and imperfections are viewed as classical side-information available to the adversary for predicting the measurement outcome. All possible classical side-information and correlations (between preparation and measurement sides) are considered in the security estimation. The user can monitor these correlations and stop the protocol upon observing considerable noise. \subsection{Security estimation} \label{subsec::SecEst} Although the generation of random numbers in a QRNG is based on the intrinsic probabilistic nature of quantum mechanics, the raw output data are a mixture of sequences generated by deterministic classical sources and quantum processes. Therefore, it is essential to estimate the amount of extractable randomness in a defined protocol and later use it to exclude the classical contribution. The min-entropy ($H_{\min}$) measures the maximum extractable randomness, assuming an adversary who can optimally guess the generator's outcome knowing the working principle of the devices. To account for side information, we use the conditional min-entropy and consider only classical side-information. Throughout this work, we assume a trusted source with no quantum correlation to the outside world. The conditional min-entropy of the variable $b$ conditioned on classical side-information $E$ reads \cite{Tomamichel2011} \begin{equation} H_{\min}(b|E) = -\log_2 P_{\text{guess}}(b|E), \label{EQ::H_min} \end{equation} where $P_{\text{guess}}$ is the maximum probability that an adversary can guess the measurement outcome with a complete understanding of the devices' working principle and classical noises.
In a semi-DI framework, the guessing probability should be maximized over all possible preparation and measurement strategies. $P_{\text{guess}}$ reads: \begin{equation} P_{\text{guess}}= \mathop {\max}\limits_{p(x),\psi_x,M_{b}^{{\varsigma}}}\bigg\{\sum\limits_{x = 0}^{I-1} p(x) \sum\limits_{{\varsigma} } \mathop{\max}\limits_{b}\big[\bra{\psi_x} M_{b}^{{\varsigma}} \ket{\psi_x}\big]\bigg\}, \label{P_gg} \end{equation} where $p(x)$ is the probability of transmitting input $x$, $M_{b}^{{\varsigma}}=P(\varsigma)\Pi_{b}^{{\varsigma}}$ are weighted measurement strategies over all positive operator-valued measures (POVMs), and $\varsigma$, known by the adversary, represents the classical correlations between the measurement devices and the environment (e.g., the adversary). Each POVM $\Pi_{b}^{{\varsigma}}$, labeled by $\varsigma$, can be implemented with probability $P(\varsigma)$. $I$ and $B$ are the numbers of inputs and outcomes, respectively. As shown in \cite{Bancal_2014}, the maximizations in Eq. (\ref{P_gg}) can be grouped, since they occur for the same value of $b$ at a given $x$, which significantly simplifies the optimization. Therefore, the total number of possible measurement strategies for a given input is $B^I$; thus $\varsigma \in \{\varsigma_0,\dots, \varsigma_{I-1}\}$, where $\varsigma_s \in \{0,\dots,B-1\}$. Following the approach presented in \cite{Tebyanian_2021,Brask2017,d_out}, $P_{\text{guess}}$ for the balanced-input case, $p(x)=1/I$, can be written as: \begin{equation} P_{\text{guess}} = \frac{1}{I}\mathop{\max }\limits_{\{M _b^{\varsigma},\hat\rho_x\}} \sum\limits_{x = 0}^{I-1}{\sum\limits_{\varsigma} \Tr[\hat\rho_x M _{\varsigma_x}^{\varsigma} ]}, \label{eq:P_g3} \end{equation} where $\hat\rho_x=\ket{\psi_x}\bra{\psi_x}$, and $\Tr[\hat\rho_x M _{b}^{\varsigma} ]=p(b|x)$ is the conditional probability of obtaining outcome $b$ given input $x$.
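For intuition on Eq. (\ref{EQ::H_min}), in the simplest noiseless two-state setting the adversary's measurement cannot beat the standard Helstrom limit for distinguishing two pure states with overlap $\delta$. The closed form below is that textbook bound, offered only as a benchmark; it is not the output of the SDP discussed next:

```python
from math import log2, sqrt

def p_guess_binary(delta: float) -> float:
    # Helstrom limit for optimally distinguishing two pure states with
    # overlap delta = |<psi_0|psi_1>|, assuming uniform priors
    return 0.5 * (1.0 + sqrt(1.0 - delta ** 2))

def h_min(p_guess: float) -> float:
    # Conditional min-entropy, Eq. (EQ::H_min)
    return -log2(p_guess)

# Orthogonal states: perfectly distinguishable, no randomness;
# identical states: the adversary must guess, yielding one full bit
assert h_min(p_guess_binary(0.0)) == 0.0
assert h_min(p_guess_binary(1.0)) == 1.0
```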
Eq. (\ref{eq:P_g3}) shows that $P_{\text{guess}}$ depends on the states' overlap rather than on the input states $\hat\rho_x$ themselves. Moreover, the optimization problem in Eq. (\ref{eq:P_g3}) can be restricted to an $I$-dimensional Hilbert space; for more detail, see \cite{Brask2017,Tebyanian_2021,Bancal_2014}. The optimization problem in $P_{\text{guess}}$ can be efficiently solved by casting it into a semi-definite program (SDP), a numerical tool for solving complex optimization problems. Following the argument presented in \cite{Tebyanian_2021,d_out,Bancal_2014,Brask2017}, we can show that for the protocol under study, strong duality holds, meaning that the optimal values of the primal and dual forms of the SDP coincide. By feeding the SDP with the experimental conditional probabilities $P(b|x)$ and defining the overlap bound, the SDP can numerically optimize $P_{\text{guess}}$. Afterward, the conditional min-entropy, Eq. (\ref{EQ::H_min}), can be calculated. It should be noted that the security estimation is applicable to multiple input-output (IO) cases. The number of inputs can vary from 2 to the number of available states in an equivalence group of an $n_{m,s}$-configuration. For example, one can choose to send only 2 out of the 4 states in set I in Fig. (\ref{Fig::PermutStates}). The computational cost (CC) is associated with the number of IO in the system and can affect the system's overall generation rate, due to an increase in the time it takes to execute the SDP, which in turn decreases the system's overall efficiency. Thus, it is important to be mindful of the impact of increased computational complexity when considering adding more IO to the system. Fig. (\ref{Fig::com}) shows the CC as a function of the number of IO, obtained on a personal computer. \begin{figure} \caption{Computational cost (CC), colour bar, as a function of the number of inputs and outputs.
Note that the CC is plotted on a logarithmic scale, indicating that the CC increases exponentially with the number of IO.} \label{Fig::com} \end{figure} Given a specific input, an outcome probability is a function of the mean photon number per pulse $\mu$, the detector efficiency $\eta_{\text{det}}$, and noise in the form of background light, dark counts, and after-pulsing. One approach to reducing the complexity of the SDP is to group the outcomes from the adversary's point of view. This can be illustrated with an $n_{1,0}$-configuration where, in the absence of noise, there are $n+1$ different outcomes. The common outcome is the no-click one, and the others are single-click events due to the WCS. In this case, a new variable $E \in \{0,1\}$ can be assigned to the outcomes, in which $E=0$ corresponds to the no-click event, all '0', while $E=1$ for $b\in\{\underbrace{100\dotsb 0}_\text{n}, 010\dotsm0, \dotsm, 0\dotsm01\}$. \begin{equation} \begin{aligned} & P_{\text{guess}}= \mathop {\max}\limits_{p(x),\rho_x}\bigg\{\sum\limits_{x = 0}^{2} p(x) \\ & \sum\limits_{\varsigma_0,\varsigma_1,\varsigma_2 = 0}^{1} \mathop {\max} \big\{\Tr[\hat\rho_x M_{E=0}^{\varsigma_0,\varsigma_1,\varsigma_2 }],1-\Tr[\hat\rho_x M_{E=0}^{\varsigma_0,\varsigma_1,\varsigma_2 }]\big\}\bigg\} \end{aligned} \label{MHZ} \end{equation} For configurations with more WCSs, more variables (corresponding to $E$) should be specified, as there would be more indeterminate events. The many-outcome approach is a computationally simple, effective, and efficient method of increasing the entropy without significantly increasing the CC. This follows from comparing the computational cost of increasing the number of inputs versus the number of outcomes, which shows that the former grows faster; see Fig. (\ref{Fig::com}). Hence, in an $n_{m,s}$-configuration, an efficient strategy is to keep the number of inputs fixed and low and to increase the number of outcomes.
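The outcome grouping described above can be sketched as follows (illustrative only; collecting all multi-click patterns into a single error class is our simplification for the noise-free $n_{1,0}$ case):

```python
from itertools import product

def group_outcomes(n: int):
    # Coarse-grain the 2^n click patterns of an n-bin round:
    # E = 0 for the all-zero (no-click) pattern, E = 1 for single-click
    # patterns; multi-click patterns (noise-induced) are kept apart.
    groups = {0: [], 1: [], "multi": []}
    for pattern in product((0, 1), repeat=n):
        clicks = sum(pattern)
        key = 0 if clicks == 0 else (1 if clicks == 1 else "multi")
        groups[key].append(pattern)
    return groups

g = group_outcomes(4)
print(len(g[0]), len(g[1]), len(g["multi"]))  # 1 4 11
```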
\begin{table}[htb] \centering \includegraphics[width=\linewidth]{Figures/p_3.pdf} \caption{\textbf{Many-input vs many-outcome approach.} \textit{Top:} Many-outcome approach with binary input; examples of many-outcome scenarios with two input states. Note that the overlap value differs in each case. \textit{Bottom:} Many-input approach with categorized outcomes. Note: x and -- represent detection and no-detection events, respectively. } \label{Fig::dout} \end{table} The many-outcome approach is studied for the continuous-variable (CV) case in Ref. \cite{d_out}, where the focus is on heterodyne and homodyne detectors with binary input. In the time-bin encoding scheme, we can control the number of outcomes by adjusting the number of time-bins or the number of WCSs in each configuration. It should be noted that the overlap bound is not considered in this argument and should be added as a constraint when solving the SDP. As an example with binary input, Fig. (\ref{Fig::optii}) shows that the conditional entropy rises as the number of outcomes increases. \begin{figure} \caption{Optimal conditional min-entropy as a function of the number of outcomes with binary input. \textit{Inset:} optimal mean photon number as a function of the number of outcomes for different overlaps.} \label{Fig::optii} \end{figure} As shown in Table (\ref{Fig::dout}), top, the overlap can differ from case to case; this causes the optimal value of the conditional min-entropy to occur at different mean photon numbers; the inset of Fig. (\ref{Fig::optii}) shows the optimal mean photon number as a function of the number of outcomes for different overlaps. Similarly, a many-input case can be introduced while keeping the number of outcomes minimal. Table (\ref{Fig::dout}), bottom, shows examples of possible settings of the many-input approach.
\subsection{Conditional Probability} Given the inputs and the outputs, one can compute the input-output correlation via the conditional probability $p(b|x)$, i.e., the probability of receiving outcome $b$ given input $x$: \begin{equation} p(b|x) = \sum\limits_\varsigma {p_\varsigma} \text{Tr}[\hat\rho_x \hat\Pi_b^\varsigma], \end{equation} where $\hat\rho_x$ are the prepared states, $\hat\Pi_b^\varsigma$ are the POVMs describing the measurement, and $\varsigma$ is the classical variable provided to the adversary, which describes the classical correlations between the experimental devices and the adversary. The detector's dark count rate (DCR) and ambient light are usually considered constant (on average), as they are independent of the incident photon's energy. However, the likelihood of obtaining an afterpulse click is directly related to the system's repetition rate. Some detection events may not be caused by a WCS but could be afterpulses of an earlier detection event; the higher the system's repetition rate, the higher the chance of an afterpulse in the subsequent time-bins. Consequently, it is critical to consider the afterpulsing effect in practical situations. The probability of registering a detection event in the $T^{\text{th}}$ bin is mainly subject to the presence of a WCS in that bin and to afterpulsing due to detections in the earlier bins. Assuming that afterpulsing only happens due to a detection event in the bin immediately before, the probability of detection in bin $T$ can be written as: \begin{equation} \begin{split} P_{\alpha}^{T}(1) & =1-e^{-\eta_{\text{det}}L|\alpha|^2}+\epsilon+P_{ap}P_{\alpha}^{T-1}(1) \\ & = 1-e^{-\eta_{\text{det}}L|\alpha|^2}\\ & +\epsilon+P_{ap}( 1-e^{-\eta_{\text{det}}L|\alpha|^2}+\epsilon+P_{ap}P_{\alpha}^{T-2}(1)) \\ & \cdots \\ & = \frac{1-e^{-\eta_{\text{det}}L|\alpha|^2}+\epsilon}{1-P_{ap}}.
\end{split} \label{EQ::P_11} \end{equation} where $P_{\alpha}^{T}(1)$ is the probability of registering a detection when sending $|\alpha\rangle$, $\eta_{\text{det}}$ and $L$ are the detector efficiency and the source-measurement loss, $\epsilon$ accounts for the devices' imperfections and classical noises, e.g., dark counts, background noise, etc., and $P_{ap}$ represents the afterpulse probability due to a detection event one bin earlier, an intrinsic characteristic of a single-photon avalanche diode (SPAD) that can be characterized experimentally. In Eq. (\ref{EQ::P_11}), we substituted $P_{\alpha}^{T-2}(1)$ with its value and summed the resulting geometric series to obtain the closed form. The remaining probabilities can be expressed as \begin{equation} \begin{aligned} & P_{\alpha}(0)=1-P_{\alpha}(1) \\ & P_{\varnothing}(1)=P_{ap}(\frac{1-e^{-\eta_{\text{det}}L|\alpha|^2}+\epsilon}{1-P_{ap}})+\epsilon \\ & P_{\varnothing}(0)=1-P_{\varnothing}(1), \end{aligned} \label{sing_pro} \end{equation} where $P_{\alpha}(1)$ and $P_{\varnothing}(1)$ \big(respectively $P_{\alpha}(0)$ and $P_{\varnothing}(0)$\big) represent the probability of registering a click \big(no-click\big) event when the states $\ket{\alpha}$ and $\ket{0}$ are transmitted. Given Eqs. (\ref{EQ::P_11}) and (\ref{sing_pro}), we can compute all the possible conditional probabilities for any input-output dimension. \subsection{Randomness Generation Rate} Besides security, the randomness generation rate is another key parameter of any QRNG. We previously discussed the security estimation for the general case with multiple inputs and outputs in the presence of classical side information and noise, and how it scales. Here, we consider the eventual generation rate in the time-bin protocol. For a weak coherent pulse source with repetition rate $f$, the input-state generation rate, comprising $n$ time-bins, scales down as $f/n$. However, the extractable randomness is determined by $H_{\min}$, Eq.
(\ref{EQ::H_min}), and by the number of states available in an equivalence group of an $n_{m,s}$-configuration. Hence, the rate can be written as \begin{equation} R = \frac{f}{n} \cdot |n_{m,s}| \cdot H_{\min}(n_{m,s}, \upsilon, \eta_{\text{det}}, \mu_{\text{optimal}}), \label{EQ::GenRate} \end{equation} where $|n_{m,s}|$ is the cardinality of the input-state set and $H_{\min}(n_{m,s}, \upsilon, \eta_{\text{det}}, \mu_{\text{optimal}})$ is the maximum extractable entropy from that set, considering the optimal $\mu$, all the sources of noise, and the detector efficiency. As discussed in Section (\ref{subsec::SecEst}), a general closed-form solution for $H_{\min}$ considering all the parameters is not feasible, and this quantity needs to be calculated and optimized for each case. It should be noted that we assume the pulse period $1/f$ to be longer than the detector's dead-time, to avoid missing signals. Additionally, the analysis considers all the possible inputs and outcomes. The investigation becomes more straightforward in the case of the many-input or many-outcome approaches. \begin{figure} \caption{Conditional min-entropy as a function of the states' total overlap. The solid, dashed, and dotted curves represent the theoretical predictions for 3, 4, and 5 input configurations, respectively, while the blue and orange dots show the experimental data for the 4 and 5 input cases, measured with a SPAD with 83\% efficiency.} \label{Fig::res} \end{figure} \section{Experimental implementation} This section investigates the experimental implementation and some practical considerations of this protocol. According to the protocol, the detection apparatus is considered a black box with no assumption on its performance. However, the state generation must respect the overlap criterion, Eq. (\ref{EQ::OverlapCond}), which translates into two conditions: a limited mean photon number $\mu$ per WCS and the WCS positioning in an $n$-time-bin state. \begin{figure} \caption{Schematic of the QRNG setup.
A continuous-wave (CW) laser is carved to form a train of pulses according to the protocol selected by the user. A combination of an attenuator (att.) and a 99:1 beamsplitter brings the power to the single-photon level, where the 99\% output is constantly monitored with a power meter (PM) to certify the overlap condition. The single photons are then sent to a detector (SNSPD) for measurement. The polarization controller (PC) adjusts the polarization to maximize efficiency. The detection events are registered with a time-to-digital converter (TDC). State generation and measurement are governed and synchronized by the field-programmable gate array (FPGA).} \label{Fig::chall} \end{figure} Fig. (\ref{Fig::chall}) shows a schematic representation of the setup. The $n$-time-bin state is generated by carving a 1550 nm continuous-wave laser into pulses with 120 ps pulse width and a repetition rate of 31.25 MHz. Two cascaded intensity modulators, shown as one in the setup, guarantee a high extinction ratio and accurate state generation. The repetition rate is chosen such that it matches the detector's dead-time and minimizes the chance of missed detection events. A field-programmable gate array (FPGA) generates the electrical signal to drive the intensity modulators and to synchronize the measurement apparatus. To verify the overlap criterion, the WCS placement is controlled such that the final state matches a subset; see Fig. (\ref{Fig::PermutStates}). A 99:1 beamsplitter separates the signal, with the 99\% arm redirected to a power meter (PM). A variable optical attenuator (VOA) then sets the mean photon number to $\mu_{\text{optimal}}$, extracted from the security estimation process. The quantum states are then sent to and measured with a superconducting nanowire single-photon detector (SNSPD) with $\sim$30 ns dead-time, $\sim$80 DCR, and $\sim$83\% detection efficiency.
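The closed-form click probability of Eq. (\ref{EQ::P_11}) can be cross-checked against the unrolled afterpulse recursion; a small sketch (all parameter values below are hypothetical, chosen for illustration, not our measured values):

```python
from math import exp

def p_click_closed(mu, eta, L, eps, p_ap):
    # Steady-state click probability for a bin containing |alpha>,
    # Eq. (EQ::P_11): the afterpulse recursion summed as a geometric series
    return (1.0 - exp(-eta * L * mu) + eps) / (1.0 - p_ap)

def p_click_recursive(mu, eta, L, eps, p_ap, depth=200):
    # Unrolled recursion P^T = base + p_ap * P^{T-1}, starting from P = 0
    base = 1.0 - exp(-eta * L * mu) + eps
    p = 0.0
    for _ in range(depth):
        p = base + p_ap * p
    return p

# Hypothetical parameter values, for illustration only
params = dict(mu=0.5, eta=0.83, L=0.9, eps=1e-4, p_ap=0.02)
assert abs(p_click_closed(**params) - p_click_recursive(**params)) < 1e-12
```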
The detection events are then registered with a time-to-digital converter (TDC) with 1 ps resolution and are analyzed for randomness extraction. It is worth noting that in the time-bin encoding, the detector's dead-time is the main limiting factor for high-repetition-rate state generation. \begin{figure*} \caption{Conditional min-entropy as a function of the states' total overlap for Subsets I (A) and II (B) represented in Fig. \ref{Fig::PermutStates}.} \label{Fig::subsets} \end{figure*} \section{Results \& Discussion} This section presents the theoretical and experimental min-entropy of different configurations, with the aim of validating the theoretical estimations. First, the input-output correlation $P(b|x)$ is estimated by performing several measurements with various overlaps and gathering the detector's outcomes $b$ for given inputs $x$. The extractable amount of randomness is evaluated by inserting the input-output correlation and the states' overlap into the SDP, which numerically computes the min-entropy. We consider the simplest case: supplying one bin with a WCS and filling the rest of the bins with VS. The number of possible outcome configurations increases with the number of inputs, leading to a different input-output correlation and entropy. As shown in Fig. \ref{Fig::res}, the amount of extractable randomness conditioned on the classical side-information increases for the cases with a higher number of inputs. Alternative forms of input configurations with more WCSs can also be considered. Taking the 4-input case as an example, as shown in Fig. \ref{Fig::PermutStates}, instead of using the typical input configurations (sets I, II, and III), one can implement subsets I and II, which require a ternary or binary initial seed rather than a quaternary one, reducing the computational complexity; see Fig. \ref{Fig::com}. In Fig. (\ref{Fig::subsets}), the conditional min-entropy is plotted as a function of the states' overlap for subsets I and II.
The dashed curves show the theoretical predictions obtained for our experimental parameters, which are in acceptable agreement with the experimental data taken with an SNSPD with 83\% detection efficiency and for various mean photon numbers. The maximum conditional min-entropy for subsets I and II is $0.759$ and $0.546$, respectively, remarkably higher than for the typical binary and ternary input configurations, at $\sim 0.2$ and $\sim 0.25$, obtained with detectors with 80\% and higher-than-90\% detection efficiencies, respectively \cite{Brask2017,Tebyanian_2021}. It should be noted that this higher entropy is achievable without adjusting the optical setup, requiring changes only in the signal-preparation and post-processing stages. Furthermore, the randomness generation rate improved from $0.11$ and $0.083$ to $0.1897$ and $0.136$, a considerable gain achieved only by redefining the transmitted states. \section{Conclusion} In conclusion, we demonstrated a semi-DI QRNG based on the prepare-and-measure scenario, exploiting a time-bin encoding scheme and a single-photon detection technique, and investigating multiple input-output cases. Furthermore, the protocol is experimentally implemented using commercial off-the-shelf components in a simple all-in-fibre optical setup at telecom wavelength, allowing a straightforwardly tunable input configuration without the need for an optical switch. We show that by holding the number of inputs (outcomes) fixed (minimal), known as the many-outcome (many-input) approach, one can increase the system entropy while keeping the computational complexity low. Additionally, a comprehensive study of time-bin-encoded semi-DI QRNGs is presented where, depending on the needs, one can select appropriate time-bin settings. Moreover, we compared this protocol's results with binary- and ternary-input systems and showed that our protocol is capable of generating more randomness with the same optical setup.
The proposed protocol features strong security, since it only demands bounding the prepared states' overlap; the rest of the setup is not required to be characterized and can be classically correlated with the adversary. Alternatively, this protocol can be implemented at a different wavelength where single-photon avalanche diodes (SPADs) have better detection efficiency, making this proposal chip-integrable. In a nutshell, the semi-DI protocols' main advantage is to reduce the implementation complexity and enhance the generation rate while preserving a high level of security. This paper demonstrates a semi-DI QRNG based on an overlap bound with an easy-to-implement experimental setup, which can produce random numbers at a high rate with robust security, applicable to various input-output configurations. \textbf{Acknowledgment:} This work is supported by the Center of Excellence SPOC - Silicon Photonics for Optical Communications (ref DNRF123), by the EraNET Cofund Initiatives QuantERA within the European Union’s Horizon 2020 research and innovation program grant agreement No. 731473 (project SQUARE), and by VILLUM FONDEN, QUANPIC (ref. 00025298). H. T. acknowledges the Innovate UK Industrial Strategy Challenge Fund (ISCF), project 106374-49229 AQuRand (Assurance of Quantum Random Number Generators). \textbf{Conflict of interest statement:} The authors declare no conflicts of interest regarding this article. \textbf{Data availability:} The data that support the findings of this study are available from the corresponding author upon reasonable request. \end{document}
\begin{document} \begin{abstract} A lattice walk model is said to be reluctant if the defining step set has a strong drift towards the boundaries. We describe efficient random generation strategies for these walks. \end{abstract} \title{ Taming Reluctant Random Walks in the Positive Quadrant} \section{Introduction} Walks on lattices are fundamental combinatorial classes. They appear in many guises, particularly in formal language theory, queuing theory, and combinatorics, as they naturally encode common relations. A typical lattice path model is a set of walks defined by a fixed, finite set of allowable moves (called the \emph{step set}), and a region to which the walks are confined (typically a convex cone). The exact and asymptotic enumeration of lattice paths restricted to the first quadrant (known as quarter-plane models) has been a particularly active area of study of late because of new and interesting techniques coming from different areas of computer algebra and complex analysis~\cite{BoKa09, BoMi10,KuRa11, Rasc12}. Efficient uniform random generation is useful for studying the typical large-scale behavior of walks under different conditions. Models which restrict walks to the upper half plane can be specified by an algebraic combinatorial grammar~\cite{Duch00,BaFl02}. Consequently, efficient random generation schemes can be obtained using several systematic strategies, such as recursive generation~\cite{FlZiVa94} and Boltzmann sampling~\cite{DuFlLoSc02}. Intriguingly, walks restricted to the first quadrant are more complex. Rare is the quarter-plane model with an algebraic generating function that cannot be trivially reformulated as a half-plane model. Overwhelmingly, the cyclic lemma, combinatorial identities, and other grammar-based techniques that are so fruitful in the half-plane case do not easily apply.
Furthermore, only a small proportion of models have a generating function satisfying a differential equation with polynomial coefficients\footnote{For example, amongst the 20~804 small step models with fewer than 6 steps in 3 dimensions, only around 150 appear to be D-finite~\cite{BoBoKaMe15}.}, again excluding a potential source of direct, generic generation techniques~\cite{BaBoJa13}. Rejection sampling is a general technique in which one generates from a simpler superclass, and rejects elements until an element of the desired class is obtained. In the case of lattice paths, a naive rejection strategy could use unrestricted walks as a superset. This is only practical for those quarter-plane models whose counting sequences grow essentially like those of the unrestricted walks. Such is the case when the \emph{drift\/}, or vector sum of the step set, is positive coordinate-wise. Anticipated rejection can also be used when the drift is $\mathbf{0}$, and provides a provably efficient algorithm~\cite{BaSp14}. However, any such strategy is demonstrably doomed to failure when the drift of the step set is negative in any coordinate, as the probability of generating long unconstrained walks which remain in the first quadrant becomes exponentially small. One strategy in the literature is to change the probability on the allowable steps, and consequently forgo the uniformity of the generation~\cite{Bousquet11}. It appears, then, that efficient, uniform random generation for generic quarter-plane lattice path models is a relatively undeveloped topic. \subsection*{Our contribution}The main result of this paper is an efficient rejection algorithm for the \emph{uniform\/} random generation of walks in the quarter plane. It is an application of recent results due to Johnson, Mishna and Yeats~\cite{JoMiYeXX}, and Garbit and Raschel~\cite{GaRa14}, amongst others. It is provably efficient and straightforward to implement. 
It is most impressive on walks whose drift is negative in both coordinates, a property we call \emph{reluctant}, but it also offers notable gains for any model which tends to either boundary. More precisely, we describe a strategy in which every walk of length~$n$ is generated with equal likelihood. The efficiency result holds for quarter-plane models with any step set, and is easily generalized to higher dimensions. Figure~\ref{fig:bigwalk} illustrates a walk of over 18\,000 steps that was generated uniformly at random for the quarter-plane model with reluctant step set \[\ensuremath{\mathscr{S}}=\{(1,0), (0,1), (-1, 0), (1, -1), (-1, -1), (-2, -1) \}.\] \begin{figure} \caption{\sc A random walk with 18\,000 steps in the quarter plane using the step set $\ensuremath{\mathscr{S}}$.} \label{fig:bigwalk} \end{figure} The probability of generating a walk of this length by rejection from the set of unrestricted sequences of steps~$\ensuremath{\mathscr{S}}^*$ is less than $\left(\frac{5.3299}{6}\right)^{18000}\approx 1.75\cdot 10^{-926}$. With our strategy, however, it was generated relatively quickly. Rejection from unrestricted walks is not the only alternative. For the purposes of comparison, we describe a recursive strategy which requires exact enumeration results to be tabulated in advance. This has the potential to be efficient, and is insensitive to the drift of the model, but requires a lot of storage. We discuss this algorithm in Section~\ref{sec:Recursive}. Our alternative sampler is based on a straightforward combinatorial interpretation of an enumerative result. Roughly, we use the fact that for any quarter-plane model, there is a corresponding half-plane model such that, asymptotically, both models have the same exponential growth factor. This implies that a rejection strategy from this half-plane model has a sub-exponential rejection rate. The sub-exponential factors are conjectured to also match in many cases, suggesting that it is in fact a particularly efficient strategy. 
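To make the drift criterion concrete, here is a minimal Python sketch (our own illustration, independent of the implementation discussed later) of naive rejection from unrestricted walks, together with the drift computation that decides when this strategy is viable:

```python
import random

def drift(steps):
    """Coordinate-wise vector sum of the step set."""
    return tuple(map(sum, zip(*steps)))

def naive_rejection(steps, n, rng=None):
    """Uniform quarter-plane walk of length n, by rejection from the
    |steps|^n unrestricted walks.  Only practical when the counting
    sequences of both classes share the same exponential growth,
    e.g. when the drift is non-negative coordinate-wise."""
    rng = rng or random.Random(0)
    while True:
        walk = [rng.choice(steps) for _ in range(n)]
        x = y = 0
        ok = True
        for (i, j) in walk:
            x, y = x + i, y + j
            if x < 0 or y < 0:   # left the quadrant: reject the whole walk
                ok = False
                break
        if ok:
            return walk

# the reluctant step set of Figure 1 has drift (-2, -2) ...
S = [(1, 0), (0, 1), (-1, 0), (1, -1), (-1, -1), (-2, -1)]
assert drift(S) == (-2, -2)

# ... whereas naive rejection terminates quickly for a positive-drift model
w = naive_rejection([(1, 0), (0, 1), (-1, 1), (1, -1)], 40)
assert len(w) == 40
```

For the reluctant set $\ensuremath{\mathscr{S}}$ above, the acceptance probability decays exponentially in $n$, which is exactly what the remainder of the paper works around.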
Asymptotic enumerative results are recalled in the next section. A {\em baseline} algorithm, based on a trivial recurrence, is presented in Section~\ref{sec:naive}. Section~\ref{sec:rejection} describes our main rejection algorithm. Its practical implementation depends on the rationality of the slope of the half-plane model, and is discussed in Subsection~\ref{sec:grammars}. We conclude with some remarks regarding implementation aspects, along with possible extensions. \section{2D lattice path basics} A 2D lattice path model is a combinatorial class consisting of walks on the 2D integer lattice, starting at the origin, and taking steps from some finite multi-set $\ensuremath{\mathscr{S}}\subset \mathbb{Z}^2$ of allowable steps. In this work, we consider the restriction of such walks to the positive quadrant $Q=\mathbb{Z}^2_{\geqslant 0}$, although the general strategy works for a wider set of cones. We use the half-plane $H_\theta$ bounded by a line through the origin: \[ H_\theta= \{ (x,y) : x \sin \theta + y \cos\theta \geqslant 0 \}. \] For a fixed, finite step set~$\ensuremath{\mathscr{S}}\subset\mathbb{Z}^2$, a given cone $C$, and a positive integer $n$, we define $\operatorname{\sf walks} (C, \ensuremath{\mathscr{S}}, n)$ to be the class of walks of length~$n$ starting at the origin, taking steps in $\ensuremath{\mathscr{S}}$, and staying in $C$. Formally, \[ \operatorname{\sf walks} (C, \ensuremath{\mathscr{S}}, n)=\{ x_0,x_1,\dots,x_n\mid x_0= (0,0) \,\wedge\, x_{j+1}-x_j\in\ensuremath{\mathscr{S}} \,\wedge\, x_j\in C \}. \] The complete class is given by $\operatorname{\sf walks} (C,\ensuremath{\mathscr{S}})=\bigcup_{n\geqslant 0} \operatorname{\sf walks} (C, \ensuremath{\mathscr{S}}, n)$. We use the following enumerative quantities in our analysis: \begin{equation}\label{eqn:qn} q_n^\ensuremath{\mathscr{S}}=|\operatorname{\sf walks} (Q,\ensuremath{\mathscr{S}}, n)| \qquad\text{and}\qquad h^\ensuremath{\mathscr{S}}_n(\theta)=|\operatorname{\sf walks} (H_\theta, \ensuremath{\mathscr{S}}, n)|. 
\end{equation} The asymptotic regime for $q_n^\ensuremath{\mathscr{S}}$ is always of the form \begin{equation}q^\ensuremath{\mathscr{S}}_n \sim \gamma\,\rho^{-n}\, n^{-r}, \label{eq:asymptwalks} \end{equation} for real numbers $\rho$ and $r$. We refer to $\rho^{-1}$ as the exponential growth factor of the model. The asymptotic regime critically (although not exclusively) depends on the \emph{drift\/} of the step set~$\ensuremath{\mathscr{S}}$, defined as $\operatorname{\sf drift} (\ensuremath{\mathscr{S}})= \sum_{s\in \ensuremath{\mathscr{S}}} s$. A walk model $\operatorname{\sf walks}(Q, \ensuremath{\mathscr{S}})$ is said to be \emph{reluctant\/} when $\operatorname{\sf drift} (\ensuremath{\mathscr{S}})=(\delta_1, \delta_2)$ with $\delta_1<0$ and $\delta_2<0$. Reluctant models for the positive quadrant have exponential growth factors that are strictly smaller than the number of steps. It follows that the naive algorithm that performs rejection from unconstrained walks has exponential time complexity, motivating our algorithmic contribution. Indeed, the exponential growth factor can already be smaller than the number of steps when only one of $\delta_1$, $\delta_2$ is negative. \section{Basic recursive random generator}\label{sec:naive} The exact value of~$q^\ensuremath{\mathscr{S}}_n$ can be expressed using a recurrence. This motivates a straightforward instance of the recursive method~\cite{Wilf1977,FlZiVa94}, where steps are simply drawn sequentially, using probabilities that depend both on the current position reached, and the number of remaining steps. \subsection{Exact enumeration of walks} Define $q^\ensuremath{\mathscr{S}}_n(x,y)$ to be the number of positive suffixes of walks in $\operatorname{\sf walks} (Q, \ensuremath{\mathscr{S}}, n)$ which start from the point $(x, y)$ and remain in the positive quadrant. 
Such suffix walks of length $n$ can be factored as a first step $(i,j)\in\ensuremath{\mathscr{S}}$ that keeps the walk in the positive quadrant, followed by another positive suffix of length $n-1$ starting at $(x+i,y+j)$. This leads to the recurrence \begin{equation}\label{eq:qnij} q^\ensuremath{\mathscr{S}}_n(x,y) = \begin{cases} \displaystyle \sum_{\substack{(i, j)\in \ensuremath{\mathscr{S}} \text{ s.t.}\\x+i\ge0,\,y+j\ge0 }} q^\ensuremath{\mathscr{S}}_{n-1}(x+i,y+j)& \mbox{if }n>0,\\ 1 & \mbox{if } n=0. \end{cases} \end{equation} A quadrant walk is also the positive suffix of a walk starting at $(0,0)$, thus $q^\ensuremath{\mathscr{S}}_n = q^\ensuremath{\mathscr{S}}_n(0,0)$. This recurrence can also be trivially adapted to handle general cones, higher dimensions, or further constraints on the end-point, e.g. to count or generate meanders, or walks ending on the diagonal. 
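A direct memoized transcription of recurrence~\eqref{eq:qnij} in Python, together with a sampler that anticipates the generation phase of the next subsection by drawing each step with probability proportional to the number of suffixes (an illustrative sketch, not the paper's implementation):

```python
import random
from functools import lru_cache

def quadrant_sampler(steps):
    """Counts q_n(x, y) via the recurrence, plus a uniform sampler that
    picks each step with probability q_{n-1}(x+i, y+j) / q_n(x, y)."""
    @lru_cache(maxsize=None)
    def q(n, x=0, y=0):
        if n == 0:
            return 1
        return sum(q(n - 1, x + i, y + j)
                   for (i, j) in steps if x + i >= 0 and y + j >= 0)

    def sample(n, rng=random.Random(0)):
        walk, x, y = [], 0, 0
        for m in range(n, 0, -1):
            # discrete choice proportional to the number of suffixes
            r = rng.randrange(q(m, x, y))
            for (i, j) in steps:
                if x + i >= 0 and y + j >= 0:
                    w = q(m - 1, x + i, y + j)
                    if r < w:
                        walk.append((i, j))
                        x, y = x + i, y + j
                        break
                    r -= w
        return walk

    return q, sample

# sanity check: simple-walk quarter-plane counts are OEIS A005566
q, sample = quadrant_sampler(((1, 0), (-1, 0), (0, 1), (0, -1)))
assert [q(n) for n in range(5)] == [1, 2, 6, 18, 60]
w = sample(20)
assert len(w) == 20
```

The memoization table plays the role of the preprocessing stage of the next subsection; its size is what limits this baseline in practice.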
\begin{enumerate} \item {\bf Preprocessing.} Precompute $q^\ensuremath{\mathscr{S}}_{n'}(x,y)$ for each $n'\in [0,n]$ and $(x,y) \in [0,n\cdot a]\times[0,n\cdot b]$, where $a:= \max_{(i,j)\in\ensuremath{\mathscr{S}}}i$ and $b:= \max_{(i,j)\in\ensuremath{\mathscr{S}}}j$;\label{step:precomp} \item {\bf Generation.} Starting from $(0,0)$ and $n':=n$, iterate until $n'=0$:\label{step:elongation} \begin{enumerate} \item Choose a step $(i,j)\in\ensuremath{\mathscr{S}}$ with probability $q^\ensuremath{\mathscr{S}}_{n'-1}(x+i,y+j)/q^\ensuremath{\mathscr{S}}_{n'}(x,y)$; \item Add $(i,j)$ to the walk, update the current point ($(x,y) := (x+i, y+j) $), and decrease the remaining length ($n':=n'-1$). \end{enumerate} \end{enumerate} \begin{theorem}[Complexity/correctness] \label{thm:storage} The random uniform generation of~$k$ 2-dimensional walks confined to the positive quadrant can be performed in $\Theta(k\cdot n + n^{3})$ arithmetic operations, using storage for $\Theta(n^{3})$ numbers. \end{theorem} \begin{proof} The preprocessing stage needs to be executed only once for the generation of $k$ walks. In dimension $d=2$, it involves $\Theta(|\ensuremath{\mathscr{S}}| \cdot n^{d+1})=\Theta(|\ensuremath{\mathscr{S}}| \cdot n^{3})$ arithmetic operations, and requires storage for $\Theta(n^{d+1})$ large numbers. The generation of a single walk requires the generation of $\Theta(n)$ random numbers and, for each of them, a comparison with $\Theta(|\ensuremath{\mathscr{S}}|)$ other numbers. An induction argument establishes the correctness of the algorithm. Assume that, for all $n'<N$ and $(x,y) \in [0,n'\cdot a]\times[0,n'\cdot b]$, the positive suffixes are generated uniformly; the base case $n'=0$ is immediate. For $n'=N$, the algorithm chooses a suitable step $(i,j)\in \ensuremath{\mathscr{S}}$, and then recursively generates a suffix -- uniform by the induction hypothesis -- from the updated position. 
The probability of generating any such walk is therefore \[ \mathbb{P}(w) = \frac{q^\ensuremath{\mathscr{S}}_{N-1}(x+i,y+j)}{q^\ensuremath{\mathscr{S}}_N(x,y)} \times \frac{1}{q^\ensuremath{\mathscr{S}}_{N-1}(x+i,y+j)} = \frac{1}{q^\ensuremath{\mathscr{S}}_{N}(x,y)},\] and we conclude with the uniformity of the generation. \end{proof} In practice, however, the memory consumption of the algorithm grows as $\Theta(n^4)$ bits, which limits the utility of this strategy to $n<500$. Thus the above algorithm only serves as a baseline for our alternative based on rejection. \section{Efficient rejection sampler from 1D models}\label{sec:rejection} We recall some basics of rejection sampling for our analysis. Let~$\ensuremath{\mathscr{A}}$ be a combinatorial class which contains the sub-class $\ensuremath{\mathscr{C}}$. Given a random sampler for $\ensuremath{\mathscr{A}}$, we can use a rejection strategy to obtain a random sampler for $\ensuremath{\mathscr{C}}$. Let $a_n$ and~$c_n$ respectively count the number of elements of size $n$ in $\ensuremath{\mathscr{A}}$ and $\ensuremath{\mathscr{C}}$. Following Devroye~\cite[Chapter II.3]{Devr86}, we say that the class $\ensuremath{\mathscr{A}}$ \emph{efficiently covers\/} $\ensuremath{\mathscr{C}}$ if \[\left(\frac{a_n}{c_n}\right) \in \mathcal{O}(n^{p}) \] for some constant $p \ge 0$ independent of $n$. In other words, asymptotically, the expected number of elements drawn from~$\ensuremath{\mathscr{A}}$ before generating an element in~$\ensuremath{\mathscr{C}}$ is polynomial in~$n$. Ideally~$p$ is as small as possible. 
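Efficient covering can be observed numerically. For the simple walk, the half-plane $H_0$ ($y\geqslant 0$) covers the quarter plane with a ratio $h_n/q_n$ growing only like $\sqrt{n}$; a small sketch of our own to illustrate the definition:

```python
from functools import lru_cache

STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))   # the simple walk

@lru_cache(maxsize=None)
def q(n, x=0, y=0):
    """Quarter-plane counts, via the same recurrence as before."""
    if n == 0:
        return 1
    return sum(q(n - 1, x + i, y + j)
               for (i, j) in STEPS if x + i >= 0 and y + j >= 0)

@lru_cache(maxsize=None)
def h(n, y=0):
    """Half-plane (y >= 0) counts: only the vertical coordinate matters."""
    if n == 0:
        return 1
    return sum(h(n - 1, y + j) for (_, j) in STEPS if y + j >= 0)

assert (h(2), q(2)) == (10, 6)
# the ratio h_n / q_n grows only polynomially (roughly like sqrt(n) here),
# so the expected number of rejections stays sub-exponential
ratios = [h(n) / q(n) for n in (8, 16, 32)]
assert ratios[2] / ratios[1] < 2
```

Here both classes share the growth factor $4$, and the covering exponent is $p=1/2$; the point of the next subsection is that such a covering half-plane exists for every quarter-plane model.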
\subsection{Candidate superclass: Half-plane model} Our algorithm arises from the surprising observation made by Johnson, Mishna, and Yeats~\cite{JoMiYeXX}, later proven by Garbit and Raschel~\cite{GaRa14}: \begin{theorem}[Garbit and Raschel~\cite{GaRa14}]\label{th:GR2014} Consider a step set $\ensuremath{\mathscr{S}}$, let $\rho(\theta)^{-1}:=\lim_{n\rightarrow \infty} h^\ensuremath{\mathscr{S}}_n(\theta)^{1/n}$ be the exponential growth factor of the half-plane model $\operatorname{\sf walks} (H_\theta, \ensuremath{\mathscr{S}})$, and define \[\theta^* := \argmax_{0\leqslant \theta \leqslant \pi/2} \rho(\theta).\] Then the growth factor $\rho^{-1}:=\lim_{n\rightarrow \infty} (q^\ensuremath{\mathscr{S}}_n)^{1/n}$ of walks in the positive quadrant $Q$ satisfies: \begin{equation} \label{eq:qnhn} \rho=\rho(\theta^*). \end{equation} \end{theorem} This says that the exponential growth factor of the quarter-plane model is equal to that of a superclass half-plane model. Furthermore, the value of $\theta^*$ is explicitly computable. \begin{corollary}\label{thm:EfficientCover} The combinatorial class $\operatorname{\sf walks} (H_{\theta^*}, \ensuremath{\mathscr{S}})$ efficiently covers $\operatorname{\sf walks} (Q,\ensuremath{\mathscr{S}})$. \end{corollary} Next we consider the sub-exponential factors, as these determine the polynomial complexity of the rejection. On the side of the half-plane walks, the asymptotic formulas for~$h^\ensuremath{\mathscr{S}}_n(\theta)$ can be deduced from the complete generating function study of Banderier and Flajolet~\cite{BaFl02}. The sub-exponential factors are either $n^0$, $n^{-1/2}$, or $n^{-3/2}$, depending on the drift of the model (positive, zero and negative, respectively). For quarter-plane walks, the picture is less complete. 
The case of excursions for models with zero drift was described by Denisov and Wachtel~\cite{DeWa15}, and from this work Duraj~\cite[Theorem~II]{Dura14} was able to deduce explicit formulas for reluctant walks: Let $S(x,y)=\sum_{(i,j)\in \ensuremath{\mathscr{S}}} x^iy^j$, and let $(\alpha, \beta)$ be the unique positive critical point of $S(x,y)$. Such a point always exists, provided that~$\ensuremath{\mathscr{S}}$ satisfies some non-triviality conditions. Then, \begin{equation}\label{eq:qnsim} q^\ensuremath{\mathscr{S}}_n \sim \gamma\,\rho^{-n}\, n^{-r}, \end{equation} where $\rho$ and $r$ satisfy \[ \rho=\frac{1}{S(\alpha,\beta)}\quad\text{ and }\quad r=1+\frac{\pi}{\arccos\left(-\dfrac{S_{xy}(\alpha, \beta)}{\sqrt{S_{xx}(\alpha, \beta)\, S_{yy}(\alpha, \beta)}}\right)}. \] \SetKwFunction{UniformDraw}{UniformDraw} \SetKwFunction{Map}{2DMap} \begin{algorithm}[t] \KwData{Reluctant step set $\ensuremath{\mathscr{S}}\subset\mathbb{Z}^2$, length $n$} \KwResult{$w\in \operatorname{\sf walks}(Q, \ensuremath{\mathscr{S}}, n)$ drawn uniformly at random} \tcp{Determine optimal slope $m=\tan(\theta^*)$ following \cite{JoMiYeXX}} \lIf{$\ensuremath{\mathscr{S}}$ is singular} {$m\gets 0$ } \Else{ Set $S(x,y)=\sum_{(i,j)\in \ensuremath{\mathscr{S}}} x^iy^j$\; Determine $(x,y)=(\alpha, \beta)$, the unique positive solution of $\frac{\partial}{\partial x}S(x,y)=\frac{\partial}{\partial y} S(x,y)=0$\; \lIf{$\beta=1$} {$m\gets\infty $} \lElse{$m\gets\ln \alpha/\ln\beta$} } \tcp{Create suitable grammar $\mathcal{G}$} \lIf{ $m=\infty$}{$p\gets 1$ and $q\gets 0$} \lElseIf{ $m$ is rational}{find $p,q\in\mathbb{N}$ so that $m=p/q$} \lElse{find $p/q$, a $1/\sqrt{n}$-rational approximation to $m$} {$\ensuremath{\mathscr{A}}\gets\{ip+jq:(i,j)\in \ensuremath{\mathscr{S}}\}$\; $\mathcal{G}\gets\operatorname{\sf grammar} (\ensuremath{\mathscr{A}})$\;} \tcp{Main rejection loop} \Repeat{$\Map{$w$} \in Q$}{$w \gets \UniformDraw{$\mathcal{G},n$}$} \caption{Outline of our rejection algorithm. 
\protect\UniformDraw{$\mathcal{G},n$} denotes a uniform sampler of walks of length $n$ for the grammar $\mathcal{G}$, and \protect\Map{$w$} indicates the reinterpretation of $w$ as a sequence of 2D steps.\label{alg:fullmonty}} \end{algorithm} \subsection{The algorithm} Algorithm~\ref{alg:fullmonty} implements a classic rejection from a carefully chosen half-plane model. \begin{theorem}[Complexity of Algorithm~\ref{alg:fullmonty}] Let $\ensuremath{\mathscr{S}}$ be a reluctant walk model and let $M(n)$ denote the time complexity of generating a walk for the half-plane model $H_{\theta^*}$. The expected time taken by Algorithm~\ref{alg:fullmonty} to generate a walk in the positive quadrant is in $\Theta\left(M(n)\times n^{r-3/2}\right).$ \end{theorem} This immediately follows from formula~\eqref{eq:qnsim}, from which we deduce that the expected number of trials is ${h^\ensuremath{\mathscr{S}}_n(\theta^*)}/{q^\ensuremath{\mathscr{S}}_n} \in \Theta(n^{r-3/2}).$ For reluctant small step models, one has $3.3< r < 7.5$. More recently, Garbit and Raschel have conjectured formulas for the sub-exponential factor in the general case, and these remarkably suggest that for many (non-reluctant) models, ${h^\ensuremath{\mathscr{S}}_n(\theta^*)}/{q^\ensuremath{\mathscr{S}}_n} \in \mathcal{O}(1)$. Next we address the efficient uniform random generation of walks in $\operatorname{\sf walks} (H_{\theta^*}, \ensuremath{\mathscr{S}})$. \subsection{Half plane models as unidimensional walks} We now describe efficient samplers for half-plane models $\operatorname{\sf walks} (H_\theta, \ensuremath{\mathscr{S}})$. Remark that walks in any half-plane can be generated as positive 1D walks, by projecting the steps of $\ensuremath{\mathscr{S}}$ orthogonally onto the inward normal of the half-plane boundary. A unidimensional model is defined by a set $\ensuremath{\mathscr{A}}\subset\mathbb{R}$. 
The nontriviality conditions imply that $\ensuremath{\mathscr{A}}$ contains both a positive and a negative element. The associated class of walks begins at $0$ and takes steps which are elements from $\ensuremath{\mathscr{A}}$, such that the sum over any prefix of the walk is nonnegative. If $\ensuremath{\mathscr{A}}$ is a positive multiple of a set of integers, then the class is modelled by a context-free grammar, which we describe in the next section. Otherwise, the class cannot be modelled by a context-free grammar, as is proven in Section~\ref{sec:notCFG}. Given $\ensuremath{\mathscr{S}}$ and $\theta$, we define the associated unidimensional model \[\ensuremath{\mathscr{A}}(\theta)=\{i\sin\theta + j\cos\theta: (i,j)\in\ensuremath{\mathscr{S}}\}.\] The classes $\operatorname{\sf walks}(H_\theta,\ensuremath{\mathscr{S}})$ and $\operatorname{\sf walks} (\mathbb{R}_{\geqslant 0}, \ensuremath{\mathscr{A}}(\theta))$ are in a straightforward bijection. \begin{remark}[\cite{JoMiYeXX}]\label{rem:misc} If $\ensuremath{\mathscr{S}}$ defines a non-trivial 2D quarter-plane model, then $\ensuremath{\mathscr{A}}(\theta)$ defines a non-trivial unidimensional model. Moreover, if $\ensuremath{\mathscr{S}}$ is reluctant, then the drift of $\ensuremath{\mathscr{A}}(\theta)$ is negative. Finally, multiplying all steps by a positive constant does not affect the language of positive walks. \end{remark} Two cases arise, depending on whether or not $\ensuremath{\mathscr{A}}(\theta^*)$ consists of rational-valued steps (up to rescaling). This is equivalent to asking whether the slope of the boundary of the half-plane, $m=\tan(\theta^*)$, is rational. \subsection{Case 1: Rational projected steps} \label{sec:grammars} When $\tan(\theta^*)$ is rational, the steps in $\ensuremath{\mathscr{A}}(\theta^*)$ can be rescaled to integers, as noted in Remark~\ref{rem:misc}; we therefore consider unidimensional models $\ensuremath{\mathscr{A}}$ which consist of integer-valued steps. 
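The reduction from $H_\theta$ to $\ensuremath{\mathscr{A}}(\theta)$ is nothing more than projecting each 2D step onto the inward normal $(\sin\theta,\cos\theta)$ of the boundary; a minimal sketch:

```python
import math

def project(steps, theta):
    """Project 2D steps onto the inward normal (sin t, cos t) of the
    boundary of H_theta, yielding the unidimensional model A(theta)."""
    return [i * math.sin(theta) + j * math.cos(theta) for (i, j) in steps]

# for the simple walk and theta = pi/4 (slope 1, rational), the projected
# steps are +-sqrt(2)/2 ...
S = [(1, 0), (0, 1), (-1, 0), (0, -1)]
A = project(S, math.pi / 4)
r = math.sqrt(2) / 2
assert all(abs(a - b) < 1e-12 for a, b in zip(A, [r, r, -r, -r]))
# ... and rescaling by sqrt(2) gives the integer step multiset {1, 1, -1, -1}
```

When $\tan\theta$ is irrational, no rescaling makes the projected multiset integral, which is precisely the dichotomy studied in the two cases below.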
Combinatorial specifications and, specifically, context-free grammars can then be used for random generation. Context-free grammars are indeed suitable to describe objects following rules which depend on a single integer counter, subject to certain finite constraints. For random walks, this counter typically keeps track of the height of the walk, constrained to remain nonnegative (i.e., the walk remains above the $x$-axis). From a grammar, random objects can be sampled using a variety of generic methods. More generally, this is equivalent to saying that grammars can describe walks that are confined within a \emph{half-plane}. To build the grammar $\operatorname{\sf grammar}(\ensuremath{\mathscr{A}})$ for a unidimensional model defined by a step set $\ensuremath{\mathscr{A}}$, we first distinguish the positive, negative and neutral steps, writing $w(a)$ for the integer value of a step $a$: \begin{align*} \ensuremath{\mathscr{A}}^+ &:= \set{ a \ | \ a\in \ensuremath{\mathscr{A}} \text{ and } w(a) > 0 },& \ensuremath{\mathscr{A}}^-&:= \set{ a\ | \ a\in \ensuremath{\mathscr{A}} \text{ and } w(a) < 0 }, \end{align*} \begin{equation*} \ensuremath{\mathscr{A}}^0 := \set{a \ | \ a\in \ensuremath{\mathscr{A}} \text{ and } w(a) = 0 }\text{,} \end{equation*} and define the largest upward and downward step lengths \begin{align*} \bar{a} &:= \max_{a\in\ensuremath{\mathscr{A}}^+} w(a), & \bar{b} &:= -\min_{a\in\ensuremath{\mathscr{A}}^-} w(a).\end{align*} Both of these lengths are positive and well-defined when the step set satisfies the non-triviality conditions. 
Using these three sets and these two values, we define the associated grammar $\operatorname{\sf grammar}(\ensuremath{\mathscr{A}})$, whose terminals are given by $\ensuremath{\mathscr{A}}$, and whose non-terminals are defined as follows: \begin{align*} \cls{P} &= \cls{D} \times \cls[{\mathrm{aux}}]{P}& \cls[i]{L}&= \sum_{a\in \ensuremath{\mathscr{A}}\atop{w(a) = i}} a \; + \sum_{k=i+1}^{\min(\bar{a}, i+\bar{b})} \cls[k]{L} \cls[k-i]{R}\\ \cls[{\mathrm{aux}}]{P} &= \ensuremath{\varepsilon} + \sum_{k=1}^{\bar{a}} \cls[k]{L} \times \cls[{\mathrm{aux}}]{P} & \cls[j]{R}&= \sum_{b\in \ensuremath{\mathscr{A}}\atop{w(b) = -j}} b \; + \sum_{k=j+1}^{\min(j+\bar{a}, \bar{b})} \cls[k-j]{L} \cls[k]{R} \end{align*} \begin{equation*} \cls{D} = \sum_{c\in \ensuremath{\mathscr{A}}\atop{w(c) = 0}} c \times \cls{D} \; + \sum_{k=1}^{\max(\bar{a},\bar{b})} \cls[k]{L} \times \cls{D}\times\cls[k]{R}\times\cls{D} \end{equation*} This construction follows Duchon~\cite{Duch00} and Bousquet-M\'elou~and~Ponty~\cite{BoPo08}, with minor corrections to the indices that prevent the grammar from referencing undefined rules. The decomposition of a walk is unique, and a schematic of a typical decomposition is presented in Figure~\ref{fig:decomp}. \begin{figure} \caption{Typical decomposition of a walk in $\cls{D}$.} \label{fig:decomp} \end{figure} Given the step set $\ensuremath{\mathscr{A}}$, the grammar $\mathcal{G}=\operatorname{\sf grammar} (\ensuremath{\mathscr{A}})$ can be built in time independent of $n$ (proportional to $\max(\bar{a}, \bar{b})^2$). To generate an element from a context-free grammar, one either uses recursive methods~\cite{Wilf1977} or Boltzmann generation~\cite{DuFlLoSc02}. The grammar here is straightforward, so most common optimizations apply~\cite{Goldwurm1995}. 
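For quick experimentation, the grammar plus a generic sampler can be substituted by a direct recursive sampler for nonnegative unidimensional walks. This sketch is our own (not the paper's generator) and uses quadratic storage in the worst case, which is why the grammar route scales better:

```python
import random
from functools import lru_cache

def sample_nonneg_walk(A, n, rng=None):
    """Uniform nonnegative 1D walk of length n over the integer step
    multiset A, by the recursive method (counts h(m, height) memoized)."""
    rng = rng or random.Random(3)

    @lru_cache(maxsize=None)
    def h(m, height):
        if height < 0:
            return 0          # walk left the nonnegative half-line
        if m == 0:
            return 1
        return sum(h(m - 1, height + a) for a in A)

    walk, height = [], 0
    for m in range(n, 0, -1):
        # choose the next step proportionally to the number of suffixes
        r = rng.randrange(h(m, height))
        for a in A:
            w = h(m - 1, height + a)
            if r < w:
                walk.append(a)
                height += a
                break
            r -= w
    return walk

w = sample_nonneg_walk((2, -1, -1), 30)
assert len(w) == 30
```

Multiset repetitions (here two copies of $-1$) are counted separately, matching the counting of walks as sequences of chosen steps.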
\begin{theorem}[Complexity of rational half-plane sampling] Let $\operatorname{\sf walks} (\mathbb{R}_{\geqslant 0}, \ensuremath{\mathscr{A}})$ be a non-trivial unidimensional model defined by an integer multiset~$\ensuremath{\mathscr{A}}\subset\mathbb{Z}$. The uniform random generation of $k$ walks of length~$n$ in $\operatorname{\sf walks} (\mathbb{R}_{\geqslant 0}, \ensuremath{\mathscr{A}})$ can be performed in $\mathcal{O}(k\cdot n\log n)$ arithmetic operations, using storage for $\mathcal{O}(1)$ numbers. \end{theorem} \begin{corollary} When the step set $\ensuremath{\mathscr{S}}$ yields a rational unidimensional projection $\ensuremath{\mathscr{A}}$, Algorithm~\ref{alg:fullmonty} generates $k$ walks in the positive quadrant using $\mathcal{O}(k\cdot n^{r-1/2}\log n)$ arithmetic operations, where $r$ is the exponent of the sub-exponential term in the asymptotics of $\operatorname{\sf walks} (Q,\ensuremath{\mathscr{S}})$. \end{corollary} \subsection{Case 2: Non-rational projected steps} \label{sec:irrationalslope} When the projected step set $\ensuremath{\mathscr{A}}(\theta^*)$ contains non-rational steps, the associated language is not context-free, and grammars can no longer be used directly. However, it is still possible to use a rational approximation of the perfect half-plane model, at the expense of algorithmic efficiency. \subsubsection{Non-context-freeness of the associated languages}\label{sec:notCFG} \begin{lemma} Let $\ensuremath{\mathscr{S}}\subset\mathbb{Z}^2$ be a finite set which defines a non-trivial quarter-plane model. Let $\theta^*$ be the angle determined by Theorem~\ref{th:GR2014}, and assume furthermore that $m=\tan(\theta^*)$ is irrational. Then the language $\Lang{m}$, whose alphabet consists of the pairs $(i,j)\in \ensuremath{\mathscr{S}}$ and whose words are the walks in $\operatorname{\sf walks}(H_{\theta^*}, \ensuremath{\mathscr{S}})$, is not context-free. 
\end{lemma} \begin{proof}Consider two steps $a, b\in \ensuremath{\mathscr{A}}$, encoded by symbols $s_a$ and $s_b$, such that $a>0$, $b<0$ and $a/b$ is irrational. The existence of such steps follows from the non-triviality of $\ensuremath{\mathscr{S}}$ and the irrationality of $\tan(\theta^*)$. First, recall that the intersection of a context-free language and a rational language is a context-free language. Hence, if the intersection language $$\Lang{m}^{\cap}=\{s_a^*s_b^*\}\cap \Lang{m} = \{s_a^is_b^j\mid a\cdot i+b\cdot j\ge 0\}$$ is not context-free, then neither is $\Lang{m}$. The fact that $\Lang{m}^\cap$ is not context-free can be proven using the context-free version of the pumping lemma, which states that, if $\Lang{m}^\cap$ is context-free, then there exists a word length $p$ above which each word $w\in\Lang{m}^\cap$ can be decomposed as $w=x.u.y.v.z$ such that $|u.y.v|\le p$, $|u.v|\ge 1$, and $\{x.u^i.y.v^i.z\mid i\in\mathbb{N}\}\subset\Lang{m}^\cap$. Letting $\Delta(w) = |w|_{s_a}\cdot a + |w|_{s_b}\cdot b$ denote the (signed) final distance to the boundary of the half-plane, we establish the following technical lemma. \begin{lemma}For any $p\ge 0$, there exists a word $w^*\in\Lang{m}^\cap$, $|w^*|>p$, such that $\Delta(w^*)<\Delta(w)$ for all $w\in\Lang{m}^\cap$, $|w|<|w^*|$.\label{lem:existsCloserWord} \begin{proof} Assume that $p$ is given, and let $\Delta^{\le p}$ denote the smallest final distance to the boundary achieved by a word of length $\le p$, reached by some word $s_a^{x^{\bullet}}s_b^{y^{\bullet}} \in \Lang{m}^\cap$ of length $x^\bullet+y^\bullet\le p$. First we constructively show the existence of a word of length greater than $p$ whose final distance to the boundary is smaller than $\Delta^{\le p}$. Consider the word $$w^{\circ}:=s_a^{K\cdot x^{\bullet}}s_b^{K\cdot y^{\bullet}} s_b\quad\text{ where }\quad K:=\left\lceil\frac{-b}{\Delta^{\le p}}\right\rceil.$$ Since the ratio $a/b$ is irrational, $\Delta^{\le p}\neq 0$, and such a word exists. 
The final distance to the boundary of $w^{\circ}$ is given by: \begin{align*} \Delta(w^\circ) &= K\cdot\Delta^{\le p} + b = \left(\left\lceil\frac{-b}{\Delta^{\le p}}\right\rceil-\frac{-b}{\Delta^{\le p}}\right)\cdot \Delta^{\le p}< \Delta^{\le p}. \end{align*} Consider now the smallest word $w^*\in\Lang{m}^\cap$ such that $\Delta(w^*)<\Delta^{\le p}$. Such a word exists since $\Delta(w^\circ)<\Delta^{\le p}$, and it clearly obeys $|w^*|>p$. Since $w^*$ is the smallest such word, one has $\Delta(w^*)<\Delta(w)$ for every strictly shorter word $w\in\Lang{m}^\cap$, which proves our claim. \end{proof} \end{lemma} Let us now investigate the possible factorizations $x.u.y.v.z$ of the word $w^*$, whose existence is established by Lemma~\ref{lem:existsCloserWord}, and show that none of them satisfies the pumping lemma. Focusing on $u$ and $v$, remark that neither of them may feature both kinds of steps, since otherwise the pumped word $w^{[1]}=x.u^2.y.v^2.z$ would contain an occurrence of $s_a$ after an $s_b$, and thus leave $\{s_a^*s_b^*\}\supseteq\Lang{m}^\cap$. It follows that any satisfactory decomposition must be of the form $u=s_a^i$ and $v=s_b^j$, and the irrationality of $a/b$ implies that $\Delta(u.v)\neq 0$. If $\Delta(u.v)<0$, then the word $w^{[2]}=x.u^r.y.v^r.z$ with $r>1+\lceil \Delta(w^*)/|\Delta(u.v)|\rceil$ satisfies $\Delta(w^{[2]})<0$, and therefore $w^{[2]}\notin \Lang{m}^\cap$. If $\Delta(u.v)>0$, then observe that $u.v\in\Lang{m}^\cap$ and has total length $i+j\le p<|w^*|$, so Lemma~\ref{lem:existsCloserWord} implies that $\Delta(u.v)>\Delta(w^*)$. It follows that the word $w^{[3]}=x.u^0.y.v^0.z=x.y.z$ has final distance $\Delta(w^*)-\Delta(u.v)<0$, and therefore $w^{[3]}\notin\Lang{m}^\cap$. Since every admissible factorization fails the pumping lemma, we conclude that $\Lang{m}^\cap$ is not a context-free language, and neither is $\Lang{m}$. 
\end{proof} \subsection{Rational approximations} \label{sec:ratapprox} All is not lost in the case of an irrational slope, however, as we can approximate the slope closely enough to retain polynomial-time rejection. \begin{definition}[$\delta$-rational approximation] A half-plane model $\ensuremath{\textsc{H}}_{\theta_r}(\ensuremath{\mathscr{S}})$ is a $\delta$-rational approximation of a half-plane model $\ensuremath{\textsc{H}}_\theta(\ensuremath{\mathscr{S}})$ if and only if $m_r := \tan{\theta_r} \in \ensuremath{\mathbb{Q}}$ and $|\tan{\theta}-\tan{\theta_r}|\le \delta$. \end{definition} \begin{proposition}\label{thm:approx} For any model $\operatorname{\sf walks}(H_\theta, \ensuremath{\mathscr{S}})$ and any desired precision $\delta>0$, there exists a grammar with $\mathcal{O}(1/\delta)$ non-terminals and $\mathcal{O}(1/\delta^2)$ rules which generates a $\delta$-rational approximation $\operatorname{\sf walks}(\ensuremath{\textsc{H}}_{\theta_r},\ensuremath{\mathscr{S}})$ of $\operatorname{\sf walks}(H_\theta,\ensuremath{\mathscr{S}})$. \end{proposition} Remark that, as soon as $|\tan{\theta}-\tan{\theta_r}|>0$, the exponential growth factor of the half-plane model becomes greater than that of the quarter-plane model, and Algorithm~\ref{alg:fullmonty} becomes exponential in $n$. On the other hand, for any length $n\in\mathbb{N}$, setting $\delta:=1/(n+1)$ defines a model $\ensuremath{\textsc{H}}_{\theta_r}(\ensuremath{\mathscr{S}})$ which coincides with $\ensuremath{\textsc{H}}_{\theta}(\ensuremath{\mathscr{S}})$ on positive walks of length $n$. Indeed, the accumulated {\em error} due to the approximation of the step set remains too small to lead to the acceptance of some walk in $\ensuremath{\textsc{H}}_{\theta_r}(\ensuremath{\mathscr{S}})$ and not in $\ensuremath{\textsc{H}}_{\theta}(\ensuremath{\mathscr{S}})$ (or vice-versa). 
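A $\delta$-rational approximation of the slope can be obtained from continued-fraction convergents; in Python's standard library this is available as `fractions.Fraction.limit_denominator` (a sketch under the convention of the definition above; the helper name is ours):

```python
import math
from fractions import Fraction

def rational_slope(m, delta):
    """A delta-rational approximation m_r = p/q of the slope m.
    By Dirichlet's theorem, the best fraction with denominator at most
    ceil(1/delta) is within delta of m."""
    q_max = max(1, math.ceil(1 / delta))
    m_r = Fraction(m).limit_denominator(q_max)
    assert abs(m_r - Fraction(m)) <= delta
    return m_r

m = math.sqrt(2)                      # an irrational optimal slope
approx = rational_slope(m, 1e-3)
assert abs(float(approx) - m) <= 1e-3
```

Note that the denominator bound $\lceil 1/\delta\rceil$ directly drives the $\mathcal{O}(1/\delta)$ non-terminal count of Proposition~\ref{thm:approx}, since the rescaled integer steps grow with the denominator.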
\begin{theorem}[Complexity of $1/(n+1)$-rational approximation] Let $\operatorname{\sf walks} (\mathbb{R}_{\geqslant 0}, \ensuremath{\mathscr{A}}, n)$ be a non-trivial unidimensional model defined by a non-rational multiset~$\ensuremath{\mathscr{A}}$. The uniform random generation of $k$ walks of length~$n$ in $\operatorname{\sf walks} (\mathbb{R}_{\geqslant 0}, \ensuremath{\mathscr{A}}, n)$ can be performed in: \begin{itemize} \item $\mathcal{O}(k\cdot n\log n + n^3)$ arithmetic operations, using storage for $\mathcal{O}(n^3)$ large integers~\cite{FlZiVa94}; \item $\mathcal{O}(k\cdot n^3\log n + n^2)$ arithmetic operations, using storage for $\mathcal{O}(n^2)$ large integers~\cite{Goldwurm1995}; \item $\mathcal{O}(k\cdot n^2 + n^2)$ arithmetic operations, using storage for $\mathcal{O}(n^2)$ real values (Oracle)~\cite{DuFlLoSc02,Pivoteau2008}. \end{itemize} \end{theorem} Finally, we conjecture that polynomial rejection is already achieved by a $(1/\sqrt{n})$-rational approximation. \begin{conjecture} Let $h_{n}(\theta)$ be the number of walks of length $n$ in a half-plane model $\ensuremath{\textsc{H}}_{\theta}(\ensuremath{\mathscr{S}})$. There exists an infinite sequence of angles $\{\theta_n\}_{n\ge0}$ such that: \begin{itemize} \item For all $n\ge0$, $\ensuremath{\textsc{H}}_{\theta_n}(\ensuremath{\mathscr{S}})$ is a $(1/\sqrt{n})$-rational approximation of $\ensuremath{\textsc{H}}_{\theta^*}(\ensuremath{\mathscr{S}})$; \item The number of rejections in Algorithm~\ref{alg:fullmonty} remains polynomial in $n$: \[ \exists p\in \mathbb{R}:\quad \frac{h_{n}(\theta_n)}{h_{n}(\theta^*)} \in \mathcal{O}(n^p). \] \end{itemize} \end{conjecture} \section{Remarks and future extensions} \label{sec:Results} We have implemented this algorithm in Python, with external calls to Sage to compute $\theta^*$, and to Maple for Boltzmann generation. 
Experimentally, in the case of irrational projected steps, even crude approximations led to much better empirical complexities than both the default half-plane generation and the naive recursive generator. On the other hand, increasingly precise approximations led to an overwhelming growth in the size of the grammar, as could be expected from the asymptotic complexity. This raises interesting questions about the precise interplay between the size of the grammar and the complexity, starting with our conjecture, which we hope to address in a future version of this work. There are many possible optimizations, notably anticipated rejection. We expect this to have a positive effect on the complexity, particularly in the null-drift cases, possibly after projection onto the targeted half-plane. \paragraph{Natural extensions and generalizations} Finally, many natural extensions come to mind. Generating excursions in the quarter plane is difficult in general, but with our grammar-based approach it is completely straightforward. Moreover, there are analogous ``best hyperplane'' theorems in higher dimensions and for more general cones, and our general approach could in principle generalize to these cases. \end{document}
\begin{document} \title{\large {\bf{\sc Tracing a homotopy path for the solution of the nonlinear complementarity problem}}} \author{ A. Dutta$^{a, 1}$, A. K. Das$^{b, 2}$\\ \emph{\small $^{a}$Department of Mathematics, Jadavpur University, Kolkata, 700 032, India}\\ \emph{\small $^{b}$SQC \& OR Unit, Indian Statistical Institute, Kolkata, 700 108, India}\\ \emph{\small $^{1}$Email: [email protected]}\\ \emph{\small $^{2}$Email: [email protected]} \\ } \date{} \maketitle \begin{abstract} \noindent In this article, we consider the nonlinear complementarity problem. We introduce a new homotopy function for finding a solution of the nonlinear complementarity problem by tracing the associated trajectory. We show that the homotopy path approaching the solution is smooth and bounded. A numerical example of an oligopoly equilibrium problem illustrates the effectiveness of the proposed algorithm. \\ \noindent{\bf Keywords:} Nonlinear complementarity problem, homotopy continuation method, oligopoly equilibrium problem. \end{abstract} \section{Introduction} The nonlinear complementarity problem is an important mathematical programming problem, built on the concept of the linear complementarity problem. For recent studies on this problem and its applications, see \cite{das2017finiteness}, \cite{articlee14}, \cite{bookk1}, \cite{articlee7} and references therein. For details of several matrix classes in complementarity theory, see \cite{articlee1}, \cite{articlee2}, \cite{articlee9}, \cite{articlee17}, \cite{article1}, \cite{mohan2001more}, \cite{article12}, \cite{article07}, \cite{dutta2022column} and references cited therein.
The problem of computing the value vector and optimal stationary strategies for structured stochastic games, for discounted and undiscounted zero-sum games, and the quadratic multi-objective programming problem are formulated as linear complementarity problems. For details see \cite{articlee18}, \cite{mondal2016discounted}, \cite{neogy2005linear} and \cite{neogy2008mixture}. Complementarity problems are also studied with respect to principal pivot transforms and pivotal methods for their solution. For details see \cite{articlee8}, \cite{articlee10}, \cite{das1} and \cite{neogy2012generalized}. Many methods have been developed to solve the nonlinear complementarity problem; see \cite{pang1986}, \cite{pang1993}, \cite{karamardian1969}, \cite{watson1979}. Eaves and Saigal \cite{eaves1972homotopies} introduced an important class of globally convergent methods for solving systems of nonlinear equations. Such methods have been used to constructively prove the existence of solutions to many economic and engineering problems. The fundamental idea of a homotopy continuation method is to solve a problem by tracing a certain continuous path that leads to a solution of the problem. Thus, defining a homotopy mapping that yields a finite continuation path plays an essential role in a homotopy continuation method. \vskip 1em The paper is organized as follows. Section 2 presents some basic notations and results. In Section 3, we propose a new homotopy function to find the solution of the nonlinear complementarity problem. We construct a smooth and bounded homotopy path leading to a solution of the nonlinear complementarity problem as the homotopy parameter $\lambda$ tends to $0$. To solve the homotopy equation we use a modified homotopy continuation method that increases the order of convergence of the algorithm. We also determine the sign of the positive tangent direction of the homotopy path.
Finally, in Section 4, we numerically solve the oligopoly problem, formulated as a nonlinear complementarity problem, using the introduced homotopy function. \section{Preliminaries} Consider a function $f: R^n \rightarrow R^n$ and a vector $z\in R^n$ such that $f= \left[\begin{array}{c} f_1\\ f_2\\ \vdots\\ f_n\\ \end{array}\right]$ and $z= \left[\begin{array}{c} z_1\\ z_2\\ \vdots\\ z_n\\ \end{array}\right].$ The complementarity problem is to find a vector $z\in R^n$ such that \begin{equation}\label{cp} z^Tf(z)=0, \ \ \ f(z)\geq 0 , \ \ \ z\geq 0. \end{equation} When the function $f$ is nonlinear, this is called the nonlinear complementarity problem. \vskip 1em The basic idea of the homotopy method is to construct a homotopy continuation path from an auxiliary mapping $g$ to the object mapping $f$. Suppose the given problem is to find a root of the nonlinear equation $f(x) = 0$ and suppose $g(x) = 0$ is an auxiliary equation with $g(x_0)=0$. Then the homotopy function $H:R^{n+1} \to R^n$ can be defined as $H(x, \lambda) = (1-\lambda)f(x) + \lambda g(x),$ $ 0 \leq \lambda \leq 1.$ We then consider the homotopy equation $H(x, \lambda) = 0,$ for which $(x_0,1)$ is a known solution. Our aim is to find the solution of the equation $f(x)=0$ from the known solution of $g(x) = 0$ by solving the homotopy equation $H(x, \lambda) = 0$ while varying the value of $\lambda$ from $1$ to $0$. Kojima et al.\ showed that under some conditions the nonlinear complementarity problem can be solved by a homotopy continuation method. For details see \cite{kojima1991homotopy}, \cite{kojima1993general}, \cite{kojima1994global}, \cite{tseng1997infeasible}. \vskip 1em Now we state some results which will be required in the next section. \begin{lemma} (Generalization of Sard's theorem \cite{Chow}) \ Let $U \subset R^n$ be an open set and let $f :U \to R^p$ be smooth.
We say $y \in R^p$ is a regular value for $f$ if $\text{Range}\, Df(x) = R^p $ for all $x \in f^{-1}(y),$ where $Df(x)$ denotes the $p \times n$ matrix of partial derivatives of $f$ at $x.$ \end{lemma} \begin{lemma}\label{par} (Parameterized Sard theorem \cite{Wang}) \ Let $V \subset R^n, U \subset R^m$ be open sets, and let $\phi:V\times U \to R^k$ be a $C^\alpha$ mapping, where $\alpha >\max\{0,m-k\}.$ If $0\in R^k$ is a regular value of $\phi,$ then for almost all $a \in V$, $0$ is a regular value of $\phi_a=\phi(a,\cdot).$ \end{lemma} \begin{lemma}\label{inv} (The inverse image theorem \cite{Wang}) \ Let $\phi : U \subset R^n \to R^p$ be a $C^\alpha$ mapping, where $\alpha >\max\{0,n-p\}.$ Then $\phi^{-1}(0)$ consists of some $(n-p)$-dimensional $C^\alpha$ manifolds. \end{lemma} \begin{lemma}\label{cl} (Classification theorem of one-dimensional smooth manifolds \cite{N}) \ A one-dimensional smooth manifold is diffeomorphic to a unit circle or a unit interval. \end{lemma} \begin{lemma}\label{coorder} (\cite{cordero2012increasing}) \ Consider a function $f:R^n\to R^n$. The iterative method $y^k=x^k-f'(x^k)^{-1}f(x^k),$ \ $z^k=x^k-2(f'(y^k)+f'(x^k))^{-1}f(x^k), \ w^k=z^k-f'(y^k)^{-1}f(z^k)$ has fifth-order convergence. \end{lemma} \section{Main Results} Now we solve the nonlinear complementarity problem by the homotopy method.
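Before constructing the homotopy for the complementarity problem, the basic continuation idea recalled in the preliminaries can be illustrated on a scalar equation. The sketch below is ours, not part of the paper: the cubic $f$ and the auxiliary equation $g(x)=x-x_0$ are illustrative choices, and the homotopy $H(x,\lambda)=(1-\lambda)f(x)+\lambda g(x)$ is traced by decreasing $\lambda$ from $1$ to $0$ with a Newton correction at each step.

```python
def trace_homotopy(f, fprime, x0, steps=100, newton_iters=20, tol=1e-12):
    """Trace H(x, lam) = (1 - lam)*f(x) + lam*(x - x0) = 0 as lam goes 1 -> 0.

    g(x) = x - x0 is the auxiliary equation with the known root x0; at
    each decrement of lam, the point found for the previous lam seeds a
    Newton correction, so we follow the zero path of H numerically.
    """
    x = x0
    for i in range(steps + 1):
        lam = 1.0 - i / steps
        for _ in range(newton_iters):
            H = (1.0 - lam) * f(x) + lam * (x - x0)
            dH = (1.0 - lam) * fprime(x) + lam
            step = H / dH
            x -= step
            if abs(step) < tol:
                break
    return x  # approximate root of f(x) = 0

# Example: f(x) = x^3 - 2x - 5 has a real root near 2.0946 (Wallis' cubic).
root = trace_homotopy(lambda x: x**3 - 2*x - 5,
                      lambda x: 3*x**2 - 2,
                      x0=1.0)
```

Since $\partial H/\partial x>0$ along the whole path for this example, the zero path has no turning points and the simple $\lambda$-stepping suffices; the algorithm developed below instead parameterizes the path by arc length, which also handles turning points.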
Consider two positive numbers $m$ and $l$ such that $m$ is a very large positive number and $l$ is a positive number with $l\ll m.$ First we define\\ $\mathcal{R}_{(m)}=\{(z,y,w_1,w_2,v_1,v_2)\in R_{++}^n \times R_{++}^n \times R_{++}^n \times R_{++}^n \times R_{++} \times R_{++}:m-(\overset{n}{\underset{i=1}{\sum}}(z+w_1)_i+v_2)>l, m-(\overset{n}{\underset{i=1}{\sum}}(y+w_2)_i+v_1)>l\}, \ $\\ $ \mathcal{\bar{R}}_{(m)}=\{(z,y,w_1,w_2,v_1,v_2)\in R_{+}^n \times R_{+}^n \times R_{+}^n \times R_{+}^n \times R_{+} \times R_{+}:m-(\overset{n}{\underset{i=1}{\sum}}(z+w_1)_i+v_2)\geq l, m-(\overset{n}{\underset{i=1}{\sum}}(y+w_2)_i+v_1)\geq l\}.$\\ Here $m$ is a predefined large number. We choose the initial point\\ $x^{(0)}=(z^{(0)},y^{(0)},{w_1}^{(0)},{w_2}^{(0)},{v_1}^{(0)},{v_2}^{(0)})\in \mathcal{R}_{(m)}$ such that\\ $A^{(0)} B^{(0)}-B^{(0)}{v_2}^{(0)}-A^{(0)}{v_1}^{(0)}=0, $\\ $l(B^{(0)}{v_2}^{(0)}-A^{(0)}{v_1}^{(0)})+lB^{(0)}(l-A^{(0)})+A^{(0)}{v_1}^{(0)}(A^{(0)}-{v_2}^{(0)}) \neq 0,\\ l(A^{(0)}{v_1}^{(0)}-B^{(0)}{v_2}^{(0)})+lA^{(0)}(l-B^{(0)})+B^{(0)}{v_2}^{(0)}(B^{(0)}-{v_1}^{(0)}) \neq 0,\\ l(B^{(0)}-l)\neq(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}, l(A^{(0)}-l)\neq(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)},$\\ where $A=m-\overset{n}{\underset{i=1}{\sum}}(z+w_1)_i, \ A^{(0)}=m-\overset{n}{\underset{i=1}{\sum}}(z^{(0)}+w_1^{(0)})_i, \\ B=m-\overset{n}{\underset{i=1}{\sum}}(y+w_2)_i,$ $B^{(0)}=m-\overset{n}{\underset{i=1}{\sum}}(y^{(0)}+w_2^{(0)})_i.
$ \\Now define the feasible region $\mathcal{F}_{(m)}=\{(z,y,w_1,w_2,v_1,v_2)\in {\mathcal{R}}_{(m)}: {v_1} \neq \frac{\lambda(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}}{l}; \\ {v_2} \neq \frac{\lambda(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}}{l} \ \forall \ \lambda \in (0,1)\},$\\ $\mathcal{\bar{F}}_{(m)}=\{(z,y,w_1,w_2,v_1,v_2) \in \mathcal{\bar{R}}_{(m)}: {v_1} \neq \frac{\lambda(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}}{l}; {v_2} \neq \frac{\lambda(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}}{l} \ \forall \ \lambda \in (0,1)\},$ \\ $\partial \mathcal{F}_{(m)}=\{(z,y,w_1,w_2,v_1,v_2)\in \partial\mathcal{R}_{(m)}: {v_1} \neq \frac{\lambda(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}}{l}; {v_2} \neq \frac{\lambda(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}}{l} \ \forall \ \lambda \in (0,1)\},$ where $\partial\mathcal{R}_{(m)}$ is the boundary of $ \mathcal{\bar{R}}_{(m)}.$ Now we construct the homotopy function \begin{equation}\label{homf} H(x,x^{(0)},\lambda)=\left[\begin{array}{c} (1-\lambda)(y-w_1+v_1e+J_f^t(z-w_2+v_2e))+\lambda(z-z^{(0)}) \\ W_1z-\lambda W_1^{(0)}z^{(0)}\\ W_2y-\lambda W_2^{(0)}y^{(0)}\\ y-(1-\lambda)f(z)-\lambda(y^{(0)})\\ (m-\overset{n}{\underset{i=1}{\sum}}(z+w_1)_i-v_2)v_1- \lambda((m-\overset{n}{\underset{i=1}{\sum}}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)})v_1^{(0)})\\ (m-\overset{n}{\underset{i=1}{\sum}}(y+w_2)_i-v_1)v_2- \lambda((m-\overset{n}{\underset{i=1}{\sum}}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})v_2^{(0)})\\ \end{array}\right]=0 \end{equation} where $e=[1,1,\dots,1]^t,\ Z=\text{diag}(z);\ W_1=\text{diag}(w_1); \ W_2=\text{diag}(w_2); \ W_1^{(0)}=\text{diag}(w_1^{(0)}); \ W_2^{(0)}=\text{diag}(w_2^{(0)}); \ x=(z,y,w_1,w_2,v_1,v_2)\in \mathcal{\bar{F}}_{(m)}; \ x^{(0)}=(z^{(0)},y^{(0)},{w_1}^{(0)},{w_2}^{(0)},{v_1}^{(0)},{v_2}^{(0)})\in \mathcal{F}_{(m)}; \ \lambda \in (0,1]$ and $J_f$ is the Jacobian of $f(z).$ \begin{theorem}\label{reg} For almost all initial points $x^{(0)}\in \mathcal{F}_{(m)},$ $0$ is a regular value of the homotopy function $H:R^{4n+2} \times
(0,1] \to R^{4n+2}$ and the zero point set $H_{x^{(0)}}^{-1}(0)=\{(x,\lambda)\in \mathcal{F}_{(m)} \times (0,1]:H_{x^{(0)}}(x,\lambda)=0\}$ contains a smooth curve $\Gamma_x^{(0)}$ starting from $(x^{(0)},1).$ \end{theorem} \begin{proof} The Jacobian matrix of the homotopy function $H(x,x^{(0)},\lambda)$ is denoted by $DH(x,x^{(0)},\lambda)$ and we have $DH(x,x^{(0)},\lambda)=\left[\begin{array}{ccc} \frac{\partial{H(x,x^{(0)},\lambda)}}{\partial{x}} & \frac{\partial{H(x,x^{(0)},\lambda)}}{\partial{x^{(0)}}} & \frac{\partial{H(x,x^{(0)},\lambda)}}{\partial{\lambda}}\\ \end{array}\right].$ For all $x^{(0)} \in \mathcal{F}_{(m)}$ and $\lambda \in (0,1],$ we have $\frac{\partial{H(x,x^{(0)},\lambda)}}{\partial{x^{(0)}}}=\begin{bmatrix} K_1 & K_2\\ K_3 & K_4\\ \end{bmatrix},$\\ where $K_1=\begin{bmatrix} -\lambda I & 0 & 0 & 0\\ -\lambda W_1^{(0)} & 0 & -\lambda Z^{(0)} & 0\\ 0 & -\lambda W_2^{(0)} & 0 & -\lambda Y^{(0)}\\ 0 & -\lambda I & 0 & 0\\ \end{bmatrix},$ $K_2=\begin{bmatrix} 0 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 0\\ \end{bmatrix},$ $K_3=\begin{bmatrix} \lambda v_1^{(0)} e^t & 0 & \lambda v_1^{(0)} e^t & 0 \\ 0 & \lambda v_2^{(0)} e^t & 0 & \lambda v_2^{(0)} e^t\\ \end{bmatrix},$ $K_4= \begin{bmatrix} -\lambda(m-\overset{n}{\underset{i=1}{\sum}}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)}) & \lambda v_1^{(0)}\\ \lambda v_2^{(0)} & -\lambda(m-\overset{n}{\underset{i=1}{\sum}}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})\\ \end{bmatrix},$ \\ $Y^{(0)}=\text{diag}(y^{(0)}), Z^{(0)}=\text{diag}(z^{(0)})$, $W_1^{(0)}=\text{diag}(w_1^{(0)})$, $W_2^{(0)}=\text{diag}(w_2^{(0)})$.\\ $\det(\frac{\partial{H}}{ \partial{x^{(0)}}})=\lambda^{4n+2}((m-\overset{n}{\underset{i=1}{\sum}}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)})(m-\overset{n}{\underset{i=1}{\sum}}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})-v_1^{(0)}v_2^{(0)})\prod_{i=1}^{n} z_i^{(0)}y_i^{(0)}\neq 0$ for $\lambda \in (0,1].$ \\ Thus $DH(x,x^{(0)},\lambda)$ is of
full row rank. Therefore, $0$ is a regular value of $H(x,x^{(0)},\lambda).$ By Lemmas \ref{par} and \ref{inv}, for almost all $x^{(0)} \in \mathcal{F}_{(m)},$ $0$ is a regular value of $H_{x^{(0)}}(x,\lambda)$, $H_{x^{(0)}}^{-1}(0)$ consists of some smooth curves, and $H_{x^{(0)}}(x^{(0)},1)=0.$ Hence there must be a smooth curve $\Gamma_x^{(0)}$ starting from $(x^{(0)},1).$ \end{proof} \begin{theorem}\label{bdd} Let $\mathcal{F}_{(m)}$ be a nonempty set. For a given $x^{(0)} \in \mathcal{F}_{(m)},$ if $0$ is a regular value of $H(x,x^{(0)},\lambda),$ then $\Gamma_x^{(0)}$ is a bounded curve in $\mathcal{\bar{F}}_{(m)} \times (0,1].$ \end{theorem} \begin{proof} By Theorem \ref{reg}, $0$ is a regular value of $H(x,x^{(0)},\lambda)$, and $\mathcal{F}_{(m)}$ is a nonempty set. The sets $\mathcal{\bar{F}}_{(m)}$ and $(0,1]$ are clearly bounded, so every sequence of points $\{z^k,y^k,w_1^k,w_2^k,v_1^k,v_2^k,\lambda^k\} \subset \Gamma_x^{(0)}$ has a subsequence with limits $\lim\limits_{k \to \infty}z^k=\bar{z}, \lim\limits_{k \to \infty}y^k=\bar{y}, \lim\limits_{k \to \infty}w_1^k=\bar{w_1}, \lim\limits_{k \to \infty}w_2^k=\bar{w_2}, \lim\limits_{k \to \infty}v_1^k=\bar{v_1}, \lim\limits_{k \to \infty}v_2^k=\bar{v_2}, \lim\limits_{k \to \infty}\lambda^k=\bar{\lambda}.$ Hence $\Gamma_x^{(0)}$ is a bounded curve in $\mathcal{\bar{F}}_{(m)} \times (0,1].$ \end{proof} Now we show the convergence of the homotopy path defined by equation \ref{homf}.
\begin{theorem} For $x^{(0)}=(z^{(0)},y^{(0)},w_1^{(0)},w_2^{(0)},v_1^{(0)},v_2^{(0)})\in \mathcal{R}_{(m)}$ such that\\ $A^{(0)} B^{(0)}-B^{(0)}{v_2}^{(0)}-A^{(0)}{v_1}^{(0)}=0,$\\ $l(B^{(0)}{v_2}^{(0)}-A^{(0)}{v_1}^{(0)})+lB^{(0)}(l-A^{(0)})+A^{(0)}{v_1}^{(0)}(A^{(0)}-{v_2}^{(0)}) \neq 0,\\ l(A^{(0)}{v_1}^{(0)}-B^{(0)}{v_2}^{(0)})+lA^{(0)}(l-B^{(0)})+B^{(0)}{v_2}^{(0)}(B^{(0)}-{v_1}^{(0)}) \neq 0,\\ l(B^{(0)}-l)\neq(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}, \\ l(A^{(0)}-l)\neq(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)},$ \\ the homotopy equation determines a bounded smooth curve $\Gamma_x^{(0)} \subset \mathcal{{F}}_{(m)} \times (0,1]$ which starts from $(x^{(0)},1)$ and approaches the hyperplane $\lambda=0$ as $\lambda \to 0.$ As $\lambda \to 0,$ the limit set $\mathcal{L} \times \{0\} \subset \mathcal{\bar{F}}_{(m)} \times \{0\}$ of $\Gamma_x^{(0)}$ is nonempty and every point in $\mathcal{L}$ is a solution of the following system of equations: \begin{equation}\label{sys} \begin{aligned} (y-w_1+v_1e+J_f^t(z-w_2+v_2e))=0 \\ W_1z=0\\ W_2y=0\\ y-f(z)=0\\ (m-\overset{n}{\underset{i=1}{\sum}}(z+w_1)_i-v_2)v_1=0\\ (m-\overset{n}{\underset{i=1}{\sum}}(y+w_2)_i-v_1)v_2=0\\ \end{aligned} \end{equation} \end{theorem} \begin{proof} Note that $\Gamma_x^{(0)}$ is diffeomorphic to a unit circle or a unit interval $(0,1]$ in view of Lemma \ref{cl}. As $\frac{\partial{H(x,x^{(0)},1)}}{\partial{x^{(0)}}}$ is nonsingular, $\Gamma_x^{(0)}$ is diffeomorphic to a unit interval $(0,1].$ Moreover, $\Gamma_x^{(0)}$ is a bounded smooth curve by Theorem \ref{bdd}.
Let $(\bar{x},\bar{\lambda})$ be a limit point of $\Gamma_x^{(0)}.$ We consider four cases: \begin{enumerate} \item[(i)] $(\bar{x},\bar{\lambda})\in \mathcal{{F}}_{(m)} \times \{1\}.$ \item[(ii)] $(\bar{x},\bar{\lambda})\in \partial{\mathcal{{F}}_{(m)}} \times \{1\}.$ \item[(iii)] $(\bar{x},\bar{\lambda})\in \partial{\mathcal{{F}}_{(m)}} \times (0,1).$ \item[(iv)] $(\bar{x},\bar{\lambda})\in \mathcal{\bar{F}}_{(m)} \times \{0\}.$ \end{enumerate} Suppose that in case (i) the homotopy equation \ref{homf} has a solution $(\bar{x},1)$ other than the initial solution $(x^{(0)},1)$. As $\lambda \to 1$, $\bar{y}=y^{(0)}, \bar{z}_1=z_1^{(0)}, \bar{z}_2=z_2^{(0)},\bar{v_1} \neq 0, \bar{v_2} \neq 0$. So for $\lambda \to 1$, $(A-v_2) \to (A^{(0)}-\bar{v_2}), (B-v_1) \to (B^{(0)}-\bar{v_1})$. Hence from the homotopy equation \ref{homf} \begin{eqnarray}\label{t1} (A^{(0)}-\bar{v_2})\bar{v_1}=(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}\\ \label{t2} (B^{(0)}-\bar{v_1})\bar{v_2}=(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)} \end{eqnarray} From \ref{t2}, $\bar{v_2}=\frac{B^{(0)}-{v_1}^{(0)}}{B^{(0)}-\bar{v_1}}{v_2}^{(0)}.$ From equation \ref{t1}, $(A^{(0)}-\frac{B^{(0)}-{v_1}^{(0)}}{B^{(0)}-\bar{v_1}}{v_2}^{(0)})\bar{v_1}=(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}.$ This implies that $\bar{v_1}=\frac{-(B^{(0)}{v_2}^{(0)}-A^{(0)}{v_1}^{(0)}-A^{(0)} B^{(0)}) \pm \sqrt{(A^{(0)} B^{(0)}-B^{(0)}{v_2}^{(0)}-A^{(0)}{v_1}^{(0)})^2}}{2A^{(0)}}$.\\ $\implies \bar{v_1}={v_1}^{(0)}$ or $\bar{v_1}=\frac{A^{(0)} B^{(0)}-B^{(0)}{v_2}^{(0)} }{A^{(0)}}$.
As $\bar{v_1}=\frac{A^{(0)} B^{(0)}-B^{(0)}{v_2}^{(0)} }{A^{(0)}}={v_1}^{(0)}$ by the condition on the initial point $x^{(0)},$ the equation $H_{x^{(0)}}(x,1)=0$ has only one solution $x^{(0)}\in \mathcal{{R}}_{(m)}.$ Hence case $(i)$ is impossible.\\ In case $(ii)$ the homotopy equation \ref{homf} implies that $ \bar{y}=y^{(0)}, \bar{z}_1=z_1^{(0)}, \bar{z}_2=z_2^{(0)}, \bar{v_1} \neq 0, \bar{v_2} \neq 0.$ So $(A-v_2) \to (A^{(0)}-\bar{v_2})$ and $(B-v_1) \to (B^{(0)}-\bar{v_1})$ as $\lambda \to 1.$ From the last two components of the homotopy equation \ref{homf} we have \begin{equation}\label{ineq} (A^{(0)}-\bar{v_2})\bar{v_1}=(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}, \quad (B^{(0)}-\bar{v_1})\bar{v_2}=(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}. \end{equation} Three cases may arise. \\ \textbf{Case 1:} Let $A^{(0)}-\bar{v_2}=l.$ From equation \ref{ineq} \\ $\bar{v_1}=\frac{(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}}{l},\ (B^{(0)}-\frac{(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}}{l})\bar{v_2}=(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)} \\ \implies \bar{v_2}=\frac{l(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}}{lB^{(0)}-A^{(0)}{v_1}^{(0)}+{v_1}^{(0)}{v_2}^{(0)}}=A^{(0)}-l \\ \implies l(B^{(0)}{v_2}^{(0)}-A^{(0)}{v_1}^{(0)})+lB^{(0)}(l-A^{(0)})+A^{(0)}{v_1}^{(0)}(A^{(0)}-{v_2}^{(0)}) = 0,$ contradicting the choice of the initial point.\\ \textbf{Case 2:} Let $B^{(0)}-\bar{v_1}=l.$ From equation \ref{ineq} \\ $\bar{v_2}=\frac{(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}}{l}, \ (A^{(0)}-\frac{(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}}{l})\bar{v_1}=(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)} \\ \implies \bar{v_1}=\frac{l(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}}{lA^{(0)}-B^{(0)}{v_2}^{(0)}+{v_1}^{(0)}{v_2}^{(0)}}=B^{(0)}-l \\ \implies l(A^{(0)}{v_1}^{(0)}-B^{(0)}{v_2}^{(0)})+lA^{(0)}(l-B^{(0)})+B^{(0)}{v_2}^{(0)}(B^{(0)}-{v_1}^{(0)}) = 0,$ contradicting the choice of the initial point.\\ \textbf{Case 3:} Let $B^{(0)}-\bar{v_1}=l, \ A^{(0)}-\bar{v_2}=l.
$ From equation \ref{ineq} \\ we have $l\bar{v_1}=(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)},l\bar{v_2}=(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}.\\ \implies l(B^{(0)}-l)=(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}, l(A^{(0)}-l)=(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}, $ contradicting the choice of the initial point. In case $(iii)$, from the homotopy equation \ref{homf} we have $\bar{z}>0,\bar{y}>0,\bar{w_1}>0,\bar{w_2}>0.$ Three cases may arise.\\ \textbf{Case 1:} Let $A-v_2 \to \bar{A}-\bar{v_2}=l,$ where $\bar{A}=m-\overset{n}{\underset{i=1}{\sum}}(\bar{z}+\bar{w_1})_i.$ Then from equation \ref{ineq} we have $\bar{v_1}=\frac{\bar{\lambda}(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}}{l},$ which contradicts $\bar{v_1}\in \partial{\mathcal{{F}}_{(m)}}.$\\ \textbf{Case 2:} Let $B-v_1 \to \bar{B}-\bar{v_1}=l,$ where $\bar{B}=m-\overset{n}{\underset{i=1}{\sum}}(\bar{y}+\bar{w_2})_i.$ Then from equation \ref{ineq} we have $\bar{v_2}=\frac{\bar{\lambda}(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}}{l},$ which contradicts $\bar{v_2}\in \partial{\mathcal{{F}}_{(m)}}.$\\ \textbf{Case 3:} Let $A-v_2 \to \bar{A}-\bar{v_2}=l,B-v_1 \to \bar{B}-\bar{v_1}=l.$ Then from equation \ref{ineq} we have $\bar{v_1}=\frac{\bar{\lambda}(A^{(0)}-{v_2}^{(0)}){v_1}^{(0)}}{l}$ and $\bar{v_2}=\frac{\bar{\lambda}(B^{(0)}-{v_1}^{(0)}){v_2}^{(0)}}{l},$ which contradicts $\bar{v_1}\in \partial{\mathcal{{F}}_{(m)}}$ and $\bar{v_2}\in \partial{\mathcal{{F}}_{(m)}}.$\\ Therefore $(iv)$ is the only possible case. Hence $\bar{x}=(\bar{z},\bar{y},\bar{w_1},\bar{w_2},\bar{v_1},\bar{v_2})$ is a solution of the system of equations \ref{sys}.
\end{proof} \begin{remk}\label{mmf} From the homotopy equation \ref{homf}, as $\lambda \to 0$ we get $\bar{y}-\bar{w}_1+\bar{J}_f^t(\bar{z}-\bar{w}_2)=0,$ $\bar{y}=f(\bar{z})$ and $\bar{w}_{1i}\bar{z}_i=0,$ $\bar{w}_{2i}\bar{y}_i=0 \ \forall i\in\{1,2, \dots, n\}$, where $\bar{J}_f$ is the Jacobian of $f(z)$ at the point $\bar{z}.$ Now $\bar{w}_1$ and $\bar{w}_2$ can be decomposed as $\bar{w}_1= \bar{y}-\Delta\bar{y} \geq 0$ and $\bar{w}_2= \bar{z}-\Delta\bar{z} \geq 0.$ It is then clear that $\bar{y}_i \bar{z}_i=\Delta\bar{y}_i \bar{z}_i=\Delta\bar{z}_i\bar{y}_i \ \forall i \in \{1,2, \dots, n\}$ and $\bar{J}_f^t\Delta \bar{z}+\Delta \bar{y}=0.$ This implies that $(\bar{Z}\bar{J}_f^t+\bar{Y})\Delta \bar{z}=0,$ where $\bar{Y}=\text{diag}(\bar{y})$ and $\bar{Z}=\text{diag}(\bar{z}).$ \end{remk} \begin{theorem} The component $\bar{z}$ of $(\bar{z},\bar{y},\bar{w}_1,\bar{w}_2,0) \in \mathcal{L}\times \{0\}$ gives the solution of the complementarity problem \ref{cp} if and only if $\Delta\bar{z}_i \Delta\bar{y}_i=0 $ or $\bar{w}_{1i}+\bar{w}_{2i}>0 \ \forall i \in \{1,2,\dots, n\}.$ \end{theorem} \begin{proof} Suppose $\bar{z} \geq 0$ and $\bar{y}=f(\bar{z}) \geq 0$ give the solution of the complementarity problem \ref{cp}.
Then $\bar{z}_i\bar{y}_i=0$ \ $ \forall i \in \{1,2,\dots, n\}.$ This implies that $\bar{z}_i=0$ or $\bar{y}_i=0$ \ $ \forall i \in \{1,2,\dots, n\}.$ Now we consider the following cases.\\ \noindent Case 1: For at least one $i \in \{1,2,\dots, n\},$ let $\bar{z}_{i}>0, \bar{y}_{i}=0.$ In view of Remark \ref{mmf}, this implies that $\Delta\bar{y}_i=0 \implies \Delta\bar{z}_i \Delta\bar{y}_i=0.$ \\ \noindent Case 2: For at least one $i \in \{1,2,\dots, n\},$ let $\bar{y}_{i}>0, \bar{z}_{i}=0.$ In view of Remark \ref{mmf}, this implies that $\Delta\bar{z}_i=0 \implies \Delta\bar{z}_i \Delta\bar{y}_i=0.$ \\ \noindent Case 3: For at least one $i \in \{1,2,\dots, n\},$ let $\bar{y}_{i}=0, \bar{z}_{i}=0.$ This implies that either $\Delta\bar{y}_i\Delta\bar{z}_i=0$ or $\bar{w}_{1i}+\bar{w}_{2i}>0.$ Conversely, suppose $\Delta\bar{z}_i \Delta\bar{y}_i=0 $ or $\bar{w}_{1i}+\bar{w}_{2i}>0 \ \forall i \in \{1,2,\dots, n\}.$ If $\Delta\bar{z}_i \Delta\bar{y}_i=0 \ \forall i \in \{1,2,\dots, n\},$ then either $\Delta\bar{z}_i=0$ or $\Delta\bar{y}_i=0$ for each $i$. This implies that $\bar{y}_i \bar{z}_i=0 \ \forall i \in \{1,2,\dots, n\}.$ Therefore $\bar{y}$ and $\bar{z}$ give the solution of the given complementarity problem \ref{cp}. \noindent Now consider $\bar{w}_{1}+\bar{w}_{2}>0.$ Then three cases arise.
\noindent Case 1: Let $\bar{w}_{1i}>0, \bar{w}_{2i}=0$ for at least one $i \in \{1,2,\dots, n\}.$ This implies that $\bar{z}_i=0$ and $\bar{y}_i \geq 0.$ \\ Case 2: Let $\bar{w}_{1i}=0, \bar{w}_{2i}>0$ for at least one $i \in \{1,2,\dots, n\}.$ This implies that $\bar{z}_i \geq 0$ and $\bar{y}_i=0.$ \\ Case 3: Let $\bar{w}_{1i}>0, \bar{w}_{2i}>0$ for at least one $i \in \{1,2,\dots, n\}.$ This implies that $\bar{z}_i=0$ and $\bar{y}_i=0.$ \\ Considering the above three cases, $\bar{z}$ and $\bar{y}$ solve the complementarity problem \ref{cp}.\\ \end{proof} \begin{theorem} If the nonlinear function $f(z)$ is a $P_0$ function, then the component $\bar{z}$ of $(\bar{z},\bar{y} ,\bar{w}_1,\bar{w}_2,0) \in \mathcal{L}\times \{0\}$ gives the solution of the nonlinear complementarity problem \ref{cp}. \end{theorem} \begin{proof} Let $f(z)$ be a $P_0$ function. Then the Jacobian matrix $\bar{J}_f$ of the nonlinear function at a point $z$ is a $P_0$ matrix. Assume that the component $\bar{z}$ of $(\bar{z},\bar{y},\bar{w}_1,\bar{w}_2,0) \in \mathcal{L}\times \{0\}$ does not give the solution of the nonlinear complementarity problem \ref{cp}. Hence $\Delta\bar{z}_i \Delta\bar{y}_i\neq0 $ and $\bar{w}_{1i}+\bar{w}_{2i}=0$ for at least one $i$. Then $\Delta\bar{z}_i \neq 0, \Delta\bar{y}_i\neq0 , \bar{w}_{1i}=0, \bar{w}_{2i}=0.$ Now $\bar{w}_{1i}=\bar{y}_i-\Delta\bar{y}_i=0$ and $\Delta\bar{z}_i \Delta\bar{y}_i\neq0$ $\implies \bar{y}_i=\Delta\bar{y}_i>0$. Similarly, $\bar{w}_{2i}=\bar{z}_i-\Delta\bar{z}_i=0$ and $\Delta\bar{z}_i \Delta\bar{y}_i\neq0$ $\implies \bar{z}_i=\Delta\bar{z}_i>0$. As $(\bar{z},\bar{y},\bar{w}_1,\bar{w}_2,0) \in \mathcal{\bar{F}}_{(m)}\times \{0\}$, $v_1=0, v_2=0$. From Equation \ref{sys}, $\Delta\bar{y}_i + ({\bar{J}_f}^t\Delta\bar{z})_i=0.$ This implies that $ ({\bar{J}_f}^t\Delta\bar{z})_i<0$ and also $(\bar{z})_i ({\bar{J}_f}^t\Delta\bar{z})_i<0.$ This contradicts the assumption that $\bar{J}_f$ is a $P_0$-matrix.
Therefore the component $\bar{z}$ of $(\bar{z},\bar{y},\bar{w}_1,\bar{w}_2,0) \in \mathcal{L}\times \{0\}$ gives the solution of the nonlinear complementarity problem \ref{cp}. \end{proof} \begin{theorem}\label{22222} Suppose the matrix $(\bar{Z}\bar{J}_f^t+\bar{Y})$ is nonsingular, where $\bar{Y}=\text{diag}(\bar{y})$ and $\bar{Z}=\text{diag}(\bar{z}).$ Then $\bar{z}$ solves the complementarity problem \ref{cp}. \end{theorem} \begin{proof} Let $(\bar{Z}\bar{J}_f^t+\bar{Y})$ be a nonsingular matrix. From Remark \ref{mmf} it is then clear that $\Delta \bar{z}=0.$ This implies that $\bar{z}$ solves the complementarity problem \ref{cp}. \end{proof} \begin{theorem} If the Jacobian matrix $\bar{J}_f$ has nonsingular principal minors, then the component $\bar{z}$ of $(\bar{z}, \bar{y},\bar{w}_1,\bar{w}_2,0) \in \mathcal{L}\times \{0\}$ solves the nonlinear complementarity problem \ref{cp}. \end{theorem} \begin{proof} Suppose the Jacobian matrix $\bar{J}_f$ has nonsingular principal minors. Then the transpose $\bar{J}_f ^t$ of the Jacobian matrix has nonsingular principal minors. By Theorem \ref{22222}, if the matrix $(\bar{Z}\bar{J}_f^t+\bar{Y})$ is nonsingular, then $\bar{z}$ solves the nonlinear complementarity problem, where $\bar{Y}=\text{diag}(\bar{y}),$ $\bar{Z}=\text{diag}(\bar{z}).$ Let $\tilde{\mathcal{A'}}=\left[\begin{array}{cc} \bar{Y} & \bar{Z}\\ -\bar{J}_f^t & I\\ \end{array}\right]$. Then $\det(\tilde{\mathcal{A'}})= \det(\bar{Y}+\bar{Z}\bar{J}_f^t)$. Assume that the component $\bar{z}$ of $(\bar{z},\bar{y},\bar{w}_1,\bar{w}_2,0) \in \mathcal{L}\times \{0\}$ is not the solution of the nonlinear complementarity problem. Then there exists at least one $i$ such that $\bar{z}_i\bar{y}_i>0$.
Without loss of generality $\bar{y}$ and $\bar{z}$ can be represented as $\bar{y}=\left[\begin{array}{c} \bar{y}_p \\ \bar{y}_q\\ \bar{o}_r\\ \end{array}\right]$, $\bar{z}=\left[\begin{array}{c} \bar{o}_p \\ \bar{z}_q\\ \bar{z}_r\\ \end{array}\right]$, where $\bar{y}_p\in {R^p}_{++}, \ \bar{y}_q, \bar{z}_q \in {R^q}_{++}, \ \bar{z}_r\in {R^r}_{++}$, \ $\bar{o}_p \in R^p, \ \bar{o}_q \in R^q, \ \bar{o}_r\in R^r$ and $\bar{o}_p, \ \bar{o}_q, \ \bar{o}_r $ are vectors with all zeros. Here $(\bar{y}_q)_i(\bar{z}_q)_i>0$ and $\bar{Y}=\text{diag}(\bar{y}), \bar{Z}=\text{diag}(\bar{z})$. Now we can rewrite $\left[\begin{array}{cc} \bar{Y} & \bar{Z}\\ -\bar{J}_f^t & I\\ \end{array}\right]=\left[\begin{array}{cccccc} \bar{Y}_p & \bar{O}_q & \bar{O}_r & \bar{O}_p & \bar{O}_q & \bar{O}_r \\ \bar{O}_p & \bar{Y}_q & \bar{O}_r & \bar{O}_p & \bar{Z}_q & \bar{O}_r\\ \bar{O}_p & \bar{O}_q & \bar{O}_r & \bar{O}_p & \bar{O}_q & \bar{Z}_r\\ M' & B' & C' & I_p & \bar{O}_q & \bar{O}_r\\ D' & E' & F' & \bar{O}_p & I_q & \bar{O}_r\\ G' & H' & K' & \bar{O}_p & \bar{O}_q & I_r\\ \end{array}\right]$, where\\ $-\bar{J}_f^t=\left[\begin{array}{ccc} M' & B' & C'\\ D' & E' & F' \\ G' & H' & K' \\ \end{array}\right]$, \ $\bar{Y}=\left[\begin{array}{ccc} \bar{Y}_p & \bar{O}_q & \bar{O}_r \\ \bar{O}_p & \bar{Y}_q & \bar{O}_r \\ \bar{O}_p & \bar{O}_q & \bar{O}_r \\ \end{array}\right]$, \ $\bar{Z}=\left[\begin{array}{ccc} \bar{O}_p & \bar{O}_q & \bar{O}_r \\ \bar{O}_p & \bar{Z}_q & \bar{O}_r\\ \bar{O}_p & \bar{O}_q & \bar{Z}_r\\ \end{array}\right]$, \ $\bar{Z}_q=\text{diag}(\bar{z}_q)$, \ $\bar{Z}_r=\text{diag}(\bar{z}_r)$, \ $\bar{Y}_q=\text{diag}(\bar{y}_q)$, $\bar{Y}_p=\text{diag}(\bar{y}_p)$, \ $\bar{O}_p=\text{diag}(\bar{o}_p)$, \ $\bar{O}_q=\text{diag}(\bar{o}_q)$, \ $\bar{O}_r=\text{diag}(\bar{o}_r)$, \ $M',D',G', I_p\in R^{p \times p}$, \ $B',E',H', I_q \in R^{q \times q}$, \ $C',F',K', I_r \in R^{r \times r}$ and $I_p,I_q,I_r$ are
identity matrices. By elementary row operations we get \vskip 1em $\tilde{\mathcal{B'}}=\left[\begin{array}{cccccc} I & \bar{O}_q & \bar{O}_r & \bar{O}_p & \bar{O}_q & \bar{O}_r \\ \bar{O}_p & I & \bar{O}_r & \bar{O}_p & \bar{Z}_q{\bar{Y}_q}^{-1} & \bar{O}_r\\ \bar{O}_p & \bar{O}_q & \bar{O}_r & \bar{O}_p & \bar{O}_q & I\\ M' & B' & C' & I & \bar{O}_q & \bar{O}_r\\ D' & E' & F' & \bar{O}_p & I & \bar{O}_r\\ G' & H' & K' & \bar{O}_p & \bar{O}_q & I\\ \end{array}\right]$. \vskip 1em By interchanging rows, this matrix reduces to \\ $\tilde{\mathcal{C'}}=\left[\begin{array}{cccccc} I & \bar{O}_q & \bar{O}_r & \bar{O}_p & \bar{O}_q & \bar{O}_r \\ \bar{O}_p & I & \bar{O}_r & \bar{O}_p & \bar{Z}_q{\bar{Y}_q}^{-1} & \bar{O}_r\\ -G' & -H' & -K' & \bar{O}_p & \bar{O}_q & \bar{O}_r\\ M' & B' & C' & I & \bar{O}_q & \bar{O}_r\\ D' & E' & F' & \bar{O}_p & I & \bar{O}_r\\ G' & H' & K' & \bar{O}_p & \bar{O}_q & I\\ \end{array}\right]$.\\ Hence $\det(\tilde{\mathcal{A'}})= \det(\tilde{\mathcal{C'}})=(-1)^r\det(K')\neq 0$. Therefore, by Theorem \ref{22222}, $\bar{z}$ solves the nonlinear complementarity problem. This contradicts our assumption. Hence the component $\bar{z}$ of $(\bar{z},\bar{y},\bar{w}_1,\bar{w}_2,0) \in \mathcal{L}\times \{0\}$ is the solution of the nonlinear complementarity problem \ref{cp}. \end{proof} \begin{remk} Now we trace the homotopy path $\Gamma_x^{(0)} \subset \mathcal{{F}}_{(m)} \times (0,1]$ from the initial point $(x^{(0)},1)$ until $\lambda \to 0$ and find the solution of the given complementarity problem \ref{cp} under some assumptions. Let $s$ denote the arc length of $\Gamma_x^{(0)}$; then we can parameterize the homotopy path $\Gamma_x^{(0)}$ with respect to $s$ in the following form\\ \begin{equation}\label{ss} H_{x^{(0)}} (x(s),\lambda (s))=0, \ x(0)=x^{(0)}, \ \lambda(0)=1.
\end{equation} Differentiating \ref{ss} with respect to $s$, we obtain the following system of ordinary differential equations with given initial values \cite{fan}: \begin{equation} H'_{x^{(0)}} (x(s),\lambda (s))\left[\begin{array}{c} \frac{dx}{ds}\\ \frac{d\lambda}{ds}\\ \end{array}\right]=0, \ \left\|\left( \frac{dx}{ds},\frac{d\lambda}{ds}\right)\right\|=1, \ x(0)=x^{(0)}, \ \lambda(0)=1, \ \frac{d\lambda}{ds}(0)<0, \end{equation} and the $x$-component of $(x(\bar{s}),\lambda (\bar{s}))$ gives the solution of the complementarity problem when $\lambda (\bar{s})=0.$ \end{remk} We now use the homotopy continuation method, with some modifications, to trace the homotopy path $\Gamma_x^{(0)}$ numerically. \subsection{Algorithm}\label{homosin} \textbf{Step 0:} Set $i=i_s=0$, where $i$ is the number of iterations and $i_s$ is the number of shifts of the initial point. Choose an initial point $(x^{(0)},\lambda_0) \in \mathcal{F}_{(m)} \times \{1\}.$ Set $\eta_1=10^{-12}$, $\eta_2=10^{-8}$, $c_0=50$, $m_0=25$, $\kappa_1=\sqrt{2}$, $\kappa_2=9000$, where the step length is determined by $\kappa_1^k$, $k \in \mathbb{Z}$, and the maximum step length is bounded by $\kappa_1^k \leq \kappa_2.$ Set $\epsilon_1=10^{-9}$, $\epsilon_2=10^{-6}$; these are thresholds for $\lambda.$ If $\lambda$ reaches a value $0\leq \lambda \leq \epsilon_1$, the algorithm stops with an acceptable solution. If instead $\lambda$ gets stuck at a value with $\epsilon_1< \lambda \leq \epsilon_2$, the algorithm stops and declares that point a probable solution.\\ \textbf{Step 1:} Set $\left[\begin{array}{c} x\\ t\\ \end{array}\right]=\left[\begin{array}{c} x^{(0)}\\ 1\\ \end{array}\right]$ and compute $d^{(0)}=\det \big(\frac{\partial H}{\partial x}(x^{(0)},\lambda_0)\big).$ If $|d^{(0)}|\leq \epsilon$, stop; otherwise go to Step $2$. [$\epsilon \to 0$ is a threshold.] \textbf{Step 2:} Set $c_1=c_2=0$ and compute $d=\det \big(\frac{\partial H}{\partial x}(x,\lambda)\big).$ If $|d|\leq \epsilon$, stop; otherwise go to Step $3$. \textbf{Step 3:} Determine the unit predictor direction $\tau^{(n)}$ as follows. If $\operatorname{sign}(d)=-\operatorname{sign}(d^{(0)})$, set $t_d=1-\lambda$; otherwise set $t_d=-\lambda.$ Compute $x_d=-t_d\big(\frac{\partial H}{\partial x}(x,\lambda)\big)^{-1}\big(\frac{\partial H}{\partial \lambda}(x,\lambda)\big)$, $\tau^{(n)}=\left[\begin{array}{c} x_n\\ t_n\\ \end{array}\right]=\frac{1}{\|x_d,t_d\|}\left[\begin{array}{c} x_d\\ t_d\\ \end{array}\right]$, $\tau=\dfrac{|t_d|}{\|x_d,t_d\|}$, where $\|x_d,t_d\|=\sqrt{\|x_d\|^2+t_d^2}.$ If $\tau \leq \eta_1$, set $c_1=c_1+1$; otherwise reset $c_1=0.$ If $c_1<c_0$, go to Step $4$; otherwise, if $t_d\leq \epsilon_2$, stop with a probable solution, else stop due to non-convergence. \textbf{Step 4:} Choose the step length. Set $k=0$ and $\gamma=[\nabla\mu(x)]^tx_n$, where $\mu: R^n \to R$ is used to increase the step length in descent directions. Choose this function so that $\mu(\bar{x})\leq \mu(x)$ for $x, \bar{x} \in \mathcal{F}_{(m)}$, where $\bar{x}$ is the solution of the problem.
$\mu(x)$ is taken as $[H_0(x)]^t[H_0(x)],$ where $H_0(x)=\left[\begin{array}{c} y-w_1+v_1e+J_f^t(z-w_2+v_2e) \\ W_1z\\ W_2y\\ y-f(z)\\ (m-\sum_{i=1}^{n}(z+w_1)_i-v_2)v_1\\ (m-\sum_{i=1}^{n}(y+w_2)_i-v_1)v_2\\ \end{array}\right].$ If $\gamma \geq 0$, $x+\kappa_1^{k+1}x_n \in \mathcal{F}_{(m)}$ and $0<t+\kappa_1^{k+1}t_n<1$, then set $k=k+1$ and go to Step $5$; else if $\gamma< 0$, $\mu(x+\kappa_1^{k+1}x_n)<\mu(x+\kappa_1^{k}x_n)$, $x+\kappa_1^{k+1}x_n \in \mathcal{F}_{(m)}$ and $0<t+\kappa_1^{k+1}t_n<1$, then set $k=k+1$ and go to Step $5$; else reset $c_2=0$ and jump to Step $6$. \textbf{Step 5:} If $\kappa_1^k>\kappa_2$, then set $k=k-1$, $c_2=c_2+1$ and go to Step $6$; else go to Step $4$. \textbf{Step 6:} If $c_2<c_0$, then go to Step $7$; else if $t_n\leq \epsilon_2$, then stop with a probable solution, else stop. \textbf{Step 7:} Compute the predictor and corrector points: $\left[\begin{array}{c} x_p\\ t_p\\ \end{array}\right]= \left[\begin{array}{c} x\\ t\\ \end{array}\right]+\kappa_1^k\left[\begin{array}{c} x_n\\ t_n\\ \end{array}\right],$ $\left[\begin{array}{c} \bar{x}_p\\ \bar{t}_p\\ \end{array}\right]= \left[\begin{array}{c} x_p\\ t_p\\ \end{array}\right]-[J_H(x_p,t_p)^{+}H(x_p,t_p)],$ where $J_H(x_p,t_p)^{+}$ is the Moore-Penrose inverse.
Now compute $\left[\begin{array}{c} \tilde{x}_p\\ \tilde{t}_p\\ \end{array}\right]= \left[\begin{array}{c} x_p\\ t_p\\ \end{array}\right]-2[(J_H(x_p,t_p)+J_H(\bar{x}_p,\bar{t}_p))^{+}H(x_p,t_p)]$ and $r=\|H(x_c,t_c)\|.$ Then compute the next iterate $\left[\begin{array}{c} x_{cc}\\ t_{cc}\\ \end{array}\right]=\left[\begin{array}{c} \bar{x}_p\\ \bar{t}_p\\ \end{array}\right]-2[J_H(\bar{x}_p,\bar{t}_p)+J_H(\tilde{x}_p,\tilde{t}_p)]^{+}H(\bar{x}_p,\bar{t}_p)$ and $\left[\begin{array}{c} x_b\\ t_b\\ \end{array}\right]=\left[\begin{array}{c} x_{cc}\\ t_{cc}\\ \end{array}\right]-J_H(x_{cc},t_{cc})^{+}H(x_{cc},t_{cc}).$ Repeating this $m_0$ times, we get the next iterate $\left[\begin{array}{c} x_c\\ t_c\\ \end{array}\right]=\left[\begin{array}{c} x_b\\ t_b\\ \end{array}\right].$ If $r\leq 1$, $0<t_c<1$ and $x_c \in \mathcal{F}_{(m)}$, then jump to Step $10$; else set $k=k-1$ and go to Step $8$. \textbf{Step 8:} Calculate $a = \min(\kappa_1^k,\|x-x_c\|).$ If $a\leq \eta_2$, then go to Step $9$; else jump back to Step $5.$ \textbf{Step 9:} If $t_c \leq \epsilon_2$, then stop with a probable solution; else set $i_s = i_s + 1$ and jump back to Step $1$ after changing the initial point to $x^{(0)} = x_c.$ \textbf{Step 10:} Set $\left[\begin{array}{c} x \\ t \\ \end{array}\right]= \left[\begin{array}{c} x_c\\ t_c\\ \end{array}\right].$ If $t_c\leq \epsilon_1$, then stop with an acceptable homotopy solution; else set $i=i+1$ and go to Step $2.$\\ Note that $J_H(x,t)^{+}$ is the Moore-Penrose inverse of the Jacobian matrix $J_H(x,t)$, that is, $J_H(x,t)^{+}=J_H(x,t)^T (J_H(x,t)J_H(x,t)^T)^{-1}.$ The proposed homotopy continuation method solves the homotopy function by solving the initial value problem with the following iterative process: $J_j=$ $\left[\begin{array}{c} x_p\\ t_p\\ \end{array}\right]= \left[\begin{array}{c}
x\\ t\\ \end{array}\right]+\kappa_1^k\left[\begin{array}{c} x_n\\ t_n\\ \end{array}\right],$\\ $T_j=[J_H(x_p,t_p)^{+}H(x_p,t_p)],$\\ $S_j=J_j-T_j= \left[\begin{array}{c} \bar{x}_p\\ \bar{t}_p\\ \end{array}\right]$,\\ $TT_j= \left[\begin{array}{c} x_p\\ t_p\\ \end{array}\right]-2[(J_H(x_p,t_p)+J_H(\bar{x}_p,\bar{t}_p))^{+}H(x_p,t_p)]=\left[\begin{array}{c} \tilde{x}_p\\ \tilde{t}_p\\ \end{array}\right]$,\\ $SS_j= \left[\begin{array}{c} x_{cc}\\ t_{cc}\\ \end{array}\right]=\left[\begin{array}{c} \bar{x}_p\\ \bar{t}_p\\ \end{array}\right]-2[J_H(\bar{x}_p,\bar{t}_p)+J_H(\tilde{x}_p,\tilde{t}_p)]^{+}H(\bar{x}_p,\bar{t}_p).$\\ By this iterative process the proposed homotopy method achieves an order of convergence of $7^m -1.$ \begin{theorem} Suppose the homotopy function has a derivative which is Lipschitz continuous in a convex neighbourhood $\mathcal{N}$ of $c$, where $c$ is the solution of the homotopy function $H(u,t)=0$, and whose Jacobian matrix is continuous, nonsingular and bounded on $\mathcal{N}.$ Then the homotopy continuation method has order $7^{m}-1.$ \end{theorem} \begin{proof} The Implicit Function Theorem ensures the existence of a unique continuous solution $z(h) \in \mathcal{N}$ of $\dot{z}(h)=-\tilde{J}^{-1}\tilde{f}$, $z(0)=u$, for $h\in(-\delta,\delta)$ and some $\delta >0.$ Define $\beta_j=\|z(h)-I_j(u,h)\|.$ From Lemma \ref{coorder}, $\beta_j=O(h^{7^j}).$ Then $\beta_{j+1}=\|z(h)-I_{j+1}\| \leq K{\beta_j}^7$, hence $\beta_{j+1}=O(h^{7^{j+1}}).$ By induction, the modified homotopy continuation method has convergence of order $7^{m}-1$. \end{proof} \begin{theorem} If the homotopy curve $\Gamma_x^{(0)}$ is smooth, then the positive tangent direction $\tau^{(0)}$ at the initial point $x^{(0)}$ satisfies sign($\det \left[\begin{array}{c}
\frac{\partial H}{\partial x \partial \lambda}(x^{(0)},1)\\ \tau^{(0)^t}\\ \end{array}\right]$)$<0.$ \end{theorem} \begin{proof} From equation \ref{homf} we have $ H(x,x^{(0)},\lambda)=\\$ $\left[\begin{array}{c} (1-\lambda)(y-w_1+v_1e+J_f^t(z-w_2+v_2e))+\lambda(z-z^{(0)}) \\ W_1z-\lambda W_1^{(0)}z^{(0)}\\ W_2y-\lambda W_2^{(0)}y^{(0)}\\ y-(1-\lambda)f(z)-\lambda y^{(0)}\\ (m-\sum_{i=1}^{n}(z+w_1)_i-v_2)v_1- \lambda(m-\sum_{i=1}^{n}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)})v_1^{(0)}\\ (m-\sum_{i=1}^{n}(y+w_2)_i-v_1)v_2- \lambda(m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})v_2^{(0)}\\ \end{array}\right]=0.$\\ At the point $(x=x^{(0)},\lambda=1)$, the partial derivative is $\frac{\partial H}{\partial x \partial \lambda}(x,\lambda)=\begin{bmatrix} K_5 & K_6 \end{bmatrix},$ where\\ $K_5=\begin{bmatrix} M' & N'\\ \end{bmatrix},$ $M'=\begin{bmatrix} I & 0 & 0 & 0 \\ W_1^{(0)} & 0 & Z^{(0)} & 0 \\ 0 & W_2^{(0)} & 0 & Y^{(0)} \\ 0 & I & 0 & 0 \\ -v_1^{(0)} e^t & 0 & -v_1^{(0)} e^t & 0 \\ 0 & -v_2^{(0)} e^t & 0 & -v_2^{(0)} e^t \\ \end{bmatrix}$\\ and $N'=\begin{bmatrix}0 & 0\\ 0 & 0 \\ 0 & 0 \\ 0 & 0\\ (m-\sum_{i=1}^{n}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)}) & -v_1^{(0)} \\ -v_2^{(0)} & (m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})\\ \end{bmatrix}.$\\ $K_6=\begin{bmatrix} A\\ B\\ C\\ D\\ E\\ F\\ \end{bmatrix},$ $Y^{(0)}=\text{diag}(y^{(0)})$, $Z^{(0)}=\text{diag}(z^{(0)})$, $W_1^{(0)}=\text{diag}(w_1^{(0)})$, $W_2^{(0)}=\text{diag}(w_2^{(0)}),\\A=-[y^{(0)}-w_1 ^{(0)}+v_1 ^{(0)}e+J_{(f^{(0)})}^t(z^{(0)}-w_2 ^{(0)}+v_2 ^{(0)}e)],\\ B=-W_1^{(0)}z^{(0)}, \ C=-W_2^{(0)}y^{(0)}, \ D= f(z^{(0)})-y^{(0)}, \ E=-(m-\sum_{i=1}^{n}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)})v_1^{(0)}, \ F=-(m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})v_2^{(0)}.
$\\ Let the positive tangent direction be $\tau^{(0)}=\left[\begin{array}{c} t \\ -1 \end{array}\right]=\left[\begin{array}{c} (R^{(0)}_1)^{(-1)}R_2^{(0)} \\ -1 \end{array}\right],$\\ where $R^{(0)}_1=\left[\begin{array}{cc} P' & Q'\\ \end{array}\right],$ $P'=\left[\begin{array}{cccc} I & 0 & 0 & 0 \\ W_1^{(0)} & 0 & Z^{(0)} & 0 \\ 0 & W_2^{(0)} & 0 & Y^{(0)} \\ 0 & I & 0 & 0 \\ -v_1^{(0)} e^t & 0 & -v_1^{(0)} e^t & 0 \\ 0 & -v_2^{(0)} e^t & 0 & -v_2^{(0)} e^t \\ \end{array}\right],$\\ $Q'=\left[\begin{array}{cc} 0 & 0\\ 0 & 0\\ 0 & 0\\ (m-\sum_{i=1}^{n}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)}) & -v_1^{(0)}\\ -v_2^{(0)} & (m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})\\ \end{array}\right]$\\ and $R^{(0)}_2=\left[\begin{array}{c} A\\ B\\ C\\ D\\ E\\ F\\ \end{array}\right],$ where $A=-[y^{(0)}-w_1 ^{(0)}+v_1 ^{(0)}e+J_{(f^{(0)})}^t(z^{(0)}-w_2 ^{(0)}+v_2 ^{(0)}e)],\\ B=-W_1^{(0)}z^{(0)}, \ C=-W_2^{(0)}y^{(0)}, \ D= f(z^{(0)})-y^{(0)},\\ E=-(m-\sum_{i=1}^{n}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)})v_1^{(0)}, \ F=-(m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})v_2^{(0)}.
$ \\ Here $\text{det}(R^{(0)}_1)=\Big(\frac{(m-\sum_{i=1}^{n}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)})(m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})}{v_1^{(0)}}-v_2^{(0)}\Big)(m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})\prod_{i=1}^{n} z_i^{(0)}y_i^{(0)}\neq 0.$ Therefore, \\ $\det\left[\begin{array}{c} \frac{\partial H}{\partial x \partial \lambda}(x^{(0)},1)\\ \tau ^{(0)^t}\\ \end{array}\right]=\det\left[\begin{array}{cc} R^{(0)}_1 & R^{(0)}_2\\ (R^{(0)}_2)^t(R^{(0)}_1)^{(-t)} & -1\\ \end{array}\right] \\ = \det\left[\begin{array}{cc} R^{(0)}_1 & R^{(0)}_2\\ 0 & -1-(R^{(0)}_2)^t(R^{(0)}_1)^{(-t)}(R^{(0)}_1)^{(-1)}R_2^{(0)} \\ \end{array}\right] \\ =\det(R^{(0)}_1)\det(-1-(R^{(0)}_2)^t(R^{(0)}_1)^{(-t)}(R^{(0)}_1)^{(-1)}R_2^{(0)}) \\ =-\det(R^{(0)}_1)\det(1+(R^{(0)}_2)^t(R^{(0)}_1)^{(-t)}(R^{(0)}_1)^{(-1)}R_2^{(0)}) \\ =-\Big(\frac{(m-\sum_{i=1}^{n}(z^{(0)}+w_1^{(0)})_i-v_2^{(0)})(m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})}{v_1^{(0)}}-v_2^{(0)}\Big)(m-\sum_{i=1}^{n}(y^{(0)}+w_2^{(0)})_i-v_1^{(0)})\prod_{i=1}^{n} z_i^{(0)}y_i^{(0)}\det(1+(R^{(0)}_2)^t(R^{(0)}_1)^{(-t)}(R^{(0)}_1)^{(-1)}R_2^{(0)})<0. $ \end{proof} \section{Numerical Example}\label{oliex} We consider the nonlinear complementarity form of the oligopolistic market equilibrium problem and determine the equilibrium point with the homotopy method. Here we consider the oligopolistic market equilibrium problem, operating under the Nash equilibrium concept of noncooperative behaviour, as a problem in game theory. To illustrate the use of the algorithm presented in the previous section, a numerical example \cite{articleHarker} is presented here.
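The predictor--corrector tracing of Steps 1--10 above can be illustrated with a small NumPy sketch. This is our own toy illustration under stated assumptions, not the authors' implementation: the system \texttt{f}, the Newton homotopy \texttt{H}, the tolerances, and all function names are placeholder choices. The corrector applies the Moore-Penrose inverse $J_H^{+}=J_H^T(J_HJ_H^T)^{-1}$, here via \texttt{numpy.linalg.pinv}, and the predictor direction is the oriented null vector of the rectangular Jacobian, in the spirit of Step 3.

```python
import numpy as np

def f(x):
    # Toy smooth system standing in for the complementarity map (our choice).
    return np.array([x[0]**3 + x[1] - 3.0, x[1]**3 - x[0] - 1.0])

def jac(F, u, eps=1e-7):
    # Forward-difference Jacobian of F at u.
    F0 = F(u)
    J = np.zeros((F0.size, u.size))
    for i in range(u.size):
        du = np.zeros_like(u)
        du[i] = eps
        J[:, i] = (F(u + du) - F0) / eps
    return J

def trace_path(x0, h=0.02, max_steps=400):
    """Trace the Newton homotopy H(x,t) = f(x) - t*f(x0) from t = 1 to t = 0."""
    fx0 = f(x0)
    H = lambda u: f(u[:-1]) - u[-1] * fx0        # u = (x, t)
    u = np.concatenate([x0, [1.0]])
    for _ in range(max_steps):
        if u[-1] <= 1e-9:
            break
        J = jac(H, u)                            # n x (n+1) Jacobian
        # Predictor: unit tangent = null vector of J, oriented so t decreases.
        tau = np.linalg.svd(J)[2][-1]
        if tau[-1] > 0:
            tau = -tau
        u = u + h * tau
        # Corrector: Gauss-Newton steps with the Moore-Penrose inverse J^+.
        for _ in range(10):
            u = u - np.linalg.pinv(jac(H, u)) @ H(u)
        u[-1] = max(u[-1], 0.0)                  # keep t nonnegative
    # Final polish at t = 0 with plain Newton on f.
    x = u[:-1]
    for _ in range(30):
        x = x - np.linalg.solve(jac(f, x), f(x))
    return x
```

For this toy system the Jacobian of $f$ is nonsingular everywhere, so the final Newton polish is well defined; a practical implementation would add the feasibility checks, step-length control, and restart logic of Steps 4--9.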
Consider an oligopoly with five firms, each with a total cost function of the form \begin{equation}\label{olix} c_i(Q_i)=n_iQ_i+\frac{\beta_i}{\beta_i+1}{L_i}^{-\frac{1}{\beta_i}}{Q_i}^{\frac{\beta_i+1}{\beta_i}}. \end{equation} The demand curve is given by \begin{equation} \tilde{Q}=5000P^{-1.1}, \ \ P(\tilde{Q})=5000^{1/1.1}\tilde{Q}^{-1/1.1}. \end{equation} The parameters of equation \ref{olix} for the five firms are given below: \begin{table}[ht] \caption{Values of the parameters for the five firms} \centering \begin{tabular}{c c c c} \hline\hline firm $i$ & $n_i$ & $L_i$ & $\beta_i$ \\ [0.5ex] \hline 1 & 10 & 5 & 1.2 \\ 2 & 8 & 5 & 1.1 \\ 3 & 6 & 5 & 1 \\ 4 & 4 & 5 & 0.8 \\ 5 & 2 & 5 & 0.6 \\ [1ex] \hline \end{tabular} \end{table} We now solve this problem using the above algorithm. To apply the homotopy method \ref{homf} with real-valued parameter $\lambda$, we first take the initial point $z^{(0)}=\left[\begin{array}{c} 1\\ 1\\ 1\\ 1\\ 1\\ \end{array}\right],$ $y^{(0)}=\left[\begin{array}{c} 1\\ 1\\ 1\\ 1\\ 1\\ \end{array}\right],$ ${w_1}^{(0)}=\left[\begin{array}{c} 1\\ 1\\ 1\\ 1\\ 1\\ \end{array}\right],$ ${w_2}^{(0)}=\left[\begin{array}{c} 1\\ 1\\ 1\\ 1\\ 1\\ \end{array}\right],$ $v_1=0.001,$ $v_2=0.001,$ $\lambda_0=1.$ After $23$ iterations we get the result $\bar{z}=\left[\begin{array}{c} 15.42931\\ 12.49858\\ 9.663473\\ 7.165094\\ 5.132566\\ \end{array}\right],$ $\bar{y}=\left[\begin{array}{c} 0\\ 0\\ 0\\ 0\\ 0\\ \end{array}\right],$ $\bar{w}_1=\left[\begin{array}{c} 0\\ 0\\ 0\\ 0\\ 0\\ \end{array}\right],$ $\bar{w}_2=\left[\begin{array}{c} 15.42931\\ 12.49858\\ 9.663473\\ 7.165094\\ 5.132566\\ \end{array}\right],$ $\bar{v}_1=0,$ $\bar{v}_2=0,$ $\bar{\lambda}=0.$
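The cost and demand data of this example can be encoded directly. The sketch below is our own illustration (the helper names \texttt{cost}, \texttt{marginal\_cost}, \texttt{price}, and \texttt{nash\_residual} are ours, not from the paper): it evaluates the total cost \ref{olix}, the marginal cost $c_i'(Q_i)=n_i+L_i^{-1/\beta_i}Q_i^{1/\beta_i}$, the inverse demand, and the Cournot--Nash first-order residual $P(\tilde{Q})+Q_i P'(\tilde{Q})-c_i'(Q_i)$, which can be used to sanity-check a computed equilibrium point.

```python
import numpy as np

# Parameters transcribed from the table above (firms 1..5).
n = np.array([10.0, 8.0, 6.0, 4.0, 2.0])
L = np.array([5.0, 5.0, 5.0, 5.0, 5.0])
beta = np.array([1.2, 1.1, 1.0, 0.8, 0.6])

def cost(Q):
    # Total cost c_i(Q_i) = n_i Q_i + beta_i/(beta_i+1) L_i^(-1/beta_i) Q_i^((beta_i+1)/beta_i).
    return n * Q + beta / (beta + 1.0) * L ** (-1.0 / beta) * Q ** ((beta + 1.0) / beta)

def marginal_cost(Q):
    # c_i'(Q_i) = n_i + L_i^(-1/beta_i) Q_i^(1/beta_i).
    return n + L ** (-1.0 / beta) * Q ** (1.0 / beta)

def price(Q_total):
    # Inverse demand P(Q~) = 5000^(1/1.1) Q~^(-1/1.1).
    return 5000.0 ** (1.0 / 1.1) * Q_total ** (-1.0 / 1.1)

def nash_residual(Q):
    # Firm i's profit stationarity: P + Q_i * P'(Q~) - c_i'(Q_i),
    # using P'(Q~) = -(1/1.1) P(Q~) / Q~ for this demand curve.
    Qt = Q.sum()
    return price(Qt) - Q * price(Qt) / (1.1 * Qt) - marginal_cost(Q)
```

Plugging the computed point $\bar{z}$ reported above into \texttt{nash\_residual} gives a quick consistency check of the stationarity conditions at the returned solution.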
\begin{figure}[h] \centering \includegraphics[height=1.5in, width=4in]{untitledoligopoly.png} \caption{Solution approach for the oligopoly problem} \label{fig0} \end{figure} \section{Conclusion} In this study, we considered a homotopy path for solving the nonlinear complementarity problem, based on our newly introduced homotopy function, via a modified homotopy continuation method. We found the positive tangent direction of the homotopy path and proved that the smooth curve for the proposed homotopy function is bounded and convergent under some conditions on the initial points. An oligopoly equilibrium problem was solved numerically by the proposed modified homotopy continuation method. \section{Acknowledgment} The author A. Dutta is thankful to the Department of Science and Technology, Govt. of India, INSPIRE Fellowship Scheme, for financial support. \vskip 1em \end{document}
\begin{document} \title{Asymptotic Behaviour of Ergodic Integrals of `Renormalizable' Parabolic Flows} \thispagestyle{first} \setcounter{page}{317} \begin{abstract} \vskip 3mm Ten years ago A. Zorich discovered, by computer experiments on interval exchange transformations, some striking new power laws for the ergodic integrals of generic non-exact Hamiltonian flows on higher genus surfaces. In Zorich's later work and in a joint paper with M. Kontsevich, they were able to explain conjecturally most of Zorich's discoveries by relating them to the ergodic theory of Teichm\"uller flows on moduli spaces of Abelian differentials. In this article, we outline a generalization of the Kontsevich-Zorich framework to a class of `renormalizable' flows on `pseudo-homogeneous' spaces. We construct for such flows a `renormalization dynamics' on an appropriate `moduli space', which generalizes the Teichm\"uller flow. If a flow is renormalizable and the space of smooth functions is `stable', in the sense that the Lie derivative operator on smooth functions has closed range, the behaviour of ergodic integrals can be analyzed, at least in principle, in terms of an Oseledec decomposition for a `renormalization cocycle' over the bundle of `basic currents' for the orbit foliation of the flow. This approach was suggested by the author's proof of the Kontsevich-Zorich conjectures and has since been applied, in collaboration with L. Flaminio, to prove that the Zorich phenomenon generalizes to several classical examples of volume preserving, uniquely ergodic, parabolic flows such as horocycle flows and nilpotent flows on homogeneous $3$-manifolds. \vskip 4.5mm \noindent {\bf 2000 Mathematics Subject Classification:} 37C40, 37E35, 37A17, 37A25, 34D08, 43A85. \noindent {\bf Keywords and Phrases:} Ergodic integrals, Renormalization dynamics, Cohomological equations, Invariant distributions, Basic currents.
\end{abstract} \vskip 12mm \section{Introduction} \label{section 1}\setzero \vskip-5mm \hspace{5mm} A fundamental problem in smooth ergodic theory is to establish quantitative estimates on the asymptotic behaviour of ergodic integrals of smooth functions. For several examples of {\it hyperbolic }flows, such as geodesic flows on compact manifolds of negative curvature, the asymptotic behaviour of ergodic integrals is described by the {\it Central Limit Theorem }(Y. Sinai, M. Ratner). In these cases, the dynamical system can be described as an approximation of a `random' stochastic process, like the outcomes of flipping a coin. Non-hyperbolic systems are not as well understood, with the important exception of toral flows. For generic non-singular area-preserving flows on the $2$-torus, logarithmic bounds on ergodic integrals of zero-average functions of bounded variation can be derived from the {\it Denjoy-Koksma inequality }and the theory of continued fractions. For a general ergodic flow, ergodic integrals are bounded for all times for a special class of functions: {\it coboundaries }with bounded `transfer' functions (Gottschalk-Hedlund). In the hyperbolic examples and in the case of generic toral flows, a smooth function is a coboundary if and only if it has zero average with respect to all invariant measures. In this article, we are interested in flows with {\it parabolic }behaviour. Following A. Katok, a dynamical system is called parabolic if the rate of divergence of nearby orbits is at most polynomial in time, while hyperbolic systems are characterized by exponential divergence. Toral flows are a rather special parabolic example, called {\it elliptic}, since there is no divergence of orbits.
It has been known for many years that typical examples of parabolic flows, such as horocycle flows or generic nilpotent flows are {\it uniquely ergodic}, but until recently not much was known on the asymptotic behaviour of ergodic averages, with the exception of some polynomial bounds on the speed of convergence in the horocycle case (M. Ratner, M. Burger), related to the polynomial rate of mixing. We have been able to prove, in collaboration with L. Flaminio, that for many examples of parabolic dynamics the behaviour of ergodic averages is typically described as follows. A smooth flow $\Phi^X$ on a finite dimensional manifold $M$ has {\it deviation spectrum }$\{\lambda_1> ... > \lambda_i> ... >0\}$ with multiplicities $m_1, ...,m_i,...\in {\mathbb Z}^+$ if there exists a system $\{{\mathcal D}_{ij} \,|\, i\in {\mathbb Z}^+\,,\,\,1\le j\le m_i\}$ of linearly independent $X$-{\it invariant distributions }such that, for almost all $p\in M$, the ergodic integrals of any smooth function $f\in C_0^{\infty}(M)$ have an asymptotic expansion \begin{equation} \label{DS} \int_0^T f\bigl(\Phi^X(t,p)\bigr) dt \,\,= \,\,\sum_{i\in \mathbb{N}} \sum_{j=1}^{m_i} c_{ij}(p,T) \,{\mathcal D}_{ij}(f)\, T^{\lambda_i}\,\, +\,\, R(p,T)(f)\,, \end{equation} where the real coefficients $c_{ij}(p,T)$ and the distributional remainder $R(p,T)$ have, for almost all $p\in M$, a sub-polynomial behaviour, in the sense that \begin{equation} \limsup_{T\to +\infty} \frac{ \log \sum_{j=1}^{m_i}|c_{ij}(p,T)|^2}{\log T}\, =\,\limsup_{T\to +\infty} \frac{\log \|R(p,T)\|}{\log T}\,=\,0\,. \end{equation} The notion of a deviation spectrum first arose in the work of A. Zorich and in his joint work with M. Kontsevich on non-exact Hamiltonian flows with isolated saddle-like singularities on compact higher genus surfaces. Zorich discovered in numerical experiments on interval exchange transformations an unexpected new phenomenon \cite{Zone}. 
He found that, although a generic flow on a surface of genus $g\ge 2$ is uniquely ergodic (H. Masur, W. Veech), for large times the homology classes of return orbits exhibit unbounded polynomial deviations with exponents $\lambda_1>\lambda_2>...>\lambda_g>0$ from the line spanned in the homology group by the {\it Schwartzmann's asymptotic cycle}. In his later work \cite {Ztwo}, \cite{Zthree} and in joint work with M. Kontsevich \cite{K}, Zorich was able to explain this phenomenon in terms of conjectures on the Lyapunov exponents of the Teichm\"uller flow on moduli spaces of holomorphic differentials on Riemann surfaces. Kontsevich and Zorich also conjectured that Zorich's phenomenon is not merely topological, but it extends to ergodic integrals of smooth functions. ``There is, presumably, an equivalent way of describing the numbers $\lambda_i$. Namely, let $f$ be a smooth function ... Then for a generic trajectory $p(t)$, we expect that the number $\int_0^T f\bigl(p(t)\bigr) dt$ for large $T$ with high probability has size $T^{\lambda_i+o(T)}$ for some $i\in \{1,...,g\}$. The exponent $\lambda_1$ appears for all functions with non-zero average value. The next exponent, $\lambda_2$, should work for functions in a codimension $1$ subspace of $C^{\infty}(S)$, etc.'' \cite{K} Around the same time, we proved that for a generic non-exact Hamiltonian flow $\Phi^X$ on a higher genus surface not all smooth zero average functions are smooth coboundaries \cite{Fone}. In fact, we found that, in contrast with the hyperbolic case and the elliptic case of toral flows, there are $X$-invariant distributional obstructions, which are not signed measures, to the existence of smooth solutions of the {\it cohomological equation }$Xu=f$. This result suggested that Zorich's phenomenon should be related to the presence of invariant distributions other than the (unique) invariant probability measure. 
In fact, in \cite{Ftwo} we were able to prove the Kontsevich-Zorich conjectures that the deviation exponents are non-zero and that generic non-exact Hamiltonian flows on higher genus surfaces have a deviation spectrum. Recently, in collaboration with L. Flaminio, we have proved that other classical parabolic examples, such as horocycle flows on compact surfaces of constant negative curvature \cite{FFone} and generic nilpotent flows on compact $3$-dimensional nilmanifolds \cite{FFtwo}, do have a deviation spectrum, but of countable multiplicity, in contrast with the case of flows on surfaces, which have spectrum of finite multiplicity equal to the genus. We will outline below a general framework, derived mostly from \cite{Ftwo} and successfully carried out in \cite{FFone}, \cite{FFtwo}, for proving that a flow on a {\it pseudo-homogeneous space }has a deviation spectrum. Our framework is based on the construction of an appropriate {\it renormalization dynamics }on a moduli space of pseudo-homogeneous structures, which generalizes the Teichm\"uller flow. A {\it renormalizable flow }for which the space of smooth functions is {\it stable }(in the sense of A. Katok) has a deviation spectrum determined by the Lyapunov exponents of a {\it renormalization cocycle }over a bundle of {\it basic currents}. Pseudo-homogeneous spaces are a generalization of homogeneous spaces. The motivating non-homogeneous example is given by any punctured Riemann surface carrying a holomorphic differential vanishing only at the punctures. It turns out that renormalizable flows are necessarily parabolic. In fact, the class of renormalizable flows encompasses all parabolic flows which are reasonably well understood, while not much is known for most non-renormalizable parabolic flows, such as generic geodesic flows on flat surfaces with conical singularities.
Our approach unifies and generalizes several classical quantitative equidistribution results such as the Zagier-Sarnak results for periodic horocycles on non-compact hyperbolic surfaces of finite volume \cite{FFone} or number theoretical results on the asymptotic behaviour of theta sums \cite{FFtwo}. \section{Renormalizable flows} \label{section 2} \setzero\vskip-5mm \hspace{5mm } Let $\mathfrak g$ be a finite dimensional real Lie algebra. A $\mathfrak g$-{\it structure }on a manifold $M$ is defined to be a homomorphism $\tau$ from $\mathfrak g$ into the Lie algebra ${\mathcal V}(M)$ of all smooth vector fields on $M$. This notion is well-known in the theory of transformation groups (originated in the work of S. Lie) under the name of `infinitesimal $G$-transformation group' (for a Lie group $G$ with $\mathfrak g$ as Lie algebra). The second fundamental theorem of Lie states that any infinitesimal $G$-transformation group $\tau$ on $M$ can be `integrated' to yield an essentially unique local $G$-transformation group. A $\mathfrak g$-structure $\tau$ will be called {\it faithful }if $\tau$ induces a linear isomorphism from ${\mathfrak g}$ onto $T_x M$, for all $x\in M$. Let $\tau$ be a $\mathfrak g$-structure. For each element $X\in {\mathfrak g}$, the vector field $X_{\tau}:=\tau(X)$ generates a (partially defined) flow $\Phi^X_{\tau}$ on $M$. Let $E_t(X_{\tau})\subset M$ be the closure of the complement of the domain of definition of the map $\Phi^X_{\tau}(t,\cdot)$ at time $t\in {\mathbb R}$. A faithful $\mathfrak g$-structure will be called {\it pseudo-homogeneous }if for every $X\in \mathfrak g$ there exists $t>0$ such that $E_t(X_{\tau}) \cup E_{-t}(X_{\tau})$ has zero (Lebesgue) measure. A manifold $M$ endowed with a pseudo-homogeneous $\mathfrak g$-structure will be called a {\it pseudo-homogeneous }$\mathfrak g$-{\it space}. All homogeneous spaces are pseudo-homogeneous. 
Let ${\mathcal T}_{\mathfrak g}(M)$ be the space of all pseudo-homogeneous $\mathfrak g$-structures on $M$. The automorphism group $\text{Aut}(\mathfrak g)$ acts on ${\mathcal T}_{\mathfrak g}(M)$ by composition on the right. The group $\text{Diff}(M)$ acts on ${\mathcal T}_{\mathfrak g}(M)$ by composition on the left. The spaces \begin{equation} \text{T}_{\mathfrak g}(M):={\mathcal T}_{\mathfrak g}(M)/\text{Diff}_0(M) \,\,,\,\,\,\, {\mathcal M}_{\mathfrak g}(M):=\text{T}_{\mathfrak g}(M)/\Gamma(M) \,, \end{equation} where $\Gamma(M):=\text{Diff}^+(M)/\text{Diff}_0(M)$ is the {\it mapping class group}, will be called respectively the {\it Teichm\"uller space }and the {\it moduli space }of pseudo-homogeneous $\mathfrak g$-structures on $M$. The group $\text{Aut}(\mathfrak g)$ acts on the Teichm\"uller space $\text{T} _{\mathfrak g}(M)$ and on the moduli space ${\mathcal M}_{\mathfrak g}(M)$, since in both cases the action of $\text{Aut}(\mathfrak g)$ on ${\mathcal T}_{\mathfrak g}(M)$ passes to the quotient. Let $\text{Aut}^{(1)}(\mathfrak g)$ be the subgroup of automorphisms with determinant one. An element $X\in \mathfrak g$ will be called {\it a priori renormalizable }if there exists a partially hyperbolic one-parameter subgroup $\{G^X_t\}\subset \text{Aut}^{(1)}(\mathfrak g)$, $t\in {\mathbb R}$ ($t\in {\mathbb Z}$), in general non-unique, with a single (simple) Lyapunov exponent $\mu_X>0$ such that \begin{equation} \label{eq:RN} G^X_t(X)=e^{t\mu_X}\,X \,\,. \end{equation} It follows from the definition that the subset of a priori renormalizable elements of a Lie algebra $\mathfrak g$ is saturated with respect to the action of $\text{Aut}(\mathfrak g)$. The subgroup $\{G^X_t\}$ acts on the Teichm\"uller space and on the moduli space of pseudo-homogeneous $\mathfrak g$-structures as a `renormalization dynamics' for the family of flows $\Phi^X_{\tau}$ generated by the vector fields $\{X_{\tau}\,|\,\tau\in {\mathcal T}_{\mathfrak g}(M)\}$ on $M$. 
It will be called a {\it generalized Teichm\"uller flow (map)}. A flow $\Phi^X_{\tau}$ will be called {\it renormalizable }if $\tau\in {\mathcal M}_{\mathfrak g}(M)$ is a recurrent point for some generalized Teichm\"uller flow (map) $G_t^X$. If $\mu$ is a probability $G^X_t$-invariant measure on the moduli space, then by Poincar\'e recurrence the flow $\Phi^X_{\tau}$ is renormalizable for $\mu$-almost all $\tau\in {\mathcal M}_{\mathfrak g}(M)$. Let $R$ be an inner product on $\mathfrak g$. Every faithful $\mathfrak g$-structure $\tau$ induces a Riemannian metric $R_{\tau}$ of constant curvature on $M$. Let $\omega_{\tau}$ be the volume form of $R_{\tau}$. The total volume function $A:{\mathcal T}_{\mathfrak g}(M)\to {\mathbb R}^+ \cup \{+\infty\}$ is $\text{Diff}^+(M)$-invariant and $\text{Aut}^{(1)}(\mathfrak g)$-invariant. Hence $A$ is well-defined as an $\text{Aut}^{(1)}(\mathfrak g)$-invariant function on the Teichm\"uller space and on the moduli space. It follows that the subspace of finite-volume $\mathfrak g$-structures has an $\text{Aut}^{(1)} (\mathfrak g)$-invariant stratification by the level hypersurfaces of the total volume function. Since different hypersurfaces are isomorphic up to a dilation, when studying finite-volume spaces it is sufficient to consider the hypersurface of volume-one $\mathfrak g$-structures: \begin{equation} \text{T}_{\mathfrak g}^{(1)}(M):=\text{T}_{\mathfrak g}(M)\cap A^{-1}(1) \,\,,\,\,\,\, {\mathcal M}_{\mathfrak g}^{(1)}(M):= {\mathcal M}_{\mathfrak g}(M) \cap A^{-1}(1) \,. \end{equation} Let $\tau$ be a faithful $\mathfrak g$-structure and let $X\in\mathfrak g$. If the linear map $ad_X$ on $\mathfrak g$ has zero trace, the flow $\Phi^X_{\tau}$ preserves the volume form $\omega_{\tau}$ and $X_{\tau}$ defines a skew-symmetric operator on $L^2(M,\omega_{\tau})$ with domain $C_0^{\infty}(M)$. If $\tau$ is pseudo-homogeneous, by E. Nelson's criterion \cite{N}, $X_{\tau}$ is essentially skew-adjoint.
It turns out that any a priori renormalizable element $X\in \mathfrak g$ is {\it nilpotent}, in the sense that all eigenvalues of the linear map $ad_X$ are equal to zero, hence the flow $\Phi^X_{\tau}$ is volume preserving and parabolic. In all the examples we have considered, the Lie algebra $\mathfrak g$ is {\it traceless}, in the sense that for every element $X\in \mathfrak g$, the linear map $\text {ad}_X$ has vanishing trace. In this case, any pseudo-homogeneous $\mathfrak g$-structure induces a representation of the Lie algebra $\mathfrak g$ by essentially skew-adjoint operators on the Hilbert space $L^2(M,\omega_{\tau})$ with common invariant domain $C_0^{\infty}(M)$. \section{Examples} \label{section 3} \setzero\vskip-5mm \hspace{5mm } Homogeneous spaces provide a wide class of examples. Let $G$ be a finite dimensional (non-compact) Lie group with Lie algebra $\mathfrak g$ and let $M=G/\Gamma$ be a (compact) homogeneous space. The Teichm\"uller space $\text{T}_G(M)\subset \text{T}_{\mathfrak g}(M)$ and the moduli space ${\mathcal M}_G(M)\subset {\mathcal M}_{\mathfrak g}(M)$ of all homogeneous $G$-space structures on $M$ are respectively isomorphic to the Lie group $\text{Aut}(G)$ and to the homogeneous space $\text{Aut}(G)/\text{Aut}(G, \Gamma)$, where $\text{Aut}(G,\Gamma)<\text{Aut}(G)$ is the subgroup of automorphisms which stabilize the lattice $\Gamma$. The Teichm\"uller and moduli spaces $\text{T}^{(1)}_G(M)$ and ${\mathcal M}^{(1)}_G(M)$ of homogeneous volume-one $G$-space structures are respectively isomorphic to the subgroup $\text{Aut}^{(1)}(G)$ of orientation preserving, volume preserving automorphisms and to the homogeneous space $\text{Aut}^{(1)}(G) /\text{Aut}^{(1)}(G,\Gamma)$. In the {\it Abelian }case ${\mathfrak g}={\mathbb R}^n$, any $X\in {\mathbb R}^n$ is a priori renormalizable. 
In fact, the group $\text{Aut}^{(1)} ({\mathbb R}^n)=\text{SL}(n,{\mathbb R})$ acts transitively on ${\mathbb R}^n$ and $X_1=(1,0,...,0)$ is renormalized by the one-parameter group $G_t:=\text{diag}(e^t,e^{-t/(n-1)},...,e^{-t/(n-1)})\subset \text{SL}(n,{\mathbb R})$. Finite volume Abelian homogeneous spaces are diffeomorphic to $n$-dimensional tori ${\mathbb T}^n$. The generalized Teichm\"uller flow $G_t$ on the moduli space of all volume-one Abelian homogeneous structures on ${\mathbb T}^n$ is a volume preserving Anosov flow on the finite-volume non-compact manifold $\text{SL}(n,\mathbb R)/\text{SL}(n,\mathbb Z)$. Hence, in this case, almost all homogeneous flows are renormalizable, by the Poincar\'e recurrence theorem. The dynamics of the flow $G_t$ has been investigated in depth by D. Kleinbock and G. Margulis in connection with the theory of Diophantine approximations. In the {\it semi-simple }case, let ${\mathfrak g}={\mathfrak s}{\mathfrak l} (2,\mathbb R)$ be the unique non-compact $3$-dimensional simple Lie algebra. There is a basis $\{H,H^{\perp},X\}$ with commutation relations $[X,H]=H$, $[X,H^{\perp}] =-H^{\perp}$ and $[H,H^{\perp}]=2X$. The elements $H$, $H^{\perp}$ are renormalized by the one-parameter group $G_t:=\text{diag}(e^t,e^{-t},1)\subset \text{Aut}^{(1)}(\mathfrak g)$, while $X$ is not a priori renormalizable. The unit tangent bundle of any hyperbolic surface $S$ can be identified with a homogeneous $\mathfrak g$-space $M:=\text {PSL}(2,{\mathbb R})/\Gamma$. The elements $H$, $H^{\perp}\in {\mathfrak g}$ generate the horocycle flows and the element $X$ generates the geodesic flow on $M$. Since $G_t$ is a group of {\it inner }automorphisms (it is in fact generated by the adjoint action of the geodesic vector field $X$), every point of the moduli space is fixed under $G_t$. Hence horocycle flows are renormalizable on every homogeneous space $\text{PSL} (2,{\mathbb R})/\Gamma$. 
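For concreteness, the commutation relations above can be checked in the standard realization of ${\mathfrak s}{\mathfrak l}(2,{\mathbb R})$ by traceless $2\times 2$ matrices; this routine verification is ours, not part of the original text, and assumes the {\tt amsmath} package for the matrix environments:

```latex
% Standard 2x2 realization of the basis {H, H^perp, X} of sl(2,R):
\[
H=\begin{pmatrix}0&1\\0&0\end{pmatrix},\qquad
H^{\perp}=\begin{pmatrix}0&0\\1&0\end{pmatrix},\qquad
X=\begin{pmatrix}1/2&0\\0&-1/2\end{pmatrix},
\]
% so that the stated relations hold:
\[
[X,H]=H,\qquad
[X,H^{\perp}]=-H^{\perp},\qquad
[H,H^{\perp}]=\begin{pmatrix}1&0\\0&-1\end{pmatrix}=2X.
\]
```

In this realization $\text{Ad}(\exp tX)$ acts on the ordered basis $(H,H^{\perp},X)$ exactly as $\text{diag}(e^t,e^{-t},1)$, which makes explicit why $G_t$ consists of inner automorphisms.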
In the {\it nilpotent}, non-Abelian case, let $\mathfrak n$ be the Heisenberg Lie algebra, spanned by elements $\{X,X^{\perp},Z\}$ such that $[X,X^{\perp}] =Z$ and $Z$ is a generator of the one-dimensional center $Z_{\mathfrak n}$. The element $X$ is renormalized by the one-parameter subgroup $G_t:=\text{diag} (e^t,e^{-t},1)\subset \text{Aut}^{(1)}({\mathfrak n})$. Since the group $\text{Aut}({\mathfrak n})$ acts transitively on ${\mathfrak n}\setminus Z_{\mathfrak n}$, every $Y\in{\mathfrak n}\setminus Z_{\mathfrak n}$ is a priori renormalizable, while the elements of the center are not. A compact nilmanifold modeled on the Heisenberg group $N$ is a homogeneous space $M=N/\Gamma$, where $\Gamma$ is a co-compact lattice. These spaces are topologically circle bundles over ${\mathbb T}^2$, classified by their Euler number. The moduli space ${\mathcal M}^{(1)}_N(M)$ of volume-one homogeneous $\mathfrak n$-structures on $M$ is a $5$-dimensional finite-volume non-compact orbifold which fibers over the modular surface $\text{SL}(2, {\mathbb R})/\text{SL}(2,{\mathbb Z})$ with fiber ${\mathbb T}^2$. The generalized Teichm\"uller flow is an Anosov flow on ${\mathcal M}^{(1)}_N(M)$ \cite{FFtwo}. The motivation for our definition of a pseudo-homogeneous space comes from the theory of Riemann surfaces of higher genus. 
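Before turning to the surface case, we note (again as a routine check, not part of the original text) that the Heisenberg relations above are realized by strictly upper triangular $3\times 3$ matrices, assuming the {\tt amsmath} package:

```latex
\[
X=\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix},\qquad
X^{\perp}=\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix},\qquad
Z=\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix},
\]
% so that X X^perp = Z, X^perp X = 0, and Z commutes with both:
\[
[X,X^{\perp}]=Z,\qquad [X,Z]=[X^{\perp},Z]=0.
\]
```

The linear map $X\mapsto e^tX$, $X^{\perp}\mapsto e^{-t}X^{\perp}$, $Z\mapsto Z$ preserves these relations, since $[e^tX,e^{-t}X^{\perp}]=Z$; this is the renormalizing automorphism group $G_t$ above.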
Any holomorphic (Abelian) differential $h$ on a Riemann surface $S$ of genus $g\ge 2$, vanishing at $Z_h\subset S$, induces a (non-unique) pseudo-homogeneous ${\mathbb R}^2$-structure on the open manifold $M_h:=S\setminus Z_h$. In fact, the frame $\{X,X^{\perp}\}$ of $TS|M_h$ uniquely determined by the conditions \begin{equation} \frac{\sqrt{-1}}{2}\, \imath_X\,(h\wedge {\bar h})\,=\, \Im(h) \,\,\,\,, \,\,\,\, \frac{\sqrt{-1}}{2}\, \imath_{X^{\perp}} \,(h\wedge {\bar h})\,=\,-\Re(h) \end{equation} satisfies the Abelian commutation relation $[X,X^{\perp}]=0$, and the homomorphism $\tau_h:{\mathbb R}^2\to {\mathcal V}(M_h)$ such that $\tau_h (1,0)=X$, $\tau_h(0,1)=X^{\perp}$ is a pseudo-homogeneous ${\mathbb R}^2$-structure on $M_h$. Let $Z\subset S$ be a given subset of cardinality $\sigma\in{\mathbb N}$ and let $\kappa=(k_1,...,k_{\sigma})\in ({\mathbb Z}^+)^{\sigma}$ with $\sum k_i=2g-2$. Let ${\mathcal H}_{\kappa}(S,Z)$ be the space of Abelian differentials $h$ with $Z_h=Z$ and zeroes of multiplicities $(k_1,...,k_{\sigma})$. The projection of the set $\{\tau_h\in {\mathcal T}_{{\mathbb R}^2}(M)\,|\,h\in {\mathcal H}_{\kappa} (S,Z)\}$ into the moduli space ${\mathcal M}_{{\mathbb R}^2}(M)$ of pseudo-homogeneous ${\mathbb R}^2$-structures on $M:=S\setminus Z$ is isomorphic to a stratum ${\mathcal H}({\kappa})$ of the moduli space of Abelian differentials on $S$. The flow induced on ${\mathcal H}({\kappa})$ by the one-parameter group of automorphisms $G_t=\text{diag}(e^t,e^{-t}) \subset \text{SL}(2,\mathbb R)$ coincides with the Teichm\"uller flow on the stratum ${\mathcal H}({\kappa})$. \section{Cohomological equations} \label{section 4} \setzero\vskip-5mm \hspace{5mm } Let $(\mathfrak g, R)$ be a finite dimensional Lie algebra endowed with an inner product. 
Any pseudo-homogeneous $\mathfrak g$-structure $\tau$ on a manifold $M$ induces a {\it Sobolev filtration }$\{\text{W}^s_{\tau} (M)\}_{s\ge 0}$ on the space $W^0_{\tau}(M):=L^2(M,\omega_{\tau})$ of square-integrable functions. Let $\triangle_{\tau}$ be the non-negative {\it Laplace-Beltrami operator }of the Riemannian metric $R_{\tau}$ on $M$. The Laplacian is densely defined and symmetric on the Hilbert space $W^0_{\tau}(M)$ with domain $C_0^{\infty}(M)$, but it is not in general essentially self-adjoint. In fact, if $\mathfrak g$ is traceless, by a theorem of E. Nelson \cite{N}, $\triangle _{\tau}$ is essentially self-adjoint if and only if the representation $\tau$ of the Lie algebra $\mathfrak g$ on $W^0_{\tau}(M)$ by essentially skew-adjoint operators induces a unitary representation of a Lie group. Let then $\bar \triangle_{\tau}$ be the {\it Friedrichs extension }of $\triangle_{\tau}$. The Sobolev space $W^s_{\tau}(M)$, $s>0$, is defined as the maximal domain of the operator $(\text{I}+{\bar\triangle}_{\tau})^{s/2}$ endowed with the norm \begin{equation} \|f\|_{s,\tau}:= \|(\text{I}+\bar{\triangle}_{\tau})^{s/2}f\|_{0,\tau}\,. \end{equation} The Sobolev spaces $W^{-s}_{\tau}(M)$ are defined as the duals of the Hilbert spaces $W^s_{\tau}(M)$, for all $s > 0$. Let $C_B^0(M)$ be the space of continuous bounded functions on $M$. The pseudo-homogeneous space $(M,\tau)$ will be called of {\it bounded type }if there is a continuous (Sobolev) embedding $W^s_{\tau}(M)\subset C_B^0(M)$ for all $s>\text{dim}(M)/2$. The bounded-type condition is essentially a geometric property of the pseudo-homogeneous structure. Let $X\in {\mathfrak g}$. Following A. Katok, the space $W^s_{\tau}(M)$ is called $W^t_{\tau}(M)$-{\it stable }with respect to the flow $\Phi^X_{\tau}$ if the subspace \begin{equation} R^{s,t}(X_{\tau}):= \{ f\in W^s_{\tau}(M)\,|\, f=X_{\tau}u\,\,,\,\,\,\, u\in W^t_{\tau}(M)\} \end{equation} is closed in $W^s_{\tau}(M)$. 
The flow $\Phi^X_{\tau}$ will be called {\it tame }(of degree $\ell>0$) if $W^s_{\tau}(M)$ is $W^{s-\ell}_{\tau}(M)$-stable with respect to $\Phi^X_{\tau}$ for all $s>\ell$. In all the examples of \S 3, generic renormalizable flows are tame. In particular, it is well known that generic toral flows are tame; horocycle flows and generic nilpotent flows on $3$-dimensional compact nilmanifolds were proved tame of any degree $\ell>1$ in \cite{FFone}, \cite{FFtwo}; generic non-exact Hamiltonian flows on higher genus surfaces were proved tame in \cite{Fone}. These results are based on the appropriate harmonic analysis: in the homogeneous cases, the theory of unitary representations for the Lie group $\text{SL}(2,{\mathbb R})$ \cite{FFone} and the Heisenberg group \cite{FFtwo}; in the more difficult non-homogeneous case of higher genus surfaces, the theory of boundary behaviour of holomorphic functions on the unit disk plays a crucial role \cite{Ftwo}. If the Sobolev space $W^s_{\tau}(M)$ is stable with respect to the flow $\Phi^X_{\tau}$, the closed range $R^{s,t}(X_{\tau})$ of the operator $X_{\tau}$ coincides with the subspace of functions in $W^s_{\tau}(M)$ annihilated by the distributional kernel ${\mathcal I}^s(X_{\tau}) \subset W^{-s}_{\tau}(M)$ of $X_{\tau}$, which is a space of $X_{\tau}$-{\it invariant distributions}. Let $X$ be any smooth vector field on a manifold $M$. A distribution ${\mathcal D}\in{\mathcal D}'(M)$ is called $X$-invariant if $X{\mathcal D}=0$ in ${\mathcal D}'(M)$. Invariant distributions are in bijective correspondence with (homogeneous) one-dimensional {\it basic currents }for the orbit foliation ${\mathcal F}(X)$ of the flow $\Phi^X$. A one-dimensional basic current $C$ for a foliation $\mathcal F$ on $M$ is a continuous linear functional on the space $\Omega^1_0(M)$ of smooth $1$-forms with compact support such that, for all vector fields $Y$ tangent to $\mathcal F$, \begin{equation} \label{eq:BC} \imath_YC\,=\,{\mathcal L}_YC\,=\,0\,\, (\iff \,\, \imath_YC\,=\,dC\,=\,0)\,. 
\end{equation} It follows from the definitions that the one-dimensional current $C:=\imath_X {\mathcal D}$ is basic for ${\mathcal F}(X)$ if and only if the distribution $\mathcal D$ is $X$-invariant. Let ${\mathcal I}(X)$ be the space of all $X$-invariant distributions and ${\mathcal B}(X)$ be the space of all one-dimensional basic currents for the orbit foliation ${\mathcal F}(X)$. The linear map $\imath_X:{\mathcal I}(X)\to {\mathcal B}(X)$ is bijective. Let $(M,\tau)$ be a pseudo-homogeneous space. There is a well-defined Hodge (star) operator and a space $\text{C}^0_{\tau}(M)$ of square-integrable $1$-forms on $M$ associated with the metric $R_{\tau}$. Since the Laplace operator $\triangle_{\tau}$ extends to $\text{C}^0_{\tau} (M)$ with domain $\Omega^1_0(M)$, it is possible to define, as in the case of functions, a Sobolev filtration $\{\text{C}^s_{\tau}(M)\}_{s\ge 0}$ on the space $\text{C}^0_{\tau}(M)$. The Sobolev spaces $\text{C}^{-s}_{\tau} (M)$ are defined as the duals of the Sobolev spaces $\text{C}^s_{\tau}(M)$, for all $s>0$. Let ${\mathcal B}^s(X_{\tau}):={\mathcal B}(X_{\tau})\cap \text{C}^{-s}_{\tau}(M)$ be the subspace of basic currents of Sobolev order $\le s$ for the orbit foliation ${\mathcal F}(X_{\tau})$. The space ${\mathcal B}^s(X_{\tau})$ is the image of ${\mathcal I}^s(X_{\tau}) :={\mathcal I}(X_{\tau})\cap W^{-s}_{\tau}(M)$ under the bijective map $\imath_{X_{\tau}}:{\mathcal I}(X_{\tau})\to {\mathcal B}(X_{\tau})$. In the case of minimal toral flows the space ${\mathcal B}^s(X_{\tau})$ is one-dimensional for all $s\ge 0$ (as all invariant distributions are scalar multiples of the unique invariant probability measure). In the parabolic examples we have studied, ${\mathcal B}^s(X_{\tau})$ has countable dimension, as soon as $s>1/2$, for horocycle flows or generic nilpotent flows, while for generic non-exact Hamiltonian flows on higher genus surfaces the dimension is finite for all $s>0$ and grows linearly with respect to $s>0$. 
This finiteness property seems to be an exceptional low dimensional feature. \section{The renormalization cocycle} \label{section 5} \setzero\vskip-5mm \hspace{5mm } The Sobolev spaces $\text{C}^s_{\tau}(M)$ of one-dimensional currents form a smooth infinite dimensional vector bundle over ${\mathcal T}_{\mathfrak g}(M)$. Such bundles can be endowed with a flat connection with parallel transport given locally by the identity maps $\text{C}^s_{\tau}(M)\to\text{C}^s_{\tau'} (M)$, for any $\tau\approx\tau'\in {\mathcal T}_{\mathfrak g}(M)$. Since the diffeomorphism group $\text{Diff}(M)$ acts on $\text{C}^s_{\tau}(M)$ by push-forward, we can define (orbifold) vector bundles $\text{C}^s_{\mathfrak g} (M)$ over the Teichm\"uller space $\text{T}_{\mathfrak g}(M)$ or the moduli space ${\mathcal M}_{\mathfrak g}(M)$ of pseudo-homogeneous structures on $M$. If $X\in \mathfrak g$ is a priori renormalizable, a generalized Teichm\"uller flow (map) $G^X_t$ can be lifted by parallel transport to a `renormalization cocycle' $R^X_t$ on the bundles of currents $\text{C}^s_{\mathfrak g}(M)$ over the Teichm\"uller space or the moduli space. It follows from the definitions that the sub-bundles ${\mathcal B}^s_{\mathfrak g}(X)\subset \text{C}^s_ {\mathfrak g}(M)$ with fibers the subspaces of basic currents ${\mathcal B}^s (X_{\tau})\subset \text{C}^s_{\tau}(M)$ are $R^X_t$-invariant. It can be proved that, for any $G_t$-ergodic probability measure $\mu$ on the moduli space, if the flows $\Phi^X_{\tau}$ are tame of degree $\ell>0$ for $\mu$-almost all $\tau\in{\mathcal M}_{\mathfrak g}(M)$, then the sub-bundles ${\mathcal B}^s _{\mathfrak g}(X)$ are $\mu$-almost everywhere defined with closed (Hilbert) fibers of constant rank, for all $s>\ell$. 
In the examples considered, with the exception of flows on higher genus surfaces, the Hilbert bundles of basic currents ${\mathcal B}^s_{\mathfrak g} (X)$ are infinite dimensional, and to the author's best knowledge, available Oseledec-type theorems for Hilbert bundles do not apply to the renormalization cocycle. However, in the examples the cocycle does have a well-defined Lyapunov spectrum and an Oseledec decomposition. We are therefore led to formulate the following hypothesis: \noindent $H_1(s)$. The renormalization cocycle $R^X_t$ on the bundle ${\mathcal B}^s_{\mathfrak g}(X)$ over the dynamical system $(G_t^X,\mu)$ has a Lyapunov spectrum $\{\nu_1> ...> \nu_k> ...>0>...\}$ and an Oseledec decomposition \begin{equation} \label{eq:OD} {\mathcal B}^s_{\mathfrak g}(X)=E^s_{\mathfrak g}(\nu_1)\oplus ... \oplus E^s_{\mathfrak g}(\nu_k)\oplus ... \oplus N^s_{\mathfrak g}\,\,, \end{equation} in which the components $E^s_{\mathfrak g}(\nu_k)$ correspond to the Lyapunov exponents $\nu_k>0$, while the component $N^s_{\mathfrak g}$ has a non-positive top Lyapunov exponent. Our result on the existence of a deviation spectrum requires an additional technical hypothesis, verified in our examples. \noindent $H_2(s)$. Let $\gamma_{\tau}^1(p)$ be the one-dimensional current defined by the time $T=1$ orbit-segment of the flow $\Phi^X_{\tau}$ with initial point $p\in M$. $(a)$ The essential supremum of the norm $\|\gamma _{\tau}^1(p)\|_{\tau,s}$ over $p\in M$ is locally bounded for $\tau\in\text {supp}(\mu)\subset {\mathcal M}_{\mathfrak g}(M)$; $(b)$ The orthogonal projections of $\gamma_{\tau}^1(p)$ on all subspaces $E^s_{\tau}(\nu_k) \subset \text{C}^{-s}_{\tau}(M)$ are non-zero for $\mu$-almost all $\tau \in {\mathcal M}_{\mathfrak g}(M)$ and almost all $p\in M$. 
\noindent {\bf Theorem.} {\it Let $X\in {\mathfrak g}$ be a priori renormalizable and let $\mu$ be a $G_t^X$-invariant Borel probability measure on ${\mathcal M}_{\mathfrak g}(M)$, supported on a stratum of bounded-type $\mathfrak g$-structures. If the flow $\Phi_{\tau}^X$ is tame of degree $\ell>0$ and the hypotheses $H_1(s)$, $H_2(s)$ are verified for $s>\ell+\text{dim}(M)/2$, then for $\mu$-almost all $\tau\in {\mathcal M}_{\mathfrak g}(M)$ the flow $\Phi_{\tau}^X$ has a deviation spectrum with deviation exponents \begin{equation} \nu_1/\mu_X \,>\,...\,>\,\nu_k/\mu_X \,>\,...\,>\,0 \end{equation} and multiplicities given by the decomposition (\ref{eq:OD}) of the renormalization cocycle.} In the homogeneous examples, the Lyapunov spectrum of the renormalization cocycle is computed explicitly in every irreducible unitary representation of the structural Lie group. In the horocycle case, the existence of an Oseledec decomposition (\ref{eq:OD}) is equivalent to the statement that the space of horocycle-invariant distributions is spanned by (generalized) eigenvectors of the geodesic flow, well-known in the representation theory of semi-simple Lie groups as {\it conical distributions }\cite{H}. In the non-homogeneous case of higher genus surfaces, Oseledec's theorem applies since the bundles ${\mathcal B}^s_{\mathfrak g}(X)$ are finite dimensional. We have found in all examples a surprising heuristic relation between the Lyapunov exponents of the renormalization cocycle and the Sobolev regularity of basic currents (or equivalently of invariant distributions): the subspaces $E^s_{\tau}(\nu_k)$ are generated by basic currents of {\it Sobolev order }$1 -\nu_k/\mu_X\ge 0$. The Sobolev order of a one-dimensional current $C$ is defined as the infimum of all $s>0$ such that $C\in \text{C}^{-s}_{\tau}(M)$. 
In the special case of non-exact Hamiltonian flows on higher genus surfaces the Lyapunov exponents of the renormalization cocycle are related to those of the Teichm\"uller flow. In fact, let $S$ be a compact orientable surface of genus $g\ge 2$ and let ${\mathcal H}({\kappa})$ be a stratum of Abelian differentials vanishing at $Z\subset S$. Let ${\mathcal B}_{\kappa}(X) \subset {\mathcal B}_{{\mathbb R}^2}(X)$ be the measurable bundle of basic currents over ${\mathcal H}({\kappa})\subset {\mathcal M}_{{\mathbb R}^2} (S\setminus Z)$ and let $H^1_{\kappa}(S\setminus Z,\mathbb R)$ be the bundle over ${\mathcal H}({\kappa})$ with fibers isomorphic to the real cohomology $H^1(S\setminus Z,\mathbb R)$. Since basic currents are closed, there exists a {\it cohomology map }$j_{\kappa}:{\mathcal B}_{\kappa}(X) \to H^1_{\kappa} (S\setminus Z,{\mathbb R})$ such that, as proved in \cite{Ftwo}, the restrictions $j_{\kappa}|{\cal B}^s_{\kappa}(X)$ are surjective for all $s\gg 1$ and, for all $s\ge 1$, there are exact sequences \begin{equation} 0\rightarrow \mathbb R \rightarrow {\mathcal B}_{\kappa}^{s-1}(X)\,\, ^{\underrightarrow{\,\,\,\, \delta_{\kappa}\,\,\,\,}}\,\, {\mathcal B} _{\kappa}^s(X)\,\, ^{\underrightarrow{\,\,\,\, j_{\kappa} \,\,\,\,}}\,\, H^1_{\kappa}(S\setminus Z,{\mathbb R})\,\,. \end{equation} The renormalization cocycle $R^X_t$ on ${\mathcal B}_{\kappa}^s(X)$ projects for all $s\gg 1$ onto a cocycle on the cohomology bundle $H^1_{\kappa}(S\setminus Z,{\mathbb R})$, introduced by M. Kontsevich and A. Zorich in order to explain the {\it homological }asymptotic behaviour of orbits of the flow $\Phi_{\tau}^X$ for a generic $\tau\in {\cal H}(\kappa)\subset {\cal M}_{{\mathbb R}^2}(S \setminus Z)$ \cite{K}. 
The Lyapunov exponents of the Kontsevich-Zorich cocycle on $H^1_{\kappa}(S\setminus Z,{\mathbb R})$, \begin{equation} \label{eq:RKZE} \lambda_1=1>\lambda_2\ge \dots \ge\lambda_g\ge \overbrace{0=\cdots= 0}^{\#Z-1} \ge -\lambda_g\ge \dots \ge -\lambda_2>-\lambda_1=-1 \,, \end{equation} are related to the Lyapunov exponents of the Teichm\"uller flow on ${\mathcal H}({\kappa})$ \cite{K}, \cite{Ftwo}. Since the bundle map $\delta_{\kappa}$ shifts Lyapunov exponents by $-1$ and, as conjectured in \cite{K} and proved in \cite{Ftwo}, the Kontsevich-Zorich exponents $\lambda_1=1>\lambda_2\ge \dots \ge\lambda_g$ are non-zero, the strictly positive exponents of the renormalization cocycle coincide with the Kontsevich-Zorich exponents. This reduction explains why in the case of non-exact Hamiltonian flows on surfaces the Lyapunov exponents of the Teichm\"uller flow are related to the deviation exponents for the ergodic averages of smooth functions. \label{lastpage} \end{document}
\begin{document} \title[Concordance and crossing changes]{Concordance, crossing changes, and knots in homology spheres} \author{Christopher W.\ Davis} \address{Department of Mathematics, University of Wisconsin--Eau Claire} \email{[email protected]} \urladdr{www.uwec.edu/daviscw} \date{\today} \subjclass[2010]{57M25} \begin{abstract}Any knot in $S^3$ may be reduced to a slice knot by crossing changes. Indeed, this slice knot can be taken to be the unknot. In this paper we study the question of when the same holds for knots in homology spheres. We show that a knot in a homology sphere is nullhomotopic in a smooth homology ball if and only if that knot is smoothly concordant to a knot which is homotopic to a smoothly slice knot. As a consequence, we prove that the equivalence relation on knots in homology spheres given by cobounding immersed annuli in a homology cobordism is generated by concordance in homology cobordisms together with homotopy in a homology sphere. \end{abstract} \maketitle \section{Introduction and statement of results} Classically, a knot $K$ is an isotopy class of smooth embeddings of the circle, $S^1$, into the 3-sphere, $S^3$. A knot is called \textbf{slice} if it forms the boundary of a smoothly embedded 2-disk $D$ in the 4-ball. This disk $D$ is called a slice disk for $K$. This notion was first considered by Fox and Milnor in 1957 in the study of singularities of surfaces in 4-manifolds \cite{FoMi57, FoMi}. The question of which knots are slice is closely related to local obstructions arising in a surgery-theoretic attempt to classify 4-manifolds \cite{CF}. Since then the question of what knots admit slice disks has been at the heart of the study of 4-manifold topology. While not every knot in $S^3$ is slice, every knot can be transformed into a slice knot by a finite sequence of crossing changes. Indeed that slice knot can be taken to be the unknot. 
In the case of knots in a non-simply connected homology sphere the situation is more subtle. A knot representing a nontrivial class in the fundamental group cannot be reduced to the unknot by any sequence of crossing changes. The main goal of this paper is to ask when a knot in a homology sphere can be homotoped to a new knot which bounds a smoothly embedded disk in a homology ball. We will consider the following question. \begin{question}\label{quest:main} Let $Y$ be a homology sphere and $K$ be a knot in $Y$. Does there exist a homotopy transforming $K$ to a new knot $K'$ in $Y$ which bounds a smoothly embedded disk in a smooth homology ball bounded by $Y$? \end{question} If one allows for topological homology balls and locally flat embedded disks then the answer to this question is affirmative for all knots. In \cite{AR99}, Austin and Rolfsen prove that any knot in a homology sphere admits a homotopy to a knot with Alexander polynomial 1. By work of Freedman-Quinn \cite[Theorem 11.7B]{FQ} such a knot bounds a locally flat embedded disk in a contractible 4-manifold. In the smooth setting there are obstructions to Question~\ref{quest:main} having an affirmative answer. First, not every homology sphere bounds a homology ball. For example, $Y$ might be the Poincar\'e homology sphere or any other homology sphere with nonzero Rohlin invariant. See \cite[Definition 5.7.16]{KirbyCalculus} for a brief discussion of the Rohlin invariant. Secondly, if $K$ is homotopic in $Y$ to a knot which bounds an embedded disk in a homology ball, then $K$ is nullhomotopic in that homology ball. By work of Daemi \cite[Remark 1.6]{Daemi2018} there exists a knot $K$ in a homology sphere $Y$ such that $Y$ bounds a homology ball and yet $K$ is not nullhomotopic in any homology ball bounded by $Y$. Such a knot cannot be homotopic to a knot which bounds a smoothly embedded disk. We give a name to the property of bounding a smoothly embedded disk in a homology ball. 
Let $Y$ be a homology sphere and $K$ be a knot in $Y$. We say that $(Y,K)$ is \textbf{homology slice} if there exists a smooth homology ball bounded by $Y$ in which $K$ bounds a smoothly embedded disk. Thus, Question~\ref{quest:main} asks whether the homotopy class of $K$ in $Y$ contains a homology slice representative. The notions of sliceness and homology sliceness extend to equivalence relations. Two knots $K$ and $J$ in $S^3$ are called \textbf{concordant} if $K\times\{1\}\subseteq S^3\times\{1\}$ and $J\times\{0\}\subseteq S^3\times\{0\}$ cobound a smoothly embedded annulus in $S^3\times[0,1]$. We call knots $K$ and $J$ in homology spheres $Y$ and $X$ \textbf{homology concordant} if there exists a smooth homology cobordism from $Y$ to $X$ in which $K\subseteq Y$ and $J\subseteq X$ cobound a smoothly embedded annulus. Similar to the relationship between classical concordance and slice knots, the homology concordance class containing the unknot in $S^3$ is precisely the set of homology slice knots. Importantly, homology concordance allows one to compare knots which do not lie in the same 3-manifold. The quotient of the set of knots in homology spheres by homology concordance is the topic of study of \cite{Davis2018,DR2016, HLL2018, Levine2016} amongst others. Our first main result says that every knot in the boundary of a contractible 4-manifold is homology concordant to a knot for which the answer to Question~\ref{quest:main} is affirmative. \begin{theorem}\label{thm:smooth} Let $Y$ be a homology sphere which bounds a smooth contractible 4-manifold. Let $K$ be a knot in $Y$. There exists a knot $K'$ in a homology sphere $Y'$ such that $(Y,K)$ is homology concordant to $(Y',K')$ and $K'$ is homotopic in $Y'$ to a third knot $K''$ which bounds a smoothly embedded disk in a smooth contractible 4-manifold bounded by $Y'$. 
\end{theorem} Said another way, if we let $\simeq_h^3$ and $\simeq_c$ denote homotopy in a homology sphere and homology concordance respectively and we let $U$ denote the unknot in $S^3$ then Theorem~\ref{thm:smooth} concludes $$ (Y,K)\simeq_c (Y',K')\simeq_h^3 (Y',K'')\simeq_c (S^3, U). $$ Thus, the equivalence relation generated by homology concordance and homotopy equates every knot in the boundary of a contractible 4-manifold with the unknot. If one allows homology spheres which bound homology balls but not contractible 4-manifolds then there is an obstruction to a knot being related to a homology slice knot by a sequence of homotopies and homology concordances. Indeed, by \cite[Remark 1.6]{Daemi2018} there exist knots in homology spheres which are not nullhomotopic in any homology ball. Such a knot cannot be reduced to a smoothly slice knot by any sequence of homotopies and homology concordances. The following theorem reveals that this is the only obstruction. \begin{theorem}\label{thm:smoothII} Let $Y$ be a homology sphere which bounds a smooth homology ball $W$. Let $K$ be a knot in $Y$ which is nullhomotopic in $W$. There exists a knot $K'$ in a homology sphere $Y'$ such that $(Y,K)$ is homology concordant to $(Y',K')$ and $K'$ is homotopic in $Y'$ to a third knot $K''$ which bounds a smoothly embedded disk in a smooth homology ball bounded by $Y'$. \end{theorem} \begin{remark} In \cite[Proposition 2.1]{StrleOwens2015} Strle and Owens prove a quantitative version of Theorem~\ref{thm:smooth} for knots in $S^3$. They show that if a knot in $S^3$ bounds an immersed disk in the 4-ball which has $n$ self-intersections, then $K$ is concordant to another knot $K'$ (in $S^3$) which can be reduced to a slice knot after $n$ crossing changes. While we will not do so in this document, by marrying their techniques with ours we expect the same can be proven of knots in homology spheres. 
\end{remark} Theorems~\ref{thm:smooth} and \ref{thm:smoothII} both conclude that a knot is homology concordant to a knot for which the answer to Question~\ref{quest:main} is affirmative. In the case of knots which are nullhomotopic in a homology ball admitting no 1-handles, work of Kojima \cite{Kojima79} implies that we can do away with the concordance, answering Question~\ref{quest:main} in the affirmative. We prove the same result when the homology ball admits a handle structure with no 3-handles. \begin{theorem}\label{thm:smoothIII} Let $Y$ be a homology sphere which bounds a smooth homology ball $W$ which has a handle structure with no 3-handles. Let $K$ be a knot in $Y$ which is nullhomotopic in $W$. Then $K$ is homotopic in $Y$ to a knot $K'$ which bounds a smoothly embedded disk in $W$. \end{theorem} As an application consider the knot in a homology sphere $(Y,K)$ of Figure~\ref{fig:LevineExample}. By \cite[Theorem 1.1]{Levine2016} this knot does not bound any piecewise linear embedded disk in any smooth homology ball. As a consequence $(Y,K)$ is not homology concordant to any knot in $S^3$. However, notice that $Y$ bounds a contractible 4-manifold which has one 0-handle, one 1-handle, one 2-handle, and most importantly no 3-handles. Thus, by Theorem~\ref{thm:smoothIII}, $K$ is homotopic to another knot in $Y$ which is homology slice. \begin{figure} \caption{A knot $K$ in the boundary of a contractible 4-manifold. For a suitable knot $T$ this knot is not homology concordant to any knot in $S^3$ \cite[Theorem 1.1]{Levine2016}.} \label{fig:LevineExample} \end{figure} There is an extension of the equivalence relations $\simeq_h^3$ and $\simeq_c$. Given knots $K$ and $J$ in homology spheres $Y$ and $X$, if there exists a smooth homology cobordism from $Y$ to $X$ in which $K$ and $J$ are homotopic then we write $(Y,K)\simeq_h^4(X,J)$. 
Equivalently and more geometrically, we say $(Y,K)\simeq_h^4(X,J)$ if there exists a smooth homology cobordism from $Y$ to $X$ in which $K$ and $J$ cobound an immersed annulus. It is clear that if $(Y,K)$ and $(X,J)$ are related by a sequence of homotopies and homology concordances then they are related under $\simeq_h^4$. The following theorem reveals the converse: if $(Y,K)\simeq_h^4(X,J)$ then $(Y,K)$ is related to $(X,J)$ by a sequence consisting of one homotopy and two homology concordances. Thus, the equivalence relation $\simeq_h^4$ is generated by homotopy and homology concordance. \begin{theorem}\label{thm:relative} Let $(Y,K)$ and $(X,J)$ be knots in homology spheres. If there exists a smooth homology cobordism from $Y$ to $X$ in which $K$ is homotopic to $J$ then there exist knots $K'$ and $J'$ in some homology sphere $Z$ such that $$(Y,K)\simeq_c(Z,K')\simeq_h^3(Z,J')\simeq_c (X,J).$$ \end{theorem} \subsection*{Outline of paper} In Section~\ref{sect:no 3-handles} we consider the case that a knot $K$ in a homology sphere $Y$ is nullhomotopic in a homology ball which has no 3-handles and prove Theorem~\ref{thm:smoothIII}. In Section~\ref{sect:3-handles} we manipulate handle structures of homology balls in order to separate the 1- and 3-handles. We go on to prove Theorems~\ref{thm:smooth} and \ref{thm:smoothII}. In Section~\ref{sect:relative} we take advantage of the group structure on $\mathcal{C}$ to prove Theorem~\ref{thm:relative}. \subsection*{Acknowledgments} The author would like to thank MinHoon Kim, Arunima Ray, JungHwan Park, Paolo Aceto, Kent Orr, Aliakbar Daemi, and Adam Levine for helpful conversations at various stages of the development of the ideas leading to this paper. 
\section{In the absence of 3-handles.}\label{sect:no 3-handles} Throughout this paper, we make extensive use of handle decompositions of smooth manifolds. A good reference is \cite{KirbyCalculus}. We begin with the following proposition revealing that if $W$ is a 4-manifold, $Y\subseteq \partial W$, and $(W,Y)$ has no relative 1-handles, then every homotopy class in $\ker(\pi_1(Y)\to \pi_1(W))$ is represented by a knot which bounds a smoothly embedded disk in $W$. \begin{proposition}\label{prop:no 3-handles} Let $W$ be a smooth, connected, compact, orientable 4-manifold and $Y$ be a submanifold of $\partial W$. Suppose that $(W,Y)$ admits a handle structure with no 1-handles. Let $\gamma\in \ker(\pi_1(Y)\to \pi_1(W))$. There exists a knot $K$ in the homotopy class $\gamma$ which bounds a smoothly embedded disk in $W$. \end{proposition} \begin{proof} Let $\beta_1,\dots, \beta_m\subseteq Y$ be the cores of the attaching regions of the 2-handles of $(W, Y)$ regarded as framed knots. Notice that if we take any collection of framed pushoffs of the various $\beta_i$, then the resulting link bounds a collection of disjoint smoothly embedded disks in $W$. Indeed, these disks are pushoffs of the cores of the 2-handles. Pick a base-point $q$ in $Y$. After a choice of basing arcs, the homotopy classes $[\beta_1],\dots, [\beta_m]$ normally generate $\ker(\pi_1(Y,q)\to \pi_1(W,q))$. By assumption, $\gamma$ is in $\ker(\pi_1(Y,q)\to \pi_1(W,q))$. We conclude that $\gamma$ is a product of conjugates of the $\beta_i$: $$\gamma = \displaystyle \prod _{k=1}^n [c_k\beta_{i_k}^{\epsilon_{k}}c_k^{-1}]$$ for some choice of arcs $c_k\subseteq Y$ running from $q$ to a point on $\beta_{i_k}$ and $\epsilon_k = \pm1$. Here $[c_k\beta_{i_k}^{\epsilon_{k}}c_k^{-1}]\in \pi_1(Y,q)$ is the homotopy class of the result of following $c_k$, then $\beta_{i_k}$ (or its reverse if $\epsilon_k=-1$), and finally the reverse of $c_k$. 
After a small homotopy we may assume that the various $c_k$ are embedded arcs, are disjoint from each other (except for the point at $q$), and have interiors disjoint from the various $\beta_i$. As a consequence, we may construct a knot $K$ in the homotopy class $\gamma$ by starting with a single unknot based at $q$ and banding it with the pushoffs of $\beta_{i_k}$ (or the reverse $\beta_{i_k}^{-1}$) along bands around the $c_k$. Finally, we build a smoothly embedded disk bounded by $K$ as follows. For each $k=1,\dots, n$ let $\Delta_{k}$ be a pushoff of the core of the 2-handle attached along $\beta_{i_k}$ (or its orientation reverse if $\epsilon_k=-1$). Observe that $\Delta_{1}, \dots, \Delta_{n}$ are disjoint smoothly embedded disks. Band each $\Delta_{k}$ to a single disk bounded by the unknot centered at $q$ along the arc $c_k$. The resulting surface is the required smoothly embedded disk. \end{proof} Theorem~\ref{thm:smoothIII} amounts to a special case of Proposition~\ref{prop:no 3-handles}. \begin{reptheorem}{thm:smoothIII} Let $Y$ be a homology sphere which bounds a smooth homology ball $W$ which has a handle structure with no 3-handles. Let $K$ be a knot in $Y$ which is nullhomotopic in $W$. Then $K$ is homotopic in $Y$ to a knot $K'$ which bounds a smoothly embedded disk in $W$. \end{reptheorem} \begin{proof} Let $Y$ be a smooth homology sphere which bounds a smooth homology ball $W$ which has a handle structure with no 3-handles. Turning this handle structure upside-down, $(W,Y)$ has a handle structure with no 1-handles. Thus, Proposition~\ref{prop:no 3-handles} applies: since the homotopy class of $K$ is in $\ker(\pi_1(Y)\to \pi_1(W))$, we conclude that $K$ is homotopic in $Y$ to a knot which bounds a smoothly embedded disk in $W$. 
\end{proof} \section{Separating 3-handles from 1-handles.}\label{sect:3-handles} In the case of a homology ball $W$ which has 3-handles, we split $W$ into two pieces, separating the 1- and 3-handles. \begin{proposition}\label{prop:decompose} Let $W$ be a smooth homology ball with boundary $\partial W = Y$. Then there exists a smooth homology ball $W'$ with $\partial W' = Y'$ and a smooth homology cobordism $V$ from $Y'$ to $Y$ such that \begin{enumerate} \item $W'\cup_{Y'} V =W$, \item $W'$ has a handle structure with neither $3$- nor $4$-handles, so that $(W',Y')$ has neither $0$- nor $1$-handles, \item $(V,Y')$ has a handle structure with neither $0$- nor $1$-handles, so that $(V,Y)$ has neither $3$- nor $4$-handles. \end{enumerate} If $W$ is a contractible 4-manifold then $W'$ can be arranged to also be contractible. \end{proposition} \begin{proof} Start by picking a handle decomposition of $W$ consisting of a single 0-handle, $n$ 1-handles, $m$ 2-handles, some number of 3-handles, and no 4-handles. Let $W^{(d)}$ denote the union of all handles of dimension at most $d$. Let $\alpha_1,\dots, \alpha_n\subseteq \partial W^{(1)}$ be curves dual to the belt spheres of the 1-handles. The homology classes $[\alpha_1],\dots,[\alpha_n]$ give a basis for $H_1(W^{(1)})\cong\mathbb{Z}^n$. Let $\beta_1,\dots, \beta_m\subseteq \partial W^{(1)}$ be the cores of the attaching regions of the 2-handles. We express the homology classes $[\beta_1],\dots, [\beta_m] \in H_1(W^{(1)})$ as vectors in terms of the basis $[\alpha_1],\dots, [\alpha_n]$ in order to get an $(m\times n)$ presentation matrix, $P$, for $H_1(W)=0$. See for example \cite[Section 4.2]{KirbyCalculus}. As $P$ presents the trivial group, a sequence of row-moves reduces $P$ to an $(m\times n)$ matrix with $1$'s on the main diagonal and $0$'s elsewhere. 
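For concreteness, here is a toy instance of this reduction (our illustration, not part of the original argument): with $n=1$ and $m=2$, suppose $[\beta_1]=2[\alpha_1]$ and $[\beta_2]=[\alpha_1]$ in $H_1(W^{(1)})\cong\mathbb{Z}$, so that $$ P=\begin{pmatrix}2\\ 1\end{pmatrix} \longrightarrow \begin{pmatrix}1\\ 2\end{pmatrix} \longrightarrow \begin{pmatrix}1\\ 0\end{pmatrix}, $$ where the first move swaps the two rows (reordering $\beta_1$ and $\beta_2$) and the second subtracts twice the first row from the second (sliding the new $\beta_2$ twice over the reverse of $\beta_1$). After these moves $[\beta_1]=[\alpha_1]$ and $[\beta_2]=0$, as required. 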
These row-moves in turn can be realized by reordering and reversing some $\beta_i$'s and by sliding some $\beta_i$'s over other $\beta_j$'s. See for example \cite[Section 5.1]{KirbyCalculus}. Thus, after performing these moves, we may assume that in $H_1(W^{(1)})$, $[\beta_i]=[\alpha_i]$ for all $i\le n$ and $[\beta_i]=0$ for all $i>n$. Let $W'$ be the result of adding to $W^{(1)}$ 2-handles along $\beta_1,\dots, \beta_n$. Since $[\beta_i] =[\alpha_i]$ in $H_1(W^{(1)})$ for $i=1,\dots, n$ we see that $H_1(W')=H_2(W')=0$. Because $W'$ has neither 3- nor $4$-handles, $H_3(W') = H_4(W')=0$. Thus, $W'$ is a homology ball. Let $Y'=\partial W'$ and let $V$ be given by starting with $Y'\times[0,1]$, adding 2-handles along $\beta_{n+1},\dots, \beta_m$ and finally adding all 3-handles. Then $W=W'\cup_{Y'} V$. Because $W$ and $W'$ are both homology balls, a Mayer-Vietoris argument reveals that $V$ is a homology cobordism. By its very construction, $(V,Y')$ has only 2- and 3-handles. This gives the desired result when $W$ is a homology ball. Now suppose that $W$ is a contractible 4-manifold. Pick a basepoint $q\in \partial W^{(1)}$. After a choice of basing arcs, we see that $\pi_1(\partial W^{(1)}, q)\cong \pi_1(W^{(1)}, q)$ is the free group on the set of homotopy classes $[\alpha_1],\dots, [\alpha_n]$. Moreover, as $W$ is simply connected, $$\pi_1(\partial W^{(1)}) = \ker\left(\pi_1(\partial W^{(1)},q)\to \pi_1(W,q) = \{0\}\right)$$ is normally generated by the homotopy classes of the cores of the attaching regions of the 2-handles $[\beta_1],\dots, [\beta_m]$, after a choice of basing arcs. Thus, each $\alpha_i$ is homotopic in $\partial W^{(1)}$ to a product of conjugates of the attaching regions: $$ [\alpha_i] = \displaystyle \prod _{k=1}^{\ell_i} [c_{i,k} \beta_{j_{i,k}}^{\epsilon_{i,k}} c_{i,k}^{-1}] $$ for some choices of arcs $c_{i,k}$ running from $q$ to a point on $\beta_{j_{i,k}}$ and $\epsilon_{i,k} = \pm1$. 
After a small homotopy we may assume that the various $c_{i,k}$ are all embedded, have disjoint interiors, and have interiors disjoint from the various $\beta_j$. Similar to the proof of Proposition~\ref{prop:no 3-handles}, we build a knot $A_i$ in $\partial W^{(1)}$ representing the homotopy class $[\alpha_i]$ by starting with a small unknot near $q$ and banding with framed pushoffs of the various $\beta_{j_{i,k}}$ (or their reverses if $\epsilon_{i,k}=-1$) along bands centered on the $c_{i,k}$. Let $A_1,\dots, A_n$ be the resulting knots. By construction, $[A_i] = [\alpha_i]$ in $\pi_1(\partial W^{(1)}, q)$. Since $A_i$ is constructed by starting with the unknot and banding it with the $\beta_{j_{i,k}}$, we may slide each $A_i$ over the various $\beta_{j_{i,k}}$ to reduce the link $A_1\cup \dots\cup A_n$ to the unlink in $\partial W^{(2)}$. Thus, the various $A_i$ bound disjoint disks in $\partial W^{(2)}$. Restrict a framing of these disks to a framing on each $A_i$. Form a new 4-manifold by starting with $W^{(2)}$ and adding 2-handles along the knots $A_1,\dots, A_n$. Because the $A_i$ bound disjoint disks in $\partial W^{(2)}$, the boundary of this new 4-manifold is the connected sum of $\partial W^{(2)}$ with $n$ copies of $S^1\times S^2$. Add 3-handles along these new non-separating 2-spheres. The 2-handles attached along the $A_i$ together with these 3-handles form cancelling pairs \cite[Proposition 4.2.9]{KirbyCalculus}, so that the 4-manifold resulting from adding these 2- and 3-handles to $W^{(2)}$ is diffeomorphic to $W^{(2)}$. Construct a 4-manifold diffeomorphic to $W$ by adding the remaining 3-handles. Thus, we may assume that $W$ has a handle structure with 2-handles attached along framed curves $A_1, \dots, A_n, \beta_1, \dots, \beta_m\subseteq \partial W^{(1)}$, where $[A_1], \dots, [A_n]$ give a free basis for $\pi_1(W^{(1)})$. Let $W'$ be given by adding to $W^{(1)}$ only the 2-handles attached along $A_1,\dots, A_n$. 
It now follows that $W'$ is a simply connected homology ball. Therefore, $W'$ is contractible by a standard application of the Hurewicz theorem and the Whitehead theorem. See for example \cite[Corollary 6.70]{DavisKirk01}. Let $Y'=\partial W'$ and let $V$ be given by starting with $Y'\times[0,1]$, adding 2-handles along $\beta_1,\dots, \beta_m$ and then 3-handles. This gives the desired result. \end{proof} Finally, we prove Theorems \ref{thm:smooth} and \ref{thm:smoothII}. \begin{reptheorem}{thm:smooth} Let $Y$ be a homology sphere which bounds a smooth contractible 4-manifold. Let $K$ be a knot in $Y$. There exists a knot $K'$ in a homology sphere $Y'$ such that $(Y,K)$ is homology concordant to $(Y',K')$ and $K'$ is homotopic in $Y'$ to a third knot $K''$ which bounds a smoothly embedded disk in a smooth contractible 4-manifold bounded by $Y'$. \end{reptheorem} \begin{proof} Let $Y$ be the boundary of a contractible smooth 4-manifold $W$ and let $K$ be a knot in $Y$. Appeal to Proposition~\ref{prop:decompose} to decompose $W$ as $W'\cup_{Y'} V$, where $W'$ is a contractible 4-manifold with boundary $Y'$, $V$ is a homology cobordism from $Y$ to $Y'$, $(V,Y)$ has only relative 1- and 2-handles, and $(W',Y')$ has no 1-handles. As $(V,Y)$ has only 1- and 2-handles, we may realize $V$ by starting with $Y\times[0,1]$ and adding $1$- and $2$-handles on $Y\times\{1\}$. We isotope $K$ slightly to make it disjoint from the attaching regions for these handles. The image of $K\times[0,1]$ in $Y\times[0,1]\subseteq V$ gives a homology concordance from $K$ to some knot $K'$ in $Y'$. Now, $\pi_1(W')$ is trivial, so that $K'$ is nullhomotopic in $W'$. Since $W'$ has no 3-handles, Theorem~\ref{thm:smoothIII} applies and we see that $K'$ is homotopic to some other knot $K''$ in $Y'$ which bounds a smoothly embedded disk in $W'$. \end{proof} \begin{reptheorem}{thm:smoothII} Let $Y$ be a homology sphere which bounds a homology ball $W$. 
Let $K$ be a knot in $Y$ which is nullhomotopic in $W$. There exists a knot $K'$ in a homology sphere $Y'$ such that $(Y,K)$ is homology concordant to $(Y',K')$ and $K'$ is homotopic in $Y'$ to a third knot $K''$ which bounds a smoothly embedded disk in a smooth homology ball bounded by $Y'$. \end{reptheorem} \begin{proof} Let $Y$ be the boundary of a smooth homology ball $W$. The proof begins in the same manner as the proof of Theorem~\ref{thm:smooth}. Appeal to Proposition~\ref{prop:decompose} to decompose $W$ as $W'\cup_{Y'} V$, where $W'$ is a homology ball with boundary $Y'$, $V$ is a homology cobordism from $Y$ to $Y'$, $(V,Y)$ has only relative 1- and 2-handles, and $(W',Y')$ has no 1-handles. As $(V,Y)$ has only 1- and 2-handles, we realize $V$ by starting with $Y\times[0,1]$ and adding $1$- and $2$-handles to $Y\times\{1\}$. We may isotope $K$ slightly so that $K$ is disjoint from the attaching regions. The image of $K\times[0,1]$ in $Y\times[0,1]\subseteq V$ gives a concordance from $K$ to some knot $K'_0$ in $Y'$. From here the proof differs from that of Theorem~\ref{thm:smooth}. As $W'$ is not simply connected, we cannot conclude that $K'_0$ is nullhomotopic in $W'$. Let $K'_+$ be a pushoff of $K_0'$. Pick a basepoint $q\in Y'$ which lies on $K'_+$. Since $K$ is nullhomotopic in $W$ by assumption, $K'_+$ is nullhomotopic in $W$ as well. Therefore, the homotopy class $[K'_+]$ lies in $\ker (\pi_1(V,q)\to \pi_1(W,q))$. Recall that $(W',Y')$ has only 2- and 3-handles. After making a choice of basing arcs, the homotopy classes of the attaching regions for these 2-handles, $\beta_1,\dots, \beta_n\subseteq Y'$, normally generate $\ker (\pi_1(V,q)\to \pi_1(W,q))$. Thus, $[K'_+]$ is equal in $\pi_1(V,q)$ to a product of conjugates of these $\beta_i$. 
More precisely, in $\pi_1(V, q)$, $ [K'_+] = \displaystyle \prod _{k=1}^m [c_k\beta_{i_k}^{\epsilon_k} c_k^{-1}] $ for some choices of embedded arcs $c_k\subseteq V$ running from $q$ to a point on $\beta_{i_k}$. Since $(V,Y')$ has no relative 1-handles, $\pi_1(Y',q)\to \pi_1(V,q)$ is onto and we may assume that each $c_k$ is embedded in $Y'$. With this assumption, $\displaystyle \prod _{k=1}^m [c_k\beta_{i_k}^{\epsilon_k} c_k^{-1}]\in \pi_1(Y', q)$ is a product of conjugates of the attaching regions $\beta_1,\dots, \beta_n$. Thus, $\displaystyle \prod _{k=1}^m [c_k\beta_{i_k}^{\epsilon_k} c_k^{-1}]$ is in $\ker(\pi_1(Y',q)\to \pi_1(W',q))$. Theorem~\ref{thm:smoothIII} concludes that $\displaystyle \prod _{k=1}^m [c_k\beta_{i_k}^{\epsilon_k} c_k^{-1}] \in \pi_1(Y',q)$ is represented by a knot $K''$ in $Y'$ which bounds a smoothly embedded disk in $W'$. Summarizing the proof so far, there is a smoothly embedded annulus $C$ in $V$ bounded by $K\subseteq Y$ and $K_0'\subseteq Y'$. In turn, $K_0'$ is homotopic in $V$ to another knot $K''\subseteq Y'$ which bounds a smoothly embedded disk in $W'$. It remains to alter the annulus $C$ and the knot $K_0'$ to arrange that the homotopy from $K_0'$ to $K''$ lies in $Y'$. Let $\nu(K)\cong K\times D^2$ be a product neighborhood of $K$ in $Y$, $\nu(C)$ be the image of $\nu(K)\times[0,1]$ in $ Y\times[0,1]\subseteq V$, and $\nu(K_0') = \nu(C)\cap Y'$. Choosing these tubular neighborhoods small enough, $K'_+$ is disjoint from $\nu(K_0')$. As $[K'_+]=[K'']$ in $\pi_1(V,q)$, it follows that the product $[K'_+]^{-1}[K'']$ is nullhomotopic in $V$. Thus, $[K'_+]^{-1}[K'']\in \ker(\pi_1(V-\nu(C), q)\to \pi_1(V, q))$, which is normally generated by the homotopy class of a meridian $m(K'_0)\subseteq Y'$ of $K'_0$ based at $q$. 
Thus, we conclude that there is a product of conjugates of meridians of $K'_0$, $\delta = \displaystyle \prod _{k} [d_k m(K'_0)^{\epsilon_k} d_k^{-1}]$, with each $d_k$ an embedded arc in $V-\nu(C)$, such that $[K'_+]^{-1}[K'']\delta^{-1}$ is nullhomotopic in $V-\nu(C)$. As $\nu(C)$ is the image of $\nu(K)\times[0,1]$ in $Y\times[0,1]\subseteq V$ and $(V,Y)$ has only 1- and 2-handles, it follows that $(V-\nu(C),Y-\nu(K))$ has only 1- and 2-handles. Turning this handle structure upside-down, $(V-\nu(C), Y'-\nu(K'_0))$ has only relative 2- and 3-handles. In particular, $\pi_1(Y'-\nu(K_0'))\to \pi_1(V-\nu(C))$ is onto. Thus, we may assume that each $d_k$ lies in $Y'-\nu(K_0')$. As a consequence, $\delta\in \pi_1(Y'-\nu(K_0'), q)$. We now have that $[K'_+]^{-1} [K'']\delta^{-1}$ is in $\ker(\pi_1(Y'-\nu(K_0')) \to \pi_1(V-\nu(C)))$. Again making use of the fact that $(V-\nu(C), Y'-\nu(K'_0))$ has no 1-handles, Proposition~\ref{prop:no 3-handles} implies that the homotopy class $[K'_+]^{-1} [K'']\delta^{-1}\in \pi_1(Y'-\nu(K_0'))$ contains a knot $J$ which bounds a smoothly embedded disk $D_J$ in $V-\nu(C)$. The disk $D_J\subseteq V$ is disjoint from $C$. We band $C$ to $D_J$ along an embedded arc in $Y'$ and obtain a homology concordance from $K$ to a new knot $K'$ with $[K'] = [K'_+][J]$ in $\pi_1(Y',q)$. Expanding in $\pi_1(Y',q)$, $$ [K'] = [K'_+][J] = [K'_+]\left([K'_+]^{-1} [K'']\delta^{-1}\right) = [K'']\delta^{-1}. $$ As $\delta$ is a product of conjugates of meridians of $K_0'$, $\delta$ is nullhomotopic in $Y'$. Thus, in $\pi_1(Y',q)$, $[K'] = [K'']$. Summarizing, there is a homology cobordism $V$ from $Y$ to $Y'$ in which the knots $K$ and $K'$ cobound a smoothly embedded annulus. The knot $K'\subseteq Y'$ is homotopic in $Y'$ to a third knot $K''$, which bounds a smoothly embedded disk in the homology ball $W'$. This completes the proof. 
\end{proof} \section{The equivalence relation generated by homotopy and homology concordance}\label{sect:relative} Next we set our eyes on Theorem~\ref{thm:relative}, which concludes that two knots in possibly different homology spheres cobound an immersed annulus in a homology cobordism if and only if they are related by a sequence of homology concordances and homotopies. Recall the following notations from the introduction: \begin{itemize} \item $(X,K)\simeq_h (X,J)$ means $K$ and $J$ cobound an immersed annulus in the homology sphere $X$. \item $(X,K)\simeq_c (Y,J)$ means there exists a homology cobordism from $X$ to $Y$ in which $K$ and $J$ cobound a smoothly embedded annulus. \item $(X,K)\simeq_h^4 (Y,J)$ means there exists a homology cobordism from $X$ to $Y$ in which $K$ and $J$ cobound an immersed annulus. \end{itemize} First we check that all of these relations are well defined under connected sum of pairs. Recall that the connected sum $(X,K)\#(Y,J) = (X\# Y, K\# J)$ is given by picking points $p\in K$ and $q\in J$ and removing small neighborhoods $\nu(p)\subseteq X$ and $\nu(q)\subseteq Y$ of $p$ and $q$, so that $K-\nu(p)$ and $J-\nu(q)$ are properly embedded arcs from a point $p_+$ to a point $p_-$ and from $q_+$ to $q_-$. We form the connected sum $X\# Y$ by gluing these two spherical boundaries together along an orientation reversing diffeomorphism which sends $p_+$ to $q_-$ and $p_-$ to $q_+$. The connected sum $K\# J$ is given by gluing together $K-\nu(p)$ and $J-\nu(q)$. \begin{proposition}\label{prop: well defined} Suppose that $(Y,K)$, $(Y',K')$, and $(X,J)$ are knots in homology spheres. For any $\simeq \in \{\simeq_h, \simeq_c, \simeq_h^4\}$, if $(Y,K)\simeq (Y',K')$ then $(Y,K)\#(X,J)\simeq (Y',K')\#(X,J)$. (In the case of $\simeq_h$ we assume $Y=Y'$.) 
\end{proposition} \begin{proof} The proofs in the cases of $\simeq_c$ and $\simeq_h^4$ are nearly identical. Suppose that $(Y,K)$, $(Y',K')$, and $(X,J)$ are knots in homology spheres. Let $W$ be a homology cobordism from $Y$ to $Y'$ in which $A$ is an embedded (or immersed, in the case of $\simeq_h^4$) annulus bounded by $K$ and $K'$. Let $\alpha$ be an embedded curve in $A$ running from a point on $K$ to a point on $K'$. If $A$ is immersed, then $\alpha$ is chosen to be disjoint from all self-intersection points of $A$. Let $p$ be a point on $J$. Glue together $W-\nu(\alpha)$ and $(X-\nu(p))\times[0,1]$ to get a homology cobordism $V$ from $Y\# X$ to $Y'\# X$. Let $A'$ be the result of gluing $A-\nu(\alpha)$ to $(J-\nu(p))\times[0,1]$ in this homology cobordism. Then $A'$ is an embedded (or immersed, if $A$ is immersed) annulus bounded by $K\#J$ and $K'\#J$. Thus, if $(Y,K)\simeq_c (Y',K')$ then $(Y\#X, K\#J)\simeq_c (Y'\#X, K'\#J)$, and if $(Y,K)\simeq_h^4 (Y',K')$ then $(Y\#X, K\#J)\simeq_h^4 (Y'\#X, K'\#J)$. Suppose now that $(Y,K)\simeq_h(Y',K')$, so that $Y=Y'$ and $K$ is homotopic to $K'$ in $Y$. We may assume that the homotopy is constant on a small neighborhood of some point $p$ of $K$. Using this as the point in the connected sum construction and using the constant homotopy on $J\subseteq X$, we see that $K\#J$ and $K'\#J$ are homotopic in $Y\#X$, completing the proof. \end{proof} \begin{reptheorem}{thm:relative} Let $Y$ and $X$ be homology spheres. Let $K$ and $J$ be knots in $Y$ and $X$ respectively. 
If there exists a smooth homology cobordism from $Y$ to $X$ in which $K$ is homotopic to $J$ then there exist knots $K'$ and $J'$ in some homology sphere $Z$ such that $$(Y,K)\simeq_c(Z,K')\simeq_h(Z,J')\simeq_c (X,J).$$ \end{reptheorem} \begin{proof}[Proof of Theorem~\ref{thm:relative}] Suppose that $(Y,K)$ and $(X,J)$ are knots in homology spheres and that $(Y,K)\simeq_h^4 (X,J)$. By Proposition~\ref{prop: well defined}, $(Y\#-X,K\#-J)\simeq_h^4 (X\#-X,J\#-J)$. Here $-X$ indicates the orientation reverse of $X$ and $-J$ the orientation reverse of $J$. Just as for knots in $S^3$, $J\#-J$ bounds a smoothly embedded disk in a homology ball bounded by $X\#-X$. Stack a homology cobordism from $Y\#-X$ to $X\#-X$ on top of a homology ball bounded by $X\#-X$. This gives a homology ball bounded by $Y\#-X$. In this homology ball, stack a homotopy from $(Y\#-X,K\#-J)$ to $(X\#-X,J\#-J)$ on top of a slice disk for $J\#-J$ to see an immersed disk. Thus, $K\#-J$ is nullhomotopic in a homology ball. Theorem~\ref{thm:smoothII} now applies and concludes that there exists another homology sphere $Z_0$ and knots $L$ and $L'$ in $Z_0$ such that $$ (Y\#-X,K\#-J)\simeq_c(Z_0,L)\simeq_h(Z_0,L')\simeq_c(S^3,U) $$ where $U$ is the unknot in $S^3$. By Proposition~\ref{prop: well defined}, we may take the connected sum of each of these terms with $(X,J)$ to get $$ ((Y\#-X)\#X,(K\#-J)\#J)\simeq_c(Z_0\#X,L\# J)\simeq_h(Z_0\#X,L'\#J)\simeq_c(X,J). $$ This, together with the observation that ${(Y,K)\simeq_c(Y\#(-X\#X),K\#(-J\#J))}$, completes the proof. \end{proof} \end{document}
\begin{document} \title{Resource Logics with a Diminishing Resource} \author{Natasha Alechina} \orcid{0000-0003-3306-9891} \affiliation{ \institution{University of Nottingham} \streetaddress{Jubilee Campus} \city{Nottingham} \country{UK} } \email{[email protected]} \author{Brian Logan} \orcid{0000-0003-0648-7107} \affiliation{ \institution{University of Nottingham} \streetaddress{Jubilee Campus} \city{Nottingham} \country{UK} } \email{[email protected]} \begin{abstract} Model-checking resource logics with production and consumption of resources is a computationally hard and often undecidable problem. We introduce a simple and realistic assumption that there is at least one \emph{diminishing resource}, that is, a resource that cannot be produced and on which every action has a non-zero cost. An example of such a resource is time. We show that, with this assumption, problems that are undecidable even for the underlying Alternating Time Temporal Logic, such as model-checking under imperfect information and perfect recall, become decidable for resource logics with a diminishing resource. \end{abstract} \maketitle \section{Introduction} There has been a considerable amount of work on multi-agent temporal logics interpreted over structures where agents' actions consume resources, or both produce and consume resources. 
Examples include an extension of Coalition Logic where actions consume resources and coalitional modalities are annotated with resource bounds (`agents in coalition $A$ have a strategy of cost at most $b$ to achieve $\phi$') (RBCL) \cite{Alechina//:09b,Alechina//:10d}, a similar extension for Alternating Time Temporal Logic ATL (RB-ATL) \cite{Alechina//:10a}, extensions of Computation Tree Logic and Alternating Time Temporal Logic with both consumption and production of resources (RTL, RAL) \cite{BullingFarwer09rtl-clima-post,Bulling/Farwer:10a}, a variant of resource bounded ATL where all resources are convertible to money and the amount of money is bounded (PRB-ATL) \cite{DellaMonica//:11a,DellaMonica//:13a}, an extension of PRB-ATL to $\mu$-calculus \cite{DellaMonica/Lenzi:12a}, a version of ATL with more general numerical constraints (QATL${}^*$) \cite{Bulling/Goranko:13a}, and a version of RB-ATL where unbounded production of resources is allowed (RB$\pm$ATL\xspace) \cite{Alechina//:16d,Alechina//:16c}. The model-checking problem for such resource logics is decidable, though often not computationally tractable, when resources are only consumed or when the amount of resources is somehow bounded \cite{Bulling/Farwer:10a}. For RAL with unbounded production of resources, the model-checking problem is undecidable, and this holds even for several of its fragments \cite{Bulling/Farwer:10a}, although recently a fragment of RAL without the boundedness assumption has been found where the model-checking problem is decidable \cite{Alechina//:17b}. A slightly different semantics compared to RAL, but allowing unbounded production of resources, also results in a decidable model-checking problem for resource extensions of ATL such as RB$\pm$ATL\xspace \cite{Alechina//:16d}; the complexity of the model-checking problem for RB$\pm$ATL\xspace has been shown to be 2EXPTIME-complete in \cite{Alechina//:16c}. 
There exists also a large body of related work on reachability and non-termination problems in energy games and games on vector addition systems with states \cite{Brazdil&Jancar&Kucera10,Jurdzinski&Lazic&Schmitz15}. In fact, the complexity and decidability results for resource logics in \cite{Alechina//:16c} build on the results for single-sided vector addition systems with states \cite{Courtois&Schmitz14,Abdullaetal13}. As far as we are aware, there is no work on model-checking resource logics under imperfect information. For ATL (without resources) under imperfect information and with perfect recall uniform strategies, the problem is undecidable for three or more agents \cite{Dima/Tiplea:11a}. It is, however, decidable in the case of bounded strategies \cite{Vester:13a}. For two-player energy games with imperfect information and a fixed initial credit, the existence of a winning strategy is also decidable \cite{Degorre//:10a}. In this paper we consider a special kind of models for resource logics satisfying the restriction that one of the resources is consumed by every action. This is a very natural setting which occurs in many verification problems for resource logics. The first obvious example of such a resource is time. Time is always `consumed' by each action, and no agent in the system can turn back the clock and `produce' time. When a verification problem has time as one of the explicit resource parameters, the restriction certainly applies. Other examples include systems where agents have a non-rechargeable battery and all actions consume energy, e.g., nodes in a wireless sensor network; and systems where agents have a store of propellant that cannot be replenished during the course of a mission and all actions of interest involve manoeuvring, e.g., a constellation of satellites. We call this special resource that is consumed by all actions a \emph{diminishing resource}. 
From the technical point of view, the restriction to systems with a diminishing resource has the advantage that all strategies become bounded, even if unbounded production is allowed for the other resource types. In the case of RB$\pm$ATL\xspace with a diminishing resource, where the model-checking problem is already known to be decidable and 2EXPTIME-complete, we can produce simpler model-checking algorithms and a lower complexity bound (PSPACE if resource bounds are written in unary). In the case of RB$\pm$ATL\xspace with a diminishing resource under imperfect information, the result of \cite{Vester:13a} does not apply immediately because the bound is not fixed in advance, but the logic is indeed decidable and we obtain a new set of model-checking algorithms and a complexity bound. Finally, the decidability of RAL with a diminishing resource follows from the result on the decidability of RAL on bounded models \cite{Bulling/Farwer:10a}, but the model-checking algorithms and the PSPACE upper bound (for resource endowments written in unary) are specific to RAL with a diminishing resource and are new. The rest of the paper is organised as follows. In Section \ref{sec:atlr-spec}, we introduce RB$\pm$ATL\xspaced with a diminishing resource, motivate changes to its syntax (we use the Release operator instead of `Always' or `Globally', and do not allow infinite resource bounds), give a model-checking algorithm and analyse its complexity. In Section \ref{sec:atlrdir} we introduce RB$\pm$ATL\xspacedir, which is RB$\pm$ATL\xspaced under imperfect information and perfect recall, give a model-checking algorithm for it and analyse its complexity. Finally, in Section \ref{sec:rald} we define RAL with a diminishing resource, give a model-checking algorithm for it and show that the complexity is the same as for RB$\pm$ATL\xspaced. 
\section{RB$\pm$ATL\xspaced} \label{sec:atlr-spec} The logic RB$\pm$ATL\xspace was introduced in \cite{Alechina//:14c}, and its model-checking complexity was studied in more detail in \cite{Alechina//:16d} and \cite{Alechina//:16c}. Here we consider a variant of this logic, without the \idle action, which is interpreted on finite paths. It contains a Release operator instead of Globally and does not allow infinite values in resource bounds. We use Release because it is not definable in ATL in terms of Next, Until and Globally \cite{Laroussinie//:08a}, while Globally is definable in terms of Release, and it has a more intuitive meaning on finite computations. As is the case with RB$\pm$ATL\xspace, the syntax of RB$\pm$ATL\xspaced is defined relative to the following sets: $Agt =\{a_1, \ldots, a_n\}$ is a set of $n$ agents, $Res=\{res_1, \ldots,$ $res_r\}$ is a set of $r$ resource types, $\Pi$ is a set of propositions, and ${\mathcal B} = \nat^{{Res}^{Agt}}$ is a set of resource bounds (resource allocations to agents). Elements of ${\mathcal B}$ are vectors of length $n$ where each element is a vector of length $r$ (the $k$th element of the $i$th vector is the allocation of the $k$th resource to the $i$th agent). We will denote by ${\mathcal B}_A$ (for $A \subseteq Agt$) the set of possible resource allocations to agents in $A$. Formulas of RB$\pm$ATL\xspaced are defined by the following syntax: \[ \phi, \psi ::= p \mid \neg \phi \mid \phi \lor \psi \mid \AO{A^b}{\phi} \mid \AU{A^b}{\phi}{\psi} \mid \AR{A^b}{\phi}{\psi} \] where $p \in \Pi$ is a proposition, $A \subseteq Agt$, and $b \in {\mathcal B}_A$ is a resource bound. Here, $\AO{A^b}{\phi}$ means that the coalition $A$ can ensure that the next state satisfies $\phi$ under resource bound $b$. $\AU{A^b}{\phi}{\psi}$ means that $A$ has a strategy to enforce $\psi$ while maintaining the truth of $\phi$, and the cost of this strategy is at most $b$. 
Finally, $\AR{A^b}{\phi}{\psi}$ means that $A$ has a strategy to maintain $\psi$ until and including the time when $\phi$ becomes true, or to maintain $\psi$ forever if $\phi$ never becomes true, and the cost of this strategy is at most $b$. The language is interpreted on resource-bounded concurrent game structures. Without loss of generality, we assume that the first resource type is diminishing, i.e.,\xspace is consumed by every action. \begin{definition} \label{def:rbcgs} A resource-bounded concurrent game structure with diminishing resource (RB-CGS$^\ensuremath{\#}$) is a tuple $M = (Agt$, $Res$, $S, \Pi, \pi$, $Act$, $d, c, \delta)$ where: \begin{itemize} \item $Agt$ is a non-empty finite set of $n$ agents; \item $Res$ is a non-empty finite set of $r$ resource types, where the first one is the distinguished diminishing resource; \item $S$ is a non-empty finite set of states; \item $\Pi$ is a finite set of propositional variables and $\pi : \Pi \to \wp(S)$ is a truth assignment which associates each proposition in $\Pi$ with the subset of states where it is true; \item $Act$ is a non-empty set of actions; \item $d : S \times Agt \to \wp(Act)\setminus \{\emptyset\}$ is a function which assigns to each $s \in S$ a non-empty set of actions available to each agent $a \in Agt$. We denote the set of joint actions by all agents in $Agt$ available at $s$ by $D(s) = d(s,a_1) \times \cdots \times d(s,a_n)$; \item $c : S \times Act \to \integer^r$ is a partial function which maps a state $s$ and an action $\sigma$ to a vector of integers, where the integer in position $i$ indicates consumption or production of resource $r_i$ by the action (a negative value for consumption and a positive value for production). We stipulate that the first position in the vector is always at most $-1$ (at least one unit of the diminishing resource is consumed by every action); 
\item $\delta : S \times Act^{|Agt|} \to S$ is a partial function that maps every $s \in S$ and joint action $\sigma \in D(s)$ to a state resulting from executing $\sigma$ in $s$. \end{itemize} \end{definition} In what follows, we use the usual point-wise notation for vector comparison and addition. In particular, $(b_1,\ldots,b_r) \leq (d_1,\ldots,d_r)$ iff $b_i \leq d_i$ $\forall$ $i \in \{1,\ldots,r\}$, $(b_1,\ldots,b_r) = (d_1,\ldots,$ $d_r)$ iff $b_i = d_i$ $\forall$ $i \in \{1,\ldots,r\}$, and $(b_1,\ldots,b_r) + (d_1,\ldots,d_r) = (b_1 + d_1,\ldots,b_r + d_r)$. We define $(b_1,\ldots,b_r) < (d_1,\ldots,d_r)$ as $(b_1,\ldots,b_r) \leq (d_1,\ldots,d_r)$ and $(b_1,\ldots,b_r) \not = (d_1,\ldots,d_r)$. Given a function $f$ returning a vector, we denote by $f_i$ the function that returns the i-th component of the vector returned by $f$. We denote by $\mathsf{prod}(s,\sigma)$ the vector obtained by replacing negative values in $c(s,\sigma)$ by $0$s: it is the vector of resources produced by action $\sigma$. We denote by $\mathsf{cons}(s,\sigma)$ the vector obtained by first replacing positive values in $c(s,\sigma)$ by $0$s and then replacing negative values by their absolute values: $\mathsf{cons}(s,\sigma) = (|\min(0, c_1(s,\sigma))|$, \ldots, $|\min(0, c_r(s,\sigma))|)$. It returns the positive costs on each resource of executing $\sigma$. In particular, $\mathsf{cons}_1(s,\sigma) \geq 1$. We denote the set of all finite non-empty sequences of states (finite computations) in a RB-CGS$^{\ensuremath{\#}}$ $M$ by $S^+$. We consider only finite computations because we are interested in computations possible under a finite resource bound, and in the presence of a diminishing resource which is required for any action, such computations are always finite. 
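The pointwise operations and the split of a cost vector $c(s,\sigma)$ into its consumption and production parts can be sketched in a few lines of Python (an illustrative sketch, not part of the formal development; the function names and the example vector are ours):

```python
# Pointwise vector conventions and the cons/prod split of a cost vector,
# following the definitions in the text.

def leq(b, d):
    # b <= d iff b_i <= d_i for every resource type i
    return all(bi <= di for bi, di in zip(b, d))

def add(b, d):
    # pointwise sum of two resource vectors
    return tuple(bi + di for bi, di in zip(b, d))

def cons(c):
    # cons_i = |min(0, c_i)|: the positive cost on each resource type
    return tuple(abs(min(0, ci)) for ci in c)

def prod(c):
    # prod_i = max(0, c_i): the amount produced on each resource type
    return tuple(max(0, ci) for ci in c)

# An action consuming one unit of the diminishing resource (first
# position, stipulated to be at most -1), consuming three units of the
# third resource, and producing two units of the second.
c_example = (-1, 2, -3)
assert cons(c_example) == (1, 0, 3)
assert prod(c_example) == (0, 2, 0)
assert cons(c_example)[0] >= 1  # cons_1(s, sigma) >= 1 always holds
```

The last assertion reflects the stipulation that every action consumes at least one unit of the diminishing resource.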
For a computation $\lambda = s_1\ldots s_k \in S^+$, we use the notation $\lambda[i] = s_i$ for $i \leq k$, $\lambda[i,j] = s_i \ldots s_j$ $\forall$ $1 \leq i \leq j \leq k$, and $|\lambda| = k$ for the length of $\lambda$. Given a RB-CGS$^\ensuremath{\#}$ $M$ and a state $s \in S$, a \emph{joint action by a coalition} $A \subseteq Agt$ is a tuple $\sigma = (\sigma_a)_{a \in A}$ (where $\sigma_a$ is the action that agent $a$ executes as part of $\sigma$, the $a$th component of $\sigma$) such that $\sigma_a \in d(s,a)$. For a joint action $\sigma$ by a coalition $A$, we denote by $\mathsf{cons}(s,\sigma) = (\mathsf{cons}(s,\sigma_a))_{a \in A}$ the vector of costs of the joint action, and similarly for $\mathsf{prod}(s,\sigma)$. The set of all joint actions for $A$ at state $s$ is denoted by $D_A(s)$. Given a joint action $\sigma \in D(s)$ by $Agt$, $\sigma_A$ (the projection of $\sigma$ on $A$) denotes the joint action executed by $A$ as part of $\sigma$: $\sigma_A = (\sigma_a)_{a \in A}$. The set of all possible outcomes of a joint action $\sigma \in D_A(s)$ at state $s$ is: \[ out(s,\sigma) = \{ s' \in S \mid \exists \sigma' \in D(s): \sigma= \sigma'_A \land s' = \delta(s,\sigma') \} \] A \emph{strategy for a coalition} $A \subseteq Agt$ in a RB-CGS$^\ensuremath{\#}$ $M$ is a mapping $F_A : S^+ \to Act^{|A|}$ such that, for every $\lambda \in S^+$, $F_A(\lambda) \in D_A(\lambda[|\lambda|])$. A computation $\lambda$ is consistent with a strategy $F_A$ iff, for all $i$ with $1 \leq i < |\lambda|$, $\lambda[i+1] \in out(\lambda[i],F_A(\lambda[1,i]))$. We denote by $out(s,F_A)$ the set of all computations $\lambda$ starting from $s$ that are consistent with $F_A$. 
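The outcome set $out(s,\sigma)$ can be computed by completing the coalition's joint action with every choice of the remaining agents; the following is a sketch under the assumption that $d$ and $\delta$ are given as Python functions (all names are ours and hypothetical):

```python
from itertools import product

def outcomes(s, sigma_A, A, agents, d, delta):
    """out(s, sigma_A): states reachable when coalition A plays sigma_A
    and every agent outside A plays any available action."""
    # Fix A's components, range over the other agents' available actions.
    choices = [[sigma_A[a]] if a in A else list(d(s, a)) for a in agents]
    return {delta(s, joint) for joint in product(*choices)}

# Toy example: two agents, agent 1 commits to 'x', agent 2 is free.
d = lambda s, a: ('x', 'y')
delta = lambda s, joint: joint   # the new "state" just records the joint action
assert outcomes('s0', {1: 'x'}, {1}, (1, 2), d, delta) == {('x', 'x'), ('x', 'y')}
```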
Given a bound $b \in {\mathcal B}_A$, a computation $\lambda \in out(s,F_A)$ is $b$-consistent with $F_A$ iff, for every $i$, $1 \leq i < |\lambda|$, \[\mathsf{cons}(\lambda[i], F_A(\lambda[1,i])) \leq e_A(\lambda[i])\] where $e_A(\lambda[i])$ is the amount of resources agents in $A$ have in $\lambda[i]$: $e_A(\lambda[1]) = b$ and \begin{align*} e_A(\lambda[i+1]) =&\ e_A(\lambda[i]) - \mathsf{cons}(\lambda[i], F_A(\lambda[1,i])) + \\ & \ \mathsf{prod}(\lambda[i], F_A(\lambda[1,i])). \end{align*} In other words, the amount of resources the agents have is never negative for any resource type. A computation $\lambda$ is $b$-maximal for a strategy $F_A$ if it cannot be extended further while remaining $b$-consistent (the next action prescribed by $F_A$ would violate $b$-consistency). The set of all maximal computations starting from state $s$ that are $b$-consistent with $F_A$ is denoted by $out(s,F_A,b)$. Note that this set is finite: the maximal length of each computation is bounded by $\min_{a \in A}({b_a}_1)$, the minimal bound on the first resource over the agents in $A$. 
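The endowment update and the $b$-consistency test can be combined in a single step function, sketched here for a single shared endowment vector (an illustration; `advance` is our name):

```python
def advance(e, cost):
    """One step of e' = e - cons + prod; returns None if the action is
    unaffordable, i.e., if taking it would violate b-consistency."""
    cons_v = tuple(abs(min(0, x)) for x in cost)
    prod_v = tuple(max(0, x) for x in cost)
    if any(c > avail for c, avail in zip(cons_v, e)):
        return None
    return tuple(avail - c + p for avail, c, p in zip(e, cons_v, prod_v))

# With bound b = (3, 0), every action consumes one unit of the diminishing
# resource, so at most 3 steps are possible regardless of production:
e = advance((3, 0), (-1, 2))        # -> (2, 2)
e = advance(e, (-1, -2))            # -> (1, 0)
assert advance(e, (-2, 0)) is None  # 2 units needed, only 1 left
```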
Given a RB-CGS$^\ensuremath{\#}$ $M$ and a state $s$ of $M$, the truth of an RB$\pm$ATL\xspaced formula $\phi$ with respect to $M$ and $s$ is defined inductively on the structure of $\phi$ as follows: \begin{itemize} \item $M, s \ensuremath{M}\xspaces p$ iff $s \in \pi(p)$; \item $M, s \ensuremath{M}\xspaces \neg \phi$ iff $M, s \not\ensuremath{M}\xspaces \phi$; \item $M, s \ensuremath{M}\xspaces \phi \lor \psi$ iff $M, s \ensuremath{M}\xspaces \phi$ or $M, s \ensuremath{M}\xspaces \psi$; \item $M, s \ensuremath{M}\xspaces \AO{A^b}{\phi}$ iff $\exists$ strategy $F_A$ such that for all $b$-maximal $\lambda \in out(s, F_A, b)$, $|\lambda| \geq 2$ and $M, \lambda[2] \ensuremath{M}\xspaces \phi$; \item $M, s \ensuremath{M}\xspaces \AU{A^b}{\phi}{\psi}$ iff $\exists$ strategy $F_A$ such that for all $b$-maximal $\lambda \in out(s, F_A, b)$, $\exists i$ such that $1 \leq i \leq |\lambda|$, $M, \lambda[i] \ensuremath{M}\xspaces \psi$ and $M, \lambda[j] \ensuremath{M}\xspaces \phi$ for all $j \in \{1,\ldots,i-1\}$. \item $M, s \ensuremath{M}\xspaces \AR{A^b}{\phi}{\psi}$ iff $\exists$ strategy $F_A$ such that for all $b$-maximal $\lambda \in out(s, F_A,b)$, either $\exists i$ such that $1 \leq i \leq |\lambda|$: $M, \lambda[i] \ensuremath{M}\xspaces \phi$ and $M, \lambda[j] \ensuremath{M}\xspaces \psi$ for all $j \in \{1,\ldots,i\}$; or, $M, \lambda[j] \ensuremath{M}\xspaces \psi$ for all $j$ such that $1 \leq j \leq |\lambda|$. \end{itemize} The most straightforward way of model-checking RB$\pm$ATL\xspaced is to adapt the model-checking algorithm for RB$\pm$ATL\xspace \cite{Alechina//:16d} and add a clause for $\AR{A^b}{\phi}{\psi}$. We present this simple algorithm here because we will use it in modified form in subsequent sections. It is however possible to do RB$\pm$ATL\xspaced model-checking more efficiently in the spirit of \cite{DellaMonica//:11a}. The algorithm is shown in Algorithm \ref{alg:atlrd-label}. 
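To make the truth condition for $\AU{A^b}{\phi}{\psi}$ concrete, the following Python sketch performs the depth-first and-or search for the special case of a single shared endowment vector (all helper names are ours; termination is guaranteed by the diminishing first resource):

```python
def until(s, e, phi, psi, actions, cost, delta):
    """Search for a strategy witnessing <<A^b>> phi U psi: succeed if psi
    holds now; fail if phi fails; otherwise some affordable action must
    succeed in *all* of its outcomes."""
    if psi(s):
        return True
    if not phi(s):
        return False
    for a in actions(s):
        c = cost(s, a)
        cons_v = tuple(abs(min(0, x)) for x in c)
        if any(x > avail for x, avail in zip(cons_v, e)):
            continue                      # not b-consistent, skip this action
        e2 = tuple(avail - cn + max(0, x)
                   for avail, cn, x in zip(e, cons_v, c))
        if all(until(s2, e2, phi, psi, actions, cost, delta)
               for s2 in delta(s, a)):
            return True
    return False

# Toy chain 0 -> 1 -> 2 where psi holds at 2; each step costs one unit.
ok = until(0, (2,), lambda s: True, lambda s: s == 2,
           lambda s: ['go'] if s < 2 else [], lambda s, a: (-1,),
           lambda s, a: [s + 1])
assert ok   # a bound of 2 units of the diminishing resource suffices
```

With the bound lowered to one unit, the search fails: the strategy runs out of the diminishing resource before reaching the $\psi$ state.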
Given a formula $\phi_0$, we produce a set of subformulas $Sub(\phi_0)$ of $\phi_0$ in the usual way. $Sub(\phi_0)$ is ordered in increasing order of complexity. We then proceed by cases. For all formulas in $Sub(\phi_0)$ apart from $\AO{A^b}\phi$, $\AU{A^b}{\phi}{\psi}$ and $\AR{A^b}{\phi}{\psi}$ we essentially run the standard ATL model-checking algorithm \cite{Alur//:02a}. Labelling states with $\AO{A^b}\phi$ makes use of a function $Pre(A,\rho,b)$ which, given a coalition $A$, a set $\rho \subseteq S$ and a bound $b$, returns the set of states $s$ in which $A$ has a joint action $\sigma_A$ with $\mathsf{cons}(s,\sigma_A) \leq b$ such that $out(s,\sigma_A) \subseteq \rho$. Labelling states with $\AU{A^b}{\phi}{\psi}$ and $\AR{A^b}{\phi}{\psi}$ is more complex, and in the interests of readability we provide separate functions: \textsc{until-strategy} for $\AU{A^b}{\phi}{\psi}$ formulas is shown in Algorithm \ref{alg:until-strategy}, and \textsc{release-strategy} for $\AR{A^b}{\phi}{\psi}$ formulas is shown in Algorithm \ref{alg:release-strategy}. Both algorithms proceed by depth-first and-or search of $M$. We record information about the state of the search in a search tree of nodes. A \emph{node} is a structure that consists of a state of $M$, the resources available to the agents $A$ in that state (if any), and a finite path (sequence of nodes and edges) leading to this node from the root node. Edges in the tree correspond to joint actions by all agents. Note that the resources available to the agents in a state $s$ on a path constrain the edges from the corresponding node to be those actions $\sigma_A$ where $\mathsf{cons}(s,\sigma_A)$ is less than or equal to the available resources. 
For each node $n$ in the tree, we have a function $s(n)$ that returns its state, $p(n)$ that returns the nodes on the path, $\mathit{act}(n)$ that returns the joint action taken to reach $s(n)$ from the preceding state on the path (i.e.,\xspace the edge to $n$), and $e(n)$ that returns the vector of resource availabilities in $s(n)$ for $A$ as a result of following $p(n)$. The functions $\mathit{act}_{a}(n)$ and $e_{a}(n)$ return the action performed by agent $a \in A$ in $\mathit{act}(n)$ and the resources available to agent $a$ in $e(n)$ respectively. We use $p(n)[i]$ to denote the $i$-th node in the path $p(n)$, and $p(n)[1, j]$ to denote the prefix of $p(n)$ up to the $j$-th node. The function $\mathit{node}_0(s,b)$ returns the root node, i.e., a node $n_0$ such that $s(n_0) = s$, $p(n_0) = [\ ]$, $\mathit{act}(n_{0}) = nil$, and $e(n_0) = b$. The function $\mathit{node}(n, \sigma, s')$ returns a node $n'$ where $s(n') = s'$, $p(n') = [p(n) \cdot n]$, $\mathit{act}(n') = \sigma$, and for all agents $a \in A$ $e_a (n') = e_a (n) + \mathsf{prod}(s(n),\sigma_a) - \mathsf{cons}(s(n),\sigma_a)$. 
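Under these conventions, a node and the two constructors can be sketched as follows (a simplified illustration with a single shared endowment vector and an explicit per-step cost argument; the attribute names follow the text):

```python
class Node:
    """Search-tree node: state, path of ancestor nodes, incoming joint
    action, and the resources available after following the path."""
    def __init__(self, state, path, act, e):
        self.state, self.path, self.act, self.e = state, path, act, e

def node0(s, b):
    """Root node: empty path, no incoming action, endowment equal to b."""
    return Node(s, [], None, b)

def node(n, sigma, s_next, cost):
    """Child of n reached by sigma, with e' = e + prod - cons."""
    cons_v = tuple(abs(min(0, x)) for x in cost)
    prod_v = tuple(max(0, x) for x in cost)
    e2 = tuple(a + p - c for a, c, p in zip(n.e, cons_v, prod_v))
    return Node(s_next, n.path + [n], sigma, e2)

root = node0('s0', (2, 1))
child = node(root, 'sigma', 's1', (-1, 2))
assert child.e == (1, 3) and child.path == [root]
```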
\begin{algorithm} \caption{Labelling $\phi_0$ } \label{alg:atlrd-label} \begin{algorithmic}[1] \Function{RB$\pm$ATL\xspaced-label}{$M, \phi_0$} \For{$\phi' \in Sub(\phi_0)$} \Case{$\phi' = p,\ \neg \phi,\ \phi \vee \psi$}\ standard, see \cite{Alur//:02a} \EndCase \Case{$\phi' = \AO{A^b}{\phi}$} \State $[\phi']_M \gets Pre(A, [\phi]_M,b)$ \EndCase \Case{$\phi' = \AU{A^b}{\phi}{\psi}$} \State $[\phi']_M \gets \{\ s \mid s \in S \wedge$ \State $\quad \Call{until-strategy}{node_0(s,b), \AU{A^b}{\phi}{\psi} } \}$ \EndCase \Case{$\phi' = \AR{A^b}{\phi}{\psi}$} \State $[\phi']_M \gets \{\ s \mid s \in S \wedge$ \State $\quad \Call{release-strategy}{node_0(s,b), \AR{A^b}{\phi}{\psi} } \}$ \EndCase \EndFor \State $\mathbf{return\ } [\phi_0]_M$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{Labelling $\AU{A^b}{\phi}{\psi}$ } \label{alg:until-strategy} \begin{algorithmic}[1] \Function{until-strategy}{$n, \AU{A^b}{\phi}{\psi} $} \If{$s(n) \in [\psi]_M $} \State $\mathbf{return}\ \mathit{true}$ \EndIf \If{$s(n) \not \in [\phi]_M $} \State $\mathbf{return}\ \mathit{false}$ \EndIf \State $ActA \gets \{ \sigma \in D_A(s(n)) \mid \mathsf{cons}(s(n),\sigma) \leq e(n) \}$ \For{$\sigma \in ActA $} \State $O \gets out(s(n),\sigma)$ \State $\mathit{strat} \gets \mathit{true}$ \For{$s' \in O$} \State $\mathit{strat} \gets \mathit{strat} \wedge $ \State $\quad \Call{until-strategy}{node(n,\sigma,s'), \AU{A^b}{\phi}{\psi} }$ \EndFor \If{$ \mathit{strat}$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \EndFor \State $\mathbf{return}\ \mathit{false}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{Labelling $\AR{A^b}{\phi}{\psi}$ } \label{alg:release-strategy} \begin{algorithmic}[1] \Function{release-strategy}{$n, \AR{A^b}{\phi}{\psi} $} \If{$s(n) \in [\psi]_M \cap [\phi]_M$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \If{$s(n) \in [\psi]_M \ \wedge\ $ \NoNumber{\\ $\qquad\ \ \exists \sigma \in D_A(s(n)) : 
\mathsf{cons}(s(n),\sigma) \not\leq e(n)$}} \State $\mathbf{return}\ \mathit{true}$ \EndIf \If{$s(n) \not \in [\psi]_M $} \State $\mathbf{return}\ \mathit{false}$ \EndIf \State $ActA \gets \{ \sigma \in D_A(s(n)) \mid \mathsf{cons}(s(n),\sigma) \leq e(n) \}$ \For{$\sigma \in ActA $} \State $O \gets out(s(n),\sigma)$ \State $\mathit{strat} \gets \mathit{true}$ \For{$s' \in O$} \State $\mathit{strat} \gets \mathit{strat} \wedge $ \State $\quad \Call{release-strategy}{node(n,\sigma,s'), \AR{A^b}{\phi}{\psi} }$ \EndFor \If{$ \mathit{strat}$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \EndFor \State $\mathbf{return}\ \mathit{false}$ \EndFunction \end{algorithmic} \end{algorithm} When checking whether $\AU{A^b}{\psi_1}{\psi_2}$ or $\AR{A^b}{\psi_1}{\psi_2}$ is true in a state $s$, we examine paths whose length is bounded by $\min_{a \in A}({b_a}_1)$, the smallest bound on the first resource in $b$ (since every action costs at least 1 unit of the first resource, any computation can contain at most $\min_{a \in A}({b_a}_1)$ steps). An over-approximation of the size of this search tree is $|S|^{\min_{a\in A}({b_a}_1)}$. \begin{lemma} \label{lem:terminates} Algorithm \ref{alg:atlrd-label} on input $M$, $\phi$ terminates after at most $O(|\phi|\times |M|^{k})$ steps, where $k$ is the maximal value of the first resource bound in $\phi$. \end{lemma} \begin{lemma} \label{lem:correct} Algorithm \ref{alg:atlrd-label} is correct. \end{lemma} \begin{proof} The Boolean cases of the algorithm are standard. The algorithm for $\AO{A^b}{\phi}$ returns all states from where there is an action by $A$ that costs at most $b$ and all outcomes of this action satisfy $\phi$. Essentially, in each such state there is a one-step strategy satisfying $\AO{A^b}{\phi}$. 
This is all we need because the rest of the actions in this strategy can be arbitrary; the computations that are produced by the strategy do not need to satisfy any additional constraints apart from being maximal, i.e.,\xspace eventually running out of resources (which they are guaranteed to do because of the first resource). The algorithm for $\AU{A^b}{\phi}{\psi}$ performs forward and-or search while making sure $\phi$ remains true, until $\psi$ is reached. It returns true if and only if it finds a strategy where each computation reaches a $\psi$ state before $A$ run out of resources to carry on with the strategy, and $\phi$ holds along the computation up to the point where $\psi$ becomes true. Again, actions after the $\psi$ state can be arbitrary. The algorithm for $\AR{A^b}{\phi}{\psi}$ is similar to that for $\AU{A^b}{\phi}{\psi}$ apart from two points. One is that the state where $\phi$ becomes true should also satisfy $\psi$ (the invariant holds not just on the path to a $\phi$ state but in the $\phi$ state itself). This is ensured by the test at line 2. The second difference is that there is another way to make $\AR{A^b}{\phi}{\psi}$ true, which is to run out of resources while maintaining $\psi$. This is the reason for the test at line 4: if the invariant $\psi$ is true in $s$ and there is an action $\sigma$ in $D_A(s)$ that $A$ cannot afford, we return true because, for the computation $\lambda$ ending in $s$, the strategy $F_A$ such that $F_A(\lambda)=\sigma$ ensures that $\lambda$ is a $b$-maximal computation (and it satisfies $\psi$ everywhere). \end{proof} \begin{theorem} The model-checking problem for RB$\pm$ATL\xspaced is decidable in PSPACE (if resource bounds are written in unary). \end{theorem} \begin{proof} From Lemmas \ref{lem:terminates} and \ref{lem:correct} we have a model checking algorithm that solves the model checking problem for RB$\pm$ATL\xspaced. 
The complexity which results from the time bound in Lemma \ref{lem:terminates} can be improved by observing that the depth-first search can be arranged using a stack, and we only need to keep one branch at a time on the stack. The size of the stack is bounded by $\min_{a \in A}({b_a}_1)$ and hence is polynomial if $b$ is represented in unary. \end{proof} \section{RB$\pm$ATL\xspaced with imperfect information and perfect recall} \label{sec:atlrdir} Agents often have to act under imperfect information; for example, if states are only partially observable, an agent may be uncertain whether it is in state $s$ or $s'$. This is represented in imperfect information models as a binary indistinguishability relation on the set of states for each agent $a$, $\sim_a$: if $a$ cannot distinguish $s$ from $s'$, we have $s \sim_a s'$. This relation can easily be lifted to finite sequences of states: if $s_1 \sim_a s'_1$ and $s_2 \sim_a s'_2$, then $s_1s_2 \sim_a s'_1 s'_2$. An essential requirement for strategies under imperfect information is that they are \emph{uniform}: if agent $a$ is uncertain whether the history so far is $\lambda$ or $\lambda'$ ($\lambda \sim_a \lambda'$), then the strategy for $a$ should return the same action for both: $F_a(\lambda) = F_a(\lambda')$. Intuitively, the agent has no way of choosing different actions in indistinguishable situations. A strategy $F_A$ for a group of agents $A$ is uniform if it is uniform for every agent in $A$. In what follows, we consider \emph{strongly uniform} strategies \cite{Maubert/Pinchinat:14a}, which require that a strategy work from all initial states that are indistinguishable by some $a \in A$. Unfortunately, model-checking for ATL under imperfect information with perfect recall uniform strategies, ATL$_{iR}$, is undecidable for more than three agents \cite{Dima/Tiplea:11a}. 
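The lifting of $\sim_a$ to sequences and the resulting uniformity constraint can be sketched as a simple check (an illustration; here a strategy is represented as a dictionary from histories, i.e.,\xspace tuples of states, to actions):

```python
def indist(h1, h2, sim_a):
    """Lift agent a's state indistinguishability to equal-length histories."""
    return len(h1) == len(h2) and all(sim_a(s, t) for s, t in zip(h1, h2))

def is_uniform(strategy, sim_a):
    """A strategy is uniform for agent a iff it agrees on every pair of
    indistinguishable histories."""
    hs = list(strategy)
    return all(strategy[h1] == strategy[h2]
               for i, h1 in enumerate(hs) for h2 in hs[i + 1:]
               if indist(h1, h2, sim_a))

# Agent a cannot tell states 's' and 't' apart:
sim_a = lambda x, y: x == y or {x, y} == {'s', 't'}
assert is_uniform({('s',): 'go', ('t',): 'go', ('u',): 'stop'}, sim_a)
assert not is_uniform({('s',): 'go', ('t',): 'stop'}, sim_a)
```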
It is known that the model checking problem for ATL$_{iR}$ with \emph{bounded} strategies is decidable, while for \emph{finite} strategies it is undecidable \cite{Vester:13a}. Bounded strategies are those that are defined for sequences of states of at most some fixed length $k$. In RB$\pm$ATL\xspacedir, there is no fixed bound on the size of strategies, since the size of a strategy depends on the formula and the model. However, we can show that the model checking problem for RB$\pm$ATL\xspacedir, i.e.,\xspace for imperfect information and perfect recall strongly uniform strategies, is decidable. The model checking algorithms are similar to those given for RB$\pm$ATL\xspaced in Section \ref{sec:atlr-spec} in that they proceed by and-or depth-first search, storing information about the state of the search in a search tree of nodes. However, in this case, the algorithms for Next, Until and Release also take a stack (list) of `open' nodes $B$ and a set of `closed' nodes $C$, in addition to an RB$\pm$ATL\xspaced formula. $B$ records the current state of the search, while $C$ records `successful' branches (rather than all visited nodes). Uniformity is ensured if action choices are consistent with those taken after $\sim_{a}$-related sequences of states on all successful paths explored to date ($n_1, \ldots, n_k \sim_a n'_1, \ldots, n'_k$ iff $s(n_1), \ldots, s(n_k) \sim_a s(n'_1), \ldots, s(n'_k)$). In addition, we assume functions $\hd{u}$ and $\tl{u}$ which return the head and tail of a list $u$, and $u\, \circ\, v$ which concatenates the lists $u$ and $v$. (We abuse notation slightly and treat sets as lists, e.g.,\xspace we use $\hd{u}$ where $u$ is a set to return an arbitrary element of $u$, and use $\circ$ between a set and a list.) 
$M,s \ensuremath{M}\xspaces \AO{A^b}{\phi}$ under strong uniformity requires that there exists a uniform strategy $F_A$ such that for all $a \in A$, if $s' \sim_a s$, then for all $b$-maximal $\lambda \in out(s', F_A, b)$: $|\lambda| > 1$ and $M, \lambda[2] \ensuremath{M}\xspaces \phi$. Similarly, in truth conditions for $\AU{A^b}{\phi}{\psi}$ and $\AR{A^b}{\phi}{\psi}$ we require the existence of a uniform strategy where all $b$-maximal computations starting from states $s'$ indistinguishable from $s$ by any $a \in A$ satisfy the Until (respectively, Release) formula. Weak uniformity only requires the existence of a uniform strategy from $s$. It is easy to modify the algorithms below to correspond to weak uniformity semantics. In fact, the algorithm for $\AO{A^b}{\phi}$ would become much simpler (identical to that for RB$\pm$ATL\xspaced in the previous section). \begin{algorithm}[h] \caption{Labelling $\phi_0$} \label{alg:atlr-label2} \begin{algorithmic}[1]\small \Function{RB$\pm$ATL\xspacedir-label}{$M, \phi_0$} \For{$\phi' \in Sub(\phi_0)$} \Case{$\phi' = p,\ \neg \phi,\ \phi \vee \psi$} \ standard, see \cite{Alur//:02a} \EndCase \Case{$\phi' = \AO{A^b}{\phi}$} \State $[\phi']_M \gets \{\ s \mid s \in S\ \wedge\ $ \StatexIndent[7] $\textsc{next}([node_0(s',b) : s' \sim_{a \in A} s],$ \StatexIndent[9] $ \{\ \}, \AO{A^b}{\phi}) \}$ \EndCase \Case{$\phi' = \AU{A^b}{\phi}{\psi}$} \State $[\phi']_M \gets \{\ s \mid s \in S\ \wedge\ $ \StatexIndent[7] $\textsc{until}([node_0(s',b) : s' \sim_{a \in A} s],$ \StatexIndent[9] $\{\ \}, \AU{A^b}{\phi}{\psi} ) \}$ \EndCase \Case{$\phi' = \AR{A^b}{\phi}{\psi}$} \State $[\phi']_M \gets \{\ s \mid s \in S\ \wedge\ $ \StatexIndent[7] $\textsc{release}([node_0(s',b) : s' \sim_{a \in A} s],$ \StatexIndent[8] $\ \ \{\ \}, \AR{A^b}{\phi}{\psi} ) \}$ \EndCase \EndFor \State $\mathbf{return\ } [\phi_0]_M$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{Labelling $\AO{A^b}{\phi}$} \label{alg:x-strategy} 
\begin{algorithmic}[1]\small \Function{next}{$B, C, \AO{A^b}{\phi}$} \If{$B = [\ ]$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \State $n \gets \hd{B}$ \State $Act_A \gets \{\sigma \in D_A(s(n)) \mid \mathsf{cons}(s(n),\sigma) \leq e(n)\ \wedge\ $ \StatexIndent[4] $out(s(n),\sigma) \subseteq [\phi]_M \ \wedge\ \forall a \in A$ \StatexIndent[4] if $\exists n' \in C : p(n) \cdot n \sim_{a} p(n')$ \StatexIndent[4] then $\sigma_a = act_a(p(n')[1]) \}$ \For{$\sigma \in Act_A $} \If{$\textsc{next}(\tl{B}, C \cup \{ node(n,\sigma,\hd{out(s(n),\sigma)})\},$ \StatexIndent[5] $\ \ \AO{A^b}{\phi})$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \EndFor \State $\mathbf{return}\ \mathit{false}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{Labelling $\AU{A^b}{\phi}{\psi}$} \label{alg:until-strategy-imperfect} \begin{algorithmic}[1] \Function{until}{$B, C, \AU{A^b}{\phi}{\psi} $} \If{$B = [\ ]$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \State $n \gets \hd{B}$ \If{$s(n) \in [\psi]_M$} \State $\mathbf{return}\ \textsc{until}(\tl{B}, C \cup \{n\}, \AU{A^b}{\phi}{\psi})$ \EndIf \If{$s(n) \not\in [\phi]_M $} \State $\mathbf{return}\ \mathit{false}$ \EndIf \State $Act_A \gets \{ \sigma \in D_A(s(n)) \mid \mathsf{cons}(s(n),\sigma) \leq e(n)\ \wedge\ \forall a \in A$ \StatexIndent[4] if $\exists n' \in C : p(n) \cdot n \sim_{a} p(n')[1, | p(n) \cdot n |]$ \StatexIndent[4] then $\sigma_a = act_a(p(n')[| p(n) \cdot n | + 1]) \}$ \For{$\sigma \in Act_A $} \State $P \gets \{ node(n,\sigma,s') \mid s' \in out(s(n),\sigma) \}$ \If{$\textsc{until}(P \circ \tl{B}, C, \AU{A^b}{\phi}{\psi} )$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \EndFor \State $\mathbf{return}\ \mathit{false}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{Labelling $\AR{A^b}{\phi}{\psi}$} \label{alg:release-strategy-imperfect} \begin{algorithmic}[1] \Function{release}{$B, C, \AR{A^b}{\phi}{\psi} $} \If{$B = [\ ]$} \State $\mathbf{return}\ 
\mathit{true}$ \EndIf \State $n \gets \hd{B}$ \If{$s(n) \in [\psi]_M \cap [\phi]_M$} \State $\mathbf{return}\ \textsc{release}(\tl{B}, C \cup \{n\}, \AR{A^b}{\phi}{\psi})$ \EndIf \If{$s(n) \not \in [\psi]_M $} \State $\mathbf{return}\ \mathit{false}$ \EndIf \State $Act_A \gets \{ \sigma \in D_A(s(n)) \mid \forall a \in A$ \StatexIndent[4] if $\exists n' \in C : p(n) \cdot n \sim_{a} p(n')[1, | p(n) \cdot n |]$ \StatexIndent[4] then $\sigma_a = act_a(p(n')[| p(n) \cdot n | + 1]) \}$ \For{$\sigma \in Act_A $} \If{$\mathsf{cons}(s(n),\sigma) \not\leq e(n)$} \State $n' \gets node(n,\sigma,\hd{out(s(n),\sigma)})$ \If{$\textsc{release}(\tl{B}, C \cup \{n'\}, \AR{A^b}{\phi}{\psi})$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \Else \State $P \gets \{ node(n,\sigma,s') \mid s' \in out(s(n),\sigma) \}$ \If{$\textsc{release}(P \circ \tl{B}, C, \AR{A^b}{\phi}{\psi} )$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \EndIf \EndFor \State $\mathbf{return}\ \mathit{false}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{lemma} Algorithm \ref{alg:atlr-label2} terminates in at most $O(|\phi| \times |M|^{k+1})$ steps, where $k$ is the maximal value of the first resource bound in $\phi$. \end{lemma} \begin{proof} The algorithm for $\AO{A^b}{\phi}$ attempts to find an action which works (achieves $\phi$) from all states indistinguishable from $s$ by some agent in $A$. There are at most $|S|$ such states, and at most $|M|$ possible actions to try. In the worst case (when no action works in all states) we try every action in each state: $O(|M|^2)$ steps. As before, the algorithms for $\AU{A^b}{\phi}{\psi}$ and $\AR{A^b}{\phi}{\psi}$ are attempting to find a strategy of depth $\min_{a \in A} ({b_a}_1)$, but now from all indistinguishable states and satisfying additional constraints of uniformity. Considering all indistinguishable states adds an additional level (intuitively the root of the tree from which all indistinguishable initial states are reachable). 
Satisfying uniformity means having to backtrack to a successful subtree to try a different choice of actions even if the previous choice was successful (because the same choice does not work in an indistinguishable branch on another tree). In the worst case, we will consider all possible actions at each of the $O(\min_{a \in A}({b_a}_1))$ levels of the search tree. We repeat this for every subformula ($|\phi|$ many times). \end{proof} \begin{lemma} Algorithm \ref{alg:atlr-label2} is correct. \end{lemma} \begin{proof} We consider the cases of $\AO{A^b}{\phi}$, $\AU{A^b}{\phi}{\psi}$ and $\AR{A^b}{\phi}{\psi}$. The algorithm for $\AO{A^b}{\phi}$ places all states which are indistinguishable from the current state for one of the agents in $A$ in the open list $B$. This ensures that a successful strategy (a single action $\sigma$ which is $b$-consistent and achieves $\phi$) found in state $s$ will be placed in the closed list $C$, and in states $s' \sim_a s$ (indistinguishable for the agent $a$) the same action $\sigma_a$ will be attempted as part of the joint action $\sigma'$ by $A$. If this does not result in a successful strategy in $s'$, the algorithm will backtrack and try another action for $a$ in $s$. The algorithm returns true if and only if, in all indistinguishable states, an action by $A$ is found which always results in a state satisfying $\phi$, is under the resource bound, and has the same $a$th component in all $\sim_a$ states. This guarantees that the algorithm has found a one-step strategy to satisfy $\phi$. In order to extend it to an arbitrary uniform strategy, we can simply select the first action in $D_a(s')$ for all sequences ending in $s'$ and all $a \in A$. This will ensure that all $a$-indistinguishable sequences are assigned the same action. $\AU{A^b}{\phi}{\psi}$ implements the same idea as above, but with respect to multi-step strategies. 
Every time an action is selected on some path $p$, if $p' \sim_a p$ is in the closed list $C$, then $a$'s action after $p$ is selected to be the same as that selected after $p'$. If this is not successful, then eventually we will fail back to $p'$ and try a different action there. If the algorithm returns true, then we are guaranteed that the strategy contained in $C$ is uniform. We can easily extend the strategy contained in $C$ to a uniform strategy, since we do not need to achieve any objectives after satisfying $\psi$. $\AR{A^b}{\phi}{\psi}$ is similar to $\AU{A^b}{\phi}{\psi}$, but now we have the additional complication that actions selected to `run out of resources' need to be in the closed list, since they should also satisfy uniformity. This is ensured on lines 11--14 of the algorithm (we add a path ending with an `expensive' action $\sigma$ and an arbitrary successor $n'$ to the closed list). \end{proof} \begin{theorem} The model-checking problem for RB$\pm$ATL\xspaced with imperfect information and perfect recall is decidable in EXPSPACE if the resource bounds are represented in unary. \end{theorem} \begin{proof} In addition to the space required for the stack, we also need to store the closed list $C$. In the worst case, the closed list will contain all possible sequences of states of length at most $\min_{a \in A}({b_a}_1)$, which is $O(|S|^k)$, where $k$ is the maximal value of the first resource bound in $\phi$. \end{proof} \section{\ensuremath{\mathsf{RAL}}\xspaced} \label{sec:rald} In this section we define a diminishing resource version of \emph{resource agent logic} (\ensuremath{\mathsf{RAL}}\xspaced) following \cite{Bulling/Farwer:10a}, with modifications required for our setting (e.g.,\xspace no infinite endowments). The logic is defined over a set of agents $Agt$, a set of resource types $Res$, and a set of propositional symbols $\Pi$. 
An \emph{endowment (function)} $\eta : Agt \times Res \rightarrow \nat$ assigns resources to agents; $\eta_a(r)=\eta(a,r)$ is the amount of resource agent $a$ has of resource type $r$. $\ensuremath{\mathsf{En}}$ denotes the set of all possible endowments. The formulas of RAL$^{\ensuremath{\#}}$ are defined by: \begin{center}$\phi, \psi::= p \mid \neg \phi \mid \phi \wedge \phi \mid \coopdown[B]{A}\ensuremath{\!\bigcirc\!} \phi \mid \coop{A}{}_B^\eta\ensuremath{\!\bigcirc\!} \phi \mid \coopdown[B]{A}\phi \ensuremath{\,\mathcal{U}} \psi \mid \coop{A}{}_B^\eta\phi \ensuremath{\,\mathcal{U}} \psi \mid \coopdown[B]{A}\phi \ensuremath{\mathcal{R}} \psi \mid \coop{A}{}_B^\eta\phi \ensuremath{\mathcal{R}} \psi$ \end{center} where $p \in\Pi$ is a proposition, $A, B\subseteq Agt$ are sets of agents, and $\eta$ is an endowment. $A$ are called the proponents, and $B$ the (resource-bounded) opponents. Unlike in RB$\pm$ATL\xspaced, in \ensuremath{\mathsf{RAL}}\xspaced there are two types of cooperation modalities, $\coopdown[B]{A}$ and $\coop{A}{}_B^\eta$. In both types of cooperation modality, the actions performed by agents in $A\cup B$ consume and produce resources (actions by agents in $Agt \setminus (A \cup B)$ do not change their resource endowment). The meaning of $\coop{A}{}_B^\eta\varphi$ is otherwise the same as in RB$\pm$ATL\xspaced. The formula $\coopdown[B]{A}\varphi$ on the other hand requires that the strategy uses the resources \emph{currently} available to the agents. The models of RAL$^{\ensuremath{\#}}$ are resource-bounded concurrent game structures with diminishing resource (RB-CGS$^\ensuremath{\#}$). Strategies are also defined as for RB$\pm$ATL\xspaced. However, to evaluate formulas with a down arrow, such as $\coopdown[B]{A}\ensuremath{\!\bigcirc\!} \varphi$, we need the notion of \emph{resource-extended computations}. 
A \emph{resource-extended} computation $\lambda \in (S\times\ensuremath{\mathsf{En}})^+$ is a non-empty sequence over $S\times\ensuremath{\mathsf{En}}$ such that the restriction to states (the first component), denoted by $\lambda|_S$, is a path in the underlying model. The projection of $\lambda$ to the second component of each element in the sequence is denoted by $\lambda|_\ensuremath{\mathsf{En}}$. A \emph{$(\eta,F_A,B)$-computation} is a resource-extended computation $\lambda$ where for all $i=1,\ldots$ with $\lambda[i]:=(s_i,\eta^i)$ there is an action profile $\sigma \in D(\lambda|_\ensuremath{S}\xspace[i])$ such that: \begin{enumerate} \item $\eta^1 = \ensuremath{\eta}\xspace$ ($\ensuremath{\eta}\xspace$ describes the initial resource distribution); \item $F_A(\lambda|_\ensuremath{S}\xspace[1,i])= \sigma_A$ ($A$ follow their strategy); \item $\lambda|_S[i+1]=\delta(\lambda|_S[i],\sigma)$ (transition according to $\sigma$); \item for all $a \in A\cup B$ and $r\in Res$: $\eta^{i}_a(r) \geq \mathsf{cons}_r(\lambda|_S[i],\sigma_a)$ (each agent has enough resources to perform its action); \item for all $a \in A\cup B$ and $r\in Res$: $\eta^{i+1}_a(r) = \eta^i_a(r) + \mathsf{prod}_r(\lambda|_S[i], \sigma_a) - \mathsf{cons}_r(\lambda|_S[i], \sigma_a)$ (resources are updated); \item for all $a \in Agt \setminus (A \cup B)$ and $r\in Res$: $\eta^{i+1}_a(r) = \eta^i_a(r)$ (the resources of agents not in $A\cup B$ do not change). \end{enumerate} The \emph{$(\eta,B)$-outcome} of a strategy $F_{A}$ in $s$, $\rhooutcome{s}{\eta}{F_{A},B}$, is defined as the set of all $(\eta,F_{A},B)$-computations starting in $s$. Truth is defined over a model $\ensuremath{M}\xspace$, a state $s \in \ensuremath{S}\xspace$, and an endowment $\eta$. 
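Conditions 4 to 6 amount to the following per-agent endowment update, sketched in Python (an illustration; endowments are dictionaries from agents to resource vectors, and `cost` is a hypothetical function giving the cost vector of each action):

```python
def step(eta, joint, A, B, cost):
    """One endowment update along a resource-extended computation: agents
    in the union of A and B pay cons and gain prod; all others keep
    their endowment unchanged."""
    eta2 = {}
    for a, res in eta.items():
        if a in A or a in B:
            c = cost(joint[a])
            cons_v = [abs(min(0, x)) for x in c]
            prod_v = [max(0, x) for x in c]
            # Condition 4: the agent must be able to afford its action.
            assert all(r >= cn for r, cn in zip(res, cons_v))
            # Condition 5: update by production minus consumption.
            eta2[a] = tuple(r - cn + p
                            for r, cn, p in zip(res, cons_v, prod_v))
        else:
            eta2[a] = res   # condition 6: agents outside A and B unchanged
    return eta2

eta = step({1: (2, 0), 2: (5, 5)}, {1: 'x', 2: 'y'},
           {1}, set(), lambda act: (-1, 1))
assert eta == {1: (1, 1), 2: (5, 5)}
```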
The semantics is given by the satisfaction relation $\ensuremath{M}\xspacesR$ where the cases for propositions, negation and conjunction are standard and omitted: \begin{description} \item[$\ensuremath{M}\xspace,s,\ensuremath{\eta}\xspace \ensuremath{M}\xspacesR{\coopdown[B]{A}}\ensuremath{\!\bigcirc\!} \varphi$] iff there is a strategy $F_A$ for $A$ such that for all $\onepath\in out(s,\ensuremath{\eta}\xspace,F_A,B)$, $|\onepath| > 1$ and $\ensuremath{M}\xspace,\onepath|_S[2],$ $\onepath|_\ensuremath{\mathsf{En}}[2] \ensuremath{M}\xspacesR \varphi$ \item[$\ensuremath{M}\xspace,s,\ensuremath{\eta}\xspace \ensuremath{M}\xspacesR{\coop{A}}{}_B^\zeta \ensuremath{\!\bigcirc\!} \varphi$] iff there is a strategy $F_A$ for $A$ such that for all $\onepath\in out(s,\zeta,F_A,B)$, $|\onepath| > 1$ and $\ensuremath{M}\xspace,\onepath|_S[2],$ $\onepath|_\ensuremath{\mathsf{En}}[2] \ensuremath{M}\xspacesR \varphi$ \item[$\ensuremath{M}\xspace,s,\ensuremath{\eta}\xspace \ensuremath{M}\xspacesR{\coopdown[B]{A}}\varphi\ensuremath{\,\mathcal{U}}\psi$ ] iff there is a strategy $F_A$ for $A$ such that for all $\onepath\in out(s,\ensuremath{\eta}\xspace,F_A,B)$, there exists $i$ with $1 \leq i \leq |\onepath| $ and $\ensuremath{M}\xspace,\onepath|_S[i],\onepath|_\ensuremath{\mathsf{En}}[i] \ensuremath{M}\xspacesR \psi$ and for all $j$ with $1 \leq j < i$, $\ensuremath{M}\xspace,\onepath|_S[j],\onepath|_\ensuremath{\mathsf{En}}[j] \ensuremath{M}\xspacesR \varphi$ \item[$\ensuremath{M}\xspace,s,\ensuremath{\eta}\xspace \ensuremath{M}\xspacesR{\coop{A}}{}_B^\zeta \varphi\ensuremath{\,\mathcal{U}}\psi$] iff there is a strategy $F_A$ for $A$ such that for all $\onepath\in out(s,\zeta,F_A,B)$, there exists $i$ with $1 \leq i \leq |\onepath| $ and $\ensuremath{M}\xspace,\onepath|_S[i],\onepath|_\ensuremath{\mathsf{En}}[i] \ensuremath{M}\xspacesR \psi$ and for all $j$ with $1 \leq j < i$, $\ensuremath{M}\xspace,\onepath|_S[j],\onepath|_\ensuremath{\mathsf{En}}[j] \ensuremath{M}\xspacesR 
\varphi$ \item[$\ensuremath{M}\xspace,s,\ensuremath{\eta}\xspace \models{\coopdown[B]{A}}\varphi\ensuremath{\mathcal{R}} \psi$ ] iff there is a strategy $F_A$ for $A$ such that for all $\onepath\in out(s,\ensuremath{\eta}\xspace,F_A,B)$, either there exists $i$ with $1 \leq i \leq |\onepath| $ such that $\ensuremath{M}\xspace,\onepath|_S[i],\onepath|_\ensuremath{\mathsf{En}}[i] \models \psi \wedge \varphi$ and for all $j$ with $1 \leq j < i$, $\ensuremath{M}\xspace,\onepath|_S[j],\onepath|_\ensuremath{\mathsf{En}}[j] \models \psi$; or, for all $j$ with $1 \leq j \leq |\onepath| $, $\ensuremath{M}\xspace,\onepath|_S[j],\onepath|_\ensuremath{\mathsf{En}}[j] \models \psi$ \item[$\ensuremath{M}\xspace,s,\ensuremath{\eta}\xspace \models{\coop{A}}{}_B^\zeta \varphi\ensuremath{\mathcal{R}} \psi$] iff there is a strategy $F_A$ for $A$ such that for all $\onepath\in out(s,\zeta,F_A,B)$, either there exists $i$ with $1 \leq i \leq |\onepath| $ such that $\ensuremath{M}\xspace,\onepath|_S[i],\onepath|_\ensuremath{\mathsf{En}}[i] \models \psi \wedge \varphi$ and for all $j$ with $1 \leq j < i$, $\ensuremath{M}\xspace,\onepath|_S[j],\onepath|_\ensuremath{\mathsf{En}}[j] \models \psi$; or, for all $j$ with $1 \leq j \leq |\onepath| $, $\ensuremath{M}\xspace,\onepath|_S[j],\onepath|_\ensuremath{\mathsf{En}}[j] \models \psi$ \end{description} The model checking algorithms for $\mathsf{RAL}_d$ are similar to those given for RB$\pm$ATL$_d$ in Section \ref{sec:atlr-spec} in that they proceed by depth-first and-or search.
However, in this case, the nodes in the search tree also include information about the current proponent and (resource-bounded) opponent coalitions, and the functions that construct nodes are redefined as $\mathit{node}_0(s,b,A,B)$ and $\mathit{node}(n, \sigma, s',A,B)$, where $A$ are the proponents and $B$ are the resource-bounded opponents. The model checking algorithm for $\mathsf{RAL}_d$ is shown in Algorithm \ref{alg:rald-label}; it takes as input a model $\ensuremath{M}\xspace$, a formula $\phi$, and an initial endowment $\eta$, and labels the set of states $[\phi]_\ensuremath{M}\xspace^\ensuremath{\eta}\xspace$, where $[\phi]_\ensuremath{M}\xspace^\ensuremath{\eta}\xspace = \{s\ |\ \ensuremath{M}\xspace , s, \ensuremath{\eta}\xspace \models \phi\}$ is the set of states satisfying $\phi$. \textsc{$\mathsf{RAL}_d$-label} simply calls the function \textsc{strategy} to label states with $\phi$. $\mathit{pr}$ and $\mathit{op}$ are functions that return the proponents $A \subseteq Agt$ and the resource-bounded opponents $B \subseteq Agt$, respectively, if $\phi$ is of the form $\coop[B]{A}^* \ensuremath{\!\bigcirc\!} \psi$, $\coop[B]{A}^* \psi_1\ensuremath{\,\mathcal{U}}\psi_2$ or $\coop[B]{A}^* \psi_1 \ensuremath{\mathcal{R}} \psi_2$, where $*$ is either $\downarrow$ or an endowment, and $\emptyset$ otherwise. \begin{algorithm} \caption{Labelling $\phi$ } \label{alg:rald-label} \begin{algorithmic}[1]\small \Procedure{$\mathsf{RAL}_d$-label}{$\ensuremath{M}\xspace, \phi, \eta $} \State $[\phi]_{\ensuremath{M}\xspace}^\ensuremath{\eta}\xspace \gets \{\ q \mid q \in S\ \wedge$ \StatexIndent[5] $\Call{strategy}{node_0(q,\ensuremath{\eta}\xspace,\mathit{pr}(\phi), \mathit{op}(\phi)), \phi}\}$ \EndProcedure \end{algorithmic} \end{algorithm} The function \textsc{strategy} is shown in Algorithm \ref{alg:rald-strategy} and proceeds by depth-first and-or search.
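This depth-first and-or search can be sketched generically (a hypothetical Python fragment; all names are ours): an existential choice over the proponents' affordable joint actions, followed by a universal check over all compatible full action profiles.

```python
# Sketch of the and-or step used by the temporal-operator functions:
# "or" over the proponents' affordable joint actions,
# "and" over every compatible full action profile.
def and_or_step(prop_actions, completions, successor, holds):
    """prop_actions: affordable joint actions of the proponents A;
    completions(a): full profiles extending a that the bounded
    opponents B can also pay for; successor(p): resulting state;
    holds(s): recursive check of the subformula in state s."""
    return any(
        all(holds(successor(p)) for p in completions(a))
        for a in prop_actions
    )
```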
We process each coalition modality in turn, starting from the outermost modality. The logical connectives are standard, and simply call \textsc{strategy} on the subformulas. Each temporal operator is handled by a separate function: \textsc{next} for $\ensuremath{\!\bigcirc\!} \psi$, \textsc{until} for $\phi \ensuremath{\,\mathcal{U}} \psi$, and \textsc{release} for $\phi \ensuremath{\mathcal{R}} \psi $. \begin{algorithm} \caption{Strategy} \label{alg:rald-strategy} \begin{algorithmic}[1]\small \Function{strategy}{$n, \phi$} \Case{$\phi = p \in \Pi$} \State $\mathbf{return}\ s(n) \in \pi(p)$ \EndCase \Case{$\phi = \neg \psi$} \State $\mathbf{return}\ \neg \Call{strategy}{node_0(s(n),e(n),pr(n),op(n)), \psi}$ \EndCase \Case{$\phi = \psi_1 \vee \psi_2$} \State $\mathbf{return}\ \Call{strategy}{node_0(s(n),e(n),pr(n),op(n)), \psi_1}\ \vee$ \StatexIndent[4] $\ \ \Call{strategy}{node_0(s(n),e(n),pr(n),op(n)), \psi_2}$ \EndCase \Case{$\phi = \coopdown[B]{A}\ensuremath{\!\bigcirc\!}\psi$} \State $\mathbf{return}\ \Call{next}{node_0(s(n),e(n),A,B), \phi}$ \EndCase \Case{$\phi = \coop[B]{A}^\zeta\ensuremath{\!\bigcirc\!}\psi$} \State $\mathbf{return}\ \Call{next}{node_0(s(n),\zeta,A,B), \phi}$ \EndCase \Case{$\phi = \coopdown[B]{A}\psi_1\ensuremath{\,\mathcal{U}}\psi_2$} \State $\mathbf{return}\ \Call{until}{node_0(s(n),e(n),A,B), \phi}$ \EndCase \Case{$\phi = \coop[B]{A}^\zeta\psi_1\ensuremath{\,\mathcal{U}}\psi_2$} \State $\mathbf{return}\ \Call{until}{node_0(s(n),\zeta,A,B), \phi}$ \EndCase \Case{$\phi = \coopdown[B]{A} \psi_1 \ensuremath{\mathcal{R}} \psi_2$} \State $\mathbf{return}\ \Call{release}{node_0(s(n),e(n),A,B), \phi}$ \EndCase \Case{$\phi = \coop[B]{A}^\zeta \psi_1 \ensuremath{\mathcal{R}} \psi_2$} \State $\mathbf{return}\ \Call{release}{node_0(s(n),\zeta,A,B), \phi}$ \EndCase \EndFunction \end{algorithmic} \end{algorithm} The function \textsc{next} for formulas of types $\coop[B]{A}^\downarrow \ensuremath{\!\bigcirc\!} \phi$ and $\coop[B]{A}^\zeta 
\ensuremath{\!\bigcirc\!} \phi$ is shown in Algorithm \ref{alg:rald-next} and is straightforward. We simply check whether there is an action of $A$ that is possible given the current endowment (lines 2--4) and such that, in all outcome states, $A$ has a strategy to enforce $\phi$ (lines 6--10). Note that the recursive call (line 8) is to \textsc{strategy}, so that the endowments for the new search are determined correctly both when $\phi$ specifies a fresh endowment and when it refers to the resources currently available to the agents (i.e., the $\downarrow$ case). \begin{algorithm} \caption{Next (both types of modalities) } \label{alg:rald-next} \begin{algorithmic}[1]\small \Function{next}{$n, \coop[B]{A}^* \ensuremath{\!\bigcirc\!} \phi$} \State $Act_{A} \gets \{ \sigma' \in D_{A} (s(n)) \mid \mathsf{cons}(\sigma') \leq e_{A}(n) \}$ \For{$\sigma' \in Act_{A}$} \State $Act_{Agt} \gets \{\sigma \in D(s(n))\mid \sigma_{A} = \sigma' \wedge\ $ \StatexIndent[9] $\ \ \mathsf{cons}(\sigma_B) \leq e_{B}(n) \}$ \State $\mathit{strat} \gets \mathit{true}$ \For{$\sigma \in Act_{Agt}$} \State $s' \gets \delta(s(n),\sigma)$ \State $\mathit{strat} \gets \mathit{strat}\ \wedge\ \Call{strategy}{node(n, \sigma, s', A, B), \phi }$ \EndFor \If{$ \mathit{strat}$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \EndFor \State $\mathbf{return}\ \mathit{false}$ \EndFunction \end{algorithmic} \end{algorithm} The function \textsc{until} for formulas of types $\coop[B]{A}^\downarrow \phi \ensuremath{\,\mathcal{U}}\psi$ and $\coop[B]{A}^\zeta \phi\ensuremath{\,\mathcal{U}}\psi$ is shown in Algorithm \ref{alg:rald-until}. If $A$ have a strategy to enforce $\psi$, we return true (lines 2--3). We then check if it is possible to enforce $\phi$ in $n$, and terminate the search with false if it is not (lines 4--5). Otherwise the search continues. Each action available at $s(n)$ is considered in turn (lines 6--14).
For each action $\sigma' \in Act_A$, we check whether a recursive call of the algorithm returns true in all outcome states $s'$ of $\sigma'$ (i.e., $\sigma'$ is part of a successful strategy). If such a $\sigma'$ is found, the algorithm returns true. Otherwise the algorithm returns false. The function \textsc{release} for formulas of types $\coop[B]{A}^\downarrow \phi \ensuremath{\mathcal{R}}\psi$ and $\coop[B]{A}^\zeta \phi\ensuremath{\mathcal{R}}\psi$ is similar (see Algorithm \ref{alg:rald-release}). \begin{algorithm} \caption{Until (both types of modalities)} \label{alg:rald-until} \begin{algorithmic}[1]\small \Function{until}{$n, \coop[B]{A}^* \phi \ensuremath{\,\mathcal{U}} \psi $} \If{$\Call{strategy}{n, \psi}$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \If{$\neg\, \Call{strategy}{n, \phi}$} \State $\mathbf{return}\ \mathit{false}$ \EndIf \State $Act_{A} \gets \{ \sigma' \in D_{A}(s(n)) \mid \mathsf{cons}(\sigma') \leq e_A(n) \}$ \For{$\sigma' \in Act_{A} $} \State $Act_{Agt} \gets \{\sigma \in D(s(n))\mid \sigma_A = \sigma' \wedge\ $ \StatexIndent[9.5] $ \mathsf{cons}(\sigma_B) \leq e_{B}(n) \}$ \State $\mathit{strat} \gets \mathit{true}$ \For{$\sigma \in Act_{Agt}$} \State $s' \gets \delta(s(n),\sigma)$ \State $\mathit{strat} \gets \mathit{strat}\ \wedge \ $ \StatexIndent[5.5] $\Call{until}{node(n,\sigma,s', A,B), \coop[B]{A}^* \phi \ensuremath{\,\mathcal{U}} \psi}$ \EndFor \If{$ \mathit{strat}$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \EndFor \State $\mathbf{return}\ \mathit{false}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Release (both types of modalities)} \label{alg:rald-release} \begin{algorithmic}[1]\small \Function{release}{$n, \coop[B]{A}^* \phi \ensuremath{\mathcal{R}} \psi $} \If{$\neg \Call{strategy}{n, \psi}$} \State $\mathbf{return}\ \mathit{false}$ \EndIf \If{$\Call{strategy}{n, \phi}$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \If{$\exists\, \sigma \in D_{A} \text{\ s.t.\ } 
\mathsf{cons}(s(n),\sigma) \not\leq e_{A}(n)$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \State $Act_{A} \gets \{ \sigma' \in D_{A}(s(n)) \mid \mathsf{cons}(\sigma') \leq e_A(n) \}$ \For{$\sigma' \in Act_{A} $} \State $Act_{Agt} \gets \{\sigma \in D(s(n))\mid \sigma_A = \sigma' \wedge\ $ \StatexIndent[9.5] $ \mathsf{cons}(\sigma_B) \leq e_{B}(n) \}$ \State $\mathit{strat} \gets \mathit{true}$ \For{$\sigma \in Act_{Agt}$} \State $s' \gets \delta(s(n),\sigma)$ \State $\mathit{strat} \gets \mathit{strat}\ \wedge \ $ \StatexIndent[5.5] $\Call{release}{node(n,\sigma,s', A,B), \coop[B]{A}^* \phi \ensuremath{\mathcal{R}} \psi}$ \EndFor \If{$ \mathit{strat}$} \State $\mathbf{return}\ \mathit{true}$ \EndIf \EndFor \State $\mathbf{return}\ \mathit{false}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{lemma} Algorithm \ref{alg:rald-strategy} terminates in $O(|M|^{|\phi|})$ steps, where the bounds in $\phi$ are written in unary. \end{lemma} \begin{proof} The only difference between the RAL$^\ensuremath{\#}$ algorithms and the algorithms in Section \ref{sec:atlr-spec} is the fact that in the case of RAL$^\ensuremath{\#}$ we cannot label states with subformulas. For example, we cannot find states satisfying $\coopdown[B]{A}\phi \ensuremath{\,\mathcal{U}} \psi$ because we do not know which endowment the $\downarrow$ refers to. When verifying a formula with non-propositional subformulas, for example $\coopdown[B]{A}\phi \ensuremath{\,\mathcal{U}} \psi$ again, where $\phi$ and $\psi$ are not propositional, we have to make recursive calls to check whether the current state satisfies $\phi$ or $\psi$ \emph{with the current endowment}. Hence the algorithm calls $\Call{strategy}{n, \phi}$ instead of checking whether $s(n) \in [\phi]_M$.
However, the recursive calls are always to formulas of lower complexity. It is easy to show that in the propositional case they terminate, and, under the inductive assumption that the calls for formulas of lower complexity terminate, that the calls for $\coop[B]{A}^* \ensuremath{\!\bigcirc\!} \phi$, $\coop[B]{A}^* \phi \ensuremath{\,\mathcal{U}} \psi$ and $\coop[B]{A}^* \phi \ensuremath{\mathcal{R}} \psi$ terminate as well. The algorithm again performs depth-first and-or search, but now up to the depth determined by the nesting of modalities in $\phi$: we need to take the sum of the minimal bounds for the first resource occurring in the endowment of some resource-bounded agent in nested formulas to find the maximal depth of the tree. We can ignore $\downarrow$ endowments because they will use the amount of the first resource remaining from the outer modalities. \end{proof} \begin{lemma} Algorithm \ref{alg:rald-strategy} is correct. \end{lemma} \begin{proof} Assuming that calls to $\Call{strategy}{n,\phi}$ terminate and have the same effect as checking whether $s(n) \in [\phi]_M$, the algorithms are the same as for RB$\pm$ATL$_d$. The only small difference is that we remember the current endowment and pass it to the $\downarrow$ modalities as if it were an explicit bound $b$ in RB$\pm$ATL$_d$. \end{proof} \begin{theorem} The model-checking problem for RAL$^\ensuremath{\#}$ is decidable in PSPACE (if resource bounds are written in unary). \end{theorem} \begin{proof} From the two lemmas above it follows that Algorithm \ref{alg:rald-strategy} is a terminating and correct model-checking algorithm for RAL$^\ensuremath{\#}$. The space it uses on the stack is polynomial in the size of the formula (it is the sum of nested resource bounds on the first resource for the minimally endowed agents). After at most $O(k)$ steps, where $k$ is the maximal value of the first resource bound in $\phi$, the endowment becomes negative for one of the agents, and the algorithm terminates.
\end{proof} \section{Conclusion} In this paper we studied resource logics over models with a diminishing resource. We gave new and simple model-checking algorithms for the versions of RB$\pm$ATL\xspace, RB$\pm$ATL$_{ir}$ and RAL with a diminishing resource. We believe that settings where one of the resources is always consumed are quite common, and our results may therefore be of practical interest. It was known that the model checking problem for RB$\pm$ATL\xspace is decidable, but our complexity result for RB$\pm$ATL$_d$ is new. Decidability of the model checking problem for RAL follows from a more general result on bounded models from \cite{Bulling/Farwer:10a}, but no model checking algorithm was given there. The model checking algorithm for RAL$^{\ensuremath{\#}}$ is different from the algorithm for the decidable fragment of RAL presented in \cite{Alechina//:17b} because it works for the full RAL rather than just for the positive fragment of proponent-restricted RAL considered there. \end{document}
\begin{document} \subjclass[2010]{Primary 39A13} \date{\today} \keywords{$q$-difference equation, Borel-Laplace transforms, Fourier transforms.} \thanks{This project has received funding from the ANR de rerum natura ANR-19-CE40-0018. } \title{On the product of two $1$-$q$-summable series} \begin{abstract} In this paper we consider a $q$-analog of the Borel-Laplace summation process defined by Marotte and the second author, and consider two series solutions of linear $q$-difference equations with slopes $0$ and $1$. The latter are $q$-summable, and we prove that the product of the two series is $q$-(multi)summable and that its $q$-sum is the product of the $q$-sums of the two series. This is a first step towards the conjecture that the $q$-summation process is a morphism of rings. We prove that the $q$-summation does not induce a morphism of fields by showing that if the inverse of the $q$-Euler series is $q$-summable, then its $q$-sum is not the inverse of the $q$-sum of the $q$-Euler series. \end{abstract} \tableofcontents \pagebreak \section*{Introduction}\label{sec:primo} In this paper, we are interested in the algebraic properties of a $q$-analogue of the Borel-Laplace summation process defined in \cite{marotte2000multisommabilite}. Before going further in the $q$-world, let us give a short overview of the theory in the setting of linear differential equations. We refer for instance to \cite{balser2006divergent,balser2008formal} for a complete description of the theory. \par Consider a meromorphic linear differential equation. We have the coexistence of divergent formal power series solutions and integral solutions. For instance, the Euler equation $x^{2}\partial_x y+y=x$ admits the Euler series $f:=\displaystyle \sum_{n=0}^{\infty} (-1)^{n}n!\, x^{n+1}$ as a formal power series solution.
On the other hand, there are integral solutions, such as $$\mathcal{S}^{d}(f)(x):=\displaystyle\int_{0}^{\infty e^{\mathbf{i}d}}\frac{e^{-\xi/x}}{1+\xi}d\xi .$$ The path of integration has to be understood as the half line in $\CC$ of complex numbers of argument $d\in \RR$. The latter integral is well defined when the path of integration does not pass through the pole $\xi=-1$, that is, when $d\not \equiv \pi [2\pi]$. We may prove that the function is analytic on the sector $\arg(x)\in (d-\pi/2,d+\pi/2)$ and is asymptotic to $f$ in a certain sense. More generally, given a formal power series solution of a linear differential equation with coefficients that are germs of meromorphic functions at $0$, we may, for convenient $d\in \RR$, construct an integral solution using Borel and Laplace transformations in the direction $d$. The map $f\mapsto \mathcal{S}^{d}(f)$ that sends a formal power series to the integral solution induces a morphism of fields. Moreover, it leaves the germs of meromorphic functions at $0$ invariant and commutes with the derivation, that is, $\mathcal{S}^{d}(\partial_x f)=\partial_x \mathcal{S}^{d}(f)$. \\ \par Let us now consider the case of $q$-difference equations. Let us fix $q>1$, define the $q$-difference operator $\sigma_{q} y(x):= y(qx)$, and consider the $q$-Euler equation $x\sigma_{q} y+y=1$. It admits the $q$-Euler series $g:=\displaystyle \sum_{n=0}^{\infty} (-1)^{n}q^{n(n-1)/2}x^{n}$ as a divergent formal power series solution. An integral solution is given by $$\mathcal{S}_{q}^{d}(g):= \frac{1}{\sqrt{2\pi\log (q)}}\,\displaystyle\int_0^{\infty e^{\mathbf{i}d}}\frac{e^{-(\log(\frac{x}{\sqrt{q}\xi}))^2/(2\log(q))}}{1+\xi}\,\,\frac{d\xi}{\xi}, \quad d\not \equiv \pi [2\pi].$$ We may prove that the latter is meromorphic on the Riemann surface of the logarithm and defines a multivalued complex function.
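For the reader's convenience, one checks directly that $g$ solves the $q$-Euler equation: since $q^{n(n-1)/2}\,q^{n}=q^{(n+1)n/2}$, after the shift $m=n+1$ in the first sum all terms with $m\geq 1$ cancel,

```latex
x\sigma_{q}g+g
 =\sum_{n=0}^{\infty}(-1)^{n}q^{n(n+1)/2}x^{n+1}
  +\sum_{n=0}^{\infty}(-1)^{n}q^{n(n-1)/2}x^{n}
 =\sum_{m=1}^{\infty}(-1)^{m-1}q^{m(m-1)/2}x^{m}
  +\sum_{m=0}^{\infty}(-1)^{m}q^{m(m-1)/2}x^{m}
 =1.
```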
Unfortunately, contrary to the differential case, there are many different $q$-summation processes, due to the nonuniqueness of the $q$-analogue of the exponential. Each of these $q$-sums has advantages in certain contexts; see for instance the work of the two authors, Ramis, Sauloy, and, more exhaustively, \cite{abdi1960q,abdi1964certain, ramis1992growth,zhang1999developpements, marotte2000multisommabilite,zhang2000transformations, zhang2001fonction,ramis2002developpement, zhang2002sommation, zhang2003fonctions,zhang2004solutions,di2009q, ramis2013local,dreyfus2015building, dreyfus2016q}. In this paper, we use a $q$-summation process introduced in \cite{marotte2000multisommabilite}. It was proved there that to $f$, a formal power series solution of a linear meromorphic $q$-difference equation, we may associate, using $q$-analogues of the Borel-Laplace summation in the direction $d\in \RR$ for convenient $d$, a multivalued meromorphic solution $\mathcal{S}_{q}^{d}(f)$. Let us denote by $\CC[[x]]_{q}^d$ the space of series on which $f\mapsto \mathcal{S}_{q}^{d}(f)$ is well defined. The authors proved that the $q$-summation process $f\mapsto \mathcal{S}_{q}^{d}(f)$ satisfies the following algebraic properties; see Proposition~\ref{prop1} below. \begin{itemize} \item If $f_1,f_2\in \CC[[x]]_{q}^d$, then $f_1+f_2\in \CC[[x]]_{q}^d$ and $\mathcal{S}_{q}^{d}(f_{1}+ f_{2})=\mathcal{S}_{q}^{d}(f_{1})+ \mathcal{S}_{q}^{d}(f_{2})$; \item If $f\in \CC[[x]]_{q}^d$, then $\sigma_{q}(f)\in \CC[[x]]_{q}^d$ and $\mathcal{S}_{q}^{d}(\sigma_{q}(f))=\sigma_{q}\left(\mathcal{S}_{q}^{d}(f)\right)$; \item For all convergent series $f$, if $g\in \CC[[x]]_{q}^d$, then $fg\in \CC[[x]]_{q}^d$ and $\mathcal{S}_{q}^{d}(fg)=f\mathcal{S}_{q}^{d}(g)$. \end{itemize} In this paper we prove that, under certain assumptions, the $q$-summation process commutes with the product; see Theorem~\ref{thm4} for a precise statement.
\begin{thmintro} Let $f_1,f_{2}$ be series that are solutions of linear $q$-difference equations with slopes $0$ and $1$. If $f_1$ and $f_2$ belong to $\CC[[x]]_{q}^d$, then $f_1 f_2\in \CC[[x]]_{q}^d$ and $$\mathcal{S}_{q}^{d}(f_{1} f_{2})=\mathcal{S}_{q}^{d}(f_{1})\mathcal{S}_{q}^{d}(f_{2}).$$ \end{thmintro} We conjecture that $f\mapsto \mathcal{S}_{q}^{d}(f)$ defines a morphism of rings, and this result is a first step in that direction. Proving the latter conjecture would allow us to define $q$-analogues of the Stokes operators in an integral way, as in the differential case, and to prove a $q$-analogue of the Ramis density theorem. Note that this question has been considered from another point of view in \cite{ramis2013local}, and the comparison of the two approaches would be interesting. \par A natural question is whether the $q$-summation process could define a morphism of fields. We answer this question negatively in Section~\ref{sec5}, where the inverse of the $q$-Euler series is considered. We prove that if its $q$-sum is defined, then it is not the inverse of the $q$-sum of the $q$-Euler series. \par The paper is organized as follows. In Section \ref{subsection:notation1}, we introduce the $q$-analogues of the Borel and Laplace transformations and prove some of their basic properties. We also introduce the space of $q$-multisummable series in the direction $d$, that is, the set of series on which the map $f\mapsto \mathcal{S}_{q}^{d}(f)$ is well defined. Examples of such series are the series solutions of linear $q$-difference equations. Section~\ref{section:Euler} is devoted to the proof of the product theorem. This is done by proving the result for series that are variants of the $q$-Euler series, using the fact that series solutions of linear $q$-difference equations with slopes $0$ and $1$ admit a certain decomposition into variants of the $q$-Euler series, and finally using the algebraic properties already known to conclude.
In Section \ref{sec5}, we study the inverse of the $q$-Euler series and prove that, if it is $q$-multisummable, then its $q$-sum cannot be the inverse of the $q$-sum of the $q$-Euler series. \section{Notations and some preliminary results}\label{subsection:notation1} This section is devoted to reviewing the $q$-summation theory developed in our previous papers \cite{zhang1999developpements, marotte2000multisommabilite}. Taking inspiration from a Phragmén–Lindelöf principle stated in \cite{fruchard1999remarques} for the space of $q$-summable functions, we shall build a $q$-Borel-Laplace summation process using two functional spaces (or rather sheaves) that will be denoted by $\OO_{q;1}^d$ and $\EE_{q;1}^d$; see Proposition \ref{prop:BLd}. In most cases, we shall directly refer to the above-mentioned works \cite{zhang1999developpements, marotte2000multisommabilite}, unless we believe additional precision is necessary. \subsection{Formal transformations} As usual, we will denote by $\CC[[x]]$ the $\CC$-vector space of all formal power series in the variable $x$ with coefficients in $\CC$. Along the lines of the classical Borel-Laplace summation theory (see, for example, \cite{balser2006divergent}), we first recall the following couple $(\hat {\mathcal B}_{q;1},\hat\cL_{q;1})$ of formal $q$-Borel and $q$-Laplace transforms, which are isomorphisms of $\CC$-vector spaces: \begin{equation} \label{equation:formalq-Borel} \begin{array}{llll} \hat{\mathcal B}_{q;1}:& \CC[[x]]&\to&\CC[[\xi]] \\ & \displaystyle \sum_{n\ge 0}a_n\,x^n&\displaystyle\mapsto&\displaystyle\sum_{n\ge 0}a_n\,q^{-n(n-1)/2}\,\xi^n \end{array} \end{equation} and \begin{equation} \label{equation:formalq-Laplace} \begin{array}{llll} \hat\cL_{q;1}:& \CC[[\xi]]&\to&\CC[[x]] \\ & \displaystyle \sum_{n\ge 0}a_n\,\xi^n&\displaystyle\mapsto&\displaystyle\sum_{n\ge 0}a_n\,q^{n(n-1)/2}\,x^n\,.
\end{array} \end{equation} By direct computations, one can find the following identities for $j$, $m\in\ZZ$: \begin{equation} \label{cBcL}\hat{\mathcal B}_{q;1}\,x^{j}\,\sigma_{q}^{m}=q^{-j(j-1)/2}\xi^{j}\,\sigma_{q}^{m-j}\hat{\mathcal B}_{q;1},\quad \hat\cL_{q;1}\,\xi^{j}\,\sigma_{q}^{m}=q^{j(j-1)/2}x^{j}\,\sigma_{q}^{m+j}\,\hat\cL_{q;1}\,. \end{equation} Let $\CC\{x\}$ be the subspace of all power series whose radius of convergence is strictly positive. Set $\CC[[x]]_{q;1}:=\hat \cL_{q;1}\left(\CC\{\xi\}\right)$, the set of $q$-Gevrey series of order one. \par Following \cite{ramis1992growth}, we say that an entire function $\phi$ has $q$-exponential growth of order (at most) one at infinity if, for some (or any) $R>0$, one can find $C$, $A>0$ such that, for every $\xi\in\CC$, $$ |\xi|\ge R\quad \Longrightarrow\quad |\phi(\xi)|\le C\,e^{(\log|A\xi|)^2/(2\log(q))}\,. $$ The ring of such functions is denoted by $\EE_{q;1}$. \par Let us denote by $\tilde\CC^*$ the Riemann surface of the logarithm. Given $r\in (0,\infty)$, let us denote by $\partial^+\widetilde D_r$ the path parameterized by $\left\{ \begin{array}{lll}\RR&\rightarrow & \widetilde\CC^*\\t&\mapsto &rq^{\mathbf{i}t}\end{array} \right.$.
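As a concrete sanity check of \eqref{equation:formalq-Borel}, \eqref{equation:formalq-Laplace} and the $q$-Euler example from the introduction, the formal transforms can be tested on truncated series with exact arithmetic (a sketch with an arbitrary fixed rational $q>1$; the function names are ours):

```python
from fractions import Fraction

q = Fraction(2)  # any fixed q > 1 works for this formal check

def q_borel(coeffs):
    # formal q-Borel of order 1: a_n -> a_n * q^{-n(n-1)/2}
    return [a / q**(n*(n-1)//2) for n, a in enumerate(coeffs)]

def q_laplace(coeffs):
    # formal q-Laplace of order 1: a_n -> a_n * q^{n(n-1)/2}
    return [a * q**(n*(n-1)//2) for n, a in enumerate(coeffs)]

# truncated q-Euler series g = sum (-1)^n q^{n(n-1)/2} x^n
N = 8
g = [Fraction(-1)**n * q**(n*(n-1)//2) for n in range(N)]
```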
Define the linear maps ${\mathcal B}_{q;1}$ and $\cL_{q;1}^d$ as follows: \begin{equation} \label{equation:qBorel} \begin{array}{llll} {\mathcal B}_{q;1}: & \CC\{x\}&\rightarrow &\EE_{q;1}\\ &f&\mapsto&\frac{1}{\sqrt{2\pi\log (q)}\,\mathbf{i}}\,\displaystyle\int_{\partial^+\widetilde D_r}e^{(\log(\frac{x}{\sqrt{q}\xi}))^2/(2\log(q))}\,f(x)\,\frac{dx}{x} \end{array} \end{equation} and \begin{equation}\label{equation:qLaplace} \begin{array}{llll} \cL_{q;1}^d:&\EE_{q;1} &\rightarrow &\CC\{x\}\\ &\phi&\mapsto &\frac{1}{\sqrt{2\pi\log (q)}}\,\displaystyle\int_0^{\infty e^{\mathbf{i}d}}e^{-(\log(\frac{x}{\sqrt{q}\xi}))^2/(2\log(q))}\,\phi(\xi)\,\frac{d\xi}{\xi} \,. \end{array} \end{equation} In the above, $\log$ denotes the principal branch of the logarithm on the Riemann surface $\tilde\CC^*$, $r>0$ is chosen to be smaller than the radius of convergence of $f$, and $d\in \RR$ may be arbitrary. The integrals appearing in \eqref{equation:qBorel} and \eqref{equation:qLaplace} are related to the well-known Gauss integral ($a>0$, $b\in\RR$): $$ \int_{-\infty}^{+\infty}e^{-a(t+b)^2}\,dt=\sqrt{\frac{\pi}{a}}\,. $$ Using suitable changes of variables, it follows from the above that, for every integer $n\in\ZZ$ and every $r>0$: $$\int_{\partial^+\widetilde D_r} e^{(\log(x/\sqrt{q}))^2/(2\log(q))}\,x^n\,\frac{dx}{x}=\sqrt{2\pi\log(q)}\,\mathbf{i}\,q^{-n(n-1)/2} $$ and $$ \int_0^{+\infty}e^{-(\log(\sqrt{q}\,\xi))^2/(2\log(q))}\,\xi^n\,\frac{d\xi}{\xi} =\sqrt{2\pi\log(q)}\,q^{n(n-1)/2}\,.
$$ Thus, restricting to the convergent power series spaces $\CC\{x\}$ and $\EE_{q;1}$, the linear maps $\hat{\mathcal B}_{q;1}$ and $\hat\cL_{q;1}$ defined in \eqref{equation:formalq-Borel} and \eqref{equation:formalq-Laplace} coincide with ${\mathcal B}_{q;1}$ and $\cL^d_{q;1}$. \subsection{Analytic transformations}\label{subsection:notation2} From now on, we will fix a real $d\in\RR$, which will be identified with the direction of argument $d$ coming from the ``origin'' on the Riemann surface $\tilde\CC^*$, namely $(0,\infty e^{\mathbf{i} d})$. For simplicity, define $$\tilde D_{R}=\{x\in\tilde\CC^*: |x|<R\}\,,\quad V_{\varepsilon}^d=\left\{\xi\in\tilde\CC^*: |\arg (\xi)-d|<\varepsilon\right\}$$ for $R>0$ and $\varepsilon>0$. At the same time, the spaces $\CC\{x\}$ and $\EE_{q;1}$ are respectively extended into $\OO_{q;1}^d$ and $\EE_{q;1}^d$ as follows: \begin{enumerate} \item $\OO_{q;1}^d$ is composed of all analytic functions $f$ in some domain $\tilde D_{R}$ such that, for some suitable $\varepsilon>0$: $$ \sup_{d'\in(d-\varepsilon,d+\varepsilon)}\left(\sup_{(r,t)\in(0,R)\times\RR}\left|f(re^{\mathbf{i}(t+d')})\right|\,e^{-t^2/(2\log(q))}\right)<+\infty\,; $$ \item $\EE_{q;1}^d$ is composed of all analytic functions $\phi$ in some domain $V_\varepsilon^d$ such that, for some suitable $A>0$ and $R>0$: $$\max\left(\sup_{\xi\in V_\varepsilon^d,|\xi|\le R}|\phi(\xi)|,\sup_{\xi\in V_\varepsilon^d, |\xi|>R}|\phi(\xi)|e^{-(\log|A\xi|)^2/(2\log(q))}\right)<+\infty\,. $$ \end{enumerate} As the following proposition shows, the maps ${\mathcal B}_{q;1}$ and $\cL^{d}_{q;1}$ can be extended to $\OO_{q;1}^d$ and $\EE_{q;1}^d$, respectively.
\begin{prop} \label{prop:BLd} The maps ${\mathcal B}_{q;1}$ and $\cL_{q;1}^d$ defined by the integrals given in \eqref{equation:qBorel} and \eqref{equation:qLaplace} may be extended to $\OO_{q;1}^d$ and $\EE_{q;1}^d$, respectively, and are bijections that are inverse to each other: $$ \begin{array}{llll} {\mathcal B}_{q;1}: & \OO_{q;1}^{d}&\rightarrow &\EE_{q;1}^{d}\\ \cL^{d}_{q;1}: & \EE_{q;1}^{d}&\rightarrow &\OO_{q;1}^{d}.\end{array}$$ \end{prop} \begin{proof} By \cite{marotte2000multisommabilite}, Lemmas 1.3.1 and~1.3.4, the map ${\mathcal B}_{q;1}$ (resp. $\cL_{q;1}^d$) is well defined on $\OO_{q;1}^d$ (resp. $\EE_{q;1}^d$), and ${\mathcal B}_{q;1}(\OO_{q;1}^d)\subset \EE_{q;1}^d$ (resp. ${\cL_{q;1}^d(\EE_{q;1}^d)\subset \OO_{q;1}^d}$). The fact that ${\mathcal B}_{q;1}\circ \cL_{q;1}^d$ is the identity on $\EE_{q;1}^d$ is \cite{marotte2000multisommabilite}, Theorem 1.3.7. We may also deduce from the latter proof that $\cL_{q;1}^d\circ{\mathcal B}_{q;1}$ is the identity on $\OO_{q;1}^d$. \end{proof} \subsection{Transformations of arbitrary order}\label{subsection:notation4} Let $k>0$, write $q^{1/k}=e^{(\log(q))/k}$, and define the formal $q$-Borel and $q$-Laplace transforms of order $k$ in the following manner: $$ \hat {\mathcal B}_{q;k}=\hat{\mathcal B}_{q^{1/k};1}\,,\quad \hat \cL_{q;k}=\hat\cL_{q^{1/k};1}\,. $$ Furthermore, by replacing $q$ with $q^{1/k}$, one sets: \begin{equation} \label{equation:qksummable}X_{q;k}=X_{q^{1/k};1},\quad X\in\left\{ \hat{\mathcal B},\ \hat\cL,\ {\mathcal B},\ \cL^d,\ \CC[[x]],\ \EE,\ \OO^d,\ \EE^d \right\}.
\end{equation} In this way, the maps ${\mathcal B}_{q;k}$ and $\cL_{q;k}^d$ are bijections between $\OO_{q;k}^d$ and $\EE_{q;k}^d$ such that \begin{equation} {\mathcal B}_{q;k}\circ\cL_{q;k}^d=\mathbf{Id}|_{\EE_{q;k}^d},\quad \cL_{q;k}^d\circ{\mathcal B}_{q;k}=\mathbf{Id}|_{\OO_{q;k}^d}.\end{equation} For $k'>k>0$, it is easy to obtain from \eqref{equation:qksummable} that $\CC[[x]]_{q;k}\supsetneq\CC[[x]]_{q;k'}$ and, further, \begin{equation} \label{kk'} \OO_{q;k}^d\subsetneq\OO_{q;k'}^d,\quad \EE_{q;k}^d\subsetneq\EE_{q;k'}^d \end{equation} for any $d\in\RR$. \subsection{Multisummation} \label{subsection:notation5} As in \cite{marotte2000multisommabilite}, Section 2.3.3, we will denote by $\Omega^+$ the set of finite sequences of strictly increasing elements of $\QQ_{>0}$. Let $\Omega^{+*}=\Omega^+ \setminus \{\varnothing \}$. Given $(s_1,...,s_r)\in \Omega^{+*}$, we define $\widetilde{s}_1,\dots,\widetilde{s}_r\in \QQ_{>0}$ by $\widetilde{s}_i=\frac{1}{\frac{1}{s_i} -\frac{1}{s_{i+1}}}$, where we adopt the convention that $s_{r+1}=\infty$. \begin{ex} If we consider $(s_1,s_2):=(1,2)\in \Omega^{+*}$, the associated sequence is $(\widetilde{s}_1,\widetilde{s}_2)=(2,2)$. \end{ex} Let ${\mathcal C}^d$ denote the analytic continuation of any germ of analytic functions at zero along the direction of argument $d$. \begin{dfn}[\cite{marotte2000multisommabilite}, Definition~2.3.4]\label{def1} Let $d\in\RR$, $\vec s=(s_1,...,s_r)\in \Omega^{+*}$. The power series $ f\in\CC[[x]]$ is $q$-multisummable of order $\vec s$ in the direction of argument $d$ if and only if the following conditions are fulfilled: \begin{enumerate} \item $\hat{\mathcal B}_{q;s_1}( f)\in\CC\{\xi\}$ and ${\mathcal C}^d\circ \hat{\mathcal B}_{q;s_1}(f)\in\EE_{q;\widetilde{s}_1}^d$.
\item for $1\leq j<r$, $\cL_{q;\widetilde{s}_j}^d \circ \dots\circ\cL_{q;\widetilde{s}_1}^d\circ{\mathcal C}^d\circ \hat{\mathcal B}} \def\cL{{\mathcal L}_{q;s_1}( f)\in\EE_{q;\widetilde{s}_{j+1}}^d$. \varepsilonnd{enumerate} \varepsilonnd{dfn} Let $\CC[[x]]_{q;\vec s}^d$ be the set of all $q$-multisummable power series of order $\vec s$ in the direction of argument $d$. For $ f\in\CC[[x]]_{q;\vec s}^d$, define its (multi-)sum function ${\mathcal S}} \def\cE{{\mathcal E}_{q;\vec s}^d( f)$ in the direction $d$: $${\mathcal S}} \def\cE{{\mathcal E}_{q;\vec s}^d(f):=\cL_{q;\widetilde{s}_r}^d\circ \dots\circ\cL_{q;\widetilde{s}_1}^d\circ{\mathcal C}^d\circ \hat{\mathcal B}} \def\cL{{\mathcal L}_{q;s_1}( f)\,. $$ We first list some properties of the intermediate $q$-sums that will be very important in the sequel. They are a straightforward consequence of the definition combined with Proposition~\ref{prop:BLd}. \begin{prop}\label{prop3} Let $d\in\RR$, $\vec s=(s_1,\dots ,s_r)\in \Omega^{+*}$ and $ f\in\CC[[x]]_{q;\vec s}^d$. \begin{enumerate} \item One has ${\mathcal S}} \def\cE{{\mathcal E}_{q;\vec s}^d(f)\in \OO_{q;\widetilde{s}_r}^d$. \item ${\mathcal B}_{q;\widetilde{s}_{1}}\circ\dots \circ {\mathcal B}_{q;\widetilde{s}_{r}}\circ\mathcal{S}_{q;\vec s}^{d}(f)=\widehat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;s_1}(f)$. \item For $j=2,\dots, r$, \begin{multline*} {\mathcal B}_{q;\widetilde{s}_{j}}\circ\dots \circ {\mathcal B}_{q;\widetilde{s}_{r}}\circ\mathcal{S}_{q;\vec s}^{d}(f)= \cL_{q;\widetilde{s}_{j-1}}^{d}\circ \dots \circ \cL_{q;\widetilde{s}_{1}}^{d}\circ{\mathcal C}^d\circ\widehat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;s_1}(f) \in \OO_{q;\widetilde{s}_{j-1}}^{d}\cap \EE_{q;\widetilde{s}_{j}}^{d}. \varepsilonnd{multline*} \varepsilonnd{enumerate} \varepsilonnd{prop} Let $\vec{s}\subset\vec{t}\in \Omega^{+*}$, $d\in \RR$, and assume that $\hat{f}\in \CC[[x]]_{q;\vec s}^d$. 
Then by \cite{marotte2000multisommabilite}, Lemma~2.4.1, $\hat{f}\in \CC[[x]]_{q;\vec t}^d$ and $\mathcal{S}_{q;\vec{s}}^{d}(\hat{f})=\mathcal{S}_{q;\vec{t}}^{d}(\hat{f})$. Hence, we may omit $\vec{s}$ and write $\mathcal{S}_{q}^{d}$ instead of ${\mathcal S}} \def\cE{{\mathcal E}_{q;\vec s}^d$. For $d\in \RR$, we let $\CC[[x]]_{q}^d=\displaystyle \bigcup_{\vec{s}\in \Omega^{+}}\CC[[x]]_{q;\vec s}^d$. We say that $d\in \RR$ is a singular direction of $f\in \CC[[x]]$, if $f\notin \CC[[x]]_{q}^d$. \par We say that $f\in \CC[[x]]$ is a $q$-multisummable series, and we write $f\in \mathcal{MS}_{q}$, if the set of its singular directions is finite modulo $2\pi\ZZ$. The set of $q$-multisummable series forms a $\CC\{x\}$-module, and the $q$-summation process is a morphism, as the following proposition shows. Let $\OO_{q}^d:=\displaystyle \bigcup_{s\in \QQ_{>0}}\OO_{q;s}^d$. \begin{prop}\label{prop1} Let $d\in \RR$. The $q$-summation process $\left\{ \begin{array}{lll}\CC[[x]]_{q}^d&\rightarrow & \OO_{q}^d \\f&\mapsto & {\mathcal S}} \def\cE{{\mathcal E}_q^d(f) \varepsilonnd{array} \right.$ satisfies the following algebraic properties: \begin{itemize} \item For all $f_{1},f_{2}\in \CC[[x]]_{q}^d$, we have $f_{1}+f_{2}\in \CC[[x]]_{q}^d$, and $\mathcal{S}_{q}^{d}(f_{1}+ f_{2})=\mathcal{S}_{q}^{d}(f_{1})+ \mathcal{S}_{q}^{d}(f_{2})$; \item For all $f\in \CC[[x]]_{q}^d$, we have $\sigma_{q}(f)\in \CC[[x]]_{q}^d$, and $\mathcal{S}_{q}^{d}(\sigma_{q}(f))=\sigma_{q}\left(\mathcal{S}_{q}^{d}(f)\right)$; \item For all $f\in\CC\{x\}$, $ g\in\CC[[x]]_{q}^d$, we have $fg\in\CC[[x]]_{q}^d$ and $\mathcal{S}_{q}^{d}(fg)=f\,\mathcal{S}_{q}^{d}(g)$. \varepsilonnd{itemize} \varepsilonnd{prop} \subsection{Linear $q$-difference equations and $q$-multisummation} \label{subsection:notation6} Let $L$ be a $q$-difference operator of the following form: \begin{equation}\label{eq01} L=a_{n} \sigma_{q}^{n}+\dots+a_{0}\in \CC\{x\}[\sigma_{q}], \varepsilonnd{equation} where $a_n a_0 \neq 0$, $n>0$. 
The Newton polygon associated with $L$, denoted by $\mathcal{N\!P}(L)$, is the convex hull, in the plane $\RR^2$, of the finite family of ascending half-lines $$\displaystyle \bigcup_{j=0}^{n}\left\{ (j,k)\in {\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}\times[0,+\infty): k\geq v_{0}(a_{j})\right\}\subset\RR^2, $$ where $v_{0}$ denotes the $x$-adic valuation -- it is worth recalling that $v_0(a_j)=+\infty$ when $a_j=0$. Let $(d_{1},n_{1}),\dots ,(d_{r+1},n_{r+1})$ with ${d_{1}<\dots<d_{r+1}}$, be the minimal subset, with respect to inclusion, of~${\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}\times{\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}$ such that the lower part of the boundary of $\mathcal{N\!P}(L)$ is the convex hull of the finite set of points with coordinates $(d_{1},n_{1}),\dots ,(d_{r+1},n_{r+1})$. Letting $\displaystyle s_j=\frac{n_{j+1}-n_{j}}{d_{j+1}-d_{j}}$, one gets the (finite) slopes of $\mathcal{N\!P}(L)$. Note that, by construction, the sequence $(s_j)_{1\le j\le r}$ is strictly increasing. With regard to the summability of the formal power series solutions of linear $q$-difference equations, one can quote the following result. \begin{thm}[\cite{marotte2000multisommabilite}, Theorem 3.3.5]\label{thm3} Let $L$ be as in \varepsilonqref{eq01}, and let $f\in\CC[[x]]$. 
Suppose that the associated Newton polygon $\mathcal{N\!P}(L)$ has the integers $s_1<...<s_r$ as all its {\bf positive} slopes. If $L f\in\CC\{x\}$, then $ f\in\mathcal{MS}_{q}$. More precisely, for all $d\in \RR$ that is not a singular direction, one has $f\in \CC[[x]]_{q;(s_1,\dots,s_r)}^d$. Furthermore, for all $d\in \RR$ that is not a singular direction, $\mathcal{S}_{q}^{d}(f)$ is a solution of \varepsilonqref{eq01}. \varepsilonnd{thm} \begin{ex}\label{ex1} Let $a\in \CC^*$ and let $E_{a,q}$ be the unique series that is a solution of the following first order linear $q$-difference equation: \begin{equation} \label{qEulerEquation} L_a y=1\,,\quad\hbox{\rm where}\ L_a=x\,\sigma_q+a. \varepsilonnd{equation} When $a=1$ we recover the $q$-Euler equation and $E_{a,q}$ is the $q$-Euler series. With \varepsilonqref{cBcL} we find that $(\xi+a)\widehat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;1} (E_{a,q})=1$. Then, $\widehat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;1}(E_{a,q})\in \CC \{\xi\}\cap \EE_{q;1}^d$, for all $d\in \RR$ such that $d\notin \arg(-a)+2\pi\ZZ$. We therefore obtain \begin{equation}\label{eq1} \mathcal{S}_{q}^{d}(E_{a,q})=\frac{1}{\sigma_{q}rt{2\pi\log (q)}}\,\displaystyle\int_0^{\infty e^{\mathbf{i}d}}e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}\,\frac{1}{\xi +a}\,\frac{d\xi}{\xi}. \varepsilonnd{equation} By \varepsilonqref{cBcL}, $\mathcal{S}_{q}^{d}(E_{a,q})$ is a solution of \varepsilonqref{qEulerEquation}. By virtue of Proposition \ref{prop3}, we find that $\mathcal{S}_{q}^{d}(E_{a,q})\in \OO_{q;1}^d$, and it is therefore analytic on some domain of the form $\tilde{D}_R$ for some $R>0$. Since $\mathcal{S}_{q}^{d}(E_{a,q})$ is a solution of \varepsilonqref{qEulerEquation} and $q>1$, we deduce that $\mathcal{S}_{q}^{d}(E_{a,q})$ is meromorphic on the Riemann surface of the logarithm. \par When $\arg(-a)\neq d,d'$, $\mathcal{S}_{q}^{d}(E_{a,q})$ and $\mathcal{S}_{q}^{d'}(E_{a,q})$ are two meromorphic solutions, and we may compare the two functions. 
Assume that $d<d'$. For $R>0$ let $\gamma_{R}$ be the path that goes from $0$ to $Re^{\mathbf{i}d}$ in a straight line, from $Re^{\mathbf{i}d}$ to $Re^{\mathbf{i}d'}$ following the circle of center $0$ and radius $R$ in the positive direction, and coming from $Re^{\mathbf{i}d'}$ back to $0$ in a straight line. When $R$ is sufficiently large, the residue theorem yields that \begin{equation}\label{eq3} \frac{1}{\sigma_{q}rt{2\pi\log (q)}}\,\displaystyle\int_{\gamma_R} e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}\,\frac{1}{\xi +a}\,\frac{d\xi}{\xi}= \frac{2\mathbf{i}\pi}{\sigma_{q}rt{2\pi\log (q)}}A, \varepsilonnd{equation} where $A=\mathrm{res}(\xi^{-1} e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}, -a)$ denotes the residue at $\xi=-a$ if some determination $\arg(-a)+2k\pi$, $k\in \ZZ$, belongs to $(d,d')$, and $A=0$ otherwise. When $R$ goes to infinity, the integral from $Re^{\mathbf{i}d}$ to $Re^{\mathbf{i}d'}$ tends to $0$. Then, when $R$ goes to infinity the left hand side of \varepsilonqref{eq3} tends to $\mathcal{S}_{q}^{d}(E_{a,q})-\mathcal{S}_{q}^{d'}(E_{a,q})$ while the right hand side stays equal to $\frac{2\mathbf{i}\pi}{\sigma_{q}rt{2\pi\log (q)}}A$. This shows that if $\arg(-a)+2k\pi \notin (d,d')$ for all $k\in \ZZ$, then $\mathcal{S}_{q}^{d}(E_{a,q})$ and $\mathcal{S}_{q}^{d'}(E_{a,q})$ are equal. \varepsilonnd{ex} \begin{ex}\label{ex2} Let us now view $E_{a,q}$ as a function of $a$ and for $m\in {\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}$, let $E^{[m]}_{a,q}=\partial_{a}^{m}E_{a,q}$ be the formal derivative. 
Obviously, the formal Borel transformation commutes with $\partial_a$ and we find for all $m$, $\hat {\mathcal B}} \def\cL{{\mathcal L}_{q;1}(E^{[m]}_{a,q})=\partial_{a}^{m}\hat {\mathcal B}} \def\cL{{\mathcal L}_{q;1}(E_{a,q})=\partial_{a}^{m}\frac{1}{\xi+a}=\frac{(-1)^{m}}{(\xi+a)^{m+1}}$. Consider $\mathcal{S}_{q}^{d}(E_{a,q})$ as an integral depending upon the parameter $a$ and let us study its differentiability. Let $f(a,m,\xi,x):=\xi^{-1}e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}\,\frac{(-1)^{m}}{(\xi+a)^{m+1}}$. Let $d\in \RR$ and let us fix a compact set $K\subset \CC^* \setminus \RR_{>0}e^{\mathbf{i}(d+\pi)}$. Let $M :=\displaystyle \min_{\substack{a\in K, \\ \xi\in \RR_{>0}e^{\mathbf{i}d}}} |\xi+a|$. Note that $M>0$. Then, for all $m$, let us set $g(m,\xi,x):=\left| \xi^{-1}e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}\,M^{-m-1} \right|$. Then, we may locally dominate the following integral depending upon $a$ $$\begin{array}{ll} &\frac{1}{\sigma_{q}rt{2\pi\log (q)}}\,\displaystyle\int_0^{\infty e^{\mathbf{i}d}}\left|e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}\,\frac{(-1)^{m}}{(\xi+a)^{m+1}}\,\frac{d\xi}{\xi}\right|\\ \leq&\frac{1}{\sigma_{q}rt{2\pi\log (q)}}\,\displaystyle\int_0^{\infty e^{\mathbf{i}d}}\left|e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}\,\,\frac{d\xi}{M^{m+1}\xi}\right| \\ <& \infty. 
\varepsilonnd{array}$$ Then the integral depending upon $a$ may be differentiated, and we find $E^{[m]}_{a,q}\in \CC[[x]]_{q;1}^{d}$ and $$\begin{array}{ll} \mathcal{S}_{q}^{d}(E^{[m]}_{a,q})&=\frac{1}{\sigma_{q}rt{2\pi\log (q)}}\,\displaystyle\int_0^{\infty e^{\mathbf{i}d}}e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}\,\frac{(-1)^{m}}{(\xi+a)^{m+1}}\,\frac{d\xi}{\xi}\\ &= \partial_{a}^{m}\left(\frac{1}{\sigma_{q}rt{2\pi\log (q)}}\,\displaystyle\int_0^{\infty e^{\mathbf{i}d}}e^{-(\log(\frac{x}{\sigma_{q}rt{q}\xi}))^2/(2\log(q))}\,\frac{1}{\xi +a}\,\frac{d\xi}{\xi}\right)\\ &=\partial_{a}^{m}\mathcal{S}_{q}^{d}(E_{a,q}). \varepsilonnd{array}$$ \varepsilonnd{ex} The following result, whose proof may be deduced from \cite{di2009q}, Theorem 4.20, will be used in the proof of our main result. Let us make the convention that $E^{0}_{a,q}=1$ for all $a\in\CC^*$. \begin{thm}\label{thm2} Let $L$ be as in \varepsilonqref{eq01}, and let $ f\in\CC[[x]]$ be a solution of \varepsilonqref{eq01}. Suppose that the associated Newton polygon $\mathcal{N\!P}(L)$ has slopes $0$ and $1$. Then, there exists a decomposition of the form $$f = \displaystyle \sum_{i=0}^{k}f_i \times E^{[m_i]}_{a_i,q}$$ such that $f_0,\dots, f_k \in \CC\{ x \}$, and $a_i\in \CC^*$, $m_i\in {\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}$. \varepsilonnd{thm} \section{Product theorem}\label{section:Euler} The goal of this section is to prove the main result of the paper, namely that the product of two series solutions of linear $q$-difference equations with slopes $0$ and $1$ is $q$-multisummable, and that the $q$-sum of the product is the product of the $q$-sums. By Theorem~\ref{thm2}, two such series admit a decomposition involving convergent series and variants of the $q$-Euler series. 
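Throughout this section, the relevant multisummation order will be $\vec s=(1,2)$, for which $\widetilde{s}_1=\frac{1}{\frac{1}{1}-\frac{1}{2}}=2$ and $\widetilde{s}_2=\frac{1}{\frac{1}{2}}=2$, so that the summation operator of Section \ref{subsection:notation5} unwinds as $$ \mathcal{S}_{q;(1,2)}^{d}=\cL_{q;2}^{d}\circ \cL_{q;2}^{d}\circ{\mathcal C}^d\circ \hat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;1}\,. $$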
The strategy of the proof is to show the product theorem for series of the form $E^{[m]}_{a,q}$, $E^{[n]}_{b,q}$ where $a,b\in \CC^*$, $m,n\in {\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}^*$, and then use Theorem \ref{thm2}, together with the fact that the $q$-summation process is a morphism of $\CC\{x\}$-modules, see Proposition~\ref{prop1}, to deduce the result. \subsection{Case of the Euler series} We are first going to consider the particular case where the series are of the form $E_{a,q}$, $E_{b,q}$ where $a,b\in \CC^*$ are fixed complex numbers. So let us study the summability of the product $E_{a,q}E_{b,q}$. Write the defining equations in the form $x\,\sigma_q E_{a,q}=1-aE_{a,q}$ and $x\,\sigma_q E_{b,q}=1-bE_{b,q}$, take the product of both sides, and observe that $$(x^{2}\sigma_q-ab)E_{a,q} E_{b,q}=1-aE_{a,q}-bE_{b,q}. $$Furthermore, as $L_aL_b = L_b L_a$, we find $ L_aL_b E_{a,q}= L_b 1= x+b$ and $ L_aL_b E_{b,q}= L_a 1= x+a$. Since $ L_aL_b 1=(x\sigma_{q} +a)(x+b)=qx^2 + (a+b)x+ab$, it follows that $E_{a,q} E_{b,q}$ is a solution of the following functional equation: \begin{equation} \label{qEulerCarre} (x\,\sigma_q+a)(x\,\sigma_q+b) (x^{2}\sigma_q-ab)y=qx^2 + (a+b)x+ab-a (x+b)-b(x+a)=qx^2 -ab. 
\varepsilonnd{equation} \begin{figure} \begin{center} \begin{tikzpicture} \coordinate (O) at (0,4); \coordinate (A) at (0,0); \coordinate (C) at (2,2); \coordinate (D) at (3,3); \coordinate (E) at (3,4); \draw (A) -- (C); \draw (C) -- (E); \draw (0,0) grid (3,4); \put(0,-10){{$(0,0)$}} \put(60,45){{$(2,2)$}} \put(90,110){{$(3,4)$}} \node (centre) at (0,0){$\bullet$}; \node (centre) at (2,2){$\bullet$}; \node (centre) at (3,4){$\bullet$}; \varepsilonnd{tikzpicture} \caption{The Newton polygon of the operator associated with $E_{a,q} E_{b,q}$.}\label{fig1} \varepsilonnd{center} \varepsilonnd{figure} It is obvious that the Newton polygons associated with $L_a$ and $L_b$ each have a unique slope, which equals $1$. One sees that the slopes of the Newton polygon of $$L:=(x\,\sigma_q+a)(x\,\sigma_q+b) (x^{2}\sigma_q-ab)$$ are $1$ and $2$. Consequently, by Theorem \ref{thm3}, one may expect to have $ E_{a,q}E_{b,q}\in \mathcal{MS}_q$. Let $$S:=\{\arg(-a)+2\pi \ZZ \}\cup \{\arg(-b)+2\pi \ZZ \} . $$ Then for all $d\in \RR \setminus S$, one has $E_{a,q}\in \CC[[x]]_{q;1}^{d}$ and $E_{b,q}\in \CC[[x]]_{q;1}^{d}$. The goal of this subsection is to prove: \begin{thm}\label{theo:2} Given $d\in \RR \setminus S$, $E_{a,q}E_{b,q}\in \CC[[x]]_{q;(1,2)}^{d}$ and for all $x\in\widetilde{\CC}^*$, we have $$ \mathcal{S}_{q;(1,2)}^{d}(E_{a,q}E_{b,q})(x)=\mathcal{S}_{q;1}^{d}(E_{a,q})\mathcal{S}_{q;1}^{d}(E_{b,q})(x). $$ \varepsilonnd{thm} Since $E_{\star,q}=\displaystyle\sum_{n\ge 0}(-1)^n (\star)^{-n-1} \,q^{n(n-1)/2}\,x^n$, it is straightforward to check that we have ${f_1=\hat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;1}(E_{a,q} E_{b,q})\in\CC\{\zetaeta\}}$. 
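Let us record, for the reader's convenience, the computation behind Figure \ref{fig1}. Using the commutation rule $\sigma_q x=qx\,\sigma_q$, one expands $$ L=q^{5}x^{4}\,\sigma_q^{3}+\left(q^{2}(a+b)x^{3}-qabx^{2}\right)\sigma_q^{2}+\left(abx^{2}-ab(a+b)x\right)\sigma_q-a^{2}b^{2}\,, $$ so that $v_0(a_0)=0$, $v_0(a_1)=1$, $v_0(a_2)=2$ and $v_0(a_3)=4$. The lower part of the boundary of $\mathcal{N\!P}(L)$ is therefore the convex hull of the points $(0,0)$, $(2,2)$ and $(3,4)$ of Figure \ref{fig1}, whose slopes are $1$ and $2$.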
\begin{lem}\label{lem8} The power series $f_1$ represents the only analytic function on $\CC\setminus(-aq^{\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}\cup-bq^{\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W})$ that satisfies the $q$-difference equation \begin{equation} \label{qEulerCarreBorel}(q^{-1}\zetaeta^{2} \sigma_{q}^{-1}-ab)f_1 =\frac{\zetaeta^2 -ab}{(\zetaeta+a)(\zetaeta+b)}\,. \varepsilonnd{equation} Furthermore, for any given $d\in \RR\setminus S$, we have $f_1\in \EE_{q;2}^d$. \varepsilonnd{lem} \begin{proof} By considering the first relation in \varepsilonqref{cBcL}, one gets that $$ \hat{\mathcal B}} \def\cL{{\mathcal L}_{q;1}{L}=(\zetaeta+a)(\zetaeta+b)\hat{\mathcal B}} \def\cL{{\mathcal L}_{q;1}(x^2\sigma_q-ab)=(\zetaeta+a)(\zetaeta+b)(q^{-1}\zetaeta^2\sigma_q^{-1}-ab)\hat{\mathcal B}} \def\cL{{\mathcal L}_{q;1}\,. $$ By definition, $\hat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;1}(qx^2 -ab)=\zetaeta^2 -ab$. Thus, applying $\hat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;1}$ to both sides of \varepsilonqref{qEulerCarre} yields the functional equation \varepsilonqref{qEulerCarreBorel} for $f_1$. Putting \varepsilonqref{qEulerCarreBorel} into the following form: $$ f_1(\zetaeta)=\frac{ab-\zetaeta^2}{ab(\zetaeta+a)(\zetaeta+b)}+(abq)^{-1}\zetaeta^2f_1(\zetaeta/q)\,, $$ and iterating this last relation shows that \begin{equation}\label{f1series} f_1(\zetaeta)=\sum_{n\ge 0}\frac{\zetaeta^{2n}\,(ab q^n-\zetaeta^2)}{(ab)^nq^{n^2}\,(a q^{n}+\zetaeta)(b q^{n}+\zetaeta)}\,. 
\varepsilonnd{equation} Thus, $f_1$ is analytic on the domain $\CC\setminus(-aq^{\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}\cup -bq^{\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W})$. The uniqueness of $f_1$ comes from the fact that the homogeneous equation associated with \varepsilonqref{qEulerCarreBorel}, $(q^{-1}\zetaeta^{2} \sigma_{q}^{-1}-ab)y =0$, has no nontrivial analytic solution at $\zetaeta=0$. To prove that $f_1\in \EE_{q;2}^d$ for all $d\in\RR\setminus S$, we may use the same reasoning as in \cite{dreyfus2015building}, Proposition 2.13, (3). \varepsilonnd{proof} Given any $d\in \RR\setminus S $, let $f_2^{d}:=\cL_{q;2}^d(f_1)$. Let $S'=S\cup \{\arg(\sigma_{q}rt{ab})+\pi \ZZ\}$. \begin{lem}\label{lem10} If $d\in \RR\setminus S$, one has for all $\zetaeta\in\tilde\CC^*$ \begin{equation} \label{qBorelf22}(q^{-1/2}\zetaeta^{2}-ab)f_2^{d}(\zetaeta) =1-a \,\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})(\zetaeta)-b\,\mathcal{S}_{q;2}^{d}(E_{b,q^{1/2}})(\zetaeta). \varepsilonnd{equation} Furthermore, when $d\in \RR\setminus S'$, we find $f_2^{d}\in\EE_{q;2}^d$. \varepsilonnd{lem} \begin{rmk}\label{rem1} We will prove in the sequel that for all $d\in \RR\setminus S$, we have $f_2^{d}\in\EE_{q;2}^d$. 
\varepsilonnd{rmk} \begin{proof} Let $q'=q^{1/2}$, $d\in\RR \setminus S$, and transform \varepsilonqref{qEulerCarreBorel} into a $q'$-difference equation as follows: $$(q'^{-2} \zetaeta^2\sigma_{q'}^{-2} -ab) f_1=\frac{\zetaeta^2 -ab}{(\zetaeta+a)(\zetaeta+b)}.$$ With the help of \varepsilonqref{cBcL}, in which $q$ is replaced by $q'$, we deduce that $$(q'^{-1} \zetaeta^2 -ab )f_2^{d}(\zetaeta)=\cL_{q';1}^d\left(\frac{ \xi^2 -ab}{(\xi+a)(\xi+b)}\right)(\zetaeta),\quad \zetaeta\in\tilde\CC^*. $$ As $\displaystyle\frac{\xi^2 -ab}{(\xi+a)(\xi+b)} =1-\frac{a}{\xi+a}-\frac{b}{\xi+b}$ we obtain that $$\cL_{q';1}^d\left(\frac{\xi^2 -ab}{(\xi+a)(\xi+b)}\right)(\zetaeta)=1-a \,\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})(\zetaeta)-b \,\mathcal{S}_{q;2}^{d}(E_{b,q^{1/2}})(\zetaeta) .$$ This shows \varepsilonqref{qBorelf22} for all $d\in \RR\setminus S$. Furthermore, by Proposition \ref{prop3}, $\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})\in \OO_{q;2}^d$, so it is defined in a neighborhood of $0$ in the Riemann surface $\tilde\CC^*$. By Theorem~\ref{thm3}, $\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})$ is a solution of the same equation as $E_{a,q^{1/2}}$, which implies that we have $(x\sigma_{q'}+a)\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})=1$. We deduce similarly to the proof of \cite{dreyfus2015building}, Proposition~2.13, (3), that $\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})\in \EE_{q;2}^d$. Then, the right hand side of \varepsilonqref{qBorelf22} belongs to $\EE_{q;2}^d$ for all $d\in \RR\setminus S$. \par The function $\zetaeta\mapsto\frac{1}{q'^{-1} \zetaeta^2 -ab}$ has its poles at the points $\zetaeta=q^{1/4}\,e^{k\pi\mathbf{i} }\sigma_{q}rt{ab}$, $k\in\ZZ$; hence, when $d\in \RR\setminus S'$, the ray $\RR_{>0}e^{\mathbf{i}d}$ avoids these poles and we deduce that $\frac{1}{q'^{-1} \zetaeta^2 -ab}\in\EE_{q;2}^d$. 
Then for all $d\in \RR\setminus S'$, we find $$f_2^{d}(\zetaeta)=\frac{1-a \,\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})(\zetaeta)-b\,\mathcal{S}_{q;2}^{d}(E_{b,q^{1/2}})(\zetaeta)}{q^{-1/2} \zetaeta^2-ab } \in\EE_{q;2}^d.$$ \varepsilonnd{proof} We are now ready to prove Theorem \ref{theo:2}. \begin{proof}[Proof of Theorem \ref{theo:2}] Let us begin with the case where $d\in \RR \setminus S'$, and then consider the general case. \begin{center}\textbf{Case $d\in \RR \setminus S'$.} \varepsilonnd{center} Assume thus that $d\in \RR \setminus S'$. If one defines $$ e_{q}(x)=\frac{1}{\sigma_{q}rt{2\pi\log (q)}}\,e^{-(\log(q^{1/2}x))^2/(2\log(q))} $$ for all $x\in\tilde\CC^*$, one can write \begin{equation} \label{equation:Eq21} \mathcal{S}_{q;1}^{d}(E_{a,q})\mathcal{S}_{q;1}^{d}(E_{b,q})(x)=\int_0^{\infty e^{\mathbf{i}d}}\!\!\int_0^{\infty e^{\mathbf{i}d}}\frac{e_q(\xi_1/x)\,e_q(\xi_2/x)}{(a+\xi_1)(b+\xi_2)\,}\,\frac{d\xi_1}{\xi_1}\,\frac{d\xi_2}{\xi_2}\,. \varepsilonnd{equation} Let $\Phi$ : $(\xi_1,\xi_2)\mapsto(\xi,\zetaeta)$ be the homeomorphism from $\RR_{>0}\times \RR_{>0}$ onto itself defined by $$ \xi=\frac{\xi_1}{\xi_2}\,,\quad \zetaeta=q^{1/4}\sigma_{q}rt{\xi_1\xi_2}\,. $$ It is a bijection of $\RR_{>0}\times \RR_{>0}$ with inverse $$ \xi_1=q^{-1/4}\,\sigma_{q}rt \xi\,\zetaeta\,,\quad \xi_2=\frac{q^{-1/4}\,\zetaeta}{\sigma_{q}rt\xi}. $$ Furthermore, the Jacobian is given by $$J:=\left(\begin{array}{cc} \partial_{\xi_1}\xi & \partial_{\xi_1}\zetaeta \\ \partial_{\xi_2}\xi & \partial_{\xi_2}\zetaeta \varepsilonnd{array} \right)=\left(\begin{array}{cc} (\xi_2)^{-1} & q^{1/4}\sigma_{q}rt{\xi_2 /\xi_1}/2 \\ -\xi_1 (\xi_2)^{-2} & q^{1/4}\sigma_{q}rt{\xi_1 /\xi_2}/2 \varepsilonnd{array} \right).$$ Then, $$\det(J)=q^{1/4}(\xi_1)^{1/2}(\xi_2)^{-3/2}=\frac{\xi \zetaeta}{\xi_1\xi_2}.$$ Let us prove the following technical lemma. 
\begin{lem}\label{lem1} We have the following equality $$ e_q(\xi_1/x)\,e_q(\xi_2/x)=e_{q^2}(\xi/q)\,e_{q^{1/2}}(\zetaeta/x)\,. $$ \varepsilonnd{lem} \begin{proof}[Proof of Lemma \ref{lem1}] We have to prove that $f(x):=\frac{e_q(\xi_1/x)\,e_q(\xi_2/x)}{e_{q^2}(\xi/q)\,e_{q^{1/2}}(\zetaeta/x)}$ equals $1$. The following holds $$ f(x)=e^{\frac{-(\log(q^{1/2}\xi_1/x))^2-(\log(q^{1/2}\xi_2/x))^2+\frac{1}{2}(\log(\xi))^2+2(\log(q^{1/4}\zetaeta/x))^2}{2 \log(q)}}. $$ Let us expand the expression of $f(x)$, which is of the form $e^{\alpha/(2\log(q))}e^{\beta\log(x)/\log(q)}$ where $$ \alpha=-(\log(q^{1/2}\xi_1))^2-(\log(q^{1/2}\xi_2))^2+\frac{1}{2}(\log(\xi))^2+2(\log(q^{1/4}\zetaeta))^2, $$ and $$ \beta=\log(q^{1/2}\xi_1)+\log(q^{1/2}\xi_2)-2\log(q^{1/4}\zetaeta). $$ Replacing $\xi$ and $\zetaeta$ by their expressions in $\xi_1,\xi_2$ gives \begin{align*} \alpha=-\left(\frac{\log(q)}{2}+\log(\xi_1)\right)^2-\left(\frac{\log(q)}{2}+\log(\xi_2)\right)^2+\frac{1}{2}\left(\log(\xi_1)-\log(\xi_2)\right)^2\\ +2\left(\frac{\log(q)}{2}+\frac{\log(\xi_1)}{2}+\frac{\log(\xi_2)}{2}\right)^2=0, \varepsilonnd{align*} and $$\beta =\frac{\log(q)}{2}+\log(\xi_1)+\frac{\log(q)}{2}+\log(\xi_2)-2\left(\frac{\log(q)}{2}+\frac{\log(\xi_1)}{2}+\frac{\log(\xi_2)}{2}\right)=0.$$ This completes the proof of the lemma. \varepsilonnd{proof} Let us continue the proof of Theorem \ref{theo:2}. If $$\phi(\xi,\zetaeta)=\frac1{(a+q^{-1/4}\,\sigma_{q}rt \xi\,\zetaeta)(b+{q^{-1/4}\,\zetaeta}/{\sigma_{q}rt\xi})} $$ and $$ \psi^*(\zetaeta)=\int_0^{\infty e^{\mathbf{i}d}}e_{q^2}(\xi/q)\phi(\xi,\zetaeta)\,\frac{d\xi}{\xi}\,, $$ making the change of variables $\Phi$ in \varepsilonqref{equation:Eq21} yields that $$\left( \mathcal{S}_{q;1}^{d}(E_{a,q})\mathcal{S}_{q;1}^{d}(E_{b,q})\right)(x)=\int_0^{\infty e^{\mathbf{i}d}}\psi^*(\zetaeta)e_{q^{1/2}}(\zetaeta/x)\frac{d\zetaeta}\zetaeta\,. $$ Let $f_2^d$ be the function considered in Lemma \ref{lem10}. 
By Lemmas \ref{lem8} and \ref{lem10}, $E_{a,q}E_{b,q}\in \CC[[x]]_{q;(1,2)}^{d}$ and $$ {\mathcal S}} \def\cE{{\mathcal E}^{d}_{q;(1,2)}(E_{a,q}E_{b,q})(x)=\cL_{q;2}^d (f_2^d)(x)=\int_0^{\infty e^{\mathbf{i}d}}f_2^d(\zetaeta)\,e_{q^{1/2}}(\zetaeta/x)\,\frac{d\zetaeta}{\zetaeta}\,. $$ In view of Lemma \ref{lem10}, it therefore suffices to prove \begin{equation}\label{equation:psi*} \psi^*(\zetaeta) = \frac{1}{ab -q^{-1/2}\zetaeta^{2}} \,\left(a\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})(\zetaeta)+b \,\mathcal{S}_{q;2}^{d}(E_{b,q^{1/2}})(\zetaeta)-1\right)\,. \varepsilonnd{equation} In what follows, we will prove \varepsilonqref{equation:psi*} for all $\zetaeta>0$, and by the analytic continuation principle this permits us to get \varepsilonqref{equation:psi*} for all $\zetaeta\in\tilde\CC^*$. Let $\xi=u^2$ with $u>0$, write $\sigma_{q}rt\xi=u$, and note that $$ \phi(\xi,\zetaeta)=\frac{u}{(a+q^{-1/4}\,u\,\zetaeta)(bu+q^{-1/4}\,\zetaeta)}=\frac{\frac{aq^{1/2}}{abq^{1/2} -\zetaeta^2 }}{a+q^{-1/4}\zetaeta\,u}-\frac{\frac{q^{-1/4}\zetaeta}{ab-q^{-1/2}\zetaeta^2}}{ub+q^{-1/4}\zetaeta}\,. $$ Define for every convenient $\zetaeta\in\CC$ such that the denominator does not vanish on the path of integration: $$ I_1(\zetaeta)=\int_0^{\infty e^{\mathbf{i}d}} \frac{\frac{aq^{1/2}}{abq^{1/2} -\zetaeta^2 }}{a+q^{-1/4}\zetaeta\,u}\,e_{q^{1/2}}(u/q^{1/4})\,\frac{du}{u} $$ and $$ I_2(\zetaeta)=\int_0^{\infty e^{\mathbf{i}d}} \frac{\frac{q^{-1/4}\zetaeta}{ab-q^{-1/2}\zetaeta^2}}{ub+q^{-1/4}\zetaeta}\,e_{q^{1/2}}(u/q^{1/4})\,\frac{du}{u}\,. $$ A straightforward computation shows that $$e_{q^2}(\xi/q)\displaystyle= \frac{1}{\sigma_{q}rt{4\pi\log (q)}}\,e^{-(\log(\xi))^2/(4\log(q))}= \frac{1}{2} \frac{1}{\sigma_{q}rt{\pi\log (q)}}\,e^{-(\log(u))^2/(\log(q))} =\frac{1}{2}e_{q^{1/2}}(u/q^{1/4}). 
$$ In view of the relation $\displaystyle\frac{d\xi}{\xi}=2\,\frac{du}{u}$ one finds that $$ \psi^*(\zetaeta)= \left(I_1(\zetaeta)-I_2(\zetaeta)\right).$$ By using appropriate changes of variables, it follows that, for all convenient $\zetaeta\in\CC$ such that the denominator does not vanish on the path of integration $$ I_1(\zetaeta)=\int_0^{\infty e^{\mathbf{i}d} } \frac{\frac{aq^{1/2}}{abq^{1/2}-\zetaeta^2 }}{a+v}\,e_{q^{1/2}}(v/\zetaeta)\,\frac{dv}{v}=\frac{a}{ab-q^{-1/2}\zetaeta^2 } {\mathcal S}} \def\cE{{\mathcal E}^{d}_{q;2}(E_{a,q^{1/2}})(\zetaeta)$$ and $$ I_2(\zetaeta)=\int_0^{\infty e^{\mathbf{i}d}} \frac{\frac{1}{ab-q^{-1/2}\zetaeta^2}}{1+bv}\,e_{q^{1/2}}(\zetaeta v/q^{1/2})\,\frac{dv}{v}.$$ If we put $w=1/v$, we find $$I_2(\zetaeta)=\frac{1}{ab-q^{-1/2}\zetaeta^2}\int_0^{\infty e^{\mathbf{i}d}} \frac{w}{w+b}\,e_{q^{1/2}}(\zetaeta /w q^{1/2})\,\frac{dw}{w}$$ and with $e_{q^{1/2}}(x)=e_{q^{1/2}}(q^{-1/2}x^{-1})$, we obtain that the latter is equal to $$\frac{1}{ab-q^{-1/2}\zetaeta^2}\int_0^{\infty e^{\mathbf{i}d}} \frac{w}{w+b}\,e_{q^{1/2}}(w/\zetaeta)\,\frac{dw}{w}=\frac{1-b {\mathcal S}} \def\cE{{\mathcal E}^{d}_{q;2}(E_{b,q^{1/2}})(\zetaeta) }{ab-q^{-1/2}\zetaeta^2} .$$ Therefore, one deduces from the above the expression expected in \varepsilonqref{equation:psi*} for $\psi^*$.\par \begin{center}\textbf{General case.} \varepsilonnd{center} It remains to prove the result for $d\in \RR \setminus S$. Recall that $S'=S\cup \{\arg(\sigma_{q}rt{ab})+\pi \ZZ\}$ and for all $d\in \RR \setminus S'$, $E_{a,q}E_{b,q}\in \CC[[x]]_{q;(1,2)}^{d}$ and for all $x\in\widetilde{\CC}^*$, \begin{equation}\label{eq2} \mathcal{S}_{q;(1,2)}^{d}(E_{a,q}E_{b,q})(x)=\mathcal{S}_{q;1}^{d}(E_{a,q})\mathcal{S}_{q;1}^{d}(E_{b,q})(x). \varepsilonnd{equation} If $S=S'$ there is nothing to prove. Assume that this is not the case. Let $d_0\in \{\arg(\sigma_{q}rt{ab})+\pi \ZZ\}$ be a direction that does not belong to $S$. Let $d'<d_0 <d''$ be such that $(d',d'')\cap S' =\{d_0\}$. 
Then, $(d',d'')\cap S =\varnothing$. By Example \ref{ex1}, we deduce that for $\star\in \{a,b\}$, $\mathcal{S}_{q;1}^{d}(E_{\star,q})$ is independent of $d\in (d',d'')$. Then, the right hand side of \varepsilonqref{eq2}, seen as a function of $d$, is independent of $d$ in $(d',d'')$. By Proposition \ref{prop3}, $\mathcal{S}_{q;1}^{d}(E_{a,q})$ and $\mathcal{S}_{q;1}^{d}(E_{b,q})$ belong to $\OO_{q;1}^{d}$ and it follows by definition that for all $d\in (d',d'')$, $\mathcal{S}_{q;1}^{d}(E_{a,q})\times\mathcal{S}_{q;1}^{d}(E_{b,q})\in \OO_{q;2}^{d}$ and we may apply ${\mathcal B}} \def\cL{{\mathcal L}_{q;2}$ to it. By Proposition~\ref{prop3}, and the proof of the theorem in the case $d\in \RR\setminus S'$, for all $d\in (d',d'')\setminus \{ d_0\}$, we have $${\mathcal B}} \def\cL{{\mathcal L}_{q;2}(\mathcal{S}_{q;1}^{d}(E_{a,q})\times \mathcal{S}_{q;1}^{d}(E_{b,q}))={\mathcal B}} \def\cL{{\mathcal L}_{q;2}\mathcal{S}_{q;(1,2)}^{d}(E_{a,q}E_{b,q})=f_{2}^{d}.$$ By Lemma \ref{lem10}, for all $d\in (d',d'')\setminus \{ d_0\}$ $$ f_2^{d}(\zetaeta) =\frac{1-a \,\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})(\zetaeta)-b\,\mathcal{S}_{q;2}^{d}(E_{b,q^{1/2}})(\zetaeta)}{q^{-1/2}\zetaeta^{2}-ab}. $$ Let $f_{2}^{d_0}:={\mathcal B}} \def\cL{{\mathcal L}_{q;2}(\mathcal{S}_{q;1}^{d_0}(E_{a,q}) \mathcal{S}_{q;1}^{d_0}(E_{b,q}))$. 
Since $\mathcal{S}_{q;1}^{d}(E_{a,q}) \mathcal{S}_{q;1}^{d}(E_{b,q})$ is independent of $d\in (d',d'')$, it follows that $f_{2}^{d}$ is independent of $d\in (d',d'')$ and we deduce that $$f_2^{d_0}(\zetaeta) =\frac{1-a \,\mathcal{S}_{q;2}^{d_0}(E_{a,q^{1/2}})(\zetaeta)-b\,\mathcal{S}_{q;2}^{d_0}(E_{b,q^{1/2}})(\zetaeta)}{q^{-1/2}\zetaeta^{2}-ab}.$$ Lemma \ref{lem10} shows that $f_2^{d_0}(\zetaeta)= \cL_{q;2}^{d_0}(f_1)$, proving with Proposition \ref{prop:BLd} that $$\begin{array}{lll} \mathcal{S}_{q;1}^{d_0}(E_{a,q})\mathcal{S}_{q;1}^{d_0}(E_{b,q})&=&\cL_{q;2}^{d_0}\circ{\mathcal B}} \def\cL{{\mathcal L}_{q;2}\circ \mathcal{S}_{q;1}^{d_0}(E_{a,q})\mathcal{S}_{q;1}^{d_0}(E_{b,q})\\ &=& \cL_{q;2}^{d_0}(f_2^{d_0})\\ &=&\cL_{q;2}^{d_0}\circ \cL_{q;2}^{d_0}(f_1)\\ &=&\cL_{q;2}^{d_0}\circ \cL_{q;2}^{d_0}\circ \hat{{\mathcal B}} \def\cL{{\mathcal L}}_{q;1}(E_{a,q} E_{b,q}). \varepsilonnd{array}$$ Then $E_{a,q}E_{b,q}\in \CC[[x]]_{q;(1,2)}^{d_0}$ and $ \mathcal{S}_{q;(1,2)}^{d_0}(E_{a,q}E_{b,q})=\mathcal{S}_{q;1}^{d_0}(E_{a,q})\mathcal{S}_{q;1}^{d_0}(E_{b,q})$. \varepsilonnd{proof} \subsection{Variant of the Euler series} Consider now the situation where the two series are of the form $E^{[m]}_{a,q}$, $E^{[n]}_{b,q}$ where $a,b\in \CC^*$, $m,n\in {\mathbb N}} \def\ZZ{{\mathbb Z}} \def\QQ{{\mathbb Q}} \def\RR{{\mathbb R}} \def\CC{{\mathbb C}} \def\HH{{\mathbb H}} \def\EE{{\mathbb E}} \def\AA{{\mathbb A}} \def\GammaG{{\mathbb G}} \def\SS{{\mathbb S}} \def\OO{{\mathbb O}} \def\WW{{\mathbb W}^*$. Recall that we have set ${S:=\{\arg(-a)+2\pi \ZZ \}\cup \{\arg(-b)+2\pi \ZZ \}}$. Let us prove the following. \begin{cor}\label{cor1} Given $d\in \RR \setminus S$, $ E^{[m]}_{a,q}E^{[n]}_{b,q}\in \CC[[x]]_{q;(1,2)}^{d}$ and for all $x\in\widetilde{\CC}^*$, we have $$ \mathcal{S}_{q;(1,2)}^{d}(E^{[m]}_{a,q}E^{[n]}_{b,q})(x)=\mathcal{S}_{q;1}^{d}(E^{[m]}_{a,q})\mathcal{S}_{q;1}^{d}(E^{[n]}_{b,q})(x). $$ \varepsilonnd{cor} \begin{proof} Let us fix $d\in \RR \setminus S$. 
As we can see in Example \ref{ex2}, we may differentiate $\mathcal{S}_{q;1}^{d}( E_{a,q})$ with respect to $a$ and find $\mathcal{S}_{q;1}^{d}(E^{[m]}_{a,q})=\partial_{a}^{m}\mathcal{S}_{q;1}^{d}( E_{a,q})$. Therefore we may differentiate $\mathcal{S}_{q;1}^{d}( E_{a,q})\mathcal{S}_{q;1}^{d}( E_{b,q})$ with respect to $a$ and find $\partial_a^m \mathcal{S}_{q;1}^{d}(E_{a,q})\mathcal{S}_{q;1}^{d}(E_{b,q})=\mathcal{S}_{q;1}^{d}(E^{[m]}_{a,q})\mathcal{S}_{q;1}^{d}(E_{b,q})$. If we show that we may differentiate $\mathcal{S}_{q;(1,2)}^{d}(E_{a,q}E_{b,q})$ with respect to $a$ with $\partial_a^m\mathcal{S}_{q;(1,2)}^{d}(E_{a,q}E_{b,q})=\mathcal{S}_{q;(1,2)}^{d}(E^{[m]}_{a,q}E_{b,q})$, we will deduce with Theorem~\ref{theo:2} that $\mathcal{S}_{q;(1,2)}^{d}(E^{[m]}_{a,q}E_{b,q})=\mathcal{S}_{q;1}^{d}(E^{[m]}_{a,q})\mathcal{S}_{q;1}^{d}(E_{b,q})$. Proceeding similarly with the derivative with respect to $b$ will then complete the proof. So it suffices to show that we may differentiate $\mathcal{S}_{q;(1,2)}^{d}(E_{a,q}E_{b,q})$ with respect to $a$ with $\partial_a^m\mathcal{S}_{q;(1,2)}^{d}(E_{a,q}E_{b,q})=\mathcal{S}_{q;(1,2)}^{d}(E^{[m]}_{a,q}E_{b,q})$. \par By definition, the derivation in $a$ commutes with $\hat{{\mathcal B}}_{q;1}$, so that $\partial_{a}^m f_1=\partial_{a}^m \hat{{\mathcal B}}_{q;1}(E_{a,q} E_{b,q})=\hat{{\mathcal B}}_{q;1}(E^{[m]}_{a,q} E_{b,q})$. Let us fix a compact $K\subset \CC^* \setminus \RR_{>0}e^{\mathbf{i}(d+\pi)}$. Recall, see \eqref{f1series}, that we have the expression ${f_1=\sum_{n\ge 0}\frac{\zeta^{2n}\,(ab q^n-\zeta^2)}{(ab)^nq^{n^2}\,(a q^{n}+\zeta)(b q^{n}+\zeta)}}$. Then, there exists $M>0$ such that for all $\zeta\in \RR_{>0}e^{\mathbf{i}d}$, all $n\geq 0$ and all $a\in K$, $\left| \frac{\zeta^{2n}\,(ab q^n-\zeta^2)}{(ab)^nq^{n^2}\,(a q^{n}+\zeta)(b q^{n}+\zeta)}\right|< \left|\frac{M\zeta^{2n+2}}{b^n q^{n^2}(b q^{n}+\zeta)}\right|$.
Similarly to the proof of \cite{dreyfus2015building}, Proposition~2.13, (3), we find that $\sum_{n\ge 0} \left|\frac{M\zeta^{2n+2}}{b^n q^{n^2}(b q^{n}+\zeta)}\right| \in \EE_{q;2}^d$. Furthermore, we deduce as in Example \ref{ex2} that $\partial_{a}^m f_1\in \EE_{q;2}^d$ and $\cL_{q;2}^{d}(\partial_{a}^m f_1)=\partial_{a}^m \cL_{q;2}^{d}( f_1)=\partial_{a}^m f_2^d$. We now use \eqref{qBorelf22} to deal with $f_2^d$. We need to bound $\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})$ and $\frac{1}{q^{-1/2} \zeta^2 -ab }$ uniformly in $a$. Let us begin with $\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})$. As we can see in Example \ref{ex2}, there exists $M>0$ such that for all $a\in K$, $$\left|\mathcal{S}_{q}^{d}(E^{[m]}_{a,q})\right|\leq\frac{1}{\sqrt{2\pi\log (q)}}\,\displaystyle\int_0^{\infty e^{\mathbf{i}d}}\left|e^{-(\log(\frac{x}{\sqrt{q}\xi}))^2/(2\log(q))}\,\,\frac{d\xi}{M^{m+1}\xi}\right|.$$ Setting $x=re^{\mathbf{i}t}$, we find that the latter expression is bounded by $$\frac{e^{t^2/(2\log(q))} }{\sqrt{2\pi\log (q)}M^{m+1}}\, \displaystyle\int_0^{\infty e^{\mathbf{i}d}}\left|e^{-(\log(\frac{r}{\sqrt{q}\xi}))^2/(2\log(q))}\,\,\frac{d\xi}{\xi}\right|.$$ Now consider $\frac{1}{q^{-1/2} \zeta^2 -ab }$ in the case $d\in \RR \setminus S'$. Let $M'>0$ be such that for all $a\in K$ and all $\zeta\in \RR_{>0}e^{\mathbf{i}d}$, $\left|q^{-1/2} \zeta^2 -ab \right| >M'$. Then, we may bound the function ${f_2^{d}(\zeta)=\frac{-a \,\mathcal{S}_{q;2}^{d}(E_{a,q^{1/2}})(\zeta)-b\,\mathcal{S}_{q;2}^{d}(E_{b,q^{1/2}})(\zeta)}{q^{-1/2} \zeta^2 -ab }}$, uniformly in $a\in K$, by a function to which $\cL_{q;2}^{d}$ may be applied. This shows that we may differentiate, with $\partial_a^{m}\cL_{q;2}^{d}(f_2^{d})=\cL_{q;2}^{d}(\partial_a^{m} f_2^{d})$. For the remaining cases $d\notin S$, the bound may fail when $q^{-1/2} \zeta^2 =ab$.
In that case, by Remark \ref{rem1}, $f_{2}^{d}\in \EE_{q;2}^d$, so that $f_{2}^{d}$ is analytic at $q^{-1/2} \zeta^2 =ab$ and may thus be correctly bounded. We have proved that $$\cL_{q;2}^{d}\circ\cL_{q;2}^{d}\circ\hat{{\mathcal B}}_{q;1}(E^{[m]}_{a,q} E_{b,q})=\partial_a^m\mathcal{S}_{q;(1,2)}^{d}( E_{a,q}E_{b,q}).$$ Then, $ E^{[m]}_{a,q}E_{b,q}\in \CC[[x]]_{q;(1,2)}^{d}$ and $\mathcal{S}_{q;(1,2)}^{d}(E^{[m]}_{a,q}E_{b,q})=\partial_a^{m}\mathcal{S}_{q;(1,2)}^{d}(E_{a,q}E_{b,q})$. This suffices to conclude the proof. \end{proof} \subsection{Product theorem} Let us now state and prove the main result of the paper. \begin{thm}\label{thm4} Let $f$ (resp. $g$) be a series solution of a linear $q$-difference equation with slopes $0$ and $1$. Then $fg\in \mathcal{MS}_{q}$. More precisely, let $d\in \RR$ be such that $f\in \CC[[x]]_{q;1}^d$ and $g\in \CC[[x]]_{q;1}^d$. Then $fg\in \CC[[x]]_{q;(1,2)}^d$ and, for all $x\in\widetilde{\CC}^*$, $$ \mathcal{S}_{q;(1,2)}^{d}(fg)(x)=\mathcal{S}_{q;1}^{d}(f)\mathcal{S}_{q;1}^{d}(g)(x).$$ \end{thm} \begin{proof} By Theorem \ref{thm2}, there exists a decomposition $$ f =\displaystyle \sum_{i=0}^{k}f_i E^{[m_i]}_{a_i,q}$$ such that $f_0,\dots, f_k \in \CC\{ x \}$, $a_i\in \CC^*$ and $m_i\in {\mathbb N}$. The same holds for $g$: $$ g =\displaystyle \sum_{j=0}^{\ell}g_j E^{[n_j]}_{b_j,q},$$ with $g_0,\dots, g_\ell \in \CC\{ x \}$, $b_j\in \CC^*$ and $n_j\in {\mathbb N}$.
Then we find $$fg=\displaystyle \sum_{i=0}^{k}\sum_{j=0}^{\ell} f_i g_j E^{[m_i]}_{a_i,q}E^{[n_j]}_{b_j,q} .$$ Let $d\in \RR$ be such that $f\in \CC[[x]]_{q;1}^d$ and $g\in \CC[[x]]_{q;1}^d$. The singular directions of $f$ (resp. $g$) correspond to the directions $\arg(-a_i)+2\pi\ZZ$ with $m_i\neq 0$ (resp. $\arg(-b_j)+2\pi\ZZ$ with $n_j\neq 0$). Let $S\subset \RR$ be the union of the singular directions of $f$ and $g$. By Corollary~\ref{cor1}, when $m_i,n_j \in {\mathbb N}^*$, $E^{[m_i]}_{a_i,q}E^{[n_j]}_{b_j ,q}\in \CC[[x]]_{q;(1,2)}^d$ and, for $x\in\widetilde{\CC}^*$, $\mathcal{S}_{q;(1,2)}^{d}(E^{[m_i]}_{a_i,q} E^{[n_j]}_{b_j ,q})(x)=\mathcal{S}_{q;1}^{d}(E^{[m_i]}_{a_i,q})\mathcal{S}_{q;1}^{d}(E^{[n_j]}_{b_j ,q})(x)$. By Proposition~\ref{prop1}, the map $f\mapsto \mathcal{S}_{q}^{d}(f)$ is a morphism of $\CC \{x\}$-modules, so that we have $f_i g_j \mathcal{S}_{q;(1,2)}^{d}\left(E^{[m_i]}_{a_i,q} E^{[n_j]}_{b_j ,q} \right)= \mathcal{S}_{q;(1,2)}^{d}\left(f_i g_jE^{[m_i]}_{a_i,q} E^{[n_j]}_{b_j ,q} \right)$, $\mathcal{S}_{q;1}^{d}(f)=\displaystyle \sum_{i=0}^{k}f_i \mathcal{S}_{q;1}^{d}\left(E^{[m_i]}_{a_i,q} \right)$, and $\mathcal{S}_{q;1}^{d}(g)=\displaystyle \sum_{j=0}^{\ell} g_j \mathcal{S}_{q;1}^{d}\left( E^{[n_j]}_{b_j ,q} \right)$. Then, $$\begin{array}{lll} \mathcal{S}_{q;1}^{d}(f)\mathcal{S}_{q;1}^{d}(g)(x)&=&\displaystyle \sum_{i=0}^{k}\sum_{j=0}^{\ell}f_i g_j \mathcal{S}_{q;(1,2)}^{d}\left(E^{[m_i]}_{a_i,q} E^{[n_j]}_{b_j ,q} \right)\\ &=&\displaystyle \sum_{i=0}^{k}\sum_{j=0}^{\ell} \mathcal{S}_{q;(1,2)}^{d}\left(f_i g_j E^{[m_i]}_{a_i,q} E^{[n_j]}_{b_j ,q} \right)\\ &=& \mathcal{S}_{q;(1,2)}^{d}(fg)(x),\end{array}$$ which shows that $fg\in \CC[[x]]_{q;(1,2)}^d$.
This completes the proof. \end{proof} \section{Inverse of the $q$-Euler series}\label{sec5} In this section, we study the inverse of the $q$-Euler series $E_{1,q}$. More precisely, we prove that if it is $q$-multisummable in the direction $d=0$, then ${\mathcal S}_{q}^{0}(E_{1,q}^{-1})\neq {\mathcal S}_{q}^{0}(E_{1,q})^{-1}$. In particular, this means that there is no hope of defining a morphism of fields. \par One can observe that $$ {\mathcal S}_{q;1}^{2N\pi}(E_{1,q})(x)= \frac{1}{\sqrt{2\pi\log (q)}}\,\int_0^{\infty}\frac{e^{-(\log(\frac{x}{\sqrt{q}\xi e^{2N\pi\mathbf{i} }}))^2/(2\log(q))}}{1+\xi}\,\,\frac{d\xi}{\xi}\,.$$ This implies that \begin{equation} \label{Sum+n} {\mathcal S}_{q;1}^{2N\pi}(E_{1,q})(x)={\mathcal S}_{q;1}^{0}(E_{1,q})(x e^{-2N\pi\mathbf{i} })\,. \end{equation} Furthermore, applying the residue theorem as in Example \ref{ex1} yields that \begin{equation} \label{Stokes} {\mathcal S}_{q;1}^0 (E_{1,q})(x)-{\mathcal S}_{q;1}^{2\pi}(E_{1,q})(x)=\sqrt{\frac{2\pi}{\log(q)}}\, \mathbf{i}\,e^{-(\log(\frac{x}{\sqrt{q}}e^{-\pi\mathbf{i}}))^2/(2\log(q))}\,. \end{equation} Iterating \eqref{Stokes} several times implies that $$ {\mathcal S}_{q;1}^0(E_{1,q})(x)-{\mathcal S}_{q;1}^{2N\pi}(E_{1,q})(x)=\sqrt{\frac{2\pi}{\log(q)}}\, \mathbf{i}\,\sum_{k=0}^{N-1}e^{-(\log(\frac{x}{\sqrt{q}}e^{-\pi\mathbf{i} (2k+1)}))^2/(2\log(q))}\,.
$$ Thus, combining this together with \eqref{Sum+n} gives the following identity for all $x\in\tilde\CC^*$: \begin{equation} \label{StokesN} {\mathcal S}_{q;1}^0 (E_{1,q})(x)={\mathcal S}_{q;1}^{0}(E_{1,q})(xe^{-2\pi\mathbf{i} N})+\sqrt{\frac{2\pi}{\log(q)}}\, \mathbf{i}\,\sum_{k=0}^{N-1}e^{-(\log(\frac{x}{\sqrt{q}}e^{-\pi\mathbf{i} (2k+1)}))^2/(2\log(q))}\,. \end{equation} Furthermore, replacing $x$ with $x\,e^{2\pi\mathbf{i} N}$ in \eqref{StokesN} immediately implies \begin{equation} \label{Stokes-N} {\mathcal S}_{q;1}^0(E_{1,q})(x)={\mathcal S}_{q;1}^{0}(E_{1,q})(xe^{2\pi\mathbf{i} N})-\sqrt{\frac{2\pi}{\log(q)}}\, \mathbf{i}\,\sum_{k=0}^{N-1}e^{-(\log(\frac{x}{\sqrt{q}}e^{\pi\mathbf{i} (2k+1)}))^2/(2\log(q))}\,. \end{equation} The following lemma will allow us to control the sums in \eqref{StokesN} and \eqref{Stokes-N}. \begin{lem} \label{lemN} Let $(N,r_0,\lambda)\in{\mathbb N}^* \times\CC\times\RR_{>0}$ and define $$ f(N;r_0,\lambda)=\sum_{k=0}^{N-1}e^{\lambda(N-k+r_0)^2}\,. $$ For all $r_1<r_2$, there exist $C,C'>0$ such that for all $N\in{\mathbb N}^*$ and all $r_0$ with $\Re (r_0)\in[r_1,r_2]$, \begin{equation} \label{fNr-1} C \left|e^{\lambda(N+r_0)^2}\right|\leq \left|f(N;r_0,\lambda)\right|\leq C' \left|e^{\lambda(N+r_0)^2}\right|. \end{equation} \end{lem} \begin{proof} Let us fix $r_1<r_2$. Consider $e^{-\lambda(N+r_0)^2}f(N;r_0,\lambda)$ with $\Re (r_0)\in[r_1,r_2]$.
One has $$e^{-\lambda(N+r_0)^2}f(N;r_0 ,\lambda)=\sum_{k=0}^{N-1}e^{-\lambda(N+r_0)^2}e^{\lambda(N-k+r_0)^2} =\sum_{k=0}^{N-1}e^{k\lambda (k-2N-2r_0)} .$$ The degree-two polynomial $x\mapsto \lambda x (x-2N-2\Re(r_0))$ attains its minimum at $N+\Re(r_0)$. Therefore, there exists $C'>0$ such that for all $\Re (r_0)\in[r_1,r_2]$ and all $N\in {\mathbb N}^*$, $$1+\sum_{k=1}^{N-1}\left|e^{k\lambda (k-2N-2r_0)}\right|\leq 1+ N \max(e^{\lambda (1-2N-2\Re(r_0))}, e^{(N-1)\lambda (-N-1-2\Re(r_0))}) \leq C'.$$ Hence $\sum_{k=0}^{N-1}e^{k\lambda (k-2N-2r_0)}$ stays bounded as $N$ goes to infinity, proving the upper bound. \par One has $$\left|e^{-\lambda(N+r_0)^2}f(N;r_0,\lambda)\right|\geq \left|1- \left| \sum_{k=1}^{N-1}e^{k\lambda (k-2N-2r_0)} \right|\; \right| .$$ Again, $\left| \sum_{k=1}^{N-1}e^{k\lambda (k-2N-2r_0)} \right|\leq N \max(e^{\lambda (1-2N-2\Re(r_0))}, e^{(N-1)\lambda (-N-1-2\Re(r_0))})$, proving that there exists $C>0$ such that for all $\Re (r_0)\in[r_1,r_2]$ and all sufficiently large $N$, ${\left|1- \left| \sum_{k=1}^{N-1}e^{k\lambda (k-2N-2r_0)} \right|\; \right| \geq C}$. Shrinking $C$ if necessary, we find that for all ${\Re (r_0)\in[r_1,r_2]}$ and all $N\in {\mathbb N}^*$, $C \left|e^{\lambda(N+r_0)^2}\right|\leq \left|f(N;r_0,\lambda)\right|$. \end{proof} Let us now use \eqref{StokesN} and \eqref{Stokes-N} to give an estimate for ${\mathcal S}_{q;1}^0 (E_{1,q})(x)$.
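Before doing so, the two-sided bound \eqref{fNr-1} of Lemma \ref{lemN} can be sanity-checked numerically. The following sketch uses the illustrative parameters $\lambda=1$ and real $r_0=0.3$, which are ours and not tied to the application below:

```python
# Numerical sanity check (sketch) of the bound in Lemma lemN:
# f(N; r0, lam) = sum_{k=0}^{N-1} exp(lam*(N-k+r0)^2) should be comparable
# to exp(lam*(N+r0)^2), uniformly in N.  Illustrative real parameters only.

import math

def f(N, r0, lam):
    return sum(math.exp(lam * (N - k + r0) ** 2) for k in range(N))

lam, r0 = 1.0, 0.3
ratios = [f(N, r0, lam) / math.exp(lam * (N + r0) ** 2) for N in range(1, 15)]

# The k = 0 term alone gives ratio 1; the remaining terms decay rapidly,
# so the ratio stays in a fixed band, as the lemma asserts.
assert all(1.0 <= rho <= 1.5 for rho in ratios)
```

The $k=0$ term dominates the sum, which is why the ratio is pinned near $1$; the lemma's constants $C,C'$ absorb the remaining geometric-type tail.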
\begin{lem}\label{lem11} There exist $C(r),R>0$ such that for all $(r,t)\in(0,R)\times\RR$, $$ \left|\frac{1}{{\mathcal S}_{q;1}^0(E_{1,q})(re^{\mathbf{i}t})}\right|\leq C(r)e^{-\frac{t^2}{2\log (q)}}. $$ \end{lem} \begin{proof} First, we will consider the case where $t\to+\infty$. Choose $N\in{\mathbb N}$ such that $t\in[2N\pi,2(N+1)\pi)$, and set $$ \lambda:=\frac{2\pi^2}{\log(q)}, \qquad r_0:=-\frac{\mathbf{i}}{2\pi}\,\log\left(\frac{x\,e^{-\pi\mathbf{i} }}{\sqrt{q}}\right)-N\in\left[-\frac12,\frac12\right)\oplus\left(\mathbf{i}\RR\right)\,. $$ One can notice that, for any integer $k$, $$ e^{-(\log(\frac{re^{\mathbf{i}t}}{\sqrt{q}}e^{-\pi\mathbf{i} (2k+1)}))^2/(2\log(q))}=e^{\lambda(N-k+r_0)^2}\,. $$ Combining \eqref{fNr-1} with \eqref{StokesN}, it follows that there exists $C>0$ such that for all $N\in {\mathbb N}^*$ and all $t>0$, $$ \left|{\mathcal S}_{q;1}^0(E_{1,q})(re^{\mathbf{i}t})\right|\geq \left|\left|{\mathcal S}_{q;1}^{0}(E_{1,q})(re^{\mathbf{i}t}e^{-2\pi\mathbf{i} N})\right|-C \left|\sqrt{\frac{2\pi}{\log(q)}}\, \,e^{-(\log(\frac{re^{\mathbf{i}t}}{\sqrt{q}}e^{-\pi\mathbf{i} }))^2/(2\log(q))}\right| \right| . $$ When $t$ goes to $+\infty$, $e^{-(\log(\frac{re^{\mathbf{i}t}}{\sqrt{q}}e^{-\pi\mathbf{i} }))^2/(2\log(q))}=O(e^{\frac{t^2}{2\log (q)}})$.
Taking the inverse of the latter expression gives the result for $t>0$ sufficiently large. Enlarging the constant if necessary, we deduce the result for all $t>0$. \par Finally, one can notice that the case $\arg(x)\to-\infty$ may be treated in a similar way, using \eqref{Stokes-N} instead of \eqref{StokesN}. \end{proof} Let us now prove that the Borel transformation of a function with negative $q$-exponential angular growth is still a function with negative $q$-exponential angular growth. \begin{lem}\label{lem12} Let $s,s'\in \QQ_{>0}$. Let $f$ be a function meromorphic in a neighborhood of $0$ in the Riemann surface of the logarithm. Assume that there exist $C(R),R>0$ such that for all $(r,t)\in(0,R)\times\RR$, $$ |f(re^{\mathbf{i}t})|\,< C(R)e^{-\frac{t^2}{2s'\log (q)}}\,. $$ Then $f\in \OO_{q;s}^0$ and we may consider ${\mathcal B}_{q;s}^0 (f)$. Furthermore, there exists $C'(R)>0$ such that for all $(r,t)\in(0,R)\times\RR$, $$ |{\mathcal B}_{q;s}^0 (f)(re^{\mathbf{i}t})|\,<C'(R) e^{-\frac{t^2}{2\widetilde{s}\log (q)}},\,\quad \widetilde{s}=s+s'. $$ \end{lem} \begin{proof} By definition, we may apply ${\mathcal B}_{q;s}^0$ to $f$. Let $g={\mathcal B}_{q;s}^0 (f)$. We have $$g(re^{\mathbf{i}t})=\frac{1}{\sqrt{2\pi\log (q^{s})}\,\mathbf{i}}\,\displaystyle\int_{\partial^+\widetilde D_{r}}e^{\frac{\left(\log(\frac{x}{\sqrt{q^{s}}re^{\mathbf{i}t}})\right)^2}{2\log(q)s}}\,f(x)\,\frac{dx}{x}.$$ To simplify, let us write $C$ instead of $C(R)$. If we write $x=re^{\mathbf{i}\theta}$, we find $\frac{dx}{x}=\mathbf{i} d\theta$.
Then, $$\begin{array}{l} |g(re^{\mathbf{i}t})|= \left|\frac{1}{\sqrt{2\pi\log (q^{s})}\,}\,\displaystyle\int_{-\infty}^{+\infty}e^{\frac{\left(\log(\frac{e^{\mathbf{i}\theta}}{\sqrt{q^{s}}e^{\mathbf{i}t}})\right)^2}{2\log(q)s}}\,f(re^{\mathbf{i}\theta})\,d\theta\right|\\ \leq \left|\frac{1}{\sqrt{2\pi\log (q^{s})}}\,\displaystyle\int_{-\infty}^{+\infty}e^{\frac{\left(\log(q^{-s/2})\right)^2}{2\log(q)s}} e^{\frac{\left(\log(e^{\mathbf{i}(\theta-t)})\right)^2}{2\log(q)s}} e^{\frac{\log(q^{-s/2})\log(e^{\mathbf{i}(\theta-t)})}{s\log(q)}} C e^{-\frac{\theta^2}{2s'\log (q)}}\,d\theta\right|\\ \leq \frac{1}{\sqrt{2\pi\log (q^{s})}}\,\displaystyle\int_{-\infty}^{+\infty} e^{\frac{\left(\log(q^{-s/2})\right)^2}{2\log(q) s}} e^{-\frac{(\theta -t)^2}{2s\log (q)}} \left|\left(q^{-s/2}\right)^{\frac{\mathbf{i}(\theta-t)}{s\log(q)}}\right| C e^{-\frac{\theta^2}{2s'\log (q)}}\,d\theta \\ \leq \frac{Ce^{\frac{\left(\log(q^{-s/2})\right)^2}{2\log(q)s}} }{\sqrt{2\pi\log (q^{s})}}\,\displaystyle\int_{-\infty}^{+\infty}e^{-\frac{(\theta -t)^2}{2s\log (q)}} e^{-\frac{\theta^2}{2s'\log(q)}}\,d\theta \\ \leq \frac{Ce^{\frac{\left(\log(q^{-s/2})\right)^2}{2\log(q)s}} e^{-\frac{t^2}{2s \log (q)}} e^{\frac{s^{-2}t^2}{2(s^{-1}+s'^{-1}) \log (q)}} }{\sqrt{2\pi\log (q^{s})}}\,\displaystyle\int_{-\infty}^{+\infty}e^{-(s^{-1}+s'^{-1})\frac{(\theta -s^{-1}t/(s^{-1}+s'^{-1}))^2}{2\log (q)}} \,d\theta \\ \leq \frac{Ce^{(\log(q^{-s/2}))^2/(2s\log(q))} e^{-\left(\frac{1}{s}-\frac{1}{s^2(s^{-1}+s'^{-1}) }\right) \frac{t^2}{2 \log (q)}} }{\sqrt{2\pi\log (q^{s})}}\,\displaystyle\int_{-\infty}^{+\infty}e^{-(s^{-1}+s'^{-1})\frac{\theta^2}{2\log (q)}} \,d\theta . \end{array} $$ We now remark that $\frac{1}{s}-\frac{1}{s^2(s^{-1}+s'^{-1}) }=\frac{1}{s}-\frac{1}{s+s^2 /s' }=\frac{s+s^2/s' -s}{s^2 +s^3/s'}=\frac{s^2/s'}{s^2 +s^3/s'}=\frac{1}{s+s'} $. This proves the result. \end{proof} We are now ready to prove the main result of the section.
\begin{thm}\label{thm:inverse} For every finite strictly increasing sequence $\vec s$ of positive rational numbers, there exists no power series $ f\in\CC[[x]]_{q;\vec s}^0$ such that $\mathcal{S}_{q;\vec s}^{0}( f)=\left(\mathcal{S}_{q;1}^{0}(E_{1,q})\right)^{-1}$. Therefore, if $E_{1,q}^{-1}\in \CC[[x]]_{q}^{0}$, then ${\mathcal S}_{q}^{0}(E_{1,q}^{-1})\neq{\mathcal S}_{q}^{0}(E_{1,q})^{-1}$. \end{thm} \begin{proof} Assume, to the contrary, that $\mathcal{S}_{q;\vec s}^{0}( f)=\left( \mathcal{S}_{q;1}^{0}(E_{1,q})\right)^{-1}$ for some suitable increasing sequence $\vec s=(s_1,...,s_r)$. Let $(\widetilde{s}_1,...,\widetilde{s}_r)$ be its associated sequence as in Section~\ref{subsection:notation5}. By Proposition \ref{prop3}, one has $$\mathcal{S}_{q;\vec s}^{0}( f)=\cL_{q;\widetilde{s}_r}^0\circ \dots\circ\cL_{q;\widetilde{s}_1}^0\circ{\mathcal S}^0\circ \hat{{\mathcal B}}_{q;s_1}( f)\in \OO_{q;\widetilde{s}_r}^0,$$ and $${\mathcal B}_{q;\widetilde{s}_1}\circ\dots \circ {\mathcal B}_{q;\widetilde{s}_r}\left(\left(\mathcal{S}_{q}^{0}(E_{1,q})(x)\right)^{-1}\right)=\hat{{\mathcal B}}_{q;s_1}\left( f\right).$$ Furthermore, Lemma \ref{lem11} states that $$ \left(\mathcal{S}_{q;1}^{0}(E_{1,q})(x)\right)^{-1}=O\left(e^{-(\log(\frac{x}{\sqrt q}))^2/(\log(q))}\right)\,. $$ Thus, applying Lemma \ref{lem12} several times implies that $$\hat{{\mathcal B}}_{q;s_1}\left(\left(\mathcal{S}_{q;1}^{0}(E_{1,q})(x)\right)^{-1}\right)=O\left(e^{-{\,x}/({2\tilde{s}\log(q)})}\right),$$ where $\tilde s$ is a convenient positive constant. Since $\hat{{\mathcal B}}_{q;s_1}( f)$ is a convergent power series, we find that the only possibility is ${\hat{{\mathcal B}}_{q;s_1}( f)=0}$. This implies $ f=0$, which is a contradiction. \end{proof} \end{document}
\begin{document} \title{Fairness for Multi-Self Agents} \begin{abstract} We investigate whether fairness is compatible with efficiency in economies with multi-self agents, who may not be able to integrate their multiple objectives into a single complete and transitive ranking. We adapt envy-freeness, egalitarian-equivalence and the fair-share guarantee in two different ways. An allocation is \emph{unambiguously-fair} if it satisfies the chosen criterion of fairness according to every objective of any agent; it is \emph{aggregate-fair} if it satisfies the criterion for some aggregation of each agent's objectives. While efficiency is always compatible with the unambiguous fair-share guarantee, it is incompatible with unambiguous envy-freeness in economies with at least three agents. Two agents are enough for efficiency and unambiguous egalitarian-equivalence to clash. Efficiency and the unambiguous fair-share guarantee can be attained together with aggregate envy-freeness, or aggregate egalitarian-equivalence. \end{abstract} \section{Introduction} We consider multi-self agents who pursue a variety of potentially conflicting objectives. An agent may, for example, base her decisions on her career prospects, her family life and her immediate enjoyment. A fully rational agent is able to aggregate all her objectives into a single complete and transitive ranking; a boundedly rational agent may fail to do so. In an economy with such boundedly rational agents, we consider three criteria of fairness. An allocation has the \emph{fair-share guarantee (FS)}\footnote{Also known as \emph{proportionality}.} if each agent prefers their bundle to the average bundle; it is \emph{envy-free (EF)} if each agent prefers their bundle to any other agent's; it is \emph{egalitarian-equivalent (EE)} if each agent is indifferent between their bundle and some fixed reference bundle.
For each fairness criterion, we call an allocation \emph{unambiguously fair} if it satisfies the fairness criterion according to each objective of each agent. For an illustration, consider agents that are driven by two selves: one with a cool-headed long-run view and another that greedily seeks immediate gratification.\footnote{ \citet{kreps1979representation} and \citet{gul2001temptation, gul2004self} introduced such agents together with proposals for their complete and transitive rankings over choice sets.} An allocation among such ``tempted'' agents is unambiguously envy-free if neither self of any agent envies any other agent. In this case, no agent envies any other agent according to any aggregation of her two objectives. The agents may then avoid the mental load of weighing their long-term goals against their immediate greed. With rational agents, each of the three concepts of fairness is --- under generic conditions --- compatible with efficiency. We then ask whether unambiguous fairness is compatible with efficiency when agents are boundedly rational. The assumption of bounded rationality has two countervailing effects: on the one hand, the set of Pareto optima can be large. Unambiguous fairness is, on the other hand, hard to satisfy. A priori, it is not clear which of these two effects dominates. We find in Theorem \ref{positive-pofs} that efficiency is always compatible with the unambiguous fair-share guarantee. Theorem \ref{positive-pone-2} shows that any economy with just two agents, in which all selves have convex preferences, has an unambiguously envy-free Pareto optimum. Unambiguous no-envy, however, conflicts with efficiency in economies with three or more agents (Proposition \ref{negative-pone}). Two agents suffice for unambiguous egalitarian-equivalence to conflict with efficiency (Proposition \ref{negative-poee}).
Faced with the non-existence of unambiguously envy-free or egalitarian-equivalent Pareto optima, we investigate these fairness criteria according to some aggregation of all agents' objectives into rational preferences. Theorems \ref{positive-collective-ef} and \ref{positive-collective-ee} show that some Pareto optima with the unambiguous fair-share guarantee respectively satisfy aggregate no-envy and aggregate egalitarian-equivalence, when all selves have strictly convex preferences. The aggregators used in the two results do not vary with the economies under consideration. To understand these aggregators, interpret the agents and their selves as families and their members. If the Rawlsian criterion of justice is used to resolve intra-family conflict, the leximin aggregator used in Theorem \ref{positive-collective-ef} arises. If the family instead aggregates its members' objectives using Nash bargaining (maximizing the product of members' utilities), we obtain the aggregator used in Theorem \ref{positive-collective-ee}. When agents are rational, market equilibria from equal endowments have the fair-share guarantee and are envy-free \citep{foley1967resource}. So when agents are rational, envy-free Pareto optima with the fair-share guarantee exist wherever there are market equilibria. With boundedly rational agents, this strong nexus between no-envy and market equilibrium disappears: Proposition \ref{negative-ceei} shows that, even if some unambiguously envy-free Pareto optima arise as market equilibria, it may be necessary to give less rational agents larger endowments to obtain such allocations in market equilibrium. \section{Preliminaries} \subsection{Goods and Agents} There are $G$ different homogeneous divisible goods and a finite set of agents $I$.
The set of consumption bundles is $\mathbb{R}_+^G$; the total endowment is $e\in\mathbb{R}_+^G$ with $e\gg0$.\footnote{For any two vectors $x,x'\in\mathbb{R}^m$ for some integer $m$, say $x\geq x'$ if $x_i\geq x'_i$ for all $i$, $x>x'$ if $x\geq x'$ but not $x=x'$, and $x\gg x'$ if $x_i>x'_i$ for all $i$.} An \emph{allocation} is a vector $\mathbf{x} = \{x_i\}_{i\in I}$ of consumption bundles $x_i\in \mathbb{R}_+^G$ whose sum does not exceed the total endowment: $\sum_{i\in I} x_i\leq e$. $X$ is the set of all allocations. The average bundle $\overline{e}{}:=\frac{1}{\mid I\mid}e$ defines each agent's \emph{fair-share} of the total endowment, and each agent gets the fair share in the equal split $\mathbf{\bar{e}} = (\overline{e},\ldots,\overline{e})$. \subsection{Selves and Aggregators} When evaluating allocations, agents only consider their own bundles. Each agent $i$ has a set of \emph{selves} $S_i$.\footnote{Alternatively, one could think of the agent's selves as his \emph{frames} in the sense of \cite{SalantRubinstein2008}.} The preferences of each such self $s\in S_i$ are represented by a continuous utility function $u_s: \mathbb{R}_+^G\to\mathbb{R}$, normalized such that $u_s(\overline{e})=0$. We assume throughout that $u_s$ is strictly increasing in all components, so all selves have strictly monotone preferences. The preferences represented by the utility $u_s$ are \emph{convex} if $u_s(x) > u_s(x')$ implies $u_s((1 - \alpha)x + \alpha x') > u_s(x')$ for all $x,x'\in \mathbb{R}_+^G$ and $\alpha\in(0,1)$. They are \emph{strictly convex} if $u_s((1 - \alpha)x + \alpha x') > u_s(x')$ also holds for any two different bundles $x$ and $x'$ with $u_s(x)=u_s(x')$. We say that an agent $i$ is \emph{more rational than} an agent $j$ if $S_i\subsetneq S_j$.\footnote{The relation ``more rational than'' is incomplete.
For example, a fully rational agent $i$ with a single self $S_i=\{u_s\}$ is not more rational than an agent $j$ with two selves if $u_s\notin S_j$.} If an economy is derived from another economy by (weakly) increasing each agent's set of selves, then the original economy is \emph{more rational than} the derived one. Agent $i$ \emph{unambiguously prefers} an option if all his selves prefer this option. Formally, agent $i$'s \emph{unambiguous preference} $\succsim^U_i$ is a --- typically incomplete --- transitive relation, where $x\succsim^U_i x'$ holds if and only if $u_s(x)\geq u_s( x')$ for all $s\in S_i$, and where $x\succ^U_i x'$ holds if in addition $u_{s'}(x)> u_{s'}( x')$ holds for some $s'\in S_i$.\footnote{ \citet{mandler2014indecisiveness,mandler2020distributive} calls $\succsim^U_i$ the \emph{behavioral preference}. \citet{BernheimRangel2009} define a similar relation based on revealed choices, denote it by $R'$ and call it \emph{weak unambiguous choice preference}. } A transitive and complete preference $\succsim^{agg}_i$ is an \emph{aggregator} of agent $i$'s selves if for all bundles $x,x'$, $x\succsim^U_i x'$ implies $x\succsim^{agg}_i x'$ and $x\succ^U_i x'$ implies $x\succ^{agg}_i x'$. \citet{szpilrajn1930extension}'s extension theorem guarantees the existence of such aggregators. An agent unambiguously prefers some bundle $x$ to a different bundle $x'$ ($x\succsim_i^U x'$) if and only if he prefers $x$ to $x'$ ($x\succsim_i^{agg} x'$) according to every aggregator $\succsim_i^{agg}$.\footnote{The proof is simple and we omit it. It is available upon request.} We also call a vector of aggregators $\succsim^{agg}\colon=(\succsim^{agg}_i)_{i\in I}$ an aggregator. The function $g((u_s(\cdot))_{s\in S_i}):\mathbb{R}_+^G\to \mathbb{R}$ represents an aggregator for agent $i$ if $g:\mathbb{R}^{\mid S_i\mid }\to \mathbb{R}$ is strictly increasing in all its components. If $g$ is the sum of its components, then $g$ represents the \emph{utilitarian aggregator}.
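The unambiguous preference and the utilitarian aggregator defined above can be illustrated with a small sketch; the two selves' utility functions below are hypothetical and purely illustrative:

```python
# Minimal sketch of the definitions above (illustrative utilities only):
# an agent unambiguously prefers x to x' iff every self weakly prefers x,
# with at least one strict preference; the utilitarian aggregator sums
# the selves' utilities.

def unambiguously_prefers(selves, x, xp):
    """selves: list of utility functions over bundles."""
    weak = all(u(x) >= u(xp) for u in selves)
    strict = any(u(x) > u(xp) for u in selves)
    return weak and strict

def utilitarian(selves, x):
    return sum(u(x) for u in selves)

# Two hypothetical selves over two-good bundles (both strictly monotone):
selves = [lambda x: x[0] + 2 * x[1], lambda x: 2 * x[0] + x[1]]

x, xp = (2.0, 2.0), (1.0, 1.0)
assert unambiguously_prefers(selves, x, xp)              # both selves agree
assert utilitarian(selves, x) > utilitarian(selves, xp)  # aggregator concurs

# When the selves disagree, the unambiguous preference is silent ...
y, yp = (3.0, 0.0), (0.0, 3.0)
assert not unambiguously_prefers(selves, y, yp)
assert not unambiguously_prefers(selves, yp, y)
# ... but the (complete) utilitarian aggregator still ranks them:
assert utilitarian(selves, y) == utilitarian(selves, yp)
```

This also shows why the aggregator respects every unambiguous comparison: a strictly increasing $g$ applied componentwise to weakly higher utility vectors cannot reverse the ranking.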
The \emph{leximin aggregator} is not representable by any function. To define it, fix any two bundles $x$ and $x'$ and compare the minimal utilities attained by the agent's selves: if $\min_{s\in S_i}u_s(x)>\min_{s\in S_{i}}u_s(x') $, then $x\succ^{lex}_i x'$. If these two minimal utilities are identical, then the leximin aggregator uses the two second-lowest utilities to rank $x$ and $x'$. Proceeding inductively, $\succsim^{lex}_i$ ranks $x$ and $x'$ as indifferent if and only if both are mapped to the same vector of utilities in increasing order of utility \citep{Dubins1961How,Moulin2004Fair}.\footnote{ The definitions of the utilitarian and leximin aggregators depend on the representations $u_s$ of the selves' preferences. The income-equivalence principle \citep{decancq2015happiness,FleurbaySchokkaert2013} provides a specific normalization with $u_s(\overline{e})=0$. } To illustrate such aggregators by means of a specific decision-theoretic model, consider the \cite{Bewley} model of Knightian uncertainty, where the different selves of an agent use different priors to evaluate uncertain events. \cite{Bewley} axiomatized the unambiguous preference $\succsim^U_i$ where agent $i$ prefers an option to a different one if the former yields a higher expected utility than the latter according to every prior of the agent. If such an agent uses the utilitarian aggregator, he is an expected utility maximizer. If he instead evaluates every choice according to the minimal expected utility over the set of all priors, he is represented by a maximin expected utility, following \citet{gilboa1989maxmin}.
\footnote{The proof of Theorem \ref{positive-collective-ef} addresses the difference between the leximin aggregator and the function that uses the minimal utility of any self to evaluate options.} \subsection{Notions of Fairness} An allocation satisfies the \emph{fair-share guarantee} if each agent weakly prefers his bundle to the fair-share; it is \emph{envy-free} if no agent strictly prefers the bundle of another agent; it is \emph{egalitarian-equivalent} if each agent is indifferent between his bundle and some fixed reference bundle $r$. For a given aggregator $\succsim^{agg}$ and a given fairness notion, we call an allocation $\succsim^{agg}$-\emph{aggregate}-fair if each agent $i$ considers it fair according to their aggregator $\succsim^{agg}_i$. Aggregate fairness coincides with standard fairness for rational agents whose preferences are represented by the aggregators. So existing tools suffice to study aggregate fairness. But there is a drawback: we have to commit to particular aggregators. To avoid such a dependence, we study unambiguously-fair allocations, which are fair according to every aggregator. Specifically, an allocation $\mathbf{x}$ is \emph{unambiguously envy-free (EF)} if there are no two agents $i,i'$ and self $s\in S_i$ such that $u_s(x_{i'})> u_s(x_i)$. It is \emph{unambiguously egalitarian-equivalent (EE)} if there exists a reference bundle $r$ such that, for each agent $i$, $u_s(r)=u_s(x_i)$ holds for all $s\in S_i$. The allocation $\mathbf{x}$ satisfies the \emph{unambiguous fair-share guarantee (FS)} if, for each agent $i$, $u_s(x_i)\geq u_s(\overline{e})$ holds for all $s\in S_i$, and $E$ is the set of all unambiguous-FS allocations.\footnote{ For each of the fairness notions, an allocation is unambiguously-fair if and only if it is aggregate-fair according to every vector of aggregators. The proof is available upon request. } Unambiguous fairness relies on the transitive but incomplete preferences $\succsim^U_i$ of agents.
For a different approach to fairness for boundedly rational agents, we instead consider complete but intransitive aggregators in Section \ref{sec:intrans}. \subsection{Pareto Optimality} An allocation $\mathbf{x}$ \emph{Pareto dominates} a different allocation $\mathbf{x}'$ if $u_s(x_i)\geq u_s( x'_i)$ holds for every self $s\in S_i$ of every agent $i$, while $u_{s'}(x_j)>u_{s'}(x'_j)$ holds for at least one agent $j$ and self $s'\in S_j$. An allocation is \emph{Pareto-optimal} if it is not Pareto-dominated by any other allocation. To formalize the two countervailing effects of bounded rationality on the set of fair and efficient allocations, consider a more-rational and a less-rational economy. Say that some allocation $\mathbf{x}'$ Pareto dominates a different allocation $\mathbf{x}$ in the less-rational economy, and say that no selves are indifferent between the two allocations. Then, clearly, $\mathbf{x}'$ also dominates $\mathbf{x}$ according to the smaller sets of selves in the more-rational economy. \footnote{To see that the clause of indifference matters, consider a fully rational economy in which all selves are indifferent between all bundles. Now add one self to one agent's set of selves and assume that this self strictly ranks all allocations. While no allocation dominates any other in the rational economy, this does not hold in the boundedly rational economy.} So, abstracting away from the issue of indifferences, the set of Pareto-optima increases when the economy becomes less rational. On the other hand, the set of unambiguously fair allocations in the more-rational economy is larger, since the chosen fairness notion then has to hold for fewer selves. Our first result shows that --- no matter the level of irrationality --- the set of Pareto optima contains some unambiguously-fair allocations. \begin{theorem} \label{positive-pofs} Any economy has unambiguous-FS Pareto optima.
\end{theorem} \begin{proof} Since $u_s$ is continuous for each $s\in \cup_{i\in I}S_i$, the set $E$ of unambiguous-FS allocations is compact. Define a function $F:X\to \mathbb{R}$ as the sum of all utilities of all agents' selves: $F(\mathbf{x})=\sum_{i\in I}\sum_{s\in S_i}u_s(x_i)$ for all $\mathbf{x}\in X$. By the compactness of $E$ and the continuity of $F$, some allocation $\mathbf{x}^*$ maximizes $F$ over $E$, so that $\mathbf{x}^*$ is Pareto optimal in $E$. Since no allocation outside $E$ Pareto dominates any allocation in $E$ (any allocation that dominates an element of $E$ gives every self weakly higher utility and hence lies in $E$ itself), $\mathbf{x}^*$ is Pareto optimal. Since $\mathbf{x}^*\in E$, $\mathbf{x}^*$ is unambiguous-FS. \end{proof} The above proof continues to hold if we use a different method to find a Pareto optimum in $E$ and if we drop the assumption of monotonic preferences. \subsection{Market Equilibria} \label{sub:equilibria} We borrow our notions of market equilibrium from behavioral welfare economics \citep{FonOtani1979,mandler2014indecisiveness}. A triplet $(\mathbf{p},\mathbf{x}^*,\mathbf{x}^0)$ with $\mathbf{x}^*, \mathbf{x}^0\in X$ and $\mathbf{p}\in \mathbb{R}^G$ is a \emph{market equilibrium from (endowments) $\mathbf{x}^0\in X$} if the following two conditions hold for each agent $i\in I$: \begin{enumerate} \item $\mathbf{p} x^*_i \leq \mathbf{p} x^0_i$, that is, each agent $i$ can afford bundle $x^*_i$; and \item \label{cond:affordable} For any bundle $x'\in\mathbb{R}_+^G$, if $\mathbf{p} x' \leq \mathbf{p} x^0_i$ then either $u_s(x')=u_s(x^*_i)$ for all selves $s\in S_i$, or $u_s(x')<u_s(x^*_i)$ for at least one self $s\in S_i$. \end{enumerate} The second condition is weak: agent $i$'s selves may not unanimously prefer a different affordable bundle to the choice $x^*_i$. Equivalently, any agent may buy any bundle that is optimal according to \emph{some} aggregation of her selves. So the definition remains agnostic as to the aggregator an agent uses when she shops.
Our result on the impossibility of attaining fairness in market equilibrium (Proposition \ref{negative-ceei}) is strengthened by this agnosticism.\footnote{If we instead required that every self of agent $i$ prefer $x^*_i$ to all affordable bundles, nonexistence of market equilibria would be immediate, as an agent's selves need not agree on optimal choices. Using aggregate preferences in the second condition would tie us down to a particular rational theory on how agents weigh their different selves.} \subsection{Examples with two goods} \label{sub:two-goods} In economies with two goods, we call the goods $y$ and $z$, and we normalize the total endowment of each good to $|I|$, so that $\overline{e}=(1,1)$. For a differentiable utility $u_s:\mathbb{R}^2_+\to \mathbb{R}$, the \emph{marginal rate of substitution} between $y$ and $z$ is: \begin{align*} MRS_s(y,z) := \frac{\partial u_s(y,z)}{\partial y}\bigg/\frac{\partial u_s(y,z)}{\partial z}. \end{align*} Two different utilities $u_s$ and $u_{s'}$ have the \emph{single crossing property} if any two indifference curves defined by $u_s(y,z)=\alpha$ and $u_{s'}(y,z)=\beta$ for some $\alpha,\beta\in \mathbb{R}$ share at most one point. Equivalently, $u_s$ and $u_{s'}$ have the single-crossing property if, for all bundles $(y,z)$ and $(y',z')$ and for either $u=u_s$, $u'=u_{s'}$ or $u=u_{s'}$, $u'=u_s$: \begin{align*}\mbox{If }y'> y\mbox{ and }z'<z\mbox{ then } u(y',z')\geq u(y,z)\Rightarrow u'(y',z')> u'(y,z)\\ \mbox{If }y'< y\mbox{ and }z'>z\mbox{ then } u'(y',z')\geq u'(y,z)\Rightarrow u(y',z')> u(y,z). \end{align*} Two differentiable utilities have the single-crossing property if the marginal rate of substitution of one is higher than the other's at each bundle $(y,z)$. For example, any two Cobb-Douglas utilities $u_s(y,z)=y^{\alpha}z^{1-\alpha}$ with different $\alpha$ have the single-crossing property.
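The Cobb-Douglas example can be verified numerically. The sketch below (the exponents $1/3$ and $2/3$ and the grid are ours, chosen for illustration) checks both the everywhere-ranked MRS and the resulting single-crossing implication:

```python
from itertools import product

# Cobb-Douglas utility y^a * z^(1-a) and its marginal rate of
# substitution between y and z, which is (a/(1-a)) * (z/y).
def u(a, y, z):
    return y ** a * z ** (1 - a)

def mrs(a, y, z):
    return (a / (1 - a)) * (z / y)

grid = [(i / 4, j / 4) for i in range(1, 9) for j in range(1, 9)]

# The a = 2/3 self has a strictly higher MRS everywhere...
assert all(mrs(2/3, y, z) > mrs(1/3, y, z) for y, z in grid)
# ...so whenever the low-MRS self weakly gains from more y and less z,
# the high-MRS self strictly gains: the single-crossing implication.
for (y, z), (y2, z2) in product(grid, repeat=2):
    if y2 > y and z2 < z and u(1/3, y2, z2) >= u(1/3, y, z):
        assert u(2/3, y2, z2) > u(2/3, y, z)
```

The same check with the two inequalities reversed verifies the second implication in the display above.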
Our proofs and examples use the characterization of the interior Pareto-optimal allocations as the set of allocations at which the intersection of all agents' MRS-ranges (the intervals between the smallest and the largest MRS of their selves) is non-empty. \begin{theorem} \label{po-characterization} Fix a two-good economy with convex preferences representable by differentiable utilities and an allocation $\mathbf{x}$ in the interior of $X$. Then $\mathbf{x}$ is Pareto optimal if and only if: \begin{align*} \bigcap_{i\in I}[\min_{s\in S_i}MRS_s(y_i,z_i),\max_{s\in S_i}MRS_s(y_i,z_i)]\neq\emptyset. \end{align*} \end{theorem} Theorem \ref{po-characterization} has been shown to hold in much more general environments by \citet{FonOtani1979,mandler2014indecisiveness}, and we omit its proof here. Simple modifications of the arguments in \citet{mas1974equilibrium} also prove Theorem \ref{po-characterization}. \section{Envy-freeness} \label{sec:pone} Theorem \ref{positive-pone-2} shows that two-agent economies with convex preferences always have unambiguous-EF Pareto optima. Economies with more agents, in contrast, need not have such allocations (Proposition \ref{negative-pone}). Even if unambiguous-EF Pareto optima exist, they may not arise as market equilibria from equal endowments. The market approach, however, turns out to be useful to find unambiguous-FS Pareto optima that are aggregate-EF. The impossibility results hold even if all but one agent are fully rational. \subsection{Unambiguous envy-freeness} \label{sub:pone-with-two} \begin{theorem} \label{positive-pone-2} Any two\-/agent economy in which all selves have convex preferences has an unambiguous\-/EF and unambiguous\-/FS Pareto optimum. \end{theorem} \begin{proof} By Theorem \ref{positive-pofs}, there exists an unambiguous-FS Pareto optimum $\mathbf{x}$.
To see that $\mathbf{x}$ is unambiguous-EF, suppose by contradiction that some self $s$ of some agent $i$ envies agent $j$, so that $u_s(x_i) < u_s(x_{j}) = u_s(e- x_i)$. By convexity, $u_s(x_i)$ must also be smaller than $u_s(\frac{1}{2}x_i+\frac{1}{2}(e- x_i))$. But the bundle $\frac{1}{2}x_i+\frac{1}{2}(e- x_i)$ is exactly the fair-share $\overline{e}$ --- a contradiction to the assumption that $\mathbf{x}$ is unambiguous-FS. \end{proof} The preceding proof only requires continuous and convex preferences; the assumption of monotonicity can be dropped. To show that Theorem \ref{positive-pone-2} does not extend to economies with more agents, we construct a two-good economy with two fully rational and one boundedly rational agent. All selves' utilities have the single-crossing property, and the preferences of the two fully rational agents 1 and 2 are intermediate between the two selves of the boundedly rational agent 3. To avoid envy between agent 3 and the two fully rational agents, agent 3 must consume the same bundle as agent 1 \emph{and} as agent 2. A contradiction then arises since agents 1 and 2 have different marginal rates of substitution at any bundle, and therefore must consume different bundles in any interior Pareto optimum. Figure \ref{fig:pone-none} illustrates. \begin{proposition} \label{negative-pone} Economies with three agents and two goods need not have any unambiguous-EF Pareto optima. \end{proposition} \begin{figure} \caption{\small \label{fig:pone-none}} \end{figure} \begin{proof} Fix a two-good economy with two fully rational agents 1 and 2 represented by $u_1$ and $u_2$ and a boundedly rational agent 3 whose two selves are represented by $u_h$ and $u_w$.
All selves' utilities are differentiable and convex, and satisfy a single-crossing property so that for any two bundles $(y,z)$, $(y',z')$: \begin{align*} \text{If }&y'> y\text{ and }z'<z\text{ then:} &u_{h}(y',z')\geq u_h(y,z)\Rightarrow u_{2}(y',z')> u_{2}(y,z),\\&& u_{2}(y',z')\geq u_{2}(y,z)\Rightarrow u_{1}(y',z')> u_{1}(y,z),\\ && u_{1}(y',z')\geq u_1(y,z)\Rightarrow u_{w}(y',z')> u_{w}(y,z).\\ \text{If }&y'< y\text{ and }z'>z\text{ then: } &u_{w}(y',z')\geq u_w(y,z)\Rightarrow u_{1}(y',z')> u_{1}(y,z),\\&& u_{1}(y',z')\geq u_{1}(y,z)\Rightarrow u_{2}(y',z')> u_{2}(y,z),\\ && u_{2}(y',z')\geq u_{2}(y,z)\Rightarrow u_{h}(y',z')> u_{h}(y,z). \end{align*} Fix an unambiguous-EF allocation $\mathbf{x}=((y_1,z_1),(y_2,z_2),(y_{3},z_{3}))$. Since all preferences are strictly monotonic, either $(y_1-y_3)(z_1-z_3)< 0$ or $(y_1,z_1)=(y_3,z_3)$ must hold for agents 1 and 3 not to envy each other: \begin{itemize} \item If $y_1> y_3$ and $z_1< z_3$, then $u_1(y_1,z_1)\geq u_1(y_3,z_3)$ and single-crossing imply $u_w(y_1,z_1)> u_w(y_3,z_3)$, so self $w$ of agent 3 envies agent 1. \item If $y_1< y_3$ and $z_1> z_3$, then $u_1(y_1,z_1)\geq u_1(y_3,z_3)$ and single-crossing imply $ u_h(y_1,z_1)> u_h(y_3,z_3)$, so self $h$ of agent 3 envies agent 1. \item So we must have $(y_1,z_1)=(y_3,z_3)$. \end{itemize} Mutatis mutandis $(y_{2},z_{2})=(y_3,z_3)$ also holds and all three agents consume the same bundle. So $\mathbf{x}=\mathbf{\bar{e}}$. But $\mathbf{\bar{e}}$ cannot be Pareto optimal since the two fully rational agents consume different bundles in any interior Pareto-optimum (Theorem \ref{po-characterization}). \end{proof} The economy in the proof of Proposition \ref{negative-pone} illustrates the two countervailing forces of bounded rationality on the set of unambiguously fair Pareto optima. By standard results, a fully rational version of this economy, where agent 3 has only one self, has an unambiguous-EF Pareto optimum. 
With the addition of agent 3's second self, the set of Pareto-optima increases while the set of unambiguous-EF allocations decreases. The second effect dominates: the example in the proof has no unambiguous-EF Pareto optimum. But with the further addition of $\frac{1}{2}u_w+\frac{1}{2}u_h$ to agent 1's and agent 2's sets of selves, the set of Pareto optima increases enough for $\mathbf{\bar{e}}$ to be an unambiguous-EF Pareto optimum in this less-rational economy, by Theorem \ref{po-characterization}. The example used to prove Proposition \ref{negative-pone} can be extended to any number of agents by adding fully rational agents with preferences that are intermediate between the two selves of agent 3. To extend it to any number of goods, replace the single good $z$ with $G-1$ perfect substitutes $z^1,\ldots,z^{G-1}$, and define self $s$'s utility for a bundle $(y,z^1,\ldots,z^{G-1})$ as $u_s(y, z^1 + \cdots + z^{G-1})$, where $u_s$ is the utility of $s$ in the two-good economy. \subsection{Aggregate envy-freeness} \label{sub:aggregate-ef} While Proposition \ref{negative-pone} shows that unambiguous-EF Pareto optima need not exist, standard results show that mild conditions suffice for the existence of aggregate-EF Pareto optima. Here we investigate conditions under which some aggregate-EF Pareto optima have the unambiguous fair-share guarantee. When all selves' preferences are strictly convex, the set of Pareto optima that guarantee the fair-share to each self contains some allocations that are envy-free according to a particular aggregator --- the leximin aggregator. \begin{theorem} \label{positive-collective-ef} If all selves' preferences are strictly convex, there exist unambiguous\-/FS and $\succsim^{lex}$\-/aggregate\-/EF Pareto optima.
\end{theorem} \begin{proof} For each agent $i$, the function $U^{\min}_i(\cdot) := \min_{s\in S_i}u_s(\cdot)$\footnote{Since $U^{\min}_i$ is indifferent between any two bundles associated with the same minimal utility over all selves, $U^{\min}_i$ does not represent an aggregator for agent $i$'s selves. We can therefore not directly use $U^{\min}_i$ as the aggregator in Theorem \ref{positive-collective-ef}.} inherits the continuity, monotonicity and strict convexity of all selves' preferences.\footnote{ \label{ftn:uimin} To see the strict convexity, fix any two bundles $x\neq x'$, an $\alpha \in(0,1)$, and an agent $i\in I$. Suppose that $U^{\min}_i(x)\geq U^{\min}_i(x')$. Let $s^*\in S_i$ be the self for which the latter minimum is attained, that is, $U^{\min}_i(x') = u_{s^*}(x')$. So we have for all $s\in S_i$ \begin{align*} u_s(x')\geq u_{s^*}(x')\text{ and } u_s(x)\geq U^{\min}_i(x) \geq U^{\min}_i(x') = u_{s^*}(x'). \end{align*} Since each $u_s$ represents a strictly convex preference, we have $u_s((1-\alpha)x + \alpha x') > u_{s^*}(x')$ for all $s\in S_i$, which implies $U^{\min}_i((1-\alpha)x + \alpha x') > u_{s^*}(x') = U^{\min}_i(x')$.} Let $(\mathbf{p},\mathbf{x}^*,\mathbf{\bar{e}})$ be a market equilibrium from equal endowments in the economy where each agent $i$'s preferences are represented by $U^{\min}_i$; such an equilibrium exists by standard results, since each $U^{\min}_i$ is continuous, monotonic and represents convex preferences. \textbf{Claim 1: $\mathbf{x}^*$ is unambiguous-FS.} Fix an arbitrary agent $i$. Thanks to the normalization $u_s(\overline{e})=0$, and since $\overline{e}$ is in each agent's budget set, $\min_{s\in S_i}u_s(x^*_i)=U^{\min}_i(x^*_i)\geq U^{\min}_i(\overline{e})= \min_{s\in S_i}u_s(\overline{e})=0$. So $u_s(x^*_i)\geq 0$ holds for each $s\in S_i$, and $\mathbf{x}^*$ satisfies unambiguous-FS. \textbf{Claim 2: $\mathbf{x}^*$ is $\succsim^{lex}$-aggregate-EF.} Fix an arbitrary agent $i$. Since $\mathbf{x}^*$ is an equilibrium allocation, $x^*_i$ is $U^{\min}_i$-maximal in agent $i$'s budget set.
Since budget sets are convex and since $U^{\min}_i$ represents strictly convex preferences, $x^*_i$ is the unique $U^{\min}_i$-maximum. By the definition of $\succsim^{lex}_i$, the set of $\succsim^{lex}_i$-maxima is a subset of the set of $U^{\min}_i$-maxima, so $x^*_i$ is also the unique $\succsim^{lex}_i$-maximum in the budget set. Since all agents have identical budgets, $x^*_i$ is $\succsim^{lex}_i$-preferred to any $x^*_j$, and the allocation $\mathbf{x}^*$ is $\succsim^{lex}$-aggregate-EF. \textbf{Claim 3: $\mathbf{x}^*$ is Pareto optimal.} Suppose that some allocation $\mathbf{x}'$ Pareto-improves on $\mathbf{x}^*$, so that $u_s(x'_i)\geq u_s(x^*_i)$ for all $s\in S_i$ and $i\in I$, and $u_{s'}(x'_j)> u_{s'}(x^*_j)$ for some $s'\in S_j$ and $j\in I$. The definition of $\succsim^{lex}_i$ then yields $x'_i \succsim^{lex}_i x^*_i$ for all $i\in I$, and $x'_{j} \succ^{lex}_{j} x^*_{j}$ for $j\in I$. By the arguments in Claim 2, $x^*_i$ is for each agent $i$ the unique $\succsim^{lex}_i$-maximal choice in their budget set. So $\mathbf{p}\cdot x'_{j} > \mathbf{p}\cdot x^*_{j}$ holds for any agent $j$ with $x'_{j} \succ^{lex}_j x^*_{j}$, while $\mathbf{p}\cdot x'_{i}\geq \mathbf{p}\cdot x^*_{i}$ holds for all other agents $i$, for whom either $x'_i=x^*_i$ or $x'_i$ is unaffordable. The monotonicity of agents' preferences implies $\mathbf{p} \cdot x^*_i = \mathbf{p}\cdot\overline{e}$ for any agent $i$. Summing over all agents yields $\sum_{i\in I} \mathbf{p}\cdot x'_i > \mathbf{p}\cdot e$ --- a contradiction to the feasibility of $\mathbf{x}'$, which requires $\sum_{i\in I} x'_i \leq e$. \end{proof} The requirement that allocations satisfy no-envy according to at least one self of each agent would appear to be weaker than $\succsim^{lex}$-aggregate no-envy. To see that this is not the case, we construct a two-good economy where no unambiguous-FS Pareto optimum is one-self envy-free. Assume two rational agents with different Cobb-Douglas utilities and a boundedly rational agent, each of whose two selves cares about only one good.
Suppose $\mathbf{x}^*$ were an unambiguous-FS Pareto optimum satisfying no-envy according to at least one self of each agent. By unambiguous-FS, agent 3's selves prefer $(y^*_3,z^*_3)$ to the fair-share $\overline{e}=(1,1)$, so $y^*_3\geq 1$ and $z^*_3\geq 1$. W.l.o.g., say that no envy holds for the self of agent 3 that only cares about $y$, so that $y^*_3\geq y^*_2,y^*_1$. For agents 1 and 2 not to envy agent 3, we then must have $z^*_1,z^*_2\geq z^*_3$. By the single-crossing property, agents 1 and 2 must consume different bundles. The agents, in sum, consume more than three units of $z$ --- a contradiction. While the two selves of agent 3 violate our standard monotonicity assumption, nearby economies with strictly monotonic preferences will, by continuity, also lack unambiguous-FS Pareto optima that are one-self envy-free. To construct such a nearby economy, represent the preferences of agent $3$'s two selves by $y_3+\epsilon \sqrt{z_3}$ and $z_3+\epsilon\sqrt{y_3}$ for some small $\epsilon>0$.\footnote{A similar example shows that we cannot replace the leximin aggregator in Theorem \ref{positive-collective-ef} with an arbitrary aggregator: there is no unambiguous-FS aggregate-EF Pareto optimum according to the aggregator where agent 3 ranks any bundle with more $y$ above any bundle with less $y$ and considers $z$ only to compare two bundles with the same amount of $y$. Moreover, for every real constant $\kappa$, suppose that the preferences of agent $i$ are aggregated using the function $\sum_{s\in S_i}u_s^{\kappa} / \kappa$ (this is equivalent to the utilitarian aggregator for $\kappa=1$, approaches the Nash aggregator for $\kappa=0$, and approaches the leximin aggregator for $\kappa\to-\infty$; see \citet{Moulin2004Fair}). Then, a similar example shows that there may be no unambiguous-FS aggregate-EF Pareto optimum for any $\kappa>-\infty$.
} \subsection{Market equilibrium and unambiguous envy-freeness} \label{sec:pone-without-ceei} The classic proof that envy-free Pareto optima exist \citep{foley1967resource} argues that market equilibria from equal endowments yield such allocations. Since the fair-share is each agent's endowment, each agent must weakly prefer the bundle she buys in equilibrium to the fair-share. Since each agent faces the same budget set, each agent could choose any other agent's bundle. So no agent envies any other. This technique does not fare as well in economies with boundedly rational agents. Proposition \ref{negative-pone} already shows that unambiguous-EF Pareto optima need not exist. But even if such allocations exist, they may not arise as market equilibria from equal endowments. \begin{proposition} \label{negative-ceei} (a) Some economies have unambiguous\-/EF and unambiguous\-/FS Pareto optima, even though no such allocation arises as a market equilibrium from equal endowments. (b) In any economy, if an unambiguous\-/EF allocation $\mathbf{x}$ can be sustained as a market equilibrium, and if agent $i$ is more rational than agent $j$, then agent $j$'s equilibrium budget must be at least as large as agent $i$'s. \end{proposition} \begin{proof} (a) Fix a three-agent two-good economy. Say the two fully rational agents 1 and 2 are represented by Cobb-Douglas utilities with $\alpha_1=\frac{1}{3}$ and $\alpha_2=\frac{2}{3}$, respectively. These utilities also represent the two selves of the boundedly rational agent 3. We first show that this economy has an unambiguous-EF Pareto optimum $\mathbf{x}^*=\big((y_1,z_1), (y_{2},z_{2}),(y_3,z_3)\big)$. For agents 1 and 2 not to envy agent 3 and vice versa, $y_1^{\frac{1}{3}}z_1^{\frac{2}{3}}=y_3^{\frac{1}{3}}z_3^{\frac{2}{3}}$ and $y_{2}^{\frac{2}{3}}z_{2}^{\frac{1}{3}}=y_3^{\frac{2}{3}}z_3^{\frac{1}{3}}$ must hold. If $\mathbf{x}^*$ is on the boundary of $X$ then at least one self has zero utility.
No envy then implies that all selves have utility zero at $\mathbf{x}^*$, which can therefore not be Pareto optimal. So $\mathbf{x}^*$ must be in the interior of $X$ and $\frac{1}{2}\frac{z_1}{y_1}= MRS_1(y_1,z_1)=MRS_{2}(y_{2},z_{2}) = 2\frac{z_{2}}{y_{2}}$ must hold. Combining the preceding equations with the resource constraints $y_1+y_{2}+y_3=3$ and $z_1+z_{2}+z_3=3$, we obtain a system of 5 equations in 6 unknowns. To obtain a concrete solution $\mathbf{x}^*$ (illustrated in Figure \ref{fig:pone-none-without-ce}), we additionally impose the symmetry condition $y_3=z_3$. The symmetric solution is: \begin{align*} y^*_1 \approx 0.654 && z^*_1\approx 1.308 \\ y^*_2 \approx 1.308 && z^*_2\approx 0.654 \\ y^*_3 \approx 1.038 && z^*_3\approx 1.038 \end{align*} \begin{figure}\label{fig:pone-none-without-ce} \end{figure} Since agent 3's bundle in $\mathbf{x}^*$ contains strictly more than the average share of each good, $\mathbf{x}^*$ cannot be obtained as a market equilibrium from equal endowments. So suppose some other unambiguous-EF Pareto optimum $\big((y_1,z_1)$, $(y_{2},z_{2})$, $(y_3,z_3)\big)$ could be obtained as a market equilibrium from equal endowments. By Pareto-optimality and the single-crossing property we have $(y_1,z_1)\neq (y_{2},z_{2})$, so that $(y_3,z_3)\neq (y_1,z_1)$ or $(y_3,z_3)\neq (y_{2},z_{2})$ (or both). W.l.o.g. say $(y_3,z_3)\neq (y_1,z_1)$. Since all selves have strictly convex preferences, the fully rational agent 1 strictly prefers $(y_1,z_1)$ to all other bundles in the budget set; in particular, $u_1(y_1,z_1)>u_1(y_3,z_3)$. But then agent 3 envies agent 1 according to the $u_1$ self --- a contradiction. (b) Say $\mathbf{x}$ is unambiguous-EF and $(\mathbf{p},\mathbf{x},\mathbf{x}^0)$ is a market equilibrium from some endowment $\mathbf{x}^0$. Suppose agent $i$'s equilibrium budget were strictly larger than agent $j$'s (so $\mathbf{p} x^0_i>\mathbf{p} x^0_j$).
By definition of market equilibrium, agent $i$'s bundle $x_i$ is optimal in agent $i$'s budget set according to some aggregator $\succsim^{agg}_i$. Denote by $x^*_i$ the $\succsim^{agg}_i$-optimal choice from agent $j$'s budget set, so that $x^*_i\succsim^{agg}_i x_j$. Since agent $i$'s budget set strictly contains agent $j$'s, and since all selves' preferences (and therefore also $\succsim^{agg}_i$) are strictly monotonic, $x_i\succ^{agg}_ix^*_i$. Since $\mathbf{x}$ is unambiguous-EF, $u_s(x_j)\geq u_s(x_i)$ holds for all $s\in S_j$. Since agent $i$ is more rational than agent $j$, we have $S_i\subset S_j$. Since $\succsim^{agg}_i$ is an aggregator of agent $i$'s selves, we in sum get the contradiction $x_j\succsim^{agg}_i x_i\succ^{agg}_ix^*_i\succsim^{agg}_i x_j$. \end{proof} Our permissive definition of market equilibrium (Subsection \ref{sub:equilibria}) strengthens the above result. No matter which aggregators the agents happen to use while shopping, no market equilibrium from equal endowments is unambiguous-EF in the above example. The result continues to hold for any more restrictive notion of market equilibrium. To illustrate Proposition \ref{negative-ceei}, consider insurance policies against earthquakes and flooding. Two fully rational agents with the same income but different priors may buy different insurance policies. Now consider a boundedly rational third agent whose selves use the priors of the two preceding agents, as in the \cite{Bewley} model of Knightian uncertainty. The third agent then needs a higher income to avoid envying the other two agents. The more rational agents can better target their income to achieve their goals. Indeed, the symmetric allocation in part (a) of the above proof can arise as a market equilibrium when we endow the boundedly rational agent 3 with a higher income than the fully rational agents.
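The symmetric solution reported in the proof of Proposition \ref{negative-ceei}(a) can be checked numerically. The closed form below is our reduction of the five equations under the symmetry ansatz $y_1=z_2$, $z_1=y_2$, $y_3=z_3$; it is a sanity check, not part of the formal argument:

```python
# Under the ansatz the no-envy and MRS conditions force z1 = 2*y1 and
# y3 = z3 = 2**(2/3) * y1, and the resource constraint pins down y1.
y1 = 3 / (3 + 2 ** (2 / 3))   # agent 1's amount of good y
z1 = 2 * y1                   # agent 1's amount of good z
y2, z2 = z1, y1               # agent 2 mirrors agent 1
y3 = z3 = 2 ** (2 / 3) * y1   # agent 3's symmetric bundle

u1 = lambda y, z: y ** (1/3) * z ** (2/3)   # agent 1 (and one self of 3)
u2 = lambda y, z: y ** (2/3) * z ** (1/3)   # agent 2 (and the other self)

# No-envy indifference between agents 1, 2 and agent 3:
assert abs(u1(y1, z1) - u1(y3, z3)) < 1e-12
assert abs(u2(y2, z2) - u2(y3, z3)) < 1e-12
# Equal marginal rates of substitution of the two rational agents:
assert abs(0.5 * z1 / y1 - 2 * z2 / y2) < 1e-12
# Resource constraints:
assert abs(y1 + y2 + y3 - 3) < 1e-12 and abs(z1 + z2 + z3 - 3) < 1e-12
print(round(y1, 3), round(z1, 3), round(y3, 3))  # 0.654 1.308 1.038
```

The printed values match the approximate solution in the proof, and $y_3=z_3>1$ confirms that agent 3 consumes strictly more than the average share of each good.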
\section{Egalitarian Equivalence} \label{sec:poee} \subsection{Unambiguous egalitarian-equivalence} Proposition \ref{negative-poee} shows that unambiguous egalitarian\-/equivalence is even more restrictive than unambiguous envy\-/freeness, in the sense that even two-agent economies may lack unambiguous-EE Pareto optima. The driving force behind the upcoming non-existence proof is the fact that an agent with two selves satisfying the single-crossing property is only indifferent between a reference bundle $r$ and some bundle $x$ if $x=r$. The proof specifies an economy with two such agents who must both consume the reference bundle $r$ in any unambiguous-EE allocation. At the same time, we differentiate the agents enough for them to consume different bundles in any Pareto optimum. \begin{proposition} \label{negative-poee} When there are at least two agents and at least two goods, an unambiguous-EE Pareto optimum might not exist. \end{proposition} \begin{proof} Fix a two-agent two-good economy. Define $u_{\gamma}(y,z)\colon=y^{\gamma}+z$. Represent the two selves of agent 1 by $u_{1/5}$ and $u_{1/4}$, and the two selves of agent 2 by $u_{1/3}$ and $u_{1/2}$. Fix an unambiguous-EE Pareto optimum $\mathbf{x}$ with reference bundle $r$. For $\mathbf{x}$ to be an unambiguous-EE Pareto optimum, each agent must consume a positive amount of $y$.\footnote{This holds since the marginal utility of $y$ at 0 is infinite for each $u_{\gamma}$.} Assuming a large enough total endowment of $z$, each agent must also consume a positive quantity of $z$ in $\mathbf{x}$. The single-crossing property together with $u_{1/4}(y_1,z_1)=u_{1/4}(r)$ and $u_{1/5}(y_1,z_1)=u_{1/5}(r)$ implies $(y_1,z_1)=r$. By the same token, agent $2$ must also consume $r$, so that $\mathbf{x}=(r,r)$. Since $\max_{s\in S_1} MRS_s(r) < \min_{s\in S_2} MRS_s(r)$, the two agents' MRS-ranges are disjoint, so the allocation is by Theorem \ref{po-characterization} not Pareto optimal. Figure \ref{fig:poee} illustrates.
\end{proof} \begin{figure}\label{fig:poee} \end{figure} The analysis of the above example remains unchanged if we add further agents to the above economy. As long as such agents have multiple selves with the single-crossing property, they must consume the reference bundle $r$ to be unambiguously indifferent between their consumption and $r$. With enough differentiation between the agents, they cannot all consume the same bundle in a Pareto optimum. For an example with more than two goods, replace the single good $z$ with some $G-1$ perfect substitutes $z^1,\ldots,z^{G-1}$, as in the comment after Proposition \ref{negative-pone}. As is the case for unambiguous\-/EF Pareto-optima, the existence of unambiguous\-/EE Pareto-optima is not ``monotone'' in the variability of selves: if we increase each agent's set of selves in the above example to include each other's selves, then the equal split is Pareto\-/optimal and unambiguous\-/EE by Theorem \ref{po-characterization}. \subsection{Aggregate egalitarian-equivalence} \label{sub:collective-ee} In parallel to Subsection \ref{sub:aggregate-ef} on envy-freeness, there exist aggregators for which efficiency, the unanimous fair-share guarantee and aggregate egalitarian equivalence are compatible. Theorem \ref{positive-collective-ee} shows that such compatibility is achieved for the \emph{Nash aggregator}. To define this aggregator, interpret agents and their selves as families and their members. If these families use Nash-bargaining with the fair-share as the outside option, they choose the bundle $x^*_i$ that maximizes the product of their members' utilities. The monotonicity and continuity of all selves' preferences allow us to normalize each $u_s$ such that $u_s(x_i)=t-1$ whenever $u_s(x_i)=u_s(t\cdot \overline{e})$; in particular, $u_s(\overline{e})=0$ continues to hold. Define $U^{Nash}_i(x_i)\colon=\sqrt[|S_i|]{\Pi_{s\in S_i}u_s(x_i)}$ to represent agent $i$'s Nash-bargaining aggregator over bundles in the interior of $E$.
For any $t\geq 1$ we have $U^{Nash}_i(t\cdot \overline{e})= \sqrt[|S_i|]{\Pi_{s\in S_i}u_s(t\cdot \overline{e})}= \sqrt[|S_i|]{(t-1)^{|S_i|}}=t-1$. Since $U^{Nash}_i$ only represents an aggregator for bundles that all selves strictly prefer to $\overline{e}$, we use the leximin aggregator for all remaining comparisons to define an aggregator $\succsim_{i}^{Nash}$. Formally, $x\succsim^{Nash}_i x'$ holds if and only if \begin{align*} & U_i^{Nash}(x)\geq U_i^{Nash}(x') && \text{if~} u_s(x),u_s(x')>0 \text{~for all~} s\in S_i; \\ & x\succsim^{lex}_i x' && \text{otherwise.} \end{align*} \begin{theorem} \label{positive-collective-ee} If all selves have strictly convex preferences, there exist unambiguous\-/FS and $\succsim^{Nash}$\-/aggregate\-/EE Pareto optima. \end{theorem} \begin{proof} Define $\mathbf{x}^*$ as an allocation in $E$ that maximizes the minimal Nash bargaining utility: \begin{align*} \mathbf{x}^*\in \arg\max_{\mathbf{x}\in E}\min_{i\in I}U^{Nash}_i(x_i). \end{align*} Such an allocation exists since each $U^{Nash}_i$ is continuous on the compact and non-empty set $E$. Define $t^*\colon=1+\min_{i\in I} U^{Nash}_i(x^*_i)$. Since $\mathbf{x}^*\in E$, $t^*-1\geq 0$. \textbf{Claim 1: There is no allocation $\mathbf{x} \in E$ for which $U^{Nash}_i(x_i)\geq t^*-1$ for all $i$ and $U^{Nash}_j(x_j)>t^*-1$ for some $j\in I$.} Suppose $E$ contained such an $\mathbf{x}$. Since $U^{Nash}_j$ is continuous, there exists a bundle $\epsilon$ with small positive entries such that $U^{Nash}_j(x_j-\epsilon)>t^*-1$ and $u_s(x_j-\epsilon)>0$ for each $s\in S_j$. Evenly redistribute $\epsilon$ from agent $j$ to all others to obtain the allocation $\mathbf{x}^{\circ}$. The strict monotonicity of all selves' preferences yields $U^{Nash}_i(x^{\circ}_i)> t^*-1$ for all $i\neq j$. In sum we get $\min_{i\in I}U^{Nash}_i(x^{\circ}_i)>t^*-1$ --- a contradiction to the definition of $\mathbf{x}^*$.
\textbf{Claim 2: $\mathbf{x}^*$ is unambiguous-FS.} This follows from $\mathbf{x}^*\in E$. \textbf{Claim 3: If $\mathbf{x}^*\neq \mathbf{\bar{e}}$ then $\mathbf{x}^*$ is in the interior of $E$, that is, $\mathbf{x}^*$ yields strictly positive utilities to all selves.} Suppose for contradiction that $\mathbf{x}^*\neq \mathbf{\bar{e}}$ but $u_s(x^*_i)=0$ for some agent $i$ and $s\in S_i$. This implies that $U^{Nash}_i(x^*_i)=0$, and $t^*-1=\min_{i\in I}U^{Nash}_i(x^*_i)=0$. Define $\mathbf{x}'\colon=\frac{1}{2}\mathbf{x}^*+ \frac{1}{2}\mathbf{\bar{e}}$. Since $\mathbf{x}^*\neq \mathbf{\bar{e}}$, there exists an agent $j$ with $x^*_j\neq \overline{e}$. By the strict convexity of all selves' preferences, $U^{Nash}_i(x'_i)\geq U^{Nash}_i(\overline{e})=\min_{i\in I}U^{Nash}_i(x^*_i)=0$ holds for all $i\in I$ and $U^{Nash}_j(x'_j)>0$ --- a contradiction to Claim 1. \textbf{Claim 4: $\mathbf{x}^*$ is Pareto-optimal.} If some allocation $\mathbf{x}$ Pareto-dominates $\mathbf{x}^*$, then $U^{Nash}_i(x_i)\geq U^{Nash}_i(x^*_i)\geq \min_{i\in I}U^{Nash}_i(x^*_i)=t^*-1$ for all $i$ and $U^{Nash}_j(x_j)> U^{Nash}_j(x^*_j)\geq \min_{i\in I}U^{Nash}_i(x^*_i)=t^*-1$ for some $j\in I$, contradicting Claim 1. \textbf{Claim 5: $\mathbf{x}^*$ is $\succsim^{Nash}$-aggregate-EE.} If $\mathbf{x}^*=\mathbf{\bar{e}}$, then each agent gets exactly $\overline{e}$, and $\mathbf{x}^*$ is even unambiguous-EE. So assume that $\mathbf{x}^*\neq \mathbf{\bar{e}}$. By Claim 3, $\mathbf{x}^*$ is in the interior of $E$. By Claim 1, $U^{Nash}_i(x^*_i)=t^*-1= U^{Nash}_i(t^*\overline{e})$ holds for all agents $i\in I$. Since $U^{Nash}_i$ represents the aggregator $\succsim^{Nash}_i$ in the interior of $E$, $\mathbf{x}^*$ is $\succsim^{Nash}$\-/aggregate\-/EE with the reference bundle $t^*\overline{e}$.
\end{proof} The proof of Theorem \ref{positive-collective-ee} uses two properties of the Nash aggregator. Firstly, the continuity of $U^{Nash}_i$ on $E$ is required for $\mathbf{x}^*$ to be well-defined. Secondly, $U^{Nash}_i(x^*_i)>0$ holds if and only if $u_s(x^*_i) > 0$ for all $s\in S_i$, so the allocation $\mathbf{x}^*$ is either $(\overline{e}, \dots, \overline{e})$ or in the interior of $E$ (Claim 3). The proof holds unchanged for any aggregator with these two properties. For example, $U_i(x_i) := \int_{t_1=0}^{u_1(x_i)} \ldots \int_{t_k=0}^{u_k(x_i)} f_i(t_1,\ldots,t_k)\,\mathrm{d}t_k \cdots \mathrm{d}t_1$ for any positive measurable function $f_i$ of $k$ variables, where $S_i=\{u_1,\dots, u_k\}$. The function $U^{Nash}_i$ corresponds to the special case $f_i\equiv 1$. \footnote{ We are grateful to Magma and Martin R. for these insights. } Other natural aggregators, such as the leximin or the utilitarian aggregator, violate one of these properties. Similarly, if we consider egalitarian equivalence according to one self per agent, then the second property is violated. We do not know for which alternative aggregators there always exist unambiguous-FS aggregate-EE Pareto-optima. \subsection{Weakly convex preferences} Proposition \ref{negative-collective-ee} below shows that economies where selves have merely convex (not strictly convex) preferences need not have any unambiguous-FS and aggregate-EE Pareto optima. We can, however, guarantee any two out of these three properties: \begin{proposition} \label{negative-collective-ee} When all selves have (weakly) convex preferences: (a) Some three-agent economies have no unambiguous-FS and aggregate-EE Pareto optimum. (b) There always exist aggregate\-/EE Pareto optima, unambiguous\-/FS Pareto optima, and aggregate\-/EE unambiguous\-/FS allocations. \end{proposition} \begin{figure}\caption{Illustration for the proof of Proposition~\ref{negative-collective-ee}.}\label{fig:poiee}\end{figure} \begin{proof} (a) Say $u_3(y,z)=y+z$ represents the preferences of the fully rational agent 3. Agents 1 and 2 have two selves each.
The preferences of their first selves coincide with agent 3's, and we have $u_{1}(y,z) = u_{2}(y,z) = u_3(y,z) = y+z$. The two other selves of agents 1 and 2 are represented by different Cobb-Douglas utilities: $v_1(y,z) = y^{1/4}z^{3/4}$ and $v_2(y,z)=y^{3/4}z^{1/4}$. For the fair-share guarantee to hold according to the three selves with identical linear preferences, $y_1+z_1=y_2+z_2=y_3+z_3=2$ must hold. The only Pareto-optimal allocation satisfying these equations gives agent 1 $(1/2,3/2)$ and agent 2 $(3/2,1/2)$; see Figure \ref{fig:poiee}. Now suppose the allocation was aggregate-EE for some aggregator and reference bundle $r:=(y_r,z_r)$. For agent 3 to be indifferent between his bundle $(1,1)$ and $r$, $y_r+z_r$ must equal 2, so $r$ must lie on the solid green line in Figure \ref{fig:poiee}. This implies that the two other agents are, according to their linear selves, indifferent between their bundles and $r$. But, for any $r$ with $y_r+z_r=2$, at least one agent prefers her bundle to $r$ according to her Cobb-Douglas self: on this line, $v_1$ is uniquely maximized at agent 1's bundle $(1/2,3/2)$ and $v_2$ at agent 2's bundle $(3/2,1/2)$, so no single $r$ can make both Cobb-Douglas selves indifferent. This agent must, according to any aggregator, strictly prefer her bundle to $r$. \footnote{ To see where the proof of Theorem \ref{positive-collective-ee} fails on this economy, note that $E$ contains only allocations $\mathbf{x}$ with $y_i+z_i=2$ for all agents $i$ (the set represented by the solid green line). So $U^{Nash}_i(x_i)=0$ holds for all $i \in I$ and all $\mathbf{x} \in E$, and any $\mathbf{x}\in E$ satisfies the definition of $\mathbf{x}^*$. In particular, Claim 3 fails. } (b) Theorem \ref{positive-pofs} proved the existence of unambiguous\-/FS Pareto optima. The equal allocation is aggregate\-/EE and unambiguous\-/FS. To obtain an aggregate-EE Pareto optimum, drop the unambiguous-FS requirement and replace the Nash aggregator in the proof of Theorem \ref{positive-collective-ee} with the utilitarian aggregator.
As this aggregator is represented by a continuous function on the entire set $X$ (not only in the interior of $E$), the proof of aggregate-EE is now valid without the need for Claim 3. \end{proof} \section{Intransitive aggregators}\label{sec:intrans} An alternative approach towards fairness in the context of behavioral economics would explicitly consider intransitive aggregations of the agents' selves. Agents might aggregate their objectives intransitively, using voting rules or sequential procedures, as suggested by \citet{green2007choice} and \citet{ApesteguiaBallester}, respectively. For a concrete example, say an allocation is \emph{majority-fair} if at least half of the selves of any given agent deem it to be fair.\footnote{ Majority fairness was studied in the context of cake-cutting \citep{SegalHalevi2018Families} and indivisible item allocation \citep{segal2019democratic}. } So an allocation is majority envy-free (majority-EF) if no more than half of any agent's selves envy a different agent, and it is majority egalitarian-equivalent (majority-EE) if there exists a reference bundle $r$ such that at least half of the selves of any given agent are indifferent between $r$ and their bundle. Our negative results can be extended to show that majority-fairness is incompatible with efficiency. For a parallel to our result on unambiguous-EF (Theorem \ref{negative-pone}), consider an economy with four rational agents ($1,2,3,4$) and one boundedly rational agent 5 with three selves ($5a,5b,5c$). Say that the utilities of all selves have the single-crossing property. The indifference curves of the rational agents 1 and 2 are between the indifference curves of the selves $5a$ and $5b$; the indifference curves of the remaining two rational agents 3 and 4 are between the indifference curves of selves $5b$ and $5c$. Suppose a majority-EF efficient allocation $\mathbf{x}^*$ exists and that all agents consume positive quantities of each good in this allocation.
Due to the single-crossing property, no two rational agents can consume the same bundle in the allocation $\mathbf{x}^*$. In any majority-EF allocation, at most one self of agent 5 may envy any other agent. So say without loss of generality that self $5a$ as well as self $s\in\{5b,5c\}$ do not envy any other agent. Now we have a situation similar to the example used to prove Proposition \ref{negative-pone}: no envy, together with the fact that agent 1's indifference curves lie between the indifference curves of selves $5a$ and $s$, implies that agents 1 and 5 must consume the same bundle. Mutatis mutandis, we see that agent 2 and agent 5 also have to consume the same bundle --- a contradiction to agents 1 and 2 consuming different bundles in any interior Pareto optimum. Clearly, the dearth of unambiguous-EF allocations is due to the fact that the unambiguous ranking, according to which each agent must prefer their own bundle to any other agent's, might be highly incomplete. Our notion of majority-EF then presents an interesting contrast: even though majority preference is a complete ranking, the example used to prove the non-existence result for unambiguous-EF transfers, with some modification, to the case of majority-EF allocations. So we see that both completeness and transitivity play a role in the existence of envy-free Pareto optima in rational environments. Similarly, our example in the proof that unambiguous-EE allocations need not exist (Theorem \ref{negative-poee}) can be extended to majority egalitarian equivalence: consider an economy with 2 agents, each of whom has 3 selves. Say all utilities in the model satisfy the single-crossing property. In any majority-EE allocation, at least two selves of each agent must be indifferent between their bundle and the reference bundle $r$, so the situation is similar to the one analyzed in Proposition \ref{negative-poee}.
\section{Related work} \subsection{Behavioral Welfare Economics} While there is a large literature on boundedly rational choices by single agents, our paper belongs to the much smaller literature on welfare analysis of economies with two or more boundedly rational agents. \citet{BernheimRangel2009} define several notions of Pareto-optimality and of competitive equilibrium for economies with multi-self agents, and prove that competitive equilibria always satisfy one of these notions of Pareto-optimality. \citet{mandler2014indecisiveness} proves that, with multi-self agents, the set of Pareto-optima might be very large, in the sense that it has the same dimension as the set of all allocations. Since almost all Pareto-optima lie in full-dimensional neighborhoods of other Pareto-optima, he argues that the Pareto criterion is of limited use for policy decisions. To close this gap of indecisiveness, \citet{mandler2020distributive} suggests using utilitarian-optimality. \citet{danan2015harsanyi} characterize a utilitarian social-choice rule when agents have incomplete preferences, and \citet{FleurbaySchokkaert2013} characterize an egalitarian social-choice rule --- based on the principle of income-equivalence --- with incomplete preferences. \subsection{Fairness criteria} The modern study of fairness in economics was initiated by \citet{Steinhaus1948}, who proved the existence of \emph{fair-share} allocations of a heterogeneous good (``cake''). Since then, fairness has been extensively studied in economics \citep{young1995equity,Moulin2004Fair,Thomson2011Fair} as well as in other disciplines such as mathematics and computer science. The existence of \emph{envy-free} Pareto-optima was initially proved as a consequence of the existence of market equilibrium \citep{foley1967resource}. \citet{varian1974equity} showed that no-envy Pareto-optima may exist even when a competitive equilibrium does not.
\citet{varian1974equity} replaces the assumption of convex preferences with a condition on the set of allocations associated with any weakly-Pareto-optimal vector of utilities: if every such set is a singleton, an envy-free Pareto optimum exists. \citet{svensson1983existence} and \citet{Diamantaras1992Equity} then showed that \citet{varian1974equity}'s ``singleton'' requirement can be replaced by convexity and contractibility, respectively. \citet{Diamantaras1992Equity}'s result applies to economies with public goods. \citet{svensson1994sigma} and \citet{Bogomolnaia2017Competitive} showed that envy-free Pareto optima exist under yet further relaxed conditions. The \emph{egalitarian equivalence} criterion was introduced by \citet{pazner1978egalitarian}. They argue that the equal split $\mathbf{\bar{e}}$, where each agent gets the same bundle, is, from an egalitarian perspective, an ideal division. Since $\mathbf{\bar{e}}$ is usually not efficient, \citet{pazner1978egalitarian} propose to regain efficiency by considering all allocations for which there exists a (possibly infeasible) common reference bundle such that each agent is indifferent between that bundle and her consumption in the allocation. \citet{pazner1978egalitarian} proved that egalitarian-equivalent Pareto optima exist in economies with production, which may not have any envy-free Pareto-optima \citep{vohra1992equity}. \citet{thomson1990non} showed that egalitarian-equivalent Pareto optima are generically not envy-free when there are at least three agents. \subsection{Fair division among families} Several papers study fair division with multi-self agents under the interpretation of agents and their selves as families and their members. \citet{SegalHalevi2018Families} study fair division in cake-cutting problems, where the challenge is to allocate connected pieces, or at least pieces made of a small number of connected components.
\citet{ManurangsiSu17,Suksompong18,segal2019democratic,kyropoulou2019almost} study fair division in problems with indivisible goods. Since there may be no fair allocation of indivisible goods, the focus in these studies is on finding approximately-fair allocations. In contrast, our model of fully-divisible goods always allows for fair allocations, and the challenge is to find allocations that are both fair and efficient. \citet{ghodsi2018rent} study fair division of rooms and rent among families of tenants, using three notions of fairness termed strong, aggregate and weak. Their strong-fairness is our unambiguous no-envy; their weak-fairness holds if at least one self per agent does not envy; their aggregate-fairness corresponds to our aggregate fairness with the utilitarian aggregator. \subsection{Public and club goods} The existence of fair allocations with public and private goods has been studied, e.g., by \citet{Diamantaras1992Equity,Diamantaras1994Generalization,Diamantaras1996Set,Guth2002NonDiscriminatory}. With multi-self agents, our fairness notions correspond to \emph{club goods} \citep{buchanan1965economic}, that is, goods that are public among all selves of the agent --- but private outside. As far as we know, fair allocation of goods among different clubs has not yet been considered. Under the interpretation of multi-self agents as clubs, questions such as the optimal number of members in a club, the optimal number of clubs, the optimal quantity of club-good provision, pricing policies and exclusion mechanisms have been studied \citep{sandler1980economic,hillman1993socialist,sandler1997club,loertscher2017club,mackenzie2018club}. A modern example of a club good is information: information is partly excludable (via intellectual property law), but once it is given to a group, it is not rival. Therefore our work may have implications for the fair division of information, for example, dividing training samples for machine learning among groups of researchers.
\section{Acknowledgments} We are grateful for the support of the Minerva foundation through the ARCHES prize. Erel is grateful to the Israel Science Foundation for grant 712/20. The paper started as a discussion on the Economics Stack Exchange website. \footnote{https://economics.stackexchange.com/q/9916/385} We are grateful to Shane Auerbach, Amit Goyal and Kitsune Cavalry for participating in the discussion. We are grateful to Hal Varian, William Thomson, Herve Moulin, Gerald A Edgar, Martin R., and reviewers of Games and Economic Behavior for their very helpful feedback on a previous version of this paper. \iffalse \appendix \section*{Appendix} \section{Omitted Proofs} \paragraph{Proof of Lemma \ref{lemma: family leximin and min utility}} Suppose all selves of some agent $i$ have strictly convex preferences. Let $X_i$ be a non-empty, convex and compact subset of $\mathbb{R}_+^G$. \begin{proof} Let $x,x'$ be two different bundles. The assumption on the frames' preferences implies that, for every $f\in F_i$, if $u_f(x)\geq u_f(x')$, then for every $\alpha\in(0,1)$ we have $u_f((1-\alpha)x + \alpha x') > u_f(x')$. We now prove that the same is true for the function $U^{\min}_i$. Suppose that $U^{\min}_i(x)\geq U^{\min}_i(x')$. Assume w.l.o.g. that the right-hand-side minimum is attained for frame 1. So for all $f\in F_i$: \begin{align*} u_f(x')&\geq u_1(x'), \\ u_f(x)&\geq U^{\min}_i(x') = u_1(x'). \end{align*} Therefore, $u_f((1-\alpha)x + \alpha x') > u_1(x')$ for all $f\in F_i$ and every $\alpha\in(0,1)$. Therefore, $U^{\min}_i((1-\alpha)x + \alpha x') > u_1(x') = U^{\min}_i(x')$, so $U^{\min}_i$ represents a strictly-convex preference. Since $X_i$ is non-empty and compact, and since all agents' utilities $u_f$ are continuous, the function $U^{\min}_i$ is continuous too, so there exists a unique $U^{\min}_i$-maximal bundle $x^*_i$ in $X_i$.
By the definition of $\succsim^{lex}_i$, the set of $\succsim^{lex}_i$-maxima in $X_i$ is a subset of the set of $U^{\min}_i$-maxima in $X_i$, so that $x^*_i$ is also the unique $\succsim^{lex}_i$-maximum in $X_i$. \end{proof} \begin{lemma} \label{lem:agg-uni-col-fair-implications} An allocation is unambiguous\-/EE (unambiguous\-/EF, unambiguous\-/FS) if-and-only-if it is $\succsim^{agg}$\-/aggregate-EE ($\succsim^{agg}$\-/aggregate\-/EF, $\succsim^{agg}$\-/aggregate\-/FS) for every aggregator $\succsim^{agg}$. \end{lemma} \begin{proof} Fix an allocation $\mathbf{x} \in X$ and an agent $i$. The definition of aggregators implies that any unambiguous preference over two bundles also holds for any aggregator. To see that the converse also holds, fix an agent $i$ with $|F_i|\geq 2$ and two bundles $x_i$ and $x'_i$ such that the agent does not unambiguously prefer $x_i$ to $x'_i$. So there exists a frame $f\in F_i$ with $u_f(x')>u_f(x)$, implying that for some sufficiently large number $\alpha > 0$, \begin{align*} \alpha \cdot u_f(x')+\sum_{f'\in F_i\setminus \{f\}}u_{f'}(x')> \alpha\cdot u_f(x)+\sum_{f'\in F_i\setminus \{f\}}u_{f'}(x). \end{align*} The preceding function represents an aggregate preference of agent $i$, for which $x \succsim^{agg}_i x'$ does not hold. \end{proof} \iffalse \paragraph{Lemma \ref{lem:po}.} \er{ If all individuals' preferences are continuous, strictly monotonic and strictly convex, then an allocation is Pareto-optimal if and only if it is weakly Pareto-optimal.} \begin{proof} It is sufficient to prove that, when the lemma conditions hold, any allocation that is Pareto-dominated is also strictly Pareto-dominated. Let $\mathbf{x}\in X$ be an allocation, and suppose some other $\mathbf{x}'\in X$ Pareto-dominates it. So $u_i(x'_{\familyof{i}})\geq u_i(x_{\familyof{i}})$ for all $i$ and $u_j(x'_{\familyof{j}})>u_j(x_{\familyof{j}})$ for at least one $j$.
In particular, at least one family $f^* = \familyof{j}$ receives a different bundle in the new allocation: $x'_{f^*} \neq x_{f^*}$. Let $\mathbf{x}'' := \frac{1}{2}\mathbf{x} + \frac{1}{2}\mathbf{x}'$. By strict convexity, $u_i(x''_{f^*}) > u_i(x_{f^*})$ for all $i\in f^*$. For every other family $f$, either $x''_f = x'_f = x_f$, or the same inequality holds; in both cases, $u_i(x''_{f}) \geq u_i(x_{f})$ for all $i\in f$. Starting from $\mathbf{x}''$, take a sufficiently small bundle $\epsilon$ from $f^*$ such that the inequality $u_i(x''_{f^*}-\epsilon) > u_i(x_{f^*})$ continues to hold (this is possible thanks to continuity). Distribute $\epsilon$ among the other families. Denote the new allocation by $\mathbf{x}^{\circ}$. By strict monotonicity, we have $u_i(x^{\circ}_f) > u_i(x_{f})$ for all $i\in f$, for all $f\in F$. So $\mathbf{x}^{\circ}$ strictly Pareto-dominates $\mathbf{x}$, as claimed. \end{proof} \fi \fi \end{document}
\begin{document} \title{An inequality for a periodic uncertainty constant } \author{Elena A. Lebedeva\footnote{Mathematics and Mechanics Faculty, Saint Petersburg State University, Universitetsky prospekt, 28, Peterhof, Saint Petersburg, 198504, Russia; Saint Petersburg State Polytechnical University, Polytechnicheskay 29, Saint Petersburg, 195251, Russia}} \date{[email protected]} \maketitle \begin{abstract} An inequality refining the lower bound for a periodic (Breitenberger) uncertainty constant is proved for a wide class of functions. A connection of uncertainty constants for periodic and non-periodic functions is extended to this class. A particular minimization problem for a non-periodic (Heisenberg) uncertainty constant is studied. \end{abstract} \textbf{Keywords:} uncertainty constant, uncertainty principle, periodic wavelet, tight frame \textbf{MSC[2010]} 42C40, 42C15 \section{Introduction} \label{intr} The Breitenberger uncertainty constant (UC) is commonly used as a measure of localization for periodic functions. It was introduced in 1985 by Brei\-tenberger in \cite{B}. It can be derived from a general operator ``position-momentum'' approach, as discussed in \cite{FolSit}. The Breitenberger UC has a deep connection with the classical Heisenberg UC, which characterizes localization of functions on the real line. There exists a universal lower bound for both UCs (the uncertainty principle). It equals $1/2$ (see the chosen normalization in Sec. \ref{note}). It is well known that the least value is attained on the Gaussian function in the real line case and there is no such function in the periodic case. At the same time, in \cite{Bat} Battle proves a number of inequalities specifying the lower bound of the Heisenberg UC for wavelets.
In particular, it is proved that if a wavelet $\psi^0\in L_2({\mathbb R})$ has a zero frequency centre $c(\widehat{\psi^0}):=\int_{{\mathbb R}}\xi |\widehat{\psi^0}(\xi)|^2 \, d\xi/(\int_{{\mathbb R}}|\widehat{\psi^0}(\xi)|^2 \, d\xi)=0,$ then the Heisenberg UC is greater than or equal to $3/2$ (see \cite[Theorem 1.4]{Bat}). The main contribution of this paper is an inequality refining the lower bound of the Breitenberger UC for a wide class of sequences of periodic functions (Theorem \ref{main}). This result is somewhat analogous to Battle's result mentioned above. Given a sequence of periodic functions $\psi_j,$ $j \in \z_+,$ the conditions $|(\psi'_j,\,\psi_j)|\leq C \|\psi_j\|^2$ and $\lim_{j\to \infty} q_j \widehat{\psi}_j(k) / \|\psi_j\| =0$ (see (\ref{cond3}) and (\ref{cond1}) in Theorem \ref{main}) correspond to a zero frequency centre $c(\widehat{\psi^0})=0$ and the wavelet admissibility condition $\widehat{\psi^0}(0)=0$ respectively. The rest of the restrictions (\ref{cond2}), (\ref{cond4})--(\ref{cond6}) in Theorem \ref{main} mean some ``regularity'' of the sequence $\psi_j.$ In \cite{prqurase03}, the following formula connecting UCs for periodic ($UC_B$) and non-periodic ($UC_H$) functions is obtained: $ \lim_{j\to\infty} UC_B(\psi^p_j)=UC_H(\psi^0), $ where $\psi^p_{j}(x):=2^{j/2}\sum_{n \in \mathbb{Z}} \psi^0(2^j(x+2 \pi n)),$ $j \in\z_+$. In Step 3 of the proof of Theorem \ref{main}, we generalize this formula and suggest a new proof of this fact. In Remarks 1--3, we discuss which classes of periodic wavelet sequences satisfy the conditions of Theorem \ref{main}. We also study one particular minimization problem for the Heisenberg UC connected with Battle's result mentioned above (Theorem \ref{nominfunc}). If the result of Theorem \ref{nominfunc} had been wrong, it would have been possible to give another proof of Theorem \ref{main}.
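The mechanism behind this formula can be seen on the level of Fourier coefficients. The following computation is standard (we assume enough decay of $\psi^0$ to interchange summation and integration):
$$
\widehat{\psi^p_j}(k) = \frac{2^{j/2}}{2\pi}\int_{-\pi}^{\pi} \sum_{n \in \mathbb{Z}} \psi^0(2^j(x+2\pi n))\, \mathrm{e}^{-\mathrm{i} k x}\,\mathrm{d}x = \frac{2^{j/2}}{2\pi}\int_{{\mathbb R}} \psi^0(2^j y)\, \mathrm{e}^{-\mathrm{i} k y}\,\mathrm{d}y = 2^{-j/2}\, \widehat{\psi^0}(2^{-j}k),
$$
that is, the Fourier coefficients of the periodization $\psi^p_j$ sample the Fourier transform of $\psi^0$ on the grid $2^{-j}\mathbb{Z}$.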
While there are sufficiently many results specifying the lower and the upper bounds of the Heisenberg UC \cite{Balan,Bat,FolSit,GB,GL,L08,L07,L12,L11,N1} and the upper bound of the Breitenberger UC \cite{LebPres14,NW,PQ,R05,Se}, to our knowledge, there are virtually no results in the literature concerning estimates of the lower bound for the Breitenberger UC. This work also has the following motivation. In \cite{LebPres14}, a family of periodic Parseval wavelet frames is constructed. The family has optimal time-frequency localization (the Breitenberger UC tends to $1/2$) with respect to a family parameter, and it has the best currently known localization (the Breitenberger UC tends to $3/2$) with respect to a multiresolution analysis parameter. In \cite{LebPres14}, the following conjecture was formulated: the Breitenberger UC is greater than $3/2$ for any periodic wavelet sequence $(\psi_j)_{j\in\z_+}$ such that $(\psi_j',\,\psi_j)_{L_{2,2\pi}}=0$. Theorem \ref{main} of this paper proves the conjecture for a wide class of sequences of periodic functions under the milder restriction $|(\psi_j',\,\psi_j)_{L_{2,2\pi}}| \leq C \|\psi_j\|^2_{L_{2,2\pi}}$. So the family constructed in \cite{LebPres14} has optimal localization with respect to both parameters within the class of functions considered in Theorem \ref{main}. \section{Notations and auxiliary results} \label{note} Let $L_{2,2 \pi}$ be the space of all $2\pi$-periodic square-integrable complex-valued functions, with inner product $(\cdot,\cdot)$ given by $ (f,\,g):= (2\pi)^{-1} \int_{-\pi}^{\pi} f(x)\overline{g(x)}\,\mathrm{d}x $ for any $f,g \in L_{2,2\pi},$ and norm $\|\cdot\|:=\sqrt{(\cdot,\,\cdot)}.$ The Fourier series of a function $ f \in L_{2,2\pi} $ is defined by $\sum_{k \in \mathbb{Z}}\widehat{f}(k) \mathrm{e}^{ \mathrm{i} k x},$ where its Fourier coefficient is defined by $ \widehat{f}(k) = (2\pi)^{-1} \int_{-\pi}^{\pi} f(x)\mathrm{e}^{- \mathrm{i} k x}\,\mathrm{d}x.
$ Let $L_{2}({\mathbb R})$ be the space of all square-integrable complex-valued functions, with inner product $(\cdot,\cdot)$ given by $ (f,\,g):= (2\pi)^{-1} \int_{{\mathbb R}} f(x)\overline{g(x)}\,\mathrm{d}x $ for any $f,g \in L_2({\mathbb R}),$ and norm $\|\cdot\|:=\sqrt{(\cdot,\,\cdot)}.$ The Fourier transform of a function $ f \in L_{2}({\mathbb R}) $ is defined by $ \widehat{f}(\xi):= (2\pi)^{-1} \int_{{\mathbb R}}f(x) \mathrm{e}^{- \mathrm{i} \xi x}\,\mathrm{d}x. $ Let us recall the definitions of the UCs and the uncertainty principles. \begin{df}[\cite{H}] \label{Huc} \texttt{The Heisenberg UC} of $f \in L_2(\mathbb{R})$ is the functional $UC_H(f):=\Delta(f)\Delta(\widehat{f})$ such that $$ \Delta^2(f):= \|f\|^{-2}\|(\cdot-c(f))f\|^{2}, \ \ \ \ \ \ \ c(f):= \|f\|^{-2}(\cdot f,\,f), $$ where $\Delta(f),$ $\Delta(\widehat{f}),$ $c(f),$ and $c(\widehat{f})$ are called \texttt{time variance, frequency variance, time centre,} and \texttt{frequency centre} respectively. \end{df} It is clear that the time variance can be rewritten as $$ \Delta^2(f)=\frac{\|\cdot f\|^2}{\|f\|^2}-\frac{(\cdot f,\,f)^2}{\|f\|^4}. $$ Using elementary properties of the Fourier transform, we rewrite the frequency variance as $$ \Delta^2(\widehat{f})=\frac{\| i f'\|^2}{\|f\|^2} - \frac{(i f',\,f)^2}{\|f\|^4} $$ (see \cite[Lemmas 1 and 2]{prqurase03}, where this trick is explained in detail). \begin{teo}[\cite{H, FolSit}; the Heisenberg uncertainty principle] Let $f \in L_2(\mathbb{R})$, then $UC_H(f)\geq 1/2,$ and the equality is attained iff $f$ is the Gaussian function.
\end{teo} \begin{teo}[\cite{Bat}, p.~137; refinement of the Heisenberg uncertainty principle] \label{Battle} If $f \in L_2({\mathbb R}),$ $c(\widehat{f})=0$, and $\int_{{\mathbb R}} f = 0,$ then $UC_H(f)\geq 3/2.$ \end{teo} \begin{df}[\cite{B}] \label{uc} Let $f =\sum_{k \in \mathbb{Z}} c_k \mathrm{e}^{ \mathrm{i} k \cdot}\in L_{2,2\pi}.$ \texttt{The first trigonometric moment} is defined as $$ \tau(f):=\frac{1}{2 \pi} \int_{-\pi}^{\pi} \mathrm{e}^{ \mathrm{i} x} |f(x)|^2\, \mathrm{d}x = \sum_{k \in \mathbb{Z}} c_{k-1} \overline{c_{k}}. $$ \texttt{The angular variance} of the function $f$ is defined by $$ \mathrm{var_A}(f):= \frac{\left(\sum_{k \in \mathbb{Z}}|c_k|^2\right)^2}{ \left|\sum_{k \in \mathbb{Z}}c_{k-1} \overline{c_{k}}\right|^2}-1 = \frac{\|f\|^4}{|\tau(f)|^2}-1. $$ \texttt{The frequency variance} of the function $f$ is defined by $$ \mathrm{var_F}(f):= \frac{\sum_{k \in \mathbb{Z}}k^2 |c_k|^2}{\sum_{k \in \mathbb{Z}}|c_k|^2}- \frac{\left(\sum_{k \in \mathbb{Z}}k|c_k|^2\right)^2}{\left(\sum_{k \in \mathbb{Z}}|c_k|^2\right)^2} = \frac{\|f'\|^2}{\|f\|^2}-\frac{(i f',\, f)^2}{\|f\|^4}. $$ The quantity $ UC_B(\{c_k\}):=UC_B(f):=\sqrt{\mathrm{var_A}(f)\mathrm{var_F}(f)} $ is called \texttt{the Breitenberger (periodic) UC}. \end{df} We also consider two additional quantities characterizing the first trigonometric moment (in another form, they are introduced in \cite[Lemma 3]{prqurase03}).
Namely, by definition, put \begin{equation} \label{defAB} A(f):=\frac12\sum_{k\in \z} |c_{k-1}-c_k|^2,\ \ \ \ B(f):=\frac12 \sum_{k\in \z}(c_{k-1}-c_k)(\overline{c_{k-1}}+\overline{c_{k}}) \end{equation} for $f(x)=\sum_{k \in \mathbb{Z}} c_k \mathrm{e}^{ \mathrm{i} k x}\in L_{2,2\pi}.$ It is clear that \begin{equation} \label{AB} A(f) = \sum_{k\in \z} |c_k|^2 - \Re \left(\sum_{k\in \z} c_{k-1}\overline{c_k}\right) = \|f\|^2 - \Re (\tau(f)), \ \ \ \ B(f) = i \Im \left(\sum_{k\in \z} c_{k-1}\overline{c_k}\right) = i \Im (\tau(f)). \end{equation} \begin{teo}[\cite{B, PQ}; the Breitenberger uncertainty principle] \label{UC} Let $f \in L_{2,2\pi}$, $f(x)\neq C \mathrm{e}^{ \mathrm{i} k x},$ $C \in \mathbb{R},$ $k \in \mathbb{Z}$. Then $UC_B(f) > 1/2$ and there is no function such that $UC_B(f) = 1/2.$ \end{teo} Now, we recall the notion of a tight frame. Let $H$ be a separable Hilbert space. If there exists a constant $A>0$ such that for any $f \in H$ the following equality holds $ \sum_{n=1}^{\infty} \left|(f,\,f_n)\right|^2 = A \|f\|^2, $ then the sequence $(f_n)_{n \in \mathbb{N}}$ is called \texttt{a tight frame} for $H.$ In the case $A=1$, a tight frame is called \texttt{a Parseval frame}. In addition, if $\|f_n\|=1$ for all $n \in \mathbb{N}$, then a Parseval frame forms an orthonormal basis. We are especially interested in obtaining a refinement of the Breitenberger uncertainty principle for the case of periodic wavelet sequences. For our purposes, it is sufficient to consider wavelet systems with one wavelet generator. We recall the basic notions. In the sequel, we use the following notation: $ f_{j,k}(x):=f_j(x- 2 \pi 2^{-j} k) $ for a function $f_j \in L_{2,2\pi}.$ Consider functions $\varphi_0,\,\psi_j \in L_{2,2\pi},$ $j\in\z_+$.
If the set $\Psi:=\left\{\varphi_0, \psi_{j,k} : \ j\in\z_+,\ k=0,\dots,2^j-1 \right\}$ forms a tight frame (or a basis) for $L_{2,2\pi}$ then $\Psi$ is said to be \texttt{a periodic tight wavelet frame (or a periodic wavelet basis)} for $L_{2,2\pi}.$ \begin{teo}[\cite{GT1}; the unitary extension principle for a periodic setting] \label{UEP} Let $\varphi_j \in L_{2,2\pi},$ $j\in\z_+$, be a sequence of $2\pi$-periodic functions such that \begin{equation}\label{con1} \lim_{j \to \infty}2^{j/2} \widehat{\varphi}_j(k) = 1, \ \ \ \ \ \ \ k \in \z. \end{equation} Let $\mu^j_k \in \cn,$ $j\in\z_+,$ $k\in\z$, be a two-parameter sequence such that $\mu^j_{k+2^j}=\mu^j_{k},$ and \begin{equation} \label{con2} \widehat{\varphi}_{j}(k)=\mu^{j+1}_k \widehat{\varphi}_{j+1}(k). \end{equation} Let $\psi_j,$ $j\in\z_+,$ be a sequence of $2\pi$-periodic functions defined using Fourier coefficients \begin{equation} \label{con3} {\widehat{\psi}}_{j}(k)=\lambda^{j+1}_{k} \widehat{\varphi}_{j+1}(k), \end{equation} where $\lambda^j_{k}\in\cn$, $\lambda^j_{k+2^j}=\lambda^j_{k}$, and \begin{equation} \label{con4} \left( \begin{array}{cc} \mu ^j_k & \mu ^j_{k+2^{j-1}} \\ \lambda^j_{k} & \lambda^j_{k+2^{j-1}} \end{array} \right) \left( \begin{array}{cc} \overline{\mu} ^j_k & \overline{\lambda}^j_{k} \\ \overline{\mu} ^j_{k+2^{j-1}} & \overline{\lambda}^j_{k+2^{j-1}} \end{array} \right) = \left( \begin{array}{cc} 2 & 0 \\ 0 & 2 \end{array} \right). \end{equation} Then the family $\Psi : =\left\{\varphi_0, \psi_{j,k} : \ j \in\z_+,\ k=0,\dots,2^j-1\right\}$ forms a Parseval wavelet frame for $L_{2,2\pi}.$ \end{teo} The sequences $(\varphi_j)_{j\in\z_+},$ $(\psi_j)_{j\in\z_+},$ $(\mu^j_k)_{k\in\z},$ and $(\lambda^j_k)_{k\in\z}$ are called \texttt{a scaling sequence, a wavelet sequence, a scaling mask and a wave\-let mask} respectively.
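To illustrate condition (\ref{con4}), consider real-valued masks and define the wavelet mask by $\lambda^{j}_k:= \mathrm{e}^{2\pi \mathrm{i} 2^{-j}k}\mu^{j}_{k+2^{j-1}}$, as in the construction recalled below. The $2^j$-periodicity of $\mu^j_k$ yields $$ \lambda^j_{k+2^{j-1}} = \mathrm{e}^{2\pi \mathrm{i} 2^{-j}(k+2^{j-1})}\mu^j_{k+2^{j}} = -\mathrm{e}^{2\pi \mathrm{i} 2^{-j}k}\mu^j_{k}, $$ so the off-diagonal entries of the matrix product in (\ref{con4}) equal $\mathrm{e}^{-2\pi \mathrm{i} 2^{-j}k}\left(\mu^j_k \mu^j_{k+2^{j-1}} - \mu^j_{k+2^{j-1}}\mu^j_k\right)=0$ and its conjugate, while both diagonal entries equal $(\mu^j_k)^2+(\mu^j_{k+2^{j-1}})^2$. Hence, for real masks, (\ref{con4}) reduces to the single scalar condition $(\mu^j_k)^2+(\mu^j_{k+2^{j-1}})^2=2$, $k\in\z$.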
A periodic wavelet system can be constructed starting with a scaling mask. Namely, let $\nu^{j}_{k}$ be a sequence satisfying $\nu^{j}_{k}=\nu^{j}_{k+2^j}$. We define $\widehat{\xi}_{j}(k):=\prod_{r=j+1}^{\infty}\nu^{r}_{k}.$ If the above infinite products converge, then the scaling sequence, scaling mask, wavelet mask, and wavelet sequence are defined respectively as \begin{gather} \widehat{\varphi_{j}}(k):=2^{-j/2}\widehat{\xi}_{j}(k),\qquad\quad \mu^{j}_k:=\sqrt{2} \nu^{j}_k,\ \notag\\ \lambda^{j}_k:=e^{2\pi i 2^{-j}k}\mu^{j}_{k+2^{j-1}},\qquad \quad {\widehat{\psi}}_{j}(k):=\lambda^{j+1}_{k} \widehat{\varphi}_{j+1}(k). \notag \end{gather} \section{Main result} In the following theorem we prove an inequality for the Breitenberger UC for a wide class of sequences of periodic functions. \begin{teo} \label{main} Let $\psi_j \in L_{2,2\pi},$ $j\in\n,$ be periodic functions such that \begin{gather} \label{cond1} \lim_{j\to \infty} q_j \widehat{\psi}_j(k) / \|\psi_j\| =0 \mbox{ for } |k| \leq M(C), \\ \label{cond2} \lim_{j\to\infty} q_j^{-2} A(\psi'_j) / \|\psi_j\|^2= 0,\\ \label{cond3} |(\psi'_j,\,\psi_j)|\leq C \|\psi_j\|^2,\\ \label{cond4} q_j^{-2}\|\psi'_j\|^2\leq C \|\psi_j\|^2, \\ \label{cond5} q_j^{2} A(\psi_j)\leq C \|\psi_j\|^2, \\ \label{cond6} q_{j} \left|B(\psi_j)\right|\leq C \|\psi_j\|^2 \end{gather} where $M(C):=2 (C + C \sqrt{2C}/3 +1/6),$ $C>0$ is an absolute constant, and $q_j \rightarrow +\infty$ as $j \to +\infty.$ If $\lim_{j \to \infty}UC_B(\psi_j)$ exists, then \begin{equation} \lim_{j \to \infty}UC_B(\psi_j)\geq 3/2.
\end{equation} \end{teo} \textbf{Proof.} \textbf{Step 1.} The Breitenberger UC is a homogeneous functional of degree zero, that is, $UC_B(\alpha f) = UC_B(f)$ for $\alpha \in \cn \setminus\{0\}.$ So in the sequel we consider the functions $\psi_j/\|\psi_j\|$ instead of $\psi_j$; to avoid cumbersome notation, we keep the former name for these functions. Thus we consider a sequence $(\psi_j)_{j \in \z_+}$ with $\|\psi_j\|=1$, and conditions (\ref{cond1}) -- (\ref{cond6}) take the form $$ \lim_{j\to \infty} q_j \widehat{\psi}_j(k)=0 \mbox{ for } |k| \leq M(C),\ \ \ \lim_{j\to\infty} q_j^{-2} A(\psi'_j) = 0,\ \ \ |(\psi'_j,\,\psi_j)|\leq C , $$ $$ q_j^{-2}\|\psi'_j\|^2\leq C, \ \ \ q_j^{2} A(\psi_j)\leq C,\ \ \ q_j |B(\psi_j)|\leq C. $$ \textbf{Step 2.} Let us consider auxiliary functions $\psi_{\ast j}\in L_{2,2\pi}$ such that \begin{equation} \label{condpsi0} \widehat{\psi_{\ast j}}(k)=\left\{ \begin{array}{ll} \widehat{\psi_{j}}(k), & |k| > M(C), \\ 0, & |k| \leq M(C). \end{array} \right. \end{equation} It follows from (\ref{cond1}) that $$ \|\psi_j\|^2-\|\psi_{\ast j}\|^2 = \sum_{|k|\leq M(C)}|\widehat{\psi_{j}}(k)|^2 = q_j^{-2} o(1), \ \ \ \ \tau(\psi_j)-\tau(\psi_{\ast j}) = \sum_{|k|\leq M(C)}\widehat{\psi_{j}}(k-1)\overline{\widehat{\psi_{j}}(k)} = q_j^{-2} o(1), $$ $$ (\psi'_j,\, \psi_j)-(\psi'_{\ast j},\, \psi_{\ast j}) = i \sum_{|k|\leq M(C)}k|\widehat{\psi_{j}}(k)|^2 = q_j^{-2} o(1), \ \ \ \ \|\psi'_j\|^2-\|\psi'_{\ast j}\|^2 = \sum_{|k|\leq M(C)}k^2|\widehat{\psi_{j}}(k)|^2 = q_j^{-2} o(1) $$ as $j\to\infty.$ We claim that \begin{equation} \label{UCBpsi0j} UC_B(\psi_{\ast j})-UC_B(\psi_{j}) \rightarrow 0 \mbox{ as } j \to \infty.
\end{equation} Indeed, it follows from (\ref{AB}), $\|\psi_j\|=1$, and (\ref{cond5}) that $$ 1\geq |\tau(\psi_j)|^2 = (\|\psi_j\|^2 - A(\psi_j))^2 + ( i B(\psi_j))^2= (1 - A(\psi_j))^2 + ( i B(\psi_j))^2 \geq 1-2 A(\psi_j) \geq 1- 2 C q_j^{-2}. $$ So $$ 0\leq {\rm var}_A (\psi_j) = \frac{1}{|\tau(\psi_j)|^2} - 1 \leq \frac{2C q_j^{-2} }{1-2C q_j^{-2} }. $$ Therefore, $q_j^2 {\rm var}_A (\psi_j)$ is bounded as $j \to \infty.$ By the definition of ${\rm var}_F $ and (\ref{cond4}) we obtain the boundedness of $q_j^{-2}{\rm var}_F (\psi_{\ast j})$: $$ 0\leq q_j^{-2}{\rm var}_F (\psi_{\ast j}) = q_j^{-2} \left(\frac{\|(\psi_{\ast j})'\|^2}{\|\psi_{\ast j}\|^2}-\frac{(i (\psi_{\ast j})',\, \psi_{\ast j})^2}{\|\psi_{\ast j}\|^4}\right) \leq q_j^{-2} \frac{\|(\psi_{j})'\|^2+q_j^{-2} o(1)}{1-q_j^{-2} o(1)} \leq \frac{C+q_j^{-4} o(1)}{1-q_j^{-2} o(1)}. $$ We rewrite now $UC_B(\psi_{\ast j})-UC_B(\psi_{j})$ in a standard form $$ \left(q_j^{2}{\rm var}_A (\psi_{\ast j})-q_j^{2}{\rm var}_A (\psi_{ j})\right)q_j^{-2}{\rm var}_F (\psi_{\ast j}) + \left(q_j^{-2}{\rm var}_F (\psi_{\ast j})-q_j^{-2}{\rm var}_F (\psi_{ j})\right)q_j^{2}{\rm var}_A (\psi_{ j}), $$ and we estimate $q_j^{2} \left({\rm var}_A (\psi_{\ast j})-{\rm var}_A (\psi_{ j})\right)$ and $q_j^{-2}\left({\rm var}_F (\psi_{\ast j})-{\rm var}_F (\psi_{ j})\right).$ Using the above estimates for $\|\psi_j\|^2-\|\psi_{\ast j}\|^2$ and $\tau(\psi_j)-\tau(\psi_{\ast j})$ we get $$ q_j^{2} \left|{\rm var}_A (\psi_{\ast j})-{\rm var}_A (\psi_{ j})\right| = q_j^{2} \left|\frac{\|\psi_{\ast j}\|^4}{|\tau(\psi_{\ast j})|^2}-\frac{\|\psi_{j}\|^4}{|\tau(\psi_{j})|^2}\right|$$ $$ \leq q_j^{2} \frac{ \left|\|\psi_{\ast j}\|^2-\|\psi_{ j}\|^2\right| \left(\|\psi_{\ast j}\|^2+\|\psi_{
j}\|^2\right) |\tau(\psi_{\ast j})|^2 + \left||\tau(\psi_{\ast j})|-|\tau(\psi_{j})|\right| \left(|\tau(\psi_{\ast j})|+|\tau(\psi_{j})|\right) \|\psi_{j}\|^4 }{|\tau(\psi_{\ast j})|^2|\tau(\psi_{ j})|^2} $$ $$ \leq q_j^2 \frac{ q_j^{-2}o(1) (1+q_j^{-2}o(1))+2 q_j^{-2}o(1) } {(1-2Cq_j^{-2})(1-2(C+o(1))q_j^{-2})}. $$ So $q_j^{2} \left|{\rm var}_A (\psi_{\ast j})-{\rm var}_A (\psi_{ j})\right| \to 0$ as $j \to \infty.$ Analogously, using the above estimates for $\|\psi'_j\|^2-\|\psi'_{\ast j}\|^2$ and $(\psi'_j,\, \psi_j)-(\psi'_{\ast j},\, \psi_{\ast j})$ we have $$ q_j^{-2}\left|{\rm var}_F (\psi_{\ast j})-{\rm var}_F (\psi_{ j})\right| = q_j^{-2}\left| \frac{\|\psi'_{\ast j}\|^2}{\|\psi_{\ast j}\|^2}+\frac{(\psi'_{\ast j},\psi_{\ast j})^2}{\|\psi_{\ast j}\|^4}- \frac{\|\psi'_{ j}\|^2}{\|\psi_{ j}\|^2}-\frac{(\psi'_{j},\psi_{j})^2}{\|\psi_{j}\|^4} \right| $$ $$ \leq q_j^{-2} \frac{ \left|\|\psi'_{\ast j}\|^2-\|\psi'_{ j}\|^2\right| \|\psi_{\ast j}\|^2 + \left|\|\psi_{\ast j}\|^2-\|\psi_{ j}\|^2\right| \|\psi'_{j}\|^2 } {\|\psi_{\ast j}\|^2\|\psi_{ j}\|^2} $$ $$ + q_j^{-2} \frac{ \left|(\psi'_{\ast j},\psi_{\ast j})-(\psi'_{ j},\psi_{ j})\right| \left|(\psi'_{\ast j},\psi_{\ast j})+(\psi'_{ j},\psi_{ j})\right| \|\psi_{\ast j}\|^4 + \left|\|\psi_{\ast j}\|^2-\|\psi_{ j}\|^2\right| \left(\|\psi_{\ast j}\|^2+\|\psi_{ j}\|^2\right) \left|(\psi'_{ j},\psi_{ j})^2\right| }{\|\psi_{\ast j}\|^4\|\psi_{ j}\|^4} $$ $$ \leq q_j^{-2}\frac{q_j^{-2}o(1)(1+q_j^{-2}o(1))+q_j^{-2}o(1) C q_j^{2}}{1-q_j^{-2}o(1)} $$ $$ + q_j^{-2}\frac{q_j^{-2}o(1)(2 C+q_j^{-2}o(1))(1-q_j^{-2}o(1)) + q_j^{-2}o(1)(1+q_j^{-2}o(1))C}{1-q_j^{-2}o(1)}. $$ So $q_j^{-2}\left({\rm var}_F (\psi_{\ast j})-{\rm var}_F (\psi_{ j})\right) \to 0$ as $j\to \infty$, and (\ref{UCBpsi0j}) is proved.
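For intuition about the two factors manipulated in Step 2, the sketch below evaluates ${\rm var}_A$, ${\rm var}_F$, and $UC_B=\sqrt{{\rm var}_A\,{\rm var}_F}$ directly from a finite set of Fourier coefficients, using $\tau(\psi)=\sum_k\widehat{\psi}(k-1)\overline{\widehat{\psi}(k)}$ and the frequency-variance form appearing above. The Gaussian coefficient profile and the Parseval normalization without $2\pi$ factors are illustrative assumptions.

```python
import numpy as np

K = 25
k = np.arange(-K, K + 1)
a = np.exp(-k**2 / 2.0)          # illustrative real Fourier coefficients of psi

norm2 = np.sum(a**2)                         # ||psi||^2 (Parseval, no 2*pi factor)
tau = np.sum(a[:-1] * a[1:])                 # tau(psi) = sum_k a_{k-1} * conj(a_k)
var_A = norm2**2 / abs(tau)**2 - 1           # angular variance
freq_mean = np.sum(k * a**2) / norm2
var_F = np.sum(k**2 * a**2) / norm2 - freq_mean**2   # frequency variance
uc_b = np.sqrt(var_A * var_F)                # Breitenberger UC
assert 0.5 < uc_b < 1.0
print(f"var_A = {var_A:.4f}, var_F = {var_F:.4f}, UC_B = {uc_b:.4f}")
```

For this coefficient profile $UC_B$ comes out around $0.57$, above the uncertainty lower bound $1/2$; the theorem forces wavelet sequences to stay above $3/2$ in the limit.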
\textbf{Step 3.} Let us introduce auxiliary functions $f_{\ast j},$ $j\in\n$, such that \begin{gather} \label{conf1} q_j^{-1/2}\widehat{f_{\ast j}}(q_j^{-1}k) = \widehat{\psi_{\ast j}}(k), \\ \label{conf2} \widehat{f_{\ast j}} \mbox{ is linear on any interval } [q_j^{-1}(k-1),\,q_j^{-1}k], \end{gather} where $k \in \z.$ Since $\psi_{\ast j} \in L_{2,2\pi}$, it follows that $f_{\ast j} \in L_2({\mathbb R})$ for all $j\in\n.$ We claim that \begin{equation} \label{UCHUCB} UC_B(\psi_{\ast j}) - UC_H(f_{\ast j}) \rightarrow 0 \mbox{ as } j\to \infty. \end{equation} Indeed, we estimate the differences \begin{enumerate} \item $\|\psi_{\ast j}\|^2_{L_{2,2\pi}} - \|f_{\ast j}\|^2_{L_2({\mathbb R})}, \ \ \ $ \item $q_j^{-1}(\psi'_{\ast j},\,\psi_{\ast j})_{L_{2,2\pi}}-(f'_{\ast j},\,f_{\ast j})_{L_2({\mathbb R})}, \ \ \ $ \item $q_j^{-2}\|\psi'_{\ast j}\|^2_{L_{2,2\pi}} - \|f'_{\ast j}\|^2_{L_2({\mathbb R})}, $ \item $2\cdot q_j^{2} A(\psi_{\ast j}) - \|\cdot f_{\ast j}\|^2_{L_2({\mathbb R})}, \ \ \ $ \item $q_j B(\psi_{\ast j}) - i(\cdot f_{\ast j},\,f_{\ast j})_{L_2({\mathbb R})}.$ \end{enumerate} 1.
For the first difference, successively using the definition of $f_{\ast j}$ (\ref{conf2}), (\ref{AB}), and (\ref{cond5}) we have $$ \|\psi_{\ast j}\|^2_{L_{2,2\pi}} - \|f_{\ast j}\|^2_{L_2({\mathbb R})} = \sum_{k\in\z} \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2 -\int_{{\mathbb R}}\bigl|\widehat{f_{\ast j}}(\xi)\bigr|^2\,d\xi= \sum_{k\in\z} \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2 -\sum_{k\in\z} \int_{q_j^{-1}(k-1)}^{q_j^{-1}k} \bigl|\widehat{f_{\ast j}}(\xi)\bigr|^2\,d\xi $$ $$ =\sum_{k\in\z} \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2 -\sum_{k\in\z} \int_{k-1}^{k} \left|\left(\widehat{\psi_{\ast j}}(k)-\widehat{\psi_{\ast j}}(k-1)\right)(\xi-k)+\widehat{\psi_{\ast j}}(k)\right|^2\,d\xi =\sum_{k\in\z} \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2 $$ $$ -\frac13 \left(\sum_{k\in\z} \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2 +\sum_{k\in\z} \bigl|\widehat{\psi_{\ast j}}(k-1)\bigr|^2+\Re \left(\sum_{k\in\z} \widehat{\psi_{\ast j}}(k-1)\overline{\widehat{\psi_{\ast j}}(k)}\right) \right)= \frac13 \left(\|\psi_{\ast j}\|^2_{L_{2,2\pi}}-\Re(\tau(\psi_{\ast j}))\right) $$ $$ =\frac13 A(\psi_{\ast j})\leq \frac{C}{3} q_j^{-2}. $$ So, \begin{equation} \label{estnorm} \|\psi_{\ast j}\|^2_{L_{2,2\pi}} - \|f_{\ast j}\|^2_{L_2({\mathbb R})}\leq \frac{C}{3} q_j^{-2} \to 0 \mbox{ as } j \to \infty. \end{equation} 2.
Similarly to the previous calculation, we obtain $$ \left|q_j^{-1}(\psi'_{\ast j},\,\psi_{\ast j})_{L_{2,2\pi}}-(f'_{\ast j},\,f_{\ast j})_{L_2({\mathbb R})}\right|= \left|q_j^{-1} i \sum_{k\in\z} k \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2 - i \int_{{\mathbb R}} \xi \bigl|\widehat{f_{\ast j}}(\xi)\bigr|^2\,d\xi\right| $$ $$ = q_j^{-1} \left|\sum_{k\in\z} k \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2 -\sum_{k\in\z} \int_{k-1}^{k} \xi \left|\left(\widehat{\psi_{\ast j}}(k)-\widehat{\psi_{\ast j}}(k-1)\right)(\xi-k)+\widehat{\psi_{\ast j}}(k)\right|^2\,d\xi \right| $$ $$ \leq \frac{q_j^{-1}}{3} \left| \sum_{k\in\z} k \left(\bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2-\Re \bigl( \widehat{\psi_{\ast j}}(k-1)\overline{\widehat{\psi_{\ast j}}(k)}\bigr)\right)\right| +\frac{q_j^{-1}}{12}\left|\Re \left(\sum_{k\in\z} \widehat{\psi_{\ast j}}(k-1)\overline{\widehat{\psi_{\ast j}}(k)}\right) \right|. $$ For the first sum, the Cauchy--Schwarz inequality, (\ref{cond4}), and (\ref{cond5}) yield $$ \frac{q_j^{-1}}{3}\left| \sum_{k\in\z} k \left(\bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2-\Re \bigl( \widehat{\psi_{\ast j}}(k-1)\overline{\widehat{\psi_{\ast j}}(k)}\bigr)\right)\right| \leq \frac{q_j^{-1}}{3} \left(\sum_{k\in\z} k^2 \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2\right)^{1/2} $$ $$ \times \left(\sum_{k\in\z}\bigl|\widehat{\psi_{\ast j}}(k)-\widehat{\psi_{\ast j}}(k-1)\bigr|^2\right)^{1/2} \leq \frac{q_j^{-1}}{3} \|\psi'_{\ast j}\|_{L_{2,2\pi}}\left(2 A(\psi_{\ast j})\right)^{1/2}\leq \frac{2^{1/2}}{3} C^{3/2} q_j^{-1}.
$$ For the second one, using (\ref{AB}) and (\ref{cond5}), we have $$ \frac{q_j^{-1}}{12}\left|\Re \left(\sum_{k\in\z} \widehat{\psi_{\ast j}}(k-1)\overline{\widehat{\psi_{\ast j}}(k)}\right) \right|=\frac{q_j^{-1}}{12}\left|\Re \bigl(\tau(\psi_{\ast j})\bigr)\right|= \frac{q_j^{-1}}{12} \left|\|\psi_{\ast j}\|^2-A(\psi_{\ast j})\right|\leq \frac{q_j^{-1}}{6}. $$ Therefore, \begin{equation} \label{estcentre} \left|q_j^{-1}(\psi'_{\ast j},\,\psi_{\ast j})_{L_{2,2\pi}}-(f'_{\ast j},\,f_{\ast j})_{L_2({\mathbb R})}\right| \leq \left(\frac{C\sqrt{2C}}{3} + \frac{1}{6}\right)q_j^{-1} \to 0 \mbox{ as } j \to \infty. \end{equation} 3. The next estimates are analogous to the previous ones, so we omit the details: $$ \left|q_j^{-2}\|\psi'_{\ast j}\|^2_{L_{2,2\pi}} - \|f'_{\ast j}\|^2_{L_2({\mathbb R})}\right| = \left|q_j^{-2} \sum_{k\in\z} k^2 \bigl|\widehat{\psi_{\ast j}}(k)\bigr|^2 - \int_{{\mathbb R}} \xi^2 \bigl|\widehat{f_{\ast j}}(\xi)\bigr|^2\,d\xi\right| $$ $$ = q_j^{-2} \left| \frac16 \sum_{k\in \z} \bigl| k \widehat{\psi_{\ast j}}(k) - (k-1) \widehat{\psi_{\ast j}}(k-1) \bigr|^2 - \frac{1}{30} \left(2\sum_{k\in \z} \bigl| \widehat{\psi_{\ast j}}(k) \bigr|^2 + 3 \Re \left(\sum_{k\in \z}\widehat{\psi_{\ast j}}(k-1)\overline{\widehat{\psi_{\ast j}}(k)} \right) \right) \right| $$ $$ = q_j^{-2} \left| \frac16 A(\psi'_{\ast j})- \frac{1}{30} \left(2\|\psi_{\ast j}\|^2 +3 \Re\bigl(\tau(\psi_{\ast j})\bigr) \right) \right|= q_j^{-2} \left| \frac16 A(\psi'_{\ast j})- \frac{1}{30} \left(5 \|\psi_{\ast j}\|^2-3 A(\psi_{\ast j}) \right) \right|. $$ Then, by (\ref{cond2}) and (\ref{cond5}), we conclude that $$ q_j^{-2}\|\psi'_{\ast j}\|^2_{L_{2,2\pi}} - \|f'_{\ast j}\|^2_{L_2({\mathbb R})} \to 0 \mbox{ as } j\to \infty.
$$ Due to piecewise linearity of the functions $\widehat{f_{\ast j}}$, the differences in item 4 and item 5 are equal to $0$ for all $j \in \n.$ 4. Indeed, exploiting the definitions of $A$ (\ref{defAB}) and $f_{\ast j}$ (\ref{conf1}), we immediately get $$ 2\cdot q_j^{2} A(\psi_{\ast j}) - \|\cdot f_{\ast j}\|^2_{L_2({\mathbb R})} = q_j^{2} \sum_{k\in\z} \left|\widehat{\psi_{\ast j}}(k-1) - \widehat{\psi_{\ast j}}(k)\right|^2 - \int_{{\mathbb R}} \left|\widehat{f_{\ast j}}'(\xi)\right|^2\,d\xi $$ $$ = q_{j} \sum_{k\in\z} \left|\widehat{f_{\ast j}}\bigl(q_j^{-1}(k-1)\bigr) - \widehat{f_{\ast j}}\bigl(q_j^{-1} k\bigr)\right|^2 - \int_{{\mathbb R}} \left|\widehat{f_{\ast j}}'(\xi)\right|^2\,d\xi $$ $$ = \sum_{k\in \z} \int_{q_j^{-1}(k-1)}^{q_j^{-1} k}\left(\left|\frac{\widehat{f_{\ast j}}\bigl(q_j^{-1}(k-1)\bigr) - \widehat{f_{\ast j}}\bigl(q_j^{-1} k\bigr)}{q_j^{-1}}\right|^2 -\left|\widehat{f_{\ast j}}'(\xi)\right|^2 \right)\,d\xi =0. $$ 5.
The definitions of $B$ (\ref{defAB}) and $f_{\ast j}$ (\ref{conf1}) yield $$ q_j B(\psi_{\ast j}) - i(\cdot f_{\ast j},\,f_{\ast j})_{L_2({\mathbb R})} = q_j B(\psi_{\ast j}) + 2\pi (\widehat{f_{\ast j}}',\,\widehat{f_{\ast j}})_{L_2({\mathbb R})} $$ $$ = q_{j}\sum_{k\in\z} \frac{\overline{\widehat{\psi_{\ast j}}(k-1)+\widehat{\psi_{\ast j}}(k)}}{2} \bigl(\widehat{\psi_{\ast j}}(k-1)-\widehat{\psi_{\ast j}}(k)\bigr)+ \int_{{\mathbb R}} \widehat{f_{\ast j}}'(\xi)\overline{\widehat{f_{\ast j}}(\xi)}\,d\xi $$ $$ = \sum_{k\in\z} \int_{q_j^{-1}(k-1)}^{q_j^{-1} k} \left( \frac{\overline{\widehat{f_{\ast j}}\bigl(q_j^{-1}(k-1)\bigr)+\widehat{f_{\ast j}}\bigl(q_j^{-1}k\bigr)}}{2}\frac{\widehat{f_{\ast j}}\bigl(q_j^{-1}(k-1)\bigr)-\widehat{f_{\ast j}}\bigl(q_j^{-1}k\bigr)}{q_j^{-1}}+\widehat{f_{\ast j}}'(\xi)\overline{\widehat{f_{\ast j}}(\xi)} \right)\,d\xi. $$ The function $\widehat{f_{\ast j}}$ is linear on $[q_j^{-1}(k-1),\,q_j^{-1} k]$; therefore, $ \widehat{f_{\ast j}}'(\xi) \equiv c_{j,k}$ on $[q_j^{-1}(k-1),\,q_j^{-1} k],$ where $c_{j,k}:=-\left(\widehat{f_{\ast j}}\bigl(q_j^{-1}(k-1)\bigr)-\widehat{f_{\ast j}}\bigl(q_j^{-1}k\bigr)\right)q_j.$ Continuing the calculation, we obtain $$ \sum_{k\in\z} c_{j,k} \int_{q_j^{-1}(k-1)}^{q_j^{-1} k}\left( - \frac{\overline{\widehat{f_{\ast j}}\bigl(q_j^{-1}(k-1)\bigr)+\widehat{f_{\ast j}}\bigl(q_j^{-1}k\bigr)}}{2} +\overline{\widehat{f_{\ast j}}(\xi)} \right)\,d\xi = 0, $$ where the last equality is again due to the linearity of $\widehat{f_{\ast j}}$ on $[q_j^{-1}(k-1),\,q_j^{-1} k].$ So items 1--5 are estimated.
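The identity behind (\ref{estnorm}) (item 1, with the $q_j$-scaling cancelled) states that, for coefficients $a_k$ and the piecewise-linear interpolant $g$ with $g(k)=a_k$, one has $\sum_k|a_k|^2-\int_{{\mathbb R}}|g|^2 = \frac13\bigl(\sum_k|a_k|^2-\Re\sum_k a_{k-1}\overline{a_k}\bigr)$. The following sketch checks this exactly for random complex coefficients; the only ingredient is the segment formula $\int_0^1|u(1-t)+vt|^2\,dt=(|u|^2+|v|^2+\Re(u\overline{v}))/3$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # stand-in coefficients
ae = np.concatenate(([0], a, [0]))                         # pad: a_{-1} = a_N = 0

# exact squared L2 norm of the piecewise-linear interpolant g with g(k) = a_k,
# segment by segment: int_0^1 |u(1-t) + v t|^2 dt = (|u|^2 + |v|^2 + Re(u conj v))/3
u, v = ae[:-1], ae[1:]
norm_g_sq = np.sum((np.abs(u)**2 + np.abs(v)**2 + (u * v.conj()).real) / 3)

norm_a_sq = np.sum(np.abs(a)**2)
tau = np.sum(ae[:-1] * ae[1:].conj()).real     # Re sum_k a_{k-1} conj(a_k)

# the identity: ||psi||^2 - ||f||^2 = A(psi)/3 with A = ||psi||^2 - Re(tau)
assert np.isclose(norm_a_sq - norm_g_sq, (norm_a_sq - tau) / 3)
print("interpolation identity verified")
```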
Now we write the squared Breitenberger UC in the form $$ UC_B^2(\psi_{\ast j}) = q_j^{2} {\rm var}_A (\psi_{\ast j})\, q_j^{-2} {\rm var}_F (\psi_{\ast j}), $$ where $$ q_j^{-2} {\rm var}_F (\psi_{\ast j}) = q_j^{-2} \left(\frac{\|\psi_{\ast j}'\|^2}{\|\psi_{\ast j}\|^2}+\frac{(\psi_{\ast j}',\, \psi_{\ast j})^2}{\|\psi_{\ast j}\|^4}\right) $$ and (see also (\ref{defAB}), (\ref{AB})) $$ q_j^{2} {\rm var}_A (\psi_{\ast j}) = q_j^{2}\left(\frac{\|\psi_{\ast j}\|^4}{|\tau(\psi_{\ast j})|^2}-1\right)= q_j^{2} \frac{\|\psi_{\ast j}\|^4-|\tau(\psi_{\ast j})|^2}{|\tau(\psi_{\ast j})|^2} = q_j^{2} \frac{2 A(\psi_{\ast j}) \|\psi_{\ast j}\|^2- A^2(\psi_{\ast j})+B^2(\psi_{\ast j})}{\bigl(\|\psi_{\ast j}\|^2-A(\psi_{\ast j})\bigr)^2-B^2(\psi_{\ast j})}. $$ Using (\ref{cond3}), (\ref{cond4}), and $\|\psi_{\ast j}\|^2 - \|\psi_{j}\|^2= \|\psi_{\ast j}\|^2 - 1 \to 0$ as $j \to \infty$, we see that $q_j^{-2} {\rm var}_F (\psi_{\ast j})$ is bounded as $j\to \infty.$ Since, by (\ref{UCBpsi0j}), the finite limit $\lim_{j\to \infty}UC_B(\psi_{\ast j})$ exists, it follows that there exists an absolute constant $C_0>0$ such that $q_j^{2} {\rm var}_A (\psi_{\ast j})> C_0$ as $j\to \infty$. Similarly, (\ref{cond5}), (\ref{cond6}), and $\|\psi_{\ast j}\|^2 - \|\psi_{j}\|^2= \|\psi_{\ast j}\|^2 - 1 \to 0$ as $j \to \infty$ yield the boundedness of $q_j^{2} {\rm var}_A (\psi_{\ast j})$ as $j\to \infty,$ and therefore there exists an absolute constant $C_0>0$ such that $q_j^{-2} {\rm var}_F (\psi_{\ast j})> C_0$ as $j\to \infty$. Hence the inequality $q_j^{2} {\rm var}_A (\psi_{\ast j})> C_0>0$ as $j\to \infty$ and the estimates of items 1, 4, 5
allow us to write $$ \frac{\Delta^2(f_{\ast j})}{q_j^{2} {\rm var}_A (\psi_{\ast j})}=\frac{\|f_{\ast j}\|^2 \|\cdot f_{\ast j}\|^2-(\cdot f_{\ast j},\,f_{\ast j})^2}{\|f_{\ast j}\|^4} \frac{(\|\psi_{\ast j}\|^2-A(\psi_{\ast j}))^2-B^2(\psi_{\ast j})}{q_j^{2}\left(2 A(\psi_{\ast j}) \|\psi_{\ast j}\|^2-A^2(\psi_{\ast j})+ B^2(\psi_{\ast j})\right)} $$ $$ = \frac{\left(\|\psi_{\ast j}\|^2+o(1)\right) 2 \cdot q_j^{2} A(\psi_{\ast j}) + q_j^{2} B^2(\psi_{\ast j})}{q_j^{2}\left(2 A(\psi_{\ast j}) \|\psi_{\ast j}\|^2-A^2(\psi_{\ast j})+ B^2(\psi_{\ast j})\right)} \frac{\left((\|\psi_{\ast j}\|^2-A(\psi_{\ast j}))^2-B^2(\psi_{\ast j})\right)}{(\|\psi_{\ast j}\|^2+o(1))^2} $$ $$ =\left(1+\frac{o(1)\, 2 \cdot q_j^{2} A(\psi_{\ast j}) + q_j^{2} A^2(\psi_{\ast j})}{q_j^{2}\left(2 A(\psi_{\ast j}) \|\psi_{\ast j}\|^2-A^2(\psi_{\ast j})\right)}\right) \frac{\left((\|\psi_{\ast j}\|^2-A(\psi_{\ast j}))^2-B^2(\psi_{\ast j})\right)}{(\|\psi_{\ast j}\|^2+o(1))^2} \rightarrow 1 \mbox{ as } j\to \infty. $$ Similarly, by the inequality $q_j^{-2} {\rm var}_F (\psi_{\ast j})> C_0 > 0$ as $j\to \infty$ and the estimates of items 1, 2, 3, we conclude that $$ \frac{\Delta^2(\widehat{f_{\ast j}})}{q_j^{-2} {\rm var}_F (\psi_{\ast j})}= \frac{\|\psi_{\ast j}\|^4 \left(\|i f'_{\ast j}\|^2 \|f_{\ast j}\|^2 + (f'_{\ast j},\,f_{\ast j})^2\right)}{\|f_{\ast j}\|^4\ q_j^{-2}\left(\|\psi'_{\ast j}\|^2 \|\psi_{\ast j}\|^2 + (\psi'_{\ast j},\,\psi_{\ast j})^2\right)} $$ $$ =\frac{\|\psi_{\ast j}\|^4 \left((q_j^{-2}\|\psi'_{\ast j}\|^2 + o(1))(\|\psi_{\ast j}\|^2+o(1))+(q_j^{-1}(\psi'_{\ast j},\,\psi_{\ast j})+o(1))^2\right)}{(\|\psi_{\ast j}\|^2+o(1))^2 \ q_j^{-2}\left(\|\psi'_{\ast j}\|^2 \|\psi_{\ast j}\|^2 + (\psi'_{\ast j},\,\psi_{\ast j})^2\right)} \rightarrow 1 \mbox{ as } j \to \infty.
$$ Finally, (\ref{UCHUCB}) follows from the last two limits and the existence of the finite limit $\lim_{j\to \infty}UC_B(\psi_{\ast j}).$ \textbf{Step 4.} Let us consider auxiliary functions $f_j$, $j \in \n,$ defined by $$ \widehat{f_{j}} := \widehat{f_{\ast j}}(\cdot + c(\widehat{f_{\ast j}})), $$ where $c(\widehat{f})$ is the frequency centre of the function $f$ (see Definition \ref{Huc}). Then it is well known (see \cite[Exercise 1.5.1]{NPS}) that $c(\widehat{f_j})=0$ and \begin{equation} \label{UCH0} UC_H(f_{\ast j})=UC_H(f_j),\ \ \ \ \ \ \ \ j \in \n. \end{equation} Let us check that $|c(\widehat{f_{\ast j}})|\leq q_j^{-1} M(C)$ for $j>j_0$ for some $j_0\in\n.$ Indeed, by the definition of the frequency centre, the estimates (\ref{estnorm}) and (\ref{estcentre}) yield $$ |c(\widehat{f_{\ast j}})| = \frac{\left|\left(\cdot \widehat{f_{\ast j}},\,\widehat{f_{\ast j}} \right)\right|}{\|\widehat{f_{\ast j}}\|^2} = \frac{\left|\left(f'_{\ast j},\,f_{\ast j}\right)\right|}{\|f_{\ast j}\|^2} \leq q_j^{-1} \frac{\left|(\psi'_{\ast j},\,\psi_{\ast j})\right|+C \sqrt{2C}/3 + 1/6}{\|\psi_{\ast j}\|^2-C q_j^{-2}/3}. $$ Since $\|\psi_{\ast j}\|^2-C q_j^{-2}/3 = 1 -C q_j^{-2}/3 +o(1) \geq 1/2$ for $j>j_0$ for some $j_0\in\n$, it follows from (\ref{cond3}) and the definition of $M(C)$ in (\ref{cond1}) that $$ |c(\widehat{f_{\ast j}})| \leq 2\cdot q_j^{-1}\left(\left|(\psi'_{\ast j},\,\psi_{\ast j})\right|+C \sqrt{2C}/3 + 1/6\right) < 2\cdot q_j^{-1}\left(C+C \sqrt{2C}/3 + 1/6\right) =q_j^{-1} M(C). $$ Finally, using conditions (\ref{condpsi0}) and (\ref{conf1}) we conclude that $\widehat{f_{\ast j}}(\xi) \equiv 0$ for $\xi \in [ - q_j^{-1}M(C),\,q_j^{-1}M(C)].$ Therefore, the inequality $|c(\widehat{f_{\ast j}})|\leq q_j^{-1} M(C)$ implies $\widehat{f_j}(0)=\widehat{f_{\ast j}}(c(\widehat{f_{\ast j}}))=0$ for $j>j_0$.
\textbf{Step 5.} Since $\widehat{f_j}(0)=0$ for $j>j_0$ and $c(\widehat{f_{j}})=0$, it follows from Theorem \ref{Battle} that $UC_H(f_j)\geq 3/2$ for $j>j_0$. It remains to note that, using (\ref{UCBpsi0j}), (\ref{UCHUCB}), and (\ref{UCH0}), we obtain $$ \lim_{j\to \infty} UC_B(\psi_j)= \lim_{j\to \infty} UC_B(\psi_{\ast j})=\lim_{j\to \infty} UC_H(f_{\ast j})=\lim_{j\to \infty} UC_H(f_j)\geq 3/2. $$ Theorem \ref{main} is proved. $\Diamond$ Analyzing the conditions of Theorem \ref{main}, one could ask a natural question about the classes of periodic sequences that satisfy these conditions. Are these conditions restrictive or mild? Let us make some illuminating remarks. \textbf{Remark 1: Wavelet sequence generated by periodization} \noindent Let $\psi^0 \in L_2(\mathbb{R})$ be a wavelet function on the real line. It is natural to require that $\cdot \psi^0,\, i (\psi^0)' \in L_2({\mathbb R})$; otherwise $UC_H(\psi^0)$ is infinite. Put $$ \psi^p_{j,k}(x):=2^{j/2}\sum_{n \in \mathbb{Z}} \psi^0(2^j(x+2 \pi n)+k). $$ {\sloppy The sequence $(\psi^p_{j,k})_{j,k},$ $j \in \z_+,$ $k=0,\dots,2^j-1$, is said to be \texttt{a periodic wavelet set generated by periodization}. Set $\psi^p_{j} := \psi^p_{j,0},$ $j = 0,1,\dots,$ and put $q_j=2^j.$ We claim that for the wavelet sequence $(\psi^p_{j})_{j\in\z_+}$ conditions (\ref{cond4})--(\ref{cond6}) are fulfilled, and $\lim_{j\to\infty}UC_B(\psi^p_j)$ is finite. If additionally $\widehat{\psi^0}(\xi) = o (\sqrt{\xi})$ as $\xi \to 0$ and $\cdot (\psi^0)' \in L_2({\mathbb R}),$ then conditions (\ref{cond1}) and (\ref{cond2}) are also fulfilled, respectively.
Indeed, in \cite{prqurase03} it is proved that the quantities $$ \|\psi^p_j\|_{L_{2,2\pi}}, \ \ 2^{-2j}\|(\psi^p_j)'\|^2_{L_{2,2\pi}},\ \ 2^{-j}((\psi^p_j)',\,\psi^p_j)_{L_{2,2\pi}}, \ \ 2\cdot 2^{2j} A(\psi^p_j), \mbox{ and }\ 2^j B(\psi^p_j) $$ tend to $$ \|\psi^0\|_{L_2({\mathbb R})},\ \ \|(\psi^0)'\|^2_{L_2({\mathbb R})},\ \ ((\psi^0)',\,\psi^0)_{L_2({\mathbb R})}, \ \ \|\cdot \psi^0\|^2_{L_2({\mathbb R})}, \mbox{ and } \ i(\cdot \psi^0,\,\psi^0)_{L_2({\mathbb R})}, $$ respectively. Therefore, conditions (\ref{cond4}) -- (\ref{cond6}) hold true. Moreover, in \cite{prqurase03} it is deduced that $\lim_{j\to\infty}UC_B(\psi^p_j) = UC_H(\psi^0),$ so $\lim_{j\to\infty}UC_B(\psi^p_j)$ is finite. Since $\widehat{\psi^p_j}(k) = 2^{-j/2}\widehat{\psi^0}(2^{-j}k)$ and $\widehat{\psi^0}(\xi) = o (\sqrt{\xi})$ as $\xi \to 0$, it follows that (\ref{cond1}) is satisfied for any fixed $k \in \z.$ The condition $\widehat{\psi^0}(\xi) = o (\sqrt{\xi})$ as $\xi \to 0$ is not restrictive: if it fails, then $UC_H(\psi^0)=\infty.$ Indeed, suppose $UC_H(\psi^0)$ is finite. Then $(1+|\cdot|)\psi^0,\, (1+|\cdot|)\widehat{\psi^0} \in L_2({\mathbb R}),$ and $\psi^0$ is absolutely continuous (see \cite[Theorem 1.5.2]{NPS}). Therefore, $\psi^0(x) = O (x^{-3/2-\varepsilon})$ for some $\varepsilon > 0$ as $x\to\infty.$ Then $|\widehat{\psi^0}(\xi+h) - \widehat{\psi^0}(\xi)| = O(h^{1/2+\varepsilon/2})$ uniformly in $\xi\in{\mathbb R},$ and the condition $\widehat{\psi^0}(\xi) = o (\sqrt{\xi})$ as $\xi \to 0$ follows from the fact that $\widehat{\psi^0}(0)=0.$ Finally, $\lim_{j\to \infty} 2 A((\psi^p_j)') = \|\cdot (\psi^0)'\|_{L_2({\mathbb R})}$ and $\cdot (\psi^0)' \in L_2({\mathbb R})$ yield (\ref{cond2}).
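The relation $\widehat{\psi^p_j}(k)=2^{-j/2}\widehat{\psi^0}(2^{-j}k)$ used above can be checked numerically. The sketch below does so for a Gaussian profile $\psi^0(x)=e^{-x^2/2}$ (not an admissible wavelet, chosen only because its transform is explicit), with the conventions $\widehat{g}(k)=\int_0^{2\pi}g(x)e^{-ikx}\,dx$ on the circle and $\widehat{g}(\xi)=\int_{{\mathbb R}} g(x)e^{-i\xi x}\,dx$ on the line; the paper's exact $2\pi$ normalization may differ by a constant factor, so these conventions are an assumption.

```python
import numpy as np

j = 3                                   # level; dilation factor s = 2^j
s = 2 ** j
M = 4096
x = np.arange(M) * 2 * np.pi / M        # periodic grid on [0, 2*pi)
dx = 2 * np.pi / M

def psi0(u):
    # Gaussian stand-in profile (not an admissible wavelet)
    return np.exp(-u**2 / 2)

def psi0hat(xi):
    # its transform under hat(g)(xi) = int g(x) exp(-i xi x) dx
    return np.sqrt(2 * np.pi) * np.exp(-xi**2 / 2)

# periodization psi^p_{j,0}(x) = 2^{j/2} sum_n psi0(2^j (x + 2 pi n))
psip = sum(2 ** (j / 2) * psi0(s * (x + 2 * np.pi * n)) for n in range(-3, 4))

# compare Fourier coefficients of the periodization with 2^{-j/2} psi0hat(2^{-j} k)
rel_err = 0.0
for k in range(-8, 9):
    lhs = np.sum(psip * np.exp(-1j * k * x)) * dx    # rectangle rule on the circle
    rhs = 2 ** (-j / 2) * psi0hat(k / s)
    rel_err = max(rel_err, abs(lhs - rhs) / abs(rhs))
assert rel_err < 1e-6
print(f"max relative error: {rel_err:.2e}")
```

The rectangle rule is spectrally accurate for smooth periodic integrands, so the agreement is essentially at machine precision.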
} It is clear that combining the result $\lim_{j\to\infty}UC_B(\psi^p_j) = UC_H(\psi^0)$ of \cite{prqurase03} and Theorem \ref{Battle}, we immediately get the inequality $\lim_{j \to \infty} UC_B(\psi^p_j)\geq 3/2$, and to do so we do not need condition (\ref{cond2}). However, the above inequality is proved in this way under the condition $((\psi^p_j)',\,\psi^p_j)_{L_{2,2\pi}} =0,$ while only the mild restriction $|((\psi^p_j)',\,\psi^p_j)_{L_{2,2\pi}}|\leq C \|\psi^p_j\|^2_{L_{2,2\pi}}$ is required in Theorem \ref{main}, cf. (\ref{cond3}). \textbf{Remark 2: Wavelet sequence generated by UEP.} \noindent Let $(\psi_j)_{j\in\z_+}$ be a wavelet sequence satisfying Theorem \ref{UEP} and $c_1 \leq \|\psi_j\|\leq c_2$, where $c_1>0$ and $c_2$ are absolute constants. Put $q_j=2^{j/2}.$ Then condition (\ref{cond1}) is fulfilled for $k \in \z$. In fact, by (\ref{con3}) and (\ref{con4}), we conclude that $ \widehat{\psi_j}(k)= e^{2\pi i 2^{-j-1} k} \mu_{k+2^j}^{j+1} \widehat{\varphi_{j+1}}(k), $ and it follows from (\ref{con4}) that $ |\mu_{k+2^j}^{j+1}|\leq \sqrt{2}. $ Therefore, $ |\widehat{\psi_j}(k)|\leq \sqrt{2} |\widehat{\varphi_{j+1}}(k)|. $ Using (\ref{con1}), we get (\ref{cond1}) for all $k \in \z$. Next, if $\psi_j$ is a trigonometric polynomial of degree less than $C_1 2^{j/2}$, where $C_1$ is an absolute constant, then condition (\ref{cond4}) is also fulfilled. Indeed, it follows from the Bernstein inequality that $ 2^{-j}\|\psi_j'\|^2 / \|\psi_j\|^2 \leq C^2_1. $ \textbf{Remark 3: Wavelet sequence constructed in \cite{LebPres14}.} \noindent Let $(\psi^a_j)_{j\in\n},$ $a>1,$ be the family of periodic wavelet sequences constructed in \cite{LebPres14}.
Then all conditions (\ref{cond1})--(\ref{cond6}) hold true for $q_j = j^{1/2}.$ Indeed, it is straightforward to see that $\widehat{\psi^a_j}(k) \asymp j^{-1} 2^{-j/2}$ for a fixed $k \in \z,$ $\|\psi^a_j\|^2 \asymp j^{-1/2} 2^{-j},$ $A((\psi^a_j)') \asymp j^{-1/2} 2^{-j}$, $\|(\psi^a_j)'\|^2 \asymp j^{1/2} 2^{-j},$ $A(\psi^a_j) \asymp j^{-3/2} 2^{-j},$ and $|B(\psi^a_j)| \asymp \sin(2 \pi 2^{-j-1})\, j^{-1/2} 2^{-j}$ as $j \to \infty.$ \section{On a particular minimization problem for the Heisenberg UC} One could draw an analogy between Theorem \ref{Battle} and Theorem \ref{main}. For $f\in L_2({\mathbb R})$, the exact conditions $c(\widehat{f})=0$ and $\int_{{\mathbb R}} f = \widehat{f}(0) = 0$ of Theorem \ref{Battle} correspond to the mild limit conditions (\ref{cond3}) and (\ref{cond1}), respectively. So one might expect a generalization of Theorem \ref{Battle} of the following form: if $\int_{{\mathbb R}} f = \e$ and $c(\widehat{f})=0$, then $UC_H(f)\geq \alpha(\e),$ where $\lim_{\e\to 0}\alpha(\e)=3/2.$ Unfortunately, it is impossible to generalize the proof of Theorem \ref{Battle} to this case. It turns out (Theorem \ref{nominfunc}) that there is no function satisfying the following minimization problem: $$ UC_H(f) \to \min;\ f \in L_2({\mathbb R}),\ \int_{{\mathbb R}} f = \varepsilon, \ \varepsilon >0,\ c(\widehat{f})=0. $$ In the case of Theorem \ref{Battle} ($\e=0$) such a function does exist. Finding an alternative proof is an open question. \begin{teo} \label{nominfunc} Let $f$ be a function such that $\ \cdot f, \ i f' \in L_2({\mathbb R}),$ let $f_0:=a^{1/2}f(a\cdot),$ where $a=(\|\cdot f\| / \|i f'\|)^{1/2}$, and let $c(\widehat{f_0})=0$.
Fix $\e\in\cn$ and suppose $\int_{{\mathbb R}}f_0(x)\,dx=\e.$ If $\e\neq 0$ and $\e\neq 2 \pi^{3/4}\frac{\sqrt{(2k)!}}{2^k k!}$ for all $k\in\n,$ then there is no function satisfying the following minimization problem: \begin{equation} \label{minprob} \left\{ \begin{array}{l} UC_H(f_0) \to \min,\\ \int_{{\mathbb R}} f_0 = \varepsilon, \ \ \varepsilon >0, \ \ \ c(\widehat{f_0})=0. \end{array} \right. \end{equation} If $\e = 2 \pi^{3/4}\frac{\sqrt{(2k)!}}{2^k k!}$ for some $k\in\n$, then the Hermite function \begin{equation} \label{Hermite} \phi_n(x)=\left(\frac{2^n n!}{2\sqrt{\pi}}\right)^{-1/2}(-1)^n e^{x^2/2} \frac{d^n}{dx^n}(e^{-x^2}), \end{equation} with $n=2k,$ minimizes the problem (\ref{minprob}), and $UC_H(\phi_{2k})=(4k+1)/2$. \end{teo} \textbf{Proof.} The case $\e=0$ is considered in \cite[p.~137--138]{Bat} (see Theorem \ref{Battle}). We exploit the ``variational'' idea from the proof of that theorem for an arbitrary parameter $\e$. Set $f_1(x):=f_0(x+x_{0f}),$ where $x_{0f}$ is the time centre of $f_0$; then the time and frequency centres of the function $f_1$ are equal to zero, and $\Delta_{f_0}=\Delta_{f_1},$ $\Delta_{\widehat{f_0}}=\Delta_{\widehat{f_1}},$ $\int_{{\mathbb R}}f_1 = \int_{{\mathbb R}} f_0.$ It is also clear that $UC_H$ is a homogeneous functional, that is, $UC_H(c f)=UC_H(f)$ for $c\in \cn \setminus \{0\}$.
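The Hermite-function facts used in Theorem \ref{nominfunc} (the normalization $(2\pi)^{-1}\int|\phi_n|^2=1$, the value of $\int_{{\mathbb R}}\phi_{2k}$, and $UC_H(\phi_n)=(2n+1)/2$) can be confirmed numerically; the sketch below does this for $k=0,1,2$ by direct quadrature (the grid parameters are arbitrary choices).

```python
import numpy as np
from math import factorial, pi, sqrt

def hermite(n, x):
    # physicists' Hermite polynomials via H_{m+1} = 2 x H_m - 2 m H_{m-1}
    if n == 0:
        return np.ones_like(x)
    hprev, h = np.ones_like(x), 2 * x
    for m in range(1, n):
        hprev, h = h, 2 * x * h - 2 * m * hprev
    return h

dx = 1e-3
x = np.arange(-12, 12, dx)
integral = lambda y: np.sum(y) * dx

results = []
for k in range(3):
    n = 2 * k
    c = (2**n * factorial(n) / (2 * sqrt(pi))) ** -0.5
    phi = c * hermite(n, x) * np.exp(-x**2 / 2)      # phi_n as in the theorem
    norm = integral(phi**2) / (2 * pi)               # should be 1
    mass = integral(phi)                             # should match the claimed value
    target = 2 * pi**0.75 * sqrt(factorial(2 * k)) / (2**k * factorial(k))
    dphi = np.gradient(phi, dx)
    uc = sqrt(integral(x**2 * phi**2) * integral(dphi**2)) / integral(phi**2)
    results.append((norm, mass, target, uc, (2 * n + 1) / 2))

assert all(abs(r[0] - 1) < 1e-6 for r in results)
assert all(abs(r[1] - r[2]) < 1e-6 for r in results)
assert all(abs(r[3] - r[4]) < 1e-3 for r in results)
print("Hermite checks passed")
```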
So without loss of generality we assume that the functions $f_0$ and $\widehat{f_0}$ have zero centres and $\|f_0\|=1.$ Then $UC_H$ takes the form $UC_H(f_0)=\|\cdot f_0\| \|i f'_0\|.$ Let us minimize the functional $\|\cdot f_0\|^2 \|i f'_0\|^2$ over functions $f_0\in L_2({\mathbb R})$ satisfying the constraints $\|f_0\|=1$ and $\int_{{\mathbb R}}f_0=\e.$ For the Lagrange function $$ \|\cdot f_0\|^2 \|i f'_0\|^2 + \lambda(\|f_0\|^2-1)+\kappa \left(\int_{{\mathbb R}}f_0 - \e\right) $$ we get the Euler--Lagrange equation \begin{equation} \label{dif_eq} \|i f'_0\|^2 x^2 f_0(x) - \|\cdot f_0\|^2 f''_0(x) + \lambda f_0(x)+ \frac12 \kappa = 0. \end{equation} By the definition of $f_0$ we get $\|i f'_0\| = a\|i f'\| = a^{-1}\|\cdot f\| = \|\cdot f_0\|=:\sqrt{\alpha/2}$. Then the equation is rewritten as $$ \frac12 \alpha (x^2 f_0(x)-f''_0(x)) +\lambda f_0(x) +\frac12 \kappa = 0, $$ where $\frac12 \alpha$ is the value of the desired functional. If $\kappa=0$, then the solutions are eigenfunctions of the Hamilton operator $H f=\frac12 (\cdot^2 f-f'')$, that is, the Hermite functions (\ref{Hermite}) normalized by the condition $\|\phi_n\|^2=(2\pi)^{-1}\int |\phi_n|^2 = 1.$ It can be proved by a fairly straightforward calculation that $$ \int_{{\mathbb R}}\phi_n=\left\{ \begin{array}{ll} 0, & n=2k+1, \\ 2 \pi^{3/4}\frac{\sqrt{(2k)!}}{2^k k!}, & n=2k, \end{array} \right. $$ and $a=(\|\cdot \phi_n\| / \|i \phi'_n\|)^{1/2} = 1$. If $\e\neq\int_{{\mathbb R}}\phi_n,$ then there are no solutions of the minimization problem satisfying the constraints. Otherwise, the Hermite functions (\ref{Hermite}) minimize the problem (\ref{minprob}) and $UC_H(\phi_n)= \|\cdot \phi_n\| \|i \phi_n'\| =(2n+1)/2.$ If $\kappa\neq 0,$ then we use the completeness of the system of Hermite functions: we expand the function $f_0$ into a series $\sum a_n\phi_n$ and substitute it into the equation.
As a result we get $$ \sum_{k=0}^{\infty} a_k (\alpha(k+1/2)+\lambda)\phi_k(x)+ \kappa/2 = 0. $$ The orthonormality of the system $\{\phi_n\}_{n\in\n}$ allows us to find the coefficients $$ a_k = - \frac12 \kappa \frac{1}{\alpha(k+1/2)+\lambda} \int_{{\mathbb R}} \phi_k. $$ Multiplying equation (\ref{dif_eq}) by $f_0$ and integrating over ${\mathbb R}$, we obtain a relationship between the parameters: $\alpha^2/2+\lambda + \kappa \e/2 = 0. $ Therefore, the solution of (\ref{dif_eq}) takes the form $$ f_0(x) = -\pi^{3/4} \frac{\kappa}{\alpha}\sum_{n=0}^{\infty} \frac{\sqrt{(2n)!}}{2^n n!}\frac{\phi_{2n}(x)}{2n+1/2-(\alpha+\kappa \e/\alpha)/2}. $$ The constraints give us the following equations: \begin{gather} \e=\int_{{\mathbb R}}f_0 = - 2 \pi^{3/2} \frac{\kappa}{\alpha}\sum_{n=0}^{\infty} \frac{(2n)!}{(2^n n!)^2}\frac{1}{2n+1/2-(\alpha+\kappa \e/\alpha)/2}, \nonumber \\ 1=\frac{1}{2\pi}\int_{{\mathbb R}}|f_0|^2 = \pi^{3/2}\frac{\kappa^2}{\alpha^2}\sum_{n=0}^{\infty} \frac{(2n)!}{(2^n n!)^2}\frac{1}{(2n+1/2-(\alpha+\kappa \e/\alpha)/2)^2}. \nonumber \end{gather} Using \cite[Chap.~12, Ex.~8]{WW} we rewrite the equations as \begin{gather} \e= - \pi^{3/2} \frac{\kappa}{\alpha}\Gamma\left(\frac12\right)\frac{\Gamma\left(\frac14-\frac14\left(\alpha+\frac{\kappa \e}{\alpha}\right)\right)}{\Gamma\left(\frac34-\frac14\left(\alpha+\frac{\kappa \e}{\alpha}\right)\right)}, \nonumber \\ 1= \frac{\pi^{1/2}}{2}\frac{\kappa^2}{\alpha^2}\Gamma\left(\frac12\right) \frac{d}{d(\alpha+\kappa \e/\alpha)}\frac{\Gamma\left(\frac14-\frac14\left(\alpha+\frac{\kappa \e}{\alpha}\right)\right)}{\Gamma\left(\frac34-\frac14\left(\alpha+\frac{\kappa \e}{\alpha}\right)\right)}.
\nonumber \end{gather} We introduce the notation $\beta:=\kappa/\alpha,$ $F(x):=\Gamma(1/2) \frac{\Gamma(1/4-x/4)}{\Gamma(3/4-x/4)}.$ Then the last equations take the form \begin{gather} \nonumber \left\{ \begin{array}{l} -\beta F(\alpha+\beta \varepsilon) = \pi^{-3/2} \varepsilon, \\ \beta^2 F'(\alpha+\beta \varepsilon) = 2 \pi^{-1/2}. \end{array} \right. \end{gather} We claim that this system has no solutions if $\varepsilon\neq 0$. Indeed, if $\varepsilon = 0,$ then $\alpha = 3,$ $\beta^2 = 2 \pi^{-1/2} (F'(3))^{-1}.$ (This is the case of Theorem \ref{Battle}.) The function $F$ is continuously differentiable and increasing in a neighborhood of $3$, and $F(3)=0.$ However, it follows from the equation $ -\beta F(\alpha+\beta \varepsilon) = \pi^{-3/2} \varepsilon $ that $F(3+0)<0$ and $F(3-0)>0$. So $F$ cannot be an increasing function. This contradiction means that in the case $\kappa \neq 0$ there is no solution of the minimization problem (\ref{minprob}). Theorem \ref{nominfunc} is proved. $\Diamond$ \section*{Acknowledgments} The work is supported by the RFBR, grant \#15-01-05796, and by Saint Petersburg State University, grant \#9.38.198.2015. \begin{thebibliography}{00} \bibitem{Balan} R.~{B}alan, An uncertainty inequality for wavelet sets, Appl. Comput. Harmon. Anal. 5 (1998) 106--108. \bibitem{Bat} G.~Battle, Heisenberg inequalities for wavelet states, Appl. Comput. Harmon. Anal. 4 (1997) 119--146. \bibitem{B} E.~{B}reitenberger, Uncertainty measures and uncertainty relations for angle observables, {F}ound. {P}hys. 15 (1985) 353--364. \bibitem{FolSit} G.B.~Folland, A.~Sitaram, The uncertainty principle: a mathematical survey, J. Fourier Anal. Appl. 3 (3) (1997) 207--238. \bibitem{GB} \v{Z}.~{G}imbutas, A.~{B}astys, {D}aubechies compactly supported wavelets with minimal Heisenberg boxes, Lith. Math. J. 35 (1995) 343--362. \bibitem{GT1} S.S.~Goh, K.M.~Teo, Extension principles for tight wavelet frames of periodic functions, Appl. Comput. Harmon. Anal. 25 (2008) 168--186. \bibitem{GL} T.N.T.~Goodman, S.L.~Lee, Asymptotic optimality in time-frequency localization of scaling functions and wavelets, in: Frontiers in Interpolation and Approximation, N.K. Govil, H.N. Mhaskar, R.N. Mohapatra, Z. Nashed, and J. Szabados (Eds.), (2007) 145--171. \bibitem{H} W.~Heisenberg, The actual concept of quantum theoretical kinematics and mechanics, Z. Phys. 43 (1927) 172--198. \bibitem{L08} E.A.~{L}ebedeva, {E}xponentially decaying wavelets with uncertainty constants uniformly bounded with respect to the smoothness parameter, Siberian Math. J. 49 (3) (2008) 457--473. \bibitem{L07} E.A.~Lebedeva, Minimization of the uncertainty constant of the family of Meyer wavelets, Math. Notes 81 (3-4) (2007) 489--495. \bibitem{L12} E.A.~Lebedeva, On the uncertainty principle for Meyer wavelet functions, Journal of Mathematical Sciences 182 (5) (2012) 656--662. \bibitem{L11} E.A.~{L}ebedeva, {Q}uasispline wavelets and uncertainty constants, Appl. Comput. Harmon. Anal. 30 (2011) 214--230. \bibitem{LebPres14} E.A.~{L}ebedeva, J.~Prestin, Periodic wavelet frames and time-frequency localization, Appl. Comput. Harmon. Anal. 37 (2) (2014) 347--359. \bibitem{NW} F.J.~{N}arcowich, J.D.~{W}ard, Wavelets associated with periodic basis functions, Appl. Comput. Harmon. Anal. 3 (1996) 40--56. \bibitem{N1} I.Ya.~Novikov, Modified Daubechies wavelets preserving localization with growth of smoothness, East J. Approximation 1 (3) (1995) 314--348. \bibitem{NPS} I.Ya.~{N}ovikov, V.Yu.~{P}rotasov, M.A.~{S}kopina, {W}avelet {T}heory, Translations of Mathematical Monographs 239, AMS, 2011. \bibitem{PQ} J.~Prestin, E.~Quak, Optimal functions for a periodic uncertainty principle and multiresolution analysis, Proc. Edinb. Math. Soc., II. Ser. 42 (1999) 225--242.
\bibitem{prqurase03} J.~{P}restin, E.~{Q}uak, H.~{R}auhut, K.~{S}elig, {O}n the connection of uncertainty principles for functions on the circle and on the real line, {J}. {F}ourier {A}nal. {A}ppl. 9 (2003) 387--409. \bibitem{R05} H.~{R}auhut, Best time localized trigonometric polynomials and wavelets, Adv. Comput. Math. 22 (2005) 1--20. \bibitem{Se} K.~{S}elig, {T}rigonometric {w}avelets and the {u}ncertainty {p}rinciple, in: Approximation Theory, M. W. M{\"u}ller, M. Felten, and D. H. Mache (Eds.), Math. Research, Vol. 86, Akademie Verlag, Berlin (1995) 293--304. \bibitem{WW} E. Whittaker, G. Watson, A Course of Modern Analysis, Cambridge Univ. Press, London, 2002. \end{thebibliography} \end{document}
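The two closed-form facts established in the minimization above, $\int_{\mathbb R}\phi_{2k}=2\pi^{3/4}\sqrt{(2k)!}/(2^k k!)$ and $UC_H(\phi_n)=(2n+1)/2$, admit a quick numerical sanity check. The sketch below is our own verification (not part of the paper): it assumes $\phi_n=\sqrt{2\pi}\,h_n$, where $h_n$ is the standard $L^2$-orthonormal Hermite function, so that $(2\pi)^{-1}\int|\phi_n|^2=1$ as in the normalization used above.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# NumPy 2.x renamed np.trapz to np.trapezoid.
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

def phi(n, x):
    """phi_n = sqrt(2*pi) * h_n, h_n the L^2-orthonormal Hermite function,
    so that (2*pi)^{-1} * int |phi_n|^2 = 1 (the text's normalization)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    h = hermval(x, c) * np.exp(-x**2 / 2)
    h /= math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return math.sqrt(2 * math.pi) * h

x = np.linspace(-20, 20, 200001)
for k in range(3):
    n = 2 * k
    f = phi(n, x)
    # claimed: int phi_{2k} = 2 * pi^{3/4} * sqrt((2k)!) / (2^k * k!)
    claimed = 2 * math.pi**0.75 * math.sqrt(math.factorial(2 * k)) \
        / (2**k * math.factorial(k))
    assert abs(trapz(f, x) - claimed) < 1e-5
    # claimed: UC_H(phi_n) = ||. phi_n|| * ||i phi_n'|| = (2n+1)/2
    x2 = trapz(x**2 * f**2, x) / (2 * math.pi)           # ||. phi_n||^2
    d2 = trapz(np.gradient(f, x)**2, x) / (2 * math.pi)  # ||i phi_n'||^2
    assert abs(math.sqrt(x2 * d2) - (2 * n + 1) / 2) < 1e-3
```

Both quantities match the claimed values for $n=0,2,4$; the odd-index integrals vanish by parity, consistent with the case $n=2k+1$ of the displayed formula.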
\begin{document} \begin{abstract} In this paper we study a variational Neumann problem for the higher order $s$-fractional Laplacian, with $s>1$. In the process we introduce some non local Neumann boundary conditions that appear in a natural way from a Gauss-like integration formula. \end{abstract} \maketitle \section{Introduction and results}\label{introdu} In this paper we introduce a natural Neumann problem for the higher-order fractional Laplacian $(-\Delta)^su$, $s>1$. Let us recall that when $0<s<1$ the operator is usually defined, for smooth functions, by means of the following principal value \begin{equation}\label{defoperador} (-\Delta)^s u(x):=c_{N,s}\,P.V.\,\int_{\mathbb{R}^N}\frac{u(x)-u(y)}{|x-y|^{N+2s}}\,dy,\quad 0<s<1. \end{equation} Here, \begin{equation}\label{eq:cnsig} c_{N,s}:=\left(\int_{\mathbb{R}^N}\frac{1-\cos(\xi_1)}{|\xi|^{N+2s}}\,d\xi\right)^{-1}=2^{2s}\pi^{-\frac{N}{2}}\frac{\Gamma(\frac{N+2s}{2})}{|\Gamma(-s)|},\end{equation} is a normalization constant. See for example \cite{BMS, DnPV, DMPS}. It is well-known that for functions, say, in the Schwartz class $\mathcal{S}(\mathbb{R}^N)$ this operator has an equivalent definition via the Fourier transform, which is also valid when $s>1$. More precisely, \begin{equation}\label{eq:sajhhajhsj} \widehat{(-\Delta)^{s}u}(\xi)=|\xi|^{2s}\widehat{u}(\xi),\quad \xi\in\mathbb{R}^{N},\, s>0. \end{equation} From now on, for the sake of simplicity, we will consider the higher order fractional Laplacian with $s=1+\sigma$, $0<\sigma<1$, so that $s\in(1,2)$. Following the expression given in \eqref{defoperador}, in this case, for $u$ smooth, we can also define the operator as \begin{equation}\label{eq:defoper} (-\Delta)^{s}u(x)=c_{N,\sigma}\, P.V.\int_{\mathbb R^N}\frac{(-\Delta u(x))-(-\Delta u(y))}{|x-y|^{N+2\sigma}}\,dy,\quad 1<s<2, \end{equation} where $c_{N,\sigma}$ is the normalization constant given in \eqref{eq:cnsig}.
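The equality between the inverse of the cosine integral and the closed form $4^{s}\,\Gamma\!\big(\tfrac{N+2s}{2}\big)\big/\big(\pi^{N/2}|\Gamma(-s)|\big)$ can be checked numerically. The sketch below is our own verification in dimension $N=1$ (truncation level `T` and grid size `n` are our choices, not from the paper); the tail of the non-oscillating part of the integrand is added analytically.

```python
import math
import numpy as np

# NumPy 2.x renamed np.trapz to np.trapezoid.
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

def c_closed(N, s):
    # closed form 4^s * Gamma((N+2s)/2) / (pi^{N/2} * |Gamma(-s)|)
    return 4.0**s * math.gamma((N + 2 * s) / 2) \
        / (math.pi**(N / 2) * abs(math.gamma(-s)))

def c_numeric_1d(s, T=400.0, n=4_000_000):
    # c_{1,s}^{-1} = int_R (1 - cos xi) / |xi|^{1+2s} dxi: quadrature on
    # (0, T] (factor 2 by symmetry) plus the tail of the "1" part; the
    # oscillating cos tail is only O(T^{-1-2s}) and is neglected.
    xi = np.linspace(T / n, T, n)
    body = 2 * trapz((1 - np.cos(xi)) / xi**(1 + 2 * s), xi)
    tail = 2 * T**(-2 * s) / (2 * s)
    return 1.0 / (body + tail)

for s in (0.3, 0.5, 0.7):
    assert abs(c_numeric_1d(s) / c_closed(1, s) - 1) < 1e-2
```

For $s=1/2$ the check is exact in closed form as well: $\int_{\mathbb R}(1-\cos\xi)/\xi^2\,d\xi=\pi$, while the closed form gives $c_{1,1/2}=1/\pi$.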
If $0<s<1$ there are many results regarding existence, regularity and qualitative properties of solutions of nonlocal problems that involve the operator $(-\Delta)^s$ (see \cite{BDGQ, BDGQ2, DSV, G0, RoS0, RoS1} and the references therein; this list of publications is far from being complete). The study of the non local higher order operator, compared to the better understood lower order non local operator (i.e., $s\in (0,1)$), has not been entirely developed yet. In the higher order case, for example, the general lack of a maximum principle introduces some new difficulties. Some results on this subject, such as existence and representation of solutions, integration by parts, regularity, best Sobolev constants, maximum principles, Pohozaev identities and spectral results, among others, can be found in the list of papers~\cite{AJS, CT, DG, G0, G1, MYZ, RoS,Y} or in the corresponding bibliography of each of them. \ As concerns the Neumann problem for the fractional Laplacian $(-\Delta)^s$ in the case $s\in (0,1)$, and for other similar $s$-nonlocal operators, different approaches have been developed in the literature; see for instance \cite{BCGJ, BGJ, BBC, CK, CERW, CERW1, CERW2, DRoV, G, MPV, SV}. The reader may find a comparison between some of these different models in \cite{DRoV}. We also notice here that all the Neumann conditions presented in the previous works regarding $(-\Delta)^s$, $0<s<1$, are easily seen to approach the classical one ${\partial_\nu u}$ when $s\to 1$. Nevertheless, the one presented in \cite{DRoV} by S. Dipierro, X. Ros-Oton and E. Valdinoci allows us to work in a variational framework and, as the authors describe in Section 2 of the aforementioned paper \cite{DRoV}, it also has a natural probabilistic interpretation.
To be more precise, the authors introduce and study the existence and uniqueness of solutions of the following Neumann problem for the fractional ($s\in (0,1)$) Laplacian \begin{equation}\label{eq:ESX} \begin{cases} (-\Delta)^s u = f(x) &\text{in}\,\,\Omega\\ \mathcal N_s u =g &\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega, \end{cases}\end{equation} where $f,g$ are appropriate problem data. Here, the operator $\mathcal N_s v$ denotes the nonlocal normal derivative defined, for smooth functions, by \begin{equation}\label{eq:neumannenrico} \mathcal N_s v(x):=c_{N,s}\int_{\Omega}\frac{v(x)-v(y)}{|x-y|^{N+2s}}\,dy, \qquad x\in \mathbb{R}^N\setminus\overline \Omega. \end{equation} This condition can be seen as the natural one to have the associated Gauss and Green formulas that allow one to use a variational approach in the analysis of problem \eqref{eq:ESX}, similar to the local Neumann problem $-\Delta u = f(x)$ in $\Omega$, with $\partial_{\nu} u= g $ on $\partial \Omega$. \ In the case of higher order operators, even in the local setting, the situation is in general more involved, as one can see in, for example, \cite{LA, BL, C1, V}. In particular, in \cite{LA}, by using a {\em Biharmonic Green Formula}, the authors define the Neumann problem for the biharmonic operator $\Delta ^2 u$ and the natural Neumann boundary condition that, in dimension $N=2$, {arises in the study of the bending of free plates.} As far as we know, the problem of establishing a reasonable Neumann condition associated with $(-\Delta)^su$, $s>1$, has not been addressed yet. Therefore, the aim of this work is to introduce a Neumann problem for the higher order fractional $s$-Laplacian, $s\in (1,2)$, and to study the problem $$ \begin{cases} (-\Delta)^s u = f(x) &\text{in}\,\,\Omega,\quad 1<s<2\\ \mbox{\it{s-Neumann conditions }} u =g&\text{in}\,\,\mathbb{R}^N\setminus\overline{\Omega}.
\end{cases} $$ Here, and throughout the paper, $\Omega$ denotes a smooth bounded domain, and our approach is to look for a variational formulation of the problem. Using a similar integration by parts as in the lower order case $0<s<1$, we can see that for a smooth function $u$ one has \begin{equation}\nonumber \int_{\Omega}(-\Delta)^su\,dx=-\int_{\mathbb R^N\setminus \Omega} \mathcal N_\sigma (-\Delta u) \,dx, \end{equation} where $$\mathcal N_\sigma(-\Delta u)(x)=(-\Delta)_{\Omega}^\sigma (-\Delta u)(x)=\int_{\Omega}\frac{{(-\Delta u)(x)-(-\Delta u)(y)}}{|x-y|^{N+2\sigma}}dy,\, x\in\mathbb{R}^{N}\setminus\overline{\Omega}.$$ However, in order to obtain a Green formula suited to a variational formulation of the problem, it will be necessary to split this last condition into two parts. Following this path, and via a {\em non local Green-type formula}, we are led to define two non local operators $\mathcal N^1_\sigma, \mathcal N^2_\sigma $ that will play the role of the {\it s-Neumann conditions} for our problem. More precisely, we will study the following \begin{equation}\label{eq:p}\tag{$\mathcal P$} \begin{cases} (-\Delta)^s u = f(x) &\text{in}\,\,\Omega, \quad 1<s<2\\ \mathcal N_\sigma^1 u =g_1&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega=: \mathcal C\overline\Omega\\ \mathcal N_\sigma^2 u=g_2 &\text{on}\,\,\partial\Omega, \end{cases}\end{equation} where $f$, $g_1$ and $g_2$ satisfy some suitable hypotheses that we will specify below, and {$\Omega\subset\mathbb R^N$ is a bounded $C^{1,1}$ domain (unless we specify something different, as, for example, in Lemma \ref{regional} below)}.
The definition of the operators $\mathcal{N}_{\sigma}^{i}$, $i=1,\, 2$, for suitable $v\in \mathcal{S}(\mathbb{R}^{N})$ will come in a natural way from the integration by parts formula stated below in Theorem \ref{pro:intbyparts2}, as follows \begin{equation}\label{eq:neumann1} \mathcal N^1_\sigma v(x){:=-\operatorname{div}(-\Delta)^{\sigma}_{\Omega}(\nabla v)}(x), \qquad x\in \mathbb{R}^N\setminus\overline \Omega, \end{equation} and \begin{equation}\label{eq:neumann2} \mathcal N^2_\sigma v(x){:=(-\Delta)^{\sigma}_{\mathbb{R}^N\setminus \Omega}(\nabla v)(x)\cdot \nu}, \qquad x\in \partial \Omega, \end{equation} where $\nu$ is the outer unit normal field to $\partial \Omega$. Also, $(-\Delta)^{\sigma}_{\mathcal{A}}w$ denotes the regional fractional Laplacian that, for an open set $\mathcal A\subset \mathbb R^N$ and regular functions $w$, is defined by \begin{equation}\label{eq:numebehebqfgwdg} (-\Delta)^\sigma_{\mathcal A} w (x):=c_{N,\sigma}\,\lim_{\varepsilon \rightarrow 0^+}\,\int_{\mathcal A\setminus B_\varepsilon(x)}\frac{w(x)-w(y)}{|x-y|^{N+2\sigma}}\,dy,\quad {x\in\mathbb{R}^{N}}\setminus \partial\mathcal A, \end{equation} where $c_{N,\sigma}$ is defined in \eqref{eq:cnsig}. {We now give some remarks about the regional operator (see also \cite{Mou} and the references therein)}. First of all, we notice that, as we will see, the operator may not be {pointwise well defined} for $x\in \partial\mathcal A$. {For a detailed explanation of the conditions under which the pointwise definition up to the boundary can be considered see, for instance, \cite[Theorem 5.3]{Q-YM}.} Nonetheless, we observe that the principal value in the previous definition is not needed when $x\in \mathbb{R}^N\setminus\overline{\mathcal A}$ if $w$ is sufficiently regular, say for instance $w\in\mathcal{S}(\mathbb{R}^{N})$. The same is true if $x\in\mathcal{A}$ and $\sigma<1/2$.
However, if $x\in\mathcal A$ and $\sigma\geq 1/2$, even if $w\in\mathcal{S}(\mathbb{R}^{N})$, the principal value is required. In fact, if $x\in\mathcal A$, denoting by $\rho(x)=dist(x,\partial\mathcal A)$ and $B_x=B_{\rho(x)}(x)$, then \begin{equation}\label{drexler} (-\Delta)_{\mathcal A}^\sigma w(x)= c_{N,\sigma}\left(\int_{\mathcal A\setminus B_x}\frac{w(x)-w(y)}{|x-y|^{N+2\sigma}} dy+ \lim_{\epsilon\to 0}\int_{B_x\setminus B_\epsilon(x)}\frac{w(x)-w(y)}{|x-y|^{N+2\sigma}}dy\right). \end{equation} Using now that $B_x\setminus B_\epsilon(x)$ is a symmetric domain around $x$, it follows that $$\lim_{\epsilon\to 0}\int_{B_x\setminus B_\epsilon(x)}\frac{w(x)-w(y)}{|x-y|^{N+2\sigma}}dy= \lim_{\epsilon\to 0}\int_{B_x\setminus B_\epsilon(x)}\frac{w(x)-w(y)-\nabla w(x)(x-y)}{|x-y|^{N+2\sigma}}dy.$$ Since the previous integral is absolutely convergent, for example if $w\in \mathcal{C}^{1,1}$, from \eqref{drexler} we get that, when $\sigma\geq 1/2$, \begin{equation}\label{drexler2} (-\Delta)_{\mathcal A}^\sigma w(x)=c_{N,\sigma} \begin{cases} \displaystyle \int_{\mathcal A\setminus B_x}\frac{w(x)-w(y)}{|x-y|^{N+2\sigma}} dy+\int_{B_x}\frac{w(x)-w(y)-\nabla w(x)(x-y)}{|x-y|^{N+2\sigma}}dy,&\text{if}\,\,x\in \mathcal A,\\ \\ \displaystyle\int_{\mathcal A} \frac{w(x)-w(y)}{|x-y|^{N+2\sigma}}dy,&\text{if}\,\,x\in\mathbb{R}^N\setminus\overline{\mathcal A}. \end{cases} \end{equation} {Nevertheless, according to Theorem B in \cite{Mou}, the operator defined by \eqref{eq:neumann2} can be understood in the trace sense.
It will be considered in this way hereafter.} \ Before stating the main result of this work we introduce the following notation and definitions: \begin{definition}\label{def:affinfunctions} By $\mathcal P_{1}(\mathbb R^N)$ we denote the vector space of all polynomials of degree at most one with real coefficients, that is, \[\mathcal P_{1}(\mathbb R^N)=\left\{p(x):\mathbb R^N\rightarrow \mathbb R\,|\ p(x)= c_0+ (c,x),\,\, \text{with}\,\, c_0\in \mathbb R\,\, \text{and}\,\,c,x \in \mathbb R^N\right\},\] where $(\cdot, \cdot):\mathbb R^N\times \mathbb R^N \rightarrow \mathbb R$ represents the Euclidean scalar product in $\mathbb R^N$. \end{definition} We also define $\dot{H}^s(\Omega)$ as the class of functions given by \begin{equation}\label{spacebase} \dot{H}^s(\Omega)=\left\{\, u:\mathbb{R}^N\rightarrow\mathbb{R}\,|\, u \hbox{ weakly differentiable, such that } \, D(u)<\infty \right\},\,\, \end{equation} where \begin{equation*} D(u):=\sqrt{ \iint_{Q(\Omega)} \displaystyle\frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}dxdy}, \end{equation*} and \begin{equation}\nonumber Q(\Omega):=\mathbb{R}^{2N}\setminus(\mathbb{R}^N\setminus \Omega)^2. \end{equation} Notice that $\mathcal P_{1}(\mathbb R^N)\subset \dot{H}^s(\Omega)$. Next we will define the class of admissible data. Let $g_1\in {L^1( \mathbb{R}^N\setminus\Omega, |x|^2\, dx )}{\cap L^1(\mathbb{R}^N\setminus\Omega)}$. Associated to $g_1$ we consider the positive measure in $\mathbb{R}^N$, absolutely continuous with respect to the Lebesgue measure, defined by \begin{equation}\label{dmu} d\mu_{g_1}=(\chi_\Omega+|g_1|\chi_{\mathbb{R}^N\setminus\Omega})dx, \end{equation} and the class of functions \begin{equation}\label{ortogonal} H^{s,0}_{g_1}(\Omega):=\Big\{u\in \dot{H}^s(\Omega):\, \int_{\mathbb{R}^{N}}{u p\, d\mu_{g_1}}=0,\, {\forall} p\in\mathcal{P}_{1}(\mathbb{R}^{N})\Big\}.
\end{equation} For the associated measure $d\mu_{g_1}$ we consider the following Rayleigh quotient \begin{equation}\label{espectral} \lambda_1(g_1)=\inf_{u\in H^{s,0}_{g_1}, \, u\ne 0}\displaystyle\frac{\displaystyle\iint_{Q(\Omega)} \frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}dxdy}{\displaystyle\int_{\mathbb{R}^N}u^2d\mu_{g_1}}. \end{equation} \begin{definition}\label{admissible}\textit{($(\mathcal A_{(f,g_1,g_2)})$ assumptions).} We say that $(f,g_1,g_2)$ is an admissible data triplet if \begin{enumerate} \item $f\in L^2(\Omega)$, \item $g_1\in {L^1( \mathbb{R}^N\setminus\Omega, |x|^2\, dx )}{\cap L^1(\mathbb{R}^N\setminus\Omega)}$, and the corresponding measure $d\mu_{g_1}$ is such that the spectral value $\lambda_1(g_1)$ defined by \eqref{espectral} is strictly positive, \item $g_2\in L^2(\partial \Omega)$. \end{enumerate} \end{definition} As a direct consequence of the definition, given an admissible $g_1$ we have that $$\int_{\mathbb{R}^N}u^2d\mu_{g_1}<+\infty, \hbox{ for all } u\in H^{s,0}_{g_1}(\Omega).$$ Also, by the integrability hypotheses on $g_1$, one has $$\int_{\mathbb{R}^N}p^2d\mu_{g_1}<+\infty, \hbox{ for all } p\in \mathcal P_{1}(\mathbb R^N).$$ \begin{example} Every compactly supported function $g_1$ such that $g_1\in L^p(\mathbb{R}^N\setminus\Omega)$ with $p>\frac N2$ satisfies condition $(2)$ in Definition \ref{admissible} (see Lemma \ref{equivalencia} below). \end{example} Now we are ready to state the main result of the paper: \begin{theorem}\label{main} Let $\Omega\subset\mathbb R^N$ be a bounded $C^{1,1}$ domain and let us suppose that the assumptions $(\mathcal A_{(f,g_1,g_2)})$ hold.
Then, the problem \eqref{eq:p} has a weak solution (in the sense of Definition \ref{def:weaksolution}) if and only if the following compatibility condition holds \begin{equation}\label{eq:unodiez} \int_{\Omega} f p \, dx +\int_{\mathbb R^N\setminus \Omega}g_1p \, dx+\int_{\partial \Omega}g_2 p\, dS=0, \qquad \text{for all }\,\, p\in \mathcal P_1(\mathbb R^N). \end{equation} Moreover, if \eqref{eq:unodiez} holds, the solution is unique up to an affine function $p\in \mathcal P_1(\mathbb R^N).$ \end{theorem} \ The paper is organized as follows: in Section 2 we present the integration by parts formula that is the key point in understanding the variational structure of the problem \eqref{eq:p}. In Section 3 we give some preliminaries related to the functional framework associated with problem \eqref{eq:p}, and we introduce the proper notion of solution that will be used throughout this work. Section 4 deals with the proof of Theorem \ref{main}. In Section 5 we give the complete description of the structure of the eigenvalues and eigenfunctions of \eqref{eq:p}. Finally, in Section 6 we briefly comment on other problems and results related to the one studied here. \ Throughout the paper, generic numerical constants will be denoted by $C$, in some cases with a subscript and/or a superscript, and will be allowed to vary within a single line or formula. \section{Computations in $\mathbb{R}^N$ and a motivation of the problem \eqref{eq:p}} The main objective of this section is to prove a new integration by parts formula associated with $(-\Delta)^s$, $1<s<2$. In the sequel, by $ Q(\Omega)$ we mean \begin{equation}\nonumber Q(\Omega):=\mathbb{R}^{2N}\setminus(\mathbb{R}^N\setminus \Omega)^2. \end{equation} First of all, we need the following result, which allows us to write the fractional operator in divergence form.
\begin{proposition}\label{pro:equivfraclapl} Let $u\in \mathcal{S}(\mathbb{R}^{N})$ and $s=m+\sigma$ with $m\in\mathbb N$ and $\sigma \in (0,1)$. Then the operator $(-\Delta)^s u$ can be expressed in one of the following ways \begin{equation}\label{eq:ksakjasjkajahdshhiohio} (-\Delta)^su=-\Delta^m\big((-\Delta)^\sigma u\big)=(-\Delta)^\sigma \big(-\Delta^m u\big)= -\operatorname{div}(-\Delta)^{\frac{m-1}{2}}(-\Delta)^\sigma (-\Delta)^{\frac{m-1}{2}} \nabla u, \end{equation} if $m$ is odd, or \begin{equation}\nonumber (-\Delta)^su=-\Delta^m\big((-\Delta)^\sigma u\big)=(-\Delta)^\sigma \big(-\Delta^m u\big)= (-\Delta)^{\frac{m}{2}}(-\Delta)^\sigma (-\Delta)^{\frac{m}{2}} u, \end{equation} if $m$ is even. \end{proposition} \begin{proof} It is sufficient to use the Fourier transform $\mathscr F (\cdot)$ and the multiplicative semigroup property. We prove the last equality in \eqref{eq:ksakjasjkajahdshhiohio}; the others follow in the same way. \begin{equation}\label{eq:gunsandroses} -\operatorname{div}(-\Delta)^{\frac{m-1}{2}}(-\Delta)^\sigma (-\Delta)^{\frac{m-1}{2}} \nabla u=-\sum_{j=1}^N\partial_j(-\Delta)^{\frac{m-1}{2}}(-\Delta)^\sigma (-\Delta)^{\frac{m-1}{2}} \partial_j u. \end{equation} Applying the Fourier transform in \eqref{eq:gunsandroses}, we obtain \begin{eqnarray}\label{eq:sakdhjdhiuiu} &&\mathscr F\big (-\operatorname{div}(-\Delta)^{\frac{m-1}{2}}(-\Delta)^\sigma (-\Delta)^{\frac{m-1}{2}} \nabla u \big)= - \sum_{j=1}^N (i\xi_j)|\xi|^{(m-1)}|\xi|^{2\sigma}|\xi|^{(m-1)}(i\xi_j) \mathscr F u\\\nonumber &&=|\xi|^{2(m+\sigma)}\mathscr F u. \end{eqnarray} Recalling \eqref{eq:sajhhajhsj}, from \eqref{eq:sakdhjdhiuiu} we deduce \begin{equation}\nonumber (-\Delta)^su=\mathscr F^{-1}\Big(|\xi|^{2(m+\sigma)}\mathscr F u\Big)= -\operatorname{div}(-\Delta)^{\frac{m-1}{2}}(-\Delta)^\sigma (-\Delta)^{\frac{m-1}{2}} \nabla u.
\end{equation} \end{proof} \subsection{Integration by parts formula} In this section we prove different integration formulas that will be essential to give a variational formulation of the Neumann boundary conditions. To simplify the next results, recalling \eqref{eq:numebehebqfgwdg}, for $u\in \mathcal{S}(\mathbb{R}^{N})$ and $\Omega$ {a smooth domain,} we can write \[ (-\Delta)^{\sigma}u=(-\Delta)^{\sigma}_{\Omega}u+(-\Delta)^{\sigma}_{\mathbb{R}^N\setminus \overline{\Omega}}u, \quad \mbox{a.e.} \] The operators $(-\Delta)^{\sigma}_{\Omega}u$ and $ (-\Delta)^{\sigma}_{\mathbb{R}^N\setminus \overline{\Omega}}u$ are the regional $\sigma$-Laplacians for $\Omega$ and $\mathbb{R}^N\setminus \overline{\Omega}$, respectively. We refer for instance to \cite{DnPV}, \cite{Guan}, \cite{Mou} and the references therein for the properties of the regional fractional Laplacian. For the reader's convenience we include the following result, which will be used in the next calculations. \begin{lemma}\label{regional} Let $\Omega$ be a $\mathcal{C}^{1,1}$ domain, possibly unbounded, whose boundary $\partial\Omega$ is a compact set. Then for all $u\in \mathcal{S}(\mathbb{R}^{N})$, $$\int_\Omega (-\Delta)^\sigma_\Omega u(x) dx=0.$$ \end{lemma} \begin{proof} Assume first that $\Omega$ is bounded; if $0<\sigma<\frac 12$ the result is obvious, given that the function $$G(x,y)=\frac{u(x)-u(y)}{|x-y|^{N+2\sigma}}\in L^1(\Omega\times\Omega),$$ and $G(x,y)=-G(y,x)$. We consider now the case $\frac 12\le \sigma<1$, in which the principal value is present.
Consider, for $\epsilon>0$ and $x\in\Omega$, $$f_\epsilon(x)= \int_{\Omega\setminus B_\epsilon(x)}\frac{u(x)-u(y)}{|x-y|^{N+2\sigma}}dy.$$ If we are able to find $h(x)\in L^1(\Omega)$ such that $|f_\epsilon(x)|\le h(x)$, $x\in \Omega$, then the result follows by the Dominated Convergence Theorem; indeed, $$\int_\Omega (-\Delta)^\sigma_\Omega u(x) dx=\int_\Omega \lim_{\epsilon \to 0} f_\epsilon(x)dx=\lim_{\epsilon \to 0}\iint_{\Omega\times\Omega\setminus\{(x,y)\,|\, |x-y|<\epsilon\}}\left(\frac{u(x)-u(y)}{|x-y|^{N+2\sigma}}\right)dxdy=0$$ by the antisymmetry, as above. To find a function $h(x)\in L^1(\Omega)$ dominating the family $f_\epsilon(x)$, fix $x\in \Omega$. Define $\rho(x)=dist(x,\partial\Omega)$, $B_x=B_{\rho(x)}(x)$, and consider first the case $0<\epsilon<\rho(x)$. Then $$f_\epsilon(x)=\int_{\Omega\setminus B_x}\frac{u(x)-u(y)}{|x-y|^{N+2\sigma}} dy+ \int_{B_x\setminus B_\epsilon(x)}\frac{u(x)-u(y)}{|x-y|^{N+2\sigma}}dy.$$ Now, by antisymmetry we find that $$\left|\int_{B_x\setminus B_\epsilon(x)}\frac{u(x)-u(y)}{|x-y|^{N+2\sigma}}dy\right| \le \int_{B_x}\left|\frac{u(x)-u(y)-\nabla u(x)(x-y)}{|x-y|^{N+2\sigma}}\right|dy,$$ where the last term has a quadratic cancellation and becomes a term in $L^1(\Omega)$. Finally, we estimate the first term as follows. Take $R=2\mbox{diam}(\Omega)$; then $$\left|\int_{\Omega\setminus B_x}\frac{u(x)-u(y)}{|x-y|^{N+2\sigma}} dy\right|\le \int_{\Omega\setminus B_x}\left|\frac{u(x)-u(y)}{|x-y|^{N+2\sigma}}\right| dy \le C_1 \int_{B_R(x)\setminus B_x}\frac{dy}{|x-y|^{N+2\sigma-1}}\le C_2 \int_{\rho(x)}^R \frac{dt}{t^{2\sigma}}. $$ The case $ \epsilon\ge\rho(x)$ is simpler, since then $\Omega\setminus B_x\supset \Omega\setminus B_\epsilon(x)$. Summarizing, $$|f_\epsilon(x)|\le\begin{cases} O(1),\quad 0<\sigma<\frac 12\\ -\log \rho(x)+O(1),\quad \sigma=\frac 12\\ \displaystyle\frac 1{\rho(x)^{2\sigma-1}}+O(1), \quad \frac 12<\sigma <1.
\end{cases} $$ If $\Omega$ is unbounded, inside a ball containing the boundary we reproduce the same calculations as in the bounded case, and outside we take into account the decay of the kernel, that is, $$|(-\Delta)_\Omega^\sigma u(x)|\le \frac{C}{|x|^{N+2\sigma}}.$$ Then we apply again the Dominated Convergence Theorem to conclude. \end{proof} Now we can establish the following \begin{proposition}\label{pro:intbyparts1} Let $u\in \mathcal{S}(\mathbb{R}^{N})$, $s=1+\sigma$, and let $\Omega\subseteq\mathbb{R}^{N}$ be a smooth domain, possibly unbounded, with compact boundary. Then \begin{equation}\nonumber \int_{\Omega}(-\Delta)^su\,dx=-\int_{\mathbb R^N\setminus \Omega} \mathcal N_\sigma (-\Delta u) \,dx, \end{equation} where $$\mathcal N_\sigma(-\Delta u)(x)=(-\Delta)_{\Omega}^\sigma (-\Delta u)(x)=\int_{\Omega}\frac{{(-\Delta u(x))-(-\Delta u(y))}}{|x-y|^{N+2\sigma}}dy,\, x\in \mathbb R^N\setminus \overline{\Omega}.$$ \end{proposition} \begin{proof} For $u\in \mathcal{S}(\mathbb{R}^{N})$ we note that $(-\Delta)^s u$ is well defined in all of $\mathbb R^N$, and actually there exists a positive constant $C=C(N, \sigma,\|\Delta u\|_{L^{\infty}(\mathbb{R}^N)})$ such that $|(-\Delta)^s u|\leq C$. By direct computations we obtain \begin{equation}\label{eq:c1} \begin{array}{lll} &\displaystyle \int_\Omega (-\Delta)^s u\,dx=c_{N,\sigma}\int_\Omega P. V.
\int_{\mathbb R^N}\frac{(-\Delta u(x))-(-\Delta u(y))}{|x-y|^{N+2\sigma}}\,dy\,dx\\ &\displaystyle =c_{N,\sigma}\int_\Omega \int_{\mathbb R^N\setminus \Omega}\frac{(-\Delta u(x))-(-\Delta u(y))}{|x-y|^{N+2\sigma}}\,dy\,dx\\ &\displaystyle =c_{N,\sigma}\int_{\mathbb R^N\setminus \Omega}\int_\Omega \frac{(-\Delta u(x))-(-\Delta u(y))}{|x-y|^{N+2\sigma}}\,dx\,dy\\ &\displaystyle=-\int_{\mathbb R^N\setminus \Omega} \mathcal N_\sigma (-\Delta u)\,dy, \end{array} \end{equation} where in \eqref{eq:c1} we have used Lemma \ref{regional}, which gives $$ \int_\Omega P. V. \int_\Omega \frac{(-\Delta u(x))-(-\Delta u(y))}{|x-y|^{N+2\sigma}}\,dy\,dx=0. $$ \end{proof} We now show some {\it calculation rules} that will be needed later. \begin{lemma}\label{lem:primolemmaintperpar} Let $u\in \mathcal{S}(\mathbb{R}^{N})$ and let $\Omega\subseteq\mathbb{R}^{N}$ be a smooth domain with compact boundary. Then, for every $0<\sigma<1$, we have \begin{itemize} \item [$(i)$] \begin{equation*} \int_{\mathbb{R}^N\setminus \Omega} (-\Delta)^{\sigma}_{\Omega}u \,dx=-\int_{\Omega} (-\Delta)^{\sigma}_{\mathbb{R}^N\setminus \Omega}u \,dx, \end{equation*} \item [$(ii)$] \begin{equation*} \int_{\Omega}(-\Delta)^{s}u\,dx=-\int_{\mathbb{R}^N\setminus \Omega}(-\Delta)^{s}u\,dx,\quad s=1+\sigma. \end{equation*} \end{itemize} \end{lemma} \begin{proof} To prove $(i)$ it is sufficient to apply Fubini's theorem, and $(ii)$ follows by using Proposition \ref{pro:intbyparts1} and Lemma \ref{regional}. \end{proof} Thanks to Lemma \ref{lem:primolemmaintperpar}, we have the following result, which will be needed to prove the main theorem of the present work.
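The Fourier-multiplier identity behind Proposition \ref{pro:equivfraclapl} (case $m=1$: $(-\Delta)^{s}=-\operatorname{div}\,(-\Delta)^{\sigma}\nabla$, since $-(i\xi_j)|\xi|^{2\sigma}(i\xi_j)$ sums to $|\xi|^{2+2\sigma}$) is easy to check numerically. The sketch below is our own discrete verification via the FFT on a periodic grid (grid size, box length and the test function are our assumptions, not part of the paper's argument):

```python
import numpy as np

# Periodic grid and angular Fourier frequencies (assumed setup).
n, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

sigma = 0.4
s = 1 + sigma
u = np.exp(-x**2)          # rapidly decaying test function
uh = np.fft.fft(u)

# Direct definition via the multiplier |xi|^{2s}.
direct = np.fft.ifft(np.abs(xi)**(2 * s) * uh).real

# Composition -div (-Delta)^sigma grad:
# multipliers (i xi), then |xi|^{2 sigma}, then -(i xi).
grad = np.fft.ifft(1j * xi * uh)
frac = np.fft.ifft(np.abs(xi)**(2 * sigma) * np.fft.fft(grad))
composed = -np.fft.ifft(1j * xi * np.fft.fft(frac)).real

assert np.max(np.abs(direct - composed)) < 1e-8
```

On the Fourier side the three factors multiply to exactly $|\xi|^{2s}$, which is the multiplicative semigroup property used in the proof; the two discretizations therefore agree to machine precision.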
\begin{proposition}\label{pro:cuatrodiec} Let $p\in \mathcal P_1(\mathbb R^N) $ and let $u\in \mathcal{S}(\mathbb{R}^{N})$ be such that \begin{equation}\label{eq:sagaghagdvjhgsjd} \int_{\mathbb{R}^N\setminus \Omega}|\operatorname{div}\big((-\Delta)_{ \Omega}^\sigma \nabla u(x)\big)\, p(x)|\,dx<+\infty. \end{equation} Then \begin{equation*} \int_{\Omega}p\, (-\Delta)^su \,dx= -\int_{\mathbb{R}^N\setminus \Omega} p\, \mathcal N^1_\sigma u \,dx -\int_{\partial \Omega} p\, \mathcal{N}^2_\sigma u \,d S. \end{equation*} \end{proposition} \begin{proof} If $p\in \mathcal P_1(\mathbb R^N) $ and $u\in \mathcal{S}(\mathbb{R}^{N})$, then \begin{eqnarray}\label{eq:chenonsipuoooo} &&\int_{\Omega}p\, (-\Delta)^{s}u\,dx= \int_{\Omega}-\operatorname{div}\big((-\Delta)^\sigma \nabla u\big)\,p\,dx\\\nonumber &&= \int_{\Omega}-\operatorname{div}\big((-\Delta)_{\Omega}^\sigma \nabla u\big)\,p\,dx+ \int_{\Omega}-\operatorname{div}\big((-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma \nabla u\big)\,p\,dx\\\nonumber &&=: I_1+I_2. \end{eqnarray} By the divergence theorem we have that \begin{eqnarray}\nonumber I_1&=&\int_{\Omega}-\operatorname{div}\big((-\Delta)_{\Omega}^\sigma \nabla u\big)\,p\,dx\\\nonumber &=& \int_{\Omega} (-\Delta)_{\Omega}^\sigma \nabla u \cdot \nabla p\, dx- \int_{\partial \Omega}p\, (-\Delta)_{\Omega}^\sigma \nabla u \cdot \nu \,dS, \end{eqnarray} where $\nu$ denotes the unit outer normal field to the boundary $\partial \Omega$. Since $\nabla p$ is a constant vector, using Lemma \ref{regional} we obtain that \begin{equation}\label{eq:kjsakajjjhkwdjhkwefw} I_1=-\int_{\partial \Omega}p\, (-\Delta)_{\Omega}^\sigma \nabla u \cdot \nu \,dS.
\end{equation} By the divergence theorem, \begin{eqnarray}\nonumber I_2&:=& \int_{\Omega}-\operatorname{div}\big((-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma \nabla u\big)\,p\,dx\\\nonumber &=& \int_{\Omega} (-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma \nabla u \cdot \nabla p\, dx- \int_{\partial \Omega}p\, (-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma \nabla u \cdot \nu \,dS. \end{eqnarray} Recalling that $\nabla p$ is a constant vector, using $(i)$ of Lemma \ref{lem:primolemmaintperpar}, we obtain \begin{equation}\label{eq:sajhsajhhsinceraaa} I_2=-\int_{\mathcal C \Omega}(-\Delta)^{\sigma}_{\Omega}\nabla u\cdot \nabla p\, dx - \int_{\partial \Omega}p\, (-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma \nabla u \cdot \nu \,dS. \end{equation} Using \eqref{eq:kjsakajjjhkwdjhkwefw} and \eqref{eq:sajhsajhhsinceraaa} together with the divergence theorem, we deduce \begin{eqnarray}\label{eq:lalunaaaaasefsss} &&I_1+I_2= -\int_{\partial \Omega}p\, (-\Delta)_{\Omega}^\sigma \nabla u \cdot \nu \,dS\\\nonumber &&-\int_{\mathcal C \Omega}(-\Delta)^{\sigma}_{\Omega}\nabla u\cdot \nabla p\, dx - \int_{\partial \Omega}p\, (-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma \nabla u \cdot \nu \,dS\\\nonumber &&=-\int_{\partial \Omega}p\, (-\Delta)_{\Omega}^\sigma \nabla u \cdot \nu \,dS \\\nonumber &&+\int_{\mathbb{R}^N\setminus \Omega}\operatorname{div}\big((-\Delta)_{ \Omega}^\sigma \nabla u\big)\,p\,dx-\int_{\partial \Omega}p\, (-\Delta)_{ \Omega}^\sigma \nabla u \cdot (-\nu)\, dS -\int_{\partial \Omega}p\, (-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma \nabla u \cdot \nu \,dS\\\nonumber &&=-\int_{\mathbb{R}^N\setminus \Omega}-\operatorname{div}\big((-\Delta)_{ \Omega}^\sigma \nabla u\big)\,p\,dx-\int_{\partial \Omega}p\, (-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma \nabla u \cdot \nu \,dS, \end{eqnarray} where $(-\nu)$ denotes the unit inner normal field to the
boundary $\partial \Omega$. We point out that, in the previous computations, the divergence theorem ({see for example \cite[Theorem 6.3.4]{Will}}) can be applied by means of a truncation argument together with \eqref{eq:sagaghagdvjhgsjd}. Collecting \eqref{eq:chenonsipuoooo} and \eqref{eq:lalunaaaaasefsss}, and using the definitions \eqref{eq:neumann1} and \eqref{eq:neumann2}, we conclude the proof. \end{proof} \begin{remark} We notice that from Proposition \ref{pro:intbyparts1} and Proposition \ref{pro:cuatrodiec} it is clear that \begin{equation}\label{eq:chetempochefa} \int_{\mathbb{R}^N\setminus \Omega} \mathcal N_\sigma (-\Delta u)\,dx=\int_{\mathbb{R}^N\setminus \Omega} \mathcal N^1_\sigma u\,dx +\int_{\partial \Omega} \mathcal{N}^2_\sigma u \,d S,\, \mbox{ for every $u\in\mathcal{S}(\mathbb{R}^{N})$ } \end{equation} in the hypotheses of Proposition \ref{pro:cuatrodiec}; that is, \eqref{eq:chetempochefa} is the splitting of $\mathcal N_\sigma (-\Delta u)$ into the two parts that will be needed for a variational formulation of the corresponding Neumann problem. \end{remark} We conclude this section by obtaining a natural Neumann condition for the $s$-Laplacian with $s>1$. Roughly speaking, in the higher order case, in order to describe an appropriate weak formulation of our problem we have to use two (nonlocal) Neumann conditions. Our candidates are given in equations \eqref{eq:neumann1} and \eqref{eq:neumann2}. Thus, although Proposition \ref{pro:intbyparts1} suggests the use of $\mathcal N_\sigma (-\Delta u)$ as the Neumann condition for problem \eqref{eq:p}, we rather split it into $\mathcal N_\sigma^1 u$ and $\mathcal N_\sigma^2 u$ via the equation \eqref{eq:chetempochefa}. The fact that this is the right splitting follows from the following proposition.
\begin{proposition}\label{pro:intbyparts2} Let $u \in \mathcal{S}(\mathbb{R}^{N})$ be such that \begin{equation}\label{condition3} \int_{\mathbb{R}^N\setminus \Omega}\left |\operatorname{div}\left((-\Delta)_{ \Omega}^\sigma \nabla u\right)\, \right|\,dx<+\infty, \end{equation} and set $s=1+2\sigma$, $0<\sigma<1$. Then, for $v\in \mathcal{S}(\mathbb{R}^{N})$, we have \begin{eqnarray}\nonumber &&\frac{c_{N,\sigma}}{2}\int_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx\,dy\nonumber\\ &&=\int_{\Omega}v\, (-\Delta)^su\, dx +\int_{\mathbb{R}^N\setminus \Omega} v\, \mathcal {N}^1_\sigma u \, dx +\int_{\partial \Omega} v\, \mathcal{N}^2_\sigma u \, d S. \end{eqnarray} \end{proposition} \begin{proof} Since $u$ is regular, an argument similar to the one in Lemma \ref{regional} shows that $$\int_{\mathcal A}P.V.\int_{\mathcal A} \frac{(\nabla v(x)+\nabla v(y))(\nabla u(x)-\nabla u(y))}{|x-y|^{N+2\sigma}}\,dy\,dx=0,$$ for any open set $\mathcal A\subset \mathbb R^N$. Therefore we have \begin{eqnarray}\label{eq:int1}\\\nonumber&&\frac{1}{2}\int_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx\,dy = \int_{\Omega}\nabla v(x)\,{P.V.} \int_{\mathbb R^N}\frac{(\nabla u(x)-\nabla u(y))}{|x-y|^{N+2\sigma}}\, dy\,dx \\\nonumber &&+ \int_{\mathbb{R}^N\setminus \Omega}\nabla v(x) \int_{\Omega}\frac{(\nabla u(x)-\nabla u(y))}{|x-y|^{N+2\sigma}}\, dy\,dx. \end{eqnarray} In each term of the r.h.s.\ of \eqref{eq:int1} we use the divergence theorem. Therefore we get the following identities: \begin{eqnarray}\label{eq:int11} &&{c_{N,\sigma}}\int_{\Omega}\nabla v(x)\, {P.V.} \int_{\mathbb R^N}\frac{(\nabla u(x)-\nabla u(y))}{|x-y|^{N+2\sigma}}\, dy\,dx \\\nonumber &&= \int_{\Omega} \left(-\operatorname{div} {\left((-\Delta)^\sigma \nabla u(x)\right)}\right) v(x)\,dx +\int_{\partial \Omega}v(x) {\left((-\Delta)^\sigma \nabla u(x)\right)} \cdot \nu\, d S, \end{eqnarray} where $\nu$ denotes the unit outer normal field to the boundary $\partial \Omega$, and \begin{eqnarray}\label{eq:int111} &&{c_{N,\sigma}}\int_{\mathbb{R}^N\setminus \Omega}\nabla v(x) \int_{\Omega}\frac{(\nabla u(x)-\nabla u(y))}{|x-y|^{N+2\sigma}}\, dy\,dx \\\nonumber &&= \int_{\mathbb{R}^N\setminus \Omega} \left (-\operatorname{div} {\big((-\Delta)_\Omega^\sigma \nabla u\big)}\right)v(x)\,dx-\int_{\partial \Omega}v(x) {\big((-\Delta)_\Omega^\sigma \nabla u\big)}\cdot \nu\, d S. \end{eqnarray} {Thus, by Proposition \ref{pro:equivfraclapl}, putting together \eqref{eq:int11} and \eqref{eq:int111}, from \eqref{eq:int1}} we obtain that \begin{eqnarray}\label{eq:int1111}\\\nonumber&&\frac{c_{N,\sigma}}{2}\int_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx\,dy\\\nonumber &&= \int_{\Omega}\left( -\operatorname{div} {\big((-\Delta)^\sigma \nabla u\big)}\right) v(x)\,dx +\int_{\mathbb{R}^N\setminus \Omega} \left (-\operatorname{div} {\big((-\Delta)_\Omega^\sigma \nabla u\big)}\right)v(x)\,dx \\\nonumber &&+\int_{\partial \Omega}v(x) {(-\Delta)_{\mathbb{R}^N\setminus \Omega}^\sigma (\nabla u)}\cdot \nu\, d S\\\nonumber &&= \int_{\Omega}{v}(-\Delta)^su\, dx +\int_{\mathbb{R}^N\setminus \Omega} {v}\mathcal N^1_\sigma u \,dx +\int_{\partial \Omega} {v}\mathcal N^2_\sigma u \,d S, \end{eqnarray} concluding the proof.
\end{proof} \subsection{Some considerations about condition \eqref{condition3}} Let us point out here that the integrability condition \eqref{condition3} in Proposition \ref{pro:intbyparts2} is not needed when $0<\sigma<1/2$, for in this case one always {has} $ \operatorname{div}\left((-\Delta)_{ \Omega}^\sigma (\nabla u)(x)\right)\in L^1(\mathbb R^N\setminus \Omega)$. To see this, observe that for a function $u\in \mathcal{S}(\mathbb{R}^N)$ a simple computation shows that \begin{equation}\label{diver} \operatorname{div}\left((-\Delta)_{ \Omega}^\sigma (\nabla u)(x)\right)= c_{N,\sigma}\int_\Omega \frac{\Delta u(x)-(N+2\sigma)\frac{(\nabla u(x)-\nabla u(y))\cdot(x-y)}{|x-y|^2}}{|x-y|^{N+2\sigma}} \, dy. \end{equation} We will use the following result, whose proof is implicit in the proof of Lemma \ref{regional}. \begin{lemma} Let $\Omega$ be a $\mathcal{C}^{1,1}$ domain such that its boundary, $\partial\Omega$, is a compact set, and let $0<\alpha<1$. Then $$ \int_{\mathbb R^N\setminus \Omega}\int_\Omega \frac 1{|x-y|^{N+\alpha}} dy\, dx<C_{N,\Omega}. $$ \end{lemma} Using this and the fact that $$ |\operatorname{div}\left((-\Delta)_{ \Omega}^\sigma (\nabla u)(x)\right)|\le C \int_\Omega\frac {dy}{|x-y|^{N+2\sigma}}, $$ we deduce our statement. \ However, when $1/2\le \sigma<1$ we do not have in general that $ \operatorname{div}\left((-\Delta)_{ \Omega}^\sigma (\nabla u)(x)\right)\in L^1(\mathbb R^N\setminus \Omega)$, as the following counterexample shows. \noindent{\bf Counterexample}: Let $\Omega$ denote the unit ball centered at the origin in $\mathbb R^N$. For $R$ large, define the function $u$ in the Schwartz class as follows: $$u(x)= \begin{cases} \frac 12|x|^2, \quad &\mbox{ if } |x|\le R\\[3mm] 0, \quad &\mbox{ if } |x|\ge 2R, \end{cases} $$ with $u\in \mathcal C^\infty$ everywhere.
Then, formula \eqref{diver} gives for this $u$ and $1<|x|<R$, \begin{equation*} \operatorname{div}\left((-\Delta)_{ \Omega}^\sigma (\nabla u)(x)\right)= c_{N,\sigma}\int_\Omega \frac{-2\sigma}{|x-y|^{N+2\sigma}} \, dy. \end{equation*} This function is clearly not integrable in $B_R(0)\setminus \Omega$ for $2\sigma \ge 1$. Therefore the extra hypothesis in Proposition \ref{pro:intbyparts2} is necessary to justify our computations. \ It is worth pointing out also that the integrability condition \eqref{condition3} is only needed in a local sense. More precisely, if $\Omega\subset B_R(0)$, then we always have, for $u\in \mathcal S(\mathbb R^N)$, that $\operatorname{div}\left((-\Delta)_{ \Omega}^\sigma (\nabla u)(x)\right)\in L^1(\mathbb R^N\setminus B_{2R}(0))$. In fact, we have the following stronger estimate. \begin{lemma} Assume as before that $\Omega\subset B_R(0)$. Then for every $u\in \mathcal S(\mathbb R^N)$ and every polynomial $p\in \mathcal P_1(\mathbb R^N)$ we have \begin{equation*} \int_{\mathbb{R}^N\setminus B_{2R}(0)}\left |\operatorname{div}\left((-\Delta)_{ \Omega}^\sigma \nabla u(x)\right)\,p(x) \right|\,dx<+\infty. \end{equation*} \end{lemma} \begin{proof} To see this, we use the expression given by \eqref{diver}. Since $|\Delta u(x) p(x)|<C$, $|x-y|\sim |x|$ for $|y|<R$ and $|x|>2R$, and $\frac{|(\nabla u(x)-\nabla u(y))\cdot(x-y)\,p(x)|}{|x-y|^2}\le C \frac{|p(x)|}{|x|}\le C'$ for $|x|$ large, we have $$ |\operatorname{div}\left((-\Delta)_{ \Omega}^\sigma \nabla u(x)\right)\,p(x)| \le C'' \frac 1{|x|^{N+2\sigma}} , \qquad |x|>2R. $$ This finishes the proof. \end{proof} With all the above, we conclude that condition \eqref{eq:sagaghagdvjhgsjd} in Proposition \ref{pro:cuatrodiec} is always granted when $0<\sigma<1/2$ and is equivalent to condition \eqref{condition3} when $1/2\le \sigma <1$.
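\begin{remark} For the reader's convenience, we sketch the elementary computation behind the counterexample above. For $1<|x|<R$ and $y\in\Omega$, the function $u$ is quadratic, so $\nabla u(x)-\nabla u(y)=x-y$ and $\Delta u(x)=N$; hence the numerator in \eqref{diver} reduces to $$ \Delta u(x)-(N+2\sigma)\frac{(\nabla u(x)-\nabla u(y))\cdot(x-y)}{|x-y|^2}=N-(N+2\sigma)=-2\sigma, $$ which yields the expression $c_{N,\sigma}\int_\Omega \frac{-2\sigma}{|x-y|^{N+2\sigma}}\,dy$ displayed above. \end{remark}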
\section{The functional setting of the problem}\label{se:weakvaria} We recall that a function $u$ is weakly differentiable in $ \mathbb R^N$ if there exists a vector field \\ $\vv U:\mathbb R^N \to \mathbb R^N$ such that \begin{itemize} \item $u, |\vv U| \in L^1_{loc}(\mathbb R^N)$ and \item for every smooth vector field $\vv F$ of compact support we have $$ \int u(x)\, \mbox{div}\vv F(x) \, dx = -\int \vv U(x)\cdot \vv F(x) \, dx. $$ \end{itemize} We write $\vv U= \nabla u$. If $\vv U=(U^1,U^2,\dots,U^N)$, then the $j$-th component $U^j$ is denoted by $\partial_j u$ and satisfies $$ \int \partial_j u\; \varphi \, dx=-\int u \;\partial_j \varphi\, dx, \quad \forall \varphi \in {\mathcal C}^\infty_0. $$ We now define the appropriate functional space to solve the Neumann problem. \begin{definition} {Given $g_1$ as in the assumptions $\mathcal A_{(f,g_1,g_2)}$, we define the space} \begin{equation}\nonumber H^s_{\mathcal N(g_1,g_2)}(\Omega)=\Big\{u:\mathbb R^N\rightarrow {\mathbb R}\,:\, u {\mbox{ weakly differentiable }}\quad \text{and}\quad \|u\|_{H^s_{\mathcal N(g_1,g_2)}{(\Omega)}}< +\infty\Big\}, \end{equation} where \begin{equation}\label{eq:canzperunamic} \|u\|_{H^s_{\mathcal N(g_1,g_2)}(\Omega)}=\sqrt{\int_\Omega u^2\,dx+ \iint_{Q(\Omega)}\frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy+ \int_{\mathcal C \Omega}|g_1|u^2\, dx}. \end{equation} \end{definition} Notice that we have the formal function space identity \begin{equation}\label{space1} H^s_{\mathcal N(g_1,g_2)}(\Omega)=\dot{H}^s(\Omega)\bigcap L^2(\mathbb R^N,d\mu_{g_1}), \end{equation} with $\displaystyle \dot{H}^s(\Omega)$ and $d\mu_{g_1}$ defined in \eqref{spacebase} and \eqref{dmu}, respectively.
\ \begin{remark} Even {though} the space $H^s_{\mathcal N(g_1,g_2)}(\Omega)$ does not depend on the boundary data $g_2$, we prefer to include $g_2$ as a subscript in the notation {in order to keep in mind} both Neumann conditions in problem \eqref{eq:p}. \end{remark} \ Let us prove the following. \begin{proposition}\label{Hilbert} The space $H^s_{\mathcal N(g_1,g_2)}(\Omega)$ is a Hilbert space, with the inner product given by $$ (u,v)_{H^s_{\mathcal N(g_1,g_2)}(\Omega)}=\int_{\mathbb R^N} uv\,d\mu_{g_1} +\iint_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy . $$ \end{proposition} Clearly, $$(\cdot,\cdot)_{H^s_{\mathcal N(g_1,g_2)}}:H^s_{\mathcal N(g_1,g_2)}(\Omega)\times H^s_{\mathcal N(g_1,g_2)}(\Omega)\rightarrow \mathbb R,$$ is a bilinear form defined over the reals. Moreover, if $\|u\|_{H^s_{\mathcal N(g_1,g_2)}(\Omega)}=\sqrt{(u,u)_{H^s_{\mathcal N(g_1,g_2)}(\Omega)}}=0,$ we have on the one hand that $D(u)=0$, which says that $u$ coincides a.e.\ with a polynomial of degree 1. Since, on the other hand, $\int_\Omega u^2 dx=0$, we conclude that $u$ must be 0 a.e. Hence, we only need to show that $H^s_{\mathcal N(g_1,g_2)}(\Omega)$ is complete. \ Before proving that $H^s_{\mathcal N(g_1,g_2)}(\Omega)$ is complete, we state some technical results that will be needed.
We will denote by $\displaystyle \fint_A v$ the average integral value of $v$ on $A$, that is, $\displaystyle \fint_A v=\frac 1{|A|}\int_A v.$ \begin{lemma}\label{all_2} There exists a constant $C=C(N,|\Omega|)$ so that, for every ball $B$ with $\Omega \subset B$ and every $v:\mathbb R^N\rightarrow {\mathbb R}$ weakly differentiable with $|\nabla v|\in L^2(B)$, one has \begin{equation}\label{Poincare1} \int_B \left(v(x)-\fint_\Omega v\right)^2dx \le C\,|B|^{1+\frac 1N}\int_B |\nabla v(x)|^2dx.\end{equation} \end{lemma} \begin{corollary}\label{cor1} With the same hypotheses and notation of Lemma \ref{all_2}, we have \begin{equation}\label{extension2} \frac 12\fint_B |v(x)|^2dx \le C\,|B|^{1+\frac 1N}\fint_B |\nabla v(x)|^2dx+ \left(\left |\fint_\Omega v(y)\,dy\right|\right)^2. \end{equation} \end{corollary} \begin{proof} Simply use the numerical inequality $(b-a)^2\ge \frac 12 b^2-a^2$. \end{proof} \ \begin{proof}[Proof of Lemma \ref{all_2}] The proof of (\ref{Poincare1}) is standard. First we observe that, from Jensen's inequality, we have \begin{equation*} \left(v(x)-\fint_\Omega v\right)^2=\left(\fint_\Omega (v(x)-v(y))dy\right)^2 \le \fint_\Omega \left(v(x)-v(y)\right)^2dy. \end{equation*} Integrating both sides with respect to $dx$ on $B$, and using the identity \begin{equation*} v(x)-v(y) = \int_0^1 \nabla v(tx+(1-t)y)\cdot (x-y)dt, \quad \mbox{a.e.} \quad x,y \in \mathbb R^N, \end{equation*} and Jensen's inequality again, we have \begin{equation*} \int_B \left(v(x)-\fint_\Omega v\right)^2dx \le \int_B \fint_\Omega\int_0^1\left |\nabla v(tx+(1-t)y)\cdot (x-y)\right |^2 dt\, dy\, dx.
\end{equation*} By Fubini's theorem and the change of variables $x\to z=tx+(1-t)y$, we obtain that \begin{eqnarray*} J&:=& \int_B \fint_\Omega\int_0^1\left |\nabla v(tx+(1-t)y)\cdot (x-y)\right |^2 dt\, dy\, dx\\[2mm] &\le & \fint_\Omega \int_0^1 \int_{B} \left|\nabla v (z)\right|^2\left( \frac{|z-y|}{t}\right)^2 \chi_B\left( \frac{z-y}{t}+y\right)dz\frac{dt}{t^N} dy, \end{eqnarray*} where we have used that $\Omega\subseteq B$. We observe now that if the ball $B$ has radius $R$ and both ${y}$ and $\frac{z-y}{t}+y $ are in $B$, then $\left|\frac{z-y}{t}\right|< 2R$, which forces $t$ to be bigger than $\frac{|z-y|}{2R}$. Thus, \begin{eqnarray*} J&\le& \int_{B} \left| \nabla v(z)\right|^2 \fint_\Omega \int_{1\wedge \frac{|z-y|}{2R}}^1 |z-y|^2\frac{dt}{t^{N+2}} dy\, dz \\[2mm] &\le&\frac {(2R)^{N+1}}{N+1} \int_{B} \left|\nabla v(z)\right|^2 \fint_\Omega \frac{1}{|z-y|^{N-1} }dy\, dz. \end{eqnarray*} Finally, using that $\displaystyle \int_\Omega \frac{1}{|z-y|^{N-1} }dy \le C(N)|\Omega|^{1/N}$, we conclude the lemma. \end{proof} \ Now we prove the following. \begin{lemma}\label{Wintinger} If $u\in \dot{H}^s(\Omega)$ then $|\nabla u|\in L^2(B)$ for every ball $B$. Moreover, if $ \Omega\subset B$ one has the estimate \begin{equation}\label{Wintinger2} \int_B\left|\nabla u(x)-\fint_\Omega \nabla u\right|^2 dx\le {C(N,\sigma)}\, |B|^{1+\frac {2\sigma}N} \int_B\fint_\Omega\frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}dxdy. \end{equation} \end{lemma} \ As an easy consequence we obtain the following inequality. \begin{corollary}\label{Wintinger0} There exists a positive constant $C=C(N,\sigma)>0$ such that for $u\in \dot{H}^s(\Omega)$ and every ball $B\supset \Omega$, $$ \frac 12\int_B|\nabla u(x)|^2 dx\leq \frac{{C(N,\sigma)}}{|\Omega|}\, |B|^{1+\frac {2\sigma}N}\, D^2(u) + \left(\fint_\Omega \nabla u(y) dy\right)^2, $$ where $D(u)$ was given in \eqref{spacebase}.
In particular, if $u\in \dot{H}^s(\Omega)$ then $u \in H^1_{loc}(\mathbb R^N)$. \end{corollary} \begin{proof}[Proof of Lemma \ref{Wintinger}] To simplify the notation, set $$\displaystyle \vv {b}=(b^1,\ldots,b^N):= \left(\fint_\Omega \partial_1 u,\dots, \fint_\Omega \partial_N u\right)= \displaystyle \fint_\Omega \nabla u(y) dy.$$ As in the proof of Lemma \ref{all_2}, we easily get \begin{equation*} \int_B\left|\nabla u(x)-\vv b\right|^2\, dx \leq \int_{B}\fint_{\Omega}\left|\nabla u(x)-\nabla u(y)\right|^2\, dy\, dx.\nonumber \end{equation*} Therefore, if $B$ is a ball that contains $\Omega$ and has radius $R$, we obtain $$\int_B\left |\nabla u(x)-\vv {b}\right|^2 dx \le(2R)^{N+2\sigma} \int_B\fint_\Omega \frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}dxdy, $$ as stated. \end{proof} \ \begin{proof}[Proof of Proposition \ref{Hilbert}] As we pointed out above, we only need to show that $H^s_{\mathcal N(g_1,g_2)}(\Omega)$ is complete. To that end, consider a Cauchy sequence $\{u_k\}_k$ in our space. We proceed in several steps: \ \textsc{Step 1:} There exists a function $u^*$ such that \begin{equation}\label{L2mu} \lim_{k\to\infty} \left(\int_{\Omega} |u^*-u_k|^2\, dx + \int_{\mathcal C \Omega} |u^*-u_k|^2 |g_1| \, dx \right)=0. \end{equation} This comes simply from the fact that $\{u_k\}_k$ is a Cauchy sequence in $ L^2(\mathbb{R}^N, d\mu_{g_1})$. Since, in particular, \begin{equation*} \lim_{k\to\infty} \int_{\Omega\cup \{|g_1|>1/m\}} |u^*-u_k|^2\, dx =0, \end{equation*} for all $m\in \mathbb N$, there exists a subsequence that converges pointwise to $u^*$ in the set $\Omega\cup \{g_1\neq 0\}$, a.e.\ with respect to Lebesgue measure.
\ \textsc{ Step 2:} There exists a vector field $\vv U:\mathbb R^N \to \mathbb R^N$ such that for every ball $B\subset \mathbb R^N$ we have \begin{equation}\label{nabla:L2} \lim_{k\to\infty} \int_B |\nabla u_k-\vv U|^2\, dx=0. \end{equation} The idea here is to prove that the sequence of vector fields $\{\nabla u_k\}_k$ is a Cauchy sequence in $[L^2(B)]^N$. By using \eqref{Wintinger2} and putting, as above, \[\vv {b_k}=(b_k^{1},\ldots,b_k^{N}):= \displaystyle \fint_\Omega \nabla u_k(y) dy,\] we find that the sequence of vector fields $\{\nabla u_k-\vv{b_k}\}_k$ is a Cauchy sequence in $[L^2(B)]^N$ for every ball $B\subset \mathbb R^N$ and, hence, there exists a vector field $\vv U_0=(U_0^1,\ldots,U_0^N)$ so that \begin{equation}\label{nabla0:L2} \lim_{k\to\infty} \int_B |\nabla u_k(x)-\vv{b_k}-\vv U_0(x)|^2\, dx=0, \quad \forall B \mbox{ ball} . \end{equation} \ Let us prove that the sequence of vectors $\{\vv{b_k}\}_k$ has a limit. To see this, we observe that if $\varphi$ is a smooth bump function supported in $\Omega$ with $\int \varphi \, dx=1$, we have \begin{equation}\label{vector} \lim_{k\to\infty} \int \left(\partial_j u_k-b_k^j\right)\varphi = \int U_0^j\varphi, \quad j=1,\dots, N. \end{equation} Since $\displaystyle \int \left(\partial_j u_k-b_k^j\right)\varphi \, dx= -\int u_k\,\partial_j \varphi\, dx -b_k^j $ and $u_k\rightarrow u^*$ as $k\to \infty$ in $L^2(\Omega, dx)$, we have that there exists the limit $$ b_0^j:=\lim_{k\to\infty} b_k^j=-\int\left(u^*\,\partial_j \varphi+U_0^j\varphi\right)dx. $$ If we set $\vv b_0=(b_0^1,\ldots,b_0^N)$, then $\vv U=\vv U_0+\vv b_0$ is the vector field sought in \eqref{nabla:L2}. \ \textsc{Step 3:} From Corollaries \ref{cor1} and \ref{Wintinger0} we have that the family $\{u_k\}_k$ is a Cauchy sequence in $L^2(B,dx)$ for every ball $B\subset \mathbb R^N$.
In particular, there exists a function $u$ defined on all of $\mathbb R^N$ so that \begin{equation}\label{L1} u_k\longrightarrow u \quad \mbox{in } {L^2_{loc}(\mathbb R^N)}, \quad \mbox{as } k\to\infty. \end{equation} Since, from \textsc{Step 2}, we also have \begin{equation}\label{L1_grad} \nabla u_k\longrightarrow \vv U \quad \mbox{in } L^2_{loc}(\mathbb R^N), \quad \mbox{ as } k\to\infty, \end{equation} we conclude that $\vv U=\nabla u$. Obviously, we also have that $u=u^*$ a.e.\ on the set $\{x\in\mathbb{R}^{N}:g_1(x)\neq 0\}$. \ We now collect all the information to prove that the function $u$ is indeed the limit of the sequence $\{u_k\}_k$ in the norm of $H^s_{\mathcal N(g_1,g_2)}(\Omega)$. First, we have from (\ref{L1_grad}) and Fatou's Lemma that $$ \lim_{k\to\infty} D^2(u-u_k)=\lim_{k\to\infty} \iint_{Q(\Omega)}\frac{\left|(\vv U(x)-\nabla u_k(x))-(\vv U(y)-\nabla u_k(y))\right|^2}{|x-y|^{N+2\sigma}}\, dx dy =0. $$ This, together with (\ref{L2mu}) and the above observation on $u^*$, gives \begin{eqnarray*} \lim_{k\to\infty} \|u-u_k\|^2_{H^s_{\mathcal N(g_1,g_2)}} &=& \lim_{k\to\infty} \left(\int |u-u_k|^2d\mu_{g_1}+D^2(u-u_k)\right) \\ &=& \lim_{k\to\infty} \left(\int |u^*-u_k|^2d\mu_{g_1}+D^2(u-u_k)\right)=0. \end{eqnarray*} This finishes the proof of Proposition \ref{Hilbert}. \end{proof} \section{Existence of solutions to \eqref{eq:p}. The proof of Theorem \ref{main}.} \label{sec:variational} We start by defining the following weak formulation of the problem \eqref{eq:p}. \begin{definition}\label{def:weaksolution} Assume that $f\in L^2(\Omega)$, $g_1\in {L^1( \mathbb{R}^N\setminus\Omega, |x|^2\, dx )}{\cap L^1(\mathbb{R}^N\setminus\Omega)},$ and $g_2\in L^2(\partial\Omega)$.
Then $u\in H^s_{\mathcal N(g_1,g_2)}(\Omega)$ is a weak solution to \eqref{eq:p} if and only if \begin{eqnarray}\label{eq:dbahgduvasc} &&\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy \\\nonumber &&=\int_{\Omega}fv\,dx+\int_{\mathbb R^N\setminus \Omega}g_1v\, dx+ \int_{\partial \Omega}g_2v\, dS, \end{eqnarray} for all $v \in H^s_{\mathcal N(g_1,g_2)}(\Omega)$. \end{definition} \begin{remark}\label{rem:wellposdness} {We point out that if $u,v \in H^s_{\mathcal N(g_1,g_2)}(\Omega)$, each term in \eqref{eq:dbahgduvasc} is well defined. In particular, since $v\in H^1(\Omega)$, we deduce that $v$ has a trace $Tv$ on $\partial \Omega$, in $L^2(\partial \Omega)$. The regularity of $g_2$ can also be relaxed according to trace theory, that is, it is sufficient to require that $g_2\in L^{q}(\partial\Omega)$ with $q=2(N-1)/N<2$ (see \cite{dibenedetto}). Moreover, by the Sobolev inequality, see \cite{DnPV}, {since ${v} \in L^p(\Omega)$, with $p\leq2N/(N-2(\sigma+1))$ (we make the convention that $2N/(N-2(\sigma+1))=+\infty$ if $N\leq 2(\sigma+1)$), the previous definition makes sense for every} $f \in L^q(\Omega)$, with $q \geq 2N/(N+2(\sigma+1))$.} \end{remark} Thanks to Definition \ref{def:weaksolution} we can also associate a variational formulation to \eqref{eq:p}.
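\begin{remark} Let us point out, at least formally, where the compatibility condition comes from: if $p\in\mathcal{P}_1(\mathbb{R}^N)$ is an admissible test function in \eqref{eq:dbahgduvasc}, then $\nabla p$ is a constant vector and the left-hand side of \eqref{eq:dbahgduvasc} vanishes for $v=p$, so the existence of a weak solution forces $$ \int_{\Omega}f p\,dx+\int_{\mathbb R^N\setminus \Omega}g_1 p\, dx+ \int_{\partial \Omega}g_2 p\, dS=0, \qquad \text{for every } p\in\mathcal{P}_1(\mathbb{R}^N). $$ This necessary condition is made precise in Lemma \ref{pro:necesscondit} below. \end{remark}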
If $f\in L^2(\Omega)$, $g_1\in {L^1( \mathbb{R}^N\setminus\Omega, |x|^2\, dx )}{\cap L^1(\mathbb{R}^N\setminus\Omega)},$ and $g_2\in L^2(\partial\Omega)$, for all $u\in H^s_{\mathcal N(g_1,g_2)}(\Omega)$ we can define the functional \begin{equation}\label{eq:deffunctional} J(u):= \frac{c_{N,\sigma}}{4} \int_{Q(\Omega)}\frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy-\int_{\Omega}f{u}\,dx-\int_{\mathbb R^N\setminus \Omega}g_1{u}\, dx- \int_{\partial \Omega}g_2{u}\, dS. \end{equation} \ If, for example, we consider the homogeneous problem \begin{equation}\label{nonresonant} \begin{cases} u+(-\Delta)^s u = f(x) &\text{in}\,\,\Omega\\ \mathcal N_\sigma^1 u =0&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega\\ \mathcal N_\sigma^2 u=0&\text{on}\,\,\partial\Omega, \end{cases} \end{equation} it is easy to see that a standard variational argument gives the unique energy solution. \ In this section we analyze the compatibility condition that has to be taken into account in order to prove the existence of weak solutions to \eqref{eq:p}, that is, in the resonant case.
A key point is the following: let us consider in $H^s_{\mathcal N(g_1,g_2)}(\Omega)$ the equivalence relation defined by \[u\backsim v \quad \hbox{ if and only if there exists } p\in \mathcal{P}_1(\mathbb{R}^N) \hbox{ such that } \quad u=v+p.\] {Let us denote by} $\mathcal{H}^s$ the quotient space {with respect to} this equivalence relation, {that is,} $${\mathcal{H}^{s}:={H^s_{\mathcal N(g_1,g_2)}(\Omega)}\diagup{\mathcal{P}_{1}(\mathbb{R}^{N})}=\Big\{ [u],\, u\in H^s_{\mathcal N(g_1,g_2)}(\Omega)\Big\},}$$ {where, given $u\in H^s_{\mathcal N(g_1,g_2)}(\Omega)$, $$[u]=\{v\in H^s_{\mathcal N(g_1,g_2)}(\Omega):\, v\backsim u\}=\{u+p:\, p\in\mathcal{P}_{1}(\mathbb{R}^{N})\}\subseteq H^s_{\mathcal N(g_1,g_2)}(\Omega).$$} It is well known that $$\|[u]\|^2_{\mathcal{H}^{s}}=\inf_{p\in\mathcal{P}_1(\mathbb{R}^{N})}\|u-p\|^{2}_{H^s_{\mathcal N(g_1,g_2)}(\Omega)}.$$ By the Hilbert projection theorem it is clear that the previous infimum is attained, that is, there exists $\widetilde{p}\in\mathcal{P}_{1}(\mathbb{R}^{N})$ such that $$\|[u]\|^2_{\mathcal{H}^{s}}=\|u-\widetilde{p}\|^{2}_{H^s_{\mathcal N(g_1,g_2)}(\Omega)}.$$ Moreover, $v:=u-\widetilde{p}\, {\in}\, H^{s,0}_{\mathcal{N}(g_1,g_2)}(\Omega)$, where \begin{equation}\label{noviembre_set} H^{s,0}_{\mathcal{N}(g_1,g_2)}(\Omega):=\left\{u\in H^s_{\mathcal{N}(g_1,g_2)}(\Omega):\, \int_{\mathbb{R}^{N}}{u p\, d\mu_{g_1}}=0,\, {\forall} p\in\mathcal{P}_{1}(\mathbb{R}^{N})\right\}, \end{equation} where $d\mu_{g_1}$ was defined in \eqref{dmu}. We notice that $H^{s,0}_{\mathcal{N}(g_1,g_2)}(\Omega)=H_{g_1}^{s,0}(\Omega)\cap L^{2}(\mathbb{R}^{N}, d\mu_{g_1})$ is a closed subspace of $H^s_{\mathcal N(g_1,g_2)}(\Omega)$. Let us define \[\|[u]\|^2_*=\int_{Q(\Omega)}\frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy,\] which is a norm on $\mathcal{H}^{s}$.
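\begin{remark} Observe that $\|\cdot\|_*$ does not depend on the chosen representative of the class $[u]$: if $p\in\mathcal{P}_{1}(\mathbb{R}^{N})$, then $\nabla p$ is a constant vector, so $$ \nabla (u+p)(x)-\nabla (u+p)(y)=\nabla u(x)-\nabla u(y), \qquad x,y\in\mathbb{R}^N, $$ and hence $\|u+p\|_*=\|u\|_*$ for every $p\in\mathcal{P}_{1}(\mathbb{R}^{N})$. \end{remark}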
In fact, for $p\in\mathcal{P}_{1}(\mathbb{R}^{N})$, we have that $\|u+p\|_*=0$ implies that $u\in \mathcal{P}_{1}(\mathbb{R}^{N})$, that is, $[u]$ is the zero element of $\mathcal{H}^{s}$. We now prove the following result, which provides an example of an admissible $g_1$. \begin{lemma}\label{equivalencia} Let us suppose that $g_1\in L^p_{c}(\mathbb R^N\setminus \Omega)$ with $p>N/2$. Then $\lambda_1(g_1)>0$, where $\lambda_1(g_1)$ is defined in \eqref{espectral}. As a consequence, the norm $\|\cdot\|_{\mathcal{H}^{s}}$ is equivalent to the norm $\|\cdot\|_*$, that is, there exists a positive constant $C=C(N,\sigma, \Omega)$ such that \begin{equation}\label{tutoria} \frac1C\|[u]\|_*\leq\|[u]\|_{\mathcal{H}^{s}}\leq C \|[u]\|_*, \qquad \text{for all}\,\, [u]\in \mathcal{H}^{s}. \end{equation} \end{lemma} \begin{proof} {The fact that} $g_1\in {L^1( \mathbb{R}^N\setminus\Omega, |x|^2\, dx )}{\cap L^1(\mathbb{R}^N\setminus\Omega)}$ {easily follows} by a simple application of the H\"{o}lder inequality. {Moreover, since $g_1$ has compact support, by Corollary~\ref{Wintinger0} we deduce that $H^{s,0}_{g_1}(\Omega) = H^{s,0}_{\mathcal{N}(g_1,g_2)}(\Omega).$} {Showing that $\lambda_1(g_1)>0$ is equivalent to obtaining, for every $v\in H^{s,0}_{g_1}(\Omega)= H^{s,0}_{\mathcal{N}(g_1,g_2)}(\Omega)$, the following Poincar\'e-type inequality} \begin{equation}\label{poincare} \int_{\Omega}{v}^2\,dx+ {\int_{\mathcal C \Omega}|g_1|{v}^2\, dx} \leq C(N,\sigma, \Omega) \iint_{Q(\Omega)}\frac{|\nabla {v}(x)-\nabla {v}(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy. \end{equation} Observe that if \eqref{poincare} is true, then the second inequality of \eqref{tutoria} is also valid.
Indeed, if we consider $[u]\in\mathcal{H}^s$ and \begin{equation}\label{eq:wwww} w:=u-\widetilde{p}\in H^{s,0}_{\mathcal{N}(g_1,g_2)}(\Omega),\quad \widetilde{p}\in\mathcal{P}_{1}(\mathbb{R}^{N}), \end{equation} the function at which the infimum defining the norm is attained, then by \eqref{poincare} it follows that \begin{eqnarray*} \|[u]\|^2_{\mathcal{H}^{s}}=\|w\|^2_{H^{s}_{\mathcal{N}(g_1,g_2)}(\Omega)}&=& \int_{\Omega}w^2\,dx+\iint_{Q(\Omega)}\frac{|\nabla w(x)-\nabla w(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy+ {\int_{\mathcal C \Omega}|g_1|w^2\, dx} \\\nonumber &\leq& C(N,\sigma, \Omega) \iint_{Q(\Omega)}\frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy\leq C(N,\sigma, \Omega) \|[u]\|^2_*, \end{eqnarray*} as wanted. To show \eqref{poincare}, let us suppose, by contradiction, that there exists, up to a renormalization, a sequence $\{v_k\}\subset H^{s,0}_{\mathcal{N}(g_1,g_2)}(\Omega)$ such that \begin{equation}\label{ppl} \int_{\Omega}v_k^2\, dx+\int_{\mathbb{R}^N\setminus \Omega}|g_1|v_k^2\, dx=1\,\, \mbox{and}\,\, \iint_{Q(\Omega)}\frac{|\nabla v_k(x)-\nabla v_k(y)|^{2}}{|x-y|^{N+2\sigma}}\, dx dy<\frac{1}{k}.
\end{equation} First of all, we will show that actually \begin{equation}\label{orden_noviembre} \int_{\Omega}v_k^2\, dx+\int_{\Omega}|\nabla v_k|^2\, dx+\int_{\Omega}\int_{\Omega}\frac{|\nabla v_k(x)-\nabla v_k(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy=:\|v_k\|^2_{W^{s,2}(\Omega)}<C.\end{equation} In fact, by contradiction, let us suppose that there exists a subsequence, which we still denote by $\{v_k\}$, such that \begin{equation}\label{pio} \rho_k:=\int_{\Omega}|\nabla v_k|^2\, dx\rightarrow +\infty. \end{equation} Defining $z_k=v_k/\sqrt{\rho_k}$, since $\|\nabla z_k\|_{L^2(\Omega)}=1$, from \eqref{ppl} it is clear that \begin{equation}\label{wui} \int_{\Omega}z_k^2\, dx+\int_{\Omega}|\nabla z_k|^2\, dx+\int_{\Omega}\int_{\Omega}\frac{|\nabla z_k(x)-\nabla z_k(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy< C, \end{equation} that is, $\|z_k\|^2_{W^{s,2}(\Omega)}\leq C$, so $z_k \rightharpoonup z^\star$ in $W^{s,2}(\Omega)$. In particular, by Rellich's theorem we have that, up to a subsequence, $z_k\overset{L^2(\Omega)}{\longrightarrow} z^\star.$ Moreover, by \eqref{ppl} and \eqref{pio}-\eqref{wui} it follows that \begin{eqnarray*} C&\geq& \int_{\Omega}z_k^2\, dx+\int_{\Omega}\int_{\Omega}\frac{|\nabla z_k(x)-\nabla z_k(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy\\ &=&\frac{1}{\rho_k}\left(\int_{\Omega}v_k^2\, dx+\int_{\Omega}\int_{\Omega}\frac{|\nabla v_k(x)-\nabla v_k(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy\right)\to 0,\, k\to\infty. \end{eqnarray*} So, in particular, $z^\star\equiv0$. On the other hand, using the fractional compact embedding theorem \cite[Theorem 7.1]{DnPV}, we have that (up to a subsequence) $$1=\lim_{k\rightarrow +\infty} \int_{\Omega}|\nabla z_k|^2\, dx= \int_{\Omega}|\nabla z^\star|^2\, dx,$$ which is a contradiction, and therefore \eqref{orden_noviembre} follows.
Thus, from \eqref{orden_noviembre} we infer in particular that $\{v_k\}$ is bounded in $W^{s,2}(\Omega)$, so, up to a subsequence, $v_k\rightharpoonup v$ in $W^{s,2}(\Omega)$ and $v_k\to v$ in $L^{2}(\Omega)$. Moreover, by Lemma~\ref{Wintinger}, Lemma \ref{all_2} and the fact that $\|\nabla v_k\|_{L^2(\Omega)}\leq C$, we get that \[\int_{B}v_k^2\,dx+\int_{B}|\nabla v_k|^2\,dx\leq C,\] where $B$ is a ball centered at the origin with $\Omega \subset B$ and $C=C(N,|B|,|\Omega|)$ is a positive constant. That is, $\|v_k\|^2_{H^1(B)}\leq C$, so that $v_k\to v$ in $L^{q}$, $q<2N/(N-2)$. Hence, using the fact that $g_1\in L^p_c(\mathbb R^N\setminus\Omega),\, p>N/2$, we can pass to the limit in \eqref{ppl}, getting that \begin{equation}\label{eq:miaoiooio} \int_{\Omega}v^2\, dx+\int_{\mathbb{R}^{N}\setminus \Omega}|g_1|v^2\, dx=1. \end{equation} By the lower semicontinuity of the norm with respect to weak convergence, from \eqref{ppl} it is also clear that $$\iint_{Q(\Omega)}\frac{|\nabla v(x)-\nabla v(y)|^{2}}{|x-y|^{N+2\sigma}}\, dx dy=0.$$ Hence $v\in \mathcal{P}_{1}(\mathbb{R}^N)$, which, by \eqref{eq:miaoiooio}, clearly contradicts the fact that $v \in H^{s,0}_{\mathcal{N}(g_1,g_2)}(\Omega)$. \ {To conclude the proof of the lemma, let us mention that the first inequality of \eqref{tutoria} is obviously true because $\|[u]\|^2_{\mathcal{H}^{s}}=\|w\|^2_{H^{s}_{\mathcal{N}(g_1,g_2)}(\Omega)}$, where $w$ was given in \eqref{eq:wwww}.} \end{proof} {Next we will emphasize that $J$ is well defined on $\mathcal{H}^s$. In fact, if $f$, $g_1$ and $g_2$ satisfy the compatibility condition \eqref{eq:unodiez} and $u \backsim v$, then \begin{equation}\label{jota} J(u)=J(v)=J(u-p). \end{equation} Therefore we can now establish the following.} \begin{theorem}\label{key_th} Assume that $(\mathcal A_{(f,g_1,g_2)})$ holds and let $J:\mathcal{H}^s \rightarrow \mathbb R$ be the functional defined in \eqref{eq:deffunctional}.
{If $f, g_1$ and $ g_2$ satisfy the compatibility condition \eqref{eq:unodiez}, then} \begin{enumerate} \item $J$ has a unique minimum in $\mathcal{H}^s$. \item Every critical point of $J$ is in fact a weak solution to the problem \eqref{eq:p} modulo a polynomial in $\mathcal{P}_1(\mathbb{R}^N)$. \end{enumerate} \end{theorem} \begin{proof} First of all, it is easy to check (see also Remark \ref{rem:wellposdness}) that the functional $J$ is well defined in $\mathcal{H}^s$; that is, it is enough to prove that \begin{equation}\label{vv} \big |J([u])\big |< \infty. \end{equation} By abuse of notation, taking into account \eqref{jota}, we will write $J(u)$ instead of $J({[u]})$. To obtain \eqref{vv} it is sufficient to point out that \[\left|\int_{\Omega}fu\, dx\right|\leq \|f\|_{L^2(\Omega)}\|u\|_{L^2(\Omega)} \leq C\|[u]\|_{\mathcal{H}^s}.\] Moreover, by the Cauchy-Schwarz inequality, $$\left|\int_{\mathbb{R}^N\setminus\Omega} ug_1 dx\right|\leq \int_{\mathbb{R}^N\setminus\Omega}|g_1|^{\frac 12}|g_1|^{\frac 12}|u|\,dx\leq \left(\int_{\mathbb{R}^N\setminus\Omega}|g_1|\, dx\right)^{\frac 12}\left(\int_{\mathbb{R}^N\setminus\Omega}|g_1||u|^2\, dx\right)^{\frac 12}\leq C\|[u]\|_{\mathcal{H}^s}.$$ On the other hand, by using the H\"older and trace inequalities and the Poincar\'e-Wirtinger inequality given in Corollary \ref{Wintinger0}, we get that \[\left|\int_{\partial\Omega} g_2 u\, dS\right|\le \|g_2\|_{L^{\frac{2(N-1)}{N}}(\partial\Omega)}\|u\|_{L^{\frac{2^*(N-1)}{N}}(\partial\Omega)} \leq C\|g_2\|_{L^{2}(\partial\Omega)}(\|u\|_{L^2(\Omega)}+\|\nabla u\|_{L^2(\Omega)})\leq C(1+\|[u]\|_{\mathcal{H}^s}).\] By the previous computations and the fact that $\lambda_1(g_1)>0$ {(see \eqref{tutoria}-\eqref{eq:wwww})}, we also deduce that $J$ is coercive in $\mathcal{H}^s$, that is, \[J(u)\geq C_1\|{[u]}\|^2_{\mathcal{H}^s}-C_2\|{[u]}\|_{\mathcal{H}^s}-C_3,\] for some positive constants $C_1, C_2, C_3$.
As $J$ is continuous, convex and coercive, $(1)$ is an elementary consequence of the classical minimization results. To obtain (2), let us consider $[u],[v] \in \mathcal{H}^s$ and $t\in \mathbb R$. We deduce that \begin{eqnarray}\label{eq:tresonce} &&\lim_{t\rightarrow 0}\frac{J(u+tv)-J(u)}{t} \\\nonumber&&= \frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy-\int_{\Omega}fv \, dx \\\nonumber &&-\int_{\mathbb R^N\setminus \Omega}g_1v\, dx- \int_{\partial \Omega}g_2v\, dS. \end{eqnarray} In fact, to get \eqref{eq:tresonce} we observe that the first term on the r.h.s. of \eqref{eq:deffunctional} can be viewed as a bilinear form and the other terms are linear. From \eqref{eq:tresonce} we obtain the conclusion, that is, \begin{eqnarray}\label{eq:congrrssjajoj} J'(u)[v]&=& \frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy-\int_{\Omega}fv \, dx \\\nonumber &-&\int_{\mathbb R^N\setminus \Omega}g_1v\, dx- \int_{\partial \Omega}g_2v\, dS, \end{eqnarray} for all $[u],[v] \in \mathcal{H}^s$, and therefore a critical point of $J$ is in fact a weak solution (in the sense of Definition \ref{def:weaksolution}) to~\eqref{eq:p} modulo first degree polynomials. \end{proof} We next prove a lemma that will be useful in the proof of Theorem \ref{main}, since it shows that the compatibility condition is necessary for the existence of a solution to~\eqref{eq:p}: \begin{lemma}[Necessary condition]\label{pro:necesscondit} Let us suppose that $(\mathcal A_{(f,g_1,g_2)})$ holds and let $u$ be a weak solution to~\eqref{eq:p}. Then {\eqref{eq:unodiez} is satisfied.
That is,} $$\int_{\Omega} f p \, dx +\int_{\mathbb R^N\setminus \Omega}g_1p \, dx+\int_{\partial \Omega}g_2 p\, dS=0, \qquad \text{for all }\,\, p\in \mathcal P_1(\mathbb R^N).$$ \end{lemma} \begin{proof} It is sufficient to observe that $\mathcal P_1(\mathbb R^N)\subset H^s_{\mathcal N(g_1,g_2)}(\Omega)$. Therefore, using $p\in \mathcal P_1(\mathbb R^N)$ as a test function in \eqref{eq:dbahgduvasc} and taking into account that $\nabla p(x)$ is a constant function, we conclude. \end{proof} \begin{proof}[Proof of Theorem \ref{main}:] By Lemma~\ref{pro:necesscondit} it is clear that if there exists a weak solution $u\in H^s_{\mathcal N(g_1,g_2)}(\Omega)$ to~\eqref{eq:p}, then \eqref{eq:unodiez} holds. Conversely, if \eqref{eq:unodiez} is true, then by Theorem \ref{key_th} there exists a solution $[u]\in \mathcal{H}^{s}$ of \eqref{eq:p}. The solution is unique up to a polynomial $p\in \mathcal P_1 (\mathbb R^N).$ \end{proof} \ The next lemma will be useful in order to prove the right uniqueness result for weak solutions to \eqref{eq:p} and to analyze the spectral properties of the Neumann problem (see Section 5). We notice here that this result is the analogue of \cite[Lemma 3.8]{DRoV} for the Neumann problem associated to the fractional Laplacian operator of order $0<s<1$. \begin{lemma}\label{eq:uniquuuueneessssss} Let us assume that $(\mathcal A_{(f,g_1,g_2)})$ holds and let $u$ be a weak solution to \begin{equation}\nonumber \begin{cases} (-\Delta)^s u = f(x) &\text{in}\,\,\Omega\\ \mathcal N_\sigma^1 u =g_1&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega\\ \mathcal N_\sigma^2 u=g_2&\text{on}\,\,\partial\Omega, \end{cases} \end{equation} with $f,g_1,g_2$ non-negative functions.
Then \[u\in \mathcal P_1(\mathbb R^N).\] \end{lemma} \begin{proof} Taking $\mathcal P_1(\mathbb R^N)\ni p\equiv 1$ {as a test function}, we get \begin{equation}\nonumber \int_{\Omega} f \, dx +\int_{\mathbb R^N\setminus \Omega}g_1 \, dx+\int_{\partial \Omega}g_2 \, dS=0, \end{equation} and thus, since $f,g_1,g_2$ are non-negative, we deduce that $f=0$ a.e. in $\Omega$, $g_1=0$ a.e. in $\mathbb R^N\setminus\Omega$ and $g_2=0$ a.e. (w.r.t. the surface measure $S$ of the boundary) on $\partial \Omega$. Therefore, considering {now} $v=u$ as a test function, we get \[\int_{Q(\Omega)}\frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy=0,\] that is, $\nabla u (x)$ is constant in $\mathbb R^N$. Thus $u\in \mathcal P_1(\mathbb R^N).$ \end{proof} \ Next we will analyze the existence of solutions to the resonant problem with a different approach that, in particular, will be useful to study the spectrum of the Neumann problem $\eqref{eq:p}$ in the next section. This is the approach followed in \cite{DRoV} for $0<s<1$. We start by considering the problem \eqref{nonresonant} with homogeneous Neumann conditions, namely we set $g_1=0$ in $\mathbb R^N\setminus \overline \Omega$ and $g_2=0$ on $\partial \Omega$. We also assume that $f \not\equiv 0$, since otherwise the result holds considering the trivial solution. For brevity, we denote by $H^s_{\mathcal N, \vec 0}(\Omega)$ the space $H^s_{\mathcal N(g_1,g_2)}(\Omega)$ with homogeneous Neumann conditions $g_1=g_2=0$ in the problem \eqref{eq:p}.
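For later reference, note that in this homogeneous setting the weak formulation simplifies: by \eqref{eq:congrrssjajoj} with $g_1=g_2=0$, a function $u\in H^s_{\mathcal N, \vec 0}(\Omega)$ is a weak solution of \eqref{eq:p} precisely when
\[
\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx\, dy=\int_{\Omega}fv \, dx, \qquad \text{for all }\, v\in H^s_{\mathcal N, \vec 0}(\Omega).
\]
It is this identity that the auxiliary problem introduced next perturbs by a zero order term.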
First of all we observe that, by the Riesz theorem, given $h\in L^2(\Omega)$, since the functional $$ v\longrightarrow \int_{\Omega}hv\,dx,\quad v\in {H^s_{\mathcal N, \vec 0}(\Omega)}, $$ is linear and continuous in $ {H^s_{\mathcal N, \vec 0}(\Omega)},$ there exists a unique function $w\in H^s_{\mathcal N, \vec 0}(\Omega)$ such that \begin{equation}\label{eq:ausiliarioproblema} \int_{\Omega}wv\, dx+\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla w(x)-\nabla w(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy =\int_{\Omega}hv\,dx, \end{equation} for all $v\in {H^s_{\mathcal N, \vec 0}(\Omega)}$, with $\mathcal N^1_\sigma w(x)=0$ in $\mathbb R^N\setminus \overline \Omega$ and $\mathcal N^2_\sigma w(x)=0$ on $\partial \Omega$. Therefore we can define the inverse operator \begin{eqnarray}\nonumber K:L^2(\Omega)&\longrightarrow& {H^s_{\mathcal N, \vec 0}(\Omega)}\\\nonumber h&\longrightarrow& w, \end{eqnarray} with $w$ the solution to \eqref{eq:ausiliarioproblema}. We can also define the restriction operator $\overset{\circ}{K}$ as \begin{equation}\label{eq:comaaaavasc} \overset{\circ}{K}h=Kh\big|_{\Omega}, \end{equation} and it readily follows that $\overset{\circ}{K}:L^2(\Omega)\longrightarrow H^s_{\mathcal N, \vec 0}(\Omega)\subseteq L^2(\Omega)$. Notice that we can use the Fredholm alternative, given that $\overset{\circ}{K}$ is compact. Indeed, taking $w$ as a test function in \eqref{eq:ausiliarioproblema}, we have that $\|w\|_{H^s_{\mathcal N, \vec 0}(\Omega)}\leq C \|h\|_{L^2(\Omega)}.$ Therefore, taking a sequence $\{h_n\}_{n\in \mathbb N}$ bounded in $L^2(\Omega)$, we obtain that the sequence $w_n=\overset{\circ}{K}h_n$ is bounded in ${H^s_{\mathcal N, \vec 0}(\Omega)}$ as well, that is, \begin{equation}\label{eq:ajndkkhjkjdanomad} \|w_n\|_{H^s_{\mathcal N, \vec 0}(\Omega)}\leq C, \end{equation} for some constant $C$ that does not depend on $n$.
In particular, from \eqref{eq:canzperunamic} it follows that $$ \int_{\Omega}w_n^2\, dx+\int_{\Omega}\int_{\Omega}\frac{|\nabla w_n(x)-\nabla w_n(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy\leq C. $$ As in the proof of Lemma \ref{equivalencia}, the previous inequality implies that $\|w_n\|^2_{W^{s,2}(\Omega)}<C$ ($s=1+\sigma$), so, since $W^{s,2}(\Omega)$ is compactly embedded in $L^{2}(\Omega)$, we deduce that, up to subsequences, $\{w_n\}$ converges in $L^2(\Omega)$, as wanted. Moreover, the operator $\overset{\circ}{K}$ is self-adjoint. Indeed, taking {$h_1,h_2\in C_c^{\infty}(\Omega)$} and using the weak formulation \eqref{eq:ausiliarioproblema}, for every $v\in {H^s_{\mathcal N, \vec 0}(\Omega)}$ we get that \begin{equation}\label{eq:gen1} \int_{\Omega}v\,Kh_1 dx+\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla Kh_1(x)-\nabla Kh_1(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy=\int_{\Omega}h_1v dx \end{equation} and \begin{equation}\label{eq:gen2} \int_{\Omega}v\,Kh_2 dx+\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla Kh_2(x)-\nabla Kh_2(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy=\int_{\Omega}h_2v\, dx. \end{equation} Using $v=Kh_2$ as a test function in \eqref{eq:gen1} and $v=Kh_1$ as a test function in \eqref{eq:gen2}, by \eqref{eq:comaaaavasc} we deduce \begin{equation}\label{eq:avbbbbbb} \int_{\Omega}h_1\overset{\circ}{K}h_2\,dx=\int_{\Omega}h_2\overset{\circ}{K}h_1 \, dx.\end{equation} Then, by a {density argument}, \eqref{eq:avbbbbbb} holds for $h_1,h_2\in L^2(\Omega)$, and this implies that $\overset{\circ}{K}$ is self-adjoint. To conclude the proof in the homogeneous case, we will show that \begin{equation}\label{palma} Ker\,(Id- \overset{\circ}{K})=\mathcal P_1(\mathbb R^N), \end{equation} that is, the kernel of the operator $Id- \overset{\circ}{K}$ is the space of affine functions given in Definition \ref{def:affinfunctions}.
Let $p\in \mathcal P_1(\mathbb R^N)$. Since $\nabla p$ is constant, it is clear that $$ \int_{\Omega}pv\, dx+\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla p(x)-\nabla p(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy=\int_{\Omega}p v\,dx. $$ Moreover, using the definitions \eqref{eq:neumann1} and \eqref{eq:neumann2}, it is also true that $\mathcal N^1_\sigma p (x)=0$ and $\mathcal N^2_\sigma p (x)=0$. Therefore $Kp (x)=p (x)$ in $\mathbb R^N$ and hence $\overset{\circ}{K}p(x)=p(x)$ in $\Omega$. This shows that $$\mathcal P_1(\mathbb R^N)\subset Ker\,(Id- \overset{\circ}{K}).$$ The reverse inclusion is also true. In fact, taking now $w\in Ker\,(Id- \overset{\circ}{K})\subseteq L^2(\Omega)$, that is, $w=\overset{\circ}{K}w=Kw$ in $\Omega$, by the definition of $K$ we have that \begin{eqnarray}\label{eq:cuatrotrece} &&\int_{\Omega}(Kw)v\, dx+\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla Kw(x)-\nabla Kw(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy\\\nonumber &&=\int_{\Omega}wv\, dx,\quad \forall v\in{H^s_{\mathcal N, \vec 0}(\Omega)}. \end{eqnarray} Then, taking $v=w$ as a test function in \eqref{eq:cuatrotrece}, we get \[{\int_{Q(\Omega)}}\frac{|\nabla w(x)-\nabla w(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy=0, \] which in particular implies that $w$ is an affine function, that is, $w\in \mathcal P_1(\mathbb R^N)$, as wanted. Once we have proved \eqref{palma}, applying the Fredholm alternative we obtain \begin{center} $Im(Id- \overset{\circ}{K})=Ker\,(Id- \overset{\circ}{K})^{\perp}=\mathcal P_1(\mathbb R^N)^{\perp},$ \end{center} that is, $$Im(Id- \overset{\circ}{K})=\Big\{f\in L^2(\Omega)\,:\,(f,p)_{L^2(\Omega)}=0,\,\text{ for all } p\in \mathcal P_1(\mathbb R^N)\Big\},$$ where by $(\cdot,\cdot)_{L^2(\Omega)}$ we denote the classical inner product in $L^2(\Omega)$.
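For the reader's convenience, we recall the classical statement being used here: if $K$ is a compact operator on a Hilbert space $H$, then $Id-K$ has closed range and
\[
Im(Id-K)=Ker\,(Id-K^{*})^{\perp}.
\]
Since $\overset{\circ}{K}$ is compact and self-adjoint on $L^2(\Omega)$, this yields exactly the identity $Im(Id- \overset{\circ}{K})=Ker\,(Id- \overset{\circ}{K})^{\perp}$ used above.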
By Theorem \ref{key_th} we have that \begin{equation}\label{fredh} \mbox{the homogeneous problem \eqref{eq:p} has a solution if and only if $f\in \mathcal{P}_1(\mathbb R^N)^{\perp}$.} \end{equation} We can obtain the same result again by using the previous arguments: Consider $f\in \mathcal P_1(\mathbb R^N)^{\perp}=Im(Id- \overset{\circ}{K})$. Then there exists $h\in L^2(\Omega)$ such that \begin{equation}\label{eq:ffnumero}f=h-\overset{\circ}{K}h.\end{equation} If we set $u={K}h$, then by construction, for every $v\in H^s_{\mathcal N, \vec 0}(\Omega)$, we get \begin{equation}\label{eq:coffeinfaccia} \int_{\Omega}uv\, dx+\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla u(x)-\nabla u(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy=\int_{\Omega}hv\, dx, \end{equation} with $$\mathcal N^1_\sigma u(x)=0,\qquad x\in \mathbb{R}^N\setminus\overline \Omega,\quad\text{and}\quad\mathcal N^2_\sigma u(x)=0, \qquad x\in \partial \Omega.$$ Since $u=Kh=\overset{\circ}Kh$ in $\Omega$, from \eqref{eq:ffnumero} and \eqref{eq:coffeinfaccia} it follows that \begin{equation}\label{eq:solproblomogneo} \begin{cases} (-\Delta)^s u = f(x) &\text{in}\,\,\Omega\\ \mathcal N_\sigma^1 u =0&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega\\ \mathcal N_\sigma^2 u=0 &\text{on}\,\,\partial\Omega, \end{cases}\end{equation} in the weak sense. Thus, $u$ is the desired solution. On the other hand, if $u\in {H^s_{\mathcal N, \vec 0}(\Omega)}$ is a weak solution of \eqref{eq:solproblomogneo}, then we have \[(-\Delta)^su+u=f+u,\quad \text{in}\,\, \Omega,\] that is, by the definition of $K$, one has $u=K\big(u+f\big)$ in $\mathbb R^N$ and then $u=\overset{\circ}K\big(u+f\big )$ in $\Omega$.
We deduce that \[(Id-\overset{\circ}K)\big (u+f\big )=f, \quad \text{in}\,\, \Omega.\] Then $f$ belongs to $Im(Id- \overset{\circ}{K})$ and, therefore, it is such that $(f,p)_{L^2(\Omega)}=0$ for all functions $p\in \mathcal P_1(\mathbb R^N)$, as wanted. This says that the nonhomogeneous case of problem \eqref{eq:p} can be solved if we have an additional condition on the data, that is, if there exists $\psi$ sufficiently smooth such that \[\mathcal{N}^1_\sigma(\psi)=g_1\,\, \text{in}\,\,\mathbb{R}^N\setminus\overline\Omega\qquad \text{and}\qquad \mathcal{N}^2_\sigma(\psi)=g_2 \,\, \text{on}\,\,\partial \Omega.\] If this is the case, then for admissible data $f$, $g_1$ and $g_2$ we have that \begin{equation}\nonumber \int_{\Omega} fp \, dx +\int_{\mathbb R^N\setminus \Omega}\mathcal{N}^1_\sigma(\psi)p\, dx+\int_{\partial \Omega}\mathcal{N}^2_\sigma(\psi)p\, dS=0, \qquad \text{for all }\,\, p\in \mathcal P_1(\mathbb R^N). \end{equation} By Proposition \ref{pro:cuatrodiec} we obtain \begin{equation}\label{eq:eccomiiii} \int_{\Omega} \big(f -(-\Delta)^s\psi\big)p\,dx=0, \qquad \text{for all }\,\, p\in \mathcal P_1(\mathbb R^N). \end{equation} Thus, by \eqref{fredh} and \eqref{eq:eccomiiii}, there exists a weak solution $\hat u$ to \begin{equation}\nonumber \begin{cases} (-\Delta)^s \hat u = \hat f(x) &\text{in}\,\,\Omega\\ \mathcal N_\sigma^1 \hat u =0&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega\\ \mathcal N_\sigma^2 \hat u=0 &\text{on}\,\,\partial\Omega, \end{cases}\end{equation} where $$\hat f= f -(-\Delta)^s\psi\in Im(Id- \overset{\circ}{K}).$$ Therefore, defining $u:=\hat u +\psi$, we get that $u\in H^s_{\mathcal N(g_1,g_2)}(\Omega)$ is a weak solution to \begin{equation}\nonumber \begin{cases} (-\Delta)^s u = f(x) &\text{in}\,\,\Omega\\ \mathcal N_\sigma^1 u =g_1&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega\\
\mathcal N_\sigma^2 u=g_2&\text{on}\,\,\partial\Omega. \end{cases}\end{equation} In both cases, homogeneous and non-homogeneous, the uniqueness up to a function $p\in \mathcal P_1(\mathbb R^N)$ follows easily by contradiction using Lemma~\ref{eq:uniquuuueneessssss}. \section{Spectral theory} We will now develop the spectral theory associated to problem $\eqref{eq:p}$, using some general results established for compact operators. More precisely, the complete description of the {structure} of the eigenvalues and eigenfunctions is given in the following \begin{theorem}\label{spectral} Let $\Omega\subset\mathbb{R}^{N}$ be a regular bounded domain. Then there exist a nondecreasing sequence $\{\lambda_i\}$, with $\lambda_i\geq 0$, and a sequence of functions $u_i:\mathbb{R}^{N}\to\mathbb{R}$ such that $$ (\mathcal{P}_i)=\begin{cases}\label{ii} (-\Delta)^s u_i = \lambda_i\, u_i &\text{in}\,\,\Omega\\ \mathcal N_\sigma^1 u_i =0&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega\\ \mathcal N_\sigma^2 u_i=0 &\text{on}\,\,\partial\Omega. \end{cases} $$ Moreover, the functions ${u_i}\big|_{\Omega}$ form a complete orthogonal system in the space $L^{2}(\Omega)$.
\end{theorem} \begin{proof} First of all, we define the set \begin{equation}\label{eq:trump} L^{2,0}(\Omega):=\{u\in L^2(\Omega):\, \int_{\Omega}{u\, p\, dx}=0,\, \forall p\in\mathcal{P}_{1}(\mathbb{R}^{N})\},\end{equation} {which contains the set $H^{s,0}_{\mathcal{N},\vec 0}(\Omega)$ defined in \eqref{noviembre_set} for $g_1=g_2=0.$} Let us now consider the linear operator $T: L^{2,0}(\Omega)\to H^{s,0}_{\mathcal{N},\vec 0}(\Omega)$ such that $T(f):=u$, where $u$ is the (unique) solution of the problem $$\begin{cases} (-\Delta)^s u= f&\text{in}\,\,\Omega\\ \mathcal N_\sigma^1 u =0&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega\\ \mathcal N_\sigma^2 u=0 &\text{on}\,\,\partial\Omega, \end{cases}$$ given in Theorem \ref{main}, recalling that $f$ satisfies \eqref{eq:unodiez}. We observe that the uniqueness comes from the fact that $L^{2,0}(\Omega)$ is a closed subspace of $L^{2}(\Omega)$ and $L^{2,0}(\Omega)=\mathcal{P}_{1}(\mathbb{R}^{N})^{\perp}$. As in the proof of Theorem \ref{main}, we define the restriction operator $\overset{\circ}{T}$ as \begin{equation}\label{eq:colombia} \overset{\circ}{T}f=Tf\big|_{\Omega} \end{equation} and therefore $$\overset{\circ}T:L^{2,0}(\Omega) \to L^{2,0}(\Omega).$$ With this notation, {it is clear that a function $u_i$ is a solution of problem $(\mathcal{P}_i)$ if and only if \begin{equation}\label{autovaloresT} \mbox{$u_i=T(\lambda_i u_i)=\lambda_i T(u_i)$,} \end{equation} therefore} it is possible to transform the question of the solvability of $(\mathcal{P}_i)$ into the investigation of the eigenvalues and eigenfunctions of the operator $\overset{\circ}T$. In order to use the well-known theory that establishes the spectral properties of such operators, we will prove that $\overset{\circ} T$ is compact, self-adjoint and positive in the Hilbert space $L^{2,0}(\Omega)$.
Indeed, using the weak formulation \eqref{eq:dbahgduvasc}, for every $f_1,f_2\in L^{2,0}(\Omega)$ and $v, \varphi\in H^{s}_{\mathcal{N},\vec 0}(\Omega)$ it follows that \begin{eqnarray}\label{eq:dbahgduvasco} &&\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{({\nabla Tf_1}(x)-\nabla Tf_1(y))\cdot(\nabla v(x)-\nabla v(y))}{|x-y|^{N+2\sigma}}\, dx dy=\int_{\Omega}f_1v\,dx,\\\nonumber &&\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla Tf_2(x)-\nabla Tf_2(y))\cdot(\nabla \varphi(x)-\nabla \varphi(y))}{|x-y|^{N+2\sigma}}\, dx dy=\int_{\Omega}f_2 \varphi\,dx. \end{eqnarray} Thus, since $\overset{\circ}{T}f=Tf$ in $\Omega$, arguing as in equations \eqref{eq:gen1}-\eqref{eq:avbbbbbb}, we conclude that $\overset{\circ}{T}$ is self-adjoint in $L^{2,0}(\Omega)$. Further, using again \eqref{eq:dbahgduvasco}, it follows that $$(\overset{\circ}{T}f, f)_{L^2(\Omega)}=({T}f, f)_{L^2(\Omega)}=\frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{|\nabla Tf(x)-\nabla Tf(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy\geq 0,$$ for every $f\in L^{2,0}(\Omega)$. {Moreover, if $(\overset{\circ}{T}f, f)_{L^2(\Omega)}=0$ then $f\equiv 0$. Indeed, if $$(\overset{\circ}{T}f, f)_{L^2(\Omega)}=\int_{Q(\Omega)}\frac{|\nabla Tf(x)-\nabla Tf(y)|^2}{|x-y|^{N+2\sigma}}\, dx dy=0,$$ then ${T}f\in\mathcal{P}_{1}(\mathbb{R}^{N})$, so that by \eqref{eq:dbahgduvasco} we deduce that $f\equiv 0$, as wanted.} That is, the operator $\overset{\circ}{T}$ is positive in $L^{2,0}(\Omega)$. Finally, we will show that $\overset{\circ}{T}$ is compact in $L^{2,0}(\Omega)$. In fact, from \eqref{eq:dbahgduvasco}, with $Tf=u$ and $v=u$, we get that \begin{equation}\label{cero} {\left(\int_{\Omega}\int_{\Omega}\frac{|\nabla u(x)-\nabla u(y)|^{2}}{|x-y|^{N+2\sigma}}\, dx dy\, \right)}\leq \int_{Q(\Omega)}\frac{|\nabla u(x)-\nabla u(y)|^{2}}{|x-y|^{N+2\sigma}}\, dx dy\leq \|f\|_{L^{2}(\Omega)}\|u\|_{L^{2}(\Omega)}.
\end{equation} {Since the Poincar\'e inequality given in \eqref{poincare} is clearly satisfied by every $u\in H^{s,0}_{\mathcal{N},\vec 0}(\Omega)$}, from \eqref{cero} it follows that \begin{equation}\label{laotra} \left(\int_{\Omega}\int_\Omega\frac{|\nabla u(x)-\nabla u(y)|^{2}}{|x-y|^{N+2\sigma}}\, dx dy\right)^{\frac 12} \leq C\|f\|_{L^{2}(\Omega)}. \end{equation} Let us now consider a bounded sequence $\{f_n\}$ in $L^{2,0}(\Omega)$. By \eqref{poincare} and \eqref{laotra}, repeating the arguments done to prove {\eqref{orden_noviembre} in Lemma \ref{equivalencia}, we infer that} $\{u_n=\overset{\circ} Tf_n\}$ is also bounded in the space $W^{s,2}(\Omega)$ ($s=1+\sigma$). Therefore, since, in particular, the inclusion $W^{s,2}(\Omega)\to L^{2}(\Omega)$ is compact, up to a subsequence, $u_n\to u$ in $L^{2}(\Omega)$. Once we have proved that $\overset{\circ}{T}$ is a compact, self-adjoint and positive operator {in the separable space $L^{2,0}(\Omega)$}, then (see for instance \cite[Theorem 3.8]{re}) the operator $\overset{\circ}{T}$ has a countable set of eigenvalues $\{\mu_{i}\}_{i\geq 2}$, all of them positive. In particular, $$\mu_2\geq \mu_{3}\geq \ldots >0,\, \mbox{satisfying $\lim_{i\to\infty}{\mu_{i}}=0$.}$$ To each eigenvalue $\mu_i$ there corresponds a finite number of linearly independent eigenfunctions, and the resulting family $\{{u}_{i}\}_{i\geq 2}$ forms a complete orthonormal system in $L^{2,0}(\Omega)$. Moreover, {as we noticed in \eqref{autovaloresT},} \begin{eqnarray}\label{eq:lavorohopromesso} \frac{c_{N,\sigma}}{2} \int_{Q(\Omega)}\frac{(\nabla u_i(x)-\nabla u_i(y))\cdot(\nabla \varphi (x)-\nabla \varphi(y))}{|x-y|^{N+2\sigma}}\, dx dy&=&\frac{1}{\mu_i}\int_{\Omega}u_i \varphi\,dx\\\nonumber &=:&\lambda_i\int_{\Omega}u_i \varphi\,dx.
\end{eqnarray} Thus, by \eqref{eq:lavorohopromesso}, we finally infer that $$\{\lambda_i:=1/\mu_i,\, u_{i}\}_{i\geq 2}$$ form part of the family of eigenvalues and eigenfunctions of $(\mathcal{P}_i)$ that we are looking for. To complete this family we observe that, by Lemma \ref{eq:uniquuuueneessssss}, $\lambda_{1}=0$ is an eigenvalue with eigenfunctions \begin{equation}\label{eq:flightttt} \{u_{1,0}(x)=1,\, u_{1,1}(x)=x_1,\, \ldots, u_{1,N}(x)=x_N\}.\end{equation} Therefore, up to a reordering, we have obtained the sequence of eigenvalues $$0=\lambda_1<\lambda_2\leq\ldots,\quad \lim_{i\to\infty}{\lambda_i}=\infty,$$ and its corresponding eigenfunctions $\{\{u_{1,j}\}^{N}_{j=0},\, u_{i}\}_{i\geq 2}$, which form a complete orthogonal system in $L^{2}(\Omega)$. Indeed, as we have seen above, the eigenfunctions $\{{u}_{i}\}_{i\geq 2}$ are orthonormal w.r.t. the $L^2$-scalar product and, moreover, each ${u}_i$ is orthogonal to the subspace generated by the eigenfunctions~\eqref{eq:flightttt}, since the system $\{{u}_i\}_{i\geq 2}$ belongs to $L^{2,0}(\Omega)$. Finally, to show that the orthogonal system is maximal in $L^2(\Omega)$, let us consider $h\in L^{2}(\Omega)$ and define $$\widetilde{h}=h-h_{1},$$ where $h_1$ is the orthogonal projection (w.r.t. the $L^2$-scalar product) of $h$ onto the subspace $\mathcal{P}_{1}(\Omega)$. Then $\widetilde{h}\in L^{2,0}(\Omega)$ and $h_1(x)=b_0+\sum_{j=1}^{N}b_jx_j\in\mathcal{P}_{1}(\Omega)$, for some $b_j\in\mathbb{R},\, j=0,\ldots, N$. Since $\widetilde{h}\in L^{2,0}(\Omega)$ and $\{u_i\}_{i\geq 2}$ forms a complete system in $L^{2,0}(\Omega)$, we obtain that \begin{equation}\label{jestrella} \lim_{k\to \infty}\left\|\widetilde{h}-\sum_{i=2}^{k} a_iu_i\right\|_{L^2(\Omega)}=0, \end{equation} for some real numbers $\{a_i\}$.
Moreover, $$\widetilde{h}= h-\sum_{j=0}^{N}a_{1,j}\, u_{1,j}.$$ Thus, by \eqref{jestrella}, it follows that $$\lim_{k\to \infty}\left\|{h}-\sum_{j=0}^{N}a_{1,j}u_{1,j}-\sum_{i=2}^{k} a_iu_i\right\|_{L^2(\Omega)}=0,$$ as wanted. \end{proof} \section{Further results and problems} In this final section we describe, in an informal way, some further results and interesting open problems related to what we have seen in the previous sections. \subsection{The Neumann problem for $(-\Delta)^su $ in the case $s>2$} In this subsection, using several integrations by parts (i.b.p., in short), we highlight the generalization to the higher-order case $s>2$ of Proposition~\ref{pro:intbyparts2}, which was the basis for defining the Neumann problem. We write $s=m+\sigma$ and consider the case $m\geq 2$, even. {The case $m\geq3$, odd, can be treated in the same way as in the proof of Proposition \ref{pro:intbyparts2}. Therefore we skip this case.} \ {\bf Case: $m\geq 2$, even.} Let us define the natural Neumann conditions that come from the following nonlocal higher-order integration by parts formula. For suitable $v\in \mathcal{S}(\mathbb{R}^{N})$, we define \begin{equation}\label{eq:allanimaa} \mathcal N^{1,\,{(i-1)}}_\sigma v(x){:=\Delta^{i-1}\big((-\Delta)^{\sigma}_{\mathbb{R}^N\setminus \Omega}(\Delta^{\frac m2} v)}\big)(x), \qquad x\in \partial \Omega \quad \text{and}\quad i=1,2,\ldots, m/2, \end{equation} and \begin{equation}\label{eq:allanimaa1} \mathcal N^{2,\,{(i-1)}}_\sigma v(x){:=\frac{\partial}{\partial\nu}\Delta^{i-1}\big((-\Delta)^{\sigma}_{\mathbb{R}^N\setminus \Omega}(\Delta^{\frac m2} v)}\big)(x), \qquad x\in \partial \Omega \quad \text{and}\quad i=1,2,\ldots, m/2, \end{equation} with $\nu$ denoting the unit outer normal to $\partial \Omega$.
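For instance, in the lowest even case $m=2$ (that is, $s=2+\sigma$), the only admissible index is $i=1$, and \eqref{eq:allanimaa}-\eqref{eq:allanimaa1} reduce to
\[
\mathcal N^{1,\,(0)}_\sigma v(x)=(-\Delta)^{\sigma}_{\mathbb{R}^N\setminus \Omega}(\Delta v)(x)\qquad \text{and}\qquad \mathcal N^{2,\,(0)}_\sigma v(x)=\frac{\partial}{\partial\nu}\big((-\Delta)^{\sigma}_{\mathbb{R}^N\setminus \Omega}(\Delta v)\big)(x), \qquad x\in \partial \Omega.
\]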
{With these definitions, {\em for suitable} $u,v\in \mathcal{S}(\mathbb{R}^{N})$ (in particular with $u$ satisfying hypotheses similar to \eqref{condition3}), the following can be shown:} \begin{eqnarray}\nonumber &&\frac {c_{N,\sigma}}{2}\int_{Q(\Omega)}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))(\Delta^{\frac m2} v(x)-\Delta^{\frac m2} v(y))}{|x-y|^{N+2\sigma}}\,dx\,dy\\\nonumber &&=\int_{\Omega}v\,(-\Delta)^su\, dx +\int_{\mathbb{R}^N\setminus \Omega} \Delta^{\frac m2}\mathcal N_\sigma(\Delta^{\frac m2}u(x))\, v(x)\, dx\\\nonumber &&+\sum_{i=1}^{m/2}\int_{\partial \Omega}\frac{\partial}{\partial\nu}\left(\Delta^{\frac{m-2i}{2}}v(x)\right)\, \mathcal N^{1,\,{(i-1)}}_\sigma u(x)\, dS-\sum_{i=1}^{m/2}\int_{\partial \Omega}\mathcal N^{2,\,{(i-1)}}_\sigma u(x)\,\Delta^{\frac{m-2i}{2}}v(x)\, dS. \end{eqnarray} {In fact, roughly speaking,} denoting by $\nu$ the unit outer normal field to the boundary $\partial \Omega$ and integrating by parts twice, we obtain \begin{eqnarray}\label{eq:cicoooo} &&\frac {c_{N,\sigma}}{2}\int_{Q(\Omega)}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))(\Delta^{\frac m2} v(x)-\Delta^{\frac m2} v(y))}{|x-y|^{N+2\sigma}}\,dx\,dy\\\nonumber &&=c_{N,\sigma}\int_{\Omega}\Delta^{\frac m2} v(x)\int_{\mathbb R^N}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\, dx \\\nonumber &&+ c_{N,\sigma}\int_{\mathcal C \Omega}\Delta^{\frac m2} v(x)\int_{\Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}}\, dy\, dx\\\nonumber &&\overset{\text{1st i.b.p.}}{=} -\int_{\Omega}\nabla (\Delta^{\frac{m-2}{2}}v(x))\cdot \nabla \left (c_{N,\sigma}\int_{\mathbb R^N}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dx\\\nonumber &&-\int_{\mathbb{R}^N\setminus \Omega}\nabla (\Delta^{\frac{m-2}{2}}v(x))\cdot \nabla \left
(c_{N,\sigma}\int_{\Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dx \\\nonumber &&+\int_{\partial \Omega}\frac{\partial}{\partial\nu}\left(\Delta^{\frac{m-2}{2}}v(x)\right)\left (c_{N,\sigma}\int_{\mathbb{R}^N\setminus \Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dS\\\nonumber &&\overset{\text{2nd i.b.p.}}{=}\int_{\Omega} \Delta^{\frac{m-2}{2}}v(x)\,\Delta \left (c_{N,\sigma}\int_{\mathbb R^N}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dx \\\nonumber &&+\int_{\mathbb{R}^N\setminus \Omega}\Delta^{\frac{m-2}{2}}v(x)\,\Delta \left (c_{N,\sigma}\int_{\Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dx \\\nonumber &&+\int_{\partial \Omega}\frac{\partial}{\partial\nu}\left(\Delta^{\frac{m-2}{2}}v(x)\right)\left (c_{N,\sigma}\int_{\mathbb{R}^N\setminus \Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dS\\\nonumber &&- \int_{\partial \Omega}\Delta^{\frac{m-2}{2}}v(x)\frac{\partial}{\partial \nu}\left (c_{N,\sigma}\int_{\mathbb{R}^N\setminus \Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dS.
\end{eqnarray} Then, if we continue to integrate by parts as we did in \eqref{eq:cicoooo}, after $m/2$ steps we get \begin{eqnarray}\nonumber &&\frac {c_{N,\sigma}}{2}\int_{Q(\Omega)}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))(\Delta^{\frac m2} v(x)-\Delta^{\frac m2} v(y))}{|x-y|^{N+2\sigma}}\,dx\,dy\\\nonumber &&\vdots\\\nonumber &&\overset{\text{$m$th i.b.p.}}{=}\int_{\Omega} v(x)\Delta^{\frac m2} \left (c_{N,\sigma}\int_{\mathbb R^N}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dx\\\nonumber &&+\int_{\mathbb{R}^N\setminus \Omega} v(x)\Delta^{\frac m2} \left (c_{N,\sigma}\int_{\Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dx\\\nonumber &&+ \sum_{i=1}^{m/2}\int_{\partial \Omega}\frac{\partial}{\partial\nu}\left(\Delta^{\frac{m-2i}{2}}v(x)\right)\, \Delta^{i-1}\left (c_{N,\sigma}\int_{\mathbb{R}^N\setminus \Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dS\\\nonumber &&-\sum_{i=1}^{m/2}\int_{\partial \Omega}\Delta^{\frac{m-2i}{2}}v(x)\frac{\partial}{\partial \nu}\Delta^{i-1}\left (c_{N,\sigma}\int_{\mathbb{R}^N\setminus \Omega}\frac{(\Delta^{\frac m2} u(x)-\Delta^{\frac m2} u(y))}{|x-y|^{N+2\sigma}} \, dy\right)\, dS, \end{eqnarray} and then we obtain the conclusion using Proposition \ref{pro:equivfraclapl} and equations \eqref{eq:neumannenrico}, \eqref{eq:allanimaa} and \eqref{eq:allanimaa1}. \subsection{A semilinear Neumann problem and some open questions} Consider the problem \begin{equation}\label{semilinearproblem} \left\{ \begin{array}{rcll} d(-\Delta)^s u+u&=&|u|^{p-1}u \quad & x\in \Omega\\ \mathcal{N}_s(u)&=&0\quad & x\in \mathbb{R}^{{N}}\setminus \Omega, \end{array} \right.
\end{equation} where $0<s<1$, $\Omega\subset\mathbb{R}^{{N}}$ is a smooth bounded domain and the diffusion coefficient $d$ is positive. For the classical Laplacian, this problem was analyzed in depth by Lin, Ni and Takagi in their classical paper \cite{LNT}, where the reader can also find the motivation for this model in the local case. First of all, we notice that $v_0 =0$ and $v=\pm 1$ are the only constant solutions of problem \eqref{semilinearproblem}. Therefore we will be interested in finding nontrivial solutions. It is clear that, if $1<p< 2^*_s$, this is equivalent to looking for non-constant critical points of the energy functional \begin{equation}\label{energys=1} J_d(u)=\frac {{d}}{2}\iint_{Q(\Omega)}\displaystyle\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}dxdy+\frac 12\int_\Omega u^2dx-\frac 1{p+1}\int_\Omega |u|^{p+1} dx, \end{equation} where $$2^*_s=\left\{ \begin{array}{lll} &\displaystyle\frac{N+2s}{N-2s},\quad &N>2s\\ &+\infty,\quad &N\le 2s, \end{array} \right. $$ is the critical fractional exponent. Notice that $J_d$ is well defined in \begin{equation}\label{spacebase1} {H}^s(\Omega)=\left\{\, u:\mathbb{R}^N\rightarrow\mathbb{R}\,|\, u \hbox{ measurable, } \,\, { \iint_{Q(\Omega)} \displaystyle\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}dxdy}<+\infty\right\}. \end{equation} The main result is the following. \begin{theorem}\label{existence} There exists a nontrivial nonconstant solution, $u_d$, to problem \eqref{semilinearproblem}, provided $d$ is sufficiently small. \end{theorem} \begin{proof} Following closely the arguments in \cite{LNT}, we can use the \textit{Mountain-Pass Lemma} of Ambrosetti-Rabinowitz \cite{AR} in order to find critical points of $J_d$.
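Before doing so, observe that, since the Gagliardo seminorm of a constant function vanishes, \eqref{energys=1} evaluated at the constant solutions gives
\[
J_d(1)=J_d(-1)=\frac 12\int_\Omega dx-\frac 1{p+1}\int_\Omega dx=\left(\frac 12-\frac{1}{p+1}\right)|\Omega|,
\]
a value which is independent of $d$.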
Indeed, it is easy to check that the geometry of the \textit{Mountain-Pass Lemma} holds for all $d>0$, that is, there exists $\rho>0$ such that $J_d(u)>0$ for all $0<\|u\|\le \rho$, $J_d(u)\ge\beta>0$ for $\|u\|=\rho$, and there exists $v$ with $\|v\|>\rho$ such that $J_d({v})<0$. Moreover, any Palais-Smale sequence is bounded and, by the Rellich compactness theorem, admits a convergent subsequence. Let us assume now that \begin{equation}\label{emma} \mbox{$\exists$ $\phi_{d}\in H^s(\Omega)$, and $t_1,\widetilde{C}>0$ such that $J_d(t_1\phi_d)=0$ and $J_d(t\phi_d)<\widetilde{C}d^{\frac{N}{2s}}$, $0\leq t\leq t_1$.} \end{equation} Then if $\Gamma=\{\gamma\in \mathcal{C}\left([0,1],H^{s}(\Omega)\right)\,|\, \gamma(0)=0, \gamma(1)={t_1\phi_d}\}$, it follows that the minimax value $$c_d=\inf\limits_{\gamma\in \Gamma}\max\limits_{[0,1]}J_d(\gamma(t)){\geq\beta>0=J_d(0)},$$ is a critical value of $J_d$, {that is, there exists a solution $\widetilde{u}$ such that $J_{d}(\widetilde{u})=c_{d}$. Moreover, since, by \eqref{emma}, taking $d$ small enough, $$J_d(\widetilde{u})=c_d\leq \max_{[0,t_1]} J_d(t\phi_d)\leq \widetilde{C}d^{N/2s}< \left(\frac 12-\frac{1}{p+1}\right)|\Omega|=J_d(1)=J_d(-1),$$ we conclude that in the set $J_d^{-1}(c_d)$ there is some nonconstant critical point.} To prove \eqref{emma} let us consider $$\phi(x)=\begin{cases} (1-|x|), \quad |x|<1\\ 0, \quad |x|\ge 1, \end{cases} $$ and define $\phi_d (x)=d^{-\frac {N}{2s}}\phi\left(\frac{x}{d^{\frac {1}{2s}}}\right)$. Assume without loss of generality that $0\in\Omega$ and take $d$ small enough in such a way that the ball of radius $d^{\frac {1}{2s}}$ is contained in $\Omega$.
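For the reader's convenience, the $L^q$-norm identity below follows from the change of variables $x=d^{\frac{1}{2s}}z$:
\[
\|\phi_d\|_q^q=d^{-\frac{Nq}{2s}}\int_{\mathbb{R}^N}\left|\phi\left(\frac{x}{d^{\frac{1}{2s}}}\right)\right|^q dx
=d^{\frac{N}{2s}(1-q)}\int_{\mathbb{R}^N}|\phi(z)|^q\, dz.
\]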
By a direct calculation, one can check that $${\frac 12\iint_{Q(B_{d^{\frac {1}{2s}}})}\displaystyle\frac{|\phi_d(x)-\phi_d(y)|^2}{|x-y|^{N+2s}}dxdy\le C(N) d^{-\left(1+\frac N{2s}\right)},} \quad \|\phi_d\|_q^q= C_q(N) d^{\frac N{2s}(1-q)}. $$ Therefore, for constants depending only on the dimension, $$g(t):=J_d(t\phi_d)=c_1\displaystyle\frac 12 d^{-\frac N{2s}}t^2-c_2\displaystyle\frac 1{p+1}d^{-\frac N{2s}p}t^{p+1}.$$ It is easy to check that there exists $t_1>0$ such that $g(t_1)=0$. Moreover, $g$ verifies that its maximum for $t>0$ is attained at $t_0=(\frac{c_1}{c_2})^{\frac1{p-1}} d^{\frac N{2s}}<t_1$, {that is} $$g(t)=J_d(t\phi_d)\le J_d(t_0\phi_d)\le C d^{\frac N{2s}},$$ {and \eqref{emma} follows as wanted.} \end{proof} \begin{remark} The higher order Neumann semilinear problem can be studied in a similar way. To be precise, we consider the case $s=1+\sigma$, $0<\sigma<1$, and leave to the reader the details for the case $s\ge 2$. Consider the problem \begin{equation}\label{eq:p}\tag{$\mathcal P$} \begin{cases} d(-\Delta)^s u+u = |u|^{p-1}u &\text{in}\,\,\Omega, \quad 1<s<2\\ \mathcal N_\sigma^1 u =0&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega=: \mathcal C\overline\Omega\\ \mathcal N_\sigma^2 u=0 &\text{on}\,\,\partial\Omega, \end{cases}\end{equation} for $d>0$, $s=1+\sigma$, $\sigma>0$ and $1< p< 2^*_s$. The energy functional now is \begin{equation}\label{energys>1} J_d(u)=\frac d2\iint_{Q(\Omega)}\displaystyle\frac{|\nabla u(x)-\nabla u(y)|^2}{|x-y|^{N+2\sigma}}dxdy+\frac 12\int_\Omega u^2dx-\frac 1{p+1}\int_\Omega |u|^{p+1} dx.
\end{equation} Observe that, as in the case $s<1$, $J_d$ verifies the geometrical and compactness hypotheses needed to apply the \textit{Mountain-Pass Theorem}, so taking, for instance, $\phi(x)=(1-|x|^2)^2_+$, it follows that $$J_d(\phi_d)\le C d^{\frac{N}{2s}}.$$ Therefore, a similar argument as above shows that, for $d$ small enough, the mountain-pass critical point is nonconstant. \end{remark} \subsubsection{Some open questions} Among others, the following questions seem to be open and interesting to solve. \begin{enumerate} \item Asymptotic behavior of the nonconstant solutions when $d\to 0$, $s>0$. In the local case $s=1$, this problem was studied for positive solutions in the pioneering paper \cite{LNT}, where a concentration phenomenon appears at the point of maximum curvature of $\partial \Omega$. As far as we know, this result should be new in the local case $s=2$ or for higher integer order. \item Study of the critical case $p=2^*_s$ and the behavior of the nonconstant solutions. The local case, $s=1$, was studied in \cite{APY}. \end{enumerate} \subsection{A Neumann condition for the $p$-Laplacian operator $(-\Delta)_p^s$ in the standard nonlocal case $\bf{0<s<1}$}\label{sec:plaplaciano} Using the variational approach for the higher order operator developed in Section \ref{sec:variational}, we define a Neumann problem for the nonlinear nonlocal $p$-Laplacian operator that, to the best of our knowledge, has not been studied up to now. Throughout this section, let us suppose $p\in (1,\infty)$, $s\in (0,1)$ and let $\Omega\subset {\mathbb R}^N$ be a smooth bounded domain with $N>sp$.
For smooth functions, we define the fractional $p$-Laplacian operator $(-\Delta)_p^s$ {(see for instance \cite{p1, p2, p3} and the references therein)}, as \begin{equation}\label{eq:plaplace} (- \Delta)^s_p\, u(x):= \lim_{\varepsilon \searrow 0} \int_{\mathbb{R}^N \setminus B_\varepsilon(x)} \frac{|u(x) - u(y)|^{p-2}\, (u(x) - u(y))}{|x - y|^{N+s\,p}}\, dy, \qquad x \in \mathbb{R}^N. \end{equation} Moreover, for $v\in\mathcal{S}(\mathbb{R}^{N})$, define \begin{equation}\label{eq:neumannenricop} \mathcal N_{s,p} v(x):=\int_{\Omega}\frac{|v(x)-v(y)|^{p-2}(v(x)-v(y))}{|x-y|^{N+sp}}\,dy, \qquad x\in \mathbb{R}^N\setminus\overline \Omega, \end{equation} namely, the nonlocal Neumann condition in the case of the nonlocal $p$-Laplace operator. Equation \eqref{eq:neumannenricop} represents the nonlocal counterpart of the local Neumann condition $|\nabla u|^{p-2}\partial_\nu u$, i.e.\ the normal component of the flux across the boundary. Following the proofs of Propositions \ref{pro:intbyparts1} and \ref{pro:intbyparts2}, and using definitions \eqref{eq:neumannenricop} and \eqref{eq:plaplace}, one can prove the following. \begin{theorem}\label{eq:nnmiservealtro} Let $u,v\in \mathcal{S}(\mathbb{R}^{N})$. Then \begin{equation}\nonumber \int_{\Omega}(-\Delta)_p^su\,dx=-\int_{\mathbb R^N\setminus \Omega} \mathcal N_{s,p} u \,dx \end{equation} and \begin{eqnarray}\nonumber &&\frac{1}{2}\int_{Q(\Omega)}\frac{|u(x)-u(y)|^{p-2}(u(x)- u(y))( v(x)-v(y))}{|x-y|^{N+sp}}\, dx\,dy\\\nonumber &&=\int_{\Omega}v\, (-\Delta)_p^su\, dx +\int_{\mathbb{R}^N\setminus \Omega} v\, \mathcal {N}_{s,p} u \, dx. \end{eqnarray} \end{theorem} Theorem \ref{eq:nnmiservealtro} suggests, as in Section \ref{se:weakvaria}, which should be the correct weak formulation of the nonlocal $p$-Neumann problem in the case $0<s<1$, i.e.
it gives the natural candidate for the weak form of the $p$-Laplace operator \eqref{eq:plaplace}. Now we can give the following. \begin{definition} Let $g\in L^1(\mathbb R^N\setminus\Omega)$; we set $$ W^{s,p}_{\mathcal N(g)}(\Omega)=\Big\{u:\mathbb R^N\rightarrow {\mathbb R}\,:\, \, u {\mbox{ measurable }}\quad \text{and}\quad \|u\|_{W^{s,p}_{\mathcal N(g)}{(\Omega)}}< +\infty\Big\}, $$ where \begin{equation}\label{eq:pnorm} \|u\|_{W^{s,p}_{\mathcal N(g)}(\Omega)}=\left (\int_\Omega |u|^p\,dx+ \iint_{Q(\Omega)}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}\, dx dy+ \int_{\mathcal C \Omega}|g||u|^p\, dx\right)^{\frac 1p}. \end{equation} \end{definition} Following the ideas of \cite[Proposition 3.1]{DRoV}, one can prove the next result. \begin{proposition}\label{pro:reflex} $W^{s,p}_{\mathcal N(g)}(\Omega)$ is a reflexive Banach space. \end{proposition} \begin{proof} We sketch the proof. We can readily check that \eqref{eq:pnorm} is a norm and, arguing as in \cite[Proposition 3.1]{DRoV}, that $W^{s,p}_{\mathcal N(g)}(\Omega)$ is a Banach space. To prove that it is reflexive, let us define the space $\mathcal A= L^p(Q(\Omega), dxdy)\times L^p(\Omega,dx) \times L^p(\mathbb{R}^N\setminus\Omega,|g|dx).$ By standard results, this product space is reflexive. Then the operator $T:W^{s,p}_{\mathcal N(g)}(\Omega) \rightarrow \mathcal A$ defined as \[T u =\left[\frac{|u(x)-u(y)|}{|x-y|^{\frac Np+s}}\chi_{Q(\Omega)}(x,y),\,u \chi_{\Omega}, \, u \chi_{\mathbb R^N\setminus\Omega}\right],\] where $\chi_{\mathcal S}(\cdot)$ denotes the characteristic function of a measurable set $\mathcal S$, is an isometry from $W^{s,p}_{\mathcal N(g)}(\Omega)$ into $\mathcal A$ (the space $\mathcal A$ being equipped with its natural product norm).
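Indeed, raising the component norms to the power $p$ and summing, the three components of $Tu$ reproduce exactly the three terms of \eqref{eq:pnorm}:
\[
\|Tu\|^p_{\mathcal A}=\iint_{Q(\Omega)}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}\,dx\,dy+\int_{\Omega}|u|^p\,dx+\int_{\mathbb{R}^N\setminus\Omega}|g||u|^p\,dx=\|u\|^p_{W^{s,p}_{\mathcal N(g)}(\Omega)}.
\]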
Thus, since $W^{s,p}_{\mathcal N(g)}(\Omega)$ is a Banach space, $T(W^{s,p}_{\mathcal N(g)}(\Omega))$ is a closed subspace of $\mathcal A$; hence $T(W^{s,p}_{\mathcal N(g)}(\Omega))$ is reflexive and therefore, using the fact that $T$ is an isometry, $W^{s,p}_{\mathcal N(g)}(\Omega)$ is reflexive as well. \end{proof} Thanks to the previous result, we can use the variational arguments developed, for instance, in \cite{dibenedetto} to get the existence and uniqueness result for the $(-\Delta)^s_{p}$ operator, $0<s<1$. That is, we have the following. \begin{theorem}\label{main2} Let $\Omega\subset\mathbb R^N$ be a bounded $C^1$ domain and let us suppose that $f\in L^{p'}(\Omega)$, $\frac{1}{p}+\frac{1}{p'}=1$, and {$g\in L^{\infty}_c(\mathbb{R}^N\setminus \Omega)$}. Then, the problem $$ \begin{cases} (-\Delta)^s_p u = f(x) &\text{in}\,\,\Omega\\ \mathcal N_{s,p} u =g&\text{in}\,\,\mathbb{R}^N\setminus\overline\Omega, \end{cases} $$ has a weak solution, that is, $$\frac{1}{2}\int_{Q(\Omega)}\frac{|u(x)-u(y)|^{p-2}(u(x)- u(y))( v(x)-v(y))}{|x-y|^{N+sp}}\, dx\,dy=\int_{\Omega}f\, v dx +\int_{\mathbb{R}^N\setminus \Omega} g\, v \, dx,\quad \forall v \in W^{s,p}_{\mathcal N(g)}(\Omega),$$ if and only if the following compatibility condition holds: \begin{equation}\label{eq:unodiez_p} \int_{\Omega} f \, dx +\int_{\mathbb R^N\setminus \Omega}g \, dx=0. \end{equation} Moreover, if \eqref{eq:unodiez_p} holds, the solution is unique up to an additive constant $c\in \mathbb R.$ \end{theorem} The proof of the previous result can be done using the same minimization techniques developed in the proof of Theorem \ref{key_th} under hypothesis $(\mathcal A_{(f,g_1,g_2)})$, Case~A, once we prove a Poincar\'e-type inequality in the Banach space \begin{equation}\label{eq:defspazlasecondvoltnat} \widetilde{W}^{s,p}_{\mathcal N(g)}(\Omega):=\{u\in W^{s,p}_{\mathcal N(g)}(\Omega):\, \int_{\Omega} u\, dx=0\}.
\end{equation} This inequality will allow us to affirm that the norm in $\widetilde{W}^{s,p}_{\mathcal N(g)}(\Omega)$ defined as $$\|u\|^p_{\widetilde{W}^{s,p}_{\mathcal N(g)}(\Omega)}:=\iint_{Q(\Omega)}\frac{|u(x)- u(y)|^p}{|x-y|^{N+sp}}\, dx dy,$$ is equivalent to the one in ${W}^{s,p}_{\mathcal N(g)}(\Omega)$ given in \eqref{eq:pnorm}. \begin{lemma}\label{pro:poincplapl} If $g\in L^{\infty}_c(\mathbb{R}^N\setminus \Omega)$, then for every $v\in \widetilde{W}^{s,p}_{\mathcal N(g)}(\Omega)$ the following Poincar\'e-type inequality holds: \begin{equation}\nonumber \int_{\Omega}|v|^p\, dx+\int_{\mathbb{R}^N\setminus \Omega}|g||v|^p\, dx\leq C(N,s,\Omega)\iint_{Q(\Omega)}\frac{|v(x)- v(y)|^p}{|x-y|^{N+sp}}\, dx dy. \end{equation} \end{lemma} \begin{proof} Following the proof of \eqref{poincare}, let us suppose, by contradiction, that there exists, up to a renormalization, a sequence $\{v_k\}\subset \widetilde{W}^{s,p}_{\mathcal N(g)}(\Omega)$ such that \begin{equation}\label{pplol} \int_{\Omega}|v_k|^p\, dx+\int_{\mathbb{R}^N\setminus \Omega}|g||v_k|^p\, dx=1\,\, \mbox{and}\,\, \iint_{Q(\Omega)}\frac{|v_k(x)-v_k(y)|^{p}}{|x-y|^{N+sp}}\, dx dy<\frac{1}{k}. \end{equation} Using now that the embedding $W^{s,p}(\Omega)\hookrightarrow L^p(\Omega)$ is compact (see \cite{DnPV}), it follows that, up to a subsequence, there exists $v\in L^p(\Omega)$ such that \begin{equation}\label{porq} v_{k}\rightarrow v,\qquad\text{in}\,\, L^p(\Omega).
\end{equation} Moreover, if we take a ball $B$ centered at the origin with $\Omega \subset B$, we get by elementary inequalities that \begin{equation}\label{eq:cauchyyyyyyseqvk} \fint_B |v_k|^p\,dx\leq C(s,p,B, N, \Omega)\fint_{B}\fint_{\Omega}\frac{|v_k(x)-v_k(y)|^{p}}{|x-y|^{N+sp}}\, dx dy+\fint_\Omega |v_k|^p\,dx. \end{equation} By \eqref{pplol}--\eqref{eq:cauchyyyyyyseqvk} we deduce that, for all $\varepsilon>0$, there exists $\overline k$ such that for all $m,k>\overline k$ \[\int_{B}|v_k-v_m|^p\, dx< \varepsilon,\] that is, $v_k$ is a Cauchy sequence in $L^p(B)$; therefore, up to a subsequence, $v_k$ converges to some $v$ in $L^p(B)$ and a.e.\ in $B$. Passing to the limit in \eqref{pplol}, by the lower semicontinuity of the norm w.r.t.\ the weak convergence, on one hand we get that $v$ must be a constant in $\mathbb{R}^N$ and on the other hand that \[\int_{\Omega}|v|^p\, dx+\int_{\mathbb{R}^N\setminus \Omega}|g||v|^p\, dx=1,\] which contradicts \eqref{eq:defspazlasecondvoltnat}. \end{proof} Now we can give the \begin{proof}[Proof of Theorem \ref{main2}] For every $u\in {W}^{s,p}_{\mathcal N(g)}(\Omega)$ we define the nonlinear functional $$J_p(u):= \frac1p \iint_{Q(\Omega)}\frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}\, dx dy-\int_{\Omega}f{u}\,dx-\int_{\mathbb R^N\setminus \Omega}g{u}\, dx.$$ We note that, by the compatibility condition \eqref{eq:unodiez_p}, it follows that $J_p(u)=J_p(u-\overline u)$, where $$\overline u =\frac{1}{|\Omega|}\int_{\Omega}u\, dx.$$ Therefore $J_p$ can be defined in the space $\widetilde{W}^{s,p}_{\mathcal N(g)}(\Omega)$. Following the proof of Theorem \ref{key_th} and using the Poincar\'e inequality given in Lemma \ref{pro:poincplapl}, the result follows.
\end{proof} \subsubsection{Some open questions}{Among many other possible choices, we think that it would be very interesting to find a natural Neumann condition for the operator \begin{equation}\label{eq:plaplace1+} (- \Delta)^s_p\, u(x):= -\operatorname{div} \left (\lim_{\varepsilon \searrow 0} \int_{\mathbb{R}^N \setminus B_\varepsilon(x)} \frac{|\nabla u(x) - \nabla u(y)|^{p-2}\, (\nabla u(x) -\nabla u(y))}{|x - y|^{N+\sigma\,p}}\, dy\right), \qquad x \in \mathbb{R}^N, \end{equation} where $s=1+\sigma$. } \begin{thebibliography}{10} \bibitem{AJS} N. Abatangelo, S. Jarohs, A. Salda\~na, \newblock On the maximum principle for higher-order fractional Laplacians. {\em Preprint}, arXiv:1607.00929. \bibitem{APY} Adimurthi, F. Pacella, S. L. Yadava, \newblock Interaction between the geometry of the boundary and positive solutions of a semilinear Neumann problem with critical nonlinearity. {\em J. Functional Anal.} 113 (1993), no. 2, 318-350. \bibitem{AR} A. Ambrosetti, P. H. Rabinowitz, \newblock Dual variational methods in critical point theory and applications. {\em J. Functional Anal.} 14 (1973), 349-381. \bibitem{LA} J. M. Arrieta, P. D. Lamberti, \newblock Higher order elliptic operators on variable domains. Stability results and boundary oscillations for intermediate problems. {\em J. Differential Equations}, 263 (2017), no. 7, 4222-4266. \bibitem{BCGJ} G. Barles, E. Chasseigne, C. Georgelin, E. Jakobsen, \newblock On Neumann type problems for nonlocal equations in a half space. {\em Trans. Amer. Math. Soc.} 366 (2014), no. 9, 4873-4917. \bibitem{BGJ} G. Barles, C. Georgelin, E. Jakobsen, \newblock On Neumann and oblique derivatives boundary conditions for nonlocal elliptic equations. {\em J. Differential Equations}, 256 (2014), 1368-1394. \bibitem{BDGQ} B. Barrios, L. Del Pezzo, J. Garcia-Melian, A. Quaas, \newblock Monotonicity of solutions for some nonlocal elliptic problems in half-spaces. {\em Calc. Var.
Partial Differential Equations}, 56 (2017), 56-39. \bibitem{BDGQ2} B. Barrios, L. Del Pezzo, J. Garcia-Melian, A. Quaas, \newblock Symmetry results in the half-space for a semi-linear fractional Laplace equation through a one-dimensional analysis. {\em Ann. Mat. Pura Appl. (4)}, to appear. \bibitem{BMS} B.~Barrios, L.~Montoro, B.~Sciunzi, \newblock On the moving plane method for nonlocal problems in bounded domains. {\em J. Anal. Math.}, to appear. \bibitem{p1} B. Barrios, I. Peral, S. Vita, \newblock Some remarks about the summability of nonlocal nonlinear problems. {\em Adv. Nonlinear Anal.} 4 (2015), no. 2, 91-107. \bibitem{BBC} K. Bogdan, K. Burdzy, Z. Q. Chen, \newblock Censored stable processes. {\em Probab. Theory Relat. Fields} 127 (2003), 89-152. \bibitem{BL} V. Burenkov, P. D. Lamberti, \newblock Spectral stability of higher order uniformly elliptic operators. {\em Sobolev spaces in mathematics. II}, Int. Math. Ser. (N. Y.), 9 (2009), 69-102. \bibitem{C1} L. M. Chasman, \newblock An isoperimetric inequality for fundamental tones of free plates. {\em Comm. Math. Phys.}, 303 (2011), no. 2, 421-449. \bibitem{CK} Z. Q. Chen, P. Kim, \newblock Green function estimate for censored stable processes. {\em Probab. Theory Relat. Fields} 124 (2002), 595-610. \bibitem{CERW} C. Cortazar, M. Elgueta, J. Rossi, N. Wolanski, \newblock Boundary fluxes for nonlocal diffusion. {\em J. Differential Equations} 234 (2007), 360-390. \bibitem{CERW1} C. Cortazar, M. Elgueta, J. Rossi, N. Wolanski, \newblock How to approximate the heat equation with Neumann boundary conditions by nonlocal diffusion problems. {\em Arch. Rat. Mech. Anal.} 187 (2008), 137-156. \bibitem{CERW2} C. Cortazar, M. Elgueta, J. Rossi, N. Wolanski, \newblock Asymptotic behavior for nonlocal diffusion equations. {\em J. Math. Pures Appl.} 86 (2006), 271-291. \bibitem{CT} A. Cotsiolis, N. K. Tavoularis, \newblock Best constants for Sobolev inequalities for higher order fractional derivatives. {\em J. Math. Anal.
Appl.} 295 (2004), 225-236. \bibitem{p2} L. Del Pezzo, A. Quaas, \newblock A Hopf's lemma and a strong minimum principle for the fractional p-Laplacian. {\em J. Differential Equations} 263 (2017), no. 1, 765-778. \bibitem{dibenedetto} E. DiBenedetto, \newblock {\em Partial Differential Equations (Second edition)}. Birkh\"auser, 2010. \bibitem{DnPV} E.~Di Nezza, G.~Palatucci, E.~Valdinoci, \newblock Hitchhiker's guide to the fractional Sobolev spaces. {\em Bull. Sci. Math.}, 136 (2012), no. 5, 521-573. \bibitem{DG} S. Dipierro, H. C. Grunau, \newblock Boggio's formula for fractional polyharmonic {D}irichlet problems. {\em Ann. Mat. Pura Appl. (4)} 196 (2017), no. 4, 1327-1344. \bibitem{DMPS} S. Dipierro, L. Montoro, I. Peral, B. Sciunzi, \newblock Qualitative properties of positive solutions to nonlocal critical problems involving the Hardy-Leray potential. {\em Calc. Var. Partial Differential Equations}, 55 (2016), no. 4, Paper No. 99, 29. \bibitem{DSV} S. Dipierro, N. Soave, E. Valdinoci, \newblock On fractional elliptic equations in Lipschitz sets and epigraphs: regularity, monotonicity and rigidity results. {\em Math. Ann.}, 369 (2017), no. 3-4, 1283-1326. \bibitem{DRoV} S.~Dipierro, X.~Ros-Oton, E.~Valdinoci, \newblock Nonlocal problems with Neumann boundary conditions. {\em Rev. Mat. Iberoam.}, 33 (2017), no. 2, 377-416. \bibitem{p3} G. Franzina, G. Palatucci, \newblock Fractional {$p$}-eigenvalues. {\em Riv. Math. Univ. Parma (N.S.)} 5 (2014), no. 2, 373-386. \bibitem{G0} G. Grubb, \newblock Fractional {L}aplacians on domains, a development of {H}\"ormander's theory of {$\mu$}-transmission pseudodifferential operators. {\em Adv. Math.}, 268 (2015), 478-528. \bibitem{G} G. Grubb, \newblock Local and nonlocal boundary conditions for {$\mu$}-transmission and fractional order elliptic pseudodifferential operators. {\em Anal. PDE}, 7 (2014), no. 7, 1649-1682. \bibitem{G1} G. Grubb, \newblock Spectral results for mixed problems and fractional elliptic operators. {\em J.
Math. Anal. Appl.}, 421 (2015), no. 2, 1616-1634. \bibitem{Guan} Q. Y. Guan, \newblock Integration by parts formula for regional fractional Laplacian. {\em Comm. Math. Phys.}, 266 (2006), 289-329. \bibitem{Q-YM} Q. Y. Guan, Z. M. Ma, \newblock Reflected symmetric {$\alpha$}-stable processes and regional fractional {L}aplacian. {\em Probab. Theory Related Fields} 134 (2006), no. 4, 649-694. \bibitem{LNT} C. S. Lin, W. M. Ni, I. Takagi, \newblock Large amplitude stationary solutions to a chemotaxis system. {\em J. Differential Equations}, 72 (1988), 1-27. \bibitem{MYZ} C. Miao, J. Yang, J. Zheng, \newblock An improved maximal inequality for 2D fractional order Schr\"odinger operators. {\em Studia Math.} 230 (2015), no. 2, 121-165. \bibitem{MPV} E. Montefusco, B. Pellacci, G. Verzini, \newblock Fractional diffusion with Neumann boundary conditions: the logistic equation. {\em Disc. Cont. Dyn. Syst. Ser. B} 18 (2013), 2175-2202. \bibitem{Mou} C. Mou, Y. Yi, \newblock Interior regularity for regional fractional Laplacian. {\em Comm. Math. Phys.}, 340 (2015), 233-251. \bibitem{RoS0} X. Ros-Oton, J. Serra, \newblock The Dirichlet problem for the fractional Laplacian: regularity up to the boundary. {\em J. Math. Pures Appl.}, 101 (2014), 275-302. \bibitem{RoS1} X. Ros-Oton, J. Serra, \newblock The Pohozaev identity for the fractional Laplacian. {\em Arch. Rat. Mech. Anal.} 213 (2014), 587-628. \bibitem{RoS} X. Ros-Oton, J. Serra, \newblock Local integration by parts and Pohozaev identities for higher order fractional Laplacians. {\em Discrete Contin. Dyn. Syst.}, 35 (2015), no. 5, 2131-2150. \bibitem{SV} P. Stinga, B. Volzone, \newblock Fractional semilinear {N}eumann problems arising from a fractional {K}eller-{S}egel model. {\em Calc. Var. Partial Differential Equations} 54 (2015), no. 1, 1009-1042. \bibitem{re} K. Rektorys, \newblock Variational methods in mathematics, science and engineering. {\em D. Reidel Publishing Company}, Boston, USA. \bibitem{V} G. C.
Verchota, \newblock The biharmonic Neumann problem in Lipschitz domains. {\em Acta Math.}, 195 (2005), no. 2, 217-279. \bibitem{Will} M. Willem, \newblock Functional Analysis. Birkh\"auser, Basel, 2013. \bibitem{Y} R. Yang, \newblock On higher order extensions for the fractional Laplacian. {\em Preprint}, arXiv:1302.4413. \end{thebibliography} \end{document}
\begin{document} \author{Graciela Carboni} \address{Ciclo B\'asico Com\'un\\ Pabell\'on 3 - Ciudad Universitaria\\ (1428) Buenos Aires, Argentina.} \curraddr{} \email{[email protected]} \thanks{Supported by UBACYT X0294} \author{Jorge A. Guccione} \address{Departamento de Matem\'atica\\ Facultad de Ciencias Exactas y Naturales\\ Pabell\'on 1 - Ciudad Universitaria\\ (1428) Buenos Aires, Argentina.}\curraddr{} \email{[email protected]} \thanks{Supported by PICT 12330, UBACYT X0294 and CONICET} \author{Juan J. Guccione} \address{Departamento de Matem\'atica\\ Facultad de Ciencias Exactas y Naturales\\ Pabell\'on 1 - Ciudad Universitaria\\ (1428) Buenos Aires, Argentina.} \curraddr{} \email{[email protected]} \thanks{Supported by PICT 12330, UBACYT X0294 and CONICET} \subjclass[2000]{Primary 16E40; Secondary 16S36} \date{} \keywords{Hochschild homology, cyclic homology, monogenic extensions} \dedicatory{} \begin{abstract} We study the Hochschild and cyclic homologies of noncommutative monogenic extensions. As an application we compute the Hochschild and cyclic homologies of the rank~$1$ Hopf algebras introduced in \cite{K-R}. \end{abstract} \maketitle \section*{Introduction} Let $k$ be a commutative ring with $1$. A monogenic extension of $k$ is a $k$-algebra $k[x]/\langle f \rangle$, where $f\in k[x]$ is a monic polynomial. In \cite{F-G-G} this concept was generalized to the non-commutative setting. Examples are the rank~$1$ Hopf algebras in characteristic zero, recently introduced in \cite{K-R}. In the paper \cite{F-G-G} mentioned above, the Hochschild cohomology ring of these extensions was computed. In the present paper we study their Hochschild and cyclic homology groups. Our main result is that, for any monogenic extension $A$ of a $k$-algebra $K$, there exists a small mixed complex $(C^S_*(A),d_*,D_*)$, giving the Hochschild and cyclic homology of $A$ relative to $K$.
When $K$ is a separable $k$-algebra this gives the absolute Hochschild and cyclic homology groups. The mixed complex $(C^S(A),d_*,D_*)$ is simple enough to allow us to compute the homology of each rank~$1$ Hopf algebra. The paper is organized as follows: In Section~1 we give some preliminaries we need. In particular we recall the simple $\Upsilon$-projective resolution of a monogenic extension $A/K$ (where $\Upsilon$ is the family of all $A$-bimodule epimorphisms which split as $K$-bimodule maps), built in \cite{F-G-G}. In Section~2 we use the above-mentioned resolution to build a complex $C^S(A,M)=(C^S_*(A,M),d_*)$ giving the Hochschild homology of $A$ relative to $K$ with coefficients in $M$, for each $A$-bimodule $M$ (symmetric over $k$). Then, we obtain explicit computations under suitable hypotheses. In Section~3 we prove that $C^S(A,A)$ is the Hochschild complex of a mixed complex. This generalizes the main result of \cite{Bach}. Then, we use this result to compute the cyclic homology of $A$ in several cases, including the rank~$1$ Hopf algebras. \section{Preliminaries} In this section we recall some well-known definitions and results, and we fix some notations that we will use in the rest of the paper. \subsection{A simple resolution for a noncommutative monogenic extension} In this subsection we recall some definitions and results proved in \cite{F-G-G}. Let $k$ be a commutative ring, $K$ an associative $k$-algebra, which we do not assume to be commutative, and $\alpha$ a $k$-algebra endomorphism of $K$. Consider the Ore extension $B=K[x,\alpha]$, that is, the algebra generated by $K$ and $x$ subject to the relations \[ x\lambda=\alpha(\lambda)x\quad\text{for all } \lambda\in K.
\] Let $f=x^n+\sum_{i=1}^{n}\lambda_ix^{n-i}$ be a monic polynomial of degree $n\ge 2$, where each coefficient $\lambda_i\in K$ satisfies $\alpha(\lambda_i)=\lambda_i$ and $\lambda_i\lambda = \alpha^i(\lambda)\lambda_i$ for every $\lambda\in K$. Sometimes we will write $f = \sum_{i=0}^n \lambda_ix^{n-i}$, assuming that $\lambda_0=1$. The monogenic extension of $K$ associated with $\alpha$ and $f$ is the quotient $A=B/\cl{f}$. It is easy to see that $\{1,x,\dots,x^{n-1}\}$ is a left $K$-basis of the algebra $A$. Moreover, given $P\in B$, there exist unique $\overline{P}$ and $\stackrel{\dots}{P}$ in $B$ such that \[ P=\overline{P}f+\stackrel{\dots}{P}\quad\text{and}\quad \stackrel{\dots}{P}=0 \text{ or } \deg \stackrel{\dots}{P}<n. \] In this paper, unadorned tensor product $\otimes$ means $\otimes_K$ and all the maps are $k$-linear. Given a $K$-bimodule $M$, we let $M\otimes$ denote the quotient $M/[M,K]$, where $[M,K]$ is the $k$-module generated by the commutators $m\lambda - \lambda m$ with $\lambda\in K$ and $m\in M$. Let $A^2_{\alpha^r} =A_{\alpha^r}\otimes A$, where $A_{\alpha^r}$ is $A$ endowed with the regular left $A$-module structure and with the right $K$-module structure twisted by $\alpha^r$, that is, if $a\in A_{\alpha^r}$ and $\lambda \in K$, then $a\cdot \lambda = a \alpha^r(\lambda)$. We recall that \[ \frac{T}{Tx}: B\to A^2_{\alpha} \] is the unique $K$-derivation such that $\frac{Tx}{Tx}=1\otimes 1$. Notice that \[ \frac{Tx^i}{Tx} = \sum_{\ell=0}^{i-1}x^{\ell}\otimes x^{i-\ell-1}. \] Let $\Upsilon$ be the family of all $A$-bimodule epimorphisms which split as $K$-bimodule maps.
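As a quick check of the derivation property of $\frac{T}{Tx}$, for $i=2$ the Leibniz rule gives
\[
\frac{Tx^2}{Tx}=x\cdot\frac{Tx}{Tx}+\frac{Tx}{Tx}\cdot x=x\otimes 1+1\otimes x,
\]
in agreement with the displayed formula for $\frac{Tx^i}{Tx}$.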
\begin{teo}{\rm(\cite[Theorem 2.1]{F-G-G})}\label{teorema 1.1} The complex \[ C_S'(A) = \xymatrix@C-8pt{\cdots\ar[r]& A^2_{\alpha^{2n+1}} \ar[r]^-{d'_5}& A^2_{\alpha^{2n}} \ar[r]^-{d'_4}& A^2_{\alpha^{n+1}}\ar[r]^-{d'_3}& A^2_{\alpha^n}\ar[r]^-{d'_2}& A^2_{\alpha} \ar[r]^-{d'_1}& A^2}, \] where \[ d'_{2m+1}\colon A^2_{\alpha^{mn+1}}\to A^2_{\alpha^{mn}} \quad \text{and} \quad d'_{2m}\colon A^2_{\alpha^{mn}}\to A^2_{\alpha^{(m-1)n+1}}, \] are defined by \begin{align*} & d'_{2m+1}(1\otimes 1) = x\otimes 1-1\otimes x,\\ & d'_{2m}( 1\otimes 1) = \frac{Tf}{Tx}= \sum_{i=1}^{n} \lambda_{n-i}\sum_{\ell=0}^{i-1}x^{\ell}\otimes x^{i-\ell-1}, \end{align*} is a $\Upsilon$-projective resolution of $A$. \end{teo} Let $(A\otimes \overline{A}^{\otimes^*}\otimes A,b')$ be the canonical Hochschild resolution relative to $K$. Here $\overline{A} = A/K$. \begin{teo}{\rm(\cite[Proposition 2.2 and Theorem 2.3]{F-G-G})}\label{teorema 1.2} There are comparison maps $$ \phi'_*\colon C_S'(A)\to (A\otimes \overline{A}^{\otimes^*}\otimes A,b')\quad\text{and}\!\quad\! \psi'_*\colon (A\otimes \overline{A}^{\otimes^*}\otimes A,b')\to C_S'(A), $$ which are inverse to each other up to homotopy.
These maps are given by \begin{align*} & \phi'_0(1\otimes 1)= 1\otimes 1,\\ & \phi'_1(1\otimes 1)= 1\otimes x\otimes 1,\\ &\phi'_{2m}(1\otimes 1) = \sum_{\mathbf{i}\in \mathds{I}_m} \boldsymbol{\lambda}_{\mathbf{n}- \mathbf{i}} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf{i}}} x^{|\mathbf{i}-\boldsymbol{\ell}|-m} \otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,1}}\otimes 1,\\ &\phi'_{2m+1}(1\otimes 1) = \sum_{\mathbf{i}\in \mathds{I}_m} \boldsymbol{\lambda}_{\mathbf{n}- \mathbf{i}} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf{i}}} x^{|\mathbf{i}-\boldsymbol{\ell}|-m} \otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,1}}\otimes x\otimes 1,\\ &\psi'_{2m}(\hat{\mathbf{x}}^{\mathbf{i}_{1,2m}})= \overline{x^{i_1+i_2}}\, \overline{x^{i_3+i_4}}\cdots \overline{x^{i_{2m-1}+i_{2m}}}\otimes 1,\\ &\psi'_{2m+1}(\hat{\mathbf{x}}^{\mathbf{i}_{1,2m+1}})= \overline{x^{i_1+i_2}}\, \overline{x^{i_3+i_4}} \cdots \overline{x^{i_{2m-1}+i_{2m}}}\frac{T (x^{i_{2m+1}})}{Tx} , \end{align*} where \begin{itemize} \item $\mathds{I}_m=\{(i_1,\dots,i_m)\in \mathbb{Z}^m: 1\le i_j\le n \text{ for all $j$}\}$, \item $\mathds{J}_{\mathbf{i}}=\{(l_1,\dots,l_m)\in \mathbb{Z}^m: 1\le l_j<i_j \text{ for all $j$}\}$, \item $\boldsymbol{\lambda}_{\mathbf{n}- \mathbf{i}} =\lambda_{n-i_1}\cdots \lambda_{n-i_m}$, \item $\widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,1}} = x\otimes x^{\ell_m} \otimes \cdots\otimes x \otimes x^{\ell_1}$, \item $|\mathbf{i}-\boldsymbol{\ell}|=\sum_{j=1}^m (i_j -\ell_j)$,
\item $\hat{\mathbf{x}}^{\mathbf{i}_{1r}} = 1\otimes x^{i_1}\otimes \cdots\otimes x^{i_r}\otimes 1$. \end{itemize} \end{teo} \begin{prop}\label{prop 1.3} $\psi'_*\phi'_*= \mathrm{id}$ and a homotopy $\omega'_{*+1}$ from $\phi'_* \psi'_*$ to $\mathrm{id}$ is recursively defined by $\omega'_1=0$ and \begin{align*} \omega'_{r+1}(\mathbf{x}\otimes 1) & = (-1)^r(\phi'_r\psi'_r-\mathrm{id}-\omega'_rb'_r)(\mathbf{x}\otimes1)\otimes 1\\ & = (-1)^r \phi'_r\psi'_r(\mathbf{x}\otimes 1)\otimes 1 - \omega'_r(\mathbf{x})\otimes 1, \end{align*} for $\mathbf{x}\in A \otimes \overline{A}^{\otimes^r}$. \end{prop} \begin{proof} The equality $\psi'_*\phi'_*= \mathrm{id}$ follows immediately from the definitions. For the second assertion see \cite[Proposition 1.2.1]{G-G}. \end{proof} \subsection{Mixed complexes} In this subsection we briefly recall the notion of a mixed complex. For more details about this concept we refer to \cite{K} and \cite{B}. A mixed complex $(X,b,B)$ is a graded $k$-module $(X_r)_{r\ge 0}$, endowed with morphisms $b\colon X_r\to X_{r-1}$ and $B\colon X_r\to X_{r+1}$, such that $$ b b = 0,\quad B B = 0\quad\text{and}\quad B b + b B = 0. $$ A morphism of mixed complexes $f\colon (X,b,B)\to (Y,d,D)$ is a family $f_r\colon X_r\to Y_r$ of maps such that $d f = f b$ and $D f= f B$. A mixed complex $\mathcal{X} = (X,b,B)$ determines a double complex \[ \xymatrix{\\\\ \BP(\mathcal{X})=}\qquad \xymatrix{ & \vdots \dto^-{b} &\vdots \dto^-{b}& \vdots \dto^-{b}& \vdots \dto^-{b}\\ \dots & X_3 \lto_-{B}\dto^-{b} & X_2 \lto_-{B}\dto^-{b} & X_1 \lto_-{B}\dto^-{b}& X_0 \lto_-{B}\\ \dots & X_2 \lto_-{B}\dto^-{b} & X_1 \lto_-{B}\dto^-{b} & X_0 \lto_-{B}\\ \dots & X_1 \lto_-{B}\dto^-{b} & X_0 \lto_-{B}\\ \dots & X_0 \lto_-{B}} \] By deleting the positively numbered columns we obtain a subcomplex $\BN(\mathcal{X})$ of $\BP(\mathcal{X})$.
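As an elementary illustration of these definitions (a standard sanity check, not taken from \cite{K} or \cite{B}), consider the mixed complex $\mathcal{X}$ with $X_0 = k$, $X_r = 0$ for $r>0$ and $b = B = 0$. The total complex of $\BC(\mathcal{X})$ then has a single copy of $k$ in each even total degree, nothing in odd total degrees, and zero differentials, so
\[
\mathrm{HC}_{2m}(\mathcal{X}) = k \quad\text{and}\quad \mathrm{HC}_{2m+1}(\mathcal{X}) = 0 \qquad\text{for all } m\ge 0,
\]
recovering the classical computation of the cyclic homology of the ground ring.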
Let $\BN'(\mathcal{X})$ be the kernel of the canonical surjection from $\BN(\mathcal{X})$ to $(X,b)$. The quotient double complex $\BP(\mathcal{X}) /\BN'(\mathcal{X})$ is denoted by $\BC(\mathcal{X})$. The homologies $\mathrm{HC}_*(\mathcal{X})$, $\mathrm{HN}_*(\mathcal{X})$ and $\mathrm{HP}_*(\mathcal{X})$ of the total complexes of $\BC(\mathcal{X})$, $\BN(\mathcal{X})$ and $\BP(\mathcal{X})$, respectively, are called the cyclic, negative and periodic homologies of $\mathcal{X}$. The homology $\mathrm{HH}_*(\mathcal{X})$ of $(X,b)$ is called the Hochschild homology of $\mathcal{X}$. Finally, it is clear that a morphism $f\colon \mathcal{X}\to \mathcal{Y}$ of mixed complexes induces a morphism from the double complex $\BP(\mathcal{X})$ to the double complex $\BP(\mathcal{Y})$. Let $A$ be a noncommutative monogenic extension of $K$. The normalized mixed complex of $A$ relative to $K$ is $(A\otimes \overline{A}^{\otimes^*}\otimes,b,B)$, where $b$ is the canonical Hochschild boundary map and $$ B([a_0\otimes\cdots\otimes a_r]) = \sum_{i=0}^r (-1)^{ir} [1\otimes a_i\otimes\cdots\otimes a_r\otimes a_0\otimes\cdots\otimes a_{i-1}], $$ in which $[a_0\otimes\cdots\otimes a_r]$ denotes the class of $a_0\otimes\cdots\otimes a_r$ in $A\otimes \overline{A}^{\otimes^r}\otimes$. The cyclic, negative, periodic and Hochschild homologies $\mathrm{HC}^K_*(A)$, $\mathrm{HN}^K_*(A)$, $\mathrm{HP}^K_*(A)$ and $\mathrm{HH}^K_*(A)$ of $A$ are the respective homologies of $(A\otimes\overline{A}^{\otimes^*}\otimes,b,B)$. \subsection{The perturbation lemma} Next we recall the perturbation lemma. We give the more general version introduced in \cite{C}.
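In low degrees the formula for $B$ reads (a direct unwinding of the definition, recorded here only for illustration)
\[
B([a_0]) = [1\otimes a_0], \qquad B([a_0\otimes a_1]) = [1\otimes a_0\otimes a_1] - [1\otimes a_1\otimes a_0],
\]
and, for instance, $BB([a_0]) = [1\otimes 1\otimes a_0] - [1\otimes a_0\otimes 1] = 0$, since each term has a factor $1$ in an $\overline{A}$-slot and hence vanishes in the normalized complex.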
A homotopy equivalence data \begin{equation} \xymatrix{(Y,\partial)\ar@<-1ex>[r]_-{i} & (X,d) \ar@<-1ex>[l]_-{p}}, \quad h\colon X_*\to X_{*+1},\label{eq1} \end{equation} consists of the following: \begin{enumerate} \item chain complexes $(Y,\partial)$, $(X,d)$ and quasi-isomorphisms $i$ and $p$ between them, \item a homotopy $h$ from $ip$ to $\mathrm{id}$. \end{enumerate} \noindent A perturbation~$\delta$ of~\eqref{eq1} is a map $\delta\colon X_*\to X_{*-1}$ such that $(d+\delta)^2 = 0$. We call it small if $\mathrm{id} - \delta h$ is invertible. In this case we write $$ \Delta = (\mathrm{id} - \delta h)^{-1} \delta $$ and we consider \begin{equation} \xymatrix{(Y,\partial^1)\ar@<-1ex>[r]_-{i^1} & (X,d+\delta)\ar@<-1ex>[l]_-{p^1}}, \quad h^1\colon X_*\to X_{*+1},\label{eq2} \end{equation} with $$ \partial^1 = \partial + p \Delta i,\quad i^1 = i + h \Delta i,\quad p^1 = p + p \Delta h,\quad h^1 = h + h \Delta h. $$ A deformation retract is a homotopy equivalence data such that $p i = \mathrm{id}$. A deformation retract is called special if $h i = 0$, $p h = 0$ and $h h = 0$. In the case considered in this paper the map $\delta h$ is locally nilpotent, and so $(\mathrm{id} - \delta h)^{-1} = \sum_{j=0}^{\infty} (\delta h)^j$. \begin{teo}{\rm(\cite{C})}\label{teorema 1.4} If $\delta$ is a small perturbation of the homotopy equivalence data~\eqref{eq1}, then the perturbed data~\eqref{eq2} is also a homotopy equivalence. Moreover, if \eqref{eq1} is a special deformation retract, then so is~\eqref{eq2}. \end{teo} \section{Hochschild homology of $A$} Given an $A$-bimodule $M$, we let $[M,K]_{\alpha^j}$ denote the $k$-submodule of $M$ generated by the twisted commutators $[m,\lambda]_{\alpha^j} = m\alpha^j(\lambda) - \lambda m$.
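As a simple illustration of the twisted commutators, in the group-algebra setting that reappears in Example~\ref{ejemplo 2.8} (so this is only a sanity check of the definition): for $M = K = k[G]$, $\alpha(g) = \chi(g)g$ with $\chi$ a character, and group elements $g_1,g_2\in G$,
\[
[g_2,g_1]_{\alpha^j} = g_2\,\alpha^j(g_1) - g_1g_2 = \chi^j(g_1)\,g_2g_1 - g_1g_2,
\]
so when $G$ is abelian this equals $(\chi^j(g_1)-1)g_1g_2$, which vanishes precisely when $\chi^j(g_1) = 1$.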
As usual, we let $A^e$ denote the enveloping algebra $A\otimes_k A^{\mathrm{op}}$ of $A$, and $\mathrm{H}^K_*(A,M)$ the Hochschild homology of $A$ relative to $K$ with coefficients in $M$. When $M = A$ we will write $\mathrm{HH}^K_*(A)$ instead of $\mathrm{H}^K_*(A,A)$. In the following theorem we use the same notations as in Theorem~\ref{teorema 1.1}. \begin{teo}\label{teorema 2.1} Let $M$ be an $A$-bimodule. The following facts hold: \begin{enumerate} \item The chain complex \[ \spreaddiagramcolumns{-0.3pc} \quad\qquad C^S(A,M)= \xymatrix@C-8pt{\cdots \ar[r]^-{d_4}& \frac{M}{[M,K]_{\alpha^{n+1}}} \ar[r]^-{d_3}& \frac{M}{[M,K]_{\alpha^n}} \ar[r]^-{d_2}& \frac{M}{[M,K]_{\alpha}} \ar[r]^-{d_1}& \frac{M}{[M,K]_{\alpha^0}}}, \] where the boundaries are the maps defined by \begin{align*} & d_{2m+1}([\mathfrak{m}]) = [\mathfrak{m} x-x\mathfrak{m}],\\ &d_{2m}([\mathfrak{m}]) =\sum_{i=1}^{n}\sum_{\ell=0}^{i-1} [ \lambda_{n-i}x^{i-\ell-1}\mathfrak{m} x^{\ell}], \end{align*} in which $[\mathfrak{m}]$ denotes the class of $\mathfrak{m}\in M$ in $\frac{M}{[M,K]_{\alpha^{mn+1}}}$ and $\frac{M}{[M,K]_{\alpha^{mn}}}$, respectively, computes $\mathrm{H}^K_*(A,M)$.
\medskip \item The maps \begin{align*} &\phi_*\colon C^S(A,M)\to(M\otimes \overline{A}^{\otimes^*}\otimes,b_*),\\ &\psi_*\colon (M\otimes \overline{A}^{\otimes^*}\otimes,b_*)\to C^S(A,M), \end{align*} defined by \begin{align*} \qquad\quad & \phi_0([\mathfrak{m}])= [\mathfrak{m}],\\ & \phi_1([\mathfrak{m}])= [\mathfrak{m}\otimes x],\\ &\phi_{2m}([\mathfrak{m}]) = \sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} [\boldsymbol{\lambda}_{\mathbf{n}- \mathbf{i}}\mathfrak{m} x^{|\mathbf{i}- \boldsymbol{\ell}|-m}\otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,1}}],\\ &\phi_{2m+1}([\mathfrak{m}])=\sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in\mathds{J}_{\mathbf i}} [\boldsymbol{\lambda}_{\mathbf{n}- \mathbf{i}}\mathfrak{m} x^{|\mathbf{i}-\boldsymbol{\ell}|-m} \otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,1}}\otimes x],\\ &\psi_{2m}([\mathfrak{m}\otimes\mathbf{x}^{\mathbf{i}_{1,2m}}])=[\mathfrak{m}\overline{x^{i_1+i_2}}\cdots\overline{x^{i_{2m-1}+i_{2m}}}],\\ &\psi_{2m+1}([\mathfrak{m}\otimes\mathbf{x}^{\mathbf{i}_{1,2m+1}}])=\sum_{\ell=0}^{i_{2m+1}-1} [x^{i_{2m+1}-\ell-1}\mathfrak{m} \overline{x^{i_1+i_2}} \cdots \overline{x^{i_{2m-1}+i_{2m}}}x^{\ell}], \end{align*} where $[\mathfrak{m}\otimes \mathbf{x}^{\mathbf{i}_{1r}}]$ denotes the class of $\mathfrak{m}\otimes \mathbf{x}^{\mathbf{i}_{1r}}$ in $M\otimes \overline{A}^{\otimes^r}\otimes$, are chain morphisms which are inverse to each other up to homotopy. \item Let $$ \qquad\beta_{r+1}\colon M \otimes_{A^e} A\otimes \overline{A}^{\otimes^{r+1}}\otimes A\to M\otimes\overline{A}^{\otimes^{r+1}}\otimes $$ be the map defined by $$ \beta_{r+1}(m\otimes \mathbf{x}_{0,r+2}) = [x_{r+2}mx_0\otimes \mathbf{x}_{1,r+1}].
$$ The composition $\psi_*\phi_*$ is the identity map, and the family of maps $$ \qquad\omega_{*+1}\colon M\otimes \overline{A}^{\otimes^*}\otimes \to M\otimes \overline{A}^{\otimes^{*+1}}\otimes, $$ defined by $$ \omega_{r+1}([m\otimes \mathbf{x}]) = \beta \bigl(m\otimes_{A^e}\omega'_{r+1}(1\otimes\mathbf{x}\otimes 1)\bigr), $$ is a homotopy from $\phi_*\psi_*$ to the identity map. \end{enumerate} \end{teo} \begin{proof} For the first item, apply the functor $M\otimes_{A^e}-$ to the resolution $C_S'(A)$, and use the identification \[ \xymatrix@R-4ex{M\otimes_{A^e} A^2_{\alpha^j}\ar[r]^-{\cong}& \frac{M}{[M,K]_{\alpha^j}} \\ \mathfrak{m}\otimes (a\otimes b)\ar@{|->}[r] & [b\mathfrak{m} a].} \] For instance, \begin{align*} d_{2m}([\mathfrak{m}]) & = \sum_{i=1}^n \sum_{\ell = 0}^{i-1} [x^{i-\ell-1}\mathfrak{m}\lambda_{n-i}x^{\ell}]\\ & = \sum_{i=1}^n \sum_{\ell = 0}^{i-1} [x^{i-\ell-1}\mathfrak{m} x^{\ell}\lambda_{n-i}]\\ & = \sum_{i=1}^n \sum_{\ell = 0}^{i-1} [\lambda_{n-i} x^{i-\ell-1}\mathfrak{m} x^{\ell}]. \end{align*} Let $\psi_*$ and $\phi_*$ be the morphisms induced by the comparison maps $\psi'_*$ and $\phi'_*$. The second and third items follow easily from this definition in the same manner. \end{proof} When $M = A$ we will write $C^S(A)$ instead of $C^S(A,M)$. \subsection{Explicit computations} The aim of this subsection is to compute the Hochschild homology of $A$ with coefficients in $A$, under suitable hypotheses. \begin{teo}\label{teorema 2.2} Let $C^S_r(A)$ denote the $r$-th module of $C^S(A)$.
If there exists $\breve{\lambda}\in \mathcal{Z}(K)$ such that \begin{itemize} \item $\alpha^n(\breve{\lambda})=\breve{\lambda}$, \item $\breve{\lambda}-\alpha^i(\breve{\lambda})$ is invertible in $K$ for $1\le i<n$, \end{itemize} then \[ C^S_r(A) = \begin{cases} \frac{K}{[K,K]_{\alpha^{mn}}}&\text{if $r=2m$,}\\ \frac{K}{[K,K]_{\alpha^{(m+1)n}}}x^{n-1}& \text{if $r=2m+1$.}\end{cases} \] \end{teo} \begin{proof} By item~(1) of Theorem~\ref{teorema 2.1} we know that $$ C^S_r(A) = \begin{cases} \frac{A}{[A,K]_{\alpha^{mn}}}&\text{if $r=2m$,}\\ \frac{A}{[A,K]_{\alpha^{mn+1}}} & \text{if $r=2m+1$.}\end{cases} $$ Moreover, $$ [a,\lambda]_{\alpha^r}= \sum_{i=0}^{n-1}[\lambda'_i,\lambda] _{\alpha^{r+i}}x^i $$ for each $a=\sum_{i=0}^{n-1} \lambda'_ix^i\in A$ and $\lambda\in K$. Hence, it suffices to check that if $i$ is not congruent to $0$ modulo $n$, then $[K,K]_{\alpha^{mn+i}}=K$. But this follows immediately from the fact that $\alpha^i(\breve{\lambda}) - \breve{\lambda}$ is invertible if $i$ is not congruent to $0$ modulo $n$ and \[ [\lambda',\breve{\lambda}]_{\alpha^{mn+i}}= \lambda'\alpha^{mn+i}(\breve{\lambda}) - \breve{\lambda}\lambda' = \lambda'(\alpha^i(\breve{\lambda})-\breve{\lambda}), \] since $\breve{\lambda}\in \mathcal{Z}(K)$ and $\alpha^n(\breve{\lambda}) = \breve{\lambda}$. \end{proof} \begin{teo}\label{teorema 2.3} Under the hypotheses of Theorem~\ref{teorema 2.2}, the boundary maps of $C^S(A)$ are given by \begin{align*} & d_{2m+1}([\lambda]x^{n-1} )= [(\alpha(\lambda)-\lambda)\lambda_n],\\ & d_{2m+2}([\lambda])= \Biggl[\sum_{\ell=0}^{n-1} \alpha^{\ell}(\lambda) \Biggr]x^{n-1}, \end{align*} for all $m\ge 0$. Consequently, if $\lambda_n=0$, then the odd boundary maps are zero.
\end{teo} \begin{proof} By item~(1) of Theorem~\ref{teorema 2.1}, \[ d_{2m+1}([\lambda]x^{n-1} )=[\lambda x^n-x\lambda x^{n-1}]=[(\lambda-\alpha(\lambda))x^n] = [(\alpha(\lambda)-\lambda)\lambda_n], \] where the last equality follows from Theorem~\ref{teorema 2.2}. Again by item~(1) of Theorem~\ref{teorema 2.1}, \begin{align*} d_{2m+2}([\lambda]) &=\sum_{i=1}^{n}\sum_{\ell=0}^{i-1} [\lambda_{n-i}x^{i-\ell-1}\lambda x^{\ell}]\\ &=\sum_{i=1}^{n}\sum_{\ell=0}^{i-1}[\lambda_{n-i}\alpha^{i-\ell-1}(\lambda)x^{i-1}]\\ &= \Biggl[\sum_{\ell=0}^{n-1} \alpha^{n-\ell-1}(\lambda) \Biggr]x^{n-1}, \end{align*} where the last equality follows again from Theorem~\ref{teorema 2.2}. \end{proof} Theorem~\ref{teorema 2.3} implies that $\lambda\lambda_n-\alpha^n(\lambda)\lambda_n\in [K,K]_{\alpha^{mn}}$ for all $\lambda\in K$ and $m\ge 0$. Indeed, this can be proved directly from the hypotheses stated at the beginning of this paper, and so it holds in full generality. In fact, $$ \lambda\lambda_n-\alpha^n(\lambda)\lambda_n = \lambda\lambda_n-\lambda_n\lambda = \lambda\alpha^{mn}(\lambda_n)-\lambda_n\lambda. $$ \begin{coro}\label{corolario 2.4} Under the hypotheses of Theorem~\ref{teorema 2.2}, \begin{align*} & \mathrm{HH}^K_0(A)=\frac{K}{[K,K]+\mathrm{Im}(\alpha-\mathrm{id})\lambda_n},\\ & \mathrm{HH}^K_{2m+1}(A)=\frac{\left\{\lambda\in K:(\alpha(\lambda)-\lambda)\lambda_n\in [K,K]_{\alpha^{mn}}\right\}}{[K,K]_{\alpha^{(m+1)n}}+\mathrm{Im}\left(\sum_{\ell=0}^{n-1}\alpha^{\ell} \right)}x^{n-1},\\ & \mathrm{HH}^K_{2m+2}(A)= \frac{\left\{\lambda\in K: \sum_{\ell=0}^{n-1}\alpha^{\ell}(\lambda) \in [K,K]_{\alpha^{(m+1)n}}\right\}} {[K,K]_{\alpha^{(m+1)n}}+\mathrm{Im}(\alpha-\mathrm{id})\lambda_n}.
\end{align*} \end{coro} \begin{rem}\label{remark 2.5} Note that the results in Theorems~\ref{teorema 2.2} and \ref{teorema 2.3}, and Corollary~\ref{corolario 2.4}, do not depend on $f$, with the exception of its degree $n$ and its constant term $\lambda_n$. \end{rem} Assume now that $k$ is a field, the hypotheses of Theorem~\ref{teorema 2.2} are fulfilled and $\alpha$ is diagonalizable. Let $\omega_1 = 1, \omega_2,\dots,\omega_s$ be the eigenvalues of $\alpha$ and let $K^{\omega_h}$ be the eigenspace of eigenvalue $\omega_h$. Write $$ [K,K]^{\omega_h}_{ \alpha^r} = K^{\omega_h}\cap [K,K]_{\alpha^r}. $$ Note that $1,\lambda_1,\dots,\lambda_n\in K^1$. Moreover, using the properties of $\breve{\lambda}$ and the fact that $\alpha$ is an algebra endomorphism, it is easy to see that there is a primitive $n$-th root of $1$ in $k$ and that all the $n$-th roots of $1$ in $k$ are eigenvalues of $\alpha$. Consequently, the characteristic of $k$ does not divide~$n$. \begin{teo}\label{teorema 2.6} The chain complex $C^S(A)$ decomposes as a direct sum $C^S(A) = \bigoplus_{h=1}^s C^{S,\omega_h}(A)$, where \[ C_r^{S,\omega_h}(A) = \begin{cases} \frac{K^{\omega_h}}{[K,K]^{\omega_h}_{\alpha^{mn}}}&\text{if $r=2m$,}\\[8pt] \frac{K^{\omega_h}}{[K,K]^{\omega_h}_{\alpha^{(m+1)n}}}x^{n-1}& \text{if $r=2m+1$.}\end{cases} \] Moreover, the boundary maps $d_*^{\omega_h}$ of $C_r^{S,\omega_h}(A)$ are given by $$ d_{2m}^{\omega_h}([\lambda]) = \left(\sum_{\ell=0}^{n-1} \omega_h^{\ell}\right) [\lambda]x^{n-1}\quad\text{and}\quad d_{2m+1}^{\omega_h}([\lambda]x^{n-1}) = (\omega_h-1)[\lambda\lambda_n]. $$ \end{teo} \begin{proof} This follows easily from Theorems~\ref{teorema 2.2} and \ref{teorema 2.3}, since $\lambda_n\in K^1$. \end{proof} \begin{coro}\label{corolario 2.7} Let $\mathrm{HH}^{K,\omega_h}_*(A)$ denote the homology of $C^{S,\omega_h}(A)$.
By Theorem~\ref{teorema 2.6} we know that $\mathrm{HH}^K_*(A) = \bigoplus_{h=1}^s \mathrm{HH}^{K,\omega_h}_*(A)$. Moreover, \begin{align*} & \mathrm{HH}^{K,\omega_h}_0(A) = \begin{cases} \frac{K^1}{[K,K]^1} & \text{if $h = 1$,} \\[8pt] \frac{K^{\omega_h}}{[K,K]^{\omega_h} + K^{\omega_h}\lambda_n} & \text{if $h \ne 1$,}\end{cases}\\[5pt] & \mathrm{HH}^{K,\omega_h}_{2m+1}(A) = \begin{cases} \frac{\left\{\lambda \in K^{\omega_h}:\lambda\lambda_n\in [K,K]^{\omega_h}_{\alpha^{mn}}\right\}}{[K,K]^{\omega_h}_{\alpha^{(m+1)n}}}x^{n-1} & \text{if $h \ne 1$ and $\omega_h^n = 1$,}\\[8pt] 0 &\text{otherwise,}\end{cases}\\[5pt] & \mathrm{HH}^{K,\omega_h}_{2m+2}(A) = \begin{cases} \frac{K^{\omega_h}}{[K,K]^{\omega_h}_{\alpha^{(m+1)n}} +K^{\omega_h}\lambda_n} & \text{if $h \ne 1$ and $\omega_h^n = 1$,}\\[8pt] 0 &\text{otherwise.} \end{cases} \end{align*} \end{coro} \noindent Note that if $\alpha^n$ has finite order $v$ (that is, $\alpha^{nv} = \mathrm{id}$ and $\alpha^{nj}\ne \mathrm{id}$ for $0<j<v$), then $$ \mathrm{HH}^{K,\omega_h}_{2m+1}(A) = \mathrm{HH}^{K,\omega_h}_{2(m+v)+1}(A)\quad\text{and}\quad \mathrm{HH}^{K,\omega_h}_{2m+2}(A) = \mathrm{HH}^{K,\omega_h}_{2(m+v)+2}(A) $$ for all $m\ge 0$. \begin{ex}\label{ejemplo 2.8} Let $k$ be a field, $K = k[G]$ the group $k$-algebra of a finite group $G$ and $\chi\colon G\to k^{\times}$ a character, where $k^{\times}$ is the group of units of $k$. Let $\alpha\colon K\to K$ be the automorphism defined by $\alpha(g) =\chi(g)g$ and let $f = x^n + \lambda_1 x^{n-1}+\cdots +\lambda_n\in K[x]$ be a monic polynomial whose coefficients satisfy the hypotheses required in the introduction. Assume that there exists $g_1\in\mathcal{Z}(G)$ such that $\chi(g_1)$ is a primitive $n$-th root of $1$.
In this example we apply the results obtained in Section~2 to compute the Hochschild homology of $A = K[x,\alpha]/\langle f\rangle$ relative to $K$ (if the characteristic of $k$ is relatively prime to the order of $G$, then $k[G]$ is a separable $k$-algebra and so, by \cite[Theorem 1.2]{G-S}, $\mathrm{HH}^K_*(A)$ coincides with the absolute Hochschild homology $\mathrm{HH}_*(A)$ of $A$). Note that the hypotheses of Theorem~\ref{teorema 2.2} are fulfilled, taking $\breve \lambda = g_1$. In particular, the homological behavior of $A$ is independent of the polynomial $f$, with the exception of its degree and its constant term. Since $\alpha$ is diagonalizable, Theorem~\ref{teorema 2.6} and Corollary~\ref{corolario 2.7} apply. In this case \begin{align*} &\{\omega_1,\dots,\omega_s\} = \chi(G),\\[5pt] &K^{\omega_h} = \bigoplus_{\{g\in G:\chi(g) = \omega_h\}} kg,\\[5pt] &[K,K]^{\omega_h}_{\alpha^j} = \sum_{\{g_1,g_2\in G:\chi(g_1g_2) = \omega_h\}} k(\chi^j(g_2)g_1g_2- g_2g_1). \end{align*} As a concrete example, take the dihedral group $D_{2u}$, that is, the group generated by $g$ and $h$ subject to the relations $g^u = h^2 = 1$ and $hg=g^{-1}h$. Consider the character $\chi\colon D_{2u}\to \mathbb{C}^{\times}$ defined by $\chi(g^jh^l) = (-1)^l$. Let $A$ be the monogenic extension of $K = \mathbb{C}[D_{2u}]$ associated with $\alpha$ and the polynomial $f = x^2$. Direct computations show that \begin{align*} &K^1 = \mathbb{C}[\langle g\rangle],\\ & K^{-1} = \mathbb{C}[\langle g\rangle]h,\\ &[K,K]^1 = \bigoplus_{j=1}^{[(u-1)/2]} \mathbb{C}(g^j-g^{u-j}),\\ & [K,K]^{-1} = \mathbb{C}[\langle g\rangle](g^2-1)h.
\end{align*} Hence, applying Corollary~\ref{corolario 2.7}, we obtain \begin{align*} &\mathrm{HH}_0(A) = \frac{\mathbb{C}[\langle g\rangle]}{\bigoplus_{j=1}^{[(u-1)/2]} \mathbb{C}(g^j-g^{u-j})} \oplus \frac{\mathbb{C}[\langle g\rangle]h}{\mathbb{C}[\langle g\rangle](g^2-1)h},\\ &\mathrm{HH}_{2m+1}(A) = \frac{\mathbb{C}[\langle g\rangle]h}{\mathbb{C}[\langle g\rangle](g^2-1)h} x,\\ &\mathrm{HH}_{2m+2}(A) = \frac{\mathbb{C}[\langle g\rangle]h}{\mathbb{C}[\langle g\rangle](g^2-1)h}. \end{align*} \end{ex} Next we consider another situation in which the homology of $A$ can be computed. The following results are very close to those valid in the commutative setting. \begin{teo}\label{teorema 2.9} If $\alpha$ is the identity map, then $$ C^S_r(A) = \frac{K}{[K,K]}\oplus \frac{K}{[K,K]} x\oplus \cdots \oplus \frac{K}{[K,K]}x^{n-1} = \frac{A}{[A,A]}. $$ Moreover, the odd boundary maps $d_{2m+1}$ of $C^S(A)$ are zero, and the even boundary maps $d_{2m}$ take $[a]$ to $[f'a]$. \end{teo} \begin{proof} This is immediate. \end{proof} \begin{coro}\label{corolario 2.10} If $\alpha$ is the identity map, then \begin{align*} & \mathrm{H}^K_0(A) = \frac{A}{[A,A]},\\ & \mathrm{H}^K_{2m+1}(A) = \frac{A}{[A,A]+f'A},\\ & \mathrm{H}^K_{2m+2}(A) = \frac{([A,A]:f')}{[A,A]}, \end{align*} where $([A,A]:f') = \left\{a\in A: f'a\in [A,A]\right\}$. \end{coro} \subsection{Hochschild homology of rank~$1$ Hopf algebras} Let $k$ be a field of characteristic zero. Recall that $k^{\times}$ denotes the group of units of $k$. Let $G$ be a finite group and $\chi\colon G\to k^{\times}$ a character. Assume that there exists $g_1\in \mathcal{Z}(G)$ such that $\chi(g_1)$ is a primitive $n$-th root of $1$.
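As a quick sanity check of Corollary~\ref{corolario 2.10} (a classical computation, recorded here only for illustration): take $K = k$ of characteristic zero, $\alpha = \mathrm{id}$ and $f = x^2$, so $A = k[x]/\langle x^2\rangle$. Then $[A,A] = 0$ and $f' = 2x$, whence
\[
\mathrm{H}^K_0(A) = A, \qquad \mathrm{H}^K_{2m+1}(A) = \frac{A}{2xA} \cong k, \qquad \mathrm{H}^K_{2m+2}(A) = (0:2x) = kx \cong k,
\]
recovering the well-known Hochschild homology of the dual numbers, which is one-dimensional in each positive degree.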
In this subsection we compute the Hochschild homology of $A = k[G][x,\alpha]/\langle x^n-\xi(g_1^n-1)\rangle$ over $k$, where $\xi\in k$ and $\alpha\in \mathrm{Aut}(k [G])$ is defined by $\alpha(g) = \chi(g)g$. We divide the problem into three cases. The first and second ones give the Hochschild homology of rank~$1$ Hopf algebras. Let $C_n\subseteq k$ be the set of all $n$-th roots of~$1$. \medskip \noindent\bf $\boldsymbol{\xi} \boldsymbol{=} \boldsymbol{0}$.\rm\enspace In this case $A = K[x,\alpha]/\langle x^n\rangle$, where $K = k[G]$. From Corollary~\ref{corolario 2.7} it follows that \begin{align*} &\mathrm{HH}_0(A) = \frac{K}{[K,K]},\\ &\mathrm{HH}_{2m+1}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w_{\alpha^{(m+1)n}}} x^{n-1},\\ &\mathrm{HH}_{2m+2}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w_{\alpha^{(m+1)n}}}. \end{align*} \medskip \noindent\bf $\boldsymbol{\xi} \boldsymbol{\ne} \boldsymbol{0}$ and $\boldsymbol{\chi^n} \boldsymbol{=} \boldsymbol{\mathrm{id}}$.\rm\enspace In this case $f=x^n-\xi(g_1^n-1)$ satisfies the hypotheses required in the introduction (that is, $\alpha(\xi(g_1^n-1)) = \xi(g_1^n-1)$ and $\xi(g_1^n-1)\lambda = \alpha^n(\lambda)\xi(g_1^n-1)$). Moreover, $K = k[G]$ is separable over $k$ and so $\mathrm{HH}_*(A) = \mathrm{HH}^K_*(A)$. By Corollary~\ref{corolario 2.7} we have \begin{align*} &\mathrm{HH}_0(A) = \frac{K^1}{[K,K]^1}\oplus \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w+K^w(g_1^n-1)},\\ &\mathrm{HH}_{2m+1}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{\{\lambda\in K^w:\lambda(g_1^n-1)\in [K,K]^w \}}{[K,K]^w}x^{n-1},\\ &\mathrm{HH}_{2m+2}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w + K^w(g_1^n-1)}.
\end{align*} \medskip \noindent\bf $\boldsymbol{\xi} \boldsymbol{\ne} \boldsymbol{0}$ and $\boldsymbol{\chi^n} \boldsymbol{\ne} \boldsymbol{\mathrm{id}}$.\rm\enspace Let $g\in G$ be such that $\chi^n(g)\neq 1$. Since $$ g^{-1}(x^n-\xi(g_1^n-1))g = \chi^n(g)x^n - \xi(g_1^n -1), $$ we conclude that the ideal $\langle x^n-\xi(g_1^n-1)\rangle$ coincides with the ideal $\langle x^n, g_1^n-1\rangle$. So, $A = k[G/\langle g_1^n\rangle][x,\widetilde{\alpha}]/\langle x^n\rangle$, where $\widetilde{\alpha}$ is the automorphism induced by $\alpha$. We now consider $K=k[G/\langle g_1^n\rangle]$ and $f=x^n$. These data satisfy the hypotheses of Theorem~\ref{teorema 2.6}. Moreover, the algebra $K=k[G/\langle g_1^n\rangle]$ is separable over $k$ and so $\mathrm{HH}_*(A) = \mathrm{HH}^K_*(A)$. Thus, by Corollary~\ref{corolario 2.7}, we have \begin{align*} &\mathrm{HH}_0(A) = \frac{K}{[K,K]},\\ &\mathrm{HH}_{2m+1}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w_{\widetilde{\alpha}^{(m+1)n}}} x^{n-1},\\ &\mathrm{HH}_{2m+2}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w_{\widetilde{\alpha}^{(m+1)n}}}. \end{align*} \section{Cyclic homology of $A$} In this section we obtain a mixed complex, simpler than the canonical one of Tsygan, computing the cyclic homology of a noncommutative monogenic extension $A$. A simple tensor $a_0\otimes \cdots \otimes a_r\in A\otimes \overline{A}^{\otimes^r}$ will be called {\em monomial} if there exist $\lambda\in K\setminus \{0\}$, $0\le i_0<n$ and $1\le i_1,\dots,i_r<n$ such that $a_0=\lambda x^{i_0}$ and $a_j=x^{i_j}$ for $j>0$. We define the {\em degree} of a monomial tensor $$ \lambda x^{i_0}\otimes \cdots \otimes x^{i_r}\in A\otimes \overline{A}^{\otimes^r} $$ as $\deg(\lambda x^{i_0}\otimes \cdots \otimes x^{i_r})=i_0+\cdots + i_r$.
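For example, with $n=3$ the monomial tensor $\lambda x^2\otimes x\otimes x^2\in A\otimes \overline{A}^{\otimes^2}$ has degree $2+1+2=5$, while $\lambda\otimes x\otimes x$ has degree $2$.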
Since $1,x,\dots,x^{n-1}$ is a basis of $\overline{A}$ as a left $K$-module, each element $\mathbf{a} \in A\otimes \overline{A}^{\otimes^r}$ can be written in a unique way as a sum of monomial tensors. The {\em degree} $\deg(\mathbf{a})$ of $\mathbf{a}$ is the maximum of the degrees of its monomial tensors. Since $[A\otimes\overline{A}^{\otimes^r},K]$ is a homogeneous $k$-submodule of $A\otimes\overline{A}^{\otimes^r}$, we have a well-defined concept of degree on $A\otimes\overline{A}^{\otimes^r}\otimes$. Similarly one defines the {\em degree} $\deg(\mathbf{a})$ of an element $\mathbf{a}\in A\otimes \overline{A}^{\otimes^r}\otimes A$. \begin{prop}\label{prop 3.1} We have $\deg(\omega_{r+1}(\mathbf{a}))\le \deg(\mathbf{a})$. \end{prop} \begin{proof} Let $\mathbf{x}_1 = 1\otimes x^{i_1}\otimes\cdots \otimes x^{i_r}\otimes 1\in A\otimes\overline{A}^{\otimes^r}\otimes A$. By the definition of $\omega_{r+1}$ it suffices to show that $\omega'_{r+1}(\mathbf{x}_1)$ is a sum of tensors of the form $$ \lambda'x^{j_0}\otimes x^{j_1}\otimes\cdots\otimes x^{j_{r+2}}, $$ with $j_0+ \cdots + j_{r+2} \le i_1+\cdots+i_r$. From the definitions it follows that $$ \deg(\phi'_r\psi'_r(\mathbf{x}_1)) \le \deg(\mathbf{x}_1). $$ The proposition now follows by induction on $r$, since $$ \omega'_{r+1}(\mathbf{x}_1) = (-1)^r\phi'_r\psi'_r(\mathbf{x}_1)\otimes 1 - \omega'_r(\mathbf{x}_2)x^{i_r}\otimes 1, $$ where $\mathbf{x}_2 = 1\otimes x^{i_1}\otimes\cdots \otimes x^{i_{r-1}}\otimes 1$. \end{proof} \medskip Let $D_r\colon C^S_r(A)\to C^S_{r+1}(A)$ be the composition $D_r = \psi_{r+1} B_r \phi_r$. \begin{teo}\label{teorema 3.2} $(C^S_*(A),d_*,D_*)$ is a mixed complex, giving the cyclic homology of $A$. \end{teo} \begin{proof} From Theorem~\ref{teorema 2.1} we get a special deformation retract between the total complexes of the double complexes $\BC(A\otimes\overline{A}^{\otimes^*}\otimes,b,0)$ and $\BC(C^S_*(A),d_*, 0)$. Consider the perturbation $B$.
By the perturbation lemma it suffices to prove that $\psi_{r+2j+1} (B \omega)^j B \phi_r = 0$ for all $j>0$. Assume first that $r = 2m$. By the definition of $\phi_{2m}$ and Proposition~\ref{prop 3.1}, $$ \deg\bigl((B \omega)^j B \phi_{2m}([\lambda' x^i])\bigr) < mn+n. $$ On the other hand, $\psi_{2m+2j+1}$ vanishes on elements of degree less than or equal to $(m+j)n$. The assertion follows by combining these facts. The case $r = 2m+1$ is similar. \end{proof} \begin{teo}\label{teorema 3.3} The Connes operator $D_*$ is given by \begin{align*} & D_{2m}([\lambda x^j])=\left[\sum_{h=0}^{j-1}\alpha^{mn+h}(\lambda) x^{j-1}\right]\\ & \phantom{D_{2m}([\lambda x^j])}+\sum_{u=0}^{m-1} \left[\overline{\sum_{i=1}^n\lambda_{n-i} \left(\sum_{l=0}^{i-1} \alpha^{nu+l}(\lambda)\right) x^{j+i-1}} \right],\\ & D_{2m+1}([\lambda x^j]) = \begin{cases}0&\text{if $j<n-1$,}\\ \left[(\mathrm{id} - \alpha) \left(\sum_{u=0}^m \alpha^{nu}(\lambda)\right)\right]&\text{if $j=n-1$.} \end{cases} \end{align*} \end{teo} \begin{proof} Besides the notations introduced in Theorem~\ref{teorema 1.2} we use the following ones: \begin{itemize} \item $\breve{\mathbf{x}}^{\boldsymbol{\ell}_{u,1}} = x^{l_u}\otimes x\otimes\cdots \otimes x^{l_1}\otimes x$, \item $\widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,u+1}} = x\otimes x^{l_m}\otimes\cdots \otimes x\otimes x^{l_{u+1}}$, \item $|\boldsymbol{\ell}_{u,1}| = \ell_1 +\cdots + \ell_u + u$. \end{itemize} \noindent We shall first compute $D_{2m+1}$.
By definition, $$ B \phi_{2m+1}([\lambda x^j]) = \sum_{u=0}^m \sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} \Delta_{\mathbf{i},u}^{\boldsymbol{\ell}} - \sum_{u=0}^m \sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} \Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}}, $$ where \begin{align*} & \Delta_{\mathbf{i},u}^{\boldsymbol{\ell}} = [\boldsymbol{\lambda}_{\mathbf{n} - \mathbf{i}}\alpha^{|\boldsymbol{\ell}_{u,1}|}(\lambda) \otimes \breve{\mathbf{x}}^{\boldsymbol{\ell}_{u,1}} \otimes x^j x^{|\mathbf{i}- \boldsymbol{\ell}|-m}\otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,u+1}}\otimes x],\\ & \Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}} = [\boldsymbol{\lambda}_{\mathbf{n} - \mathbf{i}}\alpha^{|\boldsymbol{\ell}_{u,1}|+1}(\lambda) \otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{u,1}}\otimes x \otimes x^j x^{|\mathbf{i}- \boldsymbol{\ell}|-m}\otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,u+1}}]. \end{align*} If $\psi_{2m+2}(\Delta_{\mathbf{i},u}^{\boldsymbol{\ell}}) \ne 0$, then $l_1 = \cdots = l_m = n-1$. So $i_1 = \cdots = i_m = n$. Thus, $$ \sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} \psi_{2m+2}(\Delta_{\mathbf{i},u}^{\boldsymbol{\ell}}) = [\alpha^{nu}(\lambda) \overline{x^{j+1}}] = \begin{cases} 0 & \text{if $j<n-1$,}\\ [\alpha^{nu}(\lambda)] & \text{if $j=n-1$.} \end{cases} $$ Similarly, $\psi_{2m+2}(\Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}}) \ne 0$ implies that $l_1 = \cdots = l_m = n-1$.
Hence $i_1 = \cdots = i_m = n$ and $$ \sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} \psi_{2m+2}(\Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}}) = [\alpha^{nu+1}(\lambda) \overline{x^{j+1}}] = \begin{cases} 0 & \text{if $j<n-1$,}\\ [\alpha^{nu+1}(\lambda)] & \text{if $j=n-1$.} \end{cases} $$ The formula for $D_{2m+1}$ follows immediately from these facts. We now compute $D_{2m}$. By definition, $$ B \phi_{2m}([\lambda x^j]) = \sum_{u=0}^{m-1} \sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} (\Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}} + \Delta_{\mathbf{i},u}^{\boldsymbol{\ell}}) + \sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} \Upsilon_{\mathbf{i}}^{\boldsymbol{\ell}}, $$ where \begin{align*} & \Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}} = [\boldsymbol{\lambda}_{\mathbf{n}- \mathbf{i}} \alpha^{|\boldsymbol{\ell}_{u,1}|}(\lambda) \otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{u,1}} \otimes x^j x^{|\mathbf{i}- \boldsymbol{\ell}|-m}\otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,u+1}}],\\ & \Delta_{\mathbf{i},u}^{\boldsymbol{\ell}} = [\boldsymbol{\lambda}_{\mathbf{n} - \mathbf{i}}\alpha^{|\boldsymbol{\ell}_{u+1,1}|-1}(\lambda) \otimes x^{l_{u+1}}\otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{u,1}} \otimes x^j x^{|\mathbf{i}- \boldsymbol{\ell}|-m}\otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,u+2}}\otimes x],\\ & \Upsilon_{\mathbf{i}}^{\boldsymbol{\ell}} = [\boldsymbol{\lambda}_{\mathbf{n}- \mathbf{i}} \alpha^{|\boldsymbol{\ell}_{m,1}|}(\lambda) \otimes \widetilde{\mathbf{x}}^{\boldsymbol{\ell}_{m,1}} \otimes x^j x^{|\mathbf{i}- \boldsymbol{\ell}|-m}].
\end{align*}
If $\psi_{2m+1}(\Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}}) \neq 0$, then $l_1 = \cdots = \widehat{l_{u+1}} =\cdots = l_m = n-1$. In this case $i_1 = \cdots = \widehat{i_{u+1}} =\cdots = i_m = n$ and
\begin{align*}
\psi_{2m+1}(\Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}}) & = \sum_{h=0}^{l-1} \left[x^{l-h-1} \lambda_{n-i} \alpha^{nu}(\lambda) \overline{\stackrel{\dots\dots\dots\dots\dots\dots\dots}{x^{j+i-l-1}}x}x^h\right]\\
& =\sum_{h=0}^{l-1} \left[ \lambda_{n-i} \alpha^{nu+l-h-1}(\lambda) x^{l-h-1}\overline{\stackrel{\dots\dots\dots\dots\dots\dots\dots}{x^{j+i-l-1}}x}x^h\right]\\
& = \sum_{h=0}^{l-1} \left[\lambda_{n-i} \alpha^{nu+l-h-1}(\lambda) x^{l-1} \overline{\stackrel{\dots\dots\dots\dots\dots\dots\dots}{x^{j+i-l-1}}x}\right]\\
& = \sum_{h=0}^{l-1} \left[\lambda_{n-i} \alpha^{nu+l-h-1}(\lambda)x^{l-1}\left(\overline{x^{j+i-l}} - \overline{x^{j+i-l-1}}x \right) \right].
\end{align*}
So,
\begin{align*}
\sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} \psi_{2m+1}(\Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}}) & = \sum_{i=1}^n \sum_{l=1}^{i-1}\sum_{h=0}^{l-1}\left[\lambda_{n-i} \alpha^{nu+l-h-1}(\lambda)x^{l-1} \overline{x^{j+i-l}}\right]\\
&-\sum_{i=1}^n\sum_{l=2}^i\sum_{h=1}^{l-1}\left[\lambda_{n-i}\alpha^{nu+l-h-1}(\lambda)x^{l-1} \overline{x^{j+i-l}} \right]\\
& = \sum_{i=1}^n\sum_{l=1}^{i-1}\left[\lambda_{n-i}\alpha^{nu+l-1}(\lambda)x^{l-1} \overline{x^{j+i-l}} \right].
\end{align*}
Similarly, $\psi_{2m+1}(\Delta_{\mathbf{i},u}^{\boldsymbol{\ell}})\ne 0$ implies $l_2 = \cdots = l_m = n-1$.
In this case $i_2 = \cdots = i_m = n$ and
\begin{align*}
\psi_{2m+1}(\Delta_{\mathbf{i},u}^{\boldsymbol{\ell}}) & = \left[\lambda_{n-i} \alpha^{nu+l}(\lambda) \overline{x^{l}\stackrel{\dots\dots\dots\dots\dots\dots\dots}{x^{j+i-l-1}}}\right]\\
& = \left[\lambda_{n-i} \alpha^{nu+l}(\lambda)\left( \overline{x^{j+i-1}} - x^{l}\overline{x^{j+i-l-1}}\right)\right].
\end{align*}
So,
\begin{align*}
\sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf{i}}} \psi_{2m+1}(\Delta_{\mathbf{i},u}^{\boldsymbol{\ell}}) & = \sum_{i=1}^n \left[\lambda_{n-i} \left(\sum_{l=1}^{i-1} \alpha^{nu+l}(\lambda)\right) \overline{x^{j+i-1}}\right]\\
& - \sum_{i=1}^n \sum_{l=1}^{i-1}\left[\lambda_{n-i} \alpha^{nu+l}(\lambda)x^{l}\overline{x^{j+i-l-1}}\right].
\end{align*}
Consequently,
\begin{align*}
\sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} \psi_{2m+1}(\Gamma_{\mathbf{i},u}^{\boldsymbol{\ell}} + \Delta_{\mathbf{i},u}^{\boldsymbol{\ell}}) & = \sum_{i=1}^n \sum_{l=1}^{i-1}\left[\lambda_{n-i} \alpha^{nu+l-1}(\lambda)x^{l-1}\overline{x^{j+i-l}}\right]\\
& - \sum_{i=1}^n \sum_{l=2}^{i}\left[\lambda_{n-i} \alpha^{nu+l-1}(\lambda)x^{l-1}\overline{x^{j+i-l}}\right]\\
& + \sum_{i=1}^n \left[\lambda_{n-i} \left(\sum_{l=1}^{i-1} \alpha^{nu+l}(\lambda)\right)\overline{x^{j+i-1}}\right] \\
& = \sum_{i=1}^n \left[\lambda_{n-i} \alpha^{nu}(\lambda) \overline{x^{j+i-1}}\right]\\
& + \left[\sum_{i=1}^n\lambda_{n-i}\left(\sum_{l=1}^{i-1} \alpha^{nu+l}(\lambda)\right) \overline{x^{j+i-1}} \right]\\
& = \left[\overline{\sum_{i=1}^n\lambda_{n-i} \left(\sum_{l=0}^{i-1} \alpha^{nu+l}(\lambda)\right) x^{j+i-1}} \right].
\end{align*}
Lastly, $\psi_{2m+1}(\Upsilon_{\mathbf{i}}^{\boldsymbol{\ell}}) = 0$ except if $l_1 = \cdots =l_m = n-1$.
In this last case $i_1 = \cdots = i_m = n$. So
$$
\sum_{\mathbf{i}\in\mathds{I}_m} \sum_{\boldsymbol{\ell}\in \mathds{J}_{\mathbf i}} \psi_{2m+1}(\Upsilon_{\mathbf{i}}^{\boldsymbol{\ell}}) = \sum_{h=0}^{j-1} \left[x^{j-h-1}\alpha^{mn}(\lambda) x^h\right] = \left[\sum_{h=0}^{j-1}\alpha^{mn+h}(\lambda) x^{j-1}\right].
$$
The formula for $D_{2m}$ follows easily from all these facts.
\end{proof}

\subsection{Explicit computations}
In this subsection we compute the cyclic homology of $A$ with coefficients in $A$, under suitable hypotheses. We will freely use the notation introduced at the beginning of Section~2 and below Remark~\ref{remark 2.5}. Recall that by Theorem~\ref{teorema 2.2}, if there exists $\breve{\lambda}\in \mathcal{Z}(K)$ such that
\begin{itemize}
\item $\alpha^n(\breve{\lambda})=\breve{\lambda}$,
\item $\breve{\lambda}-\alpha^i(\breve{\lambda})$ is invertible in $K$ for $1\le i<n$,
\end{itemize}
then
\[
C^S_r(A) = \begin{cases} \frac{K}{[K,K]_{\alpha^{mn}}}&\text{if $r=2m$,}\\ \frac{K}{[K,K]_{\alpha^{(m+1)n}}}x^{n-1}& \text{if $r=2m+1$.}\end{cases}
\]
Moreover, by Theorem~\ref{teorema 2.3}, the Hochschild boundary maps of the mixed complex $(C^S_*(A),d_*,D_*)$ are given by
\begin{align*}
& d_{2m+1}([\lambda]x^{n-1}) = [(\alpha(\lambda)-\lambda)\lambda_n],\\
& d_{2m+2}([\lambda]) = \Biggl[\sum_{\ell=0}^{n-1} \alpha^{\ell}(\lambda) \Biggr]x^{n-1}.
\end{align*}
We now compute the Connes operator $D_*$.

\begin{teo}\label{teorema 3.4}
Under the hypotheses of Theorem~\ref{teorema 2.2}, we have:
\begin{align*}
& D_{2m}([\lambda]) = 0,\\
& D_{2m+1}([\lambda] x^{n-1}) = \left[(\mathrm{id} - \alpha) \left(\sum_{u=0}^m \alpha^{nu}(\lambda)\right)\right].
\end{align*}
\end{teo}

\begin{proof}
It follows immediately from Theorem~\ref{teorema 3.3}.
\end{proof}

\begin{teo}\label{teorema 3.5}
Assume that the hypotheses of Theorem~\ref{teorema 2.6} are fulfilled. Then the mixed complex $(C^S_*(A),d_*,D_*)$ decomposes as a direct sum
$$
(C^S_*(A),d_*,D_*) = \bigoplus_{h=1}^s (C^{S,\omega_h}_*(A),d^{\omega_h}_*,D^{\omega_h}_*),
$$
where the Hochschild complexes $(C^{S,\omega_h}_*(A),d^{\omega_h}_*)$ are as in Theorem~\ref{teorema 2.6}. Moreover, the Connes operators $D^{\omega_h}_*$ satisfy $D^{\omega_h}_{2m}=0$ and
$$
D^{\omega_h}_{2m+1}([\lambda]x^{n-1})= (1-\omega_h)\left(\sum_{u=0}^m \omega_h^{nu}\right)[\lambda].
$$
\end{teo}

\begin{proof}
It follows immediately from Theorem~\ref{teorema 3.4}.
\end{proof}

In the rest of this section we assume that $k$ is a field of characteristic zero and that the hypotheses of Theorem~\ref{teorema 2.6} are fulfilled. We let $\mathrm{HC}^{K,\omega_h}_*(A)$ denote the cyclic homology of $(C^{S,\omega_h}_*(A),d^{\omega_h}_*,D^{\omega_h}_*)$.

\begin{teo}\label{teorema 3.6}
The cyclic homology of $A$ decomposes as
$$
\mathrm{HC}^K_*(A) = \bigoplus_{h=1}^s \mathrm{HC}^{K,\omega_h}_*(A).
$$
Moreover, we have:
\begin{align*}
& \mathrm{HC}^{K,\omega_h}_{2m}(A) = \begin{cases} \frac{K^1}{[K,K]^1} & \text{if $h=1$,}\\[5pt] \frac{K^{\omega_h}}{[K,K]^{\omega_h} + K^{\omega_h}\lambda_n} & \text{if $\omega_h^n \ne 1$,}\\[5pt] \frac{K^{\omega_h}}{[K,K]^{\omega_h}+K^{\omega_h}\lambda_n^{m+1}}& \text{otherwise,} \end{cases}\\[5pt]
& \mathrm{HC}^{K,\omega_h}_{2m+1}(A) = \begin{cases} 0 & \text{if $h=1$ or $\omega_h^n \ne 1$,}\\[5pt] \frac{\{\lambda\in K^{\omega_h}: \lambda\lambda_n^m\in [K,K]^{\omega_h}\}} {[K,K]^{\omega_h}_{\alpha^{(m+1)n}}}x^{n-1} & \text{otherwise.} \end{cases}
\end{align*}
\end{teo}

\begin{proof}
The first assertion is an immediate consequence of Theorem~\ref{teorema 3.5}, and the computation of $\mathrm{HC}^{K,\omega_h}_*$ for $h=1$ and for $\omega_h^n \ne 1$ follows from Corollary~\ref{corolario 2.7}. So, in order to finish the proof, it remains to consider the case $h>1$ and $\omega_h^n = 1$. By Theorems~\ref{teorema 2.6} and \ref{teorema 3.4}, the cyclic homology of the mixed complex $(C^{S, \omega_h}_*(A),d^{\omega_h}_*,D^{\omega_h}_*)$ is the homology of
\[
\spreaddiagramcolumns{1.75pc}
\xymatrix{
\vdots \dto^-{d^{\omega_h}} &\vdots \dto^-{0}& \vdots \dto^-{d^{\omega_h}}& \vdots \dto^-{0}& \vdots \dto^-{d^{\omega_h}}\\
X_4 \dto^-{0} & X_3 \lto_-{l_{2(1-\omega_h)}}\dto^-{d^{\omega_h}} & X_2 \dto^-{0} \lto_-{0}& X_1 \dto^-{d^{\omega_h}}\lto_-{l_{1-\omega_h}}& X_0 \lto_-{0}\\
X_3 \dto^-{d^{\omega_h}} & X_2 \lto_-{0}\dto^-{0} & X_1 \dto^-{d^{\omega_h}} \lto_-{l_{1-\omega_h}}& X_0 \lto_-{0}\\
X_2 \dto^-{0} & X_1 \lto_-{l_{1-\omega_h}}\dto^-{d^{\omega_h}} & X_0 \lto_-{0}\\
X_1 \dto^-{d^{\omega_h}} & X_0 \lto_-{0}\\
X_0,}
\]
where
\begin{itemize}
\item $X_{2m} = \frac{K^{\omega_h}}{[K,K]^{\omega_h}_{\alpha^{mn}}}$ and $X_{2m+1} = \frac{K^{\omega_h}}{[K,K]^{\omega_h}_{\alpha^{(m+1)n}}}x^{n-1}$,
\item
$l_{m(1-\omega_h)}([\lambda]) = m(1-\omega_h)[\lambda]$,
\item $d_{2m+1}^{\omega_h}([\lambda]x^{n-1}) = (\omega_h-1)[\lambda\lambda_n]$.
\end{itemize}
\medskip
We first compute the homology in degree $2m$. Let
$$
\iota\colon X_0\to X_{2m}\oplus X_{2m-2}\oplus\cdots\oplus X_0
$$
be the canonical inclusion. Using that each map $l_{i(1-\omega_h)}$ is an isomorphism, it is easy to see that $\iota$ induces an epimorphism
$$
\overline{\iota}\colon X_0\to \mathrm{HC}^{K,\omega_h}_{2m}(A).
$$
It is now easy to see that the boundary of
$$
([\zeta_{2m+1}],[\zeta_{(2m-1)+1}],\dots,[\zeta_1])\in X_{2m+1}\oplus X_{(2m-1)+1} \oplus \cdots \oplus X_1
$$
equals $\iota([\lambda])$ if and only if
\begin{equation}
[\zeta_{2i+1}] = \frac{(-1)^{m-i}i!}{m!} [\zeta_{2m+1}\lambda_n^{m-i}]\quad\text{for $0\le i\le m$}\label{eq3}
\end{equation}
and $[\zeta_{2m+1}\lambda_n^{m+1}] = [\lambda]$. The assertion about $\mathrm{HC}_{2m}^{K,\omega_h}(A)$ follows easily from these facts.

We now compute the homology in degree $2m+1$. It is immediate that
$$
([\zeta_{2m+1}],[\zeta_{(2m-1)+1}],\dots,[\zeta_1])\in X_{2m+1}\oplus X_{(2m-1)+1} \oplus \cdots \oplus X_1
$$
is a cycle of degree $2m+1$ if and only if it satisfies \eqref{eq3} and
$$
\zeta_{2m+1} \lambda_n^{m+1} \in [K,K]^{\omega_h}.
$$
This ends the computation of $\mathrm{HC}_{2m+1}^{K,\omega_h}(A)$.
\end{proof}

\begin{rem}\label{nota 3.7}
From the above computations it follows that:
\begin{enumerate}
\item If $h=1$ or $\omega_h^n\ne 1$, then the map
$$
S\colon \mathrm{HC}^{K,\omega_h}_{2m+2}(A)\to \mathrm{HC}^{K,\omega_h}_{2m}(A)
$$
is the identity map.
\item If $h>1$ and $\omega_h^n = 1$, then we have:
\renewcommand{\descriptionlabel}[1]{\textrm{#1}}
\begin{description}
\item[\rm a.] The map $S\colon \mathrm{HC}^{K,\omega_h}_{2m+2}(A)\to \mathrm{HC}^{K,\omega_h}_{2m}(A)$ is the canonical surjection.
\item[\rm b.] The map $i\colon \mathrm{HH}^{K,\omega_h}_{2m}(A)\to \mathrm{HC}^{K,\omega_h}_{2m}(A)$ is given by
$$
i([\lambda]) = \frac{1}{m!}[\lambda\lambda_n^m].
$$
\item[\rm c.] The map $B\colon \mathrm{HC}^{K,\omega_h}_{2m}(A)\to \mathrm{HH}^{K,\omega_h}_{2m+1}(A)$ is zero.
\item[\rm d.] The map $S\colon \mathrm{HC}^{K,\omega_h}_{2m+3}(A)\to \mathrm{HC}^{K,\omega_h}_{2m+1}(A)$ is given by
$$
S([\lambda]x^{n-1}) = \frac{1}{m+1}[\lambda\lambda_n]x^{n-1}.
$$
\item[\rm e.] The map $i\colon \mathrm{HH}^{K,\omega_h}_{2m+1}(A)\to \mathrm{HC}^{K,\omega_h}_{2m+1}(A)$ is the canonical inclusion.
\item[\rm f.] The map $B\colon \mathrm{HC}^{K,\omega_h}_{2m+1}(A)\to \mathrm{HH}^{K,\omega_h}_{2m+2}(A)$ is given by
$$
B([\lambda]x^{n-1}) = (m+1)(1-\omega_h)[\lambda].
$$
\end{description}
\end{enumerate}
\end{rem}

\begin{rem}\label{nota 3.8}
Theorem~\ref{teorema 3.6} applies in particular to the monogenic extensions of finite group algebras $K = k[G]$ considered in Example~\ref{ejemplo 2.8}. Note that since $K$ is a separable $k$-algebra, this computes the absolute cyclic homology, as follows easily from \cite[Theorem~1.2]{G-S} using the SBI-sequence. We now consider a concrete example. Let $G=D_{2n}$, $\chi$, $\alpha$ and $A$ be as in Example~\ref{ejemplo 2.8}. Then:
\begin{align*}
&\mathrm{HC}_{2m}(A) = \frac{\mathbb{C}[\langle g\rangle]}{\bigoplus_{j=1}^{[(u-1)/2]} \mathbb{C}(g^j-g^{u-j})} \oplus \frac{\mathbb{C}[\langle g\rangle]h}{\mathbb{C}[\langle g\rangle](g^2-1)h},\\
&\mathrm{HC}_{2m+1}(A) = \frac{\mathbb{C}[\langle g\rangle]h}{\mathbb{C}[\langle g\rangle](g^2-1)h} x.
\end{align*}
\end{rem}

\subsection{Cyclic homology of rank~$1$ Hopf algebras}
Let $k$, $G$, $\chi$, $g_1$, $\alpha$ and $A$ be as in Subsection~2.2. Here we compute the cyclic homology of $A$. Let $C_n\subseteq k$ be the set of all $n$-th roots of $1$.
As in the above-mentioned subsection we consider three cases.

\medskip

\noindent\textbf{$\boldsymbol{\xi} = \boldsymbol{0}$.}\enspace That is, $A = K[x,\alpha]/\langle x^n\rangle$, where $K = k[G]$. From Theorem~\ref{teorema 3.6} it follows that
\begin{align*}
&\mathrm{HC}_{2m}(A) = \frac{K}{[K,K]},\\
&\mathrm{HC}_{2m+1}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w_{\alpha^{(m+1)n}}} x^{n-1}.
\end{align*}

\medskip

\noindent\textbf{$\boldsymbol{\xi} \ne \boldsymbol{0}$ and $\boldsymbol{\chi^n} = \boldsymbol{\mathrm{id}}$.}\enspace In this case $A = K[x,\alpha]/\langle x^n-\xi(g_1^n-1)\rangle$, where $K = k[G]$. Arguing as in Subsection~2.2, but using Theorem~\ref{teorema 3.6} instead of Corollary~\ref{corolario 2.7}, we obtain
\begin{align*}
&\mathrm{HC}_{2m}(A) = \frac{K^1}{[K,K]^1}\oplus \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w+K^w(g_1^n-1)^{m+1}},\\
&\mathrm{HC}_{2m+1}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{\{\lambda\in K^w:\lambda(g_1^n-1)^m\in [K,K]^w \}}{[K,K]^w}x^{n-1}.
\end{align*}

\medskip

\noindent\textbf{$\boldsymbol{\xi} \ne \boldsymbol{0}$ and $\boldsymbol{\chi^n} \ne \boldsymbol{\mathrm{id}}$.}\enspace In this case $A = K[x,\widetilde{\alpha}]/\langle x^n\rangle$, where the algebra $K = k[G/\langle g_1^n\rangle]$ and $\widetilde{\alpha}$ is the automorphism induced by $\alpha$. By Theorem~\ref{teorema 3.6}, we obtain
\begin{align*}
&\mathrm{HC}_{2m}(A) = \frac{K}{[K,K]},\\
&\mathrm{HC}_{2m+1}(A) = \bigoplus_{w\in C_n\setminus\{1\}} \frac{K^w}{[K,K]^w_{\widetilde{\alpha}^{(m+1)n}}} x^{n-1}.
\end{align*}

\end{document}
\begin{document}
\begin{abstract}
Path sets are spaces of one-sided infinite symbol sequences corresponding to the one-sided infinite walks beginning at a fixed initial vertex in a directed labeled graph $\mathcal{G}$. Path sets are a generalization of one-sided sofic shifts. This paper studies decimation operations $\psi_{j, n}(\cdot)$, which extract symbol sequences in infinite arithmetic progressions $(\bmod\, n)$ starting with the symbol at position $j$. It also studies a family of $n$-ary interleaving operations $\circledast_n$, one for each $n \ge 1$, which act on an ordered set $(X_0, X_1, \ldots, X_{n-1})$ of sets of one-sided symbol sequences on an alphabet ${\mathcal A}$, producing a set $X_0 \circledast X_1 \circledast \cdots \circledast X_{n-1}$ by interleaving the symbols of each $X_i$ in arithmetic progressions $(\bmod\, n)$. It studies a set of closure operations relating interleaving and decimation. This paper gives basic algorithmic results on presentations of path sets and the existence of a minimal right-resolving presentation. It gives an algorithm for computing presentations of decimations of path sets from presentations of path sets, showing the minimal right-resolving presentation of $\psi_{j,n}(X)$ has at most one more vertex than a minimal right-resolving presentation of $X$. It shows that a path set has only finitely many distinct decimations. It shows the class of path sets on a fixed alphabet is closed under interleavings and gives an algorithm to compute presentations of interleavings of path sets. It studies interleaving factorizations, classifies path sets that have infinite interleaving factorizations, and gives an algorithm to recognize them. It shows the finiteness of a process of iterated interleaving factorizations, which ``freezes'' factors that have infinite interleavings.
\end{abstract}

\title{Interleaving of Path Sets}

\tableofcontents

\section{Introduction}\label{sec:1}

Let $\mathcal{A}$ be a finite alphabet with at least two elements, and let $\mathcal{A}^\mathbb{N}$ denote the full one-sided shift on $\mathcal{A}$: that is, the space of one-sided infinite strings of symbols from ${\mathcal A}$. One-sided symbolic dynamics studies the action of the one-sided shift map ${\rm S}: {\mathcal A}^{{\mathbb N}} \to {\mathcal A}^{{\mathbb N}}$, given by
$$
{\rm S}(a_0a_1 a_2 \cdots) = a_1a_2a_3 \cdots,
$$
on subsets $X \subset {\mathcal A}^{{\mathbb N}}$. This paper considers a class of such $X$ called {\em path sets}, studied in \cite{AL14a}. Consider a finite labeled graph $\mathcal{G} = (G, \mathcal{E})$ whose underlying graph $G=(V, E)$ is a directed graph with edge set $E$ (permitting loops and multiple edges), with labeling ${\mathcal E}: E \to \mathcal{A}$ specifying a labeling of the graph edges by elements of $\mathcal{A}$, and some fixed initial vertex $v$ of $\mathcal{G}$. From $({\mathcal G}, v)$ we obtain a set $\mathcal{P}=X(\mathcal{G}, v) \subset \mathcal{A}^\mathbb{N}$ of symbol sequences $x=a_0a_1a_2 \cdots$ giving the edge labels of all one-sided infinite walks in $\mathcal{G}$ that begin at the marked vertex $v$. We call any such $\mathcal{P}$ a \emph{path set}, and we call $(\mathcal{G},v)$ a \emph{presentation} of $\mathcal{P}=X(\mathcal{G}, v)$. The name path set for these objects was given in \cite{AL14a}, where previous work was reviewed. We let $\mathcal{C}(\mathcal{A})$ denote the class of all path sets on the alphabet $\mathcal{A}$. Path sets are a generalization of one-sided sofic shifts in symbolic dynamics; however, in general they are not invariant under the one-sided shift map. The main feature of path sets is the specification of a fixed initial vertex $v$, which encodes an initial condition that can break shift-invariance in the dynamics.
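The construction of $X(\mathcal{G}, v)$ from a pointed labeled graph can be illustrated computationally. The following Python sketch enumerates the length-$k$ prefixes of the symbol sequences in a path set; the edge-triple representation and the two-vertex example graph are illustrative assumptions, not data from the paper:

```python
def path_set_prefixes(edges, v, k):
    """Length-k label prefixes of one-sided infinite walks starting at v.

    edges: (source, target, label) triples; loops and multiple edges are
    allowed.  For these prefixes to match the path set we implicitly assume
    every vertex has out-degree >= 1 (a "pruned" presentation), so every
    finite walk extends to an infinite one.
    """
    frontier = {(v, "")}  # pairs (current vertex, label word read so far)
    for _ in range(k):
        frontier = {(t, w + a)
                    for (s, w) in frontier
                    for (s2, t, a) in edges if s2 == s}
    return {w for (_, w) in frontier}

# Hypothetical pointed graph on alphabet {0,1}: a 0-loop at v, an edge
# v -> u labeled 1, and a 1-loop at u ("once a 1 is read, only 1s follow").
E = [("v", "v", "0"), ("v", "u", "1"), ("u", "u", "1")]
print(sorted(path_set_prefixes(E, "v", 3)))  # ['000', '001', '011', '111']
```

Here the initial vertex matters: starting the same graph at $u$ instead of $v$ yields only the single sequence $111\cdots$, so the resulting space genuinely depends on the pair $(\mathcal{G}, v)$.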
The path set concept has recently found application to number theory, fractal geometry, and neural networks. More specifically, Abram, Bolshakov, and Lagarias \cite{ABL17} used path sets to study the Hausdorff dimension of intersections of multiplicative translates of $p$-adic Cantor sets, with application to a problem of Erd\H{o}s on ternary expansions of powers of $2$; see \cite{AL14b, AL14c, Lagarias09}. It turns out that these intersections are always presentable as $p$-adic path set fractals, a kind of geometric realization of path sets inside the $p$-adic integers $\mathbb{Z}_p$ viewed as the full shift on $\{0,1,\ldots, p-1\}$. In another direction, Ban and Chang \cite{BC16} show that the mosaic solution space of the initial value problem of a multi-layer cellular neural network is topologically conjugate to a path set. Thus, according to Ban and Chang \cite{BC16}, ``The more properties we know about path sets, the more we know about the topological structure of the solution spaces derived from differential equations with initial conditions, and vice versa.''

This paper studies the effect on path sets of two families of operations defined on one-sided symbol sequences ${\mathcal A}^{{\mathbb N}}$: {\em decimations} and {\em interleavings}. Decimation and interleaving operations are initially defined on individual elements of ${\mathcal A}^{{\mathbb N}}$, extended to act on arbitrary subsets $X \subseteq {\mathcal A}^{{\mathbb N}}$ by set union, and then specialized to path sets ${\mathcal P} \subseteq {\mathcal A}^{{\mathbb N}}$ in this paper. The paper \cite{ALS21} studies these operations acting on general sets $X \subseteq {\mathcal A}^{{\mathbb N}}$. Decimation operations on path sets were studied by the first two authors in \cite{AL14a}.

\begin{defn} \label{decimationdef}
Let $\mathcal{A}$ be a finite alphabet of symbols.
(1) For $j \ge 0$ the {\em $j$-th decimation operation at level $n$}, $\psi_{j,n}: {\mathcal A}^{{\mathbb N}} \to {\mathcal A}^{{\mathbb N}}$, is defined on an individual sequence ${\bf x}= x_0x_1x_2x_3 \cdots$ by
$$
\psi_{j,n}({\bf x}) = {\bf y}:= x_j x_{j+n}x_{j + 2n}x_{j+3n} \cdots.
$$
(2) The {\em $j$-th decimation of level $n$}, denoted $\psi_{j, n}(X)$, of the set $X \subset {\mathcal A}^{{\mathbb N}}$ is the set defined by
$$
\psi_{j, n}(X) := \bigcup_{ {\bf x} \in X} \psi_{j,n}({\bf x}).
$$
\end{defn}

The definition applies for all $j \ge 0$, but a special role is played by the operations with $0 \le j \le n-1$, which we term {\em principal decimations}; see \eqref{eqn:basic-relation} below. The level $n$ decimation operators with $j \ge n$ are obtained from principal decimations by iterating the one-sided shift operator ${\rm S}$, noting that $\psi_{j+n, n}({\bf x}) = {\rm S} \circ \psi_{j, n} ({\bf x})$.

The main emphasis of the paper is the family of {\em interleaving operations}, which comprise an infinite collection of $n$-ary operations $(n \ge 1)$, one for each $n$, defined for arbitrary subsets $X$ of the shift space $\mathcal{A}^\mathbb{N}$.

\begin{defn} \label{interleavingdef}
Let $\mathcal{A}$ be a finite alphabet of symbols.

(1) The {\em $n$-fold interleaving operation} $\circledast_n\colon {\mathcal A}^{{\mathbb N}} \times {\mathcal A}^{{\mathbb N}} \times \cdots \times {\mathcal A}^{{\mathbb N}} \to {\mathcal A}^{{\mathbb N}}$ is defined pointwise on individual sequences ${\bf x}_{j} = a_{j,0} a_{j,1}a_{j,2} \cdots$ for $0 \le j \le n-1$ by
$$
({\bf x}_0, {\bf x}_1, \cdots, {\bf x}_{n-1}) \mapsto {\bf y} :=(\circledast_n)_{j=0}^{n-1} {\bf x}_j = {\bf x}_0 \circledast {\bf x}_1 \circledast \cdots \circledast {\bf x}_{n-1}= b_0b_1b_2 \cdots
$$
with symbol sequence
$$
b_{ni+k}= a_{k, i} \quad \mbox{for} \quad i \ge 0, \,\, 0 \le k \le n-1.
$$
(2) The \emph{$n$-fold interleaving} $X = (\circledast_n)_{i=0}^{n-1} X_i$ of the sets $X_0, X_1, X_2,\ldots, X_{n-1} \subseteq \mathcal{A}^\mathbb{N}$, in the specified order, is
\begin{align*}
\quad\quad (\circledast_n)_{i=0}^{n-1}X_i & := X_0 \circledast X_1\circledast X_2\circledast \cdots \circledast X_{n-1} \\
& = \{(x_i)_{i=0}^{\infty}\in\mathcal{A}^{\mathbb{N}} :\, x_jx_{j+n}x_{j+2n}\ldots \in X_j \text{ for all }0\leq j\leq n-1\}.
\end{align*}
\end{defn}

The notation $\circledast$ above does not indicate the arity of the $n$-fold interleaving operation $\circledast_n$; the arity of composed operations is to be inferred via groupings of parentheses. The $n$-fold interleaving operations of arity $2$ are not commutative, in general $X_1 \circledast X_2 \ne X_2 \circledast X_1$, and not associative, in general $(X_1 \circledast X_2) \circledast X_3 \ne X_1 \circledast(X_2 \circledast X_3)$.

Interleaving and decimation operators are related by a pointwise identity stating that the $n$-fold interleaving of the ordered set of principal decimations at level $n$ gives the identity map:
\begin{equation}\label{eqn:basic-relation}
(\circledast_n)_{j=0}^{n-1} \psi_{j, n}({\bf x}) = {\bf x} \quad \mbox{for} \quad {\bf x} \in {\mathcal A}^{{\mathbb N}}.
\end{equation}
At the set level it shows that the level $n$ principal decimations provide a right-inverse recovering the individual factors of an $n$-fold interleaving:
\begin{equation}\label{eqn:basic-relation2}
X_j = \psi_{j, n} ( X_0 \circledast X_1 \circledast \cdots \circledast X_{n-1}).
\end{equation}

\subsection{Symbolic dynamics}\label{subsec:12}

This paper studies decimation and interleaving operations on path sets from the viewpoint of symbolic dynamics and coding theory, with (one-sided) symbol sequences viewed inside the full one-sided shift $\mathcal{A}^\mathbb{N}$ with the shift topology. This is the product topology where each factor $\mathcal{A}$ has the discrete topology, so that $\mathcal{A}^\mathbb{N}$ is a compact set. Much of symbolic dynamics studies closed sets $X$ which are shift-invariant: ${\rm S} X= X$.

The interleaving concept is useful in coding theory as a method for improving the burst error correction capability of a code, cf. \cite[Section 7.5]{VO89}. The analogue of Definition~\ref{interleavingdef} for finite codes is referred to by coding theorists as \emph{block interleaving at depth $n$}.

Each interleaving operation $\circledast_n$ respects the shift topology in the sense that if $\{ X_i: 0 \le i \le n-1\}$ are closed subsets of the one-sided shift space $\mathcal{A}^\mathbb{N}$, then $X= (\circledast_n)_{i=0}^{n-1} X_i= X_0 \circledast X_1 \circledast X_2\circledast \cdots \circledast X_{n-1}$ is also a closed subset of $\mathcal{A}^\mathbb{N}$.

A great deal of study has been given to the subclasses of {\em shifts of finite type}, and to their generalization, {\em sofic shifts}, studied for two-sided infinite sequences in Lind and Marcus \cite{LM95}. These are special cases of path sets. General path sets ${\mathcal P}$ are not shift-invariant, but we show in Theorem \ref{thm:weak-shift-invariant-0} below that they satisfy a weak form of shift-invariance.

\subsection{Path sets}\label{subsec:13}

We recall the definition of a path set from \cite{AL14a}. A \emph{pointed graph} $(\mathcal{G},v)$ over a finite alphabet $\mathcal{A}$ comprises a finite edge-labeled directed graph $\mathcal{G}= (G, {\mathcal E})$ and a distinguished vertex $v$ of the underlying directed graph $G$.
The directed graph $G=(V, E)$ is specified by its vertex set $V$ and (directed) edge set $E$ with edges $e=(v_1, v_2) \in V \times V$, and the data ${\mathcal E}\subset E \times {\mathcal A}$ specifies the set of labeled edges $(e, a)$, with labels drawn from the alphabet ${\mathcal A}$. We allow loops and multiple edges, but require that all triples $(e, a) = (v_1, v_2, a)$ be distinct. We use interchangeably the terms {\em vertex} and {\em state} of $G$, as in Lind and Marcus \cite[Sect. 2.2]{LM95}. The results of this paper regard the finite alphabet ${\mathcal A}$ as fixed, unless specifically noted otherwise.

\begin{defn} \label{de111}
A \emph{path set}\footnote{In \cite{AL14a} the notation $X_{{\mathcal G}}(v)$ was used.} (or {\em pointed follower set}) ${\mathcal P} = X(\mathcal{G}, v)$ specified by a pointed labeled graph $({\mathcal G}, v)$ with distinguished vertex $v$ is the subset of ${\mathcal A}^{{\mathbb N}}$ made up of the symbol sequences of successive edge labels of all possible one-sided infinite walks in $\mathcal{G}$ issuing from the distinguished vertex $v$.
\end{defn}

We let ${\mathcal C}({\mathcal A})$ denote the collection of all path sets using labels from the alphabet ${\mathcal A}$. We call the data $({\mathcal G}, v)$ a {\em presentation} of the path set ${\mathcal P}= X(\mathcal{G},v)$. A path set ${\mathcal P}$ typically has many different presentations. The set ${\mathcal P}$ by definition includes every symbolic path sequence with multiplicity one, although the graph ${\mathcal G}$ could potentially contain many paths starting from $v$ with identical symbol sequences.

The paper \cite{AL14a} showed that the class $\mathcal{C}(\mathcal{A})$ of all path sets ${\mathcal P}$ with fixed (finite) alphabet $\mathcal{A}$ is closed under all decimation operations $\psi_{j,n}({\mathcal P})$. We will give a second proof of this result in Section \ref{sec:decimation}.
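The decimation and interleaving operations of Definitions~\ref{decimationdef} and \ref{interleavingdef}, and the identity \eqref{eqn:basic-relation} relating them, can be sketched on finite words. This is a minimal string-level illustration, not the paper's graph-level algorithms:

```python
def decimate(x, j, n):
    """psi_{j,n}: keep the symbols in positions j, j+n, j+2n, ..."""
    return x[j::n]

def interleave(*blocks):
    """n-fold interleaving: symbol ni+k of the output is symbol i of block k."""
    n = len(blocks)
    m = min(len(b) for b in blocks)
    return "".join(blocks[k][i] for i in range(m) for k in range(n))

x = "abcdefghijkl"
print(decimate(x, 1, 3))         # behk
print(interleave("ace", "bdf"))  # abcdef
# Interleaving the n principal decimations recovers x, as in the
# pointwise identity relating the two operations:
assert interleave(*(decimate(x, j, 3) for j in range(3))) == x
```

The final assertion is the finite-word analogue of $(\circledast_n)_{j=0}^{n-1} \psi_{j,n}({\bf x}) = {\bf x}$.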
This paper shows in addition that the class $\mathcal{C}(\mathcal{A})$ of all path sets with fixed (finite) alphabet $\mathcal{A}$ is closed under all the interleaving operations $\circledast_n$. Conversely, it shows that if a path set ${\mathcal P}$ has an $n$-fold interleaving factorization, then each factor in the factorization is necessarily a path set. In contrast, the smaller classes of one-sided sofic shifts and of shifts of finite type are not closed under $n$-fold interleaving: interleaving can break one-sided shift-invariance even for the most well-behaved shift spaces. Path sets therefore appear to be a natural level of generality at which to study interleaving operations.

\subsection{Main results}\label{subsec:14}

In this paper we study the decimation and interleaving operations restricted to path sets.

\subsubsection{Presentations of path sets}\label{subsec:141}

In Section \ref{sec:prelim} we prove basic properties of presentations of path sets, and discuss algorithms to test for these properties, in the language of symbolic dynamics. The paper \cite[Section 3]{AL14a} showed that every path set ${\mathcal P}$ has a presentation with several additional properties: {\em right-resolving, reachable, and pruned}.\footnote{A labeled directed graph $\mathcal{G}$ is called \emph{right-resolving} if any two edges emanating from the same vertex have distinct labels. A pointed graph $(\mathcal{G},v)$ is called \emph{reachable} if there is a directed path in $\mathcal{G}$ from the initial vertex $v$ to every other vertex of $\mathcal{G}$; for a finite automaton this property is called {\em accessible}. A reachable pointed graph $(\mathcal{G},v)$ is called \emph{pruned} if it has no sinks, meaning every vertex has out-degree at least one; for finite automata this property is called {\em coaccessible}.
A finite automaton is \emph{trim} if it is both accessible and coaccessible; see \cite{Eilenberg74}, \cite[page 27]{PP04}.} The right-resolving property guarantees uniqueness of symbolic paths: any two distinct paths in the graph ${\mathcal G}$ starting from the initial vertex $v$ have different symbol sequences; that is, for such a presentation the (finite or infinite) symbol sequence uniquely specifies the path. In contrast, in symbolic dynamics sofic systems need not have unique minimal right-resolving presentations (in the sense of sofic system presentations) if the sofic system is reducible; compare \cite[Example 3.3.21]{LM95}.

\subsubsection{Decimations and the shift}\label{subsec:142}

The paper \cite{AL14a} showed that all decimations $\psi_{j,n}({\mathcal P})$ of a path set are path sets, giving a constructive algorithm to compute a presentation of $\psi_{j,n}({\mathcal P})$ from that of ${\mathcal P}$. Section \ref{sec:decimation} presents a different construction, {\em the modified $n$-th higher power presentation}, to obtain presentations of $\psi_{j,n}({\mathcal P})$ which require at most one vertex more than the number of vertices $m$ of the input presentation of ${\mathcal P}$. (The $n$-th higher power construction is well known; cf. \cite[Sect.~1.4]{LM95}.)

As a first consequence of the modified higher power presentation, we show that path sets satisfy a weak version of shift-invariance. We say that a set $X$ is {\em weakly shift-invariant} if there exist $k > j \ge 0$ with ${\rm S}^k X = {\rm S}^jX$.

\begin{thm}\label{thm:weak-shift-invariant-0}
{\rm (Weak shift-invariance of path sets)} For any path set ${\mathcal P}$ there exist integers $k > j \ge 0$ giving the equality of iterated shifts ${\rm S}^k {\mathcal P} = {\rm S}^j {\mathcal P}$.
\end{thm}

This result is proved as Theorem \ref{thm:weak-shift-invariant}. It comes from the property that iterations of the shift operator ${\rm S}$ are given by $1$-decimations.
We have in general $\psi_{j,n}({\mathcal P}) = \psi_{0,n}({\rm S}^j {\mathcal P})$ and in particular $\psi_{j,1}({\mathcal P})= {\rm S}^j {\mathcal P}$.

\begin{defn} \label{def:full-decimation-set}
The {\em full decimation set} ${\mathfrak D}(X)$ of an arbitrary set $X \subset {\mathcal A}^{{\mathbb N}}$ is the set of all (principal and non-principal) decimations:
$$
{\mathfrak D}(X) := \{ \psi_{j,m}(X) : m \ge 1 \ \mbox{and} \ j \ge 0 \}.
$$
\end{defn}

The modified higher power construction implies finiteness of the set of all decimations of a path set.

\begin{thm}\label{thm:decimation_set_bound}
{\rm (Full decimation set bound)} For each path set ${\mathcal P}$ its full decimation set ${\mathfrak D}({\mathcal P})$ is a finite set. If ${\mathcal P}$ has a presentation having $m$ vertices, then ${\mathfrak D}({\mathcal P})$ has cardinality bounded by
$$
|{\mathfrak D}({\mathcal P})| \le 2^{(m+1)^2|{\mathcal A}|}.
$$
\end{thm}

This result is proved as Theorem \ref{thm:decimation_set_bound2}. In contrast, there are closed sets $X \subset {\mathcal A}^{{\mathbb N}}$ such that ${\mathfrak D}(X)$ is an infinite set; cf. \cite[Section 7]{ALS21}.

Theorem \ref{thm:decimation_set_bound} answers a question raised in \cite[page 113]{AL14a}. That paper defined the $n$-kernel of a path set ${\mathcal P}$ by
$$
{\rm Ker}_n({\mathcal P}) := \{ \psi_{j,n^k}({\mathcal P}) : j \ge 0, k \ge 0 \}.
$$
It called a path set {\em $n$-automatic} if ${\rm Ker}_n({\mathcal P})$ is finite. This notion was proposed in analogy with the definition of {\em $n$-automatic sequences} made by Allouche and Shallit \cite{AS92}, \cite{AS03}. Since ${\rm Ker}_{n} ({\mathcal P}) \subset {\mathfrak D}({\mathcal P})$, Theorem \ref{thm:decimation_set_bound} implies that every path set is $n$-automatic in this sense for every $n \ge 1$.

The output presentation of the $n$-th higher power construction is not necessarily right-resolving, even if the input presentation is right-resolving.
However, a standard construction, the subset construction, shows that if ${\mathcal P}$ has a presentation with $m$ vertices, then each decimation $\psi_{j,n}({\mathcal P})$ has a right-resolving presentation with at most $2^{m+1} -1$ vertices, an upper bound which is independent of $n$. \begin{thm}\label{thm:decimation_presentation0} {\rm (Right-resolving presentations of decimation sets of a path set)} Let ${\mathcal P}$ be a path set on an alphabet ${\mathcal A}$ with at least two letters, having a (not necessarily right-resolving) presentation ${\mathcal P} = X({\mathcal G}, v)$ with $m$ vertices. Then for each $n \ge 1$ and each $j \ge 0$ the decimation set $\psi_{j,n} ({\mathcal P})$ has a right-resolving presentation having at most $2^{m+1}-1$ vertices. \end{thm} This result is proved as Theorem \ref{thm:decimation_presentation}. \subsubsection{Closure of $\mathcal{C}({\mathcal A})$ under interleavings}\label{subsec:143} In Section \ref{sec:intalgsec} we start the study of interleaving operations on the class $\mathcal{C}({\mathcal A})$ of all path sets on a fixed alphabet ${\mathcal A}$. We first show that $\mathcal{C}({\mathcal A})$ is closed under the interleaving operations. \begin{thm}\label{thm:intalgthm} {\rm (${\mathcal C}({\mathcal A})$ is closed under interleaving)} If $\mathcal{P}_0, \ldots, \mathcal{P}_{n-1}$ are path sets on the alphabet $\mathcal{A}$, then their $n$-fold interleaving $$X := ({\circledast})_{i=0}^{n-1} {\mathcal P}_i= \mathcal{P}_0 {\circledast} \mathcal{P}_1 {\circledast} \cdots {\circledast} \mathcal{P}_{n-1}$$ is a path set; i.e., $X \in {\mathcal C}({\mathcal A})$. \end{thm} This result is proved as Theorem \ref{thm:intalgthm2}. We establish Theorem \ref{thm:intalgthm} as a corollary of an effective algorithmic construction of the $n$-fold interleaving $X$ at the level of presentations of path sets.
\begin{thm}\label{thm:algthm} {\rm(Interleaving pointed graph product construction)} Let $n \ge 2$ and suppose $\mathcal{P}_0, \ldots, \mathcal{P}_{n-1}$ are path sets with given presentations $(\mathcal{G}_0,v_0), \ldots, (\mathcal{G}_{n-1},v_{n-1})$, respectively. There exists a construction taking as inputs these presentations and giving as output a presentation $({\mathcal H}, {\bf v})$ of the $n$-fold interleaving $X := \mathcal{P}_0 {\circledast} \mathcal{P}_1 {\circledast} \cdots {\circledast} \mathcal{P}_{n-1}.$ In particular $X= X({\mathcal H}, {\bf v})$ is a path set. This construction has the following properties: \begin{enumerate} \item[(i)] If $\mathcal{G}_i$ has $k_i$ vertices for each $0 \leq i \leq n-1$, then $\mathcal{H}$ will have at most $n\prod_{i=0}^{n-1}k_i$ vertices. \item [(ii)] If the pointed graphs $(\mathcal{G}_i, v_i)$ are right-resolving for all $0 \leq i \leq n-1$, then the output pointed graph $({\mathcal H}, {\bf v})$ will also be right-resolving. \item [(iii)] If the pointed graphs $(\mathcal{G}_i, v_i)$ are pruned for all $0 \leq i \leq n-1$, then the output pointed graph $({\mathcal H}, {\bf v})$ will also be pruned. \end{enumerate} \end{thm} This result is proved as Theorem \ref{thm:algthm2}. Theorem \ref{thm:algthm} constructs ${\mathcal H}$ by a graph-theoretic construction, termed here the {\em $n$-fold interleaved (pointed) graph product}, which takes as input pointed graphs $({\mathcal G}_i, v_i)$, for $0 \le i \le n-1$, and produces as output a pointed graph \begin{equation*} ({\mathcal H}, {\bf v}) := ({\circledast})_{i=0}^{n-1} ({\mathcal G}_i, v_i), \end{equation*} such that the underlying path set ${\mathcal P} =X({\mathcal H}, {\bf v})$ is the $n$-fold interleaving of the path sets ${\mathcal P}_i = X({\mathcal G}_i, v_i)$. The presentation $({\mathcal H}, {\bf v})$ found by the construction depends on the input presentations of ${\mathcal P}_i$.
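One natural way to realize a product of this kind (our illustrative reconstruction, not necessarily the paper's exact construction) tracks a pair (phase $i$, tuple of current vertices): reading one symbol moves component $i$ along a matching edge and advances the phase cyclically, so at most $n\prod_{i=0}^{n-1}k_i$ states arise, matching bound (i):

```python
def interleave_product(graphs, starts):
    """Sketch of an n-fold interleaved pointed graph product (our
    reconstruction). Each input graph is a dict: vertex -> list of
    (label, target) edges. A product state is (phase i, tuple of current
    vertices); only the active component graph moves at each step."""
    n = len(graphs)
    v0 = (0, tuple(starts))
    H, stack, seen = {}, [v0], {v0}
    while stack:
        i, vec = state = stack.pop()
        edges = []
        for label, target in graphs[i][vec[i]]:
            nstate = ((i + 1) % n, vec[:i] + (target,) + vec[i + 1:])
            edges.append((label, nstate))
            if nstate not in seen:
                seen.add(nstate)
                stack.append(nstate)
        H[state] = edges
    return H, v0

def words(H, v, length):
    """All words of the given length presented by paths from v in H."""
    frontier = {(v, "")}
    for _ in range(length):
        frontier = {(t, w + a) for (u, w) in frontier for (a, t) in H[u]}
    return {w for (_, w) in frontier}

# Toy inputs: P0 = {000...}, P1 = {111...}; their 2-fold interleaving
# consists of the single alternating word 0101...
G0, G1 = {"p": [("0", "p")]}, {"q": [("1", "q")]}
H, v = interleave_product([G0, G1], ["p", "q"])
print(sorted(words(H, v, 4)))  # ['0101']
```

If each input graph is right-resolving, at most one edge per label leaves each product state, so the output is right-resolving as well, consistent with property (ii).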
We provide examples showing that minimality of right-resolving presentations is not always preserved under the interleaved pointed graph product. \subsubsection{Decimations and interleaving factorizations}\label{subsec:144} Decimation and interleaving operations together define a set of closure operations on symbol sets. We define the {\em $n$-fold interleaving closure} $X^{[n]}$ of a general set $X \subset {\mathcal A}^{{\mathbb N}}$ by \begin{equation} X^{[n]} = \psi_{0,n}(X) {\circledast} \psi_{1,n}(X) {\circledast} \cdots {\circledast} \psi_{n-1, n}(X). \end{equation} These closure operations were studied in \cite{ALS21}. One always has $X \subseteq X^{[n]}$, the operation is idempotent: $(X^{[n]})^{[n]} = X^{[n]}$, and the operation takes closed sets to closed sets. We say a set has an {\em $n$-fold interleaving factorization} if $X= X^{[n]}$. In that case its associated {\em interleaving factors} are given by $X_{j,n} = \psi_{j,n}(X)$ for $0 \le j \le n-1$. These interleaving factors are unique, i.e. if $X = X_0 {\circledast} X_1 {\circledast} \cdots {\circledast} X_{n-1}$ and $X= Y_0 {\circledast} Y_1 {\circledast} \cdots {\circledast} Y_{n-1}$, then $X_j= Y_j$ for $0 \le j \le n-1$ as sets. \begin{thm}\label{thm:factorthm} {\rm (${\mathcal C}({\mathcal A})$ is stable under $n$-fold interleaving closure operations)} If ${\mathcal P}$ is a path set, then for each $n \ge 1$ the $n$-fold interleaving closure ${\mathcal P}^{[n]}$ is a path set. In addition, if ${\mathcal P}$ is $n$-factorizable then each of its $n$-fold interleaving factors ${\mathcal P}_i= \psi_{i,n}({\mathcal P})$ for $0 \le i\le n-1$ is a path set. \end{thm} This result is proved as Theorem \ref{thm:factorthm2}. All interleaving factors are decimations $\psi_{j,n}({\mathcal P})$ with $0 \le j \le n-1$.
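On words of one fixed finite length (a toy model of the infinite setting; the function names are ours) the closure operation, the containment $X \subseteq X^{[n]}$, idempotence, and the factorizability test $X = X^{[n]}$ can all be checked directly:

```python
from itertools import product

def decimate(w, j, n):
    """psi_{j,n} on a finite word: symbols at positions j, j+n, j+2n, ..."""
    return w[j::n]

def interleave(ws):
    """Interleave n equal-length words symbol by symbol."""
    return "".join("".join(t) for t in zip(*ws))

def closure(X, n):
    """n-fold interleaving closure X^[n], modeled on a set of words whose
    common length is divisible by n (a finite toy model of the definition)."""
    factors = [{decimate(w, j, n) for w in X} for j in range(n)]
    return {interleave(t) for t in product(*factors)}

X = {"0011", "1100"}
X2 = closure(X, 2)
assert X <= X2                   # one always has X contained in X^[n]
assert closure(X2, 2) == X2      # idempotence: (X^[n])^[n] = X^[n]
assert X != X2                   # so this toy X is not 2-factorizable
print(sorted(X2))  # ['0011', '0110', '1001', '1100']
```

Here the closure strictly enlarges $X$: both decimations $\{01, 10\}$ recombine into four interleavings, so $X \ne X^{[2]}$.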
We show that if a path set ${\mathcal P}$ has an $n$-fold interleaving presentation, then there is an improved upper bound on the size of the associated decimations $\psi_{j,n}({\mathcal P})$ relative to that given in Theorem \ref{thm:decimation_presentation0}. \begin{thm}\label{thm:56a} {\rm (Upper bound on minimal presentation size of $n$-fold interleaving factors)} Let ${\mathcal P}$ be a path set having $m$ vertices in its minimal right-resolving presentation. Suppose that ${\mathcal P}$ has an $n$-fold interleaving factorization ${\mathcal P} = ({\circledast})_{j=0}^{n-1} {\mathcal P}_j$. Then each $n$-fold interleaving factor ${\mathcal P}_j = \psi_{j,n}({\mathcal P})$ has a minimal right-resolving presentation having at most $m$ vertices. \end{thm} This result is proved as Theorem \ref{thm:68}. \subsubsection{Classification of interleaving factorizations of path sets}\label{subsec:145} The factorization problem is the problem of finding all the possible interleaving factorizations of a path set ${\mathcal P}$ under $n$-fold interleaving. This problem is interesting and complicated due to the fact that some general sets $X \subset {\mathcal A}^{{\mathbb N}}$ may have factorizations for infinitely many $n \ge 1$, which we call {\em infinitely factorizable} sets. The simplest example is the one-sided shift $\mathcal{A}^{{\mathbb N}}$, which factors for all $n \ge 1$ as $$ \mathcal{A}^\mathbb{N} = (\mathcal{A}^\mathbb{N})^{({\circledast} n)}. $$ It is a path set, and Section \ref{sec:6} gives a structural characterization of infinitely factorizable path sets. The paper \cite{ALS21} gave a classification of the possible pattern of interleaving factorizations for a general set $X$, and a separate classification for a general closed set $X \subset {\mathcal A}^{{\mathbb N}}$, see Section \ref{subsec:55}. For a closed set $X \subset {\mathcal A}^{{\mathbb N}}$, exactly one of the following holds.
\begin{enumerate} \item[(i)] $X$ is factorizable for all $n \ge 1$. \item[(ii)] $X$ is $n$-factorizable for a finite set of $n$, which are all the divisors of an integer $f= f(X) \ge 1$. \end{enumerate} \noindent All path sets are closed sets, so this classification applies to them. One can easily show that all allowed patterns of interleaving factorizations in this dichotomy occur for path sets. The main results of this paper about interleaving factorizations concern the structure of factorizations of a path set ${\mathcal P}$. They are given in Sections \ref{sec:6} and \ref{sec:factorsec}. \begin{enumerate} \item[(1)] We characterize infinitely factorizable path sets in terms of the form of their minimal right-resolving presentation, which we call ``leveled.'' (Theorem \ref{thm:63}) \item[(2)] For the remaining finitely factorizable path sets, we obtain an upper bound on the size of $n$ for which $n$-factorizability can occur, in terms of the size $m$ of their minimal right-resolving presentation: one has $n \le m-1$. Thus $f({\mathcal P}) \le m-1$. (Theorem \ref{thm:non_leveled_fact}) \item[(3)] We show that the process of iterated interleaving factorization of any path set always terminates in finitely many steps if we agree to ``freeze'' any infinitely factorizable path set encountered in the process. (Theorem \ref{thm:complete_fact0}) \end{enumerate} The bound in (2) leads to an effective algorithm for determining if a path set is infinitely factorizable or not. The finiteness result in (3) also follows from (2). For general closed sets $X$, the paper \cite{ALS21} gave examples having an infinite depth tree of recursively refined iterated factorizations. \subsection{Related work}\label{sec:15} There is a large literature of related work in automata theory, semigroups and symbolic dynamics; see the discussion in \cite[Sect. 1.2]{AL14a}. The path set concept has previously been studied in automata theory and formal language theory, given in different terminology.
In \cite{AL14a} we observed that path sets are characterized as those $\omega$-sets recognizable by a finite (deterministic) B\"{u}chi automaton having one initial state and having every state be a terminal state. These sets were characterized in Perrin and Pin \cite[Chapter III, Proposition 3.9]{PP04} as the set of (B\"{u}chi) recognizable sets that are closed in the product topology on ${\mathcal A}^{{\mathbb N}}$. The name path set is consistent with the term ``path'' in a finite automaton used in Eilenberg \cite[page 13]{Eilenberg74}. The set of finite initial blocks of a path set ${\mathcal P}$ forms a rational language (also called a regular language) $L({\mathcal P})$ in ${\mathcal A}^{\star}$, the set of all finite words in the alphabet ${\mathcal A}$. The formal language $L({\mathcal P})$ uniquely characterizes the path set. We may call the set of formal languages $$ {\mathcal L}({\mathcal A}) : = \{ L({\mathcal P}): \, {\mathcal P} \quad \mbox{a path set on alphabet} \,\, {\mathcal A}\} $$ the set of {\em path set languages}. These languages may be characterized as being the prefix-closed regular languages, see Appendix \ref{sec:A0}. The set ${\mathcal L}({\mathcal A})$ forms a strict subset of all rational languages on the finite alphabet $\mathcal{A}$. Special cases of interleaving operations have been considered in automata theory and formal languages. Eilenberg \cite[Chapter II.3, page 20]{Eilenberg74} introduced a notion of {\em internal shuffle product} $A \coprod B$ of two recognizable sets (= regular languages) which corresponds to $2$-interleaving. In Proposition 2.5 in Chapter II of that book, Eilenberg proved that the collection of recognizable sets is closed under internal shuffle product. Interleaving operations have been used in coding theory in various code constructions, see for example Vanstone and van Oorschot \cite{VO89}, Chapters 5 and 7. One-sided path sets also appear in studies in aperiodic order.
The 1989 paper of de Bruijn \cite[Sect. 5 and 6]{DeB:89} deals mainly with two-sided infinite connector sequences, but has one-sided ``singular'' examples as well. He studied these sequences in connection with rewriting rules describing inflation and deflation for aperiodic tilings, in particular the Penrose tilings, topics which he previously studied in \cite{DeB:81a}, \cite{DeB:81b}, \cite{DeB:81c}. One may consider extensions of interleaving to infinite alphabets. The notion of full one-sided shift based on the product topology does not give a compact space for infinite alphabets. In 2014 Ott, Tomforde, and Willis \cite{OTW14} formulated a definition of full shifts on infinite alphabets which gives a compact shift space, which may be useful for this purpose. \subsection{Contents}\label{sec:14} The contents of the paper are as follows: \begin{enumerate} \item Section ~\ref{sec:prelim} collects together preliminary results about path sets. In particular, it shows they are uniquely determined by their set of allowed finite initial blocks. It gives an effective algorithm to tell whether two presentations $({\mathcal G}_1, v_1)$ and $({\mathcal G}_2, v_2)$ give the same path set. It shows the uniqueness (up to isomorphism) of minimal right-resolving presentations of a path set ${\mathcal P}$, a fact which does not parallel the theory for sofic shifts (whose definition of right-resolving does not require the presentation to be pointed). \item Section \ref{sec:decimation} presents an algorithm for finding a presentation of a decimation $\psi_{j,n}({\mathcal P})$ from a presentation of ${\mathcal P}$. It proves that all path sets are weakly shift-invariant. It shows that the full set of decimations ${\mathfrak D}({\mathcal P})$ of a path set ${\mathcal P}$ is a finite set. It gives an upper bound on the size of right-resolving presentations of all decimations $\psi_{j,n}({\mathcal P})$ that depends only on the size of the presentation of ${\mathcal P}$.
\item Section ~\ref{sec:intalgsec} shows that any $n$-fold interleaving of path sets is a path set. It gives a construction, the (pointed) interleaving graph product, which, when given presentations of the individual ${\mathcal P}_i$, yields a presentation of the $n$-fold interleaving of these path sets. It gives examples showing the output presentations need not be right-resolving, even when the presentations of the ${\mathcal P}_i$ are minimal right-resolving. \item Section ~\ref{sec:5} first reviews results for decimation and interleaving operations acting on general sets $X \subset {\mathcal A}^{{\mathbb N}}$ which were shown in \cite{ALS21}. That paper defines a hierarchy of closure operations $X \mapsto X^{[n]}$, the {\em $n$-th interleaving closure} of $X$, and shows that a set $X$ is $n$-factorizable if and only if $X^{[n]} = X$. Second, it restricts to path sets, and proves that if ${\mathcal P}$ is a path set, then so is ${\mathcal P}^{[n]}$ for all $n \ge 1$. \item Section ~\ref{sec:5B} recalls results from \cite{ALS21} on the structure of the allowed sets of possible $n$-factorizations that a general set $X \subset {\mathcal A}^{{\mathbb N}}$ may have. It deduces that the set ${\mathcal C}^{\infty}({\mathcal A})$ of infinitely factorizable path sets is stable under all decimation and interleaving operations. It proves that if a path set ${\mathcal P}$ is $n$-factorizable, then the minimal right-resolving presentation of each factor $\psi_{j, n}({\mathcal P})$ requires no more vertices than a minimal right-resolving presentation of ${\mathcal P}$. \item Section ~\ref{sec:6} determines the structure of infinitely factorizable closed subsets $X$ of ${\mathcal A}^{{\mathbb N}}$. It characterizes infinitely factorizable path sets in two ways: the first is a syntactic property of the infinite words in ${\mathcal P}$, and the second characterizes the form of their minimal right-resolving presentations.
\item Section \ref{sec:factorsec} analyzes the structure of factorizations of finitely factorizable path sets. An iterated interleaving factorization is {\em complete} if all of its factors are either infinitely factorizable or indecomposable. It proves that every finitely factorizable path set has at least one complete factorization. \item Section \ref{sec:concluding} discusses open questions and further work. \item Appendix ~\ref{sec:A0} discusses path sets from the viewpoint of automata theory. \item Appendix~\ref{sec:B0} gives a sufficient condition on a presentation of a path set ${\mathcal P}$ for all of its interleaving factorizations to be {\em self-interleaving factorizations}. An $n$-fold self-interleaving factorization is one having all factors $X_i= Z$ identical, for $0 \le i \le n-1$, where $Z$ may depend on $n$. \end{enumerate} {\bf Acknowledgments.} The work of W. Abram and D. Slonim was facilitated by the Hillsdale College LAUREATES program, in which D. Slonim worked under the supervision of W. Abram. \section{Structure and presentations of path sets} \label{sec:prelim} In this section we establish basic structural results about path sets, formulated in the terminology of symbolic dynamics in Lind and Marcus \cite{LM95}. We fix a finite alphabet $\mathcal{A}$, and let $\mathcal{C}(\mathcal{A})$ denote the collection of all path sets on $\mathcal{A}$. We use interchangeably the term {\em word} or {\em block} to mean a finite string of consecutive symbols from ${\mathcal A}$, often viewed inside an infinite word. In this section we show that path sets have a minimal right-resolving presentation, unique up to isomorphism of path set presentations. Equivalent results can be found in the automata theory literature, which uses different terminology, see Appendix \ref{sec:A0}. We have included proofs to provide a self-contained treatment in the language of symbolic dynamics.
In particular, we introduce two notions of finite and infinite follower sets and characterize path sets in terms of finiteness properties of both finite and infinite follower sets. These notions are needed for later proofs. Path sets also have an important characterization in terms of automata theory, which uses different terminology. In automata theory path sets ${\mathcal P}$ are characterized as those sets in ${\mathcal A}^{{\mathbb N}}$ recognizable by non-deterministic B\"{u}chi automata which are closed in the product topology. We discuss automata theory results in Appendix \ref{sec:A0}. \subsection{Closure properties of path sets}\label{subsec:21} We recall basic results on the closure of path sets in the symbol topology, and under set operations, shown in \cite{AL14a}. \begin{thm}\label{thm:operations} (\cite[Theorem 1.2]{AL14a}) (1) Each path set ${\mathcal P}$ in $\mathcal{C}(\mathcal{A})$ is a closed subset in the product topology on $\mathcal{A}^\mathbb{N}$. (2) If ${\mathcal P}_1$ and ${\mathcal P}_2$ are path sets, then so is ${\mathcal P}_1 \cap {\mathcal P}_2$. (3) If ${\mathcal P}_1$ and ${\mathcal P}_2$ are path sets, then so is ${\mathcal P}_1 \cup {\mathcal P}_2$. \end{thm} \begin{rem}\label{rem22} The collection ${\mathcal C}(\mathcal{A})$ of all path sets in $\mathcal{A}^\mathbb{N}$ is not closed under complementation inside $\mathcal{A}^\mathbb{N}$. See \cite[Example 2.3]{AL14a}. \end{rem} \subsection{Presentations of path sets}\label{subsec:presentations} Each path set has infinitely many presentations. We recall the following properties such a presentation may have. \begin{defn}\label{def:properties} (1) A labeled directed graph $\mathcal{G}$ is called \emph{right-resolving} if any two edges emanating from the same vertex have distinct labels. (2) A pointed graph $(\mathcal{G},v)$ is called \emph{reachable} if there is a directed path in $\mathcal{G}$ from the initial vertex $v$ to every other vertex of $\mathcal{G}$.
(3) A pointed graph $(\mathcal{G},v)$ is called \emph{pruned} if it has no sinks, meaning every vertex has an exiting edge; i.e., it has out-degree at least one. \end{defn} \begin{prop}\label{thm:3.2frompaper1} Every path set ${\mathcal P}$ has a presentation that is right-resolving, pruned and reachable. \end{prop} \begin{proof} Theorem 3.2 of \cite{AL14a} gives (in its proof) an algorithm which, when given an arbitrary presentation ${\mathcal P}= X({\mathcal G}, v)$ of a path set, will compute another presentation $({\mathcal G}', v')$ for ${\mathcal P}$ which is right-resolving. The pruning operation given in Section 3 of \cite{AL14a} then iteratively removes stranded vertices while retaining the right-resolving property. A vertex is {\em stranded} if it has no entering edges, no exiting edges, or both. When a stranded vertex is removed, new vertices may become stranded, and the operation repeats until no stranded vertices remain. Finally, a pruned, right-resolving presentation can be converted to a right-resolving, pruned and reachable presentation by further removing all vertices not reachable from $v'$. \end{proof} \begin{rem}\label{rem:trim} A presentation $({\mathcal G}, v)$ specifies a finite automaton in the sense of Eilenberg \cite[Chapter II]{Eilenberg74}, having a single initial state $I=\{v\}$, where we impose the requirement that all states be terminal states: i.e., $T= V({\mathcal G})$. (The terminal states are not specified in the path set definition.) In automata theory {\em right-resolving} is equivalent to the automaton being {\em deterministic}. For a single initial state automaton {\em reachable} is equivalent to being {\em accessible}. For an automaton having all states being terminal states, {\em pruned} is equivalent to being {\em co-accessible}. Therefore the presentation produced by Proposition \ref{thm:3.2frompaper1} is a {\em trim} automaton, i.e. deterministic, accessible and co-accessible.
\end{rem} \subsection{Word follower sets and vertex follower sets}\label{subsec:23} The internal structure of presentations of path sets is determined by their patterns of initial words over all paths. We formulate two notions of follower set which capture this internal structure. These definitions are adapted from definitions in Lind and Marcus \cite{LM95} for two-sided infinite sequences. The first of these notions applies to general subsets $X$ of the one-sided shift, parallel to \cite[Defn. 3.2.4]{LM95}. \begin{defn} \label{def:symbolic_follower} (Word follower set) Let $X$ be a subset of the one-sided shift ${\mathcal{A}}^{{\mathbb N}}$, and let $b=b_0b_1\cdots b_{k-1}$ be a finite word of length $|b|= k$, allowing the empty word $\emptyset$ of length $0$. (1) The {\em word follower set ${\mathit{F}}_X(b)$ of an initial finite word $b$} of $X$ is the set of all finite blocks $$ {\mathit{F}}_{X}(b) = \{ a\, |~ a \text{ is a finite block such that} \,\, ba \,\, \text{is an initial block of some} \,\,x= ba x' \in X\}.$$ (2) A set ${\mathit{F}} \subseteq {\mathcal{A}}^{\star}$ is a {\em word follower set of $X$} if there exists some initial block $b$ of $X$ such that ${\mathit{F}}= {\mathit{F}}_X(b)$. \end{defn} A closed set $X$ in ${\mathcal A}^{{\mathbb N}}$ may possess infinitely many different word follower sets ${\mathit{F}}_X(b)$, as $b$ varies. We show below that path sets ${\mathcal P}=X({\mathcal G}, v)$ have only finitely many different word follower sets as $b$ varies. Note that the initial block set ${\mathcal B}^{I}(X) = {\mathit{F}}_{X}(\emptyset)$. The second of these notions applies to presentations $({\mathcal G}, v)$ of path sets ${\mathcal P}$, parallel to \cite[Defn. 3.3.7]{LM95}.
\begin{defn} \label{def:vertex_follower} (Vertex follower set) The \emph{vertex follower set ${\mathit{F}}({\mathcal G}, v')$ of a vertex $v'$} in a labeled directed graph $\mathcal{G}$ is the set of all finite words $b=b_0b_1 \ldots b_{k-1}$ that can be presented by labels of paths on $\mathcal{G}$ beginning at vertex $v'$. \end{defn} The next result shows for a right-resolving presentation $({\mathcal G}, v)$ of a path set ${\mathcal P}$ that all its possible word follower sets occur as vertex follower sets of the directed labeled graph ${\mathcal G}$. \begin{prop}\label{onevertexforeachfollowerset} Let $(\mathcal{G}, v)$ be a right-resolving, pruned presentation of a path set $\mathcal{P}$, with $\mathcal{G}$ having $m$ vertices. (1) For each finite initial word $b \in {\mathcal B}^{I}({\mathcal P})$, the word follower set ${\mathit{F}}_{{\mathcal P}}(b)$ equals the vertex follower set ${\mathit{F}}({\mathcal G}, v')$ for some reachable vertex $v'$ of $\mathcal{G}$. In particular, there are at most $m$ different word follower sets. (2) Conversely, each vertex follower set ${\mathit{F}}({\mathcal G}, v')$ of a reachable vertex occurs as the word follower set ${\mathit{F}}_{{\mathcal P}}(b)$ of some initial word $b \in {\mathcal B}^{I}({\mathcal P})$. The word $b$ can be chosen to be of length at most $m-1$. If $v'=v$ we choose $b = \emptyset$. \end{prop} \begin{proof} (1) Let $b= b_1 b_2 \cdots b_k$ be a finite initial word of $\mathcal{P}$, and let ${\mathit{F}}_{{\mathcal P}}(b)$ be its word follower set. Since $\mathcal{G}$ is right-resolving, there is exactly one path on $\mathcal{G}$ beginning at $v$ presenting the label sequence $b$. Let $v'$ be the (reachable) vertex where this path terminates. A finite word $w$ will be in the word follower set of $b$ if and only if there is a directed path beginning at vertex $v'$ that presents the label sequence $w$, and since $(\mathcal{G}, v)$ is pruned, there exists an infinite word $x= bwx' \in {\mathcal P}$.
Consequently the word follower set ${\mathit{F}}_{{\mathcal P}}(b)$ coincides with the vertex follower set ${\mathit{F}}({\mathcal G}, v')$. (2) Let $v'$ be a reachable vertex in $(\mathcal{G}, v)$, so that there exists a directed path $\pi$ from $v$ to $v'$, which can be chosen to have length at most $m-1$, since there are $m$ vertices in ${\mathcal G}$. Let $b$ be the block labeling this path, which uniquely determines $v'$ since the presentation is right-resolving. As above, since $\mathcal{G}$ is pruned, the vertex follower set ${\mathit{F}}({\mathcal G}, v')$ equals the word follower set ${\mathit{F}}_{{\mathcal P}}(b)$. \end{proof} \subsection{Finiteness of follower sets for path sets}\label{sec24} Proposition \ref{onevertexforeachfollowerset} implies the finiteness of the number of distinct word follower sets for any path set. \begin{thm}\label{thm:finite_symbolic} {\em (${\mathcal C}({\mathcal A})$ characterized by finiteness of word follower sets)} A set $X \subseteq \mathcal{A}^{{\mathbb N}}$ is a path set if and only if it is closed and has a finite number of distinct word follower sets. \end{thm} \begin{proof} Suppose first that $X= {\mathcal P}$ is a path set. Then ${\mathcal P}$ is closed by Theorem \ref{thm:operations}. Now ${\mathcal P}$ has a right-resolving, pruned presentation $({\mathcal G}, v)$ by Proposition \ref{thm:3.2frompaper1}; let $n$ be the number of its vertices. Proposition \ref{onevertexforeachfollowerset} implies it has at most $n$ distinct word follower sets ${\mathit{F}}_{X}(b)$ as $b$ varies. Thus $X$ is closed and has a finite number of distinct word follower sets. Conversely, suppose $X$ is closed and has a finite number $n$ of distinct word follower sets. We construct a right-resolving presentation $X({\mathcal G}, v)$ of a path set having $n$ vertices, and check that $X= X({\mathcal G}, v)$. This construction will yield a minimal right-resolving presentation of $X$.
We give a name to each follower set in the finite list by assigning to it the minimal prefix $b$ defining it as ${\mathit{F}}_{X}(b)$. (We may put a total order on the alphabet ${\mathcal{A}}$ and use the lexicographic order on prefixes to define ``minimal''.) These word follower sets ${\mathit{F}}_{X}(b)$ will name the vertices of ${\mathcal G}$. For $b= \emptyset$ we select $v= {\mathit{F}}_{X}(\emptyset) = {\mathcal B}^{I}(X)$ to be the initial vertex of ${\mathcal G}$. For each vertex $v_i := {\mathit{F}}_{X}(b_i)$ and each letter $a$ that occurs as an initial letter of some word of the follower set ${\mathit{F}}_{X}(b_i)$, we add an exit edge with label $a$ which goes from $v_i$ to the follower set ${\mathit{F}}_{X}(b_i a)$. Since the vertices enumerate the complete list of possible word follower sets, this follower set will be some vertex $v_j= {\mathit{F}}_{X}(b_j)$ for some $b_j$. (We have ${\mathit{F}}_{X}(b_ia)={\mathit{F}}_{X}(b_j)$ but we may have $b_ia \ne b_j$.) We have constructed $({\mathcal G}, v)$, and it is right-resolving as there is at most one exit edge with a given symbol from each vertex. It is also pruned and reachable by construction. It remains to show that $X= X({\mathcal G}, v)$, which will verify that it is a path set. (1) We show the inclusion $X \subseteq X({\mathcal G}, v)$. Take $x=a_0a_1a_2\cdots\in X$ and construct a path in $({\mathcal G}, v)$ realizing this symbol sequence. For the finite initial words $b_k=a_0a_1\ldots a_k \in {\mathcal B}^{I}(X)$, we prove by induction on $k \ge 0$ that the $k$-th step finds an edge in $({\mathcal G}, v)$ moving from the vertex labeled ${\mathit{F}}_X(b_{k-1})$ to the vertex labeled ${\mathit{F}}_{X}(b_{k})$ with edge symbol $a_k$. Here $b_{-1} = \emptyset$ is the base case, and both the base case and the induction step follow by the definition of edges in ${\mathcal G}$.
(2) We show the reverse inclusion $X({\mathcal G}, v) \subseteq X$ using the fact that $X$ is closed. Each infinite symbol sequence $y= a_0a_1\cdots a_k\cdots$ in $X({\mathcal G}, v)$, starting at the vertex labeled ${\mathit{F}}_{X}(\emptyset)$, appears on a unique path (by the right-resolving property) which at step $k$ is at the vertex corresponding to ${\mathit{F}}_{X}(a_0a_1\cdots a_k)$ of ${\mathcal G}$. We may prove by induction on $k$ that the finite word $b_k=a_0a_1 \ldots a_k$ is an initial block in $X$. The word follower set property of ${\mathit{F}}_{X}(a_0a_1\cdots a_k)$ permits inductively adding the symbol $a_{k+1}$, for both the base case $k= -1$ and the induction step. Since $X$ is closed by hypothesis, the infinite word $y$ belongs to $X$. \end{proof} \begin{cor}\label{thm:initialblocks} {\rm (Path sets are characterized by initial words)} A path set ${\mathcal P}$ is characterized by its set ${\mathcal B}^{I}({\mathcal P})$ of all finite initial words. That is, if two path sets have the same set of initial words, then they are identical. \end{cor} \begin{proof} The limit set of the set of initial words ${\mathcal B}^{I}(X)$ of an arbitrary set $X\subseteq {\mathcal A}^{{\mathbb N}}$ is its topological closure $\overline{X}$. Since path sets ${\mathcal P}$ are closed sets by Theorem \ref{thm:finite_symbolic}, they are determined by ${\mathcal B}^{I}({\mathcal P})$. \end{proof} \begin{rem}\label{rem:sofic_characterize} The characterization of Theorem \ref{thm:finite_symbolic} parallels a characterization of sofic shifts found by Ashley, Kitchens and Stafford \cite{AKS92} (cf.
\cite[Theorem B.1]{AL14a}) in 1992, which assumes the extra condition of one-sided shift-invariance: {\em Any shift-invariant subset $X$ of ${\mathcal{A}}^{{\mathbb N}}$ is a sofic shift if and only if it has only finitely many different word follower sets ${\mathit{F}}_X(b)$.} \end{rem} \subsection{Minimal presentations of path sets}\label{subsec:minimal_presentations} \begin{defn} \label{defn:minimal-pres} (1) A {\em minimal presentation} for a path set ${\mathcal P}$ is a presentation with a minimal number of vertices. (2) A {\em minimal right-resolving presentation} is a right-resolving presentation having a minimal number of vertices among all right-resolving presentations. \end{defn} A minimal presentation is always pruned and reachable, but it need not be right-resolving. Minimal right-resolving presentations are sometimes not minimal presentations. \begin{thm}\label{thm:minimal-to-RRminimal} Let ${\mathcal P}$ be a path set having a minimal presentation having $m$ vertices. Then ${\mathcal P}$ has a minimal right-resolving presentation having at most $2^m-1$ vertices. \end{thm} \begin{proof} This result is proved by the well-known subset construction in the automata theory literature, for obtaining a deterministic finite state automaton equivalent to a given nondeterministic finite state automaton. It appears in \cite[Chapter III, Sect. 5, Theorem 5.2]{Eilenberg74}. \end{proof} Below we characterize minimal right-resolving presentations, showing uniqueness in the process. We first recall a definition from symbolic dynamics, which is a one-sided shift version of a definition in Lind and Marcus \cite[page 71, page 78]{LM95}. \begin{defn}\label{def:follower_separated} A directed labeled graph $\mathcal{G}$ is called {\em follower-separated} if all vertices have distinct vertex follower sets. \end{defn} In characterizing the number of states in a minimal right-resolving presentation, we formulate two definitions that view path sets as ``infinite follower sets''.
\begin{defn}\label{def:word_vertex_path_set} (1) Given a path set ${\mathcal P}$ and a finite word $w \in {\mathcal A}^{\star}$, the {\em word path set } ${\mathcal P}^w$ of ${\mathcal P}$ is the set of all infinite words $x$ such that $wx \in {\mathcal P}$. (2) Given a presentation $({\mathcal G}, v)$ of a path set ${\mathcal P}$, an associated {\em vertex path set} is any path set $X({\mathcal G}, v')$ where $v'$ is a vertex of ${\mathcal G}$. \end{defn} A nonempty word path set ${\mathcal P}^w$ is a path set. For a right-resolving presentation $({\mathcal G}, v)$ of ${\mathcal P}$ an argument parallel to Proposition \ref{onevertexforeachfollowerset} (1) shows that ${\mathcal P}^{w}$ equals the vertex path set $X({\mathcal G}, v')$ for the vertex $v'$ of ${\mathcal G}$ that is the final vertex on the unique path with symbol labels $w$ from the initial vertex $v$. \begin{thm}\label{thm:minimal_presentation} {\rm (Minimal right-resolving presentation)} (1) A path set ${\mathcal P}$ has a minimal right-resolving presentation $({\mathcal G}, v)$, which is unique up to isomorphism of pointed labeled graphs. This presentation is pruned, reachable and follower-separated. (2) Conversely, if a right-resolving presentation of ${\mathcal P}$ is pruned, reachable and follower-separated, then it is minimal. (3) The number $m$ of vertices in a minimal right-resolving presentation of ${\mathcal P}$ is the number of distinct word follower sets ${\mathit{F}}_{{\mathcal P}}(b)$ of ${\mathcal P}$. It also equals the number of distinct vertex follower sets ${\mathit{F}}({\mathcal G}, v)$ of ${\mathcal P}$ in any right-resolving, reachable presentation ${\mathcal P} =X({\mathcal G}, v)$. (4) The number $m$ of vertices in a minimal right-resolving presentation of ${\mathcal P}$ is the number of distinct nonempty word path sets ${\mathcal P}^{w}$ of ${\mathcal P}$. 
It also equals the number of distinct nonempty vertex path sets $X({\mathcal G}, v')$ in any right-resolving presentation $({\mathcal G}, v)$ of ${\mathcal P}$. \end{thm} \begin{proof} (1) Proposition \ref{onevertexforeachfollowerset} implies that the number of vertices of any right-resolving presentation of ${\mathcal P}$ must equal or exceed the number of distinct word follower sets ${\mathit{F}}_{{\mathcal P}}(b)$. The proof of Theorem \ref{thm:finite_symbolic} constructed a right-resolving presentation $({\mathcal G}, v)$ for ${\mathcal P}$ with one vertex for each distinct word follower set; it must therefore be minimal. By construction it is pruned, and the vertex follower sets in this presentation are distinct, so it is follower-separated. It remains to show uniqueness. We know that any minimal right-resolving presentation necessarily has vertices labeled by all of the possible word follower sets. The exit edges from the pointed vertex $v$ have different labels $a'$ (by the right-resolving property), and the edge labeled $a'$ must go to the vertex labeled by the word follower set ${\mathit{F}}_{{\mathcal P}}(a')$. This assignment is the only way to permit the initial word follower set ${\mathit{F}}_{{\mathcal P}}(\emptyset)$ to reach all of its words that begin with the prefix $a'$. Similarly, the exit edges from each vertex ${\mathit{F}}_{{\mathcal P}}(b)$ must take the allowed prefix labels $a'$ of words in ${\mathit{F}}_{{\mathcal P}}(b)$, and for each $a'$ must map to the vertex with follower set label ${\mathit{F}}_{{\mathcal P}}(ba')$. Every labeled edge is forced, so the construction is unique. (2) Suppose that a right-resolving presentation $({\mathcal G}, v)$ of ${\mathcal P}$ is pruned, follower-separated and reachable. By Proposition \ref{onevertexforeachfollowerset} (2) each vertex follower set is a symbolic follower set. By Proposition \ref{onevertexforeachfollowerset} (1) the vertex follower sets include all distinct word follower sets.
The follower-separation property implies that each distinct follower set occurs exactly once in ${\mathcal G}$ as a vertex follower set, so $({\mathcal G}, v)$ is minimal. (3) The two assertions are a consequence of Proposition \ref{onevertexforeachfollowerset}, which says that every vertex follower set of every right-resolving presentation is some word follower set, and that all distinct word follower sets appear as vertex follower sets in every right-resolving presentation. (4) The two assertions follow from (3) using Theorem \ref{thm:initialblocks}, which implies that word path sets are uniquely determined by their word follower sets, and vice-versa. Similarly, vertex path sets are uniquely determined by their vertex follower sets, and vice-versa. \end{proof} \begin{rem}\label{rem:minimal-not} For any minimal presentation of ${\mathcal P}$ that has fewer vertices than a minimal right-resolving presentation, there will necessarily be word path sets ${\mathcal P}^w$ that are not vertex path sets of such a presentation. \end{rem} \begin{rem} For sofic shifts, minimal right-resolving presentations (in the two-sided sofic shift sense, which is not the path set sense) are not necessarily unique. A counterexample is given in \cite[Example 3.3.21]{LM95}. \end{rem} \subsection{Recognizing and distinguishing path sets} \label{sec:recognition} We describe algorithms for testing identity of path sets and for finding minimal right-resolving presentations. They are based on the following effective bound for telling when two given vertex follower sets in a (possibly disconnected) presentation are equal. \begin{prop}\label{thm:exercise} Let $\mathcal{G}$ be a right-resolving pruned labeled graph, not necessarily connected, that has $m$ vertices.
If two vertices, $v_1$ and $v_2$, have distinct vertex follower sets ${\mathit{F}}({\mathcal G}, v_i)$, then there is some word $w=a_1a_2\cdots a_m$ of length $m$ over ${\mathcal{A}}$ which belongs to exactly one of the two follower sets ${\mathcal F}(\mathcal{G}, v_1)$ and ${\mathcal F}(\mathcal{G}, v_2)$. \end{prop} \begin{proof} The proof of this theorem\footnote{One can improve the length bound on $w$ to $m-1$, and there exist examples showing that the upper bound $m-1$ is the best possible.} is outlined in Exercise 3.4.10 of \cite{LM95}. Similar results appear in Conway \cite[Chapter 1, Theorems 6 and 7]{Co71}. \end{proof} \begin{prop}\label{cor:n+m} Let $\mathcal{P}_1= X({\mathcal G}_1, v_1)$ and $\mathcal{P}_2=X({\mathcal G}_2, v_2)$ be path sets with right-resolving presentations, where $\mathcal{G}_1$ has $m_1$ vertices and $\mathcal{G}_2$ has $m_2$ vertices. Then $\mathcal{P}_1=\mathcal{P}_2$ if and only if $\mathcal{P}_1$ and $\mathcal{P}_2$ share the same set of initial $(m_1+m_2)$-blocks; i.e., ${\mathcal B}^{I}_{m_1+m_2}({\mathcal G}_1, v_1) = {\mathcal B}^{I}_{m_1+m_2}({\mathcal G}_2, v_2)$. \end{prop} \begin{proof} This result follows directly from Proposition~\ref{thm:exercise}. We form a (disconnected) graph ${\mathcal G} = {\mathcal G}_1 \sqcup {\mathcal G}_2$, which has $m_1+m_2$ vertices. The contrapositive of Proposition~\ref{thm:exercise} says that the follower sets ${\mathcal F}({\mathcal G}, v_1)$ and ${\mathcal F}({\mathcal G}, v_2)$ are equal if and only if they contain the same set of words of length $m_1+m_2$; i.e., if ${\mathcal F}_{m_1+m_2}({\mathcal G}, v_1) = {\mathcal F}_{m_1+m_2}({\mathcal G}, v_2).$ Because the graph ${\mathcal G}$ is disconnected in two pieces, these follower sets are ${\mathcal F}({\mathcal G}, v_1)={\mathcal B}^{I}({\mathcal G}_1, v_1)$ and ${\mathcal F}({\mathcal G}, v_2)={\mathcal B}^{I}({\mathcal G}_2, v_2)$.
For the same reason, the length $m_1+m_2$ follower sets are ${\mathcal F}_{m_1+m_2}({\mathcal G}, v_1) = {\mathcal B}^{I}_{m_1+m_2}({\mathcal G}_1, v_1)$ and ${\mathcal F}_{m_1+m_2}({\mathcal G}, v_2)= {\mathcal B}^{I}_{m_1+m_2}({\mathcal G}_2, v_2)$. Therefore the assumed equality ${\mathcal B}^{I}_{m_1+m_2}({\mathcal G}_1, v_1)= {\mathcal B}^{I}_{m_1+m_2}({\mathcal G}_2, v_2)$ implies equality of the initial follower sets ${\mathcal B}^{I}({\mathcal G}_1, v_1) = {\mathcal B}^{I}({\mathcal G}_2, v_2)$. By Theorem \ref{thm:initialblocks} we conclude ${\mathcal P}_1={\mathcal P}_2$. \end{proof} \begin{prop}\label{thm:identity} {\rm (Testing Identity of Path Sets)} There is an effective algorithm which, when given two pointed graphs $({\mathcal G}_1, v_1)$ and $({\mathcal G}_2, v_2)$, determines whether the path sets ${\mathcal P}_1=X({\mathcal G}_1, v_1)$ and ${\mathcal P}_2= X({\mathcal G}_2, v_2)$ are identical. \end{prop} \begin{proof} Proposition \ref{cor:n+m} yields an effective algorithm to tell if two path sets ${\mathcal P}_1= X({\mathcal G}_1, v_1)$ and ${\mathcal P}_2=X({\mathcal G}_2, v_2)$ are equal. We first use the method of \cite[Theorem 3.2]{AL14a} to convert the given presentations to pointed graphs $({\mathcal G}_1^{'}, v_1^{'})$ and $({\mathcal G}_2^{'}, v_2^{'})$ that are right-resolving and reachable. Suppose these two graphs ${\mathcal G}_1^{'}$ and ${\mathcal G}_2^{'}$ have $m_1$ and $m_2$ vertices, respectively. It now suffices to exhaustively determine all members of the finite sets ${\mathcal B}^{I}_{m_1+m_2}({\mathcal G}_1^{'}, v_1^{'})$ and ${\mathcal B}_{m_1+m_2}^{I}({\mathcal G}_2^{'}, v_2^{'})$ of initial blocks of length $m_1+m_2$ by tracing paths through the graphs, and to check whether these sets are identical. \end{proof} \section{Decimations of path sets} \label{sec:decimation} We study the effect of decimation operations on path sets.
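On an individual (finite or one-sided infinite) word, the decimation $\psi_{j,n}$ studied below simply retains the symbols in positions $j, j+n, j+2n, \ldots$. A minimal sketch on finite prefixes, where the helper name \texttt{decimate} is our own:

```python
def decimate(word, j, n):
    """(j, n)-decimation of a word: psi_{j,n}(x)_i = x_{j + i*n}.

    Keeps the symbols in positions j, j+n, j+2n, ... of `word`; on a
    finite prefix this yields a prefix of the decimated infinite word.
    """
    return word[j::n]
```

Note that every symbol of the original word is recovered by exactly one of the principal decimations $\psi_{0,n}, \ldots, \psi_{n-1,n}$: position $p$ lands in $\psi_{p \bmod n,\, n}$ at index $\lfloor p/n \rfloor$.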
The following result was originally established as Theorem 1.5 in \cite{AL14a}. \begin{thm}\label{thm:decimation} {\rm (${\mathcal C}({\mathcal A})$ is closed under decimation)} If ${\mathcal P} \in {\mathcal C}({\mathcal A})$ is a path set, then for any $(j, n)$ with $j \ge 0$ and $n \ge 1$, the $j$-th decimation set ${\mathcal P}_{j,n}$ of ${\mathcal P}$ at depth $n$, given by $$ {\mathcal P}_{j,n} = \psi_{j, n}({\mathcal P}) := \bigcup_{ {\bf x} \in {\mathcal P}} \{ \psi_{j,n}({\bf x}) \}, $$ is a path set. \end{thm} The proof given in \cite{AL14a} formulated an algorithm which, when given as input a right-resolving presentation $({\mathcal G}, v)$ of a path set ${\mathcal P}$ and the values $(j,n)$, produces as output a (not necessarily right-resolving) presentation $({\mathcal G}_{j,n} , v_{j,n})$ for the $j$-th decimation at level $n$ of ${\mathcal P}$, $\psi_{j,n}({\mathcal P})$, for $0 \le j \le n-1$. The algorithm was outlined in the discussion in \cite[Section 7]{AL14a}. It has the feature that the number of vertices of the output presentation it produces can be much larger than the number of vertices in the input presentation. Here we present algorithms which produce presentations of $\psi_{j,n}({\mathcal P})$ which are smaller: they increase the number of vertices of the input presentations by at most $1$. The paper \cite{AL14a} showed that any iterated shift $S^j({\mathcal P})$ of a path set is a path set. In Section \ref{subsec:31} we give an algorithm which shows that from any presentation of a path set with $m$ vertices one can constructively find a presentation of any $S^j({\mathcal P})$ having at most $m+1$ vertices. Note that $S^j({\mathcal P}) = \psi_{j,1}({\mathcal P}).$ In Section \ref{subsec:32N} we present a second constructive algorithm that finds a presentation of $\psi_{j,n}({\mathcal P})$ for $0 \le j \le n-1$, the higher power presentation, having no more than $m$ vertices.
Combining it with the algorithm of Section \ref{subsec:31}, we obtain a presentation for each $\psi_{j, n}({\mathcal P})$ with $j \ge n$. In Section \ref{subsec:33N} we use this result to prove finiteness of the set of all decimations $\psi_{j,n}({\mathcal P})$ of a path set ${\mathcal P}$. For general sets $X \subseteq \mathcal{A}^{\mathbb{N}}$, we will term the decimations $\psi_{j,n}(X)$ with $0 \le j \le n-1$ {\em principal decimations} and call the remaining $\psi_{j,n}(X)$ with $j \ge n$ {\em subsidiary decimations}. This terminology reflects the fact that, when acting on a single word $a_0a_1\cdots$, the principal decimations at level $n$ supply enough information to reconstruct $X$ word by word, using the identity \eqref{eqn:basic-relation}. \subsection{Iterated shift operators on path sets}\label{subsec:31} We have $\psi_{0,1}(X) = X$ and $\psi_{j, 1}(X) = S^j (X)$, where $S^j$ is the $j$-fold iteration of the left shift operator, which operates on individual symbol sequences $a_0a_1a_2 \cdots \in {\mathcal A}^{{\mathbb N}}$ by \begin{equation*} {\rm S}^j( a_0a_1a_2a_3 \cdots ) = a_j a_{j+1}a_{j+2} a_{j+3} \cdots. \end{equation*} \begin{thm}\label{thm:decimation_alg} {\rm (Iterated shift operator)} Let ${\mathcal P} = X({\mathcal G}, v)$ be a path set with a reachable presentation having $m$ vertices. Then for each $j \ge 1$ there exists a presentation of the $j$-th iterated shift $${\rm S}^j {\mathcal P} := \psi_{j,1}({\mathcal P}) = X({\mathcal G}_{j,1}, w)$$ which has at most $m+1$ vertices. \end{thm} \begin{proof} The path set ${\rm S}^j({\mathcal P})$ is exactly the set of infinite words in ${\mathcal G}$ emanating from the set $V^{(j)}({\mathcal G},v)$ of vertices of ${\mathcal G}$ that can be reached from the initial vertex $v$ after traversing a path with $j$ edges. We create a new graph ${\mathcal G}_{j,1}$ from ${\mathcal G}$ by adding a new vertex $w$, so that $V({\mathcal G}_{j,1} )= V({\mathcal G}) \cup \{ w\}$.
The directed labeled graph ${\mathcal G}_{j,1}$ has the same directed labeled edges as ${\mathcal G}$ on the vertices $V({\mathcal G})$; the new vertex $w$ has no entering edges, and is defined to have a labeled exit edge from $w$ to a vertex $v_2 \in V({\mathcal G})$ with a given label whenever there is an edge $v_1 \to v_2$ in ${\mathcal G}$ with that label for some $v_1 \in V^{(j)}({\mathcal G}, v)$. Any duplicate labeled edges obtained this way are to be discarded. The new vertex will be the marked vertex in the presentation $X({\mathcal G}_{j,1}, w)$. We claim that the presentation $X({\mathcal G}_{j,1}, w)$ is $\psi_{j,1}({\mathcal P})$. Indeed, after one step each path from $w$ enters ${\mathcal G}$ and stays there forever after. \end{proof} \begin{rem} The presentation ${\rm S}^j({\mathcal P}) =X({\mathcal G}_{j,1}, w)$ obtained in this construction need not be right-resolving; there may be multiple edges with the same label emanating from $w$. \end{rem} \begin{defn} \label{weakshift} (Shift-invariance, weak shift-invariance, weak shift-stability) (1) A set $X \subseteq {\mathcal A}^{{\mathbb N}}$ is {\em shift-invariant} if $SX = X$. (2) A set $X$ is {\em weakly shift-invariant} if there are integers $k > j \ge 0$ such that the iterated shifts $S^kX = S^jX$. (3) A set $X \subseteq {\mathcal A}^{{\mathbb N}}$ is {\em weakly shift-stable} if there are integers $k > j \ge 0$ such that $S^k X \subseteq S^j X$. \end{defn} The concept of weak shift-stability was introduced and studied in \cite{ALS21}. Weak shift-invariance implies weak shift-stability of a set $X$. \begin{thm}\label{thm:weak-shift-invariant} {\rm (Weak shift-invariance of path sets)} For any path set ${\mathcal P}$ there exist integers $k > j \ge 0$ giving the equality of iterated shifts $S^k {\mathcal P} = S^j {\mathcal P}.$ That is, all path sets ${\mathcal P}$ are weakly shift-invariant. \end{thm} \begin{proof} Given ${\mathcal P}$, take a reachable presentation for it, letting $m$ be its number of vertices.
According to Theorem \ref{thm:decimation_alg}, all iterated shifts $S^j({\mathcal P})$ for $j \ge 1$ have reachable presentations (on the same alphabet) having at most $m+1$ vertices. The number of such presentations is finite. (There are at most $(m+1)\, 2^{|{\mathcal A}|(m+1)(m+2)}$ of them, noting that presentations do not allow multiple directed edges with the same label between two vertices.) By the pigeonhole principle, there must exist two integers $0\le j<k$ giving the same presentation, so $S^k {\mathcal P} = S^j {\mathcal P}$. \end{proof} \subsection{Higher power presentation for decimations of path sets}\label{subsec:32N} We present an algorithm which constructs from a given presentation of ${\mathcal P}= X({\mathcal G}, v)$ a presentation ${\mathcal G}_{j,n}$ of $\psi_{j,n}({\mathcal P})$ for principal decimations, called here the {\em modified $n$-th higher power presentation}, which has the vertex bound $|V({\mathcal G}_{j,n})| \le |V({\mathcal G})|$. It is based on the well-known {\em $n$-th higher power construction}, cf. \cite[Sect. 1.4]{LM95}, which presents ${\mathcal P}$ in blocks using labels from a larger symbol alphabet ${\mathcal A}^n$. The modified algorithm replaces the $n$-block word labels produced by this construction by labels from the original alphabet ${\mathcal A}$, in such a way as to obtain presentations ${\mathcal G}_{j,n}$ of all principal decimations $\psi_{j,n}({\mathcal P})$, for $0 \le j \le n-1$. We then apply the shift construction in Theorem \ref{thm:decimation_alg} to get a presentation of each $\psi_{j,n}({\mathcal P})$ for $j \ge n$ having at most one extra vertex: $|V({\mathcal G}_{j,n})| \le |V({\mathcal G})| +1$. \begin{thm}\label{thm:decimation_alg2} {\rm (Higher powers of a path set)} Let ${\mathcal P} = X({\mathcal G}, v)$ be a path set with a given presentation on alphabet ${\mathcal A}$.
For $n \ge 2$ there exists a presentation $\psi_{j, n}({\mathcal P}) = X({\mathcal G}_{j,n}, v)$ of the $(j, n)$-th decimation of ${\mathcal P}$, for $0 \le j \le n-1$, such that each ${\mathcal G}_{j,n}$ has the same vertex set as ${\mathcal G}$ and has the same marked vertex $v$. For $j \ge n$ there exists a presentation $\psi_{j, n}({\mathcal P}) = X({\mathcal G}_{j,n}, w)$, where ${\mathcal G}_{j,n}$ has the same vertex set as ${\mathcal G}$, plus one extra vertex $w$, which will be the marked vertex. \end{thm} \begin{proof} We associate to the presentation $({\mathcal G}, v)$ a construction $({\mathcal G}_n,v)$ called the {\em $n$-th higher power presentation}, in which ${\mathcal G}_n$ has the same vertex set as ${\mathcal G}$ and the same initial vertex $v$, but its edges are labeled by the product alphabet ${\mathcal A}^n$. (This construction parallels \cite[Definition 2.3.10]{LM95}.) In ${\mathcal G}_n$ we draw a directed edge between vertices $v_1$ and $v_2$ with edge label $b_0b_1\cdots b_{n-1} \in {\mathcal A}^n$ if there is a directed path of length $n$ in ${\mathcal G}$ starting at $v_1$ and ending at $v_2$, having successive edge labels $b_0, b_1 , \ldots, b_{n-1}$. It is straightforward to see that ${\mathcal P}= X({\mathcal G}_n, v)$, viewed in the enlarged alphabet ${\mathcal A}^n$, generates the output infinite words in blocks of $n$ symbols. We now obtain a presentation $({\mathcal G}_{j, n}, v)$ from $({\mathcal G}_{n}, v)$ by relabeling edges, replacing each edge symbol $b_0b_1\cdots b_{n-1} \in {\mathcal A}^n$ by a single symbol $b_j\in {\mathcal A}$, its $j$-th symbol. After this is done, there may exist pairs of vertices $v_1$ and $v_2$ connected by multiple edges labeled with the same symbol $b_j$; we delete duplicate edges. In addition, the resulting graph might be disconnected; we retain the induced subgraph on the set of vertices reachable from $v$ using this set of edges.
We claim that $\psi_{j,n}({\mathcal P}) = X({\mathcal G}_{j,n}, v)$. To prove the claim we show that inclusions hold in both directions. Suppose $x=x_0x_1\ldots\in\psi_{j,n}({\mathcal P})$. Then there is some word $y=y_0y_1\ldots\in{\mathcal P}$ such that $x_i=y_{j+in}$ for all $i$. Since $y$ is presented by $(\mathcal{G},v)$, there is an infinite path in $\mathcal{G}$ starting at $v$, presenting $y$. Therefore, there is an edge in $\mathcal{G}_n$ from $v$ to a vertex $v'$ of ${\mathcal G}$ labeled with the first $n$ letters of $y$, another edge from $v'$ to another vertex $v''$ labeled with the next $n$ letters, and so on. Take a corresponding path in $\mathcal{G}_{j,n}$. The word presented will begin with the $j$-th letter of the first block of $n$ letters from $y$, followed by the $j$-th letter of the second block of $n$ letters, and so on. Thus, the word presented will be $y_jy_{j+n}y_{j+2n}\ldots$. This word is $x$, so $x \in X({\mathcal G}_{j,n},v)$, whence $\psi_{j,n}({\mathcal P}) \subseteq X({\mathcal G}_{j,n}, v)$. For the other inclusion, suppose $x\in X(\mathcal{G}_{j,n},v)$. Then there is a path in ${\mathcal G}_{j,n}$ starting at $v$ presenting $x$. A corresponding path in ${\mathcal G}_n$ will present a word whose $(j,n)$-th decimation is $x$. Thus, $x\in\psi_{j,n}({\mathcal P})$, and we have $\psi_{j,n}({\mathcal P}) \supseteq X({\mathcal G}_{j,n}, v)$. We have completed the construction for principal decimations. For the remaining decimations $\psi_{j,n}({\mathcal P})$ with $j \ge n$, we apply the higher power construction to the presentation obtained in Theorem \ref{thm:decimation_alg}, in which $\psi_{j, n}({\mathcal P})$ becomes the initial principal decimation $\psi_{0,n}({\rm S}^j {\mathcal P})$ of ${\rm S}^j {\mathcal P}$.
\end{proof} \subsection{Finiteness of the full decimation set}\label{subsec:33N} Definition \ref{def:full-decimation-set} defines the {\em full decimation set} ${\mathfrak D}({\mathcal P})$ of a path set by $$ {\mathfrak D}({\mathcal P}) := \{ \psi_{j,n}({\mathcal P}): \, \mbox{all} \quad n \ge 1 \quad \mbox{and all } \quad j \ge 0\}. $$ For the special case of path sets we show finiteness of the full decimation set. \begin{thm}\label{thm:decimation_set_bound2} {\rm (Full decimation set bound)} For each path set ${\mathcal P}$ its full decimation set ${\mathfrak D}({\mathcal P})$ is a finite set. If ${\mathcal P}$ has a presentation having $m$ vertices, then ${\mathfrak D}({\mathcal P})$ has cardinality bounded by $$ |{\mathfrak D}({\mathcal P}) | \le 2^{ m^2|{\mathcal A}|}. $$ \end{thm} \begin{proof} Theorem \ref{thm:decimation_alg} implies an upper bound for $|{\mathfrak D}({\mathcal P})|$ given by the number of distinct labeled directed graphs on $m$ vertices having the property that there is at most one directed edge going from one given vertex $v_1$ to another given vertex $v_2$ with a given symbol $a\in {\mathcal A}$. The number of such directed vertex pairs is $m^2$, and the number of possible directed edge patterns from a fixed vertex $v_1$ to another fixed vertex $v_2$ is exactly $2^{|{\mathcal A}|}$, so we obtain $|{\mathfrak D}({\mathcal P}) | \le 2^{m^2|{\mathcal A}|}.$ \end{proof} \begin{rem}\label{rem:37} In contrast to Theorem \ref{thm:decimation_set_bound}, there exist closed $X \subseteq {\mathcal A}^{{\mathbb N}}$ for which all members of the infinite collection $\{\psi_{j, n}(X)\}$ for $j \ge 0$ are distinct; see \cite[Example 6.5]{ALS21}. \end{rem} \subsection{Right-resolving presentations for decimations of path sets}\label{subsec:34N} The output presentation of $\psi_{j,n}({\mathcal P})$ produced by Theorem \ref{thm:decimation_alg} need not be right-resolving, even if the given input presentation of ${\mathcal P}$ were right-resolving.
Using the subset construction for obtaining a right-resolving presentation from a general presentation, we obtain the following result. \begin{thm}\label{thm:decimation_presentation} {\rm (Right-resolving presentations of decimation sets of a path set)} Let ${\mathcal P}$ be a path set on an alphabet ${\mathcal A}$ with at least two letters, having a (not necessarily right-resolving) presentation ${\mathcal P} = X({\mathcal G}, v)$ with $m$ vertices. Then for each $n \ge 1$ and each $j \ge 0$ the decimation set $\psi_{j,n} ({\mathcal P})$ has a right-resolving presentation having at most $2^{m+1}-1$ vertices. \end{thm} This bound on the number of vertices implies a finiteness result for the number of distinct decimation sets; see Theorem \ref{thm:decimation_set_bound2}. \section{Interleaving of path sets} \label{sec:intalgsec} Our goal is to prove the following result constructively. \begin{thm}\label{thm:intalgthm2} {\rm (${\mathcal C}({\mathcal A})$ is closed under interleaving)} If $\mathcal{P}_0, \ldots, \mathcal{P}_{n-1}$ are path sets on the alphabet $\mathcal{A}$, then their $n$-fold interleaving $$X := {\circledast}_{i=0}^{n-1} {\mathcal P}_i= \mathcal{P}_0 {\circledast} \mathcal{P}_1 {\circledast} \cdots {\circledast} \mathcal{P}_{n-1}$$ is a path set; i.e., $X \in {\mathcal C}({\mathcal A})$. \end{thm} To do this, we give an effective procedure for computing a presentation $({\mathcal G}, v)$ of the $n$-fold interleaving set $X:= \mathcal{P}_0 {\circledast} \mathcal{P}_1 {\circledast} \cdots {\circledast} \mathcal{P}_{n-1}$, given presentations of each input factor $\mathcal{P}_i= X({\mathcal G}_i, v_i)$. This presentation certifies that $X$ is a path set. We give examples. We also prove the converse result that every interleaving factor of a path set ${\mathcal P}$ is a path set given by some decimation of ${\mathcal P}$.
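At the level of individual words, the $n$-fold interleaving intertwines its factors symbol by symbol: position $i + tn$ of the output carries the $t$-th symbol of the $i$-th factor. A small sketch on equal-length finite words, where the name \texttt{interleave} is our own choice:

```python
def interleave(*words):
    """n-fold interleaving of n equal-length words.

    The output x satisfies x[i + t*n] = words[i][t], matching the
    symbol-by-symbol definition of the n-fold interleaving; slicing
    x[i::n] (a decimation) recovers the i-th factor.
    """
    n = len(words)
    return ''.join(words[p % n][p // n] for p in range(n * len(words[0])))
```

For example, interleaving \texttt{"ace"} and \texttt{"bdf"} yields \texttt{"abcdef"}, and the two decimations \texttt{"abcdef"[0::2]} and \texttt{"abcdef"[1::2]} recover the factors.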
\subsection{$n$-fold interleaving construction}\label{subsec:41} \begin{thm}\label{thm:algthm2} {\rm (Interleaving pointed graph product construction)} Let $n \ge 2$ and suppose that $\mathcal{P}_0, \ldots, \mathcal{P}_{n-1}$ are path sets with given presentations $(\mathcal{G}_0,v_0), \ldots, (\mathcal{G}_{n-1},v_{n-1})$, respectively. There exists a construction taking as inputs these presentations and giving as output a presentation $({\mathcal H}, {\bf v})$ of the $n$-fold interleaving $X := \mathcal{P}_0 {\circledast} \mathcal{P}_1 {\circledast} \cdots {\circledast} \mathcal{P}_{n-1}.$ In particular $X= X({\mathcal H}, {\bf v})$ is a path set. This construction has the following properties: \begin{enumerate} \item[(i)] If $\mathcal{G}_i$ has $k_i$ vertices for each $0 \leq i \leq n-1$, then $\mathcal{H}$ will have at most $n\prod_{i=0}^{n-1}k_i$ vertices. \item[(ii)] If the pointed graphs $(\mathcal{G}_i, v_i)$ are right-resolving for all $0 \leq i \leq n-1$, then the output pointed graph $\mathcal{H}$ will also be right-resolving. \item[(iii)] If the pointed graphs $(\mathcal{G}_i, v_i)$ are pruned for all $0 \leq i \leq n-1$, then the output pointed graph $\mathcal{H}$ will also be pruned. \end{enumerate} \end{thm} \begin{proof}[Proof of Theorem~\ref{thm:algthm2}] Suppose $\mathcal{P}_i$ is presented by the pointed graph $(\mathcal{G}_i, v_i^0)$, which has vertex set $\mathcal{V}_i$ having $k_i$ vertices, for all $0 \leq i \leq n-1$. We construct a new pointed labeled graph $(\mathcal{H}, {\bf v})$, which we term the {\em $n$-fold interleaving pointed graph product} of $\tilde{{\mathcal G}}_i := (\mathcal{G}_i, v_i^0)$.
The underlying directed labeled graph ${\mathcal H}$ is the {\em $n$-fold interleaving graph product} of the labeled graphs ${\mathcal G}_i$: $$ {\mathcal H} := {\circledast}_{i=0}^{n-1} {\mathcal G}_i := {\mathcal G}_0 {\circledast} {\mathcal G}_1 {\circledast} \cdots {\circledast} {\mathcal G}_{n-1}, $$ using as input an ordered set of $n$ directed labeled graphs $\mathcal{G}_i$. The vertices of ${\mathcal H}$ consist of a union of products of the vertices of the ${\mathcal G}_i$, for $n$ cyclically rotated copies of the ${\mathcal G}_i$. To begin, choose (for convenience) a numbering of the vertices ${\mathcal V}_i$ of each $\mathcal{G}_i$, and let $v_i^j$ be the $j$-th element of $\mathcal{V}_i$, with $v_i^0$ the marked vertex of $\mathcal{V}_i$. We define the $i$-th vertex set in ${\mathcal H}$ to be $$ {\mathcal V}^{i} := {\mathcal V}_{i} \times {\mathcal V}_{i+1} \times \cdots \times {\mathcal V}_{n-1} \times {\mathcal V}_0 \times {\mathcal V}_1 \times \cdots \times {\mathcal V}_{i-1}, $$ and let ${\mathcal V}({\mathcal H}) = \bigcup_{i=0}^{n-1} {\mathcal V}^{i}$ be the vertex set of ${\mathcal H}$. Here a vertex in ${\mathcal V}^{i}$ is a vector $$ (v_i^{j_{i}}, v_{i+1}^{j_{i+1}} ,\ldots , v_{n-1}^{j_{n-1}}, v_0^{j_0}, v_1^{j_1}, \ldots , v_{i-1}^{j_{i-1}})\,: \, \, \mbox{where each } \, 0 \le j_m \le k_m -1. $$ The labeled edges of $\mathcal{H}$ all connect vertices of ${\mathcal V}^{i}$ to vertices of the next set ${\mathcal V}^{i+1}$, with indices taken modulo $n$. Whenever there is an edge from $v_i^{\ell_1}$ to $v_i^{\ell_2}$ in $\mathcal{G}_i$ which has label $a$, draw edges in $\mathcal{H}$ labeled $a$ from each vertex in ${\mathcal V}^i$ that is of the form\footnote{There are $\frac{1}{k_i} \prod_{j=0}^{n-1} k_j$ such edges.} \begin{equation*} (v_i^{\ell_1}, v_{i+1}^{j_{i+1}}, \ldots , v_{n-2}^{j_{n-2}}, v_{n-1}^{j_{n-1}},v_0^{j_0}, v_1^{j_1}, \ldots , v_{i-1}^{j_{i-1}})
\end{equation*} to that vertex in ${\mathcal V}^{i+1}$ given by \begin{equation*} ( v_{i+1}^{j_{i+1}}, \ldots , v_{n-2}^{j_{n-2}}, v_{n-1}^{j_{n-1}},v_0^{j_0}, v_1^{j_1}, \ldots , v_{i-1}^{j_{i-1}}, v_i^{\ell_2}). \end{equation*} We use the cyclic ordering for superscripts $i ~(\bmod ~n)$ of ${\mathcal V}^{i}$, so that ${\mathcal V}^{n} \equiv {\mathcal V}^0$. For the pointed graph version of this construction, we add as the pointed vertex of ${\mathcal H}$ the vertex of ${\mathcal V}^{0}$ given by ${\bf v}^0= (v_0^0, v_{1}^0, \ldots, v_{n-2}^{0} , v_{n-1}^{0})$ determined by the pointed vertices $v_i^0$ of the individual ${\mathcal G}_i$. Now define $\widetilde{\mathcal{P}}$ to be the path set presented by $(\mathcal{H},{\bf v}^0)$. {\bf Claim.} {\em \,\, $\widetilde{\mathcal{P}}={\circledast}_{i=0}^{n-1}\mathcal{P}_i :=\mathcal{P}_0 {\circledast} \mathcal{P}_1 {\circledast} \cdots {\circledast} \mathcal{P}_{n-1}.$}\\ To prove the claim, we first show the inclusion \, ${\circledast}_{i=0}^{n-1}\mathcal{P}_i \subseteq \widetilde{\mathcal{P}}$. Let $(x_t)_{t=0}^{\infty}\in {\circledast}_{i=0}^{n-1}\mathcal{P}_i$. By definition, for each factor $\mathcal{P}_i$ there is an infinite edge path $(e_{i,t})_{t=0}^{\infty}$ in ${\mathcal G}_i$ whose associated symbol sequence satisfies \begin{equation*} (x_{i,t})_{t=0}^{\infty}\in \mathcal{P}_i\text{ for all }0\leq i\leq n-1, \end{equation*} and which visits an associated sequence of vertices $$ (v_{i, t})_{t=0}^{\infty} $$ in the graph ${\mathcal G}_i$. We call $(v_{i,t})_{t=0}^{\infty}$ a {\em vertex path} associated to the edge path $(e_{i, t})_{t=0}^{\infty}$. (A vertex path is uniquely determined by the edge path, requiring that the initial vertex be the marked vertex. There could be several edge paths giving the same vertex path, if there are multiple edges.)
The edge symbol sequences $(x_{i, t})_{t=0}^{\infty}$ interleave to reconstruct the sequence $(x_t)_{t=0}^{\infty}$ via $$ x_{i,t}=x_{i+ nt} \quad \mbox{ for all} \,\, 0\leq t<\infty. $$ We show these elements give a sequence of update edges for an edge path in ${\mathcal H}$ realizing the symbol sequence $(x_t)$. We start at $t=0$ at the initial vertex ${\bf v}^0 =(v_{0}^0,v_{1}^0, \ldots ,v_{n-1}^0) \in {\mathcal V}^0$. We proceed in ``rounds'' of $n$ steps. At the beginning of ``round'' $k$ at $t=kn$ we will be at a vertex ${\bf v}^{k} =(v_0^{j_{0,k}}, v_1^{j_{1,k}} , \ldots , v_{n-1}^{j_{n-1, k}}) \in {\mathcal V}^{0}$. During the round, with steps numbered $0$ to $n-1$, the vertices cyclically rotate (to the left): at the start of the $i$-th step the vector is in ${\mathcal V}^{i}$; the leftmost vertex $v_i^{j_{i,k}}$ is updated to $v_i^{j_{i,k+1}}$ by moving along an edge of ${\mathcal G}_i$ between these two vertices, which has edge label $x_{nk+i}$, and the new vertex in ${\mathcal G}_i$ is moved all the way to the right, to get a vector in ${\mathcal V}^{i+1}$. At the end of the round we are back in ${\mathcal V}^0$ at a new vertex vector ${\bf v}^{k+1}.$ We proceed by induction on the number of rounds $k$. The base case $k=0$ starts with all $j_{i, 0} =0$. The induction hypothesis is that at the end of the $r$-th round we have followed a path on ${\mathcal H}$ that incremented motion on each of the ${\mathcal G}_i$ by the one symbol $x_{i, r}$ and has moved in the $i$-th vector coordinate, corresponding to ${\mathcal G}_i$, from the vertex $v_{i,r}$ to the vertex $v_{i, r+1}$. That is, at step $nr+i$ we produced symbol $x_{i, r}$, and at time $t= n(r+1)$ the $i$-th vector component of ${\bf v}^{r+1}= (v_0^{j_{0,r+1}}, \ldots , v_{n-1}^{j_{n-1, r+1}})$ has entry $v_{i, r+1} \in {\mathcal G}_i$, for $0 \le i \le n-1$.
The induction step is completed using the fact that each $(e_{i, t} : t \ge 0 ) \in {\mathcal P}_i$ is a legal edge path in ${\mathcal G}_i$ that permits taking the next step at ${\mathcal G}_i$ in the next round. We conclude that there is an infinite path in $\mathcal{H}$ originating at ${\bf v}^{0}$ with edge labels $(x_0,x_1,x_2,\ldots)$, so that $(x_t)_{t=0}^{\infty}$ is an element of $\widetilde{\mathcal{P}}$. Thus $({\circledast})_{i=0}^{n-1}\mathcal{P}_i \subseteq \widetilde{\mathcal{P}}.$ It remains to show the reverse inclusion $\widetilde{\mathcal{P}} \subseteq ({\circledast})_{i=0}^{n-1}\mathcal{P}_i $. Suppose we are given $(x_k)_{k=0}^{\infty}\in \widetilde{\mathcal{P}}$. Then in the first $n$ steps there is a vertex path \begin{equation*} \begin{array}{c} ( v_0^0, v_1^0, \ldots, v_{n-2}^0, v_{n-1}^0) ; \quad ( v_1^{0}, v_{2}^0, \ldots, v_{n-1}^0, v_0^{j_{0,1}}); \quad (v_{2}^0, v_{3}^0, \ldots ,v_{n-1}^0 , v_0^{j_{0,1}}, v_1^{j_{1,1}}); \\ \ldots ; \quad (v_{n-1}^0, v_{0}^{j_{0,1}}, v_1^{j_{1,1}}, \ldots, v_{n-2}^{j_{n-2,1}}); \quad (v_0^{j_{0,1}}, v_1^{j_{1,1}}, \ldots, v_{n-2}^{j_{n-2,1}}, v_{n-1}^{j_{n-1,1}}); \end{array} \end{equation*} in $\mathcal{H}$ which can be traversed by edges labeled $x_0,x_1,\ldots, x_{n-1}$. Notice that the first coordinate of a vertex will be the last coordinate of the vertex that follows after $n-1$ steps. Since the initial vertex is $(v_0^0, v_{1}^0, \ldots, v_{n-2}^0,v_{n-1}^0)$, we know that for each $0\leq i\leq n-1$, there is a matching edge in $\mathcal{G}_i$ from $v_i^0$ to $v_i^{j_{i,1}}$. For any $k<\infty$, an edge in $\mathcal{H}$ with edge label $x_k$ from vertex \begin{equation*} (v_i^{j_{i,r}}, \ldots, v_{n-1}^{j_{n-1,r}}, v_{0}^{j_{0,r+1}}, v_{1}^{j_{1,r+1}}, \ldots ,
v_{i-1}^{j_{i-1,r+1}} ) \in {\mathcal V}^{i} , \end{equation*} to vertex \begin{equation*} ( v_{i+1}^{j_{i+1,r}}, \ldots , v_{n-1}^{j_{n-1,r}}, v_{0}^{j_{0,r+1}}, v_{1}^{j_{1,r+1}}, \ldots , v_{i-1}^{j_{i-1,r+1}}, v_i^{j_{i,r+1}}) \in {\mathcal V}^{i+1} \end{equation*} corresponds to a directed edge in $\mathcal{G}_i$ from $v_i^{j_{i,r}}$ to $v_i^{j_{i,r+1}}$ that has label $x_k$. Following our given vertex path in $\mathcal{H}$ for $n-1$ more steps gets us to a vertex in ${\mathcal H}$ whose last coordinate is $v_i^{j_{i,r+1}}$. There is an edge in $\mathcal{H}$ labeled $x_{k+n}$ emanating from this vertex which corresponds to an edge in $\mathcal{G}_i$ labeled $x_{k+n}$ emanating from $v_i^{j_{i,r+1}}$ and going to $v_i^{j_{i,r+2}}$. Thus, for each $0\leq i\leq n-1$, the labels $x_i,x_{i+n},x_{i+2n},\ldots$ are the labels of an infinite path in $\mathcal{G}_i$ originating at $v_i^0$, so $(x_k)_{k=0}^{\infty}\in ({\circledast})_{i=0}^{n-1}\mathcal{P}_i$. We conclude that $\widetilde{\mathcal{P}} \subseteq ({\circledast})_{i=0}^{n-1}\mathcal{P}_i.$ The claim follows. The claim shows that the interleaving $({\circledast})_{i=0}^{n-1}{\mathcal P}_i$ is the path set $\widetilde{{\mathcal P}}$, having a presentation $({\mathcal H}, {\bf v}^0)$. Since ${\mathcal H}$ has $n \prod_{i=0}^{n-1} k_i$ vertices, this proves (i). For (ii), if each of the ${\mathcal G}_i$ is right-resolving, then it is evident from the interleaving graph product construction that ${\mathcal H}$ is right-resolving. Each vertex ${\bf v}$ of ${\mathcal V}^{i}$ has at most one exit edge having a given label $a$, inherited from ${\mathcal G}_i$. For (iii), if the graph ${\mathcal G}_i$ is pruned, then each vertex has at least one exit edge. The construction of edges for ${\mathcal H}$ then shows that each vertex in ${\mathcal V}^{i}$ has an exit edge if and only if each vertex of ${\mathcal G}_i$ has an exit edge.
\end{proof} \begin{proof}[Proof of Theorem ~\ref{thm:intalgthm}] Theorem \ref{thm:intalgthm} is an immediate consequence of Theorem ~\ref{thm:algthm}. \end{proof} \begin{rem}\label{rmk:graph-product-cmt} (1) The $n$-fold interleaving graph product operation does not always produce minimal right-resolving presentations, even when all the input presentations are minimal right-resolving, see Example \ref{exmp1}. (2) The $n$-fold interleaving graph product operation does not always produce reachable presentations when all the input presentations are reachable. \end{rem} \subsection{Examples} We present examples showing that the $n$-fold interleaving pointed graph product, given minimal right-resolving presentations $({\mathcal G}_i, v_i)$ as input, may not produce a minimal right-resolving presentation of their interleaving as output. \begin{exmp}\label{exmp1} (Non-preservation of minimal right-resolving property: extra automorphisms) Let $\mathcal{P}_0 = X(\mathcal{G}_0, v_0)$ and $\mathcal{P}_1 = X(\mathcal{G}_1,v_1)$, where $\mathcal{G}_0$ and $\mathcal{G}_1$ are the graphs given in Figure 4.1. Evidently $\mathcal{P}_0 = \mathcal{P}_1 = \{0,1\}^\mathbb{N}$, the full shift on two letters, and $(\mathcal{G}_0,v_0)$, $(\mathcal{G}_1,v_1)$ are (isomorphic) minimal right-resolving presentations. It is easy to see that $\mathcal{P}_0 {\circledast} \mathcal{P}_1 = \{0,1\}^\mathbb{N}$ as well. Figure 4.2 shows the presentation of $\mathcal{P}_0 {\circledast} \mathcal{P}_1$ given by our algorithm. This presentation is right-resolving but non-minimal; it is a double-covering of the minimal right-resolving presentation. \end{exmp} \begin{figure}\label{fig1} \end{figure} \begin{figure}\label{fig2} \end{figure} The non-minimality of the presentation constructed in Example ~\ref{exmp1} is a result of the fact that the $n$-fold interleaving graph product construction keeps track of which input path set each digit comes from.
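The interleaving graph product construction used in the proof above can be made concrete in a short Python sketch. This is only an illustration under our own conventions (a graph is encoded as a dictionary mapping each vertex to its list of labeled exit edges; the function names are ours, not from the text): states of ${\mathcal H}$ are pairs $(i, \text{vector})$ whose leftmost vector entry is the vertex of ${\mathcal G}_i$ about to be updated.

```python
from itertools import product  # not needed below, but handy for experiments


def interleave_product(graphs, starts):
    """Sketch of the n-fold interleaving graph product H.

    Each graph is a dict: vertex -> list of (label, target) edges.
    A state of H is (i, vec); the leftmost entry of vec is the vertex
    of G_i to be updated, and the updated vertex rotates to the right.
    Returns the start state and the edge relation of H.
    """
    n = len(graphs)
    start = (0, tuple(starts))
    edges = {}
    frontier, seen = [start], {start}
    while frontier:
        i, vec = frontier.pop()
        outs = []
        for label, tgt in graphs[i].get(vec[0], []):
            nxt = ((i + 1) % n, vec[1:] + (tgt,))
            outs.append((label, nxt))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
        edges[(i, vec)] = outs
    return start, edges


def words(start, edges, length):
    """All label words of the given length readable from `start` in H."""
    acc = {("", start)}
    for _ in range(length):
        acc = {(w + a, s2) for (w, s) in acc for (a, s2) in edges[s]}
    return {w for (w, _) in acc}
```

For two one-vertex input graphs presenting the full shifts on $\{0,1\}$ and on $\{2\}$, the product presents exactly the words that alternate letters from the two factors, matching the interleaving $\mathcal{P}_0 \circledast \mathcal{P}_1$.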
If all the input path sets have the same presentation, then the graph product has a cyclic automorphism of order $n$. For any path set $\mathcal{P}$, the presentation of the $n$-fold self-interleaving $\mathcal{P} {\circledast} \mathcal{P} {\circledast} \cdots {\circledast} \mathcal{P}$ given by the construction of Theorem ~\ref{thm:algthm} is an $n$-fold covering of another presentation, the one constructed in \cite[Proposition 3.4]{ABL17}. \begin{exmp}\label{exmp2} (Non-preservation of minimal right-resolving property: failure of follower-separation) Consider the path sets $\mathcal{Q}_0=\{(0^{\infty})\}\cup\{(0^n12^\infty)|n\in\mathbb{N}\}$ and $\mathcal{Q}_1=\{(32^{\infty})\}$. Figure 4.3 gives minimal right-resolving presentations $(\mathcal{H}_0,v_0)$ of $\mathcal{Q}_0$ and $(\mathcal{H}_1, v_2)$ of $\mathcal{Q}_1$. The presentation of $\mathcal{Q}_0 {\circledast} \mathcal{Q}_1$ given by Theorem \ref{thm:algthm} is shown in Figure 4.4. This presentation is not minimal, since $v_1v_3$ and $v_3v_1$ have the same follower sets. However, identifying the vertices $v_1v_3$ and $v_3v_1$ and replacing the edges between them with a single self-loop labeled $2$ will give a minimal right-resolving presentation. This presentation is shown in Figure 4.5. \end{exmp} \begin{figure}\label{fig3} \end{figure} \begin{figure}\label{fig4} \end{figure} \begin{figure}\label{fig5} \end{figure} \section{Interleaving closure operations } \label{sec:5} The paper \cite{ALS21} shows that interleavings of principal $n$-decimations define a series of closure operations $X \mapsto X^{[n]}$ for arbitrary subsets $X \subset {\mathcal A}^{{\mathbb N}}$. We recall properties of these operations established in \cite{ALS21} which relate them to $n$-fold decimations. Then we show that the class ${\mathcal C}({\mathcal A})$ of path sets is stable under these closure operations. 
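The decimation and interleaving operations recalled below are easy to experiment with on finite words. The following Python sketch is our own illustration (not from \cite{ALS21}), with the closure $X^{[n]}$ truncated to words of a fixed length divisible by $n$; it exhibits the inclusion $X \subseteq X^{[n]}$ and the idempotence of the closure operation.

```python
from itertools import product


def decimate(x, j, n):
    """psi_{j,n}: keep the letters in positions congruent to j (mod n)."""
    return x[j::n]


def interleave(parts):
    """Merge n equal-length words x^0,...,x^{n-1} into x with x[nt+i] = x^i[t]."""
    n, L = len(parts), len(parts[0])
    return "".join(parts[k % n][k // n] for k in range(n * L))


def closure(X, n):
    """Finite-word analogue of X^[n] = psi_{0,n}(X) (*) ... (*) psi_{n-1,n}(X)."""
    decs = [{decimate(x, j, n) for x in X} for j in range(n)]
    return {interleave(p) for p in product(*decs)}
```

For example, $X = \{0011, 1100\}$ has $2$-fold decimations $\{01, 10\}$ in both residue classes, so its $2$-fold closure consists of the four words obtained by freely recombining them, and closing again adds nothing new.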
\subsection{Interleaving closure operations} Decimations combined with $n$-interleavings define a series of closure operations on path sets. The closure operations are defined for arbitrary subsets $X \subset {\mathcal A}^{{\mathbb N}}$, as described in \cite{ALS21}. \begin{defn} \label{def:dec-closed} (Interleaving closure operations) Given a subset $X$ of ${\mathcal{A}}^{{\mathbb N}}$ the {\em $n$-fold interleaving closure $X^{[n]}$ of $X$} is given by the $n$-fold interleaving \begin{equation*} X^{[n]} := \psi_{0, n}(X) {\circledast}\psi_{1,n}(X) {\circledast} \cdots {\circledast} \psi_{n-1, n}(X). \end{equation*} \end{defn} We recall some results from \cite{ALS21}. Parts (1) and (2) of the following result are consequences of Theorem 4.2 of \cite{ALS21}, and parts (3) and (4) are consequences of Theorem 4.12 of \cite{ALS21}. \begin{thm}\label{thm:inclusion} {\rm ($n$-fold interleaving closure)} \, Given a subset $X$ of ${\mathcal{A}}^{{\mathbb N}}$ one has the set inclusion \begin{equation}\label{eqn:interleave-inclusion} X \subseteq X^{[n]}, \end{equation} where $X^{[n]}$ is the $n$-fold interleaving closure of $X$. If $X \subseteq Y $ then $X^{[n]} \subseteq Y^{[n]}$. In addition: (1) The operation $X \mapsto X^{[n]}$ is idempotent; i.e., $(X^{[n]})^{[n]} = X^{[n]}$ for all $X \subset {\mathcal A}^{{\mathbb N}}$. (2) The $n$-fold interleaving closure $X^{[n]}$ is the maximal set $Y$ such that $X \subseteq Y$ and \begin{equation*} \psi_{j,n}(Y) = \psi_{j, n}(X) \quad \mbox{for} \quad 0\le j \le n-1. \end{equation*} (3) If $X$ is a closed set in ${\mathcal{A}}^{{\mathbb N}}$ then each decimation set $X_{j,n}= \psi_{j,n}(X)$ is a closed set. The $n$-fold interleaving closure $X^{[n]}$ is a closed set in ${\mathcal A}^{{\mathbb N}}$. (4) The $n$-fold interleaving closure operation commutes with topological closure in the product topology on ${\mathcal A}^{{\mathbb N}}$, in the sense that $$ (\overline{X})^{[n]} = \overline{X^{[n]}}.
$$ \end{thm} The following result is a consequence of Theorem 2.8 of \cite{ALS21}. \begin{thm}\label{thm:DIF2} {\rm (Decimations and interleaving factorizations)} A general subset $X$ of ${\mathcal{A}}^{{\mathbb N}}$ has an {$n$-fold interleaving factorization} $X = X_{0} {\circledast} X_{1} {\circledast} \cdots {\circledast} X_{n-1}$ if and only if $X= X^{[n]}$. In this case, each $$X_{j} = \psi_{j,n}(X) \quad \mbox{for} \quad 0 \le j \le n-1,$$ so when they exist, $n$-fold interleaving factorizations are unique. \end{thm} \subsection{Interleaving closures of path sets} \label{subsec:52} We specialize to path sets, and show that all the $n$-fold interleaving closures ${\mathcal P}^{[n]}$, $n \ge 1$, of a path set ${\mathcal P}$ are path sets (Theorem \ref{thm:factorthm2}). \begin{thm}\label{thm:factorthm2} {\rm (${\mathcal C}({\mathcal A})$ is stable under $n$-fold interleaving closure operations)} If ${\mathcal P}$ is a path set, then for each $n \ge 1$ the $n$-fold interleaving closure ${\mathcal P}^{[n]}$ is a path set. In addition, if ${\mathcal P}$ is $n$-factorizable then each of its $n$-fold interleaving factors ${\mathcal P}_j= \psi_{j,n}({\mathcal P})$, $0 \le j\le n-1$, is a path set. \end{thm} \begin{proof} If ${\mathcal P}$ is a path set then by Theorem \ref{thm:decimation} each $\psi_{j,n}({\mathcal P})$ is a path set. Thus $$ {\mathcal P}^{[n]} := \psi_{0, n}({\mathcal P}) {\circledast} \psi_{1, n}({\mathcal P}) {\circledast} \cdots {\circledast} \psi_{n-1,n}({\mathcal P}) $$ is a path set by Theorem \ref{thm:intalgthm2}. Now suppose ${\mathcal P}$ has an $n$-fold interleaving factorization $$ {\mathcal P} = X_{0,n} {\circledast} X_{1, n} {\circledast} \cdots {\circledast} X_{n-1, n}. $$ Then by Theorem \ref{thm:DIF2}, $X_{j,n}= \psi_{j,n}({\mathcal P})$ for $0 \le j\le n-1$. But $\psi_{j,n}({\mathcal P})$ is a path set by Theorem \ref{thm:decimation}.
\end{proof} \section{Interleaving factorizations } \label{sec:5B} We recall results on interleaving factorizations of general sets $X \subset {\mathcal A}^{{\mathbb N}}$ from \cite{ALS21}, relating to closure operations. We deduce that the set ${\mathcal C}^{\infty}({\mathcal A})$ of infinitely factorizable path sets is stable under $n$-fold interleavings of its members, for all $n \ge 1$. Finally we obtain a bound on the size of minimal right-resolving presentations of interleaving factors of path sets, which are necessarily principal decimations, which improves on the bound of Theorem \ref{thm:decimation_alg} for general decimations of path sets. \subsection{Structure of interleaving factors: arbitrary sets $X$}\label{subsec:55} We recall from \cite{ALS21} results on the structure of possible interleaving factorizations for a general set $X \subset {\mathcal A}^{{\mathbb N}}$, and derive corollaries of one of them. The following result is a consequence of Theorem 2.12 of \cite{ALS21}. \begin{thm} \label{thm:lcm-factorization} {\rm (Divisibility for interleaving factorizations) } (1) Let $X \subseteq {\mathcal A}^{{\mathbb N}}$ have an $n$-fold interleaving factorization. If $d$ divides $n$, then $X$ also has a $d$-fold interleaving factorization. (2) Let $X$ have $m$-fold and $n$-fold interleaving factorizations. Then $X$ has an $\ell$-fold interleaving factorization, where $\ell= {\rm lcm}(m,n)$ is the least common multiple of $m$ and $n$. \end{thm} An immediate consequence of this result is a dichotomy. For a general set $X \subset {\mathcal A}^{{\mathbb N}}$, exactly one of the following holds. \begin{enumerate} \item[(1)] $X$ is $n$-factorizable for infinitely many $n$, \item[(2)] $X$ is $n$-factorizable for a finite set of $n$, which are exactly the divisors of a fixed integer $f= f(X)$. \end{enumerate} The following result shows that if $X$ is a closed set, then infinite factorizability implies $n$-factorizability for all $n \ge 1$.
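The dichotomy can be observed on truncations to finite words. In the following self-contained Python sketch (our own illustration; `factorizable` tests the finite-word analogue of the criterion $X = X^{[n]}$, restricted to $n$ dividing the word length), a full product set passes the test for every such $n$, while a ``diagonal'' set passes only for $n \in \{1,2\}$:

```python
from itertools import product


def closure(X, n):
    """Finite-word analogue of the n-fold interleaving closure X^[n].

    Assumes n divides the common word length of X.
    """
    decs = [{x[j::n] for x in X} for j in range(n)]

    def merge(parts):
        # interleave n equal-length words: position nt+i gets parts[i][t]
        return "".join(parts[k % n][k // n] for k in range(n * len(parts[0])))

    return {merge(p) for p in product(*decs)}


def factorizable(X, n):
    """Finite-word analogue of the criterion X = X^[n]."""
    return closure(X, n) == X
```

The diagonal set $\{0202, 1212\}$ is closed under recombining its two $2$-fold decimations but not its four $4$-fold ones, mirroring case (2) of the dichotomy with $f = 2$ in this truncated sense.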
It is a consequence of Theorem 2.13 of \cite{ALS21}. \begin{thm} \label{thm:61} {\rm (Classification of infinitely factorizable closed $X$)} For a closed set $X \subseteq {\mathcal{A}}^{{\mathbb N}}$ where ${\mathcal{A}}$ is a finite alphabet, the following properties are equivalent. \begin{enumerate} \item[(i)] $X$ is infinitely factorizable; i.e., $X$ has an $n$-interleaving factorization for infinitely many $n \ge 1$. \item[(ii)] $X$ has an $n$-interleaving factorization for all $n \ge 1$. \item[(iii)] For each $k \ge 0$ there are nonempty subsets ${\mathcal{A}}_k \subset {\mathcal{A}}$ such that $X = \prod_{k=0}^{\infty} {\mathcal{A}}_k$ is a countable product of finite sets with the product topology. \end{enumerate} \end{thm} \begin{rem}\label{rem:64a} (1) If $|{\mathcal A}| \ge 2$, then there are uncountably many infinitely factorizable closed sets $X \subset {\mathcal A}^{{\mathbb N}}$, while there are only countably many path sets. (2) For $k, \ell \ge 1$ and consecutive sets ${\mathcal A}_k, {\mathcal A}_{k+1}, \ldots, {\mathcal A}_{k+\ell}$, if there is a block $a_k a_{k+1} \cdots a_{k+\ell}$ with each $a_{k+i} \in {\mathcal{A}}_{k+i}$ for $0\le i \le \ell$ that does not occur in any element of $X$, we say that $X$ has a \emph{$(k , \ell)$-missing-configuration}. The proof shows that existence of a $(k, \ell)$-missing-configuration certifies that $X$ has no $n$-fold interleaving factorization with $n \ge k+ \ell +1$. The proof of Theorem \ref{thm:61} given in \cite{ALS21} shows that if $X$ is not infinitely factorizable then it has a $(k, \ell)$-missing-configuration for some finite $k, \ell \ge 1$. \end{rem} \begin{cor}\label{cor:62} Let $X$ be an infinitely factorizable closed subset of $\mathcal{A}^{{\mathbb N}}$. Then it is factorizable for all $n \ge 1$, so its factor set ${\mathfrak{F}}(X)$ contains all decimations $\psi_{j, n}(X)$ for $n \ge 1$ and $0 \le j \le n-1$.
Each decimated set $\psi_{j, n}(X)$ is also infinitely factorizable. \end{cor} \begin{proof} By property (ii) of Theorem \ref{thm:61}, $X$ is factorizable for each $n \ge 1$, and its $n$-fold factors are $\psi_{j, n}(X)$ for $0 \le j \le n-1$. Now property (iii) is preserved under decimations of all orders, hence all $\psi_{j, n}(X)$ must be infinitely factorizable. \end{proof} For an infinitely factorizable $X$ it is possible that all decimations $\psi_{j,n}(X)$ are pairwise distinct. In such cases the factor set ${\mathfrak{F}}(X)$ would be infinite. \begin{cor}\label{cor:63} The set ${\mathcal Y}(\mathcal{A})$ of all infinitely factorizable closed subsets $X \subseteq \mathcal{A}^{{\mathbb N}}$ is closed under the operation of $n$-fold interleaving for all $n \ge 1$. That is, if $X_0, X_1, \ldots , X_{n-1} \in {\mathcal Y}(\mathcal{A})$, then $$ ({\circledast})_{i=0}^{n-1} X_i = X_0 {\circledast} X_1 {\circledast} \cdots {\circledast} X_{n-1} \in {\mathcal Y}(\mathcal{A}). $$ \end{cor} \begin{proof} This follows from the characterization of infinite factorizability by property (iii) of Theorem \ref{thm:61}. This property is inherited under $n$-fold interleaving of sets $X_i$ that have it. \end{proof} Combining Theorem \ref{thm:lcm-factorization} and Theorem \ref{thm:61} yields the following result. \begin{thm} \label{thm:classification} (\cite[Theorem 2.10]{ALS21}) {\rm (Dichotomy theorem)} Let $X$ be a closed subset of ${\mathcal A}^{{\mathbb N}}$. Then exactly one of the following holds for $X$. (1) (Infinitely factorizable) The set of $n$ where $X$ has an $n$-interleaving factorization is the set of all positive integers ${\mathbb N}^{+}$. (2) (Finitely factorizable) The set of $n$ where $X$ has an $n$-interleaving factorization is the set of all divisors of some integer $f = f(X)$.
\end{thm} \subsection{Infinitely factorizable path sets}\label{subsec:56} We deduce that the collection ${\mathcal C}^{\infty}({\mathcal A})$ of all infinitely factorizable path sets on ${\mathcal A}$ is closed under all interleaving operations. \begin{thm}\label{thm:infinite_factorizable_clsd} {\em (${\mathcal C}^{\infty} ({\mathcal A})$ is closed under interleaving)} (1) If ${\mathcal P}$ is a path set on the alphabet ${\mathcal A}$ having an $n$-fold interleaving factorization for all $n \ge 1$, then each interleaving factor $\psi_{j,n}({\mathcal P})$ is itself infinitely factorizable. (2) Conversely, if the $n$ path sets $\{ {\mathcal P}_i : \, 0 \le i \le n-1\}$ are each infinitely factorizable then the $n$-fold interleaving ${\mathcal P} := {\mathcal P}_0 {\circledast} {\mathcal P}_1 {\circledast} \cdots {\circledast} {\mathcal P}_{n-1}$ is infinitely factorizable. \end{thm} \begin{proof} Statement (1) of Theorem \ref{thm:infinite_factorizable_clsd} follows from Corollary \ref{cor:62}, combined with the fact that all interleaving factors $\psi_{j,m}({\mathcal P})$ are path sets. Statement (2) follows from Corollary \ref{cor:63}, combined with Theorem \ref{thm:intalgthm}, which shows that ${\mathcal P}$ is a path set. \end{proof} \subsection{Size of minimal right-resolving presentations for interleaving factors of path sets }\label{subsec:57N} We now suppose that a path set ${\mathcal P}$ has an $n$-fold interleaving factorization. The following bound on the size of minimal right-resolving presentations of interleaving factors (which are necessarily principal decimations) improves on the upper bound of Theorem \ref{thm:decimation_alg} for general $n$-level decimations. \begin{thm}\label{thm:56} {\rm (Upper bound on minimal presentation size of $n$-fold interleaving factors)} Let ${\mathcal P}$ be a path set having $m$ vertices in its minimal right-resolving presentation.
Suppose that ${\mathcal P}$ has an $n$-fold interleaving factorization ${\mathcal P} = ({\circledast})_{j=0}^{n-1} {\mathcal P}_j$. Then each $n$-fold interleaving factor ${\mathcal P}_j = \psi_{j,n}({\mathcal P})$ has a minimal right-resolving presentation having at most $m$ vertices. \end{thm} \begin{proof} According to the equivalences in Theorem \ref{thm:minimal_presentation}, it suffices to show that for each $j$, the number of distinct word path sets of the form ${\mathcal P}_j^w$, where $w$ is an initial word of ${\mathcal P}_j$, is no larger than the number of distinct word path sets of the form ${\mathcal P}^{w'}$, where $w'$ is an initial word of ${\mathcal P}$. For an initial word $w=w_0w_1\ldots w_{k-1}$ of ${\mathcal P}_j$, let $z\in{\mathcal P}$, and say $z=z^0{\circledast} z^1{\circledast}\ldots{\circledast} z^{n-1}$, with $w$ the initial $k$-length word of $z^{j}$. Let $w'$ be the initial word of $z$ of length $nk+j$. Then the letter in $z$ immediately following $w'$ is from $z^j$. We assert that $\psi_{0,n}({\mathcal P}^{w'})={\mathcal P}_j^w$. We show both inclusions hold. The set ${\mathcal P}^{w'}$ is the set of all infinite words $x$ such that $w'x\in{\mathcal P}$. Note that if $w'x\in{\mathcal P}$, then $\psi_{j,n}(w'x)=wy$, where $y=\psi_{0,n}(x)$. Thus, the $(0,n)$-decimation of any infinite word following $w'$ in ${\mathcal P}$ is an infinite word following $w$ in $\psi_{j,n}({\mathcal P})={\mathcal P}_j$, so we have the inclusion $\psi_{0,n}({\mathcal P}^{w'})\subseteq{\mathcal P}_j^w$. For the other inclusion, note that for any $y\in{\mathcal P}_j^w$, we have $wy\in{\mathcal P}_j$. Hence if we define $\tilde{z}=z^0{\circledast} \ldots {\circledast} z^{j-1}{\circledast} (wy) {\circledast} z^{j+1}{\circledast}\ldots {\circledast} z^{n-1}$, then $\tilde{z}\in {\mathcal P}$, and the initial word of $\tilde{z}$ of length $nk+j$ is $w'$, since the choice of $y$ does not affect the first $nk+j$ letters of $\tilde{z}$.
Thus $\tilde{z}=w'x$ for some $x\in{\mathcal P}^{w'}$, and $\psi_{0,n}(x)=y$. Hence $y\in\psi_{0,n}({\mathcal P}^{w'})$, and we get ${\mathcal P}_j^w\subseteq\psi_{0,n}({\mathcal P}^{w'})$, proving the assertion. Thus, every path set ${\mathcal P}_j^w$ is the $(0,n)$-decimation of a path set ${\mathcal P}^{w'}$. It follows that there are at least as many distinct path sets of the form ${\mathcal P}^{w'}$ as there are of the form ${\mathcal P}_j^{w}$. \end{proof} \begin{rem}\label{rem:55} There is nothing special about the use of $(0,n)$-decimations in this proof. If, after choosing $j$, $w$, and $z$, we had chosen the word $w'$ to be of length $nk+j-i$, for any $0\leq i\leq n-1$, then the letter in $z$ occurring $(i+1)$ steps after the last letter of $w'$ would be from $z^j$, and we could have shown that every path set ${\mathcal P}_j^w$ is the $(i,n)$-decimation of a path set ${\mathcal P}^{w'}$. \end{rem} \section{Structure of Infinitely Factorizable Path Sets} \label{sec:6} This section classifies all infinitely factorizable path sets in terms of the structure of their minimal right-resolving presentation. It deduces a better upper bound on the size of minimal right-resolving presentations of interleaving factors of a general path set (which are decimations) than that derived for general decimations in Theorem \ref{thm:decimation_alg2}. \subsection{Characterization of infinitely factorizable path sets} \label{subsec:62} We characterize the path sets ${\mathcal P}$ that are infinitely factorizable as having a minimal right-resolving presentation $({\mathcal G}, v)$ of a particularly simple kind.
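The characterization below rests on condition (iii) of Theorem \ref{thm:61}: $X$ is a product $\prod_{k} {\mathcal A}_k$ of per-position alphabets. On finite truncations this condition is directly testable: a set of equal-length words is such a (truncated) product exactly when it equals the full product of its per-position alphabets. A short Python sketch (our own illustrative code, not from the paper):

```python
from itertools import product


def is_full_product(X):
    """Finite-word analogue of condition (iii) of the classification:
    X equals the product of its per-position alphabets A_k = {x[k] : x in X}.
    Assumes all words in X have the same length."""
    L = len(next(iter(X)))
    alphabets = [{x[k] for x in X} for k in range(L)]
    return X == {"".join(w) for w in product(*alphabets)}
```

For instance, $\{0202, 0212, 1202, 1212\}$ is the full product of the alphabets $\{0,1\}, \{2\}, \{0,1\}, \{2\}$, while the diagonal subset $\{0202, 1212\}$ is not, since it omits the recombined words $0212$ and $1202$.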
\begin{defn}\label{defn:leveled} (Leveled presentation) A presentation $({\mathcal G}, v)$ of a path set ${\mathcal P}$ is {\em leveled} if it is right-resolving and all infinite paths in ${\mathcal G}$ from the marked vertex $v$ visit exactly the same set of vertices in the same order; i.e., all exit edges of ${\mathcal G}$ from a vertex $v'$ necessarily go to the same target vertex $v''$ (depending on $v'$). There may be multiple edges (with different symbol labels) between $v'$ and $v''$. We say that a path set ${\mathcal P}$ is {\em leveled} if it has such a presentation; otherwise it is {\em non-leveled}. \end{defn} Figure 7.1 gives an example of a leveled presentation. \begin{figure}\label{fig61} \end{figure} \begin{thm}\label{thm:63} A path set ${\mathcal P}$ is infinitely factorizable if and only if it has a minimal right-resolving presentation $({\mathcal G}, v_0)$ that is leveled. \end{thm} \begin{proof} To prove necessity, we must show that if ${\mathcal P}$ is infinitely factorizable, then its minimal right-resolving presentation is leveled. We prove the assertion by induction on the number of vertices reached in ${\mathcal G}$, starting from the marked vertex $v_0$. Using Theorem \ref{thm:61}, condition (iii) for being infinitely factorizable says that $ {\mathcal P} = \prod_{k=0}^{\infty} \mathcal{A}_k, $ where each $\mathcal{A}_k$ is a subset of the (finite) alphabet $\mathcal{A}$. Each exit edge from the marked vertex $v_0$ of $({\mathcal G},v_0)$ goes to a vertex $v'$ whose vertex path set $X({\mathcal G}, v')$ must be $ {\mathcal P}' = \prod_{k=1}^{\infty} \mathcal{A}_k. $ The (finite) vertex follower set ${\mathit{F}}({\mathcal G}, v')$ is then $$ {\mathit{F}}({\mathcal G}, v') = \bigcup_{m=1}^{\infty} \, \prod_{k=1}^m \mathcal{A}_k. $$ By Theorem \ref{thm:minimal_presentation} all vertex follower sets in ${\mathcal G}$ are distinct.
Consequently there can be only one choice for $v''$, and all exit edges from $v_0$ go to it. If $v''=v_0$ we have a self-loop at $v_0$ and are done. Otherwise $v''$ is a new vertex; call it $v_1$. There may be multiple edges (with different labels) from $v_0$ to $v_1$; the labels are exactly the letters in ${\mathcal A}_0$. The induction hypothesis on $v_j$ supposes that the vertex $v_j$ has associated vertex path set ${\mathcal P}_j = \prod_{k=j}^{\infty} \mathcal{A}_k.$ We next study exit edges from $v_j$. They necessarily go to a vertex $v''$ whose vertex path set $X({\mathcal G}, v'')$ is $ {\mathcal P}_{j+1}= \prod_{k=j+1}^{\infty} \mathcal{A}_k$ and whose (finite) vertex follower set is $$ {\mathit{F}}({\mathcal G}, v'') = \bigcup_{m=j+1}^{\infty} \, \prod_{k=j+1}^m \mathcal{A}_k. $$ By uniqueness of a vertex in the minimal presentation having a particular finite follower set, all exit edges must go to the same vertex $v''$. If $v''$ is one of the previously found vertices, we are done. Otherwise we are at a new vertex $v_{j+1}$. Since ${\mathcal P}$ is a path set it has a presentation with finitely many vertices, so the process must terminate. The induction is complete, so $({\mathcal G}, v_0)$ is leveled. Suppose the leveled presentation ${\mathcal G}$ has $m$ vertices. The directed graph ${\mathcal G}$ either has a unique vertex path that is an $m$-cycle, or else this graph has the appearance of a Greek letter $\rho$, with the unique vertex path having a preperiodic part $v_0 \to v_1 \to \cdots \to v_{s-1}$, with $s$ vertices, followed by moving around a periodic part $v_{s} \to v_{s+1} \to \cdots \to v_{s+p-1} \to v_{s}$, a period $p$ cycle, with $p= m -s$. To prove sufficiency, we must show that every path set ${\mathcal P}$ having a leveled presentation $({\mathcal G}, v_0)$ is infinitely factorizable. A leveled presentation is right-resolving, since the labels ${\mathcal A}_k$ exiting from vertex $v_k$ are distinct.
It is clear from the internal structure of a leveled presentation (as a rho-graph) that the associated path set ${\mathcal P}=X({\mathcal G}, v_0)$ necessarily has the form ${\mathcal P} = \prod_{k=0}^{\infty} {\mathcal A}_k$, where for the first $m$ steps ${\mathcal A}_k$ is the set of edge labels exiting vertex $v_k$, for vertices $v_0, v_1, \ldots , v_{m-1}$. After this point the edge labels repeat periodically with period $p=m-s$, where $s$ is the length of the pre-period, giving the equality of sets ${\mathcal A}_{m+j} = {\mathcal A}_{s+k}$ with $0 \le k \le p-1$ determined by the congruence $k \equiv j \ (\bmod\ p)$. Since this presentation satisfies Theorem \ref{thm:61}(iii), ${\mathcal P}$ is infinitely factorizable. Finally, the presentation is minimal by Theorem \ref{thm:minimal_presentation}, because all vertex path sets $X({\mathcal G}, v_i)$ for $0 \le i \le m-1$ are distinct, whence all finite follower sets ${\mathit{F}}({\mathcal G}, v)$ are distinct. \end{proof} \subsection{Bounds for the number of distinct factors of infinitely factorizable path sets} \label{subsec:63} We upper bound the number of distinct interleaving factors ${\mathfrak{F}}({\mathcal P})$ of an infinitely factorizable path set in terms of the size of its minimal right-resolving presentation. Since all factors are decimations, we know by Theorem \ref{thm:decimation_set_bound} that ${\mathfrak{F}}({\mathcal P}) \subseteq {\mathfrak D}({\mathcal P})$ is finite. \begin{thm} \label{thm:monfactfinprop} Suppose that ${\mathcal P}$ is an infinitely factorizable path set that has a right-resolving presentation $({\mathcal G}, v)$ with $m$ vertices. (1) Each possible distinct factor occurs in some $n$-fold factorization having $n \le 2m-1$. (2) The cardinality $|{\mathfrak{F}}({\mathcal P})|$ of the factor set ${\mathfrak{F}}({\mathcal P})$ is at most $m^2$.
\end{thm} \begin{proof} It suffices to consider the minimal right-resolving presentation $({\mathcal G}, v_0)$ of ${\mathcal P}$, which must be a leveled presentation, and which has at most $m$ vertices. Let $p$ be the period of the graph. There is a unique vertex path on ${\mathcal G}$ starting from the initial vertex, which we consider to be the 0th vertex. Call the vertex reachable in 1 step from the 0th vertex the 1st vertex, and so on. Now $\mathcal{A}_k({\mathcal P})$ is the set of symbols available at the $k$-th vertex, and by Theorem \ref{thm:61}, ${\mathcal P}=\prod_{k=0}^{\infty}\mathcal{A}_k({\mathcal P})$. Then $\psi_{j,n}({\mathcal P})=\prod_{k=0}^{\infty}\mathcal{A}_{kn+j}({\mathcal P})$. Now, since there are only $m$ vertices (the 0th vertex through the $(m-1)$st vertex), we may choose $j'<m$ so that the $j'$th vertex is the $j$th vertex, which also gives us that the $(j'+1)$st vertex is the $(j+1)$st vertex, and so on. Hence: \[\psi_{j,n}({\mathcal P})=\prod_{k=0}^{\infty}\mathcal{A}_{kn+j}({\mathcal P})=\prod_{k=0}^{\infty}\mathcal{A}_{kn+j'}({\mathcal P})=\psi_{j',n}({\mathcal P}).\] Likewise, because there are only $m$ vertices, we may choose $n'<m$ so that the $(j'+n')$th vertex is the $(j'+n)$th vertex. We wish to show that this will also imply that the $(j'+kn')$th vertex is the $(j'+kn)$th vertex for all $k$. If $n<m$, we may take $n'=n$. If $n\geq m$, then the $(j'+n)$th vertex is in the periodic part of the graph, so the fact that the $(j'+n')$th vertex is the $(j'+n)$th vertex implies that $n'\equiv n~(\bmod~p)$. This in turn implies that the $(j'+kn)$th vertex is the $(j'+kn')$th vertex for all $k$.
Hence we have: \[\psi_{j,n}({\mathcal P})=\prod_{k=0}^{\infty}\mathcal{A}_{kn+j'}({\mathcal P})=\prod_{k=0}^{\infty}\mathcal{A}_{kn'+j'}({\mathcal P}).\] Since $j'$ and $n'$ are both chosen between $0$ and $m-1$, there are at most $m^2$ distinct sets of this form, whence the cardinality of the factor set ${\mathfrak{F}}({\mathcal P})$ is at most $m^2$, proving (2). However, we have not guaranteed that $j'<n'$, and so the set $\prod_{k=0}^{\infty}\mathcal{A}_{kn'+j'}({\mathcal P})$ is not guaranteed to be one of the $n'$-fold interleaving factors of ${\mathcal P}$. If $n<m$, then we have $j'\leq j<n=n'$, and so we are done. If $n\geq m$, then because the $(j'+n')$th vertex is in the periodic part of the graph, we may take $n''=n'+pr$ for any $r\geq 1$ and get that the $(j'+kn'')$th vertex is the $(j'+kn')$th vertex for all $k\geq0$. Since $j'<m$ and $p\leq m$, it is always possible to choose $r$ such that $j'<n''\leq 2m-1$. We then have: \[\psi_{j,n}({\mathcal P})=\prod_{k=0}^{\infty}\mathcal{A}_{kn'+j'}({\mathcal P})=\prod_{k=0}^{\infty}\mathcal{A}_{kn''+j'}({\mathcal P})=\psi_{j',n''}({\mathcal P}),\] which proves (1). \end{proof} \begin{rem} \label{rem:68} The bound $2m-1$ in Theorem \ref{thm:monfactfinprop} is sharp. Consider a circular graph with $m$ vertices, where the available alphabets at all vertices are distinct. Mark one of the vertices. If ${\mathcal P}$ is the path set presented, then it can be shown that $\psi_{m-1,2m-1}({\mathcal P})$ (which is the full shift over the alphabet available at the $(m-1)$st vertex) is not an $n$-fold interleaving factor for any $n<2m-1$.
\end{rem} \subsection{Minimal right-resolving presentations for interleaving factors, part 2 }\label{subsec:64} Theorem \ref{thm:56} showed that if a path set ${\mathcal P}$ having a minimal right-resolving presentation with $m$ vertices has an $n$-fold interleaving factorization ${\mathcal P} = ({\circledast})_{j=0}^{n-1} {\mathcal P}_j$, then every factor ${\mathcal P}_j= \psi_{j,n}({\mathcal P})$ has a minimal right-resolving presentation having $m$ or fewer vertices. We now show that the equality case implies all these sets are leveled. \begin{thm} \label{thm:68} Let ${\mathcal P}$ be a path set having a minimal right-resolving presentation with $m$ vertices. Suppose that ${\mathcal P}$ has an $n$-fold interleaving factorization ${\mathcal P} = ({\circledast})_{j=0}^{n-1} {\mathcal P}_j$ such that at least one factor ${\mathcal P}_i= \psi_{i,n}({\mathcal P})$ has a minimal right-resolving presentation with $m$ vertices. Then the path set ${\mathcal P}$ must be leveled, and all of the factors ${\mathcal P}_j$ for $0 \le j \le n-1$ are leveled. \end{thm} \begin{proof} Suppose ${\mathcal P}=X({\mathcal G},v)$ and ${\mathcal P}_i=X({\mathcal G}_i,v_i)$ are minimal right-resolving presentations, where ${\mathcal G}$ and ${\mathcal G}_i$ both have $m$ vertices. We begin by showing that the hypotheses of the theorem imply the following: {\bf Claim.} {\em ~All word path sets ${\mathcal P}^w$ are determined by their $(0,n)$-decimations $\psi_{0,n}({\mathcal P}^w)$.} Since ${\mathcal G}$ and ${\mathcal G}_i$ both have $m$ vertices with distinct follower sets, they both have $m$ distinct vertex path sets $X({\mathcal G}, v')$ and $X({\mathcal G}_i, v_i')$ that can be presented by choosing initial vertices. The proof of Theorem \ref{thm:56} established that every word path set ${\mathcal P}_i^w$ is a set of the form $\psi_{0,n}({\mathcal P}^{w'})$, for another word $w'$.
Our claim follows immediately from the fact that there are $m$ such distinct path sets, but also only $m$ distinct path sets ${\mathcal P}^{w'}$. Now, we will show that ${\mathcal P}_j$ is leveled, for all $0\leq j\leq n-1$, using only the key fact stated in our claim (in particular, we no longer need to use any special information about $\mathcal{P}_i$, and what follows works for $j=i$ as well as for $j\neq i$). Let $w^{j,1}$ and $w^{j,2}$ be initial words of ${\mathcal P}_j$, both of length $k$. Say they are the initial blocks of the infinite words $x^{j,1}$ and $x^{j,2}$, respectively. It will suffice to show the equality of word path sets ${\mathcal P}_j^{w^{j,1}}={\mathcal P}_j^{w^{j,2}}$, since this equality is equivalent to the equality of the word follower sets ${\mathit{F}}_{{\mathcal P}_j} (w^{j,1})$ and ${\mathit{F}}_{{\mathcal P}_j} (w^{j,2})$ of $\mathcal{P}_j$, using Theorem \ref{thm:initialblocks}, because these word follower sets are the initial words ${\mathcal B}^{I}( {\mathcal P}_j^{w^{j,1}})$ and ${\mathcal B}^{I}({\mathcal P}_j^{w^{j,2}})$, respectively. This equality of the word follower sets then implies the equality of the vertex follower sets ${\mathit{F}}({\mathcal P}_j, w^{j,1})$ and ${\mathit{F}}({\mathcal P}_j, w^{j,2})$, which, since the presentation of ${\mathcal P}_j$ is minimal right-resolving, means that we arrive at the same vertex of ${\mathcal P}_j$ following the symbol paths for $w^{j,1}$ and $w^{j,2}$ from the initial vertex $v_j$ of this presentation. The property of being at the same vertex is exactly the desired leveling property of ${\mathcal P}_j$. It remains to show that ${\mathcal P}_j^{w^{j,1}}={\mathcal P}_j^{w^{j,2}}$. Now for all $l\neq j$, let $x^l\in{\mathcal P}_l$ be chosen arbitrarily.
Then let $$ y=x^0{\circledast} x^1{\circledast} \ldots {\circledast} x^{j,1}{\circledast} x^{j+1}{\circledast} \ldots{\circledast} x^{n-1} \,\, \mbox{and} \,\, z=x^0{\circledast} x^1{\circledast} \ldots {\circledast} x^{j,2}{\circledast} x^{j+1}{\circledast}\ldots{\circledast} x^{n-1}. $$ Let $b^1$ and $b^2$ be the words made up of the first $(n(k-1)+j+1)$ entries of $y$ and $z$, respectively. In particular, the last entry of $b^1$ is the last entry of $w^{j,1}$, and the last entry of $b^2$ is the last entry of $w^{j,2}$. We will show $$ {\mathcal P}_j^{w^{j,1}}=\psi_{n-1,n}({\mathcal P}^{b^1})\,\, \mbox{and} \,\, {\mathcal P}_j^{w^{j,2}}=\psi_{n-1,n}({\mathcal P}^{b^2}) $$ by reasoning along similar lines as in the proof of Theorem \ref{thm:56}. Specifically, for the first equality, ${\mathcal P}^{b^1}$ is the set of all infinite words $x$ such that $b^1x\in{\mathcal P}$, and if $b^1x\in{\mathcal P}$, then $\psi_{j,n}(b^1x)=w^{j,1}y$, where $y=\psi_{n-1,n}(x)$. (The reason that we have $(n-1,n)$-decimations here instead of $(0,n)$-decimations is that the words $b^1$ and $b^2$ have length $n-1$ less than $kn+j$, the length of the word used in the proof of Theorem \ref{thm:56}.) Thus, the $(n-1,n)$-decimation of any infinite word following $b^1$ in ${\mathcal P}$ is an infinite word following $w^{j,1}$ in $\psi_{j,n}({\mathcal P})={\mathcal P}_j$, and we have $\psi_{n-1,n}({\mathcal P}^{b^1})\subseteq{\mathcal P}_j^{w^{j,1}}$. On the other hand, for any $y\in{\mathcal P}_j^{w^{j,1}}$, we have $w^{j,1}y\in{\mathcal P}_j$. Hence $\tilde{z}=x^0{\circledast} \ldots {\circledast} x^{j-1}{\circledast} (w^{j,1}y) {\circledast} x^{j+1}{\circledast}\ldots {\circledast} x^{n-1}\in {\mathcal P}$, and the initial word of $\tilde{z}$ of length $(n(k-1)+j+1)$ is $b^1$, since the choice of $y$ does not affect the first $(n(k-1)+j+1)$ letters of $\tilde{z}$. Thus $\tilde{z}=b^1x$ for some $x\in{\mathcal P}^{b^1}$, and $\psi_{n-1,n}(x)=y$.
Hence $y\in\psi_{n-1,n}({\mathcal P}^{b^1})$, and we get ${\mathcal P}_j^{w^{j,1}}\subseteq\psi_{n-1,n}({\mathcal P}^{b^1})$, proving that ${\mathcal P}_j^{w^{j,1}}=\psi_{n-1,n}({\mathcal P}^{b^1})$. By the same argument (replacing $b^1$ with $b^2$ and $w^{j,1}$ with $w^{j,2}$), we get ${\mathcal P}_j^{w^{j,2}}=\psi_{n-1,n}({\mathcal P}^{b^2})$. Thus, if we can show that $\mathcal{P}^{b^1}={\mathcal P}^{b^2}$, then we get the desired equality ${\mathcal P}_j^{w^{j,1}}={\mathcal P}_j^{w^{j,2}}$. But our earlier claim tells us that ${\mathcal P}^{b^1}$ and ${\mathcal P}^{b^2}$ are determined by their $(0,n)$-decimations. Their $(0,n)$-decimations are determined by the choice of (the first $k-1$ letters of) $x^{j+1}$, which is the same for $b^1$ and $b^2$ by construction. Thus, we have shown that all the interleaving factors of $\mathcal{P}$ are leveled, and so ${\mathcal P}$ is leveled. \end{proof} \section{ Finitely Factorizable Path Sets} \label{sec:factorsec} Finitely factorizable path sets coincide with non-leveled path sets, so they are algorithmically recognizable. \subsection{Bounds for number of distinct $n$-fold interleaving factorizations} \label{subsec:71} \begin{thm} \label{thm:non_leveled_fact} Let $\mathcal{P}$ be a path set having a right-resolving presentation $({\mathcal G}, v)$ with $m$ vertices. If $\mathcal{P}$ is finitely factorizable, i.e., non-leveled, and has an $n$-fold interleaving factorization, then $n \le m-1$. \end{thm} \begin{proof} To prove the bound, we will assume ${\mathcal P}$ has an $n$-fold interleaving factorization, and show that a minimal right-resolving presentation of ${\mathcal P}$ must have at least $n+1$ vertices. If so, we must have $m \ge n+1$, giving the result. We are given the interleaving factorization ${\mathcal P} = {\mathcal P}_0 {\circledast} {\mathcal P}_1 {\circledast} \cdots {\circledast} {\mathcal P}_{n-1}$.
We let $({\mathcal G}_i, v_i^0)$ for $0 \le i \le n-1$ be minimal right-resolving presentations for each ${\mathcal P}_i$. In particular, by Theorem \ref{thm:minimal_presentation}, each labeled graph ${\mathcal G}_i$ has the property that all its vertices have distinct vertex follower sets. One of the ${\mathcal P}_i$ must be finitely factorizable. For if all of the ${\mathcal P}_i$ were infinitely factorizable, then by Corollary \ref{cor:63} their $n$-fold interleaving ${\mathcal P} = {\mathcal P}_0 {\circledast} {\mathcal P}_1 {\circledast} \cdots {\circledast} {\mathcal P}_{n-1}$ would be infinitely factorizable, a contradiction. For definiteness suppose ${\mathcal P}_{i_0}$ is finitely factorizable. Then ${\mathcal P}_{i_0}$ is non-leveled, by Proposition \ref{thm:63}. Because ${\mathcal P}_{i_0}=X({\mathcal G}_{i_0},v_{i_0}^0)$ is non-leveled, the graph ${\mathcal G}_{i_0}$ has a reachable vertex $w$ which has exit edges to two different vertices---call them $w_1$ and $w_2$---which have distinct vertex follower sets ${\mathit{F}}({\mathcal G}_{i_0}, w_1)$ and ${\mathit{F}}({\mathcal G}_{i_0}, w_2)$ by minimality of the presentation $X({\mathcal G}_{i_0},v_{i_0}^0)$. We now study the $n$-fold interleaving graph product $(\mathcal{H}, {\bf v}^0) := ({\circledast}_n)_{i=0}^{n-1}({\mathcal G}_i,v_i^0)$ constructed in Theorem \ref{thm:algthm}. We will show that $(\mathcal{H}, {\bf v}^{0})$ has at least $n+1$ different vertex follower sets, over all vertices reachable from ${\bf v}^{0}$. If so, by Proposition \ref{onevertexforeachfollowerset}, ${\mathcal P} = X(\mathcal{H}, {\bf v}^{0})$ will have at least $n+1$ distinct word follower sets, and Theorem \ref{thm:minimal_presentation} then implies that the minimal right-resolving presentation $({\mathcal G}, v)$ of ${\mathcal P}$ has at least $n+1$ vertices. Consequently $m \ge n+1$ and we are done.
Recall that the vertex set $V({\mathcal H}) = \bigcup_{i=0}^{n-1} {\mathcal V}^{i}$, but that the graph ${\mathcal H}$ need not be connected. Our argument must establish that the $n+1$ vertex follower sets constructed are in the reachable component of ${\bf v}^{0}$. We first consider a shortest directed path in ${\mathcal G}_{i_0}$ from the initial vertex $v_{i_0}^0$ to the vertex $w$---say it has $k$ vertices---and denote it $v_{i_0}^0,v_{i_0}^1,v_{i_0}^2,\ldots,v_{i_0}^{k-1}$, with $v_{i_0}^{k-1}=w$. Then let $v_i^0,v_i^1,v_i^2,\ldots$ denote an arbitrary vertex path from the initial vertex in $\mathcal{G}_i$. We know that the initial vertex of $\mathcal{H}$ is ${\bf v}^0=(v_0^0,v_1^0,\ldots,v_{n-1}^0) \in {\mathcal V}^0$. By the construction in the proof of Theorem \ref{thm:algthm}, there is an edge from this vertex to the vertex $(v_1^0,v_2^0,\ldots,v_{n-1}^0,v_0^1) \in {\mathcal V}^{1}$, from here to $(v_2^0,v_3^0,\ldots,v_{n-1}^0,v_0^1,v_1^1)\in {\mathcal V}^2$, eventually reaching after $(k-1)n+i_0$ steps the vertex $$ {\bf v}_0:= (v_{i_0}^{k-1} ,v_{i_0+1}^{k-1},\ldots v_{n-1}^{k-1},v_0^k,v_1^k,\ldots,v_{i_0-1}^k) \in {\mathcal V}^{i_0} $$ where ${\mathcal V}^{i_0}= {\mathcal V}_{i_0} \times {\mathcal V}_{i_0+1} \times \cdots \times {\mathcal V}_{n-1} \times {\mathcal V}_0 \times {\mathcal V}_1 \times \cdots \times {\mathcal V}_{i_0-1}.$ For the next step, we have two choices, moving $w \to w_{\ell}$ for $\ell=1,2$, and we can then reach, in sequence, the following $n$ vertices in ${\mathcal H}$: \begin{eqnarray*}\label{nvertices1} &{\bf v}_1(\ell) := (v_{i_0+1}^{k-1},\ldots v_{n-1}^{k-1},v_0^k,v_1^k,\ldots,v_{i_0-2}^k,v_{i_0-1}^k,w_{\ell}) \in {\mathcal V}^{i_0+1}& \\ &{\bf v}_2(\ell) := (v_{i_0+2}^{k-1},\ldots v_{n-1}^{k-1},v_0^k,v_1^k,\ldots,v_{i_0-1}^k,w_{\ell},v_{i_0+1}^{k}) \in {\mathcal V}^{i_0+2} & \\ &\vdots&\\ &{\bf v}_n(\ell) := (w_{\ell},v_{i_0+1}^{k},\ldots v_{n-1}^{k},v_0^{k+1},v_1^{k+1},\ldots,v_{i_0-1}^{k+1})\, \in \,{\mathcal V}^{i_0} \quad \quad& \end{eqnarray*} The
last $n-1$ of these steps will be chosen to travel edges in $\mathcal{H}$ corresponding to identical edges for $\ell=1, 2$ in the graphs ${\mathcal G}_{i_0+j}$ for $1 \le j \le n-1$. We have now specified a list of $2n$ vertices in $\mathcal{H}$ that are reachable from its initial vertex ${\bf v}^{0}$. We will show that among these $2n$ vertices there are at least $n+1$ distinct vertex follower sets in ${\mathcal H}$. To tell vertex follower sets apart we use the following test. {\bf Claim.} {\em Given two vertex follower sets ${\mathcal F}( {\mathcal H}, {\bf y}^1)$ and ${\mathcal F}({\mathcal H},{\bf y}^2)$ with vertices ${\bf y}^1\in {\mathcal V}^{i_1}$ and ${\bf y}^2 \in {\mathcal V}^{i_2}$ in ${\mathcal H}$, respectively, a necessary condition for the vertex follower set equality \begin{equation}\label{eqn:follower-set-match} {\mathcal F}( {\mathcal H}, {\bf y}^1) = {\mathcal F}({\mathcal H},{\bf y}^2) \end{equation} is that each of their projected vertices in the individual graphs ${\mathcal G}_i$ for $0 \le i \le n-1$ must have identical vertex follower sets, \begin{equation}\label{eqn:follower-set-pre-match} {\mathcal F}( {\mathcal G}_{i_1+ j} , {\bf y}^1(j) ) = {\mathcal F}({\mathcal G}_{i_2+j} ,{\bf y}^2(j)), \quad \mbox{for} \quad 0 \le j \le n-1. \end{equation} These conditions are equivalent, for each $j$, to the path set equalities \begin{equation}\label{eqn:path-set-pre-match} X({\mathcal G}_{i_1+j}, {\bf y}^{1}(j))= \psi_{j, n}(X( {\mathcal H}, {\bf y}^1) ) = \psi_{j, n}(X( {\mathcal H}, {\bf y}^2))=X({\mathcal G}_{i_2+j}, {\bf y}^{2}(j)). \end{equation} } To prove the claim, note that following an infinite path in the graph $({\mathcal H}, {\bf v}^{0})$ is the same thing as following separate independent paths in each of the $n$ graphs $({\mathcal G}_i, v_i)$, starting from their initial vertices.
Write ${\bf y}^{\ell} = ({\bf y}^{\ell}(0), {\bf y}^{\ell}(1), \ldots, {\bf y}^{\ell}(n-1))$ for $\ell=1,2$. The vertex follower set equality \eqref{eqn:follower-set-match} implies the path set equality $X( {\mathcal H}, {\bf y}^1) = X({\mathcal H}, {\bf y}^2).$ Note that $\psi_{j,n}(X( {\mathcal H}, {\bf y}^{\ell} )) = X({\mathcal G}_{i_{\ell}+j}, {\bf y}^{\ell}(j))$ for all $j$ and $\ell=1,2$. The vertex follower sets of the projections are completely determined by the property that they are initial languages of the path sets $\psi_{j,n}(X({\mathcal H}, {\bf y}^{\ell}))$. Therefore the equality ${\mathcal F}( {\mathcal G}_{i_1+j} , {\bf y}^1(j) ) = {\mathcal F} ({\mathcal G}_{i_2+j},{\bf y}^2(j))$ holds for all $0 \le j \le n-1$, proving the claim. Thus, for example, for the vertex follower set of vertex $$(v_{i_0+1}^{k-1},\ldots v_{n-1}^{k-1},v_0^k,v_1^k,\ldots,v_{i_0-1}^k,w_2) \in {\mathcal V}^{i_0+1}$$ to be equal to the vertex follower set of $$(v_{i_0+2}^{k-1},\ldots v_{n-1}^{k-1},v_0^k,v_1^k,\ldots,w_1,v_{i_0+1}^{k}) \in {\mathcal V}^{i_0+2},$$ we would need to have the row of path sets \[ \left( X(\mathcal{G}_{i_0+1},v_{i_0+1}^{k-1}), X(\mathcal{G}_{i_0+2},v_{i_0+2}^{k-1}), \ldots, X(\mathcal{G}_{i_0-1},v_{i_0-1}^{k}), X(\mathcal{G}_{i_0},w_2) \right) \] to be identical with the row of path sets \[ \left( X(\mathcal{G}_{i_0+2},v_{i_0+2}^{k-1}), X(\mathcal{G}_{i_0+3},v_{i_0+3}^{k-1}), \ldots, X(\mathcal{G}_{i_0},w_1), X(\mathcal{G}_{i_0+1},v_{i_0+1}^{k}) \right). \] We want to make sure that not too many coincidences of vertex follower sets in $\mathcal{H}$ can occur in this manner. For each of these $2n$ vertices, consider the associated row of path sets as above. Considering all of these rows together gives us path sets filling in two arrays of the following schematic shape, where the entries are path sets. Here $\bar{a} = X({\mathcal G}_{i_0}, w_1)$ and $\bar{b} = X({\mathcal G}_{i_0}, w_2)$.
\begin{equation*} A=\left( \begin{array}{ccccc} x_{i+1} & x_{i+2} & \cdots & x_{i-1} & \bar{a} \\ x_{i+2} & x_{i+3} & \cdots & \bar{a} & y_{i+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{i-1} & \bar{a} & \cdots & y_{i-3} & y_{i-2} \\ \bar{a} & y_{i+1} & \cdots & y_{i-2} & y_{i-1} \\ \end{array} \right) \,\, B=\left( \begin{array}{ccccc} x_{i+1} & x_{i+2} & \cdots & x_{i-1} & \bar{b} \\ x_{i+2} & x_{i+3} & \cdots & \bar{b} & y_{i+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{i-1} & \bar{b} & \cdots & y_{i-3} & y_{i-2} \\ \bar{b} & y_{i+1} & \cdots & y_{i-2} & y_{i-1} \\ \end{array} \right) \end{equation*} We show that matrices of this schematic shape always have at least $n+1$ distinct rows, as a special case of the following combinatorial lemma. \begin{lem}\label{matlem} For any set $X$, let $A$ and $B$ be $n \times n$ matrices with coefficients from $X$, let $R_A$ be the set of distinct rows of $A$, and let $R_B$ be the set of distinct rows of $B$. If $A$ and $B$ have constant skew-diagonal entries $a_{i, n+1-i}=\bar{a}$ and $b_{i,n+1-i}=\bar{b}$, with $\bar{a} \ne \bar{b}$, and if in addition $a_{ij} = b_{ij}$ holds for all entries not on the skew-diagonal ($i+j \ne n+1$), then $|R_A\cup R_B|\geq n+1$. \end{lem} The assertion of Lemma \ref{matlem} completes the proof of the theorem; we prove it below. \end{proof} \begin{proof}[Proof of Lemma \ref{matlem}] We prove the result by induction on $n$. The base case $n=1$ holds, for $A$ has the single entry $\bar{a}$, and $B$ has the distinct single entry $\bar{b}$. Now suppose the lemma holds for $n$, and let $A,B\in M^{(n+1)\times (n+1)}(X)$.
Then we have two matrices of the following form: \begin{equation*} A=\left( \begin{array}{ccccc} x_{1,1} &x_{1,2}& \cdots & x_{1,n} & \bar{a} \\ x_{2,1} & x_{2,2} & \cdots & \bar{a} & x_{2,n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{n,1} & \bar{a} & \cdots & x_{n,n} & x_{n,n+1} \\ \bar{a} & x_{n+1,2}& \cdots & x_{n+1,n} & x_{n+1,n+1} \end{array} \right), \quad B=\left( \begin{array}{cccc} x_{1,1} & \cdots & x_{1,n} & \bar{b} \\ x_{2,1} & \cdots & \bar{b} & x_{2,n+1} \\ \vdots & \ddots & \vdots & \vdots \\ x_{n,1} & \cdots & x_{n,n} & x_{n,n+1} \\ \bar{b} & \cdots & x_{n+1,n} & x_{n+1,n+1} \end{array} \right). \end{equation*} Let $R_A$ be the set of distinct rows of $A$, and $R_B$ be the set of distinct rows of $B$, and let $R=R_A\cup R_B$. We need to show $|R|\geq n+2$. Consider $A',B'\in M^{ n \times n}(X)$ defined as the upper right $ n \times n$ corner of $A$ and $B$. \begin{equation*} A'=\left( \begin{array}{cccc} x_{1,2} & \cdots & x_{1,n} & \bar{a} \\ x_{2,2} & \cdots & \bar{a} & x_{2,n+1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,2} & \cdots & x_{n-1,n} & x_{n-1,n+1} \\ \bar{a} & \cdots & x_{n,n} & x_{n, n+1} \\ \end{array} \right), \, B'=\left( \begin{array}{cccc} x_{1,2} & \cdots & x_{1, n} & \bar{b} \\ x_{2,2} & \cdots & \bar{b} & x_{2,n+1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,2} & \cdots & x_{n-1,n} & x_{n-1,n+1} \\ \bar{b} & \cdots & x_{n,n} & x_{n, n+1} \\ \end{array} \right) \end{equation*} By the induction hypothesis $A'$ and $B'$ have between them at least $n+1$ distinct rows. These rows will all remain distinct in $A$ and $B$ when we include the first coordinate position.
The $(n+1)$-st rows of $A$ and $B$ are identical in their last $n$ entries: \begin{equation*} {\bf v}_{n+1}= \left( \begin{array}{cccc} x_{n+1, 2} & \cdots & x_{n+1,n} & x_{n+1, n+1} \\ \end{array} \right) \end{equation*} If this row ${\bf v}_{n+1}$ with $n$ entries is different from all rows in $A'$ and $B'$, then we can include the last row of $A$ with the rows of $A$ and $B$ corresponding to the distinct rows of $A'$ and $B'$, to obtain at least $n+2$ distinct rows in $R_A \cup R_B$. On the other hand, if ${\bf v}_{n+1}$ already occurs in the set $R_{A'}\cup R_{B'}$, then remove it from the list for $R_{A'}\cup R_{B'}$, obtaining at least $n$ row positions in $A$ and $B$ corresponding to distinct rows in $R_{A'} \cup R_{B'}$ not equal to ${\bf v}_{n+1}$. Combine these row positions in $A,B$ with the last rows of both $A$ and $B$ (which differ in their $(n+1, 1)$ entry) to obtain $n+2$ distinct rows in $R_A \cup R_B,$ completing the induction step. \end{proof} \begin{rem}\label{rem:73} The lower bound $n+1$ of Lemma \ref{matlem} is tight, on taking all $a_{i,j} = b_{i,j}= \bar{a}$ off the skew-diagonal. \end{rem} \subsection{Iterated interleaving and complete factorizations} \label{subsec:73} We next consider {\em iterated interleaving factorizations}, in which we may continue to factorize a finitely factorizable path set ${\mathcal P}$ at finitely factorizable factors ${\mathcal Q}$ if they are decomposable. An iterated factorization is said to be {\em complete} if all factors are either indecomposable or are infinitely factorizable. We associate to any (finite) iterated factorization a rooted tree with root node ${\mathcal P}$, and with leaf nodes corresponding to the factors in the iterated factorization, and with internal nodes labeled by factors in some intermediate factorization in the iteration process. The {\em depth} of a node in the tree is the number of edges traversed in the unique path to the root node. The root node has depth $0$.
The {\em branching factor} at a node is the number of edges from it to nodes at the next higher depth. The iterated interleaving process grows the tree, starting with a single root node ${\mathcal P}$. Each iteration step replaces one factor ${\mathcal Q}$ in the current factorization by an $n$-fold interleaving factorization of it (for some $n \ge 2$). This step corresponds to adding to this current factor node ${\mathcal Q}$ (which is a leaf node of the current tree) $n$ new branches with the $n$-fold interleaving factors ${\mathcal Q}_j = \psi_{j,n}({\mathcal Q})$ as new leaves of ${\mathcal Q}$, so that ${\mathcal Q}$ becomes an internal node of the new tree. We treat any infinitely factorizable factors ${\mathcal Q}$ encountered in the process as ``frozen'' and do not factor them further. Figure \ref{fig71} exhibits such a tree. \begin{figure}\label{fig71} \end{figure} A factorization is said to be {\em complete} if each individual factor is either infinitely factorizable or cannot be further factorized. In the case of general closed sets $X \subset {\mathcal A}^{{\mathbb N}}$, the paper \cite[Theorem 7.1]{ALS21} constructed examples where the recursive factorization procedure above can go on forever (with no infinitely factorizable factors), leading to an infinite tree of factors of $X$. Furthermore, these examples have no complete factorizations. The situation for path sets is different: any sequence of factorizations always terminates in a complete factorization, see Theorem \ref{thm:complete_fact0} below. The following result will be used to establish termination of the iterated interleaving process for path sets. \begin{thm} \label{thm:75} Let ${\mathcal P}$ be a finitely factorizable path set having $m$ vertices in its minimal right-resolving presentation.
Then for $n \ge 2$ any interleaving factor ${\mathcal P}_j= \psi_{j,n}({\mathcal P})$ in an $n$-fold interleaving factorization of ${\mathcal P}$ has at most $m-1$ vertices in its minimal right-resolving presentation. \end{thm} \begin{proof} Theorem \ref{thm:56} shows that if ${\mathcal P}$ has an $n$-fold factorization ${\mathcal P}= ({\circledast})_{j=0}^{n-1} {\mathcal P}_j$, then each ${\mathcal P}_j$ has at most $m$ vertices in its minimal right-resolving presentation. Theorem \ref{thm:68} shows that if in addition some factor ${\mathcal P}_j$ has exactly $m$ vertices in its minimal right-resolving presentation, then ${\mathcal P}$ and all ${\mathcal P}_j$ are leveled, hence infinitely factorizable, contradicting the hypothesis that ${\mathcal P}$ is finitely factorizable. It follows that all factors ${\mathcal P}_j$ must have at most $m-1$ vertices in their minimal right-resolving presentation. \end{proof} Theorem \ref{thm:75} implies the existence of complete factorizations of finitely factorizable path sets. \begin{thm} \label{thm:complete_fact0} {\rm (Complete Factorizations of Path Sets)} (1) Every path set ${\mathcal P}$ has at least one iterated interleaving factorization that is complete. (2) Every path set ${\mathcal P}$ has finitely many iterated interleaving factorizations (with ``frozen'' infinitely factorizable factors) that are complete. (3) If ${\mathcal P}$ has $m$ vertices in its minimal right-resolving presentation, then each such complete factorization has at most $(m-1)!$ factors. In addition, the associated tree has depth at most $m-1$, and has branching factor at internal nodes of depth $j$ of at most $m-j-1$. \end{thm} \begin{proof} The path set ${\mathcal P}$ has $m$ vertices in its minimal right-resolving presentation. We take any nontrivial $n$-fold interleaving factorization of ${\mathcal P}$, and we know at least one factor in it will be finitely factorizable.
If any finitely factorizable ${\mathcal Q}= {\mathcal P}_i$ appearing in it is decomposable, we may further factorize ${\mathcal P}_i$ (as an iterated interleaving factorization), each new factorization having at least one finitely factorizable factor. Theorem \ref{thm:non_leveled_fact} says that any $n$-fold factorization of ${\mathcal Q}$ necessarily has $n < m({\mathcal Q})$, where $m ({\mathcal Q})$ is the number of vertices in a minimal right-resolving presentation of ${\mathcal Q}$. Let ${\mathcal Q}=({\circledast})_{j=0}^{n-1} {\mathcal Q}_{j,n}$. Theorem \ref{thm:75} then says that the number of vertices $m({\mathcal Q}_{j,n})$ in the minimal right-resolving presentation of ${\mathcal Q}_{j,n}$ satisfies $m({\mathcal Q}_{j,n}) < m({\mathcal Q})$. Call the {\em level} of a node its distance from the root node, where the root node is assigned level $0$. Then any level $k$ node in the tree that is finitely factorizable corresponds to a factor ${\mathcal Q}$ that has at most $m-k$ vertices in its minimal right-resolving presentation. It follows from this fact that the maximal possible level of any node in the tree is $m-1$, bounding the maximum nesting level of parentheses in the iterated interleaving factorization. In addition, the maximal branching factor possible at level $k$ is $m-k-1$, applying Theorem \ref{thm:non_leveled_fact}. We conclude that all possible such trees are finite, that they can have one node at level $0$, at most $m-1$ nodes at level $1$, at most $(m-1)(m-2)$ nodes at level $2$, up to at most $(m-1)!$ nodes at level $m-1$. Now the maximal number of leaves possible in such a tree, terminating this process on any level, is $(m-1)!$. The total number of nodes, counting internal nodes, is at most $(m-1)! \left( 1 + 1 +\frac{1}{2!} +\cdots + \frac{1}{(m-1)!}\right) \le e\,(m-1)!$. \end{proof} \begin{rem}\label{rem:76} We do not settle the question whether there is uniqueness of the indecomposable factors in a complete factorization.
The infinitely factorizable factors are non-unique. \end{rem} \section{Concluding Remarks} \label{sec:concluding} \subsection{Automatic sequences associated to path sets} \label{sec:83} Theorem \ref{thm:decimation_set_bound} has the consequence that every path set is $n$-automatic for all $n \ge 1$ in the sense given in \cite{AL14a}. One may ask whether various integer counting statistics of an arbitrary path set ${\mathcal P}$ are $n$-automatic sequences in the sense of Allouche and Shallit \cite{AS92}, \cite{AS03}. A statistic of particular interest is the function $f(k) = N_k^{I}({\mathcal P})$ that counts the number of initial words of length $k$ in the path set ${\mathcal P}$. Decimations and interleavings together define an infinite collection of closure operations $ X \mapsto X^{[n]}$ on ${\mathcal A}^{{\mathbb N}}$. These are idempotent operations that preserve the property of being closed sets in the symbol topology. One may define ${\mathcal C}({\mathcal A})^{[n]}$ to be the subclass of path sets that is invariant under the $n$-th closure operation. (It is not clear whether these classes of sets are closed under union or intersection, however.) These closure operations are effectively computable on path sets, using presentations. One can ask if there are automata-theoretic characterizations of the class of path sets that is invariant under a given closure operation. \appendix \section{Path sets in Automata Theory}\label{sec:A0} Path sets have an important characterization in automata theory. We recall basic definitions and terminology in automata theory, following Eilenberg \cite{Eilenberg74} and, for infinite words, Perrin and Pin \cite{PP04}. Recall that ${\mathcal A}^{\star}$ denotes the set of all finite words in the alphabet ${\mathcal A}$, including the empty word $\emptyset$.
We let ${\mathcal A}^{\omega}$ denote all infinite words $a_0a_1a_2 \cdots$ in the language with alphabet ${\mathcal A}$ (rather than ${\mathcal A}^{{\mathbb N}}$, which we used in the main text). \subsection{Automata and languages}\label{sec:A0.1} \begin{defn}\label{def:FA1} A {\em finite automaton} on an alphabet ${\mathcal A}$, denoted ${\bf A} := (Q, I, T)$ (in full, $(Q, {\mathcal A}, E, I, T)$), is a finite directed labeled graph in which $Q$ denotes the (finite) set of its states, with specified subsets $I$ of {\em initial states} and $T$ of {\em terminal states} (or {\em final states}). (The sets $I$ and $T$ may overlap.) Additional data specifying the automaton consists of labeled edge data $E \subset Q \times {\mathcal A} \times Q$, writing $e=(v_1, a, v_2)$ for the directed edge from state $v_1$ to state $v_2$ carrying a label $a \in {\mathcal A}$. \end{defn} The alphabet ${\mathcal A}$ and the labeled edge data $E$ are traditionally omitted from the notation for ${\bf A}$. \begin{defn}\label{def:DET1} A finite automaton is {\em deterministic} if $I$ contains one element and each state $v \in Q$ has for each symbol $a \in {\mathcal A}$ at most one exit edge labeled with this symbol. Otherwise it is {\em nondeterministic}. \end{defn} A deterministic automaton is characterized by the property that for each finite path the symbolic path label data plus the initial state on the path uniquely determine the path; i.e., the sequence of states visited in following the path. \begin{defn}\label{def:FL1} The {\em formal language $L({\bf A}) \subset {\mathcal A}^{\star}$} associated to a finite automaton ${\bf A}$ is the set of all finite symbol sequences obtained as labels following some directed path starting from an initial vertex $v \in I$ and ending at some terminal vertex $w \in T$. The empty sequence (denoted $1$) is included in $L({\bf A})$ if some initial state is a terminal state.
\end{defn} \begin{defn}\label{def:BA} A finite {\em B\"{u}chi automaton} has the same automaton ${\bf A}$, but the (B\"{u}chi) language $ L^{\omega}({\bf A})$ {\em recognized} by such an automaton is the set of all infinite words $a_0 a_1 a_2\ldots \in {\mathcal A}^{\omega}$ which are labels of an infinite directed path starting at an initial state and passing through terminal states infinitely many times. \end{defn} The theory of B\"{u}chi \cite{Buchi62} as originally developed allowed automata having a countably infinite number of states. \begin{defn}\label{def:RECOG} For fixed ${\mathcal A}$, the languages recognized by some finite B\"{u}chi automaton, allowing nondeterminism and arbitrary sets of initial and terminal states, are called {\em recognizable languages} on the alphabet ${\mathcal A}$ (\cite[page 25]{PP04}). \end{defn} The class of recognizable languages is characterized as coinciding with the class of $\omega$-rational languages (\cite[Chapter I, Theorem 5.4]{PP04}). \subsection{Automata-theoretic characterization of path sets }\label{sec:A0.2} To a presentation $({\mathcal G}, v)$ of a path set ${\mathcal P}$ on a finite alphabet ${\mathcal A}$ we canonically associate the finite automaton ${\bf A}_{{\mathcal G}}=(Q, I, T)$ where $Q= V({\mathcal G})$, the initial state set is $I= \{ v\}$, the terminal state set $T = V({\mathcal G})$ consists of every state of ${\mathcal G}$, and the edge label data is specified by the edge data $\mathcal{E}$ of ${\mathcal G}$. A path set ${\mathcal P} = X({\mathcal G}, v)$ then coincides with the B\"{u}chi language $ L^{\omega}({\bf A})$ recognized by the B\"{u}chi automaton ${\bf A}= {\bf A}_{{\mathcal G}}$ associated to $({\mathcal G}, v)$. We have the following characterization. \begin{thm}\label{thm:PSA} { \rm (Automata Characterization of Path Sets)} The following properties are equivalent, for a finite alphabet ${\mathcal A}$. \begin{enumerate} \item[(1)] ${\mathcal P}$ is a path set with alphabet ${\mathcal A}$.
\item[(2)] ${\mathcal P}$ is a recognizable language that is a closed set in ${\mathcal A}^{{\mathbb N}}$. \item[(3)] ${\mathcal P}$ is recognized by a finite B\"{u}chi automaton in which every state is terminal. \item[(4)] ${\mathcal P}$ is recognized by a finite deterministic B\"{u}chi automaton in which every state is terminal. \end{enumerate} \end{thm} \begin{proof} The path set definition $(1)$ is equivalent to $(3)$. The equivalence of $(2)$, $(3)$ and $(4)$ is shown in Proposition 3.9 in Chapter III of Perrin and Pin \cite{PP04}. The reduction from (3) to (4) uses the fact that every state of the automaton is a terminal state. \end{proof} It is known that the class of recognizable languages is strictly larger than the class of languages recognized by deterministic B\"{u}chi automata. The set of recognizable languages is closed under complement, while the languages recognized by deterministic B\"{u}chi automata (allowing arbitrary subsets of $Q$ as terminal states) are not closed under complement. However the set of deterministic B\"{u}chi languages has a nice characterization, given in \cite[Chapter III, Corollary 6.3]{PP04}. The class ${\mathcal C}({\mathcal A})$ of path sets forms a strict subset of the languages recognized by some deterministic B\"{u}chi automaton; i.e., there are non-closed subsets of ${\mathcal A}^{{\mathbb N}}$ recognized by some deterministic B\"{u}chi automaton, as shown by the following example. \begin{exmp}\label{exam:non-closed} Consider the directed labeled graph with two states pictured in Figure \ref{fig0}, on the alphabet ${\mathcal A}= \{0,1\}.$ \begin{figure}\label{fig0} \end{figure} Let $I =\{v_0\}$ be the initial state set, let ${\bf A}_1$ denote the automaton in which every state is terminal ($T =\{ v_0, v_1\}$), and let ${\bf A}_2$ denote the automaton with $T = \{v_1\}$. Then $$ L^{\omega}({\bf A}_1) = \{ 1^k 0 1^{\infty}: \, k \ge 0\}\, \cup\, \{ 1^{\infty}\} $$ while $ L^{\omega}({\bf A}_2) = \{ 1^k 0 1^{\infty}: \, k \ge 0\}.
$ The set $L^{\omega}({\bf A}_1)$ is a closed set in ${\mathcal A}^{\omega}$, and is a path set. The set $L^{\omega}({\bf A}_2)$ is not closed in ${\mathcal A}^{\omega}$ because it omits the limit point $1^{\infty}$. \end{exmp} \subsection{Minimal deterministic automata}\label{sec:A0.3} A finite automaton ${\bf A}$ is {\em deterministic} if the initial state plus the symbol sequence of some legal path uniquely determine the sequence of states that the path passes through. This concept is equivalent to {\em right-resolving} in symbolic-dynamics terminology. \begin{defn}\label{def:presentation} (1) An automaton $(Q, I, T)$ is {\em accessible} if every state can be reached by a directed path from some initial state. (2) An automaton is {\em co-accessible} if every state has a directed path to a terminal state. (3) An automaton that is accessible and co-accessible is called {\em trim}. \end{defn} For a finite automaton ${\bf A}$ with one initial state $v$, the property of {\em reachability} is equivalent to it being {\em accessible.} For a finite automaton ${\bf A}$ in which every state is terminal, the property of being {\em pruned} is equivalent to being {\em co-accessible.} For a finite automaton the property of being right-resolving is equivalent to being {\em deterministic}. A deterministic automaton is {\em complete} if each state has, for each letter of the label alphabet ${\mathcal A}$, an exit edge carrying that letter as its label. \begin{thm}\label{thm:deterministic} Every finite (possibly non-deterministic) automaton ${\bf A}= (Q, I, T)$ has an equivalent deterministic automaton ${\bf A}^{\ast}= (Q^{\ast}, I^{\ast}, T^{\ast})$ that has $L({\bf A}) = L({\bf A}^{\ast}).$ \end{thm} \begin{proof} This is proved using the subset construction \cite[Chapter III, Sect. 2]{Eilenberg74}. \end{proof} \begin{thm} (1) Every finite automaton ${\bf A}$ has a minimal deterministic presentation ${\bf A}_{min}$ giving the same language $L({\bf A})$. This presentation is unique up to isomorphism of automata.
(2) The minimal automaton is deterministic and accessible. Distinct states of the minimal automaton have distinct behaviors of accepting paths. The minimal automaton is trim unless there is a state which has no accepting exit paths (a ``null state''). (3) The minimal automaton is not complete unless $L({\bf A}) = {\mathcal A}^{{\mathbb N}}$ is the full shift. \end{thm} \begin{proof} See Eilenberg \cite[Chapter III, Sect. 5]{Eilenberg74}. Theorem 5.2 gives (1) and (2), and (3) is mentioned in its proof. The proof is by construction. \end{proof} Given a finite automaton, there is an effectively computable procedure to obtain the minimal deterministic automaton; see the discussion in Eilenberg \cite[Chapter III]{Eilenberg74}. Some authors study {\em complete minimal automata}, which are obtained by adding to the automaton an extra ``null state'' which is not co-accessible, e.g.\ Sakarovitch \cite[Chapter I.4.2]{Sakarovich:90}. \begin{rem} Every language recognized by a finite B\"{u}chi automaton is also recognized by a finite trim B\"{u}chi automaton (see \cite[Chapter I, Proposition 5.2]{PP04}). If the automaton is deterministic then the corresponding trim automaton is deterministic. But for general automata the set of terminal states may be altered under the transformation. \end{rem} \subsection{Path set languages and closures of rational languages}\label{sec:A0.4} \begin{defn}\label{def:PSL} {\em The {\em path set language} $L^{I}({\mathcal P}) \subset {\mathcal A}^{\star}$ of a path set ${\mathcal P}$ is the set of finite initial prefixes of words in the path set. } \end{defn} Eilenberg \cite[Chapter XIV, page 379]{Eilenberg74} defines a closure operation on a language $L \subset {\mathcal A}^{\ast}$: the closure $\bar{L} \subset {\mathcal A}^{\omega}$ is the set of all infinite words $a_0a_1a_2 \ldots$ that have infinitely many prefixes $a_0a_1 \ldots a_n$ belonging to $L$.
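Because every state of the recognizing automaton is terminal, membership of a finite word in a path set language $L^{I}({\mathcal P})$ reduces to simulating the automaton on the word. The following Python sketch illustrates this for the two-state automaton of Example~\ref{exam:non-closed}; the transition table is our reading of that automaton (a $1$-labeled self-loop at $v_0$, a $0$-labeled edge $v_0 \to v_1$, and a $1$-labeled self-loop at $v_1$), and the function name is ours.

```python
# Membership test for the path set language L^I(P): a finite word is an
# initial prefix iff it labels a path starting at the initial state.
# Transition table: our assumed reading of the two-state automaton of
# Example "non-closed" (every state terminal).
TRANSITIONS = {
    ("v0", "1"): "v0",   # 1-labeled self-loop at v0
    ("v0", "0"): "v1",   # 0-labeled edge v0 -> v1
    ("v1", "1"): "v1",   # 1-labeled self-loop at v1
}

def in_path_set_language(word, start="v0"):
    """Return True iff `word` labels a legal path from `start`."""
    state = start
    for symbol in word:
        nxt = TRANSITIONS.get((state, symbol))
        if nxt is None:       # no exit edge with this label: reject
            return False
        state = nxt
    return True               # every state is terminal, so any run accepts

assert in_path_set_language("110")      # prefix of 1^2 0 1^infinity
assert in_path_set_language("1011")
assert not in_path_set_language("010")  # v1 has no 0-labeled exit edge
assert not in_path_set_language("00")
```

Since all states are terminal, no acceptance condition beyond the existence of a run needs to be checked, which is exactly why path set languages are prefix-closed.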
\begin{defn}\label{def:prefix-closed} A language $L \subset {\mathcal A}^{\star}$ is called {\em prefix-closed} if all prefixes of each word in $L$ belong to $L$. \end{defn} A path set language ${\mathcal L}= L^{I}({\mathcal P})$ is prefix-closed. One also has $\overline{L^{I}({\mathcal P})} = {\mathcal P}$. \begin{thm}\label{thm:PSL} {\rm (Path Set Language Characterization)} The following properties are equivalent. \begin{enumerate} \item[(1)] ${\mathcal L}$ is a path set language $L^{I}({\mathcal P})$ for some path set ${\mathcal P}$ over the finite alphabet ${\mathcal A}$. \item[(2)] ${\mathcal L}$ is a rational language which is prefix-closed in the sense that all the prefixes of each word $w \in {\mathcal L}$ also belong to ${\mathcal L}$. \item[(3)] ${\mathcal L}$ is recognizable by a trim finite deterministic automaton in which all states are terminal. \end{enumerate} For any such language the associated path set ${\mathcal P} = \overline{{\mathcal L}}$ consists of all infinite words which have infinitely many prefixes belonging to ${\mathcal L}$. Any other rational language $L_1$ which has ${\mathcal P}$ as its closure satisfies $L_1 \subset {\mathcal L}$. \end{thm} \begin{proof} The equivalence of (2) and (3) appears as Exercise 5.1 in \cite[page 48]{Eilenberg74}. \end{proof} One can define a closure operation on rational languages $L$, as follows. One first passes to the infinite-word language $\bar{L}$, which is a path set. Then one passes to the initial prefix language $L^{I}( \bar{L})$ of this path set, which is a prefix-closed language. This operation is idempotent because the closure of $L^{I}( \bar{L})$ is again $\bar{L}$. If $L$ is recognized by a co-accessible automaton then $L \subseteq L^{I}( \bar{L})$.\\ \section{Self-interleaving criterion} \label{sec:B0} We give a sufficient condition on a presentation of a path set which guarantees that all of its interleaving factorizations are self-interleavings (possibly of path sets other than the original one).
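To fix intuition for the decimation maps $\psi_{j,n}$ and the interleaving operation ${\circledast}$ used in the criterion below, here is a minimal Python sketch on finite prefixes; the function names are ours, and interleaving is taken to merge the $n$ coordinate words in round-robin order.

```python
# Decimation and interleaving on finite prefixes of infinite words.
# psi(j, n, x) keeps the positions of x congruent to j mod n, mirroring
# the notation psi_{j,n}; interleave merges n words round-robin.
def psi(j, n, x):
    return x[j::n]

def interleave(*words):
    n = len(words)
    m = min(len(w) for w in words)      # truncate to a common length
    return "".join(words[k][i] for i in range(m) for k in range(n))

x = "abcabcabc"
assert psi(0, 3, x) == "aaa"
assert psi(1, 3, x) == "bbb"
# Decimating and re-interleaving recovers the original word.
assert interleave(psi(0, 3, x), psi(1, 3, x), psi(2, 3, x)) == x
```

A self-interleaving is then the special case where all $n$ coordinate sets coincide, ${\mathcal P} = {\mathcal Q}^{({\circledast} n)}$.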
\begin{thm}\label{thm:selfloop} {\rm (Sufficient condition for only self-interleaving factorizations)} Let ${\mathcal P}= X({\mathcal G},v)$ be a path set having a right-resolving presentation in which the graph ${\mathcal G}$ has a self-loop at the initial vertex $v$. Then for all $n \ge 1$ any $n$-fold interleaving factorization ${\mathcal P}= {\mathcal P}_0 {\circledast} {\mathcal P}_1 {\circledast} \ldots {\circledast} {\mathcal P}_{n-1}$ must be a self-interleaving ${\mathcal P} = {\mathcal Q}^{({\circledast} n)}$ of a single factor ${\mathcal Q}$, where ${\mathcal Q}$ depends on $n$. \end{thm} \begin{proof} Suppose that the presentation $({\mathcal G}, v)$ has a self-loop at vertex $v$ with label $a_0 \in \mathcal{A}$. Then $x \in {\mathcal P}$ implies that $(a_0)^kx \in {\mathcal P}$ for each $k \ge 1$. In consequence: \begin{enumerate} \item We have ${\mathcal P} \subseteq S({\mathcal P})$, where $S$ denotes the left-shift operator, because $S(a_0x) = x$. \item The $n$-decimation sets ${\mathcal P}_j= \psi_{j,n}({\mathcal P})$ satisfy the inclusions $$ \psi_{j,n}({\mathcal P}) \subseteq \psi_{j+1, n}({\mathcal P}) $$ for all $0 \le j \le n-2$. Indeed, if $y \in \psi_{j, n}({\mathcal P})$ then there exists $x \in {\mathcal P}$ with $\psi_{j,n}(x) =y$. But now $a_0 x \in {\mathcal P}$ and $y = \psi_{j+1, n}(a_0 x)$, so $y \in \psi_{j+1,n}({\mathcal P})$. \end{enumerate} By Theorem~\ref{thm:initialblocks}, a path set is characterized by its set ${\mathcal B}^{I}({\mathcal P})$ of initial blocks. If we can show that $\mathcal{P}_j$ has the same set of initial blocks as $\mathcal{P}_0$ for all $1\leq j \leq n-1$, then we will have proven the result, taking ${\mathcal Q}= {\mathcal P}_0$. By hypothesis there is a right-resolving presentation $({\mathcal G},v)$ of $\mathcal{P}$ having a self-loop labeled $a_0$ at $v$. Let $x=x_0x_1x_2\ldots x_m$ be an initial block in $\mathcal{P}_0$.
Then there are words $w_1,\ldots, w_m$, each of length $n-1$, such that $x_0w_1x_1w_2x_2\ldots w_mx_m$ is an initial block in $\mathcal{P}$. Then for all $0\leq i\leq n-1$, $(a_0)^ix_0w_1x_1w_2x_2\ldots w_mx_m$ is an initial block in $\mathcal{P}$. By the definition of interleaving, $x$ is an initial block of $\mathcal{P}_i$ for each $0\leq i\leq n-1$. Thus, any initial block of $\mathcal{P}_0$ is an initial block of all other $\mathcal{P}_j$ with $1 \le j\leq n-1$. Now, for some $0\leq i\leq n-1$, let $y=y_0y_1\ldots y_m$ be an initial block of $\mathcal{P}_i$. Since ${\mathcal G}$ has a self-loop labeled $a_0$ at the initial vertex, a path traversing the loop infinitely many times presents the point $(a_0)^{\infty}\in\mathcal{P}$. Thus, each $\mathcal{P}_j$, $0\leq j\leq n-1$, contains the point $(a_0)^{\infty}$. Using this point in each coordinate $\mathcal{P}_j$ except $\mathcal{P}_i$, we can form a point in $\mathcal{P}$ with initial word $(a_0)^{i}y_0(a_0)^{n-1}y_1(a_0)^{n-1}y_2\ldots (a_0)^{n-1}y_m$. Since ${\mathcal G}$ is right-resolving, the path presenting this point begins by traversing the $a_0$-labeled self-loop $i$ times, thus ending at the initial vertex. Therefore, there is a path beginning at the initial vertex and presenting $y_0(a_0)^{n-1}y_1(a_0)^{n-1}y_2\ldots (a_0)^{n-1}y_m$, so $y$ is an initial block of $\mathcal{P}_0$. Therefore ${\mathcal P}_0 ={\mathcal P}_i$ for all $i$, since they have the same initial block sets. \end{proof} \begin{rem} Certain examples of path sets (denoted $X(1, M)$) studied in Abram et al.\ \cite[Section 3.4 and Proposition 5.1]{ABL17} exhibited interleaving factorizations that were self-interleavings. The presentations of such $X(1,M)$ have self-loops at the initial vertex, and Theorem \ref{thm:selfloop} applies to recover this result. These path sets arise in the study of intersections of $p$-adic path set fractals, as defined and studied in \cite{AL14b}, and arose in consideration of a problem of Erd\H{o}s, cf.\ \cite{Lagarias09}.
\end{rem} \end{document}
\begin{document} \begin{titlepage} \title{Faster No-Regret Learning Dynamics for Extensive-Form Correlated and Coarse Correlated Equilibria} \end{titlepage} \tableofcontents \section{Introduction} Game-theoretic solution concepts describe how rational agents should act in games. Over the last two decades there has been tremendous progress in imperfect-information game solving and algorithms based on game-theoretic solution concepts have become the state of the art. Prominent milestones include an optimal strategy for Rhode Island hold'em poker~\citep{Gilpin07:Lossless}, a near-optimal strategy for limit Texas hold'em~\citep{Bowling15:Heads}, and a superhuman strategy for no-limit Texas hold'em~\citep{Brown17:Superhuman,Moravvcik17:DeepStack}. In particular, these advances rely on algorithms that approximate \emph{Nash equilibria} (\emph{NE}) of two-player zero-sum \emph{extensive-form games} (\emph{EFGs}). EFGs are a broad class of games that capture sequential and simultaneous interaction, and imperfect information. For two-player zero-sum EFGs, it is by now well-understood how to compute a Nash equilibrium at scale: in theory this can be achieved using accelerated uncoupled no-regret learning dynamics, for example by having each player use an \emph{optimistic} regret minimizer and leveraging suitable \emph{distance-generating functions}~\citep{Hoda10:Smoothing,Kroer20:Faster,Farina21:Better} for the EFG decision space. Such a setup converges to an equilibrium at a rate of $O(T^{-1})$. In practice, modern variants of the \emph{counterfactual regret minimization (CFR)} framework~\citep{Zinkevich07:Regret} typically lead to better practical performance, although the worst-case convergence rate known in theory remains inferior. CFR is also an uncoupled no-regret learning dynamic. However, many real-world applications are not two-player zero-sum games, but instead have \emph{general-sum} utilities and often more than two players. 
In such settings, Nash equilibrium suffers from several drawbacks when used as a prescriptive tool. First, there can be multiple equilibria, and an equilibrium strategy may perform very poorly when played against the ``wrong'' equilibrium strategies of the other player(s). Thus, the players would effectively need to communicate in order to find an equilibrium, or hope to converge to it via some sort of decentralized learning dynamics. Second, finding a Nash equilibrium is computationally hard both in theory~\citep{Daskalakis06:Complexity,Etessami07:Complexity} and in practice~\citep{Berg17:Exclusion}. This effectively squashes any hope of developing efficient learning dynamics that converge to Nash equilibria in general games. A competing notion of rationality proposed by \citet{Aumann74:Subjectivity} is that of \emph{correlated equilibrium} (\emph{CE}). Unlike $\nash$, a CE can be computed in polynomial time and, perhaps even more importantly, it can be attained through \emph{uncoupled} learning dynamics where players only need to reason about their own observed utilities. This overcomes the often unreasonable presumption that players have knowledge about the other players' utilities. At the same time, uncoupled learning algorithms have proven to be a remarkably \emph{scalable} approach for computing equilibria in large-scale games, as described above. In normal-form games (NFGs), a \emph{correlated strategy} is defined as a probability distribution over joint action profiles, customarily modeled via a trusted external mediator that draws an action profile from this distribution and then privately recommends to each player their component. A correlated strategy is a CE if, for each player, the mediator's recommendation is the best action in expectation, assuming that all the other players follow their recommended actions~\cite{Aumann74:Subjectivity}.
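As a concrete illustration of these incentive constraints, the following Python sketch checks whether a given correlated strategy $\mu$ is a CE of a small NFG. The game (Chicken) and its correlated strategy are a standard textbook example, not taken from this paper.

```python
import itertools

# CE check: conditioned on being recommended action `rec`, player i must
# not gain in expectation by deviating to any action `dev`, assuming the
# other player follows their recommendation.  Chicken payoffs and the
# uniform distribution over {(D,C),(C,D),(C,C)} are a textbook example.
A = ["D", "C"]
u = {  # u[(a1, a2)] = (payoff to player 1, payoff to player 2)
    ("D", "D"): (0, 0), ("D", "C"): (7, 2),
    ("C", "D"): (2, 7), ("C", "C"): (6, 6),
}
mu = {("D", "C"): 1/3, ("C", "D"): 1/3, ("C", "C"): 1/3, ("D", "D"): 0.0}

def is_correlated_equilibrium(mu, tol=1e-9):
    for i in range(2):
        for rec, dev in itertools.product(A, repeat=2):
            gain = 0.0
            for prof, p in mu.items():
                if prof[i] == rec:
                    swapped = list(prof)
                    swapped[i] = dev
                    gain += p * (u[tuple(swapped)][i] - u[prof][i])
            if gain > tol:        # profitable deviation rec -> dev
                return False
    return True

assert is_correlated_equilibrium(mu)
# A point mass on (D, D) is not a CE: deviating to C gains 2.
point_mass = {prof: 0.0 for prof in u}
point_mass[("D", "D")] = 1.0
assert not is_correlated_equilibrium(point_mass)
```

The extensive-form notions discussed next generalize exactly these per-recommendation constraints to sequential play.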
In NFGs it has long been known that uncoupled no-regret learning dynamics can converge to CE and \emph{coarse correlated equilibria} (\emph{CCE}) at a rate of $O(T^{-1/2})$~\citep{Foster97:Calibrated,Hart00:Simple}. More recently, it has been established that accelerated dynamics can converge at a rate of $\widetilde{O}(T^{-1})$~\citep{Daskalakis21:near,Anagnostides21:Near} in NFGs, where the notation $\widetilde{O}(\cdot)$ suppresses $\polylog(T)$ factors. However, in the context of EFGs the idea of correlation is much more intricate, and there are several notions of correlated equilibria based on when the mediator gives recommendations and how the mediator reacts to players who disregard the advice. Three natural extensions of CE to extensive-form games are the \emph{extensive-form correlated equilibrium (EFCE)} by~\citet{Stengel08:Extensive}, the \emph{extensive-form coarse correlated equilibrium (EFCCE)} by~\citet{Farina20:Coarse}, and the \emph{normal-form coarse correlated equilibrium (NFCCE)} by~\citet{Celli2018:Computing}. These sets of equilibria satisfy, for any extensive-form game, EFCE $\subseteq$ EFCCE $\subseteq$ NFCCE. In an EFCE, the strongest of these notions of correlation, the mediator forms recommendations for each of the possible decision points an agent may encounter in the game, and recommended actions are gradually revealed to players as they reach new information sets; thus, the mediator must take into account the \emph{evolution} of the players' beliefs throughout the game. Because of the sequential nature, the presence of private information in the game, and the gradual revelation of recommendations, the constraints associated with $\efce$ are significantly more complex than for normal-form games. For these reasons, the question of whether uncoupled learning dynamics can converge to an $\efce$ was only recently resolved by \citet{Celli20:No-Regret}.
Moreover, in a follow-up work the authors also established an explicit rate of convergence of $O(T^{-1/2})$~\citep{Farina21:Simple}. Our paper is primarily concerned with the following fundamental question: \begin{quote} \centering \emph{Can we develop faster uncoupled no-regret learning dynamics for EFCE?} \end{quote} We affirmatively answer this question by developing dynamics converging at a rate of $O(T^{-3/4})$ to an EFCE. Furthermore, we also study learning dynamics for the simpler solution concept of EFCCE. More precisely, although accelerated learning dynamics for EFCE can be automatically employed for EFCCE (since the set of EFCEs forms a subset of the set of EFCCEs), all the known learning dynamics for EFCE have large per-iteration complexity. Indeed, they require as an intermediate step the expensive computation of the stationary distributions of multiple Markov chains. Thus, the following natural question arises: \emph{Are there learning dynamics for EFCCE that avoid the expensive computation of stationary distributions?} We answer this question in the positive. Our results reveal that EFCCE is more akin to NFCCE than to EFCE from a learning perspective, although EFCE prescribes a much more compelling notion of correlation than NFCCE. \subsection{Contributions} Our first primary contribution is to develop faster no-regret learning dynamics for EFCE: \begin{theorem} \label{theorem:main} On any general-sum multiplayer extensive-form game, there exist uncoupled no-regret learning dynamics which lead to a correlated distribution of play that is an $O(T^{-3/4})$-approximate $\efce$. Here the $O(\cdot)$ notation suppresses game-specific parameters polynomial in the size of the game. \end{theorem} This substantially improves over the prior best known rate of $O(T^{-1/2})$ recently established by~\citet{Farina21:Simple}. 
To achieve this result we employ the framework of \emph{predictive} (also known as \textit{optimistic}) regret minimization~\citep{Chiang12:Online,Rakhlin13:Optimization}. One of our conceptual contributions is to connect this line of work with the framework of \emph{Phi-regret} minimization~\citep{Greenwald03:General,Gordon08:No} by providing a general template for stable-predictive Phi-regret minimization (\Cref{theorem:accelerating-Phi}). The importance of Phi-regret is that it leads to substantially more compelling notions of hindsight rationality, well beyond the usual \emph{external} regret \citep{Gordon08:No}, including the powerful notion of \emph{swap regret}~\citep{Blum07:From}. Moreover, one of the primary insights behind the result of \citet{Farina21:Simple} is to cast convergence to an EFCE as a Phi-regret minimization problem. Given these prior connections, we believe that our stable-predictive template is of independent interest, and could lead to further applications in the future. From a technical standpoint, in order to apply our generic template for accelerated Phi-regret minimization (\Cref{theorem:accelerating-Phi}), we establish two separate ingredients. First, we develop a \emph{predictive} external regret minimizer for the set of transformations associated with $\efce$. This deviates from the construction of \citet{Farina21:Simple} in that we have to additionally guarantee and preserve the predictive bounds throughout the construction. Further, our algorithm combines optimistic regret minimization---under suitable DGFs---for the sequence-form polytope, with \emph{regret decomposition} in the style of CFR. While these have been the two main paradigms employed in EFGs, they were used separately in the past. We refer to \Cref{fig:algo} for a detailed description of our algorithm. The second central component consists of sharply characterizing the stability of fixed points of \emph{trigger deviation functions}.
This turns out to be particularly challenging, and direct extensions of prior techniques only give a bound that is \emph{exponential} in the size of the game. In this context, one of our key technical contributions is to provide a refined perturbation analysis for a Markov chain induced by a rank-one stochastic matrix (\Cref{lemma:convex_characterization}). To do this, we deviate from prior techniques (\emph{e.g.}, \cite{Candogan13:Dynamics,Chen20:Hedging}) that used the Markov chain tree theorem, and instead use an alternative linear-algebraic characterization for the eigenvectors of the underlying Laplacian system. This leads to a rate of convergence that depends \emph{polynomially} on the description of the game, which is crucial for the practical applicability of the dynamics. Next, we shift our attention to learning dynamics for EFCCE. We first introduce the notion of \emph{coarse trigger deviation functions}, and we show that if each player employs a no-coarse-trigger-regret algorithm, the correlated distribution of play converges to an EFCCE (\Cref{theorem:EFCCE-convergence}). This allows for a unifying treatment of EFCE and EFCCE. Moreover, we show that, unlike all existing methods for computing fixed points of trigger deviation functions, the fixed points of \emph{coarse} trigger deviation functions admit a succinct closed-form characterization (\Cref{thm:closedForm}); in turn, this enables us to obtain a much more efficient algorithm for computing them (\Cref{algo:FP-EFCCE}). From a practical standpoint, this is crucial as it substantially reduces the per-iteration complexity of the dynamics, placing EFCCE closer to NFCCE in terms of the underlying complexity, even though EFCCE prescribes a stronger notion of correlation. Another implication of our closed-form characterization is an improved stability analysis for the fixed points, which is much less technical than the one we give for EFCE (\Cref{proposition:FP-EFCCE}).
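For context, the generic fixed-point step that EFCE-style dynamics must perform is computing a stationary distribution $\vec{\pi}$ of a row-stochastic matrix $\vec{P}$, i.e., a solution of $\vec{\pi}^\top \vec{P} = \vec{\pi}^\top$. The following Python sketch carries out this computation by power iteration on a small chain; it is purely illustrative of the step whose per-iteration cost the closed-form characterization for coarse trigger deviations avoids, and is not the algorithm of this paper.

```python
# Stationary distribution pi of a row-stochastic matrix P (pi P = pi),
# computed by power iteration; illustrative only.  Convergence holds for
# irreducible aperiodic chains such as the 2-state example below.
def stationary_distribution(P, iters=10_000):
    n = len(P)
    pi = [1.0 / n] * n                 # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary_distribution(P)
# Fixed-point check: pi P = pi, and pi is the analytic solution (5/6, 1/6).
assert all(abs(sum(pi[i] * P[i][j] for i in range(2)) - pi[j]) < 1e-9
           for j in range(2))
assert abs(pi[0] - 5 / 6) < 1e-6
```

For coarse trigger deviation functions this eigenvector-type solve is replaced by a direct closed form, which is what drives the reduced per-iteration complexity for EFCCE.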
Finally, we support our theoretical findings with experiments on several general-sum benchmarks. \subsection{Further Related Work} \label{sec:related} The line of work on accelerated no-regret learning was pioneered by \citet{Daskalakis15:Near}, showing that one can bypass the adversarial $\Omega(T^{-1/2})$ barrier for the incurred average regret if \emph{both} players in a zero-sum game employ an uncoupled variant of Nesterov's excessive gap technique~\citep{Nesterov05:Excessive}, leading to a near-optimal rate of $O(\log T/T)$. Subsequently, \citet{Rakhlin13:Online} showed that the optimal rate of $O(T^{-1})$ can be obtained with a remarkably simple variant of mirror descent which incorporates a \emph{prediction} term in the update step. While these results only hold for zero-sum games, \citet{Syrgkanis15:Fast} showed that an $O(T^{-3/4})$ rate can be obtained for multiplayer general-sum normal-form games. In a recent result, \citet{Chen20:Hedging} strengthened the regret bounds in \cite{Syrgkanis15:Fast} from external to swap regret using the celebrated construction of \citet{Blum07:From}, thereby establishing a rate of convergence of $O(T^{-3/4})$ to CE. Even more recent work~\citep{Daskalakis21:near,Anagnostides21:Near} has established a near-optimal rate of convergence of $\widetilde{O}(T^{-1})$ to correlated equilibria in normal-form games when all players leverage \emph{optimistic multiplicative weights update (OMWU)}, where $\widetilde{O}(\cdot)$ hides $\polylog(T)$ factors. Extending these results to EFCE presents a considerable challenge since their techniques crucially rely on the softmax-type structure of OMWU on the simplex, as well as the particular structure of the associated fixed points. Correlated equilibria in extensive-form games are much less understood than Nash equilibria.
It is known that a feasible $\efce$ can also be computed efficiently through a variant of the \emph{Ellipsoid algorithm} \citep{Papadimitriou08:Computing,Jiang15:Polynomial-Time}, while an alternative sampling-based approach was given by \citet{Dudik09:SamplingBased}. However, those approaches perform poorly in large-scale problems, and do not allow the players to arrive at an $\efce$ via distributed learning. \citet{Celli19:Learning} devised variants of the $\cfr$ algorithm that provably converge to an NFCCE, a solution concept much less appealing than $\efce$ in extensive-form games~\citep{Gordon08:No}. Finally, \citet{Morrill21:Efficient, Morrill21:Hindsight} characterize different hindsight rationality notions in EFGs, associating each solution concept with suitable $O(T^{-1/2})$ no-regret learning dynamics. \section{Preliminaries} \label{section:prel} In this section we introduce the necessary background related to extensive-form games (EFGs), correlated equilibria in EFGs, and regret minimization. A comprehensive treatment of EFGs can be found in \cite{Shoham09:Multiagent}, while for an introduction to the theory of learning in games the reader is referred to the excellent book of \citet{Cesa06:Prediction}. \paragraph{Conventions} In the sequel we use the $O(\cdot)$ notation to suppress (universal) constants. We typically use subscripts to indicate the player or some element in the game tree uniquely associated with a given player, such as a decision point; to lighten our notation, the associated player is not made explicit in the latter case. Superscripts are reserved almost exclusively for time indices. Finally, the $k$-th coordinate of a vector $\vec{x} \in \R^d$ will be denoted by $\vec{x}[k]$. \subsection{Extensive-Form Games} An extensive-form game is abstracted on a directed and rooted \emph{game tree} $\mathcal{T}$. The set of nodes of $\mathcal{T}$ is denoted by $\mathcal{H}$.
Non-terminal nodes are referred to as \emph{decision nodes} and are associated with a player who acts by selecting an action from a set of possible actions $\mathcal{A}_h$, where $h \in \mathcal{H}$ represents the decision node. By convention, the set of players $[n] \cup \{c\}$ includes a \emph{fictitious} agent $c$ who ``selects'' actions according to some fixed probability distributions dictated by the nature of the game (\emph{e.g.}, the roll of a die); this intends to model external stochastic phenomena occurring during the game. For a player $i \in [n] \cup \{c\}$, we let $\mathcal{H}_{i} \subseteq \mathcal{H}$ be the subset of decision nodes wherein player $i$ makes a decision. The set of \emph{leaves} $\mathcal{Z} \subseteq \mathcal{H}$, or equivalently the \emph{terminal nodes}, corresponds to the different outcomes of the game. Once the game transitions to a terminal node $z \in \mathcal{Z}$, payoffs are assigned to each player based on a set of (normalized) utility functions $\{ u_{i} : \mathcal{Z} \to [-1,1] \}_{i \in [n]}$. It will also be convenient to denote by $p_{c}(z)$ the product of probabilities of ``chance'' moves encountered on the path from the root to the terminal node $z \in \mathcal{Z}$. In this context, the set of nodes in the game tree can be expressed as the (disjoint) union $\mathcal{H} \coloneqq \bigcup_{i \in [n] \cup \{c\}} \mathcal{H}_{i} \cup \mathcal{Z}$. \paragraph{Imperfect Information} To model imperfect information, the set of decision nodes $\mathcal{H}_{i}$ of player $i$ is partitioned into a collection of sets $\mathcal{J}_{i}$, which are called \emph{information sets}. Each information set $j \in \mathcal{J}_{i}$ groups nodes which cannot be distinguished by player $i$. Thus, for any nodes $h, h' \in j$ we have $\mathcal{A}_h = \mathcal{A}_{h'}$. As usual, we assume that the game satisfies \emph{perfect recall}: players never forget information once acquired.
This implies, in particular, that for any nodes $h, h' \in j$ the sequence of $i$'s actions from the root to $h$ must coincide with the sequence from the root to node $h'$; otherwise, $i$ would be able to distinguish between nodes $h$ and $h'$ by virtue of perfect recall. We will also define a partial order $\prec$ on $\mathcal{J}_{i}$ so that $j \prec j'$, for $j, j' \in \mathcal{J}_{i}$, if there exist nodes $h \in j$ and $h' \in j'$ such that the path from the root to $h'$ passes through $h$. If $j \prec j'$, we will say that $j$ is an \emph{ancestor} of $j'$, or equivalently, $j'$ is a \emph{descendant} of $j$. \paragraph{Sequence-form Strategies} For a player $i \in [n]$, an information set $j \in \mathcal{J}_{i}$, and an action $a \in \mathcal{A}_j$, we will denote by $\sigma = (j, a)$ the \emph{sequence} of $i$'s actions encountered on the path from the root of the game up to (and including) action $a$. For notational convenience, we will use the special symbol $\varnothing$ to denote the \emph{empty sequence}. Then, $i$'s set of sequences is defined as $\Sigma_{i} \coloneqq \{(j, a) : j \in \mathcal{J}_{i}, a \in \mathcal{A}_j \} \cup \{ \varnothing \}$; we will also use the notation $\Sigma_i^* \coloneqq \Sigma_{i} \setminus \{ \varnothing \}$. For a given information set $j \in \mathcal{J}_{i}$ we will use $\sigma_{j} \in \Sigma_{i}$ to represent the \emph{parent sequence}; \emph{i.e.}, the last sequence encountered by player $i$ before reaching any node in the information set $j$, assuming that it exists. Otherwise, we let $\sigma_{j} = \varnothing$, and we say that $j$ is a \emph{root information set} of player $i$. A \emph{strategy} for a player specifies a probability distribution over the available actions at every information set encountered in the game tree.
For perfect-recall EFGs, strategies can be equivalently represented in \emph{sequence-form}: \begin{definition}[Sequence-form Polytope] \label{definition:sequence_form} The \emph{sequence-form strategy polytope} for player $i \in [n]$ is defined as the following (convex) polytope: \begin{equation*} \mathcal{Q}_i \coloneqq \bigg\{ \vec{q}_i \in \mathbb{R}^{|\Sigma_{i}|}_{\geq 0} : \vec{q}_i[\varnothing] = 1, \quad \vec{q}_i[\sigma_j] = \sum_{a \in \mathcal{A}_j} \vec{q}_i[(j, a)], \quad \forall j \in \mathcal{J}_{i} \bigg\}. \end{equation*} \end{definition} This definition ensures conservation of probability mass for sequence-form strategies at every decision point. The probability of playing action $a$ at information set $j \in \mathcal{J}_{i}$ can be obtained by dividing $\vec{q}_i[(j, a)]$ by $\vec{q}_i[\sigma_j]$. Analogously, one can define the sequence-form strategy polytope for the \emph{subtree} of the partially ordered set $(\mathcal{J}_{i}, \prec)$ \emph{rooted} at $j \in \mathcal{J}_{i}$, which will be denoted by $\mathcal{Q}_j$. Moreover, the set of \emph{deterministic} sequence-form strategies for player $i \in [n]$ is the set $\Pi_{i} = \mathcal{Q}_{i} \cap \{0, 1\}^{|\Sigma_{i}|}$, and similarly for $\Pi_j$. A well-known implication of Kuhn's theorem~\cite{Kuhn53:Extensive} is that $\mathcal{Q}_{i} = \co \Pi_{i}$, and $\mathcal{Q}_j = \co \Pi_j$, for any $i \in [n]$ and $j \in \mathcal{J}_{i}$. The \emph{joint} set of deterministic sequence-form strategies of the players will be represented by $\Pi \coloneqq \bigtimes_{i \in [n]} \Pi_{i}$. As such, an element $\vec{\pi} \in \Pi$ is an $n$-tuple $(\vec{\pi}_{1}, \dots, \vec{\pi}_{n})$ specifying a deterministic sequence-form strategy for every player $i \in [n]$.
Finally, we overload notation by representing the utility of player $i \in [n]$ under a profile $\vec{\pi} \in \Pi$ as \begin{equation*} u_{i}(\vec{\pi}) \coloneqq \sum_{z \in \mathcal{Z}} p_{c}(z) u_{i}(z) \mathbbm{1} \{ \vec{\pi}_{k}[\sigma_{k, z}] = 1, \forall k \in [n]\}, \end{equation*} where $\sigma_{i, z}$ denotes the last sequence of player $i$ before reaching the terminal node $z \in \mathcal{Z}$. For the convenience of the reader, in \Cref{tab:notation} we have summarized the main notation related to EFGs used throughout this paper. \begin{figure} \caption{Summary of EFG notation.} \label{tab:notation} \end{figure} \begin{figure} \caption{Example of a two-player EFG.} \label{fig:example} \end{figure} \paragraph{An Illustrative Example.} To further clarify some of the concepts we have introduced so far, we illustrate a simple two-player EFG in \Cref{fig:example}. Black nodes belong to player~$1$, white round nodes to player~$2$, square nodes are terminal nodes (aka leaves), and the crossed node is a chance node. Player $2$ has two information sets, $\mathcal{J}_2 \coloneqq \{\textsc{C}, \textsc{D} \}$, each containing two nodes. This captures the lack of knowledge regarding the action played by player $1$. In contrast, the outcome of the chance move is observed by both players. At the information set $\textsc{C}$, player $2$ has two possible actions, $\mathcal{A}_{\textsc{C}} \coloneqq \{\seq{5}, \seq{6}\}$. Thus, one possible sequence for player $2$ is the pair $\sigma = (\textsc{C}, \seq{5}) \in \Sigma_{2}$. \subsection{Online Learning and Optimistic Regret Minimization} Consider a convex and compact set $\mathcal{X} \subseteq \R^d$ representing the set of strategies of some agent.
In the online decision making framework, a \emph{regret minimizer} $\mathcal{R}$ can be thought of as a black-box device which interacts with the external environment via the following two basic subroutines: \begin{itemize}[nolistsep,itemsep=1mm,leftmargin=.8cm] \item $\mathcal{R}.\nextstr()$: The regret minimizer returns a strategy $\vec{x}^{(t)} \in \mathcal{X}$ at time $t$; \item $\mathcal{R}.\obsut(\ell^{(t)})$: The regret minimizer receives as feedback a \emph{linear utility function} $\ell^{(t)}: \mathcal{X} \ni \vec{x} \mapsto \langle \vec{\ell}^{(t)}, \vec{x} \rangle$, and may alter its internal state accordingly. \end{itemize} The utility function $\ell^{(t)}$ could depend adversarially on the previous outputs $\vec{x}^{(1)}, \dots, \vec{x}^{(t-1)}$, but not on $\vec{x}^{(t)}$. The decision making is \emph{online} in the sense that the regret minimizer can adapt to previously received information, but no information about future utilities is available. The performance of a regret minimizer is typically measured in terms of its \emph{cumulative external regret} (or simply regret), defined, for a time horizon $T \in \N$, as follows. \begin{equation} \label{eq:external_regret} \reg^T \coloneqq \max_{\vec{x}^* \in \mathcal{X}} \sum_{t=1}^T \langle \vec{x}^*, \vec{\ell}^{(t)} \rangle - \sum_{t=1}^T \langle \vec{x}^{(t)}, \vec{\ell}^{(t)} \rangle. \end{equation} That is, the performance of the online algorithm is compared to the best \emph{fixed} strategy in \emph{hindsight}. A regret minimizer is called \emph{Hannan consistent} if, under any sequence of (bounded) utility functions, its regret grows sublinearly with $T$; that is, $\reg^T = o(T)$. It is well-known that broad families of learning algorithms incur $O(\sqrt{T})$ regret under \emph{any} sequence of utility functions, which also matches the lower bound in the adversarial regime (see~\cite{Cesa06:Prediction}). 
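To make these definitions concrete, the following Python sketch runs multiplicative weights (Hedge), a standard member of the families of algorithms alluded to above, on random linear utilities and evaluates its cumulative external regret as defined in \eqref{eq:external_regret}; the implementation is a generic textbook version, not an algorithm from this paper.

```python
import math
import random

# Hedge on the d-simplex for *gains* in [-1, 1]: play x^(t) proportional
# to exp(eta * cumulative utility), then observe the utility vector.
def hedge(utilities, eta):
    d = len(utilities[0])
    w = [0.0] * d                       # cumulative utility per action
    played = []
    for u in utilities:
        z = [math.exp(eta * wi) for wi in w]
        s = sum(z)
        played.append([zi / s for zi in z])
        w = [wi + ui for wi, ui in zip(w, u)]
    return played

def external_regret(played, utilities):
    d = len(utilities[0])
    algo = sum(sum(p * g for p, g in zip(x, u))
               for x, u in zip(played, utilities))
    best = max(sum(u[k] for u in utilities) for k in range(d))
    return best - algo                  # best fixed action in hindsight

random.seed(0)
T, d = 400, 3
utils = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(T)]
xs = hedge(utils, eta=math.sqrt(math.log(d) / T))
reg = external_regret(xs, utils)
# Standard worst-case guarantee for this step size: O(sqrt(T log d)).
assert reg <= 2 * math.sqrt(T * math.log(d)) + 1
```

The $O(\sqrt{T \log d})$ bound checked at the end is exactly the Hannan-consistency guarantee discussed above; the optimistic variants introduced next improve on it when the utilities have small variation.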
\paragraph{\texorpdfstring{$\Phi$}{Phi}-Regret.} A conceptual generalization of external regret is the so-called \emph{$\Phi$-regret}. In this framework the performance of the learning algorithm is measured based on a \emph{set of transformations} $\Phi \ni \phi: \mathcal{X} \to \mathcal{X}$, leading to the notion of (cumulative) $\Phi$-regret: \begin{equation*} \reg^T \coloneqq \max_{\phi^* \in \Phi} \sum_{t=1}^T \langle \phi^*(\vec{x}^{(t)}), \vec{\ell}^{(t)} \rangle - \sum_{t=1}^T \langle \vec{x}^{(t)}, \vec{\ell}^{(t)} \rangle. \end{equation*} When the set of transformations $\Phi$ coincides with the set of \emph{constant functions} we recover the notion of external regret given in \eqref{eq:external_regret}. However, $\Phi$-regret is substantially stronger, and it yields more appealing notions of hindsight rationality~\citep{Gordon08:No}, incorporating the notion of \emph{swap regret}~\citep{Blum07:From}. \paragraph{Optimistic Regret Minimization.} An emerging subfield of online learning (\cite{Chiang12:Online,Rakhlin13:Online}) studies the improved performance guarantees one can obtain when the utilities observed by the regret minimization algorithm possess additional structure, typically in the form of \emph{small variation}. Such considerations diverge from the adversarial regime we previously described, and are motivated---among others---by the fact that in many settings the utility functions are themselves selected by \emph{regularized learning algorithms}. For our purposes we shall employ the following definition, which is a modification of the $\texttt{RVU}$ property~\citep{Syrgkanis15:Fast}. \begin{definition}[Stable-Predictive] \label{definition:stable-predictive} Let $\mathcal{R}$ be a regret minimizer and $\|\cdot\|$ be any norm. $\mathcal{R}$ is said to be $\kappa$-\emph{stable} with respect to $\|\cdot\|$ if for all $t \geq 2$ the strategies output by $\mathcal{R}$ are such that \begin{equation*} \| \vec{x}^{(t)} - \vec{x}^{(t-1)} \| \leq \kappa.
\end{equation*} Moreover, $\mathcal{R}$ is said to be $(\alpha_z^{(i),\,t}ha, \beta)$-\emph{predictive} with respect to $\|\cdot\|$ if its regret $\reg^T$ can be bounded as \begin{equation} \label{eq:predictive} \reg^T \leq \alpha_z^{(i),\,t}ha + \beta \sum_{t=1}^T \| \vec{\ell}^{(t)} - \vec{m}^{(t)} \|_*^2, \end{equation} for any sequence of utilities $\vec{\ell}^{(1)}, \dots, \vec{\ell}^{(T)}$, where $\|\cdot\|_*$ is the dual norm of $\|\cdot\|$. \end{definition} In the above definition $\vec{m}^{(t)}$ serves as the \emph{prediction} of the regret minimizer $\mathcal{R}$ at time $t \geq 1$. While traditional online algorithms are not known to satisfy \eqref{eq:predictive}, we will next present natural variants which are indeed stable-predictive in the sense of \Cref{definition:stable-predictive}. \paragraph{Optimistic Follow the Regularized Leader.} Let $d$ be a $1$-strongly convex \emph{distance generating function (DGF)} with respect to a norm $\|\cdot\|$, and $\eta > 0$ be the \emph{learning rate}. The update rule of \emph{optimistic follow the regularized leader (OFTRL)} takes the following form for $t \geq 2$: \begin{equation} \label{eq:oftrl} \tag{OFTRL} \vec{x}^{(t)} \coloneqq \argmax_{\vec{x} \in \mathcal{X}} \left\{ \left\langle \vec{x}, \vec{m}^{(t)} + \sum_{\tau=1}^{t-1} \vec{\ell}^{(\tau)} \right\rangle - \frac{d(\vec{x})}{\eta} \right\}, \end{equation} where $\vec{m}^{(t)}$ is the prediction at time $t$, and $\vec{x}^{(1)} \coloneqq \argmin_{\vec{x} \in \mathcal{X}} d(\vec{x})$. Unless specified otherwise, it will be tacitly assumed that $\vec{m}^{(t)} \coloneqq \vec{\ell}^{(t-1)}$, for $t \geq 1$, where we conventionally let $\vec{\ell}^{(0)} \coloneqq \vec{0}$. 
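On the simplex with the entropic DGF, the OFTRL update admits the familiar closed form $\vec{x}^{(t)}[k] \propto \exp\big(\eta (\vec{m}^{(t)}[k] + \sum_{\tau < t} \vec{\ell}^{(\tau)}[k])\big)$. A minimal Python sketch, with the one-recency-bias prediction $\vec{m}^{(t)} = \vec{\ell}^{(t-1)}$ (function and argument names are ours):

```python
import math

def omwu_next(d, utilities, eta):
    """One OMWU step on the d-simplex: closed form of OFTRL with the
    entropic DGF. `utilities` holds the observed vectors l^(1), ..., l^(t-1);
    the prediction is m^(t) = l^(t-1), or the zero vector at t = 1."""
    cum = [sum(u[k] for u in utilities) for k in range(d)]
    pred = utilities[-1] if utilities else [0.0] * d
    scores = [eta * (cum[k] + pred[k]) for k in range(d)]
    mx = max(scores)  # subtract the max to stabilize the exponentials
    w = [math.exp(s - mx) for s in scores]
    z = sum(w)
    return [wk / z for wk in w]
```

At $t = 1$ the update returns the uniform distribution, the minimizer of the entropic DGF, in agreement with $\vec{x}^{(1)} \coloneqq \argmin_{\vec{x}} d(\vec{x})$.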
\citet{Syrgkanis15:Fast} established the following property: \begin{lemma} \label{lemma:oftrl-predictive} \eqref{eq:oftrl} is $(\Omega_d/\eta, \eta)$-predictive\footnote{\citet{Syrgkanis15:Fast} only stated this for the simplex, but their proof readily extends to arbitrary convex and compact sets.} with respect to any norm $\|\cdot\|$ for which $d$ is $1$-strongly convex, where $\Omega_d$ is the range of $d$ on $\mathcal{X}$, that is, $\Omega_d \coloneqq \max_{\vec{x},\vec{x}'\in\mathcal{X}} \{d(\vec{x}) - d(\vec{x}')\}$. \end{lemma} The \emph{entropic regularizer} on the simplex is defined as $d(\vec{x}) \coloneqq \sum_{k=1}^d \vec{x}[k] \log \vec{x}[k]$, and it is well-known to be $1$-strongly convex with respect to the $\ell_1$-norm. This OFTRL setup will be referred to as \emph{optimistic multiplicative weights updates (OMWU)}.\footnote{When $\vec{m}^{(t)} \coloneqq \vec{0}$, for all $t \geq 1$, we recover \emph{multiplicative weights updates (MWU)}.} We will also require a suitable DGF for the sequence-form polytope. To this end, we will employ the \emph{dilatable global entropy} DGF, recently introduced by~\citet{Farina21:Better}. \begin{definition}[\citep{Farina21:Better}] \label{definition:global-dgf} The \emph{dilatable global entropy distance generating function} $d : \mathcal{Q} \to \R_{\geq 0}$ is defined as \begin{equation*} d(\vec{x}) \coloneqq \sum_{\sigma \in \Sigma } \vec{w}[\sigma] \vec{x}[\sigma] \log (\vec{x}[\sigma]). \end{equation*} The vector $\vec{w} \in \R^{|\Sigma|}$ is defined recursively as \begin{align*} \vec{w}[\varnothing] &= 1; \\ \vec{w}[(j,a)] &= \vec{\gamma}[j] - \sum_{j': \sigma_{j'} = (j,a)} \vec{\gamma}[j'], \quad \forall (j, a) \in \Sigma, \end{align*} where \begin{equation} \label{eq:gamma} \vec{\gamma}[j] = 1 + \max_{a \in \mathcal{A}_j} \left\{ \sum_{j': \sigma_{j'} = (j,a) } \vec{\gamma}[j'] \right\}, \quad \forall j \in \mathcal{J}. 
\end{equation} \end{definition} This DGF is ``nice'' (in the parlance of \citet{Hoda10:Smoothing}) since its gradient, as well as the gradient of its convex conjugate, can be computed \emph{exactly} in linear time in $|\Sigma|$---the dimension of the domain. Our analysis will require the following characterization. \begin{lemma}[\cite{Farina21:Better}] \label{lemma:dge} The dilatable global entropy $d$ of \Cref{definition:global-dgf} is a DGF for the sequence-form polytope $\mathcal{Q}$. Moreover, it is $1/\|\mathcal{Q} \|_1$-strongly convex on $\relint \mathcal{Q}$ with respect to the $\|\cdot\|_1$ norm, where $\|\mathcal{Q} \|_1 = \max_{\vec{q} \in \mathcal{Q}}\|\vec{q}\|_1$. Finally, the $d$-diameter $\Omega_d$ of $\mathcal{Q}$ is at most $\| \mathcal{Q} \|_1^2 \max_{j \in \mathcal{J}} \log |\mathcal{A}_j|$. \end{lemma} In the sequel we will instantiate \eqref{eq:oftrl} with dilatable global entropy as DGF to construct a stable-predictive regret minimizer for the sequence-form strategy polytope. \subsection{Extensive-Form Correlated and Coarse Correlated Equilibrium} In this subsection we introduce the notion of an \emph{extensive-form correlated and coarse correlated equilibrium} (henceforth EFCE and EFCCE respectively). First, for EFCE we will work with the definition used in \citep{Farina19:Correlation}, which is equivalent to the original one due to \citet{Stengel08:Extensive}. To this end, let us introduce the concept of a \emph{trigger deviation function}. \begin{definition} \label{definition:trigger_deviation_functions} Consider some player $i \in [n]$. A \emph{trigger deviation function} with respect to a \emph{trigger sequence} $\hat{\sigma} = (j,a) \in \Sigma_i^*$ and a \emph{continuation strategy} $\hat{\vec{\pi}}_i \in \Pi_j$ is any linear function $f : \R^{|\Sigma_{i}|} \to \R^{|\Sigma_{i}|}$ with the following properties. 
\begin{itemize} \item Any strategy $\vec{\pi}_i \in \Pi_{i}$ which does not prescribe the sequence $\hat{\sigma}$ remains invariant. That is, $f(\vec{\pi}_i) = \vec{\pi}_i$ for any $\vec{\pi}_i \in \Pi_{i}$ such that $\vec{\pi}_i[\hat{\sigma}] = 0$; \item Otherwise, the prescribed sequence $\hat{\sigma} = (j, a)$ is modified so that the behavior at $j$ and all of its descendants is replaced by the behavior specified by the continuation strategy: \begin{equation*} f(\vec{\pi}_i)[\sigma] = \begin{cases} \vec{\pi}_i[\sigma] & \textrm{ if } \sigma \not\succeq j ;\\ \hat{\vec{\pi}}_i[\sigma] & \textrm{ if } \sigma \succeq j, \end{cases} \end{equation*} for all $\sigma \in \Sigma_{i}$. \end{itemize} \end{definition} We will let $\Psi_{i} \coloneqq \{ \phi_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i} : \hat{\sigma} = (j, a) \in \Sigma_{i}^*, \hat{\vec{\pi}}_i \in \Pi_j \}$ be the set of all possible (linear) mappings defining trigger deviation functions for player $i$. We are ready to introduce the concept of $\efce$. \begin{definition}[EFCE] \label{definition:efce} A probability distribution $\vec{\mu} \in \Delta(\Pi)$ is an $\epsilon$-EFCE, for $\epsilon \geq 0$, if for every player $i \in [n]$ and every trigger deviation function $\phi_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i} \in \Psi_{i}$, \begin{equation*} \E_{\vec{\pi} \sim \vec{\mu}} \left[ u_{i} \left( \phi_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i} (\vec{\pi}_{i}), \vec{\pi}_{-i} \right) - u_{i}(\vec{\pi}) \right] \leq \epsilon, \end{equation*} where $\vec{\pi} = (\vec{\pi}_1, \dots, \vec{\pi}_n) \in \Pi$. We say that $\vec{\mu} \in \Delta(\Pi)$ is an $\efce$ if it is a $0$-EFCE. 
\end{definition} \begin{theorem}[\cite{Farina21:Simple}] \label{theorem:EFCE-convergence} Suppose that for every player $i \in [n]$ the sequence of deterministic sequence-form strategies $\vec{\pi}_i^{(1)}, \dots, \vec{\pi}_i^{(T)} \in \Pi_{i}$ incurs $\Psi_{i}$-regret at most $\reg_i^{T}$ under the sequence of linear utility functions \begin{equation*} \ell_i^{(t)} : \Pi_{i} \ni \vec{\pi}_{i} \mapsto u_{i} \left( \vec{\pi}_{i}, \vec{\pi}_{-i}^{(t)} \right). \end{equation*} Then, the correlated distribution of play $\vec{\mu} \in \Delta(\Pi)$ is an $\epsilon$-EFCE, where $\epsilon \coloneqq \frac{1}{T} \max_{i \in [n]} \reg_i^{T}$. \end{theorem} Similarly, we introduce the closely related notion of a \emph{coarse} trigger deviation function. \begin{definition}[Coarse Trigger Deviation Functions] \label{definition:coarse_trigger_deviation_functions} Consider some player $i \in [n]$. A \emph{coarse trigger deviation function} with respect to an information set $j \in \mathcal{J}_i$ and a continuation strategy $\hat{\vec{\pi}}_i \in \Pi_j$ is any linear function $f : \R^{|\Sigma_{i}|} \to \R^{|\Sigma_{i}|}$ with the following properties: \begin{itemize} \item For any $\vec{\pi}_i \in \Pi_{i}$ such that $\vec{\pi}_i[\sigma_j] = 0$ it holds that $f(\vec{\pi}_i) = \vec{\pi}_i$; \item Otherwise, \begin{equation*} f(\vec{\pi}_i)[\sigma] = \begin{cases} \vec{\pi}_i[\sigma] & \textrm{ if } \sigma \not\succeq j ;\\ \hat{\vec{\pi}}_i[\sigma] & \textrm{ if } \sigma \succeq j, \end{cases} \end{equation*} for all $\sigma \in \Sigma_i$. \end{itemize} \end{definition} We will also let $\widetilde{\Psi}_{i} \coloneqq \{ \phi_{j \rightarrow \hat{\vec{\pi}}_i} : j\in \mathcal{J}_{i}, \hat{\vec{\pi}}_i \in \Pi_j \}$ be the set of all (linear) mappings inducing coarse trigger deviation functions for player $i$.
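To make the action of a trigger deviation function concrete, the following toy Python sketch applies one to a deterministic strategy. The encoding is ours and purely illustrative: sequences are dot-separated strings, a strategy is a $\{0,1\}$-valued dict, and the predicate `under_j` stands in for the relation $\sigma \succeq j$; the coarse variant differs only in testing $\vec{\pi}_i[\sigma_j]$ instead of the full trigger sequence.

```python
# Toy sketch of a trigger deviation function: if the strategy plays the
# trigger sequence, replace its behavior at j and below by the
# continuation strategy; otherwise leave it unchanged.

def trigger_deviate(pi, trigger_seq, continuation, under_j):
    if pi.get(trigger_seq, 0) == 0:
        return dict(pi)  # the trigger is not played: strategy is invariant
    return {s: (continuation.get(s, 0) if under_j(s) else v)
            for s, v in pi.items()}
```

Note that the map is linear in $\vec{\pi}_i$ on each of the two cases, matching the definition.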
\begin{definition}[EFCCE] \label{definition:efcce} A probability distribution $\vec{\mu} \in \Delta(\Pi)$ is an $\epsilon$-EFCCE, for $\epsilon \geq 0$, if for every player $i \in [n]$ and every coarse trigger deviation function $\phi_{j \rightarrow \hat{\vec{\pi}}_i} \in \widetilde{\Psi}_{i}$, \begin{equation*} \E_{\vec{\pi} \sim \vec{\mu}} \left[ u_{i} \left( \phi_{j \rightarrow \hat{\vec{\pi}}_i} (\vec{\pi}_{i}), \vec{\pi}_{-i} \right) - u_{i}(\vec{\pi}) \right] \leq \epsilon, \end{equation*} where $\vec{\pi} = (\vec{\pi}_1, \dots, \vec{\pi}_n) \in \Pi$. We say that $\vec{\mu} \in \Delta(\Pi)$ is an EFCCE if it is a $0$-EFCCE. \end{definition} Analogously to \Cref{theorem:EFCE-convergence}, we show (in \Cref{appendix:proofs_prel}) that if all players employ a $\widetilde{\Psi}_i$-regret minimizer, the correlated distribution of play converges to an EFCCE. \begin{restatable}{theorem}{efccereg} \label{theorem:EFCCE-convergence} Suppose that for every player $i \in [n]$ the sequence of deterministic sequence-form strategies $\vec{\pi}_i^{(1)}, \dots, \vec{\pi}_i^{(T)} \in \Pi_{i}$ incurs $\widetilde{\Psi}_{i}$-regret at most $\reg_i^{T}$ under the sequence of linear utility functions \begin{equation*} \ell_i^{(t)} : \Pi_{i} \ni \vec{\pi}_{i} \mapsto u_{i} \left( \vec{\pi}_{i}, \vec{\pi}_{-i}^{(t)} \right). \end{equation*} Then, the correlated distribution of play $\vec{\mu} \in \Delta(\Pi)$ is an $\epsilon$-EFCCE, where $\epsilon \coloneqq \frac{1}{T} \max_{i \in [n]} \reg_i^{T}$. \end{restatable} \section{Accelerating Phi-Regret Minimization with Optimism} \label{section:accelerating-Phi} In this section we present a general construction for obtaining improved Phi-regret guarantees. Our template is then instantiated in \Cref{sec:efce,section:efcce} to obtain faster dynamics for EFCE and EFCCE. Our approach combines the framework of \citet{Gordon08:No} with stable-predictive (aka. optimistic) regret minimization. 
As in \cite{Gordon08:No}, we combine 1) a regret minimizer that outputs a linear transformation $\phi^{(t)} \in \Phi$ at every time $t$, and 2) a fixed-point oracle for each $\phi^{(t)} \in \Phi$. However, our construction further requires that 2) is stable (in the sense of \Cref{definition:stable-predictive}). To achieve this, we will focus on regret minimizers having the following property. \begin{definition} \label{definition:FP-smooth} Consider a set of functions $\Phi$ such that $\phi(\mathcal{X}) \subseteq \mathcal{X}$ for all $\phi \in \Phi$, and a no-regret algorithm $\mathcal{R}_{\Phi}$ for the set of transformations $\Phi$ which returns a sequence $(\phi^{(t)})$. We say that $\mathcal{R}_{\Phi}$ is \emph{fixed point $\kappa$-stable} with respect to a norm $\|\cdot\|$ if the following conditions hold. \begin{itemize} \item Every $\phi^{(t)}$ admits a fixed point. That is, there exists $\vec{x}^{(t)} \in \mathcal{X}$ such that $\phi^{(t)}(\vec{x}^{(t)}) = \vec{x}^{(t)}$. \item For $\vec{x}^{(t)}$ with $\vec{x}^{(t)} = \phi^{(t)}(\vec{x}^{(t)})$, there is $\vec{x}^{(t+1)} = \phi^{(t+1)}(\vec{x}^{(t+1)})$ such that $\|\vec{x}^{(t+1)} - \vec{x}^{(t)} \| \leq \kappa$. \end{itemize} \end{definition} In this context, we will show how to construct a stable-predictive $\Phi$-regret minimizer starting from the following two components. \begin{enumerate} \item $\mathcal{R}_{\Phi}$: An $(A, B)$-predictive fixed point $\kappa$-stable regret minimizer for the set $\Phi$; \item \label{item:FP} $\fporacle(\phi ; \widetilde{\vec{x}}, \kappa, \epsilon)$: A \emph{stable fixed point oracle} which returns a point $\vec{x} \in \mathcal{X}$ such that (i) $\|\phi(\vec{x}) - \vec{x}\| \leq \epsilon$, and (ii) $\|\vec{x} - \widetilde{\vec{x}}\| \leq \kappa$ (the existence of such a fixed point is guaranteed by the fixed point $\kappa$-stability assumption on the regret minimizer). 
\end{enumerate} \begin{restatable}[Stable-Predictive Phi-Regret Minimization]{theorem}{phiaccel} \label{theorem:accelerating-Phi} Consider an $(A, B)$-predictive regret minimizer $\mathcal{R}_{\Phi}$ with respect to $\|\cdot\|_{1}$ for a set of linear transformations $\Phi$ on $\mathcal{X}$. Moreover, suppose that $\mathcal{R}_{\Phi}$ is fixed point $\kappa$-stable. Then, if we have access to a $\fporacle$, we can construct a $\kappa$-stable algorithm with $\Phi$-regret $\reg^T$ bounded as \begin{equation*} \reg^T \leq A + 2 B \sum_{t=1}^T \| \vec{\ell}^{(t)} - \vec{\ell}^{(t-1)}\|^2_{\infty} + 2 B \|\vec{\ell}\|_\infty^2 \sum_{t=1}^T \|\vec{x}^{(t)} - \vec{x}^{(t-1)}\|^2_\infty + \|\vec{\ell}\|_\infty \sum_{t=1}^T \epsilon^{(t)}, \end{equation*} where $\epsilon^{(t)}$ is the error of $\fporacle$ at time $t$, and $\|\vec{\ell}^{(t)}\|_\infty \leq \|\vec{\ell}\|_\infty$ for any $t \geq 1$. It is also assumed that $\|\vec{x}\|_{\infty} \leq 1$ for all $\vec{x} \in \mathcal{X}$. \end{restatable} The $\ell_1$ norm is used only for convenience; the theorem readily extends under any equivalent norm. The proof of \Cref{theorem:accelerating-Phi} builds on the construction of \citet{Gordon08:No}, and it is included in \Cref{appendix:proof-accelerating_phi}. \section{Faster Convergence to EFCE} \label{sec:efce} Our framework (\Cref{theorem:accelerating-Phi}) reduces accelerating $\Phi$-regret minimization to (i) developing a predictive regret minimizer for the set $\Phi$, and (ii) establishing the stability of the fixed points ($\fporacle$). In this section we establish these components for the set of all possible trigger deviation functions (\Cref{definition:trigger_deviation_functions}), leading to faster convergence to EFCE. In particular, \Cref{section:Phi-regret} is concerned with the former task while \Cref{section:stability} is concerned with the latter.
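In our instantiations the transformations $\phi^{(t)}$ act on the simplex as column-stochastic linear maps, so a fixed point $\vec{x} = \phi(\vec{x})$ is a stationary distribution. The following Python sketch approximates such a fixed point by power iteration; it is a simple stand-in for $\fporacle$ under an ergodicity assumption, not the paper's exact oracle.

```python
def fixed_point(phi_matrix, iters=1000):
    """Approximate a fixed point x = phi(x) of a column-stochastic linear
    map on the simplex by power iteration (entry [i][j] = mass sent from
    coordinate j to coordinate i)."""
    d = len(phi_matrix)
    x = [1.0 / d] * d  # start from the uniform distribution
    for _ in range(iters):
        x = [sum(phi_matrix[i][j] * x[j] for j in range(d)) for i in range(d)]
    return x
```

For the $2$-state map with columns $(0.9, 0.1)$ and $(0.2, 0.8)$, the iteration converges to the stationary distribution $(2/3, 1/3)$.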
\subsection{Constructing a Predictive Regret Minimizer for \texorpdfstring{$\Psi_{i}$}{Psi-i}} \label{section:Phi-regret} Here we develop a regret minimizer for the set $\co \Psi_{i}$, the convex hull of all trigger deviation functions (\Cref{definition:trigger_deviation_functions}) of player $i \in [n]$. Given that $\co \Psi_{i} \supseteq \Psi_{i}$, this will immediately imply a $\Psi_i$-regret minimizer---after applying \Cref{theorem:accelerating-Phi}. To this end, the set $\co \Psi_{i}$ can be expressed in two stages. First, for a fixed sequence $\hat{\sigma} = (j, a) \in \Sigma_{i}^*$ we define the set $\Psi_{\hat{\sigma}} \coloneqq \co \left\{ \phi_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i} : \hat{\vec{\pi}}_i \in \Pi_j \right\}$. Then, we take the convex hull of all $\Psi_{\hat{\sigma}}$; that is, $\co \Psi_{i} = \co \{ \Psi_{\hat{\sigma}} : \hat{\sigma} \in \Sigma^*_i \}$. In light of this, we first develop a predictive regret minimizer for the set $\Psi_{\hat{\sigma}}$, for any $\hat{\sigma} \in \Sigma_i^*$. These individual regret minimizers are then combined using a \emph{regret circuit} to conclude the construction in \Cref{theorem:co-circuit}. The overall algorithm is illustrated in \Cref{fig:algo}. All of the omitted proofs and pseudocode for this section are included in \Cref{appendix:proof-Phi-regret}. \begin{figure} \caption{An overview of the overall construction.} \label{fig:algo} \end{figure} \subsubsection{Predictive Regret Minimizer for the set \texorpdfstring{$\Psi_{\hat{\sigma}}$}{Psi}.} Consider a sequence $\hat{\sigma} \in \Sigma_i^*$. We claim that the set of transformations $\Psi_{\hat{\sigma}} \coloneqq \co \left\{ \phi_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i} : \hat{\vec{\pi}}_i \in \Pi_j \right\}$ is the image of $\mathcal{Q}_j$ under the affine mapping $h_{\hat{\sigma}} : \vec{q} \mapsto \phi_{\hat{\sigma} \rightarrow \vec{q}}$.
Hence, it is not hard to see that a regret minimizer for $\Psi_{\hat{\sigma}}$ can be constructed starting from a regret minimizer for $\mathcal{Q}_j$. We now show that the predictive bound is preserved through this construction. \begin{restatable}{proposition}{firstcircuit} \label{proposition:R_sigma} Consider a player $i \in [n]$ and any trigger sequence $\hat{\sigma} = (j, a) \in \Sigma^*_i$. There exists an algorithm which constructs a regret minimizer $\mathcal{R}_{\hat{\sigma}}$ with access to an $(A, B)$-predictive regret minimizer $\mathcal{R}_{\mathcal{Q}_j}$ for the set $\mathcal{Q}_j$ such that $\mathcal{R}_{\hat{\sigma}}$ is $(A, B)$-predictive. \end{restatable} This proposition requires a predictive regret minimizer for the set $\mathcal{Q}_j$, for each $j \in \mathcal{J}_i$. To this end, we instantiate \eqref{eq:oftrl} with dilatable global entropy as DGF (\Cref{definition:global-dgf}). Then, combining \Cref{lemma:oftrl-predictive} with \Cref{lemma:dge} leads to the following predictive bound. \begin{lemma} \label{lemma:predictive-OMD} Suppose that the regret minimizer $\mathcal{R}_{\mathcal{Q}_j}$ is instantiated with dilatable global entropy. Then, $\mathcal{R}_{\mathcal{Q}_j}$ is $(A, B)$-predictive with respect to $\|\cdot\|_1$, where $A = \frac{\|\mathcal{Q}_i\|^2_1 \max_{j \in \mathcal{J}_i} \log |\mathcal{A}_j|}{\eta}$ and $B = \eta \|\mathcal{Q}_i\|_1$. \end{lemma} The discrepancy between this bound and the one in \Cref{lemma:oftrl-predictive} derives from the fact that the modulus of convexity with respect to $\|\cdot\|_1$ for the dilatable global entropy is $1/\|\mathcal{Q}_i\|_1$ instead of $1$. Alternatively, we also establish a predictive variant of CFR which can be used in place of OFTRL for performing regret minimization over the set $\mathcal{Q}_j$. 
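To illustrate the dilatable global entropy construction (\Cref{definition:global-dgf}), the following Python sketch evaluates the recursions for $\vec{\gamma}$ and $\vec{w}$ on a toy decision tree. The encoding is ours: `infosets` lists information sets top-down, and `children[(j, a)]` lists the information sets $j'$ with parent sequence $\sigma_{j'} = (j, a)$; the root-sequence weight $\vec{w}[\varnothing] = 1$ is omitted for brevity.

```python
# Sketch of the gamma/w recursions for the dilatable global entropy DGF,
# on a toy tree encoding (illustrative, not the paper's data structures).

def gamma_weights(infosets, actions, children):
    gamma, w = {}, {}
    for j in reversed(infosets):  # bottom-up: children before parents
        gamma[j] = 1 + max(sum(gamma[jp] for jp in children.get((j, a), []))
                           for a in actions[j])
    for j in infosets:
        for a in actions[j]:
            w[(j, a)] = gamma[j] - sum(gamma[jp]
                                       for jp in children.get((j, a), []))
    return gamma, w
```

On a two-level tree where action `a` at the root `r` leads to a single child infoset `c`, one obtains $\vec{\gamma}[c] = 1$, $\vec{\gamma}[r] = 2$, and weights $\vec{w}[(r,a)] = 1$, $\vec{w}[(r,b)] = 2$.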
\begin{proposition}[Predictive CFR; Full Version in \Cref{theorem:opt-cfr}] \label{proposition:cfr} There exists a variant of CFR using OMWU which is $(A, B)$-predictive, where $A = O(\frac{ \max_{j \in \mathcal{J}} \log|\mathcal{A}_j|}{\eta} \|\mathcal{Q}\|_1 )$ and $B = O(\eta \|\mathcal{Q}\|_1^3)$. \end{proposition} This construction follows the approach of~\citet{Farina19:Stable}, but here we make the dependencies on the size of the game explicit. The predictive bound we obtain for CFR is inferior to the one in \Cref{lemma:predictive-OMD}, so the rest of our theoretical analysis will follow the ``global'' approach. \subsubsection{Predictive Regret Minimizer for \texorpdfstring{$\co \Psi_{i}$}{co Psi}.} The next step consists of appropriately combining the regret minimizers $\Psi_{\hat{\sigma}}$, for all $\hat{\sigma} \in \Sigma_i^*$, to a composite regret minimizer for the set $\co \Psi_{i}$. To this end, we will use a \emph{regret circuit} for the convex hull, formally introduced below. \begin{proposition}[\cite{Farina19:Regret}] \label{proposition:regret_circuit-co} Consider a collection of sets $\mathcal{X}_1, \dots, \mathcal{X}_m$, and let $\mathcal{R}_i$ be a regret minimizer for the set $\mathcal{X}_i$, for each $i \in [m]$. Moreover, let $\mathcal{R}_{\Delta}$ be a regret minimizer for the $m$-simplex $\Delta^m$. A regret minimizer $\mathcal{R}_{\co}$ for the set $\co \{\mathcal{X}_1, \dots, \mathcal{X}_m\}$ can be constructed as follows. \begin{itemize} \item $\mathcal{R}_{\co}.\nextstr$ obtains the next strategy $\vec{x}_i^{(t)}$ of each regret minimizer $\mathcal{R}_i$, as well as the next strategy $\vec{\lambda}^{(t)} = (\vec{\lambda}^{(t)}[1], \dots, \vec{\lambda}^{(t)}[m]) \in \Delta^m$ of $\mathcal{R}_{\Delta}$, and returns the corresponding convex combination: $\vec{\lambda}^{(t)}[1] \vec{x}_1^{(t)} + \dots + \vec{\lambda}^{(t)}[m] \vec{x}_m^{(t)}$. 
\item $\mathcal{R}_{\co}.\obsut(L^{(t)})$ forwards $L^{(t)}$ to each of the regret minimizers $\mathcal{R}_1, \dots, \mathcal{R}_m$, while it forwards the utility function $(\vec{\lambda}[1], \dots, \vec{\lambda}[m]) \mapsto \vec{\lambda}[1] L^{(t)}(\vec{x}_1^{(t)}) + \dots + \vec{\lambda}[m] L^{(t)}(\vec{x}_m^{(t)})$ to $\mathcal{R}_{\Delta}$. \end{itemize} Then, if $\reg^T_1, \dots, \reg^T_m$ are the regrets accumulated by the regret minimizers $\mathcal{R}_1, \dots, \mathcal{R}_m$, and $\reg^T_{\Delta}$ is the regret of $\mathcal{R}_{\Delta}$, the regret $\reg^T_{\co}$ of the composite regret minimizer $\mathcal{R}_{\co}$ can be bounded as \begin{equation*} \reg^T_{\co} \leq \reg^T_{\Delta} + \max \{\reg_1^T, \dots, \reg_m^T\}. \end{equation*} \end{proposition} Next, we leverage this construction to obtain the main result of this subsection: a predictive regret minimizer for the set of transformations $\co \Psi_i$. \begin{restatable}{theorem}{secondcircuit} \label{theorem:co-circuit} There exists a regret minimization algorithm $\mathcal{R}_{\Psi_i}$ for the set $\co \Psi_{i}$ (\Cref{fig:algo}) such that under any sequence of utility vectors $\vec{L}_i^{(1)}, \dots, \vec{L}_i^{(T)}$ its regret $\reg_{\Psi_i}^T$ can be bounded as \begin{equation*} \reg_{\Psi_i}^T \leq \frac{ \log |\Sigma_i| + \|\mathcal{Q}_i\|^2_1 \max_{j \in \mathcal{J}_i} \log |\mathcal{A}_j|}{\eta} + \eta (\|\mathcal{Q}_i\|_1 + 4|\Sigma_{i}|^2) \sum_{t=1}^T \| \vec{L}_i^{(t)} - \vec{L}_i^{(t-1)} \|_{\infty}^2. \end{equation*} \end{restatable} As illustrated in \Cref{fig:algo}, the ``mixer'' $\mathcal{R}_{\Delta}$ is instantiated with OMWU, while each regret minimizer $\mathcal{R}_{\hat{\sigma}}$, for $\hat{\sigma} \in \Sigma_i^*$, internally employs the dilatable global entropy as DGF to construct a regret minimizer over $\mathcal{Q}_j$.
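The two subroutines of the convex-hull regret circuit (\Cref{proposition:regret_circuit-co}) can be sketched in Python as follows; the helper names are ours, and `L` is any linear utility function evaluated on the inner minimizers' outputs.

```python
def convex_hull_next(inner_strategies, lam):
    """NEXTSTRATEGY of the convex-hull circuit: mix the inner minimizers'
    outputs x_1, ..., x_m with the weights lam from the simplex minimizer."""
    d = len(inner_strategies[0])
    return [sum(lam[i] * inner_strategies[i][k] for i in range(len(lam)))
            for k in range(d)]

def simplex_utility(inner_strategies, L):
    """Utility vector forwarded to the simplex minimizer R_Delta:
    lam -> lam[1] L(x_1) + ... + lam[m] L(x_m)."""
    return [L(x) for x in inner_strategies]
```

The forwarded utility is linear in $\vec{\lambda}$, so $\mathcal{R}_{\Delta}$ faces an ordinary online linear optimization problem over $\Delta^m$.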
A notable ingredient of our predictive regret circuit (\Cref{proposition:circ-pred}) is that we employ an advanced prediction mechanism in place of the usual ``one-recency bias'' wherein the prediction is simply the previously observed utility. This leads to an improved regret bound as we further explain in \Cref{remark:better_prediction}. \subsection{Stability of the Fixed Points} \label{section:stability} As suggested by \Cref{theorem:accelerating-Phi}, employing a predictive regret minimizer is of little gain if we cannot guarantee that the observed utilities will be stable. For this reason, in this subsection we focus on characterizing the stability of the fixed points, eventually leading to our stable-predictive $\co \Psi_{i}$-regret minimizer. In the context of \Cref{theorem:accelerating-Phi}, this establishes the \emph{stable} fixed point oracle. All of the omitted proofs of this section are included in \Cref{appendix:section_5}. \paragraph{Multiplicative Stability.} Our analysis will reveal a particularly strong notion of stability we refer to as \emph{multiplicative stability}. More precisely, we say that a sequence $( \vec{z}^{(t)} )$, with $\vec{z}^{(t)} \in \mathbb{R}^d_{>0}$, is $\kappa$\emph{-multiplicative-stable}, with $\kappa \in (0,1)$, if $(1 + \kappa)^{-1} \vec{z}^{(t-1)}[k] \leq \vec{z}^{(t)}[k] \leq (1 + \kappa) \vec{z}^{(t-1)}[k]$, for any $k \in [d]$ and for all $t \geq 2$. When $\vec{z}^{(t)}[k]$ and $\vec{z}^{(t-1)}[k]$ are such that $ (1 + \kappa)^{-1} \vec{z}^{(t-1)}[k] \leq \vec{z}^{(t)}[k] \leq (1 + \kappa) \vec{z}^{(t-1)}[k]$, we say that they are $\kappa$-\emph{multiplicative-close}. We begin by showing that OMWU on the simplex and OFTRL with dilatable global entropy as DGF guarantee multiplicative stability.
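The definition of $\kappa$-multiplicative-closeness translates directly into a componentwise check, which the following Python snippet makes explicit (an illustrative helper, not used in the formal proofs):

```python
def mult_close(z_new, z_old, kappa):
    """Check kappa-multiplicative-closeness componentwise:
    z_old[k] / (1 + kappa) <= z_new[k] <= (1 + kappa) * z_old[k]."""
    return all(zo / (1 + kappa) <= zn <= (1 + kappa) * zo
               for zn, zo in zip(z_new, z_old))
```

Note that this is strictly stronger than additive closeness for entries near zero: a small entry may change only by a comparable \emph{relative} amount.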
\begin{restatable}{lemma}{mulstabsimplex} \label{lemma:OMW-simplex-stability} Consider the OMWU algorithm on the simplex $\Delta^m$ with $\eta > 0$. If all the observed utilities and the predictions are such that $\|\vec{\ell}^{(t)}\|_{\infty}, \| \vec{m}^{(t)}\|_\infty \leq \|\vec{\ell}\|_\infty$, and $\eta < 1/(12\|\vec{\ell}\|_\infty)$, then the sequence $(\vec{x}^{(t)})$ produced by $\omw$ is $(12 \eta \|\vec{\ell}\|_\infty)$-multiplicative-stable. \end{restatable} \begin{restatable}{lemma}{mulstabseq} \label{lemma:OMD-stability} Consider the \eqref{eq:oftrl} algorithm on the sequence-form strategy polytope $\mathcal{Q}$ with dilatable global entropy as DGF and $\eta > 0$. If all the utility functions are such that $\|\vec{\ell}^{(t)}\|_{\infty} \leq 1$, and $\eta = O(1/\mathfrak{D})$ is sufficiently small, where $\mathfrak{D}$ is the depth of the sequence-form strategy polytope, then the sequence $(\vec{x}^{(t)})$ produced is $O(\eta \mathfrak{D})$-multiplicative-stable. \end{restatable} To establish multiplicative stability of \eqref{eq:oftrl} under the dilatable global entropy DGF we first derive a closed-form solution which reveals the multiplicative structure of the update rule for the behavioral strategies at every ``local'' decision point. Then, the conversion to the sequence-form representation leads to a slight degradation by an $O(\mathfrak{D})$ (depth) factor in the multiplicative stability. Next, we use \Cref{lemma:OMW-simplex-stability,lemma:OMD-stability} to arrive at the following conclusion. \begin{corollary} \label{corollary:mul-stability} Consider the regret minimization algorithm of \Cref{fig:algo}, and suppose that $\mathcal{R}_{\Delta}$ is instantiated using OMWU with $\eta > 0$, while each $\mathcal{R}_{\hat{\sigma}}$ is instantiated using \eqref{eq:oftrl} with dilatable global entropy as DGF and $\eta > 0$, for all $\hat{\sigma} \in \Sigma_i^*$.
Then, for a sufficiently small $\eta = O(1/\|\mathcal{Q}_i\|_1)$, \begin{itemize} \item[(i)] The output sequence of each $\mathcal{R}_{\hat{\sigma}}$ is $O(\eta \mathfrak{D}_i)$-multiplicative-stable; \item[(ii)] The output sequence of $\mathcal{R}_{\Delta}$ is $O(\eta \|\mathcal{Q}_i\|_1)$-multiplicative-stable. \end{itemize} \end{corollary} Armed with this characterization, we will next establish the multiplicative stability of the fixed points associated with trigger deviation functions. To this end, building on the approach of \citet{Farina21:Simple}, let us introduce the following definitions. \begin{definition} Consider a player $i \in [n]$ and let $J \subseteq \mathcal{J}_{i}$ be a subset of $i$'s information sets. We say that $J$ is a \emph{trunk} of $\mathcal{J}_{i}$ if, for every $j \in J$, all predecessors of $j$ are also in $J$. \end{definition} \begin{definition} \label{definition:partial_fixed_point} Consider a player $i \in [n]$, a trunk $J \subseteq \mathcal{J}_{i}$, and $\phi_i \in \co \Psi_{i}$.
A vector $\vec{x}_i \in \R_{\geq 0}^{|\Sigma_{i}|}$ is a $J$-\emph{partial fixed point} of $\phi_i$ if the following conditions hold: \begin{itemize} \item $\vec{x}_i[\varnothing] = 1$ and $\vec{x}_i[\sigma_j] = \sum_{a \in \mathcal{A}_j} \vec{x}_i[(j, a)]$, for all $j \in J$; \item $\phi_i(\vec{x}_i)[\varnothing] = \vec{x}_i[\varnothing] = 1$, and $\phi_i(\vec{x}_i)[(j, a)] = \vec{x}_i[(j, a)]$, for all $j \in J$ and $a \in \mathcal{A}_j$. \end{itemize} \end{definition} An important property is that a $J$-partial fixed point can be efficiently ``promoted'' to a $J \cup \{j^*\}$-partial fixed point by computing the stationary distribution of a certain Markov chain (see \Cref{algo:Extend}). However, it is a priori unclear how this fixed point operation would affect the stability of the produced strategies. In fact, even for a $2$-state Markov chain, the stationary distribution could behave very non-smoothly under slight perturbations in the transition probabilities; \emph{e.g.}, see~\citep{Meyer80:The,Haviv84:Perturbation,Chen20:Hedging}. This is where the stronger notion of multiplicative stability comes into play. Indeed, it turns out that as long as the transition probabilities are multiplicative-stable, the stationary distribution will also be stable~\citep{Candogan13:Dynamics}. This observation was also leveraged by \citet{Chen20:Hedging} to obtain an $O(T^{-3/4})$ rate of convergence to correlated equilibria in normal-form games. However, our setting is substantially more complex, and direct extensions of those prior techniques appear to only give a bound \emph{exponential} in the size of the game. In light of this, one of our key observations is that the associated Markov chain has a particular structure which enables us to establish a polynomial degradation in terms of stability. At a high level, we observe that the underlying Markov chain can be expressed as the convex combination of a stable chain with a much less stable rank-one component.
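A small numerical companion to this decomposition: the Python sketch below assembles a column-stochastic chain $\mat{M} = \vec{v}\vec{1}^{\top} + \mat{C}$ and, for the $2$-state case, computes its stationary distribution in closed form so that $\mat{M}\vec{\pi} = \vec{\pi}$ can be verified directly. The helper names are ours.

```python
def build_chain(v, C):
    """Assemble M = v 1^T + C (column-stochastic when v sums to lam and
    each column of C sums to 1 - lam); entry [i][j] = prob. of j -> i."""
    m = len(v)
    return [[v[i] + C[i][j] for j in range(m)] for i in range(m)]

def stationary_2state(M):
    """Closed-form stationary distribution of a 2-state column-stochastic
    chain: pi is proportional to (M[0][1], M[1][0])."""
    b, one_minus_a = M[0][1], M[1][0]
    z = b + one_minus_a
    return [b / z, one_minus_a / z]
```

For instance, with $\vec{v} = (0.1, 0.1)$ and $\mat{C}$ having columns summing to $0.8$, the resulting $\mat{M}$ is column-stochastic and its stationary distribution satisfies $\mat{M}\vec{\pi} = \vec{\pi}$.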
The main concern is that the unstable rank-one chain could cause a substantial degradation in terms of the stability of the fixed points. We address this by proving the following key lemma. \begin{restatable}{lemma}{rankone} \label{lemma:convex_characterization} Let $\mat{M}$ be the transition matrix of an $m$-state Markov chain such that $\mat{M} := \vec{v} \vec{1}^\top + \mat{C}$, where $\mat{C}$ is a matrix with strictly positive entries and columns summing to $1 - \lambda$, and $\vec{v}$ is a vector with strictly positive entries summing to $\lambda$. Then, if $\vec{\pi}$ is the stationary distribution of $\mat{M}$, there exists, for each $i \in [m]$, a (non-empty) finite set $F_i$ and $F = \bigcup_i F_i$, and corresponding parameters $b_j \in \{0, 1\}, 0 \leq p_j \leq m-2, |S_j| = m - p_j - b_j - 1$, for each $j \in F_i$, such that \begin{equation*} \vec{\pi}[i] = \frac{\sum_{j \in F_i} \lambda^{p_j + 1} (\vec{v}[q_j])^{b_j} \prod_{(s,w) \in S_j} \mat{C}[(s,w)]}{\sum_{j \in F} C_j \lambda^{p_j + b_j} \prod_{(s,w) \in S_j}\mat{C}[(s,w)]}, \end{equation*} where $C_j = C_j(m)$ is a positive parameter. \end{restatable} The main takeaway of this lemma is that the stationary distribution has only an \emph{affine dependence} on the vector $\vec{v}$. This will be crucial as $\vec{v}$ will be much less stable than the entries of $\mat{C}$, as we make precise in the sequel. Naturally, \Cref{lemma:convex_characterization} is not at all apparent from the Markov chain tree theorem, and derives from the particular structure of the Markov chain. Indeed, to establish \Cref{lemma:convex_characterization} we deviate from the existing techniques which rely on the Markov chain tree theorem, and we instead leverage linear-algebraic techniques to characterize the corresponding eigenvector of the underlying Laplacian system. As a result, using a slight variant of \Cref{lemma:convex_characterization} (see \Cref{corollary:w_formula}) leads to the following stability bound.
\begin{restatable}{corollary}{markovstable} \label{corollary:Markov_stability} Let $\mat{M}, \mat{M}'$ be the transition matrices of $m$-state Markov chains such that $\mat{M} = \vec{v} \vec{1}^{\top} + \mat{C}$ and $\mat{M}' = \vec{v}' \vec{1}^{\top} + \mat{C}'$, where $\mat{C}$ and $\mat{C}'$ are matrices with strictly positive entries, and $\vec{v}, \vec{v}'$ are vectors with strictly positive entries such that $\vec{v} = \vec{r}/l$ and $\vec{v}' = \vec{r}'/l'$, for some $l > 0$ and $l' > 0$. If $\vec{\pi}$ and $\vec{\pi}'$ are the stationary distributions of $\mat{M}$ and $\mat{M}'$, let $\vec{w} \coloneqq l \vec{\pi}$ and $\vec{w}' \coloneqq l' \vec{\pi}'$. Finally, let $\lambda$ and $\lambda'$ be the sum of the entries of $\vec{v}$ and $\vec{v}'$ respectively. Then, if (i) the matrices $\mat{C}$ and $\mat{C}'$ are $\kappa$-multiplicative-close; (ii) the scalars $\lambda$ and $\lambda'$ are $\kappa$-multiplicative-close; (iii) the vectors $\vec{r}$ and $\vec{r}'$ are $\gamma$-multiplicative-close; and (iv) the scalars $l$ and $l'$ are also $\gamma$-multiplicative-close, then the vectors $\vec{w}$ and $\vec{w}'$ are $(\gamma + O(\kappa m))$-multiplicative-close, for a sufficiently small $\kappa = O(1/m)$. \end{restatable} Under the assumption that $\gamma \gg \kappa$, the key takeaway is that the ``closeness'' of $\vec{w}$ and $\vec{w}'$ does not scale with $O((\gamma + \kappa) m)$, but only as $\gamma + O(\kappa m)$. Using this bound we are ready to characterize the degradation in stability after a ``promotion'' (\Cref{algo:Extend}) of a partial fixed point (in the formal sense of \Cref{definition:partial_fixed_point}). \begin{comment} In this context, before we proceed with the stability analysis we will require the following characterization: \begin{lemma} \label{lemma:stability-Markov} Consider two ergodic $m$-state Markov chains $M$ and $M'$ so that $M'$ is $\kappa$-multiplicative-close to $M$, for a sufficiently small $\kappa = O(1/m)$. 
Then, their stationary distributions are $O(\kappa m)$-multiplicative-close. \end{lemma} Our proof of this lemma relies on the Markov chain tree theorem (e.g. see \cite{Anantharam89:Proof}), and constitutes a refinement of the result shown by \cite{Chen20:Hedging}. Armed with this lemma, we will establish the following: \end{comment} \begin{restatable}{proposition}{stabextend} \label{proposition:stability_extend} Consider a player $i \in [n]$, and let $\phi^{(t)}_i = \sum_{\hat{\sigma} \in \Sigma_i^*} \vec{\lambda}_i^{(t)}[\hat{\sigma}] \phi_{\hat{\sigma} \rightarrow \vec{q}^{(t)}_{\hat{\sigma}}}$ be a transformation in $\co \Psi_{i}$ such that the sequences $(\vec{\lambda}_i^{(t)})$ and $(\vec{q}_{\hat{\sigma}}^{(t)})$ are $\kappa$-multiplicative-stable, for all $\hat{\sigma} \in \Sigma_i^*$. If $(\vec{x}_i^{(t)})$ is a $\gamma$-multiplicative-stable $J$-partial fixed point sequence, the sequence of $(J \cup \{j^*\})$-partial fixed points of $\phi_i$ is $(\gamma + O(\kappa |\mathcal{A}_{j^*}|))$-multiplicative-stable, for any sufficiently small $\kappa = O(1/|\mathcal{A}_{j^*}|)$. \end{restatable} Moreover, we employ this proposition as the inductive step to derive sharp multiplicative-stability bounds for the sequence of fixed points. The underlying algorithm gradually invokes the ``promotion'' subroutine (\Cref{algo:Extend}) in a top-down traversal of the tree, and it is given in \Cref{algo:FP-EFCE}. \begin{restatable}{theorem}{fpth} \label{theorem:stability-EFCE} Consider a player $i \in [n]$, and let $\phi_i^{(t)} = \sum_{\hat{\sigma} \in \Sigma_i^{*}} \vec{\lambda}_i^{(t)}[\hat{\sigma}] \phi_{\hat{\sigma} \rightarrow \vec{q}^{(t)}_{\hat{\sigma}}}$ be a transformation in $\co \Psi_{i}$ such that the sequences $(\vec{\lambda}_i^{(t)})$ and $(\vec{q}_{\hat{\sigma}}^{(t)})$ are $\kappa$-multiplicative-stable, for all $\hat{\sigma} \in \Sigma_i^*$. 
Then, the sequence of fixed points $\vec{q}_i^{(t)} \in \mathcal{Q}_{i}$ of $\phi_i^{(t)}$ is $O(\kappa |\mathcal{A}_i| \mathfrak{D}_{i})$-multiplicative-stable, for a sufficiently small $\kappa = O(1/(|\mathcal{A}_{i}| \mathfrak{D}_{i}))$, where $|\mathcal{A}_{i}| \coloneqq \max_{j \in \mathcal{J}_{i}} |\mathcal{A}_j|$. \end{restatable} A more refined bound is discussed in \Cref{remark:precise_bound}. The important insight of \Cref{theorem:stability-EFCE} is that the stability degrades according to the \emph{sum} of the actions at the decision points encountered along each path, and not as the \emph{product} of the actions. This is crucial as the latter bound---which would follow from prior techniques---need not be polynomial in the description of the game. At the heart of this improvement lies our refined characterization obtained in \Cref{lemma:convex_characterization}. Using the stability bounds derived in \Cref{corollary:mul-stability}, we are ready to establish the multiplicative-stability of the sequence of fixed points. \begin{corollary}[Stability of Fixed Points] \label{corollary:FP-stability} For any sufficiently small $\eta = O(1/(\mathfrak{D}_i |\mathcal{A}_i| \| \mathcal{Q}_i \|_1 ))$, the sequence of fixed points $(\vec{q}_i^{(t)})$ of player $i \in [n]$ is $O(\eta \mathfrak{D}_i |\mathcal{A}_i| \| \mathcal{Q}_i \|_1 )$-multiplicative-stable. \end{corollary} \subsection{Completing the Proof} \label{subsection:completing} Finally, we combine all of the previous pieces to complete the construction. 
First, we apply \Cref{theorem:accelerating-Phi} using the predictive bound obtained in \Cref{theorem:co-circuit} to conclude that the $\Psi_i$-regret of each player $i \in [n]$ can be bounded as \begin{equation} \label{eq:psi-reg} \reg_i^T \leq \frac{\log |\Sigma_i| + \|\mathcal{Q}_i\|^2_1 \log |\mathcal{A}_i|}{\eta} + 10\eta |\Sigma_i|^2 \sum_{t=1}^T \|\vec{\ell}_i^{(t)} - \vec{\ell}_i^{(t-1)}\|_\infty^2 + 10\eta |\Sigma_i|^2 \sum_{t=1}^T \|\vec{q}_i^{(t)} - \vec{q}_i^{(t-1)}\|_\infty^2, \end{equation} where we assumed---for simplicity---exact computation of each fixed point, \emph{i.e.}, $\epsilon^{(t)} = 0$ for any $t \geq 1$, while we also used the fact that $\|\vec{\ell}_i^{(t)}\|_\infty \leq 1$ which follows from the normalization assumption. So far we have focused on bounding the regret of each player without making any assumptions about the stability of the observed utility functions. A crucial observation is that if all players employ a regularized (or smooth) learning algorithm, then the observed utility functions will also change slowly over time. This is formalized in the following auxiliary claim. \begin{restatable}{claim}{utilinfty} \label{claim:aux} For any player $i \in [n]$ the observed utilities satisfy \begin{equation*} \| \vec{\ell}_i^{(t)} - \vec{\ell}_i^{(t-1)} \|^2_{\infty} \leq (n-1) |\mathcal{Z}|^2 \sum_{k \neq i} \| \vec{q}_k^{(t)} - \vec{q}_k^{(t-1)} \|^2_1. \end{equation*} \end{restatable} Thus, plugging this bound into \eqref{eq:psi-reg} yields that the $\Psi_i$-regret $\reg_i^T$ of each player $i$ can be bounded as \begin{equation*} \frac{\log |\Sigma_i| + \|\mathcal{Q}_i\|^2_1 \log |\mathcal{A}_i|}{\eta} + 10 \eta (n-1) |\Sigma_i|^2 |\mathcal{Z}|^2 \sum_{t=1}^T \sum_{k \neq i} \|\vec{q}_k^{(t)} - \vec{q}_k^{(t-1)}\|_1^2 + 10\eta |\Sigma_i|^2 \sum_{t=1}^T \|\vec{q}_i^{(t)} - \vec{q}_i^{(t-1)}\|_\infty^2. \end{equation*} As a result, using \Cref{corollary:FP-stability} we arrive at the following conclusion. 
\begin{corollary} \label{corollary:1/4} Suppose that each player follows the dynamics of \Cref{fig:algo} with a sufficiently small learning rate $\eta = O(1/(T^{1/4} \mathfrak{D}_i |\mathcal{A}_i| \|\mathcal{Q}_i\|_1))$. Then, the $\Psi_i$-regret of each player will be bounded as $\reg_i^T \leq \mathcal{P} T^{1/4}$, where $\mathcal{P}$ is independent of $T$ and polynomial in the description of the game. \end{corollary} Finally, \Cref{theorem:main} follows from \Cref{theorem:EFCE-convergence} after performing sampling in order to transition to deterministic strategies, as we explain in \Cref{appendix:everything}. We also point out that the complexity of every iteration in the proposed dynamics is analogous to that in~\citep{Farina21:Simple}, although the dynamics developed in the latter paper only attain a rate of convergence of $O(T^{-1/2})$. Lastly, we remark that it is easy to make the overall regret minimizer robust against adversarial losses using an adaptive choice of learning rate. \begin{comment} \paragraph{Perturbation Theory for Stationary Distributions of Markov Chains.} The goal of this paragraph is to establish the following lemma: \begin{lemma} \label{lemma:FP-continuity} Let $\mat{P}$ and $\widetilde{\mat{P}}$ represent the stochastic matrices of ergodic Markov chains, such that the perturbation matrix $\mat{E} \coloneqq \mat{P} - \widetilde{\mat{P}}$ satisfies $\|\mat{E}\|_2 \leq \kappa$, for a sufficiently small $\kappa$. Then, if $\vec{x}$ and $\widetilde{\vec{x}}$ are the stationary distributions of the Markov chains described with $\mat{P}$ and $\widetilde{\mat{P}}$, it follows that \begin{equation} \| \vec{x} - \widetilde{\vec{x}} \|_2 \leq f(\kappa). \end{equation} \end{lemma} This result leverages the theory of perturbations in stationary Markov chains \cite{Meyer80:The,Haviv84:Perturbation}. 
In particular, we will leverage a characterization due to Meyer \cite[Theorem 4.3]{D.80:The}: \begin{lemma}[\cite{D.80:The}] \label{lemma:Meyer} Let $\mat{P}$ and $\widetilde{\mat{P}}$ represent the transition matrices of ergodic Markov chains with stationary distributions $\vec{x}$ and $\widetilde{\vec{x}}$ respectively. If $\mat{E} \coloneqq \mat{P} - \widetilde{\mat{P}}$ is the perturbation matrix, there exists a parameter $\lambda \geq 0$ which depends only on the transition matrix $\mat{P}$, such that for a sufficiently small perturbation $\|\mat{E}\| \leq \kappa < 1/\lambda$, \begin{equation} \|\vec{x} - \widetilde{\vec{x}}\| \leq \frac{\lambda \kappa}{1 - \lambda \kappa} \|\vec{x}\|, \end{equation} where it is assumed that $\|\mat{E}\|$ is the induced matrix norm from the vector norm $\|\vec{x}\|$; i.e., $\|\mat{E}\| = \sup_{\vec{x} \in \mathbb{R}^d} \{\|\mat{E} \vec{x} \| : \|\vec{x} \| = 1 \}$. \end{lemma} \begin{proof}[Proof of \Cref{lemma:FP-continuity}] The proof follows directly from \Cref{lemma:Meyer}. 
\end{proof} \end{comment} \NewDocumentCommand{\pure}{O{i}}{ \ifthenelse{\equal{#1}{}} {\vec{\pi}} {\vec{\pi}^{(#1)}} } \NewDocumentCommand{\hatpure}{O{i}}{ \ifthenelse{\equal{#1}{}} {\hat{\vec{\pi}}} {\hat{\vec{\pi}}^{(#1)}} } \NewDocumentCommand{\puret}{O{i}O{t}}{\vec{\pi}^{(#1),\,#2}} \NewDocumentCommand{\Pure}{O{i}}{\Pi^{(#1)}} \NewDocumentCommand{\tdev}{O{j}O{\hatpure[]}}{ \phi_{#1\to#2} } \NewDocumentCommand{\Mdev}{O{j}O{\hatpure[]}}{ \vec{M}_{#1\to#2} } \NewDocumentCommand{\cJ}{O{i}}{\mathscr{J}^{(#1)}} \NewDocumentCommand{\Info}{O{i}}{\mathcal{J}_{#1}} \NewDocumentCommand{\Seqs}{O{i}}{\Sigma_{#1}} \NewDocumentCommand{\pc}{O{z}}{p^{(c)}(#1)} \NewDocumentCommand{\ut}{O{i}}{u^{(#1)}} \NewDocumentCommand{\bbone}{O{XXX}}{\mathds{1}_{\{#1\}}} \NewDocumentCommand{\Ph}{O{i}}{\Psi_{#1}} \NewDocumentCommand{\Tph}{O{i}}{(\co\Ph)} \NewDocumentCommand{\Q}{O{i}}{\mathcal{Q}} \NewDocumentCommand{\q}{}{\vec{q}} \NewDocumentCommand{\qt}{O{i}O{t}}{\vec{q}^{(#1),\,#2}} \NewDocumentCommand{\parseq}{O{i}O{j}}{\sigma^{(#1)}(#2)} \NewDocumentCommand{\rdev}{O{i}O{j}O{\hat{\vec{\pi}}}}{ r^{(#1)}_{\muv,#2\to#3} } \NewDocumentCommand{\cumr}{O{i}O{T}}{R^{(#1),\,#2}} \section{Faster Convergence to EFCCE} \label{section:efcce} In this section we turn our attention to learning dynamics for extensive-form \emph{coarse} correlated equilibrium (EFCCE). 
While the dynamics we previously developed for EFCE would also trivially converge to EFCCE, as the former is a subset of the latter~\citep{Farina20:Coarse}, our main contribution is to show that each iteration of EFCCE dynamics can be substantially more efficient compared to EFCE. Indeed, unlike all known methods for EFCE, we obtain in \Cref{sec:closed form} a succinct closed-form solution for the fixed points associated with EFCCE which does not require the expensive computation of the stationary distribution of a Markov chain. This places EFCCE closer to normal-form coarse correlated equilibria (NFCCE) in terms of the per-iteration complexity, even though EFCCE prescribes a much more compelling notion of correlation. Furthermore, we use this closed-form characterization in \Cref{sec:stab} to obtain improved stability bounds for the fixed points associated with EFCCE, and with a much simpler analysis compared to the one for EFCE. \subsection{Closed-Form Fixed Point Computation}\label{sec:closed form} As suggested by our general template introduced in \Cref{theorem:accelerating-Phi}, we first have to construct a predictive regret minimizer for the set of \emph{coarse} trigger deviation functions $\widetilde{\Psi}_i$ (\Cref{definition:coarse_trigger_deviation_functions}). This construction is very similar to the one for $\Psi_i$ we previously described in detail in \Cref{section:Phi-regret}. For this reason, here we focus on the computation and the stability properties of the fixed points associated with any $\phi_i \in \co \widetilde{\Psi}_i$. Specifically, we will first show that it is possible to compute a sequence-form strategy $\vec{q}_i$ such that $\phi_i(\vec{q}_i) = \vec{q}_i$ in time $O(|\Sigma_i| \mathfrak{D}_i)$. Indeed, let $\phi_i =\sum_{j\in\Info}\vec{\lambda}_i[j]\tdev[j][\q_{j}]$ be any coarse trigger deviation function, where $\vec{\lambda}_i\in\Delta({\Info})$, and $\q_{j}\in\Q_{j}$ for each $j\in\Info$. 
\Cref{algo:FP-EFCCE} describes an efficient procedure to compute a fixed point of a given transformation $\phi_i \in \co \widetilde{\Psi}_i$. In particular, the algorithm iterates over the sequences of player $i$ according to their partial ordering $\prec$. That is, it is never the case that a sequence $\sigma=(j,a)$ is considered before $\sigma_j$. For every sequence $\sigma=(j,a) \in \Sigma_i^* $ the algorithm computes $d_{\sigma}\in\mathbb{R}_{\ge0}$ as the sum of the weights corresponding to information sets preceding $j$ (\Cref{line:dsigma}). If $d_{\sigma}=0$, the choice we make at $\sigma$ is immaterial as long as the resulting vector $\q_i$ is a well-formed sequence-form strategy. For this reason, we simply set $\q_i[\sigma]$ so that the probability-mass flow is evenly divided among sequences originating in $j$ (\Cref{line: set to uniform}). Otherwise, when $d_{\sigma}\ne 0$, \Cref{eq: fixed point} assigns to $\q_i[\sigma]$ the weighted average, with weights $\vec{\lambda}_i[j']$, of the products $\q_{j'}[\sigma]\,\q_i[\sigma_{j'}]$ over the information sets $j' \preceq j$ preceding $j \in \mathcal{J}_i$. In the next theorem we show that \Cref{algo:FP-EFCCE} is indeed correct, and runs in time $O(|\Sigma_i| \mathfrak{D}_i)$. \begin{restatable}{theorem}{closedForm} \label{thm:closedForm} For any player $i\in[n]$ and any transformation $\phi_i = \sum_{j\in\Info}\vec{\lambda}_i[j]\tdev[j][\q_{j}]\in\co \widetilde{\Psi}_i$, the output $\q_i \in \mathbb{R}^{|\Seqs|}$ of \Cref{algo:FP-EFCCE} is such that $\q_i \in \mathcal{Q}_i$ and $\phi_i(\q_i)= \q_i$. Furthermore, \Cref{algo:FP-EFCCE} runs in time $O(|\Sigma_i|\mathfrak{D}_i)$. 
\end{restatable} \begin{algorithm}[ht] \SetAlgoLined \DontPrintSemicolon \KwIn{$\phi_i = \sum_{j \in \mathcal{J}_{i}} \vec{\lambda}_i[j] \phi_{j \rightarrow \vec{q}_j} \in \co \widetilde{\Psi}_{i}$} \KwOut{$\vec{q}_i \in \mathcal{Q}_i$ such that $\phi_i(\vec{q}_i) = \vec{q}_i$} $\vec{q}_i \leftarrow \mathbf{0} \in \R^{|\Sigma_{i}|}$, $\vec{q}_i[\varnothing] \leftarrow 1$\label{line:q star init} \\ \For{$\sigma = (j, a) \in \Sigma^*_{i}$ in top-down ($\prec$) order}{ $d_{\sigma} \leftarrow \sum_{j' \preceq j} \vec{\lambda}_i[j']$ \label{line:dsigma} \\ \If{$d_{\sigma} = 0$}{$\vec{q}_i[\sigma] \leftarrow \frac{\vec{q}_i[\sigma_j]}{|\mathcal{A}_j|}$ \label{line: set to uniform}} \Else{$\vec{q}_i[\sigma] \leftarrow \frac{1}{d_{\sigma}} \sum_{j' \preceq j} \vec{\lambda}_i[j'] \vec{q}_{j'}[\sigma] \vec{q}_i[\sigma_{j'}]$ \label{eq: fixed point} \\ } } \textbf{return} $\vec{q}_i$ \caption{$\textsc{FixedPoint}(\phi_i)$ for $\phi_i \in \co \widetilde{\Psi}_{i}$} \label{algo:FP-EFCCE} \end{algorithm} \subsection{Stability of the Fixed Points} \label{sec:stab} Another important application of our closed-form solution in \Cref{algo:FP-EFCCE} is that it allows us to obtain, through a simple analysis, sharp bounds on the stability of the fixed points. Indeed, we show that the fixed point operation only leads to (multiplicative) degradation linear in the depth of each player's subtree. \begin{restatable}{proposition}{stabilityefcce} \label{proposition:FP-EFCCE} Suppose that the sequences $(\vec{\lambda}_i^{(t)})$ and $(\vec{q}_j^{(t)})$, for all $j \in \mathcal{J}_i$, are $\kappa$-multiplicative-stable. Then, \Cref{algo:FP-EFCCE} yields a sequence of $(12 \kappa \mathfrak{D}_{i})$-multiplicative-stable strategies, assuming that $\kappa < 1/(12 \mathfrak{D}_{i})$. \end{restatable} Observe that the derived bound on stability is slightly better compared to that for $\efce$ (\Cref{theorem:stability-EFCE}). 
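To make the recursion in \Cref{algo:FP-EFCCE} concrete, the following sketch (Python; the two-level toy tree, weights, and continuation strategies are all hypothetical) runs the update $\q_i[\sigma] \leftarrow \frac{1}{d_{\sigma}} \sum_{j' \preceq j} \vec{\lambda}_i[j']\, \vec{q}_{j'}[\sigma]\, \vec{q}_i[\sigma_{j'}]$ top-down and checks that the output satisfies the sequence-form flow constraints:

```python
# Toy tree: info set j0 with actions {a, b}; playing a leads to info set j1
# with actions {c, d}.  "0" denotes the empty sequence.
SEQS = ["0", "j0.a", "j0.b", "j1.c", "j1.d"]          # top-down (prec) order
PARENT_SEQ = {"j0": "0", "j1": "j0.a"}                # sigma_j for each info set
ANCESTORS = {"j0.a": ["j0"], "j0.b": ["j0"],          # info sets j' preceding sigma
             "j1.c": ["j0", "j1"], "j1.d": ["j0", "j1"]}
ACTIONS = {"j0": ["j0.a", "j0.b"], "j1": ["j1.c", "j1.d"]}

lam = {"j0": 0.6, "j1": 0.4}                          # lambda_i over info sets
cont = {                                              # continuation strategies q_j
    "j0": {"j0.a": 0.7, "j0.b": 0.3, "j1.c": 0.28, "j1.d": 0.42},
    "j1": {"j1.c": 0.5, "j1.d": 0.5},
}

q = {"0": 1.0}
for sigma in SEQS[1:]:
    j = sigma.split(".")[0]
    d = sum(lam[jp] for jp in ANCESTORS[sigma])
    if d == 0:   # choice immaterial: split the incoming flow uniformly
        q[sigma] = q[PARENT_SEQ[j]] / len(ACTIONS[j])
    else:
        q[sigma] = sum(lam[jp] * cont[jp][sigma] * q[PARENT_SEQ[jp]]
                       for jp in ANCESTORS[sigma]) / d

# Sequence-form flow constraints hold.
assert abs(q["j0.a"] + q["j0.b"] - q["0"]) < 1e-9
assert abs(q["j1.c"] + q["j1.d"] - q["j0.a"]) < 1e-9
```

The flow checks at the end mirror the claim $\q_i \in \mathcal{Q}_i$ of \Cref{thm:closedForm}; verifying the fixed-point property $\phi_i(\q_i) = \q_i$ would additionally require materializing the coarse trigger deviation matrices.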
Consequently, having established the stability of the fixed points, we can apply \Cref{theorem:accelerating-Phi} to derive a stable-predictive $\widetilde{\Psi}_{i}$-regret minimizer for each player $i \in [n]$. This leads to a result analogous to \Cref{corollary:1/4}, which we showed for EFCE, but our dynamics for EFCCE have a substantially improved per-iteration complexity. \section{Experiments} In this section we support our theoretical findings through experiments conducted on benchmark general-sum games. Namely, we experiment with the following popular games: (i) a three-player variant of \emph{Kuhn poker}~\citep{Kuhn50:Simplified}; (ii) a two-player bargaining game known as \emph{Sheriff}~\citep{Farina19:Correlation}; (iii) a three-player version of \emph{Liar's dice}~\citep{Lisy15:Online}; and (iv) three-player Goofspiel~\citep{Ross71:Goofspiel}. A detailed description of each of these games and the precise parameters we use is given in \cref{app:games}. The rest of this section is organized as follows. \Cref{sec:exp-EFCE} shows the convergence of our accelerated dynamics for EFCE (as presented in \Cref{sec:efce}) compared to the prior state of the art. Next, \Cref{sec:exp-EFCCE} illustrates the convergence of our dynamics for EFCCE. \subsection{Convergence to EFCE} \label{sec:exp-EFCE} Here we investigate the performance of our accelerated dynamics for EFCE (\Cref{fig:algo}) compared to the existing algorithm by~\citet{Farina21:Simple}. Both of these dynamics will be based on a CFR-style decomposition into ``local'' regret-minimization problems. In particular, our stable-predictive dynamics will use OMWU at every local decision point (as in \Cref{proposition:cfr}), while the algorithm of~\citet{Farina21:Simple} will be instantiated with (i) \emph{regret matching$^+$} ($\rmp$)~\citep{Tammelin14:Solving} for each simplex (in place of regret matching), and (ii) the vanilla MWU algorithm for each simplex. 
In accordance with our theoretical predictions (\Cref{corollary:1/4}), the stepsize for $\omw$ is set as $\eta^{(t)} = \tau \cdot t^{-1/4}$, while for MWU it is set as $\eta^{(t)} = \tau \cdot t^{-1/2}$. Here $\tau$ is treated as a hyperparameter, chosen by picking the best-performing value among $\{0.01, 0.1, 1, 10, 100 \}$. \begin{figure} \caption{The performance of EFCE dynamics based on MWU, OMWU, and RM$^+$ on four general-sum EFGs.} \label{fig:efce_convergence_experiments} \end{figure} \Cref{fig:efce_convergence_experiments} shows the rate of convergence for each of the three learning dynamics we described. On the $x$-axis we indicate the number of iterations performed by each algorithm and on the $y$-axis we plot the \emph{EFCE gap}, defined as the maximum advantage that any player can gain by defecting optimally from the mediator's recommendations. It should be noted that one iteration costs the same for every algorithm, up to constant factors. We see that on every game, $\omw$ performs better than or on par with $\rmp$ and MWU. On Sheriff, a benchmark introduced specifically for the study of correlated equilibria, $\omw$ outperforms both $\rmp$ and MWU by about an order of magnitude. In the context of two-player zero-sum games, CFR with $\rmp$ is a formidable algorithm, typically outperforming theoretically superior dynamics. With that in mind, it is quite interesting that for EFCE computation we are able to achieve better performance using OMWU with only a modest amount of stepsize tuning. We hypothesize that this is due to the inherent differences between solving a zero-sum game via Nash equilibrium versus the problem of computing correlated equilibria. One caveat to these results is that we did not use two tricks that help $\cfr$ in two-player zero-sum EFG solving: \emph{alternation} and \emph{linear averaging}. 
These tricks are known to retain convergence guarantees in that context~\citep{Tammelin15:Solving,Farina19:Online,Burch19:Revisiting}, but it is unclear if they still guarantee convergence in the $\efce$ setting. \subsection{EFCCE} \label{sec:exp-EFCCE} Next, we investigate the convergence of our learning dynamics for EFCCE, obtained within the same framework we developed for EFCE. We first measure the rate of convergence in a setup analogous to that of the previous subsection. The results are illustrated in \Cref{fig:efcce_convergence_experiments}. \begin{figure} \caption{The performance of EFCCE dynamics based on MWU, OMWU, and RM$^+$ on four general-sum EFGs.} \label{fig:efcce_convergence_experiments} \end{figure} Interestingly, we observe a noticeable qualitative difference for convergence to EFCCE. Indeed, unlike EFCE (\Cref{fig:efce_convergence_experiments}), $\rmp$ outperforms OMWU in both Liar's dice and Goofspiel. It is also surprising that MWU converges faster than its optimistic counterpart in Kuhn poker. These results suggest a substantial difference in the convergence properties between EFCE and EFCCE. Furthermore, we illustrate in \Cref{fig:efce_vs_efcce} the running time complexity of EFCE versus EFCCE dynamics (both instantiated with $\rmp$), measured in terms of the EFCCE gap. \begin{figure} \caption{The convergence of EFCE and EFCCE dynamics to EFCCE, measured through the EFCCE gap.} \label{fig:efce_vs_efcce} \end{figure} In each game, the fixed point computation for the EFCE dynamics was performed through an optimized implementation of the power iteration method, interrupted when the Euclidean norm of the residual was below the value of $10^{-6}$. On the other hand, the fixed points for EFCCE were computed using our closed-form solution (\Cref{algo:FP-EFCCE}). In all four games, we see that our EFCCE dynamics outperform the EFCE dynamics in terms of the running time complexity, often by a significant margin. 
This is consistent with our intuition since EFCE dynamics are solving a strictly harder problem---minimizing the EFCE gap, instead of the EFCCE gap. \section{Conclusions} In this paper we developed uncoupled no-regret learning dynamics so that if all agents play $T$ repetitions of the game according to our dynamics, the correlated distribution of play is an $O(T^{-3/4})$-approximate extensive-form correlated equilibrium. This substantially improves over the prior best rate of $O(T^{-1/2})$. One of our main technical contributions was to characterize the stability of the fixed points associated with trigger deviation functions through a refined perturbation analysis of a structured Markov chain, which may be of independent interest. On the other hand, for fixed points associated with extensive-form \emph{coarse} correlated equilibria we established a closed-form solution, circumventing the computation of the stationary distribution of any Markov chain. Finally, experiments conducted on standard benchmarks corroborated our theoretical findings. Following recent progress in normal-form games \citep{Daskalakis21:near,Anagnostides21:Near}, an important question for the future is to obtain a further acceleration of the order $\widetilde{O}(T^{-1})$. As we pointed out in \Cref{sec:related}, this would inevitably require new techniques since the known methods do not apply for the substantially more complex problem of extensive-form correlated equilibria. We believe that our characterization of the fixed points associated with trigger deviation functions could be an important step towards achieving this goal. \begin{acks} Tuomas Sandholm is supported by the National Science Foundation under grants IIS-1901403 and CCF-1733556. \end{acks} \appendix \section{Omitted Proofs} \label{appendix:proofs} This section includes all of the proofs we omitted from the main body. Let us first introduce some additional useful notation. 
\subsection{Further Notation} It will be convenient to instantiate a trigger deviation function (recall \Cref{definition:trigger_deviation_functions}) in the form of a linear mapping $\phi_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i} : \R^{|\Sigma_{i}|} \ni \vec{x} \mapsto \mat{M}_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i} \vec{x}$, where $\mat{M}_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i}$ is such that for any $\sigma_r, \sigma_c \in \Sigma_{i}$, \begin{equation} \label{eq:canonical_trigger_deviation_functions} \mat{M}_{\hat{\sigma} \rightarrow \hat{\vec{\pi}}_i}[\sigma_r, \sigma_c] = \begin{cases} 1 & \textrm{if } \sigma_c \not\succeq \hat{\sigma} \quad\&\quad \sigma_r = \sigma_c; \\ \hat{\vec{\pi}}_i[\sigma_r] & \textrm{if } \sigma_c = \hat{\sigma} \quad\&\quad \sigma_r \succeq j; \\ 0 & \textrm{otherwise}, \end{cases} \end{equation} where $\hat{\sigma} = (j, a) \in \Sigma^*_i$. It is not hard to show that the linear mapping described in \eqref{eq:canonical_trigger_deviation_functions} is indeed a trigger deviation function in the sense of \Cref{definition:trigger_deviation_functions}. Similarly, we express a coarse trigger deviation function in the form of a linear mapping $\phi_{j \rightarrow \hat{\vec{\pi}}_i} : \R^{|\Sigma_{i}|} \ni \vec{x} \mapsto \mat{M}_{j \rightarrow \hat{\vec{\pi}}_i} \vec{x}$, where $\mat{M}_{j \rightarrow \hat{\vec{\pi}}_i}$ is such that for any $\sigma_r, \sigma_c \in \Sigma_{i}$, \begin{equation*} \mat{M}_{j \rightarrow \hat{\vec{\pi}}_i}[\sigma_r, \sigma_c] = \begin{cases} 1 & \textrm{if } \sigma_c \not\succeq j \quad\&\quad \sigma_r = \sigma_c; \\ \hat{\vec{\pi}}_i[\sigma_r] & \textrm{if } \sigma_c = \sigma_j \quad\&\quad \sigma_r \succeq j; \\ 0 & \textrm{otherwise}. 
\end{cases} \end{equation*} Furthermore, we will use the notation $\vec{x} \otimes \vec{y} = \vec{x} \vec{y}^{\top}$ to denote the \emph{outer product} of (compatible) vectors $\vec{x}$ and $\vec{y}$, while we will also write $(\mat{M})^{\flat}$ to represent the standard \emph{vectorization} of matrix $\mat{M}$. \input{text/appendix_proofs_accelerated_phi} \input{text/appendix_proofs_phi_regret} \input{text/appendix_section5} \input{text/appendix_everything_together} \section{Sequential Decision Making and Stable-Predictive \texorpdfstring{$\cfr$}{}} \label{appendix:opt-cfr} The main purpose of this section is to provide a stable-predictive variant of $\cfr$ following the construction in~\citep{Farina19:Stable}. The main result is given in \Cref{theorem:opt-cfr}. We begin by introducing the basic setting of \emph{sequential decision making}. A sequential decision process can be represented using a tree consisting of two types of nodes: \emph{decision nodes} and \emph{observation nodes}. The set of all decision nodes will be denoted by $\mathcal{J}$, while the set of observation nodes by $\mathcal{K}$. At every decision node $j \in \mathcal{J}$ the agent has to select a strategy $\vec{x}_j$ in the form of a probability distribution over all possible actions $\mathcal{A}_j$. On the other hand, at every observation point $k \in \mathcal{K}$ the agent may receive feedback in the form of a signal in the set $\mathcal{S}_k$. At every decision point $j \in \mathcal{J}$ of the sequential decision process, the strategy $\vec{x}_j \in \Delta(\mathcal{A}_j)$ secures a utility of the form $\langle \vec{\ell}_j, \vec{x}_j \rangle$, for some utility vector $\vec{\ell}_j$. The expected utility throughout the entire decision process can be expressed as $\sum_{j \in \mathcal{J}} \pi_j \langle \vec{\ell}_j, \vec{x}_j \rangle$, where $\pi_j$ is the probability that the agent reaches decision point $j$. 
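As a small illustration of these quantities (Python; the two-decision-point process and all numbers are our own invention), the expected utility $\sum_{j \in \mathcal{J}} \pi_j \langle \vec{\ell}_j, \vec{x}_j \rangle$ can be computed by propagating reach probabilities top-down:

```python
# Two decision points: j0, and j1, which is reached only when the first
# action at j0 is played.
x = {"j0": [0.6, 0.4], "j1": [0.5, 0.5]}     # behavioral strategies x_j
ell = {"j0": [1.0, 0.0], "j1": [0.2, 0.8]}   # utility vectors ell_j

reach = {"j0": 1.0}
reach["j1"] = reach["j0"] * x["j0"][0]       # pi_{j1}: reach probability of j1

expected = sum(reach[j] * sum(l * p for l, p in zip(ell[j], x[j]))
               for j in ("j0", "j1"))
assert abs(expected - 0.9) < 1e-12           # 0.6*1.0 + 0.6*(0.5*0.2 + 0.5*0.8)
```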
It is important to point out that in all extensive-form games of perfect recall the agents face a sequential decision process. A central ingredient for our construction of stable-predictive CFR is a decomposition of the strategy space, described in detail below. \paragraph{Decomposition of the Sequence-Form Strategy Space} Our construction will rely on a recursive decomposition of the sequence-form strategy space $\mathcal{X}^{\triangle}$: \begin{itemize} \item Consider an observation node $k \in \mathcal{K}$, and let $\mathcal{C}_k$ be the children decision points of $k$. Then, $\mathcal{X}_k^{\triangle}$ can be decomposed as the following Cartesian product: \begin{equation} \label{eq:observation-decomposition} \mathcal{X}_k^{\triangle} \coloneqq \bigtimes_{j \in \mathcal{C}_k} \mathcal{X}_j^{\triangle}; \end{equation} \item Consider a decision point $j \in \mathcal{J}$, and let $\mathcal{C}_j = \{k_1, \dots, k_{m_j} \}$ be the children observation points of $j$, with $m_j = |\mathcal{A}_j|$. Then, $\mathcal{X}_j^{\triangle}$ can be decomposed as follows: \begin{equation} \label{eq:decision-decomposition} \mathcal{X}^{\triangle}_j \coloneqq \left\{ \begin{pmatrix} \vec{\lambda}[1] \\ \vdots \\ \vec{\lambda}[m_j] \\ \vec{\lambda}[1] \vec{x}_{1} \\ \vdots \\ \vec{\lambda}[m_j] \vec{x}_{m_j} \end{pmatrix} : (\vec{\lambda}[1], \dots, \vec{\lambda}[m_j]) \in \Delta^{m_j}, \vec{x}_{1} \in \mathcal{X}_{k_1}^{\triangle}, \dots, \vec{x}_{{m_j}} \in \mathcal{X}^{\triangle}_{k_{m_{j}}} \right\}. \end{equation} \end{itemize} In view of this decomposition, the basic ingredients for the overall construction are given in \Cref{proposition:observation} and \Cref{proposition:decision}. We should note that in the sequel the stability and the predictive bounds will be tacitly assumed with respect to the pair of norms $(\| \cdot \|_1, \| \cdot \|_{\infty})$. 
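As a minimal sketch of the decomposition in \eqref{eq:decision-decomposition} (Python; names and numbers are illustrative), a sequence-form strategy at a decision point stacks the simplex point on top of each child strategy scaled by the corresponding weight:

```python
import numpy as np

def decision_node_strategy(lam, children):
    """Stack (lam, lam[1]*x_1, ..., lam[m]*x_m) as in the decomposition."""
    parts = [np.asarray(lam, dtype=float)]
    parts += [l * np.asarray(x, dtype=float) for l, x in zip(lam, children)]
    return np.concatenate(parts)

lam = [0.25, 0.75]                 # simplex point over the two actions
x1 = [0.4, 0.6]                    # child strategy under the first action
x2 = [1.0]                         # child strategy under the second action
x = decision_node_strategy(lam, [x1, x2])

assert np.isclose(x[:2].sum(), 1.0)       # simplex block
assert np.isclose(x[2:4].sum(), lam[0])   # first child block sums to lam[0]
```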
\begin{proposition} \label{proposition:observation} Consider an observation node $k \in \mathcal{K}$, and assume access to a $\kappa_j$-multiplicative-stable $(\alpha_j, \beta_j)$-predictive regret minimizer $\mathcal{R}_j^{\triangle}$ over the sequence-form strategy space $\mathcal{X}_j^{\triangle}$, for each $j \in \mathcal{C}_k$. Then, we can construct a $\max_j\{\kappa_j\}$-multiplicative-stable $(A, B)$-predictive regret minimizer $\mathcal{R}_k^{\triangle}$ for the sequence-form strategy space $\mathcal{X}_k^{\triangle}$, where $A = \sum_{j \in \mathcal{C}_k} \alpha_j$ and $B = \sum_{j \in \mathcal{C}_k} \beta_j$. \end{proposition} \begin{proof} Given the decomposition of \eqref{eq:observation-decomposition}, the composite regret minimizer can be constructed using a regret circuit for the Cartesian product~\citep{Farina19:Regret}. In particular, it is direct to verify that $\reg_k^{\triangle, T} = \sum_{j \in \mathcal{C}_k} \reg_j^{\triangle, T}$, where $\reg_k^{\triangle, T}$ is the regret accumulated by the composite regret minimizer, and $\reg_j^{\triangle, T}$ the regret of each individual regret minimizer $\mathcal{R}_j^{\triangle}$. In particular, by assumption we know that \begin{equation*} \reg_j^{\triangle, T} \leq \alpha_j + \beta_j \sum_{t=1}^T \| \vec{\ell}_j^{\triangle, (t)} - \vec{\ell}_j^{\triangle, (t-1)}\|^2_{\infty}. \end{equation*} As a result, we can conclude that \begin{equation*} \reg_k^{\triangle, T} \leq \left( \sum_{j \in \mathcal{C}_k} \alpha_j \right) + \left( \sum_{j \in \mathcal{C}_k} \beta_j \right) \sum_{t=1}^T \|\vec{\ell}_k^{\triangle, (t)} - \vec{\ell}_k^{\triangle, (t-1)}\|^2_{\infty}, \end{equation*} where we used that $\| \vec{\ell}_j^{\triangle, (t)} - \vec{\ell}_j^{\triangle, (t-1)}\|_{\infty} \leq \| \vec{\ell}_k^{\triangle, (t)} - \vec{\ell}_k^{\triangle, (t-1)}\|_{\infty}$. 
Finally, the $\max_j \{\kappa_j\}$-multiplicative-stability of $\mathcal{R}_k^{\triangle}$ follows directly from the $\kappa_j$-multiplicative-stability of each $\mathcal{R}_j^{\triangle}$. \end{proof} In the following construction the regret circuit for the convex hull uses an advanced prediction mechanism, analogous to the one explained in \Cref{remark:better_prediction}. \begin{proposition} \label{proposition:decision} Consider a decision node $j \in \mathcal{J}$, and assume access to a $K$-multiplicative-stable $(\alpha_k, \beta_k)$-predictive regret minimizer $\mathcal{R}_k^{\triangle}$ over the sequence-form strategy space $\mathcal{X}_k^{\triangle}$, for each $k \in \mathcal{C}_j$. Moreover, assume access to a $\kappa$-multiplicative-stable $(\alpha, \beta)$-predictive regret minimizer $\mathcal{R}_{\Delta}$ over the simplex $\Delta(\mathcal{A}_j)$. Then, we can construct a $(\kappa + \kappa K + K)$-multiplicative-stable $(A, B)$-predictive regret minimizer $\mathcal{R}_j^{\triangle}$ for the sequence-form strategy space $\mathcal{X}_j^{\triangle}$, where \begin{equation*} \begin{split} A &= \alpha + \max_{k \in \mathcal{C}_j}\{\alpha_k\}; \\ B &= \max_{k \in \mathcal{C}_j} \{ \beta_k\} + \beta \| \mathcal{Q}\|^2_1, \end{split} \end{equation*} where $\|\mathcal{Q}\|_1$ is an upper bound on the $\ell_1$ norm of all $\vec{x} \in \mathcal{X}^{\triangle}$. \end{proposition} \begin{proof} For this construction we will use the regret circuit for the convex hull, stated in \Cref{proposition:regret_circuit-co}. First, we have that, by assumption, the regret $\reg_k^{\triangle, T}$ accumulated by each regret minimizer $\mathcal{R}_k^{\triangle}$ can be bounded as \begin{equation*} \reg_k^{\triangle, T} \leq \alpha_k + \beta_k \sum_{t=1}^T \|\vec{\ell}_k^{\triangle, (t)} -\vec{\ell}_k^{\triangle, (t-1)} \|^2_{\infty}.
\end{equation*} Moreover, by construction, each regret minimizer $\mathcal{R}_k^{\triangle}$ receives the same utility as $\mathcal{R}_j^{\triangle}$; this, along with the guarantee of \Cref{proposition:regret_circuit-co}, implies that \begin{equation} \label{eq:decision_node-1} \reg_j^{\triangle, T} \leq \alpha + \max_{k \in \mathcal{C}_j} \{ \alpha_k \} + \max_{k \in \mathcal{C}_j} \{ \beta_k \} \sum_{t=1}^T \|\vec{\ell}_j^{\triangle, (t)} - \vec{\ell}_j^{\triangle, (t-1)} \|^2_{\infty} + \beta \sum_{t=1}^T \| \vec{\ell}_\lambda^{(t)} - \vec{\ell}_\lambda^{(t-1)}\|^2_{\infty}, \end{equation} where $\vec{\ell}_\lambda^{(t)}$ represents the utility function received as input by $\mathcal{R}_{\Delta}$. Next, similarly to the analysis of \Cref{proposition:circ-pred}, we can infer that for some $k \in \mathcal{C}_j$, \begin{align*} \|\vec{\ell}_\lambda^{(t)} - \vec{\ell}_\lambda^{(t-1)}\|_{\infty} = | \langle \vec{\ell}_j^{\triangle, (t)} - \vec{\ell}_j^{\triangle,(t-1)}, \vec{x}_k^{(t)} \rangle | \leq \| \vec{\ell}_j^{\triangle, (t)} - \vec{\ell}_j^{\triangle, (t-1)} \|_\infty \| \vec{x}_k^{(t)} \|_1 \leq \| \vec{\ell}_j^{\triangle, (t)} - \vec{\ell}_j^{\triangle, (t-1)} \|_\infty \| \mathcal{Q}\|_1, \end{align*} where we used that $\| \vec{x}_k^{(t)} \|_1 \leq \|\mathcal{Q}\|_1 $. As a result, plugging this bound into \eqref{eq:decision_node-1}, we can conclude that \begin{equation*} \reg_j^{\triangle, T} \leq \left( \alpha + \max_{k \in \mathcal{C}_j}\{\alpha_k\} \right) + \left( \max_{k \in \mathcal{C}_j} \{ \beta_k\} + \beta \|\mathcal{Q}\|_1^2 \right) \sum_{t=1}^T \| \vec{\ell}_j^{\triangle, (t)} - \vec{\ell}_j^{\triangle, (t-1)}\|^2_{\infty}. \end{equation*} Finally, the $(\kappa + \kappa K + K)$-multiplicative-stability of $\mathcal{R}_j^{\triangle}$ can be directly verified from the decomposition given in \eqref{eq:decision-decomposition}.
\end{proof} \begin{remark} Given the decomposition provided in \eqref{eq:decision-decomposition}, the regret circuit for the convex hull should operate on the appropriate ``lifted'' space, but this does not substantially alter the regret analysis, since the augmented entries in the lifted space remain invariant; this is illustrated and further explained in~\citep[Figure 7]{Farina19:Regret}. \end{remark} Finally, we inductively combine \Cref{proposition:observation} and \Cref{proposition:decision} in order to establish the main result of this section: a stable-predictive variant of $\cfr$. \begin{theorem}[Optimistic $\cfr$] \label{theorem:opt-cfr} If every local regret minimizer $\mathcal{R}_j^{\triangle}$ is updated using $\omw$ with a sufficiently small learning rate $\eta$, for each $j \in \mathcal{J}$, we can construct an $(A, B)$-predictive regret minimizer $\mathcal{R}^{\triangle}$ for the space of sequence-form strategies $\mathcal{X}^{\triangle}$, such that \begin{equation} \label{eq:predictivity} \begin{split} A &= O\left( \frac{\log |\mathcal{A}|}{\eta} \|\mathcal{Q}\|_1 \right) ; \\ B &= O( \eta \|\mathcal{Q}\|^3_1), \end{split} \end{equation} where $|\mathcal{A}| \coloneqq \max_{j \in \mathcal{J}} |\mathcal{A}_j|$; $\|\vec{\ell}\|_\infty$ is an upper bound on the $\ell_{\infty}$ norm of the utilities observed by $\mathcal{R}^{\triangle}$; $\|\mathcal{Q}\|_1$ is an upper bound on the $\ell_1$ norm of any $\vec{x} \in \mathcal{X}^{\triangle}$; and $\mathfrak{D}$ is the depth of the decision process. Moreover, the sequence of strategies produced by $\mathcal{R}^{\triangle}$ is $O(\eta \mathfrak{D} \|\mathcal{Q}\|_1 \|\vec{\ell}\|_\infty)$-multiplicative-stable. \end{theorem} \begin{proof} First of all, it is easy to see that all losses observed by the ``local'' regret minimizers---\emph{i.e.}, the \emph{counterfactual losses}~\citep[Section 4]{Farina19:Stable}---have $\ell_\infty$ norm bounded by $O(\|\mathcal{Q}\|_1 \| \vec{\ell}\|_\infty)$.
As a result, we can conclude from \Cref{lemma:OMW-simplex-stability} that the output of each local regret minimizer $\mathcal{R}_j^{\triangle}$ under $\omw$ with a sufficiently small learning rate $\eta$ is $O(\eta \|\mathcal{Q}\|_1 \|\vec{\ell}\|_\infty )$-multiplicative-stable. Along with \Cref{proposition:decision}, we can inductively infer that the output of $\mathcal{R}^{\triangle}$ is $O(\eta \mathfrak{D} \| \mathcal{Q} \|_1 \|\vec{\ell}\|_\infty) $-multiplicative-stable, for a sufficiently small $\eta = O(1/(\mathfrak{D} \| \mathcal{Q} \|_1 \|\vec{\ell}\|_\infty))$. This establishes the claimed bound for the multiplicative stability. For the predictive bound, first recall that the range of the entropic regularizer on the $m$-dimensional simplex is $\log m$. Thus, by \Cref{lemma:oftrl-predictive} we know that each local regret minimizer at information set $j \in \mathcal{J}$ instantiated with $\omw$ with learning rate $\eta$ will be $(\log(|\mathcal{A}_j|)/\eta, \eta)$-predictive. As a result, the predictive bound in \eqref{eq:predictivity} follows inductively from \Cref{proposition:decision}. \end{proof} Naturally, the same bounds apply for constructing a regret minimizer for the subspace $\mathcal{X}_j^{\triangle}$, for any decision point $j \in \mathcal{J}$, as required in \Cref{proposition:R_sigma}. \section{Description of Game Instances used in the Experiments} \label{app:games} In this section we give a description of the game instances used in our experiments. The parameters associated with each game are summarized in \Cref{table:parameters}. \paragraph{Kuhn poker} First, we experimented on a \emph{three-player} variant of the popular benchmark game known as \emph{Kuhn poker}~\citep{Kuhn50:Simplified}. In our version, a deck of three cards---a Jack, a Queen, and a King---is employed. Players initially commit a single chip to the pot, and privately receive a single card.
The first player can either {\em check} or {\em bet} (\emph{i.e.} place an extra chip). Then, the second player can in turn check or bet if the first player checked, or {\em fold}/{\em call} in response to the first player's bet. If no betting occurred in the previous rounds, the third player can either check or bet. In the contrary case, the player can either fold or call. Following a bet of the second player (or respectively the third player), the first player (or respectively the first and the second players) has to decide whether to fold or to call. At the \emph{showdown}, the player with the \emph{highest} card---who has not folded in a previous round---gets to win all the chips committed in the pot. \paragraph{Sheriff} Our second benchmark is a bargaining game inspired by the board game \emph{Sheriff of Nottingham}, introduced by \cite{Farina19:Correlation}. In particular, we used the \emph{baseline} version of the game. This game consists of two players: the \emph{Smuggler} and the \emph{Sheriff}. The Smuggler must first choose a number $n \in \{0, 1, 2, 3\}$ which corresponds to the number of illegal items to be loaded in the cargo. It is assumed that each illegal item has a fixed value of $1$. Subsequently, $2$ rounds of bargaining between the two players follow. At each round, the Smuggler decides on a bribe ranging from $0$ to $3$, and the Sheriff must decide whether or not the cargo will be inspected given the bribe amount. The Sheriff's decision is binding only in the last round of bargaining. In particular, if during the last round the Sheriff accepts the bribe, the game stops with the Smuggler obtaining a utility of $n$ minus the bribe amount $b$ that was proposed in the last bargaining round, while the Sheriff receives a utility equal to $b$. On the other hand, if the Sheriff does not accept the bribe in the last bargaining round and decides to inspect the cargo, there are two possible alternatives.
If the cargo has no illegal items (\emph{i.e.} $n = 0$), the Smuggler receives the fixed amount of $3$, while the utility of the Sheriff is set to be $-3$. In the contrary case, the utility of the Smuggler is assumed to be $-2n$, while the utility of the Sheriff is $2n$. \paragraph{Liar's dice} The final benchmark we experimented on is the game of \emph{Liar's dice}, introduced by~\citet{Lisy15:Online}. In the three-player variant, the beginning of the game sees each of the three players privately roll an unbiased $3$-face die. Then, the players have to sequentially make claims about their private information. In particular, the first player may announce any face value up to $3$, as well as the minimum number of dice that the player claims are showing that value among the dice of \emph{all} players. Then, each player can either make a higher bid, or challenge the previous claim by declaring the previous agent a ``liar''. More precisely, it is assumed that a bid is higher than the previous one if either the face value is higher, or if the claimed number of dice is greater. If the current claim is challenged, all the dice must be revealed. If the claim was valid, the last bidder wins and receives a reward of $+1$, while the challenger suffers a negative payoff of $-1$. Otherwise, the utilities obtained are reversed. Any other player receives $0$ utility. \paragraph{Goofspiel} This game was introduced---in its current form---by~\citet{Ross71:Goofspiel}. In Goofspiel every player has a hand of cards numbered from $1$ to $r$, where $r$ is the \emph{rank} of the game. An additional stack of $r$ cards is shuffled and set aside as the stack of prize cards. In each turn a prize card is revealed, and each player privately chooses one of their cards to bid. The player with the highest card wins the current prize; in case of a tie, the prize card is discarded.
After $r$ turns have been completed, all the prizes have been dealt out and players obtain the sum of the values of the prize cards they have won. It is worth noting that, due to the tie-breaking mechanism, even two-player instances are general-sum. We also consider instances with \emph{limited information}---the actions of the other players are observed only at the end of the game. This makes the game strategically more involved, as players have less information regarding the opponents' previous actions. \begin{table}[H]\centering \begin{tabular}{lrrrr} \toprule \textbf{Game} & \textbf{Players} & \textbf{Decision points} & \textbf{Sequences} & \textbf{Leaves}\\ \midrule Kuhn poker & 3 & 36 & 75 & 78\\ Sheriff & 2 & 73 & 222 & 256\\ Goofspiel & 3 & 837 & 934 & 1296\\ Liar's dice & 3 & 1536 & 3069 & 13797\\ \bottomrule \end{tabular} \caption{The parameters of each game.} \label{table:parameters} \end{table} \end{document}
\begin{document} \title{Cubic Maximal Nontraceable Graphs\thanks{This material is based upon work supported by the National Research Foundation under Grant number 2053752.}} \author{Marietjie Frick, Joy Singleton \\ University of South Africa,\\ P.O. Box 392, Unisa, 0003,\\ South Africa.\\ \textbf{e-mail:} [email protected] [email protected]} \date{ } \maketitle \begin{abstract} We determine a lower bound for the number of edges of a 2-connected maximal nontraceable graph, and present a construction of an infinite family of maximal nontraceable graphs that realize this bound. \textbf{Keywords: }maximal nontraceable, maximal nonhamiltonian, hypohamiltonian, hamiltonian path, hamiltonian cycle, traceable, nontraceable, hamiltonian, nonhamiltonian, maximal hypohamiltonian \textbf{2000 Mathematics Subject Classification: } 05C38 \end{abstract} \section{Introduction} We consider only simple, finite graphs $G$ and denote the vertex set and edge set of $G$ by $V(G)$ and $E(G)$, respectively. The \textit{open neighbourhood} of a vertex $v$ in $G$ is the set $N_{G}(v)=\{x\in V(G):vx\in E(G)\}$. If $U$ is a nonempty subset of $V(G)$ then $\langle U \rangle$ denotes the subgraph of $G$ induced by $U$. A graph $G$ is \textit{hamiltonian} if it has a \textit{hamiltonian cycle} (a cycle containing all the vertices of $G$), and \textit{traceable} if it has a \textit{hamiltonian path} (a path containing all the vertices of $G$). A graph $G$ is \textit{maximal nonhamiltonian} (MNH) if $G$ is not hamiltonian, but $G+e$ is hamiltonian for each $e \in E(\overline{G})$, where $\overline{G}$ denotes the complement of $G$. A graph $G$ is \textit{maximal nontraceable} (MNT) if $G$ is not traceable, but $G+e$ is traceable for each $e\in E(\overline{G})$. A graph $G$ is \textit{hypohamiltonian} if $G$ is not hamiltonian, but every vertex-deleted subgraph $G-v$ of $G$ is hamiltonian.
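To make these definitions concrete, the following brute-force sketch (ours, not part of the paper) checks traceability and maximal nontraceability directly from the definitions; the claw $K_{1,3}$ serves as a toy MNT example, since its three leaves cannot all be endvertices of one path, while joining any two leaves creates a hamiltonian path.

```python
# Brute-force illustration of the definitions above (ours, not from the
# paper): traceability and maximal nontraceability of a small graph.
from itertools import combinations, permutations

def traceable(n, edges):
    """True iff the graph on vertices 0..n-1 has a hamiltonian path."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    return any(all((p[i], p[i + 1]) in adj for i in range(n - 1))
               for p in permutations(range(n)))

def is_mnt(n, edges):
    """Maximal nontraceable: G is not traceable, but G+e is, for every non-edge e."""
    if traceable(n, edges):
        return False
    present = {frozenset(e) for e in edges}
    return all(traceable(n, edges + [e])
               for e in combinations(range(n), 2)
               if frozenset(e) not in present)

claw = [(0, 1), (0, 2), (0, 3)]  # K_{1,3}: centre 0 joined to three leaves
print(is_mnt(4, claw))           # True
```

The factorial-time search is of course only viable for very small graphs; it is meant purely as an executable restatement of the definitions.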
We say that a graph $G$ is \textit{maximal hypohamiltonian} (MHH) if it is MNH and hypohamiltonian. In 1978 Bollob\'{a}s \cite{bollobas} posed the problem of finding the least number of edges, $f(n)$, in a MNH graph of order $n$. Bondy \cite{bondy} had already shown that a MNH graph of order $n\geq 7$ that contains $m$ vertices of degree $2$ has at least $(3n+m)/2$ edges, and hence $f(n)\geq \left\lceil 3n/2\right\rceil $ for $n\geq 7$. Combined results of Clark, Entringer and Shapiro \cite{ce}, \cite{ces} and Lin, Jiang, Zhang and Yang \cite{ljzy} show that $f(n)=\left\lceil 3n/2\right\rceil $ for $n\geq 19$ and for $n=6,10,11,12,13,17$. The values of $f(n)$ for the remaining values of $n$ are also given in \cite{ljzy}. Let $g(n)$ be the minimum size of a MNT graph of order $n$. Dudek, Katona and Wojda \cite{dudek} showed that $g(n)\geq (3n-20)/2$ for all $n$ and, by means of a recursive construction, they found MNT graphs of order $n$ and size $O(n\log n)$. To date, no cubic MNT graphs have been reported. We construct an infinite family of cubic MNT graphs, thus showing that $g(n)\leq 3n/2$ for infinitely many $n$. Now let $g_{2}(n)$ be the minimum size of a 2-connected MNT graph of order $n$. We prove that $g_{2}(n)\geq \left\lceil 3n/2\right\rceil $ for $n\geq 7$. It then follows from our constructions that $g_{2}(n)=\left\lceil 3n/2\right\rceil $ for $n=8p$ for $p\geq 5$, $n=8p+2$ for $p\geq 6$, $n=8p+4$ for $p=3$ and $p \ge 6$, and $n=8p+6$ for $p\geq 4$. \section{A lower bound for the size of a 2-connected MNT graph} Bondy \cite{bondy} proved that if $G$ is a 2-connected MNH graph and $v\in V(G)$ with degree $d\left( v\right) =2$, then each neighbour of $v$ has degree at least 4. He also showed that the neighbours of such a vertex are in fact adjacent. In order to prove a corresponding result for 2-connected MNT graphs we need the following result. \begin{lemma} \label{subgraph}Let $Q$ be a path in a MNT graph $G$.
If $\langle V(Q) \rangle$ is not complete, then some internal vertex of $Q$ has a neighbour in $G-V(Q)$. \end{lemma} \begin{proof} Let $u$ and $v$ be two nonadjacent vertices of $\langle V(Q) \rangle$. Then $G+uv$ has a hamiltonian path $P$. Let $x$ and $y$ be the two endvertices of $Q$ and suppose no internal vertex of $Q$ has a neighbour in $G-V(Q)$. Then $P$ has a subpath $R$ in $\langle V(Q) \rangle + uv$ and $R$ has either one or both endvertices in $\{x,y\}$. If $R$ has only one endvertex in $\{x,y\}$, then $P$ has an endvertex in $Q$. In either case the path obtained from $P$ by replacing $R$ with $Q$ is a hamiltonian path of $G$. \end{proof} \begin{lemma} \label{mntdeg2}If $G$ is a MNT graph and $v\in V(G)$ with $d\left( v\right) =2$, then the neighbours of $v$ are adjacent. If in addition $G$ is $2$-connected, then each neighbour of $v$ has degree at least $4$. \end{lemma} \begin{proof} Let $N_G(v)=\{x_1,x_2\}$ and let $Q$ be the path $x_1vx_2$. Since $N_G(v)\subseteq V(Q)$, it follows from Lemma \ref{subgraph} that $\langle V(Q) \rangle$ is a complete graph; hence $x_1$ and $x_2$ are adjacent. Now assume that $G$ is $2$-connected. Since $G$ is not traceable, we may assume $d(x_{1})>2$. Then also $d(x_{2})>2$, otherwise $x_{1}$ would be a cut vertex of $G$. Let $z$ be a neighbour of $x_1$ and let $Q$ be the path $zx_1vx_2$. Since $d(v)=2$ the graph $\langle V(Q) \rangle$ is not complete, and hence it follows from Lemma \ref{subgraph} that $x_1$ has a neighbour in $G-V(Q)$. Thus $d(x_1)\geq 4$. Similarly $d(x_2)\geq 4$. \end{proof} We also have the following two lemmas concerning MNT graphs that have vertices of degree 2. \begin{lemma} \label{mnt1deg2}Suppose $G$ is a $2$-connected MNT graph. Suppose $v_{1},v_{2}\in V(G)$ such that $d(v_{1})=d(v_{2})=2$ and $v_{1}$ and $v_{2}$ have exactly one common neighbour $x$. Then $d(x)\geq 5.$ \end{lemma} \begin{proof} The vertices $v_{1}$ and $v_{2}$ cannot be adjacent otherwise $x$ would be a cut vertex.
Let $N(v_i)=\{x,y_i\}$; $i=1,2$. It follows from Lemma \ref{mntdeg2} that $x$ is adjacent to $y_i$; $i=1,2$. Let $Q$ be the path $y_1v_1xv_2y_2$. Since $\langle V(Q) \rangle$ is not complete, it follows from Lemma \ref{subgraph} that $x$ has a neighbour in $G-V(Q)$. Hence $d(x) \geq 5$. \end{proof} \begin{lemma} \label{mnt2deg2}Suppose $G$ is a MNT graph. Suppose $v_{1},v_{2}\in V(G)$ such that $d(v_{1})=d(v_{2})=2$ and $v_{1}$ and $v_{2}$ have the same two neighbours $x_{1}$ and $x_{2}$. Then $N_{G}(x_{1})-\{x_{2} \}=N_{G}(x_{2})-\{x_{1}\}$. Also $d(x_{1})=d(x_{2})\geq 5.$ \end{lemma} \begin{proof} From Lemma \ref{mntdeg2} it follows that $x_{1}$ and $x_{2}$ are adjacent. Let $Q$ be the path $x_2v_1x_1v_2$. $\langle V(Q) \rangle$ is not complete since $v_1$ and $v_2$ are not adjacent. Thus it follows from Lemma \ref{subgraph} that $x_1$ has a neighbour in $G-V(Q)$. Now suppose $p \in N_{G-V(Q)}(x_1)$ and $p \notin N_{G}(x_2)$. Then a hamiltonian path $P$ in $G+px_{2}$ contains a subpath of either of the forms given in the first column of Table 1. Note that $i,j\in \{1,2\}$; $i\neq j$ and that $L$ represents a subpath of $P$ in $ G-\{x_{1},x_{2},v_{1},v_{2},p\}$. If each of the subpaths is replaced by the corresponding subpath in the second column of the table we obtain a hamiltonian path $P^{\prime }$ in $G$, which leads to a contradiction. \begin{equation*} \begin{tabular}{|l|l|} \hline Subpath of $P$ & Replace with \\ \hline $v_{i}x_{1}v_{j}x_{2}p$ & $v_{i}x_{2}v_{j}x_{1}p$ \\ \hline $v_{i}x_{1}Lpx_{2}v_{j}$ & $v_{i}x_{2}v_{j}x_{1}Lp$ \\ \hline \end{tabular} \end{equation*} \begin{equation*} \text{Table 1} \end{equation*} Hence $p \in N_{G}(x_2)$. Thus $N_{G}(x_{1})-\{x_{2}\}\subseteq N_{G}(x_{2})-\{x_{1}\}$. Similarly $N_{G}(x_{2})-\{x_{1}\}\subseteq N_{G}(x_{1})-\{x_{2}\}$. Thus $ N_{G}(x_{1})-\{x_{2}\}=N_{G}(x_{2})-\{x_{1}\}$ and hence $d(x_{1})=d(x_{2})$. Now let $Q$ be the path $px_1v_1x_2v_2$. 
Since $\langle V(Q) \rangle$ is not complete, it follows from Lemma \ref{subgraph} that $x_1$ or $x_2$ has a neighbour in $G-V(Q)$. Hence $d(x_1)=d(x_2) \geq 5$. \end{proof} We now consider the size of a 2-connected MNT graph. \begin{lemma} \label{mnt3deg2}Suppose $G$ is a MNT graph of order $n\geq 6$ and that $v_{1},v_{2}$ and $v_{3}$ are vertices of degree $2$ in $G$ having the same neighbours, $x_{1}$ and $x_{2}$. Then $G-\{v_{1},v_{2},v_{3}\}$ is complete and hence $|E(G)|= \frac{1}{2}(n^{2}-7n+24)$. \end{lemma} \begin{proof} Suppose $G-\{v_{1},v_{2},v_{3}\}$ is not complete. Then there exist $p,q\in V(G)-\{v_{1},v_{2},v_{3}\}$ which are not adjacent. Since $G$ is not traceable, every hamiltonian path in $G+pq$ must contain the edge $pq$. However, since $\{v_{1},v_{2},v_{3}\}$ is an independent set, no path in $G+pq$ having $pq$ as an edge can contain all three of $v_{1},v_{2}$ and $v_{3}$. Thus $G+pq$ is not traceable, contradicting the maximality of $G$. Thus $G-\{v_{1},v_{2},v_{3}\}=K_{n-3}.$ Hence $|E(G)|= \frac{1}{2}(n-3)(n-4)+6.$ \end{proof} \begin{theorem} Suppose $G$ is a $2$-connected MNT graph. If $G$ has order $n\geq 7$ and $m$ vertices of degree $2$, then $|E(G)|\geq \frac{1}{2}(3n+m)$. \end{theorem} \begin{proof} If $G$ has three vertices of degree 2 having the same two neighbours then, by Lemma \ref{mnt3deg2}, \begin{equation*} |E(G)|=\tfrac{1}{2}(n^{2}-7n+24)\geq \tfrac{1}{2}(3n+m)\ \text{when}\ n \geq 7, \end{equation*} since $m=3.$ We now assume that $G$ does not have three vertices of degree 2 that have the same two neighbours. Let $v_{1},...,v_{m}$ be the vertices of degree 2 in $G$ and let $H=G-\{v_{1},...,v_{m}\}.$ Then by Lemmas \ref{mntdeg2}, \ref{mnt1deg2} and \ref{mnt2deg2} the minimum degree $\delta(H)$ of $H$ is at least 3. Hence \begin{equation*} |E(G)|=|E(H)|+2m\geq \tfrac{3}{2}(n-m)+2m=\tfrac{1}{2}(3n+m). \end{equation*} \end{proof} Thus $g_2(n) \geq \left\lceil 3n/2\right\rceil$ for $n \geq 7$. For $m \geq 1$ the bound $\frac{1}{2}(3n+m)$ is realized for $n=7$ (a Zelinka Type I graph \cite{zelinka}) and $n=18$ (a graph constructed in \cite{bullock}).
These graphs are depicted in Fig.\ 1. \[ \includegraphics[scale=0.5]{cubicfig1.eps} \] \[ \text{Fig.\ 1} \] We now construct an infinite family of 2-connected cubic MNT graphs, showing that $g_2(n)= \frac{3n}{2}$ for infinitely many $n$. \section{A construction of an infinite family of cubic MNT graphs} In order to construct cubic MNT\ graphs we require the following lemmas concerning MHH graphs. \begin{lemma} \label{hypo2}Suppose $H$ is a hypohamiltonian graph having a vertex $z$ of degree $3$. Put $F=H-z$. \begin{enumerate} \item[(a)] $F$ has a hamiltonian path ending at any of its vertices. \item[(b)] There is no hamiltonian path in $F$ with both endvertices in $N_{H}(z)$. \item[(c)] For any $y\in N_{H}(z)$ there exists a hamiltonian path in $F-y$ with the other two vertices of $N_{H}(z)$ being the endvertices. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item[(a)] Since $H$ is hypohamiltonian, $F$ is hamiltonian; breaking a hamiltonian cycle of $F$ at any vertex yields a hamiltonian path ending at that vertex. \item[(b)] If a hamiltonian path exists in $F$ having both endvertices in $N_{H}(z)$, then $H$ has a hamiltonian cycle, which is a contradiction. \item[(c)] Since $H-y$ is hamiltonian there is a hamiltonian cycle in $H-y$ containing the path $vzw$, where $v,w\in N_{H}(z)-\{y\}$. Thus there is a hamiltonian path in $F-y$ with endvertices $v$ and $w$. \end{enumerate} \end{proof} \begin{lemma} \label{mnt1}Suppose $H$ is a MNH graph having a vertex $z$ of degree $3$. Put $F=H-z$. If $u_{1}$ and $u_{2}$ are nonadjacent vertices in $F$, then $F+u_{1}u_{2}$ has a hamiltonian path with both endvertices in $N_{H}(z)$. \end{lemma} \begin{proof} There exists a hamiltonian cycle in $H+u_{1}u_{2}$ which contains the path $vzw$, where $v,w\in N_{H}(z)$. Thus there exists a hamiltonian path in $F+u_{1}u_{2}$ with endvertices $v$ and $w$.
\end{proof} \begin{center} \Large{\bf Construction of the graph $K_4[H_1,H_2,H_3]$} \end{center} For $i=1,2,3$, let $H_{i}$ be a cubic MHH graph, with a vertex $z_i$ with neighbours $a_i$, $ b_i$ and $c_i$, which satisfies the following condition.\newline \noindent{\textbf{Condition (C)}:}\newline {\it{For every vertex}} $u_{i} \notin N_{H_{i}}(z_{i})$, {\it{the graph}} $H_{i}+z_{i}u_{i}$ {\it{has a hamiltonian cycle containing the edge}} $a_{i}z_{i}$ {\it{as well as a hamiltonian cycle not containing}} $a_{i}z_{i}$. Graphs satisfying this condition will be presented at the end of the paper. In the same sense as Gr\"{u}nbaum \cite{grunbaum} we use $H_{i}\backslash z_{i}$ to denote $H_{i}$ ``opened up'' at $z_{i}$ (see Fig.\ 2). \[ \includegraphics[scale=0.67]{cubicfig3.eps} \] \[ \text{Fig.\ 2} \] Let $K_4[H_1,H_2,H_3]$ be an inflated $K_4$ obtained from $H_{i}\backslash z_{i}$; $i=1,2,3$ and a vertex $x$ by joining $x$ to the semi-edge incident with $a_i$ for $i=1,2,3$ and joining the remaining semi-edges as depicted in Fig.\ 3. Let $F_i$ denote $H_i-z_i$; $i=1,2,3$. We call $a_i$, $b_i$ and $c_i$ the \textit{exit vertices} of $F_i$. \[ \includegraphics[scale=0.7]{cubic3.eps} \] \[ \text{Fig.\ 3} \] We introduce the following notation which we use in the theorem below: \newline $P_{G}(v,w)$ denotes a hamiltonian path in a graph $G$ from $v$ to $w$; \newline $P_{G}(-,w)$ denotes a hamiltonian path in $G$ ending at $w$;\newline $P_{G}(v,-)$ denotes a hamiltonian path in $G$ beginning at $v$; and\newline $P_{G}$ denotes a hamiltonian path in $G$. \begin{theorem} \label{cubicmnt}The graph $G=K_{4}[H_{1},H_{2},H_{3}]$ is a cubic MNT graph. \end{theorem} \begin{proof} It is obvious from the construction that $G$ is cubic. We now show that $G$ is nontraceable. Suppose $P$ is a hamiltonian path of $ G $. Then at least one of the $F_{i}$'s, say $F_{2}$, does not contain an endvertex of $P$. Thus $P$ passes through $F_{2}$, using two of the exit vertices of $F_{2}$. 
However, by Lemma \ref{hypo2}(b) such a path cannot contain all the vertices of $F_{2}$. We now show that $G+uv$ is traceable for all nonadjacent vertices $u$ and $v$ in $G$.\newline \textbf{Case 1.} $u,v\in F_{i}$; $i\in \{1,2,3\}$.\newline Without loss of generality consider $i=2$. By Lemma \ref{mnt1} there is a hamiltonian path in $F_{2}+uv$ with endvertices two of $a_{2}$, $b_{2}$ and $c_{2}$.\newline \textit{Subcase (i).} Suppose the endvertices are $a_{2}$ and $c_{2}$. (A similar proof holds for $a_{2}$ and $b_{2}$.) By using Lemma \ref{hypo2}(a) we obtain the hamiltonian path \begin{equation*} P_{G+uv}=P_{F_{1}}(-,a_{1})xP_{F_{2}+uv}(a_{2},c_{2})P_{F_{3}}(b_{3},-). \end{equation*} \textit{Subcase (ii).} Suppose the endvertices are $b_{2}$ and $c_{2}$. By using Lemma \ref{hypo2}(c) we obtain the hamiltonian path \begin{equation*} P_{G+uv}=a_{1}xP_{F_{3}-c_{3}}(a_{3},b_{3})P_{F_{2}+uv}(c_{2},b_{2})P_{F_{1}-a_{1}}(c_{1},b_{1})c_{3}. \end{equation*} \textbf{Case 2.} $u\in \{a_{i},b_{i},c_{i}\}$ and $v\in \{a_{j},b_{j},c_{j}\}$; $i,j\in \{1,2,3\}$; $i\neq j $.\newline Without loss of generality we choose $i=2$ and $j=3$. By using Lemmas \ref{hypo2}(a) and (c) we find a hamiltonian path $P_{G+uv}$ in $G+uv$. All subcases can be reduced to the following:\newline \textit{Subcase (i).} $u=a_{2},v=a_{3}$. \begin{equation*} P_{G+uv}=a_{2}a_{3}xP_{F_{1}-c_{1}}(a_{1},b_{1})P_{F_{3}-a_{3}}(c_{3},b_{3})P_{F_{2}-a_{2}}(c_{2},b_{2})c_{1}. \end{equation*} \textit{Subcase (ii).} $u=a_{2},v=b_{3}$. \begin{equation*} P_{G+uv}=c_{2}b_{3}P_{F_{2}-c_{2}}(a_{2},b_{2})P_{F_{1}-b_{1}}(c_{1},a_{1})xP_{F_{3}-b_{3}}(a_{3},c_{3})b_{1}. \end{equation*} \textit{Subcase (iii).} $u=a_{2},v=c_{3}$. \begin{equation*} P_{G+uv}=b_{1}c_{3}P_{F_{2}-c_{2}}(a_{2},b_{2})P_{F_{1}-b_{1}}(c_{1},a_{1})xP_{F_{3}-c_{3}}(a_{3},b_{3})c_{2}. \end{equation*} \textit{Subcase (iv).} $u=b_{2},v=b_{3}$. \begin{equation*} P_{G+uv}=c_{2}b_{3}P_{F_{2}-c_{2}}(b_{2},a_{2})xP_{F_{3}-b_{3}}(a_{3},c_{3})P_{F_{1}}(b_{1},-).
\end{equation*} \textit{Subcase (v).} $u=b_{2},v=c_{3}$. \begin{equation*} P_{G+uv}=c_{1}b_{2}c_{3}P_{F_{1}-c_{1}}(b_{1},a_{1})xP_{F_{2}-b_{2}}(a_{2},c_{2})P_{F_{3}-c_{3}}(b_{3},a_{3}). \end{equation*} \textbf{Case 3.} $u\in F_{i}-\{a_{i},b_{i},c_{i}\}$ and $v\in F_{j}$; $i,j\in \{1,2,3\}$; $i\neq j$.\newline Without loss of generality we choose $i=2$ and $j=3$. Let $F_{2}^{\ast}$ be the graph obtained from $G$ by contracting $G-V(F_2)$ to a single vertex $z_{2}^{\ast}$. Then $F_{2}^{\ast}$ is isomorphic to $H_2$ and hence, by Condition \textbf{(C)}, $F_{2}^{\ast}+uz_{2}^{\ast}$ has a hamiltonian cycle containing the path $uz_{2}^{\ast}a_2$. Thus $F_2$ has a hamiltonian path with endvertices $u$ and $a_{2}$. Using this fact and Lemma \ref{hypo2}(a) we construct the hamiltonian path \begin{equation*} P_{G+uv}=P_{F_{3}}(-,v)P_{F_{2}}(u,a_{2})xP_{F_{1}}(a_{1},-). \end{equation*} \textbf{Case 4.} $u=x$ and $v\in F_{i}$; $i\in \{1,2,3\}$.\newline Without loss of generality we choose $i=2$.\newline \textit{Subcase (i).} $v\in \{b_{2},c_{2}\}$.\newline Consider $v=b_{2}.$ (The case $v=c_{2}$ follows similarly.) By using Lemmas \ref{hypo2}(a) and (c) we obtain the hamiltonian path \begin{equation*} P_{G+uv}=P_{F_{3}}(-,b_{3})P_{F_{2}-b_{2}}(c_{2},a_{2})xb_{2}P_{F_{1}}(c_{1},-). \end{equation*} \textit{Subcase (ii).} $v\in F_{2}-\{a_{2},b_{2},c_{2}\}$.\newline According to Condition \textbf{(C)} and an argument similar to that in Case 3, there is a hamiltonian path in $F_{2}$ with endvertices $v$ and $d$, where $d\in \{b_{2},c_{2}\}$. Suppose $d=b_{2}$. (A similar proof holds for $d=c_{2}$.) Using this fact and Lemma \ref{hypo2}(a) we construct the hamiltonian path \begin{equation*} P_{G+uv}=P_{F_{3}}(-,a_{3})xP_{F_{2}}(v,b_{2})P_{F_{1}}(c_{1},-). \end{equation*} \end{proof} The Petersen graph ($n=10$), the Coxeter graph ($n=28$) and the Isaacs' snarks $J_{k}$ ($n=4k$) for odd $k\geq 5$ are all cubic MHH graphs (see \cite{bondy}, \cite{ce}).
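As a quick independent sanity check (our own sketch, not part of the paper), the MHH property of the Petersen graph can be verified by a backtracking search: the graph is nonhamiltonian, every vertex-deleted subgraph is hamiltonian, and adding any missing edge creates a hamiltonian cycle.

```python
# Brute-force sanity check (ours): the Petersen graph is hypohamiltonian
# and maximal nonhamiltonian, hence MHH.
from itertools import combinations

# Outer 5-cycle, five spokes, inner pentagram.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9),
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5)]

def hamiltonian(vertices, edge_list):
    """Backtracking search for a hamiltonian cycle on the given vertex set."""
    vs = sorted(vertices)
    adj = {v: set() for v in vs}
    for u, w in edge_list:
        if u in adj and w in adj:
            adj[u].add(w)
            adj[w].add(u)
    start, n = vs[0], len(vs)

    def extend(path, seen):
        if len(path) == n:
            return start in adj[path[-1]]  # close the cycle
        return any(extend(path + [w], seen | {w})
                   for w in adj[path[-1]] if w not in seen)

    return extend([start], {start})

petersen = set(range(10))
non_edges = [e for e in combinations(range(10), 2)
             if e not in edges and (e[1], e[0]) not in edges]

assert not hamiltonian(petersen, edges)                            # nonhamiltonian
assert all(hamiltonian(petersen - {v}, edges) for v in petersen)   # hypohamiltonian
assert all(hamiltonian(petersen, edges + [e]) for e in non_edges)  # MNH
print("Petersen graph is MHH")
```

The same routine could in principle be run on the order-22 snark or the Coxeter graph, though the exhaustive search grows quickly with order.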
We determined, by using the Graph Manipulation Package developed by Siqinfu and Sheng Bau*, that a snark of order $22$, reported by Chisala \cite{chisala}, is also MHH. The Petersen graph, the snark of order 22 and the Coxeter graph are shown in Fig.\ 4. \[ \includegraphics[scale=0.6]{petersen1.eps} \ \ \ \ \ \includegraphics[scale=0.5]{snark22.eps} \ \ \ \ \ \includegraphics[scale=0.4]{cox2.eps} \] \[ \text{Fig.\ 4} \] All the cubic MHH graphs mentioned above satisfy condition \textbf{(C)}. In fact, it follows from Theorem 10 in \cite{ce} that the Isaacs' snarks satisfy a stronger condition, namely that if $v\notin N_H(z)$ then, for \textit{every} $u\in N_H(z)$ there exists a hamiltonian cycle in $H+uv$ containing $uz$, and this condition holds for all $z\in V(H)$. We determined, again by using the Graph Manipulation Package, that each of the graphs shown in Fig.\ 4 also satisfies this extended condition for the specified vertex $z$. The programme allows one to sketch a graph $G$ on the computer screen by placing vertices and adding edges. On request the programme will either draw in a hamiltonian cycle or state that the graph is non-hamiltonian. We tested to see if each of the above-mentioned MNH graphs $G$ satisfied the condition \textbf{(C)} by considering symmetry and adding appropriate edges and noting the structure of the hamiltonian cycle drawn in. Thus, by using various combinations of these MHH graphs, we can produce cubic MNT\ graphs of order \begin{equation*} n=\left\{ \begin{tabular}{ll} $8p$ & $p\geq 5$ \\ $8p+2$ & $p\geq 6$ \\ $8p+4$ & $p=3,p\geq 6$** \\ $8p+6$ & $p\geq 4.$ \end{tabular} \right. \end{equation*} Thus $g_{2}(n)= \frac{3n}{2}$ for all the values of $n$ stated above. \textbf{Remark: }Our construction yields MNT graphs of girths 5, 6 and 7. We do not know whether MNT graphs of girth greater than 7 exist.
\textbf{Acknowledgements} \newline * We wish to thank Sheng Bau for allowing us the use of the programme, Graph Manipulation Package Version 1.0 (1996), Siqinfu and Sheng Bau, Inner Mongolia Institute of Finance and Economics, Huhhot, CN-010051, People's Republic of China. \newline ** We wish to thank the referee who brought to our attention the infinite family $K_{4}[S,J_{k},J_{k'}]$ of MNT graphs, where $S$ is the snark of order $22$ and $J_{k}$ and $J_{k'}$ are Isaacs' snarks, which gives $n=8p+4$ for $p \ge 7$. \end{document}
\begin{document} \title{ON THE EXISTENCE OF A SHORT PIVOTING SEQUENCE FOR A LINEAR PROGRAM} \markboth{On decomposition of a matrix by convex combinations}{} \def\footAF{Optimization and Systems Theory, Department of Mathematics, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden ({\tt [email protected]}). Research supported by the Swedish Research Council (VR).} \def\footFW{Optimization and Systems Theory, Department of Mathematics, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden ({\tt [email protected]}).} \author{Anders FORSGREN\thanks{\footAF} \and Fei WANG\thanks{\footFW}} \maketitle\thispagestyle{empty} \begin{abstract} Pivoting methods are of vital importance for linear programming, the simplex method being by far the most well known. In this paper, a primal-dual pair of linear programs in canonical form is considered. We show that there exists a sequence of pivots, whose length is bounded by the minimum dimension of the constraint matrix, such that each pivot enlarges a nonsingular submatrix of the constraint matrix by one row and one column. Solving a pair of linear equations for each of these submatrices generates a sequence of optimal solutions to primal-dual pairs of linear programs of increasing dimensions, originating at the origin. The optimal solutions to the original primal-dual pair of linear programs are obtained in the final step. This is only an existence result; we have not been able to devise rules, based on properties of the problem, for generating the sequence. The result is obtained by a decomposition of the final basis matrix. \end{abstract} \section{Introduction} Pivoting methods for linear programming are based on solving a sequence of linear systems of equations determined by a square submatrix of the constraint matrix, which typically changes by one column and/or one row between iterations.
The simplex method~\cite{Dan63} is probably by far the most well known, but we also want to mention criss-cross methods~\cite{Ter85,FT97,BonM03} and Lemke's method~\cite{lemke1954}. We consider a primal-dual pair of linear programs in canonical form \[ (PLP) \quad \begin{array}{ll} \displaystyle\mathop{\mathrm{minimize}}_{x\in\mathbb{R}^n} & c^T\! x \\[4pt] \mbox{subject to} & A x \ge b, \\ & x \ge 0, \end{array} \quad\quad (DLP) \quad \begin{array}{ll} \displaystyle\mathop{\mathrm{maximize}}_{y\in\mathbb{R}^m} & b^T\! y \\[4pt] \mbox{subject to} & A^T\! y \le c, \\ & y \ge 0. \end{array} \] We show that if $(PLP)$ and $(DLP)$ are both feasible, then there exists a nonnegative integer $r$, with $r\le\min\{m,n\}$, and a sequence of pivots $(i_k,j_k)$, $k=1,\dots,r$, which generate sets of row indices $R_k=\cup_{l=1}^k\{i_l\}$ and column indices $C_k=\cup_{l=1}^k\{j_l\}$, with $i_{k+1}\in\{1,\dots,m\}\backslash R_k$ and $j_{k+1}\in\{1,\dots,n\}\backslash C_k$, such that $A_{R_kC_k}$ is nonsingular, and if $x_{C_k}$ and $y_{R_k}$ are computed from \[ A_{R_kC_k} x_{C_k} = b_{R_k}, \quad A_{R_kC_k}^T y_{R_k} = c_{C_k}, \] they are nonnegative, and therefore optimal to $(PLP_k)$ and $(DLP_k)$ respectively, where \[ (PLP_k) \quad \begin{array}{ll} \displaystyle\mathop{\mathrm{minimize}}_{x_{C_k}\in\mathbb{R}^k} & c_{C_k}^T x_{C_k} \\[4pt] \mbox{subject to} & A_{R_kC_k} x_{C_k} \ge b_{R_k}, \\ & x_{C_k} \ge 0, \end{array} \quad\quad (DLP_k) \quad \begin{array}{ll}
\displaystyle\mathop{\mathrm{maximize}}_{y_{R_k}\in\mathbb{R}^k} & b_{R_k}^T y_{R_k} \\[4pt] \mbox{subject to} & A_{R_kC_k}^T y_{R_k} \le c_{C_k}, \\ & y_{R_k} \ge 0. \end{array} \] Finally, $x_{C_r}$ and $y_{R_r}$ are not only optimal to $(PLP_r)$ and $(DLP_r)$; together with $x_j=0$ for $j\in\{1,\dots,n\}\backslash C_r$ and $y_i=0$ for $i\in\{1,\dots,m\}\backslash R_r$, they also give optimal solutions to $(PLP)$ and $(DLP)$ respectively. We refer to such a sequence of pivots as a \emph{short} sequence of pivots. The existence of this short sequence of pivots is shown by a decomposition of the optimal basis matrix. We also give a related result for a slightly more structured linear program, namely a min-max problem for a given $m\times n$ matrix $M$, formulated as the following primal-dual pair of linear programs \begin{displaymath} (P)\quad \begin{array}{ll} \displaystyle\mathop{\mathrm{minimize}}_{u\in\mathbb{R}^n,\,\alpha\in\mathbb{R}} & \alpha \\ \mbox{subject to} & Mu + e \alpha \ge 0, \\ & e^T\! u = 1, \\ & u\ge 0, \end{array} \qquad\quad (D)\quad \begin{array}{ll} \displaystyle\mathop{\mathrm{maximize}}_{v\in\mathbb{R}^m,\,\beta\in\mathbb{R}} & \beta \\ \mbox{subject to} & M^T\! v +e \beta \le 0, \\ & e^T\! v = 1, \\ & v \ge 0. \end{array} \end{displaymath} For the general linear program, we cannot relate the short sequence of pivots to monotonicity in objective function value, whereas this can be done for the min-max problem. The difference is that $(P)$ and $(D)$ are both defined on the unit simplex, and they are both always feasible. The results are straightforward, but to the best of our knowledge, our result on the existence of a sequence of pivots of length at most $\min\{m,n\}$ improves on what is known.
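To make the quantities above concrete, the following Python sketch computes the primal and dual basic solutions for a candidate basis and checks the optimality conditions for $(PLP)$ and $(DLP)$. The instance ($A$, $b$, $c$ and the basis indices) is made-up illustrative data, not taken from any result in this paper.

```python
import numpy as np

# Hypothetical data for (PLP)/(DLP); an illustration only.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([3.0, 5.0, 2.0])

def basic_solutions(A, b, c, R, C):
    """Primal and dual basic solutions for the basis (R, C):
    solve A_{RC} x_C = b_R and A_{RC}^T y_R = c_C, zero elsewhere."""
    B = A[np.ix_(R, C)]
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    x[C] = np.linalg.solve(B, b[R])
    y[R] = np.linalg.solve(B.T, c[C])
    return x, y

def is_optimal(A, b, c, x, y, tol=1e-9):
    """Joint primal/dual feasibility certifies optimality by weak duality."""
    return (np.all(A @ x >= b - tol) and np.all(x >= -tol)
            and np.all(A.T @ y <= c + tol) and np.all(y >= -tol))

x, y = basic_solutions(A, b, c, [0, 1], [0, 1])
assert is_optimal(A, b, c, x, y)
assert abs(c @ x - b @ y) < 1e-9  # equal objective values at optimality
```

For this data the full $2\times 2$ basis already certifies optimality; the existence result concerns finding an ordering of the basis rows and columns such that every leading subsystem is optimal for its own subproblem.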
Fukuda, Lüthi and Namiki~\cite{FLN97} and Fukuda and Terlaky~\cite{FT99} show that there is a sequence of pivots of length bounded by $m+n$ leading to the optimal solution. It should be pointed out that our existence result does not automatically give a method with better worst-case complexity than enumeration of all potential basis matrices. We have not been able to use the information to derive rules, based on global information, that make use of the short sequence of pivots. \section{Existence of a short pivoting sequence for a linear program} We will refer to a nonsingular square submatrix of $A$ as a \emph{basis matrix}. If $R_p$ and $C_p$ denote the row and column indices of $A$ that define the basis matrix, the basis matrix is referred to as $A_{R_pC_p}$. If $R_z$ and $C_z$ denote the remaining row and column indices, the primal and dual \emph{basic solutions} associated with $A_{R_pC_p}$ are uniquely given by \begin{dbleqnarray*} A_{R_pC_p} x_{C_p} & = & b_{R_p}, & A_{R_pC_p}^T y_{R_p} & = & c_{C_p}, \\ x_{C_z} & = & 0, & y_{R_z} & = & 0.
\end{dbleqnarray*} The primal-dual pair of basic solutions given by the basis matrix is optimal to $(PLP)$ and $(DLP)$ respectively if and only if the solutions are feasible to $(PLP)$ and $(DLP)$ respectively, i.e., \begin{dbleqnarray*} A_{R_zC_p} x_{C_p} & \ge & b_{R_z}, & A_{R_pC_z}^T y_{R_p} & \le & c_{C_z}, \\ x_{C_p} & \ge & 0, & y_{R_p} & \ge & 0. \end{dbleqnarray*} If $(PLP)$ and $(DLP)$ are both feasible, then there exists at least one basis matrix which gives a primal-dual optimal pair of basic solutions. This well-known result is summarized in the following lemma. \begin{lemma}[Existence of optimal basic feasible solution]\label{lem:partitionLP} Assume that both $(PLP)$ and $(DLP)$ are feasible. Then, there is a partitioning of the row indices of $A$ into two sets $R_p$ and $R_z$, and a partitioning of the column indices of $A$ into two sets $C_p$ and $C_z$, so that $|R_p|=|C_p|$, and associated with the resulting matrix \[ \begin{pmatrix} A_{R_p C_p} & A_{R_p C_z} \\ A_{R_z C_p} & A_{R_z C_z} \end{pmatrix}, \] the submatrix $A_{R_p C_p}$ is nonsingular, and there are vectors $x$ and $y$ for which \begin{dbleqnarray*} A_{R_pC_p} x_{C_p} & = & b_{R_p}, & A_{R_pC_p}^T y_{R_p} & = & c_{C_p},
\\ A_{R_zC_p} x_{C_p} & \ge & b_{R_z}, & A_{R_pC_z}^T y_{R_p} & \le & c_{C_z}, \\ x_{C_p} & \ge & 0, & y_{R_p} & \ge & 0, \\ x_{C_z} & = & 0, & y_{R_z} & = & 0, \end{dbleqnarray*} hold. These vectors $x$ and $y$ are optimal solutions to $(PLP)$ and $(DLP)$ respectively. \end{lemma} \begin{proof} Proofs are typically given for the standard form of a linear program, e.g., \cite[Theorem 3.4]{MR3100219}. Standard form can be achieved by adding slack variables to $(PLP)$, from which the result follows. \end{proof} Our concern is to decompose the basis matrix by eliminating one row and one column at a time. The following lemma gives the basis for such an elimination of row $i$ and column $j$ for a given set of row indices $R$ and column indices $C$. \begin{lemma}\label{lem:invLP} Consider problems $(PLP)$ and $(DLP)$. Let $R$ denote a set of row indices of $A$ and let $C$ denote a set of column indices of $A$. Assume that \begin{equation}\label{eq:deltaxy} A_{RC} x_C = b_R, \quad A_{RC}^T y_R = c_C, \quad A_{RC} \Delta x_C = e_i, \quad A_{RC}^T \Delta y_R = e_j, \end{equation} where $e_i$ and $e_j$ are the $i$th and $j$th unit vectors of dimensions $|R|$ and $|C|$ respectively. Then, \begin{subequations} \begin{eqnarray} c_C^T x_C & = & b_R^T y_R, \\ c_C^T \Delta x_C & = & e_i^T y_R, \\ b_R^T \Delta y_R & = & e_j^T x_C, \\ e_j^T \Delta x_C & = & e_i^T \Delta y_R. \end{eqnarray} \end{subequations} In addition, assume that $e_j^T \Delta x_C = e_i^T \Delta y_R\ne 0$.
Then there are unique scalars $s_i$ and $t_j$ such that \begin{equation}\label{eqn-zericomp} e_j^T (x_C+ s_i \Delta x_C) = 0, \quad e_i^T (y_R+ t_j \Delta y_R) = 0, \end{equation} given by \begin{equation}\label{eqn-step} s_i=-\frac{e_j^T x_C}{e_j^T \Delta x_C}, \quad t_j = -\frac{e_i^T y_R}{e_i^T \Delta y_R}. \end{equation} Furthermore, \begin{equation}\label{eqn-costsame} c_C^T ( x_C + s_i \Delta x_C ) = b_R^T ( y_R + t_j \Delta y_R ). \end{equation} \end{lemma} \begin{proof} We obtain \begin{eqnarray*} c_C^T x_C & = & y_R^T A_{RC} x_C = y_R^T b_R = b_R^T y_R, \\ c_C^T \Delta x_C & = & y_R^T A_{RC} \Delta x_C = y_R^T e_i = e_i^T y_R, \\ b_R^T \Delta y_R & = & x_C^T A_{RC}^T \Delta y_R = x_C^T e_j = e_j^T x_C, \\ e_j^T \Delta x_C & = & \Delta y_R^T A_{RC} \Delta x_C = \Delta y_R^T e_i = e_i^T \Delta y_R. \end{eqnarray*} To show the final results, if $e_j^T \Delta x_C = e_i^T \Delta y_R\ne 0$, then the values of $s_i$ and $t_j$ given by (\ref{eqn-step}) follow immediately.
From these values of $s_i$ and $t_j$, we obtain \begin{eqnarray*} 0 & = & e_j^T (x_C+ s_i \Delta x_C) = \Delta y_R^T A_{RC} (x_C+ s_i \Delta x_C) = b_R^T \Delta y_R + s_i \Delta y_R^T A_{RC} \Delta x_C, \\ 0 & = & e_i^T (y_R+ t_j \Delta y_R) = \Delta x_C^T A_{RC}^T (y_R+ t_j \Delta y_R) = c_C^T \Delta x_C + t_j \Delta x_C^T A_{RC}^T \Delta y_R. \end{eqnarray*} A combination of these equations gives \begin{equation}\label{eqn-costdiffsame} t_j b_R^T \Delta y_R = s_i c_C^T \Delta x_C = - s_i t_j \Delta x_C^T A_{RC}^T \Delta y_R. \end{equation} A combination of $b_R^T y_R = c_C^T x_C$ and (\ref{eqn-costdiffsame}) gives (\ref{eqn-costsame}), as required. \end{proof} This result may now be used to reduce the dimension of the basis matrix by one row and one column, while maintaining primal and dual optimality for the reduced problem. \begin{lemma}\label{lem-reducedimLP} Consider problems $(PLP)$ and $(DLP)$. Let $R_k$ denote a set of row indices of $A$ and let $C_k$ denote a set of column indices of $A$ such that $|R_k|=|C_k|=k$, with $k\ge 2$.
Assume that $A_{R_kC_k}$ is nonsingular, and assume that \begin{displaymath} A_{R_kC_k} x_{C_k} = b_{R_k}, \qquad A_{R_kC_k}^T y_{R_k} = c_{C_k}, \end{displaymath} where $x_{C_k} \ge 0$ and $y_{R_k} \ge 0$. Then, there is a row index $i_k$, with $i_k\in R_k$, and a column index $j_k$, with $j_k\in C_k$, such that $A_{R_{k-1}C_{k-1}}$ is nonsingular, where $R_{k-1}=R_k\backslash\{i_k\}$ and $C_{k-1}=C_k\backslash\{j_k\}$. Furthermore, it holds that \begin{displaymath} A_{R_{k-1}C_{k-1}} x_{C_{k-1}} = b_{R_{k-1}}, \qquad A_{R_{k-1}C_{k-1}}^T y_{R_{k-1}} = c_{C_{k-1}}, \end{displaymath} for $x_{C_{k-1}}\ge0$, $y_{R_{k-1}}\ge0$. \end{lemma} \begin{proof} We may apply Lemma~\ref{lem:invLP} for $R=R_k$ and $C=C_k$. The quantities $x_{C_k}$, $y_{R_k}$, $\Delta x_{C_k}$ and $\Delta y_{R_k}$ are well defined since $A_{R_kC_k}$ is nonsingular. First, assume that $y_i=0$ for some $i\in R_k$. Let $i_k=i$. Compute $\Delta x_{C_k}$ as in Lemma~\ref{lem:invLP} for this $i$.
If $\Delta x_{C_k}\not\ge 0$, we may compute the most limiting positive step for maintaining nonnegativity of $x_{C_k} + s \Delta x_{C_k}$, i.e., \begin{equation}\label{eqn-poss} s=\min_{j\in C_k:\,e_j^T \Delta x_{C_k} < 0}\frac{e_j^T x_{C_k}}{-e_j^T \Delta x_{C_k}}. \end{equation} If $\Delta x_{C_k}\not\le 0$, we may compute the most limiting negative step for maintaining nonnegativity of $x_{C_k} + s \Delta x_{C_k}$, i.e., \begin{equation}\label{eqn-negs} s=\max_{j\in C_k:\,e_j^T \Delta x_{C_k} > 0}\frac{e_j^T x_{C_k}}{-e_j^T \Delta x_{C_k}}. \end{equation} Since $\Delta x_{C_k}\ne 0$, at least one of (\ref{eqn-poss}) and (\ref{eqn-negs}) is well defined. Pick one which is well defined, let $j_k$ be a minimizing index and let $s_i$ be the corresponding $s$-value. If $R_{k-1}=R_k\backslash \{i_k\}$ and $C_{k-1}=C_k\backslash \{j_k\}$, then $A_{R_{k-1}C_{k-1}}$ is nonsingular, since $\Delta x_{C_k}$ is, up to a scalar, the unique vector in the nullspace of $A_{R_{k-1}C_k}$, and $e_{j_k}^T\Delta x_{C_k} \ne 0$ by (\ref{eqn-poss}) and (\ref{eqn-negs}). Now assume that $x_j=0$ for some $j\in C_k$. This case is completely analogous to the case $y_i=0$. Let $j_k=j$.
Compute $\Delta y_{R_k}$ as in Lemma~\ref{lem:invLP} for this $j$. If $\Delta y_{R_k}\not\ge 0$, we may compute the most limiting positive step for maintaining nonnegativity of $y_{R_k} + t \Delta y_{R_k}$, i.e., \begin{equation}\label{eqn-post} t=\min_{i\in R_k:\,e_i^T \Delta y_{R_k} < 0}\frac{e_i^T y_{R_k}}{-e_i^T \Delta y_{R_k}}. \end{equation} If $\Delta y_{R_k}\not\le 0$, we may compute the most limiting negative step for maintaining nonnegativity of $y_{R_k} + t \Delta y_{R_k}$, i.e., \begin{equation}\label{eqn-negt} t=\max_{i\in R_k:\,e_i^T \Delta y_{R_k} > 0}\frac{e_i^T y_{R_k}}{-e_i^T \Delta y_{R_k}}. \end{equation} Since $\Delta y_{R_k}\ne 0$, at least one of (\ref{eqn-post}) and (\ref{eqn-negt}) is well defined. Pick one which is well defined, let $i_k$ be a minimizing index and let $t_j$ be the corresponding $t$-value. If $R_{k-1}=R_k\backslash \{i_k\}$ and $C_{k-1}=C_k\backslash \{j_k\}$, then $A_{R_{k-1}C_{k-1}}$ is nonsingular, since $\Delta y_{R_k}$ is, up to a scalar, the unique vector in the nullspace of $A_{R_kC_{k-1}}^T$, and $e_{i_k}^T\Delta y_{R_k} \ne 0$ by (\ref{eqn-post}) and (\ref{eqn-negt}).
Finally, we consider the case where $x_{C_k}>0$ and $y_{R_k}>0$. Then, for any pair $i$, $j$ with $i\in R_k$ and $j\in C_k$, Lemma~\ref{lem:invLP} gives $c_{C_k}^T \Delta x_{C_k}> 0$ since $y_{R_k}>0$, and $b_{R_k}^T \Delta y_{R_k}> 0$ since $x_{C_k}>0$. We may now pick an $i\in R_k$ and compute $\Delta x_{C_k}$ as in~\eqref{eq:deltaxy}. We must have $\Delta x_{C_k}\ne 0$, since $e_i\ne 0$. We may now compute the most limiting step from (\ref{eqn-poss}) or (\ref{eqn-negs}), at least one of which has to be well defined. Assume first the former. Let $s_i$ denote the step and let $j$ be an index for which the minimum is attained, so that $e_j^T(x_{C_k}+s_i \Delta x_{C_k})= 0$. By computing $\Delta y_{R_k}$ for this $j$, there is an associated positive $t_j$ such that $e_i^T(y_{R_k}+t_j \Delta y_{R_k})= 0$ by Lemma~\ref{lem:invLP}. If $y_{R_k}+t_j \Delta y_{R_k} \ge 0$, we are done. Otherwise, we let $t_j$ be the maximum positive step such that $y_{R_k}+t_j \Delta y_{R_k} \ge 0$. By construction, this must give a strict reduction of $t_j$, but $t_j$ remains strictly positive since $y_{R_k}>0$. We then conversely find the associated step $s_i$.
This process may be repeated a finite number of times for pairs $i$, $j$ until $x_{C_k}+s_i \Delta x_{C_k} \ge 0$ with $e_j^T(x_{C_k}+s_i \Delta x_{C_k})= 0$ and $y_{R_k}+t_j \Delta y_{R_k} \ge 0$ with $e_i^T(y_{R_k}+t_j \Delta y_{R_k}) = 0$. Note that (\ref{eqn-costsame}) of Lemma~\ref{lem:invLP} implies that each step gives a strict reduction in objective function value, since one of the $s_i$ or $t_j$ values is reduced. Let $i_k=i$ and $j_k=j$. If $R_{k-1}=R_k\backslash \{i_k\}$ and $C_{k-1}=C_k\backslash \{j_k\}$, then $A_{R_{k-1}C_{k-1}}$ is nonsingular, since $\Delta x_{C_k}$ is, up to a scalar, the unique vector in the nullspace of $A_{R_{k-1}C_k}$, and $e_{j_k}^T\Delta x_{C_k} \ne 0$. If (\ref{eqn-negs}) is used instead of (\ref{eqn-poss}), the argument is analogous, but $s_i$ and $t_j$ are then negative, increasing towards zero. \end{proof} The optimality conditions given by Lemma~\ref{lem:partitionLP} imply the existence of a nonsingular submatrix $A_{R_pC_p}$. Recursive application of Lemma~\ref{lem-reducedimLP} gives a decomposition of this matrix into square nonsingular submatrices whose dimensions shrink by one row and one column at a time, corresponding to primal-dual optimal pairs of $(PLP_k)$ and $(DLP_k)$ respectively, for $k=r,r-1,\dots,1$.
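The elimination step above rests on a standard ratio test: given a direction obtained from the basis system, the most limiting step drives exactly one nonnegative component to zero. A minimal Python sketch of the positive-step case, with made-up data standing in for $x_{C_k}$ and the direction $\Delta x_{C_k}$:

```python
import numpy as np

def limiting_step(x, dx):
    """Most limiting positive step s and blocking index j such that
    x + s*dx >= 0 with component j driven to zero; assumes x >= 0 and
    dx has at least one negative component."""
    mask = dx < 0
    ratios = np.where(mask, x / np.where(mask, -dx, 1.0), np.inf)
    j = int(np.argmin(ratios))
    return float(ratios[j]), j

# Made-up data: x plays the role of x_{C_k}, dx that of a direction
# obtained from A_{R_k C_k} dx = e_i.
x = np.array([1.4, 1.2])
dx = np.array([0.6, -0.2])
s, j = limiting_step(x, dx)
assert abs((x + s * dx)[j]) < 1e-12   # blocking component hits zero
assert np.all(x + s * dx >= -1e-12)   # nonnegativity is maintained
```

The negative-step variant and the dual test on $y_{R_k}$ are symmetric, with the roles of the sign conditions reversed.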
By reversing this argument, there must exist a sequence of $r$ pivots $(i_1,j_1)$, $(i_2,j_2)$, $\dots$, $(i_r,j_r)$, such that at stage $k$, optimal solutions to $(PLP_k)$ and $(DLP_k)$ are created, and at stage $r$, optimal solutions to $(PLP)$ and $(DLP)$ are found. This is summarized in the following theorem. \begin{theorem}\label{thm-LP} Assume that problems $(PLP)$ and $(DLP)$ both have feasible solutions. For the optimality conditions given by Lemma~\ref{lem:partitionLP}, let $r=|R_p|=|C_p|$. Then, $r\le\min\{m,n\}$ and there are pairs of row and column indices $(i_k,j_k)$, $k=1,\dots,r$, which generate sets of row indices $R_1=\{i_1\}$, $R_{k+1}=R_k\cup\{i_{k+1}\}$, and sets of column indices $C_1=\{j_1\}$, $C_{k+1}=C_k\cup\{j_{k+1}\}$, with $i_{k+1}\in\{1,\dots,m\}\backslash R_k$ and $j_{k+1}\in\{1,\dots,n\}\backslash C_k$, such that for each $k$, $A_{R_kC_k}$ is nonsingular, and $x_{C_k}$ and $y_{R_k}$ computed from \[ A_{R_kC_k} x_{C_k} = b_{R_k}, \quad A_{R_kC_k}^T y_{R_k} = c_{C_k}, \] are optimal to $(PLP_k)$ and $(DLP_k)$ respectively. In addition, $x_{C_r}$ and $y_{R_r}$ together with $x_j=0$ for $j\in\{1,\dots,n\}\backslash C_r$ and $y_i=0$ for $i\in\{1,\dots,m\}\backslash R_r$ are optimal to $(PLP)$ and $(DLP)$ respectively. \end{theorem} \begin{proof} For $r=0$ or $r=1$, the result is immediate from the optimality conditions of Lemma~\ref{lem:partitionLP}. For $r\ge 2$, let $R_r=R_p$ and $C_r=C_p$, and repeatedly apply Lemma~\ref{lem-reducedimLP} for $k=r,r-1,r-2,\dots,2$.
This gives an ordering of the indices of $R_p$ and $C_p$ as $i_r$, $i_{r-1}$, \dots, $i_1$ and $j_r$, $j_{r-1}$, \dots, $j_1$, such that the corresponding $x_{C_k}$ and $y_{R_k}$ are optimal to $(PLP_k)$ and $(DLP_k)$ respectively for $k=1,\dots,r$. In addition, $x_{C_r}$ and $y_{R_r}$ are optimal to $(PLP)$ and $(DLP)$ respectively. If the ordering is reversed, so that $k=1,\dots,r$, the result follows. \end{proof} We note that Theorem~\ref{thm-LP} shows the existence of a sequence of pivots that creates the optimal basis in $r$ steps, where $r\le\min\{m,n\}$. We refer to such a sequence of pivots as a \emph{short} sequence of pivots. This, however, does not constitute an algorithm based on global information: we have no global information on how to create the sequence of pivots. There is a straightforward method given by enumerating all potential sequences of pivots that generate primal-dual optimal pairs of linear programs of increasing dimension. However, we have not been able to give any useful bound on the potential number of bases. We also note that the short sequence of pivots does not explicitly take the objective function value into account, as the simplex method does. Therefore, we cannot ensure generating a short sequence of pivots by making use of pivots associated with primal simplex only or dual simplex only. \section{Decomposing a matrix by convex combinations} We also consider the related problem of decomposing a given $m\times n$ matrix $M$ by convex combinations, formulated as the following primal-dual pair of linear programs \begin{displaymath} (P)\quad \begin{array}{ll} \displaystyle\mathop{\mathrm{minimize}}_{u\in\mathbb{R}^n,\,\alpha\in\mathbb{R}} & \alpha \\ \mbox{subject to} & Mu + e \alpha \ge 0, \\ & e^T\!
u = 1, \\ & u\ge 0, \end{array} \qquad\quad (D)\quad \begin{array}{ll} \displaystyle\mathop{\mathrm{maximize}}_{v\in\mathbb{R}^m,\,\beta\in\mathbb{R}} & \beta \\ \mbox{subject to} & M^T\! v +e \beta \le 0, \\ & e^T\! v = 1, \\ & v \ge 0. \end{array} \end{displaymath} Here, and throughout, $e$ denotes the vector of ones of the appropriate dimension. Problems $(P)$ and $(D)$ have a joint optimal value $\gamma$ by strong duality for linear programming. The difference to the general linear program is that $u$ and $v$ are defined on the unit simplex, and $(P)$ and $(D)$ are always feasible. This will enable a slightly stronger result in which monotonicity in the objective function value may be enforced. We state the results analogous to the general linear programming case, and point out the differences. \begin{lemma}[Existence of optimal basic feasible solution]\label{lem:partitionM} For a given matrix $M$, there exist a number $\gamma$ and a partitioning of the row indices of $M$ into two sets $R_p$ and $R_z$, and a partitioning of the column indices of $M$ into two sets $C_p$ and $C_z$, so that $|R_p|=|C_p|$, and associated with the resulting matrix \[ \begin{pmatrix} M_{R_p C_p} & M_{R_p C_z} & e \\ M_{R_z C_p} & M_{R_z C_z} & e \\ e^T & e^T & 0 \end{pmatrix}, \] the submatrix \[ \begin{pmatrix} M_{R_p C_p} & e \\ e^T & 0 \end{pmatrix} \] is nonsingular, and there are vectors $u$ and $v$ for which \begin{dbleqnarray*} \begin{pmatrix} M_{R_pC_p} & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} u_{C_p} \\ \gamma \end{pmatrix} & = & \begin{pmatrix} 0 \\ 1 \end{pmatrix}, & \begin{pmatrix} M_{R_pC_p}^T & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} v_{R_p} \\ \gamma \end{pmatrix} & = & \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \\ \begin{pmatrix} M_{R_zC_p} & e \end{pmatrix} \begin{pmatrix} u_{C_p} \\ \gamma \end{pmatrix} & \ge & 0, & \begin{pmatrix} M_{R_pC_z}^T & e \end{pmatrix} \begin{pmatrix} v_{R_p} \\ \gamma \end{pmatrix} & \le & 0, \\ u_{C_p} & \ge & 0, & v_{R_p} & \ge & 0, \\ u_{C_z} & = & 0, & v_{R_z} & = & 0, \end{dbleqnarray*} hold. The vectors $(u,\gamma)$ and $(v,\gamma)$ are optimal solutions to $(P)$ and $(D)$ respectively. \end{lemma} \begin{proof} This is analogous to Lemma~\ref{lem:partitionLP}. The only difference is that $\alpha$ and $\beta$ are free variables. \end{proof} \begin{lemma}\label{lem:invM} Consider problems $(P)$ and $(D)$. Let $R$ denote a set of row indices of $M$ and let $C$ denote a set of column indices of $M$. Assume that \begin{dbleqnarray*} \begin{pmatrix} M_{RC} & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} u_{C} \\ \alpha \end{pmatrix} & = & \begin{pmatrix} 0 \\ 1 \end{pmatrix}, & \begin{pmatrix} M_{RC}^T & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} v_{R} \\ \beta \end{pmatrix} & = & \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \\ \begin{pmatrix} M_{RC} & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} \Delta u_{C} \\ \Delta \alpha \end{pmatrix} & = & \begin{pmatrix} e_i \\ 0 \end{pmatrix}, & \begin{pmatrix} M_{RC}^T & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} \Delta v_{R} \\ \Delta \beta \end{pmatrix} & = & \begin{pmatrix} e_j \\ 0 \end{pmatrix}. \end{dbleqnarray*} Then, \begin{subequations} \begin{eqnarray} \alpha & = & \beta, \\ \Delta \alpha & = & e_i^T v_R, \\ \Delta \beta & = & e_j^T u_C, \\ e_j^T \Delta u_C & = & e_i^T \Delta v_R.
\end{eqnarray} \end{subequations} In addition, assume that $e_j^T \Delta u_C = e_i^T \Delta v_R\ne 0$. Then there are unique scalars $s_i$ and $t_j$ such that \[ e_j^T (u_C+ s_i \Delta u_C) = 0, \quad e_i^T (v_R+ t_j \Delta v_R) = 0, \] given by \[ s_i=-\frac{e_j^T u_C}{e_j^T \Delta u_C}, \quad t_j = -\frac{e_i^T v_R}{e_i^T \Delta v_R}. \] Furthermore, \[ \alpha + s_i \Delta \alpha = \beta + t_j \Delta \beta. \] \end{lemma} \begin{proof} This is analogous to Lemma~\ref{lem:invLP}. \end{proof} As for the general linear programming case, associated with $(P)$ and $(D)$ we will consider the linear programs \begin{displaymath} (P_k)\quad \begin{array}{ll} \displaystyle\mathop{\mathrm{minimize}}_{u_{C_k}\in\mathbb{R}^k,\,\alpha_k\in\mathbb{R}} & \alpha_k \\ \mbox{subject to} & M_{R_kC_k} u_{C_k} + e \alpha_k \ge 0, \\ & e^T\! u_{C_k} = 1, \\ & u_{C_k}\ge 0, \end{array} \qquad\quad (D_k)\quad \begin{array}{ll} \displaystyle\mathop{\mathrm{maximize}}_{v_{R_k}\in\mathbb{R}^k,\,\beta_k\in\mathbb{R}} & \beta_k \\ \mbox{subject to} & M_{R_kC_k}^T v_{R_k} +e \beta_k \le 0, \\ & e^T\! v_{R_k} = 1, \\ & v_{R_k} \ge 0, \end{array} \end{displaymath} where $R_k$ denotes a set of row indices of $M$ and $C_k$ denotes a set of column indices of $M$ such that $|R_k|=|C_k|=k$. \begin{lemma}\label{lem-reducegamma} Let $M$ be an $m\times n$ matrix.
Let $R_k$ denote a set of row indices of $M$ and let $C_k$ denote a set of column indices of $M$ such that $|R_k|=|C_k|=k$, with $k\ge 2$. Assume that
\[ \begin{pmatrix} M_{R_k C_k}^{\null} & e \\ e^T & 0 \end{pmatrix} \]
is nonsingular, and assume that
\begin{displaymath}
\begin{pmatrix} M_{R_k C_k}^{\null} & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} u_{C_k}^{\null} \\ \gamma_k \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad
\begin{pmatrix} M_{R_k C_k}^T & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} v_{R_k}^{\null} \\ \gamma_k \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix},
\end{displaymath}
where $u_{C_k}^{\null} \geq 0$ and $v_{R_k}^{\null} \geq 0$. Then, there is a row index $i_k^{(1)}$, $i_k^{(1)}\in R_k$, and a column index $j_k^{(1)}$, $j_k^{(1)}\in C_k$, such that if $R_{k-1}^{(1)}=R_k\backslash\{i_k^{(1)}\}$ and $C_{k-1}^{(1)}=C_k\backslash\{j_k^{(1)}\}$, then
\[ \begin{pmatrix} M_{R_{k-1}C_{k-1}}^{(1)} & e \\ e^T & 0 \end{pmatrix} \]
is nonsingular. Furthermore, it holds that
\[ \begin{pmatrix} M_{R_{k-1}C_{k-1}}^{(1)} & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} u_{C_{k-1}}^{(1)} \\ \gamma_{k-1}^{(1)} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad
\begin{pmatrix} (M_{R_{k-1}C_{k-1}}^{(1)})^T & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} v_{R_{k-1}}^{(1)} \\ \gamma_{k-1}^{(1)} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \]
for $u_{C_{k-1}}^{(1)}\ge 0$, $v_{R_{k-1}}^{(1)}\ge 0$ and $\gamma_{k-1}^{(1)} \leq \gamma_k^{(1)}$.
In addition, there is a row index $i_k^{(2)}$, $i_k^{(2)}\in R_k$, and a column index $j_k^{(2)}$, $j_k^{(2)}\in C_k$, with $(i_k^{(2)},j_k^{(2)})\ne (i_k^{(1)},j_k^{(1)})$, such that if $R_{k-1}^{(2)}=R_k\backslash\{i_k^{(2)}\}$ and $C_{k-1}^{(2)}=C_k\backslash\{j_k^{(2)}\}$, then
\[ \begin{pmatrix} M_{R_{k-1}C_{k-1}}^{(2)} & e \\ e^T & 0 \end{pmatrix} \]
is nonsingular. Furthermore, it holds that
\[ \begin{pmatrix} M_{R_{k-1}C_{k-1}}^{(2)} & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} u_{C_{k-1}}^{(2)} \\ \gamma_{k-1}^{(2)} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad
\begin{pmatrix} (M_{R_{k-1}C_{k-1}}^{(2)})^T & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} v_{R_{k-1}}^{(2)} \\ \gamma_{k-1}^{(2)} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \]
for $u_{C_{k-1}}^{(2)}\ge 0$, $v_{R_{k-1}}^{(2)}\ge 0$ and $\gamma_{k-1}^{(2)} \geq \gamma_k^{(2)}$.
\end{lemma}

\begin{proof}
The difference compared to Lemma~\ref{lem-reducedimLP} is that $\alpha_k$ and $\beta_k$ are free variables. Hence, there is no nonnegativity condition on them to handle. We note from Lemma~\ref{lem:invM} that
\[ \begin{pmatrix} M_{R_k C_k} & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} \Delta u_{C_k}^{\null} \\ \Delta \alpha_k \end{pmatrix} = \begin{pmatrix} e_i \\ 0 \end{pmatrix}. \]
Hence, we cannot have $\Delta u_{C_k}=0$, since $e \Delta \alpha_k=e_i$ cannot have a solution for $k\ge 2$. It follows that $\Delta u_{C_k}\ne 0$, so that $e^T\!
\Delta u_{C_k}= 0$ implies that $\Delta u_{C_k}$ must have both strictly positive and strictly negative components. The situation is analogous for $\Delta v_{R_k}$. Therefore, there is a choice of selecting $s$ negative or positive analogously to (\ref{eqn-poss}) and (\ref{eqn-negs}). Consequently, there are two different possible index pairs, one corresponding to $\Delta \gamma_k\ge 0$ and one corresponding to $\Delta \gamma_k\le 0$.
\end{proof}

\begin{theorem}\label{thm-M}
Let $M$ be a given $m\times n$ matrix. For the optimality conditions given by Lemma~\ref{lem:partitionM}, let $r=|R_p|=|C_p|$. Then, $r\le\min\{m,n\}$ and there are pairs of row and column indices $(i_k,j_k)$, $k=1,\dots,r$, which generate sets of row indices $R_1=\{i_1\}$, $R_{k+1}=R_k\cup\{i_{k+1}\}$, and sets of column indices $C_1=\{j_1\}$, $C_{k+1}=C_k\cup\{j_{k+1}\}$, with $i_{k+1}\in\{1,\dots,m\}\backslash R_k$ and $j_{k+1}\in\{1,\dots,n\}\backslash C_k$, such that for each $k$,
\[ \begin{pmatrix} M_{R_k C_k}^{\null} & e \\ e^T & 0 \end{pmatrix} \]
is nonsingular, and $(u_{C_k},\gamma_k)$ and $(v_{R_k},\gamma_k)$ computed from
\begin{dbleqnarray*}
\begin{pmatrix} M_{R_k C_k}^{\null} & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} u_{C_k}^{\null} \\ \gamma_k \end{pmatrix} & = & \begin{pmatrix} 0 \\ 1 \end{pmatrix}, & \begin{pmatrix} M_{R_k C_k}^T & e \\ e^T & 0 \end{pmatrix} \begin{pmatrix} v_{R_k}^{\null} \\ \gamma_k \end{pmatrix} & = & \begin{pmatrix} 0 \\ 1 \end{pmatrix},
\end{dbleqnarray*}
are optimal to $(P_k)$ and $(D_k)$
respectively. In addition, $(u_{C_r},\gamma_r)$ and $(v_{R_r},\gamma_r)$, together with $u_j=0$ for $j\in\{1,\dots,n\}\backslash C_r$ and $v_i=0$ for $i\in\{1,\dots,m\}\backslash R_r$, are optimal to $(P)$ and $(D)$ respectively. Such a sequence of pairs of row and column indices $(i_k,j_k)$, $k=1,\dots,r$, exists also if one of the additional requirements $\gamma_{k+1}\le \gamma_k$, $k=1,\dots,r-1$, or $\gamma_{k+1}\ge \gamma_k$, $k=1,\dots,r-1$, is imposed.
\end{theorem}

\begin{proof}
The result is analogous to Theorem~\ref{thm-LP}. The only difference is the final statement that the additional requirement $\gamma_{k+1}\le \gamma_k$, $k=1,\dots,r-1$, or $\gamma_{k+1}\ge\gamma_k$, $k=1,\dots,r-1$, does not contradict the existence of the short pivot sequence. This is a consequence of the proved existence of two potential pivots in the reduction step of Lemma~\ref{lem-reducegamma}.
\end{proof}

\section{Summary}

For a pair of linear programs in canonical form that are both feasible, we have shown the existence of a sequence of pivots of length at most $\min\{m,n\}$ that leads from the origin to a primal-dual pair of optimal solutions. At each step, the pivot creates a nonsingular submatrix of the constraint matrix that grows by one row and one column, namely the row and column of the pivot element. By solving two linear systems involving this submatrix, a pair of primal and dual solutions is obtained. These solutions are optimal for the restricted problem in which only the rows and columns of the submatrix are included. At the final step, the solutions are optimal to the full primal and dual problems, respectively. We have not been able to give rules for an algorithm that uses global information to find this correct path without potentially enumerating all possible paths, of which there may be exponentially many.
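The two bordered linear systems described above can be sketched numerically. The following Python snippet is an illustration of that single step (solving for the restricted primal and dual solutions of a given square submatrix), not the paper's pivot-selection procedure; the function name and the $2\times 2$ example are ours.

```python
import numpy as np

def restricted_pair(M_RC):
    """Illustrative sketch: for a square submatrix M_RC, solve the two
    bordered systems  [M_RC e; e^T 0][u; gamma] = [0; 1]  and its
    transposed analogue, returning the restricted primal and dual
    solutions and the common optimal value."""
    k = M_RC.shape[0]
    e = np.ones((k, 1))
    K = np.block([[M_RC, e], [e.T, np.zeros((1, 1))]])
    rhs = np.zeros(k + 1)
    rhs[-1] = 1.0
    up = np.linalg.solve(K, rhs)      # primal: (u_{C_k}, gamma_k)
    vd = np.linalg.solve(K.T, rhs)    # dual:   (v_{R_k}, beta_k)
    u, gamma = up[:k], up[-1]
    v, beta = vd[:k], vd[-1]
    # the two values coincide, as in the first identity of Lemma lem:invM
    assert np.isclose(gamma, beta)
    return u, v, gamma

# a 2x2 example: M = [[0, 1], [1, 0]] gives u = v = (1/2, 1/2), gamma = -1/2
u, v, gamma = restricted_pair(np.array([[0.0, 1.0], [1.0, 0.0]]))
```

That $\gamma=\beta$ in the sketch follows by multiplying the primal system on the left by $(v^T, \beta)$ and the dual system by $(u^T, \gamma)$, exactly as in the lemma.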
We therefore only publish the result as is, and hope that it will be useful for further understanding of pivoting methods for linear programming. We also note in passing that the reductions of Lemma~\ref{lem-reducedimLP} and Lemma~\ref{lem-reducegamma} can be carried out from an arbitrary basis matrix if the nonnegativity condition is omitted. Therefore, there is a sequence of pivots of length at most $\min\{m,n\}$ leading from any pair of primal-dual basic solutions to the origin. Consequently, Theorem~\ref{thm-LP} and Theorem~\ref{thm-M} imply an overall bound of at most $2\min\{m,n\}$ pivots leading from any pair of primal-dual basic solutions to an optimal pair of primal-dual basic solutions. This is at least as tight as the bound $n+m$ given by Fukuda and Terlaky~\cite{FT99}.
\end{document}
\begin{document}
\title{Obstacle Mean-Field Game Problem}
\begin{abstract}
In this paper, we introduce and study a first-order mean-field game obstacle problem. We examine the case of local dependence on the measure under assumptions that include both the logarithmic case and power-like nonlinearities. Since the obstacle operator is not differentiable, the equations for first-order mean-field game problems have to be discussed carefully. Hence, we begin by considering a penalized problem. We prove that this problem admits a unique solution satisfying uniform bounds. These bounds serve to pass to the limit in the penalized problem and to characterize the limiting equations. Finally, we prove uniqueness of solutions.
\end{abstract}
\thanks{D. Gomes was partially supported by KAUST baseline and start-up funds and KAUST SRI, Uncertainty Quantification Center in Computational Science and Engineering.}

\section{Introduction}

The mean-field game framework \cite{C1, C2, ll1, ll2, ll3, ll4} is a class of methods that model the behavior of large populations of rational agents under non-cooperative dynamic behavior. This research area has applications ranging from economics to engineering, as discussed in the recent surveys \cite{llg2, cardaliaguet, GS}, the additional references therein, and the lectures by P.~L. Lions at the Coll\`ege de France \cite{LCDF}. In this paper, we investigate first-order mean-field game obstacle problems in the stationary periodic setting. To our knowledge, these problems have not been studied previously in the context of mean-field games. Before describing the problem, we start by recalling the original stationary mean-field game problem from \cite{ll1}, as well as the obstacle problem for Hamilton-Jacobi (H-J) equations \cite{L}. Let $\mathbb{T}^N$ be the $N$-dimensional torus, identified when convenient with $[0, 1]^N$.
Consider a continuous function, $H:\mathbb{R}^N\times \mathbb{T}^N\to \mathbb{R}$, the Hamiltonian, and a continuous increasing function, $g: \mathbb{R}^+_0\to \mathbb{R}$. In \cite{ll1}, the authors consider the stationary mean-field game system
\begin{equation}
\label{smfg}
\begin{cases}
H(Du,x)=g(\theta)+\overline{H}\\
\text{div}(D_pH\, \theta)=0,
\end{cases}
\end{equation}
where the unknowns are a function $u:\mathbb{T}^N\to \mathbb{R}$, a probability measure identified with its density $\theta:\mathbb{T}^N\to \mathbb{R}$, and a constant $\overline{H}$. The second equation is the adjoint of the linearization of the first equation in the variable $u$. The system \eqref{smfg} has the canonical structure of a mean-field game problem: a nonlinear elliptic or parabolic partial differential equation (PDE) coupled with a PDE given by the adjoint of its linearization. The existence of weak solutions for \eqref{smfg} was considered in \cite{ll1}. In \cite{E2}, mean-field games are not mentioned explicitly; however, the results there yield the existence of smooth solutions of \eqref{smfg} for $g(\theta)=\ln \theta$. The second-order case was also studied in \cite{GM} (see also \cite{GIMY}), \cite{GPM1}, and \cite{GPatVrt}. Stationary mean-field games with congestion were considered in \cite{GMit}. The time-dependent problem was addressed for parabolic mean-field games in \cite{ll2}, \cite{CLLP}, \cite{porretta}, \cite{GPM2}, \cite{GPM3}, \cite{GPim1}, and \cite{GPim2}, and in \cite{Cd1} and \cite{Cd2} for first-order mean-field games. The first-order obstacle problem arises in optimal stopping (see \cite{L}, \cite{MR921827}, \cite{MR957658}, \cite{Bardi} and the references therein). In the periodic setting, a model problem is the following: let $\psi:\mathbb{T}^N\to \mathbb{R}$ and $H:\mathbb{T}^N\times \mathbb{R}^N \to \mathbb{R}$ be continuous functions.
The obstacle problem is defined by
\begin{equation}
\label{obstacleHJ}
\max\{H(Du,x), u-\psi(x)\}=0,
\end{equation}
where $u:\mathbb{T}^N\to \mathbb{R}$ is a bounded continuous function. The linearization of the obstacle operator is not well-defined, since the left-hand side of \eqref{obstacleHJ} may fail to be differentiable. Thus, it is not clear what the corresponding mean-field model should be. One of the contributions of this paper is the characterization of the appropriate analog of \eqref{smfg} for obstacle problems. This is achieved by applying the penalization method, a standard technique employed in many related problems, e.g. \cite{L}. In the classical obstacle problem, to do so, one considers a family of smooth functions, $\beta_\epsilon:\mathbb{R}\to \mathbb{R}_0^+$, which vanish identically in $\mathbb{R}_0^-$ and satisfy $\beta_\epsilon(z)=\frac{z-\epsilon}{\epsilon}$ for $z>\epsilon$. Then, the obstacle problem is approximated by the equation
\begin{equation}
\label{approxobs}
H(Du_\epsilon,x)+\beta_\epsilon(u_\epsilon-\psi)=\epsilon \Delta u_\epsilon.
\end{equation}
This equation admits viscosity solutions that satisfy uniform Lipschitz bounds. By sending $\epsilon\to 0$, one obtains a solution to \eqref{obstacleHJ}. Thanks to \cite{L}, for every $\epsilon > 0$ there exists a smooth solution $u^\epsilon$ to \eqref{approxobs}. It is also well known that, up to subsequences, $u^{\epsilon}$ converges uniformly to a viscosity solution $u$ of \eqref{obstacleHJ}. The rate of convergence of this approximation was investigated using the nonlinear adjoint method in \cite{CGT2}.
We are then led naturally to the approximate mean-field obstacle problem
\begin{equation}
\label{amfgobs}
\begin{cases}
H(Du_\epsilon,x)+\beta_\epsilon(u_\epsilon-\psi)=g(\theta_\epsilon)\\
-\text{div}(D_pH(Du_\epsilon,x)\theta_\epsilon)+\beta_\epsilon'(u_\epsilon-\psi) \theta_\epsilon=\gamma(x).
\end{cases}
\end{equation}
The additional term $\gamma$ on the right-hand side of \eqref{amfgobs} arises for the following reason: the mean-field obstacle problem models a population of agents trying to move optimally up to a certain stopping time, at which they switch to the obstacle (the term $\beta_\epsilon' \theta_\epsilon$ is the flow of agents switching to the obstacle). Without a source term introducing new agents into the system, we could fall into the pathological situation $\theta_\epsilon\equiv 0$. As will be clear from the discussion, the approximate problem \eqref{amfgobs} admits smooth solutions even without additional elliptic regularization terms. This remarkable property is also true for certain first-order mean-field games; see, for instance, \cite{E1}. The function $u_\epsilon$ in \eqref{amfgobs} is the value function for an optimal stopping problem. This problem may not admit a continuous solution \cite{MR921827, MR957658}. Owing to the structure of \eqref{amfgobs}, we were able to prove regularity estimates that hold uniformly in $\epsilon$. However, in other related important situations, this may not be the case. It would be extremely interesting to consider a discontinuous viscosity solution approach for such problems.
As we will show in Section \ref{convsec}, by passing to the limit in \eqref{amfgobs}, we obtain the mean-field obstacle problem
\begin{equation}\label{obstacleepmfglimiteqrem}
\begin{cases}
H(Du,x) = g(\theta)&\text{in }\quad \mathbb{T}^N,\\
-\text{div} (D_p H(Du,x)\theta)\leq \gamma(x)\quad&\text{in } \quad \mathbb{T}^N,\\
-\text{div} (D_p H(Du,x)\theta)= \gamma(x)&\text{in } \quad \{u<\psi\},\\
u\leq \psi.
\end{cases}
\end{equation}
This paper is structured as follows: after discussing the main hypotheses in Section \ref{opmfg}, we prove, in Section \ref{A priori estimates}, various estimates for \eqref{amfgobs} that are uniform in $\epsilon$. Namely, we obtain:
\begin{thm}\label{lipestimthm}
Under the assumptions of Section \ref{opmfg}, let $(u_\epsilon, \theta_\epsilon)$ be the solution to \eqref{amfgobs}. Then, there exists a constant $C$ independent of $\epsilon$ such that
\begin{equation}\label{w22est}\|u_\epsilon\|_{W^{2, 2}(\mathbb{T}^N)}\le C, \end{equation}
\begin{equation}\label{thetabounded} \|\theta_\epsilon\|_\infty\le C,\end{equation}
\begin{equation}\label{thetagradest}\|\theta_\epsilon\|_{W^{1, 2}(\mathbb{T}^N)}\le C,\end{equation}
and
\begin{equation}\label{Dubounded} \|Du_\epsilon\|_\infty\le C.\end{equation}
\end{thm}
Applying these estimates, we consider the limit $\epsilon\to 0$ in Section \ref{convsec}. There we obtain the following result:
\begin{thm} \label{cthm}
Under the assumptions of Section \ref{opmfg}, let $(u_\epsilon, \theta_\epsilon)$ be the solution to \eqref{amfgobs}.
Then there exist $u\in W^{1, \infty}(\mathbb{T}^N)\cap W^{2, 2}(\mathbb{T}^N)$ and $\theta\in L^\infty(\mathbb{T}^N) \cap W^{1, 2}(\mathbb{T}^N)$ such that, along some subsequence,
\begin{equation*} u_\epsilon\rightarrow u\quad\text{in }L^\infty(\mathbb{T}^N),\end{equation*}
\begin{equation*} Du_\epsilon\rightarrow Du,\quad \theta_\epsilon\rightarrow\theta \quad\text{in }L^2(\mathbb{T}^N),\end{equation*}
\begin{equation*} D^2u_\epsilon\rightharpoonup D^2u \quad\text{in }L^2(\mathbb{T}^N),\end{equation*}
as $\epsilon\rightarrow 0$. Furthermore, $(u, \theta)$ solves \eqref{obstacleepmfglimiteqrem}.
\end{thm}
Finally, in Section \ref{uniqsec} we establish the uniqueness of solutions of the limit problem. More precisely, our main result is:
\begin{thm} \label{uthm}
Under the assumptions of Section \ref{opmfg}, there exists a unique solution $(u, \theta)$, with $u\in W^{1, \infty}(\mathbb{T}^N)\cap W^{2, 2}(\mathbb{T}^N)$ and $\theta\in L^\infty(\mathbb{T}^N) \cap W^{1, 2}(\mathbb{T}^N)$, of the mean-field obstacle problem \eqref{obstacleepmfglimiteqrem}.
\end{thm}

\section{Assumptions}
\label{opmfg}

In this section, we describe our main assumptions. First, to ease the presentation, we assume that the obstacle vanishes, that is, $\psi\equiv 0$. This entails no loss of generality, as we can always redefine the Hamiltonian and the solution so that the new obstacle vanishes. In addition, we will take the source term $\gamma(x)=1$. However, our results can be easily adapted to deal with a non-vanishing smooth source $\gamma$.
On the Hamiltonian $H$ and the function $g$ we assume:
\begin{itemize}
\item [(i)] $H:\mathbb{R}^N\times\mathbb{R}^N\rightarrow\mathbb{R}$ is smooth and positive;
\item [(ii)] For each $p\in\mathbb{R}^N$, $x\rightarrow H(p,x)$ is periodic;
\item [(iii)] There exists a constant $\lambda>0$ such that
\begin{equation}\label{Hconvexity} H_{p_ip_j}(p,x)\xi_i\xi_j\ge \lambda|\xi |^2\end{equation}
for all $p,\,x,\,\xi\in\mathbb{R}^N$;
\item [(iv)] There exists $C>0$ such that
\begin{equation}\label{growthH}\begin{split} &|D^2_{pp}H|\le C\\ &|D^2_{xp}H|\le C(1+|p|)\\& |D^2_{xx}H|\le C(1+|p|^2)\end{split}\end{equation}
and
\begin{equation}\label{dphp}H(p,x)-D_pH(p,x)p\le C\end{equation}
for all $p,\,x\in\mathbb{R}^N$;
\item [(v)] $g:\mathbb{R}^+\rightarrow\mathbb{R}$ is smooth and such that
\begin{enumerate}
\item[(a)] $g'>0$,
\item[(b)] $g^{-1}(0)>0$,
\item[(c)] $\theta\rightarrow \theta g(\theta)$ is convex,
\item[(d)] there exist $C,\widetilde{C}>0$ and $\alpha\in[0,\alpha_0)$, with $\alpha_0$ the solution of
\begin{equation} \label{lipsalphaassmp} 2\alpha_0=(\alpha_0+1)\beta (\beta-1), \qquad \beta=\sqrt{\frac{2^*}{2}}, \end{equation}
if $N>2$, and $\alpha_0=\infty$ if $N\leq 2$, such that
\begin{equation}\label{g'prop}C\theta^{\alpha-1}\leq g'(\theta)\leq \widetilde{C}\theta^{\alpha-1}+\widetilde{C},\end{equation}
\item[(e)] for any $C_0>0$ there exists $C_1>0$ such that
\begin{equation}\label{ggrowthprop} C_0 \theta\leq \frac 1 2 g(\theta)\theta + C_1, \end{equation}
for any $\theta\ge 0$.
\end{enumerate}
\end{itemize}

We choose a penalization term $\beta_\epsilon:\mathbb{R}\rightarrow\mathbb{R}$, smooth, with $0\le \beta_\epsilon'\le \frac{1}{\epsilon}$, $\beta_\epsilon''\geq 0$, and such that
\begin{equation}\label{beta1}\beta_\epsilon(s)=0\quad \text{for } s\le 0,\quad\beta_\epsilon(s)=\frac{s-\epsilon}{\epsilon}\quad \text{for } s>2\epsilon,\end{equation}
\begin{equation}\label{beta2} |\beta_\epsilon(s)-s\beta_\epsilon'(s)|\leq C\quad \text{for }s\in\mathbb{R}.\end{equation}
\begin{rem} The typical examples we have in mind for $g$ are
$$g(\theta)=\log(\theta),$$
and
$$g(\theta)=\theta^\alpha+\theta_0,$$
for some $\theta_0>0$ and $\alpha\in (0,\alpha_0)$, with $\alpha_0$ as in \eqref{lipsalphaassmp}.
\end{rem}
\begin{rem} The assumptions on the Hamiltonian imply that
\begin{equation}\label{Hquadratic}\frac{\gamma}{2}|p|^2 -C\le H(p,x)\le C |p|^2 +C\end{equation}
and
\begin{equation}\label{DpHsulinear}\begin{split}& |D_pH(p,x)|\le C(1+|p|)\\& |D_xH(p,x)|\le C(1+|p|^2)\end{split} \end{equation}
for all $p,\,x\in\mathbb{R}^N$.
\end{rem}

\section{A-priori estimates}
\label{A priori estimates}

In this section, we establish various a-priori estimates for smooth solutions of the approximate mean-field obstacle problem. Because these estimates are uniform in $\epsilon$, we can pass to an appropriate limit as $\epsilon\to 0$, as explained in the next section. In what follows, we denote by $(u,\theta)$ a classical solution of \eqref{amfgobs}, and we omit the subscript $\epsilon$ for convenience.
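The penalization term above is only required to exist; a concrete choice makes its properties easy to check. The sketch below (our own example, only $C^1$ rather than smooth) glues the two prescribed branches with a quadratic bridge on $(0, 2\epsilon]$, which matches both the value $1$ and the slope $1/\epsilon$ at $s=2\epsilon$, so that $0\le \beta_\epsilon'\le 1/\epsilon$, $\beta_\epsilon''\ge 0$ a.e., and $|\beta_\epsilon(s)-s\beta_\epsilon'(s)|\le 1$.

```python
def beta_eps(s, eps):
    """One concrete penalization satisfying the constraints in the text
    (a C^1 choice; the paper only asks for existence of a smooth one):
    zero for s <= 0, the quadratic bridge s^2/(4*eps^2) on (0, 2*eps],
    and (s - eps)/eps beyond.  On the bridge, beta' = s/(2*eps^2) stays
    in [0, 1/eps], and beta - s*beta' = -s^2/(4*eps^2) is in [-1, 0]."""
    if s <= 0:
        return 0.0
    if s <= 2 * eps:
        return s * s / (4 * eps * eps)
    return (s - eps) / eps
```

For instance, with $\epsilon = 0.1$: $\beta_\epsilon(-1)=0$, $\beta_\epsilon(0.2)=1$ (both branches agree there), and $\beta_\epsilon(0.5)=(0.5-0.1)/0.1=4$.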
\begin{lem}\label{boundlem1}
Under the assumptions of Section \ref{opmfg}, there exist constants $C$, $\theta_0>0$, independent of $\epsilon$, such that for any solution $(u,\theta)$ of \eqref{amfgobs},
\begin{equation}\label{lowerboundtheta}\theta\geq \theta_0\quad \text{in }\mathbb{T}^N,\end{equation}
\begin{equation}\label{Hsbound}\int_{\mathbb{T}^N} \theta dx\le C,\end{equation}
\begin{equation}\label{Hsbound2}\left|\int_{\mathbb{T}^N} \theta g(\theta)dx\right|\le C,\end{equation}
\begin{equation}\label{uL1bound} \int_{\mathbb{T}^N} |u| dx\le C,\end{equation}
and
\begin{equation}\label{duthetaboundlem2} \int_{\mathbb{T}^N} |Du|^2\theta dx\le C.\end{equation}
\end{lem}
\begin{proof}
The lower bound on $\theta$ is a consequence of the fact that $g^{-1}$ is increasing with $g^{-1}(0)>0$, and that $H$ and $\beta_\epsilon$ are non-negative:
$$\theta=g^{-1}(H(Du,x)+\beta_\epsilon(u))\ge g^{-1}(0)=:\theta_0>0.$$
Next, multiplying the first equation of \eqref{amfgobs} by $\theta$ and the second equation by $u$, integrating, and subtracting, we get
\begin{equation*}\begin{split} \int_{\mathbb{T}^N} g(\theta)\theta dx=&\int_{\mathbb{T}^N} (H(Du,x)+\beta_\epsilon(u))\theta dx\\& =\int_{\mathbb{T}^N} (H(Du,x)-D_pH(Du,x)Du)\theta dx \\&+\int_{\mathbb{T}^N}(\beta_\epsilon(u)-\beta_\epsilon '(u)u)\theta dx +\int_{\mathbb{T}^N} u dx. \end{split} \end{equation*}
Then, using \eqref{dphp} and \eqref{beta2}, we can find a constant $C_0>0$ such that
\begin{equation} \label{lem1comp} \int_{\mathbb{T}^N} g(\theta)\theta dx \le C_0 \int_{\mathbb{T}^N} \theta dx +\int_{\mathbb{T}^N} u dx \leq C_0 \int_{\mathbb{T}^N} \theta dx+\int_{\mathbb{T}^N} u^+ dx. \end{equation}
Since $g$ satisfies \eqref{ggrowthprop}, we deduce that
\begin{equation} \label{aaa} \frac{1}{2}\int_{\mathbb{T}^N} g(\theta)\theta dx \le \int_{\mathbb{T}^N} u^+ dx+C_1. \end{equation}
Since $H\geq 0$, $\beta_\epsilon(u)\leq g(\theta)$.
In particular, \eqref{aaa} implies
\begin{equation*} \int_{\mathbb{T}^N} \beta_\epsilon(u)dx \le \int_{\mathbb{T}^N} g(\theta)dx \le \frac{1}{\theta_0}\int_{\mathbb{T}^N} g(\theta)\theta dx\le C\int_{\mathbb{T}^N} u^+ dx+C.\end{equation*}
At the same time, by \eqref{beta2},
\begin{equation*} \int_{\mathbb{T}^N} \beta_\epsilon(u)dx\ge \int_{\{u>2\epsilon\}} \beta'_\epsilon(u)udx-C=\frac{1}{\epsilon} \int_{\{u>2\epsilon\}} u dx-C.\end{equation*}
Hence
\begin{equation*} \frac{1}{\epsilon} \int_{\mathbb{T}^N} u^+dx \le C\int_{\mathbb{T}^N} u^+ dx+C,\end{equation*}
from which, for $\epsilon$ small enough, we get
\begin{equation} \label{uplus}\int_{\mathbb{T}^N} u^+ dx\le C\epsilon.\end{equation}
We infer, in particular, that $\int_{\mathbb{T}^N} g(\theta)\theta dx \le C$, from which \eqref{Hsbound} follows. On the other hand, the convexity of $\theta g(\theta)$ implies
\begin{equation*} \int_{\mathbb{T}^N} \theta g(\theta)dx \geq \left( \int_{\mathbb{T}^N} \theta dx \right) g\left(\int_{\mathbb{T}^N} \theta dx\right)\geq -C,\end{equation*}
and \eqref{Hsbound2} is then proven. Estimate \eqref{uL1bound} can be proven by observing that \eqref{lem1comp}, combined with \eqref{Hsbound}, \eqref{Hsbound2}, and estimate \eqref{uplus}, yields
\[ \left| \int_{\mathbb{T}^N} u dx\right|\leq C. \]
This estimate, combined with \eqref{uplus}, implies
\[ \int_{\mathbb{T}^N} u^- dx \leq C, \]
from which \eqref{uL1bound} follows. Finally, using the first equation of \eqref{amfgobs} and \eqref{Hquadratic}, we get
\begin{equation*} \int_{\mathbb{T}^N} |Du|^2\theta dx \le C\int_{\mathbb{T}^N} g(\theta)\theta dx+C\int_{\mathbb{T}^N}\theta dx,\end{equation*}
and then \eqref{duthetaboundlem2} is a consequence of \eqref{Hsbound} and \eqref{Hsbound2}.
\end{proof}

\begin{lem}\label{boundlem2}
Under the assumptions of Section \ref{opmfg}, there exists a constant $C>0$, independent of $\epsilon$, such that for any solution $(u,\theta)$ of \eqref{amfgobs},
\begin{equation}\label{H2estim}\|u\|_{W^{2,2}(\mathbb{T}^N)}\leq C,\end{equation}
\begin{equation}\label{thetagbound} \int_{\mathbb{T}^N} g'(\theta)|D\theta|^2dx\leq C,\end{equation}
and
\begin{equation}\label{thetagbound2}\|\theta^\frac{\alpha+1}{2}\|_{W^{1,2}(\mathbb{T}^N)}\leq C.\end{equation}
\end{lem}
\begin{proof}
Using \eqref{Hquadratic}, \eqref{lowerboundtheta} and \eqref{Hsbound2}, we get
\begin{align*} & \int_{\mathbb{T}^N} |Du|^2dx\leq C \int_{\mathbb{T}^N} H(Du,x)dx+C\le C\int_{\mathbb{T}^N} g(\theta)dx+C\\&\quad \le C\int_{\mathbb{T}^N} \theta g(\theta)dx+C\le C. \end{align*}
The previous bound on $\int_{\mathbb{T}^N} |Du|^2dx$, estimate \eqref{uL1bound}, and the Poincar\'{e} inequality imply
\begin{equation*} \|u\|_{L^{2}(\mathbb{T}^N)}\le C.\end{equation*}
Next, differentiating the first equation in \eqref{amfgobs} twice with respect to $x_i$ and then summing over $i$ (we use Einstein's convention, that is, we sum over repeated indices), we get
\begin{equation*}\begin{split}D_p H\cdot D(\Delta u)&+\Delta_x H+ 2 H_{x_ip_j}u_{x_jx_i}+H_{p_jp_l}u_{x_jx_i}u_{x_lx_i}\\& +\beta_\epsilon'(u)\Delta u+\beta_\epsilon''(u)|Du|^2=\Delta(g(\theta)). \end{split}\end{equation*}
Multiplying the previous equation by $\theta$ and using that
\begin{align*} &\int_{\mathbb{T}^N} (D_p H\cdot D(\Delta u)+\beta_\epsilon'(u)\Delta u)\theta dx=\int_{\mathbb{T}^N} (-\text{div}( D_p H\theta)+\beta_\epsilon'(u)\theta)\Delta u dx\\&\quad =\int_{\mathbb{T}^N} \Delta u dx=0, \end{align*}
we obtain
\begin{align*} &\int_{\mathbb{T}^N}(\Delta_x H+ 2 H_{x_ip_j}u_{x_jx_i}+H_{p_jp_l}u_{x_jx_i}u_{x_lx_i})\theta dx\\&\quad= -\int_{\mathbb{T}^N} \beta_\epsilon''(u)|Du|^2\theta dx+\int_{\mathbb{T}^N} \Delta(g(\theta))\theta dx.
\end{align*}
The uniform convexity of $H$, properties \eqref{growthH}, the convexity of $\beta_\epsilon$, and \eqref{duthetaboundlem2} then imply
\begin{equation*} C\int_{\mathbb{T}^N} |D^2 u|^2\theta dx+ \int_{\mathbb{T}^N} g'(\theta)|D\theta|^2dx \leq C \int_{\mathbb{T}^N} |D u|^2\theta dx+C\leq C,\end{equation*}
which gives, in particular, \eqref{thetagbound}. Moreover, from the previous inequality and \eqref{lowerboundtheta}, we infer that
\begin{equation*} \int_{\mathbb{T}^N} |D^2 u|^2dx\le C. \end{equation*}
This concludes the proof of \eqref{H2estim}. Finally, from \eqref{thetagbound} and \eqref{g'prop} we infer that
\begin{equation*} \int_{\mathbb{T}^N}\theta^{\alpha-1}|D\theta|^2dx\leq C \int_{\mathbb{T}^N} g'(\theta)|D\theta|^2dx\leq C,\end{equation*}
that is, $|D\theta^\frac{\alpha+1}{2}|\in L^2(\mathbb{T}^N)$. Since, in addition, $\theta\in L^1(\mathbb{T}^N)$, the previous estimate and the Poincar\'{e} inequality imply that $\theta^\frac{\alpha+1}{2}\in L^2(\mathbb{T}^N)$, and so \eqref{thetagbound2} holds.
\end{proof}

We end this section with the proof of Theorem \ref{lipestimthm}.

\begin{proof}[Proof of Theorem \ref{lipestimthm}]
Estimate \eqref{w22est} follows from Lemma \ref{boundlem2}. Therefore, we proceed to prove the remaining bounds. First, we remark that, by assumption \eqref{g'prop}, $g$ satisfies
\begin{equation}\label{gbehavioral>0}g(\theta)\leq C\theta^{\alpha}+C,\quad\text{when }\alpha>0,\end{equation}
and
\begin{equation}\label{gbehavioral=0} g(\theta)\leq C\log(\theta)+C,\quad\text{when }\alpha=0.\end{equation}
Next, we show that $\beta'_\epsilon(u)$ is bounded uniformly in $\epsilon$. The function $s\rightarrow \beta'_\epsilon(s)$ is increasing. Hence $\beta'_\epsilon(u)$ attains its maximum where $u$ attains its maximum.
Let $x_0$ be a maximum point of $u$; then $Du(x_0)=0$ and $D^2u(x_0)\leq 0$, and from \eqref{amfgobs}, at $x=x_0$ we have
\begin{equation*} \begin{split} 1&=-\text{div} (D_p H(Du,x)\theta)+\beta_{\epsilon}'(u)\theta\\& =-H_{p_ip_j}u_{x_ix_j}\theta-H_{p_ix_i}\theta-\frac{1}{g'(\theta)}(u_{x_ix_j}H_{p_i}H_{p_j}+H_{x_i}H_{p_i}+\beta_\epsilon'(u)u_{x_i}H_{p_i})+\beta_\epsilon'(u)\theta\\& \ge -H_{p_ix_i}\theta-\frac{H_{x_i}H_{p_i}}{g'(\theta)}+\beta_\epsilon'(u)\theta. \end{split} \end{equation*}
Then
\begin{equation*} \max \beta_\epsilon'(u)=\beta_\epsilon'(u(x_0))\leq H_{p_ix_i}(0,x_0)+\frac{H_{x_i}(0,x_0)H_{p_i}(0,x_0)}{g'(\theta(x_0))\theta(x_0)}+\frac{1}{\theta(x_0)}.\end{equation*}
Using the properties of the Hamiltonian, \eqref{g'prop} and \eqref{lowerboundtheta}, we conclude that
\begin{equation}\label{beta'bounded} \max \beta_\epsilon'(u)\le C.\end{equation}
Next, we claim that, for some constant $C$ independent of $p$,
\begin{equation}\label{thetagradpropthmclaim1} \int_{\mathbb{T}^N}\theta^{p-1}|D\theta|^2dx\leq C\int_{\mathbb{T}^N}\theta^{p+1}(1+|Du|^4)dx.\end{equation}
In order to prove \eqref{thetagradpropthmclaim1}, we use the technique from \cite{E1} (see the proof of Theorem 5.1): we multiply the second equation in \eqref{amfgobs} by $\text{div} (\theta^p D_pH(Du,x))$, for $p>0$, and integrate by parts:
\begin{equation}\label{lemlipine1}\begin{split} \int_{\mathbb{T}^N} \beta_\epsilon'\theta\text{div} (\theta^p D_pH)dx&=\int_{\mathbb{T}^N} (\theta H_{p_i})_{x_i} (\theta^p H_{p_j})_{x_j}dx \\& =\int_{\mathbb{T}^N} (\theta H_{p_i})_{x_j} (\theta^p H_{p_j})_{x_i}dx \\& =\int_{\mathbb{T}^N} (\theta (H_{p_i})_{x_j}+\theta_{x_j}H_{p_i})(\theta^p (H_{p_j})_{x_i}+p\theta^{p-1}\theta_{x_i}H_{p_j})dx\\& =\int_{\mathbb{T}^N} \theta^{p+1}(H_{p_i})_{x_j}(H_{p_j})_{x_i}+p\theta^{p-1}H_{p_i}\theta_{x_i}H_{p_j}\theta_{x_j}\\&+(p+1)\theta^p\theta_{x_i}H_{p_j}(H_{p_i})_{x_j}dx\\& =:\int_{\mathbb{T}^N} I_1+I_2+I_3 dx.
\end{split}\end{equation}
Using assumptions \eqref{growthH} on $H$, we get
\begin{equation*}\begin{split} I_1&=\theta^{p+1}(H_{p_ip_k}u_{x_kx_j}+H_{p_ix_j})(H_{p_jp_l}u_{x_lx_i}+H_{p_jx_i})\\& \ge \theta^{p+1}[\gamma^2|D^2u|^2-C(1+|Du|)|D^2u|-C(1+|Du|^2)]\\& \ge \theta^{p+1}\tilde{\gamma}^2|D^2u|^2 -C \theta^{p+1}(1+|Du|^2), \end{split} \end{equation*}
for some $\tilde{\gamma}>0$. Clearly,
$$I_2=p\theta^{p-1}|D_pH\cdot D\theta|^2.$$
Let us estimate $I_3$ from below. From the first equation of \eqref{amfgobs}, we gather that
\begin{equation}\label{gradthetasunstitution}H_{p_j}u_{x_jx_l}=g'(\theta)\theta_{x_l}-H_{x_l}-\beta_\epsilon'u_{x_l}.\end{equation}
Assumption \eqref{g'prop} and the lower bound \eqref{lowerboundtheta} on $\theta$ imply the existence of a positive constant $C_0$ such that
\begin{equation}\label{g'thetathetalower}g'(\theta)\theta\ge C_0>0.\end{equation}
Then, using the properties of the Hamiltonian, \eqref{beta'bounded}, \eqref{gradthetasunstitution} and \eqref{g'thetathetalower}, we get
\begin{equation*}\begin{split}I_3&=(p+1)\theta^p\theta_{x_i}H_{p_j}(H_{p_ip_l}u_{x_lx_j}+H_{p_ix_j})\\& =(p+1)g'(\theta)\theta^{p}H_{p_ip_l}\theta_{x_i}\theta_{x_l}+(p+1)\theta^p\theta_{x_i}(H_{p_j}H_{p_ix_j}-H_{p_ip_l}H_{x_l})-(p+1)\theta^p\beta_\epsilon'H_{p_ip_l}\theta_{x_i}u_{x_l}\\& \ge (p+1)\gamma C_0\theta^{p-1}|D\theta|^2-C(p+1)\theta^p|D\theta|(1+|Du|^2)-C(p+1)\theta^p|D\theta||Du|\\& \ge C(p+1)\theta^{p-1}|D\theta|^2-C(p+1)\theta^{p+1}(1+|Du|^4). \end{split} \end{equation*}
Next, let us bound the left-hand side of \eqref{lemlipine1} from above. We have
\begin{equation*}\begin{split} \beta_\epsilon'\theta\text{div} (\theta^p D_p H)&= \beta_\epsilon'\theta(p\theta^{p-1}D_pH \cdot D\theta+\theta^pH_{p_ip_j}u_{x_jx_i}+\theta^pH_{p_ix_i})\\& \le p \theta^{p-1}|D_pH \cdot D\theta|^2+\tilde{\gamma}^2\theta^{p+1}|D^2u|^2+Cp\theta^{p+1}(1+|Du|), \end{split} \end{equation*}
where, again, we used the properties of the Hamiltonian and \eqref{beta'bounded}.
From the preceding estimates, we conclude that
\begin{equation*}\begin{split} C (p+1)\int_{\Tt^N}\theta^{p-1}|D\theta|^2dx&-C(p+1)\int_{\Tt^N}\theta^{p+1}(1+|Du|^4)dx+p\int_{\Tt^N}\theta^{p-1}|D_pH\cdot D\theta|^2 dx\\& +\tilde{\gamma}^2\int_{\Tt^N} \theta^{p+1}|D^2u|^2 -C\int_{\Tt^N} \theta^{p+1}(1+|Du|^2)dx\\& \le \int_{\Tt^N} I_1+I_2+I_3dx\\& =\int_{\Tt^N} \beta_\epsilon'\theta\text{div} (\theta^p D_pH)dx\\& \leq p\int_{\Tt^N} \theta^{p-1}|D_pH\cdot D\theta|^2dx+\tilde{\gamma}^2\int_{\Tt^N}\theta^{p+1}|D^2u|^2dx\\&+Cp\int_{\Tt^N}\theta^{p+1}(1+|Du|)dx. \end{split} \end{equation*}
The previous inequalities imply \eqref{thetagradpropthmclaim1}. By Lemma \ref{boundlem2}, if $N>2$, we have $\theta\in L^{\frac{2^* (1+\alpha)}{2}}$, and for $N=2$, $\theta\in L^p$, for all $p$. If $N=1$, then \eqref{thetabounded} holds trivially by Morrey's theorem. Assume $N>2$; then Sobolev's inequality provides the bound
\begin{equation}\label{sobolinthmlip}\begin{split} \left(\int_{\Tt^N} \theta^{\frac{p+1}{2}2^*}dx\right)^\frac{2}{2^*}& \leq C\int_{\Tt^N} \theta^{p+1}dx+C\int_{\Tt^N}|D(\theta^{\frac{p+1}{2}})|^2dx\\& = C\int_{\Tt^N} \theta^{p+1}dx+C(p+1)^2\int_{\Tt^N}\theta^{p-1}|D\theta|^2dx.
\end{split} \end{equation}
Let $\beta:=\sqrt{\frac{2^*}{2}}=\sqrt{\frac{N}{N-2}}>1$; then assumption \eqref{lipsalphaassmp} can be rewritten as
$$2\alpha\leq (\alpha+1)\beta^2\frac{\beta-1}{\beta}$$
and, for $\alpha>0$, it implies, together with \eqref{Hquadratic} and \eqref{gbehavioral>0}, that
\begin{equation*}|Du|^4\leq C(g(\theta))^2+C\leq C\theta^{2\alpha}+C\leq C(1+\theta^{(\alpha+1)\beta^2\frac{\beta-1}{\beta}}).\end{equation*}
The same inequality holds when $\alpha=0$, using \eqref{gbehavioral=0}:
\begin{equation*}|Du|^4\leq C(g(\theta))^2+C\leq C(\log(\theta))^2+C\leq C(1+\theta^{(\alpha+1)\beta^2\frac{\beta-1}{\beta}}).\end{equation*}
Therefore, from H{\"o}lder's inequality we get
\begin{equation*}\begin{split}\int_{\Tt^N} \theta^{p+1}(1+|Du|^4)dx&\leq C\int_{\Tt^N} \theta^{p+1}(1+\theta^{(\alpha+1)\beta^2\frac{\beta-1}{\beta}})dx\\& \leq C \int_{\Tt^N} \theta^{p+1}dx+C\left(\int_{\Tt^N} \theta^{(p+1)\beta}dx\right)^\frac{1}{\beta}\left(\int_{\Tt^N} \theta^{(\alpha+1)\beta^2} dx\right)^\frac{\beta-1}{\beta}\\& \leq C \int_{\Tt^N} \theta^{p+1}dx+C\left(\int_{\Tt^N} \theta^{(p+1)\beta}dx\right)^\frac{1}{\beta}\\& \leq C\left(\int_{\Tt^N} \theta^{(p+1)\beta}dx\right)^\frac{1}{\beta}. \end{split} \end{equation*}
The last inequality, \eqref{thetagradpropthmclaim1} and \eqref{sobolinthmlip} give the estimate
\begin{equation*}\left(\int_{\Tt^N}\theta^{(p+1)\beta^2}dx\right)^\frac{1}{\beta^2}\leq Cp^2 \left(\int_{\Tt^N}\theta^{(p+1)\beta}dx\right)^\frac{1}{\beta}. \end{equation*}
Arguing as in \cite{E1}, we get \eqref{thetabounded} and hence \eqref{Dubounded} for $N>2$. When $N\le 2$, the reasoning is similar, because $\theta\in L^p$ for any $p$, and so \eqref{thetabounded} holds as well. Finally, \eqref{thetagradest} is a consequence of \eqref{thetagbound2} and the estimate \eqref{thetabounded} just proven.
\end{proof}

\section{Convergence} \label{convsec}

In this section, we present the proof of Theorem \ref{cthm} using the previous estimates.
\begin{proof}[Proof of Theorem \ref{cthm}]
Let $(u_\epsilon,\theta_\epsilon)$ be a solution of \eqref{amfgobs}. The estimates obtained in the previous section, namely in Theorem \ref{lipestimthm}, imply the existence of functions $u\in W^{2,2}(\mathbb{T}^N)\cap W^{1,\infty}(\mathbb{T}^N)$ and $\theta\in W^{1,2}(\mathbb{T}^N)\cap L^{\infty}(\mathbb{T}^N)$ such that, up to a subsequence, as $\epsilon\rightarrow0$,
\begin{equation*} u_\epsilon\rightarrow u\quad\text{in }L^\infty(\mathbb{T}^N),\end{equation*}
\begin{equation*} Du_\epsilon\rightarrow Du,\quad \theta_\epsilon\rightarrow\theta \quad\text{in }L^2(\mathbb{T}^N),\end{equation*}
\begin{equation*} D^2u_\epsilon\rightharpoonup D^2u \quad\text{in }L^2(\mathbb{T}^N).\end{equation*}
Furthermore, the sequence $u_\epsilon$ converges uniformly to a non-positive function $u$. For $s>0$, since $\beta_\epsilon'$ is increasing ($\beta_\epsilon''>0$) and $\beta_\epsilon(0)=0$, we have
$$\beta_\epsilon(s)=\beta_\epsilon(0)+\beta_\epsilon'(\xi_s)s\leq \max_{t\in[0,s]}\beta_\epsilon' (t)s=\beta_\epsilon'(s)s.$$
Therefore, for any $s\in\mathbb{R}$,
$$\beta_\epsilon(s)\leq\beta_\epsilon'(s)s^+.$$
Using \eqref{beta'bounded}, this implies
$$ 0\leq \beta_\epsilon(u_\epsilon)\leq \beta_\epsilon'(u_\epsilon)(u_\epsilon)^+\leq C(u_\epsilon)^+. $$
We conclude that $\beta_\epsilon(u_\epsilon)\rightarrow 0$ uniformly as $\epsilon\rightarrow 0$. Hence, the limit $(u, \theta)$ solves
\[ H(Du,x) = g(\theta)\quad\text{in }\quad \mathbb{T}^N, \]
\[ -\text{div} (D_p H(Du,x)\theta)\leq 1\quad\text{in } \quad \mathbb{T}^N, \]
\[ -\text{div} (D_p H(Du,x)\theta)= 1\quad\text{in } \quad \{u<0\}, \]
together with the constraint $u\leq 0$.
\end{proof}

\section{Uniqueness} \label{uniqsec}

We end the paper with the proof of uniqueness of solutions to \eqref{obstacleepmfglimiteqrem}.
This will be based upon a modified monotonicity argument inspired by the original technique of Lasry and Lions; see \cite{ll1, ll2, ll3}.
\begin{proof}[Proof of Theorem \ref{uthm}]
Let $(u_1, \theta_1)$ and $(u_2, \theta_2)$ be two solutions of \eqref{obstacleepmfglimiteqrem}. Set
$$A:=\{u_1-u_2>0\}.$$
The set $A$ is open. Moreover, $A\subset \{u_2<0\}$, since $u_1-u_2=u_1\leq 0$ in $\{u_2=0\}$; therefore,
\begin{equation*}-\text{div}(D_pH(Du_2,x)\theta_2)=\gamma(x)\quad\text{in }A.\end{equation*}
From the first equality in \eqref{obstacleepmfglimiteqrem}, we have
\begin{equation*}\int_A [H(Du_1,x)-H(Du_2,x)](\theta_1-\theta_2)dx=\int_A(g(\theta_1)-g(\theta_2))(\theta_1-\theta_2)dx.\end{equation*}
Using the second inequality for $u_1$ and the third equality for $u_2$ in \eqref{obstacleepmfglimiteqrem}, multiplying by $ u_1-u_2>0$ in $A$ and integrating by parts, we obtain
\begin{equation*}\begin{split}0&\leq \int_A \text{div}(D_p H(D u_1, x)\theta_1-D_pH(Du_2,x)\theta_2)(u_1-u_2)dx \\&=-\int _A(D_p H(D u_1, x)\theta_1-D_pH(Du_2,x)\theta_2)D(u_1-u_2)dx.\end{split}\end{equation*}
Note that there is no boundary term, since $u_1-u_2=0$ on $\partial A$. Adding the two inequalities and using the convexity of $H$, we get
\begin{equation*}\begin{split} 0\leq \int_A(g(\theta_1)-g(\theta_2))(\theta_1-\theta_2)&\leq \int_A [H(Du_1,x)-H(Du_2,x)](\theta_1-\theta_2)dx\\& -\int _A(D_p H(D u_1, x)\theta_1-D_pH(Du_2,x)\theta_2)D(u_1-u_2)dx\\& =-\int _A[H(Du_2,x)-H(Du_1,x)-D_p H(D u_1, x)D(u_2-u_1)]\theta_1dx\\& -\int _A[H(Du_1,x)-H(Du_2,x)-D_p H(D u_2, x)D(u_1-u_2)]\theta_2dx\\& \leq -C\int _A |D(u_1-u_2)|^2dx. \end{split}\end{equation*}
Thus $D(u_1-u_2)=0$ a.e.\ in $A$; since $u_1-u_2=0$ on $\partial A$, this forces $u_1=u_2$ in $A$, and we infer that $|A|=0$, i.e., $u_1\leq u_2$ almost everywhere.
\end{proof}
\end{document}
\begin{document}
\title{A non-ellipticity result, or\\ the impossible taming of the logarithmic strain measure}
\author{ Robert J.\ Martin\thanks{ Corresponding author: Robert J.\ Martin, \ \ Lehrstuhl f\"{u}r Nichtlineare Analysis und Modellierung, Fakult\"{a}t f\"{u}r Mathematik, Universit\"{a}t Duisburg-Essen, Thea-Leymann Str. 9, 45127 Essen, Germany; email: [email protected] } \quad and\quad Ionel-Dumitrel Ghiba\thanks{ Ionel-Dumitrel Ghiba, \ \ Alexandru Ioan Cuza University of Ia\c si, Department of Mathematics, Blvd.~Carol I, no.~11, 700506 Ia\c si, Romania; Octav Mayer Institute of Mathematics of the Romanian Academy, Ia\c si Branch, 700505 Ia\c si and Lehrstuhl f\"{u}r Nichtlineare Analysis und Modellierung, Fakult\"{a}t f\"{u}r Mathematik, Universit\"{a}t Duisburg-Essen, Thea-Leymann Str. 9, 45127 Essen, Germany; email: [email protected], [email protected] } \quad and\quad Patrizio Neff\thanks{ Patrizio Neff, \ \ Head of Lehrstuhl f\"{u}r Nichtlineare Analysis und Modellierung, Fakult\"{a}t f\"{u}r Mathematik, Universit\"{a}t Duisburg-Essen, Thea-Leymann Str. 9, 45127 Essen, Germany; email: [email protected] } }
\maketitle
\begin{abstract}
The logarithmic strain measure $\norm{\log U}^2$, where $\log U$ is the principal matrix logarithm of the stretch tensor $U=\sqrt{F^TF}$ corresponding to the deformation gradient $F$ and $\norm{\,.\,}$ denotes the Frobenius matrix norm, arises naturally via the geodesic distance of $F$ to the special orthogonal group $\SO(n)$.
This purely geometric characterization of this strain measure suggests that a viable constitutive law of nonlinear elasticity may be derived from an elastic energy potential which depends solely on this intrinsic property of the deformation, i.e.\ that an energy function $W\col\GLpn\to\R$ of the form \begin{equation} W(F)=\Psi(\norm{\log U}^2) \label{eq:abstractFunctionForm}\tag{1} \end{equation} with a suitable function $\Psi\col[0,\infty)\to\R$ should be used to describe finite elastic deformations. However, while such energy functions enjoy a number of favorable properties, we show that it is not possible to find a strictly monotone function $\Psi$ such that $W$ of the form \eqref{eq:abstractFunctionForm} is Legendre-Hadamard elliptic. Similarly, we consider the related isochoric strain measure $\norm{\dev_n\log U}^2$, where $\dev_n \log U$ is the deviatoric part of $\log U$. Although a polyconvex energy function in terms of this strain measure has recently been constructed in the planar case $n=2$, we show that for $n\geq3$, no strictly monotone function $\Psi\col[0,\infty)\to\R$ exists such that $F\mapsto \Psi(\norm{\dev_n\log U}^2)$ is polyconvex or even rank-one convex. Moreover, a volumetric-isochorically decoupled energy of the form $F\mapsto \Psi(\norm{\dev_n\log U}^2) + W_{\textrm{\rm vol}}(\det F)$ cannot be rank-one convex for any function $W_{\textrm{\rm vol}}\col(0,\infty)\to\R$ if $\Psi$ is strictly monotone. \end{abstract} \tableofcontents \section{Introduction} \subsection{Strain measures in nonlinear elasticity} \label{section:introduction} In nonlinear hyperelasticity, the behaviour of an elastic material is determined by an \emph{elastic energy potential}\footnote{For the notation employed here and throughout, see Section \ref{notationsect}.} \begin{equation} W\col\GLpn\to\R\,,\quad F\mapsto W(F) \end{equation} depending on the deformation gradient $F=\grad\varphi$ of a deformation $\varphi$. 
A large variety of representation formulae for certain classes of such functions is available in the literature. In particular, it is well known that any objective and isotropic function $W$ can be expressed in terms of the singular values of the argument, i.e.\ for any function $W\col\GLpn\to\R$ with
\[ W(F)={W}(Q_1\, F\, Q_2) \qquad\text{ for all }\; Q_1,Q_2\in\SO(n)\,, \]
there exists a unique symmetric function $g\col\mathbb{R}_+^n\to \mathbb{R}$ such that $W(F)=g(\lambda_1,\dotsc,\lambda_n)$ for all $F\in\GLpn$ with singular values $\lambda_1,\dotsc,\lambda_n$. Furthermore, if $f\col(0,\infty)\to\R$ is injective, then $W$ can also be written as
\begin{align} W(F)=\widetilde{g}(f(\lambda_1),f(\lambda_2),...,f(\lambda_n)) \end{align}
with a symmetric function $\widetilde{g}\col\mathbb{R}^n\to \mathbb{R}$. In particular, this representation is possible for functions $f=f_{(m)}$ of the form
\begin{align} f_{(m)}(x)=\left\{ \begin{array}{ll} \displaystyle\frac{1}{2\.m}(x^{2m}-1),& m\neq 0\,, \\ \log x,& m=0\,, \end{array} \right. \end{align}
which correspond to the commonly used \cite{boehlkeBertram2002} \emph{strain tensors} of \emph{Seth-Hill type} \cite{seth1961generalized,hill1970}
\begin{align} E_{(m)}=\left\{ \begin{array}{ll} \displaystyle\frac{1}{2\, m}(U^{2\, m}-\id),& m\neq 0, \\ \log U,& m=0\,, \end{array} \right. \end{align}
where $\log U$ is the principal matrix logarithm of the stretch tensor $U=\sqrt{F^TF}$. In general, a (material) strain tensor is commonly defined as a \quoteref{uniquely invertible isotropic second order tensor function} of the right Cauchy-Green deformation tensor $C=F^TF$ \cite[p.~268]{truesdell60}.\footnote{Additional properties such as monotonicity are sometimes required of strain tensors, see for example \cite[p.\ 230]{Hill68} or \cite[p.~118]{Ogden83}.} Due to the invertibility of strain tensor mappings, any energy function $W$ can also be written in the form
\[ W(F) = \widetilde{W}(E(F)) \]
for any strain tensor $E$.
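A classical instance of such a representation, recalled here for concreteness, is the Saint Venant--Kirchhoff energy, which is quadratic in the Green-Lagrange strain tensor $E_{(1)}$:
\[
W_{\mathrm{SVK}}(F) \;=\; \mu\,\norm{E_{(1)}}^2 + \frac{\Lambda}{2}\,[\tr (E_{(1)})]^2\,, \qquad E_{(1)} \;=\; \tfrac{1}{2}\,(F^TF-\id)\,,
\]
where $\mu$ and $\Lambda$ denote the Lam\'e constants.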
In contrast to a strain tensor, a \emph{strain measure} is an arbitrary mapping $\omega\col\GLp(n)\to\R$ such that $\omega(F)=0$ if and only if $F\in\SOn$. Examples of strain measures include the squared Frobenius norms of the Seth-Hill strain tensors
\begin{equation} \omega_{(m)} = \norm{E_{(m)}}^2=\sum_{i=1}^n f_{(m)}^2(\lambda_i)\,. \end{equation}
From this perspective, a strain measure indicates how much a deformation gradient $F\in\GLp(n)$ differs from a pure rotation, which suggests that an appropriate strain measure should be defined by introducing a \emph{distance} function on $\GLpn$ and using the distance of $F$ to the space of pure rotations $\SO(n)$, or a function thereof, as the strain measure. The particular choice of a suitable distance on $\GLpn$, however, is not immediately obvious. Grioli \cite{Grioli40,agn_neff2014grioli} showed that employing the Euclidean distance on $\GLp(n)$ yields the strain measure
\begin{equation} \omega_{\rm Grioli}\colonequals{\rm dist}_{\rm Euclid }^2 (F, \SO(n))=\norm{U-\id}^2=\norm{E_{(1/2)}}^2=\omega_{(1/2)}\,. \end{equation}
However, this strain measure suffers from a number of serious shortcomings due to the fact that the Euclidean distance is not an intrinsic distance measure on $\GLp(n)$ \cite{agn_neff2014riemannian,agn_neff2015geometry,agn_martin2014minimal}. On the other hand, strain measures involving the logarithmic strain arise from choosing the \emph{geodesic distance} on $\GLpn$ endowed with a natural Riemannian metric structure.
More precisely \cite{agn_neff2015geometry},
\begin{align}\label{gsm} \norm{\log U}^2&={\rm dist}_{{\rm geod}}^2(F, \SO(n)),\notag\\ \norm{\dev_n\log U}^2&={\rm dist}^2_{{\rm geod,{\rm SL}(n)}}\left( \frac{F}{(\det F)^{1/n}}, \SO(n)\right),\\ [\tr(\log U)]^2&=[\log \det U]^2={\rm dist}^2_{{\rm geod,\mathbb{R}_+\cdot \id}}\left((\det F)^{1/n}\cdot \id, \id\right),\notag \end{align}
where ${\rm dist}_{{\rm geod}}$, ${\rm dist}_{{\rm geod,\mathbb{R}_+\cdot \id}}$ and ${\rm dist}_{{\rm geod,{\rm SL}(n)}}$ are the canonical left invariant geodesic distances on the Lie-groups $\GLn$, ${\rm SL}(n)\colonequals\{X\in \GL(n)\;|\det{X}=1\}$ and $\mathbb{R}_+\cdot\id$, respectively \cite{agn_neff2015geometry,agn_lankeit2014minimization}. Energy functions and constitutive laws expressed in terms of these logarithmic strain measures have been a subject of interest in nonlinear elasticity theory for a long time, going back to investigations by the geologist G.\,F.~Becker \cite{becker1893,agn_neff2014rediscovering} first published in 1893 and the famous introduction of the quadratic Hencky strain energy
\[ \WH\col\GLpn\to\R\,,\quad \WH(F) = \mu\,\norm{\dev_n\log U}^2+\frac{\kappa}{2}\,[\tr(\log U)]^2 = \mu\,\norm{\log U}^2+\frac{\Lambda}{2}\,[\tr(\log U)]^2 \]
by Heinrich Hencky in 1929 \cite{Hencky1928,Hencky1929}. Hencky later considered more general elastic energy functions based on the logarithmic strain as well; for example, in a 1931 article in the Journal of Rheology \cite{hencky1931}, he suggested an energy function of the form
\begin{equation} \label{eq:hencky1931} W_{1931}(F) = \mu\,\norm{\dev_3\,\log U}^2 + h(\det U)\,, \end{equation}
where the volumetric part $h\col(0,\infty)\to\R$ of the energy is a function to be determined by experiments.
Important contributions are also due to H.~Richter, who considered the three logarithmic invariants $K_1=\tr(\log U)$, $K_2^2= \tr((\dev_3\log U)^2)$ and $\widetilde{K}_3=\tr((\dev_3\log U)^3)$ in a 1949 article \cite{richter1949verzerrung}. More recently, a set of isotropic invariants similar to those used by Richter was introduced by Criscione et al.\ \cite{criscione2000invariant,criscione2002direct,wilber2005baker,idjeri2011identification}, who considered the \emph{invariant basis}
\begin{equation}\label{gsm2} \left\{ \begin{alignedat}{2} &K_1=\tr(\log U)=\log \det U &&\text{\enquote{the amount-of-dilatation}}\\[.63em] &K_2=\norm{\dev_3\log U} &&\text{\enquote{the magnitude-of-distortion}}\\ &K_3=3\sqrt{6}\, \det \left(\displaystyle\frac{\dev_3 \log U}{\norm{\dev_3\log U}}\right) \qquad\quad &&\text{\enquote{the mode-of-distortion}} \end{alignedat} \right. \end{equation}
for the natural strain $\log U$ and showed that any isotropic energy $W$ on $\GLp(3)$ can be represented in the form
\begin{equation}\label{eq:criscioneEnergy} W(F)=W_{\rm Crisc}(K_1,K_2,K_3)\,. \end{equation}
Similarly, Lurie \cite{lurie2012nonlinear} used the invariants $K_1$, $K_2$ and $\widehat{K}_3=\arcsin (K_3)$. Although energy functions expressed in terms of logarithmic strain measures often exhibit interesting and desirable properties \cite{Anand79,agn_neff2015exponentiatedI,agn_neff2015exponentiatedII}, they also pose a number of mathematical challenges. One of the greatest difficulties is the lack of appropriate \emph{convexity properties}.
\subsection{Convexity properties of energy functions}
Among the many constitutive properties for hyperelasticity discussed in the literature, some of the most important ones are the conditions of \emph{rank-one convexity} and \emph{polyconvexity} \cite{ball1977constitutive} of the energy function $W$.
\begin{definition} \label{definition:rankOneConvexity} A function $W\colon\GLpn\to\mathbb{R}$ is called \emph{rank-one convex} if for all $F\in\GLpn$, all $\xi,\eta\in\mathbb{R}^n$ and any interval $I\subset\R$ such that $\det (F+t\cdot \xi\otimes \eta)>0$ for all $t\in I$, the mapping \[ I\to\R\,,\quad t\mapsto W(F+t\cdot \xi\otimes \eta) \] is convex. \end{definition} \begin{remark} A sufficiently regular function $W$ is rank-one convex on $\GLp(n)$ if and only if it satisfies the \emph{Legendre-Hadamard ellipticity} condition \begin{align} \label{def:lhellipt} D^2_F W(F)(\xi\otimes\eta,\xi\otimes\eta) \geq 0 \qquad\text{for all }\; \xi, \eta\in\mathbb{R}^n\,,\; F\in \GLp(n)\,. \end{align} If strict inequality holds in \eqref{def:lhellipt} for all $F\in\GLpn$ and all $\xi,\eta\in\R^n\setminus\{0\}$, then $W$ is called \emph{strongly} Legendre-Hadamard elliptic. \end{remark} \begin{definition}\label{definition:polyconvexityGLSL} ~ \begin{itemize} \item[i)] A function $W\colon\Rnn\to\R\cup\{\infty\}$ is called polyconvex if there exists a convex function $P\colon\mathbb{R}^m\to\mathbb{R}\cup\{\infty\}$ such that \begin{equation} W(F) = P(\mathbb{M}(F)) \qquad\text{for all }\;F\in\mathbb{R}^{n\times n}\,, \end{equation} where $\mathbb{M}(F)\in\R^m$ denotes the vector of all minors of $F$. \item[ii)] A function $W\colon\GLpn\to\R$ is called polyconvex if the function \begin{align} \widetilde{W}\colon\Rnn\to\R\cup\{\infty\}\,,\quad \widetilde{W}(F)= \begin{cases} W(F) &:\; F\in\GLpn\\ \infty &:\; F\notin\GLpn \end{cases} \end{align} is polyconvex according to i). \end{itemize} \end{definition} \begin{remark} \label{remark:polyImpliesRankOne} If $W\colon\GLpn\to\R$ is polyconvex, then $W$ is rank-one convex \cite{Dacorogna08}. 
\end{remark}
Although, unlike many other constitutive assumptions, the condition of polyconvexity is not necessitated by physical or mechanical considerations, it is one of the most important tools to ensure the existence of energy minimizers under appropriate boundary conditions. Rank-one convexity (or LH-ellipticity), on the other hand, is generally not sufficient to ensure the existence of minimizers. However, it is not only a necessary condition for polyconvexity \cite{Dacorogna08,wilber2002,ndanou2014criterion}, but directly motivated by physical reasoning as well: for example, ellipticity of a constitutive law ensures finite wave propagation speed \cite{eremeyev2007constitutive,zubov2011,sawyersRivlin78} and prevents discontinuities of the strain along plane interfaces under homogeneous Cauchy stress \cite{agn_neff2016injectivity,agn_mihai2016hyperelastic,agn_mihai2017hyperelastic}. Yet constructing a viable energy function in terms of logarithmic strain measures which satisfies either of these convexity conditions turns out to be quite challenging. In a 2004 article, Sendova and Walton \cite{sendova2005strong} gave a number of necessary conditions for the rank-one convexity of energies of the form \eqref{eq:criscioneEnergy}. In the planar case $n=2$, it was recently shown by Neff et al.\ \cite{agn_neff2015exponentiatedII,agn_ghiba2015exponentiated} that the \emph{exponentiated Hencky energy}
\begin{equation}\label{eq:expHenckyIntroduction} W_{_{\rm eH}}\col\GLp(2)\to\R\,,\quad W_{_{\rm eH}}(F) = \frac{\mu}{k}\,e^{k\,\norm{\dev_2\log U}^2}+\frac{\kappa}{2\,\widehat{k}}\,e^{\widehat{k}\,[\log \det U]^2}\,, \end{equation}
where $k\geq\frac14$ and $\widehat{k}\geq\frac18$ are additional dimensionless parameters, is polyconvex (and thus quasiconvex and rank-one convex). In the three-dimensional case, however, the exponentiated Hencky energy is \emph{not} rank-one convex.
As we will show in this article, the search for a rank-one convex energy resembling \eqref{eq:expHenckyIntroduction} in the three-dimensional case was, unfortunately, destined to fail from the beginning: For $n\geq3$, there exists no strictly monotone function of $\norm{\log U}^2$ or $\norm{\dev_n\log U}^2$ which is rank-one convex on $\GLpn$; further, an energy with a volumetric-isochoric split whose isochoric part is a strictly monotone function of $\norm{\dev_n\log U}^2$ cannot be rank-one convex. These main results are presented in Sections \ref{log}, \ref{devlog} and \ref{volisosect}, respectively.
\subsection{Related work}
Bertram et al.\ \cite{bertram2007rank} considered quadratic energies of the form
\begin{align} W(F)=g(\lambda_1,\lambda_2,...,\lambda_n)=\frac{1}{2}\sum_{i=1}^n f^2(\lambda_i)+\beta\, \sum_{1\leq i<j\leq n}f(\lambda_i)f(\lambda_j)\,, \end{align}
with $f\col\mathbb{R}_+\to \mathbb{R}$ such that $f(1)=0,$ $f'(1)=1$, $f'\neq 0$ and $\beta\in \mathbb{R}$. The functions $f$ are known as \emph{generalized strain measures} \cite{boehlkeBertram2002}. The authors prove that if the Hessian of $g$ at $(1,1,1)$ is positive definite, $\beta\neq 0$, and $f$ is strictly monotone, and/or if $f^2$ is a Seth-Hill strain measure $\omega_{(m)}$ corresponding to any $m\in \mathbb{R}$, then the energy $W$ is not rank-one convex. This extends previous results \cite{Raoult1986,Neff_Diss00,Bruhns01} for $f=f_{(1)}$ and $f=f_{(0)}$. From these observations, the authors conclude that a necessary condition for an energy to be rank-one convex is that the stress-strain relationship in the considered generalized strain measures must be physically non-linear.
\subsection{Notation}\label{notationsect}
Throughout this article, $F=\grad\varphi$ denotes the deformation gradient corresponding to a deformation $\varphi$, $C=F^TF$ is the right Cauchy-Green deformation tensor, $B=FF^T$ is the Finger tensor, $U=\sqrt{F^TF}$ is the right stretch tensor and $V=\sqrt{FF^T}$ is the left stretch tensor corresponding to $F$. Furthermore, we denote the standard Euclidean scalar product on $\R^{n\times n}$ by $\langle {X},{Y}\rangle=\tr{(X Y^T)}$, the Frobenius tensor norm is given by $\norm{{X}}^2=\langle {X},{X}\rangle$ and the identity tensor on $\R^{n\times n}$ is denoted by $\id$; note that $\tr{(X)}=\langle {X},{\id}\rangle$. We adopt the usual abbreviations of Lie-group theory, i.e.\ $\GL(n)\colonequals\{X\in\R^{n\times n}\;|\det{X}\neq 0\}$ denotes the general linear group, $\OO(n)\colonequals\{X\in \GL(n)\;|\;X^TX=\id\}$ is the orthogonal group, $\SO(n)\colonequals\{X\in \GL(n)\;|\; X^T X=\id,\;\det{X}=1\}$ is the special orthogonal group and $\GLp(n)\colonequals\{X\in\R^{n\times n}\;|\det{X}>0\}$ is the group of invertible matrices with positive determinant. The superscript $^T$ is used to denote transposition, and $\Cof A = (\det A)A^{-T}$ is the cofactor of $A\in \GLp(n)$. For all vectors $\xi,\eta\in\R^n$ we denote the dyadic product by $(\xi\otimes\eta)_{ij}\colonequals\xi_i\,\eta_j$. By \enquote{$\cdot$} we denote the multiplication with scalars or the multiplication of matrices. The Fr\'echet derivative of a function $W$ at $F$ applied to the tensor-valued increment $H$ is denoted by $D_F[W(F)]. H$. Similarly, $D_F^2[W(F)]. (H_1,H_2)$ is the bilinear form induced by the second Fr\'echet derivative of the function $W$ at $F$ applied to $(H_1,H_2)$.
We also identify the first derivative $D_F W$ with the gradient, writing $D_F W.H = \iprod{D_F W, H}$ for $F\in\GLpn$ and $H\in\Rnn$, and employ the chain rules
\begin{align} D_F(( \Phi\circ W)(F)).H=D_F(\Phi(W(F))).H= \Phi'(W(F))\, D_FW(F).H\,,\notag\\ D_F((W\circ G)(F)).H=D_F(W(G(F))).H= \langle DW(G(F)), D_FG(F).H\rangle\notag \end{align}
for $W\col\mathbb{R}^{3\times3}\to \mathbb{R}$, $G\col\mathbb{R}^{3\times3}\to \mathbb{R}^{3\times3}$ and $\Phi\col\mathbb{R}\to \mathbb{R}$. For instance,
\begin{align}\label{fdsvk} D_F(\norm{F^TF-\id}^2). H&=2\,\langle F^TF-\id, D_F(F^TF-\id).H\rangle=2\,\langle F^TF-\id, F^TH+H^TF\rangle\notag\\ &=4\,\langle F^TF-\id, F^TH\rangle=4\,\langle FF^TF-F, H\rangle\,, \end{align}
since
\begin{align} D_F(\norm{F}). H=\frac{1}{\norm{F}}\,\langle F, H\rangle,\qquad D_F(\norm{F}^2).H=2\, \norm{F}\, \frac{1}{\norm{F}}\,\langle F, H\rangle=2\,\langle F, H\rangle \notag \end{align}
for all $F\neq 0$.
\section{Rank-one convex energies in terms of strain measures}
We consider the problem of rank-one convexity in terms of different strain measures $\omega$. More specifically, we are interested in whether or not it is \emph{possible} for a given $\omega$ to find a non-trivial (i.e.\ non-constant) function $\Psi$ such that $\Psi\circ\omega$ is rank-one convex. A basic necessary condition on $\omega$ for the existence of a \emph{strictly monotone} function $\Psi\col\R\to\R$ such that the mapping $F\mapsto\Psi(\omega(F))$ is rank-one convex is stated in the following lemma.
\begin{lemma} \label{lemma:impossibility} Let $\omega\in {\rm C}^2(\GLpn; I)$ for an interval $I\subset\R$.
If there exist $F\in\GLpn$ and $\xi,\eta\in\mathbb{R}^n\setminus \{0\}$ such that
\begin{equation}\label{eq:omegaCondition} D\omega(F).(\xi\otimes \eta) = 0 \qquad\text{and}\qquad D^2\omega(F).(\xi\otimes \eta,\xi\otimes \eta) < 0\,, \end{equation}
then there exists no strictly monotone function $\Psi\col I\to\R$ such that the mapping $F\mapsto W(F)\colonequals\Psi(\omega(F))$ is rank-one convex on $\GLpn$.
\end{lemma}
\begin{proof}
Let $F\in\GLpn$ and $\xi\otimes \eta\in\Rnn$ satisfy \eqref{eq:omegaCondition}. Then for $\eps>0$ sufficiently small, the mapping
\[ p\col (-\eps,\eps)\to I\,,\quad p(t) = \omega(F+t\cdot \xi\otimes \eta) \]
has a strict maximum at $t=0$, since
\[ p'(0) = D\omega(F).(\xi\otimes \eta) = 0 \qquad\text{and}\qquad p''(0) = D^2\omega(F).(\xi\otimes \eta,\xi\otimes \eta) < 0\,. \]
If $\Psi$ is strictly monotone on $I$, then the mapping
\begin{equation}\label{eq:impossibilityLemmaNonConvexity} q\col (-\eps,\eps)\to \R\,,\quad q(t) = \Psi(p(t)) = W(F+t\cdot \xi\otimes \eta) \end{equation}
has a strict maximum at $t=0$ as well. In particular, $q$ cannot be convex, which implies that $W$ is not rank-one convex (cf.\ Definition \ref{definition:rankOneConvexity}).
\end{proof}
If $\Psi$ is twice differentiable on $I$ with $\Psi'(t)>0$ for all $t\in I$, then Lemma \ref{lemma:impossibility} also follows from the observation that
\begin{equation} D^2 W(F).(\xi\otimes \eta,\xi\otimes \eta) = \Psi''(\omega(F))\cdot \underbrace{[D \omega(F).(\xi\otimes \eta)]^2}_{=0} + \underbrace{\Psi'(\omega(F))}_{>0}\cdot \underbrace{D^2 \omega(F).(\xi\otimes \eta,\xi\otimes \eta)}_{<0} \;<\; 0 \end{equation}
for $F\in\GLpn$ and $\xi\otimes \eta\in\Rnn$ satisfying \eqref{eq:omegaCondition}, since in that case, the Legendre-Hadamard ellipticity condition is violated at $F$.
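The criterion \eqref{eq:omegaCondition} also lends itself to a quick numerical test. The following sketch (our addition, not part of the argument; it assumes NumPy is available) checks both conditions by central finite differences for the Saint-Venant-Kirchhoff type strain measure $\omega(F)=\norm{F^TF-\id}^2$ at $F=\frac12\,\id$ in the rank-one direction $e_1\otimes e_2$:

```python
import numpy as np

# Finite-difference sketch of the criterion
#   D omega(F).(xi x eta) = 0   and   D^2 omega(F).(xi x eta, xi x eta) < 0,
# tested on the Saint-Venant-Kirchhoff type measure omega(F) = ||F^T F - I||^2.

def omega(F):
    """Squared Frobenius norm of F^T F - I."""
    n = F.shape[0]
    return float(np.sum((F.T @ F - np.eye(n)) ** 2))

n = 3
F = 0.5 * np.eye(n)
e = np.eye(n)
H = np.outer(e[0], e[1])          # rank-one direction e_1 (x) e_2

p = lambda t: omega(F + t * H)    # p(t) = omega(F + t H)
h = 1e-5
p_prime = (p(h) - p(-h)) / (2 * h)           # approximates p'(0)
p_second = (p(h) - 2 * p(0) + p(-h)) / h**2  # approximates p''(0)
# Here p(t) = 27/16 + t^4 - t^2, so p'(0) = 0 and p''(0) = -2 < 0.
```

The finite differences reproduce a vanishing first derivative and a strictly negative second derivative, so the hypotheses of the lemma are met for this $\omega$.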
\begin{remark} Note that by the usual interpretation of $\omega$ as the \emph{amount of strain} in a deformation, the assumption of (strict) monotonicity of $\Psi$ follows from basic physical reasoning, since an elastic energy function $W$ should always increase with increasing strain (cf.\ \cite[Section 2.2]{agn_neff2015exponentiatedI}). For example, if $\omega(F)=\norm{\log U}^2$, then the monotonicity of $\Psi$ is equivalent to the monotonicity of the mapping $t\mapsto W(t\cdot\id)$ on $(1,\infty)$, which, in turn, follows from the physically motivated requirement that the hydrostatic pressure corresponding to a purely volumetric deformation should be negative for extensions (and positive for compression). Furthermore, as we will discuss in Section \ref{Logarsectmon}, if $\omega$ is given by the deviatoric quadratic Hencky strain measure $\norm{\dev_3\log U}^2$ or by $\norm{\log U}^2$, then $\Psi'\geq 0$ \emph{must} hold everywhere if $\Psi\circ\omega$ is to be elliptic. In the deviatoric case, the strict inequality $\Psi'> 0$ also follows from the additional assumption that $W$ is compatible with linear elasticity, see Lemma \ref{remark:strictMonotonicityConditionLinearCompatibilityDev}. \end{remark} \subsection{A Saint-Venant-Kirchhoff type strain measure} Before we apply Lemma \ref{lemma:impossibility} to the logarithmic strain measures discussed in Subsection \ref{section:introduction}, we consider the simple example of the \emph{Saint-Venant-Kirchhoff type strain measure}\footnote{Note that the classical Saint-Venant-Kirchhoff energy is well-known to be neither polyconvex nor rank-one convex \cite{Raoult1986}.} \[ \omega_{\textrm{SVK}}\col\GLpn\to\R\,,\quad \omega_{\textrm{SVK}}(F) = \norm{F^TF-\id}^2 \] for arbitrary dimension $n\geq2$. 
Using \eqref{fdsvk}, we find
\begin{align*} D\omega_{\textrm{SVK}}(F).H &= 4\.\iprod{FF^TF-F, H}\,,\\ D^2\omega_{\textrm{SVK}}(F).(H,H) &= 4\.(\norm{HF^T}^2 + \norm{F^TH}^2 + \tr((F^TH)^2) - \norm{H}^2) \end{align*}
for $F\in\GLpn$ and $H\in\Rnn$. Thus, for $F=\frac12\.\id$ and the rank-one direction $H=e_1\otimes e_2$, where $e_i\in\R^n$ denotes the $i$-th unit vector, we find $\tr H=0$ and thus
\begin{align*} D\omega_{\textrm{SVK}}(F).H &= -\frac{3}{2}\.\iprod{\id, e_1\otimes e_2} = 0\,,\\ D^2\omega_{\textrm{SVK}}(F).(H,H) &= 4\.(\norm{\tfrac12\.H}^2 + \norm{\tfrac12\.H}^2 + \tr((\tfrac12\.H)^2) - \norm{H}^2) = -2\.\norm{H}^2 = -2 < 0\,. \end{align*}
Therefore, according to Lemma \ref{lemma:impossibility}, there is no strictly monotone increasing $\Psi\col[0,\infty)\to\R$ such that the mapping $F\mapsto\Psi(\norm{F^TF-\id}^2)$ is rank-one convex. In other words: for dimension $n\geq2$, there is no (physically viable) Legendre-Hadamard elliptic elastic energy in terms of the strain measure $\norm{F^TF-\id}^2 = \norm{C-\id}^2$.
\section{Logarithmic strain measures} \label{Logarsect}
Returning to the question of ellipticity of energy functions in terms of logarithmic strain measures, we start in the one-dimensional case. Identifying $\GLp(1)$ with $(0,\infty)$, the logarithmic strain measure $\norm{\log U}^2$ can be written as $(\log t)^2$ for $F=t\in(0,\infty)$.
\subsection{The one-dimensional case}
It is easily seen that the function $t\mapsto (\log t)^2$ is not convex.
However, it is possible to find some function $\Psi\col\mathbb{R}_+\to \mathbb{R}$ which \enquote{convexifies} the logarithm in the sense that $t\mapsto \Psi((\log t)^2)$ is convex: Consider
\begin{align}\label{1d2d} W(t)&=\Psi((\log t)^2),\notag\\ W^\prime(t)&=\Psi^\prime((\log t)^2)\, 2\, (\log t)\, \frac{1}{t},\\ W^{\prime\prime}(t)&=\Psi^{\prime\prime}((\log t)^2)\, 4\, (\log t)^2\, \frac{1}{t^2}+ \Psi^\prime((\log t)^2)\, 2\, \frac{1}{t^2}-\Psi^\prime((\log t)^2)\, 2\, (\log t)\, \frac{1}{t^2}\notag\\ &=\frac{1}{t^2}\left[\Psi^{\prime\prime}((\log t)^2)\, 4\, (\log t)^2\, + \Psi^\prime((\log t)^2)\, 2\, (1- \log t)\right].\notag \end{align}
Hence, the question whether $t\mapsto W(t)$ is convex can be restated as
\begin{align} W^{\prime\prime}(t)\geq 0\quad &\iff\quad \Psi^{\prime\prime}((\log t)^2)\, 4\, (\log t)^2\, \geq- \Psi^\prime((\log t)^2)\, 2\, (1- \log t)\notag\\ &\iff\quad \Psi^{\prime\prime}((\log t)^2)\geq -\frac{\frac{d^2}{dt^2}((\log t)^2)}{\left[\frac{d}{dt}((\log t)^2)\right]^2}\, \Psi^\prime((\log t)^2)=\frac{ \log t-1}{ 2\, (\log t)^2}\, \Psi^\prime((\log t)^2) \label{eq:1dfullConvexityStatement} \end{align}
for all $t>0$ with $t\neq 1$.
\begin{figure}
\caption{\footnotesize The one-dimensional representation of $-\frac{D_F^2(\norm{\log U}^2)}{[D_F(\norm{\log U}^2)]^2}$.}
\label{fig1d}
\end{figure}
\begin{figure}
\caption{\footnotesize The one-dimensional representation of $t=F\mapsto-{D_F^2(\norm{\log U}^2)}$.}
\label{fig2d}
\end{figure}
We observe first that $t\mapsto \frac{ \log t-1}{ 2\, (\log t)^2\,}$ is bounded above by $\frac{1}{8}$, see Fig.\ \ref{fig1d}, since
\begin{align} \max_{t\in (0,\infty)}\frac{ \log t-1}{ 2\, (\log t)^2} = \frac{ \log t-1}{ 2\, (\log t)^2}\Big|_{t=e^2} = \frac{1}{8}\,. \end{align}
In particular, there exists no \emph{concave critical point} of the mapping $t\mapsto (\log t)^2$, i.e.\ the conditions $-{\frac{d^2}{dt^2}((\log t)^2)}>0$ and ${\frac{d}{dt}((\log t)^2)}=\frac{2\log t}{t}=0$ are never satisfied for the same $t>0$.
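The maximum $\frac18$ at $t=e^2$ is easily confirmed numerically. The following sketch (our addition, assuming NumPy; written in terms of $s=\log t$) evaluates the bound on a wide grid:

```python
import numpy as np

# Numerical confirmation that, writing s = log t,
#   sup_{s != 0} (s - 1) / (2 s^2) = 1/8,  attained at s = 2, i.e. t = e^2.

def upper_bound(s):
    return (s - 1.0) / (2.0 * s**2)

value_at_two = upper_bound(2.0)     # (2 - 1)/(2 * 4) = 1/8

s = np.linspace(-10.0, 10.0, 100001)
s = s[np.abs(s) > 1e-6]             # exclude s = 0, i.e. t = 1
grid_max = float(upper_bound(s).max())
```

Both the value at $s=2$ and the grid maximum agree with $\frac18$.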
If we also assume that $\Psi$ is monotone increasing and convex, then \eqref{eq:1dfullConvexityStatement} is always satisfied for $t<1$ and therefore reduces to the condition \[ \Psi^{\prime\prime}(s)\geq \frac{\sqrt{s}-1}{ 2\, s\,}\, \Psi^\prime(s) \quad\text{ for all }\; s>0 \] which, for instance, is satisfied by any monotone convex function $\Psi\col[0,\infty)\to \mathbb{R}$ with \begin{align} \Psi^{\prime\prime}(s)\geq\frac{1}{8}\, \Psi^\prime(s) \qquad\text{for all }\; s>1\,. \end{align} For example, if $\Psi$ is given by $\Psi(s)=e^{\frac{1}{8}s}$, then the corresponding energy function $W\col(0,\infty)\to\R$ with $W(x)=e^{\frac{1}{8}\.\log^2(x)}$ is convex with respect to $x$. \subsection{Necessary conditions for rank-one convexity}\label{Logarsectmon} Let $W\col\GLp(3)\to\mathbb{R}_+$ be an objective and isotropic function, and let $g$ denote its representation in terms of the singular values, i.e.\ $W(F)=g(\lambda_1,\lambda_2,\lambda_3)$ for all $F\in\GLp(3)$ with singular values $\lambda_1,\lambda_2,\lambda_3$.
Then the \emph{Baker-Ericksen inequalities} can be stated as \cite{marsden1994foundations,bakerEri54} \begin{align} (\lambda_i-\lambda_j)\cdot \bigg( \lambda_i\frac{\partial g}{\partial \lambda_i}-\lambda_j\frac{\partial g}{\partial \lambda_j} \bigg) \geq 0 \qquad\text{for all }\;\lambda_i,\lambda_j\in(0,\infty)\,,\; i,j=1,2,3\,, \end{align} which is equivalent to \cite{silhavy2002monotonicity} \begin{align}\label{inegg} g(\lambda_1,\lambda_2,\lambda_3)\geq g(\overline{\lambda}_1,\overline{\lambda}_2,\overline{\lambda}_3) \end{align} for all $(\lambda_1,\lambda_2,\lambda_3)$ and $(\overline{\lambda}_1,\overline{\lambda}_2,\overline{\lambda}_3)$ such that \[ \lambda_1\geq \lambda_2\geq \lambda_3, \qquad \overline{\lambda}_1\geq \overline{\lambda}_2\geq \overline{\lambda}_3, \] and \[ \lambda_1\geq \overline{\lambda}_1, \quad \lambda_1\lambda_2\geq \overline{\lambda}_1\overline{\lambda}_2, \quad \lambda_1\lambda_2\lambda_3 = \overline{\lambda}_1\overline{\lambda}_2\overline{\lambda}_3. \] It is well known \cite{dacorogna01,silhavy1997mechanics} that the Baker-Ericksen inequalities are a necessary\footnote{ Note that the Baker-Ericksen inequalities are not sufficient for rank-one convexity; for example, the mapping $F\mapsto \norm{\log F^T F}^2$ is not rank-one convex, while the corresponding representation $g$ in terms of the singular values satisfies \eqref{inegg} \cite{agn_borisov2015sum}. } condition for rank-one convexity of the energy $W$.
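For illustration, the representation $g(\lambda_1,\lambda_2,\lambda_3)=\sum_i\log^2\lambda_i$ of $\norm{\log U}^2$ satisfies the Baker-Ericksen inequalities, since $\lambda_i\,\frac{\partial g}{\partial\lambda_i}=2\log\lambda_i$ and the logarithm is increasing. A randomized numerical sketch (Python; merely a consistency check, not part of the argument):

```python
import math
import random

random.seed(0)
# For g(l) = sum_i log^2(l_i) we have l_i * dg/dl_i = 2*log(l_i), so the
# Baker-Ericksen expression is (l_i - l_j)*(2*log(l_i) - 2*log(l_j)).
for _ in range(1000):
    li = math.exp(random.uniform(-3.0, 3.0))
    lj = math.exp(random.uniform(-3.0, 3.0))
    be = (li - lj)*(2.0*math.log(li) - 2.0*math.log(lj))
    assert be >= 0.0   # never negative: log is monotone increasing
print("Baker-Ericksen inequalities hold for g = sum of squared logarithms")
```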
In particular, the rank-one convexity of $W$ therefore implies \begin{align} g(\lambda_1,1,1)\geq g(\overline{\lambda}_1,1,1) \qquad\text{for all }\; \lambda_1,\overline{\lambda}_1\in \mathbb{R}_+\,,\; \lambda_1\geq\overline{\lambda}_1\geq 1\,, \end{align} i.e.\ that the mapping $x\mapsto g(x,1,1)$ is monotone increasing on $[1,\infty)$, as well as \begin{align} g(1,1,\lambda_1)\geq g(1,1, \overline{\lambda}_1) \qquad\text{for all }\; \lambda_1,\overline{\lambda}_1\in \mathbb{R}_+\,,\; \lambda_1\leq\overline{\lambda}_1\leq 1\,, \end{align} i.e.\ that the mapping $x\mapsto g(1,1,x)$ is monotone decreasing on $(0,1]$. Since $g$ is symmetric, the mapping $x\mapsto g(x,1,1)$ is monotone decreasing on $(0,1]$ as well for rank-one convex energies. If $W$ is of the form $W(F) = \Psi(\norm{\log U}^2) = g(\lambda_1,\lambda_2,\lambda_3)$ with a differentiable function $\Psi$, then the representation of $W(F)$ in terms of the singular values of $F$ is given by \begin{align} g(\lambda_1,\lambda_2,\lambda_3)=\Psi(\log^2\lambda_1+\log^2\lambda_2+\log^2\lambda_3)\,. \end{align} Thus the rank-one convexity of such an energy implies that the mapping $x\mapsto g(x,1,1) = \Psi(\log^2 x)$ must be monotone increasing on $[1,\infty)$ and monotone decreasing on $(0,1]$. Since \begin{align} \frac{d}{dx}\Psi(\log^2x)=\Psi^\prime(\log^2 x)\cdot \frac{2\.\log x}{x}\,, \end{align} it follows that the rank-one convexity of $F\mapsto W(F)=\Psi(\norm{\log U}^2)$ implies that $\Psi$ is monotone increasing on $[0,\infty)$. The same result holds true for arbitrary dimension $n$. Similar conditions hold if $W$ is given in terms of the deviatoric logarithmic strain measure $\norm{\dev_3\log U}^2$, i.e.\ if $W$ is of the form $W(F)=\Psi(\norm{\dev_3\log U}^2)$.
In that case, \begin{align} W(F)=g(\lambda_1,\lambda_2,\lambda_3)=\Psi \left( \frac{1}{3} \left( \log^2 \frac{\lambda_1}{\lambda_2}+\log^2 \frac{\lambda_2}{\lambda_3}+\log^2 \frac{\lambda_3}{\lambda_1} \right) \right)\,, \end{align} thus rank-one convexity of $W$ implies that the mapping $x\mapsto g(x,1,1)=\Psi(\frac{2}{3}\log^2 x)$ is monotone increasing on $[1,\infty)$ and monotone decreasing on $(0,1]$, which, in turn, implies that $\Psi$ must be monotone increasing on $[0,\infty)$. Again, the same monotonicity condition must be satisfied for arbitrary dimension $n$. A similar implication was found by Sendova and Walton \cite{sendova2005strong}, who considered energy functions of the form \[ W(F) = \Phi \left( \log \det U,\; \norm{\dev_3 \log U},\; 3\.\sqrt{6}\.\det \left(\frac{\dev_3 \log U}{\norm{\dev_3\log U}} \right) \right)\,. \] In particular \cite[Proposition 2]{sendova2005strong}, they showed that if a mapping of the form $F\mapsto\widetilde{\Psi}(\norm{\dev_3 \log U})$ is Legendre-Hadamard elliptic (i.e.\ rank-one convex), then\footnote{ Although Sendova and Walton considered strong Legendre-Hadamard ellipticity and deduced a strict version of the inequalities \eqref{noi}, their proof can easily be seen to work for (non-strict) rank-one convexity as well. } \begin{align}\label{noi} \widetilde{\Psi}^\prime(t) \geq 0\qquad \text{and}\qquad \widetilde{\Psi}^{\prime\prime}(t)\geq \left(\frac{3\,t}{8}+\frac{1}{t}\right)\widetilde{\Psi}^\prime(t) \end{align} for all $t>0$. 
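The singular-value representation of $\norm{\dev_3\log U}^2$ used here is easily confirmed numerically for diagonal $U$; a small sketch (numpy, as an independent consistency check):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.exp(rng.uniform(-2.0, 2.0, size=3))   # singular values lambda_i

log_u = np.diag(np.log(lam))                   # log U for U = diag(lam)
dev = log_u - np.trace(log_u)/3.0*np.eye(3)    # dev_3 log U
lhs = np.sum(dev**2)                           # ||dev_3 log U||^2

l1, l2, l3 = np.log(lam)
rhs = ((l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2)/3.0
print(abs(lhs - rhs) < 1e-12)   # True: the two representations agree
```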
Of course, for $W(F)=\Psi(\norm{\dev_3\log U}^2)=\widetilde{\Psi}(\norm{\dev_3\log U})$, the representations $\Psi$ and $\widetilde{\Psi}$ are connected by the equality \[ \Psi(t^2) = \widetilde{\Psi}(t)\,, \] thus rank-one convexity of $W$ implies \[ 0 \leq \widetilde{\Psi}'(t) = 2\.t\,\Psi'(t^2) \quad\implies\quad 0 \leq \Psi'(t^2) \] as well as \[ 4\.t^2\,\Psi''(t^2) + 2\.\Psi'(t^2) =\widetilde{\Psi}''(t) \geq \left(\frac{3\.t}{8}+\frac{1}{t}\right)\widetilde{\Psi}'(t) = \left(\frac{3\.t}{8}+\frac{1}{t}\right)\cdot 2\.t\.\Psi'(t^2) = \left(\frac{3\.t^2}{4}+2\right)\Psi'(t^2) \] or, equivalently, \begin{align} \Psi''(t^2) \;\geq\; \frac{3}{16}\.\Psi'(t^2) \end{align} for all $t>0$. In particular, rank-one convexity of $F\mapsto \Psi(\norm{\dev_3\log U}^2)$ therefore implies \[ \Psi'(x) \geq 0 \qquad\text{and}\qquad \Psi''(x) \geq 0 \qquad\text{ for all }\;x>0\,. \] Moreover, it can be inferred from \cite[Proposition 2]{sendova2005strong} that a necessary condition for \emph{strict} Legendre-Hadamard ellipticity of an energy $W$ with $W(F)=\Psi(\norm{\dev_3\log U}^2)$ is that $\Psi\col[0,\infty)\to \mathbb{R}$ must be strictly monotone and uniformly convex. We note that the strict monotonicity of $\Psi$ also follows from the (not necessarily strict) Legendre-Hadamard ellipticity (i.e.\ from classical rank-one convexity) if $\Psi$ is two-times continuously differentiable and $\Psi'(0)>0$, since, in that case, the convexity of $\Psi$ implies $\Psi'(x)\geq\Psi'(0)>0$ for all $x>0$. The results of this section are summarized in the following lemmas. \begin{lemma} \label{lemma:monotonicity} Let $\Psi\col[0,\infty)\to\R$ be continuously differentiable such that the mapping $F\mapsto \Psi(\norm{\log U}^2)$ is rank-one convex on $\GLpn$. Then $\Psi$ is monotone increasing. 
\end{lemma} \begin{lemma} \label{lemma:monotonicityDev} Let $\Psi\col[0,\infty)\to\R$ be continuously differentiable such that the mapping $F\mapsto \Psi(\norm{\dev_n \log U}^2)$ is rank-one convex on $\GLpn$. Then $\Psi$ is monotone increasing. \end{lemma} \begin{lemma} \label{lemma:strictMonotonicityConditionDev} Let $\Psi\col[0,\infty)\to\R$ be two-times continuously differentiable such that the mapping $F\mapsto \Psi(\norm{\dev_3 \log U}^2)$ is rank-one convex on $\GLp(3)$. Then \begin{itemize} \item[i)] $\Psi$ is convex, \item[ii)] if $\Psi'(0)>0$, then $\Psi'(x)>0$ for all $x>0$. \end{itemize} \end{lemma} \begin{remark} \label{remark:strictMonotonicityConditionLinearCompatibilityDev} The requirement $\Psi'(0)>0$ in Lemma \ref{lemma:strictMonotonicityConditionDev} is necessarily satisfied if the elastic energy $W_{\textrm{\rm iso}}\col\GLp(3)\to\R$ with $W(F)=\Psi(\norm{\dev_3\log U}^2)$ is of the form \begin{equation} \label{eq:linearCompatibilityDev} W_{\textrm{\rm iso}}(\id+H) = \mu\.\norm{\dev_3 \sym H}^2 + \mathcal{O}(\norm{H}^3) \end{equation} with $\mu>0$, where $\sym H = \frac12(H+H^T)$ is the symmetric part of $H\in\R^{3\times3}$, since \begin{align*} D_F^2 W(\id).(H,H) &=\Psi''(0)\cdot \smash{ \underbrace{ [(D_F \norm{\dev_3\log U}^2|_{F=\id}).H]^2 }_{=0 \vphantom{=2\.\norm{\dev_3 \sym H}^2}} \,+\, \Psi'(0)\cdot \underbrace{ (D_F^2 \norm{\dev_3\log U}^2|_{F=\id}).(H,H) }_{=2\.\norm{\dev_3 \sym H}^2} } \end{align*} for $H\in\R^{3\times3}$. Since a function $W_{\textrm{\rm iso}}$ depending only on $\dev_3 \log U$ is always \emph{isochoric}, i.e.\ $W_{\textrm{\rm iso}}(a\.F)=W_{\textrm{\rm iso}}(F)$ for all $a>0$, it is often coupled additively with a \emph{volumetric} function depending on $\det F = \det U$ to obtain a viable elastic energy potential $W$ of the form $W(F)=W_{\textrm{\rm iso}}(F)+W_{\textrm{\rm vol}}(\det F)$. 
In that case, $W$ can only be \emph{compatible with classical linear elasticity}, i.e.\ be of the form \begin{equation} \label{eq:linearCompatibility} W(\id+H) = \mu\.\norm{\dev_3 \sym H}^2 + \frac{\kappa}{2}\.[\tr(\sym H)]^2 + \mathcal{O}(\norm{H}^3) \end{equation} with $\mu>0$ and $\kappa>0$, if \eqref{eq:linearCompatibilityDev} is satisfied for $W_{\textrm{\rm iso}}$ and thus if $\Psi'(0)>0$. This so-called \emph{volumetric-isochoric split} is discussed further in Section \ref{volisosect}. \end{remark} \section{Functions depending on $\norm{\log U}^2$}\label{log} Although it was shown in the previous section that if an energy $W$ of the form $W(F) = \Psi(\norm{\log U}^2)$ is to be rank-one convex on $\GLpn$ the function $\Psi$ must be monotone increasing, we will assume in the following that the monotonicity of $\Psi$ is strict. In particular, this restriction excludes the trivial examples of constant functions $\Psi$, for which the energy $W$ would obviously be rank-one convex and polyconvex. Our main result is the following. \begin{proposition}\label{thlog} There is no strictly monotone function $\Psi\col[0,\infty)\to\mathbb{R}$ such that \begin{align}\label{fde1} F\mapsto W(F) = \Psi(\norm{\log U}^2) = \Psi(\norm{\log V}^2) \end{align} is rank-one convex in $\GLp(n)$, $n\geq 2$. \end{proposition} \begin{proof} Our aim is to use Lemma \ref{lemma:impossibility} and, therefore, to show that there exist $F\in\GLpn$ and $\xi,\eta\in\mathbb{R}^n\setminus \{0\}$ such that \begin{align}\label{ident1} D_F(\norm{\log U}^2).(\xi\otimes\eta)=0 \qquad \text{and}\qquad D_F^2(\norm{\log U}^2).(\xi\otimes\eta,\xi\otimes\eta)<0\,. 
\end{align} Since (see Appendix \ref{appendix:logDerivative}) \[ D_F(\norm{\log U}^2).(\xi\otimes \eta) = \iprod{2\,(\log V)\, F^{-T}, \xi\otimes \eta} \] for all $F\in\GLpn$ and all $\xi,\eta\in\R^n$, the conditions \eqref{ident1} are satisfied if there are $F\in \GLp(n)$ and two directions $\xi,\eta\neq 0$ such that \begin{align} \langle (\log V)\, \xi, F^{-T}\, \eta\rangle=0, \qquad \text{and} \qquad D_F^2(\norm{\log V}^2).(\xi\otimes\eta,\xi\otimes\eta)<0. \end{align} It is difficult to compute $D_F^2(\norm{\log V}^2).(\xi\otimes\eta,\xi\otimes\eta)$ explicitly without resorting to a cumbersome eigenvector representation. However, using the function $h\col(-\varepsilon,\varepsilon)\to \mathbb{R}_+$ with \begin{align} h(t)&=\norm{\log \sqrt{(F+t\,\xi\otimes\eta)^T(F+t\,\xi\otimes\eta)}}^2=\norm{\log \sqrt{(F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T}}^2\notag\\ &= \frac{1}{4}\norm{\log (F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T}^2=\frac{1}{4}\sum\limits_{i=1}^n\log^2 \mu_i(t), \end{align} where $\mu_i(t)$, $i=1,2,...,n$ are the eigenvalues of $(F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T$ and $\varepsilon>0$ is small enough such that $\mu_i(t)>0$ for all $i=1,2,...,n$ and all $t\in(-\varepsilon,\varepsilon)$, we may write \begin{align} h'(t)&= \frac{1}{4}\langle D_F (\norm{\log (F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T}^2),\xi \otimes \eta\rangle,\\ h''(t)&= \frac{1}{4} D^2_F (\norm{\log (F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T}^2). (\xi \otimes \eta,\xi \otimes \eta).\notag \end{align} Hence, since \begin{align} h'(0)&= \frac{1}{4}\langle D_F (\norm{\log { F\,F^T}}^2),\xi \otimes \eta\rangle \qquad\text{and}\qquad h''(0)= \frac{1}{4} D^2_F (\norm{\log { F\, F^T}}^2). (\xi \otimes \eta,\xi \otimes \eta)\,, \end{align} the conditions \eqref{ident1} are satisfied if \begin{align}\label{condh} h''(0)< 0\qquad \text{and}\qquad h'(0)&= \frac{1}{4}\langle (\log V)\, \xi, F^{-T}\, \eta\rangle=0\,. 
\end{align} We note that once the result is established for $n=2$, it can immediately be extended to arbitrary dimension $n$ (by suitable restriction). \begin{figure} \caption{\footnotesize{The function $h$ has a critical point and is concave at $t=0$.}} \label{region} \end{figure} In the two-dimensional case, the equation $\langle (\log V)\, \xi, F^{-T}\, \eta\rangle=0$ is satisfied if \begin{align}\label{zo} F^{-T}\, \eta&=a\,\begin{pmatrix} ((\log V)\, \xi)_2\\-((\log V)\, \xi)_1 \end{pmatrix} \quad\iff\quad \eta=a\, F^{T}\,\begin{pmatrix} ((\log V)\, \xi)_2\\-((\log V)\, \xi)_1 \end{pmatrix}=a\, F^{T}\,\begin{pmatrix} 0&1\\ -1&0 \end{pmatrix}(\log V)\, \xi \end{align} for some number $a\in\R$. Let $F=\matr{e^8&0\\0&e^2}$ and $\xi=\frac{\sqrt{2}}{2}\matr{1\\1}$. Then for $a=-1$, \eqref{zo} yields $\eta=\sqrt{2}\.\matr{-e^8\\4\.e^2}$. The eigenvalues $\mu_i(t)$ of $(F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T$ are given by \begin{align} \mu_1(t) &= \frac{e^4}{2}\, \left( 1 + 32\.t^2 + 8\.t + e^{12}\.(2\.t^2 - 2\.t + 1) + \sqrt{(32\.t^2 + 8\.t + 1 + e^{12}\.(2\.t^2 - 2\.t +1))^2 - 4\.e^{12}\.(3\.t + 1)^2} \,\right)\,,\nnl \mu_2(t) &= \frac{e^4}{2}\, \left( 1 + 32\.t^2 + 8\.t + e^{12}\.(2\.t^2 - 2\.t + 1) - \sqrt{(32\.t^2 + 8\.t + 1 + e^{12}\.(2\.t^2 - 2\.t +1))^2 - 4\.e^{12}\.(3\.t + 1)^2} \,\right)\,.
\nonumber \end{align} Since $\mu_1(t)\,\mu_2(t)=[\det (F + t\,\xi\otimes\eta)]^2 = e^{20}\.(3 t+1)^2$ and $\mu_1(t)>0$ for all $t\in \mathbb{R}$, the function $h$ has the form \begin{equation*} h(t) = \frac{1}{4}\, \left[ \log^2 \mu_1(t)+\log^2 \left( e^{20}\.\frac{(3t+1)^2}{\mu_1(t)} \right) \right] = \frac{1}{4}\, \left[ \log^2 \mu_1(t) + \Big( 20 + 2\.\log(3t+1) - \log\mu_1(t) \Big)^2 \right] \end{equation*} for $t\in(-\frac13,\frac13)$, and thus its derivatives are given by \begin{align} h'(t) &= \frac14\, \left[ \frac{2\.\mu_1'(t)\cdot \log \mu_1(t)}{\mu_1(t)} + \Big( 40 + 4\.\log(3\.t+1)-2\.\log(\mu_1(t)) \Big) \cdot \left( \frac{6}{3\.t+1}-\frac{\mu_1'(t)}{\mu_1(t)} \right) \right]\,,\\ h''(t) &= \frac14\, \Bigg[ \frac{2\.\mu_1''(t)\,\log\mu_1(t)}{\mu_1(t)} + \frac{2\.\mu_1'(t)^2}{(\mu_1(t))^2} + 2\,\left( \frac{6}{3\.t+1} - \frac{\mu_1'(t)}{\mu_1(t)} \right)^2 - \frac{2\.\mu_1'(t)^2\,\log\mu_1(t)}{(\mu_1(t))^2}\nnl &\qquad\quad + \Big( 40 + 4\.\log(3\.t+1)-2\.\log(\mu_1(t)) \Big) \cdot \left( \frac{\mu_1'(t)^2}{(\mu_1(t))^2} - \frac{\mu_1''(t)}{\mu_1(t)} - \frac{18}{(3\.t+1)^2} \right) \Bigg]\,. \end{align} In particular, since \begin{align} \mu_1(0)=e^{16}\,, \qquad \mu_1'(0)=-2\, e^{16}\,, \qquad \mu_1''(0)=\frac{2\, e^{16} \left(7+2\, e^{12}\right)}{e^{12}-1}\,, \end{align} we find (cf.\ Fig.\ \ref{region}) \begin{align} h'(0)&=0,\qquad h''(0)=\frac{110-2\.e^{12}}{e^{12}-1}<0\,. \end{align} In conclusion, for $\xi=\frac{\sqrt{2}}{2}\matr{1\\1}$, $\eta= \sqrt{2}\.\matr{-e^8 \\ 4\,e^2}$ and $F=\matr{e^8&0\\0&e^2}$, the desired conditions \eqref{ident1} are satisfied. Hence, according to Lemma \ref{lemma:impossibility}, the function $W$ cannot be rank-one convex.
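The two conditions $h'(0)=0$ and $h''(0)<0$ for this choice of $F$, $\xi$ and $\eta$ can also be cross-checked by finite differences, without the explicit eigenvalue formulas; a numerical sketch (numpy; merely a sanity check of the computation above):

```python
import numpy as np

e = np.e
F   = np.diag([e**8, e**2])
xi  = np.array([1.0, 1.0])/np.sqrt(2.0)
eta = np.sqrt(2.0)*np.array([-e**8, 4.0*e**2])

def h(t):
    # h(t) = 1/4 ||log B(t)||^2 with B(t) = (F + t xi eta^T)(F + t xi eta^T)^T
    A = F + t*np.outer(xi, eta)
    mu = np.linalg.eigvalsh(A @ A.T)
    return 0.25*np.sum(np.log(mu)**2)

eps = 1e-3
d1 = (h(eps) - h(-eps))/(2.0*eps)            # central difference for h'(0)
d2 = (h(eps) - 2.0*h(0.0) + h(-eps))/eps**2  # central difference for h''(0)
print(d1, d2)  # d1 close to 0, d2 close to (110 - 2 e^12)/(e^12 - 1) < 0
```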
\end{proof} \section{Functions depending on $\norm{\dev_n\log U}^2$}\label{devlog} We now consider a function $W\col\GLpn\to\R$ of the form \begin{align}\label{devf} W(F)&=\Psi(\norm{\dev_n\log U}^2)=\Psi(\norm{\dev_n\log V}^2)=\Psi\left(\frac{1}{2\,n}\sum\limits_{i,j=1}^n\log^2 \frac{\mu_i}{\mu_j}\right)\,, \end{align} where $\mu_i$, $i=1,2,...,n$ are the singular values of $F$. Since $\dev_n \log (U\inv) = -\dev_n \log U$ and $\dev_n \log (a\.U) = \dev_n \log U$ for $a>0$, it is easy to see that every function $W$ of the form \eqref{devf} is tension-compression symmetric and isochoric, i.e.\ satisfies \begin{align} W(F)=W(F^{-1})\qquad \text{and}\qquad W(a\.F)=W(F) \end{align} for all $F\in\GLpn$ and all $a>0$; note that, in particular, \begin{align} W(F) = W\left(\frac{F}{(\det F)^{\afrac1n}}\right) \qquad\text{ for all }\; F\in\GLp(n)\,. \end{align} Furthermore, in the planar case, i.e.\ for $n=2$, every objective, isotropic and isochoric energy $W\col\GL^+(2)\to\R$ can be written in the form \eqref{devf} with a unique function $\Psi\col[0,\infty)\to\R$, and the rank-one convexity is characterized by the following result \cite{agn_martin2015rank,agn_ghiba2017SL}: \begin{proposition} \label{prop:mainResultInTermsOfLogSquared} Let $W\col\GL^+(2)\to\R,\;F\mapsto W(F)$ be an objective, isotropic and isochoric function and let $\Psi\col[0,\infty)\to\R$ denote the uniquely determined function with \[ W(F)=\Psi(\norm{\dev_2\log U}^2) \] for all $F\in\GL^+(2)$ with singular values $\lambda_1,\lambda_2$. If $\Psi\in C^2([0,\infty))$, then the following are equivalent: \begin{itemize} \item[i)] $W$ is polyconvex, \item[ii)] $W$ is rank-one convex, \item[iii)] $2\,\eta\,\Psi^{\prime\prime}(\eta)+ (1-\sqrt{2\,\eta})\,\Psi^{\prime}(\eta)\geq 0$ \quad for all $\eta\in(0,\infty)$. \end{itemize} \end{proposition} \noindent For example, the energy $W\col\GL^+(2)\to\R$ with $W(F)=e^{k\norm{\dev_2\log U}^2}$ is polyconvex for $k\geq \frac{1}{4}$.
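The threshold $k\geq\frac14$ can be recovered from condition iii): for $\Psi(\eta)=e^{k\eta}$, substituting $u=\sqrt{2\eta}$ reduces the inequality to $k\,u^2-u+1\geq0$, whose minimum over $u>0$ is $1-\frac{1}{4k}$, nonnegative precisely for $k\geq\frac14$. A symbolic sketch (sympy; a verification, not part of the cited proof):

```python
import sympy as sp

k = sp.symbols('k', positive=True)
eta, u = sp.symbols('eta u', positive=True)

Psi = sp.exp(k*eta)
cond = 2*eta*sp.diff(Psi, eta, 2) + (1 - sp.sqrt(2*eta))*sp.diff(Psi, eta)

# Substitute eta = u^2/2 and divide by the positive factor k*exp(k*u^2/2):
reduced = sp.simplify(cond.subs(eta, u**2/2)/(k*sp.exp(k*u**2/2)))
assert sp.simplify(reduced - (k*u**2 - u + 1)) == 0

umin = sp.solve(sp.diff(reduced, u), u)[0]   # u = 1/(2k)
minval = sp.simplify(reduced.subs(u, umin))
print(minval)   # equals 1 - 1/(4*k): nonnegative iff k >= 1/4
```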
In the three-dimensional case, however, not every function $W$ of the form $W(F)=\Psi(\norm{\dev_3\log V}^2)$ such that $\Psi$ satisfies condition iii) in Proposition \ref{prop:mainResultInTermsOfLogSquared} is polyconvex or even rank-one convex (e.g.\ the mapping $F\mapsto e^{k\norm{\dev_3\log V}^2}$). In fact, there exists no strictly monotone function $\Psi$ such that $W$ is rank-one convex, as the following result shows. \begin{proposition}\label{notrankonedev} For $n\geq3$, there is no strictly monotone function $\Psi\col[0,\infty)\to\mathbb{R}$ such that \begin{align}\label{fde2} F\mapsto W(F)=\Psi(\norm{\dev_n\log V}^2) \end{align} is rank-one convex in $\GLp(n)$. \end{proposition} \begin{remark} According to Remark \ref{remark:strictMonotonicityConditionLinearCompatibilityDev}, for a sufficiently smooth function $W$ on $\GLp(3)$, the condition of strict monotonicity can be replaced by the requirement that $W$ is compatible with linear elasticity. \end{remark} \begin{proof} Without loss of generality, we consider only the case $n=3$, since the result may be extended to arbitrary dimension $n\geq3$ by a suitable restriction. The idea of the proof is similar to that of Proposition \ref{thlog}, i.e.\ we need to find $F\in\GLpn$ and $\xi,\eta\in\mathbb{R}^n\setminus \{0\}$ such that \begin{align}\label{ident2} D_F(\norm{\dev_3\log U}^2).(\xi\otimes\eta)=0 \qquad \text{and}\qquad D_F^2(\norm{\dev_3\log U}^2).(\xi\otimes\eta,\xi\otimes\eta)<0\,. \end{align} Since \begin{align} \label{eq:firstDerivativeDevLog} D_F(\norm{\dev_3\log U}^2).(\xi\otimes\eta) = \langle 2\,(\dev_3\log V)\, F^{-T}, \xi\otimes \eta\rangle = 2\, \langle (\dev_3\log V)\.\xi,\, F^{-T}\eta \rangle\,, \end{align} conditions \eqref{ident2} can be restated as \begin{align}\label{dasp} \langle (\dev_3\log V)\, \xi, F^{-T}\, \eta\rangle=0 \qquad \text{and} \qquad D_F^2(\norm{\dev_3\log V}^2).(\xi\otimes\eta,\xi\otimes\eta)<0. 
\end{align} For given fixed $F\in \GLp(3)$ and $\xi\in \mathbb{R}^3$ such that $(\dev_3\log V)\,\xi\neq 0$, a solution $\eta\in \mathbb{R}^3$ of equation \eqref{dasp}$_1$ is given by \begin{align}\label{m0} F^{-T}\eta=\begin{pmatrix} 0&1&0\\ -1&0&0\\ 0&0&0 \end{pmatrix}\,\vartheta \;\equalscolon\; m_0, \qquad \vartheta=\frac{(\dev_3\log V)\, \xi}{\norm{(\dev_3\log V)\,\xi}} \,. \end{align} \begin{figure} \caption{\footnotesize{Construction of the counterexample.}} \label{rotire} \end{figure} \noindent More generally, any $m\in \mathbb{R}^3$ obtained by arbitrary rotation $Q(\vartheta,\theta)$, $\theta\in [0,2\, \pi)$ of $m_0$ (as given in \eqref{m0}) around $\vartheta$ provides a solution of \eqref{dasp}$_1$, i.e.\ for given fixed $F\in \GLp(3)$ and $\xi\in \mathbb{R}^3$, any $\eta\in \mathbb{R}^3$ given by \begin{align} F^{-T}\eta&=m=Q(\vartheta,\theta)\,m_0= Q(\vartheta,\theta)\,\begin{pmatrix} 0&1&0\\ -1&0&0\\ 0&0&0 \end{pmatrix}\vartheta\,, \notag\\ \implies\quad &\eta=F^{T} Q(\vartheta,\theta)\,\begin{pmatrix} 0&1&0\\ -1&0&0\\ 0&0&0 \end{pmatrix}\,\frac{(\dev_3\log V)\, \xi}{\norm{(\dev_3\log V)\,\xi}} \label{eq:orthogonalEta} \end{align} is a solution to equation \eqref{dasp}$_1$.
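A randomized numerical test of the construction \eqref{eq:orthogonalEta} (Python/numpy; the rotation is assembled from the Rodrigues formula, and the helper \texttt{anti} is ours):

```python
import numpy as np

rng = np.random.default_rng(7)

def anti(v):
    # cross-product (spin) matrix of v
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

F = rng.standard_normal((3, 3))
if np.linalg.det(F) < 0:       # enforce F in GL+(3)
    F[:, 0] = -F[:, 0]
xi = rng.standard_normal(3)
theta = rng.uniform(0.0, 2.0*np.pi)

# left stretch: log V = (1/2) log(F F^T) via spectral decomposition
w, P = np.linalg.eigh(F @ F.T)
log_V = 0.5*(P @ np.diag(np.log(w)) @ P.T)
dev_log_V = log_V - np.trace(log_V)/3.0*np.eye(3)

v = dev_log_V @ xi
vt = v/np.linalg.norm(v)                       # the axis vartheta
m0 = np.array([[0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]]) @ vt
R = (np.cos(theta)*np.eye(3) + np.sin(theta)*anti(vt)
     + (1.0 - np.cos(theta))*np.outer(vt, vt))  # Rodrigues formula
eta = F.T @ R @ m0

val = v @ np.linalg.solve(F.T, eta)   # <(dev3 log V) xi, F^{-T} eta>
print(abs(val) < 1e-10)               # True: the first condition in (dasp) holds
```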
Recall that for a unit vector $\vartheta=(\vartheta_1,\vartheta_2,\vartheta_3)^T$, the matrix for a rotation by an angle $\theta\in[0,2\,\pi)$ about the axis $\vartheta$ is given by \begin{align} Q(\vartheta,\theta)&=\begin{pmatrix} \cos \theta +\vartheta_1^2 \left(1-\cos \theta\right) & \vartheta_1 \vartheta_2 \left(1-\cos \theta\right) - \vartheta_3 \sin \theta & \vartheta_1 \vartheta_3 \left(1-\cos \theta\right) + \vartheta_2 \sin \theta \\ \vartheta_2 \vartheta_1 \left(1-\cos \theta\right) + \vartheta_3 \sin \theta & \cos \theta + \vartheta_2^2\left(1-\cos \theta\right) & \vartheta_2 \vartheta_3 \left(1-\cos \theta\right) - \vartheta_1 \sin \theta \\ \vartheta_3 \vartheta_1 \left(1-\cos \theta\right) - \vartheta_2 \sin \theta & \vartheta_3 \vartheta_2 \left(1-\cos \theta\right) + \vartheta_1 \sin \theta & \cos \theta + \vartheta_3^2\left(1-\cos \theta\right) \end{pmatrix} \notag\\ &= \cos\theta\, \id_3 + \sin\theta\,{\rm anti}(\vartheta) + (1-\cos\theta)\,{\vartheta}\otimes{\vartheta}\,, \label{eq:rotationMatrix} \end{align} where \begin{align} {\rm anti}(\vartheta)= \begin{pmatrix} 0 & -\vartheta_3 & \vartheta_2 \\[3pt] \vartheta_3 & 0 & -\vartheta_1 \\[3pt] -\vartheta_2 & \vartheta_1 & 0 \end{pmatrix} \end{align} is the cross-product matrix of $\vartheta$. Again, computing $D_F^2(\norm{\dev_3\log V}^2).(\xi\otimes\eta,\xi\otimes\eta)$ explicitly is rather inconvenient. 
We therefore introduce the function $h\col(-\varepsilon,\varepsilon)\to \mathbb{R}_+$ given by \begin{align} h(t)&=\norm{\dev_3\log \sqrt{(F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T}}^2= \frac{1}{4}\norm{\dev_3\log (F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T}^2\notag\\ &=\frac{1}{12}\left(\log^2 \frac{\mu_1}{\mu_2}+\log^2 \frac{\mu_2}{\mu_3}+\log^2 \frac{\mu_3}{\mu_1}\right)\,, \label{eq:devLogRatios} \end{align} where $\mu_i(t)$, $i=1,2,3$, are the eigenvalues of $(F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T$ and $\varepsilon>0$ is small enough such that $\mu_i(t)>0$, $i=1,2,3$, for all $t\in(-\varepsilon,\varepsilon)$. Then \begin{align} h'(t)&= \frac{1}{4}\langle D_F (\norm{\dev_3\log (F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T}^2),\xi \otimes \eta\rangle\,,\notag\\ h''(t)&= \frac{1}{4} D^2_F (\norm{\dev_3\log (F+t\,\xi\otimes\eta)(F+t\,\xi\otimes\eta)^T}^2). (\xi \otimes \eta,\xi \otimes \eta) \label{eq:hDevLogSecondDerivative} \end{align} and thus \begin{align} h'(0)&= \frac{1}{4}\langle D_F (\norm{\dev_3\log { F\,F^T}}^2),\xi \otimes \eta\rangle\,,\qquad h''(0)= \frac{1}{4} D^2_F (\norm{\dev_3\log { F\,F^T}}^2). (\xi \otimes \eta,\xi \otimes \eta)\,. \label{eq:hDevLogSecondDerivativeZero} \end{align} Due to \eqref{eq:orthogonalEta} and \eqref{eq:hDevLogSecondDerivativeZero}, in order to fulfil \eqref{dasp}, it is sufficient to find $F\in\GLp(3)$, $\xi\in\R^3$ and $\theta\in[0,2\pi)$ such that \begin{align} \label{ineq223} h''(0) < 0 \qquad \text{for}\qquad \eta&=F^{T} Q(\vartheta,\theta)\,\begin{pmatrix} 0&1&0\\ -1&0&0\\ 0&0&0 \end{pmatrix}\frac{(\dev_3\log V)\, \xi}{\norm{(\dev_3\log V)\,\xi}}\,. \end{align} Let \begin{align} \xi=\left( \begin{array}{c} 0 \\ \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \\ \end{array} \right)\,, \qquad F=\begin{pmatrix} 1&0&0\\ 0&e^{20}&0\\ 0&0& e^{15} \end{pmatrix}\quad \text{and}\quad \theta=\frac{\pi}{2}\,.
\end{align} Then \begin{equation*} \vartheta \;=\; \frac{(\dev_3\log V)\, \xi}{\norm{(\dev_3\log V)\,\xi}} \;=\; \frac{1}{\sqrt{29}}\,\left(\begin{array}{c}0\\5\\2\end{array}\right)\,, \qquad Q(\vartheta,\theta) \;=\; {\rm anti}(\vartheta)+\vartheta\otimes \vartheta \;=\; \frac{1}{29}\,\left(\begin{array}{ccc}0 & -2\,\sqrt{29} & 5\,\sqrt{29} \\ 2\,\sqrt{29} & 25 & 10 \\ -5\,\sqrt{29} & 10 & 4\end{array}\right) \end{equation*} and \begin{equation} \eta \;=\; F^{T} Q(\vartheta,\theta)\,\begin{pmatrix}0&1&0\\-1&0&0\\0&0&0\end{pmatrix}\,\frac{(\dev_3\log V)\, \xi}{\norm{(\dev_3\log V)\,\xi}} \;=\; \frac{1}{29}\left( \begin{array}{c}0\\10\, e^{20}\\-25\, e^{15}\end{array}\right)\,. \end{equation} For these choices and for $t\in \left(-\frac{29 \sqrt{2}}{15},\frac{29 \sqrt{2}}{15}\right)$, we directly compute \begin{align} \mu_1(t)&=1,\notag\\ \mu_2(t)&=\frac{1}{1682}\Bigg[e^{40} \left(100\, t^2+290\, \sqrt{2}\, t+841\right)+e^{30} \left(625\, t^2-725\, \sqrt{2}\, t+841\right)\\& -\frac{1}{2} \sqrt{4\, e^{60}\, \left(e^{10} \left(100\, t^2+290\, \sqrt{2} t+841\right)+625\, t^2-725 \,\sqrt{2} t+841\right)^2-6728\, e^{70} \left(15 \,t-29\, \sqrt{2}\right)^2}\Bigg],\notag\\ \mu_3(t)&=\frac{1}{1682}\Bigg[e^{40} \left(100\, t^2+290\, \sqrt{2}\, t+841\right)+e^{30} \left(625\, t^2-725\, \sqrt{2}\, t+841\right)\notag\\& +\frac{1}{2} \sqrt{4\, e^{60}\, \left(e^{10} \left(100\, t^2+290\, \sqrt{2} t+841\right)+625\, t^2-725 \,\sqrt{2} t+841\right)^2-6728\, e^{70} \left(15 \,t-29\, \sqrt{2}\right)^2}\Bigg]\,.\notag \end{align} Since $\mu_2(t)\,\mu_3(t)=[\det (F + t\,\xi\otimes\eta)]^2=\frac{e^{70} \left(15 \,t-29 \sqrt{2}\right)^2}{1682}$, and due to \eqref{eq:devLogRatios}, the function $h$ is given by \begin{align} h(t)=\frac{1}{12} \left[\log^2 (\mu_3(t))+\log^2\left(\mu_3(t)\,\frac{1682}{e^{70} \left(15\, t-29 \sqrt{2}\right)^2}\right)+ \log^2\left(\mu_3^2(t)\,\frac{1682}{e^{70} \left(15\, t-29 \sqrt{2}\right)^2}\right)\right]\,, \end{align} and thus \begin{align} h'(t)&=\frac{1}{6 \left(29 \,\sqrt{2}-15\,
t\right) \mu_3(t)}\Bigg[3 \left(29 \,\sqrt{2}-15 \,t\right) \mu_3'(t) \log \frac{1682\, \mu_3^2(t)}{e^{70} \left(29\, \sqrt{2}-15\, t\right)^2}\notag\\&\qquad\qquad\qquad\qquad\qquad\qquad+30\, \mu_3(t) \log \frac{1682^2 \mu_3^3(t)}{e^{140} \left(29 \,\sqrt{2}-15\, t\right)^4}\Bigg]\,,\notag\\ h''(t)&=\frac{1}{2 \left(29\sqrt{2}-15 \,t\right)^2 \mu_3^2(t)}\Bigg[-\left(29 \sqrt{2}-15\, t\right)^2 (\mu_3'(t))^2 \left(\log \frac{1682 \,\mu_3^2(t)}{e^{70} \left(29 \sqrt{2}-15\, t\right)^2}-2\right)\\&\qquad\qquad\qquad\qquad\qquad\qquad + \left(29 \sqrt{2}-15\, t\right)^2 \mu_3(t)\,\mu_3''(t) \log \frac{1682\, \mu_3^2(t)}{e^{70} \left(29 \sqrt{2}-15\, t\right)^2}\notag\\&\qquad\qquad\qquad\qquad\qquad\qquad+60 \left(29 \sqrt{2}-15\, t\right) \mu_3(t)\,\mu_3'(t)+150\,\mu_3^2(t) \left(\log \frac{1682^2 \mu_3^3(t)}{e^{140} \left(29 \sqrt{2}-15\, t\right)^4}+4\right)\Bigg]\,.\notag \end{align} At $t=0$, we find \begin{align} h'(0)&=\frac{1}{174 \sqrt{2}\, \mu_3(0)}\Big[87 \,\sqrt{2} \,\mu_3'(0) \log \frac{\mu_3^2(0)}{e^{70}}+30\, \mu_3(0) \,\log \frac{\mu_3^3(0)}{e^{140}}\Big],\notag\\ h''(0)&=\frac{1}{3364 \,\mu_3^2(0)}\Big[-1682\, (\mu_3'(0))^2 \left(\log \frac{\mu_3^2(0)}{e^{70}}-2\right)+ 1682\,\mu_3(0)\,\mu_3''(0) \log \frac{\mu_3^2(0)}{e^{70}}\\&\qquad\qquad\qquad\quad+1740 \sqrt{2}\, \mu_3(0)\,\mu_3'(0)+150\, \mu_3^2(0) \left(\log \frac{\mu_3^3(0)}{e^{140}}+4\right)\Big]\,\notag \end{align} and since $\mu_3(0)=e^{40}$, $\mu_3'(0)=\frac{10\, \sqrt{2}\, e^{40}}{29}$, $\mu_3''(0)=\frac{25\, \left(e^{40}+8\, e^{50}\right)}{841 \,\left(e^{10}-1\right)}$, we finally obtain (cf.\ Fig.\ \ref{mdev2}) \begin{align} h'(0)=0\,,\qquad h''(0)=-\frac{25 \left(4\, e^{10}-49\right)}{841 \left(e^{10}-1\right)}<0, \end{align} completing the proof of the non-rank-one-convexity of $W$. 
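As in the planar case, the values $h'(0)=0$ and $h''(0)<0$ can be cross-checked by finite differences. Since $\xi_1=\eta_1=0$, we have $\mu_1(t)=1$, and it is numerically advisable to diagonalize only the lower $2\times2$ block (otherwise the eigenvalue $1$ is swamped by the $e^{40}$-sized entries in double precision). A sketch (numpy; merely a sanity check):

```python
import numpy as np

e = np.e
F   = np.diag([1.0, e**20, e**15])
xi  = np.array([0.0, 1.0, 1.0])/np.sqrt(2.0)
eta = np.array([0.0, 10.0*e**20, -25.0*e**15])/29.0

def h(t):
    # mu_1(t) = 1, so only the lower 2x2 block of F + t xi eta^T matters
    A = (F + t*np.outer(xi, eta))[1:, 1:]
    mu2, mu3 = np.linalg.eigvalsh(A @ A.T)
    l2, l3 = np.log(mu2), np.log(mu3)
    # (1/12)(log^2(mu1/mu2) + log^2(mu2/mu3) + log^2(mu3/mu1)) with mu1 = 1
    return (l2**2 + (l2 - l3)**2 + l3**2)/12.0

eps = 1e-3
d1 = (h(eps) - h(-eps))/(2.0*eps)
d2 = (h(eps) - 2.0*h(0.0) + h(-eps))/eps**2
print(d1, d2)  # d1 close to 0, d2 close to -25(4 e^10 - 49)/(841(e^10 - 1))
```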
\begin{figure} \caption{\footnotesize{Graphical representation of $h$, clearly showing the concave critical point at $t=0$.}} \label{mdev2} \end{figure} \end{proof} \subsection{Isochoric, tension-compression symmetric energies} \label{section:isochoricTCsymmetricEnergies} For $n\geq3$, not every isotropic energy on $\GLpn$ which is isochoric as well as tension-compression symmetric, i.e.\ satisfies $W(F) = W(F^{-1}) = W(a\.F)$ for all $F\in\GLpn$ and all $a>0$, can be represented in terms of $\norm{\dev_n\log U}^2$ alone. An example of such a function which cannot be written in the form $W(F)=\Psi(\norm{\dev_n \log U}^2)$ is given by $W\col\mathrm{GL}^+(3)\to\mathbb{R}$ with \[ W(F) = [\det(\dev_3 \log U)]^2\,. \] It is straightforward to check that $W$ is isochoric and tension-compression symmetric. Furthermore, for \begin{equation}\label{eq:isochoricRepresentationCounterexamples} U_1 = \matr{e&0& 0\\0&e&0\\0&0&e^{-2}} \qquad\text{ and }\qquad U_2 = \matr{e^{\sqrt{3}}&0& 0\\0&1&0\\0&0&e^{-\sqrt{3}}}\,, \end{equation} we find \[ W(U_1) = \left[\det \matr{1&0& 0\\0&1&0\\0&0&{-2}}\right]^2 = 4 \neq 0 = \left[\det \matr{{\sqrt{3}}&0& 0\\0&0&0\\0&0&{-\sqrt{3}}}\right]^2 = W(U_2)\,. \] However, since \[ \norm{\dev_3 \log U_1}^2 = \left\lVert\dev_3 \matr{1&0& 0\\0&1&0\\0&0&{-2}}\right\rVert^2 = 6 = \left\lVert\dev_3 \matr{{\sqrt{3}}&0& 0\\0&0&0\\0&0&{-\sqrt{3}}}\right\rVert^2 = \norm{\dev_3 \log U_2}^2\,, \] the equality $\Psi(\norm{\dev_3 \log U_1}^2)=\Psi(\norm{\dev_3 \log U_2}^2)$ must hold for all functions $\Psi\col[0,\infty)\to\R$. Therefore $W$ cannot be expressed in the form $W(F)=\Psi(\norm{\dev_3 \log U}^2)$. Since our attempts to find a function $\Psi$ such that $F\mapsto\Psi(\norm{\dev_3 \log U}^2)$ is rank-one convex on $\GLp(3)$ turned out to be in vain, we considered the possibility that a tension-compression symmetric and isochoric elastic energy on $\GLp(3)$ cannot be rank-one convex in general.
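The computations for $U_1$ and $U_2$ above can be confirmed numerically: $W$ separates the two stretches while $\norm{\dev_3\log U}^2$ does not. A sketch (numpy; a verification only):

```python
import numpy as np

log_U1 = np.diag([1.0, 1.0, -2.0])                    # log U_1
log_U2 = np.diag([np.sqrt(3.0), 0.0, -np.sqrt(3.0)])  # log U_2

def dev3(X):
    # deviatoric (trace-free) part of a 3x3 matrix
    return X - np.trace(X)/3.0*np.eye(3)

W1, W2 = (np.linalg.det(dev3(L))**2 for L in (log_U1, log_U2))
n1, n2 = (np.sum(dev3(L)**2) for L in (log_U1, log_U2))
print(W1, W2)   # approx. 4 and 0: W distinguishes U_1 from U_2 ...
print(n1, n2)   # approx. 6 and 6: ... while ||dev_3 log U||^2 does not
```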
However, as the following example demonstrates, this assumption turned out to be false, showing that the non-ellipticity of \eqref{devf} for $n=3$ is a drawback particular to the logarithmic formulation itself. Consider the invariants $\Ihat_1,\Ihat_2,\Ihat_3\col\GLp(3)\to\mathbb{R}$ defined by \begin{equation}\label{eq:isochoricInvariants} \Ihat_1(F) = \frac{\lambda_1^2}{\lambda_2\.\lambda_3}\,,\qquad\qquad \Ihat_2(F) = \frac{\lambda_1\.\lambda_2}{\lambda_3^2}\,,\qquad\qquad \Ihat_3(F) = \lambda_1\.\lambda_2\.\lambda_3 \end{equation} for $F\in\GLp(3)$ with (ordered) singular values $\lambda_1\geq\lambda_2\geq\lambda_3$. Then $\Ihat_1$ and $\Ihat_2$ are isochoric, i.e.\ $\Ihat_i(a\.F)=\Ihat_i(F)$ for all $a>0$ and $i\in\{1,2\}$. Furthermore,\footnote{Note that the largest singular value of $F\inv$ is $\frac{1}{\lambda_3}$.} \[ \Ihat_1(F^{\inv}) = \frac{\lambda_3^{-2}}{\lambda_2^{-1}\.\lambda_1^{-1}} = \frac{\lambda_1\.\lambda_2}{\lambda_3^2} = \Ihat_2(F) \] and \[ \Ihat_2(F\inv) = \Ihat_1((F\inv)\inv) = \Ihat_1(F)\,. \] \begin{lemma} \label{lemma:multiplicativeInvariantsPolyconvexity} The functions $\Ihat_1$, $\Ihat_2$ and $\Ihat_3$ are polyconvex. \end{lemma} \begin{proof} We use the representations \begin{align} \widehat{I}_1(F) &= \frac{\lambda_1^2}{\lambda_2\.\lambda_3} = \frac{\lambda_1^3}{\lambda_1\.\lambda_2\.\lambda_3} = \frac{\norm{F}_2^3}{\det F}\,,\notag\\ \widehat{I}_2(F) &= \frac{\lambda_1\.\lambda_2}{\lambda_3^2} = \lambda_1\.\lambda_2\.\lambda_3 \cdot \left(\frac{1}{\lambda_3}\right)^3 = (\det F)\cdot \norm{F\inv}_2^3 = w^*(F)\,,\\ \widehat{I}_3(F) &= \lambda_1\.\lambda_2\.\lambda_3 = \det F\,,\notag \end{align} where $w^*$ denotes the \emph{Shield transformation}\footnote{The Shield transformation $W^*$ of $W\col\GLpn\to\R$ is given by $W^*(F)=(\det F)\cdot W(F\inv)$.} \cite{shield1967inverse} of the function $w\col\GLp(3)\to\mathbb{R}$ with $w(F)=\norm{F}_2^3$.
The polyconvexity of $\widehat{I}_3$ is obvious, the proof of the polyconvexity of $\widehat{I}_1$ can be adapted from \cite[Lemma 3.2]{agn_ghiba2015exponentiated} and $\widehat{I}_2$ is polyconvex as the Shield transformation of the polyconvex mapping $w$ \cite{silhavy1997mechanics}. \end{proof} \begin{proposition} \label{prop:isochoricTCsymmetricEnergy} The energy function $W\colon\GLp(3)\to\mathbb{R}$ with \[ W(F) = \Ihat_1(F) + \Ihat_2(F) = \frac{\lambda_1^2}{\lambda_2\.\lambda_3} + \frac{\lambda_1\.\lambda_2}{\lambda_3^2} \] for all $F\in\GLp(3)$ with ordered singular values $\lambda_1\geq\lambda_2\geq\lambda_3$ is isochoric, tension-compression symmetric and polyconvex. \end{proposition} \begin{proof} The polyconvexity of $W$ follows directly from the polyconvexity of $\Ihat_1$ and $\Ihat_2$, see Lemma \ref{lemma:multiplicativeInvariantsPolyconvexity}. Similarly, $W$ is isochoric as the sum of the isochoric functions $\Ihat_1,\Ihat_2$. Furthermore, \[ W(F\inv) = \Ihat_1(F\inv) + \Ihat_2(F\inv) = \Ihat_2(F) + \Ihat_1(F) = W(F)\,, \] thus $W$ is tension-compression symmetric as well. \end{proof} Since polyconvexity implies rank-one convexity, the energy function $W$ given in Proposition \ref{prop:isochoricTCsymmetricEnergy} is an example of a rank-one convex energy on $\GLp(3)$ which is isochoric and tension-compression symmetric. However, $W$ cannot be expressed as a function of $\norm{\dev_3 \log U}^2$, since for $U_1,U_2$ as defined in \eqref{eq:isochoricRepresentationCounterexamples}, we find \[ W(U_1) = \frac{e^2}{e\cdot e^{-2}} + \frac{e\cdot e}{(e^{-2})^2} = e^3+e^6 \;\neq\; 2\.e^{3\sqrt{3}} = \frac{(e^{\sqrt{3}})^2}{1\cdot e^{-\sqrt{3}}} + \frac{e^{\sqrt{3}}\cdot1}{(e^{-\sqrt{3}})^2} = W(U_2)\,. 
\] \section{The volumetric-isochoric split}\label{volisosect} A function $W$ on $\GLpn$ is called \emph{volumetric-isochorically split} if it is of the form \begin{align} W(F)=W_{\textrm{\rm iso}}(F)+W_{\textrm{\rm vol}}(\det F) \end{align} with a function $W_{\textrm{\rm vol}}\col[0,\infty)\to\R$ and an objective, isotropic function $W_{\textrm{\rm iso}}\col\GLpn\to \mathbb{R}$ which is additionally isochoric, i.e.\ satisfies $W_{\textrm{\rm iso}}(a\,F)=W_{\textrm{\rm iso}}(F)$ for all $F\in\GLpn$ and all $a>0$. In the two-dimensional case, every volumetric-isochorically split energy $W\col\GLp(2)\to \mathbb{R}_+$ can be written in the form \cite{agn_martin2015rank} \begin{align} \label{eq:isoVolDecoupled} W(F)=\Psi(\norm{\dev_2\log U}^2)+W_{\textrm{\rm vol}}(\det F) \end{align} with some function $\Psi\col\mathbb{R}_+\to \mathbb{R}_+$. According to Proposition \ref{prop:mainResultInTermsOfLogSquared} (cf.\ \cite[page 213]{Dacorogna08}), an energy $W$ of the form \eqref{eq:isoVolDecoupled} is rank-one convex for any convex function $W_{\textrm{\rm vol}}$ and any function $\Psi$ satisfying the inequality $2\,\eta\,\Psi^{\prime\prime}(\eta)+ (1-\sqrt{2\,\eta})\,\Psi^{\prime}(\eta)\geq 0$ for all $\eta\in(0,\infty)$. Hence, all the functions $W_{_{\rm eH}}\col\GLp(2)\to \R$ from the family of exponentiated Hencky type energies \cite{agn_neff2015exponentiatedI,agn_schroeder2017exponentiated,agn_nedjar2017finite} \begin{align} W_{_{\rm eH}}(F)= \displaystyle\frac{\mu}{k}\,e^{k\,\norm{{\rm dev}_n\log U}^2}+\frac{\kappa}{2\widehat{k}}\,e^{\widehat{k}\,(\log \det U)^2} \end{align} are {rank-one convex} for $\mu>0, \kappa>0$, $k\geq\displaystyle\frac{1}{4}$ and $\widehat{k}\geq\displaystyle\frac{1}{8}$. In the three-dimensional case, not every objective, isotropic and isochoric energy function can be written as a function of $\norm{\dev_3\log V}^2$.
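The sufficient condition $2\,\eta\,\Psi''(\eta)+(1-\sqrt{2\,\eta})\,\Psi'(\eta)\geq 0$ quoted above can be checked numerically for the isochoric exponentiated Hencky term $\Psi(\eta)=\frac{\mu}{k}\,e^{k\eta}$ at the borderline exponent $k=\frac14$. The following sketch (an illustration only, not part of the original argument; $\mu=1$ is chosen for convenience) evaluates the left-hand side on a fine grid and finds its minimum close to zero, attained near $\eta=2$, which indicates that $k=\frac14$ is sharp.

```python
import numpy as np

# Condition: 2*eta*Psi''(eta) + (1 - sqrt(2*eta))*Psi'(eta) >= 0
# for Psi(eta) = (mu/k) * exp(k*eta) with mu = 1, k = 1/4.
mu, k = 1.0, 0.25
eta = np.linspace(1e-6, 50.0, 200001)
Psi1 = mu * np.exp(k * eta)        # Psi'(eta)
Psi2 = mu * k * np.exp(k * eta)    # Psi''(eta)
cond = 2.0 * eta * Psi2 + (1.0 - np.sqrt(2.0 * eta)) * Psi1

print(cond.min())  # close to 0, attained near eta = 2
```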
It is also known that there exist volumetric-isochorically decoupled energies which are rank-one convex, see for example Section \ref{section:isochoricTCsymmetricEnergies}. However, we show that \begin{theorem} For $n\geq3$, there do not exist two-times continuously differentiable functions $\Psi\col[0,\infty)\to\mathbb{R}$ and $W_{\textrm{\rm vol}}\col[0,\infty)\to \mathbb{R}$ with $\Psi'(0)>0$ such that \begin{align} W(F)=\Psi(\norm{\dev_n\log V}^2)+W_{\textrm{\rm vol}}(\det F) \end{align} is rank-one convex on $\GLp(n)$. \end{theorem} \begin{remark} According to Remark \ref{remark:strictMonotonicityConditionLinearCompatibilityDev}, the condition $\Psi'(0)>0$ can be replaced by the requirement that $W$ is compatible with linear elasticity in the three-dimensional case. \end{remark} \begin{proof} We prove the result only for $n=3$. First, we compute the second derivative for the volumetric part. Since \begin{align} \det(F+H)&= \det F+\langle H,\Cof F\rangle +\langle \Cof H,F\rangle+\det H ,\notag \end{align} we find \begin{align} D_F(\det F).H= \langle H,\Cof F\rangle = \det F\, \langle F^{-1}H,\id\rangle=\det F\, \tr(F^{-1}H) \end{align} as well as \begin{align} D_F^2(\det F).(H,H)=2\, \langle \Cof H,F\rangle \,. \end{align} With $H=\xi\otimes \eta$, using that $\Cof (\xi\otimes \eta)=0$, we thus find \begin{align} D_F^2W_{\textrm{\rm vol}}(\det F).(H,H)&=W_{\textrm{\rm vol}}^{\prime\prime}(\det F)\, [D_F(\det F).H]^2+W_{\textrm{\rm vol}}^{\prime}(\det F) \,\smash{\underbrace{D_F^2(\det F).(H,H)}_{=0}}\notag\\ &=W_{\textrm{\rm vol}}^{\prime\prime} (\det F)\,[\det F\, \langle F^{-1}H,\id\rangle]^2. 
\end{align} Regarding the isochoric part, we recall that \begin{align} D_F W_{\textrm{\rm iso}}(F).\xi\otimes\eta&=\Psi'(\norm{\dev_3\log V}^2)\cdot D_F(\norm{\dev_3\log V}^2).\xi\otimes\eta,\notag\\ D_F^2 W_{\textrm{\rm iso}}(F).(\xi\otimes\eta,\xi\otimes\eta)&=\Psi''(\norm{\dev_3\log V}^2)\cdot [D_F(\norm{\dev_3\log V}^2).\xi\otimes\eta]^2\\ &\qquad+\Psi'(\norm{\dev_3\log V}^2)\cdot D_F^2(\norm{\dev_3\log V}^2).(\xi\otimes\eta,\xi\otimes\eta).\notag \end{align} Therefore, the rank-one convexity condition for the total energy $W$ reads \begin{align} 0 &\leq \Psi''(\norm{\dev_3\log V}^2)\cdot [D_F(\norm{\dev_3\log V}^2).\xi\otimes\eta]^2\\ &\quad+\Psi'(\norm{\dev_3\log V}^2)\cdot D_F^2(\norm{\dev_3\log V}^2).(\xi\otimes\eta,\xi\otimes\eta) + W_{\textrm{\rm vol}}^{\prime\prime}(\det F)\, \,(\det F)^2\, \langle F^{-1}(\xi\otimes\eta) ,\id\rangle^2\notag \end{align} or, equivalently (cf.\ \eqref{eq:firstDerivativeDevLog}), \begin{align}\label{condalt} 0 &\leq 4\,\Psi''(\norm{\dev_3\log V}^2)\cdot \langle (\dev_3\log V)\, \xi, F^{-T}\, \eta\rangle^2\notag\\ &\quad+\Psi'(\norm{\dev_3\log V}^2)\cdot D_F^2(\norm{\dev_3\log V}^2).(\xi\otimes\eta,\xi\otimes\eta) + W_{\textrm{\rm vol}}^{\prime\prime}(\det F)\, \,(\det F)^2\, \langle \xi ,F^{-T}\eta\rangle^2\,. \end{align} In the following, we choose \begin{align} F_0=\begin{pmatrix} 1&0&0\\ 0&e^{20}&0\\ 0&0& e^{10} \end{pmatrix}, \end{align} and \begin{align} \xi=\begin{pmatrix} \frac{\sqrt{3}}{2} \sin (\alpha ) \\ \frac{\sqrt{3}}{2} \cos (\alpha ) \\ \frac{1}{2} \\ \end{pmatrix},\qquad \eta=F_0^{T}\, \{\xi\times [(\dev_3\log V_0)\,\xi]\}, \end{align} where $V_0=\sqrt{F_0\, F_0^T}=F_0$. Then obviously \begin{align}\label{choice1} \langle \xi ,F_0^{-T}\eta\rangle=0\qquad \text{and} \qquad \langle (\dev_3\log V_0)\, \xi, F_0^{-T}\, \eta\rangle=0. 
\end{align} Assume now that the energy $W$ is rank-one convex. Then, due to \eqref{condalt} and \eqref{choice1}, \begin{align}\label{condnou} \Psi'(\norm{\dev_3\log V_0}^2)\cdot D_F^2(\norm{\dev_3\log V}^2)\Big|_{F=F_0}.(\xi\otimes\eta,\xi\otimes\eta)\geq 0\,. \end{align} We will show that this inequality is violated. Again, we use the equality \begin{align} \hspace{-0.5cm}h''(0)&= \frac{1}{4} D^2_F (\norm{\dev_3\log (F\,F^T)}^2)\Big|_{F=F_0}. (\xi \otimes \eta,\xi \otimes \eta), \end{align} where \begin{align} h(t)&=\norm{\dev_3\log \sqrt{(F_0+t\,\xi\otimes\eta)(F_0+t\,\xi\otimes\eta)^T}}^2 =\frac{1}{12}\left[\log^2 \frac{\mu_1(t)}{\mu_2(t)}+\log^2 \frac{\mu_2(t)}{\mu_3(t)}+\log^2 \frac{\mu_3(t)}{\mu_1(t)}\right]\notag \end{align} for $t\in(-\eps,\eps)$, $\eps$ sufficiently small, and $\mu_i(t)$, $i=1,2,3$, denote the eigenvalues of $(F_0+t\,\xi\otimes\eta)(F_0+t\,\xi\otimes\eta)^T$. \begin{figure} \caption{\footnotesize{The graph of the function $h$. At the critical point $t=0$, the function $h$ is concave, i.e.\ $h''(0)<0$, thus the rank-one convexity condition in \eqref{condalt} is violated.}\label{grafvoliso3}} \end{figure} \noindent After some lengthy computation, we find \begin{align} h''(0)=\frac{75 \left(e^{40} \left(319-185 \sqrt{3}\right)+140 e^{20}+185 \sqrt{3}+301\right)}{16 \left(e^{40}-1\right)}\approx -6.70031<0\,,\notag \end{align} completing the proof; the graph of the mapping $t\mapsto h(t)$ is also shown in Fig.\ \ref{grafvoliso3}. \end{proof} \section*{Acknowledgment} The work of Ionel-Dumitrel Ghiba was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS-UEFISCDI, project number PN-III-P1-1.1-TE-2016-2314. \begin{footnotesize} \printbibliography[heading=bibnumbered] \end{footnotesize} \appendix \section{The derivative of $\norm{\log U}^2$} \label{appendix:logDerivative} There are multiple ways of computing the derivative of the mapping $F\mapsto \norm{\log U}^2$.
Here, we discuss two ways of obtaining the derivative: a general formula for the trace of so-called \emph{primary matrix functions} and a method used by Vall\'ee \cite{vallee1978,vallee2008dual} (and indicated earlier by Richter \cite{richter1948}) for computing the Kirchhoff stress tensor corresponding to a hyperelastic material. For any function $f\col\R^+\to\R$, we also denote by $f$ the corresponding \emph{primary matrix function}, which is uniquely defined by the equality \begin{equation} f\big(Q^T\diag(\lambda_1,\dotsc,\lambda_n)\,Q\big) = Q^T\diag(f(\lambda_1),\dotsc,f(\lambda_n))\,Q \end{equation} for all $\lambda_i>0$ and all $Q\in\On$. If $f\col\R^+\to\R$ is differentiable, then \cite{agn_martin2015some} \[ D_S (\tr f(S)) = f'(S)\,, \] where $f'$ is interpreted as the primary matrix function corresponding to $f'\col\R^+\to\R$. In particular, since $\frac{d}{dt} \log^2(t) = \frac{2\.\log{t}}{t} \equalscolon w(t)$, we find \begin{align*} D_B (\norm{\log B}^2) = D_B \tr(\log^2(B)) = w(B) = 2\.\log(B)\.B\inv = 2\.B\inv\.\log(B)\,. \end{align*} We can obtain the same result by applying Vall\'ee's general formula \cite{vallee2008dual} \[ D_X \Phi(\exp(X)).\Htilde = \iprod{(D\Phi)(\exp(X)),\, \exp(X)\cdot \Htilde}\,, \] which holds for any continuously differentiable isotropic function $\Phi\col\PSymn\to\R$, to the special case $\Phi(X)=\norm{\log X}^2$, which yields \begin{equation}\label{eq:valleeFormula} \iprod{2\.X,\Htilde} = D_X (\norm{X}^2).\Htilde = D_X \Phi(\exp(X)).\Htilde = \iprod{(D\Phi)(\exp(X)),\, (\exp(X)\cdot \Htilde)} \end{equation} and thus, with $X=\log B$ and $\Htilde=B\inv H$, \[ \iprod{D_{\log B}\norm{\log B}^2,\, H} = \iprod{(D\Phi)(\log B),\, \exp(\log B) \cdot B\inv H} \overset{\eqref{eq:valleeFormula}}= \iprod{2\log (B)\.B\inv ,H} = \iprod{2B\inv\log (B) ,H}\,. 
\] We can now directly obtain the derivative with respect to $F$: \begin{align*} \big( D_F \norm{\log U}^2 \big).H &= \frac14\, \big( D_F \norm{\log B}^2 \big).H\\ &= \frac14\, \iprod{D_B(\norm{\log(B)}^2),\, D_F (B).H}\\ &= \frac12\, \iprod{\log(B)\.B\inv,\, D_F (FF^T).H}\\ &= \frac12\, \iprod{\log(B)\.B\inv,\, FH^T+HF^T}\\ &= \frac12\, \iprod{B\inv\.\log(B),\, FH^T} + \frac12\, \iprod{\log(B)\.B\inv,\, HF^T}\\ &= \iprod{F^{-T}F\inv\.\log(V),\, FH^T} + \iprod{\log(V)\.F^{-T}F\inv,\, HF^T}\\ &= \iprod{F\inv\.\log(V),\, H^T} + \iprod{\log(V)\.F^{-T},\, H} = 2\.\iprod{\log(V)\.F^{-T},\, H}\,. \end{align*} \end{document}
\begin{document} \setlength{\textheight}{8.0truein} \runninghead{Sub-- and super--fidelity as bounds for quantum fidelity} {J. A. Miszczak, Z. Pucha{\l}a, P. Horodecki, A. Uhlmann, and K. Zyczkowski} \normalsize\textlineskip \thispagestyle{empty} \setcounter{page}{103} \copyrightheading{9}{1\&2}{2009}{0103--0130} \vspace*{0.68truein} \alphfootnote \fpage{103} \centerline{\bf SUB-- AND SUPER--FIDELITY AS BOUNDS FOR QUANTUM FIDELITY} \vspace*{0.37truein} \centerline{\footnotesize JAROS{\L}AW ADAM MISZCZAK~~~~ZBIGNIEW PUCHA{\L}A} \vspace*{0.015truein} \centerline{\footnotesize\it Institute of Theoretical and Applied Informatics, Polish Academy of Sciences,} \baselineskip=10pt \centerline{\footnotesize\it Ba{\l}tycka 5, 44-100 Gliwice, Poland} \vspace*{10pt} \centerline{\footnotesize PAWE{\L} HORODECKI} \vspace*{0.015truein} \centerline{\footnotesize\it Faculty of Applied Physics and Mathematics, Gda{\'n}sk University of Technology, } \baselineskip=10pt \centerline{\footnotesize\it Narutowicza 11/12, 80-952 Gda{\'n}sk, Poland} \baselineskip=10pt \centerline{\footnotesize\it and} \baselineskip=10pt \centerline{\footnotesize\it National Quantum Information Centre of Gda{\'n}sk, } \baselineskip=10pt \centerline{\footnotesize\it Andersa 27, 81-824 Sopot, Poland} \vspace*{10pt} \centerline{\footnotesize ARMIN UHLMANN} \vspace*{0.015truein} \centerline{\footnotesize\it Institute of Theoretical Physics, University of Leipzig,} \baselineskip=10pt \centerline{\footnotesize\it Vor dem Hospitaltore 1, D-04103 Leipzig, Germany} \vspace*{10pt} \centerline{\footnotesize KAROL \.ZYCZKOWSKI} \vspace*{0.015truein} \centerline{\footnotesize\it Instytut Fizyki im. 
Smoluchowskiego, Uniwersytet Jagiello{\'n}ski, } \baselineskip=10pt \centerline{\footnotesize\it Reymonta 4, 30-059 Krak{\'o}w, Poland} \baselineskip=10pt \centerline{\footnotesize\it and} \baselineskip=10pt \centerline{\footnotesize\it Centrum Fizyki Teoretycznej, Polska Akademia Nauk, } \baselineskip=10pt \centerline{\footnotesize\it Aleja Lotnik{\'o}w 32/44, 02-668 Warszawa, Poland} \vspace*{0.225truein} \publisher{May 19, 2008}{September 30, 2008} \vspace*{0.21truein} \abstracts{ We derive several bounds on fidelity between quantum states. In particular we show that fidelity is bounded from above by a simple-to-compute quantity we call super--fidelity. It is analogous to another quantity called sub--fidelity. For any two states of a two--dimensional quantum system ($N=2$) all three quantities coincide. We demonstrate that sub-- and super--fidelity are concave functions. We also show that super--fidelity is super--multiplicative while sub--fidelity is sub--multiplicative and design feasible schemes to measure these quantities in an experiment. Super--fidelity can be used to define a distance between quantum states. With respect to this metric the set of quantum states forms a part of an $(N^2-1)$--dimensional hypersphere. }{}{} \vspace*{10pt} \keywords{quantum fidelity, quantum states, Bures distance, distances in state space} \vspace*{3pt} \communicate{R Jozsa~\&~M Mosca } \vspace*{1pt}\textlineskip \section{Introduction} By processing quantum information we wish to transform a quantum state in a controlled way. Taking into account the inevitable interaction with an environment and possible imperfections of real dynamics, it is then crucial to characterize quantitatively to what extent a given quantum state gets close to its target. For this purpose one often uses {\sl fidelity} \cite{Jo94}, here denoted by $F$.
That quantity has also been called {\sl transition probability} \cite{Uh76}: Operationally it is the maximal success probability of changing a state to another one by a measurement in a larger quantum system. If both quantum states are pure, fidelity is the squared overlap between them. In the general case the fidelity between any two mixed states is a function of the trace norm of the product of their square roots. Thus analytical evaluation of fidelity, or its direct experimental measurement, becomes a cumbersome task. Hence there is a need for other quantities which bound fidelity and are easier to compute and measure. The aim of this work is to present some bounds for fidelity and to develop experimental schemes to estimate it for an arbitrary pair of mixed quantum states. In particular we find an upper bound for fidelity in terms of a simple quantity which is a function of the purities of both states and the trace of their product. Since it possesses some nice algebraic properties we believe it may become useful in future research and propose to call it {\sl super--fidelity}. In a sense it is a quantity complementary to the one forming the lower bound proved in \cite{Uhlm00}, which we tend to call {\sl sub--fidelity}. For any two one--qubit states all three quantities coincide. Fidelity is well known to be multiplicative with respect to the tensor product. In this work we prove that super--fidelity is concave and super--multiplicative, while sub--fidelity is concave and sub--multiplicative. Fidelity can be used to define the Bures distance and the Bures angle between quantum states. As shown by Uhlmann in \cite{Uh92}, the Bures geometry of the set of one--qubit states ($N=2$) is equivalent to a three-dimensional hemisphere $\frac{1}{2} S^3$. For $N\ge 3$ the set of density operators ${\Omega}_N$ becomes a space of non-constant curvature with respect to the Bures metric \cite{Di99b}.
In a similar way we construct a distance and an angle analogous to the Bures ones out of super--fidelity. With respect to this metric the set ${\Omega}_N$ forms a fragment of an $(N^2-1)$--dimensional hypersphere with the maximally mixed state $\rho_*:={\ensuremath{\mathbbm I}}/N$ at the pole. A linear function of super--fidelity was earlier used by Chen {\it et al.} \cite{CFUZ02} to analyze the set of mixed quantum states and demonstrate its hyperbolic interpretation. This paper is organized as follows. In Section II the definition and basic properties of fidelity are reviewed. Sections III and IV are devoted to bounds on fidelity. In Section V we define sub-- and super--fidelity and investigate their properties. Experimental schemes designed to measure these quantities are presented in Section VI. In Section VII we analyze the geometry of the set of quantum states induced by the distance derived from super--fidelity. Concluding remarks are followed by appendices, in which we prove the necessary lemmas and present a collection of useful algebraic facts. \section{Fidelity between quantum states}\label{sec:fid1} Consider an $N$--dimensional Hilbert space ${\cal H}_N$. A linear operator $\rho: {\cal H}_N \to {\cal H}_N$ represents a quantum state if it is Hermitian, positive semidefinite, $\rho=\rho^{\dagger}\ge 0$, and normalized, tr$\rho=1$. Let ${\Omega}_N$ denote the set of all mixed quantum states of size $N$. The fidelity between quantum states $\rho_1$ and $\rho_2$ is defined as \cite{Uh76,Jo94} \begin{equation} \label{fidel} F(\rho_1, \rho_2) = \left(\mathrm{tr}|\sqrt{\rho_1}\sqrt{\rho_2}| \right)^2 = || \rho_1^{1/2} \rho_2^{1/2}||_1^2, \end{equation} where $||\cdot||_1$ is the Schatten 1-norm (trace norm), \begin{equation} ||A||_1 = \mathrm{tr}|A| := \mathrm{tr}\sqrt{AA^{\dagger}} . \end{equation} Alternatively, the trace norm of an operator can be expressed as the sum of its singular values, $||A||_1 =\sum_{i=1}^n \sigma_i(A)$.
Here $\sigma_i(A)$ is equal to the square root of the corresponding eigenvalue of the positive matrix $AA^{\dagger}$ -- see e.g. \cite{bhatia}. There are different uses of the name {\it fidelity}. In \cite{Jo94} the older notion {\it transition probability} has been renamed {\it fidelity} by Jozsa. In \cite{NC00} $\sqrt{F}$ has been called {\it fidelity}, while \cite{BZ06} uses Jozsa's notion, and to the latter convention we shall stick in calling fidelity the expression in Eq. (\ref{fidel}). For pure states the definition (\ref{fidel}) is reduced to the transition probability. If one state is pure, $\rho_{1}=|\psi\rangle \langle \psi|$, then $F(\rho_1, \rho_2) = \langle \psi|\rho_2|\psi\rangle$. Hence for any two pure states their fidelity is equal to their squared overlap, $F(\psi, \phi) =|\langle \psi|\phi \rangle |^2=:\kappa$. Fidelity enjoys several important properties \cite{Uh76,inf1,inf2,Uh83a,Jo94}, which can also be proved on state spaces of unital C$^*$-algebras. Some of them are: \newcounter{fidelprop} \begin{list}{\bf \roman{fidelprop})}{\usecounter{fidelprop}} \item {\bf Bounds:} $0\le F(\rho_1,\rho_2)\le 1$. Furthermore $F(\rho_1,\rho_2)=1$ iff $\rho_1=\rho_2$, while $F(\rho_1,\rho_2)=0$ iff $ \mbox{supp}(\rho_1) \perp \mbox{supp}(\rho_2)$. \item {\bf Symmetry:} $F(\rho_1,\rho_2)= F(\rho_2,\rho_1)$. \item {\bf Unitary invariance:} $F(\rho_1,\rho_2)=F(U\rho_1U^{\dagger},U\rho_2U^{\dagger})$, for any unitary operator $U$. \item {\bf Concavity:} $F(\rho,a\rho_1+(1-a)\rho_2)\ge a F(\rho,\rho_1)+(1-a)F(\rho,\rho_2)$, for $a \in [0,1]$. \item {\bf Multiplicativity:} $F(\rho_1 \otimes \rho_2,\rho_3 \otimes \rho_4)= F(\rho_1,\rho_3) F(\rho_2,\rho_4)$. \item {\bf Joint concavity:} $\sqrt{F}(a\rho_1+(1-a)\rho_2, a\rho'_1+(1-a)\rho'_2) \geq a \sqrt{F}(\rho_1,\rho'_1)+(1-a)\sqrt{F}(\rho_2,\rho'_2)$, for $a \in [0,1]$. 
\end{list} For further analysis of fidelity properties it is instructive to work with eigenvalues of a matrix $\sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}}$. Let us denote them by $\lambda_i, \ i=1,\dots,N$. This matrix is positive so its eigenvalues and singular values coincide. Unless otherwise stated, we tacitly assume that $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_N$. The root fidelity reads \begin{equation} \label{rootfid} \sqrt {F(\rho_1,\rho_2)} = \mathrm{tr} \sqrt{\sqrt{\rho_1} \rho_2\sqrt{\rho_1}} = \sum_{i=1}^N \lambda_i . \end{equation} Squaring this equation one obtains a compact expression for fidelity, \begin{equation} \label{eqn:fidelity-sum-singular} F(\rho_1,\rho_2) = \left( \sum_{i=1}^N \lambda_i \right)^2 = \mathrm{tr} \rho_1\rho_2 + 2 \sum_{i<j} \lambda_i \lambda_j , \end{equation} where we have taken into account that $\mathrm{tr} \rho_1\rho_2 = \mathrm{tr} \sqrt{\rho_1}\rho_2\sqrt{\rho_1} = \sum_{i=1}^N \lambda_i^2$. The matrix $\sqrt{\rho_1}\rho_2\sqrt{\rho_1}$ is similar to $\rho_1\rho_2$ and they share the same set of $N$ eigenvalues. \section{Bounds for fidelity}\label{sec:bounds} We shall need some further algebraic definitions. For any matrix $X$ of size $N$ with a set of eigenvalues $\{ \lambda_1, \dots, \lambda_N \}$ we define elementary symmetric functions $s_m(X)$ as the elementary symmetric function of its eigenvalues \cite[Def.~1.2.9]{hj}. For instance, the second and third elementary symmetric functions read \begin{eqnarray} \label{symmfun} s_2(X) &=& \sum_{i<j} \lambda_i \lambda_j , \label{s_2}\\ s_3(X) &=& \sum_{i<j<k} \lambda_i \lambda_j \lambda_k . \label{s33} \end{eqnarray} For any matrix of rank $r$ the highest non-vanishing symmetric function reads $s_r(X) = \prod_{i=1}^r \lambda_i$. In the generic case $r = N$ we have $s_N(X) = \det(X)$. In this section we shall list several bounds for fidelity, some of which are well known in the literature. 
Let us start by stating a simple result, \begin{equation} F(\rho_1,\rho_2) \le \mathrm{tr} \rho_1 \mathrm{tr} \rho_2 , \end{equation} which follows directly from Fact \ref{fact:2} (see Appendix A) if we set $\nu=1/2$. This fact implies the property $ F(\rho_1,\rho_2) \le 1$. Expression (\ref{eqn:fidelity-sum-singular}) implies the following lower bound \begin{equation} \mathrm{tr} \rho_1 \rho_2 \leq F(\rho_1,\rho_2) \leq N \mathrm{tr}|\rho_1 \rho_2|. \label{eqn:bound-tr-prod} \end{equation} To get the upper bound we use Fact \ref{fact:3} (see Appendix A) and set $\nu=1/2$ to obtain $||\sqrt{\rho_1} \sqrt{\rho_2}||_1^2 \le N||\rho_1 \rho_2||_1$. Let us now denote the spectra of the states $\rho_1$ and $\rho_2$, by vectors $\vec p$ and $\vec q$, respectively. The fidelity between them is then bounded by the classical fidelity between diagonal density matrices \cite{MMPZ08} \begin{equation} F ( p^{\uparrow}, q^{\downarrow}) \le F(\rho_1,\rho_2) \leq F ( p^{\uparrow}, q^{\uparrow}) , \label{fidboth} \end{equation} where the arrows up (down) indicate that the eigenvalues are put in the nondecreasing (nonincreasing) order. The lower bound in (\ref{eqn:bound-tr-prod}) can be improved, since the following result is true \cite{Uhlm00} \begin{equation} F(\rho_1,\rho_2) \ge \mathrm{tr} \rho_1 \rho_2 + \sqrt{2} \sqrt { (\mathrm{tr} \rho_1\rho_2)^2- \mathrm{tr} \rho_1\rho_2\rho_1\rho_2} . \label{lowerbis} \end{equation} The above inequality is saturated for any pair of one--qubit states. Furthermore, the above inequality is an equality if the rank of $\rho_1 \rho_2$ does not exceed two. On the other hand, the inequality is strict if that rank is larger than two --- see Appendix E. For completeness we present the simple proof of inequality (\ref{lowerbis}) in Appendix B. Another lower bound is obtained if the rank of $\rho_1 \rho_2$ is exactly $r$. 
If $s_r$ denotes the $r^{\text{th}}$ elementary symmetric function, then \begin{equation} \label{determin} F(\rho_1,\rho_2) \geq \mathrm{tr} \rho_1 \rho_2 + r(r-1) \sqrt[r]{s_r(\rho_1 \rho_2)} . \end{equation} This bound is proved in Appendix C. If both states are generic, i.e.\ if they are of maximal rank, the above formula reads \begin{equation} F(\rho_1,\rho_2) \geq \mathrm{tr} \rho_1 \rho_2 + N(N-1) \sqrt[N]{\det \rho_1 \det \rho_2}. \end{equation} The key result of this paper consists in the following upper bound, in a sense complementary to (\ref{lowerbis}). \begin{theorem}\label{prop:main-theorem} For any density matrices $\rho_1$ and $\rho_2$ we have \begin{equation} F(\rho_1,\rho_2) \leq \ \mathrm{tr} \rho_1 \rho_2 + \sqrt{(1-\mathrm{tr} \rho_1^2)(1-\mathrm{tr} \rho_2^2)}. \label{eqn:main-theorem} \end{equation} \end{theorem} Before presenting the proof in the subsequent section, let us first note that the bound is saturated if at least one of the states is pure. Furthermore, an equality holds for any two mixed states of size $N=2$. To show this property observe that in this case the sum in (\ref{eqn:fidelity-sum-singular}) consists of a single term $2\lambda_1\lambda_2=2\sqrt{{\rm det}(\rho_1 \rho_2)} = \sqrt{2{\rm det}(\rho_1)} \sqrt{2{\rm det}(\rho_2)}$. Since for any one-qubit state one has $2{\rm det}(\rho)=1-\mathrm{tr}\rho^2$, an equality in (\ref{eqn:main-theorem}) follows. This fact was already known to H{\"u}bner \cite{Hu92}. In a similar way we treat the more general case of $N = 3$ in Appendix \ref{sec:app3}, for which some other equations for fidelity are derived. \section{Proof of the main upper bound} The notion of the second symmetric function (\ref{s_2}) allows us to write the expression \begin{equation} (\mathrm{tr} X)^2 - \mathrm{tr} X^2 = 2 s_2(X) . \label{sym2X} \end{equation} Note that if $X$ has nonnegative eigenvalues then $s_2(X) \geq 0$.
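The identity $(\mathrm{tr}\,X)^2-\mathrm{tr}\,X^2=2\,s_2(X)$ and the nonnegativity of $s_2$ for positive semidefinite $X$ can be illustrated numerically (a sketch only, not part of the proof):

```python
import numpy as np

# A random positive semidefinite matrix X = A A^T has nonnegative eigenvalues.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = A @ A.T

lam = np.linalg.eigvalsh(X)
s2 = sum(lam[i] * lam[j] for i in range(4) for j in range(i + 1, 4))

print(2 * s2)
print(np.trace(X) ** 2 - np.trace(X @ X))  # the same number
```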
Using (\ref{sym2X}) we can rewrite fidelity as \begin{equation} F(\rho_1, \rho_2) = \mathrm{tr} \rho_1 \rho_2 + 2 s_2 \left(\sqrt{\rho_1^{1/2} \rho_2 \rho_1^{1/2}} \right), \label{Fs1} \end{equation} and \begin{equation} \sqrt{(1-\mathrm{tr} \rho_1^2)(1-\mathrm{tr} \rho_2^2)} = 2 \sqrt{s_2(\rho_1) s_2(\rho_2)}. \label{Fs2} \end{equation} Thus Theorem \ref{prop:main-theorem} can be equivalently expressed as the inequality \begin{equation}\label{main-as-ineq} s_2 \left(\sqrt{\rho_1^{1/2} \rho_2 \rho_1^{1/2}} \right) \leq \sqrt{s_2(\rho_1) s_2(\rho_2)}. \end{equation} The proof of (\ref{main-as-ineq}) is decomposed into two lemmas, the proofs of which can be found in Appendix \ref{sec:LemmaProof}. \begin{lemma}\label{lemma:s2(X)<s2(d(A)d(B))} For given density matrices $\rho_1, \rho_2$ with eigenvalues $p_1, \dots , p_n$ and $q_1, \dots , q_n$, respectively, we have \begin{equation} s_2 \left(\sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}} \right) \leq s_2 \left(\sqrt{\mathrm{diag}(p) \mathrm{diag}(q)} \right), \end{equation} where $\mathrm{diag}(p)$ and $\mathrm{diag}(q)$ denote the diagonal matrices with diagonal entries $p_1, \dots , p_n$ and $q_1, \dots , q_n$, respectively. \end{lemma} \begin{lemma}\label{lemma:s2<sqrt(s2)} With notation as in Lemma \ref{lemma:s2(X)<s2(d(A)d(B))}, we have \begin{equation} s_2 \left(\sqrt{\mathrm{diag}(p) \mathrm{diag}(q)} \right) \leq \sqrt{s_2(\mathrm{diag}(p)) s_2(\mathrm{diag}(q))} = \sqrt{s_2(\rho_1 ) s_2(\rho_2)}. \end{equation} \end{lemma} \proofof{Theorem \ref{prop:main-theorem}}{ Let $\rho_1$ and $\rho_2$ be density matrices with eigenvalues $p_1, \dots , p_n$ and $q_1, \dots , q_n$, respectively. We denote the diagonal matrices with diagonal entries $p_1, \dots , p_n$ and $q_1, \dots , q_n$ by $\mathrm{diag}(p)$ and $\mathrm{diag}(q)$, respectively.
\begin{eqnarray*} F(\rho_1, \rho_2) &=& \mathrm{tr} \rho_1 \rho_2 + 2 s_2 \left(\sqrt{\rho_1^{1/2} \rho_2 \rho_1^{1/2}} \right) \\ &\leq& \mathrm{tr} \rho_1 \rho_2 + 2 s_2 \left(\sqrt{\mathrm{diag}(p) \mathrm{diag}(q)} \right) \\ &\leq& \mathrm{tr} \rho_1 \rho_2 + 2 \sqrt{s_2(\rho_1 ) s_2(\rho_2)} = \mathrm{tr} \rho_1 \rho_2 + \sqrt{(1-\mathrm{tr} \rho_1^2)(1-\mathrm{tr} \rho_2^2)}. \end{eqnarray*} Making use of Lemma \ref{lemma:s2(X)<s2(d(A)d(B))} for the first inequality and of Lemma \ref{lemma:s2<sqrt(s2)} for the second one, we arrive at the inequality (\ref{eqn:main-theorem}). } \section{Sub-- and super--fidelity and their properties} \subsection{Definition and basic facts} We shall start this section with a general {\bf definition}. For any two Hermitian operators $A$ and $B$ let us define two quantities \begin{eqnarray} E(A,B) &=& \mathrm{tr} AB + \sqrt{2[(\mathrm{tr} AB)^2 - \mathrm{tr} ABAB]} , \label{eqn:sub-fide-def} \\ G(A,B) &=& \mathrm{tr} AB + \sqrt{(\mathrm{tr} A)^2-\mathrm{tr} A^2} \sqrt{(\mathrm{tr} B)^2-\mathrm{tr} B^2} . \label{eqn:quasi-fide-def} \end{eqnarray} For any two density operators the traces are equal to unity, so $E(\rho_1,\rho_2)$ and $G(\rho_1,\rho_2)$ coincide with the lower bound (\ref{lowerbis}) and the upper bound (\ref{eqn:main-theorem}), respectively. Thus both universal bounds for fidelity can be rewritten as \begin{equation} E(\rho_1,\rho_2) \le F(\rho_1 , \rho_2 ) \le G(\rho_1 , \rho_2 ) . \label{quasi-ineq} \end{equation} Note that both bounds require the evaluation of three traces only, so they are easier to compute than the original fidelity. As shown in Section \ref{sec:bounds}, for $N=2$ all three quantities are equal, so we propose to call $E(\rho_1 , \rho_2 )$ and $G(\rho_1 , \rho_2 )$ {\sl sub--} and {\sl super--fidelity}, respectively.
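The sandwich $E\le F\le G$ of (\ref{quasi-ineq}) is easy to test numerically. The sketch below (an illustration assuming NumPy and SciPy are available, not part of the paper) samples random mixed states and checks the two bounds.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

def random_state(n):
    """Random density matrix (Ginibre construction): G G^dag, normalized."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def fid(r1, r2):
    """Fidelity F of Eq. (1)."""
    s = sqrtm(r1)
    return np.real(np.trace(sqrtm(s @ r2 @ s))) ** 2

def sub_fid(r1, r2):
    """Sub-fidelity E; the max(..., 0) only guards against rounding."""
    t = np.trace(r1 @ r2).real
    return t + np.sqrt(max(2.0 * (t ** 2 - np.trace(r1 @ r2 @ r1 @ r2).real), 0.0))

def super_fid(r1, r2):
    """Super-fidelity G."""
    t = np.trace(r1 @ r2).real
    return t + np.sqrt((1.0 - np.trace(r1 @ r1).real) * (1.0 - np.trace(r2 @ r2).real))

for _ in range(200):
    a, b = random_state(4), random_state(4)
    assert sub_fid(a, b) - 1e-7 <= fid(a, b) <= super_fid(a, b) + 1e-7
print("E <= F <= G verified on 200 random pairs of 4x4 states")
```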
These names are additionally motivated by the following appealing properties: \newcounter{superfidelprop} \begin{list}{\bf \roman{superfidelprop}')} {\usecounter{superfidelprop}} \item {\bf Bounds:} $0 \le E(\rho_1,\rho_2) \le 1$ and $0 \le G(\rho_1,\rho_2) \le 1$. \item {\bf Symmetry:} $E(\rho_1,\rho_2)= E(\rho_2,\rho_1)$ and $G(\rho_1,\rho_2)= G(\rho_2,\rho_1)$. \item {\bf Unitary invariance:} $E(\rho_1,\rho_2)=E(U\rho_1U^{\dagger},U\rho_2U^{\dagger})$ and $G(\rho_1,\rho_2)=G(U\rho_1U^{\dagger},U\rho_2U^{\dagger})$, for any unitary operator $U$. \item {\bf Concavity:} \begin{prop}\label{prop:qfid-concave} Sub-- and super--fidelity are concave, that is for $A,B,C,D \in \Omega_N$ and $\alpha \in [0,1]$ we have \begin{eqnarray} E(A,\alpha B + (1-\alpha)C) &\geq& \alpha E(A,B) + (1-\alpha) E(A,C), \label{eqn:subfid-conca-prop} \\ G(A,\alpha B + (1-\alpha)C) &\geq& \alpha G(A,B) + (1-\alpha) G(A,C). \label{eqn:qfid-conca-prop} \end{eqnarray} \end{prop} \item {\bf Properties of the tensor product:} \begin{prop}\label{prop:qfid-smul} Super--fidelity is super--multiplicative, that is for $A,B,C,D \in \Omega_N$ \begin{equation} G(A \otimes B, C \otimes D) \geq G(A,C) G(B,D), \end{equation} \end{prop} while \begin{prop}\label{prop:subfid-sub} Sub--fidelity is sub--multiplicative, that is for $A,B,C,D \in \Omega_N$ \begin{equation} E(A \otimes B, C \otimes D) \leq E(A,C) E(B,D). \end{equation} \end{prop} \end{list} Properties {\bf i')}, {\bf ii')} and {\bf iii')} follow from the properties of $\mathrm{tr} AB$ and definitions (\ref{eqn:sub-fide-def}) and (\ref{eqn:quasi-fide-def}). In this section we prove properties {\bf iv')} and {\bf v')}. 
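Properties {\bf iv')} and {\bf v')} can also be probed numerically. The following sketch (an illustration only, not part of the proofs) checks super-multiplicativity of $G$ and sub-multiplicativity of $E$ on random tensor products.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_state(n):
    """Random density matrix (Ginibre construction)."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def sub_fid(r1, r2):
    """Sub-fidelity E; the max(..., 0) only guards against rounding."""
    t = np.trace(r1 @ r2).real
    return t + np.sqrt(max(2.0 * (t ** 2 - np.trace(r1 @ r2 @ r1 @ r2).real), 0.0))

def super_fid(r1, r2):
    """Super-fidelity G."""
    t = np.trace(r1 @ r2).real
    return t + np.sqrt((1.0 - np.trace(r1 @ r1).real) * (1.0 - np.trace(r2 @ r2).real))

for _ in range(200):
    A, B, C, D = (random_state(3) for _ in range(4))
    AB, CD = np.kron(A, B), np.kron(C, D)
    assert super_fid(AB, CD) >= super_fid(A, C) * super_fid(B, D) - 1e-9
    assert sub_fid(AB, CD) <= sub_fid(A, C) * sub_fid(B, D) + 1e-9
print("super-multiplicativity of G and sub-multiplicativity of E hold on 200 samples")
```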
\proofof{Proposition \ref{prop:qfid-concave}}{ The definitions (\ref{eqn:sub-fide-def}) and (\ref{eqn:quasi-fide-def}) can be rewritten in terms of the aforementioned elementary symmetric functions (\ref{symmfun}) using relation (\ref{sym2X}), \begin{eqnarray} E(A,B) &=& \mathrm{tr} AB + 2 \sqrt{ s_2(AB)} , \label{sub-s2} \\ G(A,B) &=& \mathrm{tr} AB + 2 \sqrt{s_2(A) s_2(B)} . \label{quasi-s2} \end{eqnarray} In general $r^{\text{th}}$ root of $s_r$ is concave on the cone of positive operators \cite{Marcus}. This implies concavity of $G$ directly. To get concavity of $E$ we can replace matrix $AB$ by the similar matrix $A^{1/2} B A^{1/2}$ which is positive. Using the concavity of $\sqrt{s_2(A^{1/2} B A^{1/2})}$ we obtain the result. } \proofof{Proposition \ref{prop:qfid-smul}}{ First we note that super--fidelity is not multiplicative. As an example we can take \begin{equation} A = \left(\begin{array}{c c} 1 & 0\\ 0 & 0\\ \end{array} \right),\ B = \left(\begin{array}{c c} \frac{1}{2} & 0\\ 0 & \frac{1}{2}\\ \end{array} \right),\ C = \left(\begin{array}{c c} 0 & 0\\ 0 & 1\\ \end{array} \right),\ D = \left(\begin{array}{c c} \frac{1}{2} & 0\\ 0 & \frac{1}{2}\\ \end{array} \right), \end{equation} in which case we have \begin{equation} \frac{1}{2} = G(A \otimes B, C \otimes D) > G(A,C) G(B,D) = 0. \end{equation} To prove the proposition we write \begin{equation} G(A \otimes B, C \otimes D) = \mathrm{tr} AC \mathrm{tr} BD + \sqrt{(1-\mathrm{tr} A^2 \mathrm{tr} B^2)(1-\mathrm{tr} C^2 \mathrm{tr} D^2)}, \end{equation} and \begin{equation} G(A,C) G(B,D) = \left(\mathrm{tr} AC + \sqrt{(1-\mathrm{tr} A^2 )(1-\mathrm{tr} C^2)} \right) \left(\mathrm{tr} BD + \sqrt{(1-\mathrm{tr} B^2 )(1-\mathrm{tr} D^2)} \right). 
\end{equation} Denoting $\mathrm{tr} A^2 = \alpha, \mathrm{tr} B^2 = \beta, \mathrm{tr} C^2 = \gamma$ and $\mathrm{tr} D^2 = \delta$ we have to show that \begin{eqnarray*} \sqrt{(1-\alpha \beta)(1-\gamma \delta)} &\geq& \mathrm{tr} AC \sqrt{(1-\beta )(1-\delta)} + \mathrm{tr} BD \sqrt{(1-\alpha)(1-\gamma)} \nonumber \\ && + \sqrt{(1-\alpha )(1-\gamma)(1-\beta )(1-\delta)}. \end{eqnarray*} Now from Fact \ref{fact:holder} with $a = 2$ (see Appendix B) one has \begin{equation} \mathrm{tr} AC \leq \sqrt{\mathrm{tr} A^2 \mathrm{tr} C^2} = \sqrt{\alpha \gamma} \end{equation} and \begin{equation} \mathrm{tr} BD \leq \sqrt{\mathrm{tr} B^2 \mathrm{tr} D^2} = \sqrt{\beta \delta}. \end{equation} Thus it is enough to show that \begin{eqnarray}\label{eqn:mainIneq} \sqrt{(1-\alpha \beta)(1-\gamma \delta)} &\geq& \sqrt{\alpha \gamma} \sqrt{(1-\beta )(1-\delta)} +\sqrt{\beta \delta} \sqrt{(1-\alpha)(1-\gamma)} \nonumber \\ && + \sqrt{(1-\alpha )(1-\gamma)(1-\beta )(1-\delta)}. \end{eqnarray} We define two vectors \begin{equation} X = \left(\begin{array}{c} \sqrt{\alpha}\sqrt{1-\beta} \\ \sqrt{\beta} \sqrt{1 - \alpha}\\ \sqrt{1-\alpha}\sqrt{1-\beta} \end{array} \right) \text{ and } Y = \left(\begin{array}{c} \sqrt{\gamma}\sqrt{1-\delta} \\ \sqrt{\delta} \sqrt{1 - \gamma}\\ \sqrt{1-\gamma}\sqrt{1-\delta} \end{array} \right). \end{equation} Note that \begin{equation}\label{eqn:xy} \scalar{X}{Y} = \sqrt{\alpha \gamma} \sqrt{(1-\beta )(1-\delta)} +\sqrt{\beta \delta} \sqrt{(1-\alpha)(1-\gamma)} +\sqrt{(1-\alpha )(1-\gamma)(1-\beta )(1-\delta)} \end{equation} and \begin{equation} \label{eqn:xx} \scalar{X}{X} = (1-\alpha \beta) \text{ and } \scalar{Y}{Y} = (1-\gamma \delta). \end{equation} Now by combining (\ref{eqn:xx}) with (\ref{eqn:xy}) and using Cauchy--Schwarz inequality \begin{equation} \sqrt{\scalar{X}{X} \scalar{Y}{Y}} \geq \scalar{X}{Y}, \end{equation} we obtain (\ref{eqn:mainIneq}). 
} \proofof{Proposition \ref{prop:subfid-sub}}{ To show the sub--multiplicativity of sub--fidelity we write the definition (\ref{eqn:sub-fide-def}) for a tensor product, \begin{eqnarray*} E(A\otimes B, C\otimes D) &=& \mathrm{tr} [(A\otimes B) (C\otimes D)] \\ && + \sqrt{2[ \bigl( \mathrm{tr} (A\otimes B)(C\otimes D)\bigr)^2 - \mathrm{tr} (A\otimes B)(C\otimes D)( A\otimes B)(C\otimes D)]} \\ &=& \mathrm{tr} AC \mathrm{tr} BD + \sqrt{2[(\mathrm{tr} AC \mathrm{tr} BD)^2 - \mathrm{tr} ACAC \mathrm{tr} BDBD]} . \end{eqnarray*} The product of two sub--fidelities reads \begin{eqnarray*} \lefteqn{E(A,C) E(B,D) = (\mathrm{tr} AC + \sqrt{2[(\mathrm{tr} AC)^2 - \mathrm{tr} ACAC]} )(\mathrm{tr} BD + \sqrt{2[(\mathrm{tr} BD)^2 - \mathrm{tr} BDBD]} ) }\\ &=& \mathrm{tr} AC\mathrm{tr} BD + \mathrm{tr} AC \sqrt{2[(\mathrm{tr} BD)^2 - \mathrm{tr} BDBD]} + \mathrm{tr} BD \sqrt{2[(\mathrm{tr} AC)^2 - \mathrm{tr} ACAC]} \\ && + \sqrt{2[(\mathrm{tr} AC)^2 - \mathrm{tr} ACAC]}\sqrt{2[(\mathrm{tr} BD)^2 - \mathrm{tr} BDBD]}. \end{eqnarray*} For brevity we denote \begin{displaymath} \begin{array}{lll} \alpha = \mathrm{tr} AC, & a = \mathrm{tr} ACAC, \\ \beta = \mathrm{tr} BD, & b = \mathrm{tr} BDBD. \end{array} \end{displaymath} We have $\alpha^2 \geq a$ and $\beta^2 \geq b$. By rewriting the above expressions in the new notation we obtain \begin{eqnarray*} E(A\otimes B, C\otimes D) &=& \alpha \beta + \sqrt{2[\alpha^2 \beta^2 - a b ]} \end{eqnarray*} and \begin{eqnarray*} E(A,C) E(B,D) &=& \alpha \beta + \alpha \sqrt{2[\beta^2 - b]} + \beta \sqrt{2[\alpha ^2 - a]} + \sqrt{2[\alpha ^2 - a]}\sqrt{2[\beta^2 - b]}. \end{eqnarray*} Now we write \begin{eqnarray*} \sqrt{2[\alpha^2 \beta^2 - a b ]} &=& \sqrt{2[\alpha^2 \beta^2 - ab + ab - ab - \alpha^2 b + \alpha^2 b - \beta^2 a + \beta^2 a ]} \\ &=& \sqrt{2[(\alpha^2-a)(\beta^2-b) + b(\alpha^2-a) + a (\beta^2 -b) ]}.
\end{eqnarray*} Making use of the subadditivity of the square root we obtain \begin{eqnarray*} \sqrt{2[\alpha^2 \beta^2 - a b ]} &\leq& \sqrt{2(\alpha^2-a)(\beta^2-b)} + \sqrt{2 b(\alpha^2-a)} + \sqrt{2 a (\beta^2 -b)} . \end{eqnarray*} Because $2<4$, $a \leq \alpha^2$ and $b \leq \beta^2$ we get \begin{eqnarray*} \sqrt{2[\alpha^2 \beta^2 - a b ]} &\leq& \sqrt{4(\alpha^2-a)(\beta^2-b)} + \beta\sqrt{2 (\alpha^2-a)} + \alpha\sqrt{2 (\beta^2 -b)}. \end{eqnarray*} As a result we obtain the desired inequality \begin{eqnarray*} E(A\otimes B, C\otimes D) &\leq& E(A,C) E(B,D). \end{eqnarray*} } For any pair of Hermitian operators $X_1$ and $X_2$ let us now define a quadratic {\sl Lorentz form} \begin{equation} (X_1,X_2)_L := [(\mathrm{tr} X_1) (\mathrm{tr} X_2) - \mathrm{tr} X_1 X_2] . \label{quadform} \end{equation} To understand the motivation behind this name, let us expand a Hermitian operator $X$ in an operator basis, $X=\sum_{j=0}^{N^2-1} a_j H_j$. We assume that the basis is orthonormal, $\mathrm{tr} H_j H_k=\delta_{jk}$, the first operator is proportional to the identity, $H_0={\ensuremath{\mathbbm I}}/\sqrt{N}$, and all other operators $H_j$ are traceless. Then the form (\ref{quadform}) gives \begin{equation} (X,X)_L = (\mathrm{tr} X)^2 - \mathrm{tr} X^2 = (N-1)\, a_0^2 - \sum_{j=1}^{N^2-1} a_j^2 , \label{quadform2} \end{equation} which is of Minkowski--Lorentz type. With the help of this notion, super--fidelity can be written as \begin{equation} G(A,B) = \mathrm{tr} AB + [(A,A)_L (B,B)_L]^{1/2} , \label{quasi-lor} \end{equation} while sub--fidelity reads \begin{equation} E(A,B) = \mathrm{tr} AB + [2(AB,AB)_L]^{1/2} . \label{sub-lor} \end{equation} The \emph{forward cone} with respect to the form (\ref{quadform}) is given by operators $X$ satisfying \begin{equation} (X,X)_L \ge 0 \hbox{ and } \mathrm{tr} X \ge 0 . \label{pos_cone} \end{equation} Since the density matrices are normalized, $\mathrm{tr} \rho=1$, the form $(\rho,\rho)_L$ is non--negative.
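The Lorentz form and the forward-cone property are easy to probe numerically. The sketch below is our illustration (assuming NumPy; \texttt{lorentz} and \texttt{rand\_rho} are hypothetical helper names): it checks that random density matrices lie in the forward cone and that super--fidelity rewritten through the Lorentz form stays bounded by one.

```python
import numpy as np

def lorentz(x, y):
    """Quadratic Lorentz form (X,Y)_L = (tr X)(tr Y) - tr XY."""
    return (np.trace(x) * np.trace(y) - np.trace(x @ y)).real

def rand_rho(n, rng):
    """Hypothetical helper: random density matrix M M^dag / tr(M M^dag)."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(1)
for _ in range(200):
    r1, r2 = rand_rho(4, rng), rand_rho(4, rng)
    # normalized states lie in the forward cone: (rho,rho)_L = 1 - tr rho^2 >= 0
    assert lorentz(r1, r1) >= -1e-12
    # super-fidelity via the Lorentz form, bounded by (tr A)(tr B) = 1
    g = np.trace(r1 @ r2).real + np.sqrt(lorentz(r1, r1) * lorentz(r2, r2))
    assert g <= 1 + 1e-12
```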
For a Lorentz form, any two forward-directed Hermitian matrices $A$ and $B$ satisfy \begin{equation} (A,A)_L (B,B)_L \le [(A,B)_L]^2 . \end{equation} Substituting this bound into expression (\ref{quasi-lor}) we arrive at an upper bound for super--fidelity, \begin{equation} G(A,B) \le \mathrm{tr} AB + (\mathrm{tr} A)(\mathrm{tr} B) - (\mathrm{tr} AB) = (\mathrm{tr} A)(\mathrm{tr} B) . \label{Gbound} \end{equation} For normalized density matrices, $\mathrm{tr} \rho=1$, we get $G(\rho_1,\rho_2)\le 1$. Using the bound (\ref{determin}) for density operators we introduce a third quantity \begin{equation} \label{def:E'} E'(A,B) = \mathrm{tr} AB + r(r-1)\sqrt[r]{s_r(A B)}, \end{equation} where $r$ is the rank of the matrix $AB$. Note that for $r=2$ this formula reduces to expression (\ref{sub-s2}) for sub--fidelity, hence in this case $E'=E$. Since $(s_r(X))^{1/r}$ is concave for density operators we infer that the quantity $E'(A,B)$, defined in equation (\ref{def:E'}), is separately concave in $A$ and in $B$. \subsection{Examples and classical analogues} To observe sub-- and super--fidelity in action consider a family of mixed states \begin{equation} \rho_a =a|\psi\rangle \langle \psi| + (1-a) {\mathbbm I}/N , \label{rhoa} \end{equation} which interpolates between an arbitrary pure state $|\psi\rangle$ and the maximally mixed state. It is straightforward to compute the fidelity between the state $\rho_a$ and the maximally mixed state $\rho_*:={\mathbbm I}/N$, \begin{equation} F(\rho_a,\rho_*) = \frac{1}{N^2} \left( \sqrt{(N-1)a + 1} + (N-1)\sqrt{1-a} \right)^2 , \label{Fa} \end{equation} as well as the other bounds \begin{eqnarray} E(\rho_a,\rho_*) &=& \frac{1}{N} + \sqrt{2} \frac{1}{N} \sqrt{1 - \frac{1}{N}} \sqrt{1 - a^2} , \\ E'(\rho_a,\rho_*) &=& \frac{1}{N} + \left(1 - \frac{1}{N}\right) \sqrt[N]{((N-1)a + 1)(1-a)^{N-1} } , \\ G(\rho_a,\rho_*) &=& \frac{1}{N} + \left(1 - \frac{1}{N}\right) \sqrt{1 - a^2} .
\label{Ga} \end{eqnarray} These results are plotted in Fig. \ref{fig:ex-fid-compare} for $N=2,3,4,5$. \begin{figure}\label{fig:ex-fid-compare} \end{figure} For $N=2$ all these quantities coincide, and the quality of the approximation goes down with the system size $N$, as expected. Looking at the graph one could imagine that the relation $E\le E'$ is fulfilled. However, such an inequality does not hold in general, as we found a counterexample: the pair of states analyzed in the figure with the parameter $a$ very close to unity. One may work out several other examples, for which sub-- and super--fidelity are easy to find. Explicit formulas are simple in the case of two commuting density matrices $\rho_p$ and $\rho_q$ with spectra given by vectors $\vec p$ and $\vec q$, respectively. In such a classical case these quantities read \begin{eqnarray} E(\rho_p, \rho_q) & = & \sum_{i=1}^N p_i q_i +\sqrt{2 \left[ \left( \sum_{i=1}^{N} p_i q_i \right)^2- \sum_{i=1}^N p_i^2 q_i ^2 \right] } , \\ F(\rho_p,\rho_q) &=& \left( \sum_{i=1}^N \sqrt{p_i q_i} \right)^2 ,\\ G(\rho_p,\rho_q) &=& \sum_{i=1}^N p_i q_i + \sqrt{\left(1- \sum_{i=1}^N p_i^2\right) \left(1- \sum_{i=1}^N q_i^2\right)}. \label{classical} \end{eqnarray} \subsection{The difference $G-F$} In view of the inequality (\ref{quasi-ineq}) it is natural to ask how big the difference $G-F$ might be. Since both quantities coincide if one of the states is pure, let us analyze the case of two mixed states living in orthogonal subspaces. More precisely, let us fix an even dimensionality of the Hilbert space, $N=2M$, and define two diagonal states, each supported on an $M$-dimensional subspace, $\rho_1=\frac{2}{N}{\rm diag}(1,\dots,1,0,\dots,0)$ and $\rho_2=\frac{2}{N}{\rm diag}(0,\dots,0,1,\dots,1)$. Since they are supported on orthogonal subspaces, their fidelity vanishes, $F(\rho_1,\rho_2)=0$.
On the other hand the definition (\ref{eqn:quasi-fide-def}) gives their super--fidelity \begin{equation} G(\rho_1,\rho_2) = \frac{N-2}{N}, \end{equation} equal in this case to the difference $G-F$. As expected, for $N=2$ we get $G=F=0$. However, for $N$ large enough the difference $G-F$ may become arbitrarily close to unity. Thus, working with super--fidelity $G$ in place of fidelity $F$, one needs to remember that this approximation works fine for small systems or when at least one of the states is pure enough. \section{On measurement methods} \subsection{Associated physical observables} Here we shall briefly discuss possibilities of measuring both sub-- and super--fidelity in physical experiments. The approach below follows the techniques used in state spectrum estimation \cite{EkertEtAl} and in nonlinear entanglement detection and/or estimation, which has developed significantly in recent years (see \cite{ADH-Single-Witness,ASH-Generalised-Entropies} and references therein). Those approaches exploited the properties of the SWAP operator and other unitary permutation operations to obtain the properties of a single state rather than the relation between different states. There were a few exceptions: one was a quantum network measurement of the overlap of two states \cite{EkertEtAl}. Here we shall follow the latter idea since we want to estimate the {\it distance} between two different quantum states. In particular we shall see that it is possible to measure these quantities with the help of no more than two collective observables. This fact may be helpful in the experimental comparison of two different stationary sources of quantum states. Quite remarkably, as we shall see below, with the help of similar techniques super--fidelity can be represented by only three experimental probabilities, which makes it very friendly from an experimental point of view. We start by providing a simple example.
First one can see that to calculate sub-- and super--fidelity it is necessary to calculate the values of the terms of the form $\mathrm{tr} AB$. Let $A,B\in M_2(\ensuremath{\mathbbm C})$. In this case \begin{equation} A=\left( \begin{array}{ll} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right) , B=\left( \begin{array}{ll} b_{11} & b_{12} \\ b_{21} & b_{22} \end{array} \right) \end{equation} and \begin{equation} \mathrm{tr} AB = \mathrm{tr} \left[\left( \begin{array}{ll} a_{11} b_{11}+a_{12} b_{21} & a_{11} b_{12}+a_{12} b_{22} \\ a_{21} b_{11}+a_{22} b_{21} & a_{21} b_{12}+a_{22} b_{22} \end{array} \right)\right] = a_{11} b_{11}+a_{21} b_{12}+a_{12} b_{21}+a_{22} b_{22}. \end{equation} On the other hand this value can be calculated using SWAP gate as \begin{eqnarray} \mathrm{tr} \left[\text{SWAP} (A\otimes B) \right]&=& \mathrm{tr} \left[\left( \begin{array}{llll} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right) \left( \begin{array}{llll} a_{11} b_{11} & a_{11} b_{12} & a_{12} b_{11} & a_{12} b_{12} \\ a_{11} b_{21} & a_{11} b_{22} & a_{12} b_{21} & a_{12} b_{22} \\ a_{21} b_{11} & a_{21} b_{12} & a_{22} b_{11} & a_{22} b_{12} \\ a_{21} b_{21} & a_{21} b_{22} & a_{22} b_{21} & a_{22} b_{22} \end{array} \right)\right] \\ &=& \mathrm{tr}\left[ \left( \begin{array}{llll} a_{11} b_{11} & a_{11} b_{12} & a_{12} b_{11} & a_{12} b_{12} \\ a_{21} b_{11} & a_{21} b_{12} & a_{22} b_{11} & a_{22} b_{12} \\ a_{11} b_{21} & a_{11} b_{22} & a_{12} b_{21} & a_{12} b_{22} \\ a_{21} b_{21} & a_{21} b_{22} & a_{22} b_{21} & a_{22} b_{22} \end{array} \right) \right] = \mathrm{tr} AB. \label{SWAPproperty} \end{eqnarray} To address the question of measurability of the quantities (\ref{eqn:sub-fide-def}), (\ref{eqn:quasi-fide-def}) let us first recall the corresponding permutation operators which we shall need subsequently. 
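The $2\times 2$ computation above, and the fact that the same trace identity holds in any dimension, can be checked with a short numerical sketch (our illustration, assuming NumPy; \texttt{swap\_op} is a hypothetical helper building the SWAP operator on $\ensuremath{\mathbbm C}^n \otimes \ensuremath{\mathbbm C}^n$):

```python
import numpy as np

def swap_op(n):
    """Hypothetical helper: SWAP operator V on C^n (x) C^n, V|i>|j> = |j>|i>."""
    v = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            v[i * n + j, j * n + i] = 1.0
    return v

rng = np.random.default_rng(2)
for n in (2, 3, 5):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    # tr[SWAP (A (x) B)] = tr AB, in any dimension
    assert abs(np.trace(swap_op(n) @ np.kron(A, B)) - np.trace(A @ B)) < 1e-9
```

For $n=2$, \texttt{swap\_op(2)} reproduces exactly the $4\times 4$ SWAP gate displayed above.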
The first one will be just the SWAP operator (an example of which is the SWAP gate presented above) $V_{12}: {\cal H}_N \otimes {\cal H}_{N} \rightarrow {\cal H}_N \otimes {\cal H}_{N}$ which is defined by the action \begin{equation} V_{12}|\phi_{1}\rangle \otimes |\psi_{2}\rangle = |\psi_{2}\rangle \otimes |\phi_{1}\rangle . \label{swap} \end{equation} This is a Hermitian operator and as such it represents an observable. It has a simple eigendecomposition in the form \begin{equation} V_{12}=P^{(+)}_{12} - P^{(-)}_{12} , \label{swap-eigendecomposition} \end{equation} where the projections $P^{(\pm)}_{12}$ onto the symmetric and antisymmetric subspaces of ${\cal H}_N \otimes {\cal H}_{N}$ are \begin{equation} P_{12}^{\pm}=\frac{1}{2}({\mathbbm I}_{12} \pm V_{12}) . \label{sym-antisym-proj} \end{equation} Below we shall omit the indices and use the notation $V$ and $P^{(\pm)}$, if it does not lead to confusion. An important property usually exploited in the case of entanglement detection is that the formula (\ref{SWAPproperty}) holds for the SWAP operator $V$ of any dimension \cite{Werner}. Apart from that operation we will also need a family of unitary permutation matrices $V_{1234}^{\pi}:{\cal H}_N^{\otimes 4} \rightarrow {\cal H}_{N}^{\otimes 4}$, \begin{equation} V_{1234}^{\pi} |\psi_{1}\rangle \otimes |\psi_{2} \rangle \otimes |\psi_{3} \rangle \otimes |\psi_{4}\rangle = |\psi_{\pi(1)}\rangle \otimes |\psi_{\pi(2)} \rangle \otimes |\psi_{\pi(3)} \rangle \otimes |\psi_{\pi(4)}\rangle , \end{equation} where $\pi$ represents any chosen permutation of the indices $(1,2,3,4)$. For simplicity we shall drop the indices, using the notation $V^{\pi}$. Let us define the set ${\cal S}$ of all eight permutations that do not map the sequence $(1,2,3,4)$ into one having odd or even elements one after another. For instance, the permutations defined by the sequences (2341) or (3214) belong to ${\cal S}$, while (2314) or (1423) do not.
For a fixed set ${\cal S}'\subset {\cal S}$ and some $\pi_{0} \in {\cal S}$ we define the following observable: \begin{equation} W^{{\cal S}',\pi_{0}} =\frac{1}{2|{\cal S'}|} \left( \sum_{\pi \in {\cal S}'}V^{\pi}V^{\pi_{0}}V^{\pi} + \sum_{\pi \in {\cal S}'}V^{\pi^{-1}}V^{\pi_{0}^{-1}}V^{\pi^{-1}} \right) . \label{averaged-4-copy-obs} \end{equation} A special case is the observable $W^{\{\pi_{0}\},\pi_{0}}=(V^{\pi_{0}}+V^{\pi_{0}^{-1}})/2$ with $\pi_{0}$ being just some cyclic permutation (cf. \cite{ADH-Single-Witness} and references therein). The choice of the permutation $\pi_{0}$ and/or the subset ${\cal S}'$ may be motivated by a specific physical situation. In the case of single qubit sources ($N=2$) the observables (\ref{averaged-4-copy-obs}) have highly degenerate spectra and the corresponding eigenvectors have very symmetric forms. In particular the observable $W^{\{\pi_{0}\},\pi_{0}}$ has spectrum $\{1,-1,0 \}$, which means that its mean value requires the probabilities of only two outcomes of an incomplete von Neumann measurement. The observable has the spectral decomposition $W^{\{\pi_{0}\},\pi_{0}}=Q^{(+)}-Q^{(-)}$, where the support of the projector $Q^{(+)}$ is spanned by the eigenvectors $\{ |\phi_{1}\rangle=|0000\rangle,|\phi_{2}\rangle=|1111\rangle, |\phi_{3}\rangle=(|0111\rangle +|1011\rangle + |1101\rangle +|1110\rangle)/2,|\phi_{4}\rangle= (|0011\rangle +|0110\rangle + |1001\rangle +|1100\rangle)/2, |\phi_{5}\rangle=(|0101\rangle +|1010\rangle)/\sqrt{2},|\phi_{6}\rangle=\sigma_{x}^{\otimes 4}|\phi_{3}\rangle \}$ while the support of the second projector $Q^{(-)}$ (orthogonal to $Q^{(+)}$) corresponds to $\{ I\otimes \sigma_{z}^{\otimes 2} \otimes I|\phi_{3}\rangle, I^{\otimes 2} \otimes \sigma_{z}^{\otimes 2}|\phi_{4}\rangle, I^{\otimes 3} \otimes \sigma_{z}|\phi_{5}\rangle, I\otimes \sigma_{z}^{\otimes 2} \otimes I|\phi_{6}\rangle \}$.
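The claimed spectrum $\{1,-1,0\}$ of $W^{\{\pi_{0}\},\pi_{0}}$ for four qubits can be verified directly. The sketch below is our illustration (assuming NumPy and one particular qubit-ordering convention for the cyclic permutation of the four tensor factors):

```python
import numpy as np

# cyclic permutation V^{pi0} of the four qubit factors, on (C^2)^(x4)
n_qubits, d = 4, 2 ** 4
C = np.zeros((d, d))
for idx in range(d):
    bits = [(idx >> k) & 1 for k in range(n_qubits)]   # little-endian qubit values
    shifted = bits[-1:] + bits[:-1]                    # cyclically shift the factors
    C[sum(b << k for k, b in enumerate(shifted)), idx] = 1.0

# W^{ {pi0}, pi0 } = (V^{pi0} + V^{pi0^{-1}}) / 2; C is a real permutation matrix
W = (C + C.T) / 2
eigs = np.linalg.eigvalsh(W)
# the spectrum is contained in {1, -1, 0}
assert all(min(abs(e - s) for s in (-1.0, 0.0, 1.0)) < 1e-12 for e in eigs)
```

The same computation also shows that the $+1$ eigenspace is six-dimensional and the $-1$ eigenspace four-dimensional, matching the eigenvector lists for $Q^{(+)}$ and $Q^{(-)}$ above.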
To illustrate how to measure the quantities $E$ and $G$, suppose now that we can perform collective measurements on two and four copies of both quantum states. We plan measurements that allow two or four copies of the analyzed states to interact. Then (cf. \cite{EkertEtAl,ADH-Single-Witness,ASH-Generalised-Entropies} and references therein) the sub-- and super--fidelities can be represented in terms of averages of the following observables, \begin{eqnarray} E(\rho_1,\rho_2) &=& \mathrm{tr} V \rho_1\otimes \rho_2 + \sqrt{2[(\mathrm{tr} V \rho_1 \otimes \rho_2)^2 - \mathrm{tr} W^{{\cal S},\pi_{0}}\rho_1 \otimes \rho_2\otimes \rho_1 \otimes \rho_2]} , \label{eqn:sub-fide-exp} \\ G(\rho_1,\rho_2) &=& \mathrm{tr} V \rho_1\otimes \rho_2 + \sqrt{1-\mathrm{tr} V \rho_1\otimes\rho_1} \sqrt{1-\mathrm{tr} V \rho_2 \otimes \rho_2} . \label{eqn:quasi-fide-exp} \end{eqnarray} There are two simple but important observations to be made. The sub--fidelity $E$ can be measured with the help of two setups: (i) one measuring the observable $V$ and (ii) a second one measuring the observable $W^{{\cal S},\pi_{0}}$. Each setup requires one source: setup (i) needs the source that creates, say, pairs $\rho_{1} \otimes \rho_2$, while setup (ii) requires a source producing quadruples of the form, say, $\rho_{1} \otimes \rho_2 \otimes \rho_{1} \otimes \rho_{2}$. Our scheme will also work for a worse source that produces one of the pairs (quadruples) $\{ \rho_{1} \otimes \rho_2, \rho_{2} \otimes \rho_1 \}$ ($\{ \rho_{1} \otimes \rho_2 \otimes \rho_{1} \otimes \rho_{2},\rho_{2} \otimes \rho_1 \otimes \rho_{2} \otimes \rho_{1} \} $) at random according to an unknown biased probability distribution, which will not affect the results of the corresponding estimate for sub--fidelity. The second observation is that the super--fidelity $G$ can be measured with the help of a {\it single} setup, namely the one that measures the observable $V$, but it requires its application to three types of sources, i.e.
the ones creating pairs $\rho_{1} \otimes \rho_1$, $\rho_{2} \otimes \rho_2$, and, say, $\rho_{1} \otimes \rho_2$. Again, the last source may produce at random one of the pairs $\{ \rho_{1} \otimes \rho_2, \rho_{2} \otimes \rho_1 \}$ and this will not affect the estimate for super--fidelity. It is very interesting to study the form of super--fidelity in terms of directly measurable quantities, \emph{i.e.} {\it probabilities}, since it has a simple optical implementation. Let us introduce the probabilities of the projection onto the antisymmetric subspace of ${\cal H}_N \otimes {\cal H}_{N}$: \begin{equation} p^{(-)}_{ij}=\mathrm{tr} P^{(-)} \rho_{i} \otimes \rho_{j},\ \ i, j = 1, 2. \label{probability-antysymmetric} \end{equation} Then super--fidelity takes a particularly nice form, \begin{equation} G(\rho_1,\rho_2) = 1-2\left( p^{(-)}_{12} - \sqrt{p^{(-)}_{11}p^{(-)}_{22}} \right) \ , \label{super-fidelity-3probabilities} \end{equation} which is crucial for further discussion. Note that the super--fidelity can be represented in terms of only {\it three} probabilities that can be measured in a single setup. One can perform a simple consistency test by checking whether the combination of experimental probabilities satisfies (up to error bars) the condition $p^{(-)}_{12} - \sqrt{p^{(-)}_{11}p^{(-)}_{22}}\leq 0.5$ -- otherwise one has an unphysical result, since super--fidelity cannot be negative. Note that the probability $p^{(-)}_{11}$ has already been measured experimentally for two copies of composite systems in the context of entanglement detection \cite{Bovino} or estimation \cite{Walborn}, under some assumptions about the nature of the sources. In the subsection below we shall refer to a scheme analogous to the one utilized in Ref. \cite{Bovino}.
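The three-probability representation of super--fidelity can be verified numerically. The following sketch is our illustration (assuming NumPy; \texttt{rand\_rho} and \texttt{swap\_op} are hypothetical helpers): it builds $P^{(-)}=({\mathbbm I}-V)/2$, evaluates the three probabilities $p^{(-)}_{ij}$, and compares with the defining formula for $G$.

```python
import numpy as np

def rand_rho(n, rng):
    """Hypothetical helper: random density matrix M M^dag / tr(M M^dag)."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

def swap_op(n):
    """Hypothetical helper: SWAP operator V on C^n (x) C^n."""
    v = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            v[i * n + j, j * n + i] = 1.0
    return v

n = 3
rng = np.random.default_rng(3)
r1, r2 = rand_rho(n, rng), rand_rho(n, rng)
P_minus = (np.eye(n * n) - swap_op(n)) / 2     # projector onto antisymmetric subspace

def p_minus(a, b):
    return np.trace(P_minus @ np.kron(a, b)).real

# G from the three probabilities vs. G from the definition
G_probs = 1 - 2 * (p_minus(r1, r2) - np.sqrt(p_minus(r1, r1) * p_minus(r2, r2)))
G_direct = np.trace(r1 @ r2).real + np.sqrt(
    (1 - np.trace(r1 @ r1).real) * (1 - np.trace(r2 @ r2).real))
assert abs(G_probs - G_direct) < 1e-12
```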
It is interesting to note that if the state $\varrho$ is $d$-dimensional, then reproducing sub-- and super--fidelity {\it via} quantum tomography requires $2d^{2}-2$ independent quantities to be estimated, since each of the two states is described by $d^{2}-1$ real parameters. On the other hand, to find the quantities $E$ and $G$ in the way described above one requires only two or three independent real quantities (probabilities) to be estimated, independently of how large the dimension $d$ is. The price to be paid is, of course, that one must perform collective experiments. Preparation of reliable setups for such experiments might be a good test for quantum engineering. \subsection{Measuring super--fidelity of states representing photon polarizations} Consider now a physical setup that would compare two polarization states of a single photon in terms of super--fidelity $G$. In this case the density matrix is defined on a Hilbert space isomorphic to $\ensuremath{\mathbbm C}^{2}$, where the horizontal (vertical) polarization, usually denoted by $|H \rangle$ ($|V \rangle$), corresponds to the standard basis element $|0\rangle$ ($|1 \rangle$). Suppose one has memoryless sources of two types $S_{i}$ ($i=1,2$) sending photons in polarization states $\rho_{1}$, $\rho_{2}$, respectively. The experimental setup is elementary. We have sources $S_{i}$, $S_{j}$, where we put either $i=j=1,2$ (sources of the same type) or, say, $i=1$, $j=2$ (different sources); then we have a beamsplitter (at equal distance from the sources) and two detectors behind it (see Fig. \ref{fig2}). If two photons from sources $S_{i}$, $S_{j}$ meet on the beamsplitter and the two detectors click, we have a so--called anticoalescence event, which happens with probability $p^{(-)}_{ij}$ \cite{Bovino}. Otherwise we deal with a coalescence result, which occurs with probability $p^{(+)}_{ij}= 1- p^{(-)}_{ij}$.
Putting all three probabilities of anti-coalescence into formula (\ref{super-fidelity-3probabilities}) we reproduce the expression for super--fidelity. \begin{figure}\label{fig2} \end{figure} This seems to be the easiest experiment with two sources to perform. Such an experiment can be realized for two sources of photons engineered with the help of controlled decoherence (in a way similar to Ref. \cite{Kwiat}) corresponding to two different mixed states of a qubit. The above scheme immediately extends to the case where the states $\rho_{1}$, $\rho_{2}$ are defined on an $N=2^{n}$-dimensional Hilbert space representing the polarization degrees of freedom of $n$ photons. In this case the total Hilbert space is ${\cal H}_{N}=(\ensuremath{\mathbbm C}^{2})^{\otimes n}$ and the scheme reads as in Fig. \ref{fig3} (compare \cite{Alves, ASH-Generalised-Entropies}). If the probability $p^{(s_k),k}_{ij}$ with $s_{k}=-1$ ($s_{k}=+1$) corresponds to anticoalescence (coalescence) on the $k$-th beamsplitter, i.e. it represents the probability of two clicks (one click) in the pair of detectors $D_k$, $D_{k}'$, then the total probabilities \begin{equation} p^{(-)}_{ij}=\sum_{s_{1},s_{2}...,s_{n}: \ s_{1}s_{2}...s_{n}=-1}p^{(s_1),1}_{ij}p^{(s_2),2}_{ij}... p^{(s_n),n}_{ij}, \ \ \ i,j=1,2 \label{probabilities-n-photons} \end{equation} are those we put into (\ref{super-fidelity-3probabilities}). In the formula above we count all the cases when an odd number of anti-coalescence events occurs, provided that there are no photon losses during the experiment. \begin{figure}\label{fig3} \end{figure} For $n=2$ this type of experiment has already been performed with two two-photon sources producing entangled states \cite{Bovino}. However, the sources were considered to provide the same state on average rather than two different ones.
A similar reasoning was used in another recent experiment, in which photon polarization and momentum degrees of freedom were used to estimate the concurrence, where an additional strong assumption about the purity of each copy was also used \cite{Walborn}. In general, measurement schemes for quantities like purity, concurrence, and sub-- and super--fidelity in a collective framework like the one presented here require the assumption that the sources producing the states are stationary and memoryless. Quite remarkably, this is the same assumption one makes in quantum tomography. As discussed in \cite{vanEnk,vanEnkEtAl}, there may be difficulties with satisfying it in real experimental scenarios, for instance due to classical correlations between consecutive copies of the system. In other words, the condition of having the global state in $\varrho^{\otimes N}$ form (which mathematically corresponds to the quantum de Finetti condition \cite{QuantumDeFinnetti}) may not be obeyed. This point requires more analysis and leads, in general, to nontrivial issues. It seems, however, that the stationarity and memoryless character of the source can usually be satisfied approximately. Then the present measurement would serve (similarly to quantum tomography, though perhaps in a way more sensitive to source correlations) as an approximate, coarse-grained characteristic of the state sources under reasonable physical assumptions. \subsection{Quantum networks} There is yet another method of detection of the quantities considered here. This is a method based on quantum networks. It is known that a unitary operation $U$ acting on a state $\sigma$ but {\it controlled} by a qubit in the superposition state $|+\rangle\equiv \frac{1}{\sqrt{2}}(|0\rangle +|1\rangle)$ reproduces the value $\mathrm{Re} (\mathrm{tr} U \sigma)$ directly as the mean value of the Pauli matrix $\langle \sigma_{x}\rangle $ measured on the control qubit \cite{Knill,EkertEtAl}.
This fact allows us to measure certain nonlinear functions of the state. To get $\mathrm{tr} \rho^{k}$ one takes $k$ copies of the state, $\sigma=\rho^{\otimes k}$, and takes for $U$ the operator of cyclic permutation, in full analogy to the $V^{\pi}$ used in the previous subsection. To measure the overlap of two matrices, $\rho_1$, $\rho_2$, one takes $\sigma=\rho_1 \otimes \rho_2$ and uses the SWAP operator, $U=V$. The corresponding network is already provided explicitly in Ref. \cite{EkertEtAl}, so we shall not write it down here. Such a network allows one to measure all three quantities needed to reproduce the super--fidelity $G$. Indeed, the network produces {\it directly} (as mean values of $\sigma_z$ on the control qubit) all three mean values $\mathrm{tr} \rho_i\rho_j$, $i,j=1,2$, provided that the states forming the input of the controlled part of the network are $\rho_{i} \otimes \rho_{j}$. Some alternative constructions of programmable networks designed to measure super--fidelity are also possible. \begin{figure}\label{fig4} \end{figure} Similarly, sub--fidelity $E$ can also be estimated with a network-based experimental scheme. Following the reasoning of Ref. \cite{ADH-Single-Witness} one constructs the following programmable quantum network -- see Fig. \ref{fig4}. Depending on the program state $|\Psi_{12}\rangle$, as a mean value of $\sigma_{z}$ of the measured control qubit one gets \begin{enumerate} \item[(i)] $\mathrm{tr} \rho_1\rho_2$ if $|\Psi_{12}\rangle=|0\rangle|0\rangle$, \item[(ii)] $\mathrm{tr} \rho_1\rho_2\rho_1\rho_2$ if $|\Psi_{12}\rangle=|1\rangle|0\rangle$, \item[(iii)] $\frac{1}{2}\left(\mathrm{tr} \rho_1\rho_2\rho_1\rho_2 - (\mathrm{tr} \rho_1\rho_2)^{2}\right)$ if $|\Psi_{12}\rangle = (|0\rangle|1\rangle+|1\rangle|0\rangle)/\sqrt{2}$, \ {\sl i.e.} if it is in a Bell state. \end{enumerate} The last quantity, up to the factor $(-\frac{1}{4})$, is just the quantity that occurs under the square root in the formula for $E$.
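The reconstruction of $E$ from the three "program" outputs can be checked numerically. The sketch below is our illustration (assuming NumPy; \texttt{rand\_rho} is a hypothetical helper), comparing $E$ built from the outputs (i)--(iii) against the definition:

```python
import numpy as np

def rand_rho(n, rng):
    """Hypothetical helper: random density matrix M M^dag / tr(M M^dag)."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(4)
r1, r2 = rand_rho(3, rng), rand_rho(3, rng)

# the three "program" outputs (i)-(iii) of the network
q1 = np.trace(r1 @ r2).real                    # (i)  tr rho1 rho2
q2 = np.trace(r1 @ r2 @ r1 @ r2).real          # (ii) tr (rho1 rho2)^2
q3 = 0.5 * (q2 - q1 ** 2)                      # (iii), note q3 <= 0

# sub-fidelity reconstructed from the outputs: E = q1 + sqrt(-4 q3)
E_net = q1 + np.sqrt(-4 * q3)
E_def = q1 + np.sqrt(2 * (q1 ** 2 - q2))
assert abs(E_net - E_def) < 1e-12
```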
In general, to estimate the sub--fidelity $E$ we may ``run'' the first ``program'' and either the second or the third one. Alternatively, we may run all three programs and use the data to verify the accuracy of the experiment by comparing the two partially independent estimates of $E$ obtained in that way. It is easy to see that the same network can be used to estimate super--fidelity $G$ if one puts as an input $\rho_{i} \otimes \rho_{i} \otimes \rho_{j}\otimes \rho_{j}$, $i,j=1,2$. \section{Distances and geometry of the space of states} \subsection{Hilbert-Schmidt distance and flat geometry} The geometry of the space of quantum states depends on the metric used \cite{BC94,PS96,ZS01,BZ06}. The set ${\Omega}_N$ of mixed states of size $N$ reveals the Euclidean (flat) geometry if it is analyzed with respect to the {\sl Hilbert-Schmidt distance}, \begin{equation} D_{HS}(\rho_1,\rho_2)=\sqrt{ \mathrm{tr} [(\rho_1 - \rho_2)^2] }. \label{HilbSchmi} \end{equation} To demonstrate this property let us first concentrate on the simplest case, $N=2$. Making use of the notion of a coherence vector $\vec \tau$, any state of a qubit can be written in the {\sl Bloch representation} \begin{equation} \rho = \frac {\ensuremath{\mathbbm I}}{N} + {\vec \tau}\cdot {\vec \lambda} . \label{Pauli} \end{equation} Here $\vec{\lambda}$ denotes the vector of three rescaled traceless Pauli matrices $\{\sigma^x, \sigma^y, \sigma^z \}/{\sqrt{2}}$, which are orthogonal in the sense of the Hilbert-Schmidt scalar product, $\langle \lambda^k |\lambda^m\rangle= \mathrm{tr}(\lambda^k)^{\dagger}\lambda^m=\delta_{km}$. Together with $\lambda^0={\ensuremath{\mathbbm I}}/\sqrt{2}$ they form an orthonormal basis in the space of complex density matrices of size two. Due to the Hermiticity of $\rho$, the three-dimensional Bloch vector $\vec \tau$ is real. The positivity condition implies $|\vec \tau|\le 1/\sqrt{2}=R_2$, with equality for pure states, which form the Bloch sphere of radius $R_2$.
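The flatness of the Hilbert-Schmidt geometry for qubits can be checked numerically from the Bloch representation (\ref{Pauli}). The sketch below is our illustration (assuming NumPy; \texttt{bloch\_state} and \texttt{d\_hs} are hypothetical helper names):

```python
import numpy as np

# rescaled Pauli matrices, orthonormal w.r.t. the Hilbert-Schmidt product
lam = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex) / np.sqrt(2)

def bloch_state(tau):
    """Qubit state rho = I/2 + tau . lambda for a Bloch vector tau."""
    return np.eye(2, dtype=complex) / 2 + sum(t * l for t, l in zip(tau, lam))

def d_hs(a, b):
    """Hilbert-Schmidt distance sqrt(tr[(a-b)^2]) for Hermitian a, b."""
    delta = a - b
    return np.sqrt(np.trace(delta @ delta.conj().T).real)

rng = np.random.default_rng(5)
t1 = rng.normal(size=3); t1 *= rng.uniform(0, 1 / np.sqrt(2)) / np.linalg.norm(t1)
t2 = rng.normal(size=3); t2 *= rng.uniform(0, 1 / np.sqrt(2)) / np.linalg.norm(t2)
# D_HS between the states equals the Euclidean distance of the Bloch vectors
assert abs(d_hs(bloch_state(t1), bloch_state(t2)) - np.linalg.norm(t1 - t2)) < 1e-12
```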
Representation (\ref{Pauli}) implies that for any state of a qubit $\mathrm{tr} \rho^2=1/2 + |\tau|^2$. Consider two arbitrary density matrices and express their difference $\rho_1-\rho_2$ in the Bloch form. The expansion coefficients of this difference are the differences between the components of the two Bloch vectors ${\vec \tau}_1$ and ${\vec \tau}_2$. Therefore the Hilbert-Schmidt distance induces the flat (Euclidean) geometry of ${\Omega}_2$, \begin{equation} D_{HS}\bigl( \rho_{{\vec \tau}_1}, \rho_{{\vec \tau}_2}\bigr)= D_{E}({\vec \tau}_1, {\vec \tau}_2) , \label{densHS} \end{equation} where $D_E$ is the Euclidean distance between both Bloch vectors in ${\mathbbm R}^3$. It is worth adding that expression (\ref{densHS}) holds for arbitrary $N$. In this case $\vec \tau $ is a real vector with $N^2-1$ components, while the vector ${\vec \lambda}=\{\lambda^k\}_{k=1}^{N^2-1}$ in (\ref{Pauli}) denotes the set of $N^2-1$ traceless generators of the group $SU(N)$. Positivity of $\rho$ implies that the length of the Bloch vector is bounded by \begin{equation} |{\vec \tau}| \le D_{HS}({\ensuremath{\mathbbm I}}/N,|\psi\rangle \langle \psi|) = \sqrt{ \frac{N-1}{N}} =: R_N . \label{radiusN} \end{equation} For $N=2$ the condition $|{\vec \tau}| \le R_2$ is sufficient to imply that the corresponding matrix is positive and represents a state, while for $N\ge 3$ it is only a necessary condition \cite{BZ06}. This is related to the fact that with respect to the flat, H--S geometry the set ${\Omega}_2$ forms a full $3$-ball, while for larger $N$ the set ${\Omega}_N$ forms a convex subset of the $(N^2-1)$-dimensional ball of radius $R_N$ centered at $\rho_*={\ensuremath{\mathbbm I}}/N$. \subsection{Bures distance and the geometry it induces} The notion of fidelity, introduced in (\ref{fidel}), can be used to define the {\sl Bures distance} \cite{Bu69,Uh76} \begin{equation} D_{F}(\rho_1,\rho_2) = \sqrt{ 2 - 2 \sqrt{F(\rho_1,\rho_2)}}
\label{Bures} \end{equation} or the {\sl Bures length} \cite{Uh95} (later called {\sl angle} in \cite{NC00}), \begin{equation} D'_F (\rho_1,\rho_2) := {\rm arccos} \sqrt{F (\rho_1,\rho_2)}=\frac{1}{2}{\rm arccos} \Bigl( 2F(\rho_1,\rho_2)-1\Bigr) . \label{anglefid} \end{equation} For any pair of pure states the Bures length coincides with their Fubini--Study distance, $D'_F\bigl(\rho_{\psi}, \rho_{\phi}\bigr)= d_{FS}\bigl( |\psi\rangle,|\phi\rangle\bigr)= {\rm arccos}|\langle \psi|\phi\rangle|$. The Bures metric is distinguished by its rather special properties: it is a Riemannian, monotone metric \cite{Pe96}, a Fisher adjusted metric \cite{PS96}, closely related to the statistical distance \cite{BC94}. It is not difficult to describe the geometry of the set of mixed states of a single qubit induced by the Bures metric. Consider a mixed state $\rho\in {\Omega}_2$ and its transformation proposed in \cite{Uh92}, \begin{equation} \rho(x,y,z) \to \Bigl( x,y,z,t=\sqrt{\frac{1}{2}-x^2-y^2-z^2} \Bigr) . \label{blow1} \end{equation} It blows up the Bloch ball ${\bf B}^3$ of radius $R_2=1/\sqrt{2}$ into a hyper-hemisphere $\frac{1}{2}S^3$ of the same radius. The original variables $(x,y,z)$ denote the parameters of the state in the Bloch vector representation. The auxiliary variable reads $t=\sqrt{1/2-|\tau|^2}$ in terms of the Bloch vector, so that $t^2+\mathrm{tr} \rho^2=1$. The maximally mixed state $\rho_*=(0,0,0)$ is mapped into a hyper-pole. It is equally distant from all pure states located at the hyper-equator $S^2$, which form the boundary of ${\Omega}_2$. Any state $\rho$ is uniquely represented by an ``extended Bloch vector'' ${\vec v}=(x,y,z,t)$ of length $R_2$. Consider two states $\rho_1$ and $\rho_2$, described by two vectors $\vec v_1$ and ${\vec v_2} \in {\mathbbm R}^4$, which form the angle $\vartheta$.
Since for any one-qubit states the bound (\ref{eqn:main-theorem}) becomes an equality, the fidelity between them reads $F(\rho_1,\rho_2)=\mathrm{tr} \rho_1\rho_2+\sqrt{t_1^2 t_2^2} =1/2+{\vec \tau_1} \cdot {\vec \tau_2}+t_1 t_2$. This can be rewritten with the use of the extended vectors $\vec v_i$ and the angle between them, $F=1/2+{\vec v_1}\cdot{\vec v_2}=1/2+R_2^2 \cos \vartheta$. Since $R_2^2=1/2$ we find \begin{equation} \vartheta =\arccos\bigl( 2F-1\bigr) = 2D'_F (\rho_1,\rho_2) , \label{theta} \end{equation} which shows that the Bures length (\ref{anglefid}) between any two mixed states is proportional to the Riemannian distance between the corresponding points at the Uhlmann hemisphere. Making use of the fidelity $F$ one can also define other distances in the space of quantum states. For instance, Gilchrist et al. \cite{GLN05} have shown that the {\sl root infidelity} \begin{equation} C(\rho_1,\rho_2) = \sqrt{ 1 - F(\rho_1,\rho_2)} \label{infidelity} \end{equation} satisfies the triangle inequality and thus introduces a metric. This very quantity provides an upper bound for the trace distance, $D_{\rm tr}(\rho_1,\rho_2)= \frac{1}{2}{\rm Tr} |\rho_1-\rho_2| \le C(\rho_1,\rho_2)$ \cite{FG99,BZ06}. Note that the Bures distance, the Bures length and the root infidelity are functions of the same quantity, so they generate the same topology. \subsection{Modified Bures length} In analogy to (\ref{Bures}) and (\ref{anglefid}) one may ask whether \begin{equation} D_{G}(\rho_1,\rho_2) = \sqrt{ 2 - 2 \sqrt{G(\rho_1,\rho_2)}} \label{Buresup} \end{equation} and \begin{equation} D'_G (\rho_1,\rho_2) := {\rm arccos} \sqrt{G (\rho_1,\rho_2)} \label{angleup} \end{equation} define distances. This is obvious for $N=2$ because $F=G$. The situation changes for $N\ge 3$, for which $D_F$ and $D_G$ do differ and only $F \le G$ is valid. We do not know whether $D_G$ and $D'_G$ are distances.
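The relations $F=G$ for qubits and $E \le F \le G$ in general are easy to probe numerically. Below is a minimal sketch (the helper names are ours; SciPy's `sqrtm` supplies the matrix square root), with the sub-- and super--fidelity written out through the three traces they require:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(7)

def random_density(n):
    """Random full-rank density matrix of size n (Wishart-type construction)."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = g @ g.conj().T
    return h / np.trace(h).real

def fidelity(r1, r2):
    # F = (tr sqrt(sqrt(r1) r2 sqrt(r1)))^2
    s = sqrtm(r1)
    return np.trace(sqrtm(s @ r2 @ s)).real ** 2

def sub_fidelity(r1, r2):
    # E = tr(r1 r2) + sqrt(2[(tr r1 r2)^2 - tr(r1 r2 r1 r2)])
    t = np.trace(r1 @ r2).real
    t2 = np.trace(r1 @ r2 @ r1 @ r2).real
    return t + np.sqrt(max(2 * (t * t - t2), 0.0))

def super_fidelity(r1, r2):
    # G = tr(r1 r2) + sqrt(1 - tr r1^2) sqrt(1 - tr r2^2)
    t = np.trace(r1 @ r2).real
    p1, p2 = np.trace(r1 @ r1).real, np.trace(r2 @ r2).real
    return t + np.sqrt(max(1 - p1, 0.0)) * np.sqrt(max(1 - p2, 0.0))

q1, q2 = random_density(2), random_density(2)   # for qubits F = G
r1, r2 = random_density(3), random_density(3)   # for N >= 3 only E <= F <= G
```

Both bounds need only traces of products of the two matrices, which is what makes them cheap compared with $F$ itself.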
However, it can be proved that a direct analogue of the root infidelity (\ref{infidelity}), \begin{equation} \label{new.1a} C'(\rho_1, \rho_2) = \sqrt{ 1 - G(\rho_1,\rho_2)} , \end{equation} is a genuine distance. The same is true for the {\sl modified Bures length}, \begin{equation} \label{new.1b} D'_M( \rho_1, \rho_2) = \arccos G(\rho_1, \rho_2) . \end{equation} \proof{ Let us call ${\cal L}$ the direct sum of the real linear space of all Hermitian operators and the 1-dimensional space of real numbers. Its elements are $\{H, x\}$, with $H$ Hermitian and $x$ a real number. ${\cal L}$ becomes Euclidean (i.e. a real Hilbert space) by defining the scalar product \begin{equation} \label{new.2} (\{H_1, x_1\} , \{H_2, x_2\}) = \mathrm{tr} H_1 H_2 + x_1 x_2 . \end{equation} Let us denote by $B({\cal L})$ the unit ball of ${\cal L}$ and by $S({\cal L})$ the unit sphere. Our proof rests on the embedding of the Hermitian operators \begin{equation} \label{new.3a} B_N = \left\{ H | \ \mathrm{tr} H = 1, \mathrm{tr} H^2 \leq 1 \right\} \end{equation} into $S({\cal L})$ by \begin{equation} \label{new.3b} H \to \xi_H := \left\{ H, \sqrt{1 - \mathrm{tr} H^2} \right\} . \end{equation} Clearly, $(\xi_H, \xi_H) = 1$, and from (\ref{new.2}) we get \begin{equation} \label{new.3c} (\xi_H, \xi_{H'}) = G(H, H') . \end{equation} Now it is obvious that $\sqrt{2-2G}$ is the Euclidean distance between $\xi_H$ and $\xi_{H'}$, provided $H$ and $H'$ belong to $B_N$. Because the density operators form a subset of $B_N$, (\ref{new.1a}) is a distance. From (\ref{new.3c}) we get $G(H, H') = \cos \alpha$, where $\alpha$ is the angle at which $\xi_H$ and $\xi_{H'}$ are seen from the center of the ball $B({\cal L})$. Thus, $\arccos G = \alpha$ and, in particular, (\ref{new.1b}) is a distance. } Let us now return to the two conditions of (\ref{new.3a}).
They are equivalent to \begin{equation} \label{new.4} B_N = \left\{ H | \ \mathrm{tr} H = 1, \mathrm{tr} (H - \frac{1}{N} \ensuremath{\mathbbm I})^2 \leq \frac{N-1}{N} \right\} \end{equation} and they describe the smallest ball containing the state space. $B_N$ is an affine translate by $1/N$ of the generalized Bloch ball \cite{NC00}. $B_N$ is centered at $A = N^{-1} \ensuremath{\mathbbm I}$ and is of radius $\sqrt{(N-1)/ N}$. Above we have embedded $B_N$ by the map (\ref{new.3b}) into the sphere $S({\cal L})$. Precisely this gives us the opportunity to apply Mielnik's definition \cite{Mi74} of a transition probability (he also called it {\sl affine ratio}) of two extremal states of a compact convex set. In our case the compact convex set is $B({\cal L})$ and its extremal part is $S({\cal L})$. In the case at hand, Mielnik's procedure starts with first choosing an extremal point $\xi \in S({\cal L})$ and selecting all affine functions $l$ satisfying $l(\xi) = 1$ and $0 \leq l \leq 1$ on $B({\cal L})$. Any such function can be written as \begin{equation} \label{new.5a} l(\eta) = \frac{a + (\xi, \eta) + 1}{a + 2 } , \end{equation} with $a \geq 0$ and $\eta \in {\cal L}$ arbitrary. Now we have to vary over all these affine functions, \begin{equation} \label{new.5b} p_{M}(\eta, \xi) := \min_l l(\eta) = \min_a \frac{a + (\xi, \eta) + 1}{a + 2} , \end{equation} to get Mielnik's transition probability \begin{equation} \label{new.5c} p_{M}(\xi, \eta) = \frac{1 + (\xi, \eta)}{2} .
\end{equation} Returning to $H, H' \in B_N$, we can write \begin{equation} \label{new.5d} p_M(H, H') := p_M(\xi_H, \xi_{H'}) = \frac{1 + G(H, H')}{2} \end{equation} and, specializing further by choosing two density operators for $H$ and $H'$ in the equation above, we arrive at \begin{equation} \label{new.5e} D_{M}(\rho_1, \rho_2) = 2 \sqrt{1 - p_M(\rho_1, \rho_2)} = 2 \sin \frac{\alpha}{2}, \end{equation} (using $2 \cos^2 (\alpha/2) = 1 + \cos \alpha$) and also at \begin{equation} \label{new.5g} D'_{M}(\rho_1, \rho_2) = 2 \arccos \sqrt{p_M(\rho_1, \rho_2)} . \end{equation} \section{Concluding remarks} In this paper we analyzed various bounds for the quantum fidelity. Two quantities, which we propose to call sub-- and super--fidelity, possess particularly nice properties. On the one hand, these quantities form universal lower and upper bounds for the fidelity. Moreover, with respect to the tensor product they display sub-- and super--multiplicativity. On the other hand, the quantities $E$ and $G$ are much easier to calculate than the original fidelity $F$. To compute any of these bounds it is enough to evaluate three traces only. Thus one can expect that the quantities introduced in this paper will become useful for various tasks of the theory of quantum information processing. Furthermore, under the realistic assumption that several copies of both states are available, it is possible to design a scheme to measure experimentally the sub-- and super--fidelity between arbitrary mixed states. For instance, the measurement of the super--fidelity is possible if one has three copies of each state. In this paper we have worked out concrete schemes of such experiments concerning the super--fidelity between any two mixed states representing the polarization of photons. \nonumsection{Acknowledgements} \noindent We would like to thank A. Buchleitner for inviting three of us to Dresden in September 2005 for a workshop on Quantum Information, during which our collaboration on this project was initiated.
It is also a pleasure to thank I. Bengtsson and M. Horodecki for inspiring discussions. J.A.M. would like to thank Iza Miszczak for her help. We acknowledge financial support by the Polish Ministry of Science and Higher Education under the grants number N519 012 31/1957 and DFG-SFB/38/2007, by the LFPPI network and by the European Research Project SCALA. \nonumsection{Note added} \noindent After this paper was submitted we learned about a related work by Mendonca et al. \cite{MNMFL08} in which the super--fidelity was independently introduced and called an ``alternative fidelity'' measure. In this valuable work the authors provide an alternative proof of the super--multiplicativity of $G$, discuss its relation to the trace distance, analyze the distance $G$ induces on the space of mixed quantum states, and prove that $G$ is {\sl jointly} concave in its two arguments. \nonumsection{References} \appendix{~~~Algebraic facts}\label{sec:app} \noindent In this appendix we collect useful algebraic facts, which are used in the main body of the paper. \begin{fact}[From corollary IX.5.3 in \cite{bhatia}]\label{fact:2} For any positive matrices $A$ and $B$ and every unitarily invariant norm $|||\cdot|||$ we have \begin{equation} |||A^\nu B^{1-\nu}||| \leq|||A|||^\nu |||B|||^{1-\nu}, \end{equation} where $\nu \in [0,1]$. \end{fact} \begin{fact}[From corollary IX.5.4 in \cite{bhatia}]\label{fact:3} For any positive matrices $A$ and $B$ and every unitarily invariant norm $|||\cdot|||$ we have \begin{equation} |||A^\nu B^\nu||| \leq ||| \ensuremath{\mathbbm I} |||^{1 -\nu} |||AB|||^{\nu}, \end{equation} where $\nu \in [0,1]$. \end{fact} The next two facts can be found in \cite{coope}. \begin{fact}\label{fact:4} The matrix $AB$ is similar to the matrices $\sqrt{A}B\sqrt{A}$ and $\sqrt{B}A\sqrt{B}$. \end{fact} \begin{fact}\label{fact:5} For positive matrices $A$ and $B$, the matrix $AB$ has positive eigenvalues.
\end{fact} \begin{fact}\label{fact:6} If $p_1 + p_2 + \dots + p_n = 1$ and $p_i \geq 0$ then \begin{equation} 1 - p_1^2 - p_2^2 - \dots -p_n^2 = \sum_{i \neq j} p_i p_j. \end{equation} \end{fact} \begin{prop}\label{prop:ProdStrongIso} Let $g$ be defined as \begin{equation} g(x) = \sum_{i\not=j}\sqrt{x_i}\sqrt{x_j} . \end{equation} For $x , y \in \ensuremath{\mathbbm R}_+^n$ such that \begin{equation}\label{def:weakProdMaj} \prod_{i=1}^k x_i \leq \prod_{i=1}^k y_i , \ \mathrm{ for } \ k=1,\dots,n , \end{equation} with equality for $k=n$, we have \begin{equation} g(x) \leq g(y). \end{equation} \end{prop} \proof{ We introduce the notation \begin{equation} g_i(\cdot) = \frac{\partial g}{\partial x_i} (\cdot) . \end{equation} Direct computation shows that the function $g$ satisfies \begin{equation}\label{eqn:ProdSchurOstrowski} u_1 g_1(u) \geq u_2 g_2(u) \geq \dots \geq u_n g_n(u) , \end{equation} for $u \in \ensuremath{\mathbbm R}^n$ such that $u_1 \geq u_2 \geq \dots \geq u_n \geq 0$. We denote $\alpha_i = \log(x_i)$ and $\beta_i = \log(y_i)$. Note that (\ref{def:weakProdMaj}) can be rewritten as \begin{equation} \sum_{i=1}^k \alpha_i \leq \sum_{i=1}^k \beta_i \text{ for } k=1,\dots,n, \end{equation} with equality for $k=n$. We define a new function \begin{equation}\label{def:funkcjaH} h(v) = g(e^{v_1},e^{v_2},\dots, e^{v_n}). \end{equation} For a given vector $u$ such that $u_1 \geq u_2 \geq \dots \geq u_n \geq 0$, and $v_i = \log(u_i)$ we have \begin{equation} v_1 \geq v_2 \geq \dots \geq v_n . \end{equation} Using (\ref{eqn:ProdSchurOstrowski}) we can write \begin{equation} e^{v_1} g_1(e^{v_1},e^{v_2},\dots, e^{v_n}) \geq \dots \geq e^{v_n} g_n(e^{v_1},e^{v_2},\dots, e^{v_n}) . \end{equation} Now from the above and (\ref{def:funkcjaH}) we have \begin{equation} h_1(v) \geq h_2(v) \geq \dots \geq h_n(v). \end{equation} Note now that the function $h$ satisfies the condition from \cite[Theorem 3.6]{buliga} and thus is {\it Schur-convex}, so \begin{equation} h(\alpha) \leq h(\beta).
\end{equation} Using (\ref{def:funkcjaH}) we can write \begin{equation} g(x) \leq g(y). \end{equation} Thus the proof is complete. } \begin{fact}[H\"older's inequality \cite{handbook}]\label{fact:holder} For $ a >1, b = a/(a-1)$ and positive semidefinite $A$ and $B$ we have \begin{equation} \mathrm{tr} (A B) \leq (\mathrm{tr} A^a)^{1/a} (\mathrm{tr} B^b)^{1/b}. \end{equation} \end{fact} \begin{fact}\label{fact:inequality1} For density matrices $A$ and $B$ we have \begin{equation} 1-\sqrt{\mathrm{tr} A^2}\sqrt{\mathrm{tr} B^2} \geq \sqrt{1-\mathrm{tr} A^2}\sqrt{1-\mathrm{tr} B^2} . \end{equation} \end{fact} \proof{ This inequality can be rewritten in the equivalent form \begin{equation} 1-2\sqrt{\mathrm{tr} A^2 \mathrm{tr} B^2} + \mathrm{tr} A^2 \mathrm{tr} B^2 \geq 1 - \mathrm{tr} A^2 - \mathrm{tr} B^2 + \mathrm{tr} A^2 \mathrm{tr} B^2 , \end{equation} which is equivalent to \begin{equation} \sqrt{\mathrm{tr} A^2 \mathrm{tr} B^2} \leq \frac{\mathrm{tr} A^2 + \mathrm{tr} B^2}{2}. \end{equation} This completes the proof, since for any two positive numbers the arithmetic mean is greater than or equal to the geometric mean. } \begin{fact}[Maclaurin inequality \protect{\cite[p. 5]{Biler}}]\label{fact:Maclaurin} For a given matrix $A$ of rank $r$ and with $r$ positive eigenvalues we have \begin{equation} \sqrt[k]{\frac{s_k(A)}{\binom{r}{k}}} \geq \sqrt[k+1]{\frac{s_{k+1}(A)}{\binom{r}{k+1}}} \end{equation} for $1 \leq k < r$. \end{fact} \appendix{~~~Proof of the lower bound (\ref{lowerbis})} \noindent To prove that the sub--fidelity $E$ is not larger than the fidelity $F$, let us take a look at equations (\ref{Fs1}) and (\ref{sub-s2}), in which both quantities are expressed in terms of the second symmetric function.
We can rewrite the function $s_2$, which forms the fidelity, \begin{eqnarray} s_2(\sqrt{A^{1/2} B A^{1/2}}) &=& \sum_{i<j} \lambda_i(\sqrt{A^{1/2} B A^{1/2}}) \lambda_j(\sqrt{A^{1/2} B A^{1/2}}) \\ &=& \sum_{i<j} \sqrt{\lambda_i(A^{1/2} B A^{1/2})} \sqrt{\lambda_j(A^{1/2} B A^{1/2})} \\ &=& \sum_{i<j} \sqrt{\lambda_i(A B )} \sqrt{\lambda_j(A B)} . \end{eqnarray} The last equality is a consequence of the similarity of the matrices $A^{1/2} B A^{1/2}$ and $A B$. Making use of the subadditivity of the square root we obtain \begin{eqnarray} s_2(\sqrt{A^{1/2} B A^{1/2}}) &\geq& \sqrt{\sum_{i<j} \lambda_i(A B ) \lambda_j(A B) } \\ &=& \sqrt{s_2(A B)}. \end{eqnarray} As a consequence we get \begin{equation} F(A,B) = \mathrm{tr} AB + 2 s_2(\sqrt{A^{1/2} B A^{1/2}}) \geq \mathrm{tr} AB + 2 \sqrt{s_2(A B)} = E(A,B). \end{equation} \appendix{~~~Proof of the lower bound (\ref{determin}) } \label{sec:app2} \noindent To prove inequality (\ref{determin}) we use Fact \ref{fact:Maclaurin} (Maclaurin inequality) and obtain \begin{equation} \left( \frac{s_2\left(\sqrt{A^{1/2} B A^{1/2}} \right)}{\binom{r}{2}} \right)^{1/2} \geq \left( \frac{s_{r}\left(\sqrt{A^{1/2} B A^{1/2}} \right)}{\binom{r}{r}}\right)^{1/r}. \end{equation} Using Fact \ref{fact:4} we get \begin{eqnarray*} s_2\left(\sqrt{A^{1/2} B A^{1/2}} \right) &\geq& \binom{r}{2} \left( {s_{r}\left(\sqrt{A^{1/2} B A^{1/2}} \right)}\right)^{2/r} = \binom{r}{2} \left({\prod_{i=1}^r \lambda_{i}\left(\sqrt{A^{1/2} B A^{1/2}} \right)} \right)^{2/r} \\ &=& \binom{r}{2} \left({\prod_{i=1}^r \sqrt{\lambda_{i}\left(A^{1/2} B A^{1/2} \right)}}\right)^{2/r} = \binom{r}{2} \left({\prod_{i=1}^r \lambda_{i}\left(A B \right)} \right)^{1/r}\\ &=& \binom{r}{2} \sqrt[r]{s_r(AB)}. \end{eqnarray*} Now using (\ref{Fs1}) we write \begin{equation} F(A,B) = \mathrm{tr} AB + 2 s_2\left(\sqrt{A^{1/2} B A^{1/2}} \right) \geq \mathrm{tr} AB + r(r-1) \sqrt[r]{s_r(AB)}.
\end{equation} \halmos \appendix{~~~Proofs of Lemmas}\label{sec:LemmaProof} \noindent \proofof{Lemma \ref{lemma:s2(X)<s2(d(A)d(B))}}{ Observe that the matrix $\rho_1^{1/2} \rho_2 \rho_1^{1/2}$ is similar to $\rho_1 \rho_2$ and thus \begin{eqnarray*} 2 s_2 \left(\sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}} \right) &=& \sum_{i \neq j} \lambda_i \left(\sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}} \right) \lambda_j\left(\sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}} \right) \\ &=& \sum_{i \neq j} \sqrt{ \lambda_i \left(\rho_1^{1/2}\rho_2 \rho_1^{1/2} \right) \lambda_j\left(\rho_1^{1/2}\rho_2 \rho_1^{1/2} \right) } \\ &=& \sum_{i \neq j} \sqrt{ \lambda_i \left(\rho_1 \rho_2 \right) \lambda_j\left(\rho_1 \rho_2 \right) }, \end{eqnarray*} where $\lambda_i(A)$ denotes the $i^{\text{th}}$ eigenvalue of a matrix $A$. Let us define a function $g : \ensuremath{\mathbbm R}^n \to \ensuremath{\mathbbm R}$ which acts on a vector $\vec x$ of non-negative numbers as \begin{equation} g({\vec x}) := \sum_{i\not=j}\sqrt{x_i x_j} . \end{equation} It allows one to rewrite \begin{equation}\label{eqn:s_2-as-g} 2 s_2\left( \sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}} \right) = g \bigl( {\vec \lambda}(\rho_1\rho_2) \bigr) , \end{equation} where $\vec \lambda(A)$ denotes the vector of eigenvalues of $A$. From \cite[Theorems 3.3.2 and 3.3.4]{hj2} we obtain \begin{equation} \prod_{i=1}^k \lambda_i(\rho_1\rho_2) \leq \prod_{i=1}^k \lambda_i(\rho_1)\lambda_i(\rho_2) \text{ for } k=1,\dots,n , \end{equation} with equality for $k=n$. Making use of Proposition \ref{prop:ProdStrongIso} from Appendix A with $x_i = \lambda_i(\rho_1 \rho_2)$ and $y_i = \lambda_i(\rho_1) \lambda_i(\rho_2)$ we obtain \begin{equation} g\bigl( {\vec \lambda} (\rho_1\rho_2) \bigr) \leq g\bigl( {\vec \lambda (\rho_1)} \circ {\vec \lambda(\rho_2)} \bigr), \end{equation} where $\circ$ denotes the Hadamard product \cite[Definition 7.5.1]{hj2}.
Now making use of (\ref{eqn:s_2-as-g}) we get \begin{equation} s_2\left( \sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}} \right) \leq s_2\left(\sqrt{ \mathrm{diag}({\vec \lambda(\rho_1)}) \mathrm{diag}({\vec \lambda(\rho_2)})}\right). \end{equation} Thus the proof is complete. } \proofof{Lemma \ref{lemma:s2<sqrt(s2)}}{ Let the density matrices $\rho_1, \rho_2$ have eigenvalues $p_1, \dots , p_n$ and $q_1, \dots , q_n$, respectively. We denote the diagonal matrices with the entries $p_1, \dots , p_n$ and $q_1, \dots , q_n$ on the diagonal by $\mathrm{diag}(p)$ and $\mathrm{diag}(q)$, respectively. Rewriting the second elementary symmetric function $s_2$ we obtain \begin{equation} 2 s_2 \left(\sqrt{\mathrm{diag}(p) \mathrm{diag}(q)} \right) = \sum_{i \neq j} \sqrt{p_i q_i} \sqrt{p_j q_j}. \end{equation} On the other hand \begin{equation} 2 \sqrt{s_2(\rho_1) s_2(\rho_2)} = \sqrt{\left(1 - \sum p_i^2 \right) \left(1 - \sum q_i^2 \right)} . \end{equation} Let us define vectors $x, y \in \ensuremath{\mathbbm R}^{n^2}$ by \begin{equation} x_{i,j} = \sqrt{p_i p_j} (1- \delta_{i,j}),\ y_{i,j} = \sqrt{q_i q_j} (1- \delta_{i,j}), \end{equation} where $x_{i,j} = x_{n(i-1) + j}$. Using the Cauchy--Schwarz inequality \begin{equation} |\scalar{x}{y}| \leq \sqrt{\scalar{x}{x}}\sqrt{\scalar{y}{y}}, \end{equation} we get \begin{equation} \label{eq8} \sum_{i \neq j} \sqrt{p_i q_i} \sqrt{p_j q_j} \leq \sqrt{\left(1 - \sum p_i^2 \right) \left(1 - \sum q_i^2 \right)}. \end{equation} This completes the proof. } \appendix{~~~The case $N=3$}\label{sec:app3} \noindent In this section we study the fidelity of two states $\rho_1$ and $\rho_2$ in the case where the rank $r$ of their product $\rho_1\rho_2$ is not greater than $3$.
As in Section \ref{sec:fid1} we will denote the eigenvalues of $\sqrt{\rho_1^{1/2} \rho_2 \rho_1^{1/2}}$ by $\lambda_i$, so the eigenvalues of $\rho_1^{1/2} \rho_2 \rho_1^{1/2}$ are given by $\{\lambda^2_i\}$ and, by similarity, the eigenvalues of $\rho_1 \rho_2$ are also given by $\{\lambda^2_i\}$. Since $r\le 3$, not more than three eigenvalues of $\rho_1\rho_2$ are positive, so the third symmetric function (\ref{s33}) reads $s_3(\rho_1\rho_2)=(\lambda_1 \lambda_2 \lambda_3)^2$. This is so for any two states of a qutrit, so for $N=3$ one has $s_3(\rho_1\rho_2)={\rm det}(\rho_1\rho_2)$. Consider now the expression for the fidelity (\ref{eqn:fidelity-sum-singular}), which can be rewritten with the use of the second symmetric function, \begin{equation} \label{fid_s2} F(\rho_1,\rho_2) = \mathrm{tr} \rho_1\rho_2 + 2 s_2 \Bigl( \sqrt{\rho_1^{1/2}\rho_2 \rho_1^{1/2}} \Bigr). \end{equation} The square of the symmetric function presented in the above equation can be written as $\bigl(\sum_{i<j} \lambda_i \lambda_j \bigr)^2 = \sum_{i<j} \lambda_i ^2 \lambda_j^2 +R$. The remainder $R$, defined implicitly by this equation, is equal to zero if $r \le 2$ and the sum consists of a single term only. It is difficult to handle $R$ in general. But if $r=3$ one has \begin{equation} \label{fid_R} R=\lambda_1\lambda_2(\lambda_2\lambda_3+\lambda_3\lambda_1) + ... \ = 2(\lambda_1\lambda_2 \lambda_3)(\lambda_1+\lambda_2+\lambda_3) . \end{equation} In the particular case $r\le 3$ discussed here, due to (\ref{rootfid}) one has $\lambda_1+\lambda_2+\lambda_3=\sqrt{F}$, while $\lambda_1 \lambda_2 \lambda_3=\sqrt{s_3(\rho_1\rho_2)}$. Combining this with (\ref{eqn:fidelity-sum-singular}) we get the equation for the fidelity satisfied for $r \le 3$, \begin{equation} F = \mathrm{tr} \rho_1 \rho_2 + 2 \sqrt{ s_2(\rho_1\rho_2) + 2 \sqrt{F} \sqrt{s_3( \rho_1 \rho_2) }} .
\label{rank3} \end{equation} In the case $r \le 2$ the third function $s_3$ vanishes, so this equation leads to the expression $F = \mathrm{tr} \rho_1 \rho_2 + 2 \sqrt{ s_2(\rho_1\rho_2)}= \mathrm{tr} \rho_1 \rho_2 + 2 \lambda_1\lambda_2$, already discussed in Section \ref{sec:fid1}. Another relation for the fidelity is due to the fact that the assumption $r\le 3$ implies \begin{equation} \sum_{j< k} \lambda_j \lambda_k = (\lambda_1\lambda_2 \lambda_3)(\lambda_1^{-1}+\lambda_2^{-1}+\lambda_3^{-1}) . \end{equation} Therefore in this case one has \begin{equation} \label{r3_s2} s_2 \Bigl( \sqrt{A^{1/2} B A^{1/2} } \Bigr) = \sqrt{ {\rm det}(AB) F(1/A,1/B)} . \end{equation} Lifting for a moment the assumption that the arguments of the fidelity have to be normalized, we therefore arrive at another equation for the fidelity satisfied for $r\le 3$, \begin{equation} F (A,B) = \mathrm{tr} AB + 2\sqrt{ {\rm det}(AB) F(1/A,1/B) } . \label{rank3b} \end{equation} \end{document}
\begin{document} \begin{frontmatter} \title{ON THE GENERATION OF PYTHAGOREAN TRIPLES AND REPRESENTATION OF INTEGERS AS A DIFFERENCE OF TWO SQUARES} \author{Emil Asmaryan\corref{cor1}\fnref{label2}} \address[label2]{Institute of Radiophysics and Electronics of Armenian Nat.Ac.Sci., Alikhanian Bros.str.,1, 0203, Ashtarak, Armenia} \cortext[cor1]{Emil Asmaryan} \ead{[email protected]} \begin{abstract} General formulas for finding the quantity of all primitive and nonprimitive triples generated by a given number $x$ are proposed. Formulas for finding the complete quantity of representations of integers as a difference of two squares are also obtained. \end{abstract} \begin{keyword} Pythagorean triple, primitive, integer, square, representation. \end{keyword} \end{frontmatter} \section{Introduction}\label{sec1} The equation \begin{equation}\label{eq:1} x^2+y^2=z^2 \end{equation} is under consideration. The solutions of \eqref{eq:1}, when $x$, $y$, $z$ ($x,y>2$) are positive integers, are called Pythagorean triples, and \eqref{eq:1} itself ranks among the Diophantine equations. If some triple $(x,y,z)$ is known, then one can obtain infinitely many solutions of \eqref{eq:1} by multiplying $x$, $y$, $z$ by arbitrary numbers. The triples with coprime $x$, $y$, $z$ are called primitive. Since Euclid's time there has been a well-known method which allows one to calculate Pythagorean triples from two integers $k$, $l$, representing $x$, $y$, $z$ as \begin{equation*} x=2kl, \hspace{1cm} y=k^2-l^2, \hspace{1cm} z=k^2+l^2; \hspace{1cm} k>l \end{equation*} \begin{equation}\label{eq:2} (2kl)^2+(k^2-l^2)^2=( k^2 + l^2)^2 \end{equation} If $k$ and $l$ are coprime and of opposite parity, then \eqref{eq:2} gives the primitive triples. There also exist other ways of obtaining Pythagorean triples or representing them as special combinations of certain numbers.
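Euclid's parametrization \eqref{eq:2} is straightforward to state in code. The sketch below (the helper names are ours) generates a triple from $k>l>0$ and checks primitivity via the greatest common divisor:

```python
from math import gcd

def euclid_triple(k, l):
    """Euclid's parametrization (2): x = 2kl, y = k^2 - l^2, z = k^2 + l^2."""
    assert k > l > 0
    return 2 * k * l, k * k - l * l, k * k + l * l

def is_primitive(triple):
    x, y, z = triple
    return gcd(gcd(x, y), z) == 1

x, y, z = euclid_triple(2, 1)      # (4, 3, 5), the smallest primitive triple
```

With $k$, $l$ coprime and of opposite parity the output is primitive; $k=3$, $l=1$ (both odd) gives the nonprimitive triple $(6,8,10)$.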
However, in this work the problem of finding the Pythagorean triples is stated as follows: for an arbitrary integer $x>2$, to find the total quantity of primitive and nonprimitive triples in which $x$ is one of the elements on the left side of \eqref{eq:1} (i.e. generated by $x$), and also to propose a convenient way for their calculation. The methods mentioned above are not effective for solving this kind of problem. Thus, the method \eqref{eq:2} gives only as many triples as there are representations of the given $x$ as $2kl$ (if $x$ is even) or as $k^2-l^2$ (if $x$ is odd). The Euclidean method gives the right quantity of primitive triples and also gives some nonprimitive ones (if the integers $k$ and $l$ are not coprime, or are both odd). However, as shown below, the total quantity of nonprimitive triples generated by $x$ may in fact be significantly larger than the quantity of triples obtained from \eqref{eq:2} with integer $k$ and $l$. In the present work, general formulas for finding the quantity of all triples generated by a given number $x$ are proposed. Formulas for finding the complete quantity of representations of integers as a difference of two squares are also obtained. Some of these results were presented in \cite{proc2016}. \section{Calculation of triples} Let us turn to the calculation of Pythagorean triples and their quantity. Taking into account that the numbers $x$ and $y$ can both be even or be of different parity, but cannot both be odd, we use the following standard procedure. \begin{enumerate}[(a)] \item \label{en:a} Let the odd number $x >1$ be given. Then $y$ is even, and $z$ is odd, i.e. $z=y+(2m+1)$, $m \geq 0$. From \eqref{eq:1} we have: \begin{equation*} x^2+[z-(2m+1)]^2=z^2, \end{equation*} \begin{equation}\label{eq:3} z=\frac{1}{2}\bigg[\frac{x^2}{2m+1}+2m+1\bigg],\hspace{1cm} y=\frac{1}{2}\bigg[\frac{x^2}{2m+1}-(2m+1)\bigg].
\end{equation} Since $y$ and $z$ must be integers, $2m+1\equiv d$ must be a divisor of $x^2$: \begin{equation}\label{eq:4} y_i=\frac{1}{2}\bigg(\frac{x^2}{d_i}-d_i\bigg),\hspace{1cm} z_i=\frac{1}{2}\bigg(\frac{x^2}{d_i}+d_i\bigg) \end{equation} Substituting into \eqref{eq:4} the values of the divisors $d_i$, we obtain the corresponding triples ($x$, $y_i$, $z_i$). It is easy to show that in \eqref{eq:4} $z$ is indeed odd and $y$ is even. In order to find the quantity of all different triples with positive elements, only the divisors $d_i<x$ must be used in \eqref{eq:4}. Let $N_d$ be the quantity of all divisors of $x^2$. It is obvious that the quantities of divisors $d_i<x$ and $d_i>x$ are the same. Therefore the quantity of divisors $d_i<x$, and hence the quantity of Pythagorean triples, is: \begin{equation}\label{eq:5} N_{tr}=\frac{N_d-1}{2} \end{equation} Note that squares of integers always have an odd quantity of divisors. \item \label{en:b} Let the even number $x>2$ be given. In this case $y$ and $z$ are numbers of the same parity, i.e. $z=y+2m$, $m>0$. Then, from \eqref{eq:1} we obtain: \begin{eqnarray*} x^2+(z-2m)^2=z^2,\\ x^2-4mz+4m^2=0 \end{eqnarray*} \begin{equation}\label{eq:6} y=\frac{(\frac{x}{2})^2}{m}-m,\hspace{1cm} z=\frac{(\frac{x}{2})^2}{m}+m \end{equation} Since $y$ and $z$ must be integers, $m\equiv d$ must be a divisor of $(\frac{x}{2})^2$, i.e. \begin{equation}\label{eq:7} y_i=\frac{(\frac{x}{2})^2}{d_i}-d_i,\hspace{1cm} z_i=\frac{(\frac{x}{2})^2}{d_i}+d_i. \end{equation} Substituting into \eqref{eq:7} the values of $d_i$, we obtain the triples ($x$, $y_i$, $z_i$). In order to obtain the triples with positive elements, only the divisors $d_i<\frac{x}{2}$ must be used in \eqref{eq:7}. If $N_d$ is the quantity of all divisors of $(\frac{x}{2})^2$, then the quantity of divisors $d_i<\frac{x}{2}$ is equal to $\frac{N_d-1}{2}$. Therefore, the quantity of all Pythagorean triples generated by the number $x$ is also $\frac{N_d-1}{2}$.
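The two divisor recipes \eqref{eq:4} and \eqref{eq:7} translate directly into a brute-force check (a sketch with our own helper names); for instance, it reproduces the count $N_{tr}=22$ with 4 primitive triples for $n=120$ used as an example in this paper:

```python
from math import gcd

def triples(n):
    """All Pythagorean triples (n, y, z) generated by n, via (4) and (7)."""
    odd = n % 2 == 1
    base = n if odd else n // 2        # d runs over divisors of base^2 with d < base
    m = base * base
    out = []
    for d in range(1, base):
        if m % d == 0:
            y, z = m // d - d, m // d + d
            if odd:                    # case (a) carries an extra factor 1/2
                y, z = y // 2, z // 2
            out.append((n, y, z))
    return out

def primitive(t):
    return gcd(gcd(t[0], t[1]), t[2]) == 1

t120 = triples(120)
```

The length of the returned list equals $\frac{N_d-1}{2}$, since each divisor below the bound pairs with one above it.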
Let us now determine $N_d$ in cases \ref{en:a}) and \ref{en:b}). For convenience we redenote $x\equiv n$. As is known, the complete quantity $Q$ of divisors of any number $n$ may be determined by its canonical expansion \begin{equation}\label{eq:8} n=p_1^{s_1}\cdot p_2^{s_2}\cdots p_q^{s_q}, \end{equation} where $p_1,\dots,p_q$ are distinct prime numbers. \end{enumerate} Then \begin{equation}\label{eq:9} Q=(s_1+1)(s_2+1)\cdots(s_q+1). \end{equation} If $n$ is an odd number, then from its canonical expansion \eqref{eq:8} we obtain: \begin{equation}\label{eq:10} n^2=p_1^{2s_1}\cdot p_2^{2s_2}\cdots p_q^{2s_q}, \end{equation} and \begin{equation}\label{eq:11} N_d=(2s_1+1)(2s_2+1)\cdots(2s_q+1). \end{equation} Hence the complete quantity of Pythagorean triples is: \begin{equation}\label{eq:12} N_{tr}=\frac{(2s_1+1)(2s_2+1)\cdots(2s_q+1)-1}{2} \end{equation} If $n$ is even, then $p_1=2$. Writing down $n$ as $n=2^{s_1+1}\cdot p_2^{s_2}\cdots p_q^{s_q}$, $s_1\geq 0$, we obtain: \begin{equation}\label{eq:13} \frac{n}{2}=2^{s_1}\cdot p_2^{s_2}\cdots p_q^{s_q},\hspace{1cm} (\frac{n}{2})^2=2^{2s_1}\cdot p_2^{2s_2}\cdots p_q^{2s_q} \end{equation} Then the complete quantity of triples is given by the same expression \eqref{eq:12}, but with $s_1,s_2,\dots,s_q$ taken from the canonical expansion \eqref{eq:13} of $\frac{n}{2}$. Thus we can formulate the following result: Every integer number $n>2$ generates $N_{tr}$ Pythagorean triples having the form: \begin{enumerate}[(a)] \item \label{en:a2} for odd $n$: \begin{equation}\label{eq:14} \left\{ n,\frac{1}{2}\bigg(\frac{n^2}{d_i}-d_i\bigg),\frac{1}{2}\bigg(\frac{n^2}{d_i}+d_i\bigg) \right\}, \end{equation} where $d_i$ are the divisors of $n^2$ which are less than $n$, and $N_{tr}$ is determined by expression \eqref{eq:12} with $s_1,s_2,\dots,s_q$ taken from the canonical expansion of $n$; \item \label{en:b2} for even $n$:
\begin{equation}\label{eq:15} \left \{ n,\frac{(\frac{n}{2})^2}{d_i}-d_i,\frac{(\frac{n}{2})^2}{d_i}+d_i\right\}, \end{equation} where $d_i$ are the divisors of $(\frac{n}{2})^2$ which are less than $\frac{n}{2}$, and $N_{tr}$ is determined by expression \eqref{eq:12} with $s_1,s_2,\dots,s_q$ taken from the canonical expansion of $\frac{n}{2}$. As an illustration let us consider the example: $n=120= 2^3\cdot3\cdot5; \frac{n}{2}= 2^2\cdot3\cdot5=60; (\frac{n}{2})^2= 3600 = 2^4\cdot3^2\cdot5^2; N_d= 45; N_{tr}=22,$ including 4 primitive. The Euclidean method \eqref{eq:2} (with integer $k$, $l$) gives only 6 triples, including 4 primitive. \end{enumerate} It follows from \eqref{eq:14} and \eqref{eq:15} that for a given $n$ there are maximal and minimal values of the other two elements of the Pythagorean triples, namely: \begin{enumerate}[(a)] \item \label{en:a3} for odd $n$:\newline maximal: $\frac{n^2-1}{2}$ and $\frac{n^2+1}{2}$, at $d_i = 1$; \newline minimal: $\frac{1}{2}(\frac{n^2}{d_{max}}-d_{max})$ and $\frac{1}{2}(\frac{n^2}{d_{max}}+d_{max}),$ where $d_{max}$ is the largest divisor of $n^2$ smaller than $n$; \item \label{en:b3} for even $n$:\newline maximal: $\big(\frac{n}{2}\big)^2-1$ and $\big(\frac{n}{2}\big)^2+1$, at $d_i = 1$; \newline minimal: $\frac{\big(\frac{n}{2}\big)^2}{d_{max}}-d_{max}$ and $\frac{\big(\frac{n}{2}\big)^2}{d_{max}}+d_{max}$, where $d_{max}$ is the largest divisor of $(\frac{n}{2})^2$ smaller than $\frac{n}{2}$. \end{enumerate} \section{Determination of the quantity of primitive triples} Now we find the quantity of primitive triples included in $N_{tr}$. \subsection{Let the number $n$ be odd} We use again the canonical expansion \eqref{eq:10}. As divisors $d_i$ in \eqref{eq:14} let us take the factors contained in \eqref{eq:10} which are less than $n$ and are products of the numbers $p_j^{2s_j}$ in all possible quantities and combinations (single, double, triple, etc.), i.e.
the numbers of the form \begin{eqnarray}\label{eq:16} p_j^{2s_j},\ p_j^{2s_j}\cdot p_m^{2s_m},\ p_j^{2s_j}\cdot p_m^{2s_m}\cdot p_r^{2s_r},\ \dots \ < n \\ j,m,r,\dots=1,2,\dots,q \nonumber \end{eqnarray} We see that, when dividing the number $n^2$, written in the form \eqref{eq:10}, by a divisor $d_i$ of type \eqref{eq:16}, all the corresponding $p_j,\dots,p_r$ cancel in the first term of the bracket in the expressions $\frac{1}{2}\big(\frac{n^2}{d_i}\mp d_i\big)$, whereas the second term contains only these $p_j,\dots,p_r$. Therefore these two terms do not contain the same $p_j,\dots,p_r$ and are not divisible by any divisor of $n$. Then the elements of such triples are coprime and these triples are primitive. Since the numbers $p_j^{2s_j}$ enter the divisors $d_i$ giving the primitive triples in an indivisible manner (i.e. they can be treated as if they were ``prime'', in power 1), writing \eqref{eq:10} as \begin{equation*} n^2=\big( p_1^{2s_1}\big)^1\cdot\ldots\cdot \big(p_q^{2s_q}\big)^1 \end{equation*} and using \eqref{eq:9}, we find that the quantity of divisors of type \eqref{eq:16} which are less than $n$ is equal to $\frac{(1+1)^q}{2}=2^{q-1}$. Hence, we obtain the following result: The quantity of primitive triples generated by the odd number $n$ is \begin{equation}\label{eq:17} N_p=2^{q-1}, \end{equation} where $q$ is the quantity of prime numbers $p_j$ in the canonic expansion of $n$. Thus, $N_p$ does not depend on the numbers $p_j,s_j$ and is determined only by the number $q$. \subsection{Let $n$ be even} If $\frac{n}{2}$ is odd (i.e. $n$ is not divisible by 4), then one can see from \eqref{eq:15} that $y$ and $z$ are both even for all divisors $d_i$. Therefore such $n$ cannot generate primitive triples. If $\frac{n}{2}$ is even, then the primitive triples are obtained from \eqref{eq:15} for those $d_i$ having the form \eqref{eq:16} in the canonic expansion \eqref{eq:13} of $\big(\frac{n}{2}\big)^2$.
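The divisor-based counts above are easy to confirm by brute force. The following Python sketch (the helper functions are ours, not part of the paper) checks \eqref{eq:12} and \eqref{eq:17} for an odd generator:

```python
from math import gcd

def triples_odd(n):
    """All Pythagorean triples with odd leg n, built as in Eq. (14):
    {n, (n^2/d - d)/2, (n^2/d + d)/2} for each divisor d of n^2 with d < n."""
    divs = [d for d in range(1, n) if (n * n) % d == 0]
    return [(n, (n * n // d - d) // 2, (n * n // d + d) // 2) for d in divs]

def q_of(n):
    """Number of distinct primes in the canonic expansion of n."""
    q, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            q += 1
            while n % p == 0:
                n //= p
        p += 1
    return q + (1 if n > 1 else 0)

n = 105  # 3 * 5 * 7: s_1 = s_2 = s_3 = 1, q = 3
ts = triples_odd(n)
assert all(a * a + b * b == c * c for a, b, c in ts)
assert len(ts) == (3 * 3 * 3 - 1) // 2          # Eq. (12): N_tr = 13
primitive = [t for t in ts if gcd(t[0], gcd(t[1], t[2])) == 1]
assert len(primitive) == 2 ** (q_of(n) - 1)     # Eq. (17): N_p = 4
```

For $n=105$ the four primitive triples come from $d_i\in\{1,\,3^2,\,5^2,\,7^2\}$, exactly the type-\eqref{eq:16} divisors below $n$.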
Hence, we obtain the following combined expression for the quantity of primitive triples generated by an even $n$: \begin{equation}\label{eq:18} N_p=2^{q-2}\big[1+(-1)^{n/2}\big], \end{equation} where $q$ is the quantity of prime numbers $p_j$ in the canonic expansion of $\frac{n}{2}$. Note that $N_p$ always includes the triple obtained for $d_i =1$. For illustration we consider the example: \begin{equation*} n=2220=2^2\cdot 3 \cdot 5\cdot 37; \quad \frac{n}{2}=1110=2\cdot 3\cdot 5 \cdot 37; \quad \bigg(\frac{n}{2}\bigg)^2=2^2\cdot 3^2 \cdot 5^2 \cdot 37^2 =1232100 \end{equation*} The total quantity of triples is $N_{tr} = \frac{(2+1)^4-1}{2}=40$; the quantity of primitive triples is $N_p=2^{4-1} = 8$, and they are obtained from \eqref{eq:15} with the following divisors $d_i$: $d_1 = 1$; $d_2= 2^2$; $d_3 = 3^2$; $d_4 = 5^2$; $d_5 = 2^2\cdot 3^2$; $d_6 = 2^2\cdot 5^2$; $d_7 = 3^2\cdot 5^2$; $d_8 = 2^2\cdot 3^2\cdot 5^2$. In the works \cite{robbins2006number,OMLAND20171} the quantity of primitive Pythagorean triangles with a given inradius was obtained. Here we have determined the complete quantity of Pythagorean triangles with a given cathetus (leg). \section{The representation of integers as a difference of the squares of two integer numbers} The formulas \eqref{eq:14} and \eqref{eq:15} prove useful for finding all possible representations of integer numbers as a difference of two squares. \subsection{} Let $n$ be an arbitrary odd number greater than 1, which we want to represent in the form \begin{equation}\label{eq:19} n=k^2-l^2, \hspace{1cm} k,l>0 \end{equation} It is clear from the identity \eqref{eq:2}, written as \begin{equation*} (2kl)^2+n^2=(k^2+l^2)^2, \end{equation*} that $2kl$ and $k^2+l^2$ are elements of a Pythagorean triple generated by the odd number $n$. As we already know, they are given by expressions \eqref{eq:14}.
In particular, we can write \begin{equation}\label{eq:20} k^2+l^2=\frac{1}{2}\bigg(\frac{n^2}{d_i}+d_i\bigg) \end{equation} From \eqref{eq:19} and \eqref{eq:20} we find $k$ and $l$: \begin{equation}\label{eq:21} k=\frac{n+d_i}{2\sqrt{d_i}}, \hspace{1cm} l=\frac{n-d_i}{2\sqrt{d_i}} \end{equation} It follows from \eqref{eq:21} that $k$ and $l$ are positive integers if $d_i< n$ and $d_i$ is the square of an integer. Indeed, in this case $\sqrt{d_i}$ is an integer which divides both $n$ and $d_i$; moreover, since $n$ and $d_i$ are both odd, their sum and difference are even. One can see that $k$ and $l$ have opposite parity. Therefore, the quantity $N_r$ of all representations \eqref{eq:19} is equal to the quantity of divisors of $n^2$ which are less than $n$ and are, in the canonic expansion \eqref{eq:10}, numbers of the form \begin{eqnarray} \label{eq:22} p_j^{2\alpha_j},\ p_j^{2\alpha_j}\cdot p_m^{2\alpha_m},\ p_j^{2\alpha_j}\cdot p_m^{2\alpha_m}\cdot p_r^{2\alpha_r},\ \dots \ < n \\ j,m,r,\dots=1,2,\dots,q, \hspace{1cm} \alpha_j\leq s_j \nonumber \end{eqnarray} We find the quantity of such divisors by writing \eqref{eq:10} in the form \begin{equation} \label{eq:23} n^2=\big(p_1^2\big)^{s_1}\cdot \big(p_2^2\big)^{s_2}\cdot \ldots \cdot \big(p_q^2\big)^{s_q} \end{equation} Now, considering the numbers $p_j^2$ as indivisible and taking into account only $d_i<n$, we find, using \eqref{eq:9}, the complete quantity of representations \eqref{eq:19} with positive integer $k$ and $l$: \begin{eqnarray}\label{eq:24} N_r=\frac{(s_1+1)(s_2+1)\cdot \ldots \cdot (s_q+1)}{2}, \hspace{1cm} \text{if $n$ is nonsquare, i.e. some of $s_j$ are odd} \end{eqnarray} and \begin{eqnarray}\label{eq:25} N_r=\frac{(s_1+1)(s_2+1)\cdot \ldots \cdot (s_q+1)-1}{2}, \hspace{1cm} \text{if $n$ is square, i.e.
all $s_j$ are even} \end{eqnarray} Hence, the representations \eqref{eq:19} are given by the expression \begin{equation}\label{eq:26} n=\bigg(\frac{n+d_i}{2\sqrt{d_i}}\bigg)^2-\bigg(\frac{n-d_i}{2\sqrt{d_i}}\bigg)^2, \end{equation} where $d_i$ are the divisors of $n^2$ which are less than $n$ and are the squares of integers (including 1). For example, if $n = 3465 = 3^2\cdot 5\cdot 7\cdot 11$, $n^2 = 3^4\cdot 5^2\cdot 7^2\cdot 11^2$, then the quantity of representations \eqref{eq:19} is equal to \begin{equation*} N_r=\frac{(2+1)(1+1)^3}{2}=12, \end{equation*} and we may calculate them from \eqref{eq:26} with $d_i$ equal to: \begin{equation*} 1;\ 3^2;\ 5^2;\ 7^2 ;\ 3^4 ;\ 11^2;\ 3^2\cdot 5^2;\ 3^2\cdot 7^2;\ 3^2\cdot 11^2;\ 5^2 \cdot 7^2;\ 3^4\cdot 5^2;\ 5^2\cdot11^2 \end{equation*} Correspondingly, the 12 pairs $(k,l)$ in \eqref{eq:19} are: (1733, 1732); (579, 576); (349, 344); (251, 244); (197, 188); (163,152); (123,108); (93,72); (69,36); (67, 32); (61,16); (59,4). \subsection{} Let $n$ be even, and we want to represent it as \begin{equation}\label{eq:27} n=k^2-l^2, \hspace{1cm} k,l>0 \end{equation} According to \eqref{eq:2}, written as \begin{equation*} (2kl)^2+n^2=(k^2+l^2)^2, \end{equation*} $2kl$ and $k^2+l^2$ are elements of a Pythagorean triple generated by the even number $n$. Then, according to \eqref{eq:15}, \begin{equation}\label{eq:28} k^2+l^2=\frac{(\frac{n}{2})^2}{d_i}+d_i, \end{equation} where $d_i$ are the divisors of $\big(\frac{n}{2}\big)^2$ which are less than $\frac{n}{2}$.
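The representation counts \eqref{eq:24}--\eqref{eq:25} can be confirmed by direct search. A minimal Python sketch (the helper is ours, not from the paper) checks the odd example $n=3465$ discussed above:

```python
from math import isqrt

def reps_diff_squares(n):
    """All pairs (k, l), k > l > 0, with n = k^2 - l^2, by direct search."""
    out = []
    for l in range(1, n // 2 + 1):
        k2 = n + l * l
        k = isqrt(k2)
        if k * k == k2:
            out.append((k, l))
    return sorted(out, reverse=True)

pairs = reps_diff_squares(3465)           # 3465 = 3^2 * 5 * 7 * 11
assert len(pairs) == 12                   # Eq. (24): (2+1)(1+1)^3 / 2 = 12
assert pairs[0] == (1733, 1732)           # maximal pair, from d_i = 1
assert all(k * k - l * l == 3465 for k, l in pairs)
```

The search runs over $l\le (n-1)/2$, which suffices because $n=(k-l)(k+l)\ge 2l+1$ for any representation.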
From \eqref{eq:27} and \eqref{eq:28} we find: \begin{equation}\label{eq:29} k=\frac{n+2d_i}{2\sqrt{2d_i}}, \hspace{1cm} l=\frac{n-2d_i}{2\sqrt{2d_i}} \end{equation} It follows from \eqref{eq:29} that $k$ and $l$ are positive integers if $d_i<\frac{n}{2}$ and $d_i$ has the form \begin{equation} \label{eq:30} d_i=2^{2a+1}\cdot (2b+1)^{2c}, \hspace{1cm} \text{integers} \hspace{0.5cm} a,b,c\geq 0 \end{equation} Let us find the quantity of such divisors, using the canonic expansion \eqref{eq:13} of $\big(\frac{n}{2}\big)^2$. Since the maximal value of $2a+1$ is $2s_1-1$, in \eqref{eq:30} we have $0\leq a < s_1$, and $\big(2b+1\big)^{2c}$ are numbers of the form \eqref{eq:22}, where $j=2,\dots,q$. According to \eqref{eq:13} and \eqref{eq:23}, the quantity of all odd divisors of $\big(\frac{n}{2}\big)^2$ having the form \eqref{eq:22} is \begin{equation*} (s_2+1)\cdot (s_3+1)\cdot \ldots \cdot (s_q+1) \end{equation*} Then the quantity of all even $d_i$ having the form \eqref{eq:30} is \begin{equation}\label{eq:31} s_1\cdot(s_2+1)\cdot \ldots \cdot(s_q+1) \end{equation} Taking into account that to obtain all the different representations \eqref{eq:27} only divisors $d_i<\frac{n}{2}$ must be used, we obtain the following result. The quantity of representations \eqref{eq:27} of the even number $n$ is \begin{align}\label{eq:32} N_r=\frac{s_1\cdot (s_2+1)\cdot\ldots\cdot(s_q+1)}{2}, \hspace{0.5cm}\parbox{15em}{if $n$ is nonsquare, i.e. either $s_1$ is even or some of $s_2,\dots,s_q$ are odd} \end{align} and \begin{equation}\label{eq:33} N_r=\frac{s_1\cdot (s_2+1)\cdot\ldots\cdot(s_q+1)-1}{2}, \hspace{0.5cm}\parbox{15em}{if $n$ is square, i.e.
$s_1$ is odd and all $s_2,\dots,s_q$ are even} \end{equation} All these representations are given by the expression \begin{equation}\label{eq:34} n=\bigg(\frac{n+2d_i}{2\sqrt{2d_i}}\bigg)^2-\bigg(\frac{n-2d_i}{2\sqrt{2d_i}}\bigg)^2 \end{equation} where $d_i$ are the divisors of $\big(\frac{n}{2}\big)^2$ which are less than $\frac{n}{2}$ and have the form \eqref{eq:30}. One can see that $k$ and $l$ have the same parity. As follows from \eqref{eq:32}, if $s_1=0$ then $N_r=0$, i.e. an even $n$ not divisible by 4 cannot be represented as a difference of two squares. Recall that such $n$ also cannot generate primitive Pythagorean triples. In \cite{nyblom} the quantities of the representations \eqref{eq:19} and \eqref{eq:27} were obtained by the method of factorizing the number $n$ into two factors. In the notation used there, our expressions for $N_r$ coincide with those of \cite{nyblom}, with the only difference that, in contrast to \cite{nyblom}, \eqref{eq:25} and \eqref{eq:33} do not include the trivial case $k=\sqrt{n}$, $l=0$. Example: $n= 900 = 2^2\cdot 3^2\cdot5^2$; $\frac{n}{2} = 2\cdot 3^2\cdot 5^2$; $\big(\frac{n}{2}\big)^2= 2^2\cdot3^4\cdot5^4$; $N_r=\frac{1\cdot(2+1)^2-1}{2}=4$. The appropriate divisors $d_i$ are $d_1=2$; $d_2=2\cdot3^2$; $d_3=2\cdot 5^2$; $d_4=2\cdot3^4$, and the corresponding pairs $(k,l)$ are: (226, 224); (78,72); (50,40); (34,16). Now we return to the Pythagorean triples and show how to obtain all triples from \eqref{eq:2}. Let the odd number $n=k^2-l^2$ be given. Note that if all $s_j=1$, then the quantity $N_r$ is equal to the quantity of primitive triples, i.e. $N_r=2^{q-1}$. Generally speaking $N_r\geq N_p$, and their difference gives some quantity of nonprimitive triples, but not all nonprimitive triples are exhausted by them. The remaining $N_{tr}-N_r$ triples are obtained from \eqref{eq:14} with divisors $d_i<n$ which are not squares of integers.
For such $d_i$, expressions \eqref{eq:21} give irrational $k$ and $l$, while $2kl$, $k^2-l^2$ and $k^2+l^2$ remain integer. On the other hand, these are precisely the nonprimitive triples which cannot be obtained from \eqref{eq:2} with integer $k$ and $l$. Therefore the complete quantity $N_{tr}$ of triples can be obtained from \eqref{eq:2} by using the irrational $k$, $l$ given by expressions \eqref{eq:21} alongside the integer ones. Let the even number $n=2kl$ be given. Then according to \eqref{eq:15} we have \begin{equation*} k^2+l^2=\frac{\big(\frac{n}{2}\big)^2}{d_i}+d_i, \hspace{1cm} k^2-l^2=\frac{\big(\frac{n}{2}\big)^2}{d_i}-d_i \end{equation*} and \begin{equation}\label{eq:35} k=\frac{n}{2\sqrt{d_i}}, \hspace{1cm} l=\sqrt{d_i} \end{equation} Therefore, all $N_{tr}$ triples generated by the even number $n$ may be obtained from \eqref{eq:2} using $k=\frac{n}{2\sqrt{d_i}}$, $l=\sqrt{d_i}$, where $d_i$ runs over all divisors of $\big(\frac{n}{2}\big)^2$ which are less than $\frac{n}{2}$. \section{Conclusion} In more recent methods of generating Pythagorean triples, various special representations of the generating numbers are used, in particular the representation by Fibonacci numbers \cite{horadam1961fibonacci}, geometrical representations, e.g. Dickson's method \cite{dickson2013history}, and others \cite{amato2017characterization,2012arXiv1201.2145R}. Therefore they do not give universal formulas for finding all triples. In this work we have obtained general formulas giving all primitive and nonprimitive triples generated by a given number represented in the most general form, by its canonic expansion. We have also used this method to find all representations of integers as a difference of two squares and to reveal the relation between the quantities of such representations and triples. We have also shown how one can use Euclid's formula \eqref{eq:2} to find all triples, including those which cannot be obtained by this formula with integer $k$ and $l$.
All these results are obtained by a common method using the formulas \eqref{eq:14}, \eqref{eq:15} and the canonic expansions of the appropriate numbers. \end{document}
\begin{document} \title{ Quantum Electromagnetic Vacuum Fluctuations in Inhomogeneous Dielectric Media } \begin{abstract} A new mathematical and computational technique for calculating quantum vacuum expectation values of energy and momentum densities associated with electromagnetic fields in bounded domains containing inhomogeneous media is discussed. This technique is illustrated by calculating the mode contributions to the difference in the vacuum force expectation between opposite ends of an inhomogeneous dielectric non-dispersive medium confined to a perfectly conducting rigid box. \end{abstract} \keywords{Cavity QED, Electromagnetism, Casimir, Inhomogeneous Dielectric} \section{INTRODUCTION} \label{sec:intro} Electromagnetic quantum fluctuation phenomena lie at the heart of many processes where the interface between classical and quantum physics plays a prominent role. Among these one may cite areas of quantum optics, micro-cavity physics, micro-fluidics, photonic structures, early Universe cosmo-genesis, dark energy and cold-atom technology. In these systems one is often confronted with phenomena that interrelate classical continuum mechanics, classical electromagnetism, cavity quantum electrodynamics and fundamental issues relating to fluctuation-dissipation mechanisms \cite{Loudon,Milton} \!\!. In particular, dynamic (material) fluctuations induced by quantum fluctuations of the electromagnetic field have experimental consequences and offer an exciting opportunity to confront the limitations of basic theory with observable data. In technology such fluctuations may manifest themselves as quantum induced stresses. For example Casimir stresses cannot be ignored as nano-structures develop ever smaller miniaturizations. In micro-fluidics, physical processes can be confined to (deformable) micro-cavities that are guided by electromagnetic fields. 
Such micro-laboratories offer new possibilities to explore cavity QED experimentally as well as enhancing the control features of micro-fluidic design. Indeed it has even been suggested that chemical processes in such an environment may shed light on the mechanism that evolves inert matter into living cells. The role of quantum fluctuations in determining the behavior of fabricated micro-structures is becoming increasingly important in a wide area of science and technology. Such fluctuations are also at the heart of many fundamental problems in physics ranging from the stability of fundamental constituents of matter to the lack of progress in unifying quantum field theory with gravitation. Many of the problems arise due to a lack of knowledge of basic interactions between fields and matter at some scale and the need to regularize current theories in order to make experimental predictions. For renormalizable theories such as QED in the vacuum these are remarkably accurate. However macroscopic predictions directly from QED that are affected by the presence of bulk macroscopic matter and material interfaces can be made with far less confidence since they depend critically on both geometric and constitutive modeling at macroscopic scales \cite{Inui1,Inui2} \!\!. In particular the role of quantum states of the electromagnetic field on the detailed behavior of isolated closed material micro-domains of polarisable matter remains an unsolved problem \cite{Nester} \!\!. The quantization of the electromagnetic field in the presence of media satisfying linear electromagnetic constitutive relations relies on a knowledge of a regular basis of eigen-solutions to a modified Helmholtz type equation that determines the electromagnetic fields. If the electromagnetic properties of the media are discontinuous across surfaces in space such solutions must satisfy jump conditions across them dictated by Maxwell's equations for the fields. 
The nature of the eigen-spectrum of the vacuum Helmholtz operator on time-parameterized differential forms on space is determined by the global topology of spatial domains with boundaries. Mathematical procedures exist for analyzing such problems using the Hodge-Weyl-Friedrichs decomposition of forms on manifolds with boundary. In principle they can be used to construct a Hilbert space with a real basis of {\it transverse (divergence-less) } forms (i.e. in the kernel of the co-derivative $\delta$) satisfying Dirichlet and Neumann boundary conditions. The split of this space into mutually orthogonal subspaces is responsible for the classification of electromagnetic fields into TE, TM and TEM modes in certain domains. For example, if the vacuum of 3-dimensional Euclidean space is partitioned into interior and exterior regions by a closed perfectly conducting boundary surface one may establish an ortho-normal basis of transverse $1$-forms $\PHI N( \bm r),\,\PSI N( \bm r)$ satisfying appropriate boundary conditions in each region. For a hollow closed cavity these basis modes can be defined in terms of the eigen-$1$-forms $\Phi_N( \bm r)$ and $\Psi_N( \bm r)$ of the Hodge-de Rham operator (Laplacian) $\Delta$ on forms in space satisfying different boundary conditions: \begin{eqnarray*} {\Delta\Phi_N=\mu^2_N\Phi_N}, \quad {\Delta\Psi_N=\lambda^2_N\Psi_N}, \end{eqnarray*} \begin{eqnarray*} \PHI N \equiv \frac{1}{\mu^2_N} \delta\, d \Phi_N,\quad \PSI N \equiv\frac{1}{\lambda^2_N}\delta\,d \Psi_N, \quad \delta\PHI N=0,\quad \delta\PSI N=0 \end{eqnarray*} where $\delta\equiv -\#\, d \, \#$ is the Euclidean exterior co-derivative on $1$-forms on a simply-connected domain ${\cal U}\subset {\bf R}^3 $, $d$ denotes the exterior derivative, $-\Delta=d\delta + \delta\, d$, and $N$ denotes a triple of denumerably infinite discrete indices labelling the real non-zero eigen-values and associated eigen-forms.
In a simply-connected domain such $1$-forms can be employed to represent the Coulomb gauge Maxwell vector-potential $1$-form ${\bm A}={\bm A}^{{ \mbox{\tiny $T\!E$} }}+{\bm A}^{{ \mbox{\tiny $T\!M$} }}$ where: \begin{eqnarray*} {\bm A}^{{ \mbox{\tiny $T\!E$} }}(t, \bm r)=\sum_N {\cal A}^{{ \mbox{\tiny $T\!E$} }}_N(t)\, \PHI N( \bm r), \qquad {\bm A}^{{ \mbox{\tiny $T\!M$} }}(t, \bm r)=\sum_N {\cal A}^{{ \mbox{\tiny $T\!M$} }}_N(t)\, (\# d\PSI N( \bm r)) \end{eqnarray*} and $\#$ denotes the Euclidean Hodge map on forms in space. The eigen-values $\mu_N,\lambda_N$ determine the normal-mode frequencies of electromagnetic fields in the cavity. A similar analysis can be performed in the exterior region (which may be non-simply connected and involve TEM modes). The computation of the eigenvalues $\lambda_N, \,\mu_N$ is the key precursor to all quantum computations since they characterize the extrinsic domain geometry and determine the spectral content of the infinite number of quantum oscillators that represent the electromagnetic field. However it is often difficult to determine analytically such bases for generic domains. Furthermore if the partition involves electrically neutral domains containing linear media that may be inhomogeneous, anisotropic, magneto-electric or dispersive this program involves a modified Helmholtz classical boundary value problem \cite{LeonBook,LeonInhom,LeonInhom2} \!\!. In this paper the modification necessitated by the presence of an inhomogeneous but non-dispersive and non-conducting medium contained in a closed rectangular 3-dimensional perfectly conducting cavity is explored. This is a precursor to a regularization scheme needed to extract finite quantum expectation values for stresses in the medium induced by quantum electromagnetic fluctuations. 
\section{Inhomogeneous Dielectric Media} \label{ch1} Consider a smooth open region of space containing a stationary non-dispersive medium characterized by an inhomogeneous permittivity scalar $\epsilon( \bm r)$ and constant permeability scalar $\mu=\mu_0$. Denoting time derivatives with an over-dot the classical source-free Maxwell system to be solved is: \begin{alignat}{2} d\,\bm e &=-\dot\BB, \qquad &\frac{1}{\mu} d\, \bm b &=\epsilon\, \dot\EE\\ d\,\BB&=0, \quad & d\,(\epsilon\EE)& =0 \\ d\,\epsilon\ne 0, \,&\, d\,\mu = 0, \qquad &\dot\epsilon=0, \,&\,\dot\mu=0 \end{alignat} where $\bm e,\,\,\bm b$ denote time dependent electric and magnetic 1-forms respectively and $\EE=\#\bm e, \,\, \BB=\#\bm b$. If the (time-dependent) spatial 1-form $\bm A$ and spatial 0-form $\phi$ belong to a class of gauge-equivalent potentials defining the electric and magnetic fields by \begin{equation} \bm e=-\dot \bm A - d\,\phi, \qquad \bm b=\# d\, \bm A \label{FIELDS} \end{equation} then the above system reduces to \begin{eqnarray*} \delta(\epsilon \dot\bm A) + \delta (\epsilon d\,\phi) &=& 0 \\ \delta d\,\bm A + \epsilon\mu(d\,\dot\phi + \ddot{\bm A}) &=& 0 \end{eqnarray*} In a particular gauge with $$\delta\,(\epsilon\bm A)=0$$ the equation for the scalar potential decouples: \begin{eqnarray*} \delta d\,\bm A+ \epsilon\mu \ddot{\bm A} &=& -\epsilon\mu d\,\dot\phi \\ \delta (\epsilon d\,\phi) &=& 0 . \end{eqnarray*} Furthermore, we may set $d\,\phi=0$ for systems without free charge\cite{Glauber} \!\!. Hence in this gauge the local Maxwell system above is solved in terms of spatial 1-forms satisfying $\delta (\epsilon\bm A)=0$ and the equation: \begin{equation} \delta d\,\bm A+\epsilon\mu\ddot{\bm A}=0. \label{HELM} \end{equation} This gives rise to a modified Helmholtz equation for time harmonic fields. Across any non-conducting interface where the dielectric scalar is discontinuous one has two conditions on $\bm e$ and $\bm b$ restricted to the interface.
At each point on the interface the jump in the normal component of $\bm b$ and the tangential component of $\bm e$ must vanish. Furthermore if there are no real charges or electric currents on the interface the jump in the normal component of $\bm d$ where $\bm d=\epsilon \bm e$ and the tangential component of $\bm h$ where $\bm h=\mu^{-1}\bm b$ must also vanish on the interface. If the interface is perfectly conducting one assumes that all fields vanish on the side of the interface that is free of the material medium when calculating the jump. It is worth noting that in a bounded inhomogeneous medium where $\epsilon$ is a continuous function of position one cannot exploit translational invariance in space and spatial Fourier transforms\cite{Brevik} to simply express normal modes in terms of eigen-forms of $ {\bm R}^3 $ spatial translation operators. If a bounded domain $U \subset {\bm R}^3 $ contains a dielectric with a piecewise inhomogeneous permittivity one writes the general real 1-form solution to (\ref{HELM}) on $U$ as \begin{equation} \bm A(t, \bm r)=\sum_N {\cal A}_N(t, \bm r) \label{GEN}\end{equation} where $N$ denotes a triple of discrete labels. Suppose the dielectric is composed of $M$ sub-domains where the permittivity is $\epsilon_m( \bm r)$ in the $m$-th sub-domain. Thus \begin{eqnarray*} \epsilon( \bm r)=\sum_{m=1}^M \epsilon_m( \bm r) {\cal Y}_m( \bm r) \end{eqnarray*} where ${\cal Y}_m( \bm r)$ is unity in the subdomain $U_{m}\subset U$ and zero elsewhere. For stationary electrically neutral dielectrics in domains $U$ bounded by conducting surfaces the electromagnetic jump conditions at interfaces above yield a homogeneous system of equations that determine a collection of eigen-modes (up to normalization) and the associated eigen-frequencies $\omega_N$.
The number of distinct eigen-spaces and the degeneracies of the associated eigen-frequencies depends on the rank and symmetry of the homogeneous system which in turn reflects how the boundary geometry and boundary conditions affect the nature of the global topology of the domain. For a rank $S$ system the eigen-modes may be written: \begin{equation} {\cal A}_N(t, \bm r)=\sum_{s=1}^S\sum_{m=1}^M \left( {\cal A}^{(+),s,m}_N( \bm r){\cal Y}_m( \bm r)\, e^{-i \omega_N^s t} + {\cal A}^{(-),s,m}_N( \bm r){\cal Y}_m( \bm r) \,e^{i \omega_N^s t}\right) \end{equation} where the $ \{{\cal A}^{(+),s,m}_N( \bm r)\}= \{{\cal A}^{(-),s,m }_N( \bm r)\}^{*} $ constitute a basis of solutions to (\ref{HELM}) subject to $ \delta (\epsilon{\cal A}^{(+),s,m }_N ) =0$ and the above jump conditions in the domain $U$. \section{Electromagnetic Energy and Stresses} \label{ch2} Classical forces (and torques) transmitted by electromagnetic fields through the vacuum can be encoded into a covariant stress-energy-momentum tensor. The modification of such a tensor for fields in a medium has long been a subject of debate and experimental investigation. This debate has continued when the fields become operators subject to quantum laws. In this article we adopt a symmetrization of the stress-energy-momentum tensor for media at rest advocated by Minkowski.
In terms of electromagnetic fields in any sub-domain $U_m$ the instantaneous classical electromagnetic energy is: \begin{equation} {\cal E}_m=\int_{U_m} \frac{1}{2}\left( \bm e \wedge \# \bm d + \bm b \wedge \# \bm h \right) . \label{ENERGY} \end{equation} The component of the instantaneous integrated electromagnetic stress \cite{TuckerWalton} \!\!, in a direction defined by a unit spacelike Killing vector field $K$ that generates spatial translations, transmitted across the side of any portion $\Sigma_m$ of an oriented surface in $U_m$ adjacent to the fields in the following integrand, is the force component: \begin{eqnarray*}\label{FORCE} {\cal F}_{K,m}=\frac{1}{2}\int_{\Sigma_m}\left( i_K\#\bm h \wedge \bm b - \bm e \wedge i_K \# \bm d - \# \bm b \wedge \bm h(K) - \bm d(K) \wedge \# \bm e\right) \end{eqnarray*} \subsection{Computation of Induced Dielectric Stresses by Electromagnetic Mode Fluctuations in a Cavity} \label{ch3} The above generalities will now be illustrated for a system comprised of a simply-connected inhomogeneous dielectric medium bounded by a perfectly conducting stationary, inextensible rectangular box with sides of length $L_x,L_y,L_z$. Finding exact analytic solutions to (\ref{HELM}) is non-trivial. However, if $\epsilon(x,y,z)=\epsilon_0 \,\beta \exp(\alpha z/L_z)$ in Cartesian coordinates with real dimensionless positive constants $\beta,\alpha$, then general solutions satisfying the above boundary conditions can be expressed in terms of Bessel and trigonometric functions. Since the interior $U$ of the box is simply connected the boundary conditions yield $S=2$ and a decomposition into orthogonal $TE$ and $TM$ modes with respect to the $z$-axis is possible.
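For orientation, the step from (\ref{HELM}) to Bessel functions can be sketched as follows (this reduction is implicit in the text; $\phi(z)$ denotes the $z$-profile of a TE mode with transverse wavenumbers $k_x,k_y$):

```latex
% z-profile equation for a time-harmonic TE mode of frequency omega:
\phi''(z) + \left( \frac{\beta\,\omega^{2}}{c_{0}^{2}}\, e^{\alpha z/L_{z}}
                   - k_{x}^{2} - k_{y}^{2} \right)\phi(z) = 0 .
% The substitution
\eta(z) = \frac{2 L_{z}\,\omega \sqrt{\beta}}{\alpha\, c_{0}}\,
          e^{\alpha z/(2 L_{z})} ,
\qquad
\nu = \frac{2 L_{z}}{\alpha}\sqrt{k_{x}^{2}+k_{y}^{2}} ,
% turns this into Bessel's equation of order nu,
\eta^{2}\,\frac{d^{2}\phi}{d\eta^{2}} + \eta\,\frac{d\phi}{d\eta}
  + \left( \eta^{2} - \nu^{2} \right)\phi = 0 ,
% whose general solution J_nu(eta) + zeta Y_nu(eta) underlies the mode
% functions Phi_N used below.
```

Note that $\eta$ and $\nu$ here coincide with $\eta_N^{TE}(z)$ and $\nu_N^{TE}$ appearing in the mode functions.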
With opposite faces of the box at $z=0$ and $z=L_z$ respectively the $TE$ mode structure is given in the above gauge by: \begin{eqnarray*} {\cal A}_N^{ (+),\mbox{\tiny $T\!E$} }( \bm r)=\frac{ {{\cal N}}_N^{{ \mbox{\tiny $T\!E$} }} }{\epsilon_0}\Phi_N^{{ \mbox{\tiny $T\!E$} }}\!\left[\eta^{{ \mbox{\tiny $T\!E$} }}_N(z)\right] \left\{ k_x \sin(k_x x)\cos(k_y y) d\,y -k_y \cos(k_x x) \sin(k_y y) d\, x \right\} \end{eqnarray*} where $k_x= \frac{n_x\pi } {L_x }, k_y= \frac{n_y\pi } {L_y }$, $N$ stands for the triple $({n_x,n_y,p})$ with $n_x,n_y$ non-negative integers and ${{\cal N}}^{{ \mbox{\tiny $T\!E$} }}_N$ denotes a normalization constant. Furthermore \begin{eqnarray*} \Phi_N^{{ \mbox{\tiny $T\!E$} }}\left[\eta^{{ \mbox{\tiny $T\!E$} }}_N(z)\right] &=& J_{\nu_N^{{ \mbox{\tiny $T\!E$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!E$} }}_N(z)\right] + \zeta_N^{{ \mbox{\tiny $T\!E$} }} Y_{\nu_N^{{ \mbox{\tiny $T\!E$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!E$} }}_N(z) \right] \end{eqnarray*} where \begin{eqnarray*} \eta_N^{{ \mbox{\tiny $T\!E$} }}(z)= \frac{ 2 L_z \omega_N^{{ \mbox{\tiny $T\!E$} }} \sqrt{\beta} \exp( \frac{ \alpha z} {2 L_z } ) } { \alpha c_0 }, \quad \nu_N^{{ \mbox{\tiny $T\!E$} }}= \frac{2 L_z } { \alpha}\sqrt{ k_x^2 +k_y^2 }, \quad \zeta_N^{{ \mbox{\tiny $T\!E$} }}= -\frac{J_{\nu_N^{{ \mbox{\tiny $T\!E$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!E$} }}_N(0)\right] } { Y_{\nu_N^{{ \mbox{\tiny $T\!E$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!E$} }}_N(0)\right] } \end{eqnarray*} with $c_{0}^{2}=\frac{1}{\epsilon_{0}\mu_{0}}$ in these expressions and the $\omega_N^{{ \mbox{\tiny $T\!E$} }}$ are the values of the $p$-th roots of the $TE$-mode spectrum generating equation: \begin{eqnarray*} J_{ \nu_N^{{ \mbox{\tiny $T\!E$} }} }\!\left[\eta^{{ \mbox{\tiny $T\!E$} }}_N(0)\right] Y_{ \nu_N^{{ \mbox{\tiny $T\!E$} }}} \!\left[ \eta^{{ \mbox{\tiny $T\!E$} }}_N(L_z)\right] - J_{ \nu_N^{{ \mbox{\tiny $T\!E$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!E$
}}_N(L_z)\right] Y_{ \nu_N^{{ \mbox{\tiny $T\!E$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!E$} }}_N(0)\right] &=& 0. \end{eqnarray*} The $TM$ mode structure is given by: \begin{eqnarray*} {\cal A}_N^{ (+),\mbox{\tiny $T\!M$} }( \bm r) &=& \frac{ {\cal N}_N^{{ \mbox{\tiny $T\!M$} }} \omega_N^{{ \mbox{\tiny $T\!M$} }} }{\epsilon_0 c_0} \left\{ \left( \Phi_N^{\prime { \mbox{\tiny $T\!M$} }}\!\left[\eta^{{ \mbox{\tiny $T\!M$} }}_N(z)\right] + \frac{ \Phi_N^{ { \mbox{\tiny $T\!M$} }}\!\left[\eta^{{ \mbox{\tiny $T\!M$} }}_N(z)\right] } {\eta^{{ \mbox{\tiny $T\!M$} }}_N(z) } \right) \left( k_x \cos(k_x x)\sin(k_y y) d\,x +k_y \sin(k_x x) \cos(k_y y) d\, y\right) \right. \\ && \hspace{2.5cm} \left. + \frac{ \alpha}{2L_z \eta_N^{{ \mbox{\tiny $T\!M$} }}} \left( (\nu_N^{{ \mbox{\tiny $T\!M$} }})^2-1 \right) \Phi_N^{{ \mbox{\tiny $T\!M$} }}\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(z)\right] \sin(k_x x) \sin(k_y y)\,d\,z \right\} \end{eqnarray*} with normalization constant ${\cal N}^{{ \mbox{\tiny $T\!M$} }}_N$, \begin{eqnarray*} \Phi_N^{{ \mbox{\tiny $T\!M$} }}\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(z) \right] &=& J_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }\!\left[\eta^{{ \mbox{\tiny $T\!M$} }}_N(z)\right] + \zeta_N^{{ \mbox{\tiny $T\!M$} }} Y_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(z) \right] \end{eqnarray*} where \begin{eqnarray*} \eta_N^{{ \mbox{\tiny $T\!M$} }}(z)= \frac{ 2L_z \omega_N^{{ \mbox{\tiny $T\!M$} }} \sqrt{\beta} \exp( \frac{ \alpha z}{2 L_z } ) }{ \alpha c_0 }, \quad (\nu_N^{{ \mbox{\tiny $T\!M$} }})^2= \frac{4 L_z^2 } { \alpha^2}{( k_x^2 +k_y^2) } + 1, \quad \zeta_N^{{ \mbox{\tiny $T\!M$} }}=- \frac{ \eta^{{ \mbox{\tiny $T\!M$} }}_N(0) J_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }^{\prime}\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(0)\right] + J_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(0) \right]} {\eta^{{ \mbox{\tiny $T\!M$} }}_N(0) Y_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }^{\prime}\!\left[
\eta^{{ \mbox{\tiny $T\!M$} }}_N(0)\right] + Y_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(0) \right] }. \end{eqnarray*} In this case the $\omega_N^{{ \mbox{\tiny $T\!M$} }}$ are the values of the $p$-th roots of the $TM$-mode spectrum generating equation which may be written in the form: \begin{eqnarray*} \widetilde J_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(0)\right]\widetilde Y_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(L_z)\right] - \widetilde J_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(L_z)\right] \widetilde Y_{ \nu_N^{{ \mbox{\tiny $T\!M$} }} }\!\left[ \eta^{{ \mbox{\tiny $T\!M$} }}_N(0)\right] &=& 0 \end{eqnarray*} where $\widetilde f(\eta)\equiv \eta f^\prime(\eta) + f(\eta) $ for any $f(\eta)$. These expressions enable one to calculate the field modes for $\bm e,\bm b,\bm h,\bm d$ using (\ref{FIELDS}) and the constitutive relations. The quantum description can be constructed by generalizing the methods used in vacuum cavity QED. A Fock space of quantised modes is introduced by introducing the annihilation and creation operators $ \aop_N^s $ and $ {\aop_{N^\prime}^{s^\prime}}^{\dagger} $ satisfying the commutation relations: $$[ \aop_N^s, {\aop_{N^\prime}^{s^\prime}}^{\dagger} ]=\delta_{ N { N^\prime} }\, \delta ^{ s {s^\prime} }$$ for $s\in\{TE,TM\}$. The Fock space vacuum state $\Lambda_0 $ is annihilated by all $ \aop_N^s $. In the above gauge stationary quantum modes in a closed cavity are described by the Hermitian operator \begin{eqnarray*} \widehat{\cal A}_N(t, \bm r)=\sum_{s\in\{TE,TM\}}\sum_{m=1}^M \left( {\aop_{N}^{s}}^{\dagger}\, {\cal A}^{(+),s,m}_N( \bm r){\cal Y}_m( \bm r)\, e^{-i \omega_N^s t} + \aop_N^s\, {\cal A}^{(-),s,m}_N( \bm r){\cal Y}_m( \bm r) \,e^{i \omega_N^s t}\right).
\end{eqnarray*} The quantum field modes for the Hermitian operators $\widehat{\bm e},\widehat{\bm b},\widehat{\bm h},\widehat{\bm d}$ follow from (\ref{FIELDS}) but with this mode operator and the corresponding operator constitutive relations. Replacing the classical fields for the dielectric-filled cavity in the classical expression for ${\cal E}$ by such operators yields the quantum Hamiltonian $\widehat{\cal E}$ for quantum fields in the cavity. Its expectation value $ E_{\Lambda_0}[ \widehat{\cal E} ] $ in the Fock space vacuum state requires renormalization \cite{Santos,BordagBook}. This is effected by subtracting from an infinite mode sum an expectation value of the energy of a system with a homogeneous medium. To effect this subtraction, both mode summations generally require a regularization scheme for their definition. Thus, for the system with conducting boundaries and a discrete spectrum one defines: \begin{eqnarray*} \langle {\cal E}\rangle_{\text{reg}} &\equiv& \frac{\hbar }{2} \sum_{s \in \{TE,TM\}}\sum_N \omega_N^s \, \psi(\kappa,\omega_N^s) \end{eqnarray*} for some suitable smooth function $\psi$ satisfying $\psi(0,\omega_N^s)=1$ that renders the summations meaningful. Each cavity mode labeled by $N,s$, with eigen-frequency $\omega_N^s$, contributes the factor $ \frac{\hbar}{2}\, \omega_N^s$ to the vacuum expectation of the (regularized) energy.
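The spectrum generating equations are transcendental in $\omega_N$ and must be solved numerically. A minimal sketch for the $TM$ case, using `scipy` Bessel functions; the profile parameters $\alpha$, $\beta$, the cavity length, and the transverse wavenumbers below are purely illustrative, not values from the paper:

```python
import numpy as np
from scipy.special import jv, yv, jvp, yvp
from scipy.optimize import brentq

c0 = 3e8                            # vacuum speed of light (m/s)
alpha, beta, Lz = 1.0, 1.0, 1e-2    # illustrative permittivity-profile parameters
kx = ky = np.pi / 1e-2              # illustrative transverse wavenumbers (1/m)

# TM Bessel order: nu^2 = 4 Lz^2 (kx^2 + ky^2)/alpha^2 + 1
nu = np.sqrt(4 * Lz**2 * (kx**2 + ky**2) / alpha**2 + 1)

def eta(omega, z):
    """eta_N(z) = 2 Lz omega sqrt(beta) exp(alpha z / (2 Lz)) / (alpha c0)"""
    return 2 * Lz * omega * np.sqrt(beta) * np.exp(alpha * z / (2 * Lz)) / (alpha * c0)

def tilde(f, fp, x):
    """f~(eta) = eta f'(eta) + f(eta), as in the TM spectrum equation."""
    return x * fp(nu, x) + f(nu, x)

def tm_dispersion(omega):
    e0, eL = eta(omega, 0.0), eta(omega, Lz)
    return (tilde(jv, jvp, e0) * tilde(yv, yvp, eL)
            - tilde(jv, jvp, eL) * tilde(yv, yvp, e0))

# bracket sign changes on a frequency grid, then refine each root with brentq
grid = np.linspace(1e11, 2e12, 4000)
vals = [tm_dispersion(w) for w in grid]
roots = [brentq(tm_dispersion, a, b)
         for a, b, va, vb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if va * vb < 0]
```

The returned `roots` are the angular eigenfrequencies $\omega_N^{TM}$ for this choice of transverse wavenumbers; the $TE$ equation is handled identically with its own $\widetilde J$, $\widetilde Y$ combination.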
The normalization condition \begin{eqnarray*}\label{QENERGY} {\cal E}_N^s\equiv \int_{U} \frac{1}{2}\left( \df \bm e_N^s \wedge \# \bm d_N^s + \bm b_N^s \wedge \# \bm h_N^s \right) \;\;=\;\; \frac{\hbar}{2} \;\omega_N^s \end{eqnarray*} fixes the normalizations ${\cal N}_N^s$ of the mode amplitudes ${ {\cal A}}_N^{(+), s}$ and their conjugates: \begin{eqnarray*} ({\cal N}_N^{{ \mbox{\tiny $T\!E$} }})^2 &=& \frac{16 \hbar \epsilon_0 }{\alpha^{2}\sqrt{\beta} c_{0}({\nu_N^{{ \mbox{\tiny $T\!E$} }}})^2}\frac{L_z^2}{L_x L_y}\frac{1}{I^{{ \mbox{\tiny $T\!E$} }}_{N}\Omega^{{ \mbox{\tiny $T\!E$} }}_{N}} \end{eqnarray*} where, with $\Omega^{{ \mbox{\tiny $T\!E$} }}_N=\frac{2 \omega_N^{{ \mbox{\tiny $T\!E$} }} L_z \sqrt{\beta} } { \alpha\, c_0} $, \begin{eqnarray*} I^{{ \mbox{\tiny $T\!E$} }}_{N} &=& e^\alpha \left( {\Phi^{\prime}}^{{ \mbox{\tiny $T\!E$} }}_N\!\left[\Omega^{{ \mbox{\tiny $T\!E$} }}_N e^{\frac{\alpha}{2}}\right] \right)^2 - \left( {\Phi^{\prime}}^{{ \mbox{\tiny $T\!E$} }}_N\!\left[ \Omega^{{ \mbox{\tiny $T\!E$} }}_N\right] \right)^2 \end{eqnarray*} and \begin{eqnarray*} ({\cal N}_N^{{ \mbox{\tiny $T\!M$} }})^2 &=& \frac{64\hbar \epsilon_{0}\sqrt{\beta}}{\alpha^{4}c_{0}[({\nu_N^{{ \mbox{\tiny $T\!M$} }}})^2-1]} \frac{ L_z^4}{L_x L_y} \frac{1}{I^{{ \mbox{\tiny $T\!M$} }}_{N}\Omega_N^{{ \mbox{\tiny $T\!M$} }}} \end{eqnarray*} where, with $\Omega^{{ \mbox{\tiny $T\!M$} }}_{N}=\frac{2 \omega_N^{{ \mbox{\tiny $T\!M$} }} L_z \sqrt{\beta} } { \alpha\, c_0} $, \begin{eqnarray*} I^{{ \mbox{\tiny $T\!M$} }}_{N} &=& \left(\df 1 - (\nu^{{ \mbox{\tiny $T\!M$} }}_{N})^{2} + (\Omega^{{ \mbox{\tiny $T\!M$} }}_{N})^{2}e^{\alpha} \right) \left( \df \Phi^{{ \mbox{\tiny $T\!M$} }}_{N}\!\left[\Omega^{{ \mbox{\tiny $T\!M$} }}_{N}e^{\frac{\alpha}{2}}\right]\right)^{2} - \left(\df 1 - (\nu^{{ \mbox{\tiny $T\!M$} }}_{N})^{2} + (\Omega^{{ \mbox{\tiny $T\!M$} }}_{N})^{2} \right)\left(\df \Phi^{{ \mbox{\tiny $T\!M$
}}_{N}\right]\right)^{2}. \end{eqnarray*} The mode contributions to the vacuum expectation values of the induced electromagnetic stress field in the dielectric can now be calculated from the Hermitian operator-valued stress 2-form: \begin{equation} \widehat{\sigma}_{K}=\frac{1}{2}\left( i_K\#\widehat{\bm h} \wedge \widehat{\bm b} - \widehat{\bm e} \wedge i_K \# \widehat{\bm d} - \# \widehat{\bm b} \wedge \widehat{\bm h}(K) -\widehat{\bm d}(K) \wedge \# \widehat{\bm e}\right). \label{QFORCE} \end{equation} The quantum expectation value of the regularized force component in the Fock vacuum state $\Lambda_0$ acting perpendicular to a surface $\Sigma_{z_{0}}$ of the box at $z=z_{0}$ is \begin{eqnarray*} \langle {\cal F}_{z_{0}} \rangle_{\text{reg}} &\equiv& \left. E_{\Lambda_0}\left[ \int_{\Sigma_{z_{0}}}\, \widehat{\sigma}_{ \frac{ \partial} { \partial z}} \right]_{\text{reg}} \right|_{z=z_{0}}. \end{eqnarray*} In a box containing a homogeneous permittivity, the expectation values of the force at the end faces of the box at $z=0,z=L_z$ are equal. This is not the case for an inhomogeneous dielectric in general. Indeed one finds, after some calculation, the difference between the force expectations \begin{eqnarray*} \langle \Delta {\cal F} \rangle_{\text{reg}} \;\;\equiv\;\; \langle {\cal F}_{0} \rangle_{\text{reg}} - \langle {\cal F}_{L_z} \rangle_{\text{reg}} &=& \frac{ \hbar \alpha} { 4 L_z} \sum_{s \in \{ TE,TM\}} \sum_N \omega_N^{s} \, \psi(\kappa,\omega_N^s)\,. \end{eqnarray*} This is a surprisingly simple result given the complexity of the mode structures involved. To effect a renormalization of this result requires a computation which will not be reported here.
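The role of the regulator $\psi$ can be illustrated with a toy one-dimensional spectrum. Taking an exponential regulator $\psi(\kappa,\omega)=e^{-\kappa\omega}$ (one admissible choice, not a choice made in the paper) and a toy linear spectrum $\omega_n = n\,\omega_0$, the otherwise divergent sum $\sum_n \omega_n$ becomes finite and matches the closed form $\omega_0\, x/(1-x)^2$ with $x=e^{-\kappa\omega_0}$:

```python
import math

def regulated_sum(kappa, w0, nmax=100000):
    """Toy regularized mode sum: sum_n n*w0 * exp(-kappa*n*w0)."""
    return sum(n * w0 * math.exp(-kappa * n * w0) for n in range(1, nmax + 1))

kappa, w0 = 1e-3, 1.0
x = math.exp(-kappa * w0)
closed_form = w0 * x / (1 - x) ** 2   # sum_n n x^n = x/(1-x)^2
```

The regulated sum diverges as $\kappa\to 0$, which is exactly why the renormalization step (subtracting the homogeneous-medium reference) is needed before the regulator is removed.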
\section{Conclusion} \label{ch4} It is expected that a non-zero $ \langle \Delta {\cal F} \rangle $ will survive renormalization when the regulator is removed, and this indicates that any confined inhomogeneous material dielectric must sustain stresses induced by electromagnetic quantum fluctuations if the confining domain is rigid. If the medium remains static, such stresses induce mechanical (elastic) stresses in the dielectric to maintain equilibrium. Unlike the analogous classical stresses induced by the gravitational field in the laboratory (which vary with the orientation of the dielectric), the quantum-induced electromagnetic stresses are permanent. In principle they could be detected experimentally by noting the variation of the induced stress field within the dielectric with variations of the permittivity inhomogeneities. Such variations might be detected using photo-elastic effects on the polarization of light passing through the medium. \acknowledgements The authors are grateful to STFC and the EPSRC for financial support for this research which is part of the Alpha-X collaboration. \nocite{*} \end{document}
\begin{document} \flushbottom \title{A low-noise on-chip coherent microwave source} \thispagestyle{empty} Quantum computing has developed rapidly in the last decade using a range of different physical systems\cite{HSJ08,SHO09,MDL14,NRK18,SAM18,AAB19}. For example, semiconductor and superconductor-based qubits with frequencies in the microwave regime have been studied extensively\cite{HSJ08,SHO09,NFB10,RBT13}. In such systems, the control of a large quantum processor is typically implemented by channelling a sequence of microwave pulses to the qubits operating at many different frequencies. This can be achieved using conventional room temperature electronics. However, the approach requires a large number of broad-band connections scaling linearly with the number of qubits, to transmit signals from room temperature to the refrigerator that hosts the quantum processor at temperatures typically below 100 mK. When scaling up these quantum-computing systems, the heavily attenuated bundles of coaxial microwave cables will determine much of the system-level requirements, including the cooling power and the physical size of the refrigerator\cite{KSK19}. Cable lengths also lead to latencies and limitations in, for example, quantum error correction\cite{KNL97,CSH96}. Cryogenic integrated control electronics can potentially overcome these challenges. Independent of how the cryogenic control electronics are realised, any viable approach will need stable microwave sources integrated in the relevant operating environment to, for example, create the carrier frequencies for the modulated pulses\cite{BOB16}. Josephson-junction-based sources have previously been studied in the field of radio astronomy as local oscillators for receivers in the millimetre and submillimetre wave bands\cite{VSP02}. However, for typical solid-state quantum information processing applications operating in the sub-20-GHz band, the specific requirements are stringent.
In particular, an on-chip microwave source should exhibit a very low power dissipation, long coherence time, and low noise. Some of these properties have previously been explored with prototype devices based on quantum dots\cite{LSG15,LHS17,LSE17} and Josephson junctions\cite{CLA14,CBR17} embedded in resonators. Similar designs can also find applications in generating non-classical radiation\cite{GBA19,RPD19}. However, detailed design guidelines for specific applications are generally lacking and it remains unclear whether the signal quality of these systems will be sufficient for high-fidelity qubit operations. In this Article, we report an on-chip coherent microwave source based on a Josephson junction strongly coupled to a spiral resonator. We develop a quantitative theoretical model for the resonator-coupled Josephson junction which operates in a particular parameter regime and can yield a stable and coherent continuous-wave microwave output. Our theory is verified in experiments which also provide relevant system parameters, including the output power and the microwave generation efficiency. We also confirm the applicability of our source for quantum-coherent operations by measuring the phase noise of the oscillator output; we provide the total phase noise spectrum up to large offset frequencies and evaluate its impact on the dephasing of an ideal qubit. We analyse, in particular, the source-induced infidelity of the identity operation and of the NOT gate for an ideal qubit, and conclude that they lie below 0.1\% up to a 10-ms evolution. We achieve this performance metric with a total cryogenic power consumption of 200 pW---which is compatible with the millikelvin environment---and a generation efficiency of about 15\%. We thus suggest that the source is compatible with driving qubits in schemes in which it is combined with additional amplitude and phase modulation components.
Due to the high output power, our on-chip source may also be of potential use in other applications, such as writing and retrieving quantum information encoded in spin ensembles\cite{GJK14} and pumping a microwave-to-optical photon converter\cite{GMS20}. \vspace*{10pt} \noindent \textbf{Theory and design} \noindent Our oscillators are realized as capacitively shunted Josephson junctions (C-shunt JJs) coupled to a resonator. Under certain conditions described below, the phase dynamics of a C-shunt JJ locks to a radiation field that couples to it\cite{HGH06}. The phase-locked and biased junction generates power at the corresponding angular frequency $\omega$. Our devices reside in the parameter regime corresponding to relatively high Josephson coupling energies in the range of $E_\textrm{J}/h\gtrsim 1$~THz, where $h$ is the Planck constant. Furthermore, it is known that the required harmonic phase-locking conditions are favored in the case\cite{KAU81} $\omega \gg \omega_\textrm{p}$, i.e., the generated angular frequency $\omega$ sufficiently exceeds the plasma angular frequency of the junction $\omega_\textrm{p} = \sqrt{8E_\textrm{J}E_\textrm{c}}/\hbar$, where $E_\textrm{c}$ is the charging energy of the junction. The plasma frequencies of bare junctions are typically much higher than our frequency range of interest, 1--10~GHz, which motivates us to decrease $\omega_\textrm{p}$ with a capacitive shunt. To this end, we use an additional parallel-plate capacitor with the effective charging energy $E_\textrm{c} = e^2/(2C_\textrm{s})\approx h\times$ 100--600~kHz, where $e$ is the elementary charge, or equivalently a shunt capacitance $C_\textrm{s}$ in the range of 30--200~pF. The utilized harmonic modes in our devices are implemented in planar resonators, either as lumped $LC$ resonators or spiral resonators.
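The quoted energy scales can be checked directly. A quick numerical sketch, using the standard relation $\omega_\textrm{p}=\sqrt{8E_\textrm{J}E_\textrm{c}}/\hbar$; the specific $E_\textrm{J}$ and $C_\textrm{s}$ values chosen below are illustrative, not device parameters from the paper:

```python
import numpy as np

h = 6.62607015e-34       # Planck constant (J s)
hbar = h / (2 * np.pi)
e = 1.602176634e-19      # elementary charge (C)

# charging energy E_c = e^2/(2 C_s) over the quoted shunt-capacitance range
for Cs in (30e-12, 200e-12):
    Ec = e**2 / (2 * Cs)
    print(f"Cs = {Cs*1e12:.0f} pF -> Ec/h = {Ec / h / 1e3:.0f} kHz")

# shunted plasma frequency for E_J/h = 1 THz and an assumed Cs = 100 pF
EJ = h * 1e12
Ec = e**2 / (2 * 100e-12)
f_p = np.sqrt(8 * EJ * Ec) / h        # omega_p / (2 pi)
print(f"plasma frequency ~ {f_p / 1e9:.1f} GHz")
```

The loop reproduces the $\approx h\times$ 100--600~kHz window for $E_\textrm{c}$, and the shunted plasma frequency lands around a gigahertz, below the 1--10~GHz operating band as the design requires.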
In either case, the designed mode profile corresponds to that of the equivalent $L_1$--$C_1$ series resonator as shown in Fig.~\ref{fig:1}a, i.e., there is a current maximum at the junction. The dissipation, including coupling to the load, possible intrinsic damping mechanisms, and the dissipation from the bias circuitry is formally included in the series resistance $R_1$. Note that the parameters of the equivalent circuit can be extracted by standard circuit analysis from the device design parameters, and hence can be considered known in the discussion below. Furthermore, we implement the voltage bias across the junction by applying direct current (DC) through a shunt resistor $R_\textrm{s}$ connected in parallel with the junction as shown in Fig.~\ref{fig:1}d. The detailed schemes for implementing the bias and for the coupling of the output radiation are described in the Supplementary Information (SI). Our theoretical approach is based on a perturbative treatment of the C-shunt JJ under a radio-frequency (RF) drive; see the Methods section for the detailed derivation. Since $\omega \gg \omega_\textrm{p}$, the voltage bias across the shunted junction is relatively accurately given by the voltage across the shunt capacitor assuming that all the current drive is simply charging and discharging the capacitor. Thus we define our unperturbed circuit to consist only of the shunt capacitor and the drive. In the next step, we treat the Josephson effect as a perturbation that slightly adjusts the current of the shunt capacitor and hence the voltage, see Fig.~\ref{fig:1}b. The gain properties of this driven circuit composed of the junction and the shunt capacitor can be extracted from the Fourier–Bessel expansion of the current-phase relation of a JJ assuming that the junction dynamics is phase-locked to the first Shapiro step with the average voltage of $\left\langle U\right\rangle =\frac{\Phi_{0}\omega}{2\pi}$, where $\Phi_{0}=h/(2e)$.
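The first Shapiro step pins the DC voltage at $\langle U\rangle = \Phi_0\omega/(2\pi) = \Phi_0 f$. For a frequency in the band of interest (5.3~GHz is taken here purely as an illustrative value) this is a microvolt-scale voltage:

```python
h = 6.62607015e-34          # Planck constant (J s)
e = 1.602176634e-19         # elementary charge (C)
Phi0 = h / (2 * e)          # magnetic flux quantum (Wb)

f = 5.3e9                   # illustrative emission frequency (Hz)
U = Phi0 * f                # <U> = Phi0 * omega / (2 pi) = Phi0 * f
print(f"<U> = {U * 1e6:.2f} uV")
```

This evaluates to roughly 11~µV, consistent with the ~10~µV bias voltages quoted for the Shapiro-step region later in the paper.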
The solution is conveniently represented in Fig.~\ref{fig:1}c by a complex impedance $Z_\textrm{J} = R_\textrm{J} + X_\textrm{J}$, in which $R_\textrm{J}$ and $X_\textrm{J}$ correspond to the real and imaginary components as \begin{equation}\label{eq:RJ} R_\textrm{J} = -\frac{\hbar\omega\langle I_\textrm{J}\rangle}{eI_{1}^{2}} \end{equation} \begin{equation}\label{eq:XJ} X_\textrm{J} = \frac{1}{i\omega C_\textrm{s}}\Bigg(1-\frac{I_\textrm{c}}{I_{1}}[J_{0}(\widetilde i_1)-J_{2}(\widetilde i_1)]\sqrt{1-\Big(\frac{\langle I_\textrm{J}\rangle}{I_\textrm{c}J_{1}(\widetilde i_1)}\Big)^{2}}\Bigg) \end{equation} \noindent where $\hbar=h/(2\pi)$, $\langle I_\textrm{J}\rangle$ is the DC current through the junction, $I_1$ is the current amplitude of the self-sustained oscillation of the system at angular frequency $\omega$, $\widetilde i_1 = \frac{2\pi I_{1}}{\Phi_{0}\omega^{2}C_\textrm{s}}$, $I_\textrm{c}$ is the critical current of the junction, and $J_n$ refers to the $n$th-order Bessel function of the first kind. As detailed in the SI, $\langle I_\textrm{J}\rangle$ can be expressed as $\langle I_\textrm{J}\rangle = I_{\mathrm{b}}-\frac{\Phi_{0}\omega}{2\pi R_{s}}$, where $I_\mathrm{b}$ and $R_\mathrm{s}$ are the total bias current and the resistance parallel to the junction at DC, respectively. The fact that $R_\textrm{J}$ is negative for positive $\langle I_\textrm{J}\rangle$ is a manifestation of the power generation ability\cite{HGH06}. To provide intuitive understanding of the electrodynamics of the source, we note that the power output properties are related to the real component of the impedance, $R_\textrm{J}$, whereas the imaginary component $X_\textrm{J}$ slightly modifies the resonator frequency. In a steady state, the photon emission rate from the JJ to the resonator equals that of the resonator decay, satisfied for $R_\textrm{J} = -R_1$ according to Fig.~\ref{fig:1}c. 
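Equations~(1) and (2) translate directly into code. A sketch with `scipy` Bessel functions, together with the $P_\textrm{out,max}=0.58\,I_\textrm{c}\hbar\omega/(2e)$ bound discussed in the text; the sample parameter values are hypothetical, chosen only so that the phase-locking constraint $\langle I_\textrm{J}\rangle < I_\textrm{c}|J_1(\widetilde i_1)|$ holds:

```python
import numpy as np
from scipy.special import jv

hbar = 1.054571817e-34
e = 1.602176634e-19
Phi0 = np.pi * hbar / e          # flux quantum h/(2e)

def junction_impedance(omega, I1, IJ_dc, Ic, Cs):
    """Effective junction impedance Z_J = R_J + X_J of Eqs. (1)-(2)."""
    i1 = 2 * np.pi * I1 / (Phi0 * omega**2 * Cs)   # reduced drive amplitude
    RJ = -hbar * omega * IJ_dc / (e * I1**2)
    XJ = (1 - (Ic / I1) * (jv(0, i1) - jv(2, i1))
          * np.sqrt(1 - (IJ_dc / (Ic * jv(1, i1)))**2)) / (1j * omega * Cs)
    return RJ + XJ

omega = 2 * np.pi * 5.3e9                        # hypothetical operating frequency
Z = junction_impedance(omega, I1=85e-6, IJ_dc=1e-6, Ic=4e-6, Cs=130e-12)
print(Z.real)                                    # negative: the junction supplies gain

# maximum available power bound: P_max = 0.58 * Ic * hbar*omega/(2 e)
P_max = 0.58 * 4e-6 * hbar * omega / (2 * e)
```

The negative real part is the gain that sustains the oscillation; sweeping `I1` until `Z.real == -R1` reproduces the steady-state amplitude condition described in the text.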
Formally, we can express the total losses including the Josephson effect by $R_\textrm{J} + R_1$ such that the total effective quality factor assumes the form $Q_\textrm{tot} = \sqrt{L_1/C_1}/(R_\textrm{J}+R_1)$, where we have for simplicity neglected the effect of $X_\textrm{J}$. Thus the sustained oscillations corresponding to $R_\textrm{J} = -R_1$ lead to a diverging $Q_\textrm{tot}$. Due to the amplitude dependence of $R_\textrm{J}$, one obtains at each bias point a single well-defined amplitude for the sustained oscillation. The total RF power generation is given by $P_\textrm{out} = -(Q_\textrm{t}/Q_\textrm{e})R_\textrm{J} I_1^2/2 =(Q_\textrm{t}/Q_\textrm{e})\langle I_\textrm{J}\rangle\hbar\omega/(2e)$ where $Q_\textrm{t}$ and $Q_\textrm{e}$ are the experimentally measurable total and loaded quality factors excluding the Josephson effect (see SI). The phase-locking condition implies\cite{KAU81} $\langle I_\textrm{J}\rangle<I_\textrm{c}|J_1(\tilde{i}_{1})|$. Therefore, the maximum power available from the oscillator is $P_\textrm{out,max} = 0.58\times I_\textrm{c}\hbar\omega/(2e)$, where the numeric prefactor follows from the properties of Bessel functions, i.e., from $\max_x{|J_1(x)|}\approx0.58$. Thus, an oscillator for a desired power level and a given frequency may be optimized by choosing $I_\textrm{c}$ accordingly. Technologically, the range of available critical currents spans several orders of magnitude\cite{LMV17}. Note that the maximum power output is obtained by optimizing the load such that $R_1\approx 0.68\frac{\pi I_{\mathrm{c}}}{\Phi_{0}C_\textrm{s}^{2}\omega^{3}}$ and minimizing the residual microwave losses such that $\eta_\mathrm{mw} \equiv Q_\mathrm{t}/Q_\mathrm{e}\approx 1$ (see SI). To account for the imaginary component, we note that $X_\textrm{J}$ yields a small correction to the reactance of the excited mode, and results in a small shift in the output frequency.
Its bias point dependence maps into the corresponding dependence for the output frequency. This phenomenon is anticipated to affect the quality of the output signal, since a fluctuating bias point thus leads to fluctuations in the frequency and hence in the phase of the output, as studied for different types of Josephson oscillators in the literature\cite{SSY01}. The relative effect of $X_\textrm{J}$ in the total reactance is minimized by maximizing the characteristic impedance of the resonator $\sqrt{L_1/C_1}$. This provides a guideline to resonator optimization in order to achieve higher frequency and phase stability of the output signal: maximize the impedance under the constraints set by the resonator design. Although we can achieve much higher resonator impedances given our design rules, we set here the impedance to roughly 100~$\Omega$ so that the dependence between the emission frequency and the DC bias can be conveniently experimentally observed. This allows us to demonstrate the validity of our model explicitly. Further improvement of the linewidth can be realized by increasing the characteristic impedance of the resonator, which may be obtained by decreasing the metallization ratio of the distributed transmission line resonator design and by using a thin film of high-kinetic-inductance material. \vspace*{10pt} \noindent \textbf{Experimental results} \noindent After the calibration of the gain of the amplification chain (Fig.~S3), the microwave emission spectrum is measured and shown in Fig.~\ref{fig:2}a as a function of the DC bias current $I_\textrm{b}$ which is converted to the DC bias voltage by the on-chip shunt resistor. 
We observe three bias regions with distinctive signatures in both the current-voltage (IV) characteristics and the emission spectrum as follows: the supercurrent state for $I_\textrm{b} <$~10~{\textmu}A (region I), the self-induced Shapiro step\cite{BCS99, HGH06} for 13~{\textmu}A~$< I_\textrm{b} <$~18~{\textmu}A (region II), and the normal state with the resistance determined by the shunt resistor for $I_\textrm{b} >$~19~{\textmu}A (region III). There is no photon emission in region I and negligible emission in region III. In contrast, a major emission occurs at the Shapiro step in region II. The Shapiro step is a manifestation of the self-induced locking of the Josephson and resonator dynamics leading to the measurable power emission. The central frequency of the emitted signal shifts towards higher frequency with increasing $I_\textrm{b}$ owing to the above-mentioned $X_\textrm{J}$ [Fig.~\ref{fig:2}a]. This behaviour is well captured by our model. The emitted power, shown in Fig.~\ref{fig:2}b, is obtained by integrating the emission spectrum over frequency. The power increases almost linearly with increasing $I_\textrm{b}$, again well captured by the model. The output power exceeds 20 pW for $I_\textrm{b} >$ 16 {\textmu}A peaking at 28 pW ($-75.5$~dBm) with a corresponding DC power $P_\textrm{DC}$ = 17.7~{\textmu}A $\times$ 10.7~{\textmu}V = 189 pW ($-67.2$~dBm). This suggests a DC-to-RF conversion efficiency of 15$\%$ at maximum output power. As shown in Fig.~\ref{fig:2}c, the typical emission linewidth is $4.1\pm 0.1$~kHz, which is roughly five times as sharp as that obtained in ref.\cite{CBR17} ($\sim$22 kHz). Such a narrow linewidth suggests potential for a noticeable improvement of phase stability over previous coherent cryogenic sources\cite{LST65,AIN07,LSG15,LHS17,LSE17,CLA14,CBR17}.
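The power bookkeeping in this paragraph is easy to verify from the stated current and voltage:

```python
import math

P_rf = 28e-12                  # peak emitted microwave power (W)
P_dc = 17.7e-6 * 10.7e-6       # DC input power I * V, about 189 pW

def dBm(p):
    """Convert power in watts to dBm."""
    return 10 * math.log10(p / 1e-3)

print(f"P_rf = {dBm(P_rf):.1f} dBm")       # matches the quoted -75.5 dBm
print(f"P_dc = {dBm(P_dc):.1f} dBm")       # matches the quoted -67.2 dBm
print(f"efficiency = {P_rf / P_dc:.0%}")   # about 15 %
```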
Interestingly, we use only a single-pole room-temperature commercial low-pass filter on the DC line in the present setup, whereas filters at multiple temperature stages have been utilized in previous studies\cite{CBR17}. Thus our filtering scheme relaxes some of the burden required to build and test the experimental setup. However, improvements on the filtering scheme in our setup may lead to further narrowing of the spectral line. To find evidence that the output of our source is composed of microwaves in a coherent state, we utilize the heterodyne detection technique. The output field is demodulated by a local reference tone to yield the in-phase (I) and quadrature-phase (Q) components. The frequency of the reference tone is detuned from the central emission frequency by 62.5 MHz, optimized for our setup. The results of $10^6$ samples are summarized as a two-dimensional (2D) probability distribution depicted in Fig.~\ref{fig:2}d. The probability distribution of the output shows a nearly Gaussian shape with respect to the intensity of the radiation field, or the radius in Fig.~\ref{fig:2}d, as detailed in Fig.~S7. The Gaussian ensemble rotates at an intermediate frequency of 62.5 MHz in the IQ plane, and hence resembles a ring with finite radius and width. This coincides with the distribution of a coherent state averaged over different phase shifts, hence our observation provides evidence of the coherent character of the emission. In Fig.~S8, we provide data on a reference device differing from that discussed here mainly in its lower critical current, roughly 1.8~{\textmu}A, and having a lumped-element $LC$ resonator. In addition, the resonator impedance is $\sim\,$3.8~$\Omega$ as opposed to $\sim\,$75~$\Omega$ for the spiral resonator. As expected from the discussion above, the frequency of the emitted signal in Fig.~S8 is much more sensitive to the bias current than for the spiral-resonator sample.
Thus the emitted signal is expected to experience excessive phase noise. Nevertheless, the good agreement of the experimental data with our theory for this sample of very different parameters than those of the spiral-resonator sample provides a convenient verification of our model. To gain more understanding on possible limitations of the linewidth of the generated signal, we utilize the well-established injection locking technique\cite{LSG15, LHS17,CBR17,MDF19}. Here, the frequency of the injection tone $f_\textrm{inj}$ is fixed to that of the free emission at a given bias, whereas the power $P_\textrm{inj}$ is swept. Figure~\ref{fig:3}a illustrates that in our experiments, the injection tone induces a very sharp peak into the emission spectrum, into which the whole emission gradually shifts with increasing injection power. For $P_\textrm{inj} > -100$ dBm, we observe only a single emission peak with a linewidth of 1~Hz (Fig.~\ref{fig:3}b). Interestingly, this linewidth is limited by the smallest resolution bandwidth of the used spectrum analyzer, and our more detailed study of the measured spectrum shown in Fig.~S12 suggests that the linewidth of the injection-locked source is of the order of 1~mHz or below. It is also possible to measure very small linewidths with an advanced hardware setup, for example, by carrying out a Fourier transform of the IQ traces after sufficient averaging. Yet, we note that the typical linewidths of state-of-the-art superconducting qubits are in the kilohertz range, i.e., comparable to the linewidth of our source without injection locking, see Fig.~\ref{fig:2}c. We have also measured the emission spectrum with a fixed $P_\textrm{inj}$ but varying $f_\textrm{inj}$, and the results agree well with Adler theory, as shown in Figs.~\ref{fig:3}c and~d, and discussed in the SI.
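Injection locking of this kind is described by the Adler phase equation $\dot\varphi = \Delta\omega - \omega_\textrm{L}\sin\varphi$, which admits a locked (stationary-phase) solution exactly when the detuning satisfies $|\Delta\omega| \le \omega_\textrm{L}$. A toy integration illustrating the locked and unlocked regimes; the locking-bandwidth value is made up, not a measured number:

```python
import numpy as np

def locks(delta_omega, omega_L, t_end=1e-3, n=50000):
    """Euler-integrate the Adler equation and test for a stationary phase."""
    phi, dt = 0.0, t_end / n
    for _ in range(n):
        phi += (delta_omega - omega_L * np.sin(phi)) * dt
    # in lock, the phase settles where delta_omega = omega_L * sin(phi)
    return abs(delta_omega - omega_L * np.sin(phi)) < 1e-3 * omega_L

omega_L = 2 * np.pi * 50e3               # hypothetical locking bandwidth (rad/s)
print(locks(0.3 * omega_L, omega_L))     # inside the locking range: True
print(locks(3.0 * omega_L, omega_L))     # outside: the phase keeps slipping
```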
We extract the phase noise $\mathscr{L}$ from the emission spectrum under injection locking with $P_\textrm{inj} = -100$ dBm where the injection tone contributes less than $1\%$ of the total power. Our results presented in Fig.~\ref{fig:4}a show that $\mathscr{L}$ decays rapidly with increasing frequency offset $f_\textrm{off}$ from $f_\textrm{inj}$. It reaches $-95$~dBc/Hz at $f_\textrm{off}=10$~kHz, which is about 15~dB below the corresponding value for a typical lab-grade local oscillator (LO) operating at room temperature\cite{BOB16}, but it needs further improvement to be compatible with high-precision LOs such as that used to generate the injection tone. The measured phase noise eventually saturates to $-120$~dBc/Hz at $\sim$5 MHz. The saturation is mainly determined by noise added by our amplification chain (Fig.~S10). The noise floor can possibly be subtracted to a large extent from the source noise by averaging the data carefully for the offset frequency exceeding 5~MHz (Fig.~\ref{fig:4}). It is possible to minimize the influence of the system noise using the cross-correlation technique\cite{WWF92} and thus to obtain the actual $\mathscr{L}$ at large $f_\textrm{off}$. We leave this extension for future research. Nevertheless, we note that $\mathscr{L} = -116$~dBc/Hz at an offset frequency of 1~MHz is well below $-99$~dBc/Hz obtained by the quantum-dot-based on-chip microwave source studied in ref.\cite{LHS17}. Owing to the large output power and hence the large signal-to-noise ratio, the phase noise $\mathscr{L}$ yields the dominating noise of the device up to relatively large offset frequencies. This motivates us to examine the influence of the phase noise on apparent qubit dephasing and on the gate and operation fidelity. We consider the source, augmented with a noiseless pulse and phase modulator, to drive an ideal qubit that is free of intrinsic dephasing and dissipation.
Following the framework of ref.\cite{BOB16}, we calculate the infidelity of the quantum operations, defined as $1 - F_\textrm{av}(\tau)$, where the averaged operational fidelity is denoted by $F_\textrm{av}(\tau)$. We have $$F_\textrm{av}(\tau) \approx \frac{1}{2} [1+\textrm{e}^{-X(\tau)}]$$ where the evolution time is denoted by $\tau$, $$X(\tau)=\frac{1}{2\pi}\int_{0}^{\infty}\textrm{d}f\,10^{\mathscr{L}(f)/(10\textrm{ dBc/Hz})}\frac{1}{\textrm{Hz}} \sum_{l \in x,y,z}G_{z,l}(f,\tau)$$ and $G_{z,l}(f,\tau)$ is a filter function that quantifies the action of the control Hamiltonian and hence depends on the specific quantum operation as discussed in the SI. Figure~\ref{fig:4}b shows the calculated infidelity for prototypical quantum operations: Ramsey, Hahn echo, and NOT gate operations. The infidelity is $\sim\,$0.1$\%$ for all these operations after a long evolution time of $\tau=10$~ms. These infidelities are an order of magnitude lower than those achieved by a typical lab-grade LO\cite{BOB16}. However, in the short $\tau$ limit, the calculated infidelity is about an order of magnitude larger than that obtained from a lab-grade LO\cite{BOB16} due to the overestimated phase noise $\mathscr{L}$ at large $f_\textrm{off}$ arising from the amplification chain. The low-offset-frequency components dominate in the long $\tau$ limit. On the other hand, both low- and high-frequency components have a noticeable contribution in the short $\tau$ limit as discussed in the SI. For comparison, the infidelities of the operations are reduced by an order of magnitude if we extract them from the phase noise measured for the LO. These infidelities are also significantly affected by the noise floor set by the amplification chain. The same calculation with the noise floor subtracted can be found in the SI.
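The structure of the $X(\tau)$ integral can be sketched numerically. The toy below is not the filter-function calculation of ref.\cite{BOB16}: it replaces the sum over $G_{z,l}$ with the simplest free-evolution weight $4\sin^2(\pi f\tau)$ applied to a single-sided phase spectrum $S_\varphi(f)\approx 2\times 10^{\mathscr{L}(f)/10}$, and uses a made-up piecewise noise model only loosely shaped like the measured curve; it shows the mechanics and the right order of magnitude, nothing more:

```python
import numpy as np

def ramsey_infidelity(tau, f, L_dBcHz):
    """Toy dephasing estimate from oscillator phase noise (free evolution)."""
    S_phi = 2.0 * 10 ** (np.asarray(L_dBcHz) / 10.0)      # rad^2/Hz, single-sided
    w = S_phi * 4.0 * np.sin(np.pi * f * tau) ** 2        # phase-increment variance density
    sigma2 = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(f))  # trapezoidal integral
    return 0.5 * (1.0 - np.exp(-sigma2 / 2.0))

# made-up model: -60 dBc/Hz at 100 Hz, falling -20 dB/decade to a -120 dBc/Hz floor
f = np.logspace(2, 7, 4000)
L = np.maximum(-60.0 - 20.0 * np.log10(f / 1e2), -120.0)
infid = ramsey_infidelity(10e-3, f, L)
print(infid)    # of order 1e-4 for a 10-ms evolution
```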
The above-measured low noise of our microwave source suggests that the device is suitable for controlling state-of-the-art superconducting qubits with coherence times currently reaching 100~{\textmu}s\cite{RGP12, RBM19, XHC20}. \\ \noindent \textbf{Conclusions} \noindent In this work, we demonstrated an on-chip coherent microwave source realized by a Josephson junction strongly coupled to a spiral resonator. The source can generate microwave signals with narrow linewidth ($<1$~Hz, suggestively $\lesssim 1$~mHz), low noise ($<-120$~dBc/Hz), high output power ($>25$~pW), and a DC-to-RF power conversion efficiency of about 15\%. The output power is two orders of magnitude higher than that previously reported for double-quantum-dot sources\cite{LSG15, LHS17, LSE17} or an aluminum-junction source\cite{CBR17}. We confirm that the expected infidelity bound arising from the phase noise of our source in a typical quantum-logic operation is below 0.1$\%$ up to a 10-ms evolution, ensuring that the signal quality is sufficient for the control of state-of-the-art quantum systems. We used the injection-locking technique in an effort to study the intrinsic limitations of the oscillator. As for any oscillating source independent of the technology, frequency and phase stabilization techniques need to be used. An alternative to this injection locking scheme is to bring the reference tone to the cryogenic temperature at a low-frequency band restricting the bandwidth requirements from room temperature, and to use frequency multiplication techniques\cite{HSD16} to generate the injection tone.
Furthermore, integration of superconducting quantum interference devices (SQUIDs) with the source allows one to tune the emission frequency without significant amplitude modulation, and hence enables frequency stabilization based on phase-locked loops which are typically used in the context of voltage-controlled oscillators at room temperature, and for which superconducting counterparts have been demonstrated as well\cite{VSP02}. In general, there are several approaches pursued in the field of integrated control of quantum systems, including cryogenic semiconductor-based techniques, or even those based on optical-to-microwave transducers\cite{GMS20}. Semiconductor-based oscillators have been demonstrated with the output power of about 0.2 {\textmu}W at 1.5~K\cite{ANG19}. Full semiconductor-based cryogenic control systems have been demonstrated at the operating temperatures of a few kelvin, with power consumption of the order of 100~mW\cite{BPS20}. Cryogenic semiconducting electronics has been demonstrated at a temperature of 100 mK, with power consumption of 18~nW per cell\cite{PDK21}. While less matured, all-superconducting control electronics concepts are likely superior in power efficiency. Ideally, the power dissipated at the base temperature is efficiently converted into the control signals of the quantum system. An order-of-magnitude estimate for our source, assuming 100-ns-long control or readout pulses of 10 photons at 5.3 GHz, suggests that the available output power can drive about 7000 qubits. The power and signal quality alone do not yet guarantee that the source is useful in the control of quantum systems. A feasible scheme for millikelvin-operated full waveform control may be a combination of our source with cryogenic microwave phase shifters\cite{KOL17, ZLK20} or flux-tunable resonators\cite{OSS07} and quantum-circuit refrigerators\cite{TPL17, SMS19}, enabling power-efficient waveform control.
Moreover, cryogenic mixing or parametric frequency-conversion techniques are an option for full waveform generation. Importantly, we anticipate that a stable microwave source is useful in contexts other than direct qubit control, such as circuits based on single-flux-quantum (SFQ) logic\cite{LMV19}, which need a master clock reference. Compared to semiconductor counterparts, SFQ is orders of magnitude more power efficient\cite{SPH06}. In general, it is understood that the field of integrated control systems in cryogenic solid-state quantum technology is still in its infancy. Independent of the detailed realization, it is expected that a power-efficient low-phase-noise reference oscillator is a necessary component, acting as the primary source of microwave power and as the reference master clock. In the future, we aim to study the properties of cascaded cryogenic sources, where one injection-locked source works as the locking tone for other sources. In such a scenario, the total injection power delivered from room temperature, and hence the required amount of microwave cables, is independent of the size of the cryogenic control system, such as the number of qubits in a large-scale quantum computer. Furthermore, we aim to study the thermalization of the output impedance of the source. Although the measured phase noise also includes the effect of finite temperature, thermal photons leaking to different parts of a quantum computer may lead to additional undesired dephasing. Fortunately, our preliminary thermal analysis and that in ref.\cite{YHZ19} suggest that the shunt resistor can be thermalized well below 100~mK even at high output powers. In fact, a cryogenic source may also be able to utilize the exponential thermal suppression of noise photons at the signal frequency, in stark contrast to signals generated at room temperature, after which the suppression of noise photons by cryogenic attenuation or filtering leads to equal suppression of the signal power. 
\noindent \textbf{Methods} \noindent\textbf{Theoretical model.} Let us develop an analytic model for the Josephson oscillators based on a capacitively shunted Josephson junction coupled in parallel with a resonator circuit. The concept is to first derive an analytic model for the junction as a gain element. Then the gain properties are analyzed for the resonator-coupled junction. The mathematical tools used include (i) well-known conditions of junction phase-locking to an RF drive, (ii) a perturbative harmonic analysis of the junction to provide the gain properties under the RF drive and the phase-locking conditions, and (iii) showing by a power-balance criterion that the gain and phase-locking conditions support a stable sustained oscillation under the self-generated RF drive. For (ii), the junction properties are described as an effective impedance, the negative real part of which is the manifestation of the gain. The model provides a convenient engineering tool for obtaining the basic properties of the oscillator. Namely, it provides simple design criteria for stable oscillator operation that can be used to predict the output power, DC-to-RF conversion efficiency, and, notably, the operation-point-dependent output frequency. We begin our analysis from an RF-driven capacitively shunted Josephson junction, as shown in Fig.~\ref{fig:1}. In the first step, let us consider a bare capacitively shunted Josephson junction. Assume that the junction is subjected to an RF current $I_\textrm{RF}=I_{1}\sin(\omega t)$ such that $\omega\gg\omega_\textrm{p}$, where $\omega_\textrm{p}=1/\sqrt{L_\textrm{J}C_\textrm{s}}$ is the junction plasma frequency, $L_\textrm{J}$ is the Josephson inductance, and $C_\textrm{s}$ is the capacitance parallel to the junction. The capacitance $C_\textrm{s}$ is dominated by the shunt capacitance since the intrinsic capacitance of the junction is negligible. 
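The regime $\omega\gg\omega_\textrm{p}$ can be sanity-checked numerically. The critical current and shunt capacitance below are representative values assumed by us for illustration, not device parameters from the paper; only the drive frequency follows the text.

```python
import math

# Check the regime omega >> omega_p for representative (assumed) parameters.
Phi0 = 2.067833848e-15    # magnetic flux quantum (Wb)
I_c = 10e-6               # assumed critical current (A)
C_s = 1e-9                # assumed shunt capacitance (F)
f_drive = 5.3e9           # drive frequency from the text (Hz)

L_J = Phi0 / (2 * math.pi * I_c)     # Josephson inductance (H)
omega_p = 1 / math.sqrt(L_J * C_s)   # junction plasma frequency (rad/s)
omega = 2 * math.pi * f_drive
ratio = omega / omega_p              # should be well above 1
```

For these values the drive frequency sits several times above the plasma frequency, consistent with the capacitive-coupling assumption used below.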
Under the above assumptions, the RF drive couples predominantly capacitively through the parallel connection of the junction and the shunt capacitance. Thus, the unperturbed voltage, neglecting the Josephson effect, across the tunnel junction is simply \begin{equation}\label{eq:1} V_\textrm{T}(t)= \frac{1}{C_\textrm{s}}\int I_{1}\sin(\omega t)\textrm{d}t=-\frac{I_{1}}{\omega C_\textrm{s}}\cos(\omega t)+\langle U\rangle \end{equation} where the constant of integration is the DC voltage across the junction. We employ the AC Josephson relation to obtain the phase across the junction as \begin{equation}\label{eq:2} \phi=\frac{2\pi}{\Phi_{0}}\int V_\textrm{T}\,\textrm{d}t=-\frac{2\pi I_{1}}{\Phi_{0}\omega^{2}C_\textrm{s}}\sin(\omega t)+\frac{2\pi}{\Phi_{0}}\langle U\rangle t-\phi_\textrm{c} \end{equation} where we have defined the constant of integration $-\phi_\textrm{c}$. Next, we utilize the Josephson current-phase relation $ I_\textrm{J}=I_\textrm{c}\sin\phi$ which provides the current through the Josephson tunnel element as \begin{equation}\label{eq:3} I_\textrm{J}=I_\textrm{c}\sin[-\widetilde i_1 \sin(\omega t)+\omega t-\phi_\textrm{c}] \end{equation} where we have defined $\widetilde i_1 = \frac{2\pi I_{1}}{\Phi_{0}\omega^{2}C_\textrm{s}}$ and written $\langle U\rangle=\Phi_{0}\omega/(2\pi)$ by adopting the phase-locking condition, i.e., the system is biased on the first Shapiro step. This condition is analyzed in the SI to check the validity range of the solutions. Here, we merely assume it. We rewrite equation~\eqref{eq:3} with the help of its Fourier–Bessel series as \begin{equation}\label{eq:4} I_\textrm{J}(I_{0}, \ I_{1},\ t)=I_\textrm{c}\sum_{m=-\infty}^{\infty}J_{m}(\widetilde i_1)\sin[\omega t(1-m)-\phi_\textrm{c}] \end{equation} Let us first consider the direct current $\langle I_\textrm{J}\rangle$ through the tunnel element. 
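The Fourier–Bessel expansion in equation~\eqref{eq:4} is a Jacobi–Anger-type identity and can be verified numerically. The drive amplitude and phase offset below are arbitrary test values, and the Bessel function is implemented from its power series so the sketch needs only the standard library.

```python
import math

def bessel_j(m, z):
    # Bessel function of the first kind via its power series;
    # uses J_{-m}(z) = (-1)^m J_m(z) for negative integer order.
    if m < 0:
        return (-1) ** (-m) * bessel_j(-m, z)
    term = (z / 2) ** m / math.factorial(m)
    total = term
    for k in range(1, 30):
        term *= -(z / 2) ** 2 / (k * (k + m))
        total += term
    return total

i1 = 0.7      # dimensionless drive amplitude \tilde{i}_1 (arbitrary test value)
phi_c = 0.3   # phase offset \phi_c (arbitrary test value)

def lhs(wt):  # equation (3), with I_c = 1
    return math.sin(-i1 * math.sin(wt) + wt - phi_c)

def rhs(wt, M=20):  # equation (4), truncated at |m| <= M
    return sum(bessel_j(m, i1) * math.sin(wt * (1 - m) - phi_c)
               for m in range(-M, M + 1))

err = max(abs(lhs(wt) - rhs(wt)) for wt in [0.1 * k for k in range(63)])
```

Over a full drive period the truncated series agrees with the closed form to machine precision, confirming the expansion used in the derivation.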
In equation~\eqref{eq:4} this follows from the term $m=1$ as \begin{equation} \langle I_\textrm{J}\rangle=I_\textrm{c}J_{1}(\widetilde i_1)\sin(-\phi_\textrm{c}) \end{equation} In fact, $\langle I_\textrm{J}\rangle = I_{\mathrm{b}}-\frac{\Phi_{0}\omega}{2\pi R_{s}}$ is related to the external direct bias current $I_{\mathrm{b}}$ (see SI). Solving for $\phi_\textrm{c}$ yields \begin{equation}\label{eq:6} \phi_\textrm{c}=\arcsin\Big(-\frac{\langle I_\textrm{J}\rangle}{I_\textrm{c}J_{1}(\widetilde i_1)}\Big) \end{equation} To address the RF properties, we consider the fundamental-frequency component of $I_\textrm{J}(t)$, which follows from equation~\eqref{eq:4}, picking the terms $m=0$ and $m=2$, as \begin{equation} I_{J1}(t)=I_\textrm{c}[J_{0}(\widetilde i_1)\sin(\omega t-\phi_\textrm{c})+J_{2}(\widetilde i_1) \sin (-\omega t-\phi_\textrm{c})] \end{equation} Using a Bessel recurrence formula and equation~\eqref{eq:6} leads to \begin{equation} I_{J1}(t)= I_\textrm{c}[J_{0}(\widetilde i_1)-J_{2}(\widetilde i_1)]\sqrt{1-\Big(\frac{\langle I_\textrm{J}\rangle}{I_\textrm{c}J_{1}(\widetilde i_1)}\Big)^{2}}\sin(\omega t)+\frac{2\langle I_\textrm{J}\rangle}{\widetilde i_1J_{1}(\widetilde i_1)}J_{1}(\widetilde i_1)\cos(\omega t) \end{equation} In the capacitively shunted junction case, the RF current $I_{J1}$ couples back to the shunt capacitance since we assume that it is the dominant impedance at frequency $\omega$. As this happens, a voltage emerges on top of $V_\textrm{T}$ of equation~\eqref{eq:1}. 
Denoting this voltage perturbation by $V_\textrm{P}(t)$, we obtain \begin{equation}\label{eq:9} V_\textrm{P}(t)=\frac{I_\textrm{c}}{\omega C_\textrm{s}}[J_{0}(\widetilde i_1)-J_{2}(\widetilde i_1)]\sqrt{1-\Big(\frac{\langle I_\textrm{J}\rangle}{I_\textrm{c}J_{1}(\widetilde i_1)}\Big)^{2}}\cos(\omega t)-\frac{2\langle I_\textrm{J}\rangle}{\widetilde i_1\omega C_\textrm{s}}\sin(\omega t) \end{equation} Let us convert equations~\eqref{eq:1} and~\eqref{eq:9} into the frequency domain by identifying the in-phase $\sin(\omega t)$ and quadrature $\cos(\omega t)$ components to write $\widetilde V_\textrm{tot}(\omega)=\widetilde V_\textrm{T}(\omega)+\widetilde V_\textrm{P}(\omega)$. Furthermore, it is practical to write the output in the form of an impedance $Z_\textrm{J}(\omega)=\widetilde V_\textrm{tot}(\omega)/I_{1}$. From equations~\eqref{eq:1} and~\eqref{eq:9} and by arranging the quadratures, it follows that \begin{equation}\label{eq:10} Z_\textrm{J}(\omega)= -\frac{2\langle I_\textrm{J}\rangle}{\widetilde i_1 I_{1}\omega C_\textrm{s}}-i\frac{1}{\omega C_\textrm{s}}+i\frac{I_\textrm{c}}{\omega C_\textrm{s}I_{1}}[J_{0}(\widetilde i_1)-J_{2}(\widetilde i_1)]\sqrt{1-\Big(\frac{\langle I_\textrm{J}\rangle}{I_\textrm{c}J_{1}(\widetilde i_1)}\Big)^{2}} \end{equation} Inserting the definition $\widetilde i_1 = \frac{2\pi I_{1}}{\Phi_{0}\omega^{2}C_\textrm{s}}$ yields \begin{equation}\label{eq:11} Z_\textrm{J}(\omega)= -\frac{\omega\Phi_{0}\langle I_\textrm{J}\rangle}{\pi I_{1}^{2}} +\frac{1}{i\omega C_\textrm{s}}\Bigg\{1-\frac{I_\textrm{c}}{I_{1}}[J_{0}(\widetilde i_1)-J_{2}(\widetilde i_1)]\sqrt{1-\Big(\frac{\langle I_\textrm{J}\rangle}{I_\textrm{c}J_{1}(\widetilde i_1)}\Big)^{2}}\Bigg\} \end{equation} We stress that the real part of the junction impedance is negative if $\langle I_\textrm{J}\rangle>0$, and hence represents gain. Note that here the direction of positive current is fixed by the choice that we operate at the first positive-voltage Shapiro step. 
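Equation~\eqref{eq:11} can be evaluated directly to confirm the sign of the real part. All component values below are representative assumptions chosen so that $\widetilde i_1 \approx 1$ and the phase-locking constraint $|\langle I_\textrm{J}\rangle| < I_\textrm{c}J_1(\widetilde i_1)$ holds; they are not the device parameters.

```python
import math

def bessel_j(m, z):
    # Bessel J_m via its power series (sufficient for small arguments)
    term = (z / 2) ** m / math.factorial(m)
    total = term
    for k in range(1, 30):
        term *= -(z / 2) ** 2 / (k * (k + m))
        total += term
    return total

Phi0 = 2.067833848e-15     # flux quantum (Wb)
omega = 2 * math.pi * 5.3e9
C_s = 1e-9                 # assumed shunt capacitance (F)
I_c = 10e-6                # assumed critical current (A)
I_1 = 365e-6               # assumed RF current amplitude (A)
I_dc = 2e-6                # assumed <I_J>; must obey |I_dc| < I_c*J_1(i1)

i1 = 2 * math.pi * I_1 / (Phi0 * omega**2 * C_s)
root = math.sqrt(1 - (I_dc / (I_c * bessel_j(1, i1))) ** 2)
Z_J = (-omega * Phi0 * I_dc / (math.pi * I_1 ** 2)
       + 1 / (1j * omega * C_s)
       * (1 - I_c / I_1 * (bessel_j(0, i1) - bessel_j(2, i1)) * root))
```

For a positive dc junction current the real part indeed comes out negative (gain), while the imaginary part remains capacitive, slightly reduced from the bare shunt-capacitor reactance by the Josephson term.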
The imaginary part of the junction impedance equals that of the shunt capacitor modified by a nontrivial perturbative term due to the Josephson effect. Equation~\eqref{eq:11} yields equations~\eqref{eq:RJ} and~\eqref{eq:XJ} for the real and the imaginary component of the junction impedance, respectively. \vspace*{10pt} \noindent\textbf{Device fabrication.} The devices are fabricated in a multi-layer process for superconducting circuits, the key element of which is a sidewall-passivated-spacer (SWAPS) technique\cite{LMV17} for the Nb-Al/AlO${}_x$-Nb Josephson junctions. Figures~1b and~S1 summarize the structure of the device. The fabrication begins with a hydrofluoric-acid (HF) dip of a 150-mm high-resistivity silicon wafer to remove oxides from the surface. The trilayer stack for the junctions with thicknesses 100~nm/10~nm/100~nm is then deposited, with a target critical-current density of 100~A/cm${}^2$. We deposit the subsequent layers as follows: a second 120-nm-thick niobium layer, an atomic-layer-deposited 50-nm AlO${}_x$ insulator for a parallel-plate capacitance density of roughly 1.5~fF/\textmu$\text{m}^2$, a third niobium layer of 120-nm thickness, and a normal-metal layer with a thickness of about 100--130~nm for a sheet resistance of approximately 4~$\Omega/\square$. The patterning of the layers is enabled by ultraviolet (UV) photolithography. The niobium layers are etched with plasma, whereas the insulators and the resistors are wet etched. 
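The quoted capacitance density of the 50-nm ALD AlO${}_x$ layer is consistent with a simple parallel-plate estimate. The relative permittivity used below is a typical literature value for ALD alumina assumed by us, not a value measured in this work.

```python
# Parallel-plate estimate of the AlOx capacitance density.
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
eps_r = 8.5               # assumed relative permittivity of ALD AlOx
d = 50e-9                 # insulator thickness from the text (m)

c_density = eps0 * eps_r / d        # F/m^2
c_density_fF_um2 = c_density * 1e3  # 1 F/m^2 = 1e3 fF/um^2
```

This yields about 1.5~fF per square micrometre, matching the density quoted in the fabrication description.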
The Josephson oscillators are fabricated in the same batch of wafers as the traveling-wave parametric amplifiers of ref.\cite{SVM20}.\\ \providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1} \begin{thebibliography}{10} \urlstyle{rm} \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \expandafter\ifx\csname doiprefix\endcsname\relax\def\doiprefix{DOI: }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem{HSJ08} \bibinfo{author}{Houck, A.~A.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Controlling the spontaneous emission of a superconducting transmon qubit}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{101}}, \bibinfo{pages}{080502}, DOI: \url{10.1103/PhysRevLett.101.080502} (\bibinfo{year}{2008}). \bibitem{SHO09} \bibinfo{author}{Shinkai, G.}, \bibinfo{author}{Hayashi, T.}, \bibinfo{author}{Ota, T.} \& \bibinfo{author}{Fujisawa, T.} \newblock \bibinfo{journal}{\bibinfo{title}{Correlated coherent oscillations in coupled semiconductor charge qubits}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{103}}, \bibinfo{pages}{056802}, DOI: \url{10.1103/PhysRevLett.103.056802} (\bibinfo{year}{2009}). \bibitem{MDL14} \bibinfo{author}{Muhonen, J.~T.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Storing quantum information for 30 seconds in a nanoelectronic device}}. \newblock {\emph{\JournalTitle{Nature Nanotechnology}}} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{986} (\bibinfo{year}{2014}). \bibitem{NRK18} \bibinfo{author}{Neill, C.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{A blueprint for demonstrating quantum supremacy with superconducting qubits}}. \newblock {\emph{\JournalTitle{Science}}} \textbf{\bibinfo{volume}{360}}, \bibinfo{pages}{195--199} (\bibinfo{year}{2018}). 
\bibitem{SAM18} \bibinfo{author}{Saffman, M.} \newblock \bibinfo{journal}{\bibinfo{title}{{Quantum computing with neutral atoms}}}. \newblock {\emph{\JournalTitle{National Science Review}}} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{24--25}, DOI: \url{10.1093/nsr/nwy088} (\bibinfo{year}{2018}). \bibitem{AAB19} \bibinfo{author}{Arute, F.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Quantum supremacy using a programmable superconducting processor}}. \newblock {\emph{\JournalTitle{Nature}}} \textbf{\bibinfo{volume}{574}}, \bibinfo{pages}{505--510} (\bibinfo{year}{2019}). \bibitem{NFB10} \bibinfo{author}{Nadj-Perge, S.}, \bibinfo{author}{Frolov, S.}, \bibinfo{author}{Bakkers, E.} \& \bibinfo{author}{Kouwenhoven, L.~P.} \newblock \bibinfo{journal}{\bibinfo{title}{Spin--orbit qubit in a semiconductor nanowire}}. \newblock {\emph{\JournalTitle{Nature}}} \textbf{\bibinfo{volume}{468}}, \bibinfo{pages}{1084--1087} (\bibinfo{year}{2010}). \bibitem{RBT13} \bibinfo{author}{Rist{\`e}, D.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Millisecond charge-parity fluctuations and induced decoherence in a superconducting transmon qubit}}. \newblock {\emph{\JournalTitle{Nature Communications}}} \textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{1--6} (\bibinfo{year}{2013}). \bibitem{KSK19} \bibinfo{author}{Krinner, S.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Engineering cryogenic setups for 100-qubit scale superconducting circuit systems}}. \newblock {\emph{\JournalTitle{EPJ Quantum Technology}}} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{2}, DOI: \url{10.1140/epjqt/s40507-019-0072-0} (\bibinfo{year}{2019}). \bibitem{KNL97} \bibinfo{author}{Knill, E.} \& \bibinfo{author}{Laflamme, R.} \newblock \bibinfo{journal}{\bibinfo{title}{Theory of quantum error-correcting codes}}. \newblock {\emph{\JournalTitle{Phys. Rev. A}}} \textbf{\bibinfo{volume}{55}}, \bibinfo{pages}{900--911}, DOI: \url{10.1103/PhysRevA.55.900} (\bibinfo{year}{1997}). 
\bibitem{CSH96} \bibinfo{author}{Calderbank, A.~R.} \& \bibinfo{author}{Shor, P.~W.} \newblock \bibinfo{journal}{\bibinfo{title}{Good quantum error-correcting codes exist}}. \newblock {\emph{\JournalTitle{Phys. Rev. A}}} \textbf{\bibinfo{volume}{54}}, \bibinfo{pages}{1098--1105}, DOI: \url{10.1103/PhysRevA.54.1098} (\bibinfo{year}{1996}). \bibitem{BOB16} \bibinfo{author}{Ball, H.}, \bibinfo{author}{Oliver, W.~D.} \& \bibinfo{author}{Biercuk, M.~J.} \newblock \bibinfo{journal}{\bibinfo{title}{The role of master clock stability in quantum information processing}}. \newblock {\emph{\JournalTitle{npj Quantum Information}}} \textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{1--8} (\bibinfo{year}{2016}). \bibitem{VSP02} \bibinfo{author}{Koshelets, V.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Towards a phase-locked superconducting integrated receiver: prospects and limitations}}. \newblock {\emph{\JournalTitle{Physica C: Superconductivity}}} \textbf{\bibinfo{volume}{367}}, \bibinfo{pages}{249 -- 255}, DOI: \url{https://doi.org/10.1016/S0921-4534(01)01046-2} (\bibinfo{year}{2002}). \bibitem{LSG15} \bibinfo{author}{Liu, Y.-Y.}, \bibinfo{author}{Stehlik, J.}, \bibinfo{author}{Gullans, M.~J.}, \bibinfo{author}{Taylor, J.~M.} \& \bibinfo{author}{Petta, J.~R.} \newblock \bibinfo{journal}{\bibinfo{title}{Injection locking of a semiconductor double-quantum-dot micromaser}}. \newblock {\emph{\JournalTitle{Phys. Rev. A}}} \textbf{\bibinfo{volume}{92}}, \bibinfo{pages}{053802}, DOI: \url{10.1103/PhysRevA.92.053802} (\bibinfo{year}{2015}). \bibitem{LHS17} \bibinfo{author}{Liu, Y.-Y.}, \bibinfo{author}{Hartke, T.~R.}, \bibinfo{author}{Stehlik, J.} \& \bibinfo{author}{Petta, J.~R.} \newblock \bibinfo{journal}{\bibinfo{title}{Phase locking of a semiconductor double-quantum-dot single-atom maser}}. \newblock {\emph{\JournalTitle{Phys. Rev. A}}} \textbf{\bibinfo{volume}{96}}, \bibinfo{pages}{053816}, DOI: \url{10.1103/PhysRevA.96.053816} (\bibinfo{year}{2017}). 
\bibitem{LSE17} \bibinfo{author}{Liu, Y.-Y.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Threshold dynamics of a semiconductor single atom maser}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{119}}, \bibinfo{pages}{097702}, DOI: \url{10.1103/PhysRevLett.119.097702} (\bibinfo{year}{2017}). \bibitem{CLA14} \bibinfo{author}{Chen, F.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Realization of a single-{C}ooper-pair {J}osephson laser}}. \newblock {\emph{\JournalTitle{Phys. Rev. B}}} \textbf{\bibinfo{volume}{90}}, \bibinfo{pages}{020506}, DOI: \url{10.1103/PhysRevB.90.020506} (\bibinfo{year}{2014}). \bibitem{CBR17} \bibinfo{author}{Cassidy, M.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Demonstration of an ac {J}osephson junction laser}}. \newblock {\emph{\JournalTitle{Science}}} \textbf{\bibinfo{volume}{355}}, \bibinfo{pages}{939--942} (\bibinfo{year}{2017}). \bibitem{GBA19} \bibinfo{author}{Grimm, A.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Bright on-demand source of antibunched microwave photons based on inelastic {C}ooper pair tunneling}}. \newblock {\emph{\JournalTitle{Phys. Rev. X}}} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{021016}, DOI: \url{10.1103/PhysRevX.9.021016} (\bibinfo{year}{2019}). \bibitem{RPD19} \bibinfo{author}{Rolland, C.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Antibunched photons emitted by a dc-biased {J}osephson junction}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{122}}, \bibinfo{pages}{186804}, DOI: \url{10.1103/PhysRevLett.122.186804} (\bibinfo{year}{2019}). \bibitem{GJK14} \bibinfo{author}{Grezes, C.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Multimode storage and retrieval of microwave fields in a spin ensemble}}. \newblock {\emph{\JournalTitle{Phys. Rev. X}}} \textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{021049}, DOI: \url{10.1103/PhysRevX.4.021049} (\bibinfo{year}{2014}). 
\bibitem{GMS20} \bibinfo{author}{Arnold, G.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Converting microwave and telecom photons with a silicon photonic nanomechanical interface}}. \newblock {\emph{\JournalTitle{Nature Communications}}} \textbf{\bibinfo{volume}{11}}, \bibinfo{pages}{4460} (\bibinfo{year}{2020}). \bibitem{HGH06} \bibinfo{author}{Hassel, J.}, \bibinfo{author}{Grönberg, L.}, \bibinfo{author}{Helistö, P.} \& \bibinfo{author}{Seppä, H.} \newblock \bibinfo{journal}{\bibinfo{title}{Self-synchronization in distributed {J}osephson junction arrays studied using harmonic analysis and power balance}}. \newblock {\emph{\JournalTitle{Applied Physics Letters}}} \textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{072503}, DOI: \url{10.1063/1.2337536} (\bibinfo{year}{2006}). \bibitem{KAU81} \bibinfo{author}{Kautz, R.~L.} \newblock \bibinfo{journal}{\bibinfo{title}{The ac {J}osephson effect in hysteretic junctions: Range and stability of phase lock}}. \newblock {\emph{\JournalTitle{Journal of Applied Physics}}} \textbf{\bibinfo{volume}{52}}, \bibinfo{pages}{3528--3541}, DOI: \url{10.1063/1.329132} (\bibinfo{year}{1981}). \bibitem{LMV17} \bibinfo{author}{Gr\"{o}nberg, L.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Side-wall spacer passivated sub-$\mu$m {J}osephson junction fabrication process}}. \newblock {\emph{\JournalTitle{Superconductor Science and Technology}}} \textbf{\bibinfo{volume}{30}}, \bibinfo{pages}{125016}, DOI: \url{10.1088/1361-6668/aa9411} (\bibinfo{year}{2017}). \bibitem{SSY01} \bibinfo{author}{Salerno, M.}, \bibinfo{author}{Samuelsen, M.~R.} \& \bibinfo{author}{Yulin, A.~V.} \newblock \bibinfo{journal}{\bibinfo{title}{Spectral linewidths of {J}osephson oscillators}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{5397--5400}, DOI: \url{10.1103/PhysRevLett.86.5397} (\bibinfo{year}{2001}). 
\bibitem{BCS99} \bibinfo{author}{Barbara, P.}, \bibinfo{author}{Cawthorne, A.~B.}, \bibinfo{author}{Shitov, S.~V.} \& \bibinfo{author}{Lobb, C.~J.} \newblock \bibinfo{journal}{\bibinfo{title}{Stimulated emission and amplification in {J}osephson junction arrays}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{82}}, \bibinfo{pages}{1963--1966}, DOI: \url{10.1103/PhysRevLett.82.1963} (\bibinfo{year}{1999}). \bibitem{LST65} \bibinfo{author}{Langenberg, D.~N.}, \bibinfo{author}{Scalapino, D.~J.}, \bibinfo{author}{Taylor, B.~N.} \& \bibinfo{author}{Eck, R.~E.} \newblock \bibinfo{journal}{\bibinfo{title}{Investigation of microwave radiation emitted by {J}osephson junctions}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{15}}, \bibinfo{pages}{294--297}, DOI: \url{10.1103/PhysRevLett.15.294} (\bibinfo{year}{1965}). \bibitem{AIN07} \bibinfo{author}{Astafiev, O.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Single artificial-atom lasing}}. \newblock {\emph{\JournalTitle{Nature}}} \textbf{\bibinfo{volume}{449}}, \bibinfo{pages}{588--590} (\bibinfo{year}{2007}). \bibitem{MDF19} \bibinfo{author}{Markovi\ifmmode~\acute{c}\else \'{c}\fi{}, D.}, \bibinfo{author}{Pillet, J.}, \bibinfo{author}{Flurin, E.}, \bibinfo{author}{Roch, N.} \& \bibinfo{author}{Huard, B.} \newblock \bibinfo{journal}{\bibinfo{title}{Injection locking and parametric locking in a superconducting circuit}}. \newblock {\emph{\JournalTitle{Phys. Rev. Applied}}} \textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{024034}, DOI: \url{10.1103/PhysRevApplied.12.024034} (\bibinfo{year}{2019}). \bibitem{WWF92} \bibinfo{author}{{Walls}, W.~F.} \newblock \bibinfo{title}{Cross-correlation phase noise measurements}. \newblock In \emph{\bibinfo{booktitle}{Proceedings of the 1992 IEEE Frequency Control Symposium}}, \bibinfo{pages}{257--261} (\bibinfo{year}{1992}). 
\bibitem{RGP12} \bibinfo{author}{Rigetti, C.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Superconducting qubit in a waveguide cavity with a coherence time approaching 0.1 ms}}. \newblock {\emph{\JournalTitle{Phys. Rev. B}}} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{100506}, DOI: \url{10.1103/PhysRevB.86.100506} (\bibinfo{year}{2012}). \bibitem{RBM19} \bibinfo{author}{Rol, M.~A.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Fast, high-fidelity conditional-phase gate exploiting leakage interference in weakly anharmonic superconducting qubits}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{123}}, \bibinfo{pages}{120502}, DOI: \url{10.1103/PhysRevLett.123.120502} (\bibinfo{year}{2019}). \bibitem{XHC20} \bibinfo{author}{Xu, Y.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Experimental implementation of universal nonadiabatic geometric quantum gates in a superconducting circuit}}. \newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}} \textbf{\bibinfo{volume}{124}}, \bibinfo{pages}{230503}, DOI: \url{10.1103/PhysRevLett.124.230503} (\bibinfo{year}{2020}). \bibitem{HSD16} \bibinfo{author}{{Rashid}, H.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Frequency multiplier based on distributed superconducting tunnel junctions: Theory, design, and characterization}}. \newblock {\emph{\JournalTitle{IEEE Transactions on Terahertz Science and Technology}}} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{724--736}, DOI: \url{10.1109/TTHZ.2016.2583201} (\bibinfo{year}{2016}). \bibitem{ANG19} \bibinfo{author}{{Matheoud}, A.~V.}, \bibinfo{author}{{Sahin Solmaz}, N.} \& \bibinfo{author}{{Boero}, G.} \newblock \bibinfo{journal}{\bibinfo{title}{A low-power microwave {HEMT} {$LC$} oscillator operating down to 1.4 {K}}}. 
\newblock {\emph{\JournalTitle{IEEE Transactions on Microwave Theory and Techniques}}} \textbf{\bibinfo{volume}{67}}, \bibinfo{pages}{2782--2792}, DOI: \url{10.1109/TMTT.2019.2916552} (\bibinfo{year}{2019}). \bibitem{BPS20} \bibinfo{author}{{Patra}, B.} \emph{et~al.} \newblock \bibinfo{title}{A scalable {C}ryo-{CMOS} 2-to-20{GHz} digitally intensive controller for 4$\times$32 frequency multiplexed spin qubits$/$transmons in 22nm {F}in{FET} technology for quantum computers}. \newblock In \emph{\bibinfo{booktitle}{2020 IEEE International Solid-State Circuits Conference (ISSCC)}}, \bibinfo{pages}{304--306}, DOI: \url{10.1109/ISSCC19947.2020.9063109} (\bibinfo{year}{2020}). \bibitem{PDK21} \bibinfo{author}{Pauka, S.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{A cryogenic interface for controlling many qubits}}. \newblock {\emph{\JournalTitle{Nature Electronics}}} \textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{64--70}, DOI: \url{10.1038/s41928-020-00528-y} (\bibinfo{year}{2021}). \bibitem{KOL17} \bibinfo{author}{Kokkoniemi, R.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Flux-tunable phase shifter for microwaves}}. \newblock {\emph{\JournalTitle{Scientific Reports}}} \textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{1--6} (\bibinfo{year}{2017}). \bibitem{ZLK20} \bibinfo{author}{Zhang, J.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Broadband tunable phase shifter for microwaves}}. \newblock {\emph{\JournalTitle{AIP Advances}}} \textbf{\bibinfo{volume}{10}}, \bibinfo{pages}{065128}, DOI: \url{10.1063/5.0006499} (\bibinfo{year}{2020}). \bibitem{OSS07} \bibinfo{author}{Osborn, K.}, \bibinfo{author}{Strong, J.}, \bibinfo{author}{Sirois, A.~J.} \& \bibinfo{author}{Simmonds, R.~W.} \newblock \bibinfo{journal}{\bibinfo{title}{Frequency-tunable {J}osephson junction resonator for quantum computing}}. 
\newblock {\emph{\JournalTitle{IEEE Transactions on Applied Superconductivity}}} \textbf{\bibinfo{volume}{17}}, \bibinfo{pages}{166--168} (\bibinfo{year}{2007}). \bibitem{TPL17} \bibinfo{author}{Tan, K.~Y.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Quantum-circuit refrigerator}}. \newblock {\emph{\JournalTitle{Nature Communications}}} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{1--8} (\bibinfo{year}{2017}). \bibitem{SMS19} \bibinfo{author}{Silveri, M.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Broadband {L}amb shift in an engineered quantum system}}. \newblock {\emph{\JournalTitle{Nature Physics}}} \textbf{\bibinfo{volume}{15}}, \bibinfo{pages}{533--537} (\bibinfo{year}{2019}). \bibitem{LMV19} \bibinfo{author}{Li, K.}, \bibinfo{author}{McDermott, R.} \& \bibinfo{author}{Vavilov, M.~G.} \newblock \bibinfo{journal}{\bibinfo{title}{Hardware-efficient qubit control with single-flux-quantum pulse sequences}}. \newblock {\emph{\JournalTitle{Phys. Rev. Applied}}} \textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{014044}, DOI: \url{10.1103/PhysRevApplied.12.014044} (\bibinfo{year}{2019}). \bibitem{SPH06} \bibinfo{author}{Savin, A.~M.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{High-resolution superconducting single-flux quantum comparator for sub-{K}elvin temperatures}}. \newblock {\emph{\JournalTitle{Applied Physics Letters}}} \textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{133505}, DOI: \url{10.1063/1.2357858} (\bibinfo{year}{2006}). \bibitem{YHZ19} \bibinfo{author}{Yeh, J.-H.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Hot electron heatsinks for microwave attenuators below 100 m{K}}}. \newblock {\emph{\JournalTitle{Applied Physics Letters}}} \textbf{\bibinfo{volume}{114}}, \bibinfo{pages}{152602}, DOI: \url{10.1063/1.5097369} (\bibinfo{year}{2019}). 
\bibitem{SVM20} \bibinfo{author}{Simbierowicz, S.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{Characterizing cryogenic amplifiers with a matched temperature-variable noise source}}. \newblock {\emph{\JournalTitle{Review of Scientific Instruments}}} \textbf{\bibinfo{volume}{92}}, \bibinfo{pages}{034708}, DOI: \url{10.1063/5.0028951} (\bibinfo{year}{2021}). \bibitem{HJK18} \bibinfo{author}{Hollmann, A.} \emph{et~al.} \newblock \bibinfo{journal}{\bibinfo{title}{30 {GHz}-voltage controlled oscillator operating at 4 {K}}}. \newblock {\emph{\JournalTitle{Review of Scientific Instruments}}} \textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{114701}, DOI: \url{10.1063/1.5038258} (\bibinfo{year}{2018}). \end{thebibliography} \vspace*{10pt} \noindent \textbf{Acknowledgements} \noindent We have received funding from the Centre for Quantum Engineering at Aalto under Grant number JVSG; European Research Council under Grant No.~681311 (QUESS) and Marie Skłodowska-Curie Grant No.~795159; Academy of Finland through its Centres of Excellence Programme (project numbers 312300, 312059, and 312295) and grants (numbers 314447, 314448, 314449, 305237, 316551, 308161, 335460, and 314302). We thank Hannu Sipola, Roope Kokkoniemi, Jean-Philippe Girard, and Jukka Pekola for useful discussions.\\ \begin{figure} \caption{\textbf{Working principle and structure of the device.}\label{fig:1}} \end{figure} \begin{figure} \caption{\textbf{Characteristics of the spiral-resonator source.}\label{fig:2}} \end{figure} \begin{figure} \caption{\textbf{Emission spectrum with injection locking.}\label{fig:3}} \end{figure} \begin{figure} \caption{\textbf{Noise properties of the on-chip source.}\label{fig:4}} \end{figure} \begin{table*} \centering \caption{\textbf{Comparison of several typical low temperature oscillators.} Key parameters of cryogenic sources gathered from the indicated references. 
Here, 'N/A' refers to a case where the corresponding data was not found.} \label{table:1} \begin{tabular}{|p{50mm}|p{25mm}|p{15mm}|p{18mm}|p{40mm}|} \hline Device & Operation Temperature & Output Power & Linewidth & Phase Noise \\ \hline Nb junction device (this work) & 10 mK & 28 pW & 4 kHz & -116 dBc/Hz at 1 MHz \\ \hline Al junction device\cite{CBR17} & 10 mK & 0.255 pW & 22 kHz & N/A \\ \hline Double-quantum-dot\cite{LHS17} & 10 mK & 0.2 pW & 5.6 kHz & -99 dBc/Hz at 1.3 MHz \\ \hline SiGe HBT oscillator\cite{HJK18} & 4 K & 0.2 {\textmu}W & 200 kHz & -112 dBc/Hz at 1 MHz \\ \hline Cryogenic HEMT oscillator\cite{ANG19} & 1.4 K & 0.2 {\textmu}W & N/A & -112 dBc/Hz at 1 MHz \\ \hline \end{tabular} \end{table*} \vspace*{10pt} \end{document}
\begin{document} \title*{Kernel estimates for nonautonomous Kolmogorov equations with potential term} \titlerunning{Kernel estimates for nonautonomous Kolmogorov equations with potential term} \author{Markus Kunze, Luca Lorenzi\thanks{Thank you dad, for having conveyed your love for mathematics to me!}, Abdelaziz Rhandi\\ \hspace*{2cm}{\sc To the memory of Prof. Alfredo Lorenzi}} \authorrunning{M. Kunze, L. Lorenzi, A. Rhandi} \institute{Markus Kunze \at Graduiertenkolleg 1100, University of Ulm, 89069 Ulm, Germany, \email{[email protected]} \and Luca Lorenzi \at Dipartimento di Matematica e Informatica, Universit\`a degli Studi di Parma, Parco Area delle Scienze 53/A, 43124 Parma, Italy, \email{[email protected]} \and Abdelaziz Rhandi \at Dipartimento di Ingegneria dell'Informazione, Ingegneria Elettrica e Matematica Applicata, Universit\`a degli Studi di Salerno, Via Ponte Don Melillo 1, 84084 Fisciano (Sa), Italy, \email{[email protected]}} \maketitle \abstract{Using time dependent Lyapunov functions, we prove pointwise upper bounds for the heat kernels of some nonautonomous Kolmogorov operators with possibly unbounded drift and diffusion coefficients and a possibly unbounded potential term.} \section{Introduction} We consider nonautonomous evolution equations \begin{equation}\label{eq.nee} \left\{ \begin{array}{rlll} \partial_t u(t,x) & = & \mathscr{A} (t)u(t,x), & (t,x) \in (s, 1]\times \mathbb{R}^d,\\[1mm] u(s,x) & = & f(x), & x \in \mathbb{R}^d\, , \end{array}\right. \end{equation} where the time dependent operators $\mathscr{A} (t)$ are defined on smooth functions $\varphi$ by \[ \mathscr{A} (t)\varphi (x) = \sum_{i,j=1}^dq_{ij}(t,x)D_{ij}\varphi (x) + \sum_{i=1}^dF_i(t,x)D_i\varphi (x) - V(t,x)\varphi (x). \] We write $\mathscr{A}_0(t)$ for the operator $\mathscr{A}(t) +V(t)$. Throughout this article, we will always assume that the following hypotheses on the coefficients are satisfied. 
\begin{hyp}\label{hyp1} The coefficients $q_{ij}, F_j$ and $V$ are defined on $[0,1]\times \mathbb{R}^d$ for $i,j=1, \ldots, d$. Moreover, \begin{enumerate} \item there exists a $\varsigma \in (0,1)$ such that $q_{ij}, F_j, V \in C^{\frac{\varsigma}{2}, \varsigma}_\mathrm{loc} ([0,1]\times \mathbb{R}^d)$ for all $i,j =1, \ldots, d$. Further, $q_{ij} \in C^{0,1}((0,1)\times \mathbb{R}^d)$; \item the matrix $Q = (q_{ij})$ is symmetric and uniformly elliptic in the sense that there exists a number $\eta > 0$ such that \begin{eqnarray*} \sum_{i,j=1}^d q_{ij}(t,x)\xi_i\xi_j \geq \eta |\xi|^2 \quad \mbox{for all}\,\, \xi \in \mathbb{R}^d,\,\, (t,x) \in [0,1]\times \mathbb{R}^d; \end{eqnarray*} \item $V \geq 0$; \item there exist a nonnegative function $Z\in C^2(\mathbb{R}^d)$ and a constant $M \geq 0$ such that $\lim_{|x|\to\infty}Z(x) = \infty$ and we have $\mathscr{A}(t)Z(x) \leq M$, as well as $\eta \Delta_x Z(x) + F(t,x)\cdot \nabla_xZ(x)-V(t,x)Z(x) \leq M$, for all $(t,x) \in [0,1]\times \mathbb{R}^d$; \item there exists a nonnegative function $Z_0\in C^2(\mathbb{R}^d)$ such that $\lim_{|x|\to\infty}Z_0(x) = \infty$ and we have $\mathscr{A}_0(t)Z_0(x) \leq M$, as well as $\eta \Delta_x Z_0(x) + F(t,x)\cdot \nabla_x Z_0(x) \leq M$, for all $(t,x) \in [0,1]\times \mathbb{R}^d$. \end{enumerate} \end{hyp} We summarize Hypothesis \ref{hyp1}(4)-(5) by saying that $Z$ (resp. $Z_0$) is {\it a Lyapunov function} for the operators $\mathscr{A}$ and $\eta\Delta +F\cdot \nabla_x -V$ (resp. for the operators $\mathscr{A}_0$ and $\eta\Delta +F\cdot\nabla_x$). Clearly, (5) implies (4). However, for applications it will be important to differentiate between $Z$ and $Z_0$. The previous assumptions guarantee that, for any $f\in C_b(\mathbb{R}^d)$, the Cauchy problem \eqref{eq.nee} admits a unique solution $u \in C_b([s,1]\times \mathbb{R}^d) \cap C^{1,2}((s,1]\times \mathbb{R}^d)$.
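Let us spell out the remark made above that Hypothesis \ref{hyp1}(5) implies Hypothesis \ref{hyp1}(4): taking $Z = Z_0$ and using that $V\geq 0$ and $Z_0\geq 0$, we obtain
\[
\mathscr{A}(t)Z_0(x) = \mathscr{A}_0(t)Z_0(x) - V(t,x)Z_0(x) \leq \mathscr{A}_0(t)Z_0(x) \leq M
\]
and, likewise,
\[
\eta \Delta_x Z_0(x) + F(t,x)\cdot \nabla_x Z_0(x) - V(t,x)Z_0(x) \leq \eta \Delta_x Z_0(x) + F(t,x)\cdot \nabla_x Z_0(x) \leq M,
\]
for all $(t,x) \in [0,1]\times \mathbb{R}^d$.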
Moreover, there exists an evolution family $(G(t,s))_{(t,s) \in D} \subset \mathscr{L} (C_b(\mathbb{R}^d))$, where $D = \{(t,s) \in [0,1]^2 : t \geq s\}$, which governs Equation \eqref{eq.nee}, i.e., $u(t,x) = (G(t,s)f)(x)$. Here and throughout the paper, the index ``$b$'' stands for boundedness. By \cite[Proposition 3.1]{al11}, the operators $G(t,s)$ are given by \emph{Green kernels} $g(t,s, \cdot, \cdot)$, i.e.,\ we have \begin{equation} G(t,s)f(x) = \int_{\mathbb{R}^d} f(y)g(t,s, x,y)\, dy. \label{kernel-G} \end{equation} Our aim is to prove estimates for the Green kernel $g$. Results similar to the ones presented here have been obtained in \cite{mpr10, ms07, MST, s08} for autonomous equations without potential term. The case of autonomous equations with potential term was treated in \cite{alr10,lmpr11,lr12}. Recently, generalizing techniques from \cite{ffmp09} to the parabolic situation, the authors of the present article extended these results to nonautonomous equations and, even more importantly, also allowed unbounded diffusion coefficients, see \cite{klr13}. In this article, we extend the results of \cite{klr13} to also allow potential terms in the equation. Applying our main abstract result (Theorem \ref{t.main}) in a concrete situation, we obtain the following result. In its formulation, for $s \geq 0$, we use the notation $|x|_{*}^s$ to denote a smooth version of the $s$-th power of the absolute value function, i.e., $|x|_*^s = |x|^s$ whenever $|x|\geq 1$ and the map $x \mapsto |x|_*^s$ is twice continuously differentiable in $\mathbb{R}^d$. This is done to meet the differentiability requirements in Hypothesis \ref{hyp1}(1), (3) and (5) and also later differentiability requirements. If $s=0$ or $s \geq 2$, we can choose $|x|_*^s = |x|^s$ for any $x\in\mathbb{R}^d$, as this function is already twice continuously differentiable. \begin{theorem}\label{t.example} Let $k>d+2$, $m, r\geq 0$ and $p> 1$ be given with $p> m-1$ and $r > m-2$.
We consider the (time independent) operator $\mathscr{A} (t) \equiv \mathscr{A}$, defined on smooth functions $\varphi$ by \[ \mathscr{A} \varphi (x) = (1+|x|_{*}^m)\Delta\varphi (x) - |x|^{p-1}x\cdot \nabla \varphi (x) - |x|^r\varphi(x). \] Then we have the following estimates for the associated Green kernel $g$: \begin{enumerate} \item if $p\geq \half (m+r)$, then for $\alpha > \frac{p+1-m}{p-1}$ and $\varepsilon < \frac{1}{p+1-m}$ we have \[ g(t,s,x,y) \leq C(t-s)^{1-\frac{\alpha(m\vee p)k}{p+1-m}} e^{-\varepsilon (t-s)^\alpha |y|_{*}^{p+1-m}}; \] \item if $p < \half (m+r)$, then for $\varepsilon < \frac{2}{r+2-m}$ and $\alpha > \frac{r-m+2}{r+m-2}$, if $r+m>2$, and $\alpha>\frac{r+2-m}{2(p-1)}$, if $r+m\le 2$, we have \[ g(t,s,x,y) \leq C(t-s)^{1-\frac{\alpha(2m\vee 2p\vee r)k}{r+2-m}} e^{-\varepsilon (t-s)^\alpha |y|_{*}^{\frac{1}{2}(r+2-m)}}, \] for all $x,y \in \mathbb{R}^d$ and $s \in [0,t)$. \end{enumerate} Here, $C$ is a positive constant. \end{theorem} These bounds should be compared to the ones in \cite[Example 3.3]{alr10}, where the case $m=0$ was considered. We would like to note that in Theorem \ref{t.example} we have restricted ourselves to the autonomous situation so that one can compare the results with those in \cite{alr10}. Genuinely nonautonomous examples can easily be constructed along the lines of \cite[Section 5]{klr13}. \section{Time dependent Lyapunov functions} In this section we introduce time dependent Lyapunov functions and prove that they are integrable with respect to the measures $g_{t,s}(x, dy):=g(t,s,x,y)dy$, where $g(t,s,\cdot,\cdot)$ is the Green kernel associated to the evolution operator $G(t,s)$, see \eqref{kernel-G}, and $g(t,\cdot,x,\cdot)\in L^1((0,1)\times\mathbb{R}^d)$. To do so, it is important to have information about the derivative of $G(t,s)f$ with respect to $s$. We have the following result, taken from \cite[Lemma 3.4]{al11}. Here and in the rest of the paper, the index ``$c$'' stands for compactly supported. 
\begin{lemma}\label{l.sderivative} \begin{enumerate} \item For $f \in C_c^2(\mathbb{R}^d)$, $s_0\leq s_1 \leq t$ and $x \in \mathbb{R}^d$ we have \begin{equation}\label{eq.sderivative1} G(t,s_1)f(x) - G(t,s_0)f(x) = -\int_{s_0}^{s_1}G(t,\sigma)\mathscr{A} (\sigma )f(x)\, d\sigma. \end{equation} \item For $f \in C^2(\mathbb{R}^d)$, constant and positive outside a compact set, the function $G(t,\cdot )\mathscr{A} (\cdot)f(x)$ is integrable in $[0,t]$ and for $s_0\leq s_1 \leq t$ we have \begin{eqnarray*} G(t,s_1)f(x) - G(t,s_0)f(x) \geq - \int_{s_0}^{s_1}G(t,\sigma)\mathscr{A}(\sigma)f(x)\, d\sigma. \end{eqnarray*} \end{enumerate} \end{lemma} We note that in the case where $V\equiv 0$ part (2) in Lemma \ref{l.sderivative} follows trivially from part (1), since in that situation $G(t,s)\mathbbm{1} \equiv \mathbbm{1}$ and $\mathscr{A}(t)\mathbbm{1} = 0$ so that equation \eqref{eq.sderivative1} holds for $f=\mathbbm{1}$, cf.\ \cite[Lemma 3.2]{kll10}. Let us note some consequences of Lemma \ref{l.sderivative} for later use. First of all, part (1) of the lemma implies that $\partial_sG(t,s)f = -G(t,s)\mathscr{A} (s)f$ for $f \in C^2_c(\mathbb{R}^d)$. Arguing as in \cite[Lemma 2.2]{klr13}, we see that for $0\leq a \leq b \leq t$, $x \in \mathbb{R}^d$ and $\varphi \in C^{1,2}_c([a,b]\times\mathbb{R}^d)$, the function $s\mapsto G(t,s)\varphi (s)(x)$ is differentiable in $[a,b]$ and \[ \partial_sG(t,s)\varphi (s)(x) = G(t,s)\partial_s\varphi (s)(x) - G(t,s)\mathscr{A} (s)\varphi (s)(x). \] Consequently, for such a function $\varphi$ we have that \begin{equation}\label{eq.weak} \int_a^b G(t,s)\big[\partial_s \varphi (s) -\mathscr{A} (s)\varphi (s)\big](x)\, ds = G(t,b)\varphi (b)(x) - G(t,a)\varphi (a)(x), \end{equation} for every $x \in \mathbb{R}^d$. As a consequence of formula \eqref{eq.weak} and \cite[Corollary 3.11]{bkr06} we get the following result. 
\begin{lemma} \label{lem-reg-g} For any $t\in (0,1]$ and any $x\in\mathbb{R}^d$ the function $g(t,\cdot,x,\cdot)$ is continuous (actually, locally H\"older continuous) in $(0,t)\times\mathbb{R}^d$. \end{lemma} We now introduce time dependent Lyapunov functions. \begin{definition} \label{d.lyap} Let $t \in (0,1]$. A \emph{time dependent Lyapunov function $($on $[0,t])$} is a function $0 \leq W \in C([0,t]\times\mathbb{R}^d)\cap C^{1,2}((0,t)\times\mathbb{R}^d)$ such that \begin{enumerate} \item $W(s,x) \leq Z(x)$ for all $(s,x) \in [0,t]\times \mathbb{R}^d$; \item $\lim_{|x|\to \infty}W(s,x) = \infty$, uniformly for $s$ in compact subsets of $[0,t)$; \item there exists a function $0\leq h \in L^1((0,t))$ such that \begin{equation} \partial_s W(s,x) - \mathscr{A}(s)W(s) \geq -h(s)W(s) \label{star} \end{equation} and \begin{equation} \partial_sW(s) - (\eta \Delta W(s)+F(s)\cdot\nabla_x W(s)-V(s)W(s)) \geq - h(s)W(s)\,, \label{star-star} \end{equation} on $\mathbb{R}^d$, for every $s \in (0,t)$. \end{enumerate} Sometimes, we will say that $W$ is a time dependent Lyapunov function \emph{with respect to $h$} to emphasize the dependence on $h$. \end{definition} \begin{proposition}\label{p.lyapunov} Let $W$ be a time dependent Lyapunov function on $[0,t]$ with respect to $h$. Then for $0\leq s \leq t$ and $x \in \mathbb{R}^d$ the function $W(s)$ is integrable with respect to the measure $g_{t,s}(x,dy)$. Moreover, setting \[ \zeta_W(s,x) := \int_{\mathbb{R}^d} W(s,y)g_{t,s}(x, dy) \] we have \begin{equation}\label{eq.zetaest} \zeta_W(s,x) \leq e^{\int_s^th(\tau)\, d\tau} W(t,x). \end{equation} \end{proposition} \begin{proof} Let us first note that by \cite[Proposition 4.7]{al11} the function $Z$ is integrable with respect to $g_{t,s}(x,dy)$. Moreover, \begin{equation}\label{eq.zest} G(t,s)Z(x) := \int_{\mathbb{R}^d} Z(y)g_{t,s}(x, dy)\leq Z(x) + M(t-s). \end{equation} It thus follows immediately from domination that $W(s)$ is integrable with respect to $g_{t,s}(x,dy)$. 
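The domination argument can be written out explicitly: by part (1) of Definition \ref{d.lyap} we have $0\le W(s,y)\le Z(y)$, so that \eqref{eq.zest} yields
\[
\int_{\mathbb{R}^d} W(s,y)\, g_{t,s}(x,dy) \;\le\; \int_{\mathbb{R}^d} Z(y)\, g_{t,s}(x,dy) \;\le\; Z(x) + M(t-s) \;<\; \infty .
\]
In particular, $\zeta_W(s,x)\le Z(x)+M(t-s)$ for all $0\le s\le t$ and $x\in\mathbb{R}^d$.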
We now fix a sequence of functions $\psi_n \in C^\infty([0,\infty))$ such that \begin{enumerate}[(i)] \item $\psi_n (\tau ) = \tau$ for $\tau\in [0,n]$; \item $\psi_n(\tau) \equiv \mathrm{const.}$ for $\tau \geq n+1$; \item $0\leq \psi_n'\leq 1$ and $\psi_n''\leq 0$. \end{enumerate} Let us also fix $0\le s<r<t$. Note that, for any $n\in\mathbb{N}$, the function $W_n := \psi_n\circ W$ is the sum of a function in $C_c^{1,2}([0,r]\times\mathbb{R}^d)$ and a positive constant. Indeed, $W(s,x) \to \infty$ as $|x| \to \infty$, uniformly for $s\in [0,r]$. For a positive constant function, we have by Lemma \ref{l.sderivative}(2) that \[ G(t,r)\mathbbm{1} - G(t,s)\mathbbm{1} \geq - \int_s^rG(t,\sigma ) \mathscr{A}(\sigma)\mathbbm{1}\, d\sigma = \int_s^r G(t,\sigma)\big[\partial_s\mathbbm{1} -\mathscr{A}(\sigma)\mathbbm{1}\big]\, d\sigma. \] Combining this with Equation \eqref{eq.weak}, it follows that \begin{align} &\quad G(t,r)W_n(r)(x) - G(t,s)W_n(s)(x)\notag\\ \geq & \,\int_s^r G(t,\sigma)\big[\partial_{\sigma} W_n(\sigma) -\mathscr{A}(\sigma )W_n (\sigma)\big](x)\, d\sigma\notag\\ = & \,\int_s^r G(t,\sigma ) \big[\psi_n'(W(\sigma))\big(\partial_\sigma W(\sigma) - \mathscr{A}(\sigma )W(\sigma)\big)\big](x)\, d\sigma\notag\\ & \quad - \int_s^r G(t,\sigma)\big[ V(\sigma)W(\sigma)\psi_n'(W(\sigma))-V(\sigma)\psi_n(W(\sigma))\big](x)\, d\sigma\notag\\ &\quad - \int_s^r G(t,\sigma) \big[ \psi_n''(W(\sigma))\big(Q(\sigma )\nabla_x W(\sigma)\cdot\nabla_x W(\sigma)\big)\big](x)\, d\sigma\notag\\ \geq & \,-\int_s^r G(t,\sigma)\big[\psi_n'(W(\sigma))h(\sigma) W(\sigma)\big](x)\, d\sigma\label{eq.wnest}, \end{align} for any $x\in\mathbb{R}^d$, since $G(t,s)$ preserves positivity and the condition $\psi_n''\le 0$ implies that $y\psi_n'(y)-\psi_n(y)\le 0$ for any $y\ge 0$. We next want to let $r \uparrow t$. We fix an increasing sequence $(r_k)\subset (s,t)$, converging to $t$ as $k\to\infty$.
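The elementary inequality $y\psi_n'(y)-\psi_n(y)\le 0$, used in the last step of \eqref{eq.wnest}, follows from the concavity of $\psi_n$: since $\psi_n(0)=0$ and $\psi_n'$ is nonincreasing (recall that $\psi_n''\le 0$), we have
\[
\psi_n(y) = \int_0^y \psi_n'(\tau)\, d\tau \;\ge\; \int_0^y \psi_n'(y)\, d\tau \;=\; y\,\psi_n'(y), \qquad y\ge 0 .
\]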
By monotone convergence, we clearly have \[ \int_s^{r_k}G(t,\sigma)\big[\psi_n'(W(\sigma))h(\sigma)W(\sigma)\big](x)\, d\sigma \to \int_s^{t}G(t,\sigma)\big[\psi_n'(W(\sigma))h(\sigma)W(\sigma)\big](x)\, d\sigma \] as $k\to\infty$. We now claim that $G(t,r_k)W_n(r_k)(x) \to G(t,t)W_n(t)(x) = W_n(t,x)$ as $k\to \infty$. To see this, we note that for $f \in C_b(\mathbb{R}^d)$, the function $(s,x) \mapsto G(t,s)f(x)$ is continuous in $[0,t]\times \mathbb{R}^d$ as a consequence of \cite[Theorem 4.11]{al11}. This immediately implies that $G(t,r_k)W_n(t)(x) \to G(t,t)W_n(t)(x) = W_n(t,x)$ as $k\to \infty$. Moreover, from \eqref{eq.zest} it follows that \begin{equation} g_{t,s}(x,\mathbb{R}^d\setminus B(0,R))\le \frac{1}{\inf_{\mathbb{R}^d\setminus B(0,R)}Z}\int_{\mathbb{R}^d}Z(y)g_{t,s}(x,dy) \le \frac{Z(x)+M}{\inf_{\mathbb{R}^d\setminus B(0,R)}Z}, \label{tight} \end{equation} where $B(0,R)\subset\mathbb{R}^d$ denotes the open ball centered at $0$ with radius $R$, and the right-hand side of \eqref{tight} converges to zero as $R\to\infty$. Hence, the set of measures $\{g_{t,s}(x,dy) : s \in [0,t]\}$ is tight. Taking into account that $W_n(r_k)$ is uniformly bounded and converges locally uniformly to $W_n(t)$ as $k\to \infty$, it is easy to see that \[ G(t,r_k)W_n(r_k)(x) - G(t, r_k)W_n(t)(x) = \int_{\mathbb{R}^d} (W_n(r_k,y) - W_n(t,y)) \, g_{t,r_k}(x,dy) \to 0 \] as $k\to \infty$. Combining these two facts, it follows that $G(t,r_k)W_n(r_k)(x) \to W_n(t,x)$ as claimed. Thus, letting $r\uparrow t$ in \eqref{eq.wnest}, we find that \begin{equation}\label{eq.wnestt} W_n(t,x) - G(t,s)W_n(s)(x) \geq - \int_s^t G(t,\sigma)\big[\psi_n'(W(\sigma)) h(\sigma) W(\sigma)\big](x)\, d\sigma. \end{equation} Note that $\psi_n'(W(\sigma)) h(\sigma) W(\sigma)$ and $W_n(s)$ converge increasingly to $W(\sigma)h(\sigma)$ and $W(s)$, respectively, as $n\to \infty$, for any $\sigma\in [s,t]$. Moreover, \eqref{eq.zest} implies that $G(t,\sigma)W(\sigma)\in (0,\infty)$.
Since each operator $G(t,\sigma)$ preserves positivity, we can use monotone convergence to let $n\to \infty$ in \eqref{eq.wnestt}, obtaining \[ W(t,x) - G(t,s)W(s)(x) \geq - \int_s^th(\sigma)G(t,\sigma)W(\sigma)(x)\, d\sigma. \] Equivalently, \begin{equation}\label{eq2-7} \zeta_W(t,x) - \zeta_W(s,x) \geq -\int_s^th(\sigma)\zeta_W(\sigma,x)\, d\sigma,\qquad\;\,x\in\mathbb{R}^d. \end{equation} This inequality yields \eqref{eq.zetaest}. Indeed, the function $\Phi$, defined by \begin{eqnarray*} \Phi (\tau) := \bigg( \zeta_W(t,x) + \int_\tau^t h(\sigma) \zeta_W (\sigma,x)\, d\sigma \bigg) e^{\int_s^\tau h(\sigma)\, d\sigma}\,, \end{eqnarray*} is continuous on $[s,t]$ and increasing since its weak derivative is nonnegative by \eqref{eq2-7}. Hence $\Phi(s)\le\Phi(t)$, from which \eqref{eq.zetaest} follows at once if we take again \eqref{eq2-7} into account. \qed \end{proof} Let us illustrate this in the situation of Theorem \ref{t.example}. \begin{proposition}\label{p.example} Consider the (time independent) operator $\mathscr{A}(t) \equiv \mathscr{A}$, defined by \[ \mathscr{A}\varphi (x) = (1+|x|_{*}^m)\Delta\varphi (x) - |x|^{p-1}x\cdot \nabla \varphi (x) - |x|^{r}\varphi(x), \] where $m, r \geq 0$ and $p> 1$. Moreover, assume that one of the following conditions is satisfied: \begin{enumerate}[(i)] \item $p>m-1$, $\beta := p+1-m$ and $\delta < 1/\beta$; \item $r > m-2$, $\beta := \half (r+2-m)$ and $\delta < 1/\beta$. \end{enumerate} Then the following properties hold true: \begin{enumerate} \item the function $Z(x) := \exp (\delta |x|_{*}^\beta)$ satisfies Part (4) of Hypothesis \ref{hyp1}; \item for $0<\varepsilon < \delta$ and $\alpha>\alpha_0$, the function $W(s,x) := \exp (\varepsilon (t-s)^\alpha |x|_{*}^\beta)$ is a time dependent Lyapunov function in the sense of Definition \ref{d.lyap}. Here, $\alpha_0 = \frac{\beta}{p-1}$ if we assume condition (ii) and additionally $m+r \leq 2$. In all other cases, $\alpha_0=\frac{\beta}{m+\beta-2}$.
\end{enumerate} \end{proposition} \begin{proof} In the computations below, we assume that $|x| \geq 1$ so that $|x|_*^s = |x|^s$ for $s\ge 0$. At the cost of slightly larger constants, these estimates can be extended to all of $\mathbb{R}^d$. We omit the details, which can be obtained as in the proof of \cite[Lemma 5.2]{klr13}. (1) By direct computations, we see that \[ \mathscr{A} Z(x) = \delta \beta \Big[ (1+|x|^m)|x|^{\beta-2} \big(d+ \beta-2 + \delta \beta |x|^\beta\big) - |x|^{p-1+\beta}-|x|^r\Big]Z(x). \] The highest power of $|x|$ appearing in the first term is $|x|^{m+2\beta - 2}$, which in case (i) is exactly $|x|^{p-1+\beta}$ and in case (ii) exactly $|x|^r$. In both cases, the highest power in the square brackets has a negative coefficient in front, namely $\delta\beta -1$. Thus $\lim_{|x|\to \infty}\mathscr{A} Z(x) = - \infty$. It now follows from the continuity of $\mathscr{A} Z$ that $\mathscr{A} Z \leq M$ for a suitable constant $M$. Since $\eta \Delta Z+F\cdot\nabla Z-VZ\le\mathscr{A} Z$, we conclude that the function $\eta\Delta Z+F\cdot\nabla Z-VZ$ is bounded from above as well. (2) We note that since $\varepsilon < \delta$, we have $W(s,x) \leq (Z(x))^\frac{\varepsilon}{\delta} \leq Z(x)$ for all $s \in [0,t]$ and $x \in \mathbb{R}^d$ so that (1) in Definition \ref{d.lyap} is satisfied. Condition (2) is immediate from the definition of $W$ so that it only remains to verify condition (3).
A computation shows that \begin{align} &\quad \partial_sW(s,x) - \mathscr{A} W(s,x)\notag\\ = &\, -\varepsilon \alpha (t-s)^{\alpha-1}|x|^\beta W(s,x) - \varepsilon \beta (t-s)^\alpha W(s,x)\times\label{final-est-1}\\ & \quad \times \Big[ (1+|x|^m)|x|^{\beta-2}\big( d + \beta-2 + \varepsilon \beta (t-s)^\alpha |x|^\beta\big) - |x|^{p-1+\beta}\Big ] + |x|^rW(s,x)\notag\\ \geq & \,-\varepsilon \alpha (t-s)^{\alpha-1}|x|^\beta W(s,x) - \varepsilon \beta (t-s)^\alpha W(s,x)\times\notag\\ & \; \times \Big[ (1+|x|^m)|x|^{\beta-2}\big( d +\beta-2 + \delta \beta |x|^\beta\big) - |x|^{p-1+\beta}\Big ] + |x|^rW(s,x)\notag\\ & \; + \varepsilon\beta^2(\delta-\varepsilon) (t-s)^\alpha (1+|x|^m)|x|^{2\beta-2}W(s,x)\notag\\ \geq & \,\varepsilon (t-s)^{\alpha-1}|x|^\beta\big( (\delta -\varepsilon) \beta^2(t-s) |x|^{m+\beta-2} - \alpha \big) W(s,x)\notag\\ &\; -\varepsilon \beta (t-s)^\alpha W(s,x)\Big[ (1+|x|^m)|x|^{\beta-2}\big(d+\beta-2+\delta\beta |x|^{\beta}\big) -|x|^{p-1+\beta}-|x|^r\Big ], \label{final-est} \end{align} where in the last inequality we took into account that $\varepsilon\beta(t-s)^{\alpha}<1$. To further estimate $\partial_sW(s) - \mathscr{A} W(s)$, we first assume that $\beta+m-2\ge 0$. This condition is satisfied under condition (i) and also under condition (ii) provided that $m+r> 2$. We set $C:= \big[(\delta-\varepsilon)\beta^2/\alpha\big]^{-\frac{1}{\beta+m-2}}$ and distinguish two cases. {\it Case 1:} $|x| \geq C(t-s)^{-\frac{1}{\beta+m-2}}$. In this case $(\delta -\varepsilon) \beta^2(t-s)|x|^{\beta+m-2} \geq \alpha$ so that the first summand in \eqref{final-est} is nonnegative. Replacing $C$ with a larger constant if necessary, we can -- as in the proof of part (1) -- ensure that also the second summand is positive so that overall $\partial_sW(s) -\mathscr{A} W(s) \geq 0$ in this case. {\it Case 2:} $1\leq |x| < C(t-s)^{-\frac{1}{\beta+m-2}}$.
In this case, we start again from Estimate \eqref{final-est-1}. We drop the terms involving $-|x|^{p-1+\beta}$ and $|x|^r$ and, using that $|x|\geq 1$, estimate further as follows: \begin{align*} &\quad W(s,x)^{-1}(\partial_sW(s,x) -\mathscr{A} W(s,x))\\ \geq& \, -\varepsilon\alpha (t-s)^{\alpha-1}|x|^\beta - 2\varepsilon\beta(t-s)^\alpha |x|^{m+\beta-2}(d+\beta-2 + \varepsilon\beta |x|^\beta)\\ \geq &\, - \varepsilon\alpha (t-s)^{\alpha-1}C^\beta (t-s)^{-\frac{\beta}{m+\beta-2}} -2\varepsilon\beta (t-s)^{\alpha}C^{m+\beta-2}(t-s)^{-1} \times\\ &\quad\quad\times \big(d+\beta-2 + \varepsilon\beta C^\beta (t-s)^{-\frac{\beta}{m+\beta-2}}\big)\\ \geq&\, -\tilde{C}(t-s)^{\alpha -1 - \frac{\beta}{m+\beta-2}}=: -h(s). \end{align*} Note that $h\in L^1((0,t))$ since $\alpha-1 -\frac{\beta}{m+\beta-2} > -1$ by assumption. Suppose now that $m+\beta-2\le 0$, so that $|x|^{m+\beta-2} \leq 1$ for $|x|\geq 1$. Taking again into account that $\varepsilon\beta (t-s)^\alpha <1$ and dropping the term involving $|x|^r$, we derive from \eqref{final-est-1} that \begin{align*} &\quad W(s,x)^{-1}(\partial_sW(s,x) - \mathscr{A} W(s,x))\notag\\ \geq &\, -\varepsilon (t-s)^{\alpha-1}|x|^\beta\big (\alpha + 2\beta- \beta (t-s)|x|^{p-1}\big ) -2(d+\beta-2), \end{align*} for any $|x|\ge 1$. We can now argue as above taking $C=\big [(\alpha+2\beta)/\beta\big ]^{\frac{1}{p-1}}$ and distinguishing the cases $|x|\ge C(t-s)^{-\frac{1}{p-1}}$ and $1\le |x|< C(t-s)^{-\frac{1}{p-1}}$. We conclude that \[ W(s,x)^{-1}(\partial_sW(s,x) - \mathscr{A} W(s,x))\ge -\varepsilon C^{\beta}(\alpha+2\beta)(t-s)^{\alpha-1-\frac{\beta}{p-1}} =:-h(s), \] for any $s\in (0,t)$, $|x|\ge 1$, and $h\in L^1((0,t))$ due to the condition on $\alpha$. We have thus proved \eqref{star} in Definition \ref{d.lyap}. The analogous estimate \eqref{star-star} for $\eta \Delta_{x} + F\cdot \nabla_{x} - V$ follows from observing that $\eta \Delta_xW + F\cdot \nabla_x W - VW \leq \mathscr{A} W$.
\qed \end{proof} \section{Kernel bounds in the case of bounded diffusion coefficients} \label{sect-3} Throughout this section, we set $Q(a,b):=(a,b)\times\mathbb{R}^d$ and $\overline{Q}(a,b):=[a,b]\times\mathbb{R}^d$ for any $0\le a<b\le 1$. Moreover, we assume that the coefficients $q_{ij}$ and their spatial derivatives $D_kq_{ij}$ are bounded on $Q(0,b)$ for $i,j,k=1, \ldots, d$ and every $b<1$. We will remove this additional boundedness assumption in the next section. Fix now $t \in [0,1]$. For $0\leq a < b \leq t$, $x \in \mathbb{R}^d$ and $k\geq 1$, we define the quantities $\Gamma_j(k,x,a,b)$ for $j=1,2$ by \[ \Gamma_1(k,x,a,b) := \bigg(\int_{Q(a,b)} |F(s,y)|^k g(t,s,x,y)\, ds\, dy \bigg)^\frac{1}{k}, \] where $g$ is the Green kernel associated with $\mathscr{A}$, and \[ \Gamma_2(k,x,a,b) := \bigg(\int_{Q(a,b)} |V(s,y)|^k g(t,s,x,y)\, ds\, dy \bigg)^\frac{1}{k}. \] We also make an additional assumption about the parabolic equation governed by the operators $\mathscr{A}_0$ without potential term. Hypothesis \ref{hyp1}(5) guarantees that the Cauchy problem \eqref{eq.nee} with $\mathscr{A}$ being replaced by $\mathscr{A}_0$ admits a unique solution $u\in C_b(\overline Q(s,1))\cap C^{1,2}(Q(s,1))$ for any $f\in C_b(\mathbb{R}^d)$. The associated evolution operator admits a Green kernel which we denote by $g_0$. In the following lemma, we will deal with the space $\mathscr{H}^{p,1}(Q(a,b))$ of all functions in $W^{0,1}_p(Q(a,b))$ with distributional time derivative in $(W^{0,1}_{p'}(Q(a,b)))'$, where $1/p+1/p'=1$. We refer the reader to \cite{krylov01,mpr10} for more details on these spaces. Here, we just prove the following result which is crucial in the proof of Theorem \ref{t.mainbdd} (cf. \cite[Lemma 7.2]{mpr10}). \begin{lemma} \label{lem-cortona-0} Let $u\in {\mathscr H}^{p,1}(Q(a,b))\cap C_b(\overline Q(a,b))$ for some $p\in (1,\infty)$. 
Then, there exists a sequence $(u_n)\subset C^{\infty}_c(\mathbb{R}^{d+1})$ of smooth functions such that $u_n$ tends to $u$ in $W^{0,1}_p(Q(a,b))$ and locally uniformly in $\overline Q(a,b)$, and $\partial_tu_n$ converges to $\partial_tu$ weakly$^*$ in $(W^{0,1}_{p'}(Q(a,b)))'$ as $n\to\infty$. \end{lemma} \begin{proof} We split the proof in two steps: first we prove the statement with $Q(a,b)$ being replaced with $\mathbb{R}^{d+1}$ and, then, using this result we complete the proof. {\em Step 1.} Let $\vartheta\in C^{\infty}_c(\mathbb{R})$ be a smooth function such that $\vartheta\equiv 1$ in $(-1,1)$ and $\vartheta \equiv 0$ in $\mathbb{R} \setminus(-2,2)$. For any $\sigma>0$, any $t\in\mathbb{R}$ and any $x\in\mathbb{R}^d$, set $\vartheta_{\sigma}(t,x)=\vartheta(|t|/\sigma)\vartheta(|x|/\sigma)$. Next, we define the function $u_n\in C^{\infty}_c(\mathbb{R}^{d+1})$ by setting \begin{align*} u_n(t,x)&=n^{d+1}\vartheta_n(t,x)\int_{\mathbb{R}^{d+1}}u(s,y)\vartheta_{1/n}(t-s,x-y)\,ds\,dy\\ &=:n^{d+1}\vartheta_n(t,x)(u\star \vartheta_{1/n})(t,x)\,, \end{align*} for any $(t,x)\in\mathbb{R}^{d+1}$ and any $n\in\mathbb{N}$. Clearly, $u_n$ converges to $u$ in $W^{0,1}_p(\mathbb{R}^{d+1})$ and locally uniformly in $\mathbb{R}^{d+1}$. Let us fix a function $\psi\in W^{0,1}_{p'}(\mathbb{R}^{d+1})$. Applying the Fubini-Tonelli theorem and taking into account that $\vartheta_{1/n}(r,z)=\vartheta_{1/n}(-r,-z)$ for any $(r,z)\in\mathbb{R}^{d+1}$, we easily deduce that $\langle \partial_tu_n ,\psi \rangle=\langle \partial_tu ,\psi_n\rangle $ for any $n\in\mathbb{N}$, where $\psi_n=n^{d+1}\vartheta_{1/n}\star(\vartheta_n\psi)$ and $\langle \cdot ,\cdot \rangle$ denotes the duality pairing of $W^{0,1}_{p'}(\mathbb{R}^{d+1})$ and $(W^{0,1}_{p'}(\mathbb{R}^{d+1}))'$. Since $\psi_n$ converges to $\psi$ in $W^{0,1}_{p'}(\mathbb{R}^{d+1})$ as $n\to \infty$, we conclude that $\langle \partial_tu ,\psi_n\rangle \to \langle \partial_tu,\psi \rangle$ as $n\to\infty$. 
This shows that $\partial_tu_n\stackrel{*}{\rightharpoonup}\partial_tu$ in $(W^{0,1}_{p'}(\mathbb{R}^{d+1}))'$ as $n\to\infty$. {\em Step 2.} Let us now consider the general case. We extend $u\in {\mathscr H}^{p,1}(Q(a,b))\cap C_b(\overline Q(a,b))$ to $(3a-2b,2b-a)$, by symmetry, first with respect to $t=b$ and then with respect to $t=a$. The function $v$ obtained in this way belongs to ${\mathscr H}^{p,1}(Q(3a-2b,2b-a))\cap C_b(\overline Q(3a-2b,2b-a))$. Proving that $v\in W^{0,1}_p(Q(3a-2b,2b-a))\cap C_b(\overline Q(3a-2b,2b-a))$ is immediate. Hence, it remains to prove that the distributional derivative $\partial_tv$ belongs to $(W^{0,1}_{p'}(Q(3a-2b,2b-a)))'$. To that end, fix $\varphi\in C^{\infty}_c(Q(3a-2b,2b-a))$ and observe that \begin{align} \int_{Q(3a-2b,2b-a)}v\partial_t\varphi\,dt\,dx =\int_{Q(a,b)}u\partial_t\Phi\,dt\,dx\,, \label{extension-1} \end{align} where the function $\Phi=\varphi-\varphi(2b-\cdot,\cdot)-\varphi(2a-\cdot,\cdot)+\varphi(2a-2b+\cdot,\cdot)$ belongs to $W^{0,1}_{p'}(Q(a,b))$. It follows immediately that $\langle \partial_tv ,\varphi \rangle=\langle \partial_t u ,\Phi \rangle$. The density of $C^{\infty}_c(Q(a,b))$ in $W^{0,1}_{p'}(Q(a,b))$ implies that $\partial_tv\in (W^{0,1}_{p'}(Q(3a-2b,2b-a)))'$. We now fix a function $\zeta\in C^{\infty}_c((3a-2b,2b-a))$ such that $\zeta\equiv 1$ in $[a,b]$. Applying Step 1 to the function $(t,x)\mapsto \zeta(t) v(t,x)$, which belongs to ${\mathscr H}^{p,1}(\mathbb{R}^{d+1})\cap C_b(\mathbb{R}^{d+1})$, we can find a sequence $(u_n)\subset C^{\infty}_c(\mathbb{R}^{d+1})$ converging to the function $\zeta v$ locally uniformly in $\mathbb{R}^{d+1}$ and in $W^{0,1}_p(\mathbb{R}^{d+1})$, and such that $\partial_tu_n\stackrel{*}{\rightharpoonup}\partial_t(\zeta v)$ in $(W^{0,1}_{p'}(\mathbb{R}^{d+1}))'$. Clearly, $u_n$ converges to $u$ locally uniformly in $\overline Q(a,b)$ and in $W^{0,1}_p(Q(a,b))$.
Moreover, fix $\varphi\in W^{0,1}_{p'}(Q(a,b))$ and denote by $\overline\varphi$ the null extension of $\varphi$ to the whole of $\mathbb{R}^{d+1}$. Clearly, $\overline\varphi$ belongs to $W^{0,1}_{p'}(\mathbb{R}^{d+1})$. Since \begin{align*} \int_{Q(a,b)}\partial_tu_n\varphi\,dt\,dx= \int_{\mathbb{R}^{d+1}}\partial_tu_n\overline\varphi\,dt\,dx\, \end{align*} and $\partial_t u_n\stackrel{*}{\rightharpoonup} \partial_t(v\zeta)$ in $(W^{0,1}_{p'}(\mathbb{R}^{d+1}))'$, from formula \eqref{extension-1} and since $\zeta'\overline\varphi\equiv 0$ and $\zeta\overline\varphi\equiv\varphi$, it follows that \begin{eqnarray*} \lim_{n\to\infty}\int_{Q(a,b)}\partial_tu_n\varphi\,dt\,dx &=& \langle \partial_t(\zeta v),\overline\varphi \rangle =\int_{Q(a,b)} v\zeta'\overline\varphi\,dt\,dx +\langle \partial_t v,\zeta\overline\varphi\rangle \\ &=& \langle \partial_tv ,\overline\varphi\rangle =\langle \partial_tu ,\varphi\rangle \,. \end{eqnarray*} This completes the proof. \qed \end{proof} \begin{lemma}\label{l.bdd} Let $0\leq a<b <t$ and $x \in \mathbb{R}^d$. Moreover, assume that $g_0(t, \cdot, x, \cdot) \in L^\infty (Q(a,b))$. Then, $g(t, \cdot, x, \cdot) \in C_b(\overline Q(a,b))$. Moreover, if for some $q>1$ we have $\Gamma_1(q,x,a,b)< \infty$ and $\Gamma_2(q,x,a,b)<\infty$, then $g(t,\cdot, x, \cdot) \in \mathscr{H}^{p,1}(Q(\tilde a,\tilde b))$ for all $p \in (1,q)$ and any $a<\tilde a<\tilde b<b$. \end{lemma} \begin{proof} By the maximum principle, $g(t,\cdot,x,\cdot) \leq g_0(t,\cdot,x,\cdot)$ almost everywhere. Hence, $g(t,\cdot,x,\cdot) \in L^\infty (Q(a,b))$. The continuity of the function $g(t,\cdot,x,\cdot)$ follows from Lemma \ref{lem-reg-g}. To infer that $g(t,\cdot,x,\cdot)$ belongs to $\mathscr{H}^{p,1}(Q(\tilde a,\tilde b))$, for any $\tilde a$ and $\tilde b$ as in the statement of the lemma, we want to use \cite[Lemma 3.2]{mpr10} (see also \cite[Lemma 3.2]{klr13} for the nonautonomous situation).
We note that the proof of that lemma remains valid for operators with potential term, provided that both $\Gamma_1(q,x,a,b)<\infty$ and $\Gamma_2(q,x,a,b)< \infty$. Thus \cite[Lemma 3.2]{klr13} yields $g \in \mathscr{H}^{p,1}(Q(\tilde a,\tilde b))$ for all $p \in (1, q)$. \qed \end{proof} We next establish the kernel estimates. To that end, we use time-dependent Lyapunov functions. We make the following assumptions. \begin{hyp}\label{hyp2} Fix $0< t\leq 1$, $x \in \mathbb{R}^d$ and $0< a_0<a<b< b_0<t$. Let time dependent Lyapunov functions $W_1, W_2$ with $W_1 \leq W_2$ and a weight function $1 \leq w \in C^{1,2}(Q(0,t))$ be given such that \begin{enumerate} \item the functions $w^{-2}\partial_sw$ and $w^{-2}\nabla_y w$ are bounded on $Q(a_0,b_0)$; \item there exist a constant $k>d+2$ and constants $c_1,\ldots,c_7\geq 1$, possibly depending on the interval $(a_0,b_0)$, such that \[ \begin{array}{ll} \mathrm{(i)}\quad w\le c_1w^{\frac{k-2}{k}}W_1^{\frac{2}{k}}\,, & \mathrm{(ii)} \quad |Q\nabla_y w| \leq c_2 w^{\frac{k-1}{k}}W_1^\frac{1}{k}\,,\\ \mathrm{(iii)}\!\!\! \quad |\mathrm{Tr} (Q D^2 w)| \leq c_3w^{\frac{k-2}{k}}W_1^\frac{2}{k}\,, & \mathrm{(iv)} \quad |\partial_sw|\leq c_4w^{\frac{k-2}{k}}W_1^\frac{2}{k}\,,\\ \mathrm{(v)} \quad |\sum_{i=1}^d D_iq_{ij}| \leq c_5 w^{-\frac{1}{k}}W_2^\frac{1}{k}\,,\quad & \\ \mathrm{and} &\\ \mathrm{(vi)} \quad |F|\leq c_6w^{-\frac{1}{k}}W_2^{\frac{1}{k}}\,, &\mathrm{(vii)}\quad V^{\frac{1}{2}}\le c_7w^{-\frac{1}{k}}W_2^{\frac{1}{k}}, \end{array} \] on $Q(a_0,b_0)$; \item $g_0(t, \cdot, x, \cdot) \in L^\infty (Q(a_0,b_0))$. \end{enumerate} \end{hyp} Having fixed $t$ and $x$, we write $\rho(s,y) := g(t,s,x,y)$ to simplify notation. We can now prove the main result of this section. \begin{theorem}\label{t.mainbdd} Assume Hypotheses \ref{hyp2}. 
Then there exists a positive constant $C_1$, depending only on $d, k$ and $\eta$, such that \begin{align}\label{eq.mainest} w\rho \leq C_1&\bigg [ c_1^\frac{k}{2}\sup_{s\in (a_0,b_0)}\zeta_{W_1}(s) + \bigg ( \frac{c_1^{\frac{k}{2}}}{(b_0-b)^{\frac{k}{2}}}+c_2^k +c_3^{\frac{k}{2}}+c_4^{\frac{k}{2}}\bigg ) \int_{a_0}^{b_0}\zeta_{W_1}(s)\, ds \notag\\ &+ \bigg (c_2^{\frac{k}{2}}c_6^{\frac{k}{2}}+c_5^k+c_6^k+c_7^k\bigg )\int_{a_0}^{b_0} \zeta_{W_2}(s)\, ds \bigg] \end{align} in $Q(a,b)$. \end{theorem} \begin{proof} We first assume that the weight function $w$, along with its first order partial derivatives, is bounded. It follows from Hypothesis \ref{hyp2}(2)(i) and (vi) that \begin{align*} \Gamma_1(k/2,x,a_0,b_0)^{\frac{k}{2}} & = \int_{Q(a_0,b_0)} |F(s,y)|^{\frac{k}{2}} g(t,s,x,y)\, ds\,dy\\ & \leq \int_{Q(a_0,b_0)} w(s,y)|F(s,y)|^{\frac{k}{2}} g(t,s,x,y)\, ds\,dy\\ & \leq c_6^{\frac{k}{2}}\int_{Q(a_0,b_0)} w(s,y)^{\half}W_2(s,y)^{\half} g(t,s,x,y)\, ds\,dy \\ & \leq c_1^{\frac{k}{4}}c_6^{\frac{k}{2}}\int_{Q(a_0,b_0)} W_2(s,y) g(t,s,x,y)\, ds\,dy < \infty, \end{align*} as a consequence of Proposition \ref{p.lyapunov}. Moreover, using Hypothesis \ref{hyp2}(2)(vii) instead, it follows that \[ \Gamma_2(k/2,x,a_0,b_0)^{\frac{k}{2}}\leq c_7^k\int_{a_0}^{b_0}\zeta_{W_2}(s,x)\, ds < \infty. \] We thus infer from Lemma \ref{l.bdd} that $g(t,\cdot, x, \cdot) \in L^\infty (Q(a_0,b_0)) \cap \mathscr{H}^{p,1}(Q(a_1,b_1))$ for all $p \in (1, \frac{k}{2})$, where $a_0<a_1<a<b<b_1<b_0$. Let $\vartheta:\mathbb{R}\to\mathbb{R}$ be a smooth function with $\vartheta (s)=1$ for $s \in [a,b]$, $\vartheta (s) = 0$ for $s \geq b_1$, $0\le\vartheta\le 1$ and $|\vartheta'| \leq 2(b_1-b)^{-1}$ in $\mathbb{R}$. Given $\psi \in C^{1,2}_c(Q(a_1,b_1))$, we put $\varphi(s,y) := \vartheta(s)^{\frac{k}{2}}w(s,y)\psi (s,y)$. It follows from \eqref{eq.weak} that \begin{equation}\label{eq.1} \int_{Q(a_1,b_1)}\big[\partial_s \varphi (s,y) - \mathscr{A} (s)\varphi (s,y)\big]\rho (s,y)\, ds\, dy = 0.
\end{equation} We write $\tilde \rho := \vartheta^\frac{k}{2}\rho$ and note that $w\tilde\rho\in \mathscr{H}^{p,1}(Q(a_1,b_1))$ for all $p \in (1, \frac{k}{2})$, since $w$ and its derivatives are bounded. Thus, with some standard computations involving integration by parts, we derive from \eqref{eq.1} that \begin{align*} & \phantom{=}\int_{Q(a_1,b_1)}\big [\langle Q\nabla_y (w\tilde{\rho}), \nabla_y \psi\rangle - \psi\partial_s (w\tilde{\rho})\big ] \, ds\,dy\\ & = \int_{Q(a_1,b_1)} \tilde{\rho} \bigg( 2 \sum_{i,j=1}^d q_{ij}(D_iw)(D_j \psi) - \sum_{i,j=1}^d w (D_iq_{ij})(D_j\psi)+ w \langle F,\nabla_y \psi\rangle\bigg) \, ds\,dy\\ & \qquad-\frac{k}{2}\int_{Q(a_1,b_1)}\rho w\psi \vartheta^{\frac{k-2}{2}}\vartheta'\, ds\,dy\\ & \qquad + \int_{Q(a_1,b_1)} \psi\big(\tilde{\rho}\mathrm{Tr}(QD^2 w)+\tilde{\rho}\langle F,\nabla_y w\rangle - \tilde{\rho} Vw- \tilde{\rho} \partial_s w\big)\, ds\,dy\,, \end{align*} where, with a slight abuse of notation, we denote by $\int_{Q(a_1,b_1)}\psi \partial_s(w\tilde\rho)\,ds\,dy$ the pairing between $\partial_s(w\tilde\rho)\in (W^{0,1}_{p'}(Q(a_1,b_1)))'$ and $\psi\in W^{0,1}_{p'}(Q(a_1,b_1))$. We now want to apply \cite[Theorem 3.7]{klr13} to the function $u =w\tilde\rho$ and infer that there exists a constant $C$, depending only on $\eta, d$ and $k$ (but not on $\|Q\|_\infty)$, such that \begin{align}\label{eq.inftyest} \|w\tilde{\rho} \|_{\infty}\leq C \bigg( &\|w\tilde{\rho}\|_{\infty,2} + \|\tilde{\rho} Q\nabla_y w\|_{k} +\|\tilde{\rho} Fw\|_{k} + \sum_{j=1}^d\bigg\|\tilde{\rho}w\sum_{i=1}^d D_iq_{ij}\bigg\|_{k} + \|\tilde\rho Vw\|_\frac{k}{2}\notag\\ &\!\!\!\!\!\!\!+ \frac{k}{b_1-b}\|\rho w\vartheta^{\frac{k-2}{2}}\|_{\frac{k}{2}} + \|\tilde{\rho} \mathrm{Tr}(QD^2w) \|_{\frac{k}{2}} + \|\tilde{\rho}\partial_sw \|_{\frac{k}{2}} + \|\tilde{\rho}F\cdot\nabla_y w \|_{\frac{k}{2}} \bigg)\,, \end{align} where for $p\in [1,\infty)$ we denote by $\|f\|_p$ the usual $L^p$-norm of the function $f:Q(a_1,b_1)\to\mathbb{R}$.
Moreover, $\|f\|_{\infty,2}:=\sup_{s\in (a_1,b_1)}\|f(s,\cdot)\|_{L^2(\mathbb{R}^d)}$. Note that a major tool in the proof of that theorem is the formula \begin{equation} \int_{Q(a_1,b_1)}\vartheta(v-\ell)_+\partial_tv\,dt\,dx =\frac{1}{2}\left[\int_{\mathbb{R}^d}\vartheta(v(b_1)-\ell)_+^2\,dx-\int_{\mathbb{R}^d}\vartheta(v(a_1)-\ell)_+^2\,dx\right]\,, \label{AA0} \end{equation} satisfied by $v=w\tilde\rho$, any $\ell>0$ and any nonnegative function $\vartheta\in C^{\infty}_c(\mathbb{R}^d)$, if $p>d+2$. However, formula \eqref{AA0} is satisfied also in the case $p\le d+2$, which is our situation, if we additionally assume that $v\in C_b(\overline Q(a_1,b_1))$ (which follows from Lemma \ref{l.bdd}). Its proof can be obtained arguing as in \cite[Lemma 3.6]{klr13}, taking Lemma \ref{lem-cortona-0} into account, with slight and straightforward changes. Once formula \eqref{AA0} is established, the proof of \eqref{eq.inftyest} follows the same lines as in \cite[Theorem 3.7]{klr13} with no changes. We now estimate the terms on the right-hand side of \eqref{eq.inftyest}, using part (2) of Hypothesis \ref{hyp2}. We have \begin{align*} \|\tilde\rho Q\nabla_y w\|_k^k & = \int_{Q(a_1,b_1)}|\tilde\rho Q\nabla_y w|^k\, ds\,dy \ \leq c_2^k\int_{Q(a_1,b_1)}\tilde\rho^k w^{k-1}W_1\, ds\, dy \\ & \leq c_2^k\|\tilde\rho w\|_\infty^{k-1}\int_{a_1}^{b_1}\zeta_{W_1}(s,x)\, ds. \end{align*} Let us write $M_j := \int_{a_1}^{b_1}\zeta_{W_j}(s,x)\, ds$ for $j=1,2$, and $\bar{M}:= \sup_{s \in (a_1,b_1)}\zeta_{W_1} (s,x)$.
With similar estimates as above, we find \[ \begin{array}{ll} \|\tilde \rho F w\|_k \leq c_6\|\tilde\rho w\|_\infty^{\frac{k-1}{k}}M_2^\frac{1}{k}, \qquad & \Big\| \tilde\rho w\sum_{i=1}^d D_iq_{ij} \Big\|_k \leq c_5\|\tilde\rho w\|_\infty^{\frac{k-1}{k}}M_2^\frac{1}{k},\\[0.5em] \|\tilde \rho V w\|_\frac{k}{2} \leq c^2_7\|\tilde\rho w\|_\infty^{\frac{k-2}{k}}M_2^\frac{2}{k}, \qquad & \|\rho w\vartheta^\frac{k-2}{2}\|_\frac{k}{2}\leq c_1\|\tilde\rho w\|_\infty^{\frac{k-2}{k}}M_1^\frac{2}{k},\\[0.5em] \|\tilde \rho \mathrm{Tr}(QD^2 w)\|_\frac{k}{2} \leq c_3\|\tilde\rho w\|_\infty^{\frac{k-2}{k}}M_1^\frac{2}{k}, \qquad & \|\tilde \rho \partial_s w\|_\frac{k}{2}\leq c_4\|\tilde\rho w\|_\infty^{\frac{k-2}{k}}M_1^\frac{2}{k},\\[0.5em] \|\tilde \rho F\cdot\nabla_y w\|_\frac{k}{2} \leq \eta^{-1} c_2c_6 \|\tilde\rho w\|_\infty^{\frac{k-2}{k}}M_2^\frac{2}{k}, & \|w\tilde\rho\|_{\infty, 2} \leq c_1^\frac{k}{4} \|w\tilde\rho\|_\infty^\half \bar M^\half . \end{array} \] From \eqref{eq.inftyest} and the above estimates, we obtain the following inequality for $X :=\|w\tilde\rho\|_\infty^\frac{1}{k}$\,: \begin{eqnarray*} X^k \leq \alpha X^{\frac{k}{2}}+ \beta X^{k-1} + \gamma X^{k-2}\,, \end{eqnarray*} where $\alpha := Cc_1^\frac{k}{4}\bar M^\half$, $\beta := C\Big (c_2M_1^\frac{1}{k}+(c_6+c_5d)M_2^\frac{1}{k}\Big )$ and \[ \gamma := C\bigg(\frac{c_1}{b_1-b} +c_3+c_4\bigg)M_1^\frac{2}{k} + C(c_2c_6+c_7^2)M_2^\frac{2}{k}. \] Estimating $\alpha X^{k/2}\le \frac{1}{4}{X^k}+\alpha^2$, we find \begin{equation} X^k \leq \frac{4}{3}\alpha^2 + \frac{4}{3}\beta X^{k-1}+\frac{4}{3}\gamma X^{k-2}.
\label{estim-X} \end{equation} We note that the function \begin{align*} f(r)=r^k-\frac{4}{3}\beta r^{k-1}-\frac{4}{3}\gamma r^{k-2}-\frac{4}{3}\alpha^2 =&r^{k-2}\left (r^2-\frac{4}{3}\beta r-\frac{4}{3}\gamma\right )-\frac{4}{3}\alpha^2\\ :=&r^{k-2}g(r)-\frac{4}{3}\alpha^2 \end{align*} is increasing in $\bigg(\frac{4}{3}\beta+\sqrt{\frac{4}{3}\gamma}+\left (\frac{4}{3}\alpha^2\right )^{\frac{1}{k}},\infty\bigg )$ since the functions $r\mapsto r^{k-2}$ and $g$ are positive and increasing there. Moreover, \begin{align*} &f\bigg (\frac{4}{3}\beta+\sqrt{\frac{4}{3}\gamma}+\bigg (\frac{4}{3}\alpha^2\bigg )^{\frac{1}{k}}\bigg ) =\bigg (\frac{4}{3}\beta+\sqrt{\frac{4}{3}\gamma}+\bigg (\frac{4}{3}\alpha^2\bigg )^{\frac{1}{k}}\bigg )^{k-2}\times\\ &\qquad\qquad\quad\quad\times \bigg [\bigg (\frac{4}{3}\alpha^2\bigg )^{\frac{2}{k}}+\bigg (\frac{4}{3}\bigg )^{\frac{3}{2}}\beta\gamma^{\half}+ 2\bigg (\frac{4}{3}\bigg )^{\frac{k+2}{2k}}\alpha^{\frac{2}{k}}\bigg (\frac{\sqrt{3}}{3}\beta+\sqrt{\gamma}\bigg )\bigg ] -\frac{4}{3}\alpha^2\\ &> \bigg (\frac{4}{3}\alpha^2\bigg )^{\frac{k-2}{k}}\bigg (\frac{4}{3}\alpha^2\bigg )^{\frac{2}{k}}- \frac{4}{3}\alpha^2=0. \end{align*} From these observations and inequality \eqref{estim-X} it follows that $X \leq \frac{4}{3}\beta+\sqrt{\frac{4}{3}\gamma}+\left (\frac{4}{3}\alpha^2\right )^{\frac{1}{k}}$. Equivalently, \[ \|\tilde\rho w\|_\infty \le K_1\left (\alpha^2+\beta^k+\gamma^{\frac{k}{2}}\right ), \] for some positive constant $K_1$. Taking into account that $c_i\geq 1$, one derives \eqref{eq.mainest} from this by plugging in the definitions of $\alpha, \beta, \gamma$ and, then, letting $a_1\downarrow a_0$ and $b_1\uparrow b_0$. To finish the proof of the theorem, it remains to remove the additional assumption on the weight $w$. To that end, we set $w_\varepsilon := \frac{w}{1+\varepsilon w}$. Using Hypothesis \ref{hyp2}(1), we see that $w_\varepsilon$, along with its partial derivatives, is bounded.
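For completeness, let us indicate the computation behind this claim: since $0 \le w_\varepsilon \le \varepsilon^{-1}$ and
\[
\partial_s w_\varepsilon = \frac{\partial_s w}{(1+\varepsilon w)^2} = \bigg(\frac{w}{1+\varepsilon w}\bigg)^2 \frac{\partial_s w}{w^2}, \qquad \nabla_y w_\varepsilon = \bigg(\frac{w}{1+\varepsilon w}\bigg)^2 \frac{\nabla_y w}{w^2},
\]
the boundedness of $w^{-2}\partial_s w$ and $w^{-2}\nabla_y w$ on $Q(a_0,b_0)$, together with $\frac{w}{1+\varepsilon w}\le \varepsilon^{-1}$, yields the boundedness of $w_\varepsilon$ and of its first-order partial derivatives.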
Straightforward computations show that Part (2) of Hypothesis \ref{hyp2} is satisfied with the same constants $c_1,\ldots,c_7$. Thus the first part of the proof shows that \eqref{eq.mainest} is satisfied with $w$ replaced by $w_\varepsilon$, and the constants on the right-hand side do not depend on $\varepsilon$. Letting $\varepsilon \downarrow 0$, we obtain \eqref{eq.mainest} for the original $w$. \qed \end{proof} \section{The case of general diffusion coefficients} We now remove the additional boundedness assumption imposed in Section \ref{sect-3}. We do this by approximating general diffusion coefficients with bounded ones, taking advantage of the fact that the constant $C_1$ obtained in Theorem \ref{t.mainbdd} does not depend on the supremum norm of the diffusion coefficients. More precisely, we approximate the diffusion matrix $Q$ as follows. Given a function $\varphi \in C_c^\infty(\mathbb{R})$ such that $\varphi\equiv 1$ in $(-1,1)$, $\varphi\equiv 0$ in $\mathbb{R}\setminus (-2,2)$ and $|t\varphi'(t)| \leq 2$ for all $t \in \mathbb{R}$, we define $\varphi_n(s,x):= \varphi (W_1(s,x)/n)$ for $s\in [0,t]$ and $x \in \mathbb{R}^d$. We put \[ q_{ij}^{(n)}(s,x) :=\varphi_n(s,x)q_{ij}(s,x) + (1-\varphi_n(s,x))\eta\delta_{ij}, \] where $\delta_{ij}$ is the Kronecker delta, and define the operators $\mathscr{A}_n(s)$ by \[ \mathscr{A}_n(s) := \sum_{i,j=1}^d q_{ij}^{(n)}(s)D_{ij} + \sum_{j=1}^d F_j(s)D_j - V(s). \] We collect some properties of the approximating operators, omitting the easy proof. \begin{lemma}\label{l.prop} Each operator $\mathscr{A}_n$ satisfies Hypothesis \ref{hyp1} in $[0,t]$, and its diffusion coefficients are bounded together with their first-order spatial derivatives. Moreover, any time-dependent Lyapunov function for the operator $\partial_s -\mathscr{A} (s)$ on $[0,t]$ is a time-dependent Lyapunov function for the operator $\partial_s -\mathscr{A}_n(s)$ with respect to the same $h$.
\end{lemma} It follows that the parabolic equation \eqref{eq.nee} with $\mathscr{A}$ replaced by $\mathscr{A}_n$ is well-posed and the solution is given through an evolution family $(G_n(r,s))_{0\leq s \leq r \leq t}$. Moreover, for $s<r$ the operator $G_n(r,s)$ is given by a Green kernel $g_n(r,s,\cdot, \cdot)$. We write $\mathscr{A}_n^0:= \mathscr{A}_n + V$ and denote the Green kernel associated to the operators $\mathscr{A}_n^0$ by $g_n^0$. We make the following assumptions. \begin{hyp}\label{hyp3} Fix $0< t\leq 1$, $x \in \mathbb{R}^d$ and $0<a_0<a<b< b_0<t$ and assume we are given time-dependent Lyapunov functions $W_1, W_2$ with $W_1 \leq W_2 \leq c_0Z^{1-\sigma}$ for some constants $c_0>0$ and $\sigma \in (0,1)$ and a weight function $1 \leq w \in C^2(\mathbb{R}^d)$ such that \begin{enumerate} \item Hypotheses \ref{hyp2}(1)-(2) are satisfied; \item $|\Delta_y w| \leq c_8w^\frac{k-2}{k}W_1^\frac{2}{k}$ and $|Q\nabla_y W_1| \leq c_9w^{-\frac{1}{k}} W_1 W_2^\frac{1}{k}$ on $[a_0,b_0]\times \mathbb{R}^d$, for certain constants $c_8, c_9\ge 1$; \item for $n\in \mathbb{N}$ we have $g_n^0(t, \cdot, x, \cdot) \in L^\infty(Q(a,b))$. \end{enumerate} \end{hyp} In order to prove kernel estimates for the Green kernel $g$, we apply Theorem \ref{t.mainbdd} to the operators $\mathscr{A}_n$ and then let $n \to \infty$. To do so, we have to show that the operators $\mathscr{A}_n$ satisfy Hypothesis \ref{hyp2}. \begin{lemma} \label{lemma-4} The operator $\mathscr{A}_n$ satisfies Hypothesis \ref{hyp2} with the same constants $c_1$, $c_4$, $c_6$, $c_7$ and with $c_2$, $c_3$ and $c_5$ being replaced, respectively, by $2c_2$, $c_3+\eta c_8$ and $c_5+4c_9$. \end{lemma} \begin{proof} Since part (1) is obvious and part (3) follows directly from part (3) in Hypothesis \ref{hyp3}, we only need to check part (2) of Hypothesis \ref{hyp2}. Here, the estimates (i), (iv), (vi) and (vii) are obvious, as they do not depend on the diffusion coefficients.
Let us next note that \[ |\nabla_y w| = |Q^{-1}Q\nabla_y w| \leq \eta^{-1}c_2w^\frac{k-1}{k}W_1^\frac{1}{k}, \] so that \[ |Q_n\nabla_y w| = |\varphi_n Q\nabla_y w + (1-\varphi_n)\eta\nabla_y w| \leq |Q\nabla_y w| + \eta|\nabla_y w| \leq 2c_2 w^\frac{k-1}{k}W_1^\frac{1}{k}. \] This gives (ii) for $Q_n$. As for (iii), we have \[ |\mathrm{Tr}(Q_nD^2w)| \leq |\mathrm{Tr}(QD^2w)| + \eta |\Delta_y w| \leq (c_3+\eta c_8) w^{\frac{k-2}{k}}W_1^\frac{2}{k}. \] It remains to check (v). We note that \[ \sum_{i=1}^d D_iq_{ij}^{(n)} = \varphi_n\sum_{i=1}^d D_iq_{ij} + \frac{\varphi'(W_1/n)}{n}\big[ (Q\nabla_y W_1)_j - \eta D_j W_1\big]. \] As $|t\varphi'(t)| \leq 2$, it follows that \[ \bigg|\frac{\varphi'(W_1/n)}{n} \big[ (Q\nabla_y W_1)_j - \eta D_jW_1\big ]\bigg| \leq \frac{2}{W_1} (|Q\nabla_y W_1| + \eta |\nabla_y W_1|). \] Consequently, \[ \bigg|\sum_{i=1}^d D_iq_{ij}^{(n)}\bigg| \leq \bigg|\sum_{i=1}^d D_iq_{ij}\bigg| + \frac{2}{W_1} (|Q\nabla_y W_1| + \eta |\nabla_y W_1|) \leq (c_5+4c_9) w^{-\frac{1}{k}}W_2^\frac{1}{k}. \] This finishes the proof. \qed \end{proof} We shall need the following convergence result for the Green kernels. \begin{lemma}\label{l.conv} Fix $r \leq t$ and $x \in \mathbb{R}^d$ and define $\rho_n(s,y) := g_n(r,s,x,y)$ and $\rho(s,y) := g(r,s,x,y)$ for $s \in [0,r]$ and $y \in \mathbb{R}^d$. Then $\rho_n \to \rho$, locally uniformly in $(0,r)\times \mathbb{R}^d$. \end{lemma} \begin{proof} The proof is analogous to that of \cite[Proposition 2.9]{klr13}. We give a sketch. Using Schauder interior estimates and a diagonal argument, one shows that for any $f \in C_c^{2+\varsigma}(\mathbb{R}^d)$, $G_n(\cdot, s)f$ converges to $G(\cdot, s)f$ locally uniformly. This implies that the measure $\rho_n(s,y)\, ds\, dy$ converges weakly to the measure $\rho (s,y)\, ds\, dy$.
On the other hand, \cite[Corollary 3.11]{bkr06} implies that for a compact set $K \subset \mathbb{R}^d$ and a compact interval $J \subset (0,r)$ we have $\|\rho_n\|_{C^\gamma (J\times K)} \leq C$ for certain constants $C>0$ and $\gamma \in (0,1)$ independent of $n$. Thus, by compactness, a subsequence converges locally uniformly to some continuous function $\psi$ which, by the above, has to be $\rho$. \qed \end{proof} We can now state and prove our main result. \begin{theorem}\label{t.main} Assume Hypothesis \ref{hyp3}. Then there exists a positive constant $C_1$, depending only on $d, k$ and $\eta$, such that \begin{align}\label{eq.mainest2} w\rho \leq C_1&\bigg [ c_1^\frac{k}{2}\sup_{s\in (a_0,b_0)}\zeta_{W_1}(s) + \bigg ( \frac{c_1^{\frac{k}{2}}}{(b_0-b)^{\frac{k}{2}}}+c_2^k +c_3^{\frac{k}{2}}+c_4^{\frac{k}{2}}+c_8^{\frac{k}{2}}\bigg ) \int_{a_0}^{b_0}\zeta_{W_1}(s)\, ds \notag\\ &+ \bigg (c_2^{\frac{k}{2}}c_6^{\frac{k}{2}}+c_5^k+c_6^k+c_7^k+c_9^k\bigg )\int_{a_0}^{b_0} \zeta_{W_2}(s)\, ds \bigg] \end{align} in $(a,b)\times\mathbb{R}^d$. \end{theorem} \begin{proof} We apply Theorem \ref{t.mainbdd} to the operators $\mathscr{A}_n$. Taking Lemma \ref{lemma-4} into account, we obtain \begin{align}\label{eq.obtainedest} w\rho_n \leq C_1&\bigg [c_1^\frac{k}{2}\sup_{s\in (a_0,b_0)}\zeta_{1,n}(s) + \bigg (c_2^{\frac{k}{2}}c_6^{\frac{k}{2}}+(c_5+4c_9)^k+c_6^k+c_7^k\bigg )\int_{a_0}^{b_0} \zeta_{2,n}(s)\, ds \notag\\ &+ \bigg ( \frac{c_1^{\frac{k}{2}}}{(b_0-b)^{\frac{k}{2}}}+(2c_2)^k +(c_3+\eta c_8)^{\frac{k}{2}}+c_4^{\frac{k}{2}}\bigg ) \int_{a_0}^{b_0}\zeta_{1,n}(s)\, ds \bigg], \end{align} in $Q(a,b)$, where $\zeta_{j,n}(s) := \int_{\mathbb{R}^d} W_j(s, y) g_n(t,s,x, y)\,dy$. Note that $\zeta_{j,n}$ is well defined by Proposition \ref{p.lyapunov}, since $W_j$ is also a time-dependent Lyapunov function for $\mathscr{A}_n$ by Lemma \ref{l.prop}.
Since $\rho_n \to \rho$ locally uniformly by Lemma \ref{l.conv}, Estimate \eqref{eq.mainest2} follows from \eqref{eq.obtainedest} upon letting $n\to \infty$ once we prove that the right-hand sides also converge. To that end, it suffices to prove that $\zeta_{j,n}$ converges to $\zeta_{W_j}$ uniformly on $(a_0,b_0)$. Using the estimate $W_j \leq c_0 Z^{1-\sigma}$ and H\"older's inequality, we find \begin{align} |\zeta_{j,n}(s) - \zeta_{W_j}(s)| & \leq \int_{\mathbb{R}^d} W_j(s) |\rho_n(s) - \rho(s)|\, dy\notag\\ & \leq \int_{B(0,R)} W_j(s) |\rho_n(s) - \rho(s)|\, dy\notag\\ & \qquad + \int_{\mathbb{R}^d\setminus B(0,R)} W_j(s)\rho_n(s)\, dy + \int_{\mathbb{R}^d\setminus B(0,R)} W_j(s)\rho (s)\, dy\notag\\ & \leq \|W_j\|_{L^\infty ((a_0,b_0)\times B(0,R))}\|\rho_n-\rho\|_{L^\infty ((a_0,b_0)\times B(0,R))} |B(0,R)|\label{eq.toshow}\\ & \, + c_0 \bigg(\int_{\mathbb{R}^d\setminus B(0,R)} Z(y) g_n(t,s,x,y)\,dy\bigg)^{1-\sigma}(g_n(t,s,x, \mathbb{R}^d \setminus B(0,R)))^\sigma\notag\\ & \, + c_0 \bigg(\int_{\mathbb{R}^d\setminus B(0,R)} Z(y) g(t,s,x,y)\,dy\bigg)^{1-\sigma}(g(t,s,x, \mathbb{R}^d \setminus B(0,R)))^\sigma,\notag \end{align} where $|B(0,R)|$ denotes the Lebesgue measure of the ball $B(0,R)$. We first note that, as a consequence of Equation \eqref{eq.zest} (which is also valid if $G$ is replaced with $G_n$ since $Z$ is also a Lyapunov function for $\mathscr{A}_n$), the integrals $\int_{\mathbb{R}^d} Z(y)g_n(t,s,x,y)\,dy$ are uniformly bounded. Arguing as in the proof of \eqref{tight}, it is easy to check that the measures $\{g_n(t,s,x,y)\,dy: s\in [0,t]\}$ are tight. Therefore, the last two terms in \eqref{eq.toshow} can be bounded by any given $\varepsilon>0$ if $R$ is chosen large enough. Since $\rho_n \to \rho$ locally uniformly, given $R$, also the first term in \eqref{eq.toshow} can be bounded by $\varepsilon$ if $n$ is large enough. Thus, altogether, $\zeta_{j,n} \to \zeta_{W_j}$ uniformly on $(a_0,b_0)$. This finishes the proof.
\qed \end{proof} \section{Proof of Theorem \ref{t.example}} Let us come back to the example from Theorem \ref{t.example}. We start by observing that the same computations as in the proof of Proposition \ref{p.example} show that the function $Z_0(x)=\exp(\delta|x|_{*}^{p+1-m})$ is a Lyapunov function for both the operators $\mathscr{A}_0$ and $\eta\Delta_x-F\cdot\nabla_x$. To obtain estimates for the Green kernel associated with the operator $\mathscr{A}$, we want to apply Theorem \ref{t.main}. We assume that we are in the situation of Proposition \ref{p.example} and pick $0<\varepsilon_0<\varepsilon_1<\varepsilon_2 < \delta$, where $\delta < 1/\beta$, and $\alpha > \frac{\beta}{m+\beta-2}$. For $\beta\ge 2$, we define the functions $w, W_1, W_2:[0,t]\times\mathbb{R}^d\to\mathbb{R}$ by \[ w(s,y) := e^{\varepsilon_0(t-s)^\alpha |y|_{*}^\beta}\quad\mbox{and}\quad W_j(s,y):= e^{\varepsilon_j(t-s)^\alpha |y|_{*}^\beta}. \] Let us check the conditions of Theorem \ref{t.main}. As a consequence of Proposition \ref{p.example}, $W_1$ and $W_2$ are time-dependent Lyapunov functions which obviously satisfy $W_1\leq W_2 \leq Z^{1-\sigma}$ for suitable $\sigma$, where $Z(y) := \exp (\delta |y|_{*}^\beta)$. We have to verify that, with this choice of $w, W_1$ and $W_2$, Hypothesis \ref{hyp3} is satisfied. As before, we carry out the computations only for $|x| \geq 1$, omitting the details concerning the neighborhood of the origin. We now fix arbitrary $a_0, b_0\in (0,t)$ with $a_0<b_0$. Note that $w(s,y)^{-2}\partial_s w(s,y) = -\varepsilon_0\alpha (t-s)^{\alpha-1}|y|^\beta e^{-\varepsilon_0(t-s)^\alpha |y|^\beta}$. This is clearly bounded on $Q(a_0,b_0)$. Similarly, one sees that $w^{-2}\nabla_y w$ is bounded. Let us now turn to part (2) of Hypotheses \ref{hyp2} and \ref{hyp3}. Since $w\leq W_1$, clearly (2)(i) is satisfied with $c_1=1$.
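Indeed, since $\varepsilon_0 < \varepsilon_1$, we have
\[
\frac{w(s,y)}{W_1(s,y)} = e^{-(\varepsilon_1-\varepsilon_0)(t-s)^\alpha |y|_{*}^\beta} \leq 1,
\]
and $w \leq W_1$ is equivalent to $w^{\frac{2}{k}} \leq W_1^{\frac{2}{k}}$, that is, to the inequality $w \leq w^{\frac{k-2}{k}}W_1^{\frac{2}{k}}$ in (2)(i) with $c_1=1$.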
As for (2)(ii), we have \[ \frac{|Q(s,y)\nabla_y w(s,y)|}{w(s,y)^{1-1/k}W_1(s,y)^{1/k}} = \varepsilon_0\beta (t-s)^{\alpha} |y|^{\beta -1}(1+|y|^m)e^{-\frac{1}{k}(\varepsilon_1-\varepsilon_0)(t-s)^\alpha |y|^\beta}. \] To bound this expression, we note that for $\tau,\gamma, z >0$, we have \begin{eqnarray*} z^\gamma e^{-\tau z^\beta} = \tau^{-\frac{\gamma}{\beta}}(\tau z^\beta)^\frac{\gamma}{\beta}e^{-\tau z^\beta} \leq \tau^{-\frac{\gamma}{\beta}}\bigg( \frac{\gamma}{\beta}\bigg)^\frac{\gamma}{\beta} e^{-\frac{\gamma}{\beta}} =: \tau^{-\frac{\gamma}{\beta}} C(\gamma,\beta)\, , \end{eqnarray*} which follows from the fact that the maximum of the function $u \mapsto u^{\frac{\gamma}{\beta}}e^{-u}$ on $(0,\infty)$ is attained at the point $u=\frac{\gamma}{\beta}$. Applying this estimate with $z = |y|$, $\tau= k^{-1}(\varepsilon_1-\varepsilon_0)(t-s)^\alpha$, the same $\beta$, and $\gamma = \beta-1+m$, we get \begin{align*} & \phantom{=} \frac{|Q(s,y)\nabla_y w(s,y)|}{w(s,y)^{1-1/k}W_1(s,y)^{1/k}}\\ & \leq 2\varepsilon_0\beta (t-s)^\alpha \bigg(\frac{\varepsilon_1-\varepsilon_0}{k}\bigg)^{-\frac{\beta -1+m}{\beta}}(t-s)^{-\alpha \frac{\beta-1+m}{\beta}} C(\beta-1+m, \beta)\\ & =: \bar{c} (t-s)^{-\frac{\alpha(m-1)}{\beta}} \leq \bar{c}(t-b_0)^\frac{-\alpha(m-1)_+}{\beta}\, , \end{align*} for a certain constant $\bar{c}$. Thus we can choose the constant $c_2$ as $\bar{c}(t-b_0)^{-\frac{\alpha(m-1)_+}{\beta}}$, where $\bar{c}$ is a universal constant. Note that $c_2$ depends on the interval $(a_0,b_0)$ only through the factor $(t-b_0)^{-\gamma_2}$, where $\gamma_2 := \frac{\alpha(m-1)_+}{\beta}$. As it turns out, similar estimates show that also for (2)(iii)--(vii) in Hypothesis \ref{hyp2} and in Part (2) of Hypothesis \ref{hyp3} we can choose constants $c_3,\ldots,c_9$ of this form, albeit with different exponents $\gamma_3,\ldots,\gamma_9$. We now determine the exponents we can choose.
To simplify the presentation, we drop constants from our notation and write $\lesssim$ to indicate a constant which merely depends on $d, m, p, r, k, \varepsilon_0, \varepsilon_1, \varepsilon_2$. As for (iii), we find \begin{align*} &\frac{|\mathrm{Tr}(QD^2w(s,y))|}{w(s,y)^{1-2/k}W_1(s,y)^{2/k}}\\ \lesssim & \big [(t-s)^{\alpha}|y|^{\beta-2+m}+(t-s)^{2\alpha}|y|^{2\beta -2+m}\big ] e^{-\frac{2}{k}(\varepsilon_1-\varepsilon_0)(t-s)^\alpha|y|^\beta}\\ \lesssim & (t-s)^{2\alpha} (t-s)^{-\alpha\frac{2\beta-2+m}{\beta}} \leq (t-b_0)^{-\frac{\alpha (m-2)_+}{\beta}}\,, \end{align*} so that here $\gamma_3 = \frac{\alpha(m-2)_+}{\beta}$. The estimates \begin{align*} \frac{|\partial_sw(s,y)|}{w(s,y)^{1-2/k}W_1(s,y)^{2/k}} & \lesssim (t-s)^{\alpha-1}|y|^{\beta} e^{-\frac{2}{k}(\varepsilon_1-\varepsilon_0)(t-s)^\alpha|y|^\beta}\\ & \lesssim (t-s)^{\alpha -1} (t-s)^{-\alpha} \leq (t-b_0)^{-1}\,, \end{align*} \begin{align*} \frac{w(s,y)^{1/k}|\sum_{i=1}^dD_iq_{ij}(s,y)|}{W_2(s,y)^{1/k}} \lesssim |y|^{m} e^{-\frac{1}{k}(\varepsilon_2-\varepsilon_0)(t-s)^\alpha|y|^\beta} \lesssim (t-s)^{-\frac{\alpha m}{\beta}} \leq (t-b_0)^{-\frac{\alpha m}{\beta}} \end{align*} and \begin{align*} \frac{w(s,y)^{1/k}|F(s,y)|}{W_2(s,y)^{1/k}}=|y|^{p} e^{-\frac{1}{k}(\varepsilon_2-\varepsilon_0)(t-s)^\alpha|y|^\beta}\lesssim (t-s)^{-\frac{\alpha p}{\beta}} \leq (t-b_0)^{-\frac{\alpha p}{\beta}}, \end{align*} show that in (iv), resp.\ (v), resp.\ (vi) we can choose $\gamma_4 = 1$, resp.\ $\gamma_5 = \frac{\alpha m}{\beta}$, resp.\ $\gamma_6=\frac{\alpha p}{\beta}$. A similar estimate as for (vi) shows that in (vii) we can choose $\gamma_7 = \frac{\alpha r}{2\beta}$. Concerning part (2) of Hypothesis \ref{hyp3}, we note that repeating the computations for Hypothesis \ref{hyp2}(2)(ii)-(iii) with $m=0$, we see that in the estimates for $|\Delta_y w|$ and $|Q\nabla_y W_1|$ we can pick $c_8=c_9=\bar c$.
Finally, for part (3) of Hypothesis \ref{hyp2}, we note that in this special situation the boundedness of the Green kernel for the associated operators without potential term can also be established using time-dependent Lyapunov functions. This has been done in \cite{klr13}. We may thus invoke Theorem \ref{t.main}. To that end, given $s \in (0,t)$, we choose $a_0 := \max\{ s -(t-s)/2, s/2\}$ and $b_0 := s + (t-s)/2$ so that $t-b_0 = (t-s)/2$ and $b_0-a_0 \leq t-s$. Let us note that, as a consequence of Proposition \ref{p.lyapunov}, \[ \zeta_{W_j}(s,x) \leq \exp \bigg( \int_s^t h(\tau)\, d\tau\bigg) W_j(t,x) = \exp \bigg(\int_s^t h(\tau)\, d\tau \bigg). \] Thus, recalling the form of $h$ from the proof of Proposition \ref{p.example}, we see that there exists a constant $H$, depending only on $\alpha, \beta$ and $m$, hence independent of $(a_0,b_0)$, such that \[ \int_{a_0}^{b_0} \zeta_{W_j}(s)\, ds \leq H (b_0-a_0) \leq H(t-s). \] Thus, by Theorem \ref{t.main}, we find that, for a certain constant $C$, we have \begin{align} w\rho \leq C \Big( (t-s)^{1-\frac{k}{2}}+(t-s)^{1-\frac{\alpha}{2\beta}((m-1)_++p)k} +(t-s)^{1-\frac{\alpha}{\beta}\Lambda k} \Big), \label{eq.lastest} \end{align} where $\Lambda=m\vee p\vee\frac{r}{2}$. To simplify this further, we note first that \[ \Lambda \ge \frac{1}{2}((m-1)_++p). \] Now, let us assume that both $p> m-1$ and $r > m-2$, so that either assumption (i) or (ii) in Proposition \ref{p.example} applies. Note that in case (i), we have, by the choice of $\alpha$, that \[ \frac{\alpha\Lambda}{\beta} \geq \frac{\alpha p}{\beta} > \frac{p}{m+\beta -2} =\frac{p}{p-1} > \frac{1}{2}. \] In case (ii), we distinguish the cases $r+m>2$ and $r+m\le 2$. If $r+m>2$ we have \[ \frac{\alpha\Lambda}{\beta} \geq \frac{\alpha r}{2\beta} > \frac{r}{2(m+\beta -2)} = \frac{r}{r+m -2} > \frac{1}{2}, \] since $r>m-2$.
On the other hand, if $r+m\le 2$, then \[ \frac{\alpha\Lambda}{\beta} \geq \frac{\alpha p}{\beta} > \frac{p}{p-1} > \frac{1}{2}. \] Thus, the right-hand side of \eqref{eq.lastest} can be estimated by a constant times $(t-s)^{1-\frac{\alpha}{\beta}\Lambda k}$. Therefore, if $p\geq\half (m+r)$, we pick $\beta = p+1-m$. We have, for $\alpha > \frac{p+1-m}{p-1}$, $\varepsilon < \frac{1}{p+1-m}$, \begin{align*} g(t,s,x,y) \leq C(t-s)^{1-\frac{\alpha(m\vee p)k}{p+1-m}}e^{-\varepsilon (t-s)^\alpha |y|_{*}^{p+1-m}}, \end{align*} for a certain constant $C$. On the other hand, for $p < \half (m+r)$, we pick $\beta = \half (r+2-m)$. So, we obtain \[ g(t,s,x,y) \leq C (t-s)^{1-\frac{\alpha(2m\vee 2p\vee r)}{2(r+2-m)}k} e^{-\varepsilon (t-s)^\alpha |y|_*^{\half (r+2-m)}}, \] for $\varepsilon <\frac{2}{r+2-m}$ and $\alpha > \frac{r-m+2}{r+m-2}$ if $r+m>2$, and $\alpha>\frac{r+2-m}{2(p-1)}$ if $r+m\le 2$, where, again, $C$ is a positive constant independent of $t$ and $s$. This finishes the proof of Theorem \ref{t.example}. \end{document}
\begin{document} \begin{abstract} We show that partially hyperbolic diffeomorphisms of $d$-dimensional tori isotopic to an Anosov diffeomorphism, where the isotopy is contained in the set of partially hyperbolic diffeomorphisms, are dynamically coherent. Moreover, we show a \textit{global stability result}, i.e. every partially hyperbolic diffeomorphism as above is \textit{leaf-conjugate} to the linear one. As a consequence, we obtain intrinsic ergodicity and measure equivalence for partially hyperbolic diffeomorphisms with one-dimensional center direction that are isotopic to Anosov diffeomorphisms through such a path. \bigskip \noindent {\bf Keywords: Partial hyperbolicity, Dynamical coherence, Measures of maximal entropy} \noindent {\bf MSC 2000:} 37C05, 37C20, 37C25, 37C29, 37D30. \end{abstract} \maketitle \section{Introduction}\label{SectionIntroduccion} A fundamental problem in dynamical systems is classifying dynamical phenomena and describing the spaces that support these actions. By the 1970s there was a good classification of smooth systems that are uniformly hyperbolic. This is seen in the well-known Franks-Manning classification result of Anosov diffeomorphisms of tori. This result provides a global classification of Anosov diffeomorphisms on tori (or more generally infranilmanifolds) up to topological conjugacy: any Anosov diffeomorphism of $\mathbb{T}^d$ is topologically conjugate to its linear part. The proof uses the structure of the invariant foliation as a key tool in obtaining such a classification. Such a result is sometimes referred to as a \textit{global stability} result since it provides classification beyond small perturbations of the system (which is referred to as \textit{structural stability}).
For Anosov flows there are also some global stability results: let us mention for example a result of Ghys \cite{Ghy} (that overlaps with some related results of Gromov \cite{Gromov}) which states that if $\phi_t$ is an Anosov flow on a 3-manifold which is a circle bundle over a surface, then $\phi_t$ is \textit{orbit equivalent} to the geodesic flow of a surface of negative curvature. In the case of flows, orbit equivalence is the natural extension of topological conjugacy, as it is well known that topological conjugacy is too strong an equivalence for local stability results. Recently, there has been a great deal of interest in understanding the dynamical properties of partially hyperbolic diffeomorphisms; precise definitions are given in Section~\ref{s.precise}. For partially hyperbolic diffeomorphisms the natural equivalence relation is given by \textit{leaf-conjugacy} as introduced in \cite{HPS}, where a local stability result is provided (under some technical hypotheses). In dimension 3 some global stability results have been obtained (see \cite{Hammerlindl,HP}). These involve studying integrability of the center bundle since the notion of equivalence up to leaf conjugacy relies on the existence of a center foliation, sometimes referred to as \textit{dynamical coherence}. In dimension 3 the center bundle is one-dimensional, a hypothesis that provides some starting point for studying integrability. In this paper we consider partially hyperbolic diffeomorphisms of the $d$-torus isotopic to Anosov diffeomorphisms with no restriction on the dimension of the center bundle. The main result provides integrability of the center bundle as well as a global stability statement in the case where the partially hyperbolic diffeomorphism can be connected to the linear Anosov diffeomorphism by a path of partially hyperbolic diffeomorphisms.
The known techniques of working with codimension one foliations are no longer available, and we must trade those for dynamical-geometrical properties of the foliations related to the existence of global semiconjugacies to the linear representative. We start with an informal presentation of our results followed by a more precise formulation. \subsection{Dynamical coherence}\label{s.coherence} It is well known that the stable and unstable bundles of an Anosov diffeomorphism are integrable. This extends to the stable and unstable bundles of a partially hyperbolic diffeomorphism~\cite{HPS}, but the integrability of the center bundle is a subtler issue, see for instance~\cite{BuW}. When the center bundle is integrable, the partially hyperbolic diffeomorphism is {\it dynamically coherent}. \begin{mainteo}\label{t.main} If $f: \mathbb{T}^d \to \mathbb{T}^d$ is a partially hyperbolic diffeomorphism that is isotopic to a linear Anosov automorphism along a path of partially hyperbolic diffeomorphisms, then $f$ is dynamically coherent. \end{mainteo} We establish dynamical coherence without the usual restrictions on the dimension of the center bundle, the strength of the domination, or the geometry of the strong foliations in the universal cover. This is one of the first results on dynamical coherence without restriction on the dimension of the center bundle that holds in ``large'' open sets (whole connected components of partially hyperbolic diffeomorphisms) and in which the center fibers are noncompact. The techniques we introduce also allow us to show plaque-expansiveness as defined in \cite{HPS}, which in turn gives a global stability result in this setting. See Section \ref{Section-LeafConj}. \subsection{Maximizing measures} Another motivation for this paper grew from an attempt to extend the results of \cite{BFSV}.
In that paper it is shown that a well-known example of a partially hyperbolic diffeomorphism, known as Ma\~n\'e's example (see \cite{ManheContributions} or \cite{BDV} Chapter 7), has a unique measure of maximal entropy. In fact, it is shown there that, when equipped with its measure of maximal entropy, Ma\~n\'e's example is isomorphic, as a measure-preserving transformation, to the measure-preserving transformation given by a linear Anosov automorphism of $\mathbb{T}^3$ and Haar measure. This result was extended in~\cite{BF} to certain diffeomorphisms that are $C^0$ close to hyperbolic toral automorphisms, but not partially hyperbolic. In this case the diffeomorphisms satisfy a weak version of hyperbolicity called a dominated splitting. A further extension was obtained by Ures~\cite{Ures} to all \textit{absolutely} partially hyperbolic diffeomorphisms of $\mathbb{T}^3$ isotopic to Anosov, as well as to other higher dimensional cases under the further assumption of quasi-isometry of the strong foliations (in order to be able to use results of \cite{Brin,Hammerlindl}). For $\mathbb{T}^3$, under the assumption of pointwise partial hyperbolicity, this result can be extended to cover all (not necessarily absolute) partially hyperbolic diffeomorphisms of $\mathbb{T}^3$ isotopic to Anosov thanks to the results in \cite{Pot}, see \cite{HP}, Section 6.1. Let us briefly comment on the idea of the proof of the existence and uniqueness of maximal entropy measures for partially hyperbolic diffeomorphisms with one-dimensional center isotopic to Anosov. For such diffeomorphisms there always exists a continuous semiconjugacy to their linear part, and the main point in the proof consists in showing the following properties: \begin{itemize} \item The fibers of the semiconjugacy are connected arcs of bounded length (and thus carry no entropy).
\item The image of the set of points on which the semiconjugacy is $1$ to $1$ has total Lebesgue measure in $\mathbb{T}^d$. \end{itemize} These two results together with properties of topological and measure theoretic entropy give the desired result (see Section \ref{Section-MEM} for more details). The main point is to obtain dynamical coherence and use the fact that fibers of the semiconjugacy are contained in center manifolds; this is to be expected, since one expects the semiconjugacy to be injective along strong manifolds. This is why in \cite{Ures} the hypotheses of quasi-isometry and absolute partial hyperbolicity are used. We prove that partially hyperbolic diffeomorphisms (not necessarily absolute) which are isotopic to the linear Anosov diffeomorphism along a path of partially hyperbolic diffeomorphisms with one-dimensional center bundle have a unique measure of maximal entropy, see Corollary C. We remark that even for absolute partially hyperbolic diffeomorphisms this result was not known without further hypotheses on the geometry of the strong foliations. We remark that in~\cite{NY, BF} it was shown that there are systems with a unique measure of maximal entropy and whose topological entropy is $C^1$ locally constant even if the center bundles have dimension 2. In~\cite{NY} the setting is a partially hyperbolic diffeomorphism that is dynamically coherent with 2-dimensional center fibers, and in~\cite{BF} there are two transverse foliations, each 2-dimensional and tangent to the dominated splitting. In both of these cases the diffeomorphisms can be chosen to be isotopic to Anosov. Moreover, in~\cite{BFSV} it is shown that there are partially hyperbolic diffeomorphisms isotopic to Anosov (through a path of partially hyperbolic ones) having two-dimensional center and having a unique measure of maximal entropy (and whose topological entropy is also $C^1$ locally constant). This example can be extended to higher dimensional center.
Thus, another reason to establish dynamical coherence in the Main Theorem is that under certain additional hypotheses one may be able to establish that there is a unique measure of maximal entropy and constant topological entropy for systems isotopic to Anosov diffeomorphisms without the restriction of the center bundle being 1-dimensional (although of course one cannot expect that this holds in the entire connected component in this case). \subsection{Precise Setting}\label{s.precise} We say that $f: M \to M$ is \textit{partially hyperbolic} if there exists a $Df$-invariant splitting $TM = E^{ss}_f \oplus E^c_f \oplus E^{uu}_f$ such that there exist $N>0$ and $\lambda>1$ verifying that for every $x\in M$ and unit vectors $v^{\sigma}\in E^\sigma_f(x)$ ($\sigma= ss,c,uu$) we have \begin{itemize} \item[(i)] $\lambda \|Df_x^N v^{ss}\| < \|Df_x^N v^c \| < \lambda^{-1} \| Df_x^N v^{uu} \|, $ and \item[(ii)] $ \|Df_x^N v^{ss}\| < \lambda^{-1} < \lambda < \| Df_x^N v^{uu} \|$. \end{itemize} We will assume throughout that $N=1$ due to results in \cite{Gourmelon}. We remark that the bundles can be trivial. The definition we have used of partial hyperbolicity is the weakest one appearing in the literature. It is sometimes referred to as \textit{pointwise} partial hyperbolicity as opposed to \textit{absolute} partial hyperbolicity \footnote{For absolute partial hyperbolicity it is required that the inequalities hold for unit vectors that may belong to the bundles of different points.}. Absolute partial hyperbolicity sometimes simplifies proofs of dynamical coherence (see \cite{Brin}) but is quite artificial, as it does not capture the real nature of domination (this becomes clear, for example, when more bundles are involved). We remark that there are different results in the study of dynamical coherence depending on the definition used, see \cite{BBI2,HHU,Pot}.
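For a linear automorphism, the inequalities (i) and (ii) can be checked directly on eigenvalues. As a quick numerical sanity check, outside the paper's formal development, consider the block matrix $\operatorname{diag}(M^2, M)$ acting on $\mathbb{T}^4$, where $M$ is the cat map matrix $\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)$; since $M$ is symmetric, the growth rates on the four eigendirections are exactly the eigenvalues, and pointwise partial hyperbolicity holds with $N=1$ (the matrix and the constant $\mu$ below are our own illustrative choices):

```python
import math

# Cat map matrix M = [[2, 1], [1, 1]]; eigenvalues lam, 1/lam with lam = (3 + sqrt 5)/2.
lam = (3 + math.sqrt(5)) / 2
assert abs(lam * lam - 3 * lam + 1) < 1e-12  # characteristic polynomial x^2 - 3x + 1

# A = diag(M^2, M) acts on T^4 with invariant splitting
#   E^ss (rate lam^-2), E^ws (rate lam^-1), E^wu (rate lam), E^uu (rate lam^2),
# and center bundle E^c = E^ws + E^wu.
rate = {"ss": lam**-2, "ws": lam**-1, "wu": lam, "uu": lam**2}

mu = 2.0  # plays the role of the constant lambda > 1 in (i) and (ii)

# (i): mu * |A v^ss| < |A v^c| < mu^-1 * |A v^uu| for unit vectors;
# since M is symmetric, the extreme growth rates on E^c are lam^-1 and lam.
assert mu * rate["ss"] < rate["ws"] and rate["wu"] < rate["uu"] / mu

# (ii): |A v^ss| < mu^-1 < mu < |A v^uu|
assert rate["ss"] < 1 / mu < mu < rate["uu"]
print("diag(M^2, M) is pointwise (indeed absolutely) partially hyperbolic")
```

Since the rates are constant in $x$, this linear example is in fact absolutely partially hyperbolic; the pointwise/absolute distinction only becomes visible for nonlinear maps.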
We denote $$\mathsf{PH}(\mathbb{T}^d) = \{ f: \mathbb{T}^d \to \mathbb{T}^d \text{ partially hyperbolic} \}.$$ Let $A \in SL(d,\mathbb{Z})$ be a linear Anosov automorphism admitting a dominated splitting of the form $E^{ss}_A \oplus E^{ws}_A \oplus E^{wu}_A \oplus E^{uu}_A$. We write $E^s_A=E^{ss}_A \oplus E^{ws}_A$, $E^c_A= E^{ws}_A \oplus E^{wu}_A$ and $E^u_A = E^{wu}_A \oplus E^{uu}_A$. There may be many possibilities for the dimensions of $E^{ss}_A$ and $E^{ws}_A$ (respectively, for $E^{wu}_A$ and $E^{uu}_A$). We consider the subset $\mathsf{PH}_{A,s,u}(\mathbb{T}^d) \subset \mathsf{PH}(\mathbb{T}^d)$ of those diffeomorphisms which are isotopic to $A$ and whose splitting verifies $\dim E^{ss}_f = \dim E^{ss}_A=s$ and $\dim E^{uu}_f = \dim E^{uu}_A=u$. In order to simplify notation we will denote $\mathsf{PH}_{A,s,u}(\mathbb{T}^d)$ as $\mathsf{PH}_A(\mathbb{T}^d)$, leaving the dimensions of $E^{\sigma}_A$ ($\sigma=ss,uu$) implicit from the context (we will leave them fixed throughout the paper). For $X\subset\mathbb{T}^d$ we let $\widetilde X$ denote the lift of $X$ to $\mathbb{R}^d$. Similarly, for a diffeomorphism $f:\mathbb{T}^d\rightarrow \mathbb{T}^d$ we let $\widetilde{f}:\mathbb{R}^d\rightarrow \mathbb{R}^d$ be the lift of $f$. Given $f \in \mathsf{PH}_A(\mathbb{T}^d)$ we know from~\cite{Fr} that there exists a continuous and surjective map $H_f: \mathbb{R}^d \to \mathbb{R}^d$ such that $$ A \circ H_f = H_f \circ \widetilde f .$$ Moreover, $H_f(x + \gamma) = H_f(x) + \gamma$ for every $\gamma \in \mathbb{Z}^d$. \begin{rem}\label{Remark-ContinuousVariationSemiconj} The map $H_f$ varies continuously with $f$ in the $C^0$-topology. This is a general fact which does not require $f$ to be partially hyperbolic.
This means that given $\varepsilon>0$ there exists a neighborhood $\cU$ of $f$ in the $C^0$-topology such that $d(H_f(x),H_g(x))< \varepsilon$ for every $x\in \mathbb{R}^d$ and $g\in \cU$. \par {$\diamondsuit$} \vspace*{.05in} \end{rem} We say that $f$ is \textit{dynamically coherent} if there exist $f$-invariant foliations $\cW^{cs}_f$ and $\cW^{cu}_f$ tangent respectively to $E^{ss}_f\oplus E^c_f$ and $E^c_f \oplus E^{uu}_f$ (and hence there exists an invariant center foliation $\cW^c_f$ tangent to $E^c_f$). We say that a dynamically coherent $f \in \mathsf{PH}_A(\mathbb{T}^d)$ is \textit{center-fibered} if $H_f^{-1}(E^c_A + H_f(x))= \widetilde \cW^c_f(x)$. This means that, under the semiconjugacy $H_f$, distinct leaves of the center foliation map surjectively onto distinct translates of $E^c_A$. We denote by $\mathsf{PH}_A^0(\mathbb{T}^d)$ the union of the connected components of $\mathsf{PH}_A(\mathbb{T}^d)$ containing a dynamically coherent and center-fibered partially hyperbolic diffeomorphism. Notice that the linear Anosov diffeomorphism $A$ is center-fibered, so that $\mathsf{PH}_A^0(\mathbb{T}^d)$ is a non-empty open set with at least one connected component. Notice also that the space of Anosov diffeomorphisms may not be connected~\cite{FG}, so that the set $\mathsf{PH}_A^0(\mathbb{T}^d)$ is potentially larger than the connected component containing $A$. Note also that in \cite{FG} the construction is based on showing that there is an Anosov diffeomorphism which is conjugate to $A$ by a diffeomorphism isotopic to the identity (but not diffeotopic), and therefore this Anosov diffeomorphism is dynamically coherent and center-fibered. At the moment, we do not know the answers to the following questions, which we believe to have affirmative answers and would improve our results considerably.
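The semiconjugacy $H_f$ from \cite{Fr} can be illustrated numerically in the simpler one-dimensional analogue of expanding circle maps (a hedged sketch of our own, not the paper's construction): the linear model $E(x)=2x$ plays the role of $A$, and $H = \lim_n E^{-n}\circ \widetilde f^{\,n}$ converges geometrically since consecutive approximations differ by at most $(\varepsilon/2\pi)\,2^{-(n+1)}$:

```python
import math

EPS = 0.1  # size of the periodic perturbation (an illustrative choice)

def lift(x):
    """Lift of f(x) = 2x + (EPS/2pi) sin(2 pi x) (mod 1), a perturbation of E(x) = 2x.
    Note lift(x + 1) = lift(x) + 2, as for any degree-2 map."""
    return 2.0 * x + (EPS / (2.0 * math.pi)) * math.sin(2.0 * math.pi * x)

def H(x, n=40):
    """Semiconjugacy analogue: H = lim E^{-n} o lift^n, i.e. lift^n(x) / 2^n.
    The limit satisfies 2 H(x) = H(lift(x)) and H(x + 1) = H(x) + 1."""
    y = x
    for _ in range(n):
        y = lift(y)
    return y / 2.0 ** n

x = 0.3
assert abs(H(lift(x)) - 2.0 * H(x)) < 1e-8    # E o H = H o lift, up to truncation
assert abs(H(x + 1.0) - (H(x) + 1.0)) < 1e-8  # equivariance under deck transformations
```

In the toral setting the same two identities are exactly the defining relations $A \circ H_f = H_f \circ \widetilde f$ and $H_f(x+\gamma) = H_f(x) + \gamma$ stated above.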
\begin{quest} Is every partially hyperbolic diffeomorphism in $\mathsf{PH}_A(\mathbb{T}^d)$ in the connected component of a dynamically coherent and center-fibered one? In other words, does $\mathsf{PH}_A^0(\mathbb{T}^d)= \mathsf{PH}_A(\mathbb{T}^d)$? \end{quest} In fact, the hypothesis of being center-fibered is crucial to our proofs, but it is not clear whether or not it follows from dynamical coherence. \begin{quest} Is there an example of a partially hyperbolic diffeomorphism in $\mathsf{PH}_A(\mathbb{T}^d)$ which is dynamically coherent but not center-fibered? \end{quest} An affirmative answer to the first question would imply an affirmative answer to the second. However, in view of the results of \cite{FG} it seems clear that (if it admits a positive answer) the first question is, in principle, much harder. \subsection{Precise Statement of results} \begin{teoA}\label{Teo-Main} Every $f \in \mathsf{PH}_A^0(\mathbb{T}^d)$ is dynamically coherent and center-fibered. \end{teoA} We prove some intermediary results in more generality. Also, the theorem can be applied even in the case where $E^{ss}_A$ or $E^{uu}_A$ is zero-dimensional (if both are trivial, the theorem itself is trivial). We also provide a global stability result in this context by showing \textit{leaf conjugacy}, thus improving previous results on the case of one dimensional center (see \cite{Hammerlindl, HP}). Recall that two dynamically coherent partially hyperbolic diffeomorphisms $f,g:M\to M$ are said to be \textit{leaf conjugate} if there exists a homeomorphism $h:M\to M$ such that $h(\cW_f^c(x))=\cW_g^c(h(x))$ and $h\circ f(\cW_f^c(x))=\cW_g^c(g\circ h(x)).$ \begin{teoB}\label{t.leafconjugacy} Any two diffeomorphisms in the same connected component of $\mathsf{PH}_A^0(\mathbb{T}^d)$ are leaf conjugate.
In particular, any diffeomorphism in $\mathsf{PH}^0_A(\mathbb{T}^d)$ lying in the same connected component as $A$ is leaf conjugate to $A.$ \end{teoB} We also investigate the existence of measures of maximal entropy and we deduce the following consequence: \begin{corC}\label{Cor-Main} If $f \in \mathsf{PH}_A^0(\mathbb{T}^d)$ satisfies $\dim E^c_f=1$, then $f$ has a unique measure of maximal entropy, whose entropy equals that of the linear part. \end{corC} See Section \ref{Section-MEM} for definitions and the proof of the Corollary. It is possible that our results can be applied in the case of partially hyperbolic diffeomorphisms isotopic to Anosov diffeomorphisms in nilmanifolds. However, this has to be done with some care, since even the initial Anosov diffeomorphism may not be dynamically coherent (see \cite{BuW} for possible problems). It may then be the case that every partially hyperbolic diffeomorphism isotopic to such an Anosov diffeomorphism through partially hyperbolic diffeomorphisms will not be dynamically coherent (extending a construction announced by Gourmelon \cite{BuW}), but we have not checked this in detail. It is also possible that our techniques shed light on the study of partially hyperbolic diffeomorphisms of $\mathbb{T}^d$ isotopic to linear partially hyperbolic automorphisms even if these are not Anosov. This is because there are some types of semiconjugacies when the linear part is partially hyperbolic, and under some (possibly more restrictive) hypotheses one expects that our techniques could be adapted to that case. Notice that the non-dynamically coherent examples given by \cite{HHU} are not isotopic to their linear representative through partially hyperbolic systems. \textbf{Organization of the paper:} In the next section we provide some basic definitions and preliminary results. In Sections \ref{s.proper} and \ref{s.openclose} we state the main property and prove that it is an open and closed property in $\mathsf{PH}_A^0(\mathbb{T}^d)$; this property is fundamental for the proof of our results.
In Section \ref{s.dyncoherence} we give sufficient conditions for dynamical coherence. Theorem A is proved in Section \ref{s.teomain}, and Theorem B is proved in Section \ref{Section-LeafConj} together with other results concerning quasi-isometric foliations. Section \ref{Section-MEM} is devoted to the study of measures of maximal entropy and to the proof of Corollary C. \section{Definitions and preliminaries:} \subsection{First remarks} For $f\in\mathsf{PH}(\mathbb{T}^d)$ there exist $f$-invariant foliations $\cW^{ss}_f$ and $\cW^{uu}_f$ tangent to $E^{ss}_f$ and $E^{uu}_f$ respectively, which we call the \textit{strong foliations}. We let $\widetilde \cW^\sigma(x)$ denote the associated $\sigma$-foliation for $\widetilde{f}$ (where $\sigma= ss, uu$, or, when they exist, $\sigma=cs,cu,c$). In general, we have $H_f(\widetilde \cW^{uu}_f(x)) \subset E^{wu}_A \oplus E^{uu}_A + H_f(x)$. Similarly, for $\widetilde \cW^{ss}_f$ we have $H_f(\widetilde \cW^{ss}_f(x)) \subset E^{ws}_A \oplus E^{ss}_A + H_f(x)$. We now introduce some notation. Let $$B^\sigma_R(x) = B_R(x) \cap (E^\sigma_A + x) $$ for $\sigma = ss, uu, c, s, u, ws, wu$, where $B_R(x)$ is the ball of radius $R$ centered at $x.$ For $f \in \mathsf{PH}(\mathbb{T}^d)$ we let $$ D^{\sigma}_{R,f} (x) = \{ y \in \widetilde \cW^\sigma(x) \ : \ d_{\cW^\sigma}(x, y) < R \} $$ where $d_{\cW^\sigma}(\cdot, \cdot)$ denotes the metric inside the leaves induced by restricting the metric of $\mathbb{R}^d$ to a Riemannian metric on the leaves. Sometimes we will denote $d_{\cW^\sigma}$ by $d_{\sigma}$. From the continuous variation on compact parts of the strong manifolds one has the following classical result \cite{HPS}.
\begin{prop}\label{Prop-ContinuousVariation} For every $R>0$ and $\varepsilon>0$ there exist a $C^1$-neighborhood $\cU$ of $f$ and $\delta>0$ such that for every $g \in \cU$ and every $x,y\in \mathbb{R}^d$ with $d(x,y)<\delta$ one has $$ d_{C^1} (D^\sigma_{R,g}(x), D^\sigma_{R,f}(y)) < \varepsilon $$ \noindent for $\sigma= ss, uu$. \end{prop} \begin{rem}\label{Remark-PHConstants} For $f\in \mathsf{PH}(\mathbb{T}^d)$ there exist constants $1< \lambda_f < \Delta_f$ such that in a $C^1$-neighborhood $\cU$ of $f$ we have $$ D^{uu}_{(\lambda_f R),g}(\widetilde g(x)) \subset \widetilde g (D^{uu}_{R,g}(x)) \subset D^{uu}_{(\Delta_f R),g} (\widetilde g(x)) $$ for every $g \in \cU$, $x \in \mathbb{R}^d$ and $R>0$. A similar result holds for $D^{ss}$ by applying $\widetilde g^{-1}$. This follows from the fact that the derivative of $f$ restricted to the unstable bundle is always larger than $\lambda_f$ and the global derivative of $f$ is smaller than $\Delta_f$ (from compactness). Therefore, one can also show for $g$ $C^1$-close to $f$ that one has the same estimates for the derivative of $g$ on any vector lying in a small cone around the unstable direction of $f$, so that the estimates hold for disks tangent to a cone close to the unstable direction. \par {$\diamondsuit$} \vspace*{.05in} \end{rem} \subsection{Strong Almost Dynamical Coherence} The following definitions are motivated by the ones introduced in \cite{Pot}, slightly adapted to our needs. \begin{defi}[Almost parallel foliations] Let $\mathcal{F}_1$ and $\mathcal{F}_2$ be foliations of $\mathbb{T}^d$.
Then they are \textit{almost parallel} if there exists $R>0$ such that for every $x\in \mathbb{R}^d$ there exist $y_1$ and $y_2$ such that: \begin{itemize} \item $\widetilde \mathcal{F}_1(x) \subset B_R (\widetilde \mathcal{F}_2(y_1))$ and $\widetilde \mathcal{F}_2(y_1) \subset B_R(\widetilde \mathcal{F}_1(x))$, and \item $\widetilde \mathcal{F}_2(x) \subset B_R(\widetilde \mathcal{F}_1(y_2))$ and $\widetilde \mathcal{F}_1(y_2) \subset B_R(\widetilde \mathcal{F}_2(x)).$ \end{itemize}\par {$\diamondsuit$} \vspace*{.05in} \end{defi} Being almost parallel is an equivalence relation (see \cite{HP}, Appendix B). Notice that the condition can be stated in terms of Hausdorff distance by saying that for every $x\in \mathbb{R}^d$ there exist $y_1$ and $y_2$ such that the Hausdorff distance between $\widetilde \mathcal{F}_1(x)$ and $\widetilde \mathcal{F}_2(y_1)$ is smaller than $R$ and the Hausdorff distance between $\widetilde \mathcal{F}_2(x)$ and $\widetilde \mathcal{F}_1(y_2)$ is smaller than $R$.
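Since the almost parallel condition is phrased via the Hausdorff distance between leaves in $\mathbb{R}^d$, a finite-sample sketch may help fix ideas (our own illustration with point samples of two lines in $\mathbb{R}^2$; genuine leaves are unbounded, so this only approximates the distance on a bounded window):

```python
import math

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two finite point sets in R^2."""
    def sup_inf(A, B):
        return max(min(math.dist(a, b) for b in B) for a in A)
    return max(sup_inf(P, Q), sup_inf(Q, P))

xs = range(-50, 51)
flat     = [(x, 0.0)     for x in xs]  # sample of the line y = 0
parallel = [(x, 1.0)     for x in xs]  # translate y = 1: bounded Hausdorff distance
tilted   = [(x, 0.1 * x) for x in xs]  # non-parallel line: distance grows with the window

assert hausdorff(flat, parallel) == 1.0
assert hausdorff(flat, tilted) > 1.0  # and it increases as the window [-50, 50] grows
```

For two parallel lines the distance stays bounded no matter how large the window, which is the behavior the definition captures for leaves of almost parallel foliations.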
\begin{defi}[Strong Almost Dynamical Coherence] We say that $f \in \mathsf{PH}_A(\mathbb{T}^d)$ is \textit{strongly almost dynamically coherent} (SADC) if there exist foliations $\mathcal{F}^{cs}$ and $\mathcal{F}^{cu}$ (not necessarily invariant) which are respectively transverse to $E^{uu}_f$ and $E^{ss}_f$ and are almost parallel to the linear foliations given by $E^{ss}_A \oplus E^c_A$ and $E^c_A \oplus E^{uu}_A$ respectively. \par {$\diamondsuit$} \vspace*{.05in} \end{defi} The next result is proved in \cite[Proposition 4.5]{Pot}. \begin{prop}\label{Prop-SADCopenclosed} Being SADC is an open and closed property in $\mathsf{PH}_A(\mathbb{T}^d)$. In particular, every $f \in \mathsf{PH}_A^0(\mathbb{T}^d)$ verifies this property. \end{prop} The idea of the proof is that openness is immediate, since by the continuous variation of the bundles $E^{ss}$ and $E^{uu}$ the same foliations work. For closedness, if $f_n \to f$ one can choose $n$ large enough so that the bundles are close. By choosing the foliation $\mathcal{F}^{cs}_n$ for $f_n$ and iterating backwards by $f_n$ a finite number of times, one gets a foliation which works for $f$. Notice that since $f_n$ is isotopic to $A$ it fixes the class of foliations almost parallel to (the linear foliation given by) any $A$-invariant subspace. \section{$\sigma$-properness}\label{s.proper} We define $\Pi^{\sigma}_x$ to be the projection of $\mathbb{R}^d$ onto $E^\sigma_A + x$ along the complementary $A$-invariant subspaces; we will usually omit the subscript $x$. Let $H^{\sigma}_f:= \Pi^{\sigma}\circ H_f$. \begin{defi}[$\sigma$-properness] For $\sigma=ss,uu$ we say that $f\in \mathsf{PH}_A(\mathbb{T}^d)$ is $\sigma$-\textit{proper} if the map $H^\sigma_f|_{\widetilde \cW^\sigma}$ is (uniformly) proper.
More precisely, for every $R>0$ there exists $R'>0$ such that, for every $x\in \mathbb{R}^d$, we have that\footnote{This can also be expressed as: $(H_f^\sigma)^{-1}(B^\sigma_R (H_f(x))) \cap \widetilde \cW^\sigma_f(x) \subset D^\sigma_{R',f}(x)$.} $y\in \widetilde \cW^{\sigma}(x)$ and $d(H^{\sigma}_f(x), H^{\sigma}_f(y))<R$ implies $d_{\sigma}(x,y)<R'$. \par {$\diamondsuit$} \vspace*{.05in} \end{defi} \begin{lema}\label{Lema-AlcanzaUno} Assume $f \in \mathsf{PH}_A(\mathbb{T}^d)$ is such that there exists $R_1>0$ verifying that for every $x\in \mathbb{R}^d$ we have that $y \in \widetilde \cW^{\sigma}(x)$ and $d(H^\sigma_f(x), H^\sigma_f(y)) < 1$ implies $d_\sigma (x,y) < R_1$. Then $f$ is $\sigma$-proper. \end{lema} \dem We consider the case $\sigma=uu$; the other is symmetric. Since $A$ is Anosov and expands uniformly along $E^{uu}_A$, we know that given $R>0$ there exists $N>0$ such that for every $z\in \mathbb{R}^d$ we have $B^{uu}_R(A^N(z)) \subset A^N(B^{uu}_1(z))$. Consider $R>0$ and $R' = \Delta_f^N R_1$ with $N$ as defined above and $\Delta_f$ as in Remark \ref{Remark-PHConstants}. Let $$y \in (H^\sigma_f)^{-1}(B_R^{uu}(H_f(x))) \cap \widetilde{\cW}^{\sigma}_f(x).$$ Then, we can see that $\widetilde f^{-N}(y) \in D^{uu}_{R_1,f}(\widetilde f^{-N}(x))$. Indeed, since $$H^{uu}_f(y) \in B_R^{uu}(H_f(x))$$ and $A^{-N}(H_f(y)) = H_f(\widetilde f^{-N}(y))$, we have that $$\Pi^{uu}(A^{-N}(H_f(y))) \in B^{uu}_1(H_f(\widetilde f^{-N}(x)))$$ from how we chose $N$. Then, from the hypothesis of the Lemma we know that $\widetilde f^{-N}(y) \in (H_f^{uu})^{-1}(B^{uu}_1(H_f(\widetilde f^{-N}(x))))$, which is contained in $D^{uu}_{R_1,f}(\widetilde f^{-N}(x))$. Using Remark \ref{Remark-PHConstants} we deduce that $y \in D^{uu}_{R',f}(x)$ as desired.
\par {$\Box$} \vspace*{.05in} In the remainder of this section we will show the equivalence between $\sigma$-properness and the following conditions: \begin{itemize} \item[($I^{\sigma}$)] The function $H^\sigma_f$ is injective when restricted to each leaf of $\widetilde \cW^{\sigma}_f$.\\ \item[($S^{\sigma}$)] The function $H^\sigma_f$ is onto $E^\sigma_A+H_f(x)$ when restricted to each leaf $\widetilde \cW^{\sigma}_f(x)$. \end{itemize} \begin{lema}\label{Lema-PropImpliesInjectandSurject} If $f \in \mathsf{PH}_A(\mathbb{T}^d)$ is $\sigma$-proper, then it verifies both ($I^\sigma$) and ($S^\sigma$). \end{lema} \dem First we show the injectivity of $H^\sigma_f$ along leaves of $\widetilde \cW^{\sigma}_f$. Assume by contradiction that $y$ belongs to the leaf $\widetilde \cW^{\sigma}_f(x)$ of $\widetilde \cW^{\sigma}_f$ and that $H^\sigma_f(x)=H^\sigma_f(y)$ where $y\neq x$. Since $y \neq x$ there exists $\delta>0$ such that $y \notin D^\sigma_{\delta,f}(x)$. Using Remark \ref{Remark-PHConstants} we know that given $R_1>0$ there exists $N \in \mathbb{Z}$ such that $\widetilde f^N(y) \notin D^\sigma_{R_1,f}(\widetilde f^N(x))$. Consider $R_1$ given by $\sigma$-properness applied to $R=1$. Then, we know that $$(H^\sigma_f)^{-1}(B^{\sigma}_{1}(H_f(z))) \cap \widetilde \cW^\sigma_f(z) \subset D^\sigma_{R_1,f}(z)$$ for every $z\in \mathbb{R}^d$. However, $$(H^\sigma_f)^{-1}(B^{\sigma}_{1} (H_f (\widetilde f^N(x)))) \cap \widetilde \cW^\sigma_f(\widetilde f^N(x))$$ contains $\widetilde f^N(y)$, and $\widetilde f^N(y)$ is not contained in $D^\sigma_{R_1,f}(\widetilde f^N(x))$, a contradiction. Now, we shall show surjectivity of $H^\sigma_f$ along leaves of $\widetilde \cW^{\sigma}_f$ onto translates of $E^{\sigma}_A$. In the argument we will use the injectivity property established above.
We claim first that injectivity of $H^\sigma_f$ implies that there exists $\delta>0$ such that $$H^\sigma_f (\partial D^\sigma_{1,f}(x)) \cap B^{\sigma}_\delta(H_f(x)) = \emptyset $$ for every $x\in \mathbb{R}^d$. Indeed, otherwise there would exist a pair of sequences $x_n, y_n$ such that $y_n \in \partial D^\sigma_{1,f}(x_n)$ and such that $$H^\sigma_f (y_n) \in B^\sigma_{1/n} (H_f(x_n)).$$ Taking a subsequence and composing with deck transformations, we can assume that both sequences converge to points $x,y$. We have that $y \in \partial D^{\sigma}_{1,f}(x)$, in particular $y\neq x$, and we know by continuity of $H_f$ and $\Pi^\sigma$ that $H^\sigma_f(x)=H^\sigma_f(y)$, contradicting injectivity. From injectivity and Invariance of Domain (see for instance \cite{Hatcher}, Theorem 2B.3), we know that for every $x\in \mathbb{R}^d$ the set $S_x= H_f^\sigma (\partial D^{\sigma}_{1,f}(x))$ is a $(\dim E^{\sigma}_f -1)$-dimensional sphere embedded in $E^{\sigma}_A + H_f(x)$. Using Jordan's Separation Theorem (\cite{Hatcher}, Proposition 2B.1) and the fact that $\dim E^{\sigma}_f = \dim E^{\sigma}_A$, we deduce that $S_x$ separates $E^{\sigma}_A + H_f(x)$ into two components. Moreover, the image by $H^\sigma_f$ of $D^{\sigma}_{1,f}(x)$ is the bounded component and contains $H_f(x)$. By the claim above it also contains $B^{\sigma}_\delta(H_f(x))$. Now, fix $R>0$; then there exists $N\in \mathbb{Z}$ such that $$B^{\sigma}_R(z) \subset A^N(B^\sigma_\delta(A^{-N}(z)))$$ for every $z\in \mathbb{R}^d$. Using the semiconjugacy we see that $$ B^\sigma_R(H_f(x)) \subset H^\sigma_f (\widetilde f^N(D^\sigma_{1,f}(\widetilde f^{-N}(x)))).$$ Since this holds for any $R$, we know that $H^\sigma_f$ verifies ($S^\sigma$) as desired. \par {$\Box$} \vspace*{.05in} \begin{rem}\label{IimpliesS} Note that in the above proof we have proven that ($I^\sigma$) implies ($S^\sigma$).
\end{rem} \begin{lema}\label{Lema-ImasSimpliesSigmaProper} If $f \in \mathsf{PH}_A(\mathbb{T}^d)$ verifies ($I^\sigma$) and ($S^{\sigma}$), then $f$ is $\sigma$-proper. \end{lema} \dem The fact that $f$ has properties ($I^\sigma$) and ($S^{\sigma}$) implies that $$H^\sigma_f : \widetilde \cW^{\sigma}_f(x) \to E^{\sigma}_A + H_f(x)$$ is a homeomorphism for every $x \in \mathbb{R}^d$. In particular, we deduce that $$(H^\sigma_f)^{-1}(B^{\sigma}_1(H_f(x))) \cap \widetilde \cW^\sigma(x)$$ is bounded for every $x\in \mathbb{R}^d$. Consider the function $\varphi: \mathbb{R}^d \to \mathbb{R}$ such that $\varphi(x)$ is the infimum of the values of $R$ such that $$(H^\sigma_f)^{-1}(B^{\sigma}_1(H_f(x))) \cap \widetilde \cW^\sigma(x) \subset D^\sigma_{R,f}(x),$$ \noindent that is to say, the infimum of the values $R$ such that $y\in \widetilde \cW^\sigma(x)$ and $d(H^\sigma_f(x),H^\sigma_f(y)) \leq 1$ implies that $d_\sigma(x,y) \leq R$. From Lemma \ref{Lema-AlcanzaUno} we know that if $\varphi$ is uniformly bounded on $\mathbb{R}^d$ then $f$ is $\sigma$-proper. Since $\varphi$ is $\mathbb{Z}^d$-periodic, it is enough to control its values on a fundamental domain, which is compact. Thus, it is enough to show that if $x_n \to x$ then $\limsup \varphi(x_n) \leq \varphi(x)$. To show this, notice that $H^\sigma_f( D^\sigma_{\varphi(x),f}(x))$ contains $B^\sigma_1(H_f(x))$.
Since $H^\sigma_f$ restricted to the leaf is a homeomorphism, we deduce that for every $\varepsilon>0$ there exists $\delta>0$ such that $$ B^\sigma_{1+\delta}(H_f(x)) \subset H^\sigma_f( D^\sigma_{\varphi(x)+\varepsilon,f}(x)).$$ Using the continuous variation of the leaves of $\widetilde \cW^\sigma$ (Proposition \ref{Prop-ContinuousVariation}) and the continuity of $H^\sigma_f$, we deduce for $n$ large enough that $H^\sigma_f( D^\sigma_{\varphi(x)+\varepsilon,f}(x_n))$ contains $B^\sigma_1(H_f(x_n))$, showing that $\limsup \varphi(x_n) \leq \varphi(x) + \varepsilon$; this holds for every $\varepsilon>0$. \par {$\Box$} \vspace*{.05in} \section{Dynamical coherence}\label{s.dyncoherence} We now state a criterion for integrability of the bundles of a partially hyperbolic diffeomorphism. This criterion generalizes the one given in \cite{Pot} for dimension 3 (though it requires stronger hypotheses). We recall that two transverse foliations $\mathcal{F}_1$ and $\mathcal{F}_2$ of $\mathbb{T}^d$ have a \textit{global product structure} if for any two points $x, y \in \mathbb{R}^d$ the leaves $\widetilde \mathcal{F}_1(x)$ and $\widetilde \mathcal{F}_2(y)$ intersect in a unique point. \begin{teo}\label{Teo-CriteriumCoherence} Assume that $f \in \mathsf{PH}_A(\mathbb{T}^d)$ verifies the following properties: \begin{itemize} \item $f$ is SADC. \item $f$ is $uu$-proper. \end{itemize} Then, the bundle $E^{ss}_f \oplus E^c_f$ is integrable into an $f$-invariant foliation $\cW^{cs}_f$ that verifies $$ H_f^{-1} ((E^{ss}_A \oplus E^c_A) + H_f(x) )= \widetilde \cW^{cs}_f(x). $$ Moreover, $\widetilde \cW^{cs}_f$ has a global product structure with $\widetilde \cW^{uu}_f$.
\end{teo} \dem We know that $\{H_f^{-1}((E^{ss}_A \oplus E^c_A) + y)\}_{y \in \mathbb{R}^d}$ is an $\widetilde f$-invariant partition of $\mathbb{R}^d$ which is invariant under deck transformations. This follows as a direct consequence of the semiconjugacy relation and the fact that $H_f - \mathrm{id}$ is $\mathbb{Z}^d$-periodic. We shall show that, under the assumptions of the theorem, $\{H_f^{-1}((E^{ss}_A \oplus E^c_A) + y)\}_{y \in \mathbb{R}^d}$ is a foliation. Let $\mathcal{F}^{cs}$ be a foliation given by the SADC property. Since it is almost parallel to the linear foliation induced by the subspace $E^{ss}_A \oplus E^c_A$, and $H_f$ is at bounded distance from the identity, we know that $H_f(\widetilde \mathcal{F}^{cs}(x))$ is at bounded Hausdorff distance from (a translate of) $E^{ss}_A \oplus E^{c}_A$ for every $x\in \mathbb{R}^d$. From the properties ($I^{uu}$) and ($S^{uu}$) we deduce that there is a global product structure between $\widetilde \mathcal{F}^{cs}$ and $\widetilde \cW^{uu}$. Indeed, consider $x, y \in \mathbb{R}^d$; we shall first show that $\widetilde \mathcal{F}^{cs}(x)$ intersects $\widetilde \cW^{uu}(y)$. To do this, consider the set $Q=\mathbb{R}^d \setminus \widetilde \mathcal{F}^{cs}(x)$. By a Jordan separation type result one deduces that the $(d-cs-1)$-homology of $Q$ is non-trivial, where $cs= \dim E^{ss}_A +\dim E^c_A$. For a proof see for example Lemma 2.1 of \cite{ABP}.
Since $\widetilde \mathcal{F}^{cs}(x)$ is at bounded Hausdorff distance from $E^{ss}_A \oplus E^c_A$, one deduces that there is a non-trivial cycle of the $(d-cs-1)$-st homology group $H_{d-cs-1}(Q)$ inside $E^{uu}_A$. Choosing this cycle sufficiently far away from $\widetilde \mathcal{F}^{cs} (x)$, and using properties ($I^{uu}$) and ($S^{uu}$), one deduces the existence of a non-trivial cycle contained in $\widetilde \cW^{uu}_f (y)$. This gives the intersection point (for more details see the proof of Proposition 3.1 in \cite{ABP}). Now we must prove that the intersection point between $\widetilde \cW^{uu}(x)$ and $\widetilde \mathcal{F}^{cs}(y)$ is unique. For this, it is enough to show that given a leaf $\widetilde \mathcal{F}^{cs}(x)$ of $\widetilde \mathcal{F}^{cs}$ there is no leaf of $\widetilde \cW^{uu}_f$ intersecting $\widetilde \mathcal{F}^{cs}(x)$ more than once. We will use the following easy facts, which follow from the hypotheses we have made on $f$: \begin{itemize} \item[(1)] $H_f$ is injective along leaves of $\widetilde \cW^{uu}_f$. Moreover, for every $y\in \mathbb{R}^d$, the set $H_f(\widetilde \cW^{uu}_f(y))$ intersects each translate of $E^{ss}_A \oplus E^c_A$ in a unique point. \item[(2)] The image $L = H_f(\widetilde \mathcal{F}^{cs}(x))$ is contractible and at bounded Hausdorff distance from $E^{ss}_A \oplus E^c_A$. \end{itemize} Property (1) and the continuity of $\widetilde \cW^{uu}_f$ and $H_f$ allow us to define a continuous map $\varphi : L \to E^{ss}_A \oplus E^c_A$ that is onto by what we have already proved.
Local product structure and property (2) imply that $\varphi$ must be a covering, and consequently a homeomorphism. Using again that $H_f$ is injective along $\widetilde \cW^{uu}_f$, we conclude uniqueness of the intersection point, as desired. To finish the proof of the theorem we argue as in Theorem 7.2 of \cite{Pot}. Let us sketch the main points, since in this case the proof becomes simpler. Since $\widetilde \mathcal{F}^{cs}$ is uniformly transverse to $E^{uu}_f$, there are uniform local product structure boxes in $\mathbb{R}^d$. Inside each local product structure box, by choosing suitable coordinate systems, one can look at the leaves of the foliations $\widetilde \mathcal{F}_n = \widetilde f^{-n}(\widetilde \mathcal{F}^{cs})$ as uniformly bounded graphs from a disk of dimension $cs= \dim E^{ss}_f + \dim E^c_f$ to a disk of dimension $uu = \dim E^{uu}_f$. This family of graphs is precompact in the $C^1$-topology (see for instance \cite{HPS} or \cite[Section 3]{BuW2}). The key point, whose proof is identical to that of the first claim in the proof of Theorem 7.2 of \cite{Pot}, is that the image by $H_f$ of any of these limit graphs (which are $C^1$-manifolds tangent to $E^{ss}_f \oplus E^c_f$) is contained in the corresponding translate of $E^{ss}_A \oplus E^c_A$. Now, using the fact that $H_f$ is injective along strong unstable manifolds, one deduces that such limits are unique, and so the limit graphs form a well-defined foliation with the desired properties (see Theorem 7.2 of \cite{Pot} for more details). Since the foliation $\widetilde \cW^{cs}_f$ has the same properties as $\widetilde \mathcal{F}^{cs}$, we get global product structure exactly as above.
\par {$\Box$} \vspace*{.05in} A symmetric statement holds for $f$ being $ss$-proper, so we obtain the next corollary. \begin{cor}\label{Cor-ImplicaCenterFibered} If $f \in \mathsf{PH}_A(\mathbb{T}^d)$ verifies the SADC property and is both $uu$-proper and $ss$-proper, then $f$ is dynamically coherent and center fibered. \end{cor} To prove our main theorem, the goal will be to show that having the SADC property and being $\sigma$-proper for $\sigma=uu, ss$ are open and closed properties among partially hyperbolic diffeomorphisms of $\mathbb{T}^d$ isotopic to linear Anosov automorphisms. \section{Openness and Closedness of $\sigma$-properness}\label{s.openclose} In this section we prove that being $\sigma$-proper is an open and closed property among diffeomorphisms in $\mathsf{PH}_A(\mathbb{T}^d)$ having the SADC property. Without the SADC property it is not hard to show that it is an open property; however, our proof that it is a closed property uses Theorem \ref{Teo-CriteriumCoherence}, so we need the SADC property (which we already know is open and closed by Proposition \ref{Prop-SADCopenclosed}). \begin{prop}\label{Prop-OpennesSigmaProper} Being $\sigma$-proper is a $C^1$-open property in $\mathsf{PH}_A(\mathbb{T}^d)$. \end{prop} \dem From Lemma \ref{Lema-AlcanzaUno} it is enough to show that there exists a $C^1$-neighborhood $\cU$ of $f$ such that for each $g\in \cU$ there exists $R_1$ such that for every $x\in \mathbb{R}^d$ we have $$(H^\sigma_g)^{-1}(B_1^\sigma(H_g(x))) \cap \widetilde \cW^{\sigma}_g(x) \subset D^\sigma_{R_1,g}(x),$$ \noindent or equivalently, for every $y \in \widetilde \cW^\sigma_g(x)$, $d(H_g^\sigma(x),H_g^\sigma(y))\leq 1$ implies $d_\sigma(x,y) \leq R_1$.
Since $f$ is $\sigma$-proper, we know from Lemma \ref{Lema-PropImpliesInjectandSurject} that $H^\sigma_f$ is a homeomorphism from $\widetilde \cW^\sigma_f(x)$ onto $E^\sigma_A + H_f(x)$. We can choose $R_1$ such that $$ H^\sigma_f ((D^\sigma_{R_1,f}(x))^c) \cap B^\sigma_2 (H_f(x)) = \emptyset. $$ Let $A^\sigma_{R_1,R_2,g}(x)$ be the annulus $D^\sigma_{R_2,g}(x) \setminus D^\sigma_{R_1,g}(x)$ for any $R_2 > R_1$. For $R_2 > \Delta_f R_1$ we have $$ H^\sigma_f (A^\sigma_{R_1,R_2,f}(x)) \cap B^\sigma_2 (H_f(x)) = \emptyset .$$ Choose a neighborhood $\cU$ of $f$ such that \begin{itemize} \item[(i)] the constant $\Delta_f$ holds for every $g\in \cU$ (see Remark \ref{Remark-PHConstants}), and \item[(ii)] for every $g\in \cU$ we have that $H^\sigma_g (A^\sigma_{R_1,R_2,g}(x)) \cap B^\sigma_1 (H_g(x)) = \emptyset$ (this can be done due to Remark \ref{Remark-ContinuousVariationSemiconj} and Proposition \ref{Prop-ContinuousVariation}). \end{itemize} This implies that $$(H^\sigma_g)^{-1}(B_1^\sigma(H_g(x))) \cap \widetilde \cW^{\sigma}_g(x) \subset D^\sigma_{R_1,g}(x).$$ Indeed, otherwise there exists $y \in \widetilde \cW^\sigma_g(x)$ such that $H^\sigma_g (y) \in B_1^\sigma(H_g(x))$ but $y \notin D^\sigma_{R_2,g}(x)$. From the choice of $\Delta_f$ we know that there exists $n\in \mathbb{Z}$ such that $\widetilde g^{n}(y) \in A^\sigma_{R_1,R_2,g}(\widetilde g^n(x))$ (moreover, $n>0$ for $\sigma= ss$ and $n< 0$ for $\sigma=uu$) and one knows $$H^\sigma_g(\widetilde g^n(y)) \in B_1^\sigma(H_g(\widetilde g^n(x))),$$ which contradicts (ii) above. \par {$\Box$} \vspace*{.05in} Notice that the proof shows that $\sigma$-properness is indeed uniform in the whole neighborhood of $f$. The following is the crucial step in the proof of the main theorem. \begin{prop}\label{Prop-ClosednessSigmaProper} Being $\sigma$-proper and SADC is a $C^1$-closed property in $\mathsf{PH}_A(\mathbb{T}^d)$.
\end{prop} \dem Consider $f_k \to f$ such that the $f_k$ are $\sigma$-proper and SADC. From Proposition \ref{Prop-SADCopenclosed} we know that $f$ is also SADC. We will use $k$ instead of $f_k$ in the subscripts to simplify the notation. \\ Let us assume that $\sigma=uu$. Notice that the diffeomorphisms $f_k$ satisfy the hypothesis of Theorem \ref{Teo-CriteriumCoherence}, so that for every $k>0$ there exists an $f_k$-invariant foliation $\cW^{cs}_k$ tangent to $E^{ss}_k \oplus E^c_k$ which verifies $\widetilde \cW^{cs}_k(x) = H_k^{-1}((E^{ss}_A \oplus E^c_A) + H_k(x))$, or equivalently: $$ \text{(}\ast\text{)} \quad H^{uu}_k(x)=H^{uu}_k(y) \ \ \text{ if and only if } \ \ y \in \widetilde \cW^{cs}_k(x).$$ \begin{af} Given $\varepsilon>0$, there exist $\delta>0$, a cone field $\mathcal{C}^{uu}$ around $E^{uu}_f$ and $k_0$ such that if $k \geq k_0$ and $D$ is a disk tangent to $\mathcal{C}^{uu}$ of internal radius larger than $\varepsilon$ and centered at $x$, then $$ B^{uu}_\delta(H_k(x)) \subset H^{uu}_k(D). $$ \end{af} \dem Consider a finite covering of $\mathbb{T}^d$ by boxes of local product structure for the bundles of $f$. By choosing them small enough (in particular, smaller than $\varepsilon$) it is possible to assume that the bundles are almost constant in each box (and, after changing the metric, also almost orthogonal to each other). By choosing $k_0$ sufficiently large we know that for every $k \geq k_0$ the same boxes are also local product structure boxes for $f_k$.
If $B$ is a box of local product structure, we denote by $2B$ and $3B$ the boxes of double and triple the size, respectively, centered at the same point as $B$. We can consider the covering small enough and $k_0$ sufficiently large so that for every $k>k_0$: \begin{itemize} \item the boxes $2B$ and $3B$ are also local product structure boxes for all the $f_k$'s; \item for every $B$ of the covering and every disk $D$ tangent to $\mathcal{C}^{uu}$ of internal radius $\varepsilon$ and centered at a point $x \in B$, we have that $D$ intersects, in a (unique) point of $3B$, every center-stable plaque of $\cW^{cs}_k$ which intersects $2B$ (see Figure~\ref{figure}); \item the previous condition together with ($\ast$) implies that for every disk $D$ tangent to $\mathcal{C}^{uu}$ of internal radius $\varepsilon$ and centered at a point $x\in B$ one has $H^{uu}_k(2B) \subset H^{uu}_k(D)$. \end{itemize} \begin{figure}[ht]\begin{center} \input{figure.pstex_t} \caption{\small{The local product structure boxes.}} \label{figure} \end{center}\end{figure} By ($\ast$) above we know that $H_k$ is injective along leaves of $\widetilde \cW^{uu}_k$, so that, given a connected component $2B$ of the lift of a local product structure box, we have $$\mathrm{int}( H^{uu}_k (2B))\neq \emptyset $$ \noindent and that $H^{uu}_k(x)$ is in the interior of $H^{uu}_k(2B)$ for any point $x$ in $B$. Moreover, since there are finitely many such boxes, we know that there exists a uniform $\delta$ such that $H^{uu}_k(B)$ is at distance $\delta$ from the boundary of $H^{uu}_k(2B)$, independently of the box $B$. We deduce that every disk $D$ of internal radius $\varepsilon$ centered at a point $x$ and tangent to a small cone around $E^{uu}_f$ verifies that $H^{uu}_k (D)$ contains $B^{uu}_\delta(H_k(x))$, as desired.
\par {$\diamondsuit$} \vspace*{.05in} \begin{af} For any $k$ large enough and $x,y \in \mathbb{R}^d$ we have that $\widetilde \cW^{uu}_f(x)$ intersects $\widetilde \cW^{cs}_k(y)$. \end{af} \dem The previous claim implies that if $k$ is large enough, then for every $x, y \in \mathbb{R}^d$, denoting by $d$ the distance between $H^{uu}_k(x)$ and $ H^{uu}_k(y)$ and taking $N_0>\frac{d}{\delta}$, we have \begin{equation}\label{eq.intersection} D^{uu}_{N_0\varepsilon, f} (x) \cap \widetilde \cW^{cs}_k(y) \neq \emptyset. \end{equation} Indeed, consider the straight segment joining $H^{uu}_k(x)$ with $H^{uu}_k(y)$ in $E^{uu}_A+H_k(x)$. We can cover this segment by $N_0$ balls $B_1,\ldots,B_{N_0}$ of radius $\delta/2$ such that $B_i\cap B_{i+1}\neq\emptyset$. Now, $H^{uu}_k(D^{uu}_{\varepsilon, f}(x))$ contains $B_1$. Thus, $H^{uu}_k(D^{uu}_{2\varepsilon, f}(x))$ contains $B_1\cup B_2$ and, inductively, $H^{uu}_k(D^{uu}_{N_0\varepsilon, f}(x))$ contains $B_1\cup\ldots\cup B_{N_0}$ and hence $H^{uu}_k(y)$. Using property ($\ast$) above, this implies~\eqref{eq.intersection}. Therefore, for every $x,y \in \mathbb{R}^d$ we have that $\widetilde \cW^{uu}_f(x)$ intersects $\widetilde \cW^{cs}_k(y)$. \par {$\diamondsuit$} \vspace*{.05in} \begin{af} For $k$ large enough the foliations $\widetilde \cW^{uu}_f$ and $\widetilde \cW^{cs}_k$ have a global product structure. Equivalently, the map $H^{uu}_k|_{\widetilde \cW^{uu}_f(x)} : \widetilde \cW^{uu}_f(x) \to E^{uu}_A + H_f(x)$ is a homeomorphism.
\end{af} \dem By the previous claim, it is enough to show that the intersection point between $\widetilde \cW^{uu}_f(x)$ and $\widetilde \cW^{cs}_k(y)$ is unique for any $x,y$. Since $\widetilde \cW^{uu}_f(x)$ intersects $\widetilde \cW^{cs}_k(y)$ transversally for any $x,y$ and $$H_k(\widetilde \cW^{cs}_k(y))= (E^{ss}_A \oplus E^c_A) + H_k(y),$$ we conclude that $H_k(\widetilde \cW^{uu}_f(x))$ is topologically transversal to $(E^{ss}_A\oplus E^c_A) + H_k(y)$ for any $x,y$. This implies that $$\Pi^{uu}: H_k(\widetilde\cW^{uu}_f(x))\to E^{uu}_A$$ is a covering map, and since $H_k(\widetilde\cW^{uu}_f(x))$ is contractible we know it is one-to-one. Thus, $H^{uu}_k$ restricted to $\widetilde\cW^{uu}_f(x)$ is a homeomorphism onto $E^{uu}_A$, which implies the desired global product structure. \par {$\diamondsuit$} \vspace*{.05in} We now return to the proof of the proposition. We must show (see Lemma \ref{Lema-AlcanzaUno}) that there exists some $R>0$ such that for every $x\in \mathbb{R}^d$, if $y \in \widetilde \cW^{uu}_f(x)$ and $d(H^{uu}_f(x),H^{uu}_f(y)) \leq 1$ then $d_{uu}(x,y) \leq R$; equivalently, that $$ (H^{uu}_f)^{-1}(B^{uu}_1(H_f(x))) \cap \widetilde \cW^{uu}_f(x) \subset D^{uu}_{R,f}(x).$$ We will show that for every $x\in \mathbb{R}^d$ there exists some finite $\psi(x)$ such that $$(H^{uu}_f)^{-1}(B^{uu}_1(H_f(x))) \cap \widetilde \cW^{uu}_f(x) \subset D^{uu}_{\psi(x),f}(x).$$ Then, one concludes by arguing as in the proof of Lemma \ref{Lema-ImasSimpliesSigmaProper}: the infimum $\varphi(x)$ of all possible values of $\psi(x)$ satisfying the property is a semicontinuous and periodic function, and a compactness argument completes the proof.
We know that $d_{C^0}(H_k,H_f)< K_0$. The previous claim and the fact that $f_k$ is center-fibered imply that $H^{uu}_k$ restricted to $\widetilde\cW^{uu}_f(x)$ is a homeomorphism onto $E^{uu}_A$, so we know that there exists some $R_1>0$ such that $$H^{uu}_k((D^{uu}_{R_1,f}(x))^c)\cap B^{uu}_{2+2K_0}(H_k(x))=\emptyset$$ and so $$H^{uu}_f((D^{uu}_{R_1,f}(x))^c)\cap B^{uu}_{1}(H_f(x))=\emptyset.$$ This implies that $$(H^{uu}_f)^{-1}(B^{uu}_1(H_f(x))) \cap \widetilde \cW^{uu}_f(x) \subset D^{uu}_{R_1,f}(x).$$ Setting $\psi(x)=R_1$ we conclude the proof. \par {$\Box$} \vspace*{.05in} \begin{rem} In principle, being $\sigma$-proper could be a closed property by itself, but in the proof we had to assume that the sequence (and the limit) also had the SADC property. It could be that another proof, without the use of that property, is possible. \end{rem} \section{Proof of Theorem A}\label{s.teomain} From our previous results we obtain the following: \begin{teo}\label{Teo-Main2} If $f \in \mathsf{PH}_A(\mathbb{T}^d)$ is in the same connected component of a partially hyperbolic $g$ that is $\sigma$-proper (for $\sigma= ss,uu$) and has the SADC property, then $f$ is dynamically coherent and center fibered. \end{teo} \dem Propositions \ref{Prop-OpennesSigmaProper} and \ref{Prop-ClosednessSigmaProper}, together with Proposition \ref{Prop-SADCopenclosed}, imply that being $\sigma$-proper ($\sigma=ss,uu$) and having the SADC property is an open and closed property in $\mathsf{PH}_A(\mathbb{T}^d)$. This implies that every $f$ in the same connected component of a partially hyperbolic $g$ that is $\sigma$-proper (for $\sigma= ss,uu$) and has the SADC property satisfies the hypothesis of Corollary \ref{Cor-ImplicaCenterFibered}.
\par {$\Box$} \vspace*{.05in} \demo{of Theorem A} \, It is enough to show that if $f$ is a partially hyperbolic diffeomorphism in $\mathsf{PH}^0_A(\mathbb{T}^d)$ that is dynamically coherent and center-fibered, then it must be $\sigma$-proper for $\sigma=ss,uu$ and have the SADC property. This follows from the following remarks: \begin{itemize} \item The central stable foliation $\cW^{cs}_f$ is transversal to $E^{uu}_f$ and the central unstable foliation $\cW^{cu}_f$ is transversal to $E^{ss}_f$. \item Since $f$ is center fibered, we know the semiconjugacy is injective along strong stable and unstable manifolds, and also that $$H(\cW^{cs}_f(x))\subset (E^{ss}_A\oplus E^c_A) +H(x)$$ and $$H(\cW^{cu}_f(x))\subset (E^{uu}_A\oplus E^c_A) +H(x).$$ \item Again, since $f$ is center fibered, we know $\Pi^\sigma\circ H$ is injective along strong stable and unstable manifolds. This also implies surjectivity (see Remark \ref{IimpliesS}) and hence we have $\sigma$-properness for $\sigma=ss,uu$. \item The surjectivity above and the center fibered property imply that $$H(\cW^{cs}_f(x))= (E^{ss}_A\oplus E^c_A) +H(x)$$ and $$H(\cW^{cu}_f(x))= (E^{uu}_A\oplus E^c_A) +H(x),$$ and from this we easily get the SADC property, since $H$ is at bounded distance from the identity. \end{itemize} \par {$\Box$} \vspace*{.05in} \section{Leaf conjugacy and global stability}\label{Section-LeafConj} We now deduce some additional properties of these systems.
We recall that a foliation $\widetilde \mathcal{F}$ of $\mathbb{R}^d$ is called \textit{quasi-isometric} if there exist constants $C,D>0$ such that for any pair of points $x,y$ in the same leaf of $\widetilde \mathcal{F}$ one has $$ d_{\widetilde \mathcal{F}}(x,y) \leq C d(x,y) + D, $$ \noindent where, as before, $d_{\widetilde \mathcal{F}}(\cdot,\cdot)$ denotes the leafwise distance between points and $d(\cdot,\cdot)$ the usual distance in $\mathbb{R}^d$. We remark that if the foliation $\widetilde \mathcal{F}$ has $C^1$-leaves, it is possible to change the constants so as to have $D=0$. \begin{prop}\label{Proposition-QuasiIsometry} If $f\in \mathsf{PH}^0_A$ is $\sigma$-proper $(\sigma=ss,uu)$, then the foliation $\widetilde \cW^{\sigma}$ is quasi-isometric. \end{prop} \dem First we choose a metric on $\mathbb{R}^d$ by declaring $E^{ss}_A, E^c_A$ and $E^{uu}_A$ mutually orthogonal; this metric is equivalent to the usual metric on $\mathbb{R}^d$. The proof consists of three steps: \begin{itemize} \item[(i)] For every $K>0$ there exists $C_K$ such that if $d(x,y)<K$ and $y\in \widetilde \cW^\sigma(x)$, then $d_\sigma(x,y) < C_K d(x,y) $.
\item[(ii)] For every $C_1>0$ there exists $K$ such that for every $x\in \mathbb{R}^d$ we have that $\widetilde \cW^{\sigma}(x)$ is contained in $B_{K/2}(x) \cup (\mathcal{E}^\sigma_{C_1}+x)$, where $\mathcal{E}^\sigma_{C_1}$ is the cone around $E^{\sigma}_A$ of vectors $v = v^\sigma + v^\perp$ satisfying $\|v^\perp \| < C_1 \|v^\sigma \|$ with $v^\sigma \in E^{\sigma}_A$ and $v^\perp \in (E^\sigma_A)^\perp$. Notice that $(E^\sigma_A)^\perp=E^{cs}_A$ if $\sigma=uu$ and $(E^\sigma_A)^\perp=E^{cu}_A$ if $\sigma=ss$. \item[(iii)] If $y \in \widetilde \cW^\sigma(x)$, one can choose points $x=x_1, \ldots, x_n=y$ in $\widetilde \cW^\sigma(x)$ and $K$ such that $d(x_i,x_{i+1})< K$ and such that $$\sum_{i=1}^{n-1} d(x_i,x_{i+1}) \leq 3 d(x,y).$$ \end{itemize} Once we have this, putting together properties (i) and (iii) we deduce that $$ d_\sigma(x,y) \leq \sum d_\sigma (x_i, x_{i+1}) \leq C_K \sum d(x_i,x_{i+1}) < 3C_K d (x,y), $$ \noindent showing quasi-isometry. We first notice that (i) is a direct consequence of $\sigma$-properness. In fact, if (i) did not hold we would obtain sequences $x_n,y_n$ of points at distance smaller than or equal to $K$ such that $d_\sigma(x_n,y_n) \geq n$. Using $\sigma$-properness we would obtain that $d(H_f(x_n),H_f(y_n)) \to \infty$. On the other hand, since $H_f$ is at bounded distance from the identity and $x_n$ and $y_n$ are at distance smaller than $K$, one gets that $d(H_f(x_n),H_f(y_n))$ must be bounded, a contradiction. Let us prove (ii). Since $H_f$ is at bounded distance from the identity, to prove (ii) it is enough to show the same property for $H_f(\widetilde \cW^{\sigma}_f(x))$. Assume that it is not true.
Then, there exists a cone $\mathcal{E}^\sigma$ for which we may find sequences (using $\sigma$-properness) $x_n, y_n \in \mathbb{R}^d$ such that $y_n \in H_f(\widetilde \cW^{\sigma}_f(x_n))$ with $d(y_n,H_f(x_n))\to\infty$ and $y_n\notin \mathcal{E}^\sigma+H_f(x_n)$. We assume for simplicity that $\sigma=uu$; the other case is quite similar. Let $\lambda_c^{-1}=\|A_{/E^{cs}_A}\|$ and let $\lambda_u=\|A^{-1}_{/E^{uu}_A}\|$. Notice that $\lambda_u/\lambda_c<1$. Notice first that if $\lambda_c >1$, then $A_{/E^{cs}_A}$ is contracting, so that $H_f : \widetilde \cW^{uu}_f(x) \to E^{uu}_A + H_f(x)$ is a homeomorphism and property (ii) is immediate. Also, since $A$ is Anosov, we can assume (maybe by considering an iterate) that $\lambda_c \neq 1$. So, in what follows we shall assume that $\lambda_c < 1$. Let $\varepsilon >0$ and let $m_n=\inf\{m\ge 0:\lambda_c^m d(y_n,H_f(x_n))\le \varepsilon\}$. Since $d(y_n,H_f(x_n))\to\infty$ and $y_n\notin \mathcal{E}^\sigma+H_f(x_n)$, we have that $m_n\to\infty$, and we know $$d(A^{-m_n}(y_n), A^{-m_n}(H_f(x_n)))\ge\lambda_c\varepsilon.$$ On the other hand, $$ \begin{array}{ll} & d(\Pi^{uu}(A^{-m_n}(y_n)), A^{-m_n}(H_f(x_n)))\\ = & d(A^{-m_n}(\Pi^{uu}(y_n)), A^{-m_n}(H_f(x_n)))\\ \le & \lambda_u^{m_n}\frac{d(y_n,H_f(x_n))}{C_1}\le \left(\frac{\lambda_u}{\lambda_c}\right)^{m_n}\frac{\varepsilon}{C_1}\to_{n\to\infty}0. \end{array} $$ Now, composing with deck transformations we may assume that $$f^{-m_n}(x_n)\to x$$ and $$A^{-m_n}(y_n)\to y\in H_f(\widetilde \cW^{\sigma}_f(x)),\; y\neq H_f(x).$$ But $\Pi^{uu}(y)=H_f(x)$, a contradiction with property ($I^{uu}$) (which follows from $uu$-properness by Lemma \ref{Lema-PropImpliesInjectandSurject}). Thus, we obtain that (ii) is verified.
Finally, to prove (iii) we use (ii): we choose $C_1\le 1/2$ and $K$ from (ii) and define the sequence $x_i$ inductively. First, we set $x_1=x$. Then, if $d(x_i,y)<K$ we choose $x_{i+1}=y$. Otherwise we pick $x_{i+1}$ as follows. Notice that $d(\Pi^\sigma(y),x_i)\ge \frac{2}{3}K$, and let $z_{i+1}$ be the point in the segment joining $x_i$ and $\Pi^\sigma(y)$ (which is contained in $E^\sigma_A+x_i$) at distance $\frac{2}{3}K$ from $x_i$. Now, $(\Pi^{\sigma})^{-1}(z_{i+1})\cap (\mathcal{E}^\sigma +x_i)$ is a disc $D_i$ of radius $\frac{2}{3}C_1 K$ in $(E^\sigma_A)^\perp+z_{i+1}$. Since $H^\sigma_f$ is a homeomorphism onto $E^\sigma_A+H_f(x_i)$ when restricted to $\widetilde\cW^\sigma(x_i)$ and $H_f$ is at bounded distance from the identity, we conclude that $\Pi^\sigma$ is onto $E^\sigma_A+x_i$ when restricted to $\widetilde\cW^\sigma_f(x_i)$. By (ii) there is at least one point in $D_i\cap \widetilde\cW^\sigma_f(x_i)$. We set $x_{i+1}$ to be one of these points. We must now show that the process finishes in finitely many steps. Notice that since $y\in \mathcal{E}^\sigma+x_i$, the straight line segment joining $x_i$ and $y$ intersects $D_i$ and $d(y,D_i)\le d(x_i,y)-\frac{2}{3}K$. Thus $$d(x_{i+1},y)\le d(x_i,y)-\frac{2}{3}K +\frac{2}{3}C_1K\le d(x_i,y)-\frac{1}{3}K,$$ so that the process ends in finitely many steps.
Notice also that $d(x_i,x_{i+1})\le K$, and so the above inequality also shows that $$d(x_{i+1},y)\le d(x_i,y)-\frac{1}{3}d(x_i,x_{i+1}).$$ Therefore, if we have chosen the sequence $x=x_1,x_2,\ldots,x_n=y$, we have by induction that $$d(x_{n-1},y)\le d(x,y)-\frac{1}{3}\sum_{i=1}^{n-2}d(x_i,x_{i+1})$$ and so $$\sum_{i=1}^{n-1}d(x_i,x_{i+1})\le 3 d(x,y).$$ \par {$\Box$} \vspace*{.05in} When the central dimension is one, it is possible to use the results of \cite{Hammerlindl} to obtain a property called \textit{leaf conjugacy}. This notion is related to the existence of the semiconjugacy but slightly different: it says that there exists a homeomorphism $h: \mathbb{T}^d \to \mathbb{T}^d$ which sends center leaves of $f$ to center leaves of the linear Anosov diffeomorphism and conjugates the dynamics modulo the center behavior (see \cite{Hammerlindl} for more details). The results in~\cite{Hammerlindl} are proved in the absolute partially hyperbolic setting, but in \cite{HP} it is explained which hypotheses should be added in the pointwise case in order to recover his results. \begin{prop}\label{Cor-ImplicaLeafConjCenter1} Let $f \in \mathsf{PH}_A(\mathbb{T}^d)$ with $\dim E^c_f=1$ verify the SADC property and $\sigma$-properness for $\sigma=ss,uu$. Then $f$ is leaf conjugate to $A$. \end{prop} \dem Theorem 3.2 in \cite{HP} states that the following properties of a dynamically coherent partially hyperbolic diffeomorphism with one-dimensional center and isotopic to $A$ guarantee leaf conjugacy: \begin{itemize} \item[(i)] The foliations $\widetilde \cW^{\sigma}_f$ ($\sigma=cs,cu$) are almost parallel to the corresponding linear foliations of $A$. \item[(ii)] The foliations $\widetilde \cW^\sigma_f$ are \textit{asymptotic} to $E^{\sigma}_A$ (i.e., $$\frac{d(\Pi^\sigma(x),\Pi^\sigma(y))}{d(x,y)} \to 1$$ as $d(x,y) \to \infty$ with $x,y$ in the same leaf of $\widetilde \cW^\sigma$).
\item[(iii)] The foliations $\widetilde \cW^{\sigma}_f$ ($\sigma=ss,uu$) are quasi-isometric. \end{itemize} The SADC property implies property (i). Using the semiconjugacy with $A$, it is quite easy to see that conditions ($I^{\sigma}$) and ($S^\sigma$) imply property (ii); recall that $\sigma$-properness implies properties ($I^\sigma$) and ($S^\sigma$) (Lemma \ref{Lema-PropImpliesInjectandSurject}). The proof then concludes by applying Proposition \ref{Proposition-QuasiIsometry}, which shows that (iii) is also satisfied. \par {$\Box$} \vspace*{.05in} Using the concept of plaque-expansiveness, we are able to prove the previous result without assuming one-dimensionality of the center bundle. We remark that in \cite{HamPlaque} it is proved that \textit{absolute partial hyperbolicity} and quasi-isometry imply plaque-expansiveness. We recall the definition of plaque expansiveness from \cite{HPS}: let $f: M \to M$ be a dynamically coherent partially hyperbolic diffeomorphism with center foliation $\cW^c$; we say that $f$ is \textit{plaque-expansive} if for every $\varepsilon>0$ sufficiently small the following holds: \begin{itemize} \item[--] Let $\{x_n \}_{n\in \mathbb{Z}}$ and $\{y_n\}_{n\in \mathbb{Z}}$ be two sequences in $M$ such that $d(x_n,y_n) < \varepsilon$ and such that the points $f(x_n)$, $x_{n+1}$ (resp. $f(y_n)$, $y_{n+1}$) belong to the same leaf of $\cW^c$ and $d_{\cW^{c}}(f(x_n), x_{n+1})< \varepsilon$ (resp. $d_{\cW^{c}}(f(y_n), y_{n+1})< \varepsilon$) for every $n$. Then, $x_0$ and $y_0$ belong to the same center leaf and $d_{\cW^{c}}(x_0,y_0)< K \varepsilon$ for $K$ independent of $\varepsilon$.
\end{itemize} As before, we use $\sigma$-properness to obtain this property: \begin{prop}\label{prop-PlaqueExpansive} Let $f: \mathbb{T}^d \to \mathbb{T}^d$ be a partially hyperbolic diffeomorphism isotopic to a linear Anosov automorphism that is dynamically coherent and center fibered. Then the center foliation is plaque-expansive. \end{prop} \dem Consider $\varepsilon$ small enough so that any two points at distance less than $\varepsilon$ belong to the same local product structure box (as in the proof of the claim inside the proof of Proposition \ref{Prop-ClosednessSigmaProper}). Note that if we lift this product box to the universal cover (and take a connected component), then a center leaf intersects this box in at most one connected component (otherwise this would violate the center fibered property and the injectivity of the semiconjugacy along the strong stable and unstable foliations). Let $\{x_n\}$ and $\{y_n\}$ be two pseudo-orbits verifying the properties above, that is: \begin{itemize} \item[(i)] $d(x_n, y_n) < \varepsilon$ for every $n\in \mathbb{Z}$. \item[(ii)] $d_{\cW^{c}}(f(x_n),x_{n+1})< \varepsilon$ for every $n\in \mathbb{Z}$. In particular, $f(x_n)$ and $x_{n+1}$ belong to the same center leaf. \item[(iii)] $d_{\cW^{c}}(f(y_n),y_{n+1})< \varepsilon$ for every $n\in \mathbb{Z}$. In particular, $f(y_n)$ and $y_{n+1}$ belong to the same center leaf. \end{itemize} We must prove that this implies that $x_0$ and $y_0$ belong to the same leaf of $\cW^c$ and $d_{\cW^{c}}(x_0,y_0)<K \varepsilon$ for some uniform $K$ which does not depend on $\varepsilon$. To do this, we will lift the sequences to the universal cover. Choose $\tilde x_0$ and $\tilde y_0$ in $\mathbb{R}^d$ such that they project respectively to $x_0$ and $y_0$ and such that $d(\tilde x_0,\tilde y_0) < \varepsilon$.
Since $\varepsilon$ is small, this already determines uniquely a pair of sequences $\{ \tilde x_n \}$ and $\{ \tilde y_n \}$ satisfying properties analogous to (i), (ii) and (iii). We now consider the sequences $\{ X_n= H_f(\tilde x_n) \}$ and $\{ Y_n = H_f(\tilde y_n) \}$. Using the expansivity of $A$, one deduces that $X_0$ and $Y_0$ lie in the same leaf of the foliation by translates of $E^{c}_A$. Since $f$ is center-fibered, we deduce that both $\tilde x_0$ and $\tilde y_0$ lie in the same leaf of $\widetilde \cW^{c}_f$. Moreover, from how we chose the value of $\varepsilon$ and the fact that $d(\tilde x_0, \tilde y_0) < \varepsilon$, one gets the desired property. \par {$\Box$} \vspace*{.05in} As a consequence of this and the results of Chapter 7 in \cite{HPS}, we obtain our global stability result: \demo{of Theorem B} \, Let $f$ and $g$ be two diffeomorphisms in the same connected component of $\mathsf{PH}^0_A(\mathbb{T}^d)$. Consider a path $f_t$ such that $f_0=g$ and $f_1=f$ and such that $f_t$ is in $\mathsf{PH}^0_A(\mathbb{T}^d)$ for every $t\in [0,1]$. By the main theorem, $f_t$ is dynamically coherent and center fibered for every $t\in [0,1]$. Thus Proposition \ref{prop-PlaqueExpansive} applies and we know that the center foliation of $f_t$ is plaque-expansive for every $t$. Using the main result of Chapter 7 of \cite{HPS}, we deduce that for every $t_0$ there exists $\varepsilon$ such that the diffeomorphism $f_{t_0}$ is leaf-conjugate to every $f_t$ with $t \in (t_0 - \varepsilon, t_0 +\varepsilon)$. By compactness and transitivity of the relation of being leaf conjugate, one deduces Theorem B. \par {$\Box$} \vspace*{.05in} \section{Measures of Maximal Entropy}\label{Section-MEM} The variational principle states that if $f:X\rightarrow X$ is continuous and $X$ is a compact metric space, then $h_{\mathrm{top}}(f) = \sup_{\mu} h_\mu(f)$, where $\mu$ varies among all $f$-invariant Borel probability measures; see for instance \cite{Manhe}.
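To fix ideas, recall the standard example (a classical fact, independent of the results of this paper) of a hyperbolic linear automorphism $A$ of $\mathbb{T}^d$ with eigenvalues $\lambda_1,\ldots,\lambda_d$ counted with multiplicity: here the supremum in the variational principle is attained by the Haar measure, which is the unique measure of maximal entropy, and $$ h_{\mathrm{top}}(A)=h_{\mathrm{Haar}}(A)=\sum_{|\lambda_i|>1}\log|\lambda_i|; $$ in particular, $A$ itself is intrinsically ergodic.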
It is thus an interesting question to know whether a given system has measures with entropy equal to the topological entropy of the system and, when such measures (called \textit{measures of maximal entropy}) exist, to know how many of them there are. When there is a unique measure of maximal entropy, the system is \textit{intrinsically ergodic}. Corollary C states that every diffeomorphism in $\mathsf{PH}^0_A(\mathbb{T}^d)$ with one-dimensional center is intrinsically ergodic. In \cite{Ures} a similar result is proved under the added assumption of absolute partial hyperbolicity and under stronger assumptions on the foliations of $f$. \demo{of Corollary C} \, By Theorem A and Proposition \ref{Cor-ImplicaLeafConjCenter1}, if $h$ is the semiconjugacy from $f$ to $A$, then for each $x\in\mathbb{T}^d$ we know that $[x]=h^{-1}(h(x))$ is a point or a bounded closed interval in the center fiber containing $x$. The Ledrappier-Walters type arguments in~\cite{BFSV} allow us to conclude that $h_{\mathrm{top}}(f)=h_{\mathrm{top}}(A)$ and that a lift $\mu$ of the Haar measure for $A$ is a measure of maximal entropy for $f$. From Lemma 4.1 in~\cite{Ures} we know that $$\mu\{ x\in\mathbb{T}^d\, :\, [x]=\{x\}\}=1.$$ Theorem 1.5 in~\cite{BFSV} now applies and we conclude that $f$ is intrinsically ergodic. Furthermore, the unique measure of maximal entropy can be seen as the unique lift of the Haar measure for $A$, and also as the limit of the measures given by the periodic classes as defined in~\cite{BFSV}. \par {$\Box$} \vspace*{.05in} \begin{thebibliography}{2} \bibitem[ABP]{ABP} A. Artigue, J. Brum, and R. Potrie, Local product structure for expansive homeomorphisms, \textit{Topology and its Applications}, {\bf 156} (2009), no. 4, 674--685. \bibitem[BDV]{BDV} C. Bonatti, L. D\'iaz and M. Viana, \textit{Dynamics Beyond Uniform Hyperbolicity.
A global geometric and probabilistic perspective}, Encyclopaedia of Mathematical Sciences {\bf 102}. Mathematical Physics III. Springer-Verlag (2005). \bibitem[Br]{Brin} M. Brin, On dynamical coherence, \textit{Ergodic Th. and Dynam. Sys.}, {\bf 23} (2003), 395--401. \bibitem[BBI$_1$]{BBI} M. Brin, D. Burago and S. Ivanov, On partially hyperbolic diffeomorphisms of 3-manifolds with commutative fundamental group, \textit{Modern dynamical systems and applications}, Cambridge Univ. Press, Cambridge, (2004), 307--312. \bibitem[BBI$_2$]{BBI2} M. Brin, D. Burago and S. Ivanov, Dynamical coherence of partially hyperbolic diffeomorphisms of the 3-torus, \textit{Journal of Modern Dynamics}, {\bf 3} (2009), 1--11. \bibitem[BuW$_1$]{BuW} K. Burns and A. Wilkinson, Dynamical coherence and center bunching, \textit{Discrete and Continuous Dynamical Systems A} (Pesin birthday issue), {\bf 22} (2008), 89--100. \bibitem[BuW$_2$]{BuW2} K. Burns and A. Wilkinson, On the ergodicity of partially hyperbolic systems, \textit{Ann. of Math.}, {\bf 171} (2010), 451--489. \bibitem[BF]{BF} J. Buzzi and T. Fisher, Entropic stability beyond partial hyperbolicity, to appear in \textit{Journal of Modern Dynamics}, arXiv:1103.2707. \bibitem[BFSV]{BFSV} J. Buzzi, T. Fisher, M. Sambarino, and C. Vasquez, Maximal entropy measures for certain partially hyperbolic, derived from Anosov systems, \textit{Ergodic Th. and Dynam. Sys.}, {\bf 32} (2012), no. 1, 63--79. \bibitem[FG]{FG} F. T. Farrell and A. Gogolev, The space of Anosov diffeomorphisms, to appear in \textit{Journal of the London Math. Society}, arXiv:1201.3595. \bibitem[Fr]{Fr} J. Franks, Anosov diffeomorphisms, \textit{Global Analysis (Proc. Sympos. Pure Math., Vol. XIV, Berkeley, Calif., 1968)}, 61--93, Amer. Math. Soc., Providence, R.I. (1970). \bibitem[Ghy]{Ghy} E.
Ghys, Flots d'Anosov sur les 3-vari\'et\'es fibr\'ees en cercles, \textit{Ergodic Th. and Dynam. Sys.}, {\bf 4} (1984), 67--80. \bibitem[Gou]{Gourmelon} N. Gourmelon, Adapted metrics for dominated splittings, \textit{Ergodic. Th. and Dynam. Sys.}, {\bf 27} (2007), 1839--1849. \bibitem[Gr]{Gromov} M. Gromov, Three remarks on geodesic dynamics and the fundamental group, \textit{Enseign Math.} (2) {\bf 46} (2000), 391--402. \bibitem[H$_1$]{Hammerlindl} A. Hammerlindl, Leaf conjugacies in the torus, Thesis, and to appear in \textit{Ergodic theory and dynamical systems}. \bibitem[H$_2$]{HamPlaque} A. Hammerlindl, Quasi-isometry and plaque expansiveness, \textit{Canadian Mathematical Bulletin}, {\bf 54} (2011), no. 4, 676--679. \bibitem[HP]{HP} A. Hammerlindl and R. Potrie, Pointwise partial hyperbolicity in 3-dimensional nilmanifolds, \textit{preprint}, arXiv:1302.0543. \bibitem[Hat]{Hatcher} A. Hatcher, \textit{Algebraic Topology}, Cambridge University Press, (2002). \bibitem[HPS]{HPS} M. Hirsch, C. Pugh and M. Shub, Invariant Manifolds, \textit{Springer Lecture Notes in Math.}, {\bf 583} (1977). \bibitem[M$_1$]{ManheContributions} R. Ma\~n\'e, Contributions to the stability conjecture, \textit{Topology}, {\bf 17} (1978), 383--396. \bibitem[M$_2$]{Manhe} R. Ma\~n\'e, \textit{Ergodic theory and differentiable dynamics}, Springer-Verlag (1983). \bibitem[NY]{NY} S. Newhouse and L.-S. Young, Dynamics of certain skew products, volume 1007 of \textit{Lecture Notes in Math.}, pages 611--629. Springer, Berlin, 1983. \bibitem[Pot]{Pot} R. Potrie, Partial hyperbolicity and foliations in $\mathbb{T}^3$, \textit{preprint}, arXiv:1206.2860. \bibitem[RHRHU]{HHU} F. Rodriguez Hertz, J. Rodriguez Hertz, R. Ures, A non-dynamically coherent example in $\mathbb{T}^3$, preprint. \bibitem[U]{Ures} R.
Ures, Intrinsic ergodicity of partially hyperbolic diffeomorphisms with hyperbolic linear part, \textit{Proceedings of the AMS}, {\bf 140} (2012), 1973--1985. \end{thebibliography} \end{document}
\begin{document} \title{Time-dependent Displaced and Squeezed Number States} \author{Sang Pyo Kim}\email{[email protected]} \affiliation{Department of Physics, Kunsan National University, Kunsan 573-701, Korea} \date{\today} \begin{abstract} We generalize the wave functions of the displaced and squeezed number states, found by Nieto, to a time-dependent harmonic oscillator with variable mass and frequency. These time-dependent displaced and squeezed number states are obtained by first squeezing and then displacing the exact number states and are exact solutions of the Schr\"{o}dinger equation. Further, these wave functions are the time-dependent squeezed harmonic-oscillator wave functions centered at classical trajectories. \\ Keywords: Displaced number states, Squeezed number states, Time-dependent oscillator, Invariant operators \end{abstract} \pacs{PACS numbers: 03.65.Ta, 03.65.Ge, 03.65.Ca, 03.65.Yz} \maketitle Recently, Nieto found the wave functions of the displaced and squeezed number states for a static oscillator \cite{nieto}. These states are an extension of the displaced (coherent) and squeezed states of the vacuum state. The coherent states, actually discovered by Schr\"{o}dinger \cite{sch}, have been developed as a useful concept and tool in quantum optics \cite{coh}. A general class of Gaussian states beyond the vacuum state was first considered in Ref. 4, and these squeezed states were defined further in terms of the squeeze operator \cite{sto}. The displaced and squeezed states were also introduced \cite{roy}, and the displaced number states, as well as the squeezed number states, were found \cite{knight}. The squeezed number states show the super-Poissonicity of photon statistics. On the other hand, Lewis and Riesenfeld \cite{lew} introduced the invariant method to find the Fock space of exact states of time-dependent oscillators (for a review and references, see Refs. 9 and 10).
Based on the invariant method, the coherent states of time-dependent oscillators were found \cite{dod,har,kim-lee}, and the squeezed states were studied \cite{ali,kim1,kim2}. Through the use of the evolution in terms of su(1,1) algebra, the displaced and squeezed number states were found for a general driven time-dependent oscillator \cite{lo}. The purpose of this paper is to find the displaced and squeezed number states and their wave functions for a time-dependent oscillator with variable mass and frequency. A time-dependent oscillator also has time-dependent annihilation and creation operators, the invariant operators satisfying the quantum Liouville-von Neumann equation, whose number states are the exact solutions of the time-dependent Schr\"{o}dinger equation \cite{dod,kim1,kim2,kim-page,ksk,kim3}. We find the invariant displacement and squeeze operators that map the exact number states into the displaced and squeezed number states. For a static oscillator, we compare these wave functions with those of Ref. 1. We consider a time-dependent oscillator with variable mass and frequency and with a Hamiltonian given by \begin{equation} \hat{H} (t) = \frac{1}{2m (t)} \hat{p}^2 + \frac{1}{2} m(t) \omega^2 (t) \hat{x}^2, \label{osc} \end{equation} and use the invariant operator method to find the Fock space of exact number states. Two linear invariant operators are introduced \cite{dod,kim1,kim2,kim-page}: \begin{eqnarray} \hat{a} (t) &=& \frac{i}{\sqrt{\hbar}} [u^*(t) \hat{p} - m(t) \dot{u}^*(t) \hat{x} ], \nonumber\\ \hat{a}^{\dagger} (t) &=& - \frac{i}{\sqrt{\hbar}} [u (t) \hat{p} - m(t) \dot{u} (t) \hat{x}], \label{inv op} \end{eqnarray} where $u$ is a complex solution to the classical equation of motion, \begin{equation} \ddot{u} (t) + \frac{\dot{m}(t)}{m (t)} \dot{u} (t) + \omega^2 (t) u (t) = 0. 
\label{cl eq} \end{equation} With the Wronskian condition, \begin{equation} m (t) [u (t) \dot{u}^* (t) - u^* (t) \dot{u}(t)] = i, \label{wron} \end{equation} imposed, the operators in Eq. (\ref{inv op}) satisfy the usual commutation relation at equal times \begin{equation} [\hat{a} (t), \hat{a}^{\dagger} (t) ] = 1, \end{equation} and play the roles of time-dependent annihilation and creation operators. As the Hamiltonian in Eq. (\ref{osc}) has the representation \begin{equation} \hat{H} (t) = \frac{\hbar m}{2} \bigl[( \dot{u}^* \dot{u} + \omega^2 u^* u ) (\hat{a}^{\dagger} \hat{a} + \hat{a} \hat{a}^{\dagger}) + (\dot{u}^{*2} + \omega^2 u^{*2}) \hat{a}^{\dagger 2} + (\dot{u}^2 + \omega^2 u^2) \hat{a}^2 \bigr], \end{equation} $\hat{a}(t)$ and $\hat{a}^{\dagger} (t)$ do not necessarily diagonalize the Hamiltonian. The condition for diagonalization of the Hamiltonian is given by the equation \begin{equation} \dot{u}^2 (t) + \omega^2 u^2 (t) = 0. \label{diag} \end{equation} Equation (\ref{diag}) is exactly satisfied by $u (t) = e^{- i \omega_0 t}/\sqrt{2 m_0 \omega_0}$ for a static oscillator with constant $m_0$ and $\omega_0$ and is approximately satisfied by the adiabatic (WKB) solution, $u (t) = e^{- i \int \omega (t)\, dt }/\sqrt{2 m \omega(t)}$, for slowly varying mass and frequency. Except for these cases, the number operator $\hat{N} (t) = \hat{a}^{\dagger} (t) \hat{a} (t)$ does not commute with the Hamiltonian $\hat{H} (t)$. However, the number operator $\hat{N} (t)$, which is another invariant operator, still provides the exact wave functions of number states for the time-dependent Schr\"{o}dinger equation \cite{kim-page}: \begin{eqnarray} \Psi_n (x, u(t)) = \Biggl(\frac{1}{2^{n} n! \sqrt{2 \pi \hbar} \rho} \Biggr)^{1/2} e^{- i \Theta (n + 1/2)} H_n \Biggl(\frac{x}{\sqrt{2 \hbar} \rho} \Biggr) \exp \Biggl[\frac{i m \dot{u}^*}{2 \hbar u^*} x^2\Biggr], \label{wave} \end{eqnarray} where $u (t) = \rho (t) e^{- i \Theta (t)}$. As Eq.
(\ref{cl eq}) is linear, the most general complex solution can take the form \begin{equation} u_{\nu} (t) = \mu u_0 (t) + \nu u_0^* (t), \label{most sol} \end{equation} where \begin{eqnarray} \mu = \cosh r, \quad \nu = e^{- i \phi} \sinh r, \label{sq par} \end{eqnarray} and $u_0 (t)$ is some preferred complex solution that satisfies Eq. (\ref{wron}). The preferred solution may be chosen based on, for instance, the minimum uncertainty or energy expectation value \cite{kim1,kim2}. Here, we have fixed the overall phase of Eq. (\ref{sq par}) since the wave functions in Eq. (\ref{wave}) do not depend on it. In fact, $\nu$ (or the two real constants, $r$ and $\phi$) is the squeezing parameter of the Bogoliubov transformation \begin{eqnarray} \hat{a}_{\nu} (t) &=& (\cosh r) \hat{a}_0 (t) - (e^{ i \phi} \sinh r) \hat{a}^{\dagger}_0 (t), \nonumber\\ \hat{a}^{\dagger}_{\nu} (t) &=& (\cosh r) \hat{a}^{\dagger}_0 (t) - (e^{- i \phi} \sinh r) \hat{a}_0 (t), \label{bog tran} \end{eqnarray} where $\hat{a}_{\nu}(t), \hat{a}^{\dagger}_{\nu}(t)$, and $\hat{a}_0(t), \hat{a}^{\dagger}_0(t)$ are the operators obtained by substituting $u_{\nu}(t)$ and $u_0(t)$, respectively, into Eq. (\ref{inv op}). Note that the subscript 0 in this paper denotes the zero squeezing parameter instead of a static oscillator with constant frequency and mass. In terms of the squeeze operator \cite{sto} \begin{equation} \hat{S} (\nu, t) = \exp \Biggl[ \frac{1}{2} (\nu e^{i \pi}) \hat{a}^{\dagger 2}_0 (t) - \frac{1}{2} (\nu e^{ i \pi})^* \hat{a}^2_0(t) \Biggr], \label{sq op} \end{equation} the Bogoliubov transformation is written as \begin{eqnarray} \hat{a}_{\nu} (t) &=& \hat{S}(\nu, t) ~\hat{a}_0 (t)~ \hat{S}^{\dagger}(\nu, t), \nonumber\\ \hat{a}_{\nu}^{\dagger} (t) &=& \hat{S}(\nu, t) ~ \hat{a}_0^{\dagger} (t) ~ \hat{S}^{\dagger}(\nu, t). 
\label{un tran} \end{eqnarray} The number operators $ \hat{N}_0 (t) = \hat{a}_0^{\dagger} (t) \hat{a}_0 (t)$ and $ \hat{N}_{\nu} (t) = \hat{a}^{\dagger}_{\nu} (t) \hat{a}_{\nu} (t)$ do not, in general, commute with each other, \begin{equation} [ \hat{N}_{\nu} (t), \hat{N}_0 (t) ] = \sinh 2r [e^{i \phi} \hat{a}_0^{\dagger 2}(t) - e^{- i \phi} \hat{a}_0^2(t)], \end{equation} commuting only in the trivial case of zero squeezing $(r = 0)$. Nevertheless, they, being invariant operators, lead to the exact number states for the oscillator in Eq. (\ref{osc}), \begin{eqnarray} \hat{N}_{0} (t) \vert n, t \rangle &=& n \vert n, t \rangle, \nonumber\\ \hat{N}_{\nu} (t) \vert \nu, n, t \rangle &=& n \vert \nu, n, t \rangle, \label{num st} \end{eqnarray} and to their wave functions \begin{eqnarray} \langle x \vert n, t \rangle = \Psi_n (x, u_0(t)), \nonumber\\ \langle x \vert \nu, n, t \rangle = \Psi_n (x, u_{\nu}(t)). \label{num wave} \end{eqnarray} Here $\Psi_n (x, u_{0}(t))$ and $\Psi_n (x, u_{\nu}(t))$ are the wave functions in Eq. (\ref{wave}) obtained by using $u_0(t)$ and $u_{\nu}(t)$, respectively. Due to the unitary transformation in Eq. (\ref{un tran}), the $\nu$-dependent number state is a squeezed state, \begin{equation} \vert \nu, n, t \rangle = \hat{S} (\nu, t) \vert n, t \rangle. \label{sq nst} \end{equation} The wave function $\Psi_n (x, u_{\nu} (t))$ of Eq. (\ref{sq nst}) is an exact solution for the Hamiltonian in Eq. (\ref{osc}), which belongs to the wave functions in Eq. (\ref{wave}). Alternatively, we can directly show that any invariant operator $\hat{O}(t)$ maps an exact state $\vert \Psi_0 (t) \rangle $ to another state $\hat{O}(t)\vert \Psi_0 (t) \rangle$.
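As an aside, the mode-function identities used above are easy to spot-check symbolically (a Python/SymPy sketch, not part of the derivation): the static solution $u_0(t)=e^{-i\omega_0 t}/\sqrt{2m_0\omega_0}$ satisfies the classical equation of motion (with constant mass), the Wronskian condition and the diagonalization condition, and $u_\nu=\mu u_0+\nu u_0^*$ with $\mu=\cosh r$, $\nu=e^{-i\phi}\sinh r$ preserves the Wronskian normalization because $\mu\bar{\mu}-\nu\bar{\nu}=1$.

```python
import sympy as sp

t = sp.symbols('t', real=True)
m0, w0 = sp.symbols('m_0 omega_0', positive=True)
r, phi = sp.symbols('r phi', real=True)

# static mode function u_0(t) = e^{-i w0 t} / sqrt(2 m0 w0)
u0 = sp.exp(-sp.I*w0*t) / sp.sqrt(2*m0*w0)
# Bogoliubov-transformed mode function u_nu = mu u_0 + nu u_0^*
unu = sp.cosh(r)*u0 + sp.exp(-sp.I*phi)*sp.sinh(r)*sp.conjugate(u0)

def wronskian(u):
    # m [u du*/dt - u* du/dt], which should equal i
    return sp.simplify(sp.expand(m0*(u*sp.diff(sp.conjugate(u), t)
                                     - sp.conjugate(u)*sp.diff(u, t))))

cl_eq = sp.simplify(sp.diff(u0, t, 2) + w0**2*u0)     # classical equation, constant mass
diag = sp.simplify(sp.diff(u0, t)**2 + w0**2*u0**2)   # diagonalization condition

print(cl_eq, diag)                     # both 0
print(wronskian(u0), wronskian(unu))   # both I
```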
In fact, the quantum Liouville-von Neumann equation, \begin{equation} i \hbar \frac{\partial}{\partial t} \hat{O} (t) + [\hat{O}(t), \hat{H} (t)] = 0, \end{equation} leads to the time-dependent Schr\"{o}dinger equation \begin{equation} i \hbar \frac{\partial}{\partial t} \Bigl(\hat{O}(t)\vert \Psi_0 (t) \rangle \Bigr) = \hat{H} (t) \Big(\hat{O}(t)\vert \Psi_0 (t) \rangle \Bigr) \label{map eq} \end{equation} whenever $\vert \Psi_0 (t) \rangle$ is a solution to the time-dependent Schr\"{o}dinger equation. Note that the squeeze operator in Eq. (\ref{sq op}) is an invariant operator since it is a functional of the invariant operators $\hat{a}_0 (t)$ and $\hat{a}_0^{\dagger} (t)$. Thus, $\hat{S} (\nu, t)$ generates the exact state $\vert \nu, n, t \rangle$ out of the number state $\vert n, t \rangle$, which is already an exact one. As for the case of a time-independent oscillator \cite{sto}, the displaced (coherent) state is an eigenstate of the time-dependent annihilation operator \cite{har,kim2}, \begin{equation} \hat{a}_{\nu} (t) \vert \alpha, \nu, t \rangle = \alpha \vert \alpha, \nu, t \rangle, \end{equation} for any complex constant $\alpha$, and has the representation \begin{equation} \vert \alpha, \nu, t \rangle = e^{- |\alpha|^2/2} \sum_{n = 0}^{\infty} \frac{\alpha^n}{\sqrt{n!}} \vert \nu, n, t \rangle. \label{num rep} \end{equation} The time-dependent displaced state in Eq. (\ref{num rep}) is an exact quantum state of the Schr\"{o}dinger equation for the Hamiltonian in Eq. (\ref{osc}) since each $\vert \nu, n, t \rangle$ is a solution to the Schr\"{o}dinger equation. In another formulation, we introduce the time-dependent displacement operator \begin{equation} \hat{D} (\alpha, t) = e^{\alpha \hat{a}^{\dagger}_{\nu} (t) - \alpha^* \hat{a}_{\nu} (t)}, \label{dis op} \end{equation} which is a functional of the invariant operators $\hat{a}_0 (t)$ and $\hat{a}_0^{\dagger} (t)$ and, thus, is an invariant operator. If the operator in Eq. 
(\ref{dis op}) operates on the squeezed number state in Eq. (\ref{sq nst}), we obtain the displaced and squeezed number state \begin{equation} \vert \alpha, \nu, n, t \rangle = \hat{D} (\alpha, t) \vert \nu, n, t \rangle = \hat{D} (\alpha, t) \hat{S} (\nu, t) \vert n, t \rangle. \label{coh nst} \end{equation} Using the relations \begin{eqnarray} \hat{D} (\alpha, t)~ \hat{a}_{\nu} (t) ~\hat{D}^{\dagger}(\alpha, t) = \hat{a}_{\nu} (t) - \alpha, \nonumber\\ \hat{D} (\alpha, t) \hat{a}^{\dagger}_{\nu} (t) \hat{D}^{\dagger}(\alpha, t) = \hat{a}_{\nu}^{\dagger} (t) - \alpha^*, \label{dis rel} \end{eqnarray} we get another representation \begin{eqnarray} \vert \alpha, \nu, n, t \rangle &=& \frac{1}{\sqrt{n!}} [\hat{a}_{\nu}^{\dagger} (t) - \alpha^*]^n \vert \alpha, \nu, t \rangle \nonumber\\ &=& \frac{e^{- | \alpha |^2/2}}{\sqrt{n!}} \sum_{k = 0}^{\infty} \frac{\alpha^k}{\sqrt{k!}} [\hat{a}_{\nu}^{\dagger} (t) - \alpha^*]^n \vert \nu, k, t \rangle. \label{num rep2} \end{eqnarray} Being a sum of exact number states, the displaced and squeezed number state in Eq. (\ref{coh nst}) or (\ref{num rep2}) is another exact quantum state of the time-dependent Schr\"{o}dinger equation. Alternatively, $\hat{D} (\alpha, t) \hat{S} (\nu, t)$, a product of invariant operators, is still another invariant operator and, thus, maps the exact number state $\vert n, t \rangle$ into the displaced and squeezed number state $\vert \alpha, \nu, n, t \rangle$, which, according to Eq. (\ref{map eq}), is an exact quantum state of the time-dependent Schr\"{o}dinger equation for the Hamiltonian in Eq. (\ref{osc}).
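The displacement relation $\hat{D}\,\hat{a}\,\hat{D}^{\dagger}=\hat{a}-\alpha$ can be spot-checked numerically in a truncated Fock basis (an illustration with NumPy/SciPy; the truncation size and the value of $\alpha$ are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

N = 60                           # Fock-space truncation (arbitrary, large enough)
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)     # annihilation operator: <m|a|n> = sqrt(n) delta_{m,n-1}
ad = a.conj().T                  # creation operator

alpha = 0.4 + 0.3j
# displacement operator D = exp(alpha a^dag - alpha^* a); the generator is
# anti-Hermitian, so the truncated D is exactly unitary
D = expm(alpha*ad - np.conjugate(alpha)*a)

# D a D^dag = a - alpha holds exactly in the full Fock space; truncation
# errors are confined to the highest levels, so test the low-lying corner
lhs = D @ a @ D.conj().T
rhs = a - alpha*np.eye(N)
print(np.max(np.abs((lhs - rhs)[:10, :10])))  # tiny (truncation error only)
```

Here the $60$-level truncation keeps the error in the displayed corner at machine level, since the displaced vacuum for $|\alpha|<1$ has negligible weight near the cutoff.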
The displaced and squeezed number state has the expectation values \begin{eqnarray} x_c (t) &=& \langle \alpha, \nu, n, t \vert \hat{x} \vert \alpha, \nu, n, t \rangle = \sqrt{\hbar} [\alpha u_{\nu} (t) + \alpha^* u^*_{\nu} (t)], \nonumber\\ p_c (t) &=& \langle \alpha, \nu, n, t \vert \hat{p} \vert \alpha, \nu, n, t \rangle = \sqrt{\hbar} m(t) [\alpha \dot{u}_{\nu} (t) + \alpha^* \dot{u}^*_{\nu} (t)], \label{coh ex} \end{eqnarray} and \begin{eqnarray} \langle \alpha, \nu, n, t \vert \hat{x}^2 \vert \alpha, \nu, n, t \rangle &=& \hbar u_{\nu}^* u_{\nu} (2n+1) + x_c^2,\nonumber\\ \langle \alpha, \nu, n, t \vert \hat{p}^2 \vert \alpha, \nu, n, t \rangle &=& \hbar m^2 \dot{u}_{\nu}^* \dot{u}_{\nu} (2n+1) + p_c^2. \end{eqnarray} Note that $p_c (t) = m(t) \dot{x}_c(t)$, and that $x_c(t)$ obeys the classical equation of motion in Eq. (\ref{cl eq}). Inverting Eq. (\ref{coh ex}) for \begin{eqnarray} \alpha &=& \frac{i}{\sqrt{\hbar}} [ u^*_{\nu} (t) p_c (t) - m(t) \dot{u}^*_{\nu} (t) x_c (t)], \nonumber\\ \alpha^* &=& - \frac{i}{\sqrt{\hbar}} [ u_{\nu} (t) p_c (t) - m(t) \dot{u}_{\nu} (t) x_c (t)], \end{eqnarray} and using Eq. (\ref{dis rel}), we finally obtain the wave function for the displaced and squeezed number state: \begin{eqnarray} \Psi_n (x, x_c, p_c, u_{\nu}(t)) = \Biggl(\frac{1}{2^{n} n! \sqrt{2 \pi \hbar} \rho_{\nu}} \Biggr)^{1/2} e^{- i \Theta_{\nu} (n + 1/2)} e^{i p_c x/ \hbar} \nonumber\\ \times H_n \Biggl(\frac{x - x_c}{\sqrt{2 \hbar} \rho_{\nu}} \Biggr) \exp \Biggl[\frac{i m \dot{u}_{\nu}^*}{2 \hbar u_{\nu}^*} (x - x_c)^2\Biggr], \label{coh sq wave} \end{eqnarray} where $u_{\nu} (t) = \rho_{\nu} (t) e^{- i \Theta_{\nu}(t)}$. Thus, the wave functions in Eq. (\ref{coh sq wave}) are, up to a phase factor $e^{i p_c x/ \hbar}$, the wave functions in Eq. (\ref{wave}) with the center shifted to the classical trajectory $x_c (t)$. Finally, we compare the wave functions in Eq. 
(\ref{coh sq wave}) of the displaced and squeezed number states for a time-dependent oscillator with those of Ref. 1 for a static oscillator. The static oscillator has a constant mass $m_0$ and a frequency $\omega_0$. We may then choose \begin{equation} u_0 (t) = \frac{1}{\sqrt{2 m_0 \omega_0}} e^{- i \omega_0 t}, \label{pref sol} \end{equation} as the preferred solution to Eq. (\ref{cl eq}). If $u_0(t)$ is inserted into Eq. (\ref{wave}), it leads to the harmonic-oscillator wave functions. Now, the displaced state with the most general solution, Eq. (\ref{most sol}), has the expectation value \begin{eqnarray} x_c (t) &=& \sqrt{\frac{\hbar}{2 m_0 \omega_0}} \Bigl[(\alpha \cosh r - \alpha^* e^{i \phi} \sinh r) e^{- i \omega_0 t} + (\alpha^* \cosh r - \alpha e^{- i \phi} \sinh r) e^{i \omega_0 t} \Bigr] \nonumber\\& \equiv & x_0 \cos \omega_0 t + \frac{p_0}{m_0 \omega_0} \sin \omega_0 t, \end{eqnarray} and $p_c (t) = m_0 \dot{x}_c (t)$. The classical trajectory has two integration constants: $x_c (0) = x_0$ and $p_c (0) = p_0$. Then, the wave function in Eq. 
(\ref{coh sq wave}) becomes \begin{eqnarray} \Psi_n (\nu, x_c, p_c, x, t) = \frac{1}{\sqrt{2^n n!}} \Biggl( \frac{A_{\nu}}{\sqrt{\pi}} \Biggr)^{1/2} e^{- i \Theta_{\nu} (n + 1/2)} e^{i p_c x/ \hbar} \nonumber\\ \times H_n (A_{\nu} (x - x_c)) e^{- B_{\nu} (x - x_c)^2}, \label{gen wave} \end{eqnarray} where \begin{eqnarray} A_{\nu} (t) &=& \frac{1}{\sqrt{2 \hbar u_{\nu}^* u_{\nu}}} = \sqrt{\frac{m_0 \omega_0 }{\hbar}} \frac{1}{[\cosh 2r + \sinh 2r \cos (2 \omega_0 t - \phi)]^{1/2}}, \nonumber\\ B_{\nu} (t) &=& - \frac{i m \dot{u}_{\nu}^*}{2 \hbar u_{\nu}^*} = \frac{m_0 \omega_0}{2 \hbar} \Biggl[\frac{\cosh r e^{i \omega_0 t} - e^{i \phi} \sinh r e^{- i \omega_0 t} }{\cosh r e^{i \omega_0 t} + e^{i \phi} \sinh r e^{- i \omega_0 t}} \Biggr], \nonumber\\ e^{ - i \Theta_{\nu} (t)} &=& \Biggl[ \frac{\cosh r e^{- i \omega_0 t} + e^{- i \phi} \sinh r e^{i \omega_0 t}}{\cosh r e^{ i \omega_0 t} + e^{i \phi} \sinh r e^{- i \omega_0 t}} \Biggr]^{1/2}. \label{coeff} \end{eqnarray} With the convention $m_0 = \omega_0 = \hbar = 1$, we can show that \begin{eqnarray} A_{\nu} (0) = \frac{1}{{\cal F}_4}, \quad B_{\nu} (0) = \frac{{\cal F}_2}{2}, \quad e^{- i \Theta_{\nu} (0)} = {\cal F}_3^{1/2}, \end{eqnarray} where ${\cal F}$'s in Eqs. (21)--(24) of Ref. 1 are given by \begin{eqnarray} {\cal F}_2 &=& \frac{1 - i \sin \phi \sinh r (\cosh r + e^{i \phi} \sinh r)}{(\cosh r + \cos \phi \sinh r) (\cosh r + e^{i \phi} \sinh r)}, \nonumber\\ {\cal F}_3 &=& \frac{\cosh r + e^{- i \phi} \sinh r}{\cosh r + e^{i \phi} \sinh r}, \nonumber\\ {\cal F}_4 &=& (\cosh^2 r + \sinh^2 r + 2 \cos \phi \cosh r \sinh r)^{1/2}. \end{eqnarray} Thus, the wave functions in Eq. (20) of Ref. 1 for the static oscillator are special cases of the wave functions in Eq. (\ref{gen wave}) at $t = 0$.
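These $t=0$ identifications can be spot-checked numerically (an illustrative Python snippet with arbitrarily chosen squeezing parameters; $B_\nu$ is evaluated with the sign convention that matches the Gaussian factor $e^{-B_\nu(x-x_c)^2}$, namely $B_\nu=-im\dot{u}_\nu^*/2\hbar u_\nu^*$):

```python
import cmath, math

# spot-check of the t = 0 relations with m0 = omega0 = hbar = 1;
# r and phi are arbitrary squeezing parameters
r, phi = 0.8, 1.1
ch, sh = math.cosh(r), math.sinh(r)

u0 = (ch + cmath.exp(-1j*phi)*sh) / math.sqrt(2)       # u_nu(0)
du0 = -1j*(ch - cmath.exp(-1j*phi)*sh) / math.sqrt(2)  # du_nu/dt at t = 0

A_nu = 1/math.sqrt(2*abs(u0)**2)                       # A_nu(0)
B_nu = -1j*du0.conjugate()/(2*u0.conjugate())          # B_nu(0)
Theta_sq = u0/u0.conjugate()                           # e^{-2 i Theta_nu(0)}

F2 = (1 - 1j*math.sin(phi)*sh*(ch + cmath.exp(1j*phi)*sh)) \
     / ((ch + math.cos(phi)*sh)*(ch + cmath.exp(1j*phi)*sh))
F3 = (ch + cmath.exp(-1j*phi)*sh)/(ch + cmath.exp(1j*phi)*sh)
F4 = math.sqrt(ch**2 + sh**2 + 2*math.cos(phi)*ch*sh)

print(abs(A_nu - 1/F4), abs(B_nu - F2/2), abs(Theta_sq - F3))  # all ~ 0
```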
Similarly, we show that \begin{eqnarray} A_{\nu} (t) = \frac{1}{{\cal F}_4 B A^{1/2}}, \quad B_{\nu} (t) = \frac{{\cal F}_2 \cos t + i \sin t}{2(\cos t + i {\cal F}_2 \sin t)}, \quad e^{- i \Theta_{\nu} (t)} = ({\cal F}_3 A)^{1/2}, \end{eqnarray} where $A$ in Eq. (46) and $B$ in Eq. (47) of Ref. 1 are given by \begin{eqnarray} A &=& \frac{{\cal F}_4^2 (\cos t + i {\cal F}_2 \sin t) - 2 i \sin t}{{\cal F}_4^2 (\cos t + i {\cal F}_2 \sin t)}, \nonumber\\ B &=& \cos t + i {\cal F}_2 \sin t. \end{eqnarray} Hence, the wave functions in Eq. (\ref{gen wave}) of the time-dependent Schr\"{o}dinger equation are the same as Eq. (45) of Ref. 1 for the time-evolved displaced and squeezed states. In summary, using the invariant method, we obtained the displaced (coherent) and squeezed number states and their wave functions for a time-dependent oscillator. These states for the time-dependent oscillator are an extension of the wave functions found by Nieto for a static oscillator. The wave functions for the displaced and squeezed number states are found to be the squeezed harmonic-oscillator wave functions centered around a classical trajectory. \acknowledgements This work was supported by the Korea Research Foundation under grant No. KRF-2002-041-C00053. \begin{references} \bibitem{nieto} M. M. Nieto, Phys. Lett. A {\bf 229}, 135 (1997). \bibitem{sch} E. Schr\"{o}dinger, Naturwiss. {\bf 14}, 664 (1926). \bibitem{coh} R. J. Glauber, Phys. Rev. {\bf 130}, 2529 (1963); J. R. Klauder, J. Math. Phys. {\bf 4}, 1055 (1963); E. C. G. Sudarshan, Phys. Rev. Lett. {\bf 10}, 277 (1963). \bibitem{ken} E. H. Kennard, Z. Phys. {\bf 44}, 326 (1927). \bibitem{sto} D. Stoler, Phys. Rev. D {\bf 1}, 3217 (1970); E. Y. C. Lu, Lett. Nuovo Cimento {\bf 3}, 585 (1972); H. P. Yuen, Phys. Rev. A {\bf 13}, 2226 (1976); J. N. Hollenhorst, Phys. Rev. D {\bf 19}, 1669 (1979); C. M. Caves, {\it ibid.} {\bf 23}, 1693 (1981). \bibitem{roy} S. M. Roy and V. Singh, Phys. Rev. D {\bf 25}, 3413 (1982); M. V. 
Satyanarayana, {\it ibid.} {\bf 32}, 400 (1985). \bibitem{knight} M. S. Kim, F. A. M. de Oliveira, and P. L. Knight, Phys. Rev. A {\bf 40}, 2494 (1989); F. A. M. de Oliveira, M. S. Kim, P. L. Knight, and V. Bu\v{z}ek, {\it ibid.} {\bf 41}, 2645 (1990). \bibitem{lew} H. R. Lewis, Jr., Phys. Rev. Lett. {\bf 27}, 510 (1967); H. R. Lewis, Jr., and W. B. Riesenfeld, J. Math. Phys. {\bf 10}, 1458 (1969). \bibitem{mos} A. Mostafazadeh, {\it Dynamical Invariants, Adiabatic Approximation, and the Geometric Phase} (Nova Science Pub., New York, 2001). \bibitem{um} C-I. Um, K-H. Yeon, and T. F. George, Phys. Rep. {\bf 362}, 63 (2002); C-I. Um and K-H. Yeon, J. Korean Phys. Soc. {\bf 41}, 594 (2002). \bibitem{dod} I. A. Malkin, V. I. Man'ko, and D. A. Trifonov, Phys. Rev. D {\bf 2}, 1371 (1970); V. V. Dodonov and V. I. Man'ko, Phys. Rev. A {\bf 20}, 550 (1979). \bibitem{har} J. G. Hartley and J. R. Ray, Phys. Rev. D {\bf 25}, 382 (1982); J. R. Ray, {\it ibid.} {\bf 25}, 3417 (1982); A. K. Rajagopal and J. T. Marshall, Phys. Rev. A {\bf 26}, 2977 (1982); I. A. Pedrosa, Phys. Rev. D {\bf 36}, 1279 (1987). \bibitem{kim-lee} S. P. Kim and C. H. Lee, Phys. Rev. D {\bf 62}, 125020 (2000); S. P. Kim, J.-Y. Ji, H.-S. Shin, and K.-S. Soh, {\it ibid.} {\bf 56}, 3756 (1997); K. H. Cho, J. Y. Ji, S. P. Kim, C. H. Lee, and J.Y. Ryu, {\it ibid.} {\bf 56}, 4916 (1997). \bibitem{ali} J. Aliaga, G. Crespo, and A. N. Proto, Phys. Rev. A {\bf 42}, 4325 (1990); {\it ibid.} {\bf 43}, 595 (1990). \bibitem{kim1} J. K. Kim and S. P. Kim, J. Phys. A {\bf 32}, 2711 (1999). \bibitem{kim2} S. P. Kim, J. Phys. A {\bf 36}, 12089 (2003). \bibitem{lo} C. F. Lo, Phys. Rev. A {\bf 43}, 404 (1991). \bibitem{kim-page} S. P. Kim and D. N. Page, Phys. Rev. A {\bf 64}, 012104 (2001). \bibitem{ksk} S. P. Kim, S. Sengupta, and F. C. Khanna, Phys. Rev. D {\bf 64}, 105026 (2001); S. P. Kim and C. H. Lee, {\it ibid.} {\bf 65}, 045013 (2002); S. Sengupta, F. C. Khanna, and S. P.
Kim, {\it ibid.} {\bf 68}, 105014 (2003). \bibitem{kim3} S. P. Kim, J. Korean Phys. Soc. {\bf 41}, 643 (2002); {\bf 43}, 11 (2003). \end{references} \end{document}
\begin{document} \title{\textbf{Positively Invariant Subset for Non-Densely Defined Cauchy Problems}} \author{Pierre Magal$^{{\rm (a)}}$ and Ousmane Seydi$^{{\rm (b)}}$ \\ {\small $^{{\rm (a)}}$ \textit{Univ. Bordeaux, IMB, UMR 5251, F-33400 Talence, France} } \\ {\small \textit{CNRS, IMB, UMR 5251, F-33400 Talence, France.}} \\ {\small $^{{\rm (b)}}$ \textit{D\'epartement Tronc Commun, \'Ecole Polytechnique de Thi\`es, S\'en\'egal}} } \maketitle \begin{abstract} Under weak Hille-Yosida conditions and using a generalized notion of subtangential condition, we prove the positive invariance of a closed subset under the semiflow generated by a semi-linear non-densely defined Cauchy problem. A simple remark shows that the sufficient condition for the positivity of the semiflow implies our subtangentiality condition, but the subtangential condition applies to a much larger class of closed positively invariant subsets. Our results can be applied to hyperbolic and parabolic partial differential equations as well as functional differential equations. As an illustration we apply our results to an age-structured equation in $L^p$ spaces which is only defined on a closed subset of $L^p$. \end{abstract} \noindent \textbf{Key words}. Semilinear differential equations, non-dense domain, integrated semigroup, positively invariant subset, age structured population dynamics models. \noindent \textbf{AMS Subject Classification}. 37N25, 92D25, 92D30 \section{Introduction} In this article we consider an abstract semi-linear Cauchy problem \begin{equation}\label{1.1} \dfrac{du(t)}{dt}=Au(t)+F(t,u(t)), \text{ for } t \geq 0, \text{ with } u(0)=u_0 \in \overline{D(A)}, \end{equation} where $A:D(A) \subset X \to X$ is a linear operator on a Banach space $X$, and $F:[0, \infty) \times \overline{D(A)} \to X$ is continuous. We assume that the map $x \to F(t,x)$ is Lipschitz on the bounded sets of $\overline{D(A)}$ uniformly with respect to $t$ in a bounded interval of $[0, \infty)$.
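For orientation, in the finite-dimensional case, where $A$ is a matrix and hence everywhere defined, the integrated (mild) solutions studied below reduce to the classical variation-of-constants formula $u(t)=e^{tA}u_0+\int_0^t e^{(t-s)A}F(s,u(s))\,ds$. A numerical sketch (the matrix, the nonlinearity and the tolerances are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# illustrative data: a 2x2 matrix A and a nonlinearity F, Lipschitz on bounded sets
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
F = lambda t, u: np.array([np.sin(t), -u[0]**2])
u0 = np.array([1.0, 0.0])
T = 1.0

# solve u' = A u + F(t, u), u(0) = u0, on [0, T]
ts = np.linspace(0.0, T, 2001)
sol = solve_ivp(lambda t, u: A @ u + F(t, u), (0.0, T), u0,
                t_eval=ts, rtol=1e-10, atol=1e-12)

# variation-of-constants (mild) form:
# u(T) = e^{TA} u0 + int_0^T e^{(T-s)A} F(s, u(s)) ds  (trapezoidal quadrature)
integrand = np.array([expm(A*(T - s)) @ F(s, sol.y[:, i])
                      for i, s in enumerate(ts)])
h = ts[1] - ts[0]
integral = h*(integrand[0]/2 + integrand[1:-1].sum(axis=0) + integrand[-1]/2)
mild = expm(A*T) @ u0 + integral

print(np.max(np.abs(mild - sol.y[:, -1])))  # small (quadrature error only)
```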
We point out that $D(A)$ is not necessarily dense in $X$ and $A$ is not necessarily a Hille-Yosida operator. The invariance of subsets for differential equations has a long history, which starts with the seminal paper of the Japanese mathematician Nagumo \cite{Nagumo} in 1942. The result for ordinary differential equations was rediscovered later on by Brezis \cite{Brezis} and Hartman \cite{Hartman} and was further extended to ordinary differential equations in ordered Banach spaces by Walter \cite{Walter71} and Redheffer and Walter \cite{Redheffer-Walter75}. Several extensions to partial differential equations were proposed later on by Redheffer and Walter \cite{Redheffer-Walter77} and Martin \cite{Martin79} for parabolic equations, etc. Martin and Smith \cite{Martinsmith} further investigated comparison/differential inequalities and invariant sets for abstract functional differential equations and reaction-diffusion systems that have time delays in the nonlinear reaction terms, and the results they developed have had many applications. We refer to the book of Pavel and Motreanu \cite{Pavel-Motreanu} for an extensive study of densely defined semi-linear Cauchy problems. In \cite{Pavel-Motreanu} the authors studied the positive invariance of general closed subsets subject to a tangency condition. They also considered the positive invariance of time-dependent closed subsets and extended their results to semilinear differential inclusion problems. The case of closed convex subsets for non-densely defined Cauchy problems with a Hille-Yosida linear operator perturbed by a Lipschitz continuous nonlinear map has been studied by Thieme \cite{Thieme90a}. The goal of this article is to extend the results of Thieme \cite{Thieme90a} from the Hille-Yosida case to the non-Hille-Yosida case. It is worth noting that the non-Hille-Yosida case induces several difficulties due to the lack of uniform boundedness of $\lambda (\lambda-A)^{-1}$ as $\lambda$ becomes large.
To overcome these difficulties we use a somewhat different approach compared to Thieme \cite{Thieme90a} combined with some generalization of the estimates on the integrated semigroup from the space of continuous functions to the space of regulated functions. Thanks to our weak Hille-Yosida condition on the linear operator $A$ in \eqref{1.1} (see Assumption \ref{ASS4.4}) combined with our generalized subtangential condition (see Assumptions \ref{ASS2.1} and \ref{ASS2.4}), our results can be applied to hyperbolic and parabolic partial differential equations in $L^p$ instead of $L^1$. The paper is organized as follows. In sections 2 and 3 we recall some basic results about non-densely defined Cauchy problems. In section 4, we investigate the positive invariance of a closed subset. In section 5, we apply our result to an age-structured equation in $L^p$ spaces which is only defined on a closed subset of $L^p$ and show that it generates a semiflow. \section{Preliminary results} Let $A: D(A) \subset X \to X$ be a linear operator. In what follows we use the notations $$ X_0:=\overline{D(A)} $$ and $A_0:D(A_0) \subset X_0 \to X_0$ the part of $A$ in $X_0$, that is, $$ A_0 x=A x, \quad \forall x\in D(A_0), $$ and $$ D(A_0)=\lbrace x\in D(A): Ax\in X_0\rbrace. $$ \begin{assumption} \label{ASS2.1} We assume that \begin{itemize} \item[{\rm (i)}] There exist two constants $\omega_A\in \mathbb{R}$ and $M_A\geq 1$, such that $(\omega_A,+\infty)\subset \rho (A)$ and \begin{equation*} \left\Vert (\lambda I-A)^{-n}\right\Vert _{\mathcal{L}(X_0)}\leq M_A \left( \lambda -\omega_A \right) ^{-n},\;\forall \lambda >\omega_A,\;\forall n \geq 1. \end{equation*} \item[{\rm(ii)}] $\lim_{\lambda \rightarrow +\infty }(\lambda I-A)^{-1}x=0,\ \forall x\in X$.
\end{itemize} \end{assumption} It is important to note that Assumption \ref{ASS2.1} does not say that $A$ is a Hille-Yosida linear operator since the operator norm in Assumption \ref{ASS2.1}-\textrm{(i)} is taken in $X_0 \subseteq X$ (where the inclusion can be strict) instead of in $X$. Further, it follows from \cite{Magal-Ruan2018} that $\rho(A)=\rho(A_0)$. Therefore by Assumption \ref{ASS2.1}, $(A_0,D(A_0))$ is a Hille-Yosida linear operator of type $(\omega_A,M_A)$ and generates a strongly continuous semigroup $\lbrace T_{A_0}(t) \rbrace_{t\geq 0}\subset \mathcal{L}(X_0)$ with \begin{equation*} \Vert T_{A_0}(t)\Vert_{\mathcal{L}(X_0)}\leq M_A e^{\omega_A t}, \quad \forall t\geq 0. \end{equation*} As a consequence, $$ \lim_{\lambda \to + \infty} \lambda \left(\lambda I -A \right)^{-1}x =x $$ holds only for $x \in X_0$. It is important to note that the above limit does not exist in general whenever $x$ belongs to $X$. We summarize the above discussions as follows. \begin{lemma}\label{LE2.2} \ Assumption \ref{ASS2.1} is satisfied if and only if there exist two constants, $M_A\geq 1$ and $\omega_A \in \mathbb{R},$ such that $\left( \omega_A ,+\infty \right) \subset \rho (A)$ and $A_{0}$ is the infinitesimal generator of a $C_{0}$-semigroup $\left\{ T_{A_{0}}(t)\right\} _{t\geq 0}$ on $X_{0}$ which satisfies $\left\| T_{A_{0}}(t)\right\| _{\mathcal{L} (X_{0})}\leq M_Ae^{\omega_A t},\forall t\geq 0$. \end{lemma} Next, we consider the non-homogeneous Cauchy problem \begin{equation}\label{2.1} v^\prime(t)= A v(t)+f(t), \quad t\geq0 \quad \text{and} \quad v(0)=v_0\in X_0, \end{equation} with $f\in L^{1}_{{\rm loc}}(\mathbb{R},X)$. The integrated semi-group is one of the major tools to investigate non-homogeneous Cauchy problems. This notion was first introduced by Arendt \cite{Arendt87a, Arendt87b}. We refer to the book of Arendt et al. \cite{Arendt01} for the case where $A$ is a Hille-Yosida operator.
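Returning to the approximation property $\lim_{\lambda\to+\infty}\lambda(\lambda I-A)^{-1}x=x$ on $X_0$ discussed above: its behavior is easy to visualize numerically in finite dimensions (a sketch with an arbitrary matrix; note that a matrix has dense domain, so the convergence holds for every $x$, and the sketch only illustrates the $O(1/\lambda)$ rate on $X_0$):

```python
import numpy as np

# arbitrary illustrative matrix and vector
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
x = np.array([1.0, -1.0])
I2 = np.eye(2)

# lambda (lambda I - A)^{-1} x - x = (lambda I - A)^{-1} A x = O(1/lambda)
for lam in [1e2, 1e4, 1e6]:
    approx = lam * np.linalg.solve(lam*I2 - A, x)
    print(lam, np.linalg.norm(approx - x))   # decays like O(1/lambda)
```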
We refer to Magal and Ruan \cite{Magal-Ruan07, Magal-Ruan2018} and Thieme \cite{Thieme08} for an integrated semi-group theory whenever $A$ is not a Hille-Yosida operator. We also refer to the book of Magal and Ruan \cite{Magal-Ruan2018} for more references and results on this topic. \begin{definition} \label{DE2.3} Let Assumption \ref{ASS2.1} be satisfied. Then the \textbf{integrated semigroup generated by} $A$ is the strongly continuous family $\left\lbrace S_A(t) \right\rbrace_{t \geq 0} \subset \mathcal{L}(X)$ of bounded linear operators on $X$ defined by $$ S_A(t)x=(\lambda I-A_0) \int_0^t T_{A_{0}}(l) (\lambda I-A)^{-1}x \, dl, \quad \forall t \geq 0. $$ \end{definition} In order to obtain existence and uniqueness of solutions for \eqref{2.1} whenever $f$ is a continuous map, we will require the following assumption. \begin{assumption}\label{ASS2.4} Assume that for any $\tau >0$ and $f\in C\left( \left[ 0,\tau \right] ,X\right) $ there exists $v_{f}\in C\left( \left[ 0,\tau \right] ,X_{0}\right) $ an integrated (mild) solution of \begin{equation*} \frac{dv_{f}(t)}{dt}=Av_{f}(t)+f(t),\text{ for }t\geq 0\text{ and }v_{f}(0)=0, \end{equation*} that is to say that $$ \int_0^t v_f(r)dr \in D(A), \ \forall t\geq 0 $$ and $$ v_f(t)=A\int_0^t v_f(r)dr +\int_0^t f(r)dr , \ \forall t\geq 0. $$ Moreover we assume that there exists a non-decreasing map $\delta :[0,+\infty) \rightarrow [0,+\infty)$ such that \begin{equation*} \Vert v_f(t) \Vert \leq \delta(t) \underset{s\in [0,t]}{\sup} \Vert f(s)\Vert , \ \forall t\geq 0, \end{equation*} with $\delta(t)\rightarrow 0$ as $t\rightarrow 0^+$.
\end{assumption} \begin{remark} \label{REM2.5} Note that Assumption \ref{ASS2.4} is equivalent (see \cite{Magal-Ruan09a}) to the assumption that there exists a non-decreasing map $\delta : [0,+\infty)\rightarrow [0,+\infty)$ such that for each $\tau >0$ and each $f\in C\left( \left[ 0,\tau \right] ,X\right)$ the map $t\rightarrow (S_A\ast f)(t)$ is differentiable on $[0,\tau]$ with $$ \Vert (S_A \diamond f)(t) \Vert\leq \delta(t) \sup_{s\in [0,t]} \Vert f(s) \Vert,\ \forall t\in [0,\tau], $$ where $(S_A\ast f)(t)$ and $(S_A \diamond f)(t)$ will be defined below in Theorem~\ref{TH2.7} and equation \eqref{2.3}. \end{remark} \begin{remark}\label{REM2.6} It is important to point out that Assumption \ref{ASS2.4} is also equivalent to saying that $\left\lbrace S_A(t)\right\rbrace_{t\geq 0}\subset \mathcal{L}(X,X_0)$ is of bounded semi-variation on $[0,t]$ for any $t>0$, that is to say that $$ V^\infty(S_A,0,t):=\sup\left\lbrace \left\Vert \sum_{j=0}^{n-1} [S_A(t_{j+1})-S_A(t_{j})]x_j \right\Vert\right\rbrace <+\infty, $$ where the supremum is taken over all partitions $0=t_0<\dots<t_n=t$ of $[0,t]$ and all elements $x_0,\dots,x_{n-1}\in X$ with $\Vert x_j \Vert\leq 1$ for $j=0,1,\ldots,n-1$. Moreover, the non-decreasing map $\delta : [0,+\infty)\rightarrow [0,+\infty)$ in Assumption \ref{ASS2.4} is defined by $$ \delta(t):= \sup_{s \in [0,t]} V^\infty(S_A,0,s),\ \forall t\geq 0. $$ \end{remark} The following result is proved in \cite[Theorem 2.9]{Magal-Ruan09a}. \begin{theorem}\label{TH2.7} Let Assumptions \ref{ASS2.1} and \ref{ASS2.4} be satisfied.
Then for each $\tau >0$ and each $f\in C(\left[ 0,\tau \right] ,X)$ the map $$ t\rightarrow \left(S_{A}\ast f\right) (t):=\int_{0}^{t}S_{A}(t-s)f(s)ds $$ is continuously differentiable, $\left( S_{A}\ast f\right) (t)\in D(A),\forall t\in \left[ 0,\tau \right] ,$ and if we set $ u(t)=\frac{d}{dt}\left( S_{A}\ast f\right) (t),$ then \begin{equation*} u(t)=A\int_{0}^{t}u(s)ds+\int_{0}^{t}f(s)ds,\;\forall t\in \left[ 0,\tau \right] . \end{equation*} Moreover, we have \begin{equation*} \left\| u(t)\right\| \leq \delta (t)\sup_{s\in \left[ 0,t\right] }\left\| f(s)\right\| ,\;\forall t\in \left[ 0,\tau \right] . \end{equation*} Furthermore, for each $\lambda \in \left( \omega_A ,+\infty \right)$ and each $t\in \left[ 0,\tau \right]$ we have \begin{equation} \label{2.2} \left( \lambda I-A\right) ^{-1}\frac{d}{dt}\left( S_{A}\ast f\right) (t)=\int_{0}^{t}T_{A_{0}}(t-s)\left( \lambda I-A\right) ^{-1}f(s)ds. \end{equation} \end{theorem} From now on we will use the following notation \begin{equation} \label{2.3} \left( S_{A} \diamond f\right) (t):=\frac{d}{dt}\left( S_{A}\ast f\right) (t). \end{equation} From \eqref{2.2}, and using the fact that $\left( S_{A} \diamond f\right) (t) \in X_0 $, we deduce the approximation formula \begin{equation} \label{2.4} \left( S_{A} \diamond f\right) (t)= \lim_{\lambda \to + \infty}\int_{0}^{t}T_{A_{0}}(t-s)\lambda \left( \lambda I-A\right) ^{-1}f(s)ds. \end{equation} A consequence of the approximation formula is the following \begin{equation} \label{2.5} \left( S_{A} \diamond f\right) (t+s)=T_{A_0}(s)\left( S_{A} \diamond f\right) (t)+ \left( S_{A} \diamond f(t+\cdot)\right) (s), \quad \forall t, s \geq 0. \end{equation} The following result, proved by Magal and Ruan \cite[Theorem 3.1]{Magal-Ruan07}, will be used repeatedly in the sequel; in Sections 4 and 5 it will be applied to the operator $A-\gamma B$. \begin{theorem}[Bounded Linear Perturbation]~\\ \label{TH2.8} Let Assumptions \ref{ASS2.1} and \ref{ASS2.4} be satisfied.
Assume $L\in \mathcal{ L}\left( X_{0},X\right) $ is a bounded linear operator. Then $ A+L:D(A)\subset X\rightarrow X$ satisfies the conditions in Assumptions \ref{ASS2.1} and \ref{ASS2.4}. More precisely, if we fix $\tau _{L}>0$ such that \begin{equation*} \delta \left( \tau _{L}\right) \left\| L\right\| _{\mathcal{L}\left( X_{0},X\right) }<1, \end{equation*} and if we denote by $\left\{ S_{A+L}(t)\right\} _{t\geq 0}$ the integrated semigroup generated by $A+L,$ then for any $f\in C\left( \left[ 0,\tau _{L} \right] ,X\right)$, we have \begin{equation*} \left\| \frac{d}{dt}\left( S_{A+L}\ast f\right) (t)\right\| \leq \frac{\delta \left( t\right) }{1-\delta \left( \tau _{L}\right) \left\| L\right\| _{ \mathcal{L}\left( X_{0},X\right) }}\sup_{s\in \left[ 0,t\right] }\left\| f(s)\right\| ,\;\forall t\in \left[ 0,\tau _{L}\right] . \end{equation*} \end{theorem} The following result is proved in \cite[Lemma 2.13]{Magal-Ruan09a}. \begin{lemma} Let Assumptions \ref{ASS2.1} and \ref{ASS2.4} be satisfied. Then $$ \lim_{\lambda \to + \infty} \Vert \left( \lambda I -A \right)^{-1} \Vert_{\mathcal{L}(X)}=0. $$ \end{lemma} It follows that if $B \in \mathcal{L}(X_0,X)$, then for all $\lambda>0$ large enough the linear operator $\lambda I -A -B$ is invertible and its inverse can be written as follows $$ \left(\lambda I -A -B \right)^{-1}=\left(\lambda I -A \right)^{-1}\left[ I-B\left(\lambda I -A \right)^{-1}\right]^{-1}. $$ \section{Existence and Uniqueness of a Maximal Semiflow} Consider now the non-autonomous semi-linear Cauchy problem \begin{equation}\label{3.1} U(t,s)x=x+A\int_{s}^{t}U(l,s)xdl+\int_{s}^{t}F(l,U(l,s)x)dl, \quad t\geq s\geq 0, \end{equation} and the following problem \begin{equation} U(t,s)x=T_{A_{0}}(t-s)x+\frac{d}{dt}\left(S_{A}\ast F(\cdot+s,U(\cdot+s,s)x)\right)(t-s), \quad t\geq s\geq 0. \label{3.2} \end{equation} We will make the following assumption.
\begin{assumption} \label{ASS3.1} Assume that $F:\left[ 0,+\infty \right) \times \overline{D(A)}\rightarrow X$ is a continuous map such that for each $\tau _{0}>0$ and each $\xi >0,$ there exists $K(\tau _{0},\xi )>0$ such that \begin{equation*} \left\| F(t,x)-F(t,y)\right\| \leq K(\tau _{0},\xi )\left\| x-y\right\| \end{equation*} whenever $t\in \left[ 0,\tau _{0}\right] ,$ $y,x\in X_{0},$ and $ \left\| x\right\| \leq \xi ,\left\| y\right\| \leq \xi .$ \end{assumption} In the following definition, $\tau $ is the blow-up time of maximal solutions of (\ref{3.1}). \begin{definition}[Non-autonomous maximal semiflow] \label{DE3.2} ~\\ Consider two maps $\tau :\left[ 0,+\infty \right) \times X_{0}\rightarrow \left( 0,+\infty \right] $ and $U:D_{\tau }\rightarrow X_{0},$ where $$ D_{\tau }=\left\{ (t,s,x)\in \left[ 0,+\infty \right) ^{2}\times X_{0}:s\leq t<s+\tau \left( s,x\right) \right\}. $$ We say that $U$ is \textbf{a maximal non-autonomous semiflow on} $X_{0}$ if $U$ satisfies the following properties: \begin{itemize} \item[{\rm (i)}] $\tau \left( r,U(r,s)x\right) +r=\tau \left( s,x\right) +s,\forall s\geq 0,\forall x\in X_{0},\forall r\in \left[ s,s+\tau \left( s,x\right) \right)$. \item[{\rm (ii)}] $U(s,s)x=x,\forall s\geq 0,\forall x\in X_{0}$. \item[{\rm (iii)}] $U(t,r)U(r,s)x=U(t,s)x,\forall s\geq 0,\forall x\in X_{0},\forall t,r\in \left[ s,s+\tau \left( s,x\right) \right) $ with $t\geq r.$ \item[{\rm (iv)}] If $\tau \left( s,x\right) <+\infty ,$ then \begin{equation*} \lim_{t\rightarrow \left( s+\tau \left( s,x\right) \right) ^{-}}\left\| U(t,s)x\right\| =+\infty . \end{equation*} \end{itemize} \end{definition} Set \begin{equation*} D=\left\{ \left( t,s,x\right) \in \left[ 0,+\infty \right) ^{2}\times X_{0}:t\geq s\right\} . \end{equation*} The following theorem is the main result of this section; it was proved in \cite[Theorem 5.2]{Magal-Ruan07}.
\begin{theorem} \label{TH3.3} Let Assumptions \ref{ASS2.1}, \ref{ASS2.4} and \ref{ASS3.1} be satisfied. Then there exist a map $\tau :\left[ 0,+\infty \right) \times X_{0}\rightarrow \left( 0,+\infty \right] $ and a maximal non-autonomous semiflow $U:D_{\tau }\rightarrow X_{0}$ such that for each $x\in X_{0}$ and each $s\geq 0,$ $U(\cdot,s)x\in C\left( \left[ s,s+\tau \left( s,x\right) \right) ,X_{0}\right) $ is the unique maximal solution of (\ref{3.1}) (or, equivalently, the unique maximal solution of (\ref{3.2})). Moreover, $D_{\tau }$ is open in $D$ and the map $\left( t,s,x\right) \rightarrow U(t,s)x$ is continuous from $D_{\tau }$ into $ X_{0}. $ \end{theorem} \section{Positive invariance of a closed subset} In this section we study the positive invariance of a closed subset under the so-called sub-tangential condition. Our results extend those in \cite{Pavel-Motreanu,Thieme90a}, since we focus on the study of non-densely defined, non-Hille-Yosida semilinear Cauchy problems. We start with some lemmas that will be useful in the subsequent discussions. \begin{lemma}\label{LE4.1} Let Assumptions \ref{ASS2.1} and \ref{ASS2.4} be satisfied. Let $0\leq a<b$ and $x \in X$ be given and define $$ f(t):=x \mathbbm{1}_{[a,b]}(t),\ \forall t\geq 0. $$ Then $t \rightarrow (S_A \ast f)(t)$ is differentiable on $[0,+\infty)$ and $$ (S_A \diamond f)(t)=\dfrac{d}{dt}(S_A \ast f)(t)=S_A((t-a)^+)x-S_A((t-b)^+)x,\ \forall t\geq 0, $$ where $\sigma^+:=\max(0,\sigma), \forall \sigma \in \mathbb{R}$. \end{lemma} \begin{proof} We observe that $$ (S_A \ast f)(t)=\left\lbrace \begin{array}{lll} \int_{a}^t S_A(t-s)x ds & \text{ if } & t\in [a,b],\\ \int_{a}^b S_A(t-s)x ds & \text{ if } & t\geq b, \\ 0 & \text{ if } & 0\leq t\leq a, \end{array} \right. $$ which is equivalent to $$ (S_A \ast f)(t)=\left\lbrace \begin{array}{lll} \int_{0}^{t-a} S_A(s)x ds & \text{ if } & t\in [a,b],\\ \int_{t-b}^{t-a} S_A(s)x ds & \text{ if } & t\geq b, \\ 0 & \text{ if } & 0\leq t\leq a.
\end{array} \right. $$ Then the formula follows by computing the time derivative. \end{proof} By using arguments similar to those in the proof of Lemma \ref{LE4.1}, one can easily obtain the following result. \begin{lemma}\label{LE4.2} Let Assumptions \ref{ASS2.1} and \ref{ASS2.4} be satisfied. Let $0\leq a<b$ be given. Let $a=t_0<\dots < t_n=b$ be a partition of $[a,b]$. Let $f :[a,b]\rightarrow X$ be the step function defined by $$ f(t):=\sum_{i=0}^{n-1} x_i \mathbbm{1}_{[t_i,t_{i+1})}(t),\ \forall t\in[a,b) \ \text{ and } \ f(b)=f(t_{n-1})=x_{n-1}. $$ Then $t \rightarrow (S_A \ast f(a+\cdot))(t-a)$ is differentiable on $[a,b]$ and for any $t \in [t_k,t_{k+1}],\ k=0,\dots,n-1$, one has $$ (S_A \diamond f(a+\cdot))(t-a)=\sum_{i=0}^{k-1} [S_A(t-t_i)-S_A(t-t_{i+1})]x_i+S_A(t-t_k)x_k. $$ \end{lemma} Recall that $f:[a,b]\rightarrow X$ is a regulated function if the limit from the right $\displaystyle \lim_{s\rightarrow t^+}f(s)$ exists for each $t \in [a,b)$ and the limit from the left $\displaystyle \lim_{s\rightarrow t^-}f(s)$ exists for each $t\in (a,b]$. For each $b>a\geq 0$, we denote by ${\rm Reg}([a,b],X)$ the space of regulated functions from $[a,b]$ to $X$, and by ${\rm Step}([a,b],X)$ the space of step functions from $[a,b]$ to $X$. The following lemma extends the property described in Assumption \ref{ASS2.4} from the space of continuous functions to the space of regulated functions. \begin{lemma}\label{LE4.3} Let Assumptions \ref{ASS2.1} and \ref{ASS2.4} be satisfied and let $0\leq a<b$ be given. Then for any $f \in {\rm Reg}([a,b],X)$ we have $$ \Vert (S_A \diamond f(a+\cdot))(t-a) \Vert \leq \delta(t-a) \sup_{s\in [a,t]} \Vert f(s) \Vert, \ \forall t\in [a,b].
$$ \end{lemma} \begin{proof} Since ${\rm Step}([a,b],X)$ is dense in ${\rm Reg}([a,b],X)$ for the topology of uniform convergence (see Dieudonne \cite[p.139]{Dieudonne}), it is sufficient to prove the result for $f \in {\rm Step}([a,b],X)$ and apply the linear extension theorem to the bounded linear operator $$ f \in {\rm Step}([a,b],X) \mapsto (S_A \diamond f)(\cdot). $$ Let $f \in {\rm Step}([a,b],X)$ be a non zero step function given by $$ f(t):=\sum_{i=0}^{n-1} x_i \mathbbm{1}_{[t_i,t_{i+1})}(t),\ \forall t\in [a,b),\ \text{ and } f(b)=f(t_{n-1})=x_{n-1} $$ with $a=t_0<t_1<\cdots<t_n=b$. Let $t\in [a,b]$ be given and fixed. Then there exists $k\in \{0,\dots,n-1\}$ such that $t\in [t_k,t_{k+1}]$. Hence by Lemma \ref{LE4.2} we have $$ \begin{array}{llll} (S_A \diamond f(a+\cdot))(t-a)&=& \displaystyle \sum_{i=0}^{k-1} [S_A(t-t_i)-S_A(t-t_{i+1})]x_i+S_A(t-t_k)x_k\\ &=& \displaystyle \sum_{i=0}^{k} S_A(t-t_{k-i})x_{k-i}-\sum_{i=1}^{k}S_A(t-t_{k-i+1}) x_{k-i}\\ \end{array} $$ Setting $$ \bar{t}_i=t-t_{k-i+1},\ i=1,\dots, k \ \text{ and } \ \bar{t}_{0}:=0 $$ and $$ \bar{x}_i:=\dfrac{x_{k-i}}{\alpha}, \ i=0,\dots, k $$ with $\alpha:=\max_{i=1,\dots,k} {\rm V}ert x_i {\rm V}ert>0$. Then we obtain $$ \begin{array}{llll} (S_A \diamond f(a+\cdot))(t-a) &=& \alpha \displaystyle \sum_{i=0}^{k} [S_A(\bar{t}_{i+1})-S_A(\bar{t}_{i})]\bar{x}_i.\\ \end{array} $$ Since $0=\bar{t}_0<\cdots< \bar{t}_{k+1}=t-a$ and ${\rm V}ert \bar{x}_i {\rm V}ert\leq 1$ for all $i=1,\dots,k$, it follows from Remark \ref{REM2.6} that $$ {\rm V}ert (S_A \diamond f(a+\cdot))(t-a){\rm V}ert \leq \alpha V^{\infty }(S_A,0,t-a)\leq \alpha \delta(t-a) $$ and the result follows by observing that $$ \alpha:=\max_{i=1,\dots,k} {\rm V}ert x_i {\rm V}ert=\sup_{s\in [a,t]}{\rm V}ert f(s) {\rm V}ert. $$ \end{proof} In order to prove the invariance property of a closed subset $C_0 \subset X_0$ we need to make the following assumption. 
\begin{assumption}[Sub-Tangential Condition]\label{ASS4.4} Let $C_0$ be a closed subset of $X_0$. We assume that there exists a bounded linear operator $B : X_0\rightarrow X$ such that for each $\xi >0$ and each $\sigma>0$ there exists $\gamma=\gamma(\xi,\sigma)>0$ such that $$ \lim_{h\rightarrow 0^+}\dfrac{1}{h} d\left( T_{(A-\gamma B)_0}(h)x+S_{A-\gamma B}(h) \left[F(t,x)+\gamma Bx \right],C_0\right)=0, $$ whenever $x \in C_0$ with $ \Vert x \Vert \leq \xi $ and $t \in [0, \sigma]$. Here the map $x \to d(x,C_0)$ is the Hausdorff semi-distance, defined by $$ d(x,C_0):=\inf_{y\in C_0} \Vert x-y \Vert. $$ \end{assumption} \begin{remark} Recall that the usual assumption ensuring the non-negativity of the mild solutions of \eqref{1.1} is covered by Assumption \ref{ASS4.4}. In fact, $X_{0+}$ is positively invariant with respect to the semiflow generated by \eqref{1.1} if for each $\xi >0$ and each $\sigma>0$ there exists $\gamma=\gamma(\xi,\sigma)>0$ such that $$ T_{(A-\gamma B)_0}(h)x+S_{A-\gamma B}(h)[F(t,x)+\gamma Bx] \in X_{0+} $$ whenever $x \in X_{0+}$ with $ \Vert x \Vert \leq \xi $ and $t \in [0, \sigma]$. \end{remark} The main result of this article is the following theorem. \begin{theorem}[Positively Invariant Subset] \label{TH4.6} Let Assumptions \ref{ASS2.1}, \ref{ASS2.4}, \ref{ASS3.1} and \ref{ASS4.4} be satisfied. Then for each $x\in C_{0}$ and each $s\geq 0$, we have $$ U(t,s)x\in C_0, \quad \forall t \in \left[ s,s+\tau \left( s,x\right) \right). $$ \end{theorem} The rest of this section is devoted to the proof of Theorem \ref{TH4.6}. We fix the initial condition $x_0 \in C_0$ and take $s=0$. Set $\rho:= 2 (\Vert x_0 \Vert+1)$ and define $$ F_\gamma(t,x):=F(t,x)+\gamma Bx, \ \forall (t,x) \in [0,+\infty)\times X_0.
$$ Let $\Lambda:= \Lambda(\rho)>0$ be a constant such that \begin{equation} \label{4.1} \Vert F_\gamma(t,x)-F_\gamma(t,y)\Vert \leq \Lambda \Vert x-y \Vert, \quad \forall t\in [0,\rho],\ \forall x,y \in B(0,\rho). \end{equation} Therefore, by setting $$ \Gamma:=2\Lambda \rho+\sup_{t\in [0,\rho]}\Vert F_\gamma(t,x_0) \Vert, $$ we obtain \begin{equation}\label{4.2} \Vert F_\gamma(t,x) \Vert \leq \Gamma,\ \forall t\in [0,\rho],\ \forall x\in B(0,\rho). \end{equation} Let $\gamma:=\gamma(\rho)>0$ be a constant such that \begin{equation}\label{4.3} \lim_{h\rightarrow 0^+} \dfrac{1}{h} d\left( T_{(A-\gamma B)_0}(h)x+S_{A-\gamma B}(h)F_\gamma(t,x),C_0\right)=0, \end{equation} whenever $x \in C_0$, $ \Vert x \Vert \leq \rho $ and $t \in [0, \rho]$. Then, by Theorem \ref{TH2.8}, $A-\gamma B: D(A)\subset X\rightarrow X$ satisfies Assumptions \ref{ASS2.1} and \ref{ASS2.4}. Hence, combining Theorem \ref{TH2.8} and Lemma \ref{LE4.3}, we know that if we fix $\tau _{\gamma}>0$ such that \begin{equation*} \gamma \delta \left( \tau _{\gamma}\right) \left\| B\right\| _{\mathcal{L}\left( X_{0},X\right) }<1, \end{equation*} then there exists a non-decreasing map $\delta_\gamma : [0,+\infty)\rightarrow [0,+\infty)$ with $$\lim_{t\rightarrow 0^+}\delta_\gamma(t)=0$$ such that for each $f \in {\rm Reg}([a,b],X)$ with $0\leq a<b\leq \tau_\gamma$, \begin{equation}\label{4.4} \left\| \left( S_{A-\gamma B}\diamond f(a+\cdot)\right)(t-a) \right\| \leq \delta_\gamma(t-a)\sup_{s\in \left[ a,t\right] }\left\| f(s)\right\| ,\;\forall t\in \left[ a,b\right] . \end{equation} To shorten the notation, we set $$ \omega_\gamma:=\omega_{_{A-\gamma B}} \ \text{ and } \ M_\gamma:= M_{_{A-\gamma B}}.
$$ Let $\tau\in (0,\min(\tau \left( 0,x_0\right),\tau_\gamma,\rho))$ be small enough to satisfy \begin{equation}\label{4.5} \Gamma \delta_\gamma(h)+M_{\gamma} e^{\omega_{\gamma}^+ h} h+\Vert T_{(A-\gamma B)_0}(h)x_0\Vert\leq \rho,\ \forall h\in [0,\tau], \end{equation} with $$ \omega_\gamma^+=\max(0,\omega_\gamma), $$ and \begin{equation}\label{4.6} 0<\Lambda \delta_\gamma (\tau)<1, \end{equation} where $\Lambda$ is the Lipschitz constant of $F_\gamma$ on $B(0,\rho)$ introduced in \eqref{4.1}. \noindent \textbf{Construction of the knots:} Let $ \varepsilon \in (0,1)$ be fixed. We define by induction a sequence $(l_k,y_k) \in [0,\tau]\times C_0$, indexed by non-negative integers $k$ (possibly infinitely many). For $k=0$ we start with $$ l_0= 0 \ \text{ and } \ y_0=x_0\in C_0. $$ In order to compute the next increment, we define for each integer $k \geq 0$ \begin{equation}\label{4.7} \begin{array}{ll} I_k = \left \lbrace \eta \in (0,\varepsilon^*) : \right. & \Vert F_\gamma(l,y)-F_\gamma(l_k,y_k)\Vert\leq \varepsilon,\ \forall \vert l-l_k\vert\leq \eta, \ \forall y\in B(y_k,\eta)\cap C_0,\\ & \left.\dfrac{1}{\eta} d\left( T_{(A-\gamma B)_0}(\eta)y_k+S_{A-\gamma B}(\eta)F_\gamma(l_k,y_k),C_0\right) <\dfrac{\varepsilon}{2} \right. \\ & \left. \text{ and } \Vert T_{(A-\gamma B)_0}(\eta)y_k-y_k \Vert \leq \varepsilon\right\rbrace, \end{array} \end{equation} where $\varepsilon^*:=\min(\varepsilon,\rho)$. Set \begin{equation}\label{4.8} r_k:= \sup(I_k)>0 \ \text{ and } \ l_{k+1}:=\min \left( l_k+\dfrac{r_k}{2},\tau\right). \end{equation} We define $$ y_{k+1}= y_k \in C_0 \ \text{ if } \ l_{k+1}=\tau. $$ Otherwise, if $l_{k+1}=l_k+\dfrac{r_k}{2}<\tau$, then $$ 0< l_{k+1}-l_{k}=\dfrac{r_k}{2}<r_k, $$ hence $$ l_{k+1}-l_k \in I_k. $$ Thus, it follows that $$ \dfrac{1}{l_{k+1}-l_k} d\left( T_{(A-\gamma B)_0}(l_{k+1}-l_k)y_k+S_{A-\gamma B}(l_{k+1}-l_k)F_\gamma(l_k,y_k),C_0\right)<\dfrac{\varepsilon}{2}.
$$ Therefore, we can find $y_{k+1} \in C_0$ satisfying \begin{equation}\label{4.9} \dfrac{1}{l_{k+1}-l_k}\Vert T_{(A-\gamma B)_0}(l_{k+1}-l_k)y_k+S_{A-\gamma B}(l_{k+1}-l_k)F_\gamma(l_k,y_k)-y_{k+1}\Vert \leq \dfrac{\varepsilon}{2}. \end{equation} Set $$ H_k:=\dfrac{1}{l_{k+1}-l_k} \left[ y_{k+1}-T_{(A-\gamma B)_0}(l_{k+1}-l_k)y_k-S_{A-\gamma B}(l_{k+1}-l_k)F_\gamma(l_k,y_k) \right]. $$ Then it follows that \begin{equation}\label{4.10} H_k\in X_0 \text{ and } \Vert H_k \Vert \leq \dfrac{\varepsilon}{2}, \end{equation} and \begin{equation}\label{4.11} y_{k+1}=T_{(A-\gamma B)_0}(l_{k+1}-l_k)y_k+S_{A-\gamma B}(l_{k+1}-l_k)F_\gamma(l_k,y_k)+(l_{k+1}-l_k)H_k\in C_0. \end{equation} \begin{lemma}\label{LE4.7} Let Assumptions \ref{ASS2.1}, \ref{ASS2.4}, \ref{ASS3.1} and \ref{ASS4.4} be satisfied. Then the knots $(l_k,y_k)$, $k\geq 0$, satisfy the following properties: \begin{itemize} \item[{\rm (i)}] For all $k>m\geq 0$ we have \begin{equation}\label{4.12} \begin{array}{lll} y_k&=& \displaystyle T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m+\sum_{i=m}^{k-1} (l_{i+1}-l_{i})T_{(A-\gamma B)_0}(l_{k}-l_{i+1})H_i\\ & & \displaystyle+\sum_{i=m}^{k-1} T_{(A-\gamma B)_0}(l_{k}-l_{i+1})S_{A-\gamma B}(l_{i+1}-l_i)F_\gamma(l_i,y_i). \end{array} \end{equation} \item[{\rm (ii)}] $y_k \in B(0,\rho)\cap C_0$ for any $k\geq 0$. \item[{\rm (iii)}] For all $k>m\geq 0$ we have $$ \Vert y_k-T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m\Vert \leq \Gamma \delta_\gamma(l_k-l_m)+\dfrac{\varepsilon}{2} M_\gamma e^{\omega_\gamma^+ (l_k-l_m)} (l_k-l_m). $$ \end{itemize} \end{lemma} \begin{proof} \textbf{Proof of (i):} Let $k>m\geq 0$ be given. Recall that for all $i=0,\dots,k-1$ we have $$ y_{i+1}=T_{(A-\gamma B)_0}(l_{i+1}-l_{i})y_{i}+S_{A-\gamma B}(l_{i+1}-l_{i})F_\gamma(l_{i},y_{i})+(l_{i+1}-l_{i})H_i.
$$ Define the linear operators $L_i : X_0 \rightarrow X_0$ by $$ L_i:=T_{(A-\gamma B)_0}(l_{i+1}-l_{i}),\ i=0,\dots,k-1. $$ Hence $$ y_{i+1}=L_iy_i+S_{A-\gamma B}(l_{i+1}-l_{i})F_\gamma(l_{i},y_{i})+(l_{i+1}-l_{i})H_i,\ i=0,\dots,k-1. $$ In order to use a variation of constants formula, we introduce the evolution family $$ U(i,j)=L_{i-1}\cdots L_j \ \text{ if } i>j, \ \text{ and } \ U(i,i)=I_{X_0}. $$ Then it follows from the semigroup property that $$ U(i,j)=T_{(A-\gamma B)_0}(l_{i}-l_{j}) \ \text{ if } i\geq j . $$ By using a discrete variation of constants formula, we have for integers $ k \geq m \geq 0$ \begin{equation*} \begin{array}{lll} y_k&=&\displaystyle U(k,m)y_m+\sum_{i=m}^{k-1} U(k,i+1)[S_{A-\gamma B}(l_{i+1}-l_{i})F_\gamma(l_{i},y_{i})+(l_{i+1}-l_{i})H_i] \\ & =& \displaystyle T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m+\sum_{i=m}^{k-1} (l_{i+1}-l_{i}) T_{(A-\gamma B)_0}(l_{k}-l_{i+1})H_i\\ & &+ \displaystyle \sum_{i=m}^{k-1} T_{(A-\gamma B)_0}(l_{k}-l_{i+1})S_{A-\gamma B}(l_{i+1}-l_{i})F_\gamma(l_{i},y_{i}). \end{array} \end{equation*} \textbf{Proof of (ii):} We argue by induction. The property is true for $k=0$, since $y_0=x_0 \in B(0,\rho)\cap C_0$. Assume that, for some $k\geq 1$, $$ y_0,\dots, y_{k-1} \in B(0,\rho)\cap C_0. $$ We now show that $y_k \in B(0,\rho)\cap C_0$. In view of \eqref{4.12}, for any $m=0,\dots,k-1$, we have $$ y_k-T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m=\sum_{i=m}^{k-1} T_{(A-\gamma B)_0}(l_{k}-l_{i+1})[S_{A-\gamma B}(l_{i+1}-l_{i})F_\gamma(l_{i},y_{i})+(l_{i+1}-l_{i})H_i]. $$ Then it follows that $$ \Vert y_k-T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m \Vert\leq \Vert W_{k,m} \Vert+\Vert Z_{k,m} \Vert, $$ where $$ W_{k,m}:=\sum_{i=m}^{k-1} T_{(A-\gamma B)_0}(l_{k}-l_{i+1})S_{A-\gamma B}(l_{i+1}-l_{i})F_\gamma(l_{i},y_{i}) $$ and $$ Z_{k,m}:=\sum_{i=m}^{k-1} (l_{i+1}-l_{i})T_{(A-\gamma B)_0}(l_{k}-l_{i+1})H_i. $$ Next, we estimate $W_{k,m}$ and $Z_{k,m}$.
Since $H_i \in X_0$ and $\Vert H_i \Vert \leq \dfrac{\varepsilon}{2}$ for any $i=m,\dots,k-1$, it is easy to obtain from \eqref{4.10} that \begin{equation}\label{4.13} \begin{array}{lll} \Vert Z_{k,m}\Vert &\leq & \displaystyle \sum_{i=m}^{k-1} (l_{i+1}-l_{i})\dfrac{\varepsilon}{2} M_\gamma e^{\omega_\gamma^+ (l_{k}-l_{i+1})} \\ &\leq & \displaystyle\dfrac{\varepsilon}{2} M_\gamma e^{\omega_\gamma^+ (l_k-l_m)} (l_k-l_m), \end{array} \end{equation} where $$ \omega_\gamma^+=\max(0,\omega_\gamma). $$ In order to estimate $W_{k,m}$, we will rewrite it in a more convenient form. Using the relationship $$ T_{(A-\gamma B)_0}(\sigma)S_{A-\gamma B}(h)=S_{A-\gamma B}(\sigma+h)-S_{A-\gamma B}(\sigma),\ \forall \sigma\geq 0,\ \forall h\geq 0, $$ we see that $$ W_{k,m}= \sum_{i=m}^{k-1} [ S_{A-\gamma B}(l_{k}-l_{i})-S_{A-\gamma B}(l_{k}-l_{i+1}) ]F_\gamma(l_{i},y_{i}). $$ By Lemma \ref{LE4.2} we have $$ W_{k,m}= (S_{A-\gamma B} \diamond f_\gamma(l_m+\cdot))(l_k-l_m) $$ with the step function $$ f_\gamma(t)=F_\gamma(l_{i},y_{i}),\ \forall t\in [l_i,l_{i+1}),\ i=m,\dots,k-1,\ \text{ and } \ f_\gamma(l_k)=F_\gamma(l_{k-1},y_{k-1}). $$ Therefore, by using the inequality \eqref{4.4} with $a=l_m$ and $b=l_k$, it follows that \begin{equation}\label{4.14} \begin{array}{lll} \displaystyle \left \Vert W_{k,m} \right \Vert &= & \displaystyle \left \Vert(S_{A-\gamma B} \diamond f_\gamma(l_m+\cdot))(l_k-l_m) \right \Vert\\ &\leq & \displaystyle \delta_\gamma(l_k-l_m) \sup_{s \in [l_m,l_k]} \Vert f_\gamma(s) \Vert \\ &=& \displaystyle \delta_\gamma(l_k-l_m) \max_{i=m,\dots,k-1} \Vert F_\gamma(l_{i},y_{i}) \Vert. \end{array} \end{equation} By using \eqref{4.2} and the induction assumption, we deduce that $$ \max_{i=m,\dots,k-1} \Vert F_\gamma(l_{i},y_{i}) \Vert\leq \Gamma.
$$ Then it follows from \eqref{4.13} and \eqref{4.14} that \begin{equation*} \Vert y_k-T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m \Vert \leq \Gamma \delta_\gamma(l_k-l_m)+\dfrac{\varepsilon}{2} M_\gamma e^{\omega_\gamma^+ (l_k-l_m)} (l_k-l_m) \end{equation*} for $m=0,\dots,k-1$. To conclude the proof of (ii), we note that $$ \Vert y_k-T_{(A-\gamma B)_0}(l_{k}-l_{0})x_0 \Vert=\Vert y_k-T_{(A-\gamma B)_0}(l_{k})x_0 \Vert \leq \Gamma \delta_\gamma(l_k)+M_\gamma e^{\omega_\gamma^+ l_k} l_k $$ and $$ \begin{array}{lll} \Vert y_k \Vert &\leq & \Vert y_k-T_{(A-\gamma B)_0}(l_{k})x_0 \Vert+\Vert T_{(A-\gamma B)_0}(l_{k})x_0 \Vert \\ & \leq & \Gamma \delta_\gamma(l_k)+M_\gamma e^{\omega_\gamma^+ l_k} l_k+\Vert T_{(A-\gamma B)_0}(l_{k})x_0\Vert. \end{array} $$ Since $l_k\in [0,\tau]$, the inequality \eqref{4.5} implies that $y_k \in B(0,\rho)\cap C_0$. \noindent \textbf{Proof of (iii):} The proof follows the same lines as in (ii). \end{proof} \begin{lemma} \label{LE4.8} Let Assumptions \ref{ASS2.1}, \ref{ASS2.4}, \ref{ASS3.1} and \ref{ASS4.4} be satisfied. Then there exists an integer $n_\varepsilon\geq 1$ such that $l_{n_\varepsilon}=\tau$. That is to say that we have a finite number of knots $(l_k,y_k)$, $k=0,\dots,n_\varepsilon$, with $$ 0=l_0<l_1<\cdots<l_{n_\varepsilon-1}< l_{n_\varepsilon}=\tau \ \text{ and }\ y_0,y_1,\dots,y_{n_\varepsilon} \in C_0,\ y_0=x_0. $$ \end{lemma} \begin{proof} We argue by contradiction. Assume that $l_k<\tau$ for all $k\geq 0$, that is to say that $$ l_{k+1}=l_k+\dfrac{r_k}{2}, \ \forall k\geq 0. $$ Since the sequence $(l_k)_{k\geq 0}$ is strictly increasing and bounded above by $\tau$, there exists $l^*\leq \tau$ such that $l_k\rightarrow l^*$ as $k\rightarrow +\infty$ and $l_k<l^*$ for each $k\geq 0$. This also implies that \begin{equation}\label{4.15} \lim_{k\rightarrow +\infty} r_k=0.
\end{equation} In order to contradict \eqref{4.15}, we will prove that there exists $k_0$ large enough and $\eta^*>0$ such that $\eta^* \in I_k$ for all $k\geq k_0$. This will mean that $r_k=\sup I_k \geq \eta^*>0$ for all $k\geq k_0$. Let us show that $\left\lbrace y_k \right\rbrace_{k\geq 0}$ is a Cauchy sequence. To this end, we let $m\geq 0$ be arbitrary and $k\geq j>m$ be given. Then from Lemma \ref{LE4.7}, for all $k\geq j>m$, we have $$ \begin{array}{llll} \Vert y_k-y_j \Vert &\leq & \Vert y_k-T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m \Vert \\ \\ &+& \Vert T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m-T_{(A-\gamma B)_0}(l_{j}-l_{m})y_m\Vert\\ \\ &+ & \Vert T_{(A-\gamma B)_0}(l_{j}-l_{m})y_m-y_j \Vert \\ \\ &\leq & \Gamma \delta_\gamma(l_k-l_m)+M_\gamma e^{\omega_\gamma^+ (l_k-l_m)} (l_k-l_m) \\ \\ &+& \Vert T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m-T_{(A-\gamma B)_0}(l_{j}-l_{m})y_m\Vert \\ \\ &+& \Gamma \delta_\gamma(l_j-l_m)+M_\gamma e^{\omega_\gamma^+ (l_j-l_m)} (l_j-l_m). \end{array} $$ Then $$ \limsup_{k,j\rightarrow +\infty} \Vert y_k-y_j \Vert\leq 2 \Gamma \delta_\gamma(l^*-l_m)+2 M_\gamma e^{\omega_\gamma^+ (l^*-l_m)} (l^*-l_m). $$ Since $m$ is arbitrary and $$ \lim_{m\rightarrow +\infty} [2 \Gamma \delta_\gamma(l^*-l_m)+2 M_\gamma e^{\omega_\gamma^+ (l^*-l_m)} (l^*-l_m)]=0, $$ we deduce that $(y_k)_{k\geq 0}$ is a Cauchy sequence in $B(0,\rho)\cap C_0$. Therefore there exists $y^*\in B(0,\rho)\cap C_0$ such that $$ \lim_{k\rightarrow +\infty} y_k=y^* \in C_0. $$ Since $y^* \in C_0$, we have $$ \lim_{h\rightarrow 0^+}\dfrac{1}{h} d\left( T_{(A-\gamma B)_0}(h)y^*+S_{A-\gamma B}(h)F_\gamma(l^*,y^*),C_0\right)=0.
$$ By using the above limit, we can find $\eta^*$ small enough such that \begin{equation}\label{4.16} 0<\eta^*< \dfrac{\varepsilon}{4}<\varepsilon^* \end{equation} and \begin{equation}\label{4.17} \dfrac{1}{\eta^*} d\left( T_{(A-\gamma B)_0}(\eta^*)y^*+S_{A-\gamma B}(\eta^*)F_\gamma(l^*,y^*),C_0\right) \leq \dfrac{\varepsilon}{4}, \end{equation} and (by using the continuity of $(l,y) \to T_{(A-\gamma B)_0}(l)y$) \begin{equation} \label{4.17Bis} \Vert T_{(A-\gamma B)_0}(\eta^*)y^*-y^*\Vert \leq \dfrac{\varepsilon}{2}, \end{equation} and (by using the continuity of $(l,y) \to F_\gamma(l,y)$) \begin{equation}\label{4.18} \vert l^*-l \vert \leq 2\eta^* \ \text{ and } \ \Vert y-y^* \Vert \leq 2\eta^* \Rightarrow \Vert F_\gamma(l,y)-F_\gamma(l^*,y^*) \Vert \leq \dfrac{\varepsilon}{2}. \end{equation} To obtain a contradiction, we will use the 1-Lipschitz continuity of $x\in X\rightarrow d(x,C_0)$ combined with the continuity of $(l,y) \to F_\gamma(l,y)$ and $(l,y) \to T_{(A-\gamma B)_0}(l)y$ at $(l^*,y^*)$. Since $\eta^*$ is fixed and $y_k\to y^*$ and $l_k \to l^*$, there exists $k_0\geq 0$ large enough such that for all $k\geq k_0$ one has \begin{equation}\label{4.19} \left\lbrace \begin{array}{llll} \Vert F_\gamma(l_k,y_k)-F_\gamma(l^*,y^*) \Vert \leq \dfrac{\varepsilon}{2},\\ \Vert T_{(A-\gamma B)_0}(\eta^*)y_k-T_{(A-\gamma B)_0}(\eta^*)y^*\Vert\leq \dfrac{\varepsilon}{4},\\ \Vert y_k-y^* \Vert\leq \eta^* \ \text{ and } \ 0<\vert l^*-l_k \vert \leq \eta^*. \end{array} \right. \end{equation} By using \eqref{4.17} together with $ y_k\to y^* $ and $l_k \to l^*$, we obtain for each $k\geq k_0$ (taking $k_0$ possibly larger) \begin{equation}\label{4.20} \dfrac{1}{\eta^*} d\left( T_{(A-\gamma B)_0}(\eta^*)y_k+S_{A-\gamma B}(\eta^*)F_\gamma(l_k,y_k),C_0\right) < \dfrac{\varepsilon}{2},\ \forall k\geq k_0.
\end{equation} Next we note that, for any $k\geq k_0$, $$ 0\leq l-l_k \leq \eta^* \Rightarrow\vert l-l^* \vert\leq \vert l-l_k \vert+\vert l^*-l_k\vert\leq 2\eta^* $$ and $$ \Vert y-y_k \Vert \leq \eta^* \Rightarrow\Vert y-y^* \Vert\leq \Vert y-y_k \Vert+\Vert y^*-y_k\Vert\leq 2\eta^*. $$ Combining \eqref{4.18} with \eqref{4.19}, it follows that for any $k\geq k_0$ \begin{equation}\label{4.21} \Vert F_\gamma(l,y)-F_\gamma(l_k,y_k) \Vert\leq \Vert F_\gamma(l,y)-F_\gamma(l^*,y^*) \Vert+\Vert F_\gamma(l^*,y^*)-F_\gamma(l_k,y_k) \Vert\leq \varepsilon \end{equation} whenever \begin{equation}\label{4.22} \vert l-l_k \vert \leq \eta^* \ \text{ and }\ \Vert y-y_k \Vert \leq \eta^*. \end{equation} In view of \eqref{4.16}, \eqref{4.17Bis} and \eqref{4.19}, we further have \begin{equation}\label{4.23} \begin{array}{lll} \Vert T_{(A-\gamma B)_0}(\eta^*)y_k-y_k \Vert &\leq & \Vert T_{(A-\gamma B)_0}(\eta^*)y_k-T_{(A-\gamma B)_0}(\eta^*)y^*\Vert \\ & & +\Vert T_{(A-\gamma B)_0}(\eta^*)y^*-y^* \Vert+\Vert y^*-y_k \Vert\leq \varepsilon. \end{array} \end{equation} Finally, it follows from \eqref{4.20}-\eqref{4.23} that $\eta^* \in I_k$ for all $k\geq k_0$, which contradicts \eqref{4.15}. \end{proof} \noindent \textbf{Construction of the approximate solution:} Recall that from property (i) of Lemma \ref{LE4.7} we have, for each $k\geq 1$ and each $m=0,\dots,k-1$, \begin{equation}\label{4.24} \begin{array}{lll} y_k&=& \displaystyle T_{(A-\gamma B)_0}(l_{k}-l_{m})y_m+\sum_{i=m}^{k-1} (l_{i+1}-l_{i})T_{(A-\gamma B)_0}(l_{k}-l_{i+1})H_i\\ & & \displaystyle+\sum_{i=m}^{k-1} T_{(A-\gamma B)_0}(l_{k}-l_{i+1})S_{A-\gamma B}(l_{i+1}-l_i)F_\gamma(l_i,y_i).
\end{array} \end{equation} For each $t\in [l_k,l_{k+1}]$ and each $k=0,\dots,{n_\varepsilon}-1$, we set \begin{equation}\label{4.25} \begin{array}{lll} u_\varepsilon(t)&:=& \displaystyle T_{(A-\gamma B)_0}(t-l_{0})y_0+S_{A-\gamma B}(t-l_k)F_\gamma(l_k,y_k)+(t-l_k)H_k\\ & & + \displaystyle \sum_{i=0}^{k-1} (l_{i+1}-l_{i})T_{(A-\gamma B)_0}(t-l_{i+1})H_i\\ & & \displaystyle+\sum_{i=0}^{k-1} T_{(A-\gamma B)_0}(t-l_{i+1})S_{A-\gamma B}(l_{i+1}-l_i)F_\gamma(l_i,y_i), \end{array} \end{equation} with the convention $$ \sum_{i=m}^{p}=0\ \text{ if } p<m. $$ By using the semigroup property of $t \to T_{(A-\gamma B)_0}(t)$, we deduce from \eqref{4.24} and \eqref{4.25} that \begin{equation} \label{4.26} u_\varepsilon(t)=T_{(A-\gamma B)_0}(t-l_k)y_k+S_{A-\gamma B}(t-l_k)F_\gamma(l_k,y_k)+(t-l_k)H_k,\ \forall t\in [l_k,l_{k+1}]. \end{equation} Then it is clear that $u_\varepsilon$ is well defined and continuous from $[0,\tau]$ into $X_0$, and $$ u_\varepsilon(l_k)=y_k,\ \forall k=0,\dots,n_\varepsilon. $$ Next, we rewrite $u_\varepsilon(t)$ in a form that will be convenient for the subsequent discussions. By using the relationship $$ S_{A-\gamma B}(h+\sigma)-S_{A-\gamma B}(\sigma)=T_{(A-\gamma B)_0}(\sigma)S_{A-\gamma B}(h),\ \forall h\geq 0,\ \forall \sigma\geq 0, $$ one can rewrite the formula \eqref{4.25} of $u_\varepsilon$ as $$ \begin{array}{lll} u_\varepsilon(t)&=& \displaystyle T_{(A-\gamma B)_0}(t-l_0)y_0+S_{A-\gamma B}(t-l_k)F_\gamma(l_k,y_k)+(t-l_k)H_k\\ & & \displaystyle +\sum_{i=0}^{k-1} (l_{i+1}-l_i)T_{(A-\gamma B)_0}(t-l_{i+1})H_i\\ & & \displaystyle +\sum_{i=0}^{k-1}[S_{A-\gamma B}(t-l_{i})-S_{A-\gamma B}(t-l_{i+1})]F_\gamma(l_i,y_i), \ \forall t\in [l_k,l_{k+1}].
\end{array} $$ Setting \begin{equation}\label{4.27} f_\gamma(t)=F_\gamma(l_i,y_i),\ \forall t\in [l_i,l_{i+1}),\ i=0,\dots,n_{\varepsilon}-1,\ f_\gamma(l_{n_\varepsilon})=F_\gamma(l_{n_\varepsilon-1},y_{n_\varepsilon-1}) \end{equation} and remembering that $y_0=x_0$, by Lemma \ref{LE4.2} we obtain for each $t\in [l_k,l_{k+1}]$, \begin{equation}\label{4.28} \begin{array}{lll} u_\varepsilon(t)&=&\displaystyle T_{(A-\gamma B)_0}(t-l_0)x_0+(S_{A-\gamma B} \diamond f_\gamma(l_0+\cdot))(t-l_0)\\ & & \displaystyle +(t-l_k)H_k+\sum_{i=0}^{k-1} (l_{i+1}-l_i)T_{(A-\gamma B)_0}(t-l_{i+1})H_i. \end{array} \end{equation} Similar arguments also give, for any $t\in [l_k,l_{k+1}]$ and each integer $m \in [0, k]$, \begin{equation}\label{4.29} \begin{array}{lll} u_\varepsilon(t)&=&\displaystyle T_{(A-\gamma B)_0}(t-l_m)y_m+(S_{A-\gamma B} \diamond f_\gamma(l_m+\cdot))(t-l_m)\\ & & \displaystyle +(t-l_k)H_k+\sum_{i=m}^{k-1} (l_{i+1}-l_i)T_{(A-\gamma B)_0}(t-l_{i+1})H_i. \end{array} \end{equation} By using \eqref{4.2} again, we also have the following estimate: for any $t\in [l_m,l_{k}]$ with $k\geq m$, \begin{equation}\label{4.30} \Vert (S_{A-\gamma B} \diamond f_\gamma(l_m+\cdot))(t-l_m) \Vert \leq \Gamma\delta_\gamma (t-l_m) . \end{equation} \begin{lemma}\label{LE4.9} Let Assumptions \ref{ASS2.1}, \ref{ASS2.4}, \ref{ASS3.1} and \ref{ASS4.4} be satisfied. Then the approximate solution $u_\varepsilon(t)$ in \eqref{4.28} satisfies the following properties: \begin{itemize} \item[{\rm (i)}] There exists a constant $\hat{M}_0>0$ such that $$ \Vert u_\varepsilon(t)-y_k \Vert \leq \hat{M}_0(\varepsilon+\delta_\gamma(\varepsilon)),\ \forall t\in [l_k,l_{k+1}] $$ with $k=0,\dots,n_\varepsilon-1$. \item[{\rm (ii)}] $u_\varepsilon(t) \in B(0,\rho),\ \forall t\in [0,\tau]$.
\item[{\rm (iii)}] There exists a constant $\hat{M}_1>0$ such that for all $t\in [0,\tau]$ \begin{equation}\label{4.31} \left\Vert u_\varepsilon(t)-T_{(A-\gamma B)_0}(t)x_0-(S_{A-\gamma B} \diamond F_\gamma(\cdot,u_\varepsilon(\cdot)))(t)\right\Vert\leq \hat{M}_1(\varepsilon+\delta_\gamma(\varepsilon)). \end{equation} \end{itemize} \end{lemma} \begin{proof} We first prove that, for each $t\in [l_m,l_p]$ with $p\geq m\geq 0$ and each $\bar{y}\in X_0$, we have \begin{equation}\label{4.32} \Vert u_\varepsilon(t)-\bar{y}\Vert \leq \Vert T_{(A-\gamma B)_0}(t-l_m)y_m-\bar{y}\Vert +\Gamma \delta_\gamma(t-l_m)+\dfrac{\varepsilon}{2} M_\gamma(t-l_m) e^{\omega_\gamma^+(t-l_m)}. \end{equation} Let $p> m\geq 0$ be given. From \eqref{4.29} we have \begin{equation*} \begin{array}{lll} u_\varepsilon(t)-\bar{y} &=&\displaystyle T_{(A-\gamma B)_0}(t-l_m)y_m-\bar{y}+(S_{A-\gamma B} \diamond f_\gamma(l_m+\cdot))(t-l_m)\\ & & \displaystyle +(t-l_k)H_k+\sum_{i=m}^{k-1} (l_{i+1}-l_i)T_{(A-\gamma B)_0}(t-l_{i+1})H_i,\ \forall t\in [l_k,l_{k+1}] \end{array} \end{equation*} with $m\leq k\leq p-1$. Hence \begin{equation*} \begin{array}{lll} \Vert u_\varepsilon(t)-\bar{y} \Vert &\leq &\displaystyle \Vert T_{(A-\gamma B)_0}(t-l_m)y_m-\bar{y}\Vert +\Vert (S_{A-\gamma B} \diamond f_\gamma(l_m+\cdot))(t-l_m)\Vert \\ & & \displaystyle +(t-l_k)\Vert H_k\Vert +\sum_{i=m}^{k-1} (l_{i+1}-l_i)\Vert T_{(A-\gamma B)_0}(t-l_{i+1})H_i\Vert,\ \forall t\in [l_k,l_{k+1}] \end{array} \end{equation*} with $m\leq k\leq p-1$.
In view of \eqref{4.30}, and since $$ H_i \in X_0\ \text{ and } \Vert H_i \Vert\leq \dfrac{\varepsilon}{2},\ i=0,\dots,n_\varepsilon, $$ we see that, for any $t\in [l_k,l_{k+1}]$ with $m\leq k\leq p-1$, \begin{equation*} \begin{array}{lll} \Vert u_\varepsilon(t)-\bar{y} \Vert &\leq &\displaystyle \Vert T_{(A-\gamma B)_0}(t-l_m)y_m-\bar{y}\Vert +\Gamma \delta_\gamma(t-l_m) +(t-l_k) \dfrac{\varepsilon}{2} \\ & & \displaystyle +\sum_{i=m}^{k-1}M_\gamma e^{\omega_\gamma (t-l_{i+1})} \dfrac{\varepsilon}{2} (l_{i+1}-l_i)\\ &\leq &\displaystyle \Vert T_{(A-\gamma B)_0}(t-l_m)y_m-\bar{y}\Vert + \Gamma \delta_\gamma(t-l_m)+\dfrac{\varepsilon}{2} M_\gamma(t-l_m) e^{\omega_\gamma^+(t-l_m)}, \end{array} \end{equation*} which proves \eqref{4.32}. \noindent \textbf{Proof of (i):} By using \eqref{4.32} with $m=k$, $p=k+1$ and $\bar{y}=y_k$, for each $t\in [l_k,l_{k+1}]$ it follows that $$ \Vert u_\varepsilon(t)-y_k \Vert \leq \Vert T_{(A-\gamma B)_0}(t-l_k)y_k-y_k\Vert +\Gamma \delta_\gamma(t-l_k)+\dfrac{\varepsilon}{2} M_\gamma(t-l_k) e^{\omega_\gamma^+(t-l_k)}. $$ Observe that $$ t\in [l_k,l_{k+1}] \Rightarrow t-l_k \leq l_{k+1}-l_k\leq \dfrac{r_k}{2}<r_k\leq \varepsilon \Rightarrow t-l_k \in I_k $$ where $I_k$ and $r_k$ are defined respectively in \eqref{4.7} and \eqref{4.8}. Then we deduce that $$ \Vert T_{(A-\gamma B)_0}(t-l_k)y_k-y_k\Vert\leq \varepsilon,\ \forall t\in [l_k,l_{k+1}] $$ and \begin{equation}\label{4.33} \Vert u_\varepsilon(t)-y_k \Vert \leq \varepsilon +\Gamma \delta_\gamma(\varepsilon)+\dfrac{\varepsilon}{2} M_\gamma \varepsilon e^{\omega_\gamma^+\varepsilon },\ \forall t\in [l_k,l_{k+1}]. \end{equation} This proves (i).
\\ \noindent \textbf{Proof of (ii):} In view of \eqref{4.32} with $m=0$, $p=n_\varepsilon$ and $\bar{y}=0$, and using the fact that $l_0=0$ and $y_0=x_0$, we deduce that $$ \begin{array}{llll} \Vert u_\varepsilon(t)\Vert \leq \Vert T_{(A-\gamma B)_0}(t)x_0\Vert+\Gamma \delta_\gamma(t)+M_\gamma e^{\omega_\gamma^+ t}t,\ \forall t\in [0,\tau]. \end{array} $$ Then the fact that $0\leq t \leq \tau$, together with the inequality \eqref{4.5}, implies that $$ \Vert u_\varepsilon(t) \Vert\leq \rho, \ \forall t\in [0,\tau]. $$ \noindent \textbf{Proof of (iii):} Let \begin{equation*} v _\varepsilon(t)=u_\varepsilon(t)-T_{(A-\gamma B)_0}(t)x_0-(S_{A-\gamma B} \diamond F_\gamma(\cdot,u_\varepsilon(\cdot)))(t),\ \forall t\in [0,\tau]. \end{equation*} We further define $$ g_\gamma(t):=f_\gamma(t)- F_\gamma(t,u_\varepsilon(t)),\ \forall t\in [0,\tau] $$ or equivalently \begin{equation}\label{4.34} g_\gamma(t)=\left\lbrace\begin{array}{lll} F_\gamma(l_k,y_k)-F_\gamma(t,u_\varepsilon(t)) & \text{ if } & t\in [l_k,l_{k+1}),\ k=0,\dots,n_\varepsilon-1\\ F_\gamma(l_{n_\varepsilon-1},y_{n_\varepsilon-1})-F_\gamma(l_{n_\varepsilon},y_{n_\varepsilon}) & \text{ if } & t=l_{n_\varepsilon}, \end{array} \right. \end{equation} where $f_\gamma$ is defined in \eqref{4.27} and $n_\varepsilon$ has been defined in Lemma \ref{LE4.8}. Then using \eqref{4.29} we get $$ v_\varepsilon(t)=(S_{A-\gamma B} \diamond g_\gamma(\cdot))(t)+(t-l_k)H_k+\sum_{i=0}^{k-1} (l_{i+1}-l_i)T_{(A-\gamma B)_0}(t-l_{i+1})H_i,\ \forall t\in [l_k,l_{k+1}]. $$ Since $g_\gamma \in {\rm Reg}([0,\tau],X)$, it follows that $$ \begin{array}{lll} \Vert v_\varepsilon(t)\Vert &\leq & \displaystyle\delta_\gamma(t) \sup_{s \in [0,t]} \Vert g_\gamma(s) \Vert+\dfrac{\varepsilon}{2}M_\gamma(t-l_0) e^{\omega_\gamma^+t}\\ &\leq &\displaystyle \delta_\gamma(\tau) \sup_{s \in [0,t]} \Vert g_\gamma(s) \Vert+\dfrac{\varepsilon}{2}M_\gamma\tau e^{\omega_\gamma^+\tau}.
\end{array} $$ Therefore one can obtain \eqref{4.31} by estimating $$ \sup_{s \in [0,t]} \Vert g_\gamma(s) \Vert,\ \forall t\in [0,\tau]. $$ In view of \eqref{4.34}, it follows that $$ \Vert g_\gamma(t) \Vert\leq \Vert F_\gamma(l_k,y_k)-F_\gamma(t,y_k) \Vert+\Vert F_\gamma(t,y_k)-F_\gamma(t,u_\varepsilon(t)) \Vert,\ t\in [l_k,l_{k+1}] $$ with $k=0,\dots,n_\varepsilon$. Observe that if $t\in [l_k,l_{k+1}]$, then $$ t-l_k\leq l_{k+1}-l_k\leq \dfrac{r_k}{2}< r_k\leq \rho \Rightarrow t-l_k \in I_k \text{ and } t\in [0,\rho] $$ where $I_k$ and $r_k$ are defined respectively in \eqref{4.7} and \eqref{4.8}. This observation, together with the fact that $$ u_\varepsilon(t) \in B(0,\rho), \ \forall t\in [0,\tau], $$ implies that $$ \Vert g_\gamma(t) \Vert \leq \varepsilon+\Lambda \Vert y_k-u_\varepsilon(t) \Vert,\ \forall t\in [l_k,l_{k+1}]. $$ Finally we infer from \eqref{4.33} that $$ \Vert g_\gamma(t) \Vert \leq \varepsilon+\Lambda [\varepsilon +\Gamma \delta_\gamma(\varepsilon)+\dfrac{\varepsilon}{2} M_\gamma \varepsilon e^{\omega_\gamma^+\varepsilon }],\ \forall t\in [l_k,l_{k+1}]. $$ The result follows. \end{proof} \noindent \textbf{Existence of the solution in $C_0$:} At this stage, the approximate solution $t \to u_\varepsilon(t)$ only belongs to $C_0$ for $t=l_k$ (since $u_\varepsilon(l_k)=y_k \in C_0$). In this last part of the proof, we let $\varepsilon \to 0$ and, after proving that the limit exists (by using a Cauchy criterion), we show that the limit solution takes its values in $C_0$. We first prove that the approximate solutions $(u_\varepsilon)_{\varepsilon \in (0,\varepsilon^*)}$ form a Cauchy sequence in $C([0,\tau],X_0)$ and that the limit is a solution of system \eqref{1.1}.
Indeed, by using property (iii) of Lemma \ref{LE4.9}, we have $$ \Vert u_\varepsilon(t)-u_\sigma (t)\Vert\leq \hat{M}_1 [\varepsilon+\delta_\gamma(\varepsilon)+\sigma+\delta_\gamma(\sigma)]+\delta_\gamma(t) \sup_{s\in [0,t]}\Vert F_\gamma(s,u_\varepsilon(s))-F_\gamma(s,u_\sigma(s)) \Vert. $$ Since $$ u_\varepsilon(t), u_\sigma(t)\in B(0,\rho),\ \forall t\in [0,\tau],\ 0<\tau\leq \rho, $$ we obtain $$ \Vert u_\varepsilon(t)-u_\sigma (t)\Vert\leq \hat{M}_1 [\varepsilon+\delta_\gamma(\varepsilon)+\sigma+\delta_\gamma(\sigma)]+\delta_\gamma(\tau) \Lambda \sup_{s\in [0,\tau]} \Vert u_\varepsilon(s)-u_\sigma(s) \Vert,\ \forall t\in [0,\tau]. $$ In view of \eqref{4.6}, we have $0<\delta_\gamma(\tau) \Lambda<1$, and hence $$ \sup_{t\in [0,\tau]} \Vert u_\varepsilon(t)-u_\sigma (t)\Vert\leq \dfrac{\hat{M}_1}{1-\delta_\gamma(\tau) \Lambda} [\varepsilon+\delta_\gamma(\varepsilon)+\sigma+\delta_\gamma(\sigma)]. $$ Therefore $(u_\varepsilon)_{\varepsilon \in (0,\varepsilon^*)}$ is a Cauchy sequence in $C([0,\tau],X_0)$ endowed with the supremum norm. Then there exists $u \in C([0,\tau],X_0)$ such that $$ \lim_{\varepsilon \rightarrow 0^+} \sup_{t\in [0,\tau]} \Vert u_\varepsilon(t)-u(t) \Vert=0. $$ Letting $\varepsilon$ tend to zero in \eqref{4.31}, it is straightforward that $$ \begin{array}{lll} u(t)&=& T_{(A-\gamma B)_0}(t)x_0+(S_{A-\gamma B} \diamond F_\gamma(\cdot,u(\cdot)))(t)\\ &=&T_{A_0}(t)x_0+(S_{A} \diamond F(\cdot,u(\cdot)))(t),\ \forall t\in [0,\tau]. \end{array} $$ That is to say, $u \in C([0,\tau],X_0)$ is a mild solution of \eqref{1.1} on $[0,\tau]$. Finally, using property (i) of Lemma \ref{LE4.9}, we see that $$ d(u_\varepsilon(t),C_0) \leq \hat{M}_0 (\varepsilon+\delta_\gamma(\varepsilon)),\ \forall t\in [0,\tau] \Rightarrow\lim_{\varepsilon\rightarrow 0^+} d(u_\varepsilon(t),C_0) =0,\ \forall t\in [0,\tau].
$$ By the continuity of $x\in X_0 \mapsto d(x,C_0)$, we further see that $$ d(u(t),C_0) =\lim_{\varepsilon\rightarrow 0^+} d(u_\varepsilon(t),C_0) ,\ \forall t\in [0,\tau] \Rightarrow u(t)\in C_0,\ \forall t\in [0,\tau]. $$ \section{Applications to age-structured models} We will consider a generalization of the one-dimensional model presented in \cite{Thieme90a}. The model considered is the following: \begin{equation}\label{5.1} \left\{ \begin{array}{lll} \displaystyle \dfrac{\partial u(t,a)}{\partial t}+\dfrac{\partial u(t,a)}{\partial a}=-\mu(a) u(t,a)\left(\kappa-\Theta(u(t,a))\right) \\ \displaystyle u(t,0)=\int_0^{+\infty} \beta(a) u(t,a) \left(\kappa-\Theta(u(t,a))\right) da \\ u(0,.)=u_0\in {\rm L}^p_+(\mathbb{R}_+,\mathbb{R}^n),\ p\in [1,+\infty) \end{array} \right. \end{equation} where we have set \begin{equation*} \Theta(x)=\sum_{k=1}^n x_k,\ \forall x \in \mathbb{R}^n \end{equation*} and assume that $\kappa>0$, $\beta,\mu \in {\rm L}^{\infty}_+(\mathbb{R}_+,\mathbb{R})$ with $\frac{1}{p}+\frac{1}{q}=1$ and $$ \beta(a)=0,\ \forall a \geq a_\dagger \ \text{ and } \ \mu(a)\geq \mu_->0,\ \forall a \geq 0. $$ It is important to note that the model \eqref{5.1} is not well defined on all of ${\rm L}^p(\mathbb{R}_+,\mathbb{R}^n)$; however, it is well defined on a proper subset of ${\rm L}^p(\mathbb{R}_+,\mathbb{R}^n)$, namely \begin{equation} \label{5.2} C=\left \{ \varphi \in {\rm L}^p_+(\mathbb{R}_+,\mathbb{R}^n) : 0\leq \Theta(\varphi(a)) \leq \kappa \text{ for a.e.} \ a\geq 0 \right \}. \end{equation} \noindent \textbf{Truncated system:} The interest of our result is that it yields the existence of solutions for initial data in $C$.
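Before introducing the truncated system, it may help to see the invariance of $C$ at work numerically. The following sketch (a scalar case $n=1$ of \eqref{5.1} with truncated nonlinearity, discretized by an explicit upwind scheme; the grid, step size, and the coefficients $\mu$, $\beta$ are illustrative assumptions, not data from the model above) propagates an initial datum in $C$ and checks that the constraint $0\leq \Theta(u)\leq \kappa$ persists, which is the fact established rigorously below.

```python
import numpy as np

kappa = 1.0
h = 0.05                                        # Delta t = Delta a (CFL = 1)
a = np.arange(0.0, 10.0, h)
mu = lambda s: 0.5 * np.ones_like(s)            # mu >= mu_- > 0 (illustrative)
beta = lambda s: np.where(s < 2.0, 1.0, 0.0)    # a_dagger = 2, int beta = 2 <= 4/kappa

def chi(s):
    """Truncation chi(s) = min(kappa, s^+)."""
    return np.minimum(kappa, np.maximum(s, 0.0))

def step(u):
    """One explicit upwind step for the scalar truncated model: transport in
    age, the nonlinear loss term, and the renewal boundary condition
    approximated by a Riemann sum."""
    loss = mu(a) * chi(u) * chi(kappa - u)
    v = np.empty_like(u)
    v[1:] = u[:-1] - h * loss[:-1]              # shift along characteristics
    v[0] = h * np.sum(beta(a) * chi(u) * chi(kappa - u))  # birth term
    return v

u = 0.8 * np.exp(-a)                            # initial datum with 0 <= u <= kappa
for _ in range(200):
    u = step(u)
assert np.all((0.0 <= u) & (u <= kappa))        # the constraint defining C persists
```

Here the bound $\int_0^{a_\dagger}\beta\leq 4/\kappa$ is exactly what keeps the boundary value below $\kappa$, mirroring the role that condition \eqref{5.12} plays in Lemma \ref{LE5.2}.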
To do so we introduce the truncation function $\chi : \mathbb{R}\rightarrow [0,\kappa]$ defined by $$ \chi(s)=\min(\kappa,s^+),\ \forall s \in \mathbb{R} $$ and we set for each $i=1,\dots,n$ \begin{equation}\label{5.3} \left\{ \begin{array}{lll} \displaystyle \dfrac{\partial u_i(t,a)}{\partial t}+\dfrac{\partial u_i(t,a)}{\partial a}=-\mu(a) \chi(u_i(t,a))\chi\left(\kappa- \Theta(u(t,a))\right) \\ \displaystyle u_i(t,0)=\int_0^{+\infty} \beta(a) \chi(u_i(t,a))\chi\left(\kappa- \Theta(u(t,a))\right) da \\ u_i(0,.)=u_{i0}\in {\rm L}^p_+(\mathbb{R}_+,\mathbb{R}),\ p\in [1,+\infty) \end{array} \right. \end{equation} which is well defined in ${\rm L}^p_+(\mathbb{R}_+,\mathbb{R}^n)$. The idea is to prove that for each $\varphi \in C$ there exists a unique mild solution of \eqref{5.3} lying in $C$; since the two systems coincide in $C$, the result follows.\\ \textbf{Abstract reformulation: } Set $$ X=\mathbb{R}^n \times {\rm L}^p(\mathbb{R}_+,\mathbb{R}^n) $$ endowed with the usual product norm. Consider the linear operator $A:D(A)\subset X \to X$ $$ A \left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right) = \left( \begin{array}{c} -\varphi(0)\\ -\varphi^\prime \end{array} \right) $$ with $$ D(A)=\left\{ 0_{\mathbb{R}^n} \right\} \times {\rm W}^{1,p}(\mathbb{R}_+, \mathbb{R}^n) $$ and note that the closure of the domain of $A$ is $$ X_0:=\overline{D(A)}=\left\{ 0_{\mathbb{R}^n} \right\} \times {\rm L}^p(\mathbb{R}_+,\mathbb{R}^n). $$ Consider the nonlinear maps $F_0 :{\rm L}^p(\mathbb{R}_+,\mathbb{R}^n) \rightarrow \mathbb{R}^n$ and $F_1 :{\rm L}^p(\mathbb{R}_+,\mathbb{R}^n)\to {\rm L}^p(\mathbb{R}_+,\mathbb{R}^n)$ defined respectively for each $i=1,\dots,n$ by $$ F_0(\varphi)_i= \int_0^{+\infty} \beta(a) \chi(\varphi_i(a)) \chi\left(\kappa-\Theta(\varphi(a))\right) da $$ and $$ F_1(\varphi)_i(a)=-\mu(a)\chi(\varphi_i(a)) \chi\left(\kappa-\Theta(\varphi(a))\right),\ \text{ for a.e. } a\geq 0.
$$ Next we consider $F : X_0 \rightarrow X$ defined by $$ F \left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right) =\left( \begin{array}{c} F_0(\varphi)\\ F_1(\varphi) \end{array} \right). $$ By identifying $u(t,.)$ with $v(t):= \left( \begin{array}{c} 0_{\mathbb{R}^n}\\ u(t,.) \end{array} \right)$ we can rewrite the partial differential equation \eqref{5.1} as the following abstract Cauchy problem \begin{equation*} v^\prime(t)=Av(t)+F(v(t)), \text{ for } t \geq 0, \ v(0)=\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ u_0 \end{array} \right) \in X_0. \end{equation*} It is well known that the linear operator $A : D(A)\subset X_0\rightarrow X_0$ is not Hille-Yosida for $p>1$ but fulfills the conditions of Assumption \ref{ASS2.1} (see \cite[Section 6]{Magal-Ruan07}). By using arguments similar to those in \cite{Magal-Ruan07} one can also show that Assumption \ref{ASS2.4} is satisfied. It can be easily checked that $F$ is Lipschitz on bounded sets of $X_0$. Therefore in what follows we will only verify that Assumption \ref{ASS4.4} is satisfied. We consider the following closed subset as a candidate for the application of our results: \begin{equation} \label{5.4} \mathcal{C}_0=\{ 0_{\mathbb{R}^n}\} \times C. \end{equation} In order to verify Assumption \ref{ASS4.4} we will first determine the strongly continuous semigroup $\{T_{A_0}(t) \}_{t\geq 0} \subset \mathcal{L}(X_0)$ generated by $A_0$, the part of $A$ in $X_0$, and the integrated semigroup $\{S_{A}(t) \}_{t\geq 0} \subset \mathcal{L}(X)$ generated by $A$.
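Both objects act by translation along the age variable. As a quick numerical preview of the transport mechanism determined below (the grid, step size, and initial profile are illustrative assumptions), an upwind scheme with $\Delta t=\Delta a$ for $u_t+u_a=0$, $u(t,0)=0$, reproduces exactly the right-shift formula obtained by integrating along characteristics:

```python
import numpy as np

h = 0.1
a = np.arange(0.0, 5.0, h)
f = lambda s: np.exp(-((s - 1.0) ** 2))      # illustrative initial profile phi
u = f(a)

nsteps = 10                                  # evolve up to t = 1.0
for _ in range(nsteps):
    # Upwind step with Delta t = Delta a: exact transport, plus u(t, 0) = 0.
    u = np.concatenate(([0.0], u[:-1]))

t = nsteps * h
# Analytic solution by characteristics: H(a - t) * phi(a - t).
shift = np.where(a >= t - 1e-12, f(a - t), 0.0)
assert np.allclose(u, shift)
```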
Indeed, $\{T_{A_0}(t) \}_{t\geq 0} \subset \mathcal{L}(X_0)$ is given by \begin{equation*} T_{A_0}(t)\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right)=\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \widehat{T}_{A_0}(t)(\varphi) \end{array} \right),\ \forall \left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right) \in X_0 \end{equation*} with $t\rightarrow \widehat{T}_{A_0}(t)(\varphi)$ the unique continuous mild solution of the partial differential equation \begin{equation*} \left\{ \begin{array}{lll} \dfrac{\partial u(t,a)}{\partial t}+\dfrac{\partial u(t,a)}{\partial a}=0,\ t>0,\ a>0 \\ \displaystyle u(t,0)=0,\ t>0 \\ u(0,.)=\varphi \in {\rm L}^p(\mathbb{R}_+,\mathbb{R}^n). \end{array} \right. \end{equation*} Thus integrating along the characteristics yields \begin{equation}\label{5.5.0} \widehat{T}_{A_0}(t)(\varphi)(a)= \left\lbrace \begin{array}{l} \varphi (a-t), \text{ if } a>t,\\ 0, \text{ if } a<t,\\ \end{array} \right. \end{equation} which can be rewritten in the more condensed form \begin{equation}\label{5.5} \widehat{T}_{A_0}(t)(\varphi)(a)=H(a-t)\varphi(a-t),\ \forall t\geq 0,\ \text{ for a.e. } a \geq 0 \end{equation} where $\varphi$ is understood to be extended by $0$ for $a < 0$ and $a\rightarrow H(a)$ is the Heaviside function defined by $$ H(a)=1 \ \text{ if } a \geq 0 \ \text{ and } H(a)=0,\ \text{ if } a<0.
$$ Furthermore, the integrated semigroup generated by $A$ is given by \begin{equation*} S_{A}(t)\left( \begin{array}{c} x\\ \varphi \end{array} \right)=\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \widehat{S}_{A}(t)(x,\varphi) \end{array} \right),\ \forall \left( \begin{array}{c} x\\ \varphi \end{array} \right) \in X \end{equation*} with $t\rightarrow \widehat{S}_A(t)(x,\varphi)$ the unique mild solution of the partial differential equation \begin{equation*} \left\{ \begin{array}{lll} \dfrac{\partial u(t,a)}{\partial t}+\dfrac{\partial u(t,a)}{\partial a}=\varphi(a),\ t>0,\ a>0, \\ \displaystyle u(t,0)=x,\ t>0, \\ u(0,.)=0_{{\rm L}^p}, \end{array} \right. \end{equation*} which is obtained by integrating along the characteristics as follows: \begin{equation} \widehat{S}_A(t)(x,0)(a)= \left\lbrace \begin{array}{l} x, \text{ if } t>a,\\ 0, \text{ if } t<a, \end{array} \right. \end{equation} and \begin{equation*} \widehat{S}_A(t)(0,\varphi)(a)=\left( \int_0^t \widehat{T}_{A_0}(l)\left(\varphi\right) dl \right) \left(a \right). \end{equation*} Therefore, since by linearity $\widehat{S}_A(t)(x,\varphi)=\widehat{S}_A(t)(x,0)+\widehat{S}_A(t)(0,\varphi)$, we obtain \begin{equation}\label{5.6} \widehat{S}_A(t)(x,\varphi)(a)=(1- H(a-t))x+\left( \int_0^t \widehat{T}_{A_0}(l)\left(\varphi\right) dl \right) \left(a \right),\ \forall t\geq 0,\ \text{ for a.e. } a \geq 0. \end{equation} The following lemma will allow us to give a more explicit form of \eqref{5.6}. \begin{lemma}\label{LE5.1.0} For each $t\geq 0$ we have $$ \widehat{S}_A(t)(0,\varphi)(a)=\int_0^t \widehat{T}_{A_0}(l)\left(\varphi\right)(a)dl=\int_0^t H(a-l) \varphi(a-l) dl,\ \text{ for a.e. } a\geq 0 $$ where $H$ is the Heaviside function. Moreover, we have \begin{equation}\label{5.9.0} \widehat{S}_A(t)(x,\varphi)(a)=(1- H(a-t))x+\int_0^t H(a-l) \varphi(a-l) dl,\ \forall t\geq 0,\ \text{ a.e. } a \geq 0.
\end{equation} \end{lemma} \begin{proof} Let $x^* : L^p(\mathbb{R}_+,\mathbb{R}^n)\rightarrow \mathbb{R}$ be any continuous linear functional. Then by the Riesz representation theorem there exists a unique $\psi \in L^q(\mathbb{R}_+,\mathbb{R}^n)$ with $\frac{1}{p}+\frac{1}{q}=1$ such that $$ x^*(\phi)=\int_0^{+\infty}\psi(a) \phi(a)da,\ \forall \phi \in L^p(\mathbb{R}_+,\mathbb{R}^n). $$ Therefore, by using Fubini's theorem, we have for each $t\geq 0$ $$ \begin{array}{llll} x^*\left( \widehat{S}_A(t)(0,\varphi) \right)&=&\displaystyle x^* \left(\int_0^t \widehat{T}_{A_0}(l)\left(\varphi\right) dl\right) \\ &=& \displaystyle \int_0^t x^*\left(\widehat{T}_{A_0}(l)\left(\varphi\right)\right) dl \\ &=& \displaystyle \int_0^t \int_0^{+\infty} \psi(a)\widehat{T}_{A_0}(l)(\varphi)(a)da dl\\ &=& \displaystyle \int_0^t \int_l^{+\infty} \psi(a)\varphi(a-l)da dl\\ &=& \displaystyle \int_0^t \int_0^{a} \psi(a)\varphi(a-l)dl da+ \int_t^\infty \int_0^{t} \psi(a)\varphi(a-l)dl da \\ &=& \displaystyle \int_0^\infty \int_0^{\min(a,t)} \psi(a)\varphi(a-l)dl da \\ &=& \displaystyle \int_0^\infty \psi(a) \int_0^{t} H(a-l)\varphi(a-l)dl da. \\ \end{array} $$ Since $x^*$ is arbitrary and $L^q(\mathbb{R}_+,\mathbb{R}^n)$ separates the points of $L^p(\mathbb{R}_+,\mathbb{R}^n)$ (a consequence of the Hahn-Banach theorem), we deduce that $$ \widehat{S}_A(t)(0,\varphi)(a)=\int_0^t \widehat{T}_{A_0}(l)(\varphi)(a) dl,\ \forall t\geq 0,\ \text{ for a.e. } a \geq 0 $$ and the result follows by using \eqref{5.5.0}. \end{proof} Hence using \eqref{5.5} and \eqref{5.6} we have for each $\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right)\in \mathcal{C}_0$ \begin{equation*} T_{A_0}(h)\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right)+S_{A}(h)F\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right)=\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \widehat{v}(\varphi;h) \end{array} \right),\ \forall h\geq 0 \end{equation*} with $$ \widehat{v}(\varphi;h)=\widehat{T}_{A_0}(h)(\varphi)+\widehat{S}_A(h)(F_0(\varphi),F_1(\varphi)),\ \forall h\geq 0.
$$ More precisely, by using \eqref{5.5} and \eqref{5.9.0} we have \begin{equation}\label{5.7} \widehat{v}(\varphi;h)(a)=H(a-h)\varphi(a-h)+[1-H(a-h)]F_0(\varphi)+\int_0^h H(a-l) F_1(\varphi)(a-l) dl,\ \forall h\geq 0,\ \forall a \geq 0 \end{equation} or equivalently \begin{equation}\label{5.8} \widehat{v}(\varphi;h)(a)=\widehat{v}_1(\varphi;h)(a)+\widehat{v}_2(\varphi;h)(a),\ \forall h\geq 0,\ \forall a \geq 0 \end{equation} with \begin{equation}\label{5.9} \left\lbrace \begin{array}{lll} \widehat{v}_1(\varphi;h)(a)=&H(a-h)\varphi(a-h)+[1-H(a-h)]F_0(\varphi)\\ \\ &+h H(a-h)F_1(\varphi)(a-h)\\ \\ \displaystyle \widehat{v}_2(\varphi;h)(a)=& h [F_1(\varphi)(a)-H(a-h)F_1(\varphi)(a-h)]\\ \\ &+\int_0^h [H(a-l)F_1(\varphi)(a-l)-F_1(\varphi)(a)]dl. \end{array} \right. \end{equation} \begin{lemma}\label{LE5.1} For each $\varphi \in C$ we have $$ \lim_{h\rightarrow 0^+}\dfrac{1}{h} \Vert \widehat{v}_2(\varphi;h) \Vert_{L^p}=0. $$ \end{lemma} \begin{proof} We will give the proof for $1<p<+\infty$. The case $p=1$ can be obtained easily. Let $q\in (1,+\infty)$ be given such that $\frac{1}{p}+\frac{1}{q}=1$. We have for each $h>0$ $$ \begin{array}{lllll} \Vert \widehat{v}_2(\varphi;h) \Vert_{L^p}&\leq & h\Vert H(.) F_1(\varphi)(.)-H(.-h)F_1(\varphi)(.-h) \Vert_{L^p} \\ & & \displaystyle +\left(\int_0^{+\infty}\left(\int_0^h \Vert H(a-l)F_1(\varphi)(a-l)-F_1(\varphi)(a)\Vert dl \right)^p da\right)^{\frac{1}{p}}\\ &\leq & h \Vert H(.) F_1(\varphi)(.)-H(.-h)F_1(\varphi)(.-h) \Vert_{L^p} \\ & & \displaystyle +h^\frac{1}{q}\left(\int_0^{+\infty} \int_0^h \Vert H(a-l)F_1(\varphi)(a-l)-F_1(\varphi)(a)\Vert^p dl da \right)^{\frac{1}{p}}\\ &\leq & h \Vert H(.) F_1(\varphi)(.)-H(.-h)F_1(\varphi)(.-h) \Vert_{L^p} \\ & & \displaystyle +h^\frac{1}{q}\left( \int_0^h \Vert H(.-l)F_1(\varphi)(.-l) -H(.) F_1(\varphi)(.)\Vert_{L^p}^p dl \right)^{\frac{1}{p}}\\ &\leq & h \Vert H(.) F_1(\varphi)(.)-H(.-h)F_1(\varphi)(.-h) \Vert_{L^p} \\ & & \displaystyle +h\left( \int_0^1 \Vert H(.-lh)F_1(\varphi)(.-lh) -H(.) F_1(\varphi)(.)\Vert_{L^p}^p dl \right)^{\frac{1}{p}} \end{array} $$ and the result follows by using the continuity of translation in $L^p$. \end{proof} \noindent Since $d(.;\mathcal{C}_0)$ is $1$-Lipschitz continuous, we have $$ 0\leq\dfrac{1}{h} d\left(\left(\begin{array}{c} 0_{\mathbb{R}^n}\\ \widehat{v}(\varphi;h) \end{array} \right);\mathcal{C}_0\right)\leq \dfrac{1}{h} d\left(\left(\begin{array}{c} 0_{\mathbb{R}^n}\\ \widehat{v}_1(\varphi;h) \end{array} \right);\mathcal{C}_0\right)+\dfrac{1}{h}\Vert \widehat{v}_2(\varphi;h) \Vert_{L^p},\ \forall h>0. $$ It now follows from Lemma \ref{LE5.1} that Assumption \ref{ASS4.4} is satisfied if \begin{equation}\label{5.10} \lim_{h\rightarrow 0^+}\dfrac{1}{h} d\left(\left(\begin{array}{c} 0_{\mathbb{R}^n}\\ \widehat{v}_1(\varphi;h) \end{array} \right);\mathcal{C}_0\right)=0. \end{equation} In order to prove \eqref{5.10} we will show that, under some conditions to be made precise later, $\left(\begin{array}{c} 0_{\mathbb{R}^n}\\ \widehat{v}_1(\varphi;h) \end{array} \right)$ belongs to $\mathcal{C}_0$ for $h>0$ sufficiently small. To this end note that \begin{equation}\label{5.11} \widehat{v}_1(\varphi;h)(a)=\left\lbrace \begin{array}{llll} \varphi(a-h)+h F_1(\varphi)(a-h) & \text{ if } & a\geq h \\ F_0(\varphi) & \text{ if } & a< h. \end{array} \right. \end{equation} \begin{lemma} \label{LE5.2} Assume that \begin{equation}\label{5.12} \int_0^{a_\dagger} \beta(a) da \leq \dfrac{4}{\kappa}. \end{equation} Then there exists $h_0>0$ such that for each $\varphi \in C$ we have $$ \widehat{v}_1(\varphi;h)\in C,\ \forall h\in (0,h_0). $$ \end{lemma} \begin{proof} Let $\varphi \in C$ be given.
Since $\Theta$ is linear, if $a\geq h$ then by \eqref{5.11} we have $$ \begin{array}{llll} \Theta\left( \widehat{v}_1(\varphi;h)(a)\right)&=&\Theta\left( \varphi(a-h)-h \mu(a-h) \varphi(a-h) \left[\kappa-\Theta(\varphi(a-h))\right]\right)\\ &=& (1-h\mu(a-h) [\kappa-\Theta(\varphi(a-h))])\Theta(\varphi(a-h)),\\ \end{array} $$ hence $$ [1-h\Vert\mu \Vert_{\infty} (\kappa-\Theta(\varphi(a-h)))]\Theta(\varphi(a-h)) \leq \Theta\left( \widehat{v}_1(\varphi;h)(a)\right) \leq [1-h \mu_- (\kappa-\Theta(\varphi(a-h)))] \Theta(\varphi(a-h)), $$ and since the map $s\in [0,\kappa]\rightarrow [1-h \mu_- (\kappa-s)] s$ is nondecreasing for $h>0$ sufficiently small, it follows that there exists $h_0>0$, depending only on $\kappa$ and $\mu$, such that \begin{equation}\label{5.13} 0\leq \Theta\left( \widehat{v}_1(\varphi;h)(a)\right)\leq \kappa,\ \forall a\geq h,\ \forall h\in [0,h_0]. \end{equation} For $0\leq a<h$, by using \eqref{5.11} we have $$ \begin{array}{llll} \Theta\left( \widehat{v}_1(\varphi;h)(a)\right) &=& \Theta\left(F_0(\varphi)\right)\\ &=&\displaystyle \Theta\left(\int_0^{+\infty} \beta(a)\varphi(a)[\kappa-\Theta(\varphi(a))] da\right)\\ &=&\displaystyle\int_0^{a_\dagger} \beta(a) [\kappa-\Theta(\varphi(a))] \Theta\left(\varphi(a)\right) da. \end{array} $$ Since the maximum of the map $s\in [0,\kappa]\rightarrow (\kappa-s) s$ is $\dfrac{\kappa^2}{4}$, we obtain that \begin{equation}\label{5.14} 0\leq \Theta\left( \widehat{v}_1(\varphi;h)(a)\right) \leq \dfrac{\kappa^2}{4} \int_0^{a_\dagger} \beta(a) da ,\ \text{ if } 0\leq a <h,\ h>0. \end{equation} Thus we see from \eqref{5.12} and \eqref{5.14} that \begin{equation}\label{5.15} 0\leq \Theta\left( \widehat{v}_1(\varphi;h)(a)\right) \leq \kappa,\ \text{ if } 0\leq a <h,\ h>0. \end{equation} The result follows from \eqref{5.13} and \eqref{5.15}. \end{proof} We have the following result. \begin{theorem} \label{TH5.3} Assume that \begin{equation*} \int_0^{a_\dagger} \beta(a) da \leq \dfrac{4}{\kappa}.
\end{equation*} Then for each $u_0 \in L^p_+(\mathbb{R}_+,\mathbb{R}^n)$ with $$ 0\leq \Theta(u_0(a))\leq \kappa,\ \text{ for a.e. } a\in \mathbb{R}_+ $$ there exists a unique continuous globally defined mild solution $t\in \mathbb{R}_+\rightarrow u(t,.)\in L^p_+(\mathbb{R}_+,\mathbb{R}^n)$ of \eqref{5.1} such that \begin{equation}\label{5.16} 0\leq \Theta(u(t,a))\leq \kappa,\ \text{ for a.e. } a\in \mathbb{R}_+,\ \forall t\geq 0. \end{equation} \end{theorem} \begin{proof} The existence of a maximally defined solution of \eqref{5.3} satisfying \eqref{5.16} is a direct application of Theorem \ref{TH4.6}. To obtain the global existence of the solution of \eqref{5.3} it is enough to observe that $$ F\left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right)\leq \left( \begin{array}{c} \int_0^{+\infty} \beta(a)\kappa \varphi(a) da \\ \mu(.)\kappa \varphi(.) \end{array} \right),\ \forall \left( \begin{array}{c} 0_{\mathbb{R}^n}\\ \varphi \end{array} \right) \in X_{0+}:=\{0_{\mathbb{R}^n}\} \times L^p_+(\mathbb{R}_+,\mathbb{R}^n), $$ and to apply \cite[Corollary 3.7]{Magal-Ruan09a}. The result follows by using the fact that system \eqref{5.1} coincides with \eqref{5.3} in $C$. \end{proof} \end{document}
\begin{document} \vskip 6pt \begin{center}{\Large \bf Orlicz mixed chord integrals}\end{center} \vskip 6pt\begin{center} \centerline{Chang-Jian Zhao\footnote{Research is supported by National Natural Science Foundation of China (11371334, 10971205).}} \centerline{\it Department of Mathematics, China Jiliang University, Hangzhou 310018, P. R. China}\centerline{\it Email: [email protected]~~ [email protected]} \end{center} \vskip 10pt \begin{center} \begin{minipage}{12cm} {\bf Abstract}~ We introduce an affine geometric quantity, which we call the {\it Orlicz mixed chord integral}, that generalizes the chord integrals to the Orlicz space. Minkowski and Brunn-Minkowski inequalities for the Orlicz mixed chord integrals are established. These new inequalities in special cases yield some isoperimetric inequalities for the usual chord integrals. The related concepts and inequalities of $L_{p}$-mixed chord integrals are also derived. {\bf Keywords} star body, mixed chord integrals, Orlicz mixed chord integrals, first order variation, Orlicz dual Brunn-Minkowski theory. {\bf 2010 Mathematics Subject Classification} 46E30. \end{minipage} \end{center} \vskip 20pt \noindent{\large \bf 1 ~Introduction}\vskip 10pt The radial addition $K\widetilde{+}L$ of star sets $K$ and $L$ can be defined by $$\rho(K\widetilde{+}L,\cdot)=\rho(K,\cdot)+\rho(L,\cdot),$$ where a star set means a compact set that is star-shaped at $o$ and contains $o$, and $\rho(K,\cdot)$ denotes the radial function of the star set $K$. The radial function is defined by $$\rho(K,u)=\max\{c\geq 0: cu\in K\},\eqno(1.1)$$ for $u\in S^{n-1}$, where $S^{n-1}$ denotes the unit sphere, i.e., the surface of the unit ball centered at the origin. The initial study of the radial addition can be found in [1, P. 235]. $K$ is called a star body if $\rho(K,\cdot)$ is positive and continuous; let ${\cal S}^{n}$ denote the set of star bodies.
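Since the radial function (1.1) is defined by a maximum over scalar multiples, it can be computed from a membership oracle alone. The following sketch (the ellipse, the direction, and the bisection parameters are illustrative assumptions) recovers $\rho(K,u)$ by bisection and checks the radial-addition rule on balls:

```python
import numpy as np

def radial(indicator, u, cmax=10.0, iters=60):
    """Radial function rho(K, u) = max{c >= 0 : c u in K} as in (1.1), located
    by bisection; assumes K is star-shaped about the origin and K c cmax * B."""
    lo, hi = 0.0, cmax
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if indicator(mid * u):
            lo = mid          # mid * u still lies in K: push outward
        else:
            hi = mid
    return lo

# Ellipse x^2/4 + y^2 <= 1, whose radial function is known in closed form.
ellipse = lambda x: x[0] ** 2 / 4.0 + x[1] ** 2 <= 1.0
theta = 0.7
u = np.array([np.cos(theta), np.sin(theta)])
rho_exact = 1.0 / np.sqrt((np.cos(theta) / 2.0) ** 2 + np.sin(theta) ** 2)
assert abs(radial(ellipse, u) - rho_exact) < 1e-9

# Radial addition: rho(K ~+ L, .) = rho(K, .) + rho(L, .); for balls of radii
# r and s this produces the ball of radius r + s.
ball = lambda r: (lambda x: x[0] ** 2 + x[1] ** 2 <= r ** 2)
assert abs(radial(ball(1.0), u) + radial(ball(2.0), u) - 3.0) < 1e-9
```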
Radial addition and volume are the basis and core of the dual Brunn-Minkowski theory (see, e.g., [2], [3], [4], [5], [6], [7], [8] and [9]). It is important that the dual Brunn-Minkowski theory can count among its successes the solution of the Busemann-Petty problem in [3], [11], [12], [13], and [14]. Recently, attention has turned to extending the $L_p$-dual Brunn-Minkowski theory to the Orlicz dual Brunn-Minkowski theory. The Orlicz-Brunn-Minkowski theory and its dual have attracted considerable attention [15], [16], [17], [18], [19], [20], [21], [22], [23], [24] and [25]. For $K\in {\cal S}^{n}$ and $u\in S^{n-1}$, the half chord of $K$ in the direction $u$ is defined by $$d(K,u)=\frac{1}{2}(\rho(K,u)+\rho(K,-u)).$$ If there exists a constant $\lambda>0$ such that $d(K,u)=\lambda d(L,u)$ for all $u\in S^{n-1}$, then the star bodies $K$ and $L$ are said to have similar chord (see Gardner [1] or Schneider [26]). Lu [27] introduced the chord integrals of star bodies: for $K\in {\cal S}^{n}$ and $0\leq i<n$, the $i$-th chord integral of $K$, denoted by $B_{i}(K),$ is defined by $$B_{i}(K)=\frac{1}{n}\int_{S^{n-1}}d(K,u)^{n-i}dS(u).\eqno(1.2)$$ For $i=0$, $B_{0}(K)$ becomes the chord integral $B(K)$. The main aim of the present article is to generalize the chord integrals to the Orlicz space. We introduce a new affine geometric quantity, called the Orlicz mixed chord integral. The fundamental notions and conclusions of the chord integrals and the related isoperimetric inequalities for the chord integrals are extended to an Orlicz setting. The new inequalities in special cases yield the $L_p$-dual Minkowski and Brunn-Minkowski inequalities for the $L_{p}$-mixed chord integrals.
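For origin-symmetric bodies $\rho(K,-u)=\rho(K,u)$, so $d(K,u)=\rho(K,u)$, and in the plane $B_0$ in (1.2) reduces to the polar area formula, whence $B_0(K)=V(K)$. A quadrature sketch (the ellipse and grid size are illustrative assumptions) checks this:

```python
import numpy as np

# Chord integral (1.2) in the plane (n = 2, i = 0) by quadrature over S^1.
# For the origin-symmetric ellipse with semi-axes 2 and 1, d(K,u) = rho(K,u),
# and B_0(K) = (1/2) * int_0^{2 pi} rho(theta)^2 dtheta equals the area 2*pi.
N = 100000
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
rho = 1.0 / np.sqrt((np.cos(theta) / 2.0) ** 2 + np.sin(theta) ** 2)
d = 0.5 * (rho + np.roll(rho, N // 2))   # d(K,u) = (rho(K,u) + rho(K,-u)) / 2
B0 = 0.5 * np.sum(d ** 2) * (2.0 * np.pi / N)
assert abs(B0 - 2.0 * np.pi) < 1e-6
```

The periodic Riemann sum used here coincides with the trapezoidal rule on the circle, which is highly accurate for smooth integrands.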
In Section 3, we introduce a notion of Orlicz chord addition $K\check{+}_{\phi}L$ of star bodies $K,L$, defined by $$\phi\left(\frac{d(K,x)}{d(K\check{+}_{\phi}L,x)} ,\frac{d(L,x)}{d(K\check{+}_{\phi}L,x)}\right)=1.\eqno(1.3)$$ Here $\phi\in \Phi_{2}$, the set of convex functions $\phi:[0,\infty)^{2}\rightarrow (0,\infty)$ that are decreasing in each variable and satisfy $\phi(0,0)=\infty$ and $\phi(\infty,1)=\phi(1,\infty)=1$. The particular instance of interest corresponds to using (1.3) with $\phi(x_{1},x_{2})=\phi_{1}(x_{1})+\varepsilon\phi_{2}(x_{2})$ for $\varepsilon>0$ and some $\phi_{1},\phi_{2}\in\Phi$, the set of convex functions $\phi_{1}, \phi_{2}:[0,\infty)\rightarrow (0,\infty)$ that are decreasing and satisfy $\phi_{1}(0)=\phi_{2}(0)=\infty$, $\phi_{1}(\infty)=\phi_{2}(\infty)=0$ and $\phi_{1}(1)=\phi_{2}(1)=1$. In the spirit of the introduction of mixed quermassintegrals by Aleksandrov [28] and Fenchel and Jensen [29], and of Lutwak's [30] introduction of $L_p$-mixed quermassintegrals, our approach is based on the study of the first order variation of the chord integrals. In Section 4, we prove that the first order Orlicz variation of the chord integrals can be expressed as follows: for $K,L\in {\cal S}^{n},$ $\phi_{1},\phi_{2}\in {\Phi}$, $0\leq i<n$ and $\varepsilon>0$, $$\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0^{+}}B_{i} (K\check{+}_{\phi}\varepsilon\cdot L)=\frac{n-i}{(\phi_{1}) '_{r}(1)}\cdot B_{\phi_{2},i}(K,L),\eqno(1.4)$$ where $(\phi_{1})'_{r}(1)$ denotes the value of the right derivative of the convex function $\phi_{1}$ at the point $1$. In this first order variational equation (1.4), we find a new geometric quantity.
Based on this, we extract the required geometric quantity, denoted by $B_{\phi,i}(K,L)$ and called the Orlicz mixed chord integral, defined by $$B_{\phi_{2},i}(K,L)=\frac{(\phi_{1}) '_{r}(1)}{n-i}\cdot\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0^{+}}B_{i} (K\check{+}_{\phi}\varepsilon\cdot L).\eqno(1.5)$$ We also show that this new affine geometric quantity has an integral representation: $$B_{\phi,i}(K,L)=\frac{1}{n}\int_{S^{n-1}}\phi \left(\frac{d(L,u)}{d(K,u)}\right)d(K,u)^{n-i}dS(u).\eqno(1.6)$$ In Section 5, as an application, we establish an Orlicz Minkowski inequality for the Orlicz mixed chord integrals: if $K,L\in {\cal S}^{n}$, $0\leq i<n$ and $\phi\in \Phi$, then $$B_{\phi,i}(K,L)\geq {B}_{i}(K)\cdot\phi\left(\left(\frac{{B}_{i}(L)}{{B}_{i}(K)}\right)^{1/(n-i)}\right) .\eqno(1.7)$$ If $\phi$ is strictly convex, equality holds if and only if $K$ and $L$ have similar chord. In Section 6, we establish an Orlicz Brunn-Minkowski inequality for the Orlicz chord addition and the chord integrals: if $K,L\in{\cal S}^{n}$, $0\leq i<n$ and $\phi\in \Phi_{2}$, then $$1\geq\phi\left(\left(\frac{{B}_{i}(K)} {{B}_{i}(K\check{+}_{\phi}L)}\right)^{1/(n-i)},\left(\frac{{B}_{i}(L)}{{B}_{i}(K\check{+}_ {\phi}L)}\right)^{1/(n-i)}\right).\eqno(1.8)$$ If $\phi$ is strictly convex, equality holds if and only if $K$ and $L$ have similar chord. \vskip 10pt \noindent{\large \bf 2 ~Preliminaries}\vskip 10pt The setting for this paper is $n$-dimensional Euclidean space ${\Bbb R}^{n}$. A body in ${\Bbb R}^{n}$ is a compact set equal to the closure of its interior. For a compact set $K\subset {\Bbb R}^{n}$, we write $V(K)$ for the ($n$-dimensional) Lebesgue measure of $K$ and call this the volume of $K$.
Associated with a compact subset $K$ of ${\Bbb R}^n$ which is star-shaped with respect to the origin and contains the origin, its radial function $\rho(K,\cdot): S^{n-1}\rightarrow [0,\infty)$ is defined by $$\rho(K,u)=\max\{\lambda\geq 0: \lambda u\in K\}.$$ Note that the class of star sets is closed under unions, intersections, and intersections with subspaces. The radial function is homogeneous of degree $-1$, that is (see e.g. [1]), $$\rho(K,ru)=r^{-1}\rho(K,u),$$ for all $u\in {S}^{n-1}$ and $r>0$. Let $\tilde{\delta}$ denote the radial Hausdorff metric, defined as follows: if $K, L\in {\cal S}^{n}$, then $$\tilde{\delta}(K,L)=\sup_{u\in S^{n-1}}|\rho(K,u)-\rho(L,u)|.$$ From the definition of the radial function, it follows immediately that for $A\in GL(n)$ the radial function of the image $AK=\{Ay: y\in K\}$ of $K$ is given by (see e.g. [26]) $$\rho(AK,x)=\rho(K, A^{-1}x),\eqno(2.1)$$ for all $x\in {\Bbb R}^{n}$. For $K_{i}\in {\cal S}^{n}, i=1,\ldots,m$, define the real numbers $R_{K_{i}}$ and $r_{K_{i}}$ by $$R_{K_{i}}=\max_{u\in S^{n-1}}d(K_{i},u),~~ {\rm and}~~ r_{K_{i}}=\min_{u\in S^{n-1}}d(K_{i},u).\eqno(2.2)$$ Obviously, $0<r_{K_{i}}\leq R_{K_{i}}$ for all $K_{i}\in {\cal S}^{n}$, and we write $R=\max\{R_{K_{i}}\}$ and $r=\min\{r_{K_{i}}\}$, where $i=1,\ldots,m.$ \vskip 8pt {\it 2.1~ Mixed chord integrals}\vskip 8pt If $K_{1},\ldots,K_{n}\in {\cal S}^{n}$, the mixed chord integral of $K_{1},\ldots,K_{n}$, denoted by $B(K_{1},\ldots,K_{n})$, is defined by (see [27]) $$B(K_{1},\ldots,K_{n})=\frac{1}{n}\int_{S^{n-1}}d(K_{1},u)\cdots d(K_{n},u)dS(u).$$ If $K_{1}=\cdots=K_{n-i}=K$ and $K_{n-i+1}=\cdots=K_{n}=L$, the mixed chord integral $B(K_{1},\ldots,K_{n})$ is written as $B_{i}(K,L)$. If $L=B$ ($B$ is the unit ball centered at the origin), the mixed chord integral $B_{i}(K,B)$ is written as $B_{i}(K)$ and called the chord integral of $K$.
Obviously, for $K\in {\cal S}^{n}$ and $0\leq i<n$, we have $$B_{i}(K)=\frac{1}{n}\int_{S^{n-1}}d(K,u)^{n-i}dS(u).\eqno(2.3)$$ If $K_{1}=\cdots=K_{n-i-1}=K,$ $K_{n-i}=\cdots=K_{n-1}=B$ and $K_{n}=L$, the mixed chord integral $B(\underbrace{K,\ldots,K}_{n-i-1},\underbrace{B,\ldots,B}_{i},L)$ is written as $B_{i}(K,L)$ and called the $i$-th mixed chord integral of $K$ and $L$. For $K,L\in {\cal S}^{n}$ and $0\leq i<n$, it is easy to see that $$B_{i}(K,L)=\frac{1}{n}\int_{S^{n-1}}d(K,u)^{n-i-1}d(L,u)dS(u).\eqno(2.4)$$ This integral representation (2.4), together with the H\"{o}lder inequality, immediately gives the Minkowski inequality for the $i$-th mixed chord integral: if $K,L\in {\cal S}^{n}$ and $0\leq i<n$, then $$B_{i}(K,L)^{n-i}\leq B_{i}(K)^{n-i-1}B_{i}(L),\eqno(2.5)$$ with equality if and only if $K$ and $L$ have similar chord. \vskip 8pt {\it 2.2~ $L_p$-mixed chord integrals}\vskip 8pt Putting $\phi(x_{1},x_{2})=x_{1}^{-p}+x_{2}^{-p}$ with $p\geq 1$ in (1.3), the Orlicz chord addition $\check{+}_{\phi}$ becomes a new addition $\check{+}_{p}$ in $L_p$-space, called the $L_p$-chord addition of the star bodies $K$ and $L$: $$d(K\check{+}_{p}L,u)^{-p}=d(K,u)^{-p}+d(L,u)^{-p},\eqno(2.6)$$ for $u\in S^{n-1}$. The following result follows immediately from (2.6) with $p\geq 1$.
$$-\frac{p}{n-i}\lim_{\varepsilon\rightarrow 0^{+}}\frac{B_{i}(K\check{+}_{p} \varepsilon\cdot L)-B_{i}(K)}{\varepsilon}=\frac{1}{n}\int_{S^{n-1}}d(K,u)^{n-i+p}d(L,u)^{-p}dS(u).$$ {\bf Definition 2.1}~ Let $K,L\in {\cal S}^{n}$, $0\leq i<n$ and $p\geq 1$. The $L_p$-mixed chord integral of the star bodies $K$ and $L$, denoted by $B_{-p,i}(K,L)$, is defined by $$B_{-p,i}(K,L)=\frac{1}{n}\int_{S^{n-1}}d(K,u)^{n-i+p}d(L,u)^{-p}dS(u).\eqno(2.7)$$ Obviously, when $K=L$, the $L_p$-mixed chord integral $B_{-p,i}(K,K)$ becomes the chord integral $B_{i}(K).$ This integral representation (2.7), together with the H\"{o}lder inequality, immediately gives: {\bf Proposition 2.2}~ {\it If $K,L\in {\cal S}^{n}$, $0\leq i<n$ and $p\geq 1$, then $$B_{-p,i}(K,L)^{n-i}\geq B_{i}(K)^{n-i+p}B_{i}(L)^{-p},\eqno(2.8)$$ with equality if and only if $K$ and $L$ have similar chord.} {\bf Proposition 2.3}~ {\it If $K,L\in {\cal S}^{n}$, $0\leq i<n$ and $p\geq 1$, then $$B_{i}(K\check{+}_{p}L)^{-p/(n-i)}\geq B_{i}(K)^{-p/(n-i)}+B_{i}(L)^{-p/(n-i)},\eqno(2.9)$$ with equality if and only if $K$ and $L$ are dilates.} {\it Proof}~ From (2.6) and (2.7), it is easily seen that the $L_p$-mixed chord integral is linear with respect to the $L_p$-chord addition; together with inequality (2.8), this shows that for $p\geq 1$ $$B_{-p,i}(Q, K\check{+}_{p}L)=B_{-p,i}(Q,K)+B_{-p,i}(Q,L)\geq B_{i}(Q)^{(n-i+p)/(n-i)}\left(B_{i}(K)^{-p/(n-i)}+B_{i}(L)^{-p/(n-i)}\right),$$ with equality if and only if $K$ and $L$ have similar chord. Taking $K\check{+}_{p}L$ for $Q$ and recalling that $B_{-p,i}(Q,Q)=B_{i}(Q)$, inequality (2.9) follows easily. \vskip 10pt \noindent{\large \bf 3 ~Orlicz chord addition}\vskip 10pt Throughout the paper, the standard orthonormal basis for ${\Bbb R}^{n}$ will be $\{e_{1},\ldots,e_{n}\}$.
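Before developing the general Orlicz chord addition, we note that the dual Brunn-Minkowski inequality of Proposition 2.3 can be sanity-checked numerically in the plane ($n=2$, $i=0$); the sketch below is our own illustrative code, with origin-centred ellipses as assumed test bodies:

```python
import math

def d_ellipse(a, b):
    # half chord of an origin-centred ellipse with semi-axes a, b (symmetric, so d = rho)
    return lambda t: a * b / math.sqrt((b * math.cos(t))**2 + (a * math.sin(t))**2)

def lp_chord_add(dK, dL, p):
    # (2.6): d(K +_p L, u)^{-p} = d(K,u)^{-p} + d(L,u)^{-p}
    return lambda t: (dK(t)**(-p) + dL(t)**(-p))**(-1.0 / p)

def chord_integral(d, n=2, i=0, steps=20000):
    # (2.3) on the circle: B_i(K) = (1/n) int_{S^1} d(K,u)^{n-i} dS(u)
    h = 2 * math.pi / steps
    return h * sum(d(k * h)**(n - i) for k in range(steps)) / n

p = 2.0
dK, dL = d_ellipse(2.0, 1.0), d_ellipse(1.0, 3.0)
dS = lp_chord_add(dK, dL, p)
lhs = chord_integral(dS)**(-p / 2)                               # B(K +_p L)^{-p/(n-i)}
rhs = chord_integral(dK)**(-p / 2) + chord_integral(dL)**(-p / 2)
```

Since the proof of (2.9) rests on the H\"older inequality, which also holds for the discrete sums used here, the discretized inequality $\mathrm{lhs}\geq\mathrm{rhs}$ holds exactly, not merely approximately.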
Let $\Phi_{n},$ $n\in{\Bbb N}$, denote the set of convex functions $\phi:[0,\infty)^{n} \rightarrow (0,\infty)$ that are strictly decreasing in each variable and satisfy $\phi(0)=\infty$ and $\phi(e_{j})=1$, $j=1,\ldots,n$. When $n=1$, we shall write $\Phi$ instead of $\Phi_{1}$. The left derivative and right derivative of a real-valued function $f$ are denoted by $(f)'_{l}$ and $(f)'_{r}$, respectively. We first define the Orlicz chord addition. {\bf Definition 3.1}~ Let $m\geq 2$, $\phi\in\Phi_{m}$ and $K_{j}\in {\cal S}^{n}$, $j=1,\ldots,m$. The Orlicz chord addition of $K_{1},\ldots,K_{m}$, denoted by $\check{+}_{\phi}(K_{1},\ldots,K_{m})$, is defined by $$d(\check{+}_{\phi}(K_{1},\ldots,K_{m}),u)=\sup\left\{\lambda>0: \phi\left(\frac{d(K_{1},u)}{\lambda},\ldots,\frac{d(K_{m},u)}{\lambda}\right)\leq 1\right\},\eqno(3.1)$$ for $u\in S^{n-1}.$ Equivalently, the Orlicz chord addition $\check{+}_{\phi}(K_{1},\ldots,K_{m})$ can be defined implicitly by $$\phi\left(\frac{d(K_{1},u)}{d(\check{+}_{\phi}(K_{1}, \ldots,K_{m}),u)},\ldots,\frac{d(K_{m},u)}{d(\check{+}_{\phi}(K_{1}, \ldots,K_{m}),u)}\right)=1,\eqno(3.2)$$ for all $u\in {S}^{n-1}$. An important special case is obtained when $$\phi(x_{1},\ldots,x_{m})=\sum_{j=1}^{m}\phi_{j}(x_{j}),$$ for some fixed $\phi_{j}\in \Phi$ such that $\phi_{1}(1)=\cdots=\phi_{m}(1)=1$. We then write $\check{+}_{\phi}(K_{1},\ldots,K_{m})=K_{1}\check{+}_{\phi}\cdots\check{+}_{\phi}K_{m}.$ This means that $K_{1}\check{+}_{\phi}\cdots\check{+}_{\phi}K_{m}$ is defined either by $$d(K_{1}\check{+}_{\phi}\cdots\check{+}_{\phi}K_{m},u)= \sup\left\{\lambda>0:\sum_{j=1}^{m}\phi_{j}\left(\frac{d(K_{j},u)}{\lambda} \right)\leq 1\right\},\eqno(3.3)$$ for all $u\in {S}^{n-1}$, or by the corresponding special case of (3.2). {\bf Lemma 3.2}~ {\it The Orlicz chord addition $\check{+}_{\phi}: ({\cal S}^{n})^{m}\rightarrow {\cal S}^{n}$ is monotonic.} {\it Proof}~ This follows immediately from (3.1).
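Since each $\phi_{j}$ is decreasing, the sum in (3.3) is increasing in $\lambda$, so the supremum can be computed pointwise by bisection. The sketch below is our own illustrative code (the bracketing interval and iteration count are assumptions); it also confirms that $\phi_{j}(x)=x^{-p}$ recovers the $L_p$-chord addition (2.6):

```python
def orlicz_chord_add(ds, phis, lo=1e-9, hi=1e9, iters=200):
    """d(+_phi(K_1,...,K_m), u) = sup{lam > 0 : sum_j phi_j(d_j/lam) <= 1}  (3.3).

    ds: half-chord values d(K_j, u); phis: decreasing convex phi_j with phi_j(1) = 1.
    """
    f = lambda lam: sum(phi(d / lam) for phi, d in zip(phis, ds))
    for _ in range(iters):          # f is increasing in lam, so bisect f(lam) = 1
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) <= 1 else (lo, mid)
    return lo

p = 3.0
phi = lambda x: x**(-p)
d = orlicz_chord_add([2.0, 1.5], [phi, phi])
# agrees with the L_p-chord addition (2.6): d^{-p} = 2^{-p} + 1.5^{-p}
```

Running this over a grid of directions $u$ would give the half-chord function of the Orlicz sum body.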
{\bf Lemma 3.3}~ {\it The Orlicz chord addition $\check{+}_{\phi}: ({\cal S}^{n})^{m}\rightarrow {\cal S}^{n}$ is $GL(n)$ covariant.} {\it Proof}~ This follows immediately from (2.1) and (3.1). {\bf Lemma 3.4}~ {\it Suppose $K_{1},\ldots,K_{m}\in{\cal S}^{n}$. If $\phi\in \Phi$, then $$\phi\left(\frac{d(K_{1},u)}{t}\right)+\cdots+\phi\left(\frac{d(K_{m},u)}{t}\right)=1$$ if and only if} $$d(\check{+}_{\phi}(K_{1}, \ldots,K_{m}),u)=t.$$ {\it Proof}~ This follows immediately from Definition 3.1. {\bf Lemma 3.5}~ {\it Suppose $K_{1},\ldots,K_{m}\in{\cal S}^{n}$. If $\phi\in \Phi$, then} $$\frac{r}{\phi^{-1}(\frac{1}{m})}\leq d(\check{+}_{\phi}(K_{1}, \ldots,K_{m}),u)\leq\frac{R}{\phi^{-1}(\frac{1}{m})}.$$ {\it Proof}~ This follows immediately from Lemma 3.4. {\bf Lemma 3.6}~ {\it The Orlicz chord addition $\check{+}_{\phi}: ({\cal S}^{n})^{m}\rightarrow {\cal S}^{n}$ is continuous.} {\it Proof}~ This follows immediately from Lemma 3.5. Next, we define the Orlicz chord linear combination in the case $m=2$. {\bf Definition 3.7}~ The Orlicz chord linear combination $\check{+}_{\phi}(K,L,\alpha,\beta)$ of $K,L\in {\cal S}^{n}$ and $\alpha,\beta\geq 0$ (not both zero) is defined by $$\alpha\cdot\phi_{1}\left(\frac{d(K,u)}{d(\check{+}_{\phi}(K,L,\alpha,\beta),u)}\right)+ \beta\cdot\phi_{2}\left(\frac{d(L,u)} {d(\check{+}_{\phi}(K,L,\alpha,\beta),u)}\right)=1,\eqno(3.4)$$ for all $u\in {S}^{n-1}$. We shall write $K\check{+}_{\phi}\varepsilon\cdot L$ instead of $\check{+}_{\phi}(K,L,1,\varepsilon)$ for $\varepsilon\geq 0$, and assume throughout that this is defined by (3.1) with $\alpha=1, \beta=\varepsilon$ and $\phi_{1},\phi_{2}\in \Phi$. We shall write $K\check{+}_{\phi}L$ instead of $\check{+}_{\phi}(K,L,1,1)$ and call it the Orlicz chord addition of $K$ and $L$. \vskip 10pt \noindent{\large \bf 4 ~Orlicz mixed chord integrals}\vskip 10pt In order to define the Orlicz mixed chord integrals, we need the following Lemmas 4.1-4.4.
{\bf Lemma 4.1}~ {\it Let $\phi\in \Phi$ and $\varepsilon>0$. If $K,L\in {\cal S}^{n}$, then} $K\check{+}_{\phi}\varepsilon\cdot L\in {\cal S}^{n}.$ {\it Proof}~ This follows immediately from (3.4) and Lemma 3.5. {\bf Lemma 4.2}~ {\it If $K,L\in {\cal S}^{n}$, $\varepsilon>0$ and $\phi\in \Phi$, then $$K\check{+}_{\phi}\varepsilon\cdot L\rightarrow K\eqno(4.1)$$ as} $\varepsilon\rightarrow 0^{+}$. {\it Proof}~ This follows immediately from (3.4), noting that $\phi_{2}$, $\phi_{1}^{-1}$ and $d$ are continuous functions. {\bf Lemma 4.3}~ {\it If $K,L\in {\cal S}^{n}$, $0\leq i<n$ and $\phi_{1}, \phi_{2}\in \Phi$, then} $$\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0^{+}}d(K\check{+}_{\phi}\varepsilon\cdot L,u)^{n-i}=\frac{n-i}{(\phi_{1})'_{r}(1)}\cdot\phi_{2} \left(\frac{d(L,u)}{d(K,u)}\right)\cdot d(K,u)^{n-i}.\eqno(4.2)$$ {\it Proof}~ From (3.4), (4.1), Lemma 4.2 and the fact that $\phi_{1}^{-1}$ and $\phi_{2}$ are continuous functions, we obtain for $0\leq i<n$ $$\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0^{+}}d(K\check{+}_{\phi}\varepsilon\cdot L,u)^{n-i}=\frac{n-i}{(\phi_{1})'_{r}(1)}\cdot\phi_{2} \left(\frac{d(L,u)}{d(K,u)}\right)\cdot d(K,u)^{n-i}.$$ {\bf Lemma 4.4}~ {\it If $\phi\in \Phi_{2}$, $0\leq i<n$ and $K,L\in {\cal S}^{n}$, then} $$\frac{(\phi_{1})'_{r}(1)}{n-i}\cdot\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0^{+}}B_{i} (K\check{+}_{\phi}\varepsilon\cdot L) =\frac{1}{n}\int_{S^{n-1}}\phi_{2} \left(\frac{d(L,u)}{d(K,u)}\right)\cdot d(K,u)^{n-i} dS(u).\eqno(4.3)$$ {\it Proof}~ This follows immediately from (2.3) and Lemma 4.3. Denoting by $B_{\phi,i}(K,L)$, for any $\phi\in\Phi$ and $0\leq i<n$, the integral on the right-hand side of (4.3) with $\phi_{2}$ replaced by $\phi$, we see that either side of equation (4.3) is equal to $B_{\phi_{2},i}(K,L)$; hence the new Orlicz mixed chord integral $B_{\phi,i}(K,L)$ arises naturally.
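The variational formula (4.3) can be checked numerically in the plane ($n=2$, $i=0$) with $\phi_{1}(t)=\phi_{2}(t)=t^{-p}$, for which $(\phi_{1})'_{r}(1)=-p$ and (3.4) gives the perturbed half chord in closed form, $d(K\check{+}_{\phi}\varepsilon\cdot L,u)=\big(d(K,u)^{-p}+\varepsilon\, d(L,u)^{-p}\big)^{-1/p}$. The sketch below is our own illustrative code (the ellipse bodies, grid and step size $\varepsilon$ are assumptions):

```python
import math

def d_ellipse(a, b):
    # half chord of an origin-centred ellipse with semi-axes a, b
    return lambda t: a * b / math.sqrt((b * math.cos(t))**2 + (a * math.sin(t))**2)

def integrate(f, steps=4000):
    h = 2 * math.pi / steps
    return h * sum(f(k * h) for k in range(steps))

p, dK, dL = 2.0, d_ellipse(2.0, 1.0), d_ellipse(1.0, 3.0)
phi = lambda x: x**(-p)                      # phi'_r(1) = -p

# right-hand side of (4.3): (1/n) int phi(d(L,u)/d(K,u)) d(K,u)^{n-i} dS, n = 2, i = 0
B_phi = 0.5 * integrate(lambda t: phi(dL(t) / dK(t)) * dK(t)**2)

# left-hand side of (4.3) via a one-sided finite difference in epsilon
def B0(d): return 0.5 * integrate(lambda t: d(t)**2)
eps = 1e-6
d_eps = lambda t: (dK(t)**(-p) + eps * dL(t)**(-p))**(-1.0 / p)
variation = (-p / 2.0) * (B0(d_eps) - B0(dK)) / eps
```

The two sides agree to roughly the order of the finite-difference step, which is also a useful consistency check on the constants in (1.4) and (1.5).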
{\bf Definition 4.5}~ For $\phi\in \Phi$ and $0\leq i<n$, the Orlicz mixed chord integral of the star bodies $K$ and $L$, denoted by $B_{\phi,i}(K,L)$, is defined by $$B_{\phi,i}(K,L):=\frac{1}{n}\int_{S^{n-1}}\phi \left(\frac{d(L,u)}{d(K,u)}\right)\cdot d(K,u)^{n-i}dS(u). \eqno(4.4)$$ {\bf Lemma 4.6}~ {\it If $\phi_{1}, \phi_{2}\in \Phi$, $0\leq i<n$ and $K,L\in {\cal S}^{n}$, then} $$B_{\phi_{2},i}(K,L)=\frac{(\phi_{1})'_{r}(1)}{n-i}\lim_{\varepsilon\rightarrow 0^+} \frac{{B}_{i}(K\check{+}_{\phi}\varepsilon\cdot L)-{B}_{i}(K)}{\varepsilon}.\eqno(4.5)$$ {\it Proof}~ This follows immediately from Lemma 4.4 and (4.4). {\bf Lemma 4.7} {\it If $K,L\in {\cal S}^{n}$, $\phi_{1}, \phi_{2}\in \Phi$ and $A\in{\rm SL(n)}$, then for $\varepsilon>0$} $$A(K\check{+}_{\phi}\varepsilon\cdot L)=(AK)\check{+}_{\phi}\varepsilon\cdot(AL).\eqno(4.6)$$ {\it Proof}~ This follows immediately from (2.1) and (3.1). We easily find that $B_{\phi,i}(K,L)$ is invariant under simultaneous unimodular centro-affine transformations. {\bf Lemma 4.8}~ {\it If $\phi\in \Phi$, $0\leq i<n$ and $K,L\in {\cal S}^{n}$, then for $A\in SL(n)$,} $$B_{\phi,i}(AK,AL)=B_{\phi,i}(K,L).\eqno(4.7)$$ {\it Proof}~ This follows immediately from Lemmas 3.3 and 4.7. \vskip 10pt \noindent{\large \bf 5 ~Orlicz chord Minkowski inequality}\vskip 10pt In this section, we first define a Borel measure on $S^{n-1}$, denoted by $B_{n,i}(K,\upsilon)$ and called the chord measure of the star body $K$. {\bf Definition 5.1}~ Let $K\in {\cal S}^{n}$ and $0\leq i<n$. The chord measure $B_{n,i}(K,\upsilon)$ is defined by $$dB_{n,i}(K,\upsilon)=\frac{d(K,\upsilon)^{n-i}}{n{B}_{i}(K)}dS(\upsilon). \eqno(5.1)$$ {\bf Lemma 5.2}~ (Jensen's inequality) {\it Let $\mu$ be a probability measure on a space $X$ and let $g: X\rightarrow I\subset {\Bbb R}$ be a $\mu$-integrable function, where $I$ is a possibly infinite interval. If $\phi: I\rightarrow {\Bbb R}$ is a convex function, then $$\int_{X}\phi(g(x))d\mu(x)\geq\phi\left(\int_{X}g(x)d\mu(x)\right).
\eqno(5.2)$$ If $\phi$ is strictly convex, equality holds if and only if $g(x)$ is constant for $\mu$-almost all $x\in X$} (see [31, p.~165]). {\bf Lemma 5.3}~ {\it Suppose that $\phi: [0,\infty)\rightarrow (0,\infty)$ is decreasing and convex with $\phi(0)=\infty$. If $K,L\in{\cal S}^{n}$ and $0\leq i<n$, then $$\frac{1}{n{B}_{i}(K)}\int_{S^{n-1}}\phi \left(\frac{d(L,u)}{d(K,u)}\right)d(K,u)^{n-i}dS(u)\geq \phi\left(\left(\frac{{B}_{i}(L)}{{B}_{i}(K)}\right)^{1/(n-i)}\right) .\eqno(5.3)$$ If $\phi$ is strictly convex, equality holds if and only if $K$ and $L$ have similar chord.} {\it Proof}~ This follows immediately from (2.4), (2.5), (5.1) and Jensen's inequality. {\bf Theorem 5.4}~ (Orlicz chord Minkowski inequality) {\it If $K,L\in {\cal S}^{n}$, $0\leq i<n$ and $\phi\in \Phi$, then $$B_{\phi,i}(K,L)\geq {B}_{i}(K)\phi\left(\left(\frac{{B}_{i}(L)}{{B}_{i}(K)}\right)^{1/(n-i)}\right) .\eqno(5.4)$$ If $\phi$ is strictly convex, equality holds if and only if $K$ and $L$ have similar chord.} {\it Proof}~ This follows immediately from (4.4) and Lemma 5.3. {\bf Corollary 5.5} {\it If $K,L\in {\cal S}^{n}$, $0\leq i<n$ and $p\geq 1$, then $$B_{-p,i}(K,L)^{n-i}\geq{B}_{i}(K)^{n-i+p}{B}_{i}(L)^{-p},\eqno(5.5)$$ with equality if and only if $K$ and $L$ are dilates.} {\it Proof} This follows immediately from Theorem 5.4 with $\phi_{1}(t)=\phi_{2}(t)=t^{-p}$ and $p\geq 1$. Taking $i=0$ in (5.5) yields the following $L_{p}$-Minkowski inequality: if $K,L\in {\cal S}^{n}$ and $p\geq 1$, then $$B_{-p}(K,L)^{n}\geq B(K)^{n+p}B(L)^{-p},$$ with equality if and only if $K$ and $L$ have similar chord.
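Corollary 5.5 (equivalently, Theorem 5.4 with $\phi(t)=t^{-p}$) can be verified numerically in the plane ($n=2$, $i=0$), where it reads $B_{-p,0}(K,L)^{2}\geq B(K)^{2+p}B(L)^{-p}$. The sketch below is our own illustrative code, with ellipse test bodies as assumptions:

```python
import math

def d_ellipse(a, b):
    # half chord of an origin-centred ellipse with semi-axes a, b
    return lambda t: a * b / math.sqrt((b * math.cos(t))**2 + (a * math.sin(t))**2)

def integrate(f, steps=20000):
    h = 2 * math.pi / steps
    return h * sum(f(k * h) for k in range(steps))

p, dK, dL = 2.0, d_ellipse(2.0, 1.0), d_ellipse(1.0, 3.0)
B_K = 0.5 * integrate(lambda t: dK(t)**2)                      # B(K), n = 2, i = 0
B_L = 0.5 * integrate(lambda t: dL(t)**2)
B_pKL = 0.5 * integrate(lambda t: dK(t)**(2 + p) * dL(t)**(-p))  # B_{-p,0}(K,L), (2.7)
```

As with (2.9), the underlying H\"older inequality is valid for finite sums, so the discretized form of (5.5) holds exactly; since these ellipses are not dilates of each other, the inequality is strict.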
{\bf Corollary 5.6} {\it Let $K,L\in {\cal M}\subset{\cal S}^{n}$, $0\leq i<n$ and $\phi\in \Phi$. If either $$B_{\phi,i}(Q,K)=B_{\phi,i}(Q,L),~ {\it for~ all}~ Q\in{\cal M}\eqno(5.6)$$ or $$\frac{B_{\phi,i}(K,Q)}{{B}_{i}(K)}=\frac{B_{\phi,i}(L,Q)}{{B}_{i}(L)},~ {\it for~ all}~ Q\in{\cal M},\eqno(5.7)$$ then} $K=L.$ When $\phi_{1}(t)=\phi_{2}(t)=t^{-p}$ and $p\geq 1$, Corollary 5.6 becomes the following result. {\bf Corollary 5.7} {\it Let $K,L\in {\cal M}\subset{\cal S}^{n}$, $0\leq i<n$ and $p\geq 1$. If either $$B_{-p,i}(K,Q)=B_{-p,i}(L,Q),~ {\it for~ all}~ Q\in{\cal M}$$ or $$\frac{B_{-p,i}(K,Q)}{{B}_{i}(K)}=\frac{B_{-p,i}(L,Q)}{{B}_{i}(L)},~ {\it for~ all}~ Q\in{\cal M},$$ then} $K=L.$ \vskip 10pt \noindent{\large \bf 6 ~Orlicz chord Brunn-Minkowski inequality}\vskip 10pt {\bf Lemma 6.1}~ {\it If $K,L\in {\cal S}^{n}$, $0\leq i<n$, and $\phi_{1}, \phi_{2}\in \Phi$, then} $${B}_{i}(K\check{+}_{\phi}L)=B_{\phi_{1},i}(K\check{+}_{\phi}L, K) +B_{\phi_{2},i}(K\check{+}_{\phi}L, L).\eqno(6.1)$$ {\it Proof}~ This follows immediately from (3.1), (3.4) and (4.5). {\bf Theorem 6.2}~ (Orlicz chord Brunn-Minkowski inequality)~ {\it If $K,L\in{\cal S}^{n}$, $0\leq i<n$ and $\phi\in \Phi_{2}$, then $$1\geq\phi\left(\left(\frac{{B}_{i}(K)} {{B}_{i}(K\check{+}_{\phi}L)}\right)^{1/(n-i)},\left(\frac{{B}_{i}(L)}{{B}_{i}(K\check{+}_ {\phi}L)}\right)^{1/(n-i)}\right).\eqno(6.2)$$ If $\phi$ is strictly convex, equality holds if and only if $K$ and $L$ have similar chord.} {\it Proof}~ This follows immediately from (5.4) and Lemma 6.1. {\bf Corollary 6.3} {\it If $K,L\in {\cal S}^{n}$, $0\leq i<n$ and $p\geq 1$, then $${B}_{i}(K\check{+}_{p}L)^{-p/(n-i)}\geq{B}_{i}(K)^{-p/(n-i)}+{B}_{i}(L)^{-p/(n-i)},\eqno(6.3)$$ with equality if and only if $K$ and $L$ have similar chord.} {\it Proof} The result follows immediately from Theorem 6.2 with $\phi(x_{1},x_{2})=x_{1}^{-p}+x_{2}^{-p}$ and $p\geq 1$.
Taking $i=0$ in (6.3) yields the $L_{p}$-Brunn-Minkowski inequality for the chord integrals: if $K,L\in{\cal S}^{n}$ and $p\geq 1$, then $$B(K\check{+}_{p}L)^{-p/n}\geq B(K)^{-p/n}+B(L)^{-p/n},$$ with equality if and only if $K$ and $L$ have similar chord. \end{document}
\begin{document} \title[Some topological properties of spaces between the Sorgenfrey and usual topologies on the real line] {Some topological properties of spaces between the Sorgenfrey and usual topologies on the real line} \author{Fucai Lin} \address{(Fucai Lin): 1. School of mathematics and statistics, Minnan Normal University, Zhangzhou 363000, P. R. China; 2. Fujian Key Laboratory of Granular Computing and Applications, Minnan Normal University, Zhangzhou 363000, P. R. China} \email{[email protected]; [email protected]} \author{Jiada Li} \address{(Jiada Li): School of mathematics and statistics, Minnan Normal University, Zhangzhou 363000, P. R. China} \email{[email protected]} \thanks{This work is supported by the Key Program of the Natural Science Foundation of Fujian Province (No: 2020J02043), the National Natural Science Foundation of China (Grant No. 11571158), the Institute of Meteorological Big Data-Digital Fujian and Fujian Key Laboratory of Data Science and Statistics.} \keywords{Sorgenfrey line; $H$-space; zero-dimension; local compactness; $\sigma$-compactness; $k_{\omega}$-property; perfectly subparacompact; quasi-metrizable.} \subjclass[2010]{primary 54A10; secondary 54B99} \begin{abstract} The $H$-space, denoted $(\mathbb{R}, \tau_{A})$, has $\mathbb{R}$ as its point set and a basis consisting of the usual open interval neighborhoods at points of $A$, while taking Sorgenfrey neighborhoods at points of $\mathbb{R}\setminus A$. In this paper, we mainly discuss some topological properties of $H$-spaces.
In particular, we prove that, for any subset $A\subset \mathbb{R}$, (1) $(\mathbb{R}, \tau_{A})$ is zero-dimensional iff $\mathbb{R}\setminus A$ is dense in $(\mathbb{R}, \tau_{E})$; (2) $(\mathbb{R}, \tau_{A})$ is locally compact iff $(\mathbb{R}, \tau_{A})$ is a $k_{\omega}$-space; (3) if $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact, then $\mathbb{R}\setminus A$ is countable and nowhere dense; if $\mathbb{R}\setminus A$ is countable and scattered, then $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact; (4) $(\mathbb{R}, \tau_{A})^{\aleph_{0}}$ is perfectly subparacompact; (5) there exists a subset $A\subset\mathbb{R}$ such that $(\mathbb{R}, \tau_{A})$ is not quasi-metrizable; (6) $(\mathbb{R}, \tau_{A})$ is metrizable if and only if $(\mathbb{R}, \tau_{A})$ is a $\beta$-space. \end{abstract} \maketitle \section{Introduction} The usual metric topology on $\mathbb{R}$ is coarser than the topology of the Sorgenfrey line $\mathbb{S}$, a well-known space which has been studied extensively. The Sorgenfrey line has a basis consisting of all half-open intervals of the form $[a, b)$, where $a< b$. The class of $H$-spaces considered in \cite{Chatyrko} consists of spaces whose topologies lie between the usual topology on $\mathbb{R}$ and the topology of the Sorgenfrey line $\mathbb{S}$; these spaces were described by Hattori in \cite{Braverman}.
The $H$-space, denoted $(\mathbb{R}, \tau_{A})$, has $\mathbb{R}$ as its point set and a basis consisting of the usual open interval neighborhoods at points of $A$, while taking Sorgenfrey neighborhoods at points of $\mathbb{R}\setminus A$; that is, the topology $\tau_{A}$ is defined as follows: (1) For each $x\in A$, $\{(x-\varepsilon, x+\varepsilon): \varepsilon>0\}$ is the neighborhood base at $x$, and (2) for each $x\in \mathbb{R}\setminus A$, $\{[x, x+\varepsilon): \varepsilon>0\}$ is the neighborhood base at $x$.\\ Chatyrko and Hattori first began to study the properties of such spaces, and many interesting results were obtained; see \cite{Chatyrko1} and \cite{Chatyrko2}. In particular, for any $A\subset\mathbb{R}$, $(\mathbb{R}, \tau_{A})$ is a regular, hereditarily Lindel\"{o}f, hereditarily separable Baire space. Moreover, for any closed subset $A$ of $\mathbb{R}$, they proved that $(\mathbb{R}, \tau_{A})$ is homeomorphic to the Sorgenfrey line $\mathbb{S}$ if and only if $A$ is countable. In 2017, Kulesza in \cite{Chatyrko} improved on and summarized the work of Chatyrko and Hattori; he called this kind of space an $H$-space and demonstrated the properties of $H$-spaces with respect to homeomorphisms, functions, completeness and reversibility. In particular, Kulesza proved that $(\mathbb{R}, \tau_{A})$ is homeomorphic to $\mathbb{S}$ if and only if $A$ is scattered, and that $(\mathbb{R}, \tau_{A})$ is complete if and only if $\mathbb{R}\setminus A$ is countable, which implies that $(\mathbb{R}, \tau_{\mathbb{P}})$ is complete. Moreover, Bouziad and Sukhacheva in \cite{BS} gave characterizations of some topological properties of $(\mathbb{R}, \tau_{A})$, such as each compact subspace being countable, local compactness and so on. In this paper, we continue the work of Chatyrko and Hattori by proving additional facts about the spaces $(\mathbb{R}, \tau_{A})$. The remainder of this paper is organized as follows.
Section 2 outlines some concepts and terminology. In Section 3, we mainly discuss some topological properties of $H$-spaces, such as zero-dimensionality, $\sigma$-compactness, the $k_{\omega}$-property, the perfect property, quasi-metrizability and so on. In particular, we characterize the sets $A$ or $\mathbb{R}\setminus A$ for which $(\mathbb{R}, \tau_{A})$ is zero-dimensional, $\sigma$-compact, or a $k_{\omega}$-space, respectively. Moreover, we show that $(\mathbb{R}, \tau_{A})^{\aleph_{0}}$ is perfectly subparacompact. Further, we discuss some generalized metric properties of $(\mathbb{R}, \tau_{A})$, and prove that there exists a subset $A\subset \mathbb{R}$ such that $(\mathbb{R}, \tau_{A})$ is not quasi-metrizable. In Section 4, we pose some interesting questions about $H$-spaces. \section{Preliminaries} In this section, we introduce the necessary notation and terminology. First of all, let $\mathbb{N}$, $\mathbb{Z}$ and $\mathbb{R}$ denote the sets of all positive integers, all integers and all real numbers, respectively. For undefined terminology, the reader may refer to \cite{Chatyrko3} and \cite{Gr}. \begin{definition}\cite{Chatyrko3} Let $X$ be a topological space. (1) $X$ is called {\it zero-dimensional} if it has a base consisting of sets that are simultaneously open and closed in it. (2) $X$ is called a {\it Baire space} if every intersection of a countable collection of open dense sets in $X$ is also dense in $X$. (3) $X$ is called {\it locally compact} if every point $x$ of $X$ has a compact neighbourhood, i.e., there exist an open set $U$ and a compact set $K$ such that $x\in U\subseteq K$. (4) $X$ is called a {\it $k_\omega$-space} if there exists a countable family of compact subsets $\{K_{n}: n\in\mathbb{N}\}$ of $X$ such that a subset $F$ of $X$ is closed in $X$ provided that $F\cap K_n$ is closed in $K_n$ for each $n\in\mathbb{N}$.
(5) $X$ is {\it $\sigma$-compact} if it is the union of countably many compact subsets of $X$. (6) $X$ is {\it Lindel\"{o}f} if every open cover of $X$ has a countable subcover. Clearly, each $k_{\omega}$-space is $\sigma$-compact and each $\sigma$-compact space is Lindel\"{o}f. \end{definition} \begin{definition}\cite{Chatyrko3, Gr} (1) A space $X$ is {\it subparacompact} if each open cover of $X$ has a $\sigma$-locally finite closed refinement. (2) A space $X$ is {\it perfect} if each closed subset of $X$ is a $G_{\delta}$ in $X$. (3) A space $X$ is {\it perfectly subparacompact} if it is perfect and subparacompact. (4) A space $X$ is {\it weakly $\theta$-refinable} if for each open cover $\mathscr{U}$ of $X$, there exists an open cover $\mathscr{V}=\bigcup_{n=1}^{\infty}\mathscr{V}(n)$ of $X$ which refines $\mathscr{U}$ and which has the property that if $x\in X$, then there exists an $n\in\mathbb{N}$ such that $x$ belongs to exactly $k$ members of $\mathscr{V}(n)$ for some $k\in\mathbb{N}$. (5) A family $\mathscr{U}$ of open sets in $X$ is called {\it interior-preserving} if for each $\mathscr{F}\subset \mathscr{U}$ and each $y\in\bigcap\mathscr{F}$, the intersection $\bigcap\mathscr{F}$ is a neighborhood of $y$. \end{definition} \begin{definition}\cite{Chatyrko3} A family $\mathscr{P}$ of subsets of a space $X$ is a {\it network} for $X$ if for each point $x\in X$ and any neighborhood $U$ of $x$ there is a $P\in\mathscr{P}$ such that $x\in P\subset U$. The {\it network weight} of a space $X$, denoted by $nw(X)$, is the smallest cardinal number of the form $|\mathscr{P}|$, where $\mathscr{P}$ is a network for $X$. \end{definition} \begin{definition}\cite{Gr} Recall that $(X, \tau)$ is a {\it $\beta$-space} if there exists a function $g: \omega\times X\rightarrow \tau$ such that if $x\in g(n, x_{n})$ for every $n\in\omega$ then the sequence $\{x_{n}\}$ has a cluster point in $X$.
\end{definition} \begin{definition}\cite{Chatyrko2} Let $A$ be a subset of the real line $\mathbb{R}$. Define the topology $\tau_{A}$ on $\mathbb{R}$ as follows: (1)\, For each $x\in A, \{(x-\epsilon, x+\epsilon):\epsilon>0\}$ is the neighborhood base at $x$; (2)\, for each $x\in \mathbb{R}\setminus A, \{[x, x+\epsilon):\epsilon>0\}$ is the neighborhood base at $x$. Then $(\mathbb{R}, \tau_{A})$ is called an {\it $H$-space}. The point $x$ is called an {\it $\mathbb{R}$-point} if $x\in A$; otherwise, $x$ is called an {\it $\mathbb{S}$-point}. \end{definition} Let $\tau_{E}$ and $\tau_{S}$ denote the usual (Euclidean) topology of $\mathbb{R}$ and the topology of the Sorgenfrey line $\mathbb{S}$, respectively. It is clear that $\tau_{A}=\tau_{E}$ if $A=\mathbb{R}$ and $\tau_{A}=\tau_{S}$ if $A=\emptyset$. It is also obvious that $\tau_{E}\subset\tau_{A}\subset\tau_{S}$. Some topological properties of $(\mathbb{R}, \tau_{E})$ and $(\mathbb{R}, \tau_{S})$ are collected in the table below: \begin{table}[!hbt] \centering \textbf{Some topological properties of $(\mathbb{R}, \tau_{E})$ and $(\mathbb{R}, \tau_{S})$} \ \ \begin{tabular}{|c|c|c|c|} \hline Number & Property & $(\mathbb{R}, \tau_{E})$ & $(\mathbb{R}, \tau_{S})$\\ \hline 1 & Metrizable & Yes & No \\ \hline 2 & Hereditarily Separable & Yes & Yes \\ \hline 3 & Normality & Yes & Yes \\ \hline 4 & Lindel\"{o}f & Yes & Yes \\ \hline 5 & Measurable & Yes & No \\ \hline 6 & Baire Space & Yes & Yes \\ \hline 7 & Zero-dimension & No & Yes \\ \hline 8 & Compactness & No & No \\ \hline 9 & Countably Compact & No & No \\ \hline 10 & Local Compactness & Yes & No \\ \hline 11 & Sequential Compactness & No & No \\ \hline 12 & Paracompactness & Yes & Yes \\ \hline 13 & $\sigma$-Compactness & Yes & No \\ \hline 14 & Connectedness & Yes & No \\ \hline 15 & Path Connectedness & Yes & No \\ \hline 16 & Local Connectedness & Yes & No \\ \hline 17 & Every compact subset is countable & No & Yes \\ \hline \end{tabular} \end{table} By the table, it
is easy to see that, for every subset $A$ of the real line, the $H$-space is always a hereditarily separable, paracompact, Lindel\"{o}f, normal and first-countable space. We also see that, for any subset $A$ of $\mathbb{R}$, the $H$-space is never a compact, countably compact or sequentially compact space. According to \cite[Proposition 2.3]{Chatyrko2}, the $H$-space $(\mathbb{R}, \tau_{A})$ is second-countable if and only if $\mathbb{R}\setminus A$ is countable. \section{Main results} In this section, we mainly discuss some topological properties of $H$-spaces, such as zero-dimensionality, $\sigma$-compactness, the $k_{\omega}$-property, the perfect property, quasi-metrizability and so on. First, we give an obvious lemma. \begin{lemma}\label{l0} Let $D$ be a dense subset of $(\mathbb{R}, \tau_{A})$. Then $D$ is dense in $(\mathbb{R}, \tau_{E})$ and $(\mathbb{R}, \tau_{S})$. \end{lemma} \begin{proof} Obviously, $D$ is a dense subset of $(\mathbb{R}, \tau_{E})$ since $\tau_{A}$ is finer than $\tau_{E}$. In order to prove that $D$ is dense in $(\mathbb{R}, \tau_{S})$, take an arbitrary non-empty open subset $U$ in $\tau_{S}$; then there exists a non-empty open subset $V$ in $\tau_{E}$ with $V\subset U$, hence $\emptyset\neq V\cap D\subset U\cap D$. Therefore, $D$ is also dense in $(\mathbb{R}, \tau_{S})$. \end{proof} By Lemma~\ref{l0}, we have the following corollary. \begin{corollary} For an arbitrary subset $A$ of $\mathbb{R}$, we have $d(\mathbb{R}, \tau_{A})=d(\mathbb{R}, \tau_{E})=d(\mathbb{R}, \tau_{S})=\omega$. \end{corollary} Since $(\mathbb{R}, \tau_{E})$ and $(\mathbb{R}, \tau_{S})$ are both Baire spaces, we have the following corollary. \begin{corollary} For an arbitrary subset $A$ of $\mathbb{R}$, the $H$-space $(\mathbb{R}, \tau_{A})$ is a Baire space. \end{corollary} \begin{proposition} For an arbitrary subset $A$ of $\mathbb{R}$, the $H$-space $(\mathbb{R}, \tau_{A})$ is homeomorphic to $(\mathbb{R}, \tau_{E})$ if and only if $A=\mathbb{R}$.
\end{proposition} \begin{proof} Assume that $(\mathbb{R}, \tau_{A})$ is homeomorphic to $(\mathbb{R}, \tau_{E})$ and that $A\neq\mathbb{R}$. Then $\mathbb{R}\setminus A\neq\emptyset$. Take an arbitrary point $a\in \mathbb{R}\setminus A$. Then $(-\infty, a)$ is an open and closed subset in $(\mathbb{R}, \tau_{A})$, hence $(\mathbb{R}, \tau_{A})$ is not connected. However, $(\mathbb{R}, \tau_{E})$ is connected, a contradiction. Hence $A=\mathbb{R}$. \end{proof} Now, we can prove one of the main results of this paper, which characterizes the subsets $A$ for which $(\mathbb{R}, \tau_{A})$ is zero-dimensional. \begin{theorem} For an arbitrary subset $A$ of $\mathbb{R}$, the $H$-space $(\mathbb{R}, \tau_{A})$ is zero-dimensional if and only if $\mathbb{R}\setminus A$ is dense in $(\mathbb{R}, \tau_{E})$. \end{theorem} \begin{proof} Necessity. Let $(\mathbb{R}, \tau_{A})$ be zero-dimensional. Assume that $\mathbb{R}\setminus A$ is not dense in $(\mathbb{R}, \tau_{E})$; then it follows from Lemma~\ref{l0} that $\mathbb{R}\setminus A$ is also not dense in $(\mathbb{R}, \tau_{A})$. Then there exists a non-empty open subset $U$ in $(\mathbb{R}, \tau_{A})$ such that $U\cap (\mathbb{R}\setminus A)=\emptyset$. Hence $U\subset A$, which implies that there exists an open interval $(c, d)$ such that $(c, d)\subset U\subset A$. Since $(\mathbb{R}, \tau_{A})$ is zero-dimensional and $(c, d)\subset A$, it follows that $(c, d)$ contains a non-empty open and closed subset $V$. Since each neighborhood of each point of $A$ belongs to $\tau_{E}$, it is easy to see that $V$ is an open and closed subset in $(\mathbb{R}, \tau_{E})$. However, $(\mathbb{R}, \tau_{E})$ is connected and thus has no proper non-empty open and closed subsets, which is a contradiction. Sufficiency. Let $\mathbb{R}\setminus A$ be dense in $(\mathbb{R}, \tau_{E})$. Take an arbitrary $x_{0}\in\mathbb{R}$. We divide the proof into the following two cases. {\bf Case 1:} $x_{0} \in A$.
By Lemma~\ref{l0}, there exist a strictly increasing sequence $\{y_{n}\}$ and a strictly decreasing sequence $\{z_{n}\}$ such that $\{y_{n}\}\subset \mathbb{R}\setminus A$, $\{z_{n}\}\subset \mathbb{R}\setminus A$ and both sequences converge to $x_{0}$ in $(\mathbb{R}, \tau_{E})$. For any $n\in\mathbb{N}$, put $U_{n}=[y_{n}, z_{n})$. Clearly, each $U_{n}$ is an open and closed subset of $(\mathbb{R}, \tau_{A})$. Moreover, it is easily checked that the family $\{U_{n}: n\in\mathbb{N}\}$ is a base at $x_{0}$ in $(\mathbb{R}, \tau_{A})$. Hence the point $x_{0}$ has a base consisting of open and closed subsets in $(\mathbb{R}, \tau_{A})$. {\bf Case 2:} $x_{0} \not\in A$. Obviously, there exists a strictly decreasing sequence $\{x_{n}\}$ such that $\{x_{n}\}\subset \mathbb{R}\setminus A$ and $\{x_{n}\}$ converges to $x_{0}$ in $(\mathbb{R}, \tau_{E})$. Then the family $\{[x_{0}, x_{n}): n\in\mathbb{N}\}$ is a base at $x_{0}$ consisting of open and closed subsets in $(\mathbb{R}, \tau_{A})$. In either case, $(\mathbb{R}, \tau_{A})$ is zero-dimensional. \end{proof} Next we prove the second main result of this paper, which shows that local compactness is equivalent to the $k_{\omega}$-property in $(\mathbb{R}, \tau_{A})$. Indeed, A. Bouziad and E. Sukhacheva \cite{BS} have proved that, for an arbitrary subset $A$ of $\mathbb{R}$, the space $(\mathbb{R}, \tau_{A})$ is locally compact if and only if $\mathbb{R}\setminus A$ is closed in $\mathbb{R}$ and discrete in $\mathbb{S}$. \begin{theorem}\label{t2} For an arbitrary subset $A$ of $\mathbb{R}$, the following statements are equivalent: \begin{enumerate} \item $(\mathbb{R}, \tau_{A})$ is locally compact; \item $(\mathbb{R}, \tau_{A})$ is a $k_{\omega}$-space; \item $\mathbb{R}\setminus A$ is discrete and closed in $(\mathbb{R}, \tau_{A})$. \end{enumerate} \end{theorem} \begin{proof} From \cite{BS}, we have (1) $\Leftrightarrow$ (3). The implication (1) $\Rightarrow$ (2) is easily checked. It suffices to prove that (2) $\Rightarrow$ (1).
(2) $\Rightarrow$ (1). Assume that $(\mathbb{R}, \tau_{A})$ is a $k_{\omega}$-space, and let $\{K_{n}\}$ be an increasing sequence of compact subsets such that the family $\{K_{n}\}$ determines the topology of $(\mathbb{R}, \tau_{A})$. Take an arbitrary $x_{0}\in\mathbb{R}$. Since $(\mathbb{R}, \tau_{A})$ is first-countable, choose an open neighborhood base $\{U_{n}: n\in\mathbb{N}\}$ of the point $x_{0}$ in $(\mathbb{R}, \tau_{A})$ such that $U_{n+1}\subset U_{n}$ for each $n\in\mathbb{N}$. We claim that there exist $m, p\in\mathbb{N}$ such that $U_{m}\subset K_{p}$. Suppose not; then for each $n\in\mathbb{N}$ we have $U_{n}\setminus K_{n}\neq\emptyset$, hence we can choose a point $x_{n}\in U_{n}\setminus K_{n}$. Put $K=\{x_{n}: n\in\mathbb{N}\}\cup \{x_{0}\}$. Then $K$ is a compact subset in $(\mathbb{R}, \tau_{A})$ and $K\setminus K_{n}\neq\emptyset$ for each $n\in\mathbb{N}$. However, since $(\mathbb{R}, \tau_{A})$ is a $k_{\omega}$-space, it is easily checked that there exists $n_{0}\in\mathbb{N}$ such that $K\subset K_{n_{0}}$, which is a contradiction. \end{proof} From Theorem~\ref{t2}, it is natural to pose the following question. \begin{question} For which subsets $A$ of $\mathbb{R}$ is $(\mathbb{R}, \tau_{A})$ $\sigma$-compact? \end{question} Next we give a partial answer to this question. First, we give some lemmas. \begin{proposition}\label{p1} For an arbitrary subset $B$ of $(\mathbb{R}, \tau_{S})$, we have $nw(B)\geq |B|$. In particular, $w(B)\geq |B|$. \end{proposition} \begin{proof} Let $\mathscr{P}$ be an arbitrary network of the subspace $B$ with $|\mathscr{P}|=nw(B)$. For each $x\in B$, let $$\mathscr{B}_{x}=\{P\in\mathscr{P}: x\in P,\ P\subset [x, x+\frac{1}{n})\cap B\ \mbox{for some}\ n\in\mathbb{N}\}.$$ Then $\bigcup_{x\in B}\mathscr{B}_{x}\subset \mathscr{P}$ and is also a network of the subspace $B$. However, for any $x, y\in B$ with $x\neq y$, we have $\mathscr{B}_{x}\cap \mathscr{B}_{y}=\emptyset$, hence $nw(B)\geq |B|$.
\end{proof} By Proposition~\ref{p1} and the separability of $(\mathbb{R}, \tau_{S})$, we have the following corollary. \begin{corollary}\label{l1}\cite[Proposition 2.3]{Chatyrko2} For any subset $X$ of $(\mathbb{R}, \tau_{S})$, $X$ is metrizable if and only if $X$ is countable. \end{corollary} \begin{lemma}\label{l9} For an arbitrary subset $A$ of $\mathbb{R}$, $(\mathbb{R}, \tau_{A})$ is submetrizable. \end{lemma} \begin{proof} Since $(\mathbb{R}, \tau_{E})$ is metrizable and $\tau_E\subset \tau_A$, it follows that $(\mathbb{R}, \tau_{A})$ is submetrizable. \end{proof} \begin{lemma}\label{l2} If $K$ is a compact subset of $(\mathbb{R}, \tau_{A})$, then $K\cap (\mathbb{R}\setminus A)$ is countable. \end{lemma} \begin{proof} By Lemma~\ref{l9}, $K$ is metrizable. Put $X=K\cap (\mathbb{R}\setminus A)$. Then $X$ is metrizable. Moreover, $X$ is a subspace of $(\mathbb{R}, \tau_{S})$. By Corollary~\ref{l1}, $X$ is countable. Therefore, $K\cap (\mathbb{R}\setminus A)$ is countable. \end{proof} \begin{lemma}\label{t1} For an arbitrary subset $A$ of $\mathbb{R}$, if $\overline{\mathbb{R}\setminus A}$ is countable, then $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact. \end{lemma} \begin{proof} Put $U=\mathbb{R}\setminus\overline{\mathbb{R}\setminus A}$. Then $U$ is open in $(\mathbb{R}, \tau_{A})$ and $U\subset A$, hence $U$ is open in $(\mathbb{R}, \tau_{E})$, which implies that $U$ is $\sigma$-compact. From $U\subset A$, it follows that $U$ is $\sigma$-compact in $(\mathbb{R}, \tau_{A})$. By the countability of $\overline{\mathbb{R}\setminus A}$, $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact. \end{proof} Now we have the following two results. \begin{theorem}\label{t4} For an arbitrary subset $A$ of $\mathbb{R}$, if $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact, then $\mathbb{R}\setminus A$ is countable and nowhere dense in $(\mathbb{R}, \tau_{A})$. \end{theorem} \begin{proof} By Lemma~\ref{l2}, it is easy to see that $\mathbb{R}\setminus A$ is countable.
It suffices to prove that $\mathbb{R}\setminus A$ is nowhere dense. Assume that $\mathbb{R}\setminus A$ is not nowhere dense. Then there exists a non-empty open subset $V$ contained in the closure of $\mathbb{R}\setminus A$. Hence there exist $a, b\in \mathbb{R}\setminus A$ such that $[a, b)\subset V$. Then $[a, b)$ is $\sigma$-compact since $[a, b)$ is open and closed in $(\mathbb{R}, \tau_{A})$, hence there exists a sequence of compact subsets $\{K_{n}\}$ of $(\mathbb{R}, \tau_{A})$ such that $[a, b)=\bigcup_{n\in\mathbb{N}} K_{n}$. By Lemma~\ref{l0}, it is easily checked that $[a, b)$ is a Baire space, so there exists $n\in\mathbb{N}$ such that $K_{n}$ contains a non-empty open subset $W$ of $(\mathbb{R}, \tau_{A})$. Since $W\subset [a, b)\subset V$, there exist $c, d\in\mathbb{R}\setminus A$ such that $[c, d)\subset W$. Since $[c, d)$ is closed and $[c, d)\subset K_{n}$, it follows that $[c, d)$ is compact, which is a contradiction. Therefore, $\mathbb{R}\setminus A$ is nowhere dense. \end{proof} \begin{theorem}\label{t6} For an arbitrary subset $A$ of $\mathbb{R}$, if $\mathbb{R}\setminus A$ is countable and scattered in $(\mathbb{R}, \tau_{A})$, then $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact. \end{theorem} \begin{proof} Assume that $\mathbb{R}\setminus A$ is countable and scattered. Then it follows from \cite[Corollary 3]{V. Kannan} that $\mathbb{R}\setminus A$ is homeomorphic to a subspace of $[0, \omega_{1}).$ Hence it is easily seen that the closure of $\mathbb{R}\setminus A$ in $(\mathbb{R}, \tau_{A})$ is countable. By Lemma~\ref{t1}, $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact. \end{proof} The following example shows that $\sigma$-compactness of $(\mathbb{R}, \tau_{A})$ does not imply local compactness. \begin{example} There exists a subset $A$ such that $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact but not locally compact. \end{example} \begin{proof} Let $A=\mathbb{R}\setminus(\{0\}\cup\{\frac{1}{n}: n\in\mathbb{N}\})$.
Then $\mathbb{R}\setminus A=\{0\}\cup\{\frac{1}{n}: n\in\mathbb{N}\}$ is closed, countable, scattered and non-discrete. By Theorem~\ref{t6}, $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact. However, it follows from Theorem~\ref{t2} that $(\mathbb{R}, \tau_{A})$ is not locally compact and not a $k_{\omega}$-space. \end{proof} Next we prove that $(\mathbb{R}, \tau_{A})^{\aleph_{0}}$ is perfectly subparacompact for an arbitrary subset $A\subset\mathbb{R}$. The proof of the following theorem is similar to the proof of \cite[Lemma~2.3]{Heath}. However, for the convenience of the reader, we include the proof. \begin{theorem}\label{t5} For an arbitrary subset $A$ of $\mathbb{R}$, $(\mathbb{R}, \tau_{A})^{n}$ is perfect for every $n\in\mathbb{N}$. \end{theorem} \begin{proof} We proceed by induction. The theorem is clear for $n=1$ since $(\mathbb{R}, \tau_{A})$ is a Lindel\"{o}f space. Therefore, let us suppose the theorem holds for $n$ and let us prove it for $n+1$. Let $Z=\prod_{i=1}^{n+1}Z_{i}$ with $Z_{i}=(\mathbb{R}, \tau_{A})$ for all $i\leq n+1.$ For every $m\leq n+1$, put $Z(m)=\prod_{i=1}^{n+1}Z_{i}(m)$, where $Z_{m}(m)=(\mathbb{R}, \tau_{E})$ and $Z_{i}(m)=(\mathbb{R}, \tau_{A})$ if $i\neq m$. Now it suffices to prove that an arbitrary open subset $U$ of $Z$ is an $F_{\sigma}$ in $Z$. For every $m\leq n+1$, let $U(m)$ be the interior of $U$ as a subset of $Z(m)$, and let $U^{\star}=\bigcup_{m=1}^{n+1}U(m)$. It follows from \cite[Lemma 2.2]{Heath} that each $Z(m)$ is perfect, hence $U(m)$ is an $F_{\sigma}$ in $Z(m)$ and thus also in $Z$. Therefore, $U^{\star}$ is also an $F_{\sigma}$ in $Z$. Put $U^{\prime}=U\setminus U^{\star}$. Thus it only remains to prove that $U^{\prime}$ is an $F_{\sigma}$ in $Z$. Clearly, for each $x=(x_{1}, \ldots, x_{n+1})\in U^{\prime}$, we have $x_{i}\in \mathbb{R}\setminus A$ for each $i\leq n+1$.
For each $z\in U^{\prime}$, let $\{W_{j}(z): j\in\mathbb{N}\}$ denote the base of neighborhoods of $z$ in $Z$ defined by $$W_{j}(z)=\{y\in Z: y_{i}\in [z_{i}, z_{i}+\frac{1}{j})\ \mbox{for each}\ i\leq n+1\}.$$ For every $j\in\mathbb{N}$, let $$U^{\prime}_{j}=\{z\in U^{\prime}: W_{j}(z)\subset U\}.$$ It is easily seen that $U^{\prime}=\bigcup_{j=1}^{\infty}U^{\prime}_{j}$. Next we shall prove that each $U^{\prime}_{j}$ is closed in $Z$. Take an arbitrary $j\in\mathbb{N}$ and assume $z\not\in U^{\prime}_{j}$; it suffices to prove that $z$ is not in the closure of $U^{\prime}_{j}$ in $Z$. For each $F\subset\{1, \cdots, n+1\}$, let $$U^{\prime}_{j, F}(z)=\{y\in U^{\prime}_{j}: z_{i}=y_{i}\ \mbox{iff}\ i\in F\}.$$ Clearly, $U^{\prime}_{j}=\bigcup\{U^{\prime}_{j, F}(z): F\subset \{1, \cdots, n+1\}\}$. Then it suffices to prove that for each $F\subset\{1, \cdots, n+1\}$ there exists a neighborhood of $z$ in $Z$ disjoint from $U^{\prime}_{j, F}(z)$. Indeed, suppose that $W_{j}(z)\cap U^{\prime}_{j, F}(z)\neq\emptyset$; then we can choose a point $x\in W_{j}(z)\cap U^{\prime}_{j, F}(z)$. Then the set $$V=W_{j}(z)\cap\{y\in Z: y_{i}<x_{i}\ \mbox{if}\ i\not\in F\}$$ is a neighborhood of $z$ in $Z$, and it will suffice to prove that $V\cap U^{\prime}_{j, F}(z)=\emptyset.$ Suppose not; then there exists some $y\in V\cap U^{\prime}_{j, F}(z)$. Clearly, $y\in W_{j}(z)$ and $y\neq z$, thus there is an $m\leq n+1$ such that $y_{m}>z_{m}$. Then $m\not\in F$. Put $$W=W_{j}(y)\cap \{u\in Z: y_{m}<u_{m}\}.$$ Clearly, $W$ is open in $Z(m)$ and $W\subset W_{j}(y)\subset U$. It follows from the definition of $U(m)$ that $W\subset U(m)\subset U^{\star}$. Moreover, it is easily checked that $x\in W$. Therefore, $x\in U^{\star}$, which is a contradiction. That completes the proof. \end{proof} \begin{theorem} For an arbitrary subset $A$ of $\mathbb{R}$ and every $n\in\mathbb{N}$, $(\mathbb{R}, \tau_{A})^{n}$ is perfectly subparacompact. \end{theorem} \begin{proof} By Theorem~\ref{t5}, $(\mathbb{R}, \tau_{A})^{n}$ is perfect.
It suffices to prove that $(\mathbb{R}, \tau_{A})^{n}$ is subparacompact. We proceed by induction. The result is certainly true for $n=1$. Let us assume that $(\mathbb{R}, \tau_{A})^{n}$ is subparacompact. Next we prove that $(\mathbb{R}, \tau_{A})^{n+1}$ is subparacompact. By \cite[Proposition 2.9]{Lutzer}, it suffices to prove that $(\mathbb{R}, \tau_{A})^{n+1}$ is weakly $\theta$-refinable. Put $Z=(\mathbb{R}, \tau_{A})^{n+1}$. Let $\mathscr{W}=\{W(\alpha): \alpha\in \Lambda\}$ be a basic open cover of the space $Z$, where $W(\alpha)=U(1, \alpha)\times \cdots\times U(n+1, \alpha)$ such that $U(k, \alpha)=(a(k, \alpha), b(k, \alpha))$ if $a(k, \alpha)\in A$ and $U(k, \alpha)=[a(k, \alpha), b(k, \alpha))$ if $a(k, \alpha)\not\in A$. With the same notation as in the proof of Theorem~\ref{t5}, let $W(\alpha, m)$ be the interior of the set $W(\alpha)$ in $Z(m)$ for each $m\leq n+1$; thus $W(\alpha, m)$ is open in $Z(m)$, hence also open in $Z$. For each $m\leq n+1$, put $\mathscr{G}(m)=\{W(\alpha, m): \alpha\in \Lambda\}$. By \cite[Corollary 2.6]{Lutzer}, each $Z(m)$ is perfectly subparacompact. Then it follows from \cite[Proposition 2.9]{Lutzer} that $\mathscr{G}(m)$ has a weak $\theta$-refinement $\mathscr{H}(m)$ which covers $\bigcup\mathscr{G}(m)$ and which consists of open subsets of $Z(m)$. Clearly, $\mathscr{H}(m)$ is also a collection of open subsets of $Z$. Let $$Y=Z\setminus\bigcup\{\bigcup\mathscr{H}(m): 1\leq m\leq n+1\}.$$ Then for each $y\in Y$ we have $y_{i}\in \mathbb{R}\setminus A$ for each $i\leq n+1$, hence there exists $\alpha\in \Lambda$ such that $y_{i}=a(i, \alpha)$ for each $i\leq n+1$. For each $y\in Y$, pick $\alpha(y)\in \Lambda$ such that $y\in W(\alpha(y))$. Put $$\mathscr{H}(0)=\{W(\alpha(y)): y\in Y\}.$$ It is easily seen that if $x$ and $y$ are distinct elements of $Y$, then $y\not\in W(\alpha(x))$. Therefore, $\mathscr{H}(0)$ is a collection of open subsets of $Z$ which covers $Y$ in such a way that each point of $Y$ belongs to exactly one member of $\mathscr{H}(0)$.
Hence $\mathscr{H}=\{\mathscr{H}(m): m\in\omega\}$ is a weak $\theta$-refinement of $\mathscr{W}$. Therefore, $Z$ is weakly $\theta$-refinable. \end{proof} By \cite[Proposition 2.1]{Heath} and \cite[Proposition 2.7]{Lutzer}, we have the following theorem. \begin{theorem} For an arbitrary subset $A$ of $\mathbb{R}$, $(\mathbb{R}, \tau_{A})^{\aleph_{0}}$ is perfectly subparacompact. \end{theorem} \begin{corollary}\cite{Lutzer,Heath} The space $(\mathbb{R}, \tau_{S})^{\aleph_{0}}$ is perfectly subparacompact. \end{corollary} Finally, we consider the quasi-metrizability of $H$-spaces. It is well known that $(\mathbb{R}, \tau_{E})$ and $(\mathbb{R}, \tau_{S})$ are both quasi-metrizable, so it is natural to pose the following question. \begin{question}\label{q11} For an arbitrary $A\subset \mathbb{R}$, is $(\mathbb{R}, \tau_{A})$ quasi-metrizable? \end{question} We give a negative answer to Question~\ref{q11} in Example~\ref{e111}. Indeed, from the definition of generalized ordered spaces, we have the following proposition. \begin{proposition}\label{p0} For an arbitrary $A\subset\mathbb{R}$, the $H$-space $(\mathbb{R}, \tau_{A})$ is a generalized ordered space. \end{proposition} By \cite[Theorem 10]{Jacob Kolner}, we can easily give a characterization of the subsets $A$ of $\mathbb{R}$ such that $(\mathbb{R}, \tau_{A})$ is quasi-metrizable; see Theorem~\ref{t4545}. \begin{theorem}\label{t4545} For any subset $A\subset \mathbb{R}$, the $H$-space $(\mathbb{R}, \tau_{A})$ is quasi-metrizable if and only if $\mathbb{R}\setminus A$ is an $F_{\sigma}$-set in $(\mathbb{R}, \tau_{S^{-}})$, where $(\mathbb{R}, \tau_{S^{-}})$ is the set of real numbers with the topology generated by the base $\{(a, b]: a, b\in\mathbb{R}, a<b\}$. \end{theorem} Now, we can give a negative answer to Question~\ref{q11}. \begin{example}\label{e111} There exists a subset $A$ of $\mathbb{R}$ such that $(\mathbb{R}, \tau_{A})$ is not quasi-metrizable.
\end{example} \begin{proof} Indeed, let $A=\mathbb{Q}$ be the set of rational numbers. Assume that $(\mathbb{R}, \tau_{A})$ is quasi-metrizable. By Theorem~\ref{t4545}, $\mathbb{R}\setminus A$ is an $F_{\sigma}$-set in $(\mathbb{R}, \tau_{S^{-}})$, and then $\mathbb{Q}$ is a $G_{\delta}$-set in $(\mathbb{R}, \tau_{S^{-}})$. However, it follows from \cite[Theorem 3.4]{D.K. Burke1} that $(\mathbb{R}, \tau_{S^{-}})$ does not have a dense metrizable $G_{\delta}$-subspace, which is a contradiction. \end{proof} Obviously, if $\mathbb{R}\setminus A$ is an $F_{\sigma}$-set in $(\mathbb{R}, \tau_{A})$, then $\mathbb{R}\setminus A$ is an $F_{\sigma}$-set in $(\mathbb{R}, \tau_{S^{-}})$, hence we have the following corollary. \begin{corollary}\label{cc0000} If $\mathbb{R}\setminus A$ is an $F_{\sigma}$-set in $(\mathbb{R}, \tau_{A})$, then $(\mathbb{R}, \tau_{A})$ is quasi-metrizable. \end{corollary} We now close this section with a result about a generalized metric property of $H$-spaces. \begin{theorem}\label{t111} For an arbitrary $A\subset\mathbb{R}$, the following statements are equivalent: \begin{enumerate} \item $(\mathbb{R}, \tau_{A})$ is metrizable; \item $(\mathbb{R}, \tau_{A})$ is a $\beta$-space; \item $\mathbb{R}\setminus A$ is countable. \end{enumerate} \end{theorem} \begin{proof} Obviously, it suffices to prove (2) $\Rightarrow$ (3). Assume that $(\mathbb{R}, \tau_{A})$ is a $\beta$-space. Since $(\mathbb{R}, \tau_{A})$ is a paracompact submetrizable space, it follows from \cite[Theorem 7.8 (ii)]{Gr} that $(\mathbb{R}, \tau_{A})$ is semi-stratifiable. By Proposition~\ref{p0} and \cite[Theorems 5.16 and 5.21]{Gr}, $(\mathbb{R}, \tau_{A})$ is a stratifiable space, hence $(\mathbb{R}, \tau_{A})$ is a $\sigma$-space by \cite[Theorem 5.9]{Gr}. Then $(\mathbb{R}, \tau_{A})$ has a countable network since $(\mathbb{R}, \tau_{A})$ is separable, hence $\mathbb{R}\setminus A$ has a countable network. Therefore, it follows from Proposition~\ref{p1} that $\mathbb{R}\setminus A$ must be countable.
\end{proof} \section{Open questions} It is well known that $(\mathbb{R}, \tau_{E})\times (\mathbb{R}, \tau_{E})$ is Lindel\"{o}f and $(\mathbb{R}, \tau_{S})\times (\mathbb{R}, \tau_{S})$ is not Lindel\"{o}f, hence it is natural to pose the following question. \begin{question}\label{q0} For an arbitrary subset $A$ of $\mathbb{R}$, are the following statements equivalent? \begin{enumerate} \item $(\mathbb{R}, \tau_{A})\times (\mathbb{R}, \tau_{A})$ is Lindel\"{o}f; \item $(\mathbb{R}, \tau_{A})\times (\mathbb{R}, \tau_{A})$ is normal; \item $(\mathbb{R}, \tau_{A})$ is metrizable. \end{enumerate} \end{question} The following example gives a negative answer to Question~\ref{q0} under the assumption of CH. \begin{example} Under the assumption of CH, there exists a subset $A\subset\mathbb{R}$ such that $\mathbb{R}\setminus A$ is uncountable and $(\mathbb{R}, \tau_{A})\times (\mathbb{R}, \tau_{A})$ is Lindel\"{o}f. \end{example} \begin{proof} By \cite[Theorem 3.4]{D.K. Burke}, there exists an uncountable subset $Y\subset \mathbb{S}$ such that $Y^{2}$ is Lindel\"{o}f. Put $A=\mathbb{R}\setminus Y$. Then $(\mathbb{R}, \tau_{A})\times (\mathbb{R}, \tau_{A})$ is Lindel\"{o}f. Indeed, it is obvious that $$(\mathbb{R}, \tau_{A})\times (\mathbb{R}, \tau_{A})=(A\times A)\cup (A\times Y)\cup(Y\times A)\cup(Y\times Y).$$ Since $A$ is a separable metrizable space, the subspaces $A\times A$, $A\times Y$ and $Y\times A$ are Lindel\"{o}f, and $Y\times Y$ is Lindel\"{o}f by the choice of $Y$. Therefore, $(\mathbb{R}, \tau_{A})\times (\mathbb{R}, \tau_{A})$ is Lindel\"{o}f. \end{proof} By Theorem~\ref{t6}, we have the following question. \begin{question} If $(\mathbb{R}, \tau_{A})$ is $\sigma$-compact, is $\mathbb{R}\setminus A$ a scattered subspace? \end{question} The following question was posed by Boaz Tsaban. \begin{question} For which subsets $A\subset\mathbb{R}$ is the space $(\mathbb{R}, \tau_{A})$ Menger (Hurewicz)? \end{question} \end{document}
\begin{document} \author{Dong Quan Vu} \email{[email protected]} \affiliation{ \institution{Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG} \country{France} } \author{Patrick Loiseau} \email{[email protected]} \affiliation{ \institution{Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LIG} \country{France} } \renewcommand{\shortauthors}{Vu and Loiseau} \keywords{exact and approximate computation of equilibria; Blotto games; all-pay auctions} \maketitle \section{Introduction} \label{sec:Intro} The \emph{Colonel Blotto game}, first introduced by \cite{borel1921}, is a famous resource allocation game. Two players A and B compete over $n$ battlefields by simultaneously distributing resources such that the sum of each player's allocations does not exceed her budget (the so-called \emph{budget constraint}). Each battlefield has a certain value. In each battlefield, the player who has the higher allocation wins and gains the whole battlefield's value while the other player gains zero; this is the \emph{winner-determination rule}. The total payoff of each player is the sum of gains from all the battlefields. The Colonel Blotto game captures a large range of practical situations. Its original application is \emph{military logistics} \cite{gross1950,grosswagner}, where resources correspond to soldiers, equipment or weapons; but it is now also used to model \emph{security problems} where battlefields are security targets and resources are security forces or effort \cite{chia2012,schwartz2014}, \emph{political competitions} where players are political parties who distribute their time or money resources to compete over voters or states \cite{kovenock2012,myerson1993incentives,roberson2006}, \emph{competitions in online advertising} \cite{masucci2014,masucci2015}, or \emph{radio-spectrum management systems} \cite{hajimirsaadeghi2017dynamic}.
In many of these applications, however, the winner-determination rule of the Colonel Blotto game is too restrictive to capture practical situations in which a player has an advantage over some battlefields; we refer to this as \emph{favoritism}. There are two basic types of favoritism: \begin{trivlist} \item[\emph{First},] players may have resources committed to battlefields before the game begins---we refer to them as \emph{pre-allocations}. These pre-allocations are then added to the allocations to determine the winner in each battlefield. In military logistics for instance, before the start of military operations, it is often the case that one side (or both) has already installed military forces on some battlefields. Pre-allocations can also be found in R\&D contests, where companies can use the technologies they currently possess to gain an advantage while competing to develop new products/technologies. In political contests, it is often the case that voters have an a priori position that may be interpreted as a pre-allocation of the corresponding party (e.g., Californian voters are in majority pro-Democrats). \item[\emph{Second},] the \emph{resources effectiveness} may not be the same for both players, and may vary across battlefields. For example, in airport surveillance, it often requires several agents to patrol a security target while a single terrorist may suffice for a successful attack. In military logistics, the effectiveness of resources (equipment, soldiers, etc.) may differ between players and vary according to the landscapes/features of the battlefields. In R\&D contests, one unit of resources (researchers/machines) of a company often has different strengths and weaknesses than that of other companies. \end{trivlist} In this work, we propose and analyze an extension of the Colonel Blotto game with a winner-determination rule capturing pre-allocations and asymmetric effectiveness of resources.
Specifically, we consider the following rule: in battlefield $i \in \{1,\cdots, n\}$, if the allocations of Players A and B are $x^A$ and $x^B$ respectively, Player A wins if $x^A > q_i \cdot x^B - p_i$ and Player B wins otherwise (we will specify the tie-breaking rule below). Here, $p_i \in \mathbb{R}$ and $q_i >0$ are given parameters known to both players that represent pre-allocations and asymmetric effectiveness of resources, respectively. We call this game the \emph{Colonel Blotto game with favoritism} and denote it by \FCB throughout the paper. We focus on characterizing and computing Nash equilibria of the \FCB game. Completely characterizing and computing a Nash equilibrium of the Colonel Blotto game, even without favoritism, is a notoriously challenging problem (see related works below). A standard approach consists in first identifying candidate equilibrium marginal distributions for each battlefield's allocation---called the \emph{optimal univariate distributions}. This is often done by looking for an equivalence to the related problem of all-pay auctions---the game where two bidders secretly bid on a common item and the higher bidder wins the item and gains its value, but both players pay their bids. Then, constructing an equilibrium based on these univariate distributions can be done exactly for some particular parameter configurations (see related works below). In cases where this is not possible, an alternative solution is to look for \emph{approximate equilibria} with well-controlled approximation errors \cite{vu2019approximate}. Several works also consider a relaxation of the game with budget constraints in expectation only---which is called the \emph{General Lotto game}---as a relevant model for certain applications \cite{myerson1993incentives,kovenock2020generalizations}.
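For concreteness, the winner-determination rule with favoritism described above can be sketched in a few lines of code. This is an illustration only: the function name is ours, and resolving ties in favor of Player B (implied here by the strict inequality) is an assumption, since the tie-breaking rule is specified separately in the text.

```python
def battlefield_gain_a(x_a, x_b, w, p, q):
    """Player A's gain on a single battlefield under favoritism.

    Player A wins the value w iff x_a > q * x_b - p; otherwise
    Player B wins (ties resolved in favor of B here, an assumption).
    Setting p = 0 and q = 1 recovers the classical Blotto rule.
    """
    return w if x_a > q * x_b - p else 0.0
```

With $p_i=0$ and $q_i=1$ this reduces to the classical winner-determination rule; a positive $p_i$ acts as a head start for Player A, while $q_i>1$ makes Player B's resources more effective.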
In this paper, we analyze the Colonel Blotto game with favoritism by following a similar pattern and make four main contributions as follows: \begin{enumerate} \item We first consider the model of the \emph{all-pay auction with favoritism} ($\FAPA$), where the rule determining the winning bidder is shifted by an additive and a multiplicative parameter. We completely characterize the equilibria in general parameter configurations (with asymmetric item evaluations and no restriction on which bidder has which kind of favoritism). While the \FAPA game was studied in prior works, this result fills a gap in the literature. \item We prove the existence of a set of optimal univariate distributions of the \FCB game and give a construction thereof. The main challenge is that this is equivalent to finding a fixed point of a complex two-dimensional function for which standard existence results fail to apply. We overcome this obstacle by drawing tools from topology and carefully tailoring them to our particular problem. Based on this core result, we deduce the equilibrium of the \FCB game for particular cases for which it is known how to construct joint distributions from the optimal univariate distributions. For other cases we show that, by applying the rescaling technique of \cite{vu2019approximate}, we can obtain an approximate equilibrium of the $\FCB$ game with negligible approximation error when the number of battlefields is~large. Finally, for any parameter configuration, we immediately obtain the equilibrium of the relaxed \emph{General Lotto game with favoritism}, in which one can simply sample independently on each battlefield. \item We propose an algorithm that efficiently finds an approximation of the proposed optimal univariate distributions with arbitrarily small error. This improves the scalability of our results upon the naive solution for exact computation (which is exponential in the number of battlefields).
Our algorithm is based on approximately solving the two-dimensional fixed-point problem by a dichotomy procedure using a generalization of the intermediate value theorem with the notion of the winding number of parametric curves. \item We conduct a number of numerical experiments to analyze and illustrate the effect of favoritism on the players' payoffs at equilibrium of the \FCB~game (and of the \FGL game). \end{enumerate} \paragraph{\textbf{Related Work}} There is a rich literature on characterizing equilibria of the (classical) Colonel Blotto game. The common approach is to look for a set of \emph{optimal univariate distributions} of the game, and then construct $n$-variate joint distributions \emph{whose realizations satisfy the budget constraints} (in other words, their supports are subsets of the (mixed) strategy sets). These joint distributions are equilibria of the game. Constructing such joint distributions, however, is challenging, and equilibria have only been successfully characterized in several restricted instances: Colonel Blotto games where players have symmetric budgets \cite{borel1938,gross1950,grosswagner,laslier2002,thomas2017,Boix-Adsera20a}, Colonel Blotto games with asymmetric budgets and two battlefields \cite{macdonell2015} or with any number of battlefields but under assumptions on the homogeneity of battlefields' values \cite{roberson2006,schwartz2014}. The Colonel Blotto game still lacks a complete characterization of equilibrium in its general parameter configuration, i.e., with asymmetric budgets and heterogeneous battlefields (see \cite{kovenock2012conflicts} for a survey). An extension of the Colonel Blotto game is studied in \cite{kovenock2020generalizations}, where the two players can have different evaluations of the battlefields. The authors find a set of optimal univariate distributions based on a solution of a fixed-point equation, but they can construct the $n$-variate equilibrium distribution only in restricted settings.
Our work follows a similar pattern in spirit, but the fixed-point equation supporting the optimal univariate distributions is different and harder to solve because it is two-dimensional. While studying the Colonel Blotto game, many works also consider the corresponding General Lotto game \cite{kovenock2020generalizations,myerson1993incentives}, in which budget constraints are relaxed to hold in expectation. There, an equilibrium can be directly obtained from a set of optimal univariate distributions by independently drawing on each battlefield. In recent work, \cite{vu2019approximate} propose an alternative approach to find solutions of the Colonel Blotto game: it consists of independently drawing on each battlefield and then rescaling to meet the budget constraint with probability one. The authors show that this rescaling strategy (termed the independently uniform ($\IU$) strategy) yields an approximate equilibrium with an error decreasing with the number of battlefields. The problem of constructing sets of optimal univariate distributions in the Colonel Blotto game can be converted into the problem of searching for an equilibrium of an all-pay auction. The state of the art in characterizing equilibria of all-pay auctions is as follows: equilibria of the (classical) all-pay auctions were completely characterized by \cite{baye1994,hillman1989} in games with any number of~bidders. The all-pay auction with favoritism (also referred to as all-pay auctions with head starts and handicaps, or all-pay auctions with incumbency advantages) was studied by \cite{konrad2002investment}, but its equilibria were explicitly characterized only in cases where players assess the item with the same value and where both kinds of favoritism are in favor of one player. Therefore, the literature still lacks an explicit analysis of equilibria with a general configuration of parameters.
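The draw-then-rescale idea behind the $\IU$ strategy of \cite{vu2019approximate} can be sketched as follows. This is a sketch under stated assumptions: the uniform marginals are used only for illustration (the optimal univariate distributions of the actual game are different), and the function name is ours.

```python
import random

def draw_then_rescale(budget, n, rng=random.random):
    """Sketch of the draw-then-rescale idea: sample one nonnegative
    number per battlefield independently (uniform on [0, 1] here,
    an illustrative assumption), then rescale the vector so that it
    sums exactly to the budget, i.e., the resulting allocation
    satisfies the budget constraint with probability one."""
    raw = [rng() for _ in range(n)]
    total = sum(raw)
    if total == 0:  # degenerate draw: fall back to an even split
        return [budget / n] * n
    return [budget * r / total for r in raw]
```

The rescaling step perturbs the marginal distribution of each coordinate, which is precisely why this construction yields an approximate rather than an exact equilibrium, with an error that vanishes as $n$ grows.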
Note also that the literature on the \FAPA game (and all-pay auctions) goes beyond the study of their equilibria, see e.g., \cite{fu2006theory,li2012contests,pastine2012incumbency,siegel2009all,siegel2014asymmetric} and surveys by \cite{corchon2007theory,fu2019contests,konrad2009}. Our work is the first to introduce the Colonel Blotto game with both pre-allocations and asymmetric effectiveness of players' resources. The closest work is the recent one by \cite{chandan2020showing}, where partial results are obtained with pre-allocations only, and from a very different perspective: the authors study a three-stage Colonel Blotto game that allows players to pre-allocate their resources; several conditions where pre-allocating is advantageous are indicated and this result is extended to three-player Colonel Blotto games. Note finally that there is also a growing literature on the discrete Colonel Blotto game, where players' allocations must be integers, see \eg \cite{Ahmadinejad16a,Behnezhad19a,Behnezhad17a,behnezhad2018battlefields,hart2008,hortala2012,vu18a,vu2020}; but this literature does not consider favoritism and its results are based on a very different set of techniques in comparison to that of the game models considered in this work. \paragraph{\textbf{Notation}} Throughout the paper, we use bold symbols (e.g., $\boldsymbol{x}$) to denote vectors and subscript indices to denote their elements (e.g., $\boldsymbol{x} = (x_1, x_2, \ldots , x_n)$). We also use the notation $\R^{n}_{>0}:= \braces*{ \boldsymbol{x} \in \R^{n}: x_i >0, \forall i}$, \mbox{$\R^{n}_{\ge0}:= \braces*{ \boldsymbol{x} \in \R^{n}: x_i \ge 0, \forall i}$} and $[n] =\{1,2, \ldots , n\}$, for any $n \in \mathbb{N} \backslash \{0\}$. We use the notation $\play$ to denote a player and $-\play$ to indicate her opponent. We also use the standard asymptotic notation $\bigoh$ and its variant $\tilde{\bigoh}$ which ignores logarithmic terms.
Finally, we use $\prob(E)$ to denote the probability of an event $E$ and $\Ex [X] $ to denote the expectation of a random variable~$X$. \paragraph{\textbf{Outline of the Paper}} The remainder of this paper is organized as follows. \cref{sec:Form} introduces the formulations of the {$\FCB$}, \FGL and \FAPA games. We present our complete equilibrium characterization of the \FAPA game in \cref{sec:EquiAPA}. Using this result, in \cref{sec:OptUni_GRCBC}, we prove the existence and show the construction of a set of optimal univariate distributions of the $\FCB$ game. In \cref{sec:Corollary_results}, we derive several corollary results, concerning the equilibria and approximate equilibria of the \FCB and \FGL games, from these optimal univariate distributions. In \cref{sec:heuristic}, we then present an algorithm that efficiently finds an approximation of the proposed optimal univariate distributions with arbitrarily small errors. In \cref{sec:NumExp}, we conduct several numerical experiments illustrating the effect of the two types of favoritism in the \FCB and \FGL~games. Finally, we give a concluding discussion in \cref{sec:conclu}. \section{Game Formulations} \label{sec:Form} In this section, we define the Colonel Blotto game with favoritism ($\FCB$), and two related models: the General Lotto game with favoritism ($\FGL$) and the all-pay auction with favoritism ($\FAPA$). \subsection{Colonel Blotto and General Lotto Games with Favoritism} \label{sec:FormCB} The Colonel Blotto game with favoritism (\FCB game) is a one-shot complete-information game between two Players A and B. Each player has a fixed \emph{budget} of resources, denoted $X^A$ and $X^B$ respectively; without loss of generality, we assume that \mbox{$0< X^A \le X^B$}. There are $n$ battlefields ($n \ge 3$). Each battlefield $i \in [n]$ has a value $w_i>0$ and is embedded with two additional parameters: $p_i \in \mathbb{R}$ and $q_i>0$.
Knowing these parameters, players compete over the $n$ battlefields by simultaneously allocating their resources. The summation of resources that a player allocates to the battlefields cannot exceed her budget; i.e., the {pure strategy} set of Player ${\play} \in \left\{ A, B \right\}$ is $S^{\play} = \left\{ \boldsymbol{x}^{\play} \in \mathbb{R}_ {\ge 0} ^n: \sum\nolimits_{i = 1}^n {x_i^{\play} \le {X^{\play}}} \right\}$. If Players A and B respectively allocate $x^A_i$ and $x^B_i$ to battlefield $i$, the winner in this battlefield is determined by the following rule: if $x^A_i > q_i x^B_i -p_i$, Player A wins and gains the value $w_i$; reversely, if $x^A_i < q_i x^B_i -p_i$, Player B wins and gains $w_i$; finally, if a tie occurs, i.e., $x^A_i = q_i x^B_i -p_i$, Player A gains $\alpha w_i$ and Player B gains $(1-\alpha) w_i$, where $\alpha \in [0,1]$ is a given parameter.\footnote{We call $\alpha$ the tie-breaking parameter. It can be understood as if we randomly break the tie such that Player A wins battlefield $i$ with probability $\alpha$ while Player B wins it with probability $(1-\alpha)$. This includes all the tie-breaking rules considered in classical CB games found in the literature.} The payoff of each player in the game is the summation of gains they obtain from all the battlefields. Formally, we have the following definition. \begin{definition}[The \FCB game] \label{def:FCB} The Colonel Blotto game with favoritism (with $n$ battlefields), denoted $\FCBn$, is the game with the description above; in particular, when players A and B play the pure strategies \mbox{$\boldsymbol{x}^A = \parens*{x^A_i}_{i \in [n]}\! \in \!
S^A$} and \mbox{$\boldsymbol{x}^B = \parens*{x^B_i}_{i \in [n]} \in S^B $} respectively, their payoffs are defined as: \begin{equation*} \mathrm{\Pi}_{\FCBn}^A \left( \boldsymbol{x}^A, \boldsymbol{x}^B \right) = \sum \limits_{i \in [n]} {w_i \be \left( x^A_i, q_i x^B_i \!- \!p_i \right) } \textrm{ and } \mathrm{\Pi}_{\FCBn}^B \left( \boldsymbol{x}^A, \boldsymbol{x}^B \right) =\sum \limits_{i \in [n]} {w_i \left[ 1\!-\! \be \left( x^A_i, q_i x^B_i\! - \!p_i \right) \right] }. \end{equation*} Here, $\be: \mathbb{R}^2_{\ge 0} \rightarrow [0,1]$, termed as the Blotto function, is defined as follows: $\be\left( {x,y} \right)=1$ if $x>y$, $\be\left( {x,y} \right)=\alpha$ if $x=y$ and $\be\left( {x,y} \right)=0$ if $x<y$. \end{definition} A \emph{mixed strategy} of a player, say $\play \in \braces*{A,B}$, in $\FCBn$ is an $n$-variate distribution such that any pure strategy drawn from it is an $n$-tuple satisfying the corresponding budget constraint of player~$\play$. We reuse the notations $\mathrm{\Pi}_{\FCBn}^A \left(\sigma_A, \sigma_B \right)$ and $\mathrm{\Pi}_{\FCBn}^B \left(\sigma_A, \sigma_B \right)$ to denote the payoffs of Players A and B when they play the mixed strategies $\sigma_A$ and~$\sigma_B$ respectively. Note that to lighten the notation $\FCBn$, we include only the subscript $n$ (the number of battlefields) and omit other parameters involved in the definition of the game (including $X^A, X^B, \alpha, w_i, p_i, q_i, \forall i \in [n]$). The game $\FCBn$ extends the classical Colonel Blotto game by including the favoritism that a player may have in battlefield $i$ through the parameters $p_i$ and $q_i$; which can be interpreted as~follows: \begin{trivlist} \item[$(i)$] $p_i$ represents the difference between pre-allocations that players have at battlefield $i$ before the game begins (note that pre-allocations are not included in the players' budget $X^A$ and $X^B$). 
If $p_i >0$, Player A's pre-allocation at battlefield $i$ is larger; if $p_i<0$, Player B has a larger pre-allocation. \item[$(ii)$] $q_i$ represents the asymmetry in the effectiveness of players' resources (\emph{not} including the pre-allocations). Specifically, in battlefield $i$, each unit of Player~B's resource is worth $q_i$ units of Player A's resource. If $0< q_i < 1$, Player A's resource is more effective than that of Player B; reversely, if $ q_i > 1$, Player B's resource is more effective. \end{trivlist} Note that if $p_i =0$ and $q_i = 1, \forall i \in [n]$, the game $\FCBn$ coincides with the classical CB game. Unlike many works in the literature on the classical CB game, in the $\FCBn$ game defined above, we do not make assumptions on the symmetry in players' budgets or on the homogeneity of the battlefields'~values. For the sake of conciseness, in the remainder, we consider \FCB under the following assumptions: \begin{assumption} \label{assum:1} $\sum_{i \in [n]} {\parens*{q_i X^B - p_i}} \ge X^A$ and $\sum_{i \in [n]} { \parens*{X^A + p_i} /q_i } \ge X^B $. \end{assumption} \begin{assumption}\label{assum:2} For any $i \in [n]$, $\parens*{q_i X^B - p_i} \ge 0$ and $\parens*{X^A + p_i} /q_i \ge 0$. \end{assumption} These assumptions are used simply to exclude trivial cases where one player has too strong a favoritism in one (or all) battlefields. Indeed, if \cref{assum:1} is violated, there exist trivial pure equilibria.\footnote{If $\sum_{i \in [n]} {\parens*{q_i X^B - p_i}} < X^A$, by allocating $q_i X^B - p_i + \varepsilon $ to battlefield $i$ ($\varepsilon$ is an arbitrarily small number), Player A guarantees to win all battlefields regardless of Player B's allocations. If $\sum_{i \in [n]} { \parens*{X^A + p_i} /q_i } < X^B $, by allocating $\parens*{X^A + p_i} /q_i + \varepsilon$ to battlefield $i$, Player B guarantees to win all battlefields.} On the other hand, if in battlefield $i^* \in [n]$, $\parens*{q_{i^*} X^B - p_{i^*}} < 0$ (resp.
$\parens*{X^A + p_{i^*}} /q_{i^*} < 0$), then by allocating 0, Player A (resp. Player B) guarantees to win this battlefield regardless of her opponent's allocation. Therefore, if $\FCBn$ has a battlefield $i^*$ violating \cref{assum:2}, an equilibrium of $\FCBn$ is simply the strategies where both players allocate 0 to battlefield $i^*$ and play an equilibrium of the game $\mathcal{CB}^F_{n-1}$ having the same setting as $\FCBn$ but excluding battlefield $i^*$. Note that analogous assumptions (when $p_i = 0$ and $q_i = 1, \forall i$) are found in other works considering the classical CB game (see \eg \cite{roberson2006} and Figure 1 in \cite{kovenock2020generalizations}). Next, similar to the definition of the General Lotto game obtained by relaxing the classical CB game, for each instance of the \FCB game, we define an instance of the General Lotto game with favoritism ($\FGL$) where the budget constraint is required to hold only in expectation. Formally: \begin{definition}[The \FGL game] \label{def:FGL} The General Lotto game with favoritism (with $n$ battlefields), denoted $\FGLn$, is the game with the same setting and parameters as the $\FCBn$ game, but where a mixed strategy of Player $\play \in \{A,B\}$ in $\FGLn$ is an $n$-variate distribution with marginal distributions $(F^{\play}_i)_{i \in [n]}$ such that \mbox{$\sum_{i \in [n]} {\Ex_{x_i \sim F^{\play}_i }[ x_i ] } \le X^{\play}$}. \end{definition} We finally define the notion of Optimal Univariate Distributions in the $\FCB$ game. This notion is of great importance in studying equilibria of the \FCB game since, intuitively, these distributions are the candidates for the marginals of the equilibria.
Formally: \begin{definition}[Optimal Univariate Distributions (OUDs)] \label{def:OUD} $\left\{F^A_i, F^B_i: i \in [n] \right\}$ is a set of \emph{OUDs} of the game $\FCBn$ if the following conditions are satisfied: \begin{enumerate}[label=(C.\arabic*),ref=\textit{(C.\arabic*)}] \item the supports of $F^A_i, F^B_i$ are subsets of $\mathbb{R}_{\ge 0}$, \label{condi:OUD1} \item \mbox{$\sum_{i \in [n]} {\Ex_{x_i \sim F^\play_i }[ x_i ] } \le X^{\play}$}, $\play \in \{A,B\}$, \label{condi:OUD2} \item if Player $\play$ draws her allocation to battlefield $i$ from $F^\play_i$, $\forall i \in [n]$, Player $-\play$ has no pure strategy inducing a better payoff than when she draws her allocation to battlefield $i$ from $F^{-\play}_i$, $\forall i \in [n]$.\label{condi:OUD3} \end{enumerate} \end{definition} \subsection{All-pay Auctions with Favoritism} \label{sec:FormAPA} All-pay auctions with favoritism ($\FAPA$) have been studied in the literature under different sets of assumptions (see \cref{sec:Intro}). For the sake of coherence, in this section, we re-define the formulation of the \FAPA game using our notation as follows: In the \FAPA game, two players, A and B, compete for a common item that is evaluated by each player at a value, denoted $u^A$ and $u^B$ respectively ($u^A, u^B > 0$).\footnote{The case where either $u^A =0$ or $u^B =0$ is trivial (there exist trivial pure equilibria) and thus, is omitted.} The item is embedded with two additional parameters: $p \in \mathbb{R}$ and $q>0$. Players simultaneously submit their \emph{bids} $x^A, x^B \ge 0$ (unlike in the $\FCB$ game, players can bid as large as they want in $\FAPA$). If $x^A > q x^B - p$, Player A wins the item and gains the value $u^A$; if $x^A < q x^B - p$, Player B wins and gains the value $u^B$; and in case of a tie, i.e., $x^A = q x^B -p$, Player A gains $\alpha u^A$ and Player B gains $(1-\alpha) u^B$ ($\alpha \in [0,1]$). Finally, \emph{both players} pay their bids.
\begin{definition}[The \FAPA game] \label{def:APA-F} \emph{\FAPA} is the game with the above description; in particular, when the players A and B bid $x^A$ and $x^B$ respectively, their payoffs are \mbox{$\mathrm{\Pi}^A_{\FAPA} \left( x^A, x^B \right) \!= \!u^A \be \left( x^A, q x^B \!-\! p \right) \!-\! x^A$} and \mbox{$\mathrm{\Pi}^B_{\FAPA} \left( x^A, x^B \right) \!= \!u^B [1-\be \left( x^A, q x^B \!-\! p \right)] \!-\! x^B$}. Here, the function $\be$ is defined in \cref{def:FCB}. \end{definition} The formulation of the \FAPA game presented above differs from classical all-pay auctions by the parameters $p$ and $q$. If $p>0$, Player A has an \emph{additive advantage} in competing to win the item and if $p<0$, Player B has this favoritism; likewise, when $0<q<1$, Player A has a \emph{multiplicative favoritism} to compete for the item and when $q>1$, it is in favor of Player B. Our formulation of \FAPA is more general than the models (with two players) considered by previous works in the literature. If $p=0$, $q=1$ and $\alpha=1/2$, the $\FAPA$ game coincides with the classic two-bidder (first-price) all-pay auction (e.g., in \cite{baye1994,hillman1989}). If $u^A = u^B$, $\alpha = 1/2$, $p>0$ and $0<q \le 1$ (i.e., Player A has both advantages), $\FAPA$ coincides with the framework of all-pay contests with incumbency advantages considered in \cite{konrad2002investment}. Moreover, we also define \FAPA with a generalization of the tie-breaking rule (with the parameter $\alpha$ involved in the function $\be$) covering other tie-breaking rules considered in previous works. Finally, our definition of $\FAPA$ and its equilibria characterization (see \cref{sec:EquiAPA}) can also be extended to cases involving more than two players/bidders; in this work, we only analyze the two-player $\FAPA$ since it relates directly to the \FCB game, which is our main~focus.
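To make the payoff rule of the \FAPA game concrete, the following Python sketch implements the Blotto function $\be$ and the two payoffs from the definition above; all numerical parameter values in the example call are illustrative and not taken from the paper.

```python
# Sketch of the FAPA payoff rule; parameter values below are illustrative.

def blotto(x, y, alpha=0.5):
    """Blotto function: 1 if x > y, alpha on a tie, 0 if x < y."""
    if x > y:
        return 1.0
    if x < y:
        return 0.0
    return alpha

def fapa_payoffs(xA, xB, uA, uB, p, q, alpha=0.5):
    """Payoffs (Pi_A, Pi_B) when Players A and B bid xA and xB."""
    b = blotto(xA, q * xB - p, alpha)
    return uA * b - xA, uB * (1.0 - b) - xB

# Illustrative instance: uA = 2, uB = 3, p = 1, q = 1. Player A wins with a
# bid of 1.5 against B's bid of 2, since 1.5 > 1*2 - 1 = 1; both pay their bids.
piA, piB = fapa_payoffs(1.5, 2.0, uA=2.0, uB=3.0, p=1.0, q=1.0)  # (0.5, -2.0)
```

Note that the additive favoritism $p$ and the multiplicative favoritism $q$ enter only through the threshold $q x^B - p$ that Player A's bid must beat, mirroring the definition.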
\section{Equilibria of All-pay Auctions with Favoritism} \label{sec:EquiAPA} In this section, we characterize the equilibrium of the $\FAPA$ game. The closed-form expression of the equilibrium depends on the relation between the parameters $u^A, u^B,p$ and $q$. We present two groups of results corresponding to two cases: $p \ge 0$ (\cref{sec:APAPos}) and $p<0$ (\cref{sec:APANeg}). \subsection{Equilibria of $\FAPA$ with $p \ge 0$.} \label{sec:APAPos} We first focus on the case where $p \ge 0$; in other words, Player A has an additive advantage. In equilibrium, players choose their bids according to uniform-type distributions which depend on the relation between $u^A, u^B, p$ and $q$. Particularly, we obtain the following theorem: \begin{theorem} \label{theo:positive} In the $\FAPA$ game where $p \ge 0$, we have the following results: \begin{itemize} \item[$(i)$] If $q u^B - p \le 0$, there exists a unique pure equilibrium where players' bids are $x^A = x^B = 0$ and their equilibrium payoffs are $\mathrm{\Pi}^A_{\FAPA} = u^A$ and $\mathrm{\Pi}^B_{\FAPA} = 0$ respectively. \item[($ii$)] If $0 < q u^B - p \le u^A$, there exists no pure equilibrium; there is a unique mixed equilibrium where Player A (resp. Player B) draws her bid from the distribution $\Aiip$ (resp. $\Biip$) defined as follows. \begin{align} & \Aiip(x) = \left\{ \begin{array}{l} \frac{p}{q u^B} + \frac{x}{q u^B}, \forall x \in \left[ 0, q u^B \!-\! p \right], \\ 1 \qquad \qquad, \forall x > q u^B \!-\!p, \end{array} \right. \textrm{and} & \Biip(x) = \left\{ \begin{array}{l} 1 - \frac{q u^B}{u^A} + \frac{p}{u^A}, \forall x \in \left[ 0, \frac{p}{q} \right) \\ 1 - \frac{q u^B}{u^A} + \frac{q \cdot x}{u^A}, \forall x \in \left[\frac{p}{q}, u^B \right], \\ 1 \qquad \qquad \qquad, \forall x > u^B. \end{array} \right. \label{eq:A+Def} \end{align} In this mixed equilibrium, players' payoffs are $\mathrm{\Pi}^A_{\FAPA} = u^A - q u^B + p$ and $\mathrm{\Pi}^B_{\FAPA} =0$.
\item[$(iii)$] If $q u^B - p > u^A$, there exists no pure equilibrium; there is a unique mixed equilibrium where Player A (resp. Player B) draws her bid from the distribution $\Aiiip$ (resp. $\Biiip$) defined as follows. \begin{align} & \Aiiip(x) = \left\{ \begin{array}{l} 1- \frac{u^A}{q u^B} + \frac{x}{q u^B}, \forall x \in \left[ 0, u^A \right], \\ 1 \qquad \qquad \qquad, \forall x > u^A, \end{array} \right. \textrm{ and } & \Biiip(x) = \left\{ \begin{array}{l} 0 \qquad \qquad, \forall x \in \left[ 0, \frac{p}{q} \right) \\ - \frac{p}{u^A} + \frac{q \cdot x}{u^A}, \forall x \in \left[\frac{p}{q}, \frac{u^A + p}{q} \right], \\ 1 \qquad \qquad, \forall x > \frac{u^A + p}{q}. \end{array} \right. \label{eq:bFA+Def} \end{align} In this mixed equilibrium, players' payoffs are $\mathrm{\Pi}^A_{\FAPA}= 0$ and $\mathrm{\Pi}^B_{\FAPA} = u^B - (u^A+p)/q$. \end{itemize} \end{theorem} A formal proof of \cref{theo:positive} can be found in \ref{appen:proofAPAPos}; here we discuss an intuitive interpretation of the result. First, note that no player has an incentive to bid more than the value at which she assesses the item, otherwise she is guaranteed a negative payoff. Then, the condition in Result~$(i)$ of \cref{theo:positive} indicates that Player A has too large advantages such that she always wins regardless of her own bid and Player B's bid, hence it is optimal for both players to bid zero (see the proof for the case $q u^B - p =0$). The condition in Result~$(ii)$ of \cref{theo:positive} gives Player A a favorable position: she can guarantee to win with a non-negative payoff by bidding $u^A$ knowing that Player B will not bid more than $u^B$; reversely, the condition in Result~$(iii)$ implies that Player B has a favorable position: by bidding $u^B$, she guarantees to win with a non-negative payoff since Player A will not bid more than $u^A$. 
Importantly, in Result~$(ii)$ of this theorem, as long as the condition $0 < q u^B - p \le u^A$ is satisfied, when $p$ increases (and/or $q$ decreases), the equilibrium payoff of Player A increases. This is in coherence with the intuition that when Player A has larger advantages, she can gain more. However, if $p$ is too large (and/or $q$ is too small) such that the condition in Result~$(i)$ of \cref{theo:positive} is satisfied, Player B gives up totally and Player A gains a fixed payoff ($u^A$) even if $p$ keeps increasing (and/or $q$ keeps decreasing). A similar intuition can be deduced for Player B and~Result~$(iii)$. \begin{figure*} \caption{The mixed equilibrium of the $\FAPA$ with $p\ge 0$.} \label{fig:APA_pos} \end{figure*} We now turn our focus to the distributions $\Aiip, \Biip,\Aiiip$ and $\Biiip$ in Results~$(ii)$ and~$(iii)$ of \cref{theo:positive}. First, note that the superscript ${}^+$ in the notations of these distributions simply refers to the condition $p \ge 0$ being considered (to distinguish it with the case where $p < 0$ presented below) while the subscript index ($2$ or $3$) indicates that these distributions correspond to Results~$(ii)$ or~$(iii)$. These distributions all relate to uniform distributions: $\Aiip$ is the distribution placing a non-negative probability mass at zero, and then uniformly distributing the remaining mass on the range $\left(0, q u^B-p \right]$ while $\Biip$ places a non-negative mass at zero, then uniformly distributes the remaining mass on $\left[p/q,u^B \right]$; similarly, $\Aiiip$ places a mass at zero and uniformly distributes the remaining mass on $\left( 0, u^A \right]$ while $\Biiip$ is the uniform distribution on $\left[p/q, (u^A+p)/q \right]$; see an illustration in \cref{fig:APA_pos}. 
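A defining feature of the mixed equilibrium in Result~$(ii)$ is that each player is indifferent across her support. The following Python sketch (our own check, not from the paper) evaluates Player A's expected payoff against $\Biip$ on a grid of her support, with illustrative values $u^A=3$, $u^B=2$, $p=0.5$, $q=1$ satisfying $0 < q u^B - p \le u^A$, and confirms that it is constant, equal to $u^A - q u^B + p$ as stated in the theorem.

```python
# Indifference check for Result (ii): illustrative parameters only.
uA, uB, p, q = 3.0, 2.0, 0.5, 1.0
assert 0 < q * uB - p <= uA  # condition of Result (ii)

def B2_plus(t):
    """CDF of Player B's equilibrium bid distribution in Result (ii)."""
    if t < p / q:
        return 1.0 - q * uB / uA + p / uA   # atom at zero, flat below p/q
    if t <= uB:
        return 1.0 - q * uB / uA + q * t / uA
    return 1.0

def payoff_A(x):
    """A's expected payoff bidding x >= 0 against B ~ B2+.

    A wins iff x > q*xB - p, i.e. xB < (x+p)/q.  For x >= 0 and p > 0 a tie
    with B's atom at zero would need x = -p < 0, so ties have probability 0
    and the winning probability is just the CDF evaluated at (x+p)/q.
    """
    return uA * B2_plus((x + p) / q) - x

# Grid over A's support [0, q*uB - p] = [0, 1.5]: payoff is constant = 1.5.
payoffs = [payoff_A(0.1 * k) for k in range(16)]
```

The same kind of check works for Result~$(iii)$ with $\Aiiip$ and $\Biiip$, exchanging the roles of the two payoff constants.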
Note finally that \cref{theo:positive} is consistent with results in the restricted cases of \FAPA presented in \cite{konrad2002investment} (where $u^A = u^B$, $p \ge 0$, $0 < q \le 1$ and $\alpha =1/2$) and with results for the classical APA from \cite{baye1996all,hillman1989} (i.e., when $p =0$, $q = 1$, $\alpha = 1/2$); see \ref{appen:preli} where we reproduce these results to ease comparison. \subsection{The $\FAPA$ game with $p < 0$} \label{sec:APANeg} We now consider the $\FAPA$ game in the case $p < 0$. We first define $p^{\prime} = -p/q$ and $q^{\prime} = 1/q$. Since $p<0$, we have $p^{\prime} >0$. Moreover, \mbox{$\be \left(x^A, q x^B - p \right) = \be \left( (x^A +p)/q, x^B\right) = \be \left( q^{\prime} x^A - p^{\prime}, x^B \right)$} for any $x^A, x^B$. Therefore, the $\FAPA$ game with $p < 0$ (and $q>0$) is equivalent to an \FAPA with $p^{\prime} >0 $ (and $q^{\prime} >0$) in which the roles of players are exchanged. Applying \cref{theo:positive} with $p^{\prime}>0$ (and $q^{\prime} >0$), we can easily deduce the equilibrium of the \FAPA with $p<0$. Due to the limited space, we only present this result in \ref{sec:appen:APAneg}. \section{Optimal Univariate Distributions of the Colonel Blotto Game with Favoritism} \label{sec:OptUni_GRCBC} The notion of optimal univariate distributions plays a key role in the equilibrium characterization of the Colonel Blotto game and its variants. In this section, we prove the existence of, and construct, a set of optimal univariate distributions of the $\FCB$ game. This is the core result of our work. As discussed in \cref{sec:Intro}, a classical approach in the Blotto games literature is to reduce the problem of constructing optimal univariate distributions (OUDs) to the problem of finding the equilibria of a set of relevant all-pay auction instances---each corresponding to players' allocations in one battlefield.
The main question then becomes: \emph{which set of all-pay auction instances should we consider in order to find OUDs of the $\FCB$ game?} Naturally, from their formulations in \cref{sec:FormCB} and \cref{sec:FormAPA}, a candidate is the set of $\FAPA$ games in which the additive and multiplicative advantages of the bidders correspond to the parameters representing the pre-allocations and asymmetric effectiveness of players in each battlefield of the $\FCB$ game. The (uniquely defined) equilibrium distributions of these $\FAPA$ games satisfy Condition~\ref{condi:OUD3} in \cref{def:OUD} (\ie they are the marginal best-response against one another in the corresponding battlefield of the $\FCB$ game). Now, we only need to define the items' values in these $\FAPA$ games in such a way that their corresponding equilibrium distributions also satisfy Condition~\ref{condi:OUD2} in \cref{def:OUD} (\ie they satisfy the budget constraints in expectation). To do this, we first parameterize the items' values assessed by the bidders in the involved $\FAPA$ games, then we match these parameters with the equations defining Condition~\ref{condi:OUD2}. Summarizing the above discussion, we define a particular set of distributions (on $\Rpos$) as~follows: \begin{definition} \label{def:OptDis_GRCBC} Given a game $\FCBn$ and a pair of positive real numbers $\kappa = (\gA, \gB)\! \in\! \mathbb{R}^{2}_{> 0}$, for each $i\! \in\! [n]$, we define $\Ai$ and $\Bi$ to be the pair of distributions that forms the equilibrium of the $\FAPA$ game with $p:= p_i$, $q:= q_i$, $u^A:= w_i \cdot \gA$ and $u^B:= w_i \cdot \gB$. The explicit formulas of $\Ai$ and $\Bi$ are given in \cref{tab:Opt_Uni_GRCBC} for each configuration of $w_i, p_i, q_i, \gA$ and $\gB$. \end{definition} \begingroup \renewcommand{\arraystretch}{1.2} \begin{table}[tb!] \footnotesize \centering \caption[]{$\Ai$ and $\Bi$ corresponding to $\kappa = (\gA, \gB)$ and a $\FCBn$ game.
The notation $I^+_j(\gA,\gB)$ and $I^-_j(\gA,\gB)$ for $j=1,2,3$ denote the set of indices of battlefields satisfying the corresponding conditions; for example, $I^+_1(\gA, \gB)=\{i \in [n]:p_i \ge 0, q_i w_i \gB - p_i \le 0\}$ and $\forall i \in I^+_1 (\gA, \gB)$, $\Ai(x) = 1$, $\Bi(x) = 1, \forall x~\ge~0$.} \label{tab:Opt_Uni_GRCBC} \begin{tabular}{ |l|l|l| } \hline \rowcolor{Gray} Indices Sets & Conditions & Definition \\ \hline $I^+_1(\gA, \gB)$ & {$i \in [n]:p_i \ge 0$, $q_i w_i \gB - p_i \le 0$} & $\Ai(x) = 1, \forall x \ge 0$ and $\Bi(x) = 1, \forall x \ge 0$. \\ \hline $I^+_2(\gA, \gB)$ & \multirow{2}{3cm}{$i \in [n]: p_i \ge 0$, \mbox{$0 < q_i w_i \gB - p_i \le w_i \gA$}} & $\Ai(x) = \left\{ \begin{array}{l} \frac{p_i}{q_i w_i \gB} + \frac{x}{q_i w_i \gB}, \forall x \in \left[ 0, q_i w_i \gB - p_i \right], \\ 1 \qquad \quad \qquad \qquad, \forall x > q_i w_i \gB -p_i, \end{array} \right.$ \\ & & $\Bi(x) = \left\{ \begin{array}{l} 1 - \frac{q_i \gB}{\gA} + \frac{p_i}{w_i \gA}, \forall x \in \left[ 0, \frac{p_i}{q_i} \right), \\ 1 - \frac{q_i \gB}{\gA} + \frac{q_i \cdot x}{w_i \gA}, \forall x \in \left[\frac{p_i}{q_i}, w_i \gB \right], \\ 1 \qquad \quad \qquad \qquad, \forall x > w_i \gB. \end{array} \right.$ \\ \hline \ $I^+_3(\gA, \gB)$ & \multirow{2}{3cm}{$i \in [n]: p_i \ge 0$, \mbox{$q_i w_i \gB - p_i > w_i \gA$}} & $\Ai(x) = \left\{ \begin{array}{l} 1- \frac{\gA}{q_i \gB} + \frac{x}{q_i w_i \gB}, \forall x \in \left[ 0, w_i \gA \right], \\ 1 \qquad \qquad \qquad \qquad, \forall x > w_i \gA, \end{array} \right.$\\ & & $\Bi(x) = \left\{ \begin{array}{l} 0 \qquad \qquad \qquad, \forall x \in \left[ 0, \frac{p_i}{q_i} \right), \\ - \frac{p_i}{w_i \gA} + \frac{q_i \cdot x}{w_i \gA}, \forall x \in \left[\frac{p_i}{q_i}, \frac{w_i \gA + p_i}{q_i} \right], \\ 1 \qquad \qquad \qquad, \forall x > \frac{w_i \gA + p_i}{q_i}. \end{array} \right. 
$ \\ \hline $I^-_1(\gA, \gB)$ & {$i \in [n]: p_i <0$, ${w_i \gA} \le - p_i$} & $\Ai(x) = 1, \forall x \ge 0$ and $\Bi(x) = 1, \forall x \ge 0$. \\ \hline $I^-_2(\gA, \gB)$ & \multirow{2}{3cm}{$i \in [n]: p_i <0$, \mbox{$-p_i< {w_i \gA} \le q_i w_i \gB - p_i$}} & $\Ai(x) = \left\{ \begin{array}{l} 1-\frac{\gA}{q_i \gB} - \frac{p_i}{q_i w_i \gB}, \forall x \in \left[ 0, - p_i \right), \\ 1-\frac{\gA}{q_i \gB} + \frac{x}{q_i w_i \gB}, \forall x \in \left[ -p_i, w_i \gA \right],\\ 1 \qquad \quad \qquad \qquad, \forall x > w_i \gA, \end{array} \right.$ \\ & & $\Bi(x) = \left\{ \begin{array}{l} - \frac{p_i}{w_i \gA} + \frac{q_i \cdot x}{w_i \gA}, \forall x \in \left[ 0, \frac{w_i \gA + p_i}{q_i} \right], \\ 1 \qquad \qquad \qquad, \forall x > \frac{w_i \gA + p_i}{q_i}. \end{array} \right.$ \\ \hline \ $I^-_3(\gA, \gB)$ & \multirow{2}{3cm}{$i \in [n]: p_i <0$, \mbox{${w_i \gA } > q_i w_i \gB - p_i$}} & $\Ai(x) = \left\{ \begin{array}{l} 0 \qquad \quad \qquad \qquad, \forall x \in \left[ 0, -p_i \right), \\ \frac{p_i}{q_i w_i \gB} + \frac{x}{q_i w_i \gB}, \forall x \in [ - p_i, q_i w_i \gB - p_i],\\ 1 \qquad \quad \qquad , \forall x > q_i w_i \gB -p_i, \end{array} \right.$ \\ & & $\Bi(x) = \left\{ \begin{array}{l} 1- \frac{q_i \gB}{\gA} + \frac{q_i \cdot x}{w_i \gA}, \forall x \in \left[0, w_i \gB \right], \\ 1 \qquad \quad \qquad \qquad, \forall x > w_i \gB. \end{array} \right. $\\ \hline \end{tabular} \end{table} \endgroup We consider the following system of equations (with variables~$\gA, \gB$): \begin{align} \left\{ \begin{array}{l} \sum_{i \in [n]} \Ex_{x \sim \Ai} \left[ x \right] = X^A, \\ \sum_{i \in [n]} \Ex_{x \sim \Bi} \left[ x \right] = X^B. \end{array} \right.
\label{eq:system_Ex} \end{align} By defining the sets $\Iplus:= \braces*{j: p_j \ge 0, {\gB > \frac{p_j}{q_j w_j }} }$, \mbox{$\Iminus := \braces*{j: p_j < 0, {\gA > \frac{-p_j}{w_j }} }$} and the term \mbox{$\hi := \min \{ q_i w_i \gB, w_i \gA + p_i \}$}, and computing the expected values of $\Ai$ and $\Bi$ for $i \in [n]$ (see the details in \ref{sec:appen_proof_OUD}), we can rewrite System~\eqref{eq:system_Ex} as: \begin{align} \left\{ \begin{array}{l} g^A (\gA, \gB) = 0, \\ g^B (\gA,\gB) = 0, \end{array} \right. \label{eq:system_f} \end{align} where $g^A, g^B: \mathbb{R}^2 \rightarrow \mathbb{R}$ are the following functions (for each given instance of the $\FCBn$~game): \begin{subequations} \begin{align} g^A (\gA,\gB ) &\!=\! \sum \nolimits_{i \in \Iplus} \frac{ \bracks*{\hi}^2 \!-\! {p_i}^2 }{2 q_i w_i } + \sum\nolimits_{i \in \Iminus} \frac{ \bracks*{\hi}^2 }{2 q_i w_i} - X^B \gA, \label{eq:fA} \\ g^B (\gA,\gB) &\!=\! \sum\nolimits_{i \in \Iplus} \frac{ \left[\hi \!-\! p_i\right]^2}{2 q_i w_i } \!+\! \sum\nolimits_{i \in \Iminus} \frac{ \left[ \hi \!-\!p_i \right]^2 \!-\! p_i^2 }{2 q_i w_i} \!-\! X^A \gB. \label{eq:fB} \end{align} \end{subequations} With these definitions, we can state our main result as the following theorem: \begin{theorem} \label{theo:OUDs} For any game $\FCBn$, \begin{itemize} \item[$(i)$] There exists a positive solution $\kappa = (\gA, \gB) \in \mathbb{R}^2_{>0}$ of System~\eqref{eq:system_f}. \item[$(ii)$] For any positive solution \mbox{$\kappa = (\gA, \gB)\! \in \! \mathbb{R}^2_{>0}$} of System~\eqref{eq:system_f}, the corresponding set of distributions \mbox{$\left\{\Ai,\Bi, i \in [n] \right\}$} from \cref{def:OptDis_GRCBC} is a set of optimal univariate distributions of~$\FCBn$. \end{itemize} \end{theorem} \cref{theo:OUDs} serves as a core result for other analyses in this paper; it is interesting and important in several aspects. 
First, it shows that in any instance of the $\FCB$ game, there always exists a set of OUDs with the form given in \cref{def:OptDis_GRCBC}. Second, by comparing these OUDs of the $\FCB$ game with that of the classical Colonel Blotto game (see \eg results from \cite{kovenock2020generalizations}), we can see how the pre-allocations and the asymmetric effectiveness affect players' allocations at equilibrium; we will return to this point in \cref{sec:NumExp} with more discussions. Moreover, as candidates for marginals of the equilibrium of the $\FCB$ game (in cases where it exists), the construction of such OUDs allows us to deduce a variety of corollary results concerning equilibria and approximate equilibria of $\FCB$ and $\FGL$ games (we present and discuss them in \cref{sec:Corollary_results} and~\cref{sec:heuristic}). We give a detailed proof of \cref{theo:OUDs} in \ref{sec:appen_proof_OUD} and only discuss its main intuition here. First, we can prove Result~$(ii)$ of \cref{theo:OUDs} by simply checking the three conditions of \cref{def:OUD} defining the OUDs of the $\FCB$ game: it is trivial that for $\kappa \in \mathbb{R}^2_{>0}$, the supports of $\Ai, \Bi, \forall i \in [n]$ are subsets of $\Rpos$ (thus, they satisfy Condition~\ref{condi:OUD1}); moreover, if \mbox{$\kappa=(\gA,\gB) \in \R^{2}_{>0}$} is a solution of System~\eqref{eq:system_f}, then it is a solution of System~\eqref{eq:system_Ex} and trivially, $\Ai, \Bi, \forall i \in [n]$ satisfy Condition~\ref{condi:OUD2};\footnote{Note that due to the ``use-it-or-lose-it" rule of the $\FCB$ game, among the existing equilibria (if any), there exists at least one equilibrium in which players use all their resources, thus we only need to consider the equality case of Condition~\ref{condi:OUD2}.} finally, we can check that for each configuration of $w_i, p_i, q_i, \gA, \gB$ (given that $\gA, \gB >0$), the distributions $\Ai, \Bi$ form the equilibrium of the corresponding $\FAPA$ game; thus \mbox{$\Ai,\Bi, i \in 
[n]$} satisfy Condition~\ref{condi:OUD3}. On the other hand, proving Result~$(i)$ of \cref{theo:OUDs} is a challenging problem in itself: $g^A$ and $g^B$ are not simply quadratic functions of $\gA$ and $\gB$ since these variables also appear in the conditions of the involved summations. Note that the particular instance of System~\eqref{eq:system_f} where $p_i=0, q_i =1, \forall i \in [n]$ coincides with a system of equations considered in \cite{kovenock2020generalizations} for the case of the classical Colonel Blotto game (without favoritism). Proving the existence of positive solutions of this system \emph{in this particular case} can be reduced to showing the existence of positive solutions of a real-valued 1-dimensional function (with a single variable $\lambda = \gA/\gB$) which can be done by using the intermediate value theorem (see \cite{kovenock2020generalizations}). \emph{In the general case of the $\FCB$ game and System~\eqref{eq:system_f}}, this approach \emph{cannot} be applied due to the involvement of arbitrary parameters $p_i, q_i, i \in [n]$. Alternatively, one can see our problem as proving the existence of a fixed-point in $\R^2_{>0}$ of the function \mbox{$F:\mathbb{R}^2 \rightarrow \mathbb{R}^2$} such that $F \parens*{\gA,\gB} = \parens*{g^A \parens*{\gA,\gB}/X^B + \gA, g^B \parens*{\gA,\gB}/X^A + \gB}$. This direction is also challenging since the particular formulations of $g^A$ and $g^B$ (thus, of $F$) does not allow us to use well-known tools such as Brouwer's fixed-point theorem \cite{brouwer1911abbildung} and/or Poincaré-Miranda theorem \cite{kulpa1997poincare}. 
Instead of the approaches discussed above, in this work, we prove Result~${(i)}$ of \cref{theo:OUDs} via the following equivalent formulation: \emph{proving the existence of a {positive zero}, \ie a point $(a,b) \in \R^2_{>0}$ such that $G(a,b)=(0,0)$, of the function $G:\R^2 \rightarrow\R^2$} defined as follows: \begin{align} &G\parens{\gA,\gB} = \parens*{ g^A \parens{\gA,\gB} , g^B \parens{\gA,\gB}} \in \R^2, \forall (\gA, \gB) \in \R^2. \label{eq:G_func} \end{align} Note also that although $(0,0)$ is a trivial solution of System~\eqref{eq:system_f} (\ie it is a zero of $G$), we can only construct $\Ai, \Bi$ (as in \cref{def:OptDis_GRCBC}) from solutions whose coordinates are \emph{strictly} positive (\ie only from positive zeros of $G$). In this proof, we work with the notion of \emph{winding numbers}, which is intuitively defined as follows: the winding number, denoted $\W \parens*{\curve,y}$, of a parametric (2-dimensional) closed curve $\curve$ around a point $y \in \R^2$ is the number of times that $\curve$ travels counterclockwise around $y$ (see formal definitions of parametric curves, winding numbers and other related notions in \ref{sec:appen_winding}). This notion yields the following important result:\footnote{See \cite{chinn1966first} for a more general statement of \cref{lem:wind}. It is also considered in the literature as a variant of the \emph{main theorem of connectedness} in topology (see e.g., Theorem 12.N in \cite{viro2008elementary}).} \begin{lemma} \label{lem:wind} Let $G$ be a continuous mapping. For any set $D \subset \R^2$ which is topologically equivalent to a disk, if $\W(\curve, (0,0)) \neq 0$, where $\curve$ is the $G$-image of the boundary of $D$, then $(0,0) \in G(D)$.
\end{lemma} Our proof proceeds by crafting a tailored set $D \subset \R^2_{>0}$ such that the function $G$ from \eqref{eq:G_func} satisfies all sufficient conditions of \cref{lem:wind};\footnote{It is trivial that $g^A( \cdot , \gB), g^A( \gA , \cdot) $ and $g^B( \cdot , \gB), g^B( \gA , \cdot) $ are all continuous and monotone functions in $\mathbb{R}_{>0}$; therefore, from Proposition 1 of \cite{kruse1969joint}, $g^A(\gA, \gB)$ and $g^B(\gA,\gB) $ are continuous functions in~$\mathbb{R}^2_{>0}$.} then, we conclude that $G$ has a zero in $D$ and Result~$(i)$ of \cref{theo:OUDs} follows. Note that finding such a set $D$ and quantifying the involved winding number are non-trivial due to the complexity of the expressions of $g^A$ and $g^B$. We illustrate \cref{lem:wind} and how the proof of Result~$(i)$ of \cref{theo:OUDs} proceeds in a particular instance of $\FCB$ in \cref{ex:example_1234}. \begin{example}\label{ex:example_1234} \emph{Consider a game $\FCBn$ with $n = 4$, $X^A=4, X^B=4$, \mbox{$w_1=w_3 =1$}, $w_2 = w_4 =2$, $p_1 = p_2 =1$, \mbox{$p_3 = p_4 =-1$}, $q_i =1, \forall i$. We illustrate in \cref{fig:example_1234} the values of the function $G:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ corresponding to this game. \cref{fig:example_1234}(a) represents the output plane where each point is mapped with a color; e.g., if a point has the color blue, we know that both coordinates of this point are positive. \cref{fig:example_1234}(b) presents the input plane. Function $G$ maps each point in this input plane to a point in the output plane. Then, in \cref{fig:example_1234}(b), we colorize each point in the input plane with the corresponding color of its output (colors are chosen according to \cref{fig:example_1234}(a)). Solving \eqref{eq:system_f} in this case, we see that $(2,2)$ is the unique zero of $G$.
We observe that in \cref{fig:example_1234}(b), when one chooses a disk containing $(2,2)$, its boundary passes through all colors, which indicates that the $G$-image of its boundary goes around the origin $(0,0)$ of the output plane. This is confirmed by \cref{fig:example_1234}(c) showing the $G$-image of a rectangle $D$ having the vertices $(1,1)$, $(1, 4)$, $(4, 4)$, $(4, 1)$ (thus, it contains $(2,2)$); we observe that $G(\partial D)$, which is the blue curve, travels once around $(0,0)$, thus $\W(G(\partial D),(0,0)) \neq 0$.} \end{example} \begin{figure*}\label{fig:example_1234} \end{figure*} To complete this section, we compute the players' payoffs in the $\FCB$ game in the case where their allocations follow the proposed OUDs. Recalling the notation of the indices sets defined in \cref{tab:Opt_Uni_GRCBC}, we have the following proposition (its proof is given in \ref{sec:appen_proof_payoffEQ}): \begin{proposition}\label{propo:payoffEQ} Given a game $\FCBn$ and $\kappa = \parens*{\gA, \gB} \in \R^2_{>0}$, if Players A and B play strategies such that the marginal distributions corresponding to battlefield $i \in [n]$ are $\Ai$ and $\Bi$, respectively, then their payoffs are: \begin{subequations} \begin{align} \Pi^A_{\FCBn} & = \sum \limits_{i \in I^+_1(\gA,\gB)} {\left[w_i \mathbb{I}_{\{p_i > 0\}} + \alpha w_i \mathbb{I}_{\{p_i =0\}} \right]} \! + \! \sum \limits_{i \in I^+_2(\gA,\gB)} {\left[ w_i \left(1 \!-\! \frac{q_i \gB}{\gA} \!+\! \frac{p_i}{w_i \gA} \right) \!+\! \frac{(q_i w_i \gB \!-\! p_i )^2}{2 w_i \gA q_i \gB} \right]} \nonumber \\ & \hspace{4mm} +\! \sum \limits_{i \in I^+_3(\gA,\gB)} \left[ \frac{w_i \gA}{2 q_i \gB} \right] \!+\! \sum \limits_{i \in I^-_2(\gA,\gB)} \left[ \frac{w_i \gA}{2 q_i \gB} \!-\! \frac{p_i ^2}{2 w_i \gA q_i \gB} \right] \!+\! \sum \limits_{i \in I^-_3 (\gA,\gB)} \left[w_i \!-\! \frac{q_i \gB w_i }{2\gA} \right] \label{eq:payoff_A_GL},\\ \Pi^B_{\FCBn} & = \sum_{i \in [n]} w_i - \Pi^A_{\FCBn}.
\label{eq:payoff_B_GL} \end{align} \end{subequations} \end{proposition} If there exists an equilibrium of the game $\FCBn$ whose marginals are $\Ai, \Bi, i \in [n]$, then \eqref{eq:payoff_A_GL} and \eqref{eq:payoff_B_GL} are formulations of the equilibrium payoffs in this game. Observe, however, that as $p_i$ and $q_i$ (and $w_i$) change, System~\eqref{eq:system_f} also changes; thus, its solutions also vary and the configuration of the corresponding indices sets changes. Therefore, the relationship between the favoritism parameters and the payoffs induced by the corresponding OUDs is \emph{not} easily deducible from \eqref{eq:payoff_A_GL}-\eqref{eq:payoff_B_GL}. We delay our discussion on this to \cref{sec:NumExp}, where we present results from numerical~experiments. \section{Equilibria Results for the Colonel Blotto Game with Favoritism} \label{sec:Corollary_results} In the previous section, we successfully constructed a set of OUDs of the \FCB game; we now show how one can use this result to deduce an equilibrium. As a preliminary result, we give a high-level condition under which the OUDs from \cref{def:OptDis_GRCBC} constitute an equilibrium of the $\FCB$ game; it is presented as a direct corollary of \cref{theo:OUDs} as follows: \begin{corollary} \label{corol:FCBEqui} For any game $\FCBn$ and any positive solution $\kappa=(\gA, \gB)$ of System~\eqref{eq:system_f}, if there exists a mixed-strategy of Player A (resp., Player B) whose univariate marginal distributions correspond to $\Ai$ (resp., $\Bi$) for all $i \in [n]$, then these mixed strategies constitute an equilibrium of~$\FCBn$. \end{corollary} \cref{corol:FCBEqui} is a standard statement in analyzing equilibria of Colonel Blotto games from their OUDs. In general, it is challenging to construct such mixed strategies as required in \cref{corol:FCBEqui}---this is, as discussed in \cref{sec:Intro}, a notorious difficulty in studying the Colonel Blotto game.
In the literature, several alternative solution concepts have been proposed based on the related OUDs; they relax the equilibrium of the Colonel Blotto game in one way or another. We show that these results can also be extended to the \FCB game thanks to \cref{theo:OUDs}: in \cref{sec:GL_game}, we analyze a trivial equilibrium of the \FGL game and in \cref{sec:IU_result}, we propose an approximate equilibrium of the \FCB game based on the rescaling technique of \cite{vu2019approximate}. Nevertheless, we start in \cref{sec:Equi_FCB} by listing special cases of \FCB where the exact equilibrium can be computed. \subsection{Exact Equilibria of the $\FCB$ Game in Particular Cases} \label{sec:Equi_FCB} For several parameter configurations, we can leverage existing results on special cases of the Colonel Blotto game to solve special cases of the \FCB game. For instance, \cite{roberson2006} successfully constructs an equilibrium of the game with homogeneous battlefields---the key idea is that this game has a set of OUDs that are the same for all battlefields. Therefore, we can generalize this idea to construct an equilibrium from the set of OUDs $\left\{ \Ai, \Bi: i \in [n] \right\}$ in any $\FCBn$ game whose parameters $w_i, p_i, q_i$ are such that $ \Ai(x) = F_{A^{\kappa }_j}(x), \forall x \in [0, \infty)$, $\forall i,j \in [n]$ (and $ \Bi(x) = F_{B^{\kappa }_j}(x), \forall x \in [0, \infty)$). A simple example where this condition holds is when $w_i = w_j$, $p_i = p_j$ and $q_i = q_j $, $\forall i, j \in [n]$, in which case any solution of System~\eqref{eq:system_f} induces an indices set (as defined in \cref{tab:Opt_Uni_GRCBC}) that contains the whole set $\{1, \ldots, n\}$.
It is also possible to extend this approach to \FCB games where the set of battlefields can be partitioned into groups with homogeneous OUDs and such that the cardinality of each group is sufficiently large (following an idea proposed by \cite{schwartz2014} in the case of classical Colonel Blotto~games). \subsection{Equilibrium of the General Lotto Game with Favoritism} \label{sec:GL_game} In some applications of the \FCB game (and of the classical Colonel Blotto game), the budget constraints do not need to hold with probability 1; instead, they are only required to hold in expectation. In such cases, the General Lotto game with favoritism ($\FGL$) is relevant and applicable. Due to the relaxation in the budget constraints, any set of OUDs of a game instance $\FCBn$ can serve as a set of equilibrium marginals of the corresponding game $\FGLn$ (having the same parameters): it is trivial to deduce mixed strategies of $\FGLn$ from univariate distributions $\Ai, \Bi, i \in [n]$ (from \cref{def:OptDis_GRCBC}). Formally, from \cref{theo:OUDs}, we have the following corollary: \begin{corollary} \label{corol:payoffGL} For any game $\FGLn$ and any positive solution $\kappa=(\gA, \gB)$ of System~\eqref{eq:system_f}, the strategy profile where Player A (resp., Player B) draws independently her allocation to battlefield $i \in [n]$ from $\Ai$ (resp., $\Bi$) is an equilibrium. \end{corollary} Naturally, in the \FGL game, when players follow the equilibrium described in \cref{corol:payoffGL}, they gain the same payoffs as in \eqref{eq:payoff_A_GL}-\eqref{eq:payoff_B_GL}. It is also trivial to check that \cref{corol:payoffGL} is consistent with previous results on the classical General Lotto game (\ie the \FGL game where $p_i=0$ and $q_i=1$ for any $i \in [n]$), \eg from \cite{myerson1993incentives,kovenock2020generalizations}. 
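The distinction between the two budget notions is easy to see in simulation: independent draws from the marginals meet the budget only in expectation, which is exactly why they are admissible in the \FGL game but not in the \FCB game. The following minimal Python sketch illustrates this with hypothetical uniform marginals standing in for the actual OUDs of \cref{def:OptDis_GRCBC}:

```python
import random

def fgl_allocation(samplers):
    """One General-Lotto allocation: independent draws from the marginals,
    with no rescaling; only the expected total spending is constrained."""
    return [draw() for draw in samplers]

# Hypothetical stand-in marginals: uniform on [0, 2m] has mean m, so the
# expected total equals 1 + 2 + 3 = 6 (the budget), while single
# realizations routinely overshoot it.
random.seed(0)
samplers = [lambda m=m: random.uniform(0.0, 2.0 * m) for m in (1.0, 2.0, 3.0)]
totals = [sum(fgl_allocation(samplers)) for _ in range(20000)]
mean_total = sum(totals) / len(totals)                    # close to the budget 6.0
over_budget = sum(t > 6.0 for t in totals) / len(totals)  # about one half
```

Such overshooting realizations are infeasible in the \FCB game, which is why the rescaling of \cref{sec:IU_result} is needed there.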
\subsection{An Approximate Equilibrium of the \FCB Game} \label{sec:IU_result} In the game theory literature, approximate equilibria are often considered as alternative solution concepts when it is impossible or inefficient to compute exact equilibria. Here, we focus on finding a good approximate equilibrium of the \FCB game that can be simply and efficiently constructed; this is relevant in applications where the budget constraints must hold precisely (\eg in security or telecommunication systems involving a fixed capacity of resources) but sub-optimality is acceptable if the error is negligible relative to the scale of the problem. We begin by recalling the definition of approximate equilibria \cite{myerson1991game,Nisan07} in the context of the \FCB game: \begin{definition}[$\eps$-equilibria]\label{def:appro_equi} For any $\varepsilon \ge 0$, an \emph{\mbox{$\varepsilon$-equilibrium}} of a game $\FCBn$ is any strategy profile $\left(s^{*},t^{*} \right)$ such that \mbox{$\Pi^A_{\FCBn}(s,t^*) \!\le\! \Pi^A_{\FCBn}(s^*,t^*)\!+\!\varepsilon$} and \mbox{$\Pi^B_{\FCBn}(s^*,t)\! \le\! \Pi^B_{\FCBn}(s^*,t^*)\! +\! \varepsilon$} for any strategies $s$ and $t$ of Players A and B, respectively. \end{definition} The set of OUDs constructed in \cref{sec:OptUni_GRCBC} allows us to directly apply a technique from the literature on the classical Colonel Blotto game to look for an approximate equilibrium of the \FCB game: in recent work, \cite{vu2019approximate} propose an approximation scheme (called the IU strategy) for the Colonel Blotto game in which players independently draw their allocations from a set of OUDs, then rescale them to guarantee the budget constraint. The authors prove that IU strategies constitute an $\varepsilon W$-equilibrium of the CB game where $\varepsilon = \tilde{\mO}(n^{-1/2})$ and $W$ is the sum of the battlefields' values.
We extend this idea to the \FCB game and propose the following definition: \begin{definition}[IU Strategies] \label{def:IU} For any game $\FCBn$ and any solution $\kappa = (\gA, \gB) \in \R^2_{>0}$ of System~\eqref{eq:system_f}, we define $\IU^{\play}_{\kappa}$ to be the mixed strategy of player $\play \in \braces*{A,B}$ such that her allocations, namely $\boldsymbol{x}^\play$, are randomly generated by the following simple procedure:\footnote{In fact, in this definition, when $\sum_{j \in [n]} a_j =0$ (resp. when $\sum_{j \in [n]} b_j =0$), we can assign any arbitrary $\boldsymbol{x}^A \in S^A$ (resp. any $\boldsymbol{x}^B \in S^B$). This choice will not affect the asymptotic results stated in this section (particularly, \cref{propo:IU}).} \emph{Player A} draws independently a real number $a_i$ from $\Ai, \forall i \in [n]$. If $\sum_{j=1}^n a_j = 0$, set $x^A_i = \frac{X^A}{n}$; otherwise, set $x^A_i = \frac{a_i}{\sum_{j=1}^n a_j} \cdot X^A$. \emph{Player B} draws independently a real number $b_i$ from $\Bi,\forall i \in [n]$. If $\sum_{j=1}^n b_j = 0$, set $x^B_i = \frac{X^B}{n}$; otherwise, set $x^B_i = \frac{b_i}{\sum_{j=1}^n b_j} \cdot X^B$. \end{definition} Intuitively, by playing IU strategies, players draw independently from the OUDs and then normalize before making the actual allocations. It is trivial to check that the realizations from $\IU^{\play}_{\kappa}$ satisfy the corresponding budget constraint in the \FCB game. Now, we consider the following~assumption: \begin{assumption}\label{assum:bound} $\exists \wmax, \wmin: 0<\wmin \le w_i \le \wmax < +\infty$, $\forall i \in [n]$. \end{assumption} \noindent \cref{assum:bound} is a mild technical assumption that is satisfied by most (if not all) applications of the \FCB game. Intuitively, it says that the battlefields' values of the game $\FCB$ are bounded away from 0 and infinity.
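The sampling procedure of \cref{def:IU} translates directly into code. The sketch below treats the marginal samplers as black boxes (in practice one would sample from the mixtures $\Ai$, $\Bi$ of \cref{def:OptDis_GRCBC}; the uniform samplers used here are hypothetical stand-ins); only the normalization step and the degenerate all-zero case are specific to the IU strategy:

```python
import random

def iu_allocation(samplers, budget):
    """One realization of an IU strategy: draw independently from the n
    univariate marginals, then rescale so the realized allocation meets
    the budget constraint exactly (with an even split in the degenerate
    case where every draw is zero, as in the definition)."""
    draws = [draw() for draw in samplers]
    total = sum(draws)
    if total == 0:
        return [budget / len(samplers)] * len(samplers)
    return [budget * d / total for d in draws]

# Hypothetical stand-in marginals; any non-negative samplers work the same.
random.seed(1)
x = iu_allocation([lambda: random.uniform(0.0, 1.0) for _ in range(5)], 10.0)
# sum(x) equals the budget 10.0 up to floating-point error, each x_i >= 0
```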
With a simple adaptation of the results of \cite{vu2019approximate} (for the classical Colonel Blotto game), we obtain the following proposition (we give its proof in \ref{appen:IU}): \begin{proposition}[IU strategies form an $\eps$-equilibrium] \label{propo:IU} In any game $\FCBn$ satisfying \cref{assum:bound}, there exists a positive number \mbox{$\varepsilon = \tilde{\mO} \left(n^{-1/2}\right)$} such that for any solution $\kappa = (\gA, \gB) \in \R^2_{>0}$ of System~\eqref{eq:system_f}, the profile $\parens*{\IU^A_{\kappa}, \IU^B_{\kappa}}$ is an \mbox{$\varepsilon W^n$-equilibrium} where $W^n:= \sum_{i=1}^n w_i$. \end{proposition} Here, recall that the notation $\tilde{\mO}$ is a variant of the $\mO$-asymptotic notation where logarithmic terms are ignored. We can interpret \cref{propo:IU} as follows: consider a sequence of games $\FCBn$ in which $n$ increases (\ie games with larger and larger numbers of battlefields). Note that $W^n$ is an upper bound on the players' payoffs in the game $\FCBn$; thus, $W^n$ reflects the scale of this game. To assess the quality of the proposed approximate equilibrium as $n$ grows, we consider the ratio between the approximation error $\eps W^n$ of $\parens*{\IU^A_{\kappa}, \IU^B_{\kappa}}$ and this scale quantity $W^n$; this captures the proportion of payoff that Player $\play$ might lose by following $\IU^{\play}_{\kappa}$ instead of the best response against $\IU^{-\play}_{\kappa}$. As we consider $\FCBn$ games with larger and larger $n$, this ratio (which is exactly~$\varepsilon$) tends to 0 at a rate of order $\tilde{\mO} (n^{-1/2})$. Therefore, in \FCB games with a large number of battlefields, the players can confidently play $\parens*{\IU^A_{\kappa}, \IU^B_{\kappa}}$ as an approximate equilibrium, knowing that the involved level of error is negligible.
Note also that the approximation error presented in \cref{propo:IU} also depends on other parameters of the game, including $X^A, X^B$, $\wmin, \wmax$, $\max\{ \left|p_i\right|, i\in [n] \}, \min \{q_i, i \in [n]\}$ and $\alpha$ (these constants are hidden in the $\tilde{\bigoh}$ notation). \section{Efficient Approximation of Optimal Univariate Distributions} \label{sec:heuristic} Our characterization of the (approximate) equilibria of the \FCB and \FGL games builds upon System~\eqref{eq:system_f}. \cref{theo:OUDs} shows the existence of a solution of this system, but in practice it is also important to be able to compute such a solution. It is not clear how to do this efficiently: recall that \eqref{eq:system_f} is \emph{not} simply a system of quadratic equations in $\gA, \gB$ since, as these variables change, the configuration of the indices sets (involved in the definitions of $g^A, g^B$) also changes. Given a game $\FCBn$, a naive way to solve System~\eqref{eq:system_f} would be to consider all possible partitions of \mbox{$[n]$} into $\Iplus$, \mbox{$\Iminus$} and $[n] \backslash \bracks*{\Iplus \bigcup \Iminus}$, then solve the particular system of quadratic equations corresponding to each case. This approach, however, is inefficient since, in the worst case, the number of partitions is exponential in $n$, as illustrated in the following toy example: \begin{figure} \caption{Conditions to partition battlefields into the indices sets of the $\FCBn$ game in \cref{ex:example_GRCB_2}\label{fig:example}} \end{figure} \begin{example}\label{ex:example_GRCB_2} \emph{ Consider the game $\FCBn$ with $n =2$, $X^A = X^B= 2$, $w_1 = w_2 =1$, $p_1 = -2$, $p_2 = 0$, $q_1 = 1/2$ and $q_2 = 1$. Even in this extremely simple game, there are 6 possible configurations of $\Iplus$ and $\Iminus$.
In \cref{fig:example}, we illustrate these cases by 6 regions in the first quadrant of the $\gA$-$\gB$ plane, separated by the axes and the polynomials involved in the conditions that determine the indices sets. Considering these cases, each of which induces a system of quadratic equations, we see that there is no positive solution of~\eqref{eq:system_f} in the cases corresponding to Regions I, II, III, IV and VI of \cref{fig:example}. Only when $2 < \gA < \min \{ \gB, \gB /2 +2 \}$ (i.e., the point $(\gA, \gB)$ lies in Region V of \cref{fig:example}) do we have $\Iminus =\{1\}$ and $\Iplus=\{2\}$; thus, System~\eqref{eq:system_f} has a unique positive solution in $\mathbb{R}^2_{>0}$, namely $\gA = 2 + \sqrt{4/3}$ and $\gB = 2 + \sqrt{12}$ (which satisfies the conditions of Region~V). } \end{example} In large or even moderate instances, this naive approach is impractical, leading to the important question: can we (approximately) solve System~\eqref{eq:system_f} more efficiently? We answer this question positively: we propose in \cref{sec:approx_algo} an \emph{approximation algorithm} that computes a solution of System~\eqref{eq:system_f} with arbitrarily small error, and we analyze its running time in \cref{sec:efficiency}. Finally, in \cref{sec:approx_OUDs}, we analyze the impact of the approximation error on the OUDs from \cref{sec:OptUni_GRCBC}.
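To make the blow-up of the naive approach concrete, note that it must consider, in the worst case, every assignment of each battlefield to $\Iplus$, to $\Iminus$, or to neither---up to $3^n$ candidate quadratic systems (feasibility conditions eliminate some of them, which is why only 6 configurations arise in the $n=2$ game of \cref{ex:example_GRCB_2}). A minimal sketch of this enumeration:

```python
from itertools import product

def candidate_assignments(n):
    """All assignments of the n battlefields to I+, I-, or neither; each
    assignment fixes the indices sets and hence one candidate system of
    quadratic equations that the naive approach would have to solve."""
    return list(product(("I+", "I-", "none"), repeat=n))

# The worst-case count is 3**n: 9 cases already for n = 2, and about
# 59,000 candidate systems for a modest n = 10.
```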
\subsection{An Approximation Algorithm Solving System~\eqref{eq:system_f}} \label{sec:approx_algo} Throughout this section, we focus on the following approximation concept: in any game $\FCBn$ (and $\FGLn$), for any $\delta>0$, a point $(\tgA, \tgB) \in \mathbb{R}^2_{>0}$ is called a \emph{${\delta}$-approximate solution} of System~\eqref{eq:system_f} if there exists a solution $(\gA, \gB) \in \R^2_{>0}$ of System~\eqref{eq:system_f} satisfying the following conditions: \begin{align} & \left| \tgA - \gA \right| \le {\delta} \textrm{ and } \left|\tgB - \gB \right| \le {\delta}, \label{eq:delta_close} \\ \textrm{ and }& g^A(\tgA, \tgB)\le 0 \textrm{ and } g^B(\tgA, \tgB)\le0. \label{eq:system_approx_f} \end{align} Intuitively, any $\tk = (\tgA, \tgB)$ satisfying~\eqref{eq:delta_close} is ${\delta}$-close to a solution of System~\eqref{eq:system_f} (in the metric induced by the $\| \cdot\|_{\infty}$ norm) and, by \eqref{eq:system_approx_f}, the distributions $\braces*{ \tAi, \tBi , i \in [n]}$ from \cref{def:OptDis_GRCBC} corresponding to $\tk$ satisfy Condition~\ref{condi:OUD2} of \cref{def:OUD} (i.e., budget constraints).\footnote{Note also that since $G$ is continuous (it is also Lipschitz-continuous \wrt the $\| \cdot \|_{\infty}$ norm), the distance between $G(\tgA, \tgB)$ and $(0,0)$ also tends to 0 when $\delta \rightarrow 0$; to quantify this, one might look for the Lipschitz constants of $g^A, g^B$; but since this analysis is not relevant to the results presented in this section, we omit the details.} In principle, we would like to find ${\delta}$-approximate solutions with $\delta$ as small as possible; naturally, this comes with a trade-off in running time (we discuss this further in \cref{sec:efficiency}). We propose an approximation algorithm, having a stopping-criterion parameter $\delta$, that quickly finds a $\delta$-approximate solution of~\eqref{eq:system_f} in any game $\FCBn$.
A pseudo-code of this algorithm is given in \ref{appen:heuristic} along with further details; we discuss here only its main intuition. Recall that the function $G:\R^2 \rightarrow \R^2$ defined in~\eqref{eq:G_func} is a continuous mapping and that solving System~\eqref{eq:system_f} is equivalent to finding a zero of $G$. To do this, our approximation algorithm consists of a dichotomy procedure (i.e., a bisection method) such that at each loop-iteration, it considers a smaller subset of $\mathbb{R}^2$. It starts with an arbitrary rectangle $D \subset \mathbb{R}^2_{>0}$ (including its boundary and interior), then checks whether its image via $G$ contains the point $(0,0)$---this can be done by computing the winding number of the $G$-image of the boundary of $D$ (which is a closed parametric curve) around $(0,0)$: due to \cref{lem:wind}, if this winding number is non-zero, $G(D)$ contains $(0,0)$. If $G(D)$ does not contain $(0,0)$, we enlarge the rectangle $D$ (\eg by doubling its length and width) while maintaining that $D \subset \R^2_{>0}$. We repeat this enlargement step until we find a rectangle whose $G$-image contains $(0,0)$. Due to \cref{theo:OUDs} and \cref{lem:wind}, such a rectangle $D$ exists with a zero of $G$ inside. We then proceed by dividing the rectangle $D$ into smaller rectangles, checking which among them has a $G$-image containing $(0,0)$, and repeating this procedure on that smaller rectangle. The algorithm terminates as soon as it finds a rectangle, say $D^*$, such that $G(\partial D^*)$ has a non-zero winding number around $(0,0)$ and $D^*$ has a diameter smaller than $\delta$ (thus any point in $D^*$ satisfies \eqref{eq:delta_close}); as a sub-routine of the computation of the involved winding number, our algorithm also determines a point $(\tgA, \tgB)$ in $D^*$ satisfying \eqref{eq:system_approx_f}. The output $(\tgA, \tgB)$ is a $\delta$-approximate solution of System~\eqref{eq:system_f}.
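A simplified stand-in for this dichotomy is sketched below. It assumes the enlargement phase has already produced a rectangle whose $G$-image winds around $(0,0)$, samples each boundary at a fixed resolution rather than using the singularity-controlled IPS approximation of \cref{sec:efficiency}, and returns the center of the final rectangle (it also assumes the zero does not fall exactly on a subdivision edge):

```python
import math

def winding_number(pts):
    """Winding number around the origin of the closed polygon through pts:
    sum the signed angles subtended at the origin by consecutive vertices."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return round(total / (2 * math.pi))

def boundary(rect, steps=200):
    """Counterclockwise sampling of the boundary of rect = (ax, ay, bx, by)."""
    ax, ay, bx, by = rect
    pts = []
    for t in range(steps):  # bottom side
        pts.append((ax + (t / steps) * (bx - ax), ay))
    for t in range(steps):  # right side
        pts.append((bx, ay + (t / steps) * (by - ay)))
    for t in range(steps):  # top side
        pts.append((bx - (t / steps) * (bx - ax), by))
    for t in range(steps):  # left side
        pts.append((ax, by - (t / steps) * (by - ay)))
    return pts

def zero_by_dichotomy(G, rect, delta):
    """Shrink rect by quadrisection, always keeping a sub-rectangle whose
    G-image has non-zero winding number around (0,0), until its sides are
    below delta; the center is then delta-close to a zero of G."""
    while True:
        ax, ay, bx, by = rect
        if max(bx - ax, by - ay) <= delta:
            return ((ax + bx) / 2, (ay + by) / 2)
        mx, my = (ax + bx) / 2, (ay + by) / 2
        for sub in ((ax, ay, mx, my), (mx, ay, bx, my),
                    (ax, my, mx, by), (mx, my, bx, by)):
            if winding_number([G(p) for p in boundary(sub)]) != 0:
                rect = sub
                break
        else:
            raise ValueError("no sub-rectangle with non-zero winding number")
```

For instance, for the translation $G(p) = (p_1 - 1.3,\, p_2 - 0.7)$ and the initial rectangle with corners $(0,0)$ and $(4,4)$, the procedure localizes the zero $(1.3, 0.7)$ to within $\delta$.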
\subsection{Computational Time of the Approximation Algorithm} \label{sec:efficiency} In the approximation algorithm described above, the most complicated step is the computation of the winding number for the involved rectangles. To do this efficiently, we draw tools from the literature: for any parametric curve $\curve:[a,b] \rightarrow \R^2$ where $\min_{t \in [a,b]} \| \curve(t)\|_{\infty} = \delta$, the insertion procedure with control of singularity (IPS) algorithm proposed by \cite{zapata2012geometric} takes $\bigoh \parens*{(b-a) \delta^{-1}}$ time to output a special polygonal approximation of $\curve$---having a number $\bigoh \parens*{(b-a) \delta^{-1}}$ of vertices---such that the winding number of this polygonal approximation is precisely the winding number of $\curve$. To compute this winding number, we calculate the value of $\curve$ at all vertices of this polygon. Inserting IPS into our approximation algorithm running with the parameter $\delta$, in any game $\FCBn$, for any rectangle~$D$ in consideration, we can represent $G(\partial D)$ by a parametric curve $\curve:[a,b] \rightarrow \R^2$ and compute the winding number of $G(\partial D)$ in $\bigoh \parens*{n(b-a) \delta^{-1}}$ time (it takes $\bigoh(n)$ time to compute the $G$-value of a vertex of the polygonal approximation). Note that this computational time also depends on other parameters of the game $\FCBn$ (they are hidden in the $\bigoh$-notation above); we discuss this point in detail in \ref{appen:heuristic}. Based on this procedure, we have the following~proposition: \begin{proposition}\label{propo:heuristic} For any game $\FCBn$ and $\delta <1$, the approximation algorithm described above finds a $\delta$-approximate solution of System~\eqref{eq:system_f} in $\tilde{\bigoh}(n \delta^{-1})$ time. \end{proposition} \cref{propo:heuristic} confirms the efficiency of our approximation algorithm, as its running time is only linear in $n$ (for a fixed precision level $\delta$).
The order $\tilde{\bigoh}(\delta^{-1})$ gives the trade-off between the running time and the precision level $\delta$ of the solutions. In fact, the running time of our algorithm also depends on the choice of the initial rectangle. More precisely, for a solution $(\gA, \gB)$ of System~\eqref{eq:system_f} such that \mbox{$\|(\gA, \gB)\|_{\infty} \!<\!R$}, our approximation algorithm, initialized with a rectangle whose center $(\gA_0,\gB_0)$ satisfies $\|(\gA_0,\gB_0) \|_{\infty} = L_0$ and with $\delta <1$, terminates after $\bigoh \parens*{\log \parens*{\frac{R}{\delta}} + \log \parens*{ \max \braces*{\frac{R}{L_0}, \frac{L_0}{R}}}}$ iterations, and each iteration runs in $\bigoh \parens*{ {R n}{\delta}^{-1} }$ time. Intuitively, if the initial rectangle is too small and/or the actual solution is too far away from this rectangle, the algorithm requires a longer time. We conduct several experiments to illustrate the computational time of our approximation algorithm. Due to space constraints, we place these results in~\ref{appen:heuristic}; overall, our proposed algorithm is very fast compared with the naive approach described~above. \subsection{Approximations of Optimal Univariate Distributions of the \FCB Game} \label{sec:approx_OUDs} To conclude this section, we show that from a $\delta$-approximate solution of System~\eqref{eq:system_f}, one obtains an approximate equilibrium for the \FCB and \FGL games. This is based on the following proposition: \begin{proposition}\label{lem:approx_sol} In any game $\FCBn$ (and $\FGLn$), let $\kappa = (\gA,\gB)$ and $\tk = (\tgA, \tgB)$ be a positive solution and a $\delta$-approximate solution of System~\eqref{eq:system_f}, respectively (such that $\kappa, \tk$ satisfy \eqref{eq:delta_close}-\eqref{eq:system_approx_f}). Then, the sets of distributions $\braces{\Ai,\Bi, i \in [n]}$ and $\braces{\tAi,\tBi, i \in [n]}$ from \cref{def:OptDis_GRCBC} corresponding to $\kappa$ and $\tk$~satisfy $\big| \Ai(x)\! -\!
\tAi(x) \big| \le \bigoh \parens{\delta}$ and $\big| \Bi(x) - \tBi(x) \big| \le \bigoh\parens{\delta}$, for any $i \in [n]$ and~$x\in [0,\infty)$. \end{proposition} A proof of \cref{lem:approx_sol} is given in \ref{appen:approx_OUD}. Intuitively, it shows that when $\tk = (\tgA, \tgB)$ is a $\delta$-approximate solution of \eqref{eq:system_f}, the distributions ${\tAi,\tBi}$ are approximations of the distributions ${\Ai,\Bi}$ with an approximation error of order $\bigoh(\delta)$ (note that it is also polynomial in terms of $1/\min\{\gA, \tgA, \gB, \tgB\}$, $\max\{\gA, \tgA, \gB, \tgB\}$, $1/\min_{i \in [n]}{q_i}$ and $\max_{i \in [n]}{|p_i|}$). From \cref{lem:approx_sol}, we can also deduce that the players' payoffs when they use strategies with marginals following ${\tAi,\tBi, i \in [n]}$ are $\bracks*{\bigoh ( \delta) W^n}$-close to the payoffs when players use strategies with marginals ${\Ai,\Bi, i \in [n]}$ (see the formulations given in \eqref{eq:payoff_A_GL}-\eqref{eq:payoff_B_GL}). As a consequence, by following the scheme leading to the results in \cref{sec:Corollary_results}, for any games $\FCBn$ and $\FGLn$ (having the same parameters) and any $\delta$-approximate solution $\tk = (\tgA, \tgB)$ of System~\eqref{eq:system_f}, we have: \begin{itemize} \item[$(i)$] In $\FGLn$, the strategy profile in which Player A (resp. Player B) draws independently her allocation to battlefield $i$ from ${\tAi}$ (resp. $\tBi$) constitutes a $\bracks*{\bigoh(\delta) W^n}$-equilibrium.\footnote{These are indeed mixed strategies of $\FGLn$ since $\sum_{i \in [n]} \Ex_{x \sim \tAi} \left[ x \right] < X^A$ and $\sum_{i \in [n]} \Ex_{x \sim \tBi} \left[ x \right] < X^B$ due to~\eqref{eq:system_approx_f}.} \item[$(ii)$] In $\FCBn$, the strategy profile $(\IU^{A}_{\tk}, \IU^{B}_{\tk} )$ is a $\bracks*{\bigoh(\delta + \varepsilon ) W^n}$-equilibrium, where $\varepsilon = \tilde{\bigoh}(n^{-1/2})$.
In this case, the level of error cannot be improved beyond $\varepsilon$; hence, it suffices to run the approximation algorithm with $\delta = {\varepsilon}$. \end{itemize} \section{Numerical Illustrations of the Effect of Favoritism in Colonel Blotto and General Lotto~Games} \label{sec:NumExp} In this section, we conduct numerical experiments illustrating the effect of favoritism in the \FCB and \FGL games. For each game instance, if its parameters satisfy \cref{assum:1} and~\cref{assum:2}, we run the approximation algorithm described in \cref{sec:heuristic} to find a ${\delta}$-approximate solution $\tk = (\tgA, \tgB)$ of the corresponding System~\eqref{eq:system_f}, where we set ${\delta} = 10^{-6}$; then, we report the obtained results regarding the distributions $\tAi, \tBi, i \in [n]$ from \cref{def:OptDis_GRCBC} corresponding to~$\tk$. If \cref{assum:1} and~\cref{assum:2} are violated, we report the results corresponding to the trivial pure equilibria. In the first experiment, we aim to illustrate the relation between the parameters $p_i, q_i$ ($i \in [n]$) and the players' equilibrium payoffs in the \FGL game presented in \eqref{eq:payoff_A_GL}-\eqref{eq:payoff_B_GL} (which are also the payoffs of the corresponding \FCB game if the assumptions in \cref{corol:FCBEqui} hold). Since $\FGL$ and $\FCB$ are constant-sum games, we focus on the payoff of Player A. Although our results hold for \FCB and \FGL games with general parameters, we first focus on instances where the players' budgets are symmetric and all battlefields are homogeneous, in order to single out the effect of favoritism. In particular, we consider a group of instances of the \FGL game with $n=4$ battlefields, $X^A = X^B =10$, $\alpha =1/2$, in which all battlefields have the same values ($w_i =1, \forall i$) and the same favoritism parameters $p_i = \bp, q_i = \bq, \forall i$, for given $\bp, \bq$.
For comparison, recall that the instance where $\bp= 0, \bq =1$ corresponds to the classical Colonel Blotto/General Lotto~game. \begin{figure*}\label{fig:Experiment1} \end{figure*} \cref{fig:Experiment1}(a) illustrates the (expected) equilibrium payoff of Player A in instances of the \FGL game where \mbox{$\bp \in \left\{- \! X^A,-\!0.99X^A,\ldots, 0, 0.01{X^A}, 0.02X^A, \ldots, X^A\right\}$} and \mbox{$\bq \in \{{1}/{10}, {1}/{5}, {1}/{2}, 1, 2, 5, 10 \}$}. First, we observe that as $\bp$ increases and/or $\bq$ decreases, i.e., as favoritism shifts towards Player~A, her expected payoff naturally increases (or at least does not decrease). Second, most instances satisfy \cref{assum:1} and~\cref{assum:2}; in this case, the curves representing Player A's payoffs are piecewise quadratic in $\bp$, which is consistent with the theoretical expression in \eqref{eq:payoff_A_GL}. Third, we observe that the equilibrium payoff of Player A, as a function of $\bp$, is discontinuous at several points. This is due to the fact that in instances where $\bp$ is very small or $\bq$ is very large (i.e., Player B has strong favoritism), the game has trivial equilibria where Player B can guarantee to win all battlefields (and thus, Player A's payoff is 0). Next, we consider game instances with the same $n, X^A, X^B, w_i$ and $p_i$, but we allow the resources' effectiveness to vary across battlefields; in particular, $q_1 = q_2 = \bq$ and $q_3 = q_4 = 1/\bq$ where \mbox{$\bq \in \{0.1, 0.2,0.5 \}$}. \cref{fig:Experiment1}(b) reports Player A's equilibrium payoff in these cases. First, we observe that when $\bp = 0$, the game is symmetric and each player's equilibrium payoff is precisely $ \sum_{i \in [n]} {w_i} /2 = 2$. Contrary to the case of \cref{fig:Experiment1}(a), however, in \cref{fig:Experiment1}(b) when $\bp$ is large, Player A can no longer guarantee to win all battlefields.
Moreover, in the cases where $\bq \in \{0.1, 0.2\}$, as $\bp$ increases beyond $2$ (i.e., when Player A has strong pre-allocations), she can hardly improve her payoff further. This is explained by the fact that although Player A can guarantee to win battlefields 1 and 2 (where $q_i$ is small), the effectiveness of her resources in battlefields 3 and 4 is too weak, so she does not gain much in these battlefields. This illustrates the different effects that favoritism in resources' effectiveness and favoritism in pre-allocations have on the equilibrium. In the next experiment, we consider the following situation: players compete on $n =4$ battlefields where $w_1 = w_2 = w_3 = 1$, $w_4 = 5$ and $q_i = 1, \forall i$ (i.e., resources have the same effectiveness). Player A has a total budget $X^* = 10$, but in this experiment an amount $P < X^*$ taken out of this budget is pre-allocated. Then, Players A and B play an \FGL (or an \FCB) game where Player A's budget is $X^A = 10 - P$, Player B's budget is $X^B = 10$, and $p_i \ge 0$ such that $\sum_{i \in [n]} p_i = P$. We aim to analyze Player A's payoff when $P$ increases (i.e., when more and more of her budget is committed as pre-allocation). While interesting, we leave the question of the optimal distribution of pre-allocations as future work; here we simply compare two simple distributions of Player A's pre-allocation: (i) in the \emph{spread strategy}, the pre-allocation is spread over all battlefields: $p_i = P/n, \forall i \in [n]$; (ii) in the \emph{focus strategy}, the pre-allocation is concentrated on battlefield 4 (the battlefield with a large value): $p_1 = p_2 = p_3 = 0$ and $p_4 = P$. \begin{figure*}\label{fig:Experiment2} \end{figure*} \cref{fig:Experiment2}(a) illustrates Player A's equilibrium payoff in the \FGL games as described above with $P \in \left\{0,0.1, \ldots, 9.9 \right\}$.
When $P=0$, this is the classical Colonel Blotto game; in this case, the game is symmetric and thus each player has an equilibrium payoff $\sum_{i \in [n]}w_i /2 =4$. As $P$ increases, Player A's payoff decreases; intuitively, when the proportion of Player A's budget that is pre-allocated increases, she reveals more information about her (pre-)allocations and has less flexibility in play. Interestingly, we observe that in instances where $P$ is relatively large, Player A gets a better payoff by distributing the pre-allocations using the focus strategy rather than the spread strategy, \ie it is better for Player A to focus on ``securing'' the big battlefield. In \cref{fig:Experiment2}(b), we plot the expected allocations of Player A at equilibrium, alone and when added to her pre-allocations (note that since the parameters on battlefields 1, 2, 3 are identical, Player A's expected allocations are the same on these battlefields). As $P$ increases, Player A's expected allocations to the battlefields decrease since her budget \mbox{$X^A = X^* - P$} is reduced (although the aggregate of her allocation and pre-allocation increases in some cases). Interestingly, when $P$ is relatively small, Player A's expected allocation to battlefield 4 is much larger than that at battlefield 1 (this is because $w_4 > w_1$). However, as $P$ increases, her allocation at battlefield 4 decreases more quickly under the focus strategy than under the spread strategy. This is consistent with the intuition above that she already ``secures'' this battlefield via the focus strategy; thus, she should not distribute a large allocation~there. \section{Concluding Discussion} \label{sec:conclu} We introduced the \emph{Colonel Blotto game with favoritism} and analyzed its equilibria and approximate equilibria. We first characterized completely the equilibrium of all-pay auctions with favoritism.
Using this, we then proved that there exists a set of optimal univariate distributions of the Colonel Blotto game with favoritism and gave a construction thereof. In several special cases of the Colonel Blotto game with favoritism, these univariate distributions give an exact equilibrium; in other cases, we derived an approximate equilibrium. We then proposed an algorithm that efficiently computes an approximation of the proposed optimal univariate distributions with arbitrarily small~error. Our model of favoritism uses a linear form of the winner-determination rule, defined in \cref{sec:FormCB}, similar to works on all-pay auctions with favoritism (see e.g., \cite{konrad2002investment,siegel2014asymmetric}). This is a natural formulation to capture the fundamental properties of favoritism due to its simplicity and to the natural interpretation of the parameters $p_i$ and $q_i$; and our Colonel Blotto game with favoritism---which is derived from this rule---provides a meaningful model for applications with favoritism (see \cref{sec:Intro} for several motivational examples). Nevertheless, an interesting direction for future investigations would be to consider more general winner-determination rules to model favoritism. A natural extension of our work is to consider polynomial winner-determination rules, that is, Player A wins battlefield $i$ if $P^{m}_{A,i} (x^A_i) \ge P^{m}_{B,i} (x^B_i)$ and loses otherwise; here, $P^{m}_{A,i} (\cdot)$ and $P^{m}_{B,i} (\cdot)$ are some pre-determined polynomials of degree $m$ with coefficients dependent on the battlefield and the corresponding player (the first two coefficients play roles similar to $p_i$ and $q_i$). It would remain possible to map the corresponding Colonel Blotto game to a set of all-pay auctions; but these all-pay auctions would now have complex (polynomial) winner-determination rules.
This raises two challenges: \emph{(i)} the equilibrium for such complex all-pay auctions (used to derive the optimal univariate distributions of the Colonel Blotto game) is not known and appears to be non-trivial to derive; and \emph{(ii)} proving the existence of the univariate distributions might require a fixed-point technique other than our solution in the linear-form case (cf. \cref{theo:OUDs})---or at least an adaptation of our proof technique. \appendix \renewcommand{\thesection}{Appendix~\Alph{section}} \setcounter{lemma}{0} \setcounter{definition}{0} \renewcommand{\thelemma}{\Alph{section}-\arabic{lemma}} \renewcommand{\thedefinition}{\Alph{section}-\arabic{definition}} \renewcommand{\theproposition}{\Alph{section}-\arabic{proposition}} \setcounter{equation}{0} \renewcommand{\theequation}{\Alph{section}.\arabic{equation}} \section{Supplementary Materials for Results in Section~\ref{sec:EquiAPA}} \subsection{Known results on equilibria of all-pay auctions} \label{appen:preli} To ease the comparison between the state of the art and our main results on the \FAPA game (Section~\ref{sec:EquiAPA}), we review here several results from the literature. The results stated in this section are extracted from previous works and rewritten in our notation. \begin{theorem}[extracted from \cite{baye1994,hillman1989}] \label{theo:clasAPA} In the classical two-player all-pay auction (i.e., an $\FAPA$ with $p = 0$, $q =1$ and $\alpha = 1/2$), if $u^A \ge u^B$, there exists a unique mixed equilibrium where Players A and B bid according to the following distributions: \begin{align} & A(x) = \left\{ \begin{array}{l} \frac{x}{u^B}, \forall x \in \left[ 0, u^B \right], \\ 1 \quad , \forall x > u^B, \end{array} \right.
\textrm{ and } & B(x) = \left\{ \begin{array}{l} \frac{u^A-u^B}{u^A} + \frac{x}{u^A}, \forall x \in \left[ 0, u^B \right] \\ 1 \qquad \qquad , \forall x > u^B. \end{array} \right. \end{align} In this equilibrium, Player A's payoff is $\Pi^A = u^A-u^B$ and Player B's payoff is $\Pi^B = 0$. \end{theorem} Intuitively, $A(x)$ is the uniform distribution on $[0, u^B]$ and $B(x)$ is the distribution with a (strictly positive) probability mass at $0$, with the remaining mass distributed uniformly on $(0, u^B]$. In the case where $u^B > u^A$, players exchange their roles and a statement similar to Theorem~\ref{theo:clasAPA} can easily be deduced. \begin{theorem}[extracted from \cite{konrad2002investment}] \label{theo:incumAPA} In the $\FAPA$ where $u^A = u^B = u$, $p > 0$, $0< q <1$ and $\alpha = 1/2$, \begin{itemize} \item[$(i)$] If $q u - p \le 0$, there exists a unique pure equilibrium where players' bids are $x^A = x^B = 0$ and their equilibrium payoffs are $\Pi^A = u$ and $\Pi^B = 0$. \item[($ii$)] If $0 < q u - p$, there exists no pure equilibrium; the unique mixed equilibrium is where Players A and B draw their bids from the following distributions: \begin{align} & \bar{A}(x) = \left\{ \begin{array}{l} \frac{p}{q u} + \frac{x}{q u}, \forall x \in \left[ 0, q u - p \right], \\ 1 \qquad \qquad, \forall x > q u -p, \end{array} \right. \textrm{ and } & \bar{B}(x) = \left\{ \begin{array}{l} 1 - q + \frac{p}{u}, \forall x \in \left[ 0, \frac{p}{q} \right) \\ 1 - q + \frac{q \cdot x}{u}, \forall x \in \left[\frac{p}{q}, u \right], \\ 1 \qquad \qquad , \forall x > u. \end{array} \right. \end{align} In this mixed equilibrium, players' payoffs are $\Pi^A = u(1-q) + p$ and $\Pi^B =0$. \end{itemize} \end{theorem} Intuitively, $\bar{A}(x)$ is the distribution placing a positive mass at 0 and distributing the remaining mass uniformly on $(0, qu - p]$, and $\bar{B}(x)$ is the distribution placing a mass at 0 and distributing the remaining mass uniformly on $\left( p/q, u \right)$.
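As a quick numeric sanity check (ours, not extracted from \cite{konrad2002investment}), one can verify that against $\bar{B}$, Player A's expected payoff under $\bar{A}$ equals $u(1-q)+p$: the atom of $\bar{A}$ at $0$ and every bid in $(0, qu-p]$ yield the same payoff. The following Python sketch (the helper names \texttt{Bbar} and \texttt{payoff\_A} are ours) integrates Player A's expected payoff by midpoint quadrature:

```python
# Sketch: expected payoff of Player A playing Abar against Bbar in the mixed
# equilibrium of Theorem (ii) above (u^A = u^B = u, p > 0, 0 < q < 1, q*u - p > 0).
def Bbar(y, u, p, q):
    if y < p / q:
        return 1 - q + p / u              # flat part below p/q (atom at 0)
    return min(1.0, 1 - q + q * y / u)

def payoff_A(u, p, q, steps=100000):
    # atom of Abar at 0 (mass p/(q*u)): bidding 0 wins iff B's bid is below p/q
    atom = (p / (q * u)) * u * Bbar(0.0, u, p, q)
    # continuous part of Abar: density 1/(q*u) on (0, q*u - p]
    h = (q * u - p) / steps
    cont = sum((u * Bbar(((k + 0.5) * h + p) / q, u, p, q) - (k + 0.5) * h) * h / (q * u)
               for k in range(steps))
    return atom + cont

print(payoff_A(u=2.0, p=0.5, q=0.75))  # ≈ u*(1 - q) + p = 1.0
```

Midpoint quadrature suffices here because the integrand is constant on the support of $\bar{A}$.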
It is possible to deduce similar results for the case where $p < 0 $ and $ q > 1$ (this is not stated explicitly in \cite{konrad2002investment}). However, \cite{konrad2002investment} does not consider the cases where the additive asymmetry parameter $p$ is in favor of one player while the multiplicative asymmetry parameter $q$ is in favor of the~other. \subsection{Proof of Theorem~\ref{theo:positive}}\label{appen:proofAPAPos} \begin{proof} \textbf{Proof of Result $(i)$:} For any $x^B \ge u^B$ and any $x^A$, we have \mbox{$\Pi_{\FAPA}^B\left( x^A, x^B \right) < 0$}. Moreover, due to the condition $q u^B - p \le 0$, we have $x^A > q x^B - p$ for any $x^A \ge 0$ and $0 \le x^B < u^B$; that is, player B always loses if she bids strictly lower than $u^B$. Trivially, $x^B = 0$ is the unique dominant strategy of player B. Player A's best response against $x^B =0$ is $x^A=0$. In conclusion, we have: \begin{align*} & \Pi_{\FAPA}^A\left (0, 0\right) = u^A \textrm{ and } \Pi_{\FAPA}^A \left(x^A,0\right) = u^A - x^A < u^A, \forall x^A >0,\\ & \Pi_{\FAPA}^B\left (0, 0\right) = 0 \textrm{ and } \Pi_{\FAPA}^B \left(0, x^B\right) <0, \forall x^B >0. \end{align*} \textbf{Proof of Result $(ii)$:} First, from $0 < q u^B - p \le u^A$, we have $0 \le p/q < u^B$. We prove (by contradiction) that there exists no pure equilibrium under this condition. Assume that the profile $x^A, x^B$ is a pure equilibrium of the $\FAPA$ game.
We consider two cases: \begin{itemize} \item Case 1: If $x^A = 0$, then player B's best response is to choose $ x^B = p/q + \varepsilon$ with an infinitesimal $\varepsilon >0$ since, by doing so, she can guarantee to win (since $q(p/q+\varepsilon) -p =q \varepsilon > 0$) and gets the payoff $u^B - p/q - \varepsilon > 0$.\footnote{Note that if player B chooses $x^B = 0$, she loses and her payoff is only $0$.} However, player A's best response against $ x^B = p/q + \varepsilon$ \emph{is not} $x^A= 0$.\footnote{Player A's best response against $ x^B = p/q + \varepsilon$ is $x^A= q \varepsilon + \delta$ (with an infinitesimal $\delta >0$ such that $u^A - q \varepsilon - \delta > 0$).} \item Case 2: If $x^A > 0$, then player B's best response is either $x^B = (x^A + p)/q + \varepsilon$ if there exists $\varepsilon > 0$ small enough such that $q u^B - p -x^A - \varepsilon > 0$, or $x^B = 0$ if there is no such $\varepsilon$. However, $x^A > 0$ is not the best response of player A against either $x^B = (x^A + p)/q + \varepsilon$ or $x^B = 0$.\footnote{The best response of player A against $x^B = (x^A + p)/q + \varepsilon$ is $x^A + q\varepsilon + \delta$ where $0< \delta < u^A -x^A - q \varepsilon$ ($\delta$ exists thanks to the condition on $\varepsilon$ and the fact that $q u^B -p \le u^A $), and her best response against $x^B =0 $ is $x^A =0$.} \end{itemize} We conclude that $x^A, x^B$ cannot be best responses against each other; thus, there exists no pure equilibrium in this case. Now, we prove that if player B plays according to $\Biip$, player A has no incentive to deviate from playing according to $\Aiip$.
Denote by $A^+_2$ and $B^+_2$ the random variables that correspond to $\Aiip$ and $\Biip$. Since $\Aiip$ is a continuous distribution on $\left( 0, q u^B - p \right]$, we have: \begin{align} \Pi_{\FAPA}^A \left(\Aiip, \Biip \right) & = \left[u^A \prob\left(B^+_2 < \frac{p}{q} \right) - 0 \right] \prob\left (A^+_2 = 0 \right) + \left[\alpha u^A \prob\left(B^+_2 = \frac{p}{q} \right) - 0 \right] \prob\left (A^+_2 = 0 \right) \nonumber \\ & \qquad \qquad + \int_{0}^{q u^B - p} \left[ u^A \prob \left( B^+_2 < \frac{x + p}{q} \right) - x \right] \de \Aiip(x) \nonumber \\ & = u^A \Biip\left( \frac{p}{q}\right) \frac{p}{q u^B} + 0 + \int_{0}^{q u^B - p} \left[ u^A \Biip \left(\frac{x + p}{q} \right) - x \right] \de \Aiip(x) \label{eq:case1.2} \\ & = \left( u^A - qu^B + p \right) \frac{p}{q u^B} + \int_{0}^{q u^B - p} \left( u^A - qu^B + p \right)\frac{1}{q u^B} \de x \nonumber\\ & = u^A - q u^B +p. \nonumber \end{align} Here, \eqref{eq:case1.2} comes from the fact that $\prob \left(B^+_2 = p/q \right) =0$, which holds by definition. Now, if player A plays a pure strategy $x^A > q u^B -p$ while player B plays $\Biip$, her payoff~is: \begin{equation*} \Pi_{\FAPA}^A \left(x^A, \Biip \right) \le u^A - x^A < u^A - q u^B +p = \Pi_{\FAPA}^A \left(\Aiip, \Biip \right). \end{equation*} Moreover, for any pure strategy $x^A \in [0, q u^B - p]$, we have: \begin{align*} \Pi_{\FAPA}^A \left(x^A, \Biip \right) &= u^A \prob\left(B^+_2 < \frac{x^A \!+\! p}{q} \right) + \alpha u^A \prob\left(B^+_2 = \frac{x^A \!+\! p}{q} \right) \!-\! x^A\\ & \le u^A \Biip \left( \frac{x^A \!+\! p}{q} \right) \!-\! x^A = u^A \left[1 \!- \!\frac{q u^B}{u^A} \!+\! \frac{q}{u^A} \frac{(x^A \!+\! p)}{q}\right] \!-\! x^A \\ & \!=\! u^A \!-\! q u^B \!+\! p\\ & \!=\! \Pi_{\FAPA}^A \left(\Aiip, \Biip \right). \end{align*} In conclusion, $\Pi^A\left(\Aiip, \Biip \right) \ge \Pi^A \left(x^A, \Biip \right)$ for any $x^A \ge 0$.
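The constant-payoff computation above can also be checked numerically. The sketch below is our own; it reconstructs $\Biip$ from the closed form used in the display (the function and variable names are ours) and confirms on a grid of pure strategies that none earns Player A more than $u^A - q u^B + p$ against $\Biip$:

```python
# Assumed reconstruction of Biip from the computation above:
# Biip(y) = 1 - q*uB/uA + q*y/uA on [p/q, uB], flat below p/q (atom at 0).
def Biip(y, uA, uB, p, q):
    if y < p / q:
        return 1 - q * uB / uA + p / uA
    return min(1.0, 1 - q * uB / uA + q * y / uA)

def payoff_A(x, uA, uB, p, q):
    # Player A wins iff x > q*b - p, i.e. b < (x + p)/q; ties have probability 0
    return uA * Biip((x + p) / q, uA, uB, p, q) - x

uA, uB, p, q = 3.0, 2.0, 0.5, 1.0      # satisfies 0 < q*uB - p <= uA
eq_payoff = uA - q * uB + p            # claimed equilibrium payoff, here 1.5
best = max(payoff_A(k * uA / 600, uA, uB, p, q) for k in range(601))
print(abs(best - eq_payoff) < 1e-9)  # → True
```

The grid maximum coincides with the equilibrium payoff because the payoff is constant on the support of $\Aiip$ and strictly smaller beyond it.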
Similarly, we prove that when player A plays $\Aiip$, player B has no incentive to deviate from $\Biip$. Indeed, since $\Biip$ is a continuous distribution on $[p/q, u^B]$, we~have \begin{align} \Pi_{\FAPA}^B \left(\Aiip, \Biip \right) & = \left[u^B \prob\left(A^+_2 < 0 \right) - \frac{p}{q} \right] \prob\left (B^+_2 = \frac{p}{q} \right) \nonumber\\ & \qquad \qquad + \left[(1-\alpha) u^B \prob\left(A^+_2 = 0 \right) - \frac{p}{q} \right] \prob\left (B^+_2 = \frac{p}{q} \right) \nonumber \\ & \qquad \qquad + \int_{p/q}^{u^B} \left[ u^B \prob \left( A^+_2 < qx - p \right) - x \right] \de \Biip(x) \nonumber \\ & = 0 + 0 + \int_{p/q}^{u^B} \left[ u^B \Aiip \left( qx - p \right) - x \right] \de \Biip(x) \label{eq:explain} \\ & = \int_{p/q}^{u^B} \left[ u^B \left(\frac{p}{ q u^B} + \frac{qx - p}{q u^B} \right) - x \right]\frac{q}{u^A} \de x \nonumber \\ & = 0. \nonumber \end{align} Here, \eqref{eq:explain} comes from the fact that $\prob \left(B^+_2 = p/q \right) = 0$ and that $\prob (A^+_2 = z) =0$ for any \mbox{$z \in (0, q u^B -p]$}, both of which hold by definition. Now, as stated above, for any pure strategy \mbox{$x^B > u^B$}, trivially $\Pi^B_{\FAPA}(\Aiip, x^B) < 0$. Moreover, \mbox{$\Pi_{\FAPA}^B \left( \Aiip, x^B \right) \le u^B \Aiip \left( q x^B \!-\! p\right) \! -\! x^B \!= \!0$} for any \mbox{$x^B \in [0, u^B]$}. Therefore, we conclude that \mbox{$\Pi_{\FAPA}^B\left(\Aiip, \Biip \right) \ge \Pi_{\FAPA}^B \left(\Aiip,x^B \right)$} for any $x^B \ge 0$. \textbf{Proof of Result~$(iii)$:} Similarly to the proof of Result $(ii)$, we can prove that there exists no pure equilibrium if $ q u^B - p > u^A >0$. Now, let us denote by $A^+_3$ and $B^+_3$ the random variables that correspond to $\Aiiip$ and~$\Biiip$; we prove that if player B plays according to $\Biiip$, player A has no incentive to deviate from playing according to $\Aiiip$.
\begin{align*} \Pi_{\FAPA}^A \left(\Aiiip, \Biiip \right) & = \left[u^A \prob\left(B^+_3 < \frac{p}{q} \right) - 0 \right] \prob\left (A^+_3 = 0 \right) + \left[\alpha u^A \prob\left(B^+_3 = \frac{p}{q} \right) - 0 \right] \prob\left (A^+_3 = 0 \right) \nonumber \\ & \qquad \qquad + \int_{0}^{u^A} \left[ u^A \prob \left( B^+_3 < \frac{x + p}{q} \right) - x \right] \de \Aiiip(x) \nonumber \\ & = 0 + 0 + \int_{0}^{u^A} \left[ u^A \Biiip \left(\frac{x + p}{q} \right) - x \right] \de \Aiiip(x) \\ & = \int_{0}^{u^A} \left[u^A \left(\frac{-p}{u^A} + \frac{q}{u^A} \frac{(x + p)}{q} \right) - x \right] \de \Aiiip(x)\\ & = 0. \nonumber \end{align*} Moreover, trivially, for any $x^A > u^A$, we have $\Pi_{\FAPA}^A \left(x^A, \Biiip \right) < 0$ and for any \mbox{$x^A \in [0, u^A]$}, we have \begin{align*} \Pi_{\FAPA}^A \left(x^A, \Biiip \right) \le & u^A \Biiip \left( \frac{x^A + p}{q} \right) - x^A \\ = & u^A \left[\frac{-p}{u^A} + \frac{q}{u^A} \frac{(x^A + p)}{q} \right] - x^A \\ = & 0 = \Pi_{\FAPA}^A \left(\Aiiip, \Biiip \right). \end{align*} Therefore, $\Pi_{\FAPA}^A\left(\Aiiip, \Biiip \right) \ge \Pi_{\FAPA}^A \left(x^A, \Biiip \right)$ for any $x^A \ge 0$. On the other hand, since $\Biiip$ is a continuous distribution on $\left[\frac{p}{q}, \frac{u^A+p}{q} \right]$, we do not need to consider the tie cases and we can deduce that: \begin{align} \Pi_{\FAPA}^B \left(\Aiiip, \Biiip \right) & = \int_{p/q}^{\frac{u^A + p}{q}} \left[ u^B \Aiiip \left(qx - p \right) - x \right] \de \Biiip(x) \nonumber \\ & = \int_{p/q}^{\frac{u^A+p}{q}} \left[ u^B \left( 1- \frac{u^A}{q u^B} + \frac{qx- p}{q u^B} \right) - x \right]\frac{q}{u^A} \de x \nonumber \\ & = u^B - \frac{u^A + p}{q}.
\nonumber \end{align} Moreover, trivially, for any $x^B > u^B$, we have $\Pi^B\left( \Aiiip, x^B \right) < 0 < u^B - \frac{u^A + p}{q}$; and for any \mbox{$x^B \in [0, u^B]$}, we have: \begin{equation*} \Pi_{\FAPA}^B \left( \Aiiip, x^B \right) \le u^B \Aiiip \left( q x^B - p\right) - x^B = u^B \left( 1- \frac{u^A}{q u^B} + \frac{qx^B- p}{q u^B} \right) - x^B = u^B - \frac{u^A + p}{q}. \end{equation*} Therefore, we can conclude that \mbox{$\Pi_{\FAPA}^B\left(\Aiiip, \Biiip \right) \ge \Pi^B \left(\Aiiip,x^B \right)$} for any $x^B \ge 0$. Finally, for a proof of uniqueness of the mixed equilibrium in Results~$(ii)$ and~$(iii)$, we can follow the scheme presented by~\cite{baye1996all} and check through a series of lemmas. This is a standard approach in the all-pay auction literature, and we omit the detailed proof here. \end{proof} \subsection{Equilibrium of \FAPA when $p <0$, $q > 0$} \label{sec:appen:APAneg} We now consider the $\FAPA$ game in the case $p < 0$. We first define $p^{\prime} = -p/q$ and $q^{\prime} = 1/q$. Since $p<0$, we have $p^{\prime} >0$. Moreover, for any $x^A, x^B$, we have: \begin{equation*} \be \left(x^A, q x^B - p \right) = \be \left( (x^A +p)/q, x^B\right) = \be \left( q^{\prime} x^A - p^{\prime}, x^B \right). \end{equation*} Therefore, the $\FAPA$ game with $p < 0$ (and $q>0$) is equivalent to an \FAPA with $p^{\prime} >0 $ (and $q^{\prime} >0$) in which the roles of the players are exchanged. Applying Theorem~\ref{theo:positive} to this \emph{new} game, we can deduce the following theorem: \begin{theorem} \label{theo:negative} In the $\FAPA$ game where $p < 0$, we have the following results: \begin{itemize} \item[$(i)$] If $(u^A + p)/q \le 0$,\footnote{That is $q^{\prime} u^A - p^{\prime} \le 0$.} there exists a unique pure equilibrium where players' bids are $x^A = x^B = 0$ and their equilibrium payoffs are $\Pi^A = 0$ and $\Pi^B = u^B$ respectively.
\item[($ii$)] If $0 < (u^A + p)/q \le u^B$,\footnote{That is $0 < q^{\prime} u^A - p^{\prime} \le u^B$.} there exists no pure equilibrium; there is a mixed equilibrium where Player A (resp. Player B) draws her bid from the distribution $\Aiim$ (resp. $\Biim$) defined as follows. \begin{align} & \Aiim(x) = \left\{ \begin{array}{l} 1-\frac{u^A}{q u^B} - \frac{p}{q u^B}, \forall x \in \left[ 0, - p \right), \\ 1-\frac{u^A}{q u^B} + \frac{x}{q u^B}, \forall x \in \left[ -p, u^A \right],\\ 1 \qquad \qquad \qquad, \forall x > u^A, \end{array} \right. \textrm{ and } & \Biim(x) = \left\{ \begin{array}{l} - \frac{p}{u^A} + \frac{qx}{u^A}, \forall x \in \left[ 0, \frac{u^A+p}{q} \right] \\ 1 \qquad \qquad, \forall x > \frac{u^A+p}{q}. \end{array} \right. \label{eq:FA-Def} \end{align} In this mixed equilibrium, players' payoffs are $\Pi^A =0$ and $\Pi^B =u^B - (u^A+p)/q$. \item[$(iii)$] If $ (u^A + p)/q > u^B$,\footnote{That is $q^{\prime} u^A - p^{\prime} > u^B$.} there exists no pure equilibrium; there is a mixed equilibrium where Player A (resp. Player B) draws her bid from the distribution $\Aiiim$ (resp. $\Biiim$) defined as follows. \begin{align} & \Aiiim(x) = \left\{ \begin{array}{l} 0 \qquad \qquad, \forall x \in \left[ 0, -p \right), \\ \frac{p}{q u^B} + \frac{x}{q u^B}, \forall x \in \left[ -p, q u^B -p\right],\\ 1 \qquad \qquad, \forall x > q u^B -p, \end{array} \right. \textrm{ and } & \Biiim(x) = \left\{ \begin{array}{l} 1- \frac{q u^B}{u^A} + \frac{q \cdot x}{u^A}, \forall x \in \left[0, u^B \right], \\ 1 \qquad \qquad, \forall x > u^B. \end{array} \right. \label{eq:bFA-Def} \end{align} In this mixed equilibrium, players' payoffs are $\Pi^A = u^A -q u^B + p$ and $\Pi^B = 0$. \end{itemize} \end{theorem} Similarly to the case where $p \ge 0$, we can verify that in Theorem~\ref{theo:negative}, all the functions $\Aiim, \Biim,\Aiiim$ and $\Biiim$ satisfy the conditions of a distribution and are continuous on $[0, \infty)$.
These distributions are also in the class of uniform-type distributions. The interpretation of these functions is very similar to the analysis for $\Aiip, \Biip,\Aiiip$ and $\Biiip$, and their illustrations are given in Figure~\ref{fig2}. \begin{figure*} \caption{The mixed equilibrium of the $\FAPA$ with $p < 0$.} \label{fig2} \end{figure*} We compare the results in Theorem~\ref{theo:negative} with the results in the classical all-pay auction (Theorem~\ref{theo:clasAPA}). \section{Supplementary Materials for Results in Section~\ref{sec:OptUni_GRCBC}} We provide the proof of Theorem~\ref{theo:OUDs} in \ref{sec:appen_proof_OUD} and the proof of Proposition~\ref{propo:payoffEQ} in \ref{sec:appen_proof_payoffEQ}. The main tool used in these proofs is the notion of the winding number of parametric curves---an important notion in topology; for the sake of completeness, we first revisit the definitions related to this concept in \ref{sec:appen_winding}. \subsection{Preliminaries on Winding Number of Parametric Curves} \label{sec:appen_winding} Intuitively, the winding number of a 2-dimensional curve relative to a given (2-dimensional) point is the number of rotations the curve makes around the point. A 2-dimensional curve can be defined either as a continuous function from $\mathbb{R}$ to $\mathbb{R}^2$ (often used in topology) or as a complex-valued function of a real variable (often used in complex analysis); as a consequence, the winding number can also be defined by using either topological terminology or the Cauchy integral formula (\ie the contour integral) of complex curves. In this work, we choose the former approach, \ie we define the winding number in the topological sense. The related definitions presented below are mostly extracted from \cite{chinn1966first} and rewritten in our notation. We begin with the basic concepts of parametric curves in~topology.
Given a range $[a,b] \subset \mathbb{R}$, any continuous mapping $\curve: [a,b] \rightarrow \mathbb{R}^2$ is called a \emph{parametric curve}. Henceforth, we refer to a parametric curve simply as a \emph{curve}. For a curve $\curve$, we abuse notation and denote its image (\ie $\curve([a,b]) \subset \R^2$) also by $\curve$. Two types of curves are of special interest in this work: \begin{definition}[Closed curve and short curve] {$~$} \begin{itemize} \item $ \curve: [a,b] \rightarrow \mathbb{R}^2$ is a closed curve if and only if $\curve(a) = \curve(b)$. \item $\curve: [a,b] \rightarrow \mathbb{R}^2$ is a short curve relative to a point $y \in \mathbb{R}^2$ if there exists a ray, say $R$, emanating from $y$ which does not intersect $\curve$. \end{itemize} \end{definition} Now, in the $\R^2$ plane, for a given ray $R$ starting from $y \in \mathbb{R}^2$, we call $\pole (R,y)$ the polar coordinate system whose pole (\ie the reference point) is $y$ and whose polar axis (\ie the reference direction) is~$R$. For any point $x \in \R^2$, we denote its angular coordinate in $\pole(R,y)$ by $\mathbb{A}^{(R,y)}_x$; WLOG, we assume that $\mathbb{A}^{(R,y)}_x \in [0,2\pi], \forall x \in \R^2$ (the counterclockwise rotation is positive). Given a \emph{short} curve $\curve:[a,b] \rightarrow \R^2$, a point $y \notin \curve$ (\ie $\nexists t \in [a,b]: \curve(t) =y$) and a ray $R$ emanating from $y$ which does not intersect $\curve$, the value of the \emph{angle swept} by $\curve$ relative to the point $y$ is defined as~follows: \begin{equation*} \A(\curve, y ) = \mathbb{A}^{(R,y)}_{\curve(b)} - \mathbb{A}^{(R,y)}_{\curve(a)}. \end{equation*} In other words, the angle swept by a short curve relative to a point is the difference between the angular coordinates of its two end-points in the corresponding polar coordinate system.
\begin{lemma}[extracted from \cite{chinn1966first}] \label{lem:angle_swept} The angle swept by short curves is additive; more formally, let $a<b<c$ be real numbers and let $\curve: [a,c] \rightarrow \R^2$ be a short curve relative to a point $y$. Denote $\curve_1= \restr{\curve}{[a,b]}$ and $\curve_2= \restr{\curve}{[b,c]}$; then $\curve_1, \curve_2$ are also short curves relative to $y$ and we have $\A(\curve,y) =\A(\curve_1,y) + \A(\curve_2,y) $. \end{lemma} Next, we define the notion of sufficiently-fine partitions of a curve. For any curve $\curve:[a,b] \rightarrow \R^2$, let $t_0,t_1, \ldots, t_m$ be a sequence of real numbers such that $a=t_0 < t_1<\ldots< t_m =b$. A \emph{sufficiently-fine partition} of the curve $\curve$ relative to a point $y$ is a sequence of curves $\{\curve_1, \curve_2, \ldots, \curve_m\}$ such that $\curve_i = \restr{\curve}{[t_{i-1},t_i]}$ is a \emph{short} curve relative to $y$ for any $i =1, 2,\ldots, m$. Importantly, for any curve $\curve$ and any point $y \notin \curve$, there always exists a sufficiently-fine partition of $\curve$ relative to $y$. Moreover, if $\{\curve_1, \curve_2, \ldots, \curve_m\}$ and $\{\curve^{\prime}_1, \curve^{\prime}_2, \ldots, \curve^{\prime}_k\}$ are two sufficiently-fine partitions of the curve $\curve$ relative to $y$, then \mbox{$\sum_{j=1}^m \A(\curve_j,y) = \sum_{j=1}^k \A(\curve^{\prime}_j,y)$} (see \cite{chinn1966first} for proofs of these~statements). Based on Lemma~\ref{lem:angle_swept} concerning the angle swept by short curves and the notion of sufficiently-fine partitions, we can define the angle swept by any generic parametric curve (not necessarily short), which induces the definition of the winding number.
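As an illustration (our own sketch, not extracted from \cite{chinn1966first}), the angle-accumulation idea translates directly into code: for a densely sampled closed curve, each segment between consecutive samples plays the role of a short curve, and summing the normalized angle increments recovers the winding number.

```python
import math

def winding_number(points, center):
    """Winding number of a sampled closed curve around `center`,
    assuming consecutive samples are close enough that each segment
    sweeps an angle of less than pi (i.e. acts as a 'short curve')."""
    cx, cy = center
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        d = math.atan2(y1 - cy, x1 - cx) - math.atan2(y0 - cy, x0 - cx)
        # bring each increment into (-pi, pi]: the short-curve normalization
        if d <= -math.pi:
            d += 2 * math.pi
        elif d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

# A circle sampled at 100 points winds once around an interior point
circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
          for k in range(100)]
print(winding_number(circle, (0.0, 0.0)))  # → 1
print(winding_number(circle, (2.0, 0.0)))  # → 0
```

Reversing the traversal direction flips the sign of the result, matching the sign convention (counterclockwise positive) above.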
\begin{definition}[Angle swept by a curve] For any curve $\curve$ and any point $y \notin \curve$, the angle swept by $\curve$ relative to $y$ is defined as $\A(\curve,y) = \sum_{i=1}^m {\A(\curve_i,y)}$ where $\{\curve_1, \curve_2, \ldots, \curve_m\}$ is any sufficiently-fine partition of $\curve$ relative to $y$ (note that for any $i=1,2,\ldots, m$, $\A(\curve_i,y)$ is well-defined since $\curve_i$ is a short curve relative to $y$). \end{definition} \begin{definition}[Winding number] Given a \emph{closed} curve $\curve$ and a point $y \notin \curve$, the winding number of $\curve$ around $y$ is defined~as: \begin{equation*} \W(\curve,y) = \frac{\A(\curve,y)}{2\pi}. \end{equation*} \end{definition} Trivially, since $\curve$ is a closed curve, its winding number $\W(\curve,y)$ is an integer. Finally, we present the following fixed-point theorem based on the notion of winding number. \begin{lemma}[Fixed-point theorem (extracted from \cite{chinn1966first})] \label{lem:winding} Let $D$ be a disk in $\R^2$ (or any set topologically equivalent to a disk) and let $\partial D$ be its boundary. Let $G: D \rightarrow \R^2$ be a continuous mapping and $y\in \R^2$ such that $y \notin G(\partial D)$. If the closed curve $\curve:= G \parens*{\partial D}$ has a non-zero winding number around $y$, then $y \in G(D)$; in other words, there exists $x \in D$ such that $G(x) = y$. \end{lemma} A proof of this theorem is given in \cite{chinn1966first}. We note that this theorem can be considered as a generalization of the intermediate value theorem (for 1-dimensional functions). \subsection{Proof of Theorem~\ref{theo:OUDs} (OUDs Construction of the $\FCB$ Game)} \label{sec:appen_proof_OUD} While the proof of Result~$(ii)$ of Theorem~\ref{theo:OUDs} was presented in full in Section~\ref{sec:OptUni_GRCBC}, we prove Result~$(i)$ of this theorem in this section.
In other words, given a game instance $\FCBn$, we look for a set $D \subset \R^2$, topologically equivalent to an $\R^2$-disk, such that, when combined with the function $G$ corresponding to $\FCBn$, this set satisfies the sufficient conditions of Lemma~\ref{lem:wind} (which is an adaptation of Lemma~\ref{lem:winding} to the problem under consideration). Particularly, \emph{we want to find $D$ such that $G(\partial D)$ (where $G$ is defined as in \eqref{eq:G_func}) is a closed curve and it has a non-zero winding number around $(0,0)$}. As discussed in Section~\ref{sec:OptUni_GRCBC}, if $p_i=0, \forall i \in [n]$, we can follow the approach of \cite{kovenock2020generalizations} to convert System~\eqref{eq:system_f} into a real-valued 1-dimensional function and proceed to prove the existence of its solution via the intermediate value theorem. In the remainder of this section, we assume that in the game $\FCBn$, there exists $i \in [n]$ such that $p_i \neq 0$. For the sake of brevity, we use the notations $I_{\ge0}:= \braces*{i \in [n]: p_i \ge 0}$ and $I_{<0}:= \braces*{i \in [n]: p_i < 0}$. For any given $(\gA, \gB) \in \R^2$, we also define $\Iplus:= \braces*{j \in I_{\ge0}: {\gB > \frac{p_j}{q_j w_j }} }$ and \mbox{$\Iminus := \braces*{j \in I_{<0}: {\gA > \frac{-p_j}{w_j }} }$}. Now, let us choose $D$ to be a rectangle whose vertices are $(\delta,\delta)$, $(L, \delta)$, $(L,L)$ and $(\delta,L)$, where $\delta, L$ will be defined later such that $0< \delta < L$. More formally, we define the following closed curve: \begin{align*} \curve \colon& [\delta, 4L-3\delta] \longrightarrow\R^2 \nonumber \\ & \phantom{+++} t \phantom{+++} \longmapsto \left\{ \begin{array}{l} (t, \delta), \forall t \in [\delta, L], \\ (L, \delta-L+t), \forall t \in [L,2L-\delta],\\ (3L - t -\delta, L), \forall t \in [2L-\delta, 3L-2\delta],\\ (\delta, 4L - 2 \delta - t), \forall t \in [3L-2\delta, 4L-3\delta]. \end{array} \right.
\end{align*} In other words, $\curve$ is a parametric curve form of $\partial D$ that starts at $(\delta, \delta)$, goes along the sides of the rectangle $D$ in the counter-clockwise direction and stops when it reaches $(\delta, \delta)$ again. For ease of notation, we denote by $G(\curve)$ the $G$-image of $\curve$; importantly, by the definition of $G$ and the fact that $G$ is a continuous mapping, $G(\curve)$ is also a closed curve in $\R^2$. If $(0,0) \in G(\curve)$ (\ie $(0,0)$ lies on the $G$-image of $\partial D$), then we have proved that there exists a positive zero of $G$; thus, we can conclude the proof in this case. In the remainder of the proof, we assume that $(0,0) \notin G(\curve)$. Now, we also define \begin{itemize} \item $\curve_1:= \restr{\curve}{[\delta, L]}$ representing the curve going along the side of the rectangle $D$ from $(\delta, \delta)$ to~$(L,\delta)$, \item $\curve_2:= \restr{\curve}{[L,2L-\delta]}$ representing the curve going along the side of the rectangle $D$ from $(L,\delta)$ to~$(L,L)$, \item $\curve_3:= \restr{\curve}{[2L-\delta, 3L-2\delta]}$ representing the curve going along the side of the rectangle $D$ from $(L,L)$ to~$(\delta, L)$, \item $\curve_4:= \restr{\curve}{[3L-2\delta, 4L-3\delta]}$ representing the curve going along the side of the rectangle $D$ from $(\delta,L)$ to~$(\delta,\delta)$. \end{itemize} For any $i=1,2,3,4$, we also denote by $G(\curve_i)$ the $G$-image of $\curve_i$ and note that $G(\curve_i)$ is also a parametric curve in $\R^2$. We proceed by showing that $\braces*{G(\curve_1), G(\curve_2), G(\curve_3), G(\curve_4)}$ is a sufficiently-fine partition of $G(\curve)$ (\ie of the $G$-image of $\partial D$) relative to $(0,0)$ and, moreover, that $\sum_{j=1}^4 \A(G(\curve_j),(0,0)) \neq 0$. The curves $G(\curve_1), G(\curve_2), G(\curve_3), G(\curve_4)$ have different forms and features; each requires a different analysis. We do this in six steps as~follows.
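As a sanity check (ours, for illustration only), the piecewise parameterization of $\curve$ displayed above can be transcribed into code and evaluated at the breakpoints, confirming that it traverses the four vertices of $D$ counter-clockwise and closes at $(\delta,\delta)$:

```python
def curve(t, delta, L):
    """Counter-clockwise parameterization of the rectangle boundary,
    transcribed from the displayed piecewise formula."""
    if t <= L:
        return (t, delta)
    if t <= 2 * L - delta:
        return (L, delta - L + t)
    if t <= 3 * L - 2 * delta:
        return (3 * L - t - delta, L)
    return (delta, 4 * L - 2 * delta - t)

d, L = 0.5, 3.0
# breakpoints of the parameterization, in traversal order
for t in (d, L, 2 * L - d, 3 * L - 2 * d, 4 * L - 3 * d):
    print(curve(t, d, L))
# → (0.5, 0.5), (3.0, 0.5), (3.0, 3.0), (0.5, 3.0), (0.5, 0.5)
```

The first and last points coincide, so $\curve$ is indeed a closed curve in the sense of the definition in \ref{sec:appen_winding}.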
\textit{\underline{Step 0:} Prove that $G(\delta,\delta) \in \R^2_{<0}$ and $G(L,L) \in \R^2_{>0}$ when $\delta$ is small enough and $L$ is large enough.} Let us choose $\delta$ such that \begin{align} & \delta < \bar{\delta} =\left\{ \begin{array}{l} \min \braces*{ \min \limits_{i \in I_{\ge 0}} \braces*{\frac{p_i}{q_i w_i}}, \min\limits_{i \in I_{<0}} \braces*{\frac{-p_i}{w_i}} }, \textrm{ if } I_{\ge0} \neq \emptyset \textrm{ and } I_{<0} \neq \emptyset , \\ \min\limits_{i \in I_{\ge0}} \braces*{\frac{p_i}{q_i w_i}}, \textrm{ if } I_{<0} = \emptyset,\\ \min \limits_{i \in I_{<0}} \braces*{\frac{-p_i}{w_i}}, \textrm{ if } I_{\ge0} = \emptyset. \end{array} \right. \label{eq:delta_choice} \end{align} Trivially, under the assumptions mentioned above, $\bar{\delta} >0$, so such a $\delta$ exists; moreover, $\Ip(\delta, \delta) = \In(\delta, \delta) = \emptyset$; thus, \begin{equation*} g^A(\delta, \delta) = -X^B \delta <0 \quad \textrm{ and } \quad g^B(\delta, \delta) = -X^A \delta <0 . \end{equation*} On the other hand, for any $L > L_0:= \max \braces*{ \max \limits_{i \in I_{\ge0}} \braces*{\frac{p_i}{q_i w_i}}, \max \limits_{i \in I_{<0}} \braces*{\frac{-p_i}{w_i}}}$, we have $I^+(L,L) = I_{\ge0}$ and $I^-(L,L) = I_{<0}$. Therefore, we have \begin{align} g^A (L,L ) &\!=\! \sum_{i \in I_{\ge0}} \frac{ \bracks*{h_i(L,L)}^2 \!-\! {p_i}^2 }{2 q_i w_i } + \sum_{i \in I_{<0}} \frac{ \bracks*{h_i(L,L)}^2 }{2 q_i w_i} - X^B L \label{eq:LLA} \\ g^B (L,L) &\!=\! \sum_{i \in I_{\ge0}} \frac{ \left[h_i(L,L) \!-\! p_i\right]^2}{2 q_i w_i } \!+\! \sum_{i \in I_{<0}} \frac{ \left[ h_i(L,L) \!-\!p_i \right]^2 \!-\! p_i^2 }{2 q_i w_i} \!-\! X^A L. \label{eq:LLB} \end{align} By definition, $h_i(L,L) := \min \{ q_i w_i L, w_i L + p_i \}$; therefore, the right-hand sides of~\eqref{eq:LLA} and~\eqref{eq:LLB} are quadratic expressions in terms of $L$ with a strictly-positive second-degree coefficient; in other words, they are convex functions in terms of $L$.
Therefore, there exists a constant $L_1>0$ such that for any $L \ge \max \braces*{L_0,L_1}$, we have $g^A (L,L ) >0$ and $g^B (L,L ) >0$. \textit{\underline{Step 1:} Prove that $G(\curve_1)$ is a short curve and $\A(G(\curve_1), (0,0)) <0$ when $\delta$ is small enough and $L$ is large enough.} We recall that $\curve_1:[\delta, L] \rightarrow \R^2$ and $\curve_1(t) = (t, \delta)$. Now, from the choice of $\delta$ as in \eqref{eq:delta_choice}, for any $t \in [\delta, L]$, we have $\delta < \frac{p_i}{q_i w_i}$ for any $i \in I_{\ge0}$; therefore, $I^+(t,\delta) = \emptyset$. Thus, \begin{equation*} g^A (t,\delta) = \sum_{i \in I^-(t, \delta)} \frac{h_i(t, \delta)^2}{2q_i w_i} - X^B t. \end{equation*} If $I_{<0} = \emptyset$, trivially $g^A (t,\delta) <0$. Now, if $I_{<0} \neq \emptyset$, we have: \begin{align*} g^A (t,\delta) & \le \sum_{i \in I^-(t, \delta)} \frac{(q_i w_i \delta)^2}{2q_i w_i} - X^B t \\ & = \sum_{i \in I^-(t, \delta)} \bracks*{ \frac{q_i w_i \delta^2}{2} - \frac{X^B t}{| I^-(t, \delta) |}} \\ & \le \sum_{i \in I^-(t, \delta)} \bracks*{ \frac{q_i w_i \delta^2}{2} - \frac{X^B \delta}{| I^-(t, \delta) |}} \quad \textrm{(since } \delta \le t). \end{align*} Therefore, for any $\delta < \frac{X^B}{| I^-(t, \delta) | } \cdot \frac{2}{\max_{i \in I_{<0}} \braces*{q_i w_i}}$, we always have $ g^A (t,\delta)<0$. In other words, for any $t \in [\delta, L]$, the curve $G(t, \delta)$ lies to the left of the Oy axis in the $\R^2$-plane. We conclude that $G(\curve_1)$ is a short curve relative to $(0,0)$ (we can use $R_1: [0, + \infty) \rightarrow \R^2$ such that $R_1(t) = (t,0)$ as the reference~ray). On the other hand, \begin{equation*} g^B (t,\delta) = \sum_{i \in I^-(t, \delta)} \frac{ \bracks*{h_i(t, \delta) - p_i}^2 - p_i^2}{2q_i w_i} - X^A \delta. \end{equation*} For any $i \in I^-(t, \delta)$, $\bracks*{h_i(t, \delta) - p_i}^2 - p_i^2$ is an increasing function of $t$.
Intuitively, this means that as $t$ increases from $\delta$ to $L$, the curve $G(\curve_1)$ lies to the left of the Oy-axis and only goes upward. Therefore, the angle swept by $G(\curve_1)$ relative to $(0,0)$ is negative, \ie $\A(G(\curve_1),(0,0)) < 0$. \textit{\underline{Step 2:} Prove that $G(\curve_2)$ is a short curve and $\A(G(\curve_2), (0,0)) <0$ when $\delta$ is small enough and $L$ is large enough.} We recall that $\curve_2:[L, 2L-\delta] \rightarrow \R^2$ and $\curve_2(t) = (L, t-L + \delta)$. In this step, for the sake of brevity, let us denote $t^{\prime} = t- L +\delta$; as $t$ increases from $L$ to $2L - \delta$, we have $t^{\prime}$ increases from $\delta$ to $L$. In other words, we can rewrite $\curve_2(t) = (L,\tp)$. First, to prove that $G(\curve_2)$ is a short curve, we will prove that it does not intersect the ray $R_4: [0, + \infty) \rightarrow \R^2$ such that $R_4(t) = (0,-t)$; intuitively, this means that all intersection points of $G(\curve_2)$ and the Oy-axis have non-negative y-coordinates. Indeed, fix a number $\tp \in [\delta, L]$ such that \mbox{$g^A(L, \tp) = 0$};\footnote{There exists such $\tp$ since $g^A(L,\delta)<0$ and $g^A(L,L)>0$ (see Steps 0 and 1).} then we have $h_i(L, \tp):= \min \braces*{q_i w_i \tp, w_i L + p_i } = q_i w_i \tp$ for any $i \in I^+(L,\tp)$. We can see this by using proof by contradiction: assume otherwise, \ie assume there exists $j \in I^+(L,\tp)$ such that $q_j w_j \tp > w_j L + p_j$, then $g^A(L, \tp) > \frac{(w_j L + p_j)^2}{2 q_j w_j} - X^B L >0 $ when we choose $L$ large enough;\footnote{The right-hand side of this inequality is a quadratic expression in terms of $L$ with a strictly-positive second-degree coefficient.} this contradicts the assumption that $g^A(L,\tp) = 0$.
As a~consequence, \begin{align} & g^A(L,\tp) = 0 \nonumber\\ \Leftrightarrow & \sum_{i \in I^+(L, \tp)} \frac{h_i(L,\tp)^2 - p_i^2 }{2q_i w_i} + \sum_{i \in I^-(L, \tp)}\frac{h_i(L,\tp)^2}{2q_i w_i} - X^B L = 0 \label{eq:curve2_mid}\\ \Leftrightarrow & \sum_{i \in I^+(L, \tp)} \frac{(q_i w_i \tp)^2 - p_i^2 }{2q_i w_i} + \sum_{i \in I^-(L, \tp)}\frac{(q_i w_i \tp)^2}{2q_i w_i} - X^B L = 0, \nonumber \end{align} which implies that \begin{equation} \sum_{i \in I^+(L, \tp)} \frac{(q_i w_i \tp)^2 - p_i^2 }{2q_i w_i} - X^B L \le 0. \label{eq:curve2_A} \end{equation} On the other hand, we have: \begin{align} g^B (L,\tp) & = \sum_{i \in I^+(L, \tp)} \frac{ \bracks*{h_i(L, \tp) - p_i}^2}{2q_i w_i} + \sum_{i \in I^-(L, \tp)} \frac{ \bracks*{h_i(L, \tp) - p_i}^2 - p_i^2}{2q_i w_i} - X^A \tp \nonumber\\ & = \sum_{i \in I^+(L, \tp)} \frac{-2 h_i(L,\tp) p_i + 2 p_i^2}{2q_i w_i} + \sum_{i \in I^-(L, \tp)}\frac{-2 h_i(L,\tp) p_i}{2q_i w_i} + X^B L - X^A \tp \quad \textrm{(from \eqref{eq:curve2_mid})} \nonumber\\ & \ge -\sum_{i \in I^+(L, \tp)} \frac{2 q_i w_i \tp p_i + p_i^2}{2q_i w_i} + X^B L - X^A \tp \quad \textrm{(since } p_i <0, \forall i \in I^-(L, \tp)). \label{eq:curve2_end} \end{align} Now, if $I^+(L,\tp) = \emptyset$, from \eqref{eq:curve2_end}, trivially $g^B (L,\tp) \ge 0$ (since $X^A\le X^B$ and $\tp \le L $). In cases where $I^+(L,\tp) \neq \emptyset$, we have $\sum_{i \in I^+(L,\tp)} \frac{q_i w_i}{2} >0$ and we define \begin{equation*} C_2(L): = \sqrt{\frac{\sum_{i \in I^+(L,\tp)} \frac{p_i^2}{2 q_i w_i} + X^B L}{\sum_{i \in I^+(L,\tp)} \frac{q_i w_i}{2}}}. \end{equation*} From \eqref{eq:curve2_A}, we have $\tp \le C_2(L)$.
Combining this with \eqref{eq:curve2_end}, we have \begin{equation} g^B (L,\tp) \ge -\sum_{i \in I^+(L, \tp)} \frac{2 q_i w_i \tp p_i + p_i^2}{2q_i w_i} + X^B L - X^A \cdot C_2(L).\label{eq:curve2_stop} \end{equation} Now, since $C_2(L) = \bigoh(\sqrt{L})$, the right-hand side of \eqref{eq:curve2_stop} is a quadratic expression in terms of $\sqrt{L}$ with a strictly-positive second-degree coefficient; therefore, there exists $L_2>0$ large enough such that for any $L\ge \max\braces*{L_0, L_1, L_2}$, if $g^A(L,\tp) =0$, we always have $ g^B (L,\tp) \ge 0 $. Finally, we conclude that $G(\curve_2)$ is a short curve relative to $(0,0)$ (we can choose $R_4$ defined above as the reference ray). Note that $G(L,\delta)$ and $G(L,L)$ are respectively the starting point and ending point of $G(\curve_2)$; moreover, from Steps 0 and 1, we have proved $g^A(L,\delta) <0 $ and $g^A(L,L)>0$; therefore, the angle swept by $G(\curve_2)$ relative to $(0,0)$ is also negative, \ie $\A(G(\curve_2),(0,0)) <0$. \textit{\underline{Step 3:} Prove that $G(\curve_3)$ is a short curve and $\A(G(\curve_3), (0,0)) <0$ when $\delta$ is small enough and $L$ is large enough.} We recall that $\curve_3:[2L-\delta, 3L-2\delta] \rightarrow \R^2$ and $\curve_3(t) = (3L-t-\delta, L)$. In this step, for the sake of brevity, let us denote $t^{\prime} = t- 2L +2\delta$; as $t$ increases from $2L-\delta$ to $3L- 2\delta$, we have $t^{\prime}$ increases from $\delta$ to $L$. In other words, we can rewrite $\curve_3(t) = (L + \delta - \tp, L)$. First, we aim to prove that $G(\curve_3)$ does not intersect the ray $R_3:[0,\infty) \rightarrow \R^2$ where \mbox{$R_3(t) = (-t,0)$}, \ie we prove that if $g^B (L+\delta - \tp,L) =0$ then $g^A (L+\delta - \tp,L) >0$.
To do this, we notice that for any $\tp \in \bracks{\delta,L}$ such that $ g^B(L+\delta - \tp,L) =0$, we have \begin{equation} h_i(L+ \delta - \tp,L) = \min\braces*{q_i w_i L, w_i (L+\delta-\tp) + p_i} = w_i (L+\delta-\tp) + p_i, \forall i \in I^-(L+\delta-\tp,L).\label{eq:curve_3_contr} \end{equation} Indeed, we can prove \eqref{eq:curve_3_contr} by contradiction: assume otherwise, \ie assume that there exists $j \in I^-(L+\delta-\tp,L)$ such that $q_j w_j L < w_j (L+\delta-\tp) + p_j$, then \mbox{$g^B(L+\delta - \tp,L) > \frac{(q_j w_j L - p_j)^2 - p_j^2}{2q_j w_j} - X^A L >0$} when we choose $L$ large enough; this contradicts the assumption that $g^B(L+\delta - \tp,L) =0$. As a consequence of \eqref{eq:curve_3_contr}, we have: \begin{align} & g^B(L+\delta - \tp,L) = 0 \nonumber \\ \Leftrightarrow & \sum_{i \in I^+(L+\delta - \tp,L) } \frac{ \bracks*{h_i(L+\delta - \tp,L) - p_i}^2}{2q_i w_i} + \sum_{i \in I^-(L+\delta - \tp,L) } \frac{ \bracks*{h_i(L+\delta - \tp,L) - p_i}^2 - p_i^2}{2q_i w_i} - X^A L = 0 \label{eq:curve_3:gB} \\ \Rightarrow & \sum_{i \in I^-(L+\delta - \tp,L) } \frac{ \bracks*{h_i(L+\delta - \tp,L) - p_i}^2 - p_i^2}{2q_i w_i} - X^A L \le 0 \nonumber \\ \Rightarrow & \sum_{i \in I^-(L+\delta - \tp,L) } \frac{ \bracks*{w_i(L+\delta - \tp)}^2 - p_i^2}{2q_i w_i} - X^A L \le 0. \label{eq:curve_3:mid} \end{align} On the other hand, \begin{align} & g^A(L+\delta - \tp,L) \nonumber \\ =& \sum_{i \in I^+(L+\delta - \tp,L)} \frac{h_i(L+\delta - \tp,L)^2 - p_i^2 }{2q_i w_i} + \sum_{i \in I^-(L+\delta - \tp,L)}\frac{h_i(L+\delta - \tp,L)^2}{2q_i w_i} - X^B L \nonumber \\ =& \sum_{i \in I^+(L+\delta - \tp,L)} \frac{2 h_i(L+\delta - \tp,L) p_i }{2q_i w_i} + \sum_{i \in I^-(L+\delta - \tp,L)} \frac{w_i(L+\delta - \tp) p_i }{2 q_i w_i} + X^B L - X^A(L+\delta - \tp) \quad \textrm{(due to \eqref{eq:curve_3:gB})} \nonumber\\ \ge & \sum_{i \in I^-(L+\delta - \tp,L)} \frac{w_i(L+\delta - \tp) p_i }{2 q_i w_i} + X^B L - X^A(L+\delta - \tp) \quad \textrm{(since } p_i >0, \forall
i \in I^+(L+\delta - \tp,L)). \label{eq:curve_3:almost} \end{align} Now, if $I^-(L+\delta - \tp,L) = \emptyset$, from \eqref{eq:curve_3:almost}, trivially $g^A(L+\delta - \tp,L)>0$ since $X^B\ge X^A$ and $ L \ge L+\delta-\tp , \forall \tp \in [\delta, L]$. Conversely, if $I^-(L+\delta - \tp,L) \neq \emptyset$, we have $\sum_{i \in I^-(L+\delta - \tp,L)} \frac{w_i^2}{2 q_i w_i} >0$, and we can define: \begin{equation*} C_3(L): = \sqrt{\frac{\sum \limits_{i \in I^-(L+\delta - \tp,L)} \frac{p_i^2}{2q_i w_i} + X^A L }{\sum \limits_{i \in I^-(L+\delta - \tp,L)} \frac{w_i^2}{2 q_i w_i} }}. \end{equation*} From \eqref{eq:curve_3:mid}, we have $ (L+\delta - \tp) \le C_3(L)$; therefore, combining with \eqref{eq:curve_3:almost}, we have \begin{equation} g^A(L+\delta - \tp,L) \ge \sum_{i \in I^-(L+\delta - \tp,L)} \frac{w_i(L+\delta - \tp) p_i }{2 q_i w_i} + X^B L - X^A \cdot C_3(L).\label{eq:curve3_end} \end{equation} Now, since $C_3(L) = \bigoh(\sqrt{L})$, the right-hand side of \eqref{eq:curve3_end} is a quadratic expression in terms of $\sqrt{L}$ with a strictly-positive second-degree coefficient; therefore, with $L$ large enough, we have $ g^A(L+\delta - \tp,L) \ge 0$. In conclusion, we have proved that if $g^B(L+\delta - \tp,L)=0$, we always have $g^A(L+\delta - \tp,L)\ge 0$; thus, $G(\curve_3)$ does not intersect the ray $R_3$. Therefore, $G(\curve_3)$ is a short curve relative to $(0,0)$. Note that $G(L,L)$ and $G(\delta,L)$ are respectively the starting point and ending point of $G(\curve_3)$; moreover, since $g^B(L,L)>0$ and $g^B(\delta,L)>0$, we conclude that the angle swept by $G(\curve_3)$ relative to $(0,0)$ is negative, \ie $\A(G(\curve_3),(0,0)) <0$. \textit{\underline{Step 4:} Prove that $G(\curve_4)$ is a short curve and $\A(G(\curve_4), (0,0)) <0$ when $\delta$ is small enough and $L$ is large enough.} We recall that $\curve_4:[3L-2\delta, 4L-3\delta] \rightarrow \R^2$ and $\curve_4(t) = (\delta, 4L-2\delta-t)$.
In this step, for the sake of brevity, let us denote $\tp = t - 3L + 3\delta$; as $t$ increases from $3L-2\delta$ to $4L- 3\delta$, we have $t^{\prime}$ increases from $\delta$ to $L$. In other words, we can rewrite $\curve_4(t) = (\delta, L+\delta-\tp)$. As $\delta$ is chosen as in~\eqref{eq:delta_choice}, we have $I^-(\delta, L+\delta-\tp)= \emptyset$; therefore, \begin{align} g^B (\delta, L+\delta-\tp) = \sum_{i \in I^+(\delta, L+\delta-\tp)} \frac{ \bracks*{ h_i(\delta, L+\delta-\tp) - p_i }^2 }{2q_i w_i} - X^A (L+\delta-\tp). \label{eq:curve_4_gb} \end{align} If $I^+(\delta, L+\delta-\tp)= \emptyset$, then from \eqref{eq:curve_4_gb}, trivially $g^B (\delta, L+\delta-\tp)<0$. Conversely, if $I^+(\delta, L+\delta-\tp) \neq \emptyset$, we can rewrite \eqref{eq:curve_4_gb} as follows: \begin{align} g^B (\delta, L+\delta-\tp) = & \sum_{i \in I^+(\delta, L+\delta-\tp)} \bracks*{\frac{ \bracks*{ h_i(\delta, L+\delta-\tp) - p_i }^2 }{2q_i w_i} - \frac{X^A (L+\delta-\tp)}{ | I^+(\delta, L+\delta-\tp)|}} \nonumber\\ \le & \sum_{i \in I^+(\delta, L+\delta-\tp)} \bracks*{ \frac{ \bracks*{ w_i \delta}^2 }{2q_i w_i} - \frac{X^A \delta}{ | I^+(\delta, L+\delta-\tp)|} }. \nonumber \end{align} Therefore, if $\delta < \frac{X^A}{| I^+(\delta, L+\delta-\tp)| } \min_{i \in I_{\ge0}} \braces*{ \frac{2q_i}{w_i}}$, we have $ g^B (\delta, L+\delta-\tp) < 0$. Therefore, the curve $G(\curve_4)$ does not intersect the ray $R_2: [0,\infty) \rightarrow \R^2$ such that $R_2(t) = (0,t)$; thus, $G(\curve_4)$ is a short curve. Note that $G(\delta, L)$ and $G(\delta,\delta)$ are respectively the starting point and ending point of $G(\curve_4)$; moreover, for $L$ large enough, we have \begin{equation*} g^B(\delta, L) = \sum_{i \in I^+(\delta, L)} \frac{ \bracks*{ h_i(\delta, L) - p_i }^2 }{2q_i w_i} - X^A L \le \sum_{i \in I^+(\delta, L)} \frac{ \bracks*{ w_i \delta }^2 }{2q_i w_i} - X^A L \le - X^A \delta = g^B(\delta, \delta).
\end{equation*} Therefore, the angle swept by $G(\curve_4)$ relative to $(0,0)$ is negative, \ie $\A(G(\curve_4) ,(0,0)) <0$. \textit{\underline{Step 5:} Conclusion} By choosing $\delta>0$ small enough and $L \gg \delta$ large enough such that all conditions mentioned in Steps~0-4 hold, we conclude that $\braces*{G(\curve_1),G(\curve_2),G(\curve_3),G(\curve_4)}$ is a sufficiently fine partition of the curve $G(\curve)$; therefore, \begin{equation*} \W(G(\curve),(0,0)) = \frac{\sum_{j=1}^4 \A(G(\curve_j) ,(0,0))}{2\pi} <0. \end{equation*} Recall that $\curve$ is the boundary of the rectangle $D$ whose vertices are $(\delta,\delta)$, $(L, \delta)$, $(L,L)$ and $(\delta, L)$. Applying Lemma~\ref{lem:winding}, we conclude that $(0,0) \in G(D)$. This concludes the proof. \subsection{Proof of Proposition~\ref{propo:payoffEQ}} \label{sec:appen_proof_payoffEQ} In this section, we focus on Proposition~\ref{propo:payoffEQ}. We aim to compute the payoffs of the players in a game $\FCBn$ when they play the strategies having marginals $\left\{\Ai, \Bi \right\}_{i \in [n]}$ where $(\gA, \gB)$ is a solution of System~\eqref{eq:system_f}; more precisely, when the allocation of Player A (resp. Player B) to battlefield $i$ follows $\Ai$ (resp. $\Bi$). We denote by $A_i$ (respectively, $B_i$) the random variable corresponding to $\Ai$ (respectively,~$\Bi$). In this case, the expected payoff that Player A gains in battlefield $i \in [n]$ is defined as follows: $\Pi^A_i := \alpha w_i \prob(A_i = q_i B_i - p_i) + w_i \prob \left( B_i < \frac{A_i + p_i}{q_i} \right)$. We have the following remark: \begin{remark}\label{remark:tie} If tie allocations happen with probability zero, i.e., $\prob \left( B_i = \frac{A_i + p_i}{q_i} \right) = 0$, \begin{equation} \prob \left( B_i < \frac{x + p_i}{q_i} \right) = \Bi\left( \frac{x+ p_i}{q_i}\right), \quad \textrm{for all } x \textrm{ in the support of } A_i.
\end{equation} \end{remark} Now, $\Ai$ and $\Bi$ are defined in Table~\ref{tab:Opt_Uni_GRCBC}, which involves six cases of parameter configurations corresponding to six index sets $I^+_1 (\gA, \gB)$, $I^+_2 (\gA, \gB)$, $I^+_3 (\gA, \gB)$, $I^-_1 (\gA, \gB)$, $I^-_2 (\gA, \gB)$ and $I^-_3 (\gA, \gB)$. In the following, we consider these six cases: \underline{Case 1:} $i \in I^+_1$. In this case, both players allocate zero with probability 1. Therefore, if $p_i >0$, Player A wins this battlefield with probability 1. On the other hand, if $p_i =0$, a tie situation happens. Therefore, we~have \begin{align*} \Pi^A_i = w_i \mathbb{I}_{\{p_i > 0\}} + \alpha w_i \mathbb{I}_{\{p_i =0\}} . \end{align*} \underline{Case 2:} $i \in I^+_2$. First, we show that tie situations only happen with probability 0 in this case. Indeed, we have: \begin{itemize} \item If $p_i =0$, trivially we see that $ \Ai$ is a uniform distribution (on $[0, q_i w_i \gB]$), thus, $\prob(x^A = q_i x^B -p_i ) =0$ for any $x^A \sim A_i, x^B \sim B_i$. \item If $p_i >0$, note that $ \Ai$ and $ \Bi$ are distributions with an atom at 0; therefore, $A_i = 0$ and $B_i =0$ might happen with a positive probability. However, if that is the case, Player A wins this battlefield (since $0 > 0 - p_i$). On the other hand, due to the continuity of $ \Ai$ on $(0, q_i w_i \gB -p_i)$, $\prob(x^A = q_i x^B -p_i ) =0$ for any $x^A \sim A_i$ with $x^A \neq 0$ and $x^B \sim B_i$.
\end{itemize} Using Remark~\ref{remark:tie} and the fact that the derivative of $ \Ai$ equals zero on $(q_i w_i \gB -p_i, \infty)$, we have the following: \begin{align} \Pi^A_i &:= w_i \frac{p_i}{q_i w_i \gB } \left( 1 - \frac{q_i \gB}{\gA} + \frac{p_i}{w_i \gA} \right) + w_i\int \limits_{0}^{\infty} \Bi\left( \frac{x + p_i}{q_i}\right) \de \Ai(x) \nonumber \\ & = w_i \frac{p_i}{q_i w_i \gB } \left( 1 - \frac{q_i \gB}{\gA} + \frac{p_i}{w_i \gA} \right) + w_i \int \limits_{0}^{q_i w_i \gB - p_i } \left(1 - \frac{q_i \gB}{\gA} + \frac{x + p_i}{w_i \gA} \right) \frac{1}{q_i \gB w_i} \de x \nonumber \\ & = w_i \left(1 - \frac{q_i \gB}{\gA} + \frac{p_i}{w_i \gA} \right) + \frac{(q_i w_i \gB - p_i )^2}{2 w_i \gA q_i \gB}. \end{align} \underline{Case 3:} $i \in I^+_3$. In this case, since $ \Bi$ is the uniform distribution on $\left[\frac{p_i}{q_i}, \frac{w_i \gA + p_i}{q_i } \right]$, ties happen with probability zero. Therefore, \begin{align*} \Pi^A_i &:= w_i\int \limits_{0}^{\infty} \Bi\left( \frac{x + p_i}{q_i}\right) \de \Ai(x) \\ & = w_i\int \limits_{0}^{w_i \gA} \left(\frac{-p_i}{w_i\gA} + \frac{x+p_i}{w_i \gA} \right) \frac{1}{q_i w_i \gB} \de x\\ & = \frac{w_i \gA}{2 q_i \gB}. \end{align*} \underline{Case 4:} $i \in I^-_1$. In this case, both players allocate zero with probability 1. Since the conditions of the index set $I^-_1$ require that $p_i < 0$, Player B wins with probability 1; therefore, Player A's payoff is: \begin{align*} \Pi^A_i &:= 0. \end{align*} \underline{Case 5:} $i \in I^-_2$. In this case, ties happen with probability zero (note that although \mbox{$\prob(A_i =0) >0$} and $\prob(B_i =0) >0$, if both players allocate zero, Player B wins since $p_i <0$).
Therefore, \begin{align*} \Pi^A_i &:= w_i\int \limits_{0}^{\infty} \Bi\left( \frac{x + p_i}{q_i}\right) \de \Ai(x) \\ & = w_i\int \limits_{-p_i}^{w_i \gA} \left(\frac{-p_i}{w_i\gA} + \frac{x+p_i}{w_i \gA} \right) \frac{1}{q_i w_i \gB} \de x\\ & = \frac{w_i \gA}{2 q_i \gB} - \frac{p_i ^2}{2 w_i \gA q_i \gB}. \end{align*} \underline{Case 6:} $i \in I^-_3$. In this case, since $ \Ai$ is the uniform distribution on $\left[{-p_i}, q_i w_i \gB - p_i \right]$, ties happen with probability zero. Therefore, \begin{align*} \Pi^A_i &:= w_i\int \limits_{0}^{\infty} \Bi\left( \frac{x + p_i}{q_i}\right) \de \Ai(x) \\ & = w_i\int \limits_{-p_i}^{q_i w_i \gB - p_i} \left(1 - \frac{q_i \gB}{\gA} + \frac{x+ p_i}{w_i \gA} \right) \frac{1}{q_i w_i \gB} \de x\\ & = w_i \left( 1 - \frac{q_i \gB}{\gA} + \frac{p_i}{w_i \gA} \right) + \frac{(q_i w_i \gB - p_i)^2 - p_i^2}{2 w_i \gA q_i \gB} \\ & = w_i - \frac{q_i \gB w_i }{2\gA}. \end{align*} In conclusion, the total expected payoff of each player is simply the aggregate of her payoffs in each battlefield; therefore, given a pair $(\gA, \gB)$, the total payoff of Player A is: \begin{align} \Pi^A &:= \sum \limits_{i \in I^+_1} {\left[w_i \mathbb{I}_{\{p_i > 0\}} + \alpha w_i \mathbb{I}_{\{p_i =0\}} \right]} + \sum \limits_{i \in I^+_2} {\left[ w_i \left(1 - \frac{q_i \gB}{\gA} + \frac{p_i}{w_i \gA} \right) + \frac{(q_i w_i \gB - p_i )^2}{2 w_i \gA q_i \gB} \right]} \nonumber \\ & \hspace{5mm} + \sum \limits_{i \in I^+_3} \left[ \frac{w_i \gA}{2 q_i \gB} \right] + \sum \limits_{i \in I^-_2} \left[ \frac{w_i \gA}{2 q_i \gB} - \frac{p_i ^2}{2 w_i \gA q_i \gB} \right] + \sum \limits_{i \in I^-_3} \left[w_i - \frac{q_i \gB w_i }{2\gA} \right] \label{payoff_A_GL}. \end{align} Player B's expected payoff is simply \begin{equation} \Pi^B = \sum_{i \in [n]} w_i - \Pi^A.
\label{payoff_B_GL} \end{equation} \section{Supplementary Materials for Results in Section~\ref{sec:Corollary_results}} In Section~\ref{sec:IU_result}, we have presented Proposition~\ref{propo:IU} concerning the IU strategies in the \FCB game. As mentioned above, the result of this proposition can be obtained by following the proof of \cite{vu2019approximate} for the case of the classical Colonel Blotto game. For the sake of completeness, in this section, we present the main ideas of the proof of Proposition~\ref{propo:IU}. To prove that $(\IU^A_\kappa,\IU^B_\kappa)$ constitutes an $\eps W^n$-equilibrium of a game $\FCBn$ where $\eps = \bigoh(n^{-1/2})$, we need to prove that the following inequalities hold for any pure strategies $\boldsymbol{x}^A$ and $\boldsymbol{x}^B$ of Players A and B: \begin{align} & \Pi^A_{\FCBn} (\boldsymbol{x}^A, \IU^B_\kappa) \le \Pi^A_{\FCBn} (\IU^A_\kappa, \IU^B_\kappa) + \eps W^n \label{eq:IU_A}\\ & \Pi^B_{\FCBn} (\IU^A_\kappa, \boldsymbol{x}^B) \le \Pi^B_{\FCBn} (\IU^A_\kappa, \IU^B_\kappa) + \eps W^n.\label{eq:IU_B} \end{align} We focus on \eqref{eq:IU_A} (the proof for \eqref{eq:IU_B} can be done similarly). For the sake of brevity, we only present the proof where the tie-breaking rule parameter $\alpha$ is set to 1 (\ie if a tie happens at a battlefield $i$, Player A gains the value $w_i$).
The case with a general value of $\alpha$ can be done similarly by noticing that the distributions $\Ai, \Bi$ are continuous at almost all points of their supports except at a single point (either $0$ or $p_i/q_i$---depending on the index set to which $i$ belongs); thus, the probability of a tie is 0 almost everywhere; even at the points where $\Ai, \Bi$ are discontinuous, the probability that a tie happens also goes quickly to 0 as $n$ increases, with a speed much faster than the approximation error that we consider; therefore, one can also ignore these tie cases (see \cite{vu2019approximate} for a detailed discussion of a similar phenomenon in the classical Colonel Blotto~game). Now, let us denote by $A_i$ and $B_i$ the random variables corresponding to distributions $\Ai, \Bi$. For any $i \in [n]$, we also define the random~variables: \begin{equation*} A^n_i = \frac{A_i \cdot X^A}{\sum_{j \in [n] } A_j} \textrm{ and } B^n_i = \frac{B_i \cdot X^B}{\sum_{j \in [n] } B_j}, \end{equation*} and denote the corresponding distributions by $\Ani$ and $\Bni$. By definition, $\Ani, i \in [n]$ are the marginals of the $\IU^A_{\kappa}$ strategy and $\Bni, i \in [n]$ are the marginals of the $\IU^B_{\kappa}$ strategy. Therefore, we can rewrite the involved payoffs as follows: \begin{align*} \Pi^A_{\FCBn} (\boldsymbol{x}^A, \IU^B_\kappa) & = \sum_{i \in [n]} { w_i \prob \parens*{B^n_i \le \frac{x^A_i + p_i}{q_i} }} = \sum_{i \in [n]} w_i \Bni \parens* {\frac{ x^A_i + p_i}{q_i} } ,\\ \Pi^A_{\FCBn} (\IU^A_\kappa, \IU^B_\kappa) &= \sum_{i \in [n]} w_i \int_{0}^{\infty} \Bni \parens* {\frac{ x + p_i}{q_i} } \textrm{d} \Ani (x) .
\end{align*} Now, to make the connection between these payoffs and the players' payoffs when they have marginals $\Ai, \Bi, i\in [n]$ (which are OUDs of the game), we need the following important lemma: \begin{lemma}\label{lem:damn} In any game $\FCBn$, we have \begin{equation*} \sup_{x \in [0,\infty)} \abs*{ \Ai(x) - \Ani(x) } \le \bigoh(n^{-1/2}) \textrm{ and } \sup_{x \in [0,\infty)} \abs*{ \Bi(x) - \Bni(x) } \le \bigoh(n^{-1/2}) . \end{equation*} \end{lemma} A proof of this lemma can be obtained by following Lemma~B4 of \cite{vu2019approximate}. At a high level, Lemma~\ref{lem:damn} comes from applying Hoeffding's inequality \cite{hoeffding1963probability} and the fact that $\kappa=(\gA, \gB)$ is a solution of System~\eqref{eq:system_f} (thus $ \mathbb{E} \bracks*{ \sum_{j \in [n]} A_j } = X^A $) to show that as $n \rightarrow \infty$, \begin{equation*} A^n_i = \frac{A_i \cdot X^A}{\sum_{j \in [n] } A_j} \rightarrow \frac{A_i \cdot X^A}{\mathbb{E} \bracks*{ \sum_{j \in [n]} A_j }} = \frac{A_i X^A}{X^A} = A_i. \end{equation*} We also have a similar result for $B^n_i$ and $B_i$. Based on this, we can prove that $\Ani$ and $\Bni$ converge uniformly toward $\Ai$ and $\Bi$ as $n$ increases. Importantly, by working out the details, we can also determine the rate of this convergence (which gives the upper bounds presented in Lemma~\ref{lem:damn}).
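The normalization and the convergence rate behind Lemma~\ref{lem:damn} can be checked empirically. The Python sketch below is a toy model, not the game's actual marginals: it assumes i.i.d.\ $U(0,1)$ draws in place of the $A_j$ and sets the budget to $X^A = n/2$ so that $\mathbb{E}\bracks*{\sum_j A_j} = X^A$; the estimated sup-norm gap between the CDFs of $A^n_1$ and $A_1$ then shrinks as $n$ grows, roughly like $n^{-1/2}$.

```python
import random
from bisect import bisect_right

def sup_cdf_gap(n, trials=8000, grid=40):
    """Monte Carlo estimate of sup_x |F_{A^n_1}(x) - F_{A_1}(x)| for i.i.d.
    draws A_j ~ U(0,1) rescaled to exactly meet the budget X^A = n/2."""
    budget = n / 2.0                               # equals E[sum_j A_j]
    samples = []
    for _ in range(trials):
        a = [random.random() for _ in range(n)]
        samples.append(a[0] * budget / sum(a))     # A^n_1 = A_1 * X^A / sum_j A_j
    samples.sort()
    gap = 0.0
    for k in range(1, grid):
        x = k / grid                               # F_{A_1}(x) = x for U(0,1)
        gap = max(gap, abs(bisect_right(samples, x) / trials - x))
    return gap

random.seed(1)
print(sup_cdf_gap(5), sup_cdf_gap(500))  # the gap shrinks roughly like n^{-1/2}
```

Note that the rescaled allocation meets the budget exactly by construction; the price is the distortion of each marginal, and the lemma quantifies how fast that distortion vanishes.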
Now, based on Lemma~\ref{lem:damn}, we can show that \begin{equation} \Pi^A_{\FCBn} (\boldsymbol{x}^A, \IU^B_\kappa) \le \sum_{i \in [n]} w_i \Bi \parens* {\frac{ x^A_i + p_i}{q_i} } + \sum_{i=1}^n w_i \bigoh(n^{-1/2}).\label{eq:IU_a} \end{equation} On the other hand, we can also combine Lemma~\ref{lem:damn} with a variant of the portmanteau theorem (see \eg Lemma B5 of \cite{vu2019approximate}) to obtain that: \begin{equation} \abs*{\int_{0}^{\infty} \Bni \parens* {\frac{ x + p_i}{q_i} } \textrm{d} \Ani (x) - \int_{0}^{\infty} \Bi \parens* {\frac{ x + p_i}{q_i} } \textrm{d} \Ai (x) } < \bigoh(n^{-1/2}).\label{eq:IU_b} \end{equation} By definition, $\Ai$ and $\Bi$ constitute an equilibrium of the corresponding \FAPA game (see Definition~\ref{def:OptDis_GRCBC}); therefore, they are best responses against one another. In other words, for any pure strategy $\boldsymbol{x}^A$ of Player A, we have: \begin{equation*} \sum_{i \in [n]} w_i \Bi \parens* {\frac{ x^A_i + p_i}{q_i} } \le \sum_{i \in [n]} w_i \int_{0}^{\infty} \Bi \parens* {\frac{ x + p_i}{q_i} } \textrm{d} \Ai (x) . \end{equation*} Combining this inequality with~\eqref{eq:IU_a} and \eqref{eq:IU_b}, we have: \begin{align*} \Pi^A_{\FCBn} (\boldsymbol{x}^A, \IU^B_\kappa) & = \sum_{i \in [n]} w_i \Bni \parens* {\frac{ x^A_i + p_i}{q_i} } \\ & \le \sum_{i \in [n]} w_i \Bi \parens* {\frac{ x^A_i + p_i}{q_i} } + \sum_{i=1}^n w_i \bigoh(n^{-1/2}) \\ & \le \sum_{i \in [n]} w_i \int_{0}^{\infty} \Bi \parens* {\frac{ x + p_i}{q_i} } \textrm{d} \Ai (x) + \sum_{i=1}^n w_i \bigoh(n^{-1/2}) \\ & \le \sum_{i \in [n]} w_i \int_{0}^{\infty} \Bni \parens* {\frac{ x + p_i}{q_i} } \textrm{d} \Ani (x) + \sum_{i=1}^n w_i \bigoh(n^{-1/2}) \\ & \le \Pi^A_{\FCBn} (\IU^A_\kappa, \IU^B_\kappa) + W^n\bigoh(n^{-1/2}) . \end{align*} Similarly, we can prove that $\Pi^B_{\FCBn} (\IU^A_\kappa, \boldsymbol{x}^B) \le \Pi^B_{\FCBn} (\IU^A_\kappa, \IU^B_\kappa) + W^n \bigoh(n^{-1/2}) $. This concludes the~proof.
\label{appen:IU} \section{Supplementary Materials for Results in Section~\ref{sec:heuristic}} In Section~\ref{sec:heuristic}, we presented the main ideas of the approximation algorithm that we propose in order to efficiently compute a $\delta$-approximate solution of System~\eqref{eq:system_f} in any given \FCB game instance. In this section, we re-discuss this algorithm in more detail. First, we discuss a pseudo-code of this algorithm in Appendix~\ref{appen:pseudo}. We then give more details on the computational time results of this algorithm in Appendix~\ref{appen:compute}. \subsection{A Pseudo-code of the Approximation Algorithm} \label{appen:pseudo} \begin{algorithm}[h!] \KwIn{$\FCBn$ game.} \textbf{Parameters}: ${\delta}>0$, $m>0$ and $M \gg m$\; \KwOut{A $\delta$-approximate solution $(\tgA, \tgB) \in \mathbb{R}^2_{>0}$ of System~\eqref{eq:system_f} of $\FCBn$ (satisfying~\eqref{eq:delta_close} and \eqref{eq:system_approx_f})} Initialize $D$ to be the rectangle with four vertices $(m,m), (m,M), (M,M), (M, m) $ \; Compute $\omega_D = $ the winding number of $G(D)$ around $(0,0)$ via Algorithm~\ref{algo:aux}\; \uIf{$\omega_D = 0$ }{ $M := 2M$ and $m := m/2$, then repeat line 1 } \ElseIf{$\omega_D \neq 0$ }{ Divide $D$ into two rectangles, $D_1$ and $D_2$, with equal areas\; Compute $\omega_{D_1} = $ the winding number of $G(D_1)$ around $(0,0)$ via Algorithm~\ref{algo:aux}\; \uIf{$\omega_{D_1} \neq 0$}{ \uIf{diameter of $D_1$ is less than ${\delta}$}{Stop and return $(\tgA, \tgB) \in D_1$ satisfying ~\eqref{eq:system_approx_f} computed by Algorithm~\ref{algo:aux}\;} \lElse{Set $D:= D_1$ and repeat line 6} } \Else{ Compute $\omega_{D_2} = $ the winding number of $G(D_2)$ around $(0,0)$ via Algorithm~\ref{algo:aux}\; \uIf{diameter of $D_2$ is less than ${\delta}$}{Stop and return $(\tgA, \tgB) \in D_2$ satisfying ~\eqref{eq:system_approx_f} computed by Algorithm~\ref{algo:aux}} \lElse{Set $D:= D_2$ and repeat line 6} } } \caption{Approximation algorithm finding a
${\delta}$-approximate solution of System~\eqref{eq:system_f}} \label{algo:heuristic_GRCB} \end{algorithm} Algorithm~\ref{algo:heuristic_GRCB} follows precisely the template described in Section~\ref{sec:heuristic}. Note that Algorithm~\ref{algo:heuristic_GRCB} takes three parameters as inputs: $\delta$ controls the precision level of the output approximate solution, while $m$ and $M$ control the initial rectangle. Moreover, it also uses a sub-routine to compute the winding number of the $G$-image of the rectangles involved in the dichotomy procedure. We present a pseudo-code of this sub-routine procedure as Algorithm~\ref{algo:aux}. Intuitively, to compute the winding number of $G(D)$ where $D$ is a rectangle having the parametric (closed) curve form $\curve:[a,b] \rightarrow \R^2$, we compute a polygonal approximation of $G(D)$ via the IPS algorithm proposed in \cite{zapata2012geometric}; we then calculate the winding number of this polygon by checking how many times the curve $G(\curve)$ crosses the positive Ox-axis in the $\R^2$-plane when we track it along the sides of this polygon (if it crosses in the counterclockwise direction, we increase the count by 1, and if it crosses in the clockwise direction, we decrease it by 1). Moreover, while doing this, Algorithm~\ref{algo:aux} also computes the $G$-value of all vertices of the involved polygon; Algorithm~\ref{algo:aux} will record any point $(x,y) \in D$ whose $G$-image is one of the vertices of the polygon and which satisfies \eqref{eq:system_approx_f}. If the winding number of $G(D)$ is non-zero, such $(x,y)$ is guaranteed to exist due to Lemma~\ref{lem:winding}.
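The crossing-count rule just described can be sketched in a few lines of Python. This is a simplified stand-in, not the pseudo-code of Algorithm~\ref{algo:aux} verbatim: instead of the two quadrant tests, it computes the exact point where each polygon segment crosses the Ox-axis and counts the crossing only when that point has a positive x-coordinate.

```python
def winding_by_crossings(points):
    """Winding number of a closed polygon around (0,0), counted as signed
    crossings of the positive Ox-axis (upward crossing = counter-clockwise)."""
    w = 0
    for i in range(len(points)):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % len(points)]
        if y0 < 0 <= y1 or y1 < 0 <= y0:                # segment crosses the Ox-axis
            xc = x0 + (x1 - x0) * (0 - y0) / (y1 - y0)  # x-coordinate of crossing
            if xc > 0:
                w += 1 if y0 < y1 else -1
    return w

# a counter-clockwise square around the origin winds once ...
print(winding_by_crossings([(1, -1), (1, 1), (-1, 1), (-1, -1)]))  # 1
# ... and a square that does not contain the origin winds zero times
print(winding_by_crossings([(3, -1), (5, -1), (5, 1), (3, 1)]))    # 0
```

In the actual algorithm, the polygon vertices are the IPS samples $G(\curve(t_i))$, so the same counting runs in one pass over the $k$ sampled points.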
\begin{algorithm} \KwIn{$\FCBn$ game, a rectangle $D$ presented as a parametric (closed) curve $\curve:[a,b] \rightarrow \R^2$} \KwOut{$\omega_{D} = $ the winding number of $G(D)$ around $(0,0)$ and a point in $D$ satisfying~\eqref{eq:system_approx_f}} Initialize $\omega_{D}=0$\; Use IPS Algorithm from \cite{zapata2012geometric} to find an array $(t_0 = a, t_1, \ldots, t_k = b)$ satisfying properties of connection relative to $G(\curve)$ (\ie $\{G(\curve(t_i)), i = 0,\ldots,k\}$ forms a polygonal approximation of~$G(D)$)\; \For{$i =0, \ldots, k-1$}{ Compute $G(\curve(t_i))$, $G(\curve(t_{i+1}))$\; \lIf{$G(\curve(t_i))$ satisfies \eqref{eq:system_approx_f}}{Return the point $\curve(t_i)$} \lIf{Segment from $G(\curve(t_i))$ to $G(\curve(t_{i+1}))$ crosses from $\{(x,y) \in \R^2: x>0, y<0 \}$ to $\{(x,y) \in \R^2: x>0, y>0 \}$ } {$\omega_{D} = \omega_{D}+1$} \lElseIf{Segment from $G(\curve(t_i))$ to $G(\curve(t_{i+1}))$ crosses from $\{(x,y) \in \R^2: x>0, y>0 \}$ to $\{(x,y) \in \R^2: x>0, y<0 \}$ }{$\omega_{D} = \omega_{D}-1$} } \caption{Winding number computation} \label{algo:aux} \end{algorithm} \subsection{Computational Time of the Approximation Algorithm} \label{appen:compute} Proposition~\ref{propo:heuristic} states that by running our approximation algorithm in $\tilde{\bigoh}(n \delta^{-1})$ time, we will find a $\delta$-approximate solution of System~\eqref{eq:system_f}. In this section, we first give a proof of this proposition. \paragraph{Proof of Proposition~\ref{propo:heuristic}}{ Recall that $R$ and $L_0$ denote the max-norm of an actual solution $(\gA,\gB)$ of System~\eqref{eq:system_f} and that of the center of the initial rectangle, respectively.
In Algorithm~\ref{algo:heuristic_GRCB}, we observe that after each enlargement step (Line~5), we end up with a rectangle that is doubled in size; therefore, the loop in Lines 4-5 terminates after $\bigoh(\log \parens*{ \max \braces*{ {R}/{L_0}, {L_0}/{R}} })$ iterations; when this loop ends, we are guaranteed to have found a rectangle containing $(\gA, \gB)$ (thus, the $G$-image of the boundary of this rectangle has non-zero winding number around $(0,0)$). Now, each time the loop in Lines 6-17 of Algorithm~\ref{algo:heuristic_GRCB} repeats, the size of the rectangle under consideration is halved. Therefore, after at most $\bigoh\parens*{\log \parens{R/\delta }}$ iterations (we assume that $\delta<1$), we end up with a rectangle whose diameter is smaller than $\delta$. Lemma~\ref{lem:winding} guarantees that there is always a sub-rectangle (obtained by dividing the rectangle considered in the previous iteration) such that the winding number of its $G$-image is non-zero; hence this loop indeed terminates after $\bigoh\parens*{\log \parens{R/\delta }}$ iterations. Finally, each time we need to compute a winding number in Algorithm~\ref{algo:heuristic_GRCB}, we call Algorithm~\ref{algo:aux}. By Theorem 4 of \cite{zapata2012geometric}, the IPS Algorithm takes $\bigoh \parens*{(b-a) \delta^{-1}}$ time to output the array $(t_0 = a, t_1, \ldots, t_k = b)$ described in Line~2 of Algorithm~\ref{algo:aux}, where $k= \bigoh \parens*{(b-a) \delta^{-1}}$ (thus, it induces a polygon with $k$ vertices). Note that since $G$ is Lipschitz-continuous, $G(D)$ is also a Lipschitz curve; thus the sufficient conditions of this theorem hold. Finally, each time Algorithm~\ref{algo:aux} computes a value $G(\curve(t_i))$, it takes $\bigoh(n)$ time; this is due to the definition of $G$ in~\eqref{eq:G_func}. In conclusion, each call of Algorithm~\ref{algo:aux} takes $\bigoh \parens*{(b-a) n \delta^{-1}}$ time and the result follows.
\qed } Now, to illustrate the efficiency of our approximation algorithm, we conduct several experiments. First, we revisit the toy example (Example~\ref{ex:example_1234}) considered in Section~\ref{sec:heuristic}, where we showed that a naive approach for computing its solution is very inefficient. The application of our approximation algorithm to this problem is given as Example~\ref{ex:damn}. \begin{example}\label{ex:damn} \emph{Recall the game instance $\FCBn$ (with $n=2$) considered in Example~\ref{ex:example_GRCB_2}, where System~\eqref{eq:system_f} has one positive (exact) solution $(\gA,\gB) := (2 + \sqrt{4/3}, 2+\sqrt{12}) \approx (3.1547005,5.4641016) $. With the parameter \mbox{$\delta=10^{-6}$}, our approximation algorithm outputs the solution $(\tgA, \tgB) =( 3.1547010, 5.4641018) $. The computation time is $\sim 2.78$~seconds when initializing with the rectangle whose vertices are $(\delta,\delta)$, $(\delta, 10X^A)$, $(10X^A, 10X^A)$ and~$(10X^A,\delta)$.} \end{example} \begin{figure}\label{fig:example_time_GRCB} \end{figure} Next, we conduct the following experiment (run on a machine with an Intel Xeon CPU @2.20GHz and 12Gb RAM). For each $n \in \{5,10,20,50,100\}$, we randomly generate 10 instances\footnote{We choose $X^A, X^B \in \{1,2,\ldots, 100\}$ uniformly at random ($X^A \le X^B$); then, for each $i \in [n]$, we randomly generate a battlefield value $w_i \sim \mathcal{U}(0, X^A]$ and, with equal probability, we choose either $p_i>0$, $p_i<0$ or $p_i = 0$; we then draw $p_i$ from $\mathcal{U}(0, X^A)$ or $\mathcal{U}(-X^A, 0)$, or set it equal to $0$, respectively; then, with equal probability, we choose either $q_i>1$, $q_i \in (0,1)$ or $q_i = 1$; we then draw $q_i$ from $\mathcal{U}(1, X^A)$ or $\mathcal{U}(1/X^A, 1)$, or set it equal to $1$, respectively.} of $\FCBn$.
We then run Algorithm~\ref{algo:heuristic_GRCB} on each game instance with the input $ {\delta} \in \{10^{-1}, 10^{-2}, \ldots, 10^{-6}\}$ and $M:= 10 \cdot \min\{X^A,X^B\}$; we then measure the time it takes to output the $ {\delta}$-approximate solution of the corresponding System~\eqref{eq:system_f}. Figure~\ref{fig:example_time_GRCB} shows the average running time of Algorithm~\ref{algo:heuristic_GRCB} taken over the 10 instances for each $n$ and ${\delta}$. \subsection{Approximations of Optimal Univariate Distributions} \label{appen:approx_OUD} In this section, we give the proof of Proposition~\ref{lem:approx_sol}, which relates the distributions $\tAi, \tBi$, $i \in [n]$, from Definition~\ref{def:OptDis_GRCBC} that correspond to any $\delta$-approximate solution of System~\eqref{eq:system_f} to the OUDs $\Ai, \Bi$, $i \in [n]$ (based on the solution $(\gA, \gB)$ of System~\eqref{eq:system_f}). \paragraph{Proof of Proposition~\ref{lem:approx_sol}}{ Fix $i \in [n]$; we look for upper bounds on $ |\Ai(x) - \tAi(x) |$ and $ |\Bi(x) - \tBi(x) |$. To do this, we consider two main cases: $p_i \ge 0$ and $p_i <0$. We start with the case $p_i \ge 0$. WLOG, let us assume $\gA \le \tgA$ and $\gB \le \tgB$ (the case where either $\gA > \tgA$ or $\gB > \tgB$ can be handled similarly by switching the roles of $\gA, \gB$ and $\tgA, \tgB$). Given the value $(\gA, \gB)$, battlefield $i$ belongs to one of the index sets $I^+_1(\gA, \gB), I^+_2(\gA, \gB)$ or $I^+_3(\gA, \gB)$. Similarly, we know that $i$ also belongs to one of the index sets $I^+_1(\tgA, \tgB), I^+_2(\tgA, \tgB)$ or $I^+_3(\tgA, \tgB)$.} \textit{Case 1.1:} $i$ belongs to $I^+_1(\gA, \gB) \cap I^+_1(\tgA, \tgB)$. Trivially, $\Ai(x) = \tAi(x)=1$ and $\Bi(x) = \tBi(x)=1$ for all $x$; hence $ |\Ai(x) - \tAi(x) |=0 <\delta$ and $ |\Bi(x) - \tBi(x) |= 0<\delta$ for any $x$. \textit{Case 1.2:} $i$ belongs to $I^+_2(\gA, \gB) \cap I^+_2(\tgA, \tgB)$.
We have \begin{align*} & \Ai(x) = \left\{ \begin{array}{l} \frac{p_i}{q_i w_i \gB} + \frac{x}{q_i w_i \gB}, \forall x \in \left[ 0, q_i w_i \gB - p_i \right], \\ 1 \qquad \quad \qquad \qquad, \forall x > q_i w_i \gB -p_i, \end{array} \right. \\ \textrm{ and } & \tAi(x) = \left\{ \begin{array}{l} \frac{p_i}{q_i w_i \tgB} + \frac{x}{q_i w_i \tgB}, \forall x \in \left[ 0, q_i w_i \tgB - p_i \right], \\ 1 \qquad \quad \qquad \qquad, \forall x > q_i w_i \tgB -p_i, \end{array} \right. \end{align*} For any $x \in [0, q_i w_i \gB - p_i]$, we also have $x \in [0,q_i w_i \tgB - p_i]$ and thus \begin{align*} |\Ai(x) - \tAi(x) | & = \abs*{ \frac{p_i}{q_i w_i \gB} + \frac{x}{q_i w_i \gB} - \frac{p_i}{q_i w_i \tgB} - \frac{x}{q_i w_i \tgB} }\\ & = \abs*{ \frac{p_i + x}{q_i w_i} \parens*{ \frac{\tgB - \gB}{\gB \tgB }} }\\ & \le \abs*{ \frac{q_i w_i \tgB }{q_i w_i} \parens*{ \frac{\delta}{\gB \tgB }} }\\ &= \frac{\delta}{\gB} \end{align*} Now, for any $ x$ such that $q_i w_i \gB - p_i < x \le q_i w_i \tgB - p_i $, we have: \begin{align*} |\Ai(x) - \tAi(x) | & = \abs*{ 1 - \frac{p_i}{q_i w_i \tgB} - \frac{x}{q_i w_i \tgB} }\\ & = \abs*{ \frac{q_i w_i \tgB - p_i -x}{q_i w_i \tgB} }\\ & < \abs*{ \frac{q_i w_i \tgB - p_i - q_i w_i \gB + p_i}{q_i w_i \tgB} }\\ &= \frac{\delta}{\tgB} \end{align*} Finally, for any $x > q_i w_i \tgB - p_i> q_i w_i \gB - p_i$, trivially $|\Ai(x) - \tAi(x)| = |1-1|= 0 < \delta$. Therefore, we conclude that in this case, for any $x$, $|\Ai(x) - \tAi(x)| \le \delta/ \min\braces*{\gB,\tgB}$. A similar proof can be done to show that $|\Bi(x) - \tBi(x)| \le \delta/ \min\braces*{\gA,\tgA}$. \textit{Case 1.3:} If $i$ belongs to $I^+_3(\gA, \gB) \cap I^+_3(\tgA, \tgB)$. We have: \begin{align*} & \Ai(x) = \left\{ \begin{array}{l} 1- \frac{\gA}{q_i \gB} + \frac{x}{q_i w_i \gB}, \forall x \in \left[ 0, w_i \gA \right], \\ 1 \qquad \qquad \qquad \qquad, \forall x > w_i \gA, \end{array} \right. 
\\ \textrm{ and } & \tAi(x) = \left\{ \begin{array}{l} 1- \frac{\tgA}{q_i \tgB} + \frac{x}{q_i w_i \tgB}, \forall x \in \left[ 0, w_i \tgA \right], \\ 1 \qquad \qquad \qquad \qquad, \forall x > w_i \tgA, \end{array} \right. \end{align*} For any $x \in [0, w_i \gA]$, we also have $x \in [0, w_i \tgA]$, therefore, \begin{align*} \abs*{\Ai(x) - \tAi(x) } = & \abs*{ 1- \frac{\gA}{q_i \gB} + \frac{x}{q_i w_i \gB} -1+ \frac{\tgA}{q_i \tgB} - \frac{x}{q_i w_i \tgB} }\\ = & \abs*{ \frac{1}{q_i} \parens*{ \frac{\tgA}{\tgB} - \frac{\gA}{\gB}} + \frac{x}{q_i w_i} \frac{\tgB - \gB}{ \tgB \gB} }\\ \le & \abs*{\frac{\delta}{q_i} \frac{\tgA + \tgB}{\tgB \gB}} +\abs*{ \frac{w_i \tgA}{q_i w_i} \frac{\tgB - \gB}{ \tgB \gB} }\\ \le & \abs*{\frac{\delta}{q_i} \frac{\tgA + \tgB}{\tgB \gB}} +\abs*{ \frac{\tgA}{q_i } \frac{\delta}{ \tgB \gB} }\\ = & \bigoh(\delta) \end{align*} For any $x$ such that $ w_i \gA < x \le w_i \tgA$, we have \begin{align*} \abs*{\Ai(x) - \tAi(x) } = \abs*{ 1-1+ \frac{\tgA}{q_i \tgB} - \frac{x}{q_i w_i \tgB} } = \abs*{\frac{w_i \tgA- x}{q_i w_i \tgB} } \le { \frac{\tgA - \gA}{q_i \tgB} } \le \frac{\delta}{q_i \tgB}. \end{align*} Finally, for any $ x > w_i \tgA \ge w_i \gA$, trivially, we have $\abs*{\Ai(x) - \tAi(x) } =0$. We conclude that in this case, for any $x$, we also obtain $\abs*{\Ai(x) - \tAi(x) } \le \bigoh (\delta)$. In a similar manner, we have $\abs*{\Bi(x) - \tBi(x) } \le \bigoh (\delta)$. \textit{Case 1.4:} We consider the case where $i \in I^+_1(\gA, \gB) \cap I^+_2(\tgA, \tgB)$, \ie when $ q_i w_i \gB - p_i \le 0 < q_i w_i \tgB - p_i \le w_i \tgA$ (this might happen since $\gB \le \tgB$). In this case, if $x \le q_i w_i \tgB - p_i$, we have: \begin{align*} \abs*{ \Ai(x) - \tAi(x)} = \abs*{ 1 - \frac{p_i}{q_i w_i \tgB} - \frac{x}{q_i w_i \tgB} } \le \frac{\delta}{\tgB}. \end{align*} Moreover, when $x > q_i w_i \tgB - p_i$, we have $\abs*{ \Ai(x) - \tAi(x)} = 0 $. Therefore, we conclude that $\abs*{ \Ai(x) - \tAi(x)} \le \bigoh(\delta)$ for any $x$.
A similar proof can be done for $\abs*{ \Bi(x) - \tBi(x)} \le \bigoh(\delta)$. \textit{Case 1.5:} The case where $i \in I^+_1(\gA, \gB) \cap I^+_3(\tgA, \tgB)$ can be done similarly to Case 1.4. \textit{Case 1.6:} In the case where $i \in I^+_2(\gA, \gB) \cap I^+_3(\tgA, \tgB)$, we have: \begin{align*} & \Ai(x) = \left\{ \begin{array}{l} \frac{p_i}{q_i w_i \gB} + \frac{x}{q_i w_i \gB}, \forall x \in \left[ 0, q_i w_i \gB - p_i \right], \\ 1 \qquad \quad \qquad \qquad, \forall x > q_i w_i \gB -p_i, \end{array} \right. \\ \textrm{ and } & \tAi(x) = \left\{ \begin{array}{l} 1- \frac{\tgA}{q_i \tgB} + \frac{x}{q_i w_i \tgB}, \forall x \in \left[ 0, w_i \tgA \right], \\ 1 \qquad \qquad \qquad \qquad, \forall x > w_i \tgA, \end{array} \right. \end{align*} First, if $x \le \min \braces*{q_i w_i \gB -p_i, w_i \tgA }$, we have: \begin{align*} \abs*{ \Ai(x) - \tAi(x)} & = \abs*{\frac{p_i}{q_i w_i \gB} + \frac{x}{q_i w_i \gB} - 1 + \frac{\tgA}{q_i \tgB} - \frac{x}{q_i w_i \tgB}}\\ & = \abs*{ \frac{p_i + x - q_i w_i \gB }{q_i w_i \gB} + \frac{w_i \tgA - x}{q_i w_i \tgB} }\\ & = \abs*{ \frac{1}{q_i w_i} \parens*{ \frac{p_i - q_i w_i \gB}{ \gB} + \frac{w_i \tgA}{\tgB} + \frac{x (\tgB -\gB)}{\tgB \gB} } }\\ & \le \bigoh(\delta). \end{align*} The remaining ranges of $x$ are handled exactly as in the previous cases. Therefore, we conclude that when $p_i \ge 0$, for any $x$, we always have \mbox{$\abs*{\Ai(x) -\tAi(x)} \le \bigoh(\delta) $} and $\abs*{\Bi(x) -\tBi(x)} \le \bigoh(\delta) $. Now, for the case where $p_i <0$, the analysis is similar to that for $p_i \ge 0$: we simply exchange the roles of A and B and replace $q_i$ by $1/q_i$ and $p_i$ by $-\frac{p_i}{q_i}$. We conclude the proof. \qed \label{appen:heuristic} \end{document}
\begin{document} \newtheorem{theo}{Theorem} \newtheorem{defi}[theo]{Definition} \newtheorem{theo1}[theo]{Theorem} \newtheorem{prop}[theo]{Proposition} \newtheorem{lemm}[theo]{Lemma} \newtheorem{coro}[theo]{Corollary} \title{Normal subgroups of groups acting on trees and automorphism groups of graphs} \author{R\"ognvaldur G. M\"oller\footnote{Science Institute, University of Iceland, IS-107 Reykjav\'ik, Iceland. e-mail: [email protected]}\ \mbox{ and} Jan Vonk\footnote{Mathematical Institute, 24-29 St Giles', Oxford, OX1 3LB, England. e-mail: [email protected].} \footnote{The second author wishes to thank the ERASMUS programme for support.} } \maketitle \begin{abstract} \noindent Let $T$ be a tree and $e$ an edge in $T$.
If $C$ is a component of $T\setminus e$ and both $C$ and its complement are infinite we say that $C$ is a half-tree. The main result of this paper is that if $G$ is a closed subgroup of the automorphism group of $T$ and $G$ leaves no non-trivial subtree invariant and fixes no end of $T$ then the subgroup generated by the pointwise stabilizers of half-trees is topologically simple. This result is used to derive analogues of recent results of Caprace and De Medts \cite{CapraceDeMedts2011} and it is also applied in the study of the full automorphism group of a locally finite primitive graph with infinitely many ends. \end{abstract} Running title: NORMAL SUBGROUPS OF GROUPS ACTING ON TREES \section*{Introduction} In the first half of this paper we study a variant of Tits' simplicity result from his groundbreaking paper \cite{Tits1970} on automorphism groups of trees. Tits studies a group action on a tree $T$ such that the action satisfies a certain independence property called property P. Here we look at a different property which can also be thought of as an independence property. This property is defined in terms of {\em half-trees}. If $e$ is an edge in a tree $T$ and both components of $T\setminus e$ are infinite then we call these components half-trees. A group acting on $T$ has property H if the pointwise stabilizer of every half-tree is non-trivial. For a closed subgroup $G$ of the automorphism group of a tree $T$ we let $G^{++}$ denote the closure of the subgroup generated by all pointwise stabilizers in $G$ of half-trees. The main result of this paper is {\bf Theorem~\ref{TSimple}}\ \ {\em Let $G$ be a closed subgroup of the automorphism group of some tree $T$. Assume that no proper non-empty subtree of $T$ is invariant under $G$ and no end of $T$ is fixed by $G$. Suppose also that $G$ has property H. If $N$ is a non-trivial closed subgroup of $G$ normalized by $G^{++}$ then $N$ contains $G^{++}$. In particular, the subgroup $G^{++}$ is topologically simple.
} \noindent It is also shown that many of the results in a recent paper of Caprace and De Medts \cite{CapraceDeMedts2011} that are proved for groups satisfying property P are also true if property H is assumed instead. In the second half of the paper the full automorphism group of a graph with infinitely many ends is studied. Such groups satisfy a certain independence property because if $A$ is a set of vertices or edges in a graph $X$ such that $X\setminus A$ is not connected then the subgroup of the full automorphism group fixing all the elements in $A$ and leaving invariant each component of $X\setminus A$ acts on each component independently of what it does on the other components. First we look at the automorphism group of a transitive graph with connectivity 1, i.e. connected graphs where the removal of a single vertex produces a disconnected graph. The automorphism group of such a graph can be studied with the aid of Tits' original result and is shown to be simple under general conditions (proof in Appendix A). The final result of the paper is the following theorem where the automorphism group is thought of as a topological group with the permutation topology inherited from the action on the vertex set of the graph. {\bf Theorem~\ref{TPrimitiveSimple}}\ \ {\em Let $X$ be a locally finite connected primitive graph (meaning that the automorphism group is transitive and the automorphism group preserves no non-trivial proper equivalence relation on the vertex set) with infinitely many ends. Then $G=\mbox{${\rm Aut}\;$}x$ has an open topologically simple subgroup of finite index.} A crucial part in the proof of this theorem is the fact that if $G$ is the automorphism group of a locally finite primitive graph with infinitely many ends then $G$ is non-discrete. This follows from \cite[Theorem~2.5]{Smith2010} as is explained in Appendix B.
\section{General background and terminology} \subsection{Graphs} We think of a {\em graph} $X$ as a pair $(VX,EX)$ where $VX$ is the vertex set and $EX$ is the set of edges. An edge is a two element subset of $VX$. If $e=\{x,y\}$ is an edge we say that $x$ and $y$ are {\em adjacent} and that $x$ and $y$ are the {\em end-vertices} of $e$. The degree of a vertex $x$ is the number of vertices adjacent to $x$. A graph is said to be {\em locally finite\/} if all its vertices have finite degree. A {\em path} in $X$ is a sequence $v_0, v_1, \ldots, v_n$ of distinct vertices such that $v_i$ is adjacent to $v_{i+1}$ for all $i=0, 1, \ldots, {n-1}$. If the condition that the vertices be distinct is dropped then we speak of a {\em walk}. A {\em ray\/} (also called a {\em half-line\/}) in a graph $X$ is a sequence $\{v_i\}_{i\in {\bf N}}$ of distinct vertices such that $v_i$ is adjacent to $v_{i+1}$ for all ${i\in \mbox{${\bf N}$}}$. A {\em line\/} (also called a {\em double ray\/}) in $X$ is a sequence $\{v_i\}_{i\in {\bf Z}}$ of distinct vertices such that $v_i$ is adjacent to $v_{i+1}$ for all ${i\in \mbox{${\bf Z}$}}$. For vertices $u$ and $v$ such that there is a path starting with $u$ and ending with $v$ we let $d(u,v)$ denote the minimum length of such a path. A path of length $d(u,v)$ starting with $u$ and ending with $v$ is called a {\em geodesic}. If $X$ is connected then $d$ is a metric on $VX$. The notion of a {\em graph} described above is often called a {\em simple graph}: an edge always has two distinct end-vertices and is completely determined by its end-vertices. When discussing quotients of graphs with a group action we need the more general concept of a {\em multigraph}. A multigraph $X$ is a pair $(VX, EX)$ together with a map $t$ defined on the set $EX$ of edges such that the value $t(e)$ of an edge $e$ is either a single vertex from $VX$ or a two element subset of $VX$ representing the end-vertices of $e$.
An {\em end} of a graph $X$ is defined as an equivalence class of rays such that two rays are said to be equivalent if there is a third ray that contains infinitely many vertices from both of them. A connected graph $X$ has more than one end if and only if there is a finite set of vertices $F$ such that $X\setminus F$ has two distinct components that both contain rays. If $X$ is a tree then two rays belong to the same end if and only if their intersection is a ray. For more information about this concept consult \cite[Chapter 8]{Diestel2006} and \cite{Moller1995}. \subsection{Permutation groups} Let $G$ be a group acting on a set $Y$. For $x\in Y$ let $G_x$ denote the {\em stabilizer of\/} $x$ in $G$; that is, $G_x$ is the subgroup of all elements in $G$ that fix $x$. For a subset $A$ of $Y$, define $$G_{(A)}=\{g\in G\mid g(a)=a\mbox{ for all }a\in A\}$$ and $$G_{\{A\}}=\{g\in G\mid gA=A\}.$$ The group $G_{(A)}$ is called the {\em pointwise stabilizer of} $A$ and the group $G_{\{A\}}$ the {\em setwise stabilizer of} $A$. Two points $x, y$ are said to be in the same orbit of $G$ if there is an element $g\in G$ such that $g(x)=y$. If any two elements in $Y$ are in the same orbit then we say that $G$ is transitive on $Y$. The orbits of a stabilizer $G_x$ of a point $x\in Y$ are called the {\em suborbits} of $G$. An action of a group $G$ on a set $Y$ defines a homomorphism from $G$ to the group $\mbox{${\rm Sym}\;$} Y$ of all permutations of $Y$. If this homomorphism is injective, i.e.\ the only element of $G$ that fixes all the points in $Y$ is the identity, we say the action is {\em faithful}. Then we can think of $G$ as a subgroup of $\mbox{${\rm Sym}\;$} Y$ and speak of $G$ as a {\em permutation group}. The automorphism group of a graph $X$ is denoted by \mbox{${\rm Aut}\;$}x, and we think of \mbox{${\rm Aut}\;$}x\ primarily as a permutation group on $VX$. If \mbox{${\rm Aut}\;$}x\ acts transitively on $VX$ then the graph $X$ is said to be {\em transitive}.
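As a concrete illustration of the two stabilizers, consider a small brute-force computation (a sketch of ours, not taken from the paper: permutations of $\{0,1,2,3\}$ are encoded as tuples $g$ with $g[i]$ the image of $i$). In ${\rm Sym}\,\{0,1,2,3\}$, the pointwise stabilizer of $A=\{0,1\}$ has $2!=2$ elements, while the setwise stabilizer has $2!\cdot 2!=4$:

```python
from itertools import permutations

def pointwise_stabilizer(group, A):
    """G_(A): the elements fixing every point of A."""
    return [g for g in group if all(g[a] == a for a in A)]

def setwise_stabilizer(group, A):
    """G_{A}: the elements mapping the set A onto itself."""
    A = set(A)
    return [g for g in group if {g[a] for a in A} == A]

# Sym{0,1,2,3}: all 24 permutations, each encoded as a tuple of images.
S4 = list(permutations(range(4)))
A = {0, 1}

print(len(pointwise_stabilizer(S4, A)))  # 2: identity and the swap of 2, 3
print(len(setwise_stabilizer(S4, A)))    # 4: may additionally swap 0 and 1
```

Note that $G_{(A)}\subseteq G_{\{A\}}$ always, which the two counts reflect.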
Varying slightly from the terminology above we define {\em the stabilizer of an edge} in a graph $X$ to be the subgroup fixing both end vertices of the edge. If a group $G$ acts on a set $Y$ we can construct a graph $X$ with vertex set $Y$ such that the action of $G$ on $Y$ gives an action on $X$ by automorphisms. This is done by insisting that the edge set $EX$ is a union of orbits of $G$ on the set of two element subsets of $Y$. Such graphs are called {\em (undirected) orbital graphs}. A group is said to act primitively on a set $Y$ if the only $G$-invariant equivalence relations on $Y$ are the trivial one (each equivalence class contains only one element) and the universal one (there is only one equivalence class). If a group $G$ acts transitively on a set $Y$ the following conditions are equivalent: {\bf (i)} $G$ acts primitively. {\bf (ii)} if $y$ is a point in $Y$ then the subgroup $G_y$ is a maximal subgroup of $G$. {\bf (iii)} for every pair $x$ and $y$ of distinct points in $Y$ the orbital graph with edge set $G\{x, y\}$ is connected. We say that a graph $X$ is {\em primitive} if \mbox{${\rm Aut}\;$}x\ acts primitively on $VX$. \subsection{The permutation topology} Let $G$ be a group acting on a set $Y$. The {\em permutation topology} on $G$ is defined by choosing as a neighbourhood basis of the identity the family of pointwise stabilizers of finite subsets of $Y$, i.e.\ a neighbourhood basis of the identity is given by the family of subgroups \[\{G_{(F)}\mid F\mbox{ is a finite subset of }Y\}.\] For an introduction to the permutation topology see \cite{Moller2010}. From this definition it is apparent that a sequence $(g_i)_{i\in \mbox{${\bf N}$}}$ of elements in $G$ has an element $g\in G$ as a limit if and only if for every point $y\in Y$ there is a number $N$ (possibly depending on $y$) such that $g_n(y) =g(y)$ for every $n\geq N$. 
One could also use the property above describing convergence of sequences as a definition of the topology and think of the permutation topology as the topology of pointwise convergence. If we think of $Y$ as having the discrete topology and elements of $G$ as maps $Y\rightarrow Y$, then the permutation topology is equal to the compact-open topology. A subgroup $U$ of $G$ is open if and only if there is a finite subset $F$ of $Y$ such that $G_{(F)}\subseteq U$. One can also note that if $G$ is a permutation group on $Y$ then the permutation topology makes $G$ totally disconnected. Compactness has a natural interpretation in the permutation topology. A subset of a topological space is said to be {\em relatively compact} if it has compact closure. \begin{lemm}{\rm (\cite[Lemma 1 and Lemma 2]{Woess1992}, cf. \cite[Lemma~2.2]{Moller2010})}\label{LCompact} Let $G$ be a group acting transitively on a set $Y$. Assume that $G$ is closed in the permutation topology and that all suborbits are finite. (i) The stabilizer $G_y$ of a point $y \in Y$ is compact. (ii) A subset $A$ of $G$ is relatively compact in $G$ if and only if the set $Ay$ is finite for every point $y$ in $Y$. Furthermore, if $A$ is a subset of $G$ and $Ay$ is finite for some $y\in Y$ then $Ay$ is finite for every $y$ in $Y$. \end{lemm} A subgroup $H$ in a topological group $G$ is said to be cocompact if $G/H$ is a compact space. This concept has also a natural interpretation in terms of the permutation topology. \begin{lemm}\label{Lcocompact} {\rm (\cite[Proposition~1]{Nebbia2000}, cf.~\cite[Lemma~2.3]{Moller2010})} Let $G$ be a group acting transitively on a set $Y$. Assume that $G$ is closed in the permutation topology and all suborbits are finite. Then a subgroup $H$ of $G$ is cocompact if and only if $H$ has finitely many orbits on $Y$. 
\end{lemm} \section{Groups acting on trees} \subsection{Preliminaries on trees and group actions on trees} In the present context {\em a tree} is a connected graph that has no non-trivial cycles (i.e.\ there are no walks $v_0, v_1, \ldots, v_n$ such that $v_0=v_n$ and $v_0, v_1, \ldots, v_{n-1}$ are distinct and $n\geq 3$). In \cite{Tits1970} Tits classifies the automorphisms of a tree $T$. First there are automorphisms that fix some vertex of $T$, then there are automorphisms that leave some edge of $T$ invariant but transpose its end-vertices and finally there are translations. An automorphism $g$ of $T$ is called a {\em translation} if there is a line $L=\{v_i\}_{i\in {\bf Z}}$ that is invariant under $g$ and there is a non-zero integer $k$ such that $g(v_i)=v_{i+k}$ for all $i\in\mbox{${\bf Z}$}$. The line $L$ is called the {\em axis} of the translation. Suppose $e$ is an edge and $T_1$ is one of the components of $T\setminus e$. If $g$ is an automorphism of $T$ such that $g(T_1)$ is a proper subset of $T_1$ then $g$ is a translation and the edge $e$ lies on the axis of $g$. When $L$ is a path (finite or infinite) in a tree there is a well-defined map from the vertex set of the tree to the vertex set of $L$ such that a vertex $x$ is mapped to the unique vertex in $L$ that is closest to $x$. This map will be denoted with $\mbox{${\rm pr}$}_L$. For a vertex $x$ in $L$ the set $\mbox{${\rm pr}$}_L^{-1}(x)$ is the vertex set of a subtree of $T$ which we call {\em the branch of} $T$ {\em at} $x$ (relative to $L$). Let $G$ be a group acting on $T$. Note that the set $\mbox{${\rm pr}$}_L^{-1}(x)$ is invariant under the group $G_{(L)}$. Define $G_{(L)}^x$ as the permutation group that we get by restricting the action of $G_{(L)}$ to $\mbox{${\rm pr}$}_L^{-1}(x)$. From the maps $G_{(L)}\rightarrow G_{(L)}^x$ we get a homomorphism from the group $G_{(L)}$ to the group $\prod_{x\in L} G^x_{(L)}$.
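The projection onto $L$ and the resulting branches can be computed directly in a small example. The following Python sketch (our own illustration; the adjacency-dict encoding of the tree and the BFS helper are conveniences of the sketch, not notation from the paper) projects every vertex of a small tree onto a path $L$ and recovers the branch at a vertex of $L$ as a fibre of the projection:

```python
from collections import deque

def distances_from(adj, src):
    """Breadth-first-search distances from src in a graph
    given as an adjacency dict {vertex: list of neighbours}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def projection(adj, L):
    """pr_L: map each vertex to the unique closest vertex of the
    path L (uniqueness holds because the graph is a tree)."""
    dist = {x: distances_from(adj, x) for x in L}
    return {v: min(L, key=lambda x: dist[x][v]) for v in adj}

# A small tree: the path L = 0-1-2, with a leaf 3 at 0 and a
# pendant path 1-4-5 hanging off the middle vertex 1.
adj = {0: [1, 3], 1: [0, 2, 4], 2: [1], 3: [0], 4: [1, 5], 5: [4]}
L = [0, 1, 2]
pr = projection(adj, L)

# The branch of the tree at 1 (relative to L) is the fibre
# pr^{-1}(1) = {1, 4, 5}.
branch_at_1 = {v for v in adj if pr[v] == 1}
```

Any automorphism fixing $L$ pointwise permutes each such fibre, which is exactly the invariance of $\mbox{${\rm pr}$}_L^{-1}(x)$ under $G_{(L)}$ noted above.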
Following Tits in \cite{Tits1970} we say that a group $G$ acting on a tree $T$ has property P if the homomorphism $G_{(L)}\rightarrow \prod_{x\in L} G_{(L)}^x$ above is an isomorphism for every path $L$ in $T$. In \cite{CapraceDeMedts2011} property P is called {\em Tits' independence property}. The essence of property P is that $G_{(L)}$ acts on each branch of $T$ at $L$ independently of how it acts on the other branches. In his groundbreaking paper Tits then goes on to prove \cite[Th\'eor\`eme~4.5]{Tits1970} that if a group $G$ acts on a tree $T$ such that property P is satisfied and no proper non-empty subtree is invariant under $G$ and no end of $T$ is fixed by $G$ then the subgroup $G^+$ of $G$ generated by the stabilizers of edges is simple. (Recall that the stabilizer of an edge $e=\{x,y\}$ in $G$ is defined as the subgroup fixing both $x$ and $y$.) \subsection{Properties P, E and H} A subtree $T'$ of a tree $T$ is called a {\em half-tree} of $T$ if $T'$ is one of the components of $T\setminus\{e\}$ for some edge $e$ in $T$ and both components of $T\setminus\{e\}$ are infinite. For an edge $e=\{u, v\}$ in $T$ we let $T_{u, e}$ denote the component of $T\setminus \{e\}$ that contains $u$. A group acting on a tree $T$ is said to have property E if for every edge $e=\{u, v\}$ the stabilizer of $e$ acts independently on the two components of $T\setminus\{e\}$, i.e. $G_e=G_{(T_{u, e})}G_{(T_{v, e})}$. If $G$ is a closed subgroup of $\mbox{${\rm Aut}\;$} T$ then properties P and E are equivalent, see \cite[Lemma 10]{Amann2003}. \begin{lemm}\label{LNoInvariant} Let $G$ be a subgroup of $\mbox{${\rm Aut}\;$} T$ for some infinite tree $T$. (i) {\rm (\cite[Lemme 4.1]{Tits1970})} The following two conditions are equivalent: (a) no proper non-empty subtree of $T$ is invariant under $G$ and (b) for every vertex $x$ in $T$ the orbit $Gx$ intersects every half-tree of $T$.
Furthermore if (a) (and then (b) also) holds then the tree $T$ has no leaves (vertices of degree 1) and every edge defines two half-trees that both have unbounded diameter. (ii) {\rm (\cite[Lemme 4.4]{Tits1970})} Suppose $N$ is a non-trivial subgroup of $\mbox{${\rm Aut}\;$} T$ normalized by $G$. If no proper non-empty subtree of $T$ is invariant under $G$ and no end of $T$ is fixed by $G$ then the same is true about $N$. (iii) Suppose that no proper non-empty subtree of $T$ is invariant under $G$. If $e$ is an edge in $T$ then $G$ contains a translation $g$ such that $e$ is in the axis of $g$. \end{lemm} {\em Proof.} We only need to prove part (iii); parts (i) and (ii) are proved in \cite{Tits1970} except the addendum in part (i) which is obvious. Let $u$ and $v$ denote the end vertices of $e$. By part (i) we can find an element $g_1\in G$ such that $g_1(u)\in T_{v,e}$. If $g_1(T_{v,e})\subsetneq T_{v,e}$ then $g_1$ is the translation we are seeking. If not we find an element $g_2\in G$ such that $g_2(v)\in T_{u,e}$. As before, if $g_2(T_{u,e})\subsetneq T_{u,e}$ then $g_2$ is the translation we are seeking but if not then $g=g_1g_2^{-1}$ is the translation we are after. The result in part (iii) and the argument in the proof are well-known and can for example be seen in a more general context in \cite{Jung1981}. \begin{lemm}\label{LPropertyH} Let $T$ be a tree and $G$ a subgroup of $\mbox{${\rm Aut}\;$} T$. Assume that no proper non-empty subtree of $T$ is invariant under $G$. (i) Suppose that there is some edge $e=\{u,v\}$ in $T$ such that the pointwise stabilizers of both the half-trees $T_{u,e}$ and $T_{v,e}$ are trivial. Then the stabilizer of every half-tree in $T$ is trivial. (ii) Suppose that there is some edge $e=\{u,v\}$ in $T$ such that the pointwise stabilizers of both the half-trees $T_{u,e}$ and $T_{v,e}$ are non-trivial. Then the stabilizer of every half-tree in $T$ is non-trivial. 
(iii) Suppose that there is some edge $e=\{u,v\}$ in $T$ such that the pointwise stabilizer of $T_{u,e}$ is trivial but the pointwise stabilizer of $T_{v,e}$ is non-trivial. Then $G$ must fix an end of $T$. \end{lemm} {\em Proof.} (i) Let $f$ be an edge in $T$. This edge defines two half-trees of $T$ which we denote with $T_1$ and $T_2$. By Lemma~\ref{LNoInvariant}(i) there is an element $g\in G$ such that $g(e)\in T_1$. Then either $g(T_{u,e})\subseteq T_1$ or $g(T_{v,e})\subseteq T_1$. If we assume, for instance, that $g(T_{u,e})\subseteq T_1$ then $G_{(T_1)}\subseteq G_{(g(T_{u,e}))}=gG_{(T_{u,e})}g^{-1}=\{1\}$. In the same way we show that $G_{(T_2)}$ is trivial. (ii) Let $f$, $T_1$ and $T_2$ be as above. Find an element $g\in G$ such that $g(e)\in T_1$. Then either $g(T_{u,e})\subseteq T_1$ or $g(T_{v,e})\subseteq T_1$. Say, for the sake of the argument, $g(T_{u,e})\subseteq T_1$; then $T_2\subseteq g(T_{v,e})$. Hence $G_{(T_2)}\supseteq G_{(gT_{v ,e})}=gG_{(T_{v ,e})}g^{-1}\neq\{1\}$. So $G_{(T_2)}$ is non-trivial. In the same way we show that $G_{(T_1)}$ is also non-trivial. (iii) Looking at parts (i) and (ii) we conclude that it is true for every edge $f=\{w,z\}$ in $T$ that the pointwise stabilizer of one of the half-trees defined by $f$ is trivial and the pointwise stabilizer of the other one is non-trivial. If the pointwise stabilizer of $T_{w,f}$ is trivial then we think of $f$ as a directed arc with initial vertex $w$ and terminal vertex $z$. The edge $e=\{u,v\}$ is oriented so that $u$ is the initial vertex and $v$ the terminal vertex. Do this for every edge in $T$ and note that the direction of the edges is preserved by the action of $G$. Consider an edge $f=\{w,z\}$ that is contained in the half-tree $T_{v,e}$. Assume that $w$ is closer to $v$ than $z$. Then $T_{w,f}\supseteq T_{u,e}$ and thus $G_{(T_{w,f})}\subseteq G_{(T_{u,e})}=\{1\}$.
Hence the initial vertex of $f$ is $w$ and the terminal vertex is $z$. Another way to describe this is to say that the edge $f$ is directed away from $v$. We see that every vertex in $T_{v,e}$ is the terminal vertex of precisely one directed edge. The half-tree $T_{v,e}$ contains vertices from every $G$-orbit on the vertex set of $T$ and thus it is true for every vertex in $T$ that it is the terminal vertex of precisely one directed edge. Let $R_1=v_0, v_1, \ldots$ be a ray in $T$ such that each edge $\{v_i, v_{i+1}\}$ is directed so that $v_i$ is the terminal vertex. Suppose now that $R_2=w_0, w_1, w_2, \ldots$ is another such ray in $T$. If the two rays intersect then they both belong to the same end. Suppose the two rays are disjoint. Select a path $u_0, u_1, \ldots, u_n$ of shortest possible length such that $u_0$ is a vertex in $R_1$ and $u_n$ is a vertex in $R_2$. Then $u_1$ is not in $R_1$ and thus the edge $\{u_0,u_1\}$ is directed so that $u_0$ is the initial vertex. Similarly the edge $\{u_{n-1}, u_n\}$ has $u_n$ as an initial vertex. Now we see that there must be a number $k$ such that the edges $\{u_{k-1}, u_k\}$ and $\{u_{k}, u_{k+1}\}$ both have $u_k$ as a terminal vertex. This contradicts the fact that every vertex is the terminal vertex of precisely one directed edge, and thus the two rays $R_1$ and $R_2$ belong to the same end $\omega$, which is clearly fixed by $G$. \begin{defi} A group $G$ that is a subgroup of the automorphism group of some tree $T$ is said to have property H if the pointwise stabilizer of every half-tree in $T$ is non-trivial. \end{defi} Let $G^{++}$ denote the closure of the subgroup generated by the pointwise stabilizers of all the half-trees in $T$. Clearly $G^{++}$ is normal in $G$.
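The edge-orientation argument used in the proof above can be illustrated on a finite tree: take a distinguished vertex $r$ as a stand-in for the fixed end, direct every edge away from $r$, and then every vertex other than $r$ is the terminal vertex of exactly one directed edge, while walking against the arrows from any two vertices yields paths that merge, just like the rays $R_1$ and $R_2$. A minimal stdlib-only Python sketch; the example tree and the choice of $r$ are our own illustrative assumptions, not taken from the paper:

```python
from collections import deque

def orient_away_from(adj, root):
    """BFS from `root`; direct each edge of the tree away from the root.
    Returns a parent map child -> parent: the directed edge
    parent -> child has the child as its terminal vertex."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def walk_against_arrows(parent, v):
    """Follow edges from terminal to initial vertex, as along the rays."""
    path = [v]
    while parent[v] is not None:
        v = parent[v]
        path.append(v)
    return path

# a small tree: edges 0-1, 1-2, 1-3, 0-4, 4-5, with r = 0
adj = {0: [1, 4], 1: [0, 2, 3], 2: [1], 3: [1], 4: [0, 5], 5: [4]}
parent = orient_away_from(adj, 0)
# every vertex except r is the terminal vertex of exactly one directed edge
assert all(parent[v] is not None for v in adj if v != 0)
# backward walks from any two vertices merge (here: at r itself)
assert walk_against_arrows(parent, 2)[-1] == walk_against_arrows(parent, 5)[-1] == 0
```

In the finite model the root plays the role of the end $\omega$; in the infinite setting of the lemma the walks do not terminate but still converge to a common end.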
If $G$ has no proper non-empty invariant subtree and does not fix an end of $T$, then Lemma~\ref{LPropertyH} above shows that either $G$ has property H or the pointwise stabilizer of every half-tree is trivial; in this situation property H is therefore equivalent to the group $G^{++}$ being non-trivial. Note also that if $G$ has property H then $G$ is not discrete. The relationship between properties P and H is not simple. If $G$ is not discrete and has no non-empty proper invariant subtree, then property P implies property H. On the other hand, the example below shows that property H does not imply property P. {\em Example.} Let $T$ be a tree and $f:VT\rightarrow I$ some map defined on the vertex set of $T$. In his paper Tits \cite{Tits1970} studies the group $\mbox{${\rm Aut}\;$}\!\!_f\; T=\{g\in \mbox{${\rm Aut}\;$} T\mid f\circ g=f\}$. This group clearly has property P. One can also study the group $G$ of all automorphisms of $T$ that preserve the equivalence relation defined by the fibers of $f$. It is not to be expected that this group has property P, but in many cases it will have property H. Consider the case of a regular tree $T$ of degree 6. Colour all the vertices in one part of the natural bipartition red and then colour the vertices in the other part of the natural bipartition with three different colours so that each red vertex is adjacent to two vertices of each colour. The group $C$ of automorphisms of $T$ that map every vertex to a vertex of the same colour has property P and is simple by Tits' theorem \cite[Th\'eor\`eme~4.5]{Tits1970}. The group $G$ of automorphisms that leave the partitioning of the vertices given by this colouring invariant does not have property P but it has property H, and it is easy to see that $G^{++}=C$. \begin{theo}\label{TSimple} Let $G$ be a closed subgroup of the automorphism group of some tree $T$. Assume that no proper non-empty subtree of $T$ is invariant under $G$ and no end of $T$ is fixed by $G$.
Assume also that $G$ has property H. If $N$ is a non-trivial closed subgroup of $G$ normalized by $G^{++}$ then $N$ contains $G^{++}$. In particular, the subgroup $G^{++}$ is topologically simple. \end{theo} {\em Proof.} First note that by Lemma~\ref{LNoInvariant}(ii) we see that $G^{++}$ does not leave any proper non-empty subtree invariant and $G^{++}$ does not fix an end. Now we apply Lemma~\ref{LNoInvariant}(ii) again, but this time to $G^{++}$ and $N$, and conclude that $N$ does not leave any proper non-empty subtree invariant and does not fix an end. Let $e=\{u,v\}$ be an edge in $T$. By part (iii) of Lemma~\ref{LNoInvariant} we see that there is a translation $h\in N$ such that $h(T_{v,e})\subsetneq T_{v,e}$ and $T_{u,e}\subsetneq h(T_{u,e})$. Suppose $g\in G_{(T_{u,e})}$. Set $f_n=gh^ng^{-1}h^{-n}$. Since $N$ is normalized by $G^{++}$ we see that $f_n\in N$ for every $n\geq 0$. The element $h^ng^{-1}h^{-n}$ fixes the half-tree $h^{n}(T_{u,e})$ and in particular $T_{u,e}$ is fixed by $f_n$. If we consider $T_{v,e}\setminus h^{n}(T_{v,e})$ then this part of the tree is fixed by $h^ng^{-1}h^{-n}$ and thus $f_n$ acts on this part like $g$. Hence we see that $f_n\rightarrow g$ as $n\rightarrow \infty$. From this argument we conclude that $G_{(T_{u, e})}$ is contained in $N$. Of course one can apply the same argument to show that $G_{(T_{v, e})}$ is contained in $N$. We conclude that $G^{++}$ is contained in $N$. Now it is clear that $G^{++}$ is topologically simple. {\em Remark.} The contraction group for an automorphism $\alpha$ of a topological group $G$ is defined as the subgroup of all elements $g\in G$ such that $\alpha^n(g)\rightarrow 1$, see \cite{BaumgartnerWillis2004}. In the above proof $h^ng^{-1}h^{-n}\rightarrow 1$ and hence $g^{-1}$ belongs to the contraction group for the inner automorphism of $G$ defined by $h$. \begin{coro} Let $G$ be a closed subgroup of $\mbox{${\rm Aut}\;$} T$ for some tree $T$.
Suppose that $G$ has property H and does not stabilize a proper non-empty subtree or fix an end. Then $G^{++}$ is the unique minimal closed normal subgroup of $G$. \end{coro} The {\em quasi-center} $QZ(G)$ of a topological group $G$ consists of all elements with an open centralizer. Caprace and De Medts show in \cite[Proposition~3.6]{CapraceDeMedts2011} that if a closed subgroup $G$ of the automorphism group of some tree $T$ satisfies property P and does not have any proper non-empty invariant subtrees, then $G$ has a trivial quasi-center. A simple adaptation of their proof gives an analogous result for groups with property H. \begin{prop} Let $G$ be a closed subgroup of $\mbox{${\rm Aut}\;$} T$ for a tree $T$. Assume that $G$ has property H and that $G$ leaves no proper non-empty subtree of $T$ invariant. Then the quasi-center of $G$ is trivial. \end{prop} {\em Proof.} Suppose $g$ is an element of $G$ that has an open centralizer. Let $v$ be a vertex of $T$. Then there is a finite set $S$ of vertices such that $G_{(S)}$ is contained in the centralizer of $g$. If necessary we can replace $S$ with $S\cup\{v\}$, so we may assume that $v\in S$, and we may also safely assume that $S$ is a subtree of $T$. Let $\tilde{S}$ be the subtree of $T$ containing every vertex of $T$ that is fixed by $G_{(S)}$. Because $g$ centralizes $G_{(S)}$, the tree $\tilde{S}$ is invariant under $g$. Suppose $e=\{u,w\}$ is an edge in $T$ such that the vertex $u$ is in $\tilde{S}$ but $w$ is not. Using property H we can find a non-trivial element $h\in G_{(T_{u,e})}$. But $\tilde{S}\subseteq T_{u,e}$, so $h\in G_{(S)}$, and $g$ commutes with $h$, that is, $ghg^{-1}=h$. Note that $h=ghg^{-1}$ fixes $g(T_{u,e})=T_{g(u),g(e)}$. Since $g(\tilde{S})=\tilde{S}$, we see that if $g(u)\neq u$ then $T_{w,e}\subseteq g(T_{u,e})$ and then $h$ would fix pointwise both $T_{u,e}$ and $T_{w,e}$ -- a contradiction. Hence we conclude that $g(u)=u$.
Therefore $g$ fixes every vertex in $\tilde{S}$ that is adjacent to some vertex not in $\tilde{S}$. Suppose now that $e=\{u,w\}$ is an edge in $T$ such that $u$ is in $S$ but $w$ is not in $S$. If the edge $e$ is not in $\tilde{S}$ then the above argument shows that $g$ fixes $u$. On the other hand, if $e$ is in $\tilde{S}$ then $G_{(T_{u,e})}\neq\{1\}$ and $G_{(T_{u,e})}\subseteq G_{(S)}$, and thus $G_{(S)}$ moves some vertex in $T_{w,e}$. Therefore $T_{w,e}$ is not contained in $\tilde{S}$. From this we infer that $T_{w,e}$ contains a vertex $z$ in $\tilde{S}$ that is adjacent to a vertex not in $\tilde{S}$ and thus $z$ is fixed by $g$. Applying this argument to every edge with precisely one of its end vertices in $S$, we conclude that $g$ must fix a vertex in every component of $T\setminus S$. Since $S$ is finite we now see that every vertex in $S$ is fixed by $g$ and in particular $g$ fixes $v$. Since $v$ was arbitrary we conclude that $g$ fixes every vertex of $T$ and that $g=1$. The following is an analogue of Proposition~3.8 from the paper \cite{CapraceDeMedts2011} of Caprace and De Medts and the proof uses the same argument. \begin{prop}\label{PSubtree}{\rm (Cf.~\cite[Proposition~3.8]{CapraceDeMedts2011})} Let $G$ be a closed subgroup of the automorphism group of some tree $T$ that leaves no proper non-empty subtree invariant. Suppose $H$ is a non-compact open subgroup of $G$ that does not fix an end of $T$ and $T'$ is a minimal invariant subtree for $H$. Then for every edge $e$ in $T'$ the group $H$ contains the pointwise stabilizers in $G$ of the two half-trees of $T$ defined by the edge $e$. \end{prop} {\em Proof.} Since $H$ is non-compact the tree $T'$ is infinite and every edge $e$ in $T'$ splits $T'$ up into two half-trees. From Lemma~\ref{LNoInvariant}(iii) above we see that for every edge $e$ in $T'$ there is a hyperbolic element $h$ in $H$ such that $e$ is on the axis of $h$.
Since $H$ is open there is a finite set $S$ of vertices such that the pointwise stabilizer $G_{(S)}$ is contained in $H$. Let $T_1$ and $T_2$ denote the two half-trees of $T$ defined by $e$. Using a suitable power of $h$ we can assume that $h^n(S)\subseteq T_1$ and then $G_{(T_1)}\subseteq h^nG_{(S)}h^{-n}\subseteq H$. Similarly we can show that $G_{(T_2)}\subseteq H$. If $H$ is compact then the tree $T'$ either has just a single vertex or consists of an edge with its end vertices, and $H$ then contains an element that transposes the two end vertices. If it is assumed that $G$ is topologically simple (as in \cite[Proposition~3.8]{CapraceDeMedts2011}) then $G$ acts without inversion on $T$, so the latter possibility above cannot occur and the conclusion of Proposition~\ref{PSubtree} holds trivially. \begin{lemm}{\rm (Cf.~\cite[Lemma~3.11]{CapraceDeMedts2011})}\label{LNoFree} Let $G$ be a closed edge-transitive subgroup of the automorphism group of some tree $T$. If $G$ is simple and has property H then there is no vertex $v$ in $T$ such that the action of $G_v$ on the set of edges with $v$ as an end vertex is free. \end{lemm} The argument in the proof of the lemma above is the same as in \cite{CapraceDeMedts2011}. The following is a version of \cite[Theorem A]{CapraceDeMedts2011}, but here property H is assumed instead of property P. Caprace and De Medts derive their theorem from \cite[Theorem~3.9]{CapraceDeMedts2011} and their argument also works for this version where we use Proposition~\ref{PSubtree} instead of \cite[Proposition~3.8]{CapraceDeMedts2011} and Lemma~\ref{LNoFree} instead of \cite[Lemma~3.11]{CapraceDeMedts2011}. \begin{theo}\label{TTheorem A}{\rm (Cf.~\cite[Theorem A and Theorem 3.9]{CapraceDeMedts2011})} Let $T$ be a tree all of whose vertices have degree at least 3. Suppose $G$ is a topologically simple closed subgroup of $\mbox{${\rm Aut}\;$} T$ which does not stabilize any proper non-empty subtree and which satisfies property H.
Then the following conditions are equivalent. (i) Every proper open subgroup of $G$ is compact. (ii) For every vertex $v\in VT$, the induced action of $G_v$ on the edges that have $v$ as an end vertex is primitive. In particular the action of $G$ on the set of edges of $T$ is transitive. \end{theo} \section{The automorphism group of a graph with\\ connectivity 1}\label{SConnectivity} A connected graph $X$ is said to have {\em connectivity} 1 if there is a vertex $x$ in $X$ such that $X\setminus x$ is not connected. Such a vertex $x$ is called a {\em cutvertex}. If a transitive graph has a cutvertex then every vertex is a cutvertex. The {\em blocks} (called {\em lobes} in \cite{JungWatkins1977}) of a graph $X$ with connectivity 1 are the maximal connected subgraphs that do not have connectivity 1. In this section, we obtain some simplicity results on the automorphism group of a transitive graph $X$ with connectivity 1. We use Tits' simplicity theorem \cite[Th\'eor\`eme~4.5]{Tits1970}. From a graph $X$ with connectivity 1 we can construct a tree $T_X$ called the {\em block graph} of $X$. The vertex set of $T_X$ is the union of the set of blocks of $X$ and the set of cutvertices in $X$. The set of edges in $T_X$ consists of all pairs $\{x, B\}$ where $x$ is a cutvertex, $B$ is a block and $x$ is in $B$. The set of cutvertices and the set of blocks thus form the parts of the natural bipartition of the tree $T_X$. The automorphism group of $X$ acts on $T_X$. \begin{lemm}\label{LPropertyP} Let $X$ be a transitive graph with connectivity 1. The action of $G=\mbox{${\rm Aut}\;$} X$ on $T_X$ has property P. \end{lemm} {\em Proof.} This can be seen directly or by noting that the action of $G$ on $T_X$ clearly has property E, and as $G$ is a closed permutation group, the action has property P. This lemma allows us to prove certain simplicity results for $\mbox{${\rm Aut}\;$} X$.
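For finite graphs the block graph $T_X$ can be computed with the standard Hopcroft--Tarjan biconnected-components DFS. The following stdlib-only Python sketch builds $T_X$ for a small hypothetical example (two triangles glued at a cutvertex); it is only meant to illustrate the construction, not any algorithm from the paper:

```python
def biconnected(adj):
    """Hopcroft-Tarjan DFS.  Returns (blocks, cutvertices), where each
    block is the frozenset of vertices of a maximal subgraph that does
    not have connectivity 1 (a biconnected component)."""
    disc, low, cuts, blocks, estack = {}, {}, set(), [], []
    counter = [0]

    def dfs(u, parent):
        disc[u] = low[u] = counter[0]
        counter[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v not in disc:
                estack.append((u, v))
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:      # u separates the subtree at v
                    if parent is not None or children > 1:
                        cuts.add(u)
                    comp = set()
                    while True:
                        a, b = estack.pop()
                        comp.update((a, b))
                        if (a, b) == (u, v):
                            break
                    blocks.append(frozenset(comp))
            elif disc[v] < disc[u]:        # back edge
                estack.append((u, v))
                low[u] = min(low[u], disc[v])

    dfs(next(iter(adj)), None)
    return blocks, cuts

def block_graph(adj):
    """Vertices of T_X: the blocks and the cutvertices; edges: {x, B}
    whenever the cutvertex x lies in the block B."""
    blocks, cuts = biconnected(adj)
    nodes = [('block', i) for i in range(len(blocks))] + [('cut', x) for x in cuts]
    edges = [(('cut', x), ('block', i))
             for i, blk in enumerate(blocks) for x in cuts if x in blk]
    return nodes, edges

# two triangles glued at the vertex 2, which is the unique cutvertex
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
blocks, cuts = biconnected(adj)
assert cuts == {2} and len(blocks) == 2
nodes, edges = block_graph(adj)
assert len(edges) == len(nodes) - 1   # T_X is a tree: block - cut - block
```

The bipartition of $T_X$ into block-vertices and cutvertex-vertices is visible directly in the `('block', i)` / `('cut', x)` labels.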
We say that a group $G$ acting on a set $Y$ is generated by stabilizers of points if the stabilizers in $G$ of the points in $Y$ generate $G$. \begin{theo}\label{TConnectivity1Simple} Let $X$ be a transitive graph with connectivity $1$ and $G=\mbox{${\rm Aut}\;$} X$. Let $n$ be the number of blocks a vertex in $X$ lies in. (i) If the automorphism group of some block is not transitive, then $G$ is not simple. (ii) If the automorphism group of every block is transitive and generated by vertex stabilizers, then $G$ is simple, unless $n=2$ and any two blocks are isomorphic, in which case $G$ has a normal simple subgroup of index $2$. \end{theo} \begin{coro}\label{CPrimSimple} Let $X$ be a primitive graph with connectivity $1$. If each vertex is contained in more than two blocks then the group $G=\mbox{${\rm Aut}\;$} X$ is simple. If each vertex is only contained in two blocks then $G$ has a simple normal subgroup of index $2$. \end{coro} The proofs of Theorem~\ref{TConnectivity1Simple} and Corollary~\ref{CPrimSimple} can be found in Appendix A together with the necessary background. \section{Automorphism groups of primitive graphs with infinitely many ends} \begin{theo}\label{TPrimitiveSimple} Let $X$ be a locally finite connected primitive graph with infinitely many ends. Then $G=\mbox{${\rm Aut}\;$} X$ has an open topologically simple subgroup of finite index. \end{theo} {\em Proof.} {\bf Step 1} in the proof is to define an action of $G$ on a tree $T$. It is shown in \cite[Proposition~3]{Moller1994} that if $X$ is a locally finite primitive graph with more than one end and $G$ is a group acting primitively on $X$ by automorphisms then there is a pair of vertices $x, y$ in $X$ such that the graph $Y$ with the same vertex set as $X$ and edge set $EY=G\{x, y\}$ is connected and has connectivity 1. Note that the action of $G$ on the vertex set of $X$ (the same as the vertex set of $Y$) gives an action of $G$ by automorphisms on $Y$.
The group $G$ now acts on the block graph $T_Y$, which is a tree, as explained in Section~\ref{SConnectivity}. {\bf Step 2 }is to show that the action of $G$ on $T_Y$ is faithful, fixes no end of $T_Y$ and leaves no proper non-empty subtree invariant. As explained in Section~\ref{SConnectivity} we can think of a vertex in $X$ also as a vertex in $T_Y$ and identify the vertex set of $X$ with one of the parts of the natural bipartition of the vertex set of $T_Y$. The action of $G$ on $T_Y$ is thus obviously faithful and there is no proper non-empty invariant subtree. Suppose that $G$ fixes some end of $T_Y$. We want to define a $G$-invariant proper non-trivial equivalence relation on the vertex set of $X$, contradicting the assumption that $G$ acts primitively on $X$. Take a ray $R=v_0, v_1, v_2, \ldots$ in $T_Y$ belonging to an end $\omega$ fixed by $G$ and say that vertices $u$ and $v$ are related if there is a number $N(u,v)$ such that $d(u, v_i)=d(v,v_i)$ for all numbers $i$ larger than $N(u,v)$. It is left to the reader to show that this is an equivalence relation and does not depend on the choice of the ray $R$. The equivalence classes are often called {\em horocycles}. Since the end $\omega$ is fixed by $G$ this equivalence relation is invariant under $G$. Restricting to the vertex set of $X$ (which we think of as a subset of the vertex set of $T_Y$) we see that this would give a proper non-trivial $G$-invariant equivalence relation on the vertex set of $X$, contradicting the assumption that $G$ acts primitively on $X$. Hence it is impossible that $G$ fixes an end of $T_Y$. {\bf Step 3 }is to show that the action of $G$ on $T_Y$ has property H. An edge $\{x, B\}$ in $T_Y$, where $x$ is a vertex in $Y$ and $B$ is a block in $Y$, gives a partition of $T_Y$ into two half-trees and that in turn gives a partition of the vertex set of $Y$ into two disjoint parts $C_{x}$ and $C_{B}$. Let $S_Y$ be the set of all the edges in the block $B$ that have $x$ as an endvertex.
If we remove the edges in $S_Y$ from $Y$ then we get a graph with two components that have vertex sets $C_{x}$ and $C_{B}$, respectively. Define now $S_X$ as the set of edges in $X$ that have one endvertex in $C_{x}$ and the other one in $C_{B}$. Because $X$ is locally finite and $G$ is transitive on the vertex set of $X$ we see that $G$ has only finitely many orbits on pairs $\{u, v\}$ of adjacent vertices in $X$. The action of $G$ on the vertex set of $X$ (the same as the vertex set of $Y$) induces automorphisms of both $X$ and $Y$ and we see that there is a constant $k$ such that if $u$ and $v$ are adjacent vertices in $X$ then $d_Y(u,v)\leq k$. The graph $Y$ is locally finite and thus there are only finitely many pairs of vertices $u\in C_{x}$ and $v\in C_{B}$ such that $d_Y(u,v)\leq k$. Now it follows from the above that the set $S_X$ is finite. Define $H$ as the subgroup of $G$ consisting of all the elements of $G$ that fix all the edges in $S_X$ and their endvertices. Since the set $S_X$ is finite, the group $H$ is open in the permutation topology on $G$. This group leaves the sets $C_{x}$ and $C_B$ invariant. It follows from \cite[Theorem~2.5]{Smith2010} that the group $G$ in the permutation topology is non-discrete (see Appendix B for a detailed explanation). The set $S_X$ separates the sets $C_{x}$ and $C_B$ (i.e.~any path between a vertex in $C_{x}$ and a vertex in $C_B$ contains an edge from $S_X$). Because $G$ is the full automorphism group of $X$, the group $H$ acts independently on $C_{x}$ and $C_B$, i.e.~$H=H_{(C_x)}H_{(C_B)}$. Since $G$ is non-discrete and $H$ is open, $H$ is non-trivial, and hence $H_{(C_x)}$ or $H_{(C_B)}$ is non-trivial. But $H_{(C_x)}$ is contained in $G_{(T_{x,e})}$ and $H_{(C_B)}$ is contained in $G_{(T_{B,e})}$. Hence the action of $G$ on $T_Y$ has property H. The final {\bf step} is an application of Theorem~\ref{TSimple}.
As stated above, the group $G^{++}$ is in this case a non-trivial topologically simple open normal subgroup of $G$ and every closed non-trivial normal subgroup of $G$ contains $G^{++}$. Since $G$ acts primitively on the vertex set of $X$ the normal subgroup $G^{++}$ acts transitively on the vertex set of $X$. By Lemma~\ref{Lcocompact} we conclude that $G^{++}$ is cocompact in $G$. Because $G^{++}$ is open we know that the quotient space $G/G^{++}$ is discrete and since it is also compact we see that it must be finite. Hence $G^{++}$ has finite index in $G$. {\em Remark.} The result \cite[Proposition~3]{Moller1994} about primitive graphs referred to above is proved by using the theory of structure trees developed and described for instance in \cite{DicksDunwoody1989}, \cite{Moller1995} and \cite{ThomassenWoess1993}. Using this theory it is possible to apply Theorem~\ref{TSimple} more generally to automorphism groups of locally finite graphs with infinitely many ends. \section*{Appendix A: Automorphism groups of graphs with connectivity 1}\label{AConnectivity1} This appendix contains the proofs of Theorem~\ref{TConnectivity1Simple} and Corollary~\ref{CPrimSimple} together with the necessary background discussion. For a graph $X$ with connectivity 1 we let $B_i$, $i\in I$, denote a family of representatives for the isomorphism types of blocks in $X$. Furthermore, we use $B_i^{(j)}$ for $j\in J_i$ to denote the orbits of $\mbox{${\rm Aut}\;$} B_i$ on the vertex set of $B_i$. For a vertex $x$ in $X$ we let $m_i^{(j)}(x)$ be the number of blocks of type $B_i$ that contain $x$ in the orbit $B_i^{(j)}$. Jung and Watkins in \cite[Theorem 3.2]{JungWatkins1977} show that a graph $X$ of connectivity 1 is transitive if and only if all the functions $m_i^{(j)}(x)$ are constant on $VX$. Consider an action of a group $G$ on a tree $T$ and assume that the action has property P. Let $G^+$ denote the subgroup of $G$ generated by the stabilizers of edges.
The subgroup $G^+$ is simple by Tits' theorem \cite[Th\'eor\`eme~4.5]{Tits1970}. To decide if $G$ is simple we must investigate when $G=G^+$. For a vertex $x$ in $T$ we define $T_1(x)$ as the set of vertices adjacent to $x$. The set $T_1(x)$ is invariant under $G_x$ and we define $G_x^{T_1(x)}$ to be the permutation group that $G_x$ induces on $T_1(x)$. \begin{lemm}\label{LStabilisers} If $x$ is a vertex in $T$ then the group $G^+$ contains the group $G_{(T_1(x))}$. If the group $F=G_x^{T_1(x)}$ is generated by stabilizers of vertices (thought of as a permutation group on $T_1(x)$) then $G^+$ contains $G_x$. \end{lemm} {\em Proof.} The first part of the lemma is obvious, because if $e$ is an edge in $T$ with $x$ as an end-vertex then $G_{(T_1(x))}$ is contained in $G_e$ and thus $G_{(T_1(x))}$ is contained in $G^+$. Consider now the action of $G^+\cap G_x$ on $T_1(x)$. For a vertex $y$ in $T_1(x)$ the group $G^+$ contains the stabilizer of the edge between $x$ and $y$ and thus the stabilizer of $y$ in $G_x^{T_1(x)}$ is contained in the permutation group induced by $G^+$ on $T_1(x)$. Hence, if $F$ is generated by stabilizers then $G^+$ induces the full group $F$ on $T_1(x)$. Since $G^+$ contains the group $G_{(T_1(x))}$ we conclude that $G^+$ contains the full stabilizer $G_x$. \begin{lemm}\label{LNotSimple} Let $Y$ denote the quotient graph of $T$ by the action of $G$. If the graph $Y$ is not a tree then $G$ is not simple. If $Y$ is a tree then $G$ is generated by stabilizers of vertices. \end{lemm} {\em Proof.} Let $R$ denote the normal subgroup of $G$ generated by all the stabilizers of vertices. By \cite[Corollary 1 in \S 5.4]{Serre1980} the quotient group $G/R$ is isomorphic to the fundamental group of $Y$, and this is non-trivial if $Y$ is not a tree. {\em Proof of Theorem~\ref{TConnectivity1Simple}.} (i) Suppose that $B$ is some block of $X$ such that the automorphism group of $B$ is not transitive.
Say $x$ and $y$ are vertices in $B$ that belong to different orbits of $\mbox{${\rm Aut}\;$} B$. The edges $\{x,B\}$ and $\{y,B\}$ in $T_X$ then belong to different orbits of $\mbox{${\rm Aut}\;$} X$ and therefore have different images in the quotient graph of $T_X$ under the action of $G$. But their images in the quotient graph have the same end-vertices and thus the quotient graph is not a tree. By Lemma~\ref{LNotSimple}, $\mbox{${\rm Aut}\;$} X$ is not simple. (ii) The stabilizer in $G$ of a vertex $x$ in $X$ acts on the set of blocks that contain $x$ as a direct product of symmetric groups. If $n>2$ the permutation group induced by $G_x$ on the neighbours of $x$ in $T_X$ is generated by stabilizers, and therefore $G_x \leq G^+$. If $n=2$ and the two blocks containing a given vertex are not isomorphic, the same conclusion obviously holds since $G_x$ acts trivially on $T_1(x)$. Let $B$ be a block of $X$ and think of $B$ as a vertex in $T_X$. The vertices in $T_X$ contained in $T_1(B)$ correspond to the vertices in the block $B$ and $G_B$ induces the full automorphism group of $B$ on $T_1(B)$. Since $\mbox{${\rm Aut}\;$} B$ is generated by stabilizers, we have $G_B \leq G^+$. The quotient graph $Y$ of $T_X$ by the action of $G$ has one vertex $\tilde{x}$ for the orbit of $G$ on the vertices in $T_X$ corresponding to vertices in $X$ and one vertex for each orbit on the blocks, joined to $\tilde{x}$. This is a tree and by Lemma~\ref{LNotSimple} we see that $G$ is generated by the stabilizers of vertices and hence $G^+=G$. \par Finally assume $n=2$ and all blocks are isomorphic. In this case each vertex in the tree $T_X$ corresponding to a vertex in $X$ has degree 2. Construct a new graph $T'_X$ such that the set of vertices is the set of blocks of $X$ and two vertices in $T'_X$ are adjacent if and only if the corresponding blocks have a common vertex. The condition that each vertex is contained in just two blocks guarantees that $T'_X$ is a tree.
The assumption that all the blocks are isomorphic says that $G$ acts transitively on the vertex set of $T'_X$. One now sees that $G$ cannot be simple because the subgroup $N$ of $G$ preserving the classes of the natural bipartition of $T'_X$ is normal in $G$ with index 2. It is clear that $N$ is generated by the stabilizers of vertices in $T'_X$ (i.e.~the stabilizers in $G$ of the blocks of $X$). The argument above shows that the group $G^+$ contains all the stabilizers of blocks in $X$ and thus $N=G^+$ and $N$ is simple. {\em Comment. } Assume $n=2$ and the two blocks containing a given vertex are not isomorphic. By the classification of Jung and Watkins of transitive graphs with connectivity 1 described above we see that the automorphism group of each block must be transitive. {\em Proof of Corollary~\ref{CPrimSimple}.} Jung and Watkins in \cite{JungWatkins1977} also give a complete description of primitive graphs with connectivity 1. In these graphs each block is a primitive graph, any two blocks are isomorphic and each block has at least three vertices. Since each block is a primitive (and therefore transitive) graph with at least three vertices, the automorphism group of a block is generated by stabilizers. (It is impossible for the automorphism group of a primitive graph to be regular.) Because all the blocks are isomorphic the result now follows from Theorem~\ref{TConnectivity1Simple}. \section*{Appendix B: Primitive graphs with infinitely many ends}\label{APrimitive} The purpose of this appendix is to explain how Theorem~2.5 in Smith's paper \cite{Smith2010} implies that if a group acts primitively on an infinite locally finite graph with more than one end then the stabilizer of a vertex is infinite. We start by proving a result for group actions on trees and then use the tree described in Step 1 of the proof of Theorem~\ref{TPrimitiveSimple} to prove our result for a group acting primitively on a locally finite graph with infinitely many ends.
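The proofs in Appendix A repeatedly use the notion of a permutation group being generated by stabilizers of points; for small groups this can be checked by brute force. A stdlib-only Python sketch with our own illustrative examples (not from the paper): it confirms that the symmetric group on three points is generated by its point stabilizers, while a regular group, such as the cyclic group of order 3 acting on itself, has only trivial point stabilizers, in line with the parenthetical remark that the automorphism group of a primitive graph cannot be regular.

```python
from itertools import permutations

def compose(p, q):
    """Composition (p o q)(i) = p[q[i]] of permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def generated(gens, n):
    """Closure of gens (permutations of 0..n-1) under composition;
    for a finite set of permutations this is the generated subgroup."""
    ident = tuple(range(n))
    group, frontier = {ident}, [ident]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = compose(g, h)
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

n = 3
sym = set(permutations(range(n)))
# elements fixing at least one point, i.e. the union of the point stabilizers
stab = [g for g in sym if any(g[i] == i for i in range(n))]
assert generated(stab, n) == sym   # Sym(3) is generated by point stabilizers
# a regular group: the cyclic group of order 3 acting on itself
cyc = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
ident = (0, 1, 2)
# every non-identity element is fixed-point-free, so stabilizers are trivial
assert all(not any(g[i] == i for i in range(n)) for g in cyc if g != ident)
```

Closure under composition suffices here because every element of a finite permutation group has finite order, so inverses are automatically included.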
\begin{lemm}\label{LSimon} Let $G$ be a group acting on a tree $T$. Let $x$ and $y$ be distinct vertices in $T$. Assume that on the path between $x$ and $y$ there is a vertex $z$, distinct from both $x$ and $y$, such that $G_{x, z}=G_{y, z}$. Suppose that $d(x,z)\leq d(z,y)$. If $h\in H=\langle G_x, G_y\rangle$ then either $h$ fixes $y$ or $d_T(y, h(y))>d(x,y)$. \end{lemm} {\em Proof.} Set $A=G_x$, $B=G_y$ and $C=A\cap B=G_{x,y}$. Let $\{a_i\}_{i\in I}$ be a set of coset representatives for $C$ in $A$ and, similarly, let $\{b_j\}_{j\in J}$ be a set of coset representatives for $C$ in $B$. Assume that the identity element is included in both families. Set $k=d_T(x,y)$. For $g\in A$ we define $x(g)$ as a vertex in $[x, y]$ that is fixed by $g$ and is at the greatest distance from $x$. If $g\in B$ define $y(g)$ similarly. For an element $g\in A\setminus C$ the condition in the lemma means that $d_T(x,x(g))<d_T(x,z)\leq k/2$ and then $d_T(y,x(g))>d_T(y,z)\geq k/2$. Similarly, if $g\in B\setminus C$ then $d_T(y,y(g))<d_T(y,z)$ and $d_T(x,y(g))>d_T(x,z)$. Recall that if $v$ is a vertex in $T$ then $\mbox{${\rm pr}$}_{[x,y]}(v)$ is defined as the vertex on the geodesic $[x,y]$ that is closest to $v$. Write $h\in H$ as $h=b_{j_l}a_{i_l}\cdots b_{j_2}a_{i_2}b_{j_1}a_{i_1}c$ where $c\in C$ and none of the $a_i$'s and $b_j$'s is the identity element, with the possible exceptions of $a_{i_1}$ and $b_{j_l}$. We use induction on $l$. Our induction hypothesis is that if $h$ is as above and $h$ does not fix $y$ then $d_T(y, h(y))>k$ and if $b_{j_l}\neq 1$ then $\mbox{${\rm pr}$}_{[x,y]}(h(y))=y(b_{j_l})$. To start with, it is obvious that if $l=0$, or if $l=1$ and $a_{i_1}=1$, then $h$ fixes $y$. Assume now that $a_{i_1}\neq 1$. Then $a_{i_1}$ is in $A\setminus C$ and does not fix $y$. Recall that $d_T(y, x(a_{i_1}))>d_T(y,z)\geq k/2$. The geodesic from $y$ to $a_{i_1}(y)$ goes through the vertex $x(a_{i_1})$. Hence $d_T(y, a_{i_1}(y))=2d_T(y, x(a_{i_1}))>k$.
Because $b_{j_1}$ fixes $y$ we see that $d_T(y, b_{j_1}a_{i_1}(y))=d_T(y, a_{i_1}(y))>k$. Let us look closer at what happens if $b_{j_1}\neq 1$. The geodesic from $y$ to $a_{i_1}(y)$ goes through the vertex $x(a_{i_1})$ and thus also through the vertex $y(b_{j_1})$. Note that the vertex $y(b_{j_1})$ is the vertex in $[y,b_{j_1}a_{i_1}(y)]\cap[x,y]$ that is furthest away from $y$ and thus $d_T(y,\mbox{${\rm pr}$}_{[x,y]}(h(y)))=d_T(y,y(b_{j_1}))<d_T(y,z)$. Assuming the induction hypothesis above we write $h= b_{j_{l+1}}a_{i_{l+1}}b_{j_l}a_{i_l}\cdots b_{j_1}a_{i_1}c$ with all the $a_{i}$'s and $b_{j}$'s occurring non-trivial except possibly $a_{i_1}$ and $b_{j_{l+1}}$. Write $h'= b_{j_l}a_{i_l}\cdots b_{j_1}a_{i_1}c$ and note that $b_{j_{l}}\neq 1$. The induction hypothesis says that $d_T(y,\mbox{${\rm pr}$}_{[x,y]}(h'(y)))<d_T(y,z)$. Observe that $$d(y,\mbox{${\rm pr}$}_{[x,y]}(h'(y)))+d(\mbox{${\rm pr}$}_{[x,y]}(h'(y)),h'(y))=d(y,h'(y))>k.$$ The geodesic from $x$ to $h'(y)$ contains $x(a_{i_{l+1}})$ and $\mbox{${\rm pr}$}_{[x,y]}(a_{i_{l+1}}h'(y))=x(a_{i_{l+1}})$. Clearly $d(\mbox{${\rm pr}$}_{[x,y]}(a_{i_{l+1}}h'(y)),a_{i_{l+1}}h'(y))> d(\mbox{${\rm pr}$}_{[x,y]}(h'(y)),h'(y))$ and $$d(y,\mbox{${\rm pr}$}_{[x,y]}(a_{i_{l+1}}h'(y)))=d(y,x(a_{i_{l+1}}))> d(y,\mbox{${\rm pr}$}_{[x,y]}(h'(y))).$$ Hence $d_T(y,a_{i_{l+1}}h'(y))> d_T(y,h'(y))>k$. Therefore $$d_T(y,h(y))=d_T(y,b_{j_{l+1}}a_{i_{l+1}}h'(y))>k.$$ If $b_{j_{l+1}}\neq 1$, note that the geodesic from $y$ to $a_{i_{l+1}}h'(y)$ goes through the vertex $y(b_{j_{l+1}})$, and thus $\mbox{${\rm pr}$}_{[x,y]}(b_{j_{l+1}}a_{i_{l+1}}h'(y))=y(b_{j_{l+1}})$ and therefore $d_T(y, \mbox{${\rm pr}$}_{[x,y]}(h(y)))<d_T(y,z)$. \begin{theo}{\rm (Cf.~\cite[Theorem~2.5]{Smith2010})}\label{TSimon} Let $G$ be a group acting on a tree $T$ with two orbits $V_1$ and $V_2$ on the vertex set.
Suppose that there are distinct vertices $x$ and $y$ in $V_1$ and that on the path between $x$ and $y$ there is a vertex $z$, distinct from both $x$ and $y$, such that $G_{x, z}=G_{y, z}$. Then $G$ does not act primitively on $V_1$. \end{theo} {\em Proof.} Let $x, y$ and $z$ be as in the theorem. Suppose that $G$ acts primitively on $V_1$. Then $G$ acts transitively on $V_1$ and, since $G_x$ is a maximal subgroup of $G$, we get $\langle G_x, G_y\rangle=G$. But now we have a contradiction with Lemma~\ref{LSimon}: $k=d_T(x,y)\geq 2$ and the orbit of $y$ under $\langle G_x, G_y\rangle=G$ would have to contain vertices at distance 2 from $y$, but by the Lemma that is impossible. \begin{coro}\label{CPrimitive} Let $G$ be a group acting on a tree $T$ with two orbits $V_1$ and $V_2$ on the vertex set. Suppose that $G$ acts primitively on $V_1$ and that the tree has infinite diameter. Then the stabilizer $G_x$ of a vertex $x$ in $V_1$ is infinite. \end{coro} {\em Proof.} Suppose the stabilizers of vertices in $V_1$ are finite. Let $g$ be an element in $G$ that acts like a translation on $T$ and let $\{v_i\}_{i\in\mbox{${\bf Z}$}}$ be the line $L$ that $g$ acts on by translation. (Such an element exists by Lemma~\ref{LNoInvariant} part (iii).) Assume that $v_0$ is in $V_1$. Set $u_j=g^j(v_0)$. Define $G(i)$ as the stabilizer of $u_i$ and $G(i,j)$ as $G_{u_i}\cap G_{u_j}$. Note that $G(i,j)$ fixes all the vertices in the path between $u_i$ and $u_j$. Hence $G(0)\supseteq G(0, 1)\supseteq G(0, 2)\supseteq\cdots$. Because $G(0)$ is finite this sequence must eventually stop. So there is a number $m$ such that $G(0, m)$ is equal to $G(0, j)$ for all $j\geq m$, i.e.~the group $G(0, m)$ fixes all vertices $u_j$ with $j\geq m$. Now $G(0, m)\supseteq G({-1}, m)\supseteq G({-2}, m)\supseteq\cdots$. Since $G(0, m)$ is finite there is a number $n\leq 0$ such that $G(n, m)=G(j, m)$ for all $j\leq n$.
The conclusion is that the group $G(n, m)$ fixes all the $u_i$'s and hence fixes all the vertices on the line $L$; indeed, $G(n, m)$ is equal to the pointwise stabilizer of the line $L$. Note that $g^{m-n}G(n, m)g^{-(m-n)}=G(m,{2m-n})$. We now set $x=u_n$, $z=u_m$ and $y=u_{2m-n}$, and see that $G_{x,z}=G_{y,z}$. By Theorem~\ref{TSimon} it is now impossible that $G$ acts primitively on $V_1$. We have reached a contradiction, and therefore the assumption that the stabilizer of a vertex in $V_1$ is finite must be wrong. \begin{coro}\label{CPrimitiveTree} Let $G$ be a group acting primitively on a locally finite connected graph $X$ with infinitely many ends. Then the stabilizer $G_x$ of a vertex $x$ in $X$ is infinite and the group $G$ with the permutation topology is not discrete. \end{coro} {\em Proof.} The action of $G$ on the tree $T_Y$ as described in Step 1 of the proof of Theorem~\ref{TPrimitiveSimple} satisfies the conditions in Corollary~\ref{CPrimitive}, with $V_1$ being the set of vertices in the tree $T_Y$ that correspond to the vertex set of $X$. The stabilizer of a vertex $x$ in $T_Y$ is equal to the stabilizer of the corresponding vertex in $X$, and the conclusion follows from Corollary~\ref{CPrimitive}. Keeping in mind that the graph $X$ is locally finite, we conclude that $G$ is not discrete. \end{document}
\begin{document} \title[Dimorphic Mersenne numbers and their applications]{Dimorphic Mersenne numbers and their applications} \author{Taekyun Kim} \address{Department of Mathematics, Kwangwoon University, Seoul 139-701, Republic of Korea} \email{[email protected]} \author{DAE SAN KIM} \address{Department of Mathematics, Sogang University, Seoul 121-742, Republic of Korea} \email{[email protected]} \subjclass[2010]{11B68; 11B73; 11B83} \keywords{dimorphic Mersenne numbers; incomplete Bell polynomials; degenerate Bernoulli polynomials} \begin{abstract} The Mersenne primes are primes of the form $2^{p}-1$ for some prime $p$. These primes have been studied since antiquity because of their close connection with perfect numbers, and they remain of interest today because of the ease of testing them for primality. In this paper, we introduce the dimorphic Mersenne numbers as a degenerate version of the Mersenne numbers and investigate some of their properties in connection with the degenerate Bernoulli polynomials and the incomplete Bell polynomials. \end{abstract} \maketitle \section{Introduction} In recent years, there have been active explorations of various degenerate versions of many special numbers and polynomials with diverse tools such as generating functions, combinatorial methods, $p$-adic analysis, umbral calculus, differential equations, probability theory, operator theory, analytic number theory and quantum physics. These were initiated by Carlitz in [3,4] and yielded many interesting results of arithmetical and combinatorial nature (see [7-9,12-14,16-18] and the references therein).\par The aim of this paper is to introduce the dimorphic Mersenne numbers as a degenerate version of the Mersenne numbers and to find some of their applications in connection with the degenerate Bernoulli polynomials.
The novelty of this paper is that it is the first paper which introduces the dimorphic Mersenne numbers and shows some of their applications associated with the degenerate Bernoulli numbers and the incomplete Bell polynomials. \par The outline of this paper is as follows. In Section 1, we recall Mersenne numbers, the degenerate exponentials and the degenerate Bernoulli polynomials. We remind the reader of the degenerate Stirling numbers of the second kind, the incomplete Bell polynomials and the complete Bell polynomials. Section 2 contains the main results of this paper. We first introduce the dimorphic Mersenne numbers as a degenerate version of the Mersenne numbers. Then, by using the generating function of the dimorphic Mersenne numbers in \eqref{16}, we derive in Theorem 1 a recurrence formula for the degenerate Bernoulli polynomials involving the dimorphic Mersenne numbers. We express a factor of the generating function of the dimorphic Mersenne numbers in terms of the incomplete Bell polynomials with arguments given by the degenerate Bernoulli polynomials (see \eqref{16}, \eqref{20}). In Theorem 2, we obtain from this expression a representation of the dimorphic Mersenne numbers in terms of those incomplete Bell polynomials with arguments given by the degenerate Bernoulli polynomials. Finally, in Theorem 3 we get an expression of the degenerate Bernoulli polynomials in terms of the same incomplete Bell polynomials with arguments given by the degenerate Bernoulli polynomials from Theorem 2. In the rest of this section, we recall the facts that are needed throughout this paper. \par The Mersenne numbers are defined by \begin{displaymath} M_{n}=2^{n}-1,\quad (n\ge 0),\quad (\mathrm{see}\ [2]). \end{displaymath} $M_{p}$ is called a Mersenne prime if $p$ is prime and $M_{p}$ is also prime. For example, $M_{p}$ is a Mersenne prime for $p=2,3,5,7,13,17,19,31,61,89,107,127,521$ (see [2]).
As of May 2022, the largest known prime is the Mersenne prime $2^{82,589,933}-1$, which has 24,862,048 digits when it is written in base 10. It is easy to show that \begin{equation} \frac{z}{1-3z+2z^{2}}=\sum_{n=0}^{\infty}M_{n}z^{n},\quad (\mathrm{see}\ [2]). \label{1} \end{equation} \par For any nonzero $\lambda\in\mathbb{R}$, the degenerate exponentials are defined by \begin{equation} e_{\lambda}^{x}(t)=(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{n=0}^{\infty}(x)_{n,\lambda}\frac{t^{n}}{n!},\quad e_{\lambda}(t)=e_{\lambda}^{1}(t),\quad (\mathrm{see}\ [7-9,11-14]),\label{2} \end{equation} where \begin{equation} (x)_{0,\lambda}=1,\quad (x)_{n,\lambda}=x(x-\lambda)(x-2\lambda)\cdots(x-(n-1)\lambda),\quad (n\ge 1).\label{3} \end{equation} Note that \begin{displaymath} \lim_{\lambda\rightarrow 0}e_{\lambda}^{x}(t)=e^{xt},\quad \lim_{\lambda\rightarrow 0}(x)_{n,\lambda}=x^{n},\quad (n\ge 0). \end{displaymath} \par In [3,4], Carlitz introduced the degenerate Bernoulli polynomials defined by \begin{equation} \frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t)=\sum_{n=0}^{\infty}\beta_{n,\lambda}(x)\frac{t^{n}}{n!}.\label{4} \end{equation} When $x=0$, $\beta_{n,\lambda}=\beta_{n,\lambda}(0),\ (n\ge 0)$, are called the degenerate Bernoulli numbers. Note that $\displaystyle \lim_{\lambda\rightarrow 0}\beta_{n,\lambda}=B_{n}$, where $B_{n}$ are the ordinary Bernoulli numbers (see [1-19]). From \eqref{4}, we note that \begin{displaymath} \beta_{n,\lambda}(x)=\sum_{k=0}^{n}\binom{n}{k}(x)_{n-k,\lambda}\beta_{k,\lambda},\quad (n\ge 0),\quad (\mathrm{see}\ [3,4]). \end{displaymath} \par It is known that the Stirling numbers of the second kind are defined by \begin{equation} x^{n}=\sum_{k=0}^{n}S_{2}(n,k)(x)_{k},\quad (n\ge 0),\quad (\mathrm{see}\ [8]),\label{5} \end{equation} where $(x)_{0}=1,\ (x)_{n}=x(x-1)\cdots(x-n+1),\ (n\ge 1)$.
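The generating function \eqref{1} and the degenerate falling factorial \eqref{3} can be checked mechanically. The following sketch (illustrative only, not part of the paper) expands $z/(1-3z+2z^{2})$ through the linear recurrence induced by its denominator and verifies the $\lambda\to 0$ limit of $(x)_{n,\lambda}$:

```python
from fractions import Fraction

def mersenne(n):
    """Classical Mersenne number M_n = 2^n - 1."""
    return 2 ** n - 1

def gf_coeffs(N):
    """First N coefficients of z / (1 - 3z + 2z^2).

    Multiplying out (1 - 3z + 2z^2) * sum a_n z^n = z gives
    a_0 = 0, a_1 = 1 and a_n = 3 a_{n-1} - 2 a_{n-2} for n >= 2.
    """
    a = [0, 1]
    while len(a) < N:
        a.append(3 * a[-1] - 2 * a[-2])
    return a[:N]

def falling(x, n, lam):
    """Degenerate falling factorial (x)_{n,lambda} = x(x-lam)...(x-(n-1)lam)."""
    p = Fraction(1)
    for k in range(n):
        p *= Fraction(x) - k * Fraction(lam)
    return p

# the coefficients of the generating function agree with M_n = 2^n - 1
assert gf_coeffs(12) == [mersenne(n) for n in range(12)]
# (x)_{n,0} = x^n, the lambda -> 0 limit stated above
assert falling(3, 4, 0) == 3 ** 4
```

Exact rationals are used so that later checks with non-integer $\lambda$ incur no rounding error.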
From \eqref{5}, we note that \begin{equation} \frac{1}{k!}\big(e^{t}-1\big)^{k}=\sum_{n=k}^{\infty}S_{2}(n,k)\frac{t^{n}}{n!},\quad (k \ge 0),\quad (\mathrm{see}\ [8]). \label{6} \end{equation} \par The incomplete Bell polynomials are defined by \begin{equation} \frac{1}{k!}\bigg(\sum_{i=1}^{\infty}x_{i}\frac{t^{i}}{i!}\bigg)^{k}=\sum_{n=k}^{\infty}B_{n,k}(x_{1},\cdots,x_{n-k+1})\frac{t^{n}}{n!},\quad (k\ge 0),\quad (\mathrm{see}\ [1-8,10-13,15]).\label{7} \end{equation} More explicitly, they are given by \begin{equation} \begin{aligned} &B_{n,k}(x_{1},x_{2},\dots,x_{n-k+1})\\ &=\sum_{\substack{l_{1}+\cdots+l_{n-k+1}=k\\ l_{1}+2l_{2}+\cdots+(n-k+1)l_{n-k+1}=n}}\frac{n!}{l_{1}!l_{2}!\cdots l_{n-k+1}!} \bigg(\frac{x_{1}}{1!}\bigg)^{l_{1}} \bigg(\frac{x_{2}}{2!}\bigg)^{l_{2}}\cdots \bigg(\frac{x_{n-k+1}}{(n-k+1)!}\bigg)^{l_{n-k+1}}, \end{aligned}\label{8} \end{equation} where the sum runs over all nonnegative integers $l_{1}, \dots, l_{n-k+1}$ satisfying $l_{1}+\cdots+l_{n-k+1}=k$ and $l_{1}+2l_{2}+\cdots+(n-k+1)l_{n-k+1}=n$. \par The complete Bell polynomials are given by \begin{align} \exp\bigg(\sum_{i=1}^{\infty}x_{i}\frac{t^{i}}{i!}\bigg)&=\sum_{n=0}^{\infty}B_{n}(x_{1},\dots,x_{n})\frac{t^{n}}{n!} \label{9}\\ &=1+\sum_{n=1}^{\infty}B_{n}(x_{1},x_{2},\dots,x_{n})\frac{t^{n}}{n!},\quad (\mathrm{see}\ [1-8,10-13,15]). \nonumber \end{align} Then we have \begin{displaymath} B_{0}(x_{1},\dots,x_{n})=1,\ B_{n}(x_{1},\dots,x_{n})=\sum_{l_{1}+2l_{2}+\cdots+nl_{n}=n}\frac{n!}{l_{1}!\cdots l_{n}!}\bigg(\frac{x_{1}}{1!}\bigg)^{l_{1}}\cdots\bigg(\frac{x_{n}}{n!}\bigg)^{l_{n}},\quad (n \ge 1), \end{displaymath} where the sum runs over all nonnegative integers $l_{1}, \dots, l_{n}$ satisfying $l_{1}+2l_{2}+\cdots+nl_{n}=n$.
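The explicit sum \eqref{8} translates directly into code. The sketch below (illustrative, exact rational arithmetic) also checks the identity $B_{n,k}(1,\dots,1)=S_{2}(n,k)$ and the decomposition of the complete Bell polynomials into incomplete ones, both of which are recalled later in this section:

```python
from fractions import Fraction
from math import factorial

def incomplete_bell(n, k, x):
    """B_{n,k}(x_1,...,x_{n-k+1}) via the explicit sum over multi-indices
    (l_1,...,l_{n-k+1}) with sum l_i = k and sum i*l_i = n."""
    m = n - k + 1
    if k == 0:
        return Fraction(1) if n == 0 else Fraction(0)
    total = Fraction(0)

    def rec(i, lsum, wsum, coeff):
        nonlocal total
        if i == m:
            if lsum == k and wsum == n:
                total += coeff
            return
        l = 0
        while lsum + l <= k and wsum + (i + 1) * l <= n:
            rec(i + 1, lsum + l, wsum + (i + 1) * l,
                coeff * Fraction(x[i], factorial(i + 1)) ** l / factorial(l))
            l += 1

    rec(0, 0, 0, Fraction(factorial(n)))
    return total

def complete_bell(n, x):
    """B_n(x_1,...,x_n) = sum_{k=1}^n B_{n,k}(x_1,...,x_{n-k+1}) for n >= 1."""
    return sum(incomplete_bell(n, k, x) for k in range(1, n + 1))

def stirling2(n, k):
    """Stirling numbers of the second kind via the usual recurrence."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# B_{n,k}(1,...,1) = S_2(n,k)
assert all(incomplete_bell(n, k, [1] * n) == stirling2(n, k)
           for n in range(1, 7) for k in range(1, n + 1))
# B_{n,1}(x_1,...,x_n) = x_n, since only l_n = 1 contributes
assert incomplete_bell(5, 1, [2, 3, 5, 7, 11]) == 11
```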
\par From \eqref{7} and \eqref{8}, we note that \begin{align} \exp\bigg(\sum_{i=1}^{\infty}x_{i}\frac{t^{i}}{i!}\bigg) &=1+\sum_{k=1}^{\infty}\frac{1}{k!}\bigg(\sum_{i=1}^{\infty}x_{i}\frac{t^{i}}{i!}\bigg)^{k} \label{10}\\ &=1+\sum_{k=1}^{\infty}\sum_{n=k}^{\infty}B_{n,k}(x_{1},x_{2},\dots,x_{n-k+1})\frac{t^{n}}{n!} \nonumber \\ &=1+\sum_{n=1}^{\infty}\sum_{k=1}^{n}B_{n,k}(x_{1},\dots,x_{n-k+1})\frac{t^{n}}{n!}.\nonumber \end{align} By \eqref{9} and \eqref{10}, we get \begin{equation} B_{n}(x_{1},x_{2},\dots,x_{n})=\sum_{k=1}^{n}B_{n,k}(x_{1},\dots,x_{n-k+1}),\quad (n\ge 1). \label{11} \end{equation} It is known that the Bell polynomials are given by \begin{displaymath} \phi_{n}(x)=\sum_{k=0}^{n}S_{2}(n,k)x^{k},\quad (n\ge 0),\quad (\mathrm{see}\ [16,17]). \end{displaymath} From \eqref{6}, \eqref{7} and \eqref{11}, we note that \begin{displaymath} B_{n,k}(1,1,\dots,1)=S_{2}(n,k),\quad B_{n}(x,x,\dots,x)=\phi_{n}(x),\quad (n\ge 0). \end{displaymath} \section{Dimorphic Mersenne numbers and their applications} Now, we consider the dimorphic Mersenne numbers given by \begin{equation} M_{n,\lambda}=(2)_{n,\lambda}-(1)_{n,\lambda},\quad (\lambda\in\mathbb{R},\quad n\ge 0). \label{12} \end{equation} Note that $\lim_{\lambda\rightarrow 0}M_{n,\lambda}=M_{n}$. We observe that \begin{align} e_{\lambda}^{2}(t)-e_{\lambda}(t)&=\sum_{n=0}^{\infty}(2)_{n,\lambda}\frac{t^{n}}{n!}-\sum_{n=0}^{\infty}(1)_{n,\lambda}\frac{t^{n}}{n!} \label{13} \\ &=\sum_{n=0}^{\infty}\Big((2)_{n,\lambda}-(1)_{n,\lambda}\Big)\frac{t^{n}}{n!}. \nonumber \end{align} Thus, by \eqref{12} and \eqref{13}, we have \begin{equation} e_{\lambda}^{2}(t)-e_{\lambda}(t)=\sum_{n=0}^{\infty}M_{n,\lambda}\frac{t^{n}}{n!}. \label{14} \end{equation} Thus, by \eqref{14} and noting that $M_{0,\lambda}=0$, we see that \begin{equation} \frac{1}{t}\big(e_{\lambda}^{2}(t)-e_{\lambda}(t)\big)=\frac{1}{t}\sum_{n=1}^{\infty}M_{n,\lambda}\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}\frac{M_{n+1,\lambda}}{n+1}\frac{t^{n}}{n!}. 
\label{15} \end{equation} From \eqref{15}, we have \begin{align} \sum_{n=0}^{\infty}\frac{M_{n+1,\lambda}}{n+1}\frac{t^{n}}{n!}&=\frac{1}{t}\big(e_{\lambda}^{2}(t)-e_{\lambda}(t)\big) =\frac{e_{\lambda}(t)}{t}\big(e_{\lambda}(t)-1\big)\label{16} \\ &=\frac{e_{\lambda}^{x+1}(t)}{te_{\lambda}^{x}(t)}(e_{\lambda}(t)-1). \nonumber \end{align} \par Now, we observe that \begin{align} e_{\lambda}^{x+1}(t)&=\frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t) \frac{e_{\lambda}^{x+1}(t)}{te_{\lambda}^{x}(t)}\big(e_{\lambda}(t)-1\big) \label{17}\\ &=\sum_{l=0}^{\infty}\beta_{l,\lambda}(x)\frac{t^{l}}{l!}\sum_{m=0}^{\infty}\frac{M_{m+1,\lambda}}{m+1}\frac{t^{m}}{m!}\nonumber\\ &=\sum_{n=0}^{\infty}\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}(x)\frac{M_{n-l+1,\lambda}}{n-l+1}\frac{t^{n}}{n!}. \nonumber \end{align} On the other hand, by \eqref{2}, we also have \begin{equation} e_{\lambda}^{x+1}(t)=\sum_{n=0}^{\infty}(x+1)_{n,\lambda}\frac{t^{n}}{n!}.\label{18} \end{equation} From \eqref{17} and \eqref{18}, we note that \begin{align} (x+1)_{n,\lambda} &=\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}(x)\frac{M_{n-l+1,\lambda}}{n-l+1}\label{19}\\ &=\beta_{n,\lambda}(x)+\sum_{l=0}^{n-1}\binom{n}{l}\beta_{l,\lambda}(x)\frac{M_{n-l+1,\lambda}}{n-l+1}.\nonumber \end{align} Therefore, by \eqref{19}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} \beta_{n,\lambda}(x)=(x+1)_{n,\lambda}-\sum_{l=0}^{n-1}\binom{n}{l}\beta_{l,\lambda}(x)\frac{M_{n-l+1,\lambda}}{n-l+1}. 
\end{displaymath} \end{theorem} From \eqref{7}, we note that \begin{align} \frac{e_{\lambda}(t)-1}{te_{\lambda}^{x}(t)}&=\frac{1}{\frac{t}{e_{\lambda}(t)-1} e_{\lambda}^{x}(t)}=\frac{1}{1+\frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t)-1}\label{20} \\ &=\frac{1}{1+\sum_{n=1}^{\infty}\beta_{n,\lambda}(x)\frac{t^{n}}{n!}}=\sum_{k=0}^{\infty}(-1)^{k}\bigg(\sum_{i=1}^{\infty}\beta_{i,\lambda}(x)\frac{t^{i}}{i!}\bigg)^{k} \nonumber \\ &=1+\sum_{k=1}^{\infty}(-1)^{k}k!\frac{1}{k!}\bigg(\sum_{i=1}^{\infty}\beta_{i,\lambda}(x)\frac{t^{i}}{i!}\bigg)^{k}\nonumber \\ &=\sum_{k=1}^{\infty}(-1)^{k}k!\sum_{n=k}^{\infty}B_{n,k}\big(\beta_{1,\lambda}(x),\beta_{2,\lambda}(x),\dots,\beta_{n-k+1,\lambda}(x)\big)\frac{t^{n}}{n!}+1 \nonumber \\ &=\sum_{n=1}^{\infty}\sum_{k=1}^{n}(-1)^{k}k!B_{n,k}\big(\beta_{1,\lambda}(x),\beta_{2,\lambda}(x),\dots,\beta_{n-k+1,\lambda}(x)\big)\frac{t^{n}}{n!}+1.\nonumber \end{align}\\ On the other hand, by \eqref{16}, we get \begin{align} &1+\sum_{n=1}^{\infty}\frac{M_{n+1,\lambda}}{n+1}\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}\frac{M_{n+1,\lambda}}{n+1}\frac{t^{n}}{n!}=\frac{e_{\lambda}^{x+1}(t)}{te_{\lambda}^{x}(t)}(e_{\lambda}(t)-1)\label{21}\\ &=e_{\lambda}^{x+1}(t)\bigg(1+\sum_{j=1}^{\infty}\sum_{k=1}^{j}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big)\frac{t^{j}}{j!}\bigg)\nonumber \\ &=\sum_{l=0}^{\infty}(x+1)_{l,\lambda}\frac{t^{l}}{l!}\bigg(1+\sum_{j=1}^{\infty}\sum_{k=1}^{j}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big)\frac{t^{j}}{j!}\bigg)\nonumber \\ &=\sum_{n=0}^{\infty}(x+1)_{n,\lambda}\frac{t^{n}}{n!}+\sum_{n=1}^{\infty}\sum_{j=1}^{n}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big)\frac{t^{n}}{n!} \nonumber \\ 
&=1+\sum_{n=1}^{\infty}\bigg((x+1)_{n,\lambda}+\sum_{j=1}^{n}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big)\bigg)\frac{t^{n}}{n!}. \nonumber \end{align} Therefore, by comparing the coefficients on both sides of \eqref{21}, we obtain the following theorem. \begin{theorem} For $n\in\mathbb{N}$ and any $x$, we have \begin{displaymath} \frac{M_{n+1,\lambda}}{n+1}=(x+1)_{n,\lambda}+\sum_{j=1}^{n}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big). \end{displaymath} \end{theorem} We note that \begin{align} B_{n,1}\big(\beta_{1,\lambda}(x),\beta_{2,\lambda}(x),\dots,\beta_{n,\lambda}(x)\big)&=\sum_{\substack{l_{1}+\cdots+l_{n}=1\\ l_{1}+2l_{2}+\cdots+nl_{n}=n}}\frac{n!}{l_{1}!l_{2}!\cdots l_{n}!}\bigg(\frac{\beta_{1,\lambda}(x)}{1!}\bigg)^{l_{1}}\cdots \bigg(\frac{\beta_{n,\lambda}(x)}{n!}\bigg)^{l_{n}}\label{22}\\ &=n!\frac{\beta_{n,\lambda}(x)}{n!}=\beta_{n,\lambda}(x).
\nonumber \end{align} From Theorem 2 and \eqref{22}, we further observe that \begin{align} &\frac{M_{n+1,\lambda}}{n+1}=(x+1)_{n,\lambda}+\sum_{j=1}^{n}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\beta_{2,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big)\label{23} \\ &=(x+1)_{n,\lambda}+\sum_{k=1}^{n}(-1)^{k}k!B_{n,k}\big(\beta_{1,\lambda}(x),\dots,\beta_{n-k+1,\lambda}(x)\big)\nonumber \\ &\quad +\sum_{j=1}^{n-1}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big) \nonumber \\ &=(x+1)_{n,\lambda}-B_{n,1}\big(\beta_{1,\lambda}(x),\dots,\beta_{n,\lambda}(x)\big)+\sum_{k=2}^{n}(-1)^{k}k!B_{n,k}\big(\beta_{1,\lambda}(x), \beta_{2,\lambda}(x),\dots, \beta_{n-k+1,\lambda}(x)\big)\nonumber \\ &\quad +\sum_{j=1}^{n-1}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x), \beta_{2,\lambda}(x),\dots, \beta_{j-k+1,\lambda}(x)\big) \nonumber \\ &= -\beta_{n,\lambda}(x)+(x+1)_{n,\lambda}+\sum_{k=2}^{n}(-1)^{k}k!B_{n,k}\big(\beta_{1,\lambda}(x), \beta_{2,\lambda}(x),\dots, \beta_{n-k+1,\lambda}(x)\big) \nonumber \\ &\quad +\sum_{j=1}^{n-1}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x), \beta_{2,\lambda}(x),\dots, \beta_{j-k+1,\lambda}(x)\big).\nonumber \end{align} Thus, by \eqref{23}, we get \begin{equation} \begin{aligned} \beta_{n,\lambda}(x)&=(x+1)_{n,\lambda}-\frac{M_{n+1,\lambda}}{n+1}+\sum_{k=2}^{n}(-1)^{k}k!B_{n,k}\big(\beta_{1,\lambda}(x),\beta_{2,\lambda}(x),\dots,\beta_{n-k+1,\lambda}(x)\big) \\ &\quad +\sum_{j=1}^{n-1}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\beta_{2,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big). \end{aligned}\label{24} \end{equation} Therefore, by \eqref{24}, we obtain the following theorem. 
\begin{theorem} For $n\in\mathbb{N}$, we have \begin{align*} \beta_{n,\lambda}(x)&=(x+1)_{n,\lambda}-\frac{M_{n+1,\lambda}}{n+1}+\sum_{k=2}^{n}(-1)^{k}k!B_{n,k}\big(\beta_{1,\lambda}(x),\beta_{2,\lambda}(x),\dots,\beta_{n-k+1,\lambda}(x)\big) \\ &\quad +\sum_{j=1}^{n-1}\sum_{k=1}^{j}\binom{n}{j}(x+1)_{n-j,\lambda}(-1)^{k}k!B_{j,k}\big(\beta_{1,\lambda}(x),\beta_{2,\lambda}(x),\dots,\beta_{j-k+1,\lambda}(x)\big). \end{align*} \end{theorem} \section{Conclusion} The study of various degenerate versions of some special numbers and polynomials, which began with the papers [3,4] by Carlitz, has recently regained the interest of many mathematicians and has led to the unexpected introduction of degenerate gamma functions and degenerate umbral calculus (see [9,14]). \indent In this paper, the dimorphic Mersenne numbers were introduced as a degenerate version of the Mersenne numbers, and some of their properties were investigated in connection with the degenerate Bernoulli polynomials and the incomplete Bell polynomials. \par It is one of our future research projects to continue to study various degenerate versions of some special numbers and polynomials and to find their applications to physics, science and engineering as well as to mathematics. \end{document}
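Theorem 1 (equivalently, identity \eqref{19}) can be verified numerically by computing $\beta_{n,\lambda}(x)$ directly from the defining series \eqref{4}. The following sketch, in exact rational arithmetic with the arbitrary test values $\lambda=1/3$ and $x=1/2$, is only a check and not part of the paper:

```python
from fractions import Fraction
from math import comb

def falling(x, n, lam):
    """(x)_{n,lambda} = x(x - lam)...(x - (n-1)lam)."""
    p = Fraction(1)
    for k in range(n):
        p *= Fraction(x) - k * Fraction(lam)
    return p

def egf_mul(a, b):
    """Product of EGFs given as coefficient lists (f = sum a_n t^n/n!)."""
    n = min(len(a), len(b))
    return [sum(comb(m, k) * a[k] * b[m - k] for k in range(m + 1))
            for m in range(n)]

def egf_inv(a):
    """Reciprocal of an EGF with a_0 = 1."""
    g = [Fraction(1)]
    for m in range(1, len(a)):
        g.append(-sum(comb(m, k) * a[k] * g[m - k] for k in range(1, m + 1)))
    return g

def beta(N, lam, x):
    """beta_{n,lambda}(x) for n < N, from t/(e_lam(t)-1) * e_lam^x(t)."""
    # (e_lam(t)-1)/t has EGF coefficients (1)_{m+1,lam}/(m+1)
    denom = [falling(1, m + 1, lam) / (m + 1) for m in range(N)]
    expx = [falling(x, m, lam) for m in range(N)]
    return egf_mul(egf_inv(denom), expx)

lam, x, N = Fraction(1, 3), Fraction(1, 2), 9
b = beta(N, lam, x)
M = [falling(2, n, lam) - falling(1, n, lam) for n in range(N + 1)]
for n in range(N):
    # Theorem 1: beta_n(x) = (x+1)_{n,lam} - sum_{l<n} C(n,l) beta_l(x) M_{n-l+1,lam}/(n-l+1)
    rhs = falling(x + 1, n, lam) - sum(
        comb(n, l) * b[l] * M[n - l + 1] / (n - l + 1) for l in range(n))
    assert b[n] == rhs
```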
\begin{document} \title{Polarization-correlated photon pairs from a single ion} \author{F. Rohde, J. Huwer, N. Piro, M. Almendros, C. Schuck, F. Dubin and J. Eschner} \affiliation{ICFO-Institut de Ci\`encies Fot\`oniques, Mediterranean Technology Park, E-08860 Castelldefels (Barcelona), Spain} \date{\today} \begin{abstract} In the fluorescence light of a single atom, the probability for emission of a photon with a certain polarization depends on the polarization of the photon emitted immediately before it. Here, correlations of this kind are investigated with a single trapped calcium ion by means of second order correlation functions. A theoretical model is developed and fitted to the experimental data, which show a 91\% probability for the emission of polarization-correlated photon pairs within 24 ns. \end{abstract} \pacs{42.50.Ar, 42.50.Ct, 42.50.Ex} \maketitle \section{Introduction} One of the most relevant tools for characterizing the light field emitted by a quantum optical system is its intensity correlation function $g^{(2)}(\tau)$, the most prominent example being the observation of antibunching \cite{Kimble1977PRLv39p691,Diedrich1987PRLv58p203} in the fluorescence of a single atom. Intensity correlation functions of the fluorescence light of a single ion excited by two light fields have been investigated before. In \cite{Schubert1992PRLv68p3016}, the pair correlations conditioned on the wavelength of the photons scattered by a $\mrm{Ba^+}$ ion (eight-level structure) reveal the transient internal dynamics of the ion, which is characterized by optical pumping and the excitation of Raman coherence. In a detailed description of these experiments \cite{Schubert1995PRAv52p2994}, it is shown that measuring the correlation function for only one polarization of one of the two light fields by adding a polarization filter projects the ion into a coherent superposition of its energy eigenstates. This may be considered a first signature of atom-photon entanglement \cite{PET}.
Other investigations of the resonance fluorescence of coherently driven single atoms by means of $g^{(2)}(\tau)$ comprise, e.g., a theoretical analysis of the effect of the quantized motion of a two-level atom in a harmonic trap on the second order correlation function \cite{Jakob1999PRAv59p2111}, experimental measurements of photon correlations revealing single-atom dynamics in a magneto-optical trap \cite{Gomer1998PRAv58p1657,Gomer1998APBv67p689} and atomic transport in an optical lattice \cite{Jurczak1996PRLv77p1727}, the theoretical demonstration of nonclassical correlations in the radiation of two atoms \cite{Skornia2001PRAv64p63801}, i.e.\ bunched and anti-bunched light is emitted in different spatial directions, and the study of intensity-intensity correlations from a three-level atom damped by a broadband squeezed vacuum \cite{Carreno2004JoOBv6p315}. Another application of photon-photon correlations in the resonance fluorescence from single atoms is an experiment where, using a self-homodyning configuration, the second order correlation function was used to characterize the secular motion of a trapped ion \cite{Rotter2008NJPv10p43011}. The experiment revealed the dynamics of both internal and external degrees of freedom of the ion's wave function, from nanosecond to millisecond timescales, in a single measurement run. Polarization correlations of subsequent photons emitted from a bichromatically driven four-level atom have been predicted theoretically \cite{Jakob2002JOBv4p308}. In this proposal, a four-level atom ($\mrm{J=1/2}$ to $\mrm{J=1/2}$) with degenerate Zeeman sub-states is coupled to a light field consisting of two components which are symmetrically detuned from the resonance frequency of the atomic transition. In contrast, in the experiment presented in this paper, the Zeeman degeneracy is lifted and the corresponding atomic transition (four-level system) is driven monochromatically.
Furthermore, the effect of the coupling of the excited states to another manifold ($\mrm{J=3/2}$) of four Zeeman sub-levels by a second light field is taken into account. In the context of quantum optical information technologies, many entanglement protocols strongly rely on a projective measurement \cite{Cabrillo1999PRAv59p1025,Browne2003PRLv91p67901,Feng2003PRLv90p217902,Duan2003PRLv90p253601,Simon2003PRLv91p110405}, which is also the origin of the effect of antibunching. In fact, the degree of antibunching is a benchmark for single quantum emitters or ideal single photon sources that are suitable for quantum networking and communication. Beyond proving that a source is a pure single quantum emitter, correlation functions are used to characterize and develop applications for quantum communication. In \cite{Dubin2007PRLv99p183001}, for example, the resonance fluorescence from a continuously excited single ion is split and recombined on a beam splitter with a relative delay, thus creating an effective two-photon source. Here the measured correlation function reveals the quality of the mode matching at the beam splitter. Measuring and controlling the degree of second order correlation of a photon source is therefore a very important step in designing quantum optical tools in quantum information processing. This engineering of correlation functions is of special interest for quantum networks where single photons mediate information between nodes of single atoms \cite{Zoller2005EPJDv36p203,Briegel1998PRLv81p5932}. In this context, we present a polarization selective measurement of the correlation function $g^{(2)}(\tau)$ of fluorescence photons from a single $^{40}\mrm{Ca^+}$ ion, which is continuously laser excited on the $\mrm{4^2S_{1/2}}$ to $\mrm{4^2P_{1/2}}$ transition. 
In particular, the correlation function of emitted $\sigma^-$ and $\sigma^+$ polarized photons \footnote{Note that we identify the photon polarization ($\sigma^-$ or $\sigma^+$) by the transition on which the photon has been emitted.} was measured conditioned on the previous detection of a $\sigma^-$ photon, and it is shown that the system can be tailored such that the polarization of a photon depends strongly on the polarization of the previous one. \section{Model} \label{Model} Figure \ref{Termschema_Ca_8levels} shows the relevant levels of the $^{40}\mrm{Ca^+}$ ion. Continuous excitation with laser light at 397~nm and 866~nm generates continuous resonance fluorescence. We consider the case of observation along the direction of the magnetic field and write $g^{(2)}(\tau)$ in terms of the transition operators for the $\sigma$-transitions from $\mrm{S_{1/2}}$ to $\mrm{P_{1/2}}$ (see figure \ref{Termschema_Ca_8levels} for the numbering of the levels), \begin{figure} \caption{Level scheme of $\mrm{^{40}Ca^{+}}$.} \label{Termschema_Ca_8levels} \end{figure} \begin{eqnarray} \hat{\sigma}_1&=& |1\rangle \langle 4|, \hspace{1cm} \hat{\sigma}_1^{\dagger}= |4\rangle \langle 1|,\label{sigma8level_a}\\ \hat{\sigma}_2&=& |2\rangle \langle 3|, \hspace{1cm} \hat{\sigma}_2^{\dagger}= |3\rangle \langle 2|. \label{sigma8level} \end{eqnarray} If we consider the general case without polarization selective detection, the second order correlation function reads \cite{Loudon2000} \begin{equation} g^{(2)}(\tau)=\frac{\sum_{i,j=1}^{2} \langle \hat{\sigma}^{\dagger}_{i}(t)\hat{\sigma}^{\dagger}_{j}(t+\tau)\hat{\sigma}_{j}(t+\tau)\hat{\sigma}_{i}(t)\rangle}{\langle \hat{\sigma}^{\dagger}_{1}(t)\hat{\sigma}_{1}(t) + \hat{\sigma}^{\dagger}_{2}(t)\hat{\sigma}_{2}(t)\rangle^2}.
\label{g2pi} \end{equation} Using the quantum regression theorem \cite{Mandel1995} it can be shown that, for all initial conditions and at steady state, the second order correlation function is related to the excited state populations by \begin{equation} g^{(2)}(\tau)=\frac{\rho_{33}(\tau)+\rho_{44}(\tau)}{\rho_{33}(\infty)+\rho_{44}(\infty)}, \label{g2rho3344} \end{equation} where $\rho_{lm}(\tau)$ with $l,m \in [1,8]$ are the matrix elements of the density operator representing populations and coherences of the states $|l\rangle$ and $|m\rangle$ at time $\tau$ after the detection of a first photon. Equation \ref{g2rho3344} allows one to predict $g^{(2)}$ by solving the optical Bloch equations. The second order correlation function $g^{(2)}(\tau)$ of the 397~nm fluorescence light of a single $^{40}\mrm{Ca^+}$ ion is thus proportional to the population of the ion's excited state $\mrm{P_{1/2}}$ at time $\tau$, whereby the initial state at $\tau=0$ is determined by the first measured photon. \begin{figure*} \caption{Detection of $\sigma^{-}$ photons.} \label{Termschema_Ca_4levels_cond_g2_prodject} \label{Termschema_Ca_4levels_cond_g2_sigma-} \label{Termschema_Ca_4levels_cond_g2_sigma+} \label{conditionedg2} \end{figure*} We now consider polarization-selective detection of photons for the situation of our experiment. Figure \ref{conditionedg2} shows a sketch of the four levels involved in the emission of blue (397~nm) photons on the $\mrm{P_{1/2}}$ to $\mrm{S_{1/2}}$ transition. The detection of a $\sigma^-$ polarized photon projects the ion into state $|2\rangle=|\textrm{S}_{1/2}, m_j=1/2\rangle$ (figure \ref{Termschema_Ca_4levels_cond_g2_prodject}). If the ion is continuously illuminated by linearly polarized light at 397 and 866~nm with polarization perpendicular to the magnetic field, no $\pi$ transitions are excited. The ion effectively sees a superposition of $\sigma^-$ and $\sigma^+$ polarized light.
After the emission of a $\sigma^-$ photon and the corresponding projection into state $|2\rangle$, a subsequent excitation can thus only occur to state $|3\rangle=|\textrm{P}_{1/2}, m_j=-1/2\rangle$ by absorbing a $\sigma^-$ polarized photon. From state $|3\rangle$ the ion can either decay back to state $|2\rangle$ under emission of a second $\sigma^-$ polarized photon (figure \ref{Termschema_Ca_4levels_cond_g2_sigma-}), or it can decay to state $|1\rangle=|\textrm{S}_{1/2}, m_j=-1/2\rangle$ under emission of a $\pi$ polarized photon. Thus the next emitted photon after a $\sigma^-$ photon cannot be $\sigma^+$ polarized. In order to emit a $\sigma^+$ polarized photon, the ion must first be excited to state $|4\rangle=|\textrm{P}_{1/2}, m_j=1/2\rangle$, which can only happen out of state $|1\rangle$ by absorbing a $\sigma^+$ polarized photon from the exciting beams (figure \ref{Termschema_Ca_4levels_cond_g2_sigma+}). The probability of detecting a $\sigma^+$ polarized photon after the time $\tau$ conditioned on the previous detection of a $\sigma^-$ polarized photon is therefore much lower than the probability of detecting a $\sigma^-$ polarized photon for the same condition. Since the correlation function is a measure of the photon-photon waiting time distribution, the $g^{(2)}$ derived from correlating $\sigma^-$ with $\sigma^+$ photons should be suppressed with respect to the $g^{(2)}$ derived from correlating $\sigma^-$ with $\sigma^-$ photons. In fact, both correlations are expected to exhibit almost ideal antibunching, with the difference that in the first case the dip around $\tau$=0 is expected to be much wider. In other words, and taking into account also the case where $\sigma^+$ and $\sigma^-$ are exchanged, there will be a much stronger antibunching for the correlation of photons with opposite $\sigma$ polarization than for photons with the same $\sigma$ polarization. The same considerations are applicable for excitation with pure $\pi$-polarized light.
In that case the correlation of subsequent photons is much stronger for opposite than for equal $\sigma$ polarization. Analogous to equation \ref{g2rho3344}, the second order correlation functions for $\sigma^-$ and $\sigma^+$ light, conditioned on the previous detection of a $\sigma^-$ photon ($\hat{\rho}(0)=\hat{\rho}_{\mrm{init}}=|2\rangle \langle 2|$), are derived to be \begin{equation} g^{(2)}_{\sigma^-}(\tau) =\frac{\rho_{33}(\tau)}{\rho_{33}(\infty)} \label{gcond5} \end{equation} and \begin{equation} g^{(2)}_{\sigma^+}(\tau) =\frac{\rho_{44}(\tau)}{\rho_{44}(\infty)}, \label{gcond6} \end{equation} respectively. In contrast to the general case (equation \ref{g2rho3344}), the conditioned second order correlation function is given by only one of the excited state populations at time $\tau$. In figures \ref{g2_theory} and \ref{g2_theory_strong_excitation}, the correlation functions calculated according to equations \ref{gcond5} and \ref{gcond6} are plotted for weak and strong excitation parameters typical for our experimental setup. The upper curves, in blue, show $g^{(2)}_{\sigma^-}(\tau)$, while the lower ones, in red, represent $g^{(2)}_{\sigma^+}(\tau)$. As expected, $g^{(2)}_{\sigma^-}(\tau)$ and $g^{(2)}_{\sigma^+}(\tau)$ show very different behaviors. For both weak and strong excitation, $g^{(2)}_{\sigma^-}(\tau)$ rises to high values with a steep slope. For weak excitation a maximum $g^{(2)}$ of 15.6 is reached at $\tau=\pm 29\,\mrm{ns}$. For longer time differences the correlation function falls monotonically until it reaches one. For strong excitation $g^{(2)}_{\sigma^-}(\tau)$ reaches a maximal value of 2.8 at $\tau=\pm 13\,\mrm{ns}$. For $\tau > 13\,\mrm{ns}$ the correlation function falls to 1 after 200~ns showing some coherent oscillations.
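Equations \ref{gcond5} and \ref{gcond6} reduce the conditioned correlation functions to excited-state populations obtained from the optical Bloch equations. The following sketch is only an illustration of that reduction, not the calculation behind the figures: it integrates a Lindblad master equation for the four-level subsystem with arbitrary assumed parameters, ignoring the detunings and the $\mrm{D_{3/2}}$ manifold, and reproduces qualitatively the suppression of $\sigma^{+}$ emission shortly after a $\sigma^{-}$ detection:

```python
import numpy as np

# Basis ordering |1>, |2>, |3>, |4> as in the level scheme; toy parameters
# in units of the total P_1/2 decay rate (assumed values, not the
# experimental ones; detunings and the D_3/2 manifold are ignored).
OMEGA = 0.3                  # Rabi frequency of the sigma components at 397 nm
G_SIG, G_PI = 2 / 3, 1 / 3   # assumed branching into sigma and pi channels

def op(i, j):
    m = np.zeros((4, 4), dtype=complex)
    m[i, j] = 1.0
    return m

# sigma- drives |2> <-> |3>, sigma+ drives |1> <-> |4>
H = 0.5 * OMEGA * (op(2, 1) + op(1, 2) + op(3, 0) + op(0, 3))
jumps = [np.sqrt(G_SIG) * op(1, 2),   # |3> -> |2>, sigma- photon
         np.sqrt(G_PI) * op(0, 2),    # |3> -> |1>, pi photon
         np.sqrt(G_SIG) * op(0, 3),   # |4> -> |1>, sigma+ photon
         np.sqrt(G_PI) * op(1, 3)]    # |4> -> |2>, pi photon

def lindblad(rho):
    out = -1j * (H @ rho - rho @ H)
    for L in jumps:
        Ld = L.conj().T
        out += L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return out

def evolve(rho, t, steps=2000):
    dt = t / steps
    for _ in range(steps):            # classical fourth-order Runge-Kutta
        k1 = lindblad(rho)
        k2 = lindblad(rho + 0.5 * dt * k1)
        k3 = lindblad(rho + 0.5 * dt * k2)
        k4 = lindblad(rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

rho0 = op(1, 1)                       # projection onto |2> after a sigma- click
rho_t = evolve(rho0, 1.0)
p33, p44 = rho_t[2, 2].real, rho_t[3, 3].real
assert abs(np.trace(rho_t).real - 1) < 1e-8   # trace-preserving evolution
assert p44 < p33                # sigma+ emission suppressed at short times
```

The unnormalized populations $\rho_{33}(\tau)$ and $\rho_{44}(\tau)$ computed this way play exactly the roles they have in equations \ref{gcond5} and \ref{gcond6}.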
\begin{figure*} \caption{(a) Conditioned second order correlation functions $g^{(2)}_{\sigma^{-}}(\tau)$ and $g^{(2)}_{\sigma^{+}}(\tau)$ for weak excitation.} \label{g2_theory} \label{g2_theory_zoom} \label{g2_theory_fig} \end{figure*} \begin{figure*} \caption{(a) Conditioned second order correlation functions $g^{(2)}_{\sigma^{-}}(\tau)$ and $g^{(2)}_{\sigma^{+}}(\tau)$ for strong excitation.} \label{g2_theory_strong_excitation} \label{g2_theory_strong_excitation_zoom} \label{g2_theory_strong_excitation_fig} \end{figure*} The two $g^{(2)}_{\sigma^+}(\tau)$ functions, on the contrary, show a flat behavior on short time scales before they rise with a moderate slope to their maximum value, which is reached at approximately 400~ns for the weak excitation and 200~ns for the strong excitation. In the latter case $g^{(2)}_{\sigma^+}(\tau)$ rises directly to 1 without a transient overshoot, while for weak excitation it reaches a value of 3.1 before it falls to 1 for $\tau > 400\,\mrm{ns}$. The shape of $g^{(2)}_{\sigma^-}(\tau)$ in figure \ref{g2_theory} is characterized by large correlation values and a long decay time (compared to the lifetime of the $\mrm{P_{1/2}}$ state) to the asymptotic value and can be attributed to optical pumping into the $\mrm{D_{3/2}}$ state \cite{Schubert1995PRAv52p2994}. The excitation on the $\mrm{S_{1/2} - P_{1/2}}$ transitions is much stronger than the one on the $\mrm{D_{3/2} - P_{1/2}}$ transitions. As a result, a large fraction of the population is transferred to the $\mrm{D_{3/2}}$ state after long delay times and the observed fluorescence is weak. This small flux of fluorescence, caused by a small steady-state population of the state $|3\rangle$, determines the normalization factor for long time intervals $\rho_{33}(\infty)$. At short time delays after the projection into state $|2\rangle$, however, i.e.\ during the first 30-40~ns, a large fraction of the population is excited to state $|3\rangle$ and the optical pumping to the $\mrm{D_{3/2}}$ states is negligible.
Since the correlation function is the ratio of the population of state $|3\rangle$ at time $\tau$ and that in the steady state, high values are reached, which then decay to one, revealing the time scale of the optical pumping. The characteristics of $g^{(2)}_{\sigma^+}(\tau)$ can be explained by an analogous argument. Here, the population excited to state $|4\rangle$ during the first 30--40~ns is even smaller than the steady state population $\rho_{44}(\infty)$ reached for long time delays. After the projection into $|2\rangle$, it takes several scattering events and therefore more time until the population of state $|4\rangle$ exceeds $\rho_{44}(\infty)$. As the inset of figure \ref{g2_theory} shows, for large time intervals $g^{(2)}_{\sigma^+}(\tau)$ decays to the asymptotic value with the same time constant as $g^{(2)}_{\sigma^-}(\tau)$. The correlation functions shown in figure \ref{g2_theory_strong_excitation} are calculated for equal excitation strength on the $\mrm{S_{1/2} - P_{1/2}}$ and $\mrm{D_{3/2} - P_{1/2}}$ transitions. Consequently, the correlation values of $g^{(2)}_{\sigma^-}(\tau)$ are smaller and the decay to the asymptotic value is much faster. The exciting fields are strong enough to cause damped oscillations at the generalized Rabi frequency $\Omega_G=\sqrt{|\Omega_{397}|^2+\Delta_{397}^2}$ in the correlation function. Due to the complex eight-level structure, the oscillations do not occur at only one generalized Rabi frequency; rather, each transition between the Zeeman sub-levels contributes a Fourier component depending on the intensity and detuning of the driving field. In figure \ref{g2_theory_strong_excitation} this becomes noticeable by comparing the frequency of the strongly suppressed oscillations of $g^{(2)}_{\sigma^+}(\tau)$ with the ones of $g^{(2)}_{\sigma^-}(\tau)$.
Due to the Zeeman splitting, the detunings of the $|1\rangle$ to $|4\rangle$ and the $|2\rangle$ to $|3\rangle$ transitions with respect to the exciting laser give rise to different generalized Rabi frequencies. The most interesting feature in figures \ref{g2_theory} and \ref{g2_theory_strong_excitation} is the behavior of the correlation function for times $\tau$ close to zero. $g^{(2)}_{\sigma^+}(\tau)$ shows a flat plateau of values very close to zero for $-15\,\mrm{ns}<\tau<15\,\mrm{ns}$ in the case of weak and for $-10\,\mrm{ns}<\tau<10\,\mrm{ns}$ in the case of strong excitation. In the same time interval $g^{(2)}_{\sigma^-}(\tau)$ rises very fast to high values for both excitation conditions. The ratio \begin{equation} p(\tau)=\frac{\int_0^{\tau} g^{(2)}_{\sigma^-}(t)dt}{\int_0^{\tau} g^{(2)}_{\sigma^+}(t)dt} \label{ratio} \end{equation} of the numbers of $\sigma^-$- and $\sigma^+$-polarized photons emitted by the ion within the time window $\tau$ describes the purity of the polarization in that time window. If a second photon is detected within this time interval, the probability that it is $\sigma^-$-polarized is easily calculated as $\frac{p}{1+p}$. Figures \ref{g2_theory_zoom} and \ref{g2_theory_strong_excitation_zoom} show $p$ for weak and strong excitation conditions, respectively. The ratio diverges for $\tau \rightarrow 0$ ($>10^5$ for $\tau=1\,\mrm{ns}$) in both excitation regimes, because $\rho_{33}(\tau)$ increases quadratically in time from $\tau=0$, while $\rho_{44}(\tau)$ increases with $\tau^4$. In other words, if a second photon is emitted within a short time interval after a first $\sigma^-$-polarized photon, then with very high probability (which can be chosen via the length of the time interval) it will also be $\sigma^-$-polarized. By exciting the ion with $\pi$-polarized light it is possible to generate equally strongly correlated photon pairs of oppositely $\sigma$-polarized photons.
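The purity of equation \ref{ratio} and the conditional probability $p/(1+p)$ are straightforward to evaluate from sampled correlation functions. In the sketch below the two curves are illustrative stand-ins with the $\tau^2$ and $\tau^4$ short-time behavior discussed above; their scales are invented, not the fitted experimental values.

```python
import numpy as np

# Illustrative model curves (assumed shapes and scales): g2- rises ~tau^2
# and g2+ rises ~tau^4 at short times, as discussed in the text.
tau = np.linspace(1e-3, 0.5, 500)                    # arbitrary time units
g2_minus = 15 * (tau / 0.03)**2 / (1 + (tau / 0.03)**2)
g2_plus = 3 * (tau / 0.2)**4 / (1 + (tau / 0.2)**4)

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y over x."""
    dx = np.diff(x)
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

p = cumtrapz(g2_minus, tau)[1:] / cumtrapz(g2_plus, tau)[1:]  # equation (ratio)
prob = p / (1 + p)            # probability that the second photon is sigma^-
print(prob[10], prob[-1])     # near-unit purity at short times, lower later
```

Enlarging the integration window collects more photons but lowers `prob`, which is the trade-off quantified in the experimental section.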
\section{Experiment} \label{Experiment} \begin{figure} \caption{Setup for the measurement of polarization-conditioned correlation functions. The fluorescence light is split into two parts by collecting it with the two HALO lenses. Multimode fibers are used to couple the light to two PMTs.} \label{setup_g2} \end{figure} Figure \ref{setup_g2} shows the setup of the measurement, which is described in detail elsewhere \cite{Gerber2009NJoPv11p13,Rohde2009a[vp}. A single $^{40}\mrm{Ca^+}$ ion is trapped in a linear Paul trap and continuously excited by two lasers at $397$~nm and $866$~nm. The $397\,\mathrm{nm}$ laser is red detuned to provide Doppler cooling while the resonant $866$~nm laser prevents optical pumping to the $\mrm{D_{3/2}}$ state. The 397~nm fluorescence light is split into two parts by collecting it with two high numerical aperture lenses (HALO) \cite{Gerber2009NJoPv11p13}. Both lenses direct the collected photons to photomultipliers (PMT) through multimode fibers. In each beam path a $\lambda/4$ plate and a polarizing beam splitter are used to select the respective polarization. The arrival times of the signals from the PMTs are recorded with picosecond resolution by commercial counting electronics \footnote{PicoHarp 300, PicoQuant}, and the correlation function is obtained by postprocessing the data. \subsubsection{Calibration} Before the measurement of the correlation functions, an excitation spectrum, i.e. the rate of 397~nm fluorescence as a function of the detuning of the 866~nm laser, was recorded. The spectrum, which shows four dark resonances, provides a calibration of the experimental parameters, which are then used as the starting point to fit the conditioned correlation functions. In figure \ref{fit20090320_1825_GPC2e} this excitation spectrum is plotted. \begin{figure} \caption{Excitation spectrum of a single ion.
The 397 and 866~nm lasers are approximately vertically polarized and propagate under $90^{\circ}$ with respect to the magnetic field.} \label{fit20090320_1825_GPC2e} \end{figure} The Rabi frequencies, detunings and the magnetic field are deduced from a fit to the data. To account for experimental deviations of the polarization angle of the excitation lasers from the ideal vertical polarization, this parameter was also varied in the fit. The angles of the polarizations of the two laser beams with the magnetic field, as used in the model shown in figure \ref{fit20090320_1825_GPC2e}, are $\alpha_{397}=0.46\cdot \pi$ for the blue laser and $\alpha_{866}=0.4\cdot \pi$ for the infrared laser (rather than $\pi/2$ in the ideal case). This is compatible with the available control over the experimental settings and the quality of the optical components used. \subsubsection{Conditioned correlation functions} \begin{figure} \caption{Blue (top): measured $g^{(2)}_{\sigma^-}(\tau)$; red (bottom): measured $g^{(2)}_{\sigma^+}(\tau)$. Solid lines: fitted model.} \label{doublefit_sse_5_EB4e5_2} \end{figure} Figure \ref{doublefit_sse_5_EB4e5_2} shows the data for the two measured conditioned correlation functions plotted in one graph. Data for $g^{(2)}_{\sigma^+}$ are presented in red (lower curve) and data for $g^{(2)}_{\sigma^-}$ in blue (upper curve). The data are normalized to a long-time value of one and presented without background subtraction. The solid lines are the $g^{(2)}$ functions obtained by a fit of the Rabi frequencies to the experimental data using the model discussed in section \ref{Model}. The values of the background, laser detunings, magnetic field and polarization angles extracted from the fit to the excitation spectrum have been kept fixed. The two Rabi frequencies have been fitted to both curves simultaneously, and the resulting curves agree well with the experiment.
The deviation of the Rabi frequencies between the fitted correlation functions and the fitted excitation spectrum lies within the statistical error \footnote{Since the parameters used in fitting the spectrum as well as the $g^{(2)}$ functions are not fully independent from each other, various sets of parameters are consistent with the data, in the sense that the whole set of parameters for the spectrum fits the correlations within one unit of the $\chi^2$ deviation, and vice versa.}. The correlation functions are very similar to the ones for the case of weak excitation in figure \ref{g2_theory_fig}. For large $\tau$ ($>$ 400~ns) both functions overlap and slowly decay to one. The characteristic behavior of large (small) correlation values and the slow decay to the asymptotic value for $g^{(2)}_{\sigma^-}$ ($g^{(2)}_{\sigma^+}$), explained in section \ref{Model}, is clearly observed in the measurement. \begin{figure} \caption{Zoom of figure \ref{doublefit_sse_5_EB4e5_2} for small time differences.} \label{g2_fit20090320_1825_GPC2e_zoom} \end{figure} Figure \ref{g2_fit20090320_1825_GPC2e_zoom} shows a zoom into the region of small time differences. At $\tau=0$ both curves reach a value close to zero. With increasing $\tau$, $g^{(2)}_{\sigma^-}$ rises with a very steep quadratic slope and reaches a value as large as 12, whereas $g^{(2)}_{\sigma^+}$ stays flat for $\sim5\,\mrm{ns}$, before it rises with a moderate slope to a maximum value of 2.9. The model agrees very well with the data, demonstrating the good experimental control over the creation of polarization-correlated photon pairs. The main difference between the model calculation of $g^{(2)}$ in figure \ref{g2_theory} and the experimental data in figure \ref{g2_fit20090320_1825_GPC2e_zoom} is that the predicted $\tau^4$-like plateau of $g^{(2)}_{\sigma^+}(\tau)$ is less pronounced in the measurement.
Simulations with the model from section \ref{Model} show that this is explained by small errors in the polarization of the exciting lasers and in the detection setup. The polarizations of the exciting lasers have been fitted to the excitation spectrum (figure \ref{fit20090320_1825_GPC2e}), and these results have been used in the model of the correlation functions. To achieve agreement of the quality shown in figures \ref{doublefit_sse_5_EB4e5_2} and \ref{g2_fit20090320_1825_GPC2e_zoom}, deviations from the ideal polarizations in the detection were also accounted for in the model. These deviations occur when the polarization is not filtered perfectly, and consequently the measured $g^{(2)}$ functions contain some coincidences that are caused by the respective orthogonal polarization. The theoretical model in figures \ref{doublefit_sse_5_EB4e5_2} and \ref{g2_fit20090320_1825_GPC2e_zoom} has been calculated using a fraction of 2.5~\% of wrongly $\sigma^+$-polarized photons for the initial detection events in both curves. For $g^{(2)}_{\sigma^-}$ the conditioned detection of the second $\sigma^-$-polarized photon has an error of 5\%, while for $g^{(2)}_{\sigma^+}$ the detection of the $\sigma^+$-polarized photon has an error of 1.8\%. This demonstrates that $g^{(2)}_{\sigma^+}$, in particular its $\tau^4$ characteristic, is very sensitive to small polarization errors at times $\tau$ close to zero. These errors are within the precision with which the polarization filtering was controlled in the experiment, given the low light levels and imperfections of the optics. Furthermore, there is a contribution due to the collection of light from a solid angle along the quantization axis. Each HALO lens collects 4\% of the light emitted into the full solid angle \cite{Gerber2009NJoPv11p13}, which leads to the detection of a small fraction of wrongly polarized $\sigma$ and $\pi$ photons.
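The washing-out of the $\tau^4$ plateau by imperfect polarization filtering can be sketched with a simple linear admixture model. This is a deliberate simplification of the full calculation above, and the error fraction and curve shapes are assumptions, not the fitted values quoted in the text.

```python
import numpy as np

# Illustrative short-time shapes: g2- rises ~tau^2, g2+ rises ~tau^4
# (assumed scales, not the fitted experimental parameters).
tau = np.linspace(0.0, 0.1, 200)
g2_minus = 15 * (tau / 0.03)**2 / (1 + (tau / 0.03)**2)
g2_plus = 3 * (tau / 0.2)**4 / (1 + (tau / 0.2)**4)

eps = 0.05   # assumed fraction of coincidences from the orthogonal polarization
g2_plus_meas = (1 - eps) * g2_plus + eps * g2_minus
# near tau = 0 the measured curve inherits the tau^2 rise of g2_minus,
# so the tau^4 plateau becomes much less pronounced:
print(g2_plus[10], g2_plus_meas[10])
```

Even a few percent of admixture dominates the measured $g^{(2)}_{\sigma^+}$ at short delays, consistent with the sensitivity discussed above.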
The sensitivity of the generation of polarization correlated photon pairs to polarization errors becomes even more evident when considering the purity $p$ (equation \ref{ratio}). Figure \ref{integral_ratio_GPC2e_no_background_paper} shows this ratio for the values of the model calculation that was fitted to the data from figures \ref{doublefit_sse_5_EB4e5_2} and \ref{g2_fit20090320_1825_GPC2e_zoom}. \begin{figure} \caption{Purity $p$ for the model calculation fitted to the measured data from figures \ref{doublefit_sse_5_EB4e5_2} and \ref{g2_fit20090320_1825_GPC2e_zoom}.} \label{integral_ratio_GPC2e_no_background_paper} \end{figure} In contrast to the ideal case from figure \ref{g2_theory_zoom}, $p$ does not diverge for times $\tau \rightarrow 0$, but instead reaches a maximum of almost 10 at 24~ns and then falls to 9.3 at 1~ns. This means that if a second photon is detected with our setup within 24~ns, it is $\sigma^-$ polarized with 91~\% probability. A practical figure of merit also has to consider the absolute number of photons within the time interval $\tau$. After 24~ns the ion emits on average 0.07 $\sigma^-$ polarized photons into the full solid angle, out of which 8\% are detected with our setup \cite{Gerber2009NJoPv11p13}. Increasing the time window will yield more photons, but decrease the polarization purity. The curve for the ideal case in figure \ref{g2_theory_zoom} reaches a value of 130 at 24~ns, suggesting that it is in principle possible to generate polarization correlated photon pairs with more than 99\% probability using this method. This could be achieved by reducing polarization errors in excitation and detection. The excitation conditions of the single ion also affect the efficiency of our photon pair source. For the strong excitation conditions of figure \ref{g2_theory_strong_excitation_fig}, for example, the ion emits on average 0.2 $\sigma^-$ polarized photons within 24~ns.
Simultaneously the purity decreases, yielding a $\sigma^-$ polarized photon in only 96\% of the cases (figure \ref{g2_theory_strong_excitation_zoom}). For a good comparison of the source under weak and strong excitation conditions, it is convenient to look at the efficiency of both cases for equal photon numbers. Under strong excitation conditions the ion emits on average 0.07 photons in 12~ns. If a second photon is detected within these 12~ns, it is $\sigma^-$ polarized with more than 99\% probability. Increasing the Rabi frequencies of the excitation beams thus allows one to reduce the time interval in which the photon pairs are emitted while keeping the photon number and polarization purity constant. Summarizing, it was shown that the second order correlation function of the fluorescence light of a single ion can be engineered by polarization-sensitive detection. The $g^{(2)}$ functions for $\sigma^+$ and $\sigma^-$ light conditioned on the previous detection of a $\sigma^-$ photon show a characteristic anti-bunching behavior that allows for heralding photons of a certain polarization in an otherwise randomly polarized stream of photons, with possible applications in quantum optical information technology. A single, laser-cooled ion generated polarization correlated photon pairs within a time window of 24~ns with an efficiency of 91\%. Model calculations show that the presented method has the potential to reach an efficiency of more than 99\% within the same time window. We thank Giovanna Morigi for helpful discussions. We acknowledge support from the European Commission (SCALA, contract 015714; EMALI, MRTN-CT-2006-035369), the Spanish MICINN (QOIT, CSD2006-00019; QLIQS, FIS2005-08257; QNLP, FIS2007-66944), and the Generalitat de Catalunya (2005SGR00189; FI-AGAUR fellowship of C.S.). \end{document}
\begin{document} \title{\textbf{Uniformly exponentially stable approximations for a class of damped systems with unbounded feedbacks}} \abstract{In this paper we study time semi-discrete approximations of a class of exponentially stable infinite dimensional systems with unbounded feedbacks. It has recently been proved that for time semi-discrete systems, due to high frequency spurious components, the exponential decay property may be lost as the time step tends to zero. We prove that, by adding a suitable numerical viscosity term in the numerical scheme, one obtains approximations that are uniformly exponentially stable with respect to the discretization parameter.}\\ \textbf{Key words and phrases}: exponential stabilization, observability inequality, discretization, viscosity term.\\ \textbf{2010 MSC}: 70J25, 93B07, 49M25. \section{Introduction} Let $X$ and $Y$ be real Hilbert spaces ($Y$ will be identified with its dual space) with norms denoted respectively by $\Vert \cdot\Vert_X$ and $\Vert \cdot\Vert_Y$.\\ Let $A: D(A)\to X$ be a skew-adjoint operator with compact resolvent and $B\in \mathfrak{L} (Y, D(A)')$, where $D(A)'$ is the dual space of $D(A)$ obtained by means of the inner product in $X$.\\ We consider the system described by \begin{equation}\label{1} \dot{z}(t)=Az-BB^*z,\;\;\;t\geq 0,\;\;\;z(0)=z_0\in X. \end{equation} Here and henceforth, a dot denotes differentiation with respect to time $t$. The element $z_0\in X$ is the initial state, and $z(t)$ is the state of the system.\\ Most of the linear equations modeling the damped vibrations of elastic structures can be written in the form (\ref{1}).
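A minimal finite-dimensional instance of (\ref{1}) is the damped harmonic oscillator; the sketch below (our own illustrative choice of $A$ and $B$, not taken from the cited references) verifies numerically that the energy $E(t)=\frac{1}{2}\Vert z(t)\Vert_X^2$ is non-increasing and decays exponentially.

```python
import numpy as np
from scipy.linalg import expm

# Damped harmonic oscillator as a finite-dimensional instance of (1):
# z' = A z - B B^T z with skew-symmetric A and rank-one B
# (an illustrative choice, assumed for this sketch only).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-adjoint
B = np.array([[0.0], [1.0]])              # B^* z = z_2
L = A - B @ B.T
z0 = np.array([1.0, 0.0])

ts = np.linspace(0.0, 20.0, 201)
zs = [expm(L * t) @ z0 for t in ts]
E = np.array([0.5 * z @ z for z in zs])   # energy E(t) = |z(t)|^2 / 2
print(E[0], E[-1])                        # monotone, exponentially decaying
```

Here $\mathrm{d}E/\mathrm{d}t = -z_2(t)^2 = -\vert B^*z(t)\vert^2 \le 0$, so the sampled energies form a non-increasing sequence.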
Some other relevant models, such as the damped Schr\"odinger equations, fit in this setting as well.\\ We assume the following hypothesis introduced in \cite{4}:\\ (H)\\ If $\beta>0$ is fixed and $C_{\beta}=\{\lambda\in\mathbb{C} \;|\; \mathrm{Re}\, \lambda=\beta\}$, the function \begin{equation}\label{2} \lambda\in\mathbb{C}_+=\{\lambda\in\mathbb{C}\; |\; \mathrm{Re}\, \lambda>0\}\to H(\lambda)=B^*(\lambda I-A)^{-1}B\in \mathfrak{L}(Y) \end{equation} is bounded on $C_{\beta}$.\\ We define the energy of the solutions of system (\ref{1}) by \begin{equation}\label{3} E(t)=\frac{1}{2}\Vert z(t)\Vert_X^2,\;\; t\geq 0, \end{equation} which satisfies \begin{equation}\label{4} \frac{d E}{dt}(t)=-\Vert B^*z(t)\Vert_Y^2,\;\;t\geq 0. \end{equation} In this paper, we assume that system (\ref{1}) is exponentially stable, that is, there exist positive constants $\mu$ and $\nu$ such that any solution of (\ref{1}) satisfies \begin{equation}\label{5} E(t)\leq \mu E(0)\exp(-\nu t),\;\; t\geq 0. \end{equation} Our goal is to develop a theory allowing us to derive, as a consequence of (\ref{5}), exponential stability results for time-discrete systems.\\ We start by considering the following natural time-discretization scheme for the continuous system (\ref{1}). For any $\Delta t>0$, we denote by $z^k$ the approximation of the solution $z$ of system (\ref{1}) at time $t_k=k\Delta t$, for $k\in\mathbb{N}$, and introduce the following implicit midpoint time discretization of system (\ref{1}): \begin{equation}\label{6} \left \{ \begin{array}{lcr} \frac{z^{k+1}-z^k}{\Delta t} = A\left(\frac{z^k+z^{k+1}}{2}\right)-BB^*\left(\frac{z^k+z^{k+1}}{2}\right),\;k\in \mathbb{N}, & \\ z^0=z_0. \end{array} \right. \end{equation} We define the discrete energy by \begin{equation}\label{7} E^k=\frac{1}{2}\Vert z^k\Vert_X^2,\;\;\; k\in\mathbb{N}, \end{equation} which satisfies the dissipation law \begin{equation} \frac{E^{k+1}-E^k}{\Delta t}=-\left\Vert B^*\left(\frac{z^k+z^{k+1}}{2}\right)\right\Vert_Y^2,\;\; k\in\mathbb{N}.
\end{equation} It is well known that if the continuous system is exponentially stable, the time-discrete ones in general no longer inherit this property, due to spurious high frequency modes (see \cite{15}); that is, we cannot expect in general to find positive constants $\mu_0$ and $\nu_0$ such that \begin{equation}\label{1.1} E^k\leq \mu_0 E^0\exp(-\nu_0k\Delta t),\;\;\;k\in\mathbb{N}, \end{equation} holds for any solution of (\ref{6}) uniformly with respect to $\Delta t>0.$\\ Therefore, as in [12, 10, 13, 6], in order to get a uniform decay, it seems natural to add to system (\ref{6}) a suitable extra numerical viscosity term to damp these high-frequency spurious components. We obtain the new system: \begin{equation}\label{8} \left \{ \begin{array}{lcr} \frac{\tilde{z}^{k+1}-z^k}{\Delta t} = A\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right)-BB^*\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right),\;\;k\in \mathbb{N}, & \\\\ \frac{z^{k+1}-\tilde{z}^{k+1}}{\Delta t}={(\Delta t)^2}{A^2}{z^{k+1}},\;\;\;k\in\mathbb{N},\\\\ z^0=z_0. \end{array} \right. \end{equation} The energy of (\ref{8}), still defined by (\ref{7}), now satisfies: \begin{equation}\label{9} \left \{ \begin{array}{lcr} \tilde{E}^{k+1}=E^k - \Delta t \left\Vert B^*\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right)\right\Vert_Y^2,\;\;\;k\in \mathbb{N}, \\\\ E^{k+1}+(\Delta t)^3\Vert Az^{k+1}\Vert_X^2+\frac{(\Delta t)^6}{2}\Vert {A^2}{z}^{k+1}\Vert_X^2=\tilde{E}^{k+1},\;\;k\in \mathbb{N}. \end{array} \right. \end{equation} Putting these identities together, we get: \begin{equation*} E^{k+1}+(\Delta t)^3\left\Vert Az^{k+1}\right\Vert_X^2+\frac{(\Delta t)^6}{2}\left\Vert {A^2}{z}^{k+1}\right\Vert_X^2+ \Delta t \left\Vert B^*\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right)\right\Vert_Y^2 = E^k.
\end{equation*} Summing these identities from $j=k_1$ to $j=k_2-1$, we obtain:\\ $\Vert z^{k_2}\Vert_X^2 + 2 \Delta t \displaystyle{\sum_{j=k_1}^{k_2-1}}\left\Vert B^*\left(\frac{z^j+\tilde{z}^{j+1}}{2}\right)\right\Vert_Y^2 + 2 \Delta t \displaystyle{\sum_{j=k_1}^{k_2-1}}(\Delta t)^2 \Vert Az^{j+1}\Vert_X^2 $\\ $ \hspace{5cm} + \Delta t \displaystyle{\sum_{j=k_1}^{k_2-1}}(\Delta t)^5 \Vert {A^2}{z^{j+1}}\Vert_X^2$ \begin{equation}\label{10} =\Vert z^{k_1}\Vert_X^2,\;\;\; \forall k_1<k_2. \end{equation} The main result of this paper reads as follows: \begin{teo} Assume that system (\ref{1}) is exponentially stable, i.e. satisfies (\ref{5}), and that hypothesis (H) is verified.\\ Then there exist two positive constants $\mu_0$ and $\nu_0$ such that any solution of (\ref{8}) satisfies (\ref{1.1}) uniformly with respect to the discretization parameter $\Delta t>0$. \end{teo} Our strategy is based on the fact that the uniform exponential decay properties of the energy of systems (\ref{1}) and (\ref{8}), respectively, are equivalent to uniform observability properties for the conservative system \begin{equation}\label{11} \dot{y}=Ay,\;\;\;t\in\mathbb{R},\;\;\;\;\;y(0)=y_0\in X, \end{equation} and its time semi-discrete viscous version: \begin{equation}\label{12} \left \{ \begin{array}{lcr} \frac{\tilde{u}^{k+1}-u^k}{\Delta t} = A\left(\frac{u^k+\tilde{u}^{k+1}}{2}\right),\;\;\;k\in \mathbb{N}, & \\\\ \frac{u^{k+1}-\tilde{u}^{k+1}}{\Delta t}={(\Delta t)^2}{A^2}{u^{k+1}},\;\;\;k\in\mathbb{N},\\\\ u^0=u_0. \end{array} \right. \end{equation} At the continuous level, the observability property consists in the existence of a time $T>0$ and a positive constant $k_T>0$ such that \begin{equation}\label{13} k_T\Vert y_0\Vert_X^2\leq \int_{0}^{T}\Vert B^* y(t)\Vert_Y^2\;dt, \end{equation} for every solution of (\ref{11}) (see \cite{4}).\\ A similar argument can be applied to the semi-discrete system (\ref{8}).
Namely, the uniform exponential decay (\ref{1.1}) of the energy of solutions of (\ref{8}) is equivalent to the following observability inequality: there exist positive constants $T$ and $c$ such that, for any $\Delta t>0$, every solution $u$ of (\ref{12}) satisfies:\\ $c\|u_0\|_X^2 \leq \Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T\right]}}\|B^* u^k\|_Y^2+\Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T\right]}}(\Delta t)^2\|Au^{k+1}\|_X^2$\\ \begin{equation}\label{14} +\Delta t \displaystyle {\sum_{k \Delta t \in \left[0,T\right]}}(\Delta t)^5\|A^2 u^{k+1}\|_X^2. \end{equation} Our approach has common points with the result obtained in \cite{6} for feedbacks which are bounded in the energy space. The main difference is that we replace the assumption of boundedness of $B$ by the assumption (H).\\ Let us mention the works [13, 9], where boundary stabilization issues (that is, with unbounded $B$) were discussed for space semi-discretizations of the 1-d wave equation.\\ To our knowledge, this paper is the first one providing exponential decay properties for time-discrete systems, when the continuous setting has this property, in the case where $B$ is unbounded.\\ The outline of this paper is as follows.\\ In the second section, we give the background needed here. We recall some results on the observability of time-discrete conservative systems and prove (\ref{14}) in Section 3. Section 4 contains the proof of the main result. The last section is devoted to some applications.\\ In the following, to simplify the notation, $C$ and $c$ will denote positive constants that may change from line to line but do not depend on $\Delta t.$ \section{Some Background and Preliminaries} In this section we give some background (without proofs) that we need in the present work (for more details, see \cite{14}).\\ Throughout this section, $X$ is a Hilbert space and $A:D(A)\to X$ is a densely defined operator with $\rho(A)\neq \emptyset$ ($\rho(A)$ is the resolvent set of $A$).
We assume that $D(A)$ is endowed with the graph norm $\Vert \cdot\Vert_{D(A)}$.\\ For every $\beta \in\rho(A)$, we define $$ \Vert x\Vert_1=\Vert (\beta I-A)x\Vert_X,\;\;\;\forall x\in D(A).$$ The space $D(A)$ with this norm is a Hilbert space, denoted $X_1$. It is well known that $\Vert \cdot\Vert_1$ is equivalent to $\Vert \cdot\Vert_{D(A)}$.\\ We denote by $X_{-1}$ the completion of $X$ with respect to the norm $$\Vert x\Vert_{-1}=\Vert (\beta I-A)^{-1}x\Vert_X \;\;\;\; \forall x\in X \;\text{and}\; \beta \in \rho(A).$$ Then $A\in\mathfrak{L}(X_1,X)$ and $A$ has a unique extension $\tilde{A}\in\mathfrak{L}(X,X_{-1})$.\\ Moreover, \begin{equation}\label{15} (\beta I-A)^{-1}\in\mathfrak{L}(X,X_1),\;\;\; (\beta I-\tilde{A})^{-1}\in\mathfrak{L}(X_{-1},X) \end{equation} and these two operators are unitary.\\ We also recall that if $A$ is maximal dissipative (m-dissipative for brevity), then $(0,\infty)\subset \rho(A)$ and \begin{equation}\label{16} \Vert (\beta I-A)^{-1}\Vert_{\mathfrak{L}(X)}\leq \frac{1}{\beta}\;\;\;\forall \beta\in (0,\infty). \end{equation} When $A$ is skew-adjoint, both $A$ and $-A$ are m-dissipative.\\ Now, we give the definition of a contraction. \begin{defn} A contraction is a bounded operator $C$ with norm $\Vert C\Vert \leq 1$. Note that the powers of a contraction have the same property: $\Vert C^n\Vert\leq 1$ for $n\in\mathbb{N}$. \end{defn} The following theorem (\cite{11}) gives a way of characterizing the generator of a semigroup of contractions. \begin{teo} Let $X$ be a Hilbert space, and let $A: D(A)\to X$ be a linear operator with dense domain. Then the following conditions are equivalent:\\ i) $A$ is the generator of a $C_0$ semigroup of contractions,\\ ii) all $\lambda\in \mathbb{C}_+$ belong to $\rho(A)$, and $A_{\lambda}=(\bar{\lambda} I+A)(\lambda I-A)^{-1}$ is a contraction.
\end{teo} Note that if $A$ is skew-adjoint, then it is the generator of a $C_0$ contraction semigroup.\\ Finally, we also recall that if $A$ is skew-adjoint, then $-A^2$ is a positive self-adjoint operator and consequently $A^2$ is m-dissipative. \\ \section{Observability of time-discrete systems} This section is organized as follows. First, we recall the results of \cite{5} on the observability of the time-discrete conservative version of (\ref{6}). Then, we give the proof of the observability inequality (\ref{14}), which relies on decomposing the solution $u$ of (\ref{12}) into its low- and high-frequency parts, which we handle separately, as in \cite{6}.\\ \subsection{Some results on discrete observability} We first need to introduce some notation.\\ Since $A$ is a skew-adjoint operator with compact resolvent, its spectrum is discrete and $\sigma(A)=\{i\mu_j;\;j\in\mathbb{N}\}$, where $(\mu_j)_{j\in\mathbb{N}}$ is a sequence of real numbers such that $\vert \mu_j\vert \to\infty$ when $j\to\infty$. Let $(\phi_j)_{j\in\mathbb{N}}$ be an orthonormal basis of eigenvectors of $A$ associated with the eigenvalues $(i\mu_j)_{j\in\mathbb{N}}$, that is, \begin{equation}\label{17} A\phi_j=i\mu_j\phi_j. \end{equation} Moreover, define \begin{equation}\label{18} C_s(A) = \mathrm{span} \{\phi_j:\;\text{the corresponding}\;i\mu_j\;\text{satisfies}\;\vert\mu_j\vert\leq s\}. \end{equation} The following theorem was proved in \cite{5}: \begin{teo} Assume that $B^*\in\mathfrak{L}(D(A),Y)$, that is, \begin{equation}\label{19} \Vert B^*z\Vert_Y^2\leq C_B^2\Vert z\Vert_{D(A)}^2=C_B^2(\Vert Az\Vert_X^2+\Vert z\Vert_X^2),\;\;\forall z\in D(A), \end{equation} and that $A$ and $B^*$ satisfy the following hypothesis: \begin{equation}\label{20} \begin{cases} \text{There exist constants}\;\; M, m>0\;\;\;\text{such that}\\\\ M^{2}\Vert(i w I-A)y\Vert_X^2+m^2\Vert B^*y\Vert_Y^2\geq \Vert y\Vert_X^2,\; \forall w \in\mathbb{R},\;y \in D(A).
\end{cases} \end{equation} \\ Then, for any $\delta>0$, there exists $T_{\delta}$ such that for any $T>T_{\delta}$, there exists a positive constant $k_{T,\delta}$, independent of $\Delta t$, that depends only on $m, M, C_B, T$ and $\delta$, such that for $\Delta t>0$ small enough, we have: \begin{equation}\label{21} k_{T,\delta}\Vert y^{0}\Vert_X^2 \leq \Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T \right]} }\left\Vert B^*\left(\frac{y^k+y^{k+1}}{2}\right)\right\Vert_Y^2,\;\; \forall y^0 \in C_{\delta/ \Delta t}(A), \end{equation} where $y^k$ is the solution of \begin{equation}\label{22} \frac{y^{k+1}-y^{k}}{\Delta t} = A\left(\frac{y^{k}+y^{k+1}}{2}\right),\;\;k\in \mathbb{N},\;\; y^0=y_0. \end{equation} \end{teo} In the sequel, when there is no ambiguity, we will use the simplified notation $C_{\delta/\Delta t}$ instead of $C_{\delta/\Delta t}(A)$.\\ Hypothesis (\ref{20}) is the so-called Hautus test or resolvent estimate, which has been proved in \cite{8} to be equivalent to the continuous observability inequality (\ref{13}) for the conservative system (\ref{11}) for suitable positive constants $T$ and $k_T$; this, in turn, is equivalent to the exponential decay property (\ref{5}) for the continuous damped system (\ref{1}).\\ The following lemma, proved in \cite{6}, gives the resolvent estimate: \begin{lem} Under the assumptions of Theorem 1.1, the resolvent estimate (\ref{20}) holds, with constants $m$ and $M$ that depend only on $\mu$ and $\nu$ given by (\ref{5}). \end{lem} By Theorem 3.1, for any $\delta>0$ and any time $T^*>T_{\delta}$, there exists a positive constant $k_{T^*,\delta}$ such that inequality (\ref{21}) holds for any solution $y$ of (\ref{22}) with $y^0\in C_{\delta/\Delta t}$. In the sequel, we fix a positive number $\delta>0$ (for instance $\delta=1$), and $T^*=2T_{\delta}$.
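For orientation, the spectral effect of the viscosity in (\ref{12}) can be checked on a finite-dimensional skew-adjoint matrix: one step of (\ref{12}) acts on the eigenmode of $A$ with eigenvalue $i\mu_j$ as multiplication by a factor of modulus $1/(1+(\Delta t)^3\mu_j^2)$, since the midpoint step is a Cayley transform of modulus one. High frequencies are thus damped while low frequencies are almost conserved. The sketch below verifies this on a centered-difference transport operator, chosen purely as a convenient example.

```python
import numpy as np

# Finite skew-symmetric A (centered-difference transport operator,
# an illustrative choice) and one step of the viscous scheme (12).
n, dt = 50, 0.01
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1], A[i + 1, i] = 1.0 / (2 * dt), -1.0 / (2 * dt)

I = np.eye(n)
mid = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)   # midpoint step
visc = np.linalg.inv(I - dt**3 * (A @ A))                   # viscosity step
S = visc @ mid                                              # one step of (12)

mu = np.abs(np.linalg.eigvals(A).imag)                      # frequencies |mu_j|
amp = np.sort(np.abs(np.linalg.eigvals(S)))                 # per-step moduli
pred = np.sort(1.0 / (1.0 + dt**3 * mu**2))                 # predicted moduli
print(np.max(np.abs(amp - pred)))                           # the spectra agree
```

In particular all moduli are at most one, so the viscous step is a contraction, consistent with the energy identities (\ref{9}).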
\subsection{Uniform observability inequalities} \begin{lem} There exists a constant $c>0$ such that (\ref{14}) holds with $T=T^*$ for all solutions $u$ of (\ref{12}), uniformly with respect to $\Delta t$. \end{lem} {\bf Proof.}\;\; We decompose the solution $u$ of (\ref{12}) into its low and high frequency parts. To be more precise, we consider \begin{equation}\label{24} u_l=\pi_{\delta/\Delta t}u,\;\;\;\; u_h=(I-\pi_{\delta/\Delta t})u, \end{equation} where $\delta$ is the positive number that we have chosen above, and $\pi_{\delta/\Delta t}$ is the orthogonal projection onto the space $C_{\delta/\Delta t}$ defined in (\ref{18}). Here the notations $u_l$ and $u_h$ stand for the low and high frequency components, respectively.\\ Note that both $u_l$ and $u_h$ are solutions of (\ref{12}).\\ Besides, $u_h$ lies in the space $C_{\delta/ \Delta t}^\bot$, in which the following property holds: \begin{equation}\label{25} \Delta t \Vert Ay\Vert_X\geq \delta \Vert y\Vert_X,\;\;\;\;\forall y\in C_{\delta/ \Delta t}^\bot. \end{equation} {\bf The low frequencies.}\;\; We compare $u_l$ with the solution $y_l$ of (\ref{22}) with initial data $y_l(0)=u_l(0)$. Set $w_l=u_l-y_l$. From (\ref{21}), we get: $$k_{T^*,\delta}\Vert u_l^{0}\Vert_X^2\leq 2\Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{u_l^k+\tilde{u}_l^{k+1}}{2}\right)\right\Vert_Y^2$$ \begin{equation}\label{26} \hspace{4.1cm} + 2\Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{w_l^k+\tilde{w}_l^{k+1}}{2}\right)\right\Vert_Y^2. \end{equation} The equation satisfied by $w_l$ is: \begin{equation}\label{27} \left \{ \begin{array}{lcr} \frac{\tilde{w}_l^{k+1}-w_l^k}{\Delta t} = A\left(\frac{w_l^k+\tilde{w}_l^{k+1}}{2}\right),\;\;k\in \mathbb{N}, \\\\ \frac{w_l^{k+1}-\tilde{w}_l^{k+1}}{\Delta t}={(\Delta t)^2}{A^2}{u_l^{k+1}},\;\;k\in\mathbb{N},\\\\ w_l^0=0. \end{array} \right.
\end{equation} It is easy to see that $\Vert \tilde{w}_l^{k+1}\Vert_X^2=\Vert w_l^k\Vert_X^2$. Besides, we have:\\ $\left\Vert B^*\left(\frac{w_l^k+\tilde{w}_l^{k+1}}{2}\right)\right\Vert_Y^2 \leq \frac{1}{2}\big\Vert B^*w_l^k\big\Vert_Y^2 + \frac{1}{2}\big\Vert B^*\tilde{w}_l^{k+1}\big\Vert_Y^2$.\\ Simple calculations give: $$B^*\tilde{w}_l^{k+1}=B^*(I-\frac{\Delta t}{2} A)^{-1}(I+\frac{\Delta t}{2} A)w_l^k.$$ Therefore, we have:\\ $$\big\Vert B^*\tilde{w}_l^{k+1}\big\Vert_Y^2 = \big\Vert B^*(I-\frac{\Delta t}{2} A)^{-1}(I+\frac{\Delta t}{2} A)w_l^k\big\Vert_Y^2$$ $\hspace{3.5cm} \leq C_B^2 \big\Vert (I-\frac{\Delta t}{2} A)^{-1}(I+\frac{\Delta t}{2} A)w_l^k\big\Vert_{D(A)}^2$\\ $\hspace{3.5cm} \leq C \big\Vert (I-\frac{\Delta t}{2} A)^{-1}(I+\frac{\Delta t}{2} A)w_l^k\big\Vert_1^2.$\\ Finally, we get $$ \big\Vert B^*\tilde{w}_l^{k+1}\big\Vert_Y^2\leq C\big\Vert (I+\frac{\Delta t}{2} A)w_l^k\big\Vert_X^2$$ $\hspace{4.5cm}\leq C\big\Vert w_l^k\big\Vert_X^2 +C\frac{(\Delta t)^2}{4}\big\Vert Aw_l^k\big\Vert_X^2$\\ $\hspace{4.5cm} \leq C\big\Vert w_l^k\big\Vert_X^2,$ where we have used (\ref{15}) and the fact that $w_l^k\in C_{\delta/\Delta t}$.\\ In the same manner, by using (\ref{15}), the fact that $\tilde{w}_l^{k+1}\in C_{\delta/\Delta t}$ and $\Vert \tilde{w}_l^{k+1}\Vert_X^2=\Vert w_l^k\Vert_X^2$, we show that $$\big\Vert B^*w_l^k\big\Vert_Y^2 \leq C \big\Vert w_l^k\big\Vert_X^2.$$ Consequently, we obtain: \begin{equation}\label{28} \left\Vert B^*\left(\frac{w_l^k+\tilde{w}_l^{k+1}}{2}\right)\right\Vert_Y^2 \leq C \left\Vert w_l^k\right\Vert_X^2. \end{equation} In addition, we have (see \cite{6}) \begin{equation}\label{29} \big\Vert w_l^k\big\Vert_X^2 \leq \Delta t \sum_{j=1}^{k} (\Delta t)^2\big\Vert Au_l^j\big\Vert_X^2+\delta^2 \Delta t\sum_{j=0}^{k}\big\Vert w_l^j\big\Vert_X^2.
\end{equation} Gr\"{o}nwall's lemma applies and allows us to deduce from (\ref{26}), (\ref{28}) and (\ref{29}) the existence of a positive constant $c$ independent of $\Delta t$ such that \begin{equation*} c\Vert u_l^{0}\Vert_X^2\leq \Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{u_l^k+\tilde{u}_l^{k+1}}{2}\right)\right\Vert_Y^2 +\Delta t \displaystyle { \sum_{k \Delta t \in \left]0,T^*\right]}}(\Delta t)^2\|Au_l^k\|_X^2. \end{equation*} Besides, $$\Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{u_l^k+\tilde{u}_l^{k+1}}{2}\right)\right\Vert_Y^2\leq 2\Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{u^k+\tilde{u}^{k+1}}{2}\right)\right\Vert_Y^2$$ $$ \hspace{6cm}+ 2\Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{u_h^k+\tilde{u}_h^{k+1}}{2}\right)\right\Vert_Y^2.$$ Let us then estimate the last term on the right-hand side of the above inequality.
We have $$\left\Vert B^*\left(\frac{u_h^k+\tilde{u}_h^{k+1}}{2}\right)\right\Vert_Y^2 \leq \frac{1}{2}\big\Vert B^*u_h^k\big\Vert_Y^2 + \frac{1}{2}\big\Vert B^*\tilde{u}_h^{k+1}\big\Vert_Y^2.$$ But, $$B^*u_h^k=B^*(I+\frac{\Delta t}{2} A)^{-1}(I-\frac{\Delta t}{2} A)\tilde{u}_h^{k+1}.$$ Then, we have $$\big\Vert B^*u_h^k\big\Vert_Y^2 \leq C \big\Vert (I+\frac{\Delta t}{2} A)^{-1}(I-\frac{\Delta t}{2} A)\tilde{u}_h^{k+1}\big\Vert_{D(A)}^2$$ $\hspace{2.8cm} \leq C \big\Vert (I-\frac{\Delta t}{2} A)\tilde{u}_h^{k+1}\big\Vert_X^2$\\ $\hspace{2.8cm} \leq 2C \big\Vert \tilde{u}_h^{k+1}\big\Vert_X^2 +C \frac{(\Delta t)^2}{2}\big\Vert A\tilde{u}_h^{k+1}\big\Vert_X^2.$\\ Now, since $A\in\mathfrak{L}(X_1,X)$ and from (\ref{15}), we get $$\big\Vert A\tilde{u}_h^{k+1}\big\Vert_X^2 = \big\Vert A(I-\frac{\Delta t}{2} A)^{-1}(I+\frac{\Delta t}{2} A)u_h^k\big\Vert_X^2$$ $\hspace{3.5cm} \leq C \big\Vert (I-\frac{\Delta t}{2} A)^{-1}(I+\frac{\Delta t}{2} A)u_h^k\big\Vert_{X_1}^2$\\ $\hspace{3.5cm} \leq C \big\Vert (I+\frac{\Delta t}{2} A)u_h^k\big\Vert_X^2$\\ $\hspace{3.5cm} \leq 2C \left(\big\Vert u_h^k\big\Vert_X^2 +\frac{(\Delta t)^2}{2} \big\Vert Au_h^k\big\Vert_X^2\right).$ \\ Consequently, for $\Delta t$ small enough, we obtain: $$\big\Vert B^*u_h^k\big\Vert_Y^2\leq C \left(\big\Vert u_h^k\big\Vert_X^2 + (\Delta t)^2\big\Vert Au_h^k\big\Vert_X^2\right).$$ In the same way, we prove that: $$\big\Vert B^*\tilde{u}_h^{k+1}\big\Vert_Y^2\leq C \left(\big\Vert u_h^k\big\Vert_X^2 + (\Delta t)^2\big\Vert Au_h^k\big\Vert_X^2\right),$$ and, since $\tilde{u}_h^{k+1}$ and $u_h^k$ belong to $C_{\delta/ \Delta t}^\bot$ for all $k$, we get from (\ref{25}) that $\Delta t \displaystyle { \sum_{k \Delta t \in \left]0,T^* \right]} }\left\Vert B^*\left(\frac{u_h^k+\tilde{u}_h^{k+1}}{2}\right)\right\Vert_Y^2 \leq C \Delta t \displaystyle {\sum_{k \Delta t \in \left]0,T^* \right]} }\big\Vert u_h^k\big\Vert_X^2$ \\ $ \hspace{5.5cm} + C \Delta t \displaystyle { \sum_{k \Delta t \in \left]0,T^* \right]} }(\Delta
t)^2\big\Vert Au_h^k\big\Vert_X^2.$\\ $\hspace{5.5cm}\leq C \Delta t \displaystyle { \sum_{k \Delta t \in \left]0,T^* \right]} }(\Delta t)^2\big\Vert Au_h^k\big\Vert_X^2.$\\ Therefore, we get:\\ $\Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{u_h^k+\tilde{u}_h^{k+1}}{2}\right)\right\Vert_Y^2 \leq C \Delta t \displaystyle {\sum_{k \Delta t \in \left]0,T^* \right]} }(\Delta t)^2\big\Vert Au_h^k\big\Vert_X^2 $\\ $\hspace{5.5cm} + \Delta t\left\Vert B^*\left(\frac{u_h^0+\tilde{u}_h^{1}}{2}\right)\right\Vert_Y^2.$\\ Besides,\\ $\Delta t\left\Vert B^*\left(\frac{u_h^0+\tilde{u}_h^{1}}{2}\right)\right\Vert_Y^2 \leq 2 \Delta t\left\Vert B^*\left(\frac{u_l^0+\tilde{u}_l^{1}}{2}\right)\right\Vert_Y^2 + 2\Delta t\left\Vert B^*\left(\frac{u^0+\tilde{u}^{1}}{2}\right)\right\Vert_Y^2.$\\ Now, as in (\ref{28}), we show that $$\Delta t\left\Vert B^*\left(\frac{u_l^0+\tilde{u}_l^{1}}{2}\right)\right\Vert_Y^2\leq C\Delta t \big\Vert u_l^0\big\Vert_X^2.$$ Then,\\ $\Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{u_h^k+\tilde{u}_h^{k+1}}{2}\right)\right\Vert_Y^2 \leq C \Delta t \displaystyle {\sum_{k \Delta t \in \left]0,T^* \right]} }(\Delta t)^2\big\Vert Au_h^k\big\Vert_X^2+C\Delta t \big\Vert u_l^0\big\Vert_X^2$ \\ $\hspace{6cm}+\Delta t\left\Vert B^*\left(\frac{u^0+\tilde{u}^{1}}{2}\right)\right\Vert_Y^2 .$\\ It follows that there exists $c>0$ independent of $\Delta t$ such that $$c\Vert u_l^0\Vert_X^2\leq \Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{u^k+\tilde{u}^{k+1}}{2}\right)\right\Vert_Y^2+\Delta t \displaystyle { \sum_{k \Delta t \in \left]0,T^*\right]}}(\Delta t)^2\|Au_l^k\|_X^2$$ \begin{equation}\label{30} \hspace{3cm} +\Delta t \displaystyle { \sum_{k \Delta t \in \left]0,T^* \right]} }(\Delta t)^2\big\Vert Au_h^k\big\Vert_X^2.
\end{equation} {\bf The high frequencies.}\;\; Proceeding as in \cite{6}, we get:\\ $C\Vert u_h^0\Vert_X^2\leq \Delta t \displaystyle { \sum_{k \Delta t \in \left]0,T^*\right]}}(\Delta t)^2\|Au_h^k\|_X^2$\\ \begin{equation}\label{31} \hspace{5.5cm} +\Delta t \displaystyle { \sum_{k \Delta t \in \left]0,T^*\right]}}(\Delta t)^5\|A^2u_h^k\|_X^2. \end{equation} Combining (\ref{30}) and (\ref{31}) yields Lemma 3.2, since $u_h$ and $u_l$ lie in orthogonal spaces with respect to the scalar products $\langle .,.\rangle_X$ and $\langle A.,A.\rangle_X.$ \;\;$\square$ \section{Proof of Theorem 1.1.} The proof of Theorem 1.1 will essentially rely on the following lemma: \begin{lem} Let $w$ be the solution of \begin{equation*} \left \{ \begin{array}{lcr} \frac{\tilde{w}^{k+1}-w^k}{\Delta t} = A\left(\frac{w^k+\tilde{w}^{k+1}}{2}\right)+Bv^k,\;\;\;k\in \mathbb{N}, & \\\\ \frac{w^{k+1}-\tilde{w}^{k+1}}{\Delta t}={(\Delta t)^2}{A^2}{w^{k+1}},\;\;\;\;\;k\in\mathbb{N},\\\\ w^0=0, \end{array} \right. \end{equation*} where $v^k\in l^2(k\Delta t; Y)=\{(v^k)_k\;\; \text{such that}\;\; \displaystyle{\sum_{k\Delta t\in[0,T^*]}}\Vert v^k\Vert_Y^2 <\infty\}$.\\ Let $T^*>0$ be defined as before. There exists a positive constant $C$ independent of $\Delta t$ such that for all $0 < \Delta t < 1$, we have the following estimate \begin{equation}\label{32} \Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert B^*\left(\frac{w^k+\tilde{w}^{k+1}}{2}\right)\right\Vert_Y^2 \leq C \Delta t \displaystyle { \sum_{k \Delta t \in \left[0,T^* \right]} }\left\Vert v^k\right\Vert_Y^2. \end{equation} \end{lem} {\bf Proof.} We set \begin{equation}\label{321} S_{k}= \frac{\widetilde{w}^{k+1}+w^{k}}{2}. \end{equation} From the first equation of the above scheme, we get: \begin{equation}\label{331} S_{k}=(I-\frac{\Delta t}{2}A)^{-1}w^{k}+\frac{\Delta t}{2}(I-\frac{\Delta t}{2}A)^{-1}B v^{k}.
\end{equation} Besides, \begin{equation}\label{341} w^{k+1}=Lw^{k}+\Delta t RBv^{k}, \end{equation} where: \begin{equation}\label{351} \begin{cases} R=(I-(\Delta t)^{3}A^{2})^{-1}(I-\frac{\Delta t}{2}A)^{-1}\\\\ L=(I-(\Delta t)^{3}A^{2})^{-1}(I-\frac{\Delta t}{2}A)^{-1}(I+\frac{\Delta t}{2}A). \end{cases} \end{equation} Then, for $k\geq 1$: \begin{equation}\label{361} w^{k}=\Delta t\sum_{l=0}^{k-1} L^{k-1-l}RBv^{l}. \end{equation} Combining (\ref{331}) and (\ref{361}), we deduce that: \begin{equation}\label{371} S_{k}=\frac{\Delta t}{2}(I-\frac{\Delta t}{2}A)^{-1}Bv^{k}+\Delta t\sum_{l=0}^{k-1}(I-\frac{\Delta t}{2}A)^{-1}L^{k-1-l}RBv^{l}. \end{equation} Consequently, \begin{equation*} B^*S_{k}=\frac{\Delta t}{2}B^*(I-\frac{\Delta t}{2}A)^{-1}Bv^{k}+\Delta t\sum_{l=0}^{k-1}B^*(I-\frac{\Delta t}{2}A)^{-1}L^{k-1-l}RBv^{l}. \end{equation*} Using the Cauchy--Schwarz inequality and hypothesis (H), it follows that there exists $c >0$ independent of $\Delta t$ such that\\ $\big\Vert B^{*}S_{k}\big\Vert_Y^2\leq c \big\Vert v^{k}\big\Vert_Y^2+(\Delta t)^2 \displaystyle{\sum_{l=0}^{k-1}} \big\Vert B^{*}(I-\frac{\Delta t}{2}A)^{-1}L^{k-1-l}RB \big\Vert_{\mathfrak{L}(Y)}^2\displaystyle{\sum_{l=0}^{k-1}}\big\Vert v^{l} \big\Vert_Y^2. $\\ Besides, we have:\\ $\big\Vert B^{*}(I-\frac{\Delta t}{2}A)^{-1}L^{k-1-l}RB \big\Vert_{\mathfrak{L}(Y)}^2\leq \Vert B^*\Vert^2\Vert(I-\frac{\Delta t}{2}A)^{-1}\Vert^2\Vert L^{k-1-l}\Vert^2\Vert R\Vert^2\Vert B\Vert^2.$\\ Now, let us estimate $\Vert L^{k-1-l}\Vert^2$.
We have:\\ $\Vert L^{k-1-l}\Vert^2=\big\Vert [(I-(\Delta t)^{3}A^{2})^{-1}]^{k-1-l}[(I-\frac{\Delta t}{2}A)^{-1}(I+\frac{\Delta t}{2}A)]^{k-1-l}\big\Vert^2$\\ $ \hspace{2cm} \leq\Vert (I+\frac{\Delta t}{2}A)(I-\frac{\Delta t}{2}A)^{-1}\Vert^{2(k-1-l)}$\\ $\hspace{2cm}\leq 1,$\\ where we used (\ref{16}) and Theorem 2.1.\\ Using again (\ref{16}) (since $A$ and $A^2$ are m-dissipative), we get\\ $\big\Vert B^{*}(I-\frac{\Delta t}{2}A)^{-1}L^{k-1-l}RB \big\Vert_{\mathfrak{L}(Y)}^2\leq \Vert B\Vert^4.$\\ We deduce that\\ \begin{equation}\label{40} \big\Vert B^{*}S_{k}\big\Vert_Y^2\leq c \big\Vert v^{k} \big\Vert_Y^2 + (\Delta t)^{2}\big\Vert B \big\Vert^4 \frac{T}{\Delta t}\sum_{l=0}^{k-1}\big\Vert v^{l} \big\Vert_Y^2, \end{equation} which implies\\\\ \begin{equation}\label{41} \displaystyle{\sum_{k=1}^{\frac{T}{\Delta t}}} \big\Vert B^{*}S_{k}\big\Vert_Y^2\leq c \displaystyle{\sum_{k=1}^{ \frac{T}{\Delta t}}} \big\Vert v^{k} \big\Vert_Y^2 +\big\Vert B \big\Vert^4 T \Delta t \displaystyle{\sum_{k=1}^{\frac{T}{\Delta t}}}\displaystyle{ \sum_{l=0}^{k-1}}\big\Vert v^{l} \big\Vert_Y^2. 
\end{equation} Since $k-1< \frac{T}{\Delta t}$, we obtain \begin{equation}\label{42} \displaystyle{\sum_{k=1}^{\frac{T}{\Delta t}}} \big\Vert B^{*}S_{k}\big\Vert_Y^2\leq c \displaystyle{\sum_{k=1}^{ \frac{T}{\Delta t}}} \big\Vert v^{k} \big\Vert_Y^2 +\big\Vert B \big\Vert^4 T \Delta t \displaystyle{\sum_{k=1}^{\frac{T}{\Delta t}}}\displaystyle{ \sum_{l=0}^{\frac{T}{\Delta t}}}\big\Vert v^{l} \big\Vert_Y^2 \end{equation} \begin{equation}\label{43} \hspace{3.3cm}\leq c \displaystyle{\sum_{k=1}^{ \frac{T}{\Delta t}}} \big\Vert v^{k} \big\Vert_Y^2 +\big\Vert B \big\Vert^4 T \Delta t \displaystyle{\sum_{l=0}^{\frac{T}{\Delta t}}}\displaystyle{ \sum_{k=1}^{\frac{T}{\Delta t}}}\big\Vert v^{l} \big\Vert_Y^2 \end{equation} \begin{equation}\label{44} \hspace{1cm}\leq c \displaystyle{\sum_{k=1}^{ \frac{T}{\Delta t}}} \big\Vert v^{k} \big\Vert_Y^2 +\big\Vert B \big\Vert^4 T^{2} \displaystyle{ \sum_{k=0}^{\frac{T}{\Delta t}}}\big\Vert v^{k} \big\Vert_Y^2. \end{equation} Simple calculations give: \begin{equation}\label{45} \big\Vert B^{*}S_{0}\big\Vert_Y^2\leq c \big\Vert v^{0} \big\Vert_Y^2. \end{equation} Combining (\ref{44}), (\ref{45}) and the fact that $S_{k}= \frac{\widetilde{w}^{k+1}+w^{k}}{2}$, we get the existence of a constant $C$ independent of $\Delta t$ such that\\ \begin{equation*} \sum_{k=0}^{\frac{T}{\Delta t}}\left\Vert B^{*}(\frac{w^{k}+\widetilde{w}^{k+1}}{2})\right\Vert_{Y}^{2} \leqslant C\sum_{k=0}^{\frac{T}{\Delta t}}\big\Vert v^{k}\big\Vert_Y^2.\;\;\;\;\square \end{equation*} Now, we give the proof of our main result.\\ {\bf Proof of Theorem 1.1.}\;\;Here we follow the argument in \cite{7}.\\ We decompose the solution $z$ of (\ref{8}) as $z=u+w$, where $u$ is the solution of the system (\ref{12}) with initial data $u^0=z^0$.
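As a side check on the operators introduced in Lemma 4.1, the uniform bound $\Vert L^{k}\Vert\leq 1$ used above (the Cayley factor is unitary for skew-adjoint $A$, and the viscosity factor is a contraction) can be illustrated in a small finite-dimensional numerical sketch. The matrix $A$ and the step $\Delta t$ below are illustrative choices, not objects from this paper:

```python
import numpy as np

# Illustrative only: A is a random real antisymmetric (hence skew-adjoint)
# matrix and dt is an arbitrary small step.
rng = np.random.default_rng(0)
n, dt = 8, 1e-2
M = rng.standard_normal((n, n))
A = M - M.T                                   # A^T = -A

I = np.eye(n)
visc = np.linalg.inv(I - dt**3 * (A @ A))     # (I - (dt)^3 A^2)^{-1}: a contraction
cayley = np.linalg.solve(I - 0.5 * dt * A,    # (I - dt/2 A)^{-1}(I + dt/2 A):
                         I + 0.5 * dt * A)    # orthogonal for antisymmetric A
L = visc @ cayley                             # the operator L of (351)

# Every power of L is a contraction, uniformly in k:
norms = [np.linalg.norm(np.linalg.matrix_power(L, k), 2) for k in range(1, 60)]
print(max(norms))                             # stays <= 1 up to round-off
```

The same factorization also shows $\Vert R\Vert\leq 1$, which is what makes the constant in (\ref{32}) independent of $\Delta t$.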
Applying Lemma 3.2 to $u=z-w$, we get:\\ $ c \Vert z^0\Vert_X^2\leq 2\Bigg(\Delta t \displaystyle{\sum_{k\Delta t\in [0,T^*]}}\left\Vert B^*\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right)\right\Vert_Y^2$\\ $ + \Delta t \displaystyle{\sum_{k\Delta t\in [0,T^*[}}(\Delta t)^2\left\Vert Az^{k+1}\right\Vert_X^2 +\Delta t\displaystyle{\sum_{k\Delta t\in [0,T^*[}}(\Delta t)^5\left\Vert {A^2}{z}^{k+1}\right\Vert_X^2 \Bigg )$\\ $+ 2\Bigg( \Delta t \displaystyle{\sum_{k\Delta t\in [0,T^*]}}\left\Vert B^* \left(\frac{w^k+\tilde{w}^{k+1}}{2}\right)\right\Vert_Y^2+ \Delta t \displaystyle{\sum_{k\Delta t\in [0,T^*[}}(\Delta t)^2\left\Vert Aw^{k+1}\right\Vert_X^2$\\ \begin{equation}\label{33} +\Delta t\displaystyle{\sum_{k\Delta t\in [0,T^*[}}(\Delta t)^5\left\Vert {A^2}{w}^{k+1}\right\Vert_X^2 \Bigg). \end{equation} Below, we bound the terms on the right-hand side of (\ref{33}) involving $w$ by the ones involving $z$.\\ The function $w$ satisfies: \begin{equation}\label{34} \left \{ \begin{array}{lcr} \frac{\tilde{w}^{k+1}-w^k}{\Delta t} = A\left(\frac{w^k+\tilde{w}^{k+1}}{2}\right)-BB^{\ast}\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right),\;\;\;k\in \mathbb{N}, & \\\\ \frac{w^{k+1}-\tilde{w}^{k+1}}{\Delta t}={(\Delta t)^2}{A^2}{w^{k+1}},\;\;\;\;k\in\mathbb{N},\\\\ w^0=0. \end{array} \right. \end{equation} \\ Applying now Lemma 4.1 with $v^k =-B^*\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right)$, we obtain that \begin{equation}\label{35} \Delta t \displaystyle{\sum_{k\Delta t\in [0,T^*]}}\left\Vert B^*\left(\frac{w^k+\tilde{w}^{k+1}}{2}\right)\right\Vert_Y^2 \leq C \Delta t \displaystyle{\sum_{k\Delta t\in [0,T^*]}}\left\Vert B^*\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right)\right\Vert_Y^2.
\end{equation} Besides, we have (see \cite{6}) \begin{displaymath} \Vert w^k \Vert_X^2 + 2(\Delta t) \sum_{j=0}^{k-1}(\Delta t)^2 \Vert Aw^{j+1} \Vert_X^2 + (\Delta t)\sum_{j=0}^{k-1}(\Delta t)^5 \Vert {A^2}{w^{j+1}} \Vert_X^2 \end{displaymath} \begin{equation}\label{36} \leq \Delta t \sum_{j=0}^{k-1}\left(\left\Vert B^*\left(\frac{z^j+\tilde{z}^{j+1}}{2}\right)\right\Vert_Y^2+\left\Vert B^*\left(\frac{w^j+\tilde{w}^{j+1}}{2}\right)\right\Vert_Y^2\right). \end{equation} Combining (\ref{33}), (\ref{35}) and (\ref{36}), we get the existence of a constant $c$ independent of $\Delta t$ such that \\ $c \Vert z^0\Vert_X^2\leq 2\Delta t \displaystyle{\sum_{k\Delta t\in [0,T^*]}}\left\Vert B^*\left(\frac{z^k+\tilde{z}^{k+1}}{2}\right)\right\Vert_Y^2 + \Delta t \displaystyle{\sum_{k\Delta t\in [0,T^*[}}(\Delta t)^2\left\Vert Az^{k+1}\right\Vert_X^2$\\\\ $$+\Delta t\displaystyle{\sum_{k\Delta t\in [0,T^*[}}(\Delta t)^5\left\Vert {A^2}{z}^{k+1}\right\Vert_X^2.$$ Finally, using the energy identity (\ref{10}), we get that \[\Vert z^{T^{\ast}/ \Delta t}\Vert_X^2\leq (1-c)\Vert z^0\Vert_X^2.\] The semi-group property then implies Theorem 1.1.\;\;\;\;\; $\square$ \begin{rem} There are other possible discretization schemes for system (\ref{1}), and other viscosity operators could have been chosen (see Subsection 2.3 in \cite{6} for more details). Note that the results given in that subsection remain valid in our case, provided the assumption of boundedness of $B$ is replaced by assumption (H). \end{rem} \section{Applications} \subsection{The wave equation} We consider the following system \begin{equation*} \left\{ \begin{array}{lcr} u_{tt}(x,t)-u_{xx}(x,t)=0,\;\;x\in (0,\xi)\cup (\xi,1),\;t>0,\\\\ u(0,t)=0,\;\;\;u_x(1,t)=0,\;\;t>0,\\\\ u({\xi}_{-},t)=u({\xi}_{+},t),\;\;\;t>0,\\\\ u_{x}( {\xi}_{-},t)-u_{x}({\xi}_{+},t) = -\alpha u_t(\xi,t),\;\;\;t>0,\\\\ u(x,0)=u_0(x),\;\;u_t(x,0)=u_1(x),\;\;0<x<1,\end{array} \right.
\end{equation*} where $\xi\in (0,1)$ is a rational number whose irreducible fraction is $\xi=\frac{p}{q}$ with $p$ odd, and $\alpha$ is a positive constant.\\ To show that this system enters the abstract setting of this paper, let us recall that it is equivalent to:\\ $\dot{Z}=AZ-BB^*Z,$\;\; with\; $Z=\left(\begin{array}{l} u\\ v\\ \end{array}\right)$,\;\;$A=\left( \begin{array}{c} 0\;\;\;\;\;I \\ \partial_{xx}\;\;\;\;0 \end{array} \right).$\\ In this setting, $A$ is a skew-adjoint unbounded operator on the Hilbert space $X= V\times L^2(0,1)$, with domain $D(A)=H\times V$,\\ where $$V=\{u\in H^1(0,1):\; u(0)=0\},$$ and $$H=\{u\in H^2(0,1):\;u(0)=u_x(1)=0\}.$$ The operator $B$ is defined by: $B:\mathbb{R}\to D(A)':\; k\to \left(\begin{array}{l} 0\\ \sqrt{\alpha}k\delta_{\xi}\\ \end{array}\right),$\;\; where $\delta_{\xi}$ is the Dirac mass concentrated at the point $\xi$.\\ It is well known that, in the case where $\xi=\frac{p}{q}$ ($p$ odd), the energy of the system above decays exponentially, and the operators $A$ and $B$ defined above satisfy assumption (H) (see \cite{2}).\\ Then, we introduce the following time semi-discrete approximation scheme: \begin{equation}\label{38} \left \{ \begin{array}{lcr} \frac{\tilde{Z}^{k+1}-Z^k}{\Delta t} = A\left(\frac{Z^k+\tilde{Z}^{k+1}}{2}\right)-BB^*\left(\frac{Z^k+\tilde{Z}^{k+1}}{2}\right),\;k\in \mathbb{N}, & \\\\ \frac{Z^{k+1}-\tilde{Z}^{k+1}}{\Delta t}={(\Delta t)^2}{A^2}{Z^{k+1}} ,\;k\in\mathbb{N}, \\\\ Z^0=(u_0,v_0). \end{array} \right. \end{equation} According to Theorem 1.1, we get: \begin{teo} There exist positive constants $\mu_0$ and $\nu_0$ such that any solution of (\ref{38}) satisfies (\ref{1.1}) uniformly with respect to the discretization parameter $\Delta t>0$.
\end{teo} \subsection{One Euler-Bernoulli beam with interior damping} We consider the following initial and boundary value problem: \begin{equation*} \left\{ \begin{array}{lcr} u_{tt}(x,t)-u_{xxxx}(x,t)+\alpha u_t(\xi,t)\delta_{\xi}=0,\;\;0<x<1,\;t>0,\\\\ u(0,t)=u_x(1,t)=u_{xx}(0,t)=u_{xxx}(1,t)=0,\;\;t>0,\\\\ u(x,0)=u_0(x),\;\;u_t(x,0)=u_1(x),\end{array} \right. \end{equation*} where $\xi\in (0,1)$ is defined as above, and $\alpha>0$. Hence it is written in the form (\ref{1}) with the following choices: Take $Z$ as above, and $X=H\times L^2(0,1)$. The operator $A$ is defined by $$A=\left( \begin{array}{c} 0\;\;\;\;\;I \\ -\partial_{xxxx}\;\;\;\;0 \end{array} \right),$$ with domain $D(A)=V\times H$, where \\ $V=\{u\in H^4(0,1);\;u(0)=u_x(1)=u_{xx}(0)=u_{xxx}(1)=0\}$, and $H$ is defined as in the last subsection.\\ This operator is skew-adjoint on $X$. We now define the operator $B$ as: $$B:\mathbb{R}\to D(A)':\;\;k\to \left(\begin{array}{l} 0\\ \sqrt{\alpha}k\delta_{\xi}\\ \end{array}\right).$$ The energy of the system above decays exponentially, and hypothesis (H) has been verified (see \cite{3}).\\ As an application of Theorem 1.1, we get: \begin{teo} The solutions of \begin{equation*} \left \{ \begin{array}{lcr} \frac{\tilde{Z}^{k+1}-Z^k}{\Delta t} = A\left(\frac{Z^k+\tilde{Z}^{k+1}}{2}\right)-BB^*\left(\frac{Z^k+\tilde{Z}^{k+1}}{2}\right),\;\;k\in \mathbb{N}, & \\\\ \frac{Z^{k+1}-\tilde{Z}^{k+1}}{\Delta t}={(\Delta t)^2}{A^2}{Z^{k+1}},\;\;k\in\mathbb{N}, \\\\ Z^0=(u_0,v_0), \end{array} \right. \end{equation*} are exponentially uniformly decaying in the sense of (\ref{1.1}). \end{teo} \subsection{Dirichlet boundary stabilization of the wave equation} Let $\Omega\subset\mathbb{R}^n,\; n\geq 2$, be an open bounded domain with a sufficiently smooth boundary $\partial \Omega=\bar{\Gamma}_0\cup\bar{\Gamma}_1$, where $\Gamma_0$ and $\Gamma_1$ are disjoint parts of the boundary, relatively open in $\partial \Omega$, with $\mathrm{int}(\Gamma_0)\neq \emptyset$.
We consider the wave equation: \begin{equation*} \left \{ \begin{array}{lcr} u_{tt}-\Delta u=0,\;\;\Omega\times (0,+\infty),\\\\ u=\frac{\partial}{\partial\nu}(Gu_t),\;\;\Gamma_0\times(0,+\infty),\\\\ u=0,\;\;\;\Gamma_1\times (0,+\infty),\\\\ u(x,0)=u_0(x),\;\;u_t(x,0)=u_1(x),\;\; \Omega, \end{array} \right. \end{equation*}\\ where $\nu$ is the unit normal vector of $\partial \Omega$ pointing towards the exterior of $\Omega$ and $G=(-\Delta)^{-1}: H^{-1}(\Omega)\to H_0^1(\Omega)$.\\ Denote: $A=\left( \begin{array}{c} 0\;\;\;\;\;I \\ \Delta\;\;\;\;0 \end{array} \right)$, with $D(A)=H_0^1(\Omega)\times L^2(\Omega)$. In this setting, $A$ is a skew-adjoint unbounded operator on $X=L^2(\Omega)\times H^{-1}(\Omega)$. Moreover, define $$B\in\mathfrak{L}(L^2(\Gamma_0), D(A)'),$$ by $Bv=\left(\begin{array}{l} 0\\ -\Delta Dv \end{array}\right)$,\;$\forall\; v\in L^2(\Gamma_0)$, where $D$ is the Dirichlet map, i.e., $Df=g$ if and only if \[\left \{ \begin{array}{lcr} \Delta g=0,\;\;\Omega,\\\\ g=f,\;\Gamma_0,\;\;g=0,\;\Gamma_1. \end{array} \right.\] The energy of the system above decays exponentially, and hypothesis (H) has been verified (see \cite{1}). Applying Theorem 1.1, we obtain: \begin{teo} The solutions of \begin{equation*} \left \{ \begin{array}{lcr} \frac{\tilde{Z}^{k+1}-Z^k}{\Delta t} = A\left(\frac{Z^k+\tilde{Z}^{k+1}}{2}\right)-BB^*\left(\frac{Z^k+\tilde{Z}^{k+1}}{2}\right),\;\;k\in \mathbb{N}, & \\\\ \frac{Z^{k+1}-\tilde{Z}^{k+1}}{\Delta t}={(\Delta t)^2}{A^2}{Z^{k+1}},\;\;k\in\mathbb{N}, \\\\ Z^0=(u_0,v_0), \end{array} \right. \end{equation*} are exponentially uniformly decaying in the sense of (\ref{1.1}). \end{teo} \end{document}
\begin{document} \title{Multiqubit matter-wave interferometry under decoherence and the Heisenberg scaling recovery} \author{Yanming Che} \affiliation{Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou, Zhejiang 310027, China} \author{Jing Liu} \affiliation{MOE Key Laboratory of Fundamental Physical Quantities Measurement \& Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China} \author{Xiao-Ming Lu} \email{[email protected]} \affiliation{Department of Physics, Hangzhou Dianzi University, Hangzhou, Zhejiang 310018, China} \author{Xiaoguang Wang} \email{[email protected]} \affiliation{Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou, Zhejiang 310027, China} \date{\today} \begin{abstract} Most matter-wave interferometry (MWI) schemes for quantum sensing have so far been evaluated in ideal situations without noise. In this work, we provide assessments of generic multiqubit MWI schemes under Markovian dephasing noise. We find that for certain classes of MWI schemes with scale factors that depend nonlinearly on the interrogation time, the optimal precision of maximally entangled probes \emph{decreases} with increasing particle number $N$, for both independent and collective dephasing. This result challenges the conventional wisdom established for dephasing Ramsey-type interferometers. We initiate the analysis by investigating the optimal precision of multiqubit Sagnac atom interferometry for rotation sensing.
We show that, due to the competition between the unconventional interrogation-time-quadratic phase accumulation and the exponential dephasing processes, the Greenberger-Horne-Zeilinger (GHZ) state, which is the optimal input state in noiseless scenarios, leads to vanishing quantum Fisher information in the large-$N$ regime. Our assessments are then extended to generic MWI schemes for quantum sensing with entangled states and under decoherence. Finally, a quantum error-correction logical GHZ state is tentatively analyzed, which has the potential to recover the Heisenberg scaling and improve the sensitivity. \end{abstract} \maketitle \section{Introduction} \vspace*{-1.5ex} Matter-wave interferometry (MWI) is sensitive to inertial effects and has been widely used in quantum sensing of physical quantities, including gravitational force, acceleration, and rotation of reference frames~\cite{DegenRMP}. With quantum entanglement as a resource, quantum sensing is expected to achieve higher precision and sensitivity, e.g., the Heisenberg limit~\cite{GiovannettiScience2004,GiovannettiPRL2006}. Sagnac atom-interferometry gyroscopes (SAIGs) are quantum sensors for rotation frequency based on the Sagnac interferometry~\cite{BarrettCRPhys2014} of matter waves, where atoms are coherently split and controlled with waveguides (e.g., see Ref.~\cite{KasevichPRL1991}) to enclose a finite area in space and encode the rotation frequency into the Sagnac phase, which can finally be read out from the interference fringes~\cite{KasevichPRL1991,GustavsonPRL1997, BarrettCRPhys2014,KandesarXiv2013,TrubkoPRL2015,HelmPRL2015,HainePRL2016,HelmPRL2018}. Most schemes for SAIGs utilize both the wave nature and the spin degrees of freedom (hyperfine states) of atoms.
For example, a scheme with uncorrelated and trap-guided atomic clocks was proposed in Ref.~\cite{StevensonPRL2015}, and was later generalized to one with a multiparticle Greenberger-Horne-Zeilinger (GHZ) state to beat the standard quantum limit (SQL) in Ref.~\cite{LuoPRA2017}. So far, these proposed schemes have been considered in ideal situations where the sensing protocols consist of perfect unitary quantum channels. However, in realistic experiments, inevitable noises may cause errors which prevent the expected precision from being reached. In standard Ramsey interferometers for atomic clocks, where the transition frequency $\omega$ between two energy levels of atoms is measured, the phase accumulation is linear in the interrogation time while the dephasing caused by noises is exponential. The use of entangled states has been proved to give only a constant improvement of the ultimate precision, which still follows the SQL~\cite{HuelgaPRL1997,EscherNPhys2011,DemkowiczNC2012}. Several strategies and techniques have been proposed and used to protect the precision of atomic clocks from noises~\cite{TanPRA2013,LiuPRA2017,*LiuPRA2017MultiPara,KesslerPRL2014,DurPRL2014,UndenPRL2016}. Nevertheless, the evaluation and optimization of generic multiqubit MWI schemes for quantum sensing under decoherence still remain challenging. In this paper, we present an assessment of generic MWI schemes with maximally entangled states and under dephasing noises. Starting with the SAIG as a prototype, we analyze the competition between the unconventional phase accumulation and the exponential dephasing processes. We find that for certain classes of MWI schemes with scale factors that depend nonlinearly on the interrogation time, the optimal precision of maximally entangled probes \emph{decreases} with increasing particle number $N$, for both independent and collective dephasing.
These classes include most of the current mainstream MWI schemes with atomic clock states and certain time-dependently controlled Hamiltonian systems. Our findings challenge the conventional wisdom established for dephasing Ramsey-type interferometers~\cite{HuelgaPRL1997,EscherNPhys2011,DemkowiczNC2012}. This paper is organized as follows. In Sec.~\ref{sec:ModelandPhase} we model the matter-wave Sagnac interferometry with maximally entangled states and derive the multiparticle Sagnac phase. In Sec.~\ref{sec:SensitivityDecoherence} we evaluate the optimal sensitivity of Sagnac-type interferometers under local decoherence noises via the quantum Fisher information. In Sec.~\ref{sec:GenericMWI} we provide assessments of generic multiqubit MWI schemes with GHZ inputs and under independent Markovian dephasing noises. In Sec.~\ref{sec:CollectiveDephasing} the QFI of generic multiqubit MWI schemes with GHZ inputs under collective dephasing is provided. In Sec.~\ref{sec:QEC} the potential of recovering the Heisenberg scaling with quantum error-correction codes is presented. Finally, in Sec.~\ref{sec:Conclusion}, we conclude our work and give a further discussion of the minor enhancement by entanglement in the precision of Sagnac-type interferometers.
\vspace*{-1.5ex} \section{Matter-wave Sagnac interferometry with entangled states} \vspace*{-1.5ex} \label{sec:ModelandPhase} In order to sense the rotation frequency of a reference frame ${\cal{R}}$ rotating with angular velocity $\bf \Omega$ with respect to an inertial frame ${\cal{K}}$, $N$ two-state cold atoms~\cite{NumberNote} can be initially prepared in the GHZ state (e.g., via the nonlinear~\cite{WinelandPRA1996,LeibfriedScience2004,MolmerPRL1999,LeePRL2006,GietkaPRA2015,PezzePRL2009} or Rydberg blockade~\cite{SaffmanPRL2009} interactions with suitable coupling parameters), which is the optimal multiparticle input state for unitary quantum channels~\cite{PangPRA2014}, i.e., $|\psi_0\rangle=(|{\bf 0}\rangle+|{\bf 1}\rangle)/\sqrt{2}$, where $|{\bf 0}\rangle=\left|0\right\rangle ^{\otimes N}$ and $|{\bf 1}\rangle=\left|1\right\rangle ^{\otimes N}$, with $\left|0\right\rangle = \left|\uparrow\right\rangle \left(\left|1\right\rangle = \left|\downarrow\right\rangle\right)$ being the single-atom (pseudo)spin state and eigenstate of the Pauli matrix $\sigma_z$ with eigenvalue $+1$ ($-1$). Subsequently, the $|{\bf 0}\rangle$ and $|{\bf 1}\rangle$ components are coherently split by a beam splitter which establishes a state-path entanglement, and then are guided to transport along two distinct paths in real space~\cite{BordePLA1989}, within an interrogation time $\tau$, and are finally recombined at $t=\tau$. The state-path entanglement associates the phase shift between the two interferometer paths (Sagnac phase) with the relative phase of the two atomic states (qubit phase), which can be read out from atomic spectroscopy, e.g., a parity measurement~\cite{WinelandPRA1996,GerryPRA2010,LuoPRA2017}, after applying a $\pi/2$ pulse.
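The split, phase-accumulation, and parity-readout sequence just described can be mimicked in a minimal state-vector sketch. The explicit $2^N$-dimensional construction below, with hypothetical values of $N$ and of the accumulated single-qubit phase $\phi$, is illustrative only and is not the laboratory protocol: it checks that a GHZ probe turns a single-qubit phase $\phi$ into a relative phase $N\phi$, so the parity signal $\langle\sigma_x^{\otimes N}\rangle=\cos(N\phi)$ exhibits the $N$-fold amplification.

```python
import numpy as np
from functools import reduce

def ghz(N):
    """(|0...0> + |1...1>)/sqrt(2) as a 2^N-dimensional state vector."""
    psi = np.zeros(2**N, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def apply_phase(psi, N, phi):
    """Apply diag(e^{i phi}, 1) to every qubit (phase on |0>)."""
    U1 = np.diag([np.exp(1j * phi), 1.0])
    U = reduce(np.kron, [U1] * N)
    return U @ psi

def parity(psi, N):
    """Expectation of sigma_x^{tensor N} (parity readout)."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    P = reduce(np.kron, [X] * N)
    return np.vdot(psi, P @ psi).real

N, phi = 4, 0.3                     # hypothetical values for illustration
psi = apply_phase(ghz(N), N, phi)
print(parity(psi, N))               # equals cos(N*phi): N-fold amplification
```

The steepened fringe $\cos(N\phi)$ is what allows the GHZ probe to saturate the Heisenberg limit in the noiseless case.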
\emph{Model.---}We assume that the $N$ two-state bosonic atoms are in the Bose-Einstein condensed (BEC) state, which is described by the mean-field wave function (order parameter) $\Psi_{\xi}\left({\bf r}, t\right)$ for the two split components $\left|\xi\right\rangle = \left|0\right\rangle$ and $\left|\xi\right\rangle = \left|1\right\rangle$, respectively. The waveguide is provided by a ring trap of toroidal geometry~\cite{HalkyardPRA2010,KandesarXiv2013,HelmPRL2015,HainePRL2016}, with a trapping potential in cylindrical coordinates $\left\{r, \theta, z\right\}$ of the form $V_{\mathrm{trap}}\left({\bf r}, t\right)=\frac{1}{2}m \left[\omega_r^2\left(r-R\right)^2+\omega_{\theta}^2 R^2 \theta^2 \Theta(-t) +\omega_z^2 z^2\right]$~\cite{KandesarXiv2013,HainePRL2016}, where $m$ is the particle mass and ($\omega_r, \omega_{\theta}, \omega_z$) are the respective (radial, angular, axial) trapping frequencies, and $\Theta(-t)$ and $R$ are the Heaviside step function and the radius of the circular interferometer, respectively. See Fig.~\ref{fig:SI} for a schematic illustration, where we assume ${\bf{\Omega}}=\Omega{\bf{z}}$. When the radial and axial trapping confinements are sufficiently tight, the dynamics along these directions is frozen, and the time evolution ($t \ge 0$) of the order parameter in the rotating frame ${\cal{R}}$ is given by the one-dimensional Gross-Pitaevskii (GP) equation~\cite{DalfovoRMP1999} \begin{equation} \label{eq:GPE} i\hbar \frac{\partial}{\partial t}\Psi_{\xi}\left(\theta, t\right) = H_{\xi}\Psi_{\xi}\left(\theta, t\right), \end{equation} with the mean-field Hamiltonian \begin{equation} \label{eq:GPEHamiltonian} H_{\xi} = \frac{\hat{L}_z^2}{2mR^2} + {\cal U} \left| \Psi_{\xi}\left(\theta, t\right)\right|^2 - \Omega \hat{L}_z, \end{equation} where $\hat{L}_z=-i\hbar \frac{\partial}{\partial \theta}$ is the axial angular momentum operator and ${\cal U}$ is the contact interaction strength.
\begin{figure} \caption{(Color online) Schematic matter-wave Sagnac interferometer for rotation sensing with entangled GHZ input of BEC atoms and circular waveguides of radius $R$ in the $x$-$y$ plane, observed from the inertial frame ${\cal{K}}$.} \label{fig:SI} \end{figure} For the ${\cal U}=0$ case, the time evolution operator for the $i$th particle reads \begin{equation} \label{eq:Uoperator} \hat{U}_i\left(t\right)=\mathrm{exp}\left(\Omega t \frac{\partial} {\partial \theta_i}\right)\mathrm{exp}\left[\frac{i \hbar t}{2mR^2} \frac{\partial^2}{\partial \theta_i^2}\right]\otimes {\cal{I}}_2, \end{equation} where ${\cal{I}}_2$ is the two-dimensional identity matrix. The trapping potential along the angular direction is $V_{\mathrm{trap}}\left(\theta, t\right)=\frac{1}{2}m\omega_{\theta}^2 R^2\theta^2 \Theta(-t)$, and we assume that, for both components, the initial mean-field wave function at $t=0$ is a Gaussian wave packet, i.e., the ground state of the harmonic trap, $\Psi\left(\theta, 0\right)=\left(\frac{1}{\sqrt{\pi}\sigma}\right)^{\frac{1}{2}} \mathrm{exp}\left\{-\frac{\left[\theta-\theta(0)\right]^2}{2\sigma^2}\right\}$ for $\theta \in \left[\theta(0)-\pi, \theta(0)+\pi\right]$, where $\theta(0)=0$ and $\sigma=\sqrt{\hbar/\left(m\omega_{\theta}\right)}/R \ll \pi$ are the initial center and the width of the wave packet, respectively. Due to the periodicity of the $\theta$ coordinate, the wave function outside this interval can be defined via $\Psi\left(\theta+2J\pi, 0\right)=\Psi\left(\theta, 0\right)$, with $J$ being an integer.
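The free evolution in Eq.~(\ref{eq:Uoperator}) is diagonal in the angular Fourier basis $e^{i\ell\theta}$: mode $\ell$ is multiplied by $e^{i\Omega t\ell}\,e^{-i\hbar t\ell^{2}/(2mR^{2})}$, i.e., a rigid rotation by $-\Omega t$ composed with kinetic dispersion. A minimal single-particle numerical sketch (units $\hbar=m=R=1$; the values of $\Omega$, $t$, $\sigma$, and the grid size are illustrative) propagates the Gaussian packet on the ring and checks that the norm is conserved while the packet center drifts to $-\Omega t$:

```python
import numpy as np

hbar = m = R = 1.0
Omega, t = 0.1, 1.0                           # illustrative rotation rate and time
Ng = 256
theta = np.linspace(-np.pi, np.pi, Ng, endpoint=False)
dtheta = 2 * np.pi / Ng

sigma = 0.3                                   # illustrative wave-packet width
psi0 = (1 / (np.sqrt(np.pi) * sigma))**0.5 * np.exp(-theta**2 / (2 * sigma**2))

# Angular Fourier evolution: mode l picks up exp(i Omega t l - i hbar t l^2/(2 m R^2)).
l = 2 * np.pi * np.fft.fftfreq(Ng, d=dtheta)  # integer wavenumbers on the ring
phase = np.exp(1j * Omega * t * l - 1j * hbar * t * l**2 / (2 * m * R**2))
psit = np.fft.ifft(phase * np.fft.fft(psi0))

norm = np.sum(np.abs(psit)**2) * dtheta       # unitarity: remains 1
center = np.angle(np.sum(np.abs(psit)**2 * np.exp(1j * theta)) * dtheta)
print(norm, center)                           # circular mean drifts to -Omega*t
```

The kinetic factor only spreads the packet symmetrically, so the circular mean of the density tracks the rigid rotation; the kick $\hat K(v)$, which sends the two spin components around the ring in opposite directions, is not included in this one-component sketch.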
The initial multiqubit GHZ state is given by \begin{equation} \label{eq:InitialState} \left|\tilde{\psi}\left(\theta_1, \theta_2, ..., \theta_N; 0 \right) \right\rangle = \frac{1}{\sqrt{2}}\prod_{i=1}^N \Psi\left(\theta_i, 0\right) \left(\left|{\bf 0} \right\rangle + \left|{\bf 1}\right\rangle \right), \end{equation} for which the normalization condition is $1 = \int \mathrm{d}\theta_1 \mathrm{d}\theta_2 \cdots \mathrm{d}\theta_N \left \langle\tilde{\psi}\left(\theta_1, \theta_2, ..., \theta_N; 0 \right) | \tilde{\psi} \left(\theta_1, \theta_2, ..., \theta_N; 0\right) \right\rangle$. The interferometer is launched at $t=0$ by kicking the two components with group velocities $\pm v$, respectively, as in Refs.~\cite{KandesarXiv2013,HainePRL2016}. The kicking operator reads $\hat{K}(v)=\mathrm{exp}\left(\frac{i}{\hbar}L_k\sum_{j=1}^N \theta_j \sigma_{jz}\right)$, which plays the role of a beam splitter, with $L_k=mRv$ being the kicking angular momentum and $\sigma_{jz}$ being the Pauli $Z$ matrix of the $j$th particle. Finally, at time $t=\tau$, when the two components are recombined for the first time~\cite{TimeNote}, the full quantum state is given by \begin{eqnarray} \label{eq:ReadoutState} \left|\tilde{\psi}\left(\theta_1, \theta_2, ..., \theta_N; \tau \right) \right\rangle = \hat{K}^{\dagger}(v) \bigotimes_{i=1}^N \hat{U}_i\left(\tau\right) \hat{K}(v) \nonumber \\ \left|\tilde{\psi}\left(\theta_1, \theta_2, ..., \theta_N; 0 \right) \right\rangle, \end{eqnarray} where $\hat{U}_i\left(\tau\right)$ is the time evolution operator for the $i$th qubit at $t=\tau$, which is given by Eq.~(\ref{eq:Uoperator}). Note that, for the Sagnac phase to be well defined, we have first assumed the interaction ${\cal U}=0$ during the interrogation. 
Consequently, the quantum state in the spin subspace after tracing out the orbital degrees of freedom related to $\Psi_{\xi}\left(\theta, \tau\right)$ is given by $|\psi(\tau)\rangle=\left(\mathrm{e}^{i\phi_S}|{\bf 0}\rangle+|{\bf 1}\rangle\right)/\sqrt{2}$ (up to a global phase factor), where (see Appendix~\ref{apped:Sagnacphase} for detailed derivations) \begin{eqnarray} \label{eq:SagnacPhase} \phi_S = \beta N \Omega \tau^2 \end{eqnarray} is the multiparticle Sagnac phase, with $\beta=2mv^2/\left(\pi \hbar\right)$. This expression for $\phi_S$ is equivalent to $N$ times the well-known single-particle Sagnac phase $2m\Omega A/\hbar$, where $A=\pi R^2$ is the area of the Sagnac interferometer, and for constant $v$ we have $A= v^2 \tau^2/\pi$. As a result, a Sagnac pure phase gate, a unitary operation mapping the initial state of the qubits to the readout state, is constructed, which in the GHZ subspace spanned by $\left\{|{\bf 0}\rangle, |{\bf 1}\rangle\right\}$ reads \begin{eqnarray} \label{eq:PhaseGate} U(\phi_S) = \mathrm{diag}\left[\mathrm{e}^{i\phi_S}, 1\right], \end{eqnarray} and the rotation frequency can be extracted from the interference signal of the final state. Following standard quantum metrological protocols, the above Sagnac phase encoding and rotation frequency readout processes are repeated $\nu=T/\tau$ times to reach a high precision, where $T$ is the total resource time (see Fig.~\ref{fig:SI}). The standard deviation $\delta \hat{\Omega}$ for any unbiased estimator $\hat{\Omega}$ of the rotation frequency is bounded from below by the quantum Cram\'er-Rao bound (QCRB)~\cite{Helstrom,Holevo}, $\delta \hat{\Omega} \ge 1/\sqrt{\nu F}$, where $F$ is the quantum Fisher information (QFI) with respect to $\Omega$ for the readout state, which is an effective theoretical tool for assessing the performance of various interferometry schemes for quantum sensing~\cite{HainePRL2016}. 
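As a quick consistency check (a minimal numerical sketch, not part of the original analysis; all parameter values below are illustrative placeholders), one can verify that $\phi_S=\beta N\Omega\tau^2$ with $\beta=2mv^2/(\pi\hbar)$ indeed reduces to $N$ times the single-particle Sagnac phase $2m\Omega A/\hbar$ when $A=v^2\tau^2/\pi=\pi R^2$:

```python
import math

# Illustrative parameters (placeholders, not from the paper)
hbar = 1.054571817e-34   # J s
m = 1.443e-25            # kg, roughly one Rb-87 atom
v = 1.0e-3               # m/s, kicked group speed
R = 1.0e-4               # m, ring radius
Omega = 0.1              # rad/s, rotation frequency
N = 100                  # number of qubits

tau = math.pi * R / v               # recombination time (half revolution each way)
beta = 2 * m * v**2 / (math.pi * hbar)
phi_S = beta * N * Omega * tau**2   # multiparticle Sagnac phase, Eq. (SagnacPhase)

A = v**2 * tau**2 / math.pi         # enclosed area; equals pi*R^2 for tau = pi*R/v
phi_single = 2 * m * Omega * A / hbar

assert math.isclose(A, math.pi * R**2)
assert math.isclose(phi_S, N * phi_single)
```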
Equivalently, we have $\delta \hat{\Omega} \sqrt{T} \ge 1/\sqrt{F/\tau}$. For a more detailed introduction to quantum sensing and the QFI, see Appendix~\ref{apped:QFI}. From Eq.~(\ref{eq:SagnacPhase}), we see that the scale factor $\cal{S}$ of the interferometer is ${\cal{S}} \propto N \tau^2$, and the noiseless optimal QFI is~\cite{LuoPRA2017} \begin{eqnarray} \label{eq:NoiselessQFI} F_0=\left(\partial_{\Omega} \phi_S\right)^2=\left(2mNA/\hbar\right)^2, \end{eqnarray} which achieves the Heisenberg scaling and increases monotonically with the area of the interferometer. \vspace*{-1.5ex} \section{Optimal sensitivity under independent decoherence} \vspace*{-1.5ex} \label{sec:SensitivityDecoherence} In the previous derivation of the multiqubit Sagnac phase, we neglected the interaction and local field fluctuations. Now we consider the qubit dephasing arising from such effects, which cannot be neglected in a realistic system~\cite{InteractionNote}. For shortcomings of the mean-field analysis in MWI with interaction effects, see, e.g., Refs.~\cite{AghamalyanNJP2015,DasPRA2015}. Atom-atom collisions and local fluctuations may cause a random shift $\delta \omega(t)$ of the energy difference for each qubit, which can be formulated as a Gaussian random process with zero mean and the correlation function~\cite{ScullyBook,SzankowskiPRA2014} $\left \langle \delta \omega(t) \delta \omega(t^{\prime}) \right \rangle=2\gamma \delta(t-t^{\prime})$, where $\delta(t-t^{\prime})$ is the Dirac delta function. The ensemble average leads to the exponential dephasing of a single qubit~\cite{ScullyBook,PuriPRA1992,TWChenPRA2003}. 
Considering the dephasing of the $N$-qubit system, the master equation for the state $\varrho(t)$ in the phase-covariant frame can be written as~\cite{DephasingNote} \begin{eqnarray} \label{eq:Master} \frac{\mathrm{d}\varrho(t)}{\mathrm{d}t}=\frac{\gamma}{2}\sum_{i=1}^N \left[\sigma_{iz}\varrho(t)\sigma_{iz}-\varrho(t)\right], \end{eqnarray} where $\gamma > 0$ is the dephasing strength. \begin{figure} \caption{(Color online) The QFI $F/\tau$ (in units of $\beta^2/\gamma^3$) of Sagnac interferometers as functions of $\gamma \tau$ for increasing qubit number $N$ with (a) the GHZ probe and (b) uncorrelated qubits, where SQL represents the standard quantum limit and $F_{SQL}$ is the QFI for uncorrelated qubits.} \label{fig:QFI} \end{figure} \begin{table*}[t] \centering \caption{Comparison between Ramsey and Sagnac interferometers with GHZ probe states} \begin{tabular}{c c c c c c c c c} \hline \hline Interferometers & Quantity & $H_{\mathrm{single}}$ & Phase & Noiseless $F /\tau$ & Noiseless $\tau_{opt}$ & Dephasing & Noisy $\left(F /\tau\right)_{opt}$ & Noisy $\tau_{opt}$ \\ \hline Ramsey & $\omega$ & $\hbar \omega \sigma_z/2$ &$N \omega \tau$ & ${\cal{O}}\left(N^2\right)$ & $T$ & $e^{-N\gamma \tau}$ & ${\cal{O}}\left(N\right)$~\cite{HuelgaPRL1997,EscherNPhys2011,DemkowiczNC2012} & $1/\left(2N\gamma\right)$ \\ \hline Sagnac & $\Omega$ & $-\Omega \hat{L}_z$ & $\beta N \Omega \tau^2$ &${\cal{O}}\left(N^2\right)$~\cite{LuoPRA2017} & $T$ & $e^{-N\gamma \tau}$ & ${\cal{O}}\left(N^{-1}\right)$ & $3/\left(2N\gamma\right)$ \\ \hline \hline \end{tabular} \label{tb:RamseySagnac} \end{table*} The completely positive and trace preserving (CPTP) map ${\cal E}$, which is a solution of Eq.~(\ref{eq:Master}) and maps $\varrho_0$ to $\varrho(\tau)$, is $\varrho(\tau)={\cal E}\left(\varrho_0\right)=\bigotimes_{i=1}^{N}{\cal E}_{i}\left(\varrho_0\right)$, where $\varrho_0=\rho_0=|\psi_0\rangle\langle\psi_0|$ is the initial state and ${\cal E}_{i}(\varrho_0)=E_{i0}\varrho_0E_{i0}^{\dagger}+E_{i1}\varrho_0E_{i1}^{\dagger}$ is 
the local phase-flip error operator for the $i$th qubit, with $E_{i0}=\sqrt{1-p(\tau)}{\cal{I}}_2$ and $E_{i1}=\sqrt{p(\tau)}\sigma_{iz}$ being the Kraus operators, where $p(\tau)=\left(1-e^{-\gamma \tau}\right)/2$ is the single-qubit error probability. Then it is straightforward to obtain the readout state~\cite{ReadoutNotes}, $\rho(\tau) = {\cal E}\left[U(\phi_S)\rho_0 U^{\dagger}(\phi_S)\right] = \left[|{\bf 0}\rangle\langle{\bf 0}|+|{\bf 1} \rangle\langle{\bf 1}|+(e^{-N \gamma \tau}e^{i\phi_S} |{\bf 0}\rangle\langle {\bf 1}|+\mathrm{h.c.})\right]/2$, where $\phi_S$ is given by Eq.~(\ref{eq:SagnacPhase}). The QFI with respect to $\Omega$ for this state is (see Appendix~\ref{apped:QFIReadOut}) \begin{eqnarray} \label{eq:QFIwithNoise} F = \beta^2 N^2 \tau^4 e^{-2N \gamma \tau}. \end{eqnarray} Note that here $F$ is the interrogation-time dependent QFI at the point where the Sagnac phase accumulation is accomplished, which is actually $F_S$ in Ref.~\cite{HainePRL2016}. From Eq.~(\ref{eq:QFIwithNoise}) we see that the optimal interrogation time and the Sagnac area $A$ are constrained by decoherence in noisy scenarios. The precision bound for rotation sensing is determined by $F/\tau$, and its optimized value over the interrogation time is given by \begin{eqnarray} \label{eq:QFI-opt} \left(F/\tau\right)_{opt}=\beta^2\left(\frac{3}{2\gamma e}\right)^3 \frac{1}{N}, \end{eqnarray} with $\tau_{opt}=3/(2N\gamma)$~\cite{MarkovianNote}. Therefore, the maximally entangled state reduces the precision with increasing $N$ even under uncorrelated dephasing, a completely new result in contrast to what was found in Ramsey-type interferometers~\cite{HuelgaPRL1997,EscherNPhys2011,DemkowiczNC2012}. On the other hand, the QFI for uncorrelated qubits is given by $F_{SQL}= N \beta^2 \tau^4 e^{-2 \gamma \tau}$, and is proportional to $N$ for any value of $\tau$. 
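The optimization leading to Eq.~(\ref{eq:QFI-opt}) can be checked directly: maximizing $F/\tau = \beta^2 N^2 \tau^3 e^{-2N\gamma\tau}$ over $\tau$ gives $\tau_{opt}=3/(2N\gamma)$. The following numerical sketch (illustrative parameter values, not the paper's) confirms the analytic optimum on a grid:

```python
import math

# Illustrative parameters (placeholders)
beta, gamma, N = 1.0, 0.1, 5

def f_over_tau(tau):
    # F/tau = beta^2 N^2 tau^3 exp(-2 N gamma tau), cf. Eq. (QFIwithNoise)
    return beta**2 * N**2 * tau**3 * math.exp(-2 * N * gamma * tau)

# brute-force grid search over the interrogation time
taus = [1e-4 * k for k in range(1, 200000)]
tau_num = max(taus, key=f_over_tau)

# analytic optimum, Eq. (QFI-opt)
tau_opt = 3 / (2 * N * gamma)
f_opt = beta**2 * (3 / (2 * gamma * math.e))**3 / N

assert abs(tau_num - tau_opt) < 1e-3
assert math.isclose(f_over_tau(tau_opt), f_opt, rel_tol=1e-12)
```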
Shown in Fig.~\ref{fig:QFI} is the interrogation-time normalized $F$ for increasing qubit number $N$ with (a) the GHZ probe and (b) uncorrelated qubits, respectively. For a given value of $N$ and increasing $\tau$, the power-law $\tau^3$ behavior dominates at the beginning, while the exponential prevails after a maximum is reached. While the SQL is achieved in Fig.~\ref{fig:QFI}(b) with uncorrelated qubits, in contrast, the QFI curves shrink with increasing $N$ in Fig.~\ref{fig:QFI}(a) for the GHZ probe. The physical reason is that $\phi_S\left(\tau=\tau_{opt}\right) \propto N^{-1}$, such that the accumulated phase signal is weakened with increasing $N$. \vspace*{-1.5ex} \section{Assessments of generic multiqubit MWI schemes} \vspace*{-1.5ex} \label{sec:GenericMWI} The above results can be generalized to more generic MWI schemes for quantum sensing with entangled states and under independent dephasing, which were previously only considered in single-qubit or noiseless scenarios. In general, for GHZ probes in the presence of independent dephasing, if the accumulated phase is $\phi_{\chi} (\tau) \propto N \chi \tau^{\lambda}$, with $\chi$ being the physical quantity to be sensed and $\lambda > 0$ the time exponent of the scale factor, then the optimal QFI with respect to $\chi$ is \begin{eqnarray} \left(F_{\chi}/\tau\right)_{opt}={\cal{O}}\left(N^{3-2\lambda}\right), \end{eqnarray} with $\tau_{opt}=(2\lambda-1)/(2N\gamma)$ (see Appendix~\ref{apped:QFIReadOut}). Therefore, we may conclude that the Heisenberg scaling is actually inaccessible, because the condition $\tau_{opt} > 0$ requires $\lambda > 1/2$. For $\lambda \ge 1$, the best QFI that can be achieved is $\left(F_{\chi}/\tau\right)_{opt}={\cal{O}}(N)$ (SQL) with the $\lambda=1$ class~\cite{HuelgaPRL1997,EscherNPhys2011,DemkowiczNC2012}. For classes with $\lambda \ge 2$, the entangled probes could reduce the precision with increasing particle number. 
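The generic scaling $\left(F_{\chi}/\tau\right)_{opt}={\cal{O}}(N^{3-2\lambda})$ can be verified numerically by evaluating the optimum at $\tau_{opt}=(2\lambda-1)/(2N\gamma)$ and checking how it changes when $N$ is doubled. A minimal sketch (illustrative $\gamma$, dimensionless units):

```python
import math

gamma = 0.1  # illustrative dephasing strength

def opt_value(N, lam):
    # F_chi/tau ~ N^2 tau^(2*lam-1) exp(-2 N gamma tau), evaluated at tau_opt
    tau_opt = (2 * lam - 1) / (2 * N * gamma)
    return N**2 * tau_opt**(2 * lam - 1) * math.exp(-2 * N * gamma * tau_opt)

for lam in (1, 2, 3):
    # scaling O(N^(3-2*lam)): doubling N changes the optimum by 2^(3-2*lam)
    ratio = opt_value(200, lam) / opt_value(100, lam)
    assert math.isclose(ratio, 2.0**(3 - 2 * lam), rel_tol=1e-12)
```

For $\lambda=1$ the ratio is $2$ (SQL-type growth), while for $\lambda=2$ it is $1/2$: the optimum shrinks with particle number, as stated in the text.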
Many of the current mainstream MWI schemes with atomic clock states belong to the $\lambda=2$ class (without entanglement), and so do the proposed schemes in Refs.~\cite{PangNC2017,GefenPRA2017} with time-dependently controlled Hamiltonians. Examples include the atom gravimetry considered in Refs.~\cite{KasevichPRL1991,KasevichAPB1992,SchleichPRL2013,SchleichNJP2013,KritsotakisarXiv2017}, with the single-qubit phase $\phi_{{\bf g}}(\tau)={\bf{k}}_0 \cdot {\bf g} \left(\tau/2\right)^2$, and the atom free-propagation Sagnac interferometers in Refs.~\cite{GustavsonPRL1997,GustavsonCQG2000,DurfeePRL2006,BarrettCRPhys2014}, with the encoded phase $\phi_{{\bf \Omega}}(\tau)={\bf{k}}_0 \cdot \left({\bf{v}}_0 \times {\bf{\Omega}}\right) \tau^2/2$, where $\bf g$ is the gravitational acceleration, and ${\bf{v}}_0$ and ${\bf{k}}_0$ are the semiclassical atomic velocity and the effective Raman propagation vector, respectively. In Table~\ref{tb:RamseySagnac} we give a comparison between standard Ramsey interferometers for atomic clocks ($\lambda=1$) and Sagnac interferometers for rotation sensing ($\lambda=2$)~\cite{SagnacNote}, with GHZ input states, where $H_{\mathrm{single}}$ denotes the single-particle sensing Hamiltonian. \vspace*{-1.5ex} \section{QFI in the presence of collective dephasing} \vspace*{-1.5ex} \label{sec:CollectiveDephasing} Closely spaced atoms in a Bose-Einstein condensate may collectively couple to the external-field fluctuations~\cite{FerriniPRA2011,DornerNJP2012,ZhongPRA2013,SzankowskiPRA2014}. Here we give the scaling behavior of the QFI in the presence of collective dephasing for matter-wave interferometers with a GHZ probe and different scale factors. 
The master equation for the state $\varrho(t)$ in the phase-covariant frame in the presence of collective dephasing is given by~\cite{DornerNJP2012,ZhongPRA2013} \begin{eqnarray} \label{eq:SMaster} \frac{\mathrm{d}\varrho(t)}{\mathrm{d}t}=\Gamma \left[2J_z\varrho(t)J_z-J_z^2\varrho(t)-\varrho(t)J_z^2\right], \end{eqnarray} where $\Gamma > 0$ is the collective dephasing strength and $J_z=\sum_{i=1}^N \sigma_{iz}/2$ is the third component of the collective spin operator. For the GHZ state, the readout state at time $t=\tau$ can be analytically calculated in a similar way, which is given by \begin{equation} \rho(\tau)=\frac{1}{2}\left[|{\bf 0}\rangle\langle{\bf 0}|+|{\bf 1} \rangle\langle{\bf 1}|+(e^{-N^2 \Gamma \tau}e^{i\phi(\tau)} |{\bf 0}\rangle\langle {\bf 1}|+\mathrm{h.c.})\right], \end{equation} where $\phi$ is the multiparticle interferometer phase $\phi(\tau) \propto N \chi \tau^{\lambda}$ with the scale factor ${\cal{S}} \propto N \tau^{\lambda}$. The QFI with respect to $\chi$ for this state can be calculated with the same method as in Appendix~\ref{apped:QFIReadOut} and is given by \begin{eqnarray} {\cal{F}}_{\chi} &=& \left[\frac{\partial \phi (\tau)}{\partial \chi}\right]^2 e^{-2N^2 \Gamma \tau} = {\cal{S}}^2 e^{-2N^2 \Gamma \tau} \nonumber \\ &\propto& N^2 \tau^{2\lambda} e^{-2N^2 \Gamma \tau}. \end{eqnarray} The interrogation-time optimized value of ${\cal{F}}_{\chi}/\tau$ is \begin{equation} \left({\cal{F}}_{\chi}/\tau\right)_{opt} \propto N^{4\left(1-\lambda\right)} \left( \frac{2\lambda-1}{2\Gamma e}\right)^{2\lambda-1}, \end{equation} with $\tau_{opt}=(2\lambda-1)/(2N^2 \Gamma)$. So for Ramsey-type interferometers ($\lambda =1$), the optimal QFI is $\left({\cal{F}}_{\chi}/\tau\right)_{opt}=1/\left(2\Gamma e\right) = \mathrm{const}$~\cite{DornerNJP2012,ZhongPRA2013,SzankowskiPRA2014} and is independent of $N$. 
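The contrast between the two classes under collective dephasing is easy to confirm numerically. A minimal sketch (illustrative $\Gamma$, dimensionless units) evaluating ${\cal{F}}_{\chi}/\tau$ at $\tau_{opt}=(2\lambda-1)/(2N^2\Gamma)$:

```python
import math

Gamma = 0.05  # illustrative collective dephasing strength

def opt_value(N, lam):
    # F_chi/tau ~ N^2 tau^(2*lam-1) exp(-2 N^2 Gamma tau), at the optimal tau
    tau_opt = (2 * lam - 1) / (2 * N**2 * Gamma)
    return N**2 * tau_opt**(2 * lam - 1) * math.exp(-2 * N**2 * Gamma * tau_opt)

# Ramsey class (lam = 1): optimum equals 1/(2 Gamma e), independent of N
assert math.isclose(opt_value(10, 1), 1 / (2 * Gamma * math.e), rel_tol=1e-12)
assert math.isclose(opt_value(50, 1), opt_value(10, 1), rel_tol=1e-12)

# Sagnac class (lam = 2): doubling N reduces the optimum by 2^-4
assert math.isclose(opt_value(20, 2) / opt_value(10, 2), 2.0**-4, rel_tol=1e-12)
```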
While for Sagnac-type interferometers with $\lambda =2$, the decoherence time is $\tau_{opt} \propto N^{-2}$ and $\left({\cal{F}}_{\chi}/\tau\right)_{opt} \propto N^{-4} \left[3/\left(2\Gamma e\right)\right]^3$, which decreases very rapidly with increasing particle number $N$. This is in stark contrast to the constant precision of Ramsey-type interferometers~\cite{DornerNJP2012,ZhongPRA2013,SzankowskiPRA2014}. \vspace*{-1.5ex} \section{Recovering the Heisenberg scaling with quantum error correction} \vspace*{-1.5ex} \label{sec:QEC} To tentatively recover the Heisenberg scaling and improve the sensitivity, we theoretically explore in the following the potential of quantum error-correction (QEC) codes for MWI schemes under local dephasing. QEC codes have been realized in experiments for quantum computation~\cite{SchindlerScience2011,WaldherrNature2014,OfekNature2016,UndenPRL2016} and have been proposed for quantum metrology~\cite{DurPRL2014,KesslerPRL2014,ArradPRL2014,LuNC2015,OzeriArxiv2013,ReiterNC2017,DemkowiczPRX2017,ZhouNC2018}. Here we analyze a QEC scheme with logical GHZ states proposed in Refs.~\cite{DurPRL2014,LuNC2015}, which utilizes redundant qubits to suppress phase-flip errors, with a possible application in Sagnac atom interferometers. As in Refs.~\cite{DurPRL2014,LuNC2015}, with $n$ physical qubits in each logical block, the error probability $p(\tau)$ is exponentially suppressed by replacing the raw GHZ state with a logical one~\cite{DurPRL2014,LuNC2015}, where the coding space $\cal{C(G)}$ is stabilized by the stabilizer group $\cal{G}$. 
The $n$-qubit ($n$ is odd) phase-flip code is defined as ${\cal{C}}_n =\left\{|0\rangle_L, |1\rangle_L\right\}$, where $|0\rangle_L =\left(|+\rangle^{\otimes n}+|-\rangle^{\otimes n}\right)/\sqrt{2}$ and $|1\rangle_L =\left(|+\rangle^{\otimes n}-|-\rangle^{\otimes n}\right)/\sqrt{2}$ are the bases for each logical qubit block, with $|+\rangle$ ($|-\rangle$) being the eigenstate of the Pauli matrix $\sigma_x$ with eigenvalue $+1$ ($-1$). The above code is stabilized by the operator $X_{\alpha}=\prod_{i \in \alpha} \sigma_{ix}$, with $\alpha \subset \{1,2,3,\ldots,n\}$ and $|\alpha|=\mathrm{even}$, and is capable of correcting $(n-1)/2$ phase-flip errors $\{\sigma_{iz}\}$~\cite{LuNC2015,DurPRL2014}. With $N$ total physical qubits as resources, the number of logical qubits is $N/n$. Furthermore, the error probability is renormalized to the logical level as \begin{equation} p_L(\tau)=\sum_{k=0}^{(n-1)/2} \dbinom{n}{k} p^{n-k}(\tau)\left[1-p(\tau)\right]^k. \end{equation} The QFI can be rewritten in terms of $p(\tau)$ using $e^{-\gamma \tau}=1-2p(\tau)$, and the logical QFI in terms of $p_L(\tau)$ is given by \begin{eqnarray} F_L &=& \left(n\beta\right)^2 \left(N/n\right)^2 \tau^4 \left[1-2p_L(\tau)\right]^{2N/n} \nonumber \\ &=& \beta^2 N^2 \tau^4 \left[1-2p_L(\tau)\right]^{2N/n}. \end{eqnarray} Now the quantum Cram\'er-Rao bound for the rotation frequency sensing is $\delta \Omega \sqrt{T}=1/\sqrt{F_L/\tau}$. \begin{figure} \caption{(Color online) The QCRB $\delta \Omega \sqrt{T}$ for rotation sensing with logical GHZ states: (a) versus $\gamma \tau$ for total qubit number $N=15$ and increasing block size $n$; (b) versus $N$ for $\gamma \tau=0.1$.} \label{fig:QCRB} \end{figure} Plotted in Fig.~\ref{fig:QCRB}(a) is $\delta \Omega \sqrt{T}$ vs $\gamma \tau$ for a given total qubit number $N=15$ and increasing qubit number $n$ in each logical block. With small $p(\tau)$, the optimal interrogation time which minimizes the precision bound is $\tau_{opt}=3/\left(2N\gamma_{eff}\right)$, as in Eq.~(\ref{eq:QFI-opt}), with the effective dephasing strength $\gamma_{eff}=\gamma {\cal{O}}\left[p^{(n-1)/2}(\tau_{opt})\right]$. 
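The logical error probability and logical QFI above can be evaluated directly. A minimal sketch (illustrative values $\gamma\tau=0.1$, $\beta=\tau=1$) confirming that larger code blocks suppress $p_L$ and increase $F_L$ at fixed total qubit number:

```python
import math

def p_L(p, n):
    # logical error probability of the n-qubit phase-flip code
    return sum(math.comb(n, k) * p**(n - k) * (1 - p)**k
               for k in range((n - 1) // 2 + 1))

gamma_tau = 0.1
p = (1 - math.exp(-gamma_tau)) / 2   # single-qubit error probability p(tau)

# the code suppresses the error probability at the logical level
assert p_L(p, 1) == p
assert p_L(p, 3) < p_L(p, 1)
assert p_L(p, 5) < p_L(p, 3)

def F_L(N, n, beta=1.0, tau=1.0):
    # logical QFI: F_L = beta^2 N^2 tau^4 [1 - 2 p_L]^(2N/n)
    return beta**2 * N**2 * tau**4 * (1 - 2 * p_L(p, n))**(2 * N / n)

# for N = 15 total qubits, larger blocks give a larger logical QFI here
assert F_L(15, 1) < F_L(15, 3) < F_L(15, 5) < F_L(15, 15)
```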
We find that the use of the logical code will increase $\tau_{opt}$ in a power-law fashion and improve the sensitivity, where the preservation of the Heisenberg scaling is promising. Shown in Fig.~\ref{fig:QCRB}(b) is $\delta \Omega \sqrt{T}$ vs $N$ for $\gamma \tau=0.1$. The representative values for $n$ are the same as those in panel (a). One sees that for each $n$, there exists an optimal total qubit number $N_{opt}$, where a minimum precision bound is attained. Furthermore, for $N \in [1, N_{opt})$, the Heisenberg scaling (shown by the magenta dashed line) is achieved~\cite{DurPRL2014}. For small $\gamma \tau$ and $p(\tau)$, it is straightforward to obtain $N_{opt}=\mathrm{int}\left\{1/\left[2p_L + {\cal{O}} \left(p_L^2\right)\right]\right\} n$, where $\mathrm{int}\{y\}$ denotes the integer part of $y$. For the set of values of $n$ and $\gamma \tau$ in Fig.~\ref{fig:QCRB}(b), $N_{opt} \approx (10, 219, 2320, 4.5 \times 10^7)$ for $n=(1, 3, 5, 15)$, respectively. Therefore, with the help of the logical code, the effective scope for the Heisenberg scaling can be extended. \vspace*{-1.5ex} \section{Conclusion and discussion} \vspace*{-1.5ex} \label{sec:Conclusion} In summary, we have presented an assessment of the optimal precision given by the QCRB for matter-wave interferometers, with multiqubit GHZ input and in the presence of decoherence. Our results show that, due to the competition between the unconventional phase accumulation (i.e., $\lambda \ge 2$) and the exponential dephasing, the use of entangled probes leads to vanishingly small QFI with increasing particle number, which challenges the conventional wisdom. Finally, for completeness, we tentatively analyzed a QEC scheme with logical GHZ states, which could have the potential to protect the Heisenberg scaling. 
It is worth noting that the non-entangled spin state and the maximally entangled GHZ state with unconventional interferometric scale factors are investigated and compared in our work, due to the analytical computability of the QCRB for such states. We show that the latter gives a much worse precision than the former for Sagnac-type interferometers in the presence of uncorrelated dephasing. Intuitively, there should be a maximal precision arising from the balance between the entanglement-enhancement and noise-reduction effects. With the general method in Ref.~\cite{EscherNPhys2011} for estimating the upper bound of the noisy QFI maximized over all possible input states, and by replacing the scale factor ${\cal{S}} \propto \tau$ of Ramsey-type interferometers with ${\cal{S}} \propto \tau^2$, one finds that the use of (partially) entangled states gives only a $2.8\%$ relative precision enhancement with respect to the uncorrelated spin state for Sagnac interferometers. This is quite minor compared to the $\sqrt{e}$ enhancement in Ramsey-type interferometers~\cite{HuelgaPRL1997,EscherNPhys2011,DemkowiczNC2012}. \vspace*{-1.ex} \acknowledgments \vspace*{-1.5ex} We acknowledge helpful discussions with Jun Xin and Hui-Ke Jin. This work was supported by the National Key Research and Development Program of China (No.~2017YFA0304202 and No.~2017YFA0205700), the NSFC through Grant No.~11875231, and the Fundamental Research Funds for the Central Universities through Grant No.~2018FZA3005. J. Liu acknowledges the support from the National Natural Science Foundation of China (Grant No.11805073). X.M. Lu acknowledges support from the National Natural Science Foundation of China under Grant Nos.~61871162 and 11805048, and the Natural Science Foundation of Zhejiang Province under Grant No.~LY18A050003. \appendix \section{Derivation of the multiqubit Sagnac phase in Eq. 
(\ref{eq:SagnacPhase})} \label{apped:Sagnacphase} Here we provide a detailed derivation of the multiqubit Sagnac phase in Eq. (\ref{eq:SagnacPhase}) in the main text. After applying the kicking operator $\hat{K}(v)$, the mean-field wave function for the $j$th particle in the spin state $\left| \xi\right\rangle_j$ ($\xi=0, 1$) is given by $\Psi_{\xi}\left(\theta_j, 0\right)=\Psi\left(\theta_j, 0\right) \mathrm{exp} \left[(-1)^\xi i L_k\theta_j/\hbar\right]$, which can be directly obtained with $\sigma_{jz}\left| \xi\right\rangle_j = (-1)^{\xi}\left| \xi\right\rangle_j$, and $\Psi\left(\theta_j, 0\right)$ is the initial Gaussian wave packet. Therefore, the wave function at time $t$ reads \begin{eqnarray} \Psi_{\xi}\left(\theta_j, t\right) \otimes \left| \xi\right\rangle_j = \hat{U}_j(t)\Psi_{\xi}\left(\theta_j, 0\right) \otimes \left| \xi\right\rangle_j. \end{eqnarray} In addition, the Fourier transform of the initial Gaussian wave packet is given by $\Psi\left(\theta, 0\right)=\left[1/(2\pi)\right]^{1/2}\sum_{l=-\infty}^{l=+\infty} \tilde{\Psi}(l)\mathrm{exp}\left(il\theta\right)$, where \begin{eqnarray} \tilde{\Psi}(l)&=&\left[1/(2\pi)\right]^{1/2} \int_{-\pi}^{\pi} \Psi\left(\theta, 0\right) \mathrm{exp}\left(-il\theta\right) \mathrm{d}\theta \nonumber \\ &=&\left(\sigma/\sqrt{\pi}\right)^{1/2} \mathrm{exp}\left(-\sigma^2 l^2/2\right) \mathrm{erf}\left(\frac{\pi+i\sigma^2l}{\sqrt{2}\sigma}\right) \nonumber \\ &\approx& \left(\sigma/\sqrt{\pi}\right)^{1/2} \mathrm{exp}\left(-\sigma^2 l^2/2\right), \end{eqnarray} where $\mathrm{erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}\mathrm{exp}\left(-t^2\right)\mathrm{d}t$ is the Gaussian error function, for which $\mathrm{erf}(z) \rightarrow 1$ when $\mathrm{Re}z \rightarrow +\infty$, which is the situation with $\sigma \ll \pi$ here. 
By applying the time evolution operator $\hat{U}_j(t)$, one can obtain \begin{eqnarray} \label{eq:SMeanFieldWF} \Psi_{\xi}\left(\theta_j, t\right) &\approx& \left(\frac{1}{\sqrt{\pi}\tilde{\sigma}(t)}\right)^{\frac{1}{2}} \mathrm{exp}\left\{-\frac{\left[\theta_j-\theta^{\left(\xi\right)}(t)\right]^2} {2\sigma \tilde{\sigma}(t)}\right\} \nonumber \\ &\times& \mathrm{exp}\left[(-1)^{\xi} \frac{i}{\hbar}L_k \left(\Omega t+\theta\right) \right] \mathrm{exp}\left[\frac{-it L_k^2}{2\hbar mR^2}\right] \nonumber \\ &\times& \sum_{n=-\infty}^{+\infty}\mathrm{exp}\left\{2\pi i n \kappa-2\pi^2n^2 /\left[\sigma \tilde{\sigma}(t)\right]\right\}, \end{eqnarray} where $\tilde{\sigma}(t)=\sigma+i\hbar t/\left(mR^2\sigma\right)$ and $\theta^{\left(\xi\right)}(t)=\left[(-1)^{\xi}v/R-\Omega\right]t$, and $\kappa=-i\left[\theta_j-\theta^{\left(\xi\right)}(t)\right] /\left[\sigma \tilde{\sigma}(t)\right]$. Furthermore, under the condition $|\tilde{\sigma}(t)| \ll \pi$ for $t \in [0, \tau]$, we have $\sum_{n=-\infty}^{+\infty} \mathrm{exp}\left\{2\pi i n \kappa-2\pi^2n^2/\left[\sigma \tilde{\sigma}(t)\right]\right\} =1+\sum_{n=-\infty, n \ne 0}^{+\infty}\mathrm{exp}\left\{2\pi i n \kappa-2\pi^2n^2 /\left[\sigma \tilde{\sigma}(t)\right]\right\} \approx 1$, and then we obtain \begin{eqnarray} \label{eq:SmodulusWF} \left|\Psi_{\xi}\left(\theta_j, t\right)\right|^2 \approx \frac{1}{\sqrt{\pi}\left|\tilde{\sigma}(t)\right|} \mathrm{exp}\left\{-\frac{\left[\theta_j-\theta^{\left(\xi\right)}(t)\right]^2} {\left|\tilde{\sigma}(t)\right|^2}\right\}. \nonumber \\ \end{eqnarray} Therefore, at time $t$ and under the condition $|\tilde{\sigma}(t)| \ll \pi$, the wave function in Eq.~(\ref{eq:SMeanFieldWF}) describes Gaussian wave packets centered at $\theta^{\left(\xi\right)}(t)$, i.e., propagating with group velocity $(-1)^{\xi}v-\Omega R$, for $\xi=0$ and $1$, respectively, and with the same width $\left|\tilde{\sigma}(t)\right|$. 
The interrogation time (or collision time) $\tau$, at which the two centers of the counter-propagating Gaussian wave packets completely overlap, is given by $\theta^{\left(0\right)}(\tau)-\theta^{\left(1\right)}(\tau)=2\pi$, or equivalently, $\tau=\pi R/v$. With the above results, one can obtain the multiparticle readout state $\left|\tilde{\psi}\left(\theta_1, \theta_2, ..., \theta_N; \tau \right) \right\rangle$ in Eq.~(\ref{eq:ReadoutState}), and the corresponding density matrix reads $\tilde{\rho}\left(\theta_1, \theta_2, ..., \theta_N; \tau \right) =\left|\tilde{\psi}\left(\theta_1, \theta_2, ..., \theta_N; \tau \right) \right\rangle \left \langle\tilde{\psi}\left(\theta_1, \theta_2, ..., \theta_N; \tau \right) \right|$. The reduced density matrix in the spin subspace after tracing out the orbital degrees of freedom related to $\Psi_{\xi}\left(\theta, \tau\right)$ is given by \begin{eqnarray} \label{eq:SReducedDensityMatrix} \rho(\tau) &=& \int \mathrm{d}\theta_1 \mathrm{d}\theta_2 \cdots \mathrm{d}\theta_N \tilde{\rho}\left(\theta_1, \theta_2, ..., \theta_N; \tau \right) \nonumber \\ &=& \frac{1}{2}\left[|{\bf 0}\rangle\langle{\bf 0}|+|{\bf 1} \rangle\langle{\bf 1}|+(e^{i\phi_S} |{\bf 0}\rangle\langle {\bf 1}| +\mathrm{h.c.})\right], \end{eqnarray} where \begin{eqnarray} \label{eq:SSagnacPhase} \phi_S = \beta N \Omega \tau^2 \end{eqnarray} is the multiparticle Sagnac phase, with $\beta=2mv^2/\left(\pi \hbar\right)$. This expression for $\phi_S$ is equivalent to $N$ times the well-known single-particle Sagnac phase $2m\Omega A/\hbar$, where $A=\pi R^2$ is the area of the Sagnac interferometer, and for constant $v$ we have $A= v^2 \tau^2/\pi$. The corresponding spin-subspace quantum state can be written as $|\psi(\tau)\rangle=\left(\mathrm{e}^{i\phi_S}|{\bf 0}\rangle +|{\bf 1}\rangle\right)/\sqrt{2}$ (up to a global phase factor), with which $\rho(\tau)$ can be given by $\rho(\tau)=|\psi(\tau)\rangle \langle\psi(\tau)|$. 
\section{Quantum sensing and quantum Fisher information} \label{apped:QFI} Here we present a brief introduction to quantum sensing and QFI. The QFI plays a crucial role in quantum metrology and quantum sensing. Our basic quantum resources for a SAIG include $N$ cold probe (two-level) atoms (qubits), total sensing time $T$, single-round interrogation time $\tau$, and the controlling and measurement devices. In a standard metrological scheme, the initial state of the probe is prepared in $\rho_0$, followed by a dynamical evolution $\rho_0 \xrightarrow{\phi_{\chi} (t)} \rho_{\chi}$ ($\rho_{\chi} := \rho_{\chi}(t)$), which encodes the quantity $\chi$ to be sensed into the relative phase $\phi_{\chi}(t)$ of the qubits, and can be read out by quantum measurements after a single-round time $t=\tau$. Within the total time $T$, the number of repetitive rounds of sensing and measurement is $\nu=T/\tau$. The standard deviation for any unbiased estimator $\hat{\chi}$ is bounded from below by the quantum Cram\'er-Rao bound (QCRB)~\cite{Helstrom,Holevo}, \begin{equation} \label{eq:SQCRB} \delta \hat{\chi} \ge 1/\sqrt{\nu F}, \end{equation} where $F$ is the QFI at $t=\tau$, or equivalently, \begin{equation} \label{eq:SC-R bound} \delta \hat{\chi} \sqrt{T} \ge 1/\sqrt{F/\tau}. \end{equation} Thus, finding the optimal input state and quantum measurement to maximize the QFI is a central problem in high-precision quantum sensing. In general, the QFI of $\chi$ associated with $\rho_{\chi}$ is defined by $F=\mathrm{Tr}(\rho_{\chi} L^2)$~\cite{Helstrom,Holevo}, where $\mathrm{Tr}$ is the trace operation and $L$ is the symmetric logarithmic derivative (SLD) operator, which is given by \begin{equation} \partial_{\chi} \rho_{\chi}=\left(\rho_{\chi} L + L \rho_{\chi}\right) /2. \end{equation} Usually, a signal accumulation process is a unitary quantum channel, which gives $\rho_{\chi}=U_{\chi}\rho_0U^{\dagger}_{\chi}$, where $U_{\chi}$ is a time and $\chi$ dependent unitary operator. 
It has been shown that for a pure state in unitary quantum channels, the QFI can be obtained from the variance of a Hermitian operator ${\cal{H}}=i\left(\partial_{\chi}U^{\dagger}_{\chi}\right)U_{\chi}$ in $\rho_0$, with~\cite{BoixoPRL2007,TaddeiPRL2013,PangPRA2014,LiuSR2015} \begin{equation} \label{eq:F-H} F=4(\langle{\cal{H}}^2\rangle-\langle{\cal{H}}\rangle^2), \end{equation} where $\langle O\rangle:=\mathrm{Tr}(\rho_0 O)$ for any operator $O$. For an ensemble of $N$ qubits as the input state in a standard Ramsey experiment, the maximal QFI ($\propto N^2$) is obtained when $\rho_0$ is the GHZ state~\cite{PangPRA2014}, and when the inputs are uncorrelated qubits, $F \propto N$. So the GHZ state gives the Heisenberg scaling for the sensing precision, while the uncorrelated inputs lead to the SQL, according to Eq.~(\ref{eq:SC-R bound}). However, in the presence of noises, the unitary quantum channel will be modified by errors, and the corresponding QFI will be reduced or even be lost. As a result, the expected sensing precision may not be achieved. A special case is taking the quantity $\chi$ to be the relative phase $\phi$ of the two interferometric modes, and the unitary phase imprinting operator is given by $U_{\phi}=\mathrm{exp}\left(-i\phi J_z\right)$, where $J_z=\sum_{i=1}^N \sigma_{iz}/2$ is half of the relative number operator between the two modes. The corresponding $\cal{H}$ in Eq.~(\ref{eq:F-H}) is ${\cal{H}}=J_z$. Therefore, the QFI in Eq.~(\ref{eq:F-H}) is exactly the variance of the relative number with respect to the initial probe state, and the QCRB in Eq.~(\ref{eq:SQCRB}) manifests itself as the uncertainty relation between the \emph{relative} phase and the \emph{relative} number (take $\nu =1$). So the initial state with the largest relative number fluctuation (e.g., the GHZ state) gives the highest phase resolution, while the total particle number of the state is fixed. 
\section{Calculations of the noisy QFI under independent dephasing} \label{apped:QFIReadOut} Here we give the detailed calculations of the noisy QFI under independent dephasing and generalize the result for Sagnac-type interferometers to more generic classes. The spectral decomposition of the density matrix $\rho$ is given by \begin{equation} \rho = \sum_{i=1}^d p_i \left|\psi_i \right\rangle \left\langle \psi_i \right|, \end{equation} where $d$ is the dimension of the support set of $\rho$, and $p_i$ is the $i$th eigenvalue of $\rho$, with $\left|\psi_i \right\rangle$ being the corresponding $i$th eigenvector. With this representation, the QFI with respect to the quantity $\chi$ can be expressed as~\cite{ZhangPRA2013,LiuCTP2014} \begin{eqnarray} \label{eq:SQFIExpression} F = \sum_{i=1}^d \frac{\left(\partial_{\chi} p_i\right)^2}{p_i} + \sum_{i=1}^d 4p_i \left\langle \partial_{\chi} \psi_i | \partial_{\chi} \psi_i \right\rangle \nonumber \\ - \sum^d _{i,j=1; \atop p_i+p_j \ne 0} \frac{8p_i p_j}{p_i + p_j}\left|\left\langle \psi_i | \partial_{\chi} \psi_j \right\rangle \right|^2. \end{eqnarray} For the readout GHZ state \begin{equation} \rho(\tau)=\left[|{\bf 0}\rangle\langle{\bf 0}|+|{\bf 1} \rangle\langle{\bf 1}|+(e^{-N \gamma \tau}e^{i\phi_S} |{\bf 0}\rangle\langle {\bf 1}|+\mathrm{h.c.})\right]/2 \end{equation} at $t=\tau$, where $\phi_S = \beta N \Omega \tau^2$ is the Sagnac phase, the dimension of the support set is $d=2$, with the two eigenvalues $p_{\pm}=\left(1\pm e^{-N\gamma\tau}\right)/2$ and the corresponding eigenvectors $\left|\psi_{\pm} \right\rangle =\left(e^{i\phi_S}|{\bf 0}\rangle \pm |{\bf 1}\rangle\right)/\sqrt{2}$, respectively. 
The QFI with respect to the rotation frequency $\Omega$ can be readily calculated from Eq.~(\ref{eq:SQFIExpression}) and is given by \begin{eqnarray} F &=& \left(\frac{\partial \phi_S}{\partial \Omega}\right)^2 e^{-2N \gamma \tau} \nonumber \\ &=& \beta^2 N^2 \tau^4 e^{-2N \gamma \tau}, \end{eqnarray} \begin{figure} \caption{(Color online) Effects of increasing the dephasing strength $\gamma$ on $F/\tau$ vs $\tau$. We set $\beta^2 = 1$ and the representative value for the qubit number is $N=5$.} \label{Sfig:QFI} \end{figure} which is Eq.~(\ref{eq:QFIwithNoise}) in the main text. In the noiseless scenario with $\gamma=0$, $F/\tau$ increases monotonically with $\tau$, whereas it has an optimum at finite dephasing strength. In Fig.~\ref{Sfig:QFI} we show the effects of increasing the dephasing strength on the interrogation-time-normalized QFI of the readout state at fixed qubit number $N$. One sees that both the optimal $F/\tau$ and the optimal interrogation time decrease with increasing $\gamma$. \emph{QFI of generic MWI schemes.} For generic MWI schemes with GHZ states under independent dephasing noise, if the accumulated phase is $\phi_{\chi}(\tau) \propto N \chi \tau^{\lambda}$ ($\lambda > 0$), then following the same procedure one readily obtains the QFI with respect to the quantity $\chi$, \begin{eqnarray} F_{\chi} &=& \left(\frac{\partial \phi_{\chi}}{\partial \chi}\right)^2 e^{-2N \gamma \tau} \nonumber \\ &\propto& N^2 \tau^{2\lambda} e^{-2N \gamma \tau}. \end{eqnarray} The interrogation-time-optimized value of $F_{\chi}/\tau$ is $\left(F_{\chi}/\tau\right)_{opt}={\cal{O}}(N^{3-2\lambda})$, attained at $\tau_{opt}=(2\lambda-1)/(2N\gamma)$. Hence, for $\lambda \ge 1$, the best achievable QFI is $\left(F_{\chi}/\tau\right)_{opt}={\cal{O}}(N)$ (SQL), realized by the $\lambda=1$ class; see the main text.
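The quoted optimum follows from maximizing $F_{\chi}/\tau \propto N^2\tau^{2\lambda-1}e^{-2N\gamma\tau}$ over $\tau$: setting the derivative to zero gives $\tau_{opt}=(2\lambda-1)/(2N\gamma)$, and substituting back, $\left(F_{\chi}/\tau\right)_{opt}\propto N^2\tau_{opt}^{2\lambda-1}={\cal{O}}(N^{3-2\lambda})$ since $\tau_{opt}\propto 1/N$. A brief numerical sketch confirming the location of the optimum (the values of $N$ and $\gamma$ are illustrative assumptions):

```python
import numpy as np

def tau_opt_numeric(lam, N, gamma, tau_max=10.0, n=200001):
    # Maximize f(tau) = N^2 * tau^(2*lam - 1) * exp(-2*N*gamma*tau) on a grid;
    # analytically, tau_opt = (2*lam - 1) / (2*N*gamma) for lam > 1/2.
    taus = np.linspace(1e-4, tau_max, n)
    f = N**2 * taus**(2*lam - 1) * np.exp(-2*N*gamma*taus)
    return taus[np.argmax(f)]

N, gamma = 5, 0.2   # illustrative values (assumptions)
```

For example, $\lambda=1$ gives $\tau_{opt}=1/(2N\gamma)=0.5$ and $\lambda=2$ gives $\tau_{opt}=3/(2N\gamma)=1.5$ with these parameters, matching the grid search.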
\begin{thebibliography}{75} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Degen}\ \emph {et~al.}(2017)\citenamefont {Degen}, \citenamefont {Reinhard},\ and\ \citenamefont {Cappellaro}}]{DegenRMP} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~L.}\ \bibnamefont {Degen}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Reinhard}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Cappellaro}},\ }\href {\doibase 
10.1103/RevModPhys.89.035002} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {035002} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Giovannetti}\ \emph {et~al.}(2004)\citenamefont {Giovannetti}, \citenamefont {Lloyd},\ and\ \citenamefont {Maccone}}]{GiovannettiScience2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Giovannetti}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}},\ }\href {\doibase 10.1126/science.1104149} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {306}},\ \bibinfo {pages} {1330} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Giovannetti}\ \emph {et~al.}(2006)\citenamefont {Giovannetti}, \citenamefont {Lloyd},\ and\ \citenamefont {Maccone}}]{GiovannettiPRL2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Giovannetti}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}},\ }\href {\doibase 10.1103/PhysRevLett.96.010401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {010401} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barrett}\ \emph {et~al.}(2014)\citenamefont {Barrett}, \citenamefont {Geiger}, \citenamefont {Dutta}, \citenamefont {Meunier}, \citenamefont {Canuel}, \citenamefont {Gauguet}, \citenamefont {Bouyer},\ and\ \citenamefont {Landragin}}]{BarrettCRPhys2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Barrett}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Geiger}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Dutta}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Meunier}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Canuel}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Gauguet}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bouyer}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Landragin}},\ }\href {\doibase 10.1016/j.crhy.2014.10.009} {\bibfield {journal} {\bibinfo {journal} {C. R. Phys.}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {875} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kasevich}\ and\ \citenamefont {Chu}(1991)}]{KasevichPRL1991} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kasevich}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Chu}},\ }\href {\doibase 10.1103/PhysRevLett.67.181} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {67}},\ \bibinfo {pages} {181} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gustavson}\ \emph {et~al.}(1997)\citenamefont {Gustavson}, \citenamefont {Bouyer},\ and\ \citenamefont {Kasevich}}]{GustavsonPRL1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~L.}\ \bibnamefont {Gustavson}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bouyer}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Kasevich}},\ }\href {\doibase 10.1103/PhysRevLett.78.2046} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {2046} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kandes}\ \emph {et~al.}()\citenamefont {Kandes}, \citenamefont {Carretero-Gonzalez},\ and\ \citenamefont {Bromley}}]{KandesarXiv2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~C.}\ \bibnamefont {Kandes}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Carretero-Gonzalez}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~W.~J.}\ \bibnamefont {Bromley}},\ }\href@noop {} {}\bibinfo {note} {ArXiv:1306.1308}\BibitemShut {NoStop} \bibitem [{\citenamefont {Trubko}\ \emph {et~al.}(2015)\citenamefont {Trubko}, \citenamefont {Greenberg}, \citenamefont {Germaine}, \citenamefont {Gregoire}, \citenamefont {Holmgren}, \citenamefont {Hromada},\ and\ \citenamefont {Cronin}}]{TrubkoPRL2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Trubko}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Greenberg}}, \bibinfo {author} {\bibfnamefont {M.~T.~S.}\ \bibnamefont {Germaine}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Gregoire}}, \bibinfo {author} {\bibfnamefont {W.~F.}\ \bibnamefont {Holmgren}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Hromada}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {Cronin}},\ }\href {\doibase 
10.1103/PhysRevLett.114.140404} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {114}},\ \bibinfo {pages} {140404} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Helm}\ \emph {et~al.}(2015)\citenamefont {Helm}, \citenamefont {Cornish},\ and\ \citenamefont {Gardiner}}]{HelmPRL2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont {Helm}}, \bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont {Cornish}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Gardiner}},\ }\href {\doibase 10.1103/PhysRevLett.114.134101} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {114}},\ \bibinfo {pages} {134101} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Haine}(2016)}]{HainePRL2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Haine}},\ }\href {\doibase 10.1103/PhysRevLett.116.230404} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {230404} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Helm}\ \emph {et~al.}(2018)\citenamefont {Helm}, \citenamefont {Billam}, \citenamefont {Rakonjac}, \citenamefont {Cornish},\ and\ \citenamefont {Gardiner}}]{HelmPRL2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont {Helm}}, \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Billam}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Rakonjac}}, \bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont {Cornish}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Gardiner}},\ }\href {\doibase 10.1103/PhysRevLett.120.063201} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {063201} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stevenson}\ \emph {et~al.}(2015)\citenamefont {Stevenson}, \citenamefont {Hush}, \citenamefont {Bishop}, \citenamefont {Lesanovsky},\ and\ \citenamefont {Fernholz}}]{StevensonPRL2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Stevenson}}, \bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont {Hush}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Bishop}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Lesanovsky}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Fernholz}},\ }\href {\doibase 10.1103/PhysRevLett.115.163001} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {115}},\ \bibinfo {pages} {163001} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Luo}\ \emph {et~al.}(2017)\citenamefont {Luo}, \citenamefont {Huang}, \citenamefont {Zhang},\ and\ \citenamefont {Lee}}]{LuoPRA2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Luo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Lee}},\ }\href {\doibase 10.1103/PhysRevA.95.023608} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {023608} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huelga}\ \emph {et~al.}(1997)\citenamefont {Huelga}, \citenamefont {Macchiavello}, \citenamefont {Pellizzari}, \citenamefont {Ekert}, \citenamefont {Plenio},\ and\ \citenamefont {Cirac}}]{HuelgaPRL1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~F.}\ \bibnamefont {Huelga}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Macchiavello}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Pellizzari}}, \bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont {Ekert}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}},\ }\href {\doibase 10.1103/PhysRevLett.79.3865} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {3865} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Escher}\ \emph {et~al.}(2011)\citenamefont {Escher}, \citenamefont {de~Matos~Filho},\ and\ \citenamefont {Davidovich}}]{EscherNPhys2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont {Escher}}, \bibinfo {author} {\bibfnamefont {R.~L.}\ \bibnamefont {de~Matos~Filho}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Davidovich}},\ }\href {\doibase 10.1038/nphys1958} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Phys.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {406} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Demkowicz-Dobrza\'{n}ski}\ \emph {et~al.}(2012)\citenamefont {Demkowicz-Dobrza\'{n}ski}, \citenamefont {Ko\l{}ody\'{n}ski},\ and\ \citenamefont {Gu\c{t}\u{a}}}]{DemkowiczNC2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Demkowicz-Dobrza\'{n}ski}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ko\l{}ody\'{n}ski}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gu\c{t}\u{a}}},\ }\href {\doibase 10.1038/ncomms2067} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {1063} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tan}\ \emph {et~al.}(2013)\citenamefont {Tan}, \citenamefont {Huang}, \citenamefont {Yin}, \citenamefont {Kuang},\ and\ \citenamefont {Wang}}]{TanPRA2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.-S.}\ \bibnamefont {Tan}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Kuang}}, \ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wang}},\ }\href {\doibase 10.1103/PhysRevA.87.032102} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {032102} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ and\ \citenamefont {Yuan}(2017{\natexlab{a}})}]{LiuPRA2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Liu}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yuan}},\ }\href {\doibase 10.1103/PhysRevA.96.012117} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {012117} (\bibinfo {year} {2017}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ and\ \citenamefont {Yuan}(2017{\natexlab{b}})}]{LiuPRA2017MultiPara} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Liu}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yuan}},\ }\href {\doibase 10.1103/PhysRevA.96.042114} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {042114} (\bibinfo {year} {2017}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kessler}\ \emph {et~al.}(2014)\citenamefont {Kessler}, \citenamefont {Lovchinsky}, \citenamefont {Sushkov},\ and\ \citenamefont {Lukin}}]{KesslerPRL2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont {Kessler}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Lovchinsky}}, \bibinfo {author} {\bibfnamefont {A.~O.}\ \bibnamefont {Sushkov}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Lukin}},\ }\href {\doibase 10.1103/PhysRevLett.112.150802} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {150802} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {D\"ur}\ \emph {et~al.}(2014)\citenamefont {D\"ur}, \citenamefont {Skotiniotis}, \citenamefont {Fr\"owis},\ and\ \citenamefont {Kraus}}]{DurPRL2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {D\"ur}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Skotiniotis}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Fr\"owis}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Kraus}},\ }\href {\doibase 10.1103/PhysRevLett.112.080801} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {080801} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Unden}\ \emph {et~al.}(2016)\citenamefont {Unden}, \citenamefont {Balasubramanian}, \citenamefont {Louzon}, \citenamefont {Vinkler}, \citenamefont {Plenio}, \citenamefont {Markham}, \citenamefont {Twitchen}, \citenamefont {Stacey}, \citenamefont {Lovchinsky}, \citenamefont {Sushkov}, \citenamefont {Lukin}, \citenamefont {Retzker}, \citenamefont {Naydenov}, \citenamefont {McGuinness},\ and\ \citenamefont {Jelezko}}]{UndenPRL2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Unden}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Balasubramanian}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Louzon}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Vinkler}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Markham}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Twitchen}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Stacey}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Lovchinsky}}, \bibinfo {author} {\bibfnamefont {A.~O.}\ \bibnamefont {Sushkov}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Lukin}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Retzker}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Naydenov}}, \bibinfo {author} {\bibfnamefont {L.~P.}\ \bibnamefont {McGuinness}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Jelezko}},\ }\href {\doibase 10.1103/PhysRevLett.116.230502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {230502} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{Num()}]{NumberNote} \BibitemOpen \href@noop {} {}\bibinfo {note} {We assume the total number of particles is fixed. 
For quantum sensing with states of a fluctuating total number of particles, see Ref.~\cite{HyllusPRL2010}, where the precision bound is given by the averaged number of particles.}\BibitemShut {Stop} \bibitem [{\citenamefont {Bollinger}\ \emph {et~al.}(1996)\citenamefont {Bollinger}, \citenamefont {Itano}, \citenamefont {Wineland},\ and\ \citenamefont {Heinzen}}]{WinelandPRA1996} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Bollinger}}, \bibinfo {author} {\bibfnamefont {W.~M.}\ \bibnamefont {Itano}}, \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Heinzen}},\ }\href {\doibase 10.1103/PhysRevA.54.R4649} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {54}},\ \bibinfo {pages} {R4649} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Leibfried}\ \emph {et~al.}(2004)\citenamefont {Leibfried}, \citenamefont {Barrett}, \citenamefont {Schaetz}, \citenamefont {Britton}, \citenamefont {Chiaverini}, \citenamefont {Itano}, \citenamefont {Jost}, \citenamefont {Langer},\ and\ \citenamefont {Wineland}}]{LeibfriedScience2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibfried}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Barrett}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schaetz}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Britton}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chiaverini}}, \bibinfo {author} {\bibfnamefont {W.~M.}\ \bibnamefont {Itano}}, \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Jost}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Langer}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\href {\doibase 10.1126/science.1097576} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {304}},\ \bibinfo {pages} {1476} 
(\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {M\o{}lmer}\ and\ \citenamefont {S\o{}rensen}(1999)}]{MolmerPRL1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {M\o{}lmer}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {S\o{}rensen}},\ }\href {\doibase 10.1103/PhysRevLett.82.1835} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {1835} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lee}(2006)}]{LeePRL2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Lee}},\ }\href {\doibase 10.1103/PhysRevLett.97.150402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {150402} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gietka}\ \emph {et~al.}(2015)\citenamefont {Gietka}, \citenamefont {Sza\ifmmode~\acute{n}\else \'{n}\fi{}kowski}, \citenamefont {Wasak},\ and\ \citenamefont {Chwede\ifmmode~\acute{n}\else \'{n}\fi{}czuk}}]{GietkaPRA2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Gietka}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Sza\ifmmode~\acute{n}\else \'{n}\fi{}kowski}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Wasak}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chwede\ifmmode~\acute{n}\else \'{n}\fi{}czuk}},\ }\href {\doibase 10.1103/PhysRevA.92.043622} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {043622} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pezz\'e}\ and\ \citenamefont {Smerzi}(2009)}]{PezzePRL2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Pezz\'e}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Smerzi}},\ }\href {\doibase 10.1103/PhysRevLett.102.100401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo {pages} {100401} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Saffman}\ and\ \citenamefont {M\o{}lmer}(2009)}]{SaffmanPRL2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Saffman}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {M\o{}lmer}},\ }\href {\doibase 10.1103/PhysRevLett.102.240502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo {pages} {240502} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pang}\ and\ \citenamefont {Brun}(2014)}]{PangPRA2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pang}}\ and\ \bibinfo {author} {\bibfnamefont {T.~A.}\ \bibnamefont {Brun}},\ }\href {\doibase 10.1103/PhysRevA.90.022117} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {022117} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bord\'{e}}(1989)}]{BordePLA1989} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~J.}\ \bibnamefont {Bord\'{e}}},\ }\href {\doibase 10.1016/0375-9601(89)90537-9} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. 
A}\ }\textbf {\bibinfo {volume} {140}},\ \bibinfo {pages} {10} (\bibinfo {year} {1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gerry}\ and\ \citenamefont {Mimih}(2010)}]{GerryPRA2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~C.}\ \bibnamefont {Gerry}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Mimih}},\ }\href {\doibase 10.1103/PhysRevA.82.013831} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {013831} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Halkyard}\ \emph {et~al.}(2010)\citenamefont {Halkyard}, \citenamefont {Jones},\ and\ \citenamefont {Gardiner}}]{HalkyardPRA2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~L.}\ \bibnamefont {Halkyard}}, \bibinfo {author} {\bibfnamefont {M.~P.~A.}\ \bibnamefont {Jones}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Gardiner}},\ }\href {\doibase 10.1103/PhysRevA.81.061602} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {061602} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dalfovo}\ \emph {et~al.}(1999)\citenamefont {Dalfovo}, \citenamefont {Giorgini}, \citenamefont {Pitaevskii},\ and\ \citenamefont {Stringari}}]{DalfovoRMP1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Dalfovo}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Giorgini}}, \bibinfo {author} {\bibfnamefont {L.~P.}\ \bibnamefont {Pitaevskii}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Stringari}},\ }\href {\doibase 10.1103/RevModPhys.71.463} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {463} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{Tim()}]{TimeNote} \BibitemOpen \href@noop {} {}\bibinfo {note} {The interrogation time $\tau$ is defined at which the center of the two counter-propagating Gaussian wave packets are completely overlapped (See Appendix~\ref{apped:Sagnacphase}).}\BibitemShut {Stop} \bibitem [{\citenamefont {Helstrom}(1976)}]{Helstrom} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~W.}\ \bibnamefont {Helstrom}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Detection and Estimation Theory}}}\ (\bibinfo {publisher} {Academic Press},\ \bibinfo {address} {New York},\ \bibinfo {year} {1976})\BibitemShut {NoStop} \bibitem [{\citenamefont {Holevo}(1982)}]{Holevo} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Holevo}},\ }\href@noop {} {\emph {\bibinfo {title} {Probabilistic and Statistical Aspects of Quantum Theory}}}\ (\bibinfo {publisher} {North-Holland Publishing Company},\ \bibinfo {address} {Amsterdam},\ \bibinfo {year} {1982})\BibitemShut {NoStop} \bibitem [{\citenamefont {Pang}\ and\ \citenamefont {Jordan}(2017)}]{PangNC2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pang}}\ and\ \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Jordan}},\ }\href {\doibase 10.1038/ncomms14695} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {14695} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gefen}\ \emph {et~al.}(2017)\citenamefont {Gefen}, \citenamefont {Jelezko},\ and\ \citenamefont {Retzker}}]{GefenPRA2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Gefen}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Jelezko}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Retzker}},\ }\href {\doibase 10.1103/PhysRevA.96.032310} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {032310} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{Int()}]{InteractionNote} \BibitemOpen \href@noop {} {}\bibinfo {note} {Here for Sagnac interferometers, the inter-path interaction is zero, which is different from Refs.~\cite{GrossNature2010,RiedelNature2010,SzankowskiPRA2014}, where the intra- and inter-modes interactions are almost canceled and the net interaction could be negligibly small in the phase imprinting process.}\BibitemShut {Stop} \bibitem [{\citenamefont {Aghamalyan}\ \emph {et~al.}(2015)\citenamefont {Aghamalyan}, \citenamefont {Cominotti}, \citenamefont {Rizzi}, \citenamefont {Rossini}, \citenamefont {Hekking}, \citenamefont {Minguzzi}, \citenamefont {Kwek},\ and\ \citenamefont {Amico}}]{AghamalyanNJP2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Aghamalyan}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cominotti}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Rizzi}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Rossini}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Hekking}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Minguzzi}}, \bibinfo {author} {\bibfnamefont {L.-C.}\ \bibnamefont {Kwek}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Amico}},\ }\href {\doibase 10.1088/1367-2630/17/4/045023} 
{\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {045023} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kol\'a\ifmmode~\check{r}\else \v{r}\fi{}}\ \emph {et~al.}(2015)\citenamefont {Kol\'a\ifmmode~\check{r}\else \v{r}\fi{}}, \citenamefont {Opatrn\'y},\ and\ \citenamefont {Das}}]{DasPRA2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kol\'a\ifmmode~\check{r}\else \v{r}\fi{}}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Opatrn\'y}}, \ and\ \bibinfo {author} {\bibfnamefont {K.~K.}\ \bibnamefont {Das}},\ }\href {\doibase 10.1103/PhysRevA.92.043630} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {043630} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scully}\ and\ \citenamefont {Zubairy}(1997)}]{ScullyBook} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~O.}\ \bibnamefont {Scully}}\ and\ \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Zubairy}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Optics}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {address} {Cambridge},\ \bibinfo {year} {1997})\ p.\ \bibinfo {pages} {163}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sza\ifmmode~\acute{n}\else \'{n}\fi{}kowski}\ \emph {et~al.}(2014)\citenamefont {Sza\ifmmode~\acute{n}\else \'{n}\fi{}kowski}, \citenamefont {Trippenbach},\ and\ \citenamefont {Chwede\ifmmode~\acute{n}\else \'{n}\fi{}czuk}}]{SzankowskiPRA2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Sza\ifmmode~\acute{n}\else \'{n}\fi{}kowski}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Trippenbach}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chwede\ifmmode~\acute{n}\else \'{n}\fi{}czuk}},\ }\href {\doibase 10.1103/PhysRevA.90.063619} {\bibfield {journal} {\bibinfo {journal} 
{Phys. Rev. A}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {063619} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Puri}\ and\ \citenamefont {Agarwal}(1992)}]{PuriPRA1992} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~R.}\ \bibnamefont {Puri}}\ and\ \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Agarwal}},\ }\href {\doibase 10.1103/PhysRevA.45.5073} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {45}},\ \bibinfo {pages} {5073} (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ and\ \citenamefont {Leung}(2003)}]{TWChenPRA2003} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~W.}\ \bibnamefont {Chen}}\ and\ \bibinfo {author} {\bibfnamefont {P.~T.}\ \bibnamefont {Leung}},\ }\href {\doibase 10.1103/PhysRevA.67.055802} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {67}},\ \bibinfo {pages} {055802} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{Dep()}]{DephasingNote} \BibitemOpen \href@noop {} {}\bibinfo {note} {For each atom, we consider the fluctuation of the density-distribution dependent interaction [see Eq.~(\ref{eq:GPEHamiltonian})] or thermal fluctuations as the cause of the dephasing. The total dephasing operator is given by the summation over the individual qubits, where in Eq.~(\ref{eq:Master}) we have assumed the same dephasing strength $\gamma$ for each atom for simplicity. This is different from the external-field fluctuation induced collective dephasing, e.g., for Ramsey-type interferometers, see Refs.~\cite{DornerNJP2012,ZhongPRA2013,SzankowskiPRA2014}. Also, this is distinct from the periodical phase diffusion-revival process of single-particle coherence arising from nonlinear interactions with initial coherent spin states, e.g., see Ref.~\cite{KhodorkovskyPRA2009}. 
Such an evolution process does not diffuse the phase of the GHZ state and the reduced single-particle coherence always vanishes.}\BibitemShut {Stop} \bibitem [{Rea()}]{ReadoutNotes} \BibitemOpen \href@noop {} {}\bibinfo {note} {Applying the $\pi/2$ pulse does not change the QFI with respect to $\Omega$, so here we refer to the state just after the Sagnac phase encoding as the readout state.}\BibitemShut {Stop} \bibitem [{Mar()}]{MarkovianNote} \BibitemOpen \href@noop {} {}\bibinfo {note} {Here we assume Markovian noise. If one considers non-Markovian dephasing, then $\tau_{opt} \propto 1/\sqrt{N}$~\cite{ChinPRL2012} and $\left(F/\tau\right)_{opt} \propto \sqrt{N}$ for Sagnac-type interferometers.}\BibitemShut {Stop} \bibitem [{\citenamefont {Kasevich}\ and\ \citenamefont {Chu}(1992)}]{KasevichAPB1992} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kasevich}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Chu}},\ }\href {\doibase 10.1007/BF00325375} {\bibfield {journal} {\bibinfo {journal} {Appl. Phys. B}\ }\textbf {\bibinfo {volume} {54}},\ \bibinfo {pages} {321} (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schleich}\ \emph {et~al.}(2013{\natexlab{a}})\citenamefont {Schleich}, \citenamefont {Greenberger},\ and\ \citenamefont {Rasel}}]{SchleichPRL2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~P.}\ \bibnamefont {Schleich}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Greenberger}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont {Rasel}},\ }\href {\doibase 10.1103/PhysRevLett.110.010401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
Lett.}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {010401} (\bibinfo {year} {2013}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schleich}\ \emph {et~al.}(2013{\natexlab{b}})\citenamefont {Schleich}, \citenamefont {Greenberger},\ and\ \citenamefont {Rasel}}]{SchleichNJP2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~P.}\ \bibnamefont {Schleich}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Greenberger}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont {Rasel}},\ }\href {\doibase 10.1088/1367-2630/15/1/013007} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {013007} (\bibinfo {year} {2013}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kritsotakis}\ \emph {et~al.}()\citenamefont {Kritsotakis}, \citenamefont {Szigeti}, \citenamefont {Dunningham},\ and\ \citenamefont {Haine}}]{KritsotakisarXiv2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kritsotakis}}, \bibinfo {author} {\bibfnamefont {S.~S.}\ \bibnamefont {Szigeti}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Dunningham}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Haine}},\ }\href@noop {} {}\bibinfo {note} {ArXiv:1710.06340}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gustavson}\ \emph {et~al.}(2000)\citenamefont {Gustavson}, \citenamefont {Landragin},\ and\ \citenamefont {Kasevich}}]{GustavsonCQG2000} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~L.}\ \bibnamefont {Gustavson}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Landragin}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Kasevich}},\ }\href {\doibase 10.1088/0264-9381/17/12/311} {\bibfield {journal} {\bibinfo {journal} {Classical Quantum Gravity}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {2385} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem 
[{\citenamefont {Durfee}\ \emph {et~al.}(2006)\citenamefont {Durfee}, \citenamefont {Shaham},\ and\ \citenamefont {Kasevich}}]{DurfeePRL2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont {Durfee}}, \bibinfo {author} {\bibfnamefont {Y.~K.}\ \bibnamefont {Shaham}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Kasevich}},\ }\href {\doibase 10.1103/PhysRevLett.97.240801} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {240801} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{Sag()}]{SagnacNote} \BibitemOpen \href@noop {} {}\bibinfo {note} {Note that if one fixes the initial kicking angular momentum $L_k$ while doing the optimization, then the scheme will belong to the $\lambda=1$ class, as in Ref.~\cite{HalkyardPRA2010}.}\BibitemShut {Stop} \bibitem [{\citenamefont {Ferrini}\ \emph {et~al.}(2011)\citenamefont {Ferrini}, \citenamefont {Spehner}, \citenamefont {Minguzzi},\ and\ \citenamefont {Hekking}}]{FerriniPRA2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ferrini}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Spehner}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Minguzzi}}, \ and\ \bibinfo {author} {\bibfnamefont {F.~W.~J.}\ \bibnamefont {Hekking}},\ }\href {\doibase 10.1103/PhysRevA.84.043628} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {043628} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dorner}(2012)}]{DornerNJP2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Dorner}},\ }\href {\doibase 10.1088/1367-2630/14/4/043011} {\bibfield {journal} {\bibinfo {journal} {New J.
Phys.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {043011} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhong}\ \emph {et~al.}(2013)\citenamefont {Zhong}, \citenamefont {Sun}, \citenamefont {Ma}, \citenamefont {Wang},\ and\ \citenamefont {Nori}}]{ZhongPRA2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zhong}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wang}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase 10.1103/PhysRevA.87.022337} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {022337} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schindler}\ \emph {et~al.}(2011)\citenamefont {Schindler}, \citenamefont {Barreiro}, \citenamefont {Monz}, \citenamefont {Nebendahl}, \citenamefont {Nigg}, \citenamefont {Chwalla}, \citenamefont {Hennrich},\ and\ \citenamefont {Blatt}}]{SchindlerScience2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Schindler}}, \bibinfo {author} {\bibfnamefont {J.~T.}\ \bibnamefont {Barreiro}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Monz}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Nebendahl}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Nigg}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Chwalla}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hennrich}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\href {\doibase 10.1126/science.1203329} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {332}},\ \bibinfo {pages} {1059} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Waldherr}\ \emph {et~al.}(2014)\citenamefont {Waldherr}, \citenamefont {Wang}, 
\citenamefont {Zaiser}, \citenamefont {Jamali}, \citenamefont {Schulte-Herbr\"uggen}, \citenamefont {Abe}, \citenamefont {Ohshima}, \citenamefont {Isoya}, \citenamefont {Du}, \citenamefont {Neumann},\ and\ \citenamefont {Wrachtrup}}]{WaldherrNature2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Waldherr}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zaiser}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Jamali}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schulte-Herbr\"uggen}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Abe}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Ohshima}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Isoya}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Du}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Neumann}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wrachtrup}},\ }\href {\doibase 10.1038/nature12919} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {506}},\ \bibinfo {pages} {204} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ofek}\ \emph {et~al.}(2016)\citenamefont {Ofek}, \citenamefont {Petrenko}, \citenamefont {Heeres}, \citenamefont {Reinhold}, \citenamefont {Leghtas}, \citenamefont {Vlastakis}, \citenamefont {Liu}, \citenamefont {Frunzio}, \citenamefont {Girvin}, \citenamefont {Jiang}, \citenamefont {Mirrahimi}, \citenamefont {Devoret},\ and\ \citenamefont {Schoelkopf}}]{OfekNature2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Ofek}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Petrenko}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Heeres}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Reinhold}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Leghtas}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont 
{Vlastakis}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Frunzio}}, \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont {Girvin}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mirrahimi}}, \bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont {Devoret}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Schoelkopf}},\ }\href {\doibase 10.1038/nature18949} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {536}},\ \bibinfo {pages} {441} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Arrad}\ \emph {et~al.}(2014)\citenamefont {Arrad}, \citenamefont {Vinkler}, \citenamefont {Aharonov},\ and\ \citenamefont {Retzker}}]{ArradPRL2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Arrad}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Vinkler}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Aharonov}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Retzker}},\ }\href {\doibase 10.1103/PhysRevLett.112.150801} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {150801} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lu}\ \emph {et~al.}(2015)\citenamefont {Lu}, \citenamefont {Yu},\ and\ \citenamefont {Oh}}]{LuNC2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-M.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Yu}}, \ and\ \bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Oh}},\ }\href {\doibase 10.1038/ncomms8282} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {7282} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ozeri}()}]{OzeriArxiv2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ozeri}},\ }\href@noop {} {}\bibinfo {note} {ArXiv:1310.3432}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reiter}\ \emph {et~al.}(2017)\citenamefont {Reiter}, \citenamefont {S\o{}rensen}, \citenamefont {Zoller},\ and\ \citenamefont {Muschik}}]{ReiterNC2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Reiter}}, \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {S\o{}rensen}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}}, \ and\ \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Muschik}},\ }\href {\doibase 10.1038/s41467-017-01895-5} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {1822} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Demkowicz-Dobrza\ifmmode~\acute{n}\else \'{n}\fi{}ski}\ \emph {et~al.}(2017)\citenamefont {Demkowicz-Dobrza\ifmmode~\acute{n}\else \'{n}\fi{}ski}, \citenamefont {Czajkowski},\ and\ \citenamefont {Sekatski}}]{DemkowiczPRX2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Demkowicz-Dobrza\ifmmode~\acute{n}\else \'{n}\fi{}ski}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Czajkowski}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Sekatski}},\ }\href {\doibase 10.1103/PhysRevX.7.041009} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {041009} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2018)\citenamefont {Zhou}, \citenamefont {Zhang}, \citenamefont {Preskill},\ and\ \citenamefont {Jiang}}]{ZhouNC2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Jiang}},\ }\href {\doibase 10.1038/s41467-017-02510-3} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {78} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hyllus}\ \emph {et~al.}(2010)\citenamefont {Hyllus}, \citenamefont {Pezz\'e},\ and\ \citenamefont {Smerzi}}]{HyllusPRL2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Hyllus}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Pezz\'e}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Smerzi}},\ }\href {\doibase 10.1103/PhysRevLett.105.120501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages} {120501} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gross}\ \emph {et~al.}(2010)\citenamefont {Gross}, \citenamefont {Zibold}, \citenamefont {Nicklas}, \citenamefont {Est\`eve},\ and\ \citenamefont {Oberthaler}}]{GrossNature2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gross}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Zibold}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Nicklas}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Est\`eve}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~K.}\ \bibnamefont {Oberthaler}},\ }\href {\doibase 10.1038/nature08919} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {464}},\ \bibinfo {pages} {1165} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Riedel}\ \emph {et~al.}(2010)\citenamefont {Riedel}, \citenamefont {B\"ohi}, \citenamefont {Li}, \citenamefont {H\"ansch}, \citenamefont {Sinatra},\ and\ \citenamefont {Treutlein}}]{RiedelNature2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~F.}\ \bibnamefont {Riedel}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {B\"ohi}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {T.~W.}\ \bibnamefont {H\"ansch}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sinatra}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Treutlein}},\ }\href {\doibase 10.1038/nature08988} {\bibfield {journal} {\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {464}},\ \bibinfo {pages} {1170} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Khodorkovsky}\ \emph {et~al.}(2009)\citenamefont {Khodorkovsky}, \citenamefont {Kurizki},\ and\ \citenamefont {Vardi}}]{KhodorkovskyPRA2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Khodorkovsky}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vardi}},\ }\href {\doibase 10.1103/PhysRevA.80.023609} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {023609} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chin}\ \emph {et~al.}(2012)\citenamefont {Chin}, \citenamefont {Huelga},\ and\ \citenamefont {Plenio}}]{ChinPRL2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Chin}}, \bibinfo {author} {\bibfnamefont {S.~F.}\ \bibnamefont {Huelga}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\ }\href {\doibase 10.1103/PhysRevLett.109.233601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\ \bibinfo {pages} {233601} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boixo}\ \emph {et~al.}(2007)\citenamefont {Boixo}, \citenamefont {Flammia}, \citenamefont {Caves},\ and\ \citenamefont {Geremia}}]{BoixoPRL2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {S.~T.}\ \bibnamefont {Flammia}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Caves}}, \ and\ \bibinfo {author} {\bibfnamefont {JM}~\bibnamefont {Geremia}},\ }\href {\doibase 10.1103/PhysRevLett.98.090401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {090401} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Taddei}\ \emph {et~al.}(2013)\citenamefont {Taddei}, \citenamefont {Escher}, \citenamefont {Davidovich},\ and\ \citenamefont {de~Matos~Filho}}]{TaddeiPRL2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont {Taddei}}, \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont {Escher}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Davidovich}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~L.}\ \bibnamefont {de~Matos~Filho}},\ }\href {\doibase 10.1103/PhysRevLett.110.050402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {050402} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2015)\citenamefont {Liu}, \citenamefont {Jing},\ and\ \citenamefont {Wang}}]{LiuSR2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {X.-X.}\ \bibnamefont {Jing}}, \ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wang}},\ }\href {\doibase 10.1038/srep08565} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {8565} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhang}\ \emph {et~al.}(2013)\citenamefont {Zhang}, \citenamefont {Li}, \citenamefont {Yang},\ and\ \citenamefont {Jin}}]{ZhangPRA2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.~M.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {X.~W.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Yang}}, \ and\ \bibinfo {author} {\bibfnamefont {G.~R.}\ \bibnamefont {Jin}},\ }\href {\doibase 10.1103/PhysRevA.88.043832} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {043832} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2014)\citenamefont {Liu}, \citenamefont {Jing}, \citenamefont {Zhong},\ and\ \citenamefont {Wang}}]{LiuCTP2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {X.-X.}\ \bibnamefont {Jing}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zhong}}, \ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wang}},\ }\href {\doibase 10.1088/0253-6102/61/1/08} {\bibfield {journal} {\bibinfo {journal} {Commun. Theor. Phys.}\ }\textbf {\bibinfo {volume} {61}},\ \bibinfo {pages} {45} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
\begin{document} \title{Signatures of Multiplicity Spaces in Tensor Products of $\mathfrak{sl}_2$ and $U_q(\mathfrak{sl}_2)$ Representations} \begin{abstract} We study multiplicity space signatures in tensor products of representations of $\mathfrak{sl}_2$ and $U_q(\mathfrak{sl}_2)$, and give some applications. We completely classify definite multiplicity spaces for generic tensor products of $\mathfrak{sl}_2$ Verma modules. This provides a classification of a family of unitary representations of a basic quantized quiver variety, one of the first such classifications for any quantized quiver variety. We use multiplicity space signatures to provide the first real critical point lower bound for generic $\mathfrak{sl}_2$ master functions. As a corollary to this bound, we obtain a simple and asymptotically correct approximation for the number of real critical points of a generic $\mathfrak{sl}_2$ master function. As a first step to quantizing this picture, we obtain a formula for multiplicity space signatures in tensor products of finite dimensional simple $U_q(\mathfrak{sl}_2)$ representations.\end{abstract} \onehalfspacing \tableofcontents \section{Introduction and results} Let $\mathfrak{g}$ be a semisimple Lie algebra over $\mathbb{C}$, with a choice $\mathfrak{b}$ of Borel subalgebra. The Verma module $M_{\lambda}$ for $\mathfrak{g}$ with real highest weight $\lambda$ carries a certain Hermitian form known as the \textit{Shapovalov form}. It is uniquely determined (up to scalar) by a certain contravariance condition, and is nondegenerate for generic\footnote{In this context, \textit{generic} means lying outside the union of countably many hyperplanes; it is generally possible to write formulas for those hyperplanes.} $\lambda$. Yee has used Kazhdan-Lusztig polynomials to explicitly compute the signatures of these forms (see \cite{YV}).
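Concretely, for $\mathfrak{g}=\mathfrak{sl}_2$ the Shapovalov form on $M_\lambda$ is diagonal on the basis $\{F^j v\}$ of weight vectors, and the standard recursion $(F^j v, F^j v) = j(\lambda-j+1)\,(F^{j-1}v, F^{j-1}v)$ determines it completely. The following minimal Python sketch of that recursion is ours, not part of the formal development:

```python
def shapovalov_norms(lam, depth):
    """Diagonal Gram entries (F^j v, F^j v), j = 0..depth-1, of the
    Shapovalov form on the sl_2 Verma module M_lam, via the standard
    recursion (F^j v, F^j v) = j*(lam - j + 1)*(F^{j-1} v, F^{j-1} v).
    Function and variable names are ours."""
    norms = [1.0]  # (v, v) = 1
    for j in range(1, depth):
        norms.append(j * (lam - j + 1) * norms[-1])
    return norms
```

For $\lambda<0$ the signs alternate with $j$, while for non-integral $\lambda>0$ the first $\lfloor\lambda\rfloor+1$ norms are positive before the signs begin to alternate; both patterns reappear in the signature characters below.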
Given real highest weights $\lambda_1, \ldots, \lambda_n$, the tensor product $\bigotimes_{i=1}^n M_{\lambda_i}$ carries a Hermitian form equal to the product of the Shapovalov forms. If $(\lambda_1, ..., \lambda_n)$ is generic then these forms are nondegenerate, and this tensor product splits as a direct sum: $ \bigotimes_{i=1}^n M_{\lambda_i} = \bigoplus_{\mu\in P^+} M_{(\sum_i \lambda_i)-\mu} \otimes E_{\mu} $. Here $P^+$ is the positive part of the root lattice and $E_{\mu}\cong \operatorname{Hom} (M_{(\sum \lambda_i)-\mu}, \bigotimes_{i=1}^n M_{\lambda_i})$ is the \textit{level $\mu$ multiplicity space}. These summands are orthogonal, and the induced Hermitian form on each isotypic piece is nondegenerate and contravariant; it follows by uniqueness of the Shapovalov form that the induced Hermitian form on the isotypic piece $M_{(\sum_i \lambda_i)-\mu} \otimes E_{\mu}$ is the tensor product of the Shapovalov form on $M_{(\sum_i \lambda_i)-\mu}$ with a certain uniquely determined nondegenerate Hermitian form on $E_{\mu}$. The dimension of $E_{\mu}$ is known; the purpose of this paper is to investigate its signature, and some applications. We restrict attention to $\mathfrak{sl}_2$ (and, later, $U_q(\mathfrak{sl_2})$). We take the standard basis $E,F,H$ and identify weights with their value on $H$. In this case, the meaning of \textit{generic} is made precise as follows: $\lambda\in \mathbb{R}$ is generic if $\lambda\notin \mathbb{Z}_{\geq0}$, and $(\lambda_1, \ldots, \lambda_n)\in\mathbb{R}^n$ is generic if each of $\lambda_1,\ldots,\lambda_n$ and $\sum_i\lambda_i$ is generic. The Verma module $M_{\lambda}$ is generic if $\lambda$ is; a tensor product of generic Verma modules is a module of the form $\bigotimes_{i=1}^n M_{\lambda_i}$ with $(\lambda_1,\ldots,\lambda_n)$ generic. 
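Since Verma modules have one-dimensional weight spaces, the dimension of the multiplicity space at level $\mu=2m$ can be recovered from a weight count: the $\big(\sum_i \lambda_i\big)-2m$ weight space of the tensor product has dimension $\binom{m+n-1}{n-1}$, and each summand $M_{(\sum_i \lambda_i)-2k}\otimes E_k$ with $k\le m$ contributes $\dim E_k$ to it. A short Python sketch of this standard count (function names are ours):

```python
from math import comb

def weight_mult(n, m):
    # dimension of the (sum_i lambda_i - 2m) weight space of a tensor
    # product of n Verma modules: compositions of m into n ordered
    # nonnegative parts
    return comb(m + n - 1, n - 1)

def dim_E(n, m):
    # each summand M_{lambda-2k} (x) E_k with k <= m contributes dim E_k
    # to this weight space, so dim E_m is a first difference of weight
    # multiplicities
    return weight_mult(n, m) - (weight_mult(n, m - 1) if m > 0 else 0)
```

By Pascal's rule this difference equals $\binom{m+n-2}{n-2}$, the closed form for $\dim E_m$ used later.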
Also $P^+=2\mathbb{Z}_{\geq 0}$; we will write $\mu=2m$ and reindex the multiplicity spaces so that $\bigotimes_{i=1}^n M_{\lambda_i} = \bigoplus_{m\in \mathbb{Z}_{\geq 0}} M_{(\sum_i \lambda_i)-2m} \otimes E_m$. The Shapovalov form is by definition contravariant in the sense that $(E,F)$ and $(H,H)$ are adjoint pairs with respect to it. We denote by $^*$ the antilinear anti-involution of $U(\mathfrak{sl}_2)$ so determined. Our main results are Theorems \ref{thm:class}, \ref{thm:cpb}, and \ref{thm:qcb}. In particular, we find all the definite multiplicity spaces in any tensor product of generic Verma modules (for $\mathfrak{sl}_2$). \begin{theorem}\label{thm:class} The list in Appendix \ref{app:class} classifies all definite multiplicity spaces in any tensor product of generic Verma modules. \end{theorem} \begin{remark} The classification of definite multiplicity spaces given by Theorem \ref{thm:class} provides a classification of a family of unitary representations of a certain quantized quiver variety, one of the first such classifications for any quantized quiver variety. Indeed let $\mathcal{U}=U(\mathfrak{sl}_2)$ and for $\lambda\in\mathbb{C}$ let $\mathcal{U}_{\lambda}$ denote the quotient of $\mathcal{U}$ by the central character associated to $\lambda$. Notice that the quantum Hamiltonian reduction $\mathcal{A}=(\mathcal{U}_{\lambda_0}\otimes\ldots\otimes\mathcal{U}_{\lambda_n}/(\mathcal{U}_{\lambda_0}\otimes\ldots\otimes\mathcal{U}_{\lambda_n})\mathfrak{sl}_2)^{\mathfrak{sl}_2}$ acts naturally on $\operatorname{Hom}(M_{\lambda_0},M_{\lambda_1}\otimes\ldots\otimes M_{\lambda_n})$. Taking $\lambda_1,\ldots,\lambda_n$ generic reals and $\lambda_0=-2m+\lambda_1+\ldots+\lambda_n$, this $\operatorname{Hom}$ space is $E_m$ as above. $\mathcal{A}$ is an example of a quantized quiver variety (see \cite{BL}) for the star-shaped quiver with one central vertex, $n+1$ other vertices each with one edge to the central vertex, and dimension vector $(2,1,\ldots,1)$. 
Notice that for real $\lambda$ the antilinear anti-involution $^*$ descends to one on $\mathcal{U}_{\lambda}$, and hence on $\mathcal{A}$, also denoted $^*$; moreover, essentially by definition, this latter $^*$ is the adjoint operator map with respect to the induced form on $E_m$.\end{remark} Next, we use signatures of multiplicity spaces to study a problem in differential topology. There is a family of $\mathfrak{sl}_2$ \textit{master functions} that arises in the study of Knizhnik-Zamolodchikov equations and spin chain Gaudin models (see \cite{SV}). Namely, for any two positive integers $n,m$ and any two sequences of $n$ real numbers $z = (z_1, ..., z_n)$ and $\lambda = (\lambda_1, ..., \lambda_n)$, there is a hyperplane arrangement $\mathcal{A}=\bigcup_{i=1}^{m}\bigcup_{j=1}^{n}\{t\in \mathbb{C}^m:t_i=z_j\}\cup\bigcup_{1\leq i<j\leq m}\{t\in \mathbb{C}^m:t_i=t_j\}\subset\mathbb{C}^m$ and a master function $F_{z, \lambda,m} : \mathbb{C}^m-\mathcal{A} \to \mathbb{C}$, defined by $ F_{z, \lambda, m}(t_1, t_2, ..., t_m) = \operatorname{Disc}(Q) \cdot \prod_{i} |Q(z_i)|^{-\lambda_i} $, where $\operatorname{Disc}(Q) = \prod_{i < j} (t_i-t_j)^2$ is the discriminant of $Q(x) = \prod_{i=1}^m (x-t_i)$. A critical point $(t_1,\ldots,t_m)$ of $F_{z, \lambda, m}$ is said to be \textit{real} if the corresponding polynomial $Q(x)$ has real coefficients. Computing the number $N_{z, \lambda, m}$ of real critical points of an arbitrary master function $F_{z, \lambda, m}$ is an open and difficult problem. Recently, Mukhin and Tarasov (see \cite{MT}) have given lower bounds for numbers of real solutions in problems appearing in Schubert calculus by computing signatures of Hermitian forms on the Gaudin model. We study real critical points of the master function using an approach based on Mukhin and Tarasov's work, a Bethe ansatz setup due to Etingof, Frenkel, and Kirillov (see \cite{EFK}), and a Bethe vector characterization due to Feigin, Frenkel, and Rybnikov (see \cite{FFR}).
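Evaluating $F_{z,\lambda,m}$ at real points is straightforward from the definition, and real critical points could then be sought numerically by standard root-finding. A minimal sketch restricted to real $t$ (names are ours, not from the paper):

```python
def master_function(t, z, lam):
    """Evaluate F_{z,lambda,m}(t) = Disc(Q) * prod_i |Q(z_i)|^{-lambda_i}
    with Q(x) = prod_k (x - t_k), directly from the definition.
    Restricted to real t for simplicity; names are ours."""
    m = len(t)
    disc = 1.0
    for i in range(m):
        for j in range(i + 1, m):
            disc *= (t[i] - t[j]) ** 2  # Disc(Q) = prod_{i<j} (t_i - t_j)^2
    val = disc
    for zi, li in zip(z, lam):
        Qz = 1.0
        for tk in t:
            Qz *= (zi - tk)  # Q(z_i)
        val *= abs(Qz) ** (-li)
    return val
```

For instance, $t=(0,2)$, $z=(1)$, $\lambda=(1)$ gives $\operatorname{Disc}(Q)=4$ and $|Q(1)|^{-1}=1$, hence the value $4$.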
Our second main result, Theorem \ref{thm:cpb}, uses multiplicity space signatures to give a lower bound for the number of real critical points of a generic\footnote{We call a master function generic if its real parameters are generic.} $\mathfrak{sl}_2$ master function. \begin{theorem}\label{thm:cpb} For a generic $\mathfrak{sl}_2$ master function $F_{z, \lambda, m}$, we have $ |\operatorname{sgn}(E_m)| \leq N_{z, \lambda, m} $. \end{theorem} \noindent Here $\operatorname{sgn}(E_m)$ denotes the signature of the space $E_m$ in the decomposition of $\bigotimes_{i=1}^n M_{\lambda_i}$.\\ We use this lower bound on $N_{z, \lambda, m}$ together with an upper bound on $N_{z, \lambda, m}$ given by Mukhin and Varchenko (see \cite{MV}) to show that $\operatorname{dim}(E_m) = \binom{m+n-2}{n-2}$ is a good approximation for $N_{z, \lambda, m}$ for $m$ large. \begin{corollary}\label{cor:cpbsharp} For fixed generic $\lambda$ and $z$ sequences, we have $ \lim_{m \to \infty} \frac{N_{z,\lambda,m}}{\binom{m+n-2}{n-2}} = 1. $ \end{corollary} Finally, we extend our work on signatures to quantum groups. For generic\footnote{We call a complex number on the unit circle generic if it is not a root of unity.} $q\in\mathbb{C}$ on the unit circle, the quantum enveloping algebra $U_q(\mathfrak{sl}_2)$ is the standard $q$-deformation of the enveloping algebra of $\mathfrak{sl}_2$ (see \cite{J}). For each nonnegative integer $a$, there is an $(a+1)$-dimensional simple representation of $U_q(\mathfrak{sl}_2)$, denoted $\widetilde{V}_a$, and this representation carries a Shapovalov form. Moreover, the tensor product $\widetilde{V}_a \otimes \widetilde{V}_b$ of two such representations (which is again a representation) carries an induced contravariant nondegenerate Hermitian form, defined using the Drinfeld coboundary structure (see \cite{D}). We recall the definition. 
There is a \textit{standard universal $R$-matrix}, defined for our choice of coproduct as $ R = q^{\frac{H \otimes H}{2}} \sum_{i \geq 0} q^{\binom{i}{2}} \cdot \frac{(q-q^{-1})^{i}}{[i]!} \cdot F^i \otimes E^i $, where $[i] = \frac{q^i - q^{-i}}{q-q^{-1}}$ and $[i]! = [1][2]...[i]$. The matrix $\overline{R}$ of the Drinfeld coboundary structure is defined in terms of the $R$-matrix by $\overline{R} = R(R^{21}R)^{-1/2}$. The form on the tensor product $\widetilde{V}_a \otimes \widetilde{V}_b$ is given by $(v_1 \otimes w_1, v_2 \otimes w_2) = \sum_i (a_iv_1, v_2) \cdot (b_iw_1, w_2)$, where $\overline{R} = \sum_i a_i \otimes b_i$ and the pairings $(v_1, v_2)$ and $(w_1, w_2)$ are computed using the forms on $\widetilde{V}_a$ and $\widetilde{V}_b$. By the cactus axiom (see \cite{KT} for the definition), this construction can be extended to a construction of a unique Hermitian form on the tensor product of any number of these finite dimensional simple $U_q(\mathfrak{sl}_2)$ representations. Because the operator $\operatorname{twist}\circ\overline{R}$ is an isomorphism $\widetilde{V}_a \otimes \widetilde{V}_b\to\widetilde{V}_b \otimes \widetilde{V}_a$, the form on the tensor product is contravariant (in a sense to be explained later). As in the $\mathfrak{sl}_2$ case, the tensor product decomposes as the direct sum of other such finite-dimensional representations with some multiplicity spaces, each carrying its own induced form. Our third main result, Theorem \ref{thm:qcb}, is a combinatorial formula for the multiplicity space signatures in an arbitrary tensor product of finite dimensional simple $U_q(\mathfrak{sl}_2)$ representations. 
\begin{theorem}\label{thm:qcb} We have the decomposition $ \bigotimes_{i=1}^n \widetilde{V}_{a_i} \cong \bigoplus_{m \geq 0} \widetilde{V}_{(\sum_i a_i) - 2m} \otimes \widetilde{E}_m $, where $$ \operatorname{sgn}(\widetilde{E}_m) = \sum_{m_1+m_2+\cdots +m_{n-1} = m} \prod_{j=1}^{n-1} \operatorname{sign} \Big[ \binom{1+\sum_{k=1}^{j+1} a_k - \sum_{k=1}^j m_k}{m_j}_q \binom{\sum_{k=1}^j a_k - \sum_{k=1}^{j-1} m_k}{m_j}_q \binom{a_{j+1}}{m_j}_q \Big]. $$ \end{theorem} \begin{remark*} \begin{enumerate}\item When $q = 1$, the formula is still valid (even though $1$ is not generic) and gives multiplicity space signatures for a tensor product of generic $\mathfrak{sl}_2$ Verma modules.\ \item The content of this theorem is the case $n=2$; the case of general $n$ follows (e.g. by induction). In particular we make no claim as to the definiteness of these spaces in the quantum case, and suppose the form given above may not be particularly well adapted to that question.\end{enumerate} \end{remark*}\ The paper is organized as follows. In Section \ref{sec:prelims}, we cover definitions and background information. In Section \ref{sec:cpb}, we prove Theorem \ref{thm:cpb} and the approximation given by Corollary \ref{cor:cpbsharp}. In Section \ref{sec:qcb}, we prove Theorem \ref{thm:qcb}. We leave the proof (and indeed the statement) of the classification claimed by Theorem \ref{thm:class} to the appendix. \section{Preliminaries}\label{sec:prelims} \subsection{Signature characters} Let $V$ be a representation of $\mathfrak{sl}_2$ on which $H$ acts diagonally, with finite dimensional weight spaces, and whose weights live in the union of finitely many strings of the form $\lambda-\mathbb{Z}^+$, $\lambda\in\mathbb{R}$. The tensor product of any two such has the same property. Assume furthermore $V$ is endowed with a nondegenerate contravariant Hermitian form --- that is to say, one for which the adjointness conditions $H^* = H$, $E^* = F$, and $F^* = E$ hold.
Examples include the Verma modules $M_\lambda$ with $\lambda\in\mathbb{R}\backslash\mathbb{Z}^+$ together with their Shapovalov forms. Notice that the weight spaces in any such representation are orthogonal, and that direct sums and tensor products of these representations naturally have the same properties. To any such representation $V$ of $\mathfrak{sl}_2$, we can associate a \textit{signature character} $\operatorname{ch}_s(V)$, which is a (possibly infinite) sum of powers of the formal symbol $e$ with coefficients in the ring $\mathbb{Z}[s]/(s^2-1)$. Namely, we put $\operatorname{ch}_s(V)=\sum_{\alpha} (a_\alpha+b_\alpha s) e^{\alpha} $, where the sum is taken over all weights $\alpha$. Here $a_\alpha$ is the maximal dimension of a positive definite subspace of the weight space $V_\alpha$, and $b_\alpha$ is the maximal dimension of a negative definite subspace of $V_\alpha$. Thus $a_\alpha+b_\alpha$ is the dimension of $V_\alpha$ and $a_\alpha-b_\alpha$ is the signature of $V_\alpha$. The signature character respects the direct sum and tensor product: $\operatorname{ch}_s( \bigotimes_i V_i) = \prod_i \operatorname{ch}_s(V_i)$ and $\operatorname{ch}_s( \bigoplus_i V_i) = \sum_i \operatorname{ch}_s(V_i).$ We study the tensor products of generic Verma modules using signature characters. Let $\lambda_1, \lambda_2, ..., \lambda_n$ be generic reals. For the rest of this paper, we will denote $\beta_{\lambda_i} = \operatorname{ch}_s(M_{\lambda_i})$. \begin{proposition}\label{founder} We have $ \operatorname{ch}_s(M_{\lambda_i}) = \beta_{\lambda_i} = \begin{cases} \sum_{j=0}^{\infty} s^je^{\lambda_i-2j} &\mbox{if } \lambda_i < 0, \\ \sum_{j=0}^{\floor{\lambda_i}} e^{\lambda_i-2j} + \sum_{j=\ceil{\lambda_i}}^{\infty} s^{j+\ceil{\lambda_i}}e^{\lambda_i-2j} &\mbox{if } \lambda_i > 0. \end{cases} $ \end{proposition} Here the notations $\lfloor-\rfloor$ and $\lceil-\rceil$ denote respectively the floor and ceiling functions $\mathbb{R}\to\mathbb{Z}$. 
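Proposition \ref{founder} can be sanity-checked numerically (this sketch is ours, not part of the paper) against the classical Shapovalov norms $\langle F^j v, F^j v \rangle = \prod_{k=1}^{j} k(\lambda - k + 1)$:

```python
import math

def shapovalov_sign(lam, j):
    """Sign of <F^j v, F^j v> in M_lam, from the norm prod_{k=1}^j k*(lam-k+1)."""
    s = 1.0
    for k in range(1, j + 1):
        s *= k * (lam - k + 1)
    return 1 if s > 0 else -1

def founder_sign(lam, j):
    """Sign predicted by Proposition `founder`, specialized at s = -1."""
    if lam < 0:
        return (-1) ** j
    if j <= math.floor(lam):
        return 1
    return (-1) ** (j + math.ceil(lam))

# compare the two for several generic (nonintegral) highest weights
for lam in [-1.5, -0.3, 0.7, 2.5, 4.2]:
    for j in range(0, 12):
        assert shapovalov_sign(lam, j) == founder_sign(lam, j)
```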
They will be used throughout the paper.\ There is a unique decomposition of signature characters $\prod_{i=1}^n \beta_{\lambda_i} = \sum_{m=0}^{\infty} (a_m+sb_m)\cdot \beta_{\lambda-2m},$ where $\lambda=\sum_{i=1}^{n}\lambda_i$. This decomposition exactly encodes the tensor product decomposition $\bigotimes_{i=1}^n M_{\lambda_i} \cong \bigoplus_{m=0}^{\infty} M_{\lambda-2m} \otimes E_m$, in the sense that each multiplicity space $E_m$ has dimension equal to $\binom{m+n-2}{n-2} = a_m+b_m$ and signature equal to $a_m-b_m$. Hence, determining when $E_m$ is definite amounts to determining when $a_m = 0$ or $b_m = 0$. \subsection{Quantum groups case} For indeterminate $q$, the algebra $U_q(\mathfrak{sl}_2)$ is generated over $\mathbb{C}[q, q^{-1}, \frac{1}{q-q^{-1}}]$ by $E, F, K, K^{-1}$ with defining relations $KEK^{-1} = q^2E$, $KFK^{-1} = q^{-2}F$, and $EF-FE = \frac{K-K^{-1}}{q-q^{-1}}.$ For each $a\in \mathbb{Z}^+$, there is a certain simple $U_q(\mathfrak{sl}_2)$-module $\widetilde{V}_a$. It is free over $\mathbb{C}[q, q^{-1}, \frac{1}{q-q^{-1}}]$ of rank $a+1$, with a basis $\{ v_i \}_{i=0}^{a}$ on which the operators $E$, $F$, and $K$ act by $Ev_i = [a-i+1]v_{i-1}$, $Fv_i = [i+1]v_{i+1}$, and $Kv_i = q^{a-2i}v_i$, where $[k] = \frac{q^k - q^{-k}}{q-q^{-1}}$. As in the classical case, the representation $\widetilde{V}_a$ carries a unique contravariant nondegenerate Hermitian form $(,)$, as a free $\mathbb{C}[q, q^{-1}, \frac{1}{q-q^{-1}}]$-module, satisfying $(v_0,v_0)=1$. Here by contravariant we mean that the adjointness conditions $E^*=F, F^*=E$ and $K^*=K^{-1}$ hold with respect to the form, and by Hermitian we mean sesquilinear with respect to the involution of the ground ring determined by $z\to\overline{z}$ for $z\in\mathbb{C}$ and $q\to q^{-1}$. Thus it will usually be possible to specialize this picture to $q$ lying on the unit circle. This form is also known as the Shapovalov form. 
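A small numerical aside (ours, not part of the paper): at $q = e^{i\theta}$ on the unit circle, the quantum integers $[k]$ specialize to the real numbers $\sin(k\theta)/\sin\theta$, which is what makes sign and signature questions meaningful after such a specialization:

```python
import cmath
import math

def q_int(k, theta):
    """Quantum integer [k] = (q^k - q^{-k})/(q - q^{-1}) at q = e^{i*theta}."""
    q = cmath.exp(1j * theta)
    return (q ** k - q ** (-k)) / (q - q ** (-1))

# at |q| = 1 the quantum integers are real and equal sin(k*theta)/sin(theta)
theta = 0.37
for k in range(1, 10):
    z = q_int(k, theta)
    assert abs(z.imag) < 1e-12
    assert abs(z.real - math.sin(k * theta) / math.sin(theta)) < 1e-12
```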
\begin{proposition}\label{prop:qshap} Under the Shapovalov form $(,)$ and the normalization $(v_0, v_0) = 1$, we have $ (v_i, v_i) = \binom{a}{i}_q$.\\ Here, $\binom{a}{i}_q = \frac{[a]!}{[i]![a-i]!}$ denotes the quantum binomial coefficient. \end{proposition} The tensor product $\bigotimes_{i=1}^n \widetilde{V}_{a_i}$ is a representation of $U_q(\mathfrak{sl}_2)$ under the iteration of the coproduct map defined by $\Delta(E) = E \otimes 1 + K \otimes E$, $\Delta(F) = 1 \otimes F + F \otimes K^{-1}$, and $\Delta(K^{\pm 1}) = K^{\pm 1} \otimes K^{\pm 1}$. Furthermore, the Shapovalov forms and the Drinfeld coboundary induce a nondegenerate contravariant Hermitian form on the representation $\bigotimes_{i=1}^n \widetilde{V}_{a_i}$. This tensor product decomposes into a direct sum of finite dimensional simple $U_q(\mathfrak{sl}_2)$ representations $\widetilde{V}_{a}$ with multiplicities; as before, isotypic pieces are orthogonal, and the uniqueness of the Shapovalov form gives each multiplicity space an induced Hermitian form. In Section \ref{sec:qcb} we outline the proof of Theorem \ref{thm:qcb} which gives a signature formula for these multiplicity spaces. \section{Critical point bound}\label{sec:cpb} In this section we use the Bethe ansatz method to derive the lower bound on the number of real critical points of the master function given by Theorem \ref{thm:cpb}. Then we use the bound to derive the asymptotic approximation for the number of real critical points of the master function given by Corollary \ref{cor:cpbsharp}. Throughout this section, fix two generic real sequences $\lambda_1,\ldots,\lambda_n$ and $z_1,\ldots,z_n$. Fix a positive integer $m$ and let $U\subset\mathbb{C}^m$ be given by $U=\{(t_1,\ldots,t_m)\,|\,t_i\neq z_k \text{ for all } i,k\}$. We consider two functions on $U$. 
The first is the $\mathbb{C}[x]$-valued function $Q(x)(t_1,\ldots,t_m)=\prod_{i=1}^m (x-t_i)$ and the second is the $\mathbb{C}$-valued master function: $$ F_{z, \lambda,m}(t_1, t_2, ..., t_m) = \operatorname{Disc}(Q) \cdot \prod_{k} |Q(z_k)|^{-\lambda_k} = \prod_{i < j} (t_i-t_j)^2 \cdot \prod_{i, k} |t_i-z_k|^{-\lambda_k}. $$ We will assume that $z_k\neq z_l$ for $k\neq l$. \subsection{Preliminaries for the proof of Theorem \ref{thm:cpb}}\label{sec:cpbprelims} We begin with some definitions and preliminary results. \begin{definition*}\begin{itemize}\item For any $X \in U(\mathfrak{sl}_2)$ and any $i$ with $1 \leq i \leq n$, define an operator $X_i$ on $\bigotimes_{j=1}^n M_{\lambda_j}$ by $X_i = \underbrace{1 \otimes \cdots \otimes 1}_{i-1 \text{ factors}} \otimes X \otimes \underbrace{1 \otimes \cdots \otimes 1}_{n-i \text{ factors}}$.\ \item For each $i$ and $j$ with $1 \leq i \ne j \leq n$, define the \textit{Casimir tensor} $\Omega_{ij}$ by $ \Omega_{ij} = E_i F_j + F_i E_j + \frac{H_i H_j}{2}. $\ \item For each $i$ with $1 \leq i \leq n$, define the \textit{Gaudin model Hamiltonian} ${\mathcal{H}}_i$ by $ \mathcal{H}_i = \sum_{j \ne i} \frac{\Omega_{ij}}{z_i-z_j}. $\ \item For any complex number $t$ distinct from all $z_i$, define an operator $Z(t) = \sum_{i=1}^n \frac{H_i}{t-z_i}$.\ \item For any $t$ as above, define an operator $Y(t)$ by $Y(t)= \sum_{i=1}^n \frac{F_i}{t-z_i}$.\ \item For $(t_1, t_2, ..., t_m)\in U$ define $b_Q = Y(t_1)Y(t_2) \cdots Y(t_m)v$, where $v$ is the tensor product of the canonical generators of each $M_{\lambda_i}$. \end{itemize}\end{definition*} \begin{remark*} There is a clash of notation: we have the multiplicity spaces $E_m$, and the operators defined above $E_j$. This should not be cause for confusion.\end{remark*} We now assume that $(t_1,\ldots,t_m)$ is a (complex) critical point of $F_{z, \lambda, m}$. We state some preliminary results, whose proofs can largely be found in \cite{EFK} and \cite{FFR}. 
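For concreteness (our addition, not the paper's), the critical point equations $\partial \log F_{z,\lambda,m}/\partial t_i = 0$ read $\sum_{j \neq i} \frac{2}{t_i - t_j} = \sum_{k} \frac{\lambda_k}{t_i - z_k}$; here is a short symbolic check for $m = n = 2$, assuming the sympy library:

```python
import sympy as sp

m, n = 2, 2
t = sp.symbols('t1 t2')
z = sp.symbols('z1 z2')
lam = sp.symbols('lam1 lam2', positive=True)

# log of the master function, dropping absolute values
# (valid locally, away from the hyperplanes t_i = z_k)
logF = (2 * sp.log(t[0] - t[1])
        - sum(lam[k] * sp.log(t[i] - z[k]) for i in range(m) for k in range(n)))

# d/dt_i log F = sum_{j != i} 2/(t_i - t_j) - sum_k lam_k/(t_i - z_k)
for i in range(m):
    bethe = (sum(2 / (t[i] - t[j]) for j in range(m) if j != i)
             - sum(lam[k] / (t[i] - z[k]) for k in range(n)))
    assert sp.simplify(sp.diff(logF, t[i]) - bethe) == 0
```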
\begin{lemma}\label{lem:hcom} The Gaudin model Hamiltonians $\mathcal{H}_i$ commute with each other, they act on the multiplicity space $E_m=\operatorname{Hom}(M_{(\sum_i \lambda_i)-2m}, \bigotimes_{i} M_{\lambda_i})$ and they are self-adjoint under the induced Hermitian form on this space. \end{lemma} We should clarify what it means for $\mathcal{H}_i$ to act on $E_m=\operatorname{Hom}(M_{(\sum_i \lambda_i)-2m}, \bigotimes_{i} M_{\lambda_i})$. $\mathcal{H}_i$ is defined as an operator on $\bigotimes_{i} M_{\lambda_i}$. It turns out that these operators intertwine the action of $\mathfrak{sl}_2$ and so act on $E_m=\operatorname{Hom}(M_{(\sum_i \lambda_i)-2m}, \bigotimes_{i} M_{\lambda_i})$ by composition. \begin{proof} See \cite{EFK}. \end{proof} Notice that $E_m$ is canonically identified with the subspace of $\bigotimes_{i} M_{\lambda_i}$ consisting of all vectors of weight $\sum_i\lambda_i-2m$ and annihilated by $E$. Furthermore, the action of $\mathcal{H}_i$ on $E_m$ coincides under this identification with the restriction of the action of $\mathcal{H}_i$ on $\bigotimes_{i} M_{\lambda_i}$. \begin{lemma}\label{lem:eb_Q} We have $Eb_Q = 0$. \end{lemma} \begin{proof} See \cite{EFK}. \end{proof} \begin{lemma}\label{lem:ZYcom} We have $$ [Z(t_a), Y(t_b)] = \frac{2}{t_a-t_b}(Y(t_a)-Y(t_b)). $$ \end{lemma} \begin{proof} See \cite{EFK}. \end{proof} \begin{lemma}\label{lem:hb} We have $ [\mathcal{H}_i, Y(t_1)Y(t_2) \cdots Y(t_m)]v = \frac{-\lambda_iQ'(z_i)}{Q(z_i)}b_Q. $ \end{lemma} \begin{proof} See \cite{EFK}. \end{proof} As a corollary to Lemma \ref{lem:hb}, we have that $b_Q$ is an eigenvector of each $\mathcal{H}_i$ with eigenvalue $ \frac{-\lambda_iQ'(z_i)}{Q(z_i)} + \left( \frac{\lambda_i}{2} \sum_{j \ne i}^n \frac{\lambda_j}{z_i-z_j} \right). $ (This is just the eigenvalue computed in Lemma \ref{lem:hb} plus the $\mathcal{H}_i$-eigenvalue of $v$.) 
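As a minimal illustration of this eigenvalue statement (our numerical sketch; the matrix of $\Omega_{12}$ on the relevant weight space is worked out by hand from the Verma module relations $EF^jv = j(\lambda-j+1)F^{j-1}v$ and $HF^jv = (\lambda-2j)F^jv$), take $n = 2$ and $m = 1$:

```python
# weight-(lam1+lam2-2) subspace of M_{lam1} (x) M_{lam2}: basis u1 = Fv(x)v, u2 = v(x)Fv
lam1, lam2 = 1.3, -2.7
z1, z2 = 0.0, 1.0

# unique Bethe root for m = 1: lam1/(t - z1) + lam2/(t - z2) = 0
t = (lam1 * z2 + lam2 * z1) / (lam1 + lam2)
assert abs(lam1 / (t - z1) + lam2 / (t - z2)) < 1e-12

def omega12(a, b):
    """Apply Omega_12 = E1 F2 + F1 E2 + H1 H2 / 2 to a*u1 + b*u2."""
    c1 = a * (lam1 - 2) * lam2 / 2 + b * lam2
    c2 = a * lam1 + b * lam1 * (lam2 - 2) / 2
    return c1, c2

# b_Q = Y(t)v has coordinates (a, b) in the basis (u1, u2)
a, b = 1 / (t - z1), 1 / (t - z2)
c1, c2 = omega12(a, b)

# claimed H_1-eigenvalue, with Q(x) = x - t so Q'(z1)/Q(z1) = 1/(z1 - t)
mu = -lam1 / (z1 - t) + lam1 * lam2 / (2 * (z1 - z2))
assert abs(c1 / (z1 - z2) - mu * a) < 1e-12
assert abs(c2 / (z1 - z2) - mu * b) < 1e-12
```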
\begin{lemma}\label{lemyo} The joint eigenvalues of the operators $\mathcal{H}_i$ each have multiplicity $1$, and the eigenvectors of the form $Y(t_1)Y(t_2) \cdots Y(t_m)v$ are all the joint eigenvectors up to scalars. \end{lemma} \begin{proof} See \cite{FFR}. \end{proof} \begin{lemma}\label{bQreal} If the joint eigenvector $b_Q$ has real joint eigenvalue, then the critical point $(t_1, ..., t_m)$ is real. That is, the corresponding polynomial $Q$ has real coefficients. \end{lemma} \begin{proof} See Appendix \ref{app:bQreal}. \end{proof} Notice that the converse is immediate from the remark below Lemma \ref{lem:hb}. \subsection{Proof of Theorem \ref{thm:cpb}} We now tie together the results from Section \ref{sec:cpbprelims} to deduce the critical point bound. The multiplicity space $E_m$ is canonically isomorphic to the subspace of all $H$-eigenvectors of $\bigotimes_i M_{\lambda_i}$ of eigenvalue $-2m+\sum_i \lambda_i$ which are annihilated by $E$. Since $Eb_Q = 0$ and $Hb_Q = (-2m+\sum_i \lambda_i)b_Q$, we may view $b_Q$ as an element of $E_m$. Note that the Gaudin Hamiltonians $\mathcal{H}_i$ descend to commuting self-adjoint operators on $E_m$. We know that the joint eigenspaces for the $\mathcal{H}_i$ on $E_m$ are all one-dimensional and spanned by elements $b_Q$ corresponding to critical points of the master function. Moreover those joint eigenspaces with real joint eigenvalue are precisely those corresponding to real critical points. Thus the result follows from the following easy lemma in linear algebra: \begin{lemma}Let $V$ be a finite-dimensional complex vector space equipped with a non-degenerate Hermitian form, and $\mathcal{H}_i$ be finitely many commuting self-adjoint operators on $V$. 
Then the signature of $V$ is a lower bound for the maximal number of linearly independent real joint eigenvectors for the operators $\mathcal{H}_i$.\end{lemma} \subsection{Preliminaries for the proof of Corollary \ref{cor:cpbsharp}} \begin{definition*} Define $\beta_n$ for integer $n$ by $$ \beta_{n} = e^{-\epsilon}\beta_{n+\epsilon} $$ for any $0 < \epsilon < 1$. This is well-defined. \end{definition*} \begin{definition*} For any real $\mu$, let $\beta_\mu^-$ be $\beta_\mu$ evaluated at $s=-1$. \end{definition*} \begin{definition*} For each nonnegative integer $n$, define a polynomial $V_{n}$ by $$ V_{n}(x) = 1+2x+2x^2+\cdots+2x^{n}. $$ \end{definition*} We state the following results without proofs. \begin{lemma} For an integer $n$, we have $$ \beta_{n}^- = \begin{cases} \beta_{-1}^- \cdot e^{n+1} \cdot V_{n+1}(e^{-2}) &\mbox{if } n \geq 0 \\ \beta_{-1}^- \cdot e^{n+1} &\mbox{if } n < 0. \end{cases} $$ \end{lemma} \begin{lemma}\label{lem:finperturbs} For any integer $n_1$ and any negative integer $n_2$, we have $$ e^{n_1}\beta_{n_2} = \beta_{n_1+n_2} $$ if $n_1+n_2 < 0$. Also, if $n_1+n_2 \geq 0$, then $e^{n_1}\beta_{n_2}$ can be written as a finite sum of signature characters. \end{lemma} \subsection{Proof of Corollary \ref{cor:cpbsharp}} Recall that $N_{z, \lambda, m}$ denotes the number of real critical points of $F_{z, \lambda,m}$. From Theorem \ref{thm:cpb}, we know $|\operatorname{sgn}(E_m)| \leq N_m \leq \operatorname{dim}(E_m)= \binom{m+n-2}{n-2}$. We will show, for fixed generic $\lambda_i$'s, that as $m$ approaches infinity, the ratio $$\frac{|\operatorname{sgn}(E_m)|}{\binom{m+n-2}{n-2}}$$ approaches $1$. This will prove Corollary \ref{cor:cpbsharp}. The tensor product $M = \bigotimes_{i=1}^n M_{\lambda_i}$ has signature character equal to $\prod_{i=1}^n \beta_{\lambda_i}$. 
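The first of the two lemmas above can be verified on truncated coefficient sequences (our numerical sketch, not the paper's):

```python
def beta_int_minus(n, N):
    """Coefficients of e^{n-2j}, j = 0..N-1, in beta_n^- = e^{-eps} beta_{n+eps}^-,
    read off from Proposition `founder` applied to n + eps, 0 < eps < 1."""
    return [1] * (n + 1) + [(-1) ** (j + n + 1) for j in range(n + 1, N)]

def rhs(n, N):
    """beta_{-1}^- * e^{n+1} * V_{n+1}(e^{-2}), as coefficients of e^{n-2j}."""
    v = [1] + [2] * (n + 1)                      # coefficients of V_{n+1}
    b = [(-1) ** j for j in range(N)]            # coefficients of beta_{-1}^-
    return [sum(v[k] * b[j - k] for k in range(min(j, n + 1) + 1))
            for j in range(N)]

for n in range(0, 6):
    assert beta_int_minus(n, 30) == rhs(n, 30)

# V_n(-1) telescopes: 1 - 2 + 2 - ... = (-1)^n, so it always equals +1 or -1
assert all(1 + 2 * sum((-1) ** k for k in range(1, n + 1)) == (-1) ** n
           for n in range(12))
```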
We need to examine the signatures of the multiplicity spaces $E_m$ in the decomposition of $M$, which amounts to examining the coefficients of $\beta_{\lambda-2m}^-$ in the decomposition of $\prod_{i=1}^n \beta_{\lambda_i}^-$. We have \begin{align*} \prod_{i=1}^n \beta_{\lambda_i}^- &= e^{\sum_{i=1}^n \{ \lambda_i \}} \cdot \prod_{i=1}^n \beta_{\floor{\lambda_i}}^- \\ &= e^{\sum_{i=1}^n \{ \lambda_i \}} \cdot \left( \prod_{i=1}^p \beta_{-1}^- \cdot e^{\ceil{\lambda_i}} \cdot V_{\ceil{\lambda_i}}(e^{-2}) \right) \cdot \left( \prod_{i=p+1}^n \beta_{-1}^- \cdot e^{\ceil{\lambda_i}} \right) \\ &= e^{\sum_{i=1}^n \ceil{\lambda_i}+\{ \lambda_i \} } \cdot \left( \beta_{-1}^- \right)^n \cdot \left( \prod_{i=1}^p V_{\ceil{\lambda_i}}(e^{-2}) \right) \\ &= e^{\sum_{i=1}^n \ceil{\lambda_i}+\{ \lambda_i \} } \cdot \left( \sum_{m=0}^{\infty} \beta_{-n-2m}^- \cdot (-1)^m \cdot \binom{m+n-2}{n-2} \right) \cdot \left( \prod_{i=1}^p V_{\ceil{\lambda_i}}(e^{-2}) \right). \end{align*} Combining the above computation with Lemma \ref{lem:finperturbs}, we obtain that for all sufficiently large $m$, \begin{align}\label{eqn:emlarge} \operatorname{sgn}(E_m) \cdot (-1)^m = \sum_{i=0}^T \binom{m+n-2-i}{n-2} \cdot (-1)^{i} \cdot c_i, \end{align} where $T = \sum_{i=1}^p \ceil{\lambda_i}$ and the $c_i$'s are defined by the polynomial identity $$ \prod_{i=1}^p V_{\ceil{\lambda_i}}(x) = \sum_{i=0}^{T} c_ix^i. $$ The RHS in (\ref{eqn:emlarge}) is a polynomial in $m$ of degree $n-2$. Its leading term is $$ \frac{m^{n-2}}{(n-2)!} \cdot (c_0 - c_1 + \cdots + (-1)^Tc_T). $$ We will show that this leading term is nonzero and compute it explicitly. We have $\sum_{i=0}^T (-1)^ic_i = \prod_{i=1}^p V_{\ceil{\lambda_i}}(-1)$. But $V_{\ceil{\lambda_i}}(-1)$ is equal to $+1$ or $-1$ for each $i$, so $\sum_{i=0}^T (-1)^ic_i$ also equals $+1$ or $-1$. 
Therefore, we obtain that $\pm \frac{m^{n-2}}{(n-2)!}$ is the leading term of a polynomial which equals $\operatorname{sgn}(E_m) \cdot (-1)^m$ for all sufficiently large $m$. We also know that $\frac{m^{n-2}}{(n-2)!}$ is the leading term of the polynomial in $m$ defined by the binomial coefficient $\binom{m+n-2}{n-2}$. Hence $$ \lim_{m \to \infty} \frac{|\operatorname{sgn}(E_m)|}{\binom{m+n-2}{n-2}} = 1 $$ as desired. \section{Signatures for the quantum group case}\label{sec:qcb} In this section we prove Theorem \ref{thm:qcb}. Throughout this section, fix a generic $q$ on the complex unit circle and nonnegative integers $a$ and $b$. We start by proving some preliminary results. \subsection{Preliminaries for the proof of Theorem \ref{thm:qcb}} Let $v_0$ and $w_0$ denote the standard highest weight vectors in the simple representations $\widetilde{V}_a$ and $\widetilde{V}_b$ of $U_q(\mathfrak{sl}_2)$, respectively. Recall that $$ \widetilde{V}_a \otimes \widetilde{V}_b \cong \bigoplus_{m=0}^{\operatorname{min} \{ a,b \} } \widetilde{V}_{a+b-2m}. $$ For each subrepresentation $\widetilde{V}_{a+b-2m} \subset \widetilde{V}_a \otimes \widetilde{V}_b$, we shall see that there is a unique highest weight vector of the form $$ v_0 \otimes w_m + \sum_{i=1}^m c_{m,i} \cdot v_i \otimes w_{m-i}, $$ where $c_{m,i}$ are scalars. We will call the highest weight vector determined by these scalars the \textit{unit-normalized} highest weight vector. Recall the universal $R$-matrix $$ R = q^{\frac{H \otimes H}{2}} \sum_{n \geq 0} q^{\binom{n}{2}} \cdot \frac{(q-q^{-1})^n}{[n]!} F^n \otimes E^n. $$ If $T$ is the twist operator $u\otimes v\mapsto v\otimes u$, then $TR:\widetilde{V}_a\otimes\widetilde{V}_b\to\widetilde{V}_b\otimes\widetilde{V}_a$ is an isomorphism of representations; this gives the braiding on the appropriate tensor category of $U_q(\mathfrak{sl}_2)$-modules. 
Since the highest weight spaces of $\widetilde{V}_a\otimes\widetilde{V}_b$ are all one-dimensional, we see that the image under $TR$ of the unit-normalized highest weight vector of $\widetilde{V}_{a+b-2m}\subset \widetilde{V}_a\otimes\widetilde{V}_b$ must be a scalar multiple of the unit-normalized highest weight vector of $\widetilde{V}_{a+b-2m}\subset \widetilde{V}_b\otimes\widetilde{V}_a$. In the following two lemmas, we compute this scalar, and the scalars $c_{m,i}$. \begin{lemma}\label{lem:unorm} In the subrepresentation $\widetilde{V}_{a+b-2m} \subset \widetilde{V}_a \otimes \widetilde{V}_b$, the unit-normalized highest weight vector $u$ is $$ u=\sum_{i=0}^m c_{m,i} \cdot v_i \otimes w_{m-i}, $$ where $$c_{m,i} = (-1)^i \cdot q^{ai-i^2+i} \cdot \frac{\binom{b-m+i}{i}_q}{\binom{a}{i}_q}.$$ \end{lemma} \begin{proof} Certainly any vector $u$ of weight $a+b-2m$ is of the form $\sum_{i=0}^m c_{m,i} \cdot v_i \otimes w_{m-i}$. The condition that $u$ is a highest weight vector is equivalent to: $$ 0 = \Delta(E)u = (E \otimes 1 + K \otimes E)u = \sum_{i=1}^m \left( c_{m,i} \cdot Ev_i \otimes w_{m-i} + c_{m,i-1} \cdot Kv_{i-1} \otimes Ew_{m-i+1} \right). $$ Recall that $$Ev_i = [a-i+1]\cdot v_{i-1},$$ $$Kv_{i-1} = q^{a-2i+2}\cdot v_{i-1},$$ $$Ew_{m-i+1} = [b-m+i]\cdot w_{m-i}.$$ Thus the above equation is equivalent to $$ [a-i+1]c_{m,i} + c_{m,i-1} \cdot q^{a-2i+2}[b-m+i]=0 $$ which is to say $$ c_{m,i} = (-1) \cdot q^{a-2i+2} \cdot \frac{[b-m+i]}{[a-i+1]} \cdot c_{m,i-1}, $$ for $1\leq i\leq m$. Equivalently $$ c_{m,i} = (-1)^i \cdot q^{ai-i^2+i} \cdot \frac{\binom{b-m+i}{i}_q}{\binom{a}{i}_q}c_{m,0}. $$ We set $c_{m,0}=1$ to get the unit-normalized highest weight vector. 
\end{proof} \begin{lemma}\label{lem:rnorm} With $u$ as above, we have $$ TRu=\sum_{i=0}^m c_{m,i}' \cdot w_i \otimes v_{m-i}, $$ where $$c_{m,0}' = (-1)^m \cdot q^{ab/2-am-bm+m^2-m} \cdot \frac{\binom{b}{m}_q}{\binom{a}{m}_q}$$ and $$ c_{m,i}' = (-1)^i \cdot q^{bi-i^2+i} \cdot \frac{\binom{a-m+i}{i}_q}{\binom{b}{i}_q}\cdot c_{m,0}'. $$ \end{lemma} \begin{proof} The second point is immediate from the preceding lemma. In computing $Ru$, the only way to get a multiple of $v_0\otimes w_m$ is by applying the summand $q^{\frac{H\otimes H}{2}}$ of $R$ to the summand $v_0\otimes w_m$ of $u$. Thus $c_{m,m}'=q^{ab/2-am}$. The first claim follows. \end{proof} \subsection{Proof of Theorem \ref{thm:qcb}} Recall that $\widetilde{V}_a$ carries a Shapovalov form, and the induced form on $\widetilde{V}_a\otimes\widetilde{V}_b$ is defined using Drinfeld's unitized $R$-matrix $\overline{R} = R(R^{21}R)^{-1/2}$ rather than $R$. Notice that $R^{21}R=TRTR$ is an automorphism of $\widetilde{V}_a\otimes\widetilde{V}_b$, so it acts on each summand $\widetilde{V}_{a+b-2m}$ by some scalar. So for $u$ as above, it will be necessary to compute $T\overline{R}(u)$. By the above calculation, $$ TRTRu=(-1)^m \cdot q^{ab/2-am-bm+m^2-m} \cdot \frac{\binom{b}{m}_q}{\binom{a}{m}_q}\cdot (-1)^m \cdot q^{ab/2-am-bm+m^2-m} \cdot \frac{\binom{a}{m}_q}{\binom{b}{m}_q}u=q^{ab-2am-2bm+2m^2-2m}u. $$ So $T\overline{R}u=q^{-ab/2+am+bm-m^2+m}\sum_{i=0}^m c_{m,i}' \cdot w_i \otimes v_{m-i}$. There is a natural Hermitian contravariant pairing $\langle~,\rangle$ between $\widetilde{V}_b\otimes\widetilde{V}_a$ and $\widetilde{V}_a\otimes\widetilde{V}_b$, and the form $(~,)$ on $\widetilde{V}_a\otimes\widetilde{V}_b$ is defined by $(x,y)=\langle T\overline{R}x,y\rangle$. We wish to compute $(u,u)$. 
We have: $$ (u,u) = \langle q^{-ab/2+am+bm-m^2+m}\sum_{i=0}^m c_{m,i}' \cdot w_i \otimes v_{m-i},\sum_{i=0}^m c_{m,i} \cdot v_i \otimes w_{m-i}\rangle\\ $$ \noindent We know that under the Shapovalov form on $\widetilde{V}_a$, $(v_i,v_j)=\delta_{ij}\binom{a}{i}_q$. Therefore: $$ (u,u) = q^{-ab/2+am+bm-m^2+m}\sum_{i=0}^mc_{m,m-i}'\overline{c_{m,i}}\binom{a}{i}_q\binom{b}{m-i}_q. $$ Plugging in the known quantities $c_{m,i}$, $c_{m,i}'$, this comes to $$ \frac{\binom{b}{m}_q}{\binom{a}{m}_q}q^{bm-m^2+m}\sum_{i=0}^m q^{-(a+b+2-2m)i} \cdot \binom{b-m+i}{i}_q\binom{a-i}{m-i}_q $$ We apply the identity $\binom{l_1}{l_2}_q = (-1)^{l_2}\binom{l_2-l_1-1}{l_2}_q$ twice to get $$ (-1)^m\frac{\binom{b}{m}_q}{\binom{a}{m}_q}q^{bm-m^2+m}\sum_{i=0}^m q^{-(a+b+2-2m)i} \cdot \binom{m-b-1}{i}_q\binom{m-a-1}{m-i}_q $$ The quantum Vandermonde identity (see \cite{BC}) states that $$ q^{bm-m^2+m}\sum_{i=0}^m q^{-(a+b+2-2m)i} \cdot \binom{m-b-1}{i}_q\binom{m-a-1}{m-i}_q=\binom{2m-a-b-2}{m}_q $$ and so $$ (u,u)=(-1)^m\frac{\binom{b}{m}_q}{\binom{a}{m}_q}\binom{2m-a-b-2}{m}_q. $$ Applying the earlier identity a third time, we conclude: $$ (u,u)=\frac{\binom{b}{m}_q}{\binom{a}{m}_q}\binom{a+b+1-m}{m}_q. $$ Theorem \ref{thm:qcb} follows. \section{Further work} We can ask the same questions for other semisimple Lie algebras, or more generally for Kac-Moody algebras. In particular, we can ask for a definite multiplicity space classification for any Kac-Moody algebra. This will give classifications of families of unitary representations for more quantized quiver varieties. In addition, it would be interesting to further explore our bound on the number of real critical points of the master function. Specifically, we are interested in how tight this bound is. Computer testing suggests that the bound is good for many $\lambda$ and $z$ sequences, in the sense that the ratio $\frac{N_m}{|\operatorname{sgn}(E_m)|}$ is usually close to $1$. 
When $m$ is small, however, the bound is bad at least in some special cases. For example, when $m=2$ we can provide a geometric bound for $N_m$ in the following way:\ Consider the real part of the domain of the master function (for $m=2$). It is equal to the complement in $\mathbb{R}^2$ of the lines $t_1=t_2$ and $t_1=z_i$, $t_2=z_i$ for $1\leq i\leq n$. In this way it is the union of square and triangular regions. We compute (depending on the values of the $\lambda_i$) the limits of the master function at the boundary of each connected region, and observe that if this limit is everywhere zero then there is a maximum in that region; and if it is everywhere infinite then there is a minimum in that region; and if there are precisely two connected components of the boundary on which the limit is infinite then there is a saddle point in that region. In any case we obtain a real critical point. We thus obtain a lower bound on the number of real critical points, given as some explicit function of $\lambda_1,\ldots,\lambda_n$. Finally, it would be interesting to extend the relationship between signatures of multiplicity spaces in tensor products of Verma modules and numbers of real critical values of the master function to the quantum case; associated to the quantum Knizhnik-Zamolodchikov equations should be a quantum master function, and we expect that, as in the classical case, the number of its real critical points is bounded by the signature of the appropriate multiplicity space in an appropriate tensor product of Verma modules for $U_q(\mathfrak{sl}_2)$. \section{Classification List}\label{app:class} In this appendix we state the classification of definite multiplicity spaces in $$ M_{\lambda_1} \otimes M_{\lambda_2} \otimes \cdots \otimes M_{\lambda_n}, $$ as promised in Theorem \ref{thm:class}. 
Let $\lambda = \sum_{i=1}^n \lambda_i$, and assume the $\lambda_i$'s are generic reals in decreasing order, with the first $p$ of them positive ($0 \leq p \leq n$) and the rest negative. Let us give the representation $\bigotimes_{i=1}^n M_{\lambda_i}$ a name, once and for all: call it $M$. By Proposition \ref{founder}, we have that the signatures of the multiplicity spaces depend only on the values $\floor{\lambda}, \floor{\lambda_1}, ... , \floor{\lambda_n}$. We define the \textit{explicit type} of $M$ to be $\langle \floor{\lambda}, \floor{\lambda_1}, ..., \floor{\lambda_n} \rangle$ and the \textit{implicit type} to be $\langle \floor{\lambda_1}, ..., \floor{\lambda_n} \rangle$. \\ \begin{classlist*}\ \begin{itemize} \item[Case 1.] $p = 0$ \\ Every space is definite. The even-level spaces are positive definite and the odd-level spaces are negative definite. \item[Case 2.] $n = 2$ \\ Every space is definite. The sign of the level $m$ space is given by the function $g(\lambda_1, \lambda_2, m)$, defined in subsection \ref{subsec:g}. \item[Case 3.] $p = 1, n \geq 3$ \\ The definite spaces are all those with levels less than or equal to $\operatorname{max}\{0, \ceil{\frac{\lambda}{2}}\}$. In this range, the even-level spaces are positive definite and the odd-level spaces are negative definite. \item[Case 4.] $p = n-1, n \geq 3$ \begin{itemize} \item[a.] If $\lambda < 0$, then the definite spaces are all those with levels less than or equal to $\ceil{\lambda_p}$, and they are all positive definite. \item[b.] If $\lambda > 0$ and $\ceil{\lambda+1} \leq \ceil{\lambda_p}$, then the definite spaces are all those with levels either equal to $0$ or between $\ceil{\lambda+1}$ and $\ceil{\lambda_p}$ (inclusive). They are all positive definite. \item[c.] 
If $\lambda > 0$ and $\ceil{\lambda+1} > \ceil{\lambda_p}$, then the (positive definite) level $0$ space is the unique definite space, unless $M$ has explicit type $\langle 1,0,0,-1 \rangle$, in which case the (negative definite) level $2$ space is the only additional definite space. \end{itemize} \item[Case 5.] $p = n, n \geq 3$ \\ The spaces of levels less than or equal to $\ceil{\lambda_p}$ are positive definite. These are all the definite spaces, outside of the following exceptional explicit types (for which we give all additional definite spaces): \begin{itemize} \item For explicit type $\langle 3d,d,d,d \rangle$ where $d \geq 0$, the level $2d+1$ and $2d+2$ spaces are positive definite. \item For explicit type $\langle 3d+2,d,d,d \rangle$ where $d \geq 0$, the level $2d+2$ and $2d+3$ spaces are negative definite. \item For explicit type $\langle 3d-1,d,d,d-1 \rangle$ where $d \geq 1$, the level $2d+1$ space is positive definite. \item For explicit type $\langle 3d+1, d,d,d-1 \rangle$ where $d \geq 1$, the level $2d+2$ space is negative definite. \item For explicit type $\langle 3d-2,d,d-1,d-1 \rangle$ where $d \geq 1$, the level $2d$ space is positive definite. \item For explicit type $\langle 3d,d,d-1,d-1 \rangle$ where $d \geq 1$, the level $2d+1$ space is negative definite. \item For explicit type $\langle 3,0,0,0,0 \rangle$, the level $3$ space is negative definite. \item For explicit type $\langle 4,1,1,1,1 \rangle$, the level $4$ space is positive definite. \item For explicit type $\langle 0,0,0,...,0 \rangle$ where $p \geq 4$, the level $2$ space is positive definite. \item For explicit type $\langle 1,1,0,...,0 \rangle$ where $p \geq 4$, the level $2$ space is positive definite. \end{itemize} \item[Case 6.] $2 \leq p \leq n-2, n \geq 4$ \\ There is one definite space. It is the level $0$ space and it is positive definite. 
\end{itemize} \end{classlist*} \subsection{Definition of the function $g$}\label{subsec:g} We define the function $g$ on a triple consisting of two nonintegral reals $\lambda_1 > \lambda_2$ and a nonnegative integer $m$ as follows: \\ \\ If $0 > \lambda_1 > \lambda_2$ then $$ g(\lambda_1, \lambda_2, m) = (-1)^m. $$ If $\lambda_1 > 0$, $\lambda_2 < 0$, $\lambda_1+\lambda_2<0$, then $$ g(\lambda_1, \lambda_2, m) =\left\{\begin{matrix} 1 & \lambda_1>m-1\\ (-1)^{\lfloor{\lambda_1-m+1}\rfloor} & \lambda_1<m-1 \end{matrix}\right. .$$ If $\lambda_1 > 0$, $\lambda_2 < 0$, $\lambda_1 + \lambda_2 > 0$ then \begin{multline*} g(\lambda_1, \lambda_2, m) = \left\{ \begin{array}{rr} (-1)^m & : 0 \leq m \leq \floor{\frac{\lambda_1+\lambda_2}{2}} \\ (-1)^{\ceil{\frac{\lambda_1+\lambda_2}{2}}} & : \ceil{\frac{\lambda_1+\lambda_2}{2}} \leq m \leq \floor{\frac{\lambda_1+\lambda_2+1}{2}} \\ (-1)^{\ceil{\frac{\lambda_1+\lambda_2}{2}}+\ceil{\frac{\lambda_1+\lambda_2+1}{2}} + m} & : \ceil{\frac{\lambda_1+\lambda_2+1}{2}} \leq m \leq \ceil{\lambda_1+\lambda_2} \\ 1 & : \ceil{\lambda_1+\lambda_2}+1 \leq m \leq \ceil{\lambda_1} \\ (-1)^{\ceil{\lambda_1}+m} & : \ceil{\lambda_1}+1 \leq m \end{array} \right\} \end{multline*} If $\lambda_1 > \lambda_2 > 0$, then \begin{multline*} g(\lambda_1, \lambda_2, m) = \left\{ \begin{array}{rr} 1 & : 0 \leq m \leq \floor{\lambda_2} \\ g(\lambda_1, \lambda_2-2\ceil{\lambda_2}, m - \ceil{\lambda_2}) & : \ceil{\lambda_2} \leq m \leq \text{max}(\ceil{\lambda_2}, \floor{\lambda_1}) \\ g(\lambda_1, \lambda_2-2\ceil{\lambda_2}, m - \ceil{\lambda_2}) \\+ 2\floor{\lambda_1}+2\floor{\lambda_2}-2\floor{\lambda_1+\lambda_2} & : \text{max}(\ceil{\lambda_2}, \floor{\lambda_1})+1 \leq m \leq \ceil{\lambda_1}+\ceil{\lambda_2} \\ (-1)^{m-\ceil{\lambda_1}-\ceil{\lambda_2}} & : \ceil{\lambda_1} + \ceil{\lambda_2}+1 \leq m \end{array} \right\} \end{multline*} The right way to think about these formulas is to understand that the signs of the (in this case one-dimensional) 
multiplicity spaces pass through several phases, during each of which they are either alternating or constant; the formulas above are just describing when the transitions between these phases happen.\\ In the following appendices, we present the proof of this classification. \section{Proof of Case 1}\label{app:allnegativecase} For $\lambda_1, \lambda_2,..., \lambda_n < 0$, we have by Proposition \ref{founder} that $\operatorname{ch}_s(M_{\lambda_i}) = \sum_{j=0}^{\infty} s^je^{\lambda_i-2j} = \frac{e^{\lambda_i}}{1-se^{-2}}$. Hence the signature character of $\bigotimes_{i=1}^n M_{\lambda_i}$ is $\frac{e^{\sum_{i=1}^n \lambda_i}}{(1-se^{-2})^n}$. Using a common generating function manipulation, we get that \begin{align*} \frac{e^{\sum_{i=1}^n \lambda_i}}{(1-se^{-2})^n} &= e^{\sum_{i=1}^n \lambda_i} \sum_{j=0}^{\infty} \binom{j+n-1}{n-1} s^je^{-2j} \\ &= \sum_{j=0}^{\infty} \binom{j+n-2}{n-2} \cdot s^j\beta_{\sum_{i=1}^n \lambda_i - 2j} \end{align*} whence we see that all even-level spaces are positive definite and all odd-level spaces are negative definite. \section{Proof of Case 2}\label{app:n2class} Since every multiplicity space has dimension $1$, every multiplicity space is definite. We just need to determine which spaces are positive definite and which are negative definite, which we do using the following result (Lemma \ref{lem:edecomp1}). Recall that $\beta_\mu$ denotes the signature character of $M_\mu$; we will write $\beta_\mu^-$ for $\beta_\mu$ evaluated at $s=-1$. \begin{lemma}\label{lem:edecomp1} We have $$ e^{\mu} = \beta^-_{\mu} - \operatorname{sign}(\mu)\beta^-_{\mu-2} + (\operatorname{sign}(1-\mu) - 1)\beta^-_{\mu-2\ceil{\mu}}. $$ \end{lemma} \begin{proof} This is a straightforward calculation. One checks in turn each of the intervals $\mu > 1$, $1 > \mu > 0$, and $0 > \mu$. \end{proof} We proceed by analyzing in turn the cases $p=1$ (i.e. $\lambda_1>0,\lambda_2<0$) and $p=2$ (i.e. $\lambda_1>0, \lambda_2>0$). 
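Lemma \ref{lem:edecomp1} is also easy to confirm numerically on truncated series (our sketch, not part of the paper):

```python
import math

def beta_minus(mu, N):
    """{j: coefficient of e^{mu-2j}} in beta_mu^-, per Proposition `founder`."""
    if mu < 0:
        return {j: (-1) ** j for j in range(N)}
    fl, ce = math.floor(mu), math.ceil(mu)
    return {j: 1 if j <= fl else (-1) ** (j + ce) for j in range(N)}

def sgn(x):
    return 1 if x > 0 else -1

def check_edecomp1(mu, N=40):
    """Check e^mu = beta_mu^- - sign(mu) beta_{mu-2}^- + (sign(1-mu)-1) beta_{mu-2*ceil(mu)}^-."""
    b0 = beta_minus(mu, N)
    b2 = beta_minus(mu - 2, N)
    bt = beta_minus(mu - 2 * math.ceil(mu), N)
    for j in range(N - 2):
        rhs = (b0.get(j, 0)
               - sgn(mu) * b2.get(j - 1, 0)
               + (sgn(1 - mu) - 1) * bt.get(j - math.ceil(mu), 0))
        assert rhs == (1 if j == 0 else 0)

for mu in [-2.3, 0.4, 1.7, 3.6, 5.2]:
    check_edecomp1(mu)
```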
\subsection{Proof of Case 2 for $p=1$}\label{app:p1n2class} If $\lambda_1+\lambda_2 < 0$, then $e^{\mu}\beta_{\lambda_2} = \beta_{\mu+\lambda_2}$ for any $\mu=\lambda_1,\lambda_1-1,\ldots$ (compare with, e.g. Lemma \ref{lem:finperturbs}). In that case, the result is easy. So we assume that $\lambda=\lambda_1+\lambda_2 > 0$. Then we have \begin{align*} \beta_{\lambda_1}^-\beta_{\lambda_2}^- &= (e^{\lambda_1}+e^{\lambda_1-2}+ \cdots +e^{\lambda_1-2\floor{\lambda_1}} + \beta_{\lambda_1-2\ceil{\lambda_1}}^-)\beta_{\lambda_2}^- && \text{by~Proposition~}\ref{founder}\\ &= (e^{\lambda_1}+e^{\lambda_1-2}+ \cdots +e^{\lambda_1-2\floor{\lambda_1}})\beta_{\lambda_2}^- + \sum_{m = \ceil{\lambda_1}}^{\infty} (-1)^{m+\ceil{\lambda_1}}\beta_{\lambda-2m}^- && \text{by Case }1 \end{align*} At this point, for brevity, we will treat the case $\lfloor\lambda\rfloor\equiv2\mod4$; the other cases are similar, and left as an exercise. Notice that $(e^{\lambda_1}+e^{\lambda_1-2})\beta_{\lambda_2}^-=e^\lambda$, which in turn (by Lemma \ref{lem:edecomp1}) is equal to $\beta_\lambda^--\beta_{\lambda-2}^--2\beta_{\lambda-2\lceil\lambda\rceil}^-$ (for $\lambda>1$). Similarly, we obtain that $$ (e^{\lambda_1}+e^{\lambda_1-2}+\ldots+e^{\lambda_1-\lfloor{\lambda}\rfloor})\beta_{\lambda_2}^-=\sum_{m=0}^{\lfloor\lambda\rfloor/2}(-1)^m\beta_{\lambda-2m}^- - \sum_{j=0}^{(\lfloor\lambda\rfloor-2)/4}2\beta_{\lambda-2\lceil\lambda\rceil+4j}^- $$ while $$ (e^{\lambda_1-\lfloor{\lambda}\rfloor-2}+e^{\lambda_1-\lfloor{\lambda}\rfloor-4}+\ldots+e^{\lambda_1-2\lfloor{\lambda_1}\rfloor})\beta_{\lambda_2}^-=\sum_{m=\floor{\lambda}/2+1}^{\lfloor\lambda_1\rfloor}\beta_{\lambda-2m}^- . $$ We thus obtain $$ \beta_{\lambda_1}^-\beta_{\lambda_2}^- = \sum_{m=0}^{\floor{\lambda}+1}(-1)^m\beta_{\lambda-2m}^-+\sum_{m=\floor{\lambda}+2}^{\floor{\lambda_1}}\beta_{\lambda-2m}^-+\sum_{m=\ceil{\lambda_1}}^{\infty}(-1)^{m+\ceil{\lambda_1}}\beta_{\lambda-2m}^-. $$ Compare signs with the function $g$ defined above. 
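The displayed decomposition can also be confirmed by brute force (our numerical sketch): truncate the two series, multiply, and greedily peel off the leading $\beta^-_{\lambda-2m}$; for instance with $\lambda_1 = 4.9$, $\lambda_2 = -2.1$, so that $\floor{\lambda} = 2$:

```python
import math

def beta_minus(mu, N):
    """Coefficients of e^{mu-2j}, j = 0..N-1, in beta_mu^-."""
    if mu < 0:
        return [(-1) ** j for j in range(N)]
    fl, ce = math.floor(mu), math.ceil(mu)
    return [1 if j <= fl else (-1) ** (j + ce) for j in range(N)]

N = 60
lam1, lam2 = 4.9, -2.1          # p = 1 and lam = 2.8 > 0
lam = lam1 + lam2

# product beta_{lam1}^- beta_{lam2}^- as coefficients of e^{lam-2j}
b1, b2 = beta_minus(lam1, N), beta_minus(lam2, N)
P = [sum(b1[k] * b2[j - k] for k in range(j + 1)) for j in range(N)]

# peel off one beta_{lam-2m}^- at a time; c[m] is its coefficient at s = -1
c = []
for mm in range(N // 2):
    c.append(P[mm])
    tail = beta_minus(lam - 2 * mm, N - mm)
    for j in range(mm, N):
        P[j] -= c[-1] * tail[j - mm]

# signs predicted by the final display: (-1)^m, then +1, then (-1)^(m+ceil(lam1))
for mm, cm in enumerate(c):
    if mm <= math.floor(lam) + 1:
        assert cm == (-1) ** mm
    elif mm <= math.floor(lam1):
        assert cm == 1
    else:
        assert cm == (-1) ** (mm + math.ceil(lam1))
```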
\subsection{Proof of Case 2 for $p=2$}\label{app:p2n2class} For any positive real $t$, write $L_t = e^t + e^{t-2} + \cdots + e^{t-2\floor{t}}$. We have \begin{align*} \beta_{\lambda_1}^-\beta_{\lambda_2}^- &= (L_{\lambda_1}+\beta_{\lambda_1-2\ceil{\lambda_1}}^-)(L_{\lambda_2}+\beta_{\lambda_2-2\ceil{\lambda_2}}^-) \\ &= \beta_{\lambda_1}^-\beta_{\lambda_2-2\ceil{\lambda_2}}^- + \beta_{\lambda_2}^-\beta_{\lambda_1-2\ceil{\lambda_1}}^- - \beta_{\lambda_1-2\ceil{\lambda_1}}^-\beta_{\lambda_2-2\ceil{\lambda_2}}^- + L_{\lambda_1}L_{\lambda_2}, \end{align*} and \begin{align*} L_{\lambda_1}L_{\lambda_2} &= \sum_{m=0}^{\floor{\lambda_2}}(m+1)e^{\lambda-2m} \\ &+ \sum_{m=\ceil{\lambda_2}}^{\floor{\lambda_1}} \ceil{\lambda_2}e^{\lambda-2m} \\ &+ \sum_{m=\ceil{\lambda_1}}^{\floor{\lambda_1}+\floor{\lambda_2}} (\floor{\lambda_1}+\floor{\lambda_2}-m+1)e^{\lambda-2m}. \end{align*} Rewriting the powers of $e$ in $L_{\lambda_1}L_{\lambda_2}$ as signature characters using Lemma \ref{lem:edecomp1} and applying the result of Subsection \ref{app:p1n2class} to the other terms gives the result. \section{Proof of Case 3}\label{app:nnpclass} Our strategy is first to identify a finite range of possible definite spaces. To that end, it is convenient to consider the completion $X$ of the group algebra $\mathbb{Z}[s][\mathbb{Z}]=\bigoplus_{j=-\infty}^{\infty}\mathbb{Z}[s].e^j$ given by\begin{align*}X=\bigoplus_{j=0}^{\infty}\mathbb{Z}[s].e^j\oplus \prod_{j=-\infty}^{-1}\mathbb{Z}[s].e^j\end{align*}with the algebra structure determined by continuity. One may readily check (by an upper-triangularity phenomenon) that \begin{align*}X=\bigoplus_{j=0}^{\infty}\mathbb{Z}[s].\beta_j\oplus \prod_{j=-\infty}^{-1}\mathbb{Z}[s].\beta_j.\end{align*} In fact, the ring $X$ is implicitly where our calculations have taken place thus far. Now notice that $X$ contains the semiring $X^+=\bigoplus_{j=0}^{\infty}\mathbb{Z}^+[s].\beta_j\oplus \prod_{j=-\infty}^{-1}\mathbb{Z}^+[s].\beta_j$.
We may therefore introduce a partial order on $X$ by setting $x\leq y$ iff $y-x\in X^+$. In that case we will say that $y$ \emph{contains} $x$. Multiplication by $X^+$ preserves this partial order. \begin{lemma}\label{lem:nnpbound} No space with level more than $\ceil{\frac{\lambda}{2}}$ is definite. \end{lemma} \begin{proof} We treat the case $\lambda>0$; the other case is similar. \\ Fix an integer $d\geq \ceil{\lambda/2}$. From Case $1$, we have $\beta_{\lambda_2}\beta_{\lambda_3}\ldots\beta_{\lambda_n}=\sum_{m=0}^{\infty}s^m\binom{m+n-3}{n-3}\beta_{\lambda-\lambda_1-2m}$, which in particular contains either $\beta_{\lambda-\lambda_1-2d}+s\beta_{\lambda-\lambda_1-2d-2}$ or $s\beta_{\lambda-\lambda_1-2d}+\beta_{\lambda-\lambda_1-2d-2}$. We assume the former (without loss of generality). Multiplying by $\beta_{\lambda_1}$, we have that $\beta_{\lambda_1}\beta_{\lambda_2}\ldots\beta_{\lambda_n}$ contains $\beta_{\lambda_1}(\beta_{\lambda-\lambda_1-2d}+s\beta_{\lambda-\lambda_1-2d-2})$, which in turn (using Case $2$) contains $\beta_{\lambda-2d-2}+s\beta_{\lambda-2d-2}$. (The crucial point here is that $\lambda_1>0$, so that the second term of $\beta_{\lambda_1}\beta_{\lambda-\lambda_1-2d}$ has coefficient $1$, rather than $s$). We conclude that $\beta_{\lambda_1}\beta_{\lambda_2}\ldots\beta_{\lambda_n}$ contains $(1+s)\beta_{\lambda-2d-2}$ for any $d\geq\ceil{\lambda/2}$, which completes the proof.\end{proof} To complete Case $3$, we must show that the multiplicity spaces of levels $m\leq\ceil{\lambda/2}$ are all definite, and have sign $(-1)^m$. This follows by an analysis similar to the proof of Lemma \ref{lem:nnpbound} (in particular first expanding $\beta_{\lambda_2}\beta_{\lambda_3}\ldots\beta_{\lambda_n}$ and then using Case $2$ to multiply the result by $\beta_{\lambda_1}$). We leave this as an exercise. \section{Proof of Case 4 for $n = 3$}\label{app:nppclass} We use essentially the same strategy, only the details are more complicated.
\subsection{Proof of Case 4 for $n = 3$ and $\lambda < 0$}\label{subsec:ln} \begin{lemma}\label{lem:ln1} No space with level greater than $\ceil{\lambda_2}$ is definite. \end{lemma} \begin{proof} Since $\lambda_2+\lambda_3 < 0$, the previously proven Case 2 classification gives that the product of $\beta_{\lambda_2}$ and $\beta_{\lambda_3}$ contains the product of $\beta_{\lambda_2-2\ceil{\lambda_2}}$ and $\beta_{\lambda_3}$. Hence, $\beta_{\lambda_1}\cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ contains $\beta_{\lambda_1}\cdot \beta_{\lambda_2-2\ceil{\lambda_2}} \cdot \beta_{\lambda_3}$. By the $n=3$ subcase of Case 3 (previously proven) we get the claim. \end{proof} \begin{lemma}\label{lem:ln2} Every space with level at most $\ceil{\lambda_2}$ is positive definite. \end{lemma} \begin{proof} From the previously proven Case 2 classification, we obtain that in the decomposition of $M_{\lambda_1} \otimes M_{\lambda_2}$, the spaces with level at most $\ceil{\lambda_2}$ are all positive definite. Applying the Case 2 classification again, we get that if the following condition is satisfied for $0 \leq k \leq \ceil{\lambda_2-1}$: $$ \lambda_1+\lambda_2 - 2k - 2\ceil{\lambda_1+\lambda_2 - 2k} \leq \lambda_1 + \lambda_2 - 2\ceil{\lambda_2} $$ then the claim holds. This is equivalent to: $$ \ceil{\lambda_1+\lambda_2} \geq 2\ceil{\lambda_2}-1 $$ which is true. \end{proof} \begin{proof}[Proof of $\lambda < 0$ subcase of Case 4 for $n = 3$] Lemma \ref{lem:ln1} shows that there are no definite spaces with level more than $\ceil{\lambda_2}$. Lemma \ref{lem:ln2} shows that all spaces with level at most $\ceil{\lambda_2}$ are positive definite. Thus, the $\lambda < 0$ case of Case 4 for $n = 3$ holds. 
\end{proof} \subsection{Proof of Case 4 for $n=3$ and $\lambda > 0$}\label{subsec:lp} From the previously proven Case 2 classification, the product $\beta_{\lambda_1}\cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ contains the product of $\beta_{\lambda_1-2\ceil{\lambda_1}}\cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ if $\ceil{\lambda_1+\lambda_3} < \ceil{\lambda_1}$, and it contains $\beta_{\lambda_1-2\ceil{\lambda_1+1}}\cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ if $\ceil{\lambda_1+\lambda_3} = \ceil{\lambda_1}$. From this we get that all definite spaces have levels at most $\ceil{\lambda_1+1}$, so we only need to focus on this finite set of spaces. Our strategy here is to write $\beta_{\lambda_1}\cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ as a sum of powers of $e$ and use Lemma \ref{lem:edecomp1} (see Appendix \ref{app:p1n2class}) to determine the definite spaces. We begin by writing out the $e$-decomposition for the first $\ceil{\lambda_1+1}$ powers of $e$ in $(\beta_{\lambda_1} \beta_{\lambda_2} \beta_{\lambda_3})^-$: \begin{lemma}\label{lem:edecompnpp} If $k \leq \ceil{\lambda_2}$, then the coefficient of $e^{\lambda-2k}$ in $(\beta_{\lambda_1} \beta_{\lambda_2} \beta_{\lambda_3})^-$ is $\ceil{\frac{k+1}{2}}$. If $\ceil{\lambda_1} \geq k > \ceil{\lambda_2}$, then the coefficient of $e^{\lambda-2k}$ is $\ceil{\frac{k+1}{2}}$ if $2 | (k-\ceil{\lambda_2})$ and $\ceil{\frac{k+1}{2}}-(k-\floor{\lambda_2})$ otherwise. \end{lemma} \begin{proof} Each term $e^{\lambda-2k}$ in the final product arises from a sum of products of $e^{\lambda_1-2i_1}, e^{\lambda_2-2i_2}, e^{\lambda_3-2i_3}$ for $i_1+i_2+i_3 = k$. There are $\binom{k+2}{2}$ such products, and we can write them all out. If $k \leq \ceil{\lambda_2}$, these terms are easy to analyze. If $k > \ceil{\lambda_2}$, we must write out four cases based on the parities of $k$ and $\ceil{\lambda_2}$.
After writing down the answer in each case, it is easy to see that the description provided covers all cases. \end{proof} \begin{corollary}\label{cor:npp1} Define a function $r$ by $r(i) = 0$ if $i$ is even and $r(i) = 1$ if $i$ is odd. Define a function $t$ by $t(i) = 1$ if $i$ is positive and $t(i) = 0$ if $i$ is nonpositive. Define a function $f$ by $$f(\lambda_1, \lambda_2, k) = \ceil{\frac{k+1}{2}} - (k-\floor{\lambda_2})\cdot r(k-\ceil{\lambda_2})\cdot t(k-\ceil{\lambda_2})$$ if $0 \leq k \leq \ceil{\lambda_1+1}$ and $f(\lambda_1, \lambda_2, k) = 0$ if $k$ is not in that range. Then an equivalent statement of Lemma \ref{lem:edecompnpp} is that the coefficient of $e^{\lambda-2k}$ in $(\beta_{\lambda_1} \beta_{\lambda_2} \beta_{\lambda_3})^-$ is $f(\lambda_1, \lambda_2, k)$ for $0 \leq k \leq \ceil{\lambda_1}$. \end{corollary} \begin{corollary} By the same reasoning as in Lemma \ref{lem:edecompnpp}, we get that the coefficient of $e^{\lambda-2\ceil{\lambda_1+1}}$ in $(\beta_{\lambda_1} \beta_{\lambda_2} \beta_{\lambda_3})^-$ is $f(\lambda_1, \lambda_2, \ceil{\lambda_1+1}) - 2$. \end{corollary} We now have an explicit description of the original signature character product written as a sum of powers of $e$. We will now find the definite spaces (which have levels $k$ between 0 and $\ceil{\lambda_1+1}$), tackling the $k \leq \ceil{\lambda_1}$ and $k = \ceil{\lambda_1+1}$ cases separately. We begin by solving the $k \leq \ceil{\lambda_1}$ case. Using Lemma \ref{lem:edecomp1}, the fact that the function $x \to x-2\ceil{x}$ is cyclic, and a division of the interval $[0, \ceil{\lambda_1+1}]$ into several intervals based on the signs of $\lambda-2k$ and $2k+1-\lambda$, we get the following result (Lemma \ref{lem:check}): \begin{lemma}\label{lem:check} We have the following results on the signatures of the first $\ceil{\lambda_1+1}$ spaces.
\begin{itemize} \item If $0 \leq k \leq \ceil{\frac{\lambda}{2}}$, then the signature of the level $k$ space is $$ f(\lambda_1, \lambda_2, k) - f(\lambda_1, \lambda_2, k-1). $$ \item If $\ceil{\frac{\lambda+2}{2}} \leq k \leq \text{min}(\ceil{\lambda_1}, \ceil{\lambda})$, then the signature of the level $k$ space is $$ f(\lambda_1, \lambda_2, k) + f(\lambda_1, \lambda_2, k-1) - 2f(\lambda_1, \lambda_2, \ceil{\lambda}-k). $$ \item If $\ceil{\lambda+1} \leq k \leq \ceil{\lambda_1}$, then the signature of the level $k$ space is $$ f(\lambda_1, \lambda_2, k) + f(\lambda_1, \lambda_2, k-1). $$ \end{itemize} \end{lemma} Now, note that checking the positive/negative definiteness of each level $k$ space is equivalent to checking whether the signature of the level $k$ space is equal to $k+1$ or $-k-1$. This follows because the dimension of each level $k$ space is $\binom{k+3-2}{3-2} = k+1$, and thus the coefficient of the $\beta_{\lambda_1+\lambda_2+\lambda_3-2k}$ term (evaluated at $s = -1$) in the signature character decomposition of $\beta_{\lambda_1} \cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ is $k+1$ if and only if the level $k$ space is positive definite, and $-k-1$ if and only if the level $k$ space is negative definite. \begin{lemma}\label{lem:nppfirstpos} If $1 \leq k \leq \ceil{\lambda_1}$, then the level $k$ space is positive definite iff $\ceil{\lambda+1} \leq k \leq \ceil{\lambda_2}$. In particular if $\ceil{\lambda_1+1} > \ceil{\lambda_2}$, then the only positive definite space in the first $\ceil{\lambda_1+1}$ spaces is the level $0$ space. \end{lemma} \begin{proof} If $\ceil{\lambda+1} \leq k \leq \ceil{\lambda_2}$, then Corollary \ref{cor:npp1} and Lemma \ref{lem:check} give that the level $k$ space is definite. Now we show that no other $\ceil{\lambda_1} \geq k > 0$ has level $k$ space positive definite.
If $1 \leq k \leq \ceil{\frac{\lambda}{2}}$, then using Lemma \ref{lem:check}, the maximum possible value for the signature of the level $k$ space is $$1 + (k-1) - \floor{\lambda_2} < k+1.$$ If $\ceil{\frac{\lambda+2}{2}} \leq k \leq \text{min}(\ceil{\lambda_1}, \ceil{\lambda})$, then we must address several cases. If $k > \ceil{\lambda_2}$, $\ceil{\lambda} - k \leq \ceil{\lambda_2}$, or $2|\ceil{\lambda}-k-\ceil{\lambda_2}$, then some bounding shows that the signature of the level $k$ space is less than $k+1$. The other possibility is that $k > \ceil{\lambda_2}$, $\ceil{\lambda}-k > \ceil{\lambda_2}$, and $2 | \ceil{\lambda}-k -\ceil{\lambda_2}-1$. In this case the signature of the level $k$ space is equal to $$ k+1 - 2\ceil{\frac{\lambda-k+1}{2}} + 2\ceil{\lambda-k} - 2\floor{\lambda_2}. $$ This level $k$ space is positive definite when $$ 2\ceil{\lambda-k} - 2\ceil{\frac{\lambda-k+1}{2}} = 2\floor{\lambda_2}. $$ Viewing the LHS as a function of $k$, we see that as $k$ increases by 2, the LHS decreases by 2. Evaluating the LHS at $k=0$ and $k=1$ gives that the above equality holds only if $k$ is equal to one of $$ \frac{\ceil{\lambda-1}}{2} - \floor{\lambda_2} $$ or $$ \frac{\ceil{\lambda-2}}{2} - \floor{\lambda_2}. $$ But $k$ can't equal either of these as $k \geq \ceil{\frac{\lambda+2}{2}}$. So there are no positive definite spaces of level $k$ when $\ceil{\frac{\lambda+2}{2}} \leq k \leq \text{min}(\ceil{\lambda_1}, \ceil{\lambda})$. \\ If $\ceil{\lambda_2+1} \leq k \leq \ceil{\lambda_1}$, then the maximum possible value of the signature of the level $k$ space is $$ k+1 - (k-\floor{\lambda_2}) < k+1. $$ So, the only positive definite spaces are the ones in the given range. \end{proof} \begin{lemma}\label{lem:nppfirstneg} There are no negative definite spaces with level at most $\ceil{\lambda_1}$. \end{lemma} \begin{proof} Consider a level $k$ space with $0 \leq k \leq \ceil{\lambda_1}$.
If $k \leq \ceil{\frac{\lambda}{2}}$, then the minimum possible signature of the level $k$ space is: $$ 0 - (k-\floor{\lambda_2}) > -k-1. $$ If $\ceil{\frac{\lambda+2}{2}} \leq k \leq \text{min}(\ceil{\lambda_1}, \ceil{\lambda})$, then the minimum possible value of the signature of the level $k$ space is \begin{align*} (k+1) - 2\ceil{\frac{\lambda-k+1}{2}} - k+\floor{\lambda_2} &\geq 1 - \ceil{\lambda-k+2} \\ &\geq k-1 - \ceil{\lambda} \\ &\geq -k. \end{align*} If $\ceil{\lambda+1} \leq k \leq \ceil{\lambda_1}$, then the minimum possible value of the signature of the level $k$ space is $$ k+1 - (k-\floor{\lambda_2}) > -k-1. $$ So there are no negative definite spaces. \end{proof} Lemmas \ref{lem:nppfirstpos} and \ref{lem:nppfirstneg} classify definite spaces with level at most $\ceil{\lambda_1}$. All that remains is to determine when the level $\ceil{\lambda_1+1}$ space is definite. \begin{lemma}\label{lem:excnpp} The only tensors with the level $\ceil{\lambda_1+1}$ space definite are tensors with explicit type $\langle 1,0,0,-1 \rangle$. In this explicit type, the level $2$ space is negative definite. \end{lemma} \begin{proof} First we check when the $\ceil{\lambda_1+1}$ space is positive definite. As $\ceil{\lambda} - \ceil{\lambda_1} \leq \ceil{\lambda_2}$ the maximum possible value of the signature of the level $\ceil{\lambda_1+1}$ space is: $$ f(\lambda_1, \lambda_2, \ceil{\lambda_1+1})-2 + f(\lambda_1, \lambda_2, \ceil{\lambda}) \leq \ceil{\lambda_1} < \ceil{\lambda_1+2}. $$ So this space can never be positive definite. For negative definiteness, we have that the minimum possible value of the signature is: $$ f(\lambda_1, \lambda_2, \ceil{\lambda_1+1})-2 + f(\lambda_1, \lambda_2, \ceil{\lambda}) - 2f(\lambda_1, \lambda_2, \ceil{\lambda} - \ceil{\lambda_1}), $$ which is at least $\ceil{\lambda_1} - \ceil{\lambda} - 2 + \floor{\lambda_2}$.
For this space to be negative definite, then, we must have $$ -\ceil{\lambda_1} \geq \ceil{\lambda_1} - \ceil{\lambda} + \floor{\lambda_2} $$ Rearranging and using $\ceil{\lambda} \leq \ceil{\lambda_1}+\ceil{\lambda_2}+1$, we get that for this space to be negative definite, we must have $2 \geq \ceil{\lambda_1}$. Checking the (finitely many) explicit types with this condition yields the given exception. \end{proof} \begin{proof}[Proof of $\lambda > 0$ subcase of Case 4 for $n = 3$] From initial observations, only spaces with level at most $\ceil{\lambda_1+1}$ can be definite. Lemmas \ref{lem:nppfirstpos} and \ref{lem:nppfirstneg} classify definite spaces with level at most $\ceil{\lambda_1}$, while Lemma \ref{lem:excnpp} shows when the level $\ceil{\lambda_1+1}$ is definite. Pulling these claims together gives the $\lambda > 0$ subcase of Case 4. \end{proof} Subsections \ref{subsec:ln} and \ref{subsec:lp} together yield the classification in Case 4. \section{Proof of Case 5 for $n = 3$}\label{app:pppclass} An easy corollary to the previously proven Case 2 classification is that the first $\ceil{\lambda_3+1}$ multiplicity spaces are indeed positive definite. Thus, we need only classify multiplicity spaces with level greater than $\ceil{\lambda_3}$, which we will call the exceptional multiplicity spaces. To do this, we first determine all possible exceptional spaces in Subsections \ref{subsec:pppepl2} and \ref{subsec:pppepg2}. Then in Subsection \ref{subsec:pppexc} we use character computations to determine which of those instances actually produce exceptional spaces. \\ We begin with some notation. Write $A = \floor{\lambda_1}$, $a = \lambda_1-\floor{\lambda_1}$, and similarly define $B, b$ and $C, c$ for $\lambda_2$ and $\lambda_3$ respectively. Define $\epsilon = a+b+c$. We divide into subcases based on the floor of $\epsilon$, which is between $0$ and $2$ (inclusive). 
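Lemma \ref{lem:pppedecomp2} below gives a closed form for the $e$-coefficients of the triple product; as a sanity check (independent of the proofs), it can be compared against a direct enumeration of the triples $(i_1,i_2,i_3)$, using the $s=-1$ sign pattern of each positive factor: $+1$ through level $\floor{\mu}$, then alternating, per the decomposition used in Case 2. A minimal sketch; the sample weights are arbitrary choices:

```python
import math

def sig(mu, j):
    """Sign of e^{mu-2j} in beta_mu at s = -1 (mu positive, non-integer)."""
    return 1 if j <= math.floor(mu) else (-1)**(j - math.ceil(mu))

def f2(x, j):
    # the function f_2 from the definition preceding Lemma pppedecomp2
    if j <= math.ceil(x):
        return 0
    h = (j - math.ceil(x + 1)) // 2
    return (j - math.ceil(x) - h) * (1 + h)

def coeff(lams, k):
    """Coefficient of e^{lambda-2k} in the triple product, by enumeration."""
    total = 0
    for i1 in range(k + 1):
        for i2 in range(k - i1 + 1):
            i3 = k - i1 - i2
            total += sig(lams[0], i1) * sig(lams[1], i2) * sig(lams[2], i3)
    return total

for lams in [(2.5, 1.5, 0.5), (4.3, 2.7, 1.9), (3.1, 3.1, 0.4)]:
    bound = math.ceil(lams[1]) + math.ceil(lams[2]) + 1
    for k in range(bound + 1):
        expected = math.comb(k + 2, 2) - 2 * sum(f2(x, k) for x in lams)
        assert coeff(lams, k) == expected, (lams, k)
print("Lemma pppedecomp2 confirmed on sample weights")
```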
\subsection{Possible exceptional multiplicity spaces when $\epsilon < 2$}\label{subsec:pppepl2} \begin{lemma}\label{lem:pppepl2} If $M$ does not have explicit type $\langle 3d-1,d,d,d-1 \rangle$, $\langle 3d-2,d,d-1,d-1 \rangle$, $\langle 3d, d,d,d \rangle$, or $\langle 3d+1, d,d,d \rangle$, then it has no definite spaces with level more than $\ceil{\lambda_3}$. If $M$ has one of those types, then the possible exceptional definite spaces have levels $2d+1$ (first type), $2d$ (second type), $2d+1$ and $2d+2$ (third type), and $2d+2$ (fourth type). \end{lemma} \begin{proof} By ``fudging'' the fractional parts of $\lambda_1$, $\lambda_2$, $\lambda_3$, we can ensure that $b+c < 1$ while maintaining $M$'s explicit type. If this condition is satisfied, then by the previously proven Case 2 classification, the signature character product $\beta_{\lambda_1} \cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ contains $\beta_{\lambda_1} \cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3-2\ceil{\lambda_3}}$. From the closed form for the product of two positive and one negative Verma characters, we obtain that the only spaces with level more than $\ceil{\lambda_3}$ that can be definite in $M$ are spaces with level $k$, where $k$ satisfies $$ \ceil{\lambda_1+\lambda_2+\lambda_3-2\ceil{\lambda_3}+1}+\ceil{\lambda_3} \leq k \leq \ceil{\lambda_2}+\ceil{\lambda_3}. $$ This inequality is equivalent to $$ \ceil{A+B-C+\epsilon-1}+\ceil{\lambda_3} \leq k \leq B+1+\ceil{\lambda_3}. $$ Using some case analysis and bounding, we obtain that the only possible exceptional definite spaces occur in types: \begin{itemize} \item Explicit type $\langle 3d-1,d,d,d-1 \rangle$, in which the level $2d+1$ space could be definite. \item Explicit type $\langle 3d-2,d,d-1,d-1 \rangle$, in which the level $2d$ space could be definite. \item Explicit type $\langle 3d, d,d,d \rangle$, in which the level $2d+1$ and $2d+2$ spaces could be definite.
\item Explicit type $\langle 3d+1, d,d,d \rangle$, in which the level $2d+2$ space could be definite. \end{itemize} \end{proof} \subsection{Possible exceptional multiplicity spaces when $2 < \epsilon < 3$}\label{subsec:pppepg2} \begin{lemma}\label{lem:pppepg2} If $M$ does not have explicit type $\langle 3d,d,d-1,d-1 \rangle$, $\langle 3d+2,d,d,d \rangle$, or $\langle 2d+d'+2, d,d,d' \rangle$ for $d > d'$, then it has no definite spaces with level more than $\ceil{\lambda_3}$. If $M$ has one of those types, then the possible exceptional definite spaces have levels $2d+1$ (first type), $2d+2$ and $2d+3$ (second type), and $d+d'+3$ (third type). \end{lemma} \begin{proof} We divide into cases based on whether $\floor{\lambda_3} = \floor{\lambda_2}$ or not. \\ Suppose $\floor{\lambda_2} = \floor{\lambda_3}$. Then from the previously proven Case 2 classification, the signature character product $\beta_{\lambda_1} \cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ contains $\beta_{\lambda_1} \cdot \beta_{\lambda_2-2\ceil{\lambda_3}-2} \cdot \beta_{\lambda_3}$. By the definite space classification for the previously proven $n=3$ subcase of Case 4, the only possible exceptional definite spaces are those with level $k$ where $k$ satisfies $$ \ceil{\lambda_1+\lambda_2+\lambda_3-2\ceil{\lambda_2}-2+1} + \ceil{\lambda_3+1} \leq k \leq \ceil{\lambda_3} + \ceil{\lambda_3+1}. $$ This inequality is equivalent to $$ A+\ceil{\lambda_3+1} \leq k \leq C+1+\ceil{\lambda_3+1}. $$ Using some case analysis and bounding, we get that the only possible exceptional definite spaces occur in \begin{itemize} \item $\langle 3d,d,d-1,d-1 \rangle$, in which the level $2d+1$ space could be definite. \item $\langle 3d+2, d,d,d \rangle$, in which the level $2d+2$ and $2d+3$ spaces could be definite. \end{itemize} The other possibility is that $\floor{\lambda_3} < \floor{\lambda_2}$.
Using the previously proven Case 2 classification, the signature character product $\beta_{\lambda_1} \cdot \beta_{\lambda_2} \cdot \beta_{\lambda_3}$ contains $\beta_{\lambda_1} \cdot \beta_{\lambda_2-2\ceil{\lambda_2}-1} \cdot \beta_{\lambda_3+1}$. Using the definite space classification for the previously proven $n=3$ subcase of Case 4, we get that the only possible exceptional definite spaces are spaces with level $k$ where $k$ satisfies $$ \ceil{\lambda_1+\lambda_2+\lambda_3-2\ceil{\lambda_2}+1} + \ceil{\lambda_2} \leq k \leq \ceil{\lambda_3+1}+\ceil{\lambda_2} $$ Using some case analysis and bounding, we get the only possible exceptional definite spaces occur in $\langle 2d+d'+2, d,d,d' \rangle$, in which the $d+d'+3$ space could be definite. Here $d > d'$. \end{proof} \subsection{Classification of exceptions}\label{subsec:pppexc} Here we will address the exceptions found in the previous two subsections and show that six of the seven potential families of exceptions are always exceptional, and the seventh family is never exceptional. We begin with the seventh family. \begin{lemma} In explicit type $\langle 3d+1, d,d,d \rangle$, the level $2d+2$ space is nondefinite. \end{lemma} \begin{proof} Fudge around the fractional parts of $\lambda_1$, $\lambda_2$, and $\lambda_3$, so that $a+c > 1$ and $b+c < 1$. (We can do this and maintain the explicit type of $M$.) By multiplying $\beta_{\lambda_1} \cdot \beta_{\lambda_3}$ and using the previously proven Case 2 classification, we get that the level $2d+2$ space is not positive definite. By multiplying $\beta_{\lambda_2} \cdot \beta_{\lambda_3}$ and using the Case 2 classification, we get that the level $2d+2$ space is not negative definite. We are done. \end{proof} Now we will show that the other six exceptional families are always exceptional, and we will determine the positivity/negativity of the exceptional definite spaces. 
Our strategy here is to write the signature character product as a sum of powers of $e$ and use the following results along with Lemma \ref{lem:edecomp1} to compute the signatures of the multiplicity spaces: \begin{definition*} Define a function $f_2$ of a real $x$ and an integer $j$ by $$ f_2(x,j) = 0 $$ if $j \leq \ceil{x}$ and $$ f_2(x,j) = (j-\ceil{x}-\floor{\frac{j-\ceil{x+1}}{2}})(1+\floor{\frac{j-\ceil{x+1}}{2}}) $$ if $j \geq \ceil{x+1}$. \end{definition*} \begin{lemma}\label{lem:pppedecomp2} For $k \leq \ceil{\lambda_2}+\ceil{\lambda_3}+1$, the coefficient of $e^{\lambda-2k}$ in the original signature character product (evaluated at $s=-1$) is $$ \binom{k+2}{2} - 2f_2(\lambda_1, k) - 2f_2(\lambda_2, k) - 2f_2(\lambda_3, k) $$ \end{lemma} \begin{proof} Each term of $e^{\lambda-2k}$ arises from a product of $e^{\lambda_1-2i_1}$, $e^{\lambda_2-2i_2}$, and $e^{\lambda_3-2i_3}$, where $i_1+i_2+i_3=k$. If all of these terms were $1$, then by a standard counting argument, the coefficient would be $\binom{k+2}{2}$. However, some of the terms are $-1$. So if we can count the number of terms that are $-1$ and subtract twice that from $\binom{k+2}{2}$, we will be done. \\ A $-1$ arises as a product of $(-1)\cdot 1\cdot 1$ or $(-1)\cdot(-1)\cdot(-1)$. But since $k$ is bounded by $\ceil{\lambda_3}+\ceil{\lambda_2}+1$, the three $-1$'s cannot happen. So we need only count the number of $(-1),1,1$ triples. \\ We claim that the number of valid triples with a $-1$ coming from $\beta_{\lambda_1}$ is $f_2(\lambda_1, k)$. To see this, count the number of solutions to: $$ 2i'_1 + i_2+i_3 = k-\ceil{\lambda_1+1} $$ using standard methods. By symmetry, the total number of valid triples is $f_2(\lambda_1, k) + f_2(\lambda_2, k) + f_2(\lambda_3, k)$. Subtracting twice this from our original count gives the result.
\end{proof} \begin{lemma} We have the following exceptional definite spaces: \begin{itemize} \item For $d \geq 0$ and explicit type $\langle 3d,d,d,d \rangle$, the level $2d+1$ and $2d+2$ spaces are positive definite. \item For $d \geq 0$ and explicit type $\langle 3d+2,d,d,d \rangle$, the level $2d+2$ and $2d+3$ spaces are negative definite. \item For $d \geq 1$ and explicit type $\langle 3d-1,d,d,d-1 \rangle$, the level $2d+1$ space is positive definite. \item For $d \geq 1$ and explicit type $\langle 3d+1, d,d, d-1 \rangle$, the level $2d+2$ space is negative definite. \item For $d \geq 1$ and explicit type $\langle 3d-2,d,d-1,d-1 \rangle$, the level $2d$ space is positive definite. \item For $d \geq 1$ and explicit type $\langle 3d,d,d-1,d-1 \rangle$, the level $2d+1$ space is negative definite. \end{itemize} \end{lemma} \begin{proof} We will outline the proof for $\langle 3d, d,d,d \rangle$. The other cases are exactly the same. \\ For $\langle 3d,d,d,d \rangle$, we know the only possible exceptional definite spaces are levels $2d+1$ and $2d+2$. We claim that these are both positive definite. Based on whether $d$ is odd or even, the claim that in $\langle 3d,d,d,d \rangle$ the levels $2d+1$ and $2d+2$ are positive definite is, by Lemmas \ref{lem:edecomp1} and \ref{lem:pppedecomp2} and the cyclicity of the map $x \to x-2\ceil{x}$, equivalent to certain quadratic polynomials being identically $0$. For example, take the case of even $d$ (so $d = 2a$). Writing $\lambda = \lambda_1+\lambda_2+\lambda_3$, we see by Lemma \ref{lem:edecomp1} that \begin{align*} e^{\lambda-4d-2} &= \beta_{\lambda - 4d - 2}^- + \beta_{\lambda - 4d - 4}^-, \\ e^{\lambda-4d} &= \beta_{\lambda - 4d}^- + \beta_{\lambda - 4d - 2}^-, \\ e^{\lambda - 2\ceil{\lambda}+ 4d + 2} &= \beta_{\lambda - 2\ceil{\lambda}+ 4d + 2}^- - \beta_{\lambda - 2\ceil{\lambda}+ 4d}^- - 2\beta_{\lambda - 4d - 2}^-.
\end{align*} Using Lemma \ref{lem:pppedecomp2}, the coefficients of these three powers of $e$ in the signature character product corresponding to our tensor product are \begin{align*} &\binom{2d+3}{2} - 2f_2(\lambda_1, 2d+1) - 2f_2(\lambda_2, 2d+1) - 2f_2(\lambda_3, 2d+1), \\ &\binom{2d+2}{2} - 2f_2(\lambda_1, 2d) - 2f_2(\lambda_2, 2d) - 2f_2(\lambda_3, 2d), \\ &\binom{d+2}{2} - 2f_2(\lambda_1, d) - 2f_2(\lambda_2, d) - 2f_2(\lambda_3, d), \end{align*} respectively. As an easy corollary of Lemma \ref{lem:edecomp1} we know that $\beta_{\lambda - 4d - 2}$ only arises in the signature decompositions of $e^{\lambda-4d-2}$, $e^{\lambda-4d}$, and $e^{\lambda - 2\ceil{\lambda}+ 4d + 2}$, so we see that the signature of the level $2d+1$ space is \begin{align*} &\Big( \binom{2d+3}{2} - 2f_2(\lambda_1, 2d+1) - 2f_2(\lambda_2, 2d+1) - 2f_2(\lambda_3, 2d+1) \Big) \\ + &\Big(\binom{2d+2}{2} - 2f_2(\lambda_1, 2d) - 2f_2(\lambda_2, 2d) - 2f_2(\lambda_3, 2d) \Big) \\ - 2&\Big(\binom{d+2}{2} - 2f_2(\lambda_1, d) - 2f_2(\lambda_2, d) - 2f_2(\lambda_3, d) \Big). \end{align*} Since, by the definition of $f_2$, we know that $f_2(x, j) = f_2(\ceil{x}, j)$ for any real $x$, we get that the signature of the level $2d+1$ space is equal to: \begin{align*} \Big( \binom{2d+3}{2} - 6f_2(d+1, 2d+1) \Big) + \Big(\binom{2d+2}{2} - 6f_2(d+1, 2d) \Big) - 2\Big(\binom{d+2}{2} - 6f_2(d+1, d) \Big). \end{align*} Now, our claim that the level $2d+1$ space is positive definite is the same as claiming that the signature of the level $2d+1$ space is equal to the dimension $2d+2$ of the level $2d+1$ space, which is equivalent to saying that: \begin{align*} &\Big( \binom{4a+3}{2} - 6f_2(2a+1, 4a+1) \Big) \\ + &\Big(\binom{4a+2}{2} - 6f_2(2a+1, 4a) \Big) \\ - 2&\Big(\binom{2a+2}{2} - 6f_2(2a+1, 2a) \Big) \\ - 4&a - 2 \end{align*} is identically zero for all nonnegative integers $a$ (above, we have substituted in $d = 2a$).
Expanding the above expression completely shows that \begin{align*} 0 = &\Big( \binom{4a+3}{2} - 6f_2(2a+1, 4a+1) \Big) \\ + &\Big(\binom{4a+2}{2} - 6f_2(2a+1, 4a) \Big) \\ - 2&\Big(\binom{2a+2}{2} - 6f_2(2a+1, 2a) \Big) \\ - 4&a - 2, \end{align*} as needed, so the level $2d+1$ space is indeed positive definite for even $d$. Completing the analogous calculations for the level $2d+1$ space for odd $d$ and the level $2d+2$ space for odd and even $d$ shows that the level $2d+1$ and $2d+2$ spaces are always positive definite in explicit type $\langle 3d, d, d, d \rangle$, as desired. \end{proof} \begin{lemma} In explicit type $\langle 2d+d'+2, d,d,d' \rangle$ where $d > d'$, the only potential exceptional definite space (level $d+d'+2$) is definite exactly when $d'=d-1$. If it is definite then it is negative definite. \end{lemma} \begin{proof} Using Lemmas \ref{lem:edecomp1} and \ref{lem:pppedecomp2} we get that the signature of the level $d+d'+2$ space is negative, so we only need to check when it is equal to $-d-d'-3$. If $d$ and $d'$ have opposite parities, then using Lemmas \ref{lem:edecomp1} and \ref{lem:pppedecomp2} gives that this happens exactly when $d'=d-1$. If $d$ and $d'$ have the same parity, this happens either when $d=d'$ (impossible) or never. \end{proof} From the classification of exceptional multiplicity spaces proved in Subsection \ref{subsec:pppexc}, we obtain Case 5. \section{Proof of Remaining Cases}\label{app:n4class} We start with some new definitions. Define a function $\Lambda$ by $\Lambda(k) = \sum_{i=1}^k \lambda_i$. Let $\lambda_+ = \Lambda(p)$ and $\lambda_- = \lambda - \lambda_+$. \subsection{Proof of Case 6} \begin{lemma}\label{lem:pn21} If $p \geq 2$, $n-p \geq 2$, and $|\lambda_+| > |\lambda_{p+1}|$, then there is exactly one definite space in $M$. It is the level $0$ space and it is positive definite.
\end{lemma} \begin{proof} When we multiply $\beta_{\lambda_1}, \beta_{\lambda_2}, ..., \beta_{\lambda_{p+1}}$, the product begins with $$ e^{\lambda_{p+1}+\lambda_+} + (p+s)e^{\lambda_{p+1}+\lambda_+-2} = \beta_{\lambda_{p+1}+\lambda_+} + (p-1 + s)\beta_{\lambda_{p+1}+\lambda_+-2}. $$ Since the level $1$ space is nondefinite, when we multiply by $\beta_{\lambda_{p+2}}$, we get that all spaces with level at least $1$ are nondefinite. Thus, in $M$, all spaces with level at least $1$ are nondefinite. In $M$, the level $0$ space is positive definite. Our claim follows. \end{proof} \begin{lemma}\label{lem:pn22} If $p \geq 1$, $n-p \geq 2$, and $|\lambda_-| > |\lambda_p|$, then there is exactly one definite space in $M$. It is the level 0 space and it is positive definite. \end{lemma} \begin{proof} The product of the $n-p-1$ signature characters $\beta_{\lambda_{p+2}},..., \beta_{\lambda_n}$ contains $\beta_{-\lambda_{p+1} +\lambda_-}$. So the product of $\beta_{\lambda_p}, \beta_{\lambda_{p+1}}, ... \beta_{\lambda_n}$ contains the product of $\beta_{\lambda_p}, \beta_{\lambda_{p+1}}, \beta_{-\lambda_{p+1}+\lambda_-}$. From the previously proven Case 3 classification, we deduce that there is exactly one definite space. That is the level $0$ space which is positive definite. \end{proof} \begin{lemma} If $p \geq 2$ and $n-p \geq 2$, then there is exactly one definite space in $M$. It is the level 0 space and it is positive definite. \end{lemma} \begin{proof} Assume the claim is false. Then by Lemma \ref{lem:pn21}, $\lambda_{p+1} + \lambda_+ < 0$. By Lemma \ref{lem:pn22}, $0 < \lambda_p + \lambda_-$. Adding these yields $$ \lambda_+ + \lambda_{p+1} < \lambda_p + \lambda_- $$ or $$ \sum_{i<p} \lambda_i < \sum_{i>p+1} \lambda_i $$ which is a contradiction as the LHS is positive and the RHS is negative. So our assumption was false and the claim holds. \end{proof} This proves the Case 6 classification.
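At $s=-1$, the two leading coefficients displayed in the proof of Lemma \ref{lem:pn21} ($1$ and $p+s$, i.e. $p-1$) can be confirmed by direct enumeration, with the same sign conventions as in the earlier checks. A small sketch; the weights are illustrative only:

```python
import math
from itertools import product as iproduct

def sig(mu, j):
    # sign of e^{mu-2j} in beta_mu at s = -1 (non-integer mu)
    if mu < 0:
        return (-1)**j
    return 1 if j <= math.floor(mu) else (-1)**(j - math.ceil(mu))

def level_coeff(weights, k):
    # coefficient of e^{sum(weights)-2k} in the product, at s = -1
    total = 0
    for levels in iproduct(range(k + 1), repeat=len(weights)):
        if sum(levels) == k:
            total += math.prod(sig(mu, j) for mu, j in zip(weights, levels))
    return total

# p positive weights followed by one negative weight: the product should begin
# e^{nu} + (p + s) e^{nu - 2}, i.e. coefficients 1 and p - 1 at s = -1.
for p in (2, 3, 4):
    weights = [2.5 + i for i in range(p)] + [-1.3]
    assert level_coeff(weights, 0) == 1
    assert level_coeff(weights, 1) == p - 1
print("leading coefficients of the Case 6 product confirmed at s = -1")
```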
\subsection{Proof of Case 4 for $n \geq 4$} \begin{lemma} If $|\Lambda(p-1)| > |\lambda_n|$, then there is exactly one definite space in $M$. It is the level $0$ space and it is positive definite. \end{lemma} \begin{proof} We know $p \geq 3$ since $n \geq 4$. By the same logic as in the proof of Lemma \ref{lem:pn21}, in the tensor product $M_{\lambda_1} \otimes \cdots \otimes M_{\lambda_{p-1}} \otimes M_{\lambda_n}$, the level 1 space is nondefinite. So when we tensor this with $M_{\lambda_p}$, all the spaces with level at least 1 are nondefinite. The level 0 space is positive definite. The lemma follows. \end{proof} \begin{lemma}\label{lem:pnminus11} If $|\Lambda(p-1)| < |\lambda_n|$, then no space after the level $\ceil{\lambda_p}$ space is definite in $M$. \end{lemma} \begin{proof} The product of $\beta_{\lambda_p}$ and $\beta_{\lambda_n}$ contains the product of $\beta_{\lambda_p - 2\ceil{\lambda_p}}$ and $\beta_{\lambda_n}$. Thus, the product of all our $n$ signature characters contains the product of $\beta_{\lambda_1},..., \beta_{\lambda_{p-1}}, \beta_{\lambda_p - 2\ceil{\lambda_{p}}}, \beta_{\lambda_n}$. Applying Lemma \ref{lem:pn22} gives that in $M$, no space after the level $\ceil{\lambda_p}$ space is definite. \end{proof} \begin{lemma} If $|\Lambda(p-1)| < |\lambda_n| < |\Lambda(p)|$, then there are exactly $\ceil{\lambda_p} - \floor{\lambda}$ definite spaces in $M$. They are all positive definite; one of them is level 0 and the others are levels $\ceil{\lambda+1}$ through $\ceil{\lambda_p}$. \end{lemma} \begin{proof} When we multiply $\beta_{\lambda_1},..., \beta_{\lambda_p}$, it's easy to see that the first $\ceil{\lambda_p}$ signature characters in the sum decomposition are positive definite. (Use the fact that $s$'s don't appear in the series for each $\beta_x$ until $\ceil{x+1}$ when $x$ is positive.) Using the fact that the product of $\beta_{\lambda_1}, ...
,\beta_{\lambda_p}$ contains $\beta_{\lambda_+}$ and $p\beta_{-2+\lambda_+}$, along with the Case 2 classification and Lemma \ref{lem:pnminus11}, gives that the definite spaces in $M$ are positive and have levels 0 and $\ceil{\lambda+1}$ through $\ceil{\lambda_p}$. \end{proof} \begin{lemma} If $|\lambda_n| > \Lambda(p)$, then there are exactly $\ceil{\lambda_p+1}$ definite spaces in $M$. They are all positive definite and they have levels 0 through $\ceil{\lambda_p}$. \end{lemma} \begin{proof} From Lemma \ref{lem:pnminus11}, it's enough to show that spaces with level 0 to $\ceil{\lambda_p}$ are positive definite. When we multiply $\beta_{\lambda_1},..., \beta_{\lambda_p}$, it's easy to see that the first $\ceil{\lambda_p}$ signature characters in the sum decomposition are positive definite. (Use the fact that $s$'s don't appear in the series for each $\beta_x$ until $\ceil{x+1}$ when $x$ is positive.) Using this, along with the previously proven Case 2 classification, gives that the first $\ceil{\lambda_p +1}$ spaces are definite in $M$. \end{proof} Combining the results of this subsection gives the $n \geq 4$ subcase of Case 4. The form of the classification stated in Case 4 is slightly different from the form of the classification we have proved here, but it can be easily shown to be equivalent. \subsection{Proof of Case 5 for $n \geq 4$} \begin{lemma}\label{lem:speccond} Suppose $M$ does not satisfy either of the following special conditions: \begin{itemize} \item $p > 4$, and $M$ has implicit type either $\langle 1, 0, ..., 0 \rangle$ or $\langle 0, 0, ... , 0 \rangle$. \item $p = 4$, and $M$ has $\floor{\lambda_1} \in \{ 0, 1 \}$. \end{itemize} Then $M$ has exactly $\ceil{\lambda_p+1}$ definite spaces. They are all positive definite and they have levels 0 through $\ceil{\lambda_p}$.
\end{lemma} \begin{proof} Since $M$ contains $M_{\lambda_1} \otimes M_{\lambda_2} \otimes M_{\lambda_n}$ as a sub-tensor product, the already proven $n=3$ subcase of Case 5 gives that the only possible implicit types with exceptional definite spaces are $\langle 0, 0, ..., 0 \rangle$, $\langle 1, 1, ... , 1 \rangle$, and $\langle 1, 0, ..., 0 \rangle$. None of the (finitely many) explicit types for $n=5$ that have implicit type $\langle 1, 1, ..., 1 \rangle$ has exceptional definite spaces, so the claim is proven. \end{proof} \begin{lemma}\label{lem:pnminus12} Suppose $p \geq 4$ and $M$ has explicit type $\langle 0,0,...,0 \rangle$. Then it has exactly three definite spaces. \end{lemma} \begin{proof} It is easily checked that in the tensor product of three Verma modules from the original set, the level 3 space has multiplicity signature $1+3s$. Thus no space after level 2 is definite in $M$. It is then easily checked that levels 0, 1, and 2 are definite. \end{proof} \begin{lemma} Suppose $p > 4$ and $M$ has implicit type $\langle 0,0, ..., 0\rangle$. Then if $\lambda_+ < 1$, $M$ has exactly 3 definite spaces: they are levels 0,1,2 and they are positive definite. If $\lambda_+ > 1$, then $M$ has exactly 2 definite spaces: they are levels 0,1 and they are positive definite. \end{lemma} \begin{proof} If $\lambda_+ < 1$, then the claim follows from Lemma \ref{lem:pnminus12}. Otherwise suppose $\lambda_+ > 1$. Let $i$ be minimal such that $\Lambda(i) > 1$. By assumption, $2 \leq i \leq p$. If $2 < i < p$, then in the tensor product of $M_{\lambda_1},\ldots, M_{\lambda_i}$, we can check that the level 2 space is nondefinite, so our claim holds for $M$. Otherwise we have $i = p$ or $i = 2$. If $i = p$, tensor the first $p-1$ Vermas and apply Lemma \ref{lem:pnminus12} and the already proven Case 2 classification to get the claim.
If $i = 2$, check that all the (finitely many) explicit types for $n=5$ which have $\lambda_+ > 1$ and implicit type $\langle 0,0,0,0,0 \rangle$ have two definite spaces. So in all cases the claim holds. \end{proof} \begin{lemma} Suppose $p > 4$ and $M$ has implicit type $\langle 1,0,...,0 \rangle$. Then if $\lambda_+ < 2$, $M$ has exactly 3 definite spaces: they are levels 0,1,2 and they are positive definite. If $\lambda_+ > 2$, then $M$ has exactly 2 definite spaces: they are levels 0,1 and they are positive definite. \end{lemma} \begin{proof} Check that the claim holds for all (finitely many) explicit types of tensors which have $p=5$ and implicit type $\langle 1,0,...,0 \rangle$. For $p > 5$, tensoring the 5 Vermas with largest highest weights and using the result in the last sentence gives that no space after level 3 can be definite. Check that if $\lambda_+ < 2$ there are three definite spaces and if $\lambda_+ > 2$ there are two definite spaces as claimed. \end{proof} \begin{lemma} Suppose $p = 4$. Then there are $\ceil{\lambda_p+1}$ definite spaces in $M$ as described before, except in the following cases: \begin{itemize} \item Explicit type $\langle 0,0,0,0,0 \rangle$, in which the level 0,1,2 spaces are positive definite. \item Explicit type $\langle 3,0,0,0,0 \rangle$, in which the level 0,1 spaces are positive definite and the level 3 space is negative definite. \item Explicit type $\langle 1,1,0,0,0 \rangle$, in which the level 0,1,2 spaces are positive definite. \item Explicit type $\langle 4,1,1,1,1 \rangle$, in which the level 0,1,2,4 spaces are positive definite. \end{itemize} \end{lemma} \begin{proof} From Lemma \ref{lem:speccond}, we only need to check the (finitely many) explicit types for which $\lambda_1 < 2$. \end{proof} Combining the results in this subsection gives the $n \geq 4$ subcase of Case 5. \section{Proof of Lemma \ref{bQreal}}\label{app:bQreal} Recall the definition $ b_Q = Y(t_1)Y(t_2) \cdots Y(t_m)v $ from Section \ref{sec:cpbprelims}.
Expanding this gives the equivalent definition $$ b_Q = \sum_{a_1+a_2+\cdots+a_n = m} \bigotimes_{i=1}^n F^{a_i}v_i \cdot \left( \sum_{\sigma} \frac{1}{\prod_{j=1}^m (t_j - z_{\sigma(j)})} \right), $$ where $\sigma$ ranges over all maps $\sigma:[m]\to[n]$ satisfying $|\sigma^{-1}\{i\}|=a_i$. We assume that the joint eigenvalue (for the action of $\mathcal{H}_1,\ldots,\mathcal{H}_n$) of $b_Q$ is real. The operators $\mathcal{H}_i$ preserve the obvious real form of $\bigotimes_{i=1}^n M_{\lambda_i}$. It follows that the real and imaginary parts of $b_Q$ are also joint eigenvectors with the same joint eigenvalue. But recall (Lemma \ref{lemyo}) that the joint eigenspaces are all one-dimensional; it follows that $b_Q$ is proportional to a real vector: there exists $a\in\mathbb{C}^\times$ such that $\sum_{\sigma} \frac{1}{\prod_{j=1}^m (t_j - z_{\sigma(j)})}\in a\mathbb{R}$ for all partitions $(a_1,\ldots,a_n)$ of $m$. Considering partitions with $n-1$ zero entries, one obtains that $1/Q(z_i)\in a\mathbb{R}$ for each $i$. We show by induction on $k$ that $Q^{(k)}(z_i)/Q(z_i)\in\mathbb{R}$ for each $i$. The case $k=1$ is the assumption that the joint eigenvalue is real, see the corollary to Lemma \ref{lem:hb}. Assume that $Q'(z_i)/Q(z_i), ..., Q^{(k-1)}(z_i)/Q(z_i)$ are all real for each $i$. We show that $Q^{(k)}(z_i)/Q(z_i)$ is real for each $i$. Consider the partition $p_k = (m-k,k,0,...,0)$ of $m$. The coefficient of the summand of $b_Q$ indexed by $p_k$ is $$ \frac{1}{Q(z_1)} \cdot \sum_{k\text{-subsets $S$ of } \{1,2,...,m\}} \frac{\prod_{i \in S} (t_{i}-z_1)}{\prod_{i \in S} (t_{i}-z_2)}\in a\mathbb{R}; $$ multiplying by $Q(z_1)$ we get that $$ \sum_{k\text{-subsets $S$ of } \{1,2,...,m\}} \frac{\prod_{i \in S} (t_{i}-z_1)}{\prod_{i \in S} (t_{i}-z_2)}\in \mathbb{R}. $$ Define a polynomial associated to the $k$-set $S$ by $R_S(z) = \prod_{i \in S} (t_{i}-z)$. Then the line above says that $$ \sum_{k\text{-subsets } S} \frac{R_S(z_1)}{R_S(z_2)}\in\mathbb{R}.
$$ Using the Taylor expansion $$ \frac{R_S(z_1)-R_S(z_2)}{z_1-z_2} = R'_S(z_2) + \left( \frac{z_1-z_2}{2!} \right)R''_S(z_2) + \dots $$ we get that $$ \sum_{k\text{-subsets } S} \frac{R'_S(z_2) + \left( \frac{z_1-z_2}{2!} \right)R''_S(z_2) + ...}{R_S(z_2)} = \frac{1}{z_1-z_2}\sum_{k\text{-subsets } S} \frac{R_S(z_1)}{R_S(z_2)}-\frac{\binom{m}{k}}{z_1-z_2}\in\mathbb{R}. $$ But this is equal to \begin{align*} & \sum_{k\text{-subsets } S} \left(\sum_{\{j_1\} \subset S} \frac{1}{z_2-t_{j_1}} + (z_1-z_2) \sum_{2-\text{subsets } \{j_1,j_2\}\subset S} \frac{1}{(z_2-t_{j_1})(z_2-t_{j_2})} + \ldots\right)\\ = & \binom{m-1}{k-1}\frac{Q'(z_2)}{Q(z_2)}+\binom{m-2}{k-2}\frac{z_1-z_2}{2!}\frac{Q''(z_2)}{Q(z_2)}+\binom{m-3}{k-3}\frac{(z_1-z_2)^2}{3!}\frac{Q'''(z_2)}{Q(z_2)}+\ldots + \binom{m-k}{0}\frac{(z_1-z_2)^{k-1}}{k!}\frac{Q^{(k)}(z_2)}{Q(z_2)} \end{align*} All but the last of these summands are real by the inductive hypothesis. Therefore the last term is also real, and after rescaling we get that $\frac{Q^{(k)}(z_2)}{Q(z_2)}\in\mathbb{R}$. We similarly prove (using permutations of the partitions $p_k$) that $\frac{Q^{(k)}(z_i)}{Q(z_i)}\in\mathbb{R}$ for all $i$. This completes the induction.\\ \noindent We have shown that $Q^{(k)}(z_i)\in a^{-1}\mathbb{R}$ for all $i$, $k$. But taking $k=m$ gives $\pm m! \in a^{-1}\mathbb{R}$, so $a\in\mathbb{R}$ and hence $Q^{(k)}(z_i)\in \mathbb{R}$ for all $i$, $k$.\\ \noindent Let $P(z)=Q(z+z_1)$. Then $P$ has real coefficients if and only if $Q$ does, since $z_1$ is real. But all derivatives of $P$ evaluated at $0$ give real numbers. So $P$, and hence $Q$, has real coefficients. \end{document}
\begin{document} \title*{The Quantum Mirror to the Quartic del Pezzo Surface} \author{H\"ulya Arg\"uz} \institute{H\"ulya Arg\"uz \at Institute of Science and Technology, Austria, \email{[email protected]} } \maketitle \abstract{ A log Calabi--Yau surface $(X,D)$ is given by a smooth projective surface $X$, together with an anti-canonical cycle of rational curves $D \subset X$. The homogeneous coordinate ring of the mirror to such a surface -- or to the complement $X\setminus D$ -- is constructed using wall structures and is generated by theta functions \cite{GHK, GHS}. In \cite{A1}, we provide a recipe to concretely compute these theta functions from a combinatorially constructed wall structure in $\vec{R}^2$, called the heart of the canonical wall structure. In this paper, we first apply this recipe to obtain the mirror to the quartic del Pezzo surface, denoted by $dP_4$, together with an anti-canonical cycle of $4$ rational curves. Afterwards, we describe the deformation quantization of this coordinate ring, following \cite{BP}. This gives a non-commutative algebra, generated by quantum theta functions. There is a totally different approach to constructing deformation quantizations, using the realization of the mirror as the monodromy manifold of the Painlev\'e IV equation \cite{CMR,M}. We show that these two approaches agree. } \section{The mirror to $(dP_4,D)$} \label{sec:2} We construct the mirror to $(dP_4,D)$, following the recipe in \cite{A1}. To obtain the theta functions generating the coordinate ring of the mirror, we define an initial wall structure associated to $(X,D)$, using the following data: \begin{itemize} \item A choice of a \emph{toric model}, that is, a birational morphism $(X, D) \to (\overline{X},\overline{D})$ to a smooth toric surface $\overline{X}$ with its toric boundary $\overline{D}$ such that $D \to \overline{D}$ is an isomorphism.
For the quartic del Pezzo surface $X=dP_4$, obtained by blowing up $5$ general points in ${\vec{P}}^2$, we consider the toric model given by a toric blow-up of ${\vec{P}}^2$, illustrated in Figure \ref{Fig: dP40}. We choose $\overline{D}$ to be the toric boundary divisor in $\overline{X}$, and let $D$ be the strict transform of $\overline{D}$. We note that the equations of the mirror will be independent of the choice of the toric model \cite{A1,GHK}. This choice defines a natural subdivision of $M_{\vec{R}}$, where $M \cong \vec{Z}^2$ is a fixed lattice, and $M_{\vec{R}} = M \otimes\vec{R} \cong \vec{R}^2$, given by the toric fan $\Sigma_{\overline{X}} \subset M_{\vec{R}}$ of $\overline{X}$. We denote $M_{\vec{R}}$ together with this subdivision by $(M_{\vec{R}},\Sigma_{\overline{X}})$. \begin{figure} \caption{On the left hand figure, $H$ is the class of a general line on the projective plane $\vec{P}^2$.} \label{Fig: dP40} \end{figure} \item A \emph{multi-valued piecewise linear} (MVPL) function $\varphi$ on $(M_{\vec{R}},\Sigma_{\overline{X}})$ with values in the monoid of integral points of the cone of effective curves on $X$, denoted by $\mathrm{NE}(X)$. Up to a linear function, we uniquely define $\varphi$ by specifying its kinks along each ray of $\Sigma_{\overline{X}}$ to be the pullback of the class of the curve in $\overline{X}$ corresponding to this ray, as illustrated in Figure \ref{fig:PL}. \end{itemize} \begin{figure} \caption{The kinks of the MVPL function $\varphi$ on the rays of the fan $\Sigma_{\overline{X}}$.} \label{fig:PL} \end{figure}
Each wall crossing function defines a wall-crossing transformation \begin{equation} \label{Eqn: crossing walls} \theta_{\gamma,\rho}: \quad z^v\longmapsto f_\rho^{\langle n_\rho,v \rangle} z^v \,, \end{equation} prescribing how a monomial $z^v$ changes when crossing the wall $(\rho,f_{\rho})$. Here, $n_\rho$ is the normal vector to $\rho$ chosen with a sign convention as in \cite[\S$2.2.1$]{AG}. \end{definition} We note that in general one might need to work with infinitely many walls, and then to work modulo an ideal of $NE(X)$ while defining the wall crossing functions. However, in the case of the quartic del Pezzo surface we will only need finitely many walls, and the wall crossing functions will be elements of a polynomial ring, as in Definition \ref{def wall structure}. To obtain the equation of the mirror to $(X,D)$, we first define an initial wall structure associated to $(X,D)$. To do this, we first define an initial set of walls in $(M_{\vec{R}},\Sigma_{\overline{X}})$. For every non-toric blow-up in the toric model, we include a wall $(\rho,f_\rho)$, where $\rho$ is the ray in $\Sigma_{\overline{X}}$ corresponding to the divisor on which we do the non-toric blow-up, and \[ f_{\rho} = 1+ t^{-E_i}z^{-v_{\rho}} = 1+ t^{-E_i}x^{-a}y^{-b} \,,\] where $E_i \in NE(X)$ is the class of the exceptional curve, $z^{v_{\rho}}$ denotes the element in the monoid ring $\vec{C}[M]$, corresponding to the primitive direction $v_{\rho} \in M$ of $\rho$ pointing towards $0$ referred to as the direction vector, and we use the convention $x= z^{(1,0)}$ and $y= z^{(0,1)}$. The negative signs on the powers of $t$ and $z$ are chosen following the sign conventions of \cite{AG}. \begin{figure} \caption{Walls of the initial wall structure associated to $(dP_4,D)$, and their perturbations obtained by translating some walls, so that each intersection is formed by only two walls meeting at a point. 
The dashed blue lines indicate where the kinks of the MVPL function $\varphi$ are, the black rays with arrows are walls. On the left hand figure the function attached to the wall with direction $(1,0)$ is given by $1+t^{-E_2}x^{-1}$.} \label{Fig: perturb} \end{figure} The walls of the initial wall structure do not generally intersect only pairwise, but there can be triple or more complicated intersections, as illustrated on the left hand side of Figure \ref{Fig: perturb}, where we have $4$ walls intersecting -- two of these walls lie on top of each other and have direction vector $(-1,-1)$, and the other two have direction vectors $(1,0)$ and $(0,1)$. However, we can always move these walls so that each intersection point of the initial walls is formed by only two walls intersecting. The initial wall structure after a choice of such a perturbation is illustrated on the right hand side of Figure \ref{Fig: perturb}. In this situation, where only two walls meet at each intersection point, we can easily describe a \emph{consistent wall structure}, obtained from the initial wall structure by inserting new walls so that the composition of the wall crossing transformations around each intersection point is the identity. This diagram is referred to as the heart of the canonical wall structure in \cite{A1}. Roughly put, each time a wall with support on a ray $\rho_i$ with direction $v_i$ and with attached function $ 1+t^{-E_i}z^{-v_i}$ intersects another wall with support on a ray $\rho_j$ with direction $v_j$ and with attached function $1+t^{-E_j}z^{-v_j}$, we first extend these rays to lines, and on the new rays formed in this way we attach the functions $1+t^{\kappa_i-E_i}z^{-v_i}$ and $1+t^{\kappa_j-E_j}z^{-v_j}$ respectively. Here $\kappa_i$ and $\kappa_j$ denote the sums of the kinks of the MVPL function $\varphi$ lying on rays that intersect the initial rays.
We furthermore insert an additional wall with support on a ray $\rho_{i+j}$ with direction $v_i+v_j$ and with attached function \[ 1+t^{(\kappa_i+\kappa_j)-(E_i+E_j)}z^{-(v_i+v_j)} \,.\] Note that this simple prescription describes the consistent wall structure, because whenever two walls with direction vectors $v_i$ and $v_j$ intersect, the determinant of $v_i$ and $v_j$ is $\pm 1$. The coordinate ring of the mirror to $(X,D)$ is generated by \emph{theta functions}, determined by keeping track of how a set of initial monomials change under wall crossing in this consistent wall structure \cite[\S3]{A1}. We have as many theta functions as the number of rays of the toric model associated to $(dP_4,D)$, whose fan is illustrated in Figure \ref{fig:PL}. The direction vectors of these rays are given by $(-1,-1),(-1,0),(1,1),(0,-1)$, which correspond to the initial set of monomials $x^{-1}y^{-1},x^{-1},xy,y^{-1}$. The corresponding $4$ theta functions, respectively denoted by $\vartheta_1,\ldots,\vartheta_4$, are determined by tracing how these monomials change as the corresponding rays cross walls in the consistent wall structure associated to $(dP_4,D)$ illustrated in Figure \ref{Fig: DGPScons}. To determine the wall crossings we first fix a general point $P$ as in Figure \ref{Fig: DGPScons}, and look at the rays coming from the directions $(-1,-1),(-1,0),(1,1),(0,-1)$ and stopping at this point -- note that we chose the point $P$ so that each ray crosses exactly two walls and the situation is symmetric, which makes it easier to calculate the theta functions.
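As a concrete illustration of the wall-crossing transformation (\ref{Eqn: crossing walls}), consider the first wall crossed by the red ray labelled with I: its attached function is $1+t^{H-E_4-E_5}x$, and (under the sign conventions of \cite{AG}, for which we take the exponent $\langle n_\rho, v\rangle$ to equal $1$ for this crossing -- an assumption spelled out here rather than rederived) the initial monomial $z^v = x^{-1}y^{-1}$ transforms as

```latex
% first crossing in the computation of \vartheta_1; the exponent
% \langle n_\rho, v \rangle = 1 is the assumption stated in the text above
\[
x^{-1}y^{-1} \;\longmapsto\; x^{-1}y^{-1}\left(1+t^{H-E_4-E_5}\,x\right)
  \;=\; x^{-1}y^{-1} \;+\; t^{H-E_4-E_5}\,y^{-1}\,.
\]
```

Crossing the second wall, with attached function $1+t^{E_1-E_5}y^{-1}$, then acts on the first term and produces the remaining term $t^{E_1-E_5}x^{-1}y^{-2}$ of $\vartheta_1$.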
Each of the $4$ theta functions is then obtained by the following wall crossings: The red ray labelled with I crosses the two walls with attached functions $1+t^{H-E_4-E_5}x$ and $1+t^{E_1-E_5}y^{-1}$, the functions on the two walls crossed by the red ray labelled by II are $1+t^{E_1-E_5}y^{-1}$ and $1+t^{H-E_1-E_3}xy$, the functions on the walls crossed by the red ray labelled by III are $1+t^{-E_2}x^{-1}$ (in this case we also take into account the kink $H-E_1$ of the MVPL function in the computation of the theta functions) and $1+t^{H-E_1-E_2-E_3}y$, and finally the red ray labelled by IV crosses the walls with attached functions $1+t^{2H-E_1-E_2-E_3-E_4}xy^2$ and $1+t^{H-E_1-E_4}xy$ (and in this case there is also a kink of the MVPL function given by $E_1$). Hence, we obtain: \begin{eqnarray} \nonumber & & x^{-1}y^{-1} \mapsto x^{-1}y^{-1} +t^{H-E_4-E_5} y^{-1} \mapsto x^{-1}y^{-1} +t^{E_1-E_5} x^{-1} y^{-2} + t^{H-E_4-E_5} y^{-1} =: \vartheta_1 \\ \nonumber & & x^{-1} \mapsto x^{-1} +t^{E_1-E_5} x^{-1} y^{-1} \mapsto x^{-1} +t^{H-E_1-E_3} y + t^{E_1-E_5} x^{-1}y^{-1} =: \vartheta_2 \\ \nonumber & & xy \mapsto t^{H-E_1} xy + t^{H-E_1-E_2} y \mapsto t^{H-E_1} xy + t^{2H-2E_1-E_2-E_3} xy^2 +t^{H-E_1-E_2} y =: \vartheta_3 \\ \nonumber & & y^{-1} \mapsto y^{-1} +t^{2H-E_1-E_2-E_3-E_4} xy \mapsto t^{E_1}y^{-1} +t^{H-E_4} x + t^{2H-E_1-E_2-E_3-E_4} xy =: \vartheta_4 \nonumber \end{eqnarray} \begin{figure} \caption{The consistent wall structure obtained from the perturbed initial wall structure with $4$ incoming rays, illustrated in Figure \ref{Fig: perturb}.} \label{Fig: DGPScons} \end{figure} The above theta functions generate the mirror to $(dP_4,D)$, which is obtained as a family of log Calabi--Yau surfaces $(dP_4,D)$ given as $\mathrm{Spec}$ of the quotient of $\vec{C}[\mathrm{NE}(X)][\vartheta_1,\vartheta_2,\vartheta_3,\vartheta_4]$ by the following two quadratic equations: \begin{eqnarray} \label{Eq: classic quadric eqns} \vartheta_1 \vartheta_3 & = &
C_1 + t^{H-E_1-E_2} \vartheta_2 + t^{H-E_1-E_5} \vartheta_4 \\ \nonumber \vartheta_2 \vartheta_4 & = & C_2 + t^{E_1} \vartheta_1 + t^{H-E_3-E_4} \vartheta_3 \nonumber \end{eqnarray} where \begin{eqnarray} \label{Eq C1C2} C_1 & = & t^{H-E_1} + t^{2H-E_1-E_2-E_3-E_5} + t^{2H-E_1-E_2-E_4-E_5} \\ \nonumber C_2 & = & t^{H-E_4} + t^{H-E_3} + t^{2H-E_2-E_3-E_4-E_5}. \end{eqnarray} Note that the resulting equations for the mirror given above agree with those obtained in \cite{B} using computer algebra. \section{The quantum mirror to $(dP_4,D)$} A general recipe to construct a deformation quantization of mirrors of log Calabi--Yau surfaces is given in \cite{BP}. For $(dP_4,D)$, as the consistent wall structure consists of only finitely many walls, the general recipe reduces to the following simple prescription: the quantum theta functions, denoted by $\hat{\vartheta}_1,\ldots,\hat{\vartheta}_4$, are obtained from the theta functions above by replacing the monomials $z^v \in \vec{C}[M]$ by quantum variables, denoted by $\hat{z}^v$, which are elements of the quantum torus, that is, they satisfy $\hat{z}^v \hat{z}^{v'}=q^{\frac{1}{2}\det(v,v')}\hat{z}^{v+v'}$, where $q$ is the quantum deformation parameter.
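The quantum torus relation, and the commutation relations it forces among the quantum theta functions, can be checked mechanically. The following minimal Python sketch (all function names hypothetical) encodes a monomial $t^{\beta}\hat{z}^v$ by the pair $(\beta, v)$, with $\beta$ written as a coefficient tuple in the basis $(H,E_1,\ldots,E_5)$ and coefficients recorded as Laurent polynomials in $q^{1/2}$, and verifies the relation $q^{1/2}\hat{\vartheta}_1\hat{\vartheta}_2 - q^{-1/2}\hat{\vartheta}_2\hat{\vartheta}_1 = (q^{1/2}-q^{-1/2})\,t^{2H-E_1-E_3-E_4-E_5}$ appearing in the list of $8$ relations.

```python
from collections import defaultdict
from fractions import Fraction

# Hypothetical encoding: a curve class is a coefficient tuple in the basis
# (H, E1, E2, E3, E4, E5); an element of the quantum torus algebra is a dict
# mapping (class, v) to a Laurent polynomial in q^{1/2}, itself a dict
# {exponent of q: integer coefficient}.
def cls(H=0, E1=0, E2=0, E3=0, E4=0, E5=0):
    return (H, E1, E2, E3, E4, E5)

def clean(f):
    # drop zero coefficients and empty monomials
    return {k: {e: c for e, c in p.items() if c}
            for k, p in f.items() if any(p.values())}

def mul(f, g):
    # quantum torus product: z^v z^w = q^{det(v,w)/2} z^{v+w}
    out = defaultdict(lambda: defaultdict(int))
    for (s, v), pv in f.items():
        for (t, w), pw in g.items():
            key = (tuple(a + b for a, b in zip(s, t)),
                   (v[0] + w[0], v[1] + w[1]))
            half = Fraction(v[0] * w[1] - v[1] * w[0], 2)
            for e1, c1 in pv.items():
                for e2, c2 in pw.items():
                    out[key][e1 + e2 + half] += c1 * c2
    return clean(out)

def scal(e, f):
    # multiply by q^e
    return {k: {ex + e: c for ex, c in p.items()} for k, p in f.items()}

def sub(f, g):
    out = defaultdict(lambda: defaultdict(int))
    for k, p in f.items():
        for e, c in p.items():
            out[k][e] += c
    for k, p in g.items():
        for e, c in p.items():
            out[k][e] -= c
    return clean(out)

def theta(*terms):
    return {(t, v): {Fraction(0): 1} for t, v in terms}

# quantum theta functions, read off from the displayed formulas
th1 = theta((cls(), (-1, -1)),
            (cls(E1=1, E5=-1), (-1, -2)),
            (cls(H=1, E4=-1, E5=-1), (0, -1)))
th2 = theta((cls(), (-1, 0)),
            (cls(H=1, E1=-1, E3=-1), (0, 1)),
            (cls(E1=1, E5=-1), (-1, -1)))

h = Fraction(1, 2)
lhs = sub(scal(h, mul(th1, th2)), scal(-h, mul(th2, th1)))
rhs = {(cls(H=2, E1=-1, E3=-1, E4=-1, E5=-1), (0, 0)): {h: 1, -h: -1}}
assert lhs == rhs
```

Running the same routine on the other pairs reproduces the remaining commutation relations; all cross terms with $\det(v,w)=-1$ cancel, which is the mechanism behind the simple right hand sides.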
Hence, from the equations for the theta functions above, we obtain: \begin{eqnarray} \hat{\vartheta}_1 & = & \hat{z}^{(-1,-1)} + t^{E_1-E_5} \hat{z}^{(-1,-2) } + t^{H- E_4-E_5} \hat{z}^{(0,-1)} \\ \nonumber \hat{\vartheta}_2 & = & \hat{z}^{(-1,0)} + t^{H-E_1-E_3} \hat{z}^{(0,1) } + t^{ E_1-E_5} \hat{z}^{(-1,-1)} \\ \nonumber \hat{\vartheta}_3 & = & t^{H-E_1}\hat{z}^{(1,1)} + t^{2H-2E_1-E_2-E_3} \hat{z}^{(1,2) } + t^{H- E_1-E_2} \hat{z}^{(0,1)} \\ \nonumber \hat{\vartheta}_4 & = & t^{E_1} \hat{z}^{(0,-1)} + t^{H-E_4} \hat{z}^{(1,0) } + t^{2H-E_1-E_2-E_3-E_4} \hat{z}^{(1,1)} \end{eqnarray} These quantum theta functions satisfy the following $8$ relations -- the first two obtained as deformations of the two quadric equations in (\ref{Eq: classic quadric eqns}), and the remaining $6$ determining the non-commutativity of the products of pairs of quantum theta functions. \begin{eqnarray} \label{Eq:quantul theta} \hat{\vartheta}_1 \hat{\vartheta}_3 & = & C_1 + q^{-\frac{1}{2}} t^{H-E_1-E_2} \hat{\vartheta}_2 + q^{\frac{1}{2}} t^{H-E_1-E_5} \hat{\vartheta}_4 \\ \nonumber \hat{\vartheta}_2 \hat{\vartheta}_4 & = & C_2 + q^{\frac{1}{2}} t^{E_1} \hat{\vartheta}_1 + q^{-\frac{1}{2}} t^{H-E_3-E_4} \hat{\vartheta}_3 \\ \nonumber q^{\frac{1}{2}} \hat{\vartheta}_1 \hat{\vartheta}_3 - q^{-\frac{1}{2}} \hat{\vartheta}_3 \hat{\vartheta}_1 & = & (q^{\frac{1}{2}}- q^{-\frac{1}{2}})C_1+(q-q^{-1})t^{H-E_1-E_5}\hat{\vartheta}_4 \\ \nonumber q^{\frac{1}{2}} \hat{\vartheta}_2 \hat{\vartheta}_4 - q^{-\frac{1}{2}} \hat{\vartheta}_4 \hat{\vartheta}_2 & = & (q^{\frac{1}{2}}- q^{-\frac{1}{2}})C_2+(q-q^{-1})t^{E_1}\hat{\vartheta}_1 \\ \nonumber q^{\frac{1}{2}} \hat{\vartheta}_1 \hat{\vartheta}_2 - q^{-\frac{1}{2}} \hat{\vartheta}_2 \hat{\vartheta}_1 & = & (q^{\frac{1}{2}}- q^{-\frac{1}{2}}) t ^{2H-E_1-E_3-E_4-E_5} \\ \nonumber q^{\frac{1}{2}} \hat{\vartheta}_2 \hat{\vartheta}_3 - q^{-\frac{1}{2}} \hat{\vartheta}_3 \hat{\vartheta}_2 & = & (q^{\frac{1}{2}}- q^{-\frac{1}{2}}) t
^{H-E_5} \\ \nonumber q^{\frac{1}{2}} \hat{\vartheta}_3 \hat{\vartheta}_4 - q^{-\frac{1}{2}} \hat{\vartheta}_4 \hat{\vartheta}_3 & = & (q^{\frac{1}{2}}- q^{-\frac{1}{2}}) t ^{H-E_2} \\ \nonumber q^{\frac{1}{2}} \hat{\vartheta}_4 \hat{\vartheta}_1 - q^{-\frac{1}{2}} \hat{\vartheta}_1 \hat{\vartheta}_4 & = & (q^{\frac{1}{2}}- q^{-\frac{1}{2}}) t ^{2H-E_1-E_2-E_3-E_4} \end{eqnarray} Eliminating $\hat{\vartheta}_4$ in (\ref{Eq:quantul theta}), we obtain the same equations as in \cite[Corollary 4.6]{M}. There, these equations are obtained by a totally different approach, based on the fact that the cubic equation obtained from (\ref{Eq: classic quadric eqns}) by eliminating $\vartheta_4$ is the defining equation of the wild character variety arising as the monodromy manifold of the Painlev\'e IV equation. \FloatBarrier \end{document}
\begin{document} \author{O.V.~Kulikova} \email{[email protected]} \title{On independent families of normal subgroups in free groups} \markboth{}{On independent families of normal subgroups in free groups} \maketitle \begin{abstract} Consider a presentation $\mathcal{P}=\langle{\bf x}\mid{\bf \bigcup_{i=1}^n r_i}\rangle$. Let ${\bf R_i}$ be the normal closure of the set ${\bf r_i}$ in the free group ${\bf F}$ with basis ${\bf x}$, $\mathcal{P}_i=\langle{\bf x}\mid{\bf r_i}\rangle$, ${\bf N_i} = \prod_{j\neq i}{\bf R_j}$. In the present article, using geometric techniques of pictures, generators for $\frac{{\bf R_i}\cap {\bf N_i}}{[{\bf R_i}, {\bf N_i}]}$, $i=1,\ldots,n$, are obtained from a set of generators over $\{\mathcal{P}_i\mid i=1,\ldots, n\}$ for $\pi_2(\mathcal{P})$. As a corollary, we get a sufficient condition for the family $\{{\bf R_1},\ldots,{\bf R_n}\}$ to be independent. \end{abstract} \section*{Introduction} Consider a presentation $\mathcal{P}=\langle{\bf x}\mid{\bf r}\rangle$, where ${\bf r=\bigcup_{i=1}^n r_i}$. For $i=1,\ldots, n$, let ${\bf R_i}$ be the normal closure of the set ${\bf r_i}$ in the free group ${\bf F}$ with basis ${\bf x}$, $\mathcal{P}_i=\langle{\bf x}\mid{\bf r_i}\rangle$, ${\bf N_i} = \prod_{j\neq i}{\bf R_j}$. Set ${\bf R} = \prod_{i=1}^n{\bf R_i}$, ${\bf G} = {\bf F}/{\bf R}$ and ${\bf G_i} = {\bf F}/{\bf R_i}$. In this paper we study the quotient groups $\frac{{\bf R_i}\cap {\bf N_i}}{[{\bf R_i}, {\bf N_i}]}$, $i=1,\ldots,n$, which are a natural measure of the redundancy between ${\bf R_1}, ..., {\bf R_n}$. Geometric techniques of spherical pictures \cite{pride, gener_b_pr} are used to prove the main theorem of this paper, in which we determine generators for $\frac{{\bf R_i}\cap {\bf N_i}}{[{\bf R_i}, {\bf N_i}]}$, $i=1,\ldots,n$, provided that a set of spherical pictures over $\mathcal{P}$, generating $\pi_2(\mathcal{P})$ over $\{\mathcal{P}_i\mid i=1,\ldots, n\}$, is known.
The idea of applying these techniques to determine generators was already used in \cite{BoGu92} for some presentations. The family $\{{\bf R_1},\ldots,{\bf R_n}\}$ is said to be {\it independent}, if ${\bf R_i}\cap {\bf N_i} = [{\bf R_i}, {\bf N_i}]$ for $i=1,\ldots,n$. Independence may be considered as ensuring that certain intersections are as small as possible, or as ensuring that certain commutator subgroups are as large as possible. Independence and related notions have been studied in \cite{Bo91, ratcl, DuElGi92, gilbert, huebsch, lind=my}. For example, it is shown in \cite{DuElGi92} that $\{{\bf R_1},\ldots,{\bf R_n}\}$ is independent if and only if the inclusions ${\bf R_i}\rightarrow{\bf R}$ induce an isomorphism $\oplus_{i=1}^n(\mathbb{Z}{\bf G}\otimes_{\mathbb{Z}{\bf G_i}}H_1{\bf R_i})\rightarrow H_1{\bf R}$ of relation modules. Also it is known \cite{ratcl, DuElGi92, huebsch} that if $\{{\bf R_1},\ldots,{\bf R_n}\}$ is independent, then $\mathcal{P}$ is (A) over $\{\mathcal{P}_i\mid i=1,\ldots,n\}$, i.e. $\pi_2(\mathcal{P})$ is generated by the empty set over $\{\mathcal{P}_i\mid i=1,\ldots,n\}$. It is proved in \cite{ratcl} that the converse holds in case $n=2$. In the present paper we obtain the converse statement for arbitrary $n$, as a corollary of the main theorem. In the first section of this paper we recall the facts from \cite{pride} and \cite{gener_b_pr} required to formulate and prove the main results of this paper. The main results themselves are formulated in the second section and proved in the third section. Readers familiar with \cite{pride} and \cite{gener_b_pr} may omit the first section and skip to the next ones. \section{Main definitions}\label{S:df_in_F} The following definitions, notation and facts from \cite{pride} and \cite{gener_b_pr} will be used in the second and third sections of the present paper.
Let $\mathcal{P} = \langle \bf{x}\mid \bf{r}\rangle$ be a presentation for a group $\bf G$, where $\bf{x}$ is a set (``generators'') and $ \bf{r}$ is a set of cyclically reduced words on $\bf{x}\cup\bf{x}^{-1}$ (``relators''). We assume that no relator of $\bf{r}$ is freely trivial, nor is a conjugate of any other relator or its inverse (the so-called $RH$-hypothesis). We let $\bf w$ denote the set of all words on $\bf{x}\cup\bf{x}^{-1}$ (reduced or not). Two words $u$ and $v$ in $\bf w$ are {\it identically equal}, if they represent the same element in the free semigroup on $\bf{x}\cup\bf{x}^{-1}$. The words $u$ and $v$ are {\it freely equal}, if one can be obtained from the other by a finite number of insertions and deletions of inverse pairs $x^{\varepsilon}x^{-\varepsilon}$, where $x\in{\bf x}, \varepsilon=\pm 1$. The free equivalence class of $W\in\bf{w}$ will be denoted by $[W]$. The free group ${\bf F}$ on $\bf{x}$ consists of the free equivalence classes, where the multiplication is defined by $[W][U]=[WU]$. We let ${\bf N}$ denote the normal closure of $\{[R]\mid R\in\bf{r}\}$ in ${\bf F}$. Thus, ${\bf G} = {\bf F}/ {\bf N}$. If $\bf s$ is a subset of $\bf r$ then we let $\bf s^w$ denote the set of all words of the form $WS^{\epsilon}W^{-1}$, where $W\in{\bf w}, S\in{\bf s}, \epsilon=\pm 1$. \underline{Sequences.} Let us consider finite sequences of elements of $\bf r^w$. Let $\sigma = (c_1, \ldots, c_m)$, where $c_i\in \bf r^w$ ($i=1,...,m$). We define the {\it inverse} $\sigma^{-1}$ of $\sigma$ to be $(c_m^{-1},...,c_1^{-1})$. We define the {\it conjugate} $W\sigma W^{-1}$ of $\sigma$ by $W\in {\bf w}$ to be $(Wc_1W^{-1},...,Wc_mW^{-1})$. We define operations on sequences as follows. Let $\sigma = (c_1, \ldots, c_m)$, where $c_i = W_iR_i^{\epsilon_i}W_i^{-1}$, $W_i\in{\bf w}, R_i\in{\bf r}, \epsilon_i=\pm 1, i=1,...,m$. (SUB) {\it Substitution.} Replace each $W_i$ by a word freely equal to it.
(DEL) {\it Deletion.} Delete two consecutive terms if one is identically equal to the inverse of the other. (INS) {\it Insertion.} The opposite of deletion. (EX) {\it Exchange.} Replace two consecutive terms $c_i$, $c_{i+1}$ by either $c_{i+1}$, $c_{i+1}^{-1}c_ic_{i+1}$ or by $c_ic_{i+1}c_i^{-1}$, $c_i$. Two sequences $\sigma$, $\sigma'$ will be said to be {\it (Peiffer) equivalent} (denoted $\sigma\sim\sigma'$) if one can be obtained from the other by a finite number of applications of the operations (SUB), (DEL), (INS), (EX). The equivalence class containing $\sigma$ will be denoted by $<\sigma>$. We can define a binary operation $+$ on the set $\Sigma$ of all equivalence classes by the rule $$ <\sigma_1> + <\sigma_2> = <\sigma_1\sigma_2>. $$ (Here $\sigma_1\sigma_2$ is the juxtaposition of the two sequences $\sigma_1$, $\sigma_2$.) Under this operation $\Sigma$ is a group. The identity (zero element) is the equivalence class of the empty sequence, and the inverse (negative) $-<\sigma>$ of $<\sigma>$ is $<\sigma^{-1}>$. We define $\Pi\sigma$ to be the product $c_1c_2\ldots c_m$. We say that $\sigma$ is an {\it identity sequence} if $\Pi\sigma$ is freely equal to $1$. We let $\pi_2$ denote the (abelian) subgroup of $\Sigma$ consisting of all elements $<\sigma>$, where $\sigma$ is an identity sequence. (Occasionally, when we want to emphasize the presentation $\mathcal{P}$, we will write $\pi_2(\mathcal{P})$.) ${\bf F}$ acts on $\Sigma$ by the rule $[W]\cdot<\sigma> = <W\sigma W^{-1}>$, where $[W]\in {\bf F}, <\sigma>\in \Sigma$. The mapping $\partial : \Sigma\rightarrow {\bf F}$, where $\partial : <\sigma>\mapsto[\Pi\sigma]$, is a group homomorphism with image ${\bf N}$ and kernel $\pi_2$. The triple $(\Sigma, {\bf F}, \partial)$ is an example of a (free) crossed module. ${\bf N}$ acts trivially on $\pi_2$.
It follows that $\pi_2$ is a left ${\bf G}$-module with ${\bf G}$-action given by $[W]{\bf N}\cdot<\sigma> = [W]\cdot<\sigma>$, where $[W]\in {\bf F}$, $<\sigma>\in \pi_2$. Let $R\in {\bf r}$. Then $R= {R^o}^{p(R)}$, where $R^o$ is not a proper power and $p(R)$ is a positive integer ($R^o$ is the {\it root} of $R$, and $p(R)$ is the {\it period}). The identity sequences $$ \zeta_R = (R, R^oR^{-1}{R^o}^{-1})\,\,\,\,\,\, (R\in {\bf r}) $$ will be called the {\it trivial sequences}, and the elements $$ <\zeta_R> $$ of $\pi_2$ will be called the {\it trivial elements}. The submodule of $\pi_2$ generated by the trivial elements will be denoted by ${\bf T}$. $\mathcal{P}$ is called {\it aspherical} (respectively, {\it combinatorially aspherical (CA)}) if $\pi_2=0$ (respectively, $\pi_2={\bf T}$). As is shown in \cite{cch}, $\mathcal{P}$ is aspherical if and only if $\mathcal{P}$ is (CA) and no element of $\bf r$ is a proper power. \underline{Subpresentations.} Let $\mathcal{P}_0 = \langle \bf{x}_0\mid \bf{r}_0\rangle$ be a subpresentation of $\mathcal{P}$, and let $(\Sigma_0, {\bf F}_0, \partial_0)$ be the associated crossed module. There is an obvious mapping of crossed modules $$ (\varphi,\psi) : (\Sigma_0, {\bf F}_0, \partial_0)\rightarrow (\Sigma, {\bf F}, \partial), $$ where $$ \varphi(<\sigma>_0) = <\sigma>\,\,\,\,\,\, (<\sigma>_0\in \Sigma_0); $$ $$ \psi([W]_0)=[W]\,\,\,\,\,\, ([W]_0\in {\bf F}_0). $$ Restricting $\varphi$ gives a homomorphism $$ j: \pi_2(\mathcal{P}_0)\rightarrow\pi_2(\mathcal{P}). $$ In general $j$ is not injective. It is still unknown whether $j$ is injective for every subpresentation $\mathcal{P}_0$ of an aspherical $\mathcal{P}$ (the Whitehead problem). Consider the presentation $\mathcal{P} = \langle \bf{x}\mid \bf{r}\rangle$ and suppose that $\bf r$ is expressed as a union $\bf r = r_1\cup r_2$.
For $\lambda =1,2$, let $\mathcal{P}_{\lambda} = \langle \bf{x}\mid \bf{r}_{\lambda}\rangle$ and $j_{\lambda}: \pi_2(\mathcal{P_{\lambda}})\rightarrow\pi_2(\mathcal{P})$ be the homomorphism as discussed above. Note that $\rm{Im}\, j_{\lambda}$ is a submodule of $\pi_2(\mathcal{P})$. Let ${\bf N_{\lambda}}$, where $\lambda = 1,2$, be the normal closure of $\{[R]\mid R\in {\bf r}_{\lambda}\}$ in the free group ${\bf F}$. Now ${\bf F}$ acts on $\frac{{\bf N_1}\cap {\bf N_2}}{[{\bf N_1},{\bf N_2}]}$ via conjugation: $$ [W]\cdot[U][{\bf N_1}, {\bf N_2}] = [WUW^{-1}][{\bf N_1}, {\bf N_2}]\,\,\,\,\,\, ([W]\in {\bf F}, [U]\in {\bf N_1}\cap {\bf N_2}). $$ It is easy to show that ${\bf N}(={\bf N_1}{\bf N_2})$ acts trivially, and so we get an induced action of ${\bf G} = {\bf F}/{\bf N}$ on $\frac{{\bf N_1}\cap {\bf N_2}}{[{\bf N_1},{\bf N_2}]}$. We can define a ${\bf G}$-homomorphism $$ \eta: \pi_2(\mathcal{P})\rightarrow \frac{{\bf N_1}\cap {\bf N_2}}{[{\bf N_1},{\bf N_2}]} $$ by the following rule. Let $<\sigma>\in \pi_2(\mathcal{P})$, and let $V$ be the product (taken in order) of the elements of $\sigma$ which belong to $\bf r_1^w$. Then $\eta(<\sigma>) = [V][{\bf N_1}, {\bf N_2}]$. It is not hard to show that $\eta$ is well-defined. The following theorem was proved by Guti\'errez and Ratcliffe \cite{ratcl}. \begin{thm} \label{thmGR} Let $\xi: \rm{Im}\, j_1\oplus\rm{Im}\, j_2 \longrightarrow \pi_2(\mathcal{P})$ be induced by the inclusions $\rm{Im}\, j_1,\rm{Im}\, j_2 \longrightarrow \pi_2(\mathcal{P})$. Then the sequence $$ \rm{Im}\, j_1\oplus\rm{Im}\, j_2 \stackrel{\xi}{\longrightarrow} \pi_2(\mathcal{P})\stackrel{\eta}{\longrightarrow} \frac{{\bf N_1}\cap {\bf N_2}}{[{\bf N_1},{\bf N_2}]}\longrightarrow 0 $$ is exact. If $\bf r_1$ and $\bf r_2$ are disjoint then $\xi$ is injective, and so the sequence is short exact. \end{thm} \underline{Pictures.} Sequences can be studied very successfully using geometric objects called pictures.
Pictures were introduced in \cite{igusa, rourke}. They are a very useful tool for solving problems in combinatorial group theory. See, for example, \cite{pride, gener_b_pr} and the references cited there. Let us recall the definition of pictures (according to \cite{gener_b_pr}). A {\it picture} $\mathbf{P}$ is a finite collection of pairwise disjoint closed disks $\Delta_1,\ldots, \Delta_m$ in a closed disk $D^2$, together with a finite number of disjoint arcs $\alpha_1,\ldots,\alpha_l$ properly embedded in $D^2 -\cup_{i=1}^m{\rm int}\,\Delta_i$ (where ``${\rm int}$'' denotes interior). Below, the disks $\Delta_1,\ldots,\Delta_m$ will be called the {\it vertices} of $\mathbf{P}$. An arc can be either a simple closed curve having trivial intersection with $\partial D^2\cup\partial\Delta_1\cup\ldots\cup\partial\Delta_m$, or a simple non-closed curve which joins two different points of $\partial D^2\cup\partial\Delta_1\cup\ldots\cup\partial\Delta_m$. The {\it boundary} $\partial D^2$ of $\mathbf{P}$ will be denoted by $\partial \mathbf{P}$. The {\it corners} of a vertex $\Delta$ of $\mathbf{P}$ are the closures of the connected components of $\partial\Delta -\cup_{j}\alpha_j$. The {\it regions} of $\mathbf{P}$ are the closures of the connected components of $D^2-(\cup_i\Delta_i\bigcup\cup_j\alpha_j)$. The {\it components} of $\mathbf{P}$ are the connected components of $\cup_i\Delta_i\cup\cup_j\alpha_j$. The picture $\mathbf{P}$ is {\it connected} if it has at most one component. The picture $\mathbf{P}$ is {\it spherical} if it has at least one vertex and no arc of $\mathbf{P}$ meets $\partial \mathbf{P}$. Assume that a group presentation $\mathcal{P} = \langle \bf{x}\mid \bf{r}\rangle$ and a picture $\mathbf{P}$ are given. Fix an orientation of the ambient disk $D^2$, thereby determining a sense of positive rotation (e.g.\ clockwise). Assume that the vertices and arcs of $\mathbf{P}$ are labeled by elements of $\mathcal{P}$ as follows.
(i) Each arc of $\mathbf{P}$ is equipped with a normal orientation (indicated by an arrow transverse to the arc), and is labeled by an element of $\bf{x}\cup\bf{x}^{-1}$. (ii) Each vertex $\Delta$ of $\mathbf{P}$ is equipped with a sign $\epsilon(\Delta) = \pm 1$ and is labeled by a relator $R(\Delta)\in\bf{r}$. For a corner $c$ of a vertex $\Delta$ of $\mathbf{P}$, $W(c)$ denotes the word in $\bf{x}\cup\bf{x}^{-1}$ obtained by reading in order the (signed) labels on the arcs that are encountered in a walk around $\partial\Delta$ in the positive direction, beginning and ending at an interior point of $c$ (with the understanding that if we cross an arc, labeled $y$ say, in the direction of its normal orientation then we read $y$, whereas if we cross the arc against the orientation we read $y^{-1}$). The oriented and labeled picture $\mathbf{P}$ is a {\it picture over} $\mathcal{P}$ if for each corner $c$ of each vertex $\Delta$ of $\mathbf{P}$, $W(c)$ is identically equal to a cyclic permutation of $R(\Delta)^{\epsilon(\Delta)}$. We call $W(c)$ the {\it label} of $\Delta$, and denote it by $W(\Delta)$. A corner $c$ is a {\it basic corner} of $\Delta$ of $\mathbf{P}$ if $W(c)$ is identically equal to $R(\Delta)^{\epsilon(\Delta)}$. The vertex $\Delta$ has exactly $p$ basic corners, where $p\geqslant 1$ is the period of $R(\Delta)$. A picture $\mathbf{P}$ over $\mathcal{P}$ becomes a {\it based} picture over $\mathcal{P}$ when it is equipped with basepoints as follows. \begin{itemize} \item Each vertex $\Delta$ has one {\it basepoint}, which is a selected point in the interior of a basic corner of $\Delta$. \item $\mathbf{P}$ has a {\it global basepoint}, which is a selected point in $\partial \mathbf{P}$ that does not lie on any arc of $\mathbf{P}$. 
\end{itemize} The {\it boundary label} on a based picture $\mathbf{P}$ over $\mathcal{P}$ is the word $W(\mathbf{P})$ obtained by reading in order the (signed) labels on the arcs of $\mathbf{P}$ that are encountered in a walk around $\partial \mathbf{P}$ in the positive direction, beginning and ending at the global basepoint. Alteration of the global basepoint or of the orientation of the ambient disk $D^2$ changes the boundary label of $\mathbf{P}$ only up to cyclic permutation and inversion. There is the following pictorial version of the ``van Kampen lemma'' (it can be obtained from Theorem 1.1 (V) and Lemma 1.2 (V) of \cite{lind} by duality). \begin{lem}\label{lemVKampen} A word $U$ in $\bf{x}\cup\bf{x}^{-1}$ represents the identity of ${\bf G}$ defined by $\mathcal{P} = \langle \bf{x}\mid \bf{r}\rangle$ if and only if there is a based picture $\mathbf{P}$ over $\mathcal{P}$ with boundary label identically equal to $U$. \end{lem} The mirror image of a picture $\mathbf{P}$ will be denoted by $-\mathbf{P}$.
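As a small illustration of Lemma \ref{lemVKampen} (a toy example we add here; it is not taken from \cite{pride} or \cite{gener_b_pr}), let $\mathcal{P}=\langle x\mid x^2\rangle$, a presentation of the cyclic group of order two. The word $x^2$ represents the identity of ${\bf G}$, and a corresponding based picture consists of a single vertex $\Delta$ with $\epsilon(\Delta)=+1$ and $R(\Delta)=x^2$, whose two arcs, each labeled $x$, join $\partial\Delta$ to $\partial\mathbf{P}$; with suitable normal orientations on the arcs, a walk around $\partial\mathbf{P}$ reads the boundary label $x^2$, while each corner of $\Delta$ reads a cyclic permutation of $R(\Delta)$, as required.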
We can form the {\it sum} $\mathbf{P}_1+\mathbf{P}_2$ of two pictures $\mathbf{P}_1$ and $\mathbf{P}_2$ in the obvious way (Figure 1): \unitlength=1mm \begin{picture}(120,50)(-20,0) \put(20,25){\circle{20}} \put(20,18){\circle*{1}} \put(19,24){$\mathbf{P}_1$} \put(40,25){+} \put(60,25){\circle{20}} \put(60,18){\circle*{1}} \put(59,24){$\mathbf{P}_2$} \put(80,25){=} \put(100,25){\circle{20}} \put(100,18){\circle*{1}} \multiput(100,19)(0,2){7}{\line(0,1){1}} \put(95,24){$\mathbf{P}_1$} \put(101,24){$\mathbf{P}_2$} \end{picture} \begin{center} Figure 1 \end{center} A {\it transverse path $\gamma$} in $\mathbf{P}$ over $\mathcal{P}$ is a path in the closure of $D^2-\bigcup_i \Delta_i$ which intersects the arcs of $\mathbf{P}$ only finitely many times (moreover, if the path intersects an arc then it crosses it, rather than just touching it), no endpoint of $\gamma$ lies on any arc, and whenever $\gamma$ meets $\partial\mathbf{P}$ or any $\partial\Delta_i$, it does so only at its endpoints. Since we will only ever consider transverse paths, we will from now on drop the use of the adjective ``transverse''. If we travel along a path $\gamma$ from its initial point to its terminal point we will cross various arcs, and we can read off the (signed) labels on these arcs, giving a word $W(\gamma)$, the {\it label on} $\gamma$. A simple closed path $\gamma$ in $\mathbf{P}$ encloses a {\it subpicture} $\mathbf{B}$ of $\mathbf{P}$. This subpicture consists of the vertices and (portions of) arcs that are separated from $\partial\mathbf{P}$ by $\gamma$. When $\mathbf{P}$ is spherical, the {\it complement} of $\mathbf{B}$ in $\mathbf{P}$ is defined as follows. Delete the interior of $\mathbf{B}$ to form an oriented annulus. Identification of $\partial\mathbf{P}$ to a point produces an oriented disk with boundary $\gamma$, which supports a new picture over $\mathcal{P}$.
The complement of $\mathbf{B}$ in $\mathbf{P}$ is obtained from this new picture by a planar reflection. The complement has the same boundary label as $\mathbf{B}$ and its vertices are those of $\mathbf{P}-\mathbf{B}$, taken with opposite signs. The subpicture $\mathbf{B}$ enclosed by a simple closed path $\gamma$ will be called a {\it spherical subpicture} if $\gamma$ intersects no arc. A spherical subpicture will be called {\it empty} if it contains no vertices and no portions of arcs. \underline{On the connection between sequences and pictures.} A {\it spray} for a based picture $\mathbf{P}$ with $n$ vertices $\Delta_1,\Delta_2,\ldots,\Delta_n$ is a sequence $\underline{\gamma}=(\gamma_1, \gamma_2, \ldots, \gamma_n)$ of simple paths satisfying the following: \begin{itemize} \item for each $i=1, 2,\ldots,n$, $\gamma_i$ starts at the global basepoint of $\mathbf{P}$ and ends at a basepoint of $\Delta_{\vartheta(i)}$, where $\vartheta$ is a permutation of $\{1, 2, \ldots,n\}$ (depending on $\underline{\gamma}$); \item for $1\leqslant i<j\leqslant n$, $\gamma_i$ and $\gamma_j$ intersect only at the global basepoint; \item travelling around the global basepoint clockwise we encounter the paths in the order $\gamma_1, \gamma_2, \ldots, \gamma_n$. \end{itemize} The {\it sequence $\sigma(\underline{\gamma})$ associated with $\underline{\gamma}$} is $$(W(\gamma_1)W(\Delta_{\vartheta(1)})W(\gamma_1)^{-1}, \ldots, W(\gamma_n)W(\Delta_{\vartheta(n)})W(\gamma_n)^{-1}).$$ A based picture will be said to {\it represent} a sequence $\sigma$ if there is a spray for the picture whose associated sequence is $\sigma$. Note that if $\mathbf{P}$ represents $\sigma$ then $-\mathbf{P}$ represents $\sigma^{-1}$; if $\mathbf{P}_1$, $\mathbf{P}_2$ represent $\sigma_1$, $\sigma_2$ respectively, then $\mathbf{P}_1 + \mathbf{P}_2$ represents $\sigma_1\sigma_2$.
One can prove (see for example \cite{pride}) that every sequence can be represented by a picture, and every identity sequence can be represented by a spherical picture; conversely, if $\mathbf{P}$ is a picture and $\underline{\gamma}$ is a spray for $\mathbf{P}$, then $$\partial(<\sigma(\underline{\gamma})>) = [W(\mathbf{P})],$$ and if $\mathbf{P}$ is a spherical picture and $\underline{\gamma}$ is a spray for $\mathbf{P}$, then $\sigma(\underline{\gamma})$ is an identity sequence. If $\underline{\gamma}$, $\underline{\gamma}'$ are two sprays for a picture $\mathbf{P}$, then $<\sigma(\underline{\gamma})> - <\sigma(\underline{\gamma}')>\in {\bf T}$ (Theorems 1.4 and 2.4 of \cite{pride}). Consider a set ${\bf X} = \{\mathbf{P}_{\lambda}\mid \lambda\in \Lambda\}$ of based spherical pictures over $\mathcal{P}$. For each $\lambda$, let $\sigma_{\lambda}$ be the identity sequence associated with a spray for $\mathbf{P}_{\lambda}$, and let $J({\bf X})$ be the submodule of $\pi_2$ generated by $\{<\sigma_{\lambda}>\mid \lambda\in \Lambda\}$. We say that {\it ${\bf X}$ generates $\pi_2$} if $\pi_2 = J({\bf X}) + {\bf T}$. It follows from Theorem \ref{thmGR} that if $\pi_2 = {\bf T}$, i.e. the presentation is (CA), then $\frac{{\bf N_1}\cap {\bf N_2}}{[{\bf N_1}, {\bf N_2}]}=0$, since $\eta({\bf T}) = 0$. If $\pi_2 = J({\bf X}) + {\bf T}$, then $\frac{{\bf N_1}\cap {\bf N_2}}{[{\bf N_1}, {\bf N_2}]}$ is normally generated by the images of the elements $<\sigma_{\lambda}>$ associated with sprays for all pictures $\mathbf{P}_{\lambda}\in {\bf X}$. Moreover, as noted in \cite{BoGu92}, the image of $<\sigma_{\lambda}>$ under $\eta$ is $[V_{\lambda}][{\bf N_1}, {\bf N_2}]$, where $V_{\lambda}$ is the label of a simple closed path in $\mathbf{P}_{\lambda}$ (oriented as $\partial\mathbf{P}_{\lambda}$) separating the vertices with $\bf r_1$-labels from the vertices with $\bf r_2$-labels.
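As a minimal illustration of this consequence of Theorem \ref{thmGR} (a toy example we add here, not drawn from the papers cited above), take $\mathcal{P}=\langle x,y\mid x,\, y\rangle$ with ${\bf r_1}=\{x\}$ and ${\bf r_2}=\{y\}$. This presentation is aspherical: its presentation complex is simply connected and the second cellular boundary map is given by the identity matrix, so $\pi_2(\mathcal{P})=0$; in particular $\pi_2={\bf T}$. Since $\eta$ is surjective, Theorem \ref{thmGR} yields $$ \frac{{\bf N_1}\cap {\bf N_2}}{[{\bf N_1},{\bf N_2}]}=0, \quad \mbox{i.e.} \quad {\bf N_1}\cap {\bf N_2}=[{\bf N_1},{\bf N_2}], $$ recovering the classical fact that the normal closures of $x$ and of $y$ in the free group on $\{x,y\}$ form an independent pair.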
\underline{Operations on pictures.} Generally below we will not distinguish between pictures which are isotopic. The following operations ("deformations") can be applied to a based picture $\mathbf{P}$ over $\mathcal{P}$ (\cite{gener_b_pr}). $BRIDGE:$ (Bridge move) See Figure 2. \unitlength=1mm \begin{picture}(120,50)(-8,0) \thicklines \put(70,25){\vector(1,0){10}} \thinlines \put(40,40){\line(0,-1){30}} \put(50,10){\line(0,1){30}} \put(38,25){\vector(1,0){5}} \put(52,25){\vector(-1,0){5}} \multiput(95,17)(0,23){2}{\line(0,-1){7}} \multiput(105,10)(0,23){2}{\line(0,1){7}} \put(100,33){\oval(10,10)[b]} \put(100,17){\oval(10,10)[t]} \put(100,24){\vector(0,-1){5}} \put(100,26){\vector(0,1){5}} \multiput(35,25)(18,0){2}{\rm $x$} \put(96,22){\rm $x$} \put(102,26){\rm $x$} \end{picture} \begin{center} Figure 2 \end{center} $FLOAT:$ Deletion of a closed arc that separates $D^2$ into two parts, one of which contains the global basepoint of $\mathbf{P}$ and all remaining arcs and vertices of $\mathbf{P}$ (such a closed arc is called a {\it floating circle}). $FLOAT^{-1}:$ (Insertion of a floating circle). The opposite of $FLOAT$. A {\it folding pair} is a connected spherical subpicture that contains exactly two vertices such that \begin{itemize} \item these two vertices are labeled by the same relator and have opposite signs; \item the basepoints of the vertices lie in the same region; \item each arc in the subpicture has an endpoint on each vertex. \end{itemize} $FOLD:$ (Deletion of a folding pair). If there is a simple closed path $\beta$ in $D^2$ such that the part of $\mathbf{P}$ encircled by $\beta$ is a folding pair, then remove that part of $\mathbf{P}$ encircled by $\beta$. $FOLD^{-1}:$ (Insertion of a folding pair). The opposite of $FOLD$. Let ${\bf X} = \{\mathbf{P}_{\lambda}\mid \lambda\in \Lambda\}$ be a set of based spherical pictures over $\mathcal{P}$. 
By an {\it $\bf X$-picture} we mean either a picture $\mathbf{P}_{\lambda}$ from ${\bf X}$ or its mirror image $-\mathbf{P}_{\lambda}$. $REPLACE({\bf X}):$ Replace a subpicture of $\mathbf{P}$ by the complement of that subpicture in an ${\bf X}$-picture. Two based spherical pictures are called {\it ${\bf X}$-equivalent} if one of them can be transformed into the other one (up to planar isotopy) by a finite sequence of operations $BRIDGE$, $FLOAT^{\pm 1}$, $FOLD^{\pm 1}$, $REPLACE({\bf X})$. We introduce two further operations on pictures as follows. $DELETE({\bf X}):$ (Deletion of an $\bf X$-picture). If there is a simple closed path $\beta$ in $D^2$ such that the part of $\mathbf{P}$ enclosed by $\beta$ is a copy of an ${\bf X}$-picture, then delete that part of $\mathbf{P}$ enclosed by $\beta$. $DELETE({\bf X})^{-1}:$ (Insertion of an $\bf X$-picture). The opposite of $DELETE({\bf X})$. Note that the operation $REPLACE({\bf X})$ includes $DELETE({\bf X})^{\pm 1}$. On the other hand, the result of the operation $REPLACE({\bf X})$ can be obtained by a finite sequence of operations $DELETE({\bf X})^{\pm 1}$, $BRIDGE$, $FLOAT^{\pm 1}$, $FOLD^{\pm 1}$. Thus, in the definition of ${\bf X}$-equivalent pictures, $REPLACE({\bf X})$ can be replaced by $DELETE({\bf X})^{\pm 1}$. A {\it dipole} in a picture over $\mathcal{P}$ consists of an arc which meets two corners $c_1$, $c_2$ in distinct vertices such that \begin{itemize} \item the two vertices are labeled by the same relator and have opposite signs; \item $c_1$ and $c_2$ lie in the same region of the picture; \item $W(c_1) = W(c_2)^{-1}$. \end{itemize} By a {\it complete dipole} over $\mathcal{P}$, we mean a connected based spherical picture over $\mathcal{P}$ that contains just two vertices, and where each arc of the picture constitutes a dipole. Note that a complete dipole is not quite a folding pair, in that the vertex basepoints need not lie in the same region.
If the relator that labels the two vertices of the complete dipole has period one, then a complete dipole is exactly the same as a folding pair. A complete dipole will be called {\it primitive} if the relator labeling its vertices has root $Q$ and period $p>1$, and there is a path joining the vertex basepoints with label $Q^f$, where $\gcd(f,p)=1$. It follows from Lemma 2.1 of \cite{gener_b_pr} that, modulo primitive dipoles, one need not be concerned with choices of vertex basepoints. The following theorem (see Corollary 1 of Theorem 2.6 of \cite{pride}, Theorem 1.6 (2) of \cite{gener_b_pr}) will play an important role in the proof of the main theorem of the present paper. \begin{thm} \label{thmPr3} Let $\bf X_0$ be a collection of based spherical pictures. Then ${\bf X_0}$ generates $\pi_2 (\mathcal{P})$ if and only if every spherical picture over $\mathcal{P}$ is $\bf X$-equivalent to the empty picture, where $\bf X$ is the union of $\bf X_0$ and the collection of primitive dipoles for all relators of $\mathcal{P}$ which are proper powers. \end{thm} It follows from Theorem \ref{thmPr3} (see Corollary 2 of Theorem 2.6 of \cite{pride}) that $\mathcal{P}$ is (CA) (i.e. $\pi_2 = {\bf T}$) if and only if every spherical picture over $\mathcal{P}$ is $\bf X$-equivalent to the empty picture, where $\bf X$ is the collection of primitive dipoles for all relators of $\mathcal{P}$ which are proper powers. \underline{Independent sets.} Suppose that $\{\mathcal{P}_i\mid i\in I\}$ is a collection of subpresentations of $\mathcal{P}$, and let ${\bf X}_i$ denote the collection of all based spherical pictures over $\mathcal{P}_i$, $i\in I$. We shall say that a set ${\bf Y}$ of based spherical pictures over $\mathcal{P}$ {\it generates} $\pi_2(\mathcal{P})$ {\it over } $\{\mathcal{P}_i\mid i\in I\}$ if the ${\bf G}$-module $\pi_2(\mathcal{P})$ is generated by the homotopy classes $[f_\mathbf{P}]$ ($\mathbf{P}\in{\bf Y}\cup\bigcup_{i\in I}{\bf X}_i$) (\cite{gener_b_pr}).
By the analogue of Theorem \ref{thmPr3} (see Theorem 1.6 (2) of \cite{gener_b_pr}), a collection ${\bf Y}$ of based spherical pictures over $\mathcal{P}$ generates $\pi_2(\mathcal{P})$ over $\{\mathcal{P}_i\mid i\in I\}$ if and only if every spherical picture over $\mathcal{P}$ is $\bf X$-equivalent to the empty picture, where ${\bf X} = {\bf Y}\cup\bigcup_{i\in I}{\bf X}_i$. This notion is useful when there are certain subpresentations in the presentation which we know little about, or which in some way are arbitrary. Then, we can often isolate them by determining a ``nice'' set of generators of $\pi_2(\mathcal{P})$ relative to these subpresentations. Examples of sets of generators over subpresentations for $\pi_2(\mathcal{P})$ can be found in \cite{gener_b_pr}. We will say that $\mathcal{P}$ is (CA) {\it over} $\{\mathcal{P}_i\mid i\in I\}$ if $\pi_2(\mathcal{P})$ is generated over $\{\mathcal{P}_i\mid i\in I\}$ by primitive dipoles; $\mathcal{P}$ is (A) {\it over} $\{\mathcal{P}_i\mid i\in I\}$ if $\pi_2(\mathcal{P})$ is generated over $\{\mathcal{P}_i\mid i\in I\}$ by the empty set. Consider a presentation $\mathcal{P}=\langle{\bf x}\mid{\bf r}\rangle$, where ${\bf r=\bigcup_{i=1}^n r_i}$. For $i=1,\ldots, n$, let ${\bf R_i}$ be the normal closure of ${\bf r_i}$ in the free group ${\bf F}$ with basis ${\bf x}$, $\mathcal{P}_i=\langle{\bf x}\mid{\bf r_i}\rangle$, ${\bf N_i} = \prod_{j\neq i}{\bf R_j}$. Set ${\bf R} = \prod_{i=1}^n{\bf R_i}$, ${\bf G} = {\bf F}/{\bf R}$ and ${\bf G_i} = {\bf F}/{\bf R_i}$. The family $\{ {\bf R_1},\ldots,{\bf R_n}\}$ is said to be {\it independent} if ${\bf R_i}\cap {\bf N_i} = [{\bf R_i}, {\bf N_i}]$ for $i=1,\ldots,n$. This and related notions have been studied in \cite{Bo91, ratcl, DuElGi92, gilbert, huebsch}. Note that any primitive dipole over $\mathcal{P}$ belongs to some collection ${\bf X}_i$ of all based spherical pictures over $\mathcal{P}_i$ $(i=1,\ldots,n)$, since ${\bf r=\bigcup_{i=1}^n r_i}$.
So $\mathcal{P}$ is (A) over $\{\mathcal{P}_i\mid i=1,\ldots,n\}$ if and only if $\mathcal{P}$ is (CA) over $\{\mathcal{P}_i\mid i=1,\ldots,n\}$. \section{Main results}\label{S:main_results} \begin{thm}\label{th5} Let ${\bf F}$ be the free group with basis ${\bf x}$, let $ \bf{r=\bigcup_{i=1}^n r_i}$ be a set of cyclically reduced words in $\bf{x}\cup\bf{x}^{-1}$, and let $\mathcal{P} = \langle \bf{x}\mid \bf{r}\rangle$ be a presentation satisfying the $RH$-hypothesis. For $i=1,\ldots, n$, let ${\bf R_i}$ be the normal closure of the set ${\bf r_i}$ in ${\bf F}$, $\mathcal{P}_i=\langle{\bf x}\mid{\bf r_i}\rangle$, ${\bf N_i} = \prod_{j\neq i}{\bf R_j}$, $ \bf{\hat{r}_i=\bigcup_{j\neq i} r_j}$, ${\bf R} = \prod_{i=1}^n{\bf R_i}$. If $\pi_2(\mathcal{P})$ is generated over $\{\mathcal{P}_i\mid i=1,\ldots, n\}$ by a collection $\bf Y$ of based spherical pictures over $\mathcal{P}$, then for $i=1,\ldots, n$, the group $\frac{{\bf R_i}\cap {\bf N_i}}{[{\bf R_i},{\bf N_i}]}$ is generated by $$\{[WRW^{-1}][{\bf R_i}, {\bf N_i}]\mid R\in{\bf r_i}\cap{\bf \hat{r}_i}, [W]\in{\bf W}\}\cup\{[WV_YW^{-1}][{\bf R_i}, {\bf N_i}]\mid Y\in{\bf Y}, [W]\in{\bf W}\},$$ where $V_Y$ is the label of a simple closed path in a based spherical picture $Y$ separating the vertices with $\bf r_i$-labels from the vertices with $(\bf r - r_i)$-labels, and ${\bf W}\subseteq {\bf F}$ is a set of representatives of all the cosets of $\bf R$ in ${\bf F}$. \end{thm} It is known \cite{ratcl, DuElGi92, huebsch} that if $\{{\bf R_1},\ldots,{\bf R_n}\}$ is independent, then $\mathcal{P}$ is (A) over $\{\mathcal{P}_i\mid i=1,\ldots,n\}$. The converse statement for $n=2$ is shown in \cite{ratcl}. From Theorem \ref{th5}, we obtain the following generalization (take $\bf Y$ empty).
\begin{cor}\label{corOFth5} Let ${\bf F}$ be the free group with basis ${\bf x}$, let $ \bf{r=\bigcup_{i=1}^n r_i}$ be a set of cyclically reduced words in $\bf{x}\cup\bf{x}^{-1}$, where $\bf r_1, \ldots, r_n$ are mutually disjoint sets, and let $\mathcal{P} = \langle \bf{x}\mid \bf{r}\rangle$ be a presentation satisfying the $RH$-hypothesis. For $i=1,\ldots, n$, let ${\bf R_i}$ be the normal closure of ${\bf r_i}$ in ${\bf F}$, $\mathcal{P}_i=\langle{\bf x}\mid{\bf r_i}\rangle$. If $\mathcal{P}$ is (A) over $\{\mathcal{P}_i\mid i=1,\ldots,n\}$, then $\{{\bf R_1},\ldots,{\bf R_n}\}$ is independent. \end{cor} Thus, for $\bf{r=\bigsqcup_{i=1}^n r_i}$, $\{{\bf R_1},\ldots,{\bf R_n}\}$ is independent if and only if $\mathcal{P}$ is (A) over $\{\mathcal{P}_i\mid i=1,\ldots,n\}$. \section{The proof of Theorem \ref{th5}} Let us consider the case $i=1$ (the proof for $i=2,\ldots,n$ is similar). Let $\mathfrak{N}$ denote the normal closure of $\{[R]\mid R\in{\bf r_1}\cap{\bf \hat{r}_1}\}\cup\{ [V_Y]\mid Y\in{\bf Y}\}$ in ${\bf F}$. We need to prove that if $\pi_2(\mathcal{P})$ is generated over $\{\mathcal{P}_j\mid j=1,\ldots, n\}$ by $\bf Y$, then, modulo $\mathfrak{N}[{\bf R_1},{\bf N_1}]$, an arbitrary element $[U]\in {\bf R_1}\cap {\bf N_1}$ is equal to the identity. This will imply that $\frac{{\bf R_1}\cap {\bf N_1}}{[{\bf R_1},{\bf N_1}]}$ is generated by $$\{[WRW^{-1}][{\bf R_1}, {\bf N_1}]\mid R\in{\bf r_1}\cap{\bf \hat{r}_1}, [W]\in{\bf W}\}\cup\{[WV_YW^{-1}][{\bf R_1}, {\bf N_1}]\mid Y\in{\bf Y}, [W]\in{\bf W}\},$$ since $[V_Y]\in {\bf R_1}\cap {\bf N_1}$, ${\bf r_1}\cap{\bf \hat{r}_1}\subset {\bf R_1}\cap {\bf N_1}$ and ${\bf R}={\bf R_1}{\bf N_1}$.
Since $[U]$ belongs both to ${\bf R_1}$ and to ${\bf N_1}$, by Lemma \ref{lemVKampen} there are a picture $\mathbf{P}_{{\bf R_1}}$ over $\mathcal{P}_1=\langle{\bf x}\mid{\bf r_1}\rangle$ with boundary label identically equal to $U$ and a picture $\mathbf{P}_{{\bf N_1}}$ over $\mathcal{P}_{({\bf r - r_1})}=\langle{\bf x}\mid({\bf r - r_1})\rangle$ with boundary label identically equal to $U^{-1}$. Suppose that $U$ is identically equal to $x_1x_2\ldots x_m$, where $x_j\in {\bf x}\cup {\bf x}^{-1}$. Then the arcs $\alpha_1,\ldots,\alpha_m$ meeting $\partial\mathbf{P}_{{\bf R_1}}$ have labels $x_1, \ldots,x_m$, respectively, and the arcs $\beta_1,\ldots, \beta_m$ meeting $\partial\mathbf{P}_{{\bf N_1}}$ have labels $x_m,\ldots, x_1$, respectively. Paste $\mathbf{P}_{{\bf R_1}}$ and $\mathbf{P}_{\bf N_1}$ together along their boundaries so that for $j=1,\ldots, m$, the arc $\alpha_j$ extends the arc $\beta_{m-(j-1)}$ and the global basepoint $O_{{\bf R_1}}$ of $\mathbf{P}_{{\bf R_1}}$ coincides with the global basepoint $O_{{\bf N_1}}$ of $\mathbf{P}_{\bf N_1}$. In the obtained two-sphere, choose a small closed disk $D$ in the interior of some region of $\mathbf{P}_{{\bf R_1}}$ and cut $D-\partial D$ out of it to get a spherical picture $\mathbf{P}$ over $\mathcal{P} = \langle \bf{x}\mid \bf{r}\rangle$ on the disk $D^2$ with boundary $\partial\mathbf{P} (= \partial D^2)= \partial D$. The pasted boundaries $\partial\mathbf{P}_{{\bf R_1}}$ and $\partial\mathbf{P}_{{\bf N_1}}$ give a path $Equ$ on $D^2$. We will call $Equ$ the {\it equator}. The pasted points $O_{{\bf R_1}}$ and $O_{{\bf N_1}}$ give a point $p\in Equ$. Fix an orientation of $Equ$ so that the label of $Equ-\{p\}$ is identically equal to $U$. Below, the label of $Equ-\{p\}$ under this orientation will be called the {\it equatorial label}.
The part of $\mathbf{P}$ corresponding to $\mathbf{P}_{{\bf R_1}}$ (resp., $\mathbf{P}_{{\bf N_1}}$) will be called the {\it $\bf r_1$-hemisphere} (resp., the {\it $({\bf r-r_1})$-hemisphere}). Below we will transform $\mathbf{P}$ by planar isotopy, $BRIDGE$, $FLOAT^{\pm 1}$, $FOLD^{\pm 1}$, $DELETE({\bf X})^{\pm 1}$ under the following conditions: the equatorial label is not changed modulo $\mathfrak{N}[{\bf R_1},{\bf N_1}]$, all vertices of $\mathbf{P}$ with $\bf r_1$-labels remain only in the $\bf r_1$-hemisphere, and all vertices with $({\bf r-r_1})$-labels remain only in the $({\bf r-r_1})$-hemisphere. Deformations (operations) satisfying these conditions will be called {\it admissible}. A picture in which the equator divides the vertices in this way will be called a {\it picture with equator}. The aim of these operations is to reduce the picture $\mathbf{P}$ with equator to a picture with equator whose boundary label is equal to the identity in the free group; this will imply that the initial equatorial label, i.e. $[U]$, belongs to $\mathfrak{N}[{\bf R_1},{\bf N_1}]$. To prove the existence of the desired operations, we will need two auxiliary statements: Lemma \ref{lem_help1} and Lemma \ref{lem_help2}. In these lemmas, we use the notation of Theorem \ref{th5}; we let $\bf X$ denote the union ${\bf Y}\cup\bigcup_{i= 1}^n{\bf X}_i$, where ${\bf X}_i$ is the collection of all based spherical pictures over $\mathcal{P}_i$, $i=1,\ldots,n$; by a {\it ${\bf Z}^W$-picture} we mean a spherical picture containing exactly one $\bf Z$-picture and, possibly, closed arcs encircling this $\bf Z$-picture, where $\bf Z$ is a given collection of spherical pictures.
\begin{lem} \label{lem_help1} If $\pi_2(\mathcal{P})$ is generated over $\{\mathcal{P}_j\mid j=1,\ldots, n\}$ by $\bf Y$, then the picture $\mathbf{P}$ with equator $Equ$ can be reduced by a finite number of admissible operations to a picture $\widetilde{\mathbf{P}}$ with equator which is a finite sum of ${\bf X}^W$-pictures and, possibly, also contains some closed arcs encircling the point $p$. \end{lem} {\bf Proof of Lemma \ref{lem_help1}.} Since $\pi_2(\mathcal{P})$ is generated over $\{\mathcal{P}_j\mid j=1,\ldots, n\}$ by $\bf Y$, by Theorem \ref{thmPr3} the based spherical picture $\mathbf{P}$ over $\mathcal{P}$ is $\bf X$-equivalent to the empty picture, i.e., there are a finite sequence of based spherical pictures $\mathbf{P}_0, \mathbf{P}_1,\ldots, \mathbf{P}_s$ and a finite sequence $\mathfrak{f}_1,\ldots, \mathfrak{f}_s$ of operations $BRIDGE$, $FLOAT^{\pm 1}$, $FOLD^{\pm 1}$, $DELETE({\bf X})^{\pm 1}$ which transforms $\mathbf{P}$ (up to planar isotopy) to the empty picture, so that $\mathfrak{f}_j: \mathbf{P}_{j-1} \mapsto \mathbf{P}_j$, $j=1,\ldots,s$, $\mathbf{P}_0 = \mathbf{P}$, and $\mathbf{P}_s$ is the empty picture. Note that any folding pair is a spherical picture over some presentation $\mathcal{P}_i$, $i=1,\ldots,n$, and hence it belongs to ${\bf X}_i$. Thus $FOLD^{\pm 1}$ is a special case of $DELETE({\bf X})^{\pm 1}$; therefore, below $FOLD^{\pm 1}$ will not be considered separately. Since $\mathfrak{f}_1,\ldots, \mathfrak{f}_s$ are not necessarily admissible, using the sequences $\mathbf{P}_0, \mathbf{P}_1,\ldots, \mathbf{P}_s$ and $\mathfrak{f}_1,\ldots, \mathfrak{f}_s$, we will construct two new sequences $\mathfrak{Z}_0, \mathfrak{Z}_1,\ldots,\mathfrak{Z}_s$ and $\mathfrak{g}_1,\ldots, \mathfrak{g}_s, \mathfrak{g}_{s+1}$, where, for $i=1,\ldots,s$, $\mathfrak{Z}_i$ is a collection of disjoint spherical subpictures in $\mathbf{P}_i$ not containing the whole of the equator, and $\mathfrak{g}_i$ is an admissible operation.
In addition, the sequence $\mathfrak{g}_1,\ldots, \mathfrak{g}_s, \mathfrak{g}_{s+1}$ will transform $\mathbf{P}$, as a picture with equator, to $\widetilde{\mathbf{P}}$ so that $\mathfrak{g}_j: \widetilde{\mathbf{P}}_{j-1} \mapsto \widetilde{\mathbf{P}}_j$, where $\widetilde{\mathbf{P}}_0 = \mathbf{P}$, $\widetilde{\mathbf{P}}_j = \mathbf{P}_j\cup\mathfrak{Z}_j$, $\widetilde{\mathbf{P}}_{s+1} =\widetilde{\mathbf{P}}$ are pictures with equator, $j=1,\ldots,s$. This will prove Lemma \ref{lem_help1}. We let $\mathfrak{Z}_j^0$, $j=1,\ldots,s$, denote a finite collection of disjoint disks in the interior of $D^2$ containing the empty spherical subpictures obtained from the subpictures of $\mathfrak{Z}_j$ by deleting all arcs and vertices. In the case of a planar isotopy, we may assume that this isotopy reduces $\mathbf{P}_{j-1}$ to $\mathbf{P}_{j}=F_1(\mathbf{P}_{j-1})$, where $F_t : D^2\times [0,1] \to D^2\times [0,1]$ is a continuous isotopy of the disk $D^2$, so that \begin{itemize} \item[(i)] $F_t$ leaves all vertices fixed, i.e., for any $t\in [0,1]$ and each vertex $\Delta$, $F_t(\Delta)=\Delta$; \item[(ii)] for any $t\in [0,1]$ and each arc $\alpha$, the intersection of the arc $F_t(\alpha)$ and the equator $Equ$ consists of a finite number of points; moreover, if $Equ$ intersects $F_1(\alpha)$, then it crosses it rather than merely touching it; \item[(iii)] for any arc $\alpha$, the arc $F_1(\alpha)$ does not intersect any disk of $\mathfrak{Z}_{j-1}^0$. \end{itemize} If for any $t\in (0,1)$ and each arc $\alpha$, the arc $F_t(\alpha)$ does not intersect any disk of $\mathfrak{Z}_{j-1}^0$, then this isotopy is called {\it admissible}; otherwise it is called {\it inadmissible}. An admissible isotopy does not change the equatorial label as an element of the free group. Since an admissible isotopy is an admissible operation, below we will regard operations as equal if they agree up to admissible isotopy. 
We include the inadmissible isotopies in the list of operations $\mathfrak{f}_1,\ldots, \mathfrak{f}_s$. We will construct $\mathfrak{g}_j$, $j=1,\ldots,s$, so that $\mathfrak{g}_j$ transforms $(\mathbf{P}_{j-1}-\mathfrak{Z}_{j})$ to $(\mathbf{P}_j-\mathfrak{Z}_j)$ in just the same way (up to admissible isotopy of $(\mathbf{P}_j-\mathfrak{Z}_j)$) as $\mathfrak{f}_j$ transforms $(\mathbf{P}_{j-1}-\mathfrak{Z}_{j}^0)$ to $(\mathbf{P}_j-\mathfrak{Z}_j^0)$. In addition, we can always assume that the arcs of $\widetilde{\mathbf{P}}_j$ and the boundary of any subpicture from $\mathfrak{Z}_j^0$ intersect the equator at most finitely many times. As $\mathfrak{Z}_0$, take the set consisting of a single empty spherical subpicture of $\mathbf{P} = \mathbf{P}_0$ that contains the point $p\in Equ$ and a connected part of the equator. Let us define $\mathfrak{Z}_j$ and $\mathfrak{g}_j$, $j=1,\ldots,s$, by induction on $j$. Assume that $\mathfrak{Z}_0, \mathfrak{Z}_1,\ldots,\mathfrak{Z}_{j-1}$ and $\mathfrak{g}_1,\ldots, \mathfrak{g}_{j-1}$ have already been defined. Construct $\mathfrak{Z}_{j}$ and $\mathfrak{g}_{j}$ from $\mathfrak{f}_j: \mathbf{P}_{j-1} \mapsto \mathbf{P}_j$ as follows. The operation $\mathfrak{f}_j$ is one of the operations: an inadmissible isotopy, $BRIDGE$, $FLOAT^{\pm 1}$, $DELETE({\bf X})^{\pm 1}$. \underline{Case 1. Inadmissible isotopy.} There exist an arc $\alpha$ (labeled by $x\in \bf{x}\cup\bf{x}^{-1}$) in $\mathbf{P}_{j-1}$ and $t\in (0,1)$ such that $F_t(\alpha)$ intersects some empty spherical subpicture $\mathbf{P}_{\xi}^0$ from $\mathfrak{Z}_{j-1}^0$. Denote by $\mathbf{P}_{\xi}$ the spherical subpicture from $\mathfrak{Z}_{j-1}$ from which $\mathbf{P}_{\xi}^0$ is obtained. 
To obtain $\mathfrak{Z}_j$ from $\mathfrak{Z}_{j-1}$ (for each arc $\alpha$, each such subpicture $\mathbf{P}_{\xi}^0$ and each passage of $F_t(\alpha)$ through $\mathbf{P}_{\xi}^0$), add to $\mathbf{P}_{\xi}$ a closed arc (labeled by $x$) encircling all objects (arcs, vertices, the point $p$) already contained in $\mathbf{P}_{\xi}$. The action of $\mathfrak{g}_j$ on $(\mathbf{P}_{j-1}-\mathfrak{Z}_{j})$ coincides with the action of $\mathfrak{f}_j$ on $(\mathbf{P}_{j-1}-\mathfrak{Z}_{j}^0)$. The action of $\mathfrak{g}_j$ on $\mathfrak{Z}_{j}$ corresponds to the operation $BRIDGE$ performed on the arc $\alpha$ in the interior of the disk of $\mathbf{P}_{\xi}$. So $\mathfrak{g}_j$ does not change the equatorial label as an element of the free group. See Figure 3. \unitlength=1mm \begin{picture}(120,50)(-5,0) \qbezier[30](42,33)(50,40)(58,33) \qbezier[30](42,17)(50,10)(58,17) \qbezier[30](42,33)(35,25)(42,17) \qbezier[30](58,33)(65,25)(58,17) \put(48,10){$\mathbf{P}_{\xi}^0$} \qbezier[30](92,33)(100,40)(108,33) \qbezier[30](92,17)(100,10)(108,17) \qbezier[30](92,33)(85,25)(92,17) \qbezier[30](108,33)(115,25)(108,17) \put(98,10){$\mathbf{P}_{\xi}^0$} \thicklines \put(70,25){\vector(1,0){10}} \put(73,28){\rm $\mathfrak{f}_j$} \thinlines \put(30,40){\line(0,-1){30}} \put(28,25){\vector(1,0){4}} \put(27,26){\rm $x$} \put(120,40){\line(0,-1){30}} \put(118,25){\vector(1,0){4}} \put(117,26){\rm $x$} \end{picture} \unitlength=1mm \begin{picture}(120,50)(-5,0) \qbezier[30](42,33)(50,40)(58,33) \qbezier[30](42,17)(50,10)(58,17) \qbezier[30](42,33)(35,25)(42,17) \qbezier[30](58,33)(65,25)(58,17) \put(48,10){$\mathbf{P}_{\xi}$} \qbezier[30](92,33)(100,40)(108,33) \qbezier[30](92,17)(100,10)(108,17) \qbezier[30](92,33)(85,25)(92,17) \qbezier[30](108,33)(115,25)(108,17) \put(98,10){$\mathbf{P}_{\xi}$} \put(51,21){\circle*{2}} \put(51,29){\circle*{2}} \put(47,25){\circle*{2}} \thinlines \put(47,25){\line(1,1){4}} 
\put(47,25){\line(1,-1){4}} \put(51,21){\line(0,1){8}} \put(100,25){\circle{20}} \put(101,21){\circle*{2}} \put(101,29){\circle*{2}} \put(97,25){\circle*{2}} \thinlines \put(97,25){\line(1,1){4}} \put(97,25){\line(1,-1){4}} \put(101,21){\line(0,1){8}} \thicklines \put(70,25){\vector(1,0){10}} \put(73,28){\rm $\mathfrak{g}_j$} \thinlines \put(30,40){\line(0,-1){30}} \put(28,25){\vector(1,0){4}} \put(27,26){\rm $x$} \put(109,25){\vector(-1,0){4}} \put(108,23){\tiny \rm $x$} \put(120,40){\line(0,-1){30}} \put(118,25){\vector(1,0){4}} \put(117,26){\rm $x$} \end{picture} \begin{center} Figure 3 \end{center} \underline{Case 2. $FLOAT$, $DELETE({\bf X})$.} In $\mathbf{P}_{j-1}$, there is a spherical subpicture $\mathbf{P}_{\eta}$ containing only a floating circle (resp., only an ${\bf X}$-picture). The operation $\mathfrak{f}_j$ deletes $\mathbf{P}_{\eta}$ from $\mathbf{P}_{j-1}$. Applying an admissible isotopy to $\partial\mathbf{P}_{\eta}$ and $\mathfrak{f}_j$ if necessary, we may assume that $\mathbf{P}_{\eta}$ does not contain the whole of the equator, and that $\partial\mathbf{P}_{\eta}$ does not intersect any disk from $\mathfrak{Z}_{j-1}^0$. Note that the arcs and the vertices of $\mathbf{P}_{\eta}$ do not intersect $\mathfrak{Z}_{j-1}^0$ either. Let $\mathbf{P}_{\xi_1}^0$, ..., $\mathbf{P}_{\xi_m}^0$ be all spherical subpictures from $\mathfrak{Z}_{j-1}^0$ in the interior of the disk of $\mathbf{P}_{\eta}$, and let $\mathbf{P}_{\xi_1}$, ..., $\mathbf{P}_{\xi_m}$ be the corresponding subpictures from $\mathfrak{Z}_{j-1}$. The collection $\mathfrak{Z}_{j}$ is obtained from $\mathfrak{Z}_{j-1}$ by deleting $\mathbf{P}_{\xi_1}$, ..., $\mathbf{P}_{\xi_m}$ and adding the spherical subpicture from $\widetilde{\mathbf{P}}_{j-1}$ encircled by the path $\partial\mathbf{P}_{\eta}$ (this subpicture contains the disjoint union of $\mathbf{P}_{\eta}$ and $\mathbf{P}_{\xi_1}$, ..., $\mathbf{P}_{\xi_m}$). The operation $\mathfrak{g}_j$ acts identically (up to admissible isotopy). 
\underline{Case 3. $FLOAT^{-1}$, $DELETE({\bf X})^{-1}$.} Let $\mathfrak{f}_j$ be an operation $FLOAT^{-1}$ (resp., $DELETE({\bf X})^{-1}$). The operation $\mathfrak{f}_j$ inserts a spherical subpicture $\mathbf{P}_{\eta}$ in $(\mathbf{P}_{j-1} - \mathfrak{Z}_{j-1}^0)$ such that $\mathbf{P}_{\eta}$ contains only a floating circle (resp., only an ${\bf X}$-picture). Applying an admissible isotopy if necessary, we can arrange the following. If $\mathfrak{f}_j$ is $FLOAT^{-1}$, the subpicture $\mathbf{P}_{\eta}$ does not intersect the equator. If $\mathfrak{f}_j$ is $DELETE({\bf X})^{-1}$ and $\mathbf{P}_{\eta}$ contains only vertices with $\bf r_1$-labels or only vertices with $(\bf r - r_1)$-labels, then the whole of $\mathbf{P}_{\eta}$ lies in the hemisphere corresponding to the labels of the vertices in $\mathbf{P}_{\eta}$. If $\mathfrak{f}_j$ is $DELETE({\bf X})^{-1}$ and $\mathbf{P}_{\eta}$ contains vertices both with $\bf r_1$-labels and with $(\bf r - r_1)$-labels (we may assume that $\mathbf{P}_{\eta}\in{\bf Y}$), then $\mathbf{P}_{\eta}$ is positioned so that $\hat{Equ} = \mathbf{P}_{\eta}\cap Equ$ is connected and the equator divides the vertices of $\mathbf{P}_{\eta}$ into two parts: the vertices with $\bf r_1$-labels (in the $\bf r_1$-hemisphere) and the vertices with $(\bf r - r_1)$-labels (in the $(\bf r - r_1)$-hemisphere); moreover, the label $U_{\eta}$ of the path $\hat{Equ}$ satisfies $[U_{\eta}]\in \mathfrak{N}$. Put $\mathfrak{Z}_{j} =\mathfrak{Z}_{j-1}$. The action of $\mathfrak{g}_j$ on $(\mathbf{P}_{j-1}-\mathfrak{Z}_{j})$ coincides with the action of $\mathfrak{f}_j$ on $(\mathbf{P}_{j-1}-\mathfrak{Z}_{j}^0)$, and the action of $\mathfrak{g}_j$ on $\mathfrak{Z}_{j}$ is identical. The operation $\mathfrak{g}_j$ does not change the equatorial label modulo $\mathfrak{N}[{\bf R_1},{\bf N_1}]$. \underline{Case 4. $BRIDGE$.} We may assume that $\mathfrak{f}_j$ acts only on $(\mathbf{P}_{j-1} - \mathfrak{Z}_{j-1}^0)$. 
Put $\mathfrak{Z}_{j} =\mathfrak{Z}_{j-1}$. The action of $\mathfrak{g}_j$ on $(\mathbf{P}_{j-1}-\mathfrak{Z}_{j})$ coincides with the action of $\mathfrak{f}_j$ on $(\mathbf{P}_{j-1}-\mathfrak{Z}_{j}^0)$, and the action of $\mathfrak{g}_j$ on $\mathfrak{Z}_{j}$ is identical. The operation $\mathfrak{g}_j$ does not change the equatorial label as an element of the free group. To complete the proof of Lemma \ref{lem_help1}, it remains to define $\mathfrak{g}_{s+1}: \widetilde{\mathbf{P}}_{s} \mapsto \widetilde{\mathbf{P}}_{s+1}$, where $\widetilde{\mathbf{P}}_s = \mathbf{P}_s\cup\mathfrak{Z}_s$, $\widetilde{\mathbf{P}}_{s+1} =\widetilde{\mathbf{P}}$. Since $\mathbf{P}_s$ is the empty picture, the arcs and the vertices of $\widetilde{\mathbf{P}}_s$ belong to spherical subpictures from $\mathfrak{Z}_{s}$. The operation $\mathfrak{g}_{s+1}$ transforms each spherical picture $\mathbf{P}_{\mu}$ from $\mathfrak{Z}_{s}$ as follows. By construction of $\mathfrak{Z}_{s}$, $\mathbf{P}_{\mu}$ is a union of spherical pictures, nested one inside another, each of which either is an ${\bf X}^W$-picture or contains only closed arcs and, possibly, the point $p$. The operation $\mathfrak{g}_{s+1}$ decomposes $\mathbf{P}_{\mu}$ into a sum of ${\bf X}^W$-pictures and spherical pictures containing only closed arcs and, possibly, $p$, by applying admissible isotopy and $BRIDGE$ (see the example in Figure 4) so that the vertices remain in their own hemispheres, no summand contains the whole of the equator, and the boundary of each summand intersects the equator at most finitely many times. After that, $\mathfrak{g}_{s+1}$ deletes all floating circles not encircling $p$ by applying $FLOAT$. The operation $\mathfrak{g}_{s+1}$ does not change the equatorial label as an element of the free group. 
\unitlength=1mm \begin{picture}(125,50)(1,0) \thicklines \put(77,25){\vector(1,0){10}} \thinlines \multiput(10,30)(105,0){2}{\circle*{2}} \multiput(50,30)(105,0){2}{\circle*{2}} \multiput(10,40)(105,0){2}{\circle*{2}} \multiput(50,40)(105,0){2}{\circle*{2}} \multiput(10,31)(105,0){2}{\line(0,1){8}} \multiput(50,31)(105,0){2}{\line(0,1){8}} \multiput(11,40)(105,0){2}{\line(1,0){38}} \put(30,29){\oval(40,38)[b]} \multiput(30,22)(105,-2){2}{\circle{2}} \multiput(26,15)(105,-2){2}{\circle{2}} \multiput(34,15)(105,-2){2}{\circle{2}} \multiput(31,21)(105,-2){2}{\line(1,-2){2.5}} \multiput(29,21)(105,-2){2}{\line(-1,-2){2.5}} \multiput(27,15)(105,-2){2}{\line(1,0){6}} \put(115,30){\line(1,0){40}} \put(145,28){\vector(0,1){5}} \put(135,16.5){\oval(40,13)} \put(40,8){\vector(0,1){5}} \put(145,25){\vector(0,-1){5}} \put(146,20){\rm x} \multiput(41,11)(105,20){2}{\rm x} \multiput(0,26)(107,8){2}{\rm $\mathbf{P}'$} \multiput(22,18)(105,-2){2}{\rm $\mathbf{P}''$} \qbezier[60](8,42)(30,55)(52,42) \qbezier[60](8,10)(30,-3)(52,10) \qbezier[60](8,42)(-11,26)(8,10) \qbezier[60](52,42)(71,26)(52,10) \qbezier[60](113,42)(135,48)(157,42) \qbezier[60](113,28)(135,22)(157,28) \qbezier[40](113,42)(98,35)(113,28) \qbezier[40](157,42)(172,35)(157,28) \qbezier[30](23,22)(30,26)(37,22) \qbezier[30](23,14)(30,10)(37,14) \qbezier[20](23,22)(18,18)(23,14) \qbezier[20](37,22)(42,18)(37,14) \qbezier[30](128,20)(135,24)(142,20) \qbezier[30](128,12)(135,9)(142,12) \qbezier[20](128,20)(123,16)(128,12) \qbezier[20](142,20)(147,16)(142,12) \end{picture} \begin{center} Figure 4. 
{Partition of subpictures $\mathbf{P}'$ and $\mathbf{P}''$}\\{\tiny (To avoid cluttering the figure, the orientation and the label are indicated only for the arc transformed by $BRIDGE$.)} \end{center} \rule {6pt}{6pt} \begin{lem} \label{lem_help2} If a based spherical picture $\widetilde{\mathbf{P}}$ with the equator $Equ$ is a finite sum of ${\bf X}^W$-pictures and, possibly, contains some closed arcs around the point $p$, then by a finite number of admissible operations, $\widetilde{\mathbf{P}}$ can be reduced to a picture with equator whose equatorial label is equal to the identity in the free group. \end{lem} {\bf Proof of Lemma \ref{lem_help2}.} Below we will use the operation $COMMUTE$ depicted in Figure 5. This operation changes the equatorial label by an element of $[{\bf R_1},{\bf N_1}]$ (see details in \cite{ok1}). It can be realized as a planar isotopy and a finite number of $DELETE({\bf X})^{\pm 1}$, $BRIDGE$, $FLOAT^{\pm 1}$. \unitlength=1mm \begin{picture}(120,50)(-9,0) \thicklines \multiput(30,10)(10,0){3}{\line(1,0){5}} \multiput(85,10)(10,0){3}{\line(1,0){5}} \multiput(35,40)(10,0){3}{\line(1,0){5}} \multiput(90,40)(10,0){3}{\line(1,0){5}} \multiput(30,15)(0,10){3}{\line(0,1){5}} \multiput(85,15)(0,10){3}{\line(0,1){5}} \multiput(60,10)(0,10){3}{\line(0,1){5}} \multiput(115,10)(0,10){3}{\line(0,1){5}} \put(67,25){\vector(1,0){10}} \thicklines \multiput(30,25)(55,0){2}{\line(1,0){30}} \thinlines \multiput(20,25)(97,0){2}{${Equ}$} \put(38,10){\line(0,1){20}} \put(38,31){\circle{2}} \qbezier[10](35,30)(35,34)(38,34) \qbezier[10](41,30)(41,34)(38,34) \qbezier[30](35,10)(35,20)(35,30) \qbezier[30](41,10)(41,20)(41,30) \put(52,40){\line(0,-1){20}} \put(52,19){\circle*{2}} \qbezier[10](49,20)(49,16)(52,16) \qbezier[10](55,20)(55,16)(52,16) \qbezier[30](49,40)(49,30)(49,20) \qbezier[30](55,40)(55,30)(55,20) \put(93,25){\line(0,-1){5}} \put(93,19){\circle*{2}} \qbezier[10](90,20)(90,16)(93,16) \qbezier[10](96,20)(96,16)(93,16) 
\qbezier[10](90,20)(90,22)(90,25) \qbezier[10](96,20)(96,22)(96,25) \put(93,10){\line(0,1){2}} \qbezier[30](94,15)(105,17)(104,25) \qbezier[10](90,10)(89,14)(94,15) \qbezier(96,14)(107,14)(107,25) \qbezier(93,10)(92,13)(96,14) \qbezier[30](99,12)(110,13)(110,25) \qbezier[5](96,10)(95,12)(99,12) \put(107,25){\line(0,1){5}} \put(107,31){\circle{2}} \qbezier[10](104,30)(104,34)(107,34) \qbezier[10](110,30)(110,34)(107,34) \qbezier[10](104,25)(104,28)(104,30) \qbezier[10](110,25)(110,28)(110,30) \put(107,40){\line(0,-1){2}} \qbezier[30](106,35)(95,33)(96,25) \qbezier[10](110,40)(111,36)(106,35) \qbezier(104,36)(93,36)(93,25) \qbezier(107,40)(108,37)(104,36) \qbezier[30](101,38)(90,37)(90,25) \qbezier[5](104,40)(105,38)(101,38) \end{picture} \begin{center} Figure 5 \end{center} First, applying $FLOAT$ and $DELETE({\bf X})$ to $\widetilde{\mathbf{P}}$, we delete all ${\bf X}^W$-pictures not intersecting $Equ$. This does not change the equatorial label. Next, by means of planar isotopy and $COMMUTE$, we arrange that for each ${\bf X}^W$-picture $\mathbf{P}_{\eta}$, the intersection $Equ\cap \mathbf{P}_{\eta}$ is connected, i.e., $Equ$ divides $\mathbf{P}_{\eta}$ into two parts: a subpicture over $\mathcal{P}_1=\langle{\bf x}\mid{\bf r_1}\rangle$ in the $\bf r_1$-hemisphere and a subpicture over $\mathcal{P}_{({\bf r - r_1})}=\langle{\bf x}\mid({\bf r - r_1})\rangle$ in the $(\bf r - r_1)$-hemisphere. If at least one of these parts does not contain vertices, the label of $Equ\cap \mathbf{P}_{\eta}$ is equal to the identity in the free group. Otherwise either $\mathbf{P}_{\eta}$ is a ${\bf Y}^W$-picture, or $\mathbf{P}_{\eta}$ contains vertices with labels from ${\bf r_1}\cap{\bf \hat{r}_1}$. 
In the first case, the label of $Equ\cap \mathbf{P}_{\eta}$ is equal to $[WV_YW^{-1}]$ in the free group, where $V_Y$ is the label of a simple closed path in a based spherical picture $Y\in {\bf Y}$ separating the vertices with $\bf r_1$-labels from the vertices with $(\bf r - r_1)$-labels. In the second case, the label of $Equ\cap \mathbf{P}_{\eta}$ is equal to a product of elements of the form $[WR^{\pm 1}W^{-1}]$, where $R\in{\bf r_1}\cap{\bf \hat{r}_1}$. In each of these cases, the label of $Equ\cap \mathbf{P}_{\eta}$ belongs to $\mathfrak{N}[{\bf R_1},{\bf N_1}]$. Now we delete all ${\bf X}^W$-pictures from $\widetilde{\mathbf{P}}$. It follows from the above arguments that this operation is admissible. After this operation, only closed arcs encircling $p$ may remain in $\widetilde{\mathbf{P}}$. So the equatorial label is equal to the identity in the free group. This completes the proof of Lemma \ref{lem_help2}. \rule {6pt}{6pt} Let us continue the proof of Theorem \ref{th5}. By Lemma \ref{lem_help1}, the picture $\mathbf{P}$ with equator can be reduced by a finite number of admissible operations to a picture $\widetilde{\mathbf{P}}$ with equator which is a finite sum of ${\bf X}^W$-pictures and, possibly, also contains some closed arcs around $p$. By Lemma \ref{lem_help2}, $\widetilde{\mathbf{P}}$ can be reduced by a finite number of admissible operations to a picture with equator whose equatorial label is equal to the identity in the free group. So the equatorial label $[U]$ of the initial picture $\mathbf{P}$ is equal to the identity modulo $\mathfrak{N}[{\bf R_1},{\bf N_1}]$. \rule {6pt}{6pt} {\it Keywords:} Presentations, subpresentations, asphericity, independent families of normal subgroups, intersection of normal subgroups, mutual commutant of normal subgroups, spherical pictures, identity sequences. \end{document}
\begin{document} \title{Fast approximate inference with INLA: the past, the present and the future} \author{Daniel Simpson, Finn Lindgren and H\aa{}vard Rue\\ Department of Mathematical Sciences\\ Norwegian University of Science and Technology\\ N-7491 Trondheim, Norway} \date{May 15, 2011} \maketitle \begin{abstract} Latent Gaussian models are an extremely popular, flexible class of models. Bayesian inference for these models is, however, tricky and time consuming. Recently, Rue, Martino and Chopin introduced the Integrated Nested Laplace Approximation (INLA) method for deterministic fast approximate inference. In this paper, we outline the INLA approximation and its related R package. We will discuss the newer components of the r-INLA program as well as some possible extensions. \end{abstract} \section{Introduction} As the statistical understanding of applied scientists increases and new techniques deliver larger, more complicated data sets, applied statisticians are faced with increasingly complex models. Naturally, as the complexity of these models increases, it becomes harder and harder to perform inference. Accordingly, a great deal of effort has been expended on constructing numerical methods for performing approximate Bayesian inference. Undoubtedly, the most popular family of approximate inference methods in Bayesian statistics is the class of Markov Chain Monte Carlo (MCMC) methods. These methods exploded into popularity in the mid 1980s and have remained at the forefront of Bayesian statistics ever since, with the basic framework being extended to cope with increasingly complex problems. The key advantage of MCMC methods is that, in their most vanilla incarnation, they are extremely simple to program. This simplicity, together with their incredible flexibility, has led to the proliferation of these methods. Of course, there are problems: a single-site auxiliary Gibbs sampler for spatial logistic regression is known to fail spectacularly. 
This is just the tip of the iceberg---for even mildly complicated models, it can be extremely difficult to construct an MCMC scheme that converges in a reasonable amount of time. For large models, and especially spatial models, fast convergence isn't enough. Even if you could sample exactly from the posterior, sampling-based methods converge like $\mathcal{O}(N^{-1/2})$, where $N$ is the number of samples, which suggests that you need $10^{2p}$ samples to get an error of around $10^{-p}$. Clearly, if computing a single sample is even reasonably expensive, this cost will be prohibitive. In the best case, this means that MCMC schemes for large problems typically take hours or even days to deliver estimates that are only correct to three or four decimal places. Clearly this is less than ideal! The only way around this efficiency problem is to consider alternatives to sampling-based methods. The first step in constructing an efficient approximate inference scheme is to greatly restrict the class of models that we will consider: it is na\"ive to expect that an efficient algorithm exists that will solve all of the problems that MCMC treats. With this in mind, we restrict our attention to the class of \emph{latent Gaussian models}, which we define in three stages as \begin{align*} y_i | \mathbf{x} &\sim \pi(y_i | x_i) \qquad &&\text{(Observation equation)} \\ \mathbf{x}|\boldsymbol{\theta} & \sim N(\boldsymbol{\mu}(\boldsymbol{\theta}),\mathbf{Q}(\boldsymbol{\theta})^{-1}) \qquad &&\text{(Latent Gaussian field)} \\ \boldsymbol{\theta} &\sim \pi(\boldsymbol{\theta}) \qquad &&\text{(Parameter model)}, \end{align*} where $\mathbf{Q}(\boldsymbol{\theta})$ is the \emph{precision matrix} (that is, the inverse of the covariance matrix) of the Gaussian random vector $\mathbf{x}$. In the interest of having a computable model, we will restrict $\mathbf{Q}$ to be either sparse or small enough that computing multiple factorisations is not an issue. 
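To make the three-stage hierarchy concrete, here is a minimal numerical sketch (in Python rather than R, with numpy/scipy assumed; all values and names are purely illustrative) that simulates one such model: an AR(1) latent field, whose tridiagonal precision matrix is the kind of sparse $\mathbf{Q}$ the restriction above has in mind, observed through a Poisson observation equation.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
n = 200

# Hyperparameters theta = (kappa, phi): innovation precision and
# autocorrelation of a stationary AR(1) latent field (illustrative values).
kappa, phi = 2.0, 0.9

# Tridiagonal (hence sparse) precision matrix Q(theta) of the latent field.
Q = np.zeros((n, n))
i = np.arange(n)
Q[i, i] = kappa * np.r_[1.0, np.full(n - 2, 1.0 + phi**2), 1.0]
Q[i[:-1], i[1:]] = Q[i[1:], i[:-1]] = -kappa * phi

# Latent Gaussian field: x ~ N(0, Q^{-1}). With Q = L L^T,
# solving L^T x = z for z ~ N(0, I) gives Cov(x) = Q^{-1}.
L = cholesky(Q, lower=True)
x = solve_triangular(L.T, rng.standard_normal(n), lower=False)

# Observation equation: y_i | x ~ Poisson(exp(x_i)).
y = rng.poisson(np.exp(x))
```

In r-INLA the analogous structure is declared through a formula rather than assembled by hand, and the sparsity of $\mathbf{Q}$ is precisely what keeps the repeated factorisations cheap.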
These models cover a large chunk of classical statistical models, including dynamic linear models, stochastic volatility models, generalised linear (mixed) models, generalised additive (mixed) models, spline smoothing models, disease mapping, log-Gaussian Cox processes, model-based geostatistics, spatio-temporal models and survival analysis. The Integrated Nested Laplace Approximation (INLA) builds upon Laplace approximations, which were originally used for approximating posterior distributions by \citet{art367}. The first step in the INLA approximation is to perform a Laplace approximation to the joint posterior \begin{align} \pi(\boldsymbol{\theta}|\mathbf{y}) &\propto \frac{\pi(\mm{\theta})\pi(\mm{x}|\mm{\theta})\pi(\mm{y}|\mm{x})}{\pi(\mm{x}|\mm{\theta},\mm{y})} \notag \\ &\approx \frac{\pi(\mm{\theta})\pi(\mm{x}|\mm{\theta})\pi(\mm{y}|\mm{x})}{\pi_G(\mm{x}|\mm{\theta},\mm{y})}, \label{laplace} \end{align} where $\pi_G(\mm{x}|\mm{\theta},\mm{y})$ is the Gaussian approximation to $\pi(\mm{x}|\mm{\theta},\mm{y})$ that matches the true distribution at the mode \citep{art451}. The approximate posterior marginals for the non-Gaussian parameters can then be constructed through numerical integration as long as the dimension of $\mm{\theta}$ is not too large. The posterior marginals for the latent field $\pi(x_i | \mm{y})$ are constructed by computing a Laplace approximation to $\pi(x_i | \mm{\theta},\mm{y})$ and then integrating out against the approximate joint posterior for $\mm{\theta}|\mm{y}$. Full details of the approximation scheme can be found in \citet{art451}. \section{The \texttt{r-INLA}\ program} \label{section2} The INLA method was designed to provide fast inference for a large class of practical Bayesian problems. In order to fulfil this aim, the \texttt{r-INLA}\ package was created as an \texttt{R} interface to the INLA program, which is itself written in \texttt{C}. 
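As a toy illustration of the Gaussian approximation $\pi_G$ appearing in (\ref{laplace}), the following Python sketch (a scalar model with illustrative names; r-INLA performs the analogous computation for the full latent vector, in \texttt{C}, using sparse matrix algebra) matches a Gaussian to $\pi(x \mid \theta, y)$ at its mode for the model $x \sim N(0, s^2)$, $y \mid x \sim \operatorname{Poisson}(e^x)$.

```python
import numpy as np

# Toy scalar model: prior x ~ N(0, s2), likelihood y | x ~ Poisson(exp(x)).
# y_obs and s2 play the role of the data and a fixed hyperparameter.
y_obs, s2 = 4.0, 1.0

def grad(x):
    # First derivative of log pi(x | y), up to an additive constant.
    return y_obs - np.exp(x) - x / s2

def hess(x):
    # Second derivative of log pi(x | y); negative, since the target is concave.
    return -np.exp(x) - 1.0 / s2

# Newton iteration for the mode of log pi(x | y).
x = 0.0
for _ in range(50):
    step = grad(x) / hess(x)
    x -= step
    if abs(step) < 1e-12:
        break

# The Gaussian approximation matched at the mode: pi_G = N(mode, 1/prec).
mode, prec = x, -hess(x)
```

The quality of this mode-matched Gaussian is exactly what the ``global approximation'' discussion later in the paper revisits.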
The syntax for the \texttt{r-INLA}\ package is based on the inbuilt \texttt{glm} function in \texttt{R}, which highlights the effectiveness of the INLA method as a general solver for generalised linear (mixed) models. The \texttt{r-INLA}\ package is available from \texttt{http://r-inla.org}. The key to the computational efficiency of the \texttt{r-INLA}\ program is that it is based on \texttt{GMRFLib}, a \texttt{C} library written by H\aa{}vard Rue for performing efficient computations on Gaussian Markov random fields. As such, \texttt{r-INLA}\ is particularly effective when the latent Gaussian field has the Markov property. This covers the case of spline smoothing (in any dimension), as well as conditional autoregressive models and some Mat\'{e}rn random fields \citep{Lindgren2011}. Such a latent field is specified through the \texttt{formula} mechanism in \texttt{R}. To demonstrate the \texttt{r-INLA}\ package, let us consider some survival data for myeloid leukaemia cases in the north-west of England. The model is a Cox proportional hazard model, where the hazard depends linearly on the age and sex of the patient, smoothly on the white blood count (\texttt{wbc}) and an econometric covariate (\texttt{tpi}). Furthermore, it is assumed that there is a spatially correlated random effect, which takes into account which district the patient is in. The following code performs full Bayesian inference on the appropriate generalised additive mixed model in around seven seconds. 
The posterior mean spatial effect is shown in Figure \ref{postmean}. \begin{Schunk} \begin{Sinput} > data(Leuk) > g = system.file("demodata/Leuk.graph", package = "INLA") > formula = inla.surv(Leuk$time, Leuk$cens) ~ 1 + age + sex + f(inla.group(wbc), + model = "rw1") + f(inla.group(tpi), model = "rw2") + f(district, + model = "besag", graph.file = g) > result = inla(formula, family = "coxph", data = Leuk) \end{Sinput} \end{Schunk} \begin{figure} \caption{The posterior mean for the effect of district.\label{postmean}} \end{figure} \section{New features} Since the original INLA paper, there have been a number of new developments. In this section, we outline some of the most recent additions to the \texttt{r-INLA}\ package. \paragraph{Manipulating the likelihood} The original INLA method was limited to observation models where each observation depended on one element of the latent Gaussian field. While this is commonly the case, this assumption is violated, for example, when the observed data consists of area averages of the latent field. In this case, $$ y_i| \mm{x} \sim \pi\left(y_i \big| \sum_{j} a_{ij}x_j\right). $$ We further assume that the dependence of the data on the latent field is ``local'' in the sense that most elements of the ``$\mm{A}$ matrix'' are zero. With this assumption, everything stays Markovian and fast inference is still possible. This is implemented in the \texttt{r-INLA}\ program by modifying the \texttt{control.compute} parameter in the \texttt{r-INLA}\ function call: \begin{Schunk} \begin{Sinput} > res = inla(formula, family = "...", data = ..., control.compute = list(A = amat)) \end{Sinput} \end{Schunk} Beyond relaxing this restriction to the class of models considered by the \texttt{r-INLA}\ program, there are a number of other new methods for building new models. The \texttt{f()} function, which \texttt{r-INLA}\ uses to specify random effects, has two new options: \texttt{replicate} and \texttt{copy}. 
The first option deals with the case where the likelihood requires independent replicates of the model with the same hyperparameters. The \texttt{copy} option is useful in situations where the latent field uses the same random field multiple times, possibly with different scalings. Finally, \texttt{r-INLA}\ has been extended to include models where the data comes from different sources. In this case, different subsets of the data will require different likelihood functions. This can be programmed in \texttt{r-INLA}\ by re-writing the data as a matrix where the number of columns is equal to the number of likelihoods. In the case where there are two likelihoods, each containing $n$ data points, this is achieved through the command \begin{Schunk} \begin{Sinput} > Y = matrix(NA, N, 2) > Y[1:n, 1] = y[1:n] > Y[1:n + n, 2] = y[(n + 1):(2 * n)] \end{Sinput} \end{Schunk} The \texttt{r-INLA}\ command is then modified appropriately by setting \texttt{family = c("model1", "model2")}. \paragraph{Survival models} A class of models that was not considered in the original INLA paper is that of Bayesian survival models. The trick is to see Bayesian survival models as just another set of latent Gaussian models. In some situations, this is straightforward, while at other times it requires data augmentation tricks, which are implemented in the \texttt{inla.surv()} function, demonstrated in Section \ref{section2}. These methods are outlined in \citep{tech90,tech91}, which also discuss ways to deal with different types of censoring. \paragraph{Stochastic partial differential equations} A new method for constructing computationally efficient Gaussian random fields by taking advantage of the spatial Markov property was presented in a recent read paper by \citet{Lindgren2011}. The idea is to use the fact that these fields can be represented as the solution to stochastic partial differential equations (SPDEs) to construct computationally efficient approximations to them. 
Beyond building computationally efficient approximations to standard spatial models, this method also allows for the construction of \emph{new} classes of random fields with physically interpretable non-stationarity. These models have been implemented in \texttt{r-INLA}. The following chunk of code fits a Bayesian spline through some noisy data points. It begins by constructing a mesh on the unit square with vertices at the observation locations (\texttt{points}): \begin{Schunk} \begin{Sinput} > bnd = inla.mesh.segment(matrix(c(0, 0, 1, 0, 1, 1, 0, 1), ncol = 2, + byrow = TRUE)) > mesh = inla.mesh.create(points, boundary = bnd, refine = list(max.edge = 0.1)) \end{Sinput} \end{Schunk} The second step is to construct the SPDE model \begin{Schunk} \begin{Sinput} > spde = inla.spde.create(mesh, model = "imatern") \end{Sinput} \end{Schunk} where \texttt{imatern} is the intrinsic Mat\'{e}rn model with $\nu = 1$, i.e., the spline smoothing model. Finally the formula is constructed and the inference is performed in the standard way: \begin{Schunk} \begin{Sinput} > formula = y ~ f(data_points, model = spde) - 1 > r = inla(formula, family = "gaussian", data = list(y = y, data_points = mesh$idx$loc)) \end{Sinput} \end{Schunk} \section{What the future holds} There are an almost endless number of ways that the INLA method and the \texttt{r-INLA}\ program can be extended. In this section we describe some of the new features that we are currently working on. \paragraph{Fixing ``failures'': global Gaussian approximations} The Laplace approximation proceeds by fitting a Gaussian approximation around the mode of $\pi(\mm{x} | \mm{\theta},\mm{y})$; however, there are situations in which this is not the most appropriate approximation. For instance, if the true distribution is bimodal, a better `fit' would be obtained by constructing a Gaussian approximation that \emph{globally} approximates the distribution. 
Another situation where this more global approximation would be of use is the following case of ``failure''. Consider the problem of approximating the latent random field for the following logistic regression model. \begin{Schunk} \begin{Sinput} > n = 100 > eta = 1 + rnorm(n) > p = exp(eta)/(1 + exp(eta)) > y = rbinom(n, size = 1, prob = p) > bad.result = inla(y ~ 1 + f(num, model = "iid"), family = "binomial", + Ntrials = rep(1, n), data = list(y = y, num = c(1:100))) \end{Sinput} \end{Schunk} Figure \ref{binomial_bad} shows the posterior for the precision of the random effect. INLA has clearly missed the correct precision, which was $1$. \begin{figure} \caption{The precision for the latent Gaussian field is badly overestimated---the true value is $\phi = 1$.\label{binomial_bad}} \end{figure} So what went wrong? Quite simply, there is very little information in the data and hence the model is very prior sensitive. This sensitivity, combined with the vague prior that \texttt{r-INLA}\ uses as a default, produced the nonsense results in Figure \ref{binomial_bad}. \paragraph{Kronecker product models} In a number of applications, the precision matrix in the Gaussian random field can be written as a Kronecker product of two standard covariance matrices. A simple example of this is the separable space-time model constructed by using spatially correlated innovations in an AR(1) model: $$ \mm{x}_{t+1} = \phi\mm{x}_t + \mm{\epsilon}_t, $$ where $\phi$ is a scalar and $\mm{\epsilon} \sim N(0, {\mm{Q}_{\epsilon}}^{-1})$. In this case, the precision matrix is $\mm{Q} = \mm{Q}_\text{AR(1)} \otimes \mm{Q}_{\epsilon}$, where $\otimes$ is the Kronecker product. Due to the prevalence of Kronecker product models, it is desirable to add a Kronecker product mechanism to \texttt{r-INLA}. The general Kronecker product mechanism is currently in progress, but a number of special cases are already available in the code through the undocumented \texttt{group} feature. 
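The Kronecker structure behind this AR(1)-in-time model is easy to assemble directly. The following Python sketch (toy dimensions, illustrative names, numpy/scipy assumed) builds $\mm{Q} = \mm{Q}_\text{AR(1)} \otimes \mm{Q}_{\epsilon}$ and shows that the product of two sparse precisions is itself sparse, which is what makes such models tractable.

```python
import numpy as np
from scipy import sparse

T, S, phi = 4, 3, 0.5   # time points, spatial sites, AR(1) coefficient

# Tridiagonal precision of a stationary AR(1) process of length T
# (unit innovation precision; values illustrative).
main = np.r_[1.0, np.full(T - 2, 1.0 + phi**2), 1.0]
off = -phi * np.ones(T - 1)
Q_ar1 = sparse.diags([off, main, off], [-1, 0, 1])

# Any symmetric positive definite spatial innovation precision will do;
# here a simple tridiagonal stand-in for Q_eps.
Q_eps = sparse.diags([-np.ones(S - 1), 2.0 * np.ones(S), -np.ones(S - 1)],
                     [-1, 0, 1])

# Separable space-time precision: Q = Q_AR(1) (Kronecker) Q_eps,
# a (T*S) x (T*S) matrix that stays sparse.
Q = sparse.kron(Q_ar1, Q_eps, format="csc")
```

Because the eigenvalues of a Kronecker product are the pairwise products of the factors' eigenvalues, $\mm{Q}$ inherits positive definiteness from its two factors.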
For example, a separable spatiotemporal SPDE model can be constructed using the command \begin{Schunk} \begin{Sinput} > frm = y ~ f(loc, model = spde, group = time, control.group = list(model = "ar1")) \end{Sinput} \end{Schunk} in which every observation \texttt{y} is assigned a location \texttt{loc} and a time \texttt{time}. At each time, the spatial points are linked by an SPDE model, while across the time periods, they evolve according to an AR(1) process. \paragraph{Extending the SPDE methodology} The grouping mechanism described above can be used to produce separable space-time models, that is, models in which the covariance function can be factored into a purely spatial and a purely temporal component. In some situations, this type of separability is an unrealistic assumption, and a great deal of research has gone into constructing classes of non-separable spatiotemporal covariance functions. An interesting property of SPDE models is that \emph{any} model built with a sensible space-time partial differential operator will lead to a non-separable model. Furthermore, these models will inherit the good physical properties of the deterministic PDE models, such as causality and non-reversibility. This guarantees that the non-separability is \emph{useful}, rather than simply present. We are currently working to include the stochastic heat equation model $$ \frac{\partial }{\partial t} (\tau(s,t) x(s,t)) -\nabla\cdot \left(\mm{D}(s,t) \nabla(\tau(s,t)x(s,t))\right) + \nabla\cdot(\mm{b}(s,t)x(s,t)) + \kappa^2(s,t) x(s,t)= W(s,t), $$ where the noise process $W(s,t)$ is white in time, but correlated and Markovian in space. The challenge here is not simply placing the model into the \texttt{r-INLA}\ framework: this model includes temporally varying anisotropy and temporally varying drift, and therefore even parameterising it is an open problem.
\paragraph{Gamma frailty models: relaxing the Gaussian assumptions} The assumption of Gaussian random effects is at the very heart of the INLA approximation. However, there are a number of situations in which this is not a realistic assumption. An example of this comes when incorporating frailty into Cox proportional hazard models. In these models, the hazard function for individual $i$ is modelled as $$ h(t_i) = h_0(t_i)\, \nu_i \exp(\eta_i), $$ where $\eta_i$ is a linear model containing covariates and $\nu_i$ is the frailty term, which models unobserved heterogeneity in the population. Clearly, if we take $\nu_i$ to be log-normal, the resulting model fits firmly in the standard INLA framework. Unfortunately, log-normal frailties are an uncommon model; typically, the frailty term is taken to be gamma distributed. The question is, therefore: can we incorporate gamma frailty models into the INLA framework? The solution to this problem comes in the guise of an ``importance sampling''--type decomposition: $$ \text{Gamma} = \underbrace{\text{LogNormal}}_{\text{``Prior''}} \times \underbrace{\frac{\text{Gamma}}{\text{LogNormal}}}_\text{``Correction''}. $$ With this type of formulation, it is possible to incorporate gamma frailty models into the INLA framework. This approach is not entirely satisfactory---although we can theoretically do this for any model suitably close to the log-normal (such as the log-t distribution), it is not particularly flexible. The aim of this work is to incorporate ideas from Bayesian nonparametrics to construct a class of suitable non-Gaussian random effects models that can be incorporated into this framework. This will massively increase the class of models for which INLA is available. \section{Conclusion} This article was finished on 15th May, 2011 and all of the information about INLA is correct at this time. This statement is necessary---INLA is still a project in active development.
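The decomposition above can be made concrete numerically. In the sketch below (plain Python; the Gamma shape and rate are invented, and the log-normal ``prior'' is moment-matched to the Gamma, which is one plausible choice rather than the \texttt{r-INLA} implementation), the ``correction'' factor is just a pointwise density ratio:

```python
import math

def gamma_pdf(x, shape, rate):
    """Density of Gamma(shape, rate) at x > 0."""
    return (rate ** shape) * x ** (shape - 1) * math.exp(-rate * x) / math.gamma(shape)

def lognormal_pdf(x, mu, sigma):
    """Density of LogNormal(mu, sigma) at x > 0."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
        / (x * sigma * math.sqrt(2 * math.pi))

# Moment-match the log-normal "prior" to an invented Gamma(2, 2) frailty.
shape, rate = 2.0, 2.0
mean, var = shape / rate, shape / rate ** 2
sigma2 = math.log(1 + var / mean ** 2)
mu = math.log(mean) - sigma2 / 2

# "Correction" weights Gamma(x) / LogNormal(x) on a grid of frailty values.
xs = [0.1 * k for k in range(1, 60)]
weights = [gamma_pdf(x, shape, rate) / lognormal_pdf(x, mu, math.sqrt(sigma2))
           for x in xs]
```

In an importance-sampling view, these ratios are exactly the weights that correct inference performed under the log-normal ``prior'' towards the gamma target.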
By the time you read this, some of the `present' features will have moved into the `past', and the `future' features will be edging ever closer to inclusion. In fact, those who are interested can follow the progress of the INLA project at \texttt{http://inla.googlecode.com}, or by frequently updating the `testing' version of INLA using the command \begin{verbatim} > inla.update(testing=TRUE) \end{verbatim} This `testing' version of INLA updates frequently and includes experimental interfaces to the newest features. This build also has the pleasant feature of matching the documentation on \texttt{http://r-inla.org}! The \texttt{r-INLA} project was created to provide an easy-to-use tool for performing Bayesian inference on latent Gaussian models. As such, the set of problems that \texttt{r-INLA} can solve is limited to those that someone has wanted to solve. There are any number of possible extensions not listed in the `future' section that we are not currently considering because no one has asked for them yet. The lesson here is: \emph{if you want \texttt{r-INLA} to have a particular feature, observation model or prior model, you need to ask us!} The development of the INLA project is driven entirely by the research interests of the development team and the requests that we receive from the user community. \end{document}
\begin{document} \begin{center} \textbf{{\LARGE Spectral types of linear $q$-difference equations and $q$-analog of middle convolution}} \end{center} \begin{center} {\large Hidetaka Sakai and Masashi Yamaguchi \\ Graduate School of Mathematical Sciences, The University of Tokyo, \\ Komaba, Tokyo 153-8914, Japan.} \end{center} \begin{abstract} We give a $q$-analog of the middle convolution for linear $q$-difference equations with rational coefficients. In the differential case, the middle convolution was defined by Katz, who examined its properties in detail. In this paper, we define a $q$-analog of the middle convolution. Moreover, we show that it can also be expressed as a $q$-analog of the Euler transformation. The $q$-middle convolution transforms a Fuchsian type equation into a Fuchsian type equation and preserves the rigidity index of $q$-difference equations.\\ \end{abstract} {\it 2010 Mathematics Subject Classification.} --- 39A13, 33D15. {\it Key words and phrases.} --- $q$-difference equations, Fuchsian equation, Rigidity index, Middle convolution. \section{Introduction} {\ } In this paper, we give a $q$-analog of the middle convolution for linear $q$-difference equations with rational coefficients, and we establish properties of the $q$-middle convolution. Before that, we briefly review the theory of the middle convolution for differential equations. First, we recall the theory of Katz in \cite{K}. He defined the addition and the middle convolution for solutions of differential equations in Schlesinger normal form \begin{equation} \frac{dY}{dx}(x)=A(x)Y(x),\ \ A(x)=\sum _{k=1}^N\frac{A_k}{x-t_k}\ \ (t_k\in \mathbb{C},\ A_k\in \mathrm{M}_m(\mathbb{C})). \end{equation} These operations transform a Fuchsian equation into a Fuchsian equation and preserve the rigidity index of the equation. The rigidity index is an integer related to the number of accessory parameters. Accessory parameters are parameters which are independent of the eigenvalues of $A_k\ (1\le k\le N)$ and $A_{\infty}=-(A_1+\cdots +A_N)$.
If the equation (1) has no accessory parameters, it is called ``rigid". Katz showed that any irreducible rigid Fuchsian equation can be obtained from a certain first-order equation by a finite iteration of additions and middle convolutions. Katz's theorem tells us that there exists an integral representation of the solutions of any irreducible rigid Fuchsian equation, because an addition transforms a solution $Y(x)$ of the equation (1) to \[ \prod _{k=1}^r(x-a_k)^{b_k}\cdot Y(x)\, (a_k,b_k\in \mathbb{C})\] and a middle convolution is an integral transformation of a solution $Y(x)$ of the equation (1). \begin{rem}\normalfont{There are two versions} of the middle convolution defined by Katz: the ``additive version" and the ``multiplicative version". The additive version is a transformation of equations. The multiplicative version is a transformation of solutions. The multiplicative middle convolution induces a transformation of the monodromy representation. In this paper, we treat a version similar to the former, which should be called the ``additive version" $q$-middle convolution. In the $q$-difference case, we think that the connection matrix between two local solutions at the singularities $x=0,\infty$ corresponds to the monodromy of a differential equation. Birkhoff studied the connection matrix $P(x)$ for local solutions $Y_0(x),Y_{\infty}(x)$ at the singularities $x=0,\infty$ of a linear $q$-difference system with polynomial coefficients $Y(qx)=A(x)Y(x)$. Furthermore, Sauloy considered a category of linear $q$-difference systems with rational coefficients, a category of solutions and a category of connection data in \cite{Sa}. He gave a Riemann-Hilbert correspondence for these categories. Based on Sauloy's result, Roques studied the rigidity of connections of linear $q$-difference systems with rational coefficients in \cite{Ro}.\ $\square$ \end{rem} We follow the simpler construction of Dettweiler and Reiter in order to define the $q$-analog of the middle convolution. Let us review a result of Dettweiler and Reiter in \cite{DR1,DR2}.
They express Katz's middle convolution in terms of matrices. The next transformation is called ``convolution" with parameter $\lambda \in \mathbb{C}$: \begin{align} &\frac{dZ}{dx}(x)=G(x)Z(x),\ \ G(x)=\sum _{k=1}^N\frac{G_k}{x-t_k}\ \ (G_k\in \mathrm{M}_{mN}(\mathbb{C})),\\ &G_k=\begin{pmatrix} & & O & & \\ A_1 & \dotsi & A_k+\lambda 1_m & \dotsi & A_N \\ & & O & & \end{pmatrix} (k\, \mathrm{th\ entry})\ \ (1 \le k \le N,\ 1_m=\{ \delta _{i,j}\}_{1\le i,j\le m}\in \mathrm{M}_m(\mathbb{C})). \end{align} Moreover, we define two linear spaces \begin{equation} \mathcal{K}=\begin{pmatrix} \ker A_1 \\ \vdots \\ \ker A_N \end{pmatrix},\ \ \ \ \mathcal{L}=\ker (G_1+\cdots +G_N). \end{equation} Let $\overline{G}_k$ be the matrix induced by the action of $G_k$ on $\mathbb{C}^{mN}/(\mathcal{K}+\mathcal{L})$. We define the middle convolution \[ mc_{\lambda}\, :\ (A_1,\, \ldots ,\, A_N)\longmapsto (\overline{G}_1,\, \ldots ,\, \overline{G}_N).\] We obtain a similar transformation by considering Dettweiler and Reiter's setting in the $q$-difference case. Let \begin{align*} \boldsymbol{B}&={}^t(B_1, \ldots ,B_N,B_{\infty}) \in (\mathrm{M}_m(\mathbb{C}))^{N+1},\\ \boldsymbol{b}&={}^t(b_1, \ldots ,b_N) \in (\mathbb{C}\backslash \{ 0 \} )^N\ \ (b_i=b_j \Rightarrow i=j). \end{align*} We set an equation \begin{equation} E_{\boldsymbol{B},\boldsymbol{b}}\, :\ \sigma _x Y(x)=B(x)Y(x),\ \ \ B(x)=B_{\infty}+\sum _{i=1}^N \frac{B_i}{1-\frac{x}{b_i}}. \end{equation} For an equation $E_{\boldsymbol{B},\boldsymbol{b}}$, we define the $q$-convolution. \begin{defi}\label{conv}$(q$-$\mathrm{convolution})$\ \ Let $\mathcal{E}$ be the set of $E_{\boldsymbol{B},\boldsymbol{b}}$'s.
For $E_{\boldsymbol{B},\boldsymbol{b}} \in \mathcal{E},\lambda \in \mathbb{C}$, we define the $q$-convolution$\, c_{\lambda}:\mathcal{E}\longrightarrow \mathcal{E}\ (E_{\boldsymbol{B},\boldsymbol{b}} \longmapsto E_{\boldsymbol{F},\boldsymbol{b}})$ as \begin{equation} \begin{split} \boldsymbol{F}&=(F_1, \ldots ,F_N,F_{\infty}) \in (\mathrm{M}_{(N+1)m}(\mathbb{C}))^{N+1},\\ F_i&=\begin{pmatrix} & & O & & \\ B_0 & \dotsi & B_i-(1-q^{\lambda})1_m & \dotsi & B_N \\ & & O & & \end{pmatrix} (i+1\, \mathrm{th\ entry}),\ 1 \le i \le N,\\ F_{\infty}\! &=1_{(N+1)m}-\widehat{F},\\ \widehat{F}&=(B_{t-1})_{1 \leq s,t \leq N+1}=\begin{pmatrix} B_0 & \dotsi & B_N \\ \vdots & \ddots & \vdots \\ B_0 & \dotsi & B_N \end{pmatrix},\ B_0=1_m-B_{\infty}-\sum _{j=1}^NB_j. \end{split} \end{equation} \end{defi} Furthermore, we define the $q$-middle convolution. \begin{defi}$(q$-$\mathrm{middle\ convolution})$\ \ Let $\mathcal{V}= \mathbb{C}^m$, and define $\boldsymbol{F}$-invariant subspaces of $\mathcal{V}^{N+1}$ as \begin{equation} \mathcal{K}= \mathcal{K}_{\mathcal{V}}=\bigoplus _{i=0}^N\ker B_i,\ \mathcal{L}=\mathcal{L}_{\mathcal{V}}(\lambda )=\ker (\widehat{F}-(1-q^{\lambda})1_{(N+1)m}). \end{equation} Let $\overline{F}_k$ be the matrix induced by the action of $F_k$ on $\mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L})$, and we define the $q$-middle convolution $mc_{\lambda}$ as $\mathcal{E}\longrightarrow \mathcal{E}\ (E_{\boldsymbol{B},\boldsymbol{b}} \longmapsto E_{\boldsymbol{\overline{F}},\boldsymbol{b}})$. \end{defi} We abbreviate the modules $(\boldsymbol{B},\mathcal{V}),\, (\boldsymbol{F},\mathcal{V}^{N+1}),\, (\boldsymbol{\overline{F}},\mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L}))$ as $\mathcal{V},\, \mathcal{V}^{N+1},\, \mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L})$, respectively.
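As a sanity check on Definition \ref{conv}, the matrices $F_i$, $\widehat F$ and $F_{\infty}$ can be assembled mechanically. The sketch below (plain Python, scalar case $m=1$ with $N=2$; the values of $B_i$, $q$ and $\lambda$ are invented) builds them and only confirms the block pattern:

```python
# q-convolution c_lambda in the scalar case m = 1, N = 2 (invented values).
q, lam = 0.5, 0.3
B1, B2, Binf = 0.2, 0.4, 0.1
B0 = 1.0 - B1 - B2 - Binf            # B_0 = 1_m - sum_j B_j - B_infty
B = [B0, B1, B2]                     # the row (B_0, ..., B_N)
N = 2

def F(i):
    """F_i: zero except for the (i+1)-th block row (B_0,...,B_i-(1-q^lam),...,B_N)."""
    M = [[0.0] * (N + 1) for _ in range(N + 1)]
    M[i] = [B[j] - (1.0 - q ** lam) * (j == i) for j in range(N + 1)]
    return M

F_hat = [B[:] for _ in range(N + 1)]                      # every block row is (B_0,...,B_N)
F_inf = [[(r == c) - F_hat[r][c] for c in range(N + 1)]
         for r in range(N + 1)]                           # F_infty = 1 - F_hat
```

Here each ``block'' is a $1\times 1$ scalar; for general $m$ the same pattern holds with $m\times m$ blocks.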
Moreover, we set \[ c_{\lambda}(\boldsymbol{B})=\boldsymbol{F},\ \ c_{\lambda}(\mathcal{V})=\mathcal{V}^{N+1},\ \ mc_{\lambda}(\boldsymbol{B})=\boldsymbol{\overline{F}},\ \ mc_{\lambda}(\mathcal{V})=\mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L}).\] We have thus defined a $q$-analog of the middle convolution. We can also give an integral representation of the $q$-convolution by a $q$-analog of the Euler transformation. We will describe it in detail in Section 2. Next, we would like to understand the $q$-middle convolution as a transformation of an analog of the Fuchsian equation. From now on, we set $q \in \mathbb{C},\, 0<|q|<1,\, \sigma _x:x \longmapsto qx$. We set a linear $q$-difference equation with polynomial coefficients \begin{equation}\label{ea} E_A\, :\ \sigma _xY(x)=A(x)Y(x),\ \ A(x)=\sum _{k=0}^NA_kx^k\ \ (A_k\in \mathrm{M}_m(\mathbb{C})). \end{equation} Moreover, we let $A_{\infty}=A_N.$ We define ``Fuchsian" $q$-difference equations. \begin{defi}$\mathrm{(Fuchsian\ type\ equation)}$\ \ For an equation $E_A,$ if $A_0,\, A_{\infty}\in \mathrm{GL}_m(\mathbb{C})$, then we call $E_A$ a Fuchsian type $q$-difference equation. \end{defi} Although we cannot apply the $q$-middle convolution to this Fuchsian equation directly, we see that the equation $E_A$ is connected with $E_{\boldsymbol{B},\boldsymbol{b}}$ by simple transformations. We consider an $m\times m$ matrix system $E_R$ with rational coefficients \begin{equation} E_R\, :\ \sigma _xY(x)=R(x)Y(x). \end{equation} As gauge transformations for the solution $Y(x)$ of the equation $E_R$, we consider only two types in this paper. The first one is the transformation \begin{equation} \varphi _P:Y(x)\longmapsto \widetilde{Y}(x)=PY(x)\ \ (P\in \mathrm{GL}_m(\mathbb{C})). \end{equation} The second one is the transformation \begin{equation} \varphi _f:Y(x) \longmapsto \widetilde{Y}(x)=f(x)Y(x), \end{equation} where $f(x)$ is a solution of $\sigma _xf(x)=Q(x)f(x)\, $($Q(x)$ is a scalar rational function).
This function $f(x)$ can be expressed by using the functions \begin{equation*} (ax;q)_{\infty},\ \ \ \ \ \ \vartheta _q (x). \end{equation*} Here we set \begin{align*} (a_1, \ldots ,a_n;q)_0&=1,\\ (a_1, \ldots ,a_n;q)_m&=\prod _{i=1}^n \prod _{j=0}^{m-1}(1-a_iq^j)\, (m \in \mathbb{Z}_{>0}),\\ (a_1, \ldots ,a_n;q)_{\infty }&=\lim_{m \rightarrow \infty}(a_1, \ldots ,a_n;q)_m,\\ \vartheta _q (x)&=\prod _{n=0}^{\infty}(1-q^{n+1})(1+xq^n)(1+x^{-1}q^{n+1}). \end{align*} To be specific, for the solution $Y(x)$ of the equation $E_R$, \begin{align*} &\text{if\ we\ put}\ \widetilde{Y}(x)=\frac{1}{(ax;q)_{\infty}}Y(x),\ \text{then}\ \sigma _x\widetilde{Y}(x)=(1-ax)R(x)\widetilde{Y}(x);\\ &\text{if\ we\ put}\ \widetilde{Y}(x)= \frac{1}{\vartheta _q (x)}Y(x),\ \text{then}\ \sigma _x\widetilde{Y}(x)=xR(x)\widetilde{Y}(x);\\ &\text{if\ we\ put}\ \widetilde{Y}(x)= \frac{\vartheta _q (x)}{\vartheta _q (ax)}Y(x)\, (a\in \mathbb{C}\backslash \{ 0 \} ),\ \text{then}\ \sigma _x\widetilde{Y}(x)=aR(x)\widetilde{Y}(x). \end{align*} We define a family of equations modulo $\varphi _P$ and $\varphi _f$. We interpret the $q$-middle convolution as a transformation of this family of equations. From an arbitrary equation $E_R$, we obtain $\Tilde{E}_R$: \begin{align} \Tilde{E}_R\, :\ \sigma _x\widetilde{Y}(x)&=A(x)\widetilde{Y}(x),\\ A(x)&=\sum_{k=0}^NA_kx^k\, (A_k\in \mathrm{M}_m(\mathbb{C}),\ A_0,A_N \ne 0,\ \forall a \in \mathbb{C}\, ;\, A(a)\ne 0), \end{align} which is determined up to multiplication by a constant and similarity transformations $\varphi _P$. We call $\Tilde{E}_R$ the canonical form of the equation $E_R$.
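The functional equations underlying these gauge factors can be checked numerically with truncated products. The sketch below (plain Python; the truncation depth and the sample values of $q$, $a$, $x$ are arbitrary) verifies $(ax;q)_{\infty}=(1-ax)(aqx;q)_{\infty}$ and $\vartheta _q(qx)=\vartheta _q(x)/x$:

```python
# Truncated products for (z; q)_infty and theta_q(z); with |q| < 1,
# 200 factors are far beyond double precision.
q, a, x = 0.5, 0.3, 0.7
TERMS = 200

def qpoch(z):
    """(z; q)_infty = prod_{n >= 0} (1 - z q^n), truncated."""
    p = 1.0
    for n in range(TERMS):
        p *= 1.0 - z * q ** n
    return p

def theta(z):
    """theta_q(z) = prod_{n >= 0} (1 - q^{n+1})(1 + z q^n)(1 + z^{-1} q^{n+1})."""
    p = 1.0
    for n in range(TERMS):
        p *= (1.0 - q ** (n + 1)) * (1.0 + z * q ** n) * (1.0 + q ** (n + 1) / z)
    return p
```

The first identity is what makes $1/(ax;q)_{\infty}$ produce the factor $(1-ax)$ under $\sigma _x$, and the second gives the factors $x$ and $a$ used above.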
In the general case, for the canonical form $\sigma _x\widetilde{Y}(x)=A(x)\widetilde{Y}(x)$ of $E_{\boldsymbol{B},\boldsymbol{b}}$, we obtain \begin{align} &A(x)=T(x)B(x),\ \ T(x)=\prod _{i=1}^N \Bigl( 1-\frac{x}{b_i}\Bigr) ,\\ &\label{rel1}A_0=1_m-B_0,\ A_{\infty}=b_{\infty}B_{\infty},\ B_0=1_m-\sum _{i=1}^NB_i-B_{\infty},\ b_{\infty}=\prod _{i=1}^N(-b_i^{-1}), \\ &\mathrm{rank}B_i = \begin{cases} m-n_1^k & (b_i=a_k\in Z_R)\\ m & (b_i\notin Z_R) \end{cases}\ (1 \le i \le N,\ n_1^k=\mathrm{dim\, ker}A(a_k)). \end{align} \begin{rem} \normalfont{The }definition of the Fuchsian type equation may not be appropriate. We look at Heine's $q$-hypergeometric function \begin{equation} {}_2\varphi _1(\alpha ,\beta ,\gamma ;q;x)=\sum _{n=0}^{\infty}\frac{(\alpha ;q)_n(\beta ;q)_n}{(q;q)_n(\gamma ;q)_n}x^n. \end{equation} Here $u(x)={}_2\varphi _1(\alpha ,\beta ,\gamma ;q;x)$ satisfies the equation \begin{equation} \{ (1-\sigma _x)(1-q^{-1}\gamma \sigma _x)-x(1-\alpha \sigma _x)(1-\beta \sigma _x) \} u(x)=0. \end{equation} If we set $\displaystyle{v(x)=\frac{1}{x}\sigma _xu(x)}$ and $\displaystyle{Y(x)=\left( \begin{smallmatrix} u(x) \\ v(x) \end{smallmatrix} \right) ,}$ then we obtain \begin{equation}\label{fu} \sigma _xY(x)=\frac{1}{x(q\alpha \beta x-\gamma )}\begin{pmatrix} 0 & x^2(q\alpha \beta x-\gamma )\\ -x+1 & x\{ (\alpha +\beta )x-q^{-1}\gamma -1\} \end{pmatrix}Y(x). \end{equation} Although this is not a Fuchsian $q$-difference equation in our sense, it is transformed into a Fuchsian type equation by a simple transformation: \begin{equation} Y(x)\longmapsto \Tilde{Y}(x)=\begin{pmatrix} 1 & 0\\ 1 & -x \end{pmatrix}Y(x)=\begin{pmatrix} u(x)\\ (1-\sigma _x)u(x)\end{pmatrix}.
\end{equation} $\Tilde{Y}(x)$ satisfies the Fuchsian $q$-difference equation \begin{equation}\label{he} \sigma _x\Tilde{Y}(x)=\frac{1}{\alpha \beta x-q^{-1}\gamma}\begin{pmatrix} \alpha \beta x-q^{-1}\gamma & -\alpha \beta x+q^{-1}\gamma \\ (1-\alpha )(1-\beta )x & (\alpha +\beta -\alpha \beta )x-1 \end{pmatrix}\Tilde{Y}(x). \end{equation} Although we do not introduce such transformations here, Sauloy used a transformation by a matrix with rational components as a gauge transformation in \cite{Sa}. We think that our Fuchsian $q$-difference equation corresponds to the Schlesinger normal form in the differential case. Although we do not call the equation (\ref{fu}) Fuchsian type, perhaps we should. On the other hand, in the differential case, there exist Fuchsian differential equations which cannot be written in the Schlesinger normal form. We denote by $y_i(x)\, (i=1,2)$ the components of a solution $Y(x)$ of an equation \begin{equation}\label{fu2} \frac{dY}{dx}(x)=R(x)Y(x)\ \ \ \ (R(x)\ \mathrm{is\ a\ rational\ function}). \end{equation} If the singularities of $y_i(x)$ are at most regular singularities, we call the equation (\ref{fu2}) a Fuchsian differential equation. A regular singularity is defined from local properties of the solution. In more detail, if a function $y(x)$ is not holomorphic at $x=x_0$ and for any $\underline{\theta},\overline{\theta}\, (\underline{\theta}<\overline{\theta}) ,$ there exists $n_0\in \mathbb{Z}_{>0}$ such that \[ \lim _{\underline{\theta}<\arg (x-x_0)<\overline{\theta},\, x\rightarrow x_0}|x-x_0|^{n_0}|y(x)|=0,\] we call $x=x_0$ a regular singularity of $y(x)$. Here we consider the equation of Schlesinger normal form \[ \frac{dY}{dx}(x)=\left( \sum _{i=1}^N\frac{A_i}{x-a_i}\right) Y(x)\ \ \ \ (a_i\in \mathbb{C},\, A_i\in \mathrm{M}_m(\mathbb{C})),\] that is, a special case of the Fuchsian differential equation.\ $\square$ \end{rem} We may regard our Fuchsian type equation as genuinely Fuchsian because Carmichael's theorem has been established in \cite{Ca}.
\begin{theo}$\mathrm{(Carmichael)}$\ \ Let $\alpha _j^{\xi}\ (1\le j\le m,\, \xi =0,\infty )$ be the eigenvalues of $A_{\xi}\in \mathrm{GL}_m(\mathbb{C})$; we assume further that $A_{\xi}$ are semi-simple and \[ \frac{\alpha _j^{\xi}}{\alpha _k^{\xi}} \not\in q^{\mathbb{Z}_{>0}} \qquad (\forall j, \forall k).\] Then, there exist unique solutions $Y_{\xi}(x)$ of the equation (\ref{ea}) with the following properties, \begin{eqnarray} Y_0(x)&=&\widehat{Y}_0(x)x^{D_0},\\ Y_{\infty}(x)&=&q^{\frac{N}{2}u(u-1)}\widehat{Y}_{\infty}(x)x^{D_{\infty}}, \qquad u=\frac{\log x}{\log q}. \end{eqnarray} Here $\widehat{Y}_{\xi}(x)$ is a holomorphic and invertible matrix at $x=\xi$ such that $\widehat{Y}_{\xi}(\xi )=C_{\xi}\in \mathrm{GL}_m(\mathbb{C})$ and $A_{\xi}=C_{\xi}q^{D_{\xi}}C_{\xi}^{-1},\, D_{\xi}=\mathrm{diag}(\log \alpha _j^{\xi}/\log q).$ \end{theo} \begin{rem} \normalfont{The }functions used in the above theorem \begin{equation} x^{\log \theta /\log q},\ \ \ \ q^{u(u-1)/2}\ \ (u=\log x/\log q) \end{equation} are solutions of the following equations, respectively, \begin{equation} \sigma _xf(x)=\theta f(x),\ \ \ \ \sigma _xf(x)=xf(x). \end{equation} Hence, instead of these functions, we can use the following single-valued functions as solutions of the above equations, \begin{equation} \frac{\vartheta _q (x)}{\vartheta _q (\theta x)},\ \ \frac{1}{\vartheta _q (x)}. \end{equation} These functions have been widely used in recent years, and we use them in this paper.\ $\square$ \end{rem} The purpose of this study is to examine properties of the $q$-middle convolution. Let us describe the contents of this paper. In Section 2, we show that the $q$-convolution can be expressed by a $q$-analog of the Euler transformation. In Section 3, we define the spectral type and the rigidity index for the equation $E_R$. Spectral types are defined by the sizes of the Jordan cells of $A_0,\, A_{\infty}$ and the types of the elementary divisors of $A(x)$.
Notice that the rigidity index is determined not only by the data of the $B_k$ in $B(x)$, but also by the data of the elementary divisors of the coefficient polynomial $A(x)$ of the canonical form $\Tilde{E}_R$. In Section 4, we prove the three main theorems. \begin{theo}\label{Fuchs}$(\mathrm{Fuchsian\ type\ equation})$\ \ If the equation $E_R$ is a Fuchsian type equation, then $mc_{\lambda}(E_R)$ is also a Fuchsian type equation. \end{theo} Here we assume the following conditions $(\ast ),(\ast \ast)$, after the manner of Dettweiler and Reiter in \cite{DR1}. (These conditions are generally satisfied if $\mathrm{dim}\mathcal{V}=1$, or if $\mathrm{dim}\mathcal{V}>1$ and $\boldsymbol{B}$ is irreducible.) \begin{defi}\label{ast}\ \ We define the conditions $(\ast ),(\ast \ast)${\rm :} \begin{align*} (\ast )\, :\ &\forall i \in \{ 0, \ldots ,N \} , \forall \tau \in \mathbb{C}\ ;\ \ \bigcap _{j \neq i}\ker B_j\! \cap \! \ker (B_i\! +\! \tau 1_m)\! =\! 0,\\ (\ast \ast )\, :\ &\forall i \in \{ 0, \ldots ,N \} , \forall \tau \in \mathbb{C}\ ;\ \ \sum _{j \neq i}\mathrm{im}B_j\! +\! \mathrm{im}(B_i\! +\! \tau 1_m)\! =\! \mathcal{V}. \end{align*} \end{defi} \begin{theo}\label{irre}$(\mathrm{irreducibility})$\ \ If $(\ast ),(\ast \ast )$ are satisfied, then $\mathcal{V}$ is irreducible if and only if $mc_{\lambda}(\mathcal{V})$ is irreducible. \end{theo} \begin{theo}\label{index}$(\mathrm{rigidity\ index})$\ \ If $(\ast ) ,(\ast \ast )$ are satisfied, then $mc_{\lambda}$ preserves the rigidity index of the Fuchsian equation $E_R$. \end{theo} To prove these theorems, we do not need the following conditions from Theorem 1.6: \[ A_0, A_{\infty}\, :\, \mbox{semi-simple},\ \ \ \ \frac{\theta _j}{\theta _k}, \frac{\kappa _j}{\kappa _k} \not\in q^{\mathbb{Z}_{>0}}\ (\theta _i, \kappa _i:\, \mathrm{eigenvalues\ of}\ A_0,\, A_{\infty}\, \mathrm{respectively}).\] We will explain the ``rigidity index'' in Section 3. It is defined by the ``spectral type'' of the Fuchsian equation $E_R$.
\section{Integral representation of $q$-convolution} We gave a $q$-analog of the convolution as a transformation of $q$-difference equations. We can also give an integral representation of the ``$q$-convolution'' by a $q$-analog of the Euler transformation. In this section, we show \begin{theo}\ \ For\ the\ solution\ $Y(x)$\ of\ the\ equation\ $E_{\boldsymbol{B},\boldsymbol{b}}$,\ define\ $\widehat{Y}(x)={}^t({}^t\widehat{Y}_0(x), \ldots ,{}^t\widehat{Y}_N(x))$\ by \begin{equation} \widehat{Y}_i(x)=\int _0^{\infty}\frac{P_{\lambda}(x,s)}{s-b_i}Y(s)\ d_qs,\ b_0\! =0,\ P_{\lambda}(x,s)=\frac{(q^{\lambda +1}sx^{-1};q)_{\infty}}{(qsx^{-1};q)_{\infty}}. \end{equation} Then,\ $\widehat{Y}(x)$\ is\ a\ solution\ of\ the\ equation\ $E_{\boldsymbol{F},\boldsymbol{b}}${\rm (}see Definition \ref{conv}{\rm )}. Here the Jackson integral is defined by \[ \int _0^{\infty}f(x)\, d_qx=(1-q)\sum _{n=-\infty}^{\infty}q^nf(q^n).\] \end{theo} $Proof.$\ \ $P_{\lambda}(x,s)$ \rm{is a solution of the partial difference equations} \begin{equation*} (\sigma _x-\sigma _s^{-1})y(x,s)=0,\ \sigma _xy(x,s)=\frac{1-q^{\lambda}sx^{-1}}{1-sx^{-1}}y(x,s). \end{equation*} Hence $P_{\lambda}(x,s)$ is a solution of \begin{equation*} \frac{\sigma _xP_{\lambda}(x,s)}{s-b_i}=\frac{x-q^{\lambda}b_i}{x-b_i}\frac{P_{\lambda}(x,s)}{s-b_i}+\frac{x}{x-b_i}\frac{\sigma _s^{-1}-1}{s}P_{\lambda}(x,s). \end{equation*} Moreover, this function is independent of $b_i\in \mathbb{C}$. Multiplying by $Y(s)$ and computing the Jackson integral, we obtain \begin{equation*} \sigma _x\widehat{Y}_i(x)=\left\{ 1+\frac{(1-q^{\lambda})b_i}{x-b_i} \right\} \widehat{Y}_i(x)+\frac{x}{x-b_i}\int _0^{\infty}\frac{\sigma _s^{-1}-1}{s}P_{\lambda}(x,s)Y(s)\ d_qs.
\end{equation*} Meanwhile, we obtain \begin{align*} \int _0^{\infty}& \frac{\sigma _s^{-1}-1}{s}P_{\lambda}(x,s)\cdot Y(s)\ d_qs\\ &=\int _0^{\infty}P_{\lambda}(x,s)\frac{1}{s}\{ \sigma _s Y(s)-Y(s) \} \ d_qs\\ &=\int _0^{\infty}P_{\lambda}(x,s)\frac{1}{s}\biggl( B_{\infty}+\sum _{j=1}^N \frac{B_j}{1-\frac{s}{b_j}}-1_m \biggr) Y(s)\ d_qs\\ &=\int _0^{\infty}P_{\lambda}(x,s)\biggl\{ \frac{1}{s}\biggl( B_{\infty}+\sum _{j=1}^NB_j-1_m\biggr) -\sum _{j=1}^N\frac{1}{s-b_j}B_j\biggr\} Y(s)\ d_qs\\ &=-\int _0^{\infty}P_{\lambda}(x,s)\sum _{j=0}^N\frac{1}{s-b_j}B_j\cdot Y(s)\ d_qs\ \biggl( b_0=0,\ B_0=1_m-\sum _{i=1}^NB_i-B_{\infty}\biggr) \\ &=-\sum _{j=0}^NB_j\int _0^{\infty}\frac{P_{\lambda}(x,s)}{s-b_j}Y(s)\ d_qs\\ &=-\sum _{j=0}^NB_j\widehat{Y}_j(x). \end{align*} Here $\widehat{Y}_i(x)$ satisfies \begin{align*} \sigma _x\widehat{Y}_i(x)&=\left\{ 1+\frac{(1-q^{\lambda})b_i}{x-b_i} \right\} \widehat{Y}_i(x)-\frac{x}{x-b_i}\sum _{j=0}^NB_j\widehat{Y}_j(x)\\ &=\widehat{Y}_i(x)-\sum _{j=0}^NB_j\widehat{Y}_j(x)+\frac{1}{1-\frac{x}{b_i}}\biggl\{ -(1-q^{\lambda})\widehat{Y}_i(x)+\sum _{j=0}^NB_j\widehat{Y}_j(x) \biggr\} . \end{align*} Therefore, $\widehat{Y}(x)$ is a solution of the equation $E_{\boldsymbol{F},\boldsymbol{b}}$.\ $\square$ We have thus proved that the $q$-convolution can be expressed by a $q$-analog of the Euler transformation. \section{Rigidity index of $q$-difference equations} In this section, we define the spectral type and the rigidity index of the equation $E_R$. We write $A(x)=\sum _{k=0}^NA_kx^k$ for the coefficient of the canonical form of a Fuchsian equation $E_R$. \begin{defi} Let\ $A_{\xi} \sim \bigoplus _{i=1}^{l_{\xi}}\bigoplus _{j=1}^{s_i^{\xi}} J(\alpha _i^{\xi},t_{i,\, j}^{\xi})\ (J(\alpha ,t):\, Jordan\ cell,\ t_{i,\, j+1}^{\xi} \le t_{i,\, j}^{\xi}).$ Moreover,\ let\ $\{ m_{i,\, k}^{\xi}\} _k$\ denote\ the\ conjugate\ of\ $\{ t_{i,\, j}^{\xi}\} _j$\ as\ Young\ diagrams.
We\ call \[ S_{\xi}:m_{1,1}^{\xi} \ldots m_{1,{t_{1,1}^{\xi}}}^{\xi}, \ldots ,m_{{l_{\xi}},1}^{\xi} \ldots m_{{l_{\xi}},{t_{{l_{\xi}},1}^{\xi}}}^{\xi}\] the\ spectral\ type\ of\ $A_{\xi}.$ \end{defi} \begin{defi} Let\ $Z_{A}=\{a \in \mathbb{C}\, ;\, \mathrm{det}\, A(a)=0 \}$\ and\ denote\ by\ $d_i\, (1\le i\le m)$ the elementary\ divisors\ of\ $A(x)\, (d_{i+1}|d_i)$. For\ any\ $a_i\in Z_A$,\ we\ denote\ by\ $\{ \Tilde{n}_k^i\} _k$\ the\ orders\ of\ the\ zero\ $a_i$\ in\ $\{ d_k\} _k$. We denote by $\{ n_j^i\} _j$\ the\ conjugate\ of\ $\{ \Tilde{n}_k^i\} _k$.\ We\ call \[ S_{\mathrm{div}}:n_1^1 \ldots n_{k_1}^1, \ldots ,n_1^l \ldots n_{k_l}^l\] the\ spectral\ type\ of\ $A(x)$. \end{defi} \begin{defi} We\ call\ $S(E_R)=(S_0;S_{\infty};S_{\mathrm{div}})$\ the\ spectral\ type\ of\ $E_R.$ \end{defi} From the above, we define the rigidity index. \begin{defi} We\ define\ the\ rigidity\ index\ $\mathrm{idx}(E_R)$\ of\ the\ equation\ $E_R$\ as \begin{equation} \mathrm{idx}(E_R)=\ \sum_{\xi =0,\infty}\sum_{i=1}^{l_{\xi}}\sum_{j=1}^{t_{i,1}^{\xi}}(m_{i,\, j}^{\xi})^2+\sum _{i=1}^l\sum_{j=1}^{k_i}(n_j^i)^2-m^2N. \end{equation} \end{defi} For example, we consider \begin{align*} &E_1\, :\ \sigma _xY(x)=A(x)Y(x),\ \ \ A(x)=A_0+A_1x+A_{\infty}x^2,\\ &A_0\sim J(\alpha _1^0,2)\oplus J(\alpha _1^0,1)^{\oplus 2}\oplus J(\alpha _2^0,1),\ A_{\infty}\sim J(\alpha _1^{\infty},1)^{\oplus 3}\oplus J(\alpha _2^{\infty},1)\oplus J(\alpha _3^{\infty},1),\\ &A(x)\sim \mathrm{diag}((x-a_1)(x-a_2)^2(x-a_3)(x-a_4),\, (x-a_1)(x-a_2),\, (x-a_1)(x-a_2),\, x-a_1,1)\\ &\ (\alpha _i^0\ne \alpha _j^0,\, \alpha _i^{\infty}\ne \alpha _j^{\infty},\, a_i\ne a_j\, (i\ne j)). \end{align*} The spectral type and rigidity index of the equation $E_1$ are \[ S(E_1)\, :\ 31,\! 1;3,\! 1,\! 1;4,\! 31,\! 1,\!
1,\ \ \ \ \mathrm{idx}(E_1)=0.\] \begin{rem} \normalfont{We}\ can\ also\ express\ the\ rigidity\ index\ $\mathrm{idx}(E_R)$\ of\ the\ equation\ $E_R$\ as \begin{equation} \mathrm{idx}(E_R)=\mathrm{dim}Z(A_0)+\mathrm{dim}Z(A_{\infty})+\sum _{i=1}^l\sum_{j=1}^{k_i}(n_j^i)^2-m^2N. \end{equation} Here, we let $Z(A)=\{ X\in \mathrm{GL}_m(\mathbb{C})\, ;\, AX=XA\} \, (A\in \mathrm{M}_m(\mathbb{C}))$.\ $\square$ \end{rem} We can easily check the following facts. \begin{prop} \begin{align*} \mathrm{(i)}\ \ &\sum_{i=1}^{l_{\xi}}\sum_{j=1}^{t_{i,1}^{\xi}}m_{i,\, j}^{\xi}=m,\ \ \sum _{i=1}^l\sum_{j=1}^{k_i}n_j^i=Nm.\\ \mathrm{(ii)}\ \ &n_i=\textstyle{\sum_{j=1}^{k_i}}n_j^i\, is\ the\ multiplicity\ of\ the\ zero\ a_i\in Z_A\ of\ \mathrm{det}A(x).\\ \mathrm{(iii)}\ \ &\mathop{\rm idx} (E_R)\ is\ an\ even\ number. \end{align*} \end{prop} Having defined the $q$-analogs of the spectral type and rigidity index, let us look at some examples. First, we consider Heine's $q$-hypergeometric equation $E_2\,$:\ (\ref{he}). It is easy to confirm that the equation $E_2$ generically has the following data: \begin{equation} S(E_2)\, :\ 1,\! 1;1,\! 1;1,\! 1,\ \ \ \ \mathrm{idx}(E_2)=2\rm{.} \end{equation} Moreover, we consider the generalized $q$-hypergeometric equation \begin{align} &E_3\, :\ \sigma _xY(x)=A(x)Y(x),\ \ A(x)=\begin{pmatrix} 0 & f_0& & \\ & \ddots & \ddots & \\ & & 0 & f_0\\ -f_m & \cdots &-f_2 & -f_1 \end{pmatrix},\\ &\ f_0\sigma _x^m+f_1\sigma _x^{m-1}+\cdots +f_m=\prod _{k=1}^m\left( \frac{b_k}{q}\sigma _x-1\right) -\lambda x\prod _{k=1}^m(a_k\sigma _x-1)\ (a_k,b_k,\lambda \in \mathbb{C}^{\ast}). \end{align} We set $A(x)=A_0+A_{\infty}x\ (A_k\in \mathrm{M}_m(\mathbb{C}))$.
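The index computations in the examples above reduce to sums of squares over the spectral-type partitions, so they can be checked mechanically. A short sketch (plain Python; the partitions are transcribed from $S(E_1)$ and $S(E_2)$ above):

```python
# idx(E) = (sum of squares over S_0, S_infty, S_div) - m^2 N,
# with each spectral type given as a list of partitions.

def idx(S0, Sinf, Sdiv, m, N):
    sq = lambda groups: sum(v * v for part in groups for v in part)
    return sq(S0) + sq(Sinf) + sq(Sdiv) - m * m * N

# E_1:  S(E_1) : 31,1 ; 3,1,1 ; 4,31,1,1   (m = 5, N = 2)
idx_E1 = idx([[3, 1], [1]], [[3], [1], [1]], [[4], [3, 1], [1], [1]], 5, 2)

# E_2 (Heine):  S(E_2) : 1,1 ; 1,1 ; 1,1   (m = 2, N = 1)
idx_E2 = idx([[1], [1]], [[1], [1]], [[1], [1]], 2, 1)
```

This reproduces $\mathrm{idx}(E_1)=0$ and $\mathrm{idx}(E_2)=2$.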
We obtain the data of the equation $E_3$ as \begin{align} &\mathrm{Ev}(A_0)=\left\{ \frac{q}{b_1},\, \ldots ,\, \frac{q}{b_m}\right\} ,\ \ \mathrm{Ev}(A_{\infty})=\left\{ \frac{1}{a_1},\, \ldots ,\, \frac{1}{a_m}\right\} ,\\ &\mathrm{zeros\ of\ det}A(x)\ \mathrm{are}\ \frac{1}{\lambda}\ \mathrm{and}\ \frac{1}{\lambda}\prod _{k=1}^m\frac{b_k}{qa_k}\ (\mathrm{multiplicity}: m-1). \end{align} Here we denote by $\mathrm{Ev}(A_{\xi})\, (\xi =0,\infty )$ the set of eigenvalues of $A_{\xi}$. Therefore, we generically obtain the rigidity index of the equation $E_3$ as \[ \mathrm{idx}(E_3)=1^2\times m+1^2\times m+1^2+(m-1)^2-1\times m^2=2.\] \begin{rem} \normalfont{In the general case, since} we can also express the Fuchsian equation $E_A\, :\, \sigma _x^{-1}Y(x)=A(q^{-1}x)^{-1}Y(x),$ we expect $\mathrm{idx}(E_{A^{-1}})=\mathrm{idx}(E_A)$. Let us check this fact. We put \[ \Tilde{A}(x)=\mathrm{det}A(x)\, A(x)^{-1}=\sum _{k=0}^{N(m-1)}\Tilde{A}_kx^k,\ \ \ \Tilde{A}_{\infty}=\Tilde{A}_{N(m-1)},\] then we get \[ A_0\Tilde{A}_0=1_m,\ \ \ \ A_{\infty}\Tilde{A}_{\infty}=\kappa 1_m\ (\kappa \in \mathbb{C}\backslash \{ 0\} ).\] Moreover, the spectral type $S(E_{A^{-1}})=(\Tilde{S}_0;\Tilde{S}_{\infty};\Tilde{S}_{\mathrm{div}})$ satisfies $\Tilde{S}_0=S_0,\, \Tilde{S}_{\infty}=S_{\infty}$ and \[ \Tilde{S}_{\mathrm{div}}\, :\, \underbrace{m\ldots m}_{n_1-k_1}m-n^1_{k_1}\ldots m-n^1_1,\, \ldots ,\, \underbrace{m\ldots m}_{n_l-k_l}m-n^l_{k_l}\ldots m-n^l_1\] because $\Tilde{A}(x)\sim \mathrm{det}A(x)\, \mathrm{diag}(d_i^{-1}).$ Therefore, we obtain \begin{align*} \mathrm{idx}(E_{A^{-1}})&=\mathrm{dim}Z(\Tilde{A}_0)+\mathrm{dim}Z(\Tilde{A}_{\infty})+\sum _{i=1}^l\left\{ m^2(n_i-k_i)+\sum _{j=1}^{k_i}(m-n_j^i)^2\right\} -N(m-1)m^2\\ &=\mathrm{dim}Z(A_0)+\mathrm{dim}Z(A_{\infty})+\sum _{i=1}^l\left\{ m^2n_i-2m\sum _{j=1}^{k_i}n_j^i+\sum _{j=1}^{k_i}(n_j^i)^2\right\} -N(m-1)m^2\\ &=\mathrm{dim}Z(A_0)+\mathrm{dim}Z(A_{\infty})+(m^2-2m)\cdot Nm+\sum _{i=1}^l\sum _{j=1}^{k_i}(n_j^i)^2-N(m-1)m^2\\
&=\mathrm{dim}Z(A_0)+\mathrm{dim}Z(A_{\infty})+\sum _{i=1}^l\sum _{j=1}^{k_i}(n_j^i)^2-Nm^2\\ &=\mathrm{idx}(E_A).\ \square \end{align*} \end{rem} In the next section, we study in detail how these data are changed by the $q$-middle convolution. \section{Properties of $q$-middle convolution} In this section, we prove the following three theorems. \\ \\ \textbf{Theorem \ref{Fuchs}}\ (Fuchsian type equation)\ \ \it{If the equation $E_R$ is of Fuchsian type, then $mc_{\lambda}(E_R)$ is also of Fuchsian type}\rm{.} \\ \\ \rm{\textbf{Theorem \ref{irre}}\ (irreducibility)}\ \ \it{If $(\ast ),(\ast \ast )$ are satisfied, then $\mathcal{V}$ is irreducible if and only if $mc_{\lambda}(\mathcal{V})$ is irreducible}\rm{.}\\ \\ \rm{\textbf{Theorem \ref{index}}\ (rigidity index)}\ \ \it{If $(\ast ) ,(\ast \ast )$ are satisfied, then $mc_{\lambda}$ preserves the rigidity index of the Fuchsian equation $E_R$}\rm{.}\\ For the conditions $(\ast ),(\ast \ast )$, see Definition \ref{ast}. Theorem \ref{Fuchs} is proved easily by examining the coefficient polynomial of the canonical form of $c_{\lambda}(\Tilde{E}_R)$. Although many preparations are necessary to prove Theorem \ref{irre}, the outline is the same as the method of Dettweiler and Reiter in \cite{DR1}. Finally, Theorem \ref{index} is proved by investigating in detail the change of the spectral type of the equation $E_R$. \subsection{Proof of Theorem \ref{Fuchs}.} Here we prove the next theorem.\\ \\ \textbf{Theorem \ref{Fuchs}}\ (Fuchsian type equation)\ \ \it{If the equation $E_R$ is of Fuchsian type, then $mc_{\lambda}(E_R)$ is also of Fuchsian type}\rm{.}\\ $Proof.$\ \ We write the coefficients $A(x)=\sum _{k=0}^NA_kx^k\, (A_0,A_{\infty}\in \mathrm{GL}_m(\mathbb{C})),\ G(x)=\sum _{k=0}^NG_kx^k$ of the canonical forms of $E_{\boldsymbol{B},\boldsymbol{b}},\, E_{\boldsymbol{F},\boldsymbol{b}}\, (\boldsymbol{F}=c_{\lambda}(\boldsymbol{B}))$.
From the relations (\ref{rel1}): \begin{equation*} A_0=1_m-B_0,\ A_{\infty}=b_{\infty}B_{\infty},\ B_0=1_m-\sum _{i=1}^NB_i-B_{\infty},\ b_{\infty}=\prod _{i=1}^N(-b_i^{-1})\ne 0, \end{equation*} we obtain $B_0-1_m,B_{\infty}\in \mathrm{GL}_m(\mathbb{C})$. For any $v={}^t({}^tv_0, \ldots ,{}^tv_N)\in \ker F_{\infty}\, (v_k\in \mathcal{V}),$ we get $G_{\infty}\in \mathrm{GL}_{(N+1)m}(\mathbb{C})$ because \[ 0=G_{\infty}v=b_{\infty}F_{\infty}v=b_{\infty}{}^t({}^t(B_{\infty}s), \ldots ,{}^t(B_{\infty}s))\, (s=\textstyle{\sum _{i=0}^N} B_iv_i).\] Meanwhile, for any $v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \ker G_0, $ since \[ 0=G_0v=(1_{(N+1)m}-F_0)v=(\textstyle{\sum _{i=1}^N}F_i+F_{\infty})v=(\textstyle{\sum _{i=1}^N}F_i+1_{(N+1)m}-\widehat{F})v,\] we obtain $v=0.$ Hence $G_0\in \mathrm{GL}_{(N+1)m}(\mathbb{C})$. Therefore, $mc_{\lambda}(E_R)$ is an equation of Fuchsian type.\ $\square$\\ \subsection{Proof of Theorem \ref{irre}.} Here we derive a dimension formula for the $q$-middle convolution. Moreover, we prove that the $q$-middle convolution preserves the irreducibility of the equation. The outline is the same as the calculations of Dettweiler and Reiter in \cite{DR1}. First, the linear spaces $\mathcal{K},\mathcal{L}$ satisfy the next proposition. \begin{prop}\ \ $\mathcal{K},\mathcal{L}$ are $\boldsymbol{F}$-invariant subspaces of $\mathcal{V}^{N+1}.$ \end{prop} $Proof.$\ \ (i)\ Let $J=\{ 1, \ldots ,N \}.$ For any $v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \mathcal{K}\, (v_k\in \mathrm{ker}B_k)$, we get \[ F_jv={}^t(0, \ldots ,\stackrel{j+1}{\stackrel{\vee}{(q^{\lambda}-1){}^tv_j}} , \ldots , 0)\in \mathcal{K}\ \ (j \in J).\] Hence $F_j\mathcal{K}$ is a subspace of $\mathcal{K}$. Meanwhile, $F_{\infty}\mathcal{K}$ is a subspace of $\mathcal{K}$ because, for any $v \in \mathcal{K}$, we obtain $F_{\infty}v=(1_{(N+1)m}-\widehat{F})v=v \in \mathcal{K}$.
Therefore, $\mathcal{K}$ is an $\boldsymbol{F}$-invariant subspace of $\mathcal{V}^{N+1}.$ (ii)\ Let \[ 1_{m,k}=\{ \delta _{i,k+1}\delta _{j,k+1}1_m\} _{1\le i,j\le N+1}=\mathrm{diag}(0, \ldots ,\stackrel{k+1}{\stackrel{\vee}{1_m}} , \ldots ,0).\] For any $v \in \mathcal{L}$, we get \[ (\widehat{F}-(1-q^{\lambda})1_{(N+1)m})F_jv=(\widehat{F}-(1-q^{\lambda})1_{(N+1)m})1_{m,j}(\widehat{F}-(1-q^{\lambda})1_{(N+1)m})v=0\ \ (j \in J).\] Hence $F_j\mathcal{L}$ is a subspace of $\mathcal{L}$. Moreover, $F_{\infty}\mathcal{L}$ is a subspace of $\mathcal{L}$ because for any $v \in \mathcal{L}$, we obtain \[ (\widehat{F}-(1-q^{\lambda})1_{(N+1)m})F_{\infty}v=(\widehat{F}-(1-q^{\lambda})1_{(N+1)m})(1_{(N+1)m}-\widehat{F})v=0.\] Therefore, $\mathcal{L}$ is an $\boldsymbol{F}$-invariant subspace of $\mathcal{V}^{N+1}.$\ $\square$ \\ The next facts are important as the ``dimension formula''. \begin{prop}{\ }\\ $\mathrm{(i)}$\ If $\lambda = 0,$ then $\mathcal{K}$ is a subspace of $\mathcal{L}$, and \begin{equation} \mathcal{L}= \{ {}^t({}^tv_0, \ldots ,{}^tv_N) ; \textstyle{\sum _{j=0}^N} B_jv_j = 0 \} . \end{equation} $\mathrm{(ii)}$\ If $\lambda \neq 0,$ then $\mathcal{K}\cap \mathcal{L}=0, \mathcal{L}= \{ {}^t({}^th, \ldots ,{}^th) ;h \! \in \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m) \}$ and \begin{equation} \dim (mc_{\lambda}(\mathcal{V}))=(N+1)m-\sum _{i=1}^N \dim \ker B_i-\dim \ker (A_0-1_m)-\dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m). \end{equation} \end{prop} $Proof.$\ \ (i)\ If $\lambda =0$, then $\mathcal{L}=\ker \widehat{F}.$ Here for any $v \in \mathcal{K},$ we obtain $\widehat{F}v=0$. Hence $v \in \mathcal{L}.$ Moreover, we obtain $\mathcal{L}= \{ {}^t({}^tv_0, \ldots ,{}^tv_N) ; \sum _{j=0}^N B_jv_j = 0 \} .$ (ii)\ If $\lambda \ne 0,$ then for any $v \in \mathcal{K}\cap \mathcal{L},$ we obtain \[ 0=(\widehat{F}-(1-q^{\lambda})1_{(N+1)m})v=\widehat{F}v-(1-q^{\lambda})v=(q^{\lambda}-1)v.\] Hence we get $v=0$.
For any $v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \mathcal{L},$ we obtain $\widehat{F}v=(1-q^{\lambda})v.$ Consequently, we see $\sum _{j=0}^NB_jv_j=(1-q^{\lambda})v_i\, (i \in I=\{ 0, \ldots ,N \} ).$ Hence $v_0=\dotsi =v_N$ and \[ \mathcal{L}= \{ {}^t({}^th, \ldots ,{}^th) ;h \! \in \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m) \} .\] Therefore, we can compute $\dim (mc_{\lambda}(\mathcal{V}))\, $: \begin{align*} \dim (mc_{\lambda}(\mathcal{V}))&=\dim (\mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L}))\\ &=\dim (\mathcal{V}^{N+1})-\dim (\mathcal{K}+\mathcal{L})\\ &=\dim (\mathcal{V}^{N+1})-\dim \mathcal{K}-\dim \mathcal{L}\ \ (\because \ \mathcal{K}\cap \mathcal{L}=0)\\ &=(N+1)m-\sum _{i=0}^N\dim \ker B_i-\dim \ker (B_{\infty}-q^{\lambda}1_m)\\ &=(N+1)m-\sum _{i=1}^N\dim \ker B_i-\dim \ker (A_0-1_m)-\dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m).\ \ \square \end{align*} \begin{prop}\ \ If $\mathcal{W}$ is a $\boldsymbol{B}$-invariant subspace of $\mathcal{V}$, then $\mathcal{W}^{N+1}$ is an $\boldsymbol{F}$-invariant subspace. Moreover, $mc_{\lambda}(\mathcal{W})$ is a submodule of $mc_{\lambda}(\mathcal{V}).$ \end{prop} $Proof.$\ \ For any $w={}^t({}^tw_0, \ldots ,{}^tw_N) \in \mathcal{W}^{N+1}$ and $j \in J=\{ 1, \ldots ,N \}$, it is clear that \[ F_jw={}^t(0, \ldots ,\stackrel{j+1}{\stackrel{\vee}{\textstyle{\sum _{i=0}^N}{}^t(B_iw_i)-(1-q^{\lambda}){}^tw_j}},\ldots ,0) \in \mathcal{W}^{N+1}.\] Since $F_{\infty}w=(1_{(N+1)m}-\widehat{F})w=w-\widehat{F}w \in \mathcal{W}^{N+1},\ \mathcal{W}^{N+1}$ is an $\boldsymbol{F}$-invariant subspace of $\mathcal{V}^{N+1}$. The second claim follows from \begin{equation}\label{rel2} \mathcal{W}^{N+1} \cap (\mathcal{K}_{\mathcal{V}}+\mathcal{L}_{\mathcal{V}})=\mathcal{K}_{\mathcal{W}}+\mathcal{L}_{\mathcal{W}}. \end{equation} It remains to prove (\ref{rel2}). If $\lambda =0,$ then $\mathcal{K}$ is a subspace of $\mathcal{L}$.
If $\lambda \ne 0,$ then \[ \mathcal{K}_{\mathcal{W}}+\mathcal{L}_{\mathcal{W}}=\mathcal{K}_{\mathcal{V}\cap \mathcal{W}}+\mathcal{L}_{\mathcal{V}\cap \mathcal{W}}\] is a subspace of $\mathcal{W}^{N+1} \cap (\mathcal{K}_{\mathcal{V}}+\mathcal{L}_{\mathcal{V}})$. Moreover, for any $w={}^t({}^tw_0, \ldots ,{}^tw_N) \in \mathcal{W}^{N+1} \cap (\mathcal{K}_{\mathcal{V}}+\mathcal{L}_{\mathcal{V}})$ and $i \in I=\{ 0, \ldots ,N \}$, we can write \[ w_i=k_i+h\, (k_i \in \ker B_i,h \in \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m)).\] Here we obtain $\mathcal{W}\ni \sum _{i=0}^NB_iw_i= \sum _{i=0}^NB_i(k_i+h)=(1-q^{\lambda})h.$ Consequently, $h \in \mathcal{W}$. Moreover, we find $w \in \mathcal{K}_{\mathcal{W}}+\mathcal{L}_{\mathcal{W}}$ from $k_i=w_i-h \in \mathcal{W}$. Therefore, $\mathcal{W}^{N+1} \cap (\mathcal{K}_{\mathcal{V}}+\mathcal{L}_{\mathcal{V}})$ is a subspace of $\mathcal{K}_{\mathcal{W}}+\mathcal{L}_{\mathcal{W}}.\ \square$\\ From now on, we assume the conditions $(\ast ),(\ast \ast)$. Here we can prove \begin{prop}\label{404}\ \ If $(\ast \ast)$ is satisfied, then $mc_0(\mathcal{V}) \simeq \mathcal{V}.$ \end{prop} $Proof.$\ \ If $\lambda =0,$ then we get $\mathcal{K}+\mathcal{L}= \mathcal{L}= \{ {}^t({}^tv_0, \ldots ,{}^tv_N) ; \sum _{j=0}^NB_jv_j=0\}$. Let \[ \phi :{}^t({}^tv_0, \ldots ,{}^tv_N) \longmapsto \sum _{j=0}^NB_jv_j.\] Then $\phi :\mathcal{V}^{N+1} \longrightarrow \mathcal{V}$ is a surjection by condition $(\ast \ast)$. For any $v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \mathcal{V}^{N+1}$, we get \begin{align*} (\phi \circ F_j)(v)&=\phi ({}^t(0, \ldots ,\stackrel{j+1}{\stackrel{\vee}{{}^ts}} , \ldots , 0))=B_js=(B_j \circ \phi )(v),\ s=\textstyle{\sum _{i=0}^N}B_iv_i\, (j \in J=\{ 1, \ldots ,N \} ),\\ (\phi \circ F_{\infty})(v)&=(\phi \circ (1_{(N+1)m}-\widehat{F}))(v)=\phi ({}^t({}^tv_0-{}^ts, \ldots ,{}^tv_N-{}^ts))=B_{\infty}s=(B_{\infty} \circ \phi )(v).
\end{align*} Therefore, we obtain \begin{equation*} \mathcal{V}= \mathrm{im}(\phi ) \simeq \mathcal{V}^{N+1}/{\ker (\phi )} = \mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L}) = mc_0(\mathcal{V}).\ \square \end{equation*} Here we introduce an auxiliary transformation $\psi _{\mu}$. \begin{defi}\ \ For $\boldsymbol{T}=(T_1, \ldots ,T_N,T_{\infty}) \in (\mathrm{M}_{(N+1)m}(\mathbb{C}))^{N+1},$ we define \begin{equation} \psi _{\mu}:(\mathrm{M}_{(N+1)m}(\mathbb{C}))^{N+1} \longrightarrow (\mathrm{M}_{(N+1)m}(\mathbb{C}))^{N+1},\ (T_1, \ldots ,T_N,T_{\infty}) \longmapsto (T_1, \ldots ,T_N,T_{\infty}+\mu 1_{(N+1)m}). \end{equation} We set the module $\psi _{\mu}(\mathcal{V})=(\psi _{\mu}(\boldsymbol{T}),\mathcal{V}).$ \end{defi} Clearly, $\psi _{\mu}$ preserves the irreducibility of equations. Moreover, we introduce a transformation $\Psi _{\lambda}$. \begin{defi}\ \ We define $\Psi _{\lambda}:\mathcal{E}\longrightarrow \mathcal{E}$, \begin{equation} \Psi _{\lambda}=\psi _{1-q^{\lambda}} \circ c_{\lambda}. \end{equation} Let $\widetilde{\boldsymbol{F}}=\Psi _{\lambda}(\boldsymbol{B}),\ \Psi _{\lambda}(\mathcal{V})=(\widetilde{\boldsymbol{F}},\mathcal{V}^{N+1}).$ We let $\check{F}_k$ be the matrix induced by the action of $\widetilde{F}_k$ on $\mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L}).$ Moreover, we define $\overline{\Psi}_{\lambda}:\mathcal{E}\longrightarrow \mathcal{E}$, \begin{equation} \overline{\Psi}_{\lambda}(\boldsymbol{B})=\check{\boldsymbol{F}},\ \overline{\Psi}_{\lambda}(\mathcal{V})=\mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L})=(\check{\boldsymbol{F}},\mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L})). \end{equation} \end{defi} Here the following facts are proved in the same way as above. \begin{prop}\ \ $\mathcal{K},\mathcal{L}$ are $\widetilde{\boldsymbol{F}}$-invariant. \end{prop} \begin{prop}\ \ If $\mathcal{W}$ is a $\boldsymbol{B}$-invariant subspace of $\mathcal{V}$, then $\mathcal{W}^{N+1}$ is an $\widetilde{\boldsymbol{F}}$-invariant subspace.
Moreover, $\overline{\Psi}_{\lambda}(\mathcal{W})$ is a submodule of $\overline{\Psi}_{\lambda}(\mathcal{V}).$ \end{prop} From $\psi _0=\mathrm{id}_{\mathcal{V}^{N+1}}$ and $mc_0=\overline{\Psi}_0,$ the next proposition is obvious. \begin{prop}\label{409}\ \ If $(\ast \ast)$ is satisfied, then $\overline{\Psi}_0(\mathcal{V}) \simeq \mathcal{V}.$ \end{prop} The proofs of Propositions \ref{410}, \ref{411} and \ref{412} are similar to those in Dettweiler and Reiter's paper \cite{DR1}. \begin{prop}\label{410}\ \ If $(\ast ),(\ast \ast)$ are satisfied, then for any $\lambda ,\mu \in \mathbb{C},\ \overline{\Psi}_{\mu} \circ \overline{\Psi}_{\lambda}(\mathcal{V}) \simeq \overline{\Psi}_{\mu}(\mathcal{V}^{N+1})/\overline{\Psi}_{\mu}(\mathcal{K}_{\mathcal{V}} +\mathcal{L}_{\mathcal{V}}(\lambda )).$ \end{prop} $Proof.$\ \ If $\mu =0,$ it is easily seen that \[ \overline{\Psi}_0 \circ \overline{\Psi}_{\lambda}(\mathcal{V}) \simeq \overline{\Psi}_{\lambda}(\mathcal{V}) = \mathcal{V}^{N+1}/(\mathcal{K}_{\mathcal{V}} +\mathcal{L}_{\mathcal{V}}(\lambda )) \simeq \overline{\Psi}_0(\mathcal{V}^{N+1} )/\overline{\Psi}_0(\mathcal{K}_{\mathcal{V}} +\mathcal{L}_{\mathcal{V}}(\lambda )).\] Now we assume $\mu \ne 0$ and set \begin{align} \lambda '&=q^{\lambda}-1,\ \mu '=q^{\mu}-1,\ \mathcal{K}_1=\mathcal{K}_{\mathcal{V}},\ \mathcal{L}_1=\mathcal{L}_{\mathcal{V}}(\lambda ),\ \mathcal{K}_2=\mathcal{K}_{\mathcal{V}^{N+1}},\ \mathcal{L}_2=\mathcal{L}_{\mathcal{V}^{N+1}}(\mu ),\\ \widetilde{\boldsymbol{F}}&=\Psi _{\lambda }(\boldsymbol{B}),\ \check{\boldsymbol{F}}=\overline{\Psi}_{\lambda}(\boldsymbol{B}),\ \mathcal{M}=\overline{\Psi}_{\lambda}(\mathcal{V}),\ \mathcal{H}= \mathcal{K}_1+\mathcal{L}_1. \end{align} Let us first prove \begin{equation} \mathrm{(i)}\ \mathcal{K}_{\mathcal{M}}=(\mathcal{K}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}},\ \ \ \mathrm{(ii)}\ \mathcal{L}_{\mathcal{M}}=(\mathcal{L}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}.
\end{equation} (i)\ We set $\check{F}_0=1-\sum _{i=1}^N\check{F}_i-\check{F}_{\infty},$ where $1$ denotes the identity on $\mathcal{V}^{N+1}/(\mathcal{K}+\mathcal{L})$. For any $k+\mathcal{H}^{N+1}={}^t({}^tk_0, \ldots ,{}^tk_N)+\mathcal{H}^{N+1} \in (\mathcal{K}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}$, we obtain $k+\mathcal{H}^{N+1} \in \mathcal{K}_{\mathcal{M}}$ from $\check{F}_i(k_i+\mathcal{H})=\mathcal{H}\ (i \in I=\{ 0, \ldots ,N \}).$ Therefore, $(\mathcal{K}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}$ is a subspace of $\mathcal{K}_{\mathcal{M}}$. On the other hand, for any $v+\mathcal{H}^{N+1}={}^t({}^tv_0, \ldots ,{}^tv_N)+\mathcal{H}^{N+1} \in \mathcal{K}_{\mathcal{M}}, v_i={}^t({}^tv_{i0}, \ldots ,{}^tv_{iN})\, (v_{ij}\in \mathcal{V}),$ we compute $\widetilde{F}_0v_0\, $: \[ \widetilde{F}_0v_0=(1_{(N+1)m}-\textstyle{\sum _{i=1}^N}\widetilde{F}_i-\widetilde{F}_{\infty})v_0=(\widehat{F}-\textstyle{\sum _{i=1}^N}\widetilde{F}_i)v_0={}^t(\textstyle{\sum _{j=0}^N}{}^t(B_jv_{0j}),-\lambda '{}^tv_{01}, \ldots ,-\lambda '{}^tv_{0N})\] and we find \[ \widetilde{F}_jv_j={}^t(0, \ldots ,\stackrel{j+1}{\stackrel{\vee}{\textstyle{\sum _{i=0}^N}{}^t(B_iv_{ji})+\lambda '{}^tv_{jj}}}, \ldots ,0)\, (j \in J=\{ 1, \ldots ,N \}).\] (i-1)\ If $\lambda =0,$ then it is clear that $\widetilde{F}_iv_i={}^t(0, \ldots ,\sum _{j=0}^N{}^t(B_jv_{ij}), \ldots ,0)\, (i \in I).$ Moreover, $\widetilde{F}_iv_i \in \mathcal{H}= \mathcal{K}+\mathcal{L}= \mathcal{L}= \{ {}^t({}^tw_0, \ldots ,{}^tw_N) ; \sum _{j=0}^N B_jw_j=0 \} $ and $B_i\sum _{j=0}^NB_jv_{ij}=0.$ Hence we get \[ \widetilde{F}_iv_i \in {}^t(0, \ldots ,\stackrel{i+1}{\stackrel{\vee}{\ker B_i}}, \ldots ,0).\] Therefore, we obtain $v_i \in \ker \widetilde{F}_i+\mathcal{K}_1.$ (i-2)\ If $\lambda \ne 0,$ then \[ \widetilde{F}_iv_i=({}^tk_{i0}+{}^th_i, \ldots ,{}^tk_{iN}+{}^th_i)\qquad (k_{ij} \! \in \!
\ker B_j,\, h_i \in \ker (A_{\infty}-b_{\infty}q^{\lambda}1_m),\, i \in I).\] If $i \ne 0,$ we get $h_i=-k_{ij} \in \ker B_j\, (j \in I\setminus \{ i\} ).$ Hence we see $h_i \in \ker (B_i+\lambda '1_m)$ from $h_i \in \ker (A_{\infty}-b_{\infty}q^{\lambda}1_m)= \ker (\sum _{r=0}^NB_r+\lambda '1_m).$ Since $(\ast \ast )$ is satisfied, we get $h_i=0$. Here \[ \widetilde{F}_iv_i \in {}^t(0, \ldots ,\stackrel{i+1}{\stackrel{\vee}{\ker B_i}}, \ldots ,0).\] The case $i=0$ reduces to the case $i\ne 0$ because \begin{equation} \widetilde{F}_0=1_{(N+1)m}-\sum _{r=1}^NF_r+\lambda '1_{(N+1)m}-F_{\infty}= \begin{pmatrix} B_0+\lambda '1_m & \dotsi & B_N \\ & O & \end{pmatrix}. \end{equation} Hence we find $v+\mathcal{H}^{N+1} \in (\mathcal{K}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}.$ Moreover, $\mathcal{K}_{\mathcal{M}}$ is a subspace of $(\mathcal{K}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}$. Therefore, we obtain $\mathcal{K}_{\mathcal{M}}=(\mathcal{K}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}.$ (ii)\ For any \[ v+\mathcal{H}^{N+1}\! ={}^t({}^th, \ldots ,{}^th)+\mathcal{H}^{N+1}\! \in \! (\mathcal{L}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}(h\! \in \! \ker (\widetilde{F}_{\infty}-q^{\mu}1_{(N+1)m})),\] we let $\widetilde{H}=(F_{t-1})_{1 \leq s,t \leq N+1},\check{H}=(\check{F}_{t-1})_{1 \leq s,t \leq N+1}.$ Then we obtain \[(\check{H}+\mu 1_{(N+1)^2m})(v+\mathcal{H}^{N+1})=(\widetilde{H}+\mu 1_{(N+1)^2m})v+\mathcal{H}^{N+1}=\mathcal{H}^{N+1}.\] Consequently, we find $v+\mathcal{H}^{N+1} \in \mathcal{L}_{\mathcal{M}}$. Meanwhile, for any \[ v+\mathcal{H}^{N+1}={}^t({}^th, \ldots ,{}^th)+\mathcal{H}^{N+1} \in \mathcal{L}_{\mathcal{M}}(h \in \ker (\widetilde{F}_{\infty}-q^{\mu}1_{(N+1)m})),\] we see $v+\mathcal{H}^{N+1} \in (\mathcal{L}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}.$ Therefore, we obtain $\mathcal{L}_{\mathcal{M}}=(\mathcal{L}_2+\mathcal{H}^{N+1})/{\mathcal{H}^{N+1}}.$ Let us recall the isomorphism theorems.
For a linear space $V$ and subspaces $W,W'$ of $V$, \begin{align*} &(\mathrm{iii})\ \mathrm{if}\ W' \subset W,\ \mathrm{then}\ (V/W')/(W/W') \simeq V/W;\\ &(\mathrm{iv})\ W'/(W \cap W') \simeq (W+W')/W. \end{align*} From the above, we can compute $mc_{\mu} \circ mc_{\lambda}(\mathcal{V})\, $: \begin{align*} mc_{\mu} \circ mc_{\lambda}(\mathcal{V}) &=mc_{\mu}(\mathcal{V}^{N+1}/{\mathcal{H}})\\ &=(\mathcal{V}^{N+1}/{\mathcal{H}})^{N+1}/(\mathcal{K}_{\mathcal{M}}+\mathcal{L}_{\mathcal{M}})\\ &=(\mathcal{V}^{{(N+1)}^2}/{\mathcal{H}}^{N+1})/((\mathcal{K}_2+\mathcal{L}_2+\mathcal{H}^{N+1})/\mathcal{H}^{N+1})\ \ (\because \ (\mathrm{i}),(\mathrm{ii}))\\ &\simeq (\mathcal{V}^{{(N+1)}^2}/(\mathcal{K}_2+\mathcal{L}_2))/((\mathcal{K}_2+\mathcal{L}_2+\mathcal{H}^{N+1})/(\mathcal{K}_2+\mathcal{L}_2))\ \ (\because \ (\mathrm{iii}))\\ &\simeq mc_{\mu}(\mathcal{V}^{N+1})/(\mathcal{H}^{N+1}/((\mathcal{K}_2+\mathcal{L}_2)\cap \mathcal{H}^{N+1}))\ \ (\because \ (\mathrm{iv}))\\ &=mc_{\mu}(\mathcal{V}^{N+1})/mc_{\mu}(\mathcal{K}_{\mathcal{V}} +\mathcal{L}_{\mathcal{V}}(\lambda )).\ \ \square \end{align*} \begin{prop}\label{411}\ \ $mc_{\lambda}$ preserves the conditions $(\ast ),(\ast \ast).$ \end{prop} $Proof.$\ \ It is sufficient to prove that $\overline{\Psi}_{\lambda}$ preserves the conditions $(\ast ),(\ast \ast).$ The case $\lambda =0$ is obvious because of Proposition \ref{409}. Hence we assume $\lambda \ne 0$ and that $\mathcal{V}$ satisfies $(\ast ),(\ast \ast).$ We use the notation from the proof of the previous proposition. If $\tau =0,$ then for any $v+\mathcal{H}={}^t({}^tv_0, \ldots ,{}^tv_N)+\mathcal{H}\in \bigcap _{i=0}^N \ker \check{F}_i,$ it is clear that $\widetilde{F}_0v \in \mathcal{H}.$ Then we get $v \in \mathcal{H}$ from (i-2) in the proof of Proposition \ref{410}.
Consequently, we obtain $\bigcap _{i=0}^N \ker \check{F}_i=\{ \mathcal{H}\} .$ If $\tau \ne 0,$ for any $v+\mathcal{H}\in \bigcap _{j \ne i} \ker \check{F}_j \cap \ker (\check{F}_i+\tau 1_{(N+1)m})\, (i \in J=\{ 1, \ldots ,N \} )$, we get $v \in \mathcal{H}$ from $\widetilde{F}_0v \in \mathcal{H}.$ Hence we obtain $\bigcap _{j \ne i} \ker \check{F}_j \cap \ker (\check{F}_i+\tau 1_{(N+1)m})=\{ \mathcal{H}\} .$ The case $i=0$ reduces to the case $i\in J$. Therefore, $\overline{\Psi}_{\lambda}(\mathcal{V})$ satisfies $(\ast )$. Meanwhile, take any $\tau \in \mathbb{C}$ and $v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \mathcal{V}^{N+1}.$ If $i \in J,$ then \[ \widetilde{F}_iv={}^t(0, \ldots ,\stackrel{i+1}{\stackrel{\vee}{\sum _{j=0}^N{}^t(B_jv_j)+\lambda '{}^tv_i}} , \ldots ,0).\] Hence the vectors $\widetilde{F}_iv$ span the linear space ${}^t(0, \ldots ,\mathcal{V}, \ldots ,0).$ Moreover, it is clear that \[ (\widetilde{F}_0+\tau 1_{(N+1)m})v={}^t(\sum _{j=0}^N{}^t(B_jv_j)+(\lambda '+\tau ){}^tv_0,\tau {}^tv_1, \ldots ,\tau {}^tv_N).\] Consequently, $\sum _{j=0}^NB_jv_j+(\lambda '+\tau )v_0$ spans $\mathcal{V}$. Here the case $i=0$ reduces to the case $i\in J$. Therefore, we obtain $\sum _{j \neq i}\mathrm{im}\check{F}_j+\mathrm{im}(\check{F}_i+\tau 1_{(N+1)m})=\mathcal{V}^{N+1}/\mathcal{H}\, (i \in J).$ From the above, $\overline{\Psi}_{\lambda}(\mathcal{V})$ satisfies $(\ast \ast )$.\ $\square$\\ The transformation $\overline{\Psi}_{\lambda}$ also satisfies the next proposition. \begin{prop}\label{412}\ \ If $(\ast ),(\ast \ast)$ are satisfied, then for any $\lambda ,\mu \in \mathbb{C},\, \overline{\Psi}_{\mu} \circ \overline{\Psi}_{\lambda}(\mathcal{V}) \simeq \overline{\Psi}_{\log _q(q^{\lambda}+q^{\mu}-1)}(\mathcal{V})$. \end{prop} $Proof.$\ \ If $\lambda \mu =0,$ it is obvious.
We assume $\lambda \mu \ne 0$ and set \begin{align} &\widetilde{\boldsymbol{F}}=\Psi_{\lambda}(\boldsymbol{B}),\ \boldsymbol{F}'=\Psi_{\log _q(q^{\lambda}+q^{\mu}-1)}(\boldsymbol{B}),\ \boldsymbol{H}=\Psi_{\mu}(\widetilde{\boldsymbol{F}}),\ \mathcal{K}_1=\mathcal{K}_{\mathcal{V}},\ \mathcal{L}_1=\mathcal{L}_{\mathcal{V}}(\lambda ),\\ &\mathcal{K}_2=(\mathcal{K}_{\mathcal{V}^{N+1}},\widetilde{\boldsymbol{F}}),\ \mathcal{L}_2=(\mathcal{L}_{\mathcal{V}^{N+1}}(\mu ),\widetilde{\boldsymbol{F}}),\ \mathcal{L}'=\mathcal{L}_{\mathcal{V}}(\log _q(q^{\lambda}+q^{\mu}-1)),\ \mathcal{H}= \mathcal{K}_1+\mathcal{L}_1. \end{align} Here we prove that the mapping $\overline{\phi}:\overline{\Psi}_{\mu} \circ \overline{\Psi}_{\lambda}(\mathcal{V}) \longrightarrow \overline{\Psi}_{\log _q(q^{\lambda}+q^{\mu}-1)}(\mathcal{V})$ induced from \begin{equation} \phi :\ \Psi_{\mu} \circ \Psi_{\lambda}(\mathcal{V}) \longrightarrow \Psi_{\log _q(q^{\lambda}+q^{\mu}-1)}(\mathcal{V})\ \biggl( {}^t({}^tv_0, \ldots ,{}^tv_N) \longmapsto \sum _{i=0}^N\widetilde{F}_iv_i \biggr) \end{equation} is an isomorphism. We first find \begin{equation} \overline{\Psi}_{\mu} \circ \overline{\Psi}_{\lambda}(\mathcal{V}) \simeq \overline{\Psi}_{\mu}(\mathcal{V}^{N+1})/\overline{\Psi}_{\mu}(\mathcal{K}_{\mathcal{V}} +\mathcal{L}_{\mathcal{V}}(\lambda )) \simeq \mathcal{V}^{{(N+1)}^2}/(\mathcal{K}_2+\mathcal{L}_2+{\mathcal{H}}^{N+1}). \end{equation} It is easy to check that $(\mathcal{L}_1)^{N+1}$ is a subspace of $\mathcal{K}_2=\ker (\phi )$. Moreover, we get $\phi ((\mathcal{K}_1)^{N+1})=\sum _{i=0}^N\widetilde{F}_i\mathcal{K}_1=\mathcal{K}_1$ and $\mathcal{L}_2=\{ {}^t({}^th, \ldots ,{}^th) ;h \in \ker (\widetilde{F}_{\infty}-q^{\mu}1_{(N+1)m})\}$.
Hence we obtain \[ \phi (\mathcal{L}_2)=\sum _{i=0}^N\widetilde{F}_i\ker F_{\infty}'=\bigr( \sum _{i=0}^N\widetilde{F}_i\bigl) \ker F_{\infty}'=\ker F_{\infty}'=\mathcal{L}'\ (F_{\infty}' =\widetilde{F}_{\infty}-q^{\mu}1_{(N+1)m}).\] Here we compute $\dim (\mathcal{K}_2)\, $: \[ \dim (\mathcal{K}_2)=\sum _{i=0}^N\mathrm{dim \, ker}\widetilde{F}_i=\sum _{i=0}^N\{ \dim (\mathcal{V}^{N+1})-\mathrm{rank}\widetilde{F}_i \}=\sum _{i=0}^N\{ (N+1)m-m \}=N(N+1)m.\] Consequently, we can calculate $\dim (\overline{\Psi}_{\mu} \circ \overline{\Psi}_{\lambda}(\mathcal{V}))\, $: \begin{align*} \dim (\overline{\Psi}_{\mu} \circ \overline{\Psi}_{\lambda}(\mathcal{V}))&=\dim (\mathcal{V}^{{(N+1)}^2}/(\mathcal{K}_2+\mathcal{L}_2+\mathcal{H}^{N+1}))\\ &=\dim (\mathcal{V}^{{(N+1)}^2})-\dim (\mathcal{K}_2+\mathcal{L}_2+(\mathcal{K}_1)^{N+1}+(\mathcal{L}_1)^{N+1})\\ &=(N+1)^2m-\dim (\mathcal{K}_2+\mathcal{L}_2+(\mathcal{K}_1)^{N+1})\\ &=(N+1)^2m-\dim (\mathcal{K}_2)-\dim (\mathcal{L}_2+(\mathcal{K}_1)^{N+1})\\ &=(N+1)^2m-N(N+1)m-\dim (\mathcal{K}_1+\mathcal{L}')\\ &=(N+1)m-\dim (\mathcal{K}_1+\mathcal{L}')\\ &=\dim (\mathcal{V}^{N+1})-\dim (\mathcal{K}_1+\mathcal{L}')\\ &=\dim (\mathcal{V}^{N+1}/(\mathcal{K}_1+\mathcal{L}')). \end{align*} Here we set $\lambda '=q^{\lambda}-1,\mu '=q^{\mu}-1$. For any \[ v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \mathcal{V}^{{(N+1)}^2},\ (v_j={}^t({}^tv_{j0}, \ldots ,{}^tv_{jN}),\, v_{ij}\in \mathcal{V}),\] we get the following relations. \begin{gather*} (F_i'\circ \phi )(v)={}^t(0, \ldots ,\stackrel{i+1}{\stackrel{\vee}{{}^tw_i}} , \ldots , 0)=(\phi \circ H_i)(v)\, (i \in \{ 0, \ldots ,N \} ),\\ w_i=\sum _{j=0}^NB_j\biggr\{ \sum _{k=0}^NB_kv_{jk}+\lambda 'B_jv_{jj}+(\lambda '+\mu ')v_{ij} \biggl\}+\lambda '(\lambda '+\mu ')v_{ii},\\ F_{\infty}'\circ \phi =\phi -\sum _{i=0}^N(F_i'\circ \phi )=\phi -\sum _{i=0}^N(\phi \circ H_i )=\phi \circ H_{\infty}. 
\end{gather*} Therefore, we obtain $\overline{\Psi}_{\mu} \circ \overline{\Psi}_{\lambda}(\mathcal{V}) \simeq \overline{\Psi}_{\log _q(q^{\lambda}+q^{\mu}-1)}(\mathcal{V}).\ \square$\\ From the above, Theorem \ref{irre} follows.\\ \\ \rm{\textbf{Theorem \ref{irre}}\ (irreducibility)}\ \ \it{If $(\ast ),(\ast \ast )$ are satisfied, then $\mathcal{V}$ is irreducible if and only if $mc_{\lambda}(\mathcal{V})$ is irreducible}\rm{.}\\ $Proof.$\ \ For any non-zero irreducible module $\mathcal{V}$ and $\lambda \in \mathbb{C}$, we put $\mathcal{M}=\overline{\Psi}_{\lambda}(\mathcal{V})$ and take a non-zero submodule $\mathcal{M}'$ of $\mathcal{M}$. Here $\mathcal{W}=\overline{\Psi}_{\log _q(2-q^{\lambda})}(\mathcal{M}')$ is a submodule of \begin{equation*} \overline{\Psi}_{\log _q(2-q^{\lambda})}(\mathcal{M})=(\overline{\Psi}_{\log _q(2-q^{\lambda})}\circ \overline{\Psi}_{\lambda})(\mathcal{V}) \simeq \overline{\Psi}_0(\mathcal{V})=mc_0(\mathcal{V})\simeq \mathcal{V}. \end{equation*} Hence we obtain $\mathcal{W}=0\ \mathrm{or}\ \mathcal{V}$. If $\mathcal{W}=0,$ then we get $\mathcal{M}' \simeq \overline{\Psi}_{\lambda}(\mathcal{W})=\overline{\Psi}_{\lambda}(0)=0.$ This is a contradiction. Consequently, we find $\mathcal{W}=\mathcal{V}$. Moreover, we get \begin{equation*} \mathcal{M}'= \overline{\Psi}_{\lambda}(\mathcal{W})=\overline{\Psi}_{\lambda}(\mathcal{V})=\mathcal{M}. \end{equation*} Hence $\mathcal{M}=\overline{\Psi}_{\lambda}(\mathcal{V})$ is an irreducible module. Here $\overline{\Psi}_{\lambda}(\mathcal{V})$ is irreducible if and only if $mc_{\lambda}(\mathcal{V})$ is irreducible. Therefore, $\mathcal{V}$ is irreducible if and only if $mc_{\lambda}(\mathcal{V})$ is irreducible. This completes the proof of the theorem.\ $\square$\\ \subsection{Proof of Theorem \ref{index}.} In this section, we prove that $mc_{\lambda}$ preserves the rigidity index of the equation $E_R$. First, we examine the change of the spectral types $S_0,S_{\infty}$.
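Before turning to the lemmas, the bookkeeping behind the index formula $\mathrm{idx}(E_R)=\mathrm{dim}Z(A_0)+\mathrm{dim}Z(A_{\infty})+\sum_{i,j}(n_j^i)^2-m^2N$ can be checked mechanically. The following is a minimal Python sketch (the helper names are ours, and it assumes $A_0,A_{\infty}$ diagonalizable, so that $\mathrm{dim}Z(A_{\xi})$ equals the sum of the squared eigenvalue multiplicities); it reproduces the values $\mathrm{idx}(E_2)=2$ and $\mathrm{idx}(E_3)=2$ computed earlier.

```python
def dim_centralizer(multiplicities):
    # dim Z(A) for a diagonalizable A: sum of squared eigenvalue multiplicities
    return sum(m * m for m in multiplicities)

def rigidity_index(mult0, mult_inf, divisor, m, N):
    # idx(E_R) = dim Z(A_0) + dim Z(A_inf) + sum (n_j^i)^2 - m^2 N
    return (dim_centralizer(mult0) + dim_centralizer(mult_inf)
            + sum(n * n for n in divisor) - m * m * N)

# Heine's q-hypergeometric E_2: S(E_2) = 1,1; 1,1; 1,1 with m = 2, N = 1
print(rigidity_index([1, 1], [1, 1], [1, 1], 2, 1))  # 2

# Generalized q-hypergeometric E_3: simple eigenvalues at 0 and infinity,
# det A(x) with zeros of multiplicities 1 and m - 1, N = 1
m = 4
print(rigidity_index([1] * m, [1] * m, [1, m - 1], m, 1))  # 2
```

Such a check also illustrates the parity statement (iii): for any spectral data satisfying the counting constraints, the value returned is even.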
\begin{lem}\ \ We set the coefficient polynomial $A(x)=\sum _{k=0}^NA_kx^k\ (resp.\ G(x)=\sum _{k=0}^NG_kx^k)$ of the canonical form of $E_{\boldsymbol{B},\boldsymbol{b}}\ (resp.\ E_{\boldsymbol{F},\boldsymbol{b}})$, and we let $\mathrm{Ev}(M)$ be the set of eigenvalues of $M\in \mathrm{M}_m(\mathbb{C})$. If \[A_0\sim \bigoplus _{\theta \in \mathrm{Ev}(A_0)}\bigoplus _{j=1}^{s_{\theta}^0}J(\theta ,t_{\theta ,\, j}^0),\ \ \ A_{\infty}\sim \bigoplus _{\kappa \in \mathrm{Ev}(A_{\infty})}\bigoplus _{j=1}^{s_{\kappa}^{\infty}}J(\kappa ,t_{\kappa ,\, j}^{\infty})\] and $(\ast \ast )$ is satisfied, then we obtain \begin{align*} G_0&\sim \bigoplus _{\theta \in \mathrm{Ev}(A_0)\setminus \{ q^{\lambda}\} }\bigoplus _{j=1}^{s_{\theta}^0}J(\theta ,t_{\theta ,\, j}^0)\oplus \bigoplus _{j=1}^{s_{q^{\lambda}}^0}J(q^{\lambda} ,t_{q^{\lambda} ,\, j}^0+1)\oplus J(q^{\lambda},1)^{\oplus (Nm-s_{q^{\lambda}}^0)},\\ G_{\infty}&\sim \bigoplus _{\kappa \in \mathrm{Ev}(A_{\infty})\setminus \{ b_{\infty}\} }\bigoplus _{j=1}^{s_{\kappa}^{\infty}}J(\kappa ,t_{\kappa ,\, j}^{\infty})\oplus \bigoplus _{j=1}^{s_{b_{\infty}}^{\infty}}J(b_{\infty} ,t_{b_{\infty} ,\, j}^{\infty}+1)\oplus J(b_{\infty},1)^{\oplus (Nm-s_{b_{\infty}}^{\infty})}. \end{align*} \end{lem} $Proof.$\ \ It is easily seen that $G_0=1_{(N+1)m}-F_0,\ G_{\infty}=b_{\infty}F_{\infty},\ F_0=1_{(N+1)m}-\sum _{i=1}^NF_i-F_{\infty}$ and \begin{equation} \theta 1_{(N+1)m}-G_0=\begin{pmatrix} \theta 1_m-A_0 & B_1& \dotsi & B_N\\ & (\theta -q^{\lambda})1_m & & \\ & & \ddots & \\ & & & (\theta -q^{\lambda})1_m \end{pmatrix}\ \ (\theta \in \mathbb{C}).
\end{equation} (i)\ \ If $\theta \ne q^{\lambda},$ then $\dim \ker ((\theta 1_{(N+1)m}-G_0)^n)=\dim \ker ((\theta 1_m-A_0)^n)\ (n\in \mathbb{Z}_{>0}).$ (ii)\ \ If $\theta =q^{\lambda},$ then for any $v={}^t({}^tv_0,\, \ldots \, ,{}^tv_N)\in \mathcal{V}^{N+1}\ (v_i\in \mathcal{V}),$ we get \[ (\theta 1_{(N+1)m}-G_0)v={}^t({}^tv',0,\, \ldots \, ,0),\, v'=\sum _{k=0}^NB_kv_k+(\theta -1)v_0.\] Here $v'$ spans $\mathcal{V}$ because of condition $(\ast \ast )$. Hence we obtain \[ \dim \ker (\theta 1_{(N+1)m}-G_0)=Nm,\ \dim \ker ((\theta 1_{(N+1)m}-G_0)^{n+1})=Nm+\dim \ker ((\theta 1_m-A_0)^n)\ (n\in \mathbb{Z}_{>0}).\] (iii)\ \ If $\kappa \ne b_{\infty}$, then for any $v={}^t({}^tv_0,\, \ldots \, ,{}^tv_N)\in \ker ((\kappa 1_{(N+1)m}-G_{\infty})^n)\ (v_i\in \mathcal{V},\, n\in \mathbb{Z}_{>0})$, we get \[ 0=(\kappa 1_{(N+1)m}-G_{\infty})^nv=\{ (\kappa -b_{\infty})1_{(N+1)m}+b_{\infty}\widehat F\} ^nv=(\kappa -b_{\infty})^nv+P\widehat{F}v\ (P\in \mathrm{M}_{(N+1)m}(\mathbb{C})).\] Hence we find $v_0=\cdots =v_N.$ Moreover, it is clear that \[ (\kappa 1_{(N+1)m}-G_{\infty})^nv=\{ (\kappa -b_{\infty})1_{(N+1)m}+b_{\infty}\widehat{F}\} ^nv={}^t({}^tv',\, \ldots \, ,{}^tv'),\ v'=(\kappa 1_m-A_{\infty})^nv_0.\] Therefore, we obtain $\dim \ker ((\kappa 1_{(N+1)m}-G_{\infty})^n)=\dim \ker ((\kappa 1_m-A_{\infty})^n)$.
(iv)\ \ If $\kappa =b_{\infty}$, then we obtain \[ \dim \ker (\kappa 1_{(N+1)m}-G_{\infty})=\dim \ker \widehat{F}=(N+1)m-\mathrm{dim\ im}\widehat{F}=(N+1)m-m=Nm\] from $\kappa 1_{(N+1)m}-G_{\infty}=b_{\infty}\widehat F$ and $(\ast \ast ).$ Here for any \[ v={}^t({}^tv_0,\, \ldots \, ,{}^tv_N)\in \ker ((\kappa 1_{(N+1)m}-G_{\infty})^{n+1})\ (v_i\in \mathcal{V},\, n\in \mathbb{Z}_{>0}),\] it is easily seen that \[ (\kappa 1_{(N+1)m}-G_{\infty})v=b_{\infty}\widehat{F}v={}^t({}^tv',\, \ldots \, ,{}^tv'),\ \ \ v'=b_{\infty}\sum _{k=0}^NB_kv_k\] and \[ (\kappa 1_{(N+1)m}-G_{\infty})^{n+1}v={}^t({}^tw,\, \ldots \, ,{}^tw),\ \ \ w=(\kappa 1_m-A_{\infty})^nv'.\] Therefore, we obtain $\dim \ker ((\kappa 1_{(N+1)m}-G_{\infty})^{n+1})=Nm+\dim \ker ((\kappa 1_m-A_{\infty})^n)$.\ $\square$\\ Next, we prepare to examine the change of the spectral type $S_{\mathrm{div}}$. \begin{lem}\ \ We can reduce $G(x)$ to $\widetilde{G}(x):$ \begin{equation} \widetilde{G}(x)=\begin{pmatrix} T(x)1_m & & & \\ & \ddots & & \\ & & T(x)1_m & \\ V_1(x) & \cdots & V_N(x) & A(q^{-\lambda}x) \end{pmatrix} \end{equation} by elementary matrices. Here $V_i(x)\, (i=1,\ldots ,N)$ are polynomials and $T(x)=\prod _{k=1}^N(1-\frac{x}{b_k}).$ \end{lem} $Proof.$\ \ For any $\lambda \in \mathbb{C},k\in J=\{ 1,\, \ldots ,\, N\} ,b_k\in \mathbb{C}\setminus \{ 0\}$, let $s_k=1-\frac{x}{b_k},\ s_k'=1-\frac{x}{q^{\lambda}b_k},\ T_k=\frac{T(x)}{s_k},\ b_{i,j}=1-\frac{b_i}{b_j}$. It is clear that \begin{align} G(x)&=T(x)F(x)\\ &=\left( \prod _{k=1}^Ns_k\right) \cdot \left( F_{\infty}+\sum _{l=1}^N\frac{F_l}{s_l}\right) \\ &=T(x)1_m\oplus \bigoplus _{k=1}^Nq^{\lambda}s_k'T_k(x)1_m+\left( -T(x)1_m\oplus \bigoplus _{k=1}^N\frac{xT_k(x)}{b_k}1_m\right) \begin{pmatrix} 1_m \\ \vdots \\ 1_m \end{pmatrix}(B_0\cdots B_N). \end{align} Here we row reduce $G(x)$ by the elementary matrix \begin{equation} \begin{pmatrix} (1-s_1)1_m & s_11_m & &\\ -1_m & 1_m & & \\ \vdots & & \ddots & \\ -1_m & & & 1_m \end{pmatrix}.
\end{equation} Next, we column reduce by the elementary matrix \begin{equation} \begin{pmatrix} 1_m & & & \\ 1_m & 1_m & & \\ \vdots & & \ddots & \\ 1_m & & & 1_m \end{pmatrix}. \end{equation} Then we obtain \begin{equation} q^{\lambda}\begin{pmatrix} T1_m & s_1'T1_m& & \\ & s_1'T_11_m & & \\ & & \ddots & \\ & & & s_N'T_N1_m \end{pmatrix}+\begin{pmatrix} O_m \\ T_11_m \\ \vdots \\ T_N1_m \end{pmatrix}(q^{\lambda}1_m-B_{\infty} \ B_1\cdots B_N). \end{equation} We set \begin{align} f_{i,j}&=(-1)^{i+j}b_ib_j^{-1}b_{i+1,j}^{-1}\prod _{k=1}^{j-1}(b_{j,k}^{-1}b_{i,k})\cdot \prod _{k=j+1}^{i-1}(b_kb_j^{-1}b_{k,j}^{-1}b_{i,k})\ \ (b_{N+1,j}=1),\\ g_i&=-\prod _{k=1}^ib_{i+1,k}^{-1}\cdot \prod _{k=1}^{i-1}b_{i,k} (\ne 0) \end{align} and $C_0=(C_{i,j}^0)_{1\le i,j\le N+1}\in \mathrm{M}_{(N+1)m}(\mathbb{C})\ (C_{i,j}^0\in \mathrm{M}_m(\mathbb{C}))$ as \begin{equation} C_{i,j}^0=\begin{cases} 1_m & (i=j=1)\\ f_{i-1,j-1}s_{i-1}1_m & (2\le i,j\le N,i\ge j)\\ g_{i-1}s_i1_m & (2\le i=j-1 \le N)\\ f_{N,j-1}1_m & (i=N+1,j\ne 1)\\ O_m & (\mathrm{otherwise}) \end{cases}. \end{equation} Here $C_0$ is an elementary matrix. Let $j\in \{ 1,\, \ldots ,\, N\},\, I_j=\{ 1,\, \ldots ,\, j\}$. We prove \begin{align*} \mathrm{(i)}&\ \sum _{k=1}^lf_{l,k}=-g_l\ \ (l\in I_{N-1}),\\ \mathrm{(ii)}&\ \sum _{k=1}^Nf_{N,k}T_k(x)=t_0\ \ \left( t_0=\prod _{k=1}^{N-1}b_{N,k}\ne 0\right) . \end{align*} (i)\ It is clear that $\sum _{k=1}^lf_{l,k}(b_{l+1})\equiv 0\ (\mathrm{mod}\, g_l(b_{l+1}))$. We set $f(b_{l+1})=-(g_l)^{-1}\sum _{k=1}^lf_{l,k};$ then we find $\deg f(b_{l+1})\le l-1$ and $f(b_s)=1\ (s\in I_l).$ Therefore, for any $b_{l+1}\in \mathbb{C}$, we obtain $f(b_{l+1})=1.$ (ii)\ Let $g(x)=\sum _{k=1}^Nf_{N,k}T_k(x);$ then we find $\deg g(x)\le N-1$ and $g(b_s)=t_0\ (s\in I_N).$ Therefore, $g(x)=t_0.$ Hence we get \begin{equation} C_0\begin{pmatrix} O_m \\ T_11_m \\ \vdots \\ T_N1_m \end{pmatrix}=\begin{pmatrix} O_m \\ \vdots \\ O_m \\ t_01_m \end{pmatrix}.
\end{equation} Here let us reduce \begin{equation} q^{\lambda}C_0\begin{pmatrix} T1_m & s_1'T1_m& & \\ & s_1'T_11_m & & \\ & & \ddots & \\ & & & s_N'T_N1_m \end{pmatrix}. \end{equation} We set $U_{i,j}(p):=(p\delta _{si}\delta _{tj}1_m)_{1\le s,t\le N+1}\in \mathrm{M}_{(N+1)m}(\mathbb{C})\ (p\in \mathbb{C})$ and \begin{align} h_{i,j}&=g_j^{-1}\sum _{k=1}^jf_{i,k},\\ C_l&=1_{(N+1)m}+\sum _{k=l+2}^NU_{k,l+1}(h_{k-1,l})\ (1\le l\le N-2),\\ C_{N-1}&=1_m\oplus (-g_1^{-1})1_m\oplus \cdots \oplus (-g_{N-1}^{-1})1_m \oplus 1_m,\\ C&=C_{N-1}C_{N-2}\cdots C_1C_0. \end{align} Then we obtain \begin{equation} \begin{split} &C\begin{pmatrix} T1_m & s_1'T1_m& & \\ & s_1'T_11_m & & \\ & & \ddots & \\ & & & s_N'T_N1_m \end{pmatrix}\\ &\ =\begin{pmatrix} T1_m & s_1'T1_m & \\ & s_1'T1_m & -s_2'T1_m & & & \\ & & s_2'T1_m & -s_3'T1_m & & \\ & & & \ddots & \ddots & \\ & & & & s_{N-1}'T1_m & -s_N'T1_m\\ & f_{N,1}s_1'T_11_m & f_{N,2}s_2'T_21_m & \cdots & f_{N,N-1}s_{N-1}'T_{N-1}1_m & f_{N,N}s_N'T_N1_m \end{pmatrix}. \end{split} \end{equation} For any $i\in I_{N-1}$, we set \begin{equation} \begin{split} u_i&=\prod _{k=1}^is_k',\ \ u_i'=\prod _{k=1}^ib_{i+1,k},\ \ \widetilde{u}_i=s_{i+1}^{-1}(u_i-u_i'),\\ D_0&=1_{(N+1)m}-U_{1,2}(s_1'),\\ D_{1,i}&=(1_{(N+1)m}+U_{i+2,i+1}(\widetilde{u}_i))(1_{(N+1)m}+U_{i+2,i+2}(u_i'-1))(1_{(N+1)m}+U_{i+1,i+2}(s_{i+1}')),\\ D_{2,i}&=(1_{(N+1)m}+U_{i+2,i+1}(-\widetilde{u}_is_{i+1}'))(1_{(N+1)m}+U_{i+1,i+1}(u_i'^{-1}-1))(1_{(N+1)m}+U_{i+1,i+2}(s_{i+1}')),\\ D_1&=D_0D_{1,1}\cdots D_{1,N-1},\ \ \ D_2=D_{2,N-1}\cdots D_{2,1}. \end{split} \end{equation} Recalling that $A(q^{-\lambda}x)=T(q^{-\lambda}x)B(q^{-\lambda}x)=(\prod _{k=1}^Ns_k')(B_{\infty}+\sum _{l=1}^Ns_l'^{-1}B_l),$ we compute \begin{equation} D_2C\left\{ q^{\lambda}\begin{pmatrix} T1_m & s_1'T1_m& & \\ & s_1'T_11_m & & \\ & & \ddots & \\ & & & s_N'T_N1_m \end{pmatrix}+\begin{pmatrix} O_m \\ T_11_m \\ \vdots \\ T_N1_m \end{pmatrix}(q^{\lambda}1_m-B_{\infty} \ B_1\cdots B_N)\right\} D_1. 
\end{equation} This is $\widetilde{G}(x)$.\ $\square$\\ We prove the next lemma in order to examine the type of the elementary divisors of $G(x)$. \begin{lem}\ \ For the coefficient polynomial $A(x)=\sum _{k=0}^NA_kx^k$ of the canonical form of the Fuchsian equation $E_R$, we define $P_A\in \mathrm{M}_{Nm}(\mathbb{C})$ as \begin{equation} \begin{pmatrix} & 1_m & & \\ & & \ddots & \\ & & & 1_m \\ -A_{\infty}^{-1}A_0 & -A_{\infty}^{-1}A_1 & \cdots & -A_{\infty}^{-1}A_{N-1} \end{pmatrix}. \end{equation} Then, for any $a_i\in Z_R=\{a \in \mathbb{C}\, ;\, \mathrm{det}A(a)=0 \}$, we obtain \begin{equation} n_j^i=\dim \ker ((a_i 1_{Nm}-P_A)^j)-\dim \ker ((a_i 1_{Nm}-P_A)^{j-1})\, (j\in \mathbb{Z}_{>0}). \end{equation} \end{lem} $Proof.$\ \ $x1_{Nm}-P_A$ can be transformed to $1_{(N-1)m}\oplus A(x)$ by elementary matrices. Therefore, the elementary divisors of $x1_{Nm}-P_A$ and those of $A(x)$ coincide, apart from the trivial block $1_{(N-1)m}$.\ $\square$\\ We obtain the following lemma by calculating the dimensions of the generalized eigenspaces of $P_A$. \begin{lem}\ \ Let $I_j=\{ 1,\, \ldots ,\, j\} ,j_1=\min \{ N+1,j\} ,j_2=\max \{ N+1,j\} \ (j\in \mathbb{Z}_{>0}),$ $I_j'=\{ 1,\, \ldots ,\, j_2\}$. For any $a\in \mathbb{C}\setminus \{ 0\}$, the following conditions are equivalent{\normalfont :} \begin{align*} (i)&\ {}^t({}^tv_1,\, \ldots \, ,{}^tv_N)\in \ker ((a1_{Nm}-P_A)^j)\ (v_k\in \mathcal{V}),\\ (ii)&\ There\ exist\ v_{j_1},\, \ldots ,\, v_{j_2}\in \mathcal{V}\ such\ that\ for\ w_k=\sum _{l=1}^k(-1)^{l-1}\left( \begin{matrix} k-1 \\ l-1 \end{matrix} \right) a^{k-l}v_l\ (k\in I_j),\\ &\ v_k=\sum _{l=1}^j(-1)^{l-1}\left( \begin{matrix} k-1 \\ l-1 \end{matrix} \right) a^{k-l}w_l\, (k\in I_j')\ and\ \sum _{i=0}^{k-1}\frac{(-1)^i}{i!}\frac{d^iA}{dx^i}(a)w_{i+j-k+1}=0\, (k\in I_j). 
\end{align*} \end{lem} $Proof.$\ \ If $j=1$, then for any $v={}^t({}^tv_1,\, \ldots \, ,{}^tv_N)\in \ker (a1_{Nm}-P_A)\ (v_k\in \mathcal{V}),$ we put $w_1=v_1,v_{N+1}=a^Nv_1.$ Here we get $v_k=a^{k-1}v_1=a^{k-1}w_1\, (k\in I_1')$ and $A(a)w_1=\sum _{k=0}^NA_ka^kv_1=\sum _{k=0}^NA_kv_{k+1}=0$ from $(P_A-a1_{Nm})v=0.$ We assume that the equivalence is satisfied in the case $j=j'\in \mathbb{Z}_{>0}$. For any $v={}^t({}^tv_1,\, \ldots \, ,{}^tv_N)\in \ker ((a1_{Nm}-P_A)^{j'+1})\ (v_k\in \mathcal{V}),$ we let \[ u={}^t({}^tu_1,\, \ldots \, ,{}^tu_N)=(a1_{Nm}-P_A)v\, (u_k\in \mathcal{V}),\ v_{N+1}=-A_{\infty}^{-1}\sum _{k=0}^{N-1}A_kv_{k+1}.\] Then we find $u_k=av_k-v_{k+1}$ and $\sum _{k=0}^NA_kv_{k+1}=0\ (k\in \{ 1,\, \ldots ,\, N\})$. Here we set \[ \widetilde{w}_k=\sum _{l=1}^k(-1)^{l-1}\! \left( \begin{matrix} k-1 \\ l-1 \end{matrix} \right) \! a^{k-l}u_l\ (k\in I_{j'}) .\] There exist $u_{j_1'},\, \ldots ,\, u_{j_2'}\in \mathcal{V}\ (j_1'=\min \{ N+1,j'\} ,j_2'=\max \{ N+1,j'\} )$ such that \begin{equation} \widetilde{w}_k=\sum _{l=1}^k(-1)^{l-1}\left( \begin{matrix} k-1 \\ l-1 \end{matrix} \right) a^{k-l}u_l=\sum _{l=1}^{k+1}(-1)^{l-1}\left( \begin{matrix} k \\ l-1 \end{matrix} \right) a^{k+1-l}v_l\ \ (k\in I_{j'}). \end{equation} Letting $w_1=v_1$ and $w_k=\widetilde{w}_{k-1}\, (k\in I_{j'+1}\setminus \{ 1\} )$, we obtain \[ w_k=\sum _{l=1}^k(-1)^{l-1}\left( \begin{matrix} k-1 \\ l-1 \end{matrix} \right) a^{k-l}v_l\ \ (k\in I_{j'+1}).\] Here we find $\displaystyle{u_k=\sum _{l=1}^{j'}(-1)^{l-1}\left( \begin{matrix} k-1 \\ l-1 \end{matrix} \right) a^{k-l}w_{l+1}\, (k\in I_{j'}').}$ We put $v_k\in \mathcal{V}$ such that $av_k-v_{k+1}=u_k\ (k\in \{ j_1',\, \ldots ,\, j_2'\})$. 
For any $k\in I_{j'}'$, we get \begin{equation} av_k-v_{k+1}=\sum _{l=1}^{j'}(-1)^{l-1}\left( \begin{matrix} k-1 \\ l-1 \end{matrix} \right) a^{k-l}w_{l+1}=\sum _{l=2}^{j'+1}(-1)^{l-2}\left( \begin{matrix} k-1 \\ l-2 \end{matrix} \right) a^{k+1-l}w_l \end{equation} and \begin{equation} v_k=\sum _{l=1}^{j'+1}(-1)^{l-1}\left( \begin{matrix} k-1 \\ l-1 \end{matrix} \right) a^{k-l}w_l. \end{equation} Moreover, we obtain \[ \sum _{i=0}^{k-1}\frac{(-1)^i}{i!}\frac{d^iA}{dx^i}(a)w_{i+(j'+1)-k+1}=0\] from $\displaystyle{\sum _{i=0}^{k-1}\frac{(-1)^i}{i!}\frac{d^iA}{dx^i}(a)\widetilde{w}_{i+j'-k+1}=0\ \ (k\in I_{j'}).}$ On the other hand, by the computation \begin{equation} \begin{split} 0&=\sum _{k=0}^NA_kv_{k+1}\\ &=\sum _{k=0}^{N-1}A_k\sum _{l=1}^{j'+1}(-1)^{l-1}\left( \begin{matrix} k \\ l-1 \end{matrix} \right) a^{k+1-l}w_l\\ &\ \ \ \ +A_N\left\{ \sum _{l=1}^{j'+1}(-1)^{l-1}\left( \begin{matrix} N-1 \\ l-1 \end{matrix} \right) a^{N+1-l}w_l-\sum _{l=1}^{j'}(-1)^{l-1}\left( \begin{matrix} N-1 \\ l-1 \end{matrix} \right) a^{N-l}w_{l+1}\right\} \\ &=\sum _{l=0}^{(j'+1)-1}\frac{(-1)^l}{l!}\sum _{k=0}^N\frac{k!}{(k-l)!}A_k a^{k-l}w_{l+1}\\ &=\sum _{l=0}^{(j'+1)-1}\frac{(-1)^l}{l!}\frac{d^lA}{dx^l}(a)w_{l+(j'+1)-(j'+1)+1}, \end{split} \end{equation} condition (ii) is satisfied in the case $j=j'+1\in \mathbb{Z}_{>0}$. This completes the proof of the lemma.\ $\square$\\ From the above, we can calculate the type of the elementary divisors of $G(x)=c_{\lambda}(A)(x).$ We obtain the next lemma by calculating the dimensions of the generalized eigenspaces of $P_{\widetilde{G}}\in \mathrm{M}_{N(N+1)m}(\mathbb{C})$. 
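As an aside, the companion-matrix lemma above is easy to sanity-check numerically. The following sketch is ours, not part of the text; it takes the toy instance $m=1$, $N=2$, $A(x)=(x-1)(x-2)$, builds $P_A$ as in the lemma, and recovers $n_1^i=1$ for each simple zero $a_i$ of $\det A(x)$:

```python
import numpy as np

# Toy instance of the lemma: m = 1, N = 2,
# A(x) = 2 - 3x + x^2 = (x - 1)(x - 2), so A_inf = 1 is invertible.
A0, A1, Ainf = 2.0, -3.0, 1.0

# Companion matrix P_A from the lemma
P_A = np.array([[0.0, 1.0],
                [-A0 / Ainf, -A1 / Ainf]])

# For each zero a of det A(x):
# n_1 = dim ker(a*1 - P_A) - dim ker((a*1 - P_A)^0) = Nm - rank(a*1 - P_A)
for a in (1.0, 2.0):
    K = a * np.eye(2) - P_A
    n1 = 2 - np.linalg.matrix_rank(K)
    print(a, n1)  # both zeros of det A(x) are simple, so n1 = 1 for each
```

Since $\det(x1_{Nm}-P_A)$ equals $\det A(x)$ up to the unit $\det A_\infty$, the multiplicities agree with the elementary-divisor data of $A(x)$, as the lemma asserts.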
\begin{lem}\ \ If $(\ast) ,(\ast \ast )$ are satisfied, then for any $a\in Z_R=\{a \in \mathbb{C}\, ;\, \mathrm{det}A(a)=0 \}$ and $j\in \mathbb{Z}_{>0},$ we obtain {\rm (i),(ii):}\\ {\rm (i)}\ \ If $q^{\lambda}a\in q^{\lambda}Z_R\setminus \{ b_k\, ;\, k\in \{ 1,\, \ldots ,\, N\} \}$,\ then\\ $\dim \ker ((q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})^j)=\dim \ker ((a1_{Nm}-P_A)^j).$\\ {\rm (ii)}\ \ If $q^{\lambda}a\in q^{\lambda}Z_R\cap \{ b_k\, ;\, k\in \{ 1,\, \ldots ,\, N\} \}$,\ then\ $\dim \ker (q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})=Nm$\ and\\ $\dim \ker ((q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})^{j+1})=\dim \ker ((a1_{Nm}-P_A)^j).$ \end{lem} $Proof.$\ \ (i)\ For any \[ v={}^t({}^tv_1,\, \ldots \, ,{}^tv_N)\in \ker (q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}}),\ (v_k={}^t({}^tv_{k,0},\, \ldots \, ,{}^tv_{k,N}),\ v_{k,l}\in \mathcal{V}),\] we find $v_k=(q^{\lambda}a)^{k-1}v_1,\widetilde{G}(q^{\lambda}a)v_1=0$. Moreover, we obtain $A(a)v_{1,N}=0$, $\dim \ker (q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})=\dim \ker A(a)=\dim \ker (a1_{Nm}-P_A)$ from $\widetilde{G}(q^{\lambda}a)v_1=0\ \Leftrightarrow \ v_{1,j}=0\, (j\ne N)$. Meanwhile, we assume $\dim \ker ((q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})^{j'})=\dim \ker ((a1_{Nm}-P_A)^{j'})\, (j=j'\in \mathbb{Z}_{>0})$. In another expression, for $w_k={}^t({}^tw_{k,0},\, \ldots \, ,{}^tw_{k,N})\in \mathcal{V}^{N+1}\, (w_{k,l}\in \mathcal{V},\, k\in J=\{ 1,\, \ldots ,\, j'\}),$ \begin{equation} \begin{split} &\sum _{i=0}^{k-1}\frac{(-1)^i}{i!}\frac{d^i\widetilde{G}}{dx^i}(q^{\lambda}a)w_{i+j'-k+1}=0\\ &\ \Leftrightarrow \ \sum _{i=0}^{k-1}q^{(j'-i-1)\lambda}\frac{(-1)^i}{i!}\frac{d^iA}{dx^i}(a)w_{i+j'-k+1,N}=0,\ w_{k,l}=0\, (l\ne N). 
\end{split} \end{equation} If there exist \[ w_k={}^t({}^tw_{k,0},\, \ldots \, ,{}^tw_{k,N})\in \mathcal{V}^{N+1}\, (w_{k,l}\in \mathcal{V},\, k\in J'=\{ 1,\, \ldots ,\, j'+1\})\] such that $\sum _{i=0}^{k-1}\frac{(-1)^i}{i!}\frac{d^i\widetilde{G}}{dx^i}(q^{\lambda}a)w_{i+j'-k+2}=0,$ then we get $w_{k,l}=0\, (k\ne 1,l\ne N).$ Moreover, we find \[ w_{1,l}=0,\ \ \sum _{i=0}^{k-1}q^{(j'-i)\lambda}\frac{(-1)^i}{i!}\frac{d^iA}{dx^i}(a)w_{i+j'-k+2,N}=0\ (k\in J',\, l\ne N),\] because $\sum _{i=0}^{j'}\frac{(-1)^i}{i!}\frac{d^i\widetilde{G}}{dx^i}(q^{\lambda}a)w_{i+1}=0.$ Therefore, we obtain \[ \dim \ker ((q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})^{j'+1})=\dim \ker ((a1_{Nm}-P_A)^{j'+1}).\] (ii)\ If $q^{\lambda}a=b_{k_0}\in q^{\lambda}Z_R\cap \{ b_k\, ;\, k\in \{ 1,\, \ldots ,\, N\} \} \, (k_0\in \{ 1,\, \ldots ,\, N\}),$ then we obtain \[ \dim \ker (q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})=\dim \ker \, \widetilde{G}(b_{k_0})=\dim \ker \, G(b_{k_0})=(N+1)m-\mathrm{dim\ im}\, G(b_{k_0})=Nm.\] We assume that there exist $w_k={}^t({}^tw_{k,0},\, \ldots \, ,{}^tw_{k,N})\in \mathcal{V}^{N+1}\, (w_{k,l}\in \mathcal{V},\, k=1,2)$ such that \[ \widetilde{G}(q^{\lambda}a)w_2=0,\ \ \ \frac{d\widetilde{G}}{dx}(q^{\lambda}a)w_2=\widetilde{G}(q^{\lambda}a)w_1.\] Then it is clear that $\frac{dT}{dx}(b_{k_0})\ne 0$. Hence we get \[ w_{2,l}=0\, (l\ne N),\ \ \ A(a)w_{2,N}=0,\ \ \ \frac{dA}{dx}(a)w_{2,N}=q^{\lambda}\sum _{l=0}^NU_l(q^{\lambda}a)w_{1,l}.\] Here $q^{\lambda}\sum _{l=0}^NU_l(q^{\lambda}a)w_{1,l}$ spans $\mathcal{V}$ by condition $(\ast \ast )$.\ Moreover, we find \[ \dim \ker ((q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})^2)=\dim \ker A(a)=\dim \ker (a1_{Nm}-P_A).\] Therefore, we obtain \[ \dim \ker ((q^{\lambda}a1_{N(N+1)m}-P_{\widetilde{G}})^{j'+2})=\dim \ker ((a1_{Nm}-P_A)^{j'+1}).\ \square\] From the above, the next proposition is immediate. 
\begin{prop} If $(\ast) ,(\ast \ast )$\ are satisfied and the spectral type $S(E_R)=(S_0;S_{\infty};S_{\mathrm{div}})$ of Fuchsian equation $E_R$ is given as \begin{equation} \begin{split} S_{\xi}&\, :\, m_{1,1}^{\xi} \ldots m_{1,{t_{1,1}^{\xi}}}^{\xi}, \ldots ,m_{{l_{\xi}},1}^{\xi} \ldots m_{{l_{\xi}},{t_{{l_{\xi}},1}^{\xi}}}^{\xi}\ (\xi =0,\infty ),\\ S_{\mathrm{div}}&\, :\, n_1^1 \ldots n_{k_1}^1, \ldots ,n_1^l \ldots n_{k_l}^l, \end{split} \end{equation} then spectral type $S(c_{\lambda}(E_R))=(S_0';S_{\infty}';S_{\mathrm{div}}')$ satisfies \begin{equation} \begin{split} S_0'&\, :\, \begin{cases} Nm\ m_{1,1}^0 \ldots m_{1,{t_{1,1}^0}}^0, \ldots ,m_{l_0,1}^0 \ldots m_{{l_0},{t_{l_0,1}^0}}^0 & (q^{\lambda}=\alpha _1^0)\\ Nm,m_{1,1}^0 \ldots m_{1,{t_{1,1}^0}}^0, \ldots ,m_{l_0,1}^0 \ldots m_{{l_0},{t_{l_0,1}^0}}^0 & (q^{\lambda}\notin \mathrm{Ev}(A_0)) \end{cases},\\ S_{\infty}'&\, :\, \begin{cases} Nm\ m_{1,1}^{\infty} \ldots m_{1,{t_{1,1}^{\infty}}}^{\infty}, \ldots ,m_{{l_{\infty}},1}^{\infty} \ldots m_{{l_{\infty}},{t_{{l_{\infty}},1}^{\infty}}}^{\infty} & (b_{\infty}=\alpha _1^{\infty})\\ Nm,m_{1,1}^{\infty} \ldots m_{1,{t_{1,1}^{\infty}}}^{\infty}, \ldots ,m_{{l_{\infty}},1}^{\infty} \ldots m_{{l_{\infty}},{t_{{l_{\infty}},1}^{\infty}}}^{\infty} & (b_{\infty}\notin \mathrm{Ev}(A_{\infty})) \end{cases},\\ S_{\mathrm{div}}'&\, :\, \underbrace{Nm,\ldots ,Nm}_{r_1},Nm\ n_1^1 \ldots n_{k_1}^1, \ldots ,Nm\ n_1^{r_2} \ldots n_{k_{r_2}}^{r_2},n_1^{r_2+1} \ldots n_{k_{r_2+1}}^{r_2+1} ,\ldots ,n_1^l \ldots n_{k_l}^l\\ &\ \ \ \ (b_1,\ldots ,b_{r_1}\in \{ b_k\, ;\, k\in \{ 1,\ldots ,N\} \} \setminus q^{\lambda}Z_A,\ q^{\lambda}a_1,\ldots ,q^{\lambda}a_{r_2}\in \{ b_k\, ;\, k\in \{ 1,\ldots ,N\} \} ). \end{split} \end{equation} \end{prop} We show the next lemma in order to examine how $q$-middle convolution changes the spectral type. 
\begin{lem}\ \ If $\lambda \ne 0$, then for $\theta ,\kappa ,a\in \mathbb{C}\setminus \{ 0\}$ and $I=\{ 1,\, \ldots ,\, N\} ,$ we obtain \begin{align} &\dim (\ker (\theta 1_{(N+1)m}-G_0) \cap \mathcal{K})=\begin{cases} \dim \ker (A_0-1_m) & (\theta =1)\\ \sum _{k=1}^N\dim \ker B_k & (\theta =q^{\lambda})\\ 0 & (\theta \ne 1,q^{\lambda}) \end{cases},\\ &\dim (\ker (\kappa 1_{(N+1)m}-G_{\infty}) \cap \mathcal{K})=\begin{cases} \dim \ker (A_0-1_m)+\sum _{k=1}^N\dim \ker B_k & (\kappa =b_{\infty})\\ 0 & (\kappa \ne b_{\infty}) \end{cases},\\ &\dim (\ker \, G(a)\cap \mathcal{K})\\ &\ \ \ \ \ \ \ \ =\begin{cases} \dim \ker (A_0-1_m)+\sum _{k\ne j}\dim \ker B_k & (a=b_j)\\ \dim \ker B_j & (a=q^{\lambda}b_j\in q^{\lambda}Z_A\setminus \{ b_k;k\in I\})\\ 0 & (\mathrm{otherwise}) \end{cases},\\ &\dim \left( \left( \frac{dG}{dx}(a)\right) ^{-1}(\mathrm{im}\, G(a))\cap \ker \, G(a)\cap \mathcal{K}\right) =\begin{cases} \dim \ker B_j & (a=q^{\lambda}b_j\in q^{\lambda}Z_A)\\ 0 & (\mathrm{otherwise}) \end{cases},\\ &\dim (\ker (\theta 1_{(N+1)m}-G_0) \cap \mathcal{L})=\begin{cases} \dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m) & (\theta =q^{\lambda})\\ 0 & (\theta \ne q^{\lambda}) \end{cases}, \end{align} \begin{align} &\dim (\ker (\kappa 1_{(N+1)m}-G_{\infty}) \cap \mathcal{L})=\begin{cases} \dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m) & (\kappa =q^{\lambda}b_{\infty})\\ 0 & (\kappa \ne q^{\lambda}b_{\infty}) \end{cases},\\ &\dim (\ker \, G(a)\cap \mathcal{L})=\begin{cases} \dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m) & (a\in \{ b_k;k\in I\} )\\ 0 & (a\notin \{ b_k;k\in I\} ) \end{cases}. 
\end{align} \end{lem} $Proof.$\ \ (i)\ (Change of $S_0$ due to the $\mathcal{K}$) For $\theta \in \mathbb{C}$ and any $v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \ker (\theta 1_{(N+1)m}-G_0) \cap \mathcal{K}\, (v_k\in \mathcal{V}),$ it is easily seen that \[ 0=(\theta 1_{(N+1)m}-G_0)v={}^t(\textstyle{\sum _{k=0}^N}{}^t(B_kv_k)+(\theta -1)\, {}^tv_0,(\theta -q^{\lambda})\, {}^tv_1 , \ldots ,(\theta -q^{\lambda})\, {}^tv_N).\] If $\theta =1,$ then it is clear that $\theta \ne q^{\lambda}$ and $v_k=0\, (k\in I=\{ 1,\, \ldots ,\, N\} ),v_0 \in \ker (A_0-1_m).$ Here we get $\dim (\ker (\theta 1_{(N+1)m}-G_0) \cap \mathcal{K})=\dim \ker (A_0-1_m).$ If $\theta =q^{\lambda}, $ then we find $v_k\in \ker B_k\, (k\in I).$ Therefore, we obtain $\dim (\ker (\theta 1_{(N+1)m}-G_0) \cap \mathcal{K})=\sum _{k=1}^N\dim \ker B_k.$ (ii)\ (Change of $S_{\infty}$ due to the $\mathcal{K}$) For $\kappa \in \mathbb{C}$ and any $v \in \ker (\kappa 1_{(N+1)m}-G_{\infty}) \cap \mathcal{K},$ we get \[ 0=(\kappa 1_{(N+1)m}-G_{\infty})v=(\kappa 1_{(N+1)m}-b_{\infty}F_{\infty})v=\{ \kappa 1_{(N+1)m}-b_{\infty}(1_{(N+1)m}-\widehat{F})\} v=(\kappa -b_{\infty})v.\] If $\kappa =b_{\infty},$ then we obtain $\dim (\ker (\kappa 1_{(N+1)m}-G_{\infty}) \cap \mathcal{K})=\dim \mathcal{K}=\dim \ker (A_0-1_m)+\sum _{k=1}^N\dim \ker B_k$. (iii)\ (Change of $S_{\mathrm{div}}$ due to the $\mathcal{K}$) (iii-a) For any $v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \ker \, G(b_k)\cap \mathcal{K}\, (v_k\in \mathcal{V},\, k \in I),$ it is clear that $v_k=0.$ Hence we get $\dim (\ker \, G(b_k)\cap \mathcal{K})=\dim \ker (A_0-1_m)+\sum _{l\ne k}\dim \ker B_l.$ (iii-b) If $q^{\lambda}a_i\in q^{\lambda}Z_A\setminus \{ b_k\, ;\, k\in I\}$, then $T(q^{\lambda}a_i)\ne 0$. Hence we obtain \begin{equation} \ker \, G(q^{\lambda}a_i)=\ker F(q^{\lambda}a_i)=\ker \left( 1_{(N+1)m}-\widehat{F}+\sum _{k=1}^N\frac{F_k}{1-\frac{q^{\lambda}a_i}{b_k}} \right) . 
\end{equation} For any $v={}^t({}^tv_0, \ldots ,{}^tv_N) \in \ker G(q^{\lambda}a_i)\cap \mathcal{K}\, (v_k\in \mathcal{V})$, we get \[ 0=\{ 1-\widehat{F}+\sum _{k=1}^N(1-q^{\lambda}a_ib_k^{-1})^{-1}F_k\} v=\{ 1_m\oplus _{k=1}^Nq^{\lambda}(1-a_ib_k^{-1})(1-q^{\lambda}a_ib_k^{-1})^{-1}1_m\} v.\] Here if $a_i\notin \{ b_k\, ;\, k\in I\}$, then $v=0.$ Meanwhile, if $a_i=b_j\, (j\in I)$ and $k\ne j$, then $v_k=0$. Therefore, we find $v_j\in \ker B_j.$ From the above, we obtain \begin{equation} \dim \left( \left( \frac{dG}{dx}(q^{\lambda}a_i)\right) ^{-1}(\mathrm{im}\, G(q^{\lambda}a_i))\cap \ker \, G(q^{\lambda}a_i)\cap \mathcal{K}\right) =\begin{cases} 0 & (a_i\notin \{ b_k\, ;\, k\in I\} )\\ \dim \ker B_j=n_1^j & (a_i=b_j) \end{cases}. \end{equation} (iii-c) If $q^{\lambda}a_i=b_{j'}\in q^{\lambda}Z_A\cap \{ b_k\, ;\, k\in I\} \, (j'\in I),$ then we put $w_k={}^t({}^tw_{k,0}, \ldots ,{}^tw_{k,N}) \in \mathcal{V}^{N+1}\, (w_{k,l}\in \mathcal{V},\, k=1,2)$ such that \[ w_2\in \ker G(q^{\lambda}a_i)\cap \mathcal{K},\ \ \ \frac{dG}{dx}(q^{\lambda}a_i)w_2=G(q^{\lambda}a_i)w_1.\] Hence we find $\ker G(q^{\lambda}a_i)\cap \mathcal{K}=\ker F_{j'}\cap \mathcal{K}$ and $q^{\lambda}\ne 1.$ Therefore, we get $w_{2,j'}=0$. Moreover, $G(q^{\lambda}a_i)w_1$ spans ${}^t(0,\ldots ,0,\mathcal{V},0,\ldots ,0)$ from $(\ast \ast )$. If $a_i\notin \{ b_k\, ;\, k\in I\}$, then we get $w_{2,k}=0\, (k\ne j')$ from $\frac{dG}{dx}(q^{\lambda}a_i)w_2=G(q^{\lambda}a_i)w_1$. Therefore, $w_2=0.$ Meanwhile, if $a_i=b_j\, (j\in I)$ and $k\ne j,$ then we find $w_{2,k}=0$ and $w_{2,j}\in \ker B_j.$ From the above, we obtain \begin{equation} \dim \left( \left( \frac{dG}{dx}(q^{\lambda}a_i)\right) ^{-1}(\mathrm{im}\, G(q^{\lambda}a_i))\cap \ker \, G(q^{\lambda}a_i)\cap \mathcal{K}\right) =\begin{cases} 0 & (a_i\notin \{ b_k\, ;\, k\in I\} )\\ \dim \ker B_j & (a_i=b_j) \end{cases}. 
\end{equation} (iv)\ (Change of $S_0$ due to the $\mathcal{L}$) For $\theta \in \mathbb{C}$ and any $v={}^t({}^th, \ldots ,{}^th) \in \ker (\theta 1_{(N+1)m}-G_0) \cap \mathcal{L}\ (h \in \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m))$, we find \[ 0=(\theta 1_{(N+1)m}-G_0)v={}^t((\theta -q^{\lambda})\, {}^th , \ldots ,(\theta -q^{\lambda})\, {}^th).\] If $\theta =q^{\lambda},$ then $h \in \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m)$. Therefore, we obtain $\dim (\ker (\theta 1_{(N+1)m}-G_0) \cap \mathcal{L})=\dim \mathcal{L}=\dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m).$ (v)\ (Change of $S_{\infty}$ due to the $\mathcal{L}$) For $\kappa \in \mathbb{C}$ and $v={}^t({}^th, \ldots ,{}^th) \in \ker (\kappa 1_{(N+1)m}-G_{\infty}) \cap \mathcal{L}\ (h \in \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m)),$ we get \[ 0=(\kappa 1_{(N+1)m}-G_{\infty})v=(\kappa -q^{\lambda}b_{\infty})v.\] If $\kappa =q^{\lambda}b_{\infty},$ then we obtain $\dim (\ker (\kappa 1_{(N+1)m}-G_{\infty}) \cap \mathcal{L})=\dim \mathcal{L}=\dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m)$. (vi)\ (Change of $S_{\mathrm{div}}$ due to the $\mathcal{L}$) For any $k \in I$, $\mathcal{L}$ is a subspace of $\ker G(b_k)=\ker F_k.$ Therefore, we obtain \[ \dim (\ker G(b_k) \cap \mathcal{L})=\dim \mathcal{L}=\dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m).\ \square \] From the above, Theorem \ref{index} follows.\\ \\ \rm{\textbf{Theorem \ref{index}}\ (rigidity index)}\ \ \it{If $(\ast ) ,(\ast \ast )$ are satisfied, then $mc_{\lambda}$ preserves the rigidity index of the Fuchsian equation $E_R$}\rm{.}\\ $Proof.$\ \ In the case $\lambda =0,$ it is obvious from Proposition \ref{404}. We assume $\lambda \ne 0$. Let $\overline{G}(x)=\sum _{k=0}^N\overline{G}_kx^k\ (\overline{G}_{\infty}=\overline{G}_N)$ be the coefficient of the canonical form of $E_{\boldsymbol{\overline{F}},\boldsymbol{b}}\, (\boldsymbol{\overline{F}}=mc_{\lambda}(\boldsymbol{B}))$. 
It is clear that $q^{\lambda}\ne 1,q^{\lambda}b_{\infty}\ne b_{\infty}.$ Here, letting $\alpha _{i_0}^0=1$ and $\alpha _{i_{\infty}}^{\infty}=q^{\lambda}b_{\infty}$, we get \begin{equation} \dim \ker (A_0-1_m)=m_{i_0,1}^0,\ \ \dim \ker (A_{\infty}-q^{\lambda}b_{\infty}1_m)=m_{i_{\infty},1}^{\infty}. \end{equation} Moreover, we set \begin{equation} b_k=\begin{cases} a_k & (k\in \{ 1,\ldots ,r\} )\\ c_k & (k\in \{ r+1,\ldots ,N\} ,c_k\notin Z_A)\end{cases},\ \ d_k=\dim \ker B_k,\ \ d=\sum _{k=1}^Nd_k . \end{equation} Then we find \begin{equation} \dim (mc_{\lambda}(\mathcal{V}))=(N+1)m-m_{i_0,1}^0-m_{i_{\infty},1}^{\infty}-d,\ \ \ d=\sum _{k=1}^rn_1^k. \end{equation} From these relations, we obtain \begin{align} p_0&=\dim \ker (\overline{G}_0-q^{\lambda}1_{\dim (mc_{\lambda}(\mathcal{V}))})=Nm-m_{i_{\infty},1}^{\infty}-d,\\ p_{\infty}&=\dim \ker (\overline{G}_{\infty}-b_{\infty}1_{\dim (mc_{\lambda}(\mathcal{V}))})=Nm-m_{i_0,1}^0-d,\\ p_k&=\dim \ker \overline{G}(b_k)=Nm-m_{i_0,1}^0-m_{i_{\infty},1}^{\infty}-d+d_k\ (k\in \{ 1,\ldots ,N\} ). 
\end{align} From the above, the rigidity index $\mathop{\rm idx} (mc_{\lambda}(E_R))$ of the transformed equation is calculated as follows: \begin{align*} \mathop{\rm idx} &(mc_{\lambda}(E_R))\\ &=\sum _{i \ne i_0}\sum _{j=1}^{t_{i,1}^0}(m_{i,j}^0)^2+\sum _{j=2}^{t_{i_0,1}^0}(m_{i_0,j}^0)^2+(p_0)^2+\sum _{i \ne i_{\infty}}\sum _{j=1}^{t_{i,1}^{\infty}}(m_{i,j}^{\infty})^2+\sum _{j=2}^{t_{i_{\infty},1}^{\infty}}(m_{i_{\infty},j}^{\infty})^2+(p_{\infty})^2\\ &\ \ +\sum _{i=1}^r\sum _{j=2}^{k_i}(n_j^i)^2+\sum _{i=r+1}^l\sum _{j=1}^{k_i}(n_j^i)^2+\sum _{k=1}^N(p_k)^2-N\{ \dim (mc_{\lambda}(\mathcal{V}))\} ^2\\ &=\sum _{i=1}^{l_0}\sum _{j=1}^{t_{i,1}^0}(m_{i,j}^0)^2-(m_{i_0,1}^0)^2+(p_0)^2+\sum _{i=1}^{l_{\infty}}\sum _{j=1}^{t_{i,1}^{\infty}}(m_{i,j}^{\infty})^2-(m_{i_{\infty},1}^{\infty})^2+(p_{\infty})^2\\ &\ \ +\sum _{i=1}^r\sum _{j=1}^{k_i}(n_j^i)^2-\sum _{i=1}^r(n_1^i)^2+\sum _{k=1}^N(p_k)^2-N\{ \dim (mc_{\lambda}(\mathcal{V}))\} ^2\\ &=\mathop{\rm idx} (E_R)-(m_{i_0,1}^0)^2+(Nm-m_{i_{\infty},1}^{\infty}-d)^2-(m_{i_{\infty},1}^{\infty})^2+(Nm-m_{i_0,1}^0-d)^2-\sum _{i=1}^r(n_1^i)^2\\ &\ \ +\sum _{k=1}^N(Nm-m_{i_0,1}^0-m_{i_{\infty},1}^{\infty}-d+d_k)^2-N\{ (N+1)m-m_{i_0,1}^0-m_{i_{\infty},1}^{\infty}-d\} ^2+Nm^2\\ &=\mathop{\rm idx} (E_R). \end{align*} This completes the proof of the theorem.\ $\square$\\ \\ \textbf{{\Large Acknowledgements}}\\ We would like to express our sincere gratitude to T.Oshima, Y.Haraoka, K.Takemura, D.Yamakawa, K.Hiroe, H.Kawakami and S.Ishizaki for their helpful comments and information about the middle convolution. We wish to thank M.Jimbo, M.Noumi, K.Kajiwara, Y.Ohyama, N.Joshi, T.Masuda, T.Takenawa, T.Tsuda, M.Murata, and Y.Katsushima for discussions and interest. This work is partially supported by JSPS KAKENHI no. 24540205. \end{document}
\begin{document} \title {Holonomic and Legendrian parametrizations of knots} \author {Joan S. Birman \thanks{\noindent The first author acknowledges partial support from the U.S.National Science Foundation under Grants DMS-9705019 and DMS-9973232. The second author is a graduate student in the Mathematics Department of Columbia University. She was partially supported under the same grant, and also under DMS-98-10750.} \\ [email protected] \and Nancy C. Wrinkle \\ [email protected]} \date{J.Knot Theory and its Ramifications {\bf 9} No. 3 (2000), p. 293-309.} \maketitle \centerline{Received 8 April 1999} \begin{abstract} \noindent Holonomic parametrizations of knots were introduced in 1997 by Vassiliev, who proved that every knot type can be given a holonomic parametrization. Our main result is that any two holonomic knots which represent the same knot type are isotopic in the space of holonomic knots. A second result emerges through the techniques used to prove the main result: strong and unexpected connections between the topology of knots and the algebraic solution to the conjugacy problem in the braid groups, via the work of Garside. We also discuss related parametrizations of Legendrian knots, and uncover connections between the concepts of holonomic and Legendrian parametrizations of knots. \end{abstract} \noindent \section{Introduction:} \label{section:Introduction} Let $f:\hbox{\sl I\kern-.18em R \kern-.3em} \to \hbox{\sl I\kern-.18em R \kern-.3em}$ be a $C^\infty$ periodic function with period $2\pi$. Following Vassiliev \cite{Vass}, use $f$ to define a map $\tilde{f}:S^1 \to \hbox{\sl I\kern-.18em R \kern-.3em}^3$ by setting $\tilde{f}(t) = (-f(t), f'(t),-f''(t))$. Let $\pi$ be the restriction of $\tilde{f}$ to the first two coordinates. We call $\pi$ the {\em projection} of $K = \tilde{f}(S^1)$ (onto the $xy$ plane). 
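For a truncated Fourier series the 3-jet map $t \mapsto (-f(t), f'(t), -f''(t))$ is easy to compute explicitly. The following minimal numerical sketch is ours, not from the paper, and the helper name `holonomic_curve` is hypothetical:

```python
import numpy as np

def holonomic_curve(coeffs, t):
    """Vassiliev's 3-jet parametrization t -> (-f(t), f'(t), -f''(t))
    for f(t) = sum_k (a_k sin(kt) + b_k cos(kt)); coeffs = {k: (a_k, b_k)}."""
    f   = sum(a * np.sin(k * t) + b * np.cos(k * t) for k, (a, b) in coeffs.items())
    fp  = sum(k * (a * np.cos(k * t) - b * np.sin(k * t)) for k, (a, b) in coeffs.items())
    fpp = sum(-k * k * (a * np.sin(k * t) + b * np.cos(k * t)) for k, (a, b) in coeffs.items())
    return np.array([-f, fp, -fpp])

t = np.linspace(0.0, 2 * np.pi, 100)
# f(t) = cos(t) traces the round unknot (-cos t, -sin t, cos t)
x, y, z = holonomic_curve({1: (0.0, 1.0)}, t)
```

The projection onto the first two coordinates is then $(-\cos t, -\sin t)$, a circle traversed anticlockwise, which is the unknot example discussed below.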
It turns out that, with some restrictions on the choice of the defining function, $K$ will be a knot and $\pi$ will yield a knot diagram with some very pleasant properties, which we now begin to describe. We highlight our assumptions about $f$ with bullets, and describe their consequences: \begin{enumerate} \item [(1)] We assume that $f$ is chosen so that $\tilde{f}(S^1)$ is a knot $K\subset\hbox{\sl I\kern-.18em R \kern-.3em}^3$, i.e. \begin{itemize} \item There do not exist distinct points $t_1,t_2\in [0,2\pi)$ such that \\ $(-f(t_1),f'(t_1),-f^{\prime\prime}(t_1)) = (-f(t_2),f'(t_2),-f^{\prime\prime}(t_2)).$ \end{itemize} Note that this implies that double points in the projection which are off the $x$ axis are transverse. For, at a double point $(-f(t_1),f'(t_1)) = (-f(t_2),f'(t_2))$. The double point is transverse if the tangent vectors to the projected image are distinct at $t_1$ and $t_2$. The tangent vectors are $(-f'(t_1),f^{\prime\prime}(t_1))$ and $(-f'(t_2),f^{\prime\prime}(t_2))$. They are distinct because $\tilde{f}$ is an embedding, which implies that $f^{\prime\prime}(t_1) \neq f^{\prime\prime}(t_2)$. \item [(2)] The reasoning used in (1) above shows that the tangent to $\pi(K)$ at an instant when $\pi(K)$ crosses the $x$ axis is vertical. We observe that this implies that if a double point occurred at an axis crossing, then it would necessarily be a point where the two branches of $\pi(K)$ had a common tangent. We rule out this behavior, which is not allowed in a `regular' knot diagram, by requiring that $f$ be chosen so that all double points are away from the $x$ axis, i.e. \begin{itemize} \item If $(-f(t_1),f'(t_1)) = (-f(t_2),f'(t_2))$ then $f'(t_1)\neq 0$. \end{itemize} \item [(3)] The singularities in a regular knot diagram are required to be at most a finite number of transverse double points. To achieve that we need one more assumption, i.e. 
\begin{itemize} \item There do not exist distinct points $t_1,t_2,t_3\in [0,2\pi )$ such that $(-f(t_1),f'(t_1)) = (-f(t_2),f'(t_2)) = (-f(t_3),f'(t_3)) $. \end{itemize} \end{enumerate} Vassiliev observed that these conditions hold for generic $f$. A simple example is obtained by taking $f(t) = \cos(t)$, giving the unknot which is pictured in Figure \ref{figure:holonomic unknot}. \begin{figure} \caption{Three-space view of the unknot which is defined by $f(t) = \cos(t).$} \label{figure:holonomic unknot} \end{figure} Two additional examples are given in Figure \ref{figure:2-braid unknots}, which shows the projections of the knots which are defined by the functions $f_\pm(t) = \cos(t) \pm \sin(2t)$.\\ \begin{figure} \caption{Two additional representations of the unknot} \label{figure:2-braid unknots} \end{figure} \noindent There is an immediate suggestion of a closed braid in this parametrization, for the following reasons. In the half-space $y > 0$, we know that $f'(t) > 0$, so $-f(t)$ is decreasing. Similarly, in the half-space $y < 0$, we have that $-f(t)$ is increasing. Since $f$ is assumed to be generic, if ${\cal K}$ is crossing from the half-space $y < 0$ to the half-space $y > 0$ at $t_0$ then $z = -f''(t_0) < 0$ and if it is crossing from the half-space $y > 0$ to the half-space $y < 0$ at $t_0$ then $z= -f''(t_0) > 0$. Thus the projected image of $K$ on the $xy$ plane winds continually in an anticlockwise sense (anticlockwise because the $x$ coordinate is $-f(t)$). The only reason it may not already be a closed braid is that there may not be a single point on the $x$ axis which separates all of the axis-crossings with $f''(t) > 0$ from the crossings where $f''(t) < 0$. An example of a holonomic knot which is not in braid form is pictured in Figure \ref{figure:winding}. 
\begin{figure} \caption{The function $f(t) = \sin(t) + 4\sin(2t) + \sin(4t) + 1.5\sin(5t)$ defines a holonomic trefoil that winds continually anticlockwise but does not have a single point about which it winds.} \label{figure:winding} \end{figure} \begin{enumerate} \item[(4)] Analyzing the difficulty, we see that the number of zeros in the $x$ coordinate, i.e. in the graph of $f(t)$, is smaller than the number of zeros in the $y$ coordinate, i.e. in the graph of $f'(t)$. We add one more requirement: \begin{itemize} \item The number of zeros in one cycle of $f$ is the same as the number of zeros in one cycle of $f'$. \end{itemize} \end{enumerate} When $f(t)$ is chosen to satisfy (1)-(4) our parametrization gives a closed braid. The braid index is then one-half the number of zeros in one cycle of $f$ (or of $f'$). When all these conditions are satisfied our knot is said to have a {\em holonomic parametrization}. \\ \noindent There is more to be learned from elementary observations. Consult Figure \ref{figure:signs of crossings}(a), which shows four little arcs in the projection of a typical ${\cal K} = \tilde{f}(S^1)$ onto the $xy$ plane. \begin{figure} \caption{Determining the sign of a crossing: Figure (a) shows the projected images onto the $xy$ plane; Figure (b) shows their lifts to 3-space.} \label{figure:signs of crossings} \end{figure} The four strands are labeled 1,2,3,4. First consider strands 1 and 2. Both are necessarily oriented in the direction of decreasing $x$ because they lie in the half-space defined by $f'(t) > 0$. Since $f'$ is decreasing on strand 1, it follows that $f^{\prime\prime}$ is negative on strand 1, so $-f^{\prime\prime}$ is positive, so strand 1 lies above the $xy$ plane. Since $f'$ is increasing on strand 2, it also follows that strand 2 lies below the $xy$ plane. 
Thus the crossing associated to the double point in the projection must be negative, as in the top sketch in Figure \ref{figure:signs of crossings}(b), and in fact the same will be true for {\em every} crossing in the upper half-plane. For the same reasons, the projected image of every crossing in the lower half of the $xy$ plane must come from a positive crossing in 3-space. Thus, up to Reidemeister II moves, ${\cal K}$ is a closed braid which factorizes (up to cyclic permutation) as a product $NP$, where the open braid $N$ (resp. $P$) represents some number of negative (resp. positive) crossings. Moreover, the type of any such knot is completely defined by its singular projection onto the $xy$ plane. \\ \noindent As an example, the single double point in the left sketch in Figure \ref{figure:2-braid unknots} lifts to a negative crossing in 3-space, whereas that in the right sketch lifts to a positive crossing. Thus $\tilde{f}_+$ defines the 2-braid representative $\sigma_1^{-1}$ of the unknot and $\tilde{f}_-$ gives the representative $\sigma_1$.\\ \noindent A different example is given in Figure \ref{figure:holonomic trefoil}, which shows a holonomic parametrization of the positive trefoil knot as the 2-braid $\sigma_1^3 $. The graph of the defining function $f(t)$ is also illustrated. It has 2 local maxima and 2 local minima and 4 zeros, so the knot is defined by a holonomic 2-braid. \begin{figure} \caption{ The function $f(t) = \sin(t) + 4 \sin(2t) + \sin(4t)$ determines a holonomic positive $2$-braid trefoil. } \label{figure:holonomic trefoil} \end{figure} The parametrization in Figure \ref{figure:holonomic trefoil} was found by John Bueti, Michael Kinnally, and Felix Tubiana during an undergraduate summer research project \footnote {supported by NSF Grant DMS-98-10750} at Columbia University. Notice that the graph of $f(t)$ suggests a sawtooth. 
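The zero counts quoted for this trefoil function are easy to confirm numerically. The following check is ours, not part of the paper; it counts sign changes of $f$ and $f'$ over one full period:

```python
import numpy as np

def f(t):   # the defining function of the trefoil from the figure
    return np.sin(t) + 4 * np.sin(2 * t) + np.sin(4 * t)

def fp(t):  # its derivative
    return np.cos(t) + 8 * np.cos(2 * t) + 4 * np.cos(4 * t)

def count_zeros(g, n=4096, t0=0.01):
    # Count sign changes of g over one full period, closing the cycle
    # by periodicity; the offset t0 keeps the grid off the exact zeros.
    t = t0 + 2 * np.pi * np.arange(n) / n
    v = g(t)
    return int(np.sum(v * np.roll(v, -1) < 0))

print(count_zeros(f), count_zeros(fp))  # 4 zeros each, so a holonomic 2-braid
```

Both counts come out to 4, in agreement with the statement that the braid index is one-half the number of zeros in one cycle of $f$ (or of $f'$).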
They were able to generalize this example to 2-braid representatives of other type $(2,q)$ torus knots by the use of truncated Fourier approximations of certain sawtooth functions. However, their partial results are a little bit complicated to describe, and at this writing they have not proved that their functions work for all $q$. It is clear that much remains to be done. \\ \noindent Note that our holonomic knots are special cases of Kauffman's {\it Fourier} knots and Trautwein's {\it harmonic} knots, both of which are parametrized by three distinct truncated Fourier series in $\hbox{\sl I\kern-.18em R \kern-.3em}^3$ \cite{Kauff}, \cite{Traut}.\\ \noindent These knots were introduced into the literature in \cite{Vass}, where they appeared as a special case of the $n$-jet extension of $f: C^k\to\hbox{\sl I\kern-.18em R \kern-.3em}$, where $C^k$ is the disjoint union of $k$ copies of $S^1$, i.e., the map $\tilde{f}_n:C^k\to\hbox{\sl I\kern-.18em R \kern-.3em}^n$ which is defined by $\tilde{f}_n(t_1,\dots,t_k) = (f(t_1,\dots,t_k),f'(t_1,\dots,t_k),\dots,f^{(n-1)}(t_1,\dots,t_k))$. (Remark: We have changed Vassiliev's conventions slightly because the 3-jet extension, as he used it, results in sign conventions which will be confusing for knot theorists.) Vassiliev called his $n$-knots {\em holonomic knots} and studied them. One of his results for $n=3$ was: \ \noindent {\bf Theorem \cite{Vass}:} {\it Every tame knot type in $\hbox{\sl I\kern-.18em R \kern-.3em}^3$ can be represented by a holonomic closed braid of some (in general very high) braid index.} \ \noindent In \cite{Vass} Vassiliev asked the question: ``Is it true that any two holonomic knots in $\hbox{\sl I\kern-.18em R \kern-.3em}^3$ represent the same knot type if and only if they are isotopic in the space of holonomic knots?" Figure \ref{figure:non-holonomic isotopy} gives an example of a holonomic and a non-holonomic isotopy. 
\begin{figure} \caption{Holonomic and non-holonomic isotopies on fragments of a knot diagram.} \label{figure:non-holonomic isotopy} \end{figure} \\ \noindent The main results in this note are a very simple new proof of a sharpened version of Vassiliev's Theorem and an affirmative answer to his question: \begin{theorem} \label{theorem:Vassiliev} Every tame knot type in $\hbox{\sl I\kern-.18em R \kern-.3em}^3$ can be represented by a holonomic closed braid ${\cal H}$. Moreover, the braid index of ${\cal H}$ can be chosen to be the minimal braid index of the knot type. \end{theorem} \begin{theorem} \label{theorem:holonomic isotopy} If two holonomic knots with defining functions $f_0$ and $f_1$ represent the same knot type, then there is a generic holonomic isotopy $F:S^1\times I\to\hbox{\sl I\kern-.18em R \kern-.3em}$ with $F(t,0)=f_0(t)$ and $F(t,1)=f_1(t)$. Thus the study of holonomic knot types is equivalent to the study of ordinary knot types. \end{theorem} \noindent Here is an outline of the paper. In $\S$\ref{section:background} we set up essential background. We will also state and prove Theorem 0, which may be of interest in its own right. In $\S$\ref{section:proofs} we prove Theorems 1 and 2. In $\S$\ref{section:Legendrian} we discuss parametrizations of Legendrian knots. Since the literature on Legendrian knots may not be well-known to knot theorists, we will discuss them in fairly elementary terms in $\S 4.1$, using insights which we gained as we struggled to understand the constraints imposed by the requirement that Legendrian knots be tangent to the standard tight contact structure on $\hbox{\sl I\kern-.18em R \kern-.3em}^3$. Propositions \ref{proposition:holonomic-Legendrian} and \ref{proposition:Legendrian cousins} of $\S 4.2$ uncover relationships between holonomic and Legendrian parametrizations of knots.
\ \noindent {\bf Remark:} The proofs of the results in this paper make very heavy use of the work of Garside \cite{Gar} and the related work in \cite{Adjan}, \cite{Epstein}, \cite{El-M}. Indeed, it appeared to us, as we worked out details, that many of Garside's subtler results seemed to be designed explicitly for holonomic knots! We note that this is the first time that we have encountered a direct natural connection between the algebraic solution to the word and conjugacy problems in the braid group, via the work of Garside and others, and the geometry of knots. This will become clear after the statement and proof of Theorem 0. \\ \noindent {\bf Acknowledgments:} We thank Oliver Dasbach, Sergei Chmutov and Serge Tabachnikov for stimulating discussions and helpful remarks. \section{Background and notation:} \label{section:background} We summarize below the main facts we will need from the published literature. The reader is referred to \cite{Bir} for background material on the Artin braid groups $\{{\bf B}_n; n =1,2,\dots \}$. We will use the standard elementary braid generators $\sigma_1,\dots,\sigma_{n-1}$, where $\sigma_i$ denotes a positive crossing of the $i^{th}$ and $(i+1)^{st}$ braid strands. Defining relations in ${\bf B}_n$ are: \begin{enumerate} \item [(A1).] $\sigma_i\sigma_j = \sigma_j\sigma_i$ if $|i-j|\geq 2$ for all $1\leq i,j\leq n-1.$ \item [(A2).] $\sigma_i\sigma_{i+1}\sigma_i = \sigma_{i+1}\sigma_i\sigma_{i+1}$, where $1\leq i \leq n-2.$ \end{enumerate} \noindent We shall use the symbols: \begin{enumerate} \item [] $X,Y,Z,\dots $ for words in the generators of ${\bf B}_n$. \item [] ${\bf X},{\bf Y},{\bf Z},\dots$ for the elements they represent in ${\bf B}_n$. \item [] ${\cal X},{\cal Y},{\cal Z},\dots$ for the associated cyclic words. \item [] $[{\cal X}],[{\cal Y}],[{\cal Z}],\dots$ for their conjugacy classes in ${\bf B}_n$. \item [] $P,P_1,P_2,\dots$ for positive words in the generators of ${\bf B}_n$.
\item [] $N,N_1,N_2,\dots$ for negative words in the generators of ${\bf B}_n$. \item [] $X = Y$ if $X$ and $Y$ represent the same element of ${\bf B}_n$. \item [] $H = NP$ if $H$ can be so represented, with $P$ positive and $N$ negative. \item [] ${\cal H} = N|P = P|N$ if the cyclic word ${\cal H}$ has such a representation. \end{enumerate} \ The vertical bar in the notation ${\cal H} = N|P = P|N$ can be interpreted geometrically as separating the closed braid into $N$egative (in the upper half of the $xy$ plane) and $P$ositive (in the lower half) pieces. \\ \noindent {\bf The contributions of Vassiliev.} We shall use the following results from \cite{Vass}: \begin{enumerate} \item [(V1).] If a knot is represented by a diagram in the $xy$ plane which has only negative (resp. positive) crossings in the upper (resp. lower) half-plane, then it may be modified by isotopy to a knot which has a holonomic parametrization. \item [(V2).] Every holonomic knot may be modified by a holonomic isotopy to a holonomic closed braid, i.e., a closed braid which splits as a product $N|P$. (Vassiliev calls them {\em normal} braids, but we prefer the term {\it holonomic} closed braid.) \item [(V3).] The following modifications in a holonomic closed braid $N|P$ are realized by holonomic isotopy: \begin{enumerate} \item Positive (resp. negative) braid equivalences in $P$ (resp. $N$). \item If $P=N_1P_1$ in ${\bf B}_n$, where $N_1$ is negative and $P_1$ is positive, replace $N|P$ by $NN_1|P_1$, with a similar move at the other interface. A special case of this move occurs when we add or delete $\sigma_j^{-1}\sigma_j$ at the interface. \item Insert $\sigma_n^{\pm 1}$ at either interface of the $n$-braid $N|P$ to obtain an $(n+1)$-braid, or the inverse of this move. \end{enumerate} \end{enumerate} \noindent {\bf The contributions of Garside.} Let ${\bf B}^+_n$ be the semigroup of positive words in ${\bf B}_n$ which is generated by $\sigma_1,\dots,\sigma_{n-1}$, with defining relations (A1) and (A2).
If $X,Y$ are words in ${\bf B}^+_n$ we write $X \doteq Y$ to indicate that $X$ and $Y$ are equivalent words in ${\bf B}^+_n$. In \cite{Gar} F. Garside proved that the natural map from ${\bf B}^+_n\to{\bf B}_n$ is an embedding, i.e., if $P_1,P_2$ are positive words in the generators of ${\bf B}_n$ then ${\bf P}_1 = {\bf P}_2$ if and only if $P_1 \doteq P_2$. The same is true for negative words and negative equivalences. \\ \noindent In \cite{Gar} Garside introduced the $n$-braid ${\bf \Delta}$, a `half-twist' which is defined by the word: $$ \Delta = \Delta_n = (\sigma_1\sigma_2\dots \sigma_{n-1})(\sigma_1\sigma_2\dots\sigma_{n-2})\cdots(\sigma_1\sigma_2)(\sigma_1)$$ and uncovered some of its remarkable properties. A {\em fragment of} $\Delta$ is any initial subword of one of the (many) positive words which are representatives of {\bf $\Delta$}. \begin{enumerate} \item [(G1).] For each $i = 1,2,\dots,{n-1}$ the weak commutativity relation $\Delta_n \sigma_i \doteq \sigma_{n-i}\Delta_n$ holds. \item [(G2).] For each $i = 1,2,\dots,{n-1}$ there are fragments of $\Delta$, say $U_i$ and $V_i$, such that $\Delta \doteq U_i\sigma_i \doteq \sigma_i V_i$, or equivalently $\sigma_i^{-1} = \Delta^{-1} U_i = V_i\Delta^{-1}$. \item [(G3).] (\cite{Gar}, \cite{Adjan}, \cite{Epstein}) For any ${\bf X}\in{\bf B}_n$ and any $X$ which represents ${\bf X}$ there exists a systematic procedure for converting $X$ to a unique {\em normal form} $\Delta^kP_1\cdots P_r$. In the normal form each $P_i$ is a fragment of $\Delta$, also each $P_i$ is a longest possible fragment of $\Delta$ in the class of all positive words which are positively equal to $P_i$. Finally, $k$ is maximal and $r$ is simultaneously minimal for all such representations. If one starts with $X=\Delta^iQ$, where $Q$ is positive, one finds the normal form by repeatedly `combing out' powers of $\Delta$ from $Q$, i.e. using the fact that if $k>i$, then $X=\Delta^{i+1}Q_1$ where $Q\doteq\Delta Q_1$. 
A finite number of such combings yields $P$. An examination of all positive words which represent the same element of ${\bf B}_n$ as $P$ produces the decomposition $P\doteq P_1\cdots P_r$. \item [(G4).] (\cite{Gar} and \cite{El-M}) For any conjugacy class $[{\cal X}]\in{\bf B}_n$ and any braid $X$ whose closed braid represents $[{\cal X}]$, there exists a systematic procedure for converting $X$ to a related normal form $\Delta^{k'}P'_1\cdots P'_{r'}$ where $k'$ is maximal and $r'$ is minimal for all such representations of words in the same conjugacy class. Call this a {\em summit form} for $[{\cal X}]$. The integers $k'$ and $r'$ are unique but the positive braid $P'_1\cdots P'_{r'}$ is not unique. However there is a finite collection of all such positive braids and it is unique. \item[(G5).] There is a systematic procedure for finding a summit form: Assume that $X\in{\bf B}_n$ is in the normal form of (G3). Garside proves that there exists a positive word $W$ which is a product $A_1A_2\cdots A_z$, where each $A_i$ is a fragment of $\Delta$, such that $W^{-1}XW = X'$, where $X'$ is a summit form. He also shows how to find $A_1,\dots,A_z$. Let $W_i=A_1A_2\cdots A_i$. Then each $W_i^{-1}XW_i = \Delta^{k_i}P_{1,i}P_{2,i}\cdots P_{r_{i},i}$ where $k_i\geq k_{i-1}$ and $r_i\leq r_{i-1}$, also $r=r_1$ and $r'=r_z$. \item[(G6).] If $X' = \Delta^{k'}P'_1\cdots P'_{r'}$ and $X'' = \Delta^{k'}P''_1\cdots P''_{r'}$ are both summit forms of $X$, then $X'$ and $X''$ are related by a series of positive conjugacies, as in (G5), with each $k_i = k'$ and each $r_i = r'$. \end{enumerate} \noindent We pause in our review of the background material to point out a connection between Garside's work and holonomic isotopy. 
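Before turning to that connection, a small computational aside (our own sketch, not part of Garside's treatment): the word $\Delta_n$ defined above can be generated mechanically, and two of its basic features checked directly, namely that it has $n(n-1)/2$ letters and that its underlying permutation is the half twist reversing the strands.

```python
def delta_word(n):
    """Garside's half-twist Delta_n as a list of generator subscripts:
    (s1 s2 ... s_{n-1})(s1 ... s_{n-2}) ... (s1 s2)(s1)."""
    return [i for m in range(n - 1, 0, -1) for i in range(1, m + 1)]

def permutation_of(word, n):
    """Underlying permutation of a positive braid word: each s_i acts as the
    transposition of adjacent strands i and i+1, applied left to right."""
    strands = list(range(1, n + 1))
    for i in word:
        strands[i - 1], strands[i] = strands[i], strands[i - 1]
    return strands

n = 5
w = delta_word(n)
print(len(w))                # n(n-1)/2 = 10 letters
print(permutation_of(w, n))  # [5, 4, 3, 2, 1]: the half twist reverses the strands
```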
\ \noindent {\bf Theorem 0:} {\it Given any open braid $H$ which is in holonomic form $NP$, the following hold:} \begin{enumerate} \item {\it The open braid $NP$ may be brought to Garside's form $\Delta^{-q}Q$, where $q$ is a non-negative integer and $Q\in{\bf B}^+_n$, by a holonomic isotopy.} \item {\it A further holonomic isotopy converts the open braid $\Delta^{-q}Q$ to its unique normal form $\Delta^kP_1P_2\cdots P_r$.} \item {\it A final holonomic isotopy converts the associated closed braid to one of its summit forms $\Delta^{k'}P'_1\cdots P'_{r'}$.} \end{enumerate} \noindent{\bf Proof of Theorem 0:} \begin{enumerate} \item We are given $H = NP$. If $N = \emptyset$ there is nothing to prove, so assume that $N \not= \emptyset$. Using only negative equivalences, comb out powers of $\Delta^{-1}$ to write $N$ in the form $N=\Delta^{-k}\sigma_{j_1}^{-1}\cdots\sigma_{j_{s-1}}^{-1}\sigma_{j_s}^{-1}$ where the word $\sigma_{j_1}^{-1}\cdots\sigma_{j_{s-1}}^{-1}\sigma_{j_s}^{-1}$ is not equivalent to any word which contains a power of $\Delta^{-1}$. By (V3) this move can be realized by a holonomic isotopy. If $s=0$ the theorem is true, so assume that $s\geq 1$. By (G2) we may find a fragment $U_{j_s}$ of $\Delta$ such that $\sigma_{j_s}^{-1} = \Delta^{-1}U_{j_s}$. Therefore by (V3) our closed braid is holonomically equivalent to $\Delta^{-k}\sigma_{j_1}^{-1}\cdots\sigma_{j_{s-1}}^{-1}\Delta^{-1}U_{j_s}P$, which by (G1) is holonomically equivalent to $\Delta^{-k-1}\sigma_{n-j_1}^{-1}\cdots\sigma_{n-j_{s-1}}^{-1}U_{j_s}P.$ Induction on $s$ completes the proof. \item To change $\Delta^{-k}Q$ to normal form for its word class, first comb out powers of $\Delta$ from the positive part $Q$. Then put the new positive part into Garside's normal form. Both of these steps are achieved by positive braid equivalences, so by (V3) this part of the work is realizable by a holonomic isotopy. 
\item Following (G5) we move the normal form just achieved to a summit form and check that this move is holonomic. When we consider conjugates of $H$ in (G5), Garside tells us that it is enough to work with conjugates $W^{-1}HW$, where $W$ is positive. If $H$ was holonomic before conjugating by $W$, then the positivity of $W$ keeps the new braid holonomic after conjugation. $\|$ \end{enumerate} \noindent {\bf The contributions of Markov.} \ We will need to use Markov's Theorem (see \cite{Bir} or \cite{Morton}) in our work. We state it in the form in which it will be most useful in this paper: \\ \noindent {\bf Markov's Theorem} Let $X\in {\bf B}_p$ and $X'\in{\bf B}_q$ be two braids whose associated closed braids ${\cal X}, {\cal X}'$ define the same oriented knot type. Then there is a sequence of conjugacy classes in the braid groups $\{{\bf B}_n, \ 1\leq n <\infty\}$: \begin{center} $[{\cal X}] = [{\cal X}_0] \longrightarrow [{\cal X}_1] \longrightarrow [{\cal X}_2] \longrightarrow \cdots \longrightarrow [{\cal X}_{r-1}] \longrightarrow [{\cal X}_r] = [{\cal X}']$ \end{center} \noindent where $[{\cal X}_j] \subset {\bf B}_{n_j}$, and there are open braid representatives $X_{j,1}$ for each $1\leq j\leq r$ and $X_{j,2}$ for each $0\leq j\leq r-1$ of the conjugacy class $[{\cal X}_j]$ such that either: \begin{enumerate} \item [(M1).] $n_{j+1} = n_j + 1$ and $X_{j+1,1} = X_{j,2}\sigma_{n_j}^{\pm 1}$ (adding a trivial loop), or \item [(M2).] $ n_{j+1} = n_j - 1$ and $X_{j+1,1}\sigma_{n_{j+1}}^{\pm1} = X_{j,2}$ (deleting a trivial loop).\\ \end{enumerate} \section{The proofs:} \label{section:proofs} \noindent {\bf Proof of Theorem 1.} By a well-known theorem of Alexander (see \cite{Bir} for example), any knot type may be represented as a closed braid. 
In the collection of all closed braid representatives of a given knot type, let us suppose that ${\cal K}$ is the closure of the braid $\sigma_{\mu_1}^{\epsilon_1}\cdots\sigma_{\mu_r}^{\epsilon_r}$, where each $\epsilon_j = \pm 1$. Use (G2) to replace each $\sigma_{\mu_i}^{-1}$ by $\Delta^{-1}U_{\mu_i}$. Then use (G1) to push the $\Delta^{-1}$'s to the left. This replaces the given representative by a new closed braid ${\cal H}$ which is in the desired form $N|P$. But then, by (V1), ${\cal H}$ is isotopic to a holonomic closed braid. \ \noindent If we had chosen the braid representative of our knot ${\cal K}$ to have minimum braid index (which we can do without loss of generality) then the holonomic braid ${\cal H}$ will also have minimum braid index because the changes which we introduced to achieve $N|P$ form do not change the number of braid strands. $\|$ \noindent {\bf Remark:} This proof differs from Vassiliev's proof in the following way. We both begin with an arbitrary knot diagram. He modifies the given diagram by a move which eliminates positive (resp. negative) crossings in the upper (resp. lower) half-plane, at the expense of adding some number of anticlockwise loops which encircle points on the $x$-axis. He then changes the resulting holonomic knot to a holonomic closed braid by using holonomic Reidemeister II moves, as in our Figure \ref{figure:non-holonomic isotopy}. In our proof, we modify the original diagram to a closed braid, and then use a very small part of Garside's work, without changing the number of braid strands, to complete the proof. We will see this theme expanded in the proof of Theorem \ref{theorem:holonomic isotopy}.\\ \noindent {\bf Proof of Theorem 2.} We begin our proof of Theorem 2 with two holonomic knots which, by (V1) and (V2), can be assumed to be holonomic closed braids. Thus ${\cal H} = N|P$ and ${\cal H}' = N'|P'$.
By hypothesis, our closed braids define the same oriented link types in $\hbox{\sl I\kern-.18em R \kern-.3em}^3$. Markov's Theorem then gives us a chain of conjugacy classes of braids which connects them: $$[{\cal H}] = [{\cal X}_0] \longrightarrow [{\cal X}_1] \longrightarrow [{\cal X}_2] \longrightarrow \cdots \longrightarrow [{\cal X}_{r-1}] \longrightarrow [{\cal X}_r] = [{\cal H}'].$$ \noindent Let's consider the passage from the class $[{\cal X}_j]$ to the class $[{\cal X}_{j+1}]$. By Markov's Theorem, we must choose representative open braids $X_{j,1}, X_{j,2}$ of the conjugacy class $[{\cal X}_j]\subset {\bf B}_{n_j}$ and show that in either of the two cases (M1), (M2) our representatives and the Markov moves between them are holonomic. \begin{itemize} \item In the situation of (M1) the braid $X_{j,2}\in {\bf B}_{n_j}$ is not necessarily holonomic. We change the closure of $X_{j,2}$ to a holonomic closed braid $H_{j,2}$, if necessary, using Garside's methods: $H_{j,2} = \Delta_{n_j}^{-p}P$ where $p\geq 0$ and $P\in{\bf B}^+_{n_j}$. Then $\Delta_{n_j}^{-p}P \sigma_{n_j}^{\pm 1} = {\cal H}_{j+1,1}$ is holonomic for both choices of the exponent of $\sigma_{n_j}$. By (V3) the passage ${\cal H}_{j,2}\longrightarrow{\cal H}_{j+1,1}$, which adds a trivial loop at the interface between the positive and negative parts of ${\cal H}_{j,2}$, can be realized by a holonomic isotopy. \item In the situation of (M2) the braid $X_{j+1,1}\in {\bf B}_{n_j-1}$ is in general not holonomic, but we may replace it as above with a holonomic $(n_j-1)$-braid $H_{j+1,1}$ which is in the form $N|P$. But then ${\cal H}_{j,2}$ is also holonomic, for both choices of the exponent of $\sigma_{n_j-1}$, because the ambiguously signed letter is at the interface between the positive and negative parts of ${\cal H}_{j,2}$. The passage ${\cal H}_{j,2}\longrightarrow {\cal H}_{j+1,1}$, which deletes the trivial loop, can clearly be realized by a holonomic isotopy. 
\end{itemize} \noindent The initial and final braids ${\cal H}_{0,1}$ and ${\cal H}_{r,2}$ have not yet been defined; we take them to be the given representatives $N|P$ and $N'|P'$ of $[{\cal H}]$ and $[{\cal H}']$ respectively. \\ \noindent Thus we have produced a sequence of conjugacy classes of holonomic braids which joins $[{\cal H}]$ to $[{\cal H}']$, and in each conjugacy class we have two holonomic representatives ${\cal H}_{j,1}$ and ${\cal H}_{j,2}$, and for each $j=0,1,\dots,r-1$ we go from ${\cal H}_{j,2}$ to ${\cal H}_{j+1,1}$ via a holonomic isotopy. \\ \noindent The only point which remains to be proved is that, in each conjugacy class $[{\cal X}_j] \subset {\bf B}_{n_j}$ in the sequence, there is a holonomic isotopy between the two chosen holonomic representatives. The proof will be seen to be independent of $j$, so we simplify the notation, setting $n=n_j$. Assume that we have two holonomic representatives ${\cal H}, {\cal H}'$ of the same conjugacy class in ${\bf B}_n$. Our task is to prove that they are holonomically isotopic. \\ \noindent In view of Theorem 0, we may assume without loss of generality that ${\cal H}$ and ${\cal H}'$ are summit forms. By (G6), two summit forms in the same conjugacy class are related by conjugation by positive words which are fragments of $\Delta$. Again, conjugation by a positive word sends holonomic braids to holonomic braids. Thus we have the desired holonomic isotopy from ${\cal H}$ to ${\cal H}'$ and the proof of Theorem 2 is complete. $\|$ \section{Legendrian knots} \label{section:Legendrian} Theorem 2 was in some ways an unexpected result, because it is well-known that the analogous theorem fails in the case of Legendrian knots. With the goal of understanding this situation better we investigate, briefly, parametrizations of Legendrian knots. We begin by introducing Legendrian knots, via their parametrizations.
\subsection {Parametrizing Legendrian knots} \label{subsection:parametrizing Legendrian knots} Introduce coordinates $(x, v, z)$ in $\hbox{\sl I\kern-.18em R \kern-.3em}^3$. A knot is {\em Legendrian} if it is represented by a smooth embedding of $S^1 \rightarrow \hbox{\sl I\kern-.18em R \kern-.3em}^3$ whose image is everywhere tangent to the planes of the standard contact structure on $\hbox{\sl I\kern-.18em R \kern-.3em}^3$. The {\em standard} contact structure on $\hbox{\sl I\kern-.18em R \kern-.3em}^3$ is the nonintegrable field of planes defined by the differential 1-form $\alpha = z dx - dv$. A curve will be everywhere tangent to these 2-planes if it is in the kernel of $\alpha$, so if a knot can be parametrized by $(x(t), v(t), z(t))$, where $x,v,z$ are real-valued periodic functions with period (say) $2\pi$, then it is tangent to this contact structure if and only if $z(t) = dv/dx = v'(t)/x'(t)$ for all $t\in [0,2\pi]$. \\ \noindent A simple example is given by the parametrization $(-\cos(t), -\sin^3(t), -3 \sin(t) \cos(t))$. See Figure \ref{figure:Legendrian unknot}. In this example the projection onto the $xv$ plane (the so-called {\em front projection}) has cusps, but they do not represent points of non-smoothness in the space curve. Indeed, the front projection of a Legendrian knot always contains $2m$ cusps for some $m\geq 1$. Notice that the $z$ coordinate is the slope of the tangent to the curve in the front projection. This makes these projections similar to our $xy$ projections of holonomic knots, because the signs of the crossings in the front projection are completely determined without further data. Here is a simple rule for finding the signs of the crossings: The front projection has no vertical tangencies, because $z(t)$ would be undefined at a vertical tangency. Therefore at a double point both branches intersect a vertical line through the double point transversally.
Crossings are positive (respectively negative) if the two branches at a double point intersect a vertical line through the double point from opposite (respectively the same) directions. \\ \begin{figure}\label{figure:Legendrian unknot} \end{figure} \noindent It is simple to construct other examples (indeed all possible examples) of parametrized Legendrian knots. Let $x(t)$ and $z(t)$ be real-valued smooth periodic functions with period $2\pi$. Then $x(t), z(t)$ have Fourier expansions: $$x(t) = \sum_i a_i \sin(it) + b_i \cos(it), \ \ \ z(t) = \sum_j c_j \sin(jt) + d_j \cos(jt).$$ The third coordinate $v(t)$ has a similar expansion, determined up to an additive constant from the first two by the condition $v'(t) = x'(t) z(t)$. Of course we need to be sure that $x(t)$ and $z(t)$ are generic (i.e. that they satisfy restrictions analogous to the bulleted assumptions in $\S$1). \subsection{The Legendrian cousins of holonomic knots} \label{section:Legendrian cousins} Holonomic knots have a parametrization $(x(t),y(t),z(t))$ where $x(t) = -f(t)$, $y(t) = f'(t)$, and $z(t) = -f^{\prime\prime}(t)$, and $f$ is a function chosen as in section \ref{section:Introduction}. Following a suggestion of S. Chmutov, we now consider the differential 1-form to whose kernel this parametrization corresponds. The holonomic parametrization imposes the following conditions on the three coordinates: $-y(t) = x'(t)$ and $-z(t) = y'(t)$. If we assume $y(t) \neq 0$, then $z = (y dy)/dx$. Hence the 1-form $\beta$ whose kernel contains a holonomic knot is given by $\beta = zdx - ydy$. For the 1-form $\beta$ to define a contact structure there is an additional condition it must satisfy: it must have a non-vanishing volume form $ \beta\wedge d\beta$. Here $\beta \wedge d\beta = dx \wedge ydy \wedge dz $, so $\beta\wedge d\beta$ is only non-zero off the plane $y = 0$, and our earlier assumption that $y\neq 0$ is an essential part of the story.
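Both tangency computations in this section are easy to confirm symbolically. The following sketch (our own check, using sympy; the variable names are ours) verifies that the sample unknot of $\S 4.1$ lies in the kernel of $\alpha = z\,dx - dv$, and that a generic holonomic curve lies in the kernel of $\beta = z\,dx - y\,dy$:

```python
import sympy as sp

t = sp.symbols('t')

# 1) The sample Legendrian unknot (-cos t, -sin^3 t, -3 sin t cos t):
# tangency to ker(alpha), alpha = z dx - dv, means z*x' - v' vanishes.
x, v, z = -sp.cos(t), -sp.sin(t)**3, -3 * sp.sin(t) * sp.cos(t)
print(sp.simplify(z * sp.diff(x, t) - sp.diff(v, t)))

# 2) A generic holonomic curve (-f, f', -f''):
# tangency to ker(beta), beta = z dx - y dy, means z*x' - y*y' vanishes.
f = sp.Function('f')(t)
x, y, z = -f, sp.diff(f, t), -sp.diff(f, t, 2)
print(sp.simplify(z * sp.diff(x, t) - y * sp.diff(y, t)))
```

Both expressions simplify to $0$ identically.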
By construction our holonomic knots are tangent to this contact structure, and we show further that this contact structure is isomorphic to the standard one on $\hbox{\sl I\kern-.18em R \kern-.3em}^3$: \begin{proposition} \label{proposition:holonomic-Legendrian} Every holonomic knot is tangent to a contact structure on $\hbox{\sl I\kern-.18em R \kern-.3em}^3 - \{(x,y,z) \in \hbox{\sl I\kern-.18em R \kern-.3em}^3 : y = 0\}$ which is isomorphic to the standard contact structure on $\hbox{\sl I\kern-.18em R \kern-.3em}^3$ in the complement of the plane $\{(x, v, z) \in \hbox{\sl I\kern-.18em R \kern-.3em}^3 : v = 0\}$. \end{proposition} \np {\bf Proof:} \ We map the upper half-space $U_+ = \{(x,y,z) \in \hbox{\sl I\kern-.18em R \kern-.3em}^3 : y > 0\}$ to the upper half-space $V_+ = \{(x,v,z) \in \hbox{\sl I\kern-.18em R \kern-.3em}^3 : v > 0\}$ by the transformation $v = y^2/2$, and the lower half-space $U_- = \{(x,y,z) \in \hbox{\sl I\kern-.18em R \kern-.3em}^3 : y < 0\}$ to $V_+$ by the same transformation. This gives us transformations sending the two half-spaces of $\hbox{\sl I\kern-.18em R \kern-.3em}^3 - \{y = 0\}$ to the upper half-space $V_+$ of $\hbox{\sl I\kern-.18em R \kern-.3em}^3$. In $V_+$ we have the new relation $dv = ydy$, so our 1-form $\beta = zdx - ydy$ becomes the 1-form $\alpha = zdx - dv$. The contact structure defined by $\alpha$ on $V_+$ is exactly the restriction of the standard one on $\hbox{\sl I\kern-.18em R \kern-.3em}^3$. We can then extend it from $V_+$ to all of $\hbox{\sl I\kern-.18em R \kern-.3em}^3$, although on all of $\hbox{\sl I\kern-.18em R \kern-.3em}^3$ the transformation from $\beta$ to $\alpha$ is not invertible. (It is not invertible along the plane $v = 0$, where $z = dv/dx$ goes to infinity.) $\|$ \noindent In view of Proposition \ref{proposition:holonomic-Legendrian}, a simple modification of our holonomic parametrizations suggests itself as a method for parametrizing a related class of Legendrian knots.
Let ${\cal H}$ be any knot type in $\hbox{\sl I\kern-.18em R \kern-.3em}^3$. Using Theorem 1, choose a smooth periodic function $f(t)$ such that $(-f(t),f'(t),-f^{\prime\prime}(t))$ is a holonomic parametrization $H(t)$ of ${\cal H}$. Let $(-f(t),f'(t)^3,-3f'(t)f^{\prime\prime}(t))$ be a parametrization $L(t)$ of a new knot ${\cal L}$. We call ${\cal L}$ the {\it Legendrian cousin} of the holonomic knot ${\cal H}$. We already gave one example in Figure 7, which illustrated the Legendrian cousin of the unknot of Figure 1. A different example is given in Figure 8, which shows the projection onto the $xy$-plane of the Legendrian cousin of the holonomic trefoil of Figure 5. \begin{figure}\label{figure:cousin of holonomic trefoil} \end{figure} \begin{proposition} \label{proposition:Legendrian cousins} Let ${\cal H}$ be any knot type in $\hbox{\sl I\kern-.18em R \kern-.3em}^3$. As above, let $H(t)$ be its holonomic parametrization and $L(t)$ the parametrization of ${\cal L}$, its Legendrian cousin. Then: \begin{enumerate} \item The parametrized curve $L(t)$ is Legendrian, relative to the standard contact structure $z dx -dv$ on $\hbox{\sl I\kern-.18em R \kern-.3em}^3 - \{(x, v, z) \in \hbox{\sl I\kern-.18em R \kern-.3em}^3 : v = 0\}$. \item The curves $H(t)$ and $L(t)$ are closed braids with respect to the $z$ axis. Their projected images onto the $xy$ plane define equivalent immersions, i.e., there is a $1-1$ correspondence between the singularities of the two projections and an isotopy from one projection to the other induced by the isomorphism of $\hbox{\sl I\kern-.18em R \kern-.3em}^2$ taking coordinates $(x,y) \to (x,v)$. \item All crossings in the front projection for ${\cal L}$ are negative. \item Whereas any knot type ${\cal H}$ is represented by a holonomic $H(t)$, the knot type ${\cal L}$ which is represented by its Legendrian cousin $L(t)$ is always a fibered knot, i.e. 
its complement fibers over the circle with fiber a compact orientable surface $F$, with $\partial F = L$. In fact, ${\cal L}$ belongs to a very special class of fibered knots which can be represented by closed negative braids. \end{enumerate} \end{proposition} \np {\bf Proof:} \ To prove (1), notice that $$-3f'(t)f^{\prime\prime}(t) = {d\left((f'(t))^3\right)/dt \over -f'(t)}.$$ To prove (2), let $D(H)$ and $D(L)$ be the parametrized immersed curves $(-f(t),f'(t))$ \\ and $(-f(t),f'(t)^3)$ in the $xy$ plane so that $D(H)$ is a holonomic projection for ${\cal H}$ and $D(L)$ is a front projection for ${\cal L}$. Then the diagrams $D(L)$ and $D(H)$ are the same immersions, up to the isomorphism of $\hbox{\sl I\kern-.18em R \kern-.3em}^2$ which changes coordinates from $(x, y)$ to $(x, v)$. For observe that $$(-f(t_1),f'(t_1)) = (-f(t_2),f'(t_2)) \Longleftrightarrow (-f(t_1),f'(t_1)^3) = (-f(t_2),f'(t_2)^3).$$ Notice that the cusps in $D(L)$ occur precisely at the points where $D(H)$ crosses the $x$ axis. The assertion about closed braids is then clear, from our work in $\S$1 of this paper. \\ \noindent (3) The statement about the signs of the crossings in ${\cal L}$ follows from the fact that, using the rule given in Section \ref{subsection:parametrizing Legendrian knots}, the signs of the crossings in $D(L)$ are always negative. However, in $D(H)$ they are negative in the upper half-plane and positive in the lower half-plane. It's easy to understand why this is the case. At a double point in both $D(H)$ and $D(L)$ we have distinct values $t_1,t_2$ with $f(t_1) = f(t_2)$ and $f'(t_1) = f'(t_2)$. The signs of the crossings are determined by the $z$-values on the two branches, that is by $-f^{\prime\prime}(t_1)$ and $-f^{\prime\prime}(t_2)$ in $H(t)$, but by $-3f^{\prime\prime}(t_1)f'(t_1)$ and $-3f^{\prime\prime}(t_2)f'(t_2)$ in $L(t)$.
From this it follows that at a double point in the upper half-plane, where $f'(t_1)=f'(t_2) > 0$, the signs of the crossings will be the same in $H(t)$ and $L(t)$, whereas at a double point in the lower half-plane, where $f'(t_1)=f'(t_2) < 0$, the signs of the crossings in $L(t)$ are opposite to those in $H(t)$. Hence all crossings in $L(t)$ are negative. An example can be seen in Figures 8 and 5. The holonomic knot in Figure 5 is a positive type (2,3) torus knot, whereas the Legendrian knot in Figure 8 is a negative type (2,3) torus knot.\\ \noindent (4) Finally, it is well-known (see, for example, \cite{St}) that knots which can be represented by closed negative or positive braids are a very special subclass of fibered knots.$\|$ \noindent Proposition 2 shows that the failure of the contact structure defined by $\beta$ to extend across the plane $y=0$ has profound consequences. By Theorem 2 an arbitrary isotopy between holonomic knots can always be deformed to a holonomic isotopy, but it should now be clear that we cannot expect any such result for the Legendrian cousins of holonomic knots. Indeed, our proofs of Theorems 1 and 2 above depended heavily on our ability to move across the plane $y=0$ via holonomic isotopies, but for Legendrian cousins we are restricted to Legendrian isotopies in either half-space, so entirely new methods would be needed to prove an analogous theorem for Legendrian cousins. \\ \noindent {\bf Remark:} The Legendrian cousins of Proposition \ref{proposition:Legendrian cousins} can be generalized to an infinite family of Legendrian cousins of $H(t)$, with parametrizations $$L_k(t) = (-f(t),f'(t)^{2k+1},-(2k+1)f'(t)^{2k-1}f^{\prime\prime}(t)).$$ They share a common knot diagram, so they all represent the same knot type. In fact, they represent the same Legendrian knot type. An explicit Legendrian isotopy $L(t,s)$ from $L_k(t)\to L_m(t)$ was suggested by Oliver Dasbach.
It begins at $L(t,0) = L_k(t)$ and ends at $L(t,1) = L_m(t)$ and is defined by $L(t,s) =$ $$ (-f(t),(1-s)f'(t)^{2k+1}+sf'(t)^{2m+1},-(1-s)(2k+1)f'(t)^{2k-1}f^{\prime\prime}(t) -s(2m+1)f'(t)^{2m-1}f^{\prime\prime}(t))$$ \noindent \small {Joan S. Birman, Math. Dept, Barnard College of Columbia University,} \noindent \small {604 Mathematics, 2990 Broadway, New York, N.Y. 10027.}\\ \noindent \small {Nancy C. Wrinkle, Math. Dept., Columbia University,} \noindent \small {408 Mathematics, 2990 Broadway, New York, N.Y. 10027.} \end{document}
\begin{document} \title{Primitive Permutation Groups and Strongly Factorizable Transformation Semigroups} \author{Jo\~ao Ara\'ujo\footnote{Departamento de Matem\'atica and CMA, Faculdade de Ci\^encias e Tecnologia (FCT) Universidade Nova de Lisboa (UNL), 2829-516 Caparica, Portugal; [email protected]} \footnote{CEMAT-Ci\^{e}ncias, Faculdade de Ci\^{e}ncias, Universidade de Lisboa 1749--016, Lisboa, Portugal; [email protected]}, Wolfram Bentz\footnote{School of Mathematics \& Physical Sciences, University of Hull, Kingston upon Hull, HU6 7RX, UK; [email protected]}, and Peter J. Cameron\footnote{School of Mathematics and Statistics, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS, UK; [email protected]}} \date{} \maketitle \begin{abstract} Let $\Omega$ be a finite set and $T(\Omega)$ be the full transformation monoid on $\Omega$. The rank of a transformation $t\in T(\Omega)$ is the natural number $|\Omega t|$. Given $A\subseteq T(\Omega)$, denote by $\langle A\rangle$ the semigroup generated by $A$. Let $k$ be a fixed natural number such that $2\le k\le |\Omega|$. In the first part of this paper we (almost) classify the permutation groups $G$ on $\Omega$ such that for all rank $k$ transformations $t\in T(\Omega)$, every element in $S_t:=\langle G,t\rangle$ can be written as a product $eg$, where $e^2=e\in S_t$ and $g\in G$. In the second part we prove, among other results, that if $S\le T(\Omega)$ and $G$ is the normalizer of $S$ in the symmetric group on $\Omega$, then the semigroup $SG$ is regular if and only if $S$ is regular. (Recall that a semigroup $S$ is regular if for all $s\in S$ there exists $s'\in S$ such that $s=ss's$.) The paper ends with a list of problems. \end{abstract} \section{Introduction} A semigroup $S$ with set of idempotents $E$ and group of units $G$ is said to be {\em strongly factorizable} if $S=EG$.
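As a toy illustration of the definition (our own sketch, not taken from the paper): the full transformation monoid $T_n$ is itself strongly factorizable, since any map agrees with the idempotent collapsing each kernel class to a chosen transversal point, followed by a permutation. A brute-force confirmation for $n=3$:

```python
from itertools import product, permutations

n = 3
maps = list(product(range(n), repeat=n))  # all n^n elements of T_n, as tuples

def compose(a, b):
    """Product ab for maps acting on the right: x(ab) = (xa)b."""
    return tuple(b[a[x]] for x in range(n))

E = [t for t in maps if compose(t, t) == t]     # the idempotents of T_n
G = [tuple(p) for p in permutations(range(n))]  # the group of units S_n

EG = {compose(e, g) for e in E for g in G}
print(len(EG) == len(maps))  # True: T_3 = E * S_3 is strongly factorizable
```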
Denote by $S_n$ the symmetric group on a set $\Omega$ of cardinality $n$ and denote by $T_n$ the full transformation semigroup on the same set. It is clear that $S_n$ is the group of units of $T_n$. Given $t\in T_n$ the {\em rank} of $t$ is the cardinality of the set $\Omega t$. The first goal of this paper is to prove the following sequence of theorems. We note that, for $1<k<n-1$, the $k$-transitive or $k$-homogeneous groups are explicitly known. \begin{theorem}\label{rank2} Let $n\ge2$, and $G\le S_n$ be a group. The following are equivalent: \begin{enumerate} \item for all rank $2$ transformations $t\in T_n$ the semigroup $\langle G,t\rangle$ is strongly factorizable; \item $G$ is primitive. \end{enumerate} \end{theorem} \begin{theorem}\label{rankn} Let $G\le S_n$ and let $k$ be a fixed natural number such that $6\le k\le n$. The following are equivalent: \begin{enumerate} \item for all rank $k$ transformations $t\in T_n$ the semigroup $\langle G,t\rangle$ is strongly factorizable; \item $G=S_n$ or $G=A_n$ (with $n\ne k$) in their natural action on $n$ points. \end{enumerate} \end{theorem} \begin{theorem}\label{rank5} Let $n\ge 5$, and $G\le S_n$. The following are equivalent: \begin{enumerate} \item for all rank $5$ transformations $t\in T_n$ the semigroup $\langle G,t\rangle$ is strongly factorizable; \item $G$ is $5$-transitive or $G=A_6$ ($n=6$). \end{enumerate} \end{theorem} \begin{theorem}\label{rank4} Let $n\ge 4$, and $G\le S_n$. If $G$ is $4$-transitive, or $G$ is one of $A_5$ ($n=5$), or $M_{11}$ ($n=12$), then the semigroup $\langle G,t\rangle$ is strongly factorizable, for all rank $4$ transformations $t\in T_n$. Any other group satisfying this property contains $\mathrm{PSL}(2,2^p)$ ($n=2^p+1$), where $2^p-1$ is a (Mersenne) prime. \end{theorem} \begin{theorem}\label{rank3} Let $n \ge 3$, $G\le S_n$. 
If $G$ satisfies one of the following properties: \begin{enumerate} \item $G$ is $3$-transitive; \item $\mathrm{PSL}(2,q)\le G \le \mathrm{P}\Gamma \mathrm{L}(2,q)$ where $q \equiv 3~\mbox{ mod }~4$ ($n=q+1$); \item $\mathrm{PSL}(2,q)\le G \le \mathrm{P}\Sigma \mathrm{L}(2,q)$ where $q \equiv 1~\mbox{ mod }~4$ ($n=q+1$); \item $G=\mathrm{Sp}(2d,2)$, $d \ge 3$, in either of its $2$-transitive representations ($n=2^{2d-1}\pm 2^{d-1}$); \item $G=2^d: \mathrm{Sp}(d,2)$, $d \ge4$ and even ($n=2^d$); \item $G$ is one of $A_4$ ($n=4$), $\mathrm{PSL}(2,11)$ ($n=11$), $2^4:A_6$ ($n=16$), $2^6:G_2(2)$ or its subgroup of index $2$ ($n=64$), Higman--Sims ($n=176$), $\mathrm{Co}_3$ ($n=276$); \end{enumerate} then the semigroup $\langle G,t\rangle$ is strongly factorizable, for all rank $3$ transformations $t\in T_n$. Any other group satisfying the previous property must satisfy $\mathop{\mathrm{AGL}}(1,2^p)\le G\le\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,2^p)$ with $p$ and $2^p-1$ prime. \end{theorem} These results required the repeated use of the classification of finite simple groups. This completes the part of the paper dealing with semigroups $S$ in which $S=EG$. In the second part we turn to semigroups of the form $SG$, where $S\le T_n$ and $G$ is the normalizer of $S$ in $S_n$. The general goal is to decide if some properties of $SG$ carry to $S$. The main theorem is the following. \begin{theorem} Let $S\le T_n$ be a transformation monoid and $G$ be the normalizer of $S$ in the symmetric group $S_n$. Then $SG$ is regular if and only if $S$ is regular. \end{theorem} As often happens in semigroup theory, the proof is short but tricky. Finally we prove the following result that generalizes a well known theorem in groups. \begin{theorem} Let $S$ be a finite semigroup. Then every element of $S$ has a square root if and only if every element in $S$ belongs to a subgroup of odd order. 
\end{theorem} A semigroup $S$ is said to be {\em factorizable} if there exist two sets $A,B\subset S$ such that $S=AB$. The study of factorizable semigroups was prompted by the study of extensions (direct products, Zappa--Sz\'ep and Catino extensions \cite{catino}), but it went far beyond that initial motivation, leading to many papers in very different contexts (structural semigroup theory, topological semigroups, presentations, automata and languages, combinatorial semigroup theory, Morita equivalences, etc.). If $S=EG$, where $E$ is its set of idempotents and $G$ is its group of units, then $S$ is said to be {\em strongly factorizable}. The class of strongly factorizable semigroups contains the two most studied transformation semigroups: $T_n$, the full transformation monoid on a finite set, and the monoid of endomorphisms of a finite-dimensional vector space. As, by an argument similar to the one used to prove Cayley's Theorem in groups, every finite semigroup embeds in some finite $T_n$, it follows that every finite semigroup is a subsemigroup of a strongly factorizable semigroup. Recall that a semigroup $S$ is said to be {\em regular} if for all $a\in S$ there exists $b\in S$ (called an {\em inverse} of $a$) such that $a=aba$ and $b=bab$. If every element in a regular semigroup has only one inverse, then the semigroup is said to be {\em inverse}. Not every strongly factorizable semigroup is inverse (consider a non-commutative semigroup of idempotents with identity), but they are all regular semigroups. In fact, if $eg\in EG$ then $eg=(eg)(g^{-1}e)(eg)$ and $g^{-1}e=(g^{-1}e)(eg)(g^{-1}e)$. Finally, we note that in the context of inverse semigroups (where strongly factorizable semigroups are simply called factorizable) this topic is an entire body of knowledge on its own. (For details we suggest the lovely survey paper \cite{FitzGerald2009} and the references therein.)
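The two identities showing that every strongly factorizable semigroup is regular can be confirmed by brute force in a small case. The following Python sketch (illustrative helper names; right-action composition $x(fg)=(xf)g$) checks them for every idempotent $e$ and every unit $g$ of $T_3$:

```python
from itertools import product, permutations

def comp(f, g):
    """Right-action composition: x(fg) = (xf)g."""
    return tuple(g[f[x]] for x in range(len(f)))

def inv(g):
    """Inverse of a permutation given as a tuple of images."""
    h = [0] * len(g)
    for x, y in enumerate(g):
        h[y] = x
    return tuple(h)

n = 3
maps = [tuple(m) for m in product(range(n), repeat=n)]
E = [e for e in maps if comp(e, e) == e]        # idempotents of T_3
G = [tuple(p) for p in permutations(range(n))]  # units of T_3

# Check eg = (eg)(g^{-1}e)(eg) and g^{-1}e = (g^{-1}e)(eg)(g^{-1}e).
regular = all(
    comp(comp(comp(e, g), comp(inv(g), e)), comp(e, g)) == comp(e, g)
    and comp(comp(comp(inv(g), e), comp(e, g)), comp(inv(g), e)) == comp(inv(g), e)
    for e in E for g in G
)
print(regular)
```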
In the context of groups, it is usually assumed that at least one of $A$ and $B$ is a subgroup; many classifications are known, notably that of Liebeck, Praeger and Saxl~\cite{lps,lps2}, who found all factorizations of almost simple groups where both factors are subgroups. Throughout the twentieth century, the prevailing doctrine was that a problem about semigroups was considered solved once it had been reduced to a question about groups. This changed dramatically in this century, when it was realized that it would be much more productive for both sides to keep an ongoing conversation. One of the driving forces of this conversation has been the following general problem, whose study has led to significant research on permutation groups: \begin{quote} Classify the pairs $(G,a)$, where $a$ is a map on a finite set $\Omega$ and $G$ is a group of permutations of $\Omega$, such that the semigroup $\langle G,a\rangle$ generated by $a$ and $G$ has a given property $P$. \end{quote} A very important class of groups that falls under this general scheme is that of \emph{synchronizing groups}, groups of permutations on a set that together with any non-invertible map on the same set generate a semigroup containing a constant map (see \cite{steinberg,abc,abcrs,arcameron22,acs,cameron,neumann}). These groups are very interesting from a group-theoretic point of view and are linked to the \emph{\v{C}ern\'y conjecture}, a longstanding open problem in automata theory. Three other sample instances of the general question are the following: \begin{enumerate} \item Let $A\subseteq T_n$; classify the permutation groups $G\le S_n$ such that $\langle G,a\rangle$ is regular for all $a\in A$. For many different sets $A$, this problem has been considered in \cite{abc2,abc3,ac,AMS,lmm,lm,levi96,mcalister}, among others. \item Classify the permutation groups $G\le S_n$ such that for all $a\in T_n\setminus S_n$ the equality $\langle G,a\rangle\setminus G=\langle S_n,a\rangle\setminus S_n$ holds.
This problem was solved in \cite{AAC}. \item Classify the permutation groups $G\le S_n$ such that for all $a\in T_n\setminus S_n$ we have $\langle G,a\rangle \setminus S_n = \langle g^{-1}ag\mid g\in G\rangle$. This classification \cite{acmn} answered an old problem. \end{enumerate} We saw above the importance of strongly factorizable semigroups and the multitude of contexts in which they appear. The first goal of this paper is to continue the trend described above and classify the permutation groups that together with any transformation of a given rank $k$ generate a strongly factorizable semigroup. The second part of the paper deals with the following problem proposed by the third author. Observe that in the founding paper of factorizable semigroups \cite{tolo} the goal was to check when some given properties of $A$ and $B$ carry over to the factorizable oversemigroup $S:=AB$. Here we go in the converse direction: given $T=SG$, where $S\le T_n$ and $G$ is the normalizer of $S$ in $S_n$, find semigroup properties that carry over from $SG$ to $S$. This seems a sensible question, since in $SG$ we can take advantage of the group theory machinery, and hence checking a property might be easier in $SG$ than in $S$. The main result of this part of the paper says that {\em regularity} carries over from $SG$ to $S$. We now summarise the contents of the paper. In Section \ref{two} we classify permutation groups with what we call the {\em ordered $k$-ut property}. This is the cornerstone of our classification results. Section \ref{three} connects the group theory results of the previous section with the theorems on factorizable semigroups. Section \ref{four} deals with the semigroups $SG$, where $S\le T_n$ and $G$ is its normalizer in $S_n$. The paper finishes with a list of open problems.
\section{The ordered $k$-ut property}\label{two} A permutation group $G$ on $\Omega$ is said to have the \emph{$k$-universal transversal property} (or $k$-ut for short) if, given any $k$-subset $A$ and $k$-partition $P$ of $\Omega$, there exists $g\in G$ such that $Ag$ is a transversal to $P$. These groups were studied in connection with permutation groups $G$ such that $\langle G,a\rangle$ is regular for all maps $a$ of rank $k$. The groups satisfying the $k$-ut property for $3\le k\le n/2$ were partly classified in \cite{ac} (small corrections to the case of $3$-ut were made in \cite{abc}). A permutation group $G$ on $\Omega$ is said to have the \emph{ordered $k$-ut property} if, given any ordered $k$-subset $A=(a_1,\ldots,a_k)$ and ordered $k$-partition $\pi=(P_1,\ldots,P_k)$ of $\Omega$, there exists $g\in G$ such that $a_ig\in P_i$ for $i=1,\ldots,k$. Our goal is to classify the groups possessing ordered $k$-ut. Clearly, ordered $k$-ut implies the usual $k$-ut, so we need only look among permutation groups with $k$-ut. \subsection{Permutation group properties} A permutation group is \emph{$k$-primitive} if it is $k$-transitive and the stabiliser of $k-1$ points is primitive on the remaining points. A permutation group is \emph{generously $k$-transitive} if, given any $(k+1)$-set $M\subseteq\Omega$, the group induced on $M$ by its setwise stabiliser is the symmetric group of degree $k+1$. It is straightforward to prove that a generously $k$-transitive group is indeed $k$-transitive. The next result summarises the relationship between these concepts and the ordered $k$-ut property. \begin{prop}\label{p:basicprop} \begin{enumerate} \item A $k$-transitive group has the ordered $k$-ut property. \item For $k\ge 2$, ordered $k$-ut implies ordered $(k-1)$-ut. \item A permutation group $G$ which has $k$-ut and is generously $(k-1)$-transitive has ordered $k$-ut.
\item A permutation group $G$ with the ordered $k$-ut property is $(k-1)$-transitive. \item A permutation group $G$ with the ordered $k$-ut property is $(k-1)$-primitive.\label{list:primitive} \item For $k\ge 2$, if $G$ has ordered $k$-ut, its point stabiliser has ordered $(k-1)$-ut. \end{enumerate} \end{prop} \begin{proof} (a), (b) and (f) are straightforward. (c) Given a $k$-set $A$ and a $k$-partition $\pi$, there is an element of $G$ mapping $A$ to a transversal of $\pi$; premultiplying this element by an element in the setwise stabiliser of $A$ shows that we can map elements of $A$ to parts of $\pi$ in any order. (d) Let $(a_1,\ldots,a_{k-1})$ and $(b_1,\ldots,b_{k-1})$ be two ordered $(k-1)$-tuples of distinct points of $\Omega$. If $x$ is any point different from $a_1,\ldots,a_{k-1}$, then a permutation mapping $a_1,\ldots,a_{k-1},x$ to a transversal of the partition $\{b_1\},\ldots,$ $\{b_{k-1}\},\Omega\setminus\{b_1,\ldots,b_{k-1}\}$ maps the first $(k-1)$-tuple to the second. (e) Suppose that $G$ is not $(k-1)$-primitive; let $B$ be a non-trivial block of imprimitivity for the stabiliser of distinct $a_1,\ldots,a_{k-2}\in \Omega$. Let $A$ be a subset consisting of $a_1,\ldots,a_{k-2}$ and two points $b_1,b_2$ of $B$, and $P$ the partition into $\{a_1\}$, \dots, $\{a_{k-2}\}$, $B$, and the rest of $\Omega$. Any permutation mapping $a_i$ to $a_i$ for $i=1,\ldots,k-2$, maps $b_1,b_2$ either both into $B$ or both outside of $B$. Hence $G$ does not have the ordered $k$-ut property. $\Box$ \end{proof} \begin{prop}\label{p:ord2ut} A permutation group $G$ has the ordered $2$-ut property if and only if it is primitive. \end{prop} \begin{proof} Ordered $2$-ut implies primitivity, by (\ref{list:primitive}) above. Conversely, suppose $G$ is primitive. Then all orbital digraphs for $G$ are connected, and hence (since $G$ is transitive) strongly connected. Now let $A=\{a_1,a_2\}$ be a $2$-set and $\pi=\{P_1,P_2\}$ a $2$-partition.
Since the orbital graph with edge set $(a_1,a_2)^G$ is strongly connected, there is an edge with initial vertex in $P_1$ and terminal vertex in $P_2$; the element of $G$ mapping $(a_1,a_2)$ to this edge witnesses ordered $2$-ut. $\Box$\end{proof} The next proposition gives sufficient conditions for generous $k$-transitivity. \begin{prop} \begin{enumerate} \item Suppose that $G$ is $k$-transitive, and every orbital of the $(k-1)$-point stabiliser is self-paired. Then $G$ is generously $k$-transitive. \item Suppose that $G$ is $k$-transitive, and the non-trivial orbits of the stabiliser of $k$ points all have different sizes. Then $G$ is generously $k$-transitive. \end{enumerate} \end{prop} \begin{proof} (a) Take any $k+1$ points $a_1,\ldots,a_{k+1}$. By assumption, $G$ has an element fixing $a_1,\ldots,a_{k-1}$ and interchanging $a_k$ with $a_{k+1}$. Since the numbering of the points is arbitrary, the setwise stabiliser of the set of $k+1$ points induces every possible transposition on it. The transpositions generate the symmetric group. (b) This follows immediately from (a), since paired orbits have the same sizes. $\Box$\end{proof} \subsection{The classification of the groups with the ordered $k$-ut property} Trivially, $G \le S_n$ has the ordered $n$-ut property if and only if $G=S_n$. Any permutation group has the ordered $1$-ut property and, by Proposition \ref{p:ord2ut}, ordered $2$-ut is equivalent to primitivity. Ordered $k$-ut clearly implies $k$-ut, and hence (by Proposition~\ref{p:basicprop}(b)), $k'$-ut for all $1\le k'\le k$. Hence it remains to consider the groups arising in the classification of groups with $k$-ut from \cite{abc,ac} (given below), except that these results only classify the values of $n$ with $ \lfloor \frac{n+1}{2}\rfloor \ge k$. Below, we will deal with smaller values of $n$ by ad-hoc arguments.
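Proposition \ref{p:ord2ut} can be tested by brute force for small groups. The Python sketch below (illustrative helper names; groups built by closure under composition) confirms that the primitive group $C_5$ of prime degree has ordered $2$-ut, while the imprimitive cyclic group $C_4$, with blocks $\{0,2\}$ and $\{1,3\}$, does not:

```python
from itertools import combinations

def comp(f, g):                       # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in range(len(f)))

def closure(gens, n):
    """Group generated by gens, via closure under composition."""
    G = {tuple(range(n))}
    frontier = set(G)
    while frontier:
        frontier = {comp(h, s) for h in frontier for s in gens} - G
        G |= frontier
    return G

def ordered_2ut(G, n):
    """Brute-force test of the ordered 2-ut property."""
    for a1 in range(n):
        for a2 in range(n):
            if a1 == a2:
                continue
            for r in range(1, n):
                for P1 in combinations(range(n), r):
                    P1 = set(P1)   # ordered 2-partition (P1, complement)
                    if not any(g[a1] in P1 and g[a2] not in P1 for g in G):
                        return False
    return True

C5 = closure([(1, 2, 3, 4, 0)], 5)   # cyclic of prime degree: primitive
C4 = closure([(1, 2, 3, 0)], 4)      # transitive but imprimitive
print(ordered_2ut(C5, 5), ordered_2ut(C4, 4))  # True False
```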
\begin{prop} \label{p:ord6ut} Let $n\ge 6$ and $G\le S_n$. Then $G$ has the ordered $k$-ut property for some $6\le k \le n$ if and only if $G=S_n$ or $G=A_n$ (with $n\ne k$) in their natural action on $n$ points. \end{prop} \begin{proof} By \cite[Theorem 1.4]{ac}, for $n\ge 11$, the only groups with $6$-ut are $A_n$ and $S_n$; hence no other group has ordered~$6$-ut, and so no other group has ordered $k$-ut for $k\ge 6$ either. For $n\le 10$, the listed groups are the only ones that are $(k-1)$-transitive. Conversely, it is easy to check that the listed groups $A_n$ and $S_n$ have the ordered~$k$-ut property, for $6 \le k\le n$. $\Box$\end{proof} \begin{prop} \label{p:ord5ut} Let $n\ge 5$ and $G\le S_n$. Then $G$ has the ordered $5$-ut property if and only if it is $5$-transitive or $A_6$ ($n=6$). \end{prop} \begin{proof} For $n\ge 11$, a group with $5$-ut is $5$-homogeneous or $\mathrm{P}\Gamma\mathrm{L}(2,32)$ (with degree $33$) \cite[Theorem 1.5]{ac}. The $5$-homogeneous groups (with $n\ge 10$) are $5$-transitive and have ordered $5$-ut, while $\mathrm{P}\Gamma\mathrm{L}(2,32)$ is not $4$-transitive and so does not have ordered $5$-ut. For $n\le 10$, $A_6$, which clearly satisfies ordered~$5$-ut, is the only group that is $4$-transitive but not $5$-transitive. $\Box$\end{proof} \begin{prop} \label{p:ord4ut} Let $n\ge 4$ and $G\le S_n$. Then $G$ has the ordered $4$-ut property if it is $4$-transitive, $A_5$ ($n=5$), or $M_{11}$ ($n=12$). If there are any other groups with ordered $4$-ut, they contain $\mathrm{PSL}(2,2^p)$ ($n=2^p+1$), where $2^p-1$ is a (Mersenne) prime. \end{prop} \begin{proof} By \cite[Theorems 1.3, 1.6]{ac}, for $n\ge8$, a group with $4$-ut is $4$-homo\-geneous or $M_{11}$ ($n=12$), or possibly almost simple with socle $\mathrm{PSL}(2,q)$ where $q$ is prime or $q=2^p$ for some prime $p$ (with $n=q+1$). The $4$-homogeneous groups with $n\ge 8$ are $4$-transitive except for $\mathrm{PSL}(2,8)$, $\mathrm{P\Gamma L}(2,8)$ ($n=9$), and $\mathrm{P\Gamma L}(2,32)$ ($n=33$).
For $4\le n\le7$, the only groups that are $3$- but not $4$-transitive are $A_5$ ($n=5$) and $\mathrm{PGL}(2,5)$ ($n=6$). The $4$-transitive groups have ordered $4$-ut, and the Mathieu group $M_{11}$ ($n=12$) is generously $3$-transitive (the orbit lengths for the $3$-point stabiliser are $3$ and $6$), and thus also has ordered $4$-ut. Almost simple groups with socle $\mathrm{PSL}(2,q)$, with $q\ge5$ prime, are not $3$-primitive (the stabiliser of two points has a normal cyclic subgroup of composite order $q-1$). Now consider $G=\mathrm{P}\Gamma\mathrm{L}(2,2^p)$, for $2^p-1$ composite. Once again, these groups have the property that the $2$-point stabiliser has a regular normal subgroup which is cyclic of composite order; so they are not $3$-primitive. Finally, $A_5$ clearly has ordered $4$-ut. $\Box$\end{proof} We remark that computation shows that the groups $\mathrm{PSL}(2,8)$, $\mathrm{P}\Gamma\mathrm{L}(2,8)$ ($n=9$), and $\mathrm{P}\Gamma\mathrm{L}(2,32)$ ($n=33$) satisfy ordered $4$-ut. Before we consider the case $k=3$, we give an updated account of the status of the $3$-ut property. The following theorem combines results from \cite{ac} with the corrections from \cite{abc} and adds the (trivial) cases with $n=3,4$.
\begin{prop}\label{p:3ut} Let $n\ge 3$ and $G\le S_n$. Then $G$ has the $3$-ut property if it satisfies one of the following properties: \begin{enumerate} \item $G$ is $3$-homogeneous; \item $\mathrm{PSL}(2,q)\le G \le \mathrm{P}\Sigma \mathrm{L}(2,q)$ where $q \equiv 1 \mbox{ mod } 4$ ($n=q+1$); \item $G=\mathrm{Sp}(2d,2)$, $d \ge 3$, in either of its $2$-transitive representations ($n=2^{2d-1}\pm 2^{d-1}$); \item $G=2^d: \mathrm{Sp}(d,2)$, $d \ge4$ and even ($n=2^d$); \item $G$ is one of $\mathrm{AGL}(1,7)$ ($n=7$), $\mathrm{PSL}(2,11)$ ($n=11$), $2^4:A_6$ ($n=16$), $2^6:G_2(2)$ or its subgroup of index $2$ ($n=64$), $\mathrm{Sz}(8)$, $\mathrm{Sz}(8):3$ ($n=65$), Higman--Sims ($n=176$), $\mathrm{Co}_3$ ($n=276$). \end{enumerate} If there are any other groups with $3$-ut, they are one of the following: \begin{enumerate} \item[(e)] Suzuki groups $\mathrm{Sz}(q)$ with $q\ge 32$, potentially extended by field automorphisms ($n=q^2+1$); \item[(f)] $\mathrm{AGL}(1,q)\le G\le \mathrm{A}\Gamma\mathrm{L}(1,q)$, where $q$ is either prime with $q \equiv 11 \mbox{ mod } 12$, or $q=2^p$ with $p$ prime, and for all $c \in \mathrm{GF}(q)\setminus\{0,1\}$, $|\langle -1,c,c-1\rangle|=q-1$ ($n=q$); \item[(g)] subgroups of index $2$ in $\mathrm{AGL}(1,q)$, with $q \equiv 11~\mbox{ mod }~12$ and prime, and for all $c \in \mathrm{GF}(q)\setminus\{0,1\}$, $|\langle -1,c,c-1\rangle|=q-1$ ($n=q$). \end{enumerate} \end{prop} With this result, we can prove that the groups with ordered $3$-ut are just those listed in Theorem~\ref{rank3}. \begin{prop}\label{p:ord3ut} Let $n \ge 3$ and $G\le S_n$.
If $G$ satisfies one of the following properties: \begin{enumerate} \item $G$ is $3$-transitive; \item $\mathrm{PSL}(2,q)\le G \le \mathrm{P}\Gamma \mathrm{L}(2,q)$ where $q \equiv 3~\mbox{ mod }~4$ ($n=q+1$); \item $\mathrm{PSL}(2,q)\le G \le \mathrm{P}\Sigma \mathrm{L}(2,q)$ where $q \equiv 1~\mbox{ mod }~4$ ($n=q+1$); \item $G=\mathrm{Sp}(2d,2)$, $d \ge 3$, in either of its $2$-transitive representations ($n=2^{2d-1}\pm 2^{d-1}$); \item $G=2^d: \mathrm{Sp}(d,2)$, $d \ge4$ and even ($n=2^d$); \item $G$ is one of $A_4$ ($n=4$), $\mathrm{PSL}(2,11)$ ($n=11$), $2^4:A_6$ ($n=16$), $2^6:G_2(2)$ or its subgroup of index $2$ ($n=64$), Higman--Sims ($n=176$), $\mathrm{Co}_3$ ($n=276$); \end{enumerate} then it satisfies ordered $3$-ut. Any other group satisfying ordered $3$-ut must satisfy $\mathop{\mathrm{AGL}}(1,2^p)\le G\le\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,2^p)$ with $p$ and $2^p-1$ prime. \end{prop} \begin{proof} According to Proposition \ref{p:3ut}, groups with $3$-ut are one of the five types in the proposition, or potentially one of the three additional types listed. If $G$ is $3$-transitive, then it has the ordered $3$-ut property. Suppose that $G$ is $3$-homogeneous but not $3$-transitive. If $3\le n \le 5$, then $G$ is $A_4$ ($n=4$), which clearly has ordered $3$-ut, $\mathop{\mathrm{AGL}}(1,5)$ ($n=5$), which we will exclude below, or not $2$-transitive, in which case it does not have ordered $3$-ut. If $n\ge 6$, then $G$ is $\mathop{\mathrm{AGL}}(1,8)$, $\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,8)$ ($n=8$) or $\mathop{\mathrm{A}\Gamma\mathrm{L}}(1,32)$ ($n=32$), or $G$ contains $\mathrm{PSL}(2,q)$ for $q\equiv3$~(mod~$4$) ($n=q+1$). The affine groups are included in our undecided cases, so assume $G$ contains such a $\mathrm{PSL}(2,q)$. Let $(a_1,a_2,a_3)$ be an ordered $3$-set and $(P_1,P_2,P_3)$ an ordered $3$-partition of the underlying set $\Omega$.
Without loss of generality, we may assume that the parts $P_1,P_2,P_3$ are arranged in increasing order of size. Using the transitivity of $G$, we may assume that $a_1$ can be mapped into $P_1$, and indeed that $a_1\in P_1$. Now the set \[\{(x,y):(a_1,x,y)\in(a_1,a_2,a_3)^G\}\] is the edge set of a Paley tournament on $\Omega\setminus\{a_1\}$. If this tournament includes an arc from $P_2$ to $P_3$, then we are done; so suppose not. If $|P_2|=1$, then $|P_1|=1$, and all arcs between $P_2$ and $P_3$ point into the point in $P_2$; so this point has out-degree $0$, a contradiction, since the Paley tournament on $q$ points is regular with out-degree $(q-1)/2$. So suppose that $|P_2|>1$. In the Paley tournament on $q$ points, any two points are dominated by precisely $(q-3)/4$ points; but if there are no arcs from $P_2$ to $P_3$, then two points in $P_2$ are dominated by every point in $P_3$, and by assumption there are at least $q/3$ such points. So $(q-3)/4\ge q/3$, a contradiction. So $\mathrm{PSL}(2,q)$, and any overgroup, has ordered $3$-ut for $q\equiv3$~(mod~$4$). We claim that, with the exceptions of $\mathrm{AGL}(1,7)$ ($n=7$), $\mathrm{Sz}(8)$ and $\mathrm{Sz}(8):3$ ($n=65$), types (b)--(e) in Proposition \ref{p:3ut} are generously $2$-transitive, and so have ordered $3$-ut. For types (c), (d), and most groups of type (e), the $2$-point stabilisers have all orbits of different sizes, these being \begin{itemize}\itemsep0pt \item $2^{2d-2}\pm2^{d-1}-2$ and $2^{2d-2}$ for type (c); \item $2^{d-1}-2$ and $2^{d-1}$ for type (d); \item $3$, $6$ for $\mathrm{PSL}(2,11)$ ($n=11$); \item $6$, $8$ for $2^4:A_6$ ($n=16$); \item $6$, $24$ and $32$ for $2^6:G_2(2)$ and its subgroup ($n=64$); \item $12$, $72$ and $90$ for $\mathrm{HS}$ ($n=176$); \item $112$ and $162$ for $\mathrm{Co}_3$ ($n=276$). \end{itemize} For (b), since the point stabiliser has even order and rank at most $3$, all its non-trivial orbitals are self-paired.
For the groups of type (f), if $q-1$ is not prime, then the point stabiliser of $\mathrm{A}\Gamma\mathrm{L}(1,q)$ has a proper normal subgroup, and so these groups are not $2$-primitive, and hence do not have ordered $3$-ut. The same argument excludes $\mathop{\mathrm{AGL}}(1,5)$ and $\mathop{\mathrm{AGL}}(1,7)$. Of the remaining groups, we observe that the Suzuki groups do not have ordered $3$-ut since they are not $2$-primitive (the point stabiliser has a normal subgroup of order $q$); and subgroups of index $2$ in $\mathrm{AGL}(1,q)$ for odd $q$ fail to be $2$-transitive. Hence the only open cases remaining are groups containing $\mathop{\mathrm{AGL}}(1,q)$ with $q-1$ prime, which occurs only if $q=2^p$ for $p$ prime (and $2^p-1$ is a Mersenne prime). $\Box$\end{proof} Computation shows that $\mathrm{AGL}(1,32)$ and $\mathrm{A}\Gamma\mathrm{L}(1,32)$ do indeed have ordered $3$-ut. \section{Strongly factorizable semigroups and maps with fixed rank}\label{three} A monoid $S$ with group of units $G$ and set of idempotents $E$ is said to be {\em strongly factorizable} if $S=EG$. Let $\Omega$ be a finite set. Every finite semigroup can be embedded in some $T(\Omega)$, a strongly factorizable semigroup. More generally, any semigroup $S$ such that $\mathop{\mathrm{Sym}}\nolimits(\Omega)\le S\le T(\Omega)$ is strongly factorizable. Let $k$ be a natural number; the goal of this section is to classify the groups $G\le \mathop{\mathrm{Sym}}\nolimits(\Omega)$ such that $\langle G,t\rangle$ is strongly factorizable for all rank $k$ transformations $t\in T(\Omega)$. The next result links this goal to the results of the previous sections. \begin{lemma} Let $\Omega$ be a finite set, $G\le\mathop{\mathrm{Sym}}\nolimits(\Omega)$ and $k\le|\Omega|$.
Then the following are equivalent: \begin{enumerate} \item $G$ possesses the ordered $k$-ut property; \item for all rank $k$ transformations $t\in T(\Omega)$, we have \[ \langle G, t\rangle = EG, \] where $E$ is the set of idempotents of $\langle G ,t\rangle$. \end{enumerate} \end{lemma} \begin{proof} First, assume that $G$ has the ordered $k$-ut property. Let $a\in\langle G,t\rangle$ be a map of rank $l\le k$. As $G$ has ordered $k$-ut, it has ordered $l$-ut (Proposition \ref{p:basicprop} (b)). Therefore, given a sequence of kernel classes of $a$, say $(A_1,\ldots,A_l)$, and the corresponding $l$-tuple of images $(A_1 a,\ldots,A_l a)$, there exists $g\in G$ such that $A_i a g \subseteq A_i$, for all $i\in \{1,\ldots,l\}$; thus $ag$ is an idempotent and $a=(ag)g^{-1}\in EG$. This proves the direct implication. Conversely, let $(A_1,\ldots,A_k)$ be a $k$-partition of $\Omega$ and let $(a_1,\ldots ,a_k)$ be a $k$-tuple of distinct elements of $\Omega$. We claim that there exists $g\in G$ such that $a_ig \in A_i$, for all $i\in \{1,\ldots,k\}$. In fact, let $t\in T(\Omega)$ be a map such that $A_i t =\{a_i\}$. By assumption $\langle G,t\rangle$ is strongly factorizable and hence $t=eg$, for some $g\in G$ and idempotent $e \in \langle G,t\rangle$, and thus $tg^{-1}=e$. Because $e$ and $t$ have the same kernel classes and every point in the image of $e$ is fixed, we have that $A_i e \subseteq A_i$ for all $i$. It follows that $\{a_i g^{-1}\}=A_itg^{-1}=A_i e\subseteq A_i$ (for all $i \in \{1,\ldots, k\}$). The result follows. $\Box$ \end{proof} Gluing together the previous result with the classification of the groups possessing the ordered $k$-ut property in Propositions \ref{p:ord6ut}, \ref{p:ord5ut}, \ref{p:ord4ut}, and \ref{p:ord3ut}, we obtain Theorems \ref{rank2}--\ref{rank3}, which are the main theorems of the first part of this paper.
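For a small instance the equivalence can be verified directly. The Python sketch below (illustrative names; the semigroup is generated by closure under composition, and the particular group and map are chosen purely for the test) builds $S=\langle G,t\rangle$ for the primitive group $G=C_5$ and a rank-$2$ map $t$, and checks that $S=EG$:

```python
def comp(f, g):                       # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in range(len(f)))

def sgp_closure(gens):
    """Semigroup generated by gens: close under composition."""
    S = set(gens)
    while True:
        new = {comp(a, b) for a in S for b in S} - S
        if not new:
            return S
        S |= new

G5 = [tuple((x + k) % 5 for x in range(5)) for k in range(5)]  # C_5, primitive
t = (0, 0, 0, 1, 1)                                            # a rank-2 map
S = sgp_closure(set(G5) | {t})
E = {e for e in S if comp(e, e) == e}                          # idempotents of S
print(S == {comp(e, g) for e in E for g in G5})  # S = EG
```

Since $C_5$ is primitive, it has ordered $2$-ut, so by the lemma the check succeeds for every rank-$2$ map $t$, not only this one.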
\section{Semigroups and their normalizers}\label{four} Let $S\le T_n$ be a semigroup and let $G\le S_n$ be its normalizer in $S_n$. We are interested in the relation between $S$ and $\langle S,G\rangle$. On the one hand, the semigroup $\langle S,G\rangle$ might be more accessible to study since we can take advantage of group-theoretic results, but on the other hand the properties of $S$ might be very different from the properties of $\langle S,G\rangle$. For example, we might be unable to verify if a given semigroup $S$ is regular. If all $t\in S$ have rank at most $k$ and if $G$ has the $k$-ut property, then the semigroup $\langle S,G\rangle$ is easily seen to be regular. Hence, to prove the regularity of $S$, we need to prove that regularity of $\langle S,G\rangle$ implies regularity of $S$. Therefore, the goal of this section is to study semigroup properties that carry from $\langle S,G\rangle$ to $S$. We start by proving a general result. \begin{lemma} Let $S\le T_n$ and let $G$ be its normalizer in $S_n$. Then \[ \langle S,G\rangle = SG. \] \end{lemma} \begin{proof} For $s\in S$ and $g\in G$ let $s^g$ denote $gsg^{-1}$. Let $t\in \langle S,G\rangle$. We now have (for some $g_1, \dots, g_{k+1} \in G$ and $s_1, \dots, s_k \in S$), \[\begin{array}{rcl} t&=&g_1s_1g_2s_2\ldots g_ks_kg_{k+1}\\ &=&g_1s_1g^{-1}_1(g_1g_2)s_2(g_1g_2)^{-1}(g_1g_2g_3)\ldots (g_1\ldots g_k)s_k(g_1\ldots g_k)^{-1}(g_1\ldots g_kg_{k+1})\\ &=&s^{g_1}_1s^{g_1g_2}_2\ldots s^{g_1\ldots g_k}_k (g_1\ldots g_{k+1})\\ &=&sg \in SG, \end{array} \]where $s=s^{g_1}_1s^{g_1g_2}_2\ldots s^{g_1\ldots g_k}_k\in S$ and $g=g_1\ldots g_{k+1}\in G$. Thus $\langle S,G\rangle\subseteq SG$. The reverse inclusion is obvious. $\Box$ \end{proof} \subsection{Regularity} Recall that a semigroup $S$ is regular if for all $a\in S$ there exists $a'\in S$ such that $a=aa'a$.
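The rewriting used in the proof of the previous lemma can be checked mechanically. In the Python sketch below (illustrative names; as a test case, $S$ is taken to be the monoid of constant maps together with the identity on four points, which is normalized by all of $S_4$), the identity $g_1s_1g_2s_2g_3 = s_1^{g_1}s_2^{g_1g_2}(g_1g_2g_3)$ is verified on random words:

```python
import random
from itertools import permutations

def comp(f, g):                       # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in range(len(f)))

def inv(g):
    h = [0] * len(g)
    for x, y in enumerate(g):
        h[y] = x
    return tuple(h)

def conj(s, g):                       # s^g = g s g^{-1}; lies in S when G normalizes S
    return comp(comp(g, s), inv(g))

n = 4
G = [tuple(p) for p in permutations(range(n))]
S = [tuple([c] * n) for c in range(n)] + [tuple(range(n))]  # constants + identity

random.seed(0)
ok = True
for _ in range(100):
    g1, g2, g3 = (random.choice(G) for _ in range(3))
    s1, s2 = (random.choice(S) for _ in range(2))
    word = comp(comp(comp(comp(g1, s1), g2), s2), g3)   # g1 s1 g2 s2 g3
    s = comp(conj(s1, g1), conj(s2, comp(g1, g2)))      # s1^{g1} s2^{g1 g2}
    g = comp(comp(g1, g2), g3)                          # g1 g2 g3
    ok = ok and word == comp(s, g) and s in set(S)
print(ok)
```

The check that $s\in S$ uses the fact that $S$ is closed under both multiplication and conjugation by its normalizer.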
Two elements $a,b\in S$ are said to be ${\mathcal R}$-related if there exist $u,v\in S^1$ such that $a=bu$ and $b=av$ ($S^1$ denotes the monoid obtained by adjoining an identity to $S$). Similarly, $a,b\in S$ are said to be ${\mathcal L}$-related if there exist $u,v\in S^1$ such that $a=ub$ and $b=va$. It is well known that a semigroup is regular if and only if every element is ${\mathcal R}$-related (or ${\mathcal L}$-related) to an idempotent. In what follows, by a transformation monoid $S\le T_n$ we mean a semigroup of transformations containing the identity transformation. The key result in this subsection is the following lemma. \begin{lemma}\label{main} Let $S\le T_n$ be a transformation monoid and $G$ be the normalizer of $S$ in the symmetric group. If $a\in S$ is ${\mathcal R}$-related in $SG$ to an idempotent of $SG$, then $a$ is ${\mathcal R}$-related in $S$ to the same idempotent. \end{lemma} \begin{proof} Let $a\in S$ and assume that $a$ is ${\mathcal R}$-related in $SG$ to an idempotent in $SG$, that is, there exist $b\in S, h \in G$ such that $(bh)(bh)=bh$ and for some $b_1i_1,b_2i_2\in SG$, we have $a=(bh)(b_1i_1)$ and $bh=a(b_2i_2)$. The claim is clearly true if $bh$ is the identity, so assume this is not the case. Now, by a theorem of McAlister, for all $s\in S$, the semigroups $\langle s,G\rangle \setminus G $ and $\langle g^{-1}sg\mid g\in G\rangle\setminus G$ have the same idempotents (\cite[Lemma 2.2]{mcalister} and \cite[Lemma 2.2]{AMS}). As $S=\bigcup_{s\in S}\langle g^{-1}sg\mid g\in G\rangle$ and $SG=\bigcup_{s\in S}\langle s,G\rangle$, it follows that $S$ and $SG$ have the same idempotents. Thus $bh\in S$. It remains to prove that $a$ and $bh$ are ${\mathcal R}$-related in $S$, that is, there exist $u,v\in S$ such that $a=(bh)u$ and $bh=av$. Since $(bh)(b_1i_1)=a\in S$, we can take $u= (bh)(b_1i_1)$ so that $a=(bh)u=(bh)(bh)(b_1i_1)$. Observe that $a(b_2 i_2)(bh)=(bh)(bh)=bh$.
We claim that $i_2bh\in S$ and hence $b_2i_2bh\in S$, thus proving the lemma. We start by proving that $hbh\in S$. In fact, \[\begin{array}{rcl} h^{-1}bh,b\in S&\Rightarrow& h^{-1}bhb\in S\Rightarrow h^{-2}bhbh=h^{-2}bh\in S\Rightarrow h^{-2}bhb\in S\Rightarrow \\ &\Rightarrow& h^{-3}bhbh=h^{-3}bh\in S\Rightarrow \ldots \Rightarrow h^{-k}bh\in S. \end{array}\] As $G$ is finite, for some $k$ we have $h^{-k}=h$. The claim follows. Now we claim that $i^{-k}_2b\in S$ for all natural numbers $k$. We proceed by induction. From $a,b_2\in S$ we get $ab_2\in S$ and hence $i^{-1}_2ab_2i_2\in S$, thus $ i^{-1}_2 bh \in S$ so that $ i^{-1}_2 bhb \in S$; as $bhbh=bh$, we have $bhb=b$, which together with $ i^{-1}_2 bhb \in S$ yields $ i^{-1}_2 b \in S$. Now suppose that $i^{-k}_2 b\in S$ (for some natural $k\ge 1$); we want to prove that $i^{-(k+1)}_2b\in S$. From $i^{-(k+1)}_2b_2i^{k+1}_2,i^{-k}_2b\in S$, we get $i^{-(k+1)}_2b_2i^{(k+1)}_2i^{-k}_2b=i^{-(k+1)}_2b_2 i_2 b\in S$. Thus \[\begin{array}{rcccl} S&\ni& \left( i^{-(k+1)}_2ai^{(k+1)}_2 \right) \left( i^{-(k+1)}_2b_2 i_2 b \right)&=& i^{-(k+1)}_2ai^{(k+1)}_2 i^{-(k+1)}_2b_2 i_2 b\\ & & &=&i^{-(k+1)}_2ab_2 i_2 b\\ & & &=& i^{-(k+1)}_2bh b \\ & & &=&i^{-(k+1)}_2b. \end{array} \] This proves that $i^{-k}_2b\in S$ for all natural $k$. As $G$ is finite, for some $k$ we have $i_2b=i^{-k}_2b\in S$. Since $i_2b,hbh\in S$, it follows that $i_2bhbh=i_2bh\in S$. As $b_2,i_2bh\in S$, we get $b_2i_2bh\in S$ and hence $a(b_2i_2bh)=(ab_2i_2)(bh)=(bh)^2=bh$. This proves that $a$ and the idempotent $bh$ are ${\mathcal R}$-related in $S$. $\Box$ \end{proof} By symmetry we get the following. \begin{lemma} Let $S\le T_n$ be a transformation monoid and $G$ be the normalizer of $S$ in the symmetric group $S_n$. If $a\in S$ is ${\mathcal L}$-related in $SG$ to an idempotent of $SG$, then $a$ is ${\mathcal L}$-related in $S$ to the same idempotent.
\end{lemma} Two elements $a,b \in S$ are said to be ${\mathcal H}$-related if they are ${\mathcal R}$-related and ${\mathcal L}$-related. The two previous results imply the following. \begin{lemma}\label{hmain} Let $S\le T_n$ be a transformation monoid and $G$ be the normalizer of $S$ in the symmetric group $S_n$. If $a\in S$ is ${\mathcal H}$-related in $SG$ to an idempotent of $SG$, then $a$ is ${\mathcal H}$-related in $S$ to the same idempotent. \end{lemma} A number of consequences follow from these lemmas. \begin{cor} Let $S\le T_n$ be a transformation monoid and $G$ be the normalizer of $S$ in the symmetric group $S_n$. Then $SG$ is regular if and only if $S$ is regular. \end{cor} \begin{proof} Let $a\in S$. As $SG$ is regular and $S\le SG$, it follows that $a$ is ${\mathcal R}$-related in $SG$ to an idempotent of $SG$. By Lemma \ref{main}, $a$ is ${\mathcal R}$-related in $S$ to the same idempotent (which is thus in $S$). We conclude that every element in $S$ is ${\mathcal R}$-related to an idempotent. For the converse, suppose $S$ is regular: for each $s\in S$ there exists $s'\in S$ such that $s=ss's$ and $s'=s'ss'$. Let $sg\in SG$. Then $sg=(sg)(g^{-1}s')(sg)$ and $g^{-1}s'=(g^{-1}s')(sg)(g^{-1}s')$; in addition, $g^{-1}s'=(g^{-1}s'g)g^{-1}\in SG$. It is proved that every element in $SG$ has an inverse in $SG$. The result follows. $\Box$ \end{proof} We observe that it is possible for non-idempotents $p,q\in S$ to be ${\mathcal R}$-related in $SG$, but not in $S$. For example, pick $g,t,q\in T_7$ as follows: $g:=(567)$, \[\begin{array}{ccc} t:=\left( \begin{array}{cccc} \{1,2,3,4\} & 5 & 6 & 7 \\ 1 & 2 & 3 & 4 \end{array} \right)&\mbox{ and }& q:=\left( \begin{array}{cccc} \{1,2,3,4\} & 5 & 6 & 7 \\ 1 & 3 & 4 & 2 \end{array} \right) \end{array} \] Then $S:=\langle g,t,q\rangle$ has $7$ elements and its normalizer $G$ in $S_7$ is $\langle g, (34)(76),(23)(76)\rangle$.
We have $t=q(243)$ and $q=t(234)$, thus $t$ and $q$ are ${\mathcal R}$-related in $SG$, but they are not ${\mathcal R}$-related in $S$. Recall that a semigroup is completely regular if each element belongs to a maximal subgroup; equivalently, every ${\mathcal H}$-class contains an idempotent. Therefore, Lemma \ref{hmain} implies the following. \begin{cor} Let $S\le T_n$ be a transformation monoid and $G$ be the normalizer of $S$ in the symmetric group $S_n$. Then $SG$ is completely regular if and only if $S$ is completely regular. \end{cor} An element $a$ of a semigroup $S$ is said to be {\em abundant} if it is ${\mathcal R}$-related and ${\mathcal L}$-related to an idempotent in some oversemigroup $T$ of $S$. Therefore, if an element $a\in S$ is abundant in $SG$, this means that $a$ is also abundant in $S$. The {\em abundant world} generalizes the regular world, but in this context adds nothing. An element $a$ of a semigroup $S$ is said to be {\em right [left] inverse} if it is ${\mathcal R}$-related [${\mathcal L}$-related] to exactly one idempotent in $S$; the element is inverse if it is both left and right inverse. A semigroup is inverse if all of its elements are inverse. As seen above, if $a\in S$ is ${\mathcal R}$-related to exactly one idempotent in $SG$, by Lemma \ref{main} we know that in $S$ the element $a$ is ${\mathcal R}$-related to the same idempotent (which belongs to $S$); thus, if $a\in S$ is right inverse in $SG$, it is also right inverse in $S$; as a consequence, if $SG$ is an inverse semigroup, then so is $S$. (This last conclusion follows immediately from the fact that if $SG$ is inverse, it is regular and the idempotents commute, and hence the same is true in $S$.) A semigroup is said to be \emph{Clifford} if it is inverse and completely regular. By the results above, it follows that if $SG$ is Clifford, then so is $S$. A monoid $S$ is {\em intra-regular} if for all $a\in S$ there exist $b,c \in S$ such that $ba^2c= a$.
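For small transformation monoids, claims like the $t,q$ example above can be checked by brute force. The following Python sketch (ours, not from the paper; transformations are encoded as tuples of $0$-based images, composed left to right) reconstructs $S$, $G$, and $SG$ and confirms that $t$ and $q$ are ${\mathcal R}$-related in $SG$ but not in $S$:

```python
def mul(f, g):
    # Left-to-right composition: (f * g) maps i to g(f(i)),
    # matching the convention t = q(243) of the text.
    return tuple(g[i] for i in f)

def perm(cycles, n=7):
    # Build a permutation of {0,...,n-1} from 1-based disjoint cycles.
    p = list(range(n))
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            p[a - 1] = b - 1
    return tuple(p)

def closure(gens):
    # The subsemigroup generated by gens, by naive saturation.
    elems, frontier = set(gens), set(gens)
    while frontier:
        new = {mul(a, b) for a in elems for b in frontier}
        new |= {mul(b, a) for a in elems for b in frontier}
        frontier = new - elems
        elems |= new
    return elems

def r_related(a, b, M):
    # a R b in M: a = b*u and b = a*v for some u, v in M^1.
    m1 = M | {tuple(range(7))}
    return any(mul(b, u) == a for u in m1) and \
           any(mul(a, v) == b for v in m1)

g = perm([(5, 6, 7)])
t = (0, 0, 0, 0, 1, 2, 3)   # 1,2,3,4 -> 1; 5,6,7 -> 2,3,4 (0-based)
q = (0, 0, 0, 0, 2, 3, 1)   # 1,2,3,4 -> 1; 5,6,7 -> 3,4,2 (0-based)

S = closure({g, t, q})      # 7 elements, as stated in the text
G = closure({g, perm([(3, 4), (7, 6)]), perm([(2, 3), (7, 6)])})
SG = closure({mul(s, h) for s in S for h in G})

assert len(S) == 7
assert r_related(t, q, SG) and not r_related(t, q, S)
```

GAP performs such computations far more efficiently; the sketch is only meant to make the definitions concrete.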
We would like to know whether $SG$ being intra-regular implies that $S$ is intra-regular, but we only have the following partial result. \begin{prop} Let $G\le S_n$ be a group of exponent~$2$ and let $S\le T_n$ be a transformation monoid. If $SG$ is intra-regular, then $S$ is intra-regular. \end{prop} \begin{proof} As $S$ is a finite semigroup, for each $x\in S$ there exist natural numbers $l$ and $m$, $m>l$, such that $x^l=x^m$. Let $a \in S$ be arbitrary, and $l$ and $m$ as above. If $l=1$ and $m=2$, then $a=a^2$ and hence $a=1a^21$, with $1\in S$; if $l=1$ and $m>2$, then $a=1a^2a^{m-2}$, so the result holds for all $a$ for which $l=1$. Assume instead that $l,m>1$. By intra-regularity in $SG$, there exist $eg,fh\in SG$ such that $a=ega^2fh$, with $e,f\in S$ and $g,h\in G$. It is clear that $f,hfh^{-1}\in S$ and hence $fhfh^{-1}\in S$; as $h^{-1}=h$, it follows that $fhfh\in S$. We claim that $egaeg\in S$. In fact, $ae\in S$ because $a,e\in S$; thus $gaeg^{-1},e\in S$ and hence $egaeg^{-1}=egaeg\in S$. The claim follows. Now \[ (egaeg)a^2(fhfh)=ega(ega^2fh)fh=ega(a)fh=ega^2fh=a. \] It is proved that every $a\in S$ is intra-regular in $S$, when $G$ satisfies $x^2=1$. $\Box$ \end{proof} \subsection{Semigroups having square roots} The aim of this subsection is to carry the foregoing investigation to the case of $SG$ having square roots. Observe that every finite semigroup satisfies an identity of the form $x^m=x^k$, for $1\le k<m$. If $a,b\in S$ and $b^2=a$, we say that $b$ is a square root of $a$; we denote an arbitrary square root of $x$ by $\sqrt{x}$, and the notation $\sqrt{x}\in A\subseteq S$ means that $x$ has a square root in the set $A$. \begin{theorem}\label{mainsquare} Let $S$ be a finite semigroup in which every element has a square root. Then for every $x\in S$ we have $\sqrt{x}\in \langle x \rangle$. \end{theorem} \begin{proof} Let $s: S\to S$ be given by $s(x)=x^2$.
That $S$ has square roots for all of its elements means that $s$ is surjective, and as $S$ is finite, it is also injective. But for any $x \in S$, $s(\langle x \rangle) \subseteq \langle x \rangle$. Let $t$ be the restriction of $s$ to $ \langle x \rangle$. Then $t$ is injective (because $s$ is), and by finiteness, also surjective. Hence $x$ has a square root in $ \langle x \rangle$. $\Box$ \end{proof} Now back to the leitmotiv of this paper. \begin{cor} Let $S\le T_n$ be a transformation monoid and $G$ be the normalizer of $S$ in the symmetric group $S_n$. If every element in $SG$ has a square root, then so does every element in $S$. \end{cor} \begin{proof} As $SG$ has square roots for all its elements, it follows that for every $x\in SG$ we have $\sqrt{x}\in \langle x\rangle$, and hence every $x\in S$ has a square root in $S$. $\Box$\end{proof} It is well known that every element of a finite group has a square root if and only if the group has odd order. The next corollary extends this result to semigroups. \begin{cor} Every element of a finite semigroup $S$ has a square root if and only if every $s\in S$ belongs to an odd order maximal subgroup of $S$. \end{cor} \begin{proof} Suppose every element of $S$ has a square root, and let $x\in S$. By Theorem \ref{mainsquare}, $\sqrt{x}\in \langle x\rangle$, that is, $\sqrt{x}=x^n$, for some natural $n$. Therefore, $x=x^{2n}$ and hence $\langle x\rangle$ is a cyclic group with $x^{-1}=x^{2n-2}$ and $x^{2n-1}$ being the identity element. This means that the ${\mathcal H}$-class of $x$ is a group (since it contains an idempotent); as every element in this group has a square root, which necessarily lies in it, this group must have odd order. Conversely, if $S$ is a union of odd order groups, then every element in $S$ has a square root. $\Box$ \end{proof} It is well known that neither the symmetric group nor the full transformation monoid contains square roots of all their elements, except in the trivial cases.
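Theorem~\ref{mainsquare} also gives a concrete search procedure: to find a square root of $x$, it suffices to scan the powers of $x$. A minimal Python sketch (ours, not from the paper; transformations again encoded as tuples of images):

```python
def mul(f, g):
    # left-to-right composition of transformations given as image tuples
    return tuple(g[i] for i in f)

def cyclic(x):
    # the cyclic subsemigroup <x> = {x, x^2, x^3, ...} of a finite semigroup
    seen, p = [], x
    while p not in seen:
        seen.append(p)
        p = mul(p, x)
    return seen

def sqrt_in_cyclic(x):
    # If every element of the ambient semigroup has a square root,
    # Theorem "mainsquare" guarantees one already lies in <x>.
    for p in cyclic(x):
        if mul(p, p) == x:
            return p
    return None

# Example: a 5-cycle x generates a cyclic group of odd order 5,
# so x = (x^3)^2 and the search succeeds.
x = (1, 2, 3, 4, 0)
r = sqrt_in_cyclic(x)
assert r is not None and mul(r, r) == x
```

For an element of even group order, such as the transposition `(1, 0)`, the search returns `None`, consistent with the odd-order criterion of the corollary above.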
With our main theorem at hand, we can say a bit more: if $S$ is a finite semigroup and $a\in S$ has no square root, then it is not possible to extend $S$ to a finite oversemigroup $T$ containing a square root for $a$ (because a square root of $a$ must belong to $\langle a\rangle$, and this semigroup remains the same in any oversemigroup of $S$). \subsection{A negative result} A semigroup $S$ is said to be {\em ${\mathcal R}$-commutative} if for all $a,b\in S$ we have $ab\,{\mathcal R}\,ba$. Let $S<T(\{1,\ldots,7\})$ be the semigroup generated by the permutations $(24)(36)$, $(15)(23)(46)$ and the transformation \[ t =\left( \begin{array}{ccc} \{1,5,7\}&\{2,3\}&\{4,6\}\\ 7&1&5 \end{array} \right). \] The normalizer $G$ of $S$ in the symmetric group is the group generated by $(15)$, $(24)(36)$ and $(15)(23)(46)$. GAP shows that the semigroup $SG$ is ${\mathcal R}$-commutative, but $S$ is not. By symmetry, a corresponding counterexample exists for ${\mathcal L}$-commutativity. \section{Problems}\label{five} We now propose a number of problems. The first is essentially in \cite{ac} but (annoyingly) keeps resisting. \begin{problem} Complete the classification of the groups possessing the $3$- and $4$-ut property so that Theorem \ref{rank4} and Theorem \ref{rank3} can be completed. \end{problem} There is a well-known correspondence between the behaviour of $T(\Omega)$ and $\mathop{\mathrm{End}}(V)$, when $\Omega$ is a finite set and $V$ is a finite-dimensional vector space. This prompted the introduction of {\em independence algebras}, a class containing both sets and vector spaces as particular cases. Therefore the next two problems turn out to be very natural. \begin{problem} Prove linear analogues of the main theorems in this paper. \end{problem} \begin{problem} Find in the context of independence algebras analogues of the main theorems in this paper. \end{problem} Let $S$ be a finite semigroup.
Two elements $a,b\in S$ are said to be $\mathcal{J}$-related if they generate the same principal ideal, that is, $S^1aS^1=S^1bS^1$. We recall (\cite[p.57]{Ho95}) that if $S<T$ is a regular subsemigroup of a semigroup $T$, then ${\mathcal R}_S={\mathcal R}_T\cap (S\times S)$. The same happens for ${\mathcal L}$ or ${\mathcal H}$, but fails for ${\mathcal J}$. \begin{problem} Let $S\le T_n$ and let $G\le S_n$ be its normalizer in $S_n$. Is it true that ${\mathcal J}_S={\mathcal J}_{SG}\cap (S\times S)$? \end{problem} An {\em existential property} of semigroups is a first order language condition on the elements of the semigroup that uses an existential quantifier. For example, regularity is an existential property of semigroups. \begin{problem} Let $S\le T_n$ be a semigroup and $G$ its normalizer in $S_n$. Let $P$ be an existential property of semigroups. Decide whether $SG$ satisfying $P$ implies that $S$ also satisfies $P$. \end{problem} The following result was proved for groups satisfying $g^2=1$. Can it be generalized to other classes of groups? \begin{problem} Let $G\le S_n$ be a group and let $S\le T_n$ be a transformation monoid. Is it true that if $SG$ is intra-regular then $S$ is intra-regular? \end{problem} We close the list of problems with a general semigroup structure question. \begin{problem} Consider $G$, one of the groups appearing in Theorems \ref{rank2}--\ref{rank3}. Describe the structure (Green's relations, automorphisms, congruences, conjugacy classes, the variety generated, etc.) of the semigroups $\langle G,a\rangle$, for $a\in T_n$. \end{problem} \end{document}
\begin{document} \begin{titlepage} \centering \subject{Doctorate Dissertation\\ 博士論文} \title{Entanglement theory in distributed quantum information processing} \subtitle{(分散型量子情報処理のエンタングルメント理論)} \date{A Dissertation Submitted for\\ Degree of Doctor of Science\\ December 2018\\ 平成30年12月博士(理学)申請\\ } \publishers{Department of Physics, Graduate School of Science,\\ The University of Tokyo\\ 東京大学大学院理学系研究科物理学専攻\\ Hayata Yamasaki\\ 山崎隼汰 } \maketitle \end{titlepage} \frontmatter \chapter{Abstract} Distributed quantum information processing is a promising platform for scaling up quantum information processing, where small- and intermediate-scale quantum devices are connected by a network of quantum channels for communicating quantum information, so as to cooperate in achieving larger-scale information processing. In such distributed settings, entangled states shared among the multiple devices serve as a resource for achieving nonlocal information processing tasks by local operations and classical communication (LOCC), where transformations of multipartite entangled states play central roles. This thesis analyzes properties of quantum entanglement in these small- and intermediate-scale settings and multipartite settings. The first part of this thesis investigates a communication task, \textit{quantum state merging}, on the small and intermediate scales. Aiming at transferring quantum information from a sender $A$ to a receiver $B$ on these scales, this thesis analyzes the entanglement cost required for one-shot quantum state merging. Achievability bounds on the entanglement cost, together with protocols attaining them, are presented, so as to achieve one-shot state merging on the small and intermediate scales. Improved converse bounds on the entanglement cost are also derived. Moreover, it is proven that there is a case where $B$'s preprocessing and backward classical communication from $B$ to $A$ can be indispensable for minimizing entanglement cost in one-shot state merging from $A$ to $B$.
The second part of this thesis analyzes multipartite entanglement in distributed quantum information processing. To quantitatively characterize nonlocal properties of multipartite state transformations for encoding and decoding quantum information in a multipartite quantum system, entanglement costs in such encoding and decoding are analyzed, where the multipartite system is distributed among spatially separated parties connected by a network. In addition, the advantage of using multipartite entanglement over bipartite entanglement is investigated, and it is shown that when there exists a limitation on the local system size for each party, multipartite entanglement is an indispensable resource without which certain processes cannot be accomplished. These analyses clarify fundamental limitations and potential applications of distributed quantum information processing to characterize properties of quantum entanglement in the small- and intermediate-scale settings and multipartite settings, providing a paradigm for investigating multipartite entanglement in distributed quantum information processing over networks beyond the state convertibility under LOCC\@. \chapter{Acknowledgments} I express my sincere thanks to my supervisor Mio Murao for all the constructive suggestions and considerable support. I thank Akihito Soeda for extensive discussions and enormous help. I am grateful to Barbara Kraus for accepting my visit and broadening my view on multipartite quantum entanglement, and to Alexander Pirker and Wolfgang D\"{u}r for discussions and the collaboration.
I thank members in the group of Mio Murao, Shojun Nakayama, Eyuri Wakakuwa, Seiseki Akibue, Jisho Miyazaki, Kohtaro Kato, Atsushi Shimbo, Yuki Mori, Ryosuke Sakai, Qingxiuxiong Dong, Marco T\'{u}lio Quintino, Paula Belzig, and Wataru Yokojima, and members in the group of Barbara Kraus, Katharina Schwaiger, David Sauerwein, Martin Hebenstreit, Yaiza Aragones Soria, Raphael Brieger, Czarnetzki Leonhard, Farid Shahandeh, and Matthias Englbrecht, for insightful discussions. \tableofcontents \listoffigures \listoftables \chapter*{List of Publications} This thesis is based on the following papers. \begin{description} \item[\cite{Y14}] H.\ Yamasaki, A.\ Pirker, M.\ Murao, W.\ D\"{u}r, and B.\ Kraus, Phys.\ Rev.\ A \textbf{98}, 052313 (2018). \item[\cite{Y12}] H.\ Yamasaki and M.\ Murao, \textit{Quantum state merging for arbitrarily small-dimensional systems}, (2018), arXiv:1806.07875. \item[\cite{Y13}] H.\ Yamasaki and M.\ Murao, \textit{Distributed Encoding and Decoding of Quantum Information over Networks}, (2018), arXiv:1807.11483. \item[\cite{Y19}] H.\ Yamasaki and M.\ Murao, \textit{Quantum-side-information preprocessing and backward classical communication in one-shot quantum state merging}, Unpublished. \end{description} This thesis also uses the results in the following previous works of mine. \begin{description} \item[\cite{Y6}] H.\ Yamasaki, A.\ Soeda, and M.\ Murao, Phys.\ Rev.\ A \textbf{96}, 032330 (2017). \item[\cite{Y18}] H.\ Yamasaki, \textit{Distributed Construction of Multipartite Entangled States over Quantum Networks}, Master's thesis, The University of Tokyo (2016). \end{description} \mainmatter \part{Introduction and preliminaries} \chapter{Introduction} This chapter provides introduction and organization of the whole of this thesis. 
\section{Introduction to entanglement theory in distributed quantum information processing} Physics provides a way of understanding the world based on fundamental laws, and the phenomena exhibited in the world ensure the consistency of such fundamental laws of theories in physics. Such theories in physics model complex phenomena in the world as consequences of the simplified laws. Successive efforts have led to several theories in physics, such as classical mechanics on macroscopic scales, theory of relativity on high energy scales, and quantum mechanics on microscopic scales, which have been verified on the respective scales. While identification of fundamental laws is a starting point of this type of theories, it is also fundamental to ask the following question as a next step: \textit{What kind of phenomena can be exhibited within the laws of such a theory?} Studies of this type of question strengthen the basis of the consistency of the laws, facilitating a better understanding of the world and prediction of novel phenomena in the world described by the theory. Quantum information theory~\cite{N1,W5,W11} studies consequences of the laws of quantum mechanics from an operational approach, answering what kind of information processing is possible and what is impossible using operations allowed in quantum mechanics. Traditionally, such an operational approach to physics is also taken in thermodynamics, a phenomenological theory corresponding to classical mechanics. (See Reference~\cite{L} and the references therein.) Thermodynamics answers what kind of physical processes are possible and what are impossible, when we perform operations for \textit{actively} processing states of physical systems described by classical mechanics, such as steam engines, rather than passively observing the systems.
This approach to physics in thermodynamics is operational in the sense that it abstracts Hamiltonian dynamics of the systems and introduces a class of idealized operations on the systems, such as adiabatic and isothermal processes. Abstract properties of the physical systems, such as energy and entropy, are characterized by analyzing the fundamental limitations on appropriate tasks, such as work extraction from the Carnot cycle, performed by these idealized operations in thermodynamics. While thermodynamics assumes that physical systems and operations are on \textit{macroscopic} scales, suppose that operations on \textit{microscopic} scales are at hand. On the microscopic scales, operations on the physical systems may be described not by classical but by \textit{quantum} mechanics. Making the most of such microscopic operations on quantum systems is known to have potential applications to information processing, and this way of information processing exploiting the advantage of quantum mechanics is called \textit{quantum information processing}. In quantum information processing, instead of a classical binary bit taking $0$ or $1$ as a state, a two-level quantum system, or a \textit{qubit}, can be used as a basic unit for carrying quantum information. Quantum information processing is performed by transforming quantum information represented by a quantum state of qubits, where in contrast with classical bits, an arbitrary \textit{superposition} of two distinguishable quantum states $\Ket{0}$ and $\Ket{1}$ may be taken as a quantum state of each qubit. In this sense, while states $0$ and $1$ of a classical bit represent classical information, a quantum state $\alpha_0\Ket{0}+\alpha_1\Ket{1}$ of a qubit can represent quantum information, in addition to the states $\Ket{0}$ and $\Ket{1}$ of the qubit representing classical information. Classical information can be obtained from a quantum state by a probabilistic process, a quantum measurement.
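As a concrete aside (not part of the thesis), the probabilistic rule behind such a measurement, the Born rule, can be illustrated numerically; the amplitudes below are arbitrary example values:

```python
import random

# A qubit state a0|0> + a1|1> as two complex amplitudes with
# |a0|^2 + |a1|^2 = 1 (example amplitudes, not from the thesis).
a0, a1 = 3 / 5, 4j / 5
p0, p1 = abs(a0) ** 2, abs(a1) ** 2
assert abs(p0 + p1 - 1) < 1e-12

def measure(p_zero, rng=random.random):
    # Born rule: outcome 0 with probability |a0|^2, else outcome 1.
    return 0 if rng() < p_zero else 1

random.seed(1)
counts = [0, 0]
for _ in range(100_000):
    counts[measure(p0)] += 1
# empirical frequencies approach (|a0|^2, |a1|^2) = (0.36, 0.64)
```

Repeated measurements of identically prepared qubits reveal only these outcome frequencies, which is one way to see that a single measurement cannot read out the amplitudes themselves.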
More generally than qubits, a $D$-dimensional quantum system is called a \textit{qudit} and is used for quantum information processing. Similarly to physical processes in thermodynamics achieved by the idealized operations, quantum information processing can also be regarded as a physical process of transforming quantum states by operations within the laws of quantum mechanics, in the sense that an initially given input of quantum or classical information to this process is represented as a quantum state and is transformed into the final output of quantum or classical information. Quantum information theory analyzes what kind of information processing can be achieved and what cannot within the laws of quantum mechanics, providing a quantitative understanding and an operational meaning of abstract properties even characteristic of quantum mechanics. Exploiting the quantum mechanical property of \textit{interference}, quantum information processing is potentially faster than classical information processing for performing some classical computational tasks, such as solving an arithmetic problem of prime factoring~\cite{P6}, simulating quantum systems~\cite{F2,L8}, and sampling from the solution of a linear system~\cite{H14}. In other words, large-scale quantum information processing potentially provides excessive computational power that is not simulatable by any classical algorithms in an efficient way. Moreover, spatially separated quantum systems in a superposition state may exhibit \textit{quantum entanglement}, a type of correlation characteristic of quantum mechanics, which is incompatible with any hidden-variable theory based on the paradigm of classical mechanics~\cite{B29,B27,B28}. If a quantum state has entanglement, the state is called an \textit{entangled state}, and otherwise a \textit{separable state}.
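As a small illustration (ours, not from the thesis): for a pure state of two qubits, $\alpha\Ket{00}+\beta\Ket{01}+\gamma\Ket{10}+\delta\Ket{11}$ is separable exactly when the $2\times 2$ matrix of amplitudes $\bigl(\begin{smallmatrix}\alpha & \beta\\ \gamma & \delta\end{smallmatrix}\bigr)$ has rank one, i.e., when $\alpha\delta-\beta\gamma=0$:

```python
import math

def is_product_state(a, b, c, d, tol=1e-12):
    # Pure two-qubit state a|00> + b|01> + c|10> + d|11> is a product
    # (separable) state iff the amplitude matrix [[a, b], [c, d]] has
    # rank one, i.e. iff its determinant a*d - b*c vanishes.
    return abs(a * d - b * c) < tol

s = 1 / math.sqrt(2)
# Bell state (|00> + |11>)/sqrt(2): entangled
assert not is_product_state(s, 0, 0, s)
# |+>|0> = (|00> + |10>)/sqrt(2): separable
assert is_product_state(s, 0, s, 0)
```

This determinant test is special to pure states of two qubits; deciding separability of general mixed states is far harder.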
Entanglement appearing in a bipartite system, \textit{i.e.}, one consisting of two subsystems, is called \textit{bipartite entanglement}, and entanglement appearing in a multipartite system, \textit{i.e.}, one consisting of more than two subsystems, is called \textit{multipartite entanglement}. In quantum information theory, information processing tasks exploiting such quantum mechanical properties are analyzed, so as to quantitatively characterize consequences of these quantum mechanical properties beyond classical mechanics. \begin{figure}\label{fig:distributed} \end{figure} Recent advances in quantum technology facilitate quantum information processing using quantum devices capable of coherently keeping quantum states of a quantum system inside and of performing low-noise operations for transforming these quantum states. There is, however, a technical difficulty in increasing the number of low-noise qubits built in one quantum device~\cite{P4}, and hence, the quantum system size of such a quantum device may be limited to small and intermediate scales of up to several dozens of qubits, at least in the near future. To achieve large-scale quantum information processing, larger quantum system sizes than those in such small- and intermediate-scale quantum devices are required. For scaling up quantum information processing, \textit{distributed quantum information processing}~\cite{V3,C9,W2} is considered to be a promising platform, where larger-scale information processing is achieved using multiple quantum devices connected by a network of quantum channels for communicating quantum information, that is, quantum communication, as illustrated in Figure~\ref{fig:distributed}.
In contrast to quantum information processing performed by arbitrarily transforming quantum states of a quantum system, the quantum devices in distributed quantum information processing share a composite quantum system whose subsystems are located in each device, and each device is allowed to perform state transformations only on the subsystem in the device. Nonlocal state transformations over different quantum devices are performed by combining these local state transformations in each device with quantum communication. Given multiple quantum devices in such a distributed setting, quantum entanglement shared among the devices is considered as a correlation which cannot be generated using a class of operations consisting of arbitrary local state transformations inside each quantum device within quantum mechanics as well as arbitrary inter-device communication of classical information represented by bits. This class of operations is called local operations and classical communication (LOCC)~\cite{D11,C19,C7}. To generate arbitrary entangled states shared among multiple devices from a separable state, quantum communication for transferring quantum states between the devices is necessary in addition to LOCC, and sufficient use of quantum communication allows the quantum devices to perform arbitrary nonlocal transformations of quantum states of the shared composite system. Conversely, if two devices initially share a particular type of bipartite entangled state, quantum communication between these devices can be simulated by a protocol, \textit{quantum teleportation}~\cite{B5}, using LOCC assisted by the shared bipartite entanglement. Hence, nonlocal state transformations over different quantum devices can be achieved by combining LOCC with shared entanglement. Distributed quantum information processing can be used as a framework for operational analysis of the quantum entanglement shared among the quantum devices.
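The single-qubit teleportation protocol mentioned above can be simulated classically; the following self-contained Python sketch (our illustration, not from the thesis) checks that, for every Bell-measurement outcome, Bob's corrected qubit carries the input amplitudes:

```python
import math

# n-qubit state vectors as lists of 2**n complex amplitudes; basis
# index b = q0 q1 q2 read as bits, with q0 the leftmost qubit.
def apply_1q(state, gate, q, n=3):
    out = [0j] * len(state)
    for idx, amp in enumerate(state):
        bit = (idx >> (n - 1 - q)) & 1
        for new_bit in (0, 1):
            j = (idx & ~(1 << (n - 1 - q))) | (new_bit << (n - 1 - q))
            out[j] += gate[new_bit][bit] * amp
    return out

def apply_cnot(state, c, t, n=3):
    out = [0j] * len(state)
    for idx, amp in enumerate(state):
        j = idx ^ (1 << (n - 1 - t)) if (idx >> (n - 1 - c)) & 1 else idx
        out[j] += amp
    return out

s = 1 / math.sqrt(2)
H = ((s, s), (s, -s))
X = ((0, 1), (1, 0))
Z = ((1, 0), (0, -1))
I2 = ((1, 0), (0, 1))

# Qubit 0 holds the unknown state; qubits 1 (Alice) and 2 (Bob)
# share the Bell pair (|00> + |11>)/sqrt(2).
alpha, beta = 0.6, 0.8j
state = [0j] * 8
for q0, amp in ((0, alpha), (1, beta)):
    state[q0 * 4 + 0] += amp * s   # pair in |00>
    state[q0 * 4 + 3] += amp * s   # pair in |11>

# Alice's Bell measurement: CNOT(0 -> 1), H on qubit 0, then read
# qubits 0 and 1 in the computational basis.
state = apply_cnot(state, 0, 1)
state = apply_1q(state, H, 0)

for m0 in (0, 1):
    for m1 in (0, 1):
        # Bob's unnormalized post-measurement qubit for outcome (m0, m1)
        bob = [state[4 * m0 + 2 * m1 + k] for k in (0, 1)]
        # classical corrections: X if m1 = 1, then Z if m0 = 1
        for gate in ((X if m1 else I2), (Z if m0 else I2)):
            bob = [gate[0][0] * bob[0] + gate[0][1] * bob[1],
                   gate[1][0] * bob[0] + gate[1][1] * bob[1]]
        # each outcome occurs with probability 1/4; up to that factor
        # Bob now holds exactly alpha|0> + beta|1>
        assert abs(bob[0] - alpha / 2) < 1e-9
        assert abs(bob[1] - beta / 2) < 1e-9
```

Only the two classical measurement bits travel from Alice to Bob, which is the sense in which LOCC plus one shared Bell pair simulates a one-qubit quantum channel.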
Entanglement serves as a \textit{resource} assisting LOCC in distributed quantum information processing, for achieving nonlocal state transformations over spatially separated quantum systems. While classical communication can be reliably performed using current technology, quantum communication for sharing entanglement is more challenging and costly. In this regard, it is natural to investigate efficient use of entanglement when the cost of LOCC is negligible. This approach of regarding entanglement as resources is a fundamental starting point of \textit{entanglement theory}~\cite{H2,P1,E5}, which has been successful in establishing operational understanding of bipartite entanglement. Among bipartite entangled states, convertibility between these entangled states under LOCC establishes a partial order of the states in terms of their usability as a resource. This partial order yields quantifications of entanglement in terms of its value as a resource, where it is required that these quantifications be monotonically nonincreasing under LOCC\@. Such a quantification of entanglement is called an entanglement measure. This way of characterizing entanglement may also generalize to a more general and abstract formulation called \textit{quantum resource theory}, so that the resource-theoretic approach is applicable to investigating properties characteristic of quantum mechanics other than entanglement, such as coherence and purity~\cite{C13}. Multipartite entanglement also serves as a resource for multiparty tasks relevant to distributed quantum information processing, such as measurement-based quantum computation~\cite{R5,R6,R7}, distributed sensing~\cite{K1,E4}, and quantum networking~\cite{P2}. Multipartite entanglement ubiquitously appears in many-body quantum systems in condensed matter physics~\cite{A12} and quantum gravity~\cite{R4}.
However, straightforward applications of the bipartite resource-theoretic analysis are not sufficient to characterize properties of multipartite entanglement on more than two systems, because mathematical structures of multipartite entangled states are not as simple as those of bipartite entanglement~\cite{E2,W3,B26}. Especially, in the case of multi-qudit systems whose subsystems are of equal dimension, almost no LOCC transformation among quantum states of the system is possible~\cite{G1,S3}, and hence, the paradigm based on the partial order of bipartite entanglement under LOCC does not generalize to multipartite entanglement. This thesis aims to characterize properties of multipartite entanglement through an operational approach, not only using the framework of LOCC, but also using quantum communication networks in addition to LOCC, motivated by the settings for distributed quantum information processing. In distributed quantum information processing, a nonlocal transformation of a quantum state shared between two quantum devices can be performed by first transferring one device's part of the state to the other device, and then performing the transformation locally on the latter device, followed by transferring the state back. While this strategy for performing a nonlocal state transformation is not always the most efficient in terms of a communication cost, this strategy exactly and deterministically achieves the transformation. Given two quantum devices sharing a quantum state, the communication task of transferring one device's part of this shared state to the other is called \textit{quantum state merging}~\cite{H3,H4}. Part~\ref{part:1} of this thesis aims to reduce the cost of achieving quantum state merging performed in this two-party LOCC setting of distributed quantum information processing.
While the original formulation of quantum state merging in References~\cite{H3,H4} and their successive works are mainly targeted at quantum communication on large scales, protocols aimed at efficient distributed quantum information processing over a network should be designed to be suitable for arbitrarily small-dimensional quantum systems, especially, on the small and intermediate scales relevant to distributed quantum information processing. Part~\ref{part:1} of this thesis analyzes quantum state merging on the small and intermediate scales, different from the existing studies on the large scales. Moreover, distributed quantum information processing may involve more than two quantum devices, where transformations of multipartite entangled states play central roles. Part~\ref{part:2} of this thesis quantitatively analyzes requirements of quantum communication and quantum system sizes required for transforming multipartite entanglement in distributed quantum information processing. The results established in Part~\ref{part:1} on quantum state merging are used for evaluating the requirements of quantum communication. Part~\ref{part:2} also introduces and analyzes tasks of multipartite entanglement transformations in a setting where local quantum system sizes in LOCC are limited, motivated by distributed quantum information processing on the small and intermediate scales. These analyses clarify fundamental limitations and potential applications of distributed quantum information processing to characterize properties of quantum entanglement in the small- and intermediate-scale settings and multipartite settings relevant to distributed quantum information processing, providing a paradigm for investigating multipartite entanglement in distributed quantum information processing over networks beyond the state convertibility introducing the partial order of entanglement under LOCC\@. 
More detailed backgrounds and settings are given after the preliminaries in Chapter~\ref{sec:preliminaries_all}, at the beginning of Parts~\ref{part:1} and~\ref{part:2}. \section{Technologies for distributed quantum information processing} This section summarizes experimental technologies relevant to distributed quantum information processing, to which the theoretical results in this thesis are potentially applicable. Quantum technologies cover a wide range of applications such as quantum computation, quantum communication, quantum simulation, and quantum sensing~\cite{A18}. Distributed quantum information processing can be realized by combining technologies for quantum computation and quantum communication. Ongoing experimental approaches for realizing quantum computation include superconducting circuits~\cite{W13}, ion traps~\cite{H17,C22}, photonic systems~\cite{K9}, and nitrogen-vacancy (NV) centers~\cite{D13}. Superconducting circuits achieved control of $9$ qubits in 2015~\cite{K10}, and ion traps achieved control of $5$ qubits in 2016~\cite{D14}. A major challenge in realizing quantum computation stems from noise, and one way to reduce the effects of noise is quantum error correction~\cite{G,D,T10,B}, where quantum information is represented as a superposition of predetermined multipartite entangled states of a quantum error correcting code, so that local noise can be detected and corrected. If the noise of each quantum operation on qubits is below a given threshold, errors during quantum computation can be arbitrarily suppressed by quantum error correction. However, it is not straightforward to increase the number of controllable qubits required for quantum error correction while keeping noise low; that is, there may exist a trade-off relation between the quantity and quality of qubits. Distributed quantum information processing is considered to be a candidate for scaling up quantum computation if a limited number of low-noise qubits are available.
This situation contrasts with that considered in theoretical research on noisy intermediate-scale quantum (NISQ) technology~\cite{P4}, which aims to find advantages and applications of intermediate-scale quantum devices of several dozen to a few hundred qubits that compromise on reducing noise. In distributed quantum information processing, quantum information may be represented using a quantum error correcting code for fault tolerance, and hence, analysis of communication tasks for a state in a superposition of fixed entangled states plays an essential role. Using such low-noise local operations, noisy entanglement at a distance generated by lossy quantum communication may be purified by means of entanglement distillation~\cite{B3}. As for quantum communication, a quantum cryptographic task, quantum key distribution, was demonstrated using photonic systems and optical fibers over $307$ km in 2015~\cite{K11}. However, distribution of quantum entanglement is currently more difficult due to a lack of low-noise local quantum systems, as well as the inefficiency of conversion between matter-based qubits and photons. Entanglement at a distance was detected between ion traps in 2007~\cite{M7}, between NV centers in 2013~\cite{B30}, and between electron spins separated by 1.3 km in 2015~\cite{H18}. While fault-tolerant networks required for distributed quantum information processing pose technological challenges~\cite{W2}, theoretical analysis of the minimal quantum communication for achieving distributed quantum information processing is beneficial for clarifying technological targets in future experiments. \section{Organization of this thesis} The rest of this thesis is organized as follows.
After providing preliminaries to the rest of this thesis in Chapter~\ref{sec:preliminaries_all}, Part~\ref{part:1} analyzes a communication task, quantum state merging, between two spatially separated quantum parties having arbitrarily small-dimensional systems, so that the results are applicable to any two small- and intermediate-scale quantum devices on a network used in distributed quantum information processing. Using the results established in Part~\ref{part:1}, Part~\ref{part:2} analyzes properties of multipartite entanglement within the framework of distributed quantum information processing over networks. The results in Part~\ref{part:1} and Part~\ref{part:2} are summarized as follows, and the conclusion of these results is given in Part~\ref{part:conclusion}. The structure of chapters in each part is illustrated in Figure~\ref{fig:organization}. \begin{itemize} \item Part~\ref{part:1} analyzes a communication task of quantum state merging~\cite{H3,H4} on small and intermediate scales. In distributed quantum information processing, two quantum devices, namely, $A$ and $B$, may share a correlated state, and state merging is a fundamental communication task aiming at transferring $A$'s part of this shared state to $B$, where $B$'s part is called quantum side information and may be used for reducing the required communication costs in state merging. Aiming at transferring quantum information on small and intermediate scales relevant to distributed quantum information processing, Part~\ref{part:1} considers a \textit{one-shot} scenario of state merging, where only a single copy of the shared state is given. While existing protocols achieving one-shot quantum state merging are costly on the small and intermediate scales, Chapter~\ref{sec:merge} in Part~\ref{part:1} establishes a protocol applicable even on the small and intermediate scales, and analyzes lower bounds for the minimal costs in the one-shot scenario of state merging.
Also, aiming at making the most of quantum side information in a one-shot scenario, Chapter~\ref{sec:two_way} in Part~\ref{part:1} proves that $B$'s preprocessing of quantum side information and backward classical communication from $B$ to $A$ can be indispensable for minimizing the cost in one-shot state merging from $A$ to $B$. These results complement existing protocols achieving nearly optimal one-shot state merging on a large scale, opening the way to another direction for future research on transferring quantum information on small and intermediate scales. \item Part~\ref{part:2} analyzes properties of multipartite entanglement in distributed quantum information processing, from the viewpoints of quantum communication costs over networks and the sizes of local quantum systems. Using the protocols established in Part~\ref{part:1}, Chapter~\ref{sec:distributed_encoding_decoding} in Part~\ref{part:2} evaluates costs of implementing multipartite nonlocal quantum state transformations for encoding and decoding quantum information in a multipartite quantum system, progressing beyond quantifications of bipartite and multipartite entanglement based on quantum communication costs. Encoding and decoding of quantum information are fundamental building blocks in quantum information processing, and the difference between them is quantitatively characterized in terms of their implementation costs in distributed quantum information processing on a given tree-topology network for quantum communication. In Chapter~\ref{sec:multipartite} in Part~\ref{part:2}, the advantage of using multipartite entanglement over bipartite entanglement is analyzed in terms of local quantum system sizes in distributed quantum information processing. Concrete examples are given to prove that multipartite entanglement outperforms bipartite entanglement when limitations on the local system sizes exist.
These results facilitate operational understanding and efficient use of multipartite entanglement, from the viewpoints motivated by distributed quantum information processing over the networks beyond the state convertibility under LOCC\@. \end{itemize} \begin{figure}\label{fig:organization} \end{figure} \chapter{\label{sec:preliminaries_all}Preliminaries} This chapter summarizes concepts in quantum information theory~\cite{N1,W5,W11} relevant to the rest of this thesis. Section~\ref{sec:notations} summarizes the formulation of general quantum mechanical operations used in quantum information processing. Then, Section~\ref{sec:locc} defines local operations and classical communication (LOCC), which is a class of operations playing central roles in analyzing quantum entanglement. Decomposition theorems used for operational analysis of entanglement are summarized in Section~\ref{sec:decomposition}, and basic results of entanglement theory in Section~\ref{sec:entanglement}. \section{\label{sec:notations}Operations in quantum mechanics} In quantum mechanics, a physical system is represented as a complex Hilbert space. Systems that can be represented as finite-dimensional Hilbert spaces are considered here for simplicity. A composite system consisting of different subsystems is represented by the tensor product of Hilbert spaces representing each subsystem. A composite system consisting of two subsystems is said to be \textit{bipartite}, and that of more than two subsystems \textit{multipartite}. For a system labeled $A$, let $\mathcal{H}^{A}$ denote a Hilbert space representing the system, where the dimension of the Hilbert space may be written as \begin{equation} D^A\coloneq\dim\mathcal{H}^A. \end{equation} If $D^A\geqq 2$, then the system $\mathcal{H}^{A}$ is called a \textit{qudit}, and in particular, if $D^A=2$, a \textit{qubit}. Let $\mathbb{C}$ denote the set of complex numbers, $\mathbb{Q}$ that of rational numbers, and $\mathbb{R}$ that of real numbers.
Then, $\mathcal{H}^A$ is isomorphic to $\mathbb{C}^{D^A}$, which is denoted by \begin{equation} \mathcal{H}^A=\mathbb{C}^{D^A}. \end{equation} A Hilbert space $\mathcal{H}$ representing a composite system consisting of $N$ subsystems labeled $A_1\,,A_2\,,\ldots,A_N$ satisfies \begin{equation} \mathcal{H}=\bigotimes_{k=1}^N\mathcal{H}^{A_k}=\mathcal{H}^{A_1}\otimes\cdots\otimes\mathcal{H}^{A_N}. \end{equation} For any system labeled $A$ and represented by a $D^A$-dimensional Hilbert space $\mathcal{H}^{A}$, fix an arbitrary set of $D^A$ mutually orthogonal normalized vectors as preferred, and write this set as \begin{equation} \left\{\Ket{l}^A \in\mathcal{H}^A:l\in\left\{0,\ldots,D^A-1\right\}\right\}, \end{equation} which is called the \textit{computational basis} of $\mathcal{H}^A$. The identity operator on $\mathcal{H}^A$ is denoted by \begin{equation} \mathbb{1}^A\coloneq\sum_{l=0}^{D^A-1}\Ket{l}\Bra{l}^A, \end{equation} which may also be written as $\mathbb{1}_{D^A}^A$ for clarity of dimension. The identity operators in formulas may be omitted for brevity. A Hilbert space spanned by vectors $\Ket{\psi_0},\Ket{\psi_1},\ldots$ is denoted by \begin{equation} \spn\left\{\Ket{\psi_0},\Ket{\psi_1},\ldots\right\}. \end{equation} A pure quantum state of a system $\mathcal{H}$ is represented by a normalized vector, denoted by a ket \begin{align} &\Ket{\psi}\in\mathcal{H},\\ &\left\|\Ket{\psi}\right\|=1, \end{align} where $\left\|\cdots\right\|$ represents the Euclidean norm, and $e^{\textup{i}\theta}\Ket{\psi}$ is identified with $\Ket{\psi}$ for any phase $\theta$. In the following, a ket is normalized unless explicitly noted that it is unnormalized. 
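The computational basis, the identity operator, and the normalization of a ket can be illustrated numerically (a minimal sketch assuming Python with numpy, which is not part of this thesis; the qutrit dimension and the state are hypothetical choices):

```python
import numpy as np

# Computational basis of a system H^A = C^{D^A}, here for a hypothetical
# qudit with D^A = 3.
D = 3
basis = [np.eye(D)[:, l] for l in range(D)]

# Identity operator 1^A = sum_l |l><l|.
identity = sum(np.outer(ket, ket.conj()) for ket in basis)
assert np.allclose(identity, np.eye(D))

# A pure state is a normalized vector; its Euclidean norm equals 1,
# and a global phase e^{i theta} does not change the state.
psi = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2)
assert np.isclose(np.linalg.norm(psi), 1.0)
```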
More generally, a mixed state of the system $\mathcal{H}$ is represented by a positive semidefinite operator of unit trace, which is called a density operator, and these conditions of a density operator $\psi$ are denoted by \begin{align} \psi&\geqq 0, \left(\Leftrightarrow \Braket{\phi|\psi|\phi}\geqq 0,\forall\Ket{\phi}\right)\\ \tr\psi&\coloneq\sum_{l=0}^{D-1}\Braket{l|\psi|l}=1, \end{align} where ${\left\{\Ket{l}\right\}}_{l=0,\ldots,D-1}$ is the computational basis of $\mathcal{H}$, while $\tr\psi$ does not depend on the choice of the basis. Given a system labeled $A$, the set of density operators on $\mathcal{H}^A$ is denoted by $\mathcal{D}\left(\mathcal{H}^A\right)$, and the set of bounded operators on $\mathcal{H}^A$ is denoted by $\mathcal{B}\left(\mathcal{H}^A\right)$. A state of a bipartite system is called a \textit{bipartite state}, and a state of a multipartite system a \textit{multipartite state}. Superscripts of an operator or a vector represent the labels of the corresponding Hilbert spaces, \textit{e.g.}, for a mixed state \begin{equation} \psi^{A_1 A_2 A_3} \in \mathcal{D}\left(\mathcal{H}^{A_1}\otimes\mathcal{H}^{A_2}\otimes\mathcal{H}^{A_3}\right), \end{equation} and for a pure state \begin{equation} \Ket{\psi}^{A_1 A_2 A_3} \in\mathcal{H}^{A_1}\otimes\mathcal{H}^{A_2}\otimes\mathcal{H}^{A_3}. \end{equation} A density operator corresponding to a pure state may be written as \begin{equation} \psi^{A_1 A_2 A_3}\coloneq\Ket{\psi}\Bra{\psi}^{A_1 A_2 A_3}. \end{equation} Superscripts may be omitted if obvious from the context. Operations on quantum systems can be considered to consist of unitary transformations and measurements assisted by adding and discarding auxiliary systems. Time evolution of a quantum state may be described using an exponential function of Hamiltonians in quantum mechanics, such as that derived from the Schr\"{o}dinger equation in cases of non-relativistic closed systems. 
This description using a Hamiltonian is especially suited to situations where the Hamiltonian $H$ is time-independent and the time evolution in time $t_1-t_0$ is represented as $e^{-\textup{i}H\left(t_1-t_0\right)}$, or $H$ is, if time-dependent, changed according to a few parameters. However, quantum information processing considers different situations where such a Hamiltonian is actively engineered and arbitrarily controlled in a time-dependent way. To analyze what kind of information processing is possible within quantum mechanics in such situations, the description explicitly using a Hamiltonian is replaced by a unitary transformation of states, which is represented for a system $\mathcal{H}^A$ as a unitary operator $U^A$. Measurements are operations probabilistically obtaining classical information of measurement outcomes from a quantum state, where it is assumed that the number of the measurement outcomes is finite. A measurement on a system $\mathcal{H}^A$ can be represented by a family of \textit{measurement operators} \begin{equation} \left\{M_m^A: m\in\left\{0,\ldots,n-1\right\}\right\} \end{equation} satisfying the completeness condition \begin{equation} \sum_{m=0}^{n-1}{M_m^A}^\dag M_m^A =\mathbb{1}, \end{equation} where $m$ is a label representing an outcome, and $n$ denotes the number of possible outcomes. The number of the outcomes may not be explicitly written if not of interest, and the labels for the outcomes may be written using subscripts, such as ${\left\{M_m^A\right\}}_m$ in the above case.
Given any state $\psi^A$, after performing a measurement represented by ${\left\{M_m^A\right\}}_m\,$, the post-measurement state ${\psi_m^{\prime}}^A$ corresponding to each measurement outcome $m$ is given by \begin{align} {\psi_m^{\prime}}^A&\coloneq\frac{1}{p\left(m\right)}{M_m^A \psi {M_m^A}^\dag},\\ p\left(m\right)&\coloneq\tr{M_m^A \psi {M_m^A}^\dag}, \end{align} where $p\left(m\right)$ is a probability distribution representing the probability of obtaining each measurement outcome. For example, a projective measurement in the computational basis ${\left\{\Ket{l}\right\}}_{l}$ can be represented as a family of projectors \begin{equation} {\left\{\Pi_l\coloneq\Ket{l}\Bra{l}\right\}}_l. \end{equation} By setting $n=1$, any unitary transformation $U$ can also be included in this formulation of measurement operators, where the corresponding family of measurement operators reduces to $\left\{U\right\}$. If the post-measurement states of a measurement represented by measurement operators ${\left\{M_m^A\right\}}_{m=0,\ldots,n-1}$ are not of interest, it is also possible to consider a family of positive semidefinite operators called a \textit{positive operator-valued measure} (POVM) \begin{equation} \left\{\Lambda_m^A\coloneq {M_m^A}^\dag M_m^A\geqq 0: m\in\left\{0,\ldots,n-1\right\}\right\} \end{equation} satisfying the completeness condition \begin{equation} \sum_{m=0}^{n-1}\Lambda_m =\mathbb{1}. \end{equation} Given any state $\psi^A$, consider performing a measurement of $\psi^A$ represented by a POVM ${\left\{\Lambda_m^A={M_m^A}^\dag M_m^A\right\}}_m\,$; then, the probability of obtaining each measurement outcome $m$ is \begin{equation} p\left(m\right)=\tr{\Lambda_m^A \psi}=\tr{M_m^A \psi {M_m^A}^\dag}. \end{equation} Note that for any state $\psi^A$ and any measurement, the positivity and the completeness condition guarantee the axiom of probability: \begin{align} p\left(m\right)&\geqq 0,\quad \forall m;\\ \sum_m p\left(m\right)&=1.
\end{align} A more general formulation of measurements may include situations where classical post-processing of the measurement outcomes is allowed, and some of the outcomes can be coarse-grained by this classical post-processing. This situation is formulated using \textit{quantum instruments}. A quantum instrument on $\mathcal{H}^A$ is represented using linear maps \begin{equation} \mathcal{E}_m^A:\mathcal{B}\left(\mathcal{H}^A\right)\to \mathcal{B}\left(\mathcal{H}^A\right) \end{equation} in the form of \begin{equation} \label{eq:instrument} \mathcal{E}_m^A\left(\psi^A\right)\coloneq\sum_{j=0}^{J_m -1} \left(M_{m,j}^A\right)\psi^A{\left(M_{m,j}^A\right)}^\dag, \end{equation} where $m\in\left\{0,\ldots,n-1\right\}$ is a label representing an outcome, $n$ denotes the number of possible outcomes, ${\left\{M_{m,j}^A:m\in\left\{0,\ldots,n-1\right\},j\in\left\{0,\ldots,J_m-1\right\}\right\}}$ is a family of measurement operators whose outcomes are labeled by $m$ for the quantum instrument and by $j$ to be erased by the classical post-processing, and $J_m$ may depend on $m$. This post-processing for erasing some of the outcomes is called \textit{coarse-graining}. More precisely, given a family of measurement operators \begin{equation} \left\{M_0\,,\ldots,M_{n-1}\right\}, \end{equation} coarse-graining divides these measurement operators into $n^\prime$ subgroups of $J_0\,,\ldots,J_{n^\prime-1}$ elements, respectively, satisfying $J_0+\cdots+J_{n^\prime-1}=n$, that is, \begin{equation} \left\{M_{0,0}\,,\ldots,M_{0,J_0-1}\,,M_{1,0}\,,\ldots,M_{1,J_1-1}\,,\ldots,M_{n^\prime-1,0}\,,\ldots,M_{n^\prime-1,J_{n^\prime-1}-1}\right\}. \end{equation} Using this coarse-graining, a quantum instrument is defined as a family of linear maps \begin{equation} \left\{\mathcal{E}_m^A:m=0,\ldots,n^\prime-1\right\}, \end{equation} where each $\mathcal{E}_m^A$ is in the form of Equation~\eqref{eq:instrument}.
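A quantum instrument obtained by coarse-graining can be sketched numerically (a minimal illustration assuming Python with numpy; the particular measurement operators are hypothetical choices, not taken from the text):

```python
import numpy as np

# A quantum instrument on a qubit built by coarse-graining three hypothetical
# measurement operators {M_{0,0}, M_{0,1}, M_{1,0}} into n' = 2 outcomes
# (J_0 = 2, J_1 = 1), following the form of the instrument equation above.
Z = np.diag([1.0, -1.0])
ops = [np.sqrt(0.25) * np.eye(2),   # M_{0,0}
       np.sqrt(0.25) * Z,           # M_{0,1}
       np.sqrt(0.5) * np.eye(2)]    # M_{1,0}
# Completeness of the underlying measurement: sum M^dag M = identity.
assert np.allclose(sum(m.conj().T @ m for m in ops), np.eye(2))

groups = {0: ops[:2], 1: ops[2:]}   # coarse-graining into outcomes m = 0, 1

def instrument(m, psi):
    """CP map E_m(psi) = sum_j M_{m,j} psi M_{m,j}^dag."""
    return sum(M @ psi @ M.conj().T for M in groups[m])

# The sum over m is trace preserving, so outcome probabilities sum to one.
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = np.outer(ket_plus, ket_plus.conj())
probs = [np.trace(instrument(m, psi)).real for m in (0, 1)]
assert np.isclose(sum(probs), 1.0)
```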
Given any state $\psi^A$, after performing a measurement represented by ${\left\{\mathcal{E}_m^A\right\}}_m\,$, the post-measurement state for each outcome $m$ is given by \begin{equation} \frac{\mathcal{E}_m^A\left(\psi^A\right)}{\tr\mathcal{E}_m^A\left(\psi^A\right)}, \end{equation} which is obtained with probability \begin{equation} {p\left(m\right)}\coloneq\tr\mathcal{E}_m^A\left(\psi^A\right). \end{equation} The single-outcome case of $n=1$ corresponds to the situation of performing the measurement ${\left\{M_{m,j}^A\right\}}_{m,j}$ followed by erasing all the outcomes, which can be performed deterministically. An auxiliary system is considered to be an additionally prepared system different from an initially given system, so that operations can be performed on the composite system consisting of these systems. When added, this auxiliary system is initialized in a fixed state; in the following, the fixed initial state is chosen as $\Ket{0}$ for simplicity. Given any state $\psi^A$, adding an auxiliary system $\mathcal{H}^{A^\prime}$ to the given system $\mathcal{H}^A$ followed by a unitary transformation $U^{AA^\prime}$ on $\mathcal{H}^A\otimes\mathcal{H}^{A^\prime}$ is represented by an isometry transformation \begin{equation} U^{A\to AA^\prime}\coloneq U^{AA^\prime}\left(\mathbb{1}^A\otimes\Ket{0}^{A^\prime}\right) \end{equation} satisfying \begin{equation} \left(U^{A\to AA^\prime}\right)\psi^A {\left(U^{A\to AA^\prime}\right)}^\dag=U^{AA^\prime}\left(\psi^A\otimes\Ket{0}\Bra{0}^{A^\prime}\right){U^{AA^\prime}}^\dag. \end{equation} In other words, isometry transformations are norm-preserving, left-invertible transformations from states of a smaller-dimensional quantum system to those of a larger-dimensional quantum system. Note that such a unitary transformation $U^{AA^\prime}$ corresponding to the isometry transformation $U^{A\to AA^\prime}$ is not unique in general.
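The isometry construction $U^{A\to AA^\prime}=U^{AA^\prime}\left(\mathbb{1}^A\otimes\Ket{0}^{A^\prime}\right)$ can be illustrated as follows (a sketch assuming Python with numpy; the choice of $U^{AA^\prime}$ as the CNOT gate is a hypothetical example, not from the text):

```python
import numpy as np

# The isometry U^{A -> A A'} = U^{A A'} (1^A tensor |0>^{A'}), with the
# unitary U^{A A'} chosen (as a hypothetical example) to be the CNOT gate
# with control A and target A'.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
ket0 = np.array([[1.0], [0.0]])          # |0> as a column vector
V = CNOT @ np.kron(np.eye(2), ket0)      # shape (4, 2)

# Isometry condition: V^dag V equals the identity on the input system.
assert np.allclose(V.conj().T @ V, np.eye(2))

# Acting on |+> produces the entangled state (|00> + |11>)/sqrt(2).
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
bell = V @ ket_plus
assert np.allclose(bell, [2**-0.5, 0, 0, 2**-0.5])
```

The left-inverse is $V^\dag$ restricted to the image of $V$, matching the left-invertibility of isometries.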
Superscripts of an operator such as $U^{A\to AA^\prime}$ represent the input system $\mathcal{H}^A$ and the output system $\mathcal{H}^A\otimes\mathcal{H}^{A^\prime}$ for the state transformation represented by the operator. Conversely, consider a situation of discarding a part of the subsystems comprising a composite system. Given a state of such a composite system, a state obtained by discarding a part of the subsystems is called a reduced state of the rest of the subsystems. Such a reduced state is represented as a state obtained by performing the partial trace on the Hilbert spaces representing the discarded subsystems. The partial trace on a Hilbert space representing a discarded subsystem is a linear transformation such that any deterministic operation on the discarded system before the partial trace does not change the reduced state after the partial trace. A reduced state may be represented by superscripts if obvious from the context. For example, given a system represented by $\mathcal{H}^{X_1}\otimes\mathcal{H}^{X_2}\otimes\mathcal{H}^{X_3}$ and a state $\psi^{X_1 X_2 X_3} \in \mathcal{D}\left(\mathcal{H}^{X_1}\otimes\mathcal{H}^{X_2}\otimes\mathcal{H}^{X_3}\right)$, the partial trace on $\mathcal{H}^{X_2}\otimes\mathcal{H}^{X_3}$ is denoted by $\tr_{X_2 X_3}\,$, and the reduced state of $\mathcal{H}^{X_1}$ is represented as \begin{equation} \begin{split} \psi^{X_1}&\coloneq\tr_{X_2 X_3}\psi^{X_1 X_2 X_3}\\ &=\sum_{l=0}^{D^{X_2}-1}\sum_{l^\prime =0}^{D^{X_3}-1}\left(\mathbb{1}^{X_1}\otimes\Bra{l}^{X_2}\otimes\Bra{l^\prime}^{X_3}\right)\psi^{X_1 X_2 X_3}\left(\mathbb{1}^{X_1}\otimes\Ket{l}^{X_2}\otimes\Ket{l^\prime}^{X_3}\right), \end{split} \end{equation} where $D^{X_2}=\dim\mathcal{H}^{X_2}$ and $D^{X_3}=\dim\mathcal{H}^{X_3}$. Combining the above operations is sufficient for achieving any operation consistent with quantum mechanics. A transformation of quantum states is represented by a linear map, and in the following, a linear map may be simply referred to as a map.
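The partial trace above can be computed numerically (a sketch assuming Python with numpy; the reshape-based implementation is an illustrative choice equivalent to the basis-bra-and-ket sum):

```python
import numpy as np

# Partial trace over the second subsystem of a bipartite state, following
# the sum over computational-basis bras and kets in the text.
def partial_trace_second(rho, d1, d2):
    """Reduced state on the first subsystem of a (d1*d2)-dimensional state."""
    # Axes after reshape: (i1, i2, j1, j2); tracing axes 1 and 3 sums over
    # the computational basis of the discarded subsystem.
    return np.trace(rho.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

# Example: the two-qubit state (|00> + |11>)/sqrt(2).
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)

rho_1 = partial_trace_second(rho, 2, 2)
assert np.allclose(rho_1, np.eye(2) / 2)  # maximally mixed reduced state
```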
For example, the identity map on a system $\mathcal{H}^{A}$ is denoted by $\id^A$ and due to linearity, \begin{equation} \id^A\left(\psi^A\right)=\psi^A \end{equation} is satisfied for any $\psi^A\in\mathcal{B}\left(\mathcal{H}^A\right)$. Since adding and discarding auxiliary systems are allowed, a map representing a state transformation may have different input and output systems represented by different Hilbert spaces. In the following, superscripts of linear maps represent the labels of input and output systems; \textit{e.g.}, for cases where input and output systems are the same, a map may be written as \begin{equation} \mathcal{N}^{A}:\mathcal{B}\left(\mathcal{H}^{A}\right)\to\mathcal{B}\left(\mathcal{H}^{A}\right), \end{equation} and otherwise, a map for a state transformation from a system $\mathcal{H}^{A_\textup{in}}$ to a system $\mathcal{H}^{A_\textup{out}}$ may be written as \begin{equation} \mathcal{N}^{A_\textup{in}\to A_\textup{out}}:\mathcal{B}\left(\mathcal{H}^{A_\textup{in}}\right)\to\mathcal{B}\left(\mathcal{H}^{A_\textup{out}}\right).
\end{equation} Any linear map $\mathcal{N}^{A_\textup{in}\to A_\textup{out}}$ representing a deterministic state transformation has to satisfy the following two properties to be consistent with the axiom of probability: \begin{description} \item[Completely positive property] Given any auxiliary system $\mathcal{H}^R$ and any operator $\psi^{RA_\textup{in}}\in\mathcal{B}\left(\mathcal{H}^R\otimes\mathcal{H}^{A_\textup{in}}\right)$, if the operator is a positive semidefinite operator, that is, \begin{equation} \psi^{RA_\textup{in}}\geqq 0, \end{equation} then the operator obtained by performing $\mathcal{N}^{A_\textup{in}\to A_\textup{out}}$ on $\mathcal{H}^{A_\textup{in}}$ is also positive semidefinite, that is, \begin{equation} \left(\id^R\otimes\mathcal{N}^{A_\textup{in}\to A_\textup{out}}\right)\left(\psi^{RA_\textup{in}}\right)\geqq 0; \end{equation} \item[Trace-preserving property] Given any operator $\psi^{A_\textup{in}}\in\mathcal{B}\left(\mathcal{H}^{A_\textup{in}}\right)$, $\mathcal{N}^{A_\textup{in}\to A_\textup{out}}$ preserves the trace of the operator, that is, \begin{equation} \tr\mathcal{N}^{A_\textup{in}\to A_\textup{out}}\left(\psi^{A_\textup{in}}\right)=\tr\psi^{A_\textup{in}}. \end{equation} \end{description} A map satisfying these properties is called a completely positive and trace-preserving (CPTP) map. Given any CPTP map $\mathcal{N}^{A_\textup{in}\to A_\textup{out}}$, there exist a Hilbert space $\mathcal{H}^E$ representing an auxiliary system and an isometry transformation $U_\mathcal{N}^{A_\textup{in}\to A_\textup{out}E}$ from $\mathcal{H}^{A_\textup{in}}$ to $\mathcal{H}^{A_\textup{out}}\otimes\mathcal{H}^{E}$ such that for any input state $\psi^{A_\textup{in}}$ \begin{equation} \mathcal{N}^{A_\textup{in}\to A_\textup{out}}\left(\psi^{A_\textup{in}}\right)=\tr_E \left(U_\mathcal{N}^{A_\textup{in}\to A_\textup{out}E}\right)\psi^{A_\textup{in}}{\left(U_\mathcal{N}^{A_\textup{in}\to A_\textup{out}E}\right)}^\dag.
\end{equation} This representation of a CPTP map is called the \textit{Stinespring dilation} of the CPTP map, implying that state transformations represented by any CPTP maps can be achieved by adding an auxiliary system, performing a unitary transformation, and discarding a part of the subsystems. CPTP maps are also referred to as channels. To investigate properties of a CPTP map $\mathcal{N}^{A_\textup{in}\to A_\textup{out}}$, it is useful to consider the Choi operator $J\left(\mathcal{N}^{A_\textup{in}\to A_\textup{out}}\right)\in\mathcal{B}\left(\mathcal{H}^R\otimes\mathcal{H}^{A_\textup{out}}\right)$ of $\mathcal{N}^{A_\textup{in}\to A_\textup{out}}$ defined as \begin{equation} \label{eq:choi_operator} J\left(\mathcal{N}^{A_\textup{in}\to A_\textup{out}}\right)\coloneq\left(\id^R\otimes\mathcal{N}^{A_\textup{in}\to A_\textup{out}}\right)\left(\left(\sum_{l=0}^{D^{A_\textup{in}}-1}\Ket{l}^R\otimes\Ket{l}^{A_\textup{in}}\right)\left(\sum_{l=0}^{D^{A_\textup{in}}-1}\Bra{l}^R\otimes\Bra{l}^{A_\textup{in}}\right)\right), \end{equation} where this Choi operator is not normalized, $\mathcal{H}^R$ is an additional Hilbert space for introducing this Choi operator, and \begin{equation} D^{A_\textup{in}}\coloneq\dim\mathcal{H}^{A_\textup{in}}. \end{equation} There exists a one-to-one correspondence between a CPTP map and its Choi operator. In terms of these properties, the quantum instrument ${\left\{\mathcal{E}_m\right\}}_m$ can also equivalently be considered as a family of completely positive (CP) maps whose sum $\sum_m \mathcal{E}_m$ is a trace-preserving map. In the same way as CPTP maps, different input and output systems represented by different Hilbert spaces can be considered for quantum instruments.
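The Choi operator of Equation~\eqref{eq:choi_operator} can be computed numerically for a concrete channel (a sketch assuming Python with numpy; the qubit dephasing channel and its Kraus operators are hypothetical choices for illustration):

```python
import numpy as np

# Choi operator J(N) for a hypothetical channel: the qubit dephasing channel
# N(psi) = (1-p) psi + p Z psi Z with p = 0.25, given by Kraus operators.
p = 0.25
Z = np.diag([1.0, -1.0])
kraus = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * Z]

# Unnormalized maximally entangled vector sum_l |l>^R |l>^{A_in}.
omega = np.zeros(4)
omega[0] = omega[3] = 1.0
Omega = np.outer(omega, omega)

# J(N) = (id^R tensor N)(Omega) = sum_k (1 tensor K_k) Omega (1 tensor K_k)^dag.
J = sum(np.kron(np.eye(2), K) @ Omega @ np.kron(np.eye(2), K).conj().T
        for K in kraus)

# Complete positivity corresponds to J >= 0; trace preservation corresponds
# to tr_{A_out}(J) = 1^R.
assert np.all(np.linalg.eigvalsh(J) >= -1e-12)
tr_out = np.trace(J.reshape(2, 2, 2, 2), axis1=1, axis2=3)
assert np.allclose(tr_out, np.eye(2))
```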
Given any quantum instrument ${\left\{\mathcal{E}_m^{A_\textup{in}\to A_\textup{out}}\right\}}_m$ and any state $\psi^{A_\textup{in}}$, introduce an auxiliary system $\mathcal{H}^X$ for storing the measurement outcome of this quantum instrument; then, performing the measurement represented by ${\left\{\mathcal{E}_m^{A_\textup{in}\to A_\textup{out}}\right\}}_m$ is equivalent to performing a CPTP map $\mathcal{E}^{A_\textup{in}\to A_\textup{out}X}$ acting as \begin{equation} \begin{split} \mathcal{E}^{A_\textup{in}\to A_\textup{out}X}\left(\psi^{A_\textup{in}}\right)&\coloneq\sum_m\mathcal{E}_m^{A_\textup{in}\to A_\textup{out}}\left(\psi^{A_\textup{in}}\right)\otimes\Ket{m}\Bra{m}^X\\ &=\sum_m p\left(m\right)\frac{\mathcal{E}_m^{A_\textup{in}\to A_\textup{out}}\left(\psi^{A_\textup{in}}\right)}{\tr\mathcal{E}_m^{A_\textup{in}\to A_\textup{out}}\left(\psi^{A_\textup{in}}\right)}\otimes\Ket{m}\Bra{m}^X, \end{split} \end{equation} where the measurement outcomes are represented as orthogonal pure states for the computational basis of $\mathcal{H}^X$. Note that while quantum instruments are discussed above, the same argument for different input and output systems holds for measurement operators as special cases of quantum instruments. Any CPTP map is equivalent to a single-outcome quantum instrument. In the rest of this thesis, the most general forms for representing operations, that is, CPTP maps and quantum instruments, and more specific forms, such as isometries and measurement operators, are used as suitable for describing maps. \section{\label{sec:locc}Local operations and classical communication} In distributed quantum information processing, multiple quantum devices, each capable of coherently keeping the quantum states of a quantum system inside and of performing operations for transforming these quantum states, cooperate in achieving quantum information processing.
The local quantum system in each quantum device can be regarded as a subsystem comprising a multipartite quantum system distributed among the devices. Operations performed in each device are restricted to local operations on the subsystem held in the device. To perform arbitrary nonlocal operations on the distributed multipartite system, classical communication of measurement outcomes between the devices is not sufficient, while quantum communication for transferring quantum information of quantum states can be used for achieving such nonlocal operations. However, while classical communication can be reliably performed using current technologies, quantum communication between spatially separated quantum devices is more challenging and costly. In this regard, it is natural to ask what can be achieved only using local operations and classical communication (LOCC)~\cite{D11,C19,C7}. This section provides the definition of LOCC, and the notion of entanglement is also introduced in terms of LOCC\@. To introduce LOCC, each such quantum device performing LOCC is called a party, which is able to perform arbitrary local operations on its own local quantum system. Let $N$ be the number of the parties, denoted by $v_1\,,\ldots,v_N$. The set of the parties is denoted by \begin{equation} V\coloneq\left\{v_1\,,\ldots,v_N\right\}. \end{equation} For each $v_k\in V$, let $\mathcal{H}^{v_k}$ represent the system held by the party $v_k\,$, and the whole multipartite system distributed among the parties is denoted by \begin{equation} \mathcal{H}\coloneq\bigotimes_{v_k\in V}\mathcal{H}^{v_k}. \end{equation} LOCC consists of measurements by each party and classical communication for sending the measurement outcomes to the other parties, where each measurement can be conditioned on the measurement outcomes obtained earlier by other parties.
When classical communication can be freely performed, it is sufficient to consider that the measurement outcomes for each measurement are sent to all the parties. Classical communication introduces a sequential order of measurements, and the number of communication steps is referred to as the number of rounds of classical communication. In the following, LOCC is introduced in terms of measurement operators for simplicity, while a more formal definition in terms of quantum instruments also follows from combining the following argument with classical post-processing of coarse-graining, as discussed later. A family of measurement operators ${\left\{M_{m_1}\right\}}_{m_1}$ on $\mathcal{H}$ is called \textit{non-correcting one-way local} from a party $v_k\in V$ if each measurement operator is in the form of \begin{equation} M_{m_1}\coloneq M_{m_1}^{v_k}\otimes\bigotimes_{v\in V \setminus\left\{v_k\right\}}\mathbb{1}^v, \end{equation} where ${\left\{M_{m_1}^{v_k}\right\}}_{m_1}$ is $v_k$'s measurement on $\mathcal{H}^{v_k}$ with outcome $m_1\,$, and for each $v\in V \setminus\left\{v_k\right\}$, $\mathbb{1}^v$ is the identity operator on $\mathcal{H}^v$. This measurement is called one-way in the sense that classical communication is performed in the one-way direction from $v_k$ to the others, and is called non-correcting in the sense that the parties other than $v_k$ do not perform any operation for correction conditioned on $v_k$'s measurement outcome. To describe $r$ rounds of classical communication, write a tuple of labels for representing measurement outcomes as \begin{equation} \boldsymbol{m}_r\coloneq\left(m_1\,,\ldots,m_r\right), \end{equation} where $r=1,2,\ldots$ and $\boldsymbol{m}_1$ is identified with $m_1$.
A family of measurement operators ${\left\{M_{\boldsymbol{m}_r}\right\}}_{\boldsymbol{m}_r}$ on $\mathcal{H}$ is said to be \textit{LOCC linked} to a family of measurement operators ${\left\{M_{\boldsymbol{m}_{r-1}}\right\}}_{\boldsymbol{m}_{r-1}}$ on $\mathcal{H}$ if there exists a party $v_k\in V$ and a non-correcting one-way local measurement ${\left\{M_{m_r}\right\}}_{m_r}$ from $v_k$ such that for each $\boldsymbol{m}_r\,$, $M_{\boldsymbol{m}_r}$ is the composition of $M_{\boldsymbol{m}_{r-1}}$ and $M_{m_r}\,$, that is, \begin{equation} M_{\boldsymbol{m}_r}=M_{m_r}M_{\boldsymbol{m}_{r-1}}, \end{equation} where each measurement operator in ${\left\{M_{m_r}\right\}}_{m_r}$ is in the form of \begin{equation} M_{m_r}\coloneq M_{m_r|\boldsymbol{m}_{r-1}}^{v_k}\otimes\bigotimes_{v\in V \setminus\left\{v_k\right\}}\mathbb{1}^v, \end{equation} and ${\left\{M_{m_r|\boldsymbol{m}_{r-1}}^{v_k}\right\}}_{m_r}$ is $v_k$'s measurement operator on $\mathcal{H}^{v_k}$ with outcome $m_r$ possibly conditioned by all the preceding outcomes $\boldsymbol{m}_{r-1}$ having been broadcast to all the parties by classical communication. Using these notions, LOCC is defined as follows. Let \textit{non-correcting one-round LOCC} refer to a non-correcting one-way local measurement from some party, and define \textit{non-correcting $r$-round LOCC} for any $r\geqq 2$ as operations represented by a family of measurement operators LOCC linked to that representing non-correcting $(r-1)$-round LOCC\@. For any $r\geqq 1$, \textit{$r$-round LOCC} is defined as operations represented as a family of measurement operators achieved by a non-correcting $r$-round LOCC with outcome $\boldsymbol{m}_r$ followed by each party $v$'s local measurement ${\left\{M_{m^v|\boldsymbol{m}_r}^v\right\}}_{m^v}$ with measurement outcome $m^v$ conditioned by $\boldsymbol{m}_r$. Finite-round LOCC refers to operations that are $r$-round LOCC for some finite $r$. 
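The structure of a non-correcting one-way local measurement, $M_{m_1}=M_{m_1}^{v_k}\otimes\mathbb{1}$, can be sketched numerically (a minimal illustration assuming Python with numpy; the two-qubit setting and the computational-basis measurement on $v_1$ are hypothetical choices):

```python
import numpy as np

# A non-correcting one-way local measurement from party v_1 on two qubits:
# M_{m_1} = M_{m_1}^{v_1} tensor 1^{v_2}, with a computational-basis
# measurement on v_1.
local = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
M = [np.kron(m, np.eye(2)) for m in local]

# The family inherits the completeness condition on the whole system.
assert np.allclose(sum(op.conj().T @ op for op in M), np.eye(4))

# Applied to the shared state (|00> + |11>)/sqrt(2), each outcome occurs
# with probability 1/2.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)
probs = [np.trace(op @ rho @ op.conj().T).real for op in M]
assert np.allclose(probs, [0.5, 0.5])
```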
Consider a sequence of $r$-round LOCC for $r=1,2,\ldots$, where the non-correcting $r$-round LOCC for each $r$-round LOCC is LOCC linked to that for $(r-1)$-round LOCC. Then, \textit{LOCC} is defined as operations that can be represented as a limit of this type of LOCC-linked sequence of $r$-round LOCC as $r\to\infty$. A CPTP map achieved by LOCC is called an LOCC map. To define LOCC in terms of quantum instruments, it suffices to modify the above definition so that whenever a measurement outcome $m_r$ is obtained for each $r=1,2,\ldots$, classical post-processing of coarse-graining of all the measurement outcomes $\boldsymbol{m}_r$ is performed, in the same way as Reference~\cite{C7}. Note that there exists a subtle difference between the above definition of $r$-round LOCC and that in Reference~\cite{C7} in that the above definition allows each party's local measurement after non-correcting $r$-round LOCC, while Reference~\cite{C7} allows only each party's CPTP map after that. This difference matters when separation between one-way LOCC and two-way LOCC in a task of local state discrimination is discussed in Chapter~\ref{sec:two_way}. For any $r=1,2,\ldots$, the set of quantum instruments representing $r$-round LOCC is strictly included in that of $(r+1)$-round LOCC, and that of finite-round LOCC is strictly included in LOCC\@. The set of CPTP maps achievable by $r$-round LOCC is known to be compact if $r$ is finite, but that representing LOCC is not, since LOCC possibly includes infinitely many rounds~\cite{C7}. In cases of two parties denoted by $A$ and $B$ by convention, \textit{one-way LOCC} refers to one-round LOCC, and \textit{two-way LOCC} refers to LOCC other than one-way LOCC\@. One-way LOCC from $A$ to $B$ refers to one-round LOCC defined using non-correcting one-way local measurements only from $A$ but not from $B$. This class of operations, LOCC, naturally defines a class of states exhibiting \textit{quantum entanglement}. 
Consider situations where the cost of performing LOCC is negligible compared to quantum communication. In such situations, it is natural to assume that the parties can freely perform LOCC\@. Then, a state $\psi\in\mathcal{D}\left(\mathcal{H}\right)$ is called a separable state if for any $\phi\in\mathcal{D}\left(\mathcal{H}\right)$, there exists an LOCC map $\mathcal{E}$ such that \begin{equation} \psi=\mathcal{E}\left(\phi\right). \end{equation} In other words, separable states are the states that can be obtained from scratch by LOCC, in the sense that these states can be obtained from any state by LOCC\@. Also equivalently, separable pure states are product states of local states, that is, \begin{equation} \bigotimes_{v_k\in V}\Ket{\psi_k}^{v_k}, \end{equation} where $\Ket{\psi_k}^{v_k}\in\mathcal{H}^{v_k}$ for each $v_k\,$, and mixed separable states are convex combinations of product states. An entangled state is defined as a state which is not separable. \textit{Bipartite entanglement} refers to entanglement of bipartite entangled states shared between two parties, and \textit{multipartite entanglement} refers to that of multipartite entangled states shared among more than two parties. Note that in terms of quantum resource theory~\cite{C13}, LOCC is regarded as free operations, and separable states are free states obtained from these free operations. Entangled states are resources that cannot be obtained by LOCC from a separable state, and LOCC assisted by an initially given entangled state shared among the parties can be advantageous in distributed quantum information processing compared to performing LOCC without such assistance, as discussed in Section~\ref{sec:entanglement}. \section{\label{sec:decomposition}Decomposition theorems for analysis of quantum state transformation} For simplifying analysis of properties of entangled states under LOCC, mathematical decomposition theorems for operators representing quantum states can be exploited. 
This section provides such decomposition theorems for later use. \paragraph{Spectral decomposition} Given a system $\mathcal{H}$ and a state $\psi\in\mathcal{D}\left(\mathcal{H}\right)$, the \textit{spectral decomposition} of $\psi$ yields \begin{equation} \psi=\sum_{l=0}^{R-1}\lambda_l\Ket{\psi_l}\Bra{\psi_l}, \end{equation} where $R$ is the rank of $\psi$, each $\lambda_l>0$ is a nonzero eigenvalue, and ${\left\{\Ket{\psi_l}\right\}}_l$ is a set of normalized pure states representing eigenvectors corresponding to nonzero eigenvalues, which are orthogonal with each other. It is assumed that the eigenvalues are sorted in descending order, that is, \begin{equation} \label{eq:eigen} \lambda_0\geqq\lambda_1\geqq\cdots\geqq\lambda_{R-1}. \end{equation} More generally, if $\psi$ is a Hermitian operator, the spectral decomposition of $\psi$ is in the same form as the above while each nonzero eigenvalue $\lambda_l$ can be a negative real number. Given any function $f:\mathbb{R}\to\mathbb{R}$ and any Hermitian operator $\psi$, define \begin{equation} f\left(\psi\right)\coloneq\sum_{l=0}^{R-1}f\left(\lambda_l\right)\Ket{\psi_l}\Bra{\psi_l}, \end{equation} where the spectral decomposition of $\psi$ is used on the right-hand side. Note that for an analytic function $f$, this definition is equivalent to considering the Taylor expansion of $f\left(x\right)$ and substituting $x$ with $\psi$ in this Taylor expansion to define $f\left(\psi\right)$. For example, \begin{equation} \sqrt{\psi}\coloneq\sum_{l=0}^{R-1}\sqrt{\lambda_l}\Ket{\psi_l}\Bra{\psi_l}. 
\end{equation} \paragraph{Singular value decomposition and Schmidt decomposition} Similarly to the spectral decomposition, given any Hilbert spaces $\mathcal{H}^A$ and $\mathcal{H}^B$, and any bounded operator from $\mathcal{H}^B$ to $\mathcal{H}^A$ \begin{equation} \psi^{B\to A}=\sum_{l,l^\prime}\psi_{l,l^\prime}\Ket{l}^A\Bra{l^\prime}^B, \end{equation} where ${\left\{\Ket{l}^A\right\}}_l$ and ${\left\{\Ket{l^\prime}^B\right\}}_{l^\prime}$ are the computational bases of $\mathcal{H}^A$ and $\mathcal{H}^B$, respectively, the singular value decomposition of $\psi$ is a decomposition in the form of \begin{equation} \psi^{B\to A}=\sum_{l=0}^{R-1}\lambda_l\Ket{\psi_l}^A\Bra{\psi_l}^B, \end{equation} where $R$ is the rank of $\psi$, each $\lambda_l>0$ is a nonzero singular value, and ${\left\{\Ket{\psi_l}^A\right\}}_l$ and ${\left\{\Ket{\psi_l}^B\right\}}_l$ are sets of normalized pure states of $\mathcal{H}^A$ and $\mathcal{H}^B$, respectively, representing singular vectors corresponding to nonzero singular values, which are orthogonal with each other. 
Analogously to singular value decomposition, given any Hilbert spaces $\mathcal{H}^A$ and $\mathcal{H}^B$, and any bipartite pure state of $\mathcal{H}^A\otimes\mathcal{H}^B$ \begin{equation} \Ket{\psi}^{AB}=\sum_{l,l^\prime}\psi_{l,l^\prime}\Ket{l}^A\otimes\Ket{l^\prime}^B, \end{equation} where ${\left\{\Ket{l}^A\right\}}_l$ and ${\left\{\Ket{l^\prime}^B\right\}}_{l^\prime}$ are the computational bases of $\mathcal{H}^A$ and $\mathcal{H}^B$, respectively, the Schmidt decomposition of $\Ket{\psi}^{AB}$ is a decomposition in the form of \begin{equation} \Ket{\psi}^{AB}=\sum_{l=0}^{R-1}\sqrt{\lambda_l}\Ket{\psi_l}^A\otimes\Ket{\psi_l}^B, \end{equation} where $R$ is called the Schmidt rank of $\Ket{\psi}^{AB}$, each $\lambda_l>0$ is called a nonzero Schmidt coefficient, and ${\left\{\Ket{\psi_l}^A\right\}}_l$ and ${\left\{\Ket{\psi_l}^B\right\}}_l$ are sets of normalized pure states of $\mathcal{H}^A$ and $\mathcal{H}^B$, respectively, which are orthogonal with each other. While $\mathcal{H}^A$ and $\mathcal{H}^B$ are spanned by bases consisting of $\dim\mathcal{H}^A$ vectors and $\dim\mathcal{H}^B$ vectors, respectively, ${\left\{\Ket{\psi_l}^A\right\}}_l$ and ${\left\{\Ket{\psi_l}^B\right\}}_l\,$, consisting of $R$ vectors corresponding to nonzero Schmidt coefficients, can be used as a part of such bases, and this type of basis is called a Schmidt basis. Schmidt-basis states may refer to states in a Schmidt basis corresponding to nonzero Schmidt coefficients. Given the above Schmidt decomposition of $\Ket{\psi}^{AB}$, tracing out $\mathcal{H}^B$ yields the spectral decomposition of $\psi^A$ \begin{equation} \psi^A=\sum_{l=0}^{R-1}\lambda_l\Ket{\psi_l}\Bra{\psi_l}^A, \end{equation} and the Schmidt coefficients are assumed to be sorted in descending order in the same way as Equation~\eqref{eq:eigen} for the eigenvalues. 
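As the correspondence above suggests, the Schmidt decomposition can be computed numerically by reshaping the coefficients $\psi_{l,l^\prime}$ into a $\dim\mathcal{H}^A\times\dim\mathcal{H}^B$ matrix and applying the singular value decomposition. A minimal NumPy sketch (the example state on $\mathbb{C}^2\otimes\mathbb{C}^3$ is an illustrative choice, not from the text):

```python
import numpy as np

# Schmidt decomposition of a bipartite pure state via SVD:
# reshape the coefficient vector psi_{l,l'} into a matrix and
# take its singular values, which are the square roots of the
# Schmidt coefficients.  Example state: (|0>|0> + |1>|1>)/sqrt(2)
# embedded in C^2 (x) C^3 (an illustrative choice).
dA, dB = 2, 3
vec = np.zeros(dA * dB, dtype=complex)
vec[0 * dB + 0] = 1.0          # |0>|0>
vec[1 * dB + 1] = 1.0          # |1>|1>
vec /= np.linalg.norm(vec)

M = vec.reshape(dA, dB)        # psi_{l,l'} as a dA x dB matrix
s = np.linalg.svd(M, compute_uv=False)   # descending singular values

schmidt_coeffs = s**2          # lambda_l, in descending order
rank = int(np.sum(s > 1e-12))  # Schmidt rank R

# Tracing out B reproduces the spectral decomposition of psi^A:
rho_A = M @ M.conj().T
evals = np.sort(np.linalg.eigvalsh(rho_A))[::-1]
assert np.allclose(evals[:rank], schmidt_coeffs[:rank])

print(rank, np.round(schmidt_coeffs, 6))   # 2 [0.5 0.5]
```

The left and right singular vectors (the `U` and `Vh` factors returned by `np.linalg.svd` when `compute_uv=True`) would supply the Schmidt-basis states ${\left\{\Ket{\psi_l}^A\right\}}_l$ and ${\left\{\Ket{\psi_l}^B\right\}}_l$.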
Given any state $\psi^A$, a bipartite pure state $\Ket{\psi}^{AB}$ is called a \textit{purification} of $\psi^A$ if \begin{equation} \psi^A=\tr_B\Ket{\psi}\Bra{\psi}^{AB}, \end{equation} where $\mathcal{H}^B$ is an auxiliary system for this purification. A purification of $\psi^A$ may not be unique, but different purifications are related by isometries. More precisely, consider any state $\psi^A$ given in the spectral-decomposition form as \begin{equation} \psi^A=\sum_{l=0}^{R-1}\lambda_l\Ket{\psi_l}\Bra{\psi_l}^A, \end{equation} and any two purifications of $\psi^A$ \begin{align} \Ket{\psi}^{AB}&\in\mathcal{H}^A\otimes\mathcal{H}^B,\\ \Ket{\psi^\prime}^{AB^\prime}&\in\mathcal{H}^A\otimes\mathcal{H}^{B^\prime}. \end{align} Then, the corresponding Schmidt-decomposition forms are given by \begin{align} \Ket{\psi}^{AB}&=\sum_{l=0}^{R-1}\sqrt{\lambda_l}\Ket{\psi_l}^A\otimes\Ket{\psi_l}^B,\\ \Ket{\psi^\prime}^{AB^\prime}&=\sum_{l=0}^{R-1}\sqrt{\lambda_l}\Ket{\psi_l}^A\otimes\Ket{\psi_l^\prime}^{B^\prime}. \end{align} Hence, there exists an isometry $U^{B\to B^\prime}$ such that for each $l$ \begin{equation} U^{B\to B^\prime}\Ket{\psi_l}^{B}=\Ket{\psi_l^\prime}^{B^\prime}, \end{equation} that is, \begin{equation} \left(\mathbb{1}^A\otimes U^{B\to B^\prime}\right)\Ket{\psi}^{AB}=\Ket{\psi^\prime}^{AB^\prime}. \end{equation} \paragraph{Norms and distances} Using the spectral decomposition, the $1$-norm, also known as the trace norm, of any Hermitian operator $\psi$ is defined as \begin{equation} {\left\|\psi\right\|}_1\coloneq\sum_{l=0}^{R-1}\left|\lambda_l\right|, \end{equation} where nonzero eigenvalues of $\psi$ are used on the right-hand side. As for another norm, the $\infty$-norm, also known as the operator norm, of $\psi$ is defined as \begin{equation} {\left\|\psi\right\|}_\infty\coloneq\max_l\left\{\left|\lambda_l\right|\right\}, \end{equation} where nonzero eigenvalues of $\psi$ are used on the right-hand side; for a state $\psi$, this maximum equals the largest eigenvalue $\lambda_0\,$. 
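Both norms follow directly from the eigenvalues; a short NumPy sketch, with two illustrative qubit operators (not from the text):

```python
import numpy as np

# Trace norm (sum of |eigenvalues|) and operator norm (largest
# |eigenvalue|) of a Hermitian operator, here the difference of
# two qubit density matrices chosen for illustration.
psi = np.array([[0.75, 0.25], [0.25, 0.25]])   # a qubit state
phi = np.eye(2) / 2                            # completely mixed state

delta = psi - phi                 # Hermitian and traceless
evals = np.linalg.eigvalsh(delta)

trace_norm = np.sum(np.abs(evals))   # ||psi - phi||_1
op_norm = np.max(np.abs(evals))      # ||psi - phi||_inf

print(round(trace_norm, 6), round(op_norm, 6))
```

Since `delta` is a traceless qubit operator, its two eigenvalues are $\pm{\left\|\psi-\phi\right\|}_\infty$, so the trace norm is exactly twice the operator norm in this example.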
This type of norm of Hermitian operators can be used for quantifying distance between two quantum states. Given two states $\psi\in\mathcal{D}\left(\mathcal{H}\right)$ and $\phi\in\mathcal{D}\left(\mathcal{H}\right)$, the trace distance between these two states is defined as \begin{equation} {\left\|\psi-\phi\right\|}_1\,, \end{equation} which characterizes the optimal success probability of discriminating these two states by a quantum measurement~\cite{H16,W11}. There exists another commonly used quantity in quantum information theory representing closeness of two quantum states, the (square root) fidelity between $\psi$ and $\phi$, defined as \begin{equation} F\left(\psi,\phi\right)\coloneq{\left\|\sqrt{\psi}\sqrt{\phi}\right\|}_1. \end{equation} As for other quantities, refer to Reference~\cite{A14}. The trace distance and the fidelity are related by the Fuchs-van de Graaf inequalities \begin{equation} 1-\frac{1}{2}{\left\|\psi-\phi\right\|}_1\leqq F\left(\psi,\phi\right)\leqq\sqrt{1-\frac{1}{4}{\left({\left\|\psi-\phi\right\|}_1\right)}^2}. \end{equation} The squared form of this fidelity is written as \begin{equation} F^2\left(\psi,\phi\right)\coloneq{\left({\left\|\sqrt{\psi}\sqrt{\phi}\right\|}_1\right)}^2, \end{equation} and is extensively used in this thesis. This fidelity satisfies the following properties: \begin{enumerate} \item $0\leqq F^2\left(\psi,\phi\right)\leqq 1$; \item $F^2\left(\psi,\phi\right)=1\Leftrightarrow \psi=\phi$; \item $F^2\left(\psi,\phi\right)=F^2\left(\phi,\psi\right)$ (symmetric); \item $F^2\left(\psi\otimes{\psi^\prime},\phi\otimes{\phi^\prime}\right)=F^2\left(\psi,\phi\right)F^2\left(\psi^\prime,\phi^\prime\right)$ (multiplicative); \end{enumerate} where $\psi$, $\phi$, ${\psi^\prime}$, and ${\phi^\prime}$ are arbitrary states. If $\psi$ is pure, \textit{i.e.}, $\psi=\Ket{\psi}\Bra{\psi}$, it holds that \begin{equation} F^2\left(\psi,\phi\right)=\Bra{\psi}\phi\Ket{\psi}. 
\end{equation} Using the fidelity, the purified distance~\cite{T12,G11} between two normalized states $\psi$ and $\phi$ is defined as \begin{equation} P\left(\psi,\phi\right)\coloneq\sqrt{1-F^2\left(\psi,\phi\right)}. \end{equation} The purified distance between two normalized states $\psi$ and $\phi$ can also be represented as the minimum trace distance between purifications of $\psi$ and $\phi$~\cite{T5,T11}. Note that this thesis uses the purified distance only between normalized operators, while there also exists a generalized definition of the purified distance between sub-normalized operators~\cite{T5,T11} \begin{align} \label{eq:purified_distance_subnormalized} &P\left(\psi,\phi\right)\coloneq\sqrt{1-F_{\ast}^2\left(\psi,\phi\right)},\\ &F_{\ast}^2\left(\psi,\phi\right)\coloneq{\left({\left\|\sqrt{\psi}\sqrt{\phi}\right\|}_1-\sqrt{\left(1-\tr\psi\right)\left(1-\tr\phi\right)}\right)}^2,\\ &\psi\geqq 0,\,\tr\psi\leqq 1,\,\phi\geqq 0,\,\tr\phi\leqq 1. \end{align} The purified distance satisfies the following properties: \begin{enumerate} \item $0\leqq P\left(\psi,\phi\right)\leqq 1$; \item $P\left(\psi,\phi\right)=0\Leftrightarrow \psi=\phi$; \item $P\left(\psi,\phi\right)=P\left(\phi,\psi\right)$ (symmetric); \item $P\left(\psi,\phi\right)\leqq P\left(\psi,\omega\right)+P\left(\omega,\phi\right)$ (triangle inequality); \item $P\left(\mathcal{E}\left(\psi\right),\mathcal{E}\left(\phi\right)\right)\leqq P\left(\psi,\phi\right)$ (monotonicity); \end{enumerate} where $\psi$, $\phi$, and $\omega$ are arbitrary states, and $\mathcal{E}$ is any CPTP map~\cite{T5}. Moreover, for any state $\psi$, $\phi$, and $\omega$, \begin{equation} P\left(\psi\otimes\omega,\phi\otimes\omega\right)=P\left(\psi,\phi\right), \end{equation} due to the multiplicativity of the fidelity. 
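The fidelity and purified distance for normalized states can be evaluated from the definitions above with elementary linear algebra; a NumPy sketch (the pure and completely mixed qubit states are illustrative choices, not from the text):

```python
import numpy as np

def op_sqrt(rho):
    # Matrix square root of a positive semidefinite operator
    # via its spectral decomposition.
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)
    return v @ np.diag(np.sqrt(w)) @ v.conj().T

def fidelity_sq(psi, phi):
    # F^2(psi, phi) = (|| sqrt(psi) sqrt(phi) ||_1)^2,
    # with the 1-norm computed from the singular values.
    s = np.linalg.svd(op_sqrt(psi) @ op_sqrt(phi), compute_uv=False)
    return np.sum(s) ** 2

def purified_distance(psi, phi):
    # P(psi, phi) = sqrt(1 - F^2(psi, phi)) for normalized states.
    return np.sqrt(max(0.0, 1.0 - fidelity_sq(psi, phi)))

# Pure state |0><0| vs the completely mixed state on a qubit:
# F^2 = <0| (1/2) |0> = 1/2, so P = sqrt(1/2).
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.eye(2) / 2

print(round(fidelity_sq(pure, mixed), 6))        # 0.5
print(round(purified_distance(pure, mixed), 6))  # 0.707107
```

The last two lines illustrate the pure-state formula $F^2\left(\psi,\phi\right)=\Bra{\psi}\phi\Ket{\psi}$ stated above.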
For any $\epsilon\geqq 0$, two states $\psi$ and $\phi$ are said to be $\epsilon$-close in terms of the fidelity or the purified distance if \begin{equation} \begin{split} &F^2\left(\psi,\phi\right)\geqq 1-\epsilon^2\\ &\Leftrightarrow P\left(\psi,\phi\right)\leqq \epsilon. \end{split} \end{equation} \paragraph{Koashi-Imoto decomposition} As for another decomposition, the Koashi-Imoto decomposition~\cite{K3,H6,K5,W4} is introduced in the following. The Koashi-Imoto decomposition was first introduced in Reference~\cite{K3} to characterize a CPTP map $\mathcal{T}$ leaving any state in a given set $\left\{\psi_i^A\in\mathcal{D}\left(\mathcal{H}^A\right):i\in I\right\}$ invariant. Note that the index set $I$ can be an infinite set. The Koashi-Imoto decomposition of a set of states is presented in the following lemma, of which an algorithmic proof is given in Reference~\cite{K3}, and alternative proofs are given in References~\cite{H6,K5} through an operator-algebraic approach. Note that due to the second condition in the following lemma, the Koashi-Imoto decomposition is \textit{uniquely} determined, corresponding to the decomposition said to be maximal in Reference~\cite{K3}. 
\begin{lemma} \label{lem:koashi_imoto_decomposition_set} (Theorem~3 in Reference~\cite{K3}, Theorem~9 in Reference~\cite{H6}, and Lemma~6 in Reference~\cite{K5}) \textit{Koashi-Imoto decomposition of a set of states.} Given any set \begin{equation} {\left\{\psi_i^A\in\mathcal{D}\left(\mathcal{H}^A\right): i\in I\right\}}, \end{equation} there exists a \textit{unique} decomposition of $\mathcal{H}^A$ \begin{equation} \mathcal{H}^A=\bigoplus_{j=0}^{J-1}\mathcal{H}^{a_j^\textup{L}}\otimes\mathcal{H}^{a_j^\textup{R}} \end{equation} such that \begin{enumerate} \item For each $i\in I$, $\psi_i^A$ is decomposed into \begin{equation} \psi_i^A=\bigoplus_{j=0}^{J-1} p\left(j\right) \omega_j^{a_j^\textup{L}}\otimes\phi_{i,j}^{a_j^\textup{R}}\, , \end{equation} where $p\left(j\right)$ is a probability distribution and for each $j\in\{0,\ldots,J-1\}$, $\omega_j^{a_j^\textup{L}}\in\mathcal{D}\left(\mathcal{H}^{a_j^\textup{L}}\right)$ is independent of $i$, and $\phi_{i,j}^{a_j^\textup{R}}\in\mathcal{D}\left(\mathcal{H}^{a_j^\textup{R}}\right)$ depends on $i$. \item For any CPTP map \begin{equation} \mathcal{T}:\mathcal{B}\left(\mathcal{H}^A\right)\to\mathcal{B}\left(\mathcal{H}^A\right), \end{equation} if $\mathcal{T}$ leaves $\psi_i^A$ invariant for each $i\in I$, that is, \begin{equation} \mathcal{T}\left(\psi_i^A\right)=\psi_i^A, \end{equation} then the isometry $U_\mathcal{T}$ for the Stinespring dilation of $\mathcal{T}$ is decomposed into \begin{equation} U_\mathcal{T}=\bigoplus_{j=0}^{J-1} U_j^{a_j^\textup{L}}\otimes\mathbb{1}^{a_j^\textup{R}}, \end{equation} where, for each $j\in\{0,\ldots,J-1\}$, $U_j^{a_j^\textup{L}}$ is an isometry from $\mathcal{H}^{a_j^\textup{L}}$ to $\mathcal{H}^{a_j^\textup{L}}\otimes\mathcal{H}^{A^\prime }$ satisfying \begin{equation} \tr_{A^\prime }\left[U_j^{a_j^\textup{L}} \omega_j^{a_j^\textup{L}} {U_j^{a_j^\textup{L}}}^\dag\right] = \omega_j^{a_j^\textup{L}}. 
\end{equation} \end{enumerate} \end{lemma} Using Lemma~\ref{lem:koashi_imoto_decomposition_set}, Reference~\cite{H6} considers the Koashi-Imoto decomposition of a given bipartite state $\psi^{RA}$. The Koashi-Imoto decomposition of $\psi^{RA}$ is obtained using a set of $A$'s states that can be \textit{steered} through $\psi^{RA}$, that is, the set of states of $\mathcal{H}^A$ that can be prepared by performing a measurement of $\psi^{RA}$ on $\mathcal{H}^R$ and post-selecting an outcome. Using an arbitrary positive semidefinite operator $\Lambda^R$, this set of states is denoted by \begin{equation} \label{eq:psi_lambda} \begin{split} S_\psi^{A|R}&\coloneq{\left\{\psi^A\left(\Lambda^R\right):\Lambda^R\geqq 0\right\}},\\ \psi^A\left(\Lambda^R\right)&\coloneq\frac{\tr_R \left[\left(\Lambda^R\otimes\mathbb{1}^A\right)\psi^{RA}\right]}{\tr \left[\left(\Lambda^R\otimes\mathbb{1}^A\right)\psi^{RA}\right]}, \end{split} \end{equation} where the post-selected outcome of a measurement of $\psi^{RA}$ on $\mathcal{H}^R$ corresponds to $\Lambda^R$. Regard the operator $\Lambda^R$ as the index of the set $S_\psi^{A|R}$, and apply the Koashi-Imoto decomposition of a set of states shown in Lemma~\ref{lem:koashi_imoto_decomposition_set} to this set $S_\psi^{A|R}$, where $\Lambda^R$ for $\psi^A\left(\Lambda^R\right)$ corresponds to the index $i$ for $\psi_i^A$ in Lemma~\ref{lem:koashi_imoto_decomposition_set}, and the set of such positive semidefinite operators $\Lambda^R$ corresponds to the index set $I$. Then, Reference~\cite{H6} shows that the Koashi-Imoto decomposition of the bipartite state $\psi^{RA}$ is obtained as follows. 
\begin{lemma} \label{lem:koashi_imoto_decomposition_bipartite} (in Proof of Theorem~6 in Reference~\cite{H6}) \textit{Koashi-Imoto decomposition of a bipartite state.} Given any bipartite state $\psi^{RA}$, the Koashi-Imoto decomposition of the set $S_\psi^{A|R}$ defined as Equation~\eqref{eq:psi_lambda} yields a unique decomposition of $\mathcal{H}^A$ satisfying the conditions in Lemma~\ref{lem:koashi_imoto_decomposition_set} \begin{equation} \mathcal{H}^A=\bigoplus_{j=0}^{J-1}\mathcal{H}^{a_j^\textup{L}}\otimes\mathcal{H}^{a_j^\textup{R}}, \end{equation} and $\psi^{RA}$ is decomposed into \begin{equation} \psi^{RA}=\bigoplus_{j=0}^{J-1} p\left(j\right) \omega_j^{a_j^\textup{L}}\otimes\phi_j^{Ra_j^\textup{R}}, \end{equation} where $p\left(j\right)$ is a probability distribution. \end{lemma} Considering a purification $\Ket{\psi}^{RAB}$ of the bipartite state $\psi^{RA}$ in Lemma~\ref{lem:koashi_imoto_decomposition_bipartite}, Reference~\cite{W4} introduces the Koashi-Imoto decomposition of the tripartite pure state $\Ket{\psi}^{RAB}$ as follows. 
\begin{lemma} \label{lem:koashi_imoto_decomposition_tripartite} (Lemma~11 in Reference~\cite{W4}) \textit{Koashi-Imoto decomposition of a tripartite pure state.} Given any tripartite pure state $\Ket{\psi}^{RAB}$, the Koashi-Imoto decomposition of the set $S_\psi^{A|R}$ defined as Equation~\eqref{eq:psi_lambda} yields a unique decomposition of $\mathcal{H}^A$ satisfying the conditions in Lemma~\ref{lem:koashi_imoto_decomposition_set} \begin{equation} \mathcal{H}^A=\bigoplus_{j=0}^{J-1}\mathcal{H}^{a_j^\textup{L}}\otimes\mathcal{H}^{a_j^\textup{R}} \end{equation} such that the support of $\psi^B$ \begin{equation} \supp\left(\psi^B\right)\coloneq\spn\left\{\Ket{v}:\psi^B\Ket{v}\neq 0\right\} \end{equation} is decomposed into \begin{equation} \supp\left(\psi^B\right)=\bigoplus_{j=0}^{J-1}\mathcal{H}^{b_j^\textup{L}}\otimes\mathcal{H}^{b_j^\textup{R}}, \end{equation} and $\Ket{\psi}^{RAB}$ is decomposed into \begin{equation} \Ket{\psi}^{RAB}=\bigoplus_{j=0}^{J-1}\sqrt{p\left(j\right)}\Ket{\omega_j}^{a_j^\textup{L} b_j^\textup{L}}\otimes\Ket{\phi_j}^{R a_j^\textup{R} b_j^\textup{R}}, \end{equation} where $p\left(j\right)$ is a probability distribution. \end{lemma} Consequently, to obtain the Koashi-Imoto decomposition of a given pure state $\Ket{\psi}^{RAB}$, apply the algorithm presented in Reference~\cite{K3} or the operator-algebraic theorems used in References~\cite{H6,K5} to the set of states $S_\psi^{A|R}$ defined as Equation~\eqref{eq:psi_lambda}, and then follow the above argument. The former way of applying the algorithm in Reference~\cite{K3} is demonstrated in Appendix~\ref{sec:koashi_imoto} for concrete examples. \section{\label{sec:entanglement}Entanglement as a resource for distributed quantum information processing} This section summarizes examples of state transformations implementable by LOCC assisted by entangled states relevant to this thesis. 
These examples show that entanglement can be used as a resource when spatially separated parties are restricted to LOCC\@. After introducing some results on entanglement transformations under LOCC, this section also provides the notion of entanglement measures quantifying entanglement in terms of its value as a resource. \paragraph{Entanglement swapping and quantum teleportation} To begin, the following example is shown for demonstrating a protocol for transforming entangled states by LOCC, which is called \textit{entanglement swapping}~\cite{Y17,Z3}. Entanglement swapping involves three parties $R$, $A$, and $B$, and systems $\mathcal{H}^R$ of $R$, $\mathcal{H}^{A}\otimes\mathcal{H}^{A^\prime}$ of $A$, and $\mathcal{H}^B$ of $B$, where dimensions of these systems are set to be \begin{equation} D\coloneq\dim\mathcal{H}^R=\dim\mathcal{H}^A=\dim\mathcal{H}^{A^\prime}=\dim\mathcal{H}^B. \end{equation} Consider an entangled state of $\mathcal{H}^{R}\otimes\mathcal{H}^A$ with Schmidt rank $D$ shared between $R$ and $A$ defined as \begin{equation} \Ket{\Phi_D^+}^{RA}\coloneq\frac{1}{\sqrt{D}}\sum_{l=0}^{D-1}\Ket{l}^{R}\otimes\Ket{l}^A, \end{equation} where ${\left\{\Ket{l}^R\right\}}_l$ and ${\left\{\Ket{l}^A\right\}}_l$ are the computational bases, and the same form of entangled state $\Ket{\Phi_D^+}^{A^\prime B}\in\mathcal{H}^{A^\prime}\otimes\mathcal{H}^{B}$ shared between $A$ and $B$. The whole state of $\mathcal{H}^R\otimes\mathcal{H}^{A}\otimes\mathcal{H}^{A^\prime}\otimes\mathcal{H}^B$ is \begin{equation} \Ket{\Phi_D^+}^{RA}\otimes\Ket{\Phi_D^+}^{A^\prime B}, \end{equation} and the reduced state of $R$ and $B$ is \begin{equation} \frac{\mathbb{1}^R}{D}\otimes\frac{\mathbb{1}^B}{D}, \end{equation} which is a separable state of $\mathcal{H}^R\otimes\mathcal{H}^B$ shared between $R$ and $B$. This type of state proportional to $\mathbb{1}$ is called a completely mixed state. 
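The claim that tracing out either side of $\Ket{\Phi_D^+}$ leaves the completely mixed state $\mathbb{1}/D$ can be verified numerically; a sketch for $D=3$:

```python
import numpy as np

# |Phi_D^+> = (1/sqrt(D)) sum_l |l>|l>, written as its D x D
# coefficient matrix psi_{l,l'} = delta_{l,l'} / sqrt(D).
D = 3
phi_plus = np.eye(D, dtype=complex) / np.sqrt(D)

# Reduced state on either side: rho = M M^dagger.
rho = phi_plus @ phi_plus.conj().T
assert np.allclose(rho, np.eye(D) / D)   # completely mixed state

# Schmidt rank D: all D singular values equal 1/sqrt(D).
s = np.linalg.svd(phi_plus, compute_uv=False)
assert np.allclose(s, np.ones(D) / np.sqrt(D))
print("reduced state of |Phi_D^+> is 1/D for D =", D)
```

The same check with `np.kron` on the four-party state would confirm that the reduced state of $R$ and $B$ is $\left(\mathbb{1}^R/D\right)\otimes\left(\mathbb{1}^B/D\right)$, as stated above.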
Entanglement swapping aims to prepare an entangled state between $R$ and $B$ by LOCC assisted by these entangled states shared between $R$ and $A$, and between $A$ and $B$. Consider $A$'s measurement on $\mathcal{H}^{A}\otimes\mathcal{H}^{A^\prime}$ in the basis \begin{equation} \label{eq:max_basis} \left\{\left(\mathbb{1}^{A}\otimes {\left(X_D^{A^\prime}\right)}^l{\left(Z_D^{A^\prime}\right)}^{l^\prime}\right)\Ket{\Phi_D^+}^{AA^\prime}:l,l^\prime\in\left\{0,\ldots,D-1\right\}\right\}, \end{equation} where the measurement outcome is labeled by $l$ and $l^\prime$, and $X_D^{A^\prime}$ and $Z_D^{A^\prime}$ are the generalized Pauli operators on a $D$-dimensional Hilbert space $\mathcal{H}^{A^\prime}$ defined as \begin{align} X_{D}^{A^\prime}&\coloneq\sum_{l=0}^{D-1}\Ket{l+1\bmod D}\Bra{l}^{A^\prime},\\ Z_{D}^{A^\prime}&\coloneq\sum_{l=0}^{D-1}\exp\left(\frac{\textup{i}2\pi l}{D}\right)\Ket{l}\Bra{l}^{A^\prime}. \end{align} In the case of qubits, \textit{i.e.}, $D=2$, subscripts of the generalized Pauli operators may be omitted to simply write these operators as \begin{align} X^{A^\prime}\coloneq X_2^{A^\prime},\\ Z^{A^\prime}\coloneq Z_2^{A^\prime}, \end{align} and the states in the above basis of $\mathcal{H}^{A}\otimes\mathcal{H}^{A^\prime}=\mathbb{C}^2\otimes\mathbb{C}^2$ reduce to \begin{align} \Ket{\Phi_2^+}^{AA^\prime}&=\frac{1}{\sqrt{2}}\left(\Ket{0}\otimes\Ket{0}+\Ket{1}\otimes\Ket{1}\right),\\ Z^{A^\prime}\Ket{\Phi_2^+}^{AA^\prime}&=\frac{1}{\sqrt{2}}\left(\Ket{0}\otimes\Ket{0}-\Ket{1}\otimes\Ket{1}\right),\\ X^{A^\prime}\Ket{\Phi_2^+}^{AA^\prime}&=\frac{1}{\sqrt{2}}\left(\Ket{0}\otimes\Ket{1}+\Ket{1}\otimes\Ket{0}\right),\\ X^{A^\prime}Z^{A^\prime}\Ket{\Phi_2^+}^{AA^\prime}&=\frac{1}{\sqrt{2}}\left(\Ket{0}\otimes\Ket{1}-\Ket{1}\otimes\Ket{0}\right). 
\end{align} For any $M^{A^\prime}\in\mathcal{B}\left(\mathcal{H}^{A^\prime}\right)$, the state $\Ket{\Phi_D^+}^{A^\prime B}$ satisfies \begin{equation} \left(M^{A^\prime}\otimes\mathbb{1}^B\right)\Ket{\Phi_D^+}^{A^\prime B}=\left(\mathbb{1}^{A^\prime}\otimes {\left(M^B\right)}^\textup{T}\right)\Ket{\Phi_D^+}^{A^\prime B}, \end{equation} where ${\left(M^B\right)}^\textup{T}$ represents the transpose of $M^B$ with respect to the computational basis of $\mathcal{H}^B$. Thus, the state \begin{equation} \Ket{\Phi_D^+}^{RA}\otimes\Ket{\Phi_D^+}^{A^\prime B} \end{equation} is transformed by this measurement into a state proportional to \begin{equation} \begin{split} &\left({\left(X_D^{A^\prime}\right)}^l{\left(Z_D^{A^\prime}\right)}^{l^\prime}\Ket{\Phi_D^+}\Bra{\Phi_D^+}^{AA^\prime}{\left({\left(X_D^{A^\prime}\right)}^l{\left(Z_D^{A^\prime}\right)}^{l^\prime}\right)}^\dag\right)\left(\Ket{\Phi_D^+}^{RA}\otimes\Ket{\Phi_D^+}^{A^\prime B}\right)\\ &\propto\left({\left(X_D^{A^\prime}\right)}^l{\left(Z_D^{A^\prime}\right)}^{l^\prime}\Ket{\Phi_D^+}^{AA^\prime}\right)\otimes \left({\left(X_D^B\right)}^l{\left(Z_D^B\right)}^{-l^\prime}\Ket{\Phi_D^+}^{RB}\right), \end{split} \end{equation} which can be shown by using \begin{align} \overline{X_{D}^{B}}&={X_{D}^{B}},\\ \overline{Z_{D}^{B}}&={\left(Z_{D}^{B}\right)}^{-1}, \end{align} where the overline represents complex conjugation of the matrix elements with respect to the computational basis. Therefore, performing classical communication of $A$'s measurement outcome $l$ and $l^\prime$ from $A$ to $B$, followed by $B$'s unitary transformation ${\left(Z_D^B\right)}^{l^\prime}{\left(X_D^B\right)}^{-l}$ conditioned by $l$ and $l^\prime$ for correction, the parties can prepare an entangled state $\Ket{\Phi_D^+}^{RB}$ between $R$ and $B$ by LOCC assisted by the initially shared entangled states $\Ket{\Phi_D^+}^{RA}\otimes\Ket{\Phi_D^+}^{A^\prime B}$, while the reduced state initially shared between $R$ and $B$ is not entangled. This protocol achieves entanglement swapping. Note that no operation is performed on $\mathcal{H}^R$ throughout the protocol. 
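The whole protocol can be checked numerically. The sketch below verifies, for qutrits ($D=3$), the identity $\left(M\otimes\mathbb{1}\right)\Ket{\Phi_D^+}=\left(\mathbb{1}\otimes M^\textup{T}\right)\Ket{\Phi_D^+}$ and the swapping step for one sample outcome; note that for $D>2$ the correcting unitary on $B$ involves inverse powers, ${\left(Z_D^B\right)}^{l^\prime}{\left(X_D^B\right)}^{-l}$, which coincides with the qubit correction at $D=2$ since then $X^{-1}=X$ and $Z^{-1}=Z$.

```python
import numpy as np

# Entanglement swapping for D = 3 (qutrits), systems ordered
# R, A, A', B as in the text.
D = 3
I = np.eye(D)
X = np.roll(I, 1, axis=0)                          # X|l> = |l+1 mod D>
Z = np.diag(np.exp(2j * np.pi * np.arange(D) / D))
mp = np.linalg.matrix_power

phi = np.eye(D).reshape(-1) / np.sqrt(D)           # |Phi_D^+>

# Identity (M (x) 1)|Phi+> = (1 (x) M^T)|Phi+> for a sample M.
M = X @ mp(Z, 2)
assert np.allclose(np.kron(M, I) @ phi, np.kron(I, M.T) @ phi)

state = np.kron(phi, phi).reshape(D, D, D, D)      # indices R, A, A', B

l, lp = 1, 2                                       # a sample outcome
b = (np.kron(I, mp(X, l) @ mp(Z, lp)) @ phi).reshape(D, D)

# Project A A' onto <b|; the unnormalized residue lives on R, B.
rb = np.einsum('rapb,ap->rb', state, b.conj())

# B's correction Z^{l'} X^{-l} recovers |Phi_D^+> on R, B.
U = mp(Z, lp) @ mp(X, (D - l) % D)
rb = rb @ U.T
rb /= np.linalg.norm(rb)
assert np.allclose(rb, phi.reshape(D, D))
print("entanglement swapping verified for D =", D)
```

Each outcome $(l,l^\prime)$ occurs with probability $1/D^2$, and the same correction rule works for every outcome.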
This protocol for entanglement swapping can be considered as LOCC performed by $A$ and $B$ assisted by an entangled state $\Ket{\Phi_D^+}^{A^\prime B}$ for transferring $A$'s part of $\Ket{\Phi_D^+}^{RA}$ from $A$ to $B$, keeping coherence between $R$ and $B$ to obtain $\Ket{\Phi_D^+}^{RB}$. If $A$ and $B$ performing LOCC assisted by $\Ket{\Phi_D^+}^{A^\prime B}$ achieve a CPTP map $\mathcal{E}^{A\to B}$ transferring $A$'s part of $\Ket{\Phi_D^+}^{RA}$ from $A$ to $B$, that is, \begin{equation} \left(\id^R\otimes\mathcal{E}^{A\to B}\right)\left({\Phi_D^+}^{RA}\right)={\Phi_D^+}^{RB}, \end{equation} then, due to the linearity of the CPTP map, the same CPTP map $\mathcal{E}^{A\to B}$ can transfer an arbitrary state given from $\mathcal{D}\left(\mathcal{H}^{A}\right)$, that is, \begin{equation} \mathcal{E}^{A\to B}\left(\psi^{A}\right)=\psi^B,\quad\forall\psi^{A}\in\mathcal{D}\left(\mathcal{H}^{A}\right), \end{equation} and \textit{vice versa}. The protocol for transferring an arbitrary state on $\mathcal{H}^{A}$ by entanglement-assisted LOCC is known as \textit{quantum teleportation}~\cite{B5}. This equivalence between transferring $A$'s part of $\Ket{\Phi_D^+}^{RA}$ from $A$ to $B$ and transferring arbitrary states of $\mathcal{H}^{A}$ is known as the relative state method~\cite{P3}, where in the former case, $R$ is regarded as a reference on which neither $A$ nor $B$ can perform any operation, and $A$ and $B$ keep coherence between $R$ and $AB$. Quantum teleportation simulates noiseless quantum communication transferring an arbitrary state $\psi^A$ of a $D$-dimensional system $\mathcal{H}^A$ from $A$ to $B$ by LOCC assisted by shared entanglement in the form of $\Ket{\Phi_D^+}$. 
Conversely, $\Ket{\Phi_D^+}$ shared between $A$ and $B$ can be prepared by quantum communication, where such a protocol can be $A$'s preparing $\Ket{\Phi_D^+}^{AA^\prime}$ by local operations, followed by transferring a part of this bipartite state corresponding to $\mathcal{H}^{A^\prime}$ from $A$ to $B$ by quantum communication. Thus, when LOCC can be freely performed, shared entanglement and quantum communication can be used as an equivalent resource for assisting LOCC\@. The entangled state in such a form with the minimal Schmidt rank, that is, $\Ket{\Phi_2^+}$, can be used as a basic unit of entanglement and is called an \textit{ebit}. \paragraph{Quantum state transformation by LOCC} Given that entanglement may serve as a resource assisting LOCC for performing distributed quantum information processing, it is natural to analyze which entangled state has more capability as a resource than others under LOCC\@. For two states $\phi$ and $\psi$ shared among $N$ parties, if there exists an LOCC map $\mathcal{E}_\textup{LOCC}$ achieving \begin{equation} \mathcal{E}_\textup{LOCC}\left(\phi\right)=\psi, \end{equation} then $\phi$ is said to be convertible, or transformable, into $\psi$ by LOCC\@. This state transformation by LOCC is denoted by \begin{equation} \phi\xrightarrow{\textup{LOCC}}\psi. \end{equation} In this case, $\phi$ can be considered to have more capability as a resource for assisting LOCC than $\psi$. If such a resource state having more capability, such as $\phi$ in the above case, is shared among parties, the parties may transform the shared resource state by LOCC into another suitable form, such as $\psi$, for assisting LOCC\@. This paradigm yields a \textit{common resource state}~\cite{S18,G2} transformable into any state in a given set, that is, a resource state having more capability than any state in the set. This set of states is called the \textit{target set} in the context of common resource states. 
Common resource states are assumed to be \textit{fully entangled}, that is, entangled with respect to any bipartition of the parties. More formally, given any target set $S$, a fully entangled state $\Ket{\phi}$ is called a common resource state for $S$ if for any $\psi\in S$, it holds that \begin{equation} \phi\xrightarrow{\textup{LOCC}}\psi. \end{equation} Similarly, Reference~\cite{M6} also introduces common resource states in terms of state convertibility by stochastic LOCC, that is, performing an LOCC measurement followed by post-selecting an outcome. For bipartite pure states of $\mathcal{H}^A\otimes\mathcal{H}^B$ shared between $A$ and $B$, convertibility of the states under LOCC is characterized by \textit{majorization}, as summarized in the following. Consider two states having Schmidt decompositions \begin{align} \Ket{\phi}^{AB}&\coloneq\sum_{l=0}^{R_\phi-1}\sqrt{\lambda_l^\phi}\Ket{\phi_l}^A\otimes\Ket{\phi_l}^B,\\ \Ket{\psi}^{AB}&\coloneq\sum_{l=0}^{R_\psi-1}\sqrt{\lambda_l^\psi}\Ket{\psi_l}^A\otimes\Ket{\psi_l}^B. \end{align} Reduced states on $\mathcal{H}^A$ of these states are \begin{align} \phi^A&=\sum_{l=0}^{R_\phi-1}\lambda_l^\phi \Ket{\phi_l}\Bra{\phi_l}^A,\\ \psi^A&=\sum_{l=0}^{R_\psi-1}\lambda_l^\psi \Ket{\psi_l}\Bra{\psi_l}^A. \end{align} A Hermitian operator $\phi^A$ is said to be \textit{majorized} by a Hermitian operator $\psi^A$, which is denoted by \begin{equation} \phi^A\prec\psi^A, \end{equation} if there exists a CPTP map $\mathcal{U}^A$ in the following form called a mixed unitary channel \begin{equation} \begin{split} \phi^A&=\mathcal{U}^A\left(\psi^A\right)\\ &\coloneq\sum_{j}p\left(j\right)U_j^A\psi^A{U_j^A}^\dag, \end{split} \end{equation} where $p\left(j\right)$ is a probability distribution, and $U_j^A$ for each $j$ is a unitary operator on $\mathcal{H}^A$. 
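The effect of a mixed unitary channel on the spectrum, and the resulting majorization $\phi^A\prec\psi^A$, can be illustrated numerically; a NumPy sketch with an arbitrarily chosen probability and Pauli unitary (illustrative choices, not from the text):

```python
import numpy as np

# A mixed unitary channel on a qubit: phi = (1-p) psi + p X psi X.
# The output's eigenvalues are majorized by the input's.
psi = np.diag([0.9, 0.1])                  # reduced state psi^A
X = np.array([[0.0, 1.0], [1.0, 0.0]])     # Pauli X as a unitary U_1
p = 0.3                                    # illustrative mixing probability

phi = (1 - p) * psi + p * X @ psi @ X      # mixed unitary channel output

lam_psi = np.sort(np.linalg.eigvalsh(psi))[::-1]   # descending order
lam_phi = np.sort(np.linalg.eigvalsh(phi))[::-1]

# Majorization check: partial sums of phi's eigenvalues never
# exceed those of psi, and the totals (traces) agree.
majorized = bool(np.all(np.cumsum(lam_phi) <= np.cumsum(lam_psi) + 1e-12))
print(np.round(lam_phi, 6), majorized)     # [0.66 0.34] True
```

Here $\phi^A$ is more randomized than $\psi^A$: its largest eigenvalue drops from $0.9$ to $0.66$ while the trace is preserved, exactly the partial-sum condition stated next.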
Such a probability distribution introduces randomness, and hence, if two quantum states $\phi$ and $\psi$ satisfy $\phi^A\prec\psi^A$, $\phi$ can be considered to be a more randomized state than $\psi$ in the sense that $\phi$ can be obtained from $\psi$ by a mixed unitary channel. To investigate properties of mixed unitary channels, it is useful to consider a unital channel $\tilde{\mathcal{U}}^A$ defined as a channel transforming the identity to the identity, that is, \begin{equation} \tilde{\mathcal{U}}^A\left(\mathbb{1}^A\right)=\mathbb{1}^A. \end{equation} For qubits, a channel is a mixed unitary channel if and only if it is a unital channel. In general, however, any mixed unitary channel is a unital channel, but not \textit{vice versa}~\cite{L4,W11}. The majorization condition of Hermitian operators can also be represented in terms of real vectors defined using eigenvalues of $\phi^A$ and $\psi^A$. For a Hermitian operator $\phi^A$, define a real vector of $\dim\mathcal{H}^A$ elements \begin{equation} \boldsymbol{\lambda}\left(\phi^A\right)\coloneq\left(\lambda_0^\phi,\ldots,\lambda_{R_\phi-1}^\phi,0,\ldots,0\right), \end{equation} where the first $R_\phi$ elements are eigenvalues of $\phi^A$ in descending order, and the rest are filled with zeros. Let $\boldsymbol{\lambda}\left(\psi^A\right)$ denote a real vector of $\dim\mathcal{H}^A$ elements defined for $\psi^A$ in the same way. The zero elements of $\boldsymbol{\lambda}\left(\phi^A\right)$ and $\boldsymbol{\lambda}\left(\psi^A\right)$ may be denoted by \begin{align} \lambda_{R_\phi}^{\phi}&=\lambda_{R_\phi+1}^{\phi}=\cdots=\lambda_{D-1}^{\phi}=0,\\ \lambda_{R_\psi}^{\psi}&=\lambda_{R_\psi+1}^{\psi}=\cdots=\lambda_{D-1}^{\psi}=0.
\end{align} A real vector $\boldsymbol{\lambda}\left(\phi^A\right)$ of $D$ elements is said to be \textit{majorized} by $\boldsymbol{\lambda}\left(\psi^A\right)$, which is denoted by \begin{equation} \boldsymbol{\lambda}\left(\phi^A\right)\prec\boldsymbol{\lambda}\left(\psi^A\right), \end{equation} if it holds that \begin{align} \sum_{l=0}^{m}\lambda_l^\phi&\leqq\sum_{l=0}^{m}\lambda_l^\psi,\quad\forall m\in\left\{0,\ldots,D-2\right\},\\ \sum_{l=0}^{D-1}\lambda_l^\phi&=\sum_{l=0}^{D-1}\lambda_l^\psi, \end{align} where the elements of $\boldsymbol{\lambda}\left(\phi\right)$ and $\boldsymbol{\lambda}\left(\psi\right)$ are assumed to be in descending order. A Hermitian operator $\phi^A$ is majorized by a Hermitian operator $\psi^A$ if and only if the corresponding real vector $\boldsymbol{\lambda}\left(\phi\right)$ is majorized by $\boldsymbol{\lambda}\left(\psi\right)$. Using majorization, convertibility between two bipartite pure states by LOCC is characterized as follows. \begin{lemma} \label{lem:pure_convertibility} (Reference~\cite{N2}) \textit{Convertibility between bipartite pure states by LOCC.} For any two bipartite pure states $\Ket{\phi}^{AB}$ and $\Ket{\psi}^{AB}$, \begin{equation} \phi^{AB}\xrightarrow{\textup{LOCC}}\psi^{AB} \end{equation} if and only if \begin{equation} \phi^A\prec\psi^A. \end{equation} \end{lemma} This characterization of transformations between bipartite pure states under LOCC generalizes to transformations of a bipartite pure state into a bipartite mixed state as follows. 
\begin{lemma} \label{lem:mixed} (Reference~\cite{J1}) \textit{Convertibility of a bipartite pure state into a bipartite mixed state by LOCC.} For a bipartite pure state $\Ket{\phi}^{AB}$ and a bipartite mixed state $\psi^{AB}$, \begin{equation} \phi^{AB}\xrightarrow{\textup{LOCC}}\psi^{AB} \end{equation} if and only if there exists an ensemble ${\left\{p(j),\Ket{\psi_j}^{AB}\right\}}_j$ of pure states, not necessarily orthogonal to each other, satisfying \begin{equation} \psi^{AB}=\sum_j p(j)\Ket{\psi_j}\Bra{\psi_j}^{AB} \end{equation} such that \begin{equation} \boldsymbol{\lambda}\left(\phi^A\right)\prec\sum_j p(j)\boldsymbol{\lambda}\left(\psi_j^A\right). \end{equation} \end{lemma} Convertibility of bipartite states under LOCC establishes a partial order of entangled states in terms of their capability as a resource: the set of quantum states can be regarded as a partially ordered set, where the order relation between two states is defined by whether one state is convertible into the other by LOCC\@. Given a bipartite system $\mathcal{H}^A\otimes\mathcal{H}^B$, entanglement in this partial order can be quantified by a function $E:\mathcal{D}\left(\mathcal{H}^A\otimes\mathcal{H}^B\right)\to\mathbb{R}$ satisfying \begin{equation} \begin{split} &\phi^{AB}\xrightarrow{\textup{LOCC}}\psi^{AB}\\ &\Rightarrow E\left(\phi^{AB}\right)\geqq E\left(\psi^{AB}\right), \end{split} \end{equation} and a function having this property is called an \textit{entanglement measure}. Note that one can additionally impose other properties to identify theoretically tractable entanglement measures, such as $E\left(\psi^{AB}\right)=0$ for any separable state $\psi$, as reviewed in References~\cite{H2,P1,E5}.
Various entanglement measures are known in bipartite cases, such as distillable entanglement~\cite{B3}, entanglement cost~\cite{B3,H1}, relative entropy of entanglement~\cite{V7}, and squashed entanglement~\cite{C21}, and these entanglement measures coincide for any pure state $\Ket{\psi}^{AB}$ with the entanglement entropy defined as \begin{equation} -\sum_{l=0}^{R_\psi-1}\lambda_l^\psi\log_2\lambda_l^\psi, \end{equation} where $\lambda_l^\psi$ for each $l$ corresponds to a Schmidt coefficient appearing in the Schmidt decomposition \begin{equation} \Ket{\psi}^{AB}=\sum_{l=0}^{R_\psi-1}\sqrt{\lambda_l^\psi}\Ket{\psi_l}^A\otimes\Ket{\psi_l}^B. \end{equation} A basic unit of these entanglement measures is the \textit{ebit}, that is, the entanglement entropy of $\Ket{\Phi_2^+}$. For bipartite pure states, the Schmidt rank is also monotonically nonincreasing under LOCC~\cite{L2}, and this property is referred to as the LOCC monotonicity of the Schmidt rank. Hence, the Schmidt rank, or its generalization to mixed states~\cite{T1}, can be regarded as an entanglement measure, although it takes only discrete values. As a special case of local operations, a unitary transformation on $\mathcal{H}^A\otimes\mathcal{H}^B$ in the form of $U^A\otimes U^B$ is called a \textit{local unitary} transformation. Given two states $\phi^{AB}$ and $\psi^{AB}$, consider a case where there exists a local unitary transformation $U^A\otimes U^B$ such that \begin{equation} \phi^{AB}=\left(U^A\otimes U^B\right)\psi^{AB}\left({U^A}^\dag\otimes {U^B}^\dag\right). \end{equation} In this case, $\phi^{AB}$ and $\psi^{AB}$ are said to be \textit{locally unitarily equivalent}. Since unitary transformations are invertible, for any locally unitarily equivalent states $\phi^{AB}$ and $\psi^{AB}$ and any entanglement measure $E$, it holds that \begin{equation} E\left(\phi^{AB}\right)=E\left(\psi^{AB}\right).
\end{equation} Lemmas~\ref{lem:pure_convertibility} and~\ref{lem:mixed} imply that for any state $\psi^{AB}$, there exists an LOCC map achieving \begin{equation} \Ket{\Phi_D^+}^{AB}\xrightarrow{\textup{LOCC}}\psi^{AB}, \end{equation} where \begin{equation} D=\min\left\{\dim\mathcal{H}^A,\dim\mathcal{H}^B\right\}. \end{equation} Hence, $\Ket{\Phi_D^+}^{AB}$ and its locally unitarily equivalent states maximize any entanglement measure $E$, and in this sense, $\Ket{\Phi_D^+}^{AB}$ and its locally unitarily equivalent states are called \textit{maximally entangled states}. Maximally entangled states of two qubits, that is, $\Ket{\Phi_2^+}^{AB}$ and its locally unitarily equivalent states, are called Bell states. The maximally entangled state of a bipartite system is unique up to these local unitary transformations. In contrast to these well-established results on bipartite entanglement, properties of multipartite entanglement are more involved~\cite{E2,W3,B26}. For a multipartite system in general, there may not exist a single maximally entangled state in the multipartite system itself transformable by LOCC into all the states in the system~\cite{V1,S1,S2,M1,G1,S3}. In particular, given a multipartite system where each local dimension is $d$, almost no LOCC transformation among pure states of the system is possible~\cite{G1,S3}. Due to these facts, the applicability of resource-theoretic analysis based on state convertibility under LOCC is limited where multipartite entanglement is concerned. In contrast, the analyses of multipartite entanglement in Parts~\ref{part:1} and~\ref{part:2} adopt a different perspective, based on settings relevant to distributed quantum information processing, where the parties can be restricted to having small- and intermediate-scale quantum systems of up to several dozen qubits and are connected by a network for quantum communication.
Part~\ref{part:1} analyzes a fundamental communication task, quantum state merging~\cite{H3,H4}, under such small- and intermediate-scale settings, and Part~\ref{part:2} analyzes manipulation of multipartite entanglement on networks. \part{\label{part:1}One-shot quantum state merging on small and intermediate scales under one-way and two-way communication} \chapter{Background and overview of Part~\ref{part:1}} Quantum state merging~\cite{H3,H4} is a communication task playing crucial roles in distributed quantum information processing~\cite{W8,W9,W10,W12} and multipartite entanglement transformations~\cite{A3,D8,Y8,D9,S4}. Quantum state merging, and quantum state redistribution~\cite{D2,D3} as a generalized task including state merging, were originally introduced in the context of quantum Shannon theory, and they have also been applied to the analyses of a family of other quantum communication tasks in quantum Shannon theory, such as the derivation of capacities of noisy quantum channels~\cite{D4,A7,H5,A2,H11,A8,P3,W5}. In the task of state merging originally formulated using the framework of local operations and classical communication (LOCC)~\cite{H3,H4}, two spatially separated parties $A$ and $B$ initially share an entangled resource state and are given $n$ shared states whose purification with reference $R$ is represented as ${\left(\Ket{\psi}^{RAB}\right)}^{\otimes n}$, where $A$ and $B$ know the classical description of $\Ket{\psi}^{RAB}$. The goal of the task is to asymptotically transfer $A$'s part of $\Ket{\psi}^{RAB}$ from $A$ to $B$ and obtain $\Ket{\psi}^{RB'B}$, keeping coherence between $B$ and $R$, by $A$ and $B$'s LOCC assisted by shared entanglement, within an error in fidelity approaching zero as $n \rightarrow \infty$. This type of scenario, in which a task is achieved infinitely many times with vanishing error, is called the asymptotic scenario.
When $A$ and $B$ are initially given a shared maximally entangled resource state in addition to $\Ket{\psi}^{RAB}$, quantum communication can be simulated by LOCC assisted by this maximally entangled resource state by means of quantum teleportation~\cite{B5}. Given a protocol for state merging, the amount of this shared entanglement required for the protocol, or equivalently, that of quantum communication when LOCC is free, is called the \textit{entanglement cost} of the protocol, regarded as the cost to be minimized. It is an essential feature of state merging that the parties may exploit the classical description of the initially given state $\Ket{\psi}^{RAB}$ for reducing the entanglement cost required for the protocols. Without classical description, there exists a trivial protocol achieving state merging by quantum teleportation~\cite{B5} for transferring $A$'s part of $\Ket{\psi}^{RAB}$ from $A$ to $B$. This trivial protocol does not require the classical description, and as a result, it requires the same entanglement cost for any given state. In contrast, entanglement cost in state merging can be reduced compared to quantum teleportation and can even be negative when the protocol provides a net gain of shared entanglement. Quantum state merging can also be regarded as an analogue of source coding with decoder's side information in classical information theory, established by Slepian and Wolf~\cite{S7}. Reference~\cite{S7} introduces and analyzes a situation involving three parties $A_1$, $A_2$, and $B$, where each of $A_1$ and $A_2$ is given classical information that is correlated with the other's, and the classical information of $A_1$ and $A_2$ is to be transferred to $B$. Then, Reference~\cite{S7} characterizes the minimal amount of classical communication from $A_1$ and $A_2$ to $B$ required for achieving this task.
If all of $A_1$'s classical information is first transferred to $B$, this classical information, possibly correlated with $A_2$'s, is called \textit{side information} at $B$, which can be used for reducing the amount of classical communication for transferring the rest of the classical information from $A_2$ to $B$, compared to the case without this side information. In quantum state merging, if $B$'s part of $\Ket{\psi}^{RAB}$ is correlated with $A$'s, $B$'s part may contribute to reducing the entanglement cost required for transferring $A$'s, compared to the case without $B$'s part. Such a task, in which $B$ is able to use a part of the shared quantum state, is called a task with \textit{quantum side information} at $B$. Similar notions of quantum side information are also widely used in contexts other than state merging, such as entropic uncertainty relations~\cite{C12}, state exchange~\cite{Y16}, and classical-quantum Slepian-Wolf problems~\cite{D10,T5,R3,T9,L1,M5,C11}. Properties of $B$'s quantum side information in state merging can be quantitatively captured in terms of entanglement cost in state merging. In the asymptotic scenario of state merging, the minimal entanglement cost is given by the conditional quantum entropy ${H\left(A|B\right)}_{\psi}$ per copy~\cite{H3,H4,B20}, which clarifies an operational meaning of the conditional quantum entropy. While the asymptotic scenario is well established in quantum Shannon theory, zero-error scenarios~\cite{G5} have also been studied; these were originally established in a classical setting by Shannon~\cite{S6} and first introduced into a quantum setting in Reference~\cite{M2}. Regarding the zero-error scenarios of classical source coding with decoder's side information, optimal zero-error code design is proven to be $NP$-hard~\cite{K7}.
However, in classical coding theory, explicit construction of zero-error coding protocols such as Shannon coding~\cite{S9} and Huffman coding~\cite{H9}, even if not necessarily optimal, establishes a foundation for theoretical analyses as well as practical applications. In this direction, explicit zero-error coding protocols for classical source coding with decoder's side information are given in References~\cite{K7,W7,J2,Y11,Z2,M3,M4}. Aside from this regime where infinitely many copies of $\Ket{\psi}^{RAB}$ are given, another regime is the one-shot regime, where only a single copy of $\Ket{\psi}^{RAB}$ is given. The one-shot regime can be further classified into two scenarios: an exact scenario with zero error, and an approximate scenario in which a nonzero error is tolerated for reducing entanglement cost. Analysis in the one-shot regime clarifies the structure of protocols achieving the communication tasks at a single-copy level and is more relevant to practical situations such as distributed quantum information processing. In addition to the asymptotic scenario, state merging and redistribution have been defined and analyzed in various one-shot scenarios~\cite{Y9,B9,B12,D7,D6,H10,B10,D5,M,N3,A4,A5,B15,B13,A16,A17}. There also exist other derivatives of state merging and redistribution in modified settings~\cite{B1,B2,S11,S12,S14,A10,A11}. In this part, after providing preliminaries in Chapter~\ref{sec:preliminaries_1}, the following two results on one-shot state merging are presented in Chapters~\ref{sec:merge} and~\ref{sec:two_way}, aiming at investigating the entanglement cost of transferring quantum information of unknown states on the small and intermediate scales. Chapter~\ref{sec:merge} constructs protocols achieving one-shot state merging even on small and intermediate scales, as well as analyzing lower bounds of the minimal achievable entanglement cost.
While the protocols constructed in Chapter~\ref{sec:merge} use only one-way LOCC from $A$ to $B$, Chapter~\ref{sec:two_way} discusses the advantage of two-way LOCC over one-way LOCC in one-shot state merging. \section*{Quantum state merging for arbitrarily small-dimensional systems} \begin{figure}\label{fig:merge} \end{figure} Chapter~\ref{sec:merge} investigates general bounds of the entanglement cost required for quantum state merging on the small and intermediate scales relevant to distributed quantum information processing. The existing protocols for one-shot quantum state merging or redistribution~\cite{B9,Y9,B12,D7,D6,H10,B10,D5,M,N3,A4,A5,B15,B13,A16,A17} achieve near optimality only on a large scale relevant to \textit{one-shot quantum information theory}, where functions of states called the smooth conditional min- and max-entropies~\cite{R2,T5} are used for evaluating entanglement cost. Definitions of these functions are summarized in Appendix~\ref{sec:one_shot_entropies}. These protocols also cause a nonzero approximation error in fidelity, since the vital techniques for these protocols, namely, one-shot decoupling~\cite{D6} and the convex-split lemma~\cite{A4}, cannot avoid errors. As higher fidelity is pursued in state merging of a fixed single copy of $\Ket{\psi}^{RAB}$, the entanglement cost required for the protocols diverges to infinity. Hence, there always exists a region of error close to zero where the protocols do not contribute to reducing the entanglement cost. Moreover, in cases where the system size for the reduced state of $\Ket{\psi}^{RAB}$ on $A$ is as small as a few dozen qubits, the protocols require more entanglement cost than quantum teleportation even if the error tolerance is reasonably large. (See Remark~\ref{remark:usefulness} for more discussion.)
In this sense, strategies in state merging to exploit the classical description of $\Ket{\psi}^{RAB}$ for reducing entanglement cost have \textit{not} yet been established for arbitrarily small-dimensional systems or arbitrarily high fidelity. In contrast, Chapter~\ref{sec:merge} explicitly constructs protocols for one-shot state merging with the following features: \begin{enumerate} \item Applicable to any state of an arbitrarily small-dimensional system, including small- and intermediate-scale states; \item Fulfilling arbitrarily high fidelity requirements, including zero error; \item Retaining the essential feature of state merging, that is, exploiting the classical description of $\Ket{\psi}^{RAB}$ for reducing entanglement cost. \end{enumerate} The task of one-shot state merging investigated here is achieved exactly, that is, without approximation, and is called \textit{exact state merging}, as illustrated in Figure~\ref{fig:merge}. Entanglement cost of the obtained protocols for exact state merging is not larger than, and can be strictly smaller than, that for its inverse task, \textit{exact state splitting}, summarized in Section~\ref{sec:split}, depending on the Koashi-Imoto decomposition of $\Ket{\psi}^{RAB}$~\cite{W4,K3,H6,K5}. Multiple examples of states, including those relevant to distributed quantum information processing, are also shown, where the obtained protocols for exact state merging can reduce entanglement cost, since these states have \textit{nontrivial} Koashi-Imoto decompositions. In the same way as in the asymptotic scenario, the entanglement cost of the obtained protocol can even be negative. In addition to providing achievability bounds by constructing the protocols, Chapter~\ref{sec:merge} also obtains converse bounds, that is, lower bounds of the minimal entanglement cost required for any protocol for exact state merging.
The obtained converse bounds improve the existing converse bound~\cite{B9} given in terms of the conditional max-entropy~\cite{R2,T5,T11}, and it is shown that the converse bound is achievable when the state to be merged is represented by qubits. By means of \textit{smoothing}~\cite{R2,T5,T11}, these results on exact state merging are straightforwardly extended to \textit{approximate state merging}, where arbitrarily small approximation error in fidelity is allowed so that the entanglement cost can further be reduced compared to exact state merging. The obtained converse bound of entanglement cost in approximate state merging improves the existing converse bound~\cite{B10}. \section*{One-shot quantum state merging under one-way and two-way communication} The minimal entanglement cost ${H\left(A|B\right)}_{\psi}$ in the asymptotic scenario of state merging can be achieved by only \textit{one-way} LOCC, using one-way classical communication only from $A$ to $B$, even if $A$ and $B$ are allowed to perform \textit{two-way} LOCC, using two-way classical communication both from $A$ to $B$ and from $B$ to $A$. Indeed, ${H\left(A|B\right)}_{\psi}$ is monotonically nondecreasing under a class of operations consisting of $B$'s preprocessing and backward classical communication from $B$ to $A$, as shown in Section~\ref{sec:cost}. A conventional interpretation of ${H\left(A|B\right)}_{\psi}\coloneq {H\left(AB\right)}_{\psi}-{H\left(B\right)}_{\psi}$ is that the first term ${H\left(AB\right)}_{\psi}$ quantifies quantum information encoded in $A$ and $B$'s shared state, and the second term ${H\left(B\right)}_{\psi}$ quantifies quantum information initially located at $B$~\cite{H4}, while this interpretation is based on one-way communication from $A$ to $B$ in analogy to classical source coding with $B$'s classical side information~\cite{S7}. 
As for one-shot scenarios of state merging, the existing protocols~\cite{B9,Y9,B12,D7,D6,H10,B10,D5,M,N3,A4,A5,B15,B13,A16,A17}, based on either the technique of one-shot decoupling~\cite{D6} or the convex-split lemma~\cite{A4}, use only one-way communication similarly to the asymptotic scenario, and it has been unknown whether protocols using two-way communication may outperform those using only one-way communication~\cite{B10}. In contrast, Chapter~\ref{sec:two_way} demonstrates a provable advantage of two-way LOCC over one-way LOCC in exploiting $B$'s quantum side information in a one-shot scenario of state merging, showing that the minimal entanglement cost in state merging under two-way LOCC can be strictly smaller than that under one-way LOCC, while they coincide in the asymptotic scenario. The results in Chapter~\ref{sec:two_way} suggest that in a one-shot regime, $B$'s preprocessing and backward classical communication from $B$ to $A$ can be indispensable for making the most of quantum side information, that is, for minimizing entanglement cost in state merging. \chapter{\label{sec:preliminaries_1}Preliminaries to Part~\ref{part:1}} This chapter provides preliminaries to Part~\ref{part:1}. Section~\ref{sec:def_merge} defines the tasks of one-shot state merging to be analyzed in the rest of Part~\ref{part:1}. For comparison, results of the preceding works~\cite{H3,H4} on the asymptotic scenario of state merging are summarized in Section~\ref{sec:asymptotic}, and definitions and properties of state splitting, the inverse task of state merging, are summarized in Section~\ref{sec:split}. Sections~\ref{sec:def_merge} and~\ref{sec:split} are based on Reference~\cite{Y12}. \section{\label{sec:def_merge}Definition of one-shot state merging} Definitions of the tasks of one-shot state merging to be analyzed in this part are presented in this section.
Quantum state merging involves three spatially separated parties $A$, $B$, and $R$, where by convention, $A$ is a sender, $B$ is a receiver, and $R$ is a reference to consider a purification. Let $\mathcal{H}^A$ and $\mathcal{H}^{\overline{A}}$ be $A$'s systems, $\mathcal{H}^B$, $\mathcal{H}^{B'}$, and $\mathcal{H}^{\overline{B}}$ be $B$'s, and $\mathcal{H}^R$ be $R$'s, where \begin{equation} \dim\mathcal{H}^{A}=\dim\mathcal{H}^{B'}. \end{equation} Given any tripartite pure state $\Ket{\psi}^{RAB}$ shared among $R$, $A$, and $B$, state merging of $\Ket{\psi}^{RAB}$ aims to transfer $A$'s part of $\Ket{\psi}^{RAB}$ to $B$, keeping coherence between $R$ and $B$, to obtain $\Ket{\psi}^{RB^\prime B}$. Assume that the parties $A$ and $B$ are allowed to freely perform local operations and classical communication (LOCC), and quantum communication is performed by LOCC assisted by a maximally entangled resource state shared between $A$ and $B$ in $\mathcal{H}^{\overline{A}}\otimes\mathcal{H}^{\overline{B}}$. Note that $A$ and $B$ cannot perform any operation on $R$. While a trivial protocol for state merging of $\Ket{\psi}^{RAB}$ is quantum teleportation of $A$'s part of $\Ket{\psi}^{RAB}$, the initially given state $\Ket{\psi}^{RAB}$ in state merging may have entanglement between $A$ and $B$, and hence, there are cases where $A$ and $B$ can distill this entanglement of $\Ket{\psi}^{RAB}$ in achieving state merging of $\Ket{\psi}^{RAB}$ to reduce the required amount of entanglement. A maximally entangled state initially shared between $A$ and $B$ is written as \begin{equation} \Ket{\Phi_K^+}^{\overline{A}\overline{B}}\coloneq\frac{1}{\sqrt{K}}\sum_{l=0}^{K-1}\Ket{l}^{\overline{A}}\otimes\Ket{l}^{\overline{B}}, \end{equation} where $K$ denotes the Schmidt rank of the initially shared maximally entangled resource state.
After achieving state merging, $A$ and $B$ are still allowed to share a maximally entangled state \begin{equation} \Ket{\Phi_L^+}^{\overline{A}\overline{B}}\coloneq\frac{1}{\sqrt{L}}\sum_{l=0}^{L-1}\Ket{l}^{\overline{A}}\otimes\Ket{l}^{\overline{B}}, \end{equation} where $L$ denotes the Schmidt rank of the finally shared maximally entangled resource state. The amount of entanglement of $\Ket{\Phi_K^+}$ and $\Ket{\Phi_L^+}$ is measured in terms of the entanglement entropy, that is, $\log_2 K$ and $\log_2 L$, respectively. If $\log_2 K-\log_2 L > 0$, $\log_2 K-\log_2 L$ is regarded as the amount of entanglement consumed in the protocol, and otherwise, $\log_2 L-\log_2 K$ as a net gain of entanglement. When $\log_2 K > 0$, $\log_2 L > 0$, and $\log_2 K-\log_2 L > 0$ in state merging, a part of the entanglement of the initially shared maximally entangled resource state, that is, $\log_2 L$ ebits out of $\log_2 K$ ebits, can be considered to be used catalytically. This setting is called a \textit{catalytic setting}. In one-shot scenarios, it is also beneficial to minimize the initially required amount of entanglement for achieving state merging. Hence, another setting can also be considered by fixing $\log_2 L=0$ in the above catalytic setting of state merging. This setting is called a \textit{non-catalytic setting}. Note that protocols for state merging in the non-catalytic setting also work in the catalytic setting, but not necessarily \textit{vice versa}. In the remainder of this chapter, the catalytic setting is considered unless explicitly stated otherwise. A simple one-shot scenario of state merging is one requiring zero error in the protocol. This task is called \textit{exact state merging} and defined as follows. The task of exact state merging is also illustrated in Figure~\ref{fig:merge}.
\begin{definition} \label{def:merging} \textit{Exact state merging.} Exact state merging of a purified given state $\Ket{\psi}^{RAB}$ is a task for parties $A$ and $B$ to achieve a transformation \begin{equation} \id^R\otimes\mathcal{M}\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right) ={\psi}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}} \end{equation} by an LOCC map \begin{equation} \mathcal{M}:\mathcal{B}\left(\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^{\overline{A}}\otimes\mathcal{H}^{\overline{B}}\right)\to\mathcal{B}\left(\mathcal{H}^{B'}\otimes\mathcal{H}^B\otimes\mathcal{H}^{\overline{A}}\otimes\mathcal{H}^{\overline{B}}\right), \end{equation} which can be constructed depending on the classical description of $\Ket{\psi}^{RAB}$. The definition of exact state merging in the non-catalytic setting is also obtained by setting $\log_2 L=0$ in the above definition. Entanglement cost of a protocol for exact state merging in the catalytic setting is defined as \begin{equation} \log_2 K-\log_2 L, \end{equation} and that in the non-catalytic setting is defined as \begin{equation} \log_2 K. \end{equation} \end{definition} The minimal entanglement cost among all the protocols for exact state merging of $\Ket{\psi}^{RAB}$ may be simply referred to as the entanglement cost in exact state merging of $\Ket{\psi}^{RAB}$. If \begin{equation} \log_2 K\geqq\log_2 \dim\mathcal{H}^A, \end{equation} there exists a trivial protocol for exact state merging by quantum teleportation to transfer $\psi^{A}$ from $A$ to $B$. The results given in Chapters~\ref{sec:merge} and~\ref{sec:two_way} provide protocols at lower entanglement cost by using the classical description of $\Ket{\psi}^{RAB}$. The following tasks are achievable at the same entanglement cost, using the same protocols, as exact state merging of a given state $\Ket{\psi}^{RAB}$.
Consider the Schmidt decomposition of $\Ket{\psi}^{RAB}$ with respect to the bipartition between $\mathcal{H}^R$ and $\mathcal{H}^{A}\otimes\mathcal{H}^{B}$ \begin{equation} \label{eq:schmidt} \Ket{\psi}^{RAB}=\sum_{l=0}^{D-1} \sqrt{\lambda_l}\Ket{l}^R\otimes\Ket{\psi_l}^{AB}, \end{equation} where $D$ is the Schmidt rank, and $\lambda_l>0$ for each $l\in\{0,\ldots,D-1\}$. Then, the entanglement cost in exact state merging of $\Ket{\psi}^{RAB}$ equals that of a maximally entangled state $\Ket{\Phi_D^+\left(\psi\right)}^{RAB}$ with Schmidt rank $D$ corresponding to $\Ket{\psi}^{RAB}$ \begin{equation} \label{eq:max} \Ket{\Phi_D^+\left(\psi\right)}^{RAB}\coloneq\sum_{l=0}^{D-1} \frac{1}{\sqrt{D}}\Ket{l}^R\otimes\Ket{\psi_l}^{AB}, \end{equation} where the Schmidt basis on the right-hand side is the same as that in Equation~\eqref{eq:schmidt}, and this maximally entangled state is independent of the Schmidt coefficients $\sqrt{\lambda_0},\ldots,\sqrt{\lambda_{D-1}}$ in Equation~\eqref{eq:schmidt}. This equivalence also implies that the entanglement cost in exact state merging of $\Ket{\psi}^{RAB}$ is the same as that required for merging arbitrary bipartite states shared between $A$ and $B$ on a subspace of $\mathcal{H}^A\otimes\mathcal{H}^B$ spanned by the Schmidt-basis states ${\left\{\Ket{\psi_l}^{AB}\right\}}_l$ corresponding to nonzero Schmidt coefficients in Equation~\eqref{eq:schmidt}. The equivalence between considering the maximally entangled state with $R$ in Equation~\eqref{eq:max} and considering arbitrary bipartite states on the corresponding subspace is also known as the relative state method~\cite{P3}.
Note that in general, entanglement cost in exact state merging of $\Ket{\psi}^{RAB}$ is different from that required for merging arbitrary bipartite states given from an ensemble ${\left\{p\left(l\right),\Ket{\psi_l}^{AB}\right\}}_l$ for a probability distribution $p\left(l\right)$, since coherence of arbitrary superpositions of ${\left\{\Ket{\psi_l}^{AB}\right\}}_l$ has to be kept in state merging of $\Ket{\psi}^{RAB}$. These equivalences are shown in the following proposition; see Appendix~\ref{sec:equivalence} for the proof. \begin{proposition} \label{prp:max} \textit{Equivalence of exact state merging of an arbitrary tripartite pure state, a corresponding maximally entangled state, and a corresponding set of bipartite states.} Given any fixed integers $K$ and $L$, and any pure state $\Ket{\psi}^{RAB}$ whose Schmidt rank with respect to the bipartition between $\mathcal{H}^R$ and $\mathcal{H}^{A}\otimes\mathcal{H}^{B}$ is $D$ and whose Schmidt decomposition is given by Equation~\eqref{eq:schmidt}, the following statements are equivalent: \begin{enumerate} \item An LOCC map $\mathcal{M}$ achieves the following exact state merging of $\Ket{\psi}^{RAB}$ \begin{equation} \id^R\otimes\mathcal{M}\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right) ={\psi}^{RB^\prime B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}; \end{equation} \item The same LOCC map $\mathcal{M}$ as the above achieves the following exact state merging of $\Ket{\Phi_D^+\left(\psi\right)}^{RAB}$ \begin{equation} \id^R\otimes\mathcal{M}\left({\Phi_D^+\left(\psi\right)}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right) ={\Phi_D^+\left(\psi\right)}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}, \end{equation} where $\Ket{\Phi_D^+\left(\psi\right)}^{RAB}$ is the maximally entangled state corresponding to $\Ket{\psi}^{RAB}$, defined as Equation~\eqref{eq:max}.
\item Define a set $S_\psi^{AB}\subset\mathcal{H}^A\otimes\mathcal{H}^B$ of arbitrary bipartite states on a subspace of $\mathcal{H}^A\otimes\mathcal{H}^B$ spanned by the Schmidt-basis states ${\left\{\Ket{\psi_l}^{AB}\right\}}_l$ corresponding to nonzero Schmidt coefficients of $\Ket{\psi}^{RAB}$ in Equation~\eqref{eq:schmidt}, that is, \begin{equation} S_\psi^{AB} \coloneq{\left\{\psi_{\boldsymbol{\alpha}}^{AB}\coloneq\sum_{l=0}^{D-1}\sum_{l^\prime=0}^{D-1}\alpha_{l,l^\prime} \Ket{\psi_l}\Bra{\psi_{l^\prime}}^{AB}\in\mathcal{D}\left(\mathcal{H}^A\otimes\mathcal{H}^B\right)\right\}}_{\boldsymbol{\alpha}}, \end{equation} where $\boldsymbol{\alpha}$ denotes the tuple of the parameters $\alpha_{l,l^\prime}$ for all $l$ and $l^\prime$. Then, the same LOCC map $\mathcal{M}$ as above achieves the following state transformation for any bipartite state $\psi_{\boldsymbol{\alpha}}^{AB}\in S_\psi^{AB}$ \begin{equation} \mathcal{M}\left(\psi_{\boldsymbol{\alpha}}^{AB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right) =\psi_{\boldsymbol{\alpha}}^{B^\prime B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}, \end{equation} where $\mathcal{M}$ is independent of $\boldsymbol{\alpha}$. \end{enumerate} The same equivalence also holds in the non-catalytic setting by fixing $\log_2 L=0$. \end{proposition} While this zero-error scenario is fundamental, a sufficiently small error in the fidelity of quantum states does not significantly affect the outcome of any measurement in quantum mechanics. Hence, another scenario can be considered, in which a nonzero approximation error may be tolerated to reduce the entanglement cost compared with exact state merging. This task is called \textit{approximate state merging} and is defined as follows.
\begin{definition} \label{def:approxiamte_state_merging} \textit{Approximate state merging.} Approximate state merging of a given state $\Ket{\psi}^{RAB}$ within a given error $\epsilon\geqq 0$ is a task of parties $A$ and $B$ performing an LOCC map \begin{equation} \tilde{\mathcal{M}}: \mathcal{B}\left(\mathcal{H}^A\otimes\mathcal{H}^{B}\otimes\mathcal{H}^{\overline{A}}\otimes\mathcal{H}^{\overline{B}}\right) \to \mathcal{B}\left(\mathcal{H}^{B^\prime}\otimes\mathcal{H}^B\otimes\mathcal{H}^{\overline{A}}\otimes\mathcal{H}^{\overline{B}}\right) \end{equation} achieving a transformation \begin{equation} \id^R \otimes\tilde{\mathcal{M}}\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right) ={\tilde\psi}^{RB^\prime B\overline{A}\overline{B}}, \end{equation} where the fidelity of the final state satisfies \begin{equation} F^2\left({\tilde{\psi}}^{RB^\prime B\overline{A}\overline{B}}, \psi^{RB^\prime B}\otimes{\Phi_L^+}^{\overline{A}\overline{B}}\right)\coloneq\left(\Bra{\psi}\otimes\Bra{\Phi_L^+}\right){\tilde{\psi}}\left(\Ket{\psi}\otimes\Ket{\Phi_L^+}\right)\geqq 1-\epsilon^2. \end{equation} Given a protocol for approximate state merging, the entanglement cost of the protocol is defined as \begin{equation} \log_2 K - \log_2 L. \end{equation} Approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$ in the non-catalytic setting is defined by fixing \begin{equation} \log_2 L = 0 \end{equation} in the above definition. \end{definition} \section{\label{sec:asymptotic}Asymptotic scenario of quantum state merging and quantum entropy} The results of the preceding works~\cite{H3,H4} on the entanglement cost required in the asymptotic scenario of quantum state merging are summarized in this section. The minimal entanglement cost in the asymptotic scenario is characterized by entropic functions, which are also summarized in this section.
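The entropic functions appearing in this section can be evaluated numerically from density matrices. The following minimal sketch (hypothetical example states, assuming NumPy) computes the von Neumann entropy and the conditional quantum entropy $H(A|B)=H(AB)-H(B)$; for a maximally entangled two-qubit state the conditional entropy is negative:

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -tr(rho log2 rho), ignoring zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def conditional_entropy(rho_AB, dA, dB):
    """H(A|B) = H(AB) - H(B) for a density matrix on C^dA tensor C^dB."""
    # partial trace over A: contract the two A indices of the reshaped matrix
    rho_B = np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=0, axis2=2)
    return von_neumann_entropy(rho_AB) - von_neumann_entropy(rho_B)

# Maximally entangled two-qubit state: pure on AB, maximally mixed on B
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
print(conditional_entropy(np.outer(phi, phi), 2, 2))   # -1.0

# Two maximally mixed qubits in a product state
print(conditional_entropy(np.eye(4) / 4, 2, 2))        # 1.0
```

A negative value of $H(A|B)$ is precisely the regime in which, asymptotically, merging can yield entanglement rather than consume it.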
The asymptotic scenario of quantum state merging was originally analyzed in References~\cite{H3,H4}, which aim at achieving approximate state merging of many copies of the same state so as to exploit the law of large numbers; that is, instead of $\Ket{\psi}^{RAB}$ in Definition~\ref{def:approxiamte_state_merging}, the given state is in the form of \begin{equation} {\left(\Ket{\psi}^{RAB}\right)}^{\otimes n}. \end{equation} For any $\epsilon>0$, the rate of entanglement cost of a protocol achieving approximate state merging of ${\left(\Ket{\psi}^{RAB}\right)}^{\otimes n}$ within $\epsilon$ is evaluated by \begin{equation} \frac{1}{n}\left(\log_2 K-\log_2 L\right). \end{equation} References~\cite{H3,H4} analyze the minimal achievable rate of the entanglement cost under asymptotically vanishing error, that is, \begin{equation} \lim_{\epsilon\to 0}\lim_{n\to\infty}\inf\left\{\frac{1}{n}\left(\log_2 K-\log_2 L\right)\right\}, \end{equation} where the infimum is taken over all the protocols achieving approximate state merging of ${\left(\Ket{\psi}^{RAB}\right)}^{\otimes n}$ within $\epsilon$. This minimal achievable rate of the entanglement cost in state merging in the asymptotic scenario is evaluated using the quantum entropy summarized in the following. For any discrete random variable $X$ with a probability distribution $p\left(x\right)$, the entropy of $X$ is defined as \begin{equation} H\left(X\right)\coloneq -\sum_x p\left(x\right)\log_2 p\left(x\right). \end{equation} Similarly, for any state $\psi^A$, the quantum entropy of $\psi^A$ is defined as \begin{equation} {H\left(A\right)}_\psi\coloneq-\tr\psi^A\log_2\psi^A, \end{equation} which is also written as \begin{equation} H\left(\psi^A\right)\coloneq {H\left(A\right)}_\psi. \end{equation} Quantum entropy is invariant under isometry; that is, for any isometry $U^{A\to A^\prime}$, it holds that \begin{equation} H\left(\psi^A\right)={H\left(\left(U^{A\to A^\prime}\right)\psi^A{\left(U^{A\to A^\prime}\right)}^\dag\right)}.
\end{equation} For any bipartite state $\psi^{AB}$, the joint quantum entropy of $\psi^{AB}$ for the bipartite system $\mathcal{H}^A\otimes\mathcal{H}^B$ is defined as \begin{equation} {H\left(AB\right)}_\psi\coloneq H\left(\psi^{AB}\right). \end{equation} The joint quantum entropy of multipartite states is generally defined in the same way. If a given bipartite state $\psi^{XA}$ is in the form of \begin{equation} \psi^{XA}=\sum_x p\left(x\right)\Ket{x}\Bra{x}^X\otimes\psi_x^A, \end{equation} where $p\left(x\right)$ is a probability distribution, ${\left\{\Ket{x}\right\}}_x$ is the computational basis of $\mathcal{H}^X$, and $\psi_x^A\in\mathcal{D}\left(\mathcal{H}^A\right)$ for each $x$, then $\psi^{XA}$ is called a classical-quantum state. This classical-quantum state $\psi^{XA}$ can be regarded as a mixed state representing an ensemble ${\left\{p\left(x\right),\Ket{x}\Bra{x}^X\otimes\psi_x^A\right\}}_x$. The joint quantum entropy of this classical-quantum state $\psi^{XA}$ is evaluated as \begin{equation} {H\left(XA\right)}_\psi= H\left(X\right)+\sum_x p\left(x\right) H\left(\psi_x^A\right). \end{equation} For any bipartite state $\psi^{AB}$, the conditional quantum entropy of $\psi^{AB}$ is defined as \begin{equation} {H\left(A|B\right)}_\psi\coloneq {H\left(AB\right)}_\psi - {H\left(B\right)}_\psi. \end{equation} Conditional quantum entropy satisfies for any CPTP map $\mathcal{N}^{B\to B^\prime}$ \begin{align} {H\left(A|B\right)}_\psi&\geqq{H\left(A|B^\prime\right)}_{\psi^\prime}\,,\\ {\psi^\prime}^{B^\prime}&\coloneq\mathcal{N}^{B\to B^\prime}\left(\psi^B\right), \end{align} and this type of inequality is called the data processing inequality. Using these notations, the minimal achievable rate of entanglement cost in the asymptotic scenario of state merging is evaluated as follows. \begin{lemma} (References~\cite{H3,H4}.) 
\textit{Entanglement cost in state merging in the asymptotic scenario.} For any pure state $\Ket{\psi}^{RAB}$, the minimal achievable rate of the entanglement cost in the asymptotic scenario of state merging is given by the conditional quantum entropy of $\psi^{AB}$, that is, \begin{equation} \lim_{\epsilon\to 0}\lim_{n\to\infty}\inf\left\{\frac{1}{n}\left(\log_2 K-\log_2 L\right)\right\}={H\left(A|B\right)}_\psi, \end{equation} where the infimum is taken over all the protocols achieving approximate state merging of ${\left(\Ket{\psi}^{RAB}\right)}^{\otimes n}$ within $\epsilon$. \end{lemma} \section{\label{sec:split}Quantum state splitting for arbitrarily small-dimensional systems} Before proceeding to the analysis of one-shot state merging in Chapters~\ref{sec:merge} and~\ref{sec:two_way}, the inverse task of one-shot state merging, one-shot state splitting, is introduced in this section for comparison. State splitting involves three spatially separated parties $A$, $B$, and $R$, where by convention, $A$ is a sender, $B$ is a receiver, and $R$ is a reference to consider purification. Let $A$ have systems $\mathcal{H}^A$, $\mathcal{H}^{A^\prime }$, and $\mathcal{H}^{\overline{A}}$, $B$ have $\mathcal{H}^B$ and $\mathcal{H}^{\overline{B}}$, and $R$ have $\mathcal{H}^R$, where \begin{equation} \dim\mathcal{H}^{A^\prime }=\dim\mathcal{H}^B. \end{equation} In state splitting, $A$ and $B$ initially share a maximally entangled resource state, and $A$ also holds a given bipartite state of $\mathcal{H}^A\otimes\mathcal{H}^{A^\prime}$, whose purification with reference $R$ is represented as $\Ket{\psi}^{RAA^\prime}$.
Assume that $A$ and $B$ can freely perform local operations and classical communication (LOCC) assisted by a maximally entangled resource state initially shared between $\mathcal{H}^{\overline{A}}$ of $A$ and $\mathcal{H}^{\overline{B}}$ of $B$, that is, \begin{equation} \Ket{\Phi^+_K}^{\overline{A} \overline{B}}\coloneq\frac{1}{\sqrt{K}}\sum_{l=0}^{K-1}\Ket{l}^{\overline{A}} \otimes\Ket{l}^{\overline{B}}, \end{equation} where $K$ denotes the Schmidt rank of this resource state. Note that $A$ and $B$ cannot perform any operation on $\mathcal{H}^R$. State splitting of $\Ket{\psi}^{RAA^\prime}$ aims to transfer a part of $\Ket{\psi}^{RAA^\prime}$ corresponding to $\mathcal{H}^{A^\prime}$ from $A$ to $B$, keeping coherence between $R$ and $AB$, to obtain $\Ket{\psi}^{RAB}$, where $\mathcal{H}^B$ is $B$'s system corresponding to $\mathcal{H}^{A^\prime}$. In the same way as exact state merging, a simple one-shot scenario of state splitting is that requiring zero error. This task is called \textit{exact state splitting}, illustrated in Figure~\ref{fig:split}, and defined as follows. Note that it suffices to consider no catalytic use of entanglement in exact state splitting. 
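Quantum teleportation with the resource state $\Ket{\Phi^+_K}$ is the basic primitive invoked repeatedly in what follows. As a minimal numerical sketch (a hypothetical single-qubit case $K=2$, assuming NumPy), one can verify that every Bell-measurement outcome, followed by the matching Pauli correction, recovers the input state exactly:

```python
import numpy as np

# One-qubit teleportation (K = 2): A holds psi^{A'} and half of Phi_2^+.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Z, X @ Z]

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # |Phi_2^+>
bell = [np.kron(I2, s) @ phi_plus for s in paulis]   # Bell basis on (A', Abar)

rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)                          # arbitrary input qubit

state = np.kron(psi, phi_plus)                       # psi^{A'} x Phi+^{Abar Bbar}
for b, sigma in zip(bell, paulis):
    # A's Bell measurement <Phi_m| on (A', Abar); the remaining system is Bbar
    amp = b.conj() @ state.reshape(4, 2)
    out = sigma.T @ amp                              # B's Pauli correction
    out = out / np.linalg.norm(out)
    print(np.isclose(abs(np.vdot(psi, out)), 1.0))   # True for every outcome
```

Each of the four outcomes occurs with probability $1/4$, and the correction restores the input up to a global phase; the general-$K$ protocol used below works in the same way with generalized Pauli operators.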
\begin{figure}\label{fig:split} \end{figure} \begin{definition} \label{def:splitting} \textit{Exact state splitting.} Exact state splitting of a purified given state $\Ket{\psi}^{R A A^\prime }$ is a task for parties $A$ and $B$ to achieve a transformation \begin{equation} \id^R \otimes\mathcal{S}\left({\psi}^{RAA^\prime }\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right) ={\psi}^{RAB} \end{equation} by an LOCC map \begin{equation} \mathcal{S}: \mathcal{B}\left(\mathcal{H}^A\otimes\mathcal{H}^{A^\prime }\otimes\mathcal{H}^{\overline{A}}\otimes\mathcal{H}^{\overline{B}}\right) \to \mathcal{B}\left(\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^{\overline{A}}\otimes\mathcal{H}^{\overline{B}}\right), \end{equation} which can be constructed depending on the classical description of $\Ket{\psi}^{RAA^\prime}$. Given a protocol for exact state splitting, entanglement cost of the protocol is defined as $\log_2 K$. \end{definition} If \begin{equation} \log_2 K\geqq\log_2\dim\mathcal{H}^{A^\prime}, \end{equation} there exists a trivial protocol for state splitting by quantum teleportation to transfer $\psi^{A^\prime }$ from $A$ to $B$. In contrast, there are cases where classical description of $\Ket{\psi}^{RAA^\prime}$ can be used for reducing the entanglement cost. The following theorem shows the minimal entanglement cost in exact state splitting and a protocol achieving the minimal entanglement cost. \begin{theorem} \label{thm:split} \textit{Optimal entanglement cost in exact state splitting.} Given any $\Ket{\psi}^{RAA^\prime }$, exact state splitting of $\Ket{\psi}^{RAA^\prime }$ is achievable if and only if \begin{equation} \log_2 K \geqq \log_2 \rank \psi^{A^\prime }. \end{equation} \end{theorem} \begin{proof} \textit{If part}: The proof is by construction, and an LOCC protocol achieving \begin{equation} \label{eq:split_upper} \log_2 K = \log_2 \rank \psi^{A^\prime } \end{equation} is constructed. 
Note that the trivial protocol, that is, quantum teleportation of $\psi^{A^\prime}$, requires entanglement cost $\log_2 K = \log_2\dim\mathcal{H}^{A^\prime }$, and the protocol achieving Equation~\eqref{eq:split_upper} outperforms this trivial protocol when $\psi^{A^\prime}$ is not a full-rank state, that is, $\rank\psi^{A^\prime }<\dim\mathcal{H}^{A^\prime}$; \textit{e.g.}, when $\psi^{A^\prime}$ is locally represented as a code state of a quantum error correcting code~\cite{G,D,T10,B} using a larger-dimensional system $\mathcal{H}^{A^\prime}$ than the rank of $\psi^{A^\prime}$. To achieve Equation~\eqref{eq:split_upper}, a method for compressing $\psi^{A^\prime }$ is provided. Consider the Schmidt decomposition of the given state $\Ket{\psi}^{RAA^\prime }$ with respect to the bipartition between $\mathcal{H}^R\otimes\mathcal{H}^A$ and $\mathcal{H}^{A^\prime }$, that is, \begin{equation} \Ket{\psi}^{RAA^\prime }=\sum_{l\in R_{\psi}} \sqrt{\lambda_l^\psi}\Ket{l}^{RA}\otimes\Ket{l}^{A^\prime }, \end{equation} where $R_{\psi}\coloneq\left\{0,\ldots,\rank \psi^{A^\prime }-1\right\}$, each $\sqrt{\lambda_l^\psi}>0$ is a nonzero Schmidt coefficient, and ${\left\{\Ket{l}^{RA}: l\in R_{\psi}\right\}}$ and ${\left\{\Ket{l}^{A^\prime }: l\in R_{\psi}\right\}}$ are subsets of the Schmidt bases of $\mathcal{H}^R\otimes\mathcal{H}^A$ and $\mathcal{H}^{A^\prime }$, respectively, corresponding to the nonzero Schmidt coefficients. Let $\mathcal{H}^{A^{\prime\prime}}$ be $A$'s auxiliary system satisfying \begin{equation} \dim\mathcal{H}^{A^{\prime\prime}}=\rank \psi^{A^\prime }, \end{equation} and ${\left\{\Ket{l}^{A^{\prime\prime}}: l\in R_{\psi}\right\}}$ be the computational basis of $\mathcal{H}^{A^{\prime\prime}}$. Consider an isometry $U_\textup{split}$ from $\mathcal{H}^{A^\prime }$ to $\mathcal{H}^{A^{\prime\prime}}$ satisfying for each $l\in R_{\psi}$ \begin{equation} \Ket{l}^{A^{\prime\prime}}=U_\textup{split}\Ket{l}^{A^\prime }. 
\end{equation} By performing $U_\textup{split}$, $\psi^{A^\prime }$ is compressed into a state on $\mathcal{H}^{A^{\prime\prime}}$, that is, \begin{equation} \begin{split} \Ket{\psi'}^{RAA^{\prime\prime}}&\coloneq\mathbb{1}^{RA}\otimes U_\textup{split}\Ket{\psi}^{RAA^\prime }\\ &=\sum_{l\in R_{\psi}} \sqrt{\lambda_l^\psi}\Ket{l}^{RA}\otimes\Ket{l}^{A^{\prime\prime}}. \end{split} \end{equation} By performing $U_\textup{split}^\dag$, the given state $\Ket{\psi}$ can be recovered from the compressed state $\Ket{\psi'}$. The LOCC protocol achieving Equation~\eqref{eq:split_upper} is as follows. First, $A$ performs $U_\textup{split}$ to transform the given state $\Ket{\psi}^{RAA^\prime }$ into the compressed state $\Ket{\psi'}^{RAA^{\prime\prime}}$. Next, the reduced state ${\psi'}^{A^{\prime\prime}}$ is sent from $A$ to $B$ by quantum teleportation using the resource state satisfying Equation~\eqref{eq:split_upper}. After performing quantum teleportation, $B$ performs $U_\textup{split}^\dag$ on the system for the received state to recover $\Ket{\psi}^{RAB}$. \textit{Only if part}: The proof uses the LOCC monotonicity of the Schmidt rank. The Schmidt rank of $\Ket{\psi}^{RAA^\prime }\otimes\Ket{\Phi^+_K}^{\overline{A}\overline{B}}$ between the party $B$ and the other parties $R$ and $A$ is $K$. After performing an LOCC map $\id^R\otimes\mathcal{S}$, the Schmidt rank of $\Ket{\psi}^{RAB}$ between the party $B$ and the other parties $R$ and $A$ is $\rank\psi^{A^\prime }$. Since the Schmidt rank of pure states is monotonically nonincreasing under LOCC, it holds that \begin{equation} K\geqq\rank\psi^{A^\prime }. \end{equation} Therefore, it holds that \begin{equation} \log_2 K \geqq \log_2 \rank \psi^{A^\prime }. 
\end{equation} \end{proof} \chapter{\label{sec:merge}Quantum state merging for arbitrarily small-dimensional systems} This chapter investigates general bounds of the entanglement cost required for one-shot state merging defined in Chapter~\ref{sec:preliminaries_1} and illustrated in Figure~\ref{fig:merge}, aiming at achieving this task on the small and intermediate scales. Sections~\ref{sec:merge_achievability_exact} and~\ref{sec:achievability_approximate} construct protocols for exact state merging and approximate state merging, respectively, so that these protocols can be applied to arbitrarily small-dimensional systems. Sections~\ref{sec:converse} and~\ref{sec:converse_approximate} provide improved converse bounds of the entanglement cost in exact state merging and approximate state merging, respectively. Implications are discussed in Section~\ref{sec:examples}. \section{\label{sec:merge_achievability_exact}Achievability bound for exact state merging} In this section, protocols for exact state merging are constructed, which are applicable to any state of an arbitrarily small-dimensional system. To construct the protocols, the Koashi-Imoto decomposition introduced in Section~\ref{sec:decomposition} is used.
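Before turning to the Koashi-Imoto decomposition, the compression step used in the proof of Theorem~\ref{thm:split}, namely restricting $\psi^{A^\prime}$ to its support by an isometry before teleporting, can be sketched numerically (a hypothetical real-valued example, assuming NumPy):

```python
import numpy as np

# Hypothetical |psi>^{RAA'} with dim H^{A'} = 4 but rank(psi^{A'}) = 2;
# RA is treated as a single 2-dimensional system for simplicity.
psi = np.zeros((2, 4))
psi[0, 0] = np.sqrt(0.7)   # |0>^{RA} |0>^{A'}
psi[1, 3] = np.sqrt(0.3)   # |1>^{RA} |3>^{A'}

# SVD across the (RA | A') cut: the rank counts nonzero Schmidt coefficients.
_, s, Vh = np.linalg.svd(psi)
rank = int(np.sum(s > 1e-12))      # rank(psi^{A'}) = 2 < dim H^{A'} = 4

# U_split maps the support of psi^{A'} isometrically onto C^rank, so only
# log2(rank) ebits are needed to teleport the compressed part.
U_split = Vh[:rank, :]
compressed = psi @ U_split.T       # state on RA x A'' (real case)
recovered = compressed @ U_split   # applying U_split^dagger recovers |psi>

print(rank)                        # 2
print(np.allclose(recovered, psi)) # True
```

The same support-restriction idea reappears below, where the Koashi-Imoto decomposition identifies which parts of $\mathcal{H}^A$ actually have to be teleported.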
Given any state $\Ket{\psi}^{RAB}$, Lemma~\ref{lem:koashi_imoto_decomposition_tripartite} implies that there exists a unique decomposition of $\mathcal{H}^A$ and $\supp\left(\psi^B\right)$ \begin{align} \label{eq:notation_space} \mathcal{H}^A=\bigoplus_{j=0}^{J-1}\mathcal{H}^{a_j^\textup{L}}\otimes\mathcal{H}^{a_j^\textup{R}},\quad \supp\left(\psi^B\right)=\bigoplus_{j=0}^{J-1}\mathcal{H}^{b_j^\textup{L}}\otimes\mathcal{H}^{b_j^\textup{R}}, \end{align} such that $\Ket{\psi}^{RAB}$ is uniquely decomposed into \begin{equation} \label{eq:notation_state} \Ket{\psi}^{RAB}=\bigoplus_{j=0}^{J-1}\sqrt{p\left(j\right)}\Ket{\omega_j}^{a_j^\textup{L} b_j^\textup{L}}\otimes\Ket{\phi_j}^{R a_j^\textup{R} b_j^\textup{R}}, \end{equation} where $p\left(j\right)$ is a probability distribution. Using this Koashi-Imoto decomposition, a protocol for exact state merging can be constructed, as shown in the following theorem. \begin{theorem} \label{thm:merge} \textit{An achievability bound of entanglement cost in exact state merging applicable to arbitrarily small-dimensional systems.} Given any $\Ket{\psi}^{RAB}$ and any $\delta > 0$, there exists a protocol for exact state merging of $\Ket{\psi}^{RAB}$ achieving \begin{equation} \label{eq:merge_cost} \log_2 K-\log_2 L \leqq \max_{j}\left\{\log_2\left(\lambda_0^{a_j^\textup{L}}\dim\mathcal{H}^{a_j^\textup{R}}\right)\right\} + \delta, \end{equation} where $\lambda_0^{a_j^\textup{L}}$ is the largest eigenvalue of \begin{equation} \omega_j^{a_j^\textup{L}}=\tr_{b_j^\textup{L}}\Ket{\omega_j}\Bra{\omega_j}^{a_j^\textup{L} b_j^\textup{L}}, \end{equation} and the other notations are the same as those in Equations~\eqref{eq:notation_space} and~\eqref{eq:notation_state}. \end{theorem} As for the non-catalytic setting of exact state merging, entanglement cost $\log_2 K$ for the initially shared maximally entangled resource state can be reduced compared to $\log_2 K$ in the catalytic setting in Theorem~\ref{thm:merge}. 
Note that $\log_2 K$ in the non-catalytic setting may exceed the net entanglement cost $\log_2 K - \log_2 L$ in the catalytic setting in Theorem~\ref{thm:merge}. \begin{theorem} \label{thm:merge_without_catalyst} \textit{An achievability bound of entanglement cost in the non-catalytic setting of exact state merging applicable to arbitrarily small-dimensional systems.} Given any $\Ket{\psi}^{RAB}$, there exists a protocol for exact state merging of $\Ket{\psi}^{RAB}$ in the non-catalytic setting achieving \begin{equation} \label{eq:merge_without_catalyst_cost} \log_2 K \leqq \max_{j}\left\{\log_2\left\lceil\lambda_0^{a_j^\textup{L}}\dim\mathcal{H}^{a_j^\textup{R}}\right\rceil\right\}, \end{equation} where $\lceil{}\cdots{}\rceil$ is the ceiling function, and the other notations are the same as those in Theorem~\ref{thm:merge}. \end{theorem} In the following, the proofs of Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} are provided. \begin{proof}[Proof of Theorem~\ref{thm:merge}] The proof is by construction, and a protocol for exact state merging of $\Ket{\psi}^{RAB}$ achieving Inequality~\eqref{eq:merge_cost} is shown in the following. Define \begin{align} j_0&\coloneq\argmax_{j}\left\{\log_2\left(\lambda_0^{a_j^\textup{L}}\dim\mathcal{H}^{a_j^\textup{R}}\right)\right\},\\ D^{a_j^\textup{R}}&\coloneq\dim\mathcal{H}^{a_j^\textup{R}}\quad \text{for each $j\in\left\{0,\ldots,J-1\right\}$.} \end{align} The protocol uses the following tensor-product form of the Koashi-Imoto decomposition of $\Ket{\psi}^{RAB}$, which is equivalent to that shown in Lemma~\ref{lem:koashi_imoto_decomposition_tripartite} as well as Equations~\eqref{eq:notation_space} and~\eqref{eq:notation_state}.
Given the Koashi-Imoto decomposition of $\Ket{\psi}^{RAB}$ in the form of Equation~\eqref{eq:notation_state}, this decomposition can also be written using auxiliary systems $\mathcal{H}^{a_0}$ and $\mathcal{H}^{b_0}$ as \begin{equation} \label{eq:ki_tripartite_isometry} \begin{split} \left(\mathbb{1}^R\otimes U^A \otimes U^B\right) \Ket{\psi}^{RAB} =\sum_{j=0}^{J-1}\sqrt{p\left(j\right)}\Ket{j}^{a_0}\otimes\Ket{j}^{b_0}\otimes\Ket{\omega_j}^{a^L b^L}\otimes\Ket{\phi_j}^{R a^R b^R}, \end{split} \end{equation} where $\mathcal{H}^{a_0}$, $\mathcal{H}^{b_0}$, $\mathcal{H}^{a^L}$, $\mathcal{H}^{b^L}$, $\mathcal{H}^{a^R}$, and $\mathcal{H}^{b^R}$ satisfy \begin{align} \dim\mathcal{H}^{a_0}&=J,\\ \dim\mathcal{H}^{b_0}&=J,\\ \dim\mathcal{H}^{a^L}&=\max_j\left\{\dim\mathcal{H}^{a_j^L}\right\},\\ \dim\mathcal{H}^{b^L}&=\max_j\left\{\dim\mathcal{H}^{b_j^L}\right\},\\ \dim\mathcal{H}^{a^R}&=\max_j\left\{\dim\mathcal{H}^{a_j^R}\right\},\\ \dim\mathcal{H}^{b^R}&=\max_j\left\{\dim\mathcal{H}^{b_j^R}\right\}, \end{align} $U^A$ is an isometry from $\mathcal{H}^A$ to $\mathcal{H}^{a_0}\otimes\mathcal{H}^{a^L}\otimes\mathcal{H}^{a^R}$, $U^B$ is an isometry from $\mathcal{H}^B$ to $\mathcal{H}^{b_0}\otimes\mathcal{H}^{b^L}\otimes\mathcal{H}^{b^R}$, and ${\{\Ket{j}^{a_0}:j=0,\ldots,J-1\}}$ and ${\{\Ket{j}^{b_0}:j=0,\ldots,J-1\}}$ are the computational basis of $\mathcal{H}^{a_0}$ and $\mathcal{H}^{b_0}$, respectively. As stressed in Reference~\cite{K3}, information on $\psi^A$ is encoded in three parts of the Koashi-Imoto decomposition in Equation~\eqref{eq:ki_tripartite_isometry}, namely, $\mathcal{H}^{a_0}$, $\mathcal{H}^{a^\textup{R}}$, and $\mathcal{H}^{a^\textup{L}}$, which can be regarded as the classical part, the quantum part, and the redundant part, respectively. 
In the rest of the proof, the following three subprocesses are presented: \begin{enumerate} \item Entanglement distillation from the \textit{redundant} part; \item Quantum teleportation of the \textit{quantum} part; \item Coherent merging of the \textit{classical} part by a measurement. \end{enumerate} Then, these three subprocesses are combined using controlled measurements and controlled isometries, which are controlled by computational-basis states of $\mathcal{H}^{a_0}$ and $\mathcal{H}^{b_0}$. \textit{Subprocess~1: Entanglement distillation from the redundant part.} Since $\log_2$ is continuous and the rationals are dense in $\mathbb{R}$, there exists a rational number $\tilde{\lambda}_0^{a_{j_0}^\textup{L}}\in\mathbb{Q}$ such that \begin{equation} \log_2\left(\lambda_{0}^{a_{j_0}^\textup{L}}D^{a^\textup{R}_{j_0}}\right) \leqq\log_2\left(\tilde{\lambda}_{0}^{a_{j_0}^\textup{L}}D^{a^\textup{R}_{j_0}}\right) \leqq\log_2\left(\lambda_{0}^{a_{j_0}^\textup{L}}D^{a^\textup{R}_{j_0}}\right)+\delta. \end{equation} Thus, for any $j\in\left\{0,\ldots,J-1\right\}$, it holds that \begin{equation} \lambda_{0}^{a_{j}^\textup{L}}D^{a^\textup{R}_j} \leqq\lambda_{0}^{a_{j_0}^\textup{L}}D^{a^\textup{R}_{j_0}} \leqq\tilde{\lambda}_{0}^{a_{j_0}^\textup{L}}D^{a^\textup{R}_{j_0}}. \end{equation} Hence, it follows that \begin{equation} \lambda_0^{a_{j}^\textup{L}}\leqq\frac{D^{a^\textup{R}_{j_0}}}{D^{a^\textup{R}_j}}{\tilde{\lambda}_0^{a_{j_0}^\textup{L}}}, \end{equation} and since $\tilde{\lambda}_0^{a_{j_0}^\textup{L}}\in\mathbb{Q}$, there exist integers $K_j$ and $L_j$ such that the right-hand side of the above inequality is written as \begin{equation} \frac{D^{a^\textup{R}_{j_0}}}{D^{a^\textup{R}_j}}{\tilde{\lambda}_0^{a_{j_0}^\textup{L}}}=\frac{K_j}{L_j}. \end{equation} Therefore, it holds that \begin{equation} \frac{\lambda_0^{a_{j}^\textup{L}}}{K_j}\leqq \frac{1}{L_j}.
\end{equation} For each $j\in\left\{0,\ldots,J-1\right\}$, the majorization condition for LOCC convertibility between bipartite pure states in Lemma~\ref{lem:pure_convertibility} guarantees that there exists an LOCC map represented by a family of operators \begin{equation} {\left\{M_{j,m_1}\otimes U_{j,m_1}\right\}}_{m_1} \end{equation} achieving for each $m_1\,$, \begin{equation} \left(M_{j,m_1}\otimes U_{j,m_1}\right)\left(\Ket{\omega_j}^{a^\textup{L} b^\textup{L}}\otimes\Ket{\Phi^+_{K_j}}^{\overline{A}\overline{B}}\right) =\Ket{\Phi^+_{L_j}}^{\overline{A}\overline{B}}, \end{equation} where ${\left\{M_{j,m_1}\right\}}_{m_1}$ represents $A$'s measurement from $\mathcal{H}^{a^\textup{L}}\otimes\mathcal{H}^{\overline{A}}$ to $\mathcal{H}^{\overline{A}}$ with outcome $m_1$ satisfying the completeness \begin{equation} \sum_{m_1} M_{j,m_1}^\dag M_{j,m_1}=\mathbb{1}, \end{equation} and $U_{j,m_1}$ represents $B$'s isometry from $\mathcal{H}^{b^\textup{L}}\otimes\mathcal{H}^{\overline{B}}$ to $\mathcal{H}^{\overline{B}}$ conditioned by $m_1$. Regarding an explicit form of ${\left\{M_{j,m_1}\otimes U_{j,m_1}\right\}}_{m_1}\,$, refer to References~\cite{N2,T3}. \textit{Subprocess~2: Quantum teleportation of the quantum part.} While quantum teleportation for sending the full reduced state $\phi_j^{a^R}\coloneqq\tr_{R b^R}\Ket{\phi_j}\Bra{\phi_j}^{R a^R b^R}$ requires a maximally entangled resource state with Schmidt rank \begin{equation} \dim\mathcal{H}^{a^R}=\max_j\dim\mathcal{H}^{a_j^R}, \end{equation} a compression method is adopted here instead of just performing quantum teleportation of $\phi_j^{a^R}$, so that each $\phi_j^{a^R}$ is transferred from $A$ to $B$ using a maximally entangled resource state with Schmidt rank $\dim\mathcal{H}^{a_j^R}$, which is smaller than or equal to $\dim\mathcal{H}^{a^R}$. 
Consider $A$'s auxiliary system $\bigotimes_{j=0}^{J-1}\mathcal{H}^{{\left(a^\prime\right)}_j^\textup{R}}$, where $\dim\mathcal{H}^{{\left(a^\prime\right)}_j^\textup{R}}=D^{a_j^\textup{R}}$ for each $j$. The state $\Ket{\phi_j}^{R a^\textup{R} b^\textup{R}}$ can be compressed into \begin{equation} \Ket{\phi_j}^{R{\left(a^\prime\right)}_j^\textup{R} b^\textup{R}}=U'_j\Ket{\phi_j}^{R a^\textup{R} b^\textup{R}}, \end{equation} where $U'_j$ is an isometry from $\mathcal{H}^{a^\textup{R}}$ to $\mathcal{H}^{{\left(a^\prime\right)}_j^\textup{R}}$, and $\Ket{\phi_j}^{R{\left(a^\prime\right)}_j^\textup{R} b^\textup{R}}$ represents the same state as $\Ket{\phi_j}^{R a^\textup{R} b^\textup{R}}$. Quantum teleportation~\cite{B5} to send states of $\mathcal{H}^{{\left(a^\prime\right)}_j^\textup{R}}$ consists of $A$'s projective measurement in the maximally entangled basis ${\left\{\Ket{\Phi_{j,m_2}}\right\}}_{m_2}$ on $\mathcal{H}^{{\left(a^\prime\right)}_j^\textup{R}}\otimes\mathcal{H}^{\overline{A}}$ with measurement outcome $m_2$ and $B$'s generalized Pauli correction $\sigma_{j,m_2}$ from $\mathcal{H}^{\overline{B}}$ to $\mathcal{H}^{{(b')}^\textup{R}}$ conditioned by $m_2\,$, where $\mathcal{H}^{{(b')}^\textup{R}}$ is $B$'s auxiliary system corresponding to $\mathcal{H}^{a^\textup{R}}$. 
The map for quantum teleportation is represented by \begin{equation} {\left\{\Bra{\Phi_{j,m_2}}\otimes\sigma_{j,m_2}\right\}}_{m_2}\,, \end{equation} which traces out the post-measurement state of $A$ and achieves for each $m_2\,$, \begin{equation} \begin{split} &\left(\Bra{\Phi_{j,m_2}}\otimes\sigma_{j,m_2}\right)\left(\Ket{\phi_j}^{R {\left(a^\prime\right)}_j^\textup{R} b^\textup{R}}\otimes\Ket{\Phi_{D^{a_j^\textup{R}}}^+}^{\overline{A}\overline{B}}\right)\\ &=\left[\left(\Bra{\Phi_{j,m_2}}U'_j\right)\otimes\sigma_{j,m_2}\right]\left(\Ket{\phi_j}^{R a^\textup{R} b^\textup{R}}\otimes\Ket{\Phi_{D^{a_j^\textup{R}}}^+}^{\overline{A}\overline{B}}\right)\\ &=\Ket{\phi_j}^{R {(b')}^\textup{R} b^\textup{R}}. \end{split} \end{equation} \textit{Subprocess~3: Coherent merging of the classical part by a measurement.} As for the classical part $\mathcal{H}^{a_0}$, $A$ performs a measurement to merge the classical part. This measurement should be performed without breaking coherence between $R$ and $B$. This contrasts with the protocol proposed in Reference~\cite{K4} for transferring a state drawn from a given ensemble, in which a projective measurement onto each of the subspaces of the Koashi-Imoto decomposition indexed by $j$ in Lemma~\ref{lem:koashi_imoto_decomposition_set} destroys superposition of states among different subspaces. For coherent merging of the classical part, $A$ performs a projective measurement on $\mathcal{H}^{a_0}$ with outcome $m_3$ in the Fourier basis ${\left\{\Ket{m_3}\right\}}_{m_3}$ defined in terms of the computational basis ${\left\{\Ket{j}^{a_0}\right\}}_j\,$, that is, for each $m_3\,$, \begin{equation} \Ket{m_3}^{a_0}\coloneq \sum_{j=0}^{J-1}\exp\left(\frac{\textup{i}{\pi}jm_3}{J}\right)\Ket{j}^{a_0}. 
\end{equation} After sending the measurement outcome $m_3$ by classical communication from $A$ to $B$, the originally given state of $\mathcal{H}^{a_0}\otimes\mathcal{H}^{a^\textup{L}}\otimes\mathcal{H}^{b^\textup{L}}$ can be recovered from $B$'s classical part $\mathcal{H}^{b_0}$ of the post-measurement state by $B$'s local isometry conditioned by $m_3$ \begin{equation} \label{eq:subprocess_3} \sum_{j=0}^{J-1}\exp\left(\frac{\textup{i}{\pi}jm_3}{J}\right)\Ket{j}^{{(b')}_0}\otimes\Ket{j}\Bra{j}^{b_0}\otimes\Ket{\omega_j}^{{(b')}^L b^L}, \end{equation} where $\mathcal{H}^{{(b')}_0}\otimes\mathcal{H}^{{(b')}^L}$ is $B$'s auxiliary system corresponding to $\mathcal{H}^{a_0}\otimes\mathcal{H}^{a^L}$. Subprocesses~1--3 are combined using controlled measurements and controlled isometries. Regarding $A$'s measurement, the measurements used in Subprocesses~1 and~2 are performed by extending each measurement to a measurement controlled coherently by the computational-basis state $\Ket{j}^{a_0}$. Regarding Subprocess~1 for the redundant part, the controlled version of the measurement is given by \begin{equation} \sum_{j=0}^{J-1}\Ket{j}\Bra{j}^{a_0}\otimes M_{j,m_1}\,, \end{equation} and regarding Subprocess~2 for the quantum part, given by \begin{equation} \sum_{j=0}^{J-1}\Ket{j}\Bra{j}^{a_0}\otimes\left(\Bra{\Phi_{j,m_2}}U'_j\right). \end{equation} The measurement in Subprocess~3 for the classical part is also represented in terms of the computational basis as \begin{equation} \sum_{j=0}^{J-1}\Bra{m_3}^{a_0}\left(\Ket{j}\Bra{j}^{a_0}\right)=\sum_{j=0}^{J-1}\exp\left(\frac{-\textup{i}{\pi}jm_3}{J}\right)\Bra{j}^{a_0}. \end{equation} Combining these three together, $A$'s measurement ${\left\{M_{m_1\,,m_2\,,m_3}\right\}}_{m_1\,,m_2\,,m_3}$ is given by \begin{equation} M_{m_1\,,m_2\,,m_3} =\sum_{j=0}^{J-1}\left[\exp\left(\frac{-\textup{i}{\pi}jm_3}{J}\right)\Bra{j}^{a_0}\right]\otimes\left[\Bra{\Phi_{j,m_2}}U'_j M_{j,m_1}\right].
\end{equation} The completeness of this measurement follows from \begin{equation} \begin{split} &\sum_{m_1\,,m_2\,,m_3}M_{m_1\,,m_2\,,m_3}^\dag M_{m_1\,,m_2\,,m_3}\\ &=\sum_{m_1\,,m_2\,,m_3}\sum_{j,j'}\left[\exp\left(\frac{\textup{i}{\pi}m_3(j'-j)}{J}\right)\Ket{j'}\Bra{j}\right] \otimes\left[M_{j,m_1}^\dag {U'_{j'}}^\dag\Ket{\Phi_{j',m_2}}\Bra{\Phi_{j,m_2}}U'_j M_{j,m_1}\right]\\ &=\sum_{j}\Ket{j}\Bra{j} \otimes\left[\sum_{m_1\,,m_2}M_{j,m_1}^\dag{U'_j}^\dag\Ket{\Phi_{j,m_2}}\Bra{\Phi_{j,m_2}}U'_j M_{j,m_1}\right]\\ &=\mathbb{1}, \end{split} \end{equation} where $\mathbb{1}$ is the identity operator on $\mathcal{H}^{a_0}\otimes\mathcal{H}^{a^\textup{L}}\otimes\mathcal{H}^{a^\textup{R}}\otimes\mathcal{H}^{\overline{A}}$. As for $B$'s isometry, the isometries in Subprocesses~1 and~2 are also controlled coherently by the computational-basis state $\Ket{j}^{b_0}$. Regarding Subprocess~1 for the redundant part, the controlled version of the isometry is given by \begin{equation} \sum_{j=0}^{J-1}\Ket{j}\Bra{j}^{b_0}\otimes U_{j,m_1}\,, \end{equation} and regarding Subprocess~2 for the quantum part, given by \begin{equation} \sum_{j=0}^{J-1}\Ket{j}\Bra{j}^{b_0}\otimes\sigma_{j,m_2}. \end{equation} The isometry in Subprocess~3 is given by Equation~\eqref{eq:subprocess_3}. Combining these three together, $B$'s isometry $U_{m_1\,,m_2\,,m_3}$ is given by \begin{equation} \begin{split} U_{m_1\,,m_2\,,m_3} =\sum_{j=0}^{J-1}\exp\left(\frac{\textup{i}{\pi}jm_3}{J}\right)\Ket{j}^{{(b')}_0}\otimes\Ket{j}\Bra{j}^{b_0}\otimes\Ket{\omega_j}^{{(b')}^L b^\textup{L}} \otimes\sigma_{j,m_2} U_{j,m_1}. 
\end{split} \end{equation} Consequently, for any combination $(m_1\,,m_2\,,m_3)$, the LOCC map represented by a family of operators ${\left\{M_{m_1\,,m_2\,,m_3}\otimes U_{m_1\,,m_2\,,m_3}\right\}}_{m_1\,,m_2\,,m_3}$ achieves state merging of $\Ket{\psi}^{RAB}$ \begin{equation} \begin{split} &\left(M_{m_1\,,m_2\,,m_3}\otimes U_{m_1\,,m_2\,,m_3}\right)\\ &\quad\left(\Ket{j}^{a_0}\otimes\Ket{j}^{b_0}\otimes\Ket{\omega_j}^{a^\textup{L} b^\textup{L}}\otimes\Ket{\phi_j}^{R a^\textup{R} b^\textup{R}} \otimes\Ket{\Phi_{D^{a_j^\textup{R}}}^+}^{\overline{A}\overline{B}}\otimes\Ket{\Phi_{K_j}^+}^{\overline{A}\overline{B}}\right)\\ &=\Ket{j}^{{(b')}_0}\otimes\Ket{j}^{b_0}\otimes\Ket{\omega_j}^{{(b')}^L b^\textup{L}}\otimes\Ket{\phi_j}^{R {(b')}^\textup{R} b^\textup{R}} \otimes\Ket{\Phi_{L_j}^+}^{\overline{A}\overline{B}}. \end{split} \end{equation} Choose $K$ as the least common multiple of the integers $\left\{D^{a_0^R}K_0\,,\ldots,D^{a_{J-1}^R}K_{J-1}\right\}$, and this state transformation yields \begin{equation} \begin{split} &\left(M_{m_1\,,m_2\,,m_3}\otimes U_{m_1\,,m_2\,,m_3}\right)\\ &\quad\left(\Ket{j}^{a_0}\otimes\Ket{j}^{b_0}\otimes\Ket{\omega_j}^{a^L b^L}\otimes\Ket{\phi_j}^{R a^R b^R}\otimes\Ket{\Phi_{K}^+}^{\overline{A}\overline{B}}\right)\\ &=\Ket{j}^{{(b')}_0}\otimes\Ket{j}^{b_0}\otimes\Ket{\omega_j}^{{(b')}^L b^L}\otimes\Ket{\phi_j}^{R {(b')}^R b^R}\otimes\Ket{\Phi_{L}^+}^{\overline{A}\overline{B}}, \end{split} \end{equation} where $L$ is an integer defined as \begin{equation} L\coloneq\frac{K}{\tilde{\lambda}_0^{a_{j_0}^L}D^{a_{j_0}^R}}=\frac{K}{D^{a_j^R} K_j}L_j\,,\quad \forall j\in\left\{0,\ldots,J-1\right\}. 
\end{equation} Then, an LOCC map represented as \begin{equation} \label{eq:merge} {\left\{\left[ M_{m_1\,,m_2\,,m_3}U^A\right] \otimes\left[ \left({\left(U^{B'}\right)}^\dag\otimes {\left(U^B\right)}^\dag\right) U_{m_1\,,m_2\,,m_3}U^B\right]\right\}}_{m_1\,,m_2\,,m_3} \end{equation} achieves for each $(m_1\,,m_2\,,m_3)$, \begin{equation} \begin{split} &\left(\left[ M_{m_1\,,m_2\,,m_3}U^A\right]\otimes\left[ \left({\left(U^{B'}\right)}^\dag\otimes {\left(U^B\right)}^\dag\right) U_{m_1\,,m_2\,,m_3}U^B\right]\right)\\ &\quad\left(\Ket{\psi}^{RAB}\otimes\Ket{\Phi_{K}^+}^{\overline{A}\overline{B}}\right)\\ &=\Ket{\psi}^{RB'B}\otimes\Ket{\Phi_{L}^+}^{\overline{A}\overline{B}}, \end{split} \end{equation} where $U^A$ and $U^B$ are those in Equation~\eqref{eq:ki_tripartite_isometry}, and ${\left(U^{B'}\right)}^\dag$ from $\mathcal{H}^{{(b')}_0}\otimes\mathcal{H}^{{(b')}^\textup{L}}\otimes\mathcal{H}^{{(b')}^\textup{R}}$ to $\mathcal{H}^{B'}=\bigoplus_{j=0}^{J-1}\mathcal{H}^{{(b')}_j^\textup{L}}\otimes\mathcal{H}^{{(b')}_j^\textup{R}}$ acts in the same way as ${\left(U^A\right)}^\dag$. The protocol represented by the LOCC map shown in Equation~\eqref{eq:merge} achieves the condition on entanglement cost given in Inequality~\eqref{eq:merge_cost}, as shown in the following. For each $j$, entanglement cost amounts to \begin{equation} \begin{split} &\log_2 D^{a_j^\textup{R}} + \log_2 K_j - \log_2 L_j\\ &= \log_2 \left(\frac{K_j}{L_j} D^{a_j^\textup{R}}\right)\\ &= \log_2 \left(\tilde{\lambda}_0^{a_{j_0}^\textup{L}}D^{a_{j_0}^\textup{R}}\right), \end{split} \end{equation} which is independent of $j$.
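The $j$-independence of the cost and the final bookkeeping above can be replayed numerically. Below is a minimal Python sketch with toy values of $D^{a_j^\textup{R}}$, $K_j$, and the product $\tilde{\lambda}_0^{a_{j_0}^\textup{L}}D^{a_{j_0}^\textup{R}}$ (hypothetical numbers, chosen only so that each $L_j$ is an integer; they are not taken from any concrete state).

```python
from fractions import Fraction
from math import isclose, lcm, log2

cost = Fraction(3)           # toy value of the product fixing the per-sector cost
sectors = [(3, 1), (2, 3)]   # hypothetical pairs (D^{a_j^R}, K_j) per sector j
L_js = [Fraction(D * K_j) / cost for D, K_j in sectors]
assert all(L.denominator == 1 for L in L_js)      # each L_j is an integer

K = lcm(*(D * K_j for D, K_j in sectors))         # least common multiple of D^{a_j^R} K_j
Ls = {(K // (D * K_j)) * int(L) for (D, K_j), L in zip(sectors, L_js)}
assert len(Ls) == 1                               # L is independent of j
L = Ls.pop()
# net entanglement cost log2(K) - log2(L) equals log2 of the fixed product
assert isclose(log2(K) - log2(L), log2(float(cost)))
```

With these toy numbers, $K=6$ and $L=2$, so every sector incurs the same net cost of $\log_2 3$ ebits.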
Thus, entanglement cost of the whole protocol is given by \begin{equation} \begin{split} &\log_2 K-\log_2 L\\ &=\log_2 \left(\tilde{\lambda}_0^{a_{j_0}^\textup{L}}D^{a_{j_0}^\textup{R}}\right)\\ &\leqq \log_2 \left(\lambda_0^{a_{j_0}^\textup{L}}D^{a^\textup{R}_{j_0}}\right)+\delta\\ &=\max_j\left\{\log_2 \left(\lambda_0^{a_{j_0}^\textup{L}}\dim\mathcal{H}^{a^\textup{R}_{j_0}}\right)\right\}+\delta, \end{split} \end{equation} which yields the conclusion. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:merge_without_catalyst}] The proof is by construction, and a protocol for exact state merging of $\Ket{\psi}^{RAB}$ in the non-catalytic setting achieving the equality in~\eqref{eq:merge_without_catalyst_cost} is shown in the following. Define for each $j\in\{0,\ldots,J-1\}$, \begin{equation} D^{a_j^\textup{R}}\coloneq\dim\mathcal{H}^{a_j^\textup{R}}. \end{equation} The core idea of the protocol is similar to that in Theorem~\ref{thm:merge} using the Koashi-Imoto decomposition in the form of Equation~\eqref{eq:ki_tripartite_isometry}. The rest of the proof is given in the same way as the proof of Theorem~\ref{thm:merge}, where Subprocess~2 and Subprocess~3 are the same as those in Theorem~\ref{thm:merge}, and Subprocess~1 is modified as follows, since the resource state is not used catalytically in the entanglement distillation from the redundant part in Subprocess~1. \textit{Subprocess~1:} For each $j\in\{0,\ldots,J-1\}$, it holds that \begin{equation} \lambda_{0}^{a_{j}^\textup{L}}D^{a^\textup{R}_j} \leqq\left\lceil\lambda_{0}^{a_j^\textup{L}}D^{a_j^\textup{R}}\right\rceil \leqq\max_j\left\{ \left\lceil \lambda_0^{a_j^\textup{L}}D^{a^\textup{R}_j}\right\rceil\right\}. \end{equation} Then, given the resource state $\Ket{\Phi_K^+}$, where \begin{equation} K=\max_j\left\{\left\lceil \lambda_0^{a_j^\textup{L}}D^{a_j^\textup{R}}\right\rceil\right\}, \end{equation} it holds that \begin{equation} \frac{\lambda_{0}^{a_{j}^\textup{L}}}{K}\leqq\frac{1}{D^{a_j^\textup{R}}}. 
\end{equation} For each $j\in\left\{0,\ldots,J-1\right\}$, the majorization condition for LOCC convertibility between bipartite pure states in Lemma~\ref{lem:pure_convertibility} guarantees that there exists an LOCC map represented by a family of operators ${\left\{M_{j,m_1}\otimes U_{j,m_1}\right\}}_{m_1}$ achieving, for each $m_1\,$, \begin{equation} \left(M_{j,m_1}\otimes U_{j,m_1}\right)\left(\Ket{\omega_j}^{a^\textup{L} b^\textup{L}}\otimes\Ket{\Phi^+_{K}}^{\overline{A}\overline{B}}\right) =\Ket{\Phi^+_{D^{a_j^\textup{R}}}}^{\overline{A}\overline{B}}, \end{equation} where ${\left\{M_{j,m_1}\right\}}_{m_1}$ represents $A$'s measurement from $\mathcal{H}^{a^\textup{L}}\otimes\mathcal{H}^{\overline{A}}$ to $\mathcal{H}^{\overline{A}}$ with outcome $m_1$ satisfying the completeness \begin{equation} \sum_{m_1} M_{j,m_1}^\dag M_{j,m_1}=\mathbb{1}, \end{equation} and $U_{j,m_1}$ represents $B$'s isometry from $\mathcal{H}^{b^\textup{L}}\otimes\mathcal{H}^{\overline{B}}$ to $\mathcal{H}^{\overline{B}}$ conditioned by $m_1$. In the same way as in Theorem~\ref{thm:merge}, $A$'s combined measurement \begin{equation} {\left\{\Bra{m_1\,,m_2\,,m_3}\right\}}_{m_1\,,m_2\,,m_3}\,, \end{equation} where the post-measurement state is traced out, is given by \begin{equation} \Bra{m_1\,,m_2\,,m_3} =\frac{1}{\sqrt{J}}\sum_{j=0}^{J-1}\left[\exp\left(\frac{-2\pi\textup{i}jm_3}{J}\right)\Bra{j}^{a_0}\right]\otimes\left[\Bra{\Phi_{j,m_2}}U'_j M_{j,m_1}\right]. \end{equation} Also, $B$'s combined isometry $U_{m_1\,,m_2\,,m_3}$ is given by \begin{equation} U_{m_1\,,m_2\,,m_3} =\sum_{j=0}^{J-1}\exp\left(\frac{2\pi\textup{i}jm_3}{J}\right)\Ket{j}^{{(b')}_0}\otimes\Ket{j}\Bra{j}^{b_0}\otimes\Ket{\omega_j}^{{(b')}^\textup{L} b^\textup{L}} \otimes\sigma_{j,m_2} U_{j,m_1}.
\end{equation} Consequently, the LOCC map represented by \begin{equation} \label{eq:merge_without_catalyst} {\left\{\left[\Bra{m_1\,,m_2\,,m_3}U^A\right]\otimes \left[\left({\left(U^{B'}\right)}^\dag\otimes{\left(U^B\right)}^\dag\right) U_{m_1\,,m_2\,,m_3}U^B\right]\right\}}_{m_1\,,m_2\,,m_3}\,, \end{equation} achieves for any combination $(m_1\,,m_2\,,m_3)$, \begin{equation} \begin{split} &\left(\left[\Bra{m_1\,,m_2\,,m_3}U^A\right]\otimes \left[\left({\left(U^{B'}\right)}^\dag\otimes{\left(U^B\right)}^\dag\right) U_{m_1\,,m_2\,,m_3}U^B\right]\right)\\ &\quad\left(\Ket{\psi}^{RAB}\otimes\Ket{\Phi_{K}^+}^{\overline{A}\overline{B}}\right)\\ &=\Ket{\psi}^{RB'B}, \end{split} \end{equation} where $U^A$, $U^B$, and $U^{B'}$ are the same as those in Equation~\eqref{eq:merge}. Entanglement cost of the protocol represented by the LOCC map shown in Equation~\eqref{eq:merge_without_catalyst} is given by \begin{equation} \begin{split} &\log_2 K\\ &= \max_j\left\{\log_2\left\lceil \lambda_0^{a_j^\textup{L}}D^{a_j^\textup{R}}\right\rceil\right\}\\ &=\max_j\left\{\log_2\left\lceil \lambda_0^{a_j^\textup{L}}\dim\mathcal{H}^{a_j^\textup{R}}\right\rceil\right\}, \end{split} \end{equation} which yields the conclusion. \end{proof} \begin{remark} \label{remark:merge} \textit{Comparison between exact state merging and splitting.} Entanglement cost in exact state merging is not larger than that in its inverse task, that is, exact state splitting summarized in Section~\ref{sec:split}.
For any $\Ket{\psi}^{RAB}$, it holds that \begin{align} &\max_{j}\left\{\log_2\left(\lambda_0^{a_j^\textup{L}}\dim\mathcal{H}^{a_j^\textup{R}}\right)\right\}\leqq\log_2\rank\psi^{A},\\ &\max_{j}\left\{\log_2\left\lceil\lambda_0^{a_j^\textup{L}}\dim\mathcal{H}^{a_j^\textup{R}}\right\rceil\right\}\leqq\log_2\rank\psi^{A}, \end{align} where the left-hand sides are the optimal entanglement cost in exact state merging shown in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst}, the right-hand sides are the optimal entanglement cost in exact state splitting shown in Theorem~\ref{thm:split}, $\mathcal{H}^{a_j^\textup{R}}$ is a Hilbert space defined according to the Koashi-Imoto decomposition of $\Ket{\psi}^{RAB}$ in Equation~\eqref{eq:notation_space}, and $\lambda_0^{a_j^\textup{L}}$ is the largest eigenvalue of a reduced state \begin{equation} \omega_j^{a_j^\textup{L}}=\tr_{b_j^\textup{L}}\Ket{\omega_j}\Bra{\omega_j}^{a_j^\textup{L} b_j^\textup{L}} \end{equation} of $\Ket{\omega_j}^{a_j^\textup{L} b_j^\textup{L}}$ also defined according to the Koashi-Imoto decomposition of $\Ket{\psi}^{RAB}$ in Equation~\eqref{eq:notation_state}. These inequalities can be derived from \begin{equation} \dim\mathcal{H}^{a_j^\textup{R}}\leqq\rank\psi^{A} \end{equation} and \begin{equation} \lambda_0^{a_j^\textup{L}}\leqq 1, \end{equation} where the former inequality holds by construction of the Koashi-Imoto decomposition, and the latter follows from the normalization of $\omega_j^{a_j^\textup{L}}$. Moreover, as will be shown in Implication~\ref{ex:1} in Section~\ref{sec:examples}, entanglement cost in exact state merging can be strictly smaller than that in splitting.
\end{remark} \begin{remark} \label{remark:usefulness} \textit{Usefulness of the protocols for exact state merging on small and intermediate scales.} The obtained protocols for exact state merging outperform the existing protocols for approximate state merging~\cite{B9,Y9,B12,D7,D6,H10,B10,D5,M,N3,A4,A5,B15,B13,A16,A17} in terms of entanglement cost, as discussed in the following. While some of the existing protocols are fully quantum protocols achieved by local operations and quantum communication assisted by shared entanglement, replacing the quantum communication in a fully quantum protocol with quantum teleportation yields an entanglement-assisted LOCC protocol corresponding to the fully quantum protocol. Using this replacement, the entanglement cost in state merging of $\Ket{\psi}^{RAB}$, which is denoted here by $E_\textup{merge}(\psi)$, is compared in the LOCC framework. The protocols in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} for exact state merging of $\Ket{\psi}^{RAB}$ require at most as much entanglement cost as that required for quantum teleportation of $\psi^A$, and when the system size for $\psi^A$ is small, these protocols cost less than the existing protocols for approximate state merging in one-shot scenarios. Regarding the existing protocols, the achievability bounds of $E_\textup{merge}(\psi)$ of the corresponding entanglement-assisted LOCC protocols can be calculated from the analyses in References~\cite{B9,B12,D7,D6,H10,B10,N3}. Given $\epsilon > 0$, these achievability bounds are of the form \begin{equation} E_\textup{merge}(\psi)=\cdots+O\left(\log\frac{1}{\epsilon}\right) \quad\textup{as } \epsilon\to 0, \end{equation} which diverge to infinity as higher fidelity is pursued.
For example, from Theorem~4 in Reference~\cite{B10}, the achievability bound of $E_\textup{merge}(\psi)$ of one-shot state merging of $\Ket{\psi}^{RAB}$ within an error $\epsilon>0$ is given by \begin{equation} {H_{\max}^{\epsilon_1}\left(A|B\right)}_{\psi}+2\log_2 \frac{1}{\epsilon_4}+3, \end{equation} where $\epsilon=8\epsilon_1+\sqrt{3\epsilon_4}$, and the first term is represented by the smooth conditional max-entropy~\cite{R2,T5,T11} summarized in Appendix~\ref{sec:one_shot_entropies}. To achieve $\epsilon=0.02$, the second and third terms amount to \begin{equation} 2\log_2 \frac{1}{\epsilon_4}+3>28.7. \end{equation} Note that $\epsilon=0.02$ guarantees that, in the task of discriminating $\Ket{\psi}$ from the final state, the optimal success probability satisfies \begin{equation} P_\textup{succ}=\frac{1}{2}+\frac{1}{4}{\left\|\psi-\psi_\textup{final}\right\|}_1\leqq 51\%, \end{equation} which is obtained from the Fuchs-van de Graaf inequalities \begin{equation} \frac{1}{4}{\left\|\psi-\psi_\textup{final}\right\|}_1\leqq \frac{1}{2}\sqrt{1-F^2}. \end{equation} Thus, given $\Ket{\psi}^{RAB}$ where \begin{equation} \dim\mathcal{H}^A\leqq 2^{28}, \end{equation} even if \begin{equation} {H_{\max}^{\epsilon_1}\left(A|B\right)}_{\psi}=0, \end{equation} the approximate protocols require more entanglement cost than the protocols in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst}, and even more than quantum teleportation. Also, as will be discussed in Implication~\ref{ex:1} in Section~\ref{sec:examples}, useful states for distributed quantum information processing, including the Greenberger-Horne-Zeilinger (GHZ) states and multipartite code states for quantum error correcting codes~\cite{G,D,T10,B}, have \textit{nontrivial} Koashi-Imoto decompositions, that is, $J\neq 1$, when these states are regarded as tripartite states.
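The arithmetic behind the quoted $28.7$-ebit overhead can be reproduced directly; the sketch below takes $\epsilon_1=0$, which gives the largest admissible $\epsilon_4$ for the error budget $\epsilon=8\epsilon_1+\sqrt{3\epsilon_4}$ and hence the smallest possible overhead.

```python
import math

eps = 0.02                                # target error in one-shot state merging
eps_4 = eps ** 2 / 3                      # largest eps_4 compatible with eps_1 = 0
overhead = 2 * math.log2(1 / eps_4) + 3   # second and third terms of the bound
assert overhead > 28.7                    # ebits, matching the estimate in the text
```

Even in this most favorable split of the error budget, the additive overhead exceeds $28.7$ ebits.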
In this regard, the protocols for exact state merging are already sufficient for reducing entanglement cost compared to quantum teleportation in these cases relevant to distributed quantum information processing. \end{remark} \section{\label{sec:achievability_approximate}Achievability bound for approximate state merging} The protocols for exact state merging presented in the previous section are extended to their approximate versions, using smoothing~\cite{R2,T5,T11}. In the following, the catalytic setting in Theorem~\ref{thm:merge} is considered, while extension of the protocol in the non-catalytic setting in Theorem~\ref{thm:merge_without_catalyst} is also possible in the same way. Note that while allowing small error in smoothing may provide better bounds, the bounds obtained by smoothing usually include optimization over a ball of close states, and exact state merging already suffices for useful examples including those relevant to distributed quantum information processing, as discussed in Remark~\ref{remark:usefulness}. Given any pure state $\Ket{\psi}^{RAB}$ and an error $\epsilon\geqq 0$, an achievability bound of entanglement cost in approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$ is obtained as follows. Consider the Koashi-Imoto decomposition of any pure state $\Ket{\tilde\psi}^{RAB}$ satisfying \begin{equation} {F^2\left(\psi^{RAB},\tilde\psi^{RAB}\right)}\coloneq{\left|\Braket{\psi|\tilde\psi}\right|}^2\geqq 1-{\left(\frac{\epsilon}{2}\right)}^2.
\end{equation} Due to the Koashi-Imoto decomposition of $\Ket{\tilde\psi}^{RAB}$ shown in Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}, $\mathcal{H}^A$ and $\mathcal{H}^B$ are uniquely decomposed into \begin{align} \label{eq:notation_space_approx} \mathcal{H}^A=\bigoplus_{j=0}^{J-1}{\mathcal{H}}^{\tilde{a}_j^\textup{L}}\otimes{\mathcal{H}}^{\tilde{a}_j^\textup{R}},\quad \mathcal{H}^B=\bigoplus_{j=0}^{J-1}{\mathcal{H}}^{\tilde{b}_j^\textup{L}}\otimes{\mathcal{H}}^{\tilde{b}_j^\textup{R}}, \end{align} and $\Ket{\tilde\psi}^{RAB}$ is uniquely decomposed into \begin{equation} \label{eq:notation_state_approx} \Ket{\tilde\psi}^{RAB}=\bigoplus_{j=0}^{J-1}\sqrt{\tilde{p}\left(j\right)}\Ket{\tilde{\omega}_j}^{\tilde{a}_j^\textup{L} \tilde{b}_j^\textup{L}}\otimes\Ket{\tilde{\phi}_j}^{R \tilde{a}_j^\textup{R} \tilde{b}_j^\textup{R}}, \end{equation} where $\tilde{p}\left(j\right)$ is a probability distribution. Using these notations, Theorem~\ref{thm:merge} on exact state merging is extended to approximate state merging as follows.
\begin{theorem} \label{thm:approximate} \textit{An achievability bound of entanglement cost in approximate state merging applicable to arbitrarily small-dimensional systems.} Given any $\Ket{\psi}^{RAB}$, any $\epsilon \geqq 0$, and any $\delta > 0$, there exists a protocol for approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$ achieving \begin{equation} \label{eq:approximate} \log_2 K-\log_2 L\leqq \min_{\Ket{\tilde\psi}}\max_{j}\left\{\log_2\left(\lambda_0^{\tilde{a}_j^\textup{L}}\dim\mathcal{H}^{\tilde{a}_j^\textup{R}}\right)\right\} + \delta, \end{equation} where the notations are the same as those in Equations~\eqref{eq:notation_space_approx} and~\eqref{eq:notation_state_approx}, $\lambda^{\tilde{a}_j^\textup{L}}_0$ is the largest eigenvalue of $\tilde{\omega}_j^{\tilde{a}_j^\textup{L}}$, and the minimization is over any normalized pure state $\Ket{\tilde\psi}^{RAB}$ satisfying \begin{equation} {F^2\left(\psi^{RAB},\tilde\psi^{RAB}\right)}\coloneq{\left|\Braket{\psi|\tilde\psi}\right|}^2\geqq 1-{\left(\frac{\epsilon}{2}\right)}^2. \end{equation} \end{theorem} \begin{proof} The proof is by construction, and it is shown that the LOCC map $\tilde{\mathcal{M}}$ for exact state merging of an approximate state $\Ket{\tilde\psi}$ attaining the minimum in Inequality~\eqref{eq:approximate} achieves approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$. To calculate the error in approximate state merging, the purified distance $P\left(\rho,\sigma\right)$ of any two states $\rho$ and $\sigma$ is used.
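With $P\left(\rho,\sigma\right)=\sqrt{1-F^2\left(\rho,\sigma\right)}$ for normalized states, the error bookkeeping reduces to elementary arithmetic. As a standalone numerical illustration (hypothetical target error $\epsilon=0.1$; the two $\epsilon/2$ contributions mirror the two applications of the triangle inequality):

```python
import math

eps = 0.1                                # hypothetical target error
F2_threshold = 1 - (eps / 2) ** 2        # fidelity condition imposed on the state
P_each = math.sqrt(1 - F2_threshold)     # purified distance to the ideal state
assert math.isclose(P_each, eps / 2)
P_total = 2 * P_each                     # two uses of the triangle inequality
F2_final = 1 - P_total ** 2              # guaranteed fidelity of the final state
assert F2_final >= 1 - eps ** 2 - 1e-12
```

A fidelity condition of $1-(\epsilon/2)^2$ on the approximating state thus translates into a final fidelity of at least $1-\epsilon^2$.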
Using the properties of the purified distance summarized in Section~\ref{sec:decomposition}, it is straightforward to obtain \begin{equation} \begin{split} &P\left(\id^R\otimes\tilde{\mathcal{M}}\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right),{\psi}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}\right)\\ &\leqq P\Big(\id^R\otimes\tilde{\mathcal{M}}\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right), \id^R\otimes\tilde{\mathcal{M}}\left({\tilde\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right)\Big)\\ &\quad +P\Big(\id^R\otimes\tilde{\mathcal{M}}\left({\tilde\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right), {\psi}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}\Big)\\ &\leqq P\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}},{\tilde\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right) +P\left({\tilde\psi}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}},{\psi}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}\right)\\ &= P\left({\psi}^{RAB},{\tilde\psi}^{RAB}\right) +P\left({\tilde\psi}^{RB'B},{\psi}^{RB'B}\right)\\ &\leqq\frac{\epsilon}{2}+\frac{\epsilon}{2}\\ &=\epsilon. \end{split} \end{equation} Therefore, it holds that \begin{equation} \begin{split} &F^2\left(\id^R\otimes\tilde{\mathcal{M}}\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right),{\psi}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}\right)\\ &\geqq 1-\epsilon^2. \end{split} \end{equation} \end{proof} \section{\label{sec:converse}Converse bound for exact state merging} A converse bound of entanglement cost in exact state merging illustrated in Figure~\ref{fig:merge} is derived in this section. In other words, this section aims at obtaining a lower bound of the entanglement cost of any possible protocol for exact state merging. This converse bound improves the existing converse bound in terms of the conditional max-entropy originally shown in Reference~\cite{B9}. 
After showing this converse bound, comparison with the existing bound is analyzed, followed by discussing the tightness of the obtained converse bound. A converse bound for exact state merging is obtained as follows. \begin{theorem} \label{thm:new} \textit{A converse bound of entanglement cost in exact state merging.} For any state $\Ket{\psi}^{RAB}$ and any protocol for exact state merging of $\Ket{\psi}^{RAB}$, it holds that \begin{equation} \label{eq:lower_catalytic} \log_2 K - \log_2 L \geqq \inf\left\{\log_2 K - \log_2 L: \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\frac{\mathbb{1}_L}{L}\otimes\psi^{AB}\right\}, \end{equation} where $\prec$ denotes majorization for Hermitian operators summarized in Section~\ref{sec:entanglement}. Also, for any protocol for exact state merging of $\Ket{\psi}^{RAB}$ in the non-catalytic setting, it holds that \begin{equation} \label{eq:lower_non_catalytic} \log_2 K \geqq \min\left\{\log_2 K: \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\psi^{AB}\right\}, \end{equation} where the notations are the same as those in Inequality~\eqref{eq:lower_catalytic}. \end{theorem} \begin{proof} The proof of Inequality~\eqref{eq:lower_catalytic} is given, while Inequality~\eqref{eq:lower_non_catalytic} can be shown in a similar way by substituting $L$ in the following proof with $1$. Any protocol for exact state merging transforms $\Ket{\psi}^{RAB}\otimes\Ket{\Phi_K^+}^{\overline{A}\overline{B}}$ into $\Ket{\psi}^{RB'B}\otimes\Ket{\Phi_L^+}^{\overline{A}\overline{B}}$ by LOCC\@. 
Hence, with respect to the bipartition between $\mathcal{H}^R\otimes\mathcal{H}^A\otimes\mathcal{H}^{\overline{A}}$ and $\mathcal{H}^B\otimes\mathcal{H}^{B^\prime}\otimes\mathcal{H}^{\overline{B}}$, LOCC convertibility between bipartite pure states yields the majorization condition in Lemma~\ref{lem:pure_convertibility} \begin{equation} \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\frac{\mathbb{1}_L}{L}\otimes\psi^{AB} \end{equation} in terms of Hermitian operators representing their reduced states. Since this majorization holds for any $K$ and $L$ achieving exact state merging of $\Ket{\psi}^{RAB}$, Inequality~\eqref{eq:lower_catalytic} is obtained. \end{proof} As a corollary of Theorem~\ref{thm:new}, the following converse bound for maximally entangled states in the form of Equation~\eqref{eq:max} is obtained, which is easier to calculate than that in Theorem~\ref{thm:new}. For simplicity, the following analysis in this section assumes that $\psi^R=\frac{\mathbb{1}^R}{D}$ holds for a given state $\Ket{\psi}^{RAB}$, based on the fact that entanglement cost in exact state merging of $\Ket{\psi}^{RAB}$ and that of $\Ket{\Phi_D^+\left(\psi\right)}^{RAB}$ are the same, as shown in Proposition~\ref{prp:max}. Note that to calculate the converse bound in the following corollary for any given state $\Ket{\psi}^{RAB}$, first calculate the Schmidt decomposition of $\Ket{\psi}^{RAB}$ to obtain the corresponding maximally entangled state $\Ket{\Phi_D^+\left(\psi\right)}^{RAB}$ from Equation~\eqref{eq:max}, and then apply the corollary to $\Ket{\Phi_D^+\left(\psi\right)}^{RAB}$. \begin{corollary} \label{col:tractable_converse} \textit{A converse bound of entanglement cost in exact state merging derived from Theorem~\ref{thm:new}}.
For any state $\Ket{\psi}^{RAB}$ satisfying $\psi^R=\frac{\mathbb{1}^R}{D}$, and any protocol for exact state merging of $\Ket{\psi}^{RAB}$, it holds that \begin{equation} \label{eq:tractable_lower_catalytic} \log_2 K - \log_2 L \geqq \log_2 \left({\lambda_0^B}D\right), \end{equation} where $\lambda_0^B$ is the largest eigenvalue of $\psi^B$. Also, for any protocol for exact state merging in the non-catalytic setting of $\Ket{\psi}^{RAB}$ satisfying $\psi^R=\frac{\mathbb{1}^R}{D}$, it holds that \begin{equation} \label{eq:tractable_lower_non_catalytic} \log_2 K \geqq \log_2 \left\lceil\lambda_0^B D\right\rceil, \end{equation} where $\lceil{}\cdots{}\rceil$ is the ceiling function, and $\lambda_0^B$ is the same as that in Equation~\eqref{eq:tractable_lower_catalytic}. \end{corollary} \begin{proof}[\textit{Proof of Inequality~\eqref{eq:tractable_lower_catalytic}}] Due to Theorem~\ref{thm:new}, exact state merging implies \begin{equation} \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\frac{\mathbb{1}_L}{L}\otimes\psi^{AB}. \end{equation} Thus, the largest eigenvalues of both sides of this majorization satisfy \begin{equation} \frac{\lambda_0^B}{K}\leqq\frac{1}{DL}. \end{equation} Hence, it holds that \begin{equation} \log_2 K - \log_2 L \geqq \log_2\left({\lambda_0^B}D\right). \end{equation} \end{proof} \begin{proof}[\textit{Proof of Inequality~\eqref{eq:tractable_lower_non_catalytic}}] From the same argument as above, it is obtained that \begin{equation} \frac{\lambda_0^B}{K}\leqq\frac{1}{D}. \end{equation} Hence, it holds that \begin{equation} K\geqq\lambda_0^B D, \end{equation} and since $K$ is an integer, it holds that \begin{equation} K\geqq \left\lceil \lambda_0^B D\right\rceil. \end{equation} Therefore, it is obtained that \begin{equation} \log_2 K \geqq \log_2 \left\lceil\lambda_0^B D\right\rceil.
\end{equation} \end{proof} Reference~\cite{B9} also provides a converse bound of entanglement cost in exact state merging of any given state $\Ket{\psi}^{RAB}$ in terms of the conditional max-entropy as follows. Note that this converse bound in Reference~\cite{B9} is only shown for one-way LOCC, while the converse bounds in Theorem~\ref{thm:new} and Corollary~\ref{col:tractable_converse} are applicable to any LOCC map including two-way LOCC\@. \begin{lemma} \label{lem:old} (Corollary 4.12.\ in Reference~\cite{B9}) \textit{A converse bound of entanglement cost in exact state merging in Reference~\cite{B9}.} For any state $\Ket{\psi}^{RAB}$ and any one-way LOCC protocol for exact state merging of $\Ket{\psi}^{RAB}$, where classical communication is performed only from $A$ to $B$, it holds that \begin{equation} \log_2 K - \log_2 L \geqq {H_{\max}(A|B)}_\psi, \end{equation} where the right-hand side is the conditional max-entropy summarized in Appendix~\ref{sec:one_shot_entropies}. \end{lemma} For states in the form of Equation~\eqref{eq:max}, the converse bounds in Theorem~\ref{thm:new} and Corollary~\ref{col:tractable_converse} are at least as tight as the existing bound in Lemma~\ref{lem:old}, as shown in the following proposition. Moreover, Implication~\ref{ex:2} in Section~\ref{sec:examples} will show a case where these converse bounds are strictly tighter than the existing bound. It is sufficient to show that the converse bound in Corollary~\ref{col:tractable_converse} is at least as tight as that in Lemma~\ref{lem:old}, since Theorem~\ref{thm:new} provides at least as tight a bound as that in Corollary~\ref{col:tractable_converse}.
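For spectra given numerically, the majorization condition in Theorem~\ref{thm:new} can be checked by brute force. The following is a minimal sketch (NumPy; toy spectra, with `majorized_by` and `min_K` as hypothetical helper names) that searches for the smallest $K$ in the non-catalytic condition and compares it with the bound $\left\lceil\lambda_0^B D\right\rceil$ of Corollary~\ref{col:tractable_converse}.

```python
import math
import numpy as np

def majorized_by(x, y):
    """True if spectrum x is majorized by y (x ≺ y); spectra are padded with zeros."""
    n = max(len(x), len(y))
    xs = np.sort(np.pad(np.asarray(x, float), (0, n - len(x))))[::-1]
    ys = np.sort(np.pad(np.asarray(y, float), (0, n - len(y))))[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12))

def min_K(spec_B, spec_AB):
    """Smallest integer K with (1_K / K) ⊗ psi^B ≺ psi^{AB}."""
    K = 1
    while not majorized_by(np.kron(np.full(K, 1.0 / K), spec_B), spec_AB):
        K += 1
    return K

# toy spectra: D = 2, psi^B with largest eigenvalue 3/4, psi^{AB} maximally mixed on rank 2
spec_B, spec_AB = [0.75, 0.25], [0.5, 0.5]
assert min_K(spec_B, spec_AB) == math.ceil(0.75 * 2) == 2
```

In this toy case the brute-force search and the closed-form ceiling bound agree, as they must whenever $\psi^{AB}$ has a flat spectrum of rank $D$.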
\begin{proposition} \textit{Comparison of converse bounds of entanglement cost in exact state merging.} For any state $\Ket{\psi}^{RAB}$ satisfying $\psi^R=\frac{\mathbb{1}^R}{D}$, it holds that \begin{equation} \log_2\left({\lambda_0^B}D\right)\geqq{H_{\max}(A|B)}_\psi, \end{equation} where the notations are the same as those in Corollary~\ref{col:tractable_converse} and Lemma~\ref{lem:old}. \end{proposition} \begin{proof} Consider the Schmidt decomposition of $\Ket{\psi}^{RAB}$ \begin{equation} \Ket{\psi}^{RAB}=\sum_{l=0}^{D-1} \frac{1}{\sqrt{D}}\Ket{l}^R\otimes\Ket{\psi_l}^{AB}. \end{equation} Reference~\cite{V2} shows that $2^{{H_{\max}(A|B)}_\psi}$ equals the optimal value of the following optimization problem, which is a semidefinite program: minimize ${\left\|Z^B\right\|}_\infty$ subject to $\mathbb{1}^R \otimes Z^{AB}\geqq\Ket{\psi}\Bra{\psi}^{RAB}$ and $Z^{AB}\geqq 0$. The choice \begin{equation} Z^{AB}=D\psi^{AB} \end{equation} satisfies these constraints: \begin{align} &\mathbb{1}^R\otimes D\psi^{AB}=\sum_l\Ket{l}\Bra{l}^R\otimes\sum_{l'}\Ket{\psi_{l'}}\Bra{\psi_{l'}}^{AB}\geqq\Ket{\psi}\Bra{\psi}^{RAB};\\ &D\psi^{AB}\geqq 0. \end{align} Therefore, it holds that \begin{equation} \begin{split} &\log_2 \left({\lambda_0^B}D\right) = \log_2 {\left\|D\psi^B\right\|}_\infty\\ &\geqq\min_{Z^{AB}} \log_2{\left\|Z^B\right\|}_\infty ={{H_{\max}(A|B)}_\psi}. \end{split} \end{equation} \end{proof} It is natural to ask how tight the converse bounds in Theorem~\ref{thm:new} and Corollary~\ref{col:tractable_converse} are. In the following analysis of the tightness, the non-catalytic setting of exact state merging using one-way LOCC from $A$ to $B$ is considered for simplicity, and the following proposition simplifies the analysis.
\begin{proposition} \label{prp:equivalence} \textit{A necessary and sufficient condition for exact state merging in the non-catalytic setting by one-way LOCC\@.} Given any pure state $\Ket{\psi}^{RAB}$ satisfying $\psi^R=\frac{\mathbb{1}^R}{D}$, there exists a one-way LOCC map $\mathcal{M}^{A\to B}$ from $A$ to $B$ achieving \begin{equation} \id^R\otimes\mathcal{M}^{A\to B}\left(\psi^{RAB}\otimes{\Phi_K^+}^{\overline{A}\overline{B}}\right)=\psi^{RB'B} \end{equation} if and only if there exists a mixed-unitary channel \begin{equation} \mathcal{U}(\rho)=\sum_m p\left(m\right) U_m\rho U_m^\dag, \end{equation} where $p\left(m\right)$ is a probability distribution and $U_m$ for each $m$ is a unitary, achieving \begin{equation} \label{eq:mixed_unitary} \id^R\otimes\mathcal{U}^{\hat{B}}\left({\Phi_D^+}^{R\hat{B}}\right)=\psi^{RB}\otimes\frac{\mathbb{1}_K^{\overline{B}}}{K}, \end{equation} where $\mathcal{H}^{\hat{B}}=\mathcal{H}^{B}\otimes\mathcal{H}^{\overline{B}}$ and $\Ket{\Phi_D^+}^{R\hat{B}}\coloneq\frac{1}{\sqrt{D}}\sum_{l=0}^{D-1}\Ket{l}^R\otimes\Ket{l}^{\hat{B}}$. \end{proposition} \begin{proof} \textit{If part:} Assume that \begin{equation} \psi^{RB}\otimes\frac{\mathbb{1}_K^{\overline{B}}}{K}=\sum_m p\left(m\right)\left(\mathbb{1}^R\otimes U_m^{\hat{B}}\right){\Phi_D^+}^{R{\hat{B}}}{\left(\mathbb{1}^R\otimes U_m^{\hat{B}}\right)}^\dag. \end{equation} A purification yields \begin{equation} \left(\mathbb{1}^{RB \overline{B}}\otimes U \right)\left(\Ket{\psi}^{RAB}\otimes\Ket{\Phi_K^+}^{\overline{A}\overline{B}}\right) =\sum_m \sqrt{p\left(m\right)}\Ket{m}^{A_0}\otimes\left(\mathbb{1}^R\otimes U_m^{\hat{B}}\right)\Ket{\Phi_D^+}^{R{\hat{B}}}, \end{equation} where $\mathcal{H}^{A_0}$ is $A$'s auxiliary system, and $U$ is an isometry performed by $A$.
Hence, one-way LOCC from $A$ to $B$ represented by ${\left\{\left(\Bra{m}^{A_0} U\right)\otimes {\left(U_m^{\hat{B}}\right)}^\dag\right\}}_m\,$, where the post-measurement state of $A$ is traced out, achieves, for each $m$, \begin{equation} \mathbb{1}^{R}\otimes\left[\left(\Bra{m}^{A_0} U\right)\otimes {\left(U_m^{\hat{B}}\right)}^\dag\right]\left(\Ket{\psi}^{RAB}\otimes\Ket{\Phi_K^+}^{\overline{A}\overline{B}}\right) \propto\Ket{\Phi_D^+}^{R{\hat{B}}}, \end{equation} and $\Ket{\Phi_D^+}^{R{\hat{B}}}$ on the right-hand side can be transformed into $\Ket{\psi}^{RB^\prime B}$ by $B$'s local isometry. \textit{Only if part:} Assume that there exists $A$'s POVM ${\left\{\Lambda_m\right\}}_m$ on $\mathcal{H}^A\otimes\mathcal{H}^{\overline{A}}$ satisfying for each $m$ \begin{equation} \tr_A\left[\left(\mathbb{1}^{RB\overline{B}}\otimes\Lambda_m\right)\left(\psi^{RAB}\otimes{\Phi_K^+}^{\overline{A}\overline{B}}\right)\right] =p\left(m\right){\left(\mathbb{1}^{R}\otimes U_m^{\hat{B}}\right)}{\Phi_D^+}^{R{\hat{B}}}{\left(\mathbb{1}^{R}\otimes U_m^{\hat{B}}\right)}^\dag, \end{equation} where $p\left(m\right)$ is a probability distribution, and $U_m^{\hat{B}}$ is $B$'s unitary correction conditioned by $m$. Note that ${\Phi_D^+}^{R{\hat{B}}}$ on the right-hand side can be transformed into $\Ket{\psi}^{RB^\prime B}$ by $B$'s local isometry. Then, it holds that \begin{equation} \begin{split} &\psi^{RB}\otimes\frac{\mathbb{1}_K^{\overline{B}}}{K}\\ &=\sum_m\tr_A\left[\left(\mathbb{1}^{RB\overline{B}}\otimes\Lambda_m\right)\left(\psi^{RAB}\otimes{\Phi_K^+}^{\overline{A}\overline{B}}\right)\right]\\ &=\sum_m p\left(m\right)\left(\mathbb{1}^R\otimes U_m^{\hat{B}}\right){\Phi_D^+}^{R{\hat{B}}}{\left(\mathbb{1}^R\otimes U_m^{\hat{B}}\right)}^\dag\\ &=\id^R\otimes\mathcal{U}^{\hat{B}}\left({\Phi_D^+}^{R\hat{B}}\right). 
\end{split} \end{equation} \end{proof} Note that it is straightforward to generalize the above proof of Proposition~\ref{prp:equivalence} in the non-catalytic setting to the catalytic setting, that is, \begin{equation} \begin{split} &\id^R\otimes\mathcal{M}^{A\to B}\left(\psi^{RAB}\otimes{\Phi_K^+}^{\overline{A}\overline{B}}\right)={\psi}^{RB^\prime B}\otimes{\Phi_L^+}^{\overline{A}\overline{B}}\\ &\Leftrightarrow\id^R\otimes\mathcal{U}^{\hat{B}}\left({\Phi_D^+}^{R{\hat{B}}}\otimes\frac{\mathbb{1}_L^{\hat{B}}}{L}\right)=\psi^{RB}\otimes\frac{\mathbb{1}_K^{\overline{B}}}{K}, \end{split} \end{equation} which can also be shown for quantum state redistribution in approximate scenarios~\cite{B15,B13}. For qubits, the converse bound in Corollary~\ref{col:tractable_converse} is tight enough to provide the optimal entanglement cost, as shown in the following. Note that an equivalent condition in terms of Schmidt coefficients of $\Ket{\psi_l}^{AB}$ in Equation~\eqref{eq:max} is also given in Theorem~II.1.\ in Reference~\cite{O}. \begin{theorem} \label{thm:qubit} \textit{Optimal entanglement cost in exact state merging in the non-catalytic setting for qubits.} For any three-qubit pure state $\Ket{\psi}^{RAB}\in{\left(\mathbb{C}^2\right)}^{\otimes 3}$ satisfying $\psi^R=\frac{\mathbb{1}^R}{2}$, exact state merging of $\Ket{\psi}^{RAB}$ in the non-catalytic setting is achievable if and only if \begin{equation} \log_2 K \geqq \log_2 \left\lceil\lambda_0^B D\right\rceil, \end{equation} where the notations are the same as those in Corollary~\ref{col:tractable_converse}. Equivalently, exact state merging of $\Ket{\psi}^{RAB}$ is achievable at entanglement cost $\log_2 K = 0$ if and only if $\psi^B=\frac{\mathbb{1}^B}{2}$, and otherwise, entanglement cost $\log_2 K = 1$ is required.
\end{theorem} \begin{proof} \textit{If part:} Assume that \begin{equation} \psi^B=\frac{\mathbb{1}^B}{2}. \end{equation} The existence of an LOCC protocol for exact state merging of $\Ket{\psi}^{RAB}$ achieving $\log_2 K=0$ is shown as follows. Note that otherwise, quantum teleportation of $\psi^A$ achieves $\log_2 K = 1$. To show the existence of the LOCC protocol, Proposition~\ref{prp:equivalence} implies that it is sufficient to prove the existence of a mixed-unitary channel $\mathcal{U}$ achieving \begin{equation} \label{eq:qubit} \id^R\otimes\mathcal{U}^B\left({\Phi_2^+}^{RB}\right)=\psi^{RB}. \end{equation} Note that $\mathcal{H}^{\hat{B}}$ in Equation~\eqref{eq:mixed_unitary} in Proposition~\ref{prp:equivalence} is simply written as $\mathcal{H}^B$ in Equation~\eqref{eq:qubit} since $\mathcal{H}^{\hat{B}}=\mathcal{H}^B$ in this proof. Given $\psi^{RB}$ satisfying $\psi^R=\frac{\mathbb{1}^R}{2}$, $\psi^{RB}$ can be regarded as a normalized version of the Choi operator of a CPTP map $\mathcal{U}^B$ defined as Equation~\eqref{eq:choi_operator}. Tracing out $\mathcal{H}^R$ of $\psi^{RB}$ yields \begin{equation} \mathcal{U}^B\left(\frac{\mathbb{1}^B}{2}\right)=\psi^B=\frac{\mathbb{1}^B}{2}, \end{equation} and hence, $\mathcal{U}^B$ is a unital channel. Since any unital channel on a qubit is a mixed-unitary channel, $\mathcal{U}^B$ is a mixed-unitary channel, which yields the conclusion. \end{proof} As for qudits of dimension greater than two, the converse bound in Theorem~\ref{thm:new} is not necessarily achievable, since the following proposition gives an example of exact state merging that does not satisfy the equality in~\eqref{eq:lower_non_catalytic}: it exhibits a three-qutrit state for which any one-way LOCC protocol for exact state merging in the non-catalytic setting fails to achieve \begin{equation} \log_2 K =\min\left\{\log_2 K: \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\psi^{AB}\right\}.
\end{equation} \begin{proposition} \label{prp:qutrit} \textit{Impossibility of achieving the converse bound of entanglement cost in exact state merging in the non-catalytic setting for qutrits.} There exists a three-qutrit pure state $\Ket{\psi}^{RAB}\in{\left(\mathbb{C}^3\right)}^{\otimes 3}$ satisfying $\psi^R=\frac{\mathbb{1}^R}{D}$ where $D=3$, such that exact state merging of $\Ket{\psi}^{RAB}$ in the non-catalytic setting cannot be achieved by any one-way LOCC protocol at entanglement cost \begin{equation} \log_2 K =\min\left\{\log_2 K: \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\psi^{AB}\right\}, \end{equation} where the notations are the same as those in Theorem~\ref{thm:new}. \end{proposition} \begin{proof} Consider a CPTP map \begin{equation} \mathcal{N}(\rho)=\frac{1}{2}(\tr\rho)\mathbb{1}-\frac{1}{2}\rho^\textup{T}, \end{equation} where $\rho^\textup{T}$ is the transpose of $\rho$ with respect to the computational basis. The Choi operator of $\mathcal{N}$ defined as Equation~\eqref{eq:choi_operator} is written as \begin{equation} \begin{split} &J(\mathcal{N})\coloneq\\ &\frac{1}{2}\left(\Ket{2}\otimes\Ket{1}-\Ket{1}\otimes\Ket{2}\right){\left(\Bra{2}\otimes\Bra{1}-\Bra{1}\otimes\Bra{2}\right)}+\\ &\frac{1}{2}\left(\Ket{0}\otimes\Ket{2}-\Ket{2}\otimes\Ket{0}\right){\left(\Bra{0}\otimes\Bra{2}-\Bra{2}\otimes\Bra{0}\right)}+\\ &\frac{1}{2}\left(\Ket{1}\otimes\Ket{0}-\Ket{0}\otimes\Ket{1}\right){\left(\Bra{1}\otimes\Bra{0}-\Bra{0}\otimes\Bra{1}\right)}. \end{split} \end{equation} This map $\mathcal{N}$ is a unital channel but not a mixed-unitary channel~\cite{L4,W11}. Consider a normalized state \begin{equation} \psi^{RB}\coloneq\frac{J(\mathcal{N})}{3}. 
\end{equation} A purification of $\psi^{RB}$ is \begin{equation} \begin{split} &\Ket{\psi}^{RAB}=\\ &\frac{1}{\sqrt{3}}\Ket{0}^A\otimes{\left(\frac{1}{\sqrt{2}}\Ket{2}^R\otimes\Ket{1}^B-\frac{1}{\sqrt{2}}\Ket{1}^R\otimes\Ket{2}^B\right)}+\\ &\frac{1}{\sqrt{3}}\Ket{1}^A\otimes{\left(\frac{1}{\sqrt{2}}\Ket{0}^R\otimes\Ket{2}^B-\frac{1}{\sqrt{2}}\Ket{2}^R\otimes\Ket{0}^B\right)}+\\ &\frac{1}{\sqrt{3}}\Ket{2}^A\otimes{\left(\frac{1}{\sqrt{2}}\Ket{1}^R\otimes\Ket{0}^B-\frac{1}{\sqrt{2}}\Ket{0}^R\otimes\Ket{1}^B\right)}. \end{split} \end{equation} This state satisfies \begin{align} &\psi^R=\frac{\mathbb{1}^R}{3},\\ &\psi^B=\frac{\mathbb{1}^B}{3}. \end{align} Hence, it holds that \begin{equation} \min\left\{\log_2 K: \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\psi^{AB}\right\}=0. \end{equation} To derive a contradiction, assume that there exists a one-way LOCC protocol for exact state merging of $\Ket{\psi}^{RAB}$ in the non-catalytic setting at entanglement cost $\log_2 K=0$. Due to Proposition~\ref{prp:equivalence}, this assumption is equivalent to the existence of a mixed-unitary channel $\mathcal{U}^B$ such that \begin{equation} \id^R\otimes\mathcal{U}^B\left({\Phi_3^+}^{RB}\right)=\psi^{RB}=\frac{J(\mathcal{N})}{3}, \end{equation} where, in the same way as Equation~\eqref{eq:qubit}, $\mathcal{H}^{\hat{B}}$ in Equation~\eqref{eq:mixed_unitary} in Proposition~\ref{prp:equivalence} is written as $\mathcal{H}^B$. Due to the one-to-one correspondence between a CPTP map and its Choi operator, $\mathcal{N}=\mathcal{U}$ is necessary, which contradicts the fact that $\mathcal{N}$ is not a mixed-unitary channel, and the conclusion is obtained.
\end{proof} \section{\label{sec:converse_approximate}Converse bound for approximate state merging} Given any pure state $\Ket{\psi}^{RAB}$ and an error $\epsilon\geqq 0$, Theorem~\ref{thm:new} is extended in this section to obtain a converse bound of entanglement cost in approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$. While the catalytic setting is analyzed in the following, the same argument holds for the non-catalytic setting. It is also shown that this converse bound for approximate state merging improves the converse bound derived from the previous study on one-shot approximate state redistribution~\cite{B10} when $\epsilon$ is sufficiently small. In the same way as the proof of Theorem~\ref{thm:new} on exact state merging, a converse bound of entanglement cost in approximate state merging is obtained by applying a majorization condition for LOCC convertibility to the bipartition between $B$ and $RA$. While the proof of Theorem~\ref{thm:new} on exact state merging uses the majorization condition for LOCC convertibility between bipartite pure states in Lemma~\ref{lem:pure_convertibility}, approximate state merging requires another majorization condition, shown in Lemma~\ref{lem:mixed}, for LOCC convertibility from a bipartite pure state to a bipartite mixed state, since the final state in approximate state merging can be a mixed state. Given any pure state $\Ket{\psi}^{RAB}$ and an error $\epsilon\geqq 0$, a converse bound of entanglement cost in approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$ is obtained using Lemma~\ref{lem:mixed} as follows.
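Before stating the bound, note that the majorization conditions appearing in these bounds involve only finite spectra and are therefore straightforward to check numerically. The following is an illustrative sketch of ours (the function name and example spectra are not part of the original argument): it tests $\boldsymbol{\lambda}(\rho)\prec\boldsymbol{\lambda}(\sigma)$ via partial sums of the decreasingly sorted eigenvalues.

```python
import numpy as np

def majorizes(p, q):
    """Return True if spectrum q majorizes spectrum p (written p ≺ q).

    Both spectra are zero-padded to a common length and sorted in
    decreasing order; then every partial sum of q must dominate the
    corresponding partial sum of p (up to a numerical tolerance).
    """
    n = max(len(p), len(q))
    p = np.sort(np.pad(np.asarray(p, dtype=float), (0, n - len(p))))[::-1]
    q = np.sort(np.pad(np.asarray(q, dtype=float), (0, n - len(q))))[::-1]
    return bool(np.all(np.cumsum(q) >= np.cumsum(p) - 1e-12))

# A maximally mixed spectrum is majorized by a pure-state spectrum,
# but not the other way around:
print(majorizes([0.5, 0.5], [1.0]))   # True
print(majorizes([1.0], [0.5, 0.5]))   # False
```

For the bounds in this section one would apply such a check to the spectra of $\frac{\mathbb{1}_K}{K}\otimes\psi^B$ and $\frac{\mathbb{1}_L}{L}\otimes\psi^{AB}$, obtainable with, e.g., `np.kron` and `np.linalg.eigvalsh`.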
\begin{theorem} \label{thm:approximate_converse} \textit{A converse bound of entanglement cost in approximate state merging.} For any state $\Ket{\psi}^{RAB}$, any error $\epsilon\geqq 0$, and any protocol for approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$, \begin{equation} \begin{split} &\log_2 K - \log_2 L \geqq\\ &\inf\Big\{\log_2 K - \log_2 L:\\ &\quad\boldsymbol{\lambda}\left(\psi^{B}\otimes\frac{\mathbb{1}_K^{\overline{B}}}{K}\right)\prec\sum_j p(j)\boldsymbol{\lambda}\left(\psi_j^{B^\prime B\overline{B}}\right),\\ &\quad\left.F^2\left(\sum_j p(j)\Ket{\psi_j}\Bra{\psi_j}^{RB^\prime B\overline{A}\overline{B}},\psi^{RB^\prime B}\otimes{\Phi_L^+}^{\overline{A}\overline{B}}\right)\geqq 1-\epsilon^2\right\}. \end{split} \end{equation} \end{theorem} \begin{proof} Any protocol for approximate state merging transforms $\Ket{\psi}^{RAB}\otimes\Ket{\Phi_K^+}^{\overline{A}\overline{B}}$ into ${\tilde\psi}^{RB^\prime B \overline{A}\overline{B}}$ by LOCC, where ${\tilde\psi}$ satisfies \begin{equation} F^2\left({\tilde\psi}^{RB^\prime B \overline{A}\overline{B}},\psi^{RB^\prime B}\otimes{\Phi_L^+}^{\overline{A}\overline{B}}\right)\geqq 1-\epsilon^2. \end{equation} Substituting $A$, $B$, $\Ket{\phi}^{AB}$, and $\psi^{AB}$ in Lemma~\ref{lem:mixed} with $R\overline{A}$, $B^\prime B \overline{B}$, $\Ket{\psi}^{RAB}\otimes\Ket{\Phi_K^+}^{\overline{A}\overline{B}}$, and ${\tilde\psi}^{RB^\prime B \overline{A}\overline{B}}$, respectively, yields an ensemble $\left\{p(j),\Ket{\psi_j}^{RB^\prime B \overline{A}\overline{B}}\right\}$ satisfying \begin{align} &{\tilde\psi}^{RB^\prime B \overline{A}\overline{B}}=\sum_j p(j)\Ket{\psi_j}\Bra{\psi_j}^{RB^\prime B \overline{A}\overline{B}},\\ &\boldsymbol{\lambda}\left(\psi^{B}\otimes\frac{\mathbb{1}_K^{\overline{B}}}{K}\right)\prec\sum_j p(j)\boldsymbol{\lambda}\left(\psi_j^{B^\prime B\overline{B}}\right). \end{align} Therefore, the conclusion is obtained.
\end{proof} Reference~\cite{B10} also analyzes a converse bound for fully quantum protocols for one-shot approximate state redistribution, which is a generalized task including approximate state merging as a special case. As discussed in Remark~\ref{remark:usefulness}, it is straightforward to convert this converse bound for fully quantum protocols to the converse bound of entanglement cost in the LOCC framework, which yields the following. \begin{lemma} \label{lem:existing_approxiamte_converse} (Proposition~12 in Reference~\cite{B10}) \textit{A converse bound of entanglement cost in approximate state merging given in Reference~\cite{B10}.} For any state $\Ket{\psi}^{RAB}$, any errors $\epsilon_1\in(0,1)$, $\epsilon_2\in(0,1-\epsilon_1)$, and any protocol for approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon_1\,$, it holds that \begin{equation} \log_2 K - \log_2 L \geqq {H_{\min}^{\epsilon_2}(AB)}_\psi-{H_{\min}^{\epsilon_1+\epsilon_2}(B)}_\psi, \end{equation} where $H_{\min}^{\epsilon}$ is the smooth min-entropy summarized in Appendix~\ref{sec:one_shot_entropies}. \end{lemma} When the error tolerance in approximate state merging is sufficiently small, the converse bound shown in Theorem~\ref{thm:approximate_converse} improves the converse bound shown in Lemma~\ref{lem:existing_approxiamte_converse} in the following sense. 
\begin{proposition} \textit{Comparison of converse bounds of entanglement cost in approximate state merging.} For any state $\Ket{\psi}^{RAB}$, any errors $\epsilon_1\in(0,1)$, $\epsilon_2\in(0,1-\epsilon_1)$, and any protocol for approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon_1\,$, \begin{equation} \begin{split} &\lim_{\epsilon_1\to 0}\inf\Big\{\log_2 K - \log_2 L:\\ &\quad\boldsymbol{\lambda}\left(\psi^{B}\otimes\frac{\mathbb{1}_K^{\overline{B}}}{K}\right)\prec\sum_j p(j)\boldsymbol{\lambda}\left(\psi_j^{B^\prime B\overline{B}}\right),\\ &\quad F^2\Big(\sum_j p(j)\Ket{\psi_j}\Bra{\psi_j}^{RB^\prime B\overline{A}\overline{B}},\psi^{RB^\prime B}\otimes{\Phi_L^+}^{\overline{A}\overline{B}}\Big)\geqq 1-\epsilon_1^2\Big\}\\ &\geqq\lim_{\epsilon_1,\epsilon_2\to 0}\left({H_{\min}^{\epsilon_2}(AB)}_\psi-{H_{\min}^{\epsilon_1+\epsilon_2}(B)}_\psi\right), \end{split} \end{equation} where the notations are the same as those in Theorem~\ref{thm:approximate_converse} and Lemma~\ref{lem:existing_approxiamte_converse}. \end{proposition} \begin{proof} Regarding the converse bound shown in Theorem~\ref{thm:approximate_converse}, it holds that \begin{equation} \begin{split} &\lim_{\epsilon_1\to 0}\inf\Big\{\log_2 K - \log_2 L:\\ &\quad\boldsymbol{\lambda}\left(\psi^{B}\otimes\frac{\mathbb{1}_K^{\overline{B}}}{K}\right)\prec\sum_j p(j)\boldsymbol{\lambda}\left(\psi_j^{B^\prime B\overline{B}}\right),\\ &\quad F^2\Big(\sum_j p(j)\Ket{\psi_j}\Bra{\psi_j}^{RB^\prime B\overline{A}\overline{B}},\psi^{RB^\prime B}\otimes{\Phi_L^+}^{\overline{A}\overline{B}}\Big)\geqq 1-\epsilon_1^2\Big\}\\ &=\inf\left\{\log_2 K - \log_2 L: \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\frac{\mathbb{1}_L}{L}\otimes\psi^{AB}\right\}.
\end{split} \end{equation} As for the converse bound shown in Lemma~\ref{lem:existing_approxiamte_converse}, the limit can be calculated as~\cite{R2,T5,T11} \begin{equation} \lim_{\epsilon_1,\epsilon_2\to 0}\left({H_{\min}^{\epsilon_2}(AB)}_\psi-{H_{\min}^{\epsilon_1+\epsilon_2}(B)}_\psi\right)=\log_2 \frac{1}{\lambda_0^{AB}} - \log_2 \frac{1}{\lambda_0^{B}}, \end{equation} where $\lambda_0^{AB}$ and $\lambda_0^{B}$ are the largest eigenvalues of $\psi^{AB}$ and $\psi^B$, respectively. The majorization \begin{equation} \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\frac{\mathbb{1}_L}{L}\otimes\psi^{AB} \end{equation} implies that the largest eigenvalues on the two sides satisfy \begin{equation} \frac{\lambda_0^{B}}{K}\leqq\frac{\lambda_0^{AB}}{L}, \end{equation} and hence, \begin{equation} \log_2 K - \log_2 L \geqq\log_2 \frac{1}{\lambda_0^{AB}} - \log_2 \frac{1}{\lambda_0^{B}}. \end{equation} Due to this implication, it holds that \begin{equation} \begin{split} &\inf\left\{\log_2 K - \log_2 L: \frac{\mathbb{1}_K}{K}\otimes\psi^{B}\prec\frac{\mathbb{1}_L}{L}\otimes\psi^{AB}\right\} \geqq\log_2 \frac{1}{\lambda_0^{AB}} - \log_2 \frac{1}{\lambda_0^{B}}, \end{split} \end{equation} which yields the conclusion. \end{proof} \section{\label{sec:examples}Implications} Implications of the results in this chapter are discussed in the following, where the tensor product symbol $\otimes$ between states may be omitted for brevity. Define \begin{align} \Ket{+}&\coloneq\frac{1}{\sqrt{2}}\left(\Ket{0}+\Ket{1}\right),\\ \Ket{\Psi^\pm}&\coloneq\frac{1}{\sqrt{2}}\left(\Ket{0}\Ket{1}\pm\Ket{1}\Ket{0}\right),\\ \Ket{\Phi^\pm}&\coloneq\frac{1}{\sqrt{2}}\left(\Ket{0}\Ket{0}\pm\Ket{1}\Ket{1}\right).
\end{align} \begin{implication} \label{ex:1} \textit{Reduced entanglement cost in exact state merging compared with quantum teleportation and exact state splitting by performing a measurement on the classical part followed by classical communication.} Consider a tripartite Greenberger-Horne-Zeilinger (GHZ) state of $d$-dimensional systems for any $d\geqq 2$ \begin{equation} \Ket{\textup{GHZ}_d}^{RAB}\coloneq\frac{1}{\sqrt{d}}\sum_{l=0}^{d-1}\Ket{l}^R\Ket{l}^A\Ket{l}^B. \end{equation} Quantum teleportation of the reduced state of $\Ket{\textup{GHZ}_d}^{RAB}$ on $A$ requires $\log_2 d$ ebits, that is, $\Ket{\Phi_d^+}$ for an initial resource state. Note that exact state splitting summarized in Section~\ref{sec:split} also requires $\log_2 d$ ebits due to Theorem~\ref{thm:split}. By contrast, the protocols for exact state merging of $\Ket{\textup{GHZ}_d}^{RAB}$ in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} achieve respectively \begin{equation} \log_2 K - \log_2 L = 0<\log_2 d \end{equation} and \begin{equation} \log_2 K = 0<\log_2 d. \end{equation} In a similar way, as will be shown in Chapter~\ref{sec:distributed_encoding_decoding}, the protocol for exact state merging can be used for achieving \textit{zero} entanglement cost in exact state merging of multipartite code states of quantum error correcting codes~\cite{G,D,T10,B}. 
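The zero entanglement cost above is consistent with the converse bound (cf.\ Corollary~\ref{col:tractable_converse}): for $\Ket{\textup{GHZ}_d}^{RAB}$, the largest eigenvalue of $\psi^B$ is $1/d$ while $D=\rank\psi^R=d$, so $\log_2\left\lceil\lambda_0^B D\right\rceil=0$. A short numerical sketch of ours (NumPy, $d=3$; variable names are ours, not from the original text) checks the reduced states:

```python
import numpy as np

d = 3
# |GHZ_d> = (1/sqrt(d)) sum_l |l>^R |l>^A |l>^B, amplitudes indexed (R, A, B)
psi = np.zeros((d, d, d))
for l in range(d):
    psi[l, l, l] = 1 / np.sqrt(d)

rho = np.einsum('rab,xyz->rabxyz', psi, psi)   # pure-state density operator
rho_R = np.einsum('rabxab->rx', rho)           # trace out A and B
rho_B = np.einsum('rabraz->bz', rho)           # trace out R and A

D = np.linalg.matrix_rank(rho_R)               # psi^R = 1/d, so D = d
lam0_B = np.linalg.eigvalsh(rho_B).max()       # largest eigenvalue of psi^B is 1/d
print(D, round(lam0_B * D, 12))                # lambda_0^B * D = 1, so the bound is 0
```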
\end{implication} \begin{implication} \label{ex:3} \textit{Negative entanglement cost in exact state merging by entanglement distillation from the redundant part.} Consider a pure state \begin{equation} \begin{split} \Ket{\psi}^{RAB}=\frac{1}{\sqrt{3}}\Big(&\Ket{0}^R\Ket{\Psi^+}^{A_1B_1}\Ket{\Phi^-}^{A_2B_2}\Ket{\Phi^+}^{A_3B_3}+\\ &\Ket{1}^R\Ket{0}^{A_1}\Ket{0}^{B_1}\Ket{\Phi^-}^{A_2B_2}\Ket{\Phi^+}^{A_3B_3}+\\ &\Ket{2}^R\Ket{2}^{A_1}\Ket{2}^{B_1}\Ket{0}^{A_2}\Ket{0}^{B_2}\Ket{\Psi^-}^{A_3B_3}\Big), \end{split} \end{equation} where each of $\mathcal{H}^A=\mathcal{H}^{A_1}\otimes\mathcal{H}^{A_2}\otimes\mathcal{H}^{A_3}$ and $\mathcal{H}^B=\mathcal{H}^{B_1}\otimes\mathcal{H}^{B_2}\otimes\mathcal{H}^{B_3}$ is $3\times 2\times 2=12$-dimensional. Quantum teleportation of $\psi^A$ requires $\log_2 12$ ebits, that is, $\Ket{\Phi_{12}^+}$ for an initial resource state. By contrast, the protocols for exact state merging of $\Ket{\psi}^{RAB}$ in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} achieve respectively \begin{equation} \log_2 K - \log_2 L = -1 < 0 \end{equation} and \begin{equation} \log_2 K = 0. \end{equation} The former negative entanglement cost leads to a net gain of shared entanglement. \end{implication} \begin{implication} \label{ex:2} \textit{Improvement in converse bounds of entanglement cost in exact state merging.} Consider a three-qubit pure state \begin{equation} \Ket{\psi}^{RAB}=\frac{1}{\sqrt{2}}\Big(\Ket{0}^R\Ket{\Psi^+}^{AB} +\Ket{1}^R\Ket{0}^A\Ket{0}^{B}\Big). \end{equation} The protocols for exact state merging of $\Ket{\psi}^{RAB}$ in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} require respectively \begin{equation} \log_2 K - \log_2 L = 1 \end{equation} and \begin{equation} \log_2 K = 1. \end{equation} Since $\psi^B\neq\frac{\mathbb{1}^B}{2}$, the latter equality for exact state merging in the non-catalytic setting is optimal due to Theorem~\ref{thm:qubit}.
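The claim $\psi^B\neq\frac{\mathbb{1}^B}{2}$ for the three-qubit state above, as well as the quantities entering the converse bound, can be checked numerically. The following sketch of ours (NumPy; variable names are ours) computes the reduced states:

```python
import numpy as np

# (1/sqrt(2)) (|0>^R |Psi^+>^{AB} + |1>^R |0>^A |0>^B), amplitudes indexed (R, A, B)
psi = np.zeros((2, 2, 2))
psi[0, 0, 1] = psi[0, 1, 0] = 0.5       # |0>^R (|01> + |10>)/sqrt(2), times 1/sqrt(2)
psi[1, 0, 0] = 1 / np.sqrt(2)

rho = np.einsum('rab,xyz->rabxyz', psi, psi)
rho_B = np.einsum('rabraz->bz', rho)    # trace out R and A
rho_R = np.einsum('rabxab->rx', rho)    # trace out A and B

lam0_B = np.linalg.eigvalsh(rho_B).max()  # ≈ 3/4, so psi^B differs from 1/2
D = np.linalg.matrix_rank(rho_R)          # psi^R = 1/2, hence D = 2
print(lam0_B, np.log2(lam0_B * D))        # ≈ 0.75 and log2(3/2) ≈ 0.585
```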
As for the former, this example shows the difference between the converse bounds of entanglement cost in exact state merging in Theorem~\ref{thm:new} and Lemma~\ref{lem:old}. In this case, \begin{align} &\log_2 \left({\lambda_0^B}D\right)=\log_2 \frac{3}{2} > 0.5849,\\ &{H_{\max}(A|B)}_\psi < 0.5432, \end{align} where the notations are the same as those in Theorem~\ref{thm:new} and Lemma~\ref{lem:old}, and the value of ${H_{\max}(A|B)}_\psi$ is calculated by semidefinite programming~\cite{V2} using the Splitting Conic Solver (SCS)~\cite{S8} and YALMIP~\cite{L5}. These calculations imply that the converse bounds in Theorem~\ref{thm:new} and Corollary~\ref{col:tractable_converse} can be strictly tighter than the existing converse bound obtained from Lemma~\ref{lem:old}. \end{implication} \begin{implication} \label{ex:4} \textit{Asymmetry between $A$ and $B$ in exact state merging.} Consider a three-qubit pure state \begin{equation} \Ket{\psi}^{RAB}=\frac{1}{\sqrt{2}}\Big(\Ket{0}^R\Ket{0}^{A}\Ket{0}^{B} +\Ket{1}^R\Ket{1}^A\Ket{+}^{B}\Big). \end{equation} The protocols for exact state merging of $\Ket{\psi}^{RAB}$ in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} require respectively \begin{equation} \log_2 K - \log_2 L = 1 \end{equation} and \begin{equation} \log_2 K = 1. \end{equation} Since $\psi^B\neq\frac{\mathbb{1}^B}{2}$, the latter equality for exact state merging in the non-catalytic setting is optimal due to Theorem~\ref{thm:qubit}. In contrast, interchange $A$ and $B$ for $\Ket{\psi}^{RAB}$ to consider \begin{equation} \Ket{\psi^\prime}^{RAB}=\frac{1}{\sqrt{2}}\Big(\Ket{0}^R\Ket{0}^{A}\Ket{0}^{B} +\Ket{1}^R\Ket{+}^A\Ket{1}^{B}\Big). \end{equation} In the same way as the above case of $\Ket{\psi}^{RAB}$, the protocols for exact state merging of $\Ket{\psi^\prime}^{RAB}$ in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} require respectively \begin{equation} \log_2 K - \log_2 L = 1 \end{equation} and \begin{equation} \log_2 K = 1.
\end{equation} However, since ${\psi^\prime}^B=\frac{\mathbb{1}^B}{2}$, Theorem~\ref{thm:qubit} implies that there exists a protocol for exact state merging of $\Ket{\psi^\prime}^{RAB}$ in the non-catalytic setting achieving \begin{equation} \log_2 K = 0 < 1. \end{equation} Indeed, $\Ket{\psi^\prime}^{RAB}$ can also be written as \begin{equation} \begin{split} \Ket{\psi^\prime}^{RAB}=&\sqrt{\frac{1}{2}+\frac{\sqrt{2}}{4}}{\left[\frac{\left(1+\sqrt{2}\right)\Ket{0}+\Ket{1}}{\sqrt{4+2\sqrt{2}}}\right]}^A\Ket{\Phi^+}^{RB}-\\ &\sqrt{\frac{1}{2}-\frac{\sqrt{2}}{4}}{\left[\frac{\left(1-\sqrt{2}\right)\Ket{0}+\Ket{1}}{{\sqrt{4-2\sqrt{2}}}}\right]}^A\Ket{\Phi^-}^{RB}, \end{split} \end{equation} and hence, $A$'s measurement in the basis \begin{equation} \left\{\frac{\left(1+\sqrt{2}\right)\Ket{0}+\Ket{1}}{\sqrt{4+2\sqrt{2}}}, \frac{\left(1-\sqrt{2}\right)\Ket{0}+\Ket{1}}{{\sqrt{4-2\sqrt{2}}}}\right\} \end{equation} yields a maximally entangled state between $R$ and $B$. These cases imply that the difference in entanglement costs between the optimal protocol and the protocols presented in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} may arise depending on whether the quantum part of the Koashi-Imoto decomposition can be merged at less entanglement cost than performing quantum teleportation. Note that the optimal protocol obtained in Theorem~\ref{thm:qubit} works only for qubits, and Proposition~\ref{prp:qutrit} implies that extension to qudits is not straightforward. \end{implication} \begin{implication} \label{ex:5} \textit{Special cases where the achievability and converse bounds for exact state merging coincide.} Special cases are discussed where one of the subsystems of the system $\mathcal{H}^R\otimes\mathcal{H}^A\otimes\mathcal{H}^B$ for a given state $\Ket{\psi}^{RAB}$ is initially decoupled from the others. In these cases, the achievability bound for exact state merging in Theorem~\ref{thm:merge} coincides with the converse bound in Theorem~\ref{thm:new}.
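The claim at the end of Implication~\ref{ex:4} that $A$'s measurement on $\Ket{\psi^\prime}^{RAB}$ in the stated basis leaves $R$ and $B$ maximally entangled can be verified numerically. The following is an illustrative sketch of ours (NumPy; variable names are ours): for each measurement outcome it computes the outcome probability and the Schmidt coefficients of the post-measurement $RB$ state.

```python
import numpy as np

s2 = np.sqrt(2)
# |psi'> = (1/sqrt(2)) (|0>^R |0>^A |0>^B + |1>^R |+>^A |1>^B), indexed (R, A, B)
psi = np.zeros((2, 2, 2))
psi[0, 0, 0] = 1 / s2
psi[1, 0, 1] = psi[1, 1, 1] = 0.5

# A's measurement basis from the text
basis = [np.array([1 + s2, 1]) / np.sqrt(4 + 2 * s2),
         np.array([1 - s2, 1]) / np.sqrt(4 - 2 * s2)]

outcomes = []
for v in basis:
    post = np.einsum('a,rab->rb', v, psi)   # unnormalized post-measurement RB state
    prob = np.linalg.norm(post) ** 2        # probability of this outcome
    schmidt = np.linalg.svd(post / np.sqrt(prob), compute_uv=False)
    outcomes.append((prob, schmidt))
    print(prob, schmidt)  # both outcomes give Schmidt coefficients ≈ (0.707, 0.707)
```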
Note that in general, there may exist a gap between these bounds as discussed in Implications~\ref{ex:2} and~\ref{ex:4}, while a full characterization of the cases where this gap closes is not known. Consider the case where the system $\mathcal{H}^R$ is initially decoupled from the others, and a given pure state is of the form \begin{equation} \Ket{\psi_{R\textup{-}AB}}^{RAB}=\Ket{\mu}^{R}\otimes\Ket{\nu}^{AB}. \end{equation} Due to the Koashi-Imoto decomposition of $\Ket{\psi_{R\textup{-}AB}}^{RAB}$ in Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}, the decomposition of $\mathcal{H}^A$ is \begin{equation} \mathcal{H}^A=\mathcal{H}^{a_0^L}, \end{equation} where in terms of the notations of Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}, $J=1$, and $\mathcal{H}^{a_0^R}$ does not explicitly appear since in this case \begin{equation} \dim\mathcal{H}^{a_0^L}=\dim\mathcal{H}^A,\quad\dim\mathcal{H}^{a_0^R}=1. \end{equation} As for $\Ket{\psi_{R\textup{-}AB}}^{RAB}$, the decomposition yields \begin{equation} \Ket{\psi_{R\textup{-}AB}}^{RAB}=\Ket{\mu}^{R}\otimes\Ket{\nu}^{a_0^L b_0^L}, \end{equation} and we define \begin{equation} \lambda_0\coloneq\lambda_0^{a_0^L}=\lambda_0^B, \end{equation} where the notations are the same as those in Theorems~\ref{thm:merge} and~\ref{thm:new}. The protocol in Theorem~\ref{thm:merge} for exact state merging of $\Ket{\psi_{R\textup{-}AB}}^{RAB}$ achieves for any $\delta > 0$ \begin{equation} \log_2 K - \log_2 L \leqq \log_2\lambda_0 + \delta, \end{equation} where shared entanglement is distilled by Subprocess~1 in the proof of Theorem~\ref{thm:merge}. The converse bound in Theorem~\ref{thm:new} shows for any protocol for exact state merging of $\Ket{\psi_{R\textup{-}AB}}^{RAB}$ \begin{equation} \log_2 K - \log_2 L \geqq \log_2\lambda_0.
\end{equation} Next, consider the case where the system $\mathcal{H}^B$ is initially decoupled from the others, and a given pure state is of the form \begin{equation} \Ket{\psi_{B\textup{-}RA}}^{RAB}=\Ket{\mu}^{B}\otimes\Ket{\nu}^{RA}. \end{equation} Due to the Koashi-Imoto decomposition of $\Ket{\psi_{B\textup{-}RA}}^{RAB}$ in Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}, the decomposition of $\mathcal{H}^A$ is \begin{equation} \mathcal{H}^A=\mathcal{H}^{a_0^R}\oplus\mathcal{H}^{a_1^L}, \end{equation} where in terms of the notations of Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}, $J=1$, and $\mathcal{H}^{a_0^L}$ and $\mathcal{H}^{a_1^R}$ do not explicitly appear since in this case \begin{align} \dim\mathcal{H}^{a_0^L}&=1,\\ \dim\mathcal{H}^{a_0^R}&=\rank\psi_{B\textup{-}RA}^A,\\ \dim\mathcal{H}^{a_1^L}&=\dim\mathcal{H}^A-\rank\psi_{B\textup{-}RA}^A,\\ \dim\mathcal{H}^{a_1^R}&=1. \end{align} As for $\Ket{\psi_{B\textup{-}RA}}^{RAB}$, the decomposition yields \begin{equation} \Ket{\psi_{B\textup{-}RA}}^{RAB}=\Ket{\mu}^{b_0^L}\otimes\Ket{\nu}^{R a_0^R}. \end{equation} The protocol in Theorem~\ref{thm:merge} for exact state merging of $\Ket{\psi_{B\textup{-}RA}}^{RAB}$ achieves \begin{equation} \log_2 K = \log_2\rank \nu^{a_0^R} = \log_2\rank\psi_{B\textup{-}RA}^A,\quad \log_2 L = 0, \end{equation} where $\nu^{a_0^R}$ is transferred using quantum teleportation in Subprocess~2 in the proof of Theorem~\ref{thm:merge}. The converse bound in Theorem~\ref{thm:new} shows for any protocol for exact state merging of $\Ket{\psi_{B\textup{-}RA}}^{RAB}$ \begin{equation} \log_2 K - \log_2 L \geqq \log_2\rank\psi_{B\textup{-}RA}^A. \end{equation} Finally, consider the case where the system $\mathcal{H}^A$ is initially decoupled from the others, and a given pure state is of the form \begin{equation} \Ket{\psi_{A\textup{-}RB}}^{RAB}=\Ket{\mu}^{A}\otimes\Ket{\nu}^{RB}.
\end{equation} Due to the Koashi-Imoto decomposition of $\Ket{\psi_{A\textup{-}RB}}^{RAB}$ in Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}, the decomposition of $\mathcal{H}^A$ is \begin{equation} \mathcal{H}^A=\mathcal{H}^{a_0^L}, \end{equation} where in terms of the notations of Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}, $J=1$, and $\mathcal{H}^{a_0^R}$ does not explicitly appear since in this case \begin{equation} \dim\mathcal{H}^{a_0^L}=\dim\mathcal{H}^A,\quad \dim\mathcal{H}^{a_0^R}=1. \end{equation} As for $\Ket{\psi_{A\textup{-}RB}}^{RAB}$, the decomposition yields \begin{equation} \Ket{\psi_{A\textup{-}RB}}^{RAB}=\Ket{\mu}^{a_0^L}\otimes\Ket{\nu}^{R b_0^R}. \end{equation} The protocol in Theorem~\ref{thm:merge} for exact state merging of $\Ket{\psi_{A\textup{-}RB}}^{RAB}$ achieves \begin{equation} \log_2 K = \log_2 L = 0, \end{equation} where $B$ locally prepares a state corresponding to $\Ket{\mu}^{a_0^L}$ due to Subprocess~3 in the proof of Theorem~\ref{thm:merge}. The converse bound in Theorem~\ref{thm:new} shows for any protocol for exact state merging of $\Ket{\psi_{A\textup{-}RB}}^{RAB}$ \begin{equation} \log_2 K - \log_2 L \geqq 0. \end{equation} \end{implication} \chapter{\label{sec:two_way}One-shot quantum state merging under one-way and two-way communication} This chapter proves that in one-shot state merging from $A$ to $B$, $B$'s preprocessing of quantum side information and backward classical communication from $B$ to $A$ can be indispensable for minimizing the entanglement cost. The setting and the statement are presented in Section~\ref{sec:statement_two_way}, and the proof is given in Section~\ref{sec:proof} using an interconnection between state merging and another relevant task, local state discrimination.
Based on this interconnection between state merging and local state discrimination, the interpretation of entanglement cost in state merging is discussed in Section~\ref{sec:cost}. \section{\label{sec:statement_two_way}Separation between one-way and two-way LOCC in one-shot state merging} The main result of this chapter shows a provable advantage of two-way LOCC over one-way LOCC in a one-shot scenario of state merging, which contrasts with the existing protocols for one-shot state merging using only one-way communication~\cite{B9,Y9,B12,D7,D6,H10,B10,D5,M,N3,A4,A5,B15,B13,A16,A17}. This advantage is shown for approximate state merging of a particular given state in the non-catalytic setting introduced in Definition~\ref{def:approxiamte_state_merging}. Note that this result straightforwardly shows that the advantage also exists for exact state merging. In the rest of this chapter, state merging refers to this approximate state merging in the non-catalytic setting when no confusion arises. The main result is illustrated in Figure~\ref{fig:result} and shown as follows. \begin{theorem} \label{thm:result} \textit{Separation between one-way LOCC and two-way LOCC in one-shot state merging.} There exists a state $\Ket{\psi}^{RAB}$ (defined later in Eq.~\eqref{eq:phi}) and a nonzero error threshold $\epsilon_0>0$ such that for any $\epsilon\in\left[0,\epsilon_0\right]$, the following hold. \begin{enumerate} \item The \textit{optimal one-way} LOCC protocol for non-catalytic approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$ requires \textit{one} ebit of entanglement cost, that is, \begin{equation} \log_2 K = 1; \end{equation} \item There exists a \textit{two-way} LOCC protocol for non-catalytic approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$ achieving \textit{zero} entanglement cost, that is, \begin{equation} \log_2 K = 0 < 1. \end{equation} \end{enumerate} \end{theorem} \begin{figure}\label{fig:result} \end{figure} \begin{table}[t!]
\centering \caption[Is there a case where separation between one-way LOCC and two-way LOCC can be shown?]{\label{table:compare}Is there a case where separation between one-way LOCC and two-way LOCC can be shown? The separations are in terms of achievability of deterministic transformations between two fixed bipartite pure states, entanglement cost in state splitting, entanglement cost in state merging, distillable entanglement from bipartite mixed states, and success probability of local state discrimination among bipartite states. State merging provides the contrast between the asymptotic and one-shot scenarios.} \begin{tabular}{@{}lll@{}} \toprule task & asymptotic scenario & one-shot scenario \\ \midrule \begin{tabular}{@{}l@{}}state transformation (bipartite pure)\end{tabular} & No~\cite{B3}. & No~\cite{N2}.\\ state splitting & No~\cite{A2}. & No~(Theorem~\ref{thm:split}).\\ state merging & No~\cite{H3,H4}. & Yes~(Theorem~\ref{thm:result}).\\ \begin{tabular}{@{}l@{}} entanglement distillation \end{tabular} & Yes~\cite{B14}. & Yes~\cite{C2}.\\ \begin{tabular}{@{}l@{}}local state discrimination \end{tabular} & Yes~\cite{O3}. & Yes~\cite{G4,C4,O2,C5,N4,T7,T8,C3}.\\ \bottomrule \end{tabular} \end{table} Note that for proving Theorem~\ref{thm:result}, it is not sufficient to apply the proof techniques used for obtaining converse bounds of entanglement cost in state merging that are based on the monotonicity of entropic functions~\cite{H4,H10} or the majorization condition used in Chapter~\ref{sec:merge}, since these techniques are based on no-go theorems applicable to any LOCC map including two-way LOCC\@. The proof of Theorem~\ref{thm:result} requires a no-go theorem that is \textit{only applicable to one-way} LOCC and is \textit{provably false for two-way} LOCC, and hence, a proof technique different from these existing ones has to be established.
Regarding provable separation between one-way LOCC and two-way LOCC in achievability of a given task, only a few examples are known to date, such as entanglement distillation and local state discrimination, as shown in Table~\ref{table:compare}. Note that while the set of one-way LOCC maps is strictly included in that of two-way LOCC maps~\cite{C7}, this difference does not necessarily affect achievability of a given task; \textit{e.g.}, one-way LOCC suffices for deterministic transformations between two fixed bipartite pure states and for state splitting. Among the known separations, the separation in local state discrimination based on hypothesis testing was first proven in a one-shot scenario~\cite{O2}, and whether the separation survives in the corresponding asymptotic scenario remained open until Reference~\cite{O3} showed that it \textit{does survive}. In contrast to such known separations shown in both asymptotic and one-shot scenarios, Theorem~\ref{thm:result} on state merging provides a case where provable separation in a one-shot scenario \textit{does not asymptotically survive}, in the sense that one-way LOCC suffices in the corresponding asymptotic scenario. As another remark, this chapter evaluates the amount of \textit{initially} shared entanglement and does not allow catalytic use of this shared entanglement, for simplicity. Only a few tasks are known for which catalytic use of entanglement provably affects achievability, such as entanglement transformation~\cite{J1,E1}, distributed implementation of a nonlocal bipartite unitary~\cite{V4}, and local state discrimination~\cite{Y15}. While state merging can be regarded as a transformation of tripartite pure states, problems on state transformations in such a catalytic setting are hard to solve analytically in general, even in bipartite cases, as pointed out in Reference~\cite{J1}.
As for catalysts in state merging, even if a catalyst is allowed in the definition itself, asymptotic optimality can be achieved with a negligible amount of catalyst~\cite{H4}, while no quantitative study exists in one-shot scenarios. \section{\label{sec:proof}Interconnection between state merging and local state discrimination} To prove separation between one-way LOCC and two-way LOCC in a one-shot scenario of state merging in Theorem~\ref{thm:result}, local state discrimination is used. In local state discrimination, two parties $A$ and $B$ initially share an unknown state $\Ket{\psi_l}^{AB}$ drawn from a known set \begin{equation} {\left\{\Ket{\psi_l}^{AB}\right\}}_{l=0,\ldots,D-1} \end{equation} of $D$ orthogonal pure states, and the task aims to determine the index $l$ of $\Ket{\psi_l}^{AB}$ with unit probability by an LOCC measurement. Note that for the analysis in this chapter, it suffices to consider local state discrimination without using initially shared entangled resource states, while the generalization to that using resources of shared entanglement is straightforward, as discussed in References~\cite{C1,B11,A6,Z1,B7,G3,B8}. There exists a set of orthogonal pure states for which local state discrimination is not achievable by one-way LOCC but is achievable by two-way LOCC, which is called a $2$-LOCC set. References~\cite{N4,T7,T8} provide $2$-LOCC sets for any possible dimensional systems. State merging can be viewed as a generalized task of local state discrimination, in the sense that achievability of the former implies that of the latter.
Proposition~\ref{prp:equivalence} shows that if there exists a protocol achieving state merging of a tripartite state having the Schmidt decomposition \begin{equation} \Ket{\psi}^{RAB}\coloneq\frac{1}{\sqrt{D}}\sum_{l=0}^{D-1}\Ket{l}^R\otimes\Ket{\psi_l}^{AB} \end{equation} at zero entanglement cost, then this protocol transforms any superposition of the $D$ orthogonal states \begin{equation} {\left\{\Ket{\psi_l}^{AB}\right\}}_{l=0,\ldots,D-1} \end{equation} into that of \begin{equation} {\left\{\Ket{\psi_l}^{B^\prime B}\right\}}_{l=0,\ldots,D-1}\,, \end{equation} \textit{i.e.}, \begin{equation} \label{eq:relative} \sum_{l=0}^{D-1}\alpha_l\Ket{\psi_l}^{AB}\xrightarrow{\textup{LOCC}}\sum_{l=0}^{D-1}\alpha_l\Ket{\psi_l}^{B^\prime B}. \end{equation} Thus, local state discrimination for ${\left\{\Ket{\psi_l}^{AB}\right\}}_{l}$ can be achieved by first performing the protocol for state merging of $\Ket{\psi}^{RAB}$ to transform $\Ket{\psi_l}^{AB}$ into $\Ket{\psi_l}^{B^\prime B}$ for any $l$, and then performing $B$'s measurement for discriminating $B$'s orthogonal states ${\left\{\Ket{\psi_l}^{B^\prime B}\right\}}_{l}$. Note that a similar interconnection is also pointed out in the asymptotic scenario~\cite{A9}. In contrast, achievability of local state discrimination does not necessarily imply that of state merging if a protocol achieving local state discrimination uses a technique called \textit{elimination}, \textit{i.e.}, the measurement for excluding some of the possibilities of ${\left\{\Ket{\psi_l}^{AB}\right\}}_{l}$. For example, consider a set of states \begin{equation} \big\{\Ket{\psi_0}^{AB}\coloneq\Ket{0}^A\otimes\Ket{0}^B,\Ket{\psi_1}^{AB}\coloneq\Ket{0}^A\otimes\Ket{1}^B,\Ket{\psi_2}^{AB}\coloneq\Ket{1}^A\otimes\Ket{+}^B\big\}, \end{equation} where \begin{equation} \Ket{+}\coloneq\frac{1}{\sqrt{2}}\left(\Ket{0}+\Ket{1}\right). 
\end{equation} If $A$ eliminates some of the possibilities by a measurement in basis $\left\{\Ket{0},\Ket{1}\right\}$, $B$'s local measurement conditioned by $A$'s outcome can discriminate the remaining orthogonal states on $B$. In contrast, state merging of the corresponding tripartite state \begin{equation} \frac{1}{\sqrt{3}}\sum_{l=0}^{2}\Ket{l}^R\otimes\Ket{\psi_l}^{AB} \end{equation} is not achievable at zero entanglement cost due to the converse bound shown in Corollary~\ref{col:tractable_converse}. In this way, a protocol for local state discrimination using elimination does not generalize to that for state merging, because elimination destroys coherence between $R$ and the others. As for the known $2$-LOCC sets, two-way LOCC protocols shown in References~\cite{N4,T7,T8} for achieving local state discrimination require elimination, and hence, \textit{do not generalize} to state merging in a straightforward way. In contrast, the following analysis identifies a $2$-LOCC set for which a two-way LOCC protocol for local state discrimination can be constructed \textit{without elimination}, and the corresponding two-way LOCC protocol for state merging can also be constructed. 
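The elimination-based discrimination of the three-state example above can be checked numerically. The following is a minimal numpy sketch (not part of the thesis; the helper names are ours): $A$'s outcome $1$ identifies $\Ket{\psi_2}$ immediately, while outcome $0$ leaves $B$ with the orthogonal states $\Ket{0}$ and $\Ket{1}$.

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

# The three orthogonal product states from the elimination example.
psi = [np.kron(ket0, ket0), np.kron(ket0, ket1), np.kron(ket1, plus)]

# A's projective measurement in the basis {|0>, |1>} on the first system.
PA0 = np.kron(np.outer(ket0, ket0), np.eye(2))
PA1 = np.kron(np.outer(ket1, ket1), np.eye(2))
prob = lambda P, v: float(np.vdot(v, P @ v).real)

# Outcome 1 eliminates psi_0 and psi_1, so it identifies psi_2 directly.
assert np.isclose(prob(PA1, psi[0]), 0.0) and np.isclose(prob(PA1, psi[1]), 0.0)
# Outcome 0 eliminates psi_2; B's parts of the survivors are |0> and |1>,
# which B discriminates by a computational-basis measurement.
assert np.isclose(prob(PA0, psi[2]), 0.0)
print("elimination + B's conditional measurement discriminates all three states")
```

As the text notes, this elimination step destroys the coherence with $R$, which is why such a protocol does not lift to state merging.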
Consider a set ${\left\{\Ket{\psi_l}^{AB}\right\}}_{l=0,1,2}$ of three orthogonal states of $\mathbb{C}^{11}\otimes\mathbb{C}^{11}$, and define each state as \begin{equation} \label{eq:s} \begin{split} \Ket{\psi_0}^{AB}\coloneq&\sqrt{\frac{2}{11}}\Ket{\Phi_2^+}^{AB}\oplus\sqrt{\frac{9}{11}}\Ket{\Phi_9^+}^{AB},\\ \Ket{\psi_1}^{AB}\coloneq&\sqrt{\frac{2}{11}}\gamma_1 X_2^A\Ket{\Phi_2^+}^{AB}\oplus\sqrt{\frac{9}{11}}{\left(X_9^A\right)}^3\Ket{\Phi_9^+}^{AB},\\ \Ket{\psi_2}^{AB}\coloneq&\sqrt{\frac{2}{11}}\gamma_2 Z_2^A\Ket{\Phi_2^+}^{AB}\oplus\sqrt{\frac{9}{11}}{\left(X_9^A\right)}^6\Ket{\Phi_9^+}^{AB}, \end{split} \end{equation} where each subsystem is decomposed into subspaces \begin{equation} \mathbb{C}^{11}=\mathbb{C}^{2}\oplus\mathbb{C}^{9}, \end{equation} $X_k^A$ and $Z_k^A$ are the generalized Pauli operators on a subspace $\mathbb{C}^k$ of $A$'s system for $A$'s part of \begin{equation} \Ket{\Phi_k^+}^{AB}\coloneq\frac{1}{\sqrt{k}}\sum_{l=0}^{k-1}\Ket{l}^A\otimes\Ket{l}^B, \end{equation} and $\gamma_1$ and $\gamma_2$ are nonreal complex numbers satisfying \begin{align} &{\left|\gamma_1\right|}^2=1,\\ &{\left|\gamma_2\right|}^2=1,\\ &\gamma_2\neq\pm\textup{i}\gamma_1^2. \end{align} The corresponding tripartite state is \begin{equation} \label{eq:phi} \Ket{\psi}\coloneq\frac{1}{\sqrt{3}}\sum_{l=0}^{2}\Ket{l}^R\otimes\Ket{\psi_l}^{AB}, \end{equation} where ${\left\{\Ket{\psi_l}^{AB}\right\}}_{l=0,1,2}$ is defined as Equation~\eqref{eq:s}. This state $\Ket{\psi}^{RAB}$ yields Theorem~\ref{thm:result} as follows. \begin{proof}[Proof of the first statement in Theorem~\ref{thm:result}.]
The set \begin{equation} {\left\{\Ket{\psi_l}^{AB}\right\}}_{l=0,1,2} \end{equation} defined as Equation~\eqref{eq:s} is shown to be a $2$-LOCC set in Reference~\cite{N4}, and hence, impossibility of local state discrimination by one-way LOCC yields impossibility of \textit{exact} state merging in the non-catalytic setting of $\Ket{\psi}^{RAB}$ defined as Equation~\eqref{eq:phi} at zero entanglement cost by one-way LOCC\@. Since the set of one-way LOCC maps is compact, this impossibility of exact state merging in the non-catalytic setting by one-way LOCC implies that there exists a sufficiently small but nonzero error $\epsilon>0$ such that \textit{approximate} state merging in the non-catalytic setting of $\Ket{\psi}^{RAB}$ within $\epsilon$ is still impossible at zero entanglement cost by one-way LOCC\@. Note that the no-go theorem on local state discrimination by one-way LOCC in Reference~\cite{N4} does not generalize in a straightforward way to scenarios where catalytic use of entanglement is allowed, due to the fact that there may exist local state discrimination that is achievable at zero entanglement cost by using shared entanglement catalytically, but is not achievable without catalytic use of entanglement~\cite{Y20}. The rest of the proof constructs a one-way LOCC protocol for state merging of $\Ket{\psi}^{RAB}$ achieving one ebit of entanglement cost and zero error, \textit{i.e.}, \begin{equation} \label{eq:one_ebit} \begin{split} \log_2 K&=1,\\ F^2\left(\tilde\psi,\Ket{\psi}\Bra{\psi}\right)&=1, \end{split} \end{equation} based on the general protocol established in Theorem~\ref{thm:merge_without_catalyst} using the Koashi-Imoto decomposition. Note that this one-way LOCC protocol is less costly than the trivial protocol of performing quantum teleportation of $A$'s part of $\Ket{\psi}^{RAB}$ of an eleven-dimensional system.
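As a numerical sanity check of the definitions in Equation~\eqref{eq:s}, the following numpy sketch (ours, not from the thesis) builds the three states and verifies their orthonormality. The phases $\gamma_1=e^{\textup{i}\pi/4}$ and $\gamma_2=e^{\textup{i}\pi/3}$ are a sample choice of ours satisfying the stated constraints (nonreal, unit modulus, $\gamma_2\neq\pm\textup{i}\gamma_1^2$).

```python
import numpy as np

def X(k):  # generalized Pauli shift on C^k: X|l> = |l+1 mod k>
    return np.roll(np.eye(k), 1, axis=0)

def phi_plus(k):  # coefficient matrix M with |Phi_k^+> = sum_ij M[i,j] |i>|j>
    return np.eye(k) / np.sqrt(k)

g1, g2 = np.exp(1j * np.pi / 4), np.exp(1j * np.pi / 3)  # sample admissible phases

def embed(A2, A9):  # direct sum C^11 = C^2 (+) C^9 on each party, flattened to C^121
    M = np.zeros((11, 11), dtype=complex)
    M[:2, :2], M[2:, 2:] = A2, A9
    return M.flatten()

w2, w9 = np.sqrt(2 / 11), np.sqrt(9 / 11)
X2, X9 = X(2), X(9)
Z2 = np.diag([1.0, -1.0]).astype(complex)

psi = [
    embed(w2 * phi_plus(2), w9 * phi_plus(9)),
    embed(w2 * g1 * X2 @ phi_plus(2), w9 * np.linalg.matrix_power(X9, 3) @ phi_plus(9)),
    embed(w2 * g2 * Z2 @ phi_plus(2), w9 * np.linalg.matrix_power(X9, 6) @ phi_plus(9)),
]
G = np.array([[np.vdot(a, b) for b in psi] for a in psi])  # Gram matrix
assert np.allclose(G, np.eye(3))
print("the three states of Equation (eq:s) are orthonormal")
```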
While the general protocol shown in Theorem~\ref{thm:merge_without_catalyst} requires $\log_2 3$ ebits of entanglement cost for $\Ket{\psi}^{RAB}$, this protocol can be modified using a specific structure of $\Ket{\psi}^{RAB}$, to achieve one ebit of entanglement cost. The following construction of this protocol mainly discusses this specific part in the particular case of $\Ket{\psi}^{RAB}$. For brevity, define \begin{align} &\Ket{\Psi_0}\coloneq\Ket{\Phi_2^+},\\ &\Ket{\Psi_1}\coloneq\left(\gamma_1 X_2^A\otimes\mathbb{1}^B\right)\Ket{\Phi_2^+},\\ &\Ket{\Psi_2}\coloneq\left(\gamma_2 Z_2^A\otimes\mathbb{1}^B\right)\Ket{\Phi_2^+}. \end{align} Using Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}, the following Koashi-Imoto decomposition of $\Ket{\psi}^{RAB}$ is obtained. The Hilbert spaces $\mathcal{H}^A=\mathbb{C}^{11}$ of $A$ and $\supp\left(\psi^B\right)=\mathcal{H}^B=\mathbb{C}^{11}$ of $B$ are decomposed into \begin{equation} \label{eq:h_decomposition} \begin{split} &\mathcal{H}^A=\bigoplus_{j=0}^{3}\mathcal{H}^{a_j^\textup{R}},\\ &\mathcal{H}^B=\bigoplus_{j=0}^{3}\mathcal{H}^{b_j^\textup{R}}, \end{split} \end{equation} where \begin{align} &\dim\mathcal{H}^{a_0^\textup{R}}=\dim\mathcal{H}^{b_0^\textup{R}}=2,\\ &\dim\mathcal{H}^{a_1^\textup{R}}=\dim\mathcal{H}^{b_1^\textup{R}}=3,\\ &\dim\mathcal{H}^{a_2^\textup{R}}=\dim\mathcal{H}^{b_2^\textup{R}}=3,\\ &\dim\mathcal{H}^{a_3^\textup{R}}=\dim\mathcal{H}^{b_3^\textup{R}}=3. \end{align} Note that $\mathcal{H}^{a_j^\textup{L}}$ and $\mathcal{H}^{b_j^\textup{L}}$ in Lemma~\ref{lem:koashi_imoto_decomposition_tripartite} do not explicitly appear in the decomposition in Equation~\eqref{eq:h_decomposition}, since $\mathcal{H}^{a_j^\textup{L}}=\mathbb{C}$ and $\mathcal{H}^{b_j^\textup{L}}=\mathbb{C}$ for each $j\in\left\{0,\ldots,3\right\}$ in this case. 
The state $\Ket{\psi}^{RAB}$ is decomposed into \begin{equation} \Ket{\psi}^{RAB}=\sqrt{\frac{2}{11}}\Ket{\phi_0}^{Ra_0^\textup{R} b_0^\textup{R}}\oplus\bigoplus_{j=1}^{3}\sqrt{\frac{3}{11}}\Ket{\phi_j}^{Ra_j^\textup{R} b_j^\textup{R}}, \end{equation} where \begin{equation} \Ket{\phi_0}^{Ra_0^\textup{R} b_0^\textup{R}}\coloneq\sqrt{\frac{1}{3}}\sum_{l=0}^{2}\Ket{l}^R\otimes\Ket{\Psi_l}^{a_0^\textup{R} b_0^\textup{R}}, \end{equation} and for each $j\in\left\{1,2,3\right\}$, \begin{equation} \Ket{\phi_j}^{Ra_j^\textup{R} b_j^\textup{R}}\coloneq\sqrt{\frac{1}{9}}\sum_{l,m=0}^{2}\Ket{l}^R\otimes\Ket{l+m\bmod 3}^{a_j^\textup{R}}\otimes\Ket{m}^{b_j^\textup{R}}. \end{equation} While the definition of ${\left\{\Ket{\psi_l}^{AB}\right\}}_{l=0,1,2}$ in Equation~\eqref{eq:s} uses the decomposition of each system $\mathbb{C}^{11}=\mathbb{C}^2\oplus\mathbb{C}^9$, $\mathcal{H}^{a_0^\textup{R}}$ and $\mathcal{H}^{b_0^\textup{R}}$ in Equation~\eqref{eq:h_decomposition} correspond to $\mathbb{C}^2$, $\mathcal{H}^{a_1^\textup{R}}$ and $\mathcal{H}^{b_1^\textup{R}}$ in Equation~\eqref{eq:h_decomposition} correspond to a three-dimensional subspace of $\mathbb{C}^9$ spanned by $\left\{\Ket{0},\Ket{3},\Ket{6}\right\}$, $\mathcal{H}^{a_2^\textup{R}}$ and $\mathcal{H}^{b_2^\textup{R}}$ correspond to that by $\left\{\Ket{1},\Ket{4},\Ket{7}\right\}$, and $\mathcal{H}^{a_3^\textup{R}}$ and $\mathcal{H}^{b_3^\textup{R}}$ correspond to that by $\left\{\Ket{2},\Ket{5},\Ket{8}\right\}$. 
Introducing auxiliary systems $\mathcal{H}^{a_0}$ of $A$ and $\mathcal{H}^{b_0}$ of $B$, this decomposition can also be written as \begin{equation} \label{eq:psi_decomposition} \begin{split} \left(U^A\otimes U^B\right)\Ket{\psi}^{RAB} =\sqrt{\frac{2}{11}}\Ket{0}^{a_0}\otimes\Ket{0}^{b_0}\otimes\Ket{\phi_0}^{Ra^\textup{R} b^\textup{R}} +\sum_{j=1}^{3}\sqrt{\frac{3}{11}}\Ket{j}^{a_0}\otimes\Ket{j}^{b_0}\otimes\Ket{\phi_j}^{Ra^\textup{R} b^\textup{R}} \end{split} \end{equation} where \begin{align} &\dim\mathcal{H}^{a_0}=\dim\mathcal{H}^{b_0}=4,\\ &\dim\mathcal{H}^{a^\textup{R}}=\max_j\left\{\dim\mathcal{H}^{a_j^\textup{R}}\right\}=3,\\ &\dim\mathcal{H}^{b^\textup{R}}=\max_j\left\{\dim\mathcal{H}^{b_j^\textup{R}}\right\}=3, \end{align} $U^A$ is $A$'s local isometry from $\mathcal{H}^A$ to $\mathcal{H}^{a_0}\otimes\mathcal{H}^{a^\textup{R}}$, and $U^B$ is $B$'s local isometry from $\mathcal{H}^B$ to $\mathcal{H}^{b_0}\otimes\mathcal{H}^{b^\textup{R}}$. Using the Koashi-Imoto decomposition in the form of Equation~\eqref{eq:psi_decomposition}, the protocol for exact state merging shown in Theorem~\ref{thm:merge_without_catalyst} performs three subprocesses $1$, $2$, and $3$, which are combined using controlled measurements and controlled isometries. In the following, these three subprocesses in the case of $\Ket{\psi}^{RAB}$ are discussed. In particular, Subprocess~2 is modified using a specific structure of $\Ket{\psi}^{RAB}$ to achieve one ebit of entanglement cost. \textit{Subprocess 1:} The first subprocess is concerned with reduced states on $\mathcal{H}^{a_j^\textup{L}}\otimes\mathcal{H}^{b_j^\textup{L}}$, and since $\mathcal{H}^{a_j^\textup{L}}$ and $\mathcal{H}^{b_j^\textup{L}}$ do not explicitly appear in the decomposition in Equation~\eqref{eq:h_decomposition}, this subprocess is not performed in this case. 
\textit{Subprocess 2:} The second subprocess is for transferring $A$'s part of $\Ket{\phi_j}^{Ra^\textup{R} b^\textup{R}}$ to $B$, so that $\Ket{\phi_j}^{R{\left(b^\prime\right)}^\textup{R} b^\textup{R}}$ is obtained, where $\mathcal{H}^{{\left(b^\prime\right)}^\textup{R}}$ is $B$'s auxiliary system corresponding to $\mathcal{H}^{a^\textup{R}}$. While quantum teleportation is used for this subprocess in the proofs of Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} to provide a general protocol, there may exist cases where this subprocess can be achieved at less entanglement cost than performing quantum teleportation, as pointed out in Implication~\ref{ex:4}. As for the case of $\Ket{\psi}^{RAB}$, $\Ket{\phi_0}^{Ra^\textup{R} b^\textup{R}}$ is merged using quantum teleportation, which requires one ebit of an initially shared maximally entangled state $\Ket{\Phi_2^+}^{\overline{A}\overline{B}}$, where $\mathcal{H}^{\overline{A}}$ and $\mathcal{H}^{\overline{B}}$ are systems for the shared maximally entangled states of $A$ and $B$, respectively. If $\Ket{\phi_1}^{Ra^\textup{R} b^\textup{R}}$, $\Ket{\phi_2}^{Ra^\textup{R} b^\textup{R}}$, or $\Ket{\phi_3}^{Ra^\textup{R} b^\textup{R}}$ are also merged in the same way, $\log_2 3$ ebits are required. Instead, by performing $A$'s measurement on $\mathcal{H}^{a^\textup{R}}$ in the computational basis \begin{equation} {\left\{\Ket{m}^{a^\textup{R}}\right\}}_{m=0,1,2} \end{equation} followed by $B$'s isometry correction conditioned by $A$'s measurement outcome, no entanglement is required for merging $\Ket{\phi_1}^{Ra^\textup{R} b^\textup{R}}$, $\Ket{\phi_2}^{Ra^\textup{R} b^\textup{R}}$, and $\Ket{\phi_3}^{Ra^\textup{R} b^\textup{R}}$. 
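The zero-cost merging of $\Ket{\phi_j}$ ($j=1,2,3$) by $A$'s computational-basis measurement followed by $B$'s conditional isometry can be verified numerically. The sketch below (ours, not from the thesis) checks that every outcome $k$ yields the merged state with fidelity one, with $B$'s correction mapping $\Ket{m}^{b^\textup{R}}$ to $\Ket{\chi_{k-m}}$, where $\Ket{\chi_l}=\frac{1}{\sqrt{3}}\sum_m\Ket{l+m}\otimes\Ket{m}$.

```python
import numpy as np

d = 3
I = np.eye(d)
ket = lambda l: I[:, l]

# |phi_j>^{R a b} = (1/3) sum_{l,m} |l>_R |l+m mod 3>_a |m>_b  (j enters only via relabeling)
phi = np.zeros(d ** 3)
for l in range(d):
    for m in range(d):
        phi += np.kron(ket(l), np.kron(ket((l + m) % d), ket(m))) / d

target = phi  # the merged state |phi_j>^{R b' b} has the same coefficients, now on B's side

for k in range(d):  # A's outcome when measuring a^R in the computational basis
    proj = np.kron(I, np.kron(np.outer(ket(k), ket(k)), I))
    post = (proj @ phi).reshape(d, d, d).sum(axis=1)  # drop a^R: sum_l |l>_R |k-l>_b
    post = post.flatten() / np.linalg.norm(post.flatten())
    # B's correction isometry conditioned on k: |m>_b -> |chi_{k-m}> on b' (x) b
    W = np.zeros((d * d, d))
    for m in range(d):
        lval = (k - m) % d
        W[:, m] = sum(np.kron(ket((lval + mm) % d), ket(mm)) for mm in range(d)) / np.sqrt(d)
    out = np.kron(I, W) @ post
    assert np.isclose(abs(np.vdot(target, out)), 1.0)  # fidelity 1, no ebits consumed
print("each outcome k merges phi_j exactly at zero entanglement cost")
```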
However, to coherently combine Subprocess~2 for $\Ket{\phi_0}^{Ra^\textup{R} b^\textup{R}}$, $\Ket{\phi_1}^{Ra^\textup{R} b^\textup{R}}$, $\Ket{\phi_2}^{Ra^\textup{R} b^\textup{R}}$, and $\Ket{\phi_3}^{Ra^\textup{R} b^\textup{R}}$, one ebit of entanglement $\Ket{\Phi_2^+}^{\overline{A}\overline{B}}$ has to be consumed by $A$'s measurement on $\mathcal{H}^{\overline{A}}$ in the computational basis ${\left\{\Ket{m}^{\overline{A}}\right\}}_{m=0,1}$ followed by $B$'s isometry correction. Consequently, the LOCC map for Subprocess~2 can be written as a family of operators \begin{equation} {\left\{\Bra{j,m_2}\otimes\sigma_{j,m_2}\right\}}_{m_2} \end{equation} tracing out the post-measurement state of $A$, where $\Ket{0,m_2}$ and $\sigma_{0,m_2}$ correspond to ${\left(U_j^\prime\right)}^\dag\Ket{\Phi_{j,m_2}}$ and $\sigma_{j,m_2}$ in Subprocess~2 used for Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst} based on quantum teleportation, and for each $j\in\left\{1,2,3\right\}$, ${\left\{\Ket{j,m_2}\right\}}_{m_2}$ and $\sigma_{j,m_2}$ are the computational basis for $A$'s measurement and the isometry for $B$'s correction conditioned by $A$'s measurement outcome $m_2\,$, respectively. \textit{Subprocess 3:} The third subprocess is for merging states on $\mathcal{H}^{a_0}\otimes\mathcal{H}^{b_0}$, and this subprocess can be performed in the same way as in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst}. Combining these three subprocesses in the same way as in Theorems~\ref{thm:merge} and~\ref{thm:merge_without_catalyst}, the one-way LOCC protocol achieving Equation~\eqref{eq:one_ebit} is obtained, which yields the conclusion. \end{proof} \begin{proof}[Proof of the second statement in Theorem~\ref{thm:result}] The proof is by construction, and a two-way LOCC protocol for exact state merging of $\Ket{\psi}$ in the non-catalytic setting achieving zero entanglement cost \begin{equation} \log_2 K=0 \end{equation} is constructed.
This two-way LOCC protocol works as follows, and the explicit forms of measurements are shown later. While three maximally entangled two-qubit states \begin{equation} \left\{\Ket{\Phi_2^+}^{AB},\gamma_1 X_2^A\Ket{\Phi_2^+}^{AB},\gamma_2 Z_2^A\Ket{\Phi_2^+}^{AB}\right\} \end{equation} used in Equation~\eqref{eq:s} cannot be discriminated by any LOCC measurement by themselves~\cite{G10}, $B$ can perform an appropriate three-outcome measurement \begin{equation} {\left\{M_j^B\right\}}_{j=0,1,2}\,, \end{equation} so that the additional terms on $A$'s subspace $\mathbb{C}^9$ in Equation~\eqref{eq:s} become orthogonal. Using this orthogonality, $A$ can also perform an appropriate thirty-three-outcome measurement \begin{equation} {\left\{M_{k|j}^A\right\}}_{k=0,\ldots,32} \end{equation} conditioned by $B$'s measurement outcome $j$, so that for each measurement outcome $j$ and $k$ of the LOCC measurement \begin{equation} {\left\{M_{k|j}^A\otimes M_{j}^B\right\}}_{j,k}\,, \end{equation} orthogonal states ${\left\{\Ket{\psi_l}^{AB}\right\}}_{l=0,1,2}$ defined as Equation~\eqref{eq:s} are transformed into orthogonal states of $B$. Thus, $B$'s local isometry correction conditioned by $j$ and $k$ yields $\Ket{\psi}^{RB^\prime B}$. In the following, $B$'s measurement \begin{equation} {\left\{M_j^B\right\}}_{j=0,1,2} \end{equation} and $A$'s measurement \begin{equation} {\left\{M_{k|j}^A\right\}}_{k=0,\ldots,32} \end{equation} conditioned by $B$'s measurement outcome $j$ are shown explicitly. To present these measurements, consider that $\mathcal{H}^A$ and $\mathcal{H}^B$ are decomposed in the same way as Equation~\eqref{eq:s} for defining ${\left\{\Ket{\psi_l}^{AB}\right\}}_l\,$, that is, \begin{align} \mathcal{H}^A&=\mathbb{C}^2\oplus\mathbb{C}^9,\\ \mathcal{H}^B&=\mathbb{C}^2\oplus\mathbb{C}^9. 
\end{align} The measurement ${\left\{M_j^B\right\}}_{j=0,1,2}$ performed by $B$ is \begin{align} &M_0^B\coloneq\sqrt{\frac{1}{3}}\left(\Ket{0}\Bra{0}+\Ket{1}\Bra{1}\right)\oplus\left(\Ket{0}\Bra{0}+\Ket{1}\Bra{1}+\Ket{2}\Bra{2}\right),\\ &M_1^B\coloneq\sqrt{\frac{1}{3}}\left(\Ket{0}\Bra{0}+\Ket{1}\Bra{1}\right)\oplus\left(\Ket{3}\Bra{3}+\Ket{4}\Bra{4}+\Ket{5}\Bra{5}\right),\\ &M_2^B\coloneq\sqrt{\frac{1}{3}}\left(\Ket{0}\Bra{0}+\Ket{1}\Bra{1}\right)\oplus\left(\Ket{6}\Bra{6}+\Ket{7}\Bra{7}+\Ket{8}\Bra{8}\right), \end{align} where each operator on the right-hand side is on $\mathbb{C}^2\oplus\mathbb{C}^9$. This measurement satisfies the completeness condition \begin{equation} \sum_{j=0}^{2}M_j^\dag M_j=\mathbb{1}. \end{equation} As for $A$'s measurement ${\left\{M_{k|j}^A\right\}}_{k=0,\ldots,32}$ conditioned by $j\in\left\{0,1,2\right\}$, the case of $j=0$, that is, ${\left\{M_{k|0}^A\right\}}_{k=0,\ldots,32}\,$, is shown first, while a similar construction applies to the cases of $j=1,2$, as discussed later. 
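The completeness condition for ${\left\{M_j^B\right\}}_{j=0,1,2}$ can be verified directly. A short numpy sketch (ours), with basis indices $0,1$ spanning the $\mathbb{C}^2$ block and $2,\ldots,10$ spanning the $\mathbb{C}^9$ block of $\mathbb{C}^{11}$:

```python
import numpy as np

P2 = np.zeros((11, 11))
P2[0, 0] = P2[1, 1] = 1.0                     # projector onto the C^2 block

# Each M_j = sqrt(1/3)(|0><0| + |1><1|) (+) projector onto a triple of C^9.
triples = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]   # C^9 indices, offset by 2 in C^11
M = []
for triple in triples:
    Mj = np.sqrt(1 / 3) * P2.copy()
    for b in triple:
        Mj[2 + b, 2 + b] = 1.0
    M.append(Mj)

S = sum(Mj.conj().T @ Mj for Mj in M)
assert np.allclose(S, np.eye(11))             # sum_j M_j^dag M_j = identity
print("B's three-outcome measurement is complete")
```

Each outcome keeps the $\mathbb{C}^2$ block (rescaled) and projects the $\mathbb{C}^9$ block onto one triple, which is what makes the residual terms in Equation~\eqref{eq:s} orthogonal on $A$'s side.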
For brevity, define a bipartite pure state $\Ket{\Psi}\in\mathbb{C}^9\otimes\mathbb{C}^9$ with Schmidt rank three as \begin{equation} \Ket{\Psi}\coloneq\sqrt{\frac{1}{3}}\left(\Ket{0}\otimes\Ket{0}+\Ket{1}\otimes\Ket{1}+\Ket{2}\otimes\Ket{2}\right), \end{equation} and also define the Fourier-basis states of three-dimensional subspaces of $\mathbb{C}^9$ \begin{align} \Ket{\omega_{n}^{\left(0,4,8\right)}}&\coloneqq\frac{1}{\sqrt{3}}\Ket{0}+\frac{\exp\left(\frac{2\pi\textup{i} n}{3}\right)}{\sqrt{3}}\Ket{4}+\frac{\exp\left(\frac{4\pi\textup{i} n}{3}\right)}{\sqrt{3}}\Ket{8},\\ \Ket{\omega_{n}^{\left(1,5,6\right)}}&\coloneqq\frac{1}{\sqrt{3}}\Ket{1}+\frac{\exp\left(\frac{2\pi\textup{i} n}{3}\right)}{\sqrt{3}}\Ket{5}+\frac{\exp\left(\frac{4\pi\textup{i} n}{3}\right)}{\sqrt{3}}\Ket{6},\\ \Ket{\omega_{n}^{\left(2,3,7\right)}}&\coloneqq\frac{1}{\sqrt{3}}\Ket{2}+\frac{\exp\left(\frac{2\pi\textup{i} n}{3}\right)}{\sqrt{3}}\Ket{3}+\frac{\exp\left(\frac{4\pi\textup{i} n}{3}\right)}{\sqrt{3}}\Ket{7}, \end{align} where $n\in\left\{0,1,2\right\}$.
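The nine Fourier-basis states split into three triples, each of which must form an orthonormal basis of its three-dimensional subspace so that the 33-outcome measurement below is complete. A short numerical check (ours, not from the thesis), assuming the standard discrete-Fourier phases $\exp(2\pi\textup{i}\,nk/3)$:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # primitive third root of unity

def omega(n, triple):
    """(1/sqrt(3)) * sum_k w^{n k} |triple[k]>  as a vector in C^9."""
    v = np.zeros(9, dtype=complex)
    for k, idx in enumerate(triple):
        v[idx] = w ** (n * k) / np.sqrt(3)
    return v

triples = [(0, 4, 8), (1, 5, 6), (2, 3, 7)]
for t in triples:
    G = np.array([[np.vdot(omega(m, t), omega(n, t)) for n in range(3)]
                  for m in range(3)])
    assert np.allclose(G, np.eye(3))          # orthonormal within each triple

# Since the triples partition {0,...,8}, the nine states resolve the identity on C^9.
S = sum(np.outer(omega(n, t), omega(n, t).conj()) for t in triples for n in range(3))
assert np.allclose(S, np.eye(9))
print("Fourier-basis triples are orthonormal and complete on C^9")
```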
If $B$'s measurement outcome is $j=0$, the post-measurement state is \begin{equation} \Ket{\psi^{\left(0\right)}}^{RAB}=\frac{1}{\sqrt{3}}\sum_{l=0}^{2}\Ket{l}^R\otimes\Ket{\psi_l^{\left(0\right)}}^{AB}, \end{equation} where \begin{equation} \begin{split} \Ket{\psi_0^{\left(0\right)}}\coloneq&\sqrt{\frac{2}{11}}\Ket{\Phi_2^+}\oplus\sqrt{\frac{9}{11}}\Ket{\Psi}\\ =&\sqrt{\frac{1}{11}}\left(\Ket{0}\otimes\Ket{0}+\Ket{1}\otimes\Ket{1}\right)\oplus\sqrt{\frac{3}{11}}\left(\Ket{0}\otimes\Ket{0}+\Ket{1}\otimes\Ket{1}+\Ket{2}\otimes\Ket{2}\right),\\ \Ket{\psi_1^{\left(0\right)}}\coloneq&\sqrt{\frac{2}{11}}\left(\gamma_1 X_2\otimes\mathbb{1}\right)\Ket{\Phi_2^+}\oplus\sqrt{\frac{9}{11}}\left({\left(X_9\right)}^3\otimes\mathbb{1}\right)\Ket{\Psi}\\ =&\sqrt{\frac{1}{11}}\gamma_1\left(\Ket{1}\otimes\Ket{0}+\Ket{0}\otimes\Ket{1}\right)\oplus\sqrt{\frac{3}{11}}\left(\Ket{3}\otimes\Ket{0}+\Ket{4}\otimes\Ket{1}+\Ket{5}\otimes\Ket{2}\right),\\ \Ket{\psi_2^{\left(0\right)}}\coloneq&\sqrt{\frac{2}{11}}\left(\gamma_2 Z_2\otimes\mathbb{1}\right)\Ket{\Phi_2^+}\oplus\sqrt{\frac{9}{11}}\left({\left(X_9\right)}^6\otimes\mathbb{1}\right)\Ket{\Psi}\\ =&\sqrt{\frac{1}{11}}\gamma_2\left(\Ket{0}\otimes\Ket{0}-\Ket{1}\otimes\Ket{1}\right)\oplus\sqrt{\frac{3}{11}}\left(\Ket{6}\otimes\Ket{0}+\Ket{7}\otimes\Ket{1}+\Ket{8}\otimes\Ket{2}\right). \end{split} \end{equation} In this case, $A$'s measurement ${\left\{M_{k|0}^A\right\}}_{k=0,\ldots,32}$ is in the form of \begin{equation} M_{k|0}\coloneq\Bra{\phi_{k|0}}, \end{equation} where $k\in\left\{0,\ldots,32\right\}$, the post-measurement state of $A$ is traced out, and $\Ket{\phi_{k|0}}\in\mathbb{C}^2\oplus\mathbb{C}^9$ is an unnormalized vector. 
Each $\Ket{\phi_{k|0}}$ is defined as \begin{align} \Ket{\phi_{0 |0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{0}+\Ket{4}-\overline{\gamma_2}\Ket{6}\right),\\ \Ket{\phi_{1 |0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{0}+\Ket{4}-\overline{\gamma_2}\Ket{6}\right),\\ \Ket{\phi_{2 |0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{0}+\Ket{4}-\overline{\gamma_2}\Ket{6}\right),\\ \Ket{\phi_{3 |0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{0}+\Ket{4}-\overline{\gamma_2}\Ket{6}\right),\\ \Ket{\phi_{4 |0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{0}-\Ket{4}-\overline{\gamma_2}\Ket{6}\right),\\ \Ket{\phi_{5 |0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{0}-\Ket{4}-\overline{\gamma_2}\Ket{6}\right),\\ \Ket{\phi_{6 |0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{0}-\Ket{4}-\overline{\gamma_2}\Ket{6}\right),\\ \Ket{\phi_{7 |0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{0}-\Ket{4}-\overline{\gamma_2}\Ket{6}\right),\\ \Ket{\phi_{8 |0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{1}+\Ket{5}-\overline{\gamma_2}\Ket{7}\right),\\ \Ket{\phi_{9 |0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{1}+\Ket{5}-\overline{\gamma_2}\Ket{7}\right),\\ \Ket{\phi_{10|0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{1}+\Ket{5}-\overline{\gamma_2}\Ket{7}\right),\\ \Ket{\phi_{11|0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{1}+\Ket{5}-\overline{\gamma_2}\Ket{7}\right),\\ \Ket{\phi_{12|0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{1}-\Ket{5}-\overline{\gamma_2}\Ket{7}\right),\\ \Ket{\phi_{13|0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( 
\Ket{1}-\Ket{5}-\overline{\gamma_2}\Ket{7}\right),\\ \Ket{\phi_{14|0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{1}-\Ket{5}-\overline{\gamma_2}\Ket{7}\right),\\ \Ket{\phi_{15|0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{1}-\Ket{5}-\overline{\gamma_2}\Ket{7}\right),\\ \Ket{\phi_{16|0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{2}+\Ket{3}-\overline{\gamma_2}\Ket{8}\right),\\ \Ket{\phi_{17|0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{2}+\Ket{3}-\overline{\gamma_2}\Ket{8}\right),\\ \Ket{\phi_{18|0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{2}+\Ket{3}-\overline{\gamma_2}\Ket{8}\right),\\ \Ket{\phi_{19|0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{2}+\Ket{3}-\overline{\gamma_2}\Ket{8}\right),\\ \Ket{\phi_{20|0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{2}-\Ket{3}-\overline{\gamma_2}\Ket{8}\right),\\ \Ket{\phi_{21|0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{0}&\oplus\sqrt{\frac{1}{36}}\left( \Ket{2}-\Ket{3}-\overline{\gamma_2}\Ket{8}\right),\\ \Ket{\phi_{22|0}} &\coloneqq& \sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{2}-\Ket{3}-\overline{\gamma_2}\Ket{8}\right),\\ \Ket{\phi_{23|0}} &\coloneqq&-\sqrt{\frac{3}{36}}\Ket{1}&\oplus\sqrt{\frac{1}{36}}\left(-\Ket{2}-\Ket{3}-\overline{\gamma_2}\Ket{8}\right),\\ \Ket{\phi_{24|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{0}^{\left(0,4,8\right)}},\\ \Ket{\phi_{25|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{1}^{\left(0,4,8\right)}},\\ \Ket{\phi_{26|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{2}^{\left(0,4,8\right)}},\\ \Ket{\phi_{27|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{0}^{\left(1,5,6\right)}},\\ 
\Ket{\phi_{28|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{1}^{\left(1,5,6\right)}},\\ \Ket{\phi_{29|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{2}^{\left(1,5,6\right)}},\\ \Ket{\phi_{30|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{0}^{\left(2,3,7\right)}},\\ \Ket{\phi_{31|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{1}^{\left(2,3,7\right)}},\\ \Ket{\phi_{32|0}}&\coloneqq&\boldsymbol{0}&\oplus\sqrt{\frac{28}{36}}\Ket{\omega_{2}^{\left(2,3,7\right)}}, \end{align} where $\boldsymbol{0}$ is the zero vector on $\mathbb{C}^2$. This measurement satisfies the completeness condition \begin{equation} \sum_{k=0}^{32}M_{k|0}^\dag M_{k|0}=\mathbb{1}. \end{equation} Similarly, the other measurements for $A$ conditioned by $B$'s measurement outcomes $j=1$ and $j=2$, that is, ${\left\{M_{k|1}^A\right\}}_{k=0,\ldots,32}$ and ${\left\{M_{k|2}^A\right\}}_{k=0,\ldots,32}\,$, respectively, are defined for each $k\in\left\{0,\ldots,32\right\}$ as \begin{align} M_{k|1}&\coloneq M_{k|0}\left(\mathbb{1}\oplus{\left(X_9\right)}^3\right),\\ M_{k|2}&\coloneq M_{k|0}\left(\mathbb{1}\oplus{\left(X_9\right)}^6\right), \end{align} where $\mathbb{1}$ in the direct sum is the identity operator on $\mathbb{C}^2$, so that the conjugating operator is unitary. These measurements satisfy the completeness condition \begin{equation} \sum_{k=0}^{32}M_{k|j}^\dag M_{k|j}=\mathbb{1}, \end{equation} for each $j\in\left\{1,2\right\}$. In the two-way LOCC protocol for exact state merging of $\Ket{\psi}^{RAB}$ in the non-catalytic setting at zero entanglement cost, $B$ first performs the measurement ${\left\{M_j^B\right\}}_{j=0,1,2}\,$, and the measurement outcome $j$ is sent by classical communication from $B$ to $A$. Conditioned by $j$, the measurement ${\left\{M_{k|j}^A\right\}}_{k=0,\ldots,32}$ is performed by $A$, and the measurement outcome $k$ is sent by classical communication from $A$ to $B$.
After this LOCC measurement ${\left\{M_{k|j}^A\otimes M_j^B\right\}}_{j,k}$ by $A$ and $B$, for any pair of measurement outcomes $j\in\left\{0,1,2\right\}$ and $k\in\left\{0,\ldots,32\right\}$, the post-measurement state \begin{equation} \frac{\left(M_{k|j}^A\otimes M_j^B\right)\Ket{\psi}^{RAB}}{\left\|\left(M_{k|j}^A\otimes M_j^B\right)\Ket{\psi}^{RAB}\right\|} \end{equation} is a maximally entangled state with Schmidt rank three between $R$ and $B$. Therefore, $B$ performs a local isometry conditioned by $j$ and $k$ to transform this maximally entangled state into $\Ket{\psi}^{RB^\prime B}$. This protocol yields the conclusion. \end{proof} \section{\label{sec:cost}Interpretation of entanglement cost in quantum state merging} This section discusses how entanglement cost in state merging can be interpreted. As Theorem~\ref{thm:result} shows that entanglement cost in state merging under one-way LOCC and that under two-way LOCC are different in a one-shot regime, the entanglement cost under two-way LOCC cannot be interpreted based only on one-way communication in analogy to classical source coding with $B$'s side information. But given the interconnection between the tasks of state merging and local state discrimination, these tasks can be interpreted as \textit{distributed decoding} of information encoded in an initially shared state. This section first summarizes the difference in properties of $B$'s side information in asymptotic and one-shot scenarios of state merging, and then provides another interpretation of state merging based on distributed decoding. References~\cite{H3,H4} interpret the minimal entanglement cost in the asymptotic scenario of state merging as \textit{partial quantum information conditioned by $B$'s prior quantum information}. Consider three parties, namely, $A$, $B$, and $R$, and any tripartite pure state $\Ket{\psi}^{RAB}$ shared among $A$, $B$, and $R$.
Define a measure of partial quantum information conditioned by $B$'s prior quantum information for $\Ket{\psi}^{RAB}$ as the rate of the minimal entanglement cost in the asymptotic scenario of state merging of $\Ket{\psi}^{RAB}$, which is given by the conditional quantum entropy ${H\left(A|B\right)}_\psi$~\cite{H3,H4}. Here, let $A$ and $B$ perform a class of operations consisting of any local preprocessing of $B$'s prior quantum information of $\psi^B$ and backward classical communication from $B$ to $A$, which is a subclass of LOCC\@. The following proposition shows that ${H\left(A|B\right)}_\psi$ is monotonically nondecreasing on average under this class of operations, and the proof is given in Appendix~\ref{sec:monotonic}. A similar monotonic property of conditional quantum entropy induced by measurements is also discussed in Reference~\cite{C10}. Note that while Reference~\cite{H4} discusses a case where the conditional quantum entropy is decreased by adding quantum side information to $A$, this case in Reference~\cite{H4} is different from the case discussed here, since in Reference~\cite{H4}, entanglement between $A$ and $B$ can be increased by adding the quantum side information to $A$, whereas in the case discussed here, entanglement between $A$ and $B$ cannot be increased by LOCC\@.
\begin{proposition} \label{prp:monotonic} \textit{Monotonic property of partial quantum information in the asymptotic scenario.} Given any state $\Ket{\psi}^{RAB}$, for any operation by $A$ and $B$ represented as \begin{equation} \label{eq:backward} {\left\{U_j^A\otimes M_j^B\right\}}_j\,, \end{equation} where ${\left\{M_j^B\right\}}_j$ is $B$'s measurement for preprocessing satisfying the completeness condition $\sum_j M_j^\dag M_j=\mathbb{1}$, and $U_j^A$ is $A$'s isometry conditioned by $B$'s measurement outcome $j$ sent by backward classical communication from $B$ to $A$, it holds that \begin{equation} {H\left(A|B\right)}_\psi\leqq\sum_j p\left(j\right){H\left(A|B\right)}_{\psi_j}\,, \end{equation} where $\Ket{\psi_j}^{RAB}$ is the post-measurement state corresponding to $j$, that is, \begin{align} \Ket{\psi_j}^{RAB}&\coloneq\sqrt{\frac{1}{p\left(j\right)}}\left(\mathbb{1}^R\otimes U_j^A\otimes M_j^B\right)\Ket{\psi}^{RAB},\\ p\left(j\right)&\coloneq{\left\|\left(\mathbb{1}^R\otimes U_j^A\otimes M_j^B\right)\Ket{\psi}^{RAB}\right\|}^2. \end{align} \end{proposition} In contrast to this asymptotic scenario, Theorem~\ref{thm:result} indicates that entanglement cost in a one-shot scenario of state merging can be strictly decreased by the class of operations shown in Equation~\eqref{eq:backward}. In the asymptotic scenario, Proposition~\ref{prp:monotonic} shows that the ability of performing $B$'s preprocessing and backward classical communication in two-way LOCC does not contribute to increasing $B$'s \textit{prior quantum information} from $\psi^B$. In contrast, Theorem~\ref{thm:result} can be interpreted to say that, in a one-shot scenario, the same ability may increase $B$'s \textit{prior quantum information}. In this sense, these interpretations provide notions of $B$'s \textit{prior quantum information} having \textit{different} properties depending on scenarios. 
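Proposition~\ref{prp:monotonic} can be probed numerically on random instances. The following sketch (ours, not from the thesis) takes a random pure three-qubit state, a two-outcome commuting measurement on $B$ for preprocessing, and trivial isometries $U_j^A=\mathbb{1}$ (all our sample choices), and checks that $H(A|B)$ does not decrease on average.

```python
import numpy as np
rng = np.random.default_rng(0)

def cond_entropy(psi, dims):
    """H(A|B) = H(AB) - H(B) for a pure state psi on R (x) A (x) B."""
    dR, dA, dB = dims
    rho = np.outer(psi, psi.conj()).reshape(dR, dA * dB, dR, dA * dB)
    rhoAB = np.trace(rho, axis1=0, axis2=2)                  # trace out R
    rhoB = np.trace(rhoAB.reshape(dA, dB, dA, dB), axis1=0, axis2=2)
    H = lambda r: -sum(x * np.log2(x) for x in np.linalg.eigvalsh(r) if x > 1e-12)
    return H(rhoAB) - H(rhoB)

dR = dA = dB = 2
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)

# B's preprocessing: M_0 = sqrt(D), M_1 = sqrt(1 - D), a valid two-outcome measurement.
D = np.diag(rng.uniform(0, 1, size=dB))
Ms = [np.sqrt(D), np.sqrt(np.eye(dB) - D)]

lhs = cond_entropy(psi, (dR, dA, dB))
rhs = 0.0
for Mj in Ms:
    v = np.kron(np.eye(dR * dA), Mj) @ psi
    p = np.linalg.norm(v) ** 2
    rhs += p * cond_entropy(v / np.linalg.norm(v), (dR, dA, dB))
assert lhs <= rhs + 1e-9   # H(A|B) is nondecreasing on average
print("monotonicity holds on this random instance")
```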
However, state merging can also be viewed in another way, based on the interconnection between state merging and local state discrimination. In local state discrimination for ${\left\{\Ket{\psi_l}^{AB}\right\}}_l\,$, the index $l$ can be regarded as classical information encoded in $\Ket{\psi_l}^{AB}$, and local state discrimination aims to decode this classical information by LOCC\@. In the same way, state merging of $\Ket{\psi}^{RAB}$ can also be regarded as distributed decoding of \textit{quantum} information by entanglement-assisted LOCC, in the sense that a protocol for state merging decodes an arbitrary superposition of nonlocally shared states into the same superposition of $B$'s states, as shown in Formula~\eqref{eq:relative}. This view of state merging as distributed decoding is generalized to more than two parties in Chapter~\ref{sec:distributed_encoding_decoding}. Such information may be \textit{nonlocally} encoded in the shared quantum state~\cite{B6,B19}, in the sense that neither $A$ nor $B$ has local access to the nonlocally encoded information. From this viewpoint, the minimal entanglement cost in state merging can be interpreted to characterize a nonlocal property of the map \begin{equation} \mathcal{D}^{AB\to B}\left(\sum_l\alpha_l\Ket{\psi_l}^{AB}\right)=\sum_l\alpha_l\Ket{l}^{B} \end{equation} for decoding quantum information initially encoded in $\sum_l\alpha_l\Ket{\psi_l}^{AB}$, where $\sum_l\alpha_l\Ket{l}^{B}$ is locally unitarily equivalent to $\sum_l\alpha_l\Ket{\psi_l}^{B^\prime B}$. This map $\mathcal{D}^{AB\to B}$ is an isometry that can be defined for any given $\Ket{\psi}^{RAB}$. 
Note that if catalytic use of entanglement is allowed, negative entanglement cost can also be viewed as a net gain of shared entanglement from the redundant part of $\sum_l\alpha_l\Ket{\psi_l}^{AB}$ as discussed in Chapter~\ref{sec:merge}, and the gained entanglement can be used as a resource for distributed decoding in the future in the same way as the conventional interpretation~\cite{H3,H4}. \part{\label{part:2}Operational analysis of multipartite quantum entanglement in distributed quantum information processing} \chapter{Background and overview of Part~\ref{part:2}} This part is motivated by the following previous studies on state constructions and transformations using entanglement-assisted LOCC\@. In Reference~\cite{Y6} and the master thesis~\cite{Y18} of the author of this thesis, the task of constructing a multipartite entangled state shared among spatially separated parties from a separable state has been analyzed, using a given inter-party network for quantum communication. In the framework of local operations and classical communication (LOCC), single use of a noiseless quantum channel and that of a maximally entangled state are of equivalent cost by means of quantum teleportation simulating quantum communication~\cite{B5}. For a bipartite state, the minimal amount of quantum communication required for preparing the state provides a well-established entanglement measure quantifying a nonlocal property of the state, called the \textit{entanglement cost} of the state~\cite{B3,H1,T1}. The entanglement cost of a bipartite \textit{state} also generalizes to that required for spatially separated parties implementing a given nonlocal state \textit{transformation}, such as nonlocal unitaries~\cite{Z,E,C15,C16,N,Y5,C17,Y,S,S16,Y4,Y2,S17,X1,C18,V,Y3,W8,W9,W10,W12} and nonlocal measurements~\cite{J,B24,B25}, although this generalization usually involves challenging optimization and has been analyzed only in special cases to date. 
Another direction is the generalization from a \textit{bipartite} state to a \textit{multipartite} state~\cite{Y6,G9,Y7}, although the analysis of multipartite entanglement is also challenging~\cite{E2,W3,B26}. To characterize multipartite entanglement in terms of quantum communication, Reference~\cite{Y6} formulates the amount of quantum communication required for preparing the multipartite state shared among parties using a network of the noiseless quantum channels, establishing a characterization called the \textit{graph-associated entanglement cost} of multipartite states. In this part, after providing preliminaries in Chapter~\ref{sec:preliminaries_2}, the following two results are presented in Chapters~\ref{sec:distributed_encoding_decoding} and~\ref{sec:multipartite}. \section*{Distributed encoding and decoding of quantum information over networks} Encoding and decoding quantum information in a multipartite quantum system are fundamental building blocks in quantum information processing. In particular, quantum error correcting codes~\cite{G,D,T10,B} require such encoding and decoding between a logical state and an entangled physical state of a multipartite system. Quantum information is represented by this logical state, and the encoding and decoding are the inverse transformations of each other, mathematically represented by isometries. These types of encoding and decoding have to be performed so that the coherence of these states is kept; that is, an arbitrary superposition of the logical state should be preserved without revealing the classical description of the logical state. In addition to quantum information processing, the concept of encoding and decoding nowadays has interdisciplinary roles in analyzing many-body quantum systems exhibiting nonlocal features, such as topological order in quantum phases of matter~\cite{K,K2}, the holographic principle in quantum gravity~\cite{A,P}, and the eigenstate thermalization hypothesis in statistical physics~\cite{F}. 
\begin{figure}\label{fig:intro} \end{figure} Encoding and decoding are also indispensable for distributed quantum information processing, where spatially separated parties connected by a network for quantum communication cooperate in achieving an information processing task. In particular, encoding and decoding are crucial for some multiparty cryptographic tasks such as quantum secret sharing~\cite{B21,C,G7}. In such distributed settings, a multipartite system for encoding a logical state is distributed among the spatially separated parties. In this case, encoding and decoding are nonlocal transformations over all the parties, and the nonlocal properties of the transformations for encoding and decoding lead to costs in implementing the encoding and decoding. In Chapter~\ref{sec:distributed_encoding_decoding}, entanglement costs characterizing the nonlocal properties of transformations for encoding and decoding are formalized. Consider a setting where $N$ parties are connected by a network of the noiseless quantum channels, as illustrated in Figure~\ref{fig:intro}. The network topology is represented by a graph~\cite{B22} in terms of vertices and edges. Any connected network of $N$ parties requires at least $N-1$ channels. If an $N$-vertex connected graph has exactly $N-1$ edges, the graph is called a tree. Using the network, the parties can spread and concentrate quantum information of \textit{unknown} states so as to encode and decode quantum information in a distributed system according to a given isometry representing the encoding and decoding. The amount of quantum communication required for spreading and concentrating quantum information over the network characterizes nonlocal properties of the isometry. 
Due to the equivalence between the noiseless quantum channel and the maximally entangled state, a collection of maximally entangled states distributed according to the network topology comprises the resource state for spreading and concentrating quantum information by LOCC\@. It is assumed that LOCC is free, and motivated by quantum communication on networks, Chapter~\ref{sec:distributed_encoding_decoding} considers this type of initial resource state consisting of bipartite entanglement. The minimal total amount of quantum communication is evaluated in terms of the entanglement entropy of the maximally entangled state for each edge; these minimal amounts are called the \textit{entanglement costs in spreading and concentrating quantum information}. The entanglement cost in spreading quantum information characterizes the encoding, and that in concentrating characterizes the decoding. Chapter~\ref{sec:distributed_encoding_decoding} evaluates the entanglement costs in spreading and concentrating quantum information over any given tree-topology network for an \textit{arbitrarily} given isometry, which differs from the works presented in References~\cite{F3,S15} for implementing \textit{particular} isometries in the context of quantum secret sharing. To analyze the entanglement costs, spreading and concentrating quantum information are reduced to sequential applications of exact state merging and splitting for two parties, defined in Sections~\ref{sec:merge_achievability_exact} and~\ref{sec:split}, respectively, as illustrated in Figure~\ref{fig:intro}. Regarding spreading quantum information, exact state splitting is used to provide a protocol and to derive the optimal entanglement cost in spreading quantum information, which is given in terms of the rank of a state defined with respect to each edge of the given tree. Another protocol is also shown for achieving concentrating quantum information. 
In particular, using exact state merging, the entanglement cost in concentrating quantum information can be reduced compared to that in spreading quantum information. During spreading and concentrating \textit{quantum} information, coherence has to be kept, and this point is contrasted with encoding and decoding \textit{classical} information in quantum states shared among multiple parties, investigated in the context of a type of quantum secret sharing based on local state discrimination~\cite{C14,R,Y10,W6,B23,L3}. The protocols for spreading and concentrating quantum information are applicable to any isometry representing encoding and decoding, and they provide a protocol for one-shot distributed source compression~\cite{D8,D9,A8} applicable to arbitrarily small-dimensional systems, as well as a general protocol for LOCC-assisted decoding of shared quantum information studied in the context of quantum secret sharing~\cite{G8}. \section*{When does multipartite entanglement outperform bipartite entanglement?} In a distributed setting where spatially separated parties can freely perform LOCC, any multipartite entangled state can be prepared by LOCC from initially distributed bipartite entangled states among the parties, using quantum teleportation~\cite{B5} or the less costly protocol established in Reference~\cite{Y6}. In this regard, even if multipartite entanglement is to be used for a multiparty task in distributed information processing, initially sharing bipartite entangled states is sufficient, and hence, it would be natural to doubt whether multipartite entanglement is necessary for performing some tasks by LOCC\@. Aiming at showing that multipartite entanglement is still indispensable for distributed quantum information processing, Chapter~\ref{sec:multipartite} provides \textit{nontrivial} examples demonstrating the difference in capability between entangled resource states exhibiting multipartite entanglement and those consisting only of bipartite entangled states. 
This difference arises when there exists a limitation on the size of each party's local quantum system, that is, the dimension of the Hilbert space representing the local quantum system. The comparison between bipartite and multipartite entanglement as resources for distributed quantum information processing presented in Chapter~\ref{sec:multipartite} is motivated by technological limitations on the number of qubits which can be stored in one quantum device, and is different from the comparison in the context of quantum key distribution~\cite{E3,P5}, since the cost of LOCC is considered to be negligible in Chapter~\ref{sec:multipartite}. The difference can also be shown in a trivial example of qubits as follows. Consider three parties $A$, $B$, and $C$ sharing two Bell states, that is, two two-qubit maximally entangled states \begin{equation} \Ket{\Phi_2^+}^{AB}\otimes\Ket{\Phi_2^+}^{BC}, \end{equation} one of which is shared between $A$ and $B$, and the other of which is shared between $B$ and $C$. These two Bell states as a whole are regarded as a state consisting of bipartite entangled states. In this case, once these two Bell states are given to the parties, the parties can transform the two Bell states by LOCC into any three-qubit state shared among $A$, $B$, and $C$, such as the three-qubit Greenberger-Horne-Zeilinger (GHZ) state \begin{equation} \Ket{\textup{GHZ}}\coloneq\frac{1}{\sqrt{2}}\left(\Ket{000}+\Ket{111}\right), \end{equation} and the three-qubit $W$ state \begin{equation} \Ket{W}\coloneq\frac{1}{\sqrt{3}}\left(\Ket{100}+\Ket{010}+\Ket{001}\right), \end{equation} both of which can be regarded as states exhibiting multipartite entanglement. 
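Both states can be written down directly; the following minimal sketch (assuming NumPy; the helper names are hypothetical) builds $\Ket{\textup{GHZ}}$ and $\Ket{W}$ and computes the entanglement entropy of one qubit with respect to the other two, which is $1$ bit for the GHZ state and $\log_2 3-2/3\approx 0.918$ bits for the $W$ state.

```python
import numpy as np

def reduced_entropy_first_qubit(psi, n):
    """Entanglement entropy (bits) of qubit 1 vs. the rest of an n-qubit pure state."""
    m = psi.reshape(2, 2 ** (n - 1))           # matrix for the bipartition 1 | 2...n
    s = np.linalg.svd(m, compute_uv=False)     # Schmidt coefficients
    p = s ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

ghz = np.zeros(8)
ghz[[0, 7]] = 1 / np.sqrt(2)                   # (|000> + |111>)/sqrt(2)
w = np.zeros(8)
w[[1, 2, 4]] = 1 / np.sqrt(3)                  # (|001> + |010> + |100>)/sqrt(3)

print(round(reduced_entropy_first_qubit(ghz, 3), 6))   # -> 1.0
print(round(reduced_entropy_first_qubit(w, 3), 6))     # log2(3) - 2/3, about 0.918
```

The single-qubit reduced state of $\Ket{W}$ is $\operatorname{diag}(2/3,1/3)$, which gives the non-maximal entropy above.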
However, if each party's local system size is limited to one qubit, the parties cannot store any state consisting of a collection of bipartite entangled states to obtain $\Ket{\textup{GHZ}}$ and $\Ket{W}$ by LOCC, while the parties can still store one of these states exhibiting multipartite entanglement as a resource for performing some tasks by LOCC\@. \begin{figure}\label{fig:multipartite_intro} \end{figure} Apart from the above trivial example of qubits, Chapter~\ref{sec:multipartite} aims to demonstrate the difference even in cases where the size of local systems of some parties is not limited to one qubit. As illustrated in Figure~\ref{fig:multipartite_intro}, two settings of tasks aiming at preparing states in a target set from a common resource state~\cite{S18,G2} by LOCC are considered for differentiating capabilities of entangled states only consisting of a collection of bipartite entangled states and those exhibiting multipartite entanglement. The tasks are called \textit{system-size-limited quantum state preparation}, where one of the two settings is called a \textit{static} setting, and the other is called a \textit{dynamic} setting. In the static setting, each party's local system size is limited, and a common resource state for a given target set is stored within this limitation. For a given target set of states of a multipartite system in general, there may not exist any common resource state in the multipartite system itself transformable by LOCC into all the states in the set. In particular, given a multipartite system where each local dimension is $d$, almost no LOCC transformation among pure states of the system is possible~\cite{V1,S1,S2,M1,G1,S3}. This fact implies that, in general, a common resource state for a set of multipartite states may be a state of a higher-dimensional system than that for the set itself. 
If there is a limitation on each party's local system size, it may not be possible for the parties to store an entangled state of a higher-dimensional system serving as a common resource state. Despite the efforts to understand properties of multipartite entanglement~\cite{E2,W3,B26}, general quantitative conditions of the smallest system size for common resource states have not yet been established. Chapter~\ref{sec:multipartite} provides nontrivial examples where, within a given limitation on local system sizes, the preparation of a state in a given target set is \textit{not} achievable by any common resource state consisting of a collection of bipartite entangled states, but it is achievable by a common resource state exhibiting multipartite entanglement. In contrast to previous studies on the LOCC convertibility between multipartite pure states of the \textit{same}-dimensional systems~\cite{V1,S1,S2,M1,G1,S3,S13,V6,T2,T4,T6}, this analysis requires LOCC transformations from a common resource state of a \textit{higher}-dimensional Hilbert space into a set of states of a \textit{lower}-dimensional Hilbert space. The examples show the difference in the capabilities between these two types of common resource states, namely, those consisting of bipartite entanglement and those exhibiting multipartite entanglement. As for the dynamic setting, in addition to considering a limitation on local system sizes for storing a common resource state, the parties by themselves prepare the common resource state within this limitation by performing quantum communication. Some of the common resource states exhibiting multipartite entanglement analyzed in the static setting can be prepared within the limitation using quantum communication. Hence, temporal uses of bipartite quantum communication resources are still sufficient for preparing such common resource states. 
In contrast, Chapter~\ref{sec:multipartite} also shows other examples of states exhibiting multipartite entanglement that can be stored but cannot be prepared within a limitation on local system sizes, indicating that the common resource state used in the dynamic setting has an intermediate capability between those consisting of bipartite entanglement and those exhibiting multipartite entanglement in the static setting. \chapter{\label{sec:preliminaries_2}Preliminaries to Part~\ref{part:2}} This chapter provides preliminaries to Part~\ref{part:2}. Section~\ref{sec:network} models networks in distributed quantum information processing as a collection of bipartite entangled states. Notation for a class of networks having tree topology is summarized in Section~\ref{sec:tree} for later use. These definitions are based on References~\cite{Y6,Y13}. For this class of tree-topology networks, the results on constructing multipartite states investigated in Reference~\cite{Y6} are summarized in Section~\ref{sec:construction}. \section{\label{sec:network}Quantum networks and entangled states consisting of bipartite entanglement} A network of quantum communication channels among $N$ parties is represented by a graph~\cite{B22}. Let \begin{equation} G=(V(G),E(G)) \end{equation} denote a simple undirected graph representing the restriction on quantum communication. To perform arbitrary entanglement transformations over $N$ parties, $G$ has to be a connected graph. The arguments of $V(G)$ and $E(G)$ may be omitted, so that the graph is simply written as \begin{equation} G=(V,E), \end{equation} if obvious. Each of the $N$ vertices \begin{equation} v\in V=\{v_1\,,v_2\,,\ldots,v_N\} \end{equation} represents one of the $N$ parties, and each edge \begin{equation} e=\{v_k\,,v_{k'}\}\in E \end{equation} represents a bidirectional noiseless quantum channel between $v_k$ and $v_{k'}$. Quantum communication is only allowed between the parties directly connected by an edge. 
Note that an edge $\{v_k\,,v_{k'}\}\in E$ is identified with $\{v_{k'}\,,v_{k}\}$. Assume that the $N$ parties can freely perform LOCC\@. When LOCC can be freely performed, quantum communication of a state of an $M_e$-dimensional system from a party $v_k$ to another party $v_{k'}$ connected by a channel $e = \left\{v_k\,,v_{k'}\right\}$ is achieved using quantum teleportation~\cite{B5} by LOCC assisted by a maximally entangled state shared between $v_k$ and $v_{k'}$ \begin{equation} \Ket{\Phi_{M_e}^+}^{e}\coloneq\frac{1}{\sqrt{M_e}}\sum_{l=0}^{M_e-1}\Ket{l}^{v_k}\otimes\Ket{l}^{v_{k'}}\,, \end{equation} where $M_e$ is the Schmidt rank, and the superscript $e=\{v_k\,,v_{k'}\}$ represents a state shared between $v_k$ and $v_{k'}$. In the LOCC framework, the task of performing a transformation of a multipartite entangled state shared among the $N$ parties under the restriction on quantum communication is equivalent to the task of performing the transformation by LOCC assisted by an initial resource state consisting of a set of bipartite maximally entangled states specified by a set of edges $E$. The initial resource state for a given graph $G$ shared among the $N$ parties is represented by \begin{align} \bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e, \end{align} where $M_e$ is the Schmidt rank of the initial resource state specified by each edge $e$. A more general class of this type of initial resource states can be those consisting of bipartite entanglement. Consider a collection of bipartite entangled states distributed among the parties $v_1\,,\ldots,v_N$. The distribution of the bipartite entangled states can also be represented by a graph $G=(V,E)$, where each vertex in the set $V=\left\{v_1\,,\ldots,v_N\right\}$ represents a party, and each edge $e=\left\{v_k\,,v_{k'}\right\}\in E$ represents a bipartite entangled state $\Ket{\phi_e}^e$ shared between two parties $v_k$ and $v_{k'}$. 
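The entanglement accounting used throughout can be checked numerically: the reduced state of $\Ket{\Phi_{M}^+}$ on either side is maximally mixed, so teleporting an $M$-dimensional system consumes $\log_2 M$ ebits. A minimal sketch (assuming NumPy; the helper names are hypothetical):

```python
import numpy as np

def max_entangled(M):
    """|Phi_M^+> = (1/sqrt(M)) sum_l |l>|l>, as a vector on C^M (x) C^M."""
    phi = np.zeros(M * M)
    phi[np.arange(M) * M + np.arange(M)] = 1 / np.sqrt(M)   # components |l>|l>
    return phi

def entanglement_entropy(phi, M):
    """Entanglement entropy (bits) of one side of a bipartite pure state."""
    m = phi.reshape(M, M)
    rho = m @ m.conj().T                  # reduced density matrix of the left side
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# Teleporting one M-dimensional system costs log2(M) ebits:
for M in (2, 3, 4):
    print(M, round(entanglement_entropy(max_entangled(M), M), 6))   # 1.0, 1.584963, 2.0
```

The reduced state here is $\mathbb{1}/M$, so the entropy is exactly $\log_2 M$, matching the cost of teleportation over an edge $e$ with Schmidt rank $M_e$.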
An entangled state $\Ket{\phi}$ is called a state \textit{consisting of bipartite entanglement} if there exists a graph $G=(V,E)$ such that $\Ket\phi$ is locally unitarily equivalent to a state of the form \begin{equation} \bigotimes_{e\in E}\Ket{\phi_e}^e. \end{equation} Note that this definition assumes \textit{pure} states consisting of bipartite entanglement, while generalization to \textit{mixed} states is straightforward. If $\Ket{\phi}$ is fully entangled and is not a state consisting of bipartite entanglement, $\Ket{\phi}$ is called a state \textit{exhibiting multipartite entanglement}. \section{\label{sec:tree}Tree-topology networks} There are optimization problems on general networks that are hard to solve, such as the Hamiltonian cycle problem~\cite{K6} and the multicommodity flow problem~\cite{E6}. Optimization of communication on networks is more involved, since the technique of network coding~\cite{A15} for reducing communication may also be applied in such optimization of communication cost. However, for a special class of networks, the optimization of communication cost can reduce to a simpler solvable case. As for distributed quantum information processing, construction of low-noise quantum channels is challenging using current technology, and hence, it makes sense to consider networks connecting all the parties using the minimal number of quantum channels. Such networks are represented by a class of graphs called \textit{trees} having the minimal number of edges connecting all the vertices, and are called \textit{tree-topology networks}. \begin{figure}\label{fig:tree_notation} \end{figure} Trees are connected graphs containing no cycle~\cite{B22}. Trees with $N$ vertices have $N-1$ edges, which is the minimum to connect all the vertices. Any connected graph can be reduced to a tree spanning all the vertices by removing some of the edges. Let \begin{equation} T=(V,E) \end{equation} denote a tree. 
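The defining property of a tree (connected with exactly $N-1$ edges, hence acyclic) is straightforward to verify programmatically; a minimal sketch with hypothetical names:

```python
def is_tree(vertices, edges):
    """A graph is a tree iff it is connected and has exactly N - 1 edges."""
    if len(edges) != len(vertices) - 1:
        return False
    # build the adjacency structure
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    # search from an arbitrary vertex to test connectivity
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        v = stack.pop()
        for u in adj[v] - seen:
            seen.add(u)
            stack.append(u)
    return len(seen) == len(vertices)

V = ['v1', 'v2', 'v3', 'v4']
print(is_tree(V, [('v1', 'v2'), ('v2', 'v3'), ('v2', 'v4')]))   # True: a tree
print(is_tree(V, [('v1', 'v2'), ('v3', 'v4'), ('v1', 'v2')]))   # False: disconnected
```

A connected graph on $N$ vertices with $N-1$ edges cannot contain a cycle, which is why no separate acyclicity check is needed.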
Among the $N$ vertices of a tree $T=(V,E)$, a vertex can be designated as the root of the tree $T$, which is labeled $v_1\in V$ in the following. In addition, the $N$ vertices $v_1\,,\ldots,v_N$ of $T$ are labeled so that, for any $v_k\neq v_1\,$, any vertex $v_{k'}$ on the path connecting $v_k$ and the root $v_1$ satisfies $k\geqq k'$; that is, the vertices are labeled in ascending order on such paths, as illustrated in Figure~\ref{fig:tree_notation}. Call this type of labeling of the vertices an \textit{ascending labeling} of the vertices. In the following, a tree is always regarded as a rooted tree with an ascending labeling. A rooted tree has a recursive structure as illustrated in Figure~\ref{fig:tree_notation}. For the root $v_1$ of any given tree $T=(V,E)$, any vertex $c$ adjacent to $v_1\,$, that is $\left\{v_1\,,c\right\}\in E$, is called a child of $v_1$. Recursively, for any non-root vertex $v_k$ being a child of $v_{k^\prime}\,$, any vertex $c$ adjacent to $v_k$ except $v_{k^\prime}\,$, that is $\left\{v_k\,,c\right\}\in E\setminus\left\{\left\{v_{k^\prime}\,,v_k\right\}\right\}$, is called a child of $v_k$. A vertex without any child is called a leaf. For any non-root vertex $v_k\in V\setminus\left\{v_1\right\}$, let $p\left(v_k\right)\in V$ denote a vertex having $v_k$ as a child, which is called the parent of $v_k$. For any vertex $v_k\in V$, $v_k$'s descendants are recursively defined as vertices being a child of $v_k$ or being a child of a descendant of $v_k$. For any vertex $v_k\in V$, let $C_{v_k}$ denote the set of $v_k$'s children, $D_{v_k}$ the set of $v_k$'s descendants, and $D'_{v_k}$ the set of $v_k$ itself and $v_k$'s descendants. Any edge $e\in E$ of the rooted tree can be written as $e=\{p\left(v_k\right),v_k\}$ for some $v_k\in V$. For any $v\in V$, $D'_v$ can be decomposed by using these notations as \begin{equation} D'_v = \{v\}\cup D_v =\{v\}\cup \bigcup_{c\in C_v} D'_c. 
\end{equation} The set of all vertices $V$ is represented by $V=D'_{v_1}$ for the root specified by $v_1$. Using this decomposition, $V$ can be recursively decomposed according to the given rooted tree. \section{\label{sec:construction}Construction of multipartite quantum states on networks} Reference~\cite{Y6} investigates tasks of constructing multipartite quantum states shared among spatially separated parties connected by a given tree-topology network for quantum communication. This task is called \textit{construction of a multipartite state} and defined as follows. While Reference~\cite{Y6} analyzes two cases of exact and approximate constructions, this section summarizes the results on exact construction for later use. \begin{definition} \textit{Exact construction of a multipartite state.} For a given graph $G=(V,E)$, exact construction of a given multipartite state \begin{equation} \Ket{\psi}\in\bigotimes_{v_k\in V}\mathcal{H}^{v_k}, \end{equation} where $\mathcal{H}^{v_k}$ for each party represented as $v_k\in V$ is $v_k$'s system to be prepared in $\Ket{\psi}$, is a task of the $N$ parties $v_1\,,\ldots,v_N\in V$ preparing $\Ket{\psi}$ shared among the $N$ parties from scratch by performing an LOCC map $\mathcal{C}$ assisted by an initial resource state $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$, that is, \begin{equation} \mathcal{C}\left(\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}^e\right)=\Ket{\psi}\Bra{\psi}. 
\end{equation} \end{definition} Given any graph $G=(V,E)$ and any multipartite pure state \begin{equation} \Ket{\psi}\in\bigotimes_{v_k\in V}\mathcal{H}^{v_k}, \end{equation} consider the total amount of entanglement of initial resource states $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$ in terms of the entanglement entropy of each $\Ket{\Phi_{M_e}^+}^e$, that is, \begin{equation} \sum_{e\in E}\log_2 M_e\,. \end{equation} The family ${\left(\log_2 M_e\right)}_{e\in E}$ of an initial resource state $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$ achieving exact construction of $\Ket{\psi}$ for $G$ while minimizing this total amount of entanglement defines the \textit{graph-associated entanglement cost} of $\Ket{\psi}$, quantitatively characterizing the multipartite entanglement of $\Ket{\psi}$. Reference~\cite{Y6} evaluates the graph-associated entanglement cost of $\Ket{\psi}$ for an arbitrarily given tree. For any tree $T=(V,E)$, if an edge $e\in E$ is deleted, $T$ is divided into two disjoint trees whose vertices are represented by disjoint sets $V_e$ and $\overline{V}_e$ satisfying \begin{equation} V=V_e\cup \overline{V}_e. \end{equation} For any $\Ket{\psi}\in\bigotimes_{v_k\in V}\mathcal{H}^{v_k}$, let \begin{equation} R_e\left(\Ket{\psi}\right) \end{equation} denote the Schmidt rank of $\Ket{\psi}$ with respect to the bipartition $\bigotimes_{v_k\in V_e}\mathcal{H}^{v_k}$ and $\bigotimes_{v_k\in \overline{V}_e}\mathcal{H}^{v_k}$ of $\mathcal{H}=\bigotimes_{v_k\in V}\mathcal{H}^{v_k}$. Using this notation, Reference~\cite{Y6} provides the necessary and sufficient condition for the initial resource state $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$ being transformable into $\Ket{\psi}$ by LOCC, as follows. As for the proof~\cite{Y6}, the ``if'' part of this lemma is shown by construction of a protocol, and the ``only if'' part is shown from the LOCC monotonicity of the Schmidt rank. 
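The edge rank $R_e\left(\Ket{\psi}\right)$ is simply a matrix rank across the bipartition $V_e$ versus $\overline{V}_e$ and can be computed numerically; a minimal sketch (assuming NumPy; names are hypothetical): group the tensor legs of $\Ket{\psi}$ belonging to $V_e$, flatten, and take the rank.

```python
import numpy as np

def schmidt_rank(psi, dims, part):
    """Schmidt rank of |psi> across the bipartition (part, complement of part).

    dims: local dimensions of the vertices; part: indices of the vertices in V_e.
    """
    n = len(dims)
    rest = [k for k in range(n) if k not in part]
    mat = psi.reshape(dims).transpose(part + rest)   # V_e legs first
    d_left = int(np.prod([dims[k] for k in part]))
    return int(np.linalg.matrix_rank(mat.reshape(d_left, -1)))

# GHZ state on three qubits: Schmidt rank 2 across every edge bipartition,
# so each edge of a tree needs a resource of Schmidt rank at least 2 (one ebit).
ghz = np.zeros(8)
ghz[[0, 7]] = 1 / np.sqrt(2)
print(schmidt_rank(ghz, [2, 2, 2], [0]))      # -> 2
print(schmidt_rank(ghz, [2, 2, 2], [0, 1]))   # -> 2
```

For a product state every edge rank is $1$, consistent with no entanglement being required for its construction.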
Note that apart from this exact construction, Reference~\cite{Y6} also analyzes the task of approximately constructing multipartite states in the framework of the second-order asymptotic analysis~\cite{T9,D12}. \begin{lemma} \label{lem:graph_associated} \textit{Graph-associated entanglement cost in exact construction of multipartite states.} For any tree $T=(V,E)$ and any multipartite state \begin{equation} \Ket{\psi}\in\bigotimes_{v_k\in V}\mathcal{H}^{v_k}, \end{equation} exact construction of $\Ket{\psi}$ for $T$ is achievable if and only if, for each edge $e\in E$, \begin{equation} M_e\geqq R_e\left(\psi\right). \end{equation} \end{lemma} \chapter{\label{sec:distributed_encoding_decoding}Distributed encoding and decoding of quantum information over networks} This chapter analyzes nonlocal transformations of multipartite entangled states shared among spatially separated parties, in particular, transformations for encoding and decoding quantum information in a shared multipartite quantum system. Section~\ref{sec:spread_concentrate} defines tasks of spreading and concentrating quantum information on networks for achieving such encoding and decoding, respectively. The former task of spreading is analyzed in Section~\ref{sec:encoding}, and the latter task of concentrating is analyzed in Section~\ref{sec:decoding}. Applications of these tasks are discussed in Section~\ref{sec:example}. 
These tasks can also be regarded as tasks of spreading and concentrating quantum information according to a given isometry representing the encoding or decoding using quantum communication. Given a network represented by any graph $G=(V,E)$ in general, the parties aim to spread and concentrate quantum information according to a given isometry representing encoding and decoding, respectively. A system $\mathcal{H}$ for logical states is located at one of the $N$ parties, and the vertex labeled $v_1\in V$ is always assigned as the party where $\mathcal{H}$ is located. Let $D$ denote the dimension of $\mathcal{H}$, that is, \begin{equation} D\coloneq\dim\mathcal{H}. \end{equation} Write the computational basis of $\mathcal{H}$ as \begin{equation} \{\Ket{l}\in\mathcal{H}:l=0,1,\ldots,D-1\}. \end{equation} In addition, the $N$ parties share a multipartite system $\tilde{\mathcal{H}}$ for physical states. The system $\tilde{\mathcal{H}}$ is spanned by a set of $D$ orthonormal pure states \begin{equation} \left\{\ket{\tilde{\psi}_l}^{v_1\cdots v_N}\in\tilde{\mathcal{H}}:l=0,1,\ldots,D-1\right\}. \end{equation} For each $v_k\in V$, let $\tilde{\mathcal{H}}^{v_k}$ denote a part of the shared multipartite system $\tilde{\mathcal{H}}$ located at the party $v_k$. Note that $\dim\tilde{\mathcal{H}}^{v_k}$ is arbitrary as long as it holds that \begin{equation} \dim\tilde{\mathcal{H}}=\dim\mathcal{H}=D, \end{equation} and hence, $\tilde{\mathcal{H}}$ is a \textit{subspace} of the Hilbert space consisting of these subsystems for the $N$ parties, that is, \begin{equation} \tilde{\mathcal{H}}\subset\bigotimes_{v\in V}\tilde{\mathcal{H}}^{v}. 
\end{equation} Consider encoding and decoding as linear bijective maps between $\mathcal{B}\left(\mathcal{H}\right)$ and $\mathcal{B}\left(\tilde{\mathcal{H}}\right)$ mapping the basis states of $\mathcal{H}$ and $\tilde{\mathcal{H}}$ as \begin{equation} \ket{l}\in\mathcal{H}\leftrightarrow\ket{\tilde{\psi}_l}\in\tilde{\mathcal{H}} \end{equation} for each $l\in\left\{0,\ldots,D-1\right\}$. The encoding map is represented by an isometry $U$ from $\mathcal{H}$ to $\tilde{\mathcal{H}}$ satisfying \begin{equation} \ket{\tilde{\psi}_l}=U\Ket{l}. \end{equation} Encoding refers to a transformation from $\rho\in\mathcal{D}\left(\mathcal{H}\right)$ into $U\rho U^\dag\in\mathcal{D}\left(\tilde{\mathcal{H}}\right)$, and decoding refers to the inverse transformation represented by $U^\dag$. The formal definitions of the tasks of spreading and concentrating quantum information are given in terms of the LOCC framework as follows. The tasks of spreading and concentrating quantum information are also illustrated in Figure~\ref{fig:intro}. Note that the tasks are performed deterministically and exactly. \begin{definition} \textit{Spreading and concentrating quantum information.} Spreading quantum information over a given graph $G=(V,E)$ for a given isometry $U$ is a task of the $N$ parties $v_1\,,\ldots,v_N\in V$ applying $U$ to an arbitrary \textit{unknown} input state $\rho\in\mathcal{D}\left(\mathcal{H}\right)$ of one party $v_1\in V$ to share $U\rho U^\dag\in\mathcal{D}\left(\tilde{\mathcal{H}}\right)$ among the $N$ parties by performing an LOCC map $\mathcal{S}$ assisted by an initial resource state $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$, that is, \begin{equation} \label{eq:encoding} \mathcal{S}\left(\rho\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}\right)=U\rho U^\dag. 
\end{equation} Concentrating quantum information over $G=(V,E)$ for $U$ is a task of the $N$ parties $v_1\,,\ldots,v_N\in V$ applying $U^\dag$ to a shared input state $U\rho U^\dag\in\mathcal{D}\left(\tilde{\mathcal{H}}\right)$ corresponding to an arbitrary \textit{unknown} state $\rho\in\mathcal{D}\left(\mathcal{H}\right)$ to recover $\rho$ at one party $v_1\in V$ by performing an LOCC map $\mathcal{C}$ assisted by $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$, that is, \begin{equation} \label{eq:decoding} \mathcal{C}\left(U\rho U^\dag\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}\right)=\rho. \end{equation} \end{definition} The entanglement cost is defined by the minimum requirements on the initial resource state assisting LOCC protocols that achieve spreading and concentrating quantum information. In the same way as the case of analyzing the graph-associated entanglement cost in constructing multipartite states~\cite{Y6}, given any graph $G=(V,E)$, the entanglement cost of consuming the bipartite maximally entangled state $\Ket{\Phi^+_{M_e}}^e$ for each $e\in E$ of the initial resource state $\bigotimes_{e\in E}\Ket{\Phi^+_{M_e}}$ is identified by the entanglement entropy of $\Ket{\Phi^+_{M_e}}^e$, that is, \begin{equation} \log_2 M_e. \end{equation} If a sufficiently large amount of entanglement is available for each edge, there exist trivial protocols for achieving spreading and concentrating quantum information, simply using quantum teleportation~\cite{B5} so that the party $v_1$ can locally perform any given isometry on the unknown input state. In contrast, the aim here is to reduce the total amount of entanglement \begin{equation} \sum_{e\in E}\log_2 M_e \end{equation} required for spreading and concentrating quantum information, or equivalently, the total amount of quantum communication when LOCC is free.
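The identification of each edge's cost with the entanglement entropy $\log_2 M_e$ of $\Ket{\Phi^+_{M_e}}^e$ can be checked numerically. The following sketch is illustrative only (plain \texttt{numpy}; the helper names are ours, not from the text): it builds a maximally entangled state, traces out one side, and evaluates the von Neumann entropy of the reduced state.

```python
import numpy as np

def max_entangled(M):
    """|Phi^+_M> = (1/sqrt(M)) sum_l |l>|l>, as a vector in C^(M*M)."""
    psi = np.zeros((M, M))
    np.fill_diagonal(psi, 1.0 / np.sqrt(M))
    return psi.reshape(-1)

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy (base 2) of the reduced state on the first factor."""
    mat = psi.reshape(dim_a, dim_b)
    rho_a = mat @ mat.conj().T          # partial trace over the second factor
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# For |Phi^+_M> the entropy is exactly log2(M), matching the cost of edge e.
M = 4
print(entanglement_entropy(max_entangled(M), M, M))  # -> log2(4) = 2.0
```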
\begin{definition} \textit{Entanglement costs in spreading and concentrating quantum information.} The \textit{entanglement cost in spreading quantum information} over a given graph $G=(V,E)$ for a given isometry $U$ is a family \begin{equation} {\left(\log_2 M_e\right)}_{e\in E} \end{equation} identifying an initial resource state achieving spreading quantum information over $G$ for $U$ minimizing \begin{equation} \sum_{e\in E}\log_2 M_e. \end{equation} The \textit{entanglement cost in concentrating quantum information} over $G$ for $U$ is a family \begin{equation} {\left(\log_2 M_e\right)}_{e\in E} \end{equation} identifying an initial resource state achieving concentrating quantum information over $G$ for $U$ minimizing \begin{equation} \sum_{e\in E}\log_2 M_e. \end{equation} \end{definition} \begin{figure}\label{fig:encoding} \end{figure} \begin{figure}\label{fig:decoding} \end{figure} To analyze the entanglement costs in spreading and concentrating quantum information, these tasks are reduced to a particular type of state transformations. Given any graph $G=(V,E)$ and any isometry $U$, the state transformation equivalent to spreading quantum information over $G$ for $U$ is illustrated in Figure~\ref{fig:encoding}, and the state transformation equivalent to concentrating in Figure~\ref{fig:decoding}. To define these equivalent state transformations, consider a $D$-dimensional system $\mathcal{H}^R$ located at a party $R$ other than the $N$ parties $v_1\,,\ldots,v_N\in V$, where $\mathcal{H}^R$ is a reference system on which none of the $N$ parties can apply any operation. Note that $D=\dim\mathcal{H}$. Write a maximally entangled state with Schmidt rank $D$ shared between $R$ and $v_1$ as \begin{equation} \label{eq:data_maximally_entangled_state} \Ket{\Phi^+_D}=\frac{1}{\sqrt{D}}\sum_{l=0}^{D-1}\Ket{l}\otimes\Ket{l}\in\mathcal{H}^R\otimes\mathcal{H}. 
\end{equation} Moreover, write a state obtained by performing $U$ on $\mathcal{H}$ for $\Ket{\Phi^+_D}$ as \begin{equation} \label{eq:code_maximally_entangled_state} \ket{\tilde{\Phi}^+_D}\coloneq\left(\mathbb{1}^R\otimes U\right)\Ket{\Phi^+_D} =\frac{1}{\sqrt{D}}\sum_{l=0}^{D-1}\Ket{l}\otimes\ket{\tilde{\psi}_l}\in\mathcal{H}^R\otimes\tilde{\mathcal{H}}, \end{equation} where $\mathbb{1}^R$ is the identity operator on the system $\mathcal{H}^R$. The equivalence between the two tasks is shown as follows, and the proof is given in Appendix~\ref{sec:equivalence_spread_concentrate}, which is a generalization of the technique of the relative state method~\cite{P3}. \begin{proposition} \label{lem:encoding_state_transformation} \textit{State transformations equivalent to spreading and concentrating quantum information.} Spreading quantum information over a given graph $G=(V,E)$ for a given isometry $U$ defined as Equation~\eqref{eq:encoding} is achievable if and only if there exists an LOCC map $\mathcal{S}$ by the $N$ parties assisted by the initial resource state $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$ such that \begin{equation} \label{eq:encoding_state_transformation} \id^R\otimes\mathcal{S}\left(\Ket{\Phi^+_D}\Bra{\Phi^+_D}\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}\right) =\ket{\tilde\Phi^+_D}\bra{\tilde\Phi^+_D}, \end{equation} where $\id^R$ is the identity map on $\mathcal{H}^R$, and the states $\Ket{\Phi^+_D}$ and $\ket{\tilde\Phi^+_D}$ are defined as Equation~\eqref{eq:data_maximally_entangled_state} and~\eqref{eq:code_maximally_entangled_state}, respectively. 
Concentrating quantum information over $G=(V,E)$ for $U$ defined as Equation~\eqref{eq:decoding} is achievable if and only if there exists an LOCC map $\mathcal{C}$ by the $N$ parties assisted by $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$ such that \begin{equation} \label{eq:decoding_state_transformation} \id^R\otimes\mathcal{C}\left(\ket{\tilde\Phi^+_D}\bra{\tilde\Phi^+_D}\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}\right) =\Ket{\Phi^+_D}\Bra{\Phi^+_D}, \end{equation} where the notations are the same as those in Equation~\eqref{eq:encoding_state_transformation}. \end{proposition} Calculating the entanglement cost in spreading and concentrating quantum information for a network represented by an arbitrary graph is difficult due to the optimization included in the definition of the entanglement cost. The following analysis is focused on a special class of graphs, \textit{trees}. A network represented by a tree describes the situation where all parties are connected by the smallest number of channels, as discussed in Section~\ref{sec:tree}. \section{\label{sec:encoding}Entanglement cost in spreading quantum information} In this section, the optimal entanglement cost in spreading quantum information over any tree for any isometry is derived. To evaluate the entanglement cost, the two-party protocol for exact state splitting shown in Theorem~\ref{thm:split} is generalized to multiple parties, so as to construct the optimal protocol for spreading quantum information over any tree-topology network connecting multiple parties. The entanglement cost in spreading quantum information is evaluated using the following notations.
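As a concrete warm-up before the formal notation, the objects of Proposition~\ref{lem:encoding_state_transformation} can be set up numerically. The sketch below is illustrative \texttt{numpy} code; the $D=2$ three-qubit encoding $\Ket{0}\mapsto\Ket{000}$, $\Ket{1}\mapsto\Ket{111}$ is a hypothetical example, not one appearing in the text. It builds the isometry $U$ column by column from the codewords, verifies $U^\dag U=\mathbb{1}$, and forms $\ket{\tilde{\Phi}^+_D}=(\mathbb{1}^R\otimes U)\Ket{\Phi^+_D}$.

```python
import numpy as np

D = 2  # logical dimension

# Hypothetical codewords |psi~_0> = |000>, |psi~_1> = |111> on three qubits.
codewords = np.zeros((8, D))
codewords[0, 0] = 1.0   # |000>
codewords[7, 1] = 1.0   # |111>
U = codewords            # columns of U are the codewords, so U|l> = |psi~_l>

# U is an isometry: U^dag U equals the identity on the logical space.
assert np.allclose(U.conj().T @ U, np.eye(D))

# |Phi^+_D> shared between the reference R and v_1; v_1's half is then encoded.
phi = np.eye(D).reshape(-1) / np.sqrt(D)     # (1/sqrt(D)) sum_l |l>|l>
phi_tilde = np.kron(np.eye(D), U) @ phi      # (1^R (x) U)|Phi^+_D>

print(np.linalg.norm(phi_tilde))  # normalization is preserved (close to 1)
```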
Given any tree $T=(V,E)$, let $\tilde{\Phi}_{D,e}^{+}$ for each $e=\{p\left(v_k\right),v_k\}\in E$ denote the reduced state of $\ket{\tilde{\Phi}_D^+}$ on the system $\bigotimes_{v\in D'_{v_k}}\tilde{\mathcal{H}}^{v}$ shared among $v_k$ itself and the descendants of $v_k\,$, that is, \begin{equation} \label{eq:encoding_reduced_state} \tilde{\Phi}_{D,e}^{+}\coloneq\tr_{R\overline{D'_{v_k}}}\ket{\tilde{\Phi}_D^+}\Bra{\tilde{\Phi}_D^+}, \end{equation} where $\overline{D'_{v_k}}=V\setminus D'_{v_k}$ and $\tr_{R\overline{D'_{v_k}}}$ is the partial trace on $\mathcal{H}^R\otimes\bigotimes_{v\in\overline{D'_{v_k}}}\tilde{\mathcal{H}}^{v}$. The following theorem shows an optimal protocol for the state transformation defined as Equation~\eqref{eq:encoding_state_transformation} in Proposition~\ref{lem:encoding_state_transformation} equivalent to spreading quantum information, and evaluates the optimal entanglement cost. \begin{theorem} \label{thm:spreading} \textit{Entanglement cost in spreading quantum information over trees.} Given any tree $T=(V,E)$ and any isometry $U$, spreading quantum information over $T$ for $U$ is achievable if and only if for each $e\in E$ \begin{equation} \label{eq:encoding_cost} \log_2 M_e \geqq \log_2\rank\tilde{\Phi}_{D,e}^{+}, \end{equation} where $\tilde{\Phi}_{D,e}^{+}$ is defined as Equation~\eqref{eq:encoding_reduced_state}. \end{theorem} \begin{proof} \textit{If part}: Given any tree $T = (V,E)$ with an ascending labeling and any isometry $U$, a protocol for the state transformation defined as Equation~\eqref{eq:encoding_state_transformation} in Proposition~\ref{lem:encoding_state_transformation} is constructed by applying exact state splitting in Theorem~\ref{thm:split} sequentially starting from the root party represented as $v_1\in V$, and the following proof also shows that this protocol achieves the equality in~\eqref{eq:encoding_cost} for each $e\in E$.
In this protocol, the root party $v_1$ first locally applies the given isometry $U$ to \begin{equation} \Ket{\Phi_D^+}\in\mathcal{H}^R\otimes\mathcal{H} \end{equation} on $\mathcal{H}$, so as to obtain \begin{equation} \ket{\tilde{\Phi}_D^+}\in\mathcal{H}^R\otimes\tilde{\mathcal{H}}, \end{equation} where $\tilde{\mathcal{H}}$ is located at $v_1$ at this moment. Then, the parties perform the following sub-protocol using the exact state splitting sequentially in the order $v_1\,,v_2\,,\ldots,v_N$ to spread the state of $\tilde{\mathcal{H}}$. After all the parties have performed the sub-protocol, spreading quantum information is achieved. The sub-protocol for each party $v_k\in V$ is shown as follows. At the beginning of $v_k$'s sub-protocol, it is assumed that the party $v_k$ holds the reduced state of $\ket{\tilde{\Phi}_D^+}$ on $\bigotimes_{v\in D'_{v_k}}\tilde{\mathcal{H}}^v$, that is, the system for the parties corresponding to $v_k$ itself and $v_k$'s descendants. Note that this assumption is satisfied because of the ascending labeling. If $v_k$ has no child, $v_k$'s sub-protocol terminates. Otherwise, for each child $c\in C_{v_k}\,$, $v_k$ performs the exact state splitting in Theorem~\ref{thm:split}, where $v_k$ and $c$ in the sub-protocol are regarded as $A$ and $B$ in Theorem~\ref{thm:split}, and the subsystem $\bigotimes_{v\in D'_c}\tilde{\mathcal{H}}^v$, the other subsystems of party $v_k\,$, and all the rest of the system of the parties other than $v_k$ in the sub-protocol are regarded as $\mathcal{H}^{A^\prime}$, $\mathcal{H}^A$, and $\mathcal{H}^{R}$ in Theorem~\ref{thm:split}, respectively. For each edge $e\in E$, Theorem~\ref{thm:split} shows that the exact state splitting in this sub-protocol achieves the equality in~\eqref{eq:encoding_cost}, where $e=\left\{v_k\,,c\right\}$ in the above case.
\textit{Only if part}: The converse is derived from the LOCC monotonicity of the Schmidt rank in the state transformation defined as Equation~\eqref{eq:encoding_state_transformation} in Proposition~\ref{lem:encoding_state_transformation}. Consider an arbitrary edge $e=\left\{p(v_k),v_k\right\}\in E$ where $v_k\neq v_1$. The Schmidt rank of the initial state \begin{equation} \Ket{\Phi_D^+}^{Rv_1}\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^{e} \end{equation} between the parties in $D'_{v_k}\,$, that is, $v_k$ itself and $v_k$'s descendants, and the other parties in $\left\{R\right\}\cup V\setminus D'_{v_k}$ is \begin{equation} M_e. \end{equation} After performing an LOCC map $\id^R\otimes\mathcal{S}$, the Schmidt rank of the final state \begin{equation} \ket{\tilde{\Phi}_D^+}^{Rv_1\cdots v_N} \end{equation} with respect to the same bipartition is \begin{equation} \rank\tilde{\Phi}_{D,e}^{+}. \end{equation} Since the Schmidt rank is monotonically nonincreasing under LOCC, it holds that \begin{equation} M_e\geqq\rank\tilde{\Phi}_{D,e}^{+}. \end{equation} Therefore, the conclusion \begin{equation} \log_2 M_e\geqq\log_2\rank\tilde{\Phi}_{D,e}^{+} \end{equation} for each $e\in E$ is obtained. \end{proof} \section{\label{sec:decoding}Entanglement cost in concentrating quantum information} This section derives an upper bound of entanglement cost in concentrating quantum information over any tree for any isometry, and shows that the entanglement cost in concentrating quantum information is not larger, and can be strictly smaller, than that of spreading quantum information. To evaluate the entanglement cost, the two-party protocol for exact state merging in the non-catalytic setting in Theorem~\ref{thm:merge_without_catalyst} is generalized to multiple parties, so as to construct a protocol for concentrating quantum information over any tree-topology network connecting multiple parties. 
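The converse arguments in both sections rely on the Schmidt rank across the cut induced by an edge; numerically, this is the number of nonvanishing singular values of the reshaped state vector. A minimal \texttt{numpy} sketch for a hypothetical GHZ-type encoded state $\left(\Ket{0}^R\Ket{000}+\Ket{1}^R\Ket{111}\right)/\sqrt{2}$ (an illustrative example, not taken from the text):

```python
import numpy as np

def schmidt_rank(psi, dim_a, dim_b, tol=1e-10):
    """Schmidt rank of |psi> in C^dim_a (x) C^dim_b across the A|B cut."""
    s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    return int(np.sum(s > tol))

# Encoded state (1/sqrt2)(|0>_R|000> + |1>_R|111>); systems ordered as
# R, v_1, v_2, v_3, one qubit each (a hypothetical encoding for illustration).
psi = np.zeros(16)
psi[0b0000] = 1.0 / np.sqrt(2)
psi[0b1111] = 1.0 / np.sqrt(2)

# Cut isolating the leaf v_3: rank 2, forcing log2 M_e >= 1 on that edge.
print(schmidt_rank(psi, 8, 2))  # -> 2
```

Since the Schmidt rank cannot increase under LOCC, the rank of the target state across a cut lower-bounds $M_e$ times whatever rank the initial state already has across that cut, which is exactly how the converse bounds are obtained.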
The entanglement cost in concentrating quantum information is evaluated using the following notations. Given any tree $T=(V,E)$ and any isometry $U$, the protocol constructed below achieves the state transformation defined as Equation~\eqref{eq:decoding_state_transformation} in Proposition~\ref{lem:encoding_state_transformation} equivalent to concentrating quantum information. Write the initial state shared among $R,v_1\,,\ldots,v_N$ as \begin{equation} \Ket{\Phi^N}\coloneq\ket{\tilde{\Phi}_D^+}\in\mathcal{H}^R\otimes\tilde{\mathcal{H}}, \end{equation} and the states during the protocol as a sequence \begin{equation} \label{eq:sequence} \Ket{\Phi^N}\rightarrow\Ket{\Phi_{\boldsymbol{m}_{N-1}}^{N-1}}\rightarrow\cdots\rightarrow\Ket{\Phi_{\boldsymbol{m}_1}^1}, \end{equation} where the subscript $\boldsymbol{m}_k\coloneq\left({m}^{v_N},\ldots,{m}^{v_{k+1}}\right)$ denotes a tuple representing measurement outcomes obtained during the protocol and \begin{equation} \label{eq:v_k} \Ket{\Phi_{\boldsymbol{m}_k}^k}\in\mathcal{H}^R\otimes\bigotimes_{v\in V_k}\tilde{\mathcal{H}}^{v}, \quad V_k\coloneq\left\{v_1\,,\ldots,v_k\right\}, \end{equation} for each $k\in\{1,\ldots,N-1\}$. For any $\boldsymbol{m}_1\,$, the last state $\Ket{\Phi_{\boldsymbol{m}_1}^1}$ in sequence~\eqref{eq:sequence} is convertible into \begin{equation} \Ket{\Phi_D^+}\in\mathcal{H}^R\otimes\mathcal{H} \end{equation} by an isometry transformation by party $v_1\,$, and the recurrence relation determining sequence~\eqref{eq:sequence} is given in the proof of the following theorem (by Equation~\eqref{eq:recursive_state}). The following theorem uses the Koashi-Imoto decomposition of $\Ket{\Phi_{\boldsymbol{m}_k}^k}$ shown in Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}.
In this case, the Hilbert spaces are decomposed into \begin{align} \tilde{\mathcal{H}}^{v_k}&=\bigoplus_{j=0}^{J_{\boldsymbol{m}_k}-1}\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{L}}\otimes\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}},\\ \bigotimes_{v\in V_{k-1}}\tilde{\mathcal{H}}^{v}&=\bigoplus_{j=0}^{J_{\boldsymbol{m}_k}-1}\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_1\cdots v_{k-1}\right)}_j^\textup{L}}\otimes\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_1\cdots v_{k-1}\right)}_j^\textup{R}}, \end{align} where $V_{k-1}$ is defined as Equation~\eqref{eq:v_k}. The state is decomposed into \begin{equation} \Ket{\Phi_{\boldsymbol{m}_k}^k}=\bigoplus_{j=0}^{J_{\boldsymbol{m}_k}-1}\sqrt{p_{\boldsymbol{m}_k}\left(j\right)}\Ket{\omega_{\boldsymbol{m}_k\,,j}}\otimes\Ket{\phi_{\boldsymbol{m}_k\,,j}}, \end{equation} where $p_{\boldsymbol{m}_k}\left(j\right)$ is a probability distribution, and for each $j\in\{0,\ldots,J_{\boldsymbol{m}_k}-1\}$, \begin{align} \Ket{\omega_{\boldsymbol{m}_k\,, j}}&\in\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{L}}\otimes\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_1\cdots v_{k-1}\right)}_j^\textup{L}},\\ \Ket{\phi_{\boldsymbol{m}_k\,,j}}&\in\mathcal{H}^R\otimes\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}}\otimes\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_1\cdots v_{k-1}\right)}_j^\textup{R}}. \end{align} Also let $\lambda_{\boldsymbol{m}_k\,,0}^{{\left(v_k\right)}_j^\textup{L}}$ denote the largest eigenvalue of the reduced state of $\Ket{\omega_{\boldsymbol{m}_k\,,j}}$ on $\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{L}}$, that is, \begin{equation} \tr_{{\left(v_1\cdots v_{k-1}\right)}_j^\textup{L}} \Ket{\omega_{\boldsymbol{m}_k\,,j}}\Bra{\omega_{\boldsymbol{m}_k\,,j}}. 
\end{equation} \begin{theorem} \label{thm:concentrating} \textit{Entanglement cost in concentrating quantum information.} Given any tree $T = (V,E)$ and any isometry $U$, concentrating quantum information over $T$ for $U$ is achievable if there exists an ascending labeling of the vertices satisfying for each $e=\left\{p\left(v_k\right),v_k\right\}\in E$ \begin{equation} \label{eq:decoding_cost_upper} \log_2 M_e \geqq \max_{\boldsymbol{m}_k\,,j}\left\{\log_2\left\lceil\lambda_{\boldsymbol{m}_k\,,0}^{{\left(v_k\right)}_j^\textup{L}}\dim\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}}\right\rceil\right\}, \end{equation} where $\lceil{}\cdots{}\rceil$ is the ceiling function. \end{theorem} \begin{proof} Given any tree $T = (V,E)$ with an ascending labeling of the vertices and any isometry $U$, the proof is given by construction of a protocol for the state transformation defined as Equation~\eqref{eq:decoding_state_transformation} in Proposition~\ref{lem:encoding_state_transformation} achieving the equality in~\eqref{eq:decoding_cost_upper} for each $e\in E$. In the protocol, the parties other than the root $v_1$ sequentially perform a sub-protocol using exact state merging in the non-catalytic setting shown in Theorem~\ref{thm:merge_without_catalyst}, where each of the parties $v_N\,,\ldots,v_2$ in this order is regarded as the sender $A$ in these sequential applications of the exact state merging. After all of these parties have performed the sub-protocol, the root party $v_1$ performs an isometry to obtain the state $\Ket{\Phi_D^+}$, which achieves concentrating quantum information. In the following, the sub-protocol for the non-root parties $v_N\,,\ldots,v_2$ and the isometry for the root party $v_1$ are described. For any $v_k\in \left\{v_N\,,\ldots,v_2\right\}$, the sub-protocol for party $v_k$ is as follows. The state $\Ket{\Phi^{N}}=\ket{\tilde{\Phi}_D^+}$ may be written as $\Ket{\Phi_{\boldsymbol{m}_{N}}^{N}}$ for brevity.
At the beginning of $v_k$'s sub-protocol, it is assumed that the party $v_k$ has the reduced state on $\tilde{\mathcal{H}}^{v_k}$ of \begin{equation} \label{eq:assumption_merge} \Ket{\Phi_{\boldsymbol{m}_{k}}^{k}}\in\mathcal{H}^R\otimes\tilde{\mathcal{H}}^{v_k}\otimes\bigotimes_{m=1}^{k-1}\tilde{\mathcal{H}}^{v_m}. \end{equation} Based on the classical information $m^{v_N},\ldots,m^{v_{k+1}}$ of measurement outcomes sent from other parties by classical communication, the party $v_k$ calculates the measurement basis ${\left\{\Ket{m^{v_k}}\right\}}_{m^{v_k}}$ defined as \begin{equation} \Ket{m^{v_k}}\coloneq U^A\Ket{m_1\,,m_2\,,m_3}, \end{equation} where $m^{v_k}$ on the left-hand side is a label corresponding to the tuple of three labels on the right-hand side $m_1\,$, $m_2\,$, and $m_3\,$, and $U^A\Ket{m_1\,,m_2\,,m_3}$ on the right-hand side is that in Equation~\eqref{eq:merge_without_catalyst} in the proof of Theorem~\ref{thm:merge_without_catalyst} for the exact state merging of $\Ket{\Phi_{\boldsymbol{m}_{k}}^{k}}$ in the non-catalytic setting, in which the systems $\mathcal{H}^R$, $\tilde{\mathcal{H}}^{v_k}$, and $\bigotimes_{m=1}^{k-1}\tilde{\mathcal{H}}^{v_m}$ are regarded as $\mathcal{H}^{R}$, $\mathcal{H}^A$, and $\mathcal{H}^B$ in Equation~\eqref{eq:merge_without_catalyst}, respectively. 
The party $v_k$ performs this measurement, and the states in the sequence~\eqref{eq:sequence} are recursively described as \begin{equation} \label{eq:recursive_state} \Ket{\Phi_{\boldsymbol{m}_{k-1}}^{k-1}}=\frac{\left(\mathbb{1}\otimes\Bra{m^{v_k}}\right)\left(\Ket{\Phi_{\boldsymbol{m}_{k}}^{k}}\otimes\Ket{\Phi_{M_{e}}^+}^{e}\right)}{\left\|\left(\mathbb{1}\otimes\Bra{m^{v_k}}\right)\left(\Ket{\Phi_{\boldsymbol{m}_{k}}^{k}}\otimes\Ket{\Phi_{M_{e}}^+}^{e}\right)\right\|}, \end{equation} where $\mathbb{1}$ is the identity operator on the system of the parties other than $v_k\,$, $\Ket{\Phi_{M_{e}}^+}^{e}$ with $e=\left\{p\left(v_k\right),v_k\right\}$ is the resource state shared between $v_k$ and $v_k$'s parent $p\left(v_k\right)$, and the system of party $p\left(v_k\right)$ for the resource state $\Ket{\Phi_{M_{e}}^+}^{e}$ on the right-hand side is regarded on the left-hand side as part of $\tilde{\mathcal{H}}^{p\left(v_k\right)}$ of the party $p\left(v_k\right)$. After this measurement, the party $v_k$ sends the measurement outcome $m^{v_k}$ to all the parties by classical communication, where the post-measurement state is represented by $\Ket{\Phi_{\boldsymbol{m}_{k-1}}^{k-1}}$. Note that the assumption~\eqref{eq:assumption_merge} is satisfied for the next party $v_{k-1}$ performing the sub-protocol, that is, \begin{equation} \Ket{\Phi_{\boldsymbol{m}_{k-1}}^{k-1}}\in\mathcal{H}^R\otimes\tilde{\mathcal{H}}^{v_{k-1}}\otimes\bigotimes_{m=1}^{k-2}\tilde{\mathcal{H}}^{v_m}, \end{equation} because of an ascending order of the vertices. For each edge $e=\left\{p\left(v_k\right),v_k\right\}\in E$, Theorem~\ref{thm:merge_without_catalyst} shows that the exact state merging in this sub-protocol achieves the equality in~\eqref{eq:decoding_cost_upper}. As for the root party $v_1\,$, an isometry $U_{\boldsymbol{m}_1}^{v_1}$ for obtaining the state $\Ket{\Phi_D^+}$ is shown as follows. 
After the parties $v_N\,,\ldots,v_2$ performing the above sub-protocols, the shared state reduces to $\Ket{\Phi_{\boldsymbol{m}_{1}}^{1}}\in\mathcal{H}^R\otimes\tilde{\mathcal{H}}^{v_1}$. For each $v_k\in\left\{v_N\,,\ldots,v_2\right\}$, define an isometry \begin{equation} U_{m^{v_k}}\coloneq\left({U^{B'}}^\dag\otimes{U^B}^\dag\right) U_{m_1\,,m_2\,,m_3}U^B, \end{equation} where $m^{v_k}$ on the left-hand side is a label corresponding to the tuple of three labels on the right-hand side $m_1\,$, $m_2\,$, and $m_3\,$, and $\left({U^{B'}}^\dag\otimes{U^B}^\dag\right) U_{m_1\,,m_2\,,m_3}U^B$ on the right-hand side is that in Equation~\eqref{eq:merge_without_catalyst} in the proof of Theorem~\ref{thm:merge_without_catalyst}. Each $U_{m^{v_k}}$ recovers the state $\Ket{\Phi_{\boldsymbol{m}_{k}}^{k}}$ from the post-measurement state $\Ket{\Phi_{\boldsymbol{m}_{k-1}}^{k-1}}$ corresponding to $\Bra{m^{v_k}}$, that is, \begin{equation} \Ket{\Phi_{\boldsymbol{m}_{k}}^{k}}=U_{m^{v_k}}\Ket{\Phi_{\boldsymbol{m}_{k-1}}^{k-1}}. \end{equation} Repeating the above yields \begin{equation} \ket{\tilde{\Phi}_D^+}=\Ket{\Phi^N}=U_{m^{v_N}}\cdots U_{m^{v_2}}\Ket{\Phi_{\boldsymbol{m}_{1}}^{1}}. \end{equation} Consequently, the party $v_1$ obtains for any $\boldsymbol{m}_1$ \begin{align} \Ket{\Phi_D^+}&=U_{\boldsymbol{m}_1}^{v_1}\Ket{\Phi_{\boldsymbol{m}_{1}}^{1}},\\ U_{\boldsymbol{m}_1}^{v_1}&\coloneq U^\dag U_{m^{v_N}}\cdots U_{m^{v_2}}. \end{align} Note that it may not be possible for the parties $v_N\,,\ldots,v_2$ to locally perform $U_{m^{v_N}},\ldots,U_{m^{v_2}}$ during the sub-protocol, since these isometries can be nonlocal. \end{proof} The following theorem shows that the entanglement cost in concentrating quantum information is not larger than that of spreading quantum information. Moreover, the former can be strictly smaller than the latter, as demonstrated in Applications~\ref{ex:distributed_source_compression} and~\ref{ex:locc_decoding} in the next section. 
Note that this difference in entanglement cost arises from the difference between quantum state merging and splitting discussed in Remark~\ref{remark:merge}. \begin{theorem} \textit{Comparison of entanglement cost between spreading and concentrating quantum information.} Given any tree $T=(V,E)$ with any ascending labeling and any isometry $U$, it holds for each $e=\left\{p\left(v_k\right),v_k\right\}\in E$ that \begin{equation} \max_{\boldsymbol{m}_k\,,j}\left\{\log_2\left\lceil\lambda_{\boldsymbol{m}_k\,,0}^{{\left(v_k\right)}_j^\textup{L}}\dim\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}}\right\rceil\right\}\leqq\log_2\rank\tilde{\Phi}_{D,e}^{+}, \end{equation} where the notations are the same as those in Theorems~\ref{thm:spreading} and~\ref{thm:concentrating}. \end{theorem} \begin{proof} The proof uses the LOCC monotonicity of the Schmidt rank in the state transformation defined as Equation~\eqref{eq:decoding_state_transformation} in Proposition~\ref{lem:encoding_state_transformation}, and properties of the Koashi-Imoto decomposition. Regard the given tree $T=(V,E)$ as the rooted tree with its root $v_1\,$, and consider an arbitrary edge $e=\left\{p(v_k),v_k\right\}\in E$ where $v_k\neq v_1$. The Schmidt rank of the initial state \begin{equation} \ket{\tilde{\Phi}_D^+}^{Rv_1\cdots v_N}\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^{e} \end{equation} between the parties in $D'_{v_k}$ and the other parties in $\{R\}\cup V\setminus D'_{v_k}$ is \begin{equation} M_e\rank\tilde{\Phi}_{D,e}^{+}. \end{equation} After the parties $v_N\,,\ldots,v_{k+1}$ perform the above sub-protocols, which together constitute an LOCC map, the state reduces to \begin{equation} \Ket{\Phi_{\boldsymbol{m}_k}^k}\otimes\bigotimes_{e\in E_k}\Ket{\Phi_{M_e}^+}^{e}, \end{equation} where $\Ket{\Phi_{\boldsymbol{m}_k}^k}$ is defined as Equation~\eqref{eq:v_k} and $E_k\coloneq\left\{\left\{p\left(v_2\right),v_2\right\},\ldots,\left\{p\left(v_k\right),v_k\right\}\right\}$.
The Schmidt rank of $\Ket{\Phi_{\boldsymbol{m}_k}^k}\otimes\bigotimes_{e\in E_k}\Ket{\Phi_{M_e}^+}^{e}$ with respect to the same bipartition of the parties as the above is \begin{equation} M_e \rank{\left(\Phi_{\boldsymbol{m}_k}^k\right)}^{v_k}, \end{equation} where ${\left(\Phi_{\boldsymbol{m}_k}^k\right)}^{v_k}$ denotes the reduced state of the system $\tilde{\mathcal{H}}^{v_k}$ for the state $\Ket{\Phi_{\boldsymbol{m}_k}^k}$. Since the Schmidt rank is monotonically nonincreasing under LOCC, it holds that \begin{equation} M_e\rank\tilde{\Phi}_{D,e}^{+}\geqq M_e\rank{\left(\Phi_{\boldsymbol{m}_k}^k\right)}^{v_k}. \end{equation} By construction of the Koashi-Imoto decomposition, it holds that \begin{equation} \rank{\left(\Phi_{\boldsymbol{m}_k}^k\right)}^{v_k}\geqq\dim\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}} \end{equation} for any ${\boldsymbol{m}_k}$ and $j$. Since $\lambda_{\boldsymbol{m}_k\,,0}^{{\left(v_k\right)}_j^\textup{L}}\leqq 1$, it is obtained that \begin{equation} \dim\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}}\geqq\left\lceil\lambda_{\boldsymbol{m}_k\,,0}^{{\left(v_k\right)}_j^\textup{L}}\dim\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}}\right\rceil. \end{equation} Thus, for any ${\boldsymbol{m}_k}$ and $j$, it holds that \begin{equation} \log_2\rank\tilde{\Phi}_{D,e}^{+}\geqq\log_2\left\lceil\lambda_{\boldsymbol{m}_k\,,0}^{{\left(v_k\right)}_j^\textup{L}}\dim\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}}\right\rceil. \end{equation} Therefore, the conclusion \begin{equation} \max_{\boldsymbol{m}_k\,,j}\left\{\log_2\left\lceil\lambda_{\boldsymbol{m}_k\,,0}^{{\left(v_k\right)}_j^\textup{L}}\dim\mathcal{H}_{\boldsymbol{m}_k}^{{\left(v_k\right)}_j^\textup{R}}\right\rceil\right\}\leqq\log_2\rank\tilde{\Phi}_{D,e}^{+} \end{equation} for each $e=\left\{p\left(v_k\right),v_k\right\}\in E$ is obtained.
\end{proof} \section{\label{sec:example}Applications} Applications of the protocols for spreading and concentrating quantum information are provided in this section. In the following, $\otimes$ may be omitted if obvious. \begin{implication} \label{ex:distributed_source_compression} \textit{Application to one-shot distributed source compression for arbitrarily small-dimensional systems.} When applied to a star-topology tree, such as \begin{equation} \label{eq:star} \begin{split} T&=(V,E),\\ V&=\left\{v_k:k=1,2,3\right\},\\ E&=\left\{e_1=\left\{v_1\,,v_2\right\},e_2=\left\{v_1\,,v_3\right\}\right\}, \end{split} \end{equation} the protocol for concentrating quantum information shown in Theorem~\ref{thm:concentrating} can be regarded as a protocol for one-shot zero-error distributed source compression~\cite{D8,D9,A8}. Although the protocol achieves transformations between $\Ket{\Phi_D^+}$ and $\ket{\tilde{\Phi}_D^+}$, that is, maximally entangled states between $R$ and the others, it is straightforward to prove that the protocol also works for any pure state shared among the parties $R, v_1\,,\ldots,v_N\,$, which is proven for two parties in Proposition~\ref{prp:max}, and the same argument also applies to more than two parties. Note that the protocol for concentrating quantum information is applicable to arbitrarily small-dimensional systems as well as achieving zero error, while the existing protocols for the one-shot distributed source compression~\cite{D8,D9,A8} are inefficient for small- and intermediate-scale states and cannot avoid nonzero approximation error, similarly to the case of $N=2$ discussed in Remark~\ref{remark:usefulness}.
For the network defined as Equation~\eqref{eq:star} and an isometry mapping the basis states as \begin{equation} \begin{split} \Ket{0}&\leftrightarrow\Ket{0}^{v_1}\Ket{0}^{v_2}\Ket{0}^{v_3},\\ \Ket{1}&\leftrightarrow\frac{1}{\sqrt{2}}\left(\Ket{0}^{v_1}+\Ket{1}^{v_1}\right)\Ket{1}^{v_2}\Ket{1}^{v_3}, \end{split} \end{equation} where the three-qubit states on the right-hand sides are orthogonal to each other due to the orthogonality of $\Ket{0}$ and $\Ket{1}$, Theorem~\ref{thm:spreading} yields the entanglement cost in spreading quantum information \begin{equation} \begin{split} \log_2 M_{e_1}&=1,\\ \log_2 M_{e_2}&=1, \end{split} \end{equation} and Theorem~\ref{thm:concentrating} yields a protocol for concentrating quantum information achieving \begin{equation} \begin{split} \log_2 M_{e_1}&=1,\\ \log_2 M_{e_2}&=0\neq 1. \end{split} \end{equation} In concentrating quantum information, the states in sequence~\eqref{eq:sequence} are calculated as \begin{align} \label{eq:app1} &\Ket{\Phi^{3}}=\ket{\tilde{\Phi}_D^+}=\frac{1}{\sqrt{2}}\Ket{0}^R\Ket{0}^{{\left(v_3\right)}_0^R}\Ket{00}^{{\left(v_1 v_2\right)}_0^R}\oplus\left(\pm\frac{1}{\sqrt{2}}\Ket{1}^R\Ket{1}^{{\left(v_3\right)}_1^R}\Ket{+1}^{{\left(v_1 v_2\right)}_1^R}\right)\\ \label{eq:app2} &\xrightarrow{\text{Measurement in }\left\{\Ket{\pm}^{v_3}\right\}} \Ket{\Phi_{\left(\Ket{\pm}^{v_3}\right)}^{2}}=\frac{1}{\sqrt{2}}\Ket{0}^R\Ket{0}^{{\left(v_2\right)}_0^R}\Ket{0}^{{\left(v_1\right)}_0^R}\pm\frac{1}{\sqrt{2}}\Ket{1}^R\Ket{1}^{{\left(v_2\right)}_0^R}\Ket{+}^{{\left(v_1\right)}_0^R}, \end{align} where the right-hand sides of Equations~\eqref{eq:app1} and~\eqref{eq:app2} show the Koashi-Imoto decomposition of the state for each step in the sequence~\eqref{eq:sequence}, and the final state shared between $R$ and $v_1$ is obtained by transferring $v_2$'s one-qubit state by quantum teleportation from $v_2$ to $v_1\,$, which requires $\log_2 M_{e_1}=1$.
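The spreading costs quoted for this star network can be reproduced numerically via the rank criterion of Theorem~\ref{thm:spreading}. The sketch below is illustrative \texttt{numpy} code (the qubit ordering $R,v_1,v_2,v_3$ and the helper names are our choices): it builds $\ket{\tilde{\Phi}_D^+}$ for this encoding and computes the Schmidt rank across the cut corresponding to each edge.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

def kron_all(*kets):
    out = np.array([1.0])
    for k in kets:
        out = np.kron(out, k)
    return out

# |Phi~+_D> = (1/sqrt2)(|0>_R|000> + |1>_R|+11>), ordered as R, v_1, v_2, v_3.
psi = (kron_all(ket0, ket0, ket0, ket0)
       + kron_all(ket1, plus, ket1, ket1)) / np.sqrt(2)

def rank_across(psi, keep, n=4, tol=1e-10):
    """Schmidt rank between the qubits listed in `keep` and all the others."""
    rest = [q for q in range(n) if q not in keep]
    mat = psi.reshape([2] * n).transpose(keep + rest).reshape(2 ** len(keep), -1)
    return int(np.sum(np.linalg.svd(mat, compute_uv=False) > tol))

# The rank across each edge's cut gives the spreading cost log2 M_e.
print(rank_across(psi, [2]))  # cut {v_2} vs the rest -> 2, so log2 M_{e_1} = 1
print(rank_across(psi, [3]))  # cut {v_3} vs the rest -> 2, so log2 M_{e_2} = 1
```

The asymmetry of the concentrating costs, by contrast, is not visible from these ranks alone; it comes from the Koashi-Imoto decompositions computed in the sequence above.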
The difference in the resource requirements for concentrating quantum information between the edges $e_1$ and $e_2$ arises because of the difference between the Koashi-Imoto decomposition of the state $\Ket{\Phi_{\boldsymbol{m}_{2}}^{2}}$ after the party $v_3$ performs the exact state merging and the Koashi-Imoto decomposition of the state $\Ket{\Phi^{3}}=\ket{\tilde{\Phi}_D^+}$ before. By contrast, if the labeling of the parties $v_2$ and $v_3$ is interchanged, the tree $T$ changes to \begin{equation} \begin{split} &T'=(V',E),\\ &V'=\left\{v'_1=v_1\,, v'_2=v_3\,, v'_3=v_2\right\},\\ &E=\left\{e_1=\left\{v'_1\,,v'_3\right\}=\left\{v_1\,,v_2\right\},e_2=\left\{v'_1\,,v'_2\right\}=\left\{v_1\,,v_3\right\}\right\}, \end{split} \end{equation} and the protocol for concentrating quantum information applied to this tree $T'$ achieves \begin{equation} \begin{split} \log_2 M_{e_1}&=0\neq 1,\\ \log_2 M_{e_2}&=1. \end{split} \end{equation} This example implies that the entanglement cost in concentrating quantum information for each edge of a graph may be affected by the labeling of the vertices, that is, the order of sequential applications of exact state merging. In this case, to obtain the entanglement cost, it is necessary to calculate the sequence~\eqref{eq:sequence} of the states during the protocol by recursively using Equation~\eqref{eq:recursive_state}. \end{implication} \begin{implication} \label{ex:locc_decoding} \textit{Application to LOCC-assisted decoding in quantum secret sharing.} Similarly to the protocol for concentrating quantum information, Reference~\cite{G8} proposes schemes of quantum secret sharing and a protocol for decoding the shared secret of quantum information, in which the parties collaboratively perform LOCC to reduce the total quantum communication required for the decoding.
While the protocol in Reference~\cite{G8} works for a particular class of quantum codes, the protocols shown in Theorems~\ref{thm:spreading} and~\ref{thm:concentrating} are applicable to any encoding and decoding in addition to this particular class. For example, a different scheme of quantum secret sharing from those considered in Reference~\cite{G8} can be obtained from the five-qubit code~\cite{C,G7}, which maps the basis states as \begin{equation} \begin{split} \Ket{0}\leftrightarrow\frac{1}{4}(&\Ket{00000} + \Ket{11000} + \Ket{01100} + \Ket{00110}\\ +&\Ket{00011}+\Ket{10001}-\Ket{10100}-\Ket{01010}\\ -&\Ket{00101}-\Ket{10010}-\Ket{01001}-\Ket{11110}\\ -&\Ket{01111}-\Ket{10111}-\Ket{11011}-\Ket{11101}),\\ \Ket{1}\leftrightarrow\frac{1}{4}(&\Ket{11111} + \Ket{00111} + \Ket{10011} + \Ket{11001}\\ +&\Ket{11100} + \Ket{01110}-\Ket{01011}-\Ket{10101}\\ -&\Ket{11010}-\Ket{01101}-\Ket{10110}-\Ket{00001}\\ -&\Ket{10000}-\Ket{01000}-\Ket{00100}-\Ket{00010}), \end{split} \end{equation} where each qubit on the right-hand sides belongs to each of the parties $v_1\,,\ldots,v_5$. For this isometry and a line-topology tree \begin{equation} \begin{split} T&=(V,E),\\ V&=\left\{v_k:k=1,\ldots,N\right\},\\ E&=\left\{e_k=\left\{v_k\,,v_{k+1}\right\}:k=1,\ldots,N-1\right\}, \end{split} \end{equation} where $N=5$, Theorem~\ref{thm:spreading} yields the entanglement cost in spreading quantum information \begin{equation} \begin{split} \log_2 M_{e_1}&=2,\\ \log_2 M_{e_2}&=3,\\ \log_2 M_{e_3}&=2,\\ \log_2 M_{e_4}&=1, \end{split} \end{equation} and Theorem~\ref{thm:concentrating} yields a protocol for concentrating quantum information achieving \begin{equation} \label{eq:ex3} \begin{split} \log_2 M_{e_1}&=0,\\ \log_2 M_{e_2}&=0,\\ \log_2 M_{e_3}&=0,\\ \log_2 M_{e_4}&=0. 
\end{split} \end{equation} In concentrating quantum information, the states in sequence~\eqref{eq:sequence} are calculated as \begin{align*} &\Ket{\Phi^{5}}=\ket{\tilde{\Phi}_D^+}\propto\Ket{+}^R\Ket{+}^{{\left(v_5\right)}_0^R}{\left(\Ket{0000}^{{\left(v_1 v_2 v_3 v_4\right)}_0^R}+\cdots\right)}\oplus\Ket{-}^R\Ket{-}^{{\left(v_5\right)}_1^R}{\left(\Ket{0000}^{{\left(v_1 v_2 v_3 v_4\right)}_1^R}+\cdots\right)}\\ &\downarrow{\text{Measurement in }\left\{\Ket{0}^{v_5},\Ket{1}^{v_5}\right\}}\\ &\Ket{\Phi_{\left(\Ket{0}^{v_5}\right)}^{4}}\\ &=\frac{1}{4}\left[\Ket{0}^{R}\left(\Ket{0000}^{v_1 v_2 v_3 v_4} + \Ket{1100} + \Ket{0110} + \Ket{0011} -\Ket{1010}-\Ket{0101}-\Ket{1001}-\Ket{1111}\right)\right.\\ &\quad\left.+\Ket{1}^R\left(\Ket{1110}^{v_1 v_2 v_3 v_4} + \Ket{0111}-\Ket{1101}-\Ket{1011} -\Ket{1000}-\Ket{0100}-\Ket{0010}-\Ket{0001}\right)\right]\\ &\propto\Ket{+}^R\Ket{+}^{{\left(v_4\right)}_0^R}\left(\Ket{000}^{{\left(v_1 v_2 v_3\right)}_0^R}+\cdots\right)\oplus\Ket{-}^R\Ket{-}^{{\left(v_4\right)}_1^R}\left(\Ket{000}^{{\left(v_1 v_2 v_3\right)}_1^R}+\cdots\right)\\ &\downarrow{\text{Measurement in }\left\{\Ket{0}^{v_4},\Ket{1}^{v_4}\right\}}\\ &\Ket{\Phi_{\left(\Ket{0}^{v_5},\Ket{0}^{v_4}\right)}^{3}}\\ &=\frac{1}{2\sqrt{2}}\left[\Ket{0}^R\left(\Ket{000}^{v_1 v_2 v_3} + \Ket{110} + \Ket{011} - \Ket{101}\right) +\Ket{1}^R\left(\Ket{111}^{v_1 v_2 v_3} - \Ket{100}-\Ket{010}-\Ket{001}\right)\right]\\ &\propto\Ket{+}^R\Ket{+}^{{\left(v_3\right)}_0^R}\left(\Ket{00}^{{\left(v_1 v_2\right)}_0^R}+\cdots\right)\oplus\Ket{-}^R\Ket{-}^{{\left(v_3\right)}_1^R}\left(\Ket{00}^{{\left(v_1 v_2\right)}_1^R}+\cdots\right)\\ &\downarrow{\text{Measurement in }\left\{\Ket{0}^{v_3},\Ket{1}^{v_3}\right\}}\\ &\Ket{\Phi_{\left(\Ket{0}^{v_5},\Ket{0}^{v_4},\Ket{0}^{v_3}\right)}^{2}}\\ &=\frac{1}{2}\left[\Ket{0}^R\left(\Ket{00}^{v_1 v_2}+\Ket{11}\right)-\Ket{1}^R\left(\Ket{01}^{v_1 v_2}+\Ket{10}\right)\right]\\
&\propto\Ket{-}^R\Ket{+}^{{\left(v_2\right)}_0^R}\Ket{+}^{{\left(v_1\right)}_0^R}\oplus\Ket{+}^R\Ket{-}^{{\left(v_2\right)}_1^R}\Ket{-}^{{\left(v_1\right)}_1^R},\\ &\downarrow{\text{Measurement in }\left\{\Ket{0}^{v_2},\Ket{1}^{v_2}\right\}}\\ &\Ket{\Phi_{\left(\Ket{0}^{v_5},\Ket{0}^{v_4},\Ket{0}^{v_3},\Ket{0}^{v_2}\right)}^{1}}=\frac{1}{\sqrt{2}}\left(\Ket{0}^{R}\Ket{0}^{v_1}-\Ket{1}^R\Ket{1}^{v_1}\right)\\ &\downarrow{\text{Local isometry by }v_1}\\ &\frac{1}{\sqrt{2}}\left(\Ket{0}^{R}\Ket{0}^{v_1}+\Ket{1}^R\Ket{1}^{v_1}\right) \end{align*} where the Koashi-Imoto decomposition of the state for each step in the sequence~\eqref{eq:sequence} is shown after $\propto$ for the above states. While only the sequence of states for the measurement outcomes corresponding to $\Ket{0}$ is shown above, the sequences corresponding to the other outcomes can be calculated in the same way. Equation~\eqref{eq:ex3} shows that the five-qubit code can be decoded only by LOCC, \textit{i.e.,} without quantum communication. Note that, if the protocol in Theorem~\ref{thm:concentrating} is applied to quantum secret sharing, some subsets of the parties may extract partial knowledge about the shared secret of quantum information during the protocol, but this is the same situation as in the existing protocol in Reference~\cite{G8}. \end{implication} \chapter{\label{sec:multipartite}When does multipartite entanglement outperform bipartite entanglement?} This chapter aims at differentiating capabilities of multipartite entanglement and bipartite entanglement. To achieve this goal, Section~\ref{sec:def} introduces the tasks of system-size-limited quantum state preparation in the static and dynamic settings. The static setting is analyzed in Section~\ref{sec:analysis}, and the dynamic setting is analyzed in Section~\ref{sec:analysis2}.
\section{\label{sec:def}Definition of system-size-limited quantum state preparation} This section defines the tasks of system-size-limited quantum state preparation, in whose achievability a difference arises between states exhibiting multipartite entanglement and states consisting only of bipartite entanglement. These tasks are also illustrated in Figure~\ref{fig:multipartite_intro}. Consider a scenario where a multipartite system is distributed among spatially separated parties $v_1\,,\ldots,v_N\,$, and the local system size of each party is limited. Given a target set $S$ of multipartite states of this distributed system, the system-size-limited quantum state preparation for $S$ is a task of the parties transforming a shared common resource state stored within the limitation of local system sizes into an arbitrary state $\Ket{\psi}\in S$ by local operations and classical communication, where use of auxiliary systems is also limited within the limitation. The limitation on local system sizes is formulated as follows. Assume that each party $v_k\in\left\{v_1\,,\ldots,v_N\right\}$ has a quantum system represented by a Hilbert space $\overline{\mathcal{H}}^{v_k}$, whose dimension is \begin{equation} D^{\left(v_k\right)}\coloneq\dim\overline{\mathcal{H}}^{v_k}. \end{equation} The total system shared by the parties is denoted by \begin{equation} \overline{\mathcal{H}}\coloneq \bigotimes_{k=1}^{N}\overline{\mathcal{H}}^{v_k}. \end{equation} The configuration of system sizes for the parties is represented as a tuple \begin{equation} \boldsymbol{D}=\left(D^{\left(v_1\right)},\ldots,D^{\left(v_N\right)}\right). \end{equation} The parties store a common resource state within this configuration $\boldsymbol{D}$ of a given system $\overline{\mathcal{H}}$. This common resource state is to be transformed by LOCC into a state in a given target set, so that the state in the transformed form can be used for some given task.
In general, a common resource state for a set of multipartite states may be of a higher-dimensional system than the system for the set itself. Thus, states in the target set obtained from the common resource state by LOCC are states of a subspace $\mathcal{H}$ of $\overline{\mathcal{H}}$, where each party $v_k$ has a subsystem $\mathcal{H}^{v_k}$ of $\mathcal{H}$, that is, \begin{align} \mathcal{H}&\coloneq\bigotimes_{k=1}^N\mathcal{H}^{v_k},\\ \mathcal{H}^{v_k}&\subset\overline{\mathcal{H}}^{v_k},\quad\forall v_k. \end{align} The target set $S$ of states to be obtained from the common resource state is given in terms of $\mathcal{H}$. Each party $v_k$ may perform any unitary operations and any measurement on the system $\overline{\mathcal{H}}^{v_k}$, but $v_k$ is \textit{not} allowed to add auxiliary systems increasing the dimension of $\overline{\mathcal{H}}^{v_k}$. Measurements can be represented by quantum instruments, and while there exists a class of measurements called indirect measurements, which may require an auxiliary working quantum system in their implementation, the protocols investigated in this chapter require only projective measurements, which can be considered to be implementable without such an auxiliary system. For the completeness of the definition, $v_k$ may be allowed to implement an indirect measurement using a projective measurement and one auxiliary working qubit in addition to the system $\overline{\mathcal{H}}^{v_k}$ itself, where the auxiliary working qubit has to be traced out after each measurement. Note that the use of only one auxiliary working qubit is sufficient for implementing any indirect measurement~\cite{A13}. The parties can freely perform classical information processing and classical communication, which can be performed without using a quantum system.
Given a configuration of system sizes $\boldsymbol{D}$, assume in both the static setting and the dynamic setting that the parties can perform local operations on a limited-size quantum system in the above sense and classical communication, and this restricted LOCC is called \textit{LOCC within the configuration $\boldsymbol{D}$}. To compare multipartite and bipartite entanglement for the common resource states, two settings of system-size-limited quantum state preparation are defined in the following, one of which is called the \textit{static} setting, and the other the \textit{dynamic} setting. The task of system-size-limited quantum state preparation in the static setting is defined as follows. \begin{definition} \textit{System-size-limited quantum state preparation in the static setting} The system-size-limited quantum state preparation in the static setting for a configuration $\boldsymbol{D}$ of system sizes and a target set $S$ is a task of $N$ parties achieving the following: \begin{enumerate} \item The system $\overline{\mathcal{H}}$ shared by the parties is initialized as a common resource state $\Ket\phi\in\overline{\mathcal{H}}$ for $S$; \item A particular target state $\Ket{\psi}\in S$ is chosen from the target set $S$, and all the parameters of $\Ket{\psi}$ for its classical description are given to all the parties. Then, the parties perform LOCC within the configuration $\boldsymbol{D}$ to transform the common resource state $\Ket\phi$ into the chosen target state $\Ket\psi$ in the target set. \end{enumerate} \end{definition} Section~\ref{sec:analysis} analyzes properties of the common resource state $\Ket\phi$ for achieving a system-size-limited quantum state preparation, that is, whether the task is achievable when the common resource state $\Ket\phi$ is a state exhibiting multipartite entanglement or consisting only of bipartite entanglement.
In the dynamic setting, in addition to LOCC within a given configuration $\boldsymbol{D}$, any two parties $v_k$ and $v_{k^\prime}$ are also allowed to perform quantum communication. Each quantum communication from a party $v_k$ to another $v_{k^\prime}$ is called one \textit{round} of quantum communication. A protocol may include multiple rounds of quantum communication, and these multiple rounds are performed sequentially. When $v_k$ sends a state of a $D$-dimensional system to $v_{k^\prime}$ by quantum communication, it is required that $v_k$ initially stores the state to be sent in a $D$-dimensional subsystem of $\overline{\mathcal{H}}^{v_k}$, and $v_{k^\prime}$ initializes a $D$-dimensional subsystem of $\overline{\mathcal{H}}^{v_{k^\prime}}$ as a fixed state $\Ket{0}$, so that $v_{k^\prime}$ receives the state using this subsystem. After each quantum communication, the $D$-dimensional subsystem of $\overline{\mathcal{H}}^{v_k}$ is initialized as a fixed state $\Ket{0}$, so that $v_k$ can reuse this subsystem. Note that quantum communication between the parties is not allowed in the static setting and is allowed only in the dynamic setting. The task of system-size-limited quantum state preparation in the dynamic setting is defined as follows. \begin{definition} \textit{System-size-limited quantum state preparation in the dynamic setting} The system-size-limited quantum state preparation in the dynamic setting for a configuration $\boldsymbol{D}$ of system sizes and a target set $S$ is a task of $N$ parties achieving the following: \begin{enumerate} \item The parties prepare a common resource state $\Ket\phi\in\overline{\mathcal{H}}$ for $S$ by quantum communication in addition to LOCC within the configuration $\boldsymbol{D}$; \item A particular target state $\Ket{\psi}\in S$ is chosen from the target set $S$, and all the parameters of $\Ket{\psi}$ for its classical description are given to all the parties.
Then, the parties perform LOCC within the configuration $\boldsymbol{D}$ to transform the common resource state $\Ket\phi$ into the chosen target state $\Ket\psi$ in the target set. \end{enumerate} \end{definition} In this dynamic setting, the common resource state $\Ket\phi$ is deterministically prepared by finite rounds of quantum communication, and $\Ket\phi$ may be a state exhibiting multipartite entanglement. Note that common resource states in the dynamic setting are expected to have an intermediate capability between common resource states consisting only of bipartite entanglement and common resource states exhibiting multipartite entanglement in the static setting, since common resource states in the dynamic setting may exhibit multipartite entanglement but are prepared by only temporal uses of bipartite quantum communication resources. \section{\label{sec:analysis}System-size-limited quantum state preparation in the static setting} This section analyzes system-size-limited quantum state preparation in the static setting. It is shown in this section that there exist examples of system-size-limited quantum state preparation in the static setting which are achievable by a common resource state exhibiting multipartite entanglement but not by any common resource state consisting only of bipartite entanglement. To show such a nontrivial example, consider eight parties $v_1\,,\ldots,v_8$. The configuration of the parties' system sizes \begin{equation} \boldsymbol{D}_0=\left(D_0^{\left(v_1\right)},\ldots, D_0^{\left(v_8\right)}\right) \end{equation} is given by \begin{equation} \label{eq:d} \begin{split} D_0^{\left(v_k\right)}&=\dim\overline{\mathcal{H}}^{v_k} = 4,\; \dim\mathcal{H}^{v_k} = 2, \;\forall v_k\in\{v_1\,,\ldots,v_7\};\\ D_0^{\left(v_8\right)}&=\dim\overline{\mathcal{H}}^{v_8} =\dim\mathcal{H}^{v_8} = 2.
\end{split} \end{equation} For each $v_k\in\left\{v_1\,,\ldots,v_7\right\}$, consider the four-dimensional system $\overline{\mathcal{H}}^{v_k}$ to consist of two qubits, where the one for the target set is denoted by $\mathcal{H}^{v_k}$, and the other, an auxiliary qubit for common resource states, is denoted by $\mathcal{H}_\textup{a}^{v_k}$. As for $v_8\,$, $\overline{\mathcal{H}}^{v_8}$ is identical to $\mathcal{H}^{v_8}$. In the following, the systems may be written as \begin{align} \overline{\mathcal{H}}^{v_k}&=\mathcal{H}^{v_k}\otimes\mathcal{H}_\textup{a}^{v_k},\quad \forall v_k\in\left\{v_1\,,\ldots,v_7\right\},\\ \overline{\mathcal{H}}^{v_8}&=\mathcal{H}^{v_8}. \end{align} \begin{figure}\label{fig:target_set} \end{figure} \begin{figure}\label{fig:correspondence} \end{figure} \begin{figure}\label{fig:tree} \end{figure} Define a target set $S_0$ on \begin{equation} \mathcal{H}=\bigotimes_{k=1}^{N}\mathcal{H}^{v_k} \end{equation} as the set of all the possible output states of a quantum circuit illustrated in Figure~\ref{fig:target_set}. The circuit illustrated in Figure~\ref{fig:target_set} consists of seven two-qubit unitary gates \begin{equation} \exp\left(\textup{i}\alpha_k Z\otimes Z\right) \end{equation} parameterized by $\alpha_k\in\left\{\alpha_1\,,\ldots,\alpha_7\right\}$, where $0\leqq\alpha_k < 2\pi$ for each $k$. Let \begin{equation} \boldsymbol\alpha\coloneq\left(\alpha_1\,,\ldots,\alpha_7\right) \end{equation} denote the tuple of the seven parameters. The input to the circuit is an eight-qubit product state \begin{equation} \Ket{+}^{\otimes 8}\in\mathcal{H}, \end{equation} where \begin{equation} \Ket{+}\coloneq\frac{1}{\sqrt{2}}\left(\Ket{0}+\Ket{1}\right).
\end{equation} The target set $S_0$ consists of the eight-qubit output states of the circuit parameterized by $\boldsymbol\alpha$ for representing the gates in the circuit, that is, \begin{equation} \label{eq:s_0} S_0\coloneq\left\{\Ket{\psi\left(\boldsymbol\alpha\right)}\in\mathcal{H}:\boldsymbol\alpha=\left(\alpha_1\,,\ldots,\alpha_7\right)\right\}, \end{equation} where each qubit is placed at one of the parties, as illustrated in Figure~\ref{fig:target_set}. For example, for the parameters \begin{equation} \boldsymbol\alpha_0\coloneq\left(0,0,0,0,0,0,0\right), \end{equation} the state \begin{equation} \Ket{\psi\left(\boldsymbol\alpha_0\right)}=\Ket{+}^{\otimes 8}\in S_0 \end{equation} is a product state, since each gate in the circuit reduces to the identity map. In contrast, for the parameters \begin{equation} \boldsymbol\alpha_{\frac{\pi}{4}}\coloneq\left(\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4},\frac{\pi}{4}\right), \end{equation} the state \begin{equation} \Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}\in S_0 \end{equation} is a fully entangled state, since each gate $\exp\left(\textup{i}\frac{\pi}{4} Z\otimes Z\right)$ entangles $\Ket{+}\otimes\Ket{+}$. The configuration $\boldsymbol{D}_0$ and the target set $S_0$ defined above yield the following two theorems on the system-size-limited quantum state preparation. Theorem~\ref{thm:multipartite} shows achievability of the system-size-limited quantum state preparation using a common resource state exhibiting multipartite entanglement. In contrast, Theorem~\ref{thm:bipartite} is a no-go theorem on the same system-size-limited quantum state preparation for any common resource state consisting only of bipartite entanglement. These theorems suggest a difference in achievability of system-size-limited quantum state preparation between multipartite and bipartite entanglement.
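The contrast between $\boldsymbol\alpha_0$ and $\boldsymbol\alpha_{\frac{\pi}{4}}$ can be checked numerically on a single gate. The following is a minimal sketch in Python with NumPy, restricted to two qubits: it verifies that the gate with $\alpha=0$ leaves $\Ket{+}\otimes\Ket{+}$ a product state, while the gate with $\alpha=\frac{\pi}{4}$ entangles it.

```python
import numpy as np

Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def gate(alpha):
    # exp(i * alpha * Z (x) Z) is diagonal, so exponentiate its eigenvalues directly.
    return np.diag(np.exp(1j * alpha * np.diag(np.kron(Z, Z))))

def schmidt_rank(vec):
    # Schmidt rank across the cut between the two qubits.
    return np.linalg.matrix_rank(vec.reshape(2, 2))

pp = np.kron(plus, plus)

# alpha = 0: the gate reduces to the identity, leaving the product state |+>|+>.
print(schmidt_rank(gate(0.0) @ pp))        # 1
# alpha = pi/4: the gate entangles |+>|+>.
print(schmidt_rank(gate(np.pi / 4) @ pp))  # 2
```

The full eight-qubit states $\Ket{\psi\left(\boldsymbol\alpha\right)}$ are obtained by applying such gates according to the wiring of the circuit in Figure~\ref{fig:target_set}, which is not repeated here.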
\begin{theorem} \label{thm:multipartite} \textit{Multipartite entanglement for a system-size-limited quantum state preparation in the static setting.} The system-size-limited quantum state preparation in the static setting for the configuration $\boldsymbol{D}_0$ defined as Equation~\eqref{eq:d} and the target set $S_0$ defined as Equation~\eqref{eq:s_0} is achievable using a common resource state exhibiting multipartite entanglement. \end{theorem} \begin{theorem} \label{thm:bipartite} \textit{Bipartite entanglement for a system-size-limited quantum state preparation in the static setting.} The system-size-limited quantum state preparation in the static setting for the configuration $\boldsymbol{D}_0$ defined as Equation~\eqref{eq:d} and the target set $S_0$ defined as Equation~\eqref{eq:s_0} is \textit{not} achievable using any common resource state consisting of bipartite entanglement. \end{theorem} Note that while shallower quantum circuits having a similar structure to the circuit in Figure~\ref{fig:target_set} are not sufficient for proving the difference between multipartite and bipartite entanglement, the example in Theorems~\ref{thm:multipartite} and~\ref{thm:bipartite} is not necessarily the simplest, and other target sets of states having similar properties also exist. In particular, the following theorem shows another example, where the target set $S_1$ is of $2m$-qudit states, the size of each qudit is $D\geqq 2$, and each state in $S_1$ has the maximal Schmidt rank with respect to any bipartition between $m$ qudits and the other $m$ qudits. Random weighted graph states or random pure states fulfill this condition, for which the reduced states have almost maximum entropy for any bipartition~\cite{C20,P7,H15}. 
For any resource state consisting of bipartite entanglement to obtain states in $S_1$ by LOCC, or even by stochastic LOCC, there has to be at least one party whose local quantum system size for storing this resource state is almost quadratically larger than $D$, that is, greater than or equal to $D^{2-\frac{1}{m}}$. Also note that for some special configurations of local system sizes, these differences between multipartite and bipartite entanglement do not arise, especially in cases of~\cite{R8} \begin{equation} \dim\overline{\mathcal{H}}^{v_1}\geqq\prod_{k=2}^{N}\dim\overline{\mathcal{H}}^{v_k}. \end{equation} \begin{theorem} \label{prp:max_rank} \textit{Requirement for resource states consisting of bipartite entanglement for preparing a multipartite entangled state having maximal Schmidt ranks.} Consider a $2m$-qudit state $\Ket{\psi}\in\mathcal{H}\coloneq{\left(\mathbb{C}^D\right)}^{\otimes 2m}$ of local system size $D$ which has the maximal Schmidt rank with respect to bipartite cuts between any $m$ qudits and the other $m$ qudits; that is, for any such bipartite cut, the Schmidt rank is $D^m$. If $2m$ parties $v_1\,,\ldots,v_{2m}$ prepare $\Ket{\psi}$ by LOCC from any resource state only consisting of bipartite entanglement, then there has to exist at least one party $v\in V\coloneq\left\{v_1\,,\ldots,v_{2m}\right\}$ for which the local system size $\dim\overline{\mathcal{H}}^{v}$ for storing this resource state is almost quadratically larger, that is, \begin{equation} \label{eq:full_schmidt_rank} \max_{v_k\in V} \left\{\dim\overline{\mathcal{H}}^{v_k}\right\} \geqq D^{2- \frac{1}{m}}.
\end{equation} \end{theorem} Note that the lower bound of local system sizes in Inequality~\eqref{eq:full_schmidt_rank} is almost sufficient for fulfilling the necessary condition~\eqref{eq:schmidt_rank_condition} on the Schmidt ranks in the proof of Theorem~\ref{prp:max_rank} by storing a symmetric distribution of maximally entangled states shared between all pairs of the parties. In this case, since each party shares maximally entangled states with the other $2m-1$ parties, the maximally entangled state corresponding to each $e\in E$ satisfies \begin{equation} M_e=\left\lceil D^\frac{1}{m}\right\rceil, \end{equation} and the local system size for each $v\in V$ is \begin{equation} {\left\lceil D^\frac{1}{m}\right\rceil}^{2m-1}, \end{equation} where $\lceil{}\cdots{}\rceil$ is the ceiling function. The proofs of Theorems~\ref{thm:multipartite},~\ref{thm:bipartite}, and~\ref{prp:max_rank} are as follows. \begin{proof}[Proof of Theorem~\ref{thm:multipartite}] The proof is by construction of a common resource state exhibiting multipartite entanglement for the target set $S_0$. As a common resource state exhibiting multipartite entanglement, a class of graph states proposed in Reference~\cite{S18} can be used. A graph state~\cite{H12,H13} is a multi-qubit entangled state characterized by a graph $G=(V,E)$. 
Note that while graphs in this thesis also represent the distribution of bipartite entanglement, a graph state is a different concept, which is a state exhibiting multipartite entanglement obtained for a graph $G=(V,E)$ as follows: first, for each vertex $v_k\in V$, a qubit labeled as $v_k$ is initialized as \begin{equation} \Ket{+}^{v_k}\coloneq\frac{1}{\sqrt{2}}\left(\Ket{0}^{v_k}+\Ket{1}^{v_k}\right), \end{equation} and then, for each edge $e=\left\{v_k\,,v_{k^\prime}\right\}\in E$, the controlled-$Z$ gate \begin{equation} \label{eq:cz} CZ^{v_k\,,v_{k^\prime}} \coloneq{\left(\Ket{00}\Bra{00}+\Ket{01}\Bra{01}+\Ket{10}\Bra{10}-\Ket{11}\Bra{11}\right)}^{v_k\,,v_{k^\prime}} \end{equation} is applied to two qubits labeled as $v_k$ and $v_{k^\prime}$. Reference~\cite{S18} proposes an LOCC protocol for preparing any pure state of an arbitrary number of qubits by performing sequential projective measurements and local unitary corrections on a particular type of graph states. To see how this protocol works, consider the three-vertex graph shown in Figure~\ref{fig:correspondence} as a simple example. The graph state $\Ket{\Phi}^{v_1\,,v_2\,,v_3}$ represented by this graph is invariant under a local unitary transformation $X^{v_1}\otimes Z^{v_2}\otimes Z^{v_3}$, that is, \begin{equation} X^{v_1}\otimes Z^{v_2}\otimes Z^{v_3}\Ket{\Phi}^{v_1\,,v_2\,,v_3}=\Ket{\Phi}^{v_1\,,v_2\,,v_3}, \end{equation} where $X$ and $Z$ are the Pauli operators on a qubit. Thus, if the unitary operator $\exp\left(\textup{i}\alpha X^{v_1}\right)$ parameterized by $\alpha$ is performed on qubit $v_1\,$, the action is equivalent to \begin{equation} \begin{split} \exp\left(\textup{i}\alpha X^{v_1}\right)\otimes\mathbb{1}^{v_2}\otimes\mathbb{1}^{v_3}\Ket{\Phi}^{v_1\,,v_2\,,v_3} &=\mathbb{1}^{v_1}\otimes\exp\left(\textup{i}\alpha Z^{v_2}\otimes Z^{v_3}\right) \Ket{\Phi}^{v_1\,,v_2\,,v_3}, \end{split} \end{equation} which can be shown using the Taylor series of the exponential function.
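This invariance can be checked numerically. The following is a minimal sketch in Python with NumPy; that the graph of Figure~\ref{fig:correspondence} has $v_1$ adjacent to both $v_2$ and $v_3$ is an assumption made here for illustration.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def cz(n, a, b):
    # Controlled-Z between qubits a and b of an n-qubit register (a diagonal gate).
    d = np.ones(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[a] == 1 and bits[b] == 1:
            d[idx] = -1.0
    return np.diag(d)

def graph_state(n, edges):
    # Initialize every qubit as |+> and apply controlled-Z along each edge.
    psi = plus
    for _ in range(n - 1):
        psi = np.kron(psi, plus)
    for (a, b) in edges:
        psi = cz(n, a, b) @ psi
    return psi

# Assumed graph: v1 (qubit 0) adjacent to v2 (qubit 1) and v3 (qubit 2).
phi = graph_state(3, [(0, 1), (0, 2)])

# Stabilizer relation X^{v1} (x) Z^{v2} (x) Z^{v3} |Phi> = |Phi>.
stab = np.kron(X, np.kron(Z, Z))
print(np.allclose(stab @ phi, phi))  # True
```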
Then, it is straightforward to verify that if $\exp\left(\textup{i}\alpha X^{v_1}\right)$ and a measurement in $Z$ basis $\left\{\Ket{0},\Ket{1}\right\}$ are performed on the qubit $v_1\,$, the post-measurement state of two qubits $v_2$ and $v_3$ can be deterministically transformed by local unitary corrections $\mathbb{1}^{v_2}\otimes\mathbb{1}^{v_3}$ or $Z^{v_2}\otimes Z^{v_3}$ conditioned by the measurement outcome $\Ket{0}$ or $\Ket{1}$, respectively, into \begin{equation} \label{eq:psi_alpha} \Ket{\psi\left(\alpha\right)}^{v_2\,,v_3}\coloneq\exp\left(\textup{i}\alpha Z^{v_2}\otimes Z^{v_3}\right)\left(\Ket{+}^{v_2}\otimes\Ket{+}^{v_3}\right). \end{equation} In the same way, it is shown in Reference~\cite{S18} that any quantum circuit consisting of one-qubit Clifford gates and multi-qubit gates \begin{equation} \exp\left(\textup{i}\alpha Z\otimes Z\otimes\cdots\otimes Z\right) \end{equation} parameterized by $\alpha$ can be implemented by performing sequential projective measurements and local unitary corrections on a particular graph state corresponding to the quantum circuit. In addition, it is shown that any pure state of an arbitrary number of qubits is locally unitarily equivalent to a pure state generated by a quantum circuit consisting of these types of gates. As for the common resource state for the target set $S_0\,$, the fifteen-qubit graph state $\Ket{\Phi_\textup{res}}$ illustrated in Figure~\ref{fig:tree} held by the parties $v_1\,,\ldots,v_8$ can be used. In the same way as explained above, there exists a protocol for transforming the graph state $\Ket{\Phi_\textup{res}}$ in Figure~\ref{fig:tree} into any state $\Ket{\psi\left(\boldsymbol{\alpha}\right)}\in S_0$. 
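The measurement-based implementation of the gate $\exp\left(\textup{i}\alpha Z\otimes Z\right)$ described above can also be simulated directly. The following sketch in Python with NumPy, again assuming the graph with $v_1$ adjacent to $v_2$ and $v_3\,$, checks that, for both measurement outcomes on $v_1\,$, the corrected post-measurement state coincides with $\Ket{\psi\left(\alpha\right)}^{v_2\,,v_3}$ of Equation~\eqref{eq:psi_alpha} up to a global phase.

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
plus = (ket[0] + ket[1]) / np.sqrt(2)
minus = (ket[0] - ket[1]) / np.sqrt(2)

alpha = 0.7  # an arbitrary rotation angle

# Graph state for the assumed graph v1-v2, v1-v3:
# |Phi> = (|0>|+>|+> + |1>|->|->)/sqrt(2), qubit order (v1, v2, v3).
phi = (np.kron(ket[0], np.kron(plus, plus))
       + np.kron(ket[1], np.kron(minus, minus))) / np.sqrt(2)

# Rotate qubit v1: exp(i alpha X) = cos(alpha) 1 + i sin(alpha) X.
rot = np.cos(alpha) * np.eye(2) + 1j * np.sin(alpha) * X
psi = np.kron(rot, np.eye(4)) @ phi

# Target state exp(i alpha Z (x) Z)|+>|+> on qubits (v2, v3).
target = np.diag(np.exp(1j * alpha * np.diag(np.kron(Z, Z)))) @ np.kron(plus, plus)

ok = True
for outcome, correction in [(0, np.eye(4)), (1, np.kron(Z, Z))]:
    # Post-measurement state of (v2, v3) for outcome |0> or |1> on v1.
    post = psi.reshape(2, 4)[outcome]
    post = correction @ (post / np.linalg.norm(post))
    # Agreement with the target up to a global phase.
    ok = ok and bool(np.isclose(abs(np.vdot(target, post)), 1.0))
print(ok)  # True
```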
In this protocol, each of the parties $v_k\in\left\{v_1\,,\ldots,v_7\right\}$ performs a unitary $\exp\left(\textup{i}\alpha_k X^{v_k}\right)$ parameterized by $\alpha_k$ and a measurement in the $Z$ basis $\left\{\Ket{0},\Ket{1}\right\}$ on the auxiliary qubit represented by $\mathcal{H}_\textup{a}^{v_k}$, followed by local unitary corrections on qubits other than $\mathcal{H}_\textup{a}^{v_k}$ conditioned by the measurement outcome. After performing this protocol, the parties obtain $\Ket{\psi\left(\boldsymbol\alpha\right)}\in S_0$ deterministically for any parameters $\boldsymbol\alpha=\left(\alpha_1\,,\ldots,\alpha_7\right)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:bipartite}] In this proof, a necessary condition of a resource state consisting of bipartite entanglement for preparing a state $\Ket{\psi\left(\boldsymbol\alpha_{\frac{\pi}{4}}\right)}\in S_0$ by LOCC within the configuration $\boldsymbol D_0$ is first derived, and then, it is shown that any resource state consisting of bipartite entanglement for preparing $\Ket{\psi\left(\boldsymbol\alpha_{\frac{\pi}{4}}\right)}$ cannot satisfy this necessary condition. A necessary condition for preparing the state $\Ket{\psi\left(\boldsymbol\alpha_{\frac{\pi}{4}}\right)}\in S_0$ from a resource state consisting of bipartite entanglement by LOCC within the configuration $\boldsymbol D_0$ is derived as follows. Observe that the state $\Ket{\psi\left(\boldsymbol\alpha_{\frac{\pi}{4}}\right)}$ is fully entangled, that is, entangled with respect to any bipartition of the eight qubits. To prepare a fully entangled state, the resource state at party $v_8$ has to be entangled with some other party. Since \begin{equation} \dim \overline{\mathcal{H}}^{v_8}=2, \end{equation} the party $v_8$ can store only one qubit of a bipartite resource state entangled with another party, which is labeled \begin{equation} u_7\in\{v_1\,,\ldots,v_7\}.
\end{equation} The quantum system $\overline{\mathcal{H}}^{u_7}$ at $u_7$ is decomposed into \begin{equation} \overline{\mathcal{H}}^{u_7}=\mathcal{H}^{u_7}_{\{u_7\,,v_8\}}\otimes\mathcal{H}^{u_7}_\textup{r}, \end{equation} where $\mathcal{H}^{u_7}_{\{u_7\,,v_8\}}$ is a system of more than one dimension for the bipartite entangled resource state shared with $v_8\,$, and $\mathcal{H}^{u_7}_\textup{r}$ the remaining quantum system. It is necessary that \begin{equation} \label{eq:dim} \begin{split} &\dim\mathcal{H}^{u_7}_{\{u_7\,,v_8\}}=2,\\ &\dim\mathcal{H}^{u_7}_\textup{r}=2, \end{split} \end{equation} which can be shown by contradiction as follows. Assume that \begin{equation} \dim\mathcal{H}^{u_7}_{\{u_7\,,v_8\}}>2. \end{equation} Then, it is necessary that \begin{equation} \dim\mathcal{H}^{u_7}_\textup{r}<2, \end{equation} and the party $u_7$ cannot store any qubit of a resource state entangled with the parties other than $v_8\,$, so that the pair of the parties $u_7$ and $v_8$ shares no entanglement with the rest. This contradicts the assumption that a fully entangled state can be prepared, and Equation~\eqref{eq:dim} is shown. Since it holds that \begin{equation} \dim\mathcal{H}^{u_7}_\textup{r}=2, \end{equation} the party $u_7$ can store another single qubit of a bipartite resource state entangled with a party other than $v_8\,$, which is labeled \begin{equation} u_6\in\{v_1\,,\ldots,v_7\}\setminus\{u_7\}. \end{equation} By iterating this argument, any resource state consisting of bipartite entanglement for preparing a fully entangled state by LOCC within the configuration $\boldsymbol{D}_0$ is required to be seven two-qubit entangled states shared between $u_1$--$u_2\,$, $\ldots$, $u_6$--$u_7\,$, and $u_7$--$v_8\,$, respectively, where \begin{equation} \label{eq:perm} (u_1\,,\ldots,u_7)~\text{is a permutation of}~(v_1\,,\ldots,v_7).
\end{equation} Note that although $u_1$ uses only one qubit in this case, the remaining system of $u_1\,$, which is two dimensional, cannot be used for sharing an entangled state with the other parties, since there is no dimension left in the quantum systems of the other parties. Therefore, the distribution of the two-qubit entangled states is represented by a line-topology graph, as illustrated in Figure~\ref{fig:permutation}. Note that this line-topology graph is a tree. Since the target set $S_0$ includes a fully entangled state $\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$, it is necessary that any common resource state consisting of bipartite entanglement for $S_0$ within the configuration $\boldsymbol D_0$ is a state consisting of seven two-qubit entangled states represented by the line-topology tree as shown in Figure~\ref{fig:permutation}. \begin{figure}\label{fig:permutation} \end{figure} It is shown that the state $\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ cannot be prepared from any such resource state as follows. Since any two-qubit entangled state can be obtained by LOCC from a Bell state \begin{equation} \frac{1}{\sqrt{2}}\left(\Ket{0}\otimes\Ket{0}+\Ket{1}\otimes\Ket{1}\right), \end{equation} it suffices to consider resource states consisting of seven Bell states represented by the line-topology tree. Thus, the condition on the Schmidt ranks given in Lemma~\ref{lem:graph_associated} implies that the state $\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ can be prepared from resource states consisting of seven Bell states represented by a line-topology tree if and only if for any edge $e$ of the line-topology tree \begin{equation} R_e\left(\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}\right)\leqq 2, \end{equation} where the notations are the same as those in Lemma~\ref{lem:graph_associated}. 
In other words, the Schmidt rank of $\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ with respect to each edge of the line-topology tree needs to be smaller than or equal to two. However, the explicit calculation of $R_e\left(\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}\right)$ for all the edges $e$ of all the $7!=5040$ different trees obtained from the permutations of $v_1\,,\ldots,v_7$ in Equation~\eqref{eq:perm} shows that, for any of the permutations, there exists an edge $e$ such that \begin{equation} \label{eq:r_e} R_e\left(\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}\right)>2. \end{equation} The Schmidt rank $R_e\left(\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}\right)$ in Inequality~\eqref{eq:r_e} can be exactly calculated with the help of a computer program. Although computers cannot calculate irrational numbers exactly, the Schmidt rank $R_e\left(\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}\right)$ of a vector $\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ with irrational elements can be reduced to that of a vector only with integer elements. To remove irrational coefficients for normalization of the state $\Ket{+}$ and the gates $\exp\left(\textup{i}\frac{\pi}{4}Z\otimes Z\right)$, substitute $\Ket{+}$ and $\exp\left(\textup{i}\frac{\pi}{4}Z\otimes Z\right)$ in the circuit in Figure~\ref{fig:target_set} with $\sqrt{2}\Ket{+}$ and $\sqrt{2}\exp\left(\textup{i}\frac{\pi}{4}Z\otimes Z\right)$, respectively. The resulting vector \begin{equation} \Ket{\tilde\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}\coloneq 2^{\frac{15}{2}}\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)} \end{equation} has the same Schmidt ranks as $\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ for any bipartition, and all the elements of $\Ket{\tilde\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ are complex numbers whose real and imaginary parts are both integers by construction. 
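The Schmidt rank across a bipartition equals the rank of the amplitude tensor reshaped into a matrix, with one side's indices as rows and the other side's as columns. A minimal sketch of such a check follows (a generic helper, not the program used for the actual verification; for the integer-valued vector $\Ket{\tilde\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$, the floating-point rank below would be replaced by an exact rank computation over the Gaussian integers):

```python
import numpy as np

def schmidt_rank(state, dims, subset):
    """Schmidt rank of a pure state across the bipartition (subset | rest).

    state  : 1-D array of amplitudes (length = prod(dims))
    dims   : local dimensions of the subsystems, in order
    subset : indices of the subsystems on one side of the cut
    """
    n = len(dims)
    rest = [k for k in range(n) if k not in subset]
    psi = np.asarray(state).reshape(dims)
    # Group the axes of one side into rows, the other side into columns.
    m = np.transpose(psi, list(subset) + rest).reshape(
        int(np.prod([dims[k] for k in subset])), -1)
    return int(np.linalg.matrix_rank(m))

# A Bell state on two qubits has Schmidt rank 2 across the 1|2 cut.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(schmidt_rank(bell, [2, 2], [0]))  # -> 2
```

A product state such as $\Ket{0}\otimes\Ket{0}$ gives rank 1 under the same cut, matching the fact that the rank is 1 exactly for unentangled bipartitions.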
Therefore, Schmidt ranks of $\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ can be exactly obtained by calculating those of $\Ket{\tilde\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ by computer. The calculation of $R_e\left(\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}\right)$ implies that the state $\Ket{\psi\left(\boldsymbol\alpha_\frac{\pi}{4}\right)}$ cannot be prepared from any resource state consisting of the seven Bell states. Due to this calculation, it is concluded that there exists no common resource state consisting of bipartite entanglement for the target set $S_0$ within the configuration $\boldsymbol D_0$. \end{proof} \begin{proof}[Proof of Theorem~\ref{prp:max_rank}] Since any bipartite state can be obtained from a maximally entangled state, it suffices to evaluate $\dim\overline{\mathcal{H}}^{v_k}$ for storing a resource state consisting of bipartite maximally entangled states distributed according to the complete graph $K=\left(V,E\right)$, that is, the fully connected graph for the $2m$ parties. Let $M_e\in\left\{1,2,\ldots\right\}$ denote the Schmidt rank of the maximally entangled state for each edge $e\in E$. First, a lower bound of the total system size for storing $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$, that is, $\prod_{e\in E}{\left(M_e\right)}^2$, is derived. Consider an edge cut $C$~\cite{B22} of $K$ between any $m$ vertices and the other $m$ vertices. Since the Schmidt rank is monotonically nonincreasing under LOCC, it is necessary that, for any $C$, \begin{equation} \label{eq:schmidt_rank_condition} \prod_{e\in C}M_e\geqq D^m. \end{equation} Considering Inequality~\eqref{eq:schmidt_rank_condition} for all the \begin{equation} {{2m\choose m}}/2 \end{equation} possible choices of $C$ between any $m$ vertices and the other $m$ vertices and taking the products of the right- and left-hand sides of these inequalities yield \begin{equation} \prod_{C} \prod_{e\in C} M_e\geqq D^{m{\frac{{2m\choose m}}{2}}}. 
\end{equation} Since $M_e$ for each $e \in E$ appears ${2m-2\choose m-1}$ times in the product on the left-hand side, the last inequality can be written as \begin{equation} \prod_{C} \prod_{e\in C} M_e =\prod_{e\in E}{\left(M_e\right)}^{{2m-2}\choose{m-1}}\geqq D^{m\frac{{2m\choose m}}{2}}. \end{equation} Therefore, raising both sides to the power $2\big/{2m-2\choose m-1}$ and using $m{2m\choose m}\big/{2m-2\choose m-1}=2\left(2m-1\right)$, a lower bound of the total system size is \begin{equation} \prod_{e\in E}{\left(M_e\right)}^2\geqq D^{2\left(2m-1\right)}. \end{equation} Since the total system size for storing $\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}^e$ is written as \begin{equation} \dim\overline{\mathcal{H}}=\prod_{v_k\in V}\dim\overline{\mathcal{H}}^{v_k}, \end{equation} it holds that \begin{equation} \prod_{v_k\in V}\dim\overline{\mathcal{H}}^{v_k}\geqq\prod_{e\in E}{\left(M_e\right)}^2\geqq D^{2\left(2m-1\right)}. \end{equation} Therefore, it is obtained that \begin{equation} \max_{v_k\in V} \left\{\dim\overline{\mathcal{H}}^{v_k}\right\}\geqq{\left(\prod_{v_k\in V}\dim\overline{\mathcal{H}}^{v_k}\right)}^{\frac{1}{2m}}\geqq D^{2-\frac{1}{m}}, \end{equation} which yields the conclusion. \end{proof} \section{\label{sec:analysis2}System-size-limited quantum state preparation in the dynamic setting} This section analyzes the difference in system-size-limited quantum state preparation appearing in the dynamic setting. Before analyzing multipartite cases, a simpler bipartite case is discussed to clarify the difference between the static setting and the dynamic setting. Consider two parties $v_1$ and $v_2\,$, where each party has two qubits; that is, the configuration $\left(D^{\left(v_1\right)},D^{\left(v_2\right)}\right)$ is given by \begin{align} D^{\left(v_1\right)}&=\dim\overline{\mathcal{H}}^{v_1} = 4,\\ D^{\left(v_2\right)}&=\dim\overline{\mathcal{H}}^{v_2} = 4. \end{align} In this case, these two parties can store an entangled resource state with Schmidt rank four in the static setting.
However, in the dynamic setting, the parties can prepare an entangled resource state with Schmidt rank at most two, which is shown as follows. Consider any shared state $\Ket{\phi}^{v_1\,,v_2}$ after the last round of quantum communication for preparing $\Ket{\phi}^{v_1\,,v_2}$, where it is assumed that the direction of the quantum communication in the last round is from $v_1$ to $v_2$ without loss of generality. Since the quantum communication sends out at least one qubit from $v_1\,$, the rank of $v_1$'s reduced state for $\Ket{\phi}^{v_1\,,v_2}$ is at most two; that is, the Schmidt rank of $\Ket{\phi}^{v_1\,,v_2}$ is at most two. Since the Schmidt rank is monotonically nonincreasing by LOCC, $v_1$ and $v_2$ after the last round of quantum communication cannot prepare an entangled resource state with Schmidt rank more than two, which yields the conclusion. Although this two-party example is trivial, nontrivial cases of more than two parties are shown as follows. Theorem~\ref{prp:bipartite_dynamic} shows that the common resource states available in the dynamic setting can still have more capability than any common resource state consisting of bipartite entanglement in the static setting, as well as the common resource states exhibiting multipartite entanglement in the static setting. In contrast, Theorem~\ref{prp:multipartite_dynamic} shows the existence of common resource states which cannot be prepared in the dynamic setting by the parties within a limitation of local system sizes, while these common resource states can still be stored within the limitation in the static setting. This implies that, in this case, the common resource states in the dynamic setting have less capability than a common resource state exhibiting multipartite entanglement in the static setting.
\begin{figure}\label{fig:multipartite} \end{figure} \begin{theorem} \label{prp:bipartite_dynamic} \textit{A common resource state in the dynamic setting having more capability than any common resource state consisting of bipartite entanglement.} The state $\Ket{\Phi_\textup{res}}$ in the proof of Theorem~\ref{thm:multipartite} and in Figure~\ref{fig:tree} can be used as a common resource state for achieving the system-size-limited quantum state preparation in the dynamic setting for the configuration $\boldsymbol{D}_0$ defined as Equation~\eqref{eq:d} and the target set $S_0$ defined as Equation~\eqref{eq:s_0}, while the system-size-limited quantum state preparation in the static setting for $\boldsymbol{D}_0$ and $S_0$ cannot be achieved by any common resource state consisting of bipartite entanglement due to Theorem~\ref{thm:bipartite}. \end{theorem} \begin{theorem} \label{prp:multipartite_dynamic} \textit{Common resource states exhibiting multipartite entanglement which cannot be prepared in the dynamic setting.} Consider four parties $v_1\,$, $v_2\,$, $v_3\,$, and $v_4$. Given a configuration $\boldsymbol{D}_1=\left(D_1^{\left(v_1\right)},D_1^{\left(v_2\right)},D_1^{\left(v_3\right)},D_1^{\left(v_4\right)}\right)$, where \begin{align} D_1^{\left(v_1\right)}&=\dim\overline{\mathcal{H}}^{v_1} = 4,\\ D_1^{\left(v_k\right)}&=\dim\overline{\mathcal{H}}^{v_k} = 2, \; \forall v_k\in\left\{v_2\,,v_3\,,v_4\right\}, \end{align} any fully entangled common resource state $\Ket{\phi}\in\overline{\mathcal{H}}$ whose Schmidt rank with respect to the bipartition between $v_1$ and $v_2 v_3 v_4$ is more than two cannot be prepared in the dynamic setting, although there exists such a common resource state which can be stored in the static setting. \end{theorem} Note that under the limitation in Theorem~\ref{prp:multipartite_dynamic}, the parties can prepare any state whose Schmidt rank with respect to the bipartition between $v_1$ and $v_2 v_3 v_4$ is not more than two. 
This is because $v_1$'s reduced state can be represented by one qubit in this case, and hence, the parties can perform quantum communication to bring arbitrary two qubits to $v_1$ to perform any two-qubit gates. As for another remark, while it is assumed in the definition of the dynamic setting that quantum communication is performed sequentially, one can also consider simultaneous quantum communication between two parties, which is considered as a swap operation between the two. However, this simultaneous quantum communication yields a trivial result since the parties under the limitation in Theorem~\ref{prp:multipartite_dynamic} can prepare any state $\Ket{\Phi}\in\overline{\mathcal{H}}$ using swap operations for letting $v_1$ perform arbitrary two-qubit gates. \begin{proof}[Proof of Theorem~\ref{prp:bipartite_dynamic}] It is shown that the common resource state $\Ket{\Phi_\textup{res}}$ in the proof of Theorem~\ref{thm:multipartite} and in Figure~\ref{fig:tree} can be prepared by the parties using quantum communication in addition to LOCC within the configuration $\boldsymbol{D}_0$. The protocol for preparing $\Ket{\Phi_\textup{res}}$ is represented by a quantum circuit illustrated in Figure~\ref{fig:multipartite}. In this circuit, the parties repeatedly perform $CZ$ gates defined as Equation~\eqref{eq:cz} to entangle qubits initialized as $\Ket{+}$, distribute one qubit of the entangled state by quantum communication, and perform a $CZ$ gate again to entangle the remaining part of the entangled state with another qubit initialized as $\Ket{+}$. After this protocol, the state $\Ket{\Phi_\textup{res}}$ is shared among the parties $v_1\,,\ldots,v_8$. \end{proof} \begin{proof}[Proof of Theorem~\ref{prp:multipartite_dynamic}] The proof is given in a similar way to the example given at the beginning of this section. 
Consider any fully entangled state $\Ket{\phi}^{v_1\,,v_2\,,v_3\,,v_4}$ shared among $v_1\,$, $v_2\,$, $v_3\,$, and $v_4$ after the last round of quantum communication for preparing $\Ket{\phi}^{v_1\,,v_2\,,v_3\,,v_4}$. The direction of the quantum communication in the last round is one of the following three possibilities: \begin{enumerate} \item from $v_1$ to $v_k$ where $k\in\left\{2,3,4\right\}$; \item from $v_k$ to $v_{k^\prime}$ where $k,k^\prime\in\left\{2,3,4\right\}$ and $k\neq k^\prime$; \item from $v_k$ to $v_1$ where $k\in\left\{2,3,4\right\}$. \end{enumerate} Since $\Ket{\phi}^{v_1\,,v_2\,,v_3\,,v_4}$ is fully entangled, the latter two possibilities 2 and 3, which lead to a product state between $v_k$ and the others, are excluded. Regarding possibility 1, after sending at least one qubit from $v_1$ to $v_k\,$, the rank of $v_1$'s reduced state for $\Ket{\phi}^{v_1\,,v_2\,,v_3\,,v_4}$ is at most two; that is, the Schmidt rank of $\Ket{\phi}^{v_1\,,v_2\,,v_3\,,v_4}$ with respect to the bipartition between $v_1$ and $v_2 v_3 v_4$ is at most two. Since the Schmidt rank is monotonically nonincreasing by LOCC, the parties after the last round of quantum communication cannot prepare any common resource state whose Schmidt rank with respect to the bipartition between $v_1$ and $v_2 v_3 v_4$ is more than two, which yields the conclusion. \end{proof} \part{\label{part:conclusion}Conclusion and outlook} \chapter{\label{sec:summary_1}Conclusion of Part~\ref{part:1}} Part~\ref{part:1} analyzed entanglement cost, or equivalently, quantum communication cost under LOCC, required for one-shot quantum state merging, aimed at investigating properties of transferring quantum information between two parties on small and intermediate scales. The following two results are obtained in this part.
Being complementary to existing protocols achieving nearly optimal one-shot state merging on a large scale, these results open the way to another direction for future research on small and intermediate scales. \section*{Quantum state merging for arbitrarily small-dimensional systems} Chapter~\ref{sec:merge} constructed protocols for one-shot state merging under one-way LOCC, which work for any state of an arbitrarily small-dimensional system and satisfy arbitrarily high fidelity requirements. The protocols retain the essential feature of state merging; that is, entanglement cost can be reduced by exploiting a structure of a given state. This feature arises because the Koashi-Imoto decomposition of the given state shows the classical part, the quantum part, and the redundant part of the state, and entanglement can be gained from the redundant part by entanglement distillation, while the classical part can be merged at zero entanglement cost by a measurement followed by classical communication of the measurement outcome. In these protocols, it is crucial to coherently combine different subprocesses, namely, entanglement distillation from the redundant part and quantum teleportation of the quantum part, using controlled measurements and controlled isometries. In addition to achievability bounds for an arbitrarily small-dimensional system derived from the protocols for exact state merging, an improved converse bound on the entanglement cost in exact state merging is shown, and this bound is proven to be optimal when a purification of the state to be merged is a three-qubit state. These results on exact state merging can also be extended to its approximate versions by means of smoothing~\cite{R2,T5,T11}, while exact state merging suffices in cases relevant to distributed quantum information processing, such as the cases of code states for quantum error correcting codes~\cite{G,D,T10,B}.
These results yield protocols for one-shot quantum state merging applicable even to small- and intermediate-scale states, and further research will be needed to establish general strategies for state merging achieving both small-scale applicability and asymptotic optimality. \section*{One-shot quantum state merging under one-way and two-way communication} Chapter~\ref{sec:two_way} proved that the minimal entanglement cost in state merging under \textit{one-way} LOCC and that under \textit{two-way} LOCC can be different in a one-shot scenario, while they have been shown to coincide in the asymptotic scenario. The analysis in Chapter~\ref{sec:two_way} employs the interconnection between state merging and local state discrimination to demonstrate a provable separation between one-way LOCC and two-way LOCC in state merging, whose \textit{asymptotically non-surviving} property is different from the known separations in Table~\ref{table:compare}. Based on this interconnection, state merging and local state discrimination can also be interpreted as distributed decoding of nonlocally encoded information. These results suggest that in state merging from $A$ to $B$ under a one-shot regime, preprocessing of quantum side information at $B$ and backward classical communication from $B$ to $A$ may increase usability of the quantum side information for reducing entanglement cost of protocols, while further research will be needed to construct general protocols for one-shot state merging using two-way communication. Even if constructing an optimal two-way protocol for one-shot quantum state merging turns out to be challenging, the difference between one-way and two-way LOCC in entanglement cost in state merging may also appear in the framework of second-order asymptotic analysis~\cite{T9}.
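For context, the first-order term in such asymptotic expansions is governed by the conditional entropy $H\left(A|B\right)_\psi=H\left(\rho^{AB}\right)-H\left(\rho^{B}\right)$ of the state to be merged. A minimal numerical sketch follows (the example state and helper names are illustrative, not from the text):

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy H(rho) = -tr(rho log2 rho)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]                      # discard numerical zeros
    return float(-np.sum(ev * np.log2(ev)))

def conditional_entropy(psi, dims):
    """H(A|B) = H(rho^AB) - H(rho^B) for a pure state |psi>^{RAB}.

    psi  : 1-D amplitude vector; dims = (dR, dA, dB).
    """
    dR, dA, dB = dims
    t = np.asarray(psi).reshape(dR, dA, dB)
    # rho^{AB}: trace out R;  rho^{B}: trace out R and A.
    rho_ab = np.einsum('rab,rcd->abcd', t, t.conj()).reshape(dA * dB, dA * dB)
    rho_b = np.einsum('rab,rac->bc', t, t.conj())
    return entropy(rho_ab) - entropy(rho_b)

# |psi> = |Phi+>^{RA} (x) |0>^B : B holds no side information, H(A|B) = 1.
psi = np.zeros(8)
psi[0] = psi[6] = 1 / np.sqrt(2)             # basis index r*4 + a*2 + b
print(round(conditional_entropy(psi, (2, 2, 2)), 6))  # -> 1.0
```

For a GHZ state across $R$, $A$, and $B$, the same function returns $H\left(A|B\right)=0$, reflecting that $B$'s side information makes the merging asymptotically free.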
In particular, entanglement cost of non-catalytic approximate state merging of $\Ket{\psi}^{RAB}$ within $\epsilon$ may be in the form of \begin{equation} \min\left\{\log_2 K\right\}=n{H\left(A|B\right)}_\psi+\sqrt{n}C\left(\epsilon\right)+\mathcal{O}\left(\log n\right), \end{equation} where the minimum is taken over all the protocols achieving approximate state merging of ${\left(\Ket{\psi}^{RAB}\right)}^{\otimes n}$ within $\epsilon$, and $C\left(\epsilon\right)$ is a function of $\epsilon$ for the coefficient of the second term. Whereas whether one-way or two-way LOCC is allowed in quantum state merging does not affect the first coefficient ${H\left(A|B\right)}_\psi\,$, it is left as an open question whether $C\left(\epsilon\right)$ is affected or not. \chapter{\label{sec:summary_2}Conclusion of Part~\ref{part:2}} Part~\ref{part:2} analyzed properties of multipartite entanglement in distributed quantum information processing. The following two results are obtained in this part. These results facilitate operational understanding and efficient use of multipartite entanglement in the context of distributed quantum information processing. \section*{Distributed encoding and decoding of quantum information over networks} Chapter~\ref{sec:distributed_encoding_decoding} quantitatively characterized nonlocal properties of multipartite quantum transformations for encoding and decoding quantum information in a multipartite system in terms of the entanglement cost. 
For any tree-topology network connecting spatially separated parties $v_1\,,\ldots,v_N\,$, the entanglement costs required for performing an isometry $U:\mathcal{H}\to\bigotimes_{k=1}^{N}\tilde{\mathcal{H}}^{v_k}$ representing encoding and the inverse $U^\dag:\bigotimes_{k=1}^{N}\tilde{\mathcal{H}}^{v_k}\to\mathcal{H}$ representing decoding are evaluated, where the system $\mathcal{H}$ for logical states is located at one of the parties and each subsystem $\tilde{\mathcal{H}}^{v_k}$ for physical states is located at each party $v_k$. Regarding the encoding, a protocol for spreading quantum information is constructed, and this protocol is proven to achieve the optimal entanglement cost. As for the decoding, a protocol for concentrating quantum information is also constructed, and this protocol can reduce the entanglement cost compared to that of spreading quantum information. Hence, while $U$ and $U^\dag$ are inverses of each other, a bound is derived for quantitatively differentiating the nonlocal properties of $U$ for encoding and $U^\dag$ for decoding in terms of entanglement cost. Applications of these protocols to multiparty tasks are also demonstrated, such as one-shot distributed source compression~\cite{D8,D9,A8} and LOCC-assisted decoding in quantum secret sharing~\cite{G8}. The concept of encoding and decoding represented by isometries has pivotal roles not only in quantum information science but also beyond it, and further investigation of applications within and beyond quantum information science is left for future works. \section*{When does multipartite entanglement outperform bipartite entanglement?} Chapter~\ref{sec:multipartite} introduced and analyzed the task of system-size-limited quantum state preparation for comparing multipartite and bipartite entanglement from the viewpoint of local quantum system sizes of the parties in the distributed settings.
Introducing limitations on the size of the local system of each party, Chapter~\ref{sec:multipartite} analyzes the capabilities of common resource states exhibiting multipartite entanglement for a given target set of quantum states and those consisting of bipartite entanglement. By showing nontrivial examples, the capabilities of these common resource states are differentiated in terms of achievability of the system-size-limited quantum state preparations for the same target set in the static setting, where a common resource state has to be stored within a given limitation of local system sizes. In addition to this static setting, the dynamic setting is considered, where the parties may use a common resource state exhibiting multipartite entanglement, but this common resource state has to be prepared by temporal uses of bipartite quantum communication resources within the limitation of local system sizes. As for the dynamic setting, examples shown in Chapter~\ref{sec:multipartite} imply that common resource states in the dynamic setting have an intermediate capability between the common resource states exhibiting multipartite entanglement and those consisting of bipartite entanglement. These results provide examples indicating that multipartite entanglement outperforms bipartite entanglement when limitations on the local system sizes matter in both the static setting and the dynamic setting. Further research will be needed to establish more general connections between the system sizes for common resource states and properties differentiating multipartite and bipartite entanglement. \chapter{Concluding remarks and outlook} This thesis has established a paradigm for investigating multipartite entanglement based on distributed quantum information processing over networks, progressing beyond applications of resource-theoretic analyses based on the state convertibility introducing the partial order of entanglement in the LOCC framework.
In particular, properties of multipartite entanglement are characterized in terms of entanglement costs and local quantum system sizes required for distributed quantum information processing. For such characterization, this thesis has constructed and analyzed the protocols for one-shot quantum state merging and splitting applicable to arbitrarily small-dimensional quantum systems, which can be used as fundamental building blocks of further theoretical analyses and experimental implementations of distributed quantum information processing. Besides applying the framework of distributed quantum information processing over networks established in this thesis to further investigations of multipartite entanglement, the obtained results in this thesis are also related to the following open questions in broader research fields. \section*{One-shot quantum information theory on small and intermediate scales} Chapter~\ref{sec:merge} discusses the cases where asymptotically optimal protocols in one-shot quantum information theory are not necessarily efficient on small and intermediate scales relevant to distributed quantum information processing. As for one-shot quantum state merging, answers to the following questions are beneficial to constructing more efficient protocols on the small and intermediate scales than those obtained in Chapter~\ref{sec:merge}: Under what conditions do the obtained protocols for one-shot quantum state merging based on the Koashi-Imoto decomposition become optimal, and more generally, how is the minimal cost of one-shot state merging on the small and intermediate scales characterized? \section*{Entanglement catalysis} As discussed in Chapter~\ref{sec:two_way}, there are tasks whose achievability is affected by the catalytic use of entanglement, and further investigations of this entanglement catalysis may lead to advantageous distributed quantum information processing and better understanding of quantum entanglement.
How is it possible in the catalytic setting to prove the asymptotically non-surviving separation between one-way LOCC and two-way LOCC in one-shot quantum state merging shown in Chapter~\ref{sec:two_way}, or conversely, does this separation disappear by allowing catalytic use of entanglement? \section*{Characterization of properties of multipartite entanglement beyond quantification} While entanglement costs in spreading and concentrating quantum information are defined for a general network, it is crucial to consider tree-topology networks in the evaluation of the entanglement costs in Chapter~\ref{sec:distributed_encoding_decoding}. If entanglement costs can be evaluated for a more general class of networks than tree-topology networks, the difference in the entanglement costs arising from topologies of the given networks may provide a characterization of multipartite entanglement based on not only quantities but also the network topologies. Is there a class of networks other than tree-topology networks over which such evaluation of entanglement costs is possible? \section*{Causality and entanglement in communication tasks} Chapter~\ref{sec:multipartite} shows cases where tasks using multipartite entanglement cannot be achieved using bipartite entanglement. Combination of bipartite entanglement with classical communication between two parties achieves quantum communication by means of quantum teleportation, where entanglement can be regarded as a spatial resource shared between spatially separated parties, and classical communication can be regarded as a temporal resource introducing a temporal causal order into distributed quantum information processing. As for multipartite cases, no analogous correspondence between spatial resources of states exhibiting multipartite entanglement and temporal resources for achieving communication tasks is known in general.
Is there a situation where a resource state exhibiting multipartite entanglement can also be interpreted as a resource for a multipartite version of some quantum communication task, and if such a situation exists, what serves as a multipartite temporal resource introducing causality among multiple parties, corresponding to the multipartite spatial resource, \textit{i.e.}, multipartite entanglement? \appendix \part*{Appendix} \chapter{\label{sec:koashi_imoto}How to obtain Koashi-Imoto decomposition} This appendix demonstrates how to obtain the Koashi-Imoto decomposition of a given tripartite pure state $\Ket{\psi}^{RAB}$ shown in Lemma~\ref{lem:koashi_imoto_decomposition_tripartite}. The Koashi-Imoto decomposition of $\Ket{\psi}^{RAB}$ follows from that of the corresponding set $S_\psi^{A|R}$ defined as Equation~\eqref{eq:psi_lambda}, as discussed in Section~\ref{sec:decomposition}. This appendix summarizes an algorithm shown in Reference~\cite{K3} for obtaining the Koashi-Imoto decomposition of any given set of states and provides an example of how to obtain the Koashi-Imoto decomposition of a given tripartite pure state using this algorithm. The algorithm shown in Reference~\cite{K3} works by iteratively refining decompositions of the Hilbert space $\mathcal{H}^A$ in the form of \begin{equation} \label{eq:decomposition_form} \mathcal{H}^A=\bigoplus_{j=0}^{J-1}\mathcal{H}^{a_j^\textup{L}}\otimes\mathcal{H}^{a_j^\textup{R}}. \end{equation} For a decomposition in this form, let $\Pi^{a_j^\textup{L}}$ and $\Pi^{a_j^\textup{R}}$ denote the projectors onto $\mathcal{H}^{a_j^\textup{L}}$ and $\mathcal{H}^{a_j^\textup{R}}$, respectively. The degree of refinement is evaluated by an index $r$ defined for the decomposition in the form of Equation~\eqref{eq:decomposition_form} as \begin{equation} r\coloneq\frac{1}{2}\left(\sum_{j=0}^{J-1}\dim\mathcal{H}^{a_j^\textup{R}}\right)\left(\sum_{j=0}^{J-1}\dim\mathcal{H}^{a_j^\textup{R}}+1\right)-J+1.
\end{equation} The algorithm begins by regarding $\mathcal{H}^A$ as \begin{equation} \mathcal{H}^A=\mathcal{H}^{a_0^\textup{L}}, \end{equation} where $J=1$, the index is initially \begin{equation} r=1, \end{equation} and $\mathcal{H}^{a_0^\textup{R}}$ does not explicitly appear since \begin{equation} \dim\mathcal{H}^{a_0^\textup{L}}=\dim\mathcal{H}^A,\quad\dim\mathcal{H}^{a_0^\textup{R}}=1. \end{equation} Then, the refinement can be performed by two types of procedures, which are referred to as the \textit{$\textup{L}$-decomposing procedure} and the \textit{$\textup{R}$-combining procedure}. According to the given set of states, the $\textup{L}$-decomposing procedure decomposes a Hilbert space $\mathcal{H}^{a_{j_0}^\textup{L}}$ in an intermediate decomposition in the form of Equation~\eqref{eq:decomposition_form} into two subspaces, and the $\textup{R}$-combining procedure combines two different Hilbert spaces $\mathcal{H}^{a_{j_0}^\textup{R}}$ and $\mathcal{H}^{a_{j_1}^\textup{R}}$ in an intermediate decomposition in the form of Equation~\eqref{eq:decomposition_form} into one, as discussed later. Each procedure increases the index $r$ representing the degree of refinement of the decomposition, and the algorithm repeatedly applies either of the two procedures until a decomposition maximizing $r$ is obtained. Since $r$ is an integer bounded by \begin{equation} 1\leqq r\leqq\frac{1}{2}\left(\dim\mathcal{H}^A\right)\left(\dim\mathcal{H}^A+1\right), \end{equation} the algorithm terminates after applying these procedures \begin{equation} O\left({\left(\dim\mathcal{H}^A\right)}^2\right) \end{equation} times in total. The decomposition maximizing $r$ is uniquely determined and is said to be maximal in Reference~\cite{K3}, satisfying the conditions shown in Lemma~\ref{lem:koashi_imoto_decomposition_set}.
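The index $r$ depends only on $J$ and the dimensions $\dim\mathcal{H}^{a_j^\textup{R}}$, so the bounds on $r$ are easy to check numerically; the following is a small sketch (the function name is ours, not from Reference~\cite{K3}):

```python
def refinement_index(dims_R):
    """Index r of a decomposition H^A = (+)_j H^{a_j^L} (x) H^{a_j^R}.

    dims_R : list of dim H^{a_j^R} for j = 0, ..., J-1.
    Implements r = (1/2) s (s + 1) - J + 1 with s = sum of the dims.
    """
    J = len(dims_R)
    s = sum(dims_R)
    return s * (s + 1) // 2 - J + 1   # s(s+1) is always even

# Trivial initial decomposition (J = 1, dim H^{a_0^R} = 1): r = 1.
print(refinement_index([1]))    # -> 1
# For dim H^A = 4 the upper bound (1/2) * 4 * 5 = 10 is attained when
# J = 1 and dim H^{a_0^R} = 4 (so that dim H^{a_0^L} = 1).
print(refinement_index([4]))    # -> 10
```

Each $\textup{L}$-decomposing or $\textup{R}$-combining step strictly increases this integer toward the upper bound, which is why the iteration terminates after $O\left({\left(\dim\mathcal{H}^A\right)}^2\right)$ steps.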
For obtaining the Koashi-Imoto decomposition of a given bipartite state $\psi^{RA}$, whether the decomposition in the form of Equation~\eqref{eq:decomposition_form} is maximal can also be checked by calculating operators on $\mathcal{H}^\textup{R}\otimes\mathcal{H}^{a_j^\textup{L}}\otimes\mathcal{H}^{a_j^\textup{R}}$ for all $j$ \begin{equation} \label{eq:product_operator} \psi^{R a_j^\textup{L} a_j^\textup{R}} \coloneq\left(\mathbb{1}^{R}\otimes\Pi^{a_j^\textup{L}}\otimes\Pi^{a_j^\textup{R}}\right)\psi^{RA}\left(\mathbb{1}^{R}\otimes\Pi^{a_j^\textup{L}}\otimes\Pi^{a_j^\textup{R}}\right), \end{equation} and if the decomposition is maximal, each of these operators is a tensor product of operators of $\mathcal{H}^\textup{R}\otimes\mathcal{H}^{a_j^\textup{R}}$ and $\mathcal{H}^{a_j^\textup{L}}$. In the following, how to perform the $\textup{L}$-decomposing procedure and the $\textup{R}$-combining procedure is discussed in the case of the Koashi-Imoto decomposition of $S_\psi^{A|R}\coloneq\left\{\psi^A\left(\Lambda^{R}\right):\Lambda^{R}\geqq 0\right\}$ defined as Equation~\eqref{eq:psi_lambda}. \textit{The $\textup{L}$-decomposing procedure}: (See also Lemma~3 in Reference~\cite{K3}.) Given an intermediate decomposition in the form of Equation~\eqref{eq:decomposition_form}, the $\textup{L}$-decomposing procedure aims to decompose a Hilbert space $\mathcal{H}^{a_{j_0}^\textup{L}}$ in this given decomposition into two subspaces, so that the decomposition is refined as \begin{equation} \mathcal{H}^{a_{j_0}^\textup{L}}\otimes\mathcal{H}^{a_{j_0}^\textup{R}}=\left(\mathcal{H}_{+}^{a_{j_0}^\textup{L}}\otimes\mathcal{H}^{a_{j_0}^\textup{R}}\right)\oplus\left(\mathcal{H}_{-}^{a_{j_0}^\textup{L}}\otimes\mathcal{H}^{\tilde{a}_{j_0}^\textup{R}}\right), \end{equation} where the right-hand side represents subspaces in a refined decomposition satisfying \begin{equation} \mathcal{H}^{a_{j_0}^\textup{L}}=\mathcal{H}_{+}^{a_{j_0}^\textup{L}}\oplus\mathcal{H}_{-}^{a_{j_0}^\textup{L}}. 
\end{equation} For the Koashi-Imoto decomposition of $S_\psi^{A|R}$, this refinement is achieved in the following way. \begin{enumerate}[{Step $\textup{L}$}-1:] \item Find $j_0\in\left\{0,\ldots,J-1\right\}$, $\Ket{a}\in\mathcal{H}^{a_{j_0}^\textup{R}}$, $\Ket{b}\in\mathcal{H}^{a_{j_0}^\textup{R}}$, and $\Lambda^{R}\geqq 0$ such that for any $c\geqq 0$ \begin{equation} \rho\neq c\rho^\prime, \end{equation} where $\rho$ and $\rho^\prime$ are operators on $\mathcal{H}^{a_{j_0}^\textup{L}}$ defined as \begin{align} \rho&\coloneq\left(\Pi^{a_{j_0}^\textup{L}}\otimes\Bra{a}^{a_{j_0}^\textup{R}}\right)\psi^A\left(\Lambda^{R}\right)\left(\Pi^{a_{j_0}^\textup{L}}\otimes\Ket{a}^{a_{j_0}^\textup{R}}\right),\\ \rho^\prime&\coloneq\left(\Pi^{a_{j_0}^\textup{L}}\otimes\Bra{b}^{a_{j_0}^\textup{R}}\right)\psi^A\left(\mathbb{1}^{R}\right)\left(\Pi^{a_{j_0}^\textup{L}}\otimes\Ket{b}^{a_{j_0}^\textup{R}}\right). \end{align} \item Calculate the spectral decomposition of an operator on $\mathcal{H}^{a_{j_0}^\textup{L}}$ \begin{equation} \frac{\rho}{\tr\rho}-\frac{\rho^\prime}{\tr\rho^\prime}=\sum_l \lambda_l\Ket{l}\Bra{l}. \end{equation} Using the subspaces spanned by eigenvectors of this operator corresponding to the positive eigenvalues and the non-positive eigenvalues, decompose $\mathcal{H}^{a_{j_0}^\textup{L}}$ into \begin{equation} \mathcal{H}^{a_{j_0}^\textup{L}}=\mathcal{H}_{+}^{a_{j_0}^\textup{L}}\oplus\mathcal{H}_{-}^{a_{j_0}^\textup{L}}, \end{equation} where the subspaces on the right-hand side are defined as \begin{align} \mathcal{H}_{+}^{a_{j_0}^\textup{L}}&\coloneq \spn \left\{\Ket{l}\in\mathcal{H}^{a_{j_0}^\textup{L}}:\lambda_l>0\right\},\\ \mathcal{H}_{-}^{a_{j_0}^\textup{L}}&\coloneq\spn\left\{\Ket{l}\in\mathcal{H}^{a_{j_0}^\textup{L}}:\lambda_l\leqq 0 \right\}. \end{align} Note that $\mathcal{H}_{+}^{a_{j_0}^\textup{L}}$ and $\mathcal{H}_{-}^{a_{j_0}^\textup{L}}$ are nonzero subspaces. 
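Steps L-1 and L-2 are plain finite-dimensional linear algebra. A minimal sketch in Python with NumPy (not part of the thesis; the operators $\rho$ and $\rho'$ below are illustrative placeholders) shows the eigenvalue-sign split of Step L-2:

```python
import numpy as np

# Illustrative placeholders for rho and rho' on H^{a_{j0}^L} (dim 2).
rho = np.diag([1.0, 0.0])        # e.g. a pure state |0><0|
rho_prime = np.diag([0.5, 0.5])  # e.g. the maximally mixed state

# Step L-2: spectral decomposition of rho/tr(rho) - rho'/tr(rho').
diff = rho / np.trace(rho) - rho_prime / np.trace(rho_prime)
eigvals, eigvecs = np.linalg.eigh(diff)   # columns of eigvecs are the |l>

# H_+ is spanned by eigenvectors with lambda_l > 0,
# H_- by those with lambda_l <= 0.
tol = 1e-12
H_plus = eigvecs[:, eigvals > tol]
H_minus = eigvecs[:, eigvals <= tol]
print(H_plus.shape[1], H_minus.shape[1])  # dim H_+ and dim H_-
```

For these placeholder operators both subspaces come out one-dimensional, matching the requirement that $\mathcal{H}_{+}^{a_{j_0}^\textup{L}}$ and $\mathcal{H}_{-}^{a_{j_0}^\textup{L}}$ be nonzero.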
\item Define a refined decomposition as \begin{align} \mathcal{H}^A&=\bigoplus_{j=0}^{\tilde{J}-1}\mathcal{H}^{\tilde{a}_j^\textup{L}}\otimes\mathcal{H}^{\tilde{a}_j^\textup{R}},\\ \tilde{J}&\coloneq J+1,\\ \mathcal{H}^{\tilde{a}_j^\textup{L}}&\coloneq\begin{cases} \mathcal{H}^{a_j^\textup{L}}&\textup{if } 0\leqq j\leqq j_0-1,\\ \mathcal{H}^{a_{j+1}^\textup{L}}&\textup{if } j_0\leqq j\leqq J-2,\\ \mathcal{H}_{+}^{a_{j_0}^\textup{L}}&\textup{if } j=J-1,\\ \mathcal{H}_{-}^{a_{j_0}^\textup{L}}&\textup{if } j=J, \end{cases}\\ \mathcal{H}^{\tilde{a}_j^\textup{R}}&\coloneq\begin{cases} \mathcal{H}^{a_j^\textup{R}}&\textup{if } 0\leqq j\leqq j_0-1,\\ \mathcal{H}^{a_{j+1}^\textup{R}}&\textup{if } j_0\leqq j\leqq J-2,\\ \mathcal{H}^{a_{j_0}^\textup{R}}&\textup{if } j=J-1,\, J. \end{cases} \end{align} \end{enumerate} \textit{The $\textup{R}$-combining procedure}: (See also Lemma~4 in Reference~\cite{K3}.) Given an intermediate decomposition in the form of Equation~\eqref{eq:decomposition_form}, the $\textup{R}$-combining procedure aims to combine two different Hilbert spaces $\mathcal{H}^{a_{j_0}^\textup{R}}$ and $\mathcal{H}^{a_{j_1}^\textup{R}}$ in this given decomposition into one, so that the decomposition is refined as \begin{equation} \begin{split} &\left(\mathcal{H}^{a_{j_0}^\textup{L}}\otimes\mathcal{H}^{a_{j_0}^\textup{R}}\right)\oplus\left(\mathcal{H}^{a_{j_1}^\textup{L}}\otimes\mathcal{H}^{a_{j_1}^\textup{R}}\right)\\ &=\left(\mathcal{H}^{a_{j_0\cap j_1}^\textup{L}}\otimes\left(\mathcal{H}^{a_{j_0}^\textup{R}}\oplus\mathcal{H}^{a_{j_1}^\textup{R}}\right)\right) \oplus\left(\mathcal{H}_\perp^{a_{j_0}^\textup{L}}\otimes\mathcal{H}^{a_{j_0}^\textup{R}}\right)\oplus\left(\mathcal{H}_\perp^{a_{j_1}^\textup{L}}\otimes\mathcal{H}^{a_{j_1}^\textup{R}}\right), \end{split} \end{equation} where the right-hand side represents subspaces in a refined decomposition satisfying \begin{align} \mathcal{H}^{a_{j_0}^\textup{L}}&=\mathcal{H}^{a_{j_0\cap
j_1}^\textup{L}}\oplus\mathcal{H}_\perp^{a_{j_0}^\textup{L}},\\ \mathcal{H}^{a_{j_1}^\textup{L}}&=\mathcal{H}^{a_{j_0\cap j_1}^\textup{L}}\oplus\mathcal{H}_\perp^{a_{j_1}^\textup{L}}. \end{align} For the Koashi-Imoto decomposition of $S_\psi^{A|R}$, this refinement is achieved in the following way. \begin{enumerate}[{Step $\textup{R}$}-1:] \item Find $j_0\in\left\{0,\ldots,J-1\right\}$, $j_1\in\left\{0,\ldots,J-1\right\}$, $\Ket{a}\in\mathcal{H}^{a_{j_0}^\textup{R}}$, $\Ket{b}\in\mathcal{H}^{a_{j_1}^\textup{R}}$, and $\Lambda^{R}\geqq 0$ such that $j_0 < j_1$ and \begin{equation} \begin{split} &\supp\left(\left(\Pi^{a_{j_0}^\textup{L}}\otimes\Bra{a}^{a_{j_0}^\textup{R}}\right)\psi^A\left(\Lambda^{R}\right)\left(\Pi^{a_{j_0}^\textup{L}}\otimes\Ket{a}^{a_{j_0}^\textup{R}}\right)\right) =\mathcal{H}^{a_{j_0}^\textup{L}},\\ &\supp\left(\left(\Pi^{a_{j_1}^\textup{L}}\otimes\Bra{b}^{a_{j_1}^\textup{R}}\right)\psi^A\left(\Lambda^{R}\right)\left(\Pi^{a_{j_1}^\textup{L}}\otimes\Ket{b}^{a_{j_1}^\textup{R}}\right)\right) =\mathcal{H}^{a_{j_1}^\textup{L}},\\ &\sigma\neq\boldsymbol{0}, \end{split} \end{equation} where $\supp(\cdots)$ represents the support, $\boldsymbol{0}$ is the zero operator, and $\sigma$ is an operator from $\mathcal{H}^{a_{j_0}^\textup{L}}$ to $\mathcal{H}^{a_{j_1}^\textup{L}}$ defined as \begin{equation} \sigma\coloneq\left(\Pi^{a_{j_1}^\textup{L}}\otimes\Bra{b}^{a_{j_1}^\textup{R}}\right)\psi^A\left(\Lambda^{R}\right)\left(\Pi^{a_{j_0}^\textup{L}}\otimes\Ket{a}^{a_{j_0}^\textup{R}}\right). \end{equation} \item Calculate the singular value decomposition of $\sigma$ \begin{equation} \sigma=\sum_{l=0}^{R-1} \sigma_l\Ket{l}^{a_{j_1}^\textup{L}}\Bra{l}^{a_{j_0}^\textup{L}}, \end{equation} where $R\coloneq\rank\sigma$, and $\sigma_0,\ldots,\sigma_{R-1}$ are the positive singular values.
Using the subspace spanned by the singular vectors $\left\{\Ket{0},\ldots,\Ket{R-1}\right\}$ of $\sigma$ corresponding to the positive singular values, decompose $\mathcal{H}^{a_{j_0}^\textup{L}}$ and $\mathcal{H}^{a_{j_1}^\textup{L}}$ into \begin{align} \mathcal{H}^{a_{j_0}^\textup{L}}&=\mathcal{H}^{a_{j_0\cap j_1}^\textup{L}}\oplus{\mathcal{H}_\perp^{a_{j_0}^\textup{L}}},\\ \mathcal{H}^{a_{j_1}^\textup{L}}&=\mathcal{H}^{a_{j_0\cap j_1}^\textup{L}}\oplus{\mathcal{H}_\perp^{a_{j_1}^\textup{L}}}, \end{align} where the subspaces on the right-hand side are defined as \begin{align} \mathcal{H}^{a_{j_0\cap j_1}^\textup{L}}&\coloneq\spn\left\{\Ket{0},\ldots,\Ket{R-1}\right\},\\ \mathcal{H}_\perp^{a_{j_0}^\textup{L}}&\coloneq\supp\left(\Pi^{a_{j_0}^\textup{L}}-\sum_{l=0}^{R-1}\Ket{l}\Bra{l}^{a_{j_0}^\textup{L}}\right),\\ \mathcal{H}_\perp^{a_{j_1}^\textup{L}}&\coloneq\supp\left(\Pi^{a_{j_1}^\textup{L}}-\sum_{l=0}^{R-1}\Ket{l}\Bra{l}^{a_{j_1}^\textup{L}}\right). \end{align} Note that $\mathcal{H}_\perp^{a_{j_0}^\textup{L}}$ and $\mathcal{H}_\perp^{a_{j_1}^\textup{L}}$ may be zero, and define flags indicating whether $\mathcal{H}_\perp^{a_{j_0}^\textup{L}}$ and $\mathcal{H}_\perp^{a_{j_1}^\textup{L}}$ are zero as \begin{align} s_{j_0}&\coloneq\begin{cases} 0&\textup{if } \mathcal{H}_\perp^{a_{j_0}^\textup{L}}=\left\{\boldsymbol{0}\right\},\\ 1&\textup{otherwise}, \end{cases}\\ s_{j_1}&\coloneq\begin{cases} 0&\textup{if } \mathcal{H}_\perp^{a_{j_1}^\textup{L}}=\left\{\boldsymbol{0}\right\},\\ 1&\textup{otherwise}.
\end{cases} \end{align} \item Define a refined decomposition as \begin{align} \mathcal{H}^A&=\bigoplus_{j=0}^{\tilde{J}-1}\mathcal{H}^{\tilde{a}_j^\textup{L}}\otimes\mathcal{H}^{\tilde{a}_j^\textup{R}},\\ \tilde{J}&\coloneq J-1+s_{j_0}+s_{j_1}\,,\\ \mathcal{H}^{\tilde{a}_j^\textup{L}}&\coloneq\begin{cases} \mathcal{H}^{a_j^\textup{L}}&\textup{if } 0\leqq j\leqq j_0-1,\\ \mathcal{H}^{a_{j+1}^\textup{L}}&\textup{if } j_0\leqq j\leqq j_1-2,\\ \mathcal{H}^{a_{j+2}^\textup{L}}&\textup{if } j_1-1\leqq j\leqq J-3,\\ \mathcal{H}^{a_{j_0\cap j_1}^\textup{L}}&\textup{if } j=J-2,\\ \mathcal{H}_\perp^{a_{j_0}^\textup{L}}&\textup{if } j = J-2+s_{j_0}\\ &\textup{and }s_{j_0}=1,\\ \mathcal{H}_\perp^{a_{j_1}^\textup{L}}&\textup{if } j = J-2+s_{j_0}+s_{j_1}\\ &\textup{and }s_{j_1}=1,\\ \end{cases}\\ \mathcal{H}^{\tilde{a}_j^\textup{R}}&\coloneq\begin{cases} \mathcal{H}^{a_j^\textup{R}}&\textup{if } 0\leqq j\leqq j_0-1,\\ \mathcal{H}^{a_{j+1}^\textup{R}}&\textup{if } j_0\leqq j\leqq j_1-2,\\ \mathcal{H}^{a_{j+2}^\textup{R}}&\textup{if } j_1-1\leqq j\leqq J-3,\\ \mathcal{H}^{a_{j_0}^\textup{R}}\oplus\mathcal{H}^{a_{j_1}^\textup{R}}&\textup{if } j=J-2,\\ \mathcal{H}^{a_{j_0}^\textup{R}}&\textup{if } j = J-2+s_{j_0}\\ &\textup{and } s_{j_0}=1,\\ \mathcal{H}^{a_{j_1}^\textup{R}}&\textup{if } j = J-2+s_{j_0}+s_{j_1}\\ &\textup{and } s_{j_1}=1. \end{cases} \end{align} \end{enumerate} The following example demonstrates how to obtain the Koashi-Imoto decomposition of a tripartite pure state using the above algorithm. 
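The singular-value step R-2 of the procedure above is also easy to carry out numerically. A hedged Python/NumPy sketch with an illustrative placeholder $\sigma$ (not taken from the text) recovers the rank $R$, the common subspace, and the flags $s_{j_0}$, $s_{j_1}$:

```python
import numpy as np

# Illustrative sigma: maps H^{a_{j0}^L} (dim 3) to H^{a_{j1}^L} (dim 2), rank 1.
sigma = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])

# Step R-2: singular value decomposition sigma = U diag(s) Vh.
U, s, Vh = np.linalg.svd(sigma)
tol = 1e-12
R = int(np.sum(s > tol))  # rank of sigma

# Right singular vectors |l>^{a_{j0}^L} (rows of Vh) and left singular
# vectors |l>^{a_{j1}^L} (columns of U) for the positive singular values.
V_pos = Vh[:R].conj().T   # basis of H^{a_{j0 cap j1}^L} inside H^{a_{j0}^L}
U_pos = U[:, :R]          # its identified copy inside H^{a_{j1}^L}

# Projectors onto the orthogonal complements H_perp, and the flags s_j.
P_perp_j0 = np.eye(3) - V_pos @ V_pos.conj().T
P_perp_j1 = np.eye(2) - U_pos @ U_pos.conj().T
s_j0 = int(np.linalg.matrix_rank(P_perp_j0) > 0)
s_j1 = int(np.linalg.matrix_rank(P_perp_j1) > 0)
print(R, s_j0, s_j1)
```

Here both complements are nonzero, so the refined decomposition of Step R-3 has $\tilde{J}=J+1$ blocks.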
\begin{example} \textit{Koashi-Imoto decomposition of a tripartite pure state.} Consider a tripartite pure state \begin{equation} \begin{split} &\Ket{\psi}^{RAB}\\ &\coloneq\frac{1}{2\sqrt{2}}{\left(\Ket{0}^{R}\otimes\Ket{0}^{A_1}+\Ket{1}^{R}\otimes\Ket{1}^{A_1}\right)} \otimes{\left(\Ket{0}^{A_2}\otimes\Ket{0}^{B}+\Ket{1}^{A_2}\otimes\Ket{1}^{B}\right)}\\ &\quad +\frac{1}{\sqrt{2}}\Ket{2}^{R}\otimes\Ket{2}^{A_1}\otimes\Ket{0}^{A_2}\otimes\Ket{2}^{B}, \end{split} \end{equation} where $\mathcal{H}^\textup{R}$ is $3$-dimensional, $\mathcal{H}^A=\mathcal{H}^{A_1}\otimes\mathcal{H}^{A_2}$ is $3\times 2 = 6$-dimensional, and $\mathcal{H}^{B}$ is $3$-dimensional. The Koashi-Imoto decomposition can be algorithmically obtained as follows, where the order of subspaces in intermediate decompositions is sorted for readability. \begin{enumerate}[{Step} 1:] \item Initially, regard $\mathcal{H}^A$ as \begin{equation} \label{eq:1} \mathcal{H}^A=\mathcal{H}^{a_0^\textup{L}}. \end{equation} \item Apply the $\textup{L}$-decomposing procedure to the intermediate decomposition given by Equation~\eqref{eq:1}, where $j_0=0$, $\Ket{a}=1$, $\Ket{b}=1$, and $\Lambda^{R}=\Ket{0}\Bra{0}$, and $\mathcal{H}^A$ is decomposed into \begin{equation} \label{eq:2} \mathcal{H}^A=\mathcal{H}^{a_0^\textup{L}}\oplus\mathcal{H}^{a_1^\textup{L}}, \end{equation} where $\dim\mathcal{H}^{a_0^\textup{R}}=\dim\mathcal{H}^{a_1^\textup{R}}=1$ and \begin{align} \mathcal{H}^{a_0^\textup{L}}=\spn\Big\{&\Ket{0}^{A_1}\otimes\Ket{0}^{A_2},\Ket{0}^{A_1}\otimes\Ket{1}^{A_2}\Big\},\\ \mathcal{H}^{a_1^\textup{L}}=\spn\Big\{&\Ket{1}^{A_1}\otimes\Ket{0}^{A_2},\Ket{1}^{A_1}\otimes\Ket{1}^{A_2}, \Ket{2}^{A_1}\otimes\Ket{0}^{A_2},\Ket{2}^{A_1}\otimes\Ket{1}^{A_2}\Big\}. \end{align} \item Apply the $\textup{L}$-decomposing procedure to the intermediate decomposition given by Equation~\eqref{eq:2}, where $j_0=1$, $\Ket{a}=1$, $\Ket{b}=1$, and $\Lambda^{R}=\Ket{1}\Bra{1}$, and $\mathcal{H}^A$ is decomposed into \begin{equation}
\label{eq:3} \mathcal{H}^A=\mathcal{H}^{a_0^\textup{L}}\oplus\mathcal{H}^{a_1^\textup{L}}\oplus\mathcal{H}^{a_2^\textup{L}}, \end{equation} where $\dim\mathcal{H}^{a_0^\textup{R}}=\dim\mathcal{H}^{a_1^\textup{R}}=\dim\mathcal{H}^{a_2^\textup{R}}=1$ and \begin{align} \mathcal{H}^{a_0^\textup{L}}=\spn\Big\{&\Ket{0}^{A_1}\otimes\Ket{0}^{A_2},\Ket{0}^{A_1}\otimes\Ket{1}^{A_2}\Big\},\\ \mathcal{H}^{a_1^\textup{L}}=\spn\Big\{&\Ket{1}^{A_1}\otimes\Ket{0}^{A_2},\Ket{1}^{A_1}\otimes\Ket{1}^{A_2}\Big\},\\ \mathcal{H}^{a_2^\textup{L}}=\spn\Big\{&\Ket{2}^{A_1}\otimes\Ket{0}^{A_2},\Ket{2}^{A_1}\otimes\Ket{1}^{A_2}\Big\}. \end{align} \item Apply the $\textup{R}$-combining procedure to the intermediate decomposition given by Equation~\eqref{eq:3}, where $j_0=0$, $j_1=1$, $\Ket{a}=1$, $\Ket{b}=1$, and $\Lambda^{R}=\Ket{0}\Bra{0}+\Ket{0}\Bra{1}+\Ket{1}\Bra{0}+\Ket{1}\Bra{1}$, and $\mathcal{H}^A$ is decomposed into \begin{equation} \label{eq:4} \mathcal{H}^A=\left(\mathcal{H}^{a_0^\textup{L}}\otimes\mathcal{H}^{a_0^\textup{R}}\right)\oplus\mathcal{H}^{a_1^\textup{L}}, \end{equation} where $\dim\mathcal{H}^{a_1^\textup{R}}=1$ and \begin{align} \mathcal{H}^{a_0^\textup{L}}=\spn\Big\{&\Ket{0}^{A_2},\Ket{1}^{A_2}\Big\},\\ \mathcal{H}^{a_0^\textup{R}}=\spn\Big\{&\Ket{0}^{A_1},\Ket{1}^{A_1}\Big\},\\ \mathcal{H}^{a_1^\textup{L}}=\spn\Big\{&\Ket{2}^{A_1}\otimes\Ket{0}^{A_2},\Ket{2}^{A_1}\otimes\Ket{1}^{A_2}\Big\}. \end{align} \item Terminate the algorithm, since for each $j$, the operator $\psi^{R a_j^\textup{L} a_j^\textup{R}}$ defined as Equation~\eqref{eq:product_operator} is a tensor product of operators of $\mathcal{H}^\textup{R}\otimes\mathcal{H}^{a_j^\textup{R}}$ and $\mathcal{H}^{a_j^\textup{L}}$, and hence, the decomposition in Equation~\eqref{eq:4} is maximal. 
In this case, $\Ket{\psi}^{RAB}$ is decomposed into \begin{equation} \begin{split} &\Ket{\psi}^{RAB}\\ &=\frac{1}{2\sqrt{2}}{\left(\Ket{0}^{R}\otimes\Ket{0}^{a_0^\textup{R}}+\Ket{1}^{R}\otimes\Ket{1}^{a_0^\textup{R}}\right)} \otimes{\left(\Ket{0}^{a_0^\textup{L}}\otimes\Ket{0}^{b_0^\textup{L}}+\Ket{1}^{a_0^\textup{L}}\otimes\Ket{1}^{b_0^\textup{L}}\right)}\\ &\quad \oplus\frac{1}{\sqrt{2}}\left(\Ket{2}^{R}\otimes{\left(\Ket{2}\otimes\Ket{0}\right)}^{a_1^\textup{L}}\otimes\Ket{2}^{b_1^\textup{L}}\right), \end{split} \end{equation} \end{enumerate} \end{example} \chapter{\label{sec:equivalence}Tasks equivalent to exact state merging} This appendix provides the proof of Proposition~\ref{prp:max} on the tasks equivalent to exact state merging, in the sense that the tasks shown in Proposition~\ref{prp:max} are achievable at the same entanglement cost using the same protocol. \begin{proof}[Proof of Proposition~\ref{prp:max}] The equivalence in the catalytic setting is shown in the following, while the statement in the non-catalytic setting follows from the same argument setting $\log_2 L=0$. It is shown that each of Statements~1--3 holds if and only if \begin{equation} \label{eq:m} \mathcal{M}\left({\Ket{\psi_l}\Bra{\psi_{l'}}}^{AB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right)={\Ket{\psi_l}\Bra{\psi_{l'}}}^{B'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}} \end{equation} holds for any $l$ and $l'$. \textit{Statement~1 $\Leftrightarrow$ \textup{Equation}~\eqref{eq:m}}: Assume Statement~1; that is, an LOCC map $\mathcal{M}$ by $A$ and $B$ achieves the following exact state merging of $\Ket{\psi}^{RAB}$ \begin{equation} \id^R\otimes\mathcal{M}\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right)={\psi}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}. 
\end{equation} The left-hand side and the right-hand side are written as \begin{equation} \begin{split} \id^R\otimes\mathcal{M}\left({\psi}^{RAB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right) &=\sum_{l,l'}\frac{1}{\sqrt{\lambda_l\lambda_{l'}}}{\Ket{l}\Bra{l'}}^R\otimes\mathcal{M}\left({\Ket{\psi_l}\Bra{\psi_{l'}}}^{AB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right),\\ {\psi}^{RB'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}} &=\sum_{l,l'}\frac{1}{\sqrt{\lambda_l\lambda_{l'}}}{\Ket{l}\Bra{l'}}^R\otimes{\Ket{\psi_l}\Bra{\psi_{l'}}}^{B'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}. \end{split} \end{equation} Due to the linear independence, Equation~\eqref{eq:m} holds for any $l$ and $l'$. The converse follows from the linearity of $\mathcal{M}$. \textit{Statement~2 $\Leftrightarrow$ \textup{Equation}~\eqref{eq:m}}: This equivalence can be shown in the same way as the equivalence between Statement~1 and Equation~\eqref{eq:m}, by substituting $\psi$ with ${\Phi}_D^+\left(\psi\right)$. \textit{Statement~3 $\Leftrightarrow$ \textup{Equation}~\eqref{eq:m}}: Assume Statement~3. For each $l$, \begin{equation} \mathcal{M}\left({\Ket{\psi_l}\Bra{\psi_{l}}}^{AB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right)={\Ket{\psi_l}\Bra{\psi_{l}}}^{B'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}} \end{equation} holds as a special case of Statement~3. For any different $l$ and $l^\prime$, consider two cases of choosing $\psi_{\boldsymbol{\alpha}}^{AB}\in S_\psi^{AB}$ as \begin{equation} \frac{1}{2}\Ket{\psi_l}\Bra{\psi_l}+\frac{1}{2}\Ket{\psi_l}\Bra{\psi_{l^\prime}}+\frac{1}{2}\Ket{\psi_{l^\prime}}\Bra{\psi_l}+\frac{1}{2}\Ket{\psi_{l^\prime}}\Bra{\psi_{l^\prime}} \end{equation} and \begin{equation} \frac{1}{2}\Ket{\psi_l}\Bra{\psi_l}+\frac{\textup{i}}{2}\Ket{\psi_l}\Bra{\psi_{l^\prime}}-\frac{\textup{i}}{2}\Ket{\psi_{l^\prime}}\Bra{\psi_l}+\frac{1}{2}\Ket{\psi_{l^\prime}}\Bra{\psi_{l^\prime}}. 
\end{equation} Applying Statement~3 to these two states and using the linearity of $\mathcal{M}$ yield \begin{align} \mathcal{M}\left({\Ket{\psi_l}\Bra{\psi_{l^\prime}}}^{AB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right)&={\Ket{\psi_l}\Bra{\psi_{l^\prime}}}^{B'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}},\\ \mathcal{M}\left({\Ket{\psi_{l^\prime}}\Bra{\psi_{l}}}^{AB}\otimes{\Phi^+_K}^{\overline{A}\overline{B}}\right)&={\Ket{\psi_{l^\prime}}\Bra{\psi_{l}}}^{B'B}\otimes{\Phi^+_L}^{\overline{A}\overline{B}}. \end{align} Therefore, Equation~\eqref{eq:m} holds for any $l$ and $l^\prime$. The converse follows from the linearity of $\mathcal{M}$. \end{proof} \chapter{\label{sec:monotonic}Monotonic property of conditional quantum entropy} This appendix provides the proof of Proposition~\ref{prp:monotonic} on the monotonically nondecreasing property of conditional quantum entropy ${H\left(A|B\right)}_\psi$ under $B$'s preprocessing and backward classical communication from $B$ to $A$. \begin{proof}[Proof of Proposition~\ref{prp:monotonic}] This proof shows that $B$'s preprocessing ${\left\{M_j^{B}\right\}}_j$ does not decrease the conditional quantum entropy on average, and backward classical communication and $A$'s isometry $U_j^A$ do not change the conditional quantum entropy. Performing ${\left\{U_j^A\otimes M_j^B\right\}}_j$ is equivalent to sequentially performing the following steps. First, the measurement ${\left\{M_j^B\right\}}_j$ can be regarded as $B$'s local channel transforming $\psi^{RAB}$ into \begin{equation} {\psi^\prime}^{XRAB}\coloneq\sum_j p\left(j\right)\Ket{j}\Bra{j}^{X}\otimes\frac{M_j^B \psi^{RAB} {M_j^B}^\dag}{p\left(j\right)}, \end{equation} where $\mathcal{H}^{X}$ is $B$'s system for storing the measurement outcome. 
Next, the backward classical communication transforms ${\psi^\prime}^{XRAB}$ into \begin{equation} \begin{split} &{\psi^{\prime\prime}}^{X^\prime XRAB}\\ &\coloneq\sum_j p\left(j\right)\Ket{j}\Bra{j}^{X^\prime}\otimes\Ket{j}\Bra{j}^{X}\otimes\frac{M_j^B \psi^{RAB} {M_j^B}^\dag}{p\left(j\right)}, \end{split} \end{equation} where $\mathcal{H}^{X^\prime}$ is $A$'s system for storing the measurement outcome. Finally, the isometry $U_j^A$ transforms ${\psi^{\prime\prime}}^{X^\prime XRAB}$ into \begin{equation} {\psi^{\prime\prime\prime}}^{X^\prime XRAB}\coloneq\sum_j p\left(j\right)\Ket{j}\Bra{j}^{X^\prime}\otimes\Ket{j}\Bra{j}^{X}\otimes\psi_j^{RAB}. \end{equation} The conditional quantum entropy for each of these steps is evaluated as follows. Regarding the measurement ${\left\{M_j^B\right\}}_j\,$, the data processing inequality yields \begin{equation} {H\left(A|B\right)}_\psi\leqq{H\left(A|XB\right)}_{\psi^\prime}. \end{equation} As for the backward classical communication, it holds that \begin{equation} {H\left(A|XB\right)}_{\psi^\prime}={H\left(A|XB\right)}_{{\psi^{\prime\prime}}}={H\left(X^\prime A|XB\right)}_{{\psi^{\prime\prime}}}. \end{equation} Since the isometry $U_j^A$ for each $j$ can be performed using a controlled isometry independent of $j$ \begin{equation} \sum_j\Ket{j}\Bra{j}^{X^\prime}\otimes U_j^A, \end{equation} it holds that \begin{equation} {H\left(X^\prime A|XB\right)}_{{\psi^{\prime\prime}}}={H\left(X^\prime A|XB\right)}_{\psi^{\prime\prime\prime}}. \end{equation} Therefore, it is obtained that \begin{equation} \begin{split} {H\left(A|B\right)}_\psi&\leqq{H\left(X^\prime A|XB\right)}_{\psi^{\prime\prime\prime}}\\ &={H\left(A|XB\right)}_{\psi^{\prime\prime\prime}}\\ &=\sum_j p\left(j\right){H\left(A|B\right)}_{\psi_j}\,, \end{split} \end{equation} which yields the conclusion. 
\end{proof} \chapter{\label{sec:equivalence_spread_concentrate}Tasks equivalent to spreading and concentrating quantum information} This appendix provides the proof of Proposition~\ref{lem:encoding_state_transformation} on state transformations equivalent to spreading and concentrating quantum information over networks, in the sense that the tasks shown in Proposition~\ref{lem:encoding_state_transformation} are achievable at the same entanglement cost using the same protocol. \begin{proof}[Proof of Proposition~\ref{lem:encoding_state_transformation}] The statement on spreading quantum information is proven in the following, while the statement on concentrating quantum information also follows from the same argument by substituting $\rho$, $U\rho U^\dag$, $\Ket{l}\Bra{l'}$, $\ket{\tilde{\psi}_l}\bra{\tilde{\psi}_{l'}}$, and $\mathcal{S}$ in the following with $U\rho U^\dag$, $\rho$, $\ket{\tilde{\psi}_l}\bra{\tilde{\psi}_{l'}}$, $\Ket{l}\Bra{l'}$, and $\mathcal{C}$, respectively. \textit{If part}: If there exists an LOCC map $\mathcal{S}$ defined as Equation~\eqref{eq:encoding} for any input state $\rho$, Equation~\eqref{eq:encoding_state_transformation} holds as a special case of Equation~\eqref{eq:encoding} in which the input state $\rho$ is a completely mixed state. \textit{Only if part}: Assume that there exists an LOCC map $\mathcal{S}$ defined as Equation~\eqref{eq:encoding_state_transformation}. Due to the linearity of the map $\mathcal{S}$, Equation~\eqref{eq:encoding_state_transformation} yields \begin{equation} \frac{1}{D}\sum_{l,l'=0}^{D-1}\Ket{l}\Bra{l'}\otimes\mathcal{S}\left(\Ket{l}\Bra{l'}\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}\right) =\frac{1}{D}\sum_{l,l'=0}^{D-1}\Ket{l}\Bra{l'}\otimes\ket{\tilde{\psi}_l}\bra{\tilde{\psi}_{l'}}. 
\end{equation} Since the set ${\left\{\Ket{l}\Bra{l'}\right\}}_{l,l'}$ of operators on the system $\mathcal{H}^R$ is linearly independent, it holds that \begin{equation} \mathcal{S}\left(\Ket{l}\Bra{l'}\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}\right) = \ket{\tilde{\psi}_l}\bra{\tilde{\psi}_{l'}}, \end{equation} for each $l,l'\in\left\{0,\ldots,D-1\right\}$. Therefore, writing any operators $\rho\in\mathcal{D}\left(\mathcal{H}\right)$ and $U\rho U^\dag\in\mathcal{D}\left(\tilde{\mathcal{H}}\right)$ as \begin{equation} \rho=\sum_{l,l'=0}^{D-1}c_{l,l'}\Ket{l}\Bra{l'},\quad U\rho U^\dag=\sum_{l,l'=0}^{D-1}c_{l,l'}\ket{\tilde{\psi}_l}\bra{\tilde{\psi}_{l'}} \end{equation} yields Equation~\eqref{eq:encoding} \begin{equation} \begin{split} &\mathcal{S}\left(\rho\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}\right)\\ &=\sum_{l,l'=0}^{D-1}c_{l,l'}\mathcal{S}\left(\Ket{l}\Bra{l'}\otimes\bigotimes_{e\in E}\Ket{\Phi_{M_e}^+}\Bra{\Phi_{M_e}^+}\right)\\ &=\sum_{l,l'=0}^{D-1}c_{l,l'}\ket{\tilde{\psi}_l}\bra{\tilde{\psi}_{l'}}\\ &=U\rho U^\dag. \end{split} \end{equation} \end{proof} \chapter{\label{sec:one_shot_entropies}Min- and max-entropies} This appendix summarizes entropic functions used in analyses of one-shot quantum state merging, such as min- and max-entropies~\cite{R2,T5,T11}. Given any quantum state $\psi^{AB}\in\mathcal{D}\left(\mathcal{H}^A\otimes\mathcal{H}^B\right)$, the conditional min-entropy $H_{\min}$ and the conditional max-entropy $H_{\max}$ of $A$ conditioned by $B$ are defined as \begin{align} &{H_{\min}\left(A|B\right)}_\psi\coloneq\max_{\sigma^B\in\mathcal{D}\left(\mathcal{H}^B\right)}\sup\left\{\lambda\in\mathbb{R}:\psi^{AB}\leqq\frac{\mathbb{1}^A\otimes\sigma^B}{2^\lambda}\right\},\\ &{H_{\max}\left(A|B\right)}_\psi\coloneq\max_{\sigma^B\in\mathcal{D}\left(\mathcal{H}^B\right)}\log_2 {\left\|\sqrt{\psi^{AB}}\sqrt{\mathbb{1}^A\otimes\sigma^B}\right\|}_1^2. 
\end{align} These entropies are defined so that the duality is satisfied; that is, for any pure state $\Ket{\psi}^{RAB}$, it holds that \begin{equation} -{H_{\min}\left(A|R\right)}_\psi={H_{\max}\left(A|B\right)}_\psi. \end{equation} The definition of min- and max-entropy of $A$ is obtained by considering $\dim\mathcal{H}^B=1$ in the above definition of the conditional min- and max-entropies, that is, \begin{align} &{H_{\min}\left(A\right)}_\psi\coloneq\sup\left\{\lambda\in\mathbb{R}:\psi^{A}\leqq\frac{\mathbb{1}^A}{2^\lambda}\right\}=\log_2\frac{1}{\lambda_0},\\ &{H_{\max}\left(A\right)}_\psi\coloneq\log_2{\left\|\sqrt{\psi^{A}}\right\|}_1^2=2\log_2\tr\sqrt{\psi^A}, \end{align} where $\lambda_0$ is the largest eigenvalue of $\psi^A$. The smoothed versions of these entropies are defined using optimization over states that are sufficiently close to the given state, and this technique is called smoothing. In the following, the set of sub-normalized operators on a Hilbert space $\mathcal{H}^A$ is denoted by \begin{equation} \mathcal{D}_{\leqq}\left(\mathcal{H}^A\right)\coloneq\left\{\psi^A\in\mathcal{B}\left(\mathcal{H}^A\right):\psi^A\geqq 0, \tr\psi^A\leqq 1\right\}. \end{equation} Given any state $\psi^{AB}\in\mathcal{D}\left(\mathcal{H}^A\otimes\mathcal{H}^B\right)$ and any error threshold $\epsilon\in\left[0,\sqrt{\tr\psi^{AB}}\right]$ for smoothing, define the $\epsilon$-ball of states around $\psi^{AB}$ as \begin{equation} \mathcal{B}^\epsilon\left(\psi^{AB}\right)\coloneq\left\{\sigma^{AB}\in\mathcal{D}_{\leqq}\left(\mathcal{H}^A\otimes\mathcal{H}^B\right):P\left(\psi^{AB},\sigma^{AB}\right)\leqq\epsilon\right\}, \end{equation} where $P\left(\psi^{AB},\sigma^{AB}\right)$ is the purified distance between sub-normalized states defined as Equation~\eqref{eq:purified_distance_subnormalized}.
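The closed-form expressions for the unconditional entropies above are straightforward to evaluate. A minimal Python/NumPy sketch with an illustrative state (not from the text), including the standard ordering $H_{\min}\leqq H\leqq H_{\max}$ against the von Neumann entropy as a sanity check:

```python
import numpy as np

# Illustrative state psi^A with eigenvalues (1/2, 1/4, 1/4).
psi_A = np.diag([0.5, 0.25, 0.25])
lam = np.linalg.eigvalsh(psi_A)

H_min = np.log2(1 / lam.max())             # = log2(1/lambda_0)
H_max = 2 * np.log2(np.sum(np.sqrt(lam)))  # = 2 log2 tr sqrt(psi^A)
H = -np.sum(lam * np.log2(lam))            # von Neumann entropy
print(H_min, H, H_max)                     # here: 1.0 <= 1.5 <= ~1.543
```

For this state $H_{\min}=1$ bit and $H_{\max}\approx 1.543$ bits, bracketing the von Neumann entropy of $1.5$ bits.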
The $\epsilon$-smooth conditional min-entropy $H_{\min}^\epsilon$ and the $\epsilon$-smooth conditional max-entropy $H_{\max}^\epsilon$ of $A$ conditioned by $B$ are defined as \begin{align} &{H_{\min}^\epsilon\left(A|B\right)}_\psi\coloneq\max_{\tilde{\psi}^{AB}\in\mathcal{B}^\epsilon\left(\psi^{AB}\right)}{H_{\min}\left(A|B\right)}_{\tilde{\psi}}\,,\\ &{H_{\max}^\epsilon\left(A|B\right)}_\psi\coloneq\min_{\tilde{\psi}^{AB}\in\mathcal{B}^\epsilon\left(\psi^{AB}\right)}{H_{\max}\left(A|B\right)}_{\tilde{\psi}}. \end{align} Note that the optimal states in the smoothing of these definitions are not necessarily normalized. The definition of the $\epsilon$-smooth min- and max-entropy of $A$ is also obtained by considering $\dim\mathcal{H}^B=1$ in the above definition of the $\epsilon$-smooth conditional min- and max-entropies. These smoothed entropies converge to the non-smoothed ones as $\epsilon\to 0$. \end{document}
\begin{document} \title{Duality between injective envelopes and flat covers} \author{Ville Puuska\thanks{Tampere University, Tampere, Finland. Email: [email protected]}} \date{} \maketitle \begin{abstract} We establish a duality between injective envelopes and flat covers over a commutative Noetherian ring. One case of this duality states that a morphism is an injective envelope, if and only if its Matlis dual is a flat cover. We also show that if we swap injective envelopes and flat covers in this duality, neither implication is true in general. \end{abstract} \section{Introduction}\label{sec: Introduction} In this paper we establish the following duality between injective envelopes and flat covers of modules over a commutative Noetherian ring: \begin{theorem}\label{thm: duality of inj envelope and flat cover} Let \(R\) be a commutative Noetherian ring. Let \(i \colon M \to I\) be a morphism of \(R\)-modules. Then the following are equivalent: \begin{enumerate} \item \(i\) is an injective envelope; \item the morphism \(\Hom_R(i,E)\) is a flat cover for some injective cogenerator \(E\); \item the morphism \(\Hom_R(i,E)\) is a flat cover for all injective modules \(E\). \end{enumerate} \end{theorem} Previously only partial dualities of this kind have been established, e.g.~in \cite{belshoff92, xue96, enochs12, mao15}. In particular, we recall the result of Enochs and Huang \cite[Theorem 3.7]{enochs12}, which states that a ring is left Noetherian, if and only if a monomorphism \(i\) is an injective preenvelope precisely when \(i^+\) is a flat precover; here \((-)^+ := \Hom_\ZZ(-,\QQ/\ZZ)\). Combining this with Theorem \ref{thm: duality of inj envelope and flat cover}, we can characterize commutative Noetherian rings precisely as the commutative rings for which a morphism \(i\) is an injective envelope, if and only if \(i^+\) is a flat cover. 
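As a concrete illustration of Theorem \ref{thm: duality of inj envelope and flat cover} (this example is not part of the original argument; the identifications below are standard Matlis-duality computations over \(\ZZ\)):

```latex
% Over R = \ZZ, let i : \ZZ/p\ZZ \to \ZZ(p^\infty) be the injective envelope
% of the simple module \ZZ/p\ZZ, where \ZZ(p^\infty) denotes the Pruefer
% p-group. Dualizing by the injective cogenerator E = \QQ/\ZZ gives
\[
  \Hom_\ZZ\bigl(\ZZ(p^\infty),\QQ/\ZZ\bigr) \cong \ZZ_p
  \quad\text{and}\quad
  \Hom_\ZZ\bigl(\ZZ/p\ZZ,\QQ/\ZZ\bigr) \cong \ZZ/p\ZZ,
\]
% where \ZZ_p is the ring of p-adic integers, a torsion-free (hence flat)
% \ZZ-module. The dual of i is thus the canonical surjection
% \ZZ_p \to \ZZ/p\ZZ, which the theorem asserts is a flat cover of \ZZ/p\ZZ.
```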
The key idea in the proof of Theorem \ref{thm: duality of inj envelope and flat cover} is to use the following alternative characterizations of injective envelopes and flat covers over a commutative Noetherian ring. It is well known that a monomorphism \(i \colon M \to E\), where \(E\) is injective, is an injective envelope, if and only if \(\Hom_{R_\mf p}(k(\mf p),i_\mf p)\) is an isomorphism for all \(\mf p \in \spec R\). Dailey proved in his thesis \cite[Proposition 4.2.7]{dailey16} a dual characterization of flat covers of cotorsion modules using the functor \(k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,-)\). As our proof demonstrates, utilizing these two functors makes it far easier to attack the problem of proving the full duality between injective envelopes and flat covers. We also consider the converse, i.e.~whether we can swap flat covers and injective envelopes in this duality. It turns out that for a wide variety of Noetherian rings this does not hold without extra restrictions. For example, we show that if \(R\) is a commutative Noetherian ring with \(\dim R \geq 2\), then there exists a flat cover \(f\) such that \(\Hom_R(f,E)\) is not an injective envelope for any injective cogenerator \(E\). However, the converse does hold with an added reflexivity requirement as was already shown in \cite[Corollary 3.3]{mao15}. \section{Preliminaries}\label{sec: Preliminaries} Throughout, \(R\) is a commutative Noetherian ring with identity and modules are over \(R\) unless stated otherwise. We denote the category of \(R\)-modules by \(\RMod\). Following \cite{enochs81}, we call a morphism \(f \colon F \to M\) of \(R\)-modules with \(F\) flat a \emph{flat precover}, if \(\Hom_R(F',F) \to \Hom_R(F',M) \to 0\) is exact for all flat modules \(F'\). If a flat precover \(f\) further satisfies \(fh = f\) only for automorphisms \(h \colon F \to F\), then we call \(f\) a \emph{flat cover}. Flat covers are unique up to isomorphism. 
Every module has a flat cover by \cite{bican01}. For a module \(M\), the \emph{minimal flat resolution of \(M\)} is a resolution \(\cdots \xrightarrow{d_2} F_1 \xrightarrow{d_1} F_0 \xrightarrow{d_0} M \to 0\) such that \(F_i \to \im d_i\) is a flat cover for each \(i \in \NN\). A module \(M\) is \emph{cotorsion}, if \(\Ext^1_R(F,M) = 0\) for all flat modules \(F\). Importantly, for any module \(M\) and injective module \(E\), \(\Hom_R(M,E)\) is pure injective and thus cotorsion \cite[Lemma 2.1]{enochs84}. We will denote the Matlis dual of a module \(M\) by \(M^\vee := \Hom_R(M,\bigoplus_{\mf n} E(R/\mf n))\) where the direct sum is taken over the maximal ideals, i.e.~\(\mf n \in \maxspec R\). Similarly, for a morphism \(f \colon M \to N\), we set \(f^\vee := \Hom_R(f, \bigoplus_\mf n E(R/\mf n))\). We will say that the module \(M\) is \emph{reflexive}, if the evaluation morphism \(M \to (M^\vee)^\vee\) is an isomorphism. Note that if \(M\) is reflexive, then all submodules and quotients of \(M\) are also reflexive. To prove Theorem \ref{thm: duality of inj envelope and flat cover}, we need the following alternative characterizations of injective envelopes and flat covers. The characterization of injective envelopes is well known, but the characterization of flat covers was only recently proven by Dailey in his thesis. \begin{fact}\label{fact: injective envelope iff soc isom} Let \(i \colon M \to E\) be a morphism. Then, \(i\) is the injective envelope, if and only if \(E\) is injective, \(i\) is a monomorphism, and \(\Hom_{R_\mf p}(k(\mf p),i_\mf p)\) is an isomorphism for all \(\mf p \in \spec R\). \end{fact} \begin{fact}[{\cite[Proposition 4.2.7]{dailey16}}]\label{fact: flat cover iff top isom} Let \(p \colon F \to M\) be an epimorphism where \(F\) is flat and cotorsion. Then, \(p\) is the flat cover, if and only if \(\ker p\) is cotorsion, and \(k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,p)\) is an isomorphism for all \(\mf p \in \spec R\). 
\end{fact} \begin{lemma}\label{lem: duality of soc and top} For \(\mf p \in \spec R\), an injective module \(E\), and a module \(M\), there exists a natural isomorphism \[k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,\Hom_R(M,E)) \cong \Hom_R(\Hom_{R_\mf p}(k(\mf p),M_\mf p),E).\] \end{lemma} \begin{proof} The tensor-hom adjunction gives the natural isomorphism \[k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,\Hom_R(M,E)) \cong k(\mf p) \otimes_{R_\mf p} \Hom_R(M_\mf p,E).\] Since \(k(\mf p)\) is finitely presented as an \(R_\mf p\)-module and \(E\) is injective, we have the natural isomorphism (see e.g.~\cite[Theorem 3.2.11]{enochs11}) \[k(\mf p) \otimes_{R_\mf p} \Hom_R(M_\mf p,E) \cong \Hom_R(\Hom_{R_\mf p}(k(\mf p),M_\mf p),E).\] \end{proof} \section{Duality between injective envelopes and flat covers}\label{sec: Duality between injective envelopes and flat covers} We are now ready to prove our main result. Note that \(2 \To 1\) is not new; it follows, e.g., from the argument in the proof of \cite[Theorem 3.7]{enochs12}. For completeness, we still give a proof of \(2 \To 1\) by reversing \(1 \To 3\). \begin{proof}[Proof of Theorem \ref{thm: duality of inj envelope and flat cover}] \(1 \To 3\): Assume that \(i\) is an injective envelope. Let \(E\) be any injective module. Now \(\Hom_R(i,E)\) is an epimorphism, \(\Hom_R(I,E)\) is flat and cotorsion, and \(\ker \Hom_R(i,E) = \Hom_R(\coker i,E)\) is cotorsion. Thus, by Fact \ref{fact: flat cover iff top isom}, \(\Hom_R(i,E)\) is a flat cover, if and only if \(k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,\Hom_R(i,E))\) is an isomorphism for all \(\mf p \in \spec R\).
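Here the identification \(\ker \Hom_R(i,E) = \Hom_R(\coker i,E)\) is simply left exactness of \(\Hom_R(-,E)\) combined with the injectivity of \(E\): applying \(\Hom_R(-,E)\) to the short exact sequence \(0 \to M \xrightarrow{i} I \to \coker i \to 0\) yields the exact sequence \[0 \to \Hom_R(\coker i,E) \to \Hom_R(I,E) \xrightarrow{\Hom_R(i,E)} \Hom_R(M,E) \to 0.\]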
For any \(\mf p \in \spec R\) we have the commutative diagram \[\begin{tikzcd} k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,\Hom_R(I,E)) \dar{k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,\Hom_R(i,E))} \rar{\cong} & \Hom_R(\Hom_{R_\mf p}(k(\mf p),I_\mf p),E) \dar{\Hom_R(\Hom_{R_\mf p}(k(\mf p),i_\mf p),E)} \\ k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,\Hom_R(M,E)) \rar{\cong} & \Hom_R(\Hom_{R_\mf p}(k(\mf p),M_\mf p),E) \end{tikzcd}\] where the horizontal morphisms are isomorphisms by Lemma \ref{lem: duality of soc and top}. Since the morphisms \(\Hom_{R_\mf p}(k(\mf p),i_\mf p)\) are isomorphisms, the morphisms \(\Hom_R(\Hom_{R_\mf p}(k(\mf p),i_\mf p),E)\) are isomorphisms as well. Hence the morphisms \(k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,\Hom_R(i,E))\) are isomorphisms and thus \(\Hom_R(i,E)\) is a flat cover. \(3 \To 2\): Trivial. \(2 \To 1\): Assume that \(\Hom_R(i,E)\) is a flat cover for some injective cogenerator \(E\). By Fact \ref{fact: flat cover iff top isom}, the morphisms \(k(\mf p) \otimes_{R_\mf p} \Hom_R(R_\mf p,\Hom_R(i,E))\) are isomorphisms for all \(\mf p \in \spec R\). The above diagram shows that the morphisms \(\Hom_R(\Hom_{R_\mf p}(k(\mf p),i_\mf p),E)\) are all isomorphisms. Since \(E\) is an injective cogenerator, the morphisms \(\Hom_{R_\mf p}(k(\mf p),i_\mf p)\) are then all isomorphisms. Thus \(i\) is an injective envelope by Fact \ref{fact: injective envelope iff soc isom}. \end{proof} \begin{remark} In \cite[Theorem 3.7]{enochs12} the authors give the following characterization of left Noetherian rings: a ring \(S\) is left Noetherian, if and only if a monomorphism \(i\) is an injective preenvelope, precisely when \(i^+\) is a flat precover. Here \((-)^+ := \Hom_\ZZ(-,\QQ/\ZZ) \cong \Hom_S(-,S^+)\); note that \(S^+\) is an injective cogenerator. 
If \(S\) is commutative, we can combine this characterization with Theorem \ref{thm: duality of inj envelope and flat cover} to see that \(S\) is Noetherian, if and only if a morphism \(i\) is an injective envelope, precisely when \(i^+\) is a flat cover. \end{remark} Theorem \ref{thm: duality of inj envelope and flat cover} of course immediately generalizes to minimal injective and flat resolutions. \begin{corollary}\label{cor: duality of min inj reso and min flat reso} Let \(0 \to M \to I^\bullet\) be a chain complex. The following conditions are equivalent: \begin{enumerate} \item \(0 \to M \to I^\bullet\) is a minimal injective resolution; \item \(\Hom_R(I^\bullet,E) \to \Hom_R(M,E) \to 0\) is a minimal flat resolution for some injective cogenerator \(E\); \item \(\Hom_R(I^\bullet,E) \to \Hom_R(M,E) \to 0\) is a minimal flat resolution for all injective modules \(E\). \end{enumerate} \end{corollary} \begin{remark} For an injective module \(E\) and a module \(M\), this suggests a connection between the Bass numbers \(\mu_i(\mf p,M)\) and the dual Bass numbers \(\pi_i(\mf p,\Hom_R(M,E))\) defined by Xu in \cite[Definition 3.2]{xu95}. Indeed the following formula holds \[\pi_i(\mf p,\Hom_R(M,E)) = \dim_{k(\mf p)} \Hom_R(k(\mf p),E)^{\mu_i(\mf p,M)},\] where \(\Hom_R(k(\mf p),E)^{\mu_i(\mf p,M)}\) denotes the direct \emph{product} of copies of \(\Hom_R(k(\mf p),E)\) indexed by a set of cardinality \(\mu_i(\mf p,M)\). To see this, recall that the dual Bass numbers of the cotorsion module \(\Hom_R(M,E)\) satisfy \[\pi_i(\mf p,\Hom_R(M,E)) = \dim_{k(\mf p)} \Tor_i^{R_\mf p}(k(\mf p),\Hom_R(R_\mf p,\Hom_R(M,E)))\] by \cite[Theorem 2.2]{enochs97}. The formula follows then from the identity \[\Tor_i^{R_\mf p}(k(\mf p),\Hom_R(R_\mf p,\Hom_R(M,E))) \cong \Hom_R(\Ext^i_{R_\mf p}(k(\mf p),M_\mf p),E).\] This identity was already used in the proof of \cite[Proposition 2.9]{enochs97}, where the authors note that it implies \(\pi_i(\mf p,E(R/\mf p)) = \mu_i(\mf p,R)\). 
\end{remark} Next, we will consider the converse of Theorem \ref{thm: duality of inj envelope and flat cover}. It was proven in \cite[Corollary 3.3]{mao15} that the converse holds with an added reflexivity requirement. For completeness, we give an easy proof of this using Theorem \ref{thm: duality of inj envelope and flat cover}. \begin{proposition}[{\cite[Corollary 3.3]{mao15}}]\label{prop: duality from flat to injective} Let \(p \colon F \to M\) be a morphism where \(F\) is reflexive. Then \(p\) is a flat cover, if and only if \(p^\vee\) is an injective envelope. \end{proposition} \begin{proof} Note that \(F\) is flat and \(p\) is an epimorphism, if and only if \(F^\vee\) is injective and \(p^\vee\) is a monomorphism. Thus we can assume that \(F\) is flat and \(p\) is an epimorphism. Since \(F\) is reflexive and \(p\) is an epimorphism, \(M\) is reflexive as well. Hence, the commutative diagram \[\begin{tikzcd} F \rar{p} \dar{\cong} & M \dar{\cong} \\ (F^\vee)^\vee \rar{(p^\vee)^\vee} & (M^\vee)^\vee \end{tikzcd}\] shows that \(p\) is a flat cover, if and only if \((p^\vee)^\vee\) is a flat cover. The claim now follows from Theorem \ref{thm: duality of inj envelope and flat cover}. \end{proof} Both implications in the previous proposition fail if we drop the assumption that \(F\) is reflexive. We will show this in the following examples. \begin{example} Let \(f \colon F \to M\) be a flat precover. If \(\Hom_R(f,E)\) is an injective envelope for some injective cogenerator \(E\), then \(f\) is a flat cover. To see this, let \(h \colon F \to F\) be a morphism such that \(fh = f\). Now \(\Hom_R(h,E)\Hom_R(f,E) = \Hom_R(f,E)\) implies that \(\Hom_R(h,E)\) is an isomorphism. Thus \(h\) is an isomorphism. This implication does not work if \(f\) is not a flat precover. For example, let \((R,\mf m)\) be a Noetherian local ring that is \emph{not} complete. We denote the residue field by \(k = R/\mf m\). 
Since \(\widehat{R} \to k\) is the flat cover of \(k\) and \(R \not\cong \widehat{R}\), \(f \colon R \to k\) is not a flat cover. However, \(f^\vee \colon k = k^\vee \to R^\vee = E(k)\) is an injective envelope. \end{example} \begin{example} Let \((R,\mf m)\) be a Noetherian local ring that is not complete or \(\dim R \geq 2\). By \cite[Theorem 4.1]{belshoff92} there exists an injective envelope \(i \colon M \to I\) such that \(M\) is reflexive, but \(I\) is not. Now \(i^\vee\) is a flat cover by Theorem \ref{thm: duality of inj envelope and flat cover}, but \((i^\vee)^\vee\) is not an injective envelope since it factors through \(I \to (I^\vee)^\vee\) which is not an isomorphism. \end{example} \begin{example} We can generalize the previous example further to non-local Noetherian rings. Let \(R\) be Noetherian with \(\mf m \in \maxspec R\) such that \(R_\mf m\) is not complete or \(\dim R_\mf m \geq 2\). Let \(f \colon F \to M\) be a flat cover in the category of \(R_\mf m\)-modules such that \(\Hom_{R_\mf m}(f,E(k(\mf m)))\) is not an injective envelope; this exists by the previous example. The morphism \(f\) is a flat cover in \(\RMod\) as well. To see this, note that any morphism \(F' \to M\) where \(F'\) is a flat \(R\)-module factors through \(F'_\mf m\). Thus there exists a morphism \(F'_\mf m \to F\) such that the diagram \[\begin{tikzcd} & F' \dar \\ & F'_\mf m \dar \ar{dl} \\ F \rar & M \end{tikzcd}\] commutes. Hence \(f\) is a flat precover in \(\RMod\). Clearly then \(f\) is a flat cover in \(\RMod\). The morphism \(\Hom_{R_\mf m}(f,E(k(\mf m))) = \Hom_R(f,E(R/\mf m))\) is a direct summand of \(f^\vee\). Since \(\Hom_{R_\mf m}(f,E(k(\mf m)))\) is not an injective envelope, \(f^\vee\) is not an injective envelope either. Even further, for \emph{any} injective cogenerator \(E\), the morphism \(\Hom_R(f,E)\) is not an injective envelope. 
This follows since \(\bigoplus_{\mf n \in \maxspec R} E(R/\mf n)\) is a direct summand of \(E\) and so \(f^\vee\) is a direct summand of \(\Hom_R(f,E)\). \end{example} We could of course directly extend Proposition \ref{prop: duality from flat to injective} to left resolutions that consist of reflexive modules. However, we can relax this assumption slightly with the following lemma. \begin{lemma} If \(M\) has a resolution consisting of reflexive flat modules, then the minimal flat resolution of \(M\) consists of reflexive modules as well. \end{lemma} \begin{proof} Note that an epimorphism from a flat module with a cotorsion kernel is a flat precover. The kernels in a resolution consisting of reflexive flat modules are also reflexive and thus cotorsion. Hence, such a resolution is obtained by chaining flat precovers. The minimal flat resolution of \(M\) is a direct summand of any such resolution. Therefore the minimal flat resolution also consists of reflexive modules. \end{proof} \begin{corollary}\label{cor: duality of reflexive min flat reso and min inj reso} Assume that \(M\) has a resolution consisting of reflexive flat modules. Then, a chain complex \(F_\bullet \to M \to 0\) is a minimal flat resolution, if and only if \(0 \to M^\vee \to F_\bullet^\vee\) is a minimal injective resolution. \end{corollary} \begin{proof} Assume first that \(F_\bullet \to M \to 0\) is the minimal flat resolution of \(M\). By the previous lemma, it consists of reflexive modules. Thus it is isomorphic to \((F_\bullet^\vee)^\vee \to (M^\vee)^\vee \to 0\). This being a minimal flat resolution implies that \(0 \to M^\vee \to F_\bullet^\vee\) is a minimal injective resolution by Corollary \ref{cor: duality of min inj reso and min flat reso}. Assume then that \(0 \to M^\vee \to F_\bullet^\vee\) is the minimal injective resolution of \(M^\vee\). Again by Corollary \ref{cor: duality of min inj reso and min flat reso}, \((F_\bullet^\vee)^\vee \to M \to 0\) is a minimal flat resolution.
The previous lemma then shows that \((F_\bullet^\vee)^\vee\) consists of reflexive modules. Since \(F_\bullet\) is a subcomplex of \((F_\bullet^\vee)^\vee\), \(F_\bullet\) also consists of reflexive modules. Thus \(F_\bullet = (F_\bullet^\vee)^\vee\) so \(F_\bullet \to M \to 0\) is a minimal flat resolution. \end{proof} \end{document}
\begin{document} \title{Stable free boundary surfaces with constant extrinsic curvature in $3$-dimensional space forms} \begin{abstract} {In this paper we use the notion of stability for free boundary surfaces with constant higher order mean curvature to obtain rigidity results for $H_2$-surfaces with free boundary in a geodesic ball of a simply connected $3$-dimensional space form or in a slab of $\mathbb{R}^3$.} \end{abstract} \unmarkedfntext{Keywords: Isometric immersions, free boundary, capillary, higher order mean curvature} \unmarkedfntext{2000 Mathematics Subject Classification: 53C42, 53A10.} \unmarkedfntext{The authors were partially supported by CAPES.} \section{Introduction}\label{um} {The extrinsic curvature $H_2$ of a surface is the product of its principal curvatures. As a consequence of the Gauss equation, when the surface is immersed into a $3$-manifold with constant curvature equal to $c \in \mathbb{R}$, its intrinsic curvature $K$ and its extrinsic curvature are related via $H_2=K-c$. In particular, when the ambient space is the $3$-dimensional Euclidean space $\mathbb{R}^3$, both notions coincide. The notion of stability of surfaces with constant mean curvature has been studied by mathematicians throughout the last four decades. It is known that minimal hypersurfaces can be seen as critical points of the area functional, whereas the hypersurfaces with constant mean curvature (CMC) can also be described in a variational setting: they are critical points of the area functional with respect to variations which preserve volume. Stability then means that they minimize area among such variations. Given a region $\Omega$ of a Riemannian manifold $M$, a hypersurface $\Sigma \subseteq \Omega$ whose boundary $\partial\Sigma$ is contained in $\partial\Omega$ is said to have free boundary if its boundary intersects $\partial\Omega$ orthogonally.
In a more general situation, when the contact angle is constant along the intersection, such a submanifold is said to be capillary. Minimal or CMC capillary hypersurfaces supported on $\partial\Omega$ can also be seen as critical points of the area functional but, in this case, only variations which keep the boundary on the supporting hypersurface are considered. A number of results have been proved when the ambient space has dimension $3$ \cite{souam1997stability, ros1995stability, nunes2017stable, ros1997stability}. When considering higher order mean curvatures, the notion of stability does not come from a variational setting. Despite that, for closed hypersurfaces (fixed boundary variations) the second author and Barbara Nelli proposed a notion of stability for such hypersurfaces by using the linearization of the corresponding PDE (see \cite{elbert2019note}) and, in \cite{damasceno2022stable}, the first and the second authors proposed a notion of stability for the free boundary and capillary cases. The main goal of this paper is to prove the following results: \begin{teo}\label{003.1} Let $\Sigma^2$ be a closed disk and $\varphi : \Sigma \rightarrow \Omega \subseteq \mathbb{M}^3(c)$ be an $H_2$-surface with free boundary in $\partial\Omega$ and $H_2>0$. Then $\varphi(\Sigma)$ is totally umbilical. \end{teo} \begin{teo}\label{006.1} Let $\varphi : \Sigma^2 \rightarrow B_R \subseteq \mathbb{M}^3(c)$ be a stable $H_2$-surface with free boundary in a geodesic ball $B_R$ with radius $R>0$. If $c>0$, assume the surface is contained in a hemisphere, and if $c<0$, assume that $\frac{A(\Sigma)}{\ell(\partial\Sigma)}>-\frac{\cn_c(R)}{c\sn_c(R)}$, where \begin{equation}\label{0014} \sn_c(\rho)=\begin{dcases*} \frac{\sin\left(\rho\sqrt{c}\right)}{\sqrt{c}},& if $c>0$ \\ \rho,& if $c=0$ \\ \frac{\sinh\left(\rho\sqrt{-c}\right)}{\sqrt{-c}},& if $c<0$ \end{dcases*} \end{equation} and $\cn_c(\rho)=\sn_c^\prime(\rho)$.
Then $\varphi(\Sigma)$ is totally umbilical. \end{teo} \begin{teo}\label{008.1} Let $\varphi : \Sigma \rightarrow \mathbb{R}^3$ be a compact stable $H_2$-surface with free boundary in a slab bounded by two parallel planes $\Pi_1$ and $\Pi_2$ such that its genus is equal to $0$ and $H_2>0$. Then $\varphi(\Sigma)$ is a surface of revolution around an axis orthogonal to $\Pi_1$. \end{teo} The first is a generalization of \cite[Theorem 1]{nitsche1985stationary} when $c=0$ and of \cite[Theorem 4.1]{souam1997stability} when $c \neq 0$. The second theorem is an extension of \cite[Theorem 5.1]{souam1997stability}, whereas the third is an extension of \cite[Theorem 3.1]{ainouz2016stable}. The paper is organized as follows: Section \ref{dois} is dedicated to fixing the notation and the concepts used throughout the rest of the paper, and Sections \ref{tres}, \ref{quatro} and \ref{cinco} are dedicated to the proofs of Theorems \ref{003.1}, \ref{006.1} and \ref{008.1}, respectively.} \section{Preliminaries}\label{dois} {Let $\left(M^3,g\right)$ be an oriented Riemannian manifold and $\varphi : \Sigma^2 \rightarrow M$ be an oriented surface with unit normal vector field $\eta$ in the normal bundle $\Gamma(N\Sigma)$. Its second fundamental form $\emph{II}$, scalar second fundamental form $\emph{II}_\eta$ and Weingarten operator $A=\left(\emph{II}_\eta\right)^\flat$ are defined, respectively, as \begin{eqnarray*} \emph{II}\left(X,Y\right) &=& \left(\overline\nabla_XY\right)^\perp=\left<\overline\nabla_XY,\eta\right>\eta=\emph{II}_\eta\left(X,Y\right)\eta \\ \left<A(X),Y\right> &=& \emph{II}_\eta\left(X,Y\right)=\left<-\overline\nabla_X\eta,Y\right>, \end{eqnarray*} where $X,Y \in \Gamma(T\Sigma)$ and $\overline\nabla$ is the Levi-Civita connection of $M$. Let $\kappa_1(p) \geq \kappa_2(p)$ be the principal curvatures of the surface $\varphi$ at $p$.
The $1$-mean curvature $H_1$ is the arithmetic mean of $\kappa_1$ and $\kappa_2$, and the $2$-mean curvature is their product $H_2=\kappa_1\kappa_2$. The surface is said to have constant mean curvature of order $r \in \{1,2\}$ if $H_r$ is constant over $\Sigma$; when this happens, $\Sigma$ is called an {$H_r$-surface}. The first Newton transformation $P_1$ is defined by $P_1=2H_1I-A$. Since $A_p$ is self-adjoint for all $p \in \Sigma$, the Newton transformations are self-adjoint as well and their eigenvectors are the same as those of $A$. If $\{e_1,e_2\}$ are the eigenvectors of $A$, then the eigenvalue associated to $e_i$ is equal to $S_1(A_i)=2H_1-\kappa_i$. Moreover, we have the following identities: \begin{eqnarray} \tr P_1 &=& 2H_1 \label{0001} \\ \tr P_1A &=& 2H_2 \label{0003} \\ \tr P_1A^2 &=& 2H_1H_2. \label{0008} \end{eqnarray} \ In a general Riemannian manifold $(M,g)$ with Levi-Civita connection $\overline\nabla$, if $\phi$ is a pointwise symmetric $(2,0)$-tensor in $M$, the Cheng-Yau operator of $f \in C^\infty(M)$ is defined by \begin{equation*} \Box f=\tr\left(\phi\left(\hess f\right)^\flat\right)=\dive\left(\phi\overline\nabla f\right)-\left<\dive\phi,\overline\nabla f\right>, \end{equation*} where $\hess f$ is the Hessian of $f$ in $M$, $\left(\hess f\right)^\flat$ is the metric $(1,1)$-tensor field on $M$ equivalent to $\hess f$ and $\dive\phi:=\tr\left(\overline\nabla\phi\right)$. The operator $\phi$ is said to be divergence free if $\dive\phi=0$. When considering an oriented surface $\varphi : \Sigma^2 \rightarrow M^3$ with shape operator $A \in \Gamma\left(\End\left(T\Sigma\right)\right)$, the $L_1$-operator of $\Sigma$ is defined as the Cheng-Yau operator for the Newton transformation $P_1$, i.e., \begin{equation*} L_1f=\tr\left(P_1\left(\hess f\right)^\flat\right), \quad f \in C^\infty(\Sigma).
\end{equation*} Note that $-L_1$ is a second-order elliptic differential operator whenever $P_1$ is positive definite at each point of $\Sigma$. If $H_2>0$, then, after a choice of orientation on $\varphi$, $P_1$ is positive definite \cite[Lemma 3.10]{elbert2002constant}. In \cite[Theorem 4.1]{rosenberg1993hypersurfaces}, H. Rosenberg proved that $P_1$ is divergence free when $M$ has constant sectional curvature (see also \cite[Corollary 3.7]{elbert2002constant} for the case where $r=1$ and $M$ is Einstein). Let $\Omega \subseteq M$ be a closed domain with smooth boundary $\partial\Omega$ and assume that $\varphi : \Sigma \rightarrow M$ is an oriented surface such that $\varphi(\Sigma) \subseteq \Omega$ and $\varphi(\partial\Sigma) \subseteq \partial\Omega$. Let $\nu \in \Gamma\left(T\Sigma\vert_{\partial\Sigma}\right)$ be the unit outward conormal vector field on $\partial\Sigma$ and let $\overline\nu \in \Gamma\left(T\partial\Omega\vert_{\partial\Sigma}\right)$ and $\overline\eta \in \Gamma\left(TM\vert_{\partial\Omega}\right)$ be the unit normal vector fields associated to the immersions $\varphi\vert_{\partial\Sigma} : \partial\Sigma \rightarrow \partial\Omega$ and $\iota_{\partial\Omega} : \partial\Omega \hookrightarrow M$, respectively, such that $\left\{\nu,\eta\right\}$ has the same orientation as $\left\{\overline\nu,\overline\eta\right\}$ at each point of $\varphi(\partial\Sigma)$. If $\theta$ denotes the angle between $\nu$ and $\overline\nu$, then \begin{equation}\label{0002}\begin{dcases*} \nu=\cos\theta\,\overline\nu+\sin\theta\,\overline\eta \\ \eta=-\sin\theta\,\overline\nu+\cos\theta\,\overline\eta \end{dcases*}, \quad \text{or conversely,} \quad \begin{dcases*} \overline\nu=\cos\theta\,\nu-\sin\theta\,\eta \\ \overline\eta=\sin\theta\,\nu+\cos\theta\,\eta \end{dcases*}.
\end{equation} An $H_r$-surface $\varphi : \Sigma \rightarrow \Omega \subseteq M$ is said to be {capillary} if the contact angle $\theta$ between $\partial\Sigma$ and $\partial\Omega$ is constant. When $\theta=\frac{\pi}{2}$, $\varphi$ is called a {free boundary surface}. When $\varphi : \Sigma \rightarrow \Omega \subseteq M$ is a surface with free boundary in $\Omega$, \eqref{0002} implies that $\nu=\overline\eta$ and $\eta=-\overline\nu$. The following result will be used throughout this article. \begin{lema}[{\cite[Lemma 2.2]{ainouz2016stable}}]\label{002.4} Suppose $\iota_{\partial\Omega}$ is a totally umbilical immersion into $M$ and that $\varphi$ is a capillary immersion into $M$. Then the unit outward conormal vector field $\nu \in \Gamma\left(T\Sigma\vert_{\partial\Sigma}\right)$ is a principal direction of $\varphi$. \end{lema} A variation of $\varphi$ is a smooth map $\Phi : \Sigma^2 \times (-\varepsilon,\varepsilon) \rightarrow M^3$ such that, for each $t \in (-\varepsilon,\varepsilon)$, $\varphi_t=\Phi\vert_{\Sigma \times \{t\}}$ is an isometric immersion and $\varphi_0=\varphi$. The pair $(\Sigma,\varphi_t^*g)$ will be denoted by $\Sigma_t$. The variational field of $\Phi$ in $\varphi_t$ is defined by $$\xi_t(p)=\left.\Phi_*\frac{\partial}{\partial t}\right\vert_{(p,t)} \in \Gamma\left(TM\vert_{\varphi_t(\Sigma)}\right).$$ If $\eta_t \in \Gamma(N\Sigma)$ is the unit normal vector field of $\varphi_t$, the support function of $\Phi$ at $t$ is defined by $$f_t=\left<\xi_t,\eta_t\right> \in C^\infty(\Sigma).$$ Since $\varphi_t : \Sigma \rightarrow M$ is an oriented surface, one can define its second fundamental form $\emph{II}_t$, its scalar second fundamental form $\left(\emph{II}_t\right)_{\eta_t}$ and its Weingarten operator $A_t$.
Also, we set $\overline{R}_{\eta_t}\left(X\right):=\overline{R}\left(\eta_t,X\right)\eta_t$, where $\overline{R}$ is the Riemann curvature tensor of $M$ defined by $$\overline{R}(X,Y)Z=\overline\nabla_Y\overline\nabla_XZ-\overline\nabla_X\overline\nabla_YZ+\overline\nabla_{[X,Y]}Z, \quad X,Y,Z \in \Gamma(TM).$$ If $H_2(t)$ denotes the $2$-mean curvature associated to the immersion $\varphi_t$, its variation is given by \begin{equation}\label{0004} H_2^\prime(t)=\left(L_1\right)_tf_t+2H_1(t)H_2(t)f_t+\tr_{\Sigma_t}\left(\left(P_1\overline{R}_\eta\right)_t\right)f_t+\xi_t^\top\left(H_2(t)\right), \end{equation} where $\left(L_1\right)_t$ is the $L_1$-operator of the immersion $\varphi_t$ and $\left(P_1\overline{R}_\eta\right)_t:=\left(P_1\right)_t \circ \overline{R}_{\eta_t}$. A proof of \eqref{0004} can be found in \cite[Proposition 3.2]{elbert2002constant}. The enclosed volume between $\Sigma$ and $\Sigma_t$ is defined as $\mathcal{V}(t)=\int_{\Sigma \times [0,t]} \Phi^*d\mu_M$, with $d\mu_M$ being the volume form of $(M,g)$. A variation $\Phi$ is volume-preserving if $\mathcal{V}(t)=\mathcal{V}(0)$ for all $t \in (-\varepsilon,\varepsilon)$. It is known that $$\mathcal{V}^\prime(0)=\int_\Sigma f\,d\mu_\Sigma,$$ where $f=\left<\xi,\eta\right> \in C^\infty(\Sigma)$ and $d\mu_\Sigma$ is the volume form of $\left(\Sigma,\varphi^*g\right)$. Thus, a variation $\Phi$ is volume-preserving if and only if $\int_\Sigma f\,d\mu_\Sigma=0$. An $H_2$-surface $\varphi : \Sigma \rightarrow M$ is {positive definite} if $P_1$ is positive definite at each point $p \in \Sigma$. A variation $\Phi$ of a surface $\varphi : \Sigma \rightarrow \Omega \subseteq M$ is called admissible if $\varphi_t(\inte\Sigma) \subseteq \inte\Omega$ and $\varphi_t(\partial\Sigma) \subseteq \partial\Omega$ for any $t \in (-\varepsilon,\varepsilon)$, where $\varphi_t=\Phi\vert_{\Sigma \times \{t\}}$.
If $\Phi$ is an admissible variation of $\varphi$, then $\xi\vert_{\partial\Sigma} \in \Gamma\left(T\partial\Omega\vert_{\partial\Sigma}\right)$. If $\Sigma$ is a capillary $H_2$-surface supported in $\partial\Omega$ with contact angle $\theta \in (0,\pi)$ and $\Phi$ is a volume-preserving admissible variation of $\varphi$, consider the functional, defined in \cite{damasceno2022stable}, \begin{equation}\label{0005} \mathcal{F}_{1,\theta}[\Sigma_t]=-\int_\Sigma H_2(t)\left<\xi_t,\eta_t\right>\,d\mu_{\Sigma_t}+\int_{\partial\Sigma} \left<\xi_t,(P_1\nu-\vert{P_1\nu}\vert\cos\theta\,\overline\nu)_t\right>\,d\mu_{\partial\Sigma_t}, \end{equation} where $d\mu_{\Sigma_t}$ and $d\mu_{\partial\Sigma_t}$ denote the volume forms of $\Sigma_t$ and $\partial\Sigma_t=\left(\partial\Sigma,\left(\varphi_t\vert_{\partial\Sigma}\right)^*g\right)$, respectively. If $\partial\Omega$ is totally umbilical and $\Phi$ is an admissible volume-preserving variation of $\varphi$ then \begin{multline}\label{0006} \left.\frac{\partial}{\partial t}\mathcal{F}_{1,\theta}\left[\Sigma_t\right]\right\vert_{t=0}=-\int_\Sigma f\left(L_1f+\tr\left(P_1\left(A^2+\overline{R}_\eta\right)\right)f\right)\,d\mu_\Sigma+\\+\int_{\partial\Sigma} \left\vert{P_1\nu}\right\vert\,f\left(\frac{\partial f}{\partial\nu}+\left(\csc\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\overline\nu,\overline\nu)-\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu)\right)f\right)\,d\mu_{\partial\Sigma}, \end{multline} where $f=\left<\xi,\eta\right> \in C^\infty(\Sigma)$ is the support function of $\Phi$ at $t=0$ and $\emph{II}_\Sigma$ and $\emph{II}_{\partial\Omega}$ are the second fundamental forms of $\varphi : \Sigma \rightarrow \Omega$ and $\iota_{\partial\Omega} : \partial\Omega \hookrightarrow \Omega$, respectively. For a proof see \cite[Appendix A]{damasceno2022stable}. 
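In the free boundary case $\theta=\frac{\pi}{2}$, the boundary term in \eqref{0006} simplifies, since $\csc\frac{\pi}{2}=1$ and $\cot\frac{\pi}{2}=0$, to \[\int_{\partial\Sigma} \left\vert{P_1\nu}\right\vert\,f\left(\frac{\partial f}{\partial\nu}+\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\overline\nu,\overline\nu)\,f\right)\,d\mu_{\partial\Sigma}.\]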
A positive definite capillary $H_2$-surface $\varphi : \Sigma \rightarrow \Omega \subseteq M$ supported in $\partial\Omega$ with contact angle $\theta \in (0,\pi)$ is {$1$-stable} if $\left.\frac{\partial}{\partial t}\mathcal{F}_{1,\theta}\left[\Sigma_t\right]\right\vert_{t=0} \geq 0$ for any volume-preserving admissible variation $\Phi$ of $\varphi$. If the inequality holds for all admissible variations of $\varphi$, $\Sigma$ is said to be {strongly $1$-stable}. The expression \eqref{0006} is associated to the eigenvalue problem below: \begin{equation}\label{0007}\begin{dcases*} T_1f=-L_1f-q_1f=\lambda f, & \quad $\text{in}~\Sigma$ \\ \frac{\partial f}{\partial\nu}+\alpha_\theta f=0, & \quad $\text{on}~\partial\Sigma$ \end{dcases*},\end{equation} where $q_1=\tr\left(P_1\left(A^2+\overline{R}_\eta\right)\right) \in C^\infty(\Sigma)$ and $\alpha_\theta=\csc\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\overline\nu,\overline\nu)-\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu) \in C^\infty(\partial\Sigma)$. For the properties involving its principal eigenvalue see \cite[Proposition 3.4]{damasceno2022stable}. The notion of $1$-stability can also be considered when $P_1$ is negative definite; see \cite[Remark 3.5]{damasceno2022stable}. Let $\mathbb{M}^3(c)$ be the simply connected space form of constant sectional curvature $c$, i.e., $\mathbb{M}^3(c)$ is equal to $\mathbb{R}^3$ if $c=0$, $\mathbb{S}^3(c)$ if $c>0$ and $\mathbb{H}^3(c)$ if $c<0$.
In this paper we consider the following models for $\mathbb{M}^3(c)$: \begin{eqnarray*} \mathbb{R}^3 &=& \left\{x=(x_1,x_2,x_3,x_4) \in \mathbb{R}^4\,\left|\right.\,x_4=0\right\} \\ \mathbb{S}^3(c) &=& \left\{x=(x_1,x_2,x_3,x_4) \in \mathbb{R}^4\,\left|\right.\,x_1^2+x_2^2+x_3^2+x_4^2=\frac{1}{c}\right\} \\ \mathbb{H}^3(c) &=& \left\{x=(x_1,x_2,x_3,x_4) \in \mathbb{R}_1^4\,\left|\right.\,x_1^2+x_2^2+x_3^2-x_4^2=\frac{1}{c}, x_4>0\right\} \end{eqnarray*} endowed with the pullback of the Euclidean metric for $c \geq 0$ or the Minkowski metric for $c<0$. When $M=\mathbb{M}^3(c)$ we have that $\overline{R}_\eta(X)=cX$ for all $X \in \Gamma(T\mathbb{M}^3(c))$ and $\dive P_1=0$ (see \cite[Theorem 4.1]{rosenberg1993hypersurfaces}). Thus, \eqref{0006} can be rewritten as \begin{equation}\label{0009} \left.\frac{\partial}{\partial t}\mathcal{F}_{1,\theta}\left[\Sigma_t\right]\right\vert_{t=0}=\int_\Sigma \left<P_1\nabla f,\nabla f\right>-2H_1\left(H_2+c\right)f^2\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_1\nu}\right\vert\alpha_\theta f^2\,d\mu_{\partial\Sigma}. \end{equation} The right-hand side of \eqref{0009} can be viewed as the quadratic form associated to a symmetric bilinear form on the Hilbert space $H^1(\Sigma)$, the closure of $C^\infty(\Sigma)$ with respect to the Sobolev norm $$\left\Vert\cdot\right\Vert_{H^1(\Sigma)}^2=\left\Vert\cdot\right\Vert_{L^2(\Sigma)}^2+\left\Vert\nabla\cdot\right\Vert_{L^2(\Sigma)}^2.$$ The $1$-index form of $\varphi : \Sigma \rightarrow \Omega\subseteq\mathbb{M}^3(c)$ is \begin{equation}\label{0010} \mathcal{I}_{1,\theta}(f_1,f_2)=\int_\Sigma \left<P_1\nabla f_1,\nabla f_2\right>-2H_1\left(H_2+c\right)f_1f_2\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_1\nu}\right\vert\alpha_\theta f_1f_2\,d\mu_{\partial\Sigma}, \end{equation} where $f_1,f_2 \in H^1(\Sigma)$.
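The zeroth-order coefficient $2H_1\left(H_2+c\right)$ in \eqref{0009} and \eqref{0010} follows directly from the trace identities \eqref{0001} and \eqref{0008}, since $\overline{R}_\eta(X)=cX$ gives \[\tr\left(P_1\left(A^2+\overline{R}_\eta\right)\right)=\tr\left(P_1A^2\right)+c\,\tr P_1=2H_1H_2+2cH_1=2H_1\left(H_2+c\right).\]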
$\Sigma$ is strongly $1$-stable if and only if $\mathcal{I}_{1,\theta}(f,f) \geq 0$ for all $f \in H^1(\Sigma)$, and $1$-stable if $\mathcal{I}_{1,\theta}(f,f) \geq 0$ for all $f \in \mathcal{F}=\left\{f \in H^1(\Sigma)\,|\,\int_\Sigma f\,d\mu_\Sigma=0\right\}$. It can be proved that a compact totally umbilical capillary surface supported on a connected totally umbilical surface of $\mathbb{M}^3(c)$ is $1$-stable \cite[Proposition 4.2]{damasceno2022stable}. \ As in the case $r=0$, when considering a capillary $(r+1)$-minimal surface $\varphi$, i.e.\ $H_2=0$, we say that $\varphi$ is $1$-stable if $\mathcal{I}_{1,\theta}(f,f) \geq 0$ for all $f \in C_0^\infty(\Sigma)$. This means that the hypothesis of the variation being volume-preserving is dropped. \ If $f \in \mathcal{F}$, the normal vector field $\xi=f\eta$ on $\Sigma$ is a {Jacobi field} if $f \in \ker\mathcal{I}_{1,\theta}\vert_{\mathcal{F} \times \mathcal{F}}$, i.e., $\mathcal{I}_{1,\theta}(f,g)=0$ for every $g \in \mathcal{F}$. The next lemma, whose proof is in \cite[Lemma 4.4]{damasceno2022stable}, gives a characterization of Jacobi fields on $\Sigma$. \begin{lema}\label{005.3} Let $\varphi : \Sigma \rightarrow \Omega \subseteq \mathbb{M}^3(c)$ be a positive definite $H_2$-surface with free boundary in $\partial\Omega$ and $f \in \mathcal{F}$. Then \begin{enumerate}[i)] \item $\xi=f\eta$ is a Jacobi field on $\Sigma$ if and only if $f \in C^\infty(\Sigma)$ and \begin{equation}\label{0011}\begin{dcases*} T_1f=\text{constant} & in $\Sigma$ \\ \frac{\partial f}{\partial\nu}+\alpha_\theta f=0 & on $\partial\Sigma$ \end{dcases*}.\end{equation} \item If $\varphi$ is $1$-stable and $\mathcal{I}_{1,\theta}(f,f)=0$, then $\xi=f\eta$ is a Jacobi field on $\Sigma$. \end{enumerate} \end{lema}} \section{Proof of Theorem \ref{003.1}}\label{tres} {Theorem \ref{003.1}} is inspired by its analogous version proved by J. Nitsche in \cite{nitsche1985stationary} when $c=0$ and by R.
Souam in \cite[Theorem 4.1]{souam1997stability} when $c \neq 0$, both of them addressing constant mean curvature immersions of a disk into a ball in $\mathbb{M}^3(c)$. Here, we consider immersions of the disk into a compact, convex smooth body $\Omega$. In order to prove this result, one needs the following theorem proved by R. Bryant. \ {\noindent {\bf Theorem A }\cite[Theorem 3]{bryant2011complex}: Let $\varphi : \Sigma^2 \rightarrow \mathbb{M}^3(c)$ be a smooth immersion that satisfies a Weingarten equation of the form $H_1=f(H_1^2-H_2)$, for some $f \in C^\infty((-\varepsilon,\infty);\mathbb{R})$ and $\varepsilon>0$. Then $\varphi$ is totally umbilical or else the umbilic locus consists entirely of isolated points of strictly negative index.} For a definition of index, see \cite[p. 107]{hopf2003differential}. As a direct consequence of the Poincaré-Hopf Theorem for manifolds with boundary \cite[p. 35]{milnor1997topology}, we have a boundary version of Hopf's Theorem: \ \noindent {\bf Theorem B }(Boundary version of Hopf's Theorem) Let $\Sigma^2$ be a compact manifold with boundary $\partial\Sigma$ for which the umbilic locus $\mathcal{U}$ is finite. Suppose that one of the principal directions is transversal to $\partial\Sigma$. Then $$\sum_{p \in \mathcal{U}} i(p)=\chi(\Sigma),$$ where $i(p)$ is the index of $p \in \mathcal{U}$. {\begin{proof}[Proof of Theorem \ref{003.1}] Since $\varphi : \Sigma^2 \rightarrow \mathbb{M}^3(c)$ is an $H_2$-surface, its mean curvature satisfies a Weingarten equation $H_1=f(H_1^2-H_2)$, where $f(y)=\sqrt{y+H_2}$ and $y \in \left(-H_2,+\infty\right)$. If $\varphi$ is not totally umbilical then, by Theorem A, the umbilical points of $\varphi$ form a finite set $\mathcal{U} \subseteq \Sigma$ and each umbilical point has negative index. Since $\nu$ is a principal direction along $\partial\Sigma$, one can use Theorem B to obtain $$0>\sum_{p \in \mathcal{U}} i(p)=\chi(\Sigma)=1,$$ which is a contradiction.
Thus $\Sigma$ is totally umbilical. \end{proof}} \begin{obs}\label{003.2} The same argument holds if $\Sigma$ is a CMC surface, since Theorem A still holds in this case (choose $f$ to be a constant function). \end{obs} \section{Proof of Theorem \ref{006.1}}\label{quatro} {In this section we will prove Theorem \ref{006.1}. The geodesic ball $B_R$ is convex for all $R \in (0,R_c)$, where $R_c=+\infty$ if $c \leq 0$ and $R_c=\frac{\pi}{2\sqrt{c}}$ if $c>0$. Its boundary $\partial B_R$ is a totally umbilical sphere whose mean curvature with respect to the inward unit normal is equal to $\frac{\cn_c(R)}{\sn_c(R)}$. We first state some identities that will be used throughout the proof.} \begin{lema}\label{006.2} Let $\varphi : \Sigma^2 \rightarrow \mathbb{M}^3(c)$ be a surface. Then \begin{eqnarray} L_1\varphi &=& 2H_2\eta-2cH_1\varphi \label{0015} \\ L_1\eta &=& -\tr\left(P_1A^2\right)\eta+2cH_2\varphi-\nabla H_2. \label{0016} \end{eqnarray} Here $L_1\varphi$ and $L_1\eta$ are computed coordinate-wise. \end{lema} For a proof of \eqref{0015} and \eqref{0016} see \cite[Remark 5.1]{rosenberg1993hypersurfaces}. The next lemma also plays a key role in the proof of Theorem \ref{006.1}. \begin{lema}\label{006.3} Suppose that $\varphi : \Sigma \rightarrow \Omega \subseteq \mathbb{M}^3(c)$ is a surface such that $H_2>0$ and let $u \in C^\infty(\Sigma) \backslash\{0\}$ be a function such that $T_1u=0$. Then its nodal set $\Gamma=u^{-1}(\{0\})$ is a finite graph whose vertices are the critical points of $u$. In a neighborhood of each critical point, $\Gamma$ is a star of at least two branches. \end{lema} \begin{proof} Let $p \in \Sigma$ and take a parametrization $\varphi : U_p \subseteq \mathbb{R}^2 \rightarrow \Sigma$ of $\Sigma$ at $p$ with local coordinates $(u,v)$.
Since $L_1$ is a second-order elliptic differential operator with no lower-order terms, it follows from PDE theory in \cite[Chapter 3]{epstein1962partial} that there exists a coordinate change $$\overline{u}=h_1(u,v), \quad \overline{v}=h_2(u,v)$$ of class $C^2$ in a neighborhood of $p_0=\varphi^{-1}(p)$, whose Jacobian does not vanish at $p_0$, that transforms the pullback of $L_1$ into the Laplacian operator. We may suppose (restricting $U_p$ if necessary) that $\left(\overline{u},\overline{v}\right)$ is a diffeomorphism in $U_p$. Thus, in the new coordinates $(\overline{u},\overline{v})$, $L_1$ is the Laplacian and \cite[Theorem 2.5]{cheng1976eigenfunctions} implies that its nodal lines in $U_p$ meet at the critical points. Since $\Sigma$ is compact, we can cover $\Sigma$ with finitely many such open neighborhoods $U_p$, proving the claim. \end{proof} \begin{proof}[Proof of Theorem \ref{006.1}] {The proof is an extension of that in \cite[Theorem 11]{ros1995stability}. From \cite[Lemma 1.1]{ros1997stability}, $\left(\emph{II}_{\partial B}\right)_{\overline\eta}(\overline\nu,\overline\nu)=-\frac{\cn_c(R)}{\sn_c(R)}$ and the curvature of $\partial\Sigma$ in $\Sigma$ is equal to $\frac{\cn_c(R)}{\sn_c(R)}$. Thus, the Gauss-Bonnet Theorem implies that $$2\pi\chi(\Sigma)=\int_\Sigma K\,d\mu_\Sigma+\int_{\partial\Sigma}\kappa_g\,d\mu_{\partial\Sigma}=\int_\Sigma (H_2+c)\,d\mu_\Sigma+\int_{\partial\Sigma}\kappa_g\,d\mu_{\partial\Sigma} >cA(\Sigma)+\frac{\cn_c(R)}{\sn_c(R)}\ell(\partial\Sigma).$$ In all three cases the inequality above implies that the genus of $\Sigma$ is equal to zero.} Now consider the case $c=0$ and suppose, without loss of generality, that $B_R$ is the unit ball centered at the origin of $\mathbb{R}^3$.
Let $p_0 \in \Sigma$ be a point where the function $p \in \Sigma \mapsto \vert\varphi(p)\vert$ attains its minimum and define the function \begin{equation}\label{0019} f(p)=\left<\varphi(p) \wedge \eta(p_0),\eta(p)\right>, \quad p \in \Sigma, \end{equation} where $\wedge$ denotes the cross product in $\mathbb{R}^3$. It is clear that $f(p_0)=0$ and for all $\mathbf{v} \in T\Sigma$, \begin{equation*} \left<\nabla f,\mathbf{v}\right>= \mathbf{v}\left<\varphi \wedge \eta(p_0),\eta\right>=\left<\mathbf{v} \wedge \eta(p_0),\eta\right>+\left<\varphi \wedge \eta(p_0),\overline\nabla_{\mathbf{v}}\eta\right>=\left<\eta(p_0) \wedge \eta-A\left(\varphi \wedge \eta(p_0)\right)^\top,\mathbf{v}\right>. \end{equation*} Thus $\nabla f=\eta(p_0) \wedge \eta-A\left(\varphi \wedge \eta(p_0)\right)^\top$, where $^\top$ denotes the projection onto $T\Sigma$. Since $\left\vert\varphi\right\vert$ attains its minimum at $p_0$, we have $\varphi(p_0) \parallel \eta(p_0)$, hence $\nabla f(p_0)=0$. Also we have \begin{eqnarray*} L_1f &=& L_1\left<\varphi \wedge \eta(p_0),\eta\right>=L_1\left<\eta \wedge \varphi,\eta(p_0)\right> \\ &=& L_1\left<\left(\eta^2\varphi^3-\eta^3\varphi^2,\eta^3\varphi^1-\eta^1\varphi^3,\eta^1\varphi^2-\eta^2\varphi^1\right),\eta(p_0)\right> \\ &=& \left<\left(L_1\left(\eta^2\varphi^3-\eta^3\varphi^2\right),L_1\left(\eta^3\varphi^1-\eta^1\varphi^3\right),L_1\left(\eta^1\varphi^2-\eta^2\varphi^1\right)\right),\eta(p_0)\right>, \end{eqnarray*} where $\varphi^i=\left<\varphi,\mathbf{e}_i\right>$, $\eta^i=\left<\eta,\mathbf{e}_i\right>$, $i \in \{1,2,3\}$, and the vectors $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ form the canonical basis of $\mathbb{R}^3$. 
Using \eqref{0015} and \eqref{0016}, we have for $i,j \in \{1,2,3\}$, \begin{eqnarray*} L_1\left(\eta^i\varphi^j\right) &=& \varphi^jL_1\eta^i+\eta^iL_1\varphi^j+2\left<P_1\nabla\eta^i,\nabla\varphi^j\right> \\ &=& -2H_1H_2\eta^i\varphi^j+2H_2\eta^i\eta^j-2\left<P_1A\mathbf{e}_i^\top,\mathbf{e}_j^\top\right>. \end{eqnarray*} Since the last two terms are symmetric in $i$ and $j$ (note that $P_1A$ is self-adjoint), they cancel when we antisymmetrize, and hence $$L_1\left(\eta^i\varphi^j-\eta^j\varphi^i\right) = -2H_1H_2\left(\eta^i\varphi^j-\eta^j\varphi^i\right).$$ Thus, $L_1f+2H_1H_2f=0$ on $\Sigma$. Moreover, since $\varphi=\nu$ on $\partial\Sigma$, Lemma~\ref{002.4} implies that, on $\partial\Sigma$, \begin{eqnarray*} \frac{\partial f}{\partial\nu} &=& \left<\nabla f,\nu\right>=\left<\eta(p_0) \wedge \eta-A(\varphi \wedge \eta(p_0))^\top,\nu\right> \\ &=& \left<\nu \wedge \eta(p_0),\eta\right>-\left<\nu \wedge \eta(p_0),A\nu\right> \\ &=& f-\left\vert{A\nu}\right\vert\left<\nu \wedge \eta(p_0),\nu\right>=f. \end{eqnarray*} Hence the function $f$ satisfies \begin{equation}\label{0020} \begin{dcases*} L_1f+2H_1H_2f=0& in $\Sigma$ \\ \frac{\partial f}{\partial\nu}-f=0& on $\partial\Sigma$ \end{dcases*}.\end{equation} We claim that $f \equiv 0$ on $\Sigma$. Otherwise, Lemma \ref{006.3} implies that the lines of the nodal set $f^{-1}(\{0\})$ meet at the critical points of $f$. Using the Gauss-Bonnet theorem for each connected component $\Sigma_i$ of $\Sigma \backslash f^{-1}(\{0\})$, we have \begin{equation}\label{0021} \int_{\Sigma_i} K\,d\mu_\Sigma=2\pi\chi(\Sigma_i)-\int_{\partial\Sigma_i} \kappa_g\,d\mu_{\partial\Sigma}-\sum_j \theta_{ij}, \end{equation} where $\theta_{ij}$, $j \in \{1,...,j_i\}$, denote the external angles of $\Sigma_i$.
Summing up \eqref{0021} for all $i$, we obtain \begin{eqnarray} \int_\Sigma K\,d\mu_\Sigma &=& \sum_i \int_{\Sigma_i} K\,d\mu_\Sigma=\sum_i \left(2\pi\chi(\Sigma_i)-\int_{\partial\Sigma_i} \kappa_g\,d\mu_{\partial\Sigma}-\sum_j \theta_{ij}\right) \nonumber \\ &=& 2\pi\sum_i \chi(\Sigma_i)-\int_{\partial\Sigma} \kappa_g\,d\mu_{\partial\Sigma}-\sum_l \theta_l, \label{0022} \end{eqnarray} where the last term is the sum of all external angles over all connected components $\Sigma_i$. Since $\partial\Sigma$ is smooth, combining \eqref{0022} with the Gauss-Bonnet theorem gives \begin{eqnarray*} 2\pi\left(2-2g-s\right) &=& 2\pi\chi(\Sigma)=\int_\Sigma K\,d\mu_\Sigma+\int_{\partial\Sigma} \kappa_g\,d\mu_{\partial\Sigma} \\ &=& 2\pi\sum_i \chi(\Sigma_i)-\sum_l \theta_l, \end{eqnarray*} where $s$ is the number of components of $\partial\Sigma$. Since $f(p_0)=0$ and $\nabla f(p_0)=0$, Lemma~\ref{006.3} implies that there are at least two nodal lines of $f$ intersecting at $p_0$ and forming a star there; so $\sum_l \theta_l \geq 2\pi$. On the other hand, on each connected component $\Gamma_i$ of $\partial\Sigma$, $i \in \{1,...,s\}$, choosing a positively oriented arclength parametrization $\gamma$ we have $\varphi \wedge \eta=-\gamma^\prime$. So \begin{equation*} \int_{\Gamma_i}f\,d\mu_{\partial\Sigma}=\int_{\Gamma_i} \left<\varphi\wedge\eta(p_0),\eta\right>\,d\mu_{\partial\Sigma}=-\int_{\Gamma_i} \left<\varphi\wedge\eta,\eta(p_0)\right>\,d\mu_{\partial\Sigma}=\int_{\Gamma_i} \left<\gamma^\prime,\eta(p_0)\right>\,d\mu_{\partial\Sigma}=0, \end{equation*} and it follows that $f$ has at least two zeroes on each component $\Gamma_i$. Each point of $f^{-1}\left(\{0\}\right) \cap \Gamma_i$ contributes at least $\pi$ to the sum of the external angles above.
Putting things together, we have \begin{equation}\label{0023} \sum_l \theta_l \geq 2\pi\left(1+s\right) \end{equation} and using \eqref{0023} in \eqref{0022} we obtain $$\sum_i \chi(\Sigma_i)=\frac{1}{2\pi}\left(2\pi\left(2-2g-s\right)+\sum_l \theta_l\right) \geq 2-2g-s+1+s=3-2g.$$ Since $\Sigma$ has genus $g=0$, it follows that $\Sigma\backslash f^{-1}\left(\{0\}\right)$ has at least three connected components. Let $\Sigma_1$ and $\Sigma_2$ be two connected components of the nodal domain of $f$ and define $$\widetilde{f}=\begin{dcases*}f& in $\Sigma_1$ \\ \alpha f& in $\Sigma_2$ \\ 0& in $\Sigma \backslash (\Sigma_1 \cup \Sigma_2)$\end{dcases*},$$ where $\alpha \in \mathbb{R}$ is such that $\widetilde{f} \in \mathcal{F}$. Since $\partial\Sigma_i \cap \partial\Sigma=\partial\Sigma \cap \Sigma_i$ and $\widetilde{f} \equiv 0$ outside $\Sigma_i$, we have \begin{eqnarray*} \int_{\Sigma_1} \left<P_1\nabla\widetilde{f},\nabla\widetilde{f}\right>-2H_1H_2\widetilde{f}^2\,d\mu_\Sigma &=& \int_{\Sigma_1} \left<P_1\nabla\widetilde{f},\nabla f\right>-2H_1H_2\widetilde{f}f\,d\mu_\Sigma \\ &=& -\int_{\Sigma_1} \widetilde{f}\left(L_1f+2H_1H_2f\right)\,d\mu_\Sigma+\int_{\partial\Sigma \cap \Sigma_1}\vert{P_1\nu}\vert\widetilde{f}\frac{\partial f}{\partial\nu}\,d\mu_{\partial\Sigma} \\ &=& \int_{\partial\Sigma \cap \Sigma_1} \left\vert{P_1\nu}\right\vert \widetilde{f}^2\,d\mu_{\partial\Sigma} \end{eqnarray*} and, similarly, $$\int_{\Sigma_2}\left<P_1\nabla\widetilde{f},\nabla\widetilde{f}\right>-2H_1H_2\widetilde{f}^2\,d\mu_\Sigma=\int_{\partial\Sigma \cap \Sigma_2}\left\vert{P_1\nu}\right\vert\widetilde{f}^2\,d\mu_{\partial\Sigma}.$$ Thus, $$\mathcal{I}_1(\widetilde{f},\widetilde{f})=\sum_{i=1}^2 \int_{\Sigma_i} \left<P_1\nabla\widetilde{f},\nabla\widetilde{f}\right>-2H_1H_2\widetilde{f}^2\,d\mu_\Sigma-\int_{\partial\Sigma \cap \Sigma_i}\left\vert{P_1\nu}\right\vert\widetilde{f}^2\,d\mu_{\partial\Sigma}=0.$$ Hence, the second item of Lemma \ref{005.3} implies that
$\widetilde{f}$ is a Jacobi field on $\Sigma$. But since $\widetilde{f} \equiv 0$ outside of $\Sigma_1 \cup \Sigma_2$, Aronszajn's unique continuation principle \cite{aronszajn1956unique} implies that $\widetilde{f} \equiv 0$, which is a contradiction. Finally, since $f \equiv 0$, the Killing field $p \in \mathbb{R}^3 \mapsto p \wedge \eta(p_0) \in \mathbb{R}^3$ is tangent to $\Sigma$. Hence, $\Sigma$ is a rotation surface around the axis $\eta(p_0)$ with fixed point $p_0$ and thus $\Sigma$ must be homeomorphic to a disk. Using Theorem \ref{003.1}, we conclude that $\Sigma$ is totally umbilical. The non-Euclidean cases use similar arguments and, since the spherical and the hyperbolic cases are very similar, we will give a sketch of the proof only when $c=-1$. Using the same notation as in \cite[Theorem 5.1]{souam1997stability}, define $f : \Sigma \rightarrow \mathbb{R}$ by $$f(p)=\left<\varphi(p) \wedge \eta(p_0) \wedge \mathbf{e}_4,\eta(p)\right>.$$ The same arguments used in the Euclidean case give $\nabla f(p_0)=0$ and $$\begin{dcases*}L_1f+2(H_1H_2-1)f=0 & in $\Sigma$ \\ \frac{\partial f}{\partial\nu}-\frac{\cn_c(R)}{\sn_c(R)}f=0& on $\partial\Sigma$\end{dcases*}.$$ It can also be shown that $f \equiv 0$; to prove this claim, one considers a positively oriented arclength parametrization $\gamma$ of a connected component $\Gamma_i$ of $\partial\Sigma$, which satisfies $\varphi \wedge \eta \wedge \nu=-\gamma^\prime$. The identity $f \equiv 0$ implies that $\Sigma$ is a rotation surface in $\mathbb{R}^4$ around the plane generated by $\mathbf{e}_4$ and $\eta(p_0)$ with fixed point $p_0$, proving that $\Sigma$ is a disk. \end{proof} \section{Proof of Theorem \ref{008.1}}\label{cinco} In this section we will extend \cite[Theorem 3.1]{ainouz2016stable} to $1$-stable $H_2$-surfaces with free boundary in a slab of $\mathbb{R}^3$, with $H_2>0$ and genus $0$.
\begin{proof}[Proof of Theorem \ref{008.1}] The proof of this result is an adaptation of the arguments used in \cite[Theorem 3.1]{ainouz2016stable}. Without loss of generality, one can suppose that $\Pi_1=\{x_3=0\}$ and $\Pi_2=\{x_3=1\}$. Let $\Gamma$ be a connected component of $\partial\Sigma$ such that $\varphi(\Gamma)$ lies on $\Pi_1$ and consider in this plane the circumscribed circle $\mathscr{C}$ about $\varphi(\Gamma)$. We will prove that $\varphi(\Sigma)$ is a surface of revolution around the vertical axis passing through the center of $\mathscr{C}$. Assume, without loss of generality, that the center of $\mathscr{C}$ is the origin of $\mathbb{R}^3$ and consider the function $f(p)=\left<\varphi(p) \wedge \mathbf{e}_3,\eta(p)\right>$, $p \in \Sigma$, where $\wedge$ is the cross product of $\mathbb{R}^3$. A computation similar to the one in Theorem \ref{006.1} leading to \eqref{0020} shows that $$\begin{dcases*}L_1f+2H_1H_2f=0 & in $\Sigma$ \\ \frac{\partial f}{\partial\nu}=0 & on $\partial\Sigma$\end{dcases*}.$$ The proof is finished if one can show that $f \equiv 0$. Suppose, otherwise, that $f \not\equiv 0$. Then Lemma \ref{006.3} implies that its nodal set $f^{-1}(\{0\})$ is a graph whose vertices are the critical points of $f$. We must show that the nodal domain $\Sigma \backslash f^{-1}(\{0\})$ has at least $3$ connected components. If the function $f$ does not change its sign in a neighborhood of a point $p_0 \in f^{-1}(\{0\}) \cap \partial\Sigma$ then, as $L_1f=-2H_1H_2f$, the strong maximum principle \cite[Theorem 3.5]{gilbarg1977elliptic} and the Hopf Lemma \cite[Lemma 3.4]{gilbarg1977elliptic} imply that $\frac{\partial f}{\partial\nu}(p_0) \neq 0$ unless $f \equiv 0$ in a neighborhood of $p_0$, in which case $f \equiv 0$ by Aronszajn's unique continuation principle \cite{aronszajn1956unique}. In both cases this leads to a contradiction; therefore $f$ changes sign near every such boundary point and the nodal domain has at least two connected components.
The same arguments used in \cite[Theorem 3.1]{ainouz2016stable} to prove that the nodal domain has a third connected component remain valid here. Denote by $\Sigma_1$ and $\Sigma_2$ two of these components and define the function $$\widetilde{f}=\begin{dcases*} f& in $\Sigma_1$ \\ \alpha f& in $\Sigma_2$ \\ 0 & in $\Sigma \backslash (\Sigma_1 \cup \Sigma_2)$\end{dcases*},$$ where $\alpha \in \mathbb{R}$ is such that $\widetilde{f} \in \mathcal{F}$. Since $\partial\Sigma_i \cap \partial\Sigma=\partial\Sigma \cap \Sigma_i$ and $\widetilde{f} \equiv 0$ outside $\Sigma_i$, we obtain \begin{eqnarray*} \int_{\Sigma_1} \left<P_1\nabla\widetilde{f},\nabla\widetilde{f}\right>-2H_1H_2\widetilde{f}^2\,d\mu_\Sigma &=& \int_{\Sigma_1} \left<P_1\nabla\widetilde{f},\nabla f\right>-2H_1H_2\widetilde{f}f\,d\mu_\Sigma \\ &=& -\int_{\Sigma_1} \widetilde{f}\left(L_1f+2H_1H_2f\right)\,d\mu_\Sigma+\int_{\partial\Sigma \cap \Sigma_1} \left\vert{P_1\nu}\right\vert\widetilde{f}\frac{\partial f}{\partial\nu}\,d\mu_{\partial\Sigma} \\ &=& 0 \end{eqnarray*} and, similarly, $\int_{\Sigma_2} \left<P_1\nabla\widetilde{f},\nabla\widetilde{f}\right>-2H_1H_2\widetilde{f}^2\,d\mu_\Sigma=0$. Thus, $$\mathcal{I}_1(\widetilde{f},\widetilde{f})=\sum_{i=1}^2 \int_{\Sigma_i} \left<P_1\nabla\widetilde{f},\nabla\widetilde{f}\right>-2H_1H_2\widetilde{f}^2\,d\mu_\Sigma=0,$$ and since $\Sigma$ is $1$-stable, Lemma \ref{005.3} implies that $\widetilde{f}$ is a Jacobi field on $\Sigma$. However, since $\widetilde{f}$ vanishes on $\Sigma \backslash (\Sigma_1 \cup \Sigma_2)$, it follows from Aronszajn's unique continuation principle that $\widetilde{f}=0$, which is a contradiction. Therefore $f \equiv 0$ and $\varphi(\Sigma)$ is a surface of revolution around the $x_3$-axis. \end{proof} \end{document}
\begin{document} \title{Exponential Separations in Symmetric Neural Networks} \begin{abstract} In this work we demonstrate a novel separation between symmetric neural network architectures. Specifically, we consider the Relational Network~\parencite{santoro2017simple} architecture as a natural generalization of the DeepSets~\parencite{zaheer2017deep} architecture, and study their representational gap. Under the restriction to analytic activation functions, we construct a symmetric function acting on sets of size $N$ with elements in dimension $D$, which can be efficiently approximated by the former architecture, but provably requires width exponential in $N$ and $D$ for the latter. \end{abstract} \section{Introduction} The modern success of deep learning can in part be attributed to architectures that enforce appropriate invariance. Invariance to permutation of the input, i.e. treating the input as an unordered set, is a desirable property when learning \emph{symmetric} functions in such fields as particle physics and population statistics. The simplest architectures that enforce permutation invariance treat each set element individually without allowing for interaction, as captured by the popular \emph{DeepSet} model ~\parencite{zaheer2017deep,qi2017pointnet}. Several architectures explicitly enable interaction between set elements, the simplest being the Relational Networks~\parencite{santoro2017simple} that encode pairwise interaction. This may be understood as an instance of \emph{self-attention}, the mechanism underlying Transformers~\parencite{vaswani2017attention}, which have emerged as powerful generic neural network architectures to process a wide variety of data, from image patches to text to physical data. Specifically, Set Transformers \parencite{lee2019set} are special instantiations of Transformers, made permutation equivariant by omitting positional encoding of inputs, and using self-attention for pooling. 
Both the DeepSets and Relational Networks architectures are universal approximators for the class of symmetric functions. But empirical evidence suggests an inherent advantage of symmetric networks using self-attention in synthetic settings~\parencite{murphy2018janossy}, on point cloud data~\parencite{lee2019set} and in quantum chemistry~\parencite{pfau2020ab}. In this work, we formalize this question in terms of approximation power, and explicitly construct symmetric functions which provably require exponentially many neurons in the DeepSets model, yet are efficiently approximated with self-attention. This exponential separation bears notable differences from typical separation results. In particular, while the expressive power of a vanilla neural network is characterized by depth and width, expressiveness of symmetric networks is controlled particularly by \emph{symmetric width}. In contrast to depth separations of vanilla neural networks~\parencite{eldan2016power}, in this work we observe width separations, where the weaker architectures (even with arbitrary depth) require exponential symmetric width to match the expressive power of stronger architectures. \paragraph{Summary of Contributions} In this work: \begin{itemize} \item We demonstrate a \emph{width separation} between the DeepSets and Relational Network architectures, where the former requires symmetric width $L \gg poly(N, D)$ to approximate a family of analytic symmetric functions, while the latter can approximate with polynomial efficiency. This also answers an open question of high-dimensional DeepSets representation posed in~\textcite{wagstaff2022universal}. \item We introduce an extension of the Hall inner product to high dimensions that preserves low-degree orthogonality of multisymmetric powersum polynomials, which may be of independent interest.
\end{itemize} \section{Setup and Main Result}\label{sec:setup} \subsection{Symmetric Architectures} \begin{figure*} \caption{Architectural diagrams for $\text{Sym}_L$ (DeepSets with symmetric width $L$) and $\widetilde{\text{Sym}}_L$ (Relational Network with symmetric width $L$)} \label{fig:diagram} \end{figure*} To introduce the symmetric architectures, we must first characterize how to treat sets as inputs. We will consider sets of size $N$, where each element of the set is a vector of dimension $D$. In particular, we will represent a set as a matrix $X \in \mathbb{C}^{D \times N}$. Thus, each column vector $x_n \in \mathbb{C}^D$ is an element of the set. Note that we consider complex-valued inputs because the natural inner product over symmetric polynomials integrates over the complex unit circle, see \textcite{macdonald1998symmetric} or Theorem~\ref{thm:hall-inner-product}. A function $f: \mathbb{C}^{D \times N} \rightarrow \mathbb{C}$ is \emph{symmetric} if $f(X) = f(X\Pi)$ for any permutation matrix $\Pi \in \mathbb{R}^{N\times N}$, i.e. if $f$ is invariant to permuting the columns of $X$. In other words, a symmetric function treats the input $X$ as an unordered set of column vectors. Given the \emph{symmetric width} parameter $L$, we consider two primary symmetric architectures: \begin{definition} Let $\text{Sym}_L$ denote the class of \emph{singleton symmetric networks} with symmetric width $L$, i.e. functions $f$ of the form: \begin{align}\label{eq:symnn} f(X) & = \rho(\phi_1(X), \dots, \phi_L(X)) \\ \phi_l(X) & = \sum_{n=1}^N \psi_l(x_n) \end{align} where $\{\psi_l: \mathbb{C}^D \rightarrow \mathbb{C}\}_{l=1}^L$ and $\rho: \mathbb{C}^L \rightarrow \mathbb{C}$ are arbitrary neural networks with analytic activations. \end{definition} The class $\text{Sym}_L$ is exactly the architecture of DeepSets~\parencite{zaheer2017deep} restricted to analytic activations.
However, we introduce this notation to differentiate this class from the more expressive architectures that allow for pairwise interaction among set elements. From the theory of symmetric polynomials, if $L \geq L^* := \binom{N+D}{N} - 1$, then the class $\text{Sym}_L$ can approximate any analytic symmetric function~\parencite{rydh2007minimal}. Therefore we will primarily be interested in the expressive power of $\text{Sym}_L$ for $L < L^*$. \begin{definition} Let $\widetilde{\text{Sym}}_L$ denote the class of \emph{pairwise symmetric networks} with symmetric width $L$, i.e. functions $f$ of the form: \begin{align}\label{eq:psymnn} f(X) & = \rho(\phi_1(X), \dots, \phi_L(X)) \\ \phi_l(X) & = \sum_{n,n'=1}^N\psi_l(x_n, x_{n'}) \end{align} where $\{\psi_l: \mathbb{C}^{D} \times \mathbb{C}^{D} \rightarrow \mathbb{C}\}_{l=1}^L$ and $\rho: \mathbb{C}^L \rightarrow \mathbb{C}$ are arbitrary neural networks with analytic activations. \end{definition} Similarly, the class $\widetilde{\text{Sym}}_L$ is exactly the architecture of Relational Networks~\parencite{santoro2017simple} with analytic activations. We note this architecture is also equivalent to the $2$-ary instantiation of Janossy Pooling~\parencite{murphy2018janossy}. \subsection{Main Result} Our main result demonstrates an exponential separation, where $\text{Sym}_L$ requires exponentially large symmetric width $L$ to match the expressive power of the class $\widetilde{\text{Sym}}_L$ with $L = 1$. We choose norms to make this separation as prominent as possible: there is a hard function that can be approximated in $\widetilde{\text{Sym}}_L$ in the infinity norm, but cannot be approximated in $\text{Sym}_L$ even in an appropriately chosen $L_2$ norm with respect to some non-trivial data distribution.
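To make the two parameterizations concrete, here is a minimal NumPy sketch (our own illustration, not from the paper) of the forward passes \eqref{eq:symnn} and \eqref{eq:psymnn}. The random weights, the real-valued inputs, and the $\tanh$ activation are hypothetical stand-ins for the complex-valued analytic networks considered above; the point is only that both forms are invariant to permuting the columns of $X$, since the inner sums range over all elements (respectively all ordered pairs) of the set.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N, L = 3, 5, 4                       # element dimension, set size, symmetric width
W_psi = rng.normal(size=(L, D))         # per-element feature maps psi_l (hypothetical weights)
W_pair = rng.normal(size=(L, 2 * D))    # pairwise feature maps psi_l(x, x')
w_rho = rng.normal(size=L)              # outer readout rho

def deepsets(X):
    # f(X) = rho(phi_1, ..., phi_L) with phi_l = sum_n psi_l(x_n)
    phi = np.tanh(W_psi @ X).sum(axis=1)            # shape (L,)
    return np.tanh(w_rho @ phi)

def relational(X):
    # phi_l = sum over all ordered pairs (n, n') of psi_l(x_n, x_{n'})
    pairs = np.concatenate(
        [np.repeat(X, N, axis=1), np.tile(X, (1, N))], axis=0)   # shape (2D, N^2)
    phi = np.tanh(W_pair @ pairs).sum(axis=1)
    return np.tanh(w_rho @ phi)

X = rng.normal(size=(D, N))
perm = rng.permutation(N)
# both networks are symmetric: permuting the columns of X leaves the output unchanged
assert np.isclose(deepsets(X), deepsets(X[:, perm]))
assert np.isclose(relational(X), relational(X[:, perm]))
```

The only structural difference is the inner sum: one term per element for $\text{Sym}_L$, versus $N^2$ terms for the pairwise class, which is what lets the latter encode interactions.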
We require one activation assumption to realize the $\widetilde{\text{Sym}}_L$ approximation: \begin{assumption}\label{ass:act} The activation $\sigma : \mathbb{C} \rightarrow \mathbb{C}$ is analytic, and for a fixed $D, N$ there exist two-layer neural networks $f_1, f_2$ using $\sigma$, both with $O\left(D^2 + D \log \frac{D}{\epsilon}\right)$ width and weights bounded by $O(D \log D)$, such that: \begin{align} \sup_{|\xi| \leq 3} |f_1(\xi) - \xi^2| \leq \epsilon, \qquad \sup_{|\xi| \leq 3} \left|f_2(\xi) - \left(1 - (\xi/4)^{\min(D, \sqrt{N}/2)}\right) \frac{\xi - 1/4}{\xi/4 - 1} \right| \leq \epsilon \end{align} \end{assumption} Essentially this assumption guarantees that networks built with the analytic activation $\sigma$ are able to efficiently approximate the map $\xi \mapsto \xi^2$, and a truncated form of the finite Blaschke product~\parencite{garnett2007bounded} with a single zero at $\xi = 1/4$. We show in Lemma~\ref{lem:expnet-epsilon} that the $\exp$ activation satisfies this assumption. \begin{theorem}[Exponential width-separation]\label{thm:main-result-body} Fix $N$ and $D > 1$, and a non-trivial data distribution $\mu$ on $D \times N$ copies of the unit complex circle $({S^1})^{D \times N}$. Then there exists an analytic symmetric function $g: \mathbb{C}^{D \times N} \rightarrow \mathbb{C}$ such that $\|g\|_{L_2(\mu)} = 1$ and: \begin{itemize} \item For $L \leq N^{-2} \exp(O(\min(D, \sqrt{N})))$, \begin{align} \min_{f \in \text{Sym}_L} \|f - g\|_{L_2(\mu)}^2 \geq \frac{1}{12} ~.
\end{align} \item There exists $f \in \widetilde{\text{Sym}}_L$ with $L = 1$, parameterized with an activation $\sigma$ that satisfies Assumption~\ref{ass:act}, with width $poly(N,D,1/\epsilon)$, depth $O(\log D)$, and max weight $O(D \log D)$ such that over $({S^1})^{D\times N}$: \begin{align} \|f - g\|_\infty \leq \epsilon \end{align} \end{itemize} \end{theorem} \begin{remark} The lower bound is completely independent of the width and depth of the parameterized networks $\{\psi_l\}$ and $\rho$. The only parameter that the theorem restricts is the symmetric width $L$. This is in sharp contrast to the separations of vanilla networks~\parencite{eldan2016power}, where there is a natural trade-off between width and depth. \end{remark} \begin{remark} In the upper bound, we consider the network $f \in \widetilde{\text{Sym}}_L$ to have width and depth in the usual sense of vanilla neural networks, where the parameterized maps $\{\psi_l\}$ and $\rho$ obey the width, depth, and weight bounds given. \end{remark} \section{Related Work} \subsection{Depth Separation} Numerous works have studied the difference in expressive power between different neural network architectures. Many of these works center on the representational gap between two-layer and three-layer networks~\parencite{eldan2016power,daniely2017depth}. In particular, recent works have focused on generalizing the family of functions that realize these separations, to various radial functions~\parencite{safran2017depth} and non-radial functions~\parencite{venturi2021depth}. A separate line of work considers separations between networks when the depth varies polynomially~\parencite{telgarsky2016benefits}. Notably, \textcite{vardi2022width} demonstrates that depth has a greater impact on expressivity than width, in the case of vanilla neural networks.
\subsection{Symmetric Architectures} We primarily consider the symmetric neural network parameterization as introduced in DeepSets~\parencite{zaheer2017deep}; PointNet~\parencite{qi2017pointnet} is a similar symmetric parameterization using a different pooling function. Simple linear equivariant layers were also introduced in~\textcite{zaheer2017deep}. In the context of relationships between objects in an image, the first symmetric architecture enabling explicit pairwise interaction was introduced in~\textcite{santoro2017simple}. More complicated symmetric architectures, allowing for higher-order interaction and more substantial equivariant layers, were built on top of attention primitives~\parencite{ma2018attend,lee2019set}. The notion of explicit high-order interactions between set elements before symmetrizing is formalized in the architecture of Janossy pooling~\parencite{murphy2018janossy}. Symmetric architectures are generalized by graph neural networks~\parencite{kipf2016semi,scarselli2008graph}, under the restriction to the complete graph. \subsection{Symmetric Network Expressivity} The dependence of representational power on the symmetric width parameter $L$ was first demonstrated in the $D=1$ case. Under the strong condition $L < N$, it was proven that there are symmetric functions which cannot be exactly represented by a DeepSets network~\parencite{wagstaff2019limitations}, and this was later strengthened to functions which cannot be approximated in the infinity norm to arbitrary precision~\parencite{wagstaff2022universal}. The work introducing Janossy pooling~\parencite{murphy2018janossy} also includes a theoretical result showing singleton networks cannot exactly represent some particular pairwise symmetric network. Crucially however, this result is restricted to a simplified, non-universal symmetric architecture excluding the $\rho$ transformation, and therefore does not characterize the real-world architectures given above.
The question of expressiveness in symmetric networks may also be generalized to graph neural networks, with a focus on distinguishing non-isomorphic graphs as compared to the Weisfeiler-Lehman test~\parencite{xu2018powerful} and calculating invariants such as substructure counting~\parencite{chen2020can}. In particular, one may understand expressiveness in symmetric networks incorporating pairwise interaction as the ability to learn functions of the complete graph decorated with edge features. \subsection{Symmetric Polynomial Theory} Our proofs rely on the technical machinery of symmetric polynomial theory, thoroughly characterized in~\textcite{macdonald1998symmetric}. In particular, we utilize the integral representation of the finite-variable Hall inner product as introduced in Section~\ref{sec:prelim}. Because this integral is defined over the complex unit circle, we consequently consider complex-valued neural networks~\parencite{bassey2021survey}. The connection of symmetric networks to the powersum polynomials was first observed in~\textcite{zaheer2017deep}, and likewise the multisymmetric powersum polynomials have been applied in higher dimensional symmetric problems~\parencite{maron2019provably,segol2019universal}. The algebraic properties of the multisymmetric powersum polynomials are well-studied, for example as a basis of higher dimensional symmetric polynomials~\parencite{rydh2007minimal} and through their algebraic dependencies~\parencite{domokos2007vector}. However, to the best of our knowledge this is the first work to apply the Hall inner product to symmetric neural networks, and to extend this inner product to yield low-degree orthogonality over the multisymmetric polynomials. \section{Warmup: One-dimensional set elements} To begin, we consider the simpler case where $D = 1$, i.e. where we learn a symmetric function acting on a set of scalars.
It was already observed in~\textcite{zaheer2017deep} that the universality of DeepSets could be demonstrated by approximating the network with symmetric polynomials. We first demonstrate that through this approximation, we can relate the symmetric width $L$ to expressive power. \subsection{Symmetric Polynomials} In order to approximate symmetric networks by symmetric polynomials, we choose a suitable basis. The powersum polynomials serve as the natural choice, as their structure matches that of a singleton symmetric network, and they obey very nice orthogonality properties that we detail below. \begin{definition} For $k \in \mathbb{N}$ and $x \in \mathbb{C}^N$, the \emph{normalized powersum polynomial} is defined as $$p_k(x) = \frac{1}{\sqrt{k}} \sum_{n=1}^N x_n^k$$ with $p_0(x) = 1$. \end{definition} A classical result in symmetric polynomial theory is the existence of an $L_2$ inner product that grants orthogonality for products of powersums. To make this notion explicit and keep track of products, we index products with partitions. \begin{definition} An \emph{integer partition} $\lambda$ is a non-increasing, finite sequence of positive integers $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_k$. The weight of the partition is given by $|\lambda| = \sum_{i=1}^k \lambda_i$. The length of a partition $l(\lambda)$ is the number of terms in the sequence. \end{definition} Then we characterize a product of powersums by: \begin{align} p_\lambda(x) = \prod_i p_{\lambda_i}(x) \end{align} This notation intentionally also allows for the empty partition, such that if $\lambda = \varnothing$ then $p_\lambda = 1$.
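As a quick sanity check (our own sketch, not part of the paper), the normalized powersums and their products $p_\lambda$ are easy to evaluate numerically. The assertions below confirm permutation invariance, the product structure $p_{(2,1)} = p_2\, p_1$, and that $p_\lambda$ is homogeneous of degree $|\lambda|$.

```python
import numpy as np

def p(k, x):
    # normalized powersum p_k(x) = (1/sqrt(k)) * sum_n x_n^k, with p_0 = 1
    if k == 0:
        return 1.0
    return (x ** k).sum() / np.sqrt(k)

def p_lam(lam, x):
    # p_lambda = prod_i p_{lambda_i}; the empty partition gives 1
    out = 1.0
    for k in lam:
        out = out * p(k, x)
    return out

rng = np.random.default_rng(1)
# complex points on the unit circle, matching the domain used later for the inner product
x = np.exp(2j * np.pi * rng.random(6))

assert np.isclose(p_lam((2, 1), x), p(2, x) * p(1, x))            # product structure
assert np.isclose(p_lam((3, 1, 1), x),
                  p_lam((3, 1, 1), x[rng.permutation(6)]))        # symmetric in x
t = 1.7
assert np.isclose(p_lam((2, 1), t * x), t ** 3 * p_lam((2, 1), x))  # degree |lambda| = 3
```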
All together, we can now state the following remarkable fact: \begin{theorem}[{\cite[Chapter VI (9.10)]{macdonald1998symmetric}}]\label{thm:hall-inner-product} There exists an $L_2(d\nu)$ inner product (for some probability measure $\nu$) such that, for partitions $\lambda, \mu$ with $|\lambda| \leq N$: \begin{align} \langle p_\lambda, p_\mu \rangle_V = z_\lambda \mathbbm{1}_{\lambda = \mu} \end{align} where $z_\lambda$ is some combinatorial constant. \end{theorem} We index this inner product with $V$ because it is written as an expectation with respect to a density proportional to the squared Vandermonde polynomial (see Section~\ref{sec:prelim} for the precise definition). This inner product may also be considered the finite-variable specialization of the Hall inner product, defined on symmetric polynomials over infinitely many variables~\cite[Chapter I (4.5)]{macdonald1998symmetric}. It's easy to check that the degree of $p_\lambda$ is equal to $|\lambda|$. So this theorem states that the powersum terms $p_\lambda$ are ``almost'' an orthogonal basis, except for correlation between two high-degree terms. Let us remark that we assume analytic activations for the sake of this theorem, as the orthogonality property does not hold for symmetric polynomials with negative exponents. However, in exchange for that assumption we can apply this very powerful inner product, which ultimately results in the irrelevance of network depth. \subsection{Projection Lemma} Before we can proceed to prove a representational lower bound, we need one tool to better understand $f\in\text{Sym}_L$. Utilizing the orthogonality properties of the inner product $\langle \cdot, \cdot \rangle_V$ allows us to project any $f \in \text{Sym}_L$ to a simplified form, while keeping a straightforward dependence on $L$. For example, consider some uniformly convergent power series (with no constant term) $\phi(x) = \sum_{k=1}^\infty c_{k} p_k(x)$.
We claim $\langle p_2 p_1, \phi^3 \rangle_V = 0$. Indeed, expanding $\phi^3$, one exclusively gets terms of the form $p_{k_1} p_{k_2} p_{k_3}$, and because the partition $\{k_1, k_2, k_3\}$ is of a different length than $\{2, 1\}$, they are clearly distinct partitions, so by orthogonality $\langle p_2p_1, p_{k_1} p_{k_2} p_{k_3}\rangle_V = 0$. Motivated by this observation, we can project $f$ to only contain products of two terms. Let us introduce $\mathcal{P}_1$ to be the orthogonal projection onto $\mathrm{span}(\{p_t : 1 \leq t \leq N/2\})$, and $\mathcal{P}_2$ to be the orthogonal projection onto $\mathrm{span}(\{p_tp_{t'} : 1 \leq t,t' \leq N/2\})$. \begin{lemma}\label{lem:proj-one-dim} Given any $f \in \text{Sym}_L$, we may choose coefficients $v_{ij}$ over $i \leq j \leq L$, and symmetric polynomials $\phi_i$ over $i \leq L$, such that: \begin{align} \mathcal{P}_2 f = \sum_{i\leq j}^L v_{ij} (\mathcal{P}_1 \phi_i) (\mathcal{P}_1\phi_j) \end{align} \end{lemma} \subsection{Rank Lemma} Given the reduced form of $f$ above, we may now go about lower bounding its approximation error to a given function $g$. By the properties of orthogonal projection, we have $\|f-g\|_V^2 \geq \|\mathcal{P}_2(f - g)\|_V^2$.
And by Parseval's theorem, the function approximation error $\|\mathcal{P}_2 f-\mathcal{P}_2 g\|_V^2$ equals $$\sum_{t\leq t'} \left(\left\langle \mathcal{P}_2 f, \frac{p_{t}p_{t'}}{\|p_{t}p_{t'}\|_V} \right\rangle_V - \left\langle \mathcal{P}_2 g, \frac{p_{t}p_{t'}}{\|p_{t}p_{t'}\|_V} \right\rangle_V\right)^2~.$$ Rearranging the orthogonal coefficients in the form of matrices, we have the following fact: \begin{lemma}\label{lem:rank-one-dim} Given any $f \in \text{Sym}_L$, and $g$ such that $\mathcal{P}_2 g = g$, we have the bound \begin{align} \|\mathcal{P}_2f - \mathcal{P}_2g\|_V^2 \geq \frac{1}{2} \|F - G\|_F^2 \end{align} where $F, G \in \mathbb{C}^{N/2 \times N/2}$ are matrices with entries $F_{tt'} = \left\langle \mathcal{P}_2f, p_t p_{t'} \right\rangle_V$, $G_{tt'} = \left\langle \mathcal{P}_2g, p_t p_{t'} \right\rangle_V$. Furthermore, $F$ has rank at most $L$. \end{lemma} The significance of this lemma is the rank constraint: it implies that choosing symmetric width $L$ imposes a maximum rank $L$ on the matrix $F$. From here, we can use standard arguments about low-rank approximation in the Frobenius norm to yield a lower bound. \subsection{Separation in the one-dimensional case} Our main goal in this section is to construct a hard symmetric function $g$ that cannot be efficiently approximated by $\text{Sym}_L$ for $L \leq N/4$. It is not particularly expensive for the symmetric width $L$ to scale linearly with the set size $N$; however, we will use the same proof structure to prove Theorem~\ref{thm:main-result-body}, which will require $L$ to scale exponentially. \begin{theorem} For $D = 1$: \begin{align} \max_{\|g\|_V = 1} \min_{f\in\text{Sym}_L} \|f - g\|_V^2 \geq 1 - \frac{2L}{N} \end{align} In particular, for $L = \frac{N}{4}$ we recover a constant lower bound of $\frac{1}{2}$. \end{theorem} \begin{proof}[Proof (sketch).] Choose $g$ such that $\mathcal{P}_2 g = g$.
Then because $\mathcal{P}_2$ is an orthogonal projection and applying Lemma~\ref{lem:rank-one-dim}: \begin{align} \min_{f\in\text{Sym}_L} \|f - g\|_V^2 & \geq \min_{f\in\text{Sym}_L} \|\mathcal{P}_2 f - \mathcal{P}_2 g \|_V^2 \\ & \geq \frac{1}{2} \min_{\text{rank}(F) \leq L} \|F - G\|_F^2 \end{align} We note that $\|p_tp_t\|_V^2 = z_{\{t,t\}} = 2$, so the choice of $g = \frac{1}{\sqrt{N}} \sum_{t=1}^{N/2} p_t p_t$ can be seen to obey $\|g\|_V = 1$, and implies that $G$ is the scaled identity matrix $\frac{2}{\sqrt{N}} I \in \mathbb{C}^{N/2 \times N/2}$. Then by standard properties of the SVD: \begin{align} \min_{f\in\text{Sym}_L} \|f - g\|_V^2 & \geq \frac{1}{2} \min_{\text{rank}(F) \leq L} \|F - \frac{2}{\sqrt{N}}I\|_F^2 \\ & = \frac{1}{N/2} \min_{\text{rank}(F) \leq L} \|F - I\|_F^2 \\ & = \frac{1}{N/2} (N/2 - L) \\ & = 1 - \frac{2L}{N} \end{align} \end{proof} \section{Proof Sketch of Main Result}\label{sec:sketch} \subsection{Challenges for High-dimensional Set Elements} We'd like to strengthen this separation in several ways: \begin{itemize} \item Generalize to the $D > 1$ case, \item Realize a separation where the symmetric width $L$ must scale exponentially in $N$ and $D$, showing that $\text{Sym}_L$ is infeasible, \item Show the hard function $g$ can nevertheless be efficiently approximated in $\text{Sym}_Lt$ for $L$ polynomial in $N$ and $D$. \end{itemize} First, in order to approximate via polynomials in the high-dimensional case, we will require the high-dimensional analogue of powersum polynomials: \begin{definition} For a multi-index $\alpha \in \mathbb{N}^{D}$, the \emph{normalized multisymmetric powersum polynomial} is defined as: \begin{align} \bold{p}_{\alpha}(X) & = \frac{1}{\sqrt{|\alpha|}} \sum_n \prod_d x_{dn}^{\alpha_d}~.
\end{align} \end{definition} So the plan is to find a high-dimensional analogue of Lemma~\ref{lem:proj-one-dim} and Lemma~\ref{lem:rank-one-dim}, now using multisymmetric powersum polynomials, mimic the proof of the $D = 1$ case, and then additionally show the hard function $g$ is efficiently computable in the pairwise symmetric architecture. Note that because the algebraic basis of multisymmetric powersum polynomials is of size $L^* = \binom{N + D}{N} - 1$, we can expect an exponential separation when we apply a similar rank argument.\footnote{We subtract one in order to discount the constant polynomial.} \subsection{Sketch of Main Result (lower bound)}\label{sec:lower-bound} Because we are in high dimensions, we cannot simply apply the restricted Hall inner product introduced in Theorem~\ref{thm:hall-inner-product}. To the best of our knowledge, there is no standard generalization of the Hall inner product to multisymmetric polynomials that preserves the orthogonality property. For the main technical ingredient in the high-dimensional case we introduce a novel generalization, which builds on two inner products. First, we introduce a new input distribution $\nu$ over set inputs $X \in \mathbb{C}^{D \times N}$, and induce an $L_2$ inner product: \begin{align} \langle f, g \rangle_\mathcal{A} = \mathbb{E}_{X\sim \nu}\left[f(X) \overline{g(X)}\right]~. \end{align} We use this inner product to measure the approximation error of $\text{Sym}_L$. That is, we seek a lower bound to $\min_{f \in \text{Sym}_L} \|f - g\|_\mathcal{A}$, for a suitable choice of hard function $g$. We can now apply an analogue of Lemma~\ref{lem:proj-one-dim} to project $f$ to a simplified form. But we cannot immediately apply an analogue of Lemma~\ref{lem:rank-one-dim}, as it relied on Parseval's theorem and the low-degree multisymmetric powersum polynomials are not orthogonal in this inner product.
Put another way, if we represent $\langle \cdot, \cdot \rangle_\mathcal{A}$ as a matrix in the basis of low-degree multisymmetric powersums, it will be positive-definite but include some off-diagonal terms. The idea is to now introduce a new inner product with a different input distribution $\nu_0$ \begin{align} \langle f, g \rangle_{\mathcal{A}_0} = \mathbb{E}_{X\sim \nu_0}\left[f(X) \overline{g(X)}\right]~, \end{align} and define the bilinear form \begin{align} \langle f, g \rangle_{*} = \langle f, g \rangle_{\mathcal{A}} - 2 \langle f, g \rangle_{\mathcal{A}_0} ~. \end{align} Typically positive-definiteness is lost when subtracting two inner products, but we prove that $\langle \cdot, \cdot \rangle_*$ is an inner product when restricted to a particular subspace of symmetric polynomials (see Theorem~\ref{thm:bilinear-form}). Furthermore, the careful choice of $\nu$ and $\nu_0$ cancels the off-diagonal correlation of different multisymmetric powersums, so they are orthogonal under this new inner product $\langle \cdot, \cdot \rangle_*$. By the norm domination $\|\cdot\|_\mathcal{A} \geq \|\cdot\|_*$, we are able to pass from the former $L_2$ norm to the latter norm that obeys orthogonality, and apply an analogue of the Rank Lemma~\ref{lem:rank-one-dim}. Thus we derive a lower bound using any hard function $g$ whose corresponding matrix $G$ (built from orthogonal coefficients) is diagonal and high-rank. And because the total number of polynomials is $L^*$, the rank argument now yields an exponential separation. Based on this proof, we have much freedom in our choice of $g$. By choosing its coefficients in the basis of multisymmetric powersum polynomials, it's easy to enforce the conditions that $G$ is diagonal and high-rank for a variety of possible functions. However, ensuring that $g$ is not pathological (i.e.\ that it is bounded and Lipschitz), and can be efficiently approximated in $\text{Sym}_Lt$, requires a more careful choice.
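As an aside, the rank argument invoked above (and in the warmup) ultimately bottoms out in best low-rank approximation of a (scaled) identity in Frobenius norm. The Eckart--Young fact $\min_{\mathrm{rank}(F) \leq L} \|F - I\|_F^2 = T - L$ can be checked numerically; the sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 12, 4  # ambient dimension and rank budget (illustrative)

# Eckart-Young: the best rank-L approximation in Frobenius norm keeps the
# top L singular directions; for the identity this leaves error T - L.
I = np.eye(T)
U, s, Vh = np.linalg.svd(I)
F = (U[:, :L] * s[:L]) @ Vh[:L, :]
err = np.linalg.norm(F - I, "fro") ** 2
print(err)  # T - L = 8

# A random rank-L matrix does no better:
A = rng.normal(size=(T, L)) @ rng.normal(size=(L, T))
print(np.linalg.norm(A - I, "fro") ** 2 >= err)  # True
```

When $L^*$ (and hence $T$) is exponential in $D$ while $L$ stays polynomial, this residual $T - L$ stays bounded away from zero, which is the source of the separation.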
\subsection{Sketch of Main Result (upper bound)}\label{sec:upper-bound-sketch} It remains to approximate the hard function $g$ with a network from $\text{Sym}_Lt$. First we must make a particular choice of $g$. Based on the lower bound proof, the key desideratum for $g$ is that it is supported exclusively on terms of the form $\bold{p}_{\alpha} \bold{p}_{\alpha}$ over many values of $\alpha$, as this induces a diagonal and high-rank matrix $G$ in an analogue of Lemma~\ref{lem:rank-one-dim}. Furthermore, by simple algebra one can confirm that $\bold{p}_{\alpha}(X)\bold{p}_{\alpha}(X) = \frac{1}{|\alpha|} \sum_{n,n'} \prod_{d=1}^D (x_{dn} x_{dn'})^{\alpha_d}$, so $g$ supported on these polynomials can clearly be written in the form of a network in $\text{Sym}_Lt$. This structure of $g$ guarantees difficult approximation, and is akin to the radial structure of the hard functions introduced in works on depth separation~\parencite{eldan2016power}. We must however be careful in our choice of $g$: for the matrix $G$ to be high-rank, $g$ must be supported on exponentially many powersum polynomials. But this could make $\|g\|_\infty$ exponentially large, and therefore challenging to approximate efficiently with a network from $\text{Sym}_Lt$. We handle this difficulty by defining $g$ in a different way. We introduce a finite Blaschke product $\mu(\xi) = \frac{\xi - 1/4}{\xi/4 - 1}$, a function that analytically maps the unit complex circle to itself. Then the choice \begin{align} g(X) & = \sum_{n,n'=1}^N \prod_{d=1}^D \mu(x_{dn} x_{dn'}) \end{align} ensures that $\|g\|_\infty$, $\|g\|_\mathcal{A}$, and $\mathrm{Lip}(g)$ are all polynomial in $N,D,\frac{1}{\epsilon}$ for $\epsilon$ approximation error (see Lemma~\ref{lem:g-prop}). Furthermore, again from simple algebra it is clear that $g$ is only supported on terms of the form $\bold{p}_{\alpha} \bold{p}_{\alpha}$.
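As a quick sanity check (not part of the proof), the unimodularity of $\mu$ on the unit circle is easy to verify numerically: for $|\xi| = 1$ one has $|\xi/4 - 1| = |\xi - 1/4|$, so the ratio has modulus one.

```python
import numpy as np

def mu(xi):
    # The finite Blaschke product from the text: analytic on the closed
    # unit disk (its pole at xi = 4 lies outside) and unimodular on |xi| = 1.
    return (xi - 0.25) / (xi / 4 - 1)

theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
xi = np.exp(1j * theta)
print(np.allclose(np.abs(mu(xi)), 1.0))  # True
```

This is the property that keeps $\|g\|_\infty$ under control: each factor $\mu(x_{dn} x_{dn'})$ has modulus one on the torus, so the product over $d$ never blows up.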
So it remains to show that the induced diagonal matrix $G$ is effectively high-rank, which follows from expanding the Blaschke products. Satisfied that this choice of $g$ will meet the desiderata for the lower bound, and has no pathological behavior, it remains to construct $f\in\text{Sym}_Lt$ for $L=1$ that approximates $g$. That is, choose $\psi_1$ and $\rho$ so that $g(X) \approx \rho\left(\sum_{n,n'=1}^N \psi_1(x_n, x_{n'})\right)$. Clearly we may take $\rho$ to be the identity, and $\psi_1(x_n, x_{n'})$ to approximate $\prod_{d=1}^D \mu(x_{dn} x_{dn'})$, which is straightforwardly calculated in depth $O(\log D)$ by performing successive multiplications in a binary-tree-like structure (see Theorem~\ref{thm:upper-bound-explicit}). Ultimately, we use a slight variant of this function for the formal proof. Because the orthogonality of our newly introduced inner product $\langle \cdot, \cdot \rangle_*$ only holds for low-degree polynomials, we must truncate high-degree terms of $g$; we confirm in Appendix~\ref{sec:upper-bound} that this truncation nevertheless preserves the properties we care about. \section{Discussion} In this work, we've demonstrated how symmetric width captures more of the expressive power of symmetric networks than depth when restricted to analytic activations, by evincing an exponential separation between two of the most common architectures that enforce permutation invariance. The most unusual property of this result is the complete independence of depth, owing to the unique orthogonality properties of the restricted Hall inner product when paired with the assumption of analyticity. This stands in contrast to the case of vanilla neural networks, for which separations beyond small depth would resolve open questions in circuit complexity suspected to be quite hard~\parencite{vardi2021size}.
Furthermore, the greater dependence on width than depth is a unique property of symmetric networks, whereas the opposite is true for vanilla networks~\parencite{vardi2022width}. A natural extension would be to consider the simple equivariant layers introduced in~\textcite{zaheer2017deep}, which we suspect will not substantially improve the approximation power of $\text{Sym}_L$. Furthermore, allowing for multiple such equivariant layers, this network becomes exactly akin to a Graph Convolutional Network~\parencite{kipf2016semi} on a complete graph, whereas $\text{Sym}_Lt$ corresponds to a message passing network~\parencite{gilmer2017neural} as it is capable of interpreting edge features. \subsection{Limitations}\label{sec:limitations} The major limitation of this result is the restriction to analytic functions. Although analytic symmetric functions nevertheless appear crucially in the study of exactly solvable quantum systems~\parencite{langmann2005method, beau2021parent}, this assumption may be overly strict for general problems of learning symmetric functions. We nevertheless conjecture that these bounds will still hold even allowing for non-analytic activations, and consider this an exciting question for future work. Additionally, whether the hard function $g$ can be efficiently learned with gradient descent remains unclear, and future work could address its learnability. \paragraph{Acknowledgements:} This work has been partially supported by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF-1845360, and NSF CCF-1814524. \printbibliography \appendix \appendixpage \startcontents[sections] \printcontents[sections]{l}{1}{\setcounter{tocdepth}{2}} \section{Preliminaries}\label{sec:prelim} \subsection{Notation} We'll use $\mathbb{N}$ to denote the naturals including $0$. The indicator function for the condition $x = y$ is written as $\mathbbm{1}_{x = y}$.
Given an integer \emph{weak composition} $\alpha \in \mathbb{N}^D$, we will often consider the multidimensional polynomial $z^\alpha = \prod_{d=1}^D z_{d}^{\alpha_d}$. For two vectors $x, x' \in \mathbb{C}^D$, we denote their elementwise product by $x \circ x'$. \subsection{Inner Products} We introduce the two $L_2$ inner products (defined with respect to probability measures) that we'll use throughout the work. For symmetric functions $f, g: \mathbb{C}^{N} \rightarrow \mathbb{C}$, define: \begin{align} \langle f, g \rangle_V = \frac{1}{(2\pi)^NN!} \int_{[0,2\pi]^N}f(e^{i\boldsymbol{\theta}}) \conj{g(e^{i\boldsymbol{\theta}})} |V(e^{i\boldsymbol{\theta}})|^2 d\boldsymbol{\theta}~, \end{align} where for $z \in \mathbb{C}^N$, we have the Vandermonde determinant \begin{align} V(z) = \prod_{1 \leq i < j \leq N} (z_j - z_i)~. \end{align} This inner product is well-known in the theory of symmetric polynomials, as a finite-variable analogue of the Hall inner product~\parencite{macdonald1998symmetric}. Equivalently, if we let $V$ denote the joint density of eigenvalues of a Haar-distributed unitary matrix in $\mathbb{C}^{N \times N}$, it is known~\parencite{diaconis1994eigenvalues} that this inner product may be written as \begin{align} \langle f, g \rangle_V = \mathbb{E}_{y \sim V}\left[f(y) \overline{g(y)} \right]~. \end{align} For arbitrary functions $f, g: \mathbb{C}^D \rightarrow \mathbb{C}$, we also consider the $L_2$ inner product given as an expectation over $D$ random variables \begin{align} \langle f, g \rangle_{S^1} &= \frac{1}{(2\pi)^D} \int_{[0,2\pi]^D} f(e^{i\boldsymbol{\theta}}) \conj{g(e^{i\boldsymbol{\theta}})} d\boldsymbol{\theta} \\ & = \mathbb{E}_{q \sim ({S^1})^D}\left[f(q)\overline{g(q)} \right]~, \end{align} with the notation $q \sim ({S^1})^D$ meaning each entry of $q$ is i.i.d.\ uniform on ${S^1}$. For this inner product, we will introduce the following notation.
For a multi-index $\alpha \in \mathbb{N}^D$ and a dummy variable $q$ of dimension $D$, we let $q^\alpha$ denote the polynomial function $z \mapsto z^\alpha$. Then it's clear that \begin{align} \langle q^\alpha, q^\beta \rangle_{S^1} = \mathbbm{1}_{\alpha = \beta}~. \end{align} Note that we will consider this inner product over varying dimensions throughout the paper, but the dimension, i.e.\ how many i.i.d.\ random variables uniform on $S^1$ we sample over, will be clear from context. \subsection{Symmetric Polynomials} We recall the notation from the main body: $p_0(x) = 1$, and for $k \in \mathbb{N} \setminus\{0\}$ and any partition $\lambda$: \begin{align} p_k(x) & = \frac{1}{\sqrt{k}} \sum_{n=1}^N x_n^k \\ p_\lambda(x) &= \prod_i p_{\lambda_i}(x)~. \end{align} We will also sometimes use set notation to index products of powersums. For example, $p_{\{2,1\}} = p_2 p_1 = p_1 p_2$. Finally, we need the notation that if $n_t$ denotes the number of times $t$ appears in $\lambda$, then $z_\lambda = \prod_t n_t!$. Note that this definition of $z_\lambda$ is slightly different from that in most texts, as we are considering the normalized powersums. Then we can state Theorem~\ref{thm:hall-inner-product} explicitly: \begin{theorem}[{\cite[Chapter VI (9.10)]{macdonald1998symmetric}}]\label{thm:hall-inner-product-explicit} For partitions $\lambda, \mu$ with $|\lambda| \leq N$: \begin{align} \langle p_\lambda, p_\mu \rangle_V = z_\lambda \mathbbm{1}_{\lambda = \mu}~. \end{align} \end{theorem} \subsection{Multisymmetric Polynomials} When $D > 1$, in order to approximate our network with polynomials, we introduce the multivariate analog of symmetric polynomials.
For example, suppose $D = 2$, and we write our set elements the following way: $$ X = \left\{\begin{bmatrix} y_1 \\ z_1 \end{bmatrix}, \begin{bmatrix} y_2 \\ z_2 \end{bmatrix}, \dots \begin{bmatrix} y_N \\ z_N \end{bmatrix}\right\} $$ Then a basis of symmetric functions is given by the multisymmetric power sum polynomials, for example: \begin{align} \bold{p}_{(2,3)}(X) & = \frac{1}{\sqrt{2 + 3}}\sum_n y_n^2 z_n^3 \\ \bold{p}_{(4,1)}(X) & = \frac{1}{\sqrt{4 + 1}}\sum_n y_n^4 z_n^1 ~. \end{align} For general $N$ and $D$, our input is $X \in \mathbb{C}^{D \times N}$, where we want functions that are invariant to permuting the columns $x_n$ of this matrix. Note that we write scalar entries of this matrix as $x_{dn}$. \begin{definition} For a multi-index $\alpha \in \mathbb{N}^{D}$, the \emph{normalized multisymmetric powersum polynomial} is defined as: \begin{align} \bold{p}_{\alpha}(X) & = \frac{1}{\sqrt{|\alpha|}} \sum_n x_n^\alpha \\ &= \frac{1}{\sqrt{|\alpha|}} \sum_n \prod_d x_{dn}^{\alpha_d} \end{align} with $\bold{p}_{0} = 1$. \end{definition} An algebraic basis of symmetric functions in this setting is given by all $\bold{p}_{\alpha}$ with $|\alpha| \leq N$, where $|\alpha| = \sum_{d} \alpha_d$ (for a proof see~\textcite{rydh2007minimal}). We recall the notation from the introduction, where $L^*(N, D) = |\{\alpha \in \mathbb{N}^D: |\alpha| \leq N\}| = \binom{N + D}{N} - 1$ is the size of this algebraic basis (discounting the constant polynomial). Intuitively, then, it's clear why $L \geq L^*$ will make $\text{Sym}_L$ a universal approximator, as each of the $L$ symmetric features $\{\phi_l\}_{l=1}^L$ will calculate one of these basis elements. \section{One-Dimensional Set Elements} We will first consider the setting where $D = 1$, i.e.\ each set element is a scalar. In this setting, we will amend notation slightly so that we consider symmetric functions $f$ acting on $x \in \mathbb{C}^N$, where each $x_n$ is a scalar set element.
\subsection{Projection Lemma} Recall that $\mathcal{P}_1$ is the orthogonal projection onto $\mathrm{span}(\{p_t : 1 \leq t \leq N/2\})$, and $\mathcal{P}_2$ is the orthogonal projection onto $\mathrm{span}(\{p_tp_{t'} : 1 \leq t,t' \leq N/2\})$. \begin{lemma}\label{lem:proj} Given any $f \in \text{Sym}_L$, we may choose coefficients $v_{ij}$ over $i \leq j \leq L$, and symmetric polynomials $\phi_i$ over $i \leq L$, such that: \begin{align} \mathcal{P}_2 f = \sum_{i\leq j}^L v_{ij} (\mathcal{P}_1 \phi_i) (\mathcal{P}_1\phi_j)~. \end{align} \end{lemma} \begin{proof} Consider the general parameterization of $f$ given in Equation~\ref{eq:symnn}. Because all network activations are analytic, we can write all maps parameterizing $f$ by power series. Note that the inner product $\langle \cdot, \cdot \rangle_V$ integrates over a compact domain, therefore the projection $\mathcal{P}_2 f$ will be determined by the value of $f$ restricted to that domain. Thus, all power series in the sequel will converge uniformly, and we may freely interchange infinite sums with each other as well as with inner products. Explicitly, to parameterize $f$ we write $\psi_l(x_n) = c_{l0} + \sum_{k=1}^\infty \frac{c_{lk}}{\sqrt{k}} x_n^k$ so that $\phi_l(x) = \sum_{n=1}^N \psi_l(x_n) = Nc_{l0} + \sum_{k=1}^\infty c_{lk} p_k(x)$. Because $\rho$ is also given as a power series, it can be equivalently written as a power series with all variables having constant offsets. So we can subtract the constant terms from every $\phi_l$ and write: \begin{align} \rho(y) &= \sum_{\eta \in \mathbb{N}^L} v_\eta y^\eta ~,\\ \phi_l & = \sum_{k=1}^\infty c_{lk} p_k~, \end{align} where $y^\eta = \prod_{l=1}^L y_l^{\eta_l}$. Hence \begin{align} f & = \rho(\phi_1, \dots, \phi_L) = \sum_{\eta} v_\eta \phi^\eta ~. \end{align} We proceed to calculate $\mathcal{P}_2 f$. To begin, consider $\langle p_t p_{t'}, \phi^\eta \rangle$ for any choice of indices $1 \leq t, t' \leq N/2$.
To illustrate, suppose $\eta_i = \eta_j = \eta_k = 1$ and $\eta$ is $0$ everywhere else. Then we may write \begin{align} \langle p_t p_{t'}, \phi^\eta \rangle_V = \langle p_t p_{t'}, \phi_i \phi_j \phi_k \rangle_V & = \sum_{i'=1}^\infty \sum_{j'=1}^\infty \sum_{k'=1}^\infty c_{ii'} c_{jj'} c_{kk'} \langle p_t p_{t'}, p_{i'} p_{j'} p_{k'} \rangle_V = 0~. \end{align} In other words, after distributing the product $\phi_i \phi_j \phi_k$, we are left with a sum of terms of the form $p_{i'} p_{j'} p_{k'}$. So treated as partitions, we clearly have $\{i', j', k'\} \neq \{t, t'\}$, where all these indices are positive. Thus, because $t + t' \leq N$, we can apply the orthogonality property of the inner product to conclude $\langle p_t p_{t'}, p_{i'} p_{j'} p_{k'} \rangle_V = 0$. By similar logic, $\langle p_t p_{t'}, \phi^\eta \rangle = 0$ whenever $|\eta| \neq 2$, so we may cancel all such terms in the expansion of $f$ to get \begin{align*} \mathcal{P}_2 f = \mathcal{P}_2 \left(\sum_{\eta \in \mathbb{N}^L} v_\eta \phi^\eta\right) = \sum_{|\eta| = 2} v_\eta \mathcal{P}_2 \phi^\eta~. \end{align*} Here we can simplify notation. Let $\{e_i\}_{i=1}^L$ denote the standard basis vectors in dimension $L$. Every $\eta \in \mathbb{N}^L$ with $|\eta| = 2$ can be written as $\eta = e_i + e_j$, so let $v_{ij} := v_{e_i + e_j}$. Then we can rewrite: $$ \mathcal{P}_2 f = \sum_{i \leq j}^L v_{ij} \mathcal{P}_2 \phi_i \phi_j~. $$ Finally, note again by orthogonality we have that $\mathcal{P}_2 (p_{i'} p_{j'}) = 0$ if it is not the case that $1 \leq i', j' \leq N/2$.
So observe that we may pass from $\mathcal{P}_2$ to $\mathcal{P}_1$: \begin{align} \mathcal{P}_2 \phi_i \phi_j &= \mathcal{P}_2 \left(\sum_{i'=1}^\infty c_{ii'} p_{i'} \right) \left(\sum_{j'=1}^\infty c_{jj'} p_{j'} \right) \\ & = \mathcal{P}_2 \sum_{i'=1}^\infty \sum_{j'=1}^\infty c_{ii'} c_{jj'} p_{i'} p_{j'} \\ & = \sum_{i'=1}^{N/2} \sum_{j'=1}^{N/2} c_{ii'} c_{jj'} p_{i'} p_{j'} \\ & = \left(\sum_{i'=1}^{N/2} c_{ii'} p_{i'} \right) \left(\sum_{j'=1}^{N/2} c_{jj'} p_{j'} \right) \\ & = (\mathcal{P}_1 \phi_i) (\mathcal{P}_1 \phi_j)~. \end{align} So ultimately we get \begin{align} \mathcal{P}_2 f & = \sum_{i\leq j}^L v_{ij} (\mathcal{P}_1 \phi_i) (\mathcal{P}_1 \phi_j)~. \end{align} \end{proof} \subsection{Rank Lemma} The following lemma is a generalization of the Rank Lemma~\ref{lem:rank-one-dim}, which we will use for both the one- and high-dimensional cases. Ultimately, for an inner product $\langle \cdot, \cdot \rangle$ with certain orthogonality properties, it allows us to pass from the function error $\|f-g\|^2$ to the Frobenius norm error $\|F - G\|_F^2$ for some induced matrices $F, G$. \begin{lemma}\label{lem:rank} Consider a commutative algebra equipped with an inner product, and a set of elements $\{p_t\}_{t=1}^T$. Suppose the terms $p_{\{t,t'\}} = p_t p_{t'}$, indexed by sets $\{t,t'\}$, are pairwise orthogonal, and normalized such that \begin{align*} \|p_{t} p_{t'}\|^2 \geq \begin{cases} 1 & t \neq t' \\ 2 & t = t' \end{cases} \end{align*} Consider the terms: \begin{align*} \phi_l & = \sum_{t=1}^T c_{lt} p_t ~,\\ f & = \sum_{l \leq l'}^L \frac{v_{ll'}}{1 + \mathbbm{1}_{l=l'}} \phi_l \phi_{l'} ~,\\ g & = \sum_{t \leq t'}^T \frac{g_{tt'}}{1 + \mathbbm{1}_{t=t'}} p_t p_{t'}~. \end{align*} Then we have the bound \begin{align} \|f - g\|^2 \geq \frac{1}{2} \|C^T V C - G\|_F^2~, \end{align} where $C_{lt} = c_{lt}, V_{ll'} = v_{ll'}, G_{tt'} = g_{tt'}$, and where we define $V$ and $G$ to be symmetric.
\end{lemma} \begin{proof} To begin, we calculate inner products for $t \neq t'$: \begin{align} \left\langle f, \frac{p_{\{t,t'\}}}{\|p_{\{t,t'\}}\|} \right\rangle & = \frac{1}{\|p_{\{t,t'\}}\|}\left\langle \sum_{l \leq l'}^L \sum_{s,s'=1}^T \frac{v_{ll'}}{1 + \mathbbm{1}_{l=l'}} c_{ls} c_{l's'} p_s p_{s'}, p_t p_{t'} \right\rangle \\ & = \|p_{t} p_{t'}\| \sum_{l \leq l'}^L \frac{v_{ll'}}{1 + \mathbbm{1}_{l=l'}} (c_{lt} c_{l't'} + c_{lt'} c_{l't})\\ & = \|p_{t} p_{t'}\| \left( \sum_{l = l'}^L \frac{v_{ll}}{2} (c_{lt} c_{lt'} + c_{lt'} c_{lt}) + \sum_{l < l'}^L v_{ll'} (c_{lt} c_{l't'} + c_{lt'} c_{l't}) \right)\\ & = \|p_{t} p_{t'}\| \left( \sum_{l = l'}^L v_{ll} c_{lt}c_{lt'} + \sum_{l < l'}^L v_{ll'} (c_{lt} c_{l't'} + c_{lt'} c_{l't}) \right)~. \end{align} Defining $v_{ll'} = v_{l'l}$, we may reindex and write the second sum as: \begin{align} \sum_{l < l'}^L v_{ll'} (c_{lt} c_{l't'} + c_{lt'} c_{l't}) & = \sum_{l < l'}^L v_{ll'}c_{lt} c_{l't'} + \sum_{l < l'}^L v_{ll'} c_{lt'} c_{l't} \\ & = \sum_{l < l'}^L v_{ll'}c_{lt} c_{l't'} + \sum_{l > l'}^L v_{ll'} c_{lt} c_{l't'}~. \end{align} So putting this together we get \begin{align*} \left\langle f, \frac{p_{\{t,t'\}}}{\|p_{\{t,t'\}}\|} \right\rangle = \|p_{t} p_{t'}\| \left( \sum_{l, l'}^L v_{ll'} c_{lt}c_{l't'} \right) = \|p_{t} p_{t'}\| [C^TVC]_{t,t'}~. \end{align*} By a similar calculation we conclude: \begin{align*} \left\langle f, \frac{p_{\{t,t\}}}{\|p_{\{t,t\}}\|} \right\rangle = \frac{\|p_{t} p_{t}\|}{2} [C^TVC]_{t,t}~. \end{align*} For $g$, we can directly calculate: \begin{align} \left\langle g, \frac{p_{\{t,t'\}}}{\|p_{\{t,t'\}}\|} \right\rangle & = \|p_{t} p_{t'}\|[G]_{t,t'} \\ \left\langle g, \frac{p_{\{t,t\}}}{\|p_{\{t,t\}}\|} \right\rangle & = \frac{\|p_{t} p_{t}\|}{2} [G]_{t,t}~.
\end{align} Finally, by Parseval's Theorem we calculate: \begin{align} \|f - g\|^2 & = \sum_{t} \left( \left\langle f, \frac{p_{\{t,t\}}}{\|p_{\{t,t\}}\|} \right\rangle - \left\langle g, \frac{p_{\{t,t\}}}{\|p_{\{t,t\}}\|} \right\rangle \right)^2 + \sum_{t < t'}^T \left( \left\langle f, \frac{p_{\{t,t'\}}}{\|p_{\{t,t'\}}\|} \right\rangle - \left\langle g, \frac{p_{\{t,t'\}}}{\|p_{\{t,t'\}}\|} \right\rangle \right)^2 \\ & = \sum_{t} \left( \left\langle f, \frac{p_{\{t,t\}}}{\|p_{\{t,t\}}\|} \right\rangle - \left\langle g, \frac{p_{\{t,t\}}}{\|p_{\{t,t\}}\|} \right\rangle \right)^2 + \frac{1}{2} \sum_{t \neq t'}^T \left( \left\langle f, \frac{p_{\{t,t'\}}}{\|p_{\{t,t'\}}\|} \right\rangle - \left\langle g, \frac{p_{\{t,t'\}}}{\|p_{\{t,t'\}}\|} \right\rangle \right)^2 \\ & = \sum_{t}^T \frac{\|p_{\{t,t\}}\|^2}{4} [C^TVC - G]_{t,t}^2 + \frac{1}{2} \sum_{t \neq t'}^T \|p_{\{t,t'\}}\|^2 \cdot [C^TVC - G]_{t,t'}^2 \\ & \geq \frac{1}{2} \sum_{t}^T [C^TVC - G]_{t,t}^2 + \frac{1}{2} \sum_{t \neq t'}^T [C^TVC - G]_{t,t'}^2~, \end{align} where in the last line we use our assumption on the lower bound of $\|p_{\{t,t'\}}\|^2$ and $\|p_{\{t,t\}}\|^2$. Hence: \begin{align} \|f - g\|^2 \geq \frac{1}{2} \|C^TVC - G\|_F^2~. \end{align} \end{proof} \subsection{Proof of the One-dimensional Lower Bound} \begin{theorem} Let $D = 1$. Then using the Vandermonde $L_2$ inner product over symmetric polynomials \begin{align} \max_{\|g\|_V = 1} \min_{f\in\text{Sym}_L} \|f - g\|_V^2 \geq 1 - \frac{2L}{N}~. \end{align} In particular, for $L = \frac{N}{4}$ we recover a constant lower bound. \end{theorem} \begin{proof} We first build our counterexample $g$ by choosing its coefficients in the powersum basis, say: \begin{align} g & = \frac{1}{\sqrt{N}} \sum_{t=1}^{N/2} p_t p_t~. \end{align} From orthogonality and the fact that $\|p_tp_t\|_V^2 = 2$ it's clear that $\|g\|_V = 1$, and note that $\mathcal{P}_2 g = g$.
Applying Lemma~\ref{lem:proj}, for any $f \in \text{Sym}_L$ we can write $\mathcal{P}_2 f$ in the form \begin{align} \mathcal{P}_2 f & = \sum_{i\leq j}^L v_{ij} (\mathcal{P}_1 \phi_i) (\mathcal{P}_1 \phi_j)~. \end{align} One may also confirm that the Vandermonde inner product satisfies the requirements of Lemma~\ref{lem:rank} when restricted to the range of $\mathcal{P}_2$, owing to the orthogonality property and the fact that for $1 \leq t, t' \leq N/2$: \begin{align*} \langle p_{t} p_{t'}, p_{t} p_{t'} \rangle_V = \begin{cases} 1 & t \neq t' \\ 2 & t = t' \end{cases} \end{align*} So we've met all the necessary requirements to apply Lemma~\ref{lem:rank} to $\mathcal{P}_2 f$ and $\mathcal{P}_2 g$, thus we have: \begin{align} \min_{f\in\text{Sym}_L} \|f - g\|_V^2 & \geq \min_{f\in\text{Sym}_L} \|\mathcal{P}_2 f - \mathcal{P}_2 g \|_V^2 \\ & \geq \min_{C,V} \frac{1}{2} \|C^TVC - \frac{2}{\sqrt{N}} I\|_F^2 \\ & = \min_{C,V} \frac{1}{N/2} \|C^TVC - I\|_F^2~, \end{align} where the factor of $2$ appears based on the definition of the matrix $G$ in Lemma~\ref{lem:rank}. Note that $C^TVC \in \mathbb{C}^{N/2 \times N/2}$, but $V \in \mathbb{C}^{L \times L}$. So if $N/2 > L$, then $C^TVC$ is a rank-deficient approximation of the identity, and clearly we have \begin{align} \min_{f\in\text{Sym}_L} \|f - g\|_V^2 & \geq \frac{N/2 - L}{N/2} = 1 - \frac{2L}{N}~. \end{align} \end{proof} \section{Exact Statement of Main Result} \subsection{Theorem Statement} We begin by restating the main result, where for convenience we will change from $N$ set elements to $2N$. We introduce the notation $\hat{D} := \min\left(D, \lfloor\sqrt{N/2}\rfloor\right)$.
We also introduce the $L_2$ inner product \begin{align}\label{eq:inner_prod_a} \langle f, g \rangle_\mathcal{A} = \mathbb{E}_{y \sim V; q,r \sim (S^1)^D} \left[f(X(y,q,r)) \conj{g(X(y,q,r))}\right]~, \end{align} where the set input $X(y,q,r) \in \mathbb{C}^{D \times 2N}$ with matrix entries $x_{dn}(y,q,r)$ is defined by: \begin{align} x_{dn}(y,q,r) = \begin{cases} q_d y_{n} & 1 \leq n \leq N ~,\\ r_d y_{n-N} & N+1 \leq n \leq 2N~. \end{cases} \end{align} We also restate the activation assumption in this new notation: \begin{assumption}\label{ass:act-real} The activation $\sigma : \mathbb{C} \rightarrow \mathbb{C}$ is analytic, and for fixed $D, N$ there exist two-layer neural networks $f_1, f_2$ using $\sigma$, both with $O\left(D^2 + D \log \frac{D}{\epsilon}\right)$ width and $O(D \log D)$ bounded weights, such that: \begin{align} \sup_{|\xi| \leq 3} |f_1(\xi) - \xi^2| \leq \epsilon, \qquad \sup_{|\xi| \leq 3} \left|f_2(\xi) - \left(1 - (\xi/4)^{\hat{D}}\right) \frac{\xi - 1/4}{\xi/4 - 1} \right| \leq \epsilon~. \end{align} \end{assumption} Our main theorem is then the following: \begin{theorem}[Exponential width-separation]\label{thm:main-result} Fix $2N$ and $D$ such that $\hat{D} > 1$, and consider set elements $X \in \mathbb{C}^{D \times 2N}$. Define \begin{align} g(X) = -\frac{4N^2}{4^{\hat{D}}} + \sum_{n,n'=1}^{2N}\prod_{d=1}^{\hat{D}} \left(1 - (x_{dn}x_{dn'}/4)^{\hat{D}}\right)\frac{x_{dn}x_{dn'} - 1/4}{x_{dn}x_{dn'}/4 - 1} \end{align} and $g' = \frac{g}{\|g\|_\mathcal{A}}$. Then the following is true: \begin{itemize} \item For $L \leq N^{-2}\exp(O(\hat{D}))$, \begin{align} \min_{f \in \text{Sym}_L} \|f - g'\|_\mathcal{A}^2 \geq \frac{1}{12}~.
\end{align} \item For $L = 1$, there exists $f \in \text{Sym}_Lt$, parameterized with an activation $\sigma$ that satisfies Assumption~\ref{ass:act-real}, with width $poly(N,D,1/\epsilon)$, depth $O(\log D)$, and maximum weight magnitude $O(D \log D)$, such that over the unit torus: \begin{align} \|f - g'\|_\infty \leq \epsilon~. \end{align} \end{itemize} \end{theorem} \begin{remark} Let us remark on one aspect that will ease exposition. In the sequel, we will assume $D \leq \sqrt{N/2}$, so that $\hat{D} = D$. This is not a necessary assumption; in the case that $D > \sqrt{N/2}$, we can simply replace all instances of $D$ with $\hat{D}$ in the definition of $g$ and the subsequent proof. Because the rows of $X \in \mathbb{C}^{D \times 2N}$ are i.i.d. under the data distribution, the proof goes through exactly. Indeed, it would be equivalent to truncating each set vector to the first $\hat{D}$ elements. This only impacts the bounds by replacing $D$ with $\hat{D}$; we will clearly state where this occurs. \end{remark} \subsection{Proof Roadmap} Let us outline the general proof. In Section~\ref{sec:l2-inner-product}, we justify the inner product $\langle \cdot, \cdot \rangle_\mathcal{A}$ and show it can be used to prove a high-dimensional analogue of the Projection Lemma (see Lemma~\ref{lem:projhigh}). In Section~\ref{sec:diagonal-inner-product} we further introduce a second inner product, whose orthogonality properties (see Theorem~\ref{thm:bilinear-form}) allow us to apply the Rank Lemma~\ref{lem:rank}. In Section~\ref{sec:actual-proof-of-lower-bound}, we combine these results to first prove a lower bound for a simple choice of hard function (see Theorem~\ref{thm:high-separation-easy}). Because this simple choice is not suitable for demonstrating the upper bound, we then conclude by showing that the hard function $g'$ also evinces a lower bound via a similar argument (see Theorem~\ref{thm:high-separation-hard}).
In Section~\ref{sec:construct-g}, we demonstrate the properties of the hard function $g$, by constructing the pieces of $g$ one by one and controlling their behavior, leading to Lemma~\ref{lem:g-prop} which yields all the properties we need about $g$ for the rest of the proof. In Section~\ref{sec:upper-bound} we complete the proof of the upper bound. Specifically, in Section~\ref{sec:exact-rep} we show how to write $g'$ exactly in an analogous form to $\text{Sym}_Lt$, but using very specific activations. In Section~\ref{sec:approx-rep} we write an approximation of this network in $\text{Sym}_Lt$ using a given activation, and in Section~\ref{sec:proof-upper-bound} we control the error between these two networks. \section{Lower Bound of Main Result}\label{sec:lower-bound-app} \subsection{An $L_2$ inner product}\label{sec:l2-inner-product} As discussed in Section~\ref{sec:lower-bound}, we must first define an appropriate $L_2$ inner product, before we can prove a lower bound on function approximation. To that end, we will define an input distribution for the set inputs $X$. Let us introduce several random variables: let $y \sim V$ as in the definition of the inner product $\langle \cdot, \cdot \rangle_V$ over $N$ variables. Let $q$ and $r$ be two random vectors of dimension $D$, with each entry $i.i.d.$ uniform on $S^1$. Then we can define an input distribution for $X \mathbf{i}n \mathbb{C}^{D \times 2N}$ with matrix entries $x_{dn}$: \begin{align} x_{dn} = \begin{cases} q_d y_{n} & 1 \leq n \leq N \\ r_d y_{n-N} & N+1 \leq n \leq 2N~. \end{cases} \end{align} The point of this assignment is how it transforms multisymmetric power sums: \begin{align} \bold{p}_\alpha(X) & = \frac{1}{\sqrt{|\alpha|}}\sum_{n=1}^{2N} \prod_d x_{dn}^{\alpha_d} \\ & = \frac{1}{\sqrt{|\alpha|}} \sum_{n=1}^{N} \prod_d y_{n}^{\alpha_d} q_d^{\alpha_d} + \frac{1}{\sqrt{|\alpha|}} \sum_{n=1}^{N} \prod_d y_{n}^{\alpha_d} r_d^{\alpha_d} \\ & = p_{|\alpha|}(y) \cdot (q^\alpha + r^\alpha)~. 
\end{align} Then, as stated before, we have the inner product: \begin{align} \langle f, g \rangle_\mathcal{A} = \mathbb{E}_{y \sim V, q \sim (S^1)^D, r \sim ({S^1})^D} \left[f(X) \conj{g(X)}\right]~. \end{align} From our choices above we may use separability to write $\langle \cdot, \cdot \rangle_\mathcal{A}$ in terms of previously introduced inner products. For example: \begin{align} \langle \bold{p}_\alpha, \bold{p}_\beta \rangle_\mathcal{A} & = \mathbb{E}_{y,q,r}\left[p_{|\alpha|}(y) (q^\alpha + r^\alpha) \conj{p_{|\beta|}(y) (q^\beta + r^\beta)} \right] \\ & = \mathbb{E}_y\left[p_{|\alpha|}(y) \conj{p_{|\beta|}(y)} \right] \mathbb{E}_{q,r}\left[(q^\alpha + r^\alpha) \conj{(q^\beta + r^\beta)} \right] \\ & = \langle p_{|\alpha|}, p_{|\beta|} \rangle_V \cdot \langle q^\alpha + r^\alpha, q^\beta + r^\beta \rangle_{S^1}~. \end{align} We can now observe that this inner product grants a ``partial'' orthogonality: \begin{lemma}\label{lem:partialortho} Consider $\alpha, \beta \in \mathbb{N}^D$ with $1 \leq |\alpha|, |\beta| \leq N/2$. Then for $\gamma_k \in \mathbb{N}^{D} \setminus \{0\}$, if $K \neq 2$, \begin{align} \left\langle \bold{p}_\alpha \bold{p}_\beta, \prod_{k=1}^K \bold{p}_{\gamma_k} \right\rangle_{\mathcal{A}} = 0~. \end{align} Otherwise, for $K = 2$, we have: \begin{align}\label{eq:twoprod} \langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_{\mathcal{A}} = 2 \cdot (1 + \mathbbm{1}_{|\alpha| = |\beta|}) \cdot \mathbbm{1}_{\{|\alpha|, |\beta|\} = \{|\gamma|, |\delta|\}} \cdot (\mathbbm{1}_{\alpha + \beta = \gamma + \delta} + \mathbbm{1}_{(\alpha, \beta) = (\gamma, \delta)} + \mathbbm{1}_{(\alpha, \beta) = (\delta, \gamma)})~.
\end{align} \end{lemma} \begin{proof} By separability, we can confirm that \begin{align} \langle \bold{p}_\alpha \bold{p}_\beta, \prod_{k=1}^K \bold{p}_{\gamma_k} \rangle_{\mathcal{A}} = \langle p_{|\alpha|} p_{|\beta|}, \prod_{k=1}^K p_{|\gamma_k|} \rangle_V \cdot C~, \end{align} where $C$ is the value of the expectation on the random variables $q$ and $r$. Thus if $K \neq 2$, because $|\alpha| + |\beta| \leq N$, this term is 0 by orthogonality of the Vandermonde inner product. For the $K=2$ case, we begin again by using separability: \begin{align} \langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_{\mathcal{A}} & = \left\langle p_{|\alpha|} p_{|\beta|} , p_{|\gamma|} p_{|\delta|} \right\rangle_V \cdot \left\langle (q^\alpha + r^\alpha) (q^\beta + r^\beta), (q^\gamma + r^\gamma) (q^\delta + r^\delta) \right\rangle_{S^1}~. \end{align} Let's consider first the inner product of power sums. Plugging in the definition of the normalizing constant $z_\lambda$ gives: \begin{align*} \left\langle p_{|\alpha|} p_{|\beta|} , p_{|\gamma|} p_{|\delta|} \right\rangle_V = (1 + \mathbbm{1}_{|\alpha| = |\beta|}) \cdot \mathbbm{1}_{\{|\alpha|, |\beta|\} = \{|\gamma|, |\delta|\}}~. \end{align*} Consider now the second inner product term. Noting that each element $q_d, r_d$ is i.i.d. uniform on the unit circle, orthogonality of the Fourier basis implies we can calculate this inner product by only including terms with matching exponents. 
Bearing in mind that $\alpha, \beta, \gamma, \delta \neq 0$, terms of the form $\langle q^{\alpha + \beta}, q^\gamma r^\delta \rangle_{S^1}$ always vanish, and therefore we distribute and calculate: \begin{align*} & \left\langle q^{\alpha + \beta} + q^\alpha r ^\beta + q^{\beta} r^\alpha + r^{\alpha + \beta}, q^{\gamma + \delta} + q^\gamma r ^\delta + q^{\delta} r^\gamma + r^{\gamma + \delta} \right\rangle_{S^1} \\ & = \langle q^{\alpha + \beta}, q^{\gamma + \delta} \rangle_{S^1} + \langle q^\alpha r ^\beta + q^{\beta} r^\alpha, q^\gamma r ^\delta + q^{\delta} r^\gamma \rangle_{S^1} + \langle r^{\alpha + \beta}, r^{\gamma + \delta} \rangle_{S^1} \\ & = 2 \cdot \mathbbm{1}_{\alpha + \beta = \gamma + \delta} + 2 \cdot \mathbbm{1}_{(\alpha, \beta) = (\gamma, \delta)} + 2 \cdot \mathbbm{1}_{(\alpha, \beta) = (\delta, \gamma)}~. \end{align*} Collecting the terms of both products and evaluating the indicator functions under all cases gives the result. \end{proof} Looking at Equation~\ref{eq:twoprod}, we can see that the inner product $\langle \cdot, \cdot \rangle_\mathcal{A}$ does not grant full orthogonality. The inner product gives orthogonality between powersum products of different lengths, but $\langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_{\mathcal{A}}$ can be non-zero if $\alpha + \beta = \gamma + \delta$, even in the cases where $\{\alpha, \beta\} \neq \{\gamma, \delta\}$. Nevertheless, this inner product still suffices to prove a similar result about projection for the $D > 1$ case. Let $\mathcal{P}_1$ be the orthogonal projection onto $\mathrm{span}(\{\bold{p}_\alpha : 1 \leq |\alpha| \leq N/2 \})$ and $\mathcal{P}_2$ be the orthogonal projection onto $\mathrm{span}(\{\bold{p}_\alpha \bold{p}_\beta : 1 \leq |\alpha|, |\beta| \leq N/2 \})$. Here by orthogonal, we mean with respect to $\langle \cdot, \cdot \rangle_\mathcal{A}$.
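The monomial bookkeeping in the proof of Lemma~\ref{lem:partialortho} can be verified mechanically: by Fourier orthogonality, the $(S^1)$-expectation of a product of monomials in $q, r$ is $1$ exactly when the exponent pairs match. The following sketch (our addition, in Python; exponent tuples are nonzero as in the lemma) exhaustively checks the closed form for the cross factor in small dimension:

```python
import itertools

def s1_inner(ms1, ms2):
    """Exact <.,.>_{S^1}: each monomial is a pair (a, b) of exponent
    tuples meaning q^a r^b; E[q^a r^b conj(q^c r^d)] = 1 iff (a,b)=(c,d)."""
    return sum(1 for m1 in ms1 for m2 in ms2 if m1 == m2)

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def expand(a, b):
    """(q^a + r^a)(q^b + r^b) as a list of monomials (with multiplicity)."""
    zero = tuple(0 for _ in a)
    return [(add(a, b), zero), (a, b), (b, a), (zero, add(a, b))]

def lhs(alpha, beta, gamma, delta):
    return s1_inner(expand(alpha, beta), expand(gamma, delta))

def rhs(alpha, beta, gamma, delta):
    """Closed form from the proof of the lemma."""
    return (2 * (add(alpha, beta) == add(gamma, delta))
            + 2 * ((alpha, beta) == (gamma, delta))
            + 2 * ((alpha, beta) == (delta, gamma)))

# Exhaustive check over small nonzero exponent tuples in dimension D = 2.
tuples = [t for t in itertools.product(range(3), repeat=2) if any(t)]
for a, b, c, d in itertools.product(tuples, repeat=4):
    assert lhs(a, b, c, d) == rhs(a, b, c, d)
```

Note that the case $\alpha + \beta = \gamma + \delta$ with $\{\alpha, \beta\} \neq \{\gamma, \delta\}$, e.g. $\alpha = (2,0)$, $\beta = (0,2)$, $\gamma = \delta = (1,1)$, contributes exactly $2$, matching the discussion above.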
\begin{lemma}\label{lem:projhigh} Given any $f \in \text{Sym}_L$ with $D > 1$, we may choose coefficients $v_{ij}$ over $i \leq j \leq L$, and multisymmetric polynomials $\phi_i$ over $i \leq L$, such that: \begin{align} \mathcal{P}_2 f = \sum_{i\leq j}^L v_{ij} (\mathcal{P}_1 \phi_i) (\mathcal{P}_1\phi_j)~. \end{align} \end{lemma} \begin{proof} As in Lemma~\ref{lem:proj}, if we expand $\psi_l(x_n) = c_{l0} + \sum_{\alpha \neq 0} \frac{c_{l\alpha}}{\sqrt{|\alpha|}} x_n^\alpha$, then symmetrizing gives $\phi_l(X) = Nc_{l0} + \sum_{\alpha \neq 0} c_{l\alpha} \bold{p}_{\alpha}$. By a similar approximation as in Lemma~\ref{lem:proj} that allows us to subtract out constant terms, we write: \begin{align} f & = \sum_{\eta \in \mathbb{N}^L} v_\eta \phi^\eta~, \\ \phi_l & = \sum_{\alpha \neq 0} c_{l\alpha} \bold{p}_\alpha~. \end{align} Note that by Lemma~\ref{lem:partialortho}, $\langle \bold{p}_\alpha \bold{p}_\beta, \phi^\eta \rangle_\mathcal{A} = 0$ unless $|\eta| = 2$. So similarly to before, we may rewrite \begin{align*} \mathcal{P}_2 f = \sum_{|\eta| = 2} v_\eta \mathcal{P}_2 \phi^\eta~. \end{align*} Here we can simplify notation. Let $\{e_i\}_{i=1}^L$ denote the standard basis vectors in dimension $L$. Every $\eta \in \mathbb{N}^L$ with $|\eta| = 2$ can be written as $\eta = e_i + e_j$, so let $v_{ij} := v_{e_i + e_j}$. Then we can rewrite: \begin{align*} \mathcal{P}_2 f = \sum_{i \leq j} v_{ij} \mathcal{P}_2 \phi_i \phi_j~. \end{align*} Again, by Lemma~\ref{lem:partialortho}, we know $\mathcal{P}_2$ annihilates any term of the form $\bold{p}_\gamma \bold{p}_\delta$ unless $1 \leq |\gamma|, |\delta| \leq N/2$. Indeed, if this condition fails, then every $\alpha, \beta$ with $1 \leq |\alpha|, |\beta| \leq N/2$ satisfies $\{|\alpha|, |\beta|\} \neq \{|\gamma|, |\delta|\}$, so by the Lemma, $\langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_\mathcal{A} = 0$.
So we may pass from $\mathcal{P}_2$ to $\mathcal{P}_1$: \begin{align} \mathcal{P}_2 \phi_i \phi_j &= \mathcal{P}_2 \left(\sum_{\gamma \in \mathbb{N}^D} c_{i\gamma} \bold{p}_\gamma \right) \left(\sum_{\delta \in \mathbb{N}^D} c_{j\delta} \bold{p}_\delta \right) \\ & = \mathcal{P}_2 \sum_{\gamma \in \mathbb{N}^D} \sum_{\delta \in \mathbb{N}^D} c_{i\gamma} c_{j\delta} \bold{p}_\gamma\bold{p}_\delta \\ & = \sum_{1 \leq |\gamma| \leq N/2} \sum_{1 \leq |\delta|\leq N/2} c_{i\gamma} c_{j\delta} \bold{p}_\gamma\bold{p}_\delta \\ & = \left(\sum_{1 \leq |\gamma| \leq N/2} c_{i\gamma} \bold{p}_\gamma \right) \left(\sum_{1 \leq |\delta| \leq N/2} c_{j\delta} \bold{p}_\delta \right) \\ & = (\mathcal{P}_1 \phi_i) (\mathcal{P}_1 \phi_j)~. \end{align} So ultimately we get \begin{align} \mathcal{P}_2 f & = \sum_{i\leq j}^L v_{ij} (\mathcal{P}_1 \phi_i) (\mathcal{P}_1 \phi_j)~. \end{align} \end{proof} \subsection{A Diagonal Inner Product}\label{sec:diagonal-inner-product} Before we can apply Lemma~\ref{lem:rank}, which lets us transform function approximation error into matrix approximation error, we need a better inner product, one that is diagonal in the low-degree multisymmetric powersum basis. Consider two more bilinear forms, defined for $f,g$ in the range of $\mathcal{P}_2$: \begin{align} \langle f, g \rangle_{\mathcal{A}A} = \mathbb{E}_{y \sim V, q \sim (S^1)^D, r = 0} \left[f(X) \conj{g(X)}\right]~. \end{align} This is nearly the same distribution as before, except we fix $r = 0$. Then define \begin{align} \langle f, g \rangle_{*} = \langle f, g \rangle_{\mathcal{A}} - 2 \langle f, g \rangle_{\mathcal{A}A} ~. \end{align} We will show that, restricted to the range of $\mathcal{P}_2$, the form $\langle \cdot, \cdot \rangle_{*}$ is positive definite, and therefore a valid inner product. \begin{theorem}\label{thm:bilinear-form} The bilinear form $\langle \cdot, \cdot \rangle_{*}$ is an inner product when restricted to the range of $\mathcal{P}_2$.
Furthermore, it is diagonal in the powersum basis $p_{\{\alpha,\beta\}}$ for $1 \leq |\alpha|, |\beta| \leq N/2$. \end{theorem} \begin{proof} Given $\bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \mathbf{i}n im(\mathcal{P}_2)$, we can consider $\langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_{\mathcal{A}A}$ which can similarly be calculated via separability: \begin{align*} \langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_{\mathcal{A}A} & = \langle p_{|\alpha|} p_{|\beta|} , p_{|\gamma|} p_{|\delta|} \rangle_V \cdot \langle q^{\alpha + \beta} , q^{\gamma + \delta} \rangle_{S^1} \\ & = (1 + \mathbbm{1}_{|\alpha| = |\beta|}) \cdot \mathbbm{1}_{\{|\alpha|, |\beta|\} = \{|\gamma|, |\delta|\}} \cdot \mathbbm{1}_{\alpha + \beta = \gamma + \delta}~. \end{align*} It follows from Lemma~\ref{lem:partialortho} that: \begin{align*} \langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_{*} & = \langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_{\mathcal{A}} - 2\langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_{\mathcal{A}A}\\ & = 2 \cdot (1 + \mathbbm{1}_{|\alpha| = |\beta|}) \cdot (\mathbbm{1}_{(\alpha, \beta) = (\gamma, \delta)} + \mathbbm{1}_{(\alpha, \beta) = (\delta, \gamma)})~. \end{align*} To eliminate the ambiguity of $\bold{p}_\alpha \bold{p}_\beta$ vs. $\bold{p}_\beta \bold{p}_\alpha$, let us define $\bold{p}_{\{\alpha, \beta\}}$ equal to both these terms. Then we can equivalently write: \begin{align*} \langle \bold{p}_{\{\alpha, \beta\}}, \bold{p}_{\{\gamma, \delta\}} \rangle_{*} & = 2 \cdot (1 + \mathbbm{1}_{|\alpha| = |\beta|}) \cdot (1 + \mathbbm{1}_{\alpha = \beta}) \cdot \mathbbm{1}_{\{\alpha, \beta\} = \{\gamma, \delta\}}~. 
\end{align*} Evaluating the indicator functions under all cases we can see: \begin{align*} \langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\gamma \bold{p}_\delta \rangle_* = \begin{cases} 0 & \{\alpha, \beta\} \neq \{\gamma, \delta\} \\ 2 & \{\alpha, \beta\} = \{\gamma, \delta\}, \quad |\alpha| \neq |\beta| \\ 4 & \{\alpha, \beta\} = \{\gamma, \delta\}, \quad |\alpha| = |\beta|, \quad \alpha \neq \beta \\ 8 & \{\alpha, \beta\} = \{\gamma, \delta\}, \quad \alpha = \beta \end{cases} \end{align*} Then we've shown that the bilinear form $\langle \cdot, \cdot \rangle_*$, treated as a matrix in the basis of all $\bold{p}_{\{\alpha, \beta\}}$, is positive-definite and diagonal. Since this basis spans the range of $\mathcal{P}_2$, it follows that the bilinear form is an inner product. \end{proof} \subsection{Proof of Lower Bound}\label{sec:actual-proof-of-lower-bound} We first prove a lower bound using a slightly simpler hard function $g$, before updating the argument to the true choice of $g$ further below. \begin{theorem}\label{thm:high-separation-easy} Let $D > 1$. In particular, assume $\min(N/2, D-1) \geq 2$. Then we have \begin{align} \max_{\|g\|_\mathcal{A} = 1} \min_{f\mathbf{i}n\text{Sym}_L} \|f - g\|_\mathcal{A}^2 \geq \frac{1}{6} - \frac{L}{6 \cdot 2^{\min(N/2, D-1)}}~. \end{align} So for $L \leq 2^{{\min(N/2, D-1)}-3}$ we have a constant lower bound on the approximation error. \end{theorem} \begin{proof} Define $T = |\{\alpha \mathbf{i}n \mathbb{N}^D : |\alpha| = N/2\}|$ and choose the bad function $g = \frac{1}{\sqrt{12T}} \sum_{|\alpha| = N/2} \bold{p}_{\{\alpha, \alpha\}}$. 
Observe that although $\langle \cdot, \cdot \rangle_\mathcal{A}$ is not fully orthogonal in the powersum basis, we can nevertheless calculate by Lemma~\ref{lem:partialortho} that for $|\alpha| = |\beta| = N/2$: \begin{align} \langle \bold{p}_{\{\alpha, \alpha\}}, \bold{p}_{\{\beta, \beta\}} \rangle_\mathcal{A} &= 4 \cdot (\mathbbm{1}_{\alpha + \alpha = \beta + \beta} + \mathbbm{1}_{(\alpha, \alpha) = (\beta, \beta)} + \mathbbm{1}_{(\alpha, \alpha) = (\beta, \beta)}) \\ & = 12 \cdot \mathbbm{1}_{\alpha = \beta}~. \end{align} Therefore we can confirm that $g$ is normalized: \begin{align} \|g\|_\mathcal{A}^2 & = \frac{1}{12T} \sum_{|\alpha| = N/2} \sum_{|\beta| = N/2} \langle \bold{p}_{\{\alpha, \alpha\}}, \bold{p}_{\{\beta, \beta\}} \rangle_\mathcal{A} \\ & = \frac{1}{12T} \sum_{|\alpha| = N/2} \sum_{|\beta| = N/2} 12 \cdot \mathbbm{1}_{\alpha = \beta} \\ & = \frac{1}{T} \sum_{|\alpha| = N/2} 1 \\ & = 1~. \end{align} Again, we have $\mathcal{P}_2 g = g$. Now by Lemma~\ref{lem:projhigh}, we may write: \begin{align*} \mathcal{P}_2 f & = \sum_{i\leq j}^L v_{ij} (\mathcal{P}_1 \phi_i) (\mathcal{P}_1 \phi_j)~. \end{align*} Finally, note that $\langle \cdot, \cdot \rangle_*$ obeys the inner product conditions of Lemma~\ref{lem:rank} on the range of $\mathcal{P}_2$, following from orthogonality and the normalization: \begin{align*} \langle \bold{p}_\alpha \bold{p}_\beta, \bold{p}_\alpha \bold{p}_\beta \rangle_* = \begin{cases} 2 & |\alpha| \neq |\beta| \\ 4 & |\alpha| = |\beta|, \quad \alpha \neq \beta \\ 8 & \alpha = \beta \end{cases} \end{align*} So we can apply Lemma~\ref{lem:rank} to $\mathcal{P}_2 f, \mathcal{P}_2 g$, and the inner product $\langle \cdot, \cdot \rangle_*$.
Hence, we can derive: \begin{align} \min_{f\in\text{Sym}_L} \|f - g\|_\mathcal{A}^2 & \overset{(a)}{\geq} \min_{f\in\text{Sym}_L} \|\mathcal{P}_2 f - \mathcal{P}_2 g\|_\mathcal{A}^2 \\ & \overset{(b)}{\geq} \min_{f\in\text{Sym}_L} \|\mathcal{P}_2 f - \mathcal{P}_2 g\|_*^2 \\ & \overset{(c)}{\geq} \min_{C,V} \frac{1}{2} \|C^T V C - 2 \cdot \frac{1}{\sqrt{12T}} I\|_F^2 \\ & = \min_{C,V} \frac{1}{6T} \|C^T V C - I\|_F^2~. \end{align} Here, $(a)$ follows from the definition of $\mathcal{P}_2$ as an orthogonal projection with respect to $\langle \cdot, \cdot \rangle_\mathcal{A}$, $(b)$ follows from the fact that $\|\cdot\|_\mathcal{A}^2 \geq \|\cdot\|_*^2$, and $(c)$ follows from the application of Lemma~\ref{lem:rank}. These matrices are elements of $\mathbb{C}^{T \times T}$, but the term $C^TVC$ is constrained to rank $L$. Hence, as before we calculate: \begin{align} \min_{f\in\text{Sym}_L} \|f - g\|_\mathcal{A}^2 \geq \frac{T - L}{6T} = \frac{1}{6} - \frac{L}{6T}~. \end{align} Letting $m = \min(N/2, D-1)$ and assuming $m \geq 2$, the standard lower bound on the central binomial coefficient gives $$T = \binom{N/2 + D -1}{N/2} \geq \binom{2m}{m} \geq \frac{4^m}{2\sqrt{m}} \geq 2^m~,$$ and the bound follows. \end{proof} This theorem demonstrates a hard function $g$ that cannot be efficiently approximated by $f \in \text{Sym}_L$ for $L = poly(N, D)$, but it does not yet evince a separation. Indeed, $\|g\|_\infty$ grows like $N\sqrt{T}$ (consider evaluating $g$ at $x_{dn} \equiv 1$), so $g$ has very large magnitude, and there is no obvious way to approximate this function by an efficient network in $\text{Sym}_Lt$. Thus, we consider a more complicated choice for $g$ that allows for the separation: \begin{theorem}\label{thm:high-separation-hard} Let $D > 1$. Then let $g' = \frac{g}{\|g\|_\mathcal{A}}$ for $g$ as defined in Lemma~\ref{lem:g-prop}, such that $\|g'\|_\mathcal{A} = 1$.
Then for $L \leq N^{-2} \exp(O(D))$: \begin{align} \min_{f\mathbf{i}n\text{Sym}_L} \|f - g'\|_\mathcal{A}^2 \geq \frac{1}{12}~. \end{align} \end{theorem} \begin{proof} The lower bound follows almost identically as before. By Lemma \hyperref[g-four]{\ref*{lem:g-prop}.\ref*{g-four}} we still have that $\mathcal{P}_2 g' = g'$. So we can write \begin{align} g &=\sum_{1 \leq |\alpha| \leq N/2} g_\alpha \bold{p}_{\{\alpha,\alpha\}} \\ g' &=\sum_{1 \leq |\alpha| \leq N/2} \frac{g_\alpha}{\|g\|_\mathcal{A}} \bold{p}_{\{\alpha,\alpha\}} ~. \end{align} Thus, by the same reasoning as Theorem~\ref{thm:high-separation-easy} we recover the lower bound: \begin{align} \min_{f\mathbf{i}n\text{Sym}_L} \|f - g'\|_\mathcal{A}^2 & \geq \min_{f\mathbf{i}n\text{Sym}_L} \|\mathcal{P}_2 f - \mathcal{P}_2 g'\|_\mathcal{A}^2 \\ & \geq \min_{f\mathbf{i}n\text{Sym}_L} \|\mathcal{P}_2 f - \mathcal{P}_2 g'\|_*^2 \\ & \geq \min_{C,V} \frac{1}{2} \|C^T V C - G'\|_F^2~, \end{align} where $G'$ is the matrix induced by $g'$ as given in Lemma~\ref{lem:rank}, i.e. the diagonal matrix indexed by $G_{\alpha\alpha}' = \frac{2g_\alpha}{\|g\|_\mathcal{A}}$. Now, by the partial orthogonality of $\langle \cdot, \cdot \rangle_\mathcal{A}$ noted in Lemma~\ref{lem:partialortho}, we have: \begin{align} \|g\|_\mathcal{A}^2 & = \sum_{1 \leq |\alpha| \leq N/2} \quad \sum_{1 \leq |\beta| \leq N/2} \langle g_\alpha \bold{p}_{\{\alpha, \alpha\}}, g_\beta \bold{p}_{\{\beta, \beta\}} \rangle_\mathcal{A} \\ & = \sum_{1 \leq |\alpha| \leq N/2} \quad \sum_{1 \leq |\beta| \leq N/2} g_\alpha \overline{g_\beta} ( 12 \cdot \mathbbm{1}_{\alpha = \beta}) \\ & = 12 \sum_{1 \leq |\alpha| \leq N/2} |g_\alpha|^2 ~. \end{align} Hence, we can say \begin{align} \|G'\|_F^2 & = \sum_{1 \leq |\alpha| \leq N/2} \left|\frac{2g_\alpha}{\|g\|_\mathcal{A}}\right|^2 \\ & = \frac{4 \sum_{1 \leq |\alpha| \leq N/2} |g_\alpha|^2 }{12 \sum_{1 \leq |\alpha| \leq N/2} |g_\alpha|^2} \\ & = \frac{1}{3}~. 
\end{align} Call $G_L'$ the best rank-$L$ approximation of $G'$ in the Frobenius norm. By classical properties of SVD it follows that $G_L'$ is a diagonal matrix with $L$ entries corresponding to the $L$ largest elements of $G'$. Then because $\|G'\|_F^2 = \frac{1}{3}$: \begin{align} \|G_L' - G'\|_F^2 = \frac{1}{3} - \sum_{l=1}^L \left(\frac{|2g_{\alpha_l}|}{\|g\|_\mathcal{A}}\right)^2~, \end{align} where we order $|g_{\alpha_l}|$ in non-increasing order. Combining Lemma \hyperref[g-two]{\ref*{lem:g-prop}.\ref*{g-two}} and \hyperref[g-four]{\ref*{lem:g-prop}.\ref*{g-four}} yields the inequality that for all $\alpha$ such that $1 \leq |\alpha| \leq N/2$: \begin{align} \left(\frac{|2g_\alpha|}{\|g\|_\mathcal{A}}\right)^2 \leq 4N^2\left(1 - \left(\frac{1}{4}\right)^2\right)^{2D}~, \end{align} so we can conclude \begin{align} \min_{f\mathbf{i}n\text{Sym}_L} \|f - g'\|_\mathcal{A}^2 & \geq \frac{1}{2}\|G_L' - G'\|_F^2 \\ & \geq \frac{1}{6} - 2LN^2\left(1 - \left(\frac{1}{4}\right)^2\right)^{2D}~. \end{align} Hence, if $L \leq \frac{1}{24} \cdot N^{-2} \left(\frac{16}{15} \right)^{2D}$, we derive a lower bound: \begin{align} \min_{f\mathbf{i}n\text{Sym}_L} \|f - g'\|_\mathcal{A}^2 & \geq \frac{1}{12}~. \end{align} \end{proof} We remark here that in the instance $D > \sqrt{N/2}$, we replace $D$ with $\hat{D}$ in the above bound, which is consistent with Theorem~\ref{thm:main-result}. \section{Definition of hard function $g$}\label{sec:construct-g} In this section we incrementally build the (unnormalized) hard function $g$, ultimately for the sake of Lemma~\ref{lem:g-prop}. This lemma characterizes all the properties of $g$ that we need to guarantee the lower and upper bounds. \begin{remark} In the following section, we assume $D \leq \sqrt{N/2}$ for simplicity of exposition. 
In the case that $D > \sqrt{N/2}$, we replace all instances of $D$ in our functional definitions with $\hat{D} = \min(D, \sqrt{N/2})$, which is only necessary for a projection argument in Lemma~\ref{lem:g-prop} and makes no meaningful change to the proofs. \end{remark} \subsection{M\"obius transform} Throughout this subsection, let $\xi \in \mathbb{C}$ with $|\xi| = 1$, and fix $r = 1/4$. Consider the one-dimensional M\"obius transformation and its truncated variant, for $t\geq 1$: \begin{align} \mu(\xi) &= \frac{\xi - r}{r\xi - 1} \\ \hat{\mu}_t(\xi) &= \left(1 - (r\xi)^{t}\right)\cdot \mu(\xi) \\ & = (r - \xi)\cdot\left(1 + r\xi + (r\xi)^2 + \dots + (r\xi)^{t - 1}\right) \end{align} \begin{lemma}\label{lem:mu-prop} The following properties hold (where infinity norms are defined with respect to ${S^1}$): \begin{enumerate} \item \label{mu-one} $\|\mu\|_\infty = 1$ \item \label{mu-two} $\|\mu\|_{S^1} = 1$ \item \label{mu-three} $\|\hat{\mu}_t\|_\infty \leq 1 + r^t$ \item \label{mu-four} $\|\hat{\mu}_t\|_{S^1}^2 = 1 + r^{2t}$ \item \label{mu-five} $\langle\hat{\mu}_t, 1\rangle_{S^1} = r$, $\langle \hat{\mu}_t, \xi\rangle_{S^1} = r^2 - 1$ and $|\langle \hat{\mu}_t, \xi^a\rangle_{S^1}| < 1-r^2$ for all $a \geq 2$ \item \label{mu-six} For $|\xi| = 1, |\omega| \leq 1 + \frac{1}{t}$, \begin{align} |\hat{\mu}_t(\xi) - \hat{\mu}_t(\omega)| \leq 6 |\xi - \omega| \end{align} \end{enumerate} \end{lemma} \begin{proof} It is a fact~\parencite{garnett2007bounded} that $\mu$ analytically maps the unit disk to itself, and additionally the unit circle to itself, i.e.\ for any $|\xi| = 1$ we have $|\mu(\xi)| = 1$. Hence $\|\mu\|_\infty = \|\mu\|_{S^1} = 1$.
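The coefficient and norm computations in the remainder of this proof reduce to finite polynomial arithmetic, so they can be checked mechanically. A small numerical sketch (our addition, using NumPy, with the arbitrary choice $t = 5$):

```python
import numpy as np

r, t = 0.25, 5  # r = 1/4 as fixed above; t = truncation degree (arbitrary)

# Coefficients of mu_hat_t(xi) = (r - xi) * sum_{a=0}^{t-1} (r xi)^a,
# via polynomial multiplication (array index = power of xi).
geom = np.array([r**a for a in range(t)])
coef = np.polynomial.polynomial.polymul([r, -1.0], geom)

# Closed-form coefficients claimed in the proof.
expected = np.zeros(t + 1)
expected[0] = r                      # a = 0
for a in range(1, t):                # 1 <= a <= t - 1
    expected[a] = -(r**(a - 1) - r**(a + 1))
expected[t] = -r**(t - 1)            # a = t
assert np.allclose(coef, expected)

# Parseval on S^1: the squared L_2 norm is the sum of squared coefficients.
assert np.isclose(np.sum(coef**2), 1 + r**(2 * t))
```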
We can see that truncation only gently perturbs this fact, so for $|\xi| = 1$: \begin{align} |\hat{\mu}_t(\xi)| & = |1 - (r\xi)^t| \cdot |\mu(\xi)|\\ & \leq 1 + r^t \end{align} Additionally, we can calculate the coefficient on each monomial in $\hat{\mu}_t$: \begin{align} \langle \hat{\mu}_t, \xi^a \rangle_{S^1} = \begin{cases} r & a = 0 \\ -(r^{a-1} - r^{a+1}) & 1 \leq a \leq t - 1 \\ -r^{t-1} & a = t \\ 0 & a > t \end{cases} \end{align} It is easy to confirm that the value of $|\langle \hat{\mu}_t, \xi^a \rangle_{S^1}|$ is maximized at $a = 1$. Hence, we can write the $L_2$ norm: \begin{align} \|\hat{\mu}_t\|_{{S^1}}^2 &= \sum_{a=0}^\infty |\langle \hat{\mu}_t, \xi^a \rangle_{S^1}|^2 \\ & = r^2 + \sum_{a=1}^{t-1} \left(r^{a-1} - r^{a+1}\right)^2 + r^{2t-2} \\ & = r^2 + \sum_{a=1}^{t-1} \left(r^{2a-2} -2r^{2a} + r^{2a+2}\right) + r^{2t-2} \\ & = 1 + r^{2t} \end{align} Finally, for $|\xi| = 1, |\omega| \leq 1 + \frac{1}{t} \leq 2$: \begin{align} |\mu(\xi) - \mu(\omega)| & = \left| \frac{\xi - r}{r\xi - 1} - \frac{\omega - r}{r\omega - 1} \right| \\ & = \left| \frac{(r^2-1)(\xi - \omega)}{(r\xi - 1)(r\omega - 1)} \right|~. \end{align} So noting $r = \frac{1}{4}$ we get \begin{align} |\mu(\xi) - \mu(\omega)| & \leq \frac{8}{3}|\xi - \omega|~. \end{align} Thus: \begin{align} |\hat{\mu}_t(\xi) - \hat{\mu}_t(\omega)| & = \left|\left(1 - (r\xi)^{t}\right)\cdot \mu(\xi) - \left(1 - (r\omega)^{t}\right)\cdot \mu(\omega) \right| \\ & \leq \left|\left(1 - (r\xi)^{t}\right)\cdot \mu(\xi) - \left(1 - (r\omega)^{t}\right)\cdot \mu(\xi) \right| + \left|\left(1 - (r\omega)^{t}\right)\cdot \mu(\xi) - \left(1 - (r\omega)^{t}\right)\cdot \mu(\omega) \right| \\ & \leq |\mu(\xi)| \cdot r^t |\xi^t - \omega^t| + |1 - (r\omega)^t| \cdot |\mu(\xi) - \mu(\omega)|\\ & \leq r^t |\xi^t - \omega^t| + |1 - (r\omega)^t| \cdot \frac{8}{3} |\xi - \omega|~.
\end{align} Note that for $|\xi| = 1, |\omega| \leq 1 + \frac{1}{t}$, because $|\omega|^k \leq e$ for $k \leq t$, we have \begin{align} \left|\xi^{t} - \omega^{t} \right| = \left|(\xi - \omega)(\xi^{t-1} + \xi^{t-2}\omega + \dots + \xi \omega^{t-2} + \omega^{t-1}) \right| \leq et |\xi - \omega|~. \end{align} Further plugging in that $r = \frac{1}{4}$ and $t \geq 1$: \begin{align} |\hat{\mu}_t(\xi) - \hat{\mu}_t(\omega)| & \leq 4^{-t} et |\xi - \omega| + \left(1 + 4^{-t}e\right) \cdot \frac{8}{3} |\xi - \omega| \\ & < 6 |\xi - \omega|~. \end{align} \end{proof} \subsection{$h$ function} Now, consider $z \in \mathbb{C}^D$ with $|z_i| = 1$ for all $i$. We now define: \begin{align} h(z) &= \prod_{i=1}^{D} \hat{\mu}_{D}(z_i)~. \end{align} \begin{lemma}\label{lem:h-prop} The following are true: \begin{enumerate} \item \label{h-one} $\|h\|_\infty \leq 1 + 2^{-D}$ \item \label{h-two} $1 \leq \|h\|_{S^1}^2 \leq 1 + 2^{-D}$ \item \label{h-three} For $z, z' \in (S^1)^D$ $$ |h(z) - h(z')| \leq 12 \|z-z'\|_{1}~. $$ \end{enumerate} \end{lemma} \begin{proof} We can immediately bound: \begin{align} \|h\|_\infty &= \prod_{i=1}^{D} \|\hat{\mu}_{D}\|_\infty\\ & \overset{(a)}{\leq} \left(1 + r^{D}\right)^{D} \\ &\overset{(b)}{\leq} 1 + 2^{D} \cdot r^{D} \\ &\leq 1 + 2^{-D}~, \end{align} where $(a)$ follows from Lemma \hyperref[mu-three]{\ref*{lem:mu-prop}.\ref*{mu-three}} and $(b)$ follows from the binomial identity that $(1+x)^t \leq 1 + 2^tx$ for $x\in[0,1], t \geq 1$. In the last line we simply plug in $r = 1/4$. Similarly by Lemma \hyperref[mu-four]{\ref*{lem:mu-prop}.\ref*{mu-four}}, \begin{align} \|h\|_{S^1}^2 &= \prod_{i=1}^{D} \|\hat{\mu}_{D}\|_{S^1}^2\\ & = \left(1 + r^{2D}\right)^{D}\\ & \leq \left(1 + r^{D}\right)^{D}~. \end{align} And so by the same binomial inequality, we have \begin{align} 1 \leq \|h\|_{S^1}^2 \leq 1 + 2^{-D}~.
\end{align} Finally, observe that: \begin{align} |h(z) - h(z')| & \leq \sum_{i=1}^{D} \left| \left(\prod_{j=1}^{i-1} \hat{\mu}_{D}(z_j)\right) (\hat{\mu}_{D}(z_i) - \hat{\mu}_{D}(z_i')) \left(\prod_{j=i+1}^{D} \hat{\mu}_{D}(z_j')\right) \right| \\ & \overset{(a)}{\leq} \sum_{i=1}^{D} \left|\hat{\mu}_{D}(z_i) - \hat{\mu}_{D}(z_i')\right| (1 + r^{D})^{D-1} \\ & \overset{(b)}{\leq} 6 \sum_{i=1}^{D} |z_i - z_i'| \left(1 + 2^{-D}\right) \\ & \leq 12 \|z-z'\|_1~, \end{align} where in $(a)$ we apply \hyperref[mu-three]{\ref*{lem:mu-prop}.\ref*{mu-three}}, and in $(b)$ we apply \hyperref[mu-six]{\ref*{lem:mu-prop}.\ref*{mu-six}} and the same binomial identity as above. \end{proof} \subsection{$g$ function} Now, recalling that $z_{n,n'} = x_n \circ x_{n'}$, let: \begin{align} g(X) &= -4N^2r^{D} + \sum_{n,n'=1}^{2N} h(z_{n,n'}) ~. \end{align} Note that we subtract a constant here to ensure $g$ has no constant term, which will be necessary for the fact $\mathcal{P}_2 g = g$. \begin{remark} The following lemma is the only place we explicitly require the assumption $D \leq \sqrt{N/2}$, as this guarantees that $\mathcal{P}_2 g = g$. In the case that $D > \sqrt{N/2}$, we simply replace all instances of $D$ in this section with $\hat{D} = \min(D, \sqrt{N/2})$. This ensures $g$ is only supported on $\bold{p}_{\{\alpha, \alpha\}}$ with $|\alpha| \leq \hat{D}^2 \leq N/2$, and the subsequent proofs are identical. \end{remark} \begin{lemma}\label{lem:g-prop} The following are true: \begin{enumerate} \item \label{g-one} $\|g\|_\infty \leq 12N^2$. \item \label{g-two} $1 \leq \|g\|_\mathcal{A}^2 \leq 3N^2(1+2^{-D})$. \item \label{g-three}$\mathcal{P}_2g = g$. \item \label{g-four} We may write $g =\sum_{1 \leq |\alpha| \leq N/2} g_\alpha \bold{p}_{\{\alpha,\alpha\}}$, where $|g_\alpha|^2 \leq N^2 (1-r^2)^{2D}$. \item \label{g-five} $\mathrm{Lip}(g) \leq 48 N \sqrt{ND}$.
\end{enumerate} \end{lemma} \begin{proof} First, it's easy to see from Lemma \hyperref[h-one]{\ref*{lem:h-prop}.\ref*{h-one}} that \begin{align} \|g\|_\infty &\leq |-4N^2r^{D}| + 4N^2 \|h\|_\infty \\ & \leq 4N^2 \left(2^{-2D} + 1 + 2^{-D}\right) \\ & \leq 12N^2~. \end{align} Let us expand $h$ as \begin{align} h(z) = \sum_{\|\alpha\|_\infty \leq D} h_\alpha z^\alpha~, \end{align} noting that by definition of $\hat{\mu}_{D}$ and Lemma \hyperref[mu-five]{\ref*{lem:mu-prop}.\ref*{mu-five}} we have the constant term $h_0 = r^{D}$. Now we can expand \begin{align} g(X) & = -4N^2r^{D} + \sum_{n,n'=1}^{2N} h(z_{n,n'})\\ & = -4N^2r^{D} + \sum_{n,n'=1}^{2N} \left[ r^{D} + \sum_{1 \leq \|\alpha\|_\infty \leq D} h_\alpha z_{n,n'}^\alpha \right] \\ & = \sum_{n,n'=1}^{2N} \sum_{1 \leq \|\alpha\|_\infty \leq D} h_\alpha z_{n,n'}^\alpha \\ & = \sum_{1 \leq \|\alpha\|_\infty \leq D} h_\alpha \sum_{n,n'=1}^{2N} \prod_{d=1}^D (x_{dn}x_{dn'})^{\alpha_d} \\ & = \sum_{1 \leq \|\alpha\|_\infty \leq D} h_\alpha |\alpha| \left(\frac{1}{\sqrt{|\alpha|}}\sum_{n=1}^{2N} \prod_{d=1}^D x_{dn}^{\alpha_d} \right) \left(\frac{1}{\sqrt{|\alpha|}}\sum_{n'=1}^{2N} \prod_{d'=1}^D x_{d'n'}^{\alpha_{d'}} \right) \\ & = \sum_{1 \leq \|\alpha\|_\infty \leq D} h_\alpha |\alpha| \bold{p}_{\{\alpha, \alpha\}}(X) ~. \end{align} Note that $\|\alpha\|_\infty \leq D$ implies $|\alpha| \leq {D}^2 \leq N/2$, so it clearly follows that $\mathcal{P}_2 g = g$.
So by Lemma~\ref{lem:partialortho}, $\langle \bold{p}_{\{\alpha, \alpha\}}, \bold{p}_{\{\beta, \beta\}} \rangle_\mathcal{A} = 12 \cdot \mathbbm{1}_{\alpha = \beta}$ whenever $1 \leq |\alpha|, |\beta| \leq N/2$, so we can handily calculate: \begin{align} \|g\|_\mathcal{A}^2 & = \sum_{1 \leq \|\alpha\|_\infty \leq D} h_\alpha^2 |\alpha|^2 \|\bold{p}_{\{\alpha, \alpha\}}\|_\mathcal{A}^2 \\ & \leq 12 \cdot (N/2)^2 \sum_{1 \leq \|\alpha\|_\infty \leq D} h_\alpha^2 \\ & \leq 3 N^2 \|h\|_{S^1}^2 \\ & \leq 3 N^2 \left(1 + 2^{-D}\right)~, \end{align} where the last line uses Lemma \hyperref[h-two]{\ref*{lem:h-prop}.\ref*{h-two}}. And likewise, since $h_0 = r^{D}$, \begin{align} \|g\|_\mathcal{A}^2 & = \sum_{1 \leq \|\alpha\|_\infty \leq D} h_\alpha^2 |\alpha|^2 \|\bold{p}_{\{\alpha, \alpha\}}\|_\mathcal{A}^2 \\ & \geq 12 \left(-r^{2D} + \sum_{\|\alpha\|_\infty \leq D} h_\alpha^2\right) \\ & = 12 (-r^{2D} + \|h\|_{S^1}^2)\\ & \geq 1~, \end{align} and the last line again uses Lemma \hyperref[h-two]{\ref*{lem:h-prop}.\ref*{h-two}}. Finally, note that for any $\alpha$ such that $|\alpha| \leq N/2$, applying Lemma \hyperref[mu-five]{\ref*{lem:mu-prop}.\ref*{mu-five}} gives \begin{align} |g_\alpha|^2 = \left|h_\alpha |\alpha|\right|^2 & = |\alpha|^2 \prod_{i=1}^{D} |\langle \hat{\mu}_{D}, \xi^{\alpha_i} \rangle_{S^1}|^2 \\ & \leq N^2 (1-r^2)^{2D}~. \end{align} Lastly, we consider the Lipschitz norm.
For $X, \hat{X} \in \mathbb{C}^{D \times 2N}$ with each entry of unit norm, it's easy to confirm by Lemma \hyperref[h-three]{\ref*{lem:h-prop}.\ref*{h-three}} that: \begin{align} |g(X) - g(\hat{X})| &\leq \sum_{n,n'=1}^{2N} |h(z_{n,n'}) - h(\hat{z}_{n,n'})|\\ & \leq 12 \sum_{n,n'=1}^{2N} \|z_{n,n'} - \hat{z}_{n,n'}\|_1 \\ & = 12 \sum_{n,n'=1}^{2N} \sum_{d=1}^D |x_{dn} x_{dn'} - \hat{x}_{dn} \hat{x}_{dn'}| \\ & \leq 12 \sum_{n,n'=1}^{2N} \sum_{d=1}^D |x_{dn}| \cdot |x_{dn'} - \hat{x}_{dn'}| + |\hat{x}_{dn'}| \cdot |x_{dn} - \hat{x}_{dn}| \\ & = 48 N \sum_{n=1}^{2N} \sum_{d=1}^D |x_{dn} - \hat{x}_{dn}| \\ & = 48 N \|X - \hat{X}\|_1 \\ & \leq 48 N \sqrt{2ND} \|X - \hat{X}\|_2~. \end{align} \end{proof} \section{Upper Bound of Main Result}\label{sec:upper-bound} In this section we prove the upper bound to representing $g$ with an admissible activation that satisfies Assumption~\ref{ass:act-real}. The strategy is as follows. In Section~\ref{sec:exact-rep} we exactly encode the hard function $g$ with an efficient network, but allowing the choice of very particular activation functions. In Section~\ref{sec:approx-rep}, we leverage Assumption~\ref{ass:act-real} to build a network that approximates the exact one, using a given activation. We complete the proof in Section~\ref{sec:proof-upper-bound} by showing the exact and approximate networks stay close together, inducting through the layers. \subsection{Exact Representation}\label{sec:exact-rep} Let us first describe how to write $g$ exactly with a network in $\text{Sym}_Lt$, using particular activations. We then show how to approximate those activations, which introduces only a polynomial dependence on the desired error bound $\epsilon$. For exact representation, the activations we will allow are $\xi \rightarrow \xi^2$, and $\xi \rightarrow \hat{\mu}_{D}(\xi)$.
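As a purely illustrative sanity check (not part of the formal construction), the polarization identity $\xi\omega = \frac{1}{2}\left((\xi+\omega)^2 - \xi^2 - \omega^2\right)$ shows that a squaring activation already suffices to multiply scalars exactly:

```python
# Illustrative check: a squaring activation implements exact scalar
# multiplication via the polarization identity
#   xi * omega = ((xi + omega)^2 - xi^2 - omega^2) / 2.
import cmath

def mult_via_squares(xi, omega):
    sq = lambda z: z * z          # the xi -> xi^2 activation
    return 0.5 * (sq(xi + omega) - sq(xi) - sq(omega))

# a few unit-modulus and generic complex inputs
for a, b in [(1.0, -1.0), (cmath.exp(1j), cmath.exp(2.5j)), (-1j, 0.6 + 0.8j)]:
    assert abs(mult_via_squares(a, b) - a * b) < 1e-12
```

The identity is exact, so the only error incurred later comes from replacing the square with its network approximation $f_1$.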
Note that from the fact that $\xi \cdot \omega = \frac{1}{2} \left((\xi + \omega)^2 - \xi^2 - \omega^2\right)$, we can exactly multiply scalars with these activations. Then consider the following structure for $f \in \text{Sym}_Lt$ with $L = 1$. Given $x, x' \in \mathbb{C}^D$ with $|x_i| = |x_i'| = 1$ for all $i$, we define $\psi_1^*(x,x')$ via a network as follows. In particular, we will use $\cdot$ to explicitly indicate all scalar multiplication: \begin{align} z^* &= (x_1 \cdot x_1', \dots, x_D \cdot x_D') \\ Z^{(1)*} &= \left(\hat{\mu}_{D}(z_1^*), \dots, \hat{\mu}_{D}(z_{D}^*) \right) \in \mathbb{C}^{D} \\ Z^{(2)*} &= \left(Z_1^{(1)*} \cdot Z_2^{(1)*}, \dots, Z_{D-1}^{(1)*} \cdot Z_{D}^{(1)*} \right) \in \mathbb{C}^{D/2} \\ & \dots \\ Z^{(\log_2 D)*} &= Z_1^{(\log_2 D - 1)*} \cdot Z_2^{(\log_2 D - 1)*} \in \mathbb{C} \\ \psi_1^*(x,x') &= Z^{(\log_2 D)*} \end{align} In other words, we exactly calculate $\psi_1^*(x,x') = h(x \circ x')$ through $\log_2 D$ layers by multiplying the terms $\hat{\mu}_{D}(z_i)$ at each layer. Note that $|z_i^*| = 1$ for all $i$. So by applying Lemma \hyperref[mu-three]{\ref*{lem:mu-prop}.\ref*{mu-three}}, each entry $Z_i^{(k)*}$ is a product of at most $D$ of the values $\hat{\mu}_{D}(z_j^*)$, so $|Z_i^{(k)*}| \leq (1+r^D)^D \leq 1 + 2^{-D}$ for all $k \leq \log_2 D$. Now, for an input $\xi \in \mathbb{C}$ we define the map \begin{align} \rho^*(\xi) = \frac{-4N^2r^D + \xi}{\|g\|_\mathcal{A}}~, \end{align} and it's easy to confirm that we exactly represent: \begin{align} g'(X) = \rho^*\left(\sum_{n,n'=1}^{2N} \psi_1^*(x_n, x_{n'})\right)~. \end{align} \subsection{Approximate Representation}\label{sec:approx-rep} Now, we can imitate the network above using the exp activation, and control the approximation error in the infinity norm. Let us assume we've chosen $f_1, f_2$ as in Lemma~\ref{lem:expnet-epsilon}.
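Before passing to the approximation, the exact doubling tree above can be sanity-checked numerically. The following sketch is hypothetical and purely illustrative: $\hat{\mu}_D$ is instantiated from its polynomial decomposition $r\sum_{k=0}^{D-1}(r\xi)^k - \frac{1}{r}\sum_{k=1}^{D}(r\xi)^k$ with $r = 1/4$ (stated in the final section), and the tree output is compared against the direct product defining $h(x\circ x')$:

```python
# Illustrative sketch: the exact doubling tree psi_1^* computes
# h(x o x') = prod_i mu_hat_D(x_i * x_i') in log2(D) pairwise-
# multiplication layers.  mu_hat_D is taken from its polynomial
# decomposition with r = 1/4.
import cmath

R = 0.25
D = 8  # a power of two, so the tree closes after log2(D) layers

def mu_hat(xi):
    # mu_hat_D(xi) = r * sum_{0<=k<D} (r xi)^k - (1/r) * sum_{1<=k<=D} (r xi)^k
    return R * sum((R * xi) ** k for k in range(D)) \
        - (1 / R) * sum((R * xi) ** k for k in range(1, D + 1))

def psi1_star(x, xp):
    Z = [mu_hat(a * b) for a, b in zip(x, xp)]        # layer Z^(1)*
    while len(Z) > 1:                                  # doubling layers
        Z = [Z[2 * i] * Z[2 * i + 1] for i in range(len(Z) // 2)]
    return Z[0]

x  = [cmath.exp(1j * k) for k in range(D)]             # unit-modulus inputs
xp = [cmath.exp(-1j * (k * k) / 7) for k in range(D)]

direct = 1
for a, b in zip(x, xp):
    direct *= mu_hat(a * b)

assert abs(psi1_star(x, xp) - direct) < 1e-9
```

The tree merely reassociates the product, so exactness holds up to floating-point rounding; the interesting question, addressed next, is how error accumulates once $\cdot$ and $\hat{\mu}_D$ are themselves approximated.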
Furthermore, let us define $\xi \star \omega = \frac{1}{2} \left(f_1(\xi + \omega) - f_1(\xi) - f_1(\omega) \right)$, so that $\star$ approximates scalar multiplication. Then we mimic the exact network via: \begin{align} z &= (x_1 \star x_1', \dots, x_D \star x_D') \\ Z^{(1)} &= \left(f_2(z_1), \dots, f_2(z_D) \right) \in \mathbb{C}^D \\ Z^{(2)} &= \left(Z_1^{(1)} \star Z_2^{(1)}, \dots, Z_{D-1}^{(1)} \star Z_D^{(1)} \right) \in \mathbb{C}^{D/2} \\ & \dots \\ Z^{(\log_2 D)} &= Z_1^{(\log_2 D - 1)} \star Z_2^{(\log_2 D - 1)} \in \mathbb{C} \\ \psi_1(x,x') &= Z^{(\log_2 D)}~. \end{align} In other words, we replace all instances of multiplication $\cdot$ with $\star$, and all instances of $\hat{\mu}_{D}$ with $f_2$. Finally, we define the map $\rho$ as: \begin{align} \rho(\xi) = \frac{4N^2}{\|g\|_\mathcal{A}} \cdot \left(\frac{\xi}{4N^2} \star 1 - r^D\right)~, \end{align} where we can clearly represent the constant $r^D$ via one additional neuron. \subsection{Proof of Upper Bound}\label{sec:proof-upper-bound} We complete the approximation of $g'$ by showing the exact and approximate networks are nearly equivalent in infinity norm, leveraging the assumption on our activation. \begin{theorem}\label{thm:upper-bound-explicit} Consider $\epsilon > 0$ such that $\epsilon \leq \min\left(\frac{1}{100}, \frac{1}{12D^2} \right)$. For $L = 1$, there exists $f \in \text{Sym}_Lt$, parameterized with an activation $\sigma$ that satisfies Assumption~\ref{ass:act-real}, with width $O\left(D^3 + D^2 \log \frac{DN}{\epsilon}\right)$, depth $O(\log D)$, and maximum weight magnitude $O(D \log D)$ such that over inputs $X \in \mathbb{C}^{D \times 2N}$ with unit norm entries: \begin{align} \|f - g'\|_\infty \leq \epsilon~. \end{align} \end{theorem} \begin{proof} Let $f$ be given by the $\text{Sym}_Lt$ network calculated in the previous section, i.e. \begin{align} f(X) = \rho\left(\sum_{n,n'=1}^{2N} \psi_1(x_n, x_{n'}) \right)~. \end{align} Clearly $L = 1$.
From Assumption~\ref{ass:act-real} and what it guarantees about $f_1$ and $f_2$, it's clear that the maximum width of $f$ is $O(D^3 + D^2 \log \frac{D}{\epsilon})$, the depth is $O(\log D)$, and the maximum weight magnitude is $O(D \log D)$. We can prove the quality of approximation by matching layer by layer. First we note a quick lemma: \begin{lemma}\label{lem:mult} For $|\xi|, |\omega| \leq \frac{3}{2}$: \begin{align} |\xi \star \omega - \xi \cdot \omega| \leq \frac{3}{2} \epsilon~. \end{align} \end{lemma} \begin{proof} Based on Assumption~\ref{ass:act-real}, note that for $|\xi|, |\omega| \leq \frac{3}{2}$, we have that $|\xi+\omega| \leq 3$ and therefore: \begin{align} |\xi \star \omega - \xi \cdot \omega| & \leq \frac{1}{2}\left( |f_1(\xi + \omega) - (\xi + \omega)^2| + |f_1(\xi) - \xi^2| + |f_1(\omega) - \omega^2|\right) \\ & \leq \frac{3}{2}\epsilon~. \end{align} \end{proof} It follows that, because all $|x_i| = 1$: \begin{align} \|z^* - z\|_\infty = \max_{i \leq D} |x_i \star x_i' - x_i \cdot x_i'| \leq \frac{3}{2} \epsilon~. \end{align} Now, because $|z_i^*| = 1$, it follows from our assumption on $\epsilon$ that $|z_i| \leq 1 + \frac{3}{2}\epsilon \leq 1 + \frac{1}{D}$. Hence, we can apply Lemma \hyperref[mu-six]{\ref*{lem:mu-prop}.\ref*{mu-six}} and say \begin{align} \|Z^{(1)*} - Z^{(1)}\|_\infty &= \max_{i \leq D} |\hat{\mu}_{D}(z_i^*) - f_2(z_i)| \\ & \leq \max_{i \leq D} |\hat{\mu}_{D}(z_i^*) - \hat{\mu}_{D}(z_i)| + |\hat{\mu}_D(z_i) - f_2(z_i)| \\ & \overset{(a)}{\leq} 6 \left(\frac{3}{2}\epsilon\right) + \epsilon \\ & \leq 10 \epsilon~, \end{align} where $(a)$ follows from Lemma \hyperref[mu-six]{\ref*{lem:mu-prop}.\ref*{mu-six}} and Assumption~\ref{ass:act-real} again.
Next, observe the following inequality, for any $i$: \begin{align} |Z_{2i}^{(1)*} \cdot Z_{2i+1}^{(1)*} - Z_{2i}^{(1)} \cdot Z_{2i+1}^{(1)} | & \leq |Z_{2i}^{(1)*} \cdot Z_{2i+1}^{(1)*} - Z_{2i}^{(1)*} \cdot Z_{2i+1}^{(1)} | + |Z_{2i}^{(1)*} \cdot Z_{2i+1}^{(1)} - Z_{2i}^{(1)} \cdot Z_{2i+1}^{(1)} |\\ & = |Z_{2i}^{(1)*}| \cdot |Z_{2i+1}^{(1)*} - Z_{2i+1}^{(1)}| + |Z_{2i+1}^{(1)}| \cdot |Z_{2i}^{(1)*} - Z_{2i}^{(1)}| \\ & \leq |\hat{\mu}_D(z_{2i}^*)| \cdot 10\epsilon + |f_2(z_{2i+1})| \cdot 10\epsilon \\ & \overset{(a)}{\leq} 10\epsilon(|\hat{\mu}_D(z_{2i}^*)| + |\hat{\mu}_D(z_{2i+1})| + \epsilon) \\ & \overset{(b)}{\leq} 10\epsilon\left(|\hat{\mu}_D(z_{2i}^*)| + |\hat{\mu}_D(z_{2i+1}^*)| + 6\left(\frac{3}{2}\epsilon\right) + \epsilon\right) \\ & \overset{(c)}{\leq} 10\epsilon(1 + r^{D} + 1 + r^{D} + 9\epsilon + \epsilon) \\ & \overset{(d)}{\leq} 10\epsilon(5/2) \\ & \leq 25\epsilon~, \end{align} where $(a)$ follows from Lemma~\ref{lem:expnet-epsilon}, $(b)$ follows from Lemma \hyperref[mu-six]{\ref*{lem:mu-prop}.\ref*{mu-six}}, $(c)$ follows from Lemma \hyperref[mu-three]{\ref*{lem:mu-prop}.\ref*{mu-three}}, and $(d)$ follows from the fact that $\epsilon \leq \frac{1}{100}$. Hence, to draw error bounds one layer higher, we calculate: \begin{align} \|Z^{(2)*} - Z^{(2)}\|_\infty &= \max_{i \leq D/2} |Z_{2i}^{(1)*} \cdot Z_{2i+1}^{(1)*} - Z_{2i}^{(1)} \star Z_{2i+1}^{(1)} | \\ &\leq \max_{i \leq D/2} |Z_{2i}^{(1)*} \cdot Z_{2i+1}^{(1)*} - Z_{2i}^{(1)} \cdot Z_{2i+1}^{(1)} | + |Z_{2i}^{(1)} \cdot Z_{2i+1}^{(1)} - Z_{2i}^{(1)} \star Z_{2i+1}^{(1)} | \\ & \overset{(a)}{\leq} 25\epsilon + \frac{3}{2} \epsilon \\ & \leq 27 \epsilon~, \end{align} where in line (a) we apply Lemma~\ref{lem:mult} under the assumption that $|Z_i^{(1)}| \leq \frac{3}{2}$ for all $i$.
Note that from Lemma \hyperref[mu-three]{\ref*{lem:mu-prop}.\ref*{mu-three}} \begin{align} |Z_i^{(1)}| &\leq |Z_i^{(1)} - Z_i^{(1)*}| + |Z_i^{(1)*}| \\ & \leq 10\epsilon + 1 + r^D < \frac{3}{2} \end{align} so this assumption is guaranteed. We induct upwards through layers: assume that $\|Z^{(k)*} - Z^{(k)}\|_\infty \leq 3^{k+1} \epsilon$ for $k \geq 2$. Then: \begin{align} |Z_{2i}^{(k)*} \cdot Z_{2i+1}^{(k)*} - Z_{2i}^{(k)} \cdot Z_{2i+1}^{(k)} | & \leq |Z_{2i}^{(k)*} \cdot Z_{2i+1}^{(k)*} - Z_{2i}^{(k)*} \cdot Z_{2i+1}^{(k)} | + |Z_{2i}^{(k)*} \cdot Z_{2i+1}^{(k)} - Z_{2i}^{(k)} \cdot Z_{2i+1}^{(k)} |\\ & = |Z_{2i}^{(k)*}| \cdot |Z_{2i+1}^{(k)*} - Z_{2i+1}^{(k)}| + |Z_{2i+1}^{(k)}| \cdot |Z_{2i}^{(k)*} - Z_{2i}^{(k)}| \\ & \overset{(a)}{\leq} 3^{k+1}\epsilon(|Z_{2i}^{(k)*}| + |Z_{2i+1}^{(k)}|) \\ & \overset{(b)}{\leq} 3^{k+1}\epsilon(|Z_{2i}^{(k)*}| + |Z_{2i+1}^{(k)*}| + 3^{k+1}\epsilon) \\ & \overset{(c)}{\leq} 3^{k+1}\epsilon((1+r^D)^D + (1+r^D)^D + 3^{k+1}\epsilon) \\ & \overset{(d)}{\leq} 3^{k+1}\epsilon\left(1 + 2^{-D} + 1 + 2^{-D} + \frac{1}{4}\right) \\ & \leq 3^{k+1}\epsilon \left(\frac{11}{4}\right)~, \end{align} where $(a)$ and $(b)$ are both applications of the inductive hypothesis, $(c)$ follows from Lemma \hyperref[mu-three]{\ref*{lem:mu-prop}.\ref*{mu-three}}, $(d)$ is the binomial inequality and the fact that for any $k \leq \log_2 D$: \begin{align} 3^{k+1} \epsilon & \leq 3\left(4^{\log_2 D}\right)\epsilon \\ & = 3D^2\epsilon\\ & \leq \frac{1}{4}~.
\end{align} And as before: \begin{align} \|Z^{(k+1)*} - Z^{(k+1)}\|_\infty &= \max_{i} |Z_{2i}^{(k)*} \cdot Z_{2i+1}^{(k)*} - Z_{2i}^{(k)} \star Z_{2i+1}^{(k)} | \\ &\leq \max_{i} |Z_{2i}^{(k)*} \cdot Z_{2i+1}^{(k)*} - Z_{2i}^{(k)} \cdot Z_{2i+1}^{(k)} | + |Z_{2i}^{(k)} \cdot Z_{2i+1}^{(k)} - Z_{2i}^{(k)} \star Z_{2i+1}^{(k)} | \\ & \overset{(a)}{\leq} 3^{k+1}\epsilon \left(\frac{11}{4}\right) + \frac{3}{2} \epsilon \\ & \leq 3^{k+2}\epsilon~, \end{align} where in line (a) we apply Lemma~\ref{lem:mult} under the assumption that $|Z_i^{(k)}| \leq \frac{3}{2}$ for all $i$. Note that as before \begin{align} |Z_i^{(k)}| &\leq |Z_i^{(k)} - Z_i^{(k)*}| + |Z_i^{(k)*}| \\ & \leq 3^{k+1}\epsilon + (1 + r^D)^D \\ & \leq 3^{k+1}\epsilon + 1 + 2^{-D} \leq \frac{3}{2}~, \end{align} so the assumption is satisfied. Thus, completing the induction and remembering the definition of $\psi_1$, we conclude: \begin{align} \|\psi_1^*(x_n,x_{n'}) - \psi_1(x_n,x_{n'})\|_\infty \leq 3^{\log_2 D + 1}\epsilon < 3D^2 \epsilon~.
\end{align} Hence, we can finally bound the full networks: \begin{align} \|g' - f\|_\infty &= \left\|\rho^*\left(\sum_{n,n'=1}^{2N}\psi_1^*(x_n,x_{n'})\right) - \rho\left(\sum_{n,n'=1}^{2N}\psi_1(x_n,x_{n'})\right)\right\|_\infty \\ &= \frac{1}{\|g\|_\mathcal{A}} \left\|\sum_{n,n'=1}^{2N}\psi_1^*(x_n,x_{n'}) - 4N^2 \left(\left[\frac{1}{4N^2}\sum_{n,n'=1}^{2N}\psi_1(x_n,x_{n'})\right] \star 1\right)\right\|_\infty \\ & \overset{(a)}{\leq} 4N^2 \left\|\frac{1}{4N^2} \sum_{n,n'=1}^{2N}\psi_1^*(x_n,x_{n'}) - \left(\left[\frac{1}{4N^2}\sum_{n,n'=1}^{2N}\psi_1(x_n,x_{n'})\right] \star 1\right)\right\|_\infty \\ & \overset{(b)}{\leq} 4N^2 \left\|\frac{1}{4N^2} \sum_{n,n'=1}^{2N}\psi_1^*(x_n,x_{n'}) - \frac{1}{4N^2} \sum_{n,n'=1}^{2N}\psi_1(x_n,x_{n'})\right\|_\infty + 4N^2 \cdot \frac{3}{2} \epsilon \\ & \leq 4N^2 \max_{n,n'}\left|\psi_1^*(x_n,x_{n'}) - \psi_1(x_n,x_{n'})\right| + 4N^2 \cdot \frac{3}{2} \epsilon \\ & \leq 12N^2D^2\epsilon + 6N^2\epsilon \\ & \leq 18N^2D^2 \epsilon~, \end{align} where in $(a)$ we apply the lower bound $\|g\|_\mathcal{A} \geq 1$ from \hyperref[g-two]{\ref*{lem:g-prop}.\ref*{g-two}} and in $(b)$ we once again apply Lemma~\ref{lem:mult}, valid from the fact that for all $X$ with unit norm entries: \begin{align} \left|\frac{1}{4N^2}\sum_{n,n'=1}^{2N}\psi_1(x_n,x_{n'})\right| \leq \|h\|_\infty + 3D^2 \epsilon \leq \frac{3}{2}~. \end{align} So it remains to map $\epsilon \rightarrow \frac{\epsilon}{18N^2D^2}$ in order to yield that $\|f - g'\|_\infty \leq \epsilon$. Note that this remapping only changes the maximum width to be $O\left(D^3 + D^2 \log \frac{ND}{\epsilon}\right)$. \end{proof} \section{Activation Assumption for $\exp$} We prove that the activation $\exp$ satisfies Assumption~\ref{ass:act-real}. We need the following standard fact, whose proof we include for completeness: \begin{lemma}\label{lem:unity} Fix $J$ and let $\gamma$ be a primitive $J$th root of unity.
Then \begin{align} \frac{1}{J}\sum_{j=0}^{J-1} \gamma^{ij} = \begin{cases} 1 & i \equiv 0 \mod J \\ 0 & i \not\equiv 0 \mod J \end{cases} \end{align} \end{lemma} \begin{proof} If $i \equiv 0 \mod J$, then $\gamma^{ij} = 1$ for every integer $j$ and clearly \begin{align} \frac{1}{J}\sum_{j=0}^{J-1} \gamma^{ij} = 1~. \end{align} Suppose $i \not\equiv 0 \mod J$. Note that any $J$th root of unity $x$ must satisfy $x^J = 1$, or equivalently \begin{align} (1-x)\left(\sum_{j=0}^{J-1} x^j \right) = 0~. \end{align} Because $i \not\equiv 0 \mod J$ and $\gamma$ is a primitive root, it follows that $\gamma^i \neq 1$ is another root. Therefore setting $x = \gamma^i$ and factoring out the non-zero term $(1 - \gamma^i)$ gives \begin{align} \sum_{j=0}^{J-1} \gamma^{ij} = 0~. \end{align} \end{proof} Using this fact, we can approximate simple analytic functions via shallow networks in the $\exp$ activation. \begin{lemma}\label{lem:expnet} For any $J \in \mathbb{N}$ with $J > D$, there exist shallow neural networks $f_1, f_2$ using the $\exp$ activation, with $O(JD)$ neurons and weight magnitudes $O(D\log D)$, such that \begin{align} \sup_{|\xi| \leq 3}\left|f_1(\xi) - \xi^2\right| &\leq \frac{128}{J!} \left(\frac{3}{4}\right)^J \\ \sup_{|\xi| \leq 3}\left|f_2(\xi) - \hat{\mu}_{D}(\xi)\right| &\leq 17D \left(\frac{3}{4}\right)^J ~. \end{align} \end{lemma} \begin{proof} Let $\gamma$ be a primitive $J$th root of unity, $r = 1/4$, and let $k \in \mathbb{N}$ such that $0 \leq k \leq J-1$.
By applying Lemma~\ref{lem:unity} we can define a network $f^{(k)}$ and expand as: \begin{align} f^{(k)}(\xi) & := \sum_{j=0}^{J-1} \frac{\gamma^{-kj}}{J} \exp(\gamma^j r \xi) \\ & = \sum_{j=0}^{J-1} \frac{\gamma^{-kj}}{J} \sum_{i=0}^\infty \frac{(\gamma^j r \xi)^i}{i!} \\ & = \sum_{i=0}^\infty \frac{(r\xi)^i}{i!} \left[\frac{1}{J} \sum_{j=0}^{J-1} \gamma^{(i-k)j}\right] \\ & = \sum_{i=0}^\infty \frac{(r\xi)^i}{i!} \mathbbm{1}_{i \equiv k \mod J} \\ & = \sum_{i=0}^\infty \frac{(r\xi)^{iJ + k}}{(iJ + k)!} \\ & = \frac{(r\xi)^k}{k!} + \sum_{i=1}^\infty \frac{(r\xi)^{iJ + k}}{(iJ + k)!} ~. \end{align} It follows that we can bound: \begin{align} \sup_{|\xi| \leq 3} \left|f^{(k)}(\xi) - \frac{(r\xi)^k}{k!}\right| & \leq \sum_{i=1}^\infty \left|\frac{(r\xi)^{iJ + k}}{(iJ + k)!}\right| \\ & \leq \frac{1}{J!} \sum_{i=1}^\infty \left(\frac{3}{4}\right)^{iJ + k} \\ & \leq \frac{1}{J!} \left(\frac{3}{4}\right)^J \sum_{i=0}^\infty \left(\frac{3}{4}\right)^{iJ} \\ & \leq \frac{1}{J!} \left(\frac{3}{4}\right)^J \frac{1}{1 - (3/4)^J} \\ & \leq \frac{4}{J!} \left(\frac{3}{4}\right)^J~, \end{align} so we can define \begin{align} f_1(\xi) := \frac{2}{r^2} f^{(2)}(\xi) \end{align} with only $J$ neurons, each of weight magnitude at most $O(1)$, and instantly gain the bound \begin{align} \sup_{|\xi| \leq 3} \left|f_1(\xi) - \xi^2 \right| &= \sup_{|\xi| \leq 3} \frac{2}{r^2} \left|f^{(2)}(\xi) - \frac{(r\xi)^2}{2!}\right| \\ & \leq \frac{2}{r^2} \cdot \frac{4}{J!} \left(\frac{3}{4}\right)^J~. \end{align} Second, we define \begin{align} f_2(\xi) &:= r \left(\sum_{k=0}^{D-1} k! f^{(k)}(\xi) \right) - \sum_{k=1}^{D} \frac{k!}{r} f^{(k)}(\xi)~. \end{align} First, let us note that, in spite of seeming to have factorial weights, we can write this network with small weights via properties of the exponential: \begin{align} f_2(\xi) &= r \left(\sum_{k=0}^{D-1} \exp(\log k!)
f^{(k)}(\xi) \right) - \sum_{k=1}^{D} \frac{\exp(\log k!)}{r} f^{(k)}(\xi)\\ & = r \sum_{k=0}^{D-1} \sum_{j=0}^{J-1} \frac{\gamma^{-kj}}{J} \exp(\log k! + \gamma^j r \xi) - \sum_{k=1}^{D} \frac{1}{r} \sum_{j=0}^{J-1} \frac{\gamma^{-kj}}{J} \exp(\log k! + \gamma^j r \xi)~. \end{align} The network contains $2DJ$ neurons, with the norm of each weight bounded by $O(D \log D)$. Then using the decomposition $$\hat{\mu}_D(\xi) = r \sum_{k=0}^{D-1} (r\xi)^k - \frac{1}{r} \sum_{k=1}^{D} (r\xi)^k$$ we derive: \begin{align} \sup_{|\xi| \leq 3} \left|f_2(\xi) - \hat{\mu}_{D}(\xi) \right| & \leq \sup_{|\xi| \leq 3} \left|r \left(\sum_{k=0}^{D-1} k! f^{(k)}(\xi) \right) - r \sum_{k=0}^{D-1} (r\xi)^k \right| + \left| \left(\sum_{k=1}^{D} \frac{k!}{r} f^{(k)}(\xi) \right) - \frac{1}{r} \sum_{k=1}^{D} (r\xi)^k \right| \\ & \leq \left(\sum_{k=0}^{D-1} rk!\right) \frac{4}{J!} \left(\frac{3}{4}\right)^J + \left(\sum_{k=1}^D \frac{k!}{r}\right) \frac{4}{J!} \left(\frac{3}{4}\right)^J \\ & \leq 17D \left(\frac{3}{4}\right)^J~. \end{align} \end{proof} Now, let us restate this result, choosing the error rate $\epsilon$ explicitly: \begin{lemma}\label{lem:expnet-epsilon} For any $\epsilon > 0$, there exist shallow neural networks $f_1, f_2$ using the $\exp$ activation, with $O\left(D^2 + D \log \frac{D}{\epsilon}\right)$ neurons and weight magnitudes $O(D \log D)$, such that \begin{align} \sup_{|\xi| \leq 3}\left|f_1(\xi) - \xi^2\right| &\leq \epsilon ~,\\ \sup_{|\xi| \leq 3}\left|f_2(\xi) - \hat{\mu}_{D}(\xi)\right| &\leq \epsilon ~. \end{align} \end{lemma} We remark again that, in the event $D > \sqrt{N/2}$, we replace $D$ with $\hat{D}$ in order to approximate the Blaschke product $\hat{\mu}_{\hat{D}}$, as this is the function we use to build the hard function $g$ in that case. So we recover the statement of Assumption~\ref{ass:act-real}. \end{document}
\begin{document} \title{Configuration spaces in algebraic topology} \begin{abstract} These expository notes are dedicated to the study of the topology of configuration spaces of manifolds. We give detailed computations of many invariants, including the fundamental group of the configuration spaces of $\mathbb{R}^2$, the integral cohomology of the ordered---and the mod $p$ cohomology of the unordered---configuration spaces of $\mathbb{R}^n$, and the rational cohomology of the unordered configuration spaces of an arbitrary manifold of odd dimension. We also discuss models for mapping spaces in terms of labeled configuration spaces, and we show that these models split stably. Some classical results are given modern proofs premised on hypercover techniques, which we discuss in detail. \end{abstract} \tableofcontents \section{Introduction} These notes are concerned with the study of the following spaces. \begin{blankdefinition} The \emph{configuration space} of $k$ ordered points in the topological space $X$ is \[\mathrm{Conf}_k(X):=\{(x_1,\ldots, x_k)\in X^k: x_i\neq x_j\text{ if } i\neq j\},\] endowed with the subspace topology.\footnote{The reader is warned that, in the literature on configuration spaces, there are almost as many traditions of notation as there are references.} The \emph{unordered} configuration space is the quotient \[B_k(X):=\mathrm{Conf}_k(X)/\Sigma_k.\] \end{blankdefinition} Our goal is to attempt to understand the topology of these spaces, in the case of $X$ a manifold, through the lens of homotopy groups and (co)homology. Before beginning our study in earnest, we first mention a few reasons that configuration spaces are particularly interesting objects of study. \subsection{Invariants} The homotopy type of a fixed configuration space is a homeomorphism invariant of the background manifold, and these invariants tend to remember a rather large amount of information. 
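Though our interest is in manifolds, the definitions above make sense for any space; a throwaway computation with $X$ a finite set (purely illustrative, since finite sets are of course not manifolds) shows how the free $\Sigma_k$-action relates the two notions:

```python
# Toy illustration with X a finite set: Conf_k(X) consists of injective
# k-tuples, B_k(X) of k-element subsets, and since the Sigma_k-action on
# Conf_k(X) is free, ordered configurations outnumber unordered ones by k!.
from itertools import combinations, permutations
from math import factorial

X = range(6)   # a stand-in "space" with six points
k = 3

conf = list(permutations(X, k))     # ordered: tuples with distinct entries
unord = list(combinations(X, k))    # unordered: k-element subsets

assert len(conf) == factorial(k) * len(unord)   # 120 == 6 * 20
```

For manifolds the quotient map $\mathrm{Conf}_k(X)\to B_k(X)$ is likewise a free quotient, and in fact a covering map, a point we will use repeatedly.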
A simple-minded example is provided by Euclidean spaces of different dimension; indeed, as we will see, there is a homotopy equivalence $B_2(\mathbb{R}^m)\simeq B_2(\mathbb{R}^n)$ if and only if $m=n$. In other words, configuration spaces are sensitive to the dimension of a manifold. A somewhat more sophisticated example is provided by the fact that $B_2(T^2\setminus \mathrm{pt})\not\simeq B_2(\mathbb{R}^2\setminus S^0)$, which can be shown by a homology calculation. Note that $T^2\setminus \mathrm{pt}$ and $\mathbb{R}^2\setminus S^0$ have the same dimension and homotopy type, having $S^1\vee S^1$ as a common deformation retract. On the other hand, $(T^2\setminus\mathrm{pt})^+\cong T^2\not\simeq S^1\vee S^1\vee S^2\simeq (\mathbb{R}^2\setminus S^0)^+$, so we might conclude from this example that configuration spaces are sensitive to the \emph{proper} homotopy type of a manifold. In order to discuss the most striking illustration of the sensitivity of configuration spaces, we recall that the \emph{Lens spaces} are a family of compact $3$-manifolds given by \[L(p,q):=S^3/C_p,\] where the cyclic group $C_p$ acts on $S^3\subseteq \mathbb{C}^2$ by multiplication by $(e^{2\pi i/p}, e^{2\pi iq/p})$. It is a classical theorem of Reidemeister that \begin{align*} L(p,q_1)\simeq L(p, q_2)&\iff q_1q_2\equiv \pm a^2\mod p\text{ for some } a\in\mathbb{Z}\\ L(p,q_1)\cong L(p,q_2)&\iff q_1\equiv \pm q_2^{\pm 1}\mod p. \end{align*} In particular, $L(7,1)$ and $L(7,2)$ are homotopy equivalent but not homeomorphic, and, according to a theorem of Longoni--Salvatore \cite{LongoniSalvatore:CSNHI}, their configuration spaces distinguish them. Thus, configuration spaces are sensitive at least to the \emph{simple} homotopy type of a manifold. \subsection{Braids} A point moving in $B_k(\mathbb{R}^2)$ traces out $k$ different paths that weave among one another but can never overlap.
For this reason, we think of the fundamental group $\pi_1(B_k(\mathbb{R}^2))$ as the group of geometric \emph{braids} on $k$ strands, with composition given by concatenation of braids. As we shall see, this braid group admits the remarkably simple presentation \[\pi_1(B_k(\mathbb{R}^2))\cong \langle \sigma_1,\ldots, \sigma_{k-1}\mid\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1},\, \sigma_i\sigma_j=\sigma_j\sigma_i \text{ if } |i-j|>1\rangle,\] due originally to Artin \cite{Artin:TB}. The combinatorial, algebraic, and geometric properties of these and related braid groups are of fundamental importance to a vast swath of mathematics that encompasses knot theory, mapping class groups \cite{Birman:BLMCG}, quantum groups \cite{Kassel:QG}, category theory \cite{JoyalStreet:BMC}, and motion planning \cite{Farber:CSRMPA}. \subsection{Embeddings} In the company of manifolds with trivialized tangent bundles, it is possible to speak of a \emph{framed embedding}, which is to say an embedding respecting the fixed trivialization, possibly up to a homotopy through bundle maps. The tangent bundle of any Euclidean space is canonically trivialized, and evaluation at the origin determines a homotopy equivalence \[\mathrm{Emb}^\mathrm{fr}(\amalg_k\mathbb{R}^n, \mathbb{R}^n)\xrightarrow{\sim} \mathrm{Conf}_k(\mathbb{R}^n).\] As a consequence of this observation and the fact that framed embeddings compose, we see that the collection $\{\mathrm{Conf}_k(\mathbb{R}^n)\}_{k\geq0}$ of homotopy types is equipped with hidden algebraic structure. This algebraic structure is that of an \emph{operad}, which goes by the name of $E_n$. This perspective has many important descendants, three of which we name. \subsection{Iterated loop spaces} For any $X$, there is a collection of maps \[\mathrm{Emb}^\mathrm{fr}(\amalg_k\mathbb{R}^n,\mathbb{R}^n)\times (\Omega^nX)^k\to \Omega^n X,\] which arise from a variant of the Pontrjagin--Thom collapse construction.
The compatibilities among these maps are summarized by saying that $\Omega^n X$ is an $E_n$-\emph{algebra}; in particular, the homology groups of the configuration spaces of $\mathbb{R}^n$ encode algebraic operations on $H_*(\Omega^nX)$. Remarkably, there is also a partial converse, due to May \cite{May:GILS}, which amounts to an algebraic classification of $n$-fold loop spaces. \subsection{Factorization homology} The operad $E_n$ is defined in terms of embeddings among disjoint unions of Euclidean spaces, and so, after choosing a coordinate chart, we think of the structure of an $E_n$-algebra $A$ as being ``defined locally'' on a manifold $M$. Patching this local structure together across the elements of an atlas produces a manifold invariant, called the factorization homology of $M$ with coefficients in $A$ and denoted $\int_MA$, which can be thought of as a kind of space of configurations in $M$ labeled by elements of the algebra \cite{AyalaFrancis:FHTM}. In the example of an $n$-fold loop space discussed above, the \emph{non-Abelian Poincar\'{e} duality} of Salvatore \cite{Salvatore:CSSl} and Lurie \cite{Lurie:HA} supplies the identification \[\int_M\Omega^nX\xrightarrow{\sim} \mathrm{Map}_c(M, X),\] as long as $X$ is $(n-1)$-connected. Later, we will encounter a precursor to this result in the configuration space models for mapping spaces introduced by McDuff \cite{McDuff:CSPNP,Boedigheimer:SSMS}. In a sense, factorization homology is a method for using configuration spaces to probe manifolds, but it can also be used to study the configuration spaces themselves, reversing this flow of information \cite{Knudsen:BNSCSVFH}. In particular, a theorem of Ayala--Francis \cite{AyalaFrancis:FHTM} shows that configuration spaces enjoy a kind of Mayer--Vietoris property: for a decomposition $M\cong M_1\cup_{N\times\mathbb{R}}M_2$, there is a quasi-isomorphism \[C_*(B(M))\simeq C_*(B(M_1))\bigotimes^\mathbb{L}_{C_*(B(N\times\mathbb{R}))}C_*(B(M_2)),\] where $B(X):=\coprod_{k\geq0} B_k(X)$.
The fundamental fact underlying this quasi-isomorphism is the contractibility of the unordered configuration spaces of $\mathbb{R}$. \subsection{Embedding calculus} The embedding calculus of Goodwillie and Weiss \cite{Weiss:EPVIT,GoodwillieWeiss:EPVIT} produces a tower of approximations \[\xymatrix{ &\vdots\ar[d]\\ &T_2\mathrm{Emb}(M,N)\ar[d]\\ \mathrm{Emb}(M,N)\ar[uur]\ar[ur]\ar[r]&T_1\mathrm{Emb}(M,N), }\] which can be thought of as \emph{algebraic} approximations, where algebra is construed in the operadic sense. Often these approximations become arbitrarily good---in particular, according to a theorem of Goodwillie--Klein \cite{GoodwillieKlein:MDSSE}, this occurs in codimension at least 3---so that one obtains a cofiltration of the space of embeddings. The layers of this cofiltration are described as spaces of sections of certain bundles over configuration spaces, so hard questions about embeddings can sometimes be translated, at least in principle, into softer questions about the ordinary topology of configuration spaces. \subsection{What is in these notes?} Our guiding mathematical impulse is the basic imperative of the algebraic topologist. Configuration spaces are interesting; therefore, it is worthwhile to compute their homotopy groups and (co)homology. From a more pedagogical standpoint, we have attempted to design a set of notes that might help to fulfill three goals of a graduate student hoping to enter the field: first, to become familiar with some of what is already known; second, to acquire a set of modern tools; and, third, to see how these tools might be applied in practice. For this reason, we have opted to explore a range of classical topics with a mix of classical and non-classical techniques, developing the latter along the way, with some old results being given new proofs. In particular, emphasis is placed on the use of hypercover techniques, which the author has found indispensable in his own work. 
We now give a linear outline of the contents of these notes, including primary references. \begin{enumerate} \item[\S\ref{section:configuration spaces}] \emph{Configuration spaces and braids}. After exploring examples and developing basic properties, we exploit the Fadell--Neuwirth fibrations \cite{FadellNeuwirth:CS} to deduce a number of easy computations of homotopy groups. Afterward, we turn to the identification of $\pi_1(B_k(\mathbb{R}^2))$ with the braid group \cite{Artin:TB, FoxNeuwirth:BG}. We follow one of the approaches outlined in \cite{Birman:BLMCG}, reviewing the Reidemeister--Schreier method along the way from the topological viewpoint \cite{ZieschangVogtColdeway:SPDG}. We defer the proof of Fadell--Neuwirth, which will eventually be our first use of hypercover techniques. \item[\S\ref{section:cohomology}] \emph{(Co)homology of $\mathrm{Conf}_k(\mathbb{R}^n)$}. The main result of this section is the computation of the integral cohomology ring of $\mathrm{Conf}_k(\mathbb{R}^n)$ \cite{Arnold:CRCBG, CohenLadaMay:HILS}. Our argument is essentially that of Cohen. As it turns out, the natural basis in homology is more convenient for some purposes, and we develop this viewpoint in detail following \cite{Sinha:HLDO}. We close by using these computations to deduce the rational homology of $B_k(\mathbb{R}^n)$. \item[\S\ref{section:covering theorems}] \emph{Covering theorems}. We give a detailed development of the machinery of \v{C}ech covers, hypercovers, and complete covers following \cite{DuggerIsaksen:THAR}. The main result is that the weak homotopy type of a space may be recovered as the homotopy colimit over a complete cover---a topological basis, for example. \item[\S\ref{section:deferred proofs}] \emph{Deferred proofs}. We use the covering techniques of the previous section, together with Quillen's Theorem B, to prove Fadell--Neuwirth, which was the main tool in \S\ref{section:configuration spaces}.
We then use the same techniques to construct the Serre spectral sequence following \cite{Segal:CSSS}, from which we deduce the Leray--Hirsch theorem, which was used in \S\ref{section:cohomology}. \item[\S\ref{section:mapping space models}] \emph{Mapping space models}. We introduce configuration spaces with labels in a pointed space \cite{McDuff:CSPNP, Boedigheimer:SSMS}, which we motivate as a type of homology theory for manifolds following \cite{AyalaFrancis:FHTM}. We give a new proof of the analogue of the exactness axiom for homology, namely the existence of a certain homotopy pullback square, using hypercovers and Quillen's Theorem B. Using scanning techniques following \cite{Boedigheimer:SSMS}, we show that labeled configuration spaces model mapping spaces, an analogue of Poincar\'{e} duality. \item[\S\ref{section:homology from splittings}] \emph{Homology calculations from stable splittings}. The main result is the computation of the rational homology of $B_k(M)$ for odd-dimensional $M$ following \cite{BoedigheimerCohenTaylor:OHCS}. The key ingredient is to show that labeled configuration spaces split stably \cite{Boedigheimer:SSMS, CohenMayTaylor:SCSC}. \item[\S\ref{section:mod p cohomology}] \emph{Mod $p$ cohomology}. The main result is the computation of the mod $p$ cohomology of $B_p(\mathbb{R}^n)$. We follow the argument of \cite[III]{CohenLadaMay:HILS} in the large, giving simplified and corrected proofs of several key steps using the homology basis developed in \S\ref{section:cohomology}. \item[\S\ref{section:lie algebra methods}] \emph{Postscript: Lie algebra methods}. We give an informal discussion of the main result of \cite{Knudsen:BNSCSVFH}, which expresses the rational homology of $B_k(M)$ for arbitrary $M$ as Lie algebra homology. We use this expression to give a proof of homological stability for configuration spaces. \item[\ref{appendix:split simplicial spaces}] \emph{Split simplicial spaces}.
In this appendix, following \cite{DuggerIsaksen:THAR}, we develop a criterion guaranteeing that a degreewise weak equivalence between simplicial spaces induces a weak equivalence after geometric realization. We then verify that this criterion is satisfied in the cases of interest. \item[\ref{section:homotopy colimits}] \emph{Homotopy colimits}. We motivate the bar construction model for the homotopy colimit and discuss its justification through the theory of derived functors and model categories \cite{Quillen:HA, Hirschhorn:MCL, Dugger:PHC}. We give a detailed proof that the bar construction is cofibrant in the projective model structure on functors. We discuss homotopy left Kan extensions and prove Quillen's Theorem B following \cite{GoerssJardine:SHT}. \item[\ref{appendix:Spanier--Whitehead}] \emph{The Spanier--Whitehead category}. We discuss a very rudimentary form of stable homotopy theory, which is necessary for the theorem on stable splittings \cite{SpanierWhitehead:FAHT, DellAmbrogio:SWCAT}. We avoid the need to introduce the full machinery of spectra with the notion of a filtered stable weak equivalence. \item[\ref{appendix:periodicity}] \emph{Tate cohomology and periodicity}. We review some facts about Tate cohomology \cite{CartanEilenberg:HA} and give a proof of the periodicity theorem used in \S\ref{section:mod p cohomology} following \cite{Swan:PPFG}. \end{enumerate} \subsection{What is not in these notes?} The title of these notes could---and someday, perhaps, will---be the title of a book, or of a book series. That being the case, there are vast and important swaths of the literature that we do not touch. What follows is a non-exhaustive list of omissions, in no particular order. \begin{itemize} \item Iterated loop space theory \cite{May:GILS, Segal:CSILS} and the computation of the mod $p$ homology of $B_k(\mathbb{R}^n)$ with all of the induced structure \cite[III]{CohenLadaMay:HILS}.
\item Factorization homology \cite{AyalaFrancis:FHTM} and embedding calculus \cite{Weiss:EPVIT}. \item The Cohen--Taylor--Totaro spectral sequence \cite{Totaro:CSAV} and representation stability \cite{ChurchEllenbergFarb:FIMSRSG}. \item The Fulton--MacPherson compactification \cite{FultonMacPherson:CCS,Sinha:MTCCS} and formality \cite{Kontsevich:OMDQ, LambrechtsVolic:FLNDO}. \item The failure of homotopy invariance \cite{LongoniSalvatore:CSNHI}. \item Configuration spaces of graphs \cite{Abrams:CSBGG, Ghrist:CSBGGR, AnDrummond-ColeKnudsen:SSGBG}. \item Topological combinatorics \cite{Farber:TBPI, BlagojevichLueckZiegler:ETCS}. \end{itemize} The author hopes to remedy some of these omissions---as well as the unforgivable absence of pictures---in future versions of these notes. \subsection{Prerequisites} Sections \ref{section:configuration spaces}-\ref{section:cohomology} of these notes should be accessible to anyone with a first course in algebraic topology \cite{Hatcher:AT}. In particular, facts about homology, cohomology, homotopy groups, covering spaces, and the like are used freely and without comment. In the remainder of the notes, particularly in \S\ref{section:covering theorems}, it will be helpful to have a working knowledge of basic category theory \cite{MacLane:CWM} and simplicial methods \cite{GoerssJardine:SHT}. We make some use of the Serre spectral sequence, which we review at the appropriate time, albeit briefly, and increasingly heavy use of homotopy colimits, which are discussed in some detail in Appendix \ref{section:homotopy colimits}. We make do with a very primitive form of stable homotopy theory, which is reviewed in Appendix \ref{appendix:Spanier--Whitehead}. No knowledge of spectra is assumed or used (although it never hurts). \subsection{Conventions} For convenience, we take manifolds to be smooth of finite type. In dealing with graded vector spaces, we make use of the usual Koszul sign convention. 
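Concretely, the Koszul convention dictates that a sign appears whenever two homogeneous symbols move past one another; for instance, multiplication in a tensor product of graded algebras $A$ and $B$ is given on homogeneous elements by
```latex
\[
(a\otimes b)\cdot(c\otimes d)=(-1)^{|b||c|}\,(ac)\otimes(bd),
\]
```
so that, in particular, odd elements of the two tensor factors anticommute.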
For $V$ a graded vector space, $\mathrm{Sym}(V)=\bigoplus_{k\geq0}\mathrm{Sym}^k(V)$ is the tensor product of a polynomial algebra on the even part of $V$ with an exterior algebra on the odd part of $V$. We grade homologically and write $V[r]$ for the homological suspension of $V$---that is, the degree $n$ part of $V[r]$ is the degree $n-r$ part of $V$. \subsection{Acknowledgments} These notes grew out of a course of the same name that I taught during the fall semester of 2017 at Harvard University. I am grateful to the members of the class for their engagement and to Harvard for the opportunity to design a course with complete freedom. Extra thanks are due to Sander Kupers for catching innumerable errors while they were still on the blackboard (if he missed any, we can blame him). I wish to acknowledge a few key influences on the development of the course and notes. First and most important of these is the mathematics and metamathematics that I learned from my advisor, John Francis, without whom, I am certain, these notes would not exist. Second, the theory of hypercovers and complete covers developed by Dugger--Isaksen \cite{DuggerIsaksen:THAR} plays a central role here, as it has in most of what I have done as a mathematician. Third, the geometric description of the homology of configuration spaces in terms of planetary systems given by Sinha \cite{Sinha:HLDO} is not only very beautiful but also very useful, as it allows for greatly simplified arguments in some cases. Fourth, I would like to thank Haynes Miller for his sustained encouragement in the teaching of the course and the writing of these notes. Finally, I thank everyone whose enthusiastic conversation has helped to foster my love for the ideas discussed here. 
In alphabetical order, with many omissions, this group includes Byung Hee An, Lauren Bandklayder, Mark Behrens, Lukas Brantner, Tom Church, Gabriel Drummond-Cole, Jordan Ellenberg, Elden Elmanto, Benson Farb, Ezra Getzler, Paul Goerss, Owen Gwilliam, Kathryn Hess, Sander Kupers, Pascal Lambrechts, Daniel L\"{u}tgehetmann, Jeremy Miller, Sam Nariman, Martin Palmer, Dan Petersen, Dev Sinha, Joel Specter, Alex Suciu, Hiro Tanaka, Nathalie Wahl, Brian Williams, Dylan Wilson, and Jesse Wolfson. \section{Configuration spaces and braids}\label{section:configuration spaces} \subsection{Basic examples} In order to develop a feel for how configuration spaces behave, we begin with a few examples. \begin{example}[$\varnothing$, $\mathbb{R}^0$] It is usually best to begin with the trivial cases. In the case of the empty manifold, we have \[\mathrm{Conf}_k(\varnothing)=\begin{cases} \mathrm{pt}&\quad k=0\\ \varnothing&\quad\text{ else,} \end{cases}\] while \[\mathrm{Conf}_k(\mathbb{R}^0)=\begin{cases} \mathrm{pt}&\quad k=0,1\\ \varnothing&\quad \text{else.} \end{cases}\] Notice that $\mathrm{Conf}_0(X)$ is a singleton for any space $X$. \end{example} \begin{example}[$\mathbb{R}\cong (0,1)$] From the definition $\mathrm{Conf}_2((0,1))=(0,1)^2\setminus\{(x,y):x=y\}$, it is clear that the ordered configuration space of two points in $(0,1)$ is a disjoint union of two open 2-simplices, and that $\Sigma_2$ acts by permuting these components. In particular, $B_2((0,1))\cong \mathring{\Delta}^2\simeq \mathrm{pt}$. This description generalizes readily to higher $k$. Note that the natural orientation of $(0,1)$ induces a second ordering on the coordinates of any configuration, which is to say a permutation of $\{1,\ldots, k\}$, that any such permutation is possible, and that the assignment of configuration to permutation is locally constant by the intermediate value theorem.
Thus, we have a $\Sigma_k$-equivariant bijection $\pi_0(\mathrm{Conf}_k((0,1)))\cong\Sigma_k$, and the unordered configuration space is naturally identified with the path component of the identity permutation. For this space, we define a map \begin{align*} B_k((0,1))&\to\mathring{\Delta}^k:=\left\{(t_0,\ldots, t_k)\in \mathbb{R}^{k+1}: t_i>0,\, \sum_{i=0}^kt_i=1\right\}\\ \{x_1,\ldots, x_k\}&\mapsto (x_1, x_2-x_1, \ldots, 1-x_k), \end{align*} where the set $\{x_1,\ldots, x_k\}$ is ordered so that $x_1<\cdots< x_k$. This map is a homeomorphism; in particular, the unordered configuration spaces of $\mathbb{R}$ are all contractible. \end{example} \begin{example}[$\mathbb{R}^n$] In the configuration space of two points in $\mathbb{R}^n$, there are exactly three pieces of data---the direction, the center of mass, and the distance---and only one of these is not a contractible choice. Precisely, the map \begin{align*}\mathrm{Conf}_2(\mathbb{R}^n)&\to S^{n-1}\times\mathbb{R}_{>0}\times\mathbb{R}^n\\ (x_1,x_2)&\mapsto \left(\frac{x_2-x_1}{\|x_2-x_1\|}, \|x_2-x_1\|,\frac{x_1+x_2}{2}\right) \end{align*} is a homeomorphism, and, in particular, the \emph{Gauss map} \[\mathrm{Conf}_2(\mathbb{R}^n)\to S^{n-1}\] given by the first coordinate of this homeomorphism is a homotopy equivalence. Since this map is also $\Sigma_2$-equivariant for the antipodal action on the target, we conclude that $B_2(\mathbb{R}^n)\simeq \mathbb{RP}^{n-1}$. The Gauss map will play a fundamental role in our later study; in particular, by pulling back a standard volume form, this map furnishes us with our first example of a nonzero higher dimensional class in the cohomology of configuration spaces. 
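For concreteness, the inverse homeomorphism reconstructs the configuration from the direction $u\in S^{n-1}$, the distance $r>0$, and the center of mass $m\in\mathbb{R}^n$:
```latex
\[
(u,r,m)\mapsto\left(m-\frac{r}{2}\,u,\; m+\frac{r}{2}\,u\right).
\]
```
Contracting the last two coordinates shows again that the Gauss map is a homotopy equivalence.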
We cannot be as explicit about $\mathrm{Conf}_k(\mathbb{R}^n)$ for higher $k$, but it should be noted that this space is an example of the complement of a \emph{hyperplane arrangement}, since \[\mathrm{Conf}_k(\mathbb{R}^n)=\mathbb{R}^{nk}\setminus \bigcup_{1\leq i<j\leq k}\{(x_1,\ldots, x_k)\in\mathbb{R}^{nk}: x_i=x_j\},\] which is an interesting and classical type of mathematical object in its own right with its own literature \cite{OrlikTerao:AH}. \end{example} \begin{example}[$S^1$] By definition $\mathrm{Conf}_2(S^1)$ is the $2$-torus with its diagonal removed, which is homeomorphic to a cylinder, and, by cutting and pasting, one can see directly that the corresponding unordered configuration space is the open M\"{o}bius band \cite{Tuffley:FSSS}. In the general case \cite[p. 292]{CrabbJames:FHT}, make the identification $S^1=\mathbb{R}/\mathbb{Z}$, and consider first the subspace $A\subseteq \mathrm{Conf}_k(S^1)$ of configurations $(x_1,\ldots, x_k)$ whose ordering coincides with the cyclic ordering induced by the standard orientation of $S^1$. For $1\leq i\leq k$, the difference $t_i=x_{i+1}-x_i\in (0,1)$ is well-defined, where we set $x_{k+1}=x_1+1$, and $\sum_{i=1}^kt_i=1$. Recording the normalized first coordinate and the $t_i$ defines a $C_k$-equivariant homeomorphism $A\cong S^1\times\mathring{\Delta}^{k-1}$, which induces a $\Sigma_k$-equivariant homeomorphism \[(S^1\times\mathring{\Delta}^{k-1})\times_{C_k}\Sigma_k\xrightarrow{\simeq} \mathrm{Conf}_k(S^1).\] In particular, $B_k(S^1)\simeq S^1/C_k\cong S^1$ for $k>0$. \end{example} \begin{example}[$S^n$] In the case of two points in the $n$-sphere, as with Euclidean space, the choice of the direction in $S^n$ between the two points is the fundamental parameter.
Precisely, the projection onto the first coordinate defines a map $\mathrm{Conf}_2(S^n)\to \mathrm{Conf}_1(S^n)$, and recording the unit tangent vector in the direction of the second coordinate produces a commuting diagram \[\xymatrix{ \mathrm{Conf}_2(S^n)\ar[d]\ar[r]&\mathrm{Sph}(TS^n)\ar[d]\\ \mathrm{Conf}_1(S^n)\ar@{=}[r]&S^n }\] in which the top map is a homotopy equivalence. The projection map is a special case of a family of maps, the \emph{Fadell--Neuwirth fibrations} \cite{FadellNeuwirth:CS}, which relate different cardinalities of configuration space in any background manifold. The existence of this family of maps, which will be one of our most important tools in what follows, places very strong constraints on configuration spaces; for example, it ultimately implies that the $i$th Betti number of $B_k(M)$ is constant for large $k$ \cite{Church:HSCSM,RandalWilliams:HSUCS}. This phenomenon of \emph{homological stability}, and the corresponding \emph{representation stability} in the ordered case, has since become an active area of study in its own right \cite{Farb:RS}. \end{example} \subsection{First properties} We begin our formal study of configuration spaces by cataloguing a few of their basic features. First, we note that, if $f:X\to Y$ is an injective continuous map, we have the $\Sigma_k$-equivariant factorization \[\xymatrix{ X^k\ar[rr]^-{f^k}&&Y^k\\ \mathrm{Conf}_k(X)\ar[u]\ar@{-->}[rr]^-{\mathrm{Conf}_k(f)}&&\mathrm{Conf}_k(Y)\ar[u] }\] and thus an induced map $B_k(f):B_k(X)\to B_k(Y)$. 
This construction respects composition and identities by inspection, so we have functors \begin{align*} \mathrm{Conf}_k:\mathcal{T}\mathrm{op}^{\mathrm{inj}}&\to \mathcal{T}\mathrm{op}^{\Sigma_k}\\ B_k:\mathcal{T}\mathrm{op}^{\mathrm{inj}}&\to \mathcal{T}\mathrm{op}, \end{align*} where $\mathcal{T}\mathrm{op}^\mathrm{inj}$ denotes the category of topological spaces and injective continuous maps, $\mathcal{T}\mathrm{op}^{\Sigma_k}$ the category of $\Sigma_k$-spaces and equivariant maps, and $\mathcal{T}\mathrm{op}$ the category of topological spaces. \begin{proposition} If $f:X\to Y$ is an open embedding, then $\mathrm{Conf}_k(f):\mathrm{Conf}_k(X)\to \mathrm{Conf}_k(Y)$ and $B_k(f):B_k(X)\to B_k(Y)$ are also open embeddings. \end{proposition} \begin{proof} From the definition of the product topology, $f^k:X^k\to Y^k$ is an open embedding, and the first claim follows. The second claim follows from the first and the fact that $\pi:\mathrm{Conf}_k(X)\to B_k(X)$ is a quotient map. \end{proof} These functors also respect a certain class of weak equivalences, for which we now introduce some (very nonstandard) terminology. \begin{definition} Let $f,g:X\to Y$ be injective continuous maps. \begin{enumerate} \item We say that a map $H:X\times[0,1]\to Y$ is a \emph{monotopy} between $f$ and $g$ if it is a homotopy between $f$ and $g$ and if $H_t$ is injective for every $t\in [0,1]$. \item We say that $f$ and $g$ are \emph{monotopic} if there is such a monotopy. \item We say that an injective continuous map $f:X\to Y$ is a \emph{monotopy equivalence} if there is an injective continuous map $g:Y\to X$ such that $g\circ f$ is monotopic to $\id_X$ and $f\circ g$ is monotopic to $\id_Y$. \end{enumerate} \end{definition} \begin{proposition}\label{prop:monotopy} If $f$ and $g$ are monotopic, then $\mathrm{Conf}_k(f)$ and $\mathrm{Conf}_k(g)$ are homotopic (in fact monotopic).
\end{proposition} \begin{proof} In the solid commuting diagram \[\xymatrix{ X^k\times[0,1]^k\ar@{=}[r]^-\sim& (X\times[0,1])^k\ar[r]^-{H^k}&Y^k\\\\ X^k\times[0,1]\ar[uurr]^-{H^\Delta}\ar[uu]^-{\id_{X^k}\times\Delta_k}\\ \mathrm{Conf}_k(X)\times[0,1]\ar@{-->}[rr]\ar[u]&&\mathrm{Conf}_k(Y),\ar[uuu] }\] the diagonal composite is given by the formula \[H_t^\Delta(x_1,\ldots, x_k)=(H_t(x_1), \ldots, H_t(x_k)).\] By assumption, $H_t$ is injective for each $t\in[0,1]$, so the dashed filler exists. By construction, this map is injective for each $t$ and restricts to $\mathrm{Conf}_k(f)$ at $t=0$ and to $\mathrm{Conf}_k(g)$ at $t=1$. \end{proof} \begin{corollary} If $f$ is a monotopy equivalence, then both $\mathrm{Conf}_k(f)$ and $B_k(f)$ are homotopy (in fact monotopy) equivalences. \end{corollary} \begin{proof} The claim for $\mathrm{Conf}_k(f)$ is immediate from Proposition \ref{prop:monotopy}. For $B_k(f)$, we note that the homotopies constructed in the proof of that result are homotopies through $\Sigma_k$-equivariant maps, so they descend to the unordered configuration space. \end{proof} \begin{remark} Another point of view on the previous two results is provided by the fact (which we will not prove here) that $\mathrm{Conf}_k$ and $B_k$ are \emph{enriched} functors, where the space of injective continuous maps from $X$ to $Y$ is given the subspace topology induced by the compact-open topology on $\mathrm{Map}(X,Y)$. Taking this fact for granted, these results follow immediately, since a monotopy is simply a path in this mapping space. \end{remark} The most important examples of monotopy equivalences will be isotopy equivalences of manifolds, as in the following example. \begin{example} If $M$ is a manifold with boundary, then $\partial M$ admits a collar neighborhood $U\cong \partial M\times (0,1]$. We define a map $r:M\to M$ by setting $r|_{\partial M\times(0,1]}(x,t)=(x,\frac{t}{2})$ and extending by the identity.
This map is injective, and dilation defines monotopies from $r\circ i:\mathring{M}\to \mathring{M}$ and $i\circ r:M\to M$ to the respective identity maps. It follows that the induced map $\mathrm{Conf}_k(\mathring{M})\to \mathrm{Conf}_k(M)$ is a homotopy equivalence. \end{example} These functors also interact well with the operation of disjoint union. \begin{proposition} Let $X$ and $Y$ be topological spaces. The natural map \[\coprod_{i+j=k}\mathrm{Conf}_i(X)\times\mathrm{Conf}_j(Y)\times_{\Sigma_i\times\Sigma_j}\Sigma_k\to \mathrm{Conf}_k(X\amalg Y)\] is a $\Sigma_k$-equivariant homeomorphism. In particular, the natural map \[\coprod_{i+j=k}B_i(X)\times B_j(Y)\to B_k(X\amalg Y)\] is a homeomorphism. \end{proposition} \begin{proof} From the definitions, the dashed filler exists in the commuting diagram \[\xymatrix{\displaystyle \coprod_{i+j=k}X^i\times Y^j\times_{\Sigma_i\times\Sigma_j}\Sigma_k\ar[r]^-\simeq&(X\amalg Y)^k\\ \displaystyle\coprod_{i+j=k}\mathrm{Conf}_i(X)\times\mathrm{Conf}_j(Y)\times_{\Sigma_i\times\Sigma_j}\Sigma_k\ar@{-->}[r]\ar[u]& \mathrm{Conf}_k(X\amalg Y)\ar[u]}\] and is easily seen to be a bijection, which implies the first claim, since the vertical arrows are inclusions of subspaces. The second claim follows from the first after taking the quotient by the action of $\Sigma_k$. \end{proof} Thus, we may restrict attention to connected background spaces whenever it is convenient to do so. Our next goal is to come to grips with the local structure of configuration spaces. We assume from now on that $X$ is locally path connected, and we fix a basis $\B$ for the topology of the space $X$ consisting of connected subsets. We define two partially ordered sets as follows.
\begin{enumerate} \item We write $\B_k=\{U\subseteq X: U\cong \amalg_{i=1}^k U_i,\, U_i\in\B\}$, and we impose the order relation \[U\leq V\iff U\subseteq V \text{ and } \pi_0(U\subseteq V) \text{ is surjective}.\] \item We write $\B_k^\Sigma=\{(U,\sigma): U\in \B_k, \,\sigma:\{1,\ldots, k\}\xrightarrow{\simeq}\pi_0(U)\}$, and we impose the order relation \[(U,\sigma)\leq (V,\tau)\iff U\leq V\text{ and } \tau=\sigma\circ\pi_0(U\subseteq V).\] \end{enumerate} Denoting the poset of open subsets of a space $Y$ by $\op O(Y)$, there is an inclusion $\B_k^\Sigma\to \op O(\mathrm{Conf}_k(X))$ of posets defined by \[(U,\sigma)\mapsto \mathrm{Conf}_k^0(U,\sigma):=\left\{(x_1,\ldots, x_k)\in \mathrm{Conf}_k(U): x_i\in U_{\sigma(i)}\right\}\subseteq\mathrm{Conf}_k(X)\] and similarly an inclusion $\B_k\to \op O(B_k(X))$ defined by \[ U\mapsto B_k^0(U):=\left\{\{x_1,\ldots, x_k\}\in B_k(U): \{x_1,\ldots, x_k\}\cap U_i\neq \varnothing,\, 1\leq i\leq k\right\}\subseteq B_k(X).\] Note that these subsets are in fact open, since $U\subseteq X$ is open and configuration spaces respect open embeddings. \begin{lemma} For any $U\in \B_k$ and $\sigma:\{1,\ldots, k\}\xrightarrow{\simeq} \pi_0(U)$, there are canonical homeomorphisms \[B_k^0(U)\cong\mathrm{Conf}_k^0(U,\sigma)\cong\prod_{i=1}^kU_{\sigma(i)}.\] \end{lemma} \begin{proof} It is easy to see from the definitions that the dashed fillers in the commuting diagram \[\xymatrix{X^k&\mathrm{Conf}_k(X)\ar[l]\ar[r]& B_k(X)\\ \displaystyle\prod_{i=1}^kU_{\sigma(i)}\ar[u]&\mathrm{Conf}_k^0(U,\sigma)\ar@{-->}[l]\ar@{-->}[r]\ar[u]&B_k^0(U)\ar[u] }\] exist and are bijections. Since the lefthand map is the inclusion of a subspace and the righthand map is a quotient map, the claim follows. \end{proof} \begin{proposition}\label{prop:conf basis} Let $X$ be a locally path connected Hausdorff space and $\B$ a topological basis for $X$ consisting of connected subsets.
\begin{enumerate} \item The collection $\{\mathrm{Conf}_k^0(U,\sigma): (U,\sigma)\in \B_k^\Sigma\}\subseteq \op O(\mathrm{Conf}_k(X))$ is a topological basis. \item The collection $\{B_k^0(U): U\in \B_k\}\subseteq \op O(B_k(X))$ is a topological basis. \end{enumerate} \end{proposition} \begin{proof} By the definition of the product and subspace topologies, it will suffice for the first claim to show that, given $(x_1,\ldots, x_k)\in V\subseteq X^k$ such that \begin{itemize} \item $V\cong \prod_{i=1}^k V_i$ for open subsets $x_i\in V_i\subseteq X$, and \item $V\subseteq \mathrm{Conf}_k(X)$, \end{itemize} there exists $(U,\sigma)\in \B_k^\Sigma$ with $(x_1,\ldots, x_k)\in \mathrm{Conf}_k^0(U,\sigma)\subseteq V$. Now, since $\B$ is a topological basis, we may find $U_i\in \B$ with $x_i\in U_i\subseteq V_i$. The second condition and the assumption that $X$ is Hausdorff imply that the $V_i$ are pairwise disjoint, so we may set $U=\coprod_{i=1}^kU_i$ and take $\sigma(i)=[U_i]$. With these choices \[(x_1,\ldots, x_k)\in \mathrm{Conf}_k^0(U,\sigma)\cong\prod_{i=1}^k U_i\subseteq \prod_{i=1}^k V_i=V,\] as desired. The second claim follows from the first, the fact that $\pi:\mathrm{Conf}_k(X)\to B_k(X)$ is a quotient map, and the fact that $\pi(\mathrm{Conf}_k^0(U,\sigma))=B_k^0(U)$ for every $\sigma$. \end{proof} These and related bases will be important for our later study, when we come to hypercover methods. For now, we draw the following consequences. \begin{corollary} Let $X$ be a locally path connected Hausdorff space. The projection $\pi:\mathrm{Conf}_k(X)\to B_k(X)$ is a covering space. \end{corollary} \begin{proof} For $U\in \B_k$, we have $\Sigma_k$-equivariant identifications \[\pi^{-1}(B_k^0(U))=\bigcup_{\sigma:\{1,\ldots, k\}\cong \pi_0(U)} \mathrm{Conf}_k^0(U,\sigma)\cong B^0_k(U)\times\Sigma_k,\] where the second is induced by a choice of ordering of $\pi_0(U)$.
\end{proof} As the example of the line with two origins shows, one cannot in general remove the hypothesis that $X$ be Hausdorff (we learned this example from Sander Kupers). \begin{corollary} If $M$ is an $n$-manifold, then $\mathrm{Conf}_k(M)$ and $B_k(M)$ are $nk$-manifolds. \end{corollary} \begin{proof} We take $\B$ to be the set of Euclidean neighborhoods in $M$, in which case \[B_k^0(U)\cong \mathrm{Conf}_k^0(U,\sigma)\cong \mathbb{R}^{nk}\] for any $U\in \B_k$ and $\sigma:\{1,\ldots, k\}\xrightarrow{\simeq}\pi_0(U)$. \end{proof} \begin{exercise} When is $B_k(M)$ orientable? \end{exercise} \subsection{Forgetting points} From now on, unless otherwise specified, we take our background space to be a manifold $M$. In this case, we have access to a powerful tool relating configuration spaces of different cardinalities. The starting point is the observation that the natural projections from the product factor through the configuration spaces as in the following commuting diagram: \[\xymatrix{\mathrm{Conf}_\ell(M)\ar@{-->}[d]\ar[r]&M^\ell\ar[d]&(x_1,\ldots, x_\ell)\ar@{|->}[d]\\ \mathrm{Conf}_k(M)\ar[r]&M^k&(x_1,\ldots, x_k) }\] (we take the projection to forget the last $\ell-k$ coordinates for simplicity, but it is not necessary to make this restriction). Clearly, the fiber over a configuration $(x_1,\ldots, x_k)$ in the base is the configuration space $\mathrm{Conf}_{\ell-k}(M\setminus\{x_1,\ldots, x_k\})$. Our first theorem asserts that the situation is in fact much better than this. \begin{recollection} Recall that, if $f:X\to Y$ is a continuous map, the \emph{mapping path space} of $f$ is the space of paths in $Y$ out of the image of $f$. In other words, it is the pullback in the diagram \[\xymatrix{E_f\ar[r]\ar[d]&Y^{[0,1]}\ar[d]&p\ar@{|->}[d]\\ X\ar[r]^-f&Y&p(0). }\] The inclusion $X\to E_f$ given by the constant paths is a homotopy equivalence, and evaluation at $1$ defines a map $\pi_f:E_f\to Y$, which is a fibration \cite[7.3]{May:CCAT}.
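A basic example to keep in mind: for the inclusion of a basepoint $f:\{y\}\hookrightarrow Y$, the mapping path space is the based path space, and the fiber of $\pi_f$ over $y$ is the based loop space:
```latex
\[
E_f=P_yY:=\{p:[0,1]\to Y: p(0)=y\},\qquad \pi_f^{-1}(y)=\Omega Y.
\]
```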
The \emph{homotopy fiber} of $f$ is the fiber \[\mathrm{hofib}(f):=\pi_f^{-1}(y)\] of this fibration, where $y\in Y$ is some basepoint. The construction of $E_f$, and hence the homotopy fiber, is functorial, and we say that a diagram \[\xymatrix{ X\ar[d]_-f\ar[r]&X'\ar[d]^-{f'}\\ Y\ar[r]&Y' }\] is \emph{homotopy Cartesian}, or a \emph{homotopy pullback square}, if the induced map $\mathrm{hofib}(f)\to \mathrm{hofib}(f')$ is a weak equivalence. The primary benefit of knowing that a square is homotopy Cartesian is the induced Mayer--Vietoris long exact sequence in homotopy groups. \end{recollection} \begin{theorem}[Fadell--Neuwirth \cite{FadellNeuwirth:CS}]\label{thm:Fadell--Neuwirth} Let $M$ be a manifold and $0\leq k\leq \ell<\infty$. The diagram \[\xymatrix{ \mathrm{Conf}_{\ell-k}(M\setminus\{x_1,\ldots, x_k\})\ar[d]\ar[r]&\mathrm{Conf}_\ell(M)\ar[d]\\ (x_1,\ldots, x_k)\ar[r]&\mathrm{Conf}_k(M) }\] is homotopy Cartesian. \end{theorem} \begin{remark} In fact, Fadell--Neuwirth prove that this map is a locally trivial fiber bundle, and they give an identification of its structure group. We will not need this full statement, and our alternate proof of this weaker form will allow us to illustrate the efficacy of hypercover methods at a later point. \end{remark} The proof is a debt that we will return to pay after having developed a few more advanced homotopy theoretic techniques. For the time being, we concentrate on exploiting this result. \begin{corollary} If $M$ is a simply connected $n$-manifold with $n\geq3$, then $\mathrm{Conf}_k(M)$ is simply connected for every $k\geq0$. In particular, $\pi_1(B_k(M))\cong\Sigma_k$. \end{corollary} \begin{proof} The case $k=0$ is trivial and the case $k=1$ is our assumption. 
The van Kampen theorem and our assumption on $n$ imply that $M\setminus\{\mathrm{pt}\}$ is simply connected, so the first claim follows by induction using the exact sequence \[\xymatrix{\pi_1(M\setminus \{\mathrm{pt}\})\ar[r]&\pi_1(\mathrm{Conf}_{k}(M))\ar[r]&\pi_1(\mathrm{Conf}_{k-1}(M)).}\] The second claim follows from the observation that $\mathrm{Conf}_k(M)\to B_k(M)$ is a $\Sigma_k$-cover with simply connected total space and hence the universal cover. \end{proof} \begin{corollary}\label{cor:surface aspherical} If $M$ is a connected surface different from $S^2$ or $\mathbb{RP}^2$, then $\mathrm{Conf}_k(M)$ is aspherical for every $k\geq0$. In particular, $B_k(M)$ is aspherical. \end{corollary} \begin{proof} The case $k=0$ is obvious and the case $k=1$ follows from our assumption on $M$. This assumption further guarantees that $M\setminus\{\mathrm{pt}\}$ is also aspherical, so the first claim follows by induction using the exact sequence \[\xymatrix{\pi_i(M\setminus \{\mathrm{pt}\})\ar[r]&\pi_i(\mathrm{Conf}_{k}(M))\ar[r]&\pi_i(\mathrm{Conf}_{k-1}(M))}\] with $i\geq2$. The second claim follows from the first and the fact that $\pi:\mathrm{Conf}_k(M)\to B_k(M)$ is a covering space. \end{proof} In order to proceed further, it will be useful to have a criterion for splitting these exact sequences. \begin{proposition} If $M$ is the interior of a manifold with non-empty boundary, then the map $\pi_{k,\ell}:\mathrm{Conf}_\ell(M)\to \mathrm{Conf}_k(M)$ admits a section up to homotopy. \end{proposition} \begin{proof} Write $M=\mathring{N}$, and fix a collar neighborhood $\partial N\subseteq U$ and an ordered set $\{x_{k+1},\ldots, x_\ell\}$ of distinct points in $U$. By retracting along the collar, we obtain an embedding $\varphi:M\to M$ that is isotopic to the identity and misses our chosen points.
The assignment $(x_1,\ldots, x_k)\mapsto (\varphi(x_1),\ldots, \varphi(x_k), x_{k+1},\ldots, x_\ell)$ defines a continuous map $s:\mathrm{Conf}_k(M)\to \mathrm{Conf}_\ell(M)$ such that $\pi_{k,\ell}\circ s=\mathrm{Conf}_k(\varphi)\simeq \id_{\mathrm{Conf}_k(M)}$, since $\varphi$ is isotopic to the identity. \end{proof} \begin{corollary} For $n\geq 3$, $k\geq0$, and $i\geq0$, there is an isomorphism \[\pi_i(\mathrm{Conf}_k(\mathbb{R}^n))\cong \prod_{j=1}^{k-1} \pi_i\left(\bigvee_jS^{n-1}\right).\] \end{corollary} \begin{proof} For $k\in \{0,1\}$ the claim is obvious, as is the claim for $\pi_0$, and the claim for $\pi_1$ has already been established. In the generic case, we proceed by induction using the exact sequence \[\xymatrix{ \pi_{i+1}(\mathrm{Conf}_{k-1}(\mathbb{R}^n))\ar[r]& \pi_i(\mathbb{R}^n\setminus\{x_1,\ldots, x_{k-1}\})\ar[r]&\pi_i(\mathrm{Conf}_k(\mathbb{R}^n))\ar[r]&\pi_i(\mathrm{Conf}_{k-1}(\mathbb{R}^n)). }\] The section up to homotopy constructed above induces a section at the level of homotopy groups, so the lefthand map is trivial and the righthand map is surjective. The result now follows from the homotopy equivalence $\mathbb{R}^n\setminus\{x_1,\ldots, x_{k-1}\}\simeq \bigvee_{k-1}S^{n-1}$ and the fact that all groups in sight are Abelian. \end{proof} The higher homotopy groups of bouquets of spheres being very complicated objects \cite{Hilton:HGUS}, this result is a striking contrast to the situation in dimension 2 as characterized in Corollary \ref{cor:surface aspherical}. \begin{remark} It should be noted that the product decomposition of the previous corollary is additive only. Viewed as a shifted Lie algebra via the Whitehead bracket, $\pi_*(\mathrm{Conf}_k(\mathbb{R}^n))$ has a very rich structure---see \cite[II]{FadellHusseini:GTCS}. \end{remark} The following result is proved by essentially the same argument. \begin{corollary} The fundamental group of $\mathrm{Conf}_k(\mathbb{R}^2)$ is an iterated extension of free groups.
\end{corollary} \subsection{Braid groups} We turn our attention now to the fundamental group of the unordered configuration space $B_k(\mathbb{R}^2)$. We fix the basepoints $\{(2i,0)\}_{i=1}^k\in B_k(\mathbb{R}^2)$ and $((2,0),\ldots, (2k,0))\in \mathrm{Conf}_k(\mathbb{R}^2)$ and employ them implicitly throughout. An element in $\pi_1(B_k(\mathbb{R}^2))$ is determined by the data of a permutation $\tau\in\Sigma_k$ and a path $p:[0,\pi]\to (\mathbb{R}^2)^k$ such that \begin{enumerate} \item $p(0)_r=(2r,0)$ for $1\leq r\leq k$, \item $p(\pi)_r=(2\tau(r),0)$ for $1\leq r\leq k$, and \item $p(t)_r\neq p(t)_{r'}$ for $1\leq r\neq r'\leq k$ and $t\in[0,\pi]$. \end{enumerate} Composition is determined by composition of permutations and concatenation of paths; in particular, there is a canonical group homomorphism \[\pi_1(B_k(\mathbb{R}^2))\to \Sigma_k,\] which we will shortly see to be surjective. We have the following simple observation regarding this homomorphism. \begin{lemma} The subgroup $\pi_1(\mathrm{Conf}_k(\mathbb{R}^2))\leq \pi_1(B_k(\mathbb{R}^2))$ coincides with the kernel of the homomorphism $\pi_1(B_k(\mathbb{R}^2))\to \Sigma_k$. \end{lemma} \begin{proof} It is obvious that $\pi_1(\mathrm{Conf}_k(\mathbb{R}^2))\leq \ker(\pi_1(B_k(\mathbb{R}^2))\to \Sigma_k)$, and both subgroups have the same index in $\pi_1(B_k(\mathbb{R}^2))$, since $\mathrm{Conf}_k(\mathbb{R}^2)\to B_k(\mathbb{R}^2)$ is a $k!$-fold cover. \end{proof} To verify surjectivity, we will exhibit elements $\sigma_i\in\pi_1(B_k(\mathbb{R}^2))$ lifting the respective transpositions $\tau_i=(i,i+1)$. \begin{construction} For $1\leq i\leq k-1$, define a path $p_i:[0,\pi]\to (\mathbb{R}^2)^k$ by the formula \[p_i(t)_r=\begin{cases} (2r,0)&\quad r\notin\{i,i+1\}\\ c_{2i+1,1}(t+\pi)&\quad r=i\\ c_{2i+1,1}(t)&\quad r=i+1, \end{cases}\] where $c_{a,b}(t)=(a+b\cos t,b\sin t)$ is the standard parametrization of the circle of radius $b$ centered at $(a,0)$.
The dashed factorization exists in the commuting diagram \[\xymatrix{[0,\pi]\ar[d]_-{p_i}\ar@{-->}[dr]\ar[r]&B_k(\mathbb{R}^2)\\ (\mathbb{R}^2)^k&\mathrm{Conf}_k(\mathbb{R}^2)\ar[l]\ar[u]_-\pi. }\] Indeed, $\sin t=-\sin t$ if and only if $t\in \{0,\pi\}$, and in both of these cases we have $\cos t\neq -\cos t$. We write $\sigma_i$ for the homotopy class of the top horizontal map relative to $\{0,\pi\}$. Since $p_i(0)=((2,0),\ldots, (2i,0), (2i+2,0), \ldots, (2k,0))$ and $p_i(\pi)=((2,0),\ldots, (2i+2,0), (2i,0),\ldots, (2k,0))$, the class $\sigma_i$ defines an element of $\pi_1(B_k(\mathbb{R}^2))$. \end{construction} These elements satisfy some easy relations. \begin{lemma} If $|i-j|>1$, then $\sigma_i\sigma_j=\sigma_j\sigma_i$. \end{lemma} \begin{proof} Without loss of generality, we may assume that $j>i$. Define $H:[0,\pi]^2\to (\mathbb{R}^2)^k$ by the formula \[H(s,t)_r=\begin{cases} (2r,0)&\quad r\notin\{i,i+1,j,j+1\}\\ p_i(s)_r&\quad r\in\{i,i+1\}\\ p_j(t)_r&\quad r\in \{j,j+1\}. \end{cases}\] For $r\in\{i,i+1\}$, the image of $H(s,t)_{r}$ is contained in a closed disk of radius $1$ around $(2i+1,0)$, and, for $r\in\{j,j+1\}$, in a closed disk of radius $1$ around $(2j+1,0)$. Since $|j-i|>1$, these sets are disjoint, and we conclude that $H$ factors through $\mathrm{Conf}_k(\mathbb{R}^2)$. The lemma now follows from the observation that $\sigma_j\sigma_i$ (resp. $\sigma_i\sigma_j$) is represented by the path obtained by composing $H$ with the counterclockwise (resp. clockwise) path around the boundary of $[0,\pi]^2$. \end{proof} \begin{lemma} For $1\leq i<k-1$, $\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$.
\end{lemma} \begin{proof} Define maps $q_1,q_2:[0,\pi]\to (\mathbb{R}^2)^k$ by the formulas \[ q_1(t)_r=\begin{cases} (2r,0)&\quad r\notin\{i,i+1, i+2\}\\ c_{2i+2,2}(t+\pi)&\quad r=i\\ c_{2i+1,1}(2t)&\quad r=i+1\\ c_{2i+2,2}(t)&\quad r=i+2 \end{cases} \] \[ q_2(t)_r=\begin{cases} (2r,0)&\quad r\notin\{i,i+1,i+2\}\\ \bar c_{2i+2,2}(t)&\quad r=i\\ \bar c_{2i+3,1}(2t+2\pi)&\quad r=i+1\\ \bar c_{2i+2,2}(t+\pi)&\quad r=i+2, \end{cases}\] where a bar indicates reversal of parametrization. We claim that the concatenation of $q_1$ followed by $q_2$ represents $\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1}\sigma_i\sigma_{i+1}\sigma_i$. Assuming this claim, the lemma follows, for the three nonconstant coordinate functions of this concatenation are nullhomotopic via homotopies with pairwise disjoint images. To see why the claim is true, consider the representative $\tilde q$ of $\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1}\sigma_i\sigma_{i+1}\sigma_i$ that is given by concatenating the representatives $p_i$ and $p_{i+1}$ and their inverses. The path $\tilde q(t)_i$ first follows the lower half-circle of radius $1$ centered at $(2i+1,0)$, then follows the lower half-circle of radius $1$ centered at $(2i+3,0)$, then retraces these steps exactly. Thus, $\tilde q(t)_i$ is homotopic by a straight-line homotopy to a path that first follows the lower half-circle of radius 2 centered at $(2i+2,0)$ and then retraces this path, and the image of this homotopy away from $\tilde q(t)_i$ is disjoint from the images of $\tilde q(t)_r$ for $r\neq i$. In this way, we obtain a homotopy in the configuration space, and similar considerations apply to the $(i+2)$nd coordinate and the respective upper half-circles.
Thus, $\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1}\sigma_i\sigma_{i+1}\sigma_i$ is represented by a path in which the $i$th and $(i+2)$nd coordinate trace and then retrace these larger half-circles while the $(i+1)$st coordinate traverses the circle of radius 1 centered at $(2i+1,0)$ followed by the circle of radius 1 centered at $(2i+3,0)$ in reverse. Since the concatenation of $q_1$ followed by $q_2$ is such a path, this establishes the claim. \end{proof} \begin{definition}[Artin \cite{Artin:TB}] The \emph{braid group} on $k$ strands is the group $B_k$ defined by the presentation \[B_k=\left\langle \sigma_1,\ldots, \sigma_{k-1}\mid\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1},\, \sigma_i\sigma_j=\sigma_j\sigma_i \text{ if } |i-j|>1\right\rangle.\] \end{definition} The previous two lemmas imply the existence of a canonical group homomorphism from the abstract group $B_k$ to $\pi_1(B_k(\mathbb{R}^2))$. \begin{theorem}[Fox--Neuwirth \cite{FoxNeuwirth:BG}]\label{thm:fox-neuwirth} The homomorphism $B_k\to \pi_1(B_k(\mathbb{R}^2))$ is an isomorphism. \end{theorem} Before proceeding with the proof, we first recall that the symmetric group on $k$ letters admits the Coxeter presentation \[\Sigma_k=\langle \tau_1,\ldots, \tau_{k-1}\mid \tau_i^2=1,\,(\tau_i\tau_{i+1})^3=1,\,(\tau_i\tau_j)^2=1\text{ if }|i-j|>1\rangle.\] It follows that the assignment $\sigma_i\mapsto \tau_i$ determines a group homomorphism $B_k\to \Sigma_k$. \begin{definition} The \emph{pure braid group} on $k$ strands is the subgroup $P_k=\ker(B_k\to \Sigma_k)\leq B_k$. \end{definition} \begin{proof}[Proof of Theorem \ref{thm:fox-neuwirth}] Consider the following diagram of group homomorphisms: \[\xymatrix{ 1\ar[r]&P_k\ar[r]\ar@{-->}[d]&B_k\ar[r]\ar[d]&\Sigma_k\ar@{=}[d]\ar[r]&1\\ 1\ar[r]&\pi_1(\mathrm{Conf}_k(\mathbb{R}^2))\ar[r]&\pi_1(B_k(\mathbb{R}^2))\ar[r]&\Sigma_k\ar[r]&1. }\] The top row is exact by definition and the bottom was shown to be exact above.
The righthand square commutes by the construction of the $\sigma_i$, and it follows that the dashed factorization exists making the lefthand square commute. Thus, by the five lemma, it will suffice to verify that the induced map $P_k\to \pi_1(\mathrm{Conf}_k(\mathbb{R}^2))$ is an isomorphism for each $k$. For this claim, we proceed by induction on $k$, the cases $k=0$ and $k=1$ being trivial. Consider the following diagram of group homomorphisms: \[\xymatrix{ 1\ar[r]&K\ar@{-->}[d]\ar[r]& P_k\ar[d]\ar@{-->}[r]& P_{k-1}\ar[d]^-{\mathrel{\rotatebox[origin=c]{-90}{$\simeq$}}}\ar[r]&1\\ 1\ar[r]&\pi_1(\mathbb{R}^2\setminus \{x_1,\ldots, x_{k-1}\})\ar[r]&\pi_1(\mathrm{Conf}_k(\mathbb{R}^2))\ar[r]&\pi_1(\mathrm{Conf}_{k-1}(\mathbb{R}^2))\ar[r]&1. }\] The middle and righthand vertical maps are the maps constructed above, the upper righthand horizontal map is defined by the inductive hypothesis and the requirement that the righthand square commute, and the subgroup $K\leq P_k$ is defined to be the kernel of this map. Since the top sequence is exact by definition and the bottom by Theorem \ref{thm:Fadell--Neuwirth} and the fact that the three spaces shown are all aspherical, the dashed vertical factorization exists making the lefthand square commute. As we will see below in Lemma \ref{lem:kernel isomorphism}, $K$ is generated by $k-1$ elements, which map to a set of $k-1$ free generators under the lefthand vertical map, and assuming this fact, we may complete the proof. Indeed, these $k-1$ elements determine a homomorphism from the free group, which is a section of the map in question. This section is injective, since it is a section, as well as surjective, since its image contains a generating set. Since the section is an isomorphism, the map itself is so as well, and the five lemma implies the claim. 
\end{proof} In this way, we are led to the following question, which will next occupy our attention: given a presentation of a group $G$, how can we find a presentation for a subgroup $H\leq G$? \subsection{The Reidemeister--Schreier method} We now discuss a technique from combinatorial group theory for producing presentations of subgroups---see \cite[2.3]{MagnusKarrassSolitar:CGT} for a standard account. We take the topological viewpoint expounded in \cite{ZieschangVogtColdeway:SPDG}. \begin{definition} A \emph{graph} is a 1-dimensional CW complex $\Gamma$ with at most countably many cells. The 0-cells of $\Gamma$ are its \emph{vertices} and the 1-cells its \emph{edges}. An \emph{edge path} is a sequence of oriented edges $\{e_i\}_{i=1}^n$ such that the head of $e_i$ is the tail of $e_{i+1}$ for $1\leq i<n$. An edge path is \emph{reduced} if it contains no subpath of the form $ee^{-1}$, where $e^{-1}$ denotes the edge $e$ with the opposite orientation. A \emph{tree} is a contractible graph. A \emph{spanning tree} for a graph $\Gamma$ is a subgraph $T\subseteq \Gamma$ such that $T$ is a tree and $T$ contains every vertex of $\Gamma$. \end{definition} \begin{lemma} In a tree $T$, any two distinct vertices are connected by a unique reduced edge path. \end{lemma} \begin{proof} After perhaps discarding redundant edges, the existence of two such edge paths produces an embedding of $S^1$ into $T$, a contradiction. \end{proof} \begin{lemma} If $\Gamma$ is a connected graph, then $\Gamma$ admits a spanning tree. \end{lemma} \begin{proof} Fix a vertex $v\in \Gamma$, set $T_0=\{v\}$, and define $T_i\subseteq \Gamma$ for $i>0$ recursively by adding to $T_{i-1}$ each vertex $w\in \Gamma$ that is separated from $T_{i-1}$ by exactly one edge, together with one such edge for each such $w$. Then $T=\bigcup_{i=0}^\infty T_i$ is a spanning tree. \end{proof} This observation implies the following familiar consequence.
\begin{corollary} If $\Gamma$ is a connected graph, $v\in \Gamma$ is a vertex, and $T\subseteq \Gamma$ is a spanning tree, then $\pi_1(\Gamma,v)$ is freely generated by a set of cardinality $|\pi_0(\Gamma\setminus T)|$. \end{corollary} \begin{proof} Since $T$ is contractible, the quotient map $\Gamma\to \Gamma/T$ is a homotopy equivalence, and $\Gamma/T$ is a wedge of $|\pi_0(\Gamma\setminus T)|$-many circles; now apply the Van Kampen theorem. \end{proof} In particular, after fixing a vertex $v\in\Gamma$ and a spanning tree $T\subseteq\Gamma$, a set of free generators for $\pi_1(\Gamma, v)$ is given by the collection of loops given by concatenating an edge $e$ in $\Gamma\setminus T$ on either side with the unique reduced edge paths in $T$ from $v$ to the endpoints of $e$. If the free group $F(S)$ on the set $S$ corresponds to the graph $\Gamma_S:=\bigvee_S S^1$, then a subgroup $H\leq F(S)$ corresponds to a covering space $\pi:\widetilde \Gamma\to \Gamma_S$. Since any covering space has unique lifting for maps with simply connected domains, a covering space of a graph is again a graph in a canonical way; in particular, we conclude the following classical fact. \begin{corollary}[Nielsen--Schreier] A subgroup of a free group is free. \end{corollary} We now consider the problem of finding generators for $H$. We begin by fixing a basepoint $\tilde v\in\widetilde\Gamma$ lying over the unique vertex $v\in \Gamma_S$ and a spanning tree $\tilde v\in T\subseteq \widetilde\Gamma$. The edges of $\widetilde\Gamma$ correspond via $\pi$ to edges of $\Gamma_S$, which is to say generators of $F(S)$, and the vertices correspond bijectively to the set $F(S)/H$ of cosets. For any vertex $w\in T$, there is a unique reduced edge path in $T$ from $\tilde v$ to $w$, and the word corresponding to this edge path represents the coset corresponding to $w$. This edge path clearly has the property that each of its initial segments is again such a path.
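For concreteness, consider the subgroup $H=\ker(F(a,b)\to \mathbb{Z}/2)$, where both generators map to the nontrivial element. The corresponding cover $\widetilde\Gamma$ has two vertices, corresponding to the cosets $H$ and $aH$, and four edges, two lying over each of $a$ and $b$. Taking the spanning tree $T$ to be one of the edges over $a$, we see that $H$ is free of rank $3$, and crossing each of the three remaining edges as above yields the free generators $a^2$, $ab$, and $ba^{-1}$. The coset representatives $1$ and $a$ read off from $T$ have the initial-segment property just described.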
\begin{definition} A set $A=\{g_\ell\}_{\ell\in F(S)/H}$ of coset representatives is a \emph{Schreier set} for $H$ if, after writing each $g_\ell$ as a reduced word in $S$, every initial subword of $g_\ell$ is again an element of $A$. \end{definition} As a matter of notation, we write $g\mapsto \bar g$ for the composite $F(S)\to F(S)/H\cong A\to F(S)$. \begin{theorem}[Reidemeister--Schreier, part I] Let $H\leq F(S)$ be a subgroup and $A$ a Schreier set for $H$. The nontrivial elements of the form $g_\ell s(\overline{g_\ell s})^{-1}$ for $\ell\in F(S)/H$ and $s\in S$ freely generate $H$. \end{theorem} \begin{proof} Fix a basepoint $\tilde v$. By path lifting, the Schreier set $A$ arises geometrically from a spanning tree $T$. The element $g_\ell s(\overline{g_\ell s})^{-1}$ corresponds to the path given by following the unique reduced edge path from $\tilde v$ to the vertex corresponding to $\ell$, crossing the edge $e$ corresponding to $s$, and then following the unique reduced edge path back to $\tilde v$. If $e$ lies in $T$, then $\overline{g_\ell s}= g_\ell s$, and the element is trivial; otherwise, we recognize $g_\ell s(\overline{g_\ell s})^{-1}$ as one of our previously established free generators for $H\cong\pi_1(\widetilde\Gamma, \tilde v)$. \end{proof} In order to accommodate the presence of relations, we extend our definition of a Schreier set as follows. \begin{definition} Let $G=\langle s\in S\mid r\in R\rangle$ be a group with a presentation and $H\leq G$ a subgroup. A set $A=\{g_\ell\}_{\ell\in G/H}$ of coset representatives is a \emph{Schreier set} for $H$ if it lifts to a Schreier set for $F(S)\times_GH\leq F(S)$. \end{definition} By the Van Kampen theorem, the group $G$ with the specified presentation is the fundamental group of the CW complex obtained by attaching 2-cells to $\Gamma_S$ according to the words $r\in R$. This cell structure lifts canonically to the covering space by attaching cells along the conjugates $g_\ell rg_\ell^{-1}$. 
Thus, we conclude the following generalization. \begin{theorem}[Reidemeister--Schreier, part II] Let $G=\langle s\in S\mid r\in R\rangle$ be a group with a presentation, $H\leq G$ a subgroup, and $A$ a Schreier set for $H$. Then $H$ is generated by the nontrivial elements $g_\ell s(\overline{g_\ell s})^{-1}$ for $\ell\in G/H$ and $s\in S$, with defining relations $g_\ell rg_\ell^{-1}$ for $\ell\in G/H$ and $r\in R$, written in the nontrivial generators. \end{theorem} \subsection{Back to braid groups} Before leveraging this result in completing the proof of Theorem \ref{thm:fox-neuwirth}, we pause to establish some notation. For $1\leq i<j\leq k$, we write $A_{ij}$ for the braid in which the $j$th strand winds once around the $i$th strand, passing in front of the intervening strands, and then returns to its starting position. In symbols, \[A_{ij}=\sigma_{j-1}\cdots\sigma_{i+1}\sigma_i^2\sigma_{i+1}^{-1}\cdots \sigma_{j-1}^{-1}.\] Note that the braids $A_{ij}$ are all pure. We write $U_k\leq B_k$ for the subgroup generated by the elements $A_{ik}$ for $1\leq i<k$. It is easy to verify geometrically that $U_k\leq \ker(P_k\to P_{k-1})$ and that the induced homomorphism $U_k\to \pi_1(\mathbb{R}^2\setminus\{x_1,\ldots, x_{k-1}\})$ sends $\{A_{ik}\}_{i=1}^{k-1}$ to a set of free generators; in particular, $U_k$ is free on these generators. Our goal is to prove the following, which is the missing ingredient in the proof of Theorem \ref{thm:fox-neuwirth}. \begin{lemma}\label{lem:kernel isomorphism} The inclusion $U_k\leq \ker(P_k\to P_{k-1})$ is an isomorphism. \end{lemma} This result will be an easy consequence of the following. \begin{proposition}\label{prop:pure semidirect} There is an isomorphism $P_k\cong U_k\rtimes P_{k-1}$. 
\end{proposition} \begin{proof}[Proof of Lemma \ref{lem:kernel isomorphism}] Inverting isomorphisms in the diagram \[\xymatrix{ U_k\rtimes P_{k-1}\ar[d]_-\wr\ar[r]&P_{k-1}\ar@{=}[d]\\ P_k\ar[d]&P_{k-1}\ar[d]^-\wr\\ \pi_1(\mathrm{Conf}_k(\mathbb{R}^2))\ar[r]&\pi_1(\mathrm{Conf}_{k-1}(\mathbb{R}^2)), }\] of group homomorphisms, we obtain two maps $P_k\to P_{k-1}$, the kernels of which are $U_k$ and the kernel in question; therefore, to conclude that these two coincide, it suffices to verify that the diagram commutes. By inspection of the element in homotopy corresponding to $A_{i k}$, it is clear that both composites annihilate $U_k$, so it suffices to verify that the composite \[P_{k-1}\to P_k\to \pi_1(\mathrm{Conf}_k(\mathbb{R}^2))\to \pi_1(\mathrm{Conf}_{k-1}(\mathbb{R}^2))\] is the canonical map. By the previous corollary and induction on $k$, $P_{k-1}$ is generated by $\{A_{ij}\}_{1\leq i<j\leq k-1}$, and the claim follows from the geometric interpretation of the $A_{ij}$. \end{proof} Our proof of Proposition \ref{prop:pure semidirect} will be somewhat indirect, proceeding through the intermediate group $D_k\leq B_k$ of braids that do not permute the last strand. In other words, $D_k$ is defined as the pullback in the diagram \[\xymatrix{ D_k\ar[r]\ar[d]&B_k\ar[d]\\ \Sigma_{k-1}\ar[r]&\Sigma_k. }\] The proposition is an immediate consequence of the following result. \begin{proposition}\label{prop:D semidirect} There is an isomorphism $D_k\cong U_k\rtimes B_{k-1}$. \end{proposition} We prove this result by applying the Reidemeister--Schreier method to $D_k\leq B_k$. Define elements $g_\ell=\sigma_{k-1}\cdots \sigma_\ell$ for $1\leq \ell\leq k$, interpreting $g_k$ as the empty word. \begin{lemma}\label{lem:schreier set} The set $\{g_\ell\}_{\ell=1}^k$ is a Schreier set for $D_k$ in $B_k$. \end{lemma} \begin{proof} Reading off the last entry of a permutation defines a bijection $\Sigma_k/\Sigma_{k-1}\cong \{1,\ldots, k\}$, and the last entry of the permutation $\tau_{k-1}\cdots \tau_\ell$ is $\ell$.
Thus, the composite \[\{g_\ell\}_{\ell=1}^k\to B_k/D_k\to \Sigma_k/\Sigma_{k-1}\to \{1,\ldots, k\}\] is surjective and hence bijective. Since the rightmost two maps are also bijective, the first is as well, and we conclude that $\{g_\ell\}_{\ell=1}^k$ is a set of coset representatives for $D_k$ in $B_k$. Since any initial word in $g_\ell$ is obviously of the form $g_{\ell'}$ for some $\ell'$, the claim follows. \end{proof} In carrying out our computations below, we will need to know the following relations. \begin{lemma}\label{lem:g sigma relations} The following relations hold in $B_k$:\begin{align*} &\sigma_i^{g_\ell}=\sigma_i, &1\leq i<\ell-1< k \\ &\sigma_i^{g_\ell}=\sigma_{i-1}, &1\leq \ell<i<k\\ &g_i\sigma_ig_{i+1}^{-1}=A_{ik}, &1\leq i< k\\ &g_i\sigma_{i-1}g_{i-1}^{-1}=1, &1<i\leq k. \end{align*} \end{lemma} \begin{proof} The third and fourth relations are obvious from the definitions, and the first is immediate from the commutativity relations in $B_k$. For the second, we have that \begin{align*} g_\ell\sigma_ig_\ell^{-1}&=\sigma_{k-1}\cdots\sigma_\ell\sigma_i\sigma_\ell^{-1}\cdots \sigma_{k-1}^{-1}\\ &=\sigma_{k-1}\cdots \sigma_i\sigma_{i-1}\sigma_i\sigma_{i-2}\cdots \sigma_\ell\sigma_\ell^{-1}\cdots \sigma_{k-1}^{-1}\\ &=\sigma_{k-1}\cdots \sigma_i\sigma_{i-1}\sigma_i\sigma_{i-1}^{-1}\cdots \sigma_{k-1}^{-1}\\ &=\sigma_{k-1}\cdots \sigma_{i-1}\sigma_i\sigma_{i-1}\sigma_{i-1}^{-1}\cdots \sigma_{k-1}^{-1}\\ &=\sigma_{k-1}\cdots\sigma_{i+1}\sigma_{i-1}\sigma_{i+1}^{-1}\cdots\sigma_{k-1}^{-1}\\ &=\sigma_{i-1}, \end{align*} where the second equality follows from the commutativity relations, the third by cancellation, the fourth by the braid relations, the fifth by cancellation, and the last by commutativity.
\end{proof} \begin{proof}[Proof of Proposition \ref{prop:D semidirect}] We have that, modulo $\Sigma_{k-1}$, \[\tau_{k-1}\cdots \tau_\ell\tau_i\equiv \begin{cases} \tau_{k-1}\cdots \tau_\ell&\quad 1\leq i<\ell-1< k\text{ or }1\leq \ell<i< k\\ \tau_{k-1}\cdots \tau_{\ell-1}&\quad 1\leq \ell-1=i<k\\ \tau_{k-1}\cdots \tau_{\ell+1}&\quad 1\leq \ell=i<k. \end{cases}\] Thus, we have \[\overline{g_\ell\sigma_i}=\begin{cases} g_\ell&\quad1\leq i<\ell-1< k\text{ or }1\leq \ell<i< k\\ g_{\ell-1}&\quad 1\leq \ell-1=i<k\\ g_{\ell+1}&\quad 1\leq \ell=i<k, \end{cases}\] and Lemma \ref{lem:g sigma relations} now implies that \begin{align*}g_\ell\sigma_{\ell-1}(\overline{g_\ell\sigma_{\ell-1}})^{-1}&=g_\ell\sigma_{\ell-1}g_{\ell-1}^{-1}=1\\ g_\ell\sigma_{\ell}(\overline{g_\ell\sigma_{\ell}})^{-1}&=g_\ell\sigma_\ell g_{\ell+1}^{-1}=A_{\ell k}\\ g_\ell\sigma_{i}(\overline{g_\ell\sigma_{i}})^{-1}&=\sigma_i^{g_\ell}=\begin{cases} \sigma_i&\quad 1\leq i<\ell-1< k \\ \sigma_{i-1}&\quad 1\leq \ell<i<k. \end{cases} \end{align*} We conclude by the Reidemeister--Schreier method that $D_k$ is generated by the collection $\{A_{\ell k}, \sigma_i: 1\leq \ell< k,\, 1\leq i< k-1\}$, which is to say by the subgroups $U_k$ and $B_{k-1}$. In order to determine the relations, we conjugate the relations in $B_k$ by the $g_\ell$ and express the result in terms of our chosen generators using Lemma \ref{lem:g sigma relations}. We begin with the commutativity relations, for which there are six cases. \begin{enumerate} \item For $1\leq \ell<i<j-1< k-1$, \begin{align*} [\sigma_i,\sigma_j]^{g_\ell}&= \sigma_i^{g_\ell}\sigma_j^{g_\ell}\sigma_i^{-g_\ell}\sigma_j^{-g_\ell}\\ &=[\sigma_{i-1},\sigma_{j-1}]. \end{align*} \item For $1\leq i=\ell <j-1<k-1$, \begin{align*} [\sigma_i,\sigma_j]^{g_\ell}&=g_\ell\sigma_ig_{\ell+1}^{-1}g_{\ell+1}\sigma_jg_{\ell+1}^{-1}g_{\ell+1}\sigma_i^{-1}g_{\ell}^{-1}g_\ell\sigma_j^{-1}g_{\ell}^{-1}\\ &=[A_{ik},\sigma_{j-1}].
\end{align*} \item For $1\leq i=\ell-1<j-1<k-1$, \begin{align*} [\sigma_i,\sigma_j]^{g_\ell}&=g_\ell\sigma_ig_{\ell-1}^{-1}g_{\ell-1} \sigma_jg_{\ell-1}^{-1}g_{\ell-1}\sigma_i^{-1}g_\ell^{-1}g_\ell\sigma_j^{-1}g_\ell^{-1}\\ &=[1, \sigma_{j-1}]\\ &=1. \end{align*} \item For $1\leq i <\ell-1=j-1<k-1$, \begin{align*} [\sigma_i,\sigma_j]^{g_\ell}&=g_\ell\sigma_ig_\ell^{-1}g_\ell\sigma_jg_{\ell+1}^{-1}g_{\ell+1}\sigma_i^{-1}g_{\ell+1}^{-1}g_{\ell+1}\sigma_j^{-1}g_\ell^{-1}\\ &=[\sigma_i,A_{jk}]. \end{align*} \item For $1< i+1<j=\ell-1<k-1$, \begin{align*} [\sigma_i,\sigma_j]^{g_\ell}&=g_\ell\sigma_ig_\ell^{-1}g_\ell\sigma_jg_{\ell-1}^{-1}g_{\ell-1}\sigma_i^{-1}g_{\ell-1}^{-1}g_{\ell-1}\sigma_j^{-1}g_\ell^{-1}\\ &=[\sigma_i,1]\\ &=1. \end{align*} \item For $1< i+1<j<\ell-1<k-1$, \begin{align*} [\sigma_i,\sigma_j]^{g_\ell}&=\sigma_i^{g_\ell}\sigma_j^{g_\ell}\sigma_i^{-g_\ell}\sigma_j^{-g_\ell}\\ &=[\sigma_i,\sigma_j]. \end{align*} \end{enumerate} We turn now to the braid relations, for which there are five further cases. \begin{enumerate} \item[(7)] For $1\leq \ell<i< k-1$, \begin{align*} (\sigma_i\sigma_{i+1}\sigma_i\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1})^{g_\ell}&=\sigma_i^{g_\ell}\sigma_{i+1}^{g_\ell}\sigma_i^{g_\ell}\sigma_{i+1}^{-g_\ell}\sigma_i^{-g_\ell}\sigma_{i+1}^{-g_\ell}\\ &=\sigma_{i-1}\sigma_{i}\sigma_{i-1}\sigma_{i}^{-1}\sigma_{i-1}^{-1}\sigma_{i}^{-1}. \end{align*} \item[(8)] For $1\leq \ell=i<k-1$, \begin{align*} (\sigma_i\sigma_{i+1}\sigma_i\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1})^{g_\ell}&=g_\ell\sigma_ig_{\ell+1}^{-1}g_{\ell+1}\sigma_{i+1}g_{\ell+2}^{-1}g_{\ell+2}\sigma_ig_{\ell+2}^{-1}g_{\ell+2}\sigma_{i+1}^{-1}g_{\ell+1}^{-1}g_{\ell+1}\sigma_i^{-1}g_\ell^{-1}g_\ell\sigma_{i+1}^{-1}g_\ell^{-1}\\ &=A_{ik}A_{i+1,k}(A_{ik}A_{i+1,k})^{-\sigma_i}.
\end{align*} \item[(9)] For $1\leq i=\ell-1< k-1$, \begin{align*} (\sigma_i\sigma_{i+1}\sigma_i\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1})^{g_\ell}&=g_\ell\sigma_ig_{\ell-1}^{-1}g_{\ell-1}\sigma_{i+1}g_{\ell-1}^{-1}g_{\ell-1}\sigma_ig_{\ell}^{-1}g_{\ell}\sigma_{i+1}^{-1}g_{\ell+1}^{-1}g_{\ell+1}\sigma_i^{-1}g_{\ell+1}^{-1}g_{\ell+1}\sigma_{i+1}^{-1}g_\ell^{-1}\\ &=A_{ik}^{\sigma_i}A_{i+1,k}^{-1}. \end{align*} \item[(10)] For $1< i+1=\ell-1\leq k-1$, \begin{align*} (\sigma_i\sigma_{i+1}\sigma_i\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1})^{g_\ell}&=g_\ell\sigma_ig_{\ell}^{-1}g_\ell \sigma_{i+1}g_{\ell-1}^{-1}g_{\ell-1}\sigma_ig_{\ell-2}^{-1}g_{\ell-2}\sigma_{i+1}^{-1}g_{\ell-2}^{-1}g_{\ell-2}\sigma_i^{-1}g_{\ell-1}^{-1}g_{\ell-1}\sigma_{i+1}^{-1}g_\ell^{-1}\\ &=[\sigma_i,1]\\ &=1. \end{align*} \item[(11)] For $1<i+1<\ell-1\leq k-1$, \begin{align*} (\sigma_i\sigma_{i+1}\sigma_i\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1})^{g_\ell}&=\sigma_i^{g_\ell}\sigma_{i+1}^{g_\ell}\sigma_i^{g_\ell}\sigma_{i+1}^{-g_\ell}\sigma_i^{-g_\ell}\sigma_{i+1}^{-g_\ell}\\ &=\sigma_i\sigma_{i+1}\sigma_i\sigma_{i+1}^{-1}\sigma_i^{-1}\sigma_{i+1}^{-1}. \end{align*} \end{enumerate} Relations (3), (5), and (10) are vacuous; relations (1), (6), (7), and (11) yield the defining presentation for $B_{k-1}$; and relations (2), (4), (8), and (9) imply that $U_k$ is a normal subgroup. We recognize the resulting presentation as a presentation of a semidirect product of $B_{k-1}$ with the free group on the set $\{A_{\ell k}\}_{\ell=1}^{k-1}$. \end{proof} \section{(Co)homology of $\mathrm{Conf}_k(\mathbb{R}^n)$}\label{section:cohomology} Our present goal is to understand the cohomology ring $H^*(\mathrm{Conf}_k(\mathbb{R}^n))$ with integer coefficients. Since the cases $n\in\{0,1\}$ are exceptional and easy, we assume throughout that $n\geq 2$. \subsection{Additive structure} We begin with the task of understanding the cohomology as a graded Abelian group.
\begin{definition} Let $V$ be a degreewise finitely generated non-negatively graded Abelian group. The \emph{Poincar\'{e} polynomial} of $V$ is the polynomial \[P(V)=\sum_{i\geq0}\mathrm{rk}(V_i)t^i.\] If $X$ is a space of finite type, the \emph{Poincar\'{e} polynomial} of $X$ is $P(X):= P(H_*(X))$. \end{definition} The Poincar\'{e} polynomial for graded Abelian groups is additive under direct sum and multiplicative under tensor product, so the Poincar\'{e} polynomial for spaces is additive under disjoint union and, with appropriate torsion-freeness assumptions in place, multiplicative under Cartesian product. \begin{theorem}[Leray--Hirsch]\label{thm:Leray--Hirsch} Suppose that the diagram \[\xymatrix{ F\ar[r]\ar[d]&E\ar[d]\\ \mathrm{pt}\ar[r]&B }\] is homotopy Cartesian and that \begin{enumerate} \item $F$ and $B$ are path connected, \item $H^*(F)$ is free Abelian, \item $H^*(F)$ or $H^*(B)$ is of finite type, and \item $H^*(E)\to H^*(F)$ is surjective. \end{enumerate} Then there is an isomorphism $H^*(E)\cong H^*(B)\otimes H^*(F)$ of $H^*(B)$-modules. In particular, we have the equation \[P(E)=P(B)P(F).\] \end{theorem} The proof, which is premised on a few basic properties of the Serre spectral sequence, is deferred to a later point in the notes, at which we will discuss this tool in some detail. In order to apply the Leray--Hirsch theorem, we must verify point (4). In doing so, we employ the \emph{Gauss maps} \begin{align*} \mathrm{Conf}_k(\mathbb{R}^n)&\xrightarrow{\gamma_{ab}} S^{n-1}\\ (x_1,\ldots, x_k)&\mapsto \frac{x_b-x_a}{\|x_b-x_a\|}. \end{align*} \begin{lemma}\label{lem:cohomology surjective} The natural map $H^*(\mathrm{Conf}_k(\mathbb{R}^n))\to H^*(\mathbb{R}^n\setminus\{x_1,\ldots, x_{k-1}\})$ is surjective. \end{lemma} \begin{proof} We first construct a collection of maps $\varphi_a:S^{n-1}\to \mathbb{R}^{n}\setminus\{x_1,\ldots, x_{k-1}\}$ inducing a homotopy equivalence from the bouquet $\bigvee_{k-1}S^{n-1}$.
We will then show that each composite \[\xymatrix{S^{n-1}\ar[r]^-{\varphi_a}&\mathbb{R}^n\setminus\{x_1,\ldots, x_{k-1}\}\subseteq \mathrm{Conf}_k(\mathbb{R}^n)\ar[r]^-{\gamma_{ak}}& S^{n-1} }\] is the identity, implying that $H^*(\mathbb{R}^n\setminus\{x_1,\ldots, x_{k-1}\})$ is generated by classes of the form $\gamma_{ak}^*(\alpha)$ for $\alpha\in H^*(S^{n-1})$. For the construction, we set $x_a=(3^a,0,\ldots, 0)$ for $1\leq a\leq k-1$ and define $\varphi_a:S^{n-1}\to\mathbb{R}^n\setminus\{x_1,\ldots, x_{k-1}\}$ by \[\varphi_a(v)=(3^a,0,\ldots, 0)+ 3^av,\] where $v\in S^{n-1}$ is regarded as a unit vector in $\mathbb{R}^n$. Then $\varphi_a(-1,0,\ldots, 0)=(0,\ldots, 0)$ for $1\leq a\leq k-1$, and the induced map from $\bigvee_{k-1}S^{n-1}$ is clearly a homotopy equivalence. Finally, we have \[\gamma_{ak}\circ\varphi_a(v)=\frac{(3^a,0,\ldots, 0)+3^av-(3^a,0,\ldots, 0)}{\|(3^a,0,\ldots, 0)+3^av-(3^a,0,\ldots, 0)\|}=\frac{3^av}{\|3^av\|}=v.\] \end{proof} Write $\alpha_{ab}\in H^{n-1}(\mathrm{Conf}_k(\mathbb{R}^n))$ for the pullback along $\gamma_{ab}$ of the standard volume form on $S^{n-1}$. We are now able to give a complete additive description of the cohomology ring in terms of these generators. \begin{corollary} For any $k\geq0$, $H^*(\mathrm{Conf}_k(\mathbb{R}^n))$ is free with basis \[S_k=\{\alpha_{a_1b_1}\alpha_{a_2b_2}\cdots \alpha_{a_{m}b_m}: m\geq0,\,1\leq b_1<\cdots<b_m \leq k,\, a_\ell<b_\ell\}.\] In particular, the Poincar\'{e} polynomial is given by \[P(\mathrm{Conf}_k(\mathbb{R}^n))=\prod_{j=1}^{k-1}(1+jt^{n-1}).\] \end{corollary} \begin{proof} Since $\mathbb{R}^n\setminus\{x_1,\ldots, x_{k-1}\}$ and $\mathrm{Conf}_{k-1}(\mathbb{R}^n)$ are path connected and the cohomology of the former is free, Theorem \ref{thm:Fadell--Neuwirth} and Lemma \ref{lem:cohomology surjective} allow us to apply the Leray--Hirsch theorem.
The freeness claim and the formula for the Poincar\'{e} polynomial now follow by induction and the observation that $H^*(\mathbb{R}^n\setminus\{x_1,\ldots, x_{k-1}\})$ is free with Poincar\'{e} polynomial $1+(k-1)t^{n-1}$. For the claim that $S_k$ is a basis, we note that, by induction, the Leray--Hirsch theorem gives the additive isomorphism \[H^*(\mathrm{Conf}_k(\mathbb{R}^n))\cong \mathbb{Z}\langle S_{k-1}\rangle\otimes \mathbb{Z}\langle1,\, \alpha_{ak},\, 1\leq a\leq k-1\rangle,\] and it is easy to check that the map \begin{align*} S_{k-1}\times \{1,\, \alpha_{ak},\,1\leq a\leq k-1\}\to S_k \end{align*} given by concatenation on the right is a bijection. \end{proof} \subsection{Multiplicative structure} In order to obtain a multiplicative description, we require information about the relations among the various $\alpha_{ab}$. We begin with a few easy but useful observations. Recall that $\Sigma_k$ has a right action on $\mathrm{Conf}_k(\mathbb{R}^n)$ given by \[(x_1,\ldots, x_k)\cdot \sigma=(x_{\sigma(1)},\ldots, x_{\sigma(k)}).\] This action induces a left action on cohomology. \begin{proposition} The following relations hold in $H^*(\mathrm{Conf}_k(\mathbb{R}^n))$ for $1\leq a\neq b\leq k$. \begin{enumerate} \item $\alpha_{ab}=(-1)^n\alpha_{ba}$ \item $\alpha_{ab}^2=0$ \item $\sigma^*\alpha_{ab}=\alpha_{\sigma(a)\sigma(b)}$ for $\sigma\in\Sigma_k$. \end{enumerate} \end{proposition} \begin{proof} The first claim follows from the observation that the diagram \[\xymatrix{ &\mathrm{Conf}_2(\mathbb{R}^n)\ar[dl]_-{\gamma_{ab}}\ar[dr]^-{\gamma_{ba}}\\ S^{n-1}\ar[rr]^-{x\mapsto -x}&&S^{n-1} }\] commutes, and that the degree of the antipodal map on $S^{n-1}$ is $(-1)^n$. The second follows from the fact that the volume form on $S^{n-1}$ squares to zero. For the third claim, we have the commuting diagram \[\xymatrix{ \mathrm{Conf}_k(\mathbb{R}^n)\ar[dr]_-{\gamma_{\sigma(a)\sigma(b)}}\ar[rr]^-\sigma&&\mathrm{Conf}_k(\mathbb{R}^n)\ar[dl]^-{\gamma_{ab}}\\ &S^{n-1}. }\] \end{proof} We come now to the fundamental relation.
\begin{proposition}[Arnold relation] For $ 1\leq a<b<c\leq k$, \[\alpha_{ab}\alpha_{bc}+\alpha_{bc}\alpha_{ca}+\alpha_{ca}\alpha_{ab}=0.\] \end{proposition} \begin{remark} The Arnold relation holds without the assumption $a<b<c$. Indeed, the general form follows from the form given here and the antipodal relation $\alpha_{ab}=(-1)^n\alpha_{ba}$. \end{remark} We will discuss three proofs of this relation. For the time being, we concentrate on exploiting it. We write $\op A$ for the quotient of the free graded commutative algebra on generators $\{\alpha_{ab}\}_{1\leq a\neq b\leq k}$ by the ideal generated by $\{\alpha_{ab}+(-1)^{n+1}\alpha_{ba},\, \alpha_{ab}^2, \,\alpha_{ab}\alpha_{bc}+\alpha_{bc}\alpha_{ca}+\alpha_{ca}\alpha_{ab}\}$. \begin{theorem}[Arnold \cite{Arnold:CRCBG}, Cohen \cite{CohenLadaMay:HILS}] The natural map of graded commutative algebras \[\op A\to H^*(\mathrm{Conf}_k(\mathbb{R}^n))\] is an isomorphism. \end{theorem} \begin{proof} The map is surjective, since its image contains a generating set, so it will suffice to show that the basis $S_k$ exhibited above, thought of as lying in $\op A$, spans. Indeed, it follows that $S_k$ is a basis for $\op A$, since any relation would map to a relation in $H^*(\mathrm{Conf}_k(\mathbb{R}^n))$, and this fact implies the claim. To verify that $S_k$ spans, it suffices to show that the element $\alpha_{a_1b_1}\cdots\alpha_{a_m b_m}$ may be written as a linear combination of elements in $S_k$. By the antipodal relation and graded commutativity, we may assume that $a_\ell<b_\ell$ for $1\leq \ell\leq m$ and that $1\leq b_1\leq\cdots\leq b_m\leq k$. We proceed by downward induction on the largest value of $\ell$ such that $b_\ell=b_{\ell-1}=:b$. 
By the Arnold and antipodal relations, we have \begin{align*} \alpha_{a_1b_1}\cdots\alpha_{a_{\ell-1}b}\alpha_{a_\ell b}\cdots\alpha_{a_m b_m}&=(-1)^{n+1}\alpha_{a_1b_1}\cdots(\alpha_{a_\ell a_{\ell-1}}\alpha_{a_{\ell-1}b}+\alpha_{ba_\ell}\alpha_{a_\ell a_{\ell-1}})\cdots\alpha_{a_mb_m}\\ &=(-1)^{n+1} \alpha_{a_1b_1}\cdots \alpha_{a_\ell a_{\ell-1}}\alpha_{a_{\ell-1}b}\cdots \alpha_{a_mb_m}\\&\qquad+(-1)^{2n+1+(n-1)^2}\alpha_{a_1b_1}\cdots \alpha_{a_\ell a_{\ell-1}}\alpha_{a_\ell b}\cdots \alpha_{a_mb_m}. \end{align*} After a second induction on the number of values of $\ell$ such that $b_\ell=b$, we may assume that $b_{\ell-2}<b$. Then, using our assumption that $a_\ell<b$ and $a_{\ell-1}<b$, these two monomials lie in the span of $S_k$ by the first induction. \end{proof} \subsection{The Arnold relation} We now describe several approaches to proving the Arnold relation. The first reduction in all three cases is the observation that, using the projection $\mathrm{Conf}_k(\mathbb{R}^n)\to \mathrm{Conf}_3(\mathbb{R}^n)$ sending $(x_1,\ldots, x_k)$ to $(x_a,x_b,x_c)$, it suffices to verify the relation $\alpha_{12}\alpha_{23}+\alpha_{23}\alpha_{31}+\alpha_{31}\alpha_{12}=0$ in $H^*(\mathrm{Conf}_3(\mathbb{R}^n))$. The first argument, due to Cohen \cite{CohenLadaMay:HILS}, is the most elementary, and we will be able to give a complete account, since it uses only techniques that we have already encountered. \begin{proof}[Cohen's proof of the Arnold relation] We have already seen that $H^{2n-2}(\mathrm{Conf}_3(\mathbb{R}^n))$ is free of rank 2 with basis $\{\alpha_{12}\alpha_{23}, \alpha_{31}\alpha_{12}\}$, where we have used the antipodal relation and graded commutativity to rearrange indices in the second case. Note that another basis for this module is $\{\alpha_{12}\alpha_{23},\alpha_{23}\alpha_{31}\}$, since the permutation $\binom{123}{321}$ interchanges this set with our known basis up to sign.
We conclude the existence of a nontrivial relation \[x\alpha_{12}\alpha_{23}+y\alpha_{23}\alpha_{31}+z\alpha_{31}\alpha_{12}=0.\] Applying $\tau_{12}$ to this relation, we obtain the relation \begin{align*} 0&=x\alpha_{21}\alpha_{13}+y\alpha_{13}\alpha_{32}+z\alpha_{32}\alpha_{21}\\ &=(-1)^{(n-1)^2+2n}(x\alpha_{31}\alpha_{12}+y\alpha_{23}\alpha_{31}+z\alpha_{12}\alpha_{23}). \end{align*} Canceling the sign and subtracting the result from the known relation yields \[(x-z)\alpha_{12}\alpha_{23}+(z-x)\alpha_{31}\alpha_{12}=0,\] whence $x=z$ by linear independence. Repeating the same process with $\tau_{23}$ shows that \[(x-y)\alpha_{12}\alpha_{23}+(y-x)\alpha_{23}\alpha_{31}=0,\] whence $x=y$ by linear independence. Since the relation is nontrivial, it follows that $x=y=z\neq0$, so the expression in question is $x$-torsion and therefore zero, since $H^*(\mathrm{Conf}_3(\mathbb{R}^n))$ is torsion-free. \end{proof} The original proof, due to Arnold \cite{Arnold:CRCBG}, is of a very different flavor, but is only valid in its original form in dimension 2. \begin{proof}[Arnold's proof of the Arnold relation ($n=2$)] Since there is no torsion, it suffices to prove the relation holds in cohomology with coefficients in $\mathbb{C}$. Make the identification $\mathbb{R}^2\cong\mathbb{C}$. The class $\alpha_{ab}$ is obtained by pulling back a standard generator of $H^1(S^1)$ along the composite \[\xymatrix{\mathrm{Conf}_k(\mathbb{C})\ar[r]^-{(z_a,z_b)}&\mathrm{Conf}_2(\mathbb{C})\ar[r]^-{z_2-z_1}&\mathbb{C}^\times\ar[r]^-{\frac{z}{\|z\|}}&S^1. }\] A representative for this generator in $H^1(\mathbb{C}^\times)$ is given by the differential form $dz/z$, since \[\frac{1}{2\pi i}\int_{S^1}\frac{dz}{z}=1\] by the residue theorem. Therefore, we may represent $\alpha_{ab}$ by the differential form \[\omega_{ab}=\frac{dz_b-dz_a}{z_b-z_a}.\] The claim now follows from the easy observation that the differential forms $\omega_{ab}$ satisfy the Arnold relation.
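For completeness, this observation admits a routine direct verification. Over the common denominator $(z_2-z_1)(z_3-z_2)(z_1-z_3)$, the sum $\omega_{12}\omega_{23}+\omega_{23}\omega_{31}+\omega_{31}\omega_{12}$ has numerator

```latex
\begin{align*}
&(dz_2-dz_1)(dz_3-dz_2)(z_1-z_3)+(dz_3-dz_2)(dz_1-dz_3)(z_2-z_1)\\
&\qquad+(dz_1-dz_3)(dz_2-dz_1)(z_3-z_2).
\end{align*}
```

Since $1$-forms anticommute, each of the three products of $1$-forms above equals $dz_1dz_2+dz_2dz_3+dz_3dz_1$, while the scalar factors sum to $(z_1-z_3)+(z_2-z_1)+(z_3-z_2)=0$, so the numerator vanishes.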
\end{proof} This line of argument actually yields the far stronger result of a quasi-isomorphism \[H^*(\mathrm{Conf}_k(\mathbb{C}))\xrightarrow{\sim} \Omega^*(\mathrm{Conf}_k(\mathbb{C});\mathbb{C})\] of differential graded algebras, where the cohomology is regarded as a chain complex with zero differential; in jargon, $\mathrm{Conf}_k(\mathbb{C})$ is \emph{formal}. In higher dimensions, the corresponding differential forms satisfy the relation only up to a coboundary, i.e., we have the equation \[\omega_{12}\omega_{23}+\omega_{23}\omega_{31}+\omega_{31}\omega_{12}=d\beta.\] Roughly, the differential form $\beta$ is obtained by integrating the form $\omega_{14}\omega_{24}\omega_{34}$ along the fibers of the projection $\pi:\mathrm{Conf}_4(\mathbb{R}^n)\to \mathrm{Conf}_3(\mathbb{R}^n)$ onto the first three coordinates. To see why this might be the case, we imagine that a fiberwise version of Stokes' theorem should imply that the boundary of the fiberwise integral should be the fiberwise integral along the ``boundary'' of the fiber, which in turn should be a sum of four terms: the first three terms are the loci where $x_i=x_4$ for $1\leq i\leq 3$, and the fourth lies at infinity, where $x_4$ is very far away. We might imagine that the three terms in the Arnold relation arise from these first three terms and that the term at infinity vanishes. 
Of course, the fiber of this projection is non-compact, so, in order to make this kind of reasoning precise, one must replace the configuration space $\mathrm{Conf}_k(\mathbb{R}^n)$ with its \emph{Fulton-MacPherson compactification} $\mathrm{Conf}_k[\mathbb{R}^n]$, which is defined as the closure of the image of $\mathrm{Conf}_k(\mathbb{R}^n)$ under the map \[\mathrm{Conf}_k(\mathbb{R}^n)\to (\mathbb{R}^n)^k\times(S^{n-1})^{\binom{k}{2}}\times[0,\infty]^{\binom{k}{3}}\] given by the inclusion in the first factor, the Gauss maps $\gamma_{ab}$ for $1\leq a<b\leq k$ in the second, and the relative distance functions $\delta_{abc}(x_1,\ldots, x_k)=\frac{\|x_a-x_b\|}{\|x_a-x_c\|}$ for $1\leq a<b<c\leq k$ in the third---see the original references \cite{FultonMacPherson:CCS, AxelrodSinger:CSPT} or the detailed account \cite{Sinha:MTCCS}. It turns out that $\mathrm{Conf}_k[\mathbb{R}^n]$ is a manifold with corners on which the integration described above can actually be carried out. Using this compactification, Kontsevich \cite{Kontsevich:OMDQ} was able to carry out an analogue of Arnold's program from above. The basic observation is that the construction of $\beta$ is an example of a more systematic method for generating differential forms from graphs, which, when pursued fully, yields a zig-zag of quasi-isomorphisms \[\xymatrix{H^*(\mathrm{Conf}_k(\mathbb{R}^n))& ?\ar[r]^-\sim\ar[l]_-\sim& \Omega^*(\mathrm{Conf}_k[\mathbb{R}^n]),}\] where the unspecified middle term is a certain ``graph complex.'' Thus, in higher dimensions, too, configuration spaces are formal. See \cite{LambrechtsVolic:FLNDO} for a detailed proof of the formality theorem. \subsection{Planetary systems} The third proof of the Arnold relation will proceed through a geometric, intersection-theoretic analysis following \cite{Sinha:HLDO}. In order to pursue this direction, we will need to understand something about the homology $H_*(\mathrm{Conf}_k(\mathbb{R}^n))$.
We begin by introducing a systematic method for generating homology classes. \begin{definition} Fix a subset $S\subseteq\{1,\ldots, k\}$. \begin{enumerate} \item An $S$-\emph{tree} $T$ is a pair consisting of an ordering of $S$ and a binary parenthesization of $S$ with respect to this ordering. \item A $k$-\emph{forest} is an ordered partition $\{1,\ldots, k\}\cong \coprod_i S_i$ and an $S_i$-tree for each $i$. \end{enumerate} \end{definition} \begin{example} With $S=\{1,3,4,7,8\}\subseteq \{1,\ldots, 9\}$, the expression $((48)((17)3))$ is an $S$-tree. \end{example} \begin{example} There is a unique $S$-tree with $S=\{i\}\subseteq\{1,\ldots, k\}$ given by the expression $i$. \end{example} The terminology is motivated by the observation that the data of an $S$-tree is equivalent to an isotopy class of planar trees $T$ with the following features: \begin{enumerate} \item $T$ has only univalent and trivalent vertices, called \emph{external} and \emph{internal}, respectively; \item $T$ has a distinguished external \emph{root} vertex, and its other external vertices are \emph{leaves}; \item the leaves of $T$ are labeled by elements of $S$. \end{enumerate} The internal vertices in the geometric picture correspond bijectively to the pairs of matching open and closed parentheses in the combinatorial picture. We write $V(T)$ for the set of internal vertices of $T$. Note that an internal vertex lies on the path from the leaf $i$ to the root if and only if its parentheses enclose $i$; in this case, we write $v<i$. We define the \emph{height} $h(v)$ of a vertex $v$ to be the number of edges between $v$ and the root. In the combinatorial picture, the height of an internal vertex corresponds to the depth of the corresponding pair of parentheses. Fix $S\subseteq \{1,\ldots, k\}$, and let $T$ be an $S$-tree.
For each $i\in S$, define a map \begin{align*} (S^{n-1})^{V(T)}&\xrightarrow{P_{T,i}} \mathbb{R}^n\\ (u_v)_{v\in V(T)}&\mapsto \displaystyle \sum_{v<i}(-1)^{\delta(i,v)}\epsilon^{h(v)}u_v, \end{align*} where $\epsilon$ is a small positive real number, and $\delta(i,v)$ takes the value $1$ if the path from $i$ to the root passes through the left edge at $v$ and $0$ if it passes through the right edge. \begin{lemma} For distinct $i, j\in S$, $P_{T,i}\left((u_v)_{v\in V(T)}\right)\neq P_{T,j}\left((u_v)_{v\in V(T)} \right)$. \end{lemma} \begin{proof} Let $w$ denote the highest internal vertex with $w<i$ and $w<j$, and assume without loss of generality that $\delta(j,w)=0$ and $\delta(i,w)=1$. Then, cancelling terms involving $v<w$, we have \begin{align*} P_{T,j}\left((u_v)_{v\in V(T)}\right)-P_{T,i}\left((u_v)_{v\in V(T)}\right)&=\displaystyle \sum_{v<j}(-1)^{\delta(j,v)}\epsilon^{h(v)}u_v-\displaystyle \sum_{v<i}(-1)^{\delta(i,v)}\epsilon^{h(v)}u_v\\ &=\epsilon^{h(w)}\left(2u_w+\epsilon(\cdots)\right). \end{align*} For $\epsilon$ sufficiently small, this expression does not vanish. \end{proof} Thus, taking the coordinates of $\{1,\ldots, k\}\setminus S$ to be fixed at some large, distinct values, we obtain a map $P_T:(S^{n-1})^{V(T)}\to \mathrm{Conf}_k(\mathbb{R}^n)$. We refer to $P_T$ and to the image of the fundamental class under $P_T$ interchangeably as the \emph{planetary system} associated to $T$. A forest $F=\{T_i\}$ also defines a planetary system $P_F$ by taking products of translates of the planetary systems of its component trees. \begin{definition} A tree is \emph{tall} if it is of the form $(\cdots(i_1i_2)\cdots i_m)$ with $i_1$ minimal among the leaf labels, considered as natural numbers. A forest is \emph{tall} if \begin{enumerate} \item each component tree is tall, and \item the induced ordering on the minimal leaves of the component trees is the natural ordering.
\end{enumerate} \end{definition} \begin{proposition} Planetary systems of tall trees form a basis for $H_{(k-1)(n-1)}(\mathrm{Conf}_k(\mathbb{R}^n))$. \end{proposition} \begin{proof} From Leray--Hirsch and Fadell--Neuwirth, we know that $H^{(k-1)(n-1)}(\mathrm{Conf}_k(\mathbb{R}^n))$ is free Abelian of rank $(k-1)!$, so the homology group of interest has these properties as well. On the other hand, the set of tall trees on $\{1,\ldots, k\}$ is put into bijection with the set of permutations $\sigma\in \Sigma_{k}$ fixing $1$ by associating to $\sigma$ the tree \[T_\sigma=(\cdots(\sigma^{-1}(1)\sigma^{-1}(2))\cdots \sigma^{-1}(k)).\] Since this set has cardinality $(k-1)!$, we conclude that it suffices to show that the corresponding planetary systems are linearly independent. For this task, we define a map $\gamma_\sigma:\mathrm{Conf}_k(\mathbb{R}^n)\to (S^{n-1})^{k-1}$ by putting $\gamma_{\sigma^{-1}(i)\sigma^{-1}(i+1)}$ in the $i$th coordinate, and we set \[\alpha_\sigma=\gamma_\sigma^*\big(\mathrm{vol}_{S^{n-1}}^{\times(k-1)}\big)=\alpha_{\sigma^{-1}(1)\sigma^{-1}(2)}\cdots\alpha_{\sigma^{-1}(k-1)\sigma^{-1}(k)}.\] Then the proof will be complete upon verifying that \[\left\langle P_{T_\sigma}, \alpha_{\tau}\right\rangle=\delta_{\sigma\tau}.\] Write $\{v_1,\ldots, v_{k-1}\}$ for the internal vertices of $T_\sigma$, where $v_i$ is the unique internal vertex with $h(v_i)=k-i$. For $i<j$, we compute that $\gamma_{\sigma^{-1}(i)\sigma^{-1}(j)}(P_{T_\sigma}(u_{v_1},\ldots, u_{v_{k-1}}))$ is the unit vector in the direction of \begin{align*} \left(\epsilon^{k-j+1}u_{v_{j-1}}-\sum_{\ell=j}^{k-1}\epsilon^{k-\ell}u_{v_\ell}\right)-\left(\epsilon^{k-i+1}u_{v_{i-1}}-\sum_{\ell=i}^{k-1}\epsilon^{k-\ell}u_{v_\ell}\right) &=\epsilon^{k-j+1}\left(2u_{v_{j-1}}+\epsilon(\cdots)\right).
\end{align*} Letting $\epsilon$ tend to zero defines a homotopy between this map and the map \[(u_{v_1},\ldots, u_{v_{k-1}})\mapsto u_{v_{j-1}}.\] Therefore, $\gamma_\sigma\circ P_{T_\sigma}$ is homotopic to the identity, whence $\langle P_{T_\sigma},\alpha_\sigma\rangle=1$. Assume now that $\tau\neq \sigma$. Then there is some $1<i<k$ such that $\sigma\tau^{-1}(i)$ is greater than both $\sigma\tau^{-1}(i-1)$ and $\sigma\tau^{-1}(i+1)$, for otherwise, using the fact that $\sigma\tau^{-1}(1)=1$, we conclude that $\sigma\tau^{-1}$ is order-preserving and hence the identity, a contradiction. But then, by the previous calculation, the composites of $P_{T_\sigma}$ with the two maps \begin{align*}\gamma_{\tau^{-1}(i-1)\tau^{-1}(i)}&=\gamma_{\sigma^{-1}(\sigma\tau^{-1}(i-1))\sigma^{-1}(\sigma\tau^{-1}(i))}\\ \gamma_{\tau^{-1}(i)\tau^{-1}(i+1)}&=\gamma_{\sigma^{-1}(\sigma\tau^{-1}(i))\sigma^{-1}(\sigma\tau^{-1}(i+1))} \end{align*} differ by the antipodal map up to homotopy, so $\gamma_\tau\circ P_{T_\sigma}$ factors, up to homotopy, through a submanifold of $(S^{n-1})^{k-1}$ of positive codimension. It follows that $\langle P_{T_\sigma},\alpha_\tau\rangle=0$. \end{proof} \begin{corollary} Planetary systems of tall forests form a basis for $H_*(\mathrm{Conf}_k(\mathbb{R}^n))$. \end{corollary} \begin{proof} Let $F=\{T_{\sigma_i}\}$ be a tall forest, where $T_{\sigma_i}$ has $k_i$ leaves. We apply the same reasoning to the diagram \[\xymatrix{(S^{n-1})^{V(F)}\ar@{=}[d]\ar[r]^-{P_F}&\mathrm{Conf}_k(\mathbb{R}^n)\ar[r]& \displaystyle\prod_i \mathrm{Conf}_{k_i}(\mathbb{R}^n)\ar[r]^-{(\gamma_{\sigma_i})}&\displaystyle\prod_{i}(S^{n-1})^{V(T_i)}\\ \displaystyle\prod_i(S^{n-1})^{V(T_i)}\ar[urr]_-{(P_{T_{\sigma_i}})}, }\] which commutes up to homotopy.
\end{proof} \begin{recollection} One version of Poincar\'{e} duality for oriented, connected, boundaryless, possibly non-compact $n$-manifolds of finite type is the isomorphism \[\widetilde H_i(M^+)\cong H^{n-i}(M),\] where $M^+$ denotes the one-point compactification of $M$ and we reduce with respect to the point at infinity. In particular, such a manifold has a fundamental class $[M]\in \widetilde H_n(M^+)$, defined as the preimage of $1\in H^0(M)$ under this isomorphism. This duality can sometimes be interpreted geometrically. \begin{enumerate} \item If $N\subseteq M$ is a proper submanifold of dimension $r$ and $P\subseteq M$ is a compact submanifold of dimension $n-r$, we may contemplate the composite \[\xymatrix{ \widetilde H_r(N^+)\otimes H_{n-r}(P)\ar[r]&\widetilde H_r(M^+)\otimes H_{n-r}(M)\cong H^{n-r}(M)\otimes H_{n-r}(M)\ar[r]^-{\langle-,-\rangle}&\mathbb{Z}. }\] (Note that the existence of the first map uses the fact that $N$ is properly embedded.) If $N$ and $P$ intersect transversely, then the value of this composite on $[N]\otimes [P]$ is the signed intersection number of $N$ and $P$. \item Since cohomology is a ring, we may likewise contemplate the composite \[\xymatrix{ \widetilde H_r(N_1^+)\otimes \widetilde H_s(N_2^+)\ar[r]& H^{n-r}(M)\otimes H^{n-s}(M)\ar[r]^-\smile& H^{2n-r-s}(M)\cong \widetilde H_{r+s-n}(M^+), }\] where $N_1$ and $N_2$ are proper submanifolds of dimension $r$ and $s$, respectively. If $N_1$ and $N_2$ intersect transversely, then the value of this composite on $[N_1]\otimes[N_2]$ is $[N_1\cap N_2]$. \end{enumerate} \end{recollection} Now, consider the submanifold of $\mathrm{Conf}_3(\mathbb{R}^n)$ defined by requiring that $x_1$, $x_2$, and $x_3$ be collinear. This manifold has three connected components, and we let $C_a$ denote the component in which $x_a$ lies between $x_b$ and $x_c$.
Then the map \begin{align*} C_a&\to\mathbb{R}^n\times\mathbb{R}_{>0}\times\mathbb{R}_{>0}\times S^{n-1}\\ (x_1, x_2, x_3)&\mapsto \left(x_a,\, |x_b-x_a|,\, |x_c-x_a|, \,\frac{x_c-x_b}{|x_c-x_b|}\right) \end{align*} is a homeomorphism. In particular, $\dim C_a=2n+1$. Note that $C_a$ is closed as a subspace of $\mathrm{Conf}_3(\mathbb{R}^n)$ and hence proper as a submanifold. \begin{proof}[Sinha's proof of the Arnold relation] Pushing forward $[C_1]$ and applying Poincar\'{e} duality as above, we obtain an element of $H^{n-1}(\mathrm{Conf}_3(\mathbb{R}^n))$. By our homology calculation, this class is determined by evaluating it on $P_{(12)}$, $P_{(13)}$, and $P_{(23)}$. These values are given by the respective intersection numbers with $C_1$; the first two are $\pm 1$ with opposite signs, and the third vanishes, since no configuration in the planetary system $P_{(23)}$ has $x_1$ between $x_2$ and $x_3$. Thus, with the appropriate choice of orientation, $C_1$ is Poincar\'{e} dual to $\alpha_{12}-\alpha_{13}$. Similar remarks apply to $C_2$, and, since $C_1\cap C_2=\varnothing$, we conclude that \begin{align*} 0&=(\alpha_{12}-\alpha_{13})(\alpha_{23}-\alpha_{21})\\ &=\alpha_{12}\alpha_{23}-\alpha_{12}\alpha_{21}-\alpha_{13}\alpha_{23}+\alpha_{13}\alpha_{21}\\ &=\alpha_{12}\alpha_{23}+(-1)^{n(n-1)}\alpha_{23}\alpha_{31}+(-1)^{2n}\alpha_{31}\alpha_{12}\\ &=\alpha_{12}\alpha_{23}+\alpha_{23}\alpha_{31}+\alpha_{31}\alpha_{12}. \end{align*} \end{proof} \subsection{The Jacobi identity and little cubes} The Arnold relation has its reflection in homology. For trees $T_1$ and $T_2$, we write $[T_1,T_2]$ for the tree obtained by grafting the roots of $T_1$ and $T_2$ to the leaves of $(12)$, in this order. \begin{proposition}[Jacobi identity]\label{prop:jacobi} The relation $[[T_1,T_2], T_3]+[[T_2,T_3], T_1]+[[T_3,T_1],T_2]=0$ holds in $H_*(\mathrm{Conf}_k(\mathbb{R}^n))$. More generally, if $R$ is any tree, then the trees resulting from grafting the roots of $[[T_1,T_2], T_3]$, $[[T_2,T_3], T_1]$, and $[[T_3,T_1],T_2]$ to any fixed leaf of $R$ sum to zero.
\end{proposition} It is possible to give a geometric derivation of the Jacobi identity---see \cite{Sinha:HLDO}---but we will pursue an alternate route. We begin by observing that the most basic case of the identity, in which $T_1$, $T_2$, $T_3$, and $R$ are all trivial trees with no internal vertices, is essentially immediate from what we have already done. \begin{proof}[Proof of Proposition \ref{prop:jacobi}, trivial case] We calculate that \begin{align*} \left\langle ((23)1), \alpha_{12}\alpha_{23}\right\rangle&=\left\langle ((23)1), -\alpha_{23}\alpha_{31}-\alpha_{31}\alpha_{12}\right\rangle\\ &=-\left\langle((23)1), \alpha_{23}\alpha_{31}\right\rangle+(-1)^{1+2n+(n-1)^2}\left\langle ((23)1), \alpha_{21}\alpha_{13}\right\rangle\\ &=-\left\langle((13)2), \alpha_{13}\alpha_{32}\right\rangle+(-1)^n\left\langle((13)2), \alpha_{12}\alpha_{23}\right\rangle\\ &=-1, \end{align*} where we have applied the permutation $\tau_{12}$ in going from the second to the third line, and the last equality follows from the perfect pairing between tall trees and the corresponding cohomology classes. A similar calculation shows that $\left\langle((23)1), \alpha_{31}\alpha_{12}\right\rangle=-1$, and it follows that \[((23)1)=-((31)2)-((12)3),\] as desired. \end{proof} The general form of the Jacobi identity follows from this basic case once we are assured that grafting of trees is linear. In order to see why this linearity might hold, we turn to an alternative model for the homotopy types of configuration spaces---for original references, see \cite{BoardmanVogt:HIASTS, May:GILS}. \begin{definition} A \emph{little $n$-cube} is an embedding $f:(0,1)^n\to (0,1)^n$ of the form $f(x)=Dx+b$, where $b\in(0,1)^n$ and $D$ is a diagonal matrix with positive eigenvalues. \end{definition} We write $\op C_n(k)$ for the space of $k$-tuples of little $n$-cubes with pairwise disjoint images, topologized as a subspace of $\mathrm{Map}(\amalg_k (0,1)^n, (0,1)^n)$ or, equivalently, via the defining parameters $(D,b)$ as a subspace of Euclidean space.
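To fix ideas, here is one concrete point of $\op C_1(2)$; the particular affine maps below are an illustrative choice, not canonical.

```latex
\begin{example}
For $n=1$ and $k=2$, the embeddings
\[f_1(x)=\tfrac{1}{4}x+\tfrac{1}{8},\qquad f_2(x)=\tfrac{1}{4}x+\tfrac{5}{8}\]
have disjoint images $(\tfrac18,\tfrac38)$ and $(\tfrac58,\tfrac78)$, so the pair
$(f_1,f_2)$ is a point of $\op C_1(2)$. Evaluation at the center $\tfrac12$ of the
interval yields the configuration $(\tfrac14,\tfrac34)\in\mathrm{Conf}_2((0,1))$.
\end{example}
```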
Note that, since little cubes are closed under composition, we have a collection of maps of the form \[\op C_n(m)\times \op C_n(k_1)\times\cdots\times\op C_n(k_m)\to \op C_n(k)\] whenever $k_1+\cdots +k_m=k$. These maps furnish the collection $\{\op C_n(k)\}_{k\geq0}$ of spaces with the structure of an \emph{operad} \cite{May:GILS}, but we will not need to make use of the full strength of this notion. \begin{proposition} The map $\op C_n(k)\to \mathrm{Conf}_k((0,1)^n)\cong\mathrm{Conf}_k(\mathbb{R}^n)$ given by evaluation at $(1/2,\ldots, 1/2)$ is a homotopy equivalence. \end{proposition} \begin{proof}[Sketch proof] A section of the map in question is defined by sending a configuration $x$ to the unique $k$-tuple of little cubes $(f_1,\ldots, f_k)$ with the following properties: \begin{enumerate} \item $f_i(1/2, \ldots, 1/2)=x_i$ for $1\leq i \leq k$; \item all sides of each $f_i$ have equal length, and all $f_i$ have equal volume; \item the images of the $f_i$ do not have pairwise disjoint closures. \end{enumerate} One checks that this map is continuous, so that we may view the configuration space as a subspace of $\op C_n(k)$. Scaling defines a deformation retraction onto this subspace. \end{proof} For further details, see \cite[4.8]{May:GILS}. \begin{proof}[Proof of Proposition \ref{prop:jacobi}, general case] By considering planetary systems of little cubes rather than configurations, one obtains the dashed lifts depicted in the diagram \[\xymatrix{ &\op C_n(k)\ar[d]\\ (S^{n-1})^{V(F)}\ar@{-->}[ur]^-{P_F^\Box}\ar[r]^-{P_F}&\mathrm{Conf}_k(\mathbb{R}^n).
}\] With these maps in hand, the combinatorics of grafting trees becomes the combinatorics of composing little cubes; that is, the tree $[[T_1, T_2], T_3]$ is the image of $\left( ((12)3), T_1, T_2, T_3\right)$ under the composition map \[\op C_n(3)\times \op C_n(k_1)\times \op C_n(k_2)\times\op C_n(k_3)\to \op C_n(k),\] and similar remarks pertain to grafting roots of trees onto a fixed leaf of a tree $R$. Thus, grafting, as the map induced on homology by a map of spaces, is linear, and the general identity follows from the basic case proven above. \end{proof} An argument similar to our earlier cohomology calculation, using the Jacobi identity to rebracket forests into sums of tall forests, proves the following. \begin{theorem}[Cohen] The graded Abelian group $H_*(\mathrm{Conf}_k(\mathbb{R}^n))$ is isomorphic to the quotient of the free Abelian group with basis the set of $k$-forests by the Jacobi relations and signed antisymmetry. \end{theorem} \begin{remark} This isomorphism may be promoted to an isomorphism between the operad of graded Abelian groups given by the collection $\{H_*(\op C_n(k))\}_{k\geq0}$ and the operad controlling $(n-1)$-shifted Poisson algebras. \end{remark} \subsection{The unordered case} We close this section with a calculation in the unordered case. \begin{proposition}\label{prop:unordered rational} For $k\geq2$ and $n\geq1$, there is an isomorphism \[ H_i(B_k(\mathbb{R}^n);\mathbb{Q})\cong\begin{cases} \mathbb{Q}&\quad\text{if $i=0$, or if $i=n-1$ and $n$ is even}\\ 0&\quad\text{otherwise.} \end{cases} \] \end{proposition} \begin{remark} Note the vast difference in size and complexity between the rational homology of $B_k(\mathbb{R}^n)$ and that of $\mathrm{Conf}_k(\mathbb{R}^n)$. This disparity, which may at first seem surprising, is characteristic of the relationship between ordered and unordered configuration spaces in characteristic zero.
In finite characteristic, as we will see, this relationship is reversed, and it is the homology in the unordered case that is by far more complex. One obvious indicator of the rational difference between ordered and unordered is the fact that the $i$th Betti number of $\mathrm{Conf}_k(\mathbb{R}^n)$ tends to infinity with $k$, while that of $B_k(\mathbb{R}^n)$ quickly stabilizes to a fixed value. This observation is a simple example of the general phenomenon of \emph{homological stability} for configuration spaces of manifolds \cite{Church:HSCSM, RandalWilliams:HSUCS}. Although the Betti numbers in the ordered case do not stabilize, the analogous phenomenon of \emph{representation stability}, which takes the action of $\Sigma_k$ into account, does occur \cite{Farb:RS}. \end{remark} In making this calculation, we will use the following basic fact. \begin{lemma}\label{lem:transfer} Let $\pi: E\to B$ be a finite regular cover with deck group $G$. If $\mathbb{F}$ is a field in which $|G|$ is invertible, then the natural map \[\bar\pi_*:H_*(E; \mathbb{F})_G\to H_*(B;\mathbb{F})\] is an isomorphism. \end{lemma} This result is a consequence of the existence and basic properties of the \emph{transfer map}. Recall that the transfer is a wrong-way map on homology \[\mathrm{tr}:H_*(B)\to H_*(E)\] defined by sending a singular chain to the sum over its $|G|$ lifts to $E$, which is clearly a chain map. It is obvious from the definition that $\pi_*(\mathrm{tr}(\alpha))=|G|\alpha$. \begin{proof}[Proof of Lemma \ref{lem:transfer}] We claim that the composite \[\xymatrix{f:H_*(B;\mathbb{F})\ar[r]^-{\frac{1}{|G|}\mathrm{tr}}&H_*(E;\mathbb{F})\ar[r]&H_*(E;\mathbb{F})_{G}}\] is an inverse isomorphism to $\bar\pi_*$. Note that we have used the assumption that $|G|$ is invertible in $\mathbb{F}$ in defining $f$. 
In one direction, we compute that \[\bar\pi_*(f(\alpha))=\pi_*\left(\frac{1}{|G|}\mathrm{tr}(\alpha)\right)=\frac{1}{|G|}\pi_*(\mathrm{tr}(\alpha))=\alpha,\] and in the other we have \[f(\bar\pi_*([\beta]))=f(\pi_*(\beta))=\frac{1}{|G|}\left[\mathrm{tr}(\pi_*(\beta))\right]=\frac{1}{|G|}\left[\sum_{g\in G}g\cdot\beta\right]=\frac{1}{|G|}\left[\sum_{g\in G}\beta\right]=\beta.\] \end{proof} With the identification $H_*(B_k(\mathbb{R}^n);\mathbb{Q})\cong H_*(\mathrm{Conf}_k(\mathbb{R}^n);\mathbb{Q})_{\Sigma_k}$ in hand, we proceed by first identifying the coinvariants in top degree. \begin{lemma}\label{lem:top homology} For $k>1$, there is an isomorphism \[H_{(n-1)(k-1)}(\mathrm{Conf}_k(\mathbb{R}^n);\mathbb{Q})_{\Sigma_k}\cong\begin{cases} \mathbb{Q}&\quad k=2 \text{ and $n$ even}\\ 0&\quad\text{otherwise.} \end{cases}\] \end{lemma} \begin{proof} If $n$ is odd, then any tall tree $T$ is equal to the additive inverse of the tree obtained by switching the labels of the first two leaves of $T$. Since this operation may be achieved by the action of the symmetric group, it follows that $2[T]=0$ at the level of coinvariants, whence $[T]=0$. Since tall trees span the top homology, their images span the coinvariants, and the claim follows in this case. Assume now that $n$ is even. If $k\geq3$, then the Jacobi identity applied to the bottom three leaves of a tall tree $T$ shows that $3[T]=0$, and so $[T]=0$, and we conclude as before. In the remaining case $k=2$, we note that $H_{n-1}(\mathrm{Conf}_2(\mathbb{R}^n))\cong\mathbb{Z}\langle P_{(12)}\rangle$, and that $\Sigma_2$ acts trivially. 
\end{proof} \begin{proof}[Proof of Proposition \ref{prop:unordered rational}] As a consequence of our description in terms of tall forests, we have the following calculation: \begin{align*} H_*(\mathrm{Conf}_k(\mathbb{R}^n))&\cong \bigoplus_{\text{partitions of [k]}}\bigotimes_i H_{(n-1)(k_i-1)}(\mathrm{Conf}_{k_i}(\mathbb{R}^n))\\ &\cong \bigoplus_{r\geq0}\left(\bigoplus_{k_1+\cdots+k_r=k}\bigotimes_{i=1}^rH_{(n-1)(k_i-1)}(\mathrm{Conf}_{k_i}(\mathbb{R}^n))\otimes_{\Sigma_{k_1}\times\cdots\times\Sigma_{k_r}}\mathbb{Z}[\Sigma_k]\right)_{\Sigma_r}\\ &\cong\bigoplus_{r\geq0}\left(\bigoplus_{k_1+\cdots+k_r=k}\bigotimes_{i=1}^rH_{(n-1)(k_i-1)}(\mathrm{Conf}_{k_i}(\mathbb{R}^n))\otimes\mathbb{Z}[\Sigma_k]\right)_{\Sigma_r\ltimes \Sigma_{k_1}\times\cdots\times\Sigma_{k_r}}. \end{align*} Thus, tensoring with $\mathbb{Q}$, forming the $\Sigma_k$-coinvariants, and using that $k!$ is invertible, we find that \begin{align*} H_*(B_k(\mathbb{R}^n);\mathbb{Q})&\cong \bigoplus_{r\geq0}\left(\bigoplus_{k_1+\cdots+k_r=k}\bigotimes_{i=1}^rH_{(n-1)(k_i-1)}(\mathrm{Conf}_{k_i}(\mathbb{R}^n);\mathbb{Q})\right)_{\Sigma_r\ltimes \Sigma_{k_1}\times\cdots\times\Sigma_{k_r}}\\ &\cong\bigoplus_{r\geq0}\left(\bigoplus_{k_1+\cdots+k_r=k}\bigotimes_{i=1}^rH_{(n-1)(k_i-1)}(\mathrm{Conf}_{k_i}(\mathbb{R}^n);\mathbb{Q})_{\Sigma_{k_i}}\right)_{\Sigma_r}. \end{align*} The claim now follows easily from Lemma \ref{lem:top homology}, since the only nonvanishing terms up to the action of $\Sigma_r$ are $(k_1,\ldots, k_r)=(1,\ldots, 1)$ and possibly $(k_1,\ldots, k_r)=(2,1,\ldots, 1)$. \end{proof} With a few more definitions in hand, this calculation may be packaged in a more succinct form. \begin{definition} A \emph{symmetric sequence} of graded Abelian groups is a collection $\{V(k)\}_{k\geq0}$ where $V(k)$ is a graded Abelian group equipped with an action of $\Sigma_k$.
\end{definition} Thus, a symmetric sequence is equivalent to the data of a functor from the category $\Sigma$ of finite sets and bijections to graded Abelian groups. There is a notion of tensor product of symmetric sequences, which is given by the formula \begin{align*}(V\otimes W)(k)&=\bigoplus_{i+j=k}V(i)\otimes W(j)\otimes_{\Sigma_i\times\Sigma_j}\mathbb{Z}[\Sigma_k]. \end{align*} Defining a symmetric sequence by $H_*(\mathrm{Conf}(\mathbb{R}^n))(k)=H_*(\mathrm{Conf}_k(\mathbb{R}^n))$, we now recognize the identification \[H_*(\mathrm{Conf}(\mathbb{R}^n))\cong \mathrm{Sym}(H_\mathrm{top}(\mathrm{Conf}(\mathbb{R}^n)))\] with the symmetric algebra for this tensor product. Now, a symmetric sequence $V$ determines a bigraded Abelian group $V_\Sigma$ by the formula \[V_\Sigma=\bigoplus_{k\geq0}V(k)_{\Sigma_k},\] and it is immediate from the formula that \[(V\otimes W)_\Sigma\cong V_\Sigma\otimes W_\Sigma.\] Thus, we have an isomorphism of bigraded vector spaces \begin{align*} \textstyle\bigoplus_{k \geq0}H_*(B_k(\mathbb{R}^n);\mathbb{Q})&\cong H_*(\mathrm{Conf}(\mathbb{R}^n);\mathbb{Q})_\Sigma\\ &\cong \mathrm{Sym}(H_\mathrm{top}(\mathrm{Conf}(\mathbb{R}^n);\mathbb{Q}))_\Sigma\\ &\cong \mathrm{Sym}(H_\mathrm{top}(\mathrm{Conf}(\mathbb{R}^n);\mathbb{Q})_\Sigma)\\ &\cong \mathrm{Sym}(\mathbb{Q}[0,1]\oplus \mathbb{Q}[n-1, 2]). \end{align*} Here, by Lemma \ref{lem:top homology}, the generator $\mathbb{Q}[n-1,2]$ is understood to be present only when $n$ is even. \begin{remark} From the operadic point of view, this bigraded Abelian group is the free shifted Poisson algebra on one generator. \end{remark} This calculation illustrates a valuable lesson, namely that configuration spaces tend to exhibit more structure when taken all together. This insight will be indispensable to us in our future investigations. Before pursuing this direction, however, we will need to invest in some new tools.
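Before doing so, we illustrate Proposition \ref{prop:unordered rational} in the simplest case, using only facts recorded above.

```latex
\begin{example}
For $k=2$, the Gauss map $\gamma_{12}$ exhibits a $\Sigma_2$-equivariant homotopy
equivalence between $\mathrm{Conf}_2(\mathbb{R}^n)$ and $S^{n-1}$ with the antipodal
action, so $B_2(\mathbb{R}^n)\simeq \mathbb{RP}^{n-1}$. The rational homology of
$\mathbb{RP}^{n-1}$ consists of a copy of $\mathbb{Q}$ in degree $0$, together with a
copy of $\mathbb{Q}$ in degree $n-1$ exactly when $n-1$ is odd, in agreement with
Proposition \ref{prop:unordered rational}.
\end{example}
```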
\section{Covering theorems}\label{section:covering theorems} Having exploited the Fadell--Neuwirth theorem at length, our next long-term goal is to circle back and prove it; our version asserts that the diagram \[\xymatrix{ \mathrm{Conf}_{\ell-k}(M\setminus\{x_1,\ldots, x_k\})\ar[r]\ar[d]&\mathrm{Conf}_\ell(M)\ar[d]\\ (x_1,\ldots,x_k)\ar[r]&\mathrm{Conf}_k(M) }\] is homotopy Cartesian. This type of statement is about the local homotopy type of the configuration space, while, through the topological basis that we exhibited in Proposition \ref{prop:conf basis}, we have fine control over the local topology. Of course, in some sense, everything about a space $X$ is determined by a basis, since $X$ can be reconstructed from the basis by gluing. On the other hand, as the following classic example illustrates, gluing is not a homotopically well-behaved operation. \begin{example} The two diagrams \[\xymatrix{ S^{n-1}\times(0,1)\ar[d]\ar[r]&\mathring{D}^n\ar[d]&&S^{n-1}\ar[r]\ar[d]&\mathrm{pt}\ar[d]\\ \mathring{D}^n\ar[r]&S^n&&\mathrm{pt}\ar[r]&\mathrm{pt} }\] are pushout squares, and there is a map from the left square to the right that is a homotopy equivalence on all but the bottom right corner. \end{example} This example illustrates that ordinary gluing, which is a colimit construction, is insufficient for the kind of homotopy theoretic questions we wish to pursue. The replacement will be the \emph{homotopy colimit}, which we review at length in Appendix \ref{section:homotopy colimits}. The kind of result that we aim to prove is the following. \begin{theorem}\label{thm:basis recovery} Let $\mathcal{B}$ be a topological basis for $X$, regarded as a poset under inclusion and thereby as a category. The natural map \[\hocolim_{U\in \mathcal{B}}U\to X\] is a weak equivalence. \end{theorem} This result will be an immediate consequence of Theorem \ref{thm:complete cover recovery}.
First, we explore the intermediary concepts of \v{C}ech covers and hypercovers, which are interesting in their own right. These tools all permit the reconstruction of the weak homotopy type of $X$ from various forms of covering data. A general reference for this material is \cite{DuggerIsaksen:THAR}. \subsection{\v{C}ech covers} \begin{recollection} We write $\Delta_+$ for the category of finite ordered sets and $\Delta\subseteq \Delta_+$ for the full subcategory of sets that are nonempty. Thus, up to isomorphism, the objects of $\Delta_+$ are the sets $[n]=\{0,\ldots, n\}$ for $n\geq-1$. A \emph{simplicial space} is a functor $\op X:\Delta^{op}\to \mathcal{T}\mathrm{op}$ (resp. \emph{augmented simplicial space}, $\Delta_+^{op}$). Using the traditional notation $\op X_n:=\op X([n])$, we write $d_i:\op X_n\to \op X_{n-1}$ for the map induced by the inclusion $[n-1]\to [n]$ that misses the element $i$ (the $i$th \emph{face map}), and $s_i:\op X_{n}\to \op X_{n+1}$ for the map induced by the surjection $[n+1]\to [n]$ that sends $i$ and $i+1$ to $i$ (the $i$th \emph{degeneracy}). If $\op X$ is augmented, we refer to the induced map $\op X_0\to \op X_{-1}$ as the \emph{augmentation}. The \emph{geometric realization} of the simplicial space $\op X$ is the quotient \[|\op X|:=\faktor{\coprod_{n\geq0}\op X_n\times\Delta^n}{\sim}=\mathrm{coeq}\left(\coprod_{\Delta([\ell],[m])}\op X_m\times\Delta^\ell\rightrightarrows \coprod_{n\geq0} \op X_n\times\Delta^n\right),\] where the two arrows are given by the covariant functoriality of $\Delta^{(-)}$ and the contravariant functoriality of $\op X$, respectively. If $\op X$ is augmented, there results a canonical map $|\op X|\to \op X_{-1}$. \end{recollection} \begin{definition} Let $\U=\{U_\alpha\}_{\alpha\in A}$ be an open cover of $X$. The \emph{\v{C}ech nerve} of $\U$ is the augmented simplicial space $\check{C}(\U):\Delta^{op}_+\to \mathcal{T}\mathrm{op}$ specified as follows.
\begin{enumerate} \item In nonnegative simplicial degree, we have \[\check{C}(\U)_n=\coprod_{A^{n+1}}U_{\alpha_0}\cap\cdots\cap U_{\alpha_n},\] and $\check{C}(\U)_{-1}=X$. \item The face map $d_i$ is induced by the inclusions \[U_{\alpha_0}\cap\cdots\cap U_{\alpha_n}\subseteq U_{\alpha_0}\cap\cdots \widehat{U}_{\alpha_i}\cdots \cap U_{\alpha_n}.\] \item The degeneracy $s_i$ is induced by the identifications \[U_{\alpha_0}\cap\cdots\cap U_{\alpha_n}=U_{\alpha_0}\cap \cdots\cap U_{\alpha_i}\cap U_{\alpha_i}\cap \cdots\cap U_{\alpha_n}.\] \item The augmentation is induced by the inclusions $U_\alpha\subseteq X$. \end{enumerate} \end{definition} Applying $H_0$ to $\check{C}(\U)$ in each simplicial degree and taking the alternating sum of the face maps as a differential, we obtain the classical \emph{\v{C}ech complex} \[\cdots \to \bigoplus_{A^{n+1}}H_0(U_{\alpha_0}\cap \cdots \cap U_{\alpha_n})\to \cdots\to \bigoplus_AH_0(U_\alpha),\] which computes the homology of $X$ if $\U$ is a sufficiently good cover. In fact, this result can be strengthened to a recovery of the full weak homotopy type. \begin{theorem}[Segal, Dugger-Isaksen]\label{thm:cech recovery} For any topological space $X$ and any open cover $\U$ of $X$, the augmentation \[|\check{C}(\U)|\to X\] is a weak homotopy equivalence. \end{theorem} The proof will make use of a wonderful local-to-global principle. In establishing this principle, we will employ a little machinery, but see \cite[16.24]{Gray:HT} for a more elementary argument premised on subdivision. \begin{proposition}\label{prop:mayer-vietoris} Let $f:Y\to Z$ be a continuous map and $\U=\{U_\alpha\}_{\alpha\in A}$ an open cover of $Z$. If the induced map $f^{-1}(U_{\alpha_0}\cap\cdots\cap U_{\alpha_n})\to U_{\alpha_0}\cap\cdots\cap U_{\alpha_n}$ is a weak homotopy equivalence for every $(\alpha_0,\ldots, \alpha_n)\in A^{n+1}$, then $f$ is also a weak homotopy equivalence. \end{proposition} \begin{proof} Suppose first that $\U=\{U,V\}$.
Using the appropriate versions of the Van Kampen \cite{BrownRazekSalleh:VKTUNS} and Mayer-Vietoris \cite[5.13]{DavisKirk:LNAT} theorems, we conclude that $f$ induces isomorphisms on fundamental groupoids and on homology with arbitrary local coefficients, so the claim follows in this case from the Whitehead theorem \cite[6.71]{DavisKirk:LNAT}. In the general case, we choose an ordinal $\lambda$ and a bijection $\varphi:\lambda\cong \U$ and proceed by transfinite induction, assuming that the claim is known for all covers indexed by an ordinal $\mu<\lambda$. Suppose first that $\lambda=\kappa+1$ is a successor ordinal. Setting $U=U_\kappa$ and $V=\bigcup_{\mu<\kappa}U_\mu$, it will suffice by the previous argument to verify that the restrictions of $f$ to $U$, to $V$, and to $U\cap V$ are all weak homotopy equivalences. The case of $U$ is a special case of our hypothesis, and the cases of $V$ and $U\cap V$ follow from the inductive assumption applied to the covers $\{U_\mu\}_{\mu<\kappa}$ and $\{U_\mu\cap U_\kappa\}_{\mu<\kappa}$, respectively, each of which is indexed by $\kappa$ and satisfies the hypothesis of the proposition. Suppose now that $\lambda$ is a limit ordinal. By compactness, any map $(D^{n+1}, S^n)\to (Z,Y)$ factors as in the solid commuting diagram \[\xymatrix{ S^n\ar[d]\ar[r]&\ar[d]\displaystyle f^{-1}\left(\bigcup_{\mu\leq\mu_0}U_\mu\right)\ar[r]&Y\ar[d]^-f\\ D^{n+1}\ar[r]\ar@{-->}[ur]&\displaystyle\bigcup_{\mu\leq \mu_0}U_\mu\ar[r]& Z }\] for some $\mu_0<\lambda$. The inductive hypothesis applied to the cover $\{U_\mu\}_{\mu\leq \mu_0}$ implies that the middle map is a weak homotopy equivalence, so the dashed lift exists making the upper triangle commute and the lower triangle commute up to homotopy. It follows that $\pi_n(f)=0$ for every $n\geq0$, and consideration of the long exact sequence in homotopy associated to $f$ completes the proof.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:cech recovery}] For $(\alpha_0,\ldots, \alpha_n)\in A^{n+1}$, set $\U_{\alpha_0,\ldots, \alpha_n}=\{U_\alpha\cap U_{\alpha_0}\cap\cdots\cap U_{\alpha_n}\}_{\alpha\in A}$, and note that the diagram \[\xymatrix{ |\check{C}(\U_{\alpha_0,\ldots, \alpha_n})|\ar[r]\ar[d]&|\check{C}(\U)|\ar[d]\\ U_{\alpha_0}\cap\cdots\cap U_{\alpha_n}\ar[r]& X }\] is a pullback. Therefore, by Proposition \ref{prop:mayer-vietoris}, it suffices to prove the claim under the assumption that $X$ is a member of $\U$. In this case, $\check{C}(\U)$ has an extra degeneracy given by forming the intersection with $X$, and the augmentation of an augmented simplicial space admitting an extra degeneracy becomes a homotopy equivalence upon geometric realization. \end{proof} \subsection{Hypercovers} In light of Theorem \ref{thm:cech recovery}, it is natural to wonder what makes \v{C}ech nerves of covers special. The first step toward answering this question is to notice that \v{C}ech nerves may be completely characterized. \begin{definition} Let $f:Y\to Z$ be a continuous map. We say that $f$ is a \emph{covering map} if, up to homeomorphism, it is of the form $\coprod_{\alpha\in A} U_\alpha\to Z$ for some open cover $\U=\{U_\alpha\}_{\alpha\in A}$ of $Z$. \end{definition} \begin{definition} We say that an augmented simplicial space $\op X$ is a \emph{\v{C}ech cover} if \begin{enumerate} \item the augmentation $\op X_0\to \op X_{-1}$ is a covering map, and \item for every $n>0$, the natural map \[\op X_n\to M_n(\op X):=\mathrm{eq}\left(\prod_{0\leq i\leq n}\op X_{n-1}\rightrightarrows\prod_{0\leq i<j\leq n}\op X_{n-2}\right)\] induced by the face maps is a homeomorphism. \end{enumerate} \end{definition} The space $M_n(\op X)$ is called the $n$th \emph{matching space} of $\op X$. \begin{example} In low degrees, we have $M_0(\op X)=\op X_{-1}$ and $M_1(\op X)=\op X_0\times_{\op X_{-1}}\op X_0$.
\end{example} \begin{exercise} The \v{C}ech nerve of an open cover is a \v{C}ech cover, and, conversely, a \v{C}ech cover $\op X$ is the \v{C}ech nerve of some open cover $\U$---for example, we may take $\U$ to be the collection of connected components of $\op X_0$---but different open covers may have isomorphic \v{C}ech nerves. \end{exercise} Thus, a \v{C}ech cover is exactly what we obtain by deforming the constant simplicial object at $X$ by replacing the 0-simplices by a cover and filling in the rest of the object in the canonical way with matching objects. This observation identifies \v{C}ech covers as the first stage in an obvious hierarchy of covering notions. \begin{definition} We say that an augmented simplicial space $\op X:\Delta^{op}_+\to \mathcal{T}\mathrm{op}$ is a \emph{hypercover} if the canonical map $\op X_n\to M_n(\op X)$ is a covering map for all $n\geq0$. We say that the hypercover $\op X$ is \emph{bounded} if there is some $N$ such that this map is an isomorphism for $n>N$, the smallest such $N$ being the \emph{height} of $\op X$. If $\op X_{-1}=X$, then we say that $\op X$ is a hypercover of $X$. \end{definition} A hypercover of height 0 is precisely a \v{C}ech cover, while a hypercover of height $-1$ is isomorphic to the constant simplicial space with value $X$. \begin{theorem}[Dugger-Isaksen]\label{thm:hypercover recovery} For any topological space $X$, and any hypercover $\op X$ of $X$, the augmentation map \[|\op X|\to X\] is a weak homotopy equivalence. \end{theorem} The proof will make use of some formal machinery from the theory of simplicial spaces, which identifies the matching object $M_n(\op X)$ itself as the degree $n$ entry of a simplicial space. First, a few categorical reminders. \begin{recollection} Let $\op C$ be a category and $\D$ a category with small limits and colimits, and let $\iota:\op C_0\to \op C$ be a functor. 
Then the restriction functor $\iota^*:\op D^{\op C}\to \op D^{\op C_0}$ admits both a left and a right adjoint, the so-called \emph{Kan extension} functors, as depicted in the following diagram \[\doubleadjunct{\op D^{\op C_0}}{\op D^{\op C}}{\op D^{\op C_0}}{\iota_!}{\iota^*}{\iota^*}{\iota_*}.\] We may also use the notation $\mathrm{Lan}_\iota=\iota_!$ and $\mathrm{Ran}_\iota=\iota_*$ for the left and right Kan extensions, respectively. The value of the left Kan extension is given by the formula \[\iota_!F(C)\cong \colim\left((\op C_0\downarrow C)\to \op C_0\xrightarrow{F} \op D\right),\] where $(\op C_0\downarrow C)$ denotes the \emph{overcategory} whose objects are morphisms $f:C'\to C$ with $C'$ an object of $\op C_0$, and whose morphisms are commuting triangles \[\xymatrix{ C'\ar[rr]\ar[dr]&& C''\ar[dl]\\ &C. }\] Dually, the right Kan extension $\iota_*F$ is computed as the corresponding limit over the undercategory $(C\downarrow\op C_0)$. \end{recollection} Taking $\iota:(\Delta^{op}_+)_{\leq n}\to \Delta^{op}_+$ to be the inclusion of the full subcategory of finite ordered sets of cardinality at most $n+1$, we obtain adjunctions \[\doubleadjunct{\mathcal{T}\mathrm{op}^{(\Delta^{op}_+)_{\leq n}}}{\mathcal{T}\mathrm{op}^{\Delta^{op}_+}}{\mathcal{T}\mathrm{op}^{(\Delta^{op}_+)_{\leq n}}}{\iota_!}{\iota^*}{\iota^*}{\iota_*}.\] We write $\tau_{\leq n}(\op X)=\iota^*\op X$, $\mathrm{sk}_n(\op X)=\iota_!\iota^*\op X$, and $\mathrm{cosk}_n(\op X)=\iota_*\iota^*\op X$ and refer to these as the $n$-\emph{truncation}, $n$-\emph{skeleton}, and $n$-\emph{coskeleton}, respectively, of $\op X$. An exercise in the manipulation of limits shows that matching objects and coskeleta are related as \[M_n(\op X)\cong \mathrm{cosk}_{n-1}(\op X)_n,\] and the map $\op X_n\to M_n(\op X)$ discussed above is a component of the unit transformation of the appropriate adjunction.
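As a basic example, which the reader may verify from the limit formula for right Kan extensions, the $0$-coskeleton of an augmented simplicial space is the \v{C}ech nerve of its augmentation: \[\mathrm{cosk}_0(\op X)_n\cong \underbrace{\op X_0\times_{\op X_{-1}}\cdots\times_{\op X_{-1}}\op X_0}_{n+1\ \text{factors}},\] with face maps given by omitting factors and degeneracies by repeating them. In particular, the identification $M_1(\op X)=\op X_0\times_{\op X_{-1}}\op X_0$ recorded above is the degree $1$ instance of this formula.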
Note that $\mathrm{cosk}_{n-1}(\op X)_k\cong \op X_k$ for $k<n$, so the matching object is in some sense the primary measure of the difference between $\op X$ and its $(n-1)$-coskeleton. We will need a few facts about the behavior of coskeleta of hypercovers. First, we note that, by elementary properties of Kan extensions, we have \[ \mathrm{cosk}_m(\mathrm{cosk}_n(\op X))\cong\begin{cases} \mathrm{cosk}_n(\op X)&\quad m\geq n\\ \mathrm{cosk}_m(\op X)&\quad m\leq n \end{cases}\] for any augmented simplicial space $\op X$. Second, we have the following pullback diagram for any $n$ and $m$: \[\xymatrix{ \mathrm{cosk}_{m}(\op X)_n\ar[d]\ar[r]&\displaystyle\prod_{[m]\subseteq [n]} \op X_m\ar[d]\\ \mathrm{cosk}_{m-1}(\op X)_n\ar[r]&\displaystyle\prod_{[m]\subseteq [n]} M_m(\op X), }\] where the products are indexed by the set of injective order-preserving maps $[m]\to[n]$. This pullback square, which is dual to the inductive formation of skeleta by cell attachments, may be derived as an exercise in the manipulation of limits. \begin{remark}\label{remark:mnemonic} A mnemonic for this pullback diagram, which is rigorous at the level of simplicial sets and can be made rigorous for simplicial spaces with the correct interpretation of the symbols, is the following. By adjunction, we should think that \[\mathrm{cosk}_m(\op X)_n\cong\mathrm{Hom}(\Delta^n, \mathrm{cosk}_m(\op X))\cong \mathrm{Hom}(\tau_{\leq m}(\Delta^n), \tau_{\leq m}(\op X))\cong \mathrm{Hom}(\mathrm{sk}_m(\Delta^n), \op X),\] and there is a pushout diagram \[\xymatrix{ \displaystyle\coprod_{[m]\subseteq [n]}\partial \Delta^m\ar[r]\ar[d]&\mathrm{sk}_{m-1}(\Delta^n)\ar[d]\\ \displaystyle\coprod_{[m]\subseteq [n]}\Delta^m\ar[r]&\mathrm{sk}_m(\Delta^n). }\] \end{remark} \begin{lemma}\label{lem:coskeleta} Let $\op X$ be a hypercover. \begin{enumerate} \item The map $\op X\to \mathrm{cosk}_N(\op X)$ is an isomorphism if and only if $\op X$ is of height at most $N$.
\item For every $N\geq0$, $\mathrm{cosk}_{N-1}(\op X)$ is a hypercover, which is of height at most $N-1$ if $\op X$ is of height at most $N$. \item If $\op X$ is of height at most $N$, then the map $\op X\to \mathrm{cosk}_{N-1}(\op X)$ is a degreewise covering map. \end{enumerate} \end{lemma} \begin{proof} For the first claim, assume first that $\op X$ is of height at most $N$. Since $\op X_n\cong\mathrm{cosk}_N(\op X)_n$ whenever $n\leq N$, it suffices to demonstrate the isomorphism for $n>N$. Since $\op X$ is of height at most $N$, the righthand map in the pullback diagram above is an isomorphism whenever $m>N$, so the lefthand map is an isomorphism in these cases as well. Thus, \[\op X_n\cong\mathrm{cosk}_n(\op X)_n\cong \mathrm{cosk}_{n-1}(\op X)_n\cong\cdots\cong \mathrm{cosk}_N(\op X)_n,\] as desired. On the other hand, suppose that $\op X\cong\mathrm{cosk}_N(\op X)$. Then for $m\geq N$, we have \[\mathrm{cosk}_m(\op X)\cong\mathrm{cosk}_m(\mathrm{cosk}_N(\op X))\cong \mathrm{cosk}_N(\op X)\cong \op X,\] which implies the claim after setting $m=n-1$ and evaluating at $n$. For the second claim, it suffices by point (1) and the isomorphism $\mathrm{cosk}_{N-1}(\mathrm{cosk}_{N-1}(\op X))\cong\mathrm{cosk}_{N-1}(\op X)$ to show that $\mathrm{cosk}_{N-1}(\op X)$ is a hypercover. For $n\geq N$, we have \[M_n(\mathrm{cosk}_{N-1}(\op X))=\mathrm{cosk}_{n-1}(\mathrm{cosk}_{N-1}(\op X))_n\cong\mathrm{cosk}_{N-1}(\op X)_n,\] as desired, while, for $n<N$, we have the commuting diagram \[\xymatrix{ \mathrm{cosk}_{N-1}(\op X)_n\ar[r]&M_n(\mathrm{cosk}_{N-1}(\op X))=\mathrm{cosk}_{n-1}(\mathrm{cosk}_{N-1}(\op X))_n\\ \op X_n\ar[r]\ar@{=}[u]_-\wr&M_n(\op X)=\mathrm{cosk}_{n-1}(\op X)_n.\ar@{=}[u]^-\wr }\] Since $\op X$ is a hypercover, the bottom map, and hence the top map, is a covering map, as desired. The third claim is immediate in simplicial degrees $n<N$, where the map in question is an isomorphism. For $n\geq N$, we appeal to the pullback square above with $m=N$.
Since $\op X$ is a hypercover, each of the maps $\op X_m\to M_m(\op X)$ is a covering map. Since covering maps are preserved under finite products and pullback, it follows that the lefthand map is a covering map, as desired. \end{proof} In the proof of Theorem \ref{thm:hypercover recovery}, we will thrice use that a degreewise weak homotopy equivalence between simplicial spaces induces a weak homotopy equivalence after geometric realization. This implication does not hold in general, but it does hold for a certain class of \emph{split} simplicial spaces, as shown in Appendix \ref{appendix:split simplicial spaces}, which includes the examples we need. \begin{proof}[Proof of Theorem \ref{thm:hypercover recovery}] We proceed by induction on the height $N$ of $\op X$, the base case $N=0$ being the case of a \v{C}ech cover. By Lemma \ref{lem:coskeleta}, the natural map $\op X\to \mathrm{cosk}_{N-1}(\op X)=:\op Y$ is a covering map in each degree, so we may extend this map to the horizontally augmented bisimplicial space \[\op W:=\left(\cdots\rightrightarrows\op X\times_{\op Y}\op X\rightrightarrows\op X\to \op Y\right)\] in which the $n$th row is the \v{C}ech nerve of the covering map $\op X_n\to \op Y_n$. By Theorem \ref{thm:cech recovery}, the map \[|\op W|_h\xrightarrow{\sim}\op Y\] is a degreewise weak homotopy equivalence, where $|-|_h$ denotes the geometric realization in the horizontal direction, and we conclude that \[|d^*\op W|\cong \big||\op W|_h\big|\xrightarrow{\sim}|\op Y|\xrightarrow{\sim} X,\] where $d:\Delta^{op}_+\to \Delta^{op}_+\times\Delta^{op}_+$ is the diagonal functor. Here the isomorphism follows from the general fact that the diagonal coincides with either iterated geometric realization \cite[p. 
86]{Quillen:HAKTI}, the rightmost weak equivalence follows from the inductive hypothesis and the fact that $\op Y$ is a hypercover of height at most $N-1$, and the middle weak equivalence follows from Proposition \ref{prop:split criterion} in light of the fact that both $|\op W|_h$ and $\op Y$ are split by Corollaries \ref{cor:hypercovers are split} and \ref{cor:horizontal cech cover split}. Thus, in order to conclude the result for bounded $\op X$, it suffices to show that the natural map of simplicial spaces \[i:\op X\to d^*\op W=\left(\cdots\rightrightarrows \op X_1\times_{\op Y_1} \op X_1\rightrightarrows \op X_0\right),\] given by the diagonal in each degree, admits a retraction over the constant simplicial space with value $X$. Indeed, assuming this fact, it follows that the map $|\op X|\to X$ is a retract of a weak homotopy equivalence, and the claim follows. To define the putative retraction $r:d^*\op W\to \op X\cong\mathrm{cosk}_{N}(\op X),$ it suffices by adjunction to exhibit a map $\bar r:\tau_{\leq N}(d^*\op W)\to \tau_{\leq N}(\op X)$, for which we take the isomorphism \[\bar r_n:\op X_n\times_{\op Y_n}\cdots\times_{\op Y_n}\op X_n=\op X_n\times_{\op X_n}\cdots\times_{\op X_n}\op X_n\cong\op X_n\] in degrees $n<N$. In degree $N$, we take the map \[\bar r_N:\op X_N\times_{\op Y_N}\cdots \times_{\op Y_N}\op X_N\to \op X_N\] to be any of the projections. One checks that, with these choices, $\bar r$ is a simplicial map, and that $\tau_{\leq N}(r\circ i)=\id_{\tau_{\leq N}(\op X)}$, implying the claim. To conclude the claim for $\op X$ not necessarily bounded, it will suffice to show that the natural map \[\pi_n(|\op X|)\to \pi_n(|\mathrm{cosk}_{n+1}(\op X)|)\] is an isomorphism for every $n\geq0$, since $\mathrm{cosk}_{n+1}(\op X)$ is a bounded hypercover, whence $\pi_n(|\mathrm{cosk}_{n+1}(\op X)|)\cong\pi_n(X)$ by the argument above.
For this claim, we note that \[|\op X| \xleftarrow{\sim}\big||\mathrm{Sing}(\op X)|_h\big|\cong |d^*\mathrm{Sing}(\op X)|,\] and similarly for $\mathrm{cosk}_{n+1}(\op X)$---these are the second and third cases in which we use that a degreewise weak equivalence induces a weak equivalence on realizations, for which we again invoke Proposition \ref{prop:split criterion} in light of Corollary \ref{cor:hypercovers are split} and Lemma \ref{lem:realization split}. From this equivalence, we conclude that \begin{align*} \pi_n(|\op X|)&\cong \pi_n(|d^*\mathrm{Sing}(\op X)|)\\ &\cong\pi_n(|\mathrm{sk}_{n+1}(d^*\mathrm{Sing}(\op X))|)\\ &\cong \pi_n(|\mathrm{sk}_{n+1}(d^*\mathrm{Sing}(\mathrm{cosk}_{n+1}(\op X)))|)\\ &\cong \pi_n(|d^*\mathrm{Sing}(\mathrm{cosk}_{n+1}(\op X))|)\\ &\cong \pi_n(|\mathrm{cosk}_{n+1}(\op X)|), \end{align*} where the second and fourth isomorphisms follow as usual from the fact that $\pi_n(S^{m})=0$ for $n<m$, and the third follows from the fact that $\op X\to \mathrm{cosk}_{n+1}(\op X)$ is an isomorphism through simplicial degree $n+1$. \end{proof} \subsection{Complete covers} We arrive now at the desired result. \begin{definition} We say that an open cover $\U$ of $X$ is \emph{complete} if $\U$ contains an open cover of $\bigcap_{U\in \U_0}U$ for every finite subset $\U_0\subseteq \U$. \end{definition} We regard $\U$ as a partially ordered set and thereby as a category. \begin{theorem}[Dugger-Isaksen]\label{thm:complete cover recovery} If $\U$ is a complete cover of $X$, then the natural map \[\hocolim_{U\in \U}U\to X\] is a weak equivalence. \end{theorem} \begin{remark} We adopt the standard abuse of referring to the homotopy colimit as a space rather than an object of the homotopy category. \end{remark} Any open cover containing a basis for the topology of $X$ is complete, so Theorem \ref{thm:basis recovery} is a special case of this result.
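The following toy example, included purely for orientation, illustrates the definition. \begin{example} The cover $\U=\{(-\infty,1),(0,\infty)\}$ of $\mathbb{R}$ is not complete, since $\U$ contains no cover of the intersection $(0,1)$, but the enlarged cover $\U'=\U\cup\{(0,1)\}$ is complete, every intersection of finitely many elements of $\U'$ being either all of $\mathbb{R}$ or an element of $\U'$. In this case, the homotopy colimit in question is modeled by the double mapping cylinder of the diagram $(-\infty,1)\leftarrow(0,1)\to(0,\infty)$, which is contractible, as Theorem \ref{thm:complete cover recovery} predicts. \end{example}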
The strategy of the proof is to reduce the result to Theorem \ref{thm:hypercover recovery} by relating the bar construction modeling the homotopy colimit in question to a certain hypercover. \begin{construction} We write $P_n$ for the set of nonempty subsets of $[n]$, regarded as partially ordered under inclusion and thereby as a category. Given a functor $F:I\to \mathcal{T}\mathrm{op}$, we write \[\mathrm{Bar}_n^\#(F)=\coprod_{f:P_n^{op}\to I}F(f([n]))\] and extend this to a simplicial space $\mathrm{Bar}_\bullet^\#(F)$ by letting the structure map associated to $h:[m]\to [n]$ be given by the restriction along the induced map $P_m^{op}\to P_n^{op}$. There is a canonical map $\mathrm{Bar}_\bullet(F)\to\mathrm{Bar}_\bullet^\#(F)$ of simplicial spaces induced by restriction along (the opposites of) the functors $P_n\to [n]$ sending a subset to its maximal element. Both simplicial spaces are naturally augmented over $\colim_IF$, and this map is compatible with the augmentations. \end{construction} Granting Theorem \ref{thm:hypercover recovery}, the theorem will be an immediate consequence of the following two results. \begin{proposition}\label{prop:subdivided comparison} For any functor $F:I\to \mathcal{T}\mathrm{op}$, the induced map $|\mathrm{Bar}_\bullet(F)|\to |\mathrm{Bar}_\bullet^\#(F)|$ is a weak homotopy equivalence. \end{proposition} We will take up the proof of this proposition momentarily. \begin{lemma} Let $\U$ be an open cover and $F:\U\to \mathcal{T}\mathrm{op}$ the tautological functor. If $\U$ is complete, then the augmented simplicial space $\mathrm{Bar}_\bullet^\#(F)$ is a hypercover.
\end{lemma} \begin{proof} Writing $\overline P_n\subseteq P_n$ for the subcategory of proper nonempty subsets, we compute that \begin{align*} M_n(\mathrm{Bar}_\bullet^\#(F))&=\mathrm{eq}\left(\prod_{S\subsetneq[n]}\coprod_{f:P_S^{op}\to \U}F(f(S))\rightrightarrows \prod_{S_1\subseteq S_2\subsetneq[n]}\coprod_{f:P_{S_1}^{op}\to \U}F(f(S_1))\right)\\ &\cong \coprod_{f:\overline{P}_n^{op}\to \U}\mathrm{eq}\left(\prod_{S\in \overline P_n}F(f(S))\rightrightarrows\prod_{S_1\subseteq S_2\in \overline P_n}F(f(S_1))\right)\\ &\cong \coprod_{f:\overline{P}_n^{op}\to \U}\bigcap_{S\in \overline P_n}F(f(S)), \end{align*} and the canonical map to the matching object is the map \[\coprod_{f:P_n^{op}\to \U}F(f([n]))\to \coprod_{f:\overline{P}_n^{op}\to \U}\bigcap_{S\in \overline P_n}F(f(S))\] with components given by the inclusions $F(f([n]))\subseteq F(f(S))$ in $\U$ induced by the inclusions $S\subseteq [n]$ in $P_n$. Fixing $f:\overline P_n^{op}\to \U$, the inverse image under this map of the subspace $\bigcap_{S\in \overline P_n}F(f(S))$ is the disjoint union, over all extensions of $f$ to $P_n^{op}$, of the value of the extension on $[n]$, which is to say the disjoint union of those elements of $\U$ contained in the intersection. Since the cover is complete, this collection of opens forms an open cover of the intersection, and the claim follows. \end{proof} In order to prove Proposition \ref{prop:subdivided comparison}, we reinterpret the bar construction in a way that will generalize in parallel to the variant $\mathrm{Bar}_\bullet^\#(F)$. \begin{definition} Let $I$ be a category. The \emph{category of simplices} of $I$ is the category $\Delta^{op}I$ specified as follows. \begin{enumerate} \item An object of $\Delta^{op}I$ is a functor $f:[n]^{op}\to I$. \item A morphism from $f:[n]^{op}\to I$ to $g:[m]^{op}\to I$ is a morphism $h:[m]\to [n]$ in $\Delta$ making the diagram \[\xymatrix{ [n]^{op}\ar[dr]_-{f}&&[m]^{op}\ar[dl]^-{g}\ar[ll]_-{h^{op}}\\ &I }\] commute.
\item Composition is given by composition in $\Delta$. \end{enumerate} The \emph{category of subdivided simplices} of $I$ is the category $\Delta_\#^{op}I$ with objects functors $f:P_n^{op}\to I$ and morphisms given as in $\Delta^{op}I$. \end{definition} \begin{remark} The reader is warned that our terminology is nonstandard. \end{remark} \begin{remark} In keeping track of the variance, it is helpful to think of the morphism from $f$ to $g$ as being given by the pullback functor $(h^{op})^*$. \end{remark} These categories come equipped with a number of functors, which we summarize in the following commuting diagram: \[\xymatrix{ &I\\ \Delta^{op}I\ar[rr]^-\rho\ar[dr]_-\delta\ar[ur]^-\lambda&&\Delta^{op}_\#I\ar[ul]_{\lambda_\#}\ar[dl]^-{\delta_\#}\\ &\Delta^{op} }\] Here the functor $\delta$ records the domain of a functor, and the functor $\lambda$ is given on objects by the formula \[\lambda\left(f:[n]^{op}\to I\right)=f(n).\] Our choice of variance is what permits us to extend $\lambda$ to a functor: since $h(m)\leq n$, there is a canonical map $n\to h(m)$ in $[n]^{op}$, and the functor $f$ provides a map $\lambda(f)=f(n)\to f(h(m))= g(m)=\lambda(g)$. Similarly, $\delta_\#(f:P_n^{op}\to I)=[n]$ and $\lambda_\#(f:P_n^{op}\to I)=f([n])$, and the functor $\rho$ is defined by restriction along the functors $P_n\to [n]$ mentioned above. \begin{lemma}\label{lem:domain finality} For every $n\geq0$, the natural functor $\delta^{-1}([n])\to (\delta\downarrow[n])$ is homotopy final, and similarly for $\delta_\#$. \end{lemma} \begin{proof} We prove the first claim, the argument for the second being essentially identical. The overcategory $(\delta\downarrow[n])$ has objects the pairs of a functor $f:[m]^{op}\to I$ and a morphism $r:[n]\to [m]$ in $\Delta$, and a morphism is a commuting diagram \[\xymatrix{ &[n]^{op}\ar[dl]_-{r^{op}}\ar[dr]^-{(r')^{op}}\\ [m]^{op}\ar[dr]_-{f}&&[m']^{op}\ar[dl]^-{f'}\ar[ll]\\ &I.
}\] Note that the variance is such that this diagram represents a morphism from $(f, r)$ to $(f', r')$. Thus, denoting the natural inclusion by $\iota:\delta^{-1}([n])\to (\delta\downarrow[n])$, we see from the definitions that $((f,r)\downarrow\iota)$ is the category of commuting diagrams of the form \[\xymatrix{ &[n]^{op}\ar[dl]_-{r^{op}}\ar@{=}[dr]\\ [m]^{op}\ar[dr]_-{f}&&[n]^{op}\ar[dl]\ar[ll]\\ &I. }\] This category is isomorphic to the discrete category with one object, which is certainly contractible. \end{proof} \begin{corollary}\label{cor:bar kan extension} There are natural degreewise weak equivalences \[\mathrm{hoLan}_\delta(\lambda^*F)\xrightarrow{\sim}\mathrm{Bar}_\bullet(F)\] and \[\mathrm{hoLan}_{\delta_\#}(\lambda_\#^*F)\xrightarrow{\sim}\mathrm{Bar}^\#_\bullet(F)\] of simplicial spaces. \end{corollary} \begin{proof} We calculate that \begin{align*} \mathrm{hoLan}_\delta(\lambda^*F)_n&\simeq \hocolim\left((\delta\downarrow[n])\to \Delta^{op}I\xrightarrow{\lambda} I\xrightarrow{F}\mathcal{T}\mathrm{op}\right)\\ &\simeq \hocolim\left(\delta^{-1}([n])\to\Delta^{op}I\xrightarrow{\lambda} I\xrightarrow{F}\mathcal{T}\mathrm{op}\right)\\ &\simeq \coprod_{f:[n]^{op}\to I}F(f(n))\\ &\cong\coprod_{i_n\to \cdots \to i_0}F(i_n)\\ &=\mathrm{Bar}_n(F), \end{align*} where Lemma \ref{lem:domain finality} and Proposition \ref{prop:hocolim facts}(1) are used in obtaining the second equivalence. The calculation in the subdivided case is essentially identical. \end{proof} The final ingredient that we will need is the following. \begin{lemma}\label{lem:last value finality} The functors $\lambda$ and $\lambda_\#$ are each homotopy final. \end{lemma} \begin{proof} The claim for $\lambda$ will follow after verifying for each $i\in I$, first, that the category $\lambda^{-1}(i)$ has a final object, and, second, that the canonical functor $\iota:\lambda^{-1}(i)\to (i\downarrow\lambda)$ is homotopy initial.
For the first claim, we note that the functor $[0]^{op}\to I$ with value $i$ is final in $\lambda^{-1}(i)$. For the second claim, we observe that an object of $(i\downarrow\lambda)$ is simply a composable tuple $i\to f(n)\to \cdots\to f(0)$, which determines a canonical functor $\bar f:[n+1]^{op}\to I$ such that $\lambda(\bar f)=i$, together with a universal map $\bar f\to f$. The argument for $\lambda_\#$ is essentially the same, the only difference being that we extend $f:P_n^{op}\to I$ to $\bar f:P_{n+1}^{op}\to I$ by the prescription \[ \bar f(S)=\begin{cases} f(S)&\quad n+1\notin S\\ i&\quad n+1\in S. \end{cases} \] \end{proof} \begin{proof}[Proof of Proposition \ref{prop:subdivided comparison}] The claim will follow after verifying that each of the marked arrows in the commuting diagram \[\xymatrix{ \displaystyle\hocolim_IF\ar@{=}[d]&\displaystyle\hocolim_{\Delta^{op}I}\lambda^*F\ar[l]_-{(1)}\ar[d]\ar[r]^-{(3)}&\displaystyle\hocolim_{\Delta^{op}}\mathrm{hoLan}_\delta(\lambda^*F)\ar[d]\ar[r]^-{(5)}&|\mathrm{Bar}_\bullet(F)|\ar[d]\\ \displaystyle\hocolim_IF&\displaystyle\hocolim_{\Delta^{op}_\#I}\lambda_\#^*F\ar[l]_-{(2)}\ar[r]^-{(4)}&\displaystyle\hocolim_{\Delta^{op}}\mathrm{hoLan}_{\delta_\#}(\lambda_\#^*F)\ar[r]^-{(6)}&|\mathrm{Bar}^\#_\bullet(F)| }\] is a weak equivalence. The first and second follow from Proposition \ref{prop:hocolim facts}(1) and Lemma \ref{lem:last value finality}, the third and fourth are formal, since left Kan extensions compose, and the fifth and sixth follow from Corollary \ref{cor:bar kan extension} and Proposition \ref{prop:hocolim facts}(2). 
\end{proof} \section{Deferred proofs}\label{section:deferred proofs} \subsection{Fadell--Neuwirth fibrations} We are now able to repay the first of our long outstanding debts, namely the proof of Theorem \ref{thm:Fadell--Neuwirth}, which asserts that the diagram \[\xymatrix{ \mathrm{Conf}_{\ell-k}(M\setminus\{x_1,\ldots, x_k\})\ar[d]\ar[r]&\mathrm{Conf}_\ell(M)\ar[d]\\ (x_1,\ldots, x_k)\ar[r]&\mathrm{Conf}_k(M) }\] is homotopy Cartesian. Recall that $\mathrm{Conf}_\ell(M)$ has a topological basis indexed by the partially ordered set \[\op B(M)_\ell^\Sigma=\left\{(U,\sigma):(\mathbb{R}^n)^{\amalg \ell}\cong U\subseteq M,\, \sigma:\{1,\ldots, \ell\}\xrightarrow{\simeq} \pi_0(U)\right\}\] consisting of the open sets $\mathrm{Conf}_\ell^0(U,\sigma)=\{x\in \mathrm{Conf}_\ell(M): x_i\in U_{\sigma(i)}\}.$ Thus, we have a functor \[\mathrm{Conf}_\ell^0:\op B(M)_\ell^\Sigma\to \mathcal{T}\mathrm{op}\] whose homotopy colimit is canonically equivalent to $\mathrm{Conf}_\ell(M)$, and similarly for $\mathrm{Conf}_k(M)$. We also have a functor $\pi:\op B(M)_\ell^\Sigma\to \op B(M)_k^\Sigma$ defined by \[\pi(U,\sigma)=\left(\coprod_{i=1}^kU_{\sigma(i)},\, \sigma|_{\{1,\ldots, k\}}\right)\] and a natural transformation fitting into the commuting diagram \[\xymatrix{ \mathrm{Conf}_\ell^0\ar[d]\ar@{-->}[r]&\pi^*\mathrm{Conf}_k^0\ar[d]\\ \mathrm{Conf}_\ell(M)\ar[r]&\mathrm{Conf}_k(M), }\] where the bottom map is the Fadell--Neuwirth map given by projection onto the first $k$ factors. Now, since each $\mathrm{Conf}_\ell^0(U,\sigma)$ is contractible, we obtain the lefthand set of weak equivalences in the commuting diagram \[\xymatrix{ B(\op B(M)_\ell^\Sigma)\ar[d]_-{B\pi}&\hocolim_{\op B(M)_\ell^\Sigma}\mathrm{Conf}_\ell^0\ar[d]\ar[r]^-\sim\ar[l]_-\sim&\mathrm{Conf}_\ell(M)\ar[d]\\ B(\op B(M)_k^\Sigma)&\hocolim_{\op B(M)_k^\Sigma}\mathrm{Conf}_k^0\ar[r]^-\sim\ar[l]_-\sim&\mathrm{Conf}_k(M).
}\] Thus, understanding the homotopy fiber of the rightmost map is tantamount to understanding the homotopy fiber of the map between classifying spaces induced by the functor $\pi$. \begin{proof}[Proof of Theorem \ref{thm:Fadell--Neuwirth}] We wish to apply Corollary \ref{cor:quillen b} to obtain the homotopy pullback square \[\xymatrix{ B((U,\sigma)\downarrow\pi)\ar[d]\ar[r]&B(\op B(M)_\ell^\Sigma)\ar[d]^-{B\pi}\\ \mathrm{pt}\ar[r]^-{(U,\sigma)}&B(\op B(M)_k^\Sigma). }\] In order to verify that the hypotheses hold in this case, we note that $((U,\sigma)\downarrow\pi)$ is the category of $(W,\tau)\in \op B(M)_\ell^\Sigma$ such that $U_{\sigma(i)}\subseteq W_{\tau(i)}$ for $1\leq i\leq k$. It is easy to see that the inclusion of the subcategory with $U_{\sigma(i)}=W_{\tau(i)}$ is homotopy initial, and this subcategory is isomorphic to $\op B(M\setminus U)_{\ell-k}^\Sigma$. Since homotopy initial functors induce weak equivalences on classifying spaces, we have the weak equivalences in the diagram \[\xymatrix{ B((U,\sigma)\downarrow\pi)&B(\op B(M\setminus U)_{\ell-k}^\Sigma)\ar[l]_-\sim&\hocolim_{\op B(M\setminus U)_{\ell-k}^\Sigma}\mathrm{Conf}_{\ell-k}^0\ar[l]_-\sim\ar[r]^-\sim&\mathrm{Conf}_{\ell-k}(M\setminus U)\\ B((U',\sigma')\downarrow\pi)\ar[u]&B(\op B(M\setminus U')_{\ell-k}^\Sigma)\ar[u]\ar[l]_-\sim&\hocolim_{\op B(M\setminus U')_{\ell-k}^\Sigma}\mathrm{Conf}_{\ell-k}^0\ar[u]\ar[l]_-\sim\ar[r]^-\sim&\mathrm{Conf}_{\ell-k}(M\setminus U')\ar[u]}\] for any $(U,\sigma)\leq (U',\sigma')$ in $\op B(M)_k^\Sigma$. Since $M\setminus U'\subseteq M\setminus U$ is an isotopy equivalence, the rightmost map is a weak equivalence, so all of the vertical arrows are weak equivalences. Therefore, by Corollary \ref{cor:quillen b} and what we have already shown, we have the homotopy pullback \[\xymatrix{ B(\op B(M\setminus U)_{\ell-k}^\Sigma)\ar[d]\ar[r]&B(\op B(M)_\ell^\Sigma)\ar[d]^-{B\pi}\\ \mathrm{pt}\ar[r]^-{(U,\sigma)}&B(\op B(M)_k^\Sigma).
}\] The proof is concluded upon noting that the inclusion \[\mathrm{Conf}_{\ell-k}(M\setminus U)\to \mathrm{Conf}_{\ell-k}(M\setminus \{x_1,\ldots, x_k\})\] is a weak equivalence for any $x_i\in U_{\sigma(i)}$. \end{proof} \subsection{Spectral sequences} In proving the Leray--Hirsch theorem, and in much of what will follow, we will make use of the Serre spectral sequence. We begin with a few reminders on spectral sequences---for a general reference, see \cite{McCleary:UGSS}. \begin{definition} A (cohomological) \emph{spectral sequence} is a collection $\{E_r\}_{r\geq0}$ of bigraded $R$-modules, called the \emph{pages} of the spectral sequence, equipped with \begin{enumerate} \item differentials $d_r:E_r\to E_r$ of bidegree $(r,1-r)$, and \item isomorphisms $H(E_r,d_r)\cong E_{r+1}$ of bigraded $R$-modules. \end{enumerate} \end{definition} A \emph{map of spectral sequences} is a collection of bigraded chain maps $f_r:E_r\to \widetilde E_r$ compatible with these isomorphisms. We say that the spectral sequence $\{E_r\}$ is \emph{multiplicative} if each $E_r$ is equipped with the structure of a bigraded $R$-algebra for which $d_r$ is a bigraded derivation and the isomorphisms of (2) are algebra isomorphisms (note that this language is abusive, since multiplicativity is a structure rather than a property). Recall that $d_r$ is a bigraded derivation if it satisfies the \emph{Leibniz rule} \[d_r(ab)=d_r(a)b+(-1)^{p+q}ad_r(b),\qquad |a|=(p,q).\] \begin{remark} Our statements and definitions are cohomological, since that is the nature of the application we have in mind, but the obvious dual notions are valid and often better behaved. \end{remark} In a typical situation, the $E_2$-page is something identifiable and relatively computable; each module $E_r^{p,q}$ is independent of $r$ for $r$ sufficiently large, and the identification of this common module $E_\infty^{p,q}$ is the goal; and the differentials are mysterious.
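To illustrate the claim of eventual independence, suppose that the spectral sequence is concentrated in the first quadrant, i.e., that $E_2^{p,q}=0$ unless $p\geq0$ and $q\geq0$. Since $d_r$ has bidegree $(r,1-r)$, the differentials interacting with $E_r^{p,q}$ fit into the sequence \[E_r^{p-r,q+r-1}\xrightarrow{\;d_r\;} E_r^{p,q}\xrightarrow{\;d_r\;} E_r^{p+r,q-r+1},\] and the outer terms vanish as soon as $r>\max(p,q+1)$, since then $p-r<0$ and $q-r+1<0$. Thus, for $r$ in this range, we have $E_r^{p,q}\cong E_{r+1}^{p,q}\cong\cdots$, and this common value is the module $E_\infty^{p,q}$ referred to above.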
We will not define $E_\infty$ or discuss convergence here---see \cite[3]{McCleary:UGSS} for details. \begin{example} If $(V,\partial,\delta)$ is a bicomplex satisfying mild boundedness conditions, there is a spectral sequence with $E_1\cong H(V,\partial)$ and \[E_2\cong H(H(V,\partial),\delta)\implies H(V,\partial+\delta).\] See \cite[2.4]{McCleary:UGSS} for a construction of this spectral sequence. An alternative perspective on the spectral sequence of a bicomplex is offered by the \emph{homotopy transfer theorem} \cite{Vallette:AHO}. The starting observation is that the differential $\delta$ can be viewed as an algebraic structure on the chain complex $(V,\partial)$, in the form of an action of the dual numbers $k[\epsilon]/\epsilon^2$ (we work over a field $k$ for simplicity). After choosing representatives for homology and extending this choice to a chain deformation retraction of $(V,\partial)$ onto $(H(V,\partial),0)$, the homotopy transfer theorem endows $H(V,\partial)$ with the structure of a \emph{homotopy} $k[\epsilon]/\epsilon^2$-module, which amounts to the induced differential $\delta$ together with countably many higher operations $\{\delta_n\}$ satisfying certain relations (classically, this structure is known as a \emph{multicomplex}). These higher operations are direct analogues of the Massey products on the cohomology of a space, and they induce the differentials in the spectral sequence for $(V,\partial,\delta)$. \end{example} \begin{example} If $X=\bigcup_{p\geq0} F_pX$ is a filtered space satisfying mild completeness conditions, there is a spectral sequence \[E_1^{p,q}\cong H^{p+q}(F_pX,F_{p-1}X)\implies H^{p+q}(X),\] i.e., there is an isomorphism of the $E_\infty$-page with the associated graded of the induced filtration on $H^*(X)$. 
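As a familiar sanity check (a standard special case, not needed in what follows): for ordinary cohomology and the skeletal filtration $F_pX=X^p$ of a CW complex, the group $H^{p+q}(X^p,X^{p-1})$ vanishes for $q\neq0$ and is the group of cellular $p$-cochains for $q=0$, so the spectral sequence is concentrated in the row $q=0$ and recovers the computation of $H^*(X)$ from the cellular cochain complex.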
The differential $d_1$ is the connecting homomorphism in the long exact sequence for the triple $(F_pX,F_{p-1}X, F_{p-2}X)$, and a filtration preserving map between spaces induces a map between the associated spectral sequences. The same construction goes through with integral cohomology replaced by an arbitrary cohomology theory. See \cite[2.2]{McCleary:UGSS} for details. \end{example} Applying this example to the skeletal filtration of the geometric realization of a simplicial space yields the following result. As a matter of notation, for a simplicial $R$-module $V$, we write $\mathrm{Alt}(V)$ for the associated chain complex of $R$-modules, whose differential is the alternating sum of the face maps of $V$. \begin{corollary}[{\cite[5.1,\,5.3,\,5.4]{Segal:CSSS}}]\label{cor:simplicial spectral sequence} Let $\op X:\Delta^{op}\to \mathcal{T}\mathrm{op}$ be a simplicial space and $\op H$ a cohomology theory. There is a spectral sequence \[E_2^{p,q}\cong H^p(\mathrm{Alt}(\op H^q(\op X)))\implies \op H^{p+q}(|\op X|),\] which is natural for simplicial maps and multiplicative if $\op H$ is. \end{corollary} Using this basic construction and our results on \v{C}ech nerves, we may associate a spectral sequence to a general map. \begin{theorem}[Leray, Segal]\label{thm:spectral sequence of a map} Let $f:X\to Y$ be a map between topological spaces with $Y$ paracompact and $\op H$ a cohomology theory. There is a spectral sequence \[E_2^{p,q}\cong H^p(Y; \op H^q(f^{-1}))\implies \op H^{p+q}(X),\] where $\op H^q(f^{-1})$ is the sheaf associated to the presheaf $U\mapsto\op H^q(f^{-1}(U))$. Moreover, this spectral sequence is natural on the arrow category and multiplicative if $\op H$ is. 
\end{theorem} \begin{proof} For each open cover $\op U$ of $Y$, the collection $f^{-1}\op U=\{f^{-1}(V): V\in \op U\}$ is an open cover of $X$, so we have a canonical weak homotopy equivalence \[|\check{C}(f^{-1}\op U)|\xrightarrow{\sim} X.\] We now apply Corollary \ref{cor:simplicial spectral sequence} to obtain a spectral sequence with \begin{align*}E_2^{p,q}&\cong H^p(\mathrm{Alt}(\op H^q(\check{C}(f^{-1}\op U))))\\ &= H^p\left(\cdots \to\bigoplus_{\op U^2}\op H^q(f^{-1}(V_0\cap V_1))\to \bigoplus_{\op U}\op H^q(f^{-1}(V))\right)\\ &\cong \check{H}^p(\op U;\op H^q(f^{-1}))\implies \op H^{p+q}(X),\end{align*} where $\check{H}$ denotes \v{C}ech cohomology. A refinement of open covers induces a map at the level of \v{C}ech nerves and therefore a map of spectral sequences, and, forming the colimit over the partially ordered set of open covers of $Y$, we obtain at last the spectral sequence \begin{align*}E_2^{p,q}&\cong \colim_{\op U}\check{H}^p(\op U;\op H^q(f^{-1}))\\ &\cong H^p(Y;\op H^q(f^{-1}))\implies \op H^{p+q}(X), \end{align*} where the last isomorphism uses the assumption that $Y$ is paracompact. \end{proof} This spectral sequence specializes to two well-known spectral sequences. \begin{corollary}[Atiyah--Hirzebruch] Let $X$ be a paracompact space and $\op H$ a cohomology theory. There is a spectral sequence \[E_2^{p,q}\cong H^p(X; \op H^q(\mathrm{pt}))\implies \op H^{p+q}(X),\] which is natural and multiplicative if $\op H$ is. \end{corollary} \begin{proof} We apply Theorem \ref{thm:spectral sequence of a map} to the map $\id_X$, in which case the sheaf in question is the constant sheaf with value $\op H^q(\mathrm{pt})$. \end{proof} The spectral sequence of greatest interest to us is the \emph{Serre spectral sequence} associated to a fibration \cite{Serre:HSEF}. \begin{corollary}[Serre]\label{cor:serre ss} Let $R$ be a ring and $\pi:E\to B$ a fibration with fiber $F$, and assume that $B$ is path-connected and paracompact.
There is a spectral sequence \[E_2^{p,q}\cong H^p(B; \underline{H^q(F; R)})\implies H^{p+q}(E;R),\] which is multiplicative and natural for maps of fibrations, where $\underline{H^q(F; R)}$ denotes the local coefficient system induced by the homotopy action of $\pi_1(B)$ on $F$. \end{corollary} \begin{remark} Since an arbitrary map may be approximated by a fibration over a paracompact base, there is a Serre spectral sequence for an arbitrary map with connected target, which involves the homotopy fiber rather than the point-set fiber. \end{remark} \begin{corollary}\label{cor:ss of a cover} Let $\pi:P\to X$ be a connected principal $G$-bundle. There is a spectral sequence \[E_2^{p,q}\cong H^p(G;H^q(P))\implies H^{p+q}(X),\] which is multiplicative and natural for maps of principal $G$-bundles. \end{corollary} \begin{proof} Forming the Borel construction on $P$, we obtain the homotopy pullback square \[\xymatrix{ P\ar[d]\ar[r]&EG\times_G P\ar[d]\\ \mathrm{pt}\ar[r]&BG, }\] and, since $G$ acts freely on $P$, the natural map $EG\times_G P\to X$ is a weak equivalence. The claim follows from Corollary \ref{cor:serre ss} after identifying the group cohomology of $G$ with the twisted cohomology of $BG$. \end{proof} \subsection{The Leray--Hirsch theorem} We turn now to the proof of the Leray--Hirsch theorem. We make use of the following basic observation, which identifies the \emph{edge maps} in the Serre spectral sequence. \begin{lemma}\label{lem:edge maps} Let $\{E_r\}$ be the spectral sequence for the fibration $F\xrightarrow{i} E\xrightarrow{\pi}B$. There is a commuting diagram \[\xymatrix{ H^q(E;R)\ar@{->>}[d]\ar[r]^-{i^*}&H^q(F;R)\ar@{=}[d]^-\wr\\ E_\infty^{0,q}\ar@{^{(}->}[r]&E_2^{0,q} }\] for every $q\geq0$. Moreover, if $F$ is connected, then there is a commuting diagram \[\xymatrix{ H^p(B;R)\ar@{=}[d]_-{\wr}\ar[r]^-{\pi^*}&H^p(E;R)\\ E^{p,0}_2\ar@{->>}[r]&E^{p,0}_\infty\ar@{^{(}->}[u] }\] for every $p\geq0$.
\end{lemma} \begin{proof} Both claims follow by naturality from the commuting diagram \[\xymatrix{ F\ar@{=}[d]\ar@{=}[r]&F\ar[d]^-i\ar[r]&\mathrm{pt}\ar[d]\\ F\ar[d]\ar[r]^-i&E\ar[r]^-\pi\ar[d]^-\pi&B\ar@{=}[d]\\ \mathrm{pt}\ar[r]&B\ar@{=}[r]&B, }\] in which the bottom set of vertical arrows are all fibrations. The assumption that $F$ is connected permits the identification \[E_2^{p,0}\cong H^p(B;\underline{H^0(F;R)})\cong H^p(B;R).\] \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Leray--Hirsch}] The assumption that $B$ is connected permits the use of the Serre spectral sequence. The assumption that $F$ is connected and the surjectivity assumption together imply that the local system $\underline{H^q(F)}$ is trivial; indeed, the former implies that $\pi_1(E)$ surjects onto $\pi_1(B)$, so the action of $\pi_1(B)$ on $H^*(F)$ may be computed after lifting both to $E$. This action is trivial, since the action of $\pi_1(E)$ on $H^*(E)$ is so. The assumptions on the cohomology of $F$ and $B$ now permit us to apply the K\"{u}nneth theorem in cohomology to obtain the isomorphism \[E_2^{p,q}\cong H^p(B)\otimes H^q(F).\] We claim that this spectral sequence collapses at $E_2$. To see why this is so, we first note that surjectivity of $i^*$ and Lemma \ref{lem:edge maps} imply the isomorphism $E_2^{0,q}\cong E_\infty^{0,q}$, so $d_r|_{E_r^{0,q}}=0$ for all $r\geq2$ and $q\geq0$. Assume for induction that we have established the isomorphism $E_2\cong E_r$, the base case being $r=2$. Since a bihomogeneous element in $E_r$ may be written as a sum of elements of the form $a\otimes b$ with $a\in E_r^{p,0}$ and $b\in E_r^{0,q}$, it suffices by the Leibniz rule to show that $d_r(a)=0$ and that $d_r(b)=0$ separately. The latter equality was shown above, and the former holds for degree reasons. Thus, we have an isomorphism of $H^*(B)$-modules between $H^*(B)\otimes H^*(F)$ and the associated graded for the induced filtration on $H^*(E)$.
Since $H^*(F)$ is free Abelian, $i^*$ admits a section $s$, and the map \begin{align*} H^*(B)\otimes H^*(F)&\to H^*(E)\\ a\otimes b&\mapsto \pi^*(a)\smile s(b) \end{align*} is an isomorphism, since it becomes an isomorphism after passing to the associated graded. \end{proof} \section{Mapping space models}\label{section:mapping space models} Our next goal is to explore the perhaps surprising connection between configuration spaces and mapping spaces, following the seminal work of McDuff \cite{McDuff:CSPNP} (see also \cite{Boedigheimer:SSMS}). A motivating idea due to Segal \cite{Segal:CSILS} is that of the \emph{electric field map} \[\coprod_{k\geq0}B_k(\mathbb{R}^n)\to \Omega^nS^n,\] which assigns to a configuration, viewed as a collection of point charges, the corresponding electric field. As a vector field, this electric field is a priori a map from $\mathbb{R}^n$ to itself; however, since it becomes infinite at the location of a point charge, and since it tends to zero at infinity, it naturally extends to a map between the respective one-point compactifications. Alternatively, we can understand this map as a kind of Pontrjagin--Thom construction, sending a configuration of $k$ points to the composite $S^n\to \vee_k S^n\to S^n$ of the Thom collapse map for the normal bundle of the configuration followed by the fold map. The electric field map is not a homotopy equivalence; for example, the induced map on $\pi_0$ is the inclusion $\mathbb{N}\to \mathbb{Z}$. We would like to understand whether this failure can be rectified, as well as whether the same idea may be adapted to more general background manifolds and target spaces. \begin{remark} In a sense, this ``group completion'' discrepancy on connected components is the only obstruction to the map being an equivalence---see \cite{Segal:CSILS}. \end{remark} \subsection{Labeled configuration spaces} We begin by identifying the type of combinatorics at play.
\begin{definition} A \emph{pointed finite set} is a finite set $I$ together with a distinguished element, called the \emph{basepoint} and denoted $*$. We write $I^\circ$ for the set $I\setminus \{*\}$. A map $f:I\to J$ is \emph{inert} if \begin{enumerate} \item $f(*)=*$, and \item $f|_{f^{-1}(J^\circ)}$ is a bijection onto $J^\circ$. \end{enumerate} \end{definition} We write $\mathrm{Inrt}$ for the category of pointed finite sets and inert maps. Note that this category is isomorphic to the opposite of the category of finite sets and injective maps. \begin{construction} A manifold $M$ defines a functor from $\mathrm{Inrt}$ to $\mathcal{T}\mathrm{op}$ by sending $I$ to $\mathrm{Conf}_{I^\circ}(M)$ and the inert map $f:I\to J$ to the projection \[\xymatrix{ \mathrm{Conf}_{I^\circ}(M)\ar[d]\ar@{-->}[r]&\mathrm{Conf}_{f^{-1}(J^\circ)}(M)\ar[r]^-{\simeq}\ar[d]&\mathrm{Conf}_{J^\circ}(M)\ar[d]\\ M^{I^\circ}\ar[r]^-{\pi_f}&M^{f^{-1}(J^\circ)}\ar[r]^-\simeq&M^{J^\circ} }\] given by the formula \[\pi_f\left((m_i)_{i\in I^\circ}\right)=(m_{f^{-1}(j)})_{j\in J^\circ}.\] \end{construction} \begin{construction} A based space $(X,x_0)$ determines a functor from $\mathrm{Inrt}^{op}$ to $\mathcal{T}\mathrm{op}$ by sending $I$ to $\mathrm{Map}_*(I, X)\cong X^{I^\circ}$ and the inert map $f$ to the inclusion \[X^{J^\circ}\cong X^{f^{-1}(J^\circ)}\times \{x_0\}^{I^\circ\setminus f^{-1}(J^\circ)}\subseteq X^{f^{-1}(J^\circ)}\times X^{I^\circ\setminus f^{-1}(J^\circ)}\cong X^{I^\circ}.\] \end{construction} \begin{definition} The \emph{configuration space of $M$ with labels in $X$} is the coequalizer \[\mathrm{Conf}_X(M)=\mathrm{coeq}\left( \coprod_{J\to K}\mathrm{Conf}_{J^\circ}(M)\times X^{K^\circ}\rightrightarrows\coprod_{I}\mathrm{Conf}_{I^\circ}(M)\times X^{I^\circ} \right),\] where the coproducts are indexed on the morphisms and objects of $\mathrm{Inrt}$, respectively.
\end{definition} \begin{remark} Note that this construction is sensible without the assumption that $M$ be a manifold. \end{remark} Thus, a point in $\mathrm{Conf}_X(M)$ is a finite formal sum $\sum m_ax_a$ with $m_a\in M$ distinct and $x_a\in X$, and the following relation holds \[\textstyle\sum m_ax_a\sim \sum m_ax_a+mx_0.\] We refer to the point $x_a$ as the \emph{label} of $m_a$. The topology is such that a point vanishes if its label moves to the basepoint of $X$; thus, if $x_k\to x_0$ in $X$, for example, then $mx_k\to \varnothing$ for any $m\in M$. \begin{example} For any $M$, $\mathrm{Conf}_\mathrm{pt}(M)=\{\varnothing\}$. \end{example} \begin{example} For any $M$, there is a homeomorphism $\mathrm{Conf}_{S^0}(M)\cong \coprod_{k\geq0}B_k(M).$ \end{example} We will also have use for a relative version of this construction. If $M_0\subseteq M$ is a closed subspace, we write $\mathrm{Conf}_X(M,M_0)$ for the quotient of $\mathrm{Conf}_X(M)$ by the further relation \[\textstyle\sum m_ax_a\sim \sum m_ax_a+mx,\qquad m\in M_0.\] We refer to this space as the labeled configuration space with \emph{annihilation in $M_0$}. In this space, a point also vanishes if it collides with the annihilation subspace $M_0$; thus, if $m_k\to m\in M_0$, for example, then $m_kx\to \varnothing$ for any $x\in X$. \begin{example} For any $M$ and $X$, $\mathrm{Conf}_X(M,M)=\{\varnothing\}$. \end{example} \begin{example} If either $X$ or $(M,M_0)$ is path connected, then so is $\mathrm{Conf}_X(M,M_0)$ (recall that a pair is path connected if the map on path components is surjective).
\end{example} \begin{definition} The \emph{support} of the configuration $\sum m_ax_a$ is the (finite) subset \[\textstyle\mathrm{Supp}\left(\sum m_ax_a\right)=\left\{m_a\mid m_a\notin M_0 \text{ and } x_a\neq x_0\right\}\subseteq M.\] \end{definition} The space $\mathrm{Conf}_X(M,M_0)$ is filtered by the closed subspaces \[\mathrm{Conf}_X(M,M_0)_{\leq k}:=\left\{\textstyle \sum m_ax_a\mid |\mathrm{Supp}(\sum m_ax_a)|\leq k\right\}.\] Moreover, both the successive quotients and the successive complements of this filtration are comprehensible, as \[\faktor{\mathrm{Conf}_X(M,M_0)_{\leq k}}{\mathrm{Conf}_X(M,M_0)_{\leq k-1}}\cong \mathrm{Conf}_k(M,M_0)\wedge_{\Sigma_k}X^{\wedge k},\] where $\mathrm{Conf}_k(M,M_0)$ is the quotient of $\mathrm{Conf}_k(M)$ by the subspace of configurations intersecting $M_0$ non-vacuously, while \[\mathrm{Conf}_X(M,M_0)_{\leq k}\setminus \mathrm{Conf}_X(M,M_0)_{\leq k-1}\cong \mathrm{Conf}_k(M\setminus M_0)\times_{\Sigma_k}(X\setminus x_0)^k.\] \begin{example} The filtration quotients of $\mathrm{Conf}_{S^r}(M)$ are given by the Thom spaces of the vector bundles $\mathrm{Conf}_k(M)\times_{\Sigma_k} \mathbb{R}^{rk}\to B_k(M),$ so, by the Thom isomorphism, we may compute the homology of the configuration spaces of $M$ from knowledge of this filtration. In fact, as we will show, this filtration splits at the level of homology, making this type of computation often feasible in practice. Notice that, for $r>0$, $\mathrm{Conf}_{S^r}(M)$ is connected. In particular, the analogous electric field map $\mathrm{Conf}_{S^r}(\mathbb{R}^n)\to \Omega^nS^{n+r}$ is a bijection on $\pi_0$, unlike in the case $r=0$ considered above. In fact, as we will show, this map is a homotopy equivalence. \end{example} We close this section by advancing the thesis that the construction $\mathrm{Conf}_X$ should be thought of as a kind of homology theory for manifolds (see \cite{AyalaFrancis:FHTM} for a detailed elaboration on this idea).
It is easy to see that analogues of some of the Eilenberg--Steenrod axioms hold. For example, the construction is functorial in an obvious way for embeddings of pairs $(M, M_0)\to (N, N_0)$ and for based maps $X\to Y$; the map induced by an isotopy equivalence in the former case or a homotopy equivalence in the latter is a homotopy equivalence; we have a homeomorphism \[\mathrm{Conf}_X(M\amalg N, M_0\amalg N_0)\cong \mathrm{Conf}_X(M, M_0)\times\mathrm{Conf}_X(N, N_0),\] which is an analogue of the additivity axiom; and an analogue of excision is supplied by the homeomorphism \[\mathrm{Conf}_X(M\setminus U, M_0\setminus U)\xrightarrow{\simeq} \mathrm{Conf}_X(M,M_0)\] for $U\subseteq M_0$ open in $M$. The basic building blocks of manifolds being disks rather than points, an appropriate analogue of the dimension axiom is supplied by the following result: \begin{proposition} Fix $X$ and $n\geq0$. \begin{enumerate} \item The inclusion $\mathrm{Conf}_X(D^n,\partial D^n)_{\leq 1}\subseteq \mathrm{Conf}_X(D^n,\partial D^n)$ is a weak homotopy equivalence, and \item there is a homeomorphism $\mathrm{Conf}_X(D^n,\partial D^n)_{\leq 1}\cong \Sigma^nX$. \end{enumerate} \end{proposition} \begin{proof} For (1), we note that, by radial expansion, any pointed map from a compact space $K$ to $\mathrm{Conf}_X(D^n,\partial D^n)$ factors up to pointed homotopy through $\mathrm{Conf}_X(D^n,\partial D^n)_{\leq 1}$. For (2), we calculate that \begin{align*} \mathrm{Conf}_X(D^n,\partial D^n)_{\leq 1}&\cong \frac{\mathrm{Conf}_0(D^n)\times X^0\amalg \mathrm{Conf}_1(D^n)\times X^1}{\sim}\\ &\cong \frac{(D^n\times X)_+}{(\partial D^n\times X)_+\cup (D^n\times\{x_0\})_+}\\ &\cong \frac{D^n_+\wedge X}{\partial D^n_+\wedge X}\\ &\cong S^n\wedge X, \end{align*} as desired. \end{proof} Finally, we have the following analogue of exactness. \begin{theorem}[McDuff]\label{thm:exactness} Let $M_0\subseteq M$ be a closed submanifold of codimension 0, possibly with boundary.
If $X$ is connected, then the diagram \[ \xymatrix{ \mathrm{Conf}_X(M_0)\ar[d]\ar[r]&\mathrm{Conf}_X(M)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(M,M_0) } \] induced by the maps $(M_0,\varnothing)\to (M,\varnothing)\to (M,M_0)$ is homotopy Cartesian. \end{theorem} We will take up the proof of this theorem presently. We conclude this section with an obvious question. \begin{question} If $\mathrm{Conf}_X$ is a homology theory, what is Poincar\'{e} duality? \end{question} \subsection{Local topology} Our strategy in proving Theorem \ref{thm:exactness} will be to combine covering technology and Quillen's Theorem B as in our proof of Fadell--Neuwirth. In order to do so, we must come to grips with the local topology of $\mathrm{Conf}_X(M,M_0)$. \begin{definition} An open subset $U\subseteq M$ is \emph{pointed} with respect to $M_0$ if some union of components $U_*\subseteq U$ contains $M_0$ as an isotopy retract. \end{definition} \begin{remark} The assumption that $M_0\subseteq M$ is a codimension 0 submanifold guarantees a ready supply of open sets in $M$ that are pointed with respect to $M_0$; indeed, we may take the union of $M_0$ with any tubular neighborhood of $\partial M_0$ in $M\setminus \mathring{M_0}$. \end{remark} Note that, if $U$ is pointed with respect to $M_0$, then $\pi_0(U, U_*)$ is canonically pointed by the class of $U_*$. \begin{construction} Fix $M_0\subseteq M$ and $x_0\in X$ as above. We define a category $\op B(M,M_0;X)$ by the following specifications. \begin{enumerate} \item An object of $\op B(M,M_0;X)$ is a pair $(U,\sigma)$ with $U\subseteq M$ a proper open subset pointed with respect to $M_0$ and $\sigma:\pi_0(U, U_*)\to\mathrm{Op}(X)$ a function with $x_0\in \sigma(U_*)$, such that, for $i\in \pi_0(U, U_*)^\circ$ \begin{enumerate} \item $U_i\cong\mathbb{R}^n$, \item $\sigma(U_i)\simeq \mathrm{pt}$, and \item $\sigma(U_i)\cap \sigma(U_*)=\varnothing$.
\end{enumerate} \item A morphism $(U,\sigma)\to (V,\tau)$ is an inert map $f:\pi_0(U, U_*)\to \pi_0(V, V_*)$ such that \begin{enumerate} \item $U_{i}\subseteq V_{f(i)}$ and $\sigma(U_{i})\subseteq \tau(V_{f(i)})$ for $i\notin f^{-1}(*)\setminus\{*\}$, and \item either $U_i\subseteq V_*$ or $\sigma(U_i)\subseteq \tau(V_*)$ for $i\in f^{-1}(*)\setminus \{*\}$. \end{enumerate} \end{enumerate} We write $\mathrm{Conf}_X^0(U,\sigma)\subseteq \mathrm{Conf}_X(M,M_0)$ for the subspace consisting of labeled configurations $\sum m_a x_a$ such that \begin{enumerate} \item for each $i\in \pi_0(U,U_*)^\circ$, there is a unique $a$ with $m_a\in U_i$ and $x_a\in \sigma(U_i)$; and \item otherwise, either $m_a\in U_*$ or $x_a\in \sigma(U_*)$. \end{enumerate} \end{construction} We summarize the relevant facts about this category and these subspaces in a series of lemmas. First, a preliminary concept. \begin{definition} Let $\sum m_a x_a$ be a configuration lying in $\mathrm{Conf}_X^0(U,\sigma)$. We say that the point $m_{a}$ is \emph{essential} for $(U,\sigma)$ if $m_a\in U_i$ and $x_a \in \sigma(U_i)$ for some (necessarily unique) $i\in\pi_0(U,U_*)^\circ$. \end{definition} Equivalently, $m_a$ is essential if and only if \[\textstyle\sum_{a'\neq a}m_{a'}x_{a'}\notin \mathrm{Conf}_X^0(U,\sigma).\] Recall that a category is a poset if and only if, first, each hom set contains at most one element, and, second, every isomorphism is an identity. \begin{lemma}\label{lem:labeled poset} The category $\op B(M,M_0;X)$ is a poset, and the inequality $(U,\sigma)\leq (V,\tau)$ holds if and only if the containment $\mathrm{Conf}_X^0(U,\sigma)\subseteq \mathrm{Conf}_X^0(V,\tau)$ holds. \end{lemma} \begin{proof} Suppose that the inert maps $f$ and $g$ each determine a morphism $(U,\sigma)\to (V,\tau)$ in $\op B(M,M_0;X)$.
If $f\neq g$, then, exchanging the two if necessary, there exists $i\in \pi_0(U,U_*)^\circ$ such that $f(i)=*\neq g(i)$; otherwise, $f$ and $g$ differ by a permutation of $\pi_0(V,V_*)$, and the conditions on morphisms in $\op B(M,M_0;X)$ force this permutation to be the identity. Now, since $f(i)=*$, either $U_i\subseteq V_*$ or $\sigma(U_i)\subseteq \tau(V_*)$; however, since $g(i)\neq *$, $U_i\subseteq V_{g(i)}$ and $\sigma(U_i)\subseteq \tau(V_{g(i)})$. It follows that either $V_{g(i)}\cap V_*\neq \varnothing$ or $\tau(V_{g(i)})\cap\tau(V_*)\neq \varnothing$, a contradiction. We conclude that hom sets in $\op B(M,M_0;X)$ have cardinality at most 1. Now, if $(U,\sigma)\cong (V,\tau)$, then the associated inert map is also an isomorphism, which is to say a pointed bijection, so $U_i\subseteq V_{f(i)}\subseteq U_i$ and $\sigma(U_i)\subseteq\tau(V_{f(i)})\subseteq \sigma(U_i)$ for every $i\in \pi_0(U,U_*)$, and it follows that $(U,\sigma)=(V,\tau)$. Thus, $\op B(M,M_0;X)$ is a poset. For the second claim, we verify the ``if'' implication, the converse being essentially obvious. Suppose, then, that $\mathrm{Conf}_X^0(U,\sigma)\subseteq \mathrm{Conf}_X^0(V,\tau)$, and choose $\sum m_ax_a\in\mathrm{Conf}_X^0(U,\sigma)$. We note that \begin{align*} \pi_0(U,U_*)^\circ&\cong\left\{a\mid m_a\text{ is essential for $(U,\sigma)$}\right\}\\ &\supseteq\left\{a\mid m_a\text{ is essential for $(V,\tau)$}\right\}\\ &\cong\pi_0(V,V_*)^\circ, \end{align*} where the inclusion follows from the definition of an essential point and our containment assumption. Since the data of an inert map is equivalent to that of an inclusion in the opposite direction, this observation defines an inert map $f:\pi_0(U,U_*)\to \pi_0(V,V_*)$. It is easy to see that two labeled configurations that may be joined by a path give rise to the same inert map; therefore, since $\mathrm{Conf}_X^0(U,\sigma)$ is path connected, $f$ is independent of the choice of $\sum m_ax_a$.
To see that $f$, so defined, witnesses the inequality $(U,\sigma)\leq (V,\tau)$, there are two points to verify. \begin{enumerate} \item For $i\in f^{-1}(\pi_0(V,V_*)^\circ)$, we note that $U_i\cap V_{f(i)}\neq \varnothing$, since there is some point $m_a$ that is essential both for $(U,\sigma)$ and for $(V,\tau)$. The existence of a path in $U_i$ from $m_a$ to a point lying outside of this intersection leads to a contradiction of the fact that $m_a$ is essential for $(V,\tau)$, so $U_i\subseteq V_{f(i)}$. On the other hand, since $U_*$ and $V_*$ both contain $M_0$ as an isotopy retract, we have the bijection $\pi_0(U_*)\cong\pi_0(M_0)\cong \pi_0(V_*)$. Thus, since $V_*\cap V_j=\varnothing$ for $j\neq*$, it follows that $U_*\cap V_j=\varnothing$ for $j\neq *$. Allowing a point to range over $U_*$ with a fixed label lying outside of $\tau(V_*)$ now shows that $U_*\subseteq V_*$.\\ \item For $*\neq i\in f^{-1}(*)$, if neither containment holds, then there exist $m\in U_i$ and $x\in \sigma(U_i)$ with $m\notin V_*$ and $x\notin \tau(V_*)$. Thus, there is a configuration of the form $\sum m_ax_a+mx\in \mathrm{Conf}_X^0(U,\sigma)$; however, since $m$ is not essential in $(V,\tau)$ by the definition of $f$, we must also have $\sum m_ax_a+mx\notin \mathrm{Conf}_X^0(V,\tau)$, contradicting our assumption. \end{enumerate} \end{proof} \begin{lemma}\label{lem:labeled open} Each $\mathrm{Conf}_X^0(U,\sigma)$ is open in $\mathrm{Conf}_X(M,M_0)$. \end{lemma} \begin{proof} From the definitions, a subset of $\mathrm{Conf}_X(M, M_0)$ is open if and only if its preimage under each of the maps \[q_r:\mathrm{Conf}_r(M)\times X^r\to \mathrm{Conf}_X(M,M_0)\] is so. 
For $r<k:=|\pi_0(U,U_*)^\circ|$, we have $q^{-1}_r(\mathrm{Conf}_X^0(U,\sigma))=\varnothing$; otherwise, the inverse image is the $\Sigma_r$-orbit of the subspace \[\bigcup_{k+\ell+m=r} A_{k,\ell,m}\times\prod_{i=1}^k\sigma(U_i)\times X^\ell\times \sigma(U_*)^m,\] where $A_{k,\ell,m}\subseteq \mathrm{Conf}_r(M)$ is defined by requiring that each of the diamonds in the commuting diagram \[\xymatrix{ &&A_{k,\ell,m}\ar[dr]\ar[dl]\\ &\bullet\ar[dl]\ar[dr]^-{(3)}&&\bullet\ar[dl]_-{(4)}\ar[dr]\\ \mathrm{Conf}^0_k(U\setminus U_*)\ar[dr]^-{(1)}&&\mathrm{Conf}_r(M)\ar[dr]\ar[dl]&&\mathrm{Conf}_\ell(U_*)\ar[dl]_-{(2)}\\ &\mathrm{Conf}_k(M)&&\mathrm{Conf}_\ell(M) }\] be a pullback; in other words, $A_{k,\ell,m}$ is the subspace in which the first $k$ points lie $\pi_0$-surjectively in $U\setminus U_*$, the next $\ell$ points lie in $U_*$, and the remaining $m$ points are arbitrary. Since the first and second marked arrows are inclusions of open subspaces, so are the third and fourth marked arrows. Thus, $A_{k,\ell,m}$ is the intersection of two open subspaces and hence itself open, implying the claim. \end{proof} \begin{lemma}\label{lem:labeled complete cover} The collection $\left\{\mathrm{Conf}_X^0(U,\sigma): (U,\sigma)\in \op B(M,M_0; X)\right\}$ is a complete cover of $\mathrm{Conf}_X(M,M_0)$. \end{lemma} \begin{proof} We first verify that this collection is a cover. We begin by fixing $\sum m_ax_a\in \mathrm{Conf}_X(M,M_0)$ with $m_a\notin M_0$ and $x_a\neq x_0$ for all $a$. We now choose \begin{enumerate} \item disjoint Euclidean neighborhoods $m_a\in U_a$; \item a tubular neighborhood $M_0\subseteq U_*$ disjoint from $\bigcup_a U_a$; \item contractible neighborhoods $x_a\in \sigma(U_a)$ disjoint from $x_0$; and \item a contractible neighborhood $x_0\in \sigma(U_*)$ disjoint from $\bigcup_a \sigma(U_a)$. \end{enumerate} Setting $U=U_*\cup\bigcup_a U_a$, we clearly have $\sum m_ax_a\in\mathrm{Conf}_X^0(U,\sigma)$.
For completeness, suppose that $\sum m_ax_a\in\bigcap_{r=1}^N\mathrm{Conf}_X^0(V_r,\tau_r)$. We make the same sequence of choices as in the previous step, while requiring that \begin{enumerate} \item if $m_a$ is contained in some $V_{r,i}$, then $U_a\subseteq \bigcap_{m_a\in V_{r,i}} V_{r,i}$; \item $U_*\subseteq \bigcap_{r=1}^N V_{r,*}$; \item if $x_a$ is contained in some $\tau_r(V_{r,i})$, then $x_0\notin\sigma(U_a)\subseteq \bigcap_{x_a\in \tau_r(V_{r,i})}\tau_r(V_{r,i})$; and \item $\sigma(U_*)\subseteq \bigcap_{r=1}^N \tau_r(V_{r,*})$. \end{enumerate} We define a function $f:\pi_0(U,U_*)\to \pi_0(V_r, V_{r,*})$ by sending $U_a$ to the component $V_{r,i}$ such that $m_a\in V_{r,i}$, provided $m_a$ is essential for $(V_r,\tau_r)$, and to the basepoint otherwise. It is easy to check that $f$ is a well-defined inert map, and the proof is complete upon verifying that $f$ witnesses the inequality $(U,\sigma)\leq (V_r,\tau_r)$. There are two points to check. \begin{enumerate} \item If $f(a)\neq *$ or if $a=*$, then $U_a\subseteq V_{r,f(a)}$ and $\sigma(U_a)\subseteq \tau_r(V_{r,f(a)})$ by construction. \item If $*\neq a\in f^{-1}(*)$, then $m_a$ is not essential for $(V_r, \tau_r)$, so either $m_a\in V_{r,*}$ or $x_a\in \tau_r(V_{r,*})$. By construction, then, either $U_a\subseteq V_{r,*}$ or $\sigma(U_a)\subseteq \tau_r(V_{r,*})$. \end{enumerate} \end{proof} \begin{lemma}\label{lem:labeled contractible} Each $\mathrm{Conf}_X^0(U, \sigma)$ is contractible. \end{lemma} \begin{proof} A choice of contraction of $\sigma(U_*)$ onto $x_0$ defines a homotopy equivalence \[\mathrm{Conf}_X^0(U,\sigma)\simeq\prod_{\pi_0(U,U_*)^\circ}(U_i\times \sigma(U_i))\times \mathrm{Conf}_X(U_*, M_0).\] The second factor is contractible in view of the isotopy equivalence of pairs $(M_0, M_0)\xrightarrow{\sim} (U_*, M_0)$, and each $U_i$ and each $\sigma(U_i)$ is contractible by assumption.
\end{proof} We write $q:\mathrm{Conf}_X(M)\to \mathrm{Conf}_X(M,M_0)$ for the map induced by $(M,\varnothing)\to (M,M_0)$. \begin{lemma}\label{lem:labeled fiber} For every $(U,\sigma)$, there is a homotopy equivalence $q^{-1}\mathrm{Conf}_X^0(U,\sigma)\simeq \mathrm{Conf}_X(M_0)$. \end{lemma} \begin{proof} The inverse image $q^{-1}\mathrm{Conf}_X^0(U,\sigma)$ is the subspace of labeled configurations in $\mathrm{Conf}_X(M)$ with 1) exactly one essential point in each $U_i$, 2) all other points not lying in $U_*$ labeled by points of $\sigma(U_*)$, and 3) an arbitrary subconfiguration lying in $U_*$. Thus, a choice of contraction of $\sigma(U_*)$ onto $x_0$ defines a homotopy equivalence \[q^{-1}\mathrm{Conf}_X^0(U,\sigma)\simeq \prod_{\pi_0(U,U_*)^\circ}(U_i\times \sigma(U_i))\times\mathrm{Conf}_X(U_*).\] Since each $U_i$ and each $\sigma(U_i)$ is contractible, the claim follows from the isotopy equivalence $M_0\xrightarrow{\sim} U_*$. \end{proof} \begin{lemma}\label{lem:labeled locally constant} If $X$ is connected, then the inclusion \[q^{-1}\mathrm{Conf}_X^0(U,\sigma)\to q^{-1}\mathrm{Conf}_X^0(V,\tau)\] is a homotopy equivalence whenever $(U,\sigma)\leq(V,\tau)$. \end{lemma} Before turning to the proof, we consider an example illustrating the necessity of the hypothesis on $X$. \begin{example} Set $M=D^n_6(0)$, $M_0=D_6^n(0)\setminus\mathring{D}_5^n(0)$, and $X=S^0$. We define a pair $(U,\sigma)\leq (V,\tau)$ by setting \begin{align*} U_1&=\mathring{D}^n_1(0)\\ U_2&=\mathring{D}^n_{1/3}(7/2,0,\ldots, 0)\\ U_*&=D^n_6(0)\setminus D_4^n(0)\\ V_1&=\mathring{D}_2^n(0)\\ V_*&=D^n_6(0)\setminus D_3^n(0). \end{align*} Since $X=S^0$, $\sigma$ and $\tau$ are determined, and, since $U_1\subseteq V_1$ and $U_*\cup U_2\subseteq V_*$, the inert map $f:\{1,2,*\}\to \{1,*\}$ with $f(1)=1$ and $f(2)=*$ witnesses the claimed inequality.
Now, in the commuting diagram \[\xymatrix{ q^{-1}\mathrm{Conf}^0_{S^0}(U,\sigma)\ar[d]\ar@{=}[r]^-\sim& U_1\times U_2\times \mathrm{Conf}_{S^0}(U_*)\ar[d]&\{m_1\}\times\{m_2\}\times\displaystyle\coprod_{k\geq0}B_k(U_*)\ar[d]\ar[l]_-{\sim}\\ q^{-1}\mathrm{Conf}^0_{S^0}(V,\tau)\ar@{=}[r]^-\sim&V_1\times\mathrm{Conf}_{S^0}(V_*)&\{m_1\}\times\displaystyle\coprod_{k\geq0}B_k(V_*),\ar[l]_-{\sim} }\] the point $\{m_1\}\times\{\varnothing\}$ does not lie in the image of the rightmost map; therefore, this map fails to surject on $\pi_0$. \end{example} \begin{definition} Let $P$ be connected and $N\subsetneq P$ an isotopy retract. For $p\in P\setminus N$ and $x\neq x_0$, the \emph{stabilization map} with respect to $p$ and $x$ is the map \[\mathrm{Conf}_X(N)\cong \{px\}\times \mathrm{Conf}_X(N)\subseteq \mathrm{Conf}_X(D_{\epsilon}(p))\times\mathrm{Conf}_X(N)\to \mathrm{Conf}_X(P),\] where the rightmost map is the composite of the homeomorphism $\mathrm{Conf}_X(D_\epsilon(p))\times\mathrm{Conf}_X(N)\cong \mathrm{Conf}_X(D_\epsilon(p)\amalg N)$ with the structure map for the inclusion. \end{definition} Thus, in the homotopy category of spaces, there is a well-defined stabilization map $\mathrm{Conf}_X(P)\to \mathrm{Conf}_X(P)$ depending only on the class of $x$ in $\pi_0(X)$ and a choice of element in $\pi_0(P,N)$. \begin{proposition} Stabilization with respect to $p$ and $x$ is an equivalence if and only if $x$ lies in the path component of $x_0$. \end{proposition} \begin{proof} If $x$ lies in the path component of $x_0$, then a path joining the two defines a homotopy from the stabilization map to the structure map for the isotopy equivalence $N\subseteq P$. Conversely, suppose that $X=X_1\amalg X_2$ with $x_0\in X_1$ and $x\in X_2$. The subspace \[\left\{{\textstyle\sum m_ax_a}\mid \{x_a\}\cap X_2\neq \varnothing\right\}\subset\mathrm{Conf}_X(P)\] is closed, open, proper, and contains the image of stabilization with respect to $p$ and $x$.
It follows that stabilization does not surject on $\pi_0$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:labeled locally constant}] Suppose that $f:\pi_0(U,U_*)\to \pi_0(V,V_*)$ witnesses the inequality $(U,\sigma)\leq(V,\tau)$, and write \[\pi_0(U,U_*)\cong *\sqcup f^{-1}(\pi_0(V,V_*)^\circ)\sqcup I\sqcup I',\] where $I=\{*\neq i\in f^{-1}(*)\mid\sigma(U_i)\not\subseteq \tau(V_*)\}$ and $I'=\{*\neq i\in f^{-1}(*)\mid\sigma(U_i)\subseteq \tau(V_*)\}$. Choosing contractions of $\sigma(U_*)$ and $\tau(V_*)$ onto $x_0$ and points in $U_i\times\sigma(U_i)$ for $i\in\pi_0(U,U_*)^\circ$ induces the equivalences in the commuting diagram \[\xymatrix{ q^{-1}\mathrm{Conf}_X^0(U,\sigma)\ar[d]&\displaystyle\prod_I(U_i\times \sigma(U_i))\times\mathrm{Conf}_X(U_*)\ar[d]\ar[l]_-\sim&\mathrm{Conf}_X(U_*)\ar[l]_-\sim\ar[d]\\ q^{-1}\mathrm{Conf}_X^0(V,\tau)&\mathrm{Conf}_X(V_*)\ar[l]_-\sim&\mathrm{Conf}_X(V_*),\ar@{=}[l] }\] where the righthand map is a composite of $|I|$ stabilization maps, each of which is an equivalence by our assumption on $X$. \end{proof} We now prove the theorem. \begin{proof}[Proof of Theorem \ref{thm:exactness}] Lemma \ref{lem:labeled poset} supplies the top arrow and Lemma \ref{lem:labeled open} the dashed filler in the commuting diagram \[\xymatrix{ \op B(M,M_0;X)\ar@{-->}[dr]\ar[rr]^-{\mathrm{Conf}_X^0}&&\mathcal{T}\mathrm{op}\\ &\mathrm{Op}(\mathrm{Conf}_X(M,M_0)).\ar[ur] }\] By Lemma \ref{lem:labeled complete cover}, then, \[\mathrm{Conf}_X(M,M_0)\simeq \hocolim_{\op B(M,M_0;X)}\mathrm{Conf}_X^0\simeq B(\op B(M,M_0;X)),\] where the second equivalence follows from Lemma \ref{lem:labeled contractible}.
We similarly have the commuting diagram \[\xymatrix{ \op B(M,M_0;X)\ar@{-->}[dr]\ar[rr]^-{q^{-1}\mathrm{Conf}_X^0}&&\mathcal{T}\mathrm{op}\\ &\mathrm{Op}(\mathrm{Conf}_X(M)).\ar[ur] }\] Lemma \ref{lem:labeled complete cover} implies that the collection $\left\{q^{-1}\mathrm{Conf}_X^0(U,\sigma): (U,\sigma)\in \op B(M, M_0;X)\right\}$ is likewise a complete cover of $\mathrm{Conf}_X(M)$, so \[\mathrm{Conf}_X(M)\simeq \hocolim_{\op B(M, M_0;X)}q^{-1}\mathrm{Conf}^0_X.\] Since $X$ is connected, Lemma \ref{lem:labeled locally constant} and Corollary \ref{cor:hocolim quasifibration} grant that the diagram \[\xymatrix{ q^{-1}\mathrm{Conf}_X^0(U,\sigma)\ar[r]\ar[d]&\displaystyle\hocolim_{\op B(M,M_0;X)}q^{-1}\mathrm{Conf}_X^0\ar[d]\\ \mathrm{pt}\ar[r]^-{(U,\sigma)}&B(\op B(M,M_0;X)) }\] is homotopy Cartesian, and the desired conclusion follows from Lemma \ref{lem:labeled fiber} and the identifications already established. \end{proof} We close by pointing out that the same reasoning provides an analogue of the long exact sequence for a triple. \begin{theorem}[McDuff] Let $M_0$, $M_1$, and $M_0\cap M_1$ be closed submanifolds of codimension 0 in $M$. If either $X$ or $(M_0,M_0\cap M_1)$ is connected, then the diagram \[\xymatrix{ \mathrm{Conf}_X(M_0, M_0\cap M_1)\ar[r]\ar[d]&\mathrm{Conf}_X(M, M_1)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(M, M_0\cup M_1) }\] is homotopy Cartesian. \end{theorem} \subsection{Duality for labeled configuration spaces} We now discuss an analogue of Poincar\'{e} duality in the context of the ``homology theory'' of labeled configuration spaces. One approach to the classical isomorphism $H_*(M)\cong H_c^{n-*}(M)$ is through the following steps. \begin{enumerate} \item By taking a compact exhaustion, reduce to the statement $H_*(N)\cong H_c^{n-*}(N,\partial N)$ for compact manifolds with boundary. \item Observe the local calculation $\mathbb{Z}\cong H_*(D^n)\cong H^{n-*}(D^n,\partial D^n)\cong \widetilde H^{n-*}(S^n)\cong\mathbb{Z}$.
\item Deduce the general case inductively using Mayer-Vietoris and/or exactness. \end{enumerate} Our approach will proceed along these same lines; first, however, we must identify the object that is to play the role of compactly supported cohomology. \begin{recollection} A Riemannian $n$-manifold $M$ has a principal right $O(n)$-bundle of \emph{orthonormal frames} \[\xymatrix{ \mathrm{Isom}(\mathbb{R}^n, T_pM)\ar[d]\ar[r]&\mathrm{Fr}_M\ar[d]\\ \{p\}\ar[r]&M, }\] with fiber over $p$ the set of isometries from $\mathbb{R}^n$ with the standard inner product to the tangent fiber over $p$. The orthogonal group $O(n)$ acts on this space of isometries by precomposition. Note the $O(n)$-equivariant but non-canonical isomorphism $\mathrm{Isom}(\mathbb{R}^n, T_pM)\cong O(n)$. \end{recollection} We equip $M$ with a metric, whose features will be further specified in due course. \begin{construction} Define a bundle $E_X=E_X(M)$ by \[E_X:=\mathrm{Fr}_M\times_{O(n)}\mathrm{Conf}_X(D^n,\partial D^n),\] where the action of $O(n)$ on $\mathrm{Conf}_X(D^n,\partial D^n)$ arises from the action by diffeomorphisms of $O(n)$ on the pair $(D^n,\partial D^n)$. \end{construction} The projection $\pi: E_X\to M$ has a canonical section defined by $s_0(p)=\varnothing$, and we may identify the fiber $\pi^{-1}(p)$ with $\mathrm{Conf}_X(D_1(T_pM), \partial D_1(T_pM))$. Moreover, the inclusion of the subbundle $\mathrm{Fr}_M\times_{O(n)}\mathrm{Conf}_X(D^n,\partial D^n)_{\leq 1}$ is a fiberwise homotopy equivalence, and, using the isomorphism $\mathrm{Conf}_X(D^n,\partial D^n)_{\leq 1}\cong\Sigma^n X$, which is $O(n)$-equivariant with $O(n)$ acting on the righthand side via the suspension coordinates, we may identify this subbundle with the fiberwise smash product of $X$ with the fiberwise one-point compactification of $TM$. Under this identification, the section $s_0$ is given by the fiberwise basepoint.
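As a consistency check, suppose that $M$ is parallelizable. A choice of trivialization of $TM$ induces a trivialization $\mathrm{Fr}_M\cong M\times O(n)$ and hence an identification \[E_X\cong M\times \mathrm{Conf}_X(D^n,\partial D^n)\simeq M\times \Sigma^nX,\] under which a section of $E_X$ is simply a map $M\to \Sigma^nX$.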
\begin{definition} For a map $f:E\to B$, the \emph{space of sections} of $f$ over $A\subseteq B$ is the pullback in the diagram \[\xymatrix{ \Gamma(A;E)\ar[r]\ar[d]&\mathrm{Map}(A,E)\ar[d]^-{f_*}\\ \{A\subseteq B\}\ar[r]&\mathrm{Map}(A,B). }\] Fixing a section $s_0$, we say that such a section $s$ has \emph{compact support} if $s|_{A\setminus K}= s_0|_{A\setminus K}$ for some compact subset $K\subseteq A$. We write $\Gamma_c(A;E)\subseteq\Gamma(A;E)$ for the subspace of compactly supported sections. \end{definition} Note that we have the identification \[\Gamma_c(A;E)\cong \colim_K \Gamma(A, A\setminus K;E),\] where the colimit is taken over the category of inclusions among compact subsets of $A$, and the space of relative sections $\Gamma(A,A_0;E)$ for $A_0\subseteq A$ is defined as the pullback in the diagram \[\xymatrix{ \Gamma(A,A_0;E)\ar[d]\ar[r]&\Gamma(A;E)\ar[d]^-{(A_0\subseteq A)^*}\\ \{s_0|_{A_0}\}\ar[r]&\Gamma(A_0;E). }\] The space $\Gamma_c(M;E_X)$ will play the role of compactly supported cohomology in our version of Poincar\'{e} duality. In order to make the comparison to the labeled configuration space $\mathrm{Conf}_X(M)$, we require a map. \begin{recollection} Given a Riemannian manifold $M$, a point $p\in M$, and a sufficiently small $\epsilon>0$, there is an \emph{exponential map} \[\exp_p^\epsilon:D_\epsilon(T_pM)\to M,\] which is an embedding. \end{recollection} We assume that $M$ is equipped with a metric such that $\exp_p^1$ is an embedding for all $p\in M$; for example, such a metric exists if $M$ is the interior of a compact manifold with boundary.
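To orient ourselves, consider the case $M=\mathbb{R}^n$ with its standard framing, so that $E_X$ is the trivial bundle with fiber $\Sigma^nX$ and sections are simply maps to $\Sigma^nX$. Since the disks $D^n_r(0)$ are cofinal among compact subsets of $\mathbb{R}^n$, the identification above becomes \[\Gamma_c(\mathbb{R}^n;E_X)\cong \colim_r\,\mathrm{Map}\left((D^n_r(0),\partial D^n_r(0)),(\Sigma^nX,\ast)\right)\simeq \Omega^n\Sigma^nX.\] Thus, for $M=\mathbb{R}^n$, the duality theorem below amounts to the statement that $\mathrm{Conf}_X(\mathbb{R}^n)\simeq \Omega^n\Sigma^nX$ for connected $X$.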
\begin{definition} The \emph{scanning map} for $M$ and $X$ is the map \[s:\mathrm{Conf}_X(M)\to \Gamma(M;E_X)\] defined by letting $s\left(\sum m_a x_a\right)(p)$ be the image of $\sum m_ax_a$ under the composite map \begin{align*}\mathrm{Conf}_X(M)&\to\mathrm{Conf}_X(M,M\setminus \exp_p^1(\mathring{D}_1(T_pM)))\\ &\cong \mathrm{Conf}_X(\exp_p^1(D_1(T_pM)), \exp_p^1(\partial D_1(T_pM)))\\ &\cong \mathrm{Conf}_X(D_1(T_pM),\partial D_1(T_pM))\\ &\cong\pi^{-1}(p), \end{align*} where the first map is induced by $(M,\varnothing)\to (M,M\setminus \exp_p^1(\mathring{D}_1(T_pM)))$, the second is excision, and the third is induced by the exponential map. \end{definition} Note that, since there are only finitely many $m_a$ with $x_a\neq x_0$, the section $s\left(\sum m_a x_a\right)$ always has compact support. \begin{theorem}[McDuff]\label{thm:duality noncompact} Scanning induces a weak equivalence \[\mathrm{Conf}_X(M)\to \Gamma_c(M;E_X)\] provided $X$ is connected. \end{theorem} As in our approach to classical Poincar\'{e} duality outlined above, we deduce this result from a version for compact manifolds with boundary $N$. In order to accommodate boundary into our scanning technique, we set $W=N\amalg_{\partial N}[0,1)\times\partial N$ and arrange as before that $\exp_p^1$ is an embedding for $p\in W$. Using a collar neighborhood of $\partial N$ in $N$, we choose an isotopy retract $N'\subseteq N$ such that $\exp_p^1(D_1(T_pN))\cap N'=\varnothing$ for $p\in \partial N\times[0,1)$, implying that the dashed filler exists in the commuting diagram \[\xymatrix{\mathrm{Conf}_X(N)\ar[r]^-{N\subseteq W}& \mathrm{Conf}_X(W) \ar[r]^-s& \Gamma(W;E_X)\\ \mathrm{Conf}_X(N')\ar[u]^-\wr\ar@{-->}[rr]&&\Gamma(W,\partial N\times[0,1);E_X)\cong \Gamma(N,\partial N;E_X).\ar[u] }\] \begin{theorem}[McDuff]\label{thm:duality with boundary} Let $N$ be a compact manifold with boundary.
Scanning induces a weak equivalence \[\mathrm{Conf}_X(N)\simeq \Gamma(N,\partial N;E_X)\] provided $X$ is connected. \end{theorem} Before turning to the proof of this result, we use it to deduce Theorem \ref{thm:duality noncompact}. \begin{proof}[Proof of Theorem \ref{thm:duality noncompact}] Realize $M$ as the interior of a compact manifold with boundary and choose a compact exhaustion $M=\colim_k N_k$ with each $N_k$ an isotopy retract along a collar neighborhood of the boundary. In the commuting diagram \[\xymatrix{ \mathrm{Conf}_X(M)\ar[r]&\Gamma_c(M;E_X)\\ \colim_k \mathrm{Conf}_X(N_k)\ar[u]\\ \colim_k\mathrm{Conf}_X(N_k')\ar[u]\ar[r]&\colim_k\Gamma(N_k,\partial N_k;E_X),\ar[uu] }\] the upper left arrow is a homeomorphism; the lower left arrow is obtained from a levelwise weak equivalence between diagrams of relatively $T_1$ inclusions, since $N_k'\subseteq N_k$ is an isotopy equivalence; the same holds for the bottom arrow by Theorem \ref{thm:duality with boundary}; and the righthand arrow is a homeomorphism, since $\Gamma(N_k,\partial N_k;E_X)\cong \Gamma(M,M\setminus N_k;E_X)$ and the diagram $\{N_k\}_{k\geq0}$ is final in the category of compact subsets of $M$. It follows that the top arrow is a weak equivalence, as desired. \end{proof} As in our proof schematic from before, our strategy in proving Theorem \ref{thm:duality with boundary} will be induction on a local calculation. We first clarify the nature of the induction in question. \begin{recollection} If $M$ is an $n$-manifold with boundary and $\varphi: \partial D^i\times D^{n-i}\to \partial M$ an embedding, then we obtain a new manifold with boundary $\overline M:= M\cup_\varphi (D^i\times D^{n-i})$, which we refer to as the result of \emph{attaching an $i$-handle} to $M$ along $\varphi$. A compact manifold with boundary may be built from the empty manifold by a finite sequence of handle attachments.
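For example, $D^n$ is the result of a single $0$-handle attachment, $S^1\times D^{n-1}$ is obtained from a $0$-handle by attaching a $1$-handle, and a compact orientable surface of genus $g$ with one boundary component is built from one $0$-handle and $2g$ $1$-handles.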
\end{recollection} Our strategy, then, will be to proceed by induction on such a handle decomposition. Specifically, a handle attachment gives rise to a homotopy pullback square \[\xymatrix{ \mathrm{Conf}_X(M)\ar[r]\ar[d]&\mathrm{Conf}_X(\overline M)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(D^i\times D^{n-i}, \partial D^i\times D^{n-i}) }\] provided $X$ is connected. Thus, if the theorem is known for $M$, then the theorem for $\overline M$ will follow by examining $\mathrm{Conf}_X(D^i\times D^{n-i}, \partial D^i\times D^{n-i})$. This logic leads us to formulate a third, relative version of the theorem. \begin{theorem}[McDuff]\label{thm:duality relative} Let $M$ be a compact manifold with boundary and $M_0\subseteq M$ a closed submanifold. Scanning induces a weak equivalence \[\mathrm{Conf}_X(M,M_0)\simeq \Gamma(M\setminus M_0, \partial M\setminus M_0; E_X)\] provided either $X$ or $(M,M_0)$ is connected. \end{theorem} We note first that it suffices to consider only those pairs $(M,M_0)$ with $M_0\subseteq \partial M$. Indeed, if $M_0\subseteq N$ is a closed tubular neighborhood, then \begin{align*} \mathrm{Conf}_X(M,M_0)&\xrightarrow{\sim} \mathrm{Conf}_X(M, N)\\ &\cong \mathrm{Conf}_X(M\setminus \mathring{N}, \partial N), \end{align*} and, similarly, \begin{align*} \Gamma(M\setminus M_0, \partial M\setminus M_0)&\xrightarrow{\sim}\Gamma(M\setminus N, \partial M\setminus N)\\ &=\Gamma\left((M\setminus \mathring{N})\setminus \partial N, \partial (M\setminus \mathring{N})\setminus \partial N\right), \end{align*} so the case of $(M,M_0)$ follows from the case of $(M\setminus \mathring{N}, \partial N)$. In order to proceed, we must accommodate the relative case into our scanning setup. \begin{construction} Assuming that $M_0\subseteq\partial M$, we write $\partial M=M_0\cup_{\partial M_1} M_1$, set $W=M\sqcup_{\partial M}\partial M\times[0,1)$, and let $M'$ be a retract along a collar of $M_1$ relative to $M_0$.
Finally, choosing a tubular neighborhood $M_0\subseteq N\subseteq W$ such that $\exp_p^1(\mathring{D}_1(T_pM))\cap M_0=\varnothing$ for $p\notin N$, we obtain, up to homotopy, a scanning map \[\mathrm{Conf}_X(M,M_0)\xleftarrow{\sim}\mathrm{Conf}_X(M', M_0)\to \Gamma(M\setminus N, \partial M\setminus N)\xleftarrow{\sim}\Gamma(M\setminus M_0, \partial M\setminus M_0),\] where the middle arrow is defined by the same formula as before. \end{construction} This construction is possible because $M_0\subseteq M\setminus \exp_p^1(\mathring{D}_1(T_pM))$ for $p\notin N$, so the required map \[\mathrm{Conf}_X(M, M_0)\to \mathrm{Conf}_X(M,M\setminus \exp_p^1(D_1(T_pM)))\cong \mathrm{Conf}_X(D_1(T_pM), \partial D_1(T_pM))\] is defined. The key step in the proof of Theorem \ref{thm:duality relative} is the following result. \begin{lemma}\label{lem:handle case} For $0<i\leq n$ and any $X$, the conclusion of Theorem \ref{thm:duality relative} holds for the pair $(M,M_0)=(D^i\times D^{n-i}, \partial D^i\times D^{n-i})$. Moreover, if $X$ is connected, the conclusion also holds in the case $i=0$. \end{lemma} \begin{proof} We proceed by downward induction on $i$. For the base case $i=n$, it suffices to show that the dashed restriction in the commuting diagram \[\xymatrix{ \mathrm{Conf}_X(D_1^n, \partial D_1^n)_{\leq 1}\ar[d]_-\wr\ar@{=}[r]^-\sim&\Sigma^nX\ar@{-->}[d] \\ \mathrm{Conf}_X(D_1^n,\partial D_1^n)\ar[r]&\Gamma(\mathring{D}_{1/2}^n;E_X)&\Gamma(\mathring{D}_{1}^n;E_X)\ar[l]_-\sim }\] is a weak equivalence. For this claim, we note that radial expansion of the scanning neighborhood defines a homotopy from this restriction to the map $\Sigma^nX\to \Gamma(\mathring{D}^n_{1/2}; E_X)\cong \mathrm{Map}(\mathring{D}^n_{1/2}, \Sigma^nX)$ sending a point in $\Sigma^nX$ to the constant map to that point.
For the induction step, we write $\partial_\epsilon D^i$ for a closed collar neighborhood of $\partial D^i$, and we write $\partial_\epsilon D^i=\partial_\epsilon^+ D^i\cup \partial_\epsilon^- D^i$ with $\partial_\epsilon^+D^i\cap\partial_\epsilon^- D^i\cong \partial_\epsilon D^{i-1}\times D^1$. By exactness, the diagram \[\xymatrix{\mathrm{Conf}_X(\partial^+_\epsilon D^i\times D^{n-i}, \partial_\epsilon D^{i-1}\times D^1\times D^{n-i})\ar[r]\ar[d]&\mathrm{Conf}_X(D^i\times D^{n-i}, \partial_\epsilon^- D^i\times D^{n-i})\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(D^i\times D^{n-i}, \partial_\epsilon D^i\times D^{n-i}) }\] is homotopy Cartesian provided $X$ is connected or $i>0$. The conclusion holds for the bottom right pair by induction, since \[\mathrm{Conf}_X(D^i\times D^{n-i}, \partial_\epsilon D^i\times D^{n-i})\simeq \mathrm{Conf}_X(D^i\times D^{n-i}, \partial D^i\times D^{n-i}),\] while for the upper right pair both the labeled configuration space and the section space are contractible. It follows that the conclusion also holds for the upper left pair, but \[\mathrm{Conf}_X(\partial^+_\epsilon D^i\times D^{n-i}, \partial_\epsilon D^{i-1}\times D^1\times D^{n-i})\cong \mathrm{Conf}_X(D^{i-1}\times D^{n-(i-1)}, \partial D^{i-1}\times D^{n-(i-1)}),\] so the claim follows. \end{proof} We now prove the theorem following \cite{Boedigheimer:SSMS}. \begin{proof}[Proof of Theorem \ref{thm:duality relative}] We first prove the claim in the special case $M_0=\partial M$. We proceed by induction on a handle decomposition of $M$, which we may take to involve no $n$-handles if no component of $M$ has empty boundary, which is the condition that $(M,\partial M)$ be connected. The base case of $M=D^n$ is known. For the induction step, we write $\overline M=M\cup_\varphi D^i\times D^{n-i}$ and assume the claim for $(M,\partial M)$.
By exactness, the diagram \[\xymatrix{ \mathrm{Conf}_X(D^i\times D^{n-i}, D^i\times\partial D^{n-i})\ar[d]\ar[r]&\mathrm{Conf}_X(\overline M, \partial\overline M)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(\overline M, \partial \overline M\cup D^i\times D^{n-i}) }\] is homotopy Cartesian if $X$ is connected or if $i<n$. By excision, \[\mathrm{Conf}_X(\overline M,\partial \overline M\cup D^i\times D^{n-i})\cong\mathrm{Conf}_X(M,\partial M),\] so the claim is known for this pair by induction. Since it is also known for the upper left pair by Lemma \ref{lem:handle case}, the proof is complete in this case. In the general case, we write $\partial M=M_0\cup_{\partial M_1} M_1$, set $\overline W=M\sqcup_{\partial M}\partial M\times[0,1]$, and note that the diagram \[\xymatrix{ \mathrm{Conf}_X(M_1\times[0,1], \partial M_1\times[0,1])\ar[r]\ar[d]&\mathrm{Conf}_X(\overline W, M_0\times[0,1])\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(\overline W, \partial M\times[0,1]).}\] is homotopy Cartesian if $X$ is connected or $(M_1,\partial M_1)$ is connected. Assuming one of these conditions to hold, excision and isotopy invariance show that \begin{align*}\mathrm{Conf}_X(\overline W, \partial M\times[0,1])&\cong\mathrm{Conf}_X(M, \partial M)\\ \mathrm{Conf}_X(\overline W, M_0\times[0,1])&\simeq \mathrm{Conf}_X(M, M_0), \end{align*} so it suffices to prove the claim for the pair $(M_1\times[0,1],\partial M_1\times[0,1])$. To establish this last case, we use exactness once more, invoking our assumed connectivity of $X$ or $(M_1,\partial M_1)$, to obtain the homotopy pullback square \[\xymatrix{ \mathrm{Conf}_X(M_1\times[0,1], \partial M_1\times[0,1])\ar[d]\ar[r]&\mathrm{Conf}_X(M_1\times[0,2], \partial (M_1\times[0,2])\setminus \mathring{M_1}\times\{0\})\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(M_1\times [0,2], M_1\times[0,2]\setminus \mathring{M}_1\times(1,2)). 
}\] The upper right entry and the corresponding section space are contractible, and the case of a manifold relative to its boundary shows that the theorem holds for the bottom right pair, since \[\mathrm{Conf}_X(M_1\times [0,2], M_1\times[0,2]\setminus \mathring{M}_1\times(1,2))\cong\mathrm{Conf}_X(M_1\times[1,2],\partial (M_1\times [1,2]))\] by excision. Thus, the proof is complete under the assumption that $X$ or $(M_1,\partial M_1)$ is connected. In case $X$ and $(M_1,\partial M_1)$ are both disconnected, we let $N$ denote the union of the components of $\partial M$ not intersecting $M_0$, and we set $\widetilde M=M\cup_N M$. Exactness implies that the diagram \[\xymatrix{ \mathrm{Conf}_X(M,M_0)\ar[d]\ar[r]&\mathrm{Conf}_X(\widetilde M, M_0\sqcup M_0)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(\widetilde M, M\sqcup M_0) }\] is homotopy Cartesian, since $(M,M_0)$ is connected by assumption. Since the theorem holds for the two righthand pairs by what has already been shown, it also holds for $(M,M_0)$. \end{proof} \subsection{Postmortem}Before moving on from the subject of Poincar\'{e} duality for labeled configuration spaces, we pause to give an informal discussion of several generalizations and continuations of this story. \begin{perspective} We have shown that, for connected $X$, there is a weak equivalence \[\mathrm{Conf}_X(D^n)\xrightarrow{\sim}\Omega^n\Sigma^nX.\] In fact, this equivalence may be upgraded to an equivalence of $E_n$-\emph{algebras} \cite{May:GILS}. Since the domain of this map is the free $E_n$-algebra on $X$ in the category of pointed spaces under Cartesian product, monadicity considerations show that the homotopy theory of connected $E_n$-algebras is equivalent to that of connected $n$-fold loop spaces. This equivalence can be further upgraded to an equivalence of the homotopy theory of all $n$-fold loop spaces with that of \emph{grouplike} $E_n$-algebras, i.e., those whose connected components form a group. 
\end{perspective} \begin{perspective} For connected $X$, we have the equivalence \[\mathrm{Conf}_X(M)\xrightarrow{\sim}\Gamma_c(M;E_X)=\Gamma(M^+,\infty; E_X)\] as well as the dual equivalence \[\mathrm{Conf}_X(M^+,\infty):=\colim_K\mathrm{Conf}_X(K,\partial K)\xrightarrow{\sim}\colim_K\Gamma(\mathring{K};E_X)\cong \Gamma(M;E_X).\] Thus, the duality of the ``manifolds'' $M$ and $M^+$ mirrors the duality of the ``coefficient systems'' $\Omega^n\Sigma^nX$ and $\Sigma^nX$, a type of algebra and coalgebra, respectively, which is an algebraic phenomenon known as \emph{Koszul duality}. This observation has a generalization in terms of factorization (co)homology in the form of a ``Poincar\'{e}/Koszul duality'' map \[\int_Z A\to \int^{Z^\neg}B^nA,\] where $Z$ is a \emph{zero-pointed manifold} and $Z^\neg$ its \emph{negation}---in our case, $(M_+)^\neg=M^+$ and $(M^+)^\neg=M_+$. Like the scanning map it generalizes, this map is an equivalence under connectivity assumptions \cite{AyalaFrancis:PKD}. \end{perspective} \begin{perspective} It is natural to wonder what can be said about the scanning map \[\mathrm{Conf}_X(M)\to \Gamma(M,\partial M; E_X)\] for disconnected $X$, particularly since our motivating example was the case $X=S^0$. The key to answering this question is the observation that, in the local case $M=D^n$, both the labeled configuration space and the section space carry the structure of a monoid up to homotopy---indeed, as we noted above, both are $E_n$-algebras---and the scanning map respects this structure.
Like a group, a monoid has a classifying space, and, at least in the case of connected $X$, the homotopy pullback square \[\xymatrix{\mathrm{Conf}_X(D^n)\ar[d]\ar[r]&\mathrm{Conf}_X(D^{n-1}\times(D^1,\{1\}))\simeq\mathrm{pt}\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_X(D^{n-1}\times (D^1,\partial D^1)) }\] guaranteed by exactness shows that \[B\mathrm{Conf}_X(D^n)\simeq \mathrm{Conf}_X(D^{n-1}\times(D^1,\partial D^1))\overset{?}{\simeq} \mathrm{Conf}_{\mathrm{Conf}_X(D^1,\partial D^1)}(D^{n-1})\simeq\mathrm{Conf}_{\Sigma X}(D^{n-1}),\] and, in fact, this equivalence holds for any $X$ \cite[2.1]{Segal:CSILS}. Since $\Gamma(D^n, \partial D^n;E_X)\cong\Omega^n\Sigma^nX$ is grouplike, it follows from the group completion theorem \cite[1]{McDuffSegal:HFGCT} that we have the induced isomorphism \[H_*(\mathrm{Conf}_X(D^n))[\pi_0^{-1}]\cong H_*(\Gamma(D^n,\partial D^n;E_X)),\] giving a precise answer to the question of how far the scanning map is from a weak equivalence, at least in this local case. Note that $\pi_0$ acts on homology by stabilization, so, taking $X=S^0$, this isomorphism may be interpreted as the statement that \[\colim_k H_i(B_k(D^n))\cong H_i(\Omega^n_0\Sigma^nS^0)\] for $i>0$, where $\Omega^n_0\Sigma^nS^0\subseteq \Omega^n\Sigma^nS^0$ is the degree zero component. In the general case, assuming $\partial M\neq\varnothing$, we proceed by choosing a small disk $D$ sharing part of its boundary with $\partial M$ and considering the diagram \[\xymatrix{ \mathrm{Conf}_X(D)\ar[r]\ar[d]& \Gamma(D,\partial D;E_X)\ar[d]\\ \mathrm{Conf}_X(M)\ar[r]\ar[d]_-q& \Gamma(M,\partial M;E_X)\ar[d]\\ \mathrm{Conf}_X(M,D)\ar[r]^-\sim&\Gamma(M\setminus D, \partial M\setminus D; E_X). }\] Taking the homotopy colimit over stabilization maps corresponding to various components of $X$, the top arrow becomes an isomorphism on homology, and, while the map from $\mathrm{Conf}_X(D)$ to the homotopy fiber is not a weak equivalence, it does become a homology isomorphism in the limit.
One way to see this is by applying the following result, which is proved in exactly the same manner as Corollary \ref{cor:hocolim quasifibration}. \begin{proposition} If $F:I\to \mathrm{Top}$ is a functor sending each morphism in $I$ to a homology isomorphism, then the diagram \[\xymatrix{ F(i)\ar[r]\ar[d]&\hocolim_IF\ar[d]\\ \mathrm{pt}\ar[r]^-i&BI }\] of spaces is homology Cartesian for every object $i\in I$. \end{proposition} In the scenario at hand, we apply this result to the functor \begin{align*} \op B(M, D; X)&\to \mathcal{T}\mathrm{op}\\ (U,\sigma)&\mapsto(\hocolim q)^{-1}\mathrm{Conf}^0_X(U,\sigma). \end{align*} The previous local group completion calculation shows that the hypothesis is satisfied. Finally, the Serre spectral sequence implies that scanning induces an isomorphism \[H_*(\hocolim \mathrm{Conf}_X(M))\cong H_*(\Gamma(M,\partial M; E_X)).\] In particular, the homology of $B_k(M)$ converges to that of a component of $\Gamma(M,\partial M; E_{S^0}),$ and, with more care, one can show that each $H_i(B_k(M))$ eventually stabilizes; one says that configuration spaces of open manifolds exhibit \emph{homological stability}. What is more surprising is that configuration spaces of closed manifolds also exhibit homological stability, at least rationally \cite{Church:HSCSM, RandalWilliams:HSUCS}. \end{perspective} \section{Homology calculations from stable splittings}\label{section:homology from splittings} The goal of this section is the computation of the rational homology of the unordered configuration spaces of a large class of manifolds. \begin{theorem}\label{thm:odd homology} If $M$ is an odd-dimensional compact manifold with boundary and $\mathbb{F}$ is a field of characteristic 0, then \[H_*(B_k(M))\cong \mathrm{Sym}^k\left(H_*(M)\right).\] \end{theorem} \begin{remark} Since configuration spaces are isotopy invariant, this computation is valid for any odd-dimensional manifold of finite type.
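For instance, the theorem applied to $M=S^1\times\mathbb{R}^2$ gives \[H_*(B_k(S^1\times\mathbb{R}^2))\cong \mathrm{Sym}^k\left(H_*(S^1)\right)\cong H_*(S^1)\] for every $k\geq1$, since the degree 1 generator squares to zero in the free graded-commutative algebra on $H_*(S^1)$; similarly, taking $M=\mathbb{R}^3$, each $B_k(\mathbb{R}^3)$ has the rational homology of a point.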
\end{remark} \begin{remark} With slightly more care, it is not hard to show that this isomorphism is induced by the canonical inclusion $B_k(M)\to \mathrm{Sym}^k(M)$. \end{remark} \begin{remark} Similar, but more complicated, statements are available for $\mathbb{F}_p$, and may be derived in essentially the same way, given the requisite knowledge of the mod $p$ homology of the configuration spaces of $\mathbb{R}^n$. In the case $p=2$, one can eliminate the requirement that $n$ be odd. \end{remark} Our plan is to exploit the natural filtration $\mathrm{Conf}_X(M)=\bigcup_{k\geq0}\mathrm{Conf}_X(M)_{\leq k}$, whose successive quotients are identified as \[\faktor{\mathrm{Conf}_X(M)_{\leq k}}{\mathrm{Conf}_X(M)_{\leq k-1}}\cong \mathrm{Conf}_k(M)_+\wedge_{\Sigma_k}X^{\wedge k}.\] In the case $X=S^m$, these quotients are Thom spaces of vector bundles on the configuration spaces $B_k(M)$, so, by the Thom isomorphism, the primary obstruction to extracting the homology of $B_k(M)$ from the homology of $\mathrm{Conf}_{S^m}(M)\simeq \Gamma_c(M; E_{S^m})$ is the difference between $\mathrm{Conf}_{S^m}(M)$ and the graded space associated to the cardinality filtration. Happily, a result of \cite{CohenMayTaylor:SCSC,Boedigheimer:SSMS} guarantees that, at the level of homology, this difference is no difference at all. \subsection{Stable splitting}\label{section:stable splittings} In this section, we prove the following result. See Appendix \ref{appendix:Spanier--Whitehead} for an explanation of unfamiliar terms. \begin{theorem}[B\"{o}digheimer, Cohen-May-Taylor]\label{thm:stable splitting} There is a filtered stable weak equivalence \[\mathrm{Conf}_X(M,M_0)\xrightarrow{\sim_s}\bigvee_{k\geq1}\mathrm{Conf}_k(M,M_0)\wedge_{\Sigma_k}X^{\wedge k}\] for any manifold $M$, submanifold $M_0$, and pointed CW complex $X$. 
\end{theorem} Since filtered stable weak equivalences induce isomorphisms on homology by Corollary \ref{cor:filtered stable weak equivalence homology}, this result will be sufficient to allow us to proceed with our computation. There are other lenses through which to view this result aside from this application to configuration spaces; indeed, one of its appealing features is that, through the scanning map, it allows information to flow equally in the other direction, implying stable splittings for many familiar mapping spaces. \begin{example}[Snaith] For $X$ connected, we have \[\Omega^n\Sigma^nX\xleftarrow{\sim} \mathrm{Conf}_X(D^n)\xrightarrow{\sim_s} \bigvee_{k\geq1}\mathrm{Conf}_k(D^n)_+\wedge_{\Sigma_k}X^{\wedge k}.\] \end{example} \begin{example}[Goodwillie] For $X$ connected, we have \[\Lambda\Sigma X\xleftarrow{\sim} \mathrm{Conf}_X(S^1)\xrightarrow{\sim_s} \bigvee_{k\geq1}S^1_+\wedge_{C_k}X^{\wedge k}.\] \end{example} \begin{example}[\cite{Boedigheimer:SSMS}] Let $K$ be a finite CW complex. For connected $X$, the mapping space $\mathrm{Map}(K,\Sigma^mX)$ splits stably for sufficiently large $m$. Indeed, choosing $M\simeq K$ with $M$ a parallelizable $m$-manifold, we have \begin{align*} \mathrm{Map}(K,\Sigma^mX)&\simeq \mathrm{Map}(\mathring{M}, \Sigma^mX)\\ &\simeq\Gamma(\mathring{M}; E_X)\\ &\simeq \mathrm{Conf}_X(M,\partial M)\\ &\simeq_s \bigvee_{k\geq1}\mathrm{Conf}_k(M,\partial M)\wedge_{\Sigma_k} X^{\wedge k}. \end{align*} \end{example} For the remainder of this section, we set $V_k=\bigvee_{i=1}^k\mathrm{Conf}_i(M,M_0)\wedge_{\Sigma_i}X^{\wedge i}$. 
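Unraveling this notation in low weight, $V_1=\mathrm{Conf}_1(M,M_0)\wedge X$, which one may identify with $(M/M_0)\wedge X$, while in the absolute case $M_0=\varnothing$ the wedge summands are exactly the subquotients $\mathrm{Conf}_k(M)_+\wedge_{\Sigma_k}X^{\wedge k}$ of the cardinality filtration recalled at the beginning of this section.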
To begin with, we seek to find a non-decreasing sequence $\{r_k\}_{k\geq1}$ of non-negative integers and a collection of maps $\Sigma^{r_k}\mathrm{Conf}_X(M,M_0)_{\leq k}\to \Sigma^{r_k} V_k$ making each of the diagrams \[\xymatrix{ \Sigma^{r_{k}}\mathrm{Conf}_X(M,M_0)_{\leq k}\ar[r]& \Sigma^{r_{k}} V_{k}\\ \Sigma^{r_{k}}\mathrm{Conf}_X(M,M_0)_{\leq k-1}\ar[u]\ar[r]& \Sigma^{r_{k}} V_{k-1}\ar[u] }\] commute up to homotopy. Using the adjunction between loops and suspension and the fact that, by scanning, \[\Omega^r\Sigma^rA\simeq \mathrm{Conf}_A(\mathbb{R}^r)\implies \Omega^\infty\Sigma^\infty A\simeq \mathrm{Conf}_A(\mathbb{R}^\infty),\] it will suffice to produce the commuting diagram \[\xymatrix{ \mathrm{Conf}_X(M,M_0)\ar[r]^-\Phi&\mathrm{Conf}_V(\mathbb{R}^\infty)\\ \vdots\ar[u]&\vdots\ar[u]\\ \mathrm{Conf}_X(M,M_0)_{\leq k}\ar[u]\ar[r]^-{\Phi_k}&\mathrm{Conf}_{V_k}(\mathbb{R}^{r_k})\ar[u]\\ \mathrm{Conf}_X(M,M_0)_{\leq k-1}\ar[r]^-{\Phi_{k-1}}\ar[u]&\mathrm{Conf}_{V_{k-1}}(\mathbb{R}^{r_{k-1}})\ar[u]\\ \vdots\ar[u]&\vdots\ar[u] }\] \begin{construction}\label{construction:power set map} Choose an embedding $\varphi:\coprod_{k\geq0}B_k(M)\to \mathbb{R}^\infty$ such that $\varphi|_{B_k(M)}$ factors through $\mathbb{R}^{r_k}$. We define $\Phi:\mathrm{Conf}_X(M,M_0)\to \mathrm{Conf}_V(\mathbb{R}^\infty)$ by the formula \[\Phi\left(\sum_{a\in A}m_ax_a\right)=\sum_{\varnothing\neq I\subseteq A}\varphi\left(\{m_a\}_{a\in I}\right)\cdot\left[\sum_{a\in I}m_ax_a\right]_{|I|},\] where $[-]_k:\mathrm{Conf}_X(M,M_0)_{\leq k}\to \mathrm{Conf}_k(M,M_0)\wedge_{\Sigma_k}X^{\wedge k}=V_k/V_{k-1}$ is the quotient map. \end{construction} \begin{lemma} The map $\Phi$ is well-defined. \end{lemma} \begin{proof} First, we note that $\varphi(\{m_a\}_{a\in I})\neq \varphi(\{m_a\}_{a\in J})$ for $I\neq J$, so the set $\left\{\varphi(\{m_a\}_{a\in I})\right\}_{\varnothing\neq I\subseteq A}$ does in fact define an element of $B_{2^{|A|}-1}(\mathbb{R}^\infty)$.
Indeed, if $I\neq J$, then $\{m_a\}_{a\in I}\neq \{m_a\}_{a\in J}$, since $m_a\neq m_b$ if $a\neq b$, and $\varphi$ is injective. Second, we verify that the definition of $\Phi$ is independent of the choice of representative of a labeled configuration by an expression of the form $\sum_{a\in A}m_ax_a$. Indeed, any two representatives differ by a term of the form $m_bx_b$ with $m_b\in M_0$ or $x_b=x_0$, and $\left[\sum_{a\in I} m_ax_a\right]_{|I|}=0$ whenever $b\in I$; therefore, we have \begin{align*}\Phi\left(\sum_{a\in A}m_ax_a\right)&=\sum_{\varnothing\neq I\subseteq A\setminus \{b\}}\varphi\left(\{m_a\}_{a\in I}\right)\cdot\left[\sum_{a\in I}m_ax_a\right]_{|I|}+\sum_{b\in I\subseteq A}\varphi\left(\{m_a\}_{a\in I}\right)\cdot0\\&=\sum_{\varnothing\neq I\subseteq A\setminus \{b\}}\varphi\left(\{m_a\}_{a\in I}\right)\cdot\left[\sum_{a\in I}m_ax_a\right]_{|I|}\\ &=\Phi\left(\sum_{b\neq a\in A}m_ax_a\right), \end{align*} as claimed. \end{proof} Note that, by construction, $\Phi|_{\mathrm{Conf}_X(M,M_0)_{\leq k}}$ factors through a map $\Phi_k:\mathrm{Conf}_X(M,M_0)_{\leq k}\to \mathrm{Conf}_{V_k}(\mathbb{R}^{r_k})$ as in the diagram displayed above. \begin{lemma}\label{lem:phi compatibility} For each $k\geq1$, the diagram \[\xymatrix{ \mathrm{Conf}_X(M,M_0)_{\leq k}\ar[d]_-{[-]_k}\ar[rr]^-{\Phi_k}&&\mathrm{Conf}_{V_k}(\mathbb{R}^{r_k})\ar[d]\\ V_k/V_{k-1}\ar@{=}[r]&\mathrm{Conf}_{V_k/V_{k-1}}(\mathbb{R}^0)\ar[r]&\mathrm{Conf}_{V_k/V_{k-1}}(\mathbb{R}^{r_k}) }\] commutes up to homotopy. In the case $k=1$, we adopt the convention that $V_0$ is the basepoint, so that $V_1/V_0=V_1$. \end{lemma} \begin{proof} The clockwise composite is the map \[\sum_{a\in A}m_ax_a\mapsto \begin{cases} \varphi\left(\{m_a\}_{a\in A}\right)\cdot\left[\sum_{a\in A}m_ax_a\right]_k&\quad |A|=k\text{ and }m_a\notin M_0,\, x_a\neq x_0 \text{ for } a\in A\\ 0&\quad\text{otherwise.} \end{cases}\] Since $\varphi$ is homotopic to the constant map to the origin, the claim follows.
\end{proof} The last ingredient is the following. \begin{lemma}\label{lem:labeled cofibrations} For any manifold $M$, submanifold $M_0$, and pointed CW complex $X$, the inclusion $\mathrm{Conf}_X(M,M_0)_{\leq k-1}\to \mathrm{Conf}_X(M,M_0)_{\leq k}$ is a Hurewicz cofibration. \end{lemma} Before proving this fact, we pause to deduce the theorem. \begin{proof}[Proof of Theorem \ref{thm:stable splitting}] By Lemma \ref{lem:phi compatibility}, we have the homotopy commutative diagram \[\xymatrix{ \mathrm{Conf}_X(M,M_0)_{\leq 1}\ar@{=}[d]\ar[rr]^-{\Phi_1}&&\mathrm{Conf}_{V_1}(\mathbb{R}^{r_1})\ar@{=}[d]\\ V_1\ar@{=}[r]&\mathrm{Conf}_{V_1}(\mathbb{R}^0)\ar[r]&\mathrm{Conf}_{V_1}(\mathbb{R}^{r_1}), }\] so the composite $\mathrm{Conf}_X(M,M_0)_{\leq 1}\to \mathrm{Conf}_{V_1}(\mathbb{R}^{r_1})\simeq \Omega^{r_1}\Sigma^{r_1}V_1$ is homotopic to the unit map. It follows that the induced map $\mathrm{Conf}_X(M,M_0)_{\leq 1}\to V_1$ in the Spanier--Whitehead category is the identity and in particular an isomorphism. Next, using the same lemma, together with the commutativity of the large diagram above, we find that the induced diagram \[\xymatrix{ \mathrm{Conf}_X(M,M_0)_{\leq k-1}\ar[d]\ar[r]&\mathrm{Conf}_X(M,M_0)_{\leq k}\ar[d]\ar[r]&V_k/V_{k-1}\ar@{=}[d]\\ V_{k-1}\ar[r]&V_k\ar[r]&V_k/V_{k-1} }\] in the Spanier--Whitehead category commutes. The lefthand vertical arrow is an isomorphism by induction, so Lemma \ref{lem:five lemma} will allow us to conclude that the middle arrow is an isomorphism as long as we are assured that the top row is a cofiber sequence, which follows from Lemma \ref{lem:labeled cofibrations}. Thus, the adjoints of the maps $\mathrm{Conf}_X(M,M_0)_{\leq k}\xrightarrow{\Phi_k} \mathrm{Conf}_{V_k}(\mathbb{R}^{r_k})\simeq \Omega^{r_k}\Sigma^{r_k}V_k$ constitute the data of a filtered stable weak equivalence, as claimed. 
\end{proof} \begin{remark} The proof of Theorem \ref{thm:stable splitting} given here is the classical one following \cite{Boedigheimer:SSMS} and \cite{CohenMayTaylor:SCSC}. More modern arguments have since become available. For one such argument, which uses hypercover-type techniques like those of \S\ref{section:covering theorems}, see \cite{Bandklayder:SSNPVNPD}. \end{remark} We turn now to the proof of Lemma \ref{lem:labeled cofibrations}. Recall that the condition for $A\to B$ to be a Hurewicz cofibration is the existence of a solution to all lifting problems of the form \[\xymatrix{ A\ar[d]\ar[r]&Y^I\ar[d]\\ B\ar@{-->}[ur]\ar[r]& Y, }\] where $Y$ is an arbitrary topological space. \begin{definition} A pair of spaces $(B,A)$ is a \emph{neighborhood deformation retract} (NDR) pair if there exist maps $f:B\to I$ and $H:B\times I\to B$ such that \begin{enumerate} \item $f^{-1}(0)=A$, \item $H(b,0)=b$ for all $b\in B$ and $H(a,t)=a$ for all $(a,t)\in A\times I$, and \item $H(b,1)\in A$ if $f(b)<1$. \end{enumerate} \end{definition} We record the following standard facts about NDR pairs---see \cite[A]{May:GILS}, for example. As a matter of notation, given a subspace $A\subseteq B$, we write \[Z_k(A):=\left\{(x_1,\ldots, x_k)\in B^k: \{x_i\}_{i=1}^k\cap A\neq \varnothing\right\}\subseteq B^k.\] \begin{proposition}\label{prop:NDR facts} \begin{enumerate} \item If $B$ is Hausdorff, then the inclusion of the closed subspace $A$ is a Hurewicz cofibration if and only if $(B,A)$ is an NDR pair. \item If $(B,A)$ and $(B',A')$ are NDR pairs, then so is $(B\times B', B\times A'\cup A\times B')$. \item If $(B,A)$ is an NDR pair, then $(B^k, Z_k(A))$ is a $\Sigma_k$-equivariant NDR pair. \end{enumerate} \end{proposition} \begin{proof}[Proof of Lemma \ref{lem:labeled cofibrations}] Since $M_0\subseteq M$ is a submanifold and $x_0\in X$ is a 0-cell, both $(M,M_0)$ and $(X,x_0)$ are NDR pairs.
Thus, by Proposition \ref{prop:NDR facts}(3), $(M^k, Z_k(M_0))$ and $(X^k, Z_k(x_0))$ are $\Sigma_k$-equivariant NDR pairs, and, by means of a tubular neighborhood, we may further arrange that the map $H_t:M^k\to M^k$ is injective for each $0\leq t<1$; therefore, $(\mathrm{Conf}_k(M), Z_k(M_0)\cap \mathrm{Conf}_k(M))$ is a $\Sigma_k$-equivariant NDR pair. Thus, by point (2), the pair \[\left(\mathrm{Conf}_k(M)\times X^k, \mathrm{Conf}_k(M)\times Z_k(x_0)\cup \left(Z_k(M_0)\cap \mathrm{Conf}_k(M)\right)\times X^k\right)\] is an NDR pair. It follows that the top arrow in the pushout diagram \[\xymatrix{ \mathrm{Conf}_k(M)\times_{\Sigma_k} Z_k(x_0)\cup \left(Z_k(M_0)\cap \mathrm{Conf}_k(M)\right)\times_{\Sigma_k} X^k\ar[r]\ar[d]&\mathrm{Conf}_k(M)\times_{\Sigma_k}X^k\ar[d]\\ \mathrm{Conf}_X(M,M_0)_{\leq k-1}\ar[r]&\mathrm{Conf}_X(M,M_0)_{\leq k} }\] is the inclusion of an NDR pair and hence a Hurewicz cofibration, which implies that the bottom arrow is a Hurewicz cofibration as well. \end{proof} \subsection{Homology decomposition} Having completed the proof of the theorem on stable splittings, our next task is to examine its implications for the homology of configuration spaces. Fix a field $\mathbb{F}$ and write $H_*=H_*(-;\mathbb{F})$. We will prove the following theorem of \cite{BoedigheimerCohenTaylor:OHCS}. \begin{theorem}[B\"{o}digheimer-Cohen-Taylor]\label{thm:labeled conf homology} Let $M$ be a compact $n$-manifold, possibly with boundary. For any $r>1$, there is an isomorphism of bigraded vector spaces \[H_*(\mathrm{Conf}_{S^r}(M))\cong \bigotimes_{i=0}^nH_*(\Omega^{n-i}S^{n+r})^{\otimes\dim H_i(M)}\] provided $r+n$ is odd or $\mathbb{F}$ is of characteristic 2.
\end{theorem} The auxiliary grading referred to in the statement is determined on the lefthand side by the isomorphism \[H_*(\mathrm{Conf}_{S^r}(M))\cong\bigoplus_{k\geq1} H_*(\mathrm{Conf}_k(M)_+\wedge_{\Sigma_k}S^{rk})\] and on the righthand side by the isomorphisms \begin{align*}H_*(\Omega^{n-i}S^{n+r})&\cong H_*(\mathrm{Conf}_{S^r}(D^{i}\times D^{n-i}, \partial D^i\times D^{n-i}))\\&\cong \bigoplus_{k\geq1}H_*(\mathrm{Conf}_k(S^i\wedge D^{n-i}_+, *)\wedge_{\Sigma_k}S^{rk})\end{align*} provided by scanning and stable splitting. \begin{remark} The obvious relative statement relating $\mathrm{Conf}_{S^r}(M,M_0)$ and $H_*(M,M_0)$ is also true, but we will not need to use it. For the necessary alterations in the argument, see \cite[3.5]{BoedigheimerCohenTaylor:OHCS}. \end{remark} \begin{remark} By a different argument, the theorem can also be shown to hold in the case $r=1$---see \cite[4.5]{BoedigheimerCohenTaylor:OHCS}. The theorem is false for $r=0$. \end{remark} For the remainder of the lecture, we implicitly assume that the hypotheses of the theorem are satisfied. The strategy will be to proceed by induction on a handle decomposition of $M$ using exactness and the Serre spectral sequence. We may assume that $M$ is connected; then, in the base case $M=D^n$, we have $\mathrm{Conf}_{S^r}(D^n)\simeq\Omega^nS^{n+r},$ and the theorem holds without assumption on the parity of $r+n$ or the characteristic of the field. For the induction step, we write $\overline M=M\cup_\varphi D^i\times D^{n-i}$ with $i>0$ and consider the following two cases. Assume first that $H_i(\overline M)$ surjects onto $H_i(\overline M,M)\cong \widetilde H_i(S^i)$. Since $r>0$, the sphere $S^r$ is connected, so exactness provides us with the homotopy pullback square \[\xymatrix{\mathrm{Conf}_{S^r}(M)\ar[d]\ar[r]&\mathrm{Conf}_{S^r}(\overline M)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_{S^r}(\overline M, M).
}\] In the lower left corner, we have \begin{align*} \mathrm{Conf}_{S^r}(\overline M, M)&\cong \mathrm{Conf}_{S^r}(D^i\times D^{n-i}, \partial D^i\times D^{n-i})\\ &\simeq \mathrm{Map}\left(\mathring{D}^i\times D^{n-i}, \mathring{D}^i\times \partial D^{n-i}, S^{n+r}\right)\\ &\simeq \Omega^{n-i}S^{n+r}, \end{align*} which is simply connected, since \[\pi_1(\Omega^{n-i}S^{n+r})\cong\pi_{n-i+1}(S^{n+r})=0\] for $n+r\geq n+1>n-i+1$. Thus, the action on the homology of the homotopy fiber is trivial. Since we are working over a field, we conclude that, in the homology Serre spectral sequence for this homotopy pullback, we have \begin{align*} E^2&\cong H_*(\Omega^{n-i}S^{n+r})\otimes H_*(\mathrm{Conf}_{S^r}(M))\\ &\cong H_*(\Omega^{n-i}S^{n+r})\otimes\bigotimes_{j=0}^nH_*(\Omega^{n-j}S^{n+r})^{\otimes\dim H_j(M)}\\ &\cong \bigotimes_{j=0}^nH_*(\Omega^{n-j}S^{n+r})^{\otimes\dim H_j(\overline{M})}, \end{align*} where the second isomorphism uses the inductive hypothesis and the third our assumption on the homology of $\overline M$. Thus, in order to prove the theorem for $\overline M$, it will suffice to show that the Serre spectral sequence collapses at $E^2$. Assume instead that the map $H_i(\overline M)\to H_i(\overline M, M)$ is trivial. Proceeding as before, and looping the resulting homotopy pullback once, we obtain the homotopy pullback square \[\xymatrix{ \Omega\,\mathrm{Conf}_{S^r}(\overline M, M)\ar[d]\ar[r]&\mathrm{Conf}_{S^r}(M)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_{S^r}(\overline M). }\] We now use the assumption that $r>1$. \begin{lemma} For $r>1$, $\mathrm{Conf}_{S^r}(\overline M)$ is simply connected. \end{lemma} \begin{proof} We again proceed by induction over a handle decomposition. Assuming the claim is known for $M$, the exact sequence \[\cdots\to \pi_1(\mathrm{Conf}_{S^r}(M))\to \pi_1(\mathrm{Conf}_{S^r}(\overline M))\to \pi_1(\Omega^{n-i}S^{n+r})\to \cdots\] implies the claim for $\overline M$.
For the base case, we note that \[\pi_1(\mathrm{Conf}_{S^r}(D^n))\cong \pi_1(\Omega^nS^{n+r})\cong \pi_{n+1}(S^{n+r})=0,\] since $r>1$. \end{proof} Thus, in this second case as well, the action on the homology of the homotopy fiber is trivial, and we have \begin{align*} E^2\cong H_*(\mathrm{Conf}_{S^r}(\overline M))\otimes H_*(\Omega^{n-i+1}S^{n+r})\implies H_*(\mathrm{Conf}_{S^r}(M))\cong \bigotimes_{j=0}^nH_*(\Omega^{n-j}S^{n+r})^{\otimes\dim H_j(M)}, \end{align*} where the inductive hypothesis has been used in identifying the $E^\infty$ page. Thus, in order to prove the theorem for $\overline M$, it will again suffice to show that the Serre spectral sequence collapses at $E^2$. Assuming the collapse of these spectral sequences, the final step in the proof of Theorem \ref{thm:labeled conf homology} is to identify the extra gradings on each side. In order to do so, we note that the maps \[\mathrm{Conf}_{S^r}(M)\to \mathrm{Conf}_{S^r}(\overline M)\to \mathrm{Conf}_{S^r}(\overline M,M)\] are compatible with the respective filtrations by cardinality, so this filtration gives rise to a third grading at the level of the Serre spectral sequence. Since this grading is precisely the auxiliary grading referred to in the statement of the theorem, and since the isomorphism in question was established by observing the spectral sequence to collapse, it follows that the isomorphism is compatible with this auxiliary grading. In summary, we have reduced the theorem to the following result. \begin{lemma}\label{lem:collapse lemma} Let $\overline M=M\cup_\varphi D^i\times D^{n-i}$. \begin{enumerate} \item If $H_i(\overline M)\to H_i(\overline M,M)$ is surjective, then the homological Serre spectral sequence for the fiber sequence \[\mathrm{Conf}_{S^r}(M)\to \mathrm{Conf}_{S^r}(\overline M)\to \mathrm{Conf}_{S^r}(\overline M,M)\] collapses at $E^2$.
\item If $H_i(\overline M)\to H_i(\overline M,M)$ is zero, then the homological Serre spectral sequence for the fiber sequence \[ \Omega\,\mathrm{Conf}_{S^r}(\overline M,M)\to\mathrm{Conf}_{S^r}(M)\to \mathrm{Conf}_{S^r}(\overline M)\] collapses at $E^2$. \end{enumerate} \end{lemma} We will prove part (1) only, as the proof of part (2) is nearly identical---see \cite[3.4]{BoedigheimerCohenTaylor:OHCS} for full details. Our strategy in proving this result will be to relate the spectral sequence in question to a second spectral sequence, whose collapse will be easier to prove. Recall from the proof of Theorem \ref{thm:stable splitting} that we have the map \[\eta_M:\mathrm{Conf}_{S^r}(M)\xrightarrow{\Phi}\mathrm{Conf}_V(\mathbb{R}^\infty)\to \mathrm{Conf}_{M_+\wedge S^r}(\mathbb{R}^\infty)\simeq \Omega^\infty\Sigma^\infty(M_+\wedge S^r),\] whose adjoint we might think of conceptually as comparing the ``homology theory'' of labeled configuration spaces to ordinary homology. In order to exploit this map, we require the following simple observation. \begin{lemma}\label{lem:Q and fiber sequences} If $A\to B\xrightarrow{f} C$ is a cofiber sequence, then \[\Omega^\infty\Sigma^\infty A\to \Omega^\infty\Sigma^\infty B\to \Omega^\infty\Sigma^\infty C\] is a fiber sequence. \end{lemma} \begin{proof} From our previous observation that $\pi_i\circ \Omega^\infty\Sigma^\infty\cong\pi_i^s$, the long exact sequence in stable homotopy associated to a cofiber sequence, and the long exact sequence in homotopy associated to a fiber sequence, we obtain the commuting diagram \[\xymatrix{ \cdots\ar[r]&\pi_{i+1}(\Omega^\infty\Sigma^\infty C)\ar@{=}[d]\ar[r]&\pi_i(\Omega^\infty\Sigma^\infty A)\ar[d]\ar[r]& \pi_i(\Omega^\infty\Sigma^\infty B)\ar@{=}[d]\ar[r]&\cdots\\ \cdots\ar[r]&\pi_{i+1}(\Omega^\infty\Sigma^\infty C)\ar[r]&\pi_i(\mathrm{hofiber}\,f)\ar[r]&\pi_i(\Omega^\infty\Sigma^\infty B)\ar[r]&\cdots }\] with exact rows. 
It follows from the five lemma that the canonical map from $\Omega^\infty\Sigma^\infty A$ to the homotopy fiber is a weak equivalence, as claimed. \end{proof} Therefore, since $M_+\to \overline M_+\to \overline M/M$ is a cofiber sequence, we obtain a map of fiber sequences as depicted in the following commuting diagram \[\xymatrix{ \mathrm{Conf}_{S^r}(M)\ar[d]\ar[r]&\Omega^\infty\Sigma^\infty(M_+\wedge S^r)\ar[d]\\ \mathrm{Conf}_{S^r}(\overline M)\ar[d]\ar[r]&\Omega^\infty\Sigma^\infty(\overline M_+\wedge S^r)\ar[d]\\ \mathrm{Conf}_{S^r}(\overline M,M)\ar[r]&\Omega^\infty\Sigma^\infty(\overline M/M\wedge S^r), }\] and thereby a map at the level of Serre spectral sequences \[E(\mathrm{Conf})\to E(\Omega^\infty\Sigma^\infty).\] This map is the desired comparison map. The following lemma asserts that its target has the desired property. \begin{lemma}\label{lem:surjective collapse} If $H_*(\overline M)\to H_*(\overline M,M)$ is surjective, then the spectral sequence $E(\Omega^\infty\Sigma^\infty)$ collapses at $E^2$. \end{lemma} In order to prove this auxiliary collapse result, we make use of the following observation. \begin{lemma}\label{lem:Q and surjections} The functor $\Omega^\infty\Sigma^\infty$ preserves homology surjections. \end{lemma} This result, in turn, relies on the following basic fact. \begin{lemma}\label{lem:conf classifying space} The space $\mathrm{Conf}_k(\mathbb{R}^\infty)$ is contractible for every $k\geq0$. \end{lemma} \begin{proof} We proceed by induction on $k$, the base case $k\in \{0,1\}$ being obvious. 
By Fadell--Neuwirth, we have a homotopy pullback square \[\xymatrix{\mathbb{R}^n\setminus\{x_1,\ldots, x_k\}\ar[d]\ar[r]&\mathrm{Conf}_{k+1}(\mathbb{R}^n)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_k(\mathbb{R}^n) }\] for each $n\geq0$, and thus, in the limit, we obtain the homotopy pullback square \[\xymatrix{\mathbb{R}^\infty\setminus\{x_1,\ldots, x_k\}\ar[d]\ar[r]&\mathrm{Conf}_{k+1}(\mathbb{R}^\infty)\ar[d]\\ \mathrm{pt}\ar[r]&\mathrm{Conf}_k(\mathbb{R}^\infty). }\] The space in the upper left corner is homotopy equivalent to $\bigvee_kS^\infty$, which is contractible, and the claim follows from the long exact sequence in homotopy. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:Q and surjections}] We give the proof under the assumption that the source and target of the homology surjection are both connected, which is the only case that we will use. The general case may be treated in the same way with group completion techniques. Let $f:X\to Y$ be a homology surjection between connected pointed spaces. We have the chain of isomorphisms \begin{align*} H_*(\Omega^\infty\Sigma^\infty X)&\cong H_*(\mathrm{Conf}_X(\mathbb{R}^\infty))\\ &\cong \bigoplus_{k\geq0} H_*(\mathrm{Conf}_k(\mathbb{R}^\infty)_+\wedge_{\Sigma_k}X^{\wedge k})\\ &\cong \bigoplus_{k\geq0} H_*(\Sigma_k; \widetilde H_*(X)^{\otimes k}), \end{align*} and similarly for $Y$, where the first isomorphism uses connectivity and scanning, the second uses stable splitting, and the third uses Lemma \ref{lem:conf classifying space}, the fact that the action of $\Sigma_k$ on $\mathrm{Conf}_k(\mathbb{R}^\infty)$ is free, and the K\"{u}nneth isomorphism. Since we are working over a field, the surjection $\widetilde H_*(X)\to \widetilde H_*(Y)$ splits, and this splitting induces an equivariant splitting of $\widetilde H_*(X)^{\otimes k}\to \widetilde H_*(Y)^{\otimes k}$ and hence a splitting at the level of group homology.
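In more detail, since we work over a field, a choice of section $s:\widetilde H_*(Y)\to \widetilde H_*(X)$ of $f_*$ gives $\Sigma_k$-equivariant maps satisfying $f_*^{\otimes k}\circ s^{\otimes k}=(f_*\circ s)^{\otimes k}=\mathrm{id}$, so that the composite \[H_*\left(\Sigma_k;\widetilde H_*(Y)^{\otimes k}\right)\xrightarrow{H_*(\Sigma_k;\,s^{\otimes k})} H_*\left(\Sigma_k;\widetilde H_*(X)^{\otimes k}\right)\xrightarrow{H_*(\Sigma_k;\,f_*^{\otimes k})} H_*\left(\Sigma_k;\widetilde H_*(Y)^{\otimes k}\right)\] is the identity; in particular, the map induced by $f$ is surjective in each summand, as required.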
\end{proof} \begin{proof}[Proof of Lemma \ref{lem:surjective collapse}] In the commuting diagram \[\xymatrix{ H_*(\Omega^\infty\Sigma^\infty(\overline M_+\wedge S^r))\ar@{->>}[d]\ar[r]&H_*(\Omega^\infty\Sigma^\infty(\overline M/M\wedge S^r))\ar@{=}[d]^-\wr\\ E^\infty_{*,0}\ar[r]&E^2_{*,0}, }\] the bottom arrow is an injection, since $E^\infty_{*,0}=\bigcap_{r}\ker d^r_{*,0}$ (recall that the differential $d^r$ in the homological Serre spectral sequence has bidegree $(-r,r-1)$). Since the quotient map $\overline M\to \overline M/M$ induces a surjection on homology by assumption, it does so after smashing with $S^r$; therefore, the top map is a surjection by Lemma \ref{lem:Q and surjections}. It follows that the bottom map is a surjection and therefore an isomorphism, and we conclude that $d^r|_{E^r_{*,0}}$ vanishes for every $r\geq2$. Now, the functor $\Omega^\infty\Sigma^\infty$ takes values in homotopy associative and commutative $H$-spaces, so the spectral sequence in question is a spectral sequence of graded commutative algebras. By induction and the Leibniz rule, it now follows that $d^r$ is identically zero for all $r\geq2$, as desired. \end{proof} The other relevant piece of information about this comparison is the following fact. \begin{lemma}\label{lem:spectral sequence injective} The map $E^2(\mathrm{Conf})\to E^2(\Omega^\infty\Sigma^\infty)$ is injective. \end{lemma} Before demonstrating this injectivity, we use it to prove the desired collapse result. \begin{proof}[Proof of Lemma \ref{lem:collapse lemma}] We argue by induction, the base case being covered by Lemma \ref{lem:spectral sequence injective}: suppose that $E^2(\mathrm{Conf})\cong E^r(\mathrm{Conf})$ and that the map $E^r(\mathrm{Conf})\to E^r(\Omega^\infty\Sigma^\infty)$ is injective.
By Lemma \ref{lem:surjective collapse}, the image of $d^r(x)$ in $E^r(\Omega^\infty\Sigma^\infty)$ is zero for any $r\geq2$ and $x\in E^r(\mathrm{Conf})$; by injectivity, we conclude that $d^r$ is identically zero, so $E^2(\mathrm{Conf})\cong E^{r+1}(\mathrm{Conf})$ and $E^{r+1}(\mathrm{Conf})\to E^{r+1}(\Omega^\infty\Sigma^\infty)$ is injective. Thus, the spectral sequence collapses at $E^2$. \end{proof} \subsection{Loop space calculations} We will give the proof of Lemma \ref{lem:spectral sequence injective} under the assumption that $r+n$ is odd and $\mathbb{F}$ has characteristic 0. The argument in finite characteristic proceeds along similar lines, leveraging a more complicated computation of the homology of iterated loop spaces carried out in \cite{CohenLadaMay:HILS}. The simpler computation that we will use is the following. \begin{proposition}\label{prop:loop space odd sphere} If $m$ is odd and $\mathbb{F}$ is of characteristic 0, then, for any $0\leq k<m$, there is an isomorphism of graded vector spaces \[\mathrm{Sym}_\mathbb{F}(\alpha)\xrightarrow{\simeq}H_*(\Omega^kS^m),\] where $\alpha$ is the image of the class of the identity under the composite map \[\pi_m(S^m)\cong \pi_{m-k}(\Omega^k S^m)\to H_*(\Omega^k S^m).\] Moreover, for $k>0$, this isomorphism is an isomorphism of algebras. \end{proposition} We will return to this calculation in the next lecture. \begin{proof}[Proof of Lemma \ref{lem:spectral sequence injective}] In light of the commuting diagram \[\xymatrix{ E^2(\mathrm{Conf})\ar[d]\ar@{=}[r]^-\sim&H_*(\mathrm{Conf}_{S^r}(\overline M,M))\otimes H_*(\mathrm{Conf}_{S^r}(M))\ar[d]^-{H_*(\eta_{(\overline M,M)})\otimes H_*(\eta_M)}\\ E^2(\Omega^\infty\Sigma^\infty)\ar@{=}[r]^-\sim&H_*(\Omega^\infty\Sigma^\infty(\overline M/M\wedge S^r))\otimes H_*(\Omega^\infty\Sigma^\infty(M_+\wedge S^r)), }\] it suffices to show that $\eta_{(\overline M,M)}$ and $\eta_M$ are each injective on homology.
For the map $\eta_{(\overline M,M)}$, a brief consideration of the definition of the map $\Phi$, along the lines of earlier arguments, shows that the composite \[ \Omega^{n-i}\Sigma^{n-i}S^{r+i}\simeq \mathrm{Conf}_{S^r}(\overline M,M)\xrightarrow{\eta_{(\overline M,M)}} \mathrm{Conf}_{\overline M/M\wedge S^r}(\mathbb{R}^\infty)\simeq\Omega^\infty\Sigma^\infty S^{r+i} \] is homotopic to the canonical inclusion. Thus, the claim in this case follows from Proposition \ref{prop:loop space odd sphere}, since \begin{align*} H_*(\Omega^\infty\Sigma^\infty S^{r+i})&\cong \colim_\ell H_*(\Omega^\ell \Sigma^\ell S^{r+i})\\ &\cong \colim_s H_*(\Omega^{n-i+2s}\Sigma^{n-i+2s}S^{r+i})\\ &\cong \colim_s H_*(\Omega^{n-i+2s}S^{n+r+2s})\\ &\cong \colim_s\mathrm{Sym}_\mathbb{F}(\alpha_s)\\ &\cong\mathrm{Sym}_\mathbb{F}(\alpha_\infty), \end{align*} where $|\alpha_s|=|\alpha_\infty|=r+i$, and we have used that $n+r+2s$ is odd. The case of the map $\eta_M$ follows by induction on a handle decomposition, the base case of $D^n$ following as before from Proposition \ref{prop:loop space odd sphere}. \end{proof} Note that, in the characteristic 0 case, the injection $E^2(\mathrm{Conf})\to E^2(\Omega^\infty\Sigma^\infty)$ is in fact an isomorphism. We abstract the general features of the calculation behind Proposition \ref{prop:loop space odd sphere} in the following two lemmas. \begin{lemma} Let $\mathbb{F}$ be any field and $\{E^r\}$ a first-quadrant, homological, multiplicative spectral sequence over $\mathbb{F}$. If\begin{enumerate} \item $E^2_{*,*}\cong E^2_{*,0}\otimes E^2_{0,*}$, \item $E^2_{*,0}\cong\wedge_\mathbb{F}(x)$ with $|x|$ odd, and \item $E^\infty=E^\infty_{0,0}\cong\mathbb{F}$, \end{enumerate} then $E^2_{0,*}\cong\mathbb{F}[y]$ with $|y|=|x|-1$. \end{lemma} \begin{proof} We note first that $d^r|_{E_{*,0}}=0$ for $r<|x|$ by (1); therefore, by the Leibniz rule, $d^r\equiv0$ for $r<|x|$. 
Next, we note that $y:=d^{|x|}(x)\neq 0$, for otherwise $x$ is a permanent cycle, contradicting (3). By the Leibniz rule, we compute that \begin{align*} d^{|x|}(x\otimes y^\ell)&=d^{|x|}(x)y^\ell+(-1)^{|x|}xd^{|x|}(y^\ell)\\ &=y^{\ell+1}. \end{align*} Note that $y^{\ell+1}\neq 0$ for each $\ell$, since otherwise $x\otimes y^\ell$ is a permanent cycle, and the result follows, since there can be no further differentials and hence no further elements in $E^2_{0,*}$. \end{proof} \begin{lemma} Let $\mathbb{F}$ be a field of characteristic zero and $\{E^r\}$ a first-quadrant, homological, multiplicative spectral sequence over $\mathbb{F}$. If\begin{enumerate} \item $E^2_{*,*}\cong E^2_{*,0}\otimes E^2_{0,*}$, \item $E^2_{*,0}\cong \mathbb{F}[y]$ with $|y|$ even, and \item $E^\infty=E^\infty_{0,0}\cong\mathbb{F}$, \end{enumerate} then $E^2_{0,*}\cong\wedge_\mathbb{F}(x)$ with $|x|=|y|-1$. \end{lemma} \begin{proof} As in the previous argument, we find that $d^r\equiv0$ for $r<|y|$, and that $x:=d^{|y|}(y)\neq0$. By the Leibniz rule and induction, we have that \begin{align*} d^{|y|}(y^\ell)&=d^{|y|}(y)y^{\ell-1}+(-1)^{|y|} yd^{|y|}(y^{\ell-1})\\ &=\ell y^{\ell-1}\otimes x\\ &\neq 0, \end{align*} since $\mathbb{F}$ has characteristic zero. Since there can be no further differentials, the claim follows. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:loop space odd sphere}] We proceed by induction on $k$, the base case of $k=0$ being the observation that $H_*(S^m)\cong \wedge_\mathbb{F}(\alpha)$. For the induction step, we use the Serre spectral sequence for the homotopy pullback square \[\xymatrix{ \Omega^k S^m\ar[r]\ar[d]&P\,\Omega^{k-1}S^m\ar[d]\\ \mathrm{pt}\ar[r]&\Omega^{k-1}S^m, }\] where $P$ denotes the path space functor. Since the path space is contractible, and since, for $k>1$, this spectral sequence is multiplicative, one of the two lemmas applies, depending on the parity of $k$. 
To identify the generator $\alpha$, it suffices to note that $\alpha\neq0$ by the Hurewicz theorem, and that $|\alpha|=m-k$. In the case $k=1$, the spectral sequence is not multiplicative; rather, via naturality and the commutative diagram \[\xymatrix{ \Omega S^m\times\Omega S^m\ar[d]\ar[r]&\Omega S^m\ar[d]\\ \Omega S^m\times PS^m\ar[r]\ar[d]&PS^m\ar[d]\\ S^m\ar@{=}[r]&S^m }\] it is a spectral sequence of $H_*(\Omega S^m)$-modules. This structure is sufficient to imply the necessary equation \[d^m(x\otimes y^\ell)=d^m(x)y^\ell=y^{\ell+1}\] in this case as well. \end{proof} \begin{example} In finite characteristic $p$, the same calculation shows that $H_*(\Omega S^m)\cong \mathbb{F}[y]$. For $\Omega^2S^m$, however, the argument breaks down, for now \[d^{m-1}(y^p)=py^{p-1}\otimes x=0,\] implying the existence of further elements \begin{align*} Q^1(y)&:=d^{p(m-1)}(y^p)\\ \beta Q^1(y)&=d^{(p-1)(m-1)}(y^{p-1}\otimes x) \end{align*} of degree $p(m-1)-1$ and $p(m-1)-2$, respectively. A systematic approach to these and other nontrivial higher differentials, and the resulting profusion of homology classes, is provided by the framework of homology operations for iterated loop spaces. The symbol $Q^1$ used above refers to a certain \emph{Dyer-Lashof} operation and the letter $\beta$ to the Bockstein operator, and it turns out that the homology of iterated loop spaces of spheres can be completely described in terms of composites of Dyer-Lashof operations and the Bockstein---see \cite{CohenLadaMay:HILS} for further details. \end{example} Our next move, having completed the proof of Theorem \ref{thm:labeled conf homology}, will be to exploit it in understanding the homology of ordinary configuration spaces. \subsection{Odd dimensional homology} Our strategy in proving Theorem \ref{thm:odd homology} will be to reinterpret the two sides of the isomorphism of Theorem \ref{thm:labeled conf homology}, keeping track of the extra grading.
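Before doing so, we pause for a quick machine check of the algebraic collapse mechanism used in the loop space calculations above. The following Python sketch (the encoding of the page and all names are ours, purely for illustration) builds a truncation of $\wedge_{\mathbb{F}}(x)\otimes\mathbb{F}[y]$ with differential $d(x\otimes y^\ell)=y^{\ell+1}$, as in the first of the two algebraic lemmas, and verifies over $\mathbb{Q}$ that its homology is one-dimensional, spanned by the class of $1$, in agreement with the requirement that $E^\infty\cong\mathbb{F}$ be concentrated in bidegree $(0,0)$.

```python
from fractions import Fraction

def rank(mat):
    """Rank over Q of a matrix given as a list of rows, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Truncated page wedge(x) (x) F[y]: basis y^0..y^(L+1) and x*y^0..x*y^L,
# with differential d(x * y^l) = y^(l+1) and d(y^l) = 0.
L = 10
basis = [("y", l) for l in range(L + 2)] + [("xy", l) for l in range(L + 1)]
idx = {b: i for i, b in enumerate(basis)}
d = [[0] * len(basis) for _ in basis]
for l in range(L + 1):
    d[idx[("xy", l)]][idx[("y", l + 1)]] = 1

rk = rank(d)
homology_dim = (len(basis) - rk) - rk  # dim ker d - dim im d
print(homology_dim)  # 1: only the class of y^0 = 1 survives
```

The truncation is harmless here because the differential pairs the two families of basis elements off exactly once, which is the content of the lemma.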
On the lefthand side of the isomorphism, as bigraded vector spaces, we have \begin{align*}H_*(\mathrm{Conf}_{S^r}(M))&\cong \mathbb{F}\oplus\widetilde H_*(\mathrm{Conf}_{S^r}(M))\\ &\cong \mathbb{F}\oplus \bigoplus_{k\geq1}\widetilde H_*\left(\mathrm{Conf}_k(M)_+\wedge_{\Sigma_k}S^{rk}\right)\\ &\cong\bigoplus_{k\geq0}\widetilde H_*\left(\mathrm{Conf}_k(M)_+\wedge_{\Sigma_k}S^{rk}\right) \end{align*} by stable splitting. Writing $\pi_{k,r}$ for the natural projection \[\pi_{k,r}:\mathrm{Conf}_k(M)\times_{\Sigma_k}\mathbb{R}^{rk}\to B_k(M),\] we have the following observation. \begin{lemma}\label{lem:thom space} The map $\pi_{k,r}$ is a vector bundle of rank $rk$, and there is a pointed homeomorphism \[\mathrm{Th}(\pi_{k,r})\cong \mathrm{Conf}_k(M)_+\wedge_{\Sigma_k} S^{rk}.\] \end{lemma} \begin{proof} For an open subset $U\subseteq M$ with $k$ connected components, each Euclidean, a choice of ordering $\tau:\{1,\ldots, k\}\cong \pi_0(U)$ gives rise to the commuting diagram \[\xymatrix{\displaystyle\coprod_{\sigma:\{1,\ldots, k\}\cong \pi_0(U)}\prod_{i=1}^kU_{\sigma(i)}\times\mathbb{R}^{rk}\ar[d]&\displaystyle\coprod_{\sigma:\{1,\ldots, k\}\cong \pi_0(U)}\mathrm{Conf}_k^0(U,\sigma)\times\mathbb{R}^{rk}\ar[l]_-{\simeq}\ar[d]\ar[r]&\mathrm{Conf}_k(M)\times\mathbb{R}^{rk}\ar[d]\\ \displaystyle\prod_{i=1}^kU_{\tau(i)}\times\mathbb{R}^{rk}\ar[d]&\displaystyle\left(\coprod_{\sigma:\{1,\ldots, k\}\cong \pi_0(U)}\mathrm{Conf}_k^0(U,\sigma)\times\mathbb{R}^{rk}\right)_{\Sigma_k}\ar[d]\ar[l]_-\simeq\ar[r]&\mathrm{Conf}_k(M)\times_{\Sigma_k}\mathbb{R}^{rk}\ar[d]^-{\pi_{k,r}}\\ \displaystyle\prod_{i=1}^k U_{\tau(i)}&B_k^0(U)\ar[l]_-\simeq\ar[r]&B_k(M), }\] in which the two righthand squares are pullback squares and the upper three vertical maps are projections to spaces of $\Sigma_k$-coinvariants. Since $B_k(M)$ is covered by open subsets of the form $B_k^0(U)$, it follows that $\pi_{k,r}$ is locally trivial.
Since addition and scalar multiplication in $\mathbb{R}^{rk}$ are $\Sigma_k$-equivariant, the linear structure of $\mathrm{Conf}_k(M)\times\mathbb{R}^{rk}$ descends to the quotient. For the second claim, we have \begin{align*} \mathrm{Th}(\pi_{k,r})&\cong D(\pi_{k,r})/S(\pi_{k,r})\\ &\cong \frac{\mathrm{Conf}_k(M)\times_{\Sigma_k}D^{rk}}{\mathrm{Conf}_k(M)\times_{\Sigma_k}S^{rk-1}}\\ &\cong \mathrm{Conf}_k(M)_+\wedge_{\Sigma_k}S^{rk}. \end{align*} \end{proof} In order to apply the Thom isomorphism, we require the following result. \begin{lemma}\label{lem:orientable} If $r$ is even, then $\pi_{k,r}$ is orientable. \end{lemma} \begin{proof} Since $r$ is even, any orientation of $\mathbb{R}^r$ induces a $\Sigma_k$-invariant orientation of $\mathbb{R}^{rk}$, whence of the trivial bundle $\mathrm{Conf}_k(M)\times \mathbb{R}^{rk}$. By $\Sigma_k$-invariance, this orientation descends to the quotient. \end{proof} \begin{remark} Again, characteristic 2 is exceptional. \end{remark} \begin{corollary}\label{cor:lhs} For $n$ odd and $r>1$ even, there is an isomorphism of bigraded vector spaces \[H_*(\mathrm{Conf}_{S^r}(M))\cong\bigoplus_{k\geq0} H_*(B_k(M))[rk].\] \end{corollary} As for the righthand side of the isomorphism of Theorem \ref{thm:labeled conf homology}, stable splitting gives us the following commuting diagram of isomorphisms \[\xymatrix{ H_*(\Omega^{n-i}S^{n+r})\ar@{=}[d]_-\wr& \displaystyle\bigoplus_{k\geq0}\widetilde H_*\left(\mathrm{Conf}_k(D^i\times D^{n-i}, \partial D^i\times D^{n-i})\wedge_{\Sigma_k}S^{rk}\right) \ar[l]_-\simeq\ar@{-->}[d]\\ H_*(\Omega^{n-i}\Sigma^{n-i}S^{r+i})&\displaystyle\bigoplus_{k\geq0}\widetilde H_*\left(\mathrm{Conf}_k(\mathbb{R}^{n-i})_+\wedge_{\Sigma_k}S^{k(r+i)}\right).\ar[l]_-\simeq & }\] All four terms of this diagram are naturally bigraded, and the horizontal morphisms are compatible with these bigradings. It is possible, by direct consideration of the scanning map, to show that the vertical arrows are also compatible with the bigradings.
Rather than proceed in this manner, however, we invoke the following result, which likewise assures us that we may work with the lower left and upper right corners interchangeably. \begin{lemma} For any manifold $N$ and $i,k\geq0$, there is a canonical, $\Sigma_k$-equivariant weak equivalence \[\Sigma^{ik}_+\mathrm{Conf}_k(N)\xrightarrow{\sim} \mathrm{Conf}_k\left(D^i\times N,\partial D^i\times N\right).\] \end{lemma} \begin{proof} The map is defined via the inclusion \[\Sigma_+^{ik}\mathrm{Conf}_k(N)\cong \frac{(D^i)^k\times\mathrm{Conf}_k(N)}{\partial (D^i)^k\times \mathrm{Conf}_k(N)}\xrightarrow{\simeq} U\subseteq \mathrm{Conf}_k\left(D^i\times N, \partial D^i\times N\right)\] of the open subspace $U$ consisting of the basepoint together with all nontrivial configurations whose coordinates in $N$ are distinct. Letting $V$ denote the open subspace consisting of the basepoint together with all nontrivial configurations with at least two distinct coordinates in $D^i$, it is clear that $\mathrm{Conf}_k(D^i\times N,\partial D^i\times N)=U\cup V$. Now, both $V$ and $U\cap V$ are contractible by radial expansion, so the inclusion $\iota$ of $U$ induces the weak equivalences \begin{align*} \iota^{-1}(U)&\xrightarrow{=} U\\ \iota^{-1}(V)=U\cap V&\xrightarrow{\sim} V\\ \iota^{-1}(U\cap V)&\xrightarrow{=}U\cap V. \end{align*} It follows from Proposition \ref{prop:mayer-vietoris} that $\iota$ is a weak equivalence. \end{proof} The final ingredient in the proof will be the following calculation.
\begin{lemma} For $n$ odd, $r>1$ even, $i\geq0$, and $\mathbb{F}$ of characteristic 0, \[\widetilde H_*\left(\mathrm{Conf}_k(\mathbb{R}^{n-i})_+\wedge_{\Sigma_k}S^{k(r+i)}\right)\cong\begin{cases} \mathbb{F}[k(r+i)]&\quad i \text{ even or } k\in\{0,1\}\\ 0&\quad \text{otherwise.} \end{cases}\] \end{lemma} \begin{proof} If $i$ is even, then $\pi_{k,r+i}$ is orientable, since $r$ is even, so the homology group in question is identified with \[H_*(B_k(\mathbb{R}^{n-i}))[k(r+i)]\cong\mathbb{F}[k(r+i)]\] by the Thom isomorphism, where we have used that $n-i$ is odd when $n$ is odd and $i$ is even. On the other hand, if $i$ is odd, then by the K\"{u}nneth theorem and the assumption on the characteristic of $\mathbb{F}$, we have \begin{align*}\widetilde H_*\left(\mathrm{Conf}_k(\mathbb{R}^{n-i})_+\wedge_{\Sigma_k}S^{k(r+i)}\right) &\cong H_*(\mathrm{Conf}_k(\mathbb{R}^{n-i}))\otimes_{\Sigma_k}\widetilde H_*(S^{k(r+i)})\\ &\cong H_*(\mathrm{Conf}_k(\mathbb{R}^{n-i}))\otimes_{\Sigma_k}\mathbb{F}^{\mathrm{sgn}}[k(r+i)], \end{align*} where $\mathbb{F}^\mathrm{sgn}$ denotes the sign representation of $\Sigma_k$. Earlier, we showed that the homology group $H_*(\mathrm{Conf}_k(\mathbb{R}^{n-i}))$ has a spanning set indexed by $k$-forests, where switching adjacent leaves of a tree introduces the sign $(-1)^{n-i}$, the degree of the antipodal map on $S^{n-i-1}$. Since $n$ and $i$ are both odd, this sign is $+1$. After tensoring with the sign representation, it follows that any transposition in $\Sigma_k$ acts by $-1$. Thus, in this case, \[H_*(\mathrm{Conf}_k(\mathbb{R}^{n-i}))\otimes_{\Sigma_k}\mathbb{F}^\mathrm{sgn}[k(r+i)]\cong\begin{cases} \mathbb{F}[k(r+i)]&\quad k\in\{0,1\}\\ 0&\quad\text{otherwise,} \end{cases}\] which completes the proof.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:odd homology}] Combining what we have done so far, we obtain the chain of isomorphisms of bigraded vector spaces \begin{align*} \bigoplus_{k\geq0} H_*(B_k(M))[kr]&\cong \bigotimes_{i=0}^n\left(\bigoplus_{\ell\geq0}H_*\left(\mathrm{Conf}_\ell(\mathbb{R}^{n-i})_+\wedge_{\Sigma_\ell}S^{\ell(r+i)}\right)\right)^{\otimes\dim H_i(M)}\\ &\cong\bigotimes_{i\text{ even}}\left(\bigoplus_{\ell\geq0}\mathbb{F}[\ell(r+i)]\right)^{\otimes \dim H_i(M)}\otimes\bigotimes_{i\text{ odd}}\left(\bigoplus_{\ell\in\{0,1\}}\mathbb{F}[\ell(r+i)]\right)^{\otimes\dim H_i(M)}\\ &\cong \bigotimes_{i\text{ even}}\mathrm{Sym}(\mathbb{F}[r+i])^{\otimes \dim H_i(M)}\otimes\bigotimes_{i\text{ odd}}\mathrm{Sym}(\mathbb{F}[r+i])^{\otimes\dim H_i(M)}\\ &\cong\mathrm{Sym}(H_\mathrm{even}(M)[r])\otimes\mathrm{Sym}(H_\mathrm{odd}(M)[r])\\ &\cong \mathrm{Sym}\left(H_*(M)[r]\right), \end{align*} where we have repeatedly used that $\mathrm{Sym}$ sends direct sums to tensor products. It follows that \[H_*(B_k(M))[kr]\cong \mathrm{Sym}^k(H_*(M))[kr],\] which completes the proof. \end{proof} \section{Mod $p$ cohomology}\label{section:mod p cohomology} In the previous section, we expressed the mod $p$ (co)homology of the unordered configuration spaces of a manifold of odd dimension in terms of the (co)homology of the spaces $\mathrm{Conf}_k(\mathbb{R}^n)_+\wedge_{\Sigma_k}S^{rk}$ for varying $k$, $n$, and $r$.
Therefore, reformulating a little using the K\"{u}nneth isomorphism, we see that this computation amounts to essentially two computations of cohomology with local coefficients; indeed, we have \begin{align*} H^*\left(\mathrm{Conf}_k(\mathbb{R}^n)_+\wedge_{\Sigma_k}S^{rk}\right)&\cong H\left(\mathrm{Map}_{\Sigma_k}\left(C_*(\mathrm{Conf}_k(\mathbb{R}^n)), \widetilde{C}^*(S^{rk})\right)\right)\\ &\cong H\left(\mathrm{Map}_{\Sigma_k}\left(C_*(\mathrm{Conf}_k(\mathbb{R}^n)), \mathbb{F}_p(r)\right)\right)[kr]\\ &=: H^*(B_k(\mathbb{R}^n);\mathbb{F}_p(r))[kr], \end{align*} where $\mathbb{F}_p(r)$ is the $\Sigma_k$-module defined by \[ \mathbb{F}_p(r)=\begin{cases}\mathbb{F}_p^\mathrm{triv}&\quad r\text{ even}\\ \mathbb{F}_p^\mathrm{sgn}&\quad r\text{ odd}. \end{cases}\] Our present goal is to describe the method behind this computation, which was first carried out in \cite[III]{CohenLadaMay:HILS}. For ease of exposition, we will focus on the case $r$ even, but the other case may be treated in an entirely parallel fashion. We also restrict to the case $k=p$, from which the other cases may be derived in a rather roundabout manner. \subsection{Outline of argument} The strategy is to leverage our complete understanding of the cohomology of the covering space $\mathrm{Conf}_k(\mathbb{R}^n)$ using the spectral sequence of Corollary \ref{cor:ss of a cover}.
From the commutative diagram of covering spaces \[\xymatrix{ \mathrm{Conf}_p(\mathbb{R}^n)\ar[d]\ar@{=}[r]&\mathrm{Conf}_p(\mathbb{R}^n)\ar[d]\ar[r]&\mathrm{Conf}_p(\mathbb{R}^\infty)\ar[d]\\ \mathrm{Conf}_p(\mathbb{R}^n)_{C_p}\ar[r]&B_p(\mathbb{R}^n)\ar[r]&B_p(\mathbb{R}^\infty), }\] we obtain the commutative diagram of fiber sequences \[\xymatrix{ \mathrm{Conf}_p(\mathbb{R}^n)\ar[d]\ar@{=}[r]&\mathrm{Conf}_p(\mathbb{R}^n)\ar[d]\ar[r]&\mathrm{pt}\ar[d]\\ \mathrm{Conf}_p(\mathbb{R}^n)_{C_p}\ar[d]\ar[r]&B_p(\mathbb{R}^n)\ar[d]\ar[r]&B\Sigma_p\ar@{=}[d]\\ BC_p\ar[r]&B \Sigma_p\ar@{=}[r]&B\Sigma_p, }\] and hence the commuting diagram of spectral sequences \[\xymatrix{ H^s(\Sigma_p)\ar[d]\ar@{=}[r]&H^s(\Sigma_p)\ar[d]\\ H^s(\Sigma_p; H^t(\mathrm{Conf}_p(\mathbb{R}^n)))\ar[d]\ar@{=>}[r]&H^{s+t}(B_p(\mathbb{R}^n))\ar[d]\\ H^s(C_p;H^t(\mathrm{Conf}_p(\mathbb{R}^n)))\ar@{=>}[r]&H^{s+t}(\mathrm{Conf}_p(\mathbb{R}^n)_{C_p}). }\] The utility of this maneuver lies in the following basic observation. \begin{lemma}\label{lem:principal bundle transfer} If $\pi:P\to X$ is a principal $G$-bundle and $H<G$ is a $p$-Sylow subgroup, then the induced map of spectral sequences is injective on $E_2$ and on $E_\infty$. \end{lemma} \begin{proof} On $E_\infty$, the map is induced by the projection $\pi_H:P/H\to X$, which is a $[G:H]$-fold cover; therefore, the composite of $\pi^*_H$ followed by the transfer map is multiplication by $[G:H]$, which is injective, since $H$ is $p$-Sylow. On $E_2$, the map is induced by the map $BH\to BG$, which is also a $[G:H]$-fold cover, and a similar argument in cohomology with local coefficients applies. \end{proof} Thus, the spectral sequence of immediate interest is a summand of the spectral sequence obtained by restriction to $C_p$. 
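To see why Lemma \ref{lem:principal bundle transfer} applies here, note that $C_p$ is a $p$-Sylow subgroup of $\Sigma_p$, of index $[\Sigma_p:C_p]=(p-1)!\equiv -1 \pmod p$ by Wilson's theorem, so the transfer composite is multiplication by a unit mod $p$. The following snippet (our own illustrative check, not part of the original argument) verifies this numerically:

```python
from math import factorial, gcd

def sylow_index_unit_mod_p(p):
    """Return the index [Sigma_p : C_p] = (p-1)! reduced mod p."""
    return factorial(p - 1) % p

# For every prime p below 50, the index is congruent to -1 mod p
# (Wilson's theorem), hence a unit, so multiplication by it -- the
# transfer composite -- is injective on mod-p (co)homology.
primes = [p for p in range(2, 50) if all(p % d for d in range(2, p))]
for p in primes:
    assert gcd(factorial(p - 1), p) == 1
    assert sylow_index_unit_mod_p(p) == p - 1
print("the index [Sigma_p : C_p] is a unit mod p for all tested primes")
```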
Regarding this spectral sequence, we have the following \begin{vanishingtheorem} In the spectral sequence for the cover $\mathrm{Conf}_p(\mathbb{R}^n)\to \mathrm{Conf}_p(\mathbb{R}^n)_{C_p}$, we have $E_2^{s,t}=0$ if $s>0$ and $0<t<(n-1)(p-1)$. \end{vanishingtheorem} By Lemma \ref{lem:principal bundle transfer}, then, the same vanishing range holds for the spectral sequence of primary interest. Moreover, as we calculated earlier, $H^*(\mathrm{Conf}_p(\mathbb{R}^n))=0$ for $*>(n-1)(p-1)$, so the spectral sequence is concentrated in the $s=0$ column in degree at most $(n-1)(p-1)$ and in the two rows $t=0$ and $t=(n-1)(p-1)$. As for the differentials, it follows from what has already been said that the only possible nonzero differentials are $d_r$ on the $s=0$ column for $2\leq r\leq (n-1)(p-1)$ and on the $t=(n-1)(p-1)$ row for $r=(n-1)(p-1)+1$. Moreover, we have the following simple, but crucial, observation: \begin{lemma}\label{lem:high differential} In the spectral sequence for the cover $\mathrm{Conf}_p(\mathbb{R}^n)\to B_p(\mathbb{R}^n)$, the differential $d_{(n-1)(p-1)+1}:E_{(n-1)(p-1)+1}^{s,(n-1)(p-1)}\to E_{(n-1)(p-1)+1}^{s+(n-1)(p-1)+1,0}$ is an isomorphism for $s>n+p-2$. \end{lemma} \begin{proof} Assuming otherwise, it follows that $E_\infty^{s+(n-1)(p-1)+1,0}\neq 0$, and hence that $B_p(\mathbb{R}^n)$ has non-vanishing cohomology in some degree strictly greater than $(n+p-2)+(n-1)(p-1)+1=np$. Since $B_p(\mathbb{R}^n)$ is a manifold of dimension $np$, this is a contradiction. \end{proof} Together with the following classical calculation, this observation will allow us to populate much of the $t=(n-1)(p-1)$ row. \begin{theorem}[Nakaoka] There is a commuting diagram \[\xymatrix{ H^*(C_p)\ar@{=}[r]^-\sim&\wedge_{\mathbb{F}_p}[u]\otimes \mathbb{F}_p[\beta u]&u(\beta u)^{p-2}\\ H^*(\Sigma_p)\ar@{=}[r]^-\sim\ar[u]&\wedge_{\mathbb{F}_p}[v]\otimes \mathbb{F}_p[\beta v]\ar[u]&v,\ar@{|->}[u] }\] where $|u|=1$ and $\beta$ denotes the mod $p$ Bockstein.
\end{theorem} In order to populate the remainder of this row, we make use of the following result, the proof of which is discussed in Appendix \ref{appendix:periodicity}. \begin{periodicitytheorem} Let $M$ be a $C_p$-module and $N$ a $\Sigma_p$-module. \begin{enumerate} \item Cup product with $\beta u$ induces an isomorphism \[H^s(C_p;M)\xrightarrow{\simeq} H^{s+2}(C_p; M).\] \item Cup product with $\beta v$ induces an isomorphism \[H^s(\Sigma_p;N)\xrightarrow{\simeq} H^{s+2(p-1)}(\Sigma_p; N).\] \end{enumerate} \end{periodicitytheorem} The final step is the identification of the $s=0$ column, which is simply the module of invariants for the action of $\Sigma_p$. For simplicity, we do not state the analogous result for $p=3$. \begin{invariantstheorem} For any prime $p>3$,\[I:=H^*(\mathrm{Conf}_p(\mathbb{R}^n))^{\Sigma_p}\cong\begin{cases} \wedge_{\mathbb{F}_p}(\alpha_{n-1})&\quad \text{$n$ even}\\ \mathbb{F}_p&\quad \text{$n$ odd}. \end{cases}\] \end{invariantstheorem} Thus, the only possible differential aside from $d^{(n-1)(p-1)+1}$, whose effect is determined by Lemma \ref{lem:high differential} and periodicity, is $d^n$. After checking that this differential vanishes, the calculation follows. \begin{theorem}[Cohen] There is an isomorphism \[H^*(B_p(\mathbb{R}^n))\cong I\times_{\mathbb{F}_p} \frac{H^*(\Sigma_p)}{H^{>(n-1)(p-1)}(\Sigma_p)}.\] \end{theorem} In the remainder of this section, we flesh out this outline. \begin{warning} This argument is the one given in the original reference \cite{CohenLadaMay:HILS}, but we adopt our own notational conventions and give different proofs of some results. \end{warning} \subsection{Invariants} We begin with a few sample calculations. \begin{example} For $p=3$, a basis for the degree $2n-2$ cohomology is given by the set $\left\{\alpha_{12}\alpha_{13}, \alpha_{12}\alpha_{23}\right\}$.
We compute that \begin{align*} \tau_{12}^*(\alpha_{12}\alpha_{13})&=(-1)^n\alpha_{12}\alpha_{23}\\ \tau_{23}^*(\alpha_{12}\alpha_{13})&=(-1)^{(n-1)^2}\alpha_{12}\alpha_{13}\\ \tau_{12}^*(\alpha_{12}\alpha_{23})&=(-1)^n\alpha_{12}\alpha_{13}\\ \tau_{23}^*(\alpha_{12}\alpha_{23})&=(-1)^{n+1}\alpha_{12}\alpha_{13}+(-1)^{(n-1)^2+1}\alpha_{12}\alpha_{23}. \end{align*} Thus, if $\beta=\lambda_1\alpha_{12}\alpha_{13}+\lambda_2\alpha_{12}\alpha_{23}$ is a fixed point, we have the equations \begin{align*} \lambda_1&=(-1)^n\lambda_2\\ \lambda_1&=(-1)^{(n-1)^2}\lambda_1+(-1)^{n+1}\lambda_2, \end{align*} which imply $\beta=0$ unless $2\equiv(-1)^{(n-1)^2}\mod 3$, which occurs precisely when $n$ is even. This special case is the reason for the restriction $p>3$ in the statement of the theorem. \end{example} \begin{example}\label{example:fixed point} Assuming that $n$ is even and setting $\alpha=\sum_{1\leq a<b\leq p}\alpha_{ab}$, we compute that \begin{align*} \sigma^*\alpha&=\sum_{1\leq a<b\leq p}\alpha_{\sigma(a)\sigma(b)}\\ &=\sum_{\sigma(a)<\sigma(b)}\alpha_{\sigma(a)\sigma(b)}+(-1)^n\sum_{\sigma(b)<\sigma(a)}\alpha_{\sigma(a)\sigma(b)}\\ &=\alpha, \end{align*} so the fixed point set is at least as large as claimed in the theorem above. Thus, it remains to show that $\alpha$ is the only possible nontrivial fixed point. \end{example} In order to prove the theorem, we pass from cohomology to homology to obtain a more convenient basis. This passage is justified by the tensor/hom adjunction; for a finite group $G$ and $G$-module $M$, we have \begin{align*} H^*(G; M^\vee)&\cong \mathrm{Ext}^*_{\mathbb{F}[G]}\left(\mathbb{F}, M^\vee\right)\\ &\cong \mathrm{Ext}^*_{\mathbb{F}[G]}\left(\mathbb{F}, \mathrm{Ext}^*_\mathbb{F}\left(M,\mathbb{F}\right)\right)\\ &\cong \mathrm{Ext}^*_{\mathbb{F}}\left(\mathrm{Tor}_*^{\mathbb{F}[G]}\left(\mathbb{F}, M\right),\mathbb{F}\right)\\ &\cong H_*(G; M)^\vee.
\end{align*} In particular, in the case of interest, we have the isomorphisms \begin{align*} H^*(\mathrm{Conf}_p(\mathbb{R}^n))^{\Sigma_p}&\cong \left(H_*(\mathrm{Conf}_p(\mathbb{R}^n))_{\Sigma_p}\right)^\vee\\ H^*(C_p;H^*(\mathrm{Conf}_p(\mathbb{R}^n)))&\cong H_*(C_p;H_*(\mathrm{Conf}_p(\mathbb{R}^n)))^\vee. \end{align*} Recall from \S\ref{section:cohomology} that $H_*(\mathrm{Conf}_p(\mathbb{R}^n))$ is spanned by classes indexed by $p$-forests modulo the Jacobi identity and graded antisymmetry, with a preferred basis given by the tall forests. The advantage of this perspective is that $\Sigma_p$ acts on a forest by permuting the leaves, which are independent of one another, whereas the indices of a monomial such as $\alpha_{ab}\alpha_{bc}\alpha_{ac}$ are far from independent. The first of the theorems in question is now almost immediate; indeed, we essentially gave the argument previously when computing $H_*(B_k(\mathbb{R}^n);\mathbb{Q})$. \begin{proof}[Proof of Invariants Theorem] Let $\alpha$ be the class of a tall forest with at least three leaves; thus, $|\alpha|\geq 2(n-1)$. By the Jacobi identity, $3[\alpha]=0$ in $H_*(\mathrm{Conf}_p(\mathbb{R}^n))_{\Sigma_p}$; therefore, since $p>3$, the map from $H_*(\mathrm{Conf}_p(\mathbb{R}^n))$ to the module of coinvariants is zero, since it annihilates a basis. Since this map is also surjective, the claim follows in degree $2(n-1)$ and higher. The argument in degree $0$ is trivial. In degree $n-1$, there are two cases to consider. If $n$ is odd, then every basis element is equivalent to its negative in coinvariants, which must therefore be trivial. If $n$ is even, we note that $\Sigma_p$ acts transitively on our preferred homology basis, so $H_{n-1}(\mathrm{Conf}_p(\mathbb{R}^n))_{\Sigma_p}$ has dimension at most 1. Therefore, by Example \ref{example:fixed point}, the dimension is exactly 1. \end{proof} \begin{remark} This argument differs from that given in \cite[III.9]{CohenLadaMay:HILS}. 
We were unable to justify some of the claims made in the course of that argument. \end{remark} \subsection{Vanishing} We turn now to the vanishing theorem. As in the previous section, we may reformulate this result in homological terms as the assertion \[H_s(C_p; H_t(\mathrm{Conf}_p(\mathbb{R}^n)))=0\] for $s>0$ and $0<t<(n-1)(p-1)$. \begin{remark} We emphasize that this theorem requires no restriction on $p$ (note that the case $p=2$ is vacuous). \end{remark} We will use the following vanishing criterion. \begin{proposition}\label{prop:cyclic decomposition} Let $V$ be a $C_p$-module over $\mathbb{F}$ and $\sigma\in C_p$ a fixed generator. If $V$ admits a decomposition of the form \[V\cong \bigoplus_{i=1}^{p}V_{\sigma^i}\] such that $\sigma\left(V_{\sigma^i}\right)\subseteq V_{\sigma^{i+1}}$, then $H_s(C_p;V)=0$ for $s>0$. \end{proposition} \begin{proof} The trivial $C_p$-module $\mathbb{F}$ admits the so-called periodic resolution \[\cdots\to \mathbb{F}[C_p]\xrightarrow{N}\mathbb{F}[C_p]\xrightarrow{\sigma-1}\mathbb{F}[C_p]\xrightarrow{\epsilon}\mathbb{F},\] where $N=1+\sigma+\cdots+\sigma^{p-1}$ and $\epsilon$ denotes the augmentation. Thus, the group homology of $V$ is computed by the complex \[\cdots \to V\xrightarrow{N} V\xrightarrow{\sigma-1} V,\] so it suffices to show that the inclusion $\mathrm{im}(N)\subseteq \ker(\sigma-1)$ is an equality. Suppose that $\sigma(v)=v$. Our assumption on $V$ provides the unique decomposition \[\sum_{i=1}^pv_{\sigma^i}=v=\sigma(v)=\sum_{i=1}^p\sigma(v_{\sigma^i}).\] Since $\sigma(v_{\sigma^i})\in V_{\sigma^{i+1}}$, it follows by induction that $v_{\sigma^i}=\sigma^{i-1}(v_\sigma)$ for all $1\leq i\leq p$, so $v=N v_\sigma.$ \end{proof} In order to apply this observation to our situation, we recall that a $p$-forest is simply an ordered partition of $\{1,\ldots, p\}$ together with a binary parenthesization of each block of the partition, and that changing the order of the partition introduces an overall sign.
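The exactness step in the proof of Proposition \ref{prop:cyclic decomposition} can be checked by machine in the smallest nontrivial case. The following sketch (our own; all names are illustrative) takes $p=3$ and $V=\mathbb{F}_3[C_3]$, the regular representation, with $V_{\sigma^i}$ spanned by $\sigma^i$, and verifies $\ker(\sigma-1)=\mathrm{im}(N)$ by brute force:

```python
from itertools import product

p = 3  # smallest odd prime; V = F_3[C_3], the regular representation

def sigma(v):
    """Cyclic shift: the generator of C_3 acting on F_3[C_3]."""
    return (v[-1],) + v[:-1]

def add(u, v):
    return tuple((a + b) % p for a, b in zip(u, v))

def N(v):
    """Apply the norm element 1 + sigma + ... + sigma^(p-1)."""
    out, w = (0,) * p, v
    for _ in range(p):
        out = add(out, w)
        w = sigma(w)
    return out

vectors = list(product(range(p), repeat=p))
kernel = {v for v in vectors if sigma(v) == v}   # ker(sigma - 1)
image = {N(v) for v in vectors}                  # im(N)
assert kernel == image  # exactness: H_1(C_3; F_3[C_3]) = 0
print(len(kernel))      # both equal the line of constant vectors
```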
Since the Jacobi identity and antisymmetry do not change the partition of a forest, we have the direct sum decomposition \[H_*(\mathrm{Conf}_p(\mathbb{R}^n))\cong \bigoplus_{1\leq \ell\leq p}\bigoplus_{[\pi]\in \mathrm{Surj}(p,\ell)_{\Sigma_\ell}}F_{[\pi]},\] where $F_{[\pi]}$ denotes the subspace spanned by the forests with underlying unordered partition $[\pi]$. We now make three observations. \begin{enumerate} \item The degree 0 subspace is exactly the $\ell=p$ summand. Thus, we disregard this summand. \item The degree $(n-1)(p-1)$ subspace is exactly the $\ell=1$ summand. Thus, we disregard this summand. \item The symmetric group acts via the action on $\mathrm{Surj}(p,\ell)_{\Sigma_\ell}$ given by pre-composition. In particular, the $\ell$th summand above is closed under the action of $\Sigma_p$. \end{enumerate} We conclude that the Vanishing Theorem is equivalent to the claim that \[H_{s}\left(C_p;\bigoplus_{[\pi]\in\mathrm{Surj}(p,\ell)_{\Sigma_\ell}}F_{[\pi]}\right)=0\] for $s>0$ and $1<\ell<p$. The essential observation in establishing this claim is the following. \begin{lemma}\label{lem:partitions} For any $1<\ell<p$ and $[\pi]\in \mathrm{Surj}(p,\ell)_{\Sigma_\ell}$, \[[\pi\circ\sigma]\neq[\pi].\] \end{lemma} \begin{proof} There are numbers $1\leq i,j\leq \ell$ such that $|\pi^{-1}(i)|\neq |\pi^{-1}(j)|$; indeed, assuming otherwise implies that $\ell\mid p$, which contradicts our assumption that $\ell\notin\{1,p\}$. Now, if $\rho$ is a cyclic permutation taking some element of $\pi^{-1}(i)$ to an element of $\pi^{-1}(j)$, then $[\pi]\neq [\pi\circ\rho]=[\pi\circ\sigma^m]$ for some $m$, which leads to a contradiction under the assumption that $[\pi\circ\sigma]=[\pi]$. \end{proof} \begin{proof}[Proof of Vanishing Theorem] Since $F_{[\pi]}\cap F_{[\pi']}=0$ for $[\pi]\neq [\pi']$, Lemma \ref{lem:partitions} implies that $F_{[\pi]}\cap F_{[\pi\circ\sigma]}=0$ for any $1<\ell<p$ and $[\pi]\in \mathrm{Surj}(p,\ell)_{\Sigma_\ell}$.
Thus, for fixed $[\pi]$, the submodule $\bigoplus_{i=1}^pF_{[\pi\circ\sigma^i]}$ satisfies the hypotheses of Proposition \ref{prop:cyclic decomposition}. By induction, $\bigoplus_{[\pi]\in\mathrm{Surj}(p,\ell)_{\Sigma_\ell}}F_{[\pi]}$ decomposes as a direct sum of submodules of this form. The proposition now implies the claim. \end{proof} \begin{remark} The reader may compare the complexity of this argument in homology to the argument in cohomology of \cite[III.10]{CohenLadaMay:HILS}. \end{remark} \section{Postscript: Lie algebra methods}\label{section:lie algebra methods} In this short final portion of these notes, we give an informal discussion of an approach to computing the rational homology of configuration spaces of general manifolds, which is premised on Lie algebras. \subsection{Lie algebras and their homology} We begin with a few reminders. Fix a field $\mathbb{F}$ of characteristic zero. \begin{definition} A \emph{graded Lie algebra} is a graded vector space $\mathfrak{g}$ equipped with a map \[[-,-]:\mathfrak{g}^{\otimes 2}\to \mathfrak{g},\] called the \emph{Lie bracket} of $\mathfrak{g}$, satisfying the following two identities: \begin{enumerate} \item $[x,y]=(-1)^{|x||y|+1}[y,x]$ \item $(-1)^{|x||z|}\left[x,[y,z]\right]+(-1)^{|z||y|}\left[z,[x,y]\right]+(-1)^{|y||x|}\left[y,[z,x]\right]=0.$ \end{enumerate} \end{definition} \begin{example} An ordinary Lie algebra may be viewed as a graded Lie algebra concentrated in degree 0. \end{example} \begin{example} Given a graded vector space $V$, there is a free Lie algebra $\mathcal{L}(V)$ satisfying the obvious universal property.
In particular, it follows easily from the definition that \[\mathcal{L}\left(v_r\right)=\begin{cases} \mathbb{F}\langle v\rangle&\quad r\text{ even}\\ \mathbb{F}\langle v\rangle\oplus\mathbb{F}\langle [v,v]\rangle &\quad r\text{ odd.} \end{cases}\] \end{example} \begin{example} If $\mathfrak{g}$ is a graded Lie algebra and $A$ a graded commutative (possibly nonunital) algebra, then $A\otimes\mathfrak{g}$ is canonically a graded Lie algebra with bracket determined by the formula \[[\alpha\otimes x,\beta\otimes y]=(-1)^{|x||\beta|}\alpha\beta\otimes[x,y].\] \end{example} The first identity of the definition above, which is usually called \emph{graded antisymmetry}, ensures that the bracket descends to a map $\mathrm{Sym}^2\left(\mathfrak{g}[1]\right)[-1]\to \mathfrak{g}[1]$. The second identity, known as the \emph{Jacobi identity}, ensures that the appropriate composite \[\mathrm{Sym}^3\left(\mathfrak{g}[1]\right)[-2]\to \mathrm{Sym}^2\left(\mathfrak{g}[1]\right)[-1]\to \mathfrak{g}[1]\] is zero. We now systematize these observations. \begin{recollection} For a graded vector space $V$, write $\mathrm{Sym}(V)=\mathbb{F}[V_{\mathrm{even}}]\otimes\Lambda_\mathbb{F}[V_{\mathrm{odd}}]$. This object is a graded cocommutative coalgebra under the \emph{shuffle coproduct} \[\Delta(x_1\cdots x_k)=\sum_{i+j=k}\sum_{\Sigma_k/\Sigma_i\times\Sigma_j}\epsilon(\sigma; x_1,\ldots, x_k)\, x_{\sigma(1)}\cdots x_{\sigma(i)}\otimes x_{\sigma(i+1)}\cdots x_{\sigma(k)},\] where $\epsilon(-;x_1,\ldots, x_k):\Sigma_k\to \{\pm 1\}$ is the group homomorphism determined by the formula $\epsilon(\tau_j;x_1,\ldots, x_k)=(-1)^{|x_j||x_{j+1}|}$. Recall that a \emph{coderivation} of a graded coalgebra $(C,\Delta)$ is a self-map $\delta$ satisfying the ``co-Leibniz rule'' $\Delta\circ\delta=(\id\otimes \delta+\delta\otimes \id)\circ\Delta$. 
Coderivations of $\mathrm{Sym}(V)$ of fixed degree $m$ decreasing word length by $n$ are in bijection with graded maps \[\mathrm{Sym}^{n+1}(V)\to V\] of degree $m$ \cite[22(a)]{FelixHalperinThomas:RHT}. \end{recollection} From this universal property and graded antisymmetry, we conclude that the bracket of a Lie algebra $\mathfrak{g}$ determines a coderivation $d_{[-,-]}=d$ of $\mathrm{Sym}\left(\mathfrak{g}[1]\right)$ of degree $-1$. Moreover, since the square of an odd coderivation is again a coderivation, the Jacobi identity implies that $d^2$ is the unique coderivation determined by 0, which, of course, is zero. Thus, the following definition is sensible. \begin{definition} Let $\mathfrak{g}$ be a graded Lie algebra. The \emph{Chevalley-Eilenberg complex} of $\mathfrak{g}$ is the chain complex \[\mathrm{CE}(\mathfrak{g}):=\left(\mathrm{Sym}(\mathfrak{g}[1]),\,d_{[-,-]}\right).\] The \emph{Lie algebra homology} of $\mathfrak{g}$ is \[H^\mathcal{L}(\mathfrak{g}):=H\left(\mathrm{CE}(\mathfrak{g})\right).\] \end{definition} \begin{remark} The formula given above for the coproduct $\Delta$ on $\mathrm{Sym}(V)$ is determined by requiring that \begin{enumerate} \item $\Delta(x)=1\otimes x+x\otimes 1$ and \item $\Delta(xy)=\Delta(x)\Delta(y)$, \end{enumerate} or, in other words, by requiring that the elements of $V$ be primitive and that $\Delta$ be a map of algebras. Note, however, that the Chevalley-Eilenberg complex is a differential graded coalgebra but \emph{not} a differential graded algebra. \end{remark} \begin{remark} The differential in the Chevalley-Eilenberg complex may be computed explicitly as \[d(x_1\cdots x_k)=\sum_{1\leq i<j\leq k}(-1)^{|x_i|} \epsilon(\sigma_{ij};x_1,\ldots, x_k)\, [x_i,x_j]\,x_1\cdots \hat x_i\cdots \hat x_j\cdots x_k,\] where $\sigma_{ij}$ is the unique $(2,k-2)$-shuffle sending $i$ to $1$ and $j$ to $2$. 
\end{remark} \begin{remark} As the terminology suggests, there is an isomorphism \[H^\mathcal{L}(\mathfrak{g})\cong \mathrm{Tor}_*^{U(\mathfrak{g})}(\mathbb{F},\mathbb{F}),\] where $U(\mathfrak{g})$ is the universal enveloping algebra of $\mathfrak{g}$. \end{remark} \begin{remark} If $\mathfrak{g}$ is the Lie algebra associated to a compact Lie group, we have a composite quasi-isomorphism \[\mathrm{CE}(\mathfrak{g})^\vee\xrightarrow{\simeq}\Omega(G)^G\xrightarrow{\sim}\Omega(G),\] where $\Omega(G)^G$ is the space of left-invariant differential forms on $G$ \cite{ChevalleyEilenberg:CTLGLA}. \end{remark} \subsection{Lie algebra homology and configuration spaces} The relevance of Lie algebra homology from our point of view is the following result. \begin{theorem}[{\cite{Knudsen:BNSCSVFH}}] Let $M$ be an orientable $n$-manifold. There is an isomorphism \[\bigoplus_{k\geq0}H_*(B_k(M))\cong H^\mathcal{L}\left(\mathfrak{g}_M\right)\] of bigraded vector spaces, where $\mathfrak{g}_M=H_c^{-*}(M)\otimes \mathcal{L}(v_{n-1,1})$. \end{theorem} \begin{remark} Explicitly, we have \[\mathfrak{g}_M=\begin{cases} H_c^{-*}(M)\otimes v&\quad n \text{ odd}\\ H_c^{-*}(M)\otimes v\oplus H_c^{-*}(M)\otimes[v,v]&\quad n\text{ even,} \end{cases}\] where in the first case all brackets vanish and in the second the bracket is determined up to sign by the cup product. \end{remark} \begin{remark} An analogous statement for nonorientable manifolds is also valid---see \cite{Knudsen:BNSCSVFH}. \end{remark} \begin{example} If $n$ is odd, then the Lie bracket in $\mathfrak{g}_M$ is identically zero, so the differential in $\mathrm{CE}(\mathfrak{g}_M)$ vanishes. Thus, equating auxiliary gradings, we find in this case that \begin{align*} H_*(B_k(M))\cong\mathrm{Sym}^k\left(H_c^{-*}(M)[n-1][1]\right)\cong \mathrm{Sym}^k\left(H_*(M)\right) \end{align*} by Poincar\'{e} duality, recovering the computation of Theorem \ref{thm:odd homology}. 
\end{example} \begin{example} Let $M_1=T^2\setminus \mathrm{pt}$ and $M_2=\mathbb{R}^2\setminus S^0$. Then, since $M_1^+\cong T^2$ and $M_2^+\simeq S^1\vee S^1\vee S^2$, we have \[H_c^{-*}(M_j)\cong\begin{cases} \mathbb{F}\left\langle x_{-1}, y_{-1}, z_{-2}\mid xy=z\right\rangle&\quad j=1\\ \mathbb{F}\left\langle x_{-1}, y_{-1}, z_{-2}\right\rangle&\quad j=2. \end{cases}\] Thus, as a bigraded vector space, we have \[\mathfrak{g}_{M_j}\cong\mathbb{F}\left\langle x_{0,1}, y_{0,1}, z_{-1,1}, \tilde x_{1,2}, \tilde y_{1,2}, \tilde z_{0,2}\right\rangle,\] and the bracket is determined by \[[x,y]=\begin{cases} \tilde z&\quad j=1\\ 0&\quad j=2. \end{cases}\] It follows that the weight 2 subcomplex of the Chevalley-Eilenberg complex is given additively as \[\xymatrix{\mathbb{F}\left\langle \tilde x, \tilde y, xy\right\rangle \ar[rr]^-{xy\mapsto [x,y]}&& \mathbb{F}\left\langle xz, yz, \tilde z\right\rangle\ar[r]^-{0}& \mathbb{F}\left\langle z^2\right\rangle.}\] We conclude that \[\dim H_2(B_2(M_j))=\begin{cases} 2&\quad j=1\\ 3&\quad j=2. \end{cases}\] In particular, $B_2(T^2\setminus \mathrm{pt})\not\simeq B_2(\mathbb{R}^2\setminus S^0)$. \end{example} \begin{remark} This type of computation can be pushed much further; indeed, \cite{DrummondColeKnudsen:BNCSS} determines $H_i(B_k(\Sigma_g))$ for every degree $i$, cardinality $k$, and genus $g$. For example, \[\dim H_{101}(B_{100}(\Sigma_3))=28,449,499.\] \end{remark} \subsection{Homological stability} In the previous section, we saw that the rational homology of unordered configuration spaces may be computed as Lie algebra homology. Our present goal is to leverage this information in order to understand some of the qualitative behavior of these homology groups. To begin, recall that, if $\partial M\neq \varnothing$, there is a stabilization map \[B_k(M)\to B_{k+1}(M)\] defined by inserting a point in a collar neighborhood of the boundary, and this map induces an isomorphism on integral homology in a range of degrees \cite{McDuff:CSPNP}.
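The dimension count in the example of $B_2(T^2\setminus\mathrm{pt})$ versus $B_2(\mathbb{R}^2\setminus S^0)$ above can be reproduced by a small rank computation. The following sketch (our own; the basis ordering and matrix conventions are assumptions) encodes the displayed weight 2 complex and computes the middle homology for $j=1,2$:

```python
from fractions import Fraction

def rank(mat):
    """Rank of a rational matrix via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Bases (xtilde, ytilde, xy) -> (xz, yz, ztilde) -> (z^2); the only
# potentially nonzero differential sends the monomial xy to [x, y].
d1 = {1: [[0, 0, 0], [0, 0, 0], [0, 0, 1]],  # j = 1: [x, y] = ztilde
      2: [[0, 0, 0], [0, 0, 0], [0, 0, 0]]}  # j = 2: [x, y] = 0
d2 = [[0], [0], [0]]                          # the second map is zero

for j in (1, 2):
    h2 = (3 - rank(d2)) - rank(d1[j])  # dim ker - dim im at middle term
    print(f"dim H_2(B_2(M_{j})) = {h2}")
```

The two printed dimensions distinguish the two configuration spaces, as in the example.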
Surprisingly, although the stabilization map may fail to exist, stability actually holds in general, at least rationally. \begin{theorem}[Church] Let $M$ be a connected $n$-manifold with $n>2$. There is a map \[H_i(B_{k+1}(M);\mathbb{Q})\to H_i(B_k(M);\mathbb{Q})\] that is an isomorphism for $i\leq k$. \end{theorem} Although our approach to this result will differ from \cite{Church:HSCSM}, it will be motivationally useful to recall the definition of the map used therein. The idea is that, although one does not have a stabilization map in general, there is always a map going the other direction, at least at the level of ordered configurations, namely the projection of the Fadell-Neuwirth fibration $\mathrm{Conf}_{k+1}(M)\to \mathrm{Conf}_k(M)$. This map is only $\Sigma_k$-equivariant and so fails to descend to the unordered configuration spaces, but this can be remedied by remembering that there are in fact $k+1$ different such projections. The desired map is then obtained as the dashed filler in the commuting diagram \[\xymatrix{ H_*(\mathrm{Conf}_{k+1}(M))\ar[d]\ar[r]&H_*(B_{k+1}(M))\ar@{-->}[dd]^-{\pi}\\ H_*(\mathrm{Conf}_k(M))^{\oplus k+1}\ar[d]_-\Sigma\\ H_*(\mathrm{Conf}_k(M))\ar[r]&H_*(B_k(M)), }\] where the $i$th coordinate of the upper left map is the projection away from the $i$th coordinate. The key to our argument is the observation that this map is a piece of a larger structure. 
Indeed, whenever $i+j=k$, we have a $\Sigma_i\times\Sigma_j$-equivariant map \begin{align*} \mathrm{Conf}_k(M)&\to \mathrm{Conf}_i(M)\times\mathrm{Conf}_j(M)\\ (x_1,\ldots, x_k)&\mapsto \left(\left(x_1,\ldots, x_i\right), \left(x_{i+1},\ldots, x_k\right)\right), \end{align*} which induces the commuting diagram \[\xymatrix{ H_*(\mathrm{Conf}_k(M))\ar[d]\ar[r]&H_*(\mathrm{Conf}_i(M))\otimes H_*(\mathrm{Conf}_j(M))\otimes_{\Sigma_i\times\Sigma_j}\mathbb{Q}[\Sigma_k]\ar[d]\\ H_*(B_k(M))\ar@{-->}[r]&H_*(B_i(M))\otimes H_*(B_j(M)), }\] which, after summing over $k$ and $i+j=k$, produces a map \[\Delta:H_*(B(M))\to H_*(B(M))\otimes H_*(B(M)),\] where $B(M):=\coprod_{k\geq0}B_k(M)$. This map is the comultiplication of a cocommutative coalgebra structure on $H_*(B(M))$. Morally speaking, this comultiplication is given by the formula \[\text{``}\Delta(x_1,\ldots, x_k)= \sum_{i+j=k}\sum_{\Sigma_k/\Sigma_i\times\Sigma_j}(x_{\sigma(1)},\ldots, x_{\sigma(i)})\otimes (x_{\sigma(i+1)}, \ldots, x_{\sigma(k)})\text{''.}\] From this point of view, Church's map $\pi$ is obtained by using the comultiplication to split points apart and then discarding all summands not of the form $i=1$ and $j=k-1$. This type of operation is familiar in the theory of coalgebras. \begin{definition} Let $(C,\Delta)$ be a differential graded coalgebra and $\lambda\in C^\vee$ an $r$-cocycle. The \emph{cap product} with $\lambda$ is the composite \[\lambda\frown(-):C\cong\mathbb{Q}\otimes C\xrightarrow{\lambda\otimes\Delta}C^\vee[r]\otimes C\otimes C\cong C^\vee\otimes C\otimes C[r]\xrightarrow{\langle-,-\rangle\otimes \id_C}\mathbb{Q}[r]\otimes C\cong C[r].\] \end{definition} Explicitly, if $\Delta(c)=\sum_{i} c_i\otimes c_i'$, then \[\lambda\frown c=\sum_i\langle \lambda, c_i\rangle c_i'.\] Since $\lambda$ is assumed to be closed, $\lambda\frown(-)$ is a chain map and hence induces a map of the same name at the level of homology.
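As a toy illustration of the cap product (our own sketch, with all signs suppressed), take $C=\mathbb{Q}[x]$ with $x$ primitive, so that $\Delta(x^n)=\sum_i\binom{n}{i}x^i\otimes x^{n-i}$, and let $\lambda$ be the functional dual to $x$; the explicit formula above then computes $\lambda\frown x^r=rx^{r-1}$:

```python
from math import comb

def coproduct(n):
    """Coproduct of x^n in Q[x] with x primitive: sum of C(n,i) x^i (x) x^(n-i)."""
    return [(comb(n, i), i, n - i) for i in range(n + 1)]

def cap_lambda(poly):
    """Cap product with the functional dual to x; poly maps degree -> coefficient."""
    out = {}
    for n, c in poly.items():
        for coeff, i, j in coproduct(n):
            if i == 1:  # <lambda, x^i> = 1 if i == 1, else 0
                out[j] = out.get(j, 0) + c * coeff
    return out

# lambda cap x^r = r x^(r-1): the cap product acts as d/dx on powers of x.
for r in range(1, 8):
    assert cap_lambda({r: 1}) == {r - 1: r}
print("cap product with lambda acts as d/dx")
```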
The moral of our discussion so far is that Church's map $\pi$ is given by the cap product with the unit in $H^0(M)$, which is dual to the homology class of a single point, since $M$ is connected. Now, as we saw last time, $H_*(B(M))$ may be computed using the Chevalley-Eilenberg complex of the Lie algebra $\mathfrak{g}_M$, which is a cocommutative coalgebra whose comultiplication obeys a very similar formula to that given above. This observation leads us to formulate the following version of Church's theorem, which is the one that we will prove. Let $x\in H_c^{-*}(M)[n]\subseteq \mathfrak{g}_M[1]\subseteq \mathrm{CE}(\mathfrak{g}_M)$ denote the Poincar\'{e} dual of the class of a point in $M$, and let $\lambda$ be the dual functional to $x$. Finally, write $\mathrm{CE}(\mathfrak{g}_M)_k$ for the summand of the Chevalley-Eilenberg complex of weight $k$. \begin{theorem}\label{thm:my stability} Let $M$ be a connected $n$-manifold with $n>2$. Cap product with $\lambda$ induces an isomorphism \[H_i(\mathrm{CE}(\mathfrak{g}_M)_{k+1})\xrightarrow{\simeq} H_i(\mathrm{CE}(\mathfrak{g}_M)_{k})\] for $i\leq k$. \end{theorem} \begin{remark} The same conclusion holds in the case $n=2$ with a slightly worse stable range. The theorem is false for trivial reasons for $n=1$ and vacuous for $n=0$. \end{remark} In fact, we will show that the chain level cap product is a chain isomorphism in a range. In order to do so, it will be useful to have a formula for the cap product. \begin{lemma} $\lambda\frown(-)=\frac{d}{dx}$. \end{lemma} \begin{proof} The claim is equivalent to the assertion that $\lambda\frown x^ry=rx^{r-1}y$ for every monomial $y$ not divisible by $x$.
We compute that \begin{align*} \Delta(x^ry)&=\Delta(x)^r\Delta(y)\\ &=(1\otimes x+x\otimes 1)^r\sum_j y_j\otimes y_j'\\ &=\sum_{i,j}\binom{r}{i}x^i y_j\otimes x^{r-i}y_j', \end{align*} whence \[\lambda\frown x^ry=\sum_{i,j}\binom{r}{i}\langle \lambda, x^i y_j\rangle x^{r-i}y_j'.\] Since $\lambda$ is the dual functional to $x$, the $(i,j)$ summand vanishes unless $i=1$ and $y_j$ is a scalar. There are now two cases. If $y$ itself is a scalar, then $\Delta(y)=y(1\otimes 1)$, and the claim follows easily. If $y$ is not a scalar, then $\Delta(y)\equiv 1\otimes y+y\otimes1$ modulo elementary tensors in which neither factor is a scalar, and the claim again follows easily. \end{proof} \begin{corollary} The chain map $\lambda\frown(-)$ is surjective with kernel spanned by the monomials not divisible by $x$. \end{corollary} Thus, it remains to determine which monomials are divisible by $x$. For simplicity, we assume $n>2$; the case $n=2$ requires only minor modifications. \begin{lemma} If $n>2$, then any nonzero monomial $y$ with $\mathrm{wt}(y)>|y|$ is divisible by $x$. \end{lemma} \begin{proof} Write $y=y_1\cdots y_r$ with $y_j\in \mathfrak{g}_M[1]$. Since $\mathrm{wt}(y)>|y|$, we conclude that $\mathrm{wt}(y_j)>|y_j|$ for some $j$. Since $y_j\in \mathfrak{g}_M[1]$, the weight of $y_j$ is either 1 or 2, and we treat these cases separately. If $\mathrm{wt}(y_j)=1$, then $y_j\in H_c^{-*}(M)[n]$, which is concentrated in degrees $0\leq *\leq n$, so the assumption $|y_j|<1$ implies that $|y_j|=0$. Since $M$ is connected, it follows that $y_j$ is a multiple of $x$. If $\mathrm{wt}(y_j)=2$, then $y_j\in H_c^{-*}(M)[2n-1]$, which is concentrated in degrees $n-1\leq *\leq 2n-1$. Since $|y_j|<2$ and $n>2$, it follows that $y_j=0$, a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:my stability}] What we have shown so far is that the chain map \[\mathrm{CE}(\mathfrak{g}_M)_{k+1}\to \mathrm{CE}(\mathfrak{g}_M)_k\] induced by $\lambda\frown(-)$ is surjective and an isomorphism through degree $k$.
An easy exercise shows that any chain map with these properties induces an isomorphism in homology through degree $k$. \end{proof} \begin{appendix} \section{Split simplicial spaces}\label{appendix:split simplicial spaces} In this appendix, we develop a criterion guaranteeing that a degreewise weak homotopy equivalence of simplicial spaces induces a weak homotopy equivalence after geometric realization. We follow \cite{DuggerIsaksen:THAR}, but similar results may be found in \cite[11.15]{May:GILS} and \cite[A.5]{Segal:CCT}. \subsection{Split simplicial spaces}\label{section:split} \begin{definition} A simplicial space $\op X$ is \emph{split} if there are subspaces $N_m(\op X)\subseteq \op X_m$ for each $m\geq0$, called the \emph{non-degenerate part} in degree $m$, such that the map \[\coprod_{[n]\twoheadrightarrow[m]}N_m(\op X)\to \op X_n\] induced by the degeneracies is a homeomorphism for every $n\geq0$. \end{definition} \begin{proposition}[Dugger-Isaksen]\label{prop:split criterion} Let $f:\op X\to \op Y$ be a map between split simplicial spaces. If $f_n:\op X_n\to \op Y_n$ is a weak equivalence for every $n\geq0$, then $|f|$ is a weak equivalence. \end{proposition} The strategy of the proof is simple. First, we argue that $f$ induces a weak equivalence on geometric realizations of $n$-skeleta for every $n$; second, we argue that every element in homotopy of the full realization is captured by some skeleton. In order to put this plan into action, we need to have control over skeleta. \begin{lemma}\label{lem:split pushout} Let $\op X$ be a split simplicial space. The diagram \[\xymatrix{ N_n(\op X)\times\partial \Delta^n\ar[r]\ar[d]&|\mathrm{sk}_{n-1}(\op X)|\ar[d]\\ N_n(\op X)\times\Delta^n\ar[r]&|\mathrm{sk}_n(\op X)| }\] is a pushout square.
\end{lemma} \begin{proof} Recall that the \emph{tensor} of a space $X$ with a simplicial space $\op Z$ is the simplicial space $(X\otimes\op Z)_n=X\times \op Z_n$, with simplicial structure maps induced by those of $\op Z$, together with the identity on $X$. Since geometric realization, as a left adjoint, preserves colimits, it suffices to produce a pushout square in simplicial spaces of the form \[\xymatrix{ N_n(\op X)\otimes\partial \Delta^n\ar[r]\ar[d]&\mathrm{sk}_{n-1}(\op X)\ar[d]\\ N_n(\op X)\otimes\Delta^n\ar[r]&\mathrm{sk}_n(\op X), }\] where we have indulged in the traditional abuse of using the same notation $\Delta^n$ for the representable simplicial set $\mathrm{Hom}_\Delta(-,[n])$ and its geometric realization, and similarly for $\partial\Delta^n$. To verify that this diagram is a pushout, it suffices to check in each level. Now, it is direct from the definitions that \[\mathrm{sk}_n(\op X)_m=\mathrm{sk}_{n-1}(\op X)_m\amalg\left(\coprod_{[m]\twoheadrightarrow[n]}N_n(\op X)\right),\] so $\mathrm{sk}_n(\op X)_m$ is the pushout of $\mathrm{sk}_{n-1}(\op X)_m$ and $N_n(\op X)\times \Delta^n_m$ over a coproduct of copies of $N_n(\op X)$ indexed by the set of maps $f:[m]\to [n]$ that fail to be surjective, which is exactly $\partial\Delta^n_m$. \end{proof} This fact will only be useful once we are assured that such pushouts are homotopically well-behaved. With regularity assumptions on the spaces involved, the following type of result is common knowledge, but in fact it holds in complete generality. 
\begin{lemma}\label{lem:pushout invariant} If $f:A\to A'$ and $g:B\to B'$ are weak homotopy equivalences, and if the front and back faces in the commuting diagram \[\xymatrix{&A\times \partial\Delta^n\ar[rr]\ar[dd]_>>>>>>>{}|!{[d]}\hole\ar[dl]_-{f\times\id_{\partial\Delta^n}}&&B\ar[dd]\ar[dl]_-g\\ A'\times\partial\Delta^n\ar[rr]\ar[dd]&& B'\ar[dd]\\ &A\times\Delta^n\ar[dl]_{f\times\id_{\Delta^n}}\ar[rr]^<<<<<<<<{}|!{[r]}\hole&&C\ar[dl]^-h \\ A'\times\Delta^n\ar[rr]&&C' }\] are pushout squares, then $h:C\to C'$ is a weak homotopy equivalence. \end{lemma} \begin{proof} We cover $C$ by two open sets, the first being $U_1=A\times D$, where $D\subseteq\mathring{\Delta}^n$ is a Euclidean neighborhood of the barycenter, and the second $U_2=B\coprod_{A\times\partial\Delta^n}(A\times P)$, where $P\subseteq \Delta^n$ is the complement of the barycenter. Similarly, we cover $C'$ by $U_1'$ and $U_2'$. Clearly, $h^{-1}(U_j')=U_j$ for $j\in\{1,2\}$. Consider the commuting diagrams \[\xymatrix{ U_1\ar[d]_-{h|_{U_1}}\ar[r]&A\ar[d]^-f&& U_2\ar[d]_-{h|_{U_2}}&B\ar[l]\ar[d]^-g&& U_1\cap U_2\ar[d]_-{h|_{U_1\cap U_2}}&A\times(D\cap P)\ar@{=}[l]_-{\simeq}\ar[d]^-{f\times \id_{D\cap P}}\\ U_1'\ar[r]&A'&& U_2'&\ar[l]B'&&U_1'\cap U_2'\ar@{=}[r]^-{\simeq}&A'\times(D\cap P), }\] where the horizontal arrows in the leftmost diagram are the projections onto the first factor, and the horizontal arrows in the middle diagram are induced by the inclusion $\partial\Delta^n\subseteq P$. Both horizontal arrows in the leftmost diagram are homotopy equivalences, and $f$ is a weak homotopy equivalence by assumption; both horizontal arrows in the middle diagram are inclusions of deformation retracts, and $g$ is a weak homotopy equivalence by assumption; and $f\times\id_{D\cap P}$ is a weak homotopy equivalence by assumption. Thus, by two-out-of-three, all three restrictions of $h$ are weak homotopy equivalences, so $h$ itself is a weak homotopy equivalence by the standard gluing result for maps restricting to weak equivalences on the pieces of an open cover and on their intersection.
\end{proof} In verifying that elements in the homotopy groups of $|\op X|$ are all captured by skeleta, we must be assured that the inclusions among skeleta are not too pathological. This assurance takes the form of a relative separation axiom. \begin{definition} A subspace $A\subseteq B$ is \emph{relatively $T_1$} if any open set $U\subseteq A$ may be separated from any point $b\in B\setminus U$ by an open set $U\subseteq V\subseteq B$ with $b\notin V$. An inclusion map is \emph{relatively $T_1$} if its image is so. \end{definition} This terminology is motivated by the observation that a space is $T_1$ if and only if each of its points is relatively $T_1$. Since finite intersections of open sets are open, we have the following immediate consequence: \begin{lemma} If $A\subseteq B$ is relatively $T_1$, then any open set $U\subseteq A$ may be separated from any finite subset of $B\setminus U$ by an open set $U\subseteq V\subseteq B$ containing no point of that subset. \end{lemma} The importance of this notion for our purposes is the following result. \begin{proposition}\label{prop:factor through colimit} Let $Y_i\subseteq Y_{i+1}$ be a relatively $T_1$ inclusion for $i\geq 1$. If $K$ is compact, then any map $f:K\to \colim_\mathbb{N} Y_i$ factors through the inclusion of some $Y_i$. \end{proposition} \begin{proof} If $f$ does not factor as claimed, then, after passing to a subsequence of the $Y_i$, we may assume the existence of $x_i\in \mathrm{im}(f)\cap (Y_i\setminus Y_{i-1})$ for each $i>1$. Recall that a subset of the colimit is open precisely when its intersection with each stage is open; thus, for each $j\geq1$, we may define an open subset $U_j\subseteq\colim_\mathbb{N} Y_i$ by the following prescription: \begin{enumerate} \item for $1\leq i<j$, set $U_{ij}=\varnothing$; \item for $i=j$, set $U_{ij}=Y_j$; \item for $i>j$, take $U_{ij}$ to be an open subset of $Y_i$ separating $U_{i-1,j}$ from the set $\{x_{j+1},\ldots, x_i\}$; \item finally, set $U_j=\colim_\mathbb{N}U_{ij}$.
\end{enumerate} Then $U_j\cap Y_i=U_{ij}$, so $U_j$ is an open subset of the colimit, and, since $Y_j\subseteq U_{j}$, the collection $\{U_j\}_{j\in\mathbb{N}}$ is an open cover of $\colim_\mathbb{N}Y_i$. Since $K$ is compact, $\mathrm{im}(f)$ is compact, so it is covered by some finite subcollection $\{U_{j_r}\}_{r=1}^N$. But, by construction, $U_{j_r}$ does not contain $x_i$ for $i>j_r$, so $\bigcup_{r=1}^N U_{j_r}$ does not contain $x_i$ for $i>\max\{j_r:1\leq r\leq N\}$, a contradiction. \end{proof} This fact will only be useful once we are able to identify relatively $T_1$ maps, a task that is made easier by the following observation. \begin{lemma}\label{lem:stable under pushout} Relatively $T_1$ inclusions are stable under finite products and pushouts along arbitrary continuous maps. \end{lemma} \begin{proof} For the first claim, it suffices by induction to show that $A_1\times A_2\subseteq B_1\times B_2$ is relatively $T_1$ if each $A_j\subseteq B_j$ is so. Fix an open subset $U\subseteq A_1\times A_2$ and a point $(x_1,x_2)\in B_1\times B_2\setminus U$. By the definition of the product topology, we have $U=\bigcup_{i\in I}U_{i1}\times U_{i2}$ for open subsets $U_{ij}\subseteq A_j$. For each $i\in I$, since $(x_1,x_2)\notin U_{i1}\times U_{i2}$, there is an index $j$ with $x_j\notin U_{ij}$; by our assumption on the inclusion $A_j\subseteq B_j$, we may find an open subset $U_{ij}\subseteq W_{ij}\subseteq B_j$ with $x_j\notin W_{ij}$, and we set $W_{ij'}=B_{j'}$ for the remaining index $j'$. Then $U\subseteq W:=\bigcup_{i\in I}W_{i1}\times W_{i2}$ is open in $B_1\times B_2$, and $(x_1,x_2)\notin W$, as desired. For the second claim, suppose that the diagram \[\xymatrix{ A\ar[r]^-f\ar[d]_-i&Y\ar[d]\\ B\ar[r]^-g&Z }\] is a pushout square and that $i$ is a relatively $T_1$ inclusion. Fix an open subset $U\subseteq Y$ and a point $z\in Z\setminus U$ (here, in a small abuse, we identify $Y$ with its image in $Z$, since the pushout of an inclusion is an inclusion). There are two cases to consider. Assume first that $z\in Y$.
Since $f^{-1}(U)$ is open in $A$ and $i$ is an inclusion, there is an open subset $W\subseteq B$ with $W\cap A=f^{-1}(U)$, and $W\amalg_{f^{-1}(U)} U\subseteq Z$ is open. To see that $z$ is not contained in this subset, it suffices to show that $z\notin g(W)$, since $z\notin U$ by assumption. But $z\in Y$, and $Y\cap g(B)=f(A)$, so $Y\cap g(W)=f(W\cap A)=f(f^{-1}(U))\subseteq U$, and the claim follows. On the other hand, suppose that $z\notin Y$; in particular, $z=g(b)$ for a unique $b\in B$. Then $b\notin i(f^{-1}(U))$ and $f^{-1}(U)\subseteq A$ is open, so, since $i$ is relatively $T_1$, there is an open subset $i(f^{-1}(U))\subseteq W\subseteq B$ with $b\notin W$. As before, $W\coprod_{f^{-1}(U)} U$ is open in $Z$ and clearly does not contain $z$. \end{proof} \begin{corollary}\label{cor:pushout is T1} For any pushout diagram of the form\[\xymatrix{ A\times \partial \Delta^n\ar[r]\ar[d]_-{\id_A\times(\partial\Delta^n\subseteq\Delta^n)}&Y\ar[d]^-i\\ A\times \Delta^n\ar[r]& Z }\] the inclusion $Y\to Z$ is relatively $T_1$. \end{corollary} Finally, we will need the following, essentially obvious, observation. \begin{lemma}\label{lem:disjoint weak equivalence} If $f:W\amalg X\to Y\amalg Z$ is a weak homotopy equivalence such that $f|_W$ factors through $Y$ as a weak homotopy equivalence, then $f|_X$ factors through $Z$ as a weak homotopy equivalence. \end{lemma} \begin{proof} The claim that $f|_X$ factors through $Z$ is obvious after applying $\pi_0$ and considering the analogous claim for bijections of sets, since $\pi_0(f)$ is a bijection. The claim that this factorization is a weak homotopy equivalence follows in the same way after applying $\pi_n$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:split criterion}] Fix $N_n(\op X)\subseteq \op X_n$ and $N_n(\op Y)\subseteq\op Y_n$ witnessing $\op X$ and $\op Y$ as split. We claim that the restriction of $f_n$ to $N_n(\op X)$ factors through $N_n(\op Y)$ as a weak homotopy equivalence for every $n\geq0$.
Having established this, it will follow by induction and Lemmas \ref{lem:split pushout} and \ref{lem:pushout invariant} above that the induced map $|\mathrm{sk}_n(\op X)|\to |\mathrm{sk}_n(\op Y)|$ is a weak homotopy equivalence for every $n\geq0$. To establish the claim, we proceed by induction on $n$, the base case following from our assumption, since $N_0(\op X)=\op X_0$ and similarly for $\op Y$. For the induction step, we note that the inductive assumption implies that the dashed filler in the diagram \[\xymatrix{ N_m(\op X)\ar@{-->}[d]\ar[r]&\op X_m\ar[d]^-{f_m}\ar[r]^-{s}&\op X_n\ar[d]^-{f_n}\\ N_m(\op Y)\ar[r]&\op Y_m\ar[r]^{s}&\op Y_n }\] exists and is a weak homotopy equivalence for every $m<n$ and every degeneracy $s$. Thus, the dashed filler in the diagram \[\xymatrix{ \displaystyle\coprod_{[n]\twoheadrightarrow[m]\neq[n]}N_m(\op X)\ar[r]\ar@{-->}[d]&\op X_n\ar[d]^-{f_n}\\ \displaystyle\coprod_{[n]\twoheadrightarrow[m]\neq[n]}N_m(\op Y)\ar[r]&\op Y_n }\] exists and is a weak homotopy equivalence. Since the righthand map is also a weak homotopy equivalence, the claim follows from Lemma \ref{lem:disjoint weak equivalence}. Now, from Lemma \ref{lem:split pushout} and Corollary \ref{cor:pushout is T1}, it follows by induction that each of the inclusions $|\mathrm{sk}_n(\op X)|\to |\mathrm{sk}_{n+1}(\op X)|$ is relatively $T_1$, and similarly for $\op Y$. Since $|\op X|=\colim_\mathbb{N}|\mathrm{sk}_n(\op X)|$, and likewise for $\op Y$, we conclude from Proposition \ref{prop:factor through colimit} that any map $(D^{m}, S^{m-1})\to (|\op Y|,|\op X|)$ factors as in the solid commuting diagram \[\xymatrix{ S^{m-1}\ar[d]\ar[r]&|\mathrm{sk}_n(\op X)|\ar[d]\ar[r]&|\op X|\ar[d]^-{|f|}\\ D^m\ar@{-->}[ur]\ar[r]&|\mathrm{sk}_n(\op Y)|\ar[r]&|\op Y|. }\] We have already shown the middle arrow to be a weak homotopy equivalence, so the dashed filler exists making the upper triangle commute and the lower triangle commute up to homotopy relative to $S^{m-1}$.
Thus, $\pi_m(|f|)=0$ for every $m\geq0$ (in the sense of relative homotopy groups of a map), and the claim follows. \end{proof} \subsection{Some examples}\label{section:loose ends} In this section, we identify a few classes of split simplicial spaces used in the main text. We begin by noting two easy consequences of the simplicial identities. First, for any simplicial space $\op X$, each degeneracy $s_i:\op X_{n-1}\to \op X_n$ is injective. Second, the intersection $s_i(\op X_{n-1})\cap s_j(\op X_{n-1})$ is contained in the union of the images of the various iterated degeneracies $\op X_{n-2}\to \op X_n$. \begin{lemma}\label{lem:covering map split} Let $f:\op X\to \op Y$ be a degreewise covering map. If $\op Y$ is split, then $\op X$ is split. \end{lemma} \begin{proof} Setting $N_0(\op X)=\op X_0$, assume for $m<n$ that $N_m(\op X)\subseteq \op X_m$ has been constructed with the desired property. The degeneracies of $\op X$ induce a map \[s:\coprod_{\pi:[n]\twoheadrightarrow[m]\neq [n]}N_m(\op X)\to \op X_n,\] and the observations above imply that $s$ is injective. We claim that $s$ is a local homeomorphism and hence the inclusion of a collection of connected components, which is enough to imply the claim, for in this case we may take $N_n(\op X)=\op X_n\setminus\mathrm{im}(s)$. To see that $s$ is a local homeomorphism, we may restrict our attention to the component indexed by $\pi$, in which case it suffices to show that the top map in the commuting diagram \[\xymatrix{ \op X_n\ar[d]_-{f_n}& \op X_m\ar[l]_-{\pi^*}\ar[d]^-{f_m}\\ \op Y_n& \op Y_m\ar[l]^-{\pi^*} }\] is a local homeomorphism. Since $\op Y$ is split, the bottom map is a local homeomorphism, and the righthand map, as a covering map, is a local homeomorphism. Thus, by commutativity, given $x\in \op X_m$, there is a connected open neighborhood $x\in U$ such that $(f_n\circ\pi^*)|_{U}$ is a homeomorphism onto its image.
Since $U$ is connected and $f_n$ is a covering map, there is a subspace $V\subseteq \op X_n$ such that $\pi^*(U)\subseteq V$ and $f_n|_V$ is a homeomorphism onto its image. It follows that $\pi^*|_{U}$ is a homeomorphism onto its image, as desired. \end{proof} \begin{corollary}\label{cor:hypercovers are split} If $\op X$ is a hypercover, then $\op X$ is split. \end{corollary} \begin{proof} Since the condition of being split is a condition that, in each degree, involves only a truncation of the simplicial space in question, and since a hypercover $\op X$ coincides with some bounded hypercover through any simplicial degree, it suffices to assume that $\op X$ is bounded of height at most $N$. We proceed by induction on $N$, the base case being the observation that \v{C}ech covers are split, with degree $n$ non-degenerate part given by the union of the $(n+1)$-fold intersections of cover elements in which no two adjacent indices coincide. The inductive step follows from Lemma \ref{lem:covering map split}, since $\op X\to \mathrm{cosk}_{N-1}(\op X)$ is a degreewise covering map with split target by Lemma \ref{lem:coskeleta} and induction. \end{proof} We also have the following simple observation. \begin{lemma}\label{lem:split levelwise} If $f:\op X\to \op Y$ is a levelwise weak equivalence and $\op Y$ is split, then $\op X$ is split. \end{lemma} \begin{corollary}\label{cor:horizontal cech cover split} Let $f:\op X\to \op Y$ be a degreewise covering map and $\op W$ the bisimplicial space obtained by forming the degreewise \v{C}ech nerve. If $\op Y$ is split, then $|\op W|_h$ is split. \end{corollary} Finally, we have the following observation, which is immediate from the fact that the singular functor and geometric realization each preserve coproducts. \begin{lemma}\label{lem:realization split} If $\op X$ is split, then the simplicial space $|\mathrm{Sing}(\op X)|$, obtained by applying the singular functor and geometric realization in each degree, is split. 
\end{lemma} \section{Homotopy colimits}\label{section:homotopy colimits} \subsection{Bar construction} Recall that, if $M$ and $N$ are differential graded modules over a ring $R$, then the homology of the relative tensor product $M\otimes_R N$ may fail to be invariant under quasi-isomorphisms in the factors. This defect can be eliminated by resolving $M$, say, by nice $R$-modules before computing the tensor product, and a canonical choice of such a resolution with free entries is given by the \emph{bar complex} \[\cdots\to M\otimes R^{\otimes n+1}\to \cdots\to M\otimes R\to M,\] where the differentials are defined as alternating sums of maps defined in terms of the multiplication in $R$ and the module structure of $M$. In order to apply this intuition in more general contexts, we note that the bar complex arises from a simplicial $R$-module whose construction depended only on the existence of a free/forgetful adjunction relating differential graded $R$-modules to chain complexes. The corresponding simplicial \emph{monadic bar construction} is available whenever such adjunction data is present. In our context, the role of the $R$-module $M$ is played by a functor $F:I\to \mathcal{T}\mathrm{op}$. Since a ring is roughly like a category with one object, we guess that the corresponding forgetful functor should be the functor that only remembers the values of a functor on objects. In other words, writing $I_0$ for the discrete category with the same objects as $I$, we should consider the forgetful functor \[\iota^*:\mathcal{T}\mathrm{op}^I\to \mathcal{T}\mathrm{op}^{I_0}\] given by restriction along the inclusion $\iota:I_0\to I$. This functor does admit a left adjoint $\iota_!$, given by left Kan extension. 
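Before continuing, let us record for concreteness the differentials in the classical bar complex above; this is an aside, and we fix one standard choice of sign conventions (other conventions differ only by signs). The differential $M\otimes R^{\otimes n+1}\to M\otimes R^{\otimes n}$ is the alternating sum \[d(m\otimes r_0\otimes\cdots\otimes r_n)=mr_0\otimes r_1\otimes\cdots\otimes r_n+\sum_{i=1}^{n}(-1)^i\, m\otimes r_0\otimes\cdots\otimes r_{i-1}r_i\otimes\cdots\otimes r_n,\] so that, in the lowest degrees, $d(m\otimes r_0)=mr_0$ and $d(m\otimes r_0\otimes r_1)=mr_0\otimes r_1-m\otimes r_0r_1$. The reader may check directly that $d^2=0$, using the associativity of the multiplication in $R$ and of the module structure on $M$.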
The standard formula for the left Kan extension gives us \begin{align*} \iota_!\iota^*F(i)=\colim\left((\iota\downarrow i)\to I_0\xrightarrow{\iota} I\xrightarrow{F} \mathcal{T}\mathrm{op}\right)\cong \coprod_{i_1\to i}F(i_1), \end{align*} and, more generally, \[(\iota_!\iota^*)^{n+1}F(i)\cong\coprod_{i_{n+1}\to \cdots \to i_1\to i}F(i_{n+1}).\] We obtain in this way a simplicial functor, whose geometric realization maps to $F$, which we wish to think of as a kind of resolution of $F$. Our guess, then, is that the homotopy colimit should be the colimit of this geometric realization. Since colimits commute with geometric realization, and since \[\colim_I(\iota_!\iota^*)^{n+1} F\cong \colim_{I_0}\iota^*(\iota_!\iota^*)^nF\cong \coprod_{i_0\in I}(\iota_!\iota^*)^nF(i_0)\cong \coprod_{i_n\to \cdots \to i_0}F(i_n),\] this guess may be rephrased in terms of the following object. \begin{definition} Let $F:I\to \mathcal{T}\mathrm{op}$ be a functor. The \emph{(simplicial) bar construction} on $F$ is the simplicial space $\mathrm{Bar}_\bullet(F)$ with \[\mathrm{Bar}_n(F)=\coprod_{i_n\to \cdots \to i_0}F(i_n)\] with face maps given by composition in $I$ and the functoriality of $F$, and with degeneracy maps given by insertion of identities. \end{definition} \begin{hypothesis} Let $F:I\to \mathcal{T}\mathrm{op}$ be a functor. The homotopy colimit of $F$ is the space \[\hocolim_I F=|\mathrm{Bar}_\bullet(F)|.\] \end{hypothesis} The first check of this hypothesis is to verify the following. \begin{proposition} Let $F,G:I\to \mathcal{T}\mathrm{op}$ be functors and $\varphi:F\to G$ a natural transformation. If $\varphi(i)$ is a weak homotopy equivalence for every $i\in I$, then $|\mathrm{Bar}_\bullet(\varphi)|$ is also a weak homotopy equivalence. \end{proposition} \begin{proof} It is easy to see that the simplicial space $\mathrm{Bar}_\bullet(F)$ is split, so the claim follows from Proposition \ref{prop:split criterion}.
\end{proof} Since there is a natural map $|\mathrm{Bar}_\bullet(F)|\to \colim_I F$ supplied by the observation that \[\colim_IF\cong \mathrm{coeq}\left(\coprod_{i_1\to i_0}F(i_1)\rightrightarrows \coprod_{i_0}F(i_0)\right),\] we may summarize our progress so far as having exhibited one homotopy invariant approximation to the colimit. On the other hand, another homotopy invariant approximation to the colimit is the constant functor with value $\varnothing$! Why should we think that our proposed construction of the homotopy colimit is any better than this functor? \subsection{Derived functors} We now formulate precisely what it means to be the best approximation by a homotopy invariant functor. \begin{definition} A \emph{category with weak equivalences} is a pair $(\op C, \mathrm{weq}(\op C))$ of a category and a collection of morphisms that contains the isomorphisms and has the property that if, in the commuting diagram \[\xymatrix{ A\ar[dr]\ar[rr]&&B\ar[dl]\\ &C }\] in $\op C$, any two arrows lie in $\mathrm{weq}(\op C)$, then so does the third. \end{definition} The arrows in $\mathrm{weq}(\op C)$ are called \emph{weak equivalences}, and the closure property is called \emph{two-out-of-three}. \begin{example} Two cases of interest are the category of topological spaces with weak homotopy equivalences and the category of chain complexes with quasi-isomorphisms. \end{example} \begin{example} If $(\op C, \mathrm{weq}(\op C))$ is a category with weak equivalences and $I$ is a category, then the functor category $\op C^I$ is again a category with weak equivalences when equipped with the \emph{pointwise} weak equivalences, i.e., a natural transformation is a weak equivalence if and only if each of its components is so. \end{example} \begin{definition} Let $(\op C, \mathrm{weq}(\op C))$ be a category with weak equivalences.
A \emph{homotopy category} for $\op C$ is a category $\mathrm{Ho}(\op C)$ equipped with a functor $\gamma=\gamma_{\op C}:\op C\to \mathrm{Ho}(\op C)$ with the following properties. \begin{enumerate} \item If $f\in\mathrm{weq}(\op C)$, then $\gamma(f)$ is an isomorphism. \item Any functor $F:\op C\to \op D$ sending weak equivalences in $\op C$ to isomorphisms in $\op D$ factors uniquely through $\gamma$. \end{enumerate} \end{definition} If $\op C$ has a homotopy category, then it is unique up to a unique equivalence of categories, so there is no harm in referring to \emph{the} homotopy category of $\op C$. Often it will be the case, as with the colimit functor, that one is given a functor $F$ that does not send weak equivalences to isomorphisms. In this case, one can ask for the best approximation to $F$ by a functor having this property. \begin{definition} Let $(\op C, \mathrm{weq}(\op C))$ be a category with weak equivalences and $F:\op C\to \op D$ a functor. A \emph{left derived functor} of $F$ is a functor $\mathbb{L}F:\mathrm{Ho}(\op C)\to \op D$ equipped with a natural transformation $\mathbb{L}F\circ\gamma\to F$ that is final among functors $T:\mathrm{Ho}(\op C)\to \op D$ equipped with natural transformations $T\circ \gamma\to F$. \end{definition} Dually, we have the notion of a \emph{right} derived functor $\mathbb{R}F$. Note that, categorically speaking, the left derived functor $\mathbb{L}F$ is the right Kan extension of $F$ along $\gamma$. \begin{definition} Let $(\op C,\mathrm{weq}(\op C))$ be a category with weak equivalences, and assume that $\op C$ admits colimits indexed by $I$.
The \emph{homotopy colimit} functor for $I$-shaped diagrams, if it exists, is the left derived functor of the composite $\gamma\circ\colim_I$, as depicted in the following diagram: \[\xymatrix{ \op C^I\ar[d]_-\gamma\ar[rr]^-{\colim_I}&&\op C\ar[d]^-\gamma\\ \mathrm{Ho}(\op C^I)\ar[rr]^-{\hocolim_I}\ar@{=>}[urr]&&\mathrm{Ho}(\op C) }\] \end{definition} One says that $\hocolim_I$ is the \emph{total left derived functor} of $\colim_I$. Thus, the homotopy colimit is the closest approximation to the colimit by a homotopy invariant construction. \subsection{Model structures} Our next task is to make this definition into something usable in practice. Our approach, following \cite{Quillen:HA}, will be to impose extra structure on our category, motivated by the structure observed ``in the wild'' in the homotopy theory of spaces, enabling us to make these abstract notions concrete. It should be emphasized, however, that this structure is scaffolding, and that the fundamental objects of interest are all at the level of the bare category with weak equivalences. \begin{definition} Let $(\op C, \mathrm{weq}(\op C))$ be a category with weak equivalences, and assume that $\op C$ has small limits and colimits. A \emph{model structure} on $\op C$ is a pair $(\mathrm{cof}(\op C),\mathrm{fib}(\op C))$ of classes of morphisms in $\op C$ satisfying the following axioms. \begin{enumerate} \item Both $\mathrm{cof}(\op C)$ and $\mathrm{fib}(\op C)$ are closed under retracts in the arrow category $\op C^{\Delta^1}$. \item If $i\in\mathrm{cof}(\op C)$ and $p\in \mathrm{fib}(\op C)$, then the dashed filler exists in the commuting diagram \[\xymatrix{ A\ar[d]_-i\ar[r]&B\ar[d]^-p\\ C\ar@{-->}[ur]\ar[r]&D }\] provided either $i$ or $p$ is a weak equivalence.
\item Any morphism $f:A\to B$ in $\op C$ may be factored as \begin{enumerate} \item $f=p\circ i$ with $i\in \mathrm{cof}(\op C)\cap \mathrm{weq}(\op C)$ and $p\in \mathrm{fib}(\op C)$ and as \item $f=q\circ j$ with $j\in \mathrm{cof}(\op C)$ and $q\in \mathrm{fib}(\op C)\cap \mathrm{weq}(\op C)$. \end{enumerate} \end{enumerate} A \emph{model category} is a category with weak equivalences equipped with a model structure. The morphisms in $\mathrm{cof}(\op C)$, $\mathrm{fib}(\op C)$, $\mathrm{cof}(\op C)\cap \mathrm{weq}(\op C)$, and $\mathrm{fib}(\op C)\cap \mathrm{weq}(\op C)$ are the \emph{cofibrations}, the \emph{fibrations}, the \emph{trivial cofibrations}, and the \emph{trivial fibrations}, respectively. An object is \emph{cofibrant} if the unique morphism from the initial object of $\op C$ is a cofibration, and \emph{fibrant} if the unique morphism to the final object is a fibration. \end{definition} In the situation of (2), we say that $i$ has the \emph{left lifting property} with respect to $p$, and that $p$ has the \emph{right lifting property} with respect to $i$. \begin{exercise}[{See \cite[7]{Hirschhorn:MCL}}] Derive the following consequences of the model category axioms. \begin{enumerate} \item Weak equivalences are closed under retracts in the arrow category. \item (Trivial) cofibrations are closed under coproducts and pushouts along arbitrary morphisms. \item (Trivial) fibrations are closed under products and pullbacks along arbitrary morphisms. \item Cofibrations are exactly those morphisms with the left lifting property with respect to every trivial fibration, and similarly for the other classes of morphisms. \end{enumerate} \end{exercise} From the point of view of our motivating questions, the benefit of the presence of a model structure on a category with weak equivalences is that it allows for explicit control over the homotopy category and derived functors. \begin{proposition}[{\cite[8.3.5, 8.4.4]{Hirschhorn:MCL}}] Let $\op C$ be a model category and $F:\op C\to \op D$ a functor.
\begin{enumerate} \item The homotopy category $\mathrm{Ho}(\op C)$ exists. \item If $F$ sends weak equivalences between cofibrant objects to isomorphisms in $\op D$, then $\mathbb{L} F$ exists, and the natural map $\mathbb{L}F(\gamma(C))\to \gamma(F(C))$ is an isomorphism for $C$ cofibrant (resp. fibrant, $\mathbb{R}F$). \end{enumerate} \end{proposition} From what we have said so far, we draw the following strategy for working with homotopy colimits: \begin{enumerate} \item endow the functor category $\mathcal{T}\mathrm{op}^I$ with a model structure; \item verify that, in this model structure, the total left derived functor of $\colim_I:\mathcal{T}\mathrm{op}^I\to \mathcal{T}\mathrm{op}$ exists; and \item understand cofibrant replacement in this model structure. \end{enumerate} With these steps accomplished, we may compute the homotopy colimit by cofibrantly replacing and computing the ordinary colimit. We will be aided in accomplishing the second step of this strategy by the following result---see \cite[8.5.3, 8.5.18]{Hirschhorn:MCL}. \begin{theorem}[Quillen] Let $\op C$ and $\op D$ be model categories and \[\adjunct{\op C}{\op D}{F}{G}\] an adjunction. The following two conditions are equivalent: \begin{enumerate} \item $F(\mathrm{cof}(\op C))\subseteq \mathrm{cof}(\op D)$ and $F(\mathrm{cof}(\op C)\cap \mathrm{weq}(\op C))\subseteq \mathrm{cof}(\op D)\cap \mathrm{weq}(\op D)$; \item $G(\mathrm{fib}(\op D))\subseteq \mathrm{fib}(\op C)$ and $G(\mathrm{fib}(\op D)\cap \mathrm{weq}(\op D))\subseteq \mathrm{fib}(\op C)\cap \mathrm{weq}(\op C)$. \end{enumerate} Moreover, if these conditions hold, there is an adjunction \[\adjunct{\mathrm{Ho}(\op C)}{\mathrm{Ho}(\op D).}{\mathbb{L}(\gamma_{\op D}\circ F)}{\mathbb{R}(\gamma_{\op C}\circ G)}\] In particular, both total derived functors exist. 
\end{theorem} Such an adjunction is commonly known as a \emph{Quillen adjunction} or \emph{Quillen pair}, and the functors $F$ and $G$ are left and right \emph{Quillen functors}, respectively. Thus, if the putative model structure on $\mathcal{T}\mathrm{op}^I$ is chosen to be compatible with some fixed model structure on $\mathcal{T}\mathrm{op}$ in the sense that the diagonal functor $\mathcal{T}\mathrm{op}\to \mathcal{T}\mathrm{op}^I$ is a right Quillen functor, then the existence of the homotopy colimit will be assured. We begin by specifying our choice of model structure on $\mathcal{T}\mathrm{op}$. \begin{theorem}[Quillen]\label{thm:quillen model structure} The following specifications define a model structure on the category $\mathcal{T}\mathrm{op}$ of topological spaces: \begin{enumerate} \item the weak equivalences are the weak homotopy equivalences; \item the fibrations are the Serre fibrations; \item the cofibrations are the retracts of relative cell complexes. \end{enumerate} \end{theorem} \begin{proof} We highlight only one aspect of the proof, namely the verification of the factorization axiom via the so-called \emph{small object argument}. For a detailed proof of the full result, see \cite{Hirschhorn:QMCTS}. Let $f:X\to Y$ be any map, which we wish to factor as $f=p\circ i$ with $p$ a fibration and $i$ a trivial cofibration. We define $X_1$ as the pushout in the diagram \[\xymatrix{ \displaystyle\coprod\Lambda_k^n\ar[d]\ar[r]& X\ar[d]^-{i_1}\\ \displaystyle\coprod\Delta^n\ar[r]&X_1, }\] where the coproduct in each case is indexed by the set \[\coprod_{n\geq0}\coprod_{0\leq k\leq n}\mathrm{Hom}(\Delta^n,Y)\times_{\mathrm{Hom}(\Lambda_k^n, Y)}\mathrm{Hom}(\Lambda_k^n, X).\] The simplicial set $\Lambda_k^n$ is the $k$th \emph{horn} of $\Delta^n$---see \cite[I.1]{GoerssJardine:SHT}. 
Setting $X_0=X$ and assuming $i_r:X_{r-1}\to X_r$ to have been defined, we apply the same procedure to the induced map $X_r\to Y$ to obtain $i_{r+1}:X_r\to X_{r+1}$, and we set $X_f=\colim_{r\geq0} X_r$. We claim that, in the commuting diagram \[\xymatrix{ X\ar[dr]_-f\ar[rr]^-i&& X_f\ar[dl]^-p\\ &Y, }\] $i$ is a trivial cofibration and $p$ a fibration. The former claim involves finding a lift in the diagram \[\xymatrix{ X\ar[d]_-{i_1}\ar[rrr]&&&E\ar[ddd]^-q\\ X_1\ar[d]_-{i_2}\\ \vdots\ar[d]\\ X_f\ar[rrr]&&&B }\] with $q$ a fibration, which may be accomplished inductively using the fact that each $i_r$ is a trivial cofibration, since each inclusion $\Lambda_k^n\subseteq \Delta^n$ is so. For the latter claim, we note that any map of pairs $(\Delta^n,\Lambda_k^n)\to (Y, X_f)$ factors as in the diagram \[\xymatrix{ \Lambda_k^n\ar[dd]\ar[r]&X_r\ar[d]_-{i_{r+1}}\ar[r]& X_f\ar[dd]^-p\\ &X_{r+1}\ar[ur]\ar[dr]\\ \Delta^n\ar@{-->}[ur]\ar[rr]&&Y, }\] where the factorization through some $X_r$ exists by Proposition \ref{prop:factor through colimit}, and the dashed filler exists by our definition of $X_{r+1}$ as the pushout. In order to factor $f$ as a cofibration followed by a trivial fibration, we apply the same argument with horn inclusions replaced by the inclusion $\partial \Delta^n\subseteq \Delta^n$. \end{proof} \begin{remark} Note that the factorization produced by the small object argument is even functorial. \end{remark} \subsection{Cofibrant generation} The only essential features of the category $\mathcal{T}\mathrm{op}$ used in the argument of Theorem \ref{thm:quillen model structure} were the existence of a collection $S=\{\Lambda_k^n\to \Delta^n\}$ of ``test maps'' for fibrations, and similarly for trivial fibrations, with the property that the domain of an element of $S$ is \emph{small} with respect to composites of iterated pushouts of members of $S$. These features can be axiomatized. \begin{definition} Let $\op C$ be a category and $S$ a set of morphisms in $\op C$.
A morphism in $\op C$ is \begin{enumerate} \item $S$-\emph{injective} if it has the right lifting property with respect to every element of $S$; \item an $S$-\emph{cofibration} if it has the left lifting property with respect to every $S$-injective; \item a \emph{relative $S$-cell complex} if it is a composite of pushouts of elements of $S$. \end{enumerate} We say that $S$ \emph{permits the small object argument} if the domains of elements of $S$ are small with respect to the collection of relative $S$-cell complexes. \end{definition} \begin{example} Let $\op C=\mathcal{T}\mathrm{op}$. If $S$ is the collection of inclusions $\partial\Delta^n\subseteq\Delta^n$, then the $S$-injectives are the trivial fibrations, and the $S$-cofibrations are the cofibrations. On the other hand, if $S$ is the collection of inclusions $\Lambda_k^n\subseteq\Delta^n$, then the $S$-injectives are fibrations, and the $S$-cofibrations are the trivial cofibrations. Both of these collections permit the small object argument, since in both cases every element of $S$ is a relatively $T_1$ inclusion with compact domain. \end{example} Motivated by this example, we think of the elements of $S$ as ``generators'' for a set of cofibrations. \begin{theorem}[Kan] Let $(\op C, \mathrm{weq}(\op C))$ be a category with weak equivalences with $\mathrm{weq}(\op C)$ closed under retracts, and let $S_\mathrm{cof}$ and $S_\mathrm{triv}$ be sets of morphisms in $\op C$. 
Assume that \begin{enumerate} \item $S_\mathrm{cof}$ and $S_\mathrm{triv}$ each permit the small object argument, \item every $S_\mathrm{triv}$-cofibration is both an $S_\mathrm{cof}$-cofibration and a weak equivalence, \item every $S_\mathrm{cof}$-injective is both an $S_\mathrm{triv}$-injective and a weak equivalence, and \item one of the following conditions holds: \begin{enumerate} \item any map that is an $S_\mathrm{cof}$-cofibration and a weak equivalence is also an $S_\mathrm{triv}$-cofibration, or \item any map that is an $S_\mathrm{triv}$-injective and a weak equivalence is also an $S_\mathrm{cof}$-injective. \end{enumerate} \end{enumerate} Then $(\op C,\mathrm{weq}(\op C))$ extends to a model structure with cofibrations the $S_\mathrm{cof}$-cofibrations and fibrations the $S_\mathrm{triv}$-injectives. \end{theorem} \begin{proof} Given $f:X\to Y$, we apply the same procedure as above to factor $f$ as $p \circ i$, where $p$ is an $S_\mathrm{triv}$-injective and $i$ is a relative $S_\mathrm{triv}$-cell complex. The latter is in particular an $S_\mathrm{triv}$-cofibration, so point (2) applies to show that this factorization is of the desired form. The dual argument, using point (3), furnishes the other factorization. Point (4) is used to demonstrate the lifting axiom. For a detailed account, see \cite[11.3.1]{Hirschhorn:MCL}. \end{proof} One can show that, in the situation of this theorem, the cofibrations are exactly the retracts of relative $S_\mathrm{cof}$-cell complexes, and similarly for trivial cofibrations and $S_\mathrm{triv}$---see \cite[10.5.22]{Hirschhorn:MCL}. For obvious reasons, it is standard to refer to such collections $S_\mathrm{cof}$ and $S_\mathrm{triv}$ as \emph{generating} cofibrations and trivial cofibrations, respectively, and to refer to the resulting model category as \emph{cofibrantly generated}. One of the excellent features of cofibrantly generated model structures is that they are very portable. 
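It is perhaps worth recording, for comparison with the simplicial generators used above, the classical generating sets for the Quillen model structure; this is a standard fact, stated here without proof.

```latex
% Classical generating sets for the Quillen model structure on Top (standard).
\begin{example}
The Quillen model structure on $\mathcal{T}\mathrm{op}$ is cofibrantly
generated by the sets
\[
S_\mathrm{cof}=\{\, S^{n-1}\hookrightarrow D^n \mid n\geq0 \,\},
\qquad
S_\mathrm{triv}=\{\, D^n\hookrightarrow D^n\times[0,1] \mid n\geq0 \,\},
\]
where $S^{-1}=\varnothing$. Up to homeomorphism, these are the geometric
realizations of the inclusions $\partial\Delta^n\subseteq\Delta^n$ and
$\Lambda_k^n\subseteq\Delta^n$ appearing above.
\end{example}
```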
\begin{corollary} Let $\op C$ be a cofibrantly generated model category with generators $S_\mathrm{cof}$ and $S_\mathrm{triv}$, $\op D$ a category with small (co)limits, and \[\adjunct{\op C}{\op D}{F}{G}\] an adjunction. If \begin{enumerate} \item $F(S_\mathrm{cof})$ and $F(S_\mathrm{triv})$ permit the small object argument, and \item $G$ sends relative $F(S_\mathrm{triv})$-cell complexes to weak equivalences, \end{enumerate} then the category with weak equivalences $(\op D, G^{-1}(\mathrm{weq}(\op C)))$ extends to a model structure cofibrantly generated by $F(S_\mathrm{cof})$ and $F(S_\mathrm{triv})$. Moreover, $F$ and $G$ are a Quillen pair with respect to this model structure. \end{corollary} \begin{proof} We apply the previous theorem. To begin, we note that $G^{-1}(\mathrm{weq}(\op C))$ satisfies two-out-of-three and is closed under retracts, and point (1) is true by assumption. For point (2), we note that $F(S_\mathrm{triv})$-cofibrations are retracts of relative $F(S_\mathrm{triv})$-cell complexes, which are sent to weak equivalences by assumption, so $F(S_\mathrm{triv})$-cofibrations are weak equivalences. To see that they are also $F(S_\mathrm{cof})$-cofibrations, it suffices by definition to show that $F(S_\mathrm{cof})$-injectives are also $F(S_\mathrm{triv})$-injectives. For this, we note that the two lifting problems \[\xymatrix{ F(A)\ar[d]_-{F(i)}\ar[r]&B\ar[d]^-p&&A\ar[d]_-i\ar[r]&G(B)\ar[d]^-{G(p)}\\ F(C)\ar@{-->}[ur]\ar[r]&D&&C\ar@{-->}[ur]\ar[r]&G(D) }\] are equivalent, and that $S_\mathrm{cof}$-injectives are also $S_\mathrm{triv}$-injectives. The remaining assumptions are verified in a similar manner---see \cite[11.3.2]{Hirschhorn:MCL} for a detailed account. \end{proof} The resulting model structure on $\op D$ is called the \emph{transferred} model structure.
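As a simple illustration of the transfer, consider the adjunction between spaces and pointed spaces; the following sketch records the standard outcome.

```latex
% Transfer along the "add a disjoint basepoint" adjunction
% (-)_+ : Top <-> Top_* : U, where U forgets the basepoint.
\begin{example}
Applying the corollary to the adjunction $(-)_+:\mathcal{T}\mathrm{op}
\rightleftarrows\mathcal{T}\mathrm{op}_*:U$ yields the usual model structure
on pointed spaces, in which a map is a weak equivalence or fibration if and
only if its underlying map of spaces is one. Condition (2) holds because the
underlying space of a pushout along $(\Lambda_k^n)_+\subseteq(\Delta^n)_+$
is a pushout along $\Lambda_k^n\subseteq\Delta^n$, so $U$ carries relative
$(\Lambda_k^n)_+$-cell complexes to trivial cofibrations in
$\mathcal{T}\mathrm{op}$.
\end{example}
```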
\subsection{Projective model structure} In our example of interest, we obtain the \emph{projective} model structure on functors: \begin{corollary}[{\cite[11.6.1]{Hirschhorn:MCL}}] Let $\op C$ be a cofibrantly generated model category and $I$ any small category. The following specifications define a model structure on the functor category $\op C^I$: \begin{enumerate} \item the weak equivalences are the pointwise weak equivalences; \item the fibrations are the pointwise fibrations; \item the cofibrations are the natural transformations with the left lifting property with respect to every pointwise fibration. \end{enumerate} \end{corollary} Turning now to the question of cofibrant replacement, we write $\mathrm{Bar}_\bullet(F,-)$ for the augmented simplicial functor given in degree $n$ by \[\mathrm{Bar}_n(F,-)=(\iota_!\iota^*)^{n+1}F\cong\coprod_{I_0^{n+1}}F(i_{n+1})\times \mathrm{Hom}(i_{n+1}, i_n)\times\cdots\times\mathrm{Hom}(i_2,i_1)\times \mathrm{Hom}(i_1,-)\] and with the face and degeneracy maps induced by composition and insertion of identities, respectively. As we saw in the previous lecture, we have $\colim_I\mathrm{Bar}_\bullet(F,-)=\mathrm{Bar}_\bullet(F)$. \begin{proposition}\label{prop:bar is cofibrant} The augmentation $|\mathrm{Bar}_\bullet(F,-)|\to F$ is a pointwise weak homotopy equivalence. Moreover, if $F$ is pointwise cofibrant, then $|\mathrm{Bar}_\bullet(F,-)|$ is projective cofibrant. \end{proposition} \begin{proof} Since $\iota^*$ reflects weak equivalences and commutes with colimits, it suffices instead to prove the first claim for the geometric realization of the augmented simplicial object $\iota^*\mathrm{Bar}_\bullet(F, -)$ in $\mathcal{T}\mathrm{op}^{I_0}$. But this augmented simplicial object has an extra degeneracy, which is given by the unit of the $(\iota_!,\iota^*)$-adjunction.
For the second claim, it will suffice to show that each of the maps $|\mathrm{sk}_{n-1}(\mathrm{Bar}_\bullet(F,-))|\to |\mathrm{sk}_n(\mathrm{Bar}_\bullet(F,-))|$ is a cofibration between cofibrant objects. We proceed by induction on $n$, the base case of the cofibrancy of the $0$-skeleton following from our assumption that $F$ is pointwise cofibrant. For the induction step, we write \[N_n(-)=\coprod_{I_0^{n+1}}F(i_{n+1})\times \mathrm{Hom}'(i_{n+1}, i_n)\times\cdots\times \mathrm{Hom}'(i_2,i_1)\times \mathrm{Hom}(i_1,-)\subseteq \mathrm{Bar}_\bullet(F,-),\] where \[\mathrm{Hom}'(i,j)=\begin{cases} \mathrm{Hom}(i,j)&\quad i\neq j\\ \mathrm{Hom}(i,i)\setminus\{\id_i\}&\quad i=j. \end{cases} \] After evaluating at an object $i$, these spaces witness $\mathrm{Bar}_\bullet(F, i)$ as split, so we have the pushout square \[\xymatrix{ \partial\Delta^n\times N_n(i)\ar[r]\ar[d]&|\mathrm{sk}_{n-1}(\mathrm{Bar}_\bullet(F,i))|\ar[d]\\ \Delta^n \times N_n(i)\ar[r]&|\mathrm{sk}_{n}(\mathrm{Bar}_\bullet(F,i))|, }\] which is moreover natural in $i$. Since the $(n-1)$-skeleton is known to be cofibrant by induction, it suffices to show that the map $\partial\Delta^n \times N_n(-)\to \Delta^n\times N_n(-)$ is a projective cofibration. For every $j\in I_0$, define a functor $N_n^{j}:I_0\to \mathcal{T}\mathrm{op}$ by \[N_n^{j}(i):=\begin{cases} \displaystyle\coprod_{I_0^{n}}F(i_{n+1})\times \mathrm{Hom}'(i_{n+1}, i_{n})\times\cdots\times \mathrm{Hom}'(i_2,j)&\quad i=j\\ \varnothing&\quad\text{otherwise.} \end{cases}\] Then we evidently have the commuting diagram \[\xymatrix{ \partial\Delta^n\times N_n(-)\ar[r]&\Delta^n\times N_n(-)\\ \displaystyle\coprod_{I_0}\iota_!\left(\partial\Delta^n\times N_n^{j}\right)\ar@{=}[u]^-\wr\ar[r]&\displaystyle\coprod_{I_0}\iota_!\left(\Delta^n\times N_n^{j}\right)\ar@{=}[u]_-\wr }\] of functors, where the components of the bottom arrow are each induced by the inclusion $\partial \Delta^n\subseteq \Delta^n$. 
Since this map is a cofibration in $\mathcal{T}\mathrm{op}$, since products with cofibrant objects preserve cofibrations, since $\iota_!$ is a left Quillen functor, and since cofibrations are closed under coproducts, it follows that the bottom arrow, and hence the top arrow, in this diagram is a projective cofibration. \end{proof} \begin{corollary} For any functor $F:I\to \mathcal{T}\mathrm{op}$, the canonical map $|\mathrm{Bar}_\bullet(F)|\to \colim_I F$ exhibits a homotopy colimit of $F$. \end{corollary} \begin{proof} Since the values of the functor $\mathrm{Bar}_\bullet(-):\mathcal{T}\mathrm{op}^I\to \mathcal{T}\mathrm{op}^{\Delta^{op}}$ are all split, the functor $|\mathrm{Bar}_\bullet(-)|$ descends to a functor at the level of homotopy categories. Thus, from the existence of the canonical map to the colimit and the universal property of the derived functor, we obtain the commuting diagram \[\xymatrix{ \gamma\left(|\mathrm{Bar}_\bullet(Q(F))|\right)\ar[d]\ar[r]&\gamma\left(|\mathrm{Bar}_\bullet(F)|\right)\ar[d]\\ \hocolim_IQ(F)\ar[r]&\hocolim_IF }\] in $\mathrm{Ho}(\mathcal{T}\mathrm{op})$, where $Q:\mathcal{T}\mathrm{op} \to \mathcal{T}\mathrm{op}$ is any cofibrant replacement functor (for example, we may take $Q(X)$ to be the geometric realization of the singular simplicial set of $X$). The bottom map is an isomorphism, since weak equivalences in $\mathcal{T}\mathrm{op}^I$ are pointwise; the top map is a weak equivalence because both simplicial spaces are split; and the lefthand map is an isomorphism by Proposition \ref{prop:bar is cofibrant}. It follows that the righthand map is an isomorphism, as claimed. 
\end{proof} \begin{remark} Most of what we have said carries over verbatim to the setting of a general cofibrantly generated simplicial model category, but the ability to forego pointwise cofibrant replacement seems to be an accident specific to $\mathcal{T}\mathrm{op}$, which is connected to the existence of the \emph{Str{\o}m model structure}---see \cite[A]{DuggerIsaksen:THAR} for further discussion. \end{remark} \begin{recollection} To a category $I$, we may associate its \emph{nerve}, which is the simplicial set $NI$ given in degree $n$ by $NI_n=\mathrm{Fun}([n], I)$, the set of composable $n$-tuples of morphisms in $I$. The \emph{classifying space} of $I$ is the space $BI:=|NI|$, and we say that $I$ is \emph{contractible} if its classifying space is weakly contractible. We say that a functor $T:I\to J$ is \emph{homotopy final} if the overcategory $(j\downarrow T)$ is contractible for every object $j\in J$ (resp. \emph{homotopy initial}, $(T\downarrow j)$). Homotopy final functors induce weak homotopy equivalences on classifying spaces, and, since $BI\simeq BI^{op}$, the same holds for homotopy initial functors. \end{recollection} \begin{example} A category with an initial or final object is contractible, as is a (co)filtered category. \end{example} We record the following standard facts about homotopy colimits. \begin{proposition}[{\cite[6.7,\,20.3]{Dugger:PHC}}]\label{prop:hocolim facts} Let $F:J\to \mathcal{T}\mathrm{op}$ be a functor. \begin{enumerate} \item If $T:I\to J$ is homotopy final, then the induced map \[\hocolim_I T^*F\to \hocolim_J F\] is a weak equivalence. \item If $J=\Delta^{op}$ and $F$ is split, then \[\hocolim_{\Delta^{op}}F\simeq |F|.\] \end{enumerate} \end{proposition} \subsection{Relative homotopy colimits} We will also have use for a relative version of the homotopy colimit. 
Note that, if $\lambda:I\to J$ is a functor, then the restriction $\lambda^*$ preserves fibrations and weak equivalences in the respective projective model structures, since both are pointwise. Thus, $(\lambda_!,\lambda^*)$ is a Quillen pair, and we may contemplate the \emph{homotopy left Kan extension} $\mathrm{hoLan}_\lambda:=\mathbb{L}\lambda_!$. In order to understand this functor, we recall that, from the commuting diagram of categories \[\xymatrix{ (\lambda\downarrow j)\ar[d]_-\mathrm{pt}\ar[r]^-{\pi_j} &I\ar[d]^-\lambda\\ \mathrm{pt} \ar[r]^-{\iota_j}&J, }\] there is an induced \emph{base change isomorphism} \[\lambda_!F(j)=\iota_j^*\lambda_!F\xrightarrow{\simeq}\mathrm{pt}_!\pi_j^*F=\colim_{(\lambda\downarrow j)}\pi_j^*F.\] Our next result asserts that the analogous statement holds for the homotopical versions of these functors. \begin{corollary} For any $\lambda:I\to J$ and $F:I\to \mathcal{T}\mathrm{op}$, there is a natural isomorphism \[\mathrm{hoLan}_\lambda F(j)\cong \hocolim_{(\lambda\downarrow j)}\pi_j^*F\] in $\mathrm{Ho}(\mathcal{T}\mathrm{op}^J)$. \end{corollary} \begin{proof} We may assume that $F$ is pointwise cofibrant. In this case, by Proposition \ref{prop:bar is cofibrant}, we have the isomorphisms in $\mathrm{Ho}(\mathcal{T}\mathrm{op}^J)$ \[\mathrm{hoLan}_\lambda F(j)\cong \lambda_!|\mathrm{Bar}_\bullet(F, -)|(j)\cong \colim_{(\lambda\downarrow j)}|\mathrm{Bar}_\bullet(F,\pi_j(-))|,\] whereas \[\hocolim_{(\lambda\downarrow j)}\pi_j^*F\cong \colim_{(\lambda\downarrow j)}|\mathrm{Bar}_\bullet(\pi_j^*F, -)|.\] By inspection, the simplicial functors $\mathrm{Bar}_\bullet(F,\pi_j(-))$ and $\mathrm{Bar}_\bullet(\pi_j^*F,-)$ are isomorphic, and the claim follows.
\end{proof} \subsection{Quillen's Theorem B} For any functor $F:I\to \mathcal{T}\mathrm{op}$, there is a natural map \[\hocolim_I F\to BI,\] which is induced on geometric realizations by the simplicial map $\mathrm{Bar}_\bullet(F)\to \mathrm{Bar}_\bullet(\underline \mathrm{pt})$ arising from the unique natural transformation $F\to \underline \mathrm{pt}$ (note that we have used the isomorphism $BI\cong BI^{op}$, since the latter bar construction is the nerve $NI^{op}$). Since the bar construction is split, this map is a weak equivalence whenever $F$ is pointwise contractible. When $F$ is not pointwise contractible, it is often useful to be able to understand the homotopy fiber of this map. The result that will guide us in this task is due to Quillen and is usually referred to as ``Theorem B.'' We follow the treatment of \cite[IV]{GoerssJardine:SHT}, beginning with the following preliminary result, which is interesting in its own right. \begin{lemma}\label{lem:baby unstraightening} If $F:I\to \mathrm{sSet}$ is a functor sending each morphism in $I$ to a weak equivalence, then the diagram \[\xymatrix{ F(i)\ar[r]\ar[d]&d^*\mathrm{Bar}_\bullet(F)\ar[d]\\ \Delta^0\ar[r]^-i&NI^{op} }\] of simplicial sets is homotopy Cartesian for every object $i\in I$. \end{lemma} \begin{proof} We will produce a diagram of the form \[\xymatrix{ F(i)\ar[r]\ar[d]&K\times_{NI^{op}}d^*\mathrm{Bar}_\bullet(F)\ar[d]\ar[r]&d^*\mathrm{Bar}_\bullet(F)\ar[d]\\ \Delta^0\ar[r]^-j& K\ar[r]^-p&NI^{op}, }\] in which $j$ is a trivial cofibration and $p$ a fibration, and we will show that the upper left map is a weak equivalence. We take $K=\colim K_r$ to be the output of the small object argument applied to $\Delta^0\to NI^{op}$; that is, $K_r$ is defined as the pushout in the diagram \[\xymatrix{ \displaystyle \coprod \Lambda_k^n\ar[dd]\ar[rr]&&K_{r-1}\ar[dl]\ar[dd]\\ &K_r\ar@{-->}[dr]\\ \displaystyle\coprod \Delta^n\ar[ur]\ar[rr]&& NI^{op}, }\] where $K_0=\Delta^0$.
It now follows that each of the diagrams \[\xymatrix{ \displaystyle \coprod \Lambda_k^n\times_{NI^{op}}d^*\mathrm{Bar}_\bullet(F)\ar[d]\ar[r]&K_{r-1}\times_{NI^{op}}d^*\mathrm{Bar}_\bullet(F)\ar[d]\\ \displaystyle\coprod \Delta^n\times_{NI^{op}}d^*\mathrm{Bar}_\bullet(F)\ar[r]&K_r\times_{NI^{op}}d^*\mathrm{Bar}_\bullet(F), }\] is a pushout, since colimits in $(\mathrm{sSet}\downarrow NI^{op})$ are computed in simplicial sets, and since the fiber product with $d^*\mathrm{Bar}_\bullet(F)$ admits a right adjoint on this category. Thus, since $K_0\times_{NI^{op}}d^*\mathrm{Bar}_\bullet(F)=F(i),$ it will suffice to show that each of the maps \[\Lambda_k^n\times_{NI^{op}}d^*\mathrm{Bar}_\bullet(F)\to \Delta^n\times_{NI^{op}}d^*\mathrm{Bar}_\bullet(F)\] is a weak equivalence. This map is obtained by applying the diagonal to the bottom map in the diagram \[\xymatrix{ \displaystyle\coprod_{(k_0\leq\cdots\leq k_r)\in (\Lambda_k^n)_r}F(\sigma(n))\ar[r]\ar[d]&\displaystyle\coprod_{(k_0\leq\cdots\leq k_r)\in (\Delta^n)_r}F(\sigma(n))\ar[d]\\ \displaystyle\coprod_{(k_0\leq\cdots\leq k_r)\in (\Lambda_k^n)_r}F(\sigma(k_r))\ar[r]&\displaystyle\coprod_{(k_0\leq\cdots\leq k_r)\in (\Delta^n)_r}F(\sigma(k_r)) }\] of bisimplicial sets, where $\sigma:[n]\to I^{op}$ is the given simplex of $NI^{op}$. The vertical maps, which are weak equivalences by our assumption on $F$, are supplied by the unique morphism to the final object $n\in [n]$, and the diagonal of the top map is the weak equivalence $\Lambda_k^n\times F(\sigma(n))\to \Delta^n\times F(\sigma(n))$. It follows by two-out-of-three that the bottom map becomes a weak equivalence after applying $d^*$, which was to be shown. \end{proof} \begin{corollary}\label{cor:hocolim quasifibration} If $F:I\to \mathrm{Top}$ is a functor sending each morphism in $I$ to a weak equivalence, then the diagram \[\xymatrix{ F(i)\ar[r]\ar[d]&\hocolim_IF\ar[d]\\ \mathrm{pt}\ar[r]^-i&BI }\] of spaces is homotopy Cartesian for every object $i\in I$. 
\end{corollary} \begin{proof} Applying Lemma \ref{lem:baby unstraightening} to the functor $\mathrm{Sing}(F)$ yields the homotopy pullback square \[\xymatrix{ \mathrm{Sing}(F(i))\ar[d]\ar[r]&d^*\mathrm{Bar}_\bullet(\mathrm{Sing}(F))\ar[d]\\ \Delta^0\ar[r]^-i&NI^{op}. }\] The claim now follows after applying geometric realization, since \[|d^*\mathrm{Bar}_\bullet(\mathrm{Sing}(F))|\cong \big||\mathrm{Bar}_\bullet(\mathrm{Sing}(F))|\big|\cong |\mathrm{Bar}_\bullet(|\mathrm{Sing}(F)|)|\simeq |\mathrm{Bar}_\bullet(F)|\simeq \hocolim_IF,\] and since geometric realization, as part of the Quillen equivalence between topological spaces and simplicial sets, preserves homotopy pullbacks. \end{proof} \begin{remark} In case $I$ is the category associated to a group $G$, the statement becomes the familiar fact that the homotopy colimit of a $G$-space $X$, thought of as a functor, is given by the Borel construction on $X$, which forms a bundle over $BG$ with fiber $X$. In general, this corollary encourages us to think of a functor as defining a sort of bundle over the classifying space with total space the homotopy colimit, fibers given by the functor itself, and, as we will see, space of sections given by the homotopy limit. Taking this kind of idea to its logical conclusion leads to the \emph{unstraightening} construction---see \cite[3.2]{Lurie:HTT}. \end{remark} \begin{corollary}[Quillen's Theorem B]\label{cor:quillen b} Let $T:I\to J$ be a functor such that, for every morphism $\alpha:j\to j'$ in $J$, the map $B(j'\downarrow T)\to B(j\downarrow T)$ is a weak equivalence. The diagram \[\xymatrix{ B(j\downarrow T)\ar[r]\ar[d]&BI\ar[d]^-{BT}\\ \mathrm{pt}\ar[r]&BJ }\] is homotopy Cartesian. 
\end{corollary} \begin{proof} By Corollary \ref{cor:hocolim quasifibration}, it suffices to show that \[\hocolim_{J^{op}}B(-\downarrow T)\xrightarrow{\sim} BI.\] To see why this is so, we note that the lefthand side arises from the bisimplicial set given in bidegree $(m,n)$ by \begin{align*} \coprod_{j_0\to\cdots \to j_n}N(j_n\downarrow T)_m&\cong \coprod_{j_0\to\cdots\to j_n\to T(i_0)\to \cdots \to T(i_m)}\mathrm{pt}\\ &\cong \coprod_{i_0\to \cdots \to i_m}N(J\downarrow T(i_0))_n. \end{align*} Thus, \[\hocolim_{J^{op}}B(-\downarrow T)\simeq\hocolim_{I^{op}}B(J\downarrow T(-))\simeq \hocolim_{I^{op}}\underline{\mathrm{pt}}\simeq BI,\] where for the second equivalence we have used that $(J\downarrow T(i))$ is contractible for every $i\in I$, having the final object $\id_{T(i)}$. \end{proof} \section{The Spanier--Whitehead category}\label{appendix:Spanier--Whitehead} \subsection{Stable homotopy} We begin by recalling the following classical fact, which asserts that homotopy behaves like homology in a certain ``stable'' range. \begin{theorem}[Homotopy excision] Let $(B,A)$ be a $q$-connected CW pair. If $A$ is $p$-connected, then the map induced on $\pi_i$ by the map $(B,A)\to (B/A, *)$ is an isomorphism for $i\leq p+q$ and a surjection for $i=p+q+1$. \end{theorem} Recall that the suspension functor $A\mapsto \Sigma A$ is homotopy invariant if $A$ is a CW complex. Thus, we obtain in this case a \emph{suspension homomorphism} \[\xymatrix{ \pi_i(A)=[S^i, A]_*\xrightarrow{\Sigma} [\Sigma S^i, \Sigma A]_*\cong [S^{i+1}, \Sigma A]_*=\pi_{i+1}(\Sigma A). }\] \begin{corollary}[Freudenthal suspension theorem] If $A$ is a $p$-connected CW complex, then the suspension homomorphism \[\pi_i(A)\to \pi_{i+1}(\Sigma A)\] is an isomorphism for $i\leq 2p$ and a surjection for $i=2p+1$.
\end{corollary} \begin{proof} The suspension map coincides with the dashed map in the commuting diagram \[\xymatrix{ \pi_{i+1}(CA)\ar[d]\ar[r]&\pi_{i+1}(CA,A)\ar[d]^-{(\star)}\ar[r]^-\simeq&\pi_i(A)\ar@{-->}[ddl]\\ \pi_{i+1}(SA)\ar@{=}[d]_-\wr\ar@{=}[r]&\pi_{i+1}(SA)\ar@{=}[d]^-\wr\\ \pi_{i+1}(\Sigma A)\ar@{=}[r]&\pi_{i+1}(\Sigma A) }\] Since $A$ is $p$-connected by assumption, the pair $(CA,A)$ is $(p+1)$-connected, so the starred map is an isomorphism for $i+1\leq 2p+1$ and a surjection in the next degree, implying the claim. \end{proof} In light of this result, it is reasonable to make the following definition. \begin{definition} Let $A$ be a pointed CW complex. The $i$th \emph{stable homotopy group} of $A$ is \[\pi_i^s(A):=\colim_{r} \pi_{i+r}(\Sigma^rA)\cong \pi_{2i+2}(\Sigma^{i+2}A).\] A \emph{stable map} from $A$ to $B$ is a map $f:\Sigma^rA\to \Sigma^rB$ for some $r$, regarded as an element of $\colim_r[\Sigma^rA,\Sigma^rB]_*$. \end{definition} We may compose the stable maps $f:\Sigma^rA\to \Sigma^rB$ and $g:\Sigma^sB\to \Sigma^sC$ to obtain the stable map \[\Sigma^rg\circ\Sigma^s f:\Sigma^{r+s}A\to \Sigma^{r+s}C.\] Since an element of $\pi_i^s(A)$ is nothing other than a stable map from $S^i$ to $A$, it follows that stable maps induce morphisms at the level of stable homotopy groups. \begin{definition} A \emph{stable weak equivalence} is a stable map that induces an isomorphism on all stable homotopy groups. \end{definition} We shall use the notation $f:A\xrightarrow{\sim_s} B$ to indicate a stable weak equivalence. \begin{recollection} If $f:A\to B$ is a continuous map, the \emph{mapping cylinder} of $f$ is the pushout in the diagram \[\xymatrix{ A\ar[d]_-{\{1\}}\ar[r]^-f&B\ar[d]\\ A\times[0,1]\ar[r]& \mathrm{Cyl}(f).
}\] The \emph{mapping cone} of $f$ is the quotient \[C(f):=\frac{\mathrm{Cyl}(f)}{A\times\{0\}}.\] The diagram \[\xymatrix{ A\ar[d]\ar[r]^-f&B\ar[d]\\ \mathrm{pt}\ar[r]&C }\] is a \emph{cofiber sequence} if the induced composite \[C(f)\to B/f(A)\to C\] is a weak equivalence. \end{recollection} \begin{example} If $f$ is a Hurewicz cofibration, then $A\xrightarrow{f} B\to B/f(A)$ is a cofiber sequence. \end{example} For our purposes, the key fact about stable weak equivalences is the following. \begin{lemma}\label{lem:five lemma} In the commuting diagram \[\xymatrix{ A\ar[r]\ar[d]_-{f_1}&B\ar[d]_-{f_2}\ar[r] &C\ar[d]_-{f_3}\\ A'\ar[r]&B'\ar[r]&C', }\] if both rows are cofiber sequences and $f_1$ and $f_3$ are stable weak equivalences, then $f_2$ is also a stable weak equivalence. \end{lemma} It will be useful to have a means of producing stable maps. \begin{lemma}\label{lem:stable maps adjunction} Let $A$ be a finite CW complex. The set of stable maps from $A$ to $B$ is in natural bijection with $[A,\,\Omega^\infty\Sigma^\infty B]_*$, where $\Omega^\infty\Sigma^\infty B:=\colim_r \Omega^r\Sigma^rB$. \end{lemma} \begin{proof} By finiteness and adjunction, we have \begin{align*} [A,\Omega^\infty\Sigma^\infty B]_*&=[A,\colim_r\Omega^r\Sigma^r B]_*\\ &\cong \colim_r[A, \Omega^r\Sigma^rB]_*\\ &\cong \colim_r[\Sigma^r A, \Sigma^rB]_*. \end{align*} \end{proof} \subsection{Spanier--Whitehead category} In order to see why this lemma should be true, we locate the notion of stable weak equivalence within a convenient categorical context, which is a slight variant of that introduced in \cite{SpanierWhitehead:FAHT}.
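Before giving the definition, we pause to record the most basic stable computation, a standard consequence of the Freudenthal suspension theorem.

```latex
% The zeroth stable stem, via the Freudenthal suspension theorem.
\begin{example}
Since $S^r$ is $(r-1)$-connected, the suspension homomorphism
$\pi_r(S^r)\to\pi_{r+1}(S^{r+1})$ is an isomorphism whenever
$r\leq 2(r-1)$, i.e., for $r\geq2$. Hence
\[
\pi_0^s(S^0)=\colim_r\pi_r(S^r)\cong\pi_2(S^2)\cong\mathbb{Z},
\]
generated by the class of the identity; in other words, stable maps from
$S^0$ to itself are classified by their degree.
\end{example}
```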
\begin{definition} The \emph{Spanier--Whitehead category} is the category $\mathcal{S}\mathrm{W}$ in which an object is a pair $(A,m)$ of a pointed CW complex and an integer, and the set of morphisms is given by \[\mathcal{S}\mathrm{W}\left((A,m), (B,n)\right)=\colim_r \left[\Sigma^{r+m}A, \Sigma^{r+n}B\right]_*,\] where the colimit is taken over the set of natural numbers $r$ such that $r+m$ and $r+n$ are both nonnegative, and composition is defined in the same manner as composition of stable maps. \end{definition} We begin with a few basic observations on this category. \begin{enumerate} \item The full subcategory of objects of the form $(A,0)$ is the category of pointed CW complexes and stable maps from the previous lecture. Note, however, that this subcategory is not closed under isomorphism in $\mathcal{S}\mathrm{W}$. \item The assignment $A\mapsto (A,0)$ extends to a functor $\mathrm{Ho}(\mathcal{T}\mathrm{op}_*)\to \mathcal{S}\mathrm{W}$ fitting into the commuting diagram \[\xymatrix{ \mathrm{Ho}(\mathcal{T}\mathrm{op}_*)\ar[d]_-\Sigma\ar[r]& \mathcal{S}\mathrm{W}\ar[d]^-\Sigma&(A,m)\ar@{|->}[d]\\ \mathrm{Ho}(\mathcal{T}\mathrm{op}_*)\ar[r]&\mathcal{S}\mathrm{W}&(\Sigma A, m). }\] \item The image of the class of the isomorphism $\Sigma^m\Sigma A\cong\Sigma^{m+1}A$ under the map \[[\Sigma^m\Sigma A,\Sigma^{m+1}A]_*\to\mathcal{S}\mathrm{W}\left((\Sigma A,m),(A,m+1)\right)\] defines a natural isomorphism $(\Sigma A, m)\cong (A,m+1)$, from which we conclude that $\Sigma:\mathcal{S}\mathrm{W}\to \mathcal{S}\mathrm{W}$ is an equivalence of categories with quasi-inverse $\Sigma^{-1}(A,m)=(A,m-1)$. In fact, the pair $(\mathcal{S}\mathrm{W},\Sigma)$ is universal with respect to this property in an appropriate sense---see \cite{DellAmbrogio:SWCAT}. \item Any finite diagram in $\mathcal{S}\mathrm{W}$, after finitely many applications of the functor $\Sigma$, may be realized by a homotopy commutative diagram of CW complexes.
\item Since $(A,m)\cong(\Sigma A,m-1)$, the functor $\mathcal{S}\mathrm{W}((A,m),-)$ is naturally valued in groups; moreover, since $(A,m)\cong(\Sigma^2 A,m-2)$, these groups are Abelian. This extra structure witnesses $\mathcal{S}\mathrm{W}$ as a \emph{preadditive} category, which is to say a category enriched in Abelian groups. \item Fix $(A,m)$ and $(B,n)$, and let $N\geq 0$ be large enough so that $m+N$ and $n+N$ are both nonnegative. There is a natural chain of isomorphisms \begin{align*} \mathcal{S}\mathrm{W}\left((\Sigma^{m+N}A\vee \Sigma^{n+N}B, -N), (C,p)\right)&= \colim_{r}\left[\Sigma^{r-N}(\Sigma^{m+N}A\vee\Sigma^{n+N}B), \Sigma^{p+r}C\right]_*\\ &\cong\colim_{r}\left[\Sigma^{m+r}A\vee \Sigma^{n+r}B, \Sigma^{p+r}C\right]_*\\ &\cong \colim_{r}\left(\left[\Sigma^{m+r}A, \Sigma^{p+r}C\right]_*\times \left[\Sigma^{n+r}B, \Sigma^{p+r}C\right]_*\right)\\ &\cong \colim_{r}\left[\Sigma^{m+r}A, \Sigma^{p+r}C\right]_*\times \colim_r\left[\Sigma^{n+r}B, \Sigma^{p+r}C\right]_*\\ &\cong \mathcal{S}\mathrm{W}\left((A,m), (C,p)\right)\times\mathcal{S}\mathrm{W}\left((B,n), (C,p)\right), \end{align*} which exhibits a coproduct of $(A,m)$ and $(B,n)$. In the same way, we see that $\mathcal{S}\mathrm{W}$ has finite coproducts, and, since $\mathcal{S}\mathrm{W}$ is preadditive, finite biproducts \cite[VIII:2]{MacLane:CWM}; in other words, $\mathcal{S}\mathrm{W}$ is an \emph{additive} category. \end{enumerate} \subsection{Triangulated structure} In fact, there is more structure to be uncovered. \begin{definition} Let $\op C$ be an additive category with an additive self-equivalence $\Sigma:\op C\to \op C$. A \emph{triangulation} of $\op C$ is a class $\op T$ of triples $(f,g,h)$ of morphisms of the form \[A\xrightarrow{f} B\xrightarrow{g} C\xrightarrow{h} \Sigma A,\] which satisfy the following axioms. \begin{enumerate} \item For every $A\in\op C$, $(\id_A,0,0)\in \op T$. \item For each morphism $f$, there exists $(f,g,h)\in \op T$.
\item The class $\op T$ is closed under isomorphism. \item If $(f,g,h)\in \op T$, then $(g,h,-\Sigma f)\in\op T$. \item Given the solid commuting diagram \[\xymatrix{ A\ar@{=}[d]\ar[r]& B\ar[d]\ar[r]&C\ar@{-->}[d]\ar[r]&\Sigma A\ar@{=}[d]\\ A\ar[r]& B'\ar[d]\ar[r]&C'\ar@{-->}[d]\ar[r]&\Sigma A\\ &D\ar[d]\ar@{=}[r]&D\ar@{-->}[d]\\ &\Sigma B\ar[r]&\Sigma C }\] in which the rows and lefthand column lie in $\op T$, the dashed fillers exist making the entire diagram commute, and the righthand column also lies in $\op T$. \end{enumerate} \end{definition} \begin{remark} Various equivalent combinations of axioms are possible. We follow \cite{May:ATC}. \end{remark} \begin{remark} The elements of $\op T$ are typically referred to as \emph{distinguished triangles}, and the crucial fifth axiom is known variously as the ``octahedral axiom,'' for one of its visual representations, and ``Verdier's axiom.'' One way to understand this axiom is as enforcing a kind of third isomorphism theorem in $\op C$, i.e., that the ``quotient'' of $B'/A$ by $B/A$ should coincide with $B'/B$. \end{remark} We now seek to apply this formalism to the Spanier--Whitehead category. \begin{definition} A sequence $(A,m)\to (B,n)\to (C,p)$ in $\mathcal{S}\mathrm{W}$ is a \emph{cofiber sequence} if, after applying $\Sigma^N$ for some $N\in\mathbb{Z}$, it becomes isomorphic to the image of a cofiber sequence in $\mathrm{Ho}(\mathcal{T}\mathrm{op}_*)$. \end{definition} Recall that, for any map $f:A\to B$, we obtain a canonical map $C(f)\to \Sigma A$ by collapsing $B\subseteq C(f)$. Thus, any cofiber sequence in $\mathrm{Ho}(\mathcal{T}\mathrm{op}_*)$ extends canonically to a sequence of the form \[A\xrightarrow{f} B\xrightarrow{g} C\xrightarrow{h} \Sigma A.\] We only sketch the proof of the following fundamental result in stable homotopy theory. For a detailed proof in an expanded context, see \cite[A.12]{Schwede:TTC}, for example. 
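As a concrete instance of the canonical extension just described, we record a standard example.

```latex
% The cofiber sequence of the degree-2 map on the circle and its extension.
\begin{example}
The mapping cone of the degree-$2$ map on $S^1$ is $\mathbb{R}P^2$, so there
is a cofiber sequence
\[
S^1\xrightarrow{\;2\;}S^1\longrightarrow\mathbb{R}P^2,
\]
whose canonical extension is
$S^1\to S^1\to\mathbb{R}P^2\to\Sigma S^1\cong S^2$, the last map collapsing
the bottom cell of $\mathbb{R}P^2$.
\end{example}
```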
\begin{theorem}[Puppe] The collection of cofiber sequences is a triangulation of $\mathcal{S}\mathrm{W}$. \end{theorem} \begin{proof}[Sketch proof] The first and third axioms are obvious, and the second follows from cellular approximation and the observation that the mapping cone of a cellular map between CW complexes is again a CW complex. After suspending, the fourth axiom follows from the standard fact that the rotation \[B\xrightarrow{g} C\xrightarrow{h} \Sigma A\xrightarrow{-\Sigma f} \Sigma B\] of a cofiber sequence is again a cofiber sequence \cite[8.4]{May:CCAT}. Finally, again after suspending, the fifth axiom reduces to checking that, given maps $f:A\to B$ and $f':B\to B'$, the natural sequence \[C(f)\to C(f'\circ f)\to C(f')\] is a cofiber sequence. After replacing $f$ and $f'$ by cofibrations, this claim follows from the homeomorphism \[\frac{B'/A}{B/A}\cong \frac{B'}{B}.\] \end{proof} \subsection{Consequences} An important consequence of this structure is the following result, which is valid in any triangulated category---see \cite[13.4.2]{Stacks}, for example. \begin{corollary} For any $(A,m)\in \mathcal{S}\mathrm{W}$, the functors $\mathcal{S}\mathrm{W}\left((A,m),-\right)$ and $\mathcal{S}\mathrm{W}\left(-,(A,m)\right)$ each send cofiber sequences to long exact sequences of Abelian groups. \end{corollary} Since $\pi_i^s(A)=\mathcal{S}\mathrm{W}\left((S^0, i), (A,0)\right)$, Lemma \ref{lem:five lemma} now follows by the five lemma. We also have the following appealing interpretation of stable weak equivalences. \begin{corollary}\label{cor:finite stable weak equivalence} If $A$ and $B$ are finite CW complexes and $f:(A,m)\to (B,n)$ induces an isomorphism \[\mathcal{S}\mathrm{W}\left((S^0,i),(A,m)\right)\xrightarrow{\simeq}\mathcal{S}\mathrm{W}\left((S^0,i),(B,n)\right)\] for every $i\in \mathbb{Z}$, then $f$ is an isomorphism. 
In particular, two finite CW complexes are stably weakly equivalent if and only if they become isomorphic in $\mathcal{S}\mathrm{W}$. \end{corollary} \begin{proof} By induction on the skeletal filtration of $C$, using the fact that cofiber sequences induce long exact sequences, one shows that \[\mathcal{S}\mathrm{W}\left((C,p),(A,m)\right)\xrightarrow{\simeq}\mathcal{S}\mathrm{W}\left((C,p),(B,n)\right)\] for any finite CW complex $C$ and $p\in\mathbb{Z}$. The claim now follows from the Yoneda lemma for the subcategory of $\mathcal{S}\mathrm{W}$ on the finite CW complexes. \end{proof} \begin{corollary}\label{cor:stable weak equivalence homology} Stable weak equivalences induce isomorphisms on homology. \end{corollary} \begin{proof} By the suspension isomorphism, homology descends to a functor on $\mathcal{S}\mathrm{W}$. Since any functor sends isomorphisms to isomorphisms, the previous corollary implies the claim for stable weak equivalences between finite CW complexes. Since homology commutes with filtered colimits, the general case follows. \end{proof} \begin{remark} It is a somewhat paradoxical fact that, although, as we have seen, the Spanier--Whitehead category effectively captures the stable phenomenon of homotopy behaving homologically, we in fact lose almost all Eilenberg-MacLane spaces upon passage to $\mathcal{S}\mathrm{W}$, since suspending destroys the property of being a $K(G,n)$. On the other hand, the Freudenthal suspension theorem guarantees that the homotopy groups remain correct in a range, and the map \[\Sigma K(G,n)\to K(G,n+1)\] adjoint to the weak equivalence $K(G,n)\xrightarrow{\sim} \Omega K(G,n+1)$ exhibits a sequence of objects in $\mathcal{S}\mathrm{W}$ that we might think should converge to the missing Eilenberg-MacLane object. This line of reasoning is one way to motivate the enlargement of the Spanier--Whitehead category to the full stable homotopy category or homotopy category of spectra---see \cite{Puppe:OSHC}, for example. 
Another motivation is taken up below. \end{remark} \subsection{Filtered stable weak equivalences} The purpose of this section is to address and partially remove the finiteness assumption in Corollary \ref{cor:finite stable weak equivalence}. We begin by noting that the assumption would have been unnecessary had we been assured that $(C,p)$ is the colimit in $\mathcal{S}\mathrm{W}$ of the objects $(\mathrm{sk}_k(C),p)$. Unfortunately, we have the computation \begin{align*} \mathcal{S}\mathrm{W}\left((C,p), (A,m)\right)&=\textstyle{\colim_r}\left[\Sigma^{p+r}C, \Sigma^{m+r}A\right]_*\\ &\cong{\textstyle\colim_r}\,\pi_0\,{\textstyle\mathrm{Map}_*}\left(\Sigma^{p+r}C, \Sigma^{m+r}A\right)\\ &\cong{\textstyle\colim_r}\,\pi_0\left({\textstyle{\holim_k}}\,{\textstyle{\mathrm{Map}_*}}\left(\Sigma^{p+r}\mathrm{sk}_k(C),\Sigma^{m+r}A\right)\right), \end{align*} and the formation of homotopy groups fails in general to commute with sequential homotopy limits; indeed, this failure is measured by the \emph{Milnor exact sequence} \[0\to \textstyle\lim^1_k\pi_{i+1}(X_k)\to \pi_i\left(\holim_k X_k\right)\to \lim_k\pi_i(X_k)\to0.\] To summarize the problem, the category $\mathcal{S}\mathrm{W}$ lacks certain filtered colimits that we might naively expect it to have. In these notes, we adopt a somewhat ad hoc solution to this problem, which is nevertheless sufficient for our needs (but see Remark \ref{rmk:spectra} below). We make the following (non-standard) definition. \begin{definition} Let $X=\bigcup_{k\geq1} X_k$ and $Y=\bigcup_{k\geq1} Y_k$ be filtered pointed spaces. A \emph{filtered stable weak equivalence} is a collection of stable weak equivalences $f_k:X_k\xrightarrow{\sim_s} Y_k$ fitting into a commuting diagram \[\xymatrix{ X_{k-1}\ar[d]\ar[r]^-{f_{k-1}}&Y_{k-1}\ar[d]\\ X_k\ar[r]^-{f_k}&Y_k }\] of stable maps. \end{definition} Since homology commutes with filtered colimits, we have the following consequence of Corollary \ref{cor:stable weak equivalence homology}. 
\begin{corollary}\label{cor:filtered stable weak equivalence homology} Filtered stable weak equivalences induce isomorphisms on homology. \end{corollary} \begin{remark}\label{rmk:spectra} A more principled solution to the problem of finiteness is provided by the category $\mathcal{S}\mathrm{p}$ of spectra, which may be constructed in many inequivalent ways, all of which yield the same homotopy category, typically called the \emph{stable homotopy category}. Using the subscript $\mathrm{fin}$ to indicate full subcategories spanned by finite spaces, the relationships among the various categories in question may be summarized in the following commuting diagram \[\xymatrix{ \mathcal{T}\mathrm{op}_*\ar[r]\ar[dd]_-{\Sigma^\infty}&\mathrm{Ho}(\mathcal{T}\mathrm{op}_*)\ar[d]&\mathrm{Ho}(\mathcal{T}\mathrm{op}_{*,\mathrm{fin}})\ar[l]\ar[d]\\ &\mathcal{S}\mathrm{W}\ar@{-->}[d]&\mathcal{S}\mathrm{W}_\mathrm{fin}\ar[l]\ar@{-->}[dl]\\ \mathcal{S}\mathrm{p}\ar[r]&\mathrm{Ho}(\mathcal{S}\mathrm{p}). }\] The vertical dashed functor is badly behaved, being neither fully faithful nor essentially surjective, but the diagonal dashed functor is the inclusion of the full subcategory of compact objects in $\mathrm{Ho}(\mathcal{S}\mathrm{p})$; indeed, this fact may be used as one of a set of axioms characterizing the homotopy theory of spectra. The category of spectra has many wonderful properties, but only two are directly relevant to our discussion: first, $\mathrm{Ho}(\mathcal{S}\mathrm{p})$ is again a triangulated category, so arguments in $\mathcal{S}\mathrm{W}$ translate directly to this new context; and second, $\Sigma^\infty C\cong\colim_k\Sigma^\infty \mathrm{sk}_k(C)$, repairing the deficiency of the Spanier--Whitehead category that hindered us before.
\end{remark} \section{Tate cohomology and periodicity}\label{appendix:periodicity} The goal of this appendix is to give an indication of the proof of the periodicity theorem used in the computation of the mod $p$ cohomology of $B_p(\mathbb{R}^n)$ for $p$ odd. The idea that we will pursue is that, if multiplication by the class in question is to be an isomorphism, then it should have an inverse, which we might imagine to be given by ``multiplication by an element of negative degree.'' Somewhat surprisingly, this idea is not nonsense, and the framework that gives it sense is the framework of \emph{Tate cohomology}, which we review briefly. For a more complete survey, see \cite[XII]{CartanEilenberg:HA}. \subsection{Tate cohomology} \begin{definition} Let $G$ be a finite group and $V$ a $\mathbb{Z}[G]$-module. The \emph{norm map} for $V$ is the dashed filler in the commuting diagram \[\xymatrix{ V\ar[d]\ar[r]^-{\sum_{G} g}&V\\ H_0(G;V)\ar@{-->}[r]^-N& H^0(G;V).\ar[u] }\] \end{definition} The norm map permits the definition of a cohomology theory combining ordinary group cohomology and group homology. \begin{definition} Let $G$ be a finite group and $V$ a $\mathbb{Z}[G]$-module. The \emph{Tate cohomology} of $G$ with coefficients in $V$ is \[ \hat H^n(G;V)=\begin{cases} H^n(G;V)&\quad n>0\\ \mathrm{coker}\,N&\quad n=0\\ \ker N&\quad n=-1\\ H_{-n-1}(G;V)&\quad n<-1. \end{cases} \] \end{definition} We now summarize a few facts about Tate cohomology. \begin{enumerate} \item Tate cohomology is the cohomology associated to a certain type of resolution \cite[XII.3.2]{CartanEilenberg:HA}. 
By definition, a \emph{complete resolution} is a commuting diagram of free $\mathbb{Z}[G]$-modules \[\xymatrix{ \cdots\ar[r]&X_1\ar[r]^-{\delta_1}&X_0\ar[rr]^-{\delta_0}\ar[dr]_-{\epsilon}&&X_{-1}\ar[r]^-{\delta_{-1}}&X_{-2}\ar[r]&\cdots\\ &&&\mathbb{Z}\ar[ur]_-{\eta} }\] in which the infinite row is exact, and Tate cohomology may be computed as \[\hat H^n(G;V)\cong H^n\left(\mathrm{Hom}_{\mathbb{Z}[G]}(X_\bullet, V)\right).\] \item An injective group homomorphism induces a restriction on Tate cohomology \cite[XII.8]{CartanEilenberg:HA}. Tate cohomology is not functorial for general group homomorphisms. \item Tate cohomology is multiplicative in the sense that there is a unique $C_2$-equivariant graded homomorphism \[\hat H(G;V_1)\otimes \hat H(G; V_2)\to \hat H(G;V_1\otimes V_2)\] extending the cup product in group cohomology and compatible with connecting homomorphisms \cite[XII.4]{CartanEilenberg:HA}. In particular, $\hat H(G;\mathbb{Z})$ is a graded commutative ring and $\hat H(G;V)$ is a module over this ring. \item Tate cohomology enjoys a self-duality \cite[XII.6.6]{CartanEilenberg:HA} of the form \[\hat H^n(G;\mathbb{Z})\cong \mathrm{Hom}_\mathbb{Z}\left(\hat H^{-n}(G;\mathbb{Z}), C_{|G|}\right).\] \end{enumerate} \begin{remark} According to (3), there are multiplication maps of the form \[H_m(BG)\otimes H_n(BG)\to H_{m+n+1}(BG),\] which may be interpreted from the point of view of singular chains in terms of the join of simplices \cite{Tene:PNTCFG}. 
\end{remark} \begin{example} The periodic resolution for the cyclic group $C_k$ extends to a complete resolution \[\xymatrix{ \cdots\ar[r]&\mathbb{Z}[C_k]\ar[r]^-{\sigma-1}&\mathbb{Z}[C_k]\ar[rr]^-{\sum_{i=1}^k\sigma^i}\ar[dr]_-{\epsilon}&&\mathbb{Z}[C_k]\ar[r]^-{\sigma-1}&\mathbb{Z}[C_k]\ar[r]&\cdots\\ &&&\mathbb{Z}\ar[ur]_-{\sum_{i=1}^k\sigma^i} }\] and it follows that \[ \hat H^n(C_k;\mathbb{Z})\cong\begin{cases} C_k&\quad n \text{ even}\\ 0&\quad n \text{ odd.} \end{cases} \] As for multiplicative structure, one can show that $\hat H(C_k;\mathbb{Z})\cong C_k[\beta,\beta^{-1}]$, where $\beta$ is the class represented by the 2-cocycle in $\mathrm{Hom}_{\mathbb{Z}[C_k]}\left(\mathbb{Z}[C_k], \mathbb{Z}\right)\cong \mathrm{Hom}_\mathbb{Z}\left(\mathbb{Z},\mathbb{Z}\right)$ corresponding to the identity \cite[XII.7]{CartanEilenberg:HA}. \end{example} \subsection{Periods} The periodicity evident in the previous calculation is a special case, and also the source, of a more general phenomenon. For a $\mathbb{Z}[G]$-module $V$, write $\hat H(G;V)_p$ for the $p$-primary component of $\hat H(G;V)$ and $|G|_p$ for the largest power of $p$ dividing $|G|$. \begin{proposition} Multiplication by $\alpha\in \hat H^n(G;\mathbb{Z})$ induces an isomorphism on $\hat H(G;V)_p$ for every $\mathbb{Z}[G]$-module $V$ if and only if $\alpha$ has exact order $|G|_p$. \end{proposition} The idea of the argument is that such an element $\alpha$ determines a homomorphism $\hat H^n(G;\mathbb{Z})\to C_{|G|}$ and thus an element $\alpha^{-1}\in \hat H^{-n}(G;\mathbb{Z})$ by the duality referenced above, and this element will act as a multiplicative inverse to $\alpha$. For details, see \cite[XII.11.1, Exercise XII.11]{CartanEilenberg:HA}. \begin{definition} The $p$-\emph{period} of $G$, if it exists, is the least $n>0$ such that $\hat H^n(G;\mathbb{Z})$ contains an element of exact order $|G|_p$.
\end{definition} \begin{remark} Most groups do not have a $p$-period; indeed, if $p$ is odd, then $G$ has a $p$-period if and only if the $p$-Sylow subgroup of $G$ is cyclic \cite[Exercise XII.11]{CartanEilenberg:HA}. \end{remark} \begin{example} The cyclic group $C_{p^k}$ has $p$-period $2$. \end{example} The following theorem of \cite{Swan:PPFG} will allow us to determine the $p$-period of $\Sigma_p$. For a subgroup $H\leq G$, we write $C_G(H)$ and $N_G(H)$ for the centralizer and normalizer of $H$ in $G$, respectively. \begin{theorem}[Swan] Let $G$ be a finite group, $p$ an odd prime, and $H\leq G$ a $p$-Sylow subgroup. If $H$ is cyclic, then the $p$-period of $G$ is equal to $2\left|\frac{N_G(H)}{C_G(H)}\right|$. \end{theorem} Before discussing the proof of this theorem, we use it to prove the Periodicity Theorem. First, we recall a few standard facts about cyclic groups. \begin{lemma} Fix $k\geq0$. \begin{enumerate} \item $C_{\Sigma_k}(C_k)=C_k$ \item $N_{\Sigma_k}(C_k)\cong C_k\rtimes \mathrm{Aut}(C_k)$ \item $\mathrm{Aut}(C_k)\cong C_k^\times$ \end{enumerate} \end{lemma} \begin{proof}[Proof of Periodicity Theorem] Since $V$ is defined over $\mathbb{F}_p$, we have $\hat H(G;V)_p=\hat H(G;V)$. Thus, the first claim is implied by our previous calculation of the $p$-period of $C_p$, while the second claim is implied by Swan's theorem and the calculation \[2\left|\frac{N_{\Sigma_p}(C_p)}{C_{\Sigma_p}(C_p)}\right|=2\frac{|C_p|\cdot|C_p^\times|}{|C_p|}=2(p-1).\] \end{proof} \subsection{Proof of Swan's theorem} In order to deduce Swan's theorem from our calculation in the case of a cyclic group, we will need to understand the relationship between the Tate cohomology of $G$ and that of its $p$-Sylow subgroup. \begin{definition} Let $H\leq G$ be a subgroup.
An element $\alpha\in \hat H(H;V)$ is \emph{stable} if, for every $g\in G$, $\alpha$ equalizes the two homomorphisms \[\hat H(H; V)\to \hat H(H\cap H^g;V)\] induced by the inclusion $H\cap H^g\leq H$ and the injection $(-)^{g^{-1}}:H\cap H^g\to H$, respectively. \end{definition} By a transfer argument, one proves the following \cite[XII.10.1]{CartanEilenberg:HA}. \begin{proposition} If $H\leq G$ is a $p$-Sylow subgroup, then, for every $\mathbb{Z}[G]$-module $V$, the natural map $\hat H(G;V)_p\to \hat H(H;V)$ is injective with image the stable classes. \end{proposition} Thus, in order to deduce the period of $G$ from the period of $H$, we must understand the stable classes. \begin{lemma} If the $p$-Sylow subgroup $H\leq G$ is Abelian, then $\alpha\in \hat H^*(H;\mathbb{Z})$ is stable if and only if $\alpha$ is fixed by $N_G(H)$. \end{lemma} \begin{proof} The ``only if'' direction is obviously true without assumption on $H$, so assume that $\alpha$ is fixed by $N_G(H)$, and choose $g\in G$. Since $H$ is Abelian, both $H$ and $H^g$ are contained in $C_G(H\cap H^g)$, and, since $H$ is a $p$-Sylow subgroup of $G$, it follows that both are $p$-Sylow subgroups of this centralizer. Since all such are conjugate, it follows that $H=H^{tg}$ for some $t\in C_G(H\cap H^g)$, whence $tg\in N_G(H)$. We therefore have the commuting diagram \[\xymatrix{ H\cap H^g\ar@{=}[dd]_-{(-)^{t^{-1}}}\ar[rr]&&H\ar[dd]^-{(-)^{(tg)^{-1}}}\\\\ H\cap H^g\ar[rr]^-{(-)^{g^{-1}}}&&H, }\] and the map induced on Tate cohomology by the righthand map fixes $\alpha$ by assumption. It follows that $\alpha$ equalizes the two horizontal maps, as desired. \end{proof} \begin{lemma} If the $p$-Sylow subgroup $H\leq G$ is cyclic, then $n$ is a multiple of the $p$-period of $G$ if and only if $n$ is even and $N_G(H)$ fixes $\hat H^n(H;\mathbb{Z})$ pointwise. \end{lemma} \begin{proof} By assumption, the $p$-period of $H$ is $2$, so we may assume that $n$ is even \cite[XII.11.3]{CartanEilenberg:HA}.
Then $n$ is a multiple of the $p$-period of $G$ if and only if $\hat H^n(G;\mathbb{Z})$ contains an element of exact order $|G|_p$. By our computation in the case of a cyclic group and the previous proposition, this statement is equivalent to the statement that the map $\hat H^n(G;\mathbb{Z})_p\to \hat H^n(H;\mathbb{Z})$ is an isomorphism, which is to say that every class in the target is stable. Since $H$ is in particular Abelian, the previous lemma shows that this condition is equivalent to the condition that $N_G(H)$ act trivially in degree $n$. \end{proof} \begin{proof}[Proof of Swan's theorem] Suppose that $n$ is a multiple of the $p$-period of $G$; then, by the lemma, $n$ is even and every class in degree $n$ Tate cohomology is fixed by $N_G(H)$. Now, $N_G(H)$ acts on $H$ via the composite \[N_G(H)\twoheadrightarrow \frac{N_G(H)}{C_G(H)}\hookrightarrow \mathrm{Aut}(H)\cong H^\times.\] By our assumption on $H$, the target is cyclic, so the intermediate quotient is also cyclic, and the action on $\hat H^n(H;\mathbb{Z})$ is by multiplication by $r^{n/2}$ with $(r,p)=1$ \cite[Lemma 3]{Swan:PPFG}. Since this action is trivial, $n/2$ is a multiple of the order of the cyclic group $N_G(H)/C_G(H)$. \end{proof} \end{appendix} \end{document}
\begin{document} \title {Uniqueness of positive bound states to Schr\"odinger systems with critical exponents} \author{ Congming Li \thanks{Partially supported by NSF Grant DMS-0401174 } \hspace{.2in} Li Ma\thanks{Partially supported by the National Natural Science Foundation of China 10631020 and SRFDP 20060003002 } } \date{} \maketitle \begin{abstract} We prove uniqueness of positive solutions of the following elliptic system: \begin{eqnarray*} \left\{\begin{array}{ll} - \mbox{$\bigtriangleup$} (u(x) ) = u(x)^{\alpha}v(x)^{\beta} \\ - \mbox{$\bigtriangleup$} ( v(x) ) = u(x)^{\beta} v(x)^{\alpha} \end{array} \right. \end{eqnarray*} Here $x\in R^n$, $n\geq 3$, and $1\leq \alpha, \beta\leq \frac{n+2}{n-2}$ with $\alpha+\beta=\frac{n+2}{n-2}$. In the special case when $n=3$ and $\alpha =2, \beta=3$, the system comes from the stationary Schr\"odinger system with critical exponents for the Bose-Einstein condensate. As a key step, we prove the radial symmetry of the positive solutions to the elliptic system above with critical exponents. \end{abstract} \emph{Keywords: Moving plane, positive solutions, radially symmetric, uniqueness} {\em Mathematics Subject Classification: 35J45, 35J60, 45G05, 45G15} \section{Introduction} In this paper, we consider the uniqueness of positive solutions to the following stationary Schr\"odinger system: \begin{equation} \label{EllS} \left\{\begin{array}{ll} - \mbox{$\bigtriangleup$} (u(x) ) = u(x)^{\alpha}v(x)^{\beta} \\ - \mbox{$\bigtriangleup$} ( v(x) ) = u(x)^{\beta} v(x)^{\alpha} \end{array} \right. \end{equation} Here $x\in R^n$, $n\geq 3$, and $1\leq \alpha, \beta\leq \frac{n+2}{n-2}$ with $\alpha+\beta=\frac{n+2}{n-2}$. In the special case when $n=3$ and $\alpha =2, \beta=3$, the system comes from the stationary Schr\"odinger system with critical exponents for the Bose-Einstein condensate (\cite{KL}, \cite{LW1}, \cite{LW2}, and \cite{MZ}).
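As a quick arithmetic check (our addition, not part of the original text), the Bose--Einstein case indeed sits on the critical line considered here:
\[
% Consistency check for the special case n = 3, alpha = 2, beta = 3:
\frac{n+2}{n-2}\bigg|_{n=3}=\frac{3+2}{3-2}=5=2+3=\alpha+\beta,
\qquad 1\leq \alpha,\beta\leq \frac{n+2}{n-2}=5 .
\]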
In the earlier works \cite{LW1}, \cite{LW2}, and \cite{MZ}, attention was mainly paid to the elliptic system (\ref{EllS}) with subcritical exponents. Very interestingly, Chen and Li proved that the best constant in the weighted Hardy-Littlewood-Sobolev inequality is achieved by explicit radially symmetric functions (see \cite{CL3} and \cite{L}). As a consequence of their work, the uniqueness of positive solutions to the corresponding elliptic system (that is, (\ref{EllS}) in the case when $\alpha=0$ and $\beta=\frac{n+2}{n-2}$) has been settled. However, when $\alpha, \beta>0$, the uniqueness of smooth positive solutions to the stationary Schr\"odinger system (\ref{EllS}) is an open question. Generally speaking, there are very few results even for the uniqueness of positive solutions to ordinary differential systems. The aim of this paper is to prove the radial symmetry and uniqueness of positive solutions to (\ref{EllS}) with critical exponents and $1\leq \alpha<\beta\leq \frac{n+2}{n-2}$. As one may expect, just as in the work of M. Weinstein \cite{We1} on the scalar case with sub-critical exponent, there is a close relationship between the stationary Schr\"odinger system with critical exponent and the Hardy-Littlewood-Sobolev inequality. As we show below, this is indeed the case. Since we shall use the Hardy-Littlewood-Sobolev inequality to prove the radial symmetry of our solutions, let us briefly review recent progress on Lieb's conjecture. Let us begin by recalling the well-known Hardy-Littlewood-Sobolev inequalities. Let $0 <\lambda < n$, $1 < s, r < \infty$, and let $\|f\|_p$ denote the $L^p(R^n)$ norm of the function $f$. We shall write $\|f\|_{p,\Omega}$ for the $L^p$ norm of the function on the domain $\Omega$.
Then the classical Hardy-Littlewood-Sobolev inequality (HLS) states that \begin{equation} \int_{R^n}\int_{R^n} \frac{f(x) g(y)}{|x - y|^{\lambda}} dx dy \leq C_{s,\lambda, n} \|f\|_r\|g\|_s \label{HLS} \end{equation} for any $f \in L^r(R^n)$, $g \in L^s(R^n)$, and for $\frac{1}{r} + \frac{1}{s} + \frac{\lambda}{n} = 2$. Hardy and Littlewood also introduced the double weighted inequality, which was later generalized by Stein and Weiss in \cite{SW2} in the following form: \begin{equation} \left|\int_{R^n}\int_{R^n} \frac{f(x)g(y)}{|x|^{\alpha_0}|x-y|^{\lambda}|y|^{\beta_0}}dxdy \right| \le C_{\alpha_0,\beta_0,s,\lambda,n }\|f\|_r \|g\|_s \label{whls} \end{equation} where $\alpha_0+\beta_0 \geq 0$, \begin{equation} 1 - \frac{1}{r} - \frac{\lambda}{n} < \frac{\alpha_0}{n} < 1 - \frac{1}{r} , \; \mbox{ and } \frac{1}{r} + \frac{1}{s} + \frac{\lambda + \alpha_0 + \beta_0}{n} = 2 . \label{B1} \end{equation} The best constant in the weighted inequality (\ref{whls}) can be obtained by maximizing the functional \begin{equation} J(f,g) = \int_{R^n}\int_{R^n}\frac{f(x)g(y)} {|x|^{\alpha_0}|x-y|^{\lambda}|y|^{\beta_0}} dx dy \label{J} \end{equation} under the constraints $ \|f\|_r = \|g\|_s =1 $. Then the corresponding Euler-Lagrange equations are the system of integral equations: \begin{equation} \left \{ \begin{array}{l} \lambda_1 r {f(x)}^{r-1} = \frac{1}{|x|^{\alpha_0}}\int_{R^{n}} \frac{g(y)}{|y|^{\beta_0}|x-y|^{\lambda}} dy\\ \lambda_2 s {g(x)}^{s-1} = \frac{1}{|x|^{\beta_0}}\int_{R^{n}} \frac{f(y)}{|y|^{\alpha_0}|x-y|^{\lambda}} dy \end{array} \right. \label{DHLSEL} \end{equation} where $f, g \geq 0, \;x \in R^n$, and $\lambda_1 r =\lambda_2 s =J(f,g)$.
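As an aside (our addition), the diagonal choice of exponents in (\ref{HLS}) is the conformally invariant case in which Lieb identified the sharp constant and the radially symmetric extremals; one checks the exponent condition directly:
\[
% Diagonal (conformally invariant) case of (HLS): r = s = 2n/(2n - lambda).
r=s=\frac{2n}{2n-\lambda}
\;\Longrightarrow\;
\frac{1}{r}+\frac{1}{s}+\frac{\lambda}{n}
=\frac{2n-\lambda}{2n}+\frac{2n-\lambda}{2n}+\frac{\lambda}{n}
=\frac{2n-\lambda}{n}+\frac{\lambda}{n}=2 .
\]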
Let $u=c_1 f^{r-1}$, $v=c_2 g^{s-1}$, $p=\frac{1}{r-1}$, $q=\frac{1}{s-1}$, $pq \neq 1 $, and for a proper choice of constants $c_1$ and $c_2$, system (\ref{DHLSEL}) becomes \begin{equation} \left \{ \begin{array}{l} u(x) = \frac{1}{|x|^{\alpha_0}}\int_{R^{n}} \frac{v(y)^q}{|y|^{\beta_0}|x-y|^{\lambda}} dy\\ v(x) = \frac{1}{|x|^{\beta_0}}\int_{R^{n}} \frac{u(y)^p}{|y|^{\alpha_0}|x-y|^{\lambda}} dy \end{array} \right.\label{ELWHLS} \end{equation} where $u, v \geq 0$, $0 <p, q<\infty$, $0 <\lambda < n$, $\frac{\alpha_0}{n} < \frac{1}{p+1}<\frac{\lambda+\alpha_0}{n}$, and $\frac{1}{p+1}+\frac{1}{q+1}=\frac{\lambda+\alpha_0+\beta_0}{n}$. Note that in the special case where $\alpha_0 =0$ and $\beta_0 =0$, system (\ref{ELWHLS}) reduces to the following system: \begin{equation} \left \{ \begin{array}{l} u(x) = \int_{R^{n}} \frac{v^q(y)}{|x - y|^{\lambda}} dy\\ v(x) = \int_{R^{n}} \frac{u^p(y)}{|x - y|^{\lambda}} dy \end{array} \right. \label{sys} \end{equation} with \begin{equation} \frac{1}{q+1}+\frac{1}{p+1}=\frac{\lambda}{n}. \label{a1} \end{equation} It is well-known that this integral system is closely related to the system of partial differential equations \begin{equation} \left \{ \begin{array}{l} (-\Delta)^{\gamma/2} u = v^q , \;\; u>0, \mbox{ in } R^n,\\ (-\Delta)^{\gamma/2} v = u^p, \;\; v>0, \mbox{ in } R^n, \end{array} \right. \label{des} \end{equation} where $\gamma = n - \lambda$. When $p=q=\frac{n+\gamma}{n-\gamma}$, and $u(x)=v(x)$, system (\ref{sys}) becomes the single equation: \begin{equation} u(x) = \int_{R^{n}} \frac{u(y)^{\frac{n+\gamma}{n-\gamma} }}{|x - y|^{n - \gamma}} dy,\;\; u >0, \mbox{ in } R^n.
\label{eq} \end{equation} The corresponding PDE is the well-known family of semi-linear equations \begin{equation} (- \Delta)^{\gamma/2} u = u^{(n+\gamma)/(n-\gamma)} , \;\; u>0, \;\; \mbox{ in } R^n \label{de} \end{equation} In particular, when $n \geq 3 $, and $\gamma = 2,$ (\ref{de}) becomes \begin{equation} -\Delta u = u^{ (n+2)/(n-2) }, \;\; u>0, \mbox{ in } R^n. \label{1.2} \end{equation} The classification of the solutions of (\ref{1.2}) has provided an important ingredient in the study of the well-known Yamabe problem and the prescribing scalar curvature problem. Equation (\ref{1.2}) was studied by Gidas, Ni, and Nirenberg \cite{GNN}, Caffarelli, Gidas, and Spruck \cite{CGS}, Chen and Li \cite{CL}, and Li \cite{Li}. They classified all the solutions. Recently, Wei and Xu \cite{WX} generalized this result to the solutions of the more general equation (\ref{de}) with $\gamma$ being any even number between $0$ and $n$. Although the systems for other real values of $\alpha, \beta$ between $0$ and $n$ are also of interest, we shall concentrate in this paper only on the system (\ref{EllS}) with critical exponents, when $1\leq \alpha, \beta\leq \frac{n+2}{n-2}$ and $\alpha+\beta= \frac{n+2}{n-2}$. Our main results are \begin{mthm}\label{thm1} Any $L^{\frac{2n}{n-2}}(\mathbf{R}^n)\times L^{\frac{2n}{n-2}}(\mathbf{R}^n)$ positive solution pair $(u,v)$ to the system (\ref{EllS}) with critical exponents is radially symmetric. \end{mthm} and \begin{mthm}\label{thm2} Assume that $1\leq \alpha<\beta\leq \frac{n+2}{n-2}$. Then any $L^{\frac{2n}{n-2}}(\mathbf{R}^n)\times L^{\frac{2n}{n-2}}(\mathbf{R}^n)$ radially symmetric solution pair $(u,v)$ to the system (\ref{EllS}) with critical exponents is unique and satisfies $u=v$. \end{mthm} We point out that when $u=v$, the elliptic system (\ref{EllS}) reduces to the elliptic equation with critical exponent (\ref{1.2}).
Then $u=v$ is in a special family of functions: \begin{equation} \phi_{x_o,t}(x)=c(\frac{t}{t^2 + |x - x_o|^2})^{(n-2)/2} \label{solu} \end{equation} where $t>0$, $x_o\in\mathbf{R}^n$, and $c$ is a suitable positive constant such that each $\phi_{x_o,t}(x)$ solves (\ref{1.2}). This family of functions is important in the study of (\ref{EllS}). Our results are motivated by the previous work \cite{CLO2}, where Chen, Li, and Ou considered the more general system (\ref{sys}) and established the symmetry and monotonicity of the solutions. In \cite{CL2}, Chen and Li also obtained a regularity result for the solutions to (\ref{sys}). To establish the symmetry of the solution to (\ref{sys}), Chen, Li, and Ou \cite{CLO} \cite{CLO2} \cite{CLO4} introduced a new idea, an integral form of the method of moving planes. It is entirely different from the traditional method used for partial differential equations. Instead of relying on maximum principles, certain integral norms are estimated. The new method is a very powerful tool in studying qualitative properties of other integral equations and systems. In fact, following Chen, Li, and Ou's work, Jin and Li \cite{CJ} studied the symmetry of the solutions to the more general system (\ref{ELWHLS}). Chen and Ma \cite{MC} discussed the Liouville type theorem for the positive solutions to the elliptic system (\ref{des}). In this paper, we first prove the radial symmetry of the solutions to (\ref{EllS}) with critical exponents. It is obvious that the radial symmetry of the solutions reduces (\ref{EllS}) to a system of ODEs, which has the special solution pair $(\phi_{o,t}(x),\phi_{o,t}(x))$. To prove the uniqueness, we prove that $u(0)=v(0)$. Then by the uniqueness of the initial value problem for ODEs, we conclude that $u=v=\phi_{o,t}$. This is the key observation in establishing the uniqueness of positive solutions for (\ref{EllS}) with critical exponents. Theorems \ref{thm1} and \ref{thm2} will be proved in the next two sections.
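For completeness we record the standard choice of the constant $c$ in (\ref{solu}); this normalization is not fixed in the text above, and the following sketch is our own addition. With $c=(n(n-2))^{\frac{n-2}{4}}$ and $r=|x-x_o|$, the radial Laplacian gives the identity
\[
% Key radial computation: for m = (n-2)/2,
-\Delta\,(t^2+r^2)^{-\frac{n-2}{2}}
= n(n-2)\,t^2\,(t^2+r^2)^{-\frac{n+2}{2}},
\]
so that
\[
% Aubin--Talenti normalization of (solu):
\phi_{x_o,t}(x)=\Bigl(\frac{\sqrt{n(n-2)}\,t}{t^2+r^2}\Bigr)^{\frac{n-2}{2}},
\qquad
-\Delta \phi_{x_o,t}
=\Bigl(\frac{\sqrt{n(n-2)}\,t}{t^2+r^2}\Bigr)^{\frac{n+2}{2}}
=\phi_{x_o,t}^{\frac{n+2}{n-2}} ,
\]
which verifies directly that $\phi_{x_o,t}$ solves (\ref{1.2}).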
\section{Proof of the Radial symmetry} We use the moving plane method introduced by Chen-Li-Ou in \cite{CLO}. We shall use the Hardy-Littlewood-Sobolev inequality: \begin{equation} \label{HLS3} |Tf|_p\leq C(n,p)|f|_{\frac{np}{n+2p}}, \end{equation} where $C(n,p)$ is a uniform positive constant and $$ Tf(x)=\int_{\mathbf{R}^n}|x-y|^{2-n}f(y)dy. $$ {\bf The Proof of Theorem \ref{thm1}}. For each $\lambda\in \mathbf{R}$, we set $$ H_{\lambda}=\{x\in\mathbf{R}^n; x_1<\lambda\}. $$ For each $x=(x_1,x')\in\mathbf{R}^n$, we let $$ x_{\lambda}=(2\lambda-x_1,x') $$ be the reflection point of $x$ with respect to the hyperplane $\partial H_{\lambda}$. We let $e_1=(1,0,\dots,0)$. We define $$ u_{\lambda}(x)=u(x_{\lambda}), \; \; B_{\lambda}^u=\{x\in H_{\lambda}; u_{\lambda}(x)>u(x)\}, $$ and $$ v_{\lambda}(x)=v(x_{\lambda}), \;\; B_{\lambda}^v=\{x\in H_{\lambda}; v_{\lambda}(x)>v(x)\}. $$ To carry out the moving plane method, we need the following formulas, which are obtained by a change of variables: $$ u(x)=\int_{H_{\lambda}}\frac{u^{\alpha}v^{\beta}(y)}{|x-y|^{n-2}}dy+ \int_{H_{\lambda}} \frac{u_{\lambda}^{\alpha}v_{\lambda}^{\beta}(y)}{|x_{\lambda}-y|^{n-2}}dy, $$ and $$ u_{\lambda}(x)=\int_{H_{\lambda}}\frac{u^{\alpha}v^{\beta}(y)}{|x_{\lambda}-y|^{n-2}}dy+ \int_{H_{\lambda}}\frac{u_{\lambda}^{\alpha}v_{\lambda}^{\beta}(y)}{|x-y|^{n-2}}dy. $$ Then we have \begin{equation} \label{equ3} u_{\lambda}(x)-u(x)=\int_{H_{\lambda}} (u_{\lambda}^{\alpha}v_{\lambda}^{\beta}-u^{\alpha}v^{\beta})(y) (\frac{1}{|x-y|^{n-2}}-\frac{1}{|x_{\lambda}-y|^{n-2}})dy. \end{equation} Note that for $x\in H_{\lambda}$, we have $$ \frac{1}{|x-y|^{n-2}}>\frac{1}{|x_{\lambda}-y|^{n-2}}.
$$ Then for $x\in B^u_{\lambda}$, we have \begin{equation} \label{est1} \left\{\begin{array}{ll} 0&\leq u_{\lambda}(x)-u(x)\\ &\leq \alpha\int_{B^u_{\lambda}} \frac{u_{\lambda}^{\alpha-1}v_{\lambda}^{\beta}(u_{\lambda}-u)}{|x-y|^{n-2}}dy +\beta \int_{B^v_{\lambda}} \frac{u_{\lambda}^{\alpha}v_{\lambda}^{\beta-1}(v_{\lambda}-v)}{|x-y|^{n-2}}dy\\ &:=I+II \end{array} \right. \end{equation} Let $p=\frac{2n}{n-2}$. Using the Hardy-Littlewood-Sobolev inequality (\ref{HLS3}), we can bound the first term $I$ in (\ref{est1}) by \begin{equation} \label{est2} \left\{\begin{array}{ll} |I|_p\leq C(n,p)|u_{\lambda}^{\alpha-1}v_{\lambda}^{\beta}(u_{\lambda}-u)|_{\frac{2n}{n+2}}\\ \leq C(n,p)|u_{\lambda}|_p^{\alpha-1}|v_{\lambda}|_p^{\beta}|u_{\lambda}-u|_p \end{array} \right. \end{equation} Here the integrations are over the set $B^u_{\lambda}$. Using again the Hardy-Littlewood-Sobolev inequality (\ref{HLS3}), we can bound the second term $II$ in (\ref{est1}) by \begin{equation} \label{est3} \left\{\begin{array}{ll} |II|_p\leq C(n,p)|u_{\lambda}^{\alpha}v_{\lambda}^{\beta-1}(v_{\lambda}-v)|_{\frac{2n}{n+2}}\\ \leq C(n,p)|u_{\lambda}|_p^{\alpha}|v_{\lambda}|_p^{\beta-1}|v_{\lambda}-v|_p \end{array} \right. \end{equation} Here the integrations are over the domain $B^v_{\lambda}$. Hence, we have \begin{equation}\label{mov1} |u_{\lambda}-u|_{p, B^u_{\lambda}}\leq C(n,p)(|u_{\lambda}|_{p,B^u_{\lambda}}^{\alpha-1}|v_{\lambda}|_{p,B^u_{\lambda}}^{\beta}|u_{\lambda}-u|_{p,B^u_{\lambda}} +|u_{\lambda}|_{p,B^v_{\lambda}}^{\alpha}|v_{\lambda}|_{p,B^v_{\lambda}}^{\beta-1}|v_{\lambda}-v|_{p,B^v_{\lambda}}) \end{equation} Similarly, we have the following formulae for $v$ and $v_{\lambda}$.
$$ v(x)=\int_{H_{\lambda}}\frac{u^{\alpha}v^{\beta}(y)}{|x-y|^{n-2}}dy+ \int_{H_{\lambda}} \frac{v_{\lambda}^{\alpha}u_{\lambda}^{\beta}(y)}{|x_{\lambda}-y|^{n-2}}dy, $$ and $$ v_{\lambda}(x)=\int_{H_{\lambda}}\frac{v^{\alpha}u^{\beta}(y)}{|x_{\lambda}-y|^{n-2}}dy+ \int_{H_{\lambda}}\frac{v_{\lambda}^{\alpha}u_{\lambda}^{\beta}(y)}{|x-y|^{n-2}}dy. $$ Then we have the following estimate \begin{equation}\label{mov2} |v_{\lambda}-v|_p\leq C(n,p)(|v_{\lambda}|_{p,B^v_{\lambda}}^{\alpha-1}|u_{\lambda}|_{p,B^v_{\lambda}}^{\beta}|v_{\lambda}-v|_{p,B^v_{\lambda}} +|v_{\lambda}|_{p,B^u_{\lambda}}^{\alpha}|u_{\lambda}|_{p,B^u_{\lambda}}^{\beta-1}|u_{\lambda}-u|_{p,B^u_{\lambda}}) \end{equation} After these preparations, we can use the moving plane method as developed in \cite{CLO} to prove the radial symmetry of the solutions. First, we start the plane from near infinity. Indeed, for $\lambda\gg 1$ large enough, we know that the quantities $$ |v_{\lambda}|_{p,B^u_{\lambda}}, |u_{\lambda}|_{p,B^u_{\lambda}}, |v_{\lambda}|_{p,B^v_{\lambda}} $$ and $$ |u_{\lambda}|_{p,B^v_{\lambda}} $$ are all small, which gives us that $$ |u_{\lambda}-u|_{p, B^u_{\lambda}} \leq \frac{1}{2}|v_{\lambda}-v|_{p,B^v_{\lambda}} $$ and $$ |v_{\lambda}-v|_{p,B^v_{\lambda}}\leq \frac{1}{2}|u_{\lambda}-u|_{p,B^u_{\lambda}}. $$ These imply that $|u_{\lambda}-u|_{p, B^u_{\lambda}} =0$ and $|v_{\lambda}-v|_{p,B^v_{\lambda}}=0$. These say that $B^u_{\lambda}=\phi$ and $B^v_{\lambda}=\phi$. Next we define $$ \lambda_0=\inf\{\lambda\in\mathbf{R};\; B^u_{\lambda'}=\phi\; \mbox{for all}\; \lambda'\geq\lambda\}. $$ Then it follows from the fact that $u(x)\to 0$ as $|x|\to \infty$ and $u(x)>0$ in $\mathbf{R}^n$ that $\lambda_0>-\infty$. By the definition of $\lambda_0$, we have $u_{\lambda_0}(x)\leq u(x)$ for $x\in H_{\lambda_0}$. Using the expression (\ref{equ3}), we see that either $u_{\lambda_0}\equiv u$ in $H_{\lambda_0}$ or $u_{\lambda_0}(x)<u(x)$ for $x\in H_{\lambda_0}$. This implies that $|2\lambda e_1-B^{u}_{\lambda}|\to 0$ as $\lambda\to \lambda_0$.
It is now standard to know (see \cite{CLO}) that $u_{\lambda_0}=u$, which then gives us $v_{\lambda_0}=v $. Since the direction $x_1$ can be chosen arbitrarily, we conclude that $u$ and $v$ are radially symmetric about some point $x_o$. \section{Proof of the Uniqueness} In some sense, the proof of Theorem \ref{thm2} is at hand by using the integral expression of the solution pair $(u,v)$. {\bf Proof of Theorem \ref{thm2}}. Let $(u, v)\in L^{\frac{2n}{n-2}}(\mathbf{R}^n)\times L^{\frac{2n}{n-2}}(\mathbf{R}^n)$ be a pair of solutions to system (\ref{EllS}). By Theorem \ref{thm1}, we know that $u$ and $v$ are radially symmetric about some point $x_0$; we may assume $x_0=0$. Recall that we have assumed that $1\leq\alpha<\beta\leq\frac{n+2}{n-2}$. Since $u\in L^{\frac{2n}{n-2}}(\mathbf{R}^n)$ and $v\in L^{\frac{2n}{n-2}}(\mathbf{R}^n)$, using the same method as in \cite{CLO}, we have that $u\in C^2(\mathbf{R}^n)$ and $v\in C^2(\mathbf{R}^n)$ with $$ u(x)\to 0, v(x)\to 0, $$ as $|x|\to \infty$. Since our solution $u$ is radially symmetric, we can write, in polar coordinates, the first equation in (\ref{EllS}) as $$ (r^{n-1} u'(r))' = - r^{n-1} u(r)^{\alpha}v(r)^{\beta} ,$$ where $r = |x|$. Integrating both sides from $0$ to $r$ yields $$ r^{n-1} u'(r) = - \int_0^r s^{n-1} u^{\alpha}v^{\beta} (s) ds .$$ It follows by another integration that \begin{equation} u(r) = u(0) - \int_0^r \frac{1}{\tau^{n-1}} \int_0^{\tau} s^{n-1}u^{\alpha}v^{\beta} ds d\tau . \label{7} \end{equation} Similarly, for $v(r)$, we have \begin{equation} v(r) = v(0) - \int_0^r \frac{1}{\tau^{n-1}} \int_0^{\tau} s^{n-1} v^{\alpha}u^{\beta} ds d\tau . \label{8} \end{equation} As we mentioned in the introduction, we need only to show that $u(0) = v(0)$. Otherwise, suppose \begin{equation} u(0) < v(0) , \label{9} \end{equation} then by continuity, for all small $r>0$, \begin{equation} u(r) < v(r) .
\label{10} \end{equation} In other words, there exists an $R > 0$ such that \begin{equation} u(r) < v(r) , \;\; \forall r \in (0, R). \label{11} \end{equation} Let $R_o$ be the supremum of all $R$ such that (\ref{11}) holds. Then $R_o\leq \infty$ and $u(R_o)=v(R_o)$, where we have used the fact that $u(+\infty)=v(+\infty)=0$. By the definition of $R_o$ and $\alpha<\beta$, we have that \begin{equation} u(r)^{\alpha}v(r)^{\beta} > v(r)^{\alpha}u(r)^{\beta} , \;\; \forall r \in (0, R_o). \label{12} \end{equation} Then we have from (\ref{7}) and (\ref{8}) that $$0>u(0)-v(0)= \int_0^{R_o} \frac{1}{\tau^{n-1}} \int_0^{\tau} s^{n-1} (u^{\alpha}v^{\beta}-u^{\beta}v^{\alpha})(s) ds d\tau>0. $$ This is impossible. Similarly, one can show that $u(0) > v(0)$ is impossible. Therefore, we must have $$ u(0) = v(0). $$ Finally, by standard ODE theory, we arrive at $$ u(r)\equiv v(r) . $$ Hence, our elliptic system (\ref{EllS}) has been reduced to the elliptic equation with critical exponent (\ref{1.2}). By now, it is standard to conclude that our solution pair $u$ and $v$ is of the form (\ref{solu}). This completes the proof of Theorem \ref{thm2}.

{\em Authors' Addresses and E-mails:}

Congming Li, Department of Applied Mathematics, Campus Box 526, University of Colorado at Boulder, Boulder CO 80309, USA, [email protected]

Li Ma, Department of Mathematics, Tsinghua University, Beijing 100084, China, [email protected] \end{document}
\begin{document} \title{ORLICZ-SOBOLEV VERSUS H\"OLDER LOCAL MINIMIZER FOR NON-LINEAR ROBIN PROBLEMS} \author{Anouar Bahrouni, Hlel Missaoui and Hichem Ounaies} \begin{abstract} We establish regularity results for weak solutions of Robin problems driven by the well-known Orlicz $g$-Laplacian operator, given by \begin{equation}\label{P} \left\lbrace \begin{array}{ll} -\triangle_g u=f(x,u),& x\in\Omega\\ \ \\ a(\vert \nabla u\vert)\frac{\partial u}{d\nu}+b(x)\vert u\vert^{p-2}u=0,& x\in \partial \Omega, \end{array} \right.\tag{P} \end{equation} where $\triangle_g u:=\text{div}(a(\vert\nabla u\vert)\nabla u)$, $\Omega\subset \mathbb{R}^N,\ N\geq 3$, is a bounded domain with $C^2$-boundary $\partial\Omega$, $\frac{\partial u}{d\nu}=\nabla u .\nu$, $\nu$ is the unit exterior normal vector on $\partial\Omega$, $p>0$, $b \in C^{1,\gamma}(\partial\Omega)$ with $\gamma\in (0,1)$ and $\inf\limits_{x\in \partial\Omega} b(x) > 0$. Precisely, by using a suitable variation of the Moser iteration technique, we prove that every weak solution of problem $(\ref{P})$ is bounded. Moreover, we combine this result with the Lieberman regularity theorem to show that every $C^1(\overline{\Omega})$-local minimizer is also a $W^{1,G}(\Omega)$-local minimizer for the corresponding energy functional of problem $(\ref{P})$. 
\end{abstract} {\small \textbf{2010 Mathematics Subject Classification:} 35J60, 35J25, 35S30, 46E35.}\\ \ \\ \textbf{Keywords:} \small\textbf{Orlicz-Sobolev space, Orlicz $g$-Laplacian, Robin boundary values, Moser iteration.} \section{Introduction} In this paper, we study the boundedness regularity of weak solutions and the relationship between the H\"older local minimizer and the Orlicz-Sobolev local minimizer for the corresponding energy functional of the following Robin problem: \begin{equation}\label{P} \left\lbrace \begin{array}{ll} -\triangle_g u=f(x,u),& \text{in}\ \Omega\\ \ \\ a(\vert\nabla u\vert)\frac{\partial u }{d\nu}+b(x)\vert u\vert^{p-2}u=0,& \text{on}\ \partial \Omega, \end{array} \right.\tag{P} \end{equation} where $\Omega$ is a bounded open subset of $\mathbb{R}^N$ with $C^2$-boundary $\partial\Omega$, $\triangle_g u:=\text{div}(a(\vert\nabla u\vert)\nabla u)$ is the Orlicz $g$-Laplacian operator, $\frac{\partial u}{d\nu}=\nabla u .\nu$, $\nu$ is the unit exterior normal vector on $\partial\Omega$, $p>0$, $b \in C^{1,\gamma}(\partial\Omega)$ with $\gamma\in (0,1)$ and $\inf\limits_{x\in \partial\Omega} b(x) > 0$, while the function $a(\vert t\vert)t$ is an increasing homeomorphism from $\mathbb{R}$ onto $\mathbb{R}$. On the right-hand side of problem $(\ref{P})$ there is a Carath\'eodory function $f:\Omega\times\mathbb{R}\longrightarrow \mathbb{R}$, that is, $x\longmapsto f(x,s)$ is measurable for all $s\in\mathbb{R}$ and $s\longmapsto f(x,s)$ is continuous for a.e. $x\in\Omega$.\\ Due to the nature of the non-homogeneous $g$-Laplacian differential operator, we shall work in the framework of Orlicz and Orlicz-Sobolev spaces. 
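A model case to keep in mind (this example is ours, added for illustration, and is consistent with the hypotheses stated in Section 2): the choice $a(t)=t^{p-2}$ with $1<p<\infty$ recovers the classical $p$-Laplacian.

```latex
% Illustrative special case (not from the paper): a(t) = t^{p-2}, 1 < p < infinity.
g(t) = a(\vert t\vert)\,t = \vert t\vert^{p-2}t, \qquad
G(t) = \int_0^{t} g(s)\,ds = \frac{\vert t\vert^{p}}{p}, \qquad
\triangle_g u = \operatorname{div}\big(\vert\nabla u\vert^{p-2}\nabla u\big) = \triangle_p u .
```

In this case the structural exponents $g^-$ and $g^+$ introduced below both equal $p$, so the Orlicz framework collapses to the familiar $p$-Laplacian setting.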
The study of variational problems in the classical Sobolev and Orlicz-Sobolev spaces is an interesting topic of research due to its significant role in many fields of mathematics, such as approximation theory, partial differential equations, calculus of variations, non-linear potential theory, the theory of quasi-conformal mappings, non-Newtonian fluids, image processing, differential geometry, geometric function theory, and probability theory (see \cite{16,17,18,19,6,25}).\\ It is worthwhile to mention that the Orlicz-Sobolev space is a generalization of the classical Sobolev space. Hence, several properties of the Sobolev spaces have been extended to the Orlicz-Sobolev spaces. To the best of our knowledge, there is a lack of regularity results concerning problem $(\ref{P})$, precisely, the boundedness of weak solutions and the relationship between the Orlicz-Sobolev and H\"older local minimizers for the corresponding energy functional of $(\ref{P})$. These results are crucial in some methods for the existence and multiplicity of solutions for problem $(\ref{P})$.\\ The question of the boundedness regularity and the relationship between the Sobolev and H\"older local minimizers for certain $C^1$-functionals has been treated by many authors \cite{ylp,29,fan2,6,3,piral,30,lmt,31,shu,1,2,mgpw,4,pwa,patrik1,patrik2} and the references therein. In \cite{1}, G. M. Lieberman treated the regularity up to the boundary of weak solutions of the following problem \begin{equation}\label{E} \begin{array}{ll} -\triangle_p u=f(x,u),& x\in\Omega,\\ \end{array} \tag{E} \end{equation} where $\Omega$ is a bounded domain in $\mathbb{R}^N$ with $C^{1,\alpha}$-boundary. Precisely, under some assumptions on the structure of the $p$-Laplacian operator and on the non-linear term $f$, he proved that every bounded (i.e. 
$u\in L^{\infty}(\Omega)$) weak solution of problem $(\ref{E})$ (with Dirichlet or Neumann boundary conditions) belongs to $C^{1,\beta}(\overline{\Omega})$. In \cite{2}, G. M. Lieberman extended the results obtained in \cite{1} to the Orlicz $g$-Laplacian operator. In \cite{fan2}, X. L. Fan established the same results as in \cite{1} for the variable exponent Sobolev spaces ($p$ being variable). Note that all the results cited in \cite{fan2,1,2} require that the weak solution belong to $L^{\infty}(\Omega)$. The boundedness of weak solutions in the Dirichlet case can be deduced from Theorem $7.1$ of Ladyzhenskaya-Uraltseva \cite{lad} (problems with standard growth conditions) and Theorem $4.1$ of Fan-Zhao \cite{fan3} (problems with non-standard growth conditions). For the Neumann case, the boundedness result can be deduced from Proposition 3.1 of Gasi\'nski-Papageorgiou \cite{3} (problems with sub-critical growth conditions).\\ To the best of our knowledge, there is only one paper (see \cite{6}) devoted to the boundedness of weak solutions to problems driven by the Orlicz $g$-Laplacian operator. Precisely, in \cite{6}, F. Fang and Z. Tan, under sub-critical growth conditions, proved that every weak solution of problems with Dirichlet boundary condition belongs to $L^{\infty}(\Omega)$. The approach used by Fang and Tan in \cite{6} for the boundedness result does not work in our case (Robin boundary condition), since it requires that $u_{\mid_{\partial\Omega}}$ be bounded ($u$ being the weak solution). To overcome this difficulty, we apply a suitable variation of the Moser iteration technique.\\ The question of the relationship between the Sobolev and H\"older local minimizers for certain functionals has attracted the attention of many authors \cite{ylp,29,6,3,piral,30,34,31,shu,iann,mom,patrik1,patrik2} and the references therein. 
In \cite{29}, Brezis and Nirenberg proved a famous theorem asserting that local minimizers in the space $C^1$ are also local minimizers in the space $H^1$ for certain variational functionals. A result of this type was later extended to the space $W^{1,p}_0(\Omega)$ (Dirichlet boundary condition), with $1<p<\infty$, by Garcia Azorero-Manfredi-Peral Alonso \cite{30} (see also Guo-Zhang \cite{31}, where $2\leq p$). The $W^{1,p}_n(\Omega)$-version (Neumann boundary condition) of the result can be found in Motreanu-Motreanu-Papageorgiou \cite{mom}. Moreover, this theorem has been extended to $p(x)$-Laplacian equations (see \cite{3}), non-smooth functionals (see \cite{ylp,iann,patrik1}) and singular equations with critical terms (see \cite{34}).\\ As far as we know, there is only one paper (see \cite{6}) devoted to the result of Brezis and Nirenberg in the Orlicz case. Precisely, in \cite{6}, F. Fang and Z. Tan proved a boundedness regularity result and established the relation between the $C^1(\overline{\Omega})$ and $W^{1,G}_0(\Omega)$ minimizers for an Orlicz problem with Dirichlet boundary condition. Since our problem $(\ref{P})$ has a Robin boundary condition, many of the approaches used in \cite{6} do not work.\\ The main novelty of our work is the study of the boundedness regularity of weak solutions of problem $(\ref{P})$ and the relationship between the Orlicz-Sobolev and H\"older local minimizers for the energy functional of problem $(\ref{P})$. The non-homogeneity of the $g$-Laplacian operator brings several difficulties in obtaining the boundedness of weak solutions to the Robin problem $(\ref{P})$.\\ This paper is organized as follows. In Section 2 we recall the basic properties of the Orlicz-Sobolev spaces and the Orlicz $g$-Laplacian operator, and we state the main hypotheses on the data of our problem. Section 3 deals with two regularity results. 
In the first we prove that every weak solution of problem $(\ref{P})$ belongs to $L^{s}(\Omega)$ for all $1\leq s<\infty$. In the second we show that every weak solution of problem $(\ref{P})$ is bounded. In the last section, we establish the relationship between the local $C^1(\overline{\Omega})$-minimizer and the local $W^{1,G}(\Omega)$-minimizer for the corresponding energy functional. \section{Preliminaries} To deal with problem $(\ref{P})$, we use the theory of Orlicz-Sobolev spaces, since problem $(\ref{P})$ contains a non-homogeneous function $a(\cdot)$ in the differential operator. Therefore, we start with some basic concepts of Orlicz-Sobolev spaces, and we set the hypotheses on the non-linear term $f$. For more details on the Orlicz-Sobolev spaces see \cite{111,6,8,9990,25,5,11} and the references therein.\\ Let $a : (0,+\infty) \rightarrow (0,+\infty)$ be a function such that the mapping defined by $$ g(t):= \left\lbrace \begin{array}{lcc} a(\vert t\vert)t, & \text{if}\ t\neq 0,\\ \ & \ \\ 0 ,& \text{if}\ t= 0, \end{array} \right. $$ is an odd, increasing homeomorphism from $\mathbb{R}$ onto itself. Let $$G(t):=\int_{0}^{t}g(s)\ ds,\ \ \forall\ t\in \mathbb{R};$$ then $G$ is an $N$-function, i.e. a Young function satisfying: $G$ is even, positive, continuous and convex. Moreover, $G(0)=0$, $\frac{G(t)}{t}\rightarrow 0$ as $t\rightarrow 0$ and $\frac{G(t)}{t}\rightarrow +\infty$ as $t\rightarrow+\infty$ (see \cite[Lemma 3.2.2, p. 128]{9990}). 
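A numerical sanity check (this example and its parameters are ours, not the paper's) of the exponents $g^-$ and $g^+$ appearing in assumption $(g_2)$ below: for the $(p,q)$-growth model $g(t)=t^{p-1}+t^{q-1}$ with $G(t)=t^p/p+t^q/q$ and $p<q$, the ratio $g(t)t/G(t)$ stays between $p$ and $q$, approaching $g^-=p$ as $t\to 0$ and $g^+=q$ as $t\to\infty$.

```python
# (p, q)-growth toy model; parameters chosen for illustration only.
p, q = 2.0, 3.0

def g(t):
    # g(t) = t^{p-1} + t^{q-1} for t > 0
    return t ** (p - 1) + t ** (q - 1)

def G(t):
    # primitive of g: G(t) = t^p/p + t^q/q
    return t ** p / p + t ** q / q

def ratio(t):
    # the quantity g(t) t / G(t) whose inf/sup define g^- and g^+
    return g(t) * t / G(t)

samples = [10.0 ** k for k in range(-6, 7)]
assert all(p <= ratio(t) <= q + 1e-12 for t in samples)
assert abs(ratio(1e-6) - p) < 1e-4   # ratio -> g^- = p as t -> 0
assert abs(ratio(1e6) - q) < 1e-4    # ratio -> g^+ = q as t -> infinity
```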
\\ In order to construct an Orlicz-Sobolev space setting for problem $(\ref{P})$, we impose the following class of assumptions on $G$, $a$ and $g$: \begin{enumerate} \item[$(G)$] \begin{enumerate} \item[$(g_1)$]: $a(t)\in C^{1}(0,+\infty),\ a(t)>0$ and $a(t)$ is an increasing function for $t>0.$ \item[$(g_2)$]: $1<p<g^-:=\inf\limits_{t>0}\frac{g(t)t}{G(t)}\leq g^{+}:=\sup\limits_{t>0}\frac{g(t)t}{G(t)}<+\infty.$ \item[$(g_3)$]: $0<g^--1=a^-:=\inf\limits_{t>0}\frac{g^{'}(t)t}{g(t)}\leq g^+-1=a^{+}:=\sup\limits_{t>0}\frac{g^{'}(t)t}{g(t)}<+\infty.$ \item[$(g_4)$]: $\displaystyle{\int_1^{+\infty}\frac{G^{-1}(t)}{t^{\frac{N+1}{N}}}dt=\infty}$ and $\displaystyle{\int_0^{1}\frac{G^{-1}(t)}{t^{\frac{N+1}{N}}}dt<\infty}$. \end{enumerate} \end{enumerate} The conjugate $N$-function of $G$ is defined by $$\tilde{G}(t)=\int_{0}^{ t} \tilde{g}(s)\ ds,$$ where $\tilde{g}: \mathbb{R}\rightarrow\mathbb{R}$ is given by $\tilde{g}(t)=\sup\{s:\ g(s)\leq t\}$. If $g$ is continuous on $\mathbb{R}$, then $\tilde{g}(t)=g^{-1}(t)$ for all $t\in \mathbb{R}.$ Moreover, we have \begin{equation}\label{1119} st\leq G(s)+\tilde{G}(t), \end{equation} which is known as the Young inequality. Equality in $(\ref{1119})$ holds if and only if either $t=g(s)$ or $s=\tilde{g}(t)$. In our case, since $g$ is continuous, we have $$\tilde{G}(t)=\int_{0}^{ t}g^{-1}(s)\ ds.$$ The functions $G$ and $\tilde{G}$ are complementary $N$-functions.\\ We say that $G$ satisfies the $\triangle_2$-condition if there exists $C>0$ such that \begin{equation}\label{delta} G(2t) \leq CG(t),\ \text{for all}\ t > 0. \end{equation} We want to remark that assumption $(g_2)$ and $(\ref{delta})$ are equivalent (see \cite[Theorem 3.4.4, p. 
138]{9990} and \cite{8}).\\ If $G_1$ and $G_2$ are two $N$-functions, we say that $G_1$ grows essentially more slowly than $G_2$ $(G_2\prec\prec G_1$ in symbols$)$ if and only if, for every positive constant $k$, we have \begin{equation} \lim_{t\rightarrow+\infty}\frac{G_1(t)}{G_2(k t)}=0. \end{equation} Another important function related to the $N$-function $G$ is the Sobolev conjugate function $G_{*}$ defined by $$G_{*}^{-1}(t)=\int_{0}^{t}\frac{G^{-1}(s)}{s^{\frac{N+1}{N}}}\ ds, \ t>0$$ (see \cite[Definition 7.2.1, p. 352]{9990}).\\ If $G$ satisfies the $\triangle_2$-condition, then $G_*$ also satisfies the $\triangle_2$-condition. Namely, there exist $g^-_*=\frac{Ng^-}{N-g^-}$ and $g^+_*=\frac{Ng^-_*}{N-g^-_*}$ such that \begin{equation}\label{delta2} g^+<g^-_*:=\inf\limits_{t>0}\frac{g_*(t)t}{G_*(t)}\leq \frac{g_*(t)t}{G_*(t)}\leq g^{+}_*:=\sup\limits_{t>0}\frac{g_*(t)t}{G_*(t)}<+\infty \end{equation} (see \cite[Lemma 2.4, p. 240]{8}).\\ The Orlicz space $L^{G}(\Omega)$ is the vector space of measurable functions $u:\Omega\rightarrow\mathbb{R}$ such that $$\rho(u)=\int_{\Omega}G(\vert u(x)\vert)\ dx < \infty.$$ \\ $L^{G}(\Omega)$ is a Banach space under the Luxemburg norm $$\Vert u\Vert_{(G)}=\inf \left\lbrace \lambda >0\ : \ \rho\Big(\frac{u}{\lambda}\Big) \leq 1\right\rbrace. $$ For Orlicz spaces, the H\"older inequality reads as follows: $$\int_{\Omega}uv\,dx\leq \Vert u\Vert_{(G)}\Vert v\Vert_{(\tilde{G})},\ \ \text{for all}\ u\in L^{G}(\Omega)\ \ \text{and}\ v\in L^{\tilde{G}}(\Omega).$$ Next, we introduce the Orlicz-Sobolev space. 
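Before moving on, a numerical sanity check (ours, not the paper's) of the Sobolev conjugate: for $G(t)=t^p$ one has $G^{-1}(s)=s^{1/p}$, and the defining integral gives $G_*^{-1}(t)=c\,t^{1/p^*}$ with $p^*=\frac{Np}{N-p}$, matching the classical Sobolev exponent. The values $N=5$, $p=2$ below are illustrative.

```python
import math

# Illustrative parameters (ours): G(t) = t^p, so G^{-1}(s) = s^{1/p}.
N, p = 5, 2.0
p_star = N * p / (N - p)  # classical Sobolev exponent Np/(N-p)

def Ginv_star(t, steps=50000):
    # Midpoint rule for G_*^{-1}(t) = int_0^t s^{1/p - (N+1)/N} ds;
    # the integrand is integrable at 0 precisely when p < N.
    h = t / steps
    return sum(((i + 0.5) * h) ** (1.0 / p - (N + 1.0) / N) * h
               for i in range(steps))

# The log-log slope of G_*^{-1} recovers 1/p*, i.e. G_*(t) grows like t^{p*}.
t1, t2 = 1.0, 8.0
slope = math.log(Ginv_star(t2) / Ginv_star(t1)) / math.log(t2 / t1)
assert abs(slope - 1.0 / p_star) < 1e-3
```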
We denote by $W^{1,G}(\Omega)$ the Orlicz-Sobolev space defined by $$W^{1,G}(\Omega):=\bigg{\{}u\in L^{G}(\Omega):\ \frac{\partial u}{\partial x_{i}}\in L^{G}(\Omega),\ i=1,...,N\bigg{\}}.$$ $W^{1,G}(\Omega)$ is a Banach space with respect to the norm $$\|u\|_G=\|u\|_{(G)}+ \|\nabla u\|_{(G)}.$$ Another equivalent norm is $$\Vert u\Vert=\inf \left\lbrace \lambda >0\ : \ \mathcal{K}\Big(\frac{u}{\lambda}\Big) \leq 1\right\rbrace,$$ where \begin{equation}\label{mod} \mathcal{K}(u)=\int_{\Omega}G(\vert \nabla u(x)\vert)\ dx+\int_{\Omega}G(\vert u(x)\vert)\ dx. \end{equation} If $G$ and its complementary function $\tilde{G}$ satisfy the $\triangle_2$-condition, then $W^{1,G}(\Omega)$ is a separable and reflexive Banach space. For that reason, in our work we also assume that $\tilde{G}$ satisfies the $\triangle_2$-condition.\\ In the sequel, we give some general results related to $N$-functions and the Orlicz and Orlicz-Sobolev spaces. \begin{lemma}\label{lem12}(see \cite{11}). Let $G$ and $H$ be $N$-functions such that $H$ grows essentially more slowly than $G_*$ (where $G_*$ is the Sobolev conjugate function of $G$). \begin{enumerate} \item[$(1)$] If $\displaystyle{\int_1^{+\infty}\frac{G^{-1}(t)}{t^{\frac{N+1}{N}}}dt=\infty}$ and $\displaystyle{\int_0^{1}\frac{G^{-1}(t)}{t^{\frac{N+1}{N}}}dt<\infty}$, then the embedding $W^{1,G}(\Omega)\hookrightarrow L^{H}(\Omega)$ is compact and the embedding $W^{1,G}(\Omega)\hookrightarrow L^{G_*}(\Omega)$ is continuous. \item[$(2)$] If $\displaystyle{\int_1^{+\infty}\frac{G^{-1}(t)}{t^{\frac{N+1}{N}}}dt<\infty}$, then the embedding $W^{1,G}(\Omega)\hookrightarrow L^{H}(\Omega)$ is compact and the embedding $W^{1,G}(\Omega)\hookrightarrow L^{\infty}(\Omega)$ is continuous. \end{enumerate} \end{lemma} \begin{lemma}\label{lem8956}(see \cite[Lemma 2.6, p. 
351]{6})\\ The condition $(g_3)$ implies that $$a^--1:=\inf\limits_{t>0}\frac{a^{'}(t)t}{a(t)}\leq a^{+}-1:=\sup\limits_{t>0}\frac{a^{'}(t)t}{a(t)}<+\infty.$$ \end{lemma} \begin{lemma}\label{lem1}(see \cite{8})\\ Let $G$ be an $N$-function satisfying $(g_1)-(g_3)$ such that $\displaystyle{G(t)=\int^{ t}_0 g( s)\ ds=\int^{ t}_0 a(\vert s\vert)s\ ds}$. Then \begin{enumerate} \item[$(1)$] $\min\lbrace t^{g^-}, t^{g^+}\rbrace G(1)\leq G(t)\leq \max\lbrace t^{g^-}, t^{g^+}\rbrace G(1)$, for all $t>0$; \item[$(2)$] $\min\lbrace t^{g^--1}, t^{g^+-1}\rbrace g(1)\leq g(t)\leq \max\lbrace t^{g^--1}, t^{g^+-1}\rbrace g(1)$, for all $t>0$; \item[$(3)$] $\min\lbrace t^{g^--2}, t^{g^+-2}\rbrace a(1)\leq a(t)\leq \max\lbrace t^{g^--2}, t^{g^+-2}\rbrace a(1)$, for all $t>0$; \item[$(4)$] $\min\lbrace t^{g^-}, t^{g^+}\rbrace G(z)\leq G(tz)\leq \max\lbrace t^{g^-}, t^{g^+}\rbrace G(z)$, for all $t>0$ and $z\in\mathbb{R}$; \item[$(5)$] $\min\lbrace t^{g^--1}, t^{g^+-1}\rbrace g(z)\leq g(tz)\leq \max\lbrace t^{g^--1}, t^{g^+-1}\rbrace g(z)$, for all $t>0$ and $z\in\mathbb{R}$; \item[$(6)$] $\min\lbrace t^{g^--2}, t^{g^+-2}\rbrace a(\vert \eta\vert)\leq a(\vert t\eta\vert)\leq \max\lbrace t^{g^--2}, t^{g^+-2}\rbrace a(\vert \eta\vert)$, for all $t>0$ and $\eta \in \mathbb{R}^N$. \end{enumerate} \end{lemma} \begin{lemma}\label{lem13}(See \cite{8}). Let $G$ be an $N$-function satisfying $(g_2)$ such that $\displaystyle{G(t)=\int^{t}_0 g( s)\ ds}$. Then \begin{enumerate} \item[$(1)$] if $\Vert u\Vert_{(G)}<1$ then $\Vert u\Vert_{(G)}^{g^+}\leq \rho(u)\leq \Vert u\Vert_{(G)}^{g^-}$; \item[$(2)$] if $\Vert u\Vert_{(G)}\geq 1$ then $\Vert u\Vert_{(G)}^{g^-}\leq \rho(u)\leq \Vert u\Vert_{(G)}^{g^+}$; \item[$(3)$] if $\Vert u\Vert<1$ then $\Vert u\Vert^{g^+}\leq \mathcal{K}(u)\leq \Vert u\Vert^{g^-}$; \item[$(4)$] if $\Vert u\Vert\geq 1$ then $\Vert u\Vert^{g^-}\leq \mathcal{K}(u)\leq \Vert u\Vert^{g^+}$. 
\end{enumerate} \end{lemma} \begin{lemma}\label{lem001} Assume that $\Omega$ is a bounded domain with smooth boundary $\partial \Omega$. Then the embedding $W^{1,p}(\Omega)\hookrightarrow L^r(\Omega)$ is compact provided $1 \leq r < p^*$, where $p^*=\frac{Np}{N-p}$ if $p < N$ and $p^* := +\infty$ otherwise. \end{lemma} \begin{lemma}\label{lem002} Assume that $\Omega$ is a bounded domain with Lipschitz boundary $\partial\Omega$. Then the embedding $W^{1,p}(\Omega)\hookrightarrow L^r(\partial\Omega)$ is compact provided $1 \leq r < p^*$. \end{lemma} \begin{thm}\label{rem5} The Orlicz-Sobolev space $W^{1,G}(\Omega)$ is continuously and compactly embedded in the classical Lebesgue spaces $L^r(\Omega)$ and $L^r(\partial\Omega)$ for all $1 \leq r < g^-_*$. \end{thm} \begin{proof} By assumption $(g_2)$, the Orlicz-Sobolev space $W^{1,G}(\Omega)$ is continuously embedded in the classical Sobolev space $W^{1,g^-}(\Omega)$. In light of Lemmas \ref{lem001} and \ref{lem002}, we deduce that $W^{1,g^-}(\Omega)$ is compactly embedded in $L^r(\Omega)$ and $L^r(\partial\Omega)$ for all $1 \leq r < g^-_*$. Hence, $W^{1,G}(\Omega)$ is continuously and compactly embedded in the classical Lebesgue spaces $L^r(\Omega)$ and $L^r(\partial\Omega)$ for all $1 \leq r < g^-_*$. \end{proof} \begin{lemma}\label{lem3}\cite[Lemma 3.2, p. 354]{6}\\ \begin{enumerate} \item[$(1)$] If $a(t)$ is increasing for $t>0$, there exists a constant $d_1$, depending on $g^-,\ g^+$, such that \begin{equation} \label{111} \vert a(\vert \eta\vert)\eta -a(\vert \xi\vert)\xi\vert \leq d_1\vert \eta -\xi\vert a(\vert \eta\vert +\vert \xi\vert), \end{equation} for all $\eta,\ \xi\ \in \mathbb{R}^N$. 
\item[$(2)$] If $a(t)$ is decreasing for $t>0$, there exists a constant $d_2$, depending on $g^-,\ g^+$, such that \begin{equation} \label{112} \vert a(\vert \eta\vert)\eta -a(\vert \xi\vert)\xi\vert \leq d_2 g(\vert \eta -\xi\vert), \end{equation} for all $\eta,\ \xi\ \in \mathbb{R}^N$. \end{enumerate} \end{lemma} \begin{lemma}\label{lem444} Let $G$ be an $N$-function satisfying $(g_1)-(g_3)$ such that $\displaystyle{G(t)=\int^{ t}_0 g( s)\ ds=\int^{ t}_0 a(\vert s\vert)s\ ds}$. Then for every $\xi,\eta\in\mathbb{R}^N$, we have $$\langle a(\vert \eta\vert)\eta-a(\vert \xi\vert)\xi,\eta-\xi\rangle_{\mathbb{R}^N}\geq 0,$$ where $\langle\cdot,\cdot\rangle_{\mathbb{R}^N}$ is the inner product on $\mathbb{R}^N$. \end{lemma} \begin{proof}[Proof] Let $\eta,\xi\in\mathbb{R}^N$. Since $G$ is convex, we have $$G(\vert \eta\vert)\leq G\left( \left| \frac{\eta+\xi}{2}\right|\right)+\langle a(\vert \eta\vert)\eta,\frac{\eta-\xi}{2}\rangle_{\mathbb{R}^N} $$ and $$G(\vert \xi\vert)\leq G\left( \left| \frac{\eta+\xi}{2}\right|\right)+\langle a(\vert \xi\vert)\xi,\frac{\xi-\eta}{2}\rangle_{\mathbb{R}^N} . $$ Adding the above two relations, we find that \begin{align}\label{hlel} \frac{1}{2}\langle a(\vert \eta\vert)\eta-a(\vert \xi\vert)\xi,\eta-\xi\rangle_{\mathbb{R}^N} \geq G(\vert \eta\vert)+G(\vert \xi\vert)-2G\left( \left| \frac{\eta +\xi}{2}\right| \right) \ \ \text{for all}\ \ \eta,\xi\in \mathbb{R}^N. \end{align} On the other hand, the convexity and the monotonicity of $G$ give \begin{align}\label{miss} G\left( \left| \frac{\eta +\xi}{2}\right| \right)\leq \frac{1}{2}\left[ G\left( \vert \eta \vert \right)+G\left( \vert \xi \vert \right)\right] \ \ \text{for all}\ \ \eta,\xi\in \mathbb{R}^N. 
\end{align} From $(\ref{hlel})$ and $(\ref{miss})$, we get \begin{align*} \langle a(\vert \eta\vert)\eta-a(\vert \xi\vert)\xi,\eta-\xi\rangle_{\mathbb{R}^N}\geq 0,\ \ \text{for all}\ \ \eta,\xi\in \mathbb{R}^N. \end{align*} \end{proof} \begin{definition}\label{def0}(See \cite{5})\\ We say that $u\in W^{1,G}(\Omega)$ is a weak solution of problem $(\ref{P})$ if \begin{align}\label{O.O} & \int_{\Omega} a(\vert\nabla u\vert)\nabla u.\nabla v\, dx +\int_{\partial\Omega}b(x)\vert u\vert^{p-2}u v\, d\gamma =\int_{\Omega}f(x,u)v\,dx,\ \forall v\in W^{1,G}(\Omega), \end{align} where $d\gamma$ is the measure on the boundary $\partial\Omega$.\\ The energy functional corresponding to problem $(\ref{P})$ is the $C^1$-functional $J :W^{1,G}(\Omega)\rightarrow \mathbb{R}$ defined by \begin{equation}\label{eq9870} J(u)=\int_{\Omega}G(\vert\nabla u\vert)\, dx +\frac{1}{p}\int_{\partial\Omega}b(x)\vert u\vert^{p}\, d\gamma-\int_{\Omega}F(x,u)\,dx, \end{equation} for all $u\in W^{1,G}(\Omega)$, where $\displaystyle{F(x, t) = \int_{0}^{t} f(x, s)\, ds}$.\\ \end{definition} \begin{definitions}\label{def333}\ \\ \begin{enumerate} \item[$(1)$] We say that $u_0\in W^{1,G}(\Omega)$ is a local $C^1(\overline{\Omega})$-minimizer of $J$ if we can find $r_0>0$ such that $$J(u_0)\leq J(u_0+v),\ \text{for all}\ v\in C^1(\overline{\Omega})\ \text{with}\ \Vert v\Vert_{C^1(\overline{\Omega})}\leq r_0.$$ \item[$(2)$] We say that $u_0\in W^{1,G}(\Omega)$ is a local $W^{1,G}(\Omega)$-minimizer of $J$ if we can find $r_1>0$ such that $$J(u_0)\leq J(u_0+v),\ \text{for all}\ v\in W^{1,G}(\Omega)\ \text{with}\ \Vert v\Vert\leq r_1.$$ \end{enumerate} \end{definitions} Now, we state the assumption on the non-linear term $f$ as follows. 
\begin{enumerate} \item[$(H)$] $f(x,0)=0$ and there exist an odd increasing homeomorphism $h\in C^{1}(\mathbb{R},\mathbb{R})$ and a positive function $\widehat{a}\in L^{\infty}(\Omega)$ such that $$\vert f(x,t)\vert \leq \widehat{a}(x)(1+h(\vert t\vert)), \ \ \forall\ t\in \mathbb{R},\ \forall x\in \overline{\Omega},$$ $$G\prec\prec H\prec\prec G_*,$$ $$\displaystyle{1<g^+<h^-:=\inf\limits_{t>0}\frac{h(t)t}{H(t)}\leq h^{+}:=\sup\limits_{t>0}\frac{h(t)t}{H(t)}<\frac{g^-_*}{g^-}},$$ $$\displaystyle{1<h^--1:=\inf\limits_{t>0}\frac{h^{'}(t)t}{h(t)}\leq h^{+}-1:=\sup\limits_{t>0}\frac{h^{'}(t)t}{h(t)}},$$ where $$H(t):=\int_0^t h(s)\ ds$$ is an $N$-function. \end{enumerate} \begin{rem}\label{rem78} Some assertions in Lemma \ref{lem1} remain valid for the $N$-function $H$ and the function $h$: \begin{enumerate} \item[$(1)$] $\min\lbrace t^{h^-}, t^{h^+}\rbrace H(1)\leq H(t)\leq \max\lbrace t^{h^-}, t^{h^+}\rbrace H(1)$, for all $t>0$; \item[$(2)$] $\min\lbrace t^{h^--1}, t^{h^+-1}\rbrace h(1)\leq h(t)\leq \max\lbrace t^{h^--1}, t^{h^+-1}\rbrace h(1)$, for all $t>0$; \item[$(3)$] $\min\lbrace t^{h^-}, t^{h^+}\rbrace H(z)\leq H(tz)\leq \max\lbrace t^{h^-}, t^{h^+}\rbrace H(z)$, for all $t>0$ and $z\in\mathbb{R}$; \item[$(4)$] $\min\lbrace t^{h^--1}, t^{h^+-1}\rbrace h(z)\leq h(tz)\leq \max\lbrace t^{h^--1}, t^{h^+-1}\rbrace h(z)$, for all $t>0$ and $z\in\mathbb{R}$. \end{enumerate} \end{rem} The main results of this paper are: \begin{thm}\label{thmC11} Under the assumptions $(G)$ and $(H)$, if $u\in W^{1,G}(\Omega)$ is a non-trivial weak solution of problem $(\ref{P})$, then $u\in L^{\infty}(\Omega)$ and $\Vert u\Vert_{\infty}\leq M=M(\Vert \widehat{a}\Vert_{\infty}, h(1), g^-,\vert \Omega\vert, \Vert u \Vert_{h^+})$. 
\end{thm} \begin{thm}\label{thmC12} Under the assumptions $(G)$ and $(H)$, if $u_0\in W^{1,G}(\Omega)$ is a local $C^1(\overline{\Omega})$-minimizer of $J$, then $u_0\in C^{1,\alpha}(\overline{\Omega})$ for some $\alpha\in (0,1)$ and $u_0$ is also a local $W^{1,G}(\Omega)$-minimizer of $J$. \end{thm} \section{Boundedness results for weak solutions of problem $(\ref{P})$} In this section, by using the Moser iteration technique, we prove a result concerning the boundedness regularity for problem $(\ref{P})$. Our method is inspired by the work of Gasi\'nski and Papageorgiou \cite{3}.\\ Consider the following problem \begin{equation}\label{A} \left\lbrace \begin{array}{ll} -\text{div}(\mathcal{A}(x,\nabla u))=\mathcal{B}(x,u),\ \text{in}\ \Omega\\ \ \\ \mathcal{A}(x,\nabla u).\nu+\psi(x,u)=0,\ \text{on}\ \partial\Omega, \end{array} \right. \tag{A} \end{equation} where $\Omega$ is a bounded subset of $\mathbb{R}^N$ $(N\geq 3)$ with $C^2$-boundary, $\mathcal{A}:\Omega\times\mathbb{R}^N\rightarrow\mathbb{R}^N$, $\mathcal{B}:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ and $\psi:\partial\Omega\times\mathbb{R}\rightarrow\mathbb{R}$. 
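For orientation (this observation is ours, added for illustration): problem (P) fits this auxiliary framework by taking

```latex
% How (P) instantiates the auxiliary problem (A) -- our remark, not the paper's.
\mathcal{A}(x,\eta)=a(\vert\eta\vert)\eta,\qquad
\mathcal{B}(x,t)=f(x,t),\qquad
\psi(x,t)=b(x)\vert t\vert^{p-2}t .
```

In particular, for $t\geq 0$ one has $\psi(x,t)=b(x)t^{p-1}\geq 0$ because $\inf_{x\in\partial\Omega} b(x)>0$, so the sign condition on $\psi$ imposed below is satisfied in this case.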
We assume that problem $(\ref{A})$ satisfies the following growth conditions: \begin{equation}\label{7546999882} \mathcal{A}(x,\eta)\eta\geq G(\vert \eta\vert),\ \ \text{for all}\ x\in \Omega\ \text{and}\ \eta\in \mathbb{R}^N, \end{equation} \begin{equation}\label{75469998822} \mathcal{A}(x,\eta)\leq c_0 g(\vert \eta\vert)+c_1,\ \ \text{for all}\ x\in \Omega\ \text{and}\ \eta\in \mathbb{R}^N, \end{equation} \begin{equation}\label{754699988222} \mathcal{B}(x,t)\leq c_2(1+h(\vert t\vert)),\ \ \text{for all}\ x\in \Omega\ \text{and}\ t\in \mathbb{R}, \end{equation} \begin{equation}\label{7546999882222} \psi(x,t)\geq 0,\ \ \text{for all}\ x\in \partial\Omega\ \text{and}\ t\in \mathbb{R}_+, \end{equation} where $c_0,c_1,c_2$ are positive constants and $h$ is defined in assumption $(H)$.\\ We say that $u\in W^{1,G}(\Omega)$ is a weak solution of problem $(\ref{A})$ if \begin{equation}\label{4546787856} \int_{\Omega}\mathcal{A}(x,\nabla u)\nabla v\, dx+\int_{\partial\Omega}\psi(x,u)v\,d\gamma=\int_{\Omega}\mathcal{B}(x,u)v\,dx,\ \ \text{for all}\ v\in W^{1,G}(\Omega). \end{equation} Let us state the following useful result. \begin{prop}\label{prop1} Suppose that $(G)$, $(H)$ and $(\ref{7546999882})-(\ref{7546999882222})$ are satisfied. Then, if $u \in W^{1,G}(\Omega)$ is a non-trivial weak solution of problem $(\ref{A})$, $u$ belongs to $L^{s}(\Omega)$ for every $1\leq s<\infty$. \end{prop} \begin{proof}[Proof] Let $u \in W^{1,G}(\Omega)$ be a non-trivial weak solution of problem $(\ref{A})$, $u^{+}:=\max\lbrace u,0\rbrace\in W^{1,G}(\Omega)$ and $u^{-}:=\max\lbrace -u,0\rbrace\in W^{1,G}(\Omega)$. 
Since $u=u^+ - u^-$, without loss of generality we may assume that $u\geq 0$.\\ We set, recursively, $$p_{n+1}=\widehat{g} + \frac{\widehat{g}}{g^-}\Big(\frac{p_n -h^+}{h^+}\Big),\ \ \text{for all}\ n\geq 0,$$ with $$ p_0=\widehat{g} = g_*^-=\frac{Ng^-}{N-g^-}\ \ \ (\text{recall that}\ g^-\leq g^+<N).$$ It is clear that the sequence $\lbrace p_n\rbrace_{n\geq 0}\subseteq \mathbb{R}_+$ is increasing. Put $\displaystyle{\theta_n=\frac{p_n -h^+}{h^+}>0}$; then $\lbrace\theta_n\rbrace_{n\geq 0}$ is an increasing sequence.\\ Let $$u_k=\min \lbrace u,k\rbrace\in W^{1,G}(\Omega)\cap L^{\infty}(\Omega),\ \text{for all}\ k\geq 1\ \ \ (\text{since}\ u_k\leq k\ \text{for all}\ k\geq 1).$$ In $(\ref{4546787856})$, we act with $u_k^{\theta_n +1}\in W^{1,G}(\Omega)$ to obtain \begin{align*} \int_{\Omega} \mathcal{A}(x,\nabla u).\nabla u_k^{\theta_n +1}dx &+\int_{\partial\Omega}\psi(x,u) u_k^{\theta_n +1} d\gamma =\int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx. \end{align*} It follows, by conditions $(\ref{7546999882})$, $(\ref{7546999882222})$ and Lemma \ref{lem1}, that \begin{align}\label{3} (\theta_n +1)\int_{\lbrace\vert\nabla u_k\vert\leq 1\rbrace}u_k^{\theta_n}G(\vert\nabla u_k\vert) dx &+ (\theta_n +1)G(1)\int_{\lbrace\vert\nabla u_k\vert> 1\rbrace}u_k^{\theta_n}\vert\nabla u_k\vert^{g^-} dx\nonumber\\ & \leq (\theta_n +1)\int_{\Omega}u_k^{\theta_n}G(\vert\nabla u_k\vert) dx\nonumber\\ & \leq (\theta_n +1)\int_{\Omega}u_k^{\theta_n}\left[ \mathcal{A}(x,\nabla u).\nabla u_k\right] dx\nonumber \\ & \leq \int_{\Omega} \mathcal{A}(x,\nabla u).\nabla u_k^{\theta_n +1}dx\nonumber\\ & \leq \int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx. 
\end{align} Therefore \begin{align}\label{333} (\theta_n +1)G(1)\int_{\lbrace\vert\nabla u_k\vert> 1\rbrace}u_k^{\theta_n}\vert\nabla u_k\vert^{g^-} dx& \leq \int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx, \end{align} which gives \begin{align}\label{334} (\theta_n +1)G(1)\int_{\Omega}u_k^{\theta_n}\vert\nabla u_k\vert^{g^-} dx & = (\theta_n +1)G(1)\left[ \int_{\lbrace\vert\nabla u_k\vert> 1\rbrace}u_k^{\theta_n}\vert\nabla u_k\vert^{g^-} dx+\int_{\lbrace\vert\nabla u_k\vert\leq 1\rbrace}u_k^{\theta_n}\vert\nabla u_k\vert^{g^-} dx\right]\nonumber\\ & \leq \int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx +(\theta_n +1)G(1)\int_{\lbrace\vert\nabla u_k\vert\leq 1\rbrace}u_k^{\theta_n}\vert\nabla u_k\vert^{g^-} dx\nonumber\\ & \leq \int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx +(\theta_n +1)G(1)\int_{\Omega}u_k^{\theta_n} dx. \end{align} Thus \begin{align}\label{33455} \int_{\Omega}u_k^{\theta_n}\vert\nabla u_k\vert^{g^-} dx & \leq \frac{1}{(\theta_n +1)G(1)} \int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx\nonumber \\ & +\int_{\Omega}u_k^{\theta_n} dx. \end{align} Since $\theta_n\leq p_n$, by the continuous embedding $L^{p_n}(\Omega)\hookrightarrow L^{\theta_n}(\Omega)$ we have \begin{equation}\label{33505} \int_{\Omega}u_k^{\theta_n} dx\leq \vert \Omega\vert^{1-\frac{\theta_n}{p_n}}\Vert u_k\Vert^{\theta_n}_{p_n},\ \ \text{for all}\ k\geq 1. \end{equation} Combining $(\ref{33455})$ and $(\ref{33505})$, we infer that \begin{equation}\label{3366} \int_{\Omega}u_k^{\theta_n}\vert\nabla u_k\vert^{g^-} dx\leq \frac{1}{(\theta_n +1)G(1)}\int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx+\vert \Omega\vert^{1-\frac{\theta_n}{p_n}}\Vert u_k\Vert^{\theta_n}_{p_n}. 
\end{equation} Let us observe that \begin{align*} \nabla u_k^{\frac{\theta_n +g^-}{g^-}} & =\nabla u_k^{(\frac{\theta_n}{g^-}+1)}= \left(\frac{\theta_n}{g^-}+1\right)u_k^{\frac{\theta_n}{g^-}}\nabla u_k \end{align*} and \begin{align*} \left| \nabla u_k^{\frac{\theta_n +g^-}{g^-}}\right| ^{g^-} = \left( \frac{\theta_n}{g^-}+1\right) ^{g^-}u_k^{\theta_n}\left|\nabla u_k\right|^{g^-}. \end{align*} Integrating over $\Omega$, we get \begin{align}\label{4} \int_{\Omega} \left| \nabla u_k^{\frac{\theta_n +g^-}{g^-}}\right|^{g^-}dx & = \left( \frac{\theta_n}{g^-}+1\right)^{g^-}\int_{\Omega}u_k^{\theta_n}\left|\nabla u_k\right|^{g^-}dx. \end{align} Putting together $(\ref{3366})$ and $(\ref{4})$, we conclude that \begin{align}\label{5} \int_{\Omega} \left| \nabla u_k^{\frac{\theta_n +g^-}{g^-}}\right|^{g^-}dx & \leq \left( \frac{\theta_n}{g^-}+1\right)^{g^-}\left[ \frac{1}{(\theta_n +1)G(1)}\int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx+\vert \Omega\vert^{1-\frac{\theta_n}{p_n}}\Vert u_k\Vert^{\theta_n}_{p_n}\right]\nonumber\\ & \leq \left( \theta_n+1\right)^{g^-}\left[ \frac{1}{(\theta_n +1)G(1)}\int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx+\vert \Omega\vert^{1-\frac{\theta_n}{p_n}}\Vert u_k\Vert^{\theta_n}_{p_n}\right]\nonumber\\ &\leq C_0\left( \int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx+\left( 1+\Vert u_k\Vert^{p_n}_{p_n}\right) \right), \end{align} where $\displaystyle{C_0=\left( \theta_n+1\right)^{g^-}\left( \frac{1}{(\theta_n +1)G(1)}+\vert \Omega\vert^{1-\frac{\theta_n}{p_n}}\right) > 0}.$\\ On the other hand, using the condition $(\ref{754699988222})$ and Remark \ref{rem78}, we see that \begin{align}\label{6}
\int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx & \leq c_2\int_{\Omega}\left( 1+ h(\vert u\vert)\right)u_{k}^{\theta_n +1}dx\nonumber\\ & \leq c_2\int_{\Omega}\left( 1+ h(1)\max\lbrace\vert u\vert^{h^--1},\vert u\vert^{h^+-1}\rbrace \right)u_k^{\theta_n +1}dx\nonumber\\ & \leq c_2\left( \Vert u_k\Vert_{\theta_n+1}^{{\theta_n+1}}+ h(1)\int_{\Omega}\max\lbrace\vert u\vert^{h^--1},\vert u\vert^{h^+-1}\rbrace u_k^{\theta_n +1}dx\right) \nonumber\\ & \leq c_2\left[ \Vert u_k\Vert_{\theta_n+1}^{{\theta_n+1}}+ h(1)\left( \int_{\lbrace u\leq 1\rbrace} u^{h^--1} u_k^{\theta_n +1}dx+\int_{\lbrace u>1\rbrace} u^{h^+-1} u_k^{\theta_n +1}dx\right)\right] \nonumber\\ & \leq c_2\left[ \left( 1+h(1)\right) \Vert u_k\Vert_{\theta_n+1}^{{\theta_n+1}}+ h(1)\int_{\Omega} u^{h^+-1} u_k^{\theta_n +1}dx\right] \nonumber\\ & \leq c_2\left[ \left( 1+h(1)\right) \Vert u_k\Vert_{\theta_n+1}^{{\theta_n+1}}+ h(1)\Vert u\Vert_{h^+}^{h^+-1}\Vert u_k\Vert^{\theta_n +1}_{(\theta_n +1)h^+}\right] \ \ (\text{H\"older with}\ h^+\ \text{and}\ (h^+)^{'}=\frac{h^+}{h^+-1})\nonumber\\ & = c_2\left[ \left( 1+h(1)\right) \Vert u_k\Vert_{\theta_n+1}^{{\theta_n+1}}+ h(1)\Vert u\Vert_{h^+}^{h^+-1}\Vert u_k\Vert^{\theta_n +1}_{p_n} \right] \nonumber\\ & \leq c_2\left[ \left( 1+h(1)\right)\vert \Omega\vert^{1-\frac{\theta_n+1}{p_n}} \Vert u_k\Vert^{\theta_n+1}_{{p_n}}+ h(1)\Vert u\Vert_{h^+}^{h^+-1}\Vert u_k\Vert^{\theta_n +1}_{p_n}\right]\ \ (\text{since}\ L^{p_n}(\Omega)\hookrightarrow L^{\theta_n+1}(\Omega)) \nonumber\\ & \leq c_2\left[ \left( 1+h(1)\right)\vert \Omega\vert^{1-\frac{1}{h^+}}+h(1)\Vert u\Vert_{h^+}^{h^+-1}\right] \Vert u_k\Vert_{p_n}^{\theta_n+1}\ \ (\text{since}\ \frac{\theta_n+1}{p_n}=\frac{1}{h^+})\nonumber\\ &
\leq C_1\left(1+ \Vert u_k\Vert_{p_n}^{p_n}\right), \end{align} where $\displaystyle{C_1=c_2\left[ \left( 1+h(1)\right)\vert \Omega\vert^{1-\frac{1}{h^+}}+h(1)\Vert u\Vert_{h^+}^{h^+-1}\right] >0}$. In $(\ref{6})$, we used the fact that $\theta_n+1<(\theta_n+1)h^+=p_n$.\\ Using $(\ref{5})$ and $(\ref{6})$, we find \begin{align}\label{ddhlel} \int_{\Omega}\left| \nabla u_k^{\frac{\theta_n +g^-}{g^-}}\right| ^{g^-}dx+\int_{\Omega}\left| u_k^{\frac{\theta_n +g^-}{g^-}}\right| ^{g^-}dx &\leq C_2 \left(1+ \Vert u_k\Vert^{p_n}_{p_n} \right)+\int_{\Omega}\left| u_k^{\frac{\theta_n +g^-}{g^-}}\right| ^{g^-}dx\nonumber\\ & \leq C_2 \left(1+ \Vert u_k\Vert^{p_n}_{p_n} \right)+\vert \Omega\vert^{1-\frac{\theta_n+g^-}{p_n}}\Vert u_k\Vert^{\theta_n +g^-}_{p_n}\nonumber\\ & \leq \left( C_2+\vert \Omega\vert^{1-\frac{\theta_n+g^-}{p_n}}\right) \left(1+ \Vert u_k\Vert^{p_n}_{p_n} \right)\nonumber\\ & = C_3 \left(1+ \Vert u_k\Vert^{p_n}_{p_n} \right), \end{align} where $C_2=C_0(C_1+1)$ and $\displaystyle{C_3=C_2+\vert \Omega\vert^{1-\frac{\theta_n+g^-}{p_n}}}.$\\ The inequality $(\ref{ddhlel})$ gives \begin{equation}\label{7} \left\lVert u_k^{\frac{\theta_n +g^-}{g^-}}\right\rVert^{g^-}_{W^{1,g^-}(\Omega)} \leq C_3 \left(1+ \Vert u_k\Vert^{p_n}_{p_n} \right). \end{equation} Recall that $p_{n+1}=\widehat{g}+\frac{\widehat{g}}{g^-}\theta_n$ and so \begin{equation}\label{4787545455456} \frac{\theta_n+g^-}{g^-}=\frac{p_{n+1}}{\widehat{g}}.
\end{equation} Since $g^-<\widehat{g}=\frac{Ng^-}{N-g^-}=g^-_*$, the embedding $W^{1,g^-}(\Omega)\hookrightarrow L^{\widehat{g}}(\Omega)$ is continuous.\\ Hence, there is $C_4>0$ such that \begin{equation}\label{dddhlel} \left\lVert u_k^{\frac{\theta_n +g^-}{g^-}}\right\rVert^{g^-}_{\widehat{g}}\leq C_4\left\lVert u_k^{\frac{\theta_n +g^-}{g^-}}\right\rVert^{g^-}_{W^{1,g^-}(\Omega)}. \end{equation} Combining $(\ref{7})$, $(\ref{4787545455456})$ and $(\ref{dddhlel})$, we obtain \begin{equation}\label{8} \Vert u_k\Vert^{\frac{p_{n+1}}{\widehat{g}}g^-}_{p_{n+1}}\leq C_5 \left(1+ \Vert u_k\Vert^{p_n}_{p_n} \right), \end{equation} where $C_5=C_4C_3.$ Next, letting $k\rightarrow+\infty$ in $(\ref{8})$ and applying the monotone convergence theorem, we find that \begin{equation}\label{9} \Vert u\Vert^{\frac{p_{n+1}}{\widehat{g}}g^-}_{p_{n+1}}\leq C_5 \left(1+ \Vert u\Vert^{p_n}_{p_n} \right). \end{equation} Since $p_0=\widehat{g}$ and the embeddings $W^{1,G}(\Omega)\hookrightarrow W^{1,g^-}(\Omega)\hookrightarrow L^{\widehat{g}}(\Omega)$ are continuous, from $(\ref{9})$ we get \begin{equation}\label{10} u\in L^{p_n}(\Omega),\ \ \text{for all}\ n\geq 0. \end{equation} Note that $p_n\rightarrow +\infty$ as $n\rightarrow +\infty.$ Indeed, suppose that the sequence $\lbrace p_n\rbrace_{n\geq 0}\subseteq [\widehat{g}, +\infty)$ is bounded.
Then, being increasing and bounded, $p_n\longrightarrow \widehat{p}\geq \widehat{g}$ as $n\rightarrow +\infty.$ By definition we have $$p_{n+1}=\widehat{g}+\frac{\widehat{g}}{g^-}\left(\frac{p_n-h^+}{h^+}\right)\ \ \text{for all}\ n\geq 0,$$ with $p_0=\widehat{g}$, so $$\widehat{p}=\widehat{g}+\frac{\widehat{g}}{g^-}\left(\frac{\widehat{p}-h^+}{h^+}\right),$$ thus $$0\leq\widehat{p}\left( \frac{\widehat{g}}{g^-h^+}-1\right)=\widehat{g}\left( \frac{1}{g^-}-1\right)<0,$$ which gives us a contradiction, since $g^-h^+\leq \widehat{g}=g^-_*$ (see assumption $(H)$) and $g^->1$. Recall that for any measurable function $u:\Omega\longrightarrow\mathbb{R},$ the set $$S_u=\left\lbrace p\geq 1: \Vert u\Vert_p<+\infty\right\rbrace $$ is an interval. Hence, $S_u=[1,+\infty)$ (see $(\ref{10})$) and \begin{equation}\label{11} u\in L^s(\Omega),\ \ \text{for all}\ s\geq 1. \end{equation} This ends the proof. \end{proof} In the following, we prove that, if $u\in W^{1,G}(\Omega)$ is a weak solution of problem $(\ref{A})$ such that $u\in L^s(\Omega)$ for all $1\leq s<\infty$, then $u$ is a bounded function. \begin{prop}\label{prop2} Assume that $(G)$, $(H)$ and $(\ref{7546999882})-(\ref{7546999882222})$ hold. Let $u\in W^{1,G}(\Omega)$ be a non-trivial weak solution of problem $(\ref{A})$ such that $u\in L^{s}(\Omega)$ for all $1\leq s<\infty$. Then $u\in L^{\infty}(\Omega)$ and $\Vert u\Vert_{\infty}\leq M=M(c_2, h(1), g^-,\vert \Omega\vert, \Vert u \Vert_{h^+})$. \end{prop} \begin{proof}[Proof] Let $u \in W^{1,G}(\Omega)$ be a non-trivial weak solution of problem $(\ref{A})$, $u^{+}:=\max\lbrace u,0\rbrace\in W^{1,G}(\Omega)$ and $u^{-}:=\max\lbrace -u,0\rbrace\in W^{1,G}(\Omega)$.
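As a side remark on the proof of Proposition \ref{prop1} above (a sketch only; the notation $r$ and $c$ is introduced just for this remark), the divergence $p_n\rightarrow +\infty$ can also be read off from the closed form of the affine recursion. Writing $r=\frac{\widehat{g}}{g^-h^+}\geq 1$ and $c=\widehat{g}\left(1-\frac{1}{g^-}\right)>0$ (using $g^->1$), the recursion reads $p_{n+1}=rp_n+c$, whence
$$p_n=\begin{cases} \displaystyle r^n p_0+c\,\frac{r^n-1}{r-1}, & \text{if}\ r>1,\\[2mm] p_0+nc, & \text{if}\ r=1, \end{cases}$$
and in either case $p_n\rightarrow+\infty$ as $n\rightarrow+\infty$.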
Since $u=u^+ - u^-$, we may assume without loss of generality that $u\geq 0$.\\ Let $\sigma_0=\widehat{g}=g_*^-=\frac{Ng^-}{N-g^-}$ and define recursively $$\sigma_{n+1}=\left(\frac{\sigma_n }{h^+}-1+g^-\right)\frac{\widehat{g}}{g^-},\ \ \text{for all}\ n\geq 0.$$ The sequence $\displaystyle{\lbrace \sigma_n\rbrace_{n\geq 0}\subseteq [\widehat{g}, +\infty)}$ is increasing and $\sigma_n\longrightarrow+\infty$ as $n\rightarrow+\infty$. Arguing as in the proof of Proposition \ref{prop1}, with $\theta_n=\frac{\sigma_n }{h^+}-1$ and $u_k^{\theta_n+1}\in W^{1,G}(\Omega)\cap L^\infty(\Omega)$ as a test function in $(\ref{4546787856})$, we find the following estimate \begin{align}\label{588} \int_{\Omega} \left| \nabla u_k^{\frac{\theta_n +g^-}{g^-}}\right|^{g^-}dx & \leq \left( \theta_n+1\right)^{g^-}\left[ \frac{1}{(\theta_n +1)G(1)}\int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx+\int_{\Omega} u_k^{\theta_n}dx\right]. \end{align} Using the assumption $(\ref{754699988222})$, $(\ref{11})$, Remark \ref{rem78} and the H\"older inequality (for $h^+$ and $(h^+)^{'}=\frac{h^+}{h^+-1}$), we deduce that \begin{align}\label{7888} \int_{\Omega}\mathcal{B}(x,u)u_k^{\theta_n +1}dx & =\int_{\Omega}\mathcal{B}(x,u)u_k^{\frac{\sigma_n }{h^+}}dx\nonumber\\ &\leq c_2\int_{\Omega}\left( 1+h(1)\max\left\lbrace u^{h^--1},u^{h^+-1}\right\rbrace \right)u_k^{\frac{\sigma_n }{h^+}}dx\nonumber\\ &\leq c_2\int_{\Omega}\left( (1+h(1))+h(1)u^{h^+-1}\right)u_k^{\frac{\sigma_n }{h^+}}dx\ \ (\text{since}\ h^-\leq h^+)\nonumber\\ &\leq c_2\left[(1+h(1))\int_{\Omega} u_k^{\frac{\sigma_n }{h^+}}dx+ h(1)\int_{\Omega}u^{h^+-1}u_k^{\frac{\sigma_n }{h^+}}dx\right]\nonumber \\ &\leq c_2\left[(1+h(1))\Vert
u_k\Vert^{\frac{\sigma_n }{h^+}}_{\frac{\sigma_n }{h^+}}+ h(1)\left( \int_{\Omega}u^{h^+}dx\right)^{\frac{h^+-1}{h^+}}\left( \int_{\Omega}u_k^{\sigma_n}dx\right)^{\frac{1}{h^+}}\right]\nonumber\\ &\leq c_2\left((1+h(1))\vert \Omega\vert^{1-\frac{1}{h^+}}\Vert u_k\Vert^{\frac{\sigma_n }{h^+}}_{\sigma_n}+ h(1)\Vert u\Vert_{h^+}^{h^+-1}\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}}\right)\ \ (\text{since}\ L^{\sigma_n}(\Omega)\hookrightarrow L^{\frac{\sigma_n }{h^+}}(\Omega)\ )\nonumber \\ &\leq c_2\left((1+h(1))\vert \Omega\vert^{1-\frac{1}{h^+}}+ h(1)\Vert u\Vert_{h^+}^{h^+-1}\right)\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}}\nonumber \\ &\leq C_6\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}} \end{align} for all $n\in\mathbb{N}$, where $\displaystyle{C_6=c_2\left((1+h(1))\vert \Omega\vert^{1-\frac{1}{h^+}}+ h(1)\Vert u\Vert_{h^+}^{h^+-1}\right)}.$\\ Using the fact that $L^{\sigma_n}(\Omega)\hookrightarrow L^{\frac{\sigma_n }{h^+}-1}(\Omega)$, we obtain \begin{align}\label{888} \int_{\Omega} u_k^{\theta_n}dx &= \int_{\Omega} u_k^{\frac{\sigma_n }{h^+}-1}dx\nonumber\\ & \leq \vert\Omega\vert^{1-\frac{1}{h^+}+\frac{1}{\sigma_n}}\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}-1}\nonumber\\ &=C_7\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}-1},\ \ \text{for all}\ n\in\mathbb{N}, \end{align} where $\displaystyle{C_7=C_7(n)=\vert\Omega\vert^{1-\frac{1}{h^+}+\frac{1}{\sigma_n}}}$.
\\ In light of the embedding theorem ($L^{h^+}(\Omega)\hookrightarrow L^{\frac{h^+(g^--1)}{h^+-1}}(\Omega)$) and the H\"older inequality, we infer that \begin{align}\label{455169965} \int_{\Omega} \left| u_k^{\frac{\theta_n +g^-}{g^-}}\right|^{g^-}dx &=\int_{\Omega} u_k^{\theta_n +1} u_k^{g^--1}dx\nonumber\\ &\leq \int_{\Omega} u_k^{\theta_n +1} u^{g^--1}dx\ \ (\text{since}\ u_k\leq u,\ \text{for all}\ k\geq 1)\nonumber\\ & \leq \left( \int_{\Omega} u^{\frac{h^+(g^--1)}{h^+-1}} dx\right)^{\frac{h^+-1}{h^+}}\left( \int_{\Omega}u_k^{(\theta_n+1)h^+}dx\right)^{\frac{1}{h^+}}\nonumber\\ & \leq \vert \Omega\vert^{\frac{h^+-g^-}{h^+}}\Vert u\Vert_{h^+}^{g^--1} \Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n}{h^+}}\nonumber\\ & =C_8\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n}{h^+}},\ \ \text{for all}\ n\in\mathbb{N}, \end{align} where $\displaystyle{C_8=\vert \Omega\vert^{\frac{h^+-g^-}{h^+}}\Vert u\Vert_{h^+}^{g^--1}}$.\\ Putting together $(\ref{588})$, $(\ref{7888})$, $(\ref{888})$ and $(\ref{455169965})$, we find that \begin{align}\label{988} \int_{\Omega} \left| \nabla u_k^{\frac{\theta_n +g^-}{g^-}}\right|^{g^-}dx+\int_{\Omega} \left| u_k^{\frac{\theta_n +g^-}{g^-}}\right|^{g^-}dx & \leq \left( \theta_n+1\right)^{g^-}\left[ \left( \frac{C_6}{(\theta_n +1)G(1)}+C_8\right) \Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}}+C_7\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}-1}\right]\nonumber\\ & \leq \left( \theta_n+1\right)^{g^-}\left[ \left( C_6+C_8\right) \Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}}+C_7\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}-1}\right],\ \ (\text{since}\ (\theta_n +1)G(1)\geq 1) \end{align} for all $n\in\mathbb{N}$.
Since $g^-<\widehat{g}=\frac{Ng^-}{N-g^-}=g^-_*$, the embeddings $W^{1,G}(\Omega)\hookrightarrow W^{1,g^-}(\Omega)\hookrightarrow L^{\widehat{g}}(\Omega)$ are continuous. Moreover, there is $C_9>0$ such that \begin{equation}\label{1088} \left\lVert u_k^{\frac{\theta_n +g^-}{g^-}}\right\rVert_{\widehat{g}}^{g^-}\leq C_9\left\lVert u_k^{\frac{\theta_n +g^-}{g^-}}\right\rVert_{W^{1,g^-}(\Omega)}^{g^-},\ \ \text{for all}\ n\in\mathbb{N}. \end{equation} From $(\ref{988})$ and $(\ref{1088})$, we obtain \begin{align}\label{7777775} \left\lVert u_k^{\frac{\theta_n +g^-}{g^-}}\right\rVert_{\widehat{g}}^{g^-}\leq C_9\left( \theta_n+1\right)^{g^-}\left[ \left( C_6+C_8\right) \Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}}+C_7\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}-1}\right] \end{align} for all $n\in\mathbb{N}$. From the definition of the sequence $\lbrace\sigma_n\rbrace_{n\in\mathbb{N}}$, we have $\displaystyle{\frac{\sigma_{n+1}}{\widehat{g}}=\frac{\theta_{n}+g^-}{g^-}}$.\\ It follows, by $(\ref{7777775})$, that \begin{equation}\label{118888} \Vert u_k\Vert^{\sigma_{n+1}\frac{g^-}{\widehat{g}}}_{\sigma_{n+1}}\leq \left( \theta_n+1\right)^{g^-}C_9\left[ \left( C_6+C_8\right) \Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}}+C_7\Vert u_k\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}-1}\right] \end{equation} for all $n\in\mathbb{N}$.\\ Letting $k\longrightarrow+\infty$ in $(\ref{118888})$ and using the monotone convergence theorem, we get \begin{equation}\label{1188} \Vert u\Vert^{\sigma_{n+1}\frac{g^-}{\widehat{g}}}_{\sigma_{n+1}}\leq \left( \theta_n+1\right)^{g^-}C_9\left[ \left( C_6+C_8\right) \Vert u\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}}+C_7\Vert
u\Vert_{\sigma_n}^{\frac{\sigma_n }{h^+}-1}\right] \end{equation} for all $n\in\mathbb{N}$. We distinguish two cases.\\ \textbf{Case 1:} If the set $\left\lbrace n\in\mathbb{N}:\ \Vert u\Vert_{\sigma_n}\leq 1\right\rbrace $ is unbounded, then, without loss of generality, we may assume that \begin{equation}\label{746654548} \Vert u\Vert_{\sigma_n}\leq 1,\ \ \text{for all}\ n\in \mathbb{N}. \end{equation} Hence, $$\Vert u\Vert_{\infty} \leq 1,$$ since $ \sigma_n\longrightarrow +\infty$ as $n\longrightarrow +\infty$ and $u\in L^s(\Omega)$ for all $s\geq 1$. So, we are done with $M=1$.\\ \textbf{Case 2:} If the set $\left\lbrace n\in\mathbb{N}:\ \Vert u\Vert_{\sigma_n}\leq 1\right\rbrace $ is bounded, then, without loss of generality, we can suppose that \begin{equation}\label{74655654548} \Vert u\Vert_{\sigma_n}> 1,\ \ \text{for all}\ n\in \mathbb{N}. \end{equation} From $(\ref{1188})$ and $(\ref{74655654548})$, we find that \begin{equation}\label{555559} \Vert u\Vert^{\sigma_{n+1}\frac{g^-}{\widehat{g}}}_{\sigma_{n+1}}\leq C_{10}\Vert u\Vert_{\sigma_n}^{\frac{\sigma_n}{h^+}},\ \ \text{for all}\ n\in\mathbb{N}, \end{equation} where $C_{10}(n)=\left( \theta_n+1\right)^{g^-} C_9 \left( C_6+C_7 +C_8\right)$.\\ We remark that \begin{align}\label{8945666} C_9\left( C_6+C_7+C_8\right) & =C_9\left[ c_2\left((1+h(1))\vert \Omega\vert^{1-\frac{1}{h^+}}+ h(1) \Vert u\Vert_{h^+}^{h^+-1}\right)+\vert \Omega\vert^{\frac{h^+-g^-}{h^+}}\Vert u\Vert_{h^+}^{g^--1}+\vert\Omega\vert^{1-\frac{1}{h^+}+\frac{1}{\sigma_n}}\right]\nonumber\\ & \leq C_9\left[ c_2\left((1+h(1))\vert \Omega\vert^{1-\frac{1}{h^+}}+ h(1)\Vert u\Vert_{h^+}^{h^+-1}\right)+\vert \Omega\vert^{\frac{h^+-g^-}{h^+}}\Vert u\Vert_{h^+}^{g^--1}+\vert\Omega\vert+1\right]\nonumber\\ & =C_{11},\ \ \text{for
all}\ n\in \mathbb{N}. \end{align} Hence $C_{11}> 0$ is independent of $n$. Moreover, we have that \begin{equation}\label{9564872} (\theta_n+1)^{g^-}=\left(\frac{\sigma_n}{h^+}\right)^{g^-}\leq (\sigma_n)^{g^-}\leq (\sigma_{n+1})^{g^-},\ \ \text{for all}\ n\in\mathbb{N}. \end{equation} From $(\ref{555559})$, $(\ref{8945666})$ and $(\ref{9564872})$, we obtain \begin{equation}\label{14156699} \Vert u\Vert^{\sigma_{n+1}\frac{g^-}{\widehat{g}}}_{\sigma_{n+1}}\leq (\sigma_{n+1})^{g^-} C_{11}\Vert u\Vert_{\sigma_n}^{\frac{\sigma_n}{h^+}},\ \ \text{for all}\ n\in\mathbb{N}. \end{equation} Therefore, from \cite[Theorem 6.2.6, p. 737]{npl}, we find that \begin{equation} \label{688888} \Vert u\Vert_{\sigma_{n+1}}\leq M,\ \ \text{for all}\ n\in \mathbb{N}, \end{equation} for some $M\left( c_2, h(1), g^-, \vert \Omega\vert, \Vert u \Vert_{h^+}\right)\geq 0$.\\ On the other hand, by the hypotheses of the proposition, we have that \begin{equation} \label{688} u\in L^s(\Omega),\ \ \text{for all}\ 1\leq s<\infty. \end{equation} Exploiting $(\ref{688888})$, $(\ref{688})$ and the fact that $\sigma_n\longrightarrow +\infty$ as $n\longrightarrow +\infty$, we deduce that $$\Vert u\Vert_\infty\leq M.$$ This ends the proof. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thmC11}}] Let $$ \begin{array}{ll} \mathcal{A}(x,\eta)=a(\vert \eta \vert)\eta, & \text{for all}\ x\in \Omega\ \text{and}\ \eta \in \mathbb{R}^N\\ \ & \ \\ \mathcal{B}(x,t)=f(x,t),& \text{for all}\ x\in \Omega\ \text{and}\ t\in \mathbb{R}\\ \ & \ \\ \psi(x,t)=b(x)\vert t\vert^{p-2}t,& \text{for all}\ x\in \partial\Omega\ \text{and}\ t\in \mathbb{R} \end{array} $$ in problem $(\ref{A})$. Then $\mathcal{A},\mathcal{B}$ and $\psi$ satisfy the growth conditions $(\ref{7546999882})-(\ref{7546999882222})$ and problem $(\ref{A})$ becomes $(\ref{P})$.
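As a brief aside on the proof of Proposition \ref{prop2} above, let us sketch why an iteration inequality of the type $(\ref{14156699})$ yields a bound independent of $n$; this is only a heuristic for the non-borderline case $g^-h^+<\widehat{g}$, while the cited result \cite[Theorem 6.2.6, p. 737]{npl} covers the general case. In Case 2 we have $\Vert u\Vert_{\sigma_n}>1$, so taking logarithms in $(\ref{14156699})$ and dividing by $\sigma_{n+1}g^-/\widehat{g}$ gives
$$\log \Vert u\Vert_{\sigma_{n+1}}\ \leq\ \frac{\widehat{g}}{\sigma_{n+1}}\log \sigma_{n+1}+\frac{\widehat{g}}{g^{-}\sigma_{n+1}}\log C_{11}+\frac{\widehat{g}\,\sigma_{n}}{g^{-}h^{+}\sigma_{n+1}}\log \Vert u\Vert_{\sigma_{n}}.$$
A direct computation from the recursion defining $\sigma_{n+1}$ shows that the coefficient of $\log \Vert u\Vert_{\sigma_n}$ equals $\frac{\sigma_n}{\sigma_n+h^+(g^--1)}\leq 1$, while, when $g^-h^+<\widehat{g}$, the sequence $\sigma_n$ grows geometrically, so the first two terms are summable in $n$; iterating therefore bounds $\log \Vert u\Vert_{\sigma_n}$, and hence $\Vert u\Vert_{\sigma_n}$, uniformly in $n$.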
By Propositions \ref{prop1} and \ref{prop2}, we conclude that every weak solution $u\in W^{1,G}(\Omega)$ of problem $(\ref{P})$ belongs to $L^{\infty}(\Omega)$ and $\Vert u\Vert_{\infty}\leq M=M(c_2=\Vert \widehat{a}\Vert_{\infty}, h(1), g^-,\vert \Omega\vert, \Vert u \Vert_{h^+})$. This ends the proof. \end{proof} \section{$W^{1,G}(\Omega)$ versus $C^1(\overline{\Omega})$ local minimizers} In this section, using the regularity theory of Lieberman \cite{2}, we extend the result of Brezis and Nirenberg \cite{29} to problem $(\ref{P})$. \begin{prop}\label{prop3} Let $u_0\in W^{1,G}(\Omega)$ be a local $C^1(\overline{\Omega})$-minimizer of $J$ (see Definition \ref{def333}). Then $u_0$ is a weak solution of problem $(\ref{P})$ and $u_0\in C^{1,\alpha}(\overline{\Omega})$, for some $\alpha\in (0,1)$. \end{prop} \begin{proof}[Proof] By hypothesis, $u_0$ is a local $C^1(\overline{\Omega})$-minimizer of $J$, so for every $v\in C^1(\overline{\Omega})$ and $t>0$ small enough, we have $J(u_0)\leq J(u_0+tv)$. Hence, \begin{equation}\label{20} 0\leq \langle J^{'}(u_0),v\rangle\ \ \text{for all}\ v\in C^1(\overline{\Omega}). \end{equation} Replacing $v$ by $-v$ in $(\ref{20})$ yields the reverse inequality; since $C^1(\overline{\Omega})$ is dense in $W^{1,G}(\Omega)$, we infer that $J^{'}(u_0)=0$. Namely, \begin{align}\label{21} \int_{\Omega} a(\vert\nabla u_0\vert)\nabla u_0.\nabla v dx +\int_{\partial\Omega}b(x)\vert u_0\vert^{p-2}u_0 v d\gamma & =\int_{\Omega}f(x,u_0)vdx,\ \ \text{for all}\ v\in W^{1,G}(\Omega). \end{align} By the nonlinear Green's identity, we get \begin{align}\label{22} \int_{\Omega} a(\vert\nabla u_0\vert)\nabla u_0.\nabla v dx=-\int_{\Omega} \text{div}(a(\vert\nabla u_0\vert)\nabla u_0). v dx+ \int_{\partial\Omega}a(\vert \nabla u_0\vert)\frac{\partial u_0}{\partial \nu}v d\gamma,\ \ \text{for all}\ v\in W^{1,G}(\Omega).
\end{align} It follows that \begin{align}\label{23} \int_{\Omega} a(\vert\nabla u_0\vert)\nabla u_0.\nabla v dx=-\int_{\Omega} \text{div}(a(\vert\nabla u_0\vert)\nabla u_0). v dx,\ \ \text{for all}\ v\in W^{1,G}_0(\Omega). \end{align} Hence, by $(\ref{21})$, \begin{align*} -\int_{\Omega} \text{div}(a(\vert\nabla u_0\vert)\nabla u_0). v dx =\int_{\Omega}f(x,u_0)vdx,\ \ \text{for all}\ v\in W^{1,G}_0(\Omega), \end{align*} which gives \begin{align}\label{24} & -\text{div}(a(\vert\nabla u_0(x)\vert)\nabla u_0(x)) =f(x,u_0(x)),\ \ \text{for almost every}\ x\in \Omega. \end{align} From $(\ref{21})$, $(\ref{22})$ and $(\ref{24})$, we obtain \begin{equation}\label{25} \left\langle a(\vert \nabla u_0\vert)\frac{\partial u_0}{\partial \nu}+b(x)\vert u_0\vert^{p-2}u_0 ,v\right\rangle_{\partial\Omega}=0\ \ \text{for all}\ v\in W^{1,G}(\Omega). \end{equation} It follows that $$a(\vert \nabla u_0\vert)\frac{\partial u_0}{\partial \nu}+b(x)\vert u_0\vert^{p-2}u_0=0\ \ \text{on}\ \partial\Omega.$$ So, $u_0\in W^{1,G}(\Omega)$ is a weak solution of problem $(\ref{P})$. From Theorem \ref{thmC11}, we have that $u_0\in L^{\infty}(\Omega)$.\\ We define $A:\overline{\Omega}\times\mathbb{R}^N\rightarrow\mathbb{R}^N$, $B:\overline{\Omega}\times\mathbb{R}\rightarrow\mathbb{R}$ and $\phi:\partial\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ by \begin{equation}\label{789} \left\lbrace \begin{array}{ll} A(x,\eta)=a(\vert \eta\vert)\eta; \\ \ \\ B(x,t)=f(x,t);\\ \ \\ \phi(x,t)=b(x)\vert t\vert^{p-2}t. \end{array} \right.
\end{equation} It is easy to show that, for $x,y\in\overline{\Omega},\ \eta \in \mathbb{R}^N\setminus\lbrace 0\rbrace,\ \xi\in\mathbb{R}^N,\ t\in \mathbb{R},$ the following estimates hold: \begin{equation}\label{159} A(x,0)=0, \end{equation} \begin{equation}\label{1599} \sum_{i,j=1}^N \frac{\partial (A)_j}{\partial\eta_i}(x,\eta)\xi_i\xi_j \geq \frac{g(\vert \eta \vert)}{\vert \eta \vert}\vert\xi\vert^2, \end{equation} \begin{equation}\label{15999} \sum_{i,j=1}^N \left|\frac{\partial (A)_j}{\partial\eta_i}(x,\eta) \right|\vert \eta\vert\leq c(1+g(\vert \eta\vert)), \end{equation} \begin{equation}\label{159999} \vert A(x,\eta)-A(y,\eta)\vert\leq c(1+g(\vert \eta\vert))(\vert x-y\vert^\theta),\ \text{for some}\ \theta \in (0,1), \end{equation} \begin{equation}\label{1599999} \vert B(x,t)\vert \leq c\left( 1+h(\vert t\vert)\right). \end{equation} Indeed, inequalities $(\ref{159})$, $(\ref{159999})$ and $(\ref{1599999})$ are evident.\\ For $x\in\overline{\Omega},\ \eta \in \mathbb{R}^N\backslash\lbrace 0\rbrace,\ \xi\in\mathbb{R}^N$, we have \begin{equation}\label{12254996} D_\eta(A(x,\eta))\xi=a(\vert \eta\vert)\xi +a^{'}(\vert \eta\vert)\frac{\langle\eta ,\xi\rangle_{\mathbb{R}^N}}{\vert \eta\vert}\eta \end{equation} and \begin{equation}\label{122484996} \langle D_\eta(A(x,\eta))\xi,\xi\rangle_{\mathbb{R}^N}=a(\vert \eta\vert)\langle\xi,\xi\rangle_{\mathbb{R}^N} +a^{'}(\vert \eta\vert)\frac{\left[ \langle\eta ,\xi\rangle_{\mathbb{R}^N}\right]^2 }{\vert \eta\vert}, \end{equation} where $\langle\cdot,\cdot\rangle_{\mathbb{R}^N}$ is the inner product in $\mathbb{R}^N$.
Hence, we have the following derivative \begin{align}\label{666} D_\eta (a(\vert \eta\vert)\eta)=\frac{a^{'}(\vert \eta\vert)}{\vert \eta\vert}\eta\eta^{T}+a(\vert \eta\vert)I_N & =a(\vert \eta\vert)\left( I_N+\frac{a^{'}(\vert \eta\vert)\vert \eta\vert }{a(\vert\eta\vert)}\frac{1}{\vert \eta\vert^2}M_N(\eta,\eta)\right) \end{align} for all $\eta\in \mathbb{R}^N\backslash\lbrace 0\rbrace$, where $\eta^T$ is the transpose of $\eta$, $I_N$ is the unit matrix in $M_N(\mathbb{R})$ and \begin{equation}\label{455666699} M_N(\eta,\eta)=\eta \eta^{T}=\begin{pmatrix} \eta_1^2 & \eta_1\eta_2 &\cdots & \eta_1\eta_N \\ \eta_2\eta_1 & \eta_2^2 &\cdots & \eta_2\eta_N\\ \vdots & \vdots& \ddots& \vdots\\ \eta_N\eta_1& \eta_N\eta_2 & \cdots & \eta_N^2\\ \end{pmatrix} \end{equation} for all $\eta\in\mathbb{R}^N$.\\ Note that, for all $\eta\in \mathbb{R}^N$, we have \begin{equation}\label{4546789123} \lVert M_N(\eta,\eta)\rVert_{\mathbb{R}^N}=\sum_{i,j=1}^{N}\vert \eta_i \eta_j\vert=\left( \sum_{i=1}^{N}\vert \eta_i\vert\right) ^{2}\leq N\sum_{i=1}^{N}\vert \eta_i\vert^2=N\vert \eta\vert^2, \end{equation} where $\Vert .\Vert_{\mathbb{R}^N}$ is a norm on $M_N(\mathbb{R})$.\\ From $(\ref{122484996})$ and Lemma \ref{lem8956}, we have \begin{align}\label{954621866} \sum_{i,j=1}^N \frac{\partial (A)_j}{\partial\eta_i}(x,\eta)\xi_i\xi_j & =\langle D_\eta(A(x,\eta))\xi,\xi\rangle\nonumber\\ & =a(\vert \eta\vert)\langle\xi,\xi\rangle_{\mathbb{R}^N} +a^{'}(\vert \eta\vert)\frac{\left[ \langle\eta ,\xi\rangle_{\mathbb{R}^N}\right]^2 }{\vert \eta\vert}\nonumber\\ & =a(\vert \eta\vert)\left[ \langle\xi,\xi\rangle_{\mathbb{R}^N} +\frac{a^{'}(\vert \eta\vert)\vert \eta\vert}{a(\vert \eta\vert)}\frac{\left[ \langle\eta,\xi\rangle_{\mathbb{R}^N}\right] ^2}{\vert \eta\vert^2}\right] \nonumber\\ & \geq \frac{g(\vert \eta\vert)}{\vert \eta \vert}\vert
\xi\vert^2 \end{align} for all $x\in\overline{\Omega},\ \eta \in \mathbb{R}^N\backslash\lbrace 0\rbrace,\ \xi\in\mathbb{R}^N$.\\ Moreover, from $(\ref{666})$, $(\ref{4546789123})$ and Lemma \ref{lem8956}, we find that \begin{align}\label{455645489} \sum_{i,j=1}^N \left|\frac{\partial (A)_j}{\partial\eta_i}(x,\eta) \right|\vert \eta\vert &=\lVert D_\eta(A(x,\eta))\rVert_{\mathbb{R}^N}\vert \eta\vert\nonumber\\ &\leq \left( \lVert I_N\rVert_{\mathbb{R}^N}+\frac{a^{'}(\vert \eta\vert)\vert \eta\vert }{a(\vert\eta\vert)}\frac{1}{\vert \eta\vert^2}\lVert M_N(\eta,\eta)\rVert_{\mathbb{R}^N}\right)g(\vert\eta \vert)\nonumber\\ &\leq \left( 1+\frac{a^{'}(\vert \eta\vert)\vert \eta\vert }{a(\vert\eta\vert)}\right)Ng(\vert\eta \vert)\nonumber\\ &\leq a^+Ng(\vert\eta \vert)\nonumber\\ &\leq a^+N(1+g(\vert\eta \vert)) \end{align} for all $x\in\overline{\Omega},\ \eta \in \mathbb{R}^N\backslash\lbrace 0\rbrace$.\\ The nonlinear regularity result of Lieberman \cite[p. 320]{2} implies the existence of $\alpha\in(0,1)$ and $M_0\geq 0$ such that $$u_0\in C^{1,\alpha}(\overline{\Omega})\ \ \text{and}\ \ \Vert u_0\Vert_{C^{1,\alpha}(\overline{\Omega})}\leq M_0.$$ This ends the proof. \end{proof} \begin{prop}\label{prop4} Under the assumptions $(G)$ and $(H)$, if $u_0\in W^{1,G}(\Omega)$ is a local $C^1(\overline{\Omega})$-minimizer of $J$ (see Definition \ref{def333}), then $u_0$ is also a local $W^{1,G}(\Omega)$-minimizer of $J$. \end{prop} \begin{proof}[Proof] Let $u_0$ be a local $C^1(\overline{\Omega})$-minimizer of $J$. Then, by Proposition $\ref{prop3}$, we have \begin{equation} u_0\in L^{\infty}(\Omega)\ \text{and}\ u_0\in C^{1,\alpha}(\overline{\Omega})\ \ \text{for some}\ \alpha\in(0,1).
\end{equation} To prove that $u_0$ is a local $W^{1,G}(\Omega)$-minimizer of $J$, we argue by contradiction. Suppose that $u_0$ is not a local $W^{1,G}(\Omega)$-minimizer of $J$. Let $\varepsilon\in(0,1)$ and define $$B(u_0,\varepsilon)=\left\lbrace v\in W^{1,G}(\Omega):\ \mathcal{K}(v-u_0)\leq \varepsilon\right\rbrace; $$ recall that $\displaystyle{\mathcal{K}(v-u_0)=\int_{\Omega}G(\vert \nabla(v-u_0)\vert)dx+\int_{\Omega}G(\vert v-u_0\vert)dx}$.\\ We consider the following minimization problem: \begin{equation}\label{26} m_\varepsilon =\inf \left\lbrace J(v):\ v\in B(u_0,\varepsilon)\right\rbrace. \end{equation} By the contradiction hypothesis and assumption $(H)$, we have \begin{equation}\label{27} -\infty<m_\varepsilon <J(u_0). \end{equation} The set $B(u_0,\varepsilon)$ is a bounded, closed and convex subset of $W^{1,G}(\Omega)$ and is a neighbourhood of $u_0\in W^{1,G}(\Omega)$. Since $f(x,t)$ satisfies assumption $(H)$, the functional $J:W^{1,G}(\Omega)\rightarrow\mathbb{R}$ is weakly lower semicontinuous. So, from the Weierstrass theorem, there exists $v_\varepsilon\in B(u_0,\varepsilon)$ such that $m_\varepsilon=J(v_\varepsilon)$. Moreover, by $(\ref{27})$, we deduce that $v_\varepsilon\neq u_0$.\\ Now, using the Lagrange multiplier rule \cite[p.
35]{13}, we can find $\lambda_\varepsilon\geq 0$ such that $$\langle J^{'}(v_\varepsilon),v\rangle+\lambda_\varepsilon\langle \mathcal{K}^{'}(v_\varepsilon-u_0),v\rangle=0\ \ \text{for all}\ v\in W^{1,G}(\Omega),$$ which implies \begin{align}\label{29} \langle J^{'}(v_\varepsilon),v\rangle+\lambda_\varepsilon\langle \mathcal{K}^{'}(v_\varepsilon-u_0),v\rangle & =\int_{\Omega} a(\vert\nabla v_\varepsilon\vert)\nabla v_\varepsilon.\nabla v dx +\int_{\partial\Omega}b(x)\vert v_\varepsilon\vert^{p-2}v_\varepsilon v d\gamma\nonumber\\ &+\lambda_\varepsilon\int_{\Omega} a(\vert\nabla( v_\varepsilon-u_0)\vert)\nabla (v_\varepsilon-u_0).\nabla v dx -\int_{\Omega}f(x, v_\varepsilon)vdx\nonumber\\ & +\lambda_\varepsilon\int_{\Omega} a(\vert v_\varepsilon-u_0\vert) (v_\varepsilon-u_0) v dx\nonumber\\ & =0 \end{align} for all $v\in W^{1,G}(\Omega)$.\\ On the other hand, from Proposition \ref{prop3}, we see that $u_0\in W^{1,G}(\Omega)$ is a weak solution of problem $(\ref{P})$. Hence, \begin{align}\label{78956241} \int_{\Omega} a(\vert\nabla u_0\vert)\nabla u_0.\nabla v dx +\int_{\partial\Omega}b(x)\vert u_0\vert^{p-2} u_0 v d\gamma-\int_{\Omega}f(x, u_0)vdx=0 \end{align} for all $v\in W^{1,G}(\Omega)$.\\ Next, we have to show that $v_\varepsilon$ belongs to $L^{\infty}(\Omega)$ and hence to $C^{1,\alpha}(\overline{\Omega})$. We distinguish three cases.\\ \textbf{Case 1:} If $\lambda_\varepsilon=0$ with $\varepsilon\in (0,1]$, we find that $v_\varepsilon$ solves the Robin boundary value problem $(\ref{P})$. As in Proposition $\ref{prop3}$, we prove that $v_\varepsilon\in C^{1,\alpha}(\overline{\Omega})$ for some $\alpha\in (0,1)$ and there is $M_1\geq 0$ (independent of $\varepsilon$) such that $$\Vert v_\varepsilon\Vert_{C^{1,\alpha}(\overline{\Omega})}\leq M_1.$$ \textbf{Case 2:} $0<\lambda_\varepsilon\leq 1$ with $\varepsilon\in (0,1]$.
Multiplying $(\ref{78956241})$ by $\lambda_\varepsilon>0$ and adding $(\ref{29})$, we get \begin{align}\label{7458869} \int_{\Omega} a(\vert\nabla v_\varepsilon\vert)\nabla v_\varepsilon.\nabla v dx &+\lambda_\varepsilon\int_{\Omega} a(\vert\nabla u_0\vert)\nabla u_0.\nabla v dx +\lambda_\varepsilon\int_{\Omega} a(\vert\nabla( v_\varepsilon-u_0)\vert)\nabla (v_\varepsilon-u_0).\nabla v dx \nonumber\\ & +\lambda_\varepsilon\int_{\partial\Omega}b(x)\vert u_0\vert^{p-2} u_0 v d\gamma +\int_{\partial\Omega}b(x)\vert v_\varepsilon\vert^{p-2}v_\varepsilon v d\gamma\nonumber\\ &=\lambda_\varepsilon\int_{\Omega}f(x, u_0)vdx +\int_{\Omega}f(x, v_\varepsilon)vdx -\lambda_\varepsilon\int_{\Omega} a(\vert v_\varepsilon-u_0\vert) (v_\varepsilon-u_0) v dx \end{align} for all $v\in W^{1,G}(\Omega)$.\\ Let $A_\varepsilon:\overline{\Omega}\times\mathbb{R}^N\rightarrow\mathbb{R}^N$, $B_\varepsilon:\overline{\Omega}\times\mathbb{R}\rightarrow\mathbb{R}$ and $\phi_\varepsilon:\partial\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ be defined by \begin{equation}\label{788} \left\lbrace \begin{array}{ll} A_\varepsilon(x,\eta)=a(\vert \eta \vert)\eta+\lambda_\varepsilon a(\vert \eta -\nabla u_0\vert)(\eta-\nabla u_0)+\lambda_\varepsilon a(\vert \nabla u_0\vert)\nabla u_0;\\ \ \\ B_\varepsilon(x,t)=f(x,t)+\lambda_\varepsilon f(x,u_0)- \lambda_\varepsilon a(\vert t-u_0\vert)(t-u_0);\\ \ \\ \phi_\varepsilon(x,t)=b(x)\left( \vert t\vert^{p-2}t+\lambda_\varepsilon \vert u_0\vert^{p-2}u_0 \right). \end{array} \right. \end{equation} It is clear that $A_\varepsilon\in C(\overline{\Omega}\times\mathbb{R}^N,\mathbb{R}^N)$.
Hence, equation $(\ref{7458869})$ is the weak formulation of the following Robin boundary value problem $$ \left\lbrace \begin{array}{ll} -\text{div}(A_\varepsilon(x,\nabla v_\varepsilon)) =B_\varepsilon(x,v_\varepsilon)\ & \text{in}\ \Omega,\\ \ & \ \\ A_\varepsilon(x,\nabla v_\varepsilon).\nu + \phi_\varepsilon(x,v_\varepsilon)=0 & \text{on} \ \partial\Omega, \end{array} \right. $$ where $\nu$ is the inner normal to $\partial\Omega$.\\ From Lemma \ref{lem444} and assumption $(G)$, for $\eta\in \mathbb{R}^N$ and $x\in\Omega$, we have \begin{align}\label{45779756} \langle A_\varepsilon(x,\eta),\eta\rangle_{\mathbb{R}^N}& =\langle a(\vert \eta\vert)\eta,\eta\rangle_{\mathbb{R}^N}+ \lambda_\varepsilon\langle a(\vert \eta-\nabla u_0\vert)(\eta-\nabla u_0),\eta -\nabla u_0-(-\nabla u_0)\rangle_{\mathbb{R}^N}\nonumber\\ & -\lambda_\varepsilon\langle a(\vert -\nabla u_0\vert)(-\nabla u_0),\eta -\nabla u_0-(-\nabla u_0)\rangle_{\mathbb{R}^N}\nonumber\\ & \geq g(\vert \eta\vert)\vert \eta\vert\nonumber\\ &\geq G(\vert \eta\vert) \end{align} and \begin{align}\label{457779756} \vert A_\varepsilon(x,\eta)\vert& \leq a(\vert \eta\vert)\vert\eta\vert+ \lambda_\varepsilon a(\vert \eta-\nabla u_0\vert)\vert\eta-\nabla u_0\vert+\lambda_\varepsilon a(\vert \nabla u_0\vert)\vert \nabla u_0\vert\nonumber\\ & \leq g(\vert \eta\vert)+g(\vert \eta-\nabla u_0\vert)+g(\vert \nabla u_0\vert)\ \ (\text{since}\ 0<\lambda_\varepsilon\leq 1)\nonumber\\ &\leq g(\vert \eta\vert)+g(\vert \eta\vert+\vert\nabla u_0\vert)+g(\vert \nabla u_0\vert)\nonumber\\ & \leq c_0g(\vert \eta\vert)+c_1\ (\text{using Lemma \ref{lem1} and the monotonicity of}\ g). \end{align} Then, $A_\varepsilon$, $B_\varepsilon$ and $\phi_\varepsilon$ satisfy the corresponding growth conditions $(\ref{7546999882})-(\ref{7546999882222})$.
So, using Propositions \ref{prop1} and \ref{prop2}, we obtain that $v_\varepsilon\in L^{\infty}(\Omega)$.\\ It remains, using the regularity theorem of Lieberman, to show that $v_\varepsilon\in C^{1,\alpha}(\overline{\Omega})$ for some $\alpha\in (0,1)$. So, we need to prove that $A_\varepsilon$ and $B_\varepsilon$ satisfy the corresponding conditions $(\ref{159})-(\ref{1599999})$. The inequalities $(\ref{159})$ and $(\ref{1599999})$ are evident. The inequality $(\ref{159999})$ follows from Lemma \ref{lem3} and the fact that $\nabla u_0$ is H\"older continuous.\\ As in $(\ref{12254996})$ and $(\ref{122484996})$, we have \begin{equation}\label{1225445996} D_\eta(a(\vert\eta-\nabla u_0\vert)(\eta-\nabla u_0))\xi=a(\vert \eta-\nabla u_0\vert)\xi +a^{'}(\vert \eta-\nabla u_0\vert)\frac{\langle \eta-\nabla u_0 ,\xi\rangle_{\mathbb{R}^N}}{\vert \eta-\nabla u_0\vert}(\eta-\nabla u_0) \end{equation} and \begin{equation}\label{12254445996} \langle D_\eta(a(\vert\eta-\nabla u_0\vert)(\eta-\nabla u_0))\xi,\xi\rangle_{\mathbb{R}^N}=a(\vert \eta-\nabla u_0\vert)\langle\xi,\xi\rangle_{\mathbb{R}^N} +a^{'}(\vert \eta-\nabla u_0\vert)\frac{\left[ \langle \eta-\nabla u_0 ,\xi\rangle_{\mathbb{R}^N}\right] ^2}{\vert \eta-\nabla u_0\vert} \end{equation} for all $x\in\overline{\Omega},\ \eta \in \mathbb{R}^N\backslash\lbrace \nabla u_0\rbrace,\ \xi\in\mathbb{R}^N$.\\ Exploiting Lemma $\ref{lem8956}$, $(\ref{954621866})$ and $(\ref{12254445996})$, we infer that \begin{align}\label{54548998} \sum_{i,j=1}^N \frac{\partial (A_\varepsilon)_j}{\partial\eta_i}(x,\eta)\xi_i\xi_j &=\langle D_\eta (A)(x,\eta)\xi,\xi\rangle_{\mathbb{R}^N}\nonumber\\ &+\lambda_\varepsilon a(\vert \eta-\nabla u_0\vert )\left( \langle\xi,\xi\rangle_{\mathbb{R}^N}+\frac{a^{'}(\vert \eta-\nabla u_0\vert )\vert\eta-\nabla u_0\vert}{a(\vert\eta-\nabla u_0\vert)}\frac{\left[ \langle\eta-\nabla
u_0,\xi\rangle_{\mathbb{R}^N}\right] ^2}{\vert\eta-\nabla u_0\vert^2}\right)\nonumber\\ & \geq \langle D_\eta (A)(x,\eta)\xi,\xi\rangle_{\mathbb{R}^N}\nonumber\\ & \geq \frac{g(\vert \eta\vert)}{\vert \eta\vert}\vert\xi\vert^2 \end{align} for all $x\in\overline{\Omega},\ \eta \in \mathbb{R}^N\backslash\lbrace \nabla u_0\rbrace,\ \xi\in\mathbb{R}^N$.\\ Note that the derivative of $A_\varepsilon$ has the form \begin{equation}\label{7845661236} D_\eta (A_\varepsilon(x,\eta))=D_\eta (A(x,\eta))+\lambda_\varepsilon a(\vert \eta-\nabla u_0\vert)\left( I_N+\frac{a^{'}(\vert \eta-\nabla u_0\vert)\vert \eta-\nabla u_0\vert }{a(\vert\eta-\nabla u_0\vert)}\frac{1}{\vert \eta-\nabla u_0\vert^2}M_N(\eta-\nabla u_0,\eta-\nabla u_0)\right) \end{equation} for all $x\in\overline{\Omega},\ \eta \in \mathbb{R}^N\backslash\lbrace \nabla u_0\rbrace$, where $M_N(\eta-\nabla u_0,\eta-\nabla u_0)$ is defined in $(\ref{455666699})$.\\ As in $(\ref{4546789123})$, we have \begin{equation}\label{4545556789123} \lVert M_N(\eta-\nabla u_0,\eta-\nabla u_0)\rVert_{\mathbb{R}^N}\leq N\vert \eta-\nabla u_0\vert^2.
\end{equation} In light of $(\ref{455645489})$, $(\ref{7845661236})$, $(\ref{4545556789123})$ and Lemma \ref{lem8956}, we see that \begin{align}\label{7458896323} \sum_{i,j=1}^N \left|\frac{\partial (A_\varepsilon)_j}{\partial\eta_i}(x,\eta) \right|\vert \eta\vert& =\lVert D_\eta (A_\varepsilon(x,\eta))\rVert_{\mathbb{R}^N}\vert \eta \vert\nonumber\\ & \leq a^+N a(\vert\eta \vert)\vert\eta \vert+\lambda_\varepsilon a(\vert \eta-\nabla u_0\vert)\vert \eta\vert \lVert I_N\rVert_{\mathbb{R}^N}\nonumber\\ & +\lambda_\varepsilon a(\vert \eta-\nabla u_0\vert) \vert\eta\vert\left( \frac{a^{'}(\vert \eta-\nabla u_0\vert)\vert \eta-\nabla u_0\vert }{a(\vert\eta-\nabla u_0\vert)}\frac{\lVert M_N(\eta-\nabla u_0,\eta-\nabla u_0)\rVert_{\mathbb{R}^N}}{\vert \eta-\nabla u_0\vert^2}\right)\nonumber\\ &\leq a^+N a(\vert\eta \vert)\vert\eta \vert+ \lambda_\varepsilon a^+N a(\vert \eta-\nabla u_0\vert) \vert\eta\vert\nonumber\\ &\leq a^+N \vert\eta \vert\left( a(\vert\eta \vert)+ a(\vert \eta-\nabla u_0\vert) \right) \nonumber\\ &\leq c(1+g(\vert \eta\vert)) \end{align} for all $x\in\overline{\Omega},\ \eta \in \mathbb{R}^N\backslash\lbrace \nabla u_0\rbrace$.\\ So, from the regularity theorem of Lieberman \cite[p. 320]{2}, we can find $\alpha\in (0,1)$ and $M_2>0$, both independent of $\varepsilon$, such that \begin{equation}\label{32} v_\varepsilon\in C^{1,\alpha}(\overline{\Omega}),\ \ \Vert v_\varepsilon\Vert_{C^{1,\alpha}(\overline{\Omega})}\leq M_2\ \ \text{for all}\ \varepsilon\in(0,1]. \end{equation} \textbf{Case 3:} Suppose $\lambda_\varepsilon>1$ with $\varepsilon\in (0,1]$.
Multiplying $(\ref{78956241})$ by $-1$, setting $y_\varepsilon=v_\varepsilon-u_0$ in $(\ref{29})$ and adding, we get \begin{align}\label{745886669} \int_{\Omega} a(\vert\nabla( y_\varepsilon+u_0)\vert)\nabla (y_\varepsilon+u_0).\nabla v dx &-\int_{\Omega} a(\vert\nabla u_0\vert)\nabla u_0.\nabla v dx +\lambda_\varepsilon\int_{\Omega} a(\vert\nabla y_\varepsilon\vert)\nabla y_\varepsilon.\nabla v dx \nonumber\\ & -\int_{\partial\Omega}b(x)\vert u_0\vert^{p-2} u_0 v d\gamma +\int_{\partial\Omega}b(x)\vert y_\varepsilon+u_0\vert^{p-2}( y_\varepsilon+u_0) v d\gamma\nonumber\\ &=\int_{\Omega}f(x, y_\varepsilon+u_0)vdx -\int_{\Omega}f(x, u_0)vdx -\lambda_\varepsilon\int_{\Omega} a(\vert y_\varepsilon\vert) y_\varepsilon v dx \end{align} for all $v\in W^{1,G}(\Omega)$.\\ Define $\tilde{A}_\varepsilon:\overline{\Omega}\times\mathbb{R}^N\rightarrow\mathbb{R}^N$, $\tilde{B}_\varepsilon:\overline{\Omega}\times\mathbb{R}\rightarrow\mathbb{R}$ and $\tilde{\phi}_\varepsilon:\partial\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ by \begin{equation}\label{789968} \left\lbrace \begin{array}{ll} \tilde{A}_\varepsilon(x,\eta)=a(\vert \eta \vert)\eta+\frac{1}{\lambda_\varepsilon} a(\vert \eta +\nabla u_0\vert)(\eta+\nabla u_0)-\frac{1}{\lambda_\varepsilon} a(\vert \nabla u_0\vert)\nabla u_0; \\ \ \\ \tilde{B}_\varepsilon(x,t)=\frac{1}{\lambda_\varepsilon}\left[ f(x,t+u_0)- f(x,u_0)\right] - a(\vert t\vert)t;\\ \ \\ \tilde{\phi}_\varepsilon(x,t)=\frac{1}{\lambda_\varepsilon}b(x)\left( \vert t+u_0\vert^{p-2}(t+u_0)-\vert u_0\vert^{p-2}u_0\right). \end{array} \right. \end{equation} It is clear that $\tilde{A}_\varepsilon\in C(\overline{\Omega}\times\mathbb{R}^N,\mathbb{R}^N)$.
Rewriting $(\ref{745886669})$, we find that $y_\varepsilon$ solves the problem $$ \left\lbrace \begin{array}{ll} -\text{div}(\tilde{A}_\varepsilon(x,\nabla y_\varepsilon)) =\tilde{B}_\varepsilon(x,y_\varepsilon)\ & \text{in}\ \Omega,\\ \ & \ \\ \tilde{A}_\varepsilon(x,\nabla y_\varepsilon).\nu +\tilde{\phi}_\varepsilon(x,y_\varepsilon)=0 & \text{on} \ \partial\Omega, \end{array} \right. $$ where $\nu$ is the inner normal to $\partial\Omega$.\\ Again, from Propositions $\ref{prop1}$ and $\ref{prop2}$, we conclude that $y_\varepsilon\in L^{\infty}(\Omega)$. By the same arguments used in Case 2, we prove that $\tilde{A}_\varepsilon$ and $\tilde{B}_\varepsilon$ satisfy the corresponding inequalities $(\ref{159})-(\ref{1599999})$. So, the regularity theorem of Lieberman \cite[p. 320]{2} implies the existence of $\alpha\in (0,1)$ and $M_3\geq 0$, both independent of $\varepsilon$, such that $$y_\varepsilon\in C^{1,\alpha}(\overline{\Omega})\ \ \text{and}\ \ \Vert y_\varepsilon\Vert_{C^{1,\alpha}(\overline{\Omega})}\leq M_3.$$ Since $y_\varepsilon=v_\varepsilon-u_0$ and $u_0\in C^{1,\alpha}(\overline{\Omega})$, we infer that $$v_\varepsilon\in C^{1,\alpha}(\overline{\Omega})\ \ \text{and}\ \ \Vert v_\varepsilon\Vert_{C^{1,\alpha}(\overline{\Omega})}\leq M_3+\Vert u_0\Vert_{C^{1,\alpha}(\overline{\Omega})}.$$ Let $\varepsilon_n\searrow 0$ as $n\longrightarrow +\infty$. Therefore, in all three cases, we have uniform $C^{1,\alpha}(\overline{\Omega})$ bounds for the sequence $\lbrace v_{\varepsilon_n}\rbrace_{n\geq 1}\subseteq W^{1,G}(\Omega)$. Then, passing to a subsequence if necessary, we obtain \begin{equation}\label{7594569} v_{\varepsilon_n}\rightharpoonup v\ \ \text{in}\ C^{1,\alpha}(\overline{\Omega}).
\end{equation} Exploiting the compact embedding of $C^{1,\alpha}(\overline{\Omega})$ into $C^{1}(\overline{\Omega})$ (see \cite[Theorem 1.34, p. 11]{111}) in $(\ref{7594569})$, we get \begin{equation}\label{75568784} v_{\varepsilon_n}\rightarrow v\ \ \text{in}\ C^{1}(\overline{\Omega}). \end{equation} Recall that $\Vert v_{\varepsilon_n}-u_0\Vert^{g^+}\leq \varepsilon_n$ for all $n\in\mathbb{N}$. So, \begin{equation}\label{74568784} v_{\varepsilon_n}\longrightarrow u_0\ \text{in}\ W^{1,G}(\Omega). \end{equation} Therefore, from $(\ref{75568784})$ and $(\ref{74568784})$, we obtain $$v_{\varepsilon_n}\rightarrow u_0\ \ \text{in}\ C^{1}(\overline{\Omega}).$$ So, for $n$ sufficiently large, say $n\geq n_0$, we have $$\Vert v_{\varepsilon_n}-u_0\Vert_{C^{1}(\overline{\Omega})}\leq r_0$$ (where $r_0>0$ is defined in Definition \ref{def333}), which yields \begin{equation}\label{misss} J(u_0)\leq J(v_{\varepsilon_n})\ \ \text{for all}\ n\geq n_0. \end{equation} On the other hand, we have \begin{equation}\label{missss} J(v_{\varepsilon_n})< J(u_0)\ \ \text{for all}\ n\in\mathbb{N}. \end{equation} Comparing $(\ref{misss})$ and $(\ref{missss})$, we reach a contradiction. This proves that $u_0$ is a local $W^{1,G}(\Omega)$-minimizer of $J$.\\ This ends the proof. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thmC12}:}] The proof follows by applying Propositions $\ref{prop3}$ and $\ref{prop4}$. \end{proof} \begin{thebibliography}{99} \bibitem{111}R. Adams, Sobolev Spaces, \textit{Academic Press, New York}, (1975). \bibitem{ylp}Y. Bai, L. Gasi{\'n}ski, P. Winkert, S. Zeng, $W^{1,p}$ versus $C^1$: The nonsmooth case involving critical growth, \textit{Bulletin of Mathematical Sciences}, \textbf{10} (2020), 2050009. \bibitem{16}G. Bonanno, G. M. Bisci, V. D.
R{\u{a}}dulescu, Existence of Three Solutions for a Nonhomogeneous Neumann Problem Through Orlicz-Sobolev Spaces, \textit{Nonlinear Anal. TMA}, \textbf{1} (2011), 4785-4795. \bibitem{17}G. Bonanno, G. M. Bisci, V. D. R{\u{a}}dulescu, Arbitrarily Small Weak Solutions for a Nonlinear Eigenvalue Problem in Orlicz-Sobolev Spaces, \textit{Monatsh. Math.}, \textbf{165} (2012), 305-318. \bibitem{18}G. Bonanno, G. M. Bisci, V. D. R{\u{a}}dulescu, Infinitely Many Solutions for a Class of Nonlinear Eigenvalue Problem in Orlicz-Sobolev Spaces, \textit{C. R. Acad. Sci. Paris}, \textbf{349} (2011), 263-268. \bibitem{19}G. Bonanno, G. M. Bisci, V. D. R{\u{a}}dulescu, Quasilinear Elliptic Non-homogeneous Dirichlet Problems Through Orlicz-Sobolev Spaces, \textit{Nonlinear Anal. TMA}, \textbf{75} (2012), 4441-4456. \bibitem{29}H. Brezis, L. Nirenberg, $H^1$ versus $C^1$ local minimizers, \textit{C. R. Acad. Sci. Paris}, \textbf{317} (1993), 465-472. \bibitem{fan2}X. L. Fan, Global $C^{1,\alpha}$-regularity for variable elliptic equations in divergence form, \textit{J. Differ. Equ.}, \textbf{235} (2007), 397-417. \bibitem{fan3}X. L. Fan, D. Zhao, A class of De Giorgi type and H\"older continuity, \textit{Nonlinear Anal.}, \textbf{36} (1999), 295-318. \bibitem{6}F. Fang, Z. Tan, Orlicz-Sobolev versus H\"older local minimizer and multiplicity results for quasilinear elliptic equations, \textit{Journal of Mathematical Analysis and Applications}, \textbf{402} (2013), 348-370. \bibitem{8}N. Fukagai, M. Ito, K. Narukawa, Positive solutions of quasilinear elliptic equations with critical Orlicz-Sobolev nonlinearity on $\mathbb{R}^N$, \textit{Funkcial. Ekvac.}, \textbf{49} (2006), 235-267. \bibitem{3}L. Gasi{\'n}ski, N. S. Papageorgiou, Anisotropic nonlinear Neumann problems, \textit{Calculus of Variations and Partial Differential Equations}, \textbf{42} (2011), 323-354. \bibitem{npl}L. Gasi{\'n}ski, N. S.
Papageorgiou, Nonlinear Analysis, \textit{Chapman \& Hall/CRC, Boca Raton, FL}, 2006. MR2168068 (2006e:47001). \bibitem{piral}J. Garc{\'i}a Azorero, I. P. Alonso, Some results about the existence of a second positive solution in a quasilinear critical problem, \textit{Indiana University Mathematics Journal}, \textbf{43} (1994), 941-957. \bibitem{30}J. Garc{\'i}a Azorero, J. J. Manfredi, I. P. Alonso, Sobolev versus H\"older local minimizer and global multiplicity for some quasilinear elliptic equations, \textit{Commun. Contemp. Math.}, \textbf{2} (2000), 385-404. \bibitem{34}J. Giacomoni, K. Saoudi, $W^{1,p}_0$ versus $C^1$ local minimizers for a singular and critical functional, \textit{J. Math. Anal. Appl.}, \textbf{363} (2010), 697-710. \bibitem{lmt}M. Guedda, L. V{\'e}ron, Quasilinear elliptic equations involving critical Sobolev exponents, \textit{Nonlinear Analysis: Theory, Methods \& Applications}, \textbf{13} (1989), 879-902. \bibitem{31}Z. M. Guo, Z. T. Zhang, $W^{1,p}$ versus $C^1$ local minimizers and multiplicity results for quasilinear elliptic equations, \textit{J. Math. Anal. Appl.}, \textbf{286} (2003), 32-50. \bibitem{shu}S. Hu, N. S. Papageorgiou, Nonlinear Neumann equations driven by a nonhomogeneous differential operator, \textit{Communications on Pure \& Applied Analysis}, \textbf{10} (2011), 1055. \bibitem{13}S. Hu, N. S. Papageorgiou, ``Handbook of Multivalued Analysis'', \textit{Kluwer Academic Publishers, Dordrecht}, \textbf{Vol. I} (1997). \bibitem{iann}A. Iannizzotto, N. S. Papageorgiou, Existence of three nontrivial solutions for nonlinear Neumann hemivariational inequalities, \textit{Nonlinear Anal.}, \textbf{70} (2009), 3285-3297. \bibitem{9990}A. Kufner, O. John, S. Fu{\v{c}}{\'\i}k, Function Spaces, \textit{Springer Science Business Media}, \textbf{Vol. 3} (1979). \bibitem{lad}O. A. Ladyzhenskaya, N.
Uraltseva, Linear and Quasilinear Elliptic Equations, \textit{Mathematics in Science and Engineering}, Academic Press, New York, \textbf{46} (1968). \bibitem{1}G. M. Lieberman, Boundary regularity for solutions of degenerate elliptic equations, \textit{Nonlinear Analysis: Theory, Methods and Applications}, \textbf{11} (1988), 1203-1219. \bibitem{2}G. M. Lieberman, The natural generalization of the natural conditions of Ladyzhenskaya and Ural'tseva for elliptic equations, \textit{Communications in Partial Differential Equations}, \textbf{16} (1991), 311-361. \bibitem{mgpw}G. Marino, P. Winkert, Moser iteration applied to elliptic equations with critical growth on the boundary, \textit{Nonlinear Analysis}, \textbf{180} (2019), 154-169. \bibitem{25}M. Mih{\u{a}}ilescu, V. D. R{\u{a}}dulescu, Neumann Problems Associated to Non-homogeneous Differential Operators in Orlicz-Sobolev Spaces, \textit{Ann. Inst. Fourier}, \textbf{58} (2008), 2087-2111. \bibitem{mom}D. Motreanu, V. V. Motreanu, N. S. Papageorgiou, Nonlinear Neumann problems near resonance, \textit{Indiana Univ. Math. J.}, \textbf{58} (2009), 1257-1279. \bibitem{5}M. Avci, On a Robin problem in Orlicz-Sobolev spaces, \textit{TWMS Journal of Applied and Engineering Mathematics}, \textbf{9} (2019), 246-256. \bibitem{4}N. S. Papageorgiou, V. D. R{\u{a}}dulescu, Nonlinear nonhomogeneous Robin problems with superlinear reaction term, \textit{Advanced Nonlinear Studies}, \textbf{16} (2016), 737-764. \bibitem{11}M. M. Rao, Z. D. Ren, Theory of Orlicz Spaces, \textit{Monographs and Textbooks in Pure and Applied Mathematics, Marcel Dekker Inc., New York}, \textbf{146}. \bibitem{pwa}P. Winkert, $L^\infty$-estimates for nonlinear elliptic Neumann boundary value problems, \textit{Nonlinear Differential Equations and Applications NoDEA}, \textbf{17} (2010), 289-302. \bibitem{patrik1}P.
Winkert, Local $C^1(\Omega)$-minimizers versus local $W^{1,p}(\Omega)$-minimizers of nonsmooth functionals, \textit{Nonlinear Analysis: Theory, Methods \& Applications}, \textbf{72} (2010), 4298-4303. \bibitem{patrik2}P. Winkert, Constant-sign and sign-changing solutions for nonlinear elliptic equations with Neumann boundary values, \textit{Advances in Differential Equations}, \textbf{15} (2010), 561-599. \end{thebibliography} \end{document}
\begin{document} \frontmatter \begin{abstract} This paper deals with a nonlinear parabolic equation for which a local solution in time exists and then blows up in a finite time. We consider the Chipot-Weissler equation: \begin{equation*} u_{t}=u_{xx}+u^{p}-\left|u_{x}\right|^{q},\ \ x\in (-1,1),\ t>0, \ \ p>1 \text{ and } 1\leq q\leq \dfrac{2p}{p+1}. \end{equation*} We study the numerical approximation and show that the numerical solution converges to the continuous one under some restrictions on the initial data and the parameters $p$ and $q$. Moreover, we study the numerical blow-up sets and show that, although the convergence of the numerical solution is guaranteed, the numerical blow-up sets are sometimes different from the blow-up set of the PDE. \end{abstract} \keywords{Chipot-Weissler equation, blow up, finite difference scheme, numerical blow up set, asymptotic behaviours, numerical convergence.} \maketitle \mainmatter\date{} \tableofcontents \section{Introduction} In this paper, we consider the nonlinear parabolic problem \begin{equation} \left\{ \begin{array}{lll} u_{t}=u_{xx}+u^{p}-\left|u_{x}\right|^{q},\ \ \ x\in (-1,1),\ t>0,\\ u(\pm 1,t)=0,\ \ t>0,\\ u(x,0)=u_{0}(x),\ \ x\in(-1,1). \end{array} \right. \label{exacte} \end{equation} Here $p>1$, $1\leq q\leq \dfrac{2p}{p+1}$ and $u_{0}$ is a positive function which is compatible with the boundary condition. It is well known that for some initial data, this problem blows up in a finite time. Problem \eqref{exacte} was studied for the first time by Chipot and Weissler in \cite{chipotweissler}; since then, the phenomenon of blow up for different problems has been the subject of intensive study, see for example \cite{friedman}, \cite{fujita}, \cite{hayakawa}, \cite{levine}, \cite{souplet} and the references therein.
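Before the scheme of \cite{hani} is recalled below, the blow-up mechanism in \eqref{exacte} can already be observed numerically. The following Python sketch is purely illustrative (a naive forward-Euler discretization on a fixed uniform mesh, not the semi-implicit adaptive scheme analyzed in this paper; all function and variable names are ours):

```python
import numpy as np

# Illustrative explicit discretization of u_t = u_xx + u^p - |u_x|^q
# on (-1, 1) with zero Dirichlet boundary data.  This is NOT the
# adaptive scheme (approchee) of the paper; it only exhibits the
# reaction-dominated growth at the centre.
def explicit_step(u, h, tau, p, q):
    """One forward-Euler step; u includes the boundary values u[0] = u[-1] = 0."""
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2   # centred second difference
    ux = (u[2:] - u[:-2]) / (2.0 * h)               # centred first difference
    v = u.copy()
    v[1:-1] = u[1:-1] + tau * (uxx + u[1:-1]**p - np.abs(ux)**q)
    return v

# Example: p = 2, q = 1, a large symmetric initial profile peaked at x = 0,
# as in assumptions (A1)-(A5).
N = 201
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
u = 50.0 * np.cos(np.pi * x / 2.0)                  # u(+-1) = 0, maximum at x = 0
for _ in range(200):                                # tau respects tau <= h^2/2
    u = explicit_step(u, h, tau=1e-5, p=2, q=1)
# the maximum at the centre grows fastest, consistent with single-point blow-up
```

With $p=2$, $q=1$ the reaction term $u^2$ dominates at the peak while the gradient term vanishes there by symmetry, so the discrete maximum at $x=0$ grows fastest, in line with the single-point blow-up of the PDE proved in \cite{chiblikfila}.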
There exist many theoretical studies on the question of the occurrence of blow up but, from a numerical point of view, many interesting numerical questions for problem \eqref{exacte} remain untreated.\\ \noindent We define the blow-up set for problem \eqref{exacte} as: \begin{equation*} B(u)=\left\{ x\in [-1,1];\ \ \exists \ (x_{n},t_{n})\rightarrow (x,T^{*})\ \text{ such that } u(x_{n},t_{n})\rightarrow +\infty \text{ as } n\rightarrow +\infty \right\}. \end{equation*} It is proved in \cite{chiblikfila} that the solution of \eqref{exacte} blows up only at the central point, that is: \begin{equation*} \exists \ \ T^{*}<+\infty \text{ such that } \lim_{t\rightarrow T^{*}}u(0,t)=+\infty \text{ but }\ \lim_{t\rightarrow T^{*}}u(x,t)<\infty \text{ when } x\neq 0. \end{equation*} In \cite{hani}, we constructed a finite difference scheme whose solution satisfies the same properties as the exact solution and, moreover, we proved that its solution blows up in a finite time. In this paper, for the same scheme, we show the convergence of the numerical solution to the continuous one under some restrictions on $p$ and $q$, and we study the asymptotic behaviour of the solution near its singularity. We prove that the numerical solution can blow up at more than one point, while one-point blow-up is known to occur in the continuous problem.
More precisely, we show that even if a difference solution blows up, its values remain bounded up to the moment of blow up except at the maximum point and its adjacent points; moreover, the number of blow-up points depends, in a way, on the value of the parameter $q$.\\ \noindent We recall the scheme studied in \cite{hani}: for $j=1,...,N_{n}$ and $n\geq 0$ we have \begin{equation} \left\{ \begin{array}{lllll} \dfrac{u_{j}^{n+1}-u_{j}^{n}}{\tau_{n}}=\dfrac{u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h_{n}^{2}}+(u_{j}^{n})^{p}-\dfrac{1}{(2h_{n})^{q}}\left| u_{j+1}^{n}-u_{j-1}^{n}\right| ^{q-1}\left| u_{j+1}^{n+1}-u_{j-1}^{n+1}\right| ,\\ u_{j}^{0}=u_{0}(x_{j}),\\ u_{0}^{n}=u_{N_{n}+1}^{n}=0.\\ \end{array} \right. \label{approchee} \end{equation} \noindent We denote by $U^{n}:=(u_{0}^{n},...,u_{N_{n}+1}^{n})^{t}$ the numerical solution of \eqref{approchee}, and by $$\left\|U^{n}\right\|_{\infty}=\max\limits_{1\leq j\leq N_{n}}|u_{j}^{n}|$$ the $L^{\infty}$ norm of $U^{n}$.\\ Here the notation $u_{j}^{n}$ is employed to denote the approximation of $u(x_{j},t^{n})$ for $x_{j}\in [-1,1]$ and $t^{n}\geq 0.$ Also, we fix other notations as follows: \begin{enumerate} \item $\tau: $ size parameter for the variable time mesh $\tau_{n}$. \item $h: $ size parameter for the variable space mesh $h_{n}$. \item $t^{n}$: $n$-th time step on $t>0$ determined as: \begin{equation*} \left\{ \begin{array}{lll} t^{0}=0,\\ t^{n}=t^{n-1}+\tau_{n-1}=\sum\limits_{k=0}^{n-1}{\tau_{k}},\ \ n\geq 1.\\ \end{array} \right.\\ \end{equation*} \item $x_{j}$: $j$-th net point on $[-1,1]$ determined as: \begin{equation*} \left\{ \begin{array}{lll} x_{0}=-1,\\ x_{j}=x_{j-1}+h_{n},\ j\geq 1 \text{ and }\ n\geq 0,\\ x_{N_{n}+1}=1. \end{array} \right.\\ \end{equation*} We suppose that a spatial net point $x_{m}$ coincides with the middle point $x=0.$ \item $\tau_{n}: $ discrete time increment of the $n$-th step determined by \begin{equation*} \tau_{n}=\tau \min\left(1,\left\|U^{n}\right\|_{\infty}^{-p+1}\right).
\end{equation*} \item $h_{n}: $ discrete space increment of the $n$-th step determined by $$h_{n}= \min\left(h,\left(2\left\|U^{n}\right\|_{\infty}^{-q+1}\right)^{\frac{1}{2-q}}\right).$$ \item $N_{n}=\dfrac{2}{h_{n}}-1$: the number of subdivisions of the interval $[-1,1].$ \item $m=\dfrac{N_{n}+1}{2}.$ \end{enumerate} As in \cite{hani}, we suppose that the initial data $u_{0}$ satisfies the following conditions:\\ (A1) $u_{0}$ is continuous, nonconstant and nonnegative in $[-1,1].$\\ (A2) $u_{0}$ is spatially symmetric about $x=0.$\\ (A3) $u_{0}$ is strictly monotone increasing in $[-1,0].$\\ (A4) $u_{0}(-1)=u_{0}(1)=0.$\\ (A5) $u_{0}$ is large in the sense that $\left\|u_{0}\right\|_{\infty}\gg 1.$\\ This paper is organized as follows: In Section 2, we state and prove the main results, namely, if $p=2$ and $q=1$ then the solution blows up at the maximum point and the points around it, but remains bounded at all of the remaining points, while if $p>2$ and $q<\dfrac{2(p-1)}{p}$, then there is only a single point at which the solution blows up. In Section 3, we prove the convergence of the numerical solution to the exact one. In Section 4, we give an approximation of the blow-up time. Finally, in Section 5, we present some numerical simulations. \section{Main theorems} In this section, we study the asymptotic behaviour of the difference solution near the maximal point $x_{m}.$ \begin{th1} Let $U^{n}$ be a solution of \eqref{approchee} and suppose that $h<\dfrac{1}{1+\tau}$. For $p=2$ and $q=1$, we have \begin{equation*} \lim_{n\rightarrow +\infty} u_{m-1}^{n}=\lim_{n\rightarrow +\infty} u_{m+1}^{n}=+\infty.
\end{equation*} \end{th1} \begin{proof} For $j=m-1$, the equation of \eqref{approchee} can be rewritten as \begin{eqnarray} && (1+2\lambda_{n})u_{m-1}^{n+1} \label{aa}\\ &=&\lambda_{n}(u_{m-2}^{n+1}+u_{m}^{n+1})+u_{m-1}^{n}+\tau_{n}(u_{m-1}^{n})^{p} -\dfrac{\tau_{n}}{(2h_{n})^{q}}\left|u_{m}^{n}-u_{m-2}^{n}\right|^{q-1}\left|u_{m}^{n+1}-u_{m-2}^{n+1}\right|.\nonumber \\ \nonumber \end{eqnarray} Using positivity and monotonicity, we get \begin{equation*} (1+2\lambda_{n})u_{m-1}^{n+1} \geq \lambda_{n}u_{m}^{n+1}+u_{m-1}^{n}-\dfrac{\tau_{n}}{(2h_{n})^{q}}(u_{m}^{n}-u_{m-2}^{n})^{q-1}(u_{m}^{n+1}-u_{m-2}^{n+1}). \end{equation*} Using that \begin{equation} u_{m}^{n}-u_{m-2}^{n}\leq 2 u_{m}^{n} \text{ and } u_{m}^{n+1}-u_{m-2}^{n+1}\leq 2 u_{m}^{n+1}, \label{bb} \end{equation} we obtain \begin{equation} u_{m-1}^{n+1}\geq \dfrac{\lambda_{n}}{1+2\lambda_{n}}u_{m}^{n+1}+\dfrac{1}{1+2\lambda_{n}}u_{m-1}^{n}-\dfrac{\tau_{n}}{h_{n}^{q}(1+2\lambda_{n})}(u_{m}^{n})^{q-1}u_{m}^{n+1}. \label{eq42} \end{equation} Furthermore, from \eqref{approchee} with $j=m$ (by symmetry, $u_{m+1}^{n}=u_{m-1}^{n}$ and $u_{m+1}^{n+1}=u_{m-1}^{n+1}$, so the gradient term vanishes), we have \begin{equation} u_{m}^{n+1}=\dfrac{2\lambda_{n}}{1+2\lambda_{n}}u_{m-1}^{n+1}+\dfrac{u_{m}^{n}}{1+2\lambda_{n}}\left(1+\tau_{n}(u_{m}^{n})^{p-1}\right), \label{eq43} \end{equation} which implies that \begin{equation} u_{m}^{n+1}\geq \dfrac{u_{m}^{n}}{1+2\lambda_{n}}. \label{eq44} \end{equation} Using \eqref{eq42}, \eqref{eq43} and \eqref{eq44}, we get for $p=2$ and $q=1$ \begin{equation*} u_{m-1}^{n+1}\geq \frac{\lambda_{n}}{(1+2\lambda_{n})^{2}}u_{m}^{n}+\frac{1}{1+2\lambda_{n}}u_{m-1}^{n} -\frac{\tau_{n}}{h_{n}(1+2\lambda_{n})}\left[\frac{2\lambda_{n}}{1+2\lambda_{n}}u_{m-1}^{n+1}+\frac{u_{m}^{n}}{1+2\lambda_{n}}(1+\tau_{n}u_{m}^{n})\right].
\end{equation*} Then, \begin{equation*} \left(1+\frac{2\tau_{n}\lambda_{n}}{h_{n}(1+2\lambda_{n})^{2}}\right)u_{m-1}^{n+1}\geq \frac{\lambda_{n}}{(1+2\lambda_{n})^{2}}u_{m}^{n}+\frac{1}{1+2\lambda_{n}}u_{m-1}^{n} -\frac{\tau_{n}u_{m}^{n}(1+\tau_{n}u_{m}^{n})}{h_{n}(1+2\lambda_{n})^{2}}, \end{equation*} which implies that \begin{equation} u_{m-1}^{n+1}\geq \frac{\lambda_{n}h_{n}u_{m}^{n}+h_{n}(1+2\lambda_{n})u_{m-1}^{n}-\tau_{n}u_{m}^{n}(1+\tau_{n}u_{m}^{n})}{h_{n}(1+2\lambda_{n})^{2}+2\tau_{n}\lambda_{n}}. \label{eq45} \end{equation} Since the solution blows up, we have $u_{m}^{n}>1$ for $n$ large; moreover, \begin{equation*} \tau_{n}=\frac{\tau}{u_{m}^{n}} \text{ and } \ h_{n}=\min(2,h)=h. \end{equation*} Then \begin{equation*} \lambda_{n}=\frac{\tau}{h^{2}u_{m}^{n}}, \ \ \ h_{n}\lambda_{n}=\frac{\tau}{hu_{m}^{n}} \text{ and } \ \tau_{n}\lambda_{n}=\frac{\tau^{2}}{h^{2}(u_{m}^{n})^{2}}. \end{equation*} Hence, \eqref{eq45} implies \begin{equation*} u_{m-1}^{n+1}\geq \dfrac{\dfrac{\tau}{h}+h\left(1+\dfrac{2\tau}{h^{2}u_{m}^{n}}\right)u_{m-1}^{n}-\tau(1+\tau)}{h\left(1+\dfrac{2\tau}{h^{2}u_{m}^{n}}\right)^{2}+\dfrac{2\tau^{2}}{(u_{m}^{n})^{2}h^{2}}}. \end{equation*} Since \begin{equation*} \lim_{n\rightarrow +\infty}u_{m}^{n}=+\infty, \end{equation*} we get \begin{equation*} \lim_{n\rightarrow +\infty}u_{m-1}^{n+1}\geq \dfrac{\dfrac{\tau}{h}+h \lim\limits_{n\rightarrow +\infty}u_{m-1}^{n}-\tau(1+\tau)}{h}. \end{equation*} Assume that $\lim\limits_{n\rightarrow +\infty}u_{m-1}^{n}\neq +\infty$ and set $l=\lim\limits_{n\rightarrow +\infty}u_{m-1}^{n}$; then we have \begin{equation*} l\geq \dfrac{\tau}{h^{2}}+l-\dfrac{\tau(1+\tau)}{h} \Rightarrow \dfrac{\tau}{h}\left(\dfrac{1}{h}-(1+\tau)\right)\leq 0, \end{equation*} which is a contradiction because $h<\dfrac{1}{1+\tau}.$\\ Therefore, we have $$\lim_{n\rightarrow +\infty} u_{m-1}^{n}=+\infty,$$ and using symmetry we get the result of Theorem 2.1.
\end{proof} The next important result of this paper is stated in the following theorem: \begin{th1} Let $U^{n}$ be the solution of \eqref{approchee}, and suppose that $p\geq 2$ and $1\leq q<\dfrac{2(p-1)}{p}.$\\ \textbf{(a)} If $p=2$ and $q=1$ then \begin{equation*} \lim_{n\rightarrow +\infty} u_{m-2}^{n}< +\infty. \end{equation*} \textbf{(b)} If $p>2$ and $q<\dfrac{2(p-1)}{p}$ then \begin{equation*} \lim_{n\rightarrow +\infty} u_{m-1}^{n}< +\infty. \end{equation*} \end{th1} \begin{proof} We first prove \textbf{(a)}. In \eqref{approchee}, if we take $p=2,\ q=1$ and $j=m-2$, we get \begin{eqnarray*} \frac{u_{m-2}^{n+1}-u_{m-2}^{n}}{\tau_{n}}&=&\frac{u_{m-1}^{n+1}-2u_{m-2}^{n+1}+u_{m-3}^{n+1}}{h_{n}^{2}}+\left(u_{m-2}^{n}\right)^{2}-\frac{1}{2h_{n}}\left(u_{m-1}^{n+1}-u_{m-3}^{n+1}\right)\\ &\leq& \frac{u_{m-1}^{n+1}-2u_{m-2}^{n+1}+u_{m-3}^{n+1}}{h_{n}^{2}}+\left(u_{m-2}^{n}\right)^{2}, \end{eqnarray*} but $u_{m-3}^{n+1}-u_{m-2}^{n+1}<0$, so \begin{equation*} \dfrac{u_{m-2}^{n+1}-u_{m-2}^{n}}{\tau_{n}}\leq \dfrac{u_{m-1}^{n+1}-u_{m-2}^{n+1}}{h_{n}^{2}}+\left(u_{m-2}^{n}\right)^{2}, \end{equation*} which implies that \begin{equation} (1+\lambda_{n})u_{m-2}^{n+1}\leq \lambda_{n}u_{m-1}^{n+1}+\left(1+\tau_{n}u_{m-2}^{n}\right)u_{m-2}^{n}. \label{eq46} \end{equation} On the other hand, from \eqref{aa} with $j=m-1$, we get \begin{equation*} \frac{u_{m-1}^{n+1}-u_{m-1}^{n}}{\tau_{n}}\leq \frac{u_{m-2}^{n+1}-2u_{m-1}^{n+1}+u_{m}^{n+1}}{h_{n}^{2}}+\left(u_{m-1}^{n}\right)^{2}, \end{equation*} but $u_{m-2}^{n+1}-u_{m-1}^{n+1}<0$, so \begin{equation*} \frac{u_{m-1}^{n+1}-u_{m-1}^{n}}{\tau_{n}}\leq \frac{-u_{m-1}^{n+1}+u_{m}^{n+1}}{h_{n}^{2}}+\left(u_{m-1}^{n}\right)^{2}, \end{equation*} which implies that \begin{equation*} (1+\lambda_{n})u_{m-1}^{n+1}\leq \lambda_{n}u_{m}^{n+1}+\left(1+\tau_{n}u_{m-1}^{n}\right)u_{m-1}^{n}, \end{equation*} and then \begin{equation} u_{m-1}^{n+1}\leq \frac{\lambda_{n}u_{m}^{n+1}+\left(1+\tau_{n}u_{m-1}^{n}\right)u_{m-1}^{n}}{1+\lambda_{n}}.
\label{eq47} \end{equation} Next, recalling \eqref{eq43} for $p=2$, we get \begin{equation} u_{m}^{n+1}=\frac{2\lambda_{n}}{1+2\lambda_{n}}u_{m-1}^{n+1}+\frac{1+\tau_{n}u_{m}^{n}}{1+2\lambda_{n}}u_{m}^{n}. \label{eq48} \end{equation} Substituting \eqref{eq48} into \eqref{eq47}, we get \begin{equation*} u_{m-1}^{n+1}\leq \frac{\lambda_{n}}{1+\lambda_{n}}\left[\frac{2\lambda_{n}}{1+2\lambda_{n}}u_{m-1}^{n+1}+\frac{1+\tau_{n}u_{m}^{n}}{1+2\lambda_{n}}u_{m}^{n}\right]+\frac{\left(1+\tau_{n}u_{m-1}^{n}\right)}{1+\lambda_{n}}u_{m-1}^{n}, \end{equation*} which implies that \begin{equation*} \left(1-\frac{2\lambda_{n}^{2}}{(1+\lambda_{n})(1+2\lambda_{n})}\right)u_{m-1}^{n+1}\leq \frac{\lambda_{n}(1+\tau_{n}u_{m}^{n})}{(1+2\lambda_{n})(1+\lambda_{n})}u_{m}^{n}+\frac{1+\tau_{n}u_{m-1}^{n}}{1+\lambda_{n}}u_{m-1}^{n}, \end{equation*} and then \begin{equation} u_{m-1}^{n+1}\leq \frac{\lambda_{n}(1+\tau_{n}u_{m}^{n})u_{m}^{n}+(1+2\lambda_{n})(1+\tau_{n}u_{m-1}^{n})u_{m-1}^{n}}{1+3\lambda_{n}}. \label{eq49} \end{equation} Now, substituting \eqref{eq49} into \eqref{eq46}, we get \begin{eqnarray*} u_{m-2}^{n+1}&\leq& \frac{\lambda_{n}}{1+\lambda_{n}}\left[\frac{\lambda_{n}(1+\tau_{n}u_{m}^{n})u_{m}^{n}+(1+2\lambda_{n})(1+\tau_{n}u_{m-1}^{n})u_{m-1}^{n}}{1+3\lambda_{n}}\right]+\frac{\left(1+\tau_{n}u_{m-2}^{n}\right)u_{m-2}^{n}}{1+\lambda_{n}}\\ &=&\frac{\left(1+\tau_{n}u_{m-2}^{n}\right)u_{m-2}^{n}}{1+\lambda_{n}}+\frac{\lambda_{n}^{2}(1+\tau_{n}u_{m}^{n})u_{m}^{n}+\lambda_{n}(1+2\lambda_{n})(1+\tau_{n}u_{m-1}^{n})u_{m-1}^{n}}{(1+\lambda_{n})(1+3\lambda_{n})}. \end{eqnarray*} Then \begin{equation} u_{m-2}^{n+1}\leq A_{n}u_{m-2}^{n}+B_{n}, \label{au} \end{equation} where we have set \begin{equation*} A_{n}=\frac{\left(1+\tau_{n}u_{m-2}^{n}\right)}{1+\lambda_{n}} \end{equation*} and \begin{equation*} B_{n}=\frac{\lambda_{n}^{2}(1+\tau_{n}u_{m}^{n})u_{m}^{n}+\lambda_{n}(1+2\lambda_{n})(1+\tau_{n}u_{m-1}^{n})u_{m-1}^{n}}{(1+\lambda_{n})(1+3\lambda_{n})}.
\end{equation*} Then the inequality \eqref{au} implies by iterations that \begin{eqnarray*} u_{m-2}^{n}&\leq & A_{n-1}u_{m-2}^{n-1}+B_{n-1}\\ &\leq& A_{n-1}A_{n-2}u_{m-2}^{n-2}+A_{n-1}B_{n-2}+B_{n-1}\\ &&\vdots\\ &\leq& u_{m-2}^{0}\prod_{k=0}^{n-1}{A_{k}}+\sum_{k=0}^{n-2}\left({B_{k}}\prod_{i=k+1}^{n-1}{A_{i}}\right)+B_{n-1} \\ &\leq& u_{m-2}^{0}\prod_{k=0}^{n}{A_{k}}+\sum_{k=0}^{n-2}{ B_{k}}\prod_{i=0}^{n-1}{A_{i}}+B_{n-1}\\ &\leq& u_{m-2}^{0}\prod_{k=0}^{n}{A_{k}}+\sum_{k=0}^{n-2}{ B_{k}}\prod_{k=0}^{n}{A_{k}}+B_{n-1}\left(\prod_{k=0}^{n}{A_{k}}\right)\\ &\leq& u_{m-2}^{0}\prod_{k=0}^{n}{A_{k}}+\sum_{k=0}^{n-1}{ B_{k}}\prod_{k=0}^{n}{A_{k}}\\ &\leq& u_{m-2}^{0}\prod_{k=0}^{n}{A_{k}}+\sum_{k=0}^{n}{ B_{k}}\prod_{k=0}^{n}{A_{k}}\\ &\leq& \left(u_{m-2}^{0}+\sum_{k=0}^{n}{ B_{k}}\right)\prod_{k=0}^{n}{A_{k}}. \end{eqnarray*} To ensure boundedness of $u_{m-2}^{n}$ we shall prove that \begin{equation*} \sum_{n\geq 0}{ B_{n}}<+\infty \text{ and } \prod_{n\geq0}{A_{n}}<+\infty. \end{equation*} To do this, we need the next lemma: \begin{lem} We define the sequence $a_{n}=\dfrac{u_{m-1}^{n}}{u_{m}^{n}}.$ \begin{enumerate} \item For $p=2$ and $q=1$, we assume that $\sup\limits_{n}u_{m-1}^{n}>\dfrac{3}{h^{2}}(1+\tau),$ then $(a_{n})_{n}$ converges to 0. \item For $p>2$ and $q<\dfrac{2(p-1)}{p}$, we have \begin{enumerate} \item $(a_{n})_{n}$ converges to 0. \item $\lim\limits_{n\rightarrow +\infty}\dfrac{a_{n+1}}{a_{n}}=\dfrac{1}{1+\tau}.$ \item $\lim\limits_{n\rightarrow +\infty}\dfrac{u^{n+1}_{m}}{u^{n}_{m}}=1+\tau>1.$ \end{enumerate} \end{enumerate} \end{lem} \begin{proof} First of all, we look for some useful relations between $a_{n}$ and $a_{n+1}$. 
We recall \eqref{aa} for $p\geq 2$ and $1\leq q< \dfrac{2(p-1)}{p}<\dfrac{2p}{p+1}.$ Using the same calculations as for \eqref{eq49}, we obtain that \eqref{aa} implies \begin{equation} u_{m-1}^{n+1}\leq \frac{\lambda_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}{1+3\lambda_{n}}. \label{eq411} \end{equation} Using \eqref{eq43}, we get \begin{eqnarray} \nonumber a_{n+1}& =&\dfrac{u_{m-1}^{n+1}}{u_{m}^{n+1}} \\ \nonumber &=& \dfrac{1+2\lambda_{n}}{\dfrac{2\lambda_{n}u_{m-1}^{n+1}+(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}}{u_{m-1}^{n+1}}}\\ &= &\dfrac{1+2\lambda_{n}}{2\lambda_{n}+\dfrac{(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}}{u_{m-1}^{n+1}}}. \label{star} \end{eqnarray} By substituting \eqref{eq411} into \eqref{star} we get: \begin{eqnarray*} a_{n+1}& \leq& (1+2\lambda_{n}) \left\{2\lambda_{n}+\frac{(1+3\lambda_{n})(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}}{\lambda_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}\right\}^{-1} \\ & = & \frac{\lambda_{n}(1+2\lambda_{n})(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+(1+2\lambda_{n})^{2}(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}{(1+3\lambda_{n}+2\lambda_{n}^{2})(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+2\lambda_{n}(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}\\ & \leq& \frac{\lambda_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}{(1+\lambda_{n})(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+2\lambda_{n}(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}\\ & \leq& \frac{\lambda_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})+(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})a_{n}}{(1+\lambda_{n})(1+\tau_{n}(u_{m}^{n})^{p-1})+2\lambda_{n}(1+\tau_{n}(u_{m-1}^{n})^{p-1})a_{n}}.
\end{eqnarray*} But we have $\tau_{n}=\dfrac{\tau}{(u_{m}^{n})^{p-1}}$, so \begin{equation} a_{n+1}\leq \frac{\lambda_{n}(1+\tau)+(1+2\lambda_{n})(1+\tau(a_{n})^{p-1})a_{n}}{(1+\lambda_{n})(1+\tau)+2\lambda_{n}(1+\tau(a_{n})^{p-1})a_{n}}. \label{eq412} \end{equation} Finally, we get \begin{equation} \frac{a_{n+1}}{a_{n}}\leq \frac{\lambda_{n}(1+\tau)(a_{n})^{-1}+(1+2\lambda_{n})(1+\tau(a_{n})^{p-1})}{(1+\lambda_{n})(1+\tau)+2\lambda_{n}(1+\tau(a_{n})^{p-1})a_{n}}. \label{eq413} \end{equation} On the other hand, using \eqref{aa} and \eqref{bb}, we get \begin{equation*} u_{m-1}^{n+1} \geq \frac{\lambda_{n}u_{m}^{n+1}+(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}{1+2\lambda_{n}}-\frac{\tau_{n}}{h_{n}^{q}(1+2\lambda_{n})}(u_{m}^{n})^{q-1}u_{m}^{n+1}. \end{equation*} By using \eqref{eq43}, we have \begin{align*} u_{m-1}^{n+1} & \geq \frac{2\lambda_{n}^{2}u_{m-1}^{n+1}+\lambda_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}{(1+2\lambda_{n})^{2}}\\ &\ \ \ \ \ \ -\frac{2\lambda_{n}\tau_{n}(u_{m}^{n})^{q-1}u_{m-1}^{n+1}}{h_{n}^{q}(1+2\lambda_{n})^{2}}-\frac{\tau_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})(u_{m}^{n})^{q}}{h_{n}^{q}(1+2\lambda_{n})^{2}}, \end{align*} which implies \begin{eqnarray*} &&\left(1-\frac{2\lambda_{n}^{2}}{(1+2\lambda_{n})^{2}}+\frac{2\lambda_{n}\tau_{n}(u_{m}^{n})^{q-1}}{h_{n}^{q}(1+2\lambda_{n})^{2}}\right)u_{m-1}^{n+1}\\ &\geq& \frac{\lambda_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}{(1+2\lambda_{n})^{2}}\\ &&\ \ \ \ -\frac{\tau_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})(u_{m}^{n})^{q}}{h_{n}^{q}(1+2\lambda_{n})^{2}}, \end{eqnarray*} and then \begin{eqnarray} u_{m-1}^{n+1} \nonumber &\geq& \frac{\lambda_{n}h_{n}^{q}(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}+h_{n}^{q}(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}{h_{n}^{q}(1+2\lambda_{n})^{2}-2h_{n}^{q}\lambda_{n}^{2}+2\lambda_{n}\tau_{n}(u_{m}^{n})^{q-1}}\\ && \ \
-\frac{\tau_{n}(1+\tau_{n}(u_{m}^{n})^{p-1})(u_{m}^{n})^{q}}{{h_{n}^{q}(1+2\lambda_{n})^{2}-2h_{n}^{q}\lambda_{n}^{2}+2\lambda_{n}\tau_{n}(u_{m}^{n})^{q-1}}}. \label{etoile} \end{eqnarray} Using \eqref{eq43} and \eqref{etoile}, we get \begin{eqnarray*} && a_{n+1}\\ &=&\dfrac{u_{m-1}^{n+1}}{u_{m}^{n+1}}\\ &= & (1+2\lambda_{n})\left\{2\lambda_{n}+\frac{(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}}{u_{m-1}^{n+1}}\right\}^{-1}\\ &\geq & \frac{(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}\left(\lambda_{n}h_{n}^{q}-\tau_{n}(u_{m}^{n})^{q-1}\right)+h_{n}^{q}(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}}{2\lambda_{n}h_{n}^{q}(1+\tau_{n}(u_{m-1}^{n})^{p-1})u_{m-1}^{n}+h_{n}^{q}(1+2\lambda_{n})(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}}. \end{eqnarray*} Then we can deduce that \begin{equation*} \frac{a_{n+1}}{a_{n}} \geq \frac{(1+\tau_{n}(u_{m}^{n})^{p-1})(\lambda_{n}-\tau_{n}h_{n}^{-q}(u_{m}^{n})^{q-1})a_{n}^{-1}+(1+2\lambda_{n})(1+\tau_{n}(u_{m-1}^{n})^{p-1})}{(1+\tau_{n}(u_{m}^{n})^{p-1})(1+2\lambda_{n})+2\lambda_{n}(1+\tau_{n}(u_{m-1}^{n})^{p-1})a_{n}}. \end{equation*} Finally, we get \begin{equation} \frac{a_{n+1}}{a_{n}} \geq \frac{(1+\tau)(\lambda_{n}-\tau h_{n}^{-q}(u_{m}^{n})^{q-p})a_{n}^{-1}+(1+2\lambda_{n})(1+\tau(a_{n})^{p-1})}{(1+\tau)(1+2\lambda_{n})+2\lambda_{n}(1+\tau(a_{n})^{p-1})a_{n}}. \label{eq414} \end{equation} Next, we prove that the sequence $(a_{n})_{n}$ converges to 0. To prove that the limit exists, it suffices to show that $\dfrac{a_{n+1}}{a_{n}}<1$, since $(a_{n})_{n}$ is then decreasing and bounded below by 0.\\ But \begin{equation*} \frac{a_{n+1}}{a_{n}}\leq \frac{\lambda_{n}(1+\tau)(a_{n})^{-1}+(1+2\lambda_{n})(1+\tau(a_{n})^{p-1})}{(1+\lambda_{n})(1+\tau)+2\lambda_{n}(1+\tau(a_{n})^{p-1})a_{n}}.
\end{equation*} Let \begin{equation*} A:=(1+\tau)\lambda_{n}+(1+2\lambda_{n})(1+\tau(a_{n})^{p-1})a_{n}-(1+\lambda_{n})(1+\tau)a_{n}-2\lambda_{n}(1+\tau(a_{n})^{p-1})a_{n}^{2}.\\ \end{equation*} We shall prove that $A< 0.$\\ \textbf{(1)} First of all, we can see that, for $p=2$ and $q=1$ \begin{eqnarray*} A&=&(1+\tau)\lambda_{n}+(1+2\lambda_{n})(1+\tau a_{n})a_{n}-(1+\lambda_{n})(1+\tau)a_{n}-2\lambda_{n}(1+\tau a_{n})a_{n}^{2}\\ &=& (1+\tau)\lambda_{n}+(1+\tau a_{n})a_{n}+2\lambda_{n}(1+\tau a_{n})a_{n}-(1+\tau)a_{n}-\lambda_{n}(1+\tau)a_{n}-2\lambda_{n}(1+\tau a_{n})a_{n}^{2}\\ &=&\lambda_{n}\left(1+\tau+2a_{n}(1+\tau a_{n})-(1+\tau)a_{n}-2a_{n}^{2}(1+\tau a_{n}) \right)+\tau a_{n}^{2}-\tau a_{n}\\ &=&\lambda_{n}\left((1+\tau)(1-a_{n})+2a_{n}(1+\tau a_{n})(1-a_{n}) \right)+\tau a_{n}(a_{n}-1)\\ &=&(1-a_{n})\lambda_{n}(1+\tau +2a_{n}(1+\tau a_{n}))+\tau a_{n}(a_{n}-1)\\ \end{eqnarray*} Using \begin{equation*} \left\{ \begin{array}{lll} a_{n}=\dfrac{u_{m-1}^{n}}{u_{m}^{n}},\\ 0<a_{n}<1,\\ a_{n}<u_{m-1}^{n},\\ \tau=\tau_{n}u_{m}^{n},\\ h_{n}=h,\\ \end{array} \right. 
\end{equation*} we get \begin{eqnarray*} A&=&(1-a_{n})\lambda_{n}(1+\tau +2a_{n}(1+\tau a_{n}))+\tau a_{n}(a_{n}-1) \\ & <& \lambda_{n}(1-a_{n})(1+\tau +2(1+\tau))+\tau_{n} u_{m-1}^{n}(a_{n}-1)\\ & < &(1-a_{n})\tau_{n} (3h^{-2}(1+\tau)-u_{m-1}^{n}).\\ \end{eqnarray*} Using the condition $\sup\limits_{n}\left\{u_{m-1}^{n}\right\}>3h^{-2}(1+\tau)$, we can see that $A< 0$, so that $0\leq a_{n+1}< a_{n}<1$, which implies that $\lim\limits_{n\rightarrow +\infty}a_{n}=a$ exists and satisfies $0\leq a<1.$\\ \textbf{(2)} For $p>2$ and $q<\dfrac{2p-2}{p},$ we can see that \begin{equation*} \lambda_{n}-(1+\lambda_{n})a_{n}<0. \end{equation*} Indeed, if not, then \begin{equation*} \dfrac{\lambda_{n}}{a_{n}}\geq 1+\lambda_{n} \Rightarrow \frac{\tau 2^{\frac{-2}{2-q}}}{u_{m-1}^{n}}\left(u_{m}^{n}\right)^{\frac{2-2p+pq}{2-q}}\geq 1+\lambda_{n}, \end{equation*} which is a contradiction because $q<\frac{2p-2}{p}.$\\ Now, let \begin{equation*} A_{1}=\frac{1+\tau}{1+\tau a_{n}^{p-1}}>1 \end{equation*} and \begin{equation*} A_{2}=\frac{2\lambda_{n}a_{n}-(1+2\lambda_{n})}{\lambda_{n}-(1+\lambda_{n})a_{n}}<1. \end{equation*} Then it is clear that \begin{eqnarray*} && A_{1}> a_{n}A_{2}.\\ &&\Rightarrow \frac{1+\tau}{1+\tau a_{n}^{p-1}} > \frac{a_{n}(2\lambda_{n}a_{n}-(1+2\lambda_{n}))}{\lambda_{n}-(1+\lambda_{n})a_{n}}.\\ &&\Rightarrow (1+\tau)(\lambda_{n}-(1+\lambda_{n})a_{n}) < a_{n} (1+\tau a_{n}^{p-1})(2\lambda_{n}a_{n}-(1+2\lambda_{n})).\\ &&\Rightarrow (1+\tau)\lambda_{n}+(1+2\lambda_{n})a_{n}(1+\tau a_{n}^{p-1})-(1+\tau)(1+\lambda_{n})a_{n}-2\lambda_{n}a_{n}^{2}(1+\tau a_{n}^{p-1})< 0.\\ &&\Rightarrow A< 0. \end{eqnarray*} So $0\leq a_{n+1} < a_{n} < 1.$\\ We now prove that $a=0$ for all $p>1$ and $1\leq q<\dfrac{2(p-1)}{p}$. Arguing by contradiction, we suppose that $0<a<1$. Letting $n\rightarrow \infty $ in \eqref{eq412} we obtain \begin{equation*} a \leq \dfrac{1+\tau a^{p-1}}{1+\tau}a <a, \end{equation*} which is a contradiction.
This proves that $a=0.$\\ Next, we prove that $\lim\limits_{n\rightarrow+\infty}\dfrac{a_{n+1}}{a_{n}}=\dfrac{1}{1+\tau}$ for $p>2$ and $q<\dfrac{2(p-1)}{p}$.\\ By means of \eqref{eq413} we get \begin{equation} \frac{a_{n+1}}{a_{n}}\leq \frac{\lambda_{n}(1+\tau)(a_{n})^{-1}+(1+2\lambda_{n})(1+\tau(a_{n})^{p-1})}{(1+\lambda_{n})(1+\tau)+2\lambda_{n}(1+\tau(a_{n})^{p-1})a_{n}}, \label{eq416} \end{equation} but \begin{equation*} \lambda_{n}(1+\tau)(a_{n})^{-1}=c_{1}(u_{m-1}^{n})^{-1}(u_{m}^{n})^{\frac{-2p+pq+2}{2-q}}, \end{equation*} where $c_{1}=\dfrac{\tau(1+\tau)}{4^{\frac{1}{2-q}}}.$\\ Since for $q<\dfrac{2p-2}{p}$ we have $\dfrac{-2p+pq+2}{2-q}<0$, we obtain \begin{equation} \lim_{n\rightarrow +\infty}\dfrac{a_{n+1}}{a_{n}}\leq \dfrac{1}{1+\tau}. \label{h} \end{equation} On the other hand, using \eqref{eq414} we get \begin{equation*} \frac{a_{n+1}}{a_{n}} \geq \frac{(1+\tau)(\lambda_{n}-\tau h_{n}^{-q}(u_{m}^{n})^{q-p})a_{n}^{-1}+(1+2\lambda_{n})(1+\tau(a_{n})^{p-1})}{(1+\tau)(1+2\lambda_{n})+2\lambda_{n}(1+\tau(a_{n})^{p-1})a_{n}}, \label{eq417} \end{equation*} but \begin{equation*} (\lambda_{n}-\tau h_{n}^{-q}(u_{m}^{n})^{q-p})a_{n}^{-1}=(u_{m-1}^{n})^{-1}\left(c_{1}\left(u_{m}^{n}\right)^{-p+2+\frac{2q-2}{2-q}}-c_{2}\left(u_{m}^{n}\right)^{q-p+1+\frac{-q(-q+1)}{2-q}}\right), \end{equation*} where $c_{1},c_{2} \in \mathbb{R}.$\\ Since for $q<\dfrac{2(p-1)}{p}$ we have \begin{equation*} \left(\lambda_{n}-\tau h_{n}^{-q}(u_{m}^{n})^{q-p}\right)a_{n}^{-1}\rightarrow 0 \text{ as } n\rightarrow +\infty, \end{equation*} we obtain \begin{equation} \lim_{n\rightarrow +\infty}\dfrac{a_{n+1}}{a_{n}}\geq \dfrac{1}{1+\tau}. \label{hh} \end{equation} Finally, from \eqref{h} and \eqref{hh} we deduce that \begin{equation*} \lim_{n\rightarrow +\infty}\dfrac{a_{n+1}}{a_{n}}= \dfrac{1}{1+\tau}<1.
\end{equation*} To finish the proof of Lemma 2.3, we prove that for all $p>2$ and $q<\dfrac{2(p-1)}{p}$ we have $\lim\limits_{n\rightarrow +\infty}\dfrac{u_{m}^{n+1}}{u_{m}^{n}}=1+\tau.$\\ From \eqref{eq43}, we know that \begin{equation*} (1+2\lambda_{n})u_{m}^{n+1}-2\lambda_{n}u_{m-1}^{n+1}=(1+\tau_{n}(u_{m}^{n})^{p-1})u_{m}^{n}, \end{equation*} which implies \begin{equation*} 1+2\lambda_{n}-2\lambda_{n}\frac{u_{m-1}^{n+1}}{u_{m}^{n+1}}=(1+\tau_{n}(u_{m}^{n})^{p-1})\frac{u_{m}^{n}}{u_{m}^{n+1}}. \end{equation*} Then \begin{equation*} 1+2\lambda_{n}-2\lambda_{n}a_{n+1}=(1+\tau)\frac{u_{m}^{n}}{u_{m}^{n+1}}. \end{equation*} So \begin{equation*} 1=(1+\tau)\lim_{n\rightarrow +\infty}\frac{u_{m}^{n}}{u_{m}^{n+1}}. \end{equation*} This implies $\lim\limits_{n\rightarrow +\infty}\dfrac{u_{m}^{n+1}}{u_{m}^{n}}=1+\tau>1.$ This completes the proof of Lemma 2.3. \end{proof} Now we can finish the proof of Theorem 2.2. We have shown that \begin{equation*} u_{m-2}^{n} \leq \left(u_{m-2}^{0}+\sum_{k=0}^{n}{ B_{k}}\right)\prod_{k=0}^{n}{A_{k}}, \end{equation*} where \begin{equation*} A_{n}=\frac{\left(1+\tau_{n}u_{m-2}^{n}\right)}{1+\lambda_{n}} \end{equation*} and \begin{equation*} B_{n}=\frac{\lambda_{n}^{2}(1+\tau_{n}u_{m}^{n})u_{m}^{n}+\lambda_{n}(1+2\lambda_{n})(1+\tau_{n}u_{m-1}^{n})u_{m-1}^{n}}{(1+\lambda_{n})(1+3\lambda_{n})}. \end{equation*} Using that $u_{m}^{n}\gg 1$, we can see that for $p=2$ and $q=1$ we have \begin{equation*} \tau_{n}=\dfrac{\tau}{u_{m}^{n}} \text{ and } h_{n}=h.
\end{equation*} Then \begin{equation*} A_{n} \leq 1+\tau_{n}u_{m-2}^{n}= 1+\tau \frac{u_{m-2}^{n}}{u_{m}^{n}}\leq 1+\tau \frac{u_{m-1}^{n}}{u_{m}^{n}}=1+\tau a_{n}, \end{equation*} and \begin{align*} B_{n}& =\frac{\lambda_{n}^{2}(1+\tau_{n}u_{m}^{n})u_{m}^{n}+\lambda_{n}(1+2\lambda_{n})(1+\tau_{n}u_{m-1}^{n})u_{m-1}^{n}}{(1+\lambda_{n})(1+3\lambda_{n})} \\ & \leq \lambda_{n}^{2}(1+\tau_{n}u_{m}^{n})u_{m}^{n}+\lambda_{n}(1+2\lambda_{n})(1+\tau_{n}u_{m-1}^{n})u_{m-1}^{n}\\ & \leq \lambda_{n}^{2}(1+\tau)u_{m}^{n}+\lambda_{n}(1+2\lambda_{n})(1+\tau_{n}u_{m}^{n})u_{m-1}^{n}\\ & = \frac{\tau^{2}}{h^{4}}(1+\tau)\frac{1}{u_{m}^{n}}+\frac{\tau}{h^{2}u_{m}^{n}}(1+2\frac{\tau}{h^{2}u_{m}^{n}})(1+\tau)u_{m-1}^{n}\\ & \leq c^{2}(1+\tau)\frac{u_{m-1}^{n}}{u_{m}^{n}}+c(1+2c)(1+\tau)\frac{u_{m-1}^{n}}{u_{m}^{n}}\\ &\leq c(1+\tau)\left(c+(1+2c)\right)\frac{u_{m-1}^{n}}{u_{m}^{n}}\\ &\leq c(1+\tau)(1+3c)\frac{u_{m-1}^{n}}{u_{m}^{n}}, \end{align*} with $c=\dfrac{\tau}{h^{2}}.$\\ But we have \begin{equation*} \lim_{n\rightarrow +\infty}\frac{a_{n+1}}{a_{n}}<1 \text{ and } a_{n}>0, \end{equation*} so by the ratio test \begin{equation*} 0< \sum_{n\geq0}{a_{n}}< +\infty. \end{equation*} On the other hand, for all $c>0$, we have $\sum\limits_{n\geq0}{ca_{n}}< +\infty$, hence \begin{equation*} 1< \prod_{n\geq 0} (1+ca_{n})< +\infty. \end{equation*} We deduce from this that \begin{equation*} 0<\sum_{n\geq 0}{ B_{n}}\leq c(1+\tau)(1+3c)\sum_{n\geq0}{a_{n}}<+\infty, \end{equation*} and \begin{equation*} 1<\prod_{n\geq0}{A_{n}}\leq \prod_{n\geq 0} (1+\tau a_{n})<+\infty, \end{equation*} which implies that \begin{equation*} \lim_{n\rightarrow +\infty}u_{m-2}^{n}<+\infty. \end{equation*} Now we will prove the second result of Theorem 2.2, that is: \begin{equation*} \text{ If } p>2 \text{ and } q<\frac{2(p-1)}{p} \text{ then } \lim_{n\rightarrow +\infty} u_{m-1}^{n}< +\infty.
\end{equation*} In \eqref{approchee}, we put $j=m-1$ and consider the quantity \begin{align*} u_{m-1}^{n+1}-u_{m-1}^{n} & \leq \lambda_{n}(u_{m-2}^{n+1}-2u_{m-1}^{n+1}+u_{m}^{n+1})+\tau_{n}(u_{m-1}^{n})^{p}\\ & =G_{n}+H_{n}, \end{align*} where \begin{eqnarray*} G_{n}&=& \lambda_{n}(u_{m-2}^{n+1}-2u_{m-1}^{n+1}+u_{m}^{n+1})\\ &=&c(u_{m}^{n})^{-p+1+\frac{2(q-1)}{2-q}}(u_{m-2}^{n+1}-2u_{m-1}^{n+1}+u_{m}^{n+1})\\ &=&c(u_{m}^{n})^{-p+1+\frac{2(q-1)}{2-q}}u_{m}^{n+1}(\frac{u_{m-2}^{n+1}}{u_{m}^{n+1}}-2\frac{u_{m-1}^{n+1}}{u_{m}^{n+1}}+1)\\ &=& c(u_{m}^{n})^{-p+2+\frac{2(q-1)}{2-q}}\frac{u_{m}^{n+1}}{u_{m}^{n}}(\frac{u_{m-2}^{n+1}}{u_{m-1}^{n+1}}a_{n+1}-2a_{n+1}+1)\\ &=&c(u_{m}^{n})^{-p+2+\frac{2q-2}{2-q}}\frac{u_{m}^{n+1}}{u_{m}^{n}}(1-a_{n+1}(2-\frac{u_{m-2}^{n+1}}{u_{m-1}^{n+1}}))\\ & >&0, \end{eqnarray*} with $c:=\dfrac{\tau}{2^{\frac{2}{2-q}}}$ and \begin{equation*} H_{n}=\tau_{n}(u_{m-1}^{n})^{p}=\tau u_{m}^{n}(a_{n})^{p}>0. \end{equation*} Therefore, using Lemma 2.3, we get \begin{equation*} \lim_{n\rightarrow +\infty}\dfrac{G_{n+1}}{G_{n}} =\left(1+\tau\right)^{-p+2+\frac{2q-2}{2-q}}<1 \text{ for } q<\dfrac{2(p-1)}{p}, \end{equation*} which implies \begin{equation*} \sum_{n\geq0}{G_{n}}<+\infty. \end{equation*} Also \begin{equation*} \lim_{n\rightarrow +\infty}\frac{H_{n+1}}{H_{n}} =(1+\tau)^{-p+1}<1 \text{ for } p>2, \end{equation*} which implies \begin{equation*} \sum_{n\geq0}{H_{n}}<+\infty. \end{equation*} Hence, the boundedness of $u_{m-1}^{n}$ follows from \begin{eqnarray*} 0<u_{m-1}^{n}&=& \sum_{k=1}^{n}{(u_{m-1}^{k}-u_{m-1}^{k-1})}+u_{m-1}^{0} \\ & \leq & \sum_{k=1}^{n}{(G_{k-1}+H_{k-1})}+u_{m-1}^{0} \\ & \leq & \sum_{k=0}^{+\infty}{(G_{k}+H_{k})}+u_{m-1}^{0} \\ & < & +\infty. \end{eqnarray*} Thus we have completed the proof of Theorem 2.2.
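The boundedness mechanism used in this proof, namely an iteration $x_{n+1}\leq A_{n}x_{n}+B_{n}$ with $A_{n}\geq 1$, $\prod_{n}A_{n}<+\infty$ and $\sum_{n}B_{n}<+\infty$, can be checked on synthetic data. A minimal sketch (the sequences below are illustrative and are not the scheme's actual $A_{n}$ and $B_{n}$):

```python
import numpy as np

# Iterating x_{n+1} = A_n x_n + B_n gives
#   x_n <= (x_0 + sum_k B_k) * prod_k A_k
# whenever A_n >= 1, exactly as in the proof of Theorem 2.2.
n = 200
k = np.arange(1, n + 1, dtype=float)
A = 1.0 + 1.0 / k**2      # sum of log(A_n) converges, so prod A_n is finite
B = 1.0 / k**2            # sum B_n is finite
x = 1.0                   # x_0
for An, Bn in zip(A, B):
    x = An * x + Bn
bound = (1.0 + B.sum()) * A.prod()
```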
\end{proof} \section{Convergence} \noindent In this section, we prove the convergence of the numerical solution given by \eqref{approchee} to the nodal values of the solution $u$ of \eqref{exacte} on each fixed time interval $[0,T]$, $T<T^{*}$, as long as the smoothness of $u$ is guaranteed. \begin{lem} Let $u$ be the classical solution of \eqref{exacte} and $U^{n}$ be the numerical solution of \eqref{approchee}. Let $T$ be an arbitrary number such that $0<T<T^{*}$. Then there exist positive constants $C_{0},\ C_{1}$, depending only on $T$ and $u_{0}$, such that\\ \textbf{(A)} For $p>2$ and $q<\dfrac{2(p-1)}{p}$ \begin{equation*} \max_{1\leq j \leq m-2}\left|u_{j}^{n}-u(x_{j},t^{n})\right|\leq C_{0}h^{3-q} \end{equation*} holds as long as $t_{n}<T$.\\ \textbf{(B)} For $p>1$ and $q=1$ \begin{equation*} \max_{1\leq j \leq m-1}\left|u_{j}^{n}-u(x_{j},t^{n})\right|\leq C_{1}h^{2} \end{equation*} holds as long as $t_{n}<T$. \end{lem} \noindent Before studying local convergence, we prove the consistency of the scheme. \subsection{Consistency} For all $1\leq j \leq N_{n},$ we define \begin{eqnarray*} \epsilon_{j}^{n}& =&\frac{u(x_{j},t^{n+1})-u(x_{j},t^{n})}{\tau_{n}}-\frac{u(x_{j+1},t^{n+1})-2u(x_{j},t^{n+1})+u(x_{j-1},t^{n+1})}{h_{n}^{2}} \\ &-&\left(u(x_{j},t^{n})\right)^{p}+\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q-1}\left|\frac{u(x_{j+1},t^{n+1})-u(x_{j-1},t^{n+1})}{2h_{n}}\right|. \end{eqnarray*} Using Taylor's formula, we obtain \begin{eqnarray} \frac{\partial u}{\partial t}(x_{j},t^{n})&=&\frac{u(x_{j},t^{n+1})-u(x_{j},t^{n})}{\tau_{n}}-\frac{\tau_{n}}{2}\frac{\partial^{2} u}{\partial t^{2}}(x_{j},t^{n}+\tau_{n}\theta_{1}). \label{etoi}\\ \frac{\partial u}{\partial x}(x_{j},t^{n})&=&\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}-\frac{h_{n}^{2}}{3}\frac{\partial^{3} u}{\partial x^{3}}(x_{j}+h_{n}\theta_{2},t^{n})\nonumber \\ &&\ \ \ -\frac{h_{n}^{2}}{3}\frac{\partial^{3} u}{\partial x^{3}}(x_{j}-h_{n}\theta_{3},t^{n}).
\label{E}\\ \frac{\partial^{2} u}{\partial x^{2}}(x_{j},t^{n})&=&\frac{u(x_{j+1},t^{n})-2u(x_{j},t^{n})+u(x_{j-1},t^{n})}{h_{n}^{2}}-\frac{h_{n}^{2}}{24}\frac{\partial^{4} u}{\partial x^{4}}(x_{j}+h_{n}\theta_{4},t^{n}) \nonumber \\ &&\ \ \ -\frac{h_{n}^{2}}{24}\frac{\partial^{4} u}{\partial x^{4}}(x_{j}-h_{n}\theta_{5},t^{n}).\nonumber \\ \nonumber \frac{\partial^{2} u}{\partial x^{2}}(x_{j},t^{n})&=&\frac{\partial^{2} u}{\partial x^{2}}(x_{j},t^{n+1})-\tau_{n}\frac{\partial^{3} u}{\partial t \partial x^{2}}(x_{j},t^{n}+\tau_{n}\theta_{6})\\ \nonumber &=&\frac{u(x_{j+1},t^{n+1})-2u(x_{j},t^{n+1})+u(x_{j-1},t^{n+1})}{h_{n}^{2}}+\frac{h_{n}^{2}}{24}\frac{\partial^{4} u}{\partial x^{4}}(x_{j}+h_{n}\theta_{7},t^{n+1})\\ &&+\frac{h_{n}^{2}}{24}\frac{\partial^{4} u}{\partial x^{4}}(x_{j}-h_{n}\theta_{8},t^{n+1})+\tau_{n}\frac{\partial^{3} u}{\partial t \partial x^{2}}(x_{j},t^{n}+\tau_{n}\theta_{6}). \label{et} \end{eqnarray} \noindent where $0<\theta_{i}<1$ for $i=1,...,8.$\\ We define \begin{equation*} F=\left|\frac{\partial u}{\partial x}(x_{j},t^{n})\right|^{q}-\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q}. 
\end{equation*} Using the mean value theorem together with the monotonicity and the symmetry of the exact solution proved in Theorems 2.3 and 2.4 of \cite{hani}, there exists $A$ between $\dfrac{\partial u}{\partial x}(x_{j},t^{n})$ and\\ $\dfrac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}$ such that \begin{eqnarray*} &&\left| \left|\frac{\partial u}{\partial x}(x_{j},t^{n})\right|^{q}-\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q}\right| \\ &=&\left| q\left|A\right|^{q-1}\left(\frac{\partial u}{\partial x}(x_{j},t^{n})-\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right)\right| \\ &=& q\left|A\right|^{q-1} o(h_{n}^{2}), \end{eqnarray*} with \begin{equation*} \left|\frac{\partial u}{\partial x}(x_{j},t^{n})-A\right|\leq \left|\frac{\partial u}{\partial x}(x_{j},t^{n})-\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right| \leq o(h_{n}^{2}). \end{equation*} Since $\left|\dfrac{\partial u}{\partial x}\right|$ is bounded before blow-up by \cite{chipotweissler}, we can deduce that $A$ is bounded too. So we can write \begin{eqnarray} \left|\frac{\partial u}{\partial x}(x_{j},t^{n})\right|^{q}&=&\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q}+o(h_{n}^{2})\label{starstar}\\ \nonumber &=&\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q-1}\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|+o(h_{n}^{2})\\ \nonumber &=&\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q-1}\left|\frac{\partial u}{\partial x}(x_{j},t^{n})+o(h_{n}^{2})\right|+o(h_{n}^{2})\\ \nonumber &=&\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q-1}\left|\frac{\partial u}{\partial x}(x_{j},t^{n})\right|+o(h_{n}^{2})\\ \nonumber &=&\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q-1}\left|\frac{\partial u}{\partial x}(x_{j},t^{n+1})\right|+o(\tau_{n})+o(h_{n}^{2}).
\end{eqnarray} Then \begin{eqnarray} \left|\frac{\partial u}{\partial x}(x_{j},t^{n})\right|^{q}&=&\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q-1}\left|\frac{u(x_{j+1},t^{n+1})-u(x_{j-1},t^{n+1})}{2h_{n}}\right| \nonumber \\ &&\ \ \ \ +o(\tau_{n})+o(h_{n}^{2}). \label{etoietoi} \end{eqnarray} Replacing \eqref{etoi}, \eqref{et} and \eqref{etoietoi} in $\epsilon_{j}^{n}$, we obtain \begin{eqnarray*} \epsilon_{j}^{n}&=&\frac{\partial u}{\partial t}(x_{j},t^{n})+\frac{\tau_{n}}{2}\frac{\partial^{2} u}{\partial t^{2}}(x_{j},t^{n}+\tau_{n}\theta_{1})- \frac{\partial^{2} u}{\partial x^{2}}(x_{j},t^{n})-\tau_{n}\frac{\partial^{3} u}{\partial t \partial x^{2}}(x_{j},t^{n}+\tau_{n}\theta_{4})\\ &&\ \ \ \ -\frac{h_{n}^{2}}{24}\frac{\partial^{4} u}{\partial x^{4}}(x_{j}+h_{n}\theta_{5},t^{n+1})-\frac{h_{n}^{2}}{24}\frac{\partial^{4} u}{\partial x^{4}}(x_{j}-h_{n}\theta_{6},t^{n+1})-\left(u(x_{j},t^{n})\right)^{p}\\ &&\ \ \ \ \ \ +\left|\frac{\partial u}{\partial x}(x_{j},t^{n})\right|^{q}+o(\tau_{n})+o(h_{n}^{2}).
\end{eqnarray*} If we put \begin{equation*} R_{1}=\max_{x,t}\left|\frac{1}{2}\frac{\partial^{2} u}{\partial t^{2}}(x,t)+\frac{\partial^{3} u}{\partial t \partial x^{2}}(x,t)\right| \text{ and } R_{2}=\frac{1}{12}\max_{x,t}\left|\frac{\partial^{4} u}{\partial x^{4}}(x,t)\right|, \end{equation*} we can deduce that \begin{equation*} \max_{1\leq j \leq N_{n}}\epsilon_{j}^{n}\leq C_{1}\tau_{n}+C_{2}h_{n}^{2}, \end{equation*} with $C_{1}\tau_{n}=R_{1}\tau_{n}+o(\tau_{n})$ and $C_{2}h_{n}^{2}=R_{2}h_{n}^{2}+o(h_{n}^{2}).$ \subsection{Local convergence} Let $e_{j}^{n}=u_{j}^{n}-u(x_{j},t^{n})$ for $j=1,...,m-2.$\\ \textbf{(A):} Using \eqref{etoi}, \eqref{et} and \ref{starstar} we get \begin{eqnarray*} &&\frac{u(x_{j},t^{n+1})-u(x_{j},t^{n})}{\tau_{n}}-\frac{u(x_{j+1},t^{n+1})-2u(x_{j},t^{n+1})+u(x_{j-1},t^{n+1})}{h_{n}^{2}} -\left(u(x_{j},t^{n})\right)^{p}\\ &&\ \ \ \ \ \ \ +\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q}\\ &=&\frac{\tau_{n}}{2}\frac{\partial^{2}u}{\partial t^{2}}(x_{j},t^{n}+\theta_{1}\tau_{n})-\tau_{n}\frac{\partial^{3} u}{\partial t \partial x^{2}}(x_{j},t^{n}+\theta_{6}\tau_{n})\\ &&\ \ \ \ \ \ \ -\frac{h_{n}^{2}}{24}\left\{\frac{\partial^{4}u}{\partial x^{4}}(x_{j}+\theta_{7}h_{n},t^{n+1})+\frac{\partial^{4}u}{\partial x^{4}}(x_{j}-\theta_{8}h_{n},t^{n+1})\right\}+o(h_{n}^{2}). 
\end{eqnarray*} Let \begin{eqnarray*} r_{j}^{n}&:=&-\frac{\tau_{n}}{2}\frac{\partial^{2}u}{\partial t^{2}}(x_{j},t^{n}+\theta_{1}\tau_{n})+\tau_{n}\frac{\partial^{3} u}{\partial t \partial x^{2}}(x_{j},t^{n}+\theta_{6}\tau_{n})\\ &&\ \ \ \ \ +\frac{h_{n}^{2}}{24}\left\{\frac{\partial^{4}u}{\partial x^{4}}(x_{j}+\theta_{7}h_{n},t^{n+1})+\frac{\partial^{4}u}{\partial x^{4}}(x_{j}-\theta_{8}h_{n},t^{n+1})\right\}+o(h_{n}^{2}). \end{eqnarray*} Then \begin{eqnarray} &&\frac{u(x_{j},t^{n+1})-u(x_{j},t^{n})}{\tau_{n}}-\frac{u(x_{j+1},t^{n+1})-2u(x_{j},t^{n+1})+u(x_{j-1},t^{n+1})}{h_{n}^{2}}-\left(u(x_{j},t^{n})\right)^{p} \nonumber \\ &&\ \ \ +\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q}=-r_{j}^{n}. \label{C} \end{eqnarray} Using \eqref{approchee}, we have \begin{equation} \frac{u_{j}^{n+1}-u_{j}^{n}}{\tau_{n}}-\frac{u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h_{n}^{2}}-(u_{j}^{n})^{p}+\frac{1}{(2h_{n})^{q}}|u_{j+1}^{n}-u_{j-1}^{n}|^{q-1}|u_{j+1}^{n+1}-u_{j-1}^{n+1}|=0. \label{D} \end{equation} From \eqref{C} and \eqref{D}, $e_{j}^{n}$ satisfies \begin{eqnarray*} &&\frac{e_{j}^{n+1}-e_{j}^{n}}{\tau_{n}}-\frac{e_{j+1}^{n+1}-2e_{j}^{n+1}+e_{j-1}^{n+1}}{h_{n}^{2}}-\left((u_{j}^{n})^{p}-u(x_{j},t^{n})^{p}\right)\\ &&\ \ \ \ \ +\frac{1}{(2h_{n})^{q}}|u_{j+1}^{n}-u_{j-1}^{n}|^{q-1}|u_{j+1}^{n+1}-u_{j-1}^{n+1}| -\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q}\\ &=& r_{j}^{n}. \end{eqnarray*} By the mean value theorem, for $f(X)=X^{p}$, we get \begin{eqnarray*} (u_{j}^{n})^{p}-(u(x_{j},t^{n}))^{p}&=&f(u_{j}^{n})-f(u(x_{j},t^{n}))\\ &=&f'(u(x_{j},t^{n})+\theta_{9}e_{j}^{n})e_{j}^{n}, \end{eqnarray*} for some $\theta_{9}\in [0,1]$.
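The consistency estimates of the previous subsection rest on the second-order accuracy of the centred differences used throughout. A quick empirical order check on a smooth function (illustrative, with $\sin$ standing in for the solution $u$):

```python
import numpy as np

def centred_errors(h, x0=0.3):
    # errors of the centred first and second differences for u(x) = sin(x)
    u = np.sin
    e1 = abs((u(x0 + h) - u(x0 - h)) / (2 * h) - np.cos(x0))
    e2 = abs((u(x0 + h) - 2 * u(x0) + u(x0 - h)) / h**2 + np.sin(x0))
    return e1, e2

e1a, e2a = centred_errors(1e-2)
e1b, e2b = centred_errors(5e-3)   # halving h divides both errors by about 4
```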
Then we obtain \begin{eqnarray*} &&\frac{e_{j}^{n+1}-e_{j}^{n}}{\tau_{n}}-\frac{e_{j+1}^{n+1}-2e_{j}^{n+1}+e_{j-1}^{n+1}}{h_{n}^{2}}\\ &=&f'(u(x_{j},t^{n})+\theta_{9}e_{j}^{n})e_{j}^{n}-\frac{1}{(2h_{n})^{q}}|u_{j+1}^{n}-u_{j-1}^{n}|^{q-1}|u_{j+1}^{n+1}-u_{j-1}^{n+1}|\\ &&\ \ \ \ \ +\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}\right|^{q}+r_{j}^{n}.\\ \end{eqnarray*} Using \eqref{starstar} we get \begin{eqnarray*} &&\frac{e_{j}^{n+1}-e_{j}^{n}}{\tau_{n}}-\frac{e_{j+1}^{n+1}-2e_{j}^{n+1}+e_{j-1}^{n+1}}{h_{n}^{2}}\\ &=&f'(u(x_{j},t^{n})+\theta_{9}e_{j}^{n})e_{j}^{n}-\frac{1}{(2h_{n})^{q}}|u_{j+1}^{n}-u_{j-1}^{n}|^{q-1}|u_{j+1}^{n+1}-u_{j-1}^{n+1}|+\left|\frac{\partial u}{\partial x}(x_{j},t^{n})\right|^{q}+r_{1j}^{n}, \end{eqnarray*} where $r_{1j}^{n}=r_{j}^{n}+o(h_{n}^{2}).$\\ Let \begin{align*} E^{n}&=\max_{1\leq j \leq m-2}\left|e_{j}^{n}\right|, & U=&\max_{x,t}\left|u(x,t)\right|, & V&=\max_{x,t}\left|\frac{\partial u}{\partial x}(x,t)\right|, \\ W&=\frac{2}{3}\max_{x,t}\left|\frac{\partial^{3} u}{\partial x^{3}}(x,t)\right|, & K&=f'(U+1), \end{align*} and \begin{equation*} R=\frac{\lambda_{n}}{2}\max_{x,t}\left|\frac{\partial^{2}u}{\partial t^{2}}(x,t)\right|+\lambda_{n}\max_{x,t}\left|\frac{\partial^{2}u}{\partial x^{2}}(x,t)\right|+\frac{1}{12}\max_{x,t}\left|\frac{\partial^{4}u}{\partial x^{4}}(x,t)\right|+o(1)+o(\lambda_{n}).
\end{equation*} But from \eqref{E} we have \begin{eqnarray} \nonumber &&\left|\frac{\partial u}{\partial x}(x,t)-\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|\\ \nonumber &=&\left|\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}-\frac{h_{n}^{2}}{3}\frac{\partial^{3}u}{\partial x^{3}}(x_{j}+\theta h_{n},t^{n})+\frac{h_{n}^{2}}{3}\frac{\partial^{3}u}{\partial x^{3}}(x_{j}-\theta h_{n},t^{n}) -\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|\\ \nonumber &=&\left|\frac{e_{j-1}^{n}-e_{j+1}^{n}}{2h_{n}}-\frac{h_{n}^{2}}{3}\frac{\partial^{3}u}{\partial x^{3}}(x_{j}+\theta h_{n},t^{n})-\frac{h_{n}^{2}}{3}\frac{\partial^{3}u}{\partial x^{3}}(x_{j}-\theta h_{n},t^{n})\right|\\ &\leq& \frac{E^{n}}{h_{n}}+h_{n}^{2}W. \label{F} \end{eqnarray} Then by \eqref{F} and the mean value theorem, for $g(X)=\left|X\right|^{q}$, we get \begin{eqnarray} \nonumber &&\left|\left|\frac{\partial u}{\partial x}(x_{j},t^{n})\right|^{q}-\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q}\right|\\ &\leq&qg'\left(\frac{\partial u}{\partial x}(x_{j},t^{n})+\theta\left(\frac{E^{n}}{h_{n}}+h_{n}^{2}W\right)\right)\left(\frac{E^{n}}{h_{n}}+h_{n}^{2}W\right). 
\label{H} \end{eqnarray} On the other hand, for $1\leq j\leq m-2$ we have \begin{eqnarray} \nonumber &&\left|\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q}-\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}\left|\frac{u_{j+1}^{n+1}-u_{j-1}^{n+1}}{2h_{n}}\right|\right|\\ \nonumber &=&\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}\left(\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}-\frac{u_{j+1}^{n+1}-u_{j-1}^{n+1}}{2h_{n}}\right)\\ \nonumber &\leq&\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}\left(\frac{e_{j+1}^{n}-e_{j-1}^{n}}{2h_{n}}-\frac{e_{j+1}^{n+1}-e_{j-1}^{n+1}}{2h_{n}}+o(\tau_{n})+o(h_{n}^{2})\right)\\ \nonumber &\leq&\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}\left(\frac{e_{j+1}^{n}-e_{j-1}^{n}}{2h_{n}}\right)+\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}\left(\frac{e_{j+1}^{n+1}-e_{j-1}^{n+1}}{2h_{n}}\right)\\ &&\ \ \ \ \ \ +\left(o(\tau_{n})+o(h_{n}^{2})\right)\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}. \label{G} \end{eqnarray} Then from \eqref{G} and \eqref{H} we get \begin{eqnarray*} &&\frac{e_{j}^{n+1}-e_{j}^{n}}{\tau_{n}}-\frac{e_{j+1}^{n+1}-2e_{j}^{n+1}+e_{j-1}^{n+1}}{h_{n}^{2}}\\ &\leq&f'(u(x_{j},t^{n})+\theta_{9}e_{j}^{n})e_{j}^{n}+r_{1j}^{n}+\left(o(\tau_{n})+o(h_{n}^{2})\right)\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}\\ &+&qg'\left(\frac{\partial u}{\partial x}(x_{j},t^{n})+\theta\left(\frac{E^{n}}{h_{n}}+h_{n}^{2}W\right)\right)\left(\frac{E^{n}}{h_{n}}+h_{n}^{2}W\right)+\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}\left(\frac{e_{j+1}^{n}-e_{j-1}^{n}}{2h_{n}}\right)\\ &&\ \ \ \ +\left|\frac{u_{j+1}^{n}-u_{j-1}^{n}}{2h_{n}}\right|^{q-1}\left(\frac{e_{j+1}^{n+1}-e_{j-1}^{n+1}}{2h_{n}}\right).
\end{eqnarray*} Let $M:=\left\|U^{n}\right\|_{\infty}=u_{m}^{n}.$ Finally, we obtain \begin{eqnarray*} \frac{E^{n+1}-E^{n}}{\tau_{n}}&\leq& KE^{n}+h_{n}^{2}R+\left(o(\tau_{n})+o(h_{n}^{2})\right)\left(\frac{M}{h_{n}}\right)^{q-1}\\ &+&qg'\left(V+\theta\left(\frac{E^{n}}{h_{n}}+h_{n}^{2}W\right)\right)\left(\frac{E^{n}}{h_{n}}+h_{n}^{2}W\right)\\ &=&E^{n}\left(K+\frac{q}{h_{n}}g'\left(V+\theta\left(\frac{E^{n}}{h_{n}}+h_{n}^{2}W\right)\right)\right)\\ &+&h_{n}^{2}\left(R+\left(o(\lambda_{n})+o(1)\right)\left(\frac{M}{h_{n}}\right)^{q-1}+Wqg'\left(V+\theta\left(\frac{E^{n}}{h_{n}}+h_{n}^{2}W\right)\right)\right)\\ &=&\frac{E^{n}}{h_{n}^{q}}\left(h_{n}^{q}K+qg'\left(h_{n}V+\theta\left(E^{n}+h_{n}^{3}W\right)\right)\right)\\ &+&h_{n}^{3-q}\left(h_{n}^{q-1}R+\left(o(\lambda_{n})+o(1)\right)M^{q-1}+Wqg'\left(h_{n}V+\theta\left(E^{n}+h_{n}^{3}W\right)\right)\right). \end{eqnarray*} Let \begin{eqnarray*} B&=&h_{n}^{q}K+qg'\left(h_{n}V+\theta\left(E^{n}+h_{n}^{3}W\right)\right)\\ C&=&h_{n}^{q-1}R+\left(o(\lambda_{n})+o(1)\right)M^{q-1}+Wqg'\left(h_{n}V+\theta\left(E^{n}+h_{n}^{3}W\right)\right). \end{eqnarray*} Then, \begin{eqnarray*} E^{n+1}&\leq&\left(1+\tau_{n}\frac{B}{h_{n}^{q}}\right)E^{n}+\tau_{n}h_{n}^{3-q}C\\ &\leq&\left(1+\tau_{n}NB\right)E^{n}+\tau_{n}h_{n}^{3-q}C\\ &\leq&\exp(NBT)h_{n}^{3-q}CT, \end{eqnarray*} where $N$ is a constant such that, for $t_{n}<T$ and $h_{n}=\left(2M^{-q+1}\right)^{\frac{1}{2-q}}$, \begin{equation*} \dfrac{1}{h_{n}^{q}}=\dfrac{M^{\frac{q(q-1)}{2-q}}}{2^{\frac{q}{2-q}}}:=N, \end{equation*} which is bounded by Theorem 2.2. 
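The final recursion is a discrete Gronwall inequality. As a quick sanity check, the following sketch (with hypothetical constants $b$ and $c$ standing in for $NB$ and $C$, and the factor $h_{n}^{3-q}$ absorbed into $c$) iterates $E^{n+1}=(1+\tau b)E^{n}+\tau c$ from $E^{0}=0$ and confirms that it stays below $\exp(bT)\,cT$:

```python
import math

# Discrete Gronwall iteration E^{n+1} = (1 + tau*b) E^n + tau*c, E^0 = 0.
# b and c are hypothetical stand-ins for the constants N*B and C of the proof.
def gronwall_bound(b, c, tau, T):
    """Return the iterated error at time T and the bound exp(b*T)*c*T."""
    E, t = 0.0, 0.0
    while t < T:
        E = (1.0 + tau * b) * E + tau * c
        t += tau
    return E, math.exp(b * t) * c * t

E, bound = gronwall_bound(b=2.0, c=0.5, tau=1e-3, T=1.0)
print(E <= bound)
```

The same one-line induction, $(1+\tau b)^{n}\leq e^{b n\tau}$, is what turns the recursion above into the stated $h^{3-q}$ error estimate.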
Then we get \begin{equation*} \max_{1\leq j \leq m-2}\left|u_{j}^{n}-u(x_{j},t^{n})\right|\leq C_{0}(T)h^{3-q}. \end{equation*} Now, we will prove the last part of the lemma.\\ \textbf{(B):} Proceeding in the same way for $p>1$ and $q=1$, we get, for $j=1,\dots,m-1$, \begin{eqnarray*} &&\frac{e_{j}^{n+1}-e_{j}^{n}}{\tau_{n}}-\frac{e_{j+1}^{n+1}-2e_{j}^{n+1}+e_{j-1}^{n+1}}{h_{n}^{2}}\\ &=&f'(u(x_{j},t^{n})+\theta_{9}e_{j}^{n})e_{j}^{n}-\frac{u_{j+1}^{n+1}-u_{j-1}^{n+1}}{2h_{n}}+\frac{u(x_{j+1},t^{n})-u(x_{j-1},t^{n})}{2h_{n}}+r_{j}^{n}\\ &=&f'(u(x_{j},t^{n})+\theta_{9}e_{j}^{n})e_{j}^{n}+\frac{e_{j-1}^{n+1}-e_{j+1}^{n+1}}{2h_{n}}+r_{j}^{n}. \end{eqnarray*} Then \begin{eqnarray*} &&\frac{E^{n+1}-E^{n}}{\tau_{n}}\leq KE^{n}+h_{n}^{2}R.\\ &\Rightarrow& E^{n+1}\leq \left(1+\tau_{n}K\right)E^{n}+\tau_{n}h_{n}^{2}R.\\ &\Rightarrow& E^{n+1}\leq \exp(KT)h_{n}^{2}RT. \end{eqnarray*} Finally, we obtain \begin{equation*} \max_{1\leq j\leq m-1}\left|u_{j}^{n}-u(x_{j},t^{n})\right|\leq C_{1}(T)h^{2}. \end{equation*} \section{Approximation of the blowing up time} In this section, we give an idea about the numerical blow-up time. First of all, we recall a result of Souplet and Weissler \cite{soupletweissler}. \begin{th1} Let $\psi \in W^{1,s}_{0}(\Omega)$ ($s$ large enough), with $\psi\geq 0$ and $\psi \neq 0.$ \begin{enumerate} \item There exists some $\lambda_{0}=\lambda_{0}(\psi)>0$ such that for all $ \lambda>\lambda_{0}$, the solution of \eqref{exacte} with initial data $\phi=\lambda \psi$ blows up in finite time in the $W^{1,s}$ norm. \item There is some $C>0$ such that \begin{equation*} T^{*}(\lambda \psi)\leq \frac{C}{(\lambda \left| \psi \right|_{\infty})^{p-1}},\ \ \ \lambda\rightarrow \infty. \end{equation*} \item \begin{equation*} T^{*}(\lambda \psi)\geq \frac{1}{(p-1)(\lambda \left| \psi \right|_{\infty})^{p-1}}. 
\end{equation*} \end{enumerate} \end{th1} We now define \begin{equation} T_{num}^{*}:=\sum_{n\geq 0}{\tau_{n}} \label{timeblowup} \end{equation} and call it the numerical blow-up time. In \cite{hani}, we have proved that \begin{equation*} u_{m}^{n}\geq \left(\dfrac{1+\tau}{1+\tau 2^{\frac{-q}{2-q}}\left(u_{m}^{0}\right)^{\frac{-2p+q(1+p)}{2-q}}}\right)^{n}u_{m}^{0}, \end{equation*} which implies that \begin{equation} \dfrac{1}{(u_{m}^{n})^{p-1}}\leq \dfrac{1}{\left(\dfrac{1+\tau}{1+\tau 2^{\frac{-q}{2-q}}(u_{m}^{0})^{\frac{-2p+q(1+p)}{2-q}}}\right)^{n(p-1)}}(u_{m}^{0})^{-p+1}. \label{time} \end{equation} Using \eqref{timeblowup} and \eqref{time} we get \begin{eqnarray} T_{num}^{*}&=&\tau \sum_{n\geq 0}\dfrac{1}{(u_{m}^{n})^{p-1}} \nonumber \\ &\leq& \dfrac{\tau}{(u_{m}^{0})^{p-1}}\sum_{n\geq 0}\left(\dfrac{1}{\left(\frac{1+\tau}{1+\tau 2^{\frac{-q}{2-q}}\left(u_{m}^{0}\right)^{\frac{-2p+q(1+p)}{2-q}}}\right)^{p-1}}\right)^{n} \nonumber \\ &=&\dfrac{\tau}{(u_{m}^{0})^{p-1}}\sum_{n\geq 0}\left(\left(\dfrac{1+\tau 2^{\frac{-q}{2-q}}\left(u_{m}^{0}\right)^{\frac{-2p+q(1+p)}{2-q}}}{1+\tau}\right)^{p-1}\right)^{n} \nonumber \\ &=&\dfrac{\tau}{(u_{m}^{0})^{p-1}}\dfrac{1}{1-\left(\dfrac{1+\tau 2^{\frac{-q}{2-q}}\left(u_{m}^{0}\right)^{\frac{-2p+q(1+p)}{2-q}}}{1+\tau}\right)^{p-1}}:=T^{**}. \label{tstar} \end{eqnarray} \section{Numerical simulations} In this section, we present some numerical simulations that illustrate our results. In figure 1, we take $p=4>2$ and $q=1.3<\frac{2(p-1)}{p}$; one can see that the solution is bounded at $x_{m-1}$. Then we take $p=2$ and $q=1$; it is clear from figure 2 that the solution blows up at $x_{m-1},$ and from figure 3 we can see that the solution is bounded at $x_{m-2}.$\\ Concerning the approximation of the blow-up time, if we take the initial data $u_{0}(x)=\lambda \sin(\frac{\pi}{2}(x+1))$, with $\lambda>0$, then $\left\|u_{0}\right\|_{\infty}=\lambda$. 
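The closed form \eqref{tstar} simply resums a geometric series; the following sketch (parameter values illustrative only, with $\lambda$ playing the role of $u_{m}^{0}$) evaluates it and checks it against a truncated partial sum of the series:

```python
# Evaluate the bound T** of the geometric-series estimate for illustrative
# parameters tau, lam, p, q (lam stands in for u_m^0), and compare with a
# direct truncated partial sum of the same series.
def T_star_star(tau, lam, p, q):
    a = 2.0 ** (-q / (2.0 - q)) * lam ** ((-2.0 * p + q * (1.0 + p)) / (2.0 - q))
    r = ((1.0 + tau * a) / (1.0 + tau)) ** (p - 1.0)   # ratio of the series
    return tau / lam ** (p - 1.0) / (1.0 - r), r

tau, lam, p, q = 1e-3, 10.0, 3.0, 1.0
Tss, r = T_star_star(tau, lam, p, q)
partial = tau / lam ** (p - 1.0) * sum(r ** n for n in range(200000))
print(abs(Tss - partial) / Tss < 1e-6)
```

The series converges because $0<r<1$ whenever the solution is increasing, which is exactly the regime in which the scheme blows up.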
Theoretically, we know that \begin{equation*} T^{*}\geq \dfrac{1}{(p-1)\left\|u_{0}\right\|_{\infty}^{p-1}}. \end{equation*} Let $g(\lambda)=\dfrac{1}{(p-1)\lambda^{p-1}}$ and $p=3$. In the next table, for some values of $\lambda$, we can see that $T^{*}_{num}\geq g(\lambda)$, which is compatible with the theoretical result; this is illustrated in figure 4. Also, using \eqref{tstar} for $\lambda=10^{3}$, we have $$T^{*}_{num}\approx 5.067\times 10^{-7}\leq T^{**}=5.075\times 10^{-7}.$$ \begin{table}[H] \centering \begin{tabular}{|*{6}{c|}} \hline $\lambda$& $10$&$10^{2}$&$10^{3}$&$10^{4}$&$10^{5}$ \rule[-7pt]{0pt}{20pt}\\ \hline $g(\lambda)$&$5\times 10^{-3}$&$5\times 10^{-5}$&$5\times 10^{-7}$&$5\times 10^{-9}$&$5\times 10^{-11}$ \rule[-7pt]{0pt}{20pt}\\ \hline $T^{*}_{num}$&$5.177\times 10^{-3}$&$5.068\times 10^{-5}$&$5.067\times 10^{-7}$&$5.075\times 10^{-9}$&$5.058\times 10^{-11}$ \rule[-7pt]{0pt}{20pt}\\ \hline \end{tabular} \caption{Comparison of the function $g(\lambda)$ with the numerical blow-up time $T^{*}_{num}$.} \label{tab:1} \end{table} \begin{figure} \caption{Evolution of the numerical solution at $x_{m-1}$.} \caption{Evolution of the numerical solution at $x_{m-1}$.} \end{figure} \begin{figure} \caption{Evolution of the numerical solution at $x_{m-2}$.} \caption{Graphics of $g(\lambda)$ and approximation of the numerical blow-up time for $p=3$.} \end{figure} \section{Conclusion } \noindent We have shown that when $p=2$ and $q=1$, the finite difference solution blows up at more than one point, and that when $p>2$ and $q<\dfrac{2(p-1)}{p}$, the only numerical blow-up point is the mid-point $x=0.$ This is an interesting phenomenon in view of the fact that the solution of the corresponding PDE blows up only at the point $x=0$ for any $p>1$ and $1\leq q\leq \dfrac{2p}{p+1}.$ Note that for $1<p<2$ and $\dfrac{2(p-1)}{p}\leq q< \dfrac{2p}{p+1},$ the boundedness of $u_{m-1}^{n}$ and $u_{m-2}^{n}$ remains open. \begin{figure} \caption{Graphics of the asymptotic behaviours of the solution near the blowing-up point.} \end{figure} 
\end{document}
\begin{document} \title{Quantum supremacy with spin squeezed atomic ensembles} \author{Yueheng Shi} \thanks{These authors contributed equally} \affiliation{State Key Laboratory of Precision Spectroscopy, School of Physical and Material Sciences, East China Normal University, Shanghai 200062, China} \affiliation{New York University Shanghai, 1555 Century Ave, Pudong, Shanghai 200122, China} \affiliation{Carleton College, Northfield, MN, 55057, USA} \affiliation{Washington University in St. Louis, St. Louis, MO, 63130, USA} \author{Junheng Shi} \thanks{These authors contributed equally} \affiliation{State Key Laboratory of Precision Spectroscopy, School of Physical and Material Sciences, East China Normal University, Shanghai 200062, China} \affiliation{New York University Shanghai, 1555 Century Ave, Pudong, Shanghai 200122, China} \author{Tim Byrnes} \email{[email protected]} \affiliation{New York University Shanghai, 1555 Century Ave, Pudong, Shanghai 200122, China} \affiliation{State Key Laboratory of Precision Spectroscopy, School of Physical and Material Sciences, East China Normal University, Shanghai 200062, China} \affiliation{NYU-ECNU Institute of Physics at NYU Shanghai, 3663 Zhongshan Road North, Shanghai 200062, China} \affiliation{Center for Quantum and Topological Systems (CQTS), NYUAD Research Institute, New York University Abu Dhabi, UAE} \affiliation{National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan} \affiliation{Department of Physics, New York University, New York, NY 10003, USA} \begin{abstract} We propose a method to achieve quantum supremacy using ensembles of qubits, using only spin squeezing, basis rotations, and Fock state measurements. Each ensemble is assumed to be controllable only with its total spin. Using a repeated sequence of random basis rotations followed by squeezing, we show that the probability distribution of the final measurements quickly approaches a Porter-Thomas distribution. 
We show that the sampling probability can be related to a \#P-hard problem with a complexity scaling as $ (N+1)^M$, where $ N $ is the number of qubits in an ensemble and $ M $ is the number of ensembles. The scheme can be implemented with hot or cold atomic ensembles. Due to the large number of atoms in typical atomic ensembles, this allows access to the quantum supremacy regime with a modest number of ensembles or gate depth. \end{abstract} \maketitle \section{Introduction} Quantum supremacy, or quantum computational advantage, is the notion that a quantum device can vastly outperform the computational capabilities of existing classical computers in a given task \cite{preskill2012quantum}. Obtaining a quantum speedup, as proposed in quantum algorithms such as Shor's algorithm \cite{shor1999polynomial}, has always been central to the interest in quantum computing. Quantum simulation remains one of the most promising applications of quantum technology precisely because simulating quantum many-body systems on a classical computer is intractable \cite{buluta2009quantum,georgescu2014quantum,cirac2012goals}. However, to demonstrate quantum supremacy, one of the important tasks is to {\it prove} the superiority of the quantum device. The proof of this generally requires two parts \cite{Bouland2018}. First, one must show that the computational task has a certain ``hardness'' from the perspective of computational complexity theory, thereby invalidating the extended Church-Turing thesis. Second, one compares the computational time of the quantum device with that of the best classical algorithm running on the fastest available computer. For the Noisy Intermediate Scale Quantum (NISQ) devices that are available today \cite{preskill2018quantum}, the limited number of qubits under imperfect conditions can be taken advantage of to design powerful classical algorithms, pushing the quantum supremacy regime to larger quantum systems. 
Current approaches for demonstrating quantum supremacy include Boson Sampling \cite{aaronson_computational_2013}, Gaussian Boson Sampling \cite{hamilton_gaussian_2017} and Instantaneous Quantum Polynomial (IQP) circuits \cite{Bremner2010,bremner_average-case_2016}. The first experimental demonstration of quantum supremacy was achieved using random quantum circuits in a 53 qubit superconducting quantum computing device \cite{Arute2019}. This was followed by a demonstration of quantum supremacy in Gaussian Boson Sampling \cite{zhong2020quantum,wu2021strong}, where squeezed light is input to the linear optical network. These two components, demonstrating the complexity of the problem and practical superiority over a classical algorithm, make the design of novel quantum supremacy demonstrations quite challenging. Atomic systems offer a fascinating possibility in this context. They offer a high degree of controllability, and typically consist of a large number of atoms. For example, in experiments involving Bose-Einstein condensates one typically has $ \sim 10^5 $ atoms \cite{greiner2002quantum}, and experiments with atomic ensembles typically involve $ \sim 10^{12} $ atoms \cite{julsgaard2001experimental}. While this far exceeds the number of qubits in state-of-the-art quantum computers, one of the limitations is the lack of microscopic control of the atoms, although in recent years progress on this front has also been made \cite{bernien2017probing}. For this reason, one of the main applications of such systems has been in the context of quantum simulation \cite{buluta2009quantum,georgescu2014quantum,cirac2012goals}, where microscopic control is often unnecessary to realize a physical model. In Ref. \cite{kocharovsky2022quantum} a proposal was made based on sampling of the Bogoliubov distribution in a multi-trap cold atom system. Up to this point it has remained a tantalizing possibility to rigorously show that atomic systems also lie in the quantum supremacy regime. 
In this paper, we propose an experimental scheme to achieve quantum supremacy with ensembles of qubits, using only spin squeezing, basis rotations, and total spin measurements. The basic scheme is shown in Fig. \ref{fig1}. After initializing the qubits in a spin coherent state \cite{Byrnes2021}, they are spin squeezed in random bases, by applying a sequence of spin squeezing and rotations around the $ x,y,z $-axes. The aim is then to perform sampling of the measurement distribution for a given squeezing sequence. We analyze the complexity of simulating such random circuits classically, and show that this is intractable for large particle and ensemble numbers, by connecting it to a \#P-hard problem. Finally, we show a classical simulation method suitable for large scale systems and show the regime in which quantum supremacy should be attainable. \begin{figure} \caption{Random quantum circuits using qubit ensembles. (a) The schematic setup considered in this paper. $ M $ ensembles of qubits, each containing $ N $ qubits, are controlled using basis rotations $ U^\alpha (\theta) $ as defined in (\ref{basisrot}).} \label{fig1} \end{figure} \section{Physical system} We consider $ M $ ensembles, each containing $ N $ qubits (Fig. \ref{fig1}(a)). Such qubit ensembles can be implemented using hot atomic ensembles in glass cells, or optically/magnetically trapped cold atoms. Suitable logical states are the hyperfine ground states of the atoms \cite{hammerer2010,pezze2018}. For atomic ensembles in glass cells, the number of atoms can typically be in the region of $ N \sim 10^{12} $ \cite{pezze2018,hammerer2010, julsgaard2001experimental,Bao2020}. Multi-ensemble systems have been realized with atomic cells; in Refs. \cite{pu2017experimental,pu2018experimental}, $ M = 25 $ and $ M = 225 $ were achieved. We assume that the qubits within each ensemble cannot be individually controlled, but each ensemble can be addressed individually. 
The only operators that are available for control and readout of the ensembles are in terms of collective spin operators, defined as $ \hat{S}^\alpha_m = \sum_{n=1}^N \hat{\sigma}_{n,m}^{\alpha} $, where $ \alpha \in \{x,y,z \}$, $ m \in [1,M]$ labels the ensemble, and $ \hat{\sigma}_{n,m}^{\alpha}$ are the Pauli matrices for qubit $n$ in ensemble $m$. The spin operators satisfy the commutation relations $ [\hat{S}^\alpha_m, \hat{S}^\beta_{m'} ] = 2i \epsilon^{\alpha \beta \gamma} \delta_{m m'} \hat{S}^\gamma_m $, where $ \epsilon^{\alpha \beta \gamma}$ is the Levi-Civita antisymmetric tensor and $ \delta_{mm'} $ is the Kronecker delta. Our quantum supremacy protocol is based on a combination of quantum gates on the ensembles, followed by a measurement. The following operations are assumed to be available to realize quantum supremacy in such a system of ensembles. First, we assume that we can perform basis rotations $ U^\alpha_m (\theta) = e^{-i \hat{S}^\alpha_m \theta} $ on each ensemble. These are collective spin rotations, and correspond to the simultaneous rotations of all $N$ individual qubits about the same axis. Such collective spin rotations are routinely performed with either radio frequency/microwave or Raman pulses in atomic ensembles \cite{Byrnes2021,Abdelrahman2014,genov2014correction}. In fact, for our random quantum circuit we make the further restriction to the following gates \begin{align} {\hat{X}}_m^{1/2} = e^{-i\hat{S}_m^x\pi/4} \hspace{5mm} { \hat{Y} }_m^{1/2} = e^{-i\hat{S}_m^y\pi/4} \hspace{5mm} { \hat{Z} }_m^{1/4} = e^{-i\hat{S}_m^z\pi/8} . \label{basisrot} \end{align} Note that the $ z $-axis rotation is taken to correspond to the $ \pi/8$ gate, which is to ensure that a non-Clifford gate is present for basis rotations \cite{gottesman1998heisenberg}. While this is not essential since the squeezing gates are non-Clifford gates, this helps to improve the convergence of the random circuit. We also assume that spin squeezing operations are available on the ensembles. 
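These collective operators are straightforward to construct numerically on the symmetric subspace; the following sketch (numpy, with $N$ chosen small purely for illustration) builds them in the Fock basis $|k\rangle$, $k=0,\dots,N$, and verifies the commutation relation $[\hat{S}^x_m,\hat{S}^y_m]=2i\hat{S}^z_m$ within a single ensemble:

```python
import numpy as np

# Collective spin operators restricted to the symmetric subspace, written in
# the Fock basis |k>.  The matrix elements sqrt((k+1)(N-k)) follow from the
# Schwinger-boson mapping (see Appendix).
def spin_ops(N):
    k = np.arange(N)
    off = np.sqrt((k + 1) * (N - k))              # <k+1| a^dag b |k>
    Sx = np.diag(off, -1) + np.diag(off, 1)
    Sy = -1j * np.diag(off, -1) + 1j * np.diag(off, 1)
    Sz = np.diag(2.0 * np.arange(N + 1) - N)
    return Sx, Sy, Sz

N = 4
Sx, Sy, Sz = spin_ops(N)
# Pauli-sum commutation relation [S^x, S^y] = 2i S^z on the (N+1)-dim subspace
print(np.allclose(Sx @ Sy - Sy @ Sx, 2j * Sz))
```

Note that the subspace dimension is $N+1$ rather than $2^N$, which is what makes small-scale checks of this kind possible at all.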
A squeezing operation applied on the $ m$th ensemble is defined using the operator $ \hat{Q}_m (\xi) = e^{-i(\hat{S}_m^z)^2 \xi } $, which is generated by a one-axis spin squeezing Hamiltonian \cite{kitagawa1993squeezed}. The squeezing operation produces correlations between the particles, and is an entangling operation \cite{sorensen2001many}. Such squeezing has been realized in cold atoms using nonlinear interactions \cite{gross2012spin,riedel2010atom} or quantum nondemolition measurements in hot atomic ensembles \cite{hald1999spin,kuzmich2000generation,hammerer2010,takano2009spin,Bao2020}. By considering multiple ensembles as a single spin, spin squeezing across multiple ensembles may be produced via $ \hat{Q}_{nm} (\xi) = e^{-i(\hat{S}_m^z+ \hat{S}_n^z)^2 \xi }$. This squeezing operation generates entanglement between different ensembles due to the cross terms $ \hat{S}_m^z \hat{S}_n^z $ \cite{byrnes2013fractality,jing2019split,kitzinger2020two}. Considering all the ensembles together produces squeezing across all the ensembles according to \begin{equation} \hat{\cal Q}(\xi) = e^{-i (\sum_{m=1}^M \hat{S}^z_m)^2 \xi } . \label{squeezingall} \end{equation} Squeezing across multiple ensembles has been demonstrated in hot atomic ensembles using quantum nondemolition measurements \cite{julsgaard2001experimental,hammerer2010,krauter2013deterministic}. Several schemes for producing squeezing between cold atom ensembles have been proposed \cite{pyrkov2013entanglement,PhysRevA.74.022312,aristizabal2021quantum,pyrkov2014full,jing2019split,byrnes2013fractality,rosseau2014entanglement}. Finally, the measurement is performed in the eigenbasis of the $ \hat{S}^z_m $ operator, defined as \begin{align} \hat{S}^z_m | k \rangle = (2k - N) | k \rangle , \label{fockstates} \end{align} where $ k \in [0,N] $. \section{Random quantum circuits} Our approach to quantum supremacy follows a similar general approach to random quantum circuits as demonstrated in Ref. \cite{Arute2019}. 
We first initialize all the ensembles to a spin coherent state polarized in the $ z $-direction, $ |\psi_0 \rangle = | k = N \rangle^{\otimes M } = | 0 \rangle^{\otimes NM} $. The first step of the circuit is to apply a Hadamard gate on each ensemble $ H_m = U^x_m (\pi/2) U^z_m (\pi/2) U^x_m (\pi/2)$. This produces an equal superposition of all states $ |+ \rangle^{\otimes NM} $. The random circuits we consider consist of $L$ cycles, where each cycle contains two gates consisting of the squeezing operation (\ref{squeezingall}) followed by a basis rotation (\ref{basisrot}). After $L$ such cycles, a projection measurement is made in the Fock basis (\ref{fockstates}), obtaining a single sample (Fig. \ref{fig1}(b)). The aim of the quantum circuit is to obtain the probability distribution of a given measurement outcome, given by \begin{align} p_{\vec{k} } & = | \langle \vec{k} | {\cal C } |\psi_0 \rangle |^2 , \label{probabilityk} \end{align} where the random quantum circuit is \begin{align} {\cal C } & = \left( \prod_{l=1}^L \hat{U}_l \hat{{\cal Q}} \right) H^{\otimes M } . \label{randomqcircuit} \end{align} Here $ \hat{U}_l = \otimes_{m=1}^M \hat{W}_m $ and $ \hat{W}_m $ is randomly chosen from $\{ \hat{X}_m^{1/2}, \hat{Y}_m^{1/2}, \hat{Z}_m^{1/4} \} $ on each cycle $ l $. A particular measurement outcome is given by $ | \vec{k} \rangle = \otimes_{m=1}^M | k_m \rangle $, where $\vec{k} = (k_1, \dots, k_M) $. \section{Randomness of quantum circuit} To verify that our proposed sequence gives a Gaussian random state, we calculate the entropy of the measurement probabilities (Fig. \ref{fig2}(a)). For a random state in a $D$-dimensional Hilbert space, the outcome probabilities follow the Porter-Thomas (PT) distribution, which has an entropy of $ \ln D-1+\gamma$, where $\gamma \approx 0.577$ is the Euler constant \cite{porter_fluctuations_1956}. Fig. \ref{fig2}(a) shows the entropy of the probabilities (\ref{probabilityk}) as a function of the number of cycles. 
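The circuit itself can be built directly for small systems. The sketch below (numpy; the values of $N$, $M$, $L$, and $\xi$ are illustrative assumptions, not those used in the figures) constructs the circuit (\ref{randomqcircuit}) in the Fock basis for $M=2$ ensembles and computes the entropy of the distribution (\ref{probabilityk}):

```python
import numpy as np

rng = np.random.default_rng(0)

def expmH(H, t):                       # e^{-i H t} for Hermitian H
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

N, M, L, xi = 2, 2, 10, 0.3            # illustrative sizes only (assumption)
k = np.arange(N)
off = np.sqrt((k + 1) * (N - k))
Sx = np.diag(off, -1) + np.diag(off, 1)
Sy = -1j * np.diag(off, -1) + 1j * np.diag(off, 1)
Sz = np.diag(2.0 * np.arange(N + 1) - N)

# single-ensemble gates of (basisrot) and the Hadamard H_m
X, Y, Z = expmH(Sx, np.pi / 4), expmH(Sy, np.pi / 4), expmH(Sz, np.pi / 8)
H1 = expmH(Sx, np.pi / 2) @ expmH(Sz, np.pi / 2) @ expmH(Sx, np.pi / 2)

# global squeezing Q = exp(-i (sum_m S^z_m)^2 xi) is diagonal in the Fock basis
sz = 2.0 * np.arange(N + 1) - N
sz_tot = np.add.outer(sz, sz).ravel()  # total S^z for M = 2 ensembles
Q = np.exp(-1j * sz_tot ** 2 * xi)

D = (N + 1) ** M
psi = np.zeros(D, complex)
psi[-1] = 1.0                          # initial state |k=N> on both ensembles
psi = np.kron(H1, H1) @ psi
for _ in range(L):                     # L cycles: squeezing + random rotations
    psi = Q * psi
    g1, g2 = rng.choice(3, size=2)
    psi = np.kron([X, Y, Z][g1], [X, Y, Z][g2]) @ psi

p = np.abs(psi) ** 2
entropy = -np.sum(p * np.log(p + 1e-300))
pt_entropy = np.log(D) - 1.0 + 0.5772  # Porter-Thomas reference value
print(abs(p.sum() - 1.0) < 1e-9)
```

For the ensemble sizes of actual interest ($N\sim 10^{12}$), this direct construction is of course intractable, which is precisely the point of the proposal.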
Even for the relatively small number of particles, we observe a fast convergence to the predicted entropy of the PT-distribution after typically $ L = 8 $ cycles. We also verify that the probability distribution (\ref{probabilityk}) follows the PT-distribution, by showing the sorted probabilities after $L = 10 $ cycles (Fig. \ref{fig2}(b)). We see that for the probabilities (\ref{probabilityk}) there is excellent agreement, again despite the small number of particles examined. We also examine the effect of decoherence on our proposed sequence, by applying a Lindbladian dephasing after each cycle (see Appendix). The general effect is to turn the PT-distribution into a uniform distribution, which is expected since the state becomes a completely mixed state. \begin{figure} \caption{Randomness of the measured probability distribution for the random quantum circuit (\ref{randomqcircuit}).} \label{fig2} \end{figure} \section{Complexity analysis} We now provide a proof of the hardness of our problem in terms of computational complexity theory. As given in previous works \cite{aaronson_computational_2013,bremner_classical_2011,bremner_average-case_2016}, the complete proof to show that there exists no efficient classical sampler to simulate the output of an average-case circuit follows several steps. The first step is to show the existence of a class of worst-case circuits and prove that the complexity of estimating the probability of certain outputs in those circuits is \#P-hard. The second step is to extend this hardness to average-case circuits by a worst-to-average reduction. Bouland, Vazirani and co-workers showed a general method for performing a reduction from the worst-case circuit to the average case \cite{Bouland2018}. 
The third step is to show that if there exists an efficient classical algorithm that can approximate the output probability of an average-case circuit up to an additive error, then the polynomial hierarchy collapses to its third level \cite{arora_computational_2009,toda_pp_1991,aaronson_quantum_2005,han_threshold_1997}. As the second and third steps are established in previous papers, we focus our attention on the first step (see Appendix for a summary). Consider the class of quantum circuits given in Fig. \ref{fig3}(a). After the initial Hadamard gates, commuting two- and three-ensemble interactions of the form \begin{equation} \hat{T}_{nm}(\xi) = e^{-i\hat{S}_n^z \hat{S}_m^z \xi }, \hspace{1cm} \hat{R}_{lmn} (\chi) = e^{-i\hat{S}_l^z \hat{S}_m^z \hat{S}_n^z \chi } \label{squeezingthree} \end{equation} are performed, as well as basis rotations around the $ z$-axis. Such gates can be produced by universality arguments \cite{lloyd1995almost,byrnes2015macroscopic} (see Appendix). Finally, a Hadamard gate is applied, followed by a measurement. The probability of the outcome $ \vec{k} = \vec{0} $ can be calculated using standard methods (see Appendix) to be \begin{align} p_{\vec{0}} = & \frac{\left| \sum_{\vec{\sigma}} e^{-i \pi f(\vec{\sigma})} \right|^2 }{4^{NM}} =\frac{\left| \sum_{\vec{\sigma}} (-1)^{f(\vec{\sigma})} \right|^2 }{4^{NM}} \label{probzero} \\ f(\vec{\sigma}) = & \sum_{m_1 m_2 m_3} \alpha_{m_1 m_2 m_3} k_{m_1} k_{m_2} k_{m_3} \nonumber \\ & + \sum_{m_1 m_2} \beta_{m_1 m_2} k_{m_1} k_{m_2} + \sum_m \gamma_m k_{m} , \label{hardness_f} \end{align} where $ k_m = (N + \sum_{n=1}^N \sigma_{n,m})/2 $ counts the number of $ \sigma_{n,m} =1$ in the $ m$th ensemble. The sum over $ \vec{\sigma} $ runs over the $ 2^{NM} $ configurations of the whole system. The parameters $ \alpha, \beta, \gamma$ can be simply related to the evolution times of the circuit in Fig. \ref{fig3}(a), such that any desired set of parameters can be created. 
Thus, using a suitable circuit, it is possible to realize $ \alpha_{m_1 m_2 m_3}, \beta_{m_1 m_2}, \gamma_m \in \{0, 1\} $. The sum in (\ref{probzero}) is known as the gap function, and when $ f $ is a degree 3 polynomial, it is known to be \#P-hard to calculate for the case that $ k_m \in \{0,1\} $ \cite{bremner_average-case_2016,Gao2017}. The difficulty of the evaluation of the sum originates from the lack of simple structure of $ (-1)^{f(\vec{k})} $, such that the number of $ \vec{k}$ that give $ \pm 1 $ cannot be found easily. In our case, $ k_m \in [0,N] $, rather than $ k_m \in \{0,1\} $, but due to the fact that only the parity of the function $ f $ matters in the sum in (\ref{probzero}), it follows that only the parity of the $ k_m $ matters for the function $ f $. For example, for a term such as $ (-1)^{k_1 k_2 k_3} $, this is $-1$ only when $ k_1, k_2, k_3 $ are all odd. This means that here $ f $ encodes the same problem as the binary case, with the mapping $ k_m \rightarrow k_m \bmod 2$. This shows that computing the output probability of the circuit in Fig. \ref{fig3}(a) is \#P-hard, by equivalence to the original binary case. Another way to see the complexity of circuits of the form of Fig. \ref{fig3}(a) is by connecting it to IQP. Using an extension of the arguments used to derive (\ref{squeezingthree}), one may show that any gate of the form $\exp\left[-i\theta \prod_{m\in \mathcal{M}}\hat{S}_m^z \right]$ can be generated, where $\mathcal{M}$ runs over a subset of the ensembles. For the $ N = 1 $ case, using such gates in a circuit of the form of Fig. \ref{fig3}(a) coincides exactly with IQP. We may then use the results of Ref. \cite{fujii_commuting_2017}, which showed the hardness of IQP by connecting it to the hardness of calculating the partition function. 
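The parity reduction can be checked by brute force on a tiny instance; in the sketch below (a hypothetical instance with random binary coefficients, $M$ and $N$ chosen arbitrarily), replacing each $k_m$ by its parity never changes $(-1)^{f(\vec{k})}$:

```python
import itertools
import random

# Hypothetical tiny instance: random binary coefficients for the cubic f of
# (hardness_f); check that (-1)^f depends only on the parities k_m mod 2.
random.seed(0)
M, N = 3, 3
alpha = {t: random.randint(0, 1) for t in itertools.product(range(M), repeat=3)}
beta = {t: random.randint(0, 1) for t in itertools.product(range(M), repeat=2)}
gamma = [random.randint(0, 1) for _ in range(M)]

def f(k):
    s = sum(alpha[t] * k[t[0]] * k[t[1]] * k[t[2]] for t in alpha)
    s += sum(beta[t] * k[t[0]] * k[t[1]] for t in beta)
    return s + sum(g * km for g, km in zip(gamma, k))

ok = all((-1) ** f(k) == (-1) ** f(tuple(km % 2 for km in k))
         for k in itertools.product(range(N + 1), repeat=M))
print(ok)
```

This works because every monomial of $f$ has integer coefficients, so its value modulo 2 is determined by the arguments modulo 2.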
For $ N>1 $, the extremal values $ k = \{0,N \} $ coincide with the $ N = 1$ case \cite{mohseni2021error}, but the sum in the probability expression will involve additional terms that are distinct from the extremal values. Thus, for an exact evaluation of the amplitude, the complexity of the circuit is at least as hard as IQP. \begin{figure} \caption{(a) Example of the worst-case circuit used in the evaluation of the probability (\ref{probzero}).} \label{fig3} \end{figure} \section{Simulation algorithm} Demonstrating quantum supremacy requires not only realizing a quantum device that performs sampling efficiently, but also comparison to a classical algorithm that can perform the corresponding calculation. Here we describe an algorithm using a Feynman Path Integral (FPI) classical sampling approach \cite{Boixo2017,Boixo2018}. The aim of the FPI based sampling algorithm is to calculate the amplitude \begin{align} \langle \vec{k}_f | \hat{{\cal C}} | \psi_0 \rangle = \sum_{ \vec{k}^1, \dots, \vec{k}^{T-1} } \prod_{t=1}^T \langle \vec{k}^{t} | C^{(t)}| \vec{k}^{t-1} \rangle , \label{pathintens} \end{align} where $ | \vec{k}^t \rangle= \otimes_{m=1}^M | k_m^t \rangle $ is an ensemble configuration with $ k_m^t \in [0,N] $ and $ {\hat{\cal C}} = \prod_{t=1}^T C^{(t)} $ is the total circuit written gate by gate. The total number of gates applied is $T$, the initial state is $ | \vec{k}^0 \rangle = | \psi_0 \rangle $, and the final state is $ | \vec{k}_f \rangle = | \vec{k}^T \rangle $. Each term in the multidimensional sum of (\ref{pathintens}) represents a path in spin configuration space, evolving in a ``time'' direction labeled by $ t $. The sum in (\ref{pathintens}) in principle runs over $ (N+1)^{M(T-1)} $ terms, but due to the diagonal nature of the $ {\cal Q} , \hat{Z}_m^{1/4} $ gates, and the local nature of the $ \hat{X}_m^{1/2}, \hat{Y}_m^{1/2}$, many of the amplitudes give zero (see Appendix). 
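A minimal single-ensemble sketch of the decomposition (\ref{pathintens}) (three gates with arbitrary $\xi$, not the full multi-ensemble circuit): the amplitude obtained by summing gate matrix elements over all intermediate Fock configurations agrees with the direct matrix product, and the diagonal squeezing gate eliminates every path with $k^{1}\neq k^{2}$:

```python
import itertools
import numpy as np

def expmH(H, t):                       # e^{-i H t} for Hermitian H
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

N = 3
k = np.arange(N)
off = np.sqrt((k + 1) * (N - k))
Sx = np.diag(off, -1) + np.diag(off, 1)
sz = 2.0 * np.arange(N + 1) - N

# three-gate toy circuit: off-diagonal rotation, diagonal squeezing, rotation
circuit = [expmH(Sx, np.pi / 4),
           np.diag(np.exp(-1j * 0.3 * sz ** 2)),
           expmH(Sx, np.pi / 4)]
k0, kf = N, 0
direct = (circuit[2] @ circuit[1] @ circuit[0])[kf, k0]
paths = sum(circuit[2][kf, k2] * circuit[1][k2, k1] * circuit[0][k1, k0]
            for k1, k2 in itertools.product(range(N + 1), repeat=2))
print(np.isclose(direct, paths))
```

In a Monte Carlo variant one would sample the surviving paths at random rather than enumerate them, trading fidelity for tractability.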
This reduces the sum to $ (N+1)^G $, where $ G $ is the number of two-sparse gates (i.e. the off-diagonal $ \hat{X}_m^{1/2}, \hat{Y}_m^{1/2}$). For large $ N $, this can still be a formidable number of paths, and hence one can approximate the amplitude by randomly sampling from these paths. Such a path integral Monte Carlo approach is effective when the Hilbert space dimension is too large to perform a direct matrix computation, which scales in complexity as $ G (N+1)^{2M}$. For ensemble sizes $ N = 10^{12} $ and $ M = 2$, such an exact computation is not tractable. On the other hand, it is always possible to perform a Monte Carlo calculation, by simply adjusting the number of paths, at the cost of a reduced fidelity. Fig. \ref{fig3}(b) shows an example of the fidelity of the FPI method with the number of cycles for a fixed number of paths. We compare this to the fidelity of the direct matrix calculation including decoherence, which also shows a decrease in fidelity with $L $, since the dephasing is added after each cycle. We see that for small $ L $, the FPI method produces good estimates of the state, but sharply loses fidelity when $(N+1)^G $ exceeds the number of sampled paths. Meanwhile, the fidelity of the decoherence calculation decreases at a slower pace, such that some fidelity is retained even for deep circuits. We expect a similar situation for an experimental quantum supremacy demonstration, where the loss of fidelity for the experiment will have a slower decay with $ L $ than the FPI approach. In Ref. \cite{Arute2019}, it is estimated that a million CPU cores could be used to sum over $\sim 10^{14}$ paths over 2 weeks. Hence, for $ N =10^{12}$ and $ G = 2$, we expect that the fidelity of the FPI will be very low, giving the opportunity for a quantum device to exceed its performance. \section{Conclusions} We have proposed a route to achieve quantum supremacy using ensembles of qubits, where each ensemble is controlled only using its total spin. 
The primary physical platform for realization of this scheme is atomic ensembles, where $ N $ can be extremely large. The dimension of the relevant Hilbert space for the system is $(N+1)^M$, meaning that even for a modest number of ensembles, the complexity can be enormous. Despite the lack of microscopic control of the atoms, the system quickly approaches a PT-distribution. We showed the complexity of the problem by showing that a hard instance of the circuit is equivalent to a \#P-hard problem. A path integral simulation algorithm was also introduced, which is appropriate for simulating large-scale systems, and has a complexity scaling as $ (N+1)^G$. Despite only having limited control of the quantum system, i.e. no microscopic control of individual qubits, it is possible to connect it to a computationally difficult problem. Having limited control is one of the principal technological differences in a quantum simulator versus a quantum computer, but we see here that, nevertheless, the complexity of classically simulating such devices can be high. Quantum computing using a similar ensemble approach has also been proposed \cite{byrnes2012macroscopic,mohseni2021error,abdelrahman2014coherent,Byrnes2021}, suggesting further applications beyond quantum supremacy. We anticipate the primary technological difficulty in realizing our scheme is in the measurement readout, where ideally the spin of the ensemble should be read out with single atom resolution. In the context of cold atoms, advances have been made where close to single atom resolution has been achieved \cite{hume2013accurate,huper2019preparation}. Analogous challenges have been present in the optical context, where the lack of single photon resolution has been shown not to be an impediment towards reaching quantum supremacy \cite{zhong2020quantum,wu2021strong,quesada_gaussian_2018,shi2021gaussian}. 
\begin{acknowledgments} This work is supported by the National Natural Science Foundation of China (62071301); NYU-ECNU Institute of Physics at NYU Shanghai; the Joint Physics Research Institute Challenge Grant; the Science and Technology Commission of Shanghai Municipality (19XD1423000,22ZR1444600); the NYU Shanghai Boost Fund; the China Foreign Experts Program (G2021013002L); the NYU Shanghai Major-Grants Seed Fund. \end{acknowledgments} \appendix \section{Completely symmetric subspace} In this section, we show the mapping between the completely symmetric qubit states and bosonic Fock states on a single ensemble. The Hilbert space of each ensemble is formally of dimension $ 2^N$. However, the collective spin operators are symmetric under particle interchange, and we will also start in an initial state that obeys this symmetry. Under these conditions, it is possible to restrict the Hilbert space to a smaller subspace, where all states are symmetric under interchange \cite{Byrnes2021}. In this case we may map the total spin operators to Schwinger boson operators \begin{align} \hat{S}^x_m & = \hat{a}_m^\dagger\hat{b}_m+\hat{b}_m^\dagger\hat{a}_m \nonumber \\ \hat{S}^y_m & = -i\hat{a}_m^\dagger\hat{b}_m+i\hat{b}_m^\dagger\hat{a}_m \nonumber \\ \hat{S}^z_m & = \hat{a}_m^\dagger\hat{a}_m-\hat{b}_m^\dagger\hat{b}_m, \end{align} which act on the orthonormal Fock states defined as \begin{align} |k\rangle = \frac{(\hat{a}^\dagger_m)^k(\hat{b}^\dagger_m)^{N-k}}{\sqrt{k!(N-k)!}} | \text{vac} \rangle , \end{align} where $\hat{a}_m $ and $\hat{b}_m $ are bosonic annihilation operators. Thus for a single qubit ensemble, there are $N+1$ Fock states available, and the total Hilbert space of the $ M $ ensemble system has a dimension of $ (N+1)^M $. \section{Ensemble gates} In this section we show how to produce two- and three-ensemble entangling gates.
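Before constructing these gates, the Schwinger-boson mapping of the previous appendix can be sanity-checked numerically. The following is a minimal sketch (the value of $N$ is illustrative): it builds the matrix of $\hat{a}^\dagger\hat{b}$ on the $(N+1)$-dimensional Fock basis and verifies the collective-Pauli commutation relation $[\hat{S}^x, \hat{S}^y] = 2i\hat{S}^z$.

```python
import numpy as np

N = 5  # ensemble size (illustrative)
dim = N + 1

# Matrix of a^dagger b on the Fock basis |k>, k = 0..N:
# a^dagger b |k> = sqrt((k+1)(N-k)) |k+1>
ad_b = np.zeros((dim, dim))
for k in range(N):
    ad_b[k + 1, k] = np.sqrt((k + 1) * (N - k))
bd_a = ad_b.T  # b^dagger a is the Hermitian conjugate

Sx = ad_b + bd_a
Sy = -1j * ad_b + 1j * bd_a
Sz = np.diag([2 * k - N for k in range(dim)])  # a^dag a - b^dag b

# Collective-Pauli commutation relation [S^x, S^y] = 2i S^z
comm = Sx @ Sy - Sy @ Sx
assert np.allclose(comm, 2j * Sz)
```

The factor of 2 appears because the collective operators are sums of full Pauli matrices rather than spin-1/2 operators.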
It also serves as an example for demonstrating the construction of commuting unitary gates, which are the key component of IQP circuits in our system. Such commuting gates can be produced by universality arguments, as we show below. \subsection{Two-ensemble interactions} Expanding the two-ensemble squeezing gate, we see that it is a combination of an entangling interaction as well as squeezing on the individual ensembles \begin{align} \hat{Q}_{nm} (\xi) & = e^{-i ((\hat{S}^z_n)^2 + (\hat{S}^z_m)^2 + 2 \hat{S}^z_n \hat{S}^z_m) \xi} . \end{align} By applying local squeezing operators we may obtain the two-ensemble interaction gate \begin{align} T_{nm} (\xi) & = \hat{Q}_{nm} (\xi/2) \hat{Q}_{n} (-\xi/2) \hat{Q}_{m} (-\xi/2) \nonumber \\ & = e^{-i \hat{S}^z_n \hat{S}^z_m \xi} . \end{align} \subsection{Three-ensemble interactions} In this section we show that, using the assumed operations of the main text, it is possible to produce effective three-ensemble interactions. First consider the evolution sequence \begin{align} U^A_{nm} & = e^{i \hat{S}^y_n \pi/4} \hat{Q}_{nm} (\phi) \hat{Q}_n (- \phi) \hat{Q}_m (-\phi) e^{-i \hat{S}^y_n \pi/4} \nonumber \\ & = e^{i \hat{S}^y_n \pi/4} e^{-i ( \hat{S}^z_n + \hat{S}^z_{m})^2 \phi } e^{ i ( ( \hat{S}^z_n )^2 +( \hat{S}^z_m )^2) \phi } e^{-i \hat{S}^y_n \pi/4} \nonumber \\ & = e^{i \hat{S}^y_n \pi/4} e^{-2 i \hat{S}^z_n \hat{S}^z_{m} \phi } e^{-i \hat{S}^y_n \pi/4} \nonumber \\ &= e^{2 i \hat{S}^x_n \hat{S}^z_m \phi } . \end{align} We can see that the above sequence produces the effective Hamiltonian \begin{align} H_{nm}^A = \hat{S}^x_n \hat{S}^z_m . \label{hama} \end{align} Similarly, defining the same sequence but with initial rotations around the $ x $-axis, we have \begin{align} U^B_{nm} & = e^{i \hat{S}^x_n \pi/4}\hat{Q}_{nm} (\phi) \hat{Q}_n (-\phi) \hat{Q}_m (-\phi) e^{-i \hat{S}^x_n \pi/4} \nonumber \\ &= e^{2 i \hat{S}^y_n \hat{S}^z_m \phi } .
\end{align} which gives rise to the Hamiltonian \begin{align} H_{nm}^B = \hat{S}^y_n \hat{S}^z_m . \label{hamb} \end{align} We then use the general result of Ref. \cite{lloyd1995almost}, where it is shown that \begin{align} e^{[A,B] t} \approx \left( e^{-iB \sqrt{t/n} } e^{-iA \sqrt{t/n} } e^{iB \sqrt{t/n} } e^{iA \sqrt{t/n} } \right)^n , \label{commutatorlloyd} \end{align} where $ A, B $ are Hamiltonians that are available and the approximation improves for large $ n $. The meaning of this is that if Hamiltonians $ A, B $ are available, then it is also possible to implement the Hamiltonian $ i [A,B] $. Using the commutator relation (\ref{commutatorlloyd}) together with (\ref{hama}) and (\ref{hamb}), it is possible to produce the three-ensemble interaction \begin{align} H_{nml}^{(3)} & = i [H_{nm}^A, H_{nl}^B] \nonumber \\ & = - 2 \hat{S}^z_n \hat{S}^z_m \hat{S}^z_{l} , \end{align} as desired. \subsection{Higher order interactions} Higher order interactions can be produced by the same arguments as for the three-ensemble interactions. For example, by commuting Hamiltonians (\ref{hama}) and \begin{align} H_{nml}^B = \hat{S}^y_n \hat{S}^z_m \hat{S}^z_{l} \end{align} we may generate fourth order Hamiltonians, and the process can be repeated. \section{Dephasing evolution} We apply a dephasing evolution of Lindblad form to examine the effect of decoherence on the Porter-Thomas distribution. The master equation reads \begin{equation} \frac{d\rho}{d t} = -\frac{i}{\hbar}[H,\rho]+ \gamma\sum_{m=1}^M \left(\hat{S}^z_m\rho \hat{S}^z_m-\frac{1}{2}\{ (\hat{S}^z_m)^2,\rho\}\right) . \end{equation} We apply the master equation after each cycle, i.e. after application of $ \hat{U}_l \hat{\cal Q} $.
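With $H = 0$, the Lindblad terms act diagonally on the Fock-basis matrix elements, since $\hat{S}^z_m |k_m\rangle = (2k_m - N)|k_m\rangle$. A minimal numerical check of the elementwise decay rates for a single ensemble (the values of $N$ and $\gamma$ are illustrative):

```python
import numpy as np

# Single ensemble (M = 1), illustrative parameters
N, gamma = 4, 0.3
s = np.array([2 * k - N for k in range(N + 1)])  # S^z eigenvalues of |k>

# Rate multiplying rho_{kk'} from the Lindblad terms:
# gamma * (s s' - (s^2 + s'^2)/2)
rate = gamma * (np.outer(s, s) - 0.5 * np.add.outer(s**2, s**2))

# Equivalent closed form: -2 gamma (k - k')^2, so off-diagonal elements
# decay as exp(-2 tau (k - k')^2) with tau = gamma t
k = np.arange(N + 1)
assert np.allclose(rate, -2.0 * gamma * np.subtract.outer(k, k) ** 2)
```

Diagonal elements ($k = k'$) have zero rate, so populations are preserved while coherences decay.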
In the case that $ H = 0 $, the master equation can be solved exactly, such that after evolving for a time $ t $, the density matrix becomes \begin{align} \rho_{\vec{k} \vec{k}'} (\tau) = e^{-2 \tau \sum_{m=1}^M (k_m- k_m')^2 } \rho_{\vec{k} \vec{k}'} (0) , \end{align} where $ \tau = \gamma t $ and the density matrix elements are defined as \begin{align} \rho_{\vec{k} \vec{k}'} (\tau) & = \langle \vec{k} | \rho(\tau ) | \vec{k}' \rangle \end{align} and $\vec{k} = (k_1, \dots, k_M) $. \section{Computational complexity proof} In this section, we provide the complexity proof sketched in the main text. First, we elaborate on the construction of a worst-case circuit in our scheme for which exactly computing the probability amplitude is $\#$P-hard. Next, we provide a detailed sketch of proof for the second and third steps mentioned in the main text. The second step is the worst-to-average-case reduction proof employed in Ref. \cite{Bouland2018}, which shows that approximating the output probability of an average-case circuit in our scheme is $\#$P-hard too. The third step relates an individual output probability to simulating the whole distribution, mainly with the help of the Stockmeyer Counting Theorem. We then disprove the existence of an efficient classical sampler that can approximate the distribution to an additive error, as otherwise the polynomial hierarchy would collapse to its third level. \subsection{Worst-case circuit} \label{sec:worstcaseconj} We consider a similar approach of constructing the worst-case circuit as in Ref. \cite{bremner_average-case_2016}. In their approach, the output probability of the constructed circuit is mapped to a function which is known to be computationally hard.
More specifically, they find a circuit $\hat{\cal C}_\te{H}$ such that \begin{equation} \langle \vec{0} |\hat{\cal C}_\te{H} | \psi_0 \rangle \propto \sum_z (-1)^{f(z)} , \label{previousres} \end{equation} where $z$ is a binary string of length $ M $ and $f(z)$ is a degree 3 polynomial that maps $z$ to integers, namely $f(z) \in \mathbb{Z}$. The right hand side of (\ref{previousres}) is known to be a $\#$P-hard function to compute exactly, since it corresponds to the counting problem of the number of $z$ whose $f(z)$ is even minus the number of $z$ whose $f(z)$ is odd. This might seem different from the original definition, where $f(z)\in \{0,1\}$. But as pointed out in the main paper, only the parity of $f(z)$ matters, such that the mapping $f(z) \rightarrow f(z) \bmod 2$ makes the problems equivalent. While the initial state is fixed, the final state is dependent upon the measurement outcome. Due to the ``hiding'' property of RCS circuits \cite{Bouland2018}, we can however focus on a fixed output. In this way, it is argued that exactly computing the output amplitude of the random circuit $\hat{\cal C}_\te{H} $ is a \#P-hard problem. Intuitively, the hardness of the computational task comes from the fact that calculating the summation on the right hand side of (\ref{previousres}) requires enumerating $2^M$ combinations of $z$. Now let us consider our ensemble case, specifically the circuit given in Fig. 3(a) of the main text. For this calculation we shall derive the amplitude corresponding to (\ref{previousres}) in the qubit formalism. Firstly, the initial state can be written as \begin{align} | \psi_0 \rangle = \prod_{m=1}^M |0 \rangle^{\otimes N} = | 0 \rangle^{\otimes NM } .
\end{align} After the Hadamard gates the state becomes \begin{align} H^{\otimes M } | \psi_0 \rangle = | + \rangle^{\otimes NM} = \frac{1}{\sqrt{2^{NM}}} \sum_{\vec{\sigma}} | \vec{\sigma} \rangle , \end{align} where \begin{align} \vec{\sigma} = (\sigma_{1,1}, \dots, \sigma_{n,m}, \dots, \sigma_{N,M} ) \end{align} is a microscopic spin configuration over the $ NM $ qubits. The spin $ \sigma_{n,m} = \pm 1 $ refers to the spin configuration of the $ n$th spin within the $ m$th ensemble. Applying the sequence of gates in Fig. 3(a) of the main text, we have \begin{widetext} \begin{align} & \left[ \prod_{m} U_{m}^z (\theta_{m}) \right] \left[ \prod_{m_1 m_2} T_{m_1 m_2} (\xi_{m_1 m_2}) \right] \left[ \prod_{m_1 m_2 m_3} R_{m_1 m_2 m_3} (\chi_{m_1 m_2 m_3}) \right] H^{\otimes M } | \psi_0 \rangle \nonumber \\ & =\exp \left( -i \sum_{m} \theta_{m} \hat{S}^z_{m} \right) \exp \left( -i \sum_{m_1 m_2} \xi_{m_1 m_2} \hat{S}^z_{m_1} \hat{S}^z_{m_2} \right) \exp \left( -i \sum_{m_1 m_2 m_3} \chi_{m_1 m_2 m_3} \hat{S}^z_{m_1} \hat{S}^z_{m_2} \hat{S}^z_{m_3} \right) | + \rangle^{\otimes NM} \nonumber \\ & = \frac{1}{\sqrt{2^{NM}}} \sum_{\vec{\sigma}} \exp \bigg(-i \sum_{m_1 m_2 m_3} \sum_{n_1 n_2 n_3} \chi_{m_1 m_2 m_3} \sigma_{n_1,m_1} \sigma_{n_2,m_2} \sigma_{n_3,m_3} -i \sum_{m_1 m_2} \sum_{n_1 n_2} \xi_{m_1 m_2} \sigma_{n_1,m_1} \sigma_{n_2,m_2} \nonumber \\ & -i \sum_{m} \sum_n \theta_{m} \sigma_{n,m} \bigg) | \vec{\sigma} \rangle . 
\end{align} \end{widetext} The particular measurement outcome $ | 0 \rangle^{\otimes NM} $ has the amplitude \begin{widetext} \begin{align} & \langle 0 |^{\otimes NM} H^{\otimes M } \left[ \prod_{m} U_{m}^z (\theta_{m}) \right] \left[ \prod_{m_1 m_2} T_{m_1 m_2} (\xi_{m_1 m_2}) \right] \left[ \prod_{m_1 m_2 m_3} R_{m_1 m_2 m_3} (\chi_{m_1 m_2 m_3}) \right] H^{\otimes M } | \psi_0 \rangle \nonumber \\ & = \frac{1}{2^{NM}} \sum_{\vec{\sigma}} \exp \bigg( -i \sum_{m_1 m_2 m_3} \sum_{n_1 n_2 n_3} \chi_{m_1 m_2 m_3} \sigma_{n_1,m_1} \sigma_{n_2,m_2} \sigma_{n_3,m_3} -i \sum_{m_1 m_2} \sum_{n_1 n_2} \xi_{m_1 m_2} \sigma_{n_1,m_1} \sigma_{n_2,m_2} \nonumber \\ & \quad -i \sum_{m_1} \sum_{n_1} \theta_{m_1} \sigma_{n_1,m_1} \bigg) , \label{amplitudefinal} \end{align} \end{widetext} where we used $ \langle 0 |^{\otimes NM} H^{\otimes M } = \langle + |^{\otimes NM} = \frac{1}{\sqrt{2^{NM}}} \sum_{\vec{\sigma}} \langle \vec{\sigma} | $. Let us now find what the coefficients $ \chi, \xi, \theta$ should be for a specified function $ f(\vec{\sigma}) $ given in Eq. (8) of the main text. Substituting the relation \begin{align} k_m = \frac{1}{2} (N + \sum_{n=1}^N \sigma_{n,m}), \end{align} into Eq.
(8), we have \begin{align} f(\vec{\sigma}) = & \sum_{m_1 m_2 m_3} \sum_{n_1 n_2 n_3} \frac{\alpha_{m_1 m_2 m_3}}{8} \sigma_{n_1,m_1} \sigma_{n_2,m_2} \sigma_{n_3,m_3} \nonumber \\ & + \sum_{m_1 m_2} \sum_{n_1 n_2} \bigg[ \frac{N}{8} \sum_m ( \alpha_{m_1 m_2 m} + \alpha_{m_1 m m_2} \nonumber \\ & + \alpha_{m m_2 m_1} )+ \frac{\beta_{m_1 m_2}}{4} \bigg] \sigma_{n_1,m_1} \sigma_{n_2,m_2} \nonumber \\ & + \sum_{m_1} \sum_{n_1} \bigg[ \frac{N^2}{8} \sum_{m m'} ( \alpha_{m_1 m m'} + \alpha_{m m_1 m'} \nonumber \\ &+ \alpha_{m m' m_1}) + \frac{N}{4} \sum_m ( \beta_{m_1 m} + \beta_{m m_1}) + \frac{\gamma_{m_1}}{2} \bigg] \sigma_{n_1,m_1} \nonumber \\ & + \frac{N^3}{8} \sum_{m_1 m_2 m_3} \alpha_{m_1 m_2 m_3}+ \frac{N^2}{4} \sum_{m_1 m_2} \beta_{m_1 m_2} \nonumber\\ &+ \frac{N}{2} \sum_{m_1} \gamma_{m_1} . \end{align} Matching this to (\ref{amplitudefinal}), we may choose the circuit parameters as \begin{align} \chi_{m_1 m_2 m_3} = & \frac{\pi \alpha_{m_1 m_2 m_3}}{8} \nonumber \\ \xi_{m_1 m_2} = & \frac{N\pi }{8} \sum_m ( \alpha_{m_1 m_2 m} + \alpha_{m_1 m m_2} + \alpha_{m m_2 m_1} ) \nonumber \\ &+ \frac{\pi \beta_{m_1 m_2}}{4} \nonumber \\ \theta_{m_1} = & \frac{N^2 \pi }{8} \sum_{m m'} ( \alpha_{m_1 m m'} + \alpha_{m m_1 m'} + \alpha_{m m' m_1}) \nonumber \\ &+ \frac{N\pi}{4} \sum_m ( \beta_{m_1 m} + \beta_{m m_1}) + \frac{\gamma_{m_1}\pi }{2} . \end{align} Choosing these parameters, and taking the modulus squared of the amplitude (\ref{amplitudefinal}) gives the probability \begin{align} p_{\vec{0}} = & |\langle 0 |^{\otimes NM} H^{\otimes M } \left[ \prod_{m} U_{m}^z (\theta_{m}) \right] \left[ \prod_{m_1 m_2} T_{m_1 m_2} (\xi_{m_1 m_2}) \right] \nonumber \\ &\times \left[ \prod_{m_1 m_2 m_3} R_{m_1 m_2 m_3} (\chi_{m_1 m_2 m_3}) \right] H^{\otimes M } | \psi_0 \rangle|^2 \nonumber \\ = & \frac{1}{4^{NM}} | \sum_{\vec{\sigma}} e^{-i \pi f(\vec{\sigma})} |^2 \end{align} as claimed in the main text.
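The parameter matching can be verified by brute force for small systems. The following sketch (illustrative sizes $N = M = 2$; `gam` stands for $\gamma_{m}$, and Eq. (8) of the main text is taken to be $f = \sum \alpha\, k k k + \sum \beta\, k k + \sum \gamma\, k$) checks that the circuit phase built from $\chi, \xi, \theta$ differs from $\pi f(\vec{\sigma})$ only by a $\vec{\sigma}$-independent constant, i.e. a global phase:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
N, M = 2, 2
alpha = rng.integers(0, 2, size=(M, M, M)).astype(float)
beta = rng.integers(0, 2, size=(M, M)).astype(float)
gam = rng.integers(0, 2, size=M).astype(float)  # gamma_m coefficients

# Circuit parameters chosen by the matching above
chi = np.pi * alpha / 8
xi = (N * np.pi / 8) * (alpha.sum(axis=2) + alpha.sum(axis=1)
                        + alpha.sum(axis=0).T) + np.pi * beta / 4
theta = (N**2 * np.pi / 8) * (alpha.sum(axis=(1, 2)) + alpha.sum(axis=(0, 2))
                              + alpha.sum(axis=(0, 1))) \
    + (N * np.pi / 4) * (beta.sum(axis=1) + beta.sum(axis=0)) + np.pi * gam / 2

diffs = []
for sig in product([-1, 1], repeat=N * M):
    S = np.array(sig).reshape(M, N).sum(axis=1)  # S_m = sum_n sigma_{n,m}
    k = (N + S) / 2
    f = (np.einsum('abc,a,b,c->', alpha, k, k, k)
         + np.einsum('ab,a,b->', beta, k, k) + gam @ k)
    circ = (np.einsum('abc,a,b,c->', chi, S, S, S)
            + np.einsum('ab,a,b->', xi, S, S) + theta @ S)
    diffs.append(np.pi * f - circ)

# The two phases agree up to a sigma-independent constant (a global phase)
assert np.allclose(diffs, diffs[0])
```

The constant offset is exactly the $\vec{\sigma}$-independent terms of $f$, which drop out of $|\cdot|^2$.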
We note that the sum over $ 2^{NM}$ spin configurations can be reduced to a sum over $ (N+1)^M $ using the fact that $ k_m $ only depends on a global property of the ensemble. This does not affect the complexity since even in the case of $ N = 1$, the problem is still \#P-hard. In the main text we have limited ourselves to $ \alpha, \beta, \gamma \in \{0,1\} $ in order to take advantage of the complexity of degree 3 polynomials. Expanding the range of $\alpha,\beta,\gamma$ will give a larger range of values of $ e^{-i\pi f(\vec{\sigma}) } $, such that (7) in the main text will involve a sum over $(N+1)^M$ complex phases without any structure, further increasing the complexity of the evaluation. \subsection{ Worst-to-average-case reduction } In this part we provide a proof sketch of the worst-to-average-case reduction developed in Ref. \cite{Bouland2018}, which enables us to prove \#P-hardness of exactly computing the output probability of an average-case quantum circuit. In the previous part we have shown the existence of a worst-case circuit whose output probability is \#P-hard to exactly compute. By showing that the probabilities of the worst-case circuit can be obtained from probabilities of average-case circuits in polynomial time, we can prove that the exact computation of the latter is also \#P-hard. The basic idea is to express the probability as a function of a variable characterizing different circuits. When the variable takes certain values, the function outputs the probability of the worst-case circuit. If we can infer the form of the function using a polynomial number of points obtained from the average-case circuits, the worst-to-average reduction is established. The first step is to connect the average-case circuit and the worst-case circuit based on the Haar-measure invariance of matrix multiplication.
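The interpolating construction used in this step can be prototyped directly. In the sketch below (the $4\times 4$ dimension and the stand-in "worst-case" matrix $\mathcal{C}$ are purely illustrative), $\mathcal{G}_1(\theta) = \mathcal{C}\,\mathcal{H}\,e^{-\theta\log\mathcal{H}}$ recovers the worst case at $\theta = 1$ and is fully scrambled by the Haar-random $\mathcal{H}$ at $\theta = 0$:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(6)
d = 4

# Haar-random unitary H via QR decomposition with phase fix
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
q, r = np.linalg.qr(z)
H = q * (np.diag(r) / np.abs(np.diag(r)))

C = np.eye(d)[::-1] + 0j  # stand-in "worst-case" circuit (illustrative)

def G1(theta):
    """C * H1(theta), with H1(theta) = H exp(-theta log H)."""
    return C @ H @ expm(-theta * logm(H))

assert np.allclose(G1(1.0), C)      # theta = 1 recovers the worst case
assert np.allclose(G1(0.0), C @ H)  # theta = 0 is fully scrambled by H
```

Varying $\theta$ between these endpoints traces out the family of circuits used in the reduction.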
Specifically, we take a worst-case circuit $\mathcal{C}$ and a Haar-random matrix $\mathcal{H}$, and construct a circuit as the multiplication of $\mathcal{C}$ and a fraction of $\mathcal{H}$: \begin{align} &\mathcal{G}_1(\theta) = \mathcal{C} \mathcal{H}_1(\theta) \label{C1} \\ &\mathcal{H}_1(\theta) = \mathcal{H} \mathrm{e}^{-\theta\log \mathcal{H}} . \end{align} When $\theta=1$, $\mathcal{G}_1(1) = \mathcal{C}$ becomes the worst-case circuit, while for $\theta\rightarrow0$ the circuit is completely scrambled. In this way, $\mathcal{G}_1(\theta)$ builds the connection between the worst case and the average case via the variable $\theta$. Now the task becomes using $\langle\vec{0}|\mathcal{G}_1(\theta\rightarrow 0)|\psi_0\rangle $ to calculate $\langle\vec{0}|\mathcal{G}_1(\theta=1)|\psi_0\rangle $ in polynomial time. Through the Berlekamp-Welch Algorithm \cite{welch1986} we know that if $\langle\vec{0}|\mathcal{G}_1(\theta)|\psi_0\rangle$ is a degree $d$ polynomial of $\theta$, then with at least $d+1$ different points, $\langle\vec{0}|\mathcal{G}_1(\theta)|\psi_0\rangle$ can be recovered in poly($d$) deterministic time. Therefore the next step is to Taylor expand $\mathrm{e}^{-\theta\log \mathcal{H}}$ in Eq.~(\ref{C1}) and truncate at a certain degree: \begin{align} &\mathcal{G}_2(\theta,K) = \mathcal{C} \mathcal{H}_2(\theta,K) \label{C2} \\ &\mathcal{H}_2(\theta,K) = \mathcal{H} \sum_{k=0}^K \dfrac{(-\theta \log \mathcal{H})^k}{k!} . \end{align} According to the standard error bound of the Taylor series, the distance between $\langle\vec{0}|\mathcal{G}_2(\theta,K)|\psi_0\rangle$ and $\langle\vec{0}|\mathcal{G}_1(\theta)|\psi_0\rangle$ is at most $2^{-\text{poly}(n)}$ for a sufficiently large choice of $K=\text{poly}(n)$. This applies to the distance between $\langle\vec{0}|\mathcal{G}_2(1,K)|\psi_0\rangle$ and $\langle\vec{0}|\mathcal{C}|\psi_0\rangle$ as well, which is what we wish to calculate in the end.
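The recovery step can be illustrated with plain polynomial interpolation (the actual proof uses the error-tolerant Berlekamp-Welch decoder and sample points in $[0, 1/\text{poly}(n))$; the degree and sample interval below are chosen only for numerical convenience):

```python
import numpy as np

# A degree-d polynomial standing in for <0|G_2(theta, K)|psi_0>
d = 5
rng = np.random.default_rng(2)
q = np.polynomial.Polynomial(rng.normal(size=d + 1))

# Recover the polynomial from d+1 noiseless samples at small theta,
# then evaluate at theta = 1 (the worst-case circuit)
thetas = np.linspace(0.0, 0.5, d + 1)
recovered = np.polynomial.Polynomial.fit(thetas, q(thetas), deg=d)

assert np.isclose(recovered(1.0), q(1.0))
```

A degree-$d$ polynomial is uniquely determined by $d+1$ points, so the extrapolated value at $\theta = 1$ matches the "worst-case" value exactly (up to floating-point error).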
As a Haar-random matrix, $\mathcal{H}$ can be decomposed into its diagonal form as \begin{equation} \mathcal{H} = U \text{diag}( \mathrm{e}^{\mathrm{i}\phi_1}, \mathrm{e}^{\mathrm{i}\phi_2}, ..., \mathrm{e}^{\mathrm{i}\phi_N} ) U^\dag , \end{equation} where $U$ is formed from the eigenvectors of the decomposition. Therefore $\mathrm{e}^{-\theta\log \mathcal{H}}$ can only affect the eigenvalues of $\mathcal{H}$, reducing the range of $\{\phi_1, \phi_2,..., \phi_N\}$ from $2\pi$ to $2\pi(1-\theta)$. For a Haar-random choice of $\mathcal{H}_1$, with probability $1 - 1/\text{poly} (n) $ it falls into the distribution of the constructed average-case circuit $\mathcal{G}_1(\theta)$ if we let $\theta$ = 1/poly($n$). Combining these two steps, it is easy to see that if there exists a machine $O$ that can exactly compute $\langle \vec{0} |\mathcal{G}_2|\psi_0\rangle$ in polynomial time for 3/4 of the $\mathcal{G}_2$ drawn from the aforementioned distribution as average-case circuits, then with at least $d+1$ queries of $O$ for $\{\theta_1, \theta_2,..., \theta_{d+1}\}\in[0, 1/\text{poly}(n))$, the degree $d$ polynomial $\langle\vec{0}|\mathcal{G}_2(\theta,K) |\psi_0\rangle$ can be recovered using the Berlekamp-Welch Algorithm. After that we can calculate $\langle \vec{0}|\mathcal{G}_2(1,K) |\psi_0\rangle$, which equals $\langle \vec{0}|\mathcal{C}|\psi_0\rangle$ within additive precision $2^{-\text{poly}(n)}$. In this way we have reduced the hardness of exactly computing the output probability of the worst-case circuit to that of the average-case circuit. \subsection{The complexity of approximating the average-case circuit} In the previous part we have sketched the worst-to-average-case reduction for exact computation. In this part, we show that this reduction still works for the approximation version of the problem.
This is mainly because the distance between the worst-case probability $\langle\vec{0}|\mathcal{C}|\psi_0\rangle$ and $\langle\vec{0}|\mathcal{G}_2(1,K)|\psi_0\rangle$, which is calculated through average-case probabilities, is much smaller than the distance between $\langle\vec{0}|\mathcal{C}|\psi_0\rangle$ and its approximation $\widetilde{\langle\vec{0}|\mathcal{C}|\psi_0\rangle}$, i.e. $\left| \langle\vec{0}|\mathcal{C}|\psi_0\rangle-\langle\vec{0}|\mathcal{G}_2(1,K)|\psi_0\rangle \right| < \left|\langle\vec{0}|\mathcal{C}|\psi_0\rangle-\widetilde{\langle\vec{0}|\mathcal{C}|\psi_0\rangle}\right| \leq 2^{-n}/\text{poly}(n) $. Hence if there exists an efficient approximation method for the average-case circuit, then using the same reduction technique as in the previous part, the approximated probability for the worst-case circuit can also be calculated. In other words, if there exists a $(\delta,\epsilon)$-approximator $O'$ for the average-case circuit, \begin{equation} \text{Pr}\left( | O'(\mathcal{G}_2(\theta,K)) - \langle\vec{0}|\mathcal{G}_2(\theta,K)|\psi_0\rangle | \leq \epsilon \right) \geq 1 - \delta , \end{equation} then it is also a $(\delta',\epsilon')$-approximator for the worst-case circuit, \begin{equation} \text{Pr}\left( | O'(\mathcal{G}_2(1,K)) - \langle\vec{0}|\mathcal{C}|\psi_0\rangle | \leq \epsilon' \right) \geq 1 - \delta' , \end{equation} where $\delta'=\delta+1/\text{poly}(n), \epsilon'=\epsilon+1/\text{exp}(n)$. \subsection{From approximating individual output probabilities to classical simulation} So far we have established the complexity of approximating a single output probability of an average-case circuit. Now we move on to disprove the existence of an efficient classical sampler that can simulate the distribution to within a variation-distance error.
Let us suppose there exists an efficient classical sampler $\mathcal{A}(\mathcal{G}')$ that can sample from a probability distribution which approximates the output probability distribution of an average-case circuit $\mathcal{G}'$ up to additive error $\epsilon$ in the $l_1$ norm, \begin{equation} \sum_{\vec{x}} |q_{\vec{x}} - p_{\vec{x}}| < \epsilon, \end{equation} where $q_{\vec{x}}$ denotes the probability of obtaining output $\vec{x}$ from $\mathcal{A}(\mathcal{G}')$. According to the Stockmeyer Counting Theorem, there exists an algorithm with access to an NP-oracle that can approximate $q_{ \vec{x} }$ to a multiplicative error: \begin{equation} |\widetilde{q}_{\vec{x}} - q_{\vec{x}} | \leq \dfrac{q_{\vec{x}}}{\text{poly}(n)}. \end{equation} Since $p_{\vec{x}}$ follows the Porter-Thomas distribution such that $\text{E}_{\vec{x}}(p_{\vec{x}}) = 1/(N+1)^M$, from Markov's inequality we have \begin{equation} \text{Pr}\left( |q_{\vec{x}}-p_{\vec{x}} | \geq \dfrac{\epsilon}{(N+1)^M\delta} \right) \leq \delta. \end{equation} Then with probability at least $1-\delta$ over the choice of $\vec{x}$, \begin{equation} \begin{split} |\widetilde{q}_{\vec{x}} - p_{\vec{x}} | &\leq |\widetilde{q}_{\vec{x}} - q_{\vec{x}} | + |q_{\vec{x}}-p_{\vec{x}} | \\ &\leq \dfrac{q_{\vec{x}}}{\text{poly}(n)} + |q_{\vec{x}}-p_{\vec{x}} | \\ &\leq \dfrac{p_{\vec{x}}}{\text{poly}(n)} + \left(1+\dfrac{1}{\te{poly}(n)}\right) |q_{\vec{x}}-p_{\vec{x}} | \\ &\leq \dfrac{p_{\vec{x}}}{\text{poly}(n)} + \left(1+\dfrac{1}{\te{poly}(n)}\right)\dfrac{\epsilon}{(N+1)^M\delta} . \end{split} \end{equation} Letting $\epsilon=\delta/8$ and using the fact that the Porter-Thomas distribution gives \begin{equation} \te{Pr}\left( p_{\vec{x}} > \dfrac{1}{(N+1)^M} \right) = \dfrac{1}{e}, \end{equation} we find that with probability $1/e - \delta$, we can approximate $p_{\vec{x}}$ to a multiplicative error of $1/4 + o(1)$.
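The Porter-Thomas tail probability used above is easy to confirm numerically. A minimal Monte Carlo sketch (the dimension $D$ and the sample count are illustrative): for $p$ exponentially distributed with mean $1/D$, $\te{Pr}(p > 1/D) = e^{-1}$.

```python
import numpy as np

# p ~ Exp(mean 1/D) is the Porter-Thomas distribution for dimension D
D = (10 + 1) ** 2  # stand-in for (N+1)^M, illustrative
rng = np.random.default_rng(3)
p = rng.exponential(scale=1.0 / D, size=200_000)

frac = np.mean(p > 1.0 / D)
assert abs(frac - np.exp(-1)) < 0.01  # Pr(p > 1/D) = 1/e
```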
Therefore the existence of an efficient classical sampler implies the existence of a $\te{BPP}^\te{NP}$ algorithm that can approximate the output probability of an average-case circuit, which is proven to be a \#P-hard problem. This would collapse the polynomial hierarchy to its third level, because $\te{BPP}^\te{NP}$ lies in the third level of the polynomial hierarchy \cite{arora_computational_2009}, yet by Toda's theorem the whole polynomial hierarchy is contained in $\te{P}^{\#\te{P}}$ \cite{toda_pp_1991}. \section{Feynman Path Integral based sampling algorithm} \subsection{Random qubit circuits} We first review the methods shown in Refs. \cite{Boixo2017,Boixo2018}, which introduced the Feynman Path Integral (FPI) based classical sampling algorithm for simulating a qubit-based quantum circuit. According to the definition of the path integral, the amplitude for finding a particular output of a qubit circuit can be expressed as \begin{equation} \langle \vec{\sigma}_f|{\hat{\cal C}} | \psi_0\rangle = \sum_{ \vec{\sigma}^1, \dots , \vec{\sigma}^{T-1} } \prod_{t=1}^T \langle \vec{\sigma}^t | C^{(t)} | \vec{\sigma}^{t-1}\rangle, \end{equation} where $ | \vec{\sigma}^t \rangle = \otimes_{m=1}^M |\sigma_m^t\rangle$ is a spin configuration and $\sigma_m^t = \pm 1$. The initial state is $ | \vec{\sigma}^{0} \rangle = | \psi_0\rangle $. The number of gates applied in total is $T$ and $| \vec{\sigma}_f \rangle = |\vec{\sigma}^T \rangle$ represents the final state. The total circuit consists of the sequence \begin{align} {\hat{\cal C}} = \prod_{t=1}^T C^{(t)} , \label{gatebygate} \end{align} where the product is applied in order from right to left, from $ t = 1 $ to $ t = T$. In (\ref{gatebygate}), each of the $ C^{(t)} $ is a single gate, such that the total circuit is written gate by gate. Ref.
\cite{Boixo2017} shows that it is equivalent to evaluating \begin{equation} \langle \vec{\sigma}_f|{\hat{\cal C}} | \psi_0\rangle = 2^{-G/2}\sum_s \text{exp}(i H_s ) , \label{amplitudepathint} \end{equation} where $G = \sum_{m=1}^M d(m)$ is the total number of two-sparse gates, i.e. gates with non-zero off-diagonal terms. In a given path, qubit $m$ goes through the path $\{s_m^k\}_{k=0}^{d(m)}$, where $d(m)$ is the number of two-sparse gates applied to qubit $m$. The set of all possible paths is then $s = \{s_m^k\}$ with $m\in[1\ldots M]$ and $k\in[0\ldots d(m)-1]$. The total number of paths required to recover the exact probability distribution is $2^G$. Here, $H_s$ is a Hamiltonian describing an effective classical 3D Ising model for the path integral model, which is given by \begin{equation} H_s = \sum_{m=1}^M \sum_{k=1}^{d(m)-1}h_m^k s_m^k +\sum_{m'<m}^M \sum_{k=1}^{d(m)-1}\sum_{l=1}^{d(m')-1} \mathcal{J}_{m m'}^{kl}s_m^k s_{m'}^l , \label{fullIsing} \end{equation} where $m,m'$ denote the space degrees of freedom that decide the coupling between different spins and $k,l$ denote the time degrees of freedom that constrain the layers of interaction, as shown in Fig. \ref{Ising}. We note that $ H_s $ also depends upon the initial state $ | \psi_0 \rangle $ and final state $ | \vec{\sigma}_f \rangle $ through boundary terms which are of fixed spin configuration $ s_m^k$. \begin{figure} \caption{Mapping of the quantum circuit to a classical Ising model. } \label{Ising} \end{figure} \subsection{Multi-ensemble random quantum circuit} We now apply the same technique to derive the path integral calculation for our ensemble based random quantum circuit.
As before, we would like to calculate the amplitude \begin{align} \langle \vec{k}_f | \hat{{\cal C}} | \psi_0 \rangle = \sum_{ \vec{k}^1, \dots, \vec{k}^{T-1} } \prod_{t=1}^T \langle \vec{k}^{t} | C^{(t)}| \vec{k}^{t-1} \rangle , \label{pathintens2} \end{align} where $ | \vec{k}^t \rangle= \otimes_{m=1}^M | k_m^t \rangle $ is an ensemble configuration with $ k_m^t \in [0,N] $. The initial state is $ | \vec{k}^0 \rangle = | \psi_0 \rangle $ and the final state is $ | \vec{k}_f \rangle = | \vec{k}^T \rangle $. The total circuit consists of the sequence \begin{align} {\hat{\cal C}} = \left( \prod_{l=1}^L \left[ \prod_{m=1}^M W_m^{(l)} \right] \hat{\cal Q} \right) \left( \prod_{m'=1}^M H_{m'} \right) , \end{align} where we denote the basis rotation operators $ W_m^{(l)} $ as the random rotation from the choice $\{ \hat{X}_m^{1/2}, \hat{Y}_m^{1/2}, \hat{Z}_m^{1/4} \} $ on the $ l$th cycle. The product is applied in order from right to left, from $ l = 1 $ to $ l = L$. The remaining product labels are also taken in order from right to left as $ m , m' $ increase, although the terms within these products commute, so this is an arbitrary choice. The $ C^{(t)} $ operators are then the individual gates in the order of this sequence, for example \begin{align} C^{(1)} & = H_1 \nonumber \\ C^{(M)} & = H_M \nonumber \\ C^{(M+1)} & = {\cal Q} \nonumber \\ C^{(M+2)} & = W_1^{(1)} \nonumber \\ C^{(2M+1)} & = W_M^{(1)} \nonumber \\ C^{(2M+2)} & = {\cal Q} . \end{align} The matrix elements in (\ref{pathintens2}) are diagonal for the operators $ \cal Q $ and $ Z_m^{1/4} $, taking matrix elements \begin{align} \langle \vec{k} | {\cal Q} | \vec{k'} \rangle & = \delta_{\vec{k} \vec{k'} } e^{-i \sum_{m=1}^M (2k_m- N)^2 \xi } \nonumber \\ \langle \vec{k} | Z_m^{1/4} | \vec{k'} \rangle & = \delta_{\vec{k} \vec{k'} } e^{-i (2k_m-N) \pi/8 } .
\label{diagmatelem} \end{align} The two remaining off-diagonal (two-sparse) operators have matrix elements \begin{align} \langle \vec{k} | X_m^{1/2} | \vec{k'} \rangle & = \left( \prod_{m'\ne m } \delta_{k_{m'} k_{m'}'} \right) \langle k_m | e^{-i S_m^y \pi/4 } | k_m' \rangle e^{i(k_m' -k_m)\pi/2} \nonumber \\ \langle \vec{k} | Y_m^{1/2} | \vec{k'} \rangle & = \left( \prod_{m'\ne m } \delta_{k_{m'} k_{m'}'} \right) \langle k_m | e^{-i S_m^y \pi/4 } | k_m' \rangle , \label{offdiagmatelem} \end{align} where \cite{Byrnes2021} \begin{align} & \langle k | e^{-i S^y \pi/4} | k' \rangle = \frac{\sqrt{ k'! (N-k')! k! (N-k)!} }{\sqrt{2^N}} \nonumber \\ & \times \sum_n \frac{(-1)^n }{(k-n)!(N-k'-n)!n!(k'-k+n)!} . \label{syrotmatrixelement} \end{align} We may picture the multidimensional sum in (\ref{pathintens2}) as a sum over paths through configuration space, where there are $ (N+1)^{M(T-1)} $ different routes to get from the initial state $ |\vec{k}^0 \rangle $ to $ |\vec{k}^{T} \rangle $. Due to the delta functions in the matrix elements (\ref{diagmatelem}) and (\ref{offdiagmatelem}), many of these paths have zero amplitude and hence may be removed. Only the off-diagonal terms (\ref{offdiagmatelem}) create a branching in configuration space, hence the total number of paths reduces to $ (N+1)^G $, where $ G $ is the total number of two-sparse ensemble gates. Running over the complete set of these paths in (\ref{pathintens2}) gives the final result. The number of paths $ (N+1)^G $ may be exceedingly large to sum over exhaustively. In this case, we may approximate the path integral by a random sample of the paths. Starting from the configuration corresponding to the initial state $ \vec{k}^0 $, we choose a random new configuration each time one of the two-sparse gates is applied.
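The matrix element formula (\ref{syrotmatrixelement}) can be checked against a direct matrix exponential of $S^y$ in the Schwinger-boson representation. The sketch below is illustrative (the value of $N$ is arbitrary, and the sum index $n$ is taken over the range for which all factorial arguments are non-negative):

```python
import numpy as np
from math import factorial, sqrt
from scipy.linalg import expm

N = 4
dim = N + 1

# Collective S^y in the Schwinger-boson Fock basis |k>, k = 0..N
ad_b = np.zeros((dim, dim))
for k in range(N):
    ad_b[k + 1, k] = np.sqrt((k + 1) * (N - k))  # a^dag b |k>
Sy = -1j * ad_b + 1j * ad_b.T

U = expm(-1j * Sy * np.pi / 4)

def element(k, kp):
    """Analytic <k| e^{-i S^y pi/4} |k'>, summing over valid n only."""
    pref = sqrt(factorial(kp) * factorial(N - kp)
                * factorial(k) * factorial(N - k)) / sqrt(2.0 ** N)
    return pref * sum(
        (-1) ** n / (factorial(k - n) * factorial(N - kp - n)
                     * factorial(n) * factorial(kp - k + n))
        for n in range(max(0, k - kp), min(k, N - kp) + 1))

analytic = np.array([[element(k, kp) for kp in range(dim)]
                     for k in range(dim)])
assert np.allclose(U, analytic)
```

This is the Wigner $d$-matrix at rotation angle $\pi/2$ for spin $j = N/2$, which is why the prefactor carries the $2^{-N/2}$ normalization.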
For example, for a two-sparse gate on ensemble $ m $, the new configuration is chosen according to \begin{align} \vec{k} = (k_1,\dots, k_m, \dots, k_M) \rightarrow \vec{k}' = (k_1,\dots, k_m', \dots, k_M) , \end{align} where $ k_m' $ is randomly selected from a uniform distribution. For the diagonal matrices $ Z_m^{1/4} $ and $ \cal Q$, the configuration is unchanged. This is repeated until the $ (T-1)$th configuration. The final configuration is fixed to $ | \vec{k}_f \rangle $, according to the final state. We then estimate the amplitude according to \begin{align} \Psi_{\text{est}} (\vec{k}_f) = \langle \vec{k}_f | \hat{{\cal C}} | \psi_0 \rangle_{\text{est}} \propto \sum_{ \vec{k} \in {\cal K} } \prod_{t=1}^T \langle \vec{k}^{t} | C^{(t)}| \vec{k}^{t-1} \rangle , \label{pathintest} \end{align} where $ {\cal K} $ is a set of random paths in configuration space. To find the fidelity, we perform the procedure in (\ref{pathintest}) for all the output states $ | \vec{k}_f \rangle $. We then normalize the state and compare it to the exact value using the fidelity \begin{align} F = \left| \sum_{\vec{k}_f } \Psi_{\text{est}}^* (\vec{k}_f) \Psi_{\text{exact}} (\vec{k}_f) \right|^2 . \end{align} The exact wavefunction is evaluated by a direct matrix multiplication \begin{align} \Psi_{\text{exact}} (\vec{k} ) = \langle \vec{k} | {\cal C} | \psi_0 \rangle .
\end{align} \begin{thebibliography}{63} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Preskill}(2012)}]{preskill2012quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1203.5813}\ } (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shor}(1999)}]{shor1999polynomial} \BibitemOpen \bibfield {author} {\bibinfo 
{author} {\bibfnamefont {P.~W.}\ \bibnamefont {Shor}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {SIAM review}\ }\textbf {\bibinfo {volume} {41}},\ \bibinfo {pages} {303} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Buluta}\ and\ \citenamefont {Nori}(2009)}]{buluta2009quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Buluta}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {326}},\ \bibinfo {pages} {108} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Georgescu}\ \emph {et~al.}(2014)\citenamefont {Georgescu}, \citenamefont {Ashhab},\ and\ \citenamefont {Nori}}]{georgescu2014quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~M.}\ \bibnamefont {Georgescu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ashhab}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {153} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cirac}\ and\ \citenamefont {Zoller}(2012)}]{cirac2012goals} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature physics}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {264} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bouland}\ \emph {et~al.}(2018)\citenamefont {Bouland}, \citenamefont {Fefferman}, \citenamefont {Nirkhe},\ and\ \citenamefont {Vazirani}}]{Bouland2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bouland}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont 
{Fefferman}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Nirkhe}}, \ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Vazirani}},\ }\href {\doibase 10.1038/s41567-018-0318-2} {\bibfield {journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {159} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Preskill}(2018)}]{preskill2018quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {79} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aaronson}\ and\ \citenamefont {Arkhipov}(2013)}]{aaronson_computational_2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Aaronson}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Arkhipov}},\ }\href {\doibase 10.4086/toc.2013.v009a004} {\bibfield {journal} {\bibinfo {journal} {Theory of Computing}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {143} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hamilton}\ \emph {et~al.}(2017)\citenamefont {Hamilton}, \citenamefont {Kruse}, \citenamefont {Sansoni}, \citenamefont {Barkhofen}, \citenamefont {Silberhorn},\ and\ \citenamefont {Jex}}]{hamilton_gaussian_2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~S.}\ \bibnamefont {Hamilton}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kruse}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Sansoni}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Barkhofen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Silberhorn}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Jex}},\ }\href {\doibase 10.1103/PhysRevLett.119.170501} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {119}},\ \bibinfo 
{pages} {170501} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bremner}\ \emph {et~al.}(2010)\citenamefont {Bremner}, \citenamefont {Jozsa},\ and\ \citenamefont {Shepherd}}]{Bremner2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Bremner}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jozsa}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Shepherd}},\ }\href {\doibase 10.1098/rspa.2010.0301} {\bibfield {journal} {\bibinfo {journal} {Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences}\ }\textbf {\bibinfo {volume} {467}},\ \bibinfo {pages} {459} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bremner}\ \emph {et~al.}(2016)\citenamefont {Bremner}, \citenamefont {Montanaro},\ and\ \citenamefont {Shepherd}}]{bremner_average-case_2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Bremner}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Montanaro}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Shepherd}},\ }\href {\doibase 10.1103/PhysRevLett.117.080501} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {080501} (\bibinfo {year} {2016})},\ \bibinfo {note} {arXiv: 1504.07999}\BibitemShut {NoStop} \bibitem [{\citenamefont {Arute}\ \emph {et~al.}(2019)\citenamefont {Arute}, \citenamefont {Arya}, \citenamefont {Babbush}, \citenamefont {Bacon}, \citenamefont {Bardin}, \citenamefont {Barends}, \citenamefont {Biswas}, \citenamefont {Boixo}, \citenamefont {Brandao}, \citenamefont {Buell}, \citenamefont {Burkett}, \citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Chiaro}, \citenamefont {Collins}, \citenamefont {Courtney}, \citenamefont {Dunsworth}, \citenamefont {Farhi}, \citenamefont {Foxen}, \citenamefont {Fowler}, \citenamefont {Gidney}, \citenamefont {Giustina}, \citenamefont 
{Graff}, \citenamefont {Guerin}, \citenamefont {Habegger}, \citenamefont {Harrigan}, \citenamefont {Hartmann}, \citenamefont {Ho}, \citenamefont {Hoffmann}, \citenamefont {Huang}, \citenamefont {Humble}, \citenamefont {Isakov}, \citenamefont {Jeffrey}, \citenamefont {Jiang}, \citenamefont {Kafri}, \citenamefont {Kechedzhi}, \citenamefont {Kelly}, \citenamefont {Klimov}, \citenamefont {Knysh}, \citenamefont {Korotkov}, \citenamefont {Kostritsa}, \citenamefont {Landhuis}, \citenamefont {Lindmark}, \citenamefont {Lucero}, \citenamefont {Lyakh}, \citenamefont {Mandr{\`{a}}}, \citenamefont {McClean}, \citenamefont {McEwen}, \citenamefont {Megrant}, \citenamefont {Mi}, \citenamefont {Michielsen}, \citenamefont {Mohseni}, \citenamefont {Mutus}, \citenamefont {Naaman}, \citenamefont {Neeley}, \citenamefont {Neill}, \citenamefont {Niu}, \citenamefont {Ostby}, \citenamefont {Petukhov}, \citenamefont {Platt}, \citenamefont {Quintana}, \citenamefont {Rieffel}, \citenamefont {Roushan}, \citenamefont {Rubin}, \citenamefont {Sank}, \citenamefont {Satzinger}, \citenamefont {Smelyanskiy}, \citenamefont {Sung}, \citenamefont {Trevithick}, \citenamefont {Vainsencher}, \citenamefont {Villalonga}, \citenamefont {White}, \citenamefont {Yao}, \citenamefont {Yeh}, \citenamefont {Zalcman}, \citenamefont {Neven},\ and\ \citenamefont {Martinis}}]{Arute2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Arute}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Arya}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bacon}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Bardin}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Biswas}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {F.~G. 
S.~L.}\ \bibnamefont {Brandao}}, \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Buell}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Burkett}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chiaro}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Collins}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Courtney}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dunsworth}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Foxen}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fowler}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gidney}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Giustina}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Graff}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Guerin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Habegger}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Harrigan}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Hartmann}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ho}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hoffmann}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {T.~S.}\ \bibnamefont {Humble}}, \bibinfo {author} {\bibfnamefont {S.~V.}\ \bibnamefont {Isakov}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kafri}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kechedzhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}}, \bibinfo {author} {\bibfnamefont {P.~V.}\ \bibnamefont {Klimov}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Knysh}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Korotkov}}, \bibinfo 
{author} {\bibfnamefont {F.}~\bibnamefont {Kostritsa}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Landhuis}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lindmark}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lucero}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Lyakh}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Mandr{\`{a}}}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {McEwen}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Mi}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Michielsen}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mohseni}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Mutus}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Naaman}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Neeley}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Neill}}, \bibinfo {author} {\bibfnamefont {M.~Y.}\ \bibnamefont {Niu}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Ostby}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Petukhov}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Platt}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Quintana}}, \bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Rieffel}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Rubin}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Satzinger}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Smelyanskiy}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Sung}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Trevithick}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vainsencher}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Villalonga}}, \bibinfo {author} 
{\bibfnamefont {T.}~\bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {Z.~J.}\ \bibnamefont {Yao}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Yeh}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zalcman}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Neven}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}},\ }\href {\doibase 10.1038/s41586-019-1666-5} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {574}},\ \bibinfo {pages} {505} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhong}\ \emph {et~al.}(2020)\citenamefont {Zhong}, \citenamefont {Wang}, \citenamefont {Deng}, \citenamefont {Chen}, \citenamefont {Peng}, \citenamefont {Luo}, \citenamefont {Qin}, \citenamefont {Wu}, \citenamefont {Ding}, \citenamefont {Hu} \emph {et~al.}}]{zhong2020quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-S.}\ \bibnamefont {Zhong}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {L.-C.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Luo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Qin}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ding}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Hu}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {370}},\ \bibinfo {pages} {1460} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wu}\ \emph {et~al.}(2021)\citenamefont {Wu}, \citenamefont {Bao}, \citenamefont {Cao}, \citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Chen}, \citenamefont {Chung}, \citenamefont {Deng}, \citenamefont {Du}, \citenamefont {Fan} \emph 
{et~al.}}]{wu2021strong} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {W.-S.}\ \bibnamefont {Bao}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {T.-H.}\ \bibnamefont {Chung}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Deng}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Du}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Fan}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {180501} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Greiner}\ \emph {et~al.}(2002)\citenamefont {Greiner}, \citenamefont {Mandel}, \citenamefont {Esslinger}, \citenamefont {H{\"a}nsch},\ and\ \citenamefont {Bloch}}]{greiner2002quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Greiner}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Mandel}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Esslinger}}, \bibinfo {author} {\bibfnamefont {T.~W.}\ \bibnamefont {H{\"a}nsch}}, \ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Bloch}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {nature}\ }\textbf {\bibinfo {volume} {415}},\ \bibinfo {pages} {39} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Julsgaard}\ \emph {et~al.}(2001)\citenamefont {Julsgaard}, \citenamefont {Kozhekin},\ and\ \citenamefont {Polzik}}]{julsgaard2001experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Julsgaard}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kozhekin}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~S.}\ 
\bibnamefont {Polzik}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {413}},\ \bibinfo {pages} {400} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bernien}\ \emph {et~al.}(2017)\citenamefont {Bernien}, \citenamefont {Schwartz}, \citenamefont {Keesling}, \citenamefont {Levine}, \citenamefont {Omran}, \citenamefont {Pichler}, \citenamefont {Choi}, \citenamefont {Zibrov}, \citenamefont {Endres}, \citenamefont {Greiner} \emph {et~al.}}]{bernien2017probing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Bernien}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Schwartz}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Keesling}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Levine}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Omran}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Pichler}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Choi}}, \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Zibrov}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Endres}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Greiner}}, \emph {et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {551}},\ \bibinfo {pages} {579} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kocharovsky}\ \emph {et~al.}(2022)\citenamefont {Kocharovsky}, \citenamefont {Kocharovsky},\ and\ \citenamefont {Tarasov}}]{kocharovsky2022quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Kocharovsky}}, \bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Kocharovsky}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~V.}\ \bibnamefont {Tarasov}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2201.00427}\ } (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Byrnes}\ and\ 
\citenamefont {Ilo-Okeke}(2021)}]{Byrnes2021} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Byrnes}}\ and\ \bibinfo {author} {\bibfnamefont {E.~O.}\ \bibnamefont {Ilo-Okeke}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Atom Optics: Theory and Applications to Quantum Technology}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2021})\BibitemShut {NoStop} \bibitem [{\citenamefont {Hammerer}\ \emph {et~al.}(2010)\citenamefont {Hammerer}, \citenamefont {S\o{}rensen},\ and\ \citenamefont {Polzik}}]{hammerer2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Hammerer}}, \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {S\o{}rensen}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~S.}\ \bibnamefont {Polzik}},\ }\href {\doibase 10.1103/RevModPhys.82.1041} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {1041} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pezz{\`e}}\ \emph {et~al.}(2018)\citenamefont {Pezz{\`e}}, \citenamefont {Smerzi}, \citenamefont {Oberthaler}, \citenamefont {Schmied},\ and\ \citenamefont {Treutlein}}]{pezze2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Pezz{\`e}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Smerzi}}, \bibinfo {author} {\bibfnamefont {M.~K.}\ \bibnamefont {Oberthaler}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Schmied}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Treutlein}},\ }\href {\doibase 10.1103/RevModPhys.90.035005} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {035005} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bao}\ \emph {et~al.}(2020)\citenamefont {Bao}, \citenamefont {Duan}, \citenamefont {Jin}, \citenamefont {Lu}, \citenamefont {Li}, \citenamefont {Qu}, \citenamefont {Wang}, \citenamefont {Novikova}, \citenamefont {Mikhailov}, \citenamefont {Zhao}, \citenamefont {M{\o}lmer}, \citenamefont {Shen},\ and\ \citenamefont {Xiao}}]{Bao2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Bao}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Duan}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Jin}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Qu}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Novikova}}, \bibinfo {author} {\bibfnamefont {E.~E.}\ \bibnamefont {Mikhailov}}, \bibinfo {author} {\bibfnamefont {K.-F.}\ \bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {M{\o}lmer}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Shen}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Xiao}},\ }\href {\doibase 10.1038/s41586-020-2243-7} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {581}},\ \bibinfo {pages} {159} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pu}\ \emph {et~al.}(2017)\citenamefont {Pu}, \citenamefont {Jiang}, \citenamefont {Chang}, \citenamefont {Yang}, \citenamefont {Li},\ and\ \citenamefont {Duan}}]{pu2017experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Pu}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Chang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yang}}, 
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Li}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Duan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature communications}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {1} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pu}\ \emph {et~al.}(2018)\citenamefont {Pu}, \citenamefont {Wu}, \citenamefont {Jiang}, \citenamefont {Chang}, \citenamefont {Li}, \citenamefont {Zhang},\ and\ \citenamefont {Duan}}]{pu2018experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Pu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Chang}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Duan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Science advances}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {eaar3931} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Abdelrahman}\ \emph {et~al.}(2014{\natexlab{a}})\citenamefont {Abdelrahman}, \citenamefont {Mukai}, \citenamefont {H\"{a}ffner},\ and\ \citenamefont {Byrnes}}]{Abdelrahman2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Abdelrahman}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mukai}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {H\"{a}ffner}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Byrnes}},\ }\href {\doibase 10.1364/oe.22.003501} {\bibfield {journal} {\bibinfo {journal} {Optics Express}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {3501} (\bibinfo {year} {2014}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Genov}\ \emph {et~al.}(2014)\citenamefont {Genov}, \citenamefont 
{Schraft}, \citenamefont {Halfmann},\ and\ \citenamefont {Vitanov}}]{genov2014correction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~T.}\ \bibnamefont {Genov}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Schraft}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Halfmann}}, \ and\ \bibinfo {author} {\bibfnamefont {N.~V.}\ \bibnamefont {Vitanov}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {043001} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gottesman}(1998)}]{gottesman1998heisenberg} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gottesman}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint quant-ph/9807006}\ } (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kitagawa}\ and\ \citenamefont {Ueda}(1993)}]{kitagawa1993squeezed} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kitagawa}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ueda}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {5138} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {S{\o}rensen}\ \emph {et~al.}(2001)\citenamefont {S{\o}rensen}, \citenamefont {Duan}, \citenamefont {Cirac},\ and\ \citenamefont {Zoller}}]{sorensen2001many} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {S{\o}rensen}}, \bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Duan}}, \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {409}},\ \bibinfo {pages} {63} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem 
[{\citenamefont {Gross}(2012)}]{gross2012spin} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gross}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Physics B: Atomic, Molecular and Optical Physics}\ }\textbf {\bibinfo {volume} {45}},\ \bibinfo {pages} {103001} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Riedel}\ \emph {et~al.}(2010)\citenamefont {Riedel}, \citenamefont {B{\"o}hi}, \citenamefont {Li}, \citenamefont {H{\"a}nsch}, \citenamefont {Sinatra},\ and\ \citenamefont {Treutlein}}]{riedel2010atom} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~F.}\ \bibnamefont {Riedel}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {B{\"o}hi}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {T.~W.}\ \bibnamefont {H{\"a}nsch}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sinatra}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Treutlein}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {464}},\ \bibinfo {pages} {1170} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hald}\ \emph {et~al.}(1999)\citenamefont {Hald}, \citenamefont {S{\o}rensen}, \citenamefont {Schori},\ and\ \citenamefont {Polzik}}]{hald1999spin} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hald}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {S{\o}rensen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Schori}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Polzik}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {1319} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kuzmich}\ \emph {et~al.}(2000)\citenamefont {Kuzmich}, \citenamefont {Mandel},\ and\ \citenamefont 
{Bigelow}}]{kuzmich2000generation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kuzmich}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Mandel}}, \ and\ \bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont {Bigelow}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {1594} (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Takano}\ \emph {et~al.}(2009)\citenamefont {Takano}, \citenamefont {Fuyama}, \citenamefont {Namiki},\ and\ \citenamefont {Takahashi}}]{takano2009spin} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Takano}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fuyama}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Namiki}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Takahashi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo {pages} {033601} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Byrnes}(2013)}]{byrnes2013fractality} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Byrnes}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {023609} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jing}\ \emph {et~al.}(2019)\citenamefont {Jing}, \citenamefont {Fadel}, \citenamefont {Ivannikov},\ and\ \citenamefont {Byrnes}}]{jing2019split} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Jing}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fadel}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Ivannikov}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Byrnes}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New Journal of 
\end{thebibliography}

\end{document}
\begin{document}

\title{A Markov Game Model for AI-based Cyber Security Attack Mitigation}

\author{\IEEEauthorblockN{Hooman Alavizadeh\textsuperscript{1}, Julian Jang-Jaccard\textsuperscript{1}, Tansu Alpcan\textsuperscript{2} and Seyit A. Camtepe\textsuperscript{3}} \IEEEauthorblockA{\textsuperscript{1} School of Natural and Computational Sciences,\\ Massey University, Auckland, New Zealand\\ \textsuperscript{2} Department of Electrical and Electronic Engineering,\\ University of Melbourne, Parkville, Australia\\ \textsuperscript{3} CSIRO Data61, Australia\\ Email: \{h.alavizadeh, j.jang-jaccard\}@massey.ac.nz} }

\maketitle

\begin{abstract}
The new generation of cyber threats leverages advanced AI-aided methods, which makes them capable of launching multi-stage, dynamic, and effective attacks. Current cyber-defense systems face various challenges in defending against such new and emerging threats. Modeling AI-aided threats through game-theoretic models can help the defender select optimal strategies against the attacks and make wise decisions to mitigate their impact. This paper first explores the current state of the art in the new generation of threats, in which AI techniques such as deep neural networks are used by the attacker, and discusses further challenges. We propose a Markovian dynamic game that can evaluate the efficiency of defensive methods against an AI-aided attacker in a cloud-based system, in which the attacker utilizes an AI technique to launch an advanced attack by finding the shortest attack path. We use CVSS metrics to quantify the values of this zero-sum game model for decision-making.
\end{abstract}

\begin{IEEEkeywords}
Markovian Game; AI-based threats; Cloud computing; Game theory; Attack models
\end{IEEEkeywords}

\section{Introduction}
A strong cyber defense system should be able to defend against the new generation of cyber threats~\cite{hou2020low,brundage2018malicious}.
AI-powered threats are emerging attacks that use AI capabilities to launch various types of attacks against a system, such as targeted attacks, adaptive attacks~\cite{hu2015dynamic}, DeepLocker~\cite{stoecklin2018deeplocker}, and so forth~\cite{wang2020ai}. Although traditional adversarial attacks were relatively easy to detect and defend against, since their attack patterns were algorithmic, the emerging attacks leverage AI features such as machine learning (ML) and deep learning to make malware much more evasive and pervasive. In addition, AI techniques combined with automation can make malware more scalable and able to attack targets without human intervention~\cite{wang2020ai}.

Game theory is the study of cooperation and conflict between rational and intelligent decision-makers by means of mathematical models: each player varies its strategy in response to the decisions and moves of the other players. Game-theoretic models have been extensively studied for their use in cyber security and have been shown to be very effective in evaluating defensive systems and addressing the security of networks and systems~\cite{manshaei2013game,attiah2018game,rass2017physical}.

In this paper, we consider an advanced attacker that is able to utilize AI techniques such as deep neural networks (DNNs) to find its target in a networked system quickly and effectively. We assume that the attacker can leverage deep learning techniques to estimate the shortest attack path in a networked model (such as those proposed in \cite{rizi2018shortest,salehi2020shortest,qu2013efficient}). This enables the attacker to be resilient against dynamic defense techniques~\cite{cho2020toward,alavizadeh2019automated}. Moreover, we show how game theory can be leveraged to evaluate dynamic defensive scenarios for preventing these kinds of AI-powered attacks.
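The learned shortest-path estimators assumed above are beyond the scope of a short snippet, but the underlying idea of precomputing per-node information from which pairwise distances are cheaply estimated can be illustrated with a classical landmark embedding. This is a stand-in for the cited neural methods, not an implementation of them, and the graph below is hypothetical:

```python
from collections import deque

def bfs_distances(graph, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def embed(graph, landmarks):
    """Embed each node as its vector of BFS distances to a few landmarks."""
    tables = [bfs_distances(graph, l) for l in landmarks]
    return {u: tuple(t.get(u, float("inf")) for t in tables) for u in graph}

def estimate(emb, u, v):
    """Upper-bound estimate of d(u, v) via the triangle inequality."""
    return min(du + dv for du, dv in zip(emb[u], emb[v]))

# Hypothetical path graph a - b - c - d - e.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
emb = embed(g, landmarks=["a", "c", "e"])
print(estimate(emb, "b", "d"))  # → 2 (matches the true distance here)
```

With well-chosen landmarks the estimate is exact for many pairs; in general it is only an upper bound, which is exactly the kind of approximation the learned embeddings trade for speed.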
Concretely, we address the specific problem in which an AI-aided attacker can find the shortest attack path in a cloud. The cloud system and attack model are modeled as a zero-sum Markov game. The cloud model is based on an Attack Graph (AG) representing the states of the game. We define a zero-sum Markovian game model that captures the capabilities of the attacker and the defender. In this setting, the attacker can choose actions that exploit real-world vulnerabilities reported as Common Vulnerabilities and Exposures (CVE) entries in the National Vulnerability Database (NVD). The defender's actions are modeled as a dynamic defense consisting of the placement of detection systems (such as an IDS) on the hosts in the cloud. We design the rewards of this game by leveraging Common Vulnerability Scoring System (CVSS) values such as attack impact and exploitability. This helps the defender select appropriate strategies using a limited number of monitoring actions in each state of the game. The defender's action is the placement of an IDS on a host in the cloud to monitor and detect any prospective threats. However, the defense action may incur some performance degradation and cost to the defender. In this paper, we utilize a Markovian game model to analyze AI-aided attacks on a cloud model, which can help the defender mitigate the attacker's rewards using a dynamic IDS placement strategy.

The main contributions of this paper are as follows:
\begin{itemize}
\item We provide an up-to-date review of the new generation of threats, such as AI-powered threats, and categorize them into two main threat types: \textit{AI-aided attacks} and \textit{AI-embedded attacks}.
\item We discuss AI-aided attack approaches, such as those using graph-based AI techniques, to find targets in a networked system effectively.
We investigate in depth a threat model that is able to utilize a deep neural network (DNN) to find the shortest attack path in a cloud and launch the attack efficiently.
\item We propose a Markovian game model that can evaluate the effectiveness of state-of-the-art defense mechanisms against AI-aided attacks in which the attacker utilizes an AI technique such as a DNN to estimate the shortest attack path in a cloud system. We quantify the game parameters based on a zero-sum game and CVSS values.
\item We offer formal mathematical definitions for the proposed game model and determine the probabilistic values of the states of the game. Finally, we formalize, analyze, and quantify the actions and state transition probabilities of the game model.
\end{itemize}

The rest of the paper is organized as follows. Section \ref{sec:back} discusses essential concepts related to this paper, including a categorization of AI-powered threats. Related work is given in Section~\ref{sec:rel}. In Section~\ref{sec:proposed}, we define the necessary concepts, definitions, and mathematical notations, and propose our Markovian game model. Discussion and limitations of the current study are presented in Section \ref{sec:discussion}. Finally, we conclude the paper in Section~\ref{sec:conclusion}.

\section{AI-powered Threats}\label{sec:back}
{Nowadays, attackers use innovative and smart methods to launch various types of attacks, including delivering malicious payloads, exploiting zero-day vulnerabilities, and using deep neural networks to find targets effectively. Thus, a cyber defense system should be able to deal with a wide range of increasingly intelligent, persistent, and sophisticated attacks equipped with advanced technologies, such as AI-powered attacks.
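Before turning to the threat taxonomy, we note that the CVSS exploitability and impact values used as game rewards in the introduction can be computed directly from the CVSS v3.1 base metrics. The sketch below implements the standard v3.1 base-score formulas for the scope-unchanged case; treating the resulting sub-scores as game payoffs is our modeling assumption, not part of CVSS:

```python
# CVSS v3.1 base-metric weights (scope unchanged), per the FIRST.org spec.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI  = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact

def roundup(x):
    """CVSS 'Roundup': smallest one-decimal number >= x (float-safe)."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def exploitability(av, ac, pr, ui):
    return 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]

def impact(c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    return 6.42 * iss  # scope-unchanged branch

def base_score(av, ac, pr, ui, c, i, a):
    imp = impact(c, i, a)
    if imp <= 0:
        return 0.0
    return roundup(min(imp + exploitability(av, ac, pr, ui), 10))

# E.g. a vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H yields the familiar 9.8.
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

The `exploitability` and `impact` sub-scores are exactly the quantities our reward design draws on; the integer-arithmetic `roundup` mirrors the specification's float-safe rounding rule.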
In~\cite{kaloudi2020ai}, the authors investigated AI-powered cyber attacks and mapped them onto a proposed framework of new threats, including a classification of several aspects of threats that use AI during the cyber-attack life cycle. We categorize AI-powered attacks into {\em AI-aided} and {\em AI-embedded} attacks, as shown in Figure~\ref{fig:AI-pow}. AI-aided attacks are those that leverage AI to launch attacks effectively; in this type, intelligent attackers use AI techniques as tools. In AI-embedded attacks, by contrast, the threats themselves are weaponized with AI, such as DeepLocker~\cite{stoecklin2018deeplocker}.}

In AI-aided attacks, attackers use AI to launch various attacks on different targets, such as adaptive attacks or multi-stage attacks, which require knowledgeable attackers equipped with prior knowledge of the target system. Adaptive attacks, moreover, are known as intelligent attacks. In these types of attacks, the {attacks can adapt to a dynamic environment in which the system's conditions change. Such attackers are intelligent enough to wisely manage their resource limitations while opportunistically trying to compromise the entire system, for example by executing adaptive attacks~\cite{tramer2020adaptive}.} The attacker can launch various AI-based techniques to detect and recognize the target network, its vulnerabilities, and valuable targets~\cite{kaloudi2020ai}. Moreover, the new type of AI-aided attacker is able to use high intelligence and sufficient resources to launch increasingly intelligent, sophisticated, and persistent attacks. This enables the attackers to find valuable targets in the network, find attack paths efficiently, and obtain highly sensitive information~\cite{tramer2020adaptive,hutchins2011intelligence}.
In particular, attackers are intelligent enough to (i) optimize the attack; for instance, they can estimate the most efficient attack path in a networked system with lower cost and higher benefit~\cite{feng2017signaling}; and (ii) execute multi-stage attacks, including a reconnaissance phase that scans the system prior to the real attack exploiting the system's components (such as virtual machines). They are able to continue the attack by exploiting other system components after they break into the system~\cite{hu2015dynamic}.

\begin{figure} \caption{AI-powered threats categorization.} \label{fig:AI-pow} \end{figure}

\begin{table*}[t] \centering \footnotesize \caption{Comparison of graph-based techniques that can be leveraged by attackers to find an attack path, such as the shortest attack path, in the network} \begin{tabular}{@{}|l|l|l|l|l|l|l|@{}} \hline \textbf{Approach} & \textbf{Technique} & \textbf{Advantages} & \textbf{Disadvantages} & \textbf{Practicality} & \textbf{\begin{tabular}[c]{@{}l@{}}Resilience to\\ dynamic defense\end{tabular}} & \textbf{Ref.} \\ \hline \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Classic search \\ strategies\end{tabular}}} & BFS/DFS & Fast for small size & \begin{tabular}[c]{@{}l@{}}--Not scalable\\ --Time complexity\\ --Exhaustive search\end{tabular} & High & Very low & \cite{bagheri2008finding} \\ \cline{2-7} & Dijkstra & \begin{tabular}[c]{@{}l@{}}--Fast for small size\\ --Consider priority\end{tabular} & \begin{tabular}[c]{@{}l@{}}--Not scalable\\ --Resource consumption\end{tabular} & High & Low & \cite{deng2012fuzzy} \\ \hline \textbf{Traditional AI-based} & $A^*$ & Good performance & \begin{tabular}[c]{@{}l@{}}--Needs heuristic\\ --Real implementation\end{tabular} & Low & Low & \cite{goldberg2005computing} \\ \hline \multirow{3}{*}{\textbf{Learning-based AI}} & Machine Learning & Scalable & Need Training & Medium & High & \cite{kojic2006neural} \\ \cline{2-7} & Neural Network & \begin{tabular}[c]{@{}l@{}}--Scalable\\
--Adaptive\end{tabular} & Need Training & Medium & High & \cite{kojic2006neural} \\ \cline{2-7} & Deep Learning & \begin{tabular}[c]{@{}l@{}}--Scalable\\ --Adaptive\end{tabular} & Need Training & Medium & Very High & \cite{rizi2018shortest,salehi2020shortest} \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Evolutionary\\ Computation (EC)\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Genetic Algorithm\\ (GA)\end{tabular} & Fast approach & \begin{tabular}[c]{@{}l@{}}--Low scalability\\ --Not optimal\end{tabular} & Low & Medium & \cite{bagheri2008finding,syarif2018solving} \\ \hline \end{tabular} \end{table*}

In contrast, in AI-embedded attacks the AI capabilities are embedded into the malware or threat itself, as in targeted attacks and DeepLocker~\cite{stoecklin2018deeplocker}, which current defensive technologies cannot easily detect and defend against. DeepLocker uses neural network technologies and includes three layers of concealment which make it highly evasive. The first layer conceals the target (who and what the target is). The second layer is target instance concealment, which hides the target's specifics. Finally, the third layer conceals the malicious intent: it encrypts the payload and hides the attack technique~\cite{stoecklin2018deeplocker}. Another example of AI-embedded attacks is the new generation of botnet attacks, which are very difficult to detect with current defensive strategies~\cite{wang2020ai}. These utilize deep neural networks such as convolutional neural networks (CNNs) to establish a covert channel between bots and botmasters. In this paper, however, we only consider AI-aided attacks, in which the attacker can use a deep neural network to find the shortest attack path in a targeted network.
\section{Related Work} \label{sec:rel}
\subsection{Graph-based AI-aided Threats}
The application of Artificial Intelligence (AI), such as Genetic Algorithms and machine learning, to graph-theoretic problems such as finding the fastest, shortest, or cheapest path in a network has been studied in various works, including the application of Genetic Algorithms~\cite{bagheri2008finding}, Neural Networks~\cite{kojic2006neural}, and Deep Learning~\cite{rizi2018shortest,salehi2020shortest}. Such graph-theoretic and learning-based approaches for finding the shortest or most suitable paths in a network can be leveraged by attackers to launch their attacks effectively. Finding the attack path from source to target in a graphical attack model is an important capability for attackers~\cite{zimba2019bayesian,liu2020network}, as it enables the attacker to reach the target with minimum effort and cost.

\subsubsection{Classical Search Strategies}
{The classic algorithms for computing shortest paths are breadth-first search (BFS), Dijkstra's, and Bellman-Ford. Dijkstra's algorithm is the most commonly used for finding the shortest path between two specific nodes when no path-cost heuristic is available.}

{The earliest work on neural-based solutions to the shortest path problem was motivated by communications and packet routing, where approximate methods faster than the classical algorithms were desired. These models operate quite differently from today's neural networks: they used iterative back-propagation to solve the shortest path problem on a specific graph~\cite{kojic2006neural}}.

{Another groundbreaking approach to the problem was DeepMind's differentiable neural computer, which computed shortest paths across a large graph of geographical places. The method takes the graph as a sequence of connection tuples and learns a step-by-step algorithm using a read-write memory.
It was provided with a learning curriculum, gradually increasing the graph size.} However, the traditional methods have scalability problems for larger graphs. For instance, an efficient implementation of Dijkstra's algorithm can compute the shortest paths from one node to all others in $O(n \log n + m)$ time in a graph with $n$ nodes and $m$ edges.

\subsubsection{Traditional AI-based Strategies}
The $A^*$ search algorithm uses heuristic-based techniques to compute the shortest path (it is also known as a slight generalization of Dijkstra's algorithm). In practice, the $A^*$ algorithm runs at least as quickly as Dijkstra's algorithm. However, these methods suffer from high time complexity and from the difficulty of finding an appropriate heuristic.

There are many methods for obtaining the shortest attack path in a network, such as the classical Dijkstra algorithm and traditional AI-based methods such as $A^*$. However, from the attacker's point of view, the shortest attack path has to be computed within a very short period of time and in a scalable manner in order to support time-constrained attacks, which can otherwise be detected and deceived by asynchronous defensive mechanisms that change the attack surface periodically~\cite{cho2020toward}. Attackers therefore prefer to use more intelligent approaches to find their targets via the shortest attack path~\cite{zimba2019bayesian}. Many studies have investigated optimized methods for finding the shortest path from the attacker's perspective. For instance, in \cite{zimba2019bayesian}, the authors proposed an optimal algorithm for APT attackers which enables the attacker to find the shortest attack path from multiple sources in a graphical attack model using a Bayesian network. Moreover, in~\cite{bagheri2008finding}, the authors proposed an approach to find the shortest path using a learning-based Genetic Algorithm (GA) based on principles of stochastic search and optimization, yielding a fast algorithm with a minimal failure ratio.
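For reference, the classical baseline discussed above fits in a few lines. The sketch below uses a binary heap, giving $O((n+m)\log n)$ time (the $O(n \log n + m)$ bound quoted earlier requires a Fibonacci heap); the attack graph and its effort weights are hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted digraph.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical attack graph: edge weights model attack effort.
ag = {
    "attacker": [("web", 1), ("vpn", 4)],
    "web": [("app", 2)],
    "vpn": [("db", 1)],
    "app": [("db", 3)],
    "db": [],
}
print(dijkstra(ag, "attacker")["db"])  # → 5, via attacker-vpn-db
```

This is exactly the baseline an AI-aided attacker must beat: correct but exhaustive, and recomputed from scratch whenever the defender changes the attack surface.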
However, most of the classical search and traditional AI-based methods cannot satisfy the essential requirements of high-speed response in large-scale applications, adaptation to dynamic graphs, real-world problems, and so on. Moreover, finding globally optimal solutions is difficult in these approaches, while it is easier in the classical shortest-path approaches and pulse-coupled neural networks (PCNNs)~\cite{huang2017time}. \subsubsection{Learning-based AI approaches} To address the drawbacks of traditional methods, several neural network architectures have been designed for finding shortest paths. In fact, neural networks can be used to either find or estimate the shortest path on a given network effectively~\cite{huang2017time,qu2013efficient,salehi2020shortest}. For instance, in~\cite{qu2013efficient}, a dynamic algorithm is developed to compute shortest paths for large-scale networks through a modified model of pulse-coupled neural networks (M-PCNNs). However, their method can be ineffective when multiple link changes occur. Thus, this method is not useful for attackers when the defender makes the system dynamic by changing the attack surface. {In \cite{huang2017time}, the authors proposed the time-dependent shortest path problem, which is able to find the globally optimal solution through a time-delay neural network (TDNN) framework}. Moreover, in~\cite{salehi2020shortest}, {the authors proposed a novel method that estimates the nodes' distances by embedding the graph into an embedding space. In the proposed method, they leveraged a feed-forward neural network fed with vectors obtained from two recent graph embedding techniques. Finally, the method produces the shortest distances for the majority of node pairs.} \subsection{Game Theory Review} This section provides an overview of research on security defensive methods that use game-theoretic approaches.
We present a selected set of works to highlight the application of game theory in addressing different forms of security-related problems. {Game theory has long been applied to study network security~\cite{alpcan2019decision,manshaei2013game}. Regarding the use of Bayesian game models, Alpcan and Basar~\cite{alpcan2006intrusion} proposed a security game between an attacker and an intrusion detection system (IDS) in a sensor network. They modeled their solution based on a finite Markov chain, a Markov decision process, and Q-learning. Sheyner et al.~\cite{jha2002two} presented a formal cost-benefit analysis of the attacks on a given network equipped with security measures for defending against network attacks. In~\cite{chowdhary2016sdn}, the authors provided a method for attack graph construction based on network reconfiguration through a parallel computing method. The proposed method can leverage strategic reasoning about attacks in large-scale systems. } Alpcan and Basar~\cite{alpcan2003game} proposed a two-player non-cooperative and non-zero-sum game to address the attack-defense problem in sensor networks. In their proposed game model, they assumed that the players have complete information about the system and about each other's payoff functions based on each player's optimal strategy. However, the drawback of the proposed game model is precisely that the players have complete information about the game. Consequently, various related works introduced game theory and studied the optimal strategies of network attack and defense. Moreover, in~\cite{jiang2009evaluating}, the authors proposed an offensive and defensive game model utilizing an optimal active network defense method. They solved a Nash equilibrium condition between the attacker and defender to find the optimal attack and defense strategies. However, their main idea is to address network security by reducing security losses and managing risk.
In~\cite{gang2014network}, the authors proposed a non-cooperative non-zero-sum game model as a network security decision-making method for obtaining an optimal attack and defense evaluation. The proposed technique is able to generate an optimal strategy for the attacker and defender by analyzing the interactions of both. In~\cite{zhang2018network}, the authors used an offensive and defensive differential game and proposed a network security defense decision-making method. Based on their security evolution model, the method analyzes how the security state of the network system changes and consequently generates the attack and defense differential game model. The proposed technique can provide optimal defense strategy selection through saddle-point strategy selection. However, these game models lack an evaluation of more intelligent attackers and of the defender's strategies against them. \section{Threat Model} \label{sec:sp-DNN} In this section, we demonstrate how a deep neural network can be used to estimate the shortest path in a graph efficiently. Such a model can be used by adversaries to learn the targeted system and launch a fast and effective attack on it. In \cite{salehi2020shortest}, the authors proposed a novel method for estimating the shortest-path distance between two nodes through vector embeddings generated by deep learning approaches. The procedure comprises three main phases: \begin{itemize} \item \textit{Graph analysis and embeddings}: The first step is to learn a mapping of the nodes to a low-dimensional feature space that maximizes the likelihood of preserving the nodes' network neighborhoods, using a graph embedding technique~\cite{grover2016node2vec}.
One approach used for graph embedding is the encoder-decoder approach, which encodes the graph, based on each node's attributes, into low-dimensional vectors and then decodes the graph and related information from the learned embeddings. Notably, graph embeddings retain all the information required for downstream machine learning tasks. The encoding function can be formulated as $Enc: V\rightarrow R^d$ based on~\cite{hamilton2017representation}, where $z_i$ denotes the embedding of node $v_i \in V$. Then, the decoder is defined as a function accepting a set of node embeddings and decoding user-specified graph statistics from them, such that $Dec: R^d \times R^d \rightarrow R^+$. \item \textit{Training set collection}: In this step, the training set is obtained by computing the actual shortest path distances from a group of nodes to all of the remaining nodes. {The graph can be formally defined as $G(V, E)$ with $n$ nodes and $m$ edges. Given two nodes $s, t \in V$, define $n_{s,t}=\{s, u_1, u_2, \ldots, u_{l-1}, t\}$ to be a path of length $|n_{s,t}|=l$ between $s$ and $t$ if $\{u_1, \ldots, u_{l-1}\} \subset V$ and $\{(s, u_1),(u_1, u_2), \ldots ,(u_{l-1}, t)\} \subset E$, and let $N_{s,t}$ be the set of all paths from $s$ to $t$. Moreover, define $d_G(s, t)$ as the shortest path length between any two nodes $s,t \in V$.} Then, a graph embedding technique as described above can produce a vector embedding $\theta(v)\in R^d$ for every node $v \in V$ of the graph $G$. To collect the training samples, training pairs for the entire graph $G$ need to be extracted by computing the actual shortest distances from each of the $l$ landmarks (a selected group of nodes) to all of the remaining nodes using BFS. This finally yields $l(n-l)$ training pairs.
For testing pairs, the same strategy can be used with a smaller set of landmarks, performing BFS traversals from the landmarks to the other nodes to generate a set of unseen pairs. \item \textit{Neural Network Training}: The vectors of the training set (network embedding vectors) are fed into a feed-forward neural network (FNN) to estimate the distance between the nodes. Given a training pair $\langle\theta(v),\theta(u)\rangle$, a joint representation serving as the input to the neural network can be generated by applying binary operations such as concatenation, subtraction, point-wise multiplication, and averaging over the vector embeddings. The FNN consists of an input layer, a hidden layer, and an output layer, and the size of the input layer depends on the binary operation applied to the vector embeddings. Finally, the real-valued distance is obtained as the neural network's mapping of the input vectors to an output. \end{itemize} \begin{figure} \caption{Shortest path estimation using Deep Neural Network steps.} \label{fig:steps} \end{figure} As demonstrated in Figure~\ref{fig:steps}, advanced attackers can utilize AI techniques such as those discussed above to find their targets quickly and effectively, and can efficiently leverage deep learning techniques for finding the shortest path in a graph (such as those proposed in \cite{rizi2018shortest,salehi2020shortest,qu2013efficient}), which are resilient against dynamic defenses and also scale to large systems such as clouds or enterprises. In this paper, we demonstrate how game theory can be leveraged to evaluate defensive scenarios for avoiding these kinds of AI-powered attacks.
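The training-set collection phase described above can be sketched as follows. This is an illustrative Python rendering under simplifying assumptions (unweighted graph, adjacency-list input); the function names and the toy path graph are ours, not from~\cite{salehi2020shortest}.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path (hop) distances from `source` via BFS."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def landmark_training_pairs(adj, landmarks):
    """One BFS per landmark; emit ((landmark, node), d_G(landmark, node))
    for every non-landmark node, i.e. l*(n-l) labeled training pairs."""
    pairs = []
    for s in landmarks:
        dist = bfs_distances(adj, s)
        for v in adj:
            if v not in landmarks:
                pairs.append(((s, v), dist[v]))
    return pairs

# Path graph 0-1-2-3 with landmark set {0}: yields 1*(4-1) = 3 training pairs.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(landmark_training_pairs(adj, {0}))  # [((0, 1), 1), ((0, 2), 2), ((0, 3), 3)]
```

In the full pipeline, each pair $((s,v), d_G(s,v))$ would be converted to an embedding-space input $\langle\theta(s),\theta(v)\rangle$ before being fed to the FNN.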
\section{Game Theory for AI-aided Attacks and Dynamic Defence Evaluation}\label{sec:proposed} In this section, we propose the system model for both attack and defense, including the capabilities of AI-aided attackers that enable them to effectively find the shortest attack path in the modeled system. We then define and propose a zero-sum dynamic game model which can capture and evaluate the various actions of attackers and defenders in different discrete states of the game. \subsection{Preliminaries}\label{sec:pre} In this section, we introduce the notation and definitions used throughout this paper, including the cloud model, attack model, game theory definitions, the zero-sum Markovian game model, and so forth. \begin{figure*} \caption{(a) The cloud-model including 10 VMs in different hosts (servers). (b) Attack graph model corresponding to the cloud-model (capturing the connectivities of the VMs, note that the shortest attack path is shown as dashed lines).} \label{fig:model} \end{figure*} We model a cloud system consisting of 10 Virtual Machines (VMs) distributed across 5 different physical servers (hosts) in the cloud. We assume that only the VMs in host 1 (denoted as $h_1$) are connected to the internet and are the entry points of the system. The cloud model is shown in Figure~\ref{fig:model}. Each VM has a number of vulnerabilities associated with the Operating System (OS) it uses, as in Table~\ref{vuls}. Thus, the attack model can be represented as a directed attack graph. {Let $AG = (V, E)$ be a graph, where $V$ is the set of all nodes and $E$ is the set of all edges. The aim of the attacker is to obtain the shortest attack path (SAP), which is a path between two nodes that disregards the weights of the attack path.
Note that the weight of the edges determines the exploitability $e$ of the connected VM based on Table~\ref{vuls}.} \begin{table}[t] \centering \caption{Hosts' vulnerabilities and exploitability information ($|V|$ is the number of vulnerabilities and Exploitability is the maximum exploitability of all vulnerabilities for a host)} \label{vuls} \footnotesize \begin{tabular}{|l|l|l|l|l|} \hline \multirow{2}{*}{VM} & \multirow{2}{*}{Current Host} & \multicolumn{2}{l|}{Vulnerabilities (V)} & \multirow{2}{*}{Impact} \\ \cline{3-4} & & $|V|$ & Exploitability (e) & \\ \hline $vm_1$ & $h_1$ & 4 & 0.53 & 10 \\ \hline $vm_2$ & $h_2$ & 4 & 0.55 & 8 \\ \hline $vm_3$ & $h_1$ & 3 & 0.51 & 9 \\ \hline $vm_4$ & $h_3$ & 3 & 0.49 & 8 \\ \hline $vm_5$ & $h_2$ & 2 & 0.47 & 9 \\ \hline $vm_6$ & $h_3$ & 1 & 0.45 & 9 \\ \hline $vm_7$ & $h_3$ & 1 & 0.43 & 10 \\ \hline $vm_8$ & $h_4$ & 1 & 0.43 & 9 \\ \hline $vm_9$ & $h_4$ & 1 & 0.43 & 10 \\ \hline $DB$ & $h_5$ & 1 & 0.43 & 10 \\ \hline \end{tabular} \end{table} \subsubsection{Attack Model} {In this paper, we model an omniscient attacker which is able to traverse the network to reach the goal, provided the perceived vulnerabilities exist.} {The attacker can launch AI-aided attacks, using AI techniques to utilize multiple attack vectors by pursuing different exploits~\cite{zimba2019bayesian}. An attack usually comprises a sequence of transitions over time, traversing the cloud network from one node to the others.} {As the attacker is an intelligent threat actor, he can use a variety of reconnaissance techniques to surveil the network for such vulnerabilities.} \noindent{\textit{{General capabilities.}}} We assume that the attacker is able to use various reconnaissance tools and techniques to gain enough information about the target system. The attacker has some information regarding the cloud hosts, VMs, and network, gained through various network and vulnerability scanning tools such as Nessus, Open Vulnerability Assessment Scanner (OpenVAS), etc.
{The attacker can likewise infer adjacent networks by considering his subnet address and broadcast address. He can further obtain a topological overview of the associated subnets should he get hold of the corresponding routing tables.} \noindent{\textit{{AI-based capabilities.}}} We assume that the attacker is an AI-aided attacker who is able to leverage AI capabilities to gain valuable information about the system, targets, and attack paths. In this paper, we assume that the attacker can leverage AI to estimate the shortest attack path in the modeled cloud system. The shortest attack path for the cloud model is highlighted as a dashed line in Figure~\ref{fig:AG}. We assume that the attacker can leverage deep learning techniques (as in~\cite{salehi2020shortest}) to estimate the shortest attack path from the entry point of the system (i.e., the internet) to the targeted host (DB) in the system. The attacker can exploit the vulnerabilities existing on each host with the probabilities defined based on the CVSS metrics. Note that for each attack step, the attacker needs to estimate the shortest attack path again, as the system is dynamic and may have changed. Moreover, for each attack step in an attack path, the attacker incurs expenses such as cost and time. \noindent{\textit{{Attacker's goal and actions}}.} The goal of the attacker is also well defined, which in this case is to exploit the database (DB) in the cloud by compromising the vulnerabilities existing on each VM in the attack path. The attacker has the ability to perform various actions. First, the attacker can take no action in order to hide. Once the attacker estimates the shortest attack path through the AI techniques described in Section~\ref{sec:sp-DNN}, the attacker undertakes various actions to exploit the vulnerabilities of each VM in the attack path and finally exploit the target. Note that after exploiting a VM, the attacker reaches a new state.
We assume that the attack mounted by the attacker is {monotonic, meaning that once an attacker has reached a certain state, they do not need to go back to any previous state when targeting a specific goal.} \begin{table*}[t] \centering \caption{Payoff matrix formalization based on $a^A, a^D$ for the game states $s_0$--$s_4$} \label{TII} \begin{tabular}{@{}p{2.1cm}p{2cm}ll@{}} \toprule \multicolumn{4}{c}{$s_0$: Initial State (no exploit)} \\\midrule A/D & No-act & Def-$h_1$ & Def-$h_2$ \\ \midrule No-att & $0,0$ & $C_{def}, -C_{def}$ & $C_{def}, -C_{def}$ \\ $E(vm_1\in h_1)$ & $I_{vm_1},-I_{vm_1} $ & $-(I_{vm_1}-C_{def}), I_{vm_1}-C_{def}$ & $I_{vm_1}+C_{def}, -(I_{vm_1}+C_{def})$ \\ $E(vm_2 \in h_2)$ & $I_{vm_2},-I_{vm_2} $ & $I_{vm_2}+C_{def}, -(I_{vm_2}+C_{def})$ & $-(I_{vm_2}-C_{def}), I_{vm_2}-C_{def}$ \\\midrule \multicolumn{4}{c}{$s_1$: Transition State ($vm_2 \in h_2$ exploited)} \\\midrule A/D & No-act & Def-$h_3$ & Def-$h_2$ \\ \midrule No-att & $0,0$ & $C_{def}, -C_{def}$ & $C_{def}, -C_{def}$ \\ $E(vm_4 \in h_3)$ & $I_{vm_4},-I_{vm_4} $ & $-(I_{vm_4}-C_{def}), I_{vm_4}-C_{def}$ & $I_{vm_4}+C_{def}, -(I_{vm_4}+C_{def})$ \\ $E(vm_5 \in h_2)$ & $I_{vm_5},-I_{vm_5} $ & $I_{vm_5}+C_{def}, -(I_{vm_5}+C_{def})$ & $-(I_{vm_5}-C_{def}), I_{vm_5}-C_{def}$ \\\midrule \multicolumn{4}{c}{$s_2$: Transition State ($vm_5 \in h_2$ exploited)} \\\midrule A/D & No-act & Def-$h_3$ & Def-$h_4$ \\ \midrule No-att & $0,0$ & $C_{def}, -C_{def}$ & $C_{def}, -C_{def}$ \\ $E(vm_7 \in h_3)$ & $I_{vm_7},-I_{vm_7} $ & $-(I_{vm_7}-C_{def}), I_{vm_7}-C_{def}$ & $I_{vm_7}+C_{def}, -(I_{vm_7}+C_{def})$ \\ $E(vm_9\in h_4)$ & $I_{vm_9},-I_{vm_9} $ & $I_{vm_9}+C_{def}, -(I_{vm_9}+C_{def})$ & $-(I_{vm_9}-C_{def}), I_{vm_9}-C_{def}$ \\\midrule \multicolumn{4}{c}{$s_3$: Transition State ($vm_9 \in h_4$ exploited)} \\\midrule A/D & No-act & Def-$h_3$ & Def-$h_5$ \\ \midrule No-att & $0,0$ & $C_{def}, -C_{def}$ & $C_{def}, -C_{def}$ \\ $E(vm_6 \in h_3)$ & $I_{vm_6},-I_{vm_6} $ &
$-(I_{vm_6}-C_{def}), I_{vm_6}-C_{def}$ & $I_{vm_6}+C_{def}, -(I_{vm_6}+C_{def})$ \\ $E(DB \in h_5)$ & $I_{DB},-I_{DB} $ & $I_{DB}+C_{def}, -(I_{DB}+C_{def})$ & $-(I_{DB}-C_{def}), I_{DB}-C_{def}$ \\\midrule \multicolumn{4}{c}{$s_4$: Final State (DB exploited)} \\\bottomrule \end{tabular} \end{table*} \subsection{Game Model Definition} \subsubsection{Game Model Assumption} We assume that the game for the cloud model defined in Section~\ref{sec:pre} can be modeled as a zero-sum Markov game in which the defender tries to place the IDS on the cloud's hosts to detect the attacker and defend against the attacker's actions. These attacks can be modeled using a Markovian model with finite states. We also assume that both players have full observability of the state in which they are. Additionally, we assume that the attacker can remain undetected in any VM in the cloud until it attempts to attack another connected VM by exploiting the targeted VM's vulnerabilities. \subsubsection{Markovian Game Model} {We model the attacker and defender scenario as a two-player zero-sum Markov game, leveraging the information in the cloud model and the corresponding AG represented in Figure~\ref{fig:AG}. We also assume that the states and actions of the model are both discrete and finite. Moreover, the transition from each state, and consequently the corresponding reward for each state, depends on the players' actions in that specific state (transitions could also be modeled based on previous states and actions, which is beyond the scope of this paper).
We formally define a zero-sum Markov game model based on the usual Markovian assumptions and explain how each of its parameters is obtained in our cloud model.} A Markov game for two players (here, attacker and defender) can be defined by a tuple $(S, \mathcal{A}_A, \mathcal{D}_A, T, R)$ where: \begin{itemize} \item $S=\{s_0, s_1,s_2,\dots,s_r\}$ denotes the finite set of states of the game, where here $|S|=\max_{ap_i \in AP}\big(len(ap_i)\big)$. \item $\mathcal{A}_A=\{a_0^A,a_1^A,a_2^A,\ldots\}$ is the attacker's set of actions. Similarly, the defender has a set of actions $\mathcal{D}_A=\{a_0^D,a_1^D,a_2^D,\ldots\}$. \item $T=(s,a^A,a^D,s')$ is a state transition in which the current state $s \in S$ changes to $s' \in S$ upon the actions taken by the attacker and defender, respectively. Each transition has a probability, denoted by $tp(T)$. \item $R^A(s;a^A; a^D)$ is the reward obtained by the attacker if, in state $s$, the attacker and defender take the actions $a^A$ and $a^D$, respectively. The reward can be negative, $-R^A(s; a^A; a^D)$, if the attacker chooses a wrong action. \item $\lambda^p \in [0,1)$ is defined as the discount factor for the corresponding player $p$. \end{itemize} \subsubsection{Reward function} {The reward or payoff function depends on the actions taken by the attacker and the defender in each state of the game. We use the CVSS values defined in Table~\ref{vuls} for the rewards or penalties associated with the successful or unsuccessful actions taken by either the attacker or the defender}. To quantify the reward values, we use variables such as the impact of an attack and the cost of defense ($C_{def}$); we use CVSS metrics that provide the Impact ($I$) of a specific VM ($I_{vm_i}$), Exploitability Scores ($e$), and other relevant metrics. $I_{vm_i}$ is a metric that quantifies the damage imposed on the VM by aggregating all impacts of an attack on its resources.
For instance, $I_{vm_4}=8$ is the attack impact value on the VM $vm_4$ based on the related impact metrics of its vulnerabilities in CVSS, represented in Table~\ref{vuls}. The reward matrix for the attacker is formulated as Equation~\eqref{eq:reward_A}. \begin{equation} \small \label{eq:reward_A} R^\mathcal{A}_{a^A,a^D} = \begin{cases} 0 & \text{if}~a^A=\O,\ a^D=\O\\ C_{def} & \text{if}~a^A=\O,\ a^D\neq\O\\ I_{vm_i}+ C_{def} & \text{if}~a^A=E(vm_i),\ a^D\neq\O,\ vm_i\notin H(ids)\\ I_{vm_i} & \text{if}~a^A=E(vm_i),\ a^D=\O\\ -(I_{vm_i}- C_{def}) & \text{if}~a^A=E(vm_i),\ a^D\neq\O,\ vm_i\in H(ids)\\ \end{cases} \end{equation} Note that $H(ids)$ is a function that returns the host in which the $ids$ has been located. For instance, if the defender locates the IDS in host $h_4$, then $H(ids)$ returns $h_4$. As the game is a zero-sum game, the reward for the defender is given by Equation~\eqref{eq:reward_D}. \begin{equation} \label{eq:reward_D} R^\mathcal{D}_{a^A,a^D}=-R^\mathcal{A}_{a^A,a^D} \end{equation} \begin{table}[t] \centering \caption{Payoff matrix quantified based on the zero-sum game and CVSS values} \label{TII-val} \begin{tabular}{@{}p{2.1cm}p{2cm}ll@{}} \toprule \multicolumn{4}{c}{$s_0$: Initial State (no exploit)} \\\midrule A/D & No-act & Def-$h_1$ & Def-$h_2$ \\ \midrule No-att & $0,0$ & $2, -2$ & $2, -2$ \\ $E(vm_1\in h_1)$ & $10,-10 $ & $-8, 8$ & $12,-12$ \\ $E(vm_2 \in h_2)$ & $8,-8 $ & $10,-10$ & $-6,6$ \\\midrule \multicolumn{4}{c}{$s_1$: Transition State ($vm_2 \in h_2$ exploited)} \\\midrule A/D & No-act & Def-$h_3$ & Def-$h_2$ \\ \midrule No-att & $0,0$ & $2, -2$ & $2, -2$ \\ $E(vm_4 \in h_3)$ & $8,-8 $ & $-6, 6$ & $10,-10$ \\ $E(vm_5 \in h_2)$ & $9,-9 $ & $11,-11$ & $-7,7$ \\\midrule \multicolumn{4}{c}{$s_2$: Transition State ($vm_5 \in h_2$ exploited)} \\\midrule A/D & No-act & Def-$h_3$ & Def-$h_4$ \\ \midrule No-att & $0,0$ & $2, -2$ & $2, -2$ \\ $E(vm_7 \in h_3)$ & $10,-10 $ & $-8,8$ & $12,-12$
\\ $E(vm_9\in h_4)$ & $10,-10 $ & $12,-12$ & $-8,8$ \\\midrule \multicolumn{4}{c}{$s_3$: Transition State ($vm_9 \in h_4$ exploited)} \\\midrule A/D & No-act & Def-$h_3$ & Def-$h_5$ \\ \midrule No-att & $0,0$ & $2, -2$ & $2, -2$ \\ $E(vm_6 \in h_3)$ & $9,-9$ & $-7,7$ & $11,-11$ \\ $E(DB \in h_5)$ & $10,-10 $ & $12,-12$ & $-8,8$ \\\bottomrule \end{tabular} \end{table} As stated earlier, the formulation of the reward function is based on CVSS values, mainly the impact of the attack on a targeted VM. If neither the attacker nor the defender takes any action, i.e., $a^A=\O$ and $a^D=\O$, both get zero reward. Moreover, if the attacker does not attack ($no$-$att$) while the defender places the IDS on some host in the cloud, the defender incurs a cost for the defense ($-C_{def}$) and gets a negative reward. However, if the attacker attacks a VM $vm_i$ while the defender places the IDS to detect attacks on the host in which the targeted VM $vm_i$ is located, i.e., $vm_i\in H(ids)$, then the defender gets the reward for avoiding the attack impact on that VM ($I_{vm_i}$); but since the defender incurs some cost for the defense, the total reward of a successful defense is $I_{vm_i}-C_{def}$. For instance, suppose that the cost of defense is 2 units (for both successful and unsuccessful defenses). Then, if the attacker exploits VM $vm_1$ and the defender puts the IDS on host $h_1$ ($vm_1\in h_1$), the defender gains a total reward of 8, namely $R^D=I_{vm_1}-C_{def}=10-2$, while the attacker is penalized by $-8$ units.
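The reward formulation of Equation~\eqref{eq:reward_A} can be rendered in code as follows. This is an illustrative Python sketch; the dictionary names are ours, and the impact and host values are taken from Table~\ref{vuls} for the two VMs reachable in state $s_0$.

```python
# Impact I_{vm_i} and host placement from the vulnerabilities table; C_def
# is the (assumed) defense cost of 2 units used in the worked examples.
IMPACT = {"vm_1": 10, "vm_2": 8}
HOST = {"vm_1": "h_1", "vm_2": "h_2"}
C_DEF = 2

def attacker_reward(attack, defend):
    """Zero-sum attacker payoff R^A(s; a^A; a^D). `attack` is the exploited
    VM (or None for no-att); `defend` is the host holding the IDS (or None)."""
    if attack is None:
        return 0 if defend is None else C_DEF     # defender wasted C_def
    impact = IMPACT[attack]
    if defend is None:
        return impact                             # undefended exploit
    if HOST[attack] == defend:
        return -(impact - C_DEF)                  # attack caught by the IDS
    return impact + C_DEF                         # exploit plus misplaced defense

# Reproduces row E(vm_1 in h_1) of the s_0 payoff matrix: 10, -8, 12.
print([attacker_reward("vm_1", d) for d in (None, "h_1", "h_2")])  # [10, -8, 12]
```

The defender's payoff is simply the negation, matching Equation~\eqref{eq:reward_D}.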
In contrast, if the attacker attacks a VM $vm_i$ while the defender places the IDS to detect attacks on a host in which the targeted VM $vm_i$ is not located, i.e., $vm_i\notin H(ids)$, then the defender is penalized for the wrong defense and incurs the impact of the attack on that VM plus the cost of the wrong defense, which is $-(I_{vm_i}+C_{def})$, while the attacker's reward is $I_{vm_i}+C_{def}$, based on the zero-sum definition. For instance, if the attacker exploits VM $vm_1$ and the defender puts the IDS on host $h_2$ ($vm_1\notin h_2$), the defender gets a negative reward of $-12$, which is the sum of the impact of the attack on that VM and the cost of defense: $R^D=-(I_{vm_1}+C_{def})=-(10+2)$. The attacker then gets a reward of 12, since $R^A=-R^D$. Lastly, if the attacker attacks a VM and the defender takes no action, then the attacker gains the reward for the successful attack, which is equal to the impact of the attack on the exploited VM $vm_i$, i.e., $R^A=I_{vm_i}$, while the defender gets a negative reward $R^D=-I_{vm_i}$. A normal-form zero-sum reward matrix for the four states of the Markov game is shown in Table~\ref{TII-val}, quantified based on the CVSS values and the reward function formulation explained above. \begin{figure*} \caption{Markov model of the game with finite states and deterministic probabilities based on the shortest attack path.} \label{fig:MK} \end{figure*} \subsubsection{States, actions and transitions} The Markov model of the proposed game is illustrated in Figure~\ref{fig:MK}, which captures the transitions and associated probabilities as the attacker tries to find the shortest attack path to exploit the DB (based on Figure~\ref{fig:net}). \noindent{\textit{{States}}.} {A state represents the position the attacker/defender currently holds in the cloud after the performed actions. We extract the information from the shortest path in the cloud attack graph to define the states.
For instance, for the attacker, the initial state is $s_0 = (Host; User)$; if the successful execution of the exploit of VM $vm_2$, $E(vm_2)$, is performed by the attacker, the attacker can transition to another state $s_1 = (H_1; Attacker)$.} \noindent{\textit{{Actions and state transitions}.}} {Based on the system model represented in Figure~\ref{fig:AG}, the attacker has at most three possible actions in each state. The attacker can choose not to attack ($no$-$att$ or $\O$) or attack another adjacent VM by exploiting the vulnerabilities of the targeted VM. Thus, for each state, the maximum number of actions is $\max\big(Deg(vm_i \in H)\big) + 1 = 3$. For instance, in $s_0$, the action space for the attacker is $a^A_{0,s_0} =\O$, $a^A_{1,s_0} =E(vm_1)$, $a^A_{2,s_0}=E(vm_2)$. Similarly, the defender has his own possible actions to defend ($Def$) hosts. For instance, the defender can perform no defence ($No$-$act$ or $\O$). All possible actions for the defender in state $s_0$ are $a^D_{0,s_0} = \O$, $a^D_{1,s_0} = D(h_1)$, $a^D_{2,s_0} = D(h_2)$.} \subsection{Probabilistic Model and Game Solving} \subsubsection{Uniform Random Strategy (URS)} We assume that the defender uses a Uniform Random Strategy (URS), where the defender selects the actions $a^D_s \in \mathcal{D}_A$ based on a uniform probability distribution over his possible actions in the corresponding state. The decision-making process can be viewed as randomization that chooses the next valid state based on a specific probability distribution over the states $s_q \in S$, and then chooses the next host for IDS placement from a uniform distribution. We choose URS as the baseline of the defender's strategy. For instance, in the initial state $s_0$ shown in Table~\ref{TII-val}, the defender can select the mixed strategy of placing the IDS on host $h_1$ or host $h_2$ in the cloud, or taking no action.
Thus, the selection of actions is uniformly distributed for the defender, with an equal probability of $1/3 \approx 0.33$. Moreover, many studies suggest that the defender's strategy selection can be performed as a pure strategy or URS for dynamic defense~\cite{zhuang2014towards}. \subsubsection{Maxmin strategy} In this game, the attacker $P_A$ aims to maximize his expected discounted reward, and the defender $P_D$ tries to choose the actions that minimize the expected reward of the attacker. Thus, the maxmin strategy can be considered for calculating the expected reward of $P_A$ in the Markov game. {Given $Q(s,a^A, a^D)$, an agent is able to maximize the reward using the greedy strategy of always selecting the action with the highest $Q$-value. This strategy is considered greedy because it treats $Q(s,a^A, a^D)$ as a surrogate for immediate reward and then acts to maximize its immediate gain. Nevertheless, it is optimal because the $Q$-function is an accurate summary of future rewards. } We now define the quality of an action, or the $Q$-value, used to represent the expected reward the attacker (A) will get for choosing the action $a^A \in \mathcal{A}_A$ while the defender chooses $a^D \in \mathcal{D}_A$: $$ Q(s,a^A, a^D)=R(s,a^A, a^D)+\lambda \sum_{s'} T(s,a^A,a^D,s')\, V(s'), $$ while the value of a state $s \in S$ in the Markov game is $$ V(s)= \max_{\pi(s)}~\min_{a^D}\sum_{a^A} Q(s,a^A,a^D)\,\pi_{a^A}. $$ The defender tries to minimize the attacker's reward by placing the IDS on the various hosts in the cloud that are part of the attack paths in the attack graph, while the attacker aims to use a mixed policy $\pi(s)$ over his possible actions in $\mathcal{A}_A$ to maximize his total reward. Thus, the Markov game is a useful framework for the defender to model the attacker's policy, so that they can take the necessary actions and countermeasures, making decisions in each state to minimize the attacker's expected utility.
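For intuition about the maxmin computation, the following minimal Python sketch computes the pure-strategy security level of a one-shot zero-sum matrix game (the row player maximizes the worst-case column payoff). Note this is a simplification we introduce for illustration: the value $V(s)$ above optimizes over mixed policies $\pi(s)$, which in general requires solving a linear program rather than the pure-strategy scan shown here.

```python
def pure_maxmin(payoff):
    """Pure-strategy security level of a zero-sum matrix game:
    the row (attacker) player maximizes the minimum payoff over columns."""
    worst_case = [min(row) for row in payoff]          # defender's best reply per row
    best = max(range(len(worst_case)), key=lambda i: worst_case[i])
    return best, worst_case[best]

# Attacker payoffs in state s_0 (rows: no-att, E(vm_1), E(vm_2);
# columns: no-act, def-h_1, def-h_2), taken from the quantified payoff table.
s0 = [[0, 2, 2],
      [10, -8, 12],
      [8, 10, -6]]
row, value = pure_maxmin(s0)
print(row, value)  # 0 0 -> against a worst-case defender, no-att guarantees 0
```

A mixed maxmin strategy over the attack rows can only improve on this guaranteed value, which is why the paper's $V(s)$ maximizes over $\pi(s)$.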
\subsubsection{Transition Probabilities Assignment} The actions of the attacker and defender are considered separately for each state. For instance, the attacker's action space in the initial state $s_0$ is $\mathcal{A}_{A,s_0}=\{a^A_{0,s_0},a^A_{1,s_0},a^A_{2,s_0}\}$, where $a^A_{0,s_0}=\O$ indicates that the attacker takes no action/attack ($No$-$att$) to avoid detection, $a^A_{1,s_0}=E_1$ implies that the attacker exploits VM $vm_1$ (note that $E(vm_i)$ is denoted for short as $E_i$), and $a^A_{2,s_0}=E_2$ means exploiting VM $vm_2$, i.e., $E(vm_2)$. The probability of attack success considering all possible actions in a state $s_q$, denoted $p(AS)_{s_q}$, can be defined as in Equation~\eqref{eq:pas}. This means the attacker can launch a successful attack by taking only one successful action in that state (note that the action $No$-$att$ is not considered a successful attack action). \begin{equation} \label{eq:pas} p(AS)_{s_q}= 1-\prod_{a^A_{j,s_q} \in A_{A,s_q}-\{\O\}}{\Big(1-e(a^A_{j,s_q})\Big)} \end{equation} Note that $e(a^A_{j,s_q})$ is the probability of attack success when taking the specific action $a^A_{j}$ in a state $s_q$, which is the exploitability of the targeted VM based on Table~\ref{vuls}. For instance, exploiting $vm_1$ is an attacker action $a^A_{1,s_0}=E_1$; then $e(a^A_{1,s_0})$ is $e(E_1)=e(vm_1)=0.53$. We now define the probability that the attacker chooses a specific action $a^A_z$ in the current state $s_q$ as Equation~\eqref{eq:ps}.
\begin{equation} \label{eq:ps} p(a^A_{z,s_q})=\frac{e(a^A_{z,s_q})}{\sum_{a^A_{j,s_q} \in A_{A,s_q}}{\Big(e(a^A_{j,s_q})\Big)}} \end{equation} For instance, the probability that the attacker takes action $a^A_{1,s_0}$ in state $s_0$, i.e., prefers to exploit $vm_1$ (denoted $p(a^A_{1,s_0})$ or $p(E_1)$), is calculated as: $$ p(a^A_{1,s_0})=\frac{e(a^A_{1,s_0})}{e(a^A_{1,s_0})+e(a^A_{2,s_0})}=\frac{e(E_1)}{e(E_1)+e(E_2)}. $$ Based on the above equation, $p(E_1)=\frac{0.53}{1.08}\approx 0.49$. Similarly, the probability that the attacker chooses the second action, $p(E_2)$, is computed as $p(a^A_{2,s_0})=p(E_2)\approx 0.51$. We then define the transition probability for the attacker only, for a specific attack action ($a^A_z$), as Equation~\eqref{eq:TA}. \begin{equation} \label{eq:TA} \tau(a^A_{z,s_q}) = \begin{cases} p(a^A_{z,s_q}).e(z,s_q) \hspace{25mm} \small \text{if}~a^A_{z,s_q}\neq\O\\ 1-\sum_{a^A_{j,s_q} \in A_{A,s_q}}{\Big( p(a^A_{j,s_q}).e(j,s_q) \Big)} ~~ \small \text{otherwise}\\ \end{cases} \end{equation} For instance, $\tau(a^A_{2,s_0})= \tau (E_2)= 0.51*0.55=0.28$, which is the product of the probability that the attacker chooses action $a^A_{2,s_0}$ and the attack success probability of the related attack action, $e(a^A_{2,s_0})$. We then assume that the probabilities of the defender's actions for each state of the game are uniformly distributed. Thus, the transition probability for the defender only, for a specific defend action ($a^D_z$) in the state $s_q$, is defined as: \begin{equation} \label{eq:TD} \tau(a^D_{z,s_q}) =\frac{1}{|\mathcal{D}_{A,s_q}|}, \end{equation} where $|\mathcal{D}_{A,s_q}|$ is the number of actions available to the defender. For instance, if the defender has three actions, such as no action ($no$-$act$), defend host $h_1$, and defend host $h_2$, then $\tau(a^D_{2,s_0})=\tau(D_2)\approx 0.33$. Note that $D_2 \in \mathcal{D}_A$ indicates defending host $h_2$ (placement of the IDS in host $h_2$).
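The action-probability and transition-weight formulas above can be checked numerically with a short Python sketch. The exploitability values are the state-$s_0$ entries from Table~\ref{vuls}; the function names are ours.

```python
# Exploitability e of each attack action available in state s_0.
E = {"E_1": 0.53, "E_2": 0.55}

def action_prob(e, a):
    """Eq. (ps): the attacker picks action a proportionally to its exploitability."""
    return e[a] / sum(e.values())

def tau_attack(e, a):
    """Eq. (TA), attack branch: transition weight = p(a) * e(a)."""
    return action_prob(e, a) * e[a]

def tau_noop(e):
    """Eq. (TA), otherwise-branch: residual weight of the no-att action."""
    return 1 - sum(tau_attack(e, a) for a in e)

print(round(action_prob(E, "E_1"), 2))  # 0.49
print(round(tau_attack(E, "E_2"), 2))   # 0.28
```

Combining these with the uniform defender weight $\tau(a^D) = 1/3$ gives the joint transition probabilities used in the next step.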
Now we define the transition probability based on both the attacker's and the defender's actions as Equation~\eqref{eq:T}. \begin{equation} \label{eq:T} tp(s,a^A_z,a^D_z,s')=\tau(a^A_{z,s}).\tau(a^D_{z,s}) \end{equation} For example, the transition probability for $T_{0,1} = (s_0,a^A,a^D,s_1)$ can be computed as: $$ tp(T_{0,1})=tp(s_0,a^A,a^D,s_1)=\tau(E_2).\tau(D_2) + \tau(E_2).\tau(\O) $$ which yields $tp(T_{0,1}) \approx 0.19$. Similarly, the transition from state $s_0$ to $s_0$ can be defined as $T_{0,0} = (s_0,a^A,a^D,s_0)$ and its probability is computed as: \begin{equation}\label{optimization_model} \begin{split} tp(T_{0,0})=tp(s_0,a^A,a^D,s_0)=\tau(\O).\tau(\O) + \tau(\O).\tau(D_1) \\ + \tau(\O).\tau(D_2)\approx 0.41. \end{split} \end{equation} Likewise, all the Markovian model transitions between the states of the game and the corresponding transition probabilities are computed and illustrated in Figure~\ref{fig:MK}. Note that the Markovian game is modeled under the assumption that the attacker uses the shortest attack path to reach the target in each state of the game, and the probabilities are defined and formulated based on the URS for this game. Transition probability assignment through Q-learning is not considered in this paper and will be addressed in our future work. \section{Discussions and Limitations}\label{sec:discussion} This paper attempts to initiate a discussion on how game theory can evolve to evaluate AI-aided threats and support appropriate decisions on possible defensive strategies. In this paper, we only considered one example of an AI-aided attack, which leverages deep neural networks to find the shortest attack path in a networked system. However, a more capable game model needs to be proposed to defend against AI-embedded attacks such as DeepLocker.
We summarize some limitations and further challenges as follows: \noindent{\textit{Limitations and Challenges}.} The following challenges need further investigation in current game theory models against the next generation of cyber threats. \begin{itemize} \item In this paper, we only modeled an attacker that is able to estimate the shortest attack path in the modeled cloud. However, other efficacy factors also need to be considered in the model, such as maximum exploitability ($ME$), which ensures that the attacker traverses the attack path that yields the maximum probability of success. An omniscient attacker can leverage AI techniques, as discussed in Section~\ref{sec:sp-DNN} and \cite{zimba2019bayesian}, to estimate and probe not only the shortest path but also the best attack path in the network. Thus, the attacker can find the attack path with the maximum exploitability based on CVSS values; in effect, the attacker follows the path(s) with the highest attack success probability. From the attacker's point of view, $ME$ can be defined as the following equation. $$ ME=\max_{ap \in {AP}}\Big(\sum_{vm_i \in ap} e({vm_i})\Big) $$ However, in this paper, the main focus is on an attacker using a DNN to find the shortest attack path; we aim to consider other attack strategies, such as $ME$ estimation, in our future work. \item We only considered our proposed game theory model for a small cloud model. However, it can be extended to larger cloud models, in which the attacker can find the shortest attack path through a DNN more efficiently. A large cloud system can be modeled using an AG,~see Figure~\ref{fig:AG-Large}. In this case, the maximum number of states ($S$) in the game model is determined by the number of VMs in the shortest attack path: $$ SAP = \min_{ap_i \in AP}\big(len(ap_i)\big), $$ and then $|S|=SAP$. As the model is based on the shortest attack path, it does not face a state explosion problem.
\item An important limitation of game-theoretic approaches in cyber security is the difficulty of perfectly quantifying the parameters of cyberspace, which might affect the decision-making process of defenders. It is therefore important to leverage machine-learning-based approaches to identify parameters and values for both attacker and defender. \item The application of a Nash equilibrium requires both attackers and defenders to choose their own optimal strategies at the same time, a process that is difficult to achieve in reality, especially for the new generation of threats. \item Most game theory models rest on the hypothesis that both attacker and defender are completely rational: both parties know how to maximize their reward values. However, the attack-defense information in an actual network is incomplete and asymmetric, as the strategies' rewards can be private information of the game players. \end{itemize} \begin{figure} \caption{AG generated for a large cloud model. The AI-aided attacker can leverage a DNN to estimate the shortest attack path effectively in a large cloud model.} \label{fig:AG-Large} \end{figure} \section{Conclusion}\label{sec:conclusion} This paper first discusses the different types of AI-powered attacks, categorized as AI-aided and AI-embedded attacks, and reviews the application of game theory to modeling various cyber threat scenarios. Then, a cloud system is proposed in which an AI-aided attacker can effectively find the shortest attack path in the network using a deep neural network. We proposed a zero-sum Markovian game model that captures AI-aided attacks with finite states and can help the defender make appropriate decisions to mitigate the attack impact.
This paper demonstrates the potential of Markovian game theory models in strengthening decision-making in the face of AI-based threats, and in finding optimized strategies more quickly than other time-consuming optimization techniques. However, several critical challenges need to be further studied and addressed before AI-based game theory models can be built. We hope that our discussion and our initial design of an experimental model will trigger more profound research to make game theory more viable for evaluating novel cyber threats. \end{document}
\begin{document} \title{Heat content and inradius \\ for regions with a Brownian boundary} \author{\renewcommand{\thefootnote}{\arabic{footnote}} M.\ van den Berg \footnotemark[1] \\ \renewcommand{\thefootnote}{\arabic{footnote}} E.\ Bolthausen \footnotemark[2] \\ \renewcommand{\thefootnote}{\arabic{footnote}} F.\ den Hollander \footnotemark[3] } \footnotetext[1]{ School of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW, United Kingdom. } \footnotetext[2]{ Institut f\"ur Mathematik, Universit\"at Z\"urich, Winterthurerstrasse 190, CH-8057 Z\"urich, Switzerland. } \footnotetext[3]{ Mathematical Institute, Leiden University, P.O.\ Box 9512, 2300 RA Leiden, The Netherlands. } \date{12 March 2013} \maketitle \begin{abstract} In this paper we consider $\beta[0,s]$, Brownian motion of time length $s>0$, in $m$-dimensional Euclidean space $\R^m$ and on the $m$-dimensional torus $\T^m$. We compute the expectation of (i) the heat content at time $t$ of $\R^m \backslash \beta [0,s]$ for fixed $s$ and $m=2,3$ in the limit $t \da 0$, when $\beta[0,s]$ is kept at temperature $1$ for all $t>0$ and $\R^m \backslash \beta [0,s]$ has initial temperature $0$, and (ii) the inradius of $\T^m \backslash \beta [0,s]$ for $m=2,3,\cdots$ in the limit $s\to\infty$. \vskip 0.5truecm \noindent {\it AMS} 2000 {\it subject classifications.} 35J20, 60G50.\\ {\it Key words and phrases.} Laplacian, Brownian motion, Wiener sausage, heat content, inradius, spectrum. \noindent {\it Acknowledgment.} MvdB was supported by The Leverhulme Trust, Research Fellowship 2008/0368, EB by SNSF-grant 20-100536/1, and FdH by ERC Advanced Grant 267356-VARIS. \end{abstract} \section{Introduction and main results} \label{S.1} Asymptotic properties of the heat content and the inradius for regions with a fractal boundary have received a lot of attention in the literature.
Most of the focus has been on porous regions (e.g.\ the $m$-dimensional Euclidean space $\R^m$ from which a Poisson cloud of non-polar sets is removed \cite{Sz}), and regions with a fractal polygonal boundary e.g.\ the von Koch snow flake and its relatives \cite{vdB1, vdB2, vdBedHo99}. In this paper we consider the region obtained from $\R^m$ or the $m$-dimensional torus $\T^m$ by cutting out a Brownian path of time length $s$. In Sections \ref{S.1.1} and \ref{S.1.2} we consider the heat content, and in Section \ref{S.1.3} the inradius. We formulate some open problems in Section \ref{S.1.5}. The proofs are deferred to Sections \ref{S.2}--\ref{S.3}. \subsection{Heat content outside compact sets} \label{S.1.1} Let $K$ be a compact non-polar set in $\R^m$ with boundary $\partial K$, and let $v\colon\,\R^m \backslash K \times [0,\infty)\to \R$ be the unique weak solution of the heat equation \begin{equation} \label{eq1alt} \left\{\begin{array}{llll} \Delta v(x;t) &=& \partial v(x;t)/\partial t, &x\in \R^m \backslash K,\, t>0,\\ v(x;t) &=& 1, &x \in \partial K,\, t>0,\\ v(x;0) &=& 0, &x \in \R^m \backslash K. \end{array} \right. \end{equation} Then $v(x;t)$ represents the temperature at point $x$ at time $t$ when $\partial K$ is kept at temperature $1$ and the initial temperature is $0$. The heat content of $\R^m \backslash K$ at time $t$ is defined by \begin{equation} \label{e9} E_K(t) = \int_{\R^m \backslash K} v(x;t)\,dx. \end{equation} If $\partial K$ is $C^\infty$, then $E_K(t)$ has an asymptotic series expansion for $t\da 0$ of the form \begin{equation} \label{EKt0} E_K(t) = \sum_{j=1}^J a_j(K)\,t^{j/2} + O\big(t^{(J+1)/2}\big), \qquad t \da 0,\,J\in\N, \end{equation} where the coefficients are local geometric invariants of $\R^m \backslash K$. 
In particular, \begin{equation*} \begin{aligned} a_1(K) &= 2 \pi^{-1/2} \int_{\partial K} dz,\\ a_2(K) &= 2^{-1}(m-1) \int_{\partial K} H(z)\,dz, \end{aligned} \end{equation*} where $dz$ is the surface measure on $\partial K$, and $H(z)$ is the mean curvature at $z$ of $\partial K$ with inward orientation. Formulas of this type can be found in the general setting of Riemannian manifolds and Laplace-type operators \cite{G}. The case where $\partial K$ is only $C^3$ was settled by probabilistic tools in \cite{vdBLeG}, and the expansion in \eqref{EKt0} holds for $J=2$. The asymptotic behaviour for $t\to \infty$ is different and its analysis does not require smoothness of $\partial K$. For $m=3$ it is shown in \cite{LeG1}, \cite{P} (see \cite{S1} for earlier results) that if $K$ is a compact set, then \begin{equation} \label{e10} E_K(t) = \sum_{j=1}^3 b_j(K)\,t^{(3-j)/2} + O\big(t^{-1/2}\big), \qquad t \to \infty, \end{equation} with \begin{equation} \label{e10alt} \begin{aligned} b_1(K) &= \cp(K),\\ b_2(K) &= 2^{-1}\pi^{-3/2}\cp(K)^2,\\ b_3(K) &= (4\pi)^{-2}\cp(K)^3-|K|-(8\pi)^{-1}\int_K \int_K \|x-y\|\,\mu_K(dx)\mu_K(dy),\\ \end{aligned} \end{equation} where $\|\cdot\|$ is the Euclidean norm, $\mu_K$ is the equilibrium measure of $K$, $\cp(K) = \int_K \mu_K(dx)$ is the Newtonian capacity of $K$, and $|K|$ is the Lebesgue measure of $K$. For $m=2$ it is shown in \cite{LeG} that if $K$ is a non-polar set, then \begin{equation} \label{e11} E_K(t)= t \sum_{j=1}^{J} b_j(K)\,(\log t)^{-j} + O\big(t(\log t)^{-(J+1)}\big), \ \ t \to \infty,\,J \in \N, \end{equation} where $b_1(K)=4\pi$, and where the higher-order coefficients all depend on the logarithmic capacity of $K$ only. For a wide class of regions with a fractal boundary it is known that $E_K(t)$ is comparable with $t^{(m-d)/2}$ for $t \da 0$, where $d$ is the interior Minkowski dimension of the boundary of $\R^m \backslash K$ \cite{vdB1}. 
In the case of a self-similar boundary it is sometimes possible to obtain more detailed results \cite{vdBedHo99}, \cite{vdB2}. It turns out that if there is a dominant arithmetic sequence of length scales, then the leading term is $t^{(m-d)/2}$ times a periodic function of $\log(t^{-1})$. If there is no such sequence, then the leading term is $t^{(m-d)/2}$ times a constant. However, neither the periodic function nor the constant is known explicitly. In Section \ref{S.1.2} below we study the case where $K$ is a Brownian path of time length $s$. We will see that for this case there is no periodic function of $\log(t^{-1})$. \subsection{Expectation of the heat content outside a Brownian path} \label{S.1.2} The solution of (\ref{eq1alt}) is given by \begin{equation} \label{vsolrep} v(x;t) = \P_x(\tau_K \leq t), \end{equation} where $\tau_K=\inf\{u \geq 0 \colon\,\bar\beta(u) \in K\}$ with $(\bar\beta(u), \,u \geq 0;\,\P_x,\,x \in \R^m)$ Brownian motion on $\R^m$. Let $E_{\beta[0,s]}(t)$ denote the heat content of $\R^m\backslash\beta[0,s]$ at time $t$, and let \begin{equation} \label{avhc} E(s,t) = \E_0\big(E_{\beta[0,s]}(t)\big) \end{equation} denote its expectation. Since $\cp(\beta[0,s])=0$ for $m=4,5,\cdots$, only $m=2,3$ are relevant. In Section \ref{S.2} we will prove the following duality property. \begin{theorem} \label{thdual} \textup{(a)} Let $m=2,3$. Then for $s,t>0$, \begin{equation} \label{Escal} E(s,t) = E(t,s), \qquad E(s,t) = (t/s)^{m/2}\,E(s,s^2/t). \end{equation} \textup{(b)} Let $m=3$.
Then for $s>0$, \begin{equation} \label{e13} E(s,t) = c_1(s)\,t^{1/2} + c_2(s)\,t + c_3(s)\,t^{3/2} + O(t^2), \qquad t \da 0, \end{equation} with \begin{equation} \begin{aligned} c_1(s) &=\mathcal{C}_1 s,\\ c_2(s) &= 2^{-1}\pi^{-3/2}\mathcal{C}_2 s^{1/2},\\ c_3(s) &= (4\pi)^{-2}\mathcal{C}_3 - (8\pi)^{-1} \E_0\left(\int_{\R^3} \int_{\R^3} \|x-y\|\,\mu_{\beta[0,1]}(dx)\mu_{\beta[0,1]}(dy)\right), \end{aligned} \end{equation} where \begin{equation*} \mathcal{C}_i= \E_0\Big(\big(\cp\big(\beta[0,1]\big)\big)^i\Big), \qquad i=1,2,3. \end{equation*} \textup{(c)} Let $m=2$. Then for $s>0$, \begin{equation*} E(s,t) = 4\pi s\, \big(\log(t^{-1})\big)^{-1} + O\big((\log(t^{-1}))^{-2}\big), \qquad t \da 0. \end{equation*} \end{theorem} Theorem \ref{thdual}(a) states a duality property from which Theorem \ref{thdual}(b-c) easily follows. Indeed from (\ref{e10}--\ref{e10alt}) and \eqref{avhc} we obtain that for $m=3$, \begin{equation} \label{Etlarge} E(s,t) = \sum_{j=1}^3 c_j(s)\,t^{(3-j)/2} + O\big(t^{-1/2}\big), \qquad t\to\infty, \end{equation} with $c_j(s) = \E_0^1(b_j(\beta[0,s]))$. By combining \eqref{Etlarge} with \eqref{Escal}, we obtain the claim in Theorem \ref{thdual}(b). Similarly, it follows from (\ref{e11}) that for $m=2$, \begin{equation} \label{Etlargealt} E(s,t)= 4\pi\,t(\log t)^{-1} + O\big(t(\log t)^{-2}\big), \qquad t \to \infty. \end{equation} By combining \eqref{Etlargealt} with \eqref{Escal}, we obtain the claim in Theorem \ref{thdual}(c). \subsection{Expectation of the inradius of a torus cut by a Brownian path} \label{S.1.3} Let $\T^m$ denote the $m$-dimensional torus, which we identify with the set $(-\frac{1}{2},+\frac{1}{2}]^m$. We denote the distance on $\T^m$ between points $x$ and $y$ by $d(x,y)$. $\T^m$ is a compact and connected Riemannian manifold without boundary. The associated Laplace-Beltrami operator is the generator of Brownian motion on $\T^m$. The latter process can be obtained by wrapping Brownian motion on $\R^m$ around $\T^m$. 
Namely, let $(\tilde{\beta}(u), \,u\geq 0;\,\P_x,\,x\in \R^m)$ denote Brownian motion on $\R^m$, with the Laplacian as generator, and put \begin{equation*} \beta(u) = \big(\tilde{\beta}(u)+(\tfrac12,\cdots,\tfrac12)\big)\,(\mathrm{mod}\,\Z^m) - (\tfrac12,\cdots,\tfrac12). \end{equation*} Then $(\beta(u),u\geq 0;\,\P_x,\,x\in \R^m)$ is Brownian motion on $\T^m$. On $\T^m$ we define a random distance function $d_s$ by putting \begin{equation*} d_s(x) = \inf\{d(x,y)\colon\,y\in\beta[0,s]\}, \qquad x\in\T^m, \end{equation*} i.e., the distance of $x$ to $\beta[0,s]$. The inradius of $\T^m\backslash\beta[0,s]$ is the random variable $\rho(s)$ defined by \begin{equation*} \rho(s) = \sup\{d_s(x)\colon\,x\in\T^m\}. \end{equation*} The supremum is attained because $d_s$ is continuous and $\T^m$ is compact. Hence there exists an open ball with radius $\rho(s)$ in $\T^m\backslash\beta[0,s]$. The inradius is a non-trivial random variable in all dimensions. \begin{theorem} \label{inradmgeq3} If $m=3,4,\cdots$, then \begin{equation*} \lim_{s\to\infty}\,\left(\frac{s}{\log s}\right)^{1/(m-2)}\, \E_0(\rho(s))= \left(\frac{m}{(m-2)\kappa_m}\right)^{1/(m-2)}, \end{equation*} where $\kappa_m$ is the Newtonian capacity of the ball with radius $1$ in $\R^m$, given by \begin{equation*} \label{e5.20} \kappa_m=4\pi^{m/2}[\Gamma((m-2)/2)]^{-1}. \end{equation*} \end{theorem} \begin{theorem} \label{inradm=2} If $m=2$, then \begin{equation*} \lim_{s\to\infty} s^{-1/2}\,\log\E_0(\rho(s))=-\pi^{1/2}. \end{equation*} \end{theorem} \subsection{Discussion and open problems} \label{S.1.5} {\bf Heat content.} Since the Minkowski dimension of the Brownian path equals $d=2$, the power $1/2$ in (\ref{e13}) for $m=3$ agrees with the exponent $(m-d)/2$ mentioned below (\ref{e11}). The absence of a periodic function multiplying the term $t^{1/2}$ reflects the fact that there is no dominant arithmetic sequence of length scales in the Brownian path. 
Comparing (\ref{e13}) with (\ref{e10}--\ref{e10alt}), we see that $\beta[0,s]$ has an effective area $2^{-1}\pi^{1/2}\mathcal{C}_1 s$ and an effective mean curvature integral $2^{-1}\pi^{-3/2}\mathcal{C}_2 s^{1/2}$. We see from Theorem \ref{thdual} that the complement of the Brownian path heats up much faster for $m=2$ than for $m=3$ as $t$ increases from $0$. With the help of (\ref{e11}) it is actually possible to obtain the full asymptotic series for $m=2$. The geometry of $\beta[0,1]$ enters into this series only via the expectation of the powers of the logarithm of the logarithmic capacity of $\beta[0,1]$. \noindent {\bf Inradius.} Theorems \ref{inradmgeq3}--\ref{inradm=2} identify the scaling behavior of the expectation of the inradius. Strong laws of large numbers were derived in \cite{DPR} and \cite{DPRZ}. Our proofs for the upper bounds in Section 3.1 are based on Wiener sausage estimates and spectral decomposition, while the lower bounds in Section 3.2 and Section 3.3 respectively, are based on results of \cite{DPR} and \cite{DPRZ} which rely in turn on excursions of Brownian motions. \noindent {\bf Spectrum.} There are several other set functions that are closely related to the inradius such as the principal Dirichlet eigenvalue. For $s>0$, let $\Delta_s$ be the Laplacian acting in $L^2(\T^m \backslash \beta[0,s])$ with Dirichlet boundary condition on $\beta[0,s]$. The spectrum of $-\Delta_s$ is bounded from below by the spectrum of $-\Delta$ on $\T^m$. Since the latter is discrete, the spectrum of $-\Delta_s$ is discrete, with eigenvalues $(\lambda_{j,s})_{j\in\N}$, labeled in non-decreasing order and including multiplicities. Since the Newtonian capacity of $\beta[0,s]$ is zero for $m>3$, the spectrum of $-\Delta_s$ is non-trivial if and only if $m=2,3$. Since $s\mapsto\T^m\backslash\beta[0,s]$ is decreasing we have, by domain monotonicity of Dirichlet eigenvalues \cite{RS}, that $s\mapsto \lambda_{j,s}$ is increasing. 
\begin{conjecture} \label{specm=2} If $m=2$, then \begin{equation} \label{e17} \lim_{s\to\infty} s^{-1/2}\log \E_0(\lambda_{1,s}) = 2\pi ^{1/2}. \end{equation} \end{conjecture} \begin{conjecture} \label{specm=3} If $m=3$, then \begin{equation}\label{e16} \lim_{s\rightarrow \infty} \left(\frac{\log s}{s}\right)^2 \E_0(\lambda_{1,s}) = 9^{-1}(2\pi)^4. \end{equation} \end{conjecture} The heuristic behind Conjecture \ref{specm=2} is as follows. The complement of the Brownian path $\beta[0,s]$ in $\T^2$ consists of simply connected open components. Hence the spectrum of the Dirichlet Laplacian acting in $L^2(\T^2\setminus\beta[0,s])$ is the union of the spectra of the Dirichlet Laplacian for all the components. In particular, $\lambda_{1,s}$ is the minimum over all first eigenvalues of these components. For a simply connected open set $\Omega$ in $\R^2$ with inradius $\rho$ we know from \cite{An86} that the first Dirichlet eigenvalue is comparable to $\rho^{-2}$. The same is not true for the bottom of the spectrum of a simply connected open set $\Omega$ in $\T^2$. Indeed, the inradius of $\T^2$ is bounded from above by half its diameter. Hence, if $\Omega$ is almost all of $\T^2$, then the bottom of the spectrum is close to $0$ while $\rho^{-2}$ is bounded away from $0$. However, for large $s$ it is very unlikely that such a large component exists. Thus, we have that the typical component for large $s$ has small $\rho(s)$, and hence, by \cite{An86}, \begin{equation*} \lambda_{1,s}\asymp \rho(s)^{-2}. \end{equation*} The right-hand side is of order $e^{2(\pi s)^{1/2}}$, which explains (\ref{e17}). The heuristic behind Conjecture \ref{specm=3} is as follows. The simplifying features for $m=2$ are absent for $m=3$. We expect that with high probability the largest open ball in $\T^3\setminus\beta[0,s]$ with inradius $\rho(s)$ is ``densely surrounded'' by the Brownian path. 
So $\lambda_{1,s}$ is the first Dirichlet eigenvalue of a ball with radius $\rho(s)$ in $\T^3$ (or $\R^3$). Hence \begin{equation} \label{e18a*} \lambda_{1,s}\approx \pi^2\rho(s)^{-2}. \end{equation} We also expect that the inradius $\rho(s)$ gets very narrowly distributed around its mean as $s\to\infty$. Hence for large $s$, \begin{equation*} \E_0(\lambda_{1,s})\approx \pi^2(\E_0(\rho(s)))^{-2}. \end{equation*} The asymptotic behaviour of the latter expectation can be read off from Theorem \ref{inradmgeq3} for $m=3$, and implies \eqref{e16}. \noindent {\bf Large deviations.} As explained at the end of Section \ref{S.2}, it is easy to show that for $m=3$ and $s>0$ the following strong law of large numbers holds: \begin{equation} \label{eq11} \lim_{t \da 0} E_{\beta[0,s]}(t)/E(s,t) = 1 \qquad \P_0-a.s. \end{equation} We expect that both $\rho(s)$ and $\lambda_{1,s}$ satisfy the strong law of large numbers as $s\to\infty$. It is interesting to determine their large deviation behaviour. For $m \geq 3$ this was achieved in \cite{GdH}, which was inspired by an unpublished earlier version of the present paper. Interestingly, the large deviations are so uncostly in one direction that they do not imply our results about the expectation. \section{Proof of Theorem \ref{thdual}} \label{S.2} Let \begin{equation*} \big(\beta_i(u),\,u \geq 0;\,\P^i_x,\,x \in \R^m\big), \qquad i=1,2, \end{equation*} be two independent Brownian motions on $\R^m$. Recalling (\ref{vsolrep}), we have from (\ref{e9}) that \begin{equation*} E_K(t) + |K| = \E^2_0\big(|W_{\beta_2}^K(t)|\big), \end{equation*} where \begin{equation*} W_{\beta_2}^K(t)= \cup_{u \in [0,t]} \{K + \beta_2(u)\} = K + \beta_2[0,t] \end{equation*} is the $K$-set Wiener sausage associated with $\beta_2$ up to time $t$. Now choose $K=\beta_1[0,s]$, $s>0$. 
Then, since $|\beta_1[0,s]|=0$, the heat content in $\R^m \backslash\beta_1[0,s]$ becomes \begin{equation*} E_{\beta_1[0,s]}(t) = \E_0^2\big(|W(s,t)|\big) \end{equation*} with \begin{equation*} W(s,t) = W_{\beta_2}^{\beta_1[0,s]}(t) = \beta_1[0,s]+\beta_2[0,t] = W_{\beta_1}^{\beta_2[0,t]}(s). \end{equation*} The expected heat content becomes \begin{equation*} E(s,t) = \E_0^1\big(E_{\beta_1[0,s]}(t)\big) = (\E_0^1 \otimes \E_0^2)(|W(s,t)|) = \E_0^2\big(E_{\beta_2[0,t]}(s)\big). \end{equation*} We have the following two elementary lemmas. \begin{lemma} \label{Lem1} For all $s,t\geq 0$, \begin{equation*} E(s,t) = E(t,s). \end{equation*} \end{lemma} \begin{proof} Note that \begin{equation} \label{e20} E(s,t) = (\E_0^1\otimes\E_0^2)\big(|W(s,t)|\big) = \int_{\R^m} dx\,\, (\P_x^1 \otimes \P_0^2)\big(\tau_{\beta_2[0,t]} \leq s\big) \end{equation} with \begin{equation*} \tau_{\beta_2[0,t]} = \inf\big\{u \geq 0\colon\,\beta_1(u)\in\beta_2[0,t]\big\}. \end{equation*} We may rewrite (\ref{e20}) as \begin{equation*} E(s,t) = \int_{\R^m} dx\,\,(\P_x^1 \otimes \P_0^2)\big(\beta_1[0,s] \cap \beta_2[0,t] \neq \emptyset\big), \end{equation*} from which the symmetry property follows via the change of variable $x \to -x$. \end{proof} \begin{lemma} \label{Lem2} For all $a>0$ and $s,t \geq 0$, \begin{equation*} |W(s,t)| \triangleq a^{-m}|W(a^2s,a^2t)|, \end{equation*} where $\triangleq$ denotes equality in distribution. \end{lemma} \begin{proof} Let $W_{\beta_1}^A(s)$ be the $A$-set Wiener sausage associated with $\beta_1$ up to time $s$, and note the two scaling relations \begin{equation*} \beta_2[0,t] \triangleq a^{-1}\,\beta_2[0,a^2t],\qquad W_{\beta_1}^A(s) \triangleq a^{-m}\,W_{\beta_1}^{aA}(a^2s). \end{equation*} The claim follows by choosing $A=a^{-1}\beta_2[0,a^2t]$. \end{proof} Theorem \ref{thdual}(a) follows from Lemma \ref{Lem1} and Lemma \ref{Lem2} with $a^2=s/t$.
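For the reader's convenience, this step can be written out in one display (a brief expansion added here, in the notation of the two lemmas above): taking expectations in Lemma \ref{Lem2} with $a^2=s/t$, so that $a^{-m}=(t/s)^{m/2}$ and $(a^2s,a^2t)=(s^2/t,s)$, gives

```latex
\begin{equation*}
E(s,t)
= (\E_0^1 \otimes \E_0^2)\big(|W(s,t)|\big)
= \Big(\frac{t}{s}\Big)^{m/2} (\E_0^1 \otimes \E_0^2)\big(|W(s^2/t,s)|\big)
= \Big(\frac{t}{s}\Big)^{m/2}\,E(s^2/t,s),
\end{equation*}
```

and an application of Lemma \ref{Lem1} turns $E(s^2/t,s)$ into $E(s,s^2/t)$, which is the second identity in \eqref{Escal}; the first identity is Lemma \ref{Lem1} itself.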
Theorems \ref{thdual}(b-c) follow from (\ref{Escal}), (\ref{Etlarge}--\ref{Etlargealt}) and the scaling relations \begin{equation*} \begin{aligned} \cp\big(\beta_1[0,s]\big) &\triangleq s^{1/2}\,\cp\big(\beta_1[0,1]\big),\\ \mu_{\beta_1[0,s]}(s^{1/2}A) &\triangleq s^{1/2}\,\mu_{\beta_1[0,1]}(A), \end{aligned} \end{equation*} which are valid for any compact subset $A\subset \R^3$. The latter imply that $c_j(s) = s^{j/2}c_j(1)$, $s>0$, for $m=3$. A standard renewal argument \cite{S1}, \cite{S2} gives the strong law of large numbers for $|W(s,t)|$, namely, for $s>0$, \begin{equation*} \lim_{t\to\infty} |W(s,t)|/\E^1_0\big(|W(s,t)|\big) = 1 \qquad (\P_0^1\otimes\P_0^2)-a.s. \end{equation*} This in turn implies that for $s>0$, \begin{equation*} \lim_{t\to\infty} \E^2_0\big(|W(s,t)|\big)/(\E^1_0\otimes\E_0^2)\big(|W(s,t)|\big) = 1 \qquad \P_0^1-a.s., \end{equation*} which is the same as \begin{equation} \label{e30alt} \lim_{t\to\infty} E_{\beta_1[0,s]}(t)/E(s,t) = 1 \qquad \P_0^1-a.s. \end{equation} The claim in (\ref{eq11}) follows from (\ref{e30alt}) via Lemma \ref{Lem2} with $a^2=s/t$. \section{Proof of Theorems \ref{inradmgeq3}--\ref{inradm=2}} \label{S.3} For $x\in\T^m$ and $\epsilon>0$, let $T_{x,\epsilon}=\inf\{u\geq 0\colon\,\beta(u)\in B_x(\epsilon)\}$, where $(\beta(u),\,u\geq 0;\,\P_x,\,x\in\T^m)$ is Brownian motion on $\T^m$, and $B_x(\epsilon)$ is the open ball with center $x$ and radius $\epsilon$ in $\T^m$. Then \begin{equation*} T_\epsilon = \sup_{x\in\T^m} T_{x,\epsilon} \end{equation*} is the cover time of $\T^m$ by the Wiener sausage with radius $\epsilon$. By translation invariance, we have \begin{equation*} \P_0[T_{\epsilon}>s] = \P_x[T_{\epsilon}>s], \ \ \ x\in\T^m,\,s\geq 0, \end{equation*} which, since $|\T^m|=1$, gives \begin{equation} \label{e5.2} \P_0[T_{\epsilon}>s] = \int_{\T^m} dx\,\P_x[T_{\epsilon}>s].
\end{equation} \subsection{Upper bound} Let $N\in\N$, and let $\{x_1,x_2,\cdots,x_{N^m}\}=(N^{-1}\Z)^m\cap\T^m.$ Let $\eta \in (0,1/4)$ be arbitrary, and consider the collection of open balls with centers $\{x_1,x_2,\cdots,x_{N^m}\}$ and radii $(1-\eta)\epsilon$. For every $x\in\T^m$ there exists $v\in \{x_1,x_2,\cdots,x_{N^m}\}$ such that $d(x,v)\leq (2N)^{-1}m^{1/2}$. For $N\geq m^{1/2}/(2\epsilon \eta),$ we have $B_x(\epsilon) \supset B_v((1-\eta)\epsilon)$. This implies that if $N\ge m^{1/2}/(2\epsilon \eta)$, then \begin{equation*} \big\{B_{x_i}((1-\eta)\epsilon) \cap \beta[0,s]\ne \emptyset, i=1,2,\cdots,N^m\big\} \subset \{T_{\epsilon}\le s\}. \end{equation*} It follows that \begin{align} \label{e5.4} \P_x[T_{\epsilon}>s] &\leq 1-\P_x\big[B_{x_i}((1-\eta)\epsilon)\cap \beta[0,s]\ne \emptyset, i=1,\cdots,N^m\big]\nonumber\\ &\leq 1-\E_x\left[\prod_{i=1}^{N^m}\left(1-1_{B_{x_i}((1-\eta)\epsilon)\cap \beta[0,s]= \emptyset}\right)\right]\nonumber\\ &\leq \left(\sum_{i=1}^{N^m}\P_x\big[{B_{x_i}((1-\eta)\epsilon)\cap \beta[0,s]= \emptyset}\big]\right)\wedge 1. \end{align} By \eqref{e5.2} and \eqref{e5.4}, we have \begin{align} \label{e5.5} \P_0[T_{\epsilon}>s] &\leq \int_{\T^m} dx \left(\sum_{i=1}^{N^m}\P_x\big[{B_{x_i}((1-\eta)\epsilon)\cap \beta[0,s]= \emptyset}\big]\right)\wedge 1\nonumber\\ &\leq \left(N^m\int_{\T^m} dx\,\P_x[{B_{x_1}((1-\eta)\epsilon)\cap \beta[0,s]= \emptyset}]\right)\wedge1. \end{align} Next, let $\mu_{1,(1-\eta)\epsilon}<\mu_{2,(1-\eta)\epsilon}\leq\cdots$ be the spectrum of the Dirichlet Laplacian acting in $L^2(\T^m\setminus \overline{B}_{x_1} ((1-\eta)\epsilon))$, with a corresponding orthonormal set of eigenfunctions $\{\psi_{j,(1-\eta)\epsilon}, j=1,2,\cdots\}$. Then \begin{equation} \label{e5.6} \P_x\big[B_{x_1}((1-\eta)\epsilon)\cap \beta[0,s]=\emptyset\big] = \sum_{j=1}^{\infty} e^{-s\mu_{j,(1-\eta)\epsilon}}\, \psi_{j,(1-\eta)\epsilon}(x) \int_{\T^m\setminus\overline{B}_{x_1}((1-\eta)\epsilon)}dy\, \psi_{j,(1-\eta)\epsilon}(y).
\end{equation} Hence, by Parseval's identity and \eqref{e5.6}, \begin{align} \label{e5.7} \int_{\T^m}dx\,\P_x[B_{x_1}((1-\eta)\epsilon)\cap\beta[0,s]=\emptyset] &= \sum_{j=1}^{\infty} e^{-s\mu_{j,(1-\eta)\epsilon}} \left(\int_{\T^m\setminus\overline{B}_{x_1}((1-\eta)\epsilon)}\, \psi_{j,(1-\eta)\epsilon}\right)^2\nonumber\\ &\leq e^{-s\mu_{1,(1-\eta)\epsilon}} \sum_{j=1}^{\infty}\left(\int_{\T^m\setminus\overline{B}_{x_1}((1-\eta)\epsilon)}\, \psi_{j,(1-\eta)\epsilon}\right)^2\nonumber\\ &= e^{-s\mu_{1,(1-\eta)\epsilon}}\,(|\T^m|-|\overline{B}_{x_1}((1-\eta)\epsilon)|) \nonumber\\ &\leq e^{-s\mu_{1,(1-\eta)\epsilon}}. \end{align} By \eqref{e5.5} and \eqref{e5.7}, \begin{equation*} \P_0[T_{\epsilon}>s] \leq \left(N^m e^{-s\mu_{1,(1-\eta)\epsilon}}\right)\wedge 1. \end{equation*} Since $\textup{diam}(\T^m)=2^{-1}m^{1/2}$, the inradius is bounded from above by $4^{-1}m^{1/2}$. Moreover, $\{\rho(s)>\epsilon\}=\{T_{\epsilon}> s\}.$ Hence \begin{equation} \label{e5.9} \E_0(\rho(s))=\int_0^{4^{-1}m^{1/2}} d\epsilon\,\P_0[T_{\epsilon} > s] \leq \int_0^{4^{-1}m^{1/2}} d\epsilon\, \left(\left(N^me^{-s\mu_{1,(1-\eta)\epsilon}}\right)\wedge 1\right). \end{equation} Let \begin{equation} \label{e5.10} N=[m^{1/2}/(2\epsilon \eta)]+1. \end{equation} Since $\epsilon \le 4^{-1}m^{1/2}$ and $\eta< 1/4$, we have \begin{equation} \label{e5.11} N\leq m^{1/2}/(\epsilon \eta). \end{equation} \noindent $\bullet$ First consider the case $m=2$. By \cite{O}, we have that \begin{equation*} \mu_{1,\epsilon} = 2\pi \left(\log(1/{\epsilon})\right)^{-1} + O((\log(1/{\epsilon}))^{-2}),\ \ \epsilon \downarrow 0. \end{equation*} Hence there exists $\epsilon_0(\eta)$ such that, for $\epsilon\leq \epsilon_0(\eta)$, \begin{equation*} \mu_{1,\epsilon} \geq 2\pi\left(\log(1/{\epsilon})\right)^{-1}(1-\eta).
\end{equation*} So, abbreviating $\epsilon_1(\eta)=(2^{1/2}/4)\wedge \epsilon_0(\eta)$, we have for $\epsilon \leq \epsilon_1(\eta)$, \begin{align} \label{e5.14} \mu_{1,(1-\eta)\epsilon} &\geq 2\pi \left(\log(1/{(1-\eta)\epsilon})\right)^{-1}(1-\eta)\nonumber\\ &= 2\pi \left(\log(1/{\epsilon})\right)^{-1} \left(1+\frac{\log(1/(1-\eta))}{\log(1/\epsilon)}\right)^{-1}(1-\eta)\nonumber\\ &\geq 2\pi \left(\log(1/{\epsilon})\right)^{-1} \left(1+\frac{2\log(1/(1-\eta))}{3\log 2}\right)^{-1}(1-\eta)\nonumber\\ &\geq 2\pi \left(\log(1/{\epsilon})\right)^{-1} \left(1-\log(1/(1-\eta))\right)(1-\eta)\nonumber\\ &\geq 2\pi \left(\log(1/{\epsilon})\right)^{-1} \left(1-\frac{\eta}{1-\eta}\right)(1-\eta)\nonumber\\ &= 2\pi \left(\log(1/{\epsilon})\right)^{-1}(1-2\eta). \end{align} Putting \eqref{e5.9}, \eqref{e5.11} and \eqref{e5.14} together, we obtain \begin{align} \label{e5.15} \E_0(\rho(s)) &\leq \int_0^{\epsilon_1(\eta)}d\epsilon \left((2(\eta\epsilon)^{-2} e^{-2\pi s(\log(1/\epsilon))^{-1}(1-2\eta)})\wedge 1\right)\nonumber\\ &\qquad + 2\int_{\epsilon_1(\eta)}^{\infty} d\epsilon\,(\eta \epsilon)^{-2} e^{-2\pi s(\log(1/\epsilon_1(\eta)))^{-1}(1-2\eta)}. \end{align} The second term in \eqref{e5.15} is bounded from above by \begin{equation*} 2\eta^{-2}\epsilon_1(\eta)^{-1}e^{-(\pi/2) s(\log(1/\epsilon_1(\eta)))^{-1}}. \end{equation*} By changing variables, $\epsilon=e^{-\theta}$, we obtain that the first integral is bounded from above by \begin{align} \label{e5.17} &2\eta^{-2}\int_0^{\infty} d\theta\,e^{-\theta} \left(e^{2\theta-2\pi s(1-2\eta)/{\theta}}\wedge 1\right)\nonumber\\ &\leq 2\eta^{-2}\int_0^{(\pi s(1-2\eta))^{1/2}} d\theta\, e^{\theta-2\pi s(1-2\eta)/{\theta}}+2\eta^{-2} \int_{(\pi s(1-2\eta))^{1/2}}^{\infty} d\theta\,e^{-\theta}\nonumber\\ &\leq 2\eta^{-2}((\pi s)^{1/2}+1)e^{-(\pi s(1-2\eta))^{1/2}}.
\end{align} It follows from (\ref{e5.15}--\ref{e5.17}) that for $\eta\in(0,1/4)$, \begin{equation*} \limsup_{s\to \infty}s^{-1/2}\log \E_0(\rho(s)) \leq -(\pi(1-2\eta))^{1/2}. \end{equation*} This proves the upper bound in Theorem \ref{inradm=2} for $m=2$ because $\eta\in(0,1/4)$ was arbitrary. \noindent $\bullet$ Next consider the case $m=3,4,\cdots$. By Theorem 1 in \cite{CF}, we have that \begin{equation*} \mu_{1,\epsilon}=\kappa_m\epsilon^{m-2}(1+o(1)), \qquad \epsilon \downarrow 0. \end{equation*} Hence there exists $\epsilon_0(\eta)$ such that, for $\epsilon\leq \epsilon_0(\eta)$, \begin{equation} \label{e5.21} \mu_{1,\epsilon} \geq \kappa_m\epsilon^{m-2}(1-\eta). \end{equation} By \eqref{e5.9}, \eqref{e5.11} and \eqref{e5.21} we have, for any $\eta \in (0,1/4)$, \begin{align} \label{e5.22} \E_0(\rho(s)) &\leq \int_0^{\min\{m^{1/2}/4,\epsilon_0(\eta)\}} d\epsilon \left(m^{m/2}(\epsilon \eta)^{-m} e^{-s\kappa_m(1-\eta)\epsilon^{m-2}}\wedge 1\right)\nonumber\\ &\qquad +\int_{\min\{m^{1/2}/4,\epsilon_0(\eta)\}}^{m^{1/2}/4} d\epsilon\, m^{m/2}(\epsilon\eta)^{-m}e^{-s\kappa_m(1-\eta)\epsilon_0(\eta)^{m-2}}. \end{align} The second term in \eqref{e5.22} is bounded from above by \begin{equation} \label{e5.23} m^{m/2}(m-1)^{-1}\eta^{-m}(\max\{4/m^{1/2},\epsilon_0(\eta)^{-1}\})^{m-1} e^{-s\kappa_m(1-\eta)\epsilon_0(\eta)^{m-2}}. \end{equation} The first term in \eqref{e5.22} is bounded from above by \begin{align} \label{e5.24} &\int_0^{\infty}d\epsilon\left(m^{m/2}(\epsilon\eta)^{-m} e^{-s\kappa_m(1-\eta)\epsilon^{m-2}}\wedge1\right)\nonumber\\ &\qquad = (1-\eta)^{-1/(m-2)}(m-2)^{-1}(s\kappa_m)^{-1/(m-2)}\int_0^{\infty} d\theta\, \theta^{(3-m)/(m-2)}(K\theta^{-m/(m-2)}e^{-\theta} \wedge 1), \end{align} where \begin{equation} \label{e5.25} K=m^{m/2}\eta^{-m}(1-\eta)^{m/(m-2)}(s\kappa_m)^{m/(m-2)}. \end{equation} Let $\theta_K$ be the unique positive root of \begin{equation} \label{e5.26} K\theta^{-m/(m-2)}e^{-\theta}=1. 
\end{equation} The integral in the right-hand side of \eqref{e5.24} equals \begin{align} \label{e5.27} &\int_0^{\theta_K} d\theta\,\theta^{(3-m)/(m-2)} + K\int_{\theta_K}^{\infty} d\theta\,\theta^{(3-2m)/(m-2)}e^{-\theta}\nonumber\\ &\qquad \leq (m-2)\theta_K^{1/(m-2)}+K\theta_K^{(3-2m)/(m-2)}e^{-\theta_K}\nonumber\\ &\qquad =(m-2)\theta_K^{1/(m-2)}+\theta_K^{(3-m)/(m-2)}. \end{align} For $K\ge e$, we have $\theta_K\ge 1$, and, by \eqref{e5.26}, \begin{equation} \label{e5.28} e^{\theta_K}= K \theta_K^{-m/(m-2)}\leq K, \qquad K\geq e. \end{equation} Hence $\theta_K\le \log K$ for $K\geq e$. It follows from (\ref{e5.22}-\ref{e5.28}) that for $s\to \infty$, \begin{equation*} \E_0(\rho(s))\le(1-\eta)^{-1/(m-2)}(\log K)^{1/(m-2)} (s\kappa_m)^{-1/(m-2)}+O(s^{-1/(m-2)}(\log s)^{(3-m)/(m-2)}). \end{equation*} Hence \begin{equation*} \limsup_{s\to \infty}\,\left(\frac{s}{\log s}\right)^{1/(m-2)} \E_0(\rho(s)) \leq (1-\eta)^{-1/(m-2)}\left(\frac{m}{(m-2)\kappa_m}\right)^{1/(m-2)}. \end{equation*} Let $\eta \downarrow 0$ to get the upper bound in Theorem \ref{inradmgeq3}. \subsection{Lower bound for $m\ge3$} To prove the lower bound in Theorem \ref{inradmgeq3} we use the following inequality in \cite {DPR}. Let $\eta \in (0,1/10]$ and $\delta \in (0,1/10]$ be arbitrary, and let, for $n \in \N$, \begin{equation} \label{e5.31} \epsilon_n=(1-\eta)^n, \end{equation} \begin{equation} \label{e5.32} v_n=(1-\eta)\kappa_m^{-1}\epsilon_n^{2-m}, \end{equation} and \begin{equation} \label{e5.33} K_n\ge \epsilon_n^{-m(1-2\delta)}. \end{equation} The very last inequality of \cite{DPR} implies that for the sequence $(\epsilon_n)$ there exists $c=c(\eta,\delta)$ such that \begin{equation*} \P_0[T_{\epsilon_n}\le(1-2\eta)v_n \log K_n]\le 4(1-\eta)^{cn}. \end{equation*} Let $\phi \in (0,1/4]$ be arbitrary. There exists $N(\eta,\delta,\phi)\in \N$ such that for $n\ge N(\eta,\delta,\phi)$, \begin{equation*} \P_0[T_{\epsilon_n}\le(1-2\eta)v_n \log K_n]\le \phi. 
\end{equation*} that is, \begin{equation*} \P_0[T_{\epsilon_n}\ge(1-2\eta)v_n \log K_n]\ge 1-\phi. \end{equation*} This, together with (\ref{e5.32}--\ref{e5.33}), gives that \begin{equation} \label{e5.37} \P_0[T_{\epsilon_n}\ge C m\kappa_m^{-1}\epsilon_n^{2-m} \log (\epsilon_n^{-1})]\ge 1-\phi, \end{equation} where $C=(1-2\eta)(1-\eta)(1-2\delta)$. We now choose $n=n(s,\eta)\in \Z$ such that \begin{equation} \label{e5.38} (1-\eta)^{n-1}\ge \left(C(m-2)^{-1}m\kappa_m^{-1}\frac{\log s}{s}\right)^{1/(m-2)}\ge(1-\eta)^n. \end{equation} Then $n\in \N$ and $n\ge N(\eta,\delta,\phi)$ for all $s$ large enough. By \eqref{e5.31} and \eqref{e5.38} \begin{equation}\label{e5.39} \epsilon_n^{2-m}=(1-\eta)^{n(2-m)}\ge C^{-1}(m-2)m^{-1}\kappa_m\frac{s}{\log s}. \end{equation} On the other hand, by \eqref{e5.31} and \eqref{e5.38} we have that \begin{equation} \label{e5.40} \log(\epsilon_n^{-1})\ge (m-2)^{-1}\log\left(C^{-1}(m-2)m^{-1}\kappa_m\frac{s}{\log s}\right). \end{equation} By (\ref{e5.39}--\ref{e5.40}), \begin{equation} \label{e5.41} C m\kappa_m^{-1}\epsilon_n^{2-m} \log (\epsilon_n^{-1}) \ge s+h(s), \end{equation} where \begin{equation*} h(s)=\frac{s}{\log s}\log \left(\frac{(m-2)m^{-1}\kappa_m}{\log s}\right). \end{equation*} By the definition of $n$ in \eqref{e5.38} and by \eqref{e5.37} and \eqref{e5.41}, we have that for all $s$ sufficiently large, \begin{equation*} \P_0[T_{\epsilon_n}\ge s+h(s)]\ge 1-\phi. \end{equation*} Hence, by the first equality in \eqref{e5.9}, \eqref{e5.31} and \eqref{e5.38}, \begin{align} \label{e5.44} \E_0(\rho(s+h(s)))&=\int_0^{4^{-1}m^{1/2}} d\epsilon\,\P_0[T_{\epsilon} > s+h(s)]\nonumber \\ &\ge\int_0^{\epsilon_n} d\epsilon\,\P_0[T_{\epsilon} > s+h(s)]\nonumber \\ &\ge (1-\phi)\epsilon_n\nonumber \\ & \ge (1-\phi)(1-\eta) \left(C(m-2)^{-1}m\kappa_m^{-1}\frac{\log s}{s}\right)^{1/(m-2)}. \end{align} It remains to show that \eqref{e5.44} implies the lower bound in Theorem \ref{inradmgeq3}. We abbreviate $\sigma=s+h(s)$.
Since $(\log \log s)/ \log s\rightarrow 0$ as $s \rightarrow \infty$, we have that $|h(s)|\le s/2$ for all $s$ sufficiently large. Hence $\sigma \ge s/2$ for all such $s$, and $s=\sigma-h(s)\le \sigma-h(2\sigma)$. It follows that for all such $s$, \begin{equation*} \E_0(\rho(\sigma))\ge (1-\phi)(1-\eta)\left(C(m-2)^{-1}m\kappa_m^{-1}\frac{\log \sigma}{\sigma-h(2\sigma)} \right)^{1/(m-2)}. \end{equation*} In particular, it follows that \begin{align}\label{e5.46} \liminf_{\sigma \rightarrow \infty}&\ \E_0(\rho(\sigma))\left(\frac{\sigma}{\log \sigma}\right)^{1/(m-2)}\nonumber \\ & \ge(1-\phi)(1-\eta)\left(C(m-2)^{-1}m\kappa_m^{-1}\liminf_{\sigma \rightarrow \infty}\frac{\sigma}{\sigma-h(2\sigma)}\right)^{1/(m-2)}\nonumber \\ & =(1-\phi)(1-\eta)\left(C(m-2)^{-1}m\kappa_m^{-1}\right)^{1/(m-2)}. \end{align} Letting first $\phi \downarrow 0$, then $\delta \downarrow 0$ and finally $\eta \downarrow 0$, we conclude from \eqref{e5.46} that \begin{equation*} \liminf_{\sigma \rightarrow \infty} \E_0(\rho(\sigma))\left(\frac{\sigma}{\log \sigma}\right)^{1/(m-2)} \geq \left((m-2)^{-1}m\kappa_m^{-1}\right)^{1/(m-2)}. \end{equation*} This proves the lower bound in Theorem \ref{inradmgeq3}. \subsection{Lower bound for $m=2$} To prove the lower bound in Theorem \ref{inradm=2} we use the following inequality from \cite{DPRZ}. Let $\delta \in (0,1/10]$ be arbitrary, fix $\gamma \in (0,1-\delta)$ and let \begin{equation*} \epsilon_n=2n^{\gamma-1}. \end{equation*} It was shown in \cite{DPRZ} that there exists $N_0(\gamma,\delta)\in \N$ such that for all $n\ge N_0(\gamma,\delta)$, \begin{equation*} \P_0[T_{\epsilon_n}\ge \pi^{-1}(1-\gamma-\delta)^2(\log n)^2]\ge 1-\delta. \end{equation*} We let $n=n(s,\gamma,\delta)\in\{2,3,\cdots\}$ be such that \begin{equation} \label{e5.50} \pi^{-1}(1-\gamma-\delta)^2(\log n)^2\ge s \ge \pi^{-1}(1-\gamma-\delta)^2(\log (n-1))^2.
\end{equation} It follows that, for all $s$ sufficiently large and $n\ge N_0(\gamma,\delta)$, \begin{equation*} \P_0[T_{\epsilon_n}\ge s]\ge 1-\delta. \end{equation*} In particular, for all $s$ sufficiently large we have that \begin{align} \label{e5.52} \E_0(\rho(s))&=\int_0^{4^{-1}\sqrt2} d\epsilon\, \P_0[T_{\epsilon}\ge s] \ge \int_0^{\epsilon_n} d\epsilon\, \P_0[T_{\epsilon}\ge s]\nonumber \\ &\ge\int_0^{\epsilon_n} d\epsilon\,\P_0[T_{\epsilon_n}\ge s]\ge \epsilon_n(1-\delta) = 2n^{\gamma-1}(1-\delta). \end{align} By the second inequality in \eqref{e5.50}, we have that \begin{equation} \label{e5.53} n\le 1+e^{(\pi s)^{1/2}/(1-\gamma-\delta)}. \end{equation} Since $1\le e^{(\pi s)^{1/2}/(1-\gamma-\delta)}$, we have by (\ref{e5.52}--\ref{e5.53}) that \begin{equation*} \E_0(\rho(s))\ge 2^{\gamma}(1-\delta)e^{(\gamma-1)(\pi s)^{1/2}/(1-\gamma-\delta)}. \end{equation*} Hence \begin{equation} \label{e5.55} \liminf_{s \rightarrow \infty}s^{-1/2}\log \E_0(\rho(s))\ge(\gamma-1)\pi^{1/2}/(1-\gamma-\delta). \end{equation} Letting $\delta \downarrow 0$ we obtain from \eqref{e5.55} that \begin{equation*} \liminf_{s \rightarrow \infty}s^{-1/2}\log \E_0(\rho(s))\ge -\pi^{1/2}. \end{equation*} This proves the lower bound in Theorem \ref{inradm=2}. \end{document}
\begin{document} \title{Stable hypersurfaces with constant higher order mean curvature} \begin{abstract} We propose a notion of stability for capillary hypersurfaces with constant higher order mean curvature and we generalize some results of the classical stability theory for CMC capillary hypersurfaces. \end{abstract} \unmarkedfntext{Keywords: Isometric immersions, free boundary, capillary, higher order mean curvature} \unmarkedfntext{2000 Mathematics Subject Classification: 53C42, 53A10.} \unmarkedfntext{The authors were partially supported by CAPES.} \section{Introduction} \noindent Let $M^{n+1}$ be an oriented Riemannian manifold and $\varphi : \Sigma^n \rightarrow M$ be an oriented hypersurface. The higher order mean curvature of order $k$ of the hypersurface, $H_k$, is defined as the normalized $k$-th symmetric function of the principal curvatures of $\varphi : \Sigma^n \rightarrow M$. We recall that $H=H_1$ is the mean curvature. Consider a closed domain $\Omega\subset M$ with smooth boundary. In analogy with the constant mean curvature case (CMC), we say that a compact $H_k$-hypersurface in $\Omega$ is capillary if it has non-empty boundary and its boundary meets $\partial \Omega$ at a constant angle. In the particular case where the angle is $\pi/2$, the capillary $H_k$-hypersurface is called free boundary. In this paper, we are interested in stability questions involving capillary $H_k$-hypersurfaces in $\Omega$. The stability of minimal and CMC hypersurfaces has drawn the attention of many mathematicians over the years. Stable hypersurfaces in this case are those for which the second variation of the area is non-negative for a suitable class of variations. Whether or not $\Sigma$ has a boundary $\partial\Sigma$, together with the specific constraints on the variations, determines the kind of stability problem we deal with.
The stability results for $H_1$-hypersurfaces in the literature range from those for closed (without boundary) hypersurfaces to those for capillary hypersurfaces, which include free boundary hypersurfaces as a particular case (see \cite{barbosa1984stability, lucas1988stability, ros1997stability, souam1997stability, wang2019uniqueness}). We recall that a closed $H_k$-hypersurface, $k>1$, of a space form is a critical point of a modified area functional for volume-preserving variations and it inherits the concept of stability from this variational problem (see \cite{marques1997stability}). We point out that a similar variational characterization of $H_k$-hypersurfaces in a general Riemannian manifold is not known. Despite this, in \cite{elbert2019note}, the second author and B. Nelli generalized the notion of stability for closed (or fixed boundary) $H_k$-hypersurfaces of a general Riemannian manifold for $k>1$. Inspired by the approach of Alaee, Lesourd and Yau in \cite{alaee2021stable} when dealing with capillary marginally outer trapped surfaces (MOTS), we propose a stability theory for capillary $H_k$-hypersurfaces, $k>1$, that generalizes the capillary CMC ($k=1$) stability theory (see \cite{ros1997stability, ros1995stability, souam1997stability}). We also notice that when the hypersurface has empty or fixed boundary, we recover the stability theory proposed in \cite{elbert2019note}, and a fortiori, the classical definition of stability for a CMC hypersurface with empty or fixed boundary. It is worth mentioning that for $H_k$-hypersurfaces, $k>1$, we give two different notions of stability, the $k$-stability and the symmetric $k$-stability, and that for (simply connected) space forms these two notions coincide. The paper is organized as follows. The definition of $k$-stability is given in Section \ref{tres}.
In Section \ref{quatro}, we deal with capillary $H_k$-hypersurfaces of space forms and we prove: \ {\it Capillary geodesic caps (see Section \ref{quatro} for the precise definition) supported on a totally umbilical hypersurface $\partial\Omega$ of a space form are $k$-stable.} \ In Section \ref{cinco}, we give some results for free boundary $H_k$-hypersurfaces supported on a geodesic sphere of a space form when $H_k=0$ and on a totally geodesic hypersurface of a space form. In Section \ref{seis}, we discuss the symmetric $k$-stability theory, and finally, in Section \ref{sete}, we address the stability of cylinders in $M \times \mathbb{R}$. It is natural to ask if it is possible to generalize other stability results, established for $H_1$-hypersurfaces, for $H_k$-hypersurfaces. We quote, in particular, some results for capillary $H_1$-hypersurfaces supported on special hypersurfaces $\partial\Omega$ of space forms, where $\partial\Omega$ is: \begin{itemize} \item a geodesic sphere \cite{wang2019uniqueness}; \item a hyperplane or a slab \cite{ainouz2016stable, souam2021stable}; \item a wedge \cite{lopez2014capillary, choe2015stable, pyo2019rigidity}; \item the boundary of a region bounded by a union of hyperplanes of $\mathbb{R}^{n+1}$ \cite{li2017stability, souam2021stable2}. \end{itemize} For some of these questions we have partial results and we hope we can address them in a forthcoming work. \section{Preliminaries} Let $\left(M^{n+1},g\right)$ be an oriented Riemannian manifold and $\varphi : \Sigma^n \rightarrow M$ be an oriented hypersurface with unit normal vector field $\eta$ in the normal bundle $N\Sigma$. 
Its second fundamental form $\emph{II}$, scalar second fundamental form $\emph{II}_\eta$ and Weingarten operator $A=\left(\emph{II}_\eta\right)^\flat$ are defined, respectively, as \begin{eqnarray*} \emph{II}\left(X,Y\right) &=& \left(\overline\nabla_XY\right)^\perp=\left<\overline\nabla_XY,\eta\right>\eta=\emph{II}_\eta\left(X,Y\right)\eta \\ \left<A(X),Y\right> &=& \emph{II}_\eta\left(X,Y\right)=\left<-\overline\nabla_X\eta,Y\right>, \end{eqnarray*} where $X,Y \in \Gamma(T\Sigma)$ and $\overline\nabla $ is the Levi-Civita connection of $M$. Let $\left\{\kappa_1(p),...,\kappa_n(p)\right\}$ be the principal curvatures of the hypersurface $\varphi$ at $p$. Let $\sigma_r$ be the $r$-th symmetric elementary polynomial \begin{equation*} \sigma_r(x_1,...,x_n)=\begin{dcases*} 1, & \quad $r=0$ \\ \sum_{1 \leq i_1<...<i_r \leq n} x_{i_1}...x_{i_r}, & \quad $r \in \{1,...,n\}$ \\ 0, & \quad $r >n.$ \end{dcases*}.\end{equation*} The normalized mean curvature of order $r$, $H_r$, and the non-normalized mean curvature of order $r$, $S_r$, of $\Sigma$ in $p$ are defined by \begin{equation*} H_r(p)=\binom{n}{r}^{-1}S_r(p)=\binom{n}{r}^{-1}\sigma_r(\kappa_1(p),...,\kappa_n(p)). \end{equation*} The functions $S_r$ can also be deduced from the equation \begin{equation}\label{1} \det\left(tI-A\right)=\sum_{r=0}^n \left(-1\right)^rS_rt^{n-r}. \end{equation} The hypersurface is said to have constant mean curvature of order $r$ if $H_r$ is constant over $\Sigma$; when this happens, $\Sigma$ is called an \textbf{$H_r$-hypersurface}. The Newton transformations $P_r$ associated to the $r$-mean curvatures $H_r$ are defined by \begin{equation*} P_r=\begin{dcases*} I, & \quad $r=0$ \\ S_rI-AP_{r-1}, & \quad $r \geq 1$ \end{dcases*}.\end{equation*} Since $A_p$ is self-adjoint for all $p \in \Sigma$, the Newton transformations are self-adjoint as well and their eigenvectors are the same as those of $A$. Also, since \eqref{1} holds, it follows from the Cayley-Hamilton Theorem that $P_n=0$. 
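To illustrate these definitions, consider a totally umbilical point, i.e., a point where $A=\lambda I$, so that $\kappa_1=...=\kappa_n=\lambda$; this worked example is not needed in the sequel, but it follows directly from the definitions above. In this case \begin{equation*} S_r=\binom{n}{r}\lambda^r, \qquad H_r=\lambda^r, \qquad P_r=\binom{n-1}{r}\lambda^r I, \end{equation*} where the expression for $P_r$ follows by induction from the recursion $P_r=S_rI-AP_{r-1}$ together with the identity $\binom{n}{r}-\binom{n-1}{r-1}=\binom{n-1}{r}$. In particular, $P_n=\binom{n-1}{n}\lambda^nI=0$, as predicted by the Cayley-Hamilton Theorem.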
If $e_1,...,e_n$ denote the eigenvectors of $A$, let $S_r\left(A_i\right)$ be the symmetric elementary polynomial of order $r$ associated to $A_i=A\vert_{e_i^\perp}$. Then we have the following properties, whose proof can be found in \cite[Lemma 2.1]{marques1997stability}: \begin{lema}\label{002.1} For each $r \in \{1,...,n-1\}$, \begin{enumerate}[(i)] \item $P_re_i=S_r(A_i)e_i$ \item $\tr\,(P_r)=\sum_{i=1}^n S_r(A_i)=(n-r)S_r$ \item $\tr\,(P_rA)=\sum_{i=1}^n \kappa_iS_r(A_i)=(r+1)S_{r+1}$ \item $\tr\,(P_rA^2)=\sum_{i=1}^n \kappa_i^2S_r(A_i)=S_1S_{r+1}-(r+2)S_{r+2}.$ \end{enumerate} \end{lema} The next Lemma provides some useful inequalities involving the functions $H_r$. \begin{lema}\label{002.2} Suppose that $\kappa_1,...,\kappa_n \geq 0$. Then \begin{enumerate}[(i)] \item $H_{r-1}H_{r+1} \leq H_r^2$, $r \in \{1,...,n-1\}$ \item $H_1 \geq H_2^{1/2} \geq ... \geq H_n^{1/n}$ \item $H_1H_{r+1} \geq H_{r+2}$, $r \in \{0,...,n-2\}$ \end{enumerate} and the previous inequalities are identities if and only if $\kappa_1=...=\kappa_n$. \end{lema} The proofs of (i) and (ii) can be found in \cite[p. 52]{hardy1952inequalities}, whereas (iii) is a direct consequence of (ii). \ In a general Riemannian manifold $(M,g)$ with Levi-Civita connection $\overline\nabla$, if $\phi$ is a pointwise symmetric $(2,0)$-tensor in $M$, the Cheng-Yau operator of $f \in C^\infty(M)$ is defined by \begin{equation*} \Box f=\tr\left(\phi\left(\hess f\right)^\flat\right), \end{equation*} where $\hess f$ is the Hessian of $f$ in $M$ and $\left(\hess f\right)^\flat$ is the metric $(1,1)$-tensor field on $M$ equivalent to $\hess f$. After some basic computations, the Cheng-Yau operator can be written as $$\Box f=\dive\left(\phi\overline\nabla f\right)-\left<\dive\phi,\overline\nabla f\right>,$$ where $\dive\phi:=\tr\left(\overline\nabla\phi\right)$. The operator $\phi$ is said to be divergence free if $\dive\phi=0$. In \cite[Theorem 4.1]{rosenberg1993hypersurfaces}, H.
Rosenberg proved that $P_r$ is divergence free when $M$ has constant sectional curvature (see also \cite[Corollary 3.7]{elbert2002constant} for the case where $r=1$ and $M$ is Einstein). When considering an oriented hypersurface $\varphi : \Sigma \rightarrow M$ with shape operator $A$, the $L_r$-operator of $\Sigma$ is defined as the Cheng-Yau operator for the Newton transformation $P_r$, i.e., \begin{equation*} L_rf=\tr\left(P_r\left(\hess f\right)^\flat\right), \quad f \in C^\infty(\Sigma). \end{equation*} Here, we say that $-L_r$ is a second-order elliptic differential operator when $P_r$ is positive definite on each point of $\Sigma$. The next Lemma provides conditions that guarantee the ellipticity of $L_r$. \begin{lema}\label{002.3} With the same notation: \begin{enumerate}[(i)] \item If $H_2>0$ then, after a choice of orientation on $\varphi$, $P_1$ is positive definite. \item If $H_{r+1}>0$ for $r>1$ and if there exists a point $p_0 \in \Sigma$ such that every principal curvature of $\Sigma$ in $p_0$ is positive then $P_j$ is positive definite for all $1 \leq j \leq r$. \item If $H_{r+1}=0$ and the rank of $A$ is greater than $r$ then $P_r$ is definite. \end{enumerate} \end{lema} The proofs of these statements can be found in \cite[Lemma 3.10]{elbert2002constant}, \cite[Proposition 3.2]{cheng2005embedded} and \cite[Corollary 2.3]{Hounie1995maximum}, respectively. Let $\Omega \subseteq M$ be a closed domain with smooth boundary $\partial\Omega$ and assume that $\varphi : \Sigma \rightarrow M$ is an oriented hypersurface such that $\varphi(\Sigma) \subseteq \Omega$ and $\varphi(\partial\Sigma) \subseteq \partial\Omega$. 
Let $\nu \in \Gamma\left(T\Sigma\vert_{\partial\Sigma}\right)$ be the unit outward conormal vector field on $\partial\Sigma$ and let $\overline\nu \in \Gamma\left(T\partial\Omega\vert_{\partial\Sigma}\right)$ and $\overline\eta \in \Gamma\left(TM\vert_{\partial\Omega}\right)$ be the unit normal vector fields associated to the immersions $\varphi\vert_{\partial\Sigma} : \partial\Sigma \rightarrow \partial\Omega$ and $\iota_{\partial\Omega} : \partial\Omega \hookrightarrow M$, respectively, such that $\left\{\nu,\eta\right\}$ has the same orientation as $\left\{\overline\nu,\overline\eta\right\}$ at each point of $\varphi(\partial\Sigma)$. If $\theta$ denotes the angle between $\nu$ and $\overline\nu$, then \begin{equation}\label{0002}\begin{dcases*} \nu=\cos\theta\,\overline\nu+\sin\theta\,\overline\eta \\ \eta=-\sin\theta\,\overline\nu+\cos\theta\,\overline\eta \end{dcases*}\end{equation} or, conversely, \begin{equation}\label{0092}\begin{dcases*} \overline\nu=\cos\theta\,\nu-\sin\theta\,\eta \\ \overline\eta=\sin\theta\,\nu+\cos\theta\,\eta \end{dcases*}. \end{equation} An $H_r$-hypersurface $\varphi : \Sigma \rightarrow \Omega \subseteq M$ is said to be \textbf{capillary} if the contact angle $\theta$ between $\partial\Sigma$ and $\partial\Omega$ is constant. When $\theta=\frac{\pi}{2}$, $\varphi$ is called a \textbf{free boundary hypersurface}. When $\varphi : \Sigma \rightarrow \Omega \subseteq M$ is a hypersurface with free boundary in $\Omega$, \eqref{0002} implies that $\nu=\overline\eta$ and $\eta=-\overline\nu$. The following result will be used throughout this article. \begin{lema}[{\cite[Lemma 2.2]{ainouz2016stable}}]\label{002.4} Suppose $\iota_{\partial\Omega}$ is a totally umbilical immersion into $M$ and that $\varphi$ is a capillary $H_r$-hypersurface in $M$. Then the unit outward conormal vector field $\nu \in \Gamma\left(T\Sigma\vert_{\partial\Sigma}\right)$ is a principal direction of $\varphi$.
\end{lema} \section{Stability of capillary $H_{r+1}$-hypersurfaces}\label{tres} The notion of stability for $H_{r+1}$-hypersurfaces of a Riemannian manifold with empty (or fixed) boundary is well established in the literature. The case $r=0$ (mean curvature) was settled more than 40 years ago (see \cite{barbosa1984stability, lucas1988stability}). In 1997, L. Barbosa and G. Colares \cite{marques1997stability} addressed the empty (or fixed) boundary case for $H_{r+1}$-hypersurfaces of space forms and $r>0$. More than 20 years later, in a recent paper \cite{elbert2019note}, the second author and B. Nelli generalized this notion of stability for $H_{r+1}$-hypersurfaces of a general Riemannian manifold for $r>0$. For the capillary case the picture is more complex, since a stability theory is available only for mean curvature hypersurfaces ($r=0$) of space forms (see \cite{ros1995stability, souam1997stability}). The natural generalization of the variational problem to more general ambient spaces and to $r>0$ has not made progress since the late 1990s. Inspired by the approach of \cite{alaee2021stable}, we propose a notion of stability for capillary $H_{r+1}$-hypersurfaces that generalizes the works \cite{ros1995stability} and \cite{souam1997stability} in two directions: we deal with $r>0$ and with a more general ambient space. A variation of $\varphi$ is a smooth function $\Phi : \Sigma \times (-\varepsilon,\varepsilon) \rightarrow M$ such that, for each $t \in (-\varepsilon,\varepsilon)$, $\varphi_t=\Phi\vert_{\Sigma \times \{t\}}$ is an isometric immersion and $\varphi_0=\varphi$. The pair $(\Sigma,\varphi_t^*g)$ will be denoted by $\Sigma_t$.
The variational field of $\Phi$ in $\varphi_t$ is defined by $$\xi_t(p)=\left.\Phi_*\frac{\partial}{\partial t}\right\vert_{(p,t)} \in \Gamma\left(TM\vert_{\varphi_t(\Sigma)}\right).$$ If $\eta_t \in \Gamma(N\Sigma)$ is the unit normal vector field of $\varphi_t$, the support function of $\Phi$ at $t$ is defined by $$f_t=\left<\xi_t,\eta_t\right> \in C^\infty(\Sigma).$$ Since $\varphi_t : \Sigma \rightarrow M$ is an oriented hypersurface, one can define its second fundamental form $\emph{II}_t$, its scalar second fundamental form $\left(\emph{II}_t\right)_{\eta_t}$ and its Weingarten operator $A_t$. Also, we set $\overline{R}_{\eta_t}\left(X\right):=\overline{R}\left(\eta_t,X\right)\eta_t$, where $\overline{R}$ is the Riemann curvature tensor of $M$, defined by $$\overline{R}(X,Y)Z=\overline\nabla_Y\overline\nabla_XZ-\overline\nabla_X\overline\nabla_YZ+\overline\nabla_{[X,Y]}Z, \quad X,Y,Z \in \Gamma(TM).$$ If $S_{r+1}(t)$ denotes the non-normalized mean curvature of $(r+1)$-th order associated to the immersion $\varphi_t$, its variation is given by \begin{equation}\label{0004} S_{r+1}^\prime(t)=\left(L_r\right)_tf_t+\left(S_1(t)S_{r+1}(t)-(r+2)S_{r+2}(t)\right)f_t+\tr_{\Sigma_t}\left(\left(P_r\overline{R}_\eta\right)_t\right)f_t+\xi_t^\top\left(S_{r+1}(t)\right), \end{equation} where $\left(L_r\right)_t$ is the $L_r$-operator of the immersion $\varphi_t$ and $\left(P_r\overline{R}_\eta\right)_t:=\left(P_r\right)_t \circ \overline{R}_{\eta_t}$. A proof of \eqref{0004} can be found in \cite[Proposition 3.2]{elbert2002constant}. The enclosed volume between $\Sigma$ and $\Sigma_t$ is defined as $\mathcal{V}(t)=\int_{\Sigma \times [0,t]} \Phi^*d\mu_M$, with $d\mu_M$ being the volume form of $(M,g)$. A variation $\Phi$ is volume-preserving if $\mathcal{V}(t)=\mathcal{V}(0)$ for all $t \in (-\varepsilon,\varepsilon)$.
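We remark that, for $r=0$, formula \eqref{0004} recovers the classical evolution equation for the mean curvature: since $P_0=I$, the operator $\left(L_0\right)_t$ is the Laplacian $\Delta_t$ of $\Sigma_t$ and $S_1(t)^2-2S_2(t)=\vert A_t\vert^2$, so that \begin{equation*} S_1^\prime(t)=\Delta_tf_t+\left(\vert A_t\vert^2+\tr_{\Sigma_t}\left(\overline{R}_{\eta_t}\right)\right)f_t+\xi_t^\top\left(S_1(t)\right). \end{equation*}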
It is known that $$\mathcal{V}^\prime(0)=\int_\Sigma f\,d\mu_\Sigma,$$ where $f=\left<\xi,\eta\right> \in C^\infty(\Sigma)$ and $d\mu_\Sigma$ is the volume form of $\left(\Sigma,\varphi^*g\right)$. Thus, a variation $\Phi$ is volume-preserving if and only if $\int_\Sigma f\,d\mu_\Sigma=0$. When dealing with stability questions we will be interested in the cases where $P_r$ is definite. For simplicity, we will assume without loss of generality that the $H_{r+1}$-hypersurface has a positive definite Newton tensor at each point of $\Sigma$. With slight modifications, we can also address the case where $P_r$ is negative definite (see Remark \ref{004.5}). \begin{defn}\label{004.1} We say that an $H_{r+1}$-hypersurface $\varphi : \Sigma \rightarrow M$ is \textbf{positive definite} if $P_r$ is positive definite at each point $p \in \Sigma$. \end{defn} A variation $\Phi$ of a hypersurface $\varphi : \Sigma \rightarrow \Omega \subseteq M$ is called admissible if $\varphi_t(\inte\Sigma) \subseteq \inte\Omega$ and $\varphi_t(\partial\Sigma) \subseteq \partial\Omega$ for any $t \in (-\varepsilon,\varepsilon)$, where $\varphi_t=\Phi\vert_{\Sigma \times \{t\}}$. If $\Phi$ is an admissible variation of $\varphi$, then $\xi\vert_{\partial\Sigma} \in \Gamma\left(T\partial\Omega\vert_{\partial\Sigma}\right)$.
If $\Sigma$ is a capillary $H_{r+1}$-hypersurface supported on $\partial\Omega$ with contact angle $\theta \in (0,\pi)$ and $\Phi$ is a volume-preserving admissible variation of $\varphi$, define the functional \begin{equation}\label{0005} \mathcal{F}_{r,\theta}[\Sigma_t]=-\int_\Sigma S_{r+1}(t)\left<\xi_t,\eta_t\right>\,d\mu_{\Sigma_t}+\int_{\partial\Sigma} \left<\xi_t,(P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu)_t\right>\,d\mu_{\partial\Sigma_t}, \end{equation} where $d\mu_{\Sigma_t}$ and $d\mu_{\partial\Sigma_t}$ denote the volume forms of $\Sigma_t$ and $\partial\Sigma_t=\left(\partial\Sigma,\left(\varphi_t\vert_{\partial\Sigma}\right)^*g\right)$, respectively. Note that when we set $r=0$, (\ref{0005}) is the first variation formula obtained in \cite{ros1995stability, souam1997stability}. It can be proved (see \cite{lucas1988stability} and \cite{ros1995stability}) that for each smooth function $f$ on $\Sigma$ there exists an admissible normal variation of $\varphi$ with variation vector field $f\eta$. If, in addition, $f$ satisfies $\int_\Sigma f\,d\mu_\Sigma=0$, the variation is volume-preserving. In this article we will assume that $\partial\Omega$ is a totally umbilical hypersurface of $M$.
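For instance, in the free boundary case $\theta=\frac{\pi}{2}$ we have $\cos\theta=0$, and \eqref{0005} reduces to \begin{equation*} \mathcal{F}_{r,\pi/2}[\Sigma_t]=-\int_\Sigma S_{r+1}(t)\left<\xi_t,\eta_t\right>\,d\mu_{\Sigma_t}+\int_{\partial\Sigma} \left<\xi_t,\left(P_r\nu\right)_t\right>\,d\mu_{\partial\Sigma_t}. \end{equation*}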
\begin{teo}\label{004.2} If $\partial\Omega$ is totally umbilical and $\Phi$ is an admissible volume-preserving variation of a positive definite capillary $H_{r+1}$-hypersurface $\varphi : \Sigma\rightarrow \Omega \subseteq M$ supported on $\partial\Omega$, then \begin{multline}\label{0006} \left.\frac{\partial}{\partial t}\mathcal{F}_{r,\theta}\left[\Sigma_t\right]\right\vert_{t=0}=-\int_\Sigma f\left(L_rf+\tr\left(P_r\left(A^2+\overline{R}_\eta\right)\right)f\right)\,d\mu_\Sigma+\\+\int_{\partial\Sigma} \left\vert{P_r\nu}\right\vert\,f\left(\frac{\partial f}{\partial\nu}+\left(\csc\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\overline\nu,\overline\nu)-\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu)\right)f\right)\,d\mu_{\partial\Sigma}, \end{multline} where $f=\left<\xi,\eta\right> \in C^\infty(\Sigma)$ is the support function of $\Phi$ at $t=0$ and $\emph{II}_\Sigma$ and $\emph{II}_{\partial\Omega}$ are the second fundamental forms of $\varphi$ and $\iota_{\partial\Omega} : \partial\Omega \hookrightarrow \Omega$, respectively. \end{teo} A proof of Theorem \ref{004.2} will be given in the Appendix. \begin{defn}\label{004.3} A positive definite capillary $H_{r+1}$-hypersurface $\varphi : \Sigma \rightarrow \Omega \subseteq M$ supported on $\partial\Omega$ with contact angle $\theta \in (0,\pi)$ is \textbf{$r$-stable} if $\left.\frac{\partial}{\partial t}\mathcal{F}_{r,\theta}\left[\Sigma_t\right]\right\vert_{t=0} \geq 0$ for any volume-preserving admissible variation $\Phi$ of $\varphi$. If the inequality holds for all admissible variations of $\varphi$, $\Sigma$ is said to be \textbf{strongly $r$-stable}.
\end{defn} Associated to \eqref{0006} we have the following eigenvalue problem: \begin{equation}\label{0007}\begin{dcases*} T_rf=-L_rf-q_rf=\lambda f, & \quad $\text{in}~\Sigma$ \\ \frac{\partial f}{\partial\nu}+\alpha_\theta f=0, & \quad $\text{on}~\partial\Sigma$ \end{dcases*},\end{equation} where $q_r=\tr\left(P_r\left(A^2+\overline{R}_\eta\right)\right) \in C^\infty(\Sigma)$ and $\alpha_\theta=\csc\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\overline\nu,\overline\nu)-\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu) \in C^\infty(\partial\Sigma)$. \ As we mentioned in the introduction, in a general ambient space, $T_r$ is not a ``divergence form'' operator and thus, \eqref{0006} gives rise to a non-symmetric bilinear form. Although $T_r$ is not self-adjoint, there exists a real eigenvalue $\lambda_p=\lambda_p(T_r)$ (see \cite[Theorem 3.1]{li2017eigenvalue}) called the \textbf{principal eigenvalue} such that any other eigenvalue $\lambda \in \mathbb{C}$ satisfies $\re\lambda \geq \lambda_p$. Also, the associated eigenspace has dimension equal to one and the associated eigenfunction is (strictly) positive. \ The proposition below gives useful characterizations of strong $r$-stability. \begin{prop}\label{004.4} Let $\varphi : \Sigma \rightarrow \Omega \subseteq M$ be a positive definite capillary $H_{r+1}$-hypersurface supported on $\partial\Omega$. The following statements are equivalent: \begin{enumerate}[(i)] \item $\Sigma$ is strongly $r$-stable. \item The principal eigenvalue $\lambda_p(T_r)$ of \eqref{0007} is non-negative. \item There exists a positive function $f_0 \in C^\infty(\Sigma)$ such that $\frac{\partial f_0}{\partial\nu}+\alpha_\theta f_0=0$ on $\partial\Sigma$ and $T_rf_0 \geq 0$ in $\Sigma$. \end{enumerate} \end{prop} \begin{proof} Assume that (i) is true and let $f_0 \in C^\infty(\Sigma)$ be a (strictly) positive eigenfunction of \eqref{0007} associated to $\lambda_p$.
Then $$\lambda_p(T_r)\int_\Sigma f_0^2\,d\mu_\Sigma=\int_\Sigma f_0T_rf_0\,d\mu_\Sigma+\int_{\partial\Sigma} \left\vert{P_r\nu}\right\vert f_0\left(\frac{\partial f_0}{\partial\nu}+\alpha_\theta f_0\right)\,d\mu_{\partial\Sigma} \geq 0,$$ which proves (ii). Now assume (ii) is true and let $f_0>0$ denote an eigenfunction associated to the principal eigenvalue $\lambda_p$ of \eqref{0007}. Thus $T_rf_0=\lambda_pf_0 \geq 0$ in $\Sigma$ and this proves (iii). Finally assume (iii) is true and let $f_0 \in C^\infty(\Sigma)$ be a positive function such that $\frac{\partial f_0}{\partial\nu}+\alpha_\theta f_0=0$ on $\partial\Sigma$ and $T_rf_0 \geq 0$ in $\Sigma$. Now, take $f \in C^\infty(\Sigma)$, set $\tilde{f}=\frac{f}{f_0} \in C^\infty(\Sigma)$, so that $f=\tilde{f}f_0$, and consider an admissible normal variation of $\varphi$ with variation vector field $f\eta$. Then by using (\ref{0006}) and Lemma \ref{002.4} we obtain \begin{eqnarray*} \left.\frac{\partial}{\partial t}\mathcal{F}_{r,\theta}\left[\Sigma_t\right]\right\vert_{t=0} &=& -\int_\Sigma f\left(L_rf+q_rf\right)\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert f\left(\frac{\partial f}{\partial\nu}+\alpha_\theta f\right)\,d\mu_{\partial\Sigma} \\ &=& -\int_\Sigma \tilde{f}f_0\left(L_r(\tilde{f}f_0)+q_r\tilde{f}f_0\right)\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert \tilde{f}f_0\left(\frac{\partial (\tilde{f}f_0)}{\partial\nu}+\alpha_\theta\tilde{f}f_0\right)\,d\mu_{\partial\Sigma} \\ &=& -\int_\Sigma \tilde{f}^2f_0\left(L_rf_0+q_rf_0\right)+2\tilde{f}f_0\left<P_r\nabla\tilde{f},\nabla f_0\right>+\tilde{f}f_0^2L_r\tilde{f}\,d\mu_\Sigma+\\ && +\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert \tilde{f}f_0\left(\tilde{f}\left(\frac{\partial f_0}{\partial\nu}+\alpha_\theta f_0\right)+f_0\frac{\partial\tilde{f}}{\partial\nu}\right)\,d\mu_{\partial\Sigma} \\ &\geq& -\int_\Sigma \tilde{f}f_0^2L_r\tilde{f}+2\tilde{f}f_0\left<P_r\nabla\tilde{f},\nabla
f_0\right>\,d\mu_\Sigma+\int_{\partial\Sigma} \left<\tilde{f}f_0^2\nabla\tilde{f},P_r\nu\right>\,d\mu_{\partial\Sigma} \\ &=& \int_\Sigma \dive\left(\tilde{f}f_0^2P_r\nabla\tilde{f}\right)-\tilde{f}f_0^2L_r\tilde{f}-2\tilde{f}f_0\left<P_r\nabla\tilde{f},\nabla f_0\right>\,d\mu_\Sigma \\ &=& \int_\Sigma \tilde{f}f_0^2L_r\tilde{f}+\left<P_r\nabla\tilde{f},\nabla\left(\tilde{f}f_0^2\right)\right>-\tilde{f}f_0^2L_r\tilde{f}-2\tilde{f}f_0\left<P_r\nabla\tilde{f},\nabla f_0\right>\,d\mu_\Sigma \\ &=& \int_\Sigma f_0^2\left<P_r\nabla\tilde{f},\nabla\tilde{f}\right>\,d\mu_\Sigma \geq 0. \qedhere \end{eqnarray*} \end{proof} \begin{obs}\label{004.5} A similar construction can be made when $P_r$ is negative definite at each point of $\Sigma$. In this case the functional is defined to be $$\mathcal{F}_{r,\theta}[\Sigma_t]=\int_\Sigma S_{r+1}(t)\left<\xi_t,\eta_t\right>\,d\mu_{\Sigma_t}+\int_{\partial\Sigma} \left<\xi_t,(P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu)_t\right>\,d\mu_{\partial\Sigma_t}$$ and $$\left.\frac{\partial}{\partial t}\mathcal{F}_{r,\theta}\left[\Sigma_t\right]\right\vert_{t=0}=\int_\Sigma fT_rf\,d\mu_\Sigma+\int_{\partial\Sigma} \left\vert{P_r\nu}\right\vert\,f\left(\frac{\partial f}{\partial\nu}+\alpha_\theta f\right)\,d\mu_{\partial\Sigma},$$ where in this case $T_r=L_r+\tr\left(P_r\left(A^2+\overline{R}_\eta\right)\right)$. \end{obs} \section{Stability of capillary $H_{r+1}$-hypersurfaces in space forms}\label{quatro} In this section we specialize to the stability of $H_{r+1}$-hypersurfaces immersed in space forms and give a more detailed view of the theory in this setting. In this paper, $\mathbb{M}^{n+1}(c)$ denotes the simply connected space form of constant sectional curvature $c$, i.e., $\mathbb{M}^{n+1}(c)$ is equal to $\mathbb{R}^{n+1}$ if $c=0$, $\mathbb{S}^{n+1}(c)$ if $c>0$ and $\mathbb{H}^{n+1}(c)$ if $c<0$. As we mentioned before, in \cite[Theorem 4.1]{rosenberg1993hypersurfaces}, H.
Rosenberg proved that $P_r$ is divergence free for hypersurfaces on space forms, i.e., $$\dive P_r=\sum_{i=1}^n (\nabla_{e_i}P_r)e_i=0,$$ where $\left\{e_i\right\}$ is an arbitrary local frame in $\Sigma$. Thus, Stokes' theorem and Lemma \ref{002.4} imply that \begin{equation}\label{0008} -\int_\Sigma fL_rf\,d\mu_\Sigma=\int_\Sigma \left<P_r\nabla f,\nabla f\right>\,d\mu_\Sigma-\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert f\frac{\partial f}{\partial\nu}\,d\mu_{\partial\Sigma} \end{equation} for all $f \in C^\infty(\Sigma)$. Together with the identity $$\overline{R}(X,Y)Z=c\left(\left<X,Z\right>Y-\left<Y,Z\right>X\right), \quad X,Y,Z \in \Gamma\left(T\mathbb{M}^{n+1}(c)\right),$$ we have that $\overline{R}_\eta(X)=cX$ and $\tr\left(P_r\overline{R}_\eta\right)=c\tr P_r=c(n-r)S_r$. Hence, when the ambient space is a space form, \eqref{0006} becomes \begin{multline}\label{0009} \left.\frac{\partial}{\partial t}\mathcal{F}_{r,\theta}\left[\Sigma_t\right]\right\vert_{t=0}=\int_\Sigma \left<P_r\nabla f,\nabla f\right>-\left(S_1S_{r+1}-(r+2)S_{r+2}\right)f^2-c\left(n-r\right)S_rf^2\,d\mu_\Sigma+\\ +\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert\alpha_\theta f^2\,d\mu_{\partial\Sigma}. \end{multline} Given a function $f \in C^\infty(\Sigma)$, the $H^1$-norm of $f$ is defined by $$\left\Vert{f}\right\Vert_{H^1(\Sigma)}^2=\left\Vert{f}\right\Vert_{L^2(\Sigma)}^2+\left\Vert{\nabla f}\right\Vert_{L^2(\Sigma)}^2=\int_\Sigma f^2+\left\vert\nabla f\right\vert^2\,d\mu_\Sigma$$ and the Sobolev space $H^1(\Sigma)$ is defined as the closure of $C^\infty(\Sigma)$ with respect to the norm $\left\Vert\cdot\right\Vert_{H^1(\Sigma)}$. This space, endowed with this norm, is a Hilbert space; thus, one can view \eqref{0009} as the quadratic form associated with a symmetric bilinear form on $H^1(\Sigma)$. \begin{defn}\label{005.1} Let $\varphi : \Sigma \rightarrow \Omega \subseteq \mathbb{M}^{n+1}(c)$ be a positive definite $H_{r+1}$-hypersurface.
The \textbf{$r$-index form} of $\varphi$ is defined by \begin{equation}\label{0090} \mathcal{I}_{r,\theta}(f_1,f_2)=\int_\Sigma\left<P_r\nabla f_1,\nabla f_2\right>-\tr\left(P_r\left(A^2+\overline{R}_\eta\right)\right)f_1f_2\,d\mu_\Sigma+\int_{\partial\Sigma}\vert{P_r\nu}\vert\alpha_\theta f_1f_2\,d\mu_{\partial\Sigma}, \end{equation} where $f_1,f_2 \in H^1(\Sigma)$, or equivalently, \begin{multline}\label{0010} \mathcal{I}_{r,\theta}(f_1,f_2)=\int_\Sigma \left<P_r\nabla f_1,\nabla f_2\right>-\left(S_1S_{r+1}-(r+2)S_{r+2}+c\left(n-r\right)S_r\right)f_1f_2\,d\mu_\Sigma+\\+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert\alpha_\theta f_1f_2\,d\mu_{\partial\Sigma}. \end{multline} \end{defn} From the above definition, one can notice that $\Sigma$ is strongly $r$-stable if and only if $\mathcal{I}_{r,\theta}(f,f) \geq 0$ for all $f \in H^1(\Sigma)$ and that $\Sigma$ is $r$-stable if $\mathcal{I}_{r,\theta}(f,f) \geq 0$ for all $f \in \mathcal{F}=\left\{f \in H^1(\Sigma)\,|\,\int_\Sigma f\,d\mu_\Sigma=0\right\}$. As in the case $r=0$, when considering $\varphi$ a capillary $(r+1)$-minimal hypersurface, i.e., $H_{r+1}=0$, we say that $\varphi$ is stable if $\mathcal{I}_{r,\theta}(f,f) \geq 0$ for all $f \in C_0^\infty(\Sigma)$. This means that the hypothesis that the variation is volume-preserving is dropped. \ It is known that if $\Sigma$ is a totally umbilical hypersurface of $\mathbb{M}^{n+1}(c)$ then $\Sigma$ is contained in a totally geodesic hypersurface or a geodesic sphere and, for $c<0$, $\Sigma$ can also be part of a horosphere or an equidistant hypersurface \cite[p. 75, 77]{spivak1970comprehensive}. Horospheres have all principal curvatures equal to $1$ and equidistant hypersurfaces have all principal curvatures positive and smaller than $1$. The next result, whose proof is given in Appendix B, shows that such totally umbilical hypersurfaces are examples of $r$-stable capillary $H_{r+1}$-hypersurfaces of $\mathbb{M}^{n+1}(c)$.
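For totally umbilical hypersurfaces the operators involved can be written in closed form. The computation below is a sketch, assuming the standard Newton tensor recursion $P_0=\mathrm{Id}$, $P_r=S_r\,\mathrm{Id}-AP_{r-1}$; it indicates why a totally umbilical hypersurface with principal curvature $\kappa>0$ is positive definite.

```latex
% Totally umbilical case: A = \kappa\,\mathrm{Id} with \kappa > 0, so S_j = \binom{n}{j}\kappa^j.
% Iterating the recursion P_r = S_r\,\mathrm{Id} - A P_{r-1} gives
P_r \;=\; \kappa^r \sum_{j=0}^{r} (-1)^{j} \binom{n}{r-j}\,\mathrm{Id}
    \;=\; \binom{n-1}{r}\,\kappa^{r}\,\mathrm{Id} \;>\; 0,
% where the last step uses the alternating sum \sum_{i=0}^{r} (-1)^{r-i}\binom{n}{i} = \binom{n-1}{r}.
% In particular H_{r+1} = \kappa^{r+1} is constant, as required of an H_{r+1}-hypersurface.
```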
\begin{prop}\label{004.6} Suppose that $\varphi : \Sigma \rightarrow \Omega \subseteq \mathbb{M}^{n+1}(c)$ is a positive definite compact totally umbilical capillary $H_{r+1}$-hypersurface supported on a connected totally umbilical hypersurface $\partial\Omega$ of $\mathbb{M}^{n+1}(c)$. Then $\varphi$ is $r$-stable. \end{prop} The $0$-stability of a totally umbilical hypersurface supported on a horosphere is discussed in \cite[Proposition 2.5]{guo2022stable}. A \textbf{geodesic cap} $\Sigma$ of $\mathbb{M}^{n+1}(c)$ is a geodesic ball of a geodesic sphere of $\mathbb{M}^{n+1}(c)$. The proposition above shows that such capillary hypersurfaces supported on a totally umbilical hypersurface are $r$-stable. \begin{cor} Capillary geodesic caps supported on a totally umbilical hypersurface of $\mathbb{M}^{n+1}(c)$ are $r$-stable. \end{cor} A normal vector field $\xi=f\eta$, with $f \in \mathcal{F}$, is a \textbf{Jacobi field} if $f \in \ker\mathcal{I}_{r,\theta}\vert_{\mathcal{F} \times \mathcal{F}}$, i.e., $\mathcal{I}_{r,\theta}(f,g)=0$ for every $g \in \mathcal{F}$. The next lemma gives a characterization of Jacobi fields on $\Sigma$. \begin{lema}\label{005.3} Let $\varphi : \Sigma \rightarrow \Omega \subseteq \mathbb{M}^{n+1}(c)$ be a positive definite $H_{r+1}$-hypersurface with free boundary in $\partial\Omega$ and $f \in \mathcal{F}$. Then \begin{enumerate}[(i)] \item $\xi=f\eta$ is a Jacobi field on $\Sigma$ if and only if $f \in C^\infty(\Sigma)$ and \begin{equation}\label{0011}\begin{dcases*} T_rf=-L_rf-\tr\left(P_r\left(A^2+\overline{R}_\eta\right)\right)f=\text{constant} & in $\Sigma$ \\ \frac{\partial f}{\partial\nu}+\alpha_\theta f=0 & on $\partial\Sigma$ \end{dcases*}.\end{equation} \item If $\varphi$ is $r$-stable and $\mathcal{I}_{r,\theta}(f,f)=0$ then $f$ is a Jacobi field on $\Sigma$. \end{enumerate} \end{lema} \begin{proof} \begin{enumerate}[(i)] \item First suppose that $f \in C^\infty(\Sigma)$ satisfies \eqref{0011} and let $g \in \mathcal{F}$.
If $\left(g_m\right)_{m \in \mathbb{N}}$ is a sequence of smooth functions on $\Sigma$ such that $\int_\Sigma g_m\,d\mu_\Sigma=0$ for all $m \in \mathbb{N}$ and $g_m \stackrel{m \rightarrow \infty}{\longrightarrow} g$ in the $H^1(\Sigma)$-sense, it follows from \eqref{0010} and \eqref{0011} that \begin{eqnarray*} \mathcal{I}_{r,\theta}(f,g_m) &=& -\int_\Sigma \left(L_rf+\tr\left(P_r\left(A^2+\overline{R}_\eta\right)\right)f\right)g_m\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert\left(\frac{\partial f}{\partial\nu}+\alpha_\theta f\right)g_m\,d\mu_{\partial\Sigma} \\ &=& 0 \\ \mathcal{I}_{r,\theta}(f,g) &=& \lim_{m \rightarrow \infty} \mathcal{I}_{r,\theta}(f,g_m)=0, \end{eqnarray*} proving that $f\eta$ is a Jacobi field on $\Sigma$. To prove the converse, we first claim that \begin{equation}\label{0012} \mathcal{I}_{r,\theta}(f,g)=b\int_\Sigma g\,d\mu_\Sigma, \quad \forall g \in C_0^\infty(\Sigma), \end{equation} where $b$ is a constant to be specified. In fact, let $g_1 \in C_0^\infty(\Sigma)$ be a function such that $\int_\Sigma g_1\,d\mu_\Sigma \neq 0$ and define $b=\frac{\mathcal{I}_{r,\theta}(f,g_1)}{\int_\Sigma g_1\,d\mu_\Sigma}$. Given $g \in C_0^\infty(\Sigma)$, let $g_2=-\frac{\int_\Sigma g\,d\mu_\Sigma}{\int_\Sigma g_1\,d\mu_\Sigma}g_1+g \in \mathcal{F}$. Then, \begin{equation*} 0=\mathcal{I}_{r,\theta}(f,g_2)=-\dfrac{\int_\Sigma g\,d\mu_\Sigma}{\int_\Sigma g_1\,d\mu_\Sigma}\mathcal{I}_{r,\theta}(f,g_1)+\mathcal{I}_{r,\theta}(f,g)=-b\int_\Sigma g\,d\mu_\Sigma+\mathcal{I}_{r,\theta}(f,g), \end{equation*} proving \eqref{0012}. Thus, $f$ is a weak solution to the first equation in \eqref{0011} and since $P_r$ is positive definite at each point of $\Sigma$, the regularity theory of second-order elliptic self-adjoint linear operators implies that $f \in C^\infty(\Sigma)$, proving that $f$ satisfies the equation in the strong sense.
In order to prove that $\frac{\partial f}{\partial\nu}+\alpha_\theta f=0$ on $\partial\Sigma$, let \begin{equation*}g=\begin{dcases*} \frac{\partial f}{\partial\nu}+\alpha_\theta f,& on $\partial\Sigma$ \\ 0,& otherwise \end{dcases*}.\end{equation*} Since $f\eta$ is a Jacobi field, we have \begin{equation*} 0=\mathcal{I}_{r,\theta}(f,g)=\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert\left(\frac{\partial f}{\partial\nu}+\alpha_\theta f\right)^2\,d\mu_{\partial\Sigma}, \end{equation*} proving (i). \item Let $g \in \mathcal{F}$. Since $\varphi$ is $r$-stable, we have $\mathcal{I}_{r,\theta}\left(f+\varepsilon g,f+\varepsilon g\right) \geq 0$ for all $\varepsilon>0$. Thus, \begin{eqnarray} 0 &\leq& \mathcal{I}_{r,\theta}(f+\varepsilon g,f+\varepsilon g)=\mathcal{I}_{r,\theta}(f,f)+2\varepsilon\,\mathcal{I}_{r,\theta}(f,g)+\varepsilon^2\mathcal{I}_{r,\theta}(g,g) \nonumber \\ &=& 2\varepsilon\,\mathcal{I}_{r,\theta}(f,g)+\varepsilon^2\mathcal{I}_{r,\theta}(g,g). \label{0013} \end{eqnarray} Dividing \eqref{0013} by $2\varepsilon$ and letting $\varepsilon \rightarrow 0^+$, we obtain $\mathcal{I}_{r,\theta}(f,g) \geq 0$. The reverse inequality is obtained by replacing $\varepsilon$ by $-\varepsilon$. \qedhere \end{enumerate} \end{proof} \section{Stability results for free boundary $H_{r+1}$-hypersurfaces in $\mathbb{M}^{n+1}(c)$}\label{cinco} In this section we will assume that $\theta=\frac{\pi}{2}$ and we will use the following models for $\mathbb{M}^{n+1}(c)$: \begin{eqnarray*} \mathbb{R}^{n+1} &=& \left\{x=(x_1,...,x_{n+2}) \in \mathbb{R}^{n+2}\,\left|\right.\,x_{n+2}=0\right\} \\ \mathbb{S}^{n+1}(c) &=& \left\{x=(x_1,...,x_{n+2}) \in \mathbb{R}^{n+2}\,\left|\right.\,x_1^2+...+x_{n+2}^2=\frac{1}{c}\right\} \\ \mathbb{H}^{n+1}(c) &=& \left\{x=(x_1,...,x_{n+2}) \in \mathbb{R}_1^{n+2}\,\left|\right.\,x_1^2+...+x_{n+1}^2-x_{n+2}^2=\frac{1}{c}, x_{n+2}>0\right\} \end{eqnarray*} endowed with the pullback of the Euclidean metric for $c \geq 0$ or the Minkowski metric for $c<0$.
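As a sanity check on these models, the following sketch uses the Gauss equation for the totally umbilical inclusion of $\mathbb{M}^{n+1}(c)$ in the flat ambient space; for $c\neq0$ the position vector is normal to $\mathbb{M}^{n+1}(c)$.

```latex
% For c \neq 0 a unit normal of \mathbb{M}^{n+1}(c) at x is N = \sqrt{|c|}\,x, with
% \varepsilon := \langle N,N \rangle = \mathrm{sign}(c) and shape operator \sqrt{|c|}\,\mathrm{Id}.
% The Gauss equation in a flat ambient space then gives, for any orthonormal pair X, Y
% tangent to \mathbb{M}^{n+1}(c),
K(X,Y) \;=\; \varepsilon\left(\sqrt{|c|}\right)^2 \;=\; \mathrm{sign}(c)\,|c| \;=\; c,
% confirming that the models above have constant sectional curvature c.
```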
Define \begin{equation}\label{0072} \sn_c(\rho)=\begin{dcases*} \frac{\sin\left(\rho\sqrt{c}\right)}{\sqrt{c}},& if $c>0$ \\ \rho,& if $c=0$ \\ \frac{\sinh\left(\rho\sqrt{-c}\right)}{\sqrt{-c}},& if $c<0$ \end{dcases*} \end{equation} and $\cn_c(\rho)=\sn_c^\prime(\rho)$. For the results of this section we will assume that $c \in \{-1,0,1\}$. The next lemma collects some relations involving $\Sigma$, $\nu$ and $\varphi$; its proof can be found in \cite[Lemma 1.1]{souam1997stability}. \begin{lema}\label{007.2} Let $B_R$ be a geodesic ball of $\mathbb{M}^{n+1}(c)$ and let $\varphi : \Sigma^n \rightarrow B_R \subseteq \mathbb{M}^{n+1}(c)$ be an $H_{r+1}$-hypersurface. If $\{\mathbf{e}_1,...,\mathbf{e}_{n+1},\mathbf{e}_{n+2}\}$ denote the vectors of the canonical basis in $\mathbb{R}^{n+2}$, we have \begin{enumerate}[(i)] \item On $\partial\Sigma$ we have $\sn_c(R)\,\nu-\cn_c(R)\,\varphi=-\frac{1}{\sqrt{|c|}}\,\mathbf{e}_{n+2}$ for $c \neq 0$ and $\nu=\frac{1}{R}\varphi$ for $c=0$. \item The second fundamental form of $\iota_{\partial\Sigma} : \partial\Sigma \hookrightarrow \Sigma$ with respect to $-\nu$ is given by $\frac{\cn_c(R)}{\sn_c(R)}\left<\cdot,\cdot\right>$, where $\left<\cdot,\cdot\right>$ is the metric of $\partial\Sigma$ induced by $\varphi\vert_{\partial\Sigma}$. In particular, if $n=2$, the geodesic curvature of $\partial\Sigma$ in $\Sigma$ at any point is given by $\frac{\cn_c(R)}{\sn_c(R)}$. \end{enumerate} \end{lema} The next lemma gives important relations between $\varphi$ and $\eta$. \begin{lema}\label{006.2} Let $\varphi : \Sigma^n \rightarrow \mathbb{M}^{n+1}(c) \subseteq \mathbb{R}^{n+2}$ be a hypersurface.
Then \begin{eqnarray} L_r\varphi &=& (r+1)S_{r+1}\eta-c(n-r)S_r\varphi \label{0015} \\ L_r\eta &=& -\tr\left(P_rA^2\right)\eta+c(r+1)S_{r+1}\varphi-\nabla S_{r+1}, \label{0016} \end{eqnarray} where $L_r\varphi$ and $L_r\eta$ are calculated coordinate-wise. Moreover, if $c=0$ then \begin{eqnarray} \frac{1}{2}L_r\left\vert\varphi\right\vert^2 &=& (n-r)S_r+(r+1)S_{r+1}\left<\varphi,\eta\right> \label{0017} \\ L_r\left<\varphi,\eta\right> &=& -(r+1)S_{r+1}-\left(S_1S_{r+1}-(r+2)S_{r+2}\right)\left<\varphi,\eta\right>-\left<\nabla S_{r+1},\varphi^\top\right>. \label{0018} \end{eqnarray} For a proof of \eqref{0015} and \eqref{0016} see \cite[Remark 5.1]{rosenberg1993hypersurfaces} and for a proof of \eqref{0017} and \eqref{0018} see \cite[Lemma 1, (b)]{alencar1998integral} and \cite[Lemma 2]{alencar1998integral}, respectively. \begin{teo}\label{007.1} Let $B_R \subseteq \mathbb{M}^{n+1}(c)$ be a geodesic ball with radius $R>0$. Then, for $r>0$, there exists no positive definite $r$-stable $(r+1)$-minimal hypersurface with free boundary in $B_R$. \end{teo} \begin{proof} The proof of this result is based on \cite[Theorem 2.1]{souam1997stability}. Suppose that such a hypersurface $\varphi : \Sigma \rightarrow B_R \subseteq \mathbb{M}^{n+1}(c)$ exists and define the vector $\widetilde\varphi=\int_\Sigma \varphi\,d\mu_\Sigma$. One can take $n$ linearly independent vectors $\mathbf{u}_1,...,\mathbf{u}_n \in \mathbb{R}^{n+2}$ such that $\left<\mathbf{e}_{n+2},\mathbf{u}_i\right>=0$ and $$0=\left<\widetilde\varphi,\mathbf{u}_i\right>=\int_\Sigma \left<\varphi,\mathbf{u}_i\right>\,d\mu_\Sigma$$ for all $i \in \{1,...,n\}$. Let $f_i=\left<\varphi,\mathbf{u}_i\right> \in \mathcal{F}$ for $i \in \{1,...,n\}$.
The formula \eqref{0015}, the hypothesis $S_{r+1}=0$ and item (i) of Lemma \ref{007.2} yield \begin{eqnarray} T_rf_i &=& -\left(L_r+\tr\left(P_rA^2\right)+c(n-r)S_r\right)f_i \nonumber \\ &=& -\tr\left(P_rA^2\right)f_i=-\left\vert\sqrt{P_r}A\right\vert^2f_i, \label{0025} \end{eqnarray} and \begin{equation}\label{0026} \frac{\partial f_i}{\partial\nu}+\left(\emph{II}_{\partial B}\right)_{\overline\eta}\left(\overline\nu,\overline\nu\right)f_i=\nu\left<\varphi,\mathbf{u}_i\right>-\frac{\cn_c(R)}{\sn_c(R)}\left<\varphi,\mathbf{u}_i\right>=\left<\nu-\frac{\cn_c(R)}{\sn_c(R)}\varphi,\mathbf{u}_i\right>=0 \end{equation} on $\partial\Sigma$. The equations \eqref{0025} and \eqref{0026}, together with the $r$-stability hypothesis, imply that \begin{eqnarray} 0 &\leq& \mathcal{I}_{r,\pi/2}(f_i,f_i)=\int_\Sigma f_iT_rf_i\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert f_i\left(\frac{\partial f_i}{\partial\nu}+\left(\emph{II}_{\partial B}\right)_{\overline\eta}\left(\overline\nu,\overline\nu\right)f_i\right)\,d\mu_{\partial\Sigma} \nonumber \\ &=& -\int_\Sigma \left\vert\sqrt{P_r}A\right\vert^2f_i^2\,d\mu_\Sigma \label{0027} \end{eqnarray} for all $i \in \{1,...,n\}$. Summing up the inequality \eqref{0027} for all $i \in \{1,...,n\}$ we obtain $$0 \leq \sum_{i=1}^n \mathcal{I}_{r,\pi/2}(f_i,f_i)=-\int_\Sigma \left\vert\sqrt{P_r}A\right\vert^2\left(\sum_{i=1}^n f_i^2\right)\,d\mu_\Sigma \leq 0.$$ Thus $\left\vert\sqrt{P_r}A\right\vert^2\left(\sum_{i=1}^n f_i^2\right) \equiv 0$ on $\Sigma$. Notice that the functions $f_i \in \mathcal{F}$ are restrictions to $\Sigma$ of $n$ linearly independent linear forms $F_i=\left<\cdot,\mathbf{u}_i\right> \in \left(\mathbb{R}^{n+2}\right)^*$.
Since $$\left(\sum_{i=1}^n f_i^2\right)^{-1}(\{0\})=\bigcap_{i=1}^n f_i^{-1}(\{0\})=\bigcap_{i=1}^n \left(\ker F_i \cap \Sigma\right)=\left(\bigcap_{i=1}^n \ker F_i\right) \cap \Sigma \subseteq \left(\bigcap_{i=1}^n \ker F_i\right),$$ the set $\left(\sum_{i=1}^n f_i^2\right)^{-1}(\{0\})$ is contained in a $2$-dimensional linear subspace whose intersection with $\mathbb{M}^{n+1}(c)$ has dimension at most one. Thus $\left(\sum_{i=1}^n f_i^2\right)^{-1}(\{0\})$ has measure zero and, therefore, $\left\vert\sqrt{P_r}A\right\vert^2=0$ on $\Sigma$. Since $\sqrt{P_r}$ is also positive definite, it is invertible and $$A=\left(\sqrt{P_r}\right)^{-1}\sqrt{P_r}A=0,$$ proving that $\Sigma$ is totally geodesic. This is a contradiction: $A=0$ implies $S_r=0$ and hence $P_r=0$ for $r>0$, so $P_r$ cannot be a definite operator at any point of $\Sigma$. \end{proof} Before proving the next theorem, we need the following definition and lemma. \begin{defn}\label{009.2} The $k$-th \textbf{coefficient of umbilicity} $\tau_k$ of a hypersurface $\varphi : \Sigma \rightarrow M$ is defined by $$\tau_k=(k+1)^2S_{k+1}^2-(n-k)S_k\tr\left(P_kA^2\right).$$ \end{defn} \begin{lema}[Umbilicity Lemma]\label{009.3} If $\kappa_1,...,\kappa_n \geq 0$ and $P_k$ is positive definite at each point of $\Sigma$ then $$\tau_k=(k+1)^2S_{k+1}^2-(n-k)S_k\tr\left(P_kA^2\right) \leq 0$$ and $\tau_k \equiv 0$ if and only if $\kappa_1=...=\kappa_n$. \end{lema} Notice that for $k=0$, $\tau_0=S_1^2-nS_0\tr A^2=n\left(nH^2-\vert{A}\vert^2\right) \leq 0$.
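This borderline case can be verified directly; the computation below is a sketch using only the Cauchy--Schwarz inequality on the principal curvatures (no sign assumption on the $\kappa_i$ is needed when $k=0$).

```latex
% With S_1 = \sum_i \kappa_i = nH, S_0 = 1 and \tr A^2 = \sum_i \kappa_i^2 = |A|^2,
% Cauchy--Schwarz gives \left(\sum_i \kappa_i\right)^2 \le n \sum_i \kappa_i^2, hence
\tau_0 \;=\; n^2H^2 - n|A|^2 \;=\; n\left(nH^2 - |A|^2\right) \;\le\; 0,
% with equality if and only if \kappa_1 = \cdots = \kappa_n, i.e. at umbilical points.
```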
\begin{proof} Since $\kappa_1,...,\kappa_n\geq 0$ and $(n-k)\binom{n}{k}=(k+1)\binom{n}{k+1}$ for all $k \in \{0,...,n-1\}$, it follows from Lemma \ref{002.2} that \begin{eqnarray*} \tau_k &=& (k+1)^2S_{k+1}^2-(n-k)S_k\left(S_1S_{k+1}-(k+2)S_{k+2}\right) \\ &=& (k+1)^2\binom{n}{k+1}^2H_{k+1}^2-(n-k)\binom{n}{k}H_k\left(n\binom{n}{k+1}H_1H_{k+1}-(k+2)\binom{n}{k+2}H_{k+2}\right) \\ &=& (n-k)\binom{n}{k}\left((n-k)\binom{n}{k}H_{k+1}^2-H_k\left(\frac{n(n-k)}{k+1}\binom{n}{k}H_1H_{k+1}-\frac{(n-k)(n-k-1)}{k+1}\binom{n}{k}H_{k+2}\right)\right) \\ &=& \left((n-k)\binom{n}{k}\right)^2\left(H_{k+1}^2-H_k\left(\frac{nH_1H_{k+1}-(n-k-1)H_{k+2}}{k+1}\right)\right) \\ &\leq& \left((n-k)\binom{n}{k}\right)^2\left(H_{k+1}^2-H_1H_kH_{k+1}\right) \leq 0. \end{eqnarray*} If the equality holds on $\Sigma$ then we have $$0=\tau_k \leq \left((n-k)\binom{n}{k}\right)^2H_{k+1}\left(H_{k+1}-H_1H_k\right) \leq 0,$$ which shows that $H_1H_k \equiv H_{k+1}$ on $\Sigma$ and it follows from Lemma \ref{002.2} that $\kappa_1=...=\kappa_n$ at each point of $\Sigma$. \end{proof} The next result generalizes \cite[Theorem 3.1]{souam1997stability} for any $r \in \{0,...,n-1\}$ and \cite[Theorem 5.1 (i)]{ainouz2016stable} for any stable $H_{r+1}$-hypersurface with free boundary in totally geodesic hypersurfaces of space forms. \begin{teo}\label{009.1} Let $\Pi$ be a totally geodesic hypersurface of $\mathbb{M}^{n+1}(c)$ and let $\varphi : \Sigma \rightarrow \mathbb{M}^{n+1}(c)$ be a compact $r$-stable $H_{r+1}$-hypersurface with free boundary in $\Pi$ which lies on one side of $\Pi$. Then $\varphi(\Sigma)$ is a geodesic half-sphere whose center is in $\Pi$. \end{teo} \begin{proof} First consider $c \in \{-1,0\}$ and, without loss of generality, assume that $\Pi=\widetilde\Pi \cap \mathbb{M}^{n+1}(c)$, where $\widetilde\Pi$ is the hyperplane of $\mathbb{R}^{n+2}$ with equation $x_1=0$.
If $\phi \in \Isom(\mathbb{M}^{n+1}(c))$ is the isometry defined by $$\phi(x_1,x_2,...,x_{n+2})=(-x_1,x_2,...,x_{n+2}), \quad (x_1,...,x_{n+2}) \in \mathbb{M}^{n+1}(c),$$ then the image of $\Sigma$ under $\phi$ is also an $r$-stable $H_{r+1}$-hypersurface with free boundary in $\Pi$ since, given any function $f_0 \in \mathcal{F}_{\phi(\Sigma)}=\left\{f \in H^1(\phi(\Sigma))\,\left|\right.\,\int_{\phi(\Sigma)}f\,d\mu_{\phi(\Sigma)}=0\right\}$, we have $f_0 \circ \phi\vert_\Sigma \in \mathcal{F}_\Sigma$. Thus, $\mathcal{I}_r^{\phi(\Sigma)}(f_0,f_0)=\mathcal{I}_r^\Sigma(f_0\circ\phi\vert_\Sigma,f_0\circ\phi\vert_\Sigma) \geq 0$, proving the claim. Now since $\widetilde\Sigma=\Sigma \cup \phi(\Sigma)$ is a closed $H_{r+1}$-hypersurface in $\mathbb{M}^{n+1}(c)$, the same argument used in \cite[Remark 5.2]{ainouz2016stable} gives that $\widetilde\Sigma$ is a geodesic sphere and $\Sigma$ is a geodesic half-sphere. Now consider the case that $c=1$. Since \cite[Theorem 5.3]{marques1997stability} only holds for hypersurfaces contained in a hemisphere of $\mathbb{S}^{n+1}$, the approach used in the case $c \leq 0$ cannot be used here. Let $\widetilde\varphi=\int_\Sigma\varphi\,d\mu_\Sigma$ and $\widetilde\eta=\int_\Sigma\eta\,d\mu_\Sigma$, seen as constant vectors in $\mathbb{R}^{n+2}$. Since the unit normal vector field of $\Pi \cong \mathbb{S}^n$ is the restriction of a constant vector field of $\mathbb{R}^{n+2}$, the free boundary condition implies that, along $\partial\Sigma$, $\nu$ is equal to a constant vector $\mathbf{v} \in \mathbb{R}^{n+2}$. Thus one can find $n-1$ linearly independent vectors $\mathbf{u}_1,...,\mathbf{u}_{n-1} \in \mathbb{R}^{n+2}$ such that \begin{equation}\label{0028} \left<\widetilde\varphi,\mathbf{u}_i\right>=\left<\widetilde\eta,\mathbf{u}_i\right>=\left<\mathbf{v},\mathbf{u}_i\right>=0, \quad i \in \{1,...,n-1\}.
\end{equation} For each $i \in \{1,...,n-1\}$ define the functions $f_i=\left<\varphi,\mathbf{u}_i\right>$ and $g_i=\left<\eta,\mathbf{u}_i\right>$ on $\Sigma$. From \eqref{0028}, we have that $f_i,g_i \in \mathcal{F} \cap C^\infty(\Sigma)$ for all $i \in \{1,...,n-1\}$. The equations \eqref{0015} and \eqref{0016} of Lemma \ref{006.2} give, for $i \in \{1,...,n-1\}$, \begin{eqnarray} T_rf_i &=& -L_r\left<\varphi,\mathbf{u}_i\right>-\left(\tr\left(P_rA^2\right)+(n-r)S_r\right)\left<\varphi,\mathbf{u}_i\right> \nonumber \\ &=& -\left(r+1\right)S_{r+1}g_i-\tr\left(P_rA^2\right)f_i \label{0029} \end{eqnarray} and \begin{eqnarray} T_rg_i &=& -L_r\left<\eta,\mathbf{u}_i\right>-\left(\tr\left(P_rA^2\right)+(n-r)S_r\right)\left<\eta,\mathbf{u}_i\right> \nonumber \\ &=& -\left(r+1\right)S_{r+1}f_i-\left(n-r\right)S_rg_i. \label{0030} \end{eqnarray} Since Lemma \ref{002.4} implies that $\nu$ is a principal direction of $\Sigma$ along $\partial\Sigma$, for $i \in \{1,...,n-1\}$ we have on $\partial\Sigma$ \begin{eqnarray} \frac{\partial f_i}{\partial\nu} &=& \nu\left<\varphi,\mathbf{u}_i\right>=\left<\nu,\mathbf{u}_i\right>=0 \label{0031} \\ \frac{\partial g_i}{\partial\nu} &=& \nu\left<\eta,\mathbf{u}_i\right>=\left<\widetilde\nabla_\nu\eta,\mathbf{u}_i\right>=\left<\overline\nabla_\nu\eta+\left<\nu,\eta\right>\varphi,\mathbf{u}_i\right>=-\left<A\nu,\mathbf{u}_i\right>=0, \label{0032} \end{eqnarray} where $\widetilde\nabla$ is the Levi-Civita connection of $\mathbb{R}^{n+2}$ and the last equality holds because $A\nu$ is parallel to $\nu$ and $\left<\nu,\mathbf{u}_i\right>=0$.
By using \eqref{0029}, \eqref{0030}, \eqref{0031} and \eqref{0032} in \eqref{0010} we obtain for $i \in \{1,...,n-1\}$ \begin{eqnarray} \mathcal{I}_{r,\pi/2}\left(f_i,f_i\right) &=& \int_\Sigma f_iT_rf_i\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert f_i\frac{\partial f_i}{\partial\nu}\,d\mu_{\partial\Sigma} \nonumber \\ &=& -\int_\Sigma \tr\left(P_rA^2\right)f_i^2+\left(r+1\right)S_{r+1}f_ig_i\,d\mu_\Sigma \label{0033} \\ \mathcal{I}_{r,\pi/2}\left(g_i,g_i\right) &=& \int_\Sigma g_iT_rg_i\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert g_i\frac{\partial g_i}{\partial\nu}\,d\mu_{\partial\Sigma} \nonumber \\ &=& -\int_\Sigma \left(r+1\right)S_{r+1}f_ig_i+\left(n-r\right)S_rg_i^2\,d\mu_\Sigma. \label{0034} \end{eqnarray} The $r$-stability hypothesis applied to the equations \eqref{0033} and \eqref{0034} for all $i \in \{1,...,n-1\}$ yields \begin{eqnarray} 0 &\leq& \mathcal{I}_{r,\pi/2}\left(f_i,f_i\right)+\mathcal{I}_{r,\pi/2}\left(g_i,g_i\right) \nonumber \\ &=& -\int_\Sigma \tr\left(P_rA^2\right)f_i^2+2\left(r+1\right)S_{r+1}f_ig_i+\left(n-r\right)S_rg_i^2\,d\mu_\Sigma \nonumber \\ &=& -\int_\Sigma\frac{1}{(n-r)S_r}\left((n-r)S_r\tr\left(P_rA^2\right)f_i^2+2(n-r)(r+1)S_rS_{r+1}f_ig_i+(n-r)^2S_r^2g_i^2\right)\,d\mu_\Sigma \nonumber \\ &\leq& -\int_\Sigma\frac{1}{(n-r)S_r}\left((r+1)^2S_{r+1}^2f_i^2+2(n-r)(r+1)S_rS_{r+1}f_ig_i+(n-r)^2S_r^2g_i^2\right)\,d\mu_\Sigma \label{0035} \\ &=& -\int_\Sigma\frac{\left((r+1)S_{r+1}f_i+(n-r)S_rg_i\right)^2}{(n-r)S_r}\,d\mu_\Sigma \leq 0, \label{0036} \end{eqnarray} where \eqref{0035} is a consequence of Lemma \ref{009.3}.
Since \eqref{0036} holds, we have for all $i \in \{1,...,n-1\}$, $$\int_\Sigma\frac{\left((r+1)S_{r+1}f_i+(n-r)S_rg_i\right)^2}{(n-r)S_r}\,d\mu_\Sigma=\int_\Sigma \tr\left(P_rA^2\right)f_i^2+2\left(r+1\right)S_{r+1}f_ig_i+\left(n-r\right)S_rg_i^2\,d\mu_\Sigma=0,$$ hence \begin{eqnarray} 0 &=& \int_\Sigma\frac{\left((r+1)S_{r+1}f_i+(n-r)S_rg_i\right)^2}{(n-r)S_r}-\left(\tr\left(P_rA^2\right)f_i^2+2\left(r+1\right)S_{r+1}f_ig_i+\left(n-r\right)S_rg_i^2\right)\,d\mu_\Sigma \nonumber \\ &=& \int_\Sigma\frac{\tau_r}{(n-r)S_r}f_i^2\,d\mu_\Sigma. \label{0037} \end{eqnarray} Summing up \eqref{0037} for $i \in \{1,...,n-1\}$ we get $\int_\Sigma \frac{\tau_r}{(n-r)S_r}\sum_{i=1}^{n-1}f_i^2\,d\mu_\Sigma=0$, and since $\tau_r \leq 0$ and $H_r>0$, we conclude that $\tau_r\sum_{i=1}^{n-1} f_i^2=0$ on $\Sigma$. If $n>2$ then the argument used in Theorem \ref{007.1} to prove that $\left(\sum_{i=1}^{n-1} f_i^2\right)^{-1}(\{0\})$ has measure zero can be used here and we conclude that $\varphi(\Sigma)$ is totally umbilical. Otherwise, if $n=2$ then, by the holomorphic Hopf differential, it is known that $\Sigma$ is totally umbilical or its umbilic points are isolated. If $\Sigma$ is not totally umbilical we would obtain that $f_1 \equiv 0$. But this implies that $\varphi\left(\Sigma\right)$ is contained in a $3$-dimensional subspace of $\mathbb{R}^4$ and, intersecting with $\mathbb{S}^3$, we would conclude that $\varphi\left(\Sigma\right)$ is contained in an equator of $\mathbb{S}^3$, which is a contradiction. \end{proof} \section{Symmetric $r$-stability}\label{seis} Inspired by the results in \cite{elbert2019note}, we provide a notion of symmetric $r$-stability for positive definite capillary $H_{r+1}$-hypersurfaces with constant contact angle $\theta \in (0,\pi)$. In the case $r=0$ or in the case $r>0$ and $M=\mathbb{M}^{n+1}(c)$, the notion of symmetric $r$-stability is equivalent to that given in Definition \ref{004.3}.
In the case $r>0$ and a general ambient space, they do not coincide, and we will therefore have two different notions of stability to work with. The symmetric stability will equip the theory for $r>0$ in a general ambient space with a symmetric bilinear form, which is the key that allows us to mimic part of the classical CMC stability theory. Let us fix notation before stating the next result. We set \begin{equation}\label{0075} Q_r=q_r-\frac{\left\vert\sqrt{P_r}X_r\right\vert^2}{4}-\frac{\dive\left(P_rX_r\right)}{2} =\tr\left(P_r\left(A^2+\overline{R}_\eta\right)\right)-\frac{\left\vert\sqrt{P_r}X_r\right\vert^2}{4}-\frac{\dive\left(P_rX_r\right)}{2}, \end{equation} where $X_r=-P_r^{-1}\dive P_r$. \begin{prop}\label{011.1} Let $\varphi : \Sigma^n \rightarrow \Omega \subseteq M$ be a positive definite strongly $r$-stable capillary $H_{r+1}$-hypersurface supported on $\partial\Omega$. Then the symmetric bilinear form $\mathcal{I}_{r,\theta}^S : H^1(\Sigma) \times H^1(\Sigma) \rightarrow \mathbb{R}$ defined by $$\mathcal{I}_{r,\theta}^S(f_1,f_2)=\int_\Sigma \left<P_r\nabla f_1,\nabla f_2\right>-Q_rf_1f_2\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert\left(\alpha_\theta-\frac{\left<X_r,\nu\right>}{2}\right)f_1f_2\,d\mu_{\partial\Sigma},$$ is positive semi-definite. \end{prop} \begin{proof} By Proposition \ref{004.4}, there exists a positive function $f_0 \in C^\infty(\Sigma)$ such that $\frac{\partial f_0}{\partial\nu}+\alpha_\theta f_0=0$ on $\partial\Sigma$ and $T_rf_0 \geq 0$ in $\Sigma$.
Then denoting $\widetilde{f}=\log f_0 \in C^\infty(\Sigma)$ we have $\nabla\widetilde{f}=\frac{\nabla f_0}{f_0}$ in $\Sigma$ and \begin{eqnarray} 0 &\leq& \frac{T_rf_0}{f_0}=-\frac{1}{f_0}\dive\left(P_r\nabla f_0\right)+\left<\dive P_r,\frac{\nabla f_0}{f_0}\right>-q_r \nonumber \\ &=& -\dive\left(P_r\frac{\nabla f_0}{f_0}\right)+\left<P_r\nabla f_0,\nabla\left(\frac{1}{f_0}\right)\right>-\left<P_r\left(-P_r^{-1}\dive P_r\right),\nabla\widetilde{f}\right>-q_r \nonumber \\ &=& -\dive\left(P_r\nabla\widetilde{f}\right)-\left<P_r\nabla\widetilde{f},\nabla\widetilde{f}\right>-2\left<P_r\frac{X_r}{2},\nabla\widetilde{f}\right>-q_r \nonumber \\ &=& -\dive\left(P_r\nabla\widetilde{f}\right)-\left\vert\sqrt{P_r}\left(\nabla\widetilde{f}+\frac{X_r}{2}\right)\right\vert^2+\frac{\vert\sqrt{P_r}X_r\vert^2}{4}-q_r \nonumber \\ &=& -\dive\left(P_r\left(\nabla\widetilde{f}+\frac{X_r}{2}\right)\right)-\left\vert\sqrt{P_r}\left(\nabla\widetilde{f}+\frac{X_r}{2}\right)\right\vert^2-\left(q_r-\frac{\vert\sqrt{P_r}X_r\vert^2}{4}-\frac{\dive\left(P_rX_r\right)}{2}\right). \label{0053} \end{eqnarray} Denoting $Y_r=\nabla\widetilde{f}+\frac{X_r}{2}$ we obtain that $-\dive\left(P_r Y_r\right)-\vert\sqrt{P_r}Y_r\vert^2-Q_r \geq 0$. Thus, if $f \in C^\infty(\Sigma)$ then \eqref{0053} gives \begin{eqnarray} 0 &\leq& \int_\Sigma -f^2\dive\left(P_rY_r\right)-f^2\vert\sqrt{P_r}Y_r\vert^2-Q_rf^2\,d\mu_\Sigma \nonumber \\ &=& \int_\Sigma \left<P_rY_r,\nabla\left(f^2\right)\right>-f^2\vert\sqrt{P_r}Y_r\vert^2-Q_rf^2\,d\mu_\Sigma-\int_\Sigma \dive\left(f^2P_rY_r\right)\,d\mu_\Sigma \nonumber \\ &=& \int_\Sigma 2f\left<\sqrt{P_r}\nabla f,\sqrt{P_r}Y_r\right>-f^2\vert\sqrt{P_r}Y_r\vert^2-Q_rf^2\,d\mu_\Sigma-\int_{\partial\Sigma} f^2\left<P_rY_r,\nu\right>\,d\mu_{\partial\Sigma}.
\label{0054} \end{eqnarray} Since the Cauchy--Schwarz and arithmetic--geometric mean inequalities give $$2f\left<\sqrt{P_r}\nabla f,\sqrt{P_r}Y_r\right> \leq 2\vert{f}\vert\vert\sqrt{P_r}\nabla f\vert\vert\sqrt{P_r}Y_r\vert \leq \vert\sqrt{P_r}\nabla f\vert^2+f^2\vert\sqrt{P_r}Y_r\vert^2,$$ \eqref{0054} implies that \begin{equation}\label{0055} \int_\Sigma \vert\sqrt{P_r}\nabla f\vert^2-Q_rf^2\,d\mu_\Sigma-\int_{\partial\Sigma} \left\vert{P_r\nu}\right\vert\left<\nabla\widetilde{f}+\frac{X_r}{2},\nu\right>f^2\,d\mu_{\partial\Sigma} \geq 0 \end{equation} for all $f \in C^\infty(\Sigma)$. Also since $\left<\nabla\widetilde{f},\nu\right>=\frac{1}{f_0}\frac{\partial f_0}{\partial\nu}=-\alpha_\theta$ on $\partial\Sigma$, we conclude from \eqref{0052} and \eqref{0055} that $$\mathcal{I}_{r,\theta}^S(f,f)=\int_\Sigma \left<P_r\nabla f,\nabla f\right>-Q_rf^2\,d\mu_\Sigma+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert\left(\alpha_\theta-\frac{\left<X_r,\nu\right>}{2}\right)f^2\,d\mu_{\partial\Sigma} \geq 0,$$ proving the claim. \end{proof} Inspired by the above result, we can revisit some of the definitions we gave and obtain their ``symmetrized'' versions. \begin{defn}\label{011.2} For a positive definite $H_{r+1}$-hypersurface $\varphi : \Sigma^n \rightarrow M$ we define the \textbf{symmetric $r$-index form} of $\varphi$ by \begin{multline}\label{0052} \mathcal{I}_{r,\theta}^S(f_1,f_2)=\int_\Sigma \left<P_r\nabla f_1,\nabla f_2\right>-Q_rf_1f_2\,d\mu_\Sigma+\\+\int_{\partial\Sigma}\left\vert{P_r\nu}\right\vert\left(\csc\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\overline\nu,\overline\nu)-\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu)-\frac{\left<X_r,\nu\right>}{2}\right)f_1f_2\,d\mu_{\partial\Sigma}.
\end{multline} \end{defn} \begin{defn}\label{011.3} A positive definite capillary $H_{r+1}$-hypersurface $\varphi : \Sigma^n \rightarrow \Omega \subseteq M$ supported on $\partial\Omega$ is \textbf{symmetric $r$-stable} if $\mathcal{I}_{r,\theta}^S(f,f) \geq 0$ for all $f \in \mathcal{F}$. If the inequality holds for all $f \in H^1(\Sigma)$, then the hypersurface is called \textbf{strongly symmetric $r$-stable}. \end{defn} We notice that when $r=0$ or $M=\mathbb{M}^{n+1}(c)$, we have $X_r=0$ in \eqref{0075}; in this case, $\mathcal{I}_{r,\theta}^S$ given by \eqref{0052} coincides with the index form given in Definition \ref{005.1}, and $r$-stability and symmetric $r$-stability coincide. In the general case, from Proposition \ref{011.1} we conclude the following. \begin{cor}\label{011.4} Let $\varphi : \Sigma \rightarrow M$ be a positive definite capillary $H_{r+1}$-hypersurface supported on $\partial\Omega$. If $\Sigma$ is $r$-stable, then it is symmetric $r$-stable. \end{cor} Like the bilinear form given in \eqref{0010}, the bilinear form \eqref{0052} is associated with a differential operator. \begin{defn}\label{011.5} The \textbf{symmetric $r$-stability operator} for a positive definite $H_{r+1}$-hypersurface $\varphi : \Sigma^n \rightarrow M$ is given by \begin{equation}\label{0077} T_r^S=-\dive\left(P_r\nabla\cdot\right)-Q_r. \end{equation} \end{defn} \section{Stability of cylinders in $M \times \mathbb{R}$}\label{sete} We start this section by organizing the most relevant definitions of the symmetric stability theory for $H_{r+1}$-hypersurfaces, based on the approach of the present paper (free boundary) and of \cite{elbert2019note} (fixed boundary or empty boundary). 
A capillary positive definite $H_{r+1}$-hypersurface $\varphi : \Sigma^n \rightarrow \Omega \subseteq M$ supported on $\partial\Omega$ is \begin{enumerate}[(i)] \item symmetric $r$-stable if $\mathcal{I}_{r,\theta}^S(f,f) \geq 0$ for all $f \in \left\{f \in H^1(\Sigma)\,|\,\int_\Sigma f\,d\mu_\Sigma=0\right\}$; \item strongly symmetric $r$-stable if $\mathcal{I}_{r,\theta}^S(f,f) \geq 0$ for all $f \in H^1(\Sigma)$. \end{enumerate} For a positive definite $H_{r+1}$-hypersurface with closed or fixed boundary $\varphi : \Sigma^n \rightarrow M$ the index form that replaces \eqref{0052} is $I_r^S : H_0^1(\Sigma) \times H_0^1(\Sigma) \rightarrow \mathbb{R}$, where \begin{equation}\label{0076} I_r^S(f_1,f_2)=\int_\Sigma \left<P_r\nabla f_1,\nabla f_2\right>-Q_rf_1f_2\,d\mu_\Sigma, \end{equation} and we say that $\varphi : \Sigma^n \rightarrow M$ is \begin{enumerate}[(i)] \item symmetric $r$-stable if $I_r^S(f,f) \geq 0$ for all $f \in \left\{f \in H_0^1(\Sigma)\,|\,\int_\Sigma f\,d\mu_\Sigma=0\right\}$; \item strongly symmetric $r$-stable if $I_r^S(f,f) \geq 0$ for all $f \in H_0^1(\Sigma)$, \end{enumerate} where $H_0^1(\Sigma)$ is the closure of $C_0^\infty(\Sigma)$ with respect to the norm $\left\Vert\cdot\right\Vert_{H^1(\Sigma)}$. Associated to the index form, we set $(T_r^S,B)$ to denote the symmetric $r$-stability operator \eqref{0077} with the boundary condition $B(f)=0$ on $\partial\Sigma$, where \begin{equation}\label{0078} B(f)=\begin{dcases*} \frac{\partial f}{\partial\nu}+\alpha_\theta^S f,& for capillary (Robin condition)\\ f,& for closed or fixed boundary (Dirichlet condition) \end{dcases*}, \end{equation} where $\alpha_\theta^S=\alpha_\theta-\frac{\left<X_r,\nu\right>}{2} \in C^\infty(\partial\Sigma)$. For each boundary condition $B(f)=0$, we can consider an eigenvalue problem which is related to the corresponding stability problem. 
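For orientation, the difference between the two boundary conditions in \eqref{0078} can be seen in a one-dimensional model with $P_r=I$ and $Q_r=0$ on $\Sigma=[0,\pi]$; this toy problem does not arise from an actual hypersurface and is included only as an illustration. The Dirichlet problem $-f^{\prime\prime}=\lambda f$ with $f(0)=f(\pi)=0$ has spectrum $\lambda_k=k^2$, $k \in \mathbb{N}$, with eigenfunctions $f_k(x)=\sin(kx)$, so all eigenvalues are positive. In contrast, the Robin problem $-f^{\prime\prime}=\lambda f$ with $\frac{\partial f}{\partial\nu}+\alpha f=0$ at both endpoints and constant $\alpha<0$ admits the negative eigenvalue $\lambda=-\mu^2$, where $\mu>0$ is the unique solution of $\mu\tanh\left(\frac{\mu\pi}{2}\right)=-\alpha$, with eigenfunction \begin{equation*} f(x)=\cosh\left(\mu\left(x-\frac{\pi}{2}\right)\right), \end{equation*} since $\frac{\partial f}{\partial\nu}=\mu\sinh\left(\frac{\mu\pi}{2}\right)=-\alpha f$ at $x=0$ and $x=\pi$. This is the mechanism through which a negative Robin coefficient $\alpha_\theta^S$ can produce negative eigenvalues of $(T_r^S,B)$. 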
Now we particularize to the eigenvalue problem with Dirichlet boundary condition, namely, \begin{equation}\label{0056} \begin{dcases*} T^S_rf=-\dive\left(P_r\nabla f\right)-Q_rf=\lambda f,& in $\Sigma$ \\ f=0,& on $\partial\Sigma$ \end{dcases*}, \end{equation} with $f \in H_0^1(\Sigma) \backslash \{0\}$. Let $\lambda_1<\lambda_2$ be the first and the second eigenvalues of this problem. For $l \in \{1,2\}$, let $E_{\lambda_l}$ be the eigenspace of $H_0^1(\Sigma)$ associated to $\lambda_l$ and $$E_{\lambda_l}^\perp=\left\{f \in H_0^1(\Sigma)\,|\,\left<f,g\right>_{L^2(\Sigma)}=0 \text{ for any }g \in E_{\lambda_l}\right\}.$$ Then \begin{equation}\label{0057} \lambda_1=I_r^S(f_1,f_1)=\min\left\{I_r^S(f,f)\,\left|\right.\,f \in H_0^1(\Sigma)\text{ and }\int_\Sigma f^2\,d\mu_\Sigma=1\right\} \end{equation} and \begin{equation}\label{0058} \lambda_2=I_r^S(f_2,f_2)=\min\left\{I_r^S(f,f)\,\left|\right.\,f \in H_0^1(\Sigma) \cap E_{\lambda_1}^\perp \text{ and }\int_\Sigma f^2\,d\mu_\Sigma=1\right\}, \end{equation} where $f_1$ and $f_2$ are elements of an orthonormal basis $\{f_l\}_{l \in \mathbb{N}}$ for $L^2(\Sigma)$ composed of eigenfunctions of \eqref{0056}. The next lemma, which generalizes a result proven by M. Koiso in \cite[Theorem 1.3]{koiso2002deformation} and used by R. Souam in \cite[Theorem 3.1]{souam2021stable}, gives criteria for the symmetric $r$-stability of closed hypersurfaces. The proof is essentially the same as that in \cite{koiso2002deformation}, and we include it in Appendix C for completeness. \begin{lema}\label{012.1} Let $\varphi : \Sigma \rightarrow M$ be a closed $H_{r+1}$-hypersurface. The following hold: \begin{enumerate}[(i)] \item $\lambda_1 \geq 0$ if and only if $\Sigma$ is strongly symmetric $r$-stable. \item If $\lambda_1<0<\lambda_2$, then there exists a unique function $f \in H_0^1(\Sigma)$ such that $T_r^Sf=-1$, and $\Sigma$ is symmetric $r$-stable if and only if $\int_\Sigma f\,d\mu_\Sigma \geq 0$. 
\item If $\lambda_1<0=\lambda_2$ and there exists $f \in E_{\lambda_2}$ satisfying $\int_\Sigma f\,d\mu_\Sigma \neq 0$, then $\Sigma$ is symmetric $r$-unstable. \item If $\lambda_1<0=\lambda_2$ and $\int_\Sigma f\,d\mu_\Sigma=0$ for any $f \in E_{\lambda_2}$, then there exists a unique $f \in H_0^1(\Sigma) \cap E_{\lambda_2}^\perp$ such that $T_r^Sf=-1$, and $\Sigma$ is symmetric $r$-stable if and only if $\int_\Sigma f\,d\mu_\Sigma \geq 0$. \item If $\lambda_2<0$, then $\Sigma$ is symmetric $r$-unstable. \end{enumerate} \end{lema} The following result is inspired by \cite[Theorem 3.3]{souam2021stable}. \begin{teo}\label{012.2} Let $\varphi_0 : \Sigma_0 \rightarrow M$ be a closed oriented and positive definite $H_{r+1}^{(0)}$-hypersurface and let $l>0$. The map $\widetilde\varphi:=\varphi_0 \times \id_{[0,l]} : \Sigma:=\Sigma_0 \times [0,l] \rightarrow M \times \mathbb{R}$ is a positive definite free boundary hypersurface with $(r+1)$-th order mean curvature equal to $\frac{n-r-1}{n}H_{r+1}^{(0)}$. Let $\lambda_1^{(0)}$ be the first eigenvalue of $T_r^S$ on $\Sigma_0$. Then the following hold: \begin{enumerate}[(i)] \item If $\Sigma_0$ is symmetric $r$-unstable, then $\Sigma$ is symmetric $r$-unstable. \item Suppose that $\Sigma_0$ is symmetric $r$-stable. \begin{description} \item[a)] Assume, in addition, that $\Sigma$ is symmetric $r$-stable. Then $\lambda_1^{(0)}+S_r^{(0)}\frac{\pi^2}{l^2}\geq 0$. \item[b)] Assume, in addition, that $S_r^{(0)}$ is constant and that $\lambda_1^{(0)}+S_r^{(0)}\frac{\pi^2}{l^2} \geq 0$. Then $\Sigma$ is symmetric $r$-stable. \end{description} \end{enumerate} \end{teo} \begin{proof} Denote with a superscript $^{(0)}$ the quantities related to $\Sigma_0$, and by $t$ the global coordinate of $\mathbb{R}$. 
If $\kappa_1^{(0)},...,\kappa_{n-1}^{(0)}$ denote the principal curvatures of $\Sigma_0$ associated with the eigenvectors $\{e_1^{(0)},...,e_{n-1}^{(0)}\}$ and $\pi_M$ is the projection of $M \times \mathbb{R}$ onto $M$, then the principal curvatures $\kappa_1,...,\kappa_n$ of $\Sigma$ are equal to $$\kappa_i=\begin{dcases*}\kappa_i^{(0)} \circ \pi_M,& $i \neq n$ \\ 0,& $i=n$\end{dcases*}$$ and the associated eigenvectors are $\left\{e_1,...,e_{n-1},e_n=\left.\frac{\partial}{\partial t}\right\vert_\Sigma\right\}$, where $e_i \in \Gamma(T\Sigma)$ is the horizontal vector field such that $e_i^{(0)}=d\pi_Me_i$. The eigenvalues $S_r(A_i)$ of the Newton transformation $P_r$ of $\Sigma$ are equal to $S_r^{(0)} \circ \pi_M$ if $i=n$ and \begin{eqnarray*} S_r(A_i) &=& \sum_{1 \leq i_1<...<i_r \leq n; i_1,...,i_r \neq i} \kappa_{i_1}\cdot ... \cdot \kappa_{i_r} \\ &=& \kappa_n\sum_{1 \leq i_1<...<i_{r-1} \leq n-1; i_1,...,i_{r-1} \neq i} \kappa_{i_1}\cdot ... \cdot \kappa_{i_{r-1}}+\sum_{1 \leq i_1<...<i_r \leq n-1; i_1,...,i_r \neq i}\kappa_{i_1}\cdot ... \cdot \kappa_{i_r} \\ &=& S_r^{(0)}(A_i^{(0)}) \circ \pi_M, \quad i \neq n, \end{eqnarray*} proving that $\Sigma$ is positive definite provided $\Sigma_0$ is also positive definite. Also, its $(r+1)$-th order mean curvature is equal to \begin{eqnarray*} S_{r+1} &=& \sum_{1 \leq i_1<...<i_{r+1} \leq n} \kappa_{i_1}\cdot ... \cdot \kappa_{i_{r+1}} \\ &=& \kappa_n\sum_{1 \leq i_1<...<i_r \leq n-1}\kappa_{i_1}\cdot ... \cdot \kappa_{i_r}+\sum_{1 \leq i_1<...<i_{r+1} \leq n-1} \kappa_{i_1}\cdot ... \cdot \kappa_{i_{r+1}}=S_{r+1}^{(0)}, \end{eqnarray*} proving that $$H_{r+1}=\binom{n}{r+1}^{-1}S_{r+1}=\binom{n}{r+1}^{-1}S_{r+1}^{(0)}=\dfrac{\binom{n-1}{r+1}}{\binom{n}{r+1}}H_{r+1}^{(0)}=\frac{n-r-1}{n}H_{r+1}^{(0)}.$$ Let $\{u_1,...,u_{n-1}, u_n=\frac{\partial}{\partial t}\}$ be a geodesic frame of $\Sigma$ centered at some point $p=(p_0,t) \in \Sigma$. 
Then we have $$\dive_\Sigma P_r=\sum_{i=1}^n \nabla_{u_i}(P_r(u_i))=\sum_{i=1}^{n-1}\nabla_{u_i}(P_r(u_i)),$$ proving that $\dive_\Sigma P_r$ and, {\it a fortiori}, $X_r$ are horizontal vector fields whose projections onto $T\Sigma_0$ are equal to $\dive_{\Sigma_0}P_r^{(0)}$ and $X_r^{(0)}$, respectively. Hence, $\left\vert\sqrt{P_r}X_r\right\vert=\left\vert\sqrt{P_r^{(0)}}X_r^{(0)}\right\vert_0$ and $\dive_\Sigma\left(P_rX_r\right)=\dive_{\Sigma_0}\left(P_r^{(0)}X_r^{(0)}\right) \circ \pi_M$, since $P_rX_r$ is a horizontal vector field whose projection onto $T\Sigma_0$ is equal to $P_r^{(0)}X_r^{(0)}$. Also, since $\nu=\begin{dcases*}-e_n,& $t=0$ \\ e_n,& $t=l$\end{dcases*}$ on $\partial\Sigma=\Sigma_0 \times \{0,l\}$, we have that $\left<X_r,\nu\right>=0$ on $\partial\Sigma$. Also, since $\tr_\Sigma\left(P_rA^2\right)=\tr_{\Sigma_0}\left(\left(P_rA^2\right)^{(0)}\right)\circ\pi_M$ on $\Sigma$, the unit normal vector field $\eta \in \Gamma(N\Sigma)$ is the horizontal vector field of $M \times \mathbb{R}$ whose projection onto $TM$ is the unit normal vector field $\eta^{(0)}$ of $\Sigma_0$, and $\overline{R}(\eta,\partial_t)\eta=0$, we have that $\tr_\Sigma\left(P_r\overline{R}_\eta\right)=\tr_{\Sigma_0}\left(P_r^{(0)}\overline{R}_{\eta^{(0)}}^{(0)}\right) \circ \pi_M$, where $\overline{R}^{(0)}$ is the curvature tensor of $M$. Hence, $Q_r=Q_r^{(0)} \circ \pi_M$ on $\Sigma$. Suppose that $\Sigma_0$ is a symmetric $r$-unstable hypersurface. Let $f_0 \in \mathcal{F}_{\Sigma_0}$ be such that $I_r^S(f_0,f_0)<0$. Then, for $\tilde{f}=f_0 \circ \pi_M \in \mathcal{F}_\Sigma$, we obtain that $\mathcal{I}_r^S(\tilde{f},\tilde{f})=I_r^S(f_0,f_0)<0$. Thus $\Sigma$ is symmetric $r$-unstable. Now suppose that $\Sigma_0$ is symmetric $r$-stable and assume that $\lambda_1^{(0)}+S_r^{(0)}\frac{\pi^2}{l^2}<0$. 
Let $f_0 \in H^1(\Sigma_0)$ be an eigenfunction of $T_r^S$ associated to $\lambda_1^{(0)}$. Then, for $\tilde{f}(p)=f_0(p_0)\cdot\cos\left(\frac{\pi t}{l}\right) \in H^1(\Sigma)$, we have $$\int_\Sigma \tilde{f}\,d\mu_\Sigma=\int_0^l\left(\int_{\Sigma_0}f_0\,d\mu_{\Sigma_0}\right)\cos\frac{\pi t}{l}\,dt=0,$$ showing that $\tilde{f} \in \mathcal{F}_\Sigma$. Moreover, \begin{eqnarray*} T_r^S\tilde{f} &=& -\dive_\Sigma\left(P_r\nabla\tilde{f}\right)-Q_r\tilde{f} \\ &=& -\dive_\Sigma\left(\cos\frac{\pi t}{l}P_r\nabla f_0-\frac{\pi}{l}f_0\sin\frac{\pi t}{l}P_r\frac{\partial}{\partial t}\right)-Q_rf_0\cos\frac{\pi t}{l} \\ &=& \left(-\dive_{\Sigma_0}\left(P_r^{(0)}\nabla^{(0)}f_0\right)-Q_r^{(0)}f_0\right)\cos\frac{\pi t}{l}+S_r^{(0)}\frac{\pi^2}{l^2}f_0\cos\frac{\pi t}{l} \\ &=& \left(\lambda_1^{(0)}+S_r^{(0)}\frac{\pi^2}{l^2}\right)\tilde{f}. \end{eqnarray*} Thus, $\mathcal{I}_r^S(\tilde{f},\tilde{f})=\int_\Sigma \left(\lambda_1^{(0)}+S_r^{(0)}\frac{\pi^2}{l^2}\right)\tilde{f}^2\,d\mu_\Sigma<0$, proving that $\Sigma$ is symmetric $r$-unstable. Now suppose that $S_r^{(0)}$ is constant and that $\lambda_1^{(0)}+S_r^{(0)}\frac{\pi^2}{l^2} \geq 0$. Consider the immersion $\widehat\varphi :=\varphi_0 \times \id_{\mathbb{S}^1\left(\frac{l}{\pi}\right)} : \widehat\Sigma:=\Sigma_0 \times \mathbb{S}^1\left(\frac{l}{\pi}\right) \rightarrow M \times \mathbb{S}^1\left(\frac{l}{\pi}\right)$ and denote with a hat $\widehat\cdot$ the quantities related to $\widehat\varphi$. The curvatures of $\widehat\Sigma$ are equal to those of $\Sigma$, and if $\widehat\eta$, $\widehat{A}$, $\widehat{P}_r$ are the unit normal vector field, the second fundamental form and the $r$-th order Newton transformation of $\widehat\varphi$, and $\widehat{R}$ is the Riemann curvature tensor of $M \times \mathbb{S}^1\left(\frac{l}{\pi}\right)$, then $\dive_{\widehat\Sigma}\left(\widehat{P}_r\widehat\nabla\cdot\right)=\dive_\Sigma\left(P_r\nabla\cdot\right)$ and $\widehat{Q}_r=Q_r$ in $\widehat\Sigma$. 
The eigenvalues and eigenfunctions of the eigenvalue problem $f^{\prime\prime}+\frac{\mu}{S_r^{(0)}}f=0$ on $\mathbb{S}^1\left(\frac{l}{\pi}\right)$ are given by $\mu_m=S_r^{(0)}\frac{m^2\pi^2}{l^2}$ for $m \in \mathbb{N} \cup \{0\}$ and $f_m(t)=\cos\left(\sqrt{\frac{\mu_m}{S_r^{(0)}}}t\right)$. Let $\lambda_1^{(0)}<\lambda_2^{(0)} \leq ... \leq \lambda_k^{(0)} \nearrow +\infty$ be the sequence of eigenvalues of $T_r^{S,(0)}$ on $\Sigma_0$. Since $\widehat{Q}_r=Q_r$ depends only on the coordinates of $\Sigma_0$, the eigenvalues of $\widehat{T}_r^S$ on $\widehat\Sigma$ are given by $\lambda_k^{(0)}+S_r^{(0)}\frac{m^2\pi^2}{l^2}$, with $(k,m) \in \mathbb{N} \times \left(\mathbb{N} \cup \{0\}\right)$. In particular, the first two eigenvalues of \eqref{0056} in $\widehat\Sigma$ are equal to \begin{eqnarray*} \widehat\lambda_1 &=& \lambda_1^{(0)} \\ \widehat\lambda_2 &=& \min\left\{\lambda_1^{(0)}+S_r^{(0)}\frac{\pi^2}{l^2},\lambda_2^{(0)}\right\}. \end{eqnarray*} If $\lambda_1^{(0)} \geq 0$, then $\widehat\lambda_1 \geq 0$, which implies that $\widehat\Sigma$ is strongly symmetric $r$-stable. So we assume $\lambda_1^{(0)}<0$. Since $\Sigma_0$ is symmetric $r$-stable, it follows from Lemma \ref{012.1} that $\lambda_2^{(0)} \geq 0$. Thus, $\widehat\lambda_2 \geq 0$. Consider the solution $f \in C^\infty(\Sigma_0)$ to the equation $T_r^Sf=-1$ described in items (ii) and (iv) of Lemma \ref{012.1}. Then the function $\widehat{f}(p_0,t)=f(p_0) \in C^\infty(\widehat\Sigma)$ satisfies $\widehat{T}_r^S\widehat{f}=T_r^Sf=-1$ and $$\int_{\widehat\Sigma}\widehat{f}\,d\mu_{\widehat\Sigma}=2l\int_{\Sigma_0}f\,d\mu_{\Sigma_0} \geq 0.$$ Hence, it follows from items (ii) and (iv) of Lemma \ref{012.1} that $\widehat\Sigma$ is symmetric $r$-stable. Now we will show that the symmetric $r$-stability of $\widehat\Sigma$ implies the symmetric $r$-stability of $\Sigma$. 
In fact, let $f \in \mathcal{F}_\Sigma$ and extend $f$ to a function in $H^1(\Sigma_0 \times [0,2l])$ by setting $f(p_0,t)=f(p_0,2l-t)$ for $(p_0,t) \in \Sigma_0 \times [l,2l]$. Since $f(p_0,2l)=f(p_0,0)$ for all $p_0 \in \Sigma_0$, we obtain a function $\widehat{f} \in H^1(\widehat\Sigma)$ such that $\int_{\widehat\Sigma}\widehat{f}\,d\mu_{\widehat\Sigma}=0$ and $\mathcal{I}_r^S(f,f)=\frac{1}{2}\widehat{\mathcal{I}}_r^S(\widehat{f},\widehat{f}) \geq 0$, proving that $\Sigma$ is symmetric $r$-stable. \end{proof} As a consequence of Theorem \ref{012.2} we obtain a characterization of symmetric $r$-stable tubes $\partial B_R^{\mathbb{M}^n(c)} \times [0,l] \subseteq \mathbb{M}^n(c) \times [0,l]$ with radius $R>0$ and height $l>0$. Here we will consider the warped product model $\mathbb{M}^n(c)=[0,R_c) \times_{\sn_c} \mathbb{S}^{n-1}$, where $R_c=+\infty$ if $c \leq 0$ and $R_c=\frac{\pi}{\sqrt{c}}$ if $c>0$. Since geodesic spheres with radius $R \in (0,R_c)$ are totally umbilical hypersurfaces with constant principal curvatures equal to $\frac{\cn_c(R)}{\sn_c(R)}$, we have that $S_r^{(0)}=\binom{n-1}{r}\left(\frac{\cn_c(R)}{\sn_c(R)}\right)^r$ and $P_r^{(0)}=\frac{n-r-1}{n-1}\binom{n-1}{r}\left(\frac{\cn_c(R)}{\sn_c(R)}\right)^rI$. Hence, \begin{eqnarray} T_r^{(0)} &=& -\frac{n-r-1}{n-1}\binom{n-1}{r}\left(\frac{\cn_c(R)}{\sn_c(R)}\right)^r\left(\Delta+(n-1)\left(\frac{\cn_c^2(R)}{\sn_c^2(R)}+c\right)\right) \nonumber\\ &=& \frac{n-r-1}{n-1}\binom{n-1}{r}\left(\frac{\cn_c(R)}{\sn_c(R)}\right)^rT_0^{(0)}. \label{0062} \end{eqnarray} Now since the first closed eigenvalue of $-\Delta$ on $\mathbb{S}^{n-1}$ is equal to $0$ (see \cite[p. 34]{chavel1984eigenvalues}), the first eigenvalue of $-\Delta$ on $\partial B_R=\left(\mathbb{S}^{n-1},\sn_c^2(R)\,g_{\mathbb{S}^{n-1}}\right)$ is given by $\lambda_1(-\Delta,\partial B_R)=\frac{\lambda_1(-\Delta,\mathbb{S}^{n-1})}{\sn_c^2(R)}=0$. 
Thus, the first eigenvalue $\lambda_1^{(0)}$ of \eqref{0062} is equal to \begin{eqnarray} \lambda_1^{(0)} &=& -(n-r-1)\binom{n-1}{r}\cdot \left(\frac{\cn_c(R)}{\sn_c(R)}\right)^r \cdot \left(\frac{\cn_c^2(R)}{\sn_c^2(R)}+c\right) \nonumber \\ &=& -(n-r-1)\binom{n-1}{r} \cdot \frac{1}{\sn_c^2(R)} \cdot \left(\frac{\cn_c(R)}{\sn_c(R)}\right)^r \label{0063} ,\end{eqnarray} where the last equation is a consequence of the identity $\cn_c^2(\rho)+c\sn_c^2(\rho)=1$ for all $\rho \in [0,R_c)$. Also, since geodesic spheres in space forms are $r$-stable \cite[Proposition 5.1]{marques1997stability}, we obtain a generalization of \cite[Corollary 3.4]{souam2021stable}. \noindent\begin{cor}\label{012.3} \ \begin{enumerate}[(i)] \item A tube of radius $R>0$ and height $l>0$ in $\mathbb{H}^n(c) \times [0,l]$ is symmetric $r$-stable if and only if $\frac{\pi\sinh\left(R\sqrt{-c}\right)}{\sqrt{-c}} \geq l\sqrt{n-r-1}$. \item A tube of radius $R>0$ and height $l>0$ in $\mathbb{R}^n \times [0,l]$ is $r$-stable if and only if $\pi R \geq l\sqrt{n-r-1}$. \item A tube of radius $R>0$ and height $l>0$ in $\mathbb{S}^n(c) \times [0,l]$ is symmetric $r$-stable if and only if $\frac{\pi\sin\left(R\sqrt{c}\right)}{\sqrt{c}} \geq l\sqrt{n-r-1}$. \end{enumerate} \end{cor} \begin{proof} Since $\partial B_R$ is $r$-stable, it follows from Theorem \ref{012.2} that $\partial B_R \times [0,l]$ is symmetric $r$-stable if and only if $\lambda_1^{(0)}+S_r^{(0)}\frac{\pi^2}{l^2} \geq 0$. But since $S_r^{(0)}=\binom{n-1}{r}\left(\frac{\cn_c(R)}{\sn_c(R)}\right)^r$, it follows from \eqref{0063} that the tube is symmetric $r$-stable if and only if \begin{eqnarray*} \binom{n-1}{r} \cdot \left(\frac{\cn_c(R)}{\sn_c(R)}\right)^r \cdot \left(-\frac{n-r-1}{\sn_c^2(R)}+\frac{\pi^2}{l^2}\right) \geq 0 &\iff& \frac{\pi^2}{l^2} \geq \frac{n-r-1}{\sn_c^2(R)} \\ &\iff& \pi\sn_c(R) \geq l\sqrt{n-r-1}, \end{eqnarray*} proving the result. 
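For instance, in the Euclidean case $c=0$ with $r=0$, we have $\sn_0(R)=R$ and $\binom{n-1}{0}=1$, so the criterion reduces to $\frac{\pi^2}{l^2} \geq \frac{n-1}{R^2}$, that is, $\pi R \geq l\sqrt{n-1}$: a tube of fixed height $l$ is stable precisely when its radius $R$ is large enough, which recovers the $r=0$ criterion of \cite[Corollary 3.4]{souam2021stable}. 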
\end{proof} \appendix \section{Proof of Proposition \ref{004.2}} For completeness we will give a proof of Proposition \ref{004.2}. The computations made here are similar to those for $r=0$ done by Ros and Souam in \cite[Section 4]{ros1997stability}. Let $\Phi : \Sigma^n \times (-\varepsilon,\varepsilon) \rightarrow \Omega \subseteq M$ be an admissible volume-preserving variation of a capillary $H_{r+1}$-hypersurface $\varphi : \Sigma \rightarrow M$ supported on $\partial\Omega$. The derivative of \eqref{0005} is equal to \begin{equation}\label{0064} \left.\frac{\partial}{\partial t}\mathcal{F}_r[\Sigma_t]\right\vert_{t=0}=-\int_\Sigma \left.\frac{\partial}{\partial t}\left(S_{r+1}(t)\left<\xi_t,\eta_t\right>\,d\mu_{\Sigma_t}\right)\right\vert_{t=0}+\int_{\partial\Sigma} \left.\frac{\partial}{\partial t}\left(\left<\xi_t,(P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu)_t\right>\,d\mu_{\partial\Sigma_t}\right)\right\vert_{t=0}. \end{equation} Set \begin{eqnarray} I_1 &=& -\int_\Sigma \left.\frac{\partial}{\partial t}\left(S_{r+1}(t)\left<\xi_t,\eta_t\right>\,d\mu_{\Sigma_t}\right)\right\vert_{t=0} \label{0065} \\ I_2 &=& \int_{\partial\Sigma} \left.\frac{\partial}{\partial t}\left(\left<\xi_t,\left(P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu\right)_t\right>\,d\mu_{\partial\Sigma_t}\right)\right\vert_{t=0} \label{0066} \end{eqnarray} Since $S_{r+1}(0)$ is constant and $\Phi$ is volume-preserving, applying \eqref{0004} to \eqref{0065} yields \begin{eqnarray} I_1 &=& -\int_\Sigma S_{r+1}^\prime(0)\left<\xi,\eta\right>\,d\mu_\Sigma \nonumber \\ &=& -\int_\Sigma f\left(L_rf+\left(S_1S_{r+1}-(r+2)S_{r+2}\right)f+\tr\left(P_r\overline{R}_\eta\right)f\right)\,d\mu_\Sigma, \label{0067} \end{eqnarray} where $f=\left<\xi,\eta\right> \in C^\infty(\Sigma)$ is the support function of $\Phi$ at $t=0$. 
Lemma \ref{002.4}, together with \eqref{0002} and the fact that $\xi\vert_{\partial\Sigma} \in \Gamma\left(T\partial\Omega\vert_{\partial\Sigma}\right)$, implies that $$\left<\xi,P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu\right>=\vert{P_r\nu}\vert\left<\xi,\nu-\cos\theta\,\overline\nu\right>=\vert{P_r\nu}\vert\left<\xi,\sin\theta\,\overline\eta\right>=0$$ along $\partial\Sigma$. Thus, the second term of \eqref{0064} is equal to \begin{equation}\label{0068} I_2=\int_{\partial\Sigma} \left(\left<\overline\nabla_\xi\xi,P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu\right>+\left<\xi,\overline\nabla_\xi\left(P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu\right)\right>\right)\,d\mu_{\partial\Sigma}. \end{equation} Since \begin{eqnarray} \left<\overline\nabla_\xi\xi,P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu\right> &=& \vert{P_r\nu}\vert\left<\overline\nabla_\xi\xi,\nu-\cos\theta\,\overline\nu\right> \label{0069} \\ \left<\xi,\overline\nabla_\xi\left(P_r\nu-\vert{P_r\nu}\vert\cos\theta\,\overline\nu\right)\right> &=& \left<\xi,\overline\nabla_{P_r\nu}\xi+\left[\xi,P_r\nu\right]-\cos\theta\,\overline\nabla_\xi\left(\vert{P_r\nu}\vert\,\overline\nu\right)\right> \nonumber \\ &=& \left<\xi,\vert{P_r\nu}\vert\overline\nabla_\nu\xi+\left[\xi,\vert{P_r\nu}\vert\,\nu\right]-\cos\theta\left(\xi\vert{P_r\nu}\vert\,\overline\nu+\vert{P_r\nu}\vert\,\overline\nabla_\xi\overline\nu\right)\right> \nonumber \\ &=& \left<\xi,\vert{P_r\nu}\vert\left(\overline\nabla_\nu\xi+\left[\xi,\nu\right]-\cos\theta\,\overline\nabla_\xi\overline\nu\right)+\xi\vert{P_r\nu}\vert\left(\nu-\cos\theta\,\overline\nu\right)\right> \nonumber \\ &=& \vert{P_r\nu}\vert\left<\xi,\overline\nabla_\xi\nu-\cos\theta\,\overline\nabla_\xi\overline\nu\right>+\xi\vert{P_r\nu}\vert\left<\xi,\sin\theta\,\overline\eta\right> \nonumber \\ &=& \vert{P_r\nu}\vert\left<\xi,\overline\nabla_\xi\nu-\cos\theta\,\overline\nabla_\xi\overline\nu\right>. 
\label{0070} \end{eqnarray} the equations \eqref{0068}, \eqref{0069} and \eqref{0070} give \begin{equation}\label{0071} I_2=\int_{\partial\Sigma}\vert{P_r\nu}\vert\left(\left<\overline\nabla_\xi\xi,\nu-\cos\theta\,\overline\nu\right>+\left<\xi,\overline\nabla_\xi\nu-\cos\theta\,\overline\nabla_\xi\overline\nu\right>\right)\,d\mu_{\partial\Sigma}. \end{equation} Notice that on the boundary $\partial\Sigma$, the variational field can be written as $\xi=\widetilde\xi+\left<\xi,\nu\right>\nu+f\eta$, where $\widetilde\xi \in \Gamma\left(T\partial\Sigma\right)$ is the projection of $\xi$ onto $T\partial\Sigma$. From the fact that $\xi\vert_{\partial\Sigma} \in \Gamma\left(T\partial\Omega\vert_{\partial\Sigma}\right)$ and the relation between $\overline\nu$ and $\{\nu,\eta\}$, we have $$0=\left<\xi,\overline\eta\right>=\left<\xi,\sin\theta\,\nu+\cos\theta\,\eta\right> \implies \left<\xi,\nu\right>=-f\cot\theta.$$ Thus, \begin{eqnarray} \xi &=& \widetilde\xi-f\cot\theta\,\nu+f\eta=\widetilde\xi-\frac{f}{\sin\theta}\left(\cos\theta\,\nu-\sin\theta\,\eta\right) \nonumber \\ &=& \widetilde\xi-f\csc\theta\,\overline\nu \label{0074} .\end{eqnarray} From \eqref{0002} and \eqref{0074}, \begin{eqnarray} \left<\overline\nabla_\xi\xi,\nu-\cos\theta\,\overline\nu\right> &=& \sin\theta\left<\overline\nabla_\xi\xi,\overline\eta\right>=\sin\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\xi,\xi) \nonumber \\ &=& \sin\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi-f\csc\theta\,\overline\nu,\widetilde\xi-f\csc\theta\,\overline\nu) \nonumber \\ &=& \sin\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi,\widetilde\xi)-2f\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi,\overline\nu)+f^2\csc\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\overline\nu,\overline\nu) \label{0088} .\end{eqnarray} In order to calculate the second term of \eqref{0071}, one needs to calculate 
$\overline\nabla_\xi\eta$, $\overline\nabla_\xi\nu$ and $\overline\nabla_\xi\overline\nu$. Let $\{e_1,...,e_n\}$ be an orthonormal basis of $T_p\Sigma$ for some point $p \in \Sigma$ such that $[e_i,e_j]=[e_i,\xi]=0$ for $i \neq j$ and extend them via $(\varphi_t)_*$ at $p$. Since $\left<e_i,\eta\right>=0$ and $\left[e_i,\xi\right]=0$, we have \begin{eqnarray} \overline\nabla_\xi \eta &=& \left<\overline\nabla_\xi \eta,\eta\right>\eta+\sum_{i=1}^n \left<\overline\nabla_\xi \eta,e_i\right>e_i \nonumber \\ &=& -\sum_{i=1}^n \left<\overline\nabla_\xi e_i,\eta\right>e_i \nonumber \\ &=& -\sum_{i=1}^n \left<\overline\nabla_{e_i}\xi,\eta\right>e_i \nonumber \\ &=& -\sum_{i=1}^n \left<\overline\nabla_{e_i} \left(\xi^\top+f\eta\right),\eta\right>e_i \nonumber \\ &=& -\sum_{i=1}^n \left<\overline\nabla_{e_i}\xi^\top+f\overline\nabla_{e_i}\eta+e_if\,\eta,\eta\right>e_i \nonumber \\ &=& \sum_{i=1}^n \left(\left<\overline\nabla_{e_i}\eta,\xi^\top\right>-e_if\right)e_i \nonumber \\ &=& -\sum_{i=1}^n \left<-\overline\nabla_{\xi^\top}\eta,e_i\right>e_i-\nabla f \nonumber \\ &=& -A\xi^\top-\nabla f \label{0080} ,\end{eqnarray} where $A$ is the shape operator of $\varphi$. 
Therefore, if $\{\widetilde{e}_1,...,\widetilde{e}_{n-1}\}$ denotes an orthonormal basis of $T_p(\partial\Sigma)$ with $p \in \partial\Sigma$, extending them via $\left(\varphi_t\right)_*$ at $p$ and using \eqref{0080}, we have \begin{eqnarray} \overline\nabla_\xi \nu &=& \left<\overline\nabla_\xi \nu,\nu\right>\nu+\left<\overline\nabla_\xi \nu,\eta\right>\eta+\sum_{i=1}^{n-1} \left<\overline\nabla_\xi \nu,\widetilde{e}_i\right>\widetilde{e}_i \nonumber \\ &=& -\left<\overline\nabla_\xi \eta,\nu\right>\eta-\sum_{i=1}^{n-1} \left<\overline\nabla_\xi \widetilde{e}_i,\nu\right>\widetilde{e}_i \nonumber \\ &=& \left<A\xi^\top+\nabla f,\nu\right>\eta-\sum_{i=1}^{n-1} \left<\overline\nabla_{\widetilde{e}_i}\xi,\nu\right>\widetilde{e}_i \nonumber \\ &=& \left(\frac{\partial f}{\partial\nu}+\left(\emph{II}_\Sigma\right)_\eta(\xi^\top,\nu)\right)\eta-\sum_{i=1}^{n-1}\left<\overline\nabla_{\widetilde{e}_i}\left(\widetilde\xi-f\cot\theta\,\nu+f\eta\right),\nu\right>\widetilde{e}_i \nonumber \\ &=& \left(\frac{\partial f}{\partial\nu}+\left(\emph{II}_\Sigma\right)_\eta(\xi^\top,\nu)\right)\eta-\sum_{i=1}^{n-1}\left(\left<\overline\nabla_{\widetilde{e}_i}\widetilde\xi,\nu\right>-\cot\theta\,\widetilde{e}_if+f\left<\overline\nabla_{\widetilde{e}_i}\eta,\nu\right>\right)\widetilde{e}_i \nonumber \\ &=& \left(\frac{\partial f}{\partial\nu}+\left(\emph{II}_\Sigma\right)_\eta(\xi^\top,\nu)\right)\eta-\sum_{i=1}^{n-1}\left(\left<\nabla_{\widetilde{e}_i}\widetilde\xi,\nu\right>-\cot\theta\,\widetilde{e}_if-f\left<\overline\nabla_{\widetilde{e}_i}\nu,\eta\right>\right)\widetilde{e}_i \nonumber \\ &=& \left(\frac{\partial f}{\partial\nu}+\left(\emph{II}_\Sigma\right)_\eta(\xi^\top,\nu)\right)\eta-A_{\partial\Sigma}\widetilde\xi+\cot\theta\,\widetilde\nabla f+f\left(A\nu-\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu)\nu\right) \label{0081} ,\end{eqnarray} where $A_{\partial\Sigma}$ is the shape operator of $\iota_{\partial\Sigma} : \partial\Sigma \hookrightarrow \Sigma$ and 
$\widetilde\nabla f$ is the gradient of $f$ in $\partial\Sigma$. We also have \begin{eqnarray} \overline\nabla_\xi\overline{\nu} &=& \left<\overline\nabla_\xi\overline{\nu},\overline\nu\right>\overline\nu+\left<\overline\nabla_\xi\overline{\nu},\overline\eta\right>\overline\eta+\sum_{i=1}^{n-1}\left<\overline\nabla_\xi\overline{\nu},\widetilde{e}_i\right>\widetilde{e}_i \nonumber \\ &=& \left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\xi,\overline\nu)\,\overline\eta-\sum_{i=1}^{n-1}\left<\overline\nabla_\xi\widetilde{e}_i,\overline\nu\right>\widetilde{e}_i \nonumber\\ &=& \left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\xi,\overline\nu)\,\overline\eta-\sum_{i=1}^{n-1}\left<\overline\nabla_{\widetilde{e}_i}\left(\widetilde\xi-f\csc\theta\,\overline\nu\right),\overline\nu\right>\widetilde{e}_i \nonumber\\ &=& \left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\xi,\overline\nu)\,\overline\eta-\sum_{i=1}^{n-1}\left(\left<\overline\nabla_{\widetilde{e}_i}\widetilde\xi,\overline\nu\right>-\csc\theta\,\widetilde{e}_if\right)\widetilde{e}_i \nonumber\\ &=& \left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\xi,\overline\nu)\,\overline\eta-\widetilde{A}\widetilde{\xi}-\csc\theta\,\widetilde\nabla f \label{0082} ,\end{eqnarray} where $\widetilde{A}$ is the shape operator of $\varphi\vert_{\partial\Sigma} : \partial\Sigma \rightarrow \partial\Omega$. 
Using \eqref{0081} and \eqref{0082} in \eqref{0070} we obtain \begin{multline}\label{0083} \left<\xi,\overline\nabla_\xi\nu-\cos\theta\,\overline\nabla_\xi\overline\nu\right>=f\left(\frac{\partial f}{\partial\nu}+\left(\emph{II}_\Sigma\right)_\eta(\xi^\top,\nu)\right)-\left<A_{\partial\Sigma}\widetilde\xi,\xi\right>+f\left<A\nu,\xi\right>+\\+f^2\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu)+\cos\theta\left<\widetilde{A}\widetilde\xi,\xi\right> .\end{multline} Since \begin{eqnarray} \left(\emph{II}_\Sigma\right)_\eta(\xi^\top,\nu) &=& \left<\overline\nabla_{\xi^\top}\nu,\eta\right>=\left<\overline\nabla_{\widetilde\xi}\nu,\eta\right>-f\cot\theta\left<\overline\nabla_\nu\nu,\eta\right> \nonumber \\ &=& \left<\overline\nabla_{\widetilde\xi}\left(\cos\theta\,\overline\nu+\sin\theta\,\overline\eta\right),-\sin\theta\,\overline\nu+\cos\theta\,\overline\eta\right>-f\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu) \nonumber\\ &=& \left<\overline\nabla_{\widetilde\xi}\overline\nu,\overline\eta\right>-f\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu) \nonumber\\ &=& \left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi,\overline\nu)-f\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu) \label{0084} ,\end{eqnarray} \begin{eqnarray} \left<A_{\partial\Sigma}\widetilde\xi,\xi\right> &=& \left<A_{\partial\Sigma}\widetilde\xi,\widetilde\xi\right>=-\left<\nabla_{\widetilde\xi}\nu,\widetilde\xi\right>\nonumber\\ &=& -\left<\overline\nabla_{\widetilde\xi}\nu-\left(\emph{II}_\Sigma\right)_\eta(\widetilde\xi,\nu)\,\eta,\widetilde\xi\right> \nonumber \\ &=& -\left<\overline\nabla_{\widetilde\xi}\left(\cos\theta\,\overline\nu+\sin\theta\,\overline\eta\right),\widetilde\xi\right> \nonumber\\ &=& -\cos\theta\left<\overline\nabla_{\widetilde\xi}\overline\nu,\widetilde\xi\right>-\sin\theta\left<\overline\nabla_{\widetilde\xi}\overline\eta,\widetilde\xi\right> \nonumber\\ &=& 
\cos\theta\left<\widetilde{A}\widetilde\xi,\widetilde\xi\right>+\sin\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi,\widetilde\xi) \nonumber\\ &=& \cos\theta\left<\widetilde{A}\widetilde\xi,\xi\right>+\sin\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi,\widetilde\xi) \label{0085} \end{eqnarray} and \begin{equation}\label{0086} \left<A\nu,\xi\right>=\left<A\nu,\xi^\top\right>=\left(\emph{II}_\Sigma\right)_\eta(\xi^\top,\nu) ,\end{equation} the equations \eqref{0084}, \eqref{0085} and \eqref{0086} in \eqref{0083} yield \begin{eqnarray} \left<\xi,\overline\nabla_\xi\nu-\cos\theta\,\overline\nabla_\xi\overline\nu\right> &=& f\frac{\partial f}{\partial\nu}+2f\left(\emph{II}_\Sigma\right)_\eta(\xi^\top,\nu)+f^2\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu)-\sin\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi,\widetilde\xi) \nonumber\\ &=& f\frac{\partial f}{\partial\nu}+2f\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi,\overline\nu)-f^2\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu)-\sin\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\widetilde\xi,\widetilde\xi) \label{0087} .\end{eqnarray} Summing \eqref{0088} and \eqref{0087}, we obtain \begin{multline}\label{0089} \left<\overline\nabla_\xi\xi,\nu-\cos\theta\,\overline\nu\right>+\left<\xi,\overline\nabla_\xi\nu-\cos\theta\,\overline\nabla_\xi\overline\nu\right>=f\left(\frac{\partial f}{\partial\nu}+\left(\csc\theta\left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(\overline\nu,\overline\nu)-\cot\theta\left(\emph{II}_\Sigma\right)_\eta(\nu,\nu)\right)f\right). \end{multline} We complete the proof after integrating \eqref{0089} over $\partial\Sigma$. \section{Proof of Proposition \ref{004.6}} In this appendix, we give a proof of Proposition \ref{004.6} based on the arguments of \cite[Appendix A]{guo2022stable}. 
First notice that a totally umbilical hypersurface $\varphi : \Sigma \rightarrow \Omega \subseteq \mathbb{M}^{n+1}(c)$ supported on a totally umbilical hypersurface $\partial\Omega$ is $r$-stable if and only if it is $0$-stable. In fact, if $\kappa \in (0,\infty)$ is the umbilicity factor of $\varphi$, then $P_r=\frac{n-r}{n}\binom{n}{r}\kappa^r\,I$. Hence, $S_r=\frac{1}{n-r}\tr P_r=\binom{n}{r}\kappa^r$ and for all $f_1,f_2 \in H^1(\Sigma)$, \begin{eqnarray*} \mathcal{I}_{r,\theta}(f_1,f_2) &=& \frac{n-r}{n}\binom{n}{r}\kappa^r\left(\int_\Sigma \left<\nabla f_1,\nabla f_2\right>-\left(\vert{A}\vert^2+nc\right)f_1f_2\,d\mu_\Sigma+\int_{\partial\Sigma}\alpha_\theta f_1f_2\,d\mu_{\partial\Sigma}\right) \\ &=& \frac{n-r}{n}\binom{n}{r}\kappa^r\,\mathcal{I}_{0,\theta}(f_1,f_2), \end{eqnarray*} proving the claim. For the rest of the appendix we will consider the warped product model for $\mathbb{M}^{n+1}(c)$ described in Section 7. Now for $\rho_0 \in (0,R_c)$, consider the Robin eigenvalue problem for $-\Delta-nc$ on a geodesic ball $B_{\rho_0}$ of $\mathbb{M}^n(c)$ \begin{equation}\label{0061} \begin{dcases*} -\Delta f-ncf=\lambda f,& in $B_{\rho_0}$ \\ \frac{\partial f}{\partial\overline\eta}-\frac{\cn_c(\rho_0)}{\sn_c(\rho_0)}f=0,& on $\partial B_{\rho_0}$ \end{dcases*} ,\end{equation} where $\overline\eta$ is the outward unit normal field of $\partial B_{\rho_0}$, and denote by $\lambda_1$ and $\lambda_2$ the first and second Robin eigenvalues of \eqref{0061}, respectively. The following lemma, which is adapted from \cite[Proposition A.1]{guo2022stable}, shows that the second eigenvalue is equal to zero. \begin{lema}\label{00B.1} $\lambda_2=0$ and its multiplicity is equal to $n$. 
\end{lema} \begin{proof} Since $B_{\rho_0}$ is rotationally symmetric, the eigenvalues of \eqref{0061} are given by $\tau_{k,l}$, for $(k,l) \in \mathbb{N} \times \left(\mathbb{N}\times\{0\}\right)$, where $\tau_{k,l}$ is the $k$-th eigenvalue of the problem \begin{equation}\label{0073} \begin{dcases*} -f^{\prime\prime}(\rho)-(n-1)\frac{\cn_c(\rho)}{\sn_c(\rho)}f^\prime(\rho)-\left(nc-\frac{l\left(l+n-2\right)}{\sn_c^2(\rho)}\right)f(\rho)=\tau_{\cdot,l}f(\rho),& in $(0,\rho_0)$ \\ f^\prime(\rho_0)-\frac{\cn_c(\rho_0)}{\sn_c(\rho_0)}f(\rho_0)=0 \end{dcases*} \end{equation} and $f(0)>0$, $f^\prime(0)>0$ for $l=0$ or $f(\rho) \sim \rho^l$ as $\rho\rightarrow 0$ for $l \in \mathbb{N}$. The eigenvalues $\tau_{k,l}$ of \eqref{0073} are given by \begin{equation}\label{0079} \tau_{1,l}=\inf\left\{\dfrac{\int_0^{\rho_0}\left(f^\prime(\rho)^2-\left(nc-\frac{l\left(l+n-2\right)}{\sn_c^2(\rho)}\right)f(\rho)^2\right)\sn_c^{n-1}(\rho)\,d\rho-\sn_c^{n-2}(\rho_0)\cn_c(\rho_0)f(\rho_0)^2}{\int_0^{\rho_0}f(\rho)^2\sn_c^{n-1}(\rho)\,d\rho}\right\} ,\end{equation} over all $f \in H^1([0,\rho_0])\backslash\{0\}$ with $f(0)=0$ and for $k \in \mathbb{N}$, $\tau_{k,l}$ is defined in a similar way as $\eqref{0079}$ over all $f \in E_{\tau_{m,l}}^\perp\backslash\{0\}$ for all $m \in \{0,...,k\}$, where $E_{\tau_{m,l}}$ is the eigenspace of $\eqref{0073}$ associated to $\tau_{m,l}$ and with $f(0)=0$. Since $\tau_{1,l_1}<\tau_{1,l_2}$ for all $l_1<l_2$ we have that $\lambda_1=\tau_{1,0}$ and $\lambda_2=\min\left\{\tau_{2,0},\tau_{1,1}\right\}$. 
Notice that $\tau_{1,1}=0$, since for the positive function $f_1=\sn_c$ in $[0,\rho_0]$, we have $f_1^\prime(\rho)^2+cf_1(\rho)^2=1$, $f_1^{\prime\prime}(\rho)=-cf_1(\rho)$ and \begin{eqnarray*} -f_1^{\prime\prime}(\rho)-(n-1)\frac{\cn_c(\rho)}{\sn_c(\rho)}f_1^\prime(\rho)-\left(nc-\frac{n-1}{\sn_c^2(\rho)}\right)f_1(\rho) &=& cf_1(\rho)-(n-1)\frac{f_1^\prime(\rho)^2}{f_1(\rho)}-ncf_1(\rho)+\frac{n-1}{f_1(\rho)} \\ &=& \frac{n-1}{f_1(\rho)} \cdot \left(1-f_1^\prime(\rho)^2\right)-c(n-1)f_1(\rho) \\ &=& 0 \end{eqnarray*} for all $\rho\in(0,\rho_0)$. The same argument used in \cite[Proposition A.1]{guo2022stable} shows that $\tau_{2,0}>0$ and its multiplicity is the same as that of the second eigenvalue, $n-1$, of the Laplacian on $\mathbb{S}^{n-1}$. \end{proof} \begin{proof}[Proof of Proposition \ref{004.6}] Let $\varphi : \Sigma \rightarrow \Omega \subseteq \mathbb{M}^{n+1}(c)$ be a totally umbilical capillary hypersurface supported on $\partial\Omega$ with contact angle $\theta\in(0,\pi)$. If $\kappa$ denotes the principal curvature of $\varphi$, the Gauss equation shows that the sectional curvature is equal to $K=K_{\mathbb{M}^{n+1}(c)}+\kappa^2=\kappa^2+nc$. From \eqref{0092} we have that $\nu=-\cot\theta\,\eta+\csc\theta\,\overline\eta$, thus for all $X,Y \in \Gamma(T\partial\Sigma)$ \begin{eqnarray*} \left(\emph{II}_{\partial\Sigma}\right)_\nu(X,Y) &=& \left<\nabla_XY,\nu\right> \nonumber \\ &=& \left(\emph{II}_{\partial\Omega}\right)_{\overline\eta}(X,Y)\,\csc\theta-\left(\emph{II}_\Sigma\right)_\eta(X,Y)\,\cot\theta ,\end{eqnarray*} proving that $\iota_{\partial\Sigma} : \partial\Sigma \hookrightarrow \Sigma$ is a totally umbilical hypersurface with umbilicity factor equal to $\kappa_{\partial\Sigma}=\kappa_{\partial\Omega}\csc\theta-\kappa\cot\theta$. Thus the bilinear form associated to the weak formulation of \eqref{0061} is exactly the $0$-index form $\mathcal{I}_{0,\theta}$ described in \eqref{0010}. 
From Lemma \ref{00B.1}, the Morse index of $\mathcal{I}_{0,\theta}$ over $H^1(\Sigma)$ is equal to $1$. But since the function \begin{equation}\label{0091} f(\rho)=\begin{dcases*} \frac{\cn_c(\rho_0)\cn_c(\rho)-1}{nc},& $c \neq 0$ \\ -\frac{\rho^2+\rho_0^2}{2n},& $c=0$ \end{dcases*} \end{equation} is a non-positive function in $\Sigma$ such that, for $c \neq 0$, \begin{eqnarray*} \Delta f+ncf &=& \frac{1}{\sn_c^{n-1}(\rho)}\frac{d}{d\rho}\left(\sn_c^{n-1}(\rho)\frac{d}{d\rho}\left(\frac{\cn_c(\rho_0)\cn_c(\rho)-1}{nc}\right)\right)+\cn_c(\rho_0)\cn_c(\rho)-1 \\ &=& -\frac{\cn_c(\rho_0)}{n\sn_c^{n-1}(\rho)}\frac{d}{d\rho}\left(\sn_c^n(\rho)\right)+\cn_c(\rho_0)\cn_c(\rho)-1 \\ &=& -\frac{\cn_c(\rho_0)}{n\sn_c^{n-1}(\rho)} \cdot n\sn_c^{n-1}(\rho)\cdot\cn_c(\rho)+\cn_c(\rho_0)\cn_c(\rho)-1=-1 \end{eqnarray*} in $\Sigma$ and \begin{eqnarray*} \frac{\partial f}{\partial\overline\eta} &=& \left.\frac{d}{d\rho}\left(\frac{\cn_c(\rho_0)\cn_c(\rho)-1}{nc}\right)\right\vert_{\rho=\rho_0}=-\frac{\cn_c(\rho_0)}{n} \cdot\sn_c(\rho_0)=-\frac{\cn_c(\rho_0)}{\sn_c(\rho_0)} \cdot \frac{c\sn_c^2(\rho_0)}{nc} \\ &=& \frac{\cn_c(\rho_0)}{\sn_c(\rho_0)}\cdot\frac{\cn_c^2(\rho_0)-1}{nc}=\frac{\cn_c(\rho_0)}{\sn_c(\rho_0)}f(\rho_0) \end{eqnarray*} on $\partial\Sigma$, and for $c=0$, \begin{equation*} \Delta f=\frac{1}{\rho^{n-1}}\frac{d}{d\rho}\left(\rho^{n-1}\frac{d}{d\rho}\left(-\frac{\rho^2+\rho_0^2}{2n}\right)\right)=-\frac{1}{n\rho^{n-1}}\frac{d}{d\rho}\left(\rho^n\right)=-1 \end{equation*} in $\Sigma$ and \begin{equation*} \frac{\partial f}{\partial\overline\eta}=\left.\frac{d}{d\rho}\left(-\frac{\rho^2+\rho_0^2}{2n}\right)\right\vert_{\rho=\rho_0}=-\frac{\rho_0}{n}=-\frac{1}{\rho_0}\cdot\frac{2\rho_0^2}{2n}=\frac{\cn_0(\rho_0)}{\sn_0(\rho_0)}f(\rho_0) \end{equation*} on $\partial\Sigma$, \cite[Proposition 3.1]{tran2020morse} implies that the Morse index of $\mathcal{I}_{0,\theta}$ over $\mathcal{F}$ is equal to $1-1=0$, proving that $\varphi$ is $0$-stable. 
\end{proof} \section{Proof of Lemma \ref{012.1}} In this appendix we give a proof of Lemma \ref{012.1}. \begin{enumerate}[(i)] \item The first claim is immediate. \item Now assume that $\lambda_1<0$ and let $f_i \in E_{\lambda_i}$ be an eigenfunction associated to the $i$-th eigenvalue $\lambda_i$ of $T_r^S$ such that $\left\{f_i\right\}_{i \in \mathbb{N}}$ is an orthonormal basis for $L^2(\Sigma)$. Since $f_1$ does not change its sign, we have $\int_\Sigma f_1\,d\mu_\Sigma \neq 0$. Given $f \in H_0^1(\Sigma)$, let $a=-\frac{\int_\Sigma f\,d\mu_\Sigma}{\int_\Sigma f_1\,d\mu_\Sigma}$ and define $\widetilde{f}=af_1+f$; we have $\int_\Sigma \widetilde{f}\,d\mu_\Sigma=0$. If $0$ is not an eigenvalue of $T_r^S$, it follows from the Fredholm alternative that there exists a unique function $f \in H_0^1(\Sigma)$ satisfying $T_r^Sf=-1$ weakly in $\Sigma$, i.e., $I_r^S(f,g)=-\int_\Sigma g\,d\mu_\Sigma$ for all $g \in H_0^1(\Sigma)$. Therefore, \begin{eqnarray*} I_r^S(\widetilde{f},\widetilde{f}) &=& I_r^S(af_1+f,af_1+f)=a^2I_r^S(f_1,f_1)+2aI_r^S(f_1,f)+I_r^S(f,f) \\ &=& a^2\lambda_1-2a\int_\Sigma f_1\,d\mu_\Sigma-\int_\Sigma f\,d\mu_\Sigma \\ &=& a^2\lambda_1+\int_\Sigma f\,d\mu_\Sigma, \end{eqnarray*} proving that $\varphi$ is symmetric $r$-unstable if $\int_\Sigma f\,d\mu_\Sigma<0$. If $\int_\Sigma f\,d\mu_\Sigma \geq 0$ we have \begin{equation}\label{0060} I_r^S(f,f)=-\int_\Sigma f\,d\mu_\Sigma \leq 0. \end{equation} Since $\lambda_2>0$, we have $f \notin E_{\lambda_1}^\perp$; otherwise we would obtain as a consequence of \eqref{0058} and \eqref{0060} that \begin{equation}\label{desigualdade} 0 \geq I_r^S\left(\frac{f}{\left\Vert{f}\right\Vert_{L^2(\Sigma)}},\frac{f}{\left\Vert{f}\right\Vert_{L^2(\Sigma)}}\right) \geq \lambda_2>0, \end{equation} which is a contradiction. Thus, given $\overline{f} \in \mathcal{F}$, there exist a unique $b \in \mathbb{R}$ and $\widehat{f} \in E_{\lambda_1}^\perp$ such that $\overline{f}=bf+\widehat{f}$. 
Therefore, \begin{eqnarray*} I_r^S(\overline{f},\overline{f}) &=& bI_r^S(f,\overline{f})+I_r^S(\widehat{f},bf+\widehat{f}) \\ &=& -b\int_\Sigma\overline{f}\,d\mu_\Sigma-b\int_\Sigma\widehat{f}\,d\mu_\Sigma+I_r^S(\widehat{f},\widehat{f}) \\ &\geq& b^2\int_\Sigma f\,d\mu_\Sigma+\lambda_2\int_\Sigma \widehat{f}^2\,d\mu_\Sigma \geq 0, \end{eqnarray*} which implies that $\varphi$ is symmetric $r$-stable, proving (ii). \item In order to prove (iii), suppose $f=f_2$ and let $\widetilde{f}=af_1+f_2$. It follows from the orthonormality of $\left\{f_l\right\}_{l \in \mathbb{N}}$ that \begin{eqnarray*} I_r^S(\widetilde{f},\widetilde{f}) &=& a^2I_r^S(f_1,f_1)+2aI_r^S(f_1,f_2)+I_r^S(f_2,f_2) \\ &=& a^2\lambda_1\int_\Sigma f_1^2\,d\mu_\Sigma+2a\lambda_2\int_\Sigma f_1f_2\,d\mu_\Sigma+\lambda_2\int_\Sigma f_2^2\,d\mu_\Sigma \\ &=& \lambda_1\left(\int_\Sigma f_2\,d\mu_\Sigma\right)^2\left(\int_\Sigma f_1\,d\mu_\Sigma\right)^{-2}+\lambda_2. \end{eqnarray*} Therefore, if $\lambda_2 \leq 0$ and $\int_\Sigma f_2\,d\mu_\Sigma \neq 0$, we have $I_r^S(\widetilde{f},\widetilde{f})<0$, proving that $\varphi$ is $r$-unstable, which is the first claim of (iii). The result also holds if $\lambda_2<0$, which is (v). \item Finally, in order to prove the claim of (iv), notice that since $\lambda_2=0$ and $1 \in E_{\lambda_2}^\perp$ by hypothesis, there exists a unique $f \in H_0^1(\Sigma) \cap E_{\lambda_2}^\perp$ such that $T_r^Sf=-1$ weakly in $\Sigma$. If $\int_\Sigma f\,d\mu_\Sigma >0$, we use \eqref{0058}, \eqref{0060} and \eqref{desigualdade} in order to conclude that $f \notin E_{\lambda_1}^\perp$ and we proceed as in the proof of (ii). If $\int_\Sigma f\,d\mu_\Sigma=0$, we have $$I_r^S(f,f)=-\int_\Sigma f\,d\mu_\Sigma=0=\lambda_2.$$ If $f \in E_{\lambda_1}^\perp$, by \eqref{0058} $f$ would be an eigenfunction associated to $\lambda_2=0$, thus $T_r^Sf=0$, which is a contradiction. 
Therefore $f \notin E_{\lambda_1}^\perp$ and the proof is analogous to the last part of the proof of (ii). \end{enumerate} \end{document}
\begin{document} \title{Style Pooling: Automatic Text Style Obfuscation for Improved\\ Classification Fairness} \vspace{-1ex} \setlength{\belowdisplayskip}{1ex} \setlength{\belowdisplayshortskip}{1ex} \setlength{\abovedisplayskip}{-0.5ex} \setlength{\abovedisplayshortskip}{0pt} \vspace{-1.5ex} \begin{abstract} \vspace{-1.5ex} Text style can reveal sensitive attributes of the author (e.g. race or age) to the reader, which can, in turn, lead to privacy violations and bias in both human and algorithmic decisions based on text. For example, the style of writing in job applications might reveal protected attributes of the candidate, which could lead to bias in hiring decisions, regardless of whether hiring decisions are made algorithmically or by humans. We propose a VAE-based framework that obfuscates stylistic features of human-generated text through style transfer \textit{by automatically re-writing the text itself}. Our framework operationalizes the notion of obfuscated style in a flexible way that enables two distinct notions of obfuscated style: (1) a minimal notion that effectively \textit{intersects} the various styles seen in training, and (2) a maximal notion that seeks to obfuscate by adding stylistic features of \textit{all sensitive attributes} to text, in effect, computing a \textit{union} of styles. Our style-obfuscation framework can be used for multiple purposes; however, we demonstrate its effectiveness in improving the fairness of downstream classifiers. 
We also conduct a comprehensive study on style pooling's effect on fluency, semantic consistency, and attribute removal from text, in two- and three-domain style obfuscation.\footnote{Code, models, and data are available at \url{https://github.com/mireshghallah/style-pooling}} \end{abstract} \vspace{-3.5ex} \section{Introduction} \vspace{-1ex} Machine learning (ML) algorithms are used in a wide range of tasks, including high-stakes applications like determining credit ratings, setting insurance policy rates, making hiring decisions, and performing facial recognition. It has been shown that such algorithms can produce outcomes that are biased towards a certain gender or race~\cite{buolamwini2018gender, silva-etal-2021-towards, sheng2021societal}. \begin{table*}[] \centering \caption{Example Blog sentences transformed with A4NT~\cite{a4nt} and our proposed Intersection and Union obfuscations. Our Intersection obfuscation aims at changing the style such that it does not reflect either teen or adult style. However, the union tries to reflect both by making changes like adding ``...'' to the beginning of the sentence (adult style) while keeping the ``grr'' (teen style), or by adding exclamation marks at the end of the sentence.} \vspace{-1ex} \label{tab:sentences} \centering \begin{adjustbox}{width=1\linewidth, margin=-1ex 0ex 0ex 0ex} \input{tables/sentences_all} \end{adjustbox} \vspace{-3ex} \end{table*} Ideally, high-stakes decisions made by either humans or ML algorithms should not be influenced by irrelevant, protected attributes like nationality, age, or gender. In many instances, the input data used for making high-stakes decisions is text that is authored by a human candidate -- for example, hiring decisions are often based on bios and personal statements. Recent work~\cite{de2019bias} shows that automatic hiring-decision models trained on bios are less likely to select female candidates for certain roles (e.g. 
architect, software engineer, and surgeon) even when the gender of the author is not explicitly provided to the system. Bias is, of course, not limited to algorithmic decisions; humans make biased decisions based on text, even when the protected attributes of the author are not explicitly revealed~\cite{pedreshi2008discrimination}. Together, these results indicate that both algorithms and humans can (1) decipher protected attributes of authors based on stylistic features of text, and (2) whether consciously or not, be biased by this information. A large body of prior work has attempted to address \textit{algorithmic bias} by modifying different stages of the natural language processing (NLP) pipeline. For example, \citet{ravfogel2020null} attempt to de-bias word embeddings used by NLP systems, while \citet{elazar2018adversarial} address the bias in learned model representations and encodings. While effective in many cases, such approaches do nothing to mitigate bias in decisions made by humans based on text. We propose a fundamentally different approach. Rather than mitigating bias in learning algorithms that make decisions based on text, we propose a framework that obfuscates stylistic features of human-generated text \textit{by automatically re-writing the text itself}. By obfuscating stylistic features, readers (human or algorithmic) will be less able to infer protected attributes that enable bias. We introduce a novel framework that enables `style pooling': the automatic transduction of user-generated text to a central, obfuscated style. Notions of `centrality' can themselves introduce bias -- for example, a system might learn to obfuscate by mapping all text to the dominant style seen in its training corpus. This might `white-wash' text, ignoring stylistic features of underrepresented groups in the learned notion of central style. 
Our framework operationalizes the notion of centrality in a more flexible way: our probabilistic approach allows us to choose between two distinct notions of centrality. First, we define a variant of our model which is incentivized to learn a minimal notion of central style that effectively \textit{intersects} the various styles seen in training. This is achieved through the design of this variant's probabilistic prior. We further equip this variant with a novel ``de-boosting'' mechanism, which amplifies the use of words that are less likely to leak sensitive attributes, and de-incentivizes the use of words whose presence might hint at a particular sensitive attribute. Second, we propose an alternative prior that instead incentivizes a maximal notion of style that seeks to obfuscate by adding stylistic features of all protected attributes to text -- in effect, computing a \textit{union} of styles. Table~\ref{tab:sentences} shows our intersection and union obfuscation applied to sentences from the Blogs dataset, and highlights the differences between them. While we propose both these obfuscations in our framework and leave it to the users to choose, it is worth noting that the cognitive process literature shows that when humans are confronted with conflicting biasing information, they tend to form an opinion about the conflicting text, based on their own implicit biases~\cite{richter2017comprehension}. Therefore, removing sensitive stylistic features may be more effective than combining them. This is also commensurate with our findings, where we observed that intersection more successfully improves the fairness metric (Section~\ref{sec:fairness}). We extensively evaluate our proposed framework on a wide range of tasks. 
First, we compare and contrast our ``intersection'' and ``union'' obfuscations on a modified version of the Yelp dataset~\cite{shen2017style} where we have created three stylistic domains by deliberately misspelling three disjoint sets of words. We show that our intersection obfuscation successfully removes these misspellings and replaces them by the dominant spelling of the word $99.20\%$ of the time, while our union obfuscation spreads the misspellings into the other two domains $46.40\%$ of the time. Then, we evaluate our framework on the Blogs data~\cite{schler2006effects}, where the sensitive attribute is age, and we measure the impact our obfuscations have on the fairness of a job classifier, using the TPR-gap measure from~\citet{de2019bias}. We also evaluate the removal of sensitive attributes, fluency of the generated text, and the uncertainty of a sensitive attribute classifier for our framework, in both two- and three-domain setups. \vspace{-1ex} \section{Proposed Method} \vspace{-1.5ex} \begin{figure} \caption{Proposed unsupervised framework for \textit{style pooling}.} \label{fig:model} \end{figure} In this section, we first introduce our model structure, then describe our style-pooling priors and the unsupervised learning and inference techniques we leverage for this model class. Finally, we introduce our style de-boosting mechanism. \vspace{-1ex} \subsection{Model Structure} \vspace{-1ex} Consider a training corpus consisting of utterances produced by authors with various protected attributes. In Figure~\ref{fig:model}, we depict a grouping of authors by age into three domains. We let $x_i$ represent an individual observed text utterance in the corpus, and assume $M$ domains (sensitive attribute classes) in the dataset. $y_i$ is a latent variable that represents the obfuscated version of $x_i$. Hence, $y_i$ is a text-valued latent, while $x_i$ is a text-valued observation. 
We let $d(i)$ denote the domain of the $i^{th}$ sample in the dataset. With this definition, our generative process assumes each sentence $x_i$, with corresponding domain $d(i)$, is generated as follows: First, a latent sentence $y_i$ is sampled from a central prior, $p_{prior}(y_i)$, which is domain agnostic. Then, $x_i$ is sampled conditioned on $y_i$ from a transduction model, $p(x_i|y_i;\theta_{y \to x}^{d(i)})$. We let $\theta_{y \to x}^{D_j}$ represent the parameters of the transduction model for the $j$th domain. We extensively discuss $p_{prior}$ in the next section. For now, we assume the prior distributions are pretrained on the observed data and therefore omit their parameters for simplicity of notation. Together, this gives the following joint likelihood: {\small \vspace{-1ex} \begin{equation} \begin{alignedat}{1} \label{eq:joint} p(&X^{D_1},...,X^{D_M}, Y;\theta_{y \to x}^{D_1}, ...,\theta_{y \to x}^{D_M})\\ &= \prod^N_{i = 1}p\big(x_i|y_{i}; \theta_{y \to x}^{d(i)}\big)\, p_{prior}\big(y_{i}\big) \end{alignedat} \end{equation}} The log marginal likelihood of the observed data, which we approximate during training, can be written as: {\small \vspace{-1ex} \begin{equation} \begin{alignedat}{1} \label{eq:marginal} \log\ p(&X^{D_1},...,X^{D_M};\theta_{y \to x}^{D_1}, ...,\theta_{y \to x}^{D_M})\\= & \log \sum\nolimits_Y p(X^{D_1},...,X^{D_M}, Y;\theta_{y \to x}^{D_1}, ...,\theta_{y \to x}^{D_M}) \end{alignedat} \end{equation} } \noindent\textbf{Neural Architectures.} We select a parameterization for our transduction distributions that makes no independence assumptions. We use an encoder-decoder architecture based on the standard attentional Seq2Seq model which has been shown to be successful across various tasks~\cite{bahdanau2014neural, rush2015neural}. Our prior distributions for each domain are built using recurrent language models which also make no independence assumptions. 
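As a concrete sketch of the generative story and the joint likelihood in Eq.~\ref{eq:joint}, the following toy computation uses unigram tables as stand-ins for the neural transduction models and the pretrained prior; all tables and numbers here are illustrative, not our trained models:

```python
import math

# Hypothetical stand-ins: a unigram table replaces each Seq2Seq transduction
# model p(x|y; theta^{d(i)}) and the domain-agnostic central prior p_prior.
transducers = {0: {'a': 0.9, 'b': 0.1},   # domain D_1
               1: {'a': 0.5, 'b': 0.5}}   # domain D_2
prior = {'a': 0.5, 'b': 0.5}              # central prior p_prior

def log_prob(table, sentence):
    """Log-likelihood of a token sequence under a unigram table."""
    return sum(math.log(table[tok]) for tok in sentence)

def joint_log_likelihood(corpus, latents, domains):
    """log p(X, Y) = sum_i [log p(x_i | y_i; theta^{d(i)}) + log p_prior(y_i)]."""
    return sum(log_prob(transducers[d], x) + log_prob(prior, y)
               for x, y, d in zip(corpus, latents, domains))
```

For instance, `joint_log_likelihood([['a'], ['b']], [['a'], ['a']], [0, 1])` evaluates to $\log 0.9 + 3\log 0.5$: each observed sentence is scored under its own domain's transduction model, while every latent sentence is scored under the single shared prior.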
\vspace{-1.5ex} \subsection{Prior Distributions} \vspace{-1ex} The critical components of our framework that incentivize obfuscation are our specialized priors, as depicted in Figure~\ref{fig:model}. We introduce two prior variants, $p_{\textrm{Inter}}(y)$ and $p_{\textrm{Union}}(y)$, which incentivize induction of intersected styles and the union of all styles, respectively. Each prior is assembled out of $M$ (here $M=3$) separate language models -- $p^{D_1}$, $p^{D_2}$, ..., $p^{D_M}$ -- each trained on the corresponding domain of observed utterances in the training data. The intersection prior, $p_{\textrm{Inter}}(y)$, is computed by taking the sum of the likelihoods of an entire utterance across the language models from all $M$ domains (and then re-normalizing to ensure that the resulting prior is a valid distribution). This utterance-level average pooling approach incentivizes a ``majority-voting'' effect, in which the model is pressured to remove any words and stylistic features that are characteristic of one domain, but not the others, and converge to features that are shared by the majority of the domains. Therefore the prior for intersection becomes: {\small \vspace{-2ex} \begin{equation} p_{Inter}(y_i) =\frac{1}{M} \sum\nolimits_{j=1}^{M} p^{D_j}(y_i) \label{eq:inters} \end{equation} } In contrast, the union prior, $p_{\textrm{Union}}(y)$, computes the likelihood of an utterance according to the minimum likelihood across each domain's language model \textit{at each token position}, $t$.\footnote{The token-wise min of the language models is not, itself, a normalized distribution. 
However, we can treat it as implicitly normalized in our training objective (discussed in the next section) because the absence of normalization only contributes an additive constant to our objective.} Through experimentation (Sec.~\ref{sec:synth}) we empirically observed that this prior rewards the model for inserting as many stylistic features as possible that are unique to each domain. {\begingroup\makeatletter\def\f@size{8}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\large\normalfont#1}} \vspace{-2ex} \begin{equation} \begin{alignedat}{1} p_{Union}(y_i) \propto \prod\nolimits_t^T \min (p^{D_1}(y_{i,t}|y_{i,<t}), ..., p^{D_M}(y_{i,t}|y_{i,<t}) ) \end{alignedat} \vspace{-5ex} \label{eq:union} \end{equation} \endgroup } \vspace{-1ex} \subsection{Learning and Inference} \vspace{-1ex} Training is accomplished using an approach from~\cite{he2020probabilistic}: we employ seq2seq inference networks and use an amortized inference scheme similar to that used in a conventional VAE, but for sequential discrete latents. Ideally, learning should directly optimize the log data likelihood, which is the marginal shown in Eq.~\ref{eq:marginal}. However, due to our model's neural parameterization, the marginal is intractable. 
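The contrast between the two pooling priors (Eqs.~\ref{eq:inters} and \ref{eq:union}) can be illustrated with a small numerical sketch; the per-domain unigram tables below are made-up stand-ins for the trained recurrent language models:

```python
import math

# Hypothetical per-domain token distributions p^{D_j}; each domain strongly
# prefers one token that the others rarely use.
domain_lms = [
    {'a': 0.8, 'b': 0.1, 'c': 0.1},   # p^{D_1}
    {'a': 0.1, 'b': 0.8, 'c': 0.1},   # p^{D_2}
    {'a': 0.1, 'b': 0.1, 'c': 0.8},   # p^{D_3}
]

def log_p_inter(y):
    """Intersection prior: utterance-level average, (1/M) sum_j p^{D_j}(y)."""
    per_domain = [math.exp(sum(math.log(lm[t]) for t in y)) for lm in domain_lms]
    return math.log(sum(per_domain) / len(domain_lms))

def log_p_union(y):
    """Union prior: token-wise min across domains (unnormalized, per the footnote)."""
    return sum(math.log(min(lm[t] for lm in domain_lms)) for t in y)
```

For the single-token utterance `['a']`, the intersection prior assigns probability $(0.8+0.1+0.1)/3 = 1/3$, while the token-wise min underlying the union prior assigns weight $0.1$: a token characteristic of only one domain is penalized by the min, which (after the proportionality in Eq.~\ref{eq:union}) shifts mass toward utterances whose tokens all domains find plausible.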
To overcome the intractability of computing the true data likelihood, we adopt amortized variational inference~\cite{kingma2013auto} to derive a surrogate objective for learning: the evidence lower bound (ELBO) on the log marginal likelihood: {\small \begin{gather} \begin{alignedat}[b]{2} & \log\ p(X^{D_1},&&...,X^{D_M}; \theta_{y\to x}^{D_1},...,\theta_{y\to x}^{D_M}) \\ & \geq {\mathcal{L}}_{\text{ELBO}} && (X^{D_1},...,X^{D_M};\theta_{y\to x}^{D_1},...,\theta_{y\to x}^{D_M}, \phi_{x\to y}) \\ &= \sum\nolimits_i^N &&\underbrace{\Big[ {\mathbb{E}}_{q(y_i | x_i; \phi_{x \to y})}[\log\ p(x_i|y_i; \theta_{y \to x}^{d(i)})]}_{\textrm{Reconstruction likelihood}} \\ & &&- \underbrace{D_{\mathrm{KL}}\big(q(y_i | x_i ; \phi_{x \to y}) \,||\, p_{prior}(y_i) \big) \Big]}_{\textrm{KL regularizer}} \end{alignedat}\label{eq:elbo} \raisetag{30pt} \vspace{-2ex} \end{gather}} This new objective introduces $q(y | x ; \phi_{x \to y})$, which represents the inference network distribution that approximates the model's true posterior, $p(y|x; \theta_{y\to x})$. Learning operates by optimizing the lower bound over both variational and model parameters. Once training is over, the posterior distribution can be used for style obfuscation. The reconstruction and KL terms in Eq.~\ref{eq:elbo} involve intractable expectations, which means we need to approximate their gradients. To address this, we use the Gumbel-softmax~\cite{jang2016categorical} straight-through estimator to backpropagate gradients from both the KL and reconstruction loss terms. \noindent\textbf{Length Control.} During the training of the model, we observed that it tends to repeat the same word when it is trying to generate obfuscated text, $y_i$. 
To mitigate this, we append two floating point length tokens to the input of the inference network's decoder at each step $t$; one of these tokens tells the model which step it is on, and the other tells it how many steps are left~\cite{kikuchi2016controlling, hu2017toward}. We also experimented with positional embeddings instead of floating point tokens, but we observed that they yield worse convergence. Another measure we take to encourage shorter sentences is to hard-stop the decoding during training once the re-written sentence has the same length as the original sentence. To further stabilize training, we share parameters between the inference network and the transduction models, appending an embedding to the input to indicate the output domain. \vspace{-2ex} \subsection{Style De-boosting} \vspace{-1ex} To better encourage the removal of identifying stylistic features, we introduce a de-boosting mechanism, which incentivizes the use of words that are less likely to leak sensitive attributes, and de-incentivizes the use of words whose presence might hint at a particular sensitive attribute. We build on the intuition that for a given word $w$ in the vocabulary, if the probability that it belongs to domain $m$ is similar to the probability that it belongs to domain $k$, for any given $m, k$ within the possible domains, $M$, then we can assume that this word does not reveal style. However, if there is a large gap between the two probabilities, that word might hint at a certain domain if it is present in the re-written text. 
Therefore, we devise a normalized ``style score'', $s$, for each word $w$ in the vocabulary\footnote{While this style score may also highlight \textit{content} that is characteristic of a domain in addition to stylistic word choices, we find in experiments that our use of de-boosting does not substantially harm the utility of downstream classifiers -- indicating that content is largely preserved, even with de-boosting.}: \vspace{-1ex} \begin{equation} \footnotesize s_w = \frac{\max(f^{D_1}_w,f^{D_2}_w, ...,f^{D_M}_w) - \min(f^{D_1}_w,f^{D_2}_w, ...,f^{D_M}_w)}{\max(f^{D_1}_w,f^{D_2}_w, ...,f^{D_M}_w)} \label{word:score} \end{equation} where $f^{D_1}_w$ is the frequency of word $w$ in the training corpus for domain $D_1$, divided by the overall number of tokens (words) in the domain corpus. Using these scores, we modify the output logits of the decoder so that the output probability distribution over the vocabulary for sample $i$ at step $t$ is given by: \begin{equation} p(y_{i,t}|y_{i,<t},x_i) \propto \text{softmax}(L_{i,t} - \gamma \cdot S) \label{eq:deboost} \end{equation} \noindent Here, $L_{i,t}$ represents the logits at step $t$, while $S$ is the score vector for all the words in the vocabulary. $\gamma$ is a multiplier that helps tune the amount of de-boosting. Due to the nature of this de-boosting mechanism, it makes sense only to use it with the intersection obfuscation and not the union. \section{Experimental Setup} Here, we provide a brief description of our experimental setup. Our code, data and model checkpoints are uploaded in the supplementary material. More details on the code, model configurations, datasets and hyperparameters are provided in Appendix Sections~\ref{sec:app-code},~\ref{sec:app-configs},~\ref{sec:app-data} and~\ref{sec:app-hyperparams}. 
\subsection{Model Configurations} We used a single-layer attentional LSTM-based Seq2Seq encoder-decoder for all the experiments, with a hidden layer size of $512$ for both encoder and decoder, and a word embedding size of $128$. For the attribute classifiers and language models, we also use LSTM models with the same architecture, with a final projection layer of the size of sensitive classes/vocabulary. \subsection{Datasets} \label{sec:exp-data} \noindent\textbf{Synthetic Yelp dataset~\cite{shen2017style}.} We shuffle all the sentences in the Yelp reviews dataset and divide them into three groups (domains). We then randomly choose $15$ words from the top $20$ highest frequency words in the dataset, and allocate the set of the top $5$ words ($W_1$) to $D_1$ (domain $1$), the next $5$ to $D_2$ and the least frequent $5$ words to $D_3$. We misspell all occurrences of $W_1$ in $D_1$, by changing ``word'' to ``11word11''. We then add ``11word11'' to the vocabulary, and do this for all the $5$ words in all $3$ domains ($15$ words total). After this transformation, we have $3$ domains with disjoint stylistic markers, which can help us more concretely analyze our obfuscation mechanism. \noindent\textbf{Blogs dataset~\cite{schler2006effects}.} The Blogs dataset is a collection of micro-blogs containing over 3.3 million sentences, along with annotations of the author's age and occupation. We use this data in both two- and three-domain style pooling, where we treat age as the sensitive attribute and balance the data so each domain has the same number of sentences. In the two-domain setup, we divide the data into two groups of teenagers and adults. In the three-domain setup, we have three groups of teenagers, young adults (20s) and adults (people in their 30s and 40s). We use this dataset for multiple evaluations including fairness. We compare our obfuscation to that of~\citet{a4nt} in all evaluations with this data. 
\noindent\textbf{Twitter dataset~\cite{rangel2016overview}.} We use data from the PAN16 dataset, which contains manually annotated (from LinkedIn) age and gender of 436 Twitter users, along with up to 1000 tweets from each user. We use this data for the purpose of sensitive attribute (age) removal comparison with~\citet{elazar2018adversarial} in Section~\ref{sec:app-elazar}, and have therefore used the exact same preprocessing and handling of the data as done by them. \noindent\textbf{DIAL dataset~\cite{DIAL}.} This is a Twitter dataset which has binary dialect annotations of African American English (AAE) and Standard American English (SAE)\footnote{Using standard for non-AAE might not be the most suitable naming, but we use it hereon given the lack of a better substitute.}, setting ``author’s race'' as the sensitive attribute. We use this dataset for comparison with the work of~\citet{xu2019privacy}. \subsection{Baselines} \noindent\textbf{One language model prior (One-LM).} This model is an instance of our framework which uses the output distribution of a single language model as the prior. For the Yelp Synthetic data, this single LM is trained on the original data, which does not have our modifications and would provide the ideal ``intersection'', since the original data itself does not have misspellings from any of our synthetic domains and can be considered as central. In the case of the Blogs data, where we do not have any ideal central data that is void of style, we train an age classifier and then choose the sentences from the training set that the classifier misclassifies. We create a new training set with these samples, train a single LM on them, and use it for the prior. The intuition is that if the classifier could not guess the domain, these samples are probably close to the notion of centrality we are looking for. 
\noindent\textbf{A4NT~\cite{a4nt}.} ``A4NT: Author Attribute Anonymity by Adversarial Training of Neural Machine Translation'' is the most closely related past work that also attempts to obfuscate text style through automatic re-writing. However, their adversarial approach uses a discriminator network to hide protected attributes simply by mapping the style of one protected category to that of another. \noindent\textbf{PATR~\cite{xu2019privacy}.} Privacy Aware Text Rewriting (PATR) is another work close to ours, which removes sensitive attributes through text re-writing using translation and adversarial learning. Unlike style pooling, PATR targets privacy and is therefore not concerned with the union vs. intersection of sensitive attributes. \noindent\textbf{Original.} We include an ``original'' baseline in our measurements, which shows the value of a given metric if the original un-obfuscated data were used. \subsection{Evaluation Metrics} Below we discuss our evaluation metrics, all of which are measured on the test data. \subsubsection{Fairness} \textbf{TPR-gap.} We first define a classifier whose main task is to determine, given text from their blog, whether the occupation of an author is student or not. We set the age of the author as a sensitive attribute, and want to measure the bias in the classifier given age. We follow~\cite{de2019bias} and use the ``True Positive Rate gap in age'' metric. This measure quantifies the bias in a classifier by finding the gap between the true positive rates for each sensitive attribute group (teen vs. adult). For a binary sensitive attribute $a$ (age) and a true class (for the classifier's main task) $y$, we define: \begin{align} TPR_{a,y} &= P(\hat{Y} = y | A = a, Y = y )\\ GAP_{a,y}^{TPR} &= TPR_{a,y} - TPR_{a',y} \end{align} \noindent where $A$ is the random variable denoting the binary sensitive attribute with values $a$ and $a'$. 
$Y$ and $\hat{Y}$ are random variables denoting the correct class and the predicted class, respectively. The lower the gap, the fairer the classifier. We report $GAP_{Teen, Student}$, which reflects how biased the classifier is towards classifying teens as students. \subsubsection{Linguistic} \textbf{Back-Translation (BT) accuracy.} We translate the obfuscated samples back to their original domain using the model, and then check for each token whether it has been correctly back-translated to its origin. We use this metric to see whether the obfuscated version contains sufficient information about content to reconstruct the original. \noindent\textbf{GPT-2 PPL.} We feed our obfuscated test sentences to a HuggingFace~\cite{gpt-2} pre-trained GPT-2 medium model, and report its perplexity (PPL) as an automatic measure of fluency. Lower PPL hints at more fluent text. \noindent\textbf{BLEU Score.} In the Yelp Synthetic data experiments, since we have the original (not misspelled) text, we can calculate and report the BLEU score. \noindent\textbf{GLEU Score.} We use the GLEU~\cite{gleu} score as another metric for evaluating the fluency of the generated sentences. \noindent\textbf{Lexical Diversity (Lex. Div.).} To better quantify the differences between different obfuscations, we calculate lexical diversity as the ratio of the size of the vocabulary of the model's output text to the overall size of the model's output text (the number of all the tokens in the output). \subsubsection{Sensitive-Attribute Classification} \textbf{Sensitive-attribute Classifier (Clsf.) accuracy.} To evaluate the removal of sensitive attributes, we train a sensitive-attribute classifier and use its accuracy as a metric. The closer the accuracy is to chance level (random guess), the more successful the removal. However, there is a caveat to this metric: it is not always clear how the classifier is making its decision, whether it is based on content or style. 
Therefore, this metric alone is \textbf{not conclusive}. \noindent\textbf{Entropy.} To better measure how uncertain the classifier becomes, we also compute its average entropy across all test samples. Entropy always lies in $[0.0,1.0]$ for two domain classification and in $[0.0,1.59]$ for three domain classification. The higher it is, the more uncertain the classifier is (more desirable for our purpose). \noindent\textbf{Confident Response (CR) percentage.} We calculate the percentage of the responses from the classifier for which it was more than $75\%$ sure. \vspace{-1.0ex} \section{Experimental Results} \subsection{Synthetic Yelp Data}\label{sec:synth} Table~\ref{tab:synthesized} shows the experimental results for the Synthetic Yelp dataset experiment, where we trained our proposed framework using the three synthetic domains with misspellings, as explained in Section~\ref{sec:exp-data}. The \textit{Corrected}, \textit{Remaining} and \textit{Removed} percentages refer to the average ratio of the misspellings corrected, remaining and removed for each domain. These should all sum up to $100\%$. The \textit{Spread} is the average ratio of the number of words from one domain that have been changed to misspellings from another domain. For instance, if there are $100$ occurrences of ``word'' outside $D_1$ before obfuscation and $40$ of them are converted to ``11word11'' after obfuscation, then the spread is $40\%$. The \textit{One-LM} can be considered an ``oracle baseline'' in this case, since it was trained on original (no misspellings) data. The main goal of this controlled experiment is to compare and contrast our intersection and union obfuscations. From the table we can see that both our obfuscations lead to high fidelity (back-translation accuracy) and semantic consistency (BLEU score). They also both render the domain classifier very close to chance level ($33.33\%$). 
The main differences between these two methods become clearer when we look at the corrected, remaining, and spread numbers. The intersection obfuscation, with its average pooling, demonstrates a majority voting behavior which incentivizes correcting the misspellings, since 2 out of the 3 language models advocate for the correct spelling. Therefore $99.20\%$ of the misspellings are corrected using intersection, very close to the oracle baseline. The union prior, on the other hand, corrects only $45.17\%$ of the misspellings, and leaves $54.37\%$ of them as they are. It also converts $46.40\%$ of the correctly spelled words in other domains to misspellings. This shows that the union is in fact mixing the styles, creating sentences that might have more than one misspelling in them. \begin{table} \centering \footnotesize \caption{Results for the Synthetic Yelp dataset with $3$ domains. \textit{Corrected} shows what \% of modified words in a domain were corrected back to their original format. \textit{Spread} shows the reverse.} \vspace{-2ex} \label{tab:synthesized} \input{tables/synthesized_multi_dom} \vspace{-3ex} \end{table} \subsection{Blogs Data} Tables~\ref{table:ratio},~\ref{tab:2dom} and~\ref{tab:3dom} summarize the experimental results for the Blogs dataset. Below, we explain each experiment in more detail. \subsubsection{Fairness Results}\label{sec:fairness} Table~\ref{table:ratio} shows the results of the fairness metric measurements on text generated using different obfuscations, for ``Occupation'' classifiers. We have selected a subset of the Blogs data for this experiment, where the author occupation is either student or arts, and the age is either teen or adult (two domain obfuscation). We have taken an approach similar to that of~\citet{ravfogel2020null}, where we create $4$ different levels of imbalance. 
In all cases, the dataset is balanced with respect to both occupation and age. We change only the proportion of each age within each occupation class (e.g., in the 0.8 ratio, the student occupation class is composed of $80\%$ teens and $20\%$ adults, while the arts class is composed of $20\%$ teens and $80\%$ adults). For each imbalance ratio we train the classifier on the original imbalanced data, and then test it with original and automatically generated data from different baselines. Based on Table~\ref{table:ratio}, we can see that our Intersection obfuscation can improve fairness (TPR-gap) with little harm to the classifier accuracy (Occupation), in comparison to the original data and A4NT. We can trade off classifier accuracy and fairness by increasing the de-boosting (DB) multiplier. In the table, \textit{Intersection} shows Intersection obfuscation with different DB levels. In the case of $DB=40$, we lose slightly more utility, but observe much better fairness. A4NT's performance in terms of the fairness metric (TPR-gap) is comparable to our \textit{Intersection} obfuscation (even without de-boosting); however, in maintaining occupation accuracy (utility), A4NT performs much more poorly. We presume this is because A4NT removes sensitive attributes solely based on hints from a discriminator, and the low occupation accuracy suggests the discriminator captures the content more than it captures style, so it changes the meaning and structure of the sentences as well. Our human judgments for semantic consistency and fluency in Section~\ref{sec:human} support this hypothesis. Our \textit{Union} obfuscation, however, does not improve fairness. We hypothesize this could be caused by keeping/adding biasing words, which can perpetuate the existing biases in the classifier, similar to how human cognition works~\cite{richter2017comprehension}. \begin{table*} \caption{Fairness results for the Blogs data. 
The main task is classifying if the author occupation is student or not. Higher occupation accuracy and lower TPR-gap are better. DB denotes our style de-boosting technique, and the number next to it shows its multiplier. A larger multiplier means stronger style obfuscation. }\label{table:ratio} \vspace{-2ex} \centering \footnotesize \begin{adjustbox}{width=\textwidth, center} \input{tables/fairness_ratio_comp_temp} \end{adjustbox} \vspace{-1ex} \end{table*} \subsubsection{Linguistic and Sensitive-attribute Classification Results} The top sections of Tables~\ref{tab:2dom} and~\ref{tab:3dom} show the linguistic and sensitive-attribute classification metrics for the two and three domain obfuscations, respectively. Since A4NT cannot be applied to non-binary style obfuscations as is, there are no results for it in three domains. We can see that for both two and three domains the de-boosting (denoted as DB) offers a trade-off between the linguistic quality of the generated text and the obfuscation of sensitive attributes. Compared to the One-LM baseline, for corresponding levels of de-boosting, our \textit{Intersection} obfuscation is almost always superior, in both text quality and obfuscation. The \textit{Intersection} obfuscation with a de-boosting multiplier of 25 outperforms A4NT, with lower classifier accuracy, higher entropy and a much lower Confident Response (CR) rate from the classifier. In general, the \textit{Intersection} obfuscation, even without de-boosting, does well on \textit{Entropy} and \textit{CR}, which shows that our method is doing well at creating doubt in terms of what the age of the author is. One caveat, however, across both two and three domain obfuscations, is the classifier accuracy, which does not decrease much. 
We hypothesize that one reason for this could be the dependency between style and content: the sensitive-attribute classifier could be basing its decisions on content, in which case changing the style would not hide the sensitive attribute. Our \textit{Union} obfuscation behaves differently from the \textit{Intersection}, and is inferior in terms of obfuscating the text, with higher classifier accuracy and lower entropy. However, it has higher lexical diversity, which could hint at it trying to keep sentences diverse and ``adding styles'', whereas the \textit{Intersection} is only keeping the common words and is therefore decreasing the lexical diversity. \begin{table}[] \centering \caption{Comparison with PATR~\cite{xu2019privacy}, on the Twitter DIAL dataset, where the author's race is the sensitive attribute.} \vspace{-1ex} \label{tab:patr} \begin{adjustbox}{width=\linewidth, center} \input{tables/PATR} \end{adjustbox} \vspace{-1ex} \end{table} \begin{table*}[] \centering \caption{Linguistic and sensitive-attribute classifier results for Blogs data, considering \textit{two} sensitive age domains of teens and adults. For BT accuracy and entropy higher is better; for PPL and Confident Response (CR) lower is better.} \vspace{-2ex} \label{tab:2dom} \begin{adjustbox}{width=\textwidth, center} \input{tables/2dom} \end{adjustbox} \end{table*} \begin{table*}[] \centering \caption{Linguistic and sensitive-attribute classifier results for Blogs data, considering \textit{three} sensitive age domains of teens, young adults and adults. For BT accuracy and entropy higher is better; for PPL and Confident Response (CR) lower is better. } \vspace{-2ex} \label{tab:3dom} \begin{adjustbox}{width=\textwidth, center} \input{tables/3dom} \end{adjustbox} \vspace{-2ex} \end{table*} \subsection{Comparison with PATR} Table~\ref{tab:patr} provides a comparison between our style pooling method and PATR~\cite{xu2019privacy}. 
$\alpha$ is a knob used by PATR to tune the intensity of attribute removal, and the classifier accuracy on non-modified text is 86.3\%. We can see that without de-boosting, our intersection method drops the classifier accuracy to 74.05\% with a GLEU score of 26.32. PATR drops the classifier accuracy to 74.85\%, but with a worse GLEU. With de-boosting, however, we can achieve a classifier accuracy of 62.12\% with a GLEU of 17.2, whereas PATR reports an accuracy of 65.75\% for a much lower GLEU of 9.67 when $\alpha$ is increased. This shows that our de-boosting mechanism can provide an advantage by giving a lower probability to attribute-revealing components, while maintaining the sentence structure. Our union method also achieves 73.27\% accuracy with 26.25 GLEU, making it most suitable for cases where the semantic consistency of the sentences is most important. \subsection{Evaluation with Human Judgments} \label{sec:human} We design two crowd-sourcing tasks on Amazon Mechanical Turk. (1) Fluency: We provide workers with a pair of obfuscated sentences, and ask them which sentence is more fluent. (2) Semantic Consistency: We provide the original (un-obfuscated) sentences, and ask workers which of the obfuscated sentences is closer in meaning to the original sentence. The model checkpoints used for human evaluations here are those whose fairness and linguistic metrics are reported in Tables~\ref{table:ratio} and~\ref{tab:2dom}. We use our intersection obfuscation, with no de-boosting. We randomly select $188$ sentences from the test set, and use the model outputs for human judgment. For consistency, each pair of sentences is rated by three workers and we take the majority vote. In terms of fluency, the workers preferred our obfuscations over those of A4NT for $60.38\%$ of the sentences. In terms of semantic consistency, for $72.13\%$ of the sentences they found our obfuscations to be closer in meaning to the original ones. 
\vspace{-1ex} \section{Related Work} \vspace{-1ex} A large body of prior work has attempted to address \textit{algorithmic bias} by modifying different stages of the natural language processing (NLP) pipeline.~\citet{blodgett-etal-2021-stereotyping},~\citet{barikeri-etal-2021-redditbias},~\citet{farrand2020neither},~\citet{mireshghallah-etal-2021-privacy} and~\citet{sheng2019woman} propose and analyze benchmarks for evaluating fairness in different applications. ~\citet{ravfogel2020null},~\citet{kaneko2019gender},~\citet{shin2020neutralizing} and~\citet{kanekodebiasing} attempt to de-bias word embeddings used by NLP systems, while \citet{elazar2018adversarial, Barrett2019AdversarialRO,wang-etal-2021-dynamically} attempt to de-bias model representations and encodings. There is also a large body of work on modifying learning algorithms and inference procedures to produce more fair outcomes~\cite{agarwal2018reductions, madras2018predict, zafar2017fairness,han-etal-2021-decoupling,mireshghallah2021not}. While effective in many cases, such approaches do nothing to mitigate \textit{human bias} in decisions based on text. Fundamentally, our framework is concerned with stylistic features of human-generated text. Thus, a large body of prior work on methods for unsupervised style transfer is related to our approach~\cite{santos2018fighting,yang2018unsupervised,luo2019dual,he2020probabilistic}. There is also a vast body of work on style obfuscation~\cite{emmery2018style, reddy2016obfuscating, bevendorff2019heuristic, a4nt}. Our work is most closely related to~\citet{a4nt} and~\citet{xu2019privacy}. A4NT~\cite{a4nt} attempts to obfuscate text style through automatic re-writing. However, their approach attempts to hide protected attributes simply by mapping the style of one protected category to that of another. In contrast, we seek not to map the author's text to another author's style, but to a central obfuscated style. 
~\citeauthor{xu2019privacy} propose Privacy Aware Text Re-writing (PATR), which takes a similar adversarial-learning, translation-based approach to address this problem and re-write text. One fundamental difference between our style-pooling method and PATR is that we provide the choice of union vs. intersection of styles, which is concerned with the societal aspects of removing sensitive attributes, since we are targeting the removal of bias. PATR, however, targets privacy and is therefore not concerned with the union vs. intersection of sensitive attributes. Finally, there is a body of work on re-writing text to mitigate the potential biases within the content of the text itself. ~\citeauthor{Ma2020PowerTransformerUC} propose PowerTransformer, which rewrites text to correct the implicit and potentially undesirable bias in character portrayals. ~\citeauthor{Pryzant2020AutomaticallyNS} propose a framework that addresses subjective bias in text, and ~\citeauthor{field2020unsupervised} and ~\citeauthor{zhou2021challenges} introduce approaches to identifying gender bias against women at a comment level and dialect bias in text, respectively. These works focus on the text content, and not on the stylistic features of the author. \vspace{-1.5ex} \section{Conclusion} \vspace{-1.5ex} We proposed a probabilistic VAE framework for automatically re-writing text in order to obfuscate stylistic features that might reveal sensitive attributes of the author. We demonstrated in experiments that our proposed framework can indeed reduce bias in downstream text classification. Finally, our model poses two ways of defining a central style. Future work might consider further explorations of alternative notions of stylistic centrality. \section*{Acknowledgments} The authors would like to thank the anonymous reviewers and meta-reviewers for their helpful feedback. We also thank Junxian He for insightful discussions. 
Additionally, we thank our colleagues at the UCSD Berg Lab for their helpful comments and feedback. \section*{Ethical Considerations} Our proposed model is intended to be used to address a real-world fairness issue. However, this is an extremely complicated topic, and it should be treated with caution, especially upon deploying possible mitigations such as ours. One potential issue we see is the chance that systems like this might obfuscate text by converging towards the majority and erasing the styles of marginalized communities. We have tried to address this concern, and raise discussion around it in our introduction and model design, by allowing for multiple operationalizations of a ``central'' style, and introducing the union and intersection obfuscations. Defining a true notion of centrality that would effectively protect sensitive attributes without erasing any specific styles of writing requires further study. \appendix \section{Appendix} \label{sec:appendix} \subsection{Experiment Code}\label{sec:app-code} We have uploaded the code, data and model checkpoints needed for reproducing the experiments at \url{https://github.com/mireshghallah/style-pooling}. There, you will find a \texttt{Read Me} file which includes all the necessary steps and links to the data and model checkpoints. In short, you need to download the data and model checkpoints from the link. When you download the models-data compressed folder, extract it, then place the contents of the data folder and the models in the corresponding folders in the code. The package dependencies are all included in the \texttt{dependencies} file. In order to create a similar setup, please install the exact versions mentioned there. Once all the directories are set up, you can train your own models using the commands in the \texttt{Read Me}, or you can evaluate the models we have already provided. Evaluation code is included in the \texttt{results\_ipynbs} folder. 
\subsection{Model Configurations}\label{sec:app-configs} \paragraph{Seq2Seq Model.} For all the experiments, we use single-layer LSTMs with a hidden size of 512 as both the encoder and decoder, and we use a word embedding size of 128. We apply dropout to the readout states before the softmax with a rate of 0.3. We add a max pooling operation over the encoder hidden states before feeding them to the decoder. \paragraph{Language Model: Yelp data.} We use an LSTM language model with a hidden size of 512, a word embedding size of 128 and a dropout value of 0.3. \paragraph{Language Model: Blog and Twitter data.} We use an LSTM language model with a hidden size of 2048, a word embedding size of 1024 and a dropout value of 0.3. \paragraph{Sensitive-attribute Classifiers.} We use LSTM classifiers for classifying sensitive attributes. The hidden size is 512 and the word embedding size is 128. The last layer size is the number of sensitive classes. \paragraph{GPT-2.} We used the repository~\url{https://github.com/priya-dwivedi/Deep-Learning/tree/master/GPT2-HarryPotter-Training} to download and feed data to the GPT-2 model, and get the PPL score. \subsection{Dataset Details}\label{sec:app-data} \paragraph{Yelp.} The training set contains 399,999 sentences, and the test set consists of 30,000 sentences, both divided equally between the 3 domains. The vocabulary size is 9.6k words. The misspelled words of $D_0$, $D_1$ and $D_2$ are ``00great00, 00this00, 00it00, 00to00, 00food00'', ``11of11, 11place11, 11for11, 11good11, 11service11'' and ``22they22, 22are22, 22in22, 22very22, 22my22'', respectively. \paragraph{Blogs.} The blogs dataset is a collection of micro blogs from \url{blogger.com} which consists of 19,320 `documents' (over 3.3 million sentences) along with annotations of the author’s age, gender, and occupation. Each document is a collection of an individual author's posts. 
We use this data in both two and three domain style pooling, where we treat age as the sensitive attribute and balance the data so each domain has the same number of sentences. In the two domain setup, we divide the data into two groups: teenagers ($13-18$) and adults ($23-48$). In the three domain setup, we have three groups: teenagers ($13-18$), young adults ($23-28$) and adults ($33-48$). The age groups $19-22$ and $29-32$ are missing from the data. After preprocessing and balancing the dataset, we end up with 1.2 million sentences in the training set and 400k sentences in the test set for the 2 domain setup, and 762k training sentences and 192k test sentences for the 3 domain setup. There are 10k words in the vocabulary. All the datasets are balanced. \paragraph{Twitter.} There are 146.5k sentences in the training set, and 11.2k sentences in the test set. We reproduced this data using the scripts from~\citet{elazar2018adversarial}'s GitHub repository:~\url{https://github.com/yanaiela/demog-text-removal}. \subsection{Hyperparameters}\label{sec:app-hyperparams} For all experiments, we set the training batch size to $32$, the test batch size to $128$ and the temperature of the softmax to $0.01$. \noindent{\textbf{KL weight hyperparameter:}} The KL term in Eq.~\ref{eq:elbo} that appears naturally in the ELBO objective can be treated as a regularizer that uses our $p_{prior}$ to induce the type of style we want. Therefore, in practice, we add a weight $\lambda$ to the KL term in the ELBO, since the regularization strength from our priors varies depending on the dataset, training data size, or prior structure~\cite{bowman2016generating}. \paragraph{Yelp.} For the Yelp experiments, the learning rate is set to $0.001$ and the KL weights ($\lambda$) for the Union, One-LM and Intersection experiments are $0.03$, $0.03$ and $0.02$, respectively. 
\paragraph{Blogs and Twitter.} For the Blogs experiments, the learning rate is $0.0005$, and the KL weight ($\lambda$) is $0.04$ (for both 2 and 3 domains). \subsection{Comparison with A4NT Details} To compare with the work ``A4NT: Author Attribute Anonymity by Adversarial Training of Neural Machine Translation''~\cite{a4nt}, we downloaded a checkpoint of their pre-trained model, available in their GitHub repository: \url{https://github.com/rakshithShetty/A4NT-author-masking}. Since we have also used the same dataset with the same train/test separation, we use the model as is for evaluation. \subsection{Human Evaluation Experiment Details} Our crowd workers are recruited from the Amazon Mechanical Turk (AMT) platform. Each HIT required the workers to answer a question regarding only one pair of sentences, and each worker was paid $\$0.1$ per HIT. For English proficiency, the workers were restricted to be from the USA or UK. For the semantic consistency test, the question asked of the Turkers was: ``Which sentence is closer in meaning to the original sentence below?'', where the original sentence and the obfuscated ones were provided to the workers. For fluency, we asked: ``Which sentence is more fluent in English?''. \subsection{Comparison with~\citet{elazar2018adversarial}}\label{sec:app-elazar} ~\citet{elazar2018adversarial} aim at creating representations for text that could be used for a specific classification task, while hiding sensitive attributes. Although our approach deals with the text as opposed to representations and can be applied to a wider range of downstream tasks, we offer a brief comparison to this method. ~\citeauthor{elazar2018adversarial} use the Twitter dataset~\cite{rangel2016overview}, set the sensitive attribute to be age, and try to produce representations that would perform well on the main task of ``conversation detection'' (mention detection) on Tweets. 
On the original data, they report an accuracy of $77.5\%$ and $64.8\%$ for a classifier that tries to classify conversations and age, respectively, which drop to $72.5\%$ and $57.3\%$ after applying their adversarial learning scheme. We cloned their repository and used their code to process the dataset. We then created and trained the conversation and age classifiers, and reached accuracies of $75.8\%$ and $64.63\%$, respectively. These dropped to $73.28\%$ and $54.2\%$ after applying our intersection method. This shows that for this particular task, our re-written text can outperform prior work. \end{document}
\begin{document} \title{Improved Bounds for Uniform Hypergraphs without Property B} \begin{abstract} A hypergraph is said to be properly $2$-colorable if there exists a $2$-coloring of its vertices such that no hyperedge is monochromatic. On the other hand, a hypergraph is called non-$2$-colorable if there exists at least one monochromatic hyperedge in each of the possible $2$-colorings of its vertex set. Let $m(n)$ denote the minimum number of hyperedges in a non-$2$-colorable $n$-uniform hypergraph. Establishing lower and upper bounds on $m(n)$ has been a well-studied research direction over several decades. In this paper, we present new constructions for non-$2$-colorable $n$-uniform hypergraphs. These constructions improve the upper bounds for $m(8)$, $m(13)$, $m(14)$, $m(16)$ and $m(17)$. We also improve the lower bound for $m(5)$. \noindent \textbf{Keywords:} Property B; Uniform Hypergraphs; Hypergraph $2$-coloring \end{abstract} \section{Introduction} Hypergraphs are combinatorial structures that are generalizations of graphs. Let $H = (V, E)$ be an $n$-uniform hypergraph with vertex set $V$, with each hyperedge in $E$ having exactly $n$ vertices in it. A \textit{$2$-coloring} of $H$ is an assignment of one of the two colors red and blue to each of the vertices in $V$. We say a $2$-coloring of $H$ is \textit{proper} if each of its hyperedges has red as well as blue vertices. $H$ is said to be \textit{non-$2$-colorable} if no proper $2$-coloring exists for it; otherwise, it is said to satisfy \textit{Property B}. For an integer $n \geq 1$, let $m(n)$ denote the minimum number of hyperedges present in a non-$2$-colorable $n$-uniform hypergraph. Establishing an upper bound on $m(n)$ is a well-explored combinatorial problem. Erd\H{o}s \cite{erdos1963combinatorial} gave a non-constructive proof to establish the currently best-known upper bound $m(n) = O(n^2 2^n)$. 
However, there is no known construction for a non-$2$-colorable $n$-uniform hypergraph that matches this upper bound. Abbott and Moser \cite{abbott1964combinatorial} constructed a non-$2$-colorable $n$-uniform hypergraph with $O((\sqrt{7} + o(1))^n)$ hyperedges. Recently, Gebauer \cite{gebauer2013construction} improved this result by constructing a non-$2$-colorable $n$-uniform hypergraph with $O(2^{(1 + o(1))n})$ hyperedges. Even though this is the best construction known for a non-$2$-colorable $n$-uniform hypergraph for large $n$, it is still asymptotically far from the above-mentioned non-constructive upper bound given by Erd\H{o}s. Finding upper bounds for small values of $n$ is also a well-studied problem and several constructions have been given for establishing these. For example, it can be easily seen that $m(1) \leq 1$, $m(2) \leq 3$ (the corresponding $2$-uniform hypergraph is the triangle graph) and $m(3) \leq 7$ (the corresponding $3$-uniform hypergraph is known as the Fano plane \cite{klein1870theorie}, denoted by $H_f$ in this paper). The previously-mentioned construction of Abbott and Moser shows that $m(4) \leq 27$, $m(6) \leq 147$ and $m(8) \leq 2187$. Moreover, their construction also gives non-trivial upper bounds on $m(n)$ for $n= 9, 10, 12, 14, 15$ and $16$. For $n \geq 3$, Abbott and Hanson \cite{abbott1969combinatorial} gave a construction using a non-$2$-colorable $(n-2)$-uniform hypergraph to show that $m(n) \leq n \cdot m(n - 2) + 2^{n - 1} + 2^{n-2}((n - 1) \bmod 2)$. Using the best-known upper bounds on $m(n-2)$, this recurrence relation establishes non-trivial upper bounds, as well as improves such bounds, on $m(n)$ for a few small values of $n$. For example, it shows that $m(4) \leq 24$, $m(5) \leq 51$ and $m(7) \leq 421$. Seymour \cite{seymour1974note} further improved the upper bound on $m(4)$ to $m(4) \leq 23$ by constructing a non-$2$-colorable $4$-uniform hypergraph with 23 hyperedges. In this paper, we denote this hypergraph by $H_s$. 
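The non-$2$-colorability of small hypergraphs such as the Fano plane $H_f$ can be verified mechanically. The following brute-force check (ours, purely illustrative) confirms that $H_f$ admits no proper $2$-coloring, while deleting any hyperedge leaves a $2$-colorable hypergraph, consistent with the exact value $m(3) = 7$:

```python
from itertools import product

# The Fano plane H_f: 7 points (0..6) and 7 lines (3-element hyperedges).
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]

def is_two_colorable(n_vertices, edges):
    """Brute force over all 2^n red/blue colorings: return True iff some
    coloring leaves no hyperedge monochromatic (i.e., H has Property B)."""
    for coloring in product((0, 1), repeat=n_vertices):
        if all(len({coloring[v] for v in e}) == 2 for e in edges):
            return True
    return False

assert not is_two_colorable(7, fano)       # H_f is non-2-colorable
assert is_two_colorable(7, fano[:-1])      # 6 hyperedges no longer suffice
```

Since $m(3) = 7$, every $3$-uniform hypergraph with at most $6$ hyperedges satisfies Property B, which the second assertion illustrates on one such subhypergraph.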
For even integers $n \geq 4$, Toft \cite{toft1973color} generalized this construction, using a non-2-colorable\ \nuniform[(n-2)]\ hypergraph, to improve Abbott and Hanson's result to $m(n) \leq n \cdot m(n - 2) + 2^{n - 1} + \binom{n}{n/2}/2$. In particular, this led to the upper bound $m(8) \leq 1339$. For a given integer $n \geq 3$ and a non-2-colorable\ \nuniform[(n-2)]\ hypergraph\ $A$, we refer to Abbott-Hanson's construction for odd $n$ and Toft's construction for even $n$ as the \itt{Abbott-Hanson-Toft construction}, and denote the number of hyperedges\ in such a hypergraph\ by \mnabtinline. It can be easily observed that $\mn \leq \mnabtmath$ for any non-2-colorable\ \nuniform[(n-2)]\ hypergraph\ $A$. In fact, we have already seen that the above-mentioned upper bounds $\mn[4] \leq 23$, $\mn[5] \leq 51$, $\mn[7] \leq 421$ and $\mn[8] \leq 1339$ are obtained by Abbott-Hanson-Toft constructions using the best-known\ constructions for non-2-colorable\ 2, 3, 5 and 6-uniform hypergraphs, respectively. Recently, a construction given by Mathews et al. \cite{mathews2015construction} improved the upper bound on $m(8)$ to $m(8) \leq 1269$. In addition, they modified the Abbott-Hanson-Toft construction to improve the upper bounds on \mninline\ for $n = 11, 13$ and $17$. The currently best-known\ upper bounds on \mninline\ for $n \leq 17$ are given in Table \ref{table-cur-known-upper}.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline $n$ & $m(n)$ & Corresponding construction/recurrence relation \\ \hline 1 & $m(1) = 1$ & Single Vertex\\ 2 & $m(2) = 3$ & Triangle Graph\\ 3 & $m(3) = 7$ & Fano Plane \cite{klein1870theorie}\\ 4 & $m(4) = 23$ & \cite{ostergaard2014minimum}, \cite{seymour1974note}\\ 5 & $m(5) \leq 51$ & $m(5) \leq 2^4 + 5 m(3)$\\ 6 & $m(6) \leq 147$ & $m(6) \leq m(2) m(3)^2$\\ 7 & $m(7) \leq 421$ & $m(7) \leq 2^6 + 7 m(5)$\\ 8 & $m(8) \leq 1269$ & \cite{mathews2015construction}\\ 9 & $m(9) \leq 2401$ & $m(9) \leq m(3)^4$\\ 10 & $m(10) \leq 7803$ & $m(10) \leq m(2) m(5)^2$\\ 11 & $m(11) \leq 25449$ & $m(11) \leq 15 \cdot 2^8 + 9 m(9)$\\ 12 & $m(12) \leq 55223$ & $m(12) \leq m(3)^4 m(4)$\\ 13 & $m(13) \leq 297347$ & $m(13) \leq 17 \cdot 2^{10} + 11 m(11)$ \\ 14 & $m(14) \leq 531723$ & $m(14) \leq m(2) m(7)^2$\\ 15 & $m(15) \leq 857157$ & $m(15) \leq m(3)^5 m(5)$\\ 16 & $m(16) \leq 4831083$ & $m(16) \leq m(2) m(8)^2$\\ 17 & $m(17) \leq 13201419$ & $m(17) \leq 21 \cdot 2^{14} + 15 m(15)$ \\ \hline \end{tabular} \end{center} \caption{Best-known upper bounds on $m(n)$ for small values of $n$} \label{table-cur-known-upper} \end{table} In the other direction, Erd\H{o}s \cite{erdos1963combinatorial} showed the lower bound on \mninline\ to be $\mn = \Omega(2^n)$, which was later improved by Beck \cite{beck19783} to $m(n) = \Omega(n^{1/3 - o(1)} 2^n)$. The currently best-known\ lower bound $m(n) = \Omega(\sqrt{\frac{n}{\ln n}} 2^n)$ was given by Radhakrishnan and Srinivasan \cite{radhakrishnan2000improved}. Recently, a simpler proof for the same result has been given by Cherkashin and Kozik \cite{cherkashin2014note}. Note that there is a significant asymptotic gap between the currently best-known lower and upper bounds on $m(n)$. Even for small values of $n$, we are only aware of a few lower bounds for \mninline\ that match the corresponding upper bounds. 
It can be easily seen that $m(1) \geq 1$, $m(2) \geq 3$ and $m(3) \geq 7$, and therefore $m(1) = 1$, $m(2) = 3$ and $m(3) = 7$. Recently, {\"O}sterg{\aa}rd \cite{ostergaard2014minimum} showed that $m(4) \geq 23$ and thereby established $m(4) = 23$. The exact values of \mninline\ are not yet known for $n \geq 5$, even though it can be easily observed that $m(n+1) \geq m(n)$ for any $n \geq 1$. \subsection{Our Contributions} In this paper, we give constructions that improve the best-known\ upper bounds on $m(8)$, $m(13)$, $m(14)$, $m(16)$ and $m(17)$. We also establish an improved lower bound on \mninline[5]. In Section \ref{upper-const-one}, we give a construction that yields the following recurrence relation. In particular, it improves the upper bound on $m(13)$. \begin{result} \label{result-gen-abt} Consider an integer $k \geq 1$. For an odd $n > 2k$, $m(n) \leq \binom{n+k-1}{k} m(n-2k) + \binom{n+k-1}{k-1} 2^{n-1}$. For an even $n > 2k$, $m(n) \leq \binom{n+k-1}{k} m(n-2k) + \binom{n+k-1}{k-1} (2^{n-1} + \binom {n}{n/2}/2)$. \end{result} This construction also gives a non-2-colorable\ \nuniform\ hypergraph\ with $O(3.76^n)$ hyperedges. Even though this constructive upper bound $m(n) = O(3.76^n)$ is better than the trivial bound $m(n) \leq \binom{2n-1}{n} = \Theta(4^{n}/\sqrt{n})$, it is asymptotically worse than the previously mentioned constructive upper bounds $m(n) = O((\sqrt{7} + o(1))^n)$ \cite{abbott1964combinatorial} and $m(n) = O(2^{(1 + o(1))n})$ \cite{gebauer2013construction}. In Section \ref{upper-const-two}, we provide another construction that improves the upper bounds on $m(8)$, $m(13)$, $m(14)$, $m(16)$ and $m(17)$. \begin{result} Consider an integer $k$ satisfying $0 < k < n$. Let $w = \floor{n/k}$ and $x = n \bmod k$; for $x > 0$, let $y = \floor{k/x}$ and $z = k \bmod x$. \label{result-multicore} \begin{enumerate} \item[(a)] If $x > 0$ and $z > 0$, $m(n) \leq w \cdot m(n-k) m(k) + y \cdot m(k)^w m(x) + \tbinom{x+z-1}{z} m(n-k) m(x)^y + \tbinom{x+z-1}{x} m(k)^w$.
\item[(b)] If $x > 0$ and $z = 0$, $m(n) \leq w \cdot m(n-k) m(k) + y \cdot m(k)^w m(x) + m(n-k) m(x)^y$. \item[(c)] If $x = 0$, $m(n) \leq w \cdot m(n-k) m(k) + m(k)^w$. \end{enumerate} \end{result} In Section \ref{upper-const-five}, we give a construction to prove the following result, which further improves the upper bounds on $m(13)$ and $m(16)$. \begin{result} \label{result-block} Consider an integer $k \geq 2$ and a non-2-colorable\ \nuniform[(k-1)]\ hypergraph\ $H_{1c}$. Then, $m(3k+1) \leq (m(k - 1) + 2^{k - 1}) m(k + 1)^2 + 2 m_{H_{1c}}(k + 1) m(k)^2 + 4 m(k + 1) m(k)^2$. \end{result} \noindent In Section \ref{our-work-low}, we improve the currently best-known\ lower bound $m(5) \geq 28$. \begin{result} \label{result-improv-lower} $m(5) \geq 29$. \end{result} \section{Previous Results \label{prev-works}} \subsection{Abbott-Moser Construction} \label{prev-result-abbott-moser} Abbott and Moser \cite{abbott1964combinatorial} gave a construction of a non-2-colorable\ \nuniform\ hypergraph\ $H = (V, E)$, for any composite $n$ satisfying $n = ab$ for two integers $a \geq 1, b \geq 1$, by exploiting known constructions of non-2-colorable\ \nuniform[a]\ and \nuniform[b]\ hypergraphs.\footnote{Note that the notations used in a sub-section are not related to the notations used in other sub-sections, unless specified otherwise.} Let \cushgnot{a} and \cushgnot{b} be non-2-colorable\ \nuniform[a]\ and \nuniform[b] hypergraphs, respectively. $H$ is constructed using \cardinline{V_a} identical copies of $H_b$ by replacing each vertex of $H_a$ with a copy of $H_b$. Let us denote the copies of $H_b$ by $\cushg{b_1}, \cushg{b_2}, \ldots, \cushg{b_{\card{V_a}}}$. The vertex set of $H$ is $V = V_{b_1} \cup V_{b_2} \cup \cdots \cup V_{b_{\card{V_a}}}$. The hyperedge\ set of $H$ is constructed as follows. For each hyperedge\ $\{v_1, \ldots, v_a\}$ in $E_a$, the collection of hyperedges\ $\{\{e_1 \cup \cdots \cup e_a \}: e_1 \in E_{b_{v_1}}, \ldots, e_a \in E_{b_{v_a}} \}$ is added to $E$.
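This product construction is easy to realize concretely. The following Python sketch (our illustration, not from \cite{abbott1964combinatorial}) instantiates it with $H_a = H_b = K_3$, the triangle graph, producing a $4$-uniform hypergraph on $9$ vertices with $m(2) \cdot m(2)^2 = 27$ hyperedges, and confirms its non-$2$-colorability by brute force:

```python
from itertools import product

# Abbott-Moser product with H_a = H_b = K_3, the non-2-colorable
# 2-uniform hypergraph (our illustration of the construction).
triangle = [(0, 1), (1, 2), (0, 2)]

# Vertex v of H_a is replaced by a copy of H_b on vertices (v, 0), (v, 1), (v, 2).
def lifted(v):
    return [frozenset({(v, x), (v, y)}) for (x, y) in triangle]

# For each hyperedge {v1, v2} of H_a, take every union e1 ∪ e2 of copy hyperedges.
H = [e1 | e2 for (v1, v2) in triangle for e1 in lifted(v1) for e2 in lifted(v2)]
vertices = sorted({u for e in H for u in e})

print(len(H), all(len(e) == 4 for e in H))  # 27 True: 4-uniform, |E_a||E_b|^a edges

# Brute force over all 2^9 colorings: no proper 2-coloring exists.
proper_exists = any(
    all({coloring[v] for v in e} == {0, 1} for e in H)
    for bits in product([0, 1], repeat=len(vertices))
    for coloring in [dict(zip(vertices, bits))]
)
print(proper_exists)  # False: H is non-2-colorable
```

This gives the (non-optimal) bound $m(4) \leq 27$ mentioned in the introduction.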
The resulting hypergraph\ $H$ is \nuniform\ and it is evident from the construction that it has $\card{E_a} \card{E_b}^a$ hyperedges. Abbott and Moser \cite{abbott1964combinatorial} showed that $H$ is non-2-colorable, thereby proving the following result. \begin{lemma} \label{prev-result-abbott-moser-lemma} For any composite $n$ satisfying $n = ab$ for integers $a, b \geq 1$, $m(n) \leq m(a) m(b)^a$. \end{lemma} This construction gives the best-known\ upper bounds for some small values of $n$. For example, it shows that $m(6) \leq 147$, $m(9) \leq 2401$, $m(10) \leq 7803$, $m(12) \leq 55223$, $m(14) \leq 531723$, $m(15) \leq 857157$ and $m(16) \leq 4831083$. \subsection{Abbott-Hanson-Toft Construction} \label{prev-result-abbott-hanson-toft} As mentioned in the introduction, Abbott-Hanson's construction \cite{abbott1969combinatorial} for odd $n$ along with Toft's construction \cite{toft1973color} for even $n$ is referred to as the Abbott-Hanson-Toft construction. For a given $n \geq 3$, this construction is built using a non-2-colorable\ \nuniform[(n-2)]\ hypergraph, which we call the \itt{core hypergraph} and denote by \cushgnot{c}. Let its hyperedge\ set be \hyperedgeset{E_c}{m_c}. Let $A$ and $B$ be two disjoint sets of vertices, where \vertexset{A}{a}{n} and \vertexset{B}{b}{n}, each disjoint from $V_c$. For a given $K \subset \{1, 2, \ldots, n\}$, we define \vertexsubset{A_K}{K}{a}, \vertexsubset{B_K}{K}{b}, $\overline{A}_{K} = A \setminus A_{K}$ and $\overline{B}_{K} = B \setminus B_K$. The construction of the non-2-colorable\ \nuniform\ hypergraph\ $H = (V, E)$ is as follows.
The vertex set is $V = V_c \cup A \cup B$ and the hyperedge\ set $E$ consists of the following hyperedges: \begin{enumerate} \item[(i)] $e_i \cup \{a_j\} \cup \{b_j\}$ for every pair $i, j$ satisfying $1 \leq i \leq m_c$ and $1 \leq j \leq n$ \item[(ii)] $A_K \cup \overline{B}_K$ for each $K$ such that \cardinline{K} is odd and $1 \leq \card{K} \leq \floor{n/2}$ \item[(iii)] $\overline{A}_K \cup B_K$ for each $K$ such that \cardinline{K} is even and $2 \leq \card{K} \leq \floor{n/2}$ \item[(iv)] $A$ \end{enumerate} It is easy to observe that the number of hyperedges\ in $H$ is $2^{n-1} + n m_c$ for odd $n$ and $2^{n-1} + n m_c + \binom{n}{n/2}/2$ for even $n$. Abbott-Hanson \cite{abbott1969combinatorial} and Toft \cite{toft1973color} proved that $H$ is non-2-colorable, and the construction gives the following upper bound on \mninline. \begin{lemma} \label{prev-result-abbott-hanson-toft-lemma} \[ m(n) \leq \begin{cases} 2^{n-1} + n \cdot m(n-2) & \text{if } n \text{ is odd} \\ 2^{n-1} + n \cdot m(n -2) + \binom{n}{n/2}/2 & \text{if } n \text{ is even} \end{cases} \] \end{lemma} Lemma \ref{prev-result-abbott-hanson-toft-lemma} gives the best-known\ upper bounds on \mninline\ for $n=5$ and $7$ as $m(5) \leq 51$ and $m(7) \leq 421$, respectively. \subsection{Mathews-Panda-Shannigrahi Construction} The following construction of a non-2-colorable\ \nuniform\ hypergraph\ for $n \geq 3$ is an improvement over the Abbott-Hanson-Toft construction mentioned above. Similar to the Abbott-Hanson-Toft construction, this construction also utilizes a non-2-colorable\ \nuniform[(n-2)]\ hypergraph\ \cushgnot{c}, called the core hypergraph\ as in Section \ref{prev-result-abbott-hanson-toft}. Let \hyperedgecustset{E_c}{e}{m_c}. In addition, this construction uses two disjoint vertex sets \vertexset{A}{a}{n} and \vertexset{B}{b}{n}, each disjoint from $V_c$. Let $B^1$ denote the ordered set $B^1 = (b_1, b_2, \ldots, b_n)$, where the ordering is defined as $b_1 \prec b_2 \prec \ldots \prec b_n$.
For any $1 \leq p \leq n$, let $B^p$ denote the ordered set obtained from $B^1$ by swapping $b_1$ and $b_p$ in the ordering. Let the ordered set $B^p = (b_p, b_2, \ldots, b_{p-1}, b_1, b_{p+1}, \ldots, b_n)$ be denoted by $(w^p_1, w^p_2, \ldots, w^p_n)$, where the ordering is given as $w^p_1 \prec w^p_2 \prec \ldots \prec w^p_n$. For $K \subset \{1, 2, \ldots, n\}$, let \vertexsubset{A_K}{K}{a}, $\overline{A}_K = A \setminus A_K$, \vertexsubset{B_K^p}{K}{w^p} and $\overline{B}_K^p = B \setminus B_K^p$. The construction of the non-2-colorable\ \nuniform\ hypergraph\ $H = (V, E)$ is defined as follows. The vertex set is $V = V_c \cup A \cup B$ and the hyperedge\ set $E$ consists of the following hyperedges: \begin{enumerate} \item[(i)] $e_i \cup \{a_j\} \cup \{b_j\}$ for every pair $i, j$ satisfying $1 \leq i \leq m_c$ and $2 \leq j \leq n$ \item[(ii)] $A_K \cup \overline{B}_K^p$ for each $p$ satisfying $1 \leq p \leq n$, and each $K$ such that \cardinline{K} is odd and $1 \leq \card{K} \leq \floor{n/2}$ \item[(iii)] $\overline{A}_K \cup B_K^p$ for each $p$ satisfying $1 \leq p \leq n$, and each $K$ such that \cardinline{K} is even and $2 \leq \card{K} \leq \floor{n/2}$ \item[(iv)] $A$ \end{enumerate} It can be seen that the number of hyperedges\ in $H$ is at most $(n + 1) 2^{n-2} + (n - 1) m_c$ when $n$ is odd and $(n + 1)2^{n-2} + \binom{n}{n/2}/2 + (n - 1) \Big(m_c + \binom{n-2}{(n-2)/2}\Big)$ when $n$ is even. Mathews et al. \cite{mathews2015construction} showed that $H$ is non-2-colorable, which gives the following result. \begin{lemma} \[ m(n) \leq \begin{cases} (n + 1)2^{n-2} + (n - 1) \cdot m(n - 2) & \text{if } n \text{ is odd} \\ (n + 1)2^{n-2} + \binom{n}{n/2}/2 + (n - 1) \Big(m(n - 2) + \binom{n-2}{(n-2)/2}\Big) & \text{if } n \text{ is even} \end{cases} \] \end{lemma} This result improved the upper bounds on \mninline[13]\ and \mninline[17]\ to $m(13) \leq 357892$ and $m(17) \leq 14304336$, respectively. However, Mathews et al.
modified the above construction in the same paper to provide another construction that gives the following result. \begin{lemma} \[ m(n) \leq \begin{cases} (n + 4)2^{n-3} + (n - 2) \cdot m(n - 2) & \text{if } n \text{ is odd} \\ (n + 4)2^{n-3} + (n - 2) \cdot m(n - 2) + n \binom{n-2}{(n-2)/2}/2 + \binom{n}{n/2}/2 & \text{if } n \text{ is even} \end{cases} \] \end{lemma} This construction improved the upper bound on \mninline[11]\ to $m(11) \leq 25449$ and further improved the above-mentioned upper bounds on \mninline[13]\ and \mninline[17]\ to $m(13) \leq 297347$ and $m(17) \leq 13201419$, respectively. \section{Generalized Abbott-Hanson-Toft Construction \label{upper-const-one}} For any $k \geq 1$, we construct a non-2-colorable\ \nuniform\ hypergraph\ $H = (V, E)$ for an integer $n$ satisfying $n > 2k$. This construction uses a non-2-colorable\ \nuniform[(n - 2k)]\ hypergraph\ \cushgnot{c} with \hyperedgeset{E_c}{m_c}. Consider two disjoint sets of vertices \vertexset{A}{a}{n+k-1} and \vertexset{B}{b}{n+k-1}, each disjoint from $V_c$. Let us define $\mathcal{I}_i$ to be the collection of all $i$-element subsets of the set $\mathcal{N} = \{1, 2, \ldots, n + k - 1\}$. For an $I \in \mathcal{I}_{k-1}$, consider $K_I \subseteq \mathcal{N} \setminus I$. We define $A_I = \bigcup_{i \in I} \{a_i\}$, $B_I = \bigcup_{i \in I} \{b_i\}$, $A_{K_I} = \bigcup_{i \in K_I} \{a_i\}$, $B_{K_I} = \bigcup_{i \in K_I}\{b_i\}$, $\overline{A}_{K_I} = A \setminus (A_{K_I} \cup A_I)$ and $\overline{B}_{K_I} = B \setminus (B_{K_I} \cup B_I)$. The non-2-colorable\ \nuniform\ hypergraph\ $H = (V, E)$ is constructed with the vertex set $V = V_c \cup A \cup B$.
The hyperedge\ set $E$ consists of the following hyperedges: \begin{enumerate} \item[(i)] $e_i \cup A_I \cup B_I$ for each $I \in \mathcal{I}_k$ and all $i$ satisfying $1 \leq i \leq m_c$ \item[(ii)] $A_{K_I} \cup \overline{B}_{K_I}$ for each $K_I$ such that \cardinline{K_I} is odd and $1 \leq \card{K_I} \leq \floor{n/2}$, for each $I \in \mathcal{I}_{k-1}$ \item[(iii)] $\overline{A}_{K_I} \cup B_{K_I}$ for each $K_I$ such that \cardinline{K_I} is even and $0 \leq \card{K_I} \leq \floor{n/2}$, for each $I \in \mathcal{I}_{k-1}$ \end{enumerate} The number of hyperedges\ in $H$ is $\binom{n+k-1}{k} m_c + \binom{n+k-1}{k-1} 2^{n-1}$ when $n$ is odd and $\binom{n+k-1}{k} m_c + \binom{n+k-1}{k-1} \big( 2^{n-1} + \binom{n}{n/2}/2 \big)$ when $n$ is even. For a 2-coloring\ of $H$, we call \matchingv\ a \itt{matching pair} if both of its vertices are colored by the same color. For a given $I \in \mathcal{I}_{k-1}$, we define $A_{{blue}_I}$ to be the set of blue vertices in $A \setminus A_I$, and $B_{{blue}_I} = \{b_i : a_i \in A_{{blue}_I}\}$. Let $\overline{A}_{{blue}_I} = A \setminus (A_I \cup A_{{blue}_I})$ and $\overline{B}_{{blue}_I} = \{b_i : a_i \in \overline{A}_{{blue}_I}\}$. \begin{lemma} \label{lem-const-one-no-matching} Consider any $I \in \mathcal{I}_{k-1}$. If there is no matching pair of vertices between $A \setminus A_I$ and $B \setminus B_I$ in a 2-coloring\ \colorinline\ of the hypergraph\ $H$, then there exists at least one monochromatic hyperedge\ in the coloring \colorinline. \end{lemma} \begin{proof} Assume for the sake of contradiction that \colorinline\ is a proper 2-coloring\ of the hypergraph\ $H$ with no matching pair of vertices between $A \setminus A_I$ and $B \setminus B_I$. We arrive at a contradiction in each of the cases below. \begin{enumerate} \item[Case $1.$] $1 \leq \card{A_{{blue}_I}} \leq \floor{n/2}$: If \cardinline{A_{{blue}_I}} is odd, the hyperedge\ $A_{{blue}_I} \cup \overline{B}_{{blue}_I}$ is monochromatic\ in blue.
If \cardinline{A_{{blue}_I}} is even, the hyperedge\ $\overline{A}_{{blue}_I} \cup B_{{blue}_I}$ is monochromatic\ in red. \item[Case $2.$] $\floor{n/2} < \card{A_{{blue}_I}} < n$: If $n - \card{A_{{blue}_I}}$ is odd, the hyperedge\ $\overline{A}_{{blue}_I} \cup B_{{blue}_I}$ is monochromatic\ in red. If $n - \card{A_{{blue}_I}}$ is even, the hyperedge\ $A_{{blue}_I} \cup \overline{B}_{{blue}_I}$ is monochromatic\ in blue. \item[Case $3.$] $\card{A_{{blue}_I}} = 0$ or $\card{A_{{blue}_I}} = n$: If $\card{A_{{blue}_I}} = 0$, $A \setminus A_I$ is monochromatic\ in red. If $\card{A_{{blue}_I}} = n$, $A \setminus A_I$ is monochromatic\ in blue. \end{enumerate} \end{proof} \begin{proof}[Proof of Result \ref{result-gen-abt}] \label{the-const-one-final} Let us assume for the sake of contradiction that there exists a proper 2-coloring\ \colorinline\ of the hypergraph\ $H$. We know that the core hypergraph\ $H_c$ is a non-2-colorable\ \nuniform[(n-2k)]\ hypergraph\ and thus has a monochromatic hyperedge\ in the coloring \colorinline. Without loss of generality, assume this monochromatic hyperedge\ of $H_c$ to be red. The hyperedges\ added in Step (i) ensure that no more than $(k-1)$ matching pairs of red vertices exist in \colorinline. This implies that there exists an $I \in \mathcal{I}_{k-1}$ such that there is no matching pair of red vertices from $A' = A \setminus A_I$ and $B' = B \setminus B_I$. As a result, it follows from Lemma \ref{lem-const-one-no-matching} that there exists at least one matching pair of blue vertices from $A'$ and $B'$. Let \matchingv[p]\ be such a matching pair of blue vertices, where $a_p \in A'$ and $b_p \in B'$. This leads to a contradiction in each of the following cases. \begin{enumerate} \item[Case $1.$] $1 \leq \card{A_{{blue}_I}} \leq \ceil{n/2}$: If $\card{A_{{blue}_I}}$ is odd, the hyperedge\ $A_{{blue}_I} \cup \overline{B}_{{blue}_I}$ is monochromatic\ in blue.
If $\card{A_{{blue}_I}}$ is even, the hyperedge\ $\overline{B}_{{blue}_I} \cup \{b_p\} \cup (A_{{blue}_I} \setminus \{a_p\})$ is monochromatic\ in blue. \item[Case $2.$] $\ceil{n/2} < \card{A_{{blue}_I}} < n$: If $n - \card{A_{{blue}_I}}$ is odd, $\card{\overline{B}_{{blue}_I} \cup \{b_p\}}$ is even. Therefore, the hyperedge\ $\overline{B}_{{blue}_I} \cup \{b_p\} \cup (A_{{blue}_I} \setminus \{a_p\})$ is monochromatic\ in blue. If $n - \card{A_{{blue}_I}}$ is even, the hyperedge\ $A_{{blue}_I} \cup \overline{B}_{{blue}_I}$ is monochromatic\ in blue. \item[Case $3.$] $\card{A_{{blue}_I}} = n$: If $\card{A_{{blue}_I}} = n$, $A'$ is monochromatic\ in blue. \end{enumerate} This completes the proof that $H$ is non-2-colorable. Therefore, we arrive at the following result. \[ m(n) \leq \begin{cases} \binom{n+k-1}{k} m(n-2k) + \binom{n+k-1}{k-1}2^{n-1} & \text{if } n \text{ is odd} \\ \binom{n+k-1}{k} m(n-2k) + \binom{n+k-1}{k-1}\Big(2^{n-1} + \binom {n}{n/2}/2\Big) & \text{if } n \text{ is even} \end{cases} \] \end{proof} We set $k = 2$ in Result \ref{result-gen-abt} and use $m(9) \leq 2401$ from Table \ref{table-cur-known-upper} to get an improvement of the upper bound on \mninline[13]\ to $m(13) \leq \binom{14}{2} m(9) + \binom{14}{1} 2^{12} = 275835$. \subsubsection{Optimization of \mninline} From the construction above, we obtain the following for any integer $n$ greater than a large constant $n_0 > 0$. \begin{align*} m(n) &\leq \binom{n+k-1}{k} m(n-2k) + \binom{n+k-1}{k-1}\bigg(2^{n-1} + \binom {n}{\lfloor n/2 \rfloor}/2\bigg) \\ &\leq \binom{n+k-1}{k} m(n-2k) + \binom{n+k-1}{k-1} 2^n \\ \shortintertext{Let $k = np$, where $\frac{1}{n} \leq p < 0.5$. Therefore,} m(n) &\leq \binom{n+np-1}{np} m(n-2np) + \binom{n+np-1}{np-1} 2^n \\ &\leq \binom{n+np}{np} m(n-2np) + \binom{n+np}{np} 2^n.
\\ \shortintertext{Applying $\binom{n}{k} < (\frac{en}{k})^k$ and setting $n$ such that $n{(1-2p)}^i$ is an integer for all $i$ in the range $1 \leq i \leq s$, we obtain the following.} m(n) &< \bigg(\frac{e(n+np)}{np}\bigg)^{np} \Big(m(n-2np) + 2^n\Big) \\ &< \bigg(\frac{e(1+p)}{p}\bigg)^{np} \Bigg[2^n + \bigg(\frac{e(1+p)}{p}\bigg)^{(n-2np)p} \Bigg( m(n(1-2p)^2) + 2^{n-2np}\Bigg) \Bigg] \\ &\hspace{2cm} \vdots \\ &< m(n(1-2p)^s) \bigg(\frac{e(1+p)}{p}\bigg)^{\sum_{i=0}^{s-1} np(1-2p)^i} \\ & \hspace{1cm} + \sum_{i=0}^{s-1} 2^{n(1-2p)^i} \bigg(\frac{e(1+p)}{p}\bigg)^{\sum_{j=0}^{i} np(1-2p)^j}\\ &= m(n(1-2p)^s) \bigg(\frac{e(1+p)}{p}\bigg)^{\frac{n}{2}(1-(1-2p)^s)} \\ & \hspace{1cm} + \sum_{i=0}^{s-1} 2^{n(1-2p)^i} \bigg(\frac{e(1+p)}{p}\bigg)^{\frac{n}{2}(1-(1-2p)^{(i+1)})} \\ &\leq m(n(1-2p)^s) \bigg(\frac{e(1+p)}{p}\bigg)^{\frac{n}{2}} \\ & \hspace{1cm} + \bigg(\frac{e(1+p)}{p}\bigg)^{\frac{n}{2}} \sum_{i=0}^{s-1} \Bigg(\frac{2}{\Big(\frac{e(1+p)}{p}\Big)^{(1-2p)/2}}\Bigg)^{n(1-2p)^i}\\ \shortintertext{For any integer $c > 0$, we observe that there exists a constant $c' > \frac{\ln n - \ln c}{2 \ln n}$ such that $n(1-2p)^s < c$ for $s \geq c'n\ln n$. Using $s = c' n\ln n$ in the above equation, we have} m(n) &\leq \bigg(\frac{e(1+p)}{p}\bigg)^{\frac{n}{2}} \Bigg[ m(c) + \sum_{i=0}^{c'n\ln n - 1} \Bigg(\frac{2}{\Big(\frac{e(1+p)}{p}\Big)^{(1-2p)/2}}\Bigg)^{n(1-2p)^i} \Bigg]. \\ \shortintertext{We observe that $2 \Big(\big(\frac{e(1+p)}{p}\big)^{(1-2p)/2}\Big)^{-1}$ increases if $p$ increases and its value is less than 1 for $0 < p \leq 0.2381$. Using $p=0.2381$ in the above equation, we obtain} m(n) &< \Big(\frac{e(1+p)}{p}\Big)^{\frac{n}{2}} \big[ m(c) + c' n \ln n \big]\\ &= O(3.7596^n \cdot n \ln n)\\ &= O(3.76^n). \end{align*} \section{Multi-Core Construction \label{upper-const-two}} Consider an integer $k$ satisfying $0 < k < n$. We define $w = \floor{n/k}, x = n \mod k, y = \floor{k/x}$ and $z = k \mod x$. 
A multi-core construction makes use of a non-2-colorable\ \nuniform[(n-k)]\ hypergraph\ \cushgnot{c}, a total of $w$ identical non-2-colorable\ \nuniform[k]\ hypergraphs\ $\cushg{1}, \ldots, \cushg{w}$ and a total of $y$ identical non-2-colorable\ \nuniform[x]\ hypergraphs\ $\cushgprime{1}, \ldots, \cushgprime{y}$. The vertex sets of the hypergraphs\ $H_c$, $H_1, \ldots, H_w$, $H'_1, \ldots, H'_y$ are pairwise disjoint. Let us denote \hyperedgecustset{E_c}{e}{m_c}, \hyperedgecustset{E_1}{e^1}{m_k}, $\ldots$, \hyperedgecustset{E_w}{e^w}{m_k}, \hyperedgecustset{E'_1}{e'^1}{m_x}, $\ldots$, \hyperedgecustset{E'_y}{e'^y}{m_x}. Consider a vertex set \vertexset{A}{a}{x+z-1}, disjoint from each of $V_c$, $V_1, \ldots, V_w$, $V'_1, \ldots, V'_y$. We define $\mathcal{A}_p$ as the collection of all $p$-element subsets of the vertex set $A$. Let $\mathcal{E} = \{j_1 \cup j_2 \cup \cdots \cup j_w : (j_1, j_2, \ldots, j_w) \in E_1 \times E_2 \times \cdots \times E_w\}$ and $\mathcal{E'} = \{j'_1 \cup j'_2 \cup \cdots \cup j'_y : (j'_1, j'_2, \ldots, j'_y) \in E'_1 \times E'_2 \times \cdots \times E'_y\}$. We define the construction of a non-2-colorable\ \nuniform\ hypergraph\ $H = (V, E)$ as follows. The vertex set is $V = V_c \cup A \cup V_1 \cup \cdots \cup V_w \cup V'_1 \cup \cdots \cup V'_y$. The construction of the hyperedges\ belonging to $E$ depends on the values of $x$ and $z$ as follows. \begin{enumerate} \item[Case $1.$] For $x > 0$ and $z > 0$, $E$ contains the following hyperedges.
\begin{enumerate} \item[(i)] $e_i \cup e_j^l$ for every triple $i, j, l$ satisfying $1 \leq i \leq m_c$, $1 \leq j \leq m_k$ and $1 \leq l \leq w$ \label{mul-core-step-1} \item[(ii)] $e'^j_i \cup e$ for every triple $i, j, e$ satisfying $1 \leq i \leq m_x$, $1 \leq j \leq y$ and $e \in \mathcal{E}$ \label{mul-core-step-2} \item[(iii)] $e_i \cup e' \cup S$ for every triple $i, e', S$ satisfying $1 \leq i \leq m_c$, $e' \in \mathcal{E'}$ and $S \in \mathcal{A}_z$ \label{mul-core-step-3} \item[(iv)] $e \cup S$ for every pair $e, S$ satisfying $e \in \mathcal{E}$ and $S \in \mathcal{A}_x$ \label{mul-core-step-4} \end{enumerate} \item[Case $2.$] For $x > 0$ and $z = 0$, $E$ contains the following hyperedges. \begin{enumerate} \item[(i)] $e_i \cup e_j^l$ for every triple $i, j, l$ satisfying $1 \leq i \leq m_c$, $1 \leq j \leq m_k$ and $1 \leq l \leq w$ \item[(ii)] $e'^j_i \cup e$ for every triple $i, j, e$ satisfying $1 \leq i \leq m_x$, $1 \leq j \leq y$ and $e \in \mathcal{E}$ \item[(iii)] $e_i \cup e'$ for every pair $i, e'$ satisfying $1 \leq i \leq m_c$ and $e' \in \mathcal{E'}$ \end{enumerate} \item[Case $3.$] For $x = 0$, $E$ contains the following hyperedges. \begin{enumerate} \item[(i)] $e_i \cup e_j^l$ for every triple $i, j, l$ satisfying $1 \leq i \leq m_c$, $1 \leq j \leq m_k$ and $1 \leq l \leq w$ \item[(ii)] $e$ for each $e \in \mathcal{E}$ \end{enumerate} \end{enumerate} \noindent The number of hyperedges\ in $H$ is given by \begin{align*} \card{E} &= \begin{cases} w m_c m_k + y m_x (m_k)^w + \binom{x + z - 1}{z} m_c (m_x)^y + \binom{x + z - 1}{x} (m_k)^w & \text{if } x>0, z>0 \\ w m_c m_k + y m_x (m_k)^w + m_c (m_x)^y & \text{if } x>0, z=0 \\ w m_c m_k + (m_k)^w & \text{if } x=0 \end{cases} \end{align*} \begin{proof}[Proof of Result \ref{result-multicore}] \label{the-const-two-final} For the sake of contradiction, let us assume that \colorinline\ is a proper 2-coloring\ of $H$.
Without loss of generality, let the hypergraph\ $H_c$ be monochromatic in red in the coloring \colorinline. The hyperedges\ formed in Step (i) in each of the cases ensure that the hypergraphs\ $H_j$ are monochromatic in blue for each $j \in \{1, \ldots, w\}$. \begin{enumerate} \item[{Case} 1.] If $x > 0$ and $z > 0$, the hyperedges\ formed in Step (ii) ensure that the hypergraphs\ $H'_l$ are monochromatic in red for each $l \in \{1, 2, \ldots, y\}$. It can be noted from the hyperedges\ generated in Step (iii) that there are at most $z-1$ red vertices in the set $A$. This implies that $A$ has at least $x$ blue vertices. However, the hyperedges\ formed in Step (iv) ensure that there are at most $x-1$ blue vertices in $A$. Thus, we have a contradiction. \item[{Case} 2.] If $x > 0$ and $z = 0$, the hyperedges\ formed in Step (ii) ensure that the hypergraphs\ $H'_l$ are monochromatic in red for each $l \in \{1, 2, \ldots, y\}$. It can be easily noted that the hyperedges\ generated in Step (iii) then include a red monochromatic hyperedge. Thus, we have a contradiction. \item[{Case} 3.] If $x = 0$, it immediately follows that we have a blue monochromatic hyperedge\ among the hyperedges\ generated by Step (ii) of the construction. This leads to a contradiction. \end{enumerate} \noindent Thus, we have the following result on \mninline. \begin{align*} \shortintertext{If $x > 0$ and $z > 0$,} m(n) &\leq w \cdot m(n-k) m(k) + y \cdot m(k)^w m(x) \\ & \hspace{1cm} + \tbinom{x+z-1}{z} m(n-k) m(x)^y + \tbinom{x+z-1}{x} m(k)^w. \\ \shortintertext{If $x > 0$ and $z = 0$,} m(n) &\leq w \cdot m(n-k) m(k) + y \cdot m(k)^w m(x) + m(n-k) m(x)^y. \\ \shortintertext{If $x = 0$,} m(n) &\leq w \cdot m(n-k) m(k) + m(k)^w.
\end{align*} \end{proof} These recurrence relations give improvements on \mninline[n]\ for $n = 8, 13, 14, 16$ and $17$ as follows: \begin{itemize} \item For $n = 8$ and $k = 5$, we have $m(8) \leq m(3) m(5) + m(5) m(3) + \tbinom{4}{2} m(3) m(3) + \tbinom{4}{3} m(5) \leq 1212$ by using $m(3) = 7$ and $m(5) \leq 51$ from Table \ref{table-cur-known-upper}. \item For $n = 13$ and $k = 5$, we obtain $m(13) \leq 2 m(8) m(5) + m(5)^2 m(3) + \tbinom{4}{2} m(8) m(3) + \tbinom{4}{3} m(5)^2 \leq 203139$ by using $m(3) = 7$ and $m(5) \leq 51$ from Table \ref{table-cur-known-upper} and $m(8) \leq 1212$ obtained above. \item For $n = 14$ and $k = 5$, the recurrence relation gives $m(14) \leq 2 m(9) m(5) + m(5)^2 m(4) + \tbinom{4}{1} m(9) m(4) + \tbinom{4}{4} m(5)^2 \leq 528218$ by using $m(4) = 23$, $m(5) \leq 51$ and $m(9) \leq 2401$ from Table \ref{table-cur-known-upper}. \item For $n = 16$ and $k = 7$, we have $m(16) \leq 2 m(9) m(7) + 3 m(7)^2 m(2) + \tbinom{2}{1} m(9) m(2)^3 + \tbinom{2}{2} m(7)^2 \leq 3923706$ by using $m(2) = 3$, $m(7) \leq 421$ and $m(9) \leq 2401$ from Table \ref{table-cur-known-upper}. \item Finally, for $n = 17$ and $k = 7$, we obtain $m(17) \leq 2 m(10) m(7) + 2 m(7)^2 m(3) + \tbinom{3}{1}m(10) m(3)^2 + \tbinom{3}{3} m(7)^2 \leq 10375782$ by using $m(3) = 7$, $m(7) \leq 421$ and $m(10) \leq 7803$ from Table \ref{table-cur-known-upper}. \item It can also be noted that this construction matches the currently best-known\ upper bounds on $m(6)$ and $m(10)$ for $k = 3$ and $k = 5$, respectively. \end{itemize} \section{Block Construction} \label{upper-const-five} For an integer $k > 0$, we describe the construction of a collection $\mathcal{H}$ of non-2-colorable\ \nuniform\ hypergraphs. Any hypergraph\ $H = (V, E)$ belonging to this collection is constructed using a non-2-colorable\ \nuniform[(n-2k)]\ hypergraph\ denoted by \cushgnot{c} and two disjoint collections of hypergraphs\ $\mathcal{A}$ and $\mathcal{B}$. Let \hyperedgecustset{E_c}{e}{m_c}.
Let \vertexset{\mathcal{A}}{H}{t} and \vertexset{\mathcal{B}}{H'}{t} be the collections of hypergraphs\ such that each of \cushgnot{i} and \cushgprimenot{i} is an identical copy of a non-2-colorable\ \nuniform[k_i]\ hypergraph\ satisfying $k_i \geq k$ and $\sum_{i=1}^{t} k_i \geq n$. Note that the sets $V_c, V_1, V_2, \ldots, V_t, V'_1, V'_2, \ldots, V'_t$ are pairwise disjoint. Let $P = \{i_1, i_2, \ldots, i_p\} \subseteq \{1, 2, \ldots, t\}$ such that $1 \leq i_1 < i_2 < \ldots < i_p \leq t$. Using the Cartesian products $\mathcal{C}_P = E_{i_1} \times E_{i_2} \times \cdots \times E_{i_p}$ and $\mathcal{C}'_P = E'_{i_1} \times E'_{i_2} \times \cdots \times E'_{i_p}$, let us define the collections of hyperedges\ $\mathcal{A}_P$ and $\mathcal{B}_P$ as $\mathcal{A}_P = \{ j_1 \cup j_2 \cup \cdots \cup j_p : (j_1, j_2, \ldots, j_p) \in \mathcal{C}_P\}$ and $\mathcal{B}_P = \{ j'_1 \cup j'_2 \cup \cdots \cup j'_p : (j'_1, j'_2, \ldots, j'_p) \in \mathcal{C}'_P\}$, respectively. Also, let $\overline{P} = \{1, 2, \ldots, t\} \setminus P$. The hypergraph\ $H$ has the vertex set $V = V_c \cup V_1 \cup \cdots \cup V_t \cup V'_1 \cup \cdots \cup V'_t$ and the hyperedge\ set $E$ is generated from the following hyperedges, each containing at least $n$ vertices.
\begin{enumerate} \item[(i)] For each $j$ satisfying $1 \leq j \leq t$, $e_i \cup e_{H_j} \cup e_{H'_j}$ for every triple $i, e_{H_j}, e_{H'_j}$ satisfying $1 \leq i \leq m_c$, $e_{H_j} \in E_j$ and $e_{H'_j} \in E'_j$ \label{const-four-step1} \item[(ii)] For each $P \subset \{1, 2, \ldots, t\}$ such that \cardinline{P} is odd and $1 \leq \card{P} \leq \floor{t/2}$, $e_{H} \cup e_{H'}$ for every pair $e_H, e_{H'}$ satisfying $e_{H} \in \mathcal{A}_P$ and $e_{H'} \in \mathcal{B}_{\overline{P}}$ \label{const-four-step2} \item[(iii)] For each $P \subset \{1, 2, \ldots, t\}$ such that \cardinline{P} is even and $0 \leq \card{P} \leq \floor{t/2}$, $e_{H} \cup e_{H'}$ for every pair $e_H, e_{H'}$ satisfying $e_{H} \in \mathcal{A}_{\overline{P}}$ and $e_{H'} \in \mathcal{B}_P$ \label{const-four-step3} \end{enumerate} We select an arbitrary set of $n$ vertices from each of the hyperedges\ generated above to form the hyperedge\ set $E$. In case a hyperedge\ is included more than once in $E$ by this process, we keep only one of those to ensure that $E$ is not a multi-set. Let us count the number of hyperedges\ added to the hyperedge\ set $E$. Step $(i)$ adds at most $\card{E_c} \sum_{i=1}^{t} \card{E_i} \card{E'_i} = \sum_{i=1}^{t} \card{E_i}^2 \card{E_c}$ hyperedges, whereas Steps $(ii)$ and $(iii)$ together add at most $\prod_{i=1}^{t} \card{E_i} \big(1 + \binom{t}{1} + \ldots + \binom{t}{\floor{t/2}} \big)$ hyperedges. Note that $\card{E} \leq \sum_{i=1}^{t} \card{E_i}^2 \card{E_c} + 2^{t-1} \prod_{i=1}^{t} \card{E_i}$ when $t$ is odd, and $\card{E} \leq \sum_{i=1}^{t} \card{E_i}^2 \card{E_c} + \big(2^{t-1} + \binom{t}{t/2}/2 \big) \prod_{i=1}^{t} \card{E_i}$ when $t$ is even. In the following lemma, we prove that $H$ is non-2-colorable\ by showing that any proper 2-coloring of $H$ can be used to obtain a proper 2-coloring of any \nuniform[t]\ hypergraph\ constructed by Abbott-Hanson-Toft construction. \begin{lemma} $H$ is non-2-colorable. 
\label{upper-const-block-lemma} \end{lemma} \begin{proof} Consider any \nuniform[t]\ hypergraph\ \cushgnot{{AHT}} constructed by Abbott-Hanson-Toft construction using a non-2-colorable\ \nuniform[(t-2)]\ core hypergraph\ and two disjoint vertex sets $\{p_1, \ldots, p_t\}$ and $\{q_1, \ldots, q_t\}$. Assuming for the sake of contradiction that a proper 2-coloring\ exists for $H$, we give a proper 2-coloring\ for $H_{AHT}$ as follows. \begin{itemize} \item Color all vertices of the non-2-colorable\ \nuniform[(t-2)]\ core hypergraph\ of $H_{AHT}$ with the color of the monochromatic hyperedge\ of $H_c$ used in the construction of $H$. \item Color each vertex $p_i$ with the color of the monochromatic hyperedge\ of $H_i$ used in the construction of $H$. \item Similarly, color each vertex $q_i$ with the color of the monochromatic hyperedge\ of $H'_i$ used in the construction of $H$. \end{itemize} Using the hyperedges\ generated in Steps (i)--(iii), one can verify that this yields a proper 2-coloring\ of $H_{AHT}$. Since $H_{AHT}$ is non-2-colorable, we have a contradiction. As a result, we have the following recurrence relation. \begin{align*} m(n) \leq \begin{cases} m(n - 2k) \sum_{i=1}^{t} m(k_i)^2 + 2^{t-1} \prod_{i=1}^{t} m(k_i) & \text{ if } t \text{ is odd} \\ m(n - 2k) \sum_{i=1}^{t} m(k_i)^2 + \big(2^{t-1} + \binom{t}{t/2}/2 \big) \prod_{i=1}^{t} m(k_i) & \text{ if } t \text{ is even} \end{cases} \end{align*} \end{proof} Consider the special case when $n = 3k+1$. Setting the values of $t$ and $k_i$'s as $t = 3$, $k_1 = k + 1$ and $k_2 = k_3 = k$ in this special case, we obtain the following recurrence relation. \begin{align} m(3k+1) & \leq m(k+1)^3 + 6 m(k)^2 m(k+1) \label{eq-block-con-gen-3k1} \end{align} \noindent We give an improvement of this result below. \subsubsection*{Modified Block Construction} Let us first repeat the detailed description for the special case mentioned above, i.e., the construction of a non-2-colorable\ \nuniform[(3k+1)]\ hypergraph\ $H = (V, E)$ belonging to $\mathcal{H}$.
We construct $H$ using a non-2-colorable\ \nuniform[(k+1)]\ hypergraph\ \cushgnot{c}\ along with non-2-colorable\ \nuniform[(k+1)]\ hypergraphs\ \cushgnot{1}, \cushgprimenot{1} and non-2-colorable\ \nuniform[k]\ hypergraphs\ \cushgnot{2}, \cushgprimenot{2}, \cushgnot{3}, \cushgprimenot{3}. In the general construction above, each $H'_i$ is an identical copy of $H_i$ for $1 \leq i \leq 3$. For the modified construction described below, we set $H_1$ as the Abbott-Hanson-Toft construction that uses a non-2-colorable\ \nuniform[(k-1)] core hypergraph\ \cushgnot{{1c}} and disjoint vertex sets \vertexset{A}{a}{k+1}, \vertexset{B}{b}{k+1}. Note that $H'_1$ is not necessarily identical to $H_1$ in this modified block construction, whereas each $H'_i$ is an identical copy of $H_i$ for $2 \leq i \leq 3$. Using the notations introduced above, the vertex set of $H$ is $V = V_c \cup V_{1c} \cup A \cup B \cup V'_1 \cup V_2 \cup V'_2 \cup V_3 \cup V'_3$. The hyperedge\ set $E$ is generated from the following hyperedges. \begin{enumerate} \item[(a)] $e_{H_c} \cup e_{H_1} \cup e_{H'_1}$ for every triple $e_{H_c}, e_{H_1}, e_{H'_1}$ satisfying $e_{H_c} \in E_c$, $e_{H_1} \in E_1$ and $e_{H'_1} \in E'_1$ \item[(b)] $e_{H_c} \cup e_{H_2} \cup e_{H'_2}$ for every triple $e_{H_c}, e_{H_2}, e_{H'_2}$ satisfying $e_{H_c} \in E_c$, $e_{H_2} \in E_2$ and $e_{H'_2} \in E'_2$ \item[(c)] $e_{H_c} \cup e_{H_3} \cup e_{H'_3}$ for every triple $e_{H_c}, e_{H_3}, e_{H'_3}$ satisfying $e_{H_c} \in E_c$, $e_{H_3} \in E_3$ and $e_{H'_3} \in E'_3$ \item[(d)] $e_{H_1} \cup e_{H'}$ for every pair $e_{H_1}, e_{H'}$ satisfying $e_{H_1} \in E_1$ and $e_{H'} \in \{j'_2 \cup j'_3 : (j'_2, j'_3) \in E'_2 \times E'_3\}$ \item[(e)] $e_{H_2} \cup e_{H'}$ for every pair $e_{H_2}, e_{H'}$ satisfying $e_{H_2} \in E_2$ and $e_{H'} \in \{j'_1 \cup j'_3 : (j'_1, j'_3) \in E'_1 \times E'_3\}$ \item[(f)] $e_{H_3} \cup e_{H'}$ for every pair $e_{H_3}, e_{H'}$ satisfying $e_{H_3} \in E_3$ and $e_{H'} \in \{j'_1 \cup j'_2 : (j'_1, j'_2) \in E'_1 \times E'_2\}$ \item[(g)] All elements of the
set $\{j_1 \cup j_2 \cup j_3 : (j_1, j_2, j_3) \in E_1 \times E_2 \times E_3\}$ \end{enumerate} Note that each of the hyperedges\ formed in Steps (b) to (g) has $3k + 1$ vertices. However, each of the hyperedges\ formed in Step (a) has $3k + 3$ vertices. We can remove any two vertices from each of these hyperedges\ to obtain the following recurrence relation. Recall that $m_{H_{1c}}(k+1)$ denotes the number of hyperedges\ in the non-2-colorable\ \nuniform[(k+1)]\ hypergraph\ constructed by Abbott-Hanson-Toft construction that uses $H_{1c}$ as its core hypergraph. \begin{align} m(3k+1) \leq m_{H_{1c}}(k+1)m(k+1)^2 + 2m_{H_{1c}}(k+1)m(k)^2 + 4m(k+1)m(k)^2 \label{eq-block-con-abt} \end{align} Whenever $m(k+1) < m_{H_{1c}}(k+1)$, it is evident that the upper bound on \mninline[3k+1]\ that this recurrence relation gives is worse than the one given by Eq. \ref{eq-block-con-gen-3k1}. However, we observe that we can improve Eq. \ref{eq-block-con-abt} by carefully selecting the two vertices to be removed from each hyperedge\ formed in Step (a). Recall that each of these hyperedges\ is a union of three hyperedges\ $e_{H_c} \in E_c$, $e_{H_1} \in E_1$ and $e_{H'_1} \in E'_1$. In the following paragraph, we describe a process to create a set of $k-1$ vertices from each hyperedge\ in the \nuniform[(k+1)]\ hypergraph\ \cushgnot{1}. For each hyperedge\ $e_{H_c} \cup e_{H_1} \cup e_{H'_1}$ formed in Step (a), we use this process to remove two vertices from $e_{H_1}$. \noindent Given a hyperedge\ $h \in E_1$, we create a set $h'$ containing $k-1$ vertices as follows. \begin{enumerate} \item[{Case} 1.] If $h$ is created by Step (i) of Abbott-Hanson-Toft construction, i.e., if $h$ is of the form $e \cup \{a_i\} \cup \{b_i\}$ for some $e \in E_{1c}$, $a_i \in A$ and $b_i \in B$, we define $h' = e$. In other words, we remove $a_i$ and $b_i$ from $h$ to create $h'$. \item[{Case} 2.]
If $h$ is created in Step (ii) of Abbott-Hanson-Toft construction, i.e., if $h$ is of the form $A_K \cup \overline{B}_{K}$ for some $K \subset \{1, \ldots, k + 1\}$ such that \cardinline{K} is odd and $1 \leq \card{K} \leq \floor{(k+1)/2}$, we define $h' = (A_K \cup \overline{B}_{K}) \setminus \{a_{k}, a_{k + 1}, b_k, b_{k + 1}\}$. \item[{Case} 3.] If $h$ is created in Step (iii) of Abbott-Hanson-Toft construction, i.e., if $h$ is of the form $\overline{A}_K \cup B_{K}$ for some $K \subset \{1, \ldots, k + 1\}$ such that \cardinline{K} is even and $2 \leq \card{K} \leq \floor{(k+1)/2}$, we define $h' = (\overline{A}_K \cup B_{K}) \setminus \{a_{k}, a_{k + 1}, b_k, b_{k + 1}\}$. \item[{Case} 4.] If $h$ is created in Step (iv) of Abbott-Hanson-Toft construction, i.e., if $h = A$, we define $h' = A \setminus \{a_k, a_{k+1}\}$. \end{enumerate} \noindent This completes the construction of the \nuniform[(3k+1)]\ hypergraph\ $H$. \begin{proof}[Proof of Result \ref{result-block}] We improve the recurrence relation given in Eq. \ref{eq-block-con-abt} as a result of selecting $k-1$ vertices from each $h \in E_1$, as described above. Since this process generates multiple copies of some $(k-1)$-element vertex sets, the number of distinct hyperedges\ formed in Step (a) in the construction of $H$ is reduced. Let us determine the cardinality of the set $\{h' : h' \text{ is generated from some } h \in E_1\}$. It is easy to observe that the number of distinct $h'$'s formed in Case 1 is $\card{E_{1c}}$. On the other hand, the total number of distinct $h'$'s formed in Cases 2, 3 and 4 is at most $2^{k-1}$. This follows from the fact that there are $2^{k-1}$ subsets of $A \setminus \{a_k, a_{k+1}\}$ and each $h'$ formed in one of the Cases 2, 3 and 4 is a union of the sets $\bigcup_{i \in P} \{a_i\}$ and $\bigcup_{i \in \{1, \ldots, k-1\} \setminus P} \{b_i\}$ for some $P \subseteq \{1, \ldots, k-1\}$.
Since we have shown in Lemma \ref{upper-const-block-lemma} that $H$ is non-2-colorable, we have the following improvement over Eq. \ref{eq-block-con-abt}. \begin{align*} m(3k+1) &\leq (m(k-1) + 2^{k-1}) m(k+1)^2 \\& \hspace{1cm} + 2m_{H_{1c}}(k+1) m(k)^2 + 4m(k+1) m(k)^2 \end{align*} \end{proof} \noindent This result improves the upper bounds on \mninline\ for $n = 13$ and 16 as follows. \begin{itemize} \item For $n = 13$, we have $k = 4$. Note that $m_{H_{1c}}(5) = 51$ when the Fano plane \cite{klein1870theorie} $H_f$ having 7 hyperedges\ is used as the core hypergraph\ $H_{1c}$. Therefore, we obtain $m(13) \leq (m(3) + 2^3)m(5)^2 + 2m_{H_{1c}}(5)m(4)^2 + 4m(5)m(4)^2 \leq 200889$ by using $m(3) = 7$, $m(4) = 23$ and $m(5) \leq 51$ from Table \ref{table-cur-known-upper}. \item For $n = 16$, we have $k = 5$. Note that $m_{H_{1c}}(6) = 180$ when the non-2-colorable\ \nuniform[4]\ hypergraph\ $H_s$ with 23 hyperedges\ is used as the core hypergraph\ $H_{1c}$. Therefore, we obtain $m(16) \leq (m(4) + 2^{4})m(6)^2 + 2m_{H_{1c}}(6)m(5)^2 + 4m(6)m(5)^2 \leq 3308499$ by using $m(4) = 23$, $m(5) \leq 51$ and $m(6) \leq 147$ from Table \ref{table-cur-known-upper}. \end{itemize} \section{Improved Lower Bound for $m(5)$} \label{our-work-low} For the sake of completeness, we begin this section with a proof of the result given by Goldberg and Russell \cite{Goldberg93towardcomputing} for the lower bounds on \mninline\ for small values of $n$. This result uses Lemma \ref{lem:lower_erdos} and Lemma \ref{lem:lower_schonheim} in its proof. Let \mvninline{l}{n} be the minimum number of hyperedges\ in a non-2-colorable\ \nuniform\ hypergraph\ with $l$ vertices. \begin{lemma}\cite{erdos1969combinatorial} $m_{2n-1}(n) = m_{2n}(n) = \tbinom{2n - 1}{n}$. \label{lem:lower_erdos} \end{lemma} \begin{lemma}\cite{colbourn2006handbook} (Sch\"onheim bound) Consider positive integers $l \geq n \geq t \geq 1$ and $\lambda \geq 1$.
Any \nuniform\ hypergraph\ with $l$ vertices such that every $t$-subset of its vertices is contained in at least $\lambda$ hyperedges\ has at least $\Big\lceil \frac{l}{n} \Big\lceil \frac{l - 1}{n - 1} \cdots \Big\lceil \frac{\lambda (l - t + 1)}{n - t + 1}\Big\rceil \cdots \Big\rceil \Big\rceil $ hyperedges. \label{lem:lower_schonheim} \end{lemma} \begin{lemma} \cite{Goldberg93towardcomputing} If $n \geq 4$, then $\displaystyle m(n) \geq \min_{x > 2n, x \in \mathbb{N}} \bigg\{ \max \bigg\{\mnlowerblocked{x}{n}, \mnlowerschoheim{x}{n}{n-1} \bigg\} \bigg\}$. \label{lem:goldberg} \end{lemma} \begin{proof} Let us consider an \nuniform\ hypergraph\ $H = (V, E)$ such that the number of hyperedges\ satisfies $\displaystyle \card{E} < \min_{x > 2n, x \in \mathbb{N}} \bigg\{ \max \bigg\{ \mnlowerblocked{x}{n}, \mnlowerschoheim{x}{n}{n-1} \bigg\} \bigg\}$. We call a 2-coloring\ of the hypergraph\ \itt{balanced} if the coloring has $\floor{\card{V}/2}$ red vertices and $\ceil{\card{V}/2}$ blue vertices. Note that the number of possible balanced colorings of $H$ is $\binom{\card{V}}{\floor{\card{V}/2}}$, and not all of these are proper 2-colorings. Let us define $f(x) = \mnlowerblocked{x}{n}$, $g(x) = \mnlowerschoheim{x}{n}{n-1}$ and $\displaystyle r = \min_{x > 2n, x \in \mathbb{N}} \big\{ \max \big\{ f(x), g(x) \big\} \big\}$. Let this minimum value $r$ be attained at $x = v_{\text{opt}}$. When $x > 2n$, observe that $f(x)$ is non-increasing and $g(x)$ is non-decreasing with increasing $x \in \mathbb{N}$. Moreover, we also observe that $\binom{2n-1}{n} \geq f(2n+1) \geq g(2n+1)$ for $n \geq 4$. \begin{enumerate} \item[{Case }1.] If $n \leq \card{V} \leq 2n - 2$, any balanced coloring of its vertex set is a proper 2-coloring\ of $H$. \item[{Case }2.] If $\card{V} = 2n - 1$ or $\card{V} = 2n$, it follows from Lemma \ref{lem:lower_erdos} that \mvninline{2n-1}{n} = \mvninline{2n}{n} = $\binom{2n - 1}{n}$.
Since $\card{E} < r \leq \binom{2n - 1}{n}$, $H$ is properly 2-colorable. \item[{Case }3.] If $2n + 1 \leq \card{V} \leq v_{\text{opt}}$, consider a balanced coloring of $H$. We say that such a coloring is \itt{blocked} by a hyperedge\ if the hyperedge\ is monochromatic\ in the coloring. Note that a red monochromatic hyperedge\ blocks $\tbinom{ \card{V}- n}{\lfloor \card{V}/2 \rfloor - n}$ and a blue monochromatic hyperedge\ blocks $\binom{\card{V} - n}{\ceil{\card{V}/2} - n}$ such colorings. In order to ensure that none of these balanced colorings is a proper 2-coloring\ of $H$, we need at least $f(\card{V})$ hyperedges. Since $\card{E} < r \leq f(\card{V})$ for $2n + 1 \leq \card{V} \leq v_{\text{opt}}$, at least one of the balanced colorings of $H$ is a proper 2-coloring\ of it. \item[{Case }4.] If $\card{V} > v_{\text{opt}}$, assume the induction hypothesis that any \nuniform\ hypergraph\ with $\card{V} - 1$ vertices and $\card{E}$ hyperedges\ is properly 2-colorable. The base case $\card{V} = v_{\text{opt}}$ is proved in Case 3. If there exists a pair of vertices $\{v_i, v_j\}$ not contained together in any hyperedge\ of $H$, consider a new hypergraph\ $H' = (V', E')$ constructed by merging $v_i$ and $v_j$ into a new vertex $v$. Since $H'$ is \nuniform\ with $\card{V'} = \card{V} - 1$ and $\card{E'} = \card{E}$, we know from the induction hypothesis that $H'$ is properly 2-colorable. This coloring of $H'$ can be extended to a proper 2-coloring\ of $H$ by assigning the color of $v$ to $v_i$ and $v_j$. Since $\card{E} < r$ and, by Lemma \ref{lem:lower_schonheim}, at least $g(\card{V}) \geq r$ hyperedges\ are required to ensure that each pair of vertices is contained in at least one hyperedge, we are guaranteed to have a pair of vertices $\{v_i, v_j\}$ not contained together in any hyperedge\ of $H$. \end{enumerate} \end{proof} Lemma \ref{lem:goldberg} implies that $m(5) \geq 28$, where the minimum is attained at $x=23$.
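The minimization in Lemma \ref{lem:goldberg} is easy to evaluate numerically. The following short script (a sanity check of ours, not part of the paper; the helper names \texttt{f} and \texttt{g} mirror the notation of the proof, and the blocking count in \texttt{f} is our reading of Case 3) reproduces $m(5) \geq 28$, attained at $x = 23$.

```python
from math import ceil, comb

def f(x, n):
    # Hyperedges needed to block every balanced 2-coloring of x vertices:
    # a fixed n-edge is monochromatic in comb(x-n, floor(x/2)-n) colorings
    # (all red) plus comb(x-n, ceil(x/2)-n) colorings (all blue).
    lo, hi = x // 2, x - x // 2
    blocked = comb(x - n, lo - n) + comb(x - n, hi - n)
    return ceil(comb(x, lo) / blocked)

def cdiv(a, b):
    # exact integer ceiling division, to avoid float rounding issues
    return -(-a // b)

def g(x, n):
    # Schonheim bound for covering every vertex pair (t = 2, lambda = 1)
    return cdiv(x * cdiv(x - 1, n - 1), n)

n = 5
bound, x_opt = min((max(f(x, n), g(x, n)), x) for x in range(2 * n + 1, 200))
print(bound, x_opt)  # 28 23
```

The integer ceiling division avoids the off-by-one errors that float arithmetic can introduce when the Schönheim product is an exact integer.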
We improve this to $m(5) \geq 29$ using the following lemma. The first three cases of the proof for this improved lower bound are the same as the ones used in the proof above. We use Lemma \ref{lem:radhakrishnan_srini} to improve Case 4 of the proof. \begin{lemma}\cite{radhakrishnan2000improved} Consider a positive integer $\gamma$ and a fraction $p \in [0, 1]$. Any \nuniform\ hypergraph\ $H = (V, E)$ satisfying $\card{\{ \{e_1, e_2\} : e_1, e_2 \in E,\ \card{e_1 \cap e_2} = 1 \}} \leq \gamma$ is properly 2-colorable\ if $2^{-n+1} (1 - p)^n \card{E} + 4 \gamma \big( 2^{-2n+1} p \int_0^1 (1 - (xp)^2)^{n-1} \mathrm{d}x \big) < 1$. \label{lem:radhakrishnan_srini} \end{lemma} \begin{proof}[Proof of Result \ref{result-improv-lower}] Let us consider a \nuniform[5]\ hypergraph\ $H = (V, E)$ with at most 28 hyperedges. We show that it is properly 2-colorable. \begin{enumerate} \item[{Case }1.] If $5 \leq \card{V} \leq 8$, any balanced coloring of its vertex set is a proper 2-coloring\ of $H$. \item[{Case }2.] If $\card{V} = 9$ or $\card{V} = 10$, it follows from Lemma \ref{lem:lower_erdos} that \mvninline{9}{5} = \mvninline{10}{5} = 126. Since $\card{E} \leq 28$, $H$ has a proper 2-coloring. \item[{Case }3.] If $11 \leq \card{V} \leq 22$, consider a balanced coloring of $H$. We observe that a red monochromatic hyperedge\ blocks $\tbinom{ \card{V}- 5}{\lfloor \card{V}/2 \rfloor - 5}$ and a blue monochromatic hyperedge\ blocks $\binom{\card{V} - 5}{\ceil{\card{V}/2} - 5}$ such colorings. In order to ensure that none of these balanced colorings is a proper 2-coloring\ of $H$, we need at least $\mnlowerblocked{\card{V}} {5}$ hyperedges. Since $11 \leq \card{V} \leq 22$, we need at least 29 hyperedges\ to ensure that no balanced coloring of $H$ is a proper 2-coloring. \item[{Case }4.]
If $\card{V} = 23$ and there exists a pair of vertices $\{v_i, v_j\}$ not contained together in any hyperedge\ of $H$, we construct a new hypergraph\ $H' = (V', E')$ by merging vertices $v_i$ and $v_j$ into a new vertex $v$. We observe that $H'$ is \nuniform[5]\ with 22 vertices and \cardinline{E} hyperedges. It follows from Case 3 that $H'$ is properly 2-colorable. This coloring of $H'$ can be extended to a proper 2-coloring\ of $H$ by assigning the color of $v$ to $v_i$ and $v_j$. If $\card{E} \leq 27$, note that Lemma \ref{lem:lower_schonheim} ensures that there exists a pair of vertices not contained together in any hyperedge\ of $H$. Therefore, it remains to consider the case where $\card{E} = 28$ and every pair of vertices is contained in at least one hyperedge\ of $H$. For such a hypergraph, we show that the cardinality of the set $\{ \{e_1, e_2\} : e_1, e_2 \in E,\ \card{e_1 \cap e_2} = 1 \}$ is at most 335. Setting $p = 0.3, \gamma = 335, n = 5$ and $\card{E} = 28$ in Lemma \ref{lem:radhakrishnan_srini}, we observe that $H$ is properly 2-colorable\ since $2^{-n+1} (1 - p)^n \card{E} + 4 \gamma \cdot 2^{-2n+1} p \int_0^1 (1 - (xp)^2)^{n-1} \mathrm{d}x < 1$. In order to show that the cardinality of the set $\{ \{e_1, e_2\} : e_1, e_2 \in E,\ \card{e_1 \cap e_2} = 1 \}$ is at most 335, we consider the degree sequence of $H$. Note that the \itt{degree} of a vertex is the number of hyperedges\ it is contained in, and the \itt{degree sequence} of a hypergraph\ is the ordering of the degrees of its vertices in non-increasing order. Consider an arbitrary vertex $u$ of $H$. Observe that there are 22 distinct vertex pairs involving $u$ and any hyperedge\ containing $u$ covers 4 such pairs. Therefore, the degree of $u$ is at least $\lceil 22/4 \rceil = 6$ and, since $6 \cdot 4 = 24 > 22$, there exists another vertex $u'$ such that $\{u, u'\}$ is contained in at least two different hyperedges\ of $H$.
Since the sum of the degrees of the vertices of $H$ is $28 \cdot 5 = 140$, the only possible degree sequences of $H$ are $\langle 8, 6, \ldots, 6 \rangle$ and $\langle 7, 7, 6, \ldots, 6 \rangle$. For the first sequence, the cardinality of the set $\{ \{e_1, e_2\} : e_1,e_2 \in E,\ \card{e_1 \cap e_2} = 1 \}$ is upper bounded by $(\binom{6}{2} - 1) \cdot 22 + (\binom{8}{2} - 1) = 335$. For the second sequence, it is upper bounded by $(\binom{6}{2} - 1) \cdot 21 + (\binom{7}{2} - 1) \cdot 2 = 334$. \item[{Case }5.] If $\card{V} \geq 24$, assume the induction hypothesis that any \nuniform[5]\ hypergraph\ with $\card{V} - 1$ vertices and $\card{E}$ hyperedges\ is properly 2-colorable. The base case $\card{V} = 23$ is proved in Case 4. If there exists a pair of vertices $\{v_i, v_j\}$ not contained together in any hyperedge\ of $H$, consider a new hypergraph\ $H' = (V', E')$ constructed by merging $v_i$ and $v_j$ into a new vertex $v$. Since $H'$ is \nuniform[5] with $\card{V'} = \card{V} - 1$ and $\card{E'} = \card{E}$, we know from the induction hypothesis that $H'$ is properly 2-colorable. This coloring of $H'$ can be extended to a proper 2-coloring\ of $H$ by assigning the color of $v$ to $v_i$ and $v_j$. Since it follows from Lemma \ref{lem:lower_schonheim} that the minimum number of hyperedges\ required to ensure that each pair of vertices is contained in at least one hyperedge\ is $\mnlowerschoheim{\card{V}}{5}{4} \geq 29$, we are guaranteed to have a pair of vertices $\{v_i, v_j\}$ not contained together in any hyperedge\ of $H$. \end{enumerate} \end{proof} \section{Conclusion \label{concl-open}} In this paper, we establish the lower bound $m(5) \geq 29$, which is still far from the best-known\ upper bound $m(5) \leq 51$. We also establish improved upper bounds for \mninline[8], \mninline[13], \mninline[14], \mninline[16]\ and \mninline[17]. In Table \ref{table-new-known-bounds}, we highlight these improved bounds on \mninline\ for $n \leq 17$.
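As a sanity check (ours, not part of the paper), the arithmetic behind the two new upper bounds from the modified block construction, and the pair-intersection counts from Case 4 of the proof above, can be re-derived in a few lines; the variable names below are hypothetical shorthands for the quantities in the text.

```python
from math import comb

# Known values / bounds: m(3) = 7, m(4) = 23, m(5) <= 51, m(6) <= 147.
m3, m4, m5, m6 = 7, 23, 51, 147
m_f = 51   # m_{H_f}(5): AHT construction over the Fano plane
m_s = 180  # m_{H_s}(6): AHT construction over the 23-edge 4-uniform H_s

m13 = (m3 + 2**3) * m5**2 + 2 * m_f * m4**2 + 4 * m5 * m4**2
m16 = (m4 + 2**4) * m6**2 + 2 * m_s * m5**2 + 4 * m6 * m5**2
print(m13, m16)  # 200889 3308499

# Case 4 pair-intersection bounds for the two possible degree sequences.
pairs_8 = (comb(6, 2) - 1) * 22 + (comb(8, 2) - 1)
pairs_77 = (comb(6, 2) - 1) * 21 + (comb(7, 2) - 1) * 2
print(pairs_8, pairs_77)  # 335 334
```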
It would be interesting to determine the exact values of \mninline\ for $n \geq 5$. \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline $n$ & $m(n)$ & Corresponding construction/recurrence relation\\ \hline 1 & $m(1) = 1$ & Single Vertex\\ 2 & $m(2) = 3$ & Triangle Graph\\ 3 & $m(3) = 7$ & Fano Plane \cite{klein1870theorie}\\ 4 & $m(4) = 23$ & \cite{ostergaard2014minimum}, \cite{seymour1974note}\\ 5 & $m(5) \leq 51$ & $m(5) \leq 2^4 + 5 m(3)$\\ 6 & $m(6) \leq 147$ & $m(6) \leq m(2) m(3)^2$\\ 7 & $m(7) \leq 421$ & $m(7) \leq 2^6 + 7 m(5)$\\ 8 & $m(8) \leq {\bf 1212} $ & $m(8) \leq 2 m(3) m(5) + \tbinom{4}{2} m(3) m(3) + \tbinom{4}{3} m(5) $\\ 9 & $m(9) \leq 2401$ & $m(9) \leq m(3)^4$\\ 10 & $m(10) \leq 7803$ & $m(10) \leq m(2) m(5)^2$\\ 11 & $m(11) \leq 25449$ & $m(11) \leq 15 \cdot 2^8 + 9 m(9)$\\ 12 & $m(12) \leq 55223$ & $m(12) \leq m(3)^4 m(4)$\\ 13 & $m(13) \leq {\bf 200889}$ & $m(13) \leq (m(3) + 2^3)m(5)^2 + 2m_{H_f}(5)m(4)^2 + 4m(5)m(4)^2$ \\ 14 & $m(14) \leq {\bf 528218}$ & $m(14) \leq 2 m(9) m(5) + m(5)^2 m(4) + \tbinom{4}{1} m(9) m(4) + \tbinom{4}{4} m(5)^2$\\ 15 & $m(15) \leq 857157$ & $m(15) \leq m(3)^5 m(5)$\\ 16 & $m(16) \leq {\bf 3308499}$ & $m(16) \leq (m(4) + 2^{4})m(6)^2 + 2m_{H_s}(6)m(5)^2 + 4m(6)m(5)^2$\\ 17 & $m(17) \leq {\bf 10375782}$ & $m(17) \leq 2 m(10) m(7) + 2 m(7)^2 m(3) + \tbinom{3}{1}m(10) m(3)^2 + \tbinom{3}{3} m(7)^2 $ \\ \hline \end{tabular} \end{center} \caption{Improved upper bounds on $m(n)$ for small values of $n$} \label{table-new-known-bounds} \end{table} \small \end{document}
\begin{document} \title{Error analysis for approximations to one-dimensional SDEs via the perturbation method\footnote{Accepted for publication in Osaka Journal of Mathematics.}} \begin{abstract} We study asymptotic error distributions associated with standard approximation schemes for one-dimensional stochastic differential equations driven by fractional Brownian motions. This problem was studied by, for instance, Gradinaru-Nourdin \cite{GradinaruNourdin2009}, Neuenkirch and Nourdin~\cite{NeuenkirchNourdin2007} and the second named author~\cite{Naganuma2015}. The aim of this paper is to extend their results to the case where the equations contain drift terms and to simplify the proof of the estimates of the remainder terms in \cite{Naganuma2015}. To this end, we represent the approximate solution as the solution of the equation which is obtained by replacing the fractional Brownian path with a perturbed path. Using this expression, we obtain the asymptotic error distribution as a directional derivative of the solution. \end{abstract} \section{Introduction} For a one-dimensional fractional Brownian motion (fBm) $B$ with the Hurst parameter $1/3<H<1$, we consider a one-dimensional stochastic differential equation (SDE) \begin{align}\label{eq_20141122070455} X_t = \xi + \int_0^t b(X_s)\, ds + \int_0^t \sigma(X_s)\, d^\circ B_s, \quad t\in[0,1], \end{align} where $\xi\in\mathbf{R}$ is a deterministic initial value and $d^\circ B$ stands for the symmetric integral in the sense of Russo-Vallois. We may write $X_t(\xi,B)$ or $X_t(B)$ to indicate the dependence on the initial value and the driving path. We consider three schemes to approximate the solution to \eqref{eq_20141122070455} and study their asymptotic error distributions. We treat the Euler scheme, the Milstein type scheme and the Crank-Nicolson scheme as real-valued stochastic processes on the interval $[0,1]$. There are several frameworks to treat SDEs driven by fBm.
In the multidimensional case, the Young integration theory and the rough path analysis are powerful tools \cite{Lyons1994,Lyons1998}. We can, however, deal with SDEs in dimension one more easily by using the theory of the symmetric integral \cite{Nourdin2004}. The symmetric integral was proposed by Russo-Vallois \cite{RussoVallois1993a} with a motivation to establish a non-causal stochastic integration theory. Recently, Nourdin and his coauthors developed a theory of integration with respect to general integrators including fBm \cite{Nourdin2004,GradinaruNourdinRussoVallois2005} in the spirit of \cite{RussoVallois1993a}. In the present article, we adopt the symmetric integral and give a meaning to \eqref{eq_20141122070455}. The Euler scheme, the Milstein type scheme and the Crank-Nicolson scheme for SDEs driven by fBm have been considered by many researchers. In the study of approximation schemes, one is interested in sharp error bounds (convergence rates) and in the limits of the errors normalized by the convergence rates (asymptotic error distributions). In the multidimensional case, Mishura-Shevchenko \cite{MishuraShevchenko2008}, Friz-Riedel \cite{FrizRiedel2014} and Bayer et al.\ \cite{BayerFrizRiedelSchoenmakers2016} obtain almost sharp convergence rates of the Euler scheme and the Milstein type scheme. Hu-Liu-Nualart \cite{HuLiuNualart2016} consider asymptotic error distributions of the Euler scheme for SDEs driven by fBm with $1/2<H<1$. Liu-Tindel \cite{LiuTindel2017arXiv} treat the same problem in the case $1/3<H<1/2$. There are a lot of results on asymptotic error distributions of schemes for one-dimensional SDEs. For example, Neuenkirch-Nourdin \cite{NeuenkirchNourdin2007} show the convergence of the normalized error of the Euler scheme for an SDE with a drift term driven by fBm with $1/2<H<1$.
Gradinaru-Nourdin \cite{GradinaruNourdin2009} deal with the Milstein type scheme for an SDE without a drift term, namely $b\equiv 0$ in \eqref{eq_20141122070455}, and prove that its normalized error converges to some random variable. We next explain preceding results on the Crank-Nicolson scheme for one-dimensional SDEs. The first result on its error was obtained in \cite{NeuenkirchNourdin2007}; the authors obtain an almost sharp convergence rate. In \cite{GradinaruNourdin2009}, the authors treat the error of the Crank-Nicolson scheme for an SDE without a drift term driven by a standard Brownian motion and obtain the convergence of the normalized error. The second named author of the present paper~\cite{Naganuma2015} shows the convergence of the normalized error for fBm with $1/3<H<1/2$. It is crucial to these studies that the solution is given by a function of $B_t$ as $X_t(\xi,B)=\phi(\xi, B_t)$, where $\phi$ is a certain smooth increasing function depending only on $\sigma$. This is a Doss-Sussmann type representation of the solution. Let us denote the approximate solution by $\bar{X}^{(m)}_t(\xi, B)$, where $m$ is a positive integer. Let $B^{(m)}_t$ be the dyadic polygonal approximation of the fBm $B$ such that $B^{(m)}_{\dyadicPart[m]{k}}=B_{\dyadicPart[m]{k}}$ for every $k=0,\dots,2^m$, where $\dyadicPart[m]{k}=k2^{-m}$. For the Wong-Zakai approximation, $\bar{X}^{(m)}_t(\xi,B)=\phi(\xi,B^{(m)}_t)$ holds. Hence the analysis of the error $\bar{X}^{(m)}_t-X_t$ is essentially the same as that of $B-B^{(m)}$ itself. Clearly, this simple relation no longer holds for other approximation schemes such as the Euler, Milstein and Crank-Nicolson schemes. However, if the dispersion coefficient $\sigma$ is strictly positive, there exists a unique $B$-dependent random variable $h^{(m)}$ such that $\bar{X}^{(m)}_{\dyadicPart[m]{k}}(\xi,B)=\phi(\xi, (B+h^{(m)})_{\dyadicPart[m]{k}})$ for all $k$.
After obtaining this formula, it is clear that the analysis of $\{h^{(m)}\}$ is important for the study of the error $X_t(\xi,B)-\bar{X}^{(m)}_t(\xi,B)$. This is one of the main ideas of the proofs in \cite{NeuenkirchNourdin2007, Naganuma2015}. Even if the equations contain drift terms, the Doss-Sussmann representation still holds and the solution mapping $B\mapsto X(\xi,B)$ is Lipschitz continuous in the uniform convergence topology in the one-dimensional case. Further, under the nondegeneracy assumption on $\sigma$, we can show that there exists a unique piecewise linear $h^{(m)}$ such that $\bar{X}^{(m)}_{\dyadicPart[m]{k}}(\xi,B)=X_{\dyadicPart[m]{k}}(\xi,B+h^{(m)})$ ~$(0\le k\le 2^m)$. By this perturbation representation of the approximate solutions and the analysis of $h^{(m)}$, we can show the convergence of the normalized error distribution. Hence, the present paper is a natural extension of the preceding studies. We use a central limit theorem for the Hermite variation process to study the asymptotic behavior of the normalized error, similarly to \cite{NeuenkirchNourdin2007, Naganuma2015}. The proof that the remainder term is negligible in \cite{Naganuma2015} was carried out by a long calculation. In this paper, we give a simpler and shorter argument for the estimates of the remainder terms. The organization of this paper is as follows. In \secref{sec_201708102140}, we explain the three approximation schemes, that is, the Euler, Milstein and Crank-Nicolson schemes. We next state our main theorems, which determine the asymptotic error distributions, in Theorem~\ref{thm_20141204013919}, Theorem~\ref{thm_20141204015019} and Theorem~\ref{thm_20141204015044}. The next two sections are preliminaries for the proofs of these theorems. In \secref{sec_20150623023608}, we recall the definition of the Russo-Vallois symmetric integral. We consider the solutions to SDEs driven by fractional Brownian motions with the Hurst parameter $1/3<H\le 1/2$.
In this case, the solution has a Doss-Sussmann representation and the Russo-Vallois symmetric integral coincides with the symmetric Riemann-Stieltjes integral, that is, the Stratonovich type integral. By using this, we obtain estimates of iterated integrals. We also prepare lemmas for the directional derivative of the solution with respect to the driving path. In \secref{sec_20150519021523}, we collect necessary results on the convergence of variation functionals. These are essential for the proofs of our main theorems. We give the proofs of these results in Appendixes~\ref{sec_20150107074711} and \ref{sec_20170519021523}. In \secref{sec_201707212021}, we consider the Crank-Nicolson scheme and prove Theorem~\ref{thm_20141204015044}. For the reader's convenience, we give a sketch of the proof by using the perturbation path $h^{(m)}$ in \rref{rem_201708102135}. The proofs of the other two theorems are essentially similar to that of this theorem. We give sketches of the proofs for the other two schemes, the Euler scheme and the Milstein type scheme, in \secref{sec_201707212051}. In Appendix~\ref{sec_20150623014415}, we prepare the Gaussian analysis and Malliavin calculus. In Appendixes~\ref{sec_20150107074711} and \ref{sec_20170519021523}, we prove the results stated in \secref{sec_20150519021523}. Throughout this paper, we use the following notation. For $m\in\mathbf{N}$, we denote by $\{\dyadicPart[m]{k}\}_{k=0}^{2^m}$ the $m$-th dyadic rationals, that is, $\dyadicPart[m]{k}=k2^{-m}$ for $k=0,\dots,2^m$. For $n\in\{0\}\cup\mathbf{N}\cup\{\infty\}$, $\SmoothFunc{n}{\mathbf{R}^d}{\mathbf{R}}$ denotes the set of all $n$-times continuously differentiable $\mathbf{R}$-valued functions defined on $\mathbf{R}^d$. For $n\in\{0\}\cup\mathbf{N}\cup\{\infty\}$, $\SmoothFunc[bdd]{n}{\mathbf{R}^d}{\mathbf{R}}$ (resp. $\SmoothFunc[poly]{n}{\mathbf{R}^d}{\mathbf{R}}$) stands for the set of all functions $f\in\SmoothFunc{n}{\mathbf{R}^d}{\mathbf{R}}$ which are bounded (resp. of polynomial growth) together with all their derivatives.
For $k,l\in\{0\}\cup\mathbf{N}$, $\SmoothFunc{k,l}{\mathbf{R}^2}{\mathbf{R}}$ denotes the set of all functions $f:\mathbf{R}^2\to\mathbf{R}$ which are $k$-times (resp. $l$-times) continuously differentiable with respect to the first (resp. second) variable. We denote by $D([0,1];\mathbf{R}^d)$ the set of right continuous paths on $\mathbf{R}^d$ whose left limits exist. For $\lambda\in(0,1]$, $\HolFunc{\lambda}{[0,1]}{\mathbf{R}}$ stands for the set of all $\lambda$-H\"{o}lder continuous functions from $[0,1]$ to $\mathbf{R}$. The space $\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$ is the set of all functions $g\in\HolFunc{\lambda}{[0,1]}{\mathbf{R}}$ starting from zero. For $g\in\HolFunc{\lambda}{[0,1]}{\mathbf{R}}$ and $0\leq t\leq 1$, we define the uniform norm by $ \|g\|_{\infty,[0,t]} = \sup_{0\leq s\leq t} |g_s| $. We simply write $\|g\|_{\infty}=\|g\|_{\infty,[0,1]}$. For fixed $0<s<1$, we define the shift operator $\theta_s$ by $(\theta_s g)(t)=g_{t+s}-g_s$ for $0\leq t\leq 1-s$. \section{Main results}\label{sec_201708102140} We state our main results. For $b,\sigma\in\SmoothFunc[bdd]{\infty}{\mathbf{R}}{\mathbf{R}}$, we consider the SDE \eqref{eq_20141122070455}. Throughout this paper, we consider the solution $X$ to \eqref{eq_20141122070455} given by \eqref{eq_1545722900}. For the meaning of SDEs driven by fBm, we refer to \secref{sec_20150623023608}. To state our main results, we recall the definitions of the three approximation schemes. \begin{definition}[The Euler scheme] For every $m\in\mathbf{N}$, the Euler scheme $\bar{X}^{(m)}:[0,1]\to\mathbf{R}$ is defined by \begin{align*} \left\{ \begin{aligned} \bar{X}^{(m)}_0 &= \xi,\\ \bar{X}^{(m)}_t &= \bar{X}^{(m)}_{\dyadicPart[m]{k-1}} + b(\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) (t-\dyadicPart[m]{k-1}) + \sigma(\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) (B_t-B_{\dyadicPart[m]{k-1}}) \quad \text{for $\dyadicPart[m]{k-1}<t\leq\dyadicPart[m]{k}$.} \end{aligned} \right.
\end{align*} \end{definition} \begin{definition}[The Milstein type scheme] For every $m\in\mathbf{N}$, the Milstein type scheme $\bar{X}^{(m)}:[0,1]\to\mathbf{R}$ is defined by \begin{align*} \left\{ \begin{aligned} \bar{X}^{(m)}_0 &= \xi,\\ \bar{X}^{(m)}_t &= \bar{X}^{(m)}_{\dyadicPart[m]{k-1}} + b(\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) (t-\dyadicPart[m]{k-1}) + \frac{1}{2} bb'(\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) (t-\dyadicPart[m]{k-1})^2\\ &\phantom{=}\quad + \frac{1}{2} [\sigma b'+\sigma'b] (\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) (t-\dyadicPart[m]{k-1}) (B_t-B_{\dyadicPart[m]{k-1}})\\ &\phantom{=}\quad + \sigma(\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) (B_t-B_{\dyadicPart[m]{k-1}}) + \frac{1}{2} \sigma\sigma'(\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) (B_t-B_{\dyadicPart[m]{k-1}})^2 \quad \text{for $\dyadicPart[m]{k-1}<t\leq\dyadicPart[m]{k}$}. \end{aligned} \right. \end{align*} \end{definition} \begin{definition}[The Crank-Nicolson scheme] For every $m\in\mathbf{N}$, the Crank-Nicolson scheme $\bar{X}^{(m)} :[0,1]\to\mathbf{R}$ is defined by a solution to an equation \begin{align*} \left\{ \begin{aligned} \bar{X}^{(m)}_0 &= \xi,\\ \bar{X}^{(m)}_t &= \bar{X}^{(m)}_{\dyadicPart[m]{k-1}} + \frac{1}{2} \left\{ b(\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) + b(\bar{X}^{(m)}_t) \right\} (t-\dyadicPart[m]{k-1})\\ &\phantom{=}\quad\qquad + \frac{1}{2} \left\{ \sigma(\bar{X}^{(m)}_{\dyadicPart[m]{k-1}}) + \sigma(\bar{X}^{(m)}_t) \right\} (B_t-B_{\dyadicPart[m]{k-1}}) \quad \text{for $\dyadicPart[m]{k-1}<t\leq\dyadicPart[m]{k}$.} \end{aligned} \right. \end{align*} \end{definition} Since the Crank-Nicolson scheme is implicit, we need to restrict its domain and ensure the existence of a solution to the equation above. Roughly speaking, the existence of the solution is ensured for large $m$.
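As an illustrative numerical sketch (ours, not taken from the paper), the three grid recursions above can be written in Python. The driver \texttt{g} below is an arbitrary callable sampled at the dyadic points $\dyadicPart[m]{k}=k2^{-m}$, a deterministic stand-in for an fBm sample path, and the fixed-point iteration resolving the implicit Crank-Nicolson step is our own choice.

```python
import math

def dyadic(m):
    # m-th dyadic partition: tau_k = k * 2^{-m}, k = 0, ..., 2^m
    return [k * 2.0 ** -m for k in range(2 ** m + 1)]

def euler(xi, b, sigma, g, m):
    # Euler recursion evaluated at the grid points only
    ts, x = dyadic(m), [xi]
    for k in range(1, len(ts)):
        dt = ts[k] - ts[k - 1]
        dg = g(ts[k]) - g(ts[k - 1])
        y = x[-1]
        x.append(y + b(y) * dt + sigma(y) * dg)
    return x

def milstein(xi, b, db, sigma, dsigma, g, m):
    # Milstein-type recursion; db and dsigma are the derivatives b' and sigma'
    ts, x = dyadic(m), [xi]
    for k in range(1, len(ts)):
        dt = ts[k] - ts[k - 1]
        dg = g(ts[k]) - g(ts[k - 1])
        y = x[-1]
        x.append(y + b(y) * dt
                 + 0.5 * b(y) * db(y) * dt ** 2
                 + 0.5 * (sigma(y) * db(y) + dsigma(y) * b(y)) * dt * dg
                 + sigma(y) * dg
                 + 0.5 * sigma(y) * dsigma(y) * dg ** 2)
    return x

def crank_nicolson(xi, b, sigma, g, m, iters=30):
    # implicit step resolved by fixed-point iteration (contractive for large m)
    ts, x = dyadic(m), [xi]
    for k in range(1, len(ts)):
        dt = ts[k] - ts[k - 1]
        dg = g(ts[k]) - g(ts[k - 1])
        y = x[-1]
        z = y
        for _ in range(iters):
            z = y + 0.5 * (b(y) + b(z)) * dt + 0.5 * (sigma(y) + sigma(z)) * dg
        x.append(z)
    return x
```

For the exactly solvable test case $b=0$, $\sigma(x)=x$, $g_t=t$ (so $x_t=\xi e^{t}$), the Euler scheme reproduces $(1+2^{-m})^{2^m}\to e$, while the Milstein-type and Crank-Nicolson recursions are second-order accurate in the mesh $2^{-m}$.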
In order to state our main results concisely, we set $\Wronskian=\sigma b'-\sigma'b$ and \begin{align}\label{eq_20141226020239} J_t = \exp \left( \int_0^t b'(X_u)\, du + \int_0^t \sigma'(X_u)\, d^\circ B_u \right). \end{align} We assume the following hypothesis in order to obtain an expression of the error of the schemes; \begin{hypothesis}\label{hypo_ellipticity} $\inf\sigma>0$. \end{hypothesis} The following are our main results. \begin{theorem}[Euler scheme]\label{thm_20141204013919} We consider the Euler scheme. Assume that \hyporef{hypo_ellipticity} is satisfied. For $1/2<H<1$, we have \begin{align*} \lim_{m\to\infty} 2^{m(2H-1)} \{\bar{X}^{(m)}-X\} = \sigma(X) U + J \int_0^\cdot J_s^{-1} \Wronskian(X_s) U_s\, ds \end{align*} in probability with respect to the uniform norm. Here $U$ is defined by \begin{align*} U_t = \int_0^t f_2(X_u)\, du, \end{align*} where $f_2=-\sigma'/2$. \end{theorem} In this theorem, the limit is a continuous stochastic process indexed by the elements of the interval $[0,1]$. When we emphasize the time parameter $t$, we express the limit process as $ \sigma(X_t) U_t + J_t \int_0^t J_s^{-1} \Wronskian(X_s) U_s\, ds $. \begin{theorem}[Milstein type scheme]\label{thm_20141204015019} Assume that \hyporef{hypo_ellipticity} is satisfied. We consider the Milstein type scheme. For $1/3<H<1/2$ (resp. $H=1/2$), we have \begin{align*} \lim_{m\to\infty} 2^{m(4H-1)} \{\bar{X}^{(m)}-X\} = \sigma(X) U + J \int_0^\cdot J_s^{-1} \Wronskian(X_s) U_s\, ds \end{align*} in probability (resp. weakly) with respect to the uniform norm. Here $U$ is a stochastic process defined as follows; we set \begin{gather*} \begin{aligned} \psi &= - \frac{1}{4} \left[ \frac{\sigma'(\sigma b'+\sigma'b)+\sigma(\sigma''b+\sigma b'')}{\sigma} \right], & f_3 &= - \frac{1}{3!} [(\sigma')^2+\sigma\sigma''], \end{aligned}\\ \begin{aligned} f^\dagger_4 &= \frac{1}{24} [ \sigma^2\sigma''' +6\sigma\sigma'\sigma'' +3(\sigma')^3 ], & g_1 &= \frac{\Wronskian}{\sigma}.
\end{aligned} \end{gather*} \begin{enumerate} \item For $1/3<H<1/2$, we set \begin{align*} U_t &= 3 \int_0^t f^\dagger_4(X_u)\, du. \end{align*} \item For $H=1/2$, we set \begin{multline*} U_t = \int_0^t \psi(X_u)\, du + \sqrt{6} \int_0^t f_3(X_u)\, dW_u + 3 \int_0^t f_3(X_u) \circ dB_u\\ + 3 \int_0^t f^\dagger_4(X_u)\, du + \frac{1}{\sqrt{12}} \int_0^t g_1(X_u)\, d\tilde{W}_u, \end{multline*} where $W$ and $\tilde{W}$ are standard Brownian motions and $B$, $W$ and $\tilde{W}$ are independent. Also $dW_u, d\tilde{W}_u$ and ${}\circ dB_u$ stand for the It\^o integral and the Stratonovich integral, respectively. \end{enumerate} \end{theorem} \begin{theorem}[Crank-Nicolson scheme]\label{thm_20141204015044} Assume that \hyporef{hypo_ellipticity} is satisfied. For $1/3<H\leq 1/2$, we have \begin{align*} \lim_{m\to\infty} 2^{m(3H-1/2)} \{\bar{X}^{(m)}-X\} = \sigma(X) U + J \int_0^\cdot J_s^{-1} \Wronskian(X_s) U_s\, ds \end{align*} weakly with respect to the uniform norm. Here $U$ is a stochastic process defined as follows; we set \begin{align*} \psi &= \frac{1}{4} [\sigma'b'+\sigma''b], & f_3 &= \frac{1}{12} [(\sigma')^2+\sigma\sigma''], & g_1 &= \frac{\Wronskian}{\sigma}. \end{align*} \begin{enumerate} \item For $1/3<H<1/2$, we set \begin{align*} U_t = \sigma_{3,H} \int_0^t f_3(X_u)\, dW_u, \end{align*} where $\sigma_{3,H}$ is a positive constant defined by \eqref{eq_const_sigma_q_Hurst} and $W$ is a standard Brownian motion independent of $B$. \item For $H=1/2$, we set \begin{align*} U_t = \int_0^t \psi(X_u)\, du + \sqrt{6} \int_0^t f_3(X_u)\, dW_u + 3 \int_0^t f_3(X_u) \circ dB_u + \frac{1}{\sqrt{12}} \int_0^t g_1(X_u)\, d\tilde{W}_u, \end{align*} where $W$ and $\tilde{W}$ are standard Brownian motions and $B$, $W$ and $\tilde{W}$ are independent. \end{enumerate} \end{theorem} We make remarks on our main results. \begin{enumerate} \item We explain how we derive $f_i, g_1, \varphi_{\boldsymbol{i}}, \psi, f_4^{\dagger}$ $(i=2,3,4, \boldsymbol{i}=011,101,110)$. 
Since \trefs{thm_20141204013919}{thm_20141204015019}{thm_20141204015044} are proved by the same method, we explain the case of the Crank-Nicolson scheme (\tref{thm_20141204015044}) as an example. In the first step of our proof, we need to calculate the one-step error $\hat{\kappa}_{k}$ of each approximation scheme as in \eqref{eq_1545356684}. In that calculation, the functions $\hat{f}_i, \hat{g}_1, \hat{\varphi}, \hat{\varphi}_{\boldsymbol{i}}$, which are defined by $\sigma$ and $b$, appear as the coefficients of monomials in the increments $\Delta B_k=B_{\dyadicPart[m]{k}}-B_{\dyadicPart[m]{k-1}}$ and $\Delta=2^{-m}$ and of iterated integrals of $B_t$ and $t$ (\lref{lem_201707121721}). We define the functions $f_i, g_1, \varphi, \varphi_{\boldsymbol{i}}$ by using $\hat{f}_i, \hat{g}_1, \hat{\varphi}, \hat{\varphi}_{\boldsymbol{i}}$ and express the main part of the piecewise linear function $h^{(m)}$ in terms of $f_i, g_1, \varphi, \varphi_{\boldsymbol{i}}$ (\lref{lem_201708091518}). Finally, we study the asymptotics of $h^{(m)}$ and then define $\psi=\phi+(\varphi_{011}+\varphi_{110})/4$ (\lref{lem_201708091528}). In the case of the Euler and Milstein schemes, we show lemmas corresponding to \lrefs[lem_201707121721]{lem_201707121721}{lem_201708091528}. The function $f_4^\dagger$ in the Milstein scheme appears in studying the asymptotics of $h^{(m)}$. \item \tref{thm_20141204013919} is an extension of \cite{NeuenkirchNourdin2007}, but the proof is completely different and considerably simpler. \item In \cite{GradinaruNourdin2009}, the authors consider higher order schemes for SDEs without drift terms. \tref{thm_20141204015019} corresponds to the second order scheme for an SDE containing a drift term. \item \tref{thm_20141204015044} is an extension of \cite{GradinaruNourdin2009,Naganuma2015}. To our knowledge, the convergence of the approximation solution itself is not known for $1/6<H\le 1/3$ (\cite{Nourdin2008a}).
When $\sigma(x)^2$ is a quadratic function of $x$, \tref{thm_20141204015044} is proved in \cite{NeuenkirchNourdin2007} for $1/6<H<1/2$. In the case where $H>1/3$, the convergence of the approximation solution is a pathwise result, that is, the result holds for SDEs driven by H\"older continuous paths with H\"older exponent greater than $1/3$. However, the proof of \cite{NeuenkirchNourdin2007} relies on a central limit theorem and it is not clear that this is also a pathwise result. \end{enumerate} \section{ODEs driven by H\"{o}lder continuous functions and SDEs} \label{sec_20150623023608} In this section, we define the symmetric integral in the sense of Russo-Vallois and discuss the existence, uniqueness and properties of a solution to an ordinary differential equation (ODE). Let $1/3<\lambda<1$. For a $\lambda$-H\"older continuous function $g:[0,1]\to\mathbf{R}$, we consider an ODE \begin{align}\label{eq_20140928070616} x_t = \xi + \int_0^t b(x_u)\, du + \int_0^t \sigma(x_u)\, d^\circ g_u, \quad t\in[0,1], \end{align} where $\xi\in\mathbf{R}$ and $d^\circ g$ denotes the symmetric integral. We shall also write $x_t(\xi,g)$, $x(\xi)$, or $x(g)$ for the solution $x$ to emphasize dependence on the initial value $\xi$ and/or the driver $g$. Since fBm with the Hurst parameter $1/3<H<1$ is $(H-\epsilon)$-H\"{o}lder continuous with probability one, we can deal with SDE \eqref{eq_20141122070455} in a pathwise sense by using the theory of ODEs \eqref{eq_20140928070616}. We have $\lambda=H-\epsilon$ in mind. See \secref{sec_201708051641}. We prepare some notation. For $g\in\HolFunc{\lambda}{[0,1]}{\mathbf{R}}$, we use the symbol $\const[g]$, which may change line by line, to denote a constant which has a bound \begin{align*} \const[1] \left\{ 1 + \sup_{0\leq s<t\leq 1} \frac{|g_t-g_s|}{(t-s)^\lambda} \right\}^{\const[2]} \end{align*} for some constants $\const[1]$ and $\const[2]$, which may depend on the H\"{o}lder exponent $\lambda$ but not on $g$.
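The constants $\const[g]$ above are controlled by the $\lambda$-H\"older seminorm of $g$. As a small illustrative sketch (ours, not part of the paper), the seminorm of a discretized path can be estimated by brute force over all grid pairs:

```python
def holder_seminorm(ts, vals, lam):
    # sup over grid pairs of |g_t - g_s| / (t - s)^lam
    best = 0.0
    for i in range(len(ts)):
        for j in range(i + 1, len(ts)):
            best = max(best, abs(vals[j] - vals[i]) / (ts[j] - ts[i]) ** lam)
    return best
```

For the path $g_t=t$ with $\lambda=1/2$, the supremum over the grid is attained at $s=0$, $t=1$ and equals $1$.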
\subsection{Existence and uniqueness} We collect facts on the symmetric integral and a solution to an ODE \eqref{eq_20140928070616}. In what follows, we assume $1/3<\lambda<1$. \begin{definition} For continuous functions $f,g:[0,1]\to\mathbf{R}$, we define the symmetric integral in the sense of Russo-Vallois by \begin{align*} \int_0^t f_u\, d^\circ g_u = \lim_{\epsilon\downarrow 0} \int_0^t \frac{f_{(u+\epsilon)\wedge t}+f_u}{2} \frac{g_{(u+\epsilon)\wedge t}-g_u}{\epsilon}\, du \end{align*} if the limit of the right-hand side exists. \end{definition} \begin{proposition}[{\cite[Theorem~4.1.7]{Nourdin2004}}]\label{prop_20140928075311} Let $a\in\HolFunc{1}{[0,1]}{\mathbf{R}}$ and $g\in\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$. Then, for any $f\in\SmoothFunc{1,3}{\mathbf{R}^2}{\mathbf{R}}$, $ \int_0^t \partial_2 f(a_u,g_u)\, d^\circ g_u $ exists and it holds that \begin{align*} f(a_t,g_t) = f(a_0,g_0) + \int_0^t \partial_1 f(a_u,g_u)\, da_u + \int_0^t \partial_2 f(a_u,g_u)\, d^\circ g_u. \end{align*} \end{proposition} \begin{remark}\label{rem_20150304061109} Let $a\in\HolFunc{1}{[0,1]}{\mathbf{R}}$ and $g\in\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$. Let $f\in\SmoothFunc{1,2}{\mathbf{R}^2}{\mathbf{R}}\cap \SmoothFunc{1}{\mathbf{R}^2}{\mathbf{R}}$. Then, we can choose a primitive function $F\in\SmoothFunc{1,3}{\mathbf{R}^2}{\mathbf{R}}\cap\SmoothFunc{1}{\mathbf{R}^2}{\mathbf{R}}$ with respect to the second variable, that is, $f(x,y)=\partial_2 F(x,y)$ for any $x,y\in\mathbf{R}$. Indeed, $F(x,y)=\int_0^y f(x,\eta)\,d\eta$ is a primitive function and the continuity of $\partial_1 f$ implies $ \partial_1 F(x,y) = \int_0^y \partial_1 f(x,\eta)\,d\eta $. Hence, from \pref{prop_20140928075311}, we see $ \int_0^t f(a_u,g_u)\, d^\circ g_u $ exists and it holds that \begin{align*} \int_0^t f(a_u,g_u)\, d^\circ g_u = F(a_t,g_t) - F(a_0,g_0) - \int_0^t \partial_1 F(a_u,g_u)\, da_u. 
\end{align*} \end{remark} The next proposition asserts that a symmetric integral is a limit of a modified Riemann sum. \begin{proposition}\label{prop_20150424100658} Let $a\in\HolFunc{1}{[0,1]}{\mathbf{R}}$ and $g\in\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$. Let $0=t_0<\dots<t_n=t$ be a partition of $[0,t]$. For any $f\in\SmoothFunc{1,2}{\mathbf{R}^2}{\mathbf{R}}$, we see that \begin{align*} \sum_{k=1}^n \frac{f(a_{t_{k-1}},g_{t_{k-1}})+f(a_{t_k},g_{t_k})}{2} (g_{t_k}-g_{t_{k-1}}) \end{align*} converges to $ \int_0^t f(a_u,g_u)\, d^\circ g_u $ as $\max\{t_k-t_{k-1};k=1,\dots,n\}$ tends to $0$. \end{proposition} \begin{proof} We use the formula in \rref{rem_20150304061109}. We have \begin{align*} \int_s^t f(a_u,g_u)\, d^{\circ}g_u &= F(a_t,g_t)-F(a_s,g_s)-\int_s^t\partial_1F(a_u,g_u)da_u\\ &= \left\{ F(a_t,g_t)-F(a_s,g_t) - \int_s^t \partial_1F(a_u,g_u)\, da_u \right\} +\{F(a_s,g_t)-F(a_s,g_s)\}\\ &= \int_s^t \{\partial_1F(a_u,g_t)-\partial_1F(a_u,g_u)\}\, da_u\\ &\phantom{=}\quad +f(a_s,g_s)(g_t-g_s)+\partial_2f(a_s,g_s)\frac{1}{2}(g_t-g_s)^2 +\partial_2^2f(a_s,g_s+\theta(g_t-g_s))\frac{(g_t-g_s)^3}{3!}\\ &= f(a_s,g_s)(g_t-g_s) +\partial_2f(a_s,g_s)\frac{1}{2}(g_t-g_s)^2 +O(|t-s|^{1+\lambda}) +O(|t-s|^{3\lambda}), \end{align*} where we used the Taylor formula and the H\"older continuity of $g$. On the other hand, by using the Taylor formula again, we have \begin{align*} \frac{f(a_s,g_s)+f(a_t,g_t)}{2}(g_t-g_s) &= f(a_s,g_s)(g_t-g_s) + \frac{1}{2} \left\{f(a_s,g_t)-f(a_s,g_s)\right\} (g_t-g_s)\\ &\phantom{=}\quad + \frac{1}{2} \left\{f(a_t,g_t)-f(a_s,g_t)\right\} (g_t-g_s)\\ &= f(a_s,g_s)(g_t-g_s) +\frac{1}{2}\partial_2f(a_s,g_s)(g_t-g_s)^2\\ &\phantom{=}\quad + \frac{1}{4} \partial_2^2f(a_s,g_s+\theta(g_t-g_s)) (g_t-g_s)^3\\ &\phantom{=}\quad + \frac{1}{2} \partial_1f(a_s+\theta'(g_t-g_s),g_t) (a_t-a_s)(g_t-g_s). 
\end{align*} Therefore, we obtain \begin{align*} \int_s^tf(a_u,g_u)\,d^{\circ}g_u = \frac{f(a_t,g_t)+f(a_s,g_s)}{2}(g_t-g_s) +R(s,t), \end{align*} where $|R(s,t)|\le \const[g]|t-s|^{(1+\lambda)\wedge (3\lambda)}$. By the additivity property of the integral, $\int_s^tf(a_u,g_u)\,d^{\circ}g_u+\int_t^vf(a_u,g_u)\,d^{\circ}g_u= \int_s^vf(a_u,g_u)\,d^{\circ}g_u$~$(s<t<v)$ and a limiting argument, we obtain the desired result. \end{proof} Next we consider properties of \eqref{eq_20140928070616}. Let us start our discussion with properties of the flow $\phi$ associated to $\sigma$, that is, $\phi$ is the unique solution to the ODE \begin{align}\label{eq_20140928162150} \phi(\alpha,\beta) = \alpha + \int_0^\beta \sigma(\phi(\alpha,\eta))\, d\eta, \quad \beta \in \mathbf{R}. \end{align} \begin{proposition}[{\cite[Lemma~2]{Doss1977}}]\label{prop_20140929101727} Let $n\geq 1$. For any $\sigma\in\SmoothFunc[bdd]{n}{\mathbf{R}}{\mathbf{R}}$ and an initial point $\alpha\in\mathbf{R}$, there exists a unique solution to \eqref{eq_20140928162150}. The unique solution $\phi$ satisfies the following: \begin{enumerate} \item\label{item_1545964854} $\phi\in\SmoothFunc{n,n+1}{\mathbf{R}^2}{\mathbf{R}}\cap\SmoothFunc{n}{\mathbf{R}^2}{\mathbf{R}}$, \item\label{item_1545380688} $ \phi(\alpha,\beta) = \phi(\phi(\alpha,\beta'),\beta-\beta') $, \item\label{item_1545964864} $ \partial_1\phi(\alpha,\beta) = \exp \left( \int_0^\beta \sigma'(\phi(\alpha,\eta))\, d\eta \right) $. \end{enumerate} \end{proposition} To state the assertion about uniqueness of solutions to \eqref{eq_20140928070616}, we introduce a class $\mathfrak{C}$ of solutions by \begin{align*} \mathfrak{C} = \left\{ x\in\HolFunc{\lambda}{[0,1]}{\mathbf{R}};\, \begin{minipage}{20em} there exist $f\in\SmoothFunc{1,3}{\mathbf{R}^2}{\mathbf{R}}$ and $k\in\HolFunc{1}{[0,1]}{\mathbf{R}}$\\ such that $x_t=f(k_t,g_t)$ for all $t\in[0,1]$ \end{minipage} \right\}.
\end{align*} Note that $\mathfrak{C}$ depends on $g\in\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$. \begin{proposition}[{\cite[Theorem~4.3.1]{Nourdin2004}}, {\cite[Section~3]{NourdinSimon2007}}]\label{prop_20140928080940} Let $g\in\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$. Assume that $b\in\SmoothFunc[bdd]{1}{\mathbf{R}}{\mathbf{R}}$ and $\sigma\in\SmoothFunc[bdd]{2}{\mathbf{R}}{\mathbf{R}}$. Then, a unique solution to \eqref{eq_20140928070616} in the class $\mathfrak{C}$ exists and it is given by \begin{align}\label{eq_1545722900} x_t=\phi(a_t,g_t), \end{align} where $\phi$ and $a\equiv a(\xi,g)$ are given by solutions to \eqref{eq_20140928162150} and \begin{align*} a_t = \xi + \int_0^t f_{\sigma,b}(a_u,g_u)\, du, \quad t\in[0,1], \end{align*} respectively. Here $f_{\sigma,b}=f_1f_2$ with \begin{align*} f_1(x,y) &= \exp \left( - \int_0^y \sigma'(\phi(x,\eta))\, d\eta \right), & f_2(x,y) &= b(\phi(x,y)). \end{align*} \end{proposition} \begin{proof} It is easily shown that $x$ given by \eqref{eq_1545722900} belongs to $\mathfrak{C}$ and satisfies \eqref{eq_20140928070616}. Indeed, \pref{prop_20140929101727}~(\ref{item_1545964854}) implies $ \phi \in \SmoothFunc{2,3}{\mathbf{R}^2}{\mathbf{R}} \subset \SmoothFunc{1,3}{\mathbf{R}^2}{\mathbf{R}} $ and $a\in\HolFunc{1}{[0,1]}{\mathbf{R}}$. From \pref{prop_20140928075311} and \pref{prop_20140929101727}~(\ref{item_1545964864}), we see that $x$ satisfies \eqref{eq_20140928070616}. To prove the uniqueness, we borrow results from \cite[Section~3]{NourdinSimon2007}. Let $x$ be a solution in the class $\mathfrak{C}$, given by $x=f(k,g)$ for some $f\in\SmoothFunc{1,3}{\mathbf{R}^2}{\mathbf{R}}$ and $k\in\HolFunc{1}{[0,1]}{\mathbf{R}}$. Since $\int_s^t x_u\,d^\circ g_u=\int_s^t f(k_u,g_u)\,d^\circ g_u$ is well-defined by \rref{rem_20150304061109}, we set $ A_{st} = \int_s^t x_u\,d^\circ g_u - \frac{1}{2} (x_t+x_s) (g_t-g_s) $.
Then, we deduce that $(x,A)$ is a solution to \eqref{eq_20140928070616} in the sense of \cite[Definition~3.1]{NourdinSimon2007} from \cite[Lemma~3.4 and Proposition~3.5]{NourdinSimon2007}. Finally, \cite[Corollary~3.7]{NourdinSimon2007} implies $x_t=\phi(a_t,g_t)$. \end{proof} \begin{proposition}\label{prop_20140928154144} Let $x$ be the solution to \eqref{eq_20140928070616} given by \eqref{eq_1545722900}. For fixed $0<s<1$, we have $x_{s+t}(\xi,g)= x_t(x_s(\xi,g),\theta_sg)$ for any $0\leq t\leq 1-s$. \end{proposition} \begin{proof} We first prove $a_t\left(x_s(\xi,g),\theta_s g\right) = \tilde{a}_t := \phi(a_{s+t}(\xi,g),g_s) $. From \pref{prop_20140929101727}, we see \begin{align*} \frac{1}{f_1(x,y')} \cdot f_1(x,y) &= f_1(\phi(x,y'),y-y'), & f_2(x,y) &= f_2(\phi(x,y'),y-y'). \end{align*} Hence, it holds that \begin{align*} \frac{d}{dt} \tilde{a}_t &= \partial_1\phi(a_{s+t}(\xi,g),g_s) \frac{d}{dt}a_{s+t}(\xi,g)\\ &= \frac{1}{f_1(a_{s+t}(\xi,g),g_s)} \cdot f_1(a_{s+t}(\xi,g),g_{s+t}) f_2(a_{s+t}(\xi,g),g_{s+t})\\ &= [f_1f_2](\phi(a_{s+t}(\xi,g), g_s),g_{s+t}-g_s)\\ &= f_{\sigma,b}(\tilde{a}_t,(\theta_s g)_t). \end{align*} By the definition of $\tilde{a}$ and \pref{prop_20140928080940}, we have $ \tilde{a}_0 = \phi(a_s(\xi,g),g_s) = x_s(\xi,g) $. It follows from the uniqueness of the solution that $ a_t\left(x_s(\xi,g),\theta_s g\right) = \tilde{a}_t $. Combining \pref{prop_20140929101727} (\ref{item_1545380688}), \pref{prop_20140928080940} and this equality, we obtain \begin{align*} x_{s+t}(\xi,g) &= \phi(a_{s+t}(\xi,g),g_{s+t}) = \phi(\phi(a_{s+t}(\xi,g),g_s),g_{s+t}-g_s)\\ &= \phi(a_t(x_s(\xi,g),\theta_s g), (\theta_s g)_t) = x_t\left(x_s(\xi,g),\theta_sg\right), \end{align*} which completes the proof. \end{proof} \begin{remark}\label{rem_201708111136} We make the same assumptions as in \pref{prop_20140928080940} and consider the solution $x$ to \eqref{eq_20140928070616} given by \eqref{eq_1545722900}. In the proposition, we consider H\"older continuous paths.
However, it is easy to check that the mapping $g\mapsto x(g)$ can be extended to a continuous mapping on $C([0,1];\mathbf{R})$ with the uniform convergence norm $\|\cdot\|_{\infty}$. Further, by \rref{rem_20150304061109}, for any $f\in\SmoothFunc{1,2}{\mathbf{R}^2}{\mathbf{R}}\cap\SmoothFunc{1}{\mathbf{R}^2}{\mathbf{R}}$, we have the continuity of the mapping in the uniform convergence topology: \begin{align*} C([0,1];\mathbf{R})\ni g \mapsto \int_0^{\cdot}f(a_s(g),x_s(g))\,d^{\circ}g_s \in C([0,1];\mathbf{R}). \end{align*} \end{remark} \subsection{The Taylor expansion and its remainder estimates} \label{sec_20141211055401} For notational convenience, we set $g^0_t=t$, $g^1_t=g_t$ for $0\leq t\leq 1$. Let $x$ be the solution to \eqref{eq_20140928070616} given by \eqref{eq_1545722900}. Assume that $b\in\SmoothFunc[bdd]{1}{\mathbf{R}}{\mathbf{R}}$ and $\sigma\in\SmoothFunc[bdd]{2}{\mathbf{R}}{\mathbf{R}}$. For $0\leq s\leq t\leq 1$ and $f\in\SmoothFunc[bdd]{2}{\mathbf{R}}{\mathbf{R}}$, we can define \begin{align*} I^0_{st}(f) &= \int_s^t f(x_u)\, dg^0_u, & I^1_{st}(f) &= \int_s^t f(x_u)\, d^{\circ}g^1_u. \end{align*} Here, $I^0_{st}(f)$ is a usual Riemann integral. As for $I^1_{st}(f)$, the reasoning is as follows. By using functions $\phi$ and $a$ given in \pref{prop_20140928080940}, we have $f(x_u)=[f\circ\phi](a_u,g_u)$ and $f\circ\phi\in\SmoothFunc{1,2}{\mathbf{R}^2}{\mathbf{R}}\cap \SmoothFunc{1}{\mathbf{R}^2}{\mathbf{R}}$. From \rref{rem_20150304061109}, we see that $F(x,y)=\int_0^y [f\circ\phi](x,\eta)\,d\eta$ belongs to $\SmoothFunc{1,3}{\mathbf{R}^2}{\mathbf{R}}\cap \SmoothFunc{1}{\mathbf{R}^2}{\mathbf{R}}$ and it holds that \begin{align*} I^{1}_{st}(f) = \int_s^t [f\circ\phi](a_u,g_u)\, d^\circ g_u = F(a_t,g_t) - F(a_s,g_s) - \int_s^t \partial_1 F(a_u,g_u)\, da_u. \end{align*} Hence we see that $I^{1}_{st}(f)$ is well-defined.
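As a numerical aside (our own sketch, not part of the paper), the modified Riemann sum of \pref{prop_20150424100658} gives a direct way to approximate such symmetric integrals on a grid:

```python
def trapezoidal_symmetric_sum(f_vals, g_vals):
    # modified Riemann sum: sum of (f(t_{k-1}) + f(t_k))/2 * (g(t_k) - g(t_{k-1}))
    return sum(
        0.5 * (f_vals[k - 1] + f_vals[k]) * (g_vals[k] - g_vals[k - 1])
        for k in range(1, len(g_vals))
    )
```

For $f=g$ the sum telescopes to $(g_t^2-g_0^2)/2$ on any partition, matching the chain rule value $\int_0^t g\,d^\circ g=(g_t^2-g_0^2)/2$ exactly.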
Further, for any $\alpha_1,\dots,\alpha_n\in\{0,1\}$, we can define the iterated integral \begin{align*} I^{\alpha_1\cdots\alpha_n}_{st}(f) = \int_s^t I^{\alpha_1\cdots\alpha_{n-1}}_{su}(f)\, d^\circ g^{\alpha_n}_{u} \end{align*} inductively in the same way. For $f\equiv 1$, we set $ g^{\alpha_1\cdots\alpha_n}_{st} = I^{\alpha_1\cdots\alpha_n}_{st}(f) $. We set $V_0=b$, $V_1=\sigma$ and define a vector field by $\vecField{V}_\alpha f=V_\alpha f'$. From \rref{rem_20150304061109}, we see the following estimate. \begin{lemma}\label{lem_201708052140} Assume that $b\in\SmoothFunc[bdd]{1}{\mathbf{R}}{\mathbf{R}}$ and $\sigma\in\SmoothFunc[bdd]{2}{\mathbf{R}}{\mathbf{R}}$. Let $f\in\SmoothFunc[bdd]{2}{\mathbf{R}}{\mathbf{R}}$. Let $\alpha_1,\ldots,\alpha_n\in \{0,1\}$ and set $r_i=\sharp\{k=1,\dots,n;\alpha_k=i\}$. Then, there exists a constant $C=\const[f,g,\alpha_1,\ldots,\alpha_n]$ which depends only on $f$, the H\"older constant of $g$ and $\alpha_1,\ldots,\alpha_n$ such that, for any $0\leq s<t\leq 1$, \begin{align*} |I^{\alpha_1\cdots\alpha_n}_{st}(f)| \leq C (t-s)^{r_0+r_1\lambda}. \end{align*} \end{lemma} We use the above Taylor expansion and the estimate of iterated integrals in the calculation below. Using \pref{prop_20140928075311}, we can prove the following by induction on $n$; \begin{proposition}\label{prop_20141107094620} Let $n\geq 0$. Assume that $b,\sigma\in\SmoothFunc[bdd]{n+2}{\mathbf{R}}{\mathbf{R}}$. Then, for any $0\leq s<t\leq 1$, we have \begin{align*} x_t-x_s &= \sum_{k=1}^n \sum_{\alpha_1,\dots,\alpha_k\in\{0,1\}} \left[ \vecField{V}_{\alpha_1} \cdots \vecField{V}_{\alpha_{k-1}} V_{\alpha_k} \right] (x_s) g^{\alpha_1\cdots\alpha_k}_{st}\\ &\phantom{=}\quad\qquad + \sum_{\alpha_1,\dots,\alpha_n,\alpha_{n+1}\in\{0,1\}} I^{\alpha_{1}\alpha_2\cdots\alpha_{n+1}}_{st} \left( \vecField{V}_{\alpha_{1}} \vecField{V}_{\alpha_2} \cdots \vecField{V}_{\alpha_n} V_{\alpha_{n+1}} \right). 
\end{align*} \end{proposition} We calculate each term in \pref{prop_20141107094620}. We first note that the $p$-th iterated integral $g^{\alpha\cdots\alpha}_{st}$ is equal to $(g^\alpha_t-g^\alpha_s)^p/p!$. This can be checked by a direct calculation. \begin{proposition}\label{prop_20141010094511} Assume that $b,\sigma\in\SmoothFunc[bdd]{6}{\mathbf{R}}{\mathbf{R}}$. Then, for any $0\leq s<t\leq 1$, we have \begin{align*} x_t-x_s &= b(x_s)(t-s) +\sigma(x_s)(g_t-g_s) +\frac{1}{2}\left[\sigma\sigma'\right](x_s)(g_t-g_s)^2\\ &\phantom{=}\quad + \frac{1}{3!}\left[\sigma(\sigma\sigma')'\right](x_s)(g_t-g_s)^3 + \frac{1}{4!}\left[\sigma(\sigma(\sigma\sigma')')'\right](x_s)(g_t-g_s)^4\\ &\phantom{=}\quad +[b\sigma'](x_s)(g_t-g_s)(t-s) +[\sigma b'-b\sigma'](x_s)g^{10}_{st} +\frac{1}{2}[b'b](x_s)(t-s)^2\\ &\phantom{=}\quad + [b\left(\sigma\sigma'\right)'](x_s) g^{011}_{st} + [\sigma\left(b\sigma'\right)'](x_s) g^{101}_{st} + [\sigma\left(\sigma b'\right)'](x_s) g^{110}_{st} + r_{st}, \end{align*} where $|r_{st}|\leq \const[g] (t-s)^{\min\{2+\lambda,1+3\lambda,5\lambda\}}$. \end{proposition} \begin{proof} Set \begin{align*} J^k_{st} &= \sum_{\alpha_1,\dots,\alpha_k\in\{0,1\}} \left[ \vecField{V}_{\alpha_1} \cdots \vecField{V}_{\alpha_{k-1}} V_{\alpha_k} \right] (x_s) g^{\alpha_1\cdots\alpha_k}_{st},\\ \tilde{J}^k_{st} &= \sum_{\alpha_1,\dots,\alpha_k\in\{0,1\}} I^{\alpha_1\cdots\alpha_k}_{st} \left( \vecField{V}_{\alpha_1} \cdots \vecField{V}_{\alpha_{k-1}} V_{\alpha_k} \right).
\end{align*} Then we see $ x_t-x_s = J^1_{st} +\dots +J^4_{st} +\tilde{J}^5_{st} $ and \begin{align*} J^1_{st} &= b(x_s) g^0_{st} + \sigma(x_s) g^1_{st},\\ J^2_{st} &= [bb'](x_s) g^{00}_{st} + [\sigma b'](x_s) g^{10}_{st} + [b\sigma'](x_s) g^{01}_{st} + [\sigma\sigma'](x_s) g^{11}_{st},\\ J^3_{st} &= [\sigma(\sigma b')'](x_s) g^{110}_{st} + [\sigma(b\sigma')'](x_s) g^{101}_{st} + [b(\sigma\sigma')'](x_s) g^{011}_{st} + [\sigma(\sigma\sigma')'](x_s) g^{111}_{st} + r^{(3)}_{st},\\ J^4_{st} &= [\sigma(\sigma(\sigma\sigma')')'](x_s) g^{1111}_{st} + r^{(4)}_{st}, \end{align*} where $r^{(3)}_{st}$ and $r^{(4)}_{st}$ satisfy $ |r^{(3)}_{st}| \leq \const[g] (t-s)^{2+\lambda} $ and $ |r^{(4)}_{st}| \leq \const[g] (t-s)^{1+3\lambda} $, respectively. In addition, we have $ |\tilde{J}^5_{st}| \leq \const[g] (t-s)^{5\lambda} $. Noting $[\sigma b'](x_s)g^{10}_{s,t}+[b\sigma'](x_s)g^{01}_{st}= [b\sigma'](x_s)(g_t-g_s)(t-s)+[\sigma b'-b\sigma'](x_s)g^{10}_{st}$, we complete the proof. \end{proof} \subsection{Directional derivatives of solutions} In what follows, we assume that \hyporef{hypo_ellipticity} is satisfied and find expressions of the solution $x\equiv x(g)$ to \eqref{eq_20140928070616} given by \eqref{eq_1545722900} and its directional derivatives. We follow the approach employed in \cite{DetempleGarciaRindisbacher2005} in order to do so. For $g\in\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$, we set \begin{align}\label{eq_20150218112958} J_t(g) = \exp \left( \int_0^t b'(x_u(g))\, du + \int_0^t \sigma'(x_u(g))\, d^\circ g_u \right). \end{align} This is a deterministic version of \eqref{eq_20141226020239}. Note that $J_t(g)$ is expressed by \begin{align}\label{eq_20141226020204} J_t(g) = \frac{\sigma(x_t(g))}{\sigma(x_0(g))} \exp \left( \int_0^t \left[ \frac{\Wronskian}{\sigma} \right] (x_u(g))\, du \right). 
\end{align} Indeed, we see \begin{align*} \log \sigma(x_t(g)) &= \log (\sigma\circ\phi)(a_t(g),g_t)\\ &= \log \sigma(x_0) + \int_0^t \left[ \frac{\sigma'b}{\sigma} \right] (x_u(g))\, du + \int_0^t \sigma'(x_u(g))\, d^\circ g_u \end{align*} from \pref{prop_20140928075311}. This implies \begin{align*} \sigma(x_t(g)) = \sigma(x_0) \exp \left( \int_0^t \left[ \frac{\sigma'b}{\sigma} \right] (x_u(g))\, du + \int_0^t \sigma'(x_u(g))\, d^\circ g_u \right). \end{align*} Substituting the above to \eqref{eq_20141226020204}, we obtain \eqref{eq_20150218112958}. \begin{proposition}\label{prop_20140929105014} Let $b,\sigma\in\SmoothFunc[bdd]{n+1}{\mathbf{R}}{\mathbf{R}}$ for $n\geq 1$. Assume that \hyporef{hypo_ellipticity} is satisfied. Then, the functional $g\mapsto x_t(g)$ is $n$-times Fr\'{e}chet differentiable in $\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$. In particular, the derivatives satisfy the following; \begin{enumerate} \item \label{item_20141228072212} For any $h^1,\dots,h^\nu\in\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$, we have \begin{align*} | \nabla_{h^\nu} \cdots \nabla_{h^1} x_t(g) | \leq \const[\nu] \|h^1\|_\infty \cdots \|h^\nu\|_\infty, \end{align*} where $\const[\nu]$ is a positive constant depending only on $b,\sigma$ and $\nu$. \item \label{item_20141106063937} The first derivative $\nabla_h x_t(g)$ is expressed as \begin{align*} \nabla_h x_t(g) &= \sigma(x_t(g))h_t + \int_0^t J_t(g)(J_s(g))^{-1} \Wronskian(x_s(g)) h_s\, ds. \end{align*} \item \label{item_20141106063948} If $h$ is Lipschitz continuous, then $\nabla_h x_t(g)$ is expressed as \begin{align*} \nabla_h x_t(g) = \int_0^t \dot{h}_s \sigma(x_s(g)) J_t(g)(J_s(g))^{-1}\, ds = \sigma(x_t(g)) \int_0^t \exp \left( \int_s^t \left[ \frac{\Wronskian}{\sigma} \right] (x_u(g))\, du \right) \dot{h}_s\, ds. 
\end{align*} \end{enumerate} \end{proposition} In order to prove \pref{prop_20140929105014}, we set \begin{align*} F(x) &= \int_0^x \frac{d\xi}{\sigma(\xi)}, & G&=F^{-1}, & \tilde{b} &= \left[ \frac{b}{\sigma} \right] \circ G, & y_0 &= F(x_0). \end{align*} We consider a solution $y$ to an ODE \begin{align}\label{eq_20140928074406} y_t = y_0 + \int_0^t \tilde{b}(y_u)\, du + g_t. \end{align} Then we obtain an expression of the solution $x_t$ to \eqref{eq_20140928070616} as follows; \begin{proposition}\label{prop_20140928105149} Let $y$ be a solution to \eqref{eq_20140928074406}. The solution $x$ to \eqref{eq_20140928070616} given by \eqref{eq_1545722900} is expressed by $x=G(y)$. \end{proposition} \begin{proof} Due to \pref{prop_20140928080940}, we obtain the assertion by showing that $G(y)\in\mathfrak{C}$ and that it satisfies \eqref{eq_20140928070616}. Note that the solution $y$ is given by $ y_t=\tilde{a}_t+g_t $, where $\tilde{a}$ is a solution to $ \tilde{a}_t = y_0 + \int_0^t \tilde{b}(\tilde{a}_u+g_u)\, du $. Hence $G(y)\in\mathfrak{C}$. We prove that $G(y)$ satisfies \eqref{eq_20140928070616}. From \pref{prop_20140928075311}, we see \begin{align*} G(y_t)-x_0 &= G(\tilde{a}_t+g_t)-G(\tilde{a}_0+g_0)\\ &= \int_0^t G'(\tilde{a}_u+g_u)\, d\tilde{a}_u + \int_0^t G'(\tilde{a}_u+g_u)\, d^\circ g_u. \end{align*} The first term is equal to \begin{align*} \int_0^t \sigma(G(\tilde{a}_u+g_u)) \tilde{b}(\tilde{a}_u+g_u)\, du &= \int_0^t \sigma(G(\tilde{a}_u+g_u)) \left[ \frac{b}{\sigma} \right] (G(\tilde{a}_u+g_u))\, du\\ &= \int_0^t b(G(y_u))\, du \end{align*} and the second one is $ \int_0^t \sigma(G(y_u))\, d^\circ g_u $. We see that $G(y)$ satisfies \eqref{eq_20140928070616}. The proof is completed. \end{proof} We see that the solution $y_t$ to \eqref{eq_20140928074406} with any coefficient $\tilde{b}$ and initial point $y_0$ is differentiable. \begin{proposition}\label{prop_20141030154436} Assume that $\tilde{b}\in\SmoothFunc[bdd]{n+1}{\mathbf{R}}{\mathbf{R}}$ for $n\geq 1$.
The functional $g\mapsto y_t(g)$ is $n$-times Fr\'{e}chet differentiable in $\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$. In particular, the derivatives satisfy the following; \begin{enumerate} \item For any $h^1,\dots,h^\nu\in\HolFunc[0]{\lambda}{[0,1]}{\mathbf{R}}$, we have \begin{align*} | \nabla_{h^\nu} \cdots \nabla_{h^1} y_t(g) | \leq \const[\nu] \|h^1\|_\infty \cdots \|h^\nu\|_\infty, \end{align*} where $\const[\nu]$ is a positive constant depending only on $\tilde{b}$ and $\nu$. \item \label{item_1505962351}The first derivative $\nabla_h y_t(g)$ is expressed as \begin{align*} \nabla_h y_t(g) = h_t + \int_0^t \exp \left( \int_s^t \tilde{b}'(y_u(g))\, du \right) \tilde{b}'(y_s(g)) h_s\, ds. \end{align*} \end{enumerate} \end{proposition} For the sake of conciseness, we omit the proof of the above proposition and prove \pref{prop_20140929105014}. \begin{proof}[Proof of \pref{prop_20140929105014}] The differentiability and Assertion~(\ref{item_20141228072212}) follow from \prefs{prop_20140928105149}{prop_20141030154436}. Noting $\tilde{b}'(y_t(g))=[\Wronskian/\sigma](x_t(g))$, we see that Assertion~(\ref{item_20141106063937}) is true. Assertion (\ref{item_20141106063948}) follows from Assertion~(\ref{item_20141106063937}) and the integration by parts formula. \end{proof} \subsection{SDEs driven by fBm}\label{sec_201708051641} We consider the existence and properties of a solution to an SDE \eqref{eq_20141122070455}. Let us start our discussion with the definition of fBm; \begin{definition} A one-dimensional centered Gaussian process $B=\{B_t\}_{0\leq t<\infty}$ starting from zero is called fractional Brownian motion (fBm) with the Hurst parameter $0<H<1$ if its covariance is given by \begin{align}\label{eq_20150219103359} \boldsymbol E[B_sB_t] = R(s,t) = \frac{1}{2} \left\{ s^{2H} + t^{2H} - |t-s|^{2H} \right\}.
\end{align} \end{definition} It is well known that fBm $B$ has stationary increments in the sense of $ \boldsymbol E [(B_t-B_s)(B_v-B_u)] = \boldsymbol E [(B_{t+a}-B_{s+a})(B_{v+a}-B_{u+a})] $ for any $0\leq s\leq t\leq u\leq v<\infty$ and $0\leq a<\infty$ and that it has self-similarity, namely, for any $a>0$, $\{a^{-H} B_{at}\}_{0\leq t<\infty}$ is also fBm with the Hurst parameter $H$. In addition, its trajectories admit the following modulus of continuity: there exists a measurable subset $\Omega_0$ of $\Omega$ such that $\boldsymbol P(\Omega_0)=1$ and for any $0<\epsilon<H$, there exists a nonnegative random variable $G_\epsilon$ such that $\boldsymbol E[G_\epsilon^p]<\infty$ for any $p\geq 1$ and \begin{align}\label{eq_20141224083943} |B_t(\omega)-B_s(\omega)| \leq G_\epsilon(\omega) |t-s|^{H-\epsilon} \end{align} for any $0\leq s,t<\infty$ and $\omega\in\Omega_0$. Assume that $1/3<H<1$. From \pref{prop_20140928080940} and the H\"{o}lder continuity of fBm \eqref{eq_20141224083943}, we see the existence of a unique solution to the SDE \eqref{eq_20141122070455} in the pathwise sense. More precisely, since $B(\omega)$ for any $\omega\in\Omega_0$ is $(H-\epsilon)$-H\"{o}lder continuous, a solution $X$ to \eqref{eq_20141122070455} is given by \eqref{eq_1545722900} and it is unique in the sense of \pref{prop_20140928080940}. In the same way as $x$, we shall also write $X(\xi)$, $X(B)$, or $X(\xi,B)$ to emphasize dependence on the initial value $\xi$ and/or the driver $B$. \begin{proposition}\label{prop_20150520004508} Assume that $b\in\SmoothFunc[bdd]{1}{\mathbf{R}}{\mathbf{R}}$ and $\sigma\in\SmoothFunc[bdd]{2}{\mathbf{R}}{\mathbf{R}}$.
Then there exists a unique solution $X$ to \eqref{eq_20141122070455} and the following are satisfied: \begin{enumerate} \item \label{item_20150327035831} $X$ is adapted to the fBm filtration $\{\mathcal{F}_{t}\}_{0\leq t\leq1}$, where $ \mathcal{F}_{t} = \sigma(B_u;0\leq u\leq t) $, \item \label{item_20141102234451} $t\mapsto X_t$ is $(H-\epsilon)$-H\"{o}lder continuous a.s.\ for every $0<\epsilon<H$, \item \label{item_20141103222159} for any $r\geq 1$, there exists a positive constant $\const$ such that \begin{align*} \boldsymbol E[|X_t-X_s|^r]^{1/r} \leq \const (t-s)^H \end{align*} for any $0\leq s<t\leq 1$. \end{enumerate} \end{proposition} \begin{proof} The first assertion follows from \pref{prop_20140928080940}. We show the second and third assertions. We decompose $X_t-X_s$ into $ \{\phi(a^B_t,B_t)-\phi(a^B_s,B_t)\} +\{\phi(a^B_s,B_t)-\phi(a^B_s,B_s)\} $. From \prefs{prop_20140929101727}{prop_20140928080940}, we have \begin{align*} |\phi(a^B_t,B_t)-\phi(a^B_s,B_t)| &\leq e^{c_1|B_t|} \int_s^t c_2e^{c_3|B_u|}\, du,\\ |\phi(a^B_s,B_t)-\phi(a^B_s,B_s)| &\leq c_4|B_t-B_s|, \end{align*} where $c_1$, $c_2$, $c_3$, $c_4$ are positive constants. The proof is completed. \end{proof} \section{Convergence of variation functionals}\label{sec_20150519021523} Let $B=\{B_t\}_{0\leq t\leq 1}$ be an fBm with Hurst parameter $1/3<H<1$ and $X=\{X_t\}_{0\leq t\leq 1}$ the solution to \eqref{eq_20141122070455} given by \eqref{eq_1545722900}. We assume that $b,\sigma\in\SmoothFunc[bdd]{\infty}{\mathbf{R}}{\mathbf{R}}$. For these processes, we define the weighted Hermite variations and the trapezoidal error variations. The purpose of this section is to present the results on the asymptotics of these variations that we need later. Let $f\in\SmoothFunc[poly]{2q}{\mathbf{R}}{\mathbf{R}}$ for $q\geq 2$ and $g\in\SmoothFunc[poly]{2}{\mathbf{R}}{\mathbf{R}}$. Let $\mu$ be a probability measure on $[0,1]$.
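As an aside on the driving noise: the covariance \eqref{eq_20150219103359} determines the law of fBm completely, so on a finite grid a path can be sampled exactly via a Cholesky factorization of the covariance matrix. A minimal illustrative sketch in Python (the function names are ours, not part of the paper); the stationary-increment identity $\boldsymbol E[(B_t-B_s)^2]=|t-s|^{2H}$ follows directly from $R$:

```python
import numpy as np

def fbm_cov(s, t, H):
    """Covariance R(s,t) = (s^{2H} + t^{2H} - |t-s|^{2H}) / 2 of fBm."""
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

def sample_fbm(n, H, rng):
    """Exact sampling of (B_{1/n}, ..., B_1) on a uniform grid via the
    Cholesky factor of the covariance matrix (O(n^3); fine for small n)."""
    times = np.arange(1, n + 1) / n
    R = np.array([[fbm_cov(s, t, H) for t in times] for s in times])
    L = np.linalg.cholesky(R)  # R is positive definite for distinct times
    return times, L @ rng.standard_normal(n)
```

For $H=1/2$ this reduces to standard Brownian motion, since then $R(s,t)=\min(s,t)$.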
For every $0\leq s<t\leq 1$ and continuous path $x:[0,1]\to\mathbf{R}$, define \begin{align*} F_{st}(x) \equiv F^{f,\mu}_{st}(x) := \int_0^1 f(\theta x_t+(1-\theta)x_s)\, \mu(d\theta). \end{align*} We define the weighted Hermite variations $ \wHerVar{q}{m}(t) \equiv \wHerVar{q,f,\mu}{m}(t) $ by \begin{align*} \wHerVar{q}{m}(t) = \sum_{k=1}^{\intPart{2^m t}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) \hermitePoly{q}(2^{mH}B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}) \end{align*} and the trapezoidal error variations $ \wTRVar{}{m}(t) \equiv \wTRVar{g}{m}(t) $ by \begin{align*} \wTRVar{}{m}(t) = \sum_{k=1}^{\intPart{2^m t}} g(X_{\dyadicPart[m]{k-1}}) \left( \frac{1}{2\cdot 2^m} B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} - \int_{\dyadicPart[m]{k-1}}^{\dyadicPart[m]{k}} B_{\dyadicPart[m]{k-1}u}\, du \right). \end{align*} Here, $B_{st}=B_t-B_s$ for $0\leq s<t\leq 1$ and $H_q$ is the $q$-th Hermite polynomial defined by \begin{align*} \hermitePoly{q}(\xi) = (-1)^q e^{\xi^2/2} \frac{d^q}{d\xi^q} e^{-\xi^2/2}. \end{align*} The first few Hermite polynomials are $\hermitePoly{1}(\xi)=\xi$, $\hermitePoly{2}(\xi)=\xi^2-1$, $\hermitePoly{3}(\xi)=\xi^3-3\xi$, and $\hermitePoly{4}(\xi)=\xi^4-6\xi^2+3$. We set $\hermitePoly{0}(\xi)=1$ by convention. The following limit theorems are vital for our proof. These results are proved in Appendixes~\ref{sec_20150107074711} and \ref{sec_20170519021523}. \begin{theorem}\label{thm_20141203094309} Let $q\geq 2$ be even. We have \begin{align*} \lim_{m\to\infty} 2^{m(qH-1)} \sum_{k=1}^{\intPart{2^m\cdot}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) (B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}})^q = \boldsymbol E[Z^q] \int_0^\cdot f(X_s)\, ds \end{align*} in probability with respect to the uniform norm. Here $Z$ is a standard Gaussian random variable. \end{theorem} \begin{theorem}\label{thm_20141117071529} Let $q\geq 2$ and $1/2q<H<1-1/2q$. 
We have \begin{align*} \lim_{m\to\infty} \left( B,2^{-m/2}\wHerVar{q}{m} \right) = \left( B, \sigma_{q,H} \int_0^\cdot f(X_s)\, dW_s \right) \end{align*} weakly in the Skorokhod topology, where $\sigma_{q,H}$ is a constant defined by \eqref{eq_const_sigma_q_Hurst} and $W$ is a standard Brownian motion independent of $B$. \end{theorem} \begin{theorem}\label{thm_20141123062022} Let $q\geq 2$ and $H=1/2$. We have \begin{align*} \lim_{m\to\infty} \left( B,2^{-m/2}\wHerVar{q}{m},2^{m}\wTRVar{}{m} \right) = \left( B, \sqrt{q!} \int_0^\cdot f(X_s)\, dW_s, \frac{1}{\sqrt{12}} \int_0^\cdot g(X_s)\, d\tilde{W}_s \right) \end{align*} weakly in the Skorokhod topology, where $W$ and $\tilde{W}$ are standard Brownian motions and $B$, $W$ and $\tilde{W}$ are independent. \end{theorem} \begin{proposition}\label{prop_20141117062354} If $0<H<1/2$ (resp.\ $1/2\leq H<1$), then the process $2^{mr}\wTRVar{}{m}$ for $0<r<2H$ (resp.\ $0<r<1$) converges to the process $0$ in probability with respect to the uniform norm. \end{proposition} In order to prove \tref[thm_20141117071529]{thm_20141123062022}, we use simplified versions of them. Let $q\geq 2$. We set \begin{align*} \hermiteVar{q}{m}(t) &= 2^{-m/2} \sum_{k=1}^{\intPart{2^m t}} \hermitePoly{q} ( 2^{mH} B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} ) \end{align*} and \begin{align*} \TRVar{m}(t) &= 2^{-m/2} \sum_{k=1}^{\intPart{2^m t}} 2^{m(H+1)} \left( \frac{1}{2\cdot 2^m} B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} - \int_{\dyadicPart[m]{k-1}}^{\dyadicPart[m]{k}} B_{\dyadicPart[m]{k-1}u}\, du \right). \end{align*} Then, we see $ \hermiteVar{q}{m} = 2^{-m/2} \wHerVar{q,f,\mu}{m} $ and $ \TRVar{m} = 2^{m(H+1/2)} \wTRVar{g}{m} $ for $f=g\equiv 1$ and the following: \begin{proposition}\label{prop_20141102222020} Assume $q\geq 2$ and $0<H<1-1/2q$. Then we have \begin{align*} \lim_{m\to\infty} ( B,\hermiteVar{q}{m},\TRVar{m} ) = ( B,\sigma_{q,H}W,\tilde{\sigma}_H \tilde{W} ) \end{align*} weakly in the Skorokhod topology.
Here $W$ and $\tilde{W}$ are independent standard Brownian motions independent of $B$, and $\sigma_{q,H}$ and $\tilde{\sigma}_H$ are positive constants given by \begin{align} \label{eq_const_sigma_q_Hurst} \sigma_{q,H}^2 &= q! \left( 1+2\sum_{l=1}^\infty \rho_H(l)^q \right),\\ \notag \tilde{\sigma}_H^2 &= \frac{1}{4} \frac{1-H}{1+H} + 2 \sum_{l=1}^\infty \tilde{\rho}_H(l) \end{align} with \begin{gather*} \rho_H(l) = \boldsymbol E [B_1(B_{l+1}-B_l)] = \frac{1}{2} ( |l+1|^{2H} + |l-1|^{2H} - 2|l|^{2H} ),\\ \tilde{\rho}_H(l) = \boldsymbol E \left[ \left( \frac{1}{2} B_1 - \int_0^1 B_u\, du \right) \left( \frac{1}{2} (B_{l+1}-B_l) - \int_l^{l+1} (B_u-B_l)\, du \right) \right]. \end{gather*} \end{proposition} We close this section with some remarks on the results above: \begin{remark}\label{rem_201708101002} \begin{enumerate} \item In Appendix~\ref{sec_20150107074711}, we show \pref{prop_20141102222020} by showing relative compactness (\lref{lem_20150519013003}) and convergence in the sense of finite-dimensional distributions (\lref{lem_20150519013014}). In the proof of \lref{lem_20150519013014}, we show the independence of $B$, $W$ and $\tilde{W}$ by using the multidimensional fourth moment theorem by Peccati and Tudor \cite{PeccatiTudor2005}. \item In Appendix~\ref{sec_20170519021523}, we show \trefs{thm_20141203094309}{thm_20141117071529}{thm_20141123062022} and \pref{prop_20141102222020}. In order to prove \trefs{thm_20141203094309}{thm_20141117071529}{thm_20141123062022}, we use good properties of the solution $X$: for example, the continuity of the solution map $B\mapsto X$, the continuity of the map $t\mapsto X_t$ and the Malliavin differentiability of $X_t$. In addition, \pref{prop_20141102222020} is essential for \tref[thm_20141117071529]{thm_20141123062022}. Since \pref{prop_20141102222020} is a consequence of the fourth moment theorem, these theorems are also consequences of it.
\item \tref[thm_20141203094309]{thm_20141117071529} are slight extensions of \cite[Theorem~2.1]{GradinaruNourdin2009}, \cite[Theorem~1]{NourdinNualartTudor2010} and \cite[Theorem~15]{Naganuma2015}. In these references, the authors showed convergences of the weighted Hermite variations $\wHerVar{q}{m}$ in which $F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X)$ is replaced by $f(B_{\dyadicPart[m]{k-1}})$ or $F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(B)$, that is, they considered functionals which are expressed explicitly in terms of the fBm $B$. On the other hand, we consider functionals of the solution $X$ to \eqref{eq_20141122070455} in \tref[thm_20141203094309]{thm_20141117071529}. \tref{thm_20141123062022} is an extension of \tref{thm_20141117071529} in the case $H=1/2$. \item Since a standard Brownian motion has independent increments, we see $\rho_{1/2}(l)=0$ and $\tilde{\rho}_{1/2}(l)=0$ for $l\geq 1$. Hence we have $\sigma_{q,1/2}=\sqrt{q!}$ and $\tilde{\sigma}_{1/2}=1/\sqrt{12}$. \end{enumerate} \end{remark} \section{The Crank-Nicolson scheme}\label{sec_201707212021} In this section, we show \tref{thm_20141204015044}. Below, we fix sufficiently small $0<\epsilon<H$ and write $H^{-}=H-\epsilon$. For $m\in\mathbf{N}$, we may write $\Delta=2^{-m}$, $\Delta B_k=B_{\tau^m_{k-1}\tau^m_{k}}$~$(1\le k\le 2^m)$, $\Delta(\Delta B_k)^n=\Delta\cdot(\Delta B_k)^n$ ($n=1,2,\dots$) and $\Delta(\Delta B_k)=\Delta(\Delta B_k)^1$. We use the notation $B^{\boldsymbol{i}}_{st}$ $(\boldsymbol{i}=10, 01, 011, 101, 110)$ to denote the iterated integrals introduced in \secref{sec_20141211055401}. We denote by $O(\Delta^p)$ a term which is bounded by $C \Delta^p$, where $C$ does not depend on $m$ and $\xi$. \subsection{Well-definedness of the Crank-Nicolson scheme} Since the Crank-Nicolson scheme is implicit, we need to specify the set on which the scheme is well defined.
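In practice, the implicit one-step equation $\eta = \xi + \tfrac{1}{2}\{b(\xi)+b(\eta)\}\delta + \tfrac{1}{2}\{\sigma(\xi)+\sigma(\eta)\}\Delta$ can be solved by fixed-point iteration, which contracts exactly when $\delta$ and $\Delta$ are small relative to $\sup|b'|$ and $\sup|\sigma'|$, mirroring the smallness condition imposed on $m$ below. A minimal sketch in Python (function and parameter names are ours, for illustration only):

```python
import math

def cn_step(xi, delta, dB, b, sigma, tol=1e-12, max_iter=200):
    """Solve the implicit Crank-Nicolson step
        eta = xi + (b(xi) + b(eta))/2 * delta + (sigma(xi) + sigma(eta))/2 * dB
    by fixed-point iteration.  The iteration map is a contraction when
    |delta| * sup|b'| / 2 + |dB| * sup|sigma'| / 2 < 1."""
    eta = xi
    for _ in range(max_iter):
        nxt = xi + 0.5 * (b(xi) + b(eta)) * delta + 0.5 * (sigma(xi) + sigma(eta)) * dB
        if abs(nxt - eta) < tol:
            return nxt
        eta = nxt
    raise RuntimeError("fixed-point iteration did not converge")
```

For coefficients such as $b=\sin$, $\sigma=\cos$ (with $\sup|b'|,\sup|\sigma'|\le 1$), any step and increment of magnitude below $1$ already yield a contraction.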
Recall that $(\Omega,\mathcal{F},\boldsymbol P)$ denotes the canonical probability space which defines fBm $B(\omega)$ with the Hurst parameter $H$ and \begin{align*} \Omega_0 &= \bigcap_{0<\epsilon<H} \{ \omega\in \Omega; B(\omega) \in \HolFunc[0]{H-\epsilon}{[0,1]}{\mathbf{R}} \}. \end{align*} For every $m\in\mathbf{N}$, we define \begin{align*} \Omega^{{\rm CN}(m)} = \Omega_0 \cap \left\{ \omega\in \Omega ; \sup_{|t-s|\le 2^{-m}} \frac{|B_t(\omega)-B_s(\omega)|}{(t-s)^{H-\epsilon}} \le 1 \right\}. \end{align*} Note that $\Omega^{{\rm CN}(m)}\subset \Omega^{{\rm CN}(m+1)}$ for any $m$ and $\lim_{m\to\infty}P(\Omega^{{\rm CN}(m)})=1$ for the fBm with the Hurst parameter $H$. We show that the Crank-Nicolson scheme is defined on $\Omega^{{\rm CN}(m)}$ for large $m$. \begin{proposition}\label{prop_20141111045543} Suppose \begin{align}\label{eq_201708101103} m > \max \left\{ 1+\log_2(\sup |b'|), \frac{1+\log_2(\sup|\sigma'|)}{H-\epsilon} \right\}. \end{align} Let $0<s<t<1$ satisfy $|t-s|\le 2^{-m}$. Then for any $\xi\in\mathbf{R}$ and $\omega\in \Omega^{{\rm CN}(m)}$, there exists a unique $\eta_t$ satisfying \begin{align*} \eta_t = \xi +\frac{b(\xi)+b(\eta_t)}{2}(t-s) +\frac{\sigma(\xi)+\sigma(\eta_t)}{2}(B_t(\omega)-B_s(\omega)). \end{align*} \end{proposition} \begin{proof} Set \begin{align*} F(\xi,\delta,\Delta;\eta) = \eta - \left[ \xi + \frac{1}{2} \left\{ b(\xi) + b(\eta) \right\} \delta + \frac{1}{2} \left\{ \sigma(\xi) + \sigma(\eta) \right\} \Delta \right]. \end{align*} If $|\delta|<1/(2\sup|b'|)$ and $|\Delta|< 1/(2\sup|\sigma'|)$, then $ [\partial F/\partial\eta](\xi,\delta,\Delta;\eta) = 1 - \{ (1/2) b'(\eta) \delta + (1/2) \sigma'(\eta) \Delta \} $ satisfies \begin{align*} \frac{\partial F}{\partial\eta}(\xi,\delta,\Delta;\eta) \geq 1 - \frac{1}{2} |b'(\eta)| |\delta| - \frac{1}{2} |\sigma'(\eta)| |\Delta| \geq \frac{1}{2}, \end{align*} which implies that $\eta\mapsto F(\xi,\delta,\Delta;\eta)$ is strictly increasing. 
Hence there exists a unique value $f(\xi,\delta,\Delta)$ such that $F(\xi,\delta,\Delta;f(\xi,\delta,\Delta))=0$ and $f(\xi,0,0)=\xi$. Under the assumption on $m$ and $s,t$, it holds that $t-s<1/(2\sup|b'|)$ and $|B_t(\omega)-B_s(\omega)|<1/(2\sup|\sigma'|)$ ~$(\omega\in \Omega^{{\rm CN}(m)})$. Hence $\eta_t$ is uniquely defined as $ \eta_t =f(\xi,t-s,B_t(\omega)-B_s(\omega)). $ \end{proof} \begin{remark}\label{rem_201708101101} Clearly, the implicit function $f(\xi,\delta,\Delta)$ $(\xi\in \mathbf{R}, |\delta|<1/(2\sup|b'|), |\Delta|<1/(2\sup|\sigma'|))$ is a $C^{\infty}$ function. \end{remark} \subsection{Proof of \tref{thm_20141204015044}}\label{sec_1506485494} The Crank-Nicolson approximation solution $\bar{X}^{(m)}$ can be defined on $\Omega^{{\rm CN}(m)}$ for $m$ as in \eqref{eq_201708101103}. From now on, we assume that $m$ satisfies \eqref{eq_201708101103}. For $\omega\notin \Omega^{{\rm CN}(m)}$, we always set $\bar{X}^{(m)}_t(\xi,B)\equiv \xi$. To study the error $\bar{X}^{(m)}-X$, we prove that there exists a piecewise linear path $h$ such that $X_{\dyadicPart[m]{k}}(\xi,B+h)=\bar{X}^{(m)}_{\dyadicPart[m]{k}}(\xi,B)$ for all $0\leq k\leq 2^m$. Let $h$ be a piecewise linear path defined on $[0,1]$ with $h_0=0$ whose partition points are the dyadic points $\{\tau^m_k\}_{k=0}^{2^m}$. Then $h$ can be identified with the set of its values at the partition points $\{h(\dyadicPart[m]{k})\}_{k=1}^{2^m}$. We write $\kappa_k=h(\tau^m_k)-h(\tau^m_{k-1})$ $(1\le k\le 2^m)$. \begin{lemma}\label{lem_201707240936} Let $\omega\in \Omega_0$. Then there exist unique $\kappa_k\in \mathbf{R}$~$(1\le k\le 2^m)$ such that \begin{align*} \bar{X}^{(m)}_{\dyadicPart[m]{k}}(\xi,B) = X_{\dyadicPart[m]{k}}(\xi,B+h), \qquad 1\le k\le 2^m. \end{align*} \end{lemma} We denote the above $h$ by $h^{(m)}$. Although $\kappa_k$ also depends on $m$, we use the same notation $\kappa_k$ for simplicity. Note that $h^{(m)}(\omega)$ is defined for all $\omega\in \Omega_0$.
Of course, the definition of $\bar{X}^{(m)}$ on $\Omega_0\setminus \Omega^{{\rm CN}(m)}$ is essentially meaningless and the behavior of $h^{(m)}$ on $\Omega_0\setminus \Omega^{{\rm CN}(m)}$ has nothing to do with the asymptotics of the error. Before proving the existence of $h^{(m)}$, we give a rough sketch of how to prove \tref{thm_20141204015044} by using $h^{(m)}$. \begin{remark}[Rough sketch of the proof of \tref{thm_20141204015044}]\label{rem_201708102135} We decompose $h^{(m)}$ as $h^{(m)}=h_M^{(m)}+h_{R}^{(m)}$. Here, $h_M^{(m)}$ is the main term and we see \begin{align}\label{eq_1506055600} \lim_{m\to\infty} 2^{m(3H-\frac{1}{2})} h_M^{(m)} = U \qquad \text{in law}, \end{align} where $U$ is a random variable. The term $h_{R}^{(m)}$ is the remainder term satisfying, for small $\delta>0$, \begin{align}\label{eq_1506055256} \lim_{m\to\infty} 2^{m(3H-\frac{1}{2}+\delta)} \|h_R^{(m)}\|_{\infty} = 0 \qquad \text{in probability}. \end{align} By using the derivative of $X(\xi,B)$ with respect to $B$, we have $ 2^{m(3H-\frac{1}{2})} \{\bar{X}^{(m)}(\xi,B)-X(\xi,B)\} = I_1+I_2+I_3 $, where \begin{align*} I_1 &= \nabla_{(2^{m})^{3H-\frac{1}{2}}h^{(m)}_M}X(\xi,B),\\ I_2 &= (2^m)^{3H-\frac{1}{2}} \left\{\bar{X}^{(m)}(\xi,B)-X(\xi,B+h^{(m)}_M)\right\},\\ I_3 &= (2^m)^{3H-\frac{1}{2}} \left\{ X(\xi,B+h^{(m)}_M) -X(\xi,B) -\nabla_{h^{(m)}_M}X(\xi,B) \right\}. \end{align*} By the convergence $2^{m(3H-\frac{1}{2})}h_M^{(m)}\to U$ in law, we have $I_1=\nabla_{2^{m(3H-\frac{1}{2})}h^{(m)}_M}X(\xi,B)\to \nabla_U X(\xi,B)$ in law. Since \begin{align*} I_2 \thickapprox 2^{m(3H-\frac{1}{2})} \left\{X(\xi,B+h^{(m)})-X(\xi,B+h^{(m)}_M)\right\} \thickapprox \nabla_{2^{m(3H-\frac{1}{2})}h^{(m)}_R}X(\xi,B), \end{align*} the term $I_2$ converges to $0$ in probability. For the third term, considering the second derivative, we have \begin{align*} I_3 \thickapprox 2^{m(3H-\frac{1}{2})} \frac{1}{2} \nabla_{h^{(m)}_M}^2X(\xi,B).
\end{align*} Therefore, this term also converges to $0$ in probability because $h^{(m)}_M$ is of order $2^{-m(3H-\frac{1}{2})}$. In the following, $h^{(m)}_M$ and $h^{(m)}_R$ are the piecewise linear paths corresponding to $\{\tilde{\kappa}_k\}$ and $\{R_k(\omega)\}$ in \lref{lem_201708091518}. We conclude this remark by making a comment on \eqref{eq_1506055600} and \eqref{eq_1506055256}. The convergence \eqref{eq_1506055600} of the main term is shown in \lref{lem_201708091528} by means of \tref{thm_20141117071529} and related results. By using this result, we see the convergence \eqref{eq_1506055256} of the remainder in \lref{lem_201708091518}. We should mention that the method used in \lref{lem_201708091518} makes the estimate of the remainder drastically simpler than that of \cite{Naganuma2015}. \end{remark} We now prove the existence of $h^{(m)}$. To this end, we need the bijectivity of the map $\kappa\mapsto X_t(\xi,B+\kappa\ell)$, which follows from the following lemma. Here $\ell_t=t$. This lemma is an immediate consequence of \pref{prop_20140929105014}~(\ref{item_20141106063948}). \begin{lemma}\label{lem_201707251240} There exist positive numbers $C_1, C_2$ which are independent of $B, \xi, t$ such that \begin{align*} C_1t \leq \frac{d}{d\kappa}X_t(\xi,B+\kappa\ell) \leq C_2t. \end{align*} In particular, the mapping $\mathbf{R}\ni\kappa\mapsto X_t(\xi,B+\kappa\ell)$ is a bijection on $\mathbf{R}$. \end{lemma} We prove \lref{lem_201707240936}. We write $\xi_k=\bar{X}^{(m)}_{\tau^m_k}(\xi,B)$. \begin{proof}[Proof of \lref{lem_201707240936}] We prove this by induction on $k$. Let $k=1$. It suffices to prove the existence of $\kappa_1$ satisfying $\xi_1=X_{2^{-m}}(\xi,B+2^m\kappa_1\ell)$. Since $\kappa\mapsto X_{2^{-m}}(\xi,B+2^m\kappa\ell)$ is a bijective mapping, $\kappa_1$ is uniquely determined. Suppose the equality holds up to $k$.
Noting that $\xi_{k+1}=X_{\tau^m_{k+1}}(\xi,B+h)$ is equivalent to $\xi_{k+1}=X_{2^{-m}}(\xi_k,\theta_{\tau^m_k}B+2^m\kappa_{k+1}\ell)$ and applying \lref{lem_201707251240}, the proof is completed. \end{proof} In the rest of this subsection, we state some key lemmas (\lrefs[lem_201707121721]{lem_201708091518}{lem_201708091528}) for \tref{thm_20141204015044} and show the theorem. The key lemmas are proved in the next subsection. In these lemmas, we calculate $\kappa_k$ and determine the main term of the error. By the definition, $\kappa_{k}$ $(1\le k\le 2^m)$ satisfies the equation \begin{align}\label{eq_1545356684} X_{2^{-m}}(\xi_{k-1},\theta_{\tau^m_{k-1}}B+2^m\kappa_{k}\ell) -X_{2^{-m}}(\xi_{k-1},\theta_{\tau^m_{k-1}}B) = \{ \xi_{k}-\xi_{k-1} \} - \{ X_{2^{-m}}(\xi_{k-1},\theta_{\tau^m_{k-1}}B) - \xi_{k-1} \}. \end{align} We denote the left-hand side of the above equality by $\hat{\kappa}_{k}$. The quantity $\hat{\kappa}_k$ is the one-step error of the Crank-Nicolson scheme. We calculate $\hat{\kappa}_k$ and $\kappa_k$ with small remainder terms. By this calculation and the H\"older continuity of $B$, we see that $\max_{1\le k\le 2^m}|\bar{X}^{(m)}_{\tau^m_k}-X_{\tau^m_k}|$ converges to $0$ if $H>\frac{1}{3}$ (\lref{lem_201707121721}). This is a rough estimate. We improve it later by identifying the main term of the error (\lref{lem_201708091518}). In order to express $\hat{\kappa}_k$, we introduce \begin{align*} \hat{f}_3 &= \frac{1}{12} [ \sigma^2\sigma''+\sigma (\sigma')^2 ], \quad \hat{f}_4 = \frac{1}{24} [ \sigma^3\sigma''' +5\sigma^2\sigma'\sigma'' +2\sigma(\sigma')^3 ],\quad \hat{g}_1 = \Wronskian,\\ \hat{\varphi} &= \frac{1}{4} \left[b(\sigma')^2+\sigma^2 b''\right] + \frac{1}{2} [ b\sigma\sigma''+\sigma\sigma'b' ],\\ \hat{\varphi}_{011} &= -b(\sigma\sigma')', \quad \hat{\varphi}_{101} = -\sigma (b\sigma')',\quad \hat{\varphi}_{110} = -\sigma(\sigma b')'. \end{align*} Here, we recall $\Wronskian=\sigma b'-\sigma'b$.
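The unhatted coefficients introduced next satisfy the key identity $f_4=\sigma f_3'/2$, where $f_3=\frac{1}{12}[\sigma\sigma''+(\sigma')^2]$ and $f_4=\frac{1}{24}\sigma(\sigma\sigma'''+3\sigma'\sigma'')$. This can be sanity-checked numerically; a small sketch for the concrete choice $\sigma(x)=2+\sin x$ (our choice, for illustration only, with $f_3'$ approximated by a central difference):

```python
import math

# sigma(x) = 2 + sin x and its derivatives (illustrative choice)
s  = lambda x: 2.0 + math.sin(x)
s1 = math.cos
s2 = lambda x: -math.sin(x)
s3 = lambda x: -math.cos(x)

def f3(x):
    """f_3 = (sigma*sigma'' + (sigma')^2) / 12."""
    return (s(x) * s2(x) + s1(x) ** 2) / 12.0

def f4(x):
    """f_4 = sigma * (sigma*sigma''' + 3*sigma'*sigma'') / 24."""
    return s(x) * (s(x) * s3(x) + 3.0 * s1(x) * s2(x)) / 24.0

def identity_gap(x, h=1e-6):
    """|f_4(x) - sigma(x) * f_3'(x) / 2| with f_3' by central difference."""
    f3p = (f3(x + h) - f3(x - h)) / (2.0 * h)
    return abs(f4(x) - s(x) * f3p / 2.0)
```

The gap vanishes up to the $O(h^2)$ error of the central difference, in agreement with the exact computation $f_3'=\frac{1}{12}(\sigma\sigma'''+3\sigma'\sigma'')$.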
We also see that the main term of $\kappa_k$ is expressed by the following functions: \begin{align*} f_3 &= \frac{1}{12} [\sigma\sigma''+(\sigma')^2], \quad f_4 = \frac{1}{24} \sigma (\sigma\sigma'''+ 3\sigma'\sigma''), \quad g_1 = \frac{\Wronskian}{\sigma},\\ \varphi &= \frac{1}{4} \left[ \frac{b(\sigma')^2}{\sigma} +\sigma b'' \right] +\frac{1}{2}(b\sigma''+\sigma'b'),\\ \varphi_{011} &= -\frac{b(\sigma\sigma')'}{\sigma}, \quad \varphi_{101} = -(b\sigma')',\quad \varphi_{110} = -(\sigma b')'. \end{align*} Note that $f_4=(\hat{f}_4-\sigma'\hat{f}_3)/\sigma$ and that $h=\hat{h}/\sigma$ for $h=f_3,g_1,\varphi,\varphi_{011},\varphi_{101},\varphi_{110}$. By a simple calculation, we have $f_4=\sigma f_3'/2$. This identity is a key to the convergence of the main term of the error, as in the case $b\equiv 0$ (\cite{NeuenkirchNourdin2007,Naganuma2015}); see \lref{lem_201708091528}. The expression of $\hat{\kappa}_{k}$ and the convergence of $\max_{1\le k\le 2^m}|\bar{X}^{(m)}_{\tau^m_k}-X_{\tau^m_k}|$ are obtained as follows: \begin{lemma}\label{lem_201707121721} For any $\omega\in \Omega^{{\rm CN}(m)}$, the following hold. \begin{enumerate} \item \label{item_1505965509} We have \begin{align*} \hat{\kappa}_{k} &= \hat{f}_3(\xi_{k-1})(\Delta B_k)^3 +\hat{f}_4(\xi_{k-1})(\Delta B_k)^4 + \hat{g}_1(\xi_{k-1}) \left( \frac{\Delta}{2}\Delta B_k-B^{10}_{\tau^m_{k-1}\tau^m_k} \right)\\ &\quad\quad + \hat{\varphi}(\xi_{k-1})\Delta (\Delta B_k)^2 + \hat{\varphi}_{011}(\xi_{k-1}) B^{011}_{\tau^m_{k-1}\tau^m_k} + \hat{\varphi}_{101}(\xi_{k-1}) B^{101}_{\tau^m_{k-1}\tau^m_k} + \hat{\varphi}_{110}(\xi_{k-1}) B^{110}_{\tau^m_{k-1}\tau^m_k}\\ &\quad\quad +O(\Delta^{5H^{-}})+O(\Delta^{3H^{-}+1}) +O(\Delta^{H^{-}+2}). \end{align*} \item \label{item_1505964312} We have $\hat{\kappa}_k=O(\Delta^{3H^{-}})$, $\kappa_k=O(\Delta^{3H^{-}})$ and \begin{align*} \max_{1\le k\le 2^m} | X_{\tau^m_k}(\xi,B) - \bar{X}^{(m)}_{\tau^m_k}(\xi,B) | = O(\Delta^{3H^{-}-1}).
\end{align*} In particular, the Crank-Nicolson approximation solution converges to the solution itself at the partition points uniformly if $H>\frac{1}{3}$. \item \label{item_1547629361} We have \begin{align*} \max_{0\leq t\leq 1} |\bar{X}^{(m)}_t(\xi,B)-X_t(\xi,B+h^{(m)})| = O(\Delta^{3H^{-}}). \end{align*} \end{enumerate} \end{lemma} The next lemma asserts that $\tilde{\kappa}_k$ is the main term of $\kappa_k$. As stated in \rref{rem_201708102135}, in order to prove it, we use not only the H\"older regularity of $B$ but also the convergence in law of the main term of $h^{(m)}$. \begin{lemma}\label{lem_201708091518} For $1\le k\le 2^m$, let \begin{align*} \tilde{\kappa}_{k} &= f_3(X_{\tau^m_{k-1}})(\Delta B_k)^3 +f_4(X_{\tau^m_{k-1}})(\Delta B_k)^4 + g_1(X_{\tau^m_{k-1}}) \left(\frac{\Delta}{2}\Delta B_k-B^{10}_{\tau^m_{k-1}\tau^m_k}\right)\\ &\quad +\varphi(X_{\tau^m_{k-1}})\Delta (\Delta B_k)^2 +\varphi_{011}(X_{\tau^m_{k-1}})B^{011}_{\tau^m_{k-1}\tau^m_k} +\varphi_{101}(X_{\tau^m_{k-1}})B^{101}_{\tau^m_{k-1}\tau^m_k} +\varphi_{110}(X_{\tau^m_{k-1}})B^{110}_{\tau^m_{k-1}\tau^m_k} \end{align*} and set $R_k(\omega)=\kappa_k-\tilde{\kappa}_k$. Then there exists $\delta>0$ such that $ \lim_{m\to\infty} (2^m)^{3H-\frac{1}{2}+\delta} \max_{1\le k\le 2^{m}} |\sum_{i=1}^{k}R_i| = 0 $ in probability. \end{lemma} \begin{remark} Although $\tilde{\kappa}_k$ and $\kappa_k$ are defined on $\Omega_0$, the definition of $\kappa_k$ on $\Omega_0\setminus\Omega^{{\rm CN}(m)}$ is essentially meaningless. However, the statement of the convergence of $R_k$ makes sense because $\lim_{m\to\infty}\boldsymbol P(\Omega^{{\rm CN}(m)})=1$. 
\end{remark} The following processes are candidates for the main term of $h^{(m)}$: \begin{gather}\label{eq_1506569861} \left\{ \begin{aligned} \Phi_1(t) &= \sum_{k=1}^{\intPart{2^mt}} \left\{ f_3(X_{\tau^m_{k-1}})(\Delta B_k)^3 +f_4(X_{\tau^m_{k-1}})(\Delta B_k)^4 \right\},\\ \Phi_2(t) &= \sum_{k=1}^{\intPart{2^mt}} g_1(X_{\tau^m_{k-1}}) \left( \frac{\Delta}{2}\Delta B_k - B^{10}_{\tau^m_{k-1}\tau^m_k} \right),\\ \Phi_3(t) &= \sum_{k=1}^{\intPart{2^mt}} \Biggl\{ \varphi(X_{\tau^m_{k-1}}) \Delta(\Delta B_k)^2 + \varphi_{011}(X_{\tau^m_{k-1}}) B^{011}_{\tau^m_{k-1}\tau^m_k} + \varphi_{101}(X_{\tau^m_{k-1}}) B^{101}_{\tau^m_{k-1}\tau^m_k} + \varphi_{110}(X_{\tau^m_{k-1}}) B^{110}_{\tau^m_{k-1}\tau^m_k} \Biggr\},\\ \Phi_4(t) &= -\sum_{k=1}^{\intPart{2^mt}} [g_1\sigma'](X_{\tau^m_{k-1}}) \Delta B_k \left( \frac{\Delta}{2}\Delta B_k -B^{10}_{\tau^m_{k-1}\tau^m_k} \right). \end{aligned} \right. \end{gather} \begin{remark} The processes $\Phi_1$, $\Phi_2$ and $\Phi_3$ arise from the expression of $\tilde{\kappa}_{k}$. In order to prove \lref{lem_201708091518}, it is necessary to consider $\Phi_4$ as well. \end{remark} By using \tref{thm_20141117071529}, \tref{thm_20141123062022} and \pref{prop_20141117062354}, we can show the next lemma, which gives the asymptotics of $\Phi_1$, $\Phi_2$, $\Phi_3$ and $\Phi_4$. \begin{lemma}\label{lem_201708091528} Let $W$ and $\tilde{W}$ be standard Brownian motions. Assume that $B$, $W$ and $\tilde{W}$ are independent. The next assertions hold. \begin{enumerate} \item Let $\frac{1}{3}<H<\frac{1}{2}$. Then $ \left( B, (2^m)^{3H-\frac{1}{2}} (\Phi_1,\Phi_2,\Phi_3,\Phi_4) \right) $ converges weakly to $ \left( B, \sigma_{3,H}\int_0^{\cdot}f_3(X_t)dW_t, 0, 0, 0 \right)$ in $D([0,1];\mathbf{R}^4)$ with respect to the Skorokhod topology. Here, $\sigma_{3,H}$ is a constant defined by \eqref{eq_const_sigma_q_Hurst}. \item Let $H=\frac{1}{2}$.
Then $ \left( B, 2^m (\Phi_1,\Phi_2,\Phi_3,\Phi_4) \right) $ converges weakly to \begin{align*} &\Biggl( B, \sqrt{6}\int_0^{\cdot}f_3(X_s)\,dW_s+3\int_0^{\cdot}f_3(X_s)\circ dB_s, \frac{1}{\sqrt{12}}\int_0^{\cdot}g_1(X_s)d\tilde{W}_s,\\ &\qquad\qquad\qquad\qquad\qquad\qquad \int_0^{\cdot}\varphi(X_s)\,ds + \frac{1}{4} \int_0^{\cdot} \left\{\varphi_{011}(X_s)+\varphi_{110}(X_s)\right\}\,ds, 0 \Biggr) \end{align*} in $D([0,1];\mathbf{R}^4)$ with respect to the Skorokhod topology. \end{enumerate} \end{lemma} We are in a position to show \tref{thm_20141204015044}. Proofs of \lrefs[lem_201707121721]{lem_201708091518}{lem_201708091528} are postponed to \secref{sec_1506314800}. \begin{proof}[Proof of \tref{thm_20141204015044}] We follow the idea in \rref{rem_201708102135}. Let $h^{(m)}_M$ and $h^{(m)}_R$ be the piecewise linear paths associated with $\{\tilde{\kappa}_k\}$ and $\{R_k\}$, respectively, in \lref{lem_201708091518}. By \lref{lem_201708091528}, we have the weak convergence in the Skorokhod topology in $D([0,1];\mathbf{R}^2)$, \begin{align*} \left(B,(2^m)^{3H-\frac{1}{2}}(\Phi_1+\Phi_2+\Phi_3)\right) \to (B,U), \end{align*} where $U$ is the same process defined in \tref{thm_20141204015044}. Since $h^{(m)}_M$ is piecewise linear and $\Phi_1+\Phi_2+\Phi_3$ is a step function, we have \begin{align*} \|h^{(m)}_M-(\Phi_1+\Phi_2+\Phi_3)\|_{\infty} = O(\Delta^{3H^-}) \qquad \omega\in \Omega^{{\rm CN}(m)}. \end{align*} Hence $\lim_{m\to\infty} (2^m)^{3H-\frac{1}{2}}\|h^{(m)}_M-(\Phi_1+\Phi_2+\Phi_3)\|_{\infty}=0$ in probability. Consequently, we have the weak convergence in the uniform convergence topology in $C([0,1];\mathbf{R}^2)$: \begin{align}\label{eq_201708110956} \left(B,(2^m)^{3H-\frac{1}{2}}h^{(m)}_M\right) \to (B,U).
\end{align} As stated in \rref{rem_201708102135}, we have $ (2^m)^{3H-\frac{1}{2}} \{\bar{X}^{(m)}(\xi,B)-X(\xi,B)\} = I_1+I_2+I_3 $, where \begin{align*} I_1 &= \nabla_{(2^{m})^{3H-\frac{1}{2}}h^{(m)}_M}X(\xi,B),\\ I_2 &= (2^m)^{3H-\frac{1}{2}} \left\{\bar{X}^{(m)}(\xi,B)-X(\xi,B+h^{(m)}_M)\right\},\\ I_3 &= (2^m)^{3H-\frac{1}{2}} \left\{ X(\xi,B+h^{(m)}_M) -X(\xi,B) -\nabla_{h^{(m)}_M}X(\xi,B) \right\}. \end{align*} We consider $I_2$ and $I_3$ first. By Taylor's theorem, we have \begin{align*} |\bar{X}^{(m)}_t(\xi,B)-X_t(\xi,B+h^{(m)}_M)| &\leq | \bar{X}^{(m)}_t(\xi,B)-X_t(\xi,B+h^{(m)}) | + | X_t(\xi,B+h^{(m)})-X_t(\xi,B+h^{(m)}_M) |\\ &\leq | \bar{X}^{(m)}_t(\xi,B)-X_t(\xi,B+h^{(m)}) | + \left| \int_0^1\nabla_{h^{(m)}_R} X_t(\xi,B+\theta h^{(m)}_R)\,d\theta \right|. \end{align*} By using \lref{lem_201707121721}~(\ref{item_1547629361}) and the boundedness of the derivative, we have \begin{align*} \|\bar{X}^{(m)}(\xi,B)-X(\xi,B+h^{(m)}_M)\|_\infty &\le C \{ \Delta^{3H^{-}} + \|h^{(m)}_R\|_{\infty} \}. \end{align*} Here $C$ is a constant independent of $m$. Combining this and \lref{lem_201708091518}, we have $\|I_2\|_{\infty}$ converges to $0$ in probability. Similarly, we have \begin{align*} \|I_3\|_{\infty}\le C(2^m)^{3H-\frac{1}{2}}\|h^{(m)}_M\|_{\infty}^2 \to 0\qquad \text{in probability}. \end{align*} We next consider the main term $I_1$. Let $J_t(g)$ be the continuous path defined by $g$ in \eqref{eq_20150218112958}. By \rref{rem_201708111136}, the mapping $g\mapsto J(g)$ is continuous on $C([0,1];\mathbf{R})$. From this, we have the continuity of the mapping \begin{align*} C([0,1];\mathbf{R}^2)\ni(g,z) \mapsto \sigma(x(g))z+J(g)\int_0^{\cdot}J_s^{-1}(g)\Wronskian(x_s(g))z_s\,ds \in C([0,1];\mathbf{R}). \end{align*} Combining \pref{prop_20140929105014}, \eqref{eq_201708110956} and the above, we complete the proof. 
\end{proof} \subsection{Proof of key lemmas}\label{sec_1506314800} In the rest of this section, we show \lrefs[lem_201707121721]{lem_201708091518}{lem_201708091528}. \lref{lem_201707121721} follows from the next lemma immediately: \begin{lemma}\label{lem_1506077157} For any $\omega\in \Omega^{{\rm CN}(m)}$, the following hold. \begin{enumerate} \item \label{item_1506567781} We have \begin{align*} & \xi_k-\xi_{k-1} = b(\xi_{k-1})\Delta+\sigma(\xi_{k-1})\Delta B_k +\frac{1}{2}[\sigma'\sigma](\xi_{k-1})(\Delta B_k)^2 +\frac{1}{4}\left[\sigma(\sigma')^2+\sigma^2\sigma''\right](\xi_{k-1}) (\Delta B_k)^3\\ &\quad + \left[ \frac{1}{12}\sigma'''\sigma^3 +\frac{3}{8}\sigma^2\sigma'\sigma'' +\frac{1}{8}\sigma(\sigma')^3 \right] (\xi_{k-1}) (\Delta B_k)^4 + \frac{1}{2}\left[\sigma'b+\sigma b'\right](\xi_{k-1}) \Delta(\Delta B_k)\\ &\quad + \frac{1}{4} \left[ (b(\sigma')^2+\sigma^2b'') + 2(\sigma b\sigma''+\sigma\sigma'b') \right] (\xi_{k-1}) \Delta (\Delta B_k)^2 + \frac{1}{2}[bb'](\xi_{k-1})\Delta^2\\ &\quad +O(\Delta^{5H^{-}}) +O(\Delta^{3H^{-}+1}). \end{align*} \item \label{item_1506567805} We have \begin{multline*} X_\Delta(\xi_{k-1},\theta_{\tau^m_{k-1}}B)-\xi_{k-1}\\ \begin{aligned} &= b(\xi_{k-1})\Delta+\sigma(\xi_{k-1})\Delta B_k + \frac{1}{2} \left[\sigma\sigma'\right](\xi_{k-1})(\Delta B_k)^2 + \frac{1}{3!} \left[\sigma(\sigma\sigma')'\right](\xi_{k-1}) (\Delta B_k)^3\\ &\phantom{=}\quad + \frac{1}{4!} \left[ \sigma(\sigma(\sigma\sigma')')' \right] (\xi_{k-1}) (\Delta B_k)^4 +[b\sigma'](\xi_{k-1})\Delta(\Delta B_k) + [\sigma b'-b\sigma'](\xi_{k-1}) B^{10}_{\tau^m_{k-1}\tau^m_k}\\ &\phantom{=}\quad + b(\sigma\sigma')'(\xi_{k-1}) B^{011}_{\tau^m_{k-1}\tau^m_k} + \sigma(b\sigma')'(\xi_{k-1}) B^{101}_{\tau^m_{k-1}\tau^m_k} + \sigma(\sigma b')'(\xi_{k-1}) B^{110}_{\tau^m_{k-1}\tau^m_k} +\frac{1}{2}[b'b](\xi_{k-1})\Delta^2\\ &\phantom{=}\quad +O(\Delta^{5H^{-}})+O(\Delta^{3H^{-}+1}) +O(\Delta^{H^{-}+2}). 
\end{aligned} \end{multline*} \end{enumerate} \end{lemma} \begin{proof} (\ref{item_1506567781}) $\xi_k$ is determined by the equation \begin{align}\label{eq_20177111737} \xi_k = \xi_{k-1} +\frac{\sigma(\xi_{k-1})+\sigma(\xi_{k})}{2}\Delta B_k +\frac{b(\xi_{k-1})+b(\xi_{k})}{2}\Delta. \end{align} Since the implicit function is $C^{\infty}$ as in \rref{rem_201708101101}, there exist constants $a_{1,0},\dots,a_{4,0}$, $a_{0,1}$, $a_{1,1}$, $a_{2,1}$ and $a_{0,2}$ such that \begin{align*} \xi_k-\xi_{k-1} = \sum_{i=1}^4a_{i,0} (\Delta B_k)^i +a_{0,1}\Delta +a_{1,1}\Delta(\Delta B_k) +a_{2,1}\Delta(\Delta B_k)^2 +a_{0,2}\Delta^2 +O(\Delta^{3H^{-}+1}) +O(\Delta^{5H^{-}}). \end{align*} Substituting this expansion of $\xi_k$ into equation~(\ref{eq_20177111737}) and comparing the coefficients on both sides, we obtain the desired formula. \noindent (\ref{item_1506567805}) This is an immediate consequence of \pref{prop_20141010094511}. \end{proof} \begin{proof}[Proof of \lref{lem_201707121721}] (\ref{item_1505965509}) The assertion follows from \lref{lem_1506077157} and the definition of $\hat{\kappa}_k$. \noindent (\ref{item_1505964312}) The estimate $\hat{\kappa}_k=O(\Delta^{3H^{-}})$ follows from (\ref{item_1505965509}) and the H\"older continuity of $B$. The estimate $\kappa_k=O(\Delta^{3H^{-}})$ follows from that of $\hat{\kappa}_k$ and \lref{lem_201707251240}. By combining $\kappa_k=O(\Delta^{3H^{-}})$ and the Lipschitz continuity of the mapping $B\mapsto X(B)$, we obtain the last assertion. \noindent (\ref{item_1547629361}) Since \lref{lem_1506077157} with $\Delta=t-\tau^m_{k-1}$ is still valid, for $\tau^m_{k-1}<t\leq\tau^m_k$, we have \begin{align*} \bar{X}^{(m)}_t(\xi,B)-X_t(\xi,B+h^{(m)}) &= \{\bar{X}^{(m)}_t(\xi,B)-\xi_{k-1}\} - \{X_{t-\tau^m_{k-1}}(\xi_{k-1},\theta_{\tau^m_{k-1}}(B+h^{(m)}))-\xi_{k-1}\}\\ &= O(h^{(m)}_t-h^{(m)}_{\tau^m_{k-1}}).
\end{align*} Noting $ O(h^{(m)}_t-h^{(m)}_{\tau^m_{k-1}}) = O(\kappa_k) = O(\Delta^{3H^{-}}) $, we see the assertion. \end{proof} Next we show \lref{lem_201708091528}. To prove this lemma, we use the following results concerning the Skorokhod topology. \begin{proposition}\label{prop_201708091009} The following hold. \begin{enumerate} \item \label{item_1506071756} The mapping $ D([0,1];\mathbf{R}^d)\ni(x_i)_{i=1}^d \mapsto (\sum_{i=1}^dx_i)\in D([0,1];\mathbf{R}) $ is continuous. \item \label{item_1506071772} The mapping $ D([0,1];\mathbf{R}^d)\ni x \mapsto \sup_{0\le t\le 1}|x_t|\in \mathbf{R} $ is continuous. \item \label{item_1506072032} We assume random variables in this statement are defined in the same probability space. Let $\{X_n\}_{n=1}^{\infty}$ and $\{Y_n\}_{n=1}^{\infty}$ be random variables with values in $C([0,1];\mathbf{R}^{d_1})$ and $D([0,1];\mathbf{R}^{d_2})$, respectively. Let $\{Z_n\}_{n=1}^\infty$ be random variables with values in $D([0,1];\mathbf{R}^{d_3})$. Let $\varphi : C([0,1];\mathbf{R}^{d_1})\to C([0,1];\mathbf{R}^{d_4})$ be a continuous mapping. Suppose that $(X_n, Y_n)\in D([0,1];\mathbf{R}^{d_1+d_2})$ converges to $(X,Y)$ in law with respect to the Skorokhod topology and $\|Z_n\|_{\infty}\to 0$ in probability. Then $(X_n,Y_n,\varphi(X_n),Z_n)$ converges in law in the Skorokhod topology to $(X,Y,\varphi(X),0)\in D([0,1];\mathbf{R}^{d_1+d_2+d_3+d_4})$. \end{enumerate} \end{proposition} \begin{proof}[Proof of \lref{lem_201708091528}] First, we consider $\Phi_1$ and $\Phi_2$. Recalling $f_4=\sigma f_3'/2$, we have $ f_3(X_{\tau^m_{k-1}}) + f_4(X_{\tau^m_{k-1}})\Delta B_k = \{f_3(X_{\tau^m_{k-1}})+f_3(X_{\tau^m_k})\}/2 +O(\Delta^{2H^-}) +O(\Delta) $. 
Hence \begin{multline*} (2^m)^{3H-\frac{1}{2}} \left\{ f_3(X_{\tau^m_{k-1}})(\Delta B_k)^3 + f_4(X_{\tau^m_{k-1}})(\Delta B_k)^4 \right\}\\ = (2^m)^{-1/2} \frac{f_3(X_{\tau^m_{k-1}})+f_3(X_{\tau^m_k})}{2} \hermitePoly{3}(2^{mH}\Delta B_k) + (2^m)^{H-1/2} \frac{f_3(X_{\tau^m_{k-1}})+f_3(X_{\tau^m_k})}{2}3\Delta B_k +R_{m,k}(B) \end{multline*} where $R_{m,k}(B)=O(\Delta^{5H^{-}-3H+\frac{1}{2}})+O(\Delta^{3H^{-}-3H+\frac{3}{2}})$. Note that $ \lim_{m\to\infty} \sum_{k=1}^{2^m}|R_{m,k}| = 0 $ for any $\omega\in \bigcup_m\Omega^{{\rm CN}(m)}$. By \pref{prop_20150424100658}, we have \begin{align*} \left\| \sum_{k=1}^{\intPart{2^m\cdot}} \frac{f_3(X_{\tau^m_{k-1}})+f_3(X_{\tau^m_k})}{2} \Delta B_k - \int_0^{\cdot}f_3(X_s)\,d^\circ B_s \right\|_{\infty} \to 0\qquad \omega\in \bigcup_{m}\Omega^{{\rm CN}(m)}. \end{align*} By \rref{rem_201708111136}, the mapping $B\mapsto\int_0^{\cdot}f_3(X_s)\,d^{\circ}B_s$ is continuous in the uniform norm. By \tref{thm_20141117071529}, \tref{thm_20141123062022}, \pref{prop_20141117062354} and \pref{prop_201708091009}~(\ref{item_1506072032}), \begin{multline*} \left( B, (2^m)^{3H-\frac{1}{2}}(\Phi_1,\Phi_2) \right)\\ \begin{aligned} \to \begin{cases} \left( B, \displaystyle{\sqrt{6}\int_0^{\cdot}f_3(X_s)\,dW_s} +\displaystyle{3\int_0^{\cdot}f_3(X_s)\,d^\circ B_s}, \displaystyle{\frac{1}{\sqrt{12}}\int_0^{\cdot}g_1(X_s)\,d\tilde{W}_s} \right), & H=1/2,\\ \left( B, \displaystyle{\sigma_{3,H}\int_0^{\cdot}f_3(X_s)\,dW_s}, 0 \right),& 1/3<H<1/2 \end{cases} \end{aligned} \end{multline*} weakly in the Skorokhod topology. Note that $\sigma_{3,\frac{1}{2}}=\sqrt{6}$. (See \rref{rem_201708101002}.) Next, we consider $\Phi_3$. Suppose $1/3<H<1/2$. By \lref{lem_201708052140}, for any $\omega\in\Omega^{{\rm CN}(m)}$, \begin{align*} (2^m)^{3H-\frac{1}{2}} \sum_{k=1}^{2^m} \left( |\Delta (\Delta B_k)^2| +|B^{011}_{\tau^m_{k-1}\tau^m_k}| +|B^{101}_{\tau^m_{k-1}\tau^m_k}| +|B^{110}_{\tau^m_{k-1}\tau^m_k}| \right) = O(\Delta^{2H^{-}-3H+\frac{1}{2}}). 
\end{align*} Hence $\|\Phi_3\|_{\infty}$ converges to $0$ in probability. We consider the case $H=\frac{1}{2}$. Then we have \begin{align*} B^{011}_{s,t} &= \int_s^t\left(\int_s^u(r-s)dB_r\right)\,dB_u+\frac{(t-s)^2}{4},\\ B^{101}_{s,t} &= \int_s^t\left(\int_s^u(B_r-B_s)dr\right)\,dB_u,\\ B^{110}_{s,t} &= \int_s^t\left(\int_s^u(B_r-B_s)dB_r\right)\,du+\frac{(t-s)^2}{4}, \end{align*} where $dB_r$ is the It\^o integral. By the same reason as for $\Phi_3$, we see that for almost all $\omega$ uniformly, \begin{align*} \lim_{m\to\infty}2^m \sum_{k=1}^{\intPart{2^m\cdot}} \varphi_{\boldsymbol{i}}(X_{\tau^m_{k-1}}) B_{\tau^m_{k-1}\tau^m_k}^{\boldsymbol{i}} = \begin{cases} \displaystyle{\frac{1}{4}\int_0^{\cdot}\varphi_{\boldsymbol{i}}(X_s)\,ds}, & \boldsymbol{i}=011, 110,\\ 0, & \boldsymbol{i}=101. \end{cases} \end{align*} By a similar calculation to the above, we have \begin{align*} \lim_{m\to\infty} 2^m \sum_{k=1}^{\intPart{2^m\cdot}} \varphi(X_{\tau^m_{k-1}}) \Delta(\Delta B_k)^2 = \int_0^{\cdot} \varphi(X_s)\,ds \quad \text{a.s. $\omega$ uniformly}. \end{align*} Hence, we see that for almost all $\omega$ uniformly, \begin{align*} \lim_{m\to\infty} (2^m)^{3H-\frac{1}{2}}\Phi_3 = \int_0^{\cdot}\varphi(X_s)\,ds + \frac{1}{4} \int_0^{\cdot} \left\{\varphi_{011}(X_s)+\varphi_{110}(X_s)\right\}\,ds. \end{align*} Finally, we consider the term $\Phi_4$. Suppose $1/3<H<1/2$. Then for any $\omega\in \Omega^{{\rm CN}(m)}$ \begin{align}\label{eq_201707261039} (2^m)^{3H-\frac{1}{2}+\delta} \sum_{k=1}^{2^{m}} \left| \Delta B_k \left( \frac{\Delta}{2}\Delta B_k - B^{10}_{\tau^m_{k-1}\tau^m_k} \right) \right| =O(\Delta^{2H^{-}-3H+\frac{1}{2}-\delta}) =O\left(\Delta^{\frac{1}{2}-H-2\epsilon-\delta}\right). \end{align} Hence, if $\delta<\frac{1}{2}-H-2\epsilon$, $ \lim_{m\to\infty} \|(2^m)^{3H-\frac{1}{2}+\delta}\Phi_4\|_{\infty} = 0 $ in probability. We consider the case where $H=\frac{1}{2}$. 
In this case, $B_t$ is a standard Brownian motion and we have \begin{align*} \boldsymbol E \left[ \Delta B_k \left( \frac{\Delta}{2}\Delta B_k - B^{10}_{\tau^m_{k-1}\tau^m_k} \right) \right] &= 0, & \boldsymbol E \left[ \left\{ \Delta B_k \left( \frac{\Delta}{2}\Delta B_k-B^{10}_{\tau^m_{k-1}\tau^m_k} \right) \right\}^2 \right] = \frac{\Delta^4}{3}. \end{align*} Since $X_t(\xi,B)$ is $\sigma(\{B_u~|~0\le u\le t\})$-adapted, by Doob's inequality, we have \begin{align*} \Delta^{-2} \boldsymbol E\left[\sup_{0\le t\le 1} |\Phi_4(t)|^2\right]\le C\Delta. \end{align*} This implies that for any $\delta<\frac{1}{2}$, \begin{align}\label{eq_201707241515} \lim_{m\to\infty}\Delta^{-1-\delta} \sup_{0\le t\le 1}|\Phi_4(t)|=0 \qquad\text{a.s. $\omega$}. \end{align} From the calculation above, \rref{rem_201708111136} and \pref{prop_201708091009}~(\ref{item_1506072032}), we obtain the conclusion. \end{proof} The next lemma is a corollary of \lref{lem_201708091528} and \pref{prop_201708091009}, which is used in the proof of \lref{lem_201708091518}. \begin{lemma}\label{lem_1506322555} Set \begin{align*} \Psi_{m,\delta} &= (2^m)^{3H-\frac{1}{2}-\delta} \max_{0\le t\le 1} \left|\sum_{i=1}^4\Phi_i(t)\right|. \end{align*} Then, for any $\delta>0$, $\lim_{m\to\infty}\Psi_{m,\delta}=0$ in probability. \end{lemma} \begin{proof} From \pref{prop_201708091009}~(\ref{item_1506071756}) and (\ref{item_1506071772}), we see that $ \sup_{0\le t\le 1} \left|\sum_i(2^m)^{3H-\frac{1}{2}}\Phi_i(t)\right| $ converges in law. Since the prefactor $(2^m)^{-\delta}$ tends to zero, we obtain $\lim_{m\to\infty}\Psi_{m,\delta}=0$ in probability. \end{proof} Next, we show \lref{lem_201708091518}. By using \lref[lem_201707121721]{lem_201708091528}, we obtain a representation of the main term of $\kappa_{k}$ in terms of $\Delta$, $\Delta B_k$, $B^{\boldsymbol{i}}_{\tau^m_{k-1}\tau^m_{k}}$ and $X_{\tau^m_{k-1}}$. We divide this calculation into two steps. In the first step, we have the following pathwise estimate.
We use just H\"older continuity of the path of $B$. \begin{lemma}\label{lem_201707121723} Let $\omega\in \Omega^{{\rm CN}(m)}$. For $k$ $(1\le k\le 2^m)$ and $x\in \mathbf{R}$, let \begin{align*} F_k(x,B) &= f_3(x)(\Delta B_k)^3 +f_4(x)(\Delta B_k)^4 +g_1(x) \left( \frac{\Delta}{2}\Delta B_k-B^{10}_{\tau^m_{k-1}\tau^m_k} \right)\\ &\phantom{=}\quad +\varphi(x)\Delta (\Delta B_k)^2 +\varphi_{011}(x)B^{011}_{\tau^m_{k-1}\tau^m_k} +\varphi_{101}(x)B^{101}_{\tau^m_{k-1}\tau^m_k} +\varphi_{110}(x)B^{110}_{\tau^m_{k-1}\tau^m_k},\\ G_k(x,B) &= - [g_1\sigma'](x) \Delta B_k \left(\frac{\Delta}{2}\Delta B_k-B^{10}_{\tau^m_{k-1}\tau^m_k}\right),\\ r_k &= \kappa_{k}-F_k(\xi_{k-1},B)-G_k(\xi_{k-1},B). \end{align*} Then it holds that $r_k=O(\Delta^{3H^{-}+1})+O(\Delta^{5H^{-}})$. \end{lemma} \begin{proof} By the Taylor formula, there exists $0<\rho<1$ such that \begin{align*} \hat{\kappa}_k &=\xi_k -X_{2^{-m}}(\xi_{k-1}, \theta_{\tau^m_{k-1}}B)\\ &= X_{2^{-m}}(\xi_{k-1},\theta_{\tau^m_{k-1}}B+2^m\kappa_{k} \ell) -X_{2^{-m}}(\xi_{k-1}, \theta_{\tau^m_{k-1}}B)\\ &= \nabla_{2^m\kappa_{k}\ell} X_{2^{-m}}(\xi_{k-1},\theta_{\tau^m_{k-1}}B) + \frac{1}{2}\nabla_{2^m\kappa_{k}\ell}^2X_{2^{-m}}(\xi_{k-1},\theta_{\tau^m_{k-1}}B +\rho 2^m\kappa_{k}\ell). \end{align*} Applying the estimate $\kappa_k=O(\Delta^{3H^{-}})$ and \pref{prop_20140929105014}~(\ref{item_20141228072212}), we see that the second term of the right-hand side is $O(\Delta^{6H^{-}})$. 
As for the first term, \pref{prop_20140929105014}~(\ref{item_20141106063948}), \lref{lem_201707121721}~(\ref{item_1505964312}) and \pref{prop_20141010094511} yield \begin{align*} \nabla_{2^m\kappa_{k}\ell} X_{2^{-m}}(\xi_{k-1},\theta_{\tau^m_{k-1}}B) &= \sigma\left(X_\Delta(\xi_{k-1},\theta_{\tau^m_{k-1}}B)\right) \int_0^{\Delta} \exp \left( \int_s^{\Delta} \left[\frac{\Wronskian}{\sigma}\right] \left(X_{u}(\xi_{k-1},\theta_{\tau^m_{k-1}}B)\right)\, du \right) \frac{\kappa_k}{\Delta}\, ds\\ &= \sigma(\xi_{k-1})\kappa_{k} + \left\{ \sigma\left(X_\Delta(\xi_{k-1},\theta_{\tau^m_{k-1}}B)\right) - \sigma(\xi_{k-1}) \right\} \kappa_{k} +O(\Delta^{3H^{-}+1})\\ &= \left\{ \sigma(\xi_{k-1}) +\sigma(\xi_{k-1})\sigma'(\xi_{k-1})\Delta B_k \right\} \kappa_{k} +O(\Delta^{5H^{-}}) +O(\Delta^{3H^{-}+1}). \end{align*} Hence we see that $\hat{\kappa}_{k}$ and $\kappa_{k}$ satisfy \begin{align*} \hat{\kappa}_{k} = \sigma(\xi_{k-1}) \left\{ 1+ \sigma'(\xi_{k-1})\Delta B_k \right\} \kappa_{k} +O(\Delta^{3H^{-}+1}) +O(\Delta^{5H^{-}}). \end{align*} Since $|\sigma'(\xi_{k-1})\Delta B_k|\le 1/2$ on $\Omega^{{\rm CN}(m)}$, we can solve this equation and using \lref{lem_201707121721}~(\ref{item_1505965509}), \begin{align*} \kappa_k &= \sigma(\xi_{k-1})^{-1}\{1-\sigma'(\xi_{k-1})\Delta B_k\}\hat{\kappa}_k +O(\Delta^{3H^{-}+1}) +O(\Delta^{5H^{-}})\\ &= F_k(\xi_{k-1},B)+G_k(\xi_{k-1},B)\\ &\phantom{=}\quad - [\sigma^{-1}\sigma'](\xi_{k-1}) \Delta B_k \left\{ \hat{\kappa}_k -\hat{f}_3(\xi_{k-1})(\Delta B_k)^3 -\hat{g}_1(\xi_{k-1})\left(\frac{\Delta}{2}\Delta B_k -B^{10}_{\tau^m_{k-1}\tau^m_k}\right) \right\}\\ &\phantom{=}\quad +O(\Delta^{3H^{-}+1}) +O(\Delta^{5H^{-}}). \end{align*} Since $ \hat{\kappa}_k -\hat{f}_3(\xi_{k-1})(\Delta B_k)^3 -\hat{g}_1(\xi_{k-1})\left(\frac{\Delta}{2}\Delta B_k -B^{10}_{\tau^m_{k-1}\tau^m_k}\right) = O(\Delta^{2H^{-}+1}) +O(\Delta^{4H^{-}}) $, we complete the proof. \end{proof} Now, we are in a position to prove \lref{lem_201708091518}. 
\begin{proof}[Proof of \lref{lem_201708091518}] Let $ \epsilon_m = \max_{1\le k\le 2^m} | X_{\tau^m_k}(\xi,B)-\bar{X}^{(m)}_{\tau^m_k}(\xi,B) | $. We proved that $\lim_{m\to\infty}(2^m)^{3H^{-}-1}\epsilon_m=0$ for $\omega\in \bigcup_m\Omega^{{\rm CN}(m)}$. Our first task is to improve this estimate as $\lim_{m\to\infty}(2^m)^{3H-1/2-\delta}\epsilon_m=0$ in probability for any $\delta>0$ by using $\lim_{m\to\infty}\Psi_{m,\delta}=0$ in probability (recall \lref{lem_1506322555}). To this end, let \begin{align*} \kappa_{k,1}&=F_k(X_{\tau^m_{k-1}},B)+G_k(X_{\tau^m_{k-1}},B),\\ \kappa_{k,2}&=F_k(\xi_{k-1},B)+G_k(\xi_{k-1},B)- \left(F_k(X_{\tau^m_{k-1}},B)+G_k(X_{\tau^m_{k-1}},B)\right), \end{align*} where $F_k$ and $G_k$ are the same functions as in \lref{lem_201707121723}. Then $\kappa_k=\kappa_{k,1}+\kappa_{k,2}+r_k$, $\tilde{\kappa}_k=F_k(X_{\tau^m_{k-1}},B)$ and $R_k=G_k(X_{\tau^m_{k-1}}, B)+\kappa_{k,2}+r_k$ hold. Here, $r_k$ is defined in \lref{lem_201707121723}. Let $h^{(m)}_i$ $(i=1,2)$ be piecewise linear paths which are defined by $\{\kappa_{k,i}\}$. We define $h^{(m)}_r$ similarly by $\{r_k\}$. Note that $\|h_1^{(m)}\|_{\infty}=O(\Delta^{3H-1/2-\delta})\Psi_{m,\delta}$ holds. By the Lipschitz continuity of $F_k$ and $G_k$ with respect to $x$-variable, we have \begin{align*} \|h^{(m)}_2\|_{\infty}\le \sum_{k=1}^{2^m}|\kappa_{k,2}|&\le K\epsilon_{m}, \qquad \omega\in \Omega^{{\rm CN}(m)} \end{align*} where $ K=O(\Delta^{3H^{-}-1}). $ By \lref{lem_201707121723}, we have \begin{align}\label{eq_2017071724} \|h^{(m)}_r\|_{\infty} \le \sum_{k=1}^{2^m} |r_k| = O(\Delta^{3H^{-}}) +O(\Delta^{5H^{-}-1}), \qquad \omega\in \Omega^{{\rm CN}(m)}. 
\end{align} By the Lipschitz continuity of $B\mapsto X(\xi,B)$ in the uniform norm, we have \begin{align} \epsilon_m &= \max_{1\le k\le 2^m} |X_{\tau^m_k}(\xi,B)-X_{\tau^m_k}(\xi,B+h^{(m)}_1+h^{(m)}_2+h^{(m)}_r)|\nonumber\\ &\le C\sum_{i=1}^3\|h^{(m)}_i\|_{\infty} = \tilde{K}\epsilon_m+\hat{K},\qquad \omega\in \Omega^{{\rm CN}(m)},\label{eq_201707121756} \end{align} where $\tilde{K}=CK=O(\Delta^{3H^{-}-1})$ and $\hat{K}=C(\|h^{(m)}_1\|_{\infty}+\|h^{(m)}_r\|_{\infty})$. By applying the inequality \eqref{eq_201707121756} $n$ times and using the rough estimate $\epsilon_m=O(\Delta^{3H^{-}-1})$, we get \begin{align*} \epsilon_m &\le \tilde{K}^nO(\Delta^{3H^{-}-1})+\hat{K}\left(1+\sum_{j=1}^{n-1} \tilde{K}^j\right). \end{align*} From this, we conclude that for $\omega\in \Omega^{{\rm CN}(m)}$, $\epsilon_m=\Psi_{m,\delta}(\omega)O(\Delta^{3H-1/2-\delta}) +O(\Delta^{3H^{-}})+O(\Delta^{5H^{-}-1})$ holds for any $\delta>0$. We now prove the estimate of the sum of $R_k$. Thanks to the improved estimate of $\epsilon_m$, we obtain for any $\delta>0$ \begin{align*} \sum_{k=0}^{2^m-1}|\kappa_{k,2}|=O(\Delta^{3H-1/2+3H^{-}-1-\delta}) \Psi_{m,\delta}(\omega)+O(\Delta^{6H^{-}-1})+O(\Delta^{8H^{-}-2}), \qquad \text{$\omega\in \Omega^{{\rm CN}(m)}$.} \end{align*} We already proved the necessary estimates for the sums of $r_k$ and $G_k(X_{\tau^m_{k-1}},B)$ in (\ref{eq_2017071724}), (\ref{eq_201707261039}) and (\ref{eq_201707241515}). This completes the proof. \end{proof} \section{The Euler scheme and the Milstein scheme}\label{sec_201707212051} In this section, we show \tref[thm_20141204013919]{thm_20141204015019}, which concern the Euler-Maruyama scheme and the Milstein scheme, respectively. Since the proofs are similar to that of \tref{thm_20141204015044}, we omit the details and only state the key lemmas. We denote by $\bar{X}^{(m)}$ the Euler scheme or the Milstein scheme and set $\xi_k=\bar{X}^{(m)}_{\dyadicPart[m]{k}}$.
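For orientation, the one-step maps of the two schemes for the equation $dX_t=b(X_t)\,dt+\sigma(X_t)\,dB_t$ can be sketched numerically as follows. This is only an illustration: the function names and the toy coefficients are ours, and the Milstein correction is written with the pathwise term $\frac{1}{2}\sigma\sigma'(\Delta B_k)^2$, consistent with the one-step error expansions of this section.

```python
def euler_step(x, b, sigma, dt, dB):
    # Euler-Maruyama update: x + b(x) dt + sigma(x) dB
    return x + b(x) * dt + sigma(x) * dB

def milstein_step(x, b, sigma, dsigma, dt, dB):
    # Milstein update: adds the pathwise correction (1/2) sigma sigma' (dB)^2
    return x + b(x) * dt + sigma(x) * dB + 0.5 * sigma(x) * dsigma(x) * dB**2

# toy coefficients (hypothetical example): b = 0, sigma(x) = x, so sigma' = 1
b = lambda x: 0.0
sigma = lambda x: x
dsigma = lambda x: 1.0

x0, dt, dB = 1.0, 2.0**-10, 0.1
print(euler_step(x0, b, sigma, dt, dB))             # 1.1
print(milstein_step(x0, b, sigma, dsigma, dt, dB))  # approximately 1.105
```

Iterating such a one-step map over the dyadic partition $\tau^m_k=k2^{-m}$ with the increments $\Delta B_k$ of the driving path produces the approximations $\xi_k$ considered in this section.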
Note that \lref{lem_201707240936} holds for both the Euler scheme and the Milstein scheme. We denote by $h^{(m)}$ the piecewise linear function which appears in \lref{lem_201707240936} and we write $\kappa_k=h^{(m)}(\dyadicPart[m]{k})-h^{(m)}(\dyadicPart[m]{k-1})$ for every $1\le k\le 2^m$. Because the analysis of the one-step error $ \hat{\kappa}_k = \{ \xi_{k}-\xi_{k-1} \} - \{ X_{2^{-m}}(\xi_{k-1},\theta_{\tau^m_{k-1}}B) - \xi_{k-1} \} $ of the scheme and of the main term $\tilde{\kappa}_k$ is essential in the proof, we state assertions on them; that is, we give counterparts of \lrefs[lem_201707121721]{lem_201708091518}{lem_201708091528}. \subsection{The Euler scheme} In this subsection, we assume $1/2<H<1$ and show \tref{thm_20141204013919}. To state the assertions, we set $\hat{f}_2=-\sigma\sigma'/2$ and $f_2=-\sigma'/2$. Then the following lemmas hold: \begin{lemma}\label{lem_1506481850} For any $\omega\in\Omega_0$, the following hold: \begin{enumerate} \item We have $\hat{\kappa}_k=\hat{f}_2(\xi_{k-1})(\Delta B_k)^2+O(\Delta^{H^{-}+1})$. \item We have $\hat{\kappa}_k=O(\Delta^{2H^{-}})$, $\kappa_k=O(\Delta^{2H^{-}})$ and \begin{align*} \max_{1\le k\le 2^m} | X_{\tau^m_k}(\xi,B) - \bar{X}^{(m)}_{\tau^m_k}(\xi,B) | = O(\Delta^{2H^{-}-1}). \end{align*} \item We have \begin{align*} \max_{0\leq t\leq 1} |\bar{X}^{(m)}_t(\xi,B)-X_t(\xi,B+h^{(m)})| = O(\Delta^{2H^{-}}). \end{align*} \end{enumerate} \end{lemma} \begin{lemma}\label{lem_1506481781} For $1\le k\le 2^m$, let \begin{align*} \tilde{\kappa}_{k} &= f_2(X_{\dyadicPart[m]{k}})(\Delta B_k)^2 \end{align*} and set $R_k(\omega)=\kappa_k-\tilde{\kappa}_k$. Then $R_k=O(\Delta^{4H^{-}-1})+O(\Delta^{H^{-}+1})$. \end{lemma} \begin{lemma}\label{lem_1506482751} Let \begin{align*} \Phi_1(t) &= \sum_{k=1}^{\intPart{2^mt}} f_2(X_{\tau^m_{k-1}})(\Delta B_k)^2.
\end{align*} Then, $ \left( B, 2^{m(2H-1)} \Phi_1 \right) $ converges to $ \left( B, \int_0^\cdot f_2(X_u)\, du \right) $ in $D([0,1];\mathbf{R}^2)$ with respect to the Skorokhod topology in probability. \end{lemma} Here we comment on the proofs of the lemmas above: \begin{itemize} \item \lref{lem_1506481850} is proved in the same way as \lref{lem_201707121721}. \item \lref{lem_1506481781} follows from the equality $\hat{\kappa}_k=\sigma(\xi_{k-1})\kappa_k+O(\Delta^{3H^{-}})$ and \lref{lem_1506481850} (note that we do not use \lref{lem_1506482751}). \item \lref{lem_1506482751} is a direct consequence of \tref{thm_20141203094309}. \end{itemize} Combining the lemmas, we obtain \tref{thm_20141204013919}. \subsection{The Milstein scheme} In this subsection, we assume $1/3<H\leq 1/2$ and show \tref{thm_20141204015019}. We set \begin{align*} \hat{f}_3 &= - \frac{1}{3!} \sigma(\sigma\sigma')', & \hat{f}_4 &= - \frac{1}{4!} \sigma(\sigma(\sigma\sigma')')',\\ f_3 &= - \frac{1}{3!} (\sigma\sigma')', & f_4 &= - \frac{1}{4!} [ \sigma^2\sigma'''-3(\sigma')^3 ], & f_4^\dagger &= \frac{1}{4!} [ \sigma^2\sigma''' +6\sigma\sigma'\sigma'' +3(\sigma')^3 ]. \end{align*} Note that $f_4=(\hat{f}_4-\sigma'\hat{f}_3)/\sigma$ and $f_4^\dagger=f_4-\sigma f_3'/2$. We set $\varphi=0$ and use the functions $\hat{g}_1$, $g_1$, $\hat{\varphi}_{\boldsymbol{i}}$, $\varphi_{\boldsymbol{i}}$ $(\boldsymbol{i}=011,101,110)$ introduced in \secref{sec_1506485494}. We define processes $\Phi_1,\dots,\Phi_4$ by \eqref{eq_1506569861} with the functions above. Then the following lemmas hold: \begin{lemma} For any $\omega\in\Omega$, the following hold.
\begin{enumerate} \item We have \begin{align*} \hat{\kappa}_{k} &= \hat{f}_3(\xi_{k-1})(\Delta B_k)^3 +\hat{f}_4(\xi_{k-1})(\Delta B_k)^4 + \hat{g}_1(\xi_{k-1}) \left( \frac{\Delta}{2}\Delta B_k-B^{10}_{\tau^m_{k-1}\tau^m_k} \right)\\ &\quad\quad + \hat{\varphi}_{011}(\xi_{k-1}) B^{011}_{\tau^m_{k-1}\tau^m_k} + \hat{\varphi}_{101}(\xi_{k-1}) B^{101}_{\tau^m_{k-1}\tau^m_k} + \hat{\varphi}_{110}(\xi_{k-1}) B^{110}_{\tau^m_{k-1}\tau^m_k}\\ &\quad\quad +O(\Delta^{5H^{-}}) +O(\Delta^{3H^{-}+1}) +O(\Delta^{H^{-}+2}). \end{align*} \item We have $\hat{\kappa}_k=O(\Delta^{3H^{-}})$, $\kappa_k=O(\Delta^{3H^{-}})$ and \begin{align*} \max_{1\le k\le 2^m} | X_{\tau^m_k}(\xi,B) - \bar{X}^{(m)}_{\tau^m_k}(\xi,B) | = O(\Delta^{3H^{-}-1}). \end{align*} \item We have \begin{align*} \max_{0\leq t\leq 1} |\bar{X}^{(m)}_t(\xi,B)-X_t(\xi,B+h^{(m)})| = O(\Delta^{3H^{-}}). \end{align*} \end{enumerate} \end{lemma} \begin{lemma} Let \begin{align*} \tilde{\kappa}_{k} &= f_3(X_{\tau^m_{k-1}})(\Delta B_k)^3 +f_4(X_{\tau^m_{k-1}})(\Delta B_k)^4 + g_1(X_{\tau^m_{k-1}}) \left(\frac{\Delta}{2}\Delta B_k-B^{10}_{\tau^m_{k-1}\tau^m_k}\right)\\ &\quad +\varphi_{011}(X_{\tau^m_{k-1}})B^{011}_{\tau^m_{k-1}\tau^m_k} +\varphi_{101}(X_{\tau^m_{k-1}})B^{101}_{\tau^m_{k-1}\tau^m_k} +\varphi_{110}(X_{\tau^m_{k-1}})B^{110}_{\tau^m_{k-1}\tau^m_k} \end{align*} and set $R_k(\omega)=\kappa_k-\tilde{\kappa}_k$. Then there exists $\delta>0$ such that $ \lim_{m\to\infty} (2^m)^{4H-1+\delta} \max_{1\le k\le 2^{m}} |\sum_{i=1}^{k}R_i| = 0 $ in probability. \end{lemma} \begin{lemma}\label{lem_1506499391} The following hold: \begin{enumerate} \item Let $\frac{1}{3}<H<\frac{1}{2}$. Then $ \left( B, (2^m)^{4H-1} (\Phi_1,\Phi_2,\Phi_3,\Phi_4) \right) $ converges to $ \left( B, 3\int_0^{\cdot}f_4^\dagger(X_s)\, ds, 0, 0, 0 \right)$ in $D([0,1];\mathbf{R}^4)$ with respect to the Skorokhod topology in probability. \item Let $H=\frac{1}{2}$. 
Then $ \left( B, 2^m (\Phi_1,\Phi_2,\Phi_3,\Phi_4) \right) $ converges weakly to \begin{multline*} \Biggl( B, \sqrt{6}\int_0^{\cdot}f_3(X_s)\,dW_s +3\int_0^{\cdot}f_3(X_s)\circ dB_s +3\int_0^{\cdot}f_4^\dagger(X_s)\, ds,\\ \frac{1}{\sqrt{12}}\int_0^{\cdot}g_1(X_s)d\tilde{W}_s, \frac{1}{4} \int_0^{\cdot} \left\{\varphi_{011}(X_s)+\varphi_{110}(X_s)\right\}\,ds, 0 \Biggr) \end{multline*} in $D([0,1];\mathbf{R}^4)$ with respect to the Skorokhod topology. \end{enumerate} \end{lemma} Note that in proof \lref{lem_1506499391} we used the decomposition \begin{align*} f_3(X_{\tau^m_{k-1}}) + f_4(X_{\tau^m_{k-1}})\Delta B_k &= \left\{f_3(X_{\tau^m_{k-1}})+\frac{1}{2}f_3'\sigma(X_{\tau^m_k})\Delta B_k\right\} +f_4^\dagger(X_{\tau^m_k})\Delta B_k\\ &= \frac{f_3(X_{\tau^m_{k-1}})+f_3(X_{\tau^m_k})}{2} +O(\Delta^{2H^-}) +O(\Delta) +f_4^\dagger(X_{\tau^m_k})\Delta B_k \end{align*} and apply \trefs{thm_20141203094309}{thm_20141117071529}{thm_20141123062022}. \appendix \section{Gaussian analysis and Malliavin calculus}\label{sec_20150623014415} We summarize basic results on Gaussian analysis and Malliavin calculus which we use to estimate some terms of error. For details, see \cite{Nualart2006}. Let $(\Omega,\mathcal{F},\boldsymbol P)$ be the canonical probability space for a one-dimensional centered continuous Gaussian process $X=\{X_t\}_{0\leq t\leq 1}$ with the covariance $\boldsymbol E[X_sX_t]=R(s,t)$, that is, $\Omega$ is the Banach space of continuous functions from $[0,1]$ to $\mathbf{R}$ starting at zero with the uniform norm $\|\cdot\|_\infty$, $\mathcal{F}$ the $\sigma$-field generated by the cylindrical subsets of $\Omega$, and $\boldsymbol P$ a probability measure on $\Omega$ such that the canonical process $X(\omega)=\omega$, $\omega\in\Omega$, is the Gaussian process. We construct an abstract Wiener space $(\Omega,\mathfrak{H},\boldsymbol P)$ and an isonormal Gaussian process $\{X(h)\}_{h\in\mathfrak{H}}$. 
The Hilbert space $\mathfrak{H}$ with the norm $\|\cdot\|_\mathfrak{H}$ and the inner product $\innerProd[\mathfrak{H}]{\cdot}{\ast}$ is defined as follows: set $ [\mathscr{R}\indicator{[0,t)}](\cdot) = R(t,\cdot) = \boldsymbol E[X_tX_\cdot] $ and let $\mathfrak{H}_0$ be the linear span of the functions $\mathscr{R}\indicator{[0,t)}$ and $\mathfrak{H}$ the Hilbert space defined as the closure of $\mathfrak{H}_0$ with respect to the inner product $ \innerProd[\mathfrak{H}] {\mathscr{R}\indicator{[0,s)}} {\mathscr{R}\indicator{[0,t)}} = \boldsymbol E[X_sX_t] $. We call the Hilbert space $\mathfrak{H}$ the Cameron-Martin subspace. Note that the map $ \mathfrak{H}_0\ni \mathscr{R}\indicator{[0,t)} \mapsto X(\indicator{[0,t)})\in\LebSp{2}{\Omega}{\mathbf{R}} $ is an isometry. Hence if $\{h_n\}_{n=1}^\infty\subset\mathfrak{H}_0$ converges to $h\in\mathfrak{H}$, then $\{X(h_n)\}_{n=1}^\infty$ converges to some element $X(h)\in\LebSp{2}{\Omega}{\mathbf{R}}$. Thus we obtain the isonormal Gaussian process $\{X(h)\}_{h\in\mathfrak{H}}$. Next, we define the $q$-th Wiener integral $\WienerInt{q}$, which is a map from the symmetric space $\mathfrak{H}^{\odot q}$ to the $q$-th Wiener chaos $\WienerChaos{q}$ for $q\in\mathbf{N}$. In order to define $\mathfrak{H}^{\odot q}$, $\WienerChaos{q}$ and $\WienerInt{q}$, we denote by $\Lambda$ the set of sequences $\lambda=(\lambda_1,\dots)\in(\mathbf{N}\cup\{0\})^\infty$ such that all the elements vanish except a finite number of them and set $\lambda!=\prod_{n=1}^\infty \lambda_n!$ for $\lambda\in\Lambda$. We take an orthonormal basis $\{e_n\}_{n=1}^\infty$ of $\mathfrak{H}$. We denote by $\otimes$ the tensor product and by $\mathfrak{H}^{\otimes q}$ the tensor product space for $q\geq 2$. For $q=0,1$, we set $\mathfrak{H}^{\otimes 0}=\mathbf{R}$ and $\mathfrak{H}^{\otimes 1}=\mathfrak{H}$ by convention.
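As a numerical illustration of this construction (a sketch under the assumption that $X=B$ is fBm with the standard covariance $R(s,t)=\frac{1}{2}(s^{2H}+t^{2H}-|t-s|^{2H})$; the variable names are ours), one can take a finite linear combination $h=\sum_i a_i\,\mathscr{R}\indicator{[0,t_i)}\in\mathfrak{H}_0$, compute $\|h\|_{\mathfrak{H}}^2$ from the Gram matrix of covariances, and check the H\"older bound $|h_t-h_s|\le\|h\|_{\mathfrak{H}}(t-s)^H$ used later in this appendix:

```python
import numpy as np

H = 0.4  # Hurst index; any 0 < H < 1 works for this check

def R(s, t):
    # fBm covariance E[B_s B_t] (assumed standard form)
    return 0.5 * (s**(2 * H) + t**(2 * H) - abs(t - s)**(2 * H))

# h = sum_i a_i * [R 1_{[0,t_i)}] lies in H_0, so h(t) = sum_i a_i R(t_i, t)
# and ||h||_H^2 = a^T G a, with G the Gram matrix G_ij = R(t_i, t_j)
ts = np.array([0.25, 0.5, 1.0])
a = np.array([1.0, -2.0, 0.5])
G = np.array([[R(s, t) for t in ts] for s in ts])
h = lambda t: sum(a[i] * R(ts[i], t) for i in range(len(ts)))
norm_h = np.sqrt(a @ G @ a)

# Cauchy-Schwarz gives |h(t) - h(s)| <= ||h||_H (t - s)^H; check it on a grid
grid = np.linspace(0.0, 1.0, 101)
ratio = max(abs(h(t) - h(s)) / (t - s)**H
            for s in grid for t in grid if t > s)
print(ratio <= norm_h + 1e-12)  # True
```

The check works because $h_t=\boldsymbol E[ZB_t]$ with $Z=\sum_i a_iB_{t_i}$ and $\boldsymbol E[Z^2]=a^{\mathsf T}Ga$, so the bound is exactly the Cauchy-Schwarz inequality.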
We define the symmetrization $\tilde{h}\in\mathfrak{H}^{\otimes q}$ for $h\in\mathfrak{H}^{\otimes q}$ as follows: if $h$ has the form $h=h_1\otimes\cdots\otimes h_q$ for $h_r\in\mathfrak{H}$, we set \begin{align*} (h_1\otimes\cdots\otimes h_q)^\sim = \frac{1}{q!} \sum_{\sigma\in\SymmetricGroup{q}} h_{\sigma(1)}\otimes\cdots\otimes h_{\sigma(q)}, \end{align*} where $\SymmetricGroup{q}$ is the symmetric group on $\{1,\dots,q\}$; we also define the symmetrization for general elements in $\mathfrak{H}^{\otimes q}$ by linearity. For notational simplicity, we set $ h_1\odot\cdots\odot h_q = (h_1\otimes\cdots\otimes h_q)^\sim $. An element $h\in\mathfrak{H}^{\otimes q}$ is said to be symmetric if $\tilde{h}=h$. We denote by $\mathfrak{H}^{\odot q}$ the set of symmetric elements of $\mathfrak{H}^{\otimes q}$. The space $\mathfrak{H}^{\odot q}$ forms a Hilbert space with respect to the scaled norm $\sqrt{q!}\|\cdot\|_{\mathfrak{H}^{\otimes q}}$. For $\lambda\in\Lambda$, set \begin{align*} e^{\lambda} = \frac{1}{\sqrt{\lambda!}} e_1^{\odot \lambda_1}\odot e_2^{\odot \lambda_2}\odot\cdots. \end{align*} Then, $\{e^{\lambda};|\lambda|=q,\lambda\in\Lambda\}$ is an orthonormal basis of $\mathfrak{H}^{\odot q}$. As we introduced in \secref{sec_20150519021523}, $\hermitePoly{q}$ denotes the $q$-th Hermite polynomial. The $q$-th Wiener chaos $\WienerChaos{q}$ is defined as the closed subspace spanned by $\{\hermitePoly{q}(X(h));h\in\mathfrak{H},\|h\|_\mathfrak{H}=1\}$ in $\LebSp{2}{\Omega}{\mathbf{R}}$. For $\lambda\in\Lambda$, set \begin{align*} \WienerHermitePoly{\lambda} = \frac{1}{\sqrt{\lambda!}} \prod_{n=1}^{\infty} \hermitePoly{\lambda_n}(X(e_n)). \end{align*} Then, $\{\WienerHermitePoly{\lambda};|\lambda|=q,\lambda\in\Lambda\}$ is an orthonormal basis of $\WienerChaos{q}$. The $q$-th Wiener integral $\WienerInt{q}$ is defined by $ \WienerInt{q}(e^\lambda)=\WienerHermitePoly{\lambda} $ and is extended by linearity.
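The Hermite polynomials in this (probabilists') normalization can be generated by the three-term recurrence $\hermitePoly{n+1}(x)=x\hermitePoly{n}(x)-n\hermitePoly{n-1}(x)$; a minimal sketch (the function name is ours), which in particular recovers $\hermitePoly{3}(x)=x^3-3x$, the polynomial appearing in the weighted sums of the previous sections:

```python
def hermite(q, x):
    # probabilists' Hermite polynomials via the three-term recurrence
    # H_0 = 1, H_1 = x, H_{n+1}(x) = x H_n(x) - n H_{n-1}(x)
    h_prev, h = 1.0, x
    if q == 0:
        return h_prev
    for n in range(1, q):
        h_prev, h = h, x * h - n * h_prev
    return h

# H_2(x) = x^2 - 1 and H_3(x) = x^3 - 3x
x = 0.7
print(abs(hermite(2, x) - (x**2 - 1)) < 1e-12)      # True
print(abs(hermite(3, x) - (x**3 - 3 * x)) < 1e-12)  # True
```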
The mapping $\WienerInt{q}:\mathfrak{H}^{\odot q}\to\WienerChaos{q}$ provides a real linear isometry between $\mathfrak{H}^{\odot q}$ and $\WienerChaos{q}$. Finally, we summarize results on Malliavin calculus. Let $\mathcal{S}$ be the totality of all smooth functionals which have the form $F=f(X(h_1),\dots,X(h_\alpha))$, where $h_\beta\in\mathfrak{H}$ and $f\in\SmoothFunc[poly]{\infty}{\mathbf{R}^\alpha}{\mathbf{R}}$. The Malliavin derivative $DF$ of $F\in\mathcal{S}$ is an $\mathfrak{H}$-valued random variable and is defined by \begin{align*} DF = \sum_{\beta=1}^\alpha \frac{\partial f}{\partial\xi_\beta}(X(h_1),\dots,X(h_\alpha)) h_\beta. \end{align*} By iteration, one can define the $n$-th derivative $D^nF$, which is an $\mathfrak{H}^{\odot n}$-valued random variable, by \begin{align*} D^nF = \sum_{\beta_1,\dots,\beta_n=1}^\alpha \frac{\partial^n f}{\partial\xi_{\beta_1}\cdots\partial\xi_{\beta_n}} (X(h_1),\dots,X(h_\alpha)) h_{\beta_1}\otimes\dots\otimes h_{\beta_n}. \end{align*} As usual, for $n\in\mathbf{N}$ and $1<p<\infty$, we define the Sobolev space $\SobSp{n}{p}{\Omega}{\mathbf{R}}$ as the completion of $\mathcal{S}$ with respect to the norm \begin{align*} \|F\|_{\SobSp{n}{p}{\Omega}{\mathbf{R}}}^p = \sum_{k=0}^n \boldsymbol E[\|D^k F\|_{\mathfrak{H}^{\odot k}}^p]. \end{align*} We set $\SobSp{n}{\infty-}{\Omega}{\mathbf{R}}=\bigcap_{1<p<\infty}\SobSp{n}{p}{\Omega}{\mathbf{R}}$. Since the derivative operator $D$ is a continuous operator from $\SobSp{1}{2}{\Omega}{\mathbf{R}}$ to $\LebSp{2}{\Omega}{\mathfrak{H}}$, there exists its adjoint operator $\delta$, which is called the divergence operator or the Skorokhod integral. Notice that the duality relationship \begin{align*} \boldsymbol E[F\delta(u)]=\boldsymbol E[\innerProd[\mathfrak{H}]{DF}{u}] \end{align*} holds for any $F\in\SobSp{1}{2}{\Omega}{\mathbf{R}}$ and $u$ belonging to the domain of $\delta$.
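In the simplest situation $F=f(X(h))$ and $u=h$ with $\|h\|_{\mathfrak{H}}=1$, the duality relationship reduces to the Gaussian integration-by-parts formula $\boldsymbol E[f(Z)Z]=\boldsymbol E[f'(Z)]$ for $Z\sim N(0,1)$, which can be checked numerically by Gauss-Hermite quadrature (a sketch; the test function $f$ is our choice):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# probabilists' Gauss-Hermite quadrature: nodes/weights for weight exp(-x^2/2)
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2.0 * np.pi)  # normalize so the weights sum to 1

def E(g):
    # expectation E[g(Z)] for Z standard normal, via quadrature
    return float(np.sum(weights * g(nodes)))

# duality E[F delta(u)] = E[<DF, u>] in the scalar case F = f(X(h)), u = h:
# here delta(h) = X(h) = Z and <DF, h> = f'(Z) ||h||_H^2 = f'(Z)
f  = lambda z: z**3
df = lambda z: 3.0 * z**2
lhs = E(lambda z: f(z) * z)   # E[F delta(h)] = E[Z^4] = 3
rhs = E(df)                   # E[<DF, h>]   = 3 E[Z^2] = 3
print(abs(lhs - rhs) < 1e-8)  # True
```

The quadrature with 40 nodes is exact for polynomials of degree up to 79, so both sides equal 3 up to rounding.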
By the iteration, we see that there exists an operator $\delta^n$ such that \begin{align}\label{eq_duality_relationship} \boldsymbol E[F\delta^n(u)]=\boldsymbol E[\innerProd[\mathfrak{H}^{\otimes n}]{D^nF}{u}] \end{align} for any $F\in\SobSp{n}{2}{\Omega}{\mathbf{R}}$ and $u$ belonging to the domain of $\delta^n$. Notice that $h\in\mathfrak{H}^{\odot q}$ belongs to the domain of $\delta^q$ and $\delta^q(h)=\WienerInt{q}(h)$. From the It\^{o}-Wiener expansion and the Stroock formula, we obtain the product formula: \begin{align}\label{eq_product_formula} \WienerInt{p}(h^{\odot p})\WienerInt{q}(k^{\odot q}) = \sum_{r=0}^{p\wedge q} r! \binom{p}{r} \binom{q}{r} (h,k)_\mathfrak{H}^r \WienerInt{p+q-2r}(h^{\odot p-r}\odot k^{\odot q-r}) \end{align} for every $h,k\in\mathfrak{H}$. In what follows, we assume that fBm $B$ is defined on the canonical probability space $(\Omega,\mathcal{F},\boldsymbol P)$, that is, $B(\omega)=\omega$ for $\omega\in\Omega$ is fBm under the probability measure $\boldsymbol P$. In this setting, we can apply Gaussian analysis and Malliavin calculus to fBm. In particular, since $h\in\mathfrak{H}$ is given by $h_t=\boldsymbol E[ZB_t]$ for some square-integrable random variable $Z$, we see $ |h_t-h_s| \leq \boldsymbol E[Z^2]^{1/2} \boldsymbol E[(B_t-B_s)^2]^{1/2} = \boldsymbol E[Z^2]^{1/2} (t-s)^H $, which implies $ \mathfrak{H} \subset \HolFunc[0]{H}{[0,1]}{\mathbf{R}} \subset \HolFunc[0]{H-\epsilon}{[0,1]}{\mathbf{R}} $. From \pref{prop_20140929105014} and the inclusion $\mathfrak{H}\subset\HolFunc[0]{H-\epsilon}{[0,1]}{\mathbf{R}}$, the functional $\omega\mapsto X_t(\omega)$ is Fr\'{e}chet differentiable in $\mathfrak{H}$ and the derivative is integrable. Hence we see that $X_t$ is Malliavin differentiable and have $ \innerProd[\mathfrak{H}]{D X_t}{h} = \nabla_h X_t $ for any $h\in\mathfrak{H}$. More precisely, we obtain the following proposition. 
\begin{proposition}\label{prop_20150520004520} Let $b,\sigma\in\SmoothFunc[bdd]{n+1}{\mathbf{R}}{\mathbf{R}}$ for $n\geq 1$. Assume that \hyporef{hypo_ellipticity} is satisfied. Then $X_t\in\SobSp{n}{\infty-}{\Omega}{\mathbf{R}}$ and \begin{align*} |\innerProd[\mathfrak{H}^{\odot \nu}]{D^\nu X_t}{h^1\odot\cdots\odot h^\nu}| \leq \const[\nu] \|h^1\|_\infty \cdots \|h^\nu\|_\infty, \end{align*} for any $h^1,\dots,h^\nu\in\mathfrak{H}$ and $1\leq \nu\leq n$. Here $\const[\nu]$ is a positive constant depending only on $b,\sigma$ and $\nu$. \end{proposition} In what follows, we set \begin{align*} \delta_{st} &= \mathscr{R}\indicator{[s,t)},& \zeta_{st} &= \mathscr{R} \left[ \frac{1}{2} (t-s) \indicator{[s,t)} - \int_s^t \indicator{[s,v)}\, dv \right] \end{align*} for $0\leq s<t\leq 1$. Note \begin{gather*} \hermitePoly{q} ( 2^{mH} B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} ) = \WienerInt{q} ((2^{mH} \delta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}})^{\odot q}) = 2^{mqH} \WienerInt{q} (\delta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}^{\odot q}),\\ 2^{m(H+1)} \left( \frac{1}{2\cdot 2^m} B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} - \int_{\dyadicPart[m]{k-1}}^{\dyadicPart[m]{k}} B_{\dyadicPart[m]{k-1}u}\, du \right) = 2^{m(H+1)} \WienerInt{1}(\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}). \end{gather*} The functions $\delta_{st}$ and $\zeta_{st}$ are bounded as follows: \begin{proposition}\label{prop_20150520010300} For any $0\leq s<t\leq 1$, we have \begin{gather*} \|\delta_{st}\|_\infty \leq \begin{cases} (t-s)^{2H},&0<H< 1/2,\\ 2H (t-s),&1/2\leq H<1,\\ \end{cases}\\ \|\zeta_{st}\|_\infty \leq \begin{cases} \displaystyle { \left( \frac{1}{2} + \frac{1}{2H+1} \right) (t-s)^{2H+1}, } & 0<H< 1/2,\\ 2H (t-s)^2, & 1/2\leq H<1.\\ \end{cases} \end{gather*} \end{proposition} \begin{proof} Note \begin{align*} |\boldsymbol E[(B_t-B_s)B_u]| \leq \begin{cases} (t-s)^{2H},&0<H< 1/2,\\ 2H (t-s),&1/2\leq H<1,\\ \end{cases} \end{align*} for any $0\leq s<t\leq 1$ and $0\leq u\leq 1$.
This estimate can be found in \cite[Lemmas~5 and 6]{NourdinNualartTudor2010}. The first assertion follows from this estimate and the identification $ \delta_{st}(u) = [\mathscr{R}\indicator{[s,t)}](u) = \boldsymbol E[(B_t-B_s)B_u] $. The second one follows from the expression \begin{align*} \zeta_{st}(u) = \frac{1}{2} (t-s) \boldsymbol E[(B_t-B_s)B_u] - \int_s^t \boldsymbol E[(B_v-B_s)B_u]\, dv. \end{align*} The proof is completed. \end{proof} \section{Proof of \pref{prop_20141102222020}}\label{sec_20150107074711} In this section, we prove \pref{prop_20141102222020}. The result on the convergence of $(B,\hermiteVar{q}{m})$ can be found in \cite{NourdinNualartTudor2010}. The main contribution of this section is the proof of the convergence of $\TRVar{m}$. Throughout this section, we use the following notation: \begin{gather*} a_{k,l} = \boldsymbol E \left[ \left( \frac{1}{2} B_{k-1,k} - \int_{k-1}^k B_{k-1,u}\, du \right) \left( \frac{1}{2} B_{l-1,l} - \int_{l-1}^l B_{l-1,v}\, dv \right) \right],\\ a^\dagger_{k,l} = \boldsymbol E \left[ B_{k-1,k} \left( \frac{1}{2} B_{l-1,l} - \int_{l-1}^l B_{l-1,u}\, du \right) \right] \end{gather*} for $k,l\geq 1$. It follows from the stationarity of the increments of fBm that \begin{align} \label{eq_20150518052410} a_{k,l}=a_{1,l-k+1},\\ \label{eq_20150518053842} a^\dagger_{k,l}=a^\dagger_{1,l-k+1} \end{align} for $1\leq k\leq l$. For the same reason, we have \begin{align}\label{eq_20150519005630} a_{k,k} = a_{1,1} = \frac{1}{4} \frac{1-H}{1+H}. \end{align} \subsection{Key estimates} Before starting to prove \pref{prop_20141102222020}, we show the next three propositions: \begin{proposition}\label{prop_20150518041212} It holds that \begin{align*} |a_{k,l}| \leq \const \begin{cases} |k-l|^{2H-4},& |k-l|\geq 1,\\ 1,& |k-l|=0 \end{cases} \end{align*} for any $k$ and $l$.
\end{proposition} \begin{proposition}\label{prop_20141122044947} It holds that \begin{align*} |a^\dagger_{k,l}| \leq \const \begin{cases} |k-l|^{2H-3},& |k-l|\geq 1,\\ 1,& |k-l|=0, \end{cases} \end{align*} for any $k,l\geq 1$. \end{proposition} \begin{proposition}\label{prop_20141122044103} It holds that $ a^\dagger_{k,l}+a^\dagger_{l,k} = 0 $ for any $k,l\geq 1$. \end{proposition} The following is a key lemma to prove \prefs{prop_20150518041212}{prop_20141122044947}: \begin{lemma}\label{lem_20141120004521} It holds that \begin{multline*} \boldsymbol E [ (B_{x+k-1}-B_{s+k-1}) (B_{y+l-1}-B_{t+l-1}) ]\\ = \frac{1}{2} |k-l|^{2H} \left\{ \binom{2H}{2} \frac{b_2(x,s,y,t)}{(k-l)^2} + \binom{2H}{3} \frac{b_3(x,s,y,t)}{(k-l)^3} + R(k-l;x,s,y,t) \right\} \end{multline*} for any $0\leq x,s,y,t\leq 1$ and $k,l\in\mathbf{N}$ with $|k-l|\geq 2$. Here \begin{align*} b_2(x,s,y,t) &= 2(xy-xt-sy+st),\\ b_3(x,s,y,t) &= 3(x^2y-xy^2-x^2t+xt^2-s^2y+sy^2+s^2t-st^2) \end{align*} and $R$ satisfies $|R(k-l;x,s,y,t)|\leq \const|k-l|^{-4}$ for some positive constant $\const$. \end{lemma} \begin{proof} From \eqref{eq_20150219103359}, we have \begin{multline*} \boldsymbol E [ (B_{x+k-1}-B_{s+k-1}) (B_{y+l-1}-B_{t+l-1}) ]\\ \begin{aligned} &= \frac{1}{2} \left\{ -|x-y+k-l|^{2H} +|x-t+k-l|^{2H} +|s-y+k-l|^{2H} -|s-t+k-l|^{2H} \right\}\\ &= \frac{1}{2} |k-l|^{2H} \left\{ - \left| 1+\frac{x-y}{k-l} \right|^{2H} + \left| 1+\frac{x-t}{k-l} \right|^{2H} + \left| 1+\frac{s-y}{k-l} \right|^{2H} - \left| 1+\frac{s-t}{k-l} \right|^{2H} \right\}. 
\end{aligned} \end{multline*} Applying the binomial theorem, we obtain \begin{multline*} \boldsymbol E [ (B_{x+k-1}-B_{s+k-1}) (B_{y+l-1}-B_{t+l-1}) ]\\ = \frac{1}{2} |k-l|^{2H} \left\{ \sum_{\nu=0}^3 \binom{2H}{\nu} a_\nu \left( \frac{x-y}{k-l},\frac{x-t}{k-l},\frac{s-y}{k-l},\frac{s-t}{k-l} \right) + R(k-l;x,s,y,t) \right\}, \end{multline*} where $ a_\nu(z_1,z_2,z_3,z_4)=-z_1^\nu+z_2^\nu+z_3^\nu-z_4^\nu $ and $R$ is defined by \begin{align*} R(k-l;x,s,y,t) = \left\{ - r_3 \left( \frac{x-y}{k-l} \right) + r_3 \left( \frac{x-t}{k-l} \right) + r_3 \left( \frac{s-y}{k-l} \right) - r_3 \left( \frac{s-t}{k-l} \right) \right\} \end{align*} with the remainder term $r_3$. Note $ |r_3(\xi)| \leq \const |\xi|^4 $. Expanding the polynomials $a_\nu$, we see \begin{gather*} \begin{aligned} a_0 \left( \frac{x-y}{k-l},\frac{x-t}{k-l},\frac{s-y}{k-l},\frac{s-t}{k-l} \right) &= 0, & a_1 \left( \frac{x-y}{k-l},\frac{x-t}{k-l},\frac{s-y}{k-l},\frac{s-t}{k-l} \right) &= 0, \end{aligned}\\ \begin{aligned} a_2 \left( \frac{x-y}{k-l},\frac{x-t}{k-l},\frac{s-y}{k-l},\frac{s-t}{k-l} \right) &= \frac{1}{(k-l)^2} \cdot b_2(x,s,y,t), \end{aligned}\\ \begin{aligned} a_3 \left( \frac{x-y}{k-l},\frac{x-t}{k-l},\frac{s-y}{k-l},\frac{s-t}{k-l} \right) = \frac{1}{(k-l)^3} \cdot b_3(x,s,y,t). \end{aligned} \end{gather*} The proof is completed. \end{proof} \begin{proof}[Proof of \pref{prop_20150518041212}] The assertion for $|k-l|=0,1$ follows from the H\"{o}lder inequality and \eqref{eq_20150519005630}. We prove the assertion for $|k-l|\geq 2$. Note \begin{align*} \frac{1}{2} B_{k-1,k} - \int_{k-1}^k B_{k-1,u}\, du &= \frac{1}{2} (B_k-B_{k-1}) - \int_{k-1}^k (B_u-B_{k-1})\, du\\ &= \int_{k-1}^k du \int_{k-1}^k \mu_k(d\xi)\, (B_\xi-B_u)\\ &= \int_0^1 ds \int_0^1 \mu_1(dx)\, (B_{x+k-1}-B_{s+k-1}). \end{align*} Here we set $\mu_k=(\delta_k+\delta_{k-1})/2$ by using the Dirac delta function $\delta_a$. 
From this equality, we see \begin{align*} a_{k,l} = \int_0^1 ds \int_0^1 \mu_1(dx) \int_0^1 dt \int_0^1 \mu_1(dy)\, \boldsymbol E [ (B_{x+k-1}-B_{s+k-1}) (B_{y+l-1}-B_{t+l-1}) ]. \end{align*} Note that $b_2$ and $b_3$ in \lref{lem_20141120004521} satisfy \begin{gather*} \int_0^1 ds \int_0^1 \mu_1(dx) \int_0^1 dt \int_0^1 \mu_1(dy)\, b_\nu(x,s,y,t) = 0. \end{gather*} From \lref{lem_20141120004521}, we have \begin{align*} |a_{k,l}| &= \left| \int_0^1 ds \int_0^1 \mu_1(dx) \int_0^1 dt \int_0^1 \mu_1(dy)\, \frac{1}{2} |k-l|^{2H} R(k-l;x,s,y,t) \right|\\ &\leq \const|k-l|^{2H-4}, \end{align*} which implies the conclusion for $|k-l|\geq 2$. The proof is completed. \end{proof} \begin{proof}[Proof of \pref{prop_20141122044947}] The assertion for $|k-l|=0,1$ follows from the H\"{o}lder inequality and \eqref{eq_20150519005630}. We prove the assertion for $|k-l|\geq 2$. We have \begin{align*} a^\dagger_{k,l} &= \boldsymbol E \left[ (B_k-B_{k-1}) \left( \frac{1}{2} (B_l-B_{l-1}) - \int_0^1 (B_{y+l-1}-B_{l-1})\, dy \right) \right]\\ &= \frac{1}{2} \boldsymbol E[(B_k-B_{k-1})(B_l-B_{l-1})] - \int_0^1 \boldsymbol E[(B_k-B_{k-1})(B_{y+l-1}-B_{l-1})]\, dy. 
\end{align*} From \lref{lem_20141120004521}, we have \begin{align*} \boldsymbol E[(B_k-B_{k-1})(B_l-B_{l-1})] &= \frac{1}{2} |k-l|^{2H} \left\{ \binom{2H}{2} \frac{2}{(k-l)^2} + R(k-l;1,0,1,0) \right\} \end{align*} and \begin{multline*} \int_0^1 \boldsymbol E[(B_k-B_{k-1})(B_{y+l-1}-B_{l-1})]\, dy\\ \begin{aligned} &= \frac{1}{2} |k-l|^{2H} \left\{ \binom{2H}{2} \frac{1}{(k-l)^2} \int_0^1 2y\, dy + \binom{2H}{3} \frac{1}{(k-l)^3} \int_0^1 3(y-y^2)\, dy + \int_0^1 R(k-l;1,0,y,0)\, dy \right\}\\ &= \frac{1}{2} |k-l|^{2H} \left\{ \binom{2H}{2} \frac{1}{(k-l)^2} + \binom{2H}{3} \frac{1}{(k-l)^3} \frac{1}{2} + \int_0^1 R(k-l;1,0,y,0)\, dy \right\}. \end{aligned} \end{multline*} From these equalities, we have \begin{align*} a^\dagger_{k,l} &= \frac{1}{2} |k-l|^{2H} \left\{ - \frac{1}{2} \binom{2H}{3} \frac{1}{(k-l)^3} + \frac{1}{2} R(k-l;1,0,1,0) - \int_0^1 R(k-l;1,0,y,0)\, dy \right\}\\ &= - \frac{1}{4} \binom{2H}{3} \frac{|k-l|^{2H}}{(k-l)^3} + \frac{1}{2} |k-l|^{2H} \left\{ \frac{1}{2} R(k-l;1,0,1,0) - \int_0^1 R(k-l;1,0,y,0)\, dy \right\}. \end{align*} Recalling that $R$ satisfies $|R(k-l;x,s,y,t)|\leq \const|k-l|^{-4}$ for some positive constant $\const$, we obtain the conclusion. \end{proof} \begin{proof}[Proof of \pref{prop_20141122044103}] A direct computation yields \begin{align}\label{eq_20150508075555} a^\dagger_{k,l} &= \frac{1}{4} \left\{ -|k-l+1|^{2H} +|k-l-1|^{2H} \right\} - \int_0^1 \frac{1}{2} \left\{ -|k-l+1-s|^{2H} +|k-l-s|^{2H} \right\}\, ds \end{align} and \begin{align}\label{eq_20150508075943} a^\dagger_{l,k} &= \frac{1}{4} \left\{ -|l-k+1|^{2H} +|l-k-1|^{2H} \right\} {} - \int_0^1 \frac{1}{2} \left\{ -|k-l-t|^{2H} +|k-l+(1-t)|^{2H} \right\}\, dt. \end{align} The assertion follows from these two equalities.
We see \eqref{eq_20150508075555} as follows: \begin{align*} a^\dagger_{k,l} &= \frac{1}{2} \boldsymbol E[(B_k-B_{k-1})(B_l-B_{l-1})] - \int_0^1 \boldsymbol E [ (B_k-B_{k-1}) (B_{s+l-1}-B_{l-1}) ]\, ds\\ &= \frac{1}{2} \frac{1}{2} \left\{ |k-l+1|^{2H} +|k-l-1|^{2H} -2|k-l|^{2H} \right\}\\ &\phantom{=}\quad\qquad {} - \int_0^1 \frac{1}{2} \left\{ -|k-(s+l-1)|^{2H} +|k-(l-1)|^{2H} \right.\\ &\phantom{=}\quad\qquad\qquad\qquad\qquad \left.{} +|(k-1)-(s+l-1)|^{2H} -|(k-1)-(l-1)|^{2H} \right\}\, ds. \end{align*} In order to prove \eqref{eq_20150508075943}, we exchange $k$ and $l$ in \eqref{eq_20150508075555} and obtain \begin{align*} a^\dagger_{l,k} &= \frac{1}{4} \left\{ -|l-k+1|^{2H} +|l-k-1|^{2H} \right\} - \int_0^1 \frac{1}{2} \left\{ -|l-k+1-s|^{2H} +|l-k-s|^{2H} \right\}\, ds. \end{align*} By the substitution $t=1-s$, we see that the integral is equal to \begin{align*} \int_1^0 \frac{1}{2} \left\{ -|l-k+t|^{2H} +|l-k-(1-t)|^{2H} \right\} (-1)\, dt = \int_0^1 \frac{1}{2} \left\{ -|k-l-t|^{2H} +|k-l+(1-t)|^{2H} \right\}\, dt. \end{align*} These two equalities imply \eqref{eq_20150508075943}. \end{proof} \subsection{Relative compactness and convergence in fdds} We are ready to prove \pref{prop_20141102222020}. We show relative compactness and convergence in the sense of finite-dimensional distributions (fdds). \begin{lemma}\label{lem_20150519013003} Under the assumption of \pref{prop_20141102222020}, the sequence $\{(B,\hermiteVar{q}{m},\TRVar{m})\}_{m=1}^\infty$ is relatively compact in the Skorokhod topology. \end{lemma} \begin{lemma}\label{lem_20150519013014} Under the assumption of \pref{prop_20141102222020}, the sequence $\{(B,\hermiteVar{q}{m},\TRVar{m})\}_{m=1}^\infty$ converges in the sense of fdds. More precisely, we have, for $0\leq s_1<t_1\leq\dots\leq s_d<t_d\leq 1$, \begin{multline*} \lim_{m\to\infty} \left( B_{t_1}-B_{s_1},\hermiteVar{q}{m}(t_1)-\hermiteVar{q}{m}(s_1),\TRVar{m}(t_1)-\TRVar{m}(s_1), \dots, \right.\\ \left.
B_{t_d}-B_{s_d},\hermiteVar{q}{m}(t_d)-\hermiteVar{q}{m}(s_d),\TRVar{m}(t_d)-\TRVar{m}(s_d) \right)\\ = \left( B_{t_1}-B_{s_1}, \sigma_H (W_{t_1}-W_{s_1}), \tilde{\sigma}_H (\tilde{W}_{t_1}-\tilde{W}_{s_1}), \dots, \right.\\ \left. B_{t_d}-B_{s_d}, \sigma_H (W_{t_d}-W_{s_d}), \tilde{\sigma}_H (\tilde{W}_{t_d}-\tilde{W}_{s_d}) \right) \end{multline*} weakly in $(\mathbf{R}^d)^3$, where $W$ and $\tilde{W}$ are standard Brownian motions and $B$, $W$ and $\tilde{W}$ are independent. \end{lemma} Before beginning our discussion, we note that, for any $0\leq s<t\leq 1$ and $0\leq u<v\leq 1$, \begin{align}\label{eq_20150518045935} \boldsymbol E[\{\TRVar{m}(t)-\TRVar{m}(s)\}\{\TRVar{m}(v)-\TRVar{m}(u)\}] = \frac{1}{2^m} \sum_{k=\intPart{2^m s}+1}^{\intPart{2^m t}} \sum_{l=\intPart{2^m u}+1}^{\intPart{2^m v}} a_{k,l}. \end{align} Applying \eqref{eq_20150518052410} to \eqref{eq_20150518045935}, we see \begin{multline}\label{eq_20150518051514} \boldsymbol E[\{\TRVar{m}(t)-\TRVar{m}(s)\}^2]\\ \begin{aligned} &= \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \left( a_{1,1} + 2 \sum_{j=1}^{\intPart{2^m t}-\intPart{2^m s}-1} a_{1,j+1} \right) - \frac{2}{2^m} \sum_{j=1}^{\intPart{2^m t}-\intPart{2^m s}-1} ja_{1,j+1}. \end{aligned} \end{multline} \begin{proof}[Proof of \lref{lem_20150519013003}] The assertion follows from \begin{gather*} \boldsymbol E[\{\hermiteVar{q}{m}(t)-\hermiteVar{q}{m}(s)\}^4] \leq \const \left( \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \right)^2,\\ \boldsymbol E[\{\TRVar{m}(t)-\TRVar{m}(s)\}^4] \leq \const \left( \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \right)^2 \end{gather*} for any $0\leq s<t\leq 1$ and some constant $\const$. The first estimate is proved in \cite{NourdinNualartTudor2010}. Combining \eqref{eq_20150518051514} and \pref{prop_20150518041212}, we see \begin{align*} \boldsymbol E[\{\TRVar{m}(t)-\TRVar{m}(s)\}^2] &\leq \const \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m}. 
\end{align*} Since $\TRVar{m}(t)-\TRVar{m}(s)$ is a Gaussian random variable, we have the second estimate. \end{proof} \begin{proof}[Proof of \lref{lem_20150519013014}] We show \begin{gather} \label{eq_20150514085853} \lim_{m\to\infty} \boldsymbol E[\{\hermiteVar{q}{m}(t)-\hermiteVar{q}{m}(s)\}^4] = 3\sigma_H^4(t-s)^2,\\ \label{eq_20150514085924} \lim_{m\to\infty} \boldsymbol E [ \{\hermiteVar{q}{m}(t)-\hermiteVar{q}{m}(s)\}^2 ] = \sigma_H^2(t-s),\\ \label{eq_20150514090122} \lim_{m\to\infty} \boldsymbol E [ \{\TRVar{m}(t)-\TRVar{m}(s)\}^2 ] = \tilde{\sigma}_H^2(t-s),\\ \label{eq_20150514090838} \lim_{m\to\infty} \boldsymbol E [ \{B_t-B_s\} \{\hermiteVar{q}{m}(t)-\hermiteVar{q}{m}(s)\} ] = 0,\\ \label{eq_20150514090844} \lim_{m\to\infty} \boldsymbol E [ \{\hermiteVar{q}{m}(t)-\hermiteVar{q}{m}(s)\} \{\TRVar{m}(t)-\TRVar{m}(s)\} ] = 0,\\ \label{eq_20150514091605} \lim_{m\to\infty} \boldsymbol E [ \{B_t-B_s\} \{\TRVar{m}(t)-\TRVar{m}(s)\} ] = 0 \end{gather} for $0\leq s<t\leq 1$ and \begin{gather} \label{eq_20150514090229} \lim_{m\to\infty} \boldsymbol E [ \{\hermiteVar{q}{m}(t)-\hermiteVar{q}{m}(s)\} \{\hermiteVar{q}{m}(v)-\hermiteVar{q}{m}(u)\} ] = 0,\\ \label{eq_20150514090334} \lim_{m\to\infty} \boldsymbol E [ \{\TRVar{m}(t)-\TRVar{m}(s)\} \{\TRVar{m}(v)-\TRVar{m}(u)\} ] = 0,\\ \label{eq_20150514090939} \lim_{m\to\infty} \boldsymbol E [ \{B_t-B_s\} \{\hermiteVar{q}{m}(v)-\hermiteVar{q}{m}(u)\} ] = 0,\\ \label{eq_20150514091004} \lim_{m\to\infty} \boldsymbol E [ \{\hermiteVar{q}{m}(t)-\hermiteVar{q}{m}(s)\} \{\TRVar{m}(v)-\TRVar{m}(u)\} ] = 0,\\ \label{eq_20150514091618} \lim_{m\to\infty} \boldsymbol E [ \{B_t-B_s\} \{\TRVar{m}(v)-\TRVar{m}(u)\} ] = 0 \end{gather} for $0\leq s<t\leq 1$ and $0\leq u<v\leq 1$ with $(s,t)\cap(u,v)=\emptyset$. From these convergences and the fourth moment theorem in \cite{PeccatiTudor2005}, we obtain the assertion.
The convergences \eqref{eq_20150514085853}, \eqref{eq_20150514085924} and \eqref{eq_20150514090229} are proved in \cite{NourdinNualartTudor2010}. We consider \eqref{eq_20150514090122} and \eqref{eq_20150514090334}. Both convergences follow from \eqref{eq_20150518051514} and \pref{prop_20150518041212}. In particular, \eqref{eq_20150514090122} is a direct consequence of them. We show \eqref{eq_20150514090334} for $s<t\leq u<v$. From \eqref{eq_20150518045935} and \eqref{eq_20150518052410}, we have \begin{multline*} | \boldsymbol E[\{\TRVar{m}(t)-\TRVar{m}(s)\}\{\TRVar{m}(v)-\TRVar{m}(u)\}] |\\ \leq \frac{1}{2^m} \sum_{k=\intPart{2^m s}+1}^{\intPart{2^m t}} \sum_{l=\intPart{2^m u}+1}^{\intPart{2^m v}} |a_{1,l-k+1}| \leq \frac{1}{2^m} \sum_{j=\intPart{2^m u}+1-\intPart{2^m t}}^{\intPart{2^m v}-\intPart{2^m s}-1} j|a_{1,j+1}| \leq \frac{1}{2^m} \sum_{j=1}^{2^m} j|a_{1,j+1}|. \end{multline*} Combining this estimate and \pref{prop_20150518041212}, we obtain \eqref{eq_20150514090334}. We study the equalities \eqref{eq_20150514090838}, \eqref{eq_20150514090844}, \eqref{eq_20150514090939} and \eqref{eq_20150514091004}. Since $B_t-B_s$, $\hermiteVar{q}{m}(t)-\hermiteVar{q}{m}(s)$ and $\TRVar{m}(t)-\TRVar{m}(s)$ belong to the first, $q$-th and first Wiener chaos, respectively, the expectations in \eqref{eq_20150514090838} and \eqref{eq_20150514090844} are equal to $0$. The same reason yields \eqref{eq_20150514090939} and \eqref{eq_20150514091004}. We prove \eqref{eq_20150514091605} and \eqref{eq_20150514091618}. Set $ B^{(m)}_t = B_{\intPart{2^m t}/2^m} = \sum_{k=1}^{\intPart{2^m t}} B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} $. We decompose $ \boldsymbol E [\{B_t-B_s\}\{\TRVar{m}(v)-\TRVar{m}(u)\}] $ into $ I^{(m)} + \boldsymbol E [ \{B^{(m)}_t-B^{(m)}_s\} \{\TRVar{m}(v)-\TRVar{m}(u)\} ] + J^{(m)} $, where \begin{align*} I^{(m)} &= \boldsymbol E [ \{B_t-B^{(m)}_t\} \{\TRVar{m}(v)-\TRVar{m}(u)\} ],\\ J^{(m)} &= \boldsymbol E [ \{B^{(m)}_s-B_s\} \{\TRVar{m}(v)-\TRVar{m}(u)\} ].
\end{align*} We can show convergence of $I^{(m)}$ and $J^{(m)}$ easily. In fact, we see \begin{align*} |I^{(m)}| &\leq \boldsymbol E[\{B_t-B_{\intPart{2^m t}/2^m}\}^2]^{1/2} \boldsymbol E[\{\TRVar{m}(v)-\TRVar{m}(u)\}^2]^{1/2}\\ &\leq \left( t-\frac{\intPart{2^m t}}{2^m} \right)^H \left( \const \frac{\intPart{2^m v}-\intPart{2^m u}}{2^m} \right)^{1/2}. \end{align*} The same inequality holds for $J^{(m)}$. Hence we see the convergences. We consider convergence of $ \boldsymbol E [ \{B^{(m)}_t-B^{(m)}_s\} \{\TRVar{m}(v)-\TRVar{m}(u)\} ] $. Note \begin{align*} \boldsymbol E[\{B^{(m)}_t-B^{(m)}_s\}\{\TRVar{m}(v)-\TRVar{m}(u)\}] = 2^{-m(1/2+H)} \sum_{k=\intPart{2^m s}+1}^{\intPart{2^m t}} \sum_{l=\intPart{2^m u}+1}^{\intPart{2^m v}} a^\dagger_{k,l}. \end{align*} In the case that $s=u$ and $t=v$, we see \begin{multline*} \boldsymbol E[\{B^{(m)}_t-B^{(m)}_s\}\{\TRVar{m}(t)-\TRVar{m}(s)\}]\\ = 2^{-m(1/2+H)} \sum_{k=\intPart{2^m s}+1}^{\intPart{2^m t}} a^\dagger_{k,k} + 2^{-m(1/2+H)} \sum_{\intPart{2^m s}+1\leq k<l\leq\intPart{2^m t}} (a^\dagger_{k,l}+a^\dagger_{l,k}) = 0. \end{multline*} In the last line, we used \pref{prop_20141122044103}. From this, we see \eqref{eq_20150514091605}. In the case that $0\leq s<t\leq u<v\leq 1$, by noting \eqref{eq_20150518053842}, we have \begin{multline*} |\boldsymbol E[\{B^{(m)}_t-B^{(m)}_s\}\{\TRVar{m}(u)-\TRVar{m}(v)\}]|\\ \begin{aligned} \leq 2^{-m(1/2+H)} \sum_{k=\intPart{2^m s}+1}^{\intPart{2^m t}} \sum_{l=\intPart{2^m u}+1}^{\intPart{2^m v}} |a^\dagger_{1,l-k+1}| \leq 2^{-m(1/2+H)} \sum_{j=\intPart{2^m u}-\intPart{2^m t}+1}^{\intPart{2^m v}-\intPart{2^m s}-1} j|a^\dagger_{1,j+1}|. \end{aligned} \end{multline*} From \pref{prop_20141122044947}, we see \begin{align*} |\boldsymbol E[\{B^{(m)}_t-B^{(m)}_s\}\{\TRVar{m}(u)-\TRVar{m}(v)\}]| \leq \const 2^{-m(1/2+H)} \sum_{j=1}^{2^m} j \cdot j^{2H-3}. \end{align*} In the case that $0\leq u<v\leq s<t\leq 1$, we obtain the same inequality. This completes the proof of \eqref{eq_20150514091618}.
\end{proof} \section{Proof of convergence of variation functionals} \label{sec_20170519021523} \subsection{Estimate on $\tilde{U}_m$} In this subsection, we prove \tref{thm_20141123062022}. At the beginning, we give an estimate of $\boldsymbol E[|\wTRVar{}{m}(t)-\wTRVar{}{m}(s)|^2]$. \begin{proposition}\label{prop_20150518032235} There exists a positive constant $\const$ independent of $m$ such that \begin{align*} \left| \boldsymbol E [ g(X_s) g(X_t) \WienerInt{2} ( \zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} \odot \zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}} ) ] \right| \leq \const \begin{cases} 2^{-m(4H+2)}, & 0<H< 1/2,\\ 2^{-4m}, & 1/2\leq H<1\\ \end{cases} \end{align*} for any $0\leq s,t\leq 1$ and $1\leq k, l\leq 2^m$. \end{proposition} \begin{proof} From the duality relationship \eqref{eq_duality_relationship}, we have \begin{align*} \boldsymbol E [ g(X_s) g(X_t) \WienerInt{2} ( \zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} \odot \zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}} ) ] &= \boldsymbol E \left[ \innerProdLarge[\mathfrak{H}^{\odot 2}] { D^2 \left\{ g(X_s) g(X_t) \right\} } { \zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} \odot \zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}} } \right] \end{align*} and the Leibniz rule implies \begin{align*} D^2 \left\{ g(X_s) g(X_t) \right\} &= g'(X_s) g(X_t) D^2 X_s + g''(X_s) g(X_t) (DX_s)^{\odot 2}\\ &\phantom{=}\quad + 2 g'(X_s) g'(X_t) D X_s \odot D X_t + g(X_s) g''(X_t) (DX_t)^{\odot 2} + g(X_s) g'(X_t) D^2 X_t.
\end{align*} From the H\"{o}lder inequality and \pref{prop_20150520004520}, we have \begin{align*} \boldsymbol E \left[ g'(X_s) g(X_t) \innerProdLarge[\mathfrak{H}^{\odot 2}] { D^2 X_s } { \zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} \odot \zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}} } \right] &\leq \boldsymbol E [ | g'(X_{\dyadicPart[m]{k-1}}) g(X_{\dyadicPart[m]{l-1}}) |^2 ]^{1/2} \cdot \const \|\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}\|_\infty \|\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}\|_\infty\\ &\leq \const' \begin{cases} \displaystyle { \left( \frac{1}{2} + \frac{1}{2H+1} \right)^2 (2^{-m(2H+1)})^2, } & 0<H< 1/2,\\ (2H)^2 (2^{-2m})^2, & 1/2\leqH<1.\\ \end{cases} \end{align*} In the last line, we used \pref{prop_20150520010300} and the constant $\const$ and $\const'$ are independent of $m$. Since the other terms in the above also admit similar estimates, we see the assertion. \end{proof} \begin{proposition}\label{prop_20141104000212} There exists a positive constant $\const$ independent of $m$ such that \begin{align*} \boldsymbol E [|\wTRVar{}{m}(t)-\wTRVar{}{m}(s)|^2] \leq \const \cdot \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \cdot \begin{cases} 2^{-4mH}, & 0<H<1/2,\\ 2^{-2m}, & 1/2\leqH<1\\ \end{cases} \end{align*} for any $0\leq s,t\leq 1$. 
\end{proposition} \begin{proof} From the product formula \eqref{eq_product_formula}, we have \begin{align*} |\wTRVar{}{m}(t)-\wTRVar{}{m}(s)|^2 = \sum_{k,l=\intPart{2^m s}+1}^{\intPart{2^m t}} g(X_{\dyadicPart[m]{k-1}}) g(X_{\dyadicPart[m]{l-1}}) \WienerInt{1}(\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}) \WienerInt{1}(\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}) = S+T, \end{align*} where \begin{align*} S &= \sum_{k,l=\intPart{2^m s}+1}^{\intPart{2^m t}} g(X_{\dyadicPart[m]{k-1}}) g(X_{\dyadicPart[m]{l-1}}) \innerProd[\mathfrak{H}] {\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}} {\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}},\\ T &= \sum_{k,l=\intPart{2^m s}+1}^{\intPart{2^m t}} g(X_{\dyadicPart[m]{k-1}}) g(X_{\dyadicPart[m]{l-1}}) \WienerInt{2} ( \zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}} \odot \zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}} ). \end{align*} We estimate the expectations $\boldsymbol E[S]$ and $\boldsymbol E[T]$. The expectation $|\boldsymbol E[S]|$ is estimated by \begin{align*} |\boldsymbol E[S]| &= \left| \sum_{k,l=\intPart{2^m s}+1}^{\intPart{2^m t}} \boldsymbol E [ g(X_{\dyadicPart[m]{k-1}}) g(X_{\dyadicPart[m]{l-1}}) ] \innerProd[\mathfrak{H}] {\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}} {\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}} \right|\\ &\leq \left( \sup_{0\leq t\leq 1} \boldsymbol E[|g(X_t)|^2] \right) \sum_{k,l=\intPart{2^m s}+1}^{\intPart{2^m t}} | \innerProd[\mathfrak{H}] {\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}} {\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}} |. \end{align*} Combining the self-similarity of fBm and \pref{prop_20150518041212}, we have \begin{align*} |\boldsymbol E[S]| &\leq \left( \sup_{0\leq t\leq 1} \boldsymbol E[|g(X_t)|^2] \right) \cdot 2^{-m(2H+2)} \cdot \const ( \intPart{2^m t}-\intPart{2^m s} )\\ &= \const \left( \sup_{0\leq t\leq 1} \boldsymbol E[|g(X_t)|^2] \right) \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \cdot 2^{-m(2H+1)}. \end{align*} We evaluate the expectation $\boldsymbol E[T]$. 
From \pref{prop_20150518032235}, we obtain \begin{align*} \left| \boldsymbol E[T] \right| &\leq (\intPart{2^m t}-\intPart{2^m s})^2 \cdot \const \begin{cases} 2^{-m(4H+2)}, & 0<H< 1/2,\\ 2^{-4m}, & 1/2\leq H<1,\\ \end{cases}\\ &\leq \const \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \cdot \begin{cases} 2^{-4mH}, & 0<H< 1/2,\\ 2^{-2m}, & 1/2\leq H<1.\\ \end{cases} \end{align*} The proof is completed. \end{proof} \begin{proof}[Proof of \pref{prop_20141117062354}] From \pref{prop_20141104000212}, we have \begin{align*} \boldsymbol E [ | 2^{mr} \wTRVar{}{m}(t) - 2^{mr} \wTRVar{}{m}(s) |^2 ] &= 2^{2mr} \boldsymbol E[|\wTRVar{}{m}(t)-\wTRVar{}{m}(s)|^2]\\ &\leq \const \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \cdot \begin{cases} 2^{-2m(2H-r)}, & 0<H<1/2,\\ 2^{-2m(1-r)}, & 1/2\leq H<1.\\ \end{cases} \end{align*} This inequality implies convergence in the sense of fdds and relative compactness. For relative compactness, see \cite[Corollary~2.2]{BurdzySwanson2010}. The proof is completed. \end{proof} \begin{proof}[Proof of \tref{thm_20141123062022}] The assertion follows from convergence in the sense of fdds and relative compactness of $\{(B,2^{-m/2}\wHerVar{q}{m},2^{m}\wTRVar{}{m})\}_{m=1}^\infty$, that is, we obtain \tref{thm_20141123062022} from the following \lrefs{prop_20141125002248}{prop_20141125002312}. \end{proof} \begin{lemma}\label{prop_20141125002248} Let $0\leq t_1<\dots<t_d\leq 1$. Under the assumption of \tref{thm_20141123062022}, we have \begin{multline}\label{eq_20141125004539} \lim_{m\to\infty} \left( B_{t_1},2^{-m/2}\wHerVar{q}{m}(t_1),2^{m}\wTRVar{}{m}(t_1), \dots, B_{t_d},2^{-m/2}\wHerVar{q}{m}(t_d),2^{m}\wTRVar{}{m}(t_d) \right)\\ \begin{aligned} &= \left( B_{t_1}, \sqrt{q!} \int_0^{t_1} f(X_s)\, dW_s, \frac{1}{\sqrt{12}} \int_0^{t_1} g(X_s)\, d\tilde{W}_s, \dots, \right.\\ &\phantom{=}\quad\qquad\qquad\qquad \left.
B_{t_d}, \sqrt{q!} \int_0^{t_d} f(X_s)\, dW_s, \frac{1}{\sqrt{12}} \int_0^{t_d} g(X_s)\, d\tilde{W}_s \right) \end{aligned} \end{multline} weakly in $(\mathbf{R}^d)^3$, where $W$ and $\tilde{W}$ are standard Brownian motions and $B$, $W$ and $\tilde{W}$ are independent. \end{lemma} \begin{lemma}\label{prop_20141125002312} Under the assumption of \tref{thm_20141123062022}, $\{(B,2^{-m/2}\wHerVar{q}{m},2^{m}\wTRVar{}{m})\}_{m=1}^\infty$ is relatively compact in the Skorokhod topology. \end{lemma} \begin{proof}[Proof of \lref{prop_20141125002248}] We decompose $\wHerVar{q}{m}(t)$ and $\wTRVar{}{m}(t)$ into $ \wHerVar{q}{m,n}(t) + R^{(m,n)}(t) $ and $ \wTRVar{}{m,n}(t) + \tilde{R}^{(m,n)}(t) $ for $m\geq n$, respectively, where \begin{align*} \wHerVar{q}{m,n}(t) &= \sum_{k=1}^{\intPart{2^m t}} f(X_{\lowerPart[n]{\dyadicPart[m]{k-1}}}) \hermitePoly{q}(2^{mH}B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}),\\ R^{(m,n)}(t) &= \sum_{k=1}^{\intPart{2^m t}} \left\{ F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) - f(X_{\lowerPart[n]{\dyadicPart[m]{k-1}}}) \right\} \hermitePoly{q}(2^{mH}B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}),\\ \wTRVar{}{m,n}(t) &= \sum_{k=1}^{\intPart{2^m t}} g(X_{\lowerPart[n]{\dyadicPart[m]{k-1}}}) \WienerInt{1}(\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}),\\ \tilde{R}^{(m,n)}(t) &= \sum_{k=1}^{\intPart{2^m t}} \left\{ g(X_{\dyadicPart[m]{k-1}}) - g(X_{\lowerPart[n]{\dyadicPart[m]{k-1}}}) \right\} \WienerInt{1}(\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}). \end{align*} Here $ \lowerPart[n]{t} = \sup \{ \dyadicPart[n]{k}; \dyadicPart[n]{k}\leq t,k=0,\dots,2^n-1 \} $. We prove \begin{enumerate} \item \label{item_1506912770} The sequence $ \{ \{ ( B_{t_\alpha}, 2^{-m/2}\wHerVar{q}{m,n}(t_\alpha), 2^m \wTRVar{}{m,n}(t_\alpha) )_{\alpha=1}^d \}_{m=n}^\infty \}_{n=1}^\infty $ converges to the right-hand side of \eqref{eq_20141125004539} as $m\to\infty$ and $n\to\infty$.
\item \label{item_1506912803} $ \displaystyle { \lim_{n\to\infty} \limsup_{m\to\infty} \boldsymbol E [|2^{-m/2}R^{(m,n)}(t_\alpha)|^2] = 0 } $ for $\alpha=1,\dots,d$, \item \label{item_1506912832} $ \displaystyle { \lim_{n\to\infty} \limsup_{m\to\infty} \boldsymbol E [|2^m\tilde{R}^{(m,n)}(t_\alpha)|^2] = 0 } $ for $\alpha=1,\dots,d$. \end{enumerate} Assertion~(\ref{item_1506912770}) is a direct consequence of \pref{prop_20141102222020}. To show Assertion~(\ref{item_1506912803}), we use the product formula \eqref{eq_product_formula} and estimate the expectations. For details, see {\cite[Lemmas~22 and 23]{Naganuma2015}}. In the rest of this proof we show Assertion~(\ref{item_1506912832}) by using the independent increments of the standard Brownian motion $B$. Set $ \tilde{Y}^{(m,n)}_k = \{ g(X_{\dyadicPart[m]{k-1}}) - g(X_{\lowerPart[n]{\dyadicPart[m]{k-1}}}) \} \WienerInt{1}(\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}) $ and $ \mathcal{F}_{t} = \sigma(B_u;0\leq u\leq t) $. Then, for $k<l$, the random variables $\tilde{Y}^{(m,n)}_k$ and $ g(X_{\dyadicPart[m]{l-1}}) - g(X_{\lowerPart[n]{\dyadicPart[m]{l-1}}}) $ are $\mathcal{F}_{\dyadicPart[m]{l-1}}$-measurable. In addition, $\WienerInt{1}(\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}})$ is independent of $\mathcal{F}_{\dyadicPart[m]{l-1}}$. This implies $ \boldsymbol E [ \WienerInt{1}(\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}) |\mathcal{F}_{\dyadicPart[m]{l-1}} ] = \boldsymbol E [ \WienerInt{1}(\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}) ] = 0 $ a.s. Hence, we have \begin{align*} \boldsymbol E [ \tilde{Y}^{(m,n)}_k \tilde{Y}^{(m,n)}_l |\mathcal{F}_{\dyadicPart[m]{l-1}} ] = \tilde{Y}^{(m,n)}_k \{ g(X_{\dyadicPart[m]{l-1}}) - g(X_{\lowerPart[n]{\dyadicPart[m]{l-1}}}) \} \boldsymbol E [ \WienerInt{1}(\zeta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}) |\mathcal{F}_{\dyadicPart[m]{l-1}} ] = 0 \end{align*} a.s.\ for $k<l$. From this, we obtain $ \boldsymbol E [ \tilde{Y}^{(m,n)}_k \tilde{Y}^{(m,n)}_l ] = 0 $ for $k\neq l$.
In addition, we have \begin{align*} \boldsymbol E [ |\tilde{Y}^{(m,n)}_k|^2 ] \leq \boldsymbol E [ \{ g(X_{\dyadicPart[m]{k-1}}) - g(X_{\lowerPart[n]{\dyadicPart[m]{k-1}}}) \}^4 ]^{1/2} \boldsymbol E [ \WienerInt{1}(\zeta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}})^4 ]^{1/2} \leq \const 2^{-n} 2^{-3m} \end{align*} for some constant $\const$. From these, we obtain \begin{align*} \boldsymbol E[|2^m\tilde{R}^{(m,n)}(t)|^2] = 2^{2m} \sum_{k=1}^{\intPart{2^m t}} \boldsymbol E[|\tilde{Y}^{(m,n)}_k|^2] \leq 2^{2m} \cdot 2^m \cdot \const 2^{-n} 2^{-3m} = \const 2^{-n}, \end{align*} which implies the third assertion. The proof is completed. \end{proof} \begin{proof}[Proof of \lref{prop_20141125002312}] We can prove the assertion in the same way as {\cite[Proposition~18]{Naganuma2015}}. In the proof, we show that the processes satisfy a moment condition which ensures relative compactness. \end{proof} \subsection{Weighted Hermite and power variations} \label{sec_20150225090808} In this subsection, we prove \tref[thm_20141203094309]{thm_20141117071529}. At the beginning, we give an estimate of $\boldsymbol E[|\wHerVar{q}{m}(t)-\wHerVar{q}{m}(s)|^2]$. \begin{proposition}\label{prop_20141117073846} Let $\mu$ and $\nu$ be probability measures on $[0,1]$ and $f,g\in\SmoothFunc[poly]{2q}{\mathbf{R}}{\mathbf{R}}$. Then there exists a constant $\const$ such that \begin{align*} \left| \boldsymbol E \left[ \WienerInt{2q-2r} ( \delta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}^{\odot q-r} \odot \delta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}^{\odot q-r} ) F^{f,\mu}_{st}(X) F^{g,\nu}_{uv}(X) \right] \right| \leq \const \begin{cases} 2^{-4m(q-r)H},&0<H<1/2,\\ 2^{-2m(q-r)},&1/2\leq H<1,\\ \end{cases} \end{align*} for any $0\leq s<t\leq 1$, $0\leq u<v\leq 1$ and $1\leq k,l\leq 2^m$.
\end{proposition} \begin{proof} From the duality relationship \eqref{eq_duality_relationship} and the Leibniz rule, we see \begin{multline*} \boldsymbol E \left[ \WienerInt{2q-2r} ( \delta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}^{\odot q-r} \odot \delta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}^{\odot q-r} ) F_{st}^{f,\mu}(X) F_{uv}^{g,\nu}(X) \right]\\ \begin{aligned} &= \boldsymbol E \left[ \innerProdLarge[\mathfrak{H}^{\odot 2q-2r}] { \delta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}^{\odot q-r} \odot \delta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}^{\odot q-r} } { D^{2q-2r} \left\{ F_{st}^{f,\mu}(X) F_{uv}^{g,\nu}(X) \right\} } \right]\\ &= \sum_{a+b=2q-2r} \frac{(2q-2r)!}{a!b!} \boldsymbol E \left[ \innerProdLarge[\mathfrak{H}^{\odot 2q-2r}] { \delta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}^{\odot q-r} \odot \delta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}}^{\odot q-r} } { D^a F_{st}^{f,\mu}(X) \odot D^b F_{uv}^{g,\nu}(X) } \right]. \end{aligned} \end{multline*} From \pref{prop_20150520010300}, we see that \begin{align*} \boldsymbol E [ | \innerProd[\mathfrak{H}^{\odot a}] {D^a F^{f,\mu}_{st}(X)} {h^1\odot\cdots\odot h^a} |^r ]^{1/r} \leq \const \|h^1\|_\infty\cdots\|h^a\|_\infty \leq \const \begin{cases} 2^{-2maH},&0<H< 1/2,\\ (2H)^a 2^{-ma},&1/2\leq H<1,\\ \end{cases} \end{align*} for $ h^1,\dots,h^a \in \{ \delta_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}, \delta_{\dyadicPart[m]{l-1}\dyadicPart[m]{l}} \} $. Combining these estimates, the proof is completed. \end{proof} \begin{proposition}\label{prop_20141203092733} Let $q\geq 2$. There exists a positive constant $\const$ such that \begin{align*} \boldsymbol E [|\wHerVar{q}{m}(t)-\wHerVar{q}{m}(s)|^2] \leq \const (\intPart{2^m t}-\intPart{2^m s}) \begin{cases} 2^{m(1-2qH)}, & 0<H\leq 1/2q,\\ 1, & 1/2q<H<1-1/2q,\\ m, & H=1-1/2q,\\ 2^{m\{1-2q(1-H)\}}, & 1-1/2q<H<1,\\ \end{cases} \end{align*} for any $0\leq s<t\leq 1$.
\end{proposition} \begin{proof} We can prove this proposition in the same way as \cite[Proposition~21]{Naganuma2015} by using \pref{prop_20141117073846} instead of \cite[Proposition~19]{Naganuma2015}. In more detail, we use \eqref{eq_product_formula} to rewrite $|\wHerVar{q}{m}(t)-\wHerVar{q}{m}(s)|^2$ in terms of It\^o-Wiener integrals. Then we see that it is expressed as a sum of terms of the form estimated in \pref{prop_20141117073846}. From \pref{prop_20141117073846}, we see the conclusion. \end{proof} We now prove \tref[thm_20141203094309]{thm_20141117071529}. \begin{proof}[Proof of \tref{thm_20141203094309}] Recall the identity $ \xi^q = \sum_{r=0}^q \binom{q}{r} \boldsymbol E[Z^{q-r}] \hermitePoly{r}(\xi) $ for any $\xi\in\mathbf{R}$, where $Z$ is a standard Gaussian random variable. Applying this identity, we see \begin{align*} 2^{m(qH-1)} \sum_{k=1}^{\intPart{2^m\cdot}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) (B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}})^q &= 2^{-m} \sum_{k=1}^{\intPart{2^m\cdot}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) (2^{mH}B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}})^q\\ &= 2^{-m} \sum_{r=0}^q \binom{q}{r} \boldsymbol E[Z^{q-r}] \sum_{k=1}^{\intPart{2^m\cdot}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) \hermitePoly{r}(2^{mH}B_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}})\\ &= \boldsymbol E[Z^q] \cdot 2^{-m} \sum_{k=1}^{\intPart{2^m\cdot}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) + \sum_{r=2}^q \binom{q}{r} \boldsymbol E[Z^{q-r}] \cdot 2^{-m} \wHerVar{r}{m}. \end{align*} We prove convergence of the first and second terms in what follows. We consider the first term.
Note \begin{multline*} 2^{-m} \sum_{k=1}^{\intPart{2^m t}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) - \int_0^t f(X_s)\, ds\\ \begin{aligned} &= \sum_{k=1}^{\intPart{2^m t}} \int_{\dyadicPart[m]{k-1}}^{\dyadicPart[m]{k}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X)\, ds - \sum_{k=1}^{\intPart{2^m t}} \int_{\dyadicPart[m]{k-1}}^{\dyadicPart[m]{k}} f(X_s)\, ds - \int_{\intPart{2^m t}/2^m}^t f(X_s)\, ds\\ &= \sum_{k=1}^{\intPart{2^m t}} \int_{\dyadicPart[m]{k-1}}^{\dyadicPart[m]{k}} \{ F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) - f(X_s) \}\, ds - \int_{\intPart{2^m t}/2^m}^t f(X_s)\, ds. \end{aligned} \end{multline*} Since $X$ is $(H-\epsilon)$-H\"{o}lder continuous, the first term on the right-hand side is bounded in absolute value by \begin{align*} \sum_{k=1}^{\intPart{2^m t}} \int_{\dyadicPart[m]{k-1}}^{\dyadicPart[m]{k}} | F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) - f(X_s) |\, ds \leq \sum_{k=1}^{\intPart{2^m t}} \int_{\dyadicPart[m]{k-1}}^{\dyadicPart[m]{k}} \const[X] 2^{-m(H-\epsilon)}\, ds \leq \const[X] 2^{-m(H-\epsilon)}, \end{align*} where $\const[X]$ is a random variable, while the last integral is bounded by $2^{-m} \sup_{0\leq s\leq 1} |f(X_s)|$. Hence \begin{align*} \lim_{m\to\infty} 2^{-m} \sum_{k=1}^{\intPart{2^m \cdot}} F_{\dyadicPart[m]{k-1}\dyadicPart[m]{k}}(X) = \int_0^\cdot f(X_s)\, ds \end{align*} almost surely with respect to the uniform norm. We prove convergence of the process $2^{-m}\wHerVar{r}{m}$ to the process $0$ for $r=2,\dots,q$.
It follows from \pref{prop_20141203092733} that \begin{align*} \boldsymbol E[|2^{-m}\wHerVar{r}{m}(t)-2^{-m}\wHerVar{r}{m}(s)|^2] &\leq \const \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \begin{cases} 2^{-2rmH}, & 0<H\leq 1/2r,\\ 2^{-m}, & 1/2r<H<1-1/2r,\\ m2^{-m}, & H=1-1/2r,\\ 2^{-2rm(1-H)}, & 1-1/2r<H<1,\\ \end{cases}\\ &\leq \const \left( \frac{\intPart{2^m t}-\intPart{2^m s}}{2^m} \right)^{1+\kappa} \begin{cases} 2^{-\kappa m}, & 0<H\leq 1/2r,\\ 2^{-\kappa m}, & 1/2r<H<1-1/2r,\\ m2^{-\kappa m}, & H=1-1/2r,\\ 2^{-\kappa m}, & 1-1/2r<H<1,\\ \end{cases} \end{align*} where \begin{align*} \kappa = \begin{cases} rH, & 0<H\leq 1/2r,\\ 1/2, & 1/2r<H<1-1/2r,\\ 1/2, & H=1-1/2r,\\ r(1-H), & 1-1/2r<H<1.\\ \end{cases} \end{align*} This inequality implies convergence of $2^{-m}\wHerVar{r}{m}$ to the zero process. This completes the proof. \end{proof} \begin{proof}[Proof of \tref{thm_20141117071529}] The assertion is proved in the same way as \cite[Theorem~15]{Naganuma2015} by using \pref{prop_20141117073846} instead of \cite[Proposition~19]{Naganuma2015}. In addition, we use \pref{prop_20141102222020} in this proof. \end{proof} \section*{Acknowledgments} The first author's research was partially supported by Grant-in-Aid for Scientific Research (B) No.~JP16H03938. The second author's research was partially supported by Grant-in-Aid for Young Scientists (B) No.~JP17K14202. \address{ Graduate School of Mathematical Sciences,\\ The University of Tokyo,\\ Meguro-ku, Tokyo, 153-8914, Japan} {[email protected]} \address{ Graduate School of Engineering Science,\\ Osaka University,\\ Toyonaka, Osaka, 560-8531, Japan} {[email protected]} \end{document}
\begin{document} \title[Blow-up set for the nonlinear wave equation] {Solutions blowing up on any given compact set for the energy subcritical wave equation} \author[T. Cazenave]{Thierry Cazenave$^1$} \email{\href{mailto:[email protected]}{[email protected]}} \author[Y. Martel]{Yvan Martel$^{2}$} \email{\href{mailto:[email protected]}{[email protected]}} \author[L. Zhao]{Lifeng Zhao$^3$} \email{\href{mailto:[email protected]}{[email protected]}} \address{$^1$Sorbonne Universit\'e, CNRS, Universit\'e de Paris, Laboratoire Jacques-Louis Lions, B.C. 187, 4 place Jussieu, 75252 Paris Cedex 05, France} \address{$^2$CMLS, \'Ecole Polytechnique, CNRS, 91128 Palaiseau Cedex, France} \address{$^3$Wu Wen-Tsun Key Laboratory of Mathematics and School of Mathematical Sciences, University of Science and Technology of China, Hefei 230026, Anhui, China} \subjclass[2010]{Primary 35L05; secondary 35B44, 35B40} \keywords{nonlinear wave equation, finite-time blowup, blow-up set} \begin{abstract} We consider the focusing energy subcritical nonlinear wave equation $\partial_{tt} u - \Delta u= |u|^{p-1} u$ in ${\mathbb R}^N$, $N\ge 1$. Given any compact set $ K \subset {\mathbb R}^N $, we construct finite energy solutions which blow up at $t=0$ exactly on $ K $. The construction is based on an appropriate ansatz. The initial ansatz is simply $U_0(t,x) = \kappa (t + A(x) )^{ -\frac {2} {p-1} }$, where $A\ge 0$ vanishes exactly on $ K $; for each fixed $x$, it is a solution of the ODE $h'' = h^p$. We refine this first ansatz inductively, using only ODE techniques and taking advantage of the fact that (for suitably chosen $A$) space derivatives are negligible with respect to time derivatives. We complete the proof by an energy argument and a compactness method. 
\end{abstract} \maketitle \section{Introduction} We consider the focusing nonlinear wave equation on $\R^N$ \begin{equation}\label{wave} \partial_{tt} u - \Delta u= |u|^{p-1} u, \quad (t,x)\in\R\times\R^N, \end{equation} for any space dimension $N\geq 1$, and energy subcritical nonlinearities, \emph{i.e.} \begin{equation}\label{on:p} \mbox{$1<p<\infty$ if $N=1,2$ and } 1<p<\frac{N+2}{N-2} \mbox{ if $N\geq 3$}. \end{equation} It is well known that, under this condition on $p$, the Cauchy problem for \eqref{wave} is locally well-posed in the energy space $H^1(\R^N)\times L^2(\R^N)$ (see \cite{GV85, GV89, Segal}). For $H^1\times L^2$ solutions, the energy \[ E(u(t),\partial_tu(t))= \int \left\{ \frac 12 |\partial_t u(t,x)|^2 +\frac 12 |\nabla u(t,x)|^2 - \frac 1{p+1} |u(t,x)|^{p+1}\right\}dx \] is conserved in time. Moreover, it is known how to produce solutions blowing up in finite time (see \emph{e.g.} \cite{Keller,Levine}). Our main result states that for any given compact subset $ \Cset $ of $\R^N$, there exists a finite energy solution of \eqref{wave} which blows up in finite time exactly on~$ \Cset $. \begin{thm}\label{TH:1} Let $p$ satisfy \eqref{on:p} and let $ \Cset $ be any nonempty compact subset of $\R^N$. There exist $\delta_0>0$ and a solution $(u,\partial_t u)\in \cont((0,\delta_0];H^1(\R^N)\times L^2(\R^N))$ of \eqref{wave} which blows up at time $0$ exactly on~$ \Cset $ in the following sense. \begin{itemize} \item If $x_0\in \Cset $ then for any $r>0$, \begin{equation}\label{xE} \lim_{t\downarrow 0} \|u(t)\|_{L^2(|x-x_0|<r)}=\infty\quad \mbox{and}\quad \lim_{t\downarrow 0} \|\partial_t u(t)\|_{L^2(|x-x_0|<r)}=\infty. \end{equation} \item If $x_0\not\in \Cset $ then there exists $r>0$ such that \begin{equation}\label{xnotE} \sup_{t\in(0,\delta_0]}\left\{\|u(t)\|_{L^2(|x-x_0|<r)}+\|\nabla u(t)\|_{L^2(|x-x_0|<r)}+\|\partial_t u(t)\|_{L^2(|x -x_0 |<r)}\right\}<\infty. 
\end{equation} \end{itemize} \end{thm} \begin{rem}\label{rk:2} For $t>0$, the function \begin{equation}\label{def:h} h(t)=\kappa t^{-\frac {2}{p-1}}\quad\mbox{where}\quad \kappa=\left[\frac {2(p+1)}{(p-1)^2}\right]^{\frac 1{p-1}} \end{equation} is a solution of the ordinary differential equation $h''=h^p$ which blows up at time~$0$. It is also a solution of \eqref{wave}, but of course it fails to be in the energy space. The function $h$ is the building block for our construction; it is thus relevant to compare it with the blow-up rate of the solutions constructed in Theorem~\ref{TH:1}. It follows from the proof that for any $0<\mu<\frac2{p-1}$ there exist solutions $u$ as in the statement of Theorem~\ref{TH:1} satisfying in addition the following estimates: for any $x_0\in \Cset $, $r>0$, and all $t\in (0,\delta_0]$, \begin{gather} t^{-\mu} \lesssim \|u(t)\|_{L^2(|x-x_0|<r)}\lesssim t^{-\frac {2}{p-1}},\label{Qrp}\\ t^{-\mu-1} \lesssim\|\partial_t u(t)\|_{L^2(|x-x_0|<r)}\lesssim t^{-\frac {2}{p-1}-1}.\label{dtQrp} \end{gather} Moreover, if $x_0\in \Cset $ and $ \Cset $ contains a neighborhood of $x_0$ then it also holds, for any $r>0$, and all $t\in (0,\delta_0]$, \begin{equation}\label{jus} \|u(t)\|_{L^2(|x-x_0|<r)}\gtrsim t^{-\frac {2}{p-1}}\quad \hbox{and}\quad \|\partial_t u(t)\|_{L^2(|x-x_0|<r)}\gtrsim t^{-\frac {2}{p-1}-1}. \end{equation} In contrast, if $x_0$ is an isolated point of the compact set $ \Cset $, solutions $u$ as in Theorem~\ref{TH:1} can be chosen so that, for a small $r>0$, \[ \lim_{t\downarrow 0} \left\{t^{\frac {2}{p-1}} \|u(t)\|_{L^2(|x-x_0|<r)}+ t^{\frac {2}{p-1}+1}\|\partial_t u(t)\|_{L^2(|x-x_0|<r)}\right\}=0. \] \end{rem} To prove Theorem~\ref{TH:1}, we follow the strategy developed in \cite{CaMaZh} to construct blow-up solutions of ODE type for a class of semilinear Schr\"odinger equations. First, we construct an approximate solution to the blow-up problem based on the explicit blow-up solution $h$ defined by \eqref{def:h}. 
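That \eqref{def:h} indeed solves $h''=h^p$, with the blow-up constant $\kappa$ exactly as stated, is an elementary computation; as a sanity check, the following Python/SymPy sketch (our illustration, not part of the original argument) verifies it symbolically for several subcritical powers $p$.

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def ode_residual(p):
    # h(t) = kappa * t^{-2/(p-1)} with kappa = [2(p+1)/(p-1)^2]^{1/(p-1)}
    p = sp.Rational(p)
    kappa = (2 * (p + 1) / (p - 1) ** 2) ** (1 / (p - 1))
    h = kappa * t ** (-2 / (p - 1))
    # residual of the ODE h'' = h^p; it should vanish identically in t
    return sp.simplify(sp.diff(h, t, 2) - h ** p)

# check for a few powers, including a non-integer one
assert all(ode_residual(p) == 0 for p in (2, 3, 5, sp.Rational(7, 2)))
```

The computation behind the check: $h'' = \kappa\,\frac{2}{p-1}\frac{p+1}{p-1}\,t^{-\frac{2p}{p-1}}$ and $h^p=\kappa^p t^{-\frac{2p}{p-1}}$, so the two sides agree precisely when $\kappa^{p-1}=\frac{2(p+1)}{(p-1)^2}$.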
The main order term of the approximate solution is $U_0(t,x)=\kappa(t+A(x))^{-\frac{2}{p-1}}$, where $A$ is a suitable nonnegative function which vanishes exactly on $ \Cset $ and whose behavior at $\infty$ ensures that $U_0$ belongs to the energy space. Typically, to obtain blowup at only one point $x_0$, it suffices to consider $A(x)=|x-x_0|^k$ for $k$ large enough. Compared to \cite{CaMaZh}, where a simple ansatz such as $U_0$ is sufficient, at least for strong enough nonlinearities, the wave equation requires the introduction of iterated refinements $U_J$ of this ansatz (the number of iterations $J\geq 1$ depends on $p$, see Remark~\ref{eRemStr1}). The basic idea is that for such blow-up profiles, the space derivatives are of lower order compared to time derivatives and to nonlinear terms. This allows us to use only elementary arguments of ordinary differential equations for the construction of the refined ansatz $U_J(t,x)$, at fixed $x$. See Section~\ref{sec:2}. (The construction by purely ODE techniques of an approximate blow-up solution to an Euler-Poisson system is done in~\cite{GuoHJ}.) Second, we consider the sequence $(u_n)$ of solutions of the wave equation \eqref{wave} with initial data $u_n(\frac 1n)=U_J(\frac 1n)$. Using an energy method in $H^1\times L^2$, we prove uniform estimates on this sequence on intervals $[\frac 1n,\delta_0]$, where $\delta_0>0$ is uniform in~$n$ (see Section~\ref{sec:3}). Passing to the limit $n\to \infty$ yields the solution $u$ of Theorem~\ref{TH:1}. We point out that this strategy by approximate solution and compactness is also reminiscent of \cite{Martel,Merle1,RaphaelSzeftel}, where global or blow-up solutions with special asymptotic behavior are constructed using the reversibility of the equation and suitable uniform estimates on backwards solutions. For stability results concerning the solution $h$ in \eqref{def:h}, we refer to~\cite{Don-Sch}. 
For ODE-type blowup for quasilinear wave equations, see~\cite{Speck} and the references therein. We also refer to \cite{CollotGM}, where an ODE blow-up profile similar to $U_0$ is used to construct blow-up solutions of the nonlinear heat equation with applications to the Burgers equation. In this article, we restrict ourselves to energy subcritical power nonlinearities for simplicity, since this framework allows us to use the energy method at the level of regularity $H^1\times L^2$ only. However, the approximate solutions constructed in Section~\ref{sec:2} are relevant for any power nonlinearity, and we expect that a higher order energy method (to estimate higher order Sobolev norms) should be sufficient to extend the construction to energy critical or supercritical nonlinearities (at least for integer powers, to avoid regularity issues). \begin{rem} A more general question for nonlinear wave equations concerns the blow-up surface. For a solution of \eqref{wave} with initial data at $t=0$, which is assumed to blow up in finite time, there exists a $1$-Lipschitz function $x\mapsto \phi(x)>0$ such that the solution is well-defined in a suitable sense in the maximal domain of influence $D=\{(t,x):0\leq t<\phi(x)\}$, see \emph{e.g.} \cite{Alinhac}, Sections~III.2 and~III.3. The surface $\{(\phi(x),x):x\in\R^N\}$ is called the blow-up surface. The question of the regularity of the blow-up surface is addressed in \cite{Alinhac,CaFr1, CaFr2,MerleZ2,MerleZ3}. The question of constructing solutions of the nonlinear wave equation with prescribed blow-up surface (with sufficient regularity and satisfying the space-like condition $\|\nabla \phi\|_{L^\infty}<1$) is also classical, addressed in several articles and books, notably \cite{KiLi1, KiLi2}, \cite{Kiche1,Kiche2,Kibook}, \cite{KiQi} and \cite{Alinhac}. The approach by Fuchsian reduction is especially well described in the book \cite{Kibook}. 
First developed for analytic surfaces and exponential nonlinearity, this method was later extended to surfaces with Sobolev regularity and to some power nonlinearities. However, it is not clear to us whether the strategy described in \cite{Kibook} for constructing solutions with given blow-up surface can be extended to power nonlinearities $|u|^{p-1}u$ for any $p>1$, or to more general nonlinearities. As discussed in \cite{Kibook,Kicontrol,KiQi}, prescribing the blow-up set of a blow-up solution can be seen as a by-product of prescribing its blow-up surface. The solutions constructed in \cite{Kibook,Kicontrol,KiQi} may only exist in a space-time region around the blow-up surface, which does not guarantee that the solution is globally defined in space at any one specific time. However, in the one dimensional case, \cite[Corollary 1.2]{KiQi} actually proves the existence of smooth initial data leading to blowup on an arbitrary compact subset of $\R$, for any power nonlinearity. We would also like to point out a difference between the above mentioned articles and our approach. Here, we resolutely work with finite energy solutions and the initial value problem for~\eqref{wave}. It is often argued that finite speed of propagation and cut-off arguments allow one to reduce to finite energy solutions. For example, the function \eqref{def:h} is used to claim that ODE-type blowup is easy to reach for finite energy solutions. However, the cut-off necessary to localize the initial data could lead to blowup at an earlier time. Our method deals with these issues by directly constructing a finite energy solution with initial data from a finite energy ansatz. Moreover, we hope that our somewhat elementary approach can be of interest for its simplicity and its large range of applicability to other more complicated problems where ODE blowup is relevant. 
\end{rem} \subsection*{Notation} We fix a smooth, even function $\chi:\R\to\R$ satisfying: \begin{equation}\label{def:chi} \hbox{$\chi\equiv 1$ on $[0,1]$, $\chi\equiv 0$ on $[2,\infty)$ and $\chi'\leq 0\leq \chi\leq 1$ on $[0,\infty)$.} \end{equation} For $p>1$ satisfying \eqref{on:p}, recall the well-known inequality, for any $u\in H^1$, \begin{equation}\label{gagl} \|u\|_{L^{p+1}}^{p+1} \lesssim\|u\|_{L^2}^{p+1-\frac{N}2(p-1)}\|\nabla u\|_{L^2}^{\frac{N}2(p-1)}. \end{equation} Let $f(u)=|u|^{p-1} u$ and $F(u)=\int_0^u f(v) dv$. For future reference, we recall Taylor formulas involving the functions $F$ and $f$. Let $\bar p = \min(2,p)$. First, we claim that for any $u>0$ and $v\in \R$, \begin{equation}\label{taylor0} \Big|F(u+v)-F(u)-F'(u)v-\frac 12F''(u)v^2\Big| \lesssim |v|^{p+1}+u^{p-\bar p} |v|^{\bar p+1}. \end{equation} Indeed, in the region $|v|\geq \frac 12 u$, each term on the left-hand side is bounded by $|v|^{p+1}$. In the region $|v|\leq \frac 12 u$, we use Taylor's expansion to write \begin{equation*} \Big|F(u+v)-F(u)-F'(u)v-\frac 12F''(u)v^2\Big|\lesssim u^{p-2} |v|^3. \end{equation*} If $p\geq 2$, then $\bar p=2$ and \eqref{taylor0} is proved. If $1<p<2$, we conclude by noting that in this case $u^{p-2} |v|^3\lesssim |v|^{p+1}$. The same argument shows that \begin{equation}\label{taylor1} | ( f(u+v)- f( u )- f'(u)v) v | \lesssim |v|^{p+1}+u^{p-\bar p} |v|^{\bar p+1}. \end{equation} Next, we claim that for any $u>0$ and $v\in \R$, \begin{equation}\label{taylor} \Big|f(u+v)-f(u)-f'(u)v-\frac 12f''(u)v^2\Big|\lesssim u^{-1}|v|^{p+1}+u^{p-\bar p-1}|v|^{\bar p+1}. \end{equation} Indeed, in the region $|v|\geq \frac 12 u$, each term on the left-hand side is bounded by $ u^{-1} |v|^{p+1} $, and~\eqref{taylor} follows. In the region $|v|\leq \frac 12 u$, we use Taylor's expansion to write \begin{equation*} \Big|f(u+v)-f(u)-f'(u)v-\frac 12f''(u)v^2\Big|\lesssim u^{p-3} |v|^3. \end{equation*} If $p\geq 2$, then $\bar p=2$ and \eqref{taylor} is proved. 
If $1<p<2$, we conclude by noting that in this case $u^{p-3} |v|^3\lesssim u^{-1} |v|^{p+1}$. In this article, we will use multivariate notation and results from \cite{CoSa}. For any $\beta=(\beta_1,\ldots,\beta_N)\in \N^N$, $x=(x_1,\ldots,x_N)\in \R^N$, we set \begin{align*} &|\beta|=\sum_{j=1}^N \beta_j,\quad \beta!=\prod_{j=1}^N (\beta_j!), \quad x^\beta=\prod_{j=1}^N x_j^{\beta_j},\\ & \partial_x^0=\mathop{\textnormal{Id}}\quad \mbox{and}\quad \partial_x^{\beta} = \frac{\partial^{|\beta|}}{\partial_{x_1}^{\beta_1}\ldots\partial_{x_N}^{\beta_N}} \quad\mbox{for $|\beta|>0$.} \end{align*} For $\beta,\beta'\in \N^N$, we write $\beta'\leq \beta$ if $\beta_j'\leq \beta_j$ for all $j=1,\ldots,N$. When $\beta'\leq \beta$, we set \[ \binom{\beta}{\beta'}=\prod_{j=1}^N\binom{\beta_j}{\beta_j'} = \frac{\beta!}{\beta'!(\beta-\beta')!}. \] With this notation, given two functions $a,b:\R^{N}\to \R$, Leibniz's formula reads \begin{equation}\label{lbz0} \partial_x^\beta \left(ab\right)= \sum_{\beta'\leq\beta}\binom{\beta}{\beta'}\left(\partial_x^{\beta'}a\right)\left(\partial_x^{\beta-\beta'}b\right). \end{equation} We write $\beta'\prec\beta$ if one of the following holds: \begin{itemize} \item $|\beta'|<|\beta|$; \item $|\beta'|=|\beta|$ and $\beta_1'<\beta_1$; \item $|\beta'|=|\beta|$, $\beta_1'=\beta_1$,\ldots, $\beta_\ell '=\beta_\ell $ and $\beta_{\ell +1}'<\beta_{\ell +1}$ for some $1\leq \ell <N$. \end{itemize} Finally, we recall the Faa di Bruno formula (see Corollary~2.10 in~\cite{CoSa}). Let $n=|\beta|\geq 1$. Then, for functions $q:\R\to \R$, $a:\R^{N}\to \R$, \begin{equation}\label{fdb0} \partial_x^{\beta} (q\circ a)= \sum_{\nu=1}^n \left(q^{(\nu)}\circ a\right)\sum_{P(\beta,\nu)}(\beta!) 
\prod_{ \ell =1}^n \frac{\left(\partial_x^{\beta_\ell }a\right)^{\nu_\ell }}{(\nu_\ell !)(\beta_\ell !)^{\nu_\ell }} \end{equation} where \begin{align*} P(\beta,\nu) &=\Big\{(\nu_1,\ldots,\nu_n;\beta_1,\ldots,\beta_n): \mbox{there exists $1\leq m\leq n$ such that}\\ &\qquad \nu_\ell =0 \mbox{ and } \beta_\ell =0 \mbox{ for $1\leq \ell \leq n-m$} ;\, \nu_\ell >0 \mbox{ for $n-m+1\leq \ell \leq n$};\\ &\qquad \mbox{and } 0\prec\beta_{n-m+1}\prec\cdots\prec\beta_n \mbox{ are such that } \sum_{\ell =1}^n \nu_\ell =\nu,\ \sum_{\ell =1}^n \nu_\ell \beta_\ell = \beta \Big\}. \end{align*} \section{The blow-up ansatz}\label{sec:2} \subsection{Preliminary}\label{sec:2.1} Recall that $h$ is the explicit solution \eqref{def:h} of the equation $h''=h^p$ which blows up at $0$. The linearization of this equation around the solution $h$ yields the linear equation \[ g''=ph^{p-1} g=\frac {2p(p+1)}{(p-1)^2}t^{-2} g \] which admits the following two independent solutions \[ g_1(t)=t^{-\frac {p+1}{p-1}},\quad g_2(t)=t^{\frac {2p}{p-1}}, \quad \text{for } t>0. \] Since $\frac{p+1}{p-1}>\frac 2{p-1}$, the function $g_1$, related to time invariance, is more singular at $0$ than the function $h$. Note also that for a function $G$ satisfying $\int_0^1 s^{\frac{2p}{p-1}}| G(s)| ds<\infty$, a solution of the following linearized equation with source $G$ \[ g''=\frac {2p(p+1)}{(p-1)^2}t^{-2} g+G, \] is given by \[ g(t)=-\frac{p-1}{3p+1}\left( t^{-\frac {p+1}{p-1}}\int_0^t s^{\frac {2p}{p-1}} G(s) ds +t^\frac {2p}{p-1} \int_t^1 s^{-\frac {p+1}{p-1}} G(s) ds\right). \] \subsection{First blow-up ansatz}\label{s22} Set \begin{equation}\label{def:Jk} J=\left\lfloor\frac {p+1}{p-1}\right\rfloor \quad \mbox{and} \quad k\geq 2J+2 \end{equation} where $x\mapsto \lfloor x \rfloor$ is the floor function which maps $x$ to the greatest integer less than or equal to $x$. (See Remark~\ref{eRemStr1} below for the explanation of the numbers $J$ and $k$.) 
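The facts recalled in Section~\ref{sec:2.1} are straightforward to verify; the sketch below (our Python/SymPy illustration, not part of the original text) checks symbolically that $g_1$ and $g_2$ solve the linearized equation $g''=\frac{2p(p+1)}{(p-1)^2}t^{-2}g$, and that the displayed variation-of-parameters formula does produce a solution with source $G$ in the sample case $p=3$, $G\equiv 1$.

```python
import sympy as sp

t, s, p = sp.symbols('t s p', positive=True)
c = 2 * p * (p + 1) / (p - 1) ** 2   # coefficient of the linearized equation g'' = c t^{-2} g

# the two homogeneous solutions
g1 = t ** (-(p + 1) / (p - 1))
g2 = t ** (2 * p / (p - 1))
assert sp.simplify(sp.diff(g1, t, 2) - c * g1 / t ** 2) == 0
assert sp.simplify(sp.diff(g2, t, 2) - c * g2 / t ** 2) == 0

# variation-of-parameters formula, checked for the sample case p = 3, G = 1
p0, G = 3, sp.Integer(1)
c0 = c.subs(p, p0)
g = -sp.Rational(p0 - 1, 3 * p0 + 1) * (
    t ** sp.Rational(-(p0 + 1), p0 - 1)
    * sp.integrate(s ** sp.Rational(2 * p0, p0 - 1) * G, (s, 0, t))
    + t ** sp.Rational(2 * p0, p0 - 1)
    * sp.integrate(s ** sp.Rational(-(p0 + 1), p0 - 1) * G, (s, t, 1))
)
assert sp.simplify(sp.diff(g, t, 2) - c0 * g / t ** 2 - G) == 0
```

The prefactor $-\frac{p-1}{3p+1}$ is the reciprocal of the Wronskian $g_1g_2'-g_1'g_2=\frac{3p+1}{p-1}$, which is why the residual vanishes identically.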
We consider a function $A:\R^N\to\R$ of class $\cont^{k-1}$ on $\R^N$ and of class $\cont^k$ piecewise on $\R^N$ such that, for any $\beta\in \N^N$, with $|\beta|\leq k-1$, the following hold \begin{equation} \label{on:A} \begin{cases} A\geq 0 \hbox{ and } |\partial_x^\beta A|\lesssim A^{1-\frac {|\beta|}{k}} & \text{on }\R^N ,\\ A(x)=|x|^k & \text{for }x\in \R^N , |x|\ge 2 . \end{cases} \end{equation} \begin{rem}\label{rk:4} Typical examples of such functions are $A(x):=|x|^k$, which vanishes only at $0$, and \[ A(x):= \begin{cases} 0& \hbox{if $|x|\leq 1$}\\ (|x|-\chi( |x| ))^k&\hbox{if $1< |x|\leq 2$}\\ |x|^k&\hbox{if $|x|>2$} \end{cases} \] (where $\chi $ is given by~\eqref{def:chi}), which vanishes exactly on the closed ball of center $0$ and radius $1$. Another example, important for the proof of Theorem \ref{TH:1}, is given in Section~\ref{S:proof}: for any compact set $ \Cset $ of $\R^N$ included in the open ball of center $0$ and radius $1$, there exists a function $A$ satisfying \eqref{on:A} which vanishes exactly on $ \Cset $. \end{rem} For $t>0$ and $x\in \R^N$, set \[ U_0(t,x)=\kappa(t+A(x))^{-\frac{2}{p-1}}= h(W(t,x))\quad\mbox{where}\quad W(t,x)=t+A(x), \] so that $U_0$ satisfies $\partial_{tt} U_0 = f(U_0)$ on $(0,\infty)\times\R^N$. Let \[ \Ens_0=- \partial_{tt} U_0 + \Delta U_0 +f(U_0)= \Delta U_0. \] We gather in the next lemma some estimates for $U_0$ and $\Ens_0$. \begin{lem}\label{le:0} The function $U_0$ satisfies \begin{equation}\label{e:19} \partial_t U_0 = -\left( \frac 2{p+1} U_0^{p+1}\right)^{\frac 12},\quad (\partial_t U_0)^2= \frac 2{p+1} U_0^{p+1}, \quad\partial_{tt} U_0 = U_0^{p}. \end{equation} Moreover, for any $\beta\in \N^N$, $\rho\in \R$, $0<t\leq 1$, $x\in\R^N$, the following hold. 
\begin{enumerate}[label=\emph{(\roman*)}] \item If $0\leq|\beta|\leq k-1$ and $|x|\leq 2$, \begin{equation}\label{e:20} |\partial_x^\beta(U_0^\rho)|\lesssim U_0^{\rho+\frac{|\beta|}{k}\frac{p-1}2},\quad |\partial_t\partial_x^\beta(U_0^\rho)|\lesssim U_0^{\rho+(1+\frac{|\beta|}{k})\frac{p-1}2}. \end{equation} \item If $0\leq|\beta|\leq k-3$ and $|x|\leq 2$, \begin{equation}\label{e:21} |\partial_x^\beta\Ens_0|\lesssim U_0^{1+\frac{2+|\beta|}{k}\frac{p-1}2}. \end{equation} \item If $|x|>2$, \begin{equation}\label{e:22} |\partial_x^\beta U_0|\lesssim |x|^{-\frac{2k}{p-1}-|\beta|}, \quad |\partial_x^\beta\Ens_0|\lesssim |x|^{-\frac{2k}{p-1}-|\beta|-2}. \end{equation} \end{enumerate} Furthermore, for any $x_0\in \R^N$ such that $A(x_0)=0$, for any $r>0$, $0<t\leq 1$, \begin{equation}\label{e:24} t^{-\frac2{p-1}+\frac N{2k}}\lesssim \|U_0(t)\|_{L^2(|x-x_0|<r)}\lesssim t^{-\frac2{p-1}}, \end{equation} \begin{equation}\label{e:25} t^{-\frac2{p-1}-1+\frac N{2k}}\lesssim \|\partial_t U_0(t)\|_{L^2(|x-x_0|<r)}\lesssim t^{-\frac2{p-1}-1}, \end{equation} where the implicit constants in \eqref{e:24} and \eqref{e:25} depend on $r$. \end{lem} \begin{proof} The identities in~\eqref{e:19} follow from the definition of $U_0$ and direct calculations. Proof of \eqref{e:20}-\eqref{e:21}. For $0<t\leq 1$ and $|x|\leq 2$, one has $0<t+A(x)\lesssim 1$ and thus $U_0\gtrsim 1$. From $U_0=h\circ W$, setting $n=|\beta|$ and using \eqref{fdb0}, one has \[ \partial_x^\beta U_0= \sum_{\nu=1}^n \left(h^{(\nu)}\circ W\right)\sum_{P(\beta,\nu)}(\beta!) \prod_{ \ell =1}^n \frac{\left(\partial_x^{\beta_\ell }W\right)^{\nu_\ell }}{(\nu_\ell !)(\beta_\ell !)^{\nu_\ell }}. \] For $\nu\geq 1$, we have $|h^{(\nu)}\circ W|\lesssim W^{-\frac 2{p-1}-\nu}$. Moreover, using the assumption \eqref{on:A}, we have, for $1\leq |\beta_\ell |\leq k-1$, \[ |\partial_x^{\beta_\ell }W|\lesssim |\partial_x^{\beta_\ell } A|\lesssim A^{1-\frac{|\beta_\ell |}{k}}. 
\] Since $\sum_{\ell =1}^n \nu_\ell =\nu$, $\sum_{\ell =1}^n \nu_\ell |\beta_\ell |=|\beta|$ and $|\beta|\leq k-1$, we obtain \begin{align*} |\partial_x^\beta U_0|&\lesssim \sum_{\nu=1}^n W^{-\frac 2{p-1}-\nu} \sum_{P(\beta,\nu)} \prod_{ \ell =1}^n \left( A^{1-\frac{|\beta_\ell |}{k}}\right)^{\nu_\ell }\\ & \lesssim \sum_{\nu=1}^n W^{-\frac 2{p-1}-\nu} A^{\nu-\frac{|\beta|}{k}} \lesssim W^{-\frac2{p-1}-\frac{|\beta|}k}\lesssim U_0^{1+\frac{|\beta|}{k}\frac{p-1}2}, \end{align*} which proves the first estimate of \eqref{e:20} for $\rho=1$. For $\rho\in\R$, using \eqref{fdb0}, we also have, for $1\leq n=|\beta|\leq k-1$, \[ \partial_x^\beta(U_0^\rho) =\sum_{\nu=1}^n \left[\rho\cdots(\rho-\nu+1)\right]U_0^{\rho-\nu}\sum_{P(\beta,\nu)}(\beta!)\prod_{\ell =1}^n \frac{(\partial_x^{\beta_\ell } U_0)^{\nu_\ell }}{(\nu_\ell !)(\beta_\ell !)^{\nu_\ell }}. \] Using the above estimate on $|\partial_x^\beta U_0|$ and $\sum_{\ell =1}^n \nu_\ell =\nu$, $\sum_{\ell =1}^n \nu_\ell |\beta_\ell |=|\beta|$, we obtain \begin{equation*} |\partial_x^\beta (U_0^\rho)| \lesssim \sum_{\nu=1}^n U_0^{\rho-\nu} \sum_{P(\beta,\nu)}\prod_{\ell =1}^n U_0^{\nu_\ell \left[1+\frac{|\beta_\ell |}{k} \frac{p-1}2\right]} \lesssim \sum_{\nu=1}^n U_0^{\rho-\nu}U_0^{\nu+\frac{|\beta|}{k}\frac{p-1}2} \lesssim U_0^{\rho+\frac{|\beta|}{k}\frac{p-1}2}. \end{equation*} Next, using the first identity in~\eqref{e:19}, we see that $\partial _t U_0^\rho = -\rho ( \frac {2} {p+1} )^{\frac {1} {2}} U_0^{\rho + \frac {p-1} {2}}$; and so the second estimate in~\eqref{e:20} follows from the first. Since $\Ens_0= \Delta U_0$, \eqref{e:21} is an immediate consequence of the first estimate in~\eqref{e:20}. Estimate~\eqref{e:22} is a direct consequence of the definitions of $U_0$ and $\Ens_0$ and of the fact that $A(x)=|x|^k$ for $|x|>2$. Proof of \eqref{e:24}-\eqref{e:25}. 
For any $x_0\in \R^N$ and $r>0$, the upper bounds in \eqref{e:24} and \eqref{e:25} are direct consequences of the estimates $0\leq U_0\lesssim t^{-\frac 2{p-1}}$ and $|\partial_t U_0|\lesssim t^{-\frac 2{p-1}-1}$. Let $x_0\in \R^N$ be such that $A(x_0)=0$ and $r>0$. By \eqref{on:A} and the fact that the function $A$ is of class $\cont^k$ piecewise, the Taylor formula implies that for any $x$ such that $|x-x_0|<r$, $|A(x)|\leq C(r) |x-x_0|^k$. It follows that for such $x$, and for any $t\in (0,1]$, $U_0^2(t,x)=\kappa^2 (t+A(x))^{-\frac{4}{p-1}}\gtrsim (t+|x-x_0|^k)^{-\frac{4}{p-1}}$. The lower estimate in~\eqref{e:24} then follows from \begin{equation} \label{label4} \begin{split} \int_{|y |<r} (t+| y |^k)^{-\frac{4}{p-1}} dy&= t^{-\frac4{p-1}+\frac Nk}\int_{|z|<t^{-\frac1k}r} (1+|z|^k)^{-\frac{4}{p-1}} dz\\ &\gtrsim t^{-\frac4{p-1}+\frac Nk}\int_{|z|< r} (1+|z|^k)^{-\frac{4}{p-1}} dz \gtrsim t^{-\frac4{p-1}+\frac Nk}. \end{split} \end{equation} Estimate \eqref{e:25} is proved similarly. \end{proof} \subsection{Refined blow-up ansatz}\label{s23} Starting from $U_0$, we define by induction a refined ansatz for the nonlinear wave equation. Let $t_0=1$ and, for any $j\in \{1,\ldots,J\}$, let $0<a_j\leq 1$ and $0<t_j\leq 1$ be parameters to be chosen later. Let \begin{align*} w_j& = - \kappa^{\frac{p-1}2} \frac{p-1}{3p+1} \left(U_0^{\frac {p+1}2} \int_0^t U_0^{-p} \Ens_{j-1} ds + U_0^{-p} \int_{t}^{t_{j-1}} U_0^{\frac {p+1}2} \Ens_{j-1} ds\right), \\ U_j& =U_0+ \sum_{\ell =1}^j \chi_\ell w_\ell ,\quad \Ens_j=-\partial_{tt}U_j+\Delta U_j+f(U_j), \end{align*} where $\chi_j(x)=\chi( A(x)/{a_j})$ and $\chi $ satisfies~\eqref{def:chi}. \begin{lem}\label{le:1} There exist $0<a_J\leq \cdots\leq a_1\leq 1$ and $0<t_J\leq \cdots\leq t_1\leq 1$ such that for any $0\leq j\leq J$, for any $\beta\in \N^N$, $0<t\leq t_j$ and $x\in \R^N$, the following hold. 
\begin{enumerate}[label=\emph{(\roman*)}] \item If $1\leq j\leq J$, $0\leq |\beta|\leq k-1-2j$, $|x|\leq 2$, then \begin{equation}\label{e:29} |\partial_x^\beta w_j|\lesssim U_0^{1-j(p-1)+\frac{2j +|\beta|}k \frac{p-1}2}, \end{equation} \begin{equation}\label{e:30} |\partial_t\partial_x^\beta w_j|\lesssim U_0^{\frac{p+1}2-j(p-1)+\frac{2j +|\beta|}k \frac{p-1}2}. \end{equation} \item If $1\leq j\leq J$, then \begin{equation}\label{e:31} |U_j-U_0| \leq \frac 14(1-2^{-j})U_0,\quad |U_j-U_0|\leq (1-2^{-j})(1+U_0)^{-\frac{p-1}2} U_0, \end{equation} \begin{equation}\label{e:31bis} |\partial_tU_j-\partial_t U_0|\lesssim U_0. \end{equation} \item If $0\leq |\beta|\leq k-3-2j$, $|x|\leq 2$, then \begin{equation}\label{e:32} |\partial_x^\beta \Ens_j|\lesssim U_0^{1-j(p-1)+\frac{2j+2 +|\beta|}k \frac{p-1}2}. \end{equation} \item If $|x|\geq 2$, then \begin{equation}\label{e:33} |\partial_x^\beta U_j |\lesssim |x|^{-\frac {2k}{p-1}-|\beta|}, \quad |\partial_x^\beta \Ens_j |\lesssim |x|^{-\frac {2k}{p-1}-2-|\beta|}. \end{equation} \end{enumerate} \end{lem} \begin{rem} \label{eRemStr1} We comment on the mechanism of the refined ansatz. For the energy control which we establish in the next section, we need the error term to satisfy $ \|\mathcal E_J\| _{ L^2 } \lesssim t^{ (\frac {2} {p-1})^+ }$. (See formulas~\eqref{dtener} and~\eqref{fForarap}.) By formula~\eqref{e:32}, this is achieved if $J > \frac {2} {p-1}$, which is the first condition in~\eqref{def:Jk}, and then by taking $k$ sufficiently large (once $J$ is chosen), which is the second condition in~\eqref{def:Jk}. Note that for $p>3$, $J=1$ is enough, but one can never choose $J=0$, so a refined ansatz is always needed. We see from formula~\eqref{e:32} that at each step, the error estimate improves by a factor $U_0^{ - (p-1) (1 - \frac {1} {k}) } \sim t^{2 ( 1 - \frac {1} {k})}$. It is then clear that the number of steps goes to $\infty $ as $p\to 1$. 
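The arithmetic behind the choice of $J$ in \eqref{def:Jk} can be confirmed numerically; the minimal Python sketch below (our addition, not part of the original remark) checks that $J=\lfloor\frac{p+1}{p-1}\rfloor$ always satisfies $J>\frac{2}{p-1}$, and that $J=1$ for $p>3$.

```python
import math

# J = floor((p+1)/(p-1)) satisfies J > 2/(p-1), since (p+1)/(p-1) = 1 + 2/(p-1)
for i in range(1, 400):
    p = 1 + i / 100.0              # sample powers p in (1, 5)
    J = math.floor((p + 1) / (p - 1))
    assert J > 2 / (p - 1)
    if p > 3:
        assert J == 1              # a single refinement step suffices for p > 3
```

Indeed, $\frac{p+1}{p-1}<2$ exactly when $p>3$, while $J\to\infty$ as $p\to1$, in line with the remark above.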
\end{rem} \begin{proof}[Proof of Lemma~$\ref{le:1}$] Observe that \eqref{e:32} for $j=0$ is exactly \eqref{e:21} in Lemma~\ref{le:0}. Now, we proceed by induction on $j$: for any $1\leq j\leq J$, we prove that estimate \eqref{e:32} for $\Ens_{j-1}$ implies estimates \eqref{e:29}--\eqref{e:32} for $w_j$, $U_j$ and $\Ens_j$, for an appropriate choice of $a_j$ and~$t_{j}$. Proof of \eqref{e:29}-\eqref{e:30}. First, assuming \eqref{e:32} for $\Ens_{j-1}$, we show the following estimates related to the two components of $w_j$: for $|\beta|\leq k-1-2j$, $0<t\leq t_{j-1}$ and $|x|\leq 2$, \begin{align} \left|\partial_x^\beta\left(\int_0^t U_0^{-p} \mathcal E_{j-1} ds\right)\right| &\lesssim U_0^{-\frac{p-1}2-j (p-1)+\frac{2j+|\beta|}{k}\frac{p-1}2},\label{e:35}\\ \left|\partial_x^\beta\left(\int_{t}^{t_{j-1}} U_0^{\frac{p+1}2} \mathcal E_{j-1} ds\right)\right| &\lesssim U_0^{p+1-j (p-1)+\frac{2j+|\beta|}{k}\frac{p-1}2}.\label{e:36} \end{align} Indeed, we have by Leibniz's formula \eqref{lbz0} \begin{equation*} \partial_x^\beta \left(U_0^{-p}\Ens_{j-1}\right)= \sum_{\beta'\leq\beta}\binom{\beta}{\beta'}\left(\partial_x^{\beta'}(U_0^{-p})\right) \left(\partial_x^{\beta-\beta'}\Ens_{j-1}\right), \end{equation*} and thus, using \eqref{e:20} and \eqref{e:32}, \begin{align*} |\partial_x^\beta \left(U_0^{-p}\Ens_{j-1}\right)| &\lesssim \sum_{\beta'\leq\beta}\binom{\beta}{\beta'} U_0^{-p+\frac{|\beta'|}k\frac{p-1}2} U_0^{1-(j-1)(p-1)+\frac{2j +|\beta|-|\beta'|}k \frac{p-1}2}\\ &\lesssim U_0^{-j(p-1)+\frac{2j +|\beta|}k \frac{p-1}2} \lesssim (t+A)^\gamma, \end{align*} where for $j\geq 1$, $|\beta|\leq k$, \[ \gamma:=2j-\frac{2j+|\beta|}{k}=2j\left(1-\frac1k\right)-\frac{|\beta|}{k}\geq 0. \] Integrating on $(0,t)$ for $t\in (0,t_{j-1}]$, we obtain \[ \left|\partial_x^\beta\left(\int_0^t U_0^{-p} \mathcal E_{j-1} ds\right)\right| \lesssim (t+A)^{\gamma+1}\leq U_0^{-\frac{p-1}2-j\left(1-\frac1k\right)(p-1)+\frac{|\beta|}{k}\frac{p-1}2}, \] which is \eqref{e:35}. 
Similarly, using Leibniz's formula, we check the following estimate \[ |\partial_x^\beta ( U_0^{\frac{p+1}2}\Ens_{j-1})| \lesssim U_0^{\frac{3p+1}2-j(p-1)+\frac{2j +|\beta|}k \frac{p-1}2} \lesssim (t+A)^{-\gamma'}, \] where, using $0<j\leq J\leq \frac{p+1}{p-1}$, \[ \gamma':=\frac{3p+1}{p-1}-2j+\frac{2j+|\beta|}{k}> 1. \] Thus, by time integration, for $t\in (0,t_{j-1}]$, \[ \left| \int_t^{t_{j-1}} \partial_x^\beta ( U_0^{\frac{p+1}2}\Ens_{j-1}) ds\right| \lesssim (t+A)^{-\gamma'+1} \lesssim U_0^{p+1-j (p-1)+\frac{2j+|\beta|}{k}\frac{p-1}2}, \] which is \eqref{e:36}. Using Leibniz's formula, \eqref{e:20}, and~\eqref{e:35}-\eqref{e:36}, we easily deduce that, for any $\beta\in \N^N$, $|\beta|\leq k -1 -2j$, \begin{align*} \left|\partial_x^\beta\left( U_0^{\frac{p+1}2}\int_0^t U_0^{-p} \mathcal E_{j-1} ds\right)\right| &\lesssim U_0^{1-j (p-1)+\frac{2j+|\beta|}{k}\frac{p-1}2},\\ \left|\partial_x^\beta\left(U_0^{-p}\int_{t}^{t_{j-1}} U_0^{\frac{p+1}2} \mathcal E_{j-1} ds\right)\right| &\lesssim U_0^{1-j (p-1)+\frac{2j+|\beta|}{k}\frac{p-1}2}. \end{align*} Estimate \eqref{e:29} follows. Moreover, by the definition of $w_j$ and setting $b= \kappa^{\frac{p-1}2} \frac{p-1}{3p+1}$, \begin{equation}\label{e:dtwj} \partial_tw_j = - b \left(\partial_t(U_0^{\frac {p+1}2}) \int_0^t U_0^{-p} \Ens_{j-1} ds + \partial_t(U_0^{-p}) \int_{t}^{t_{j-1}} U_0^{\frac {p+1}2} \Ens_{j-1} ds \right) . \end{equation} As above, Leibniz's formula, \eqref{e:20}, and~\eqref{e:35}-\eqref{e:36} yield~\eqref{e:30}. Note that we have proved estimates~\eqref{e:29} and~\eqref{e:30} for all $0<t\le t _{ j-1 }$. Proof of \eqref{e:31}-\eqref{e:31bis}. 
For $0<t\leq t _{ j-1 } $ and $|x|\leq 2$, by the estimate \eqref{e:29} on $w_j$ for $\beta=0$, the property $U_0\gtrsim 1$ for $ |x| \le 2$, and the definition of $\chi_j$, we have \begin{equation*} \chi_j |w_j|\lesssim \chi_j U_0^{1-j(1-\frac 1k)(p-1)} \lesssim \chi_j U_0^{1-(1-\frac 1k)(p-1)} \lesssim \chi_j (t+A)^{2-\frac 2k} U_0 \lesssim (t +a_j)U_0 . \end{equation*} Choosing $0<a_j\leq 1$ and $0<t_j\leq t_{j-1}$ sufficiently small, for all $t\in (0,t_{j}]$, \[ \chi_j |w_j| \leq 2^{-j-2} U_0 \quad \hbox{and} \quad \chi_j |w_j| \leq 2^{-j} (1+U_0)^{-\frac {p-1}2} U_0 . \] From now on, $a_j$ and $t_j$ are fixed at such values. In the case $j=1$, this proves \eqref{e:31} for $|x|\leq 2$. For $2\leq j\leq J$, combining this estimate with \eqref{e:31} for $j-1$, we find, for all $t\in (0,t_j]$ and $|x|\leq 2$, \begin{equation}\label{point} \sum_{\ell =1}^{j} \chi_\ell |w_\ell |\leq \frac 14 (1-2^{-j}) U_0 \quad \hbox{and} \quad \sum_{\ell =1}^{j} \chi_\ell |w_\ell |\leq (1-2^{-j})(1+U_0)^{-\frac {p-1}2} U_0, \end{equation} which implies \eqref{e:31} for $U_j$ and for $|x|\leq 2$. To prove \eqref{e:31bis} for $|x|\leq 2$, we note that by \eqref{e:30} with $\beta =0$ and $U_0\gtrsim 1$, \[ \sum_{\ell =1}^{j} \chi_\ell |\partial_t w_\ell |\lesssim \sum_{\ell =1}^j\chi_\ell U_0^{\frac{p+1}2-\ell (1-\frac 1k)(p-1)} =U_0 \sum_{\ell =1}^j\chi_\ell U_0^{\frac{p-1}2 ( 1- 2\ell (1-\frac 1k) )} \lesssim U_0. \] For $|x|\geq 2$, \eqref{on:A} implies that $A(x)\geq 2^k\geq 2a_1\geq\cdots\geq 2a_j$ and thus $\chi_j(x)=0$ and $U_j(t,x)=U_0(t,x)$. The same applies to $\partial_t U_j$. Proof of \eqref{e:32}. Differentiating \eqref{e:dtwj} with respect to $t$, using the relations \eqref{e:19}, $\partial_{tt} (U_0^{\frac{p+1}2})=f'(U_0) U_0^{\frac{p+1}2}$ and $\partial_{tt} (U_0^{-p})=f'(U_0) U_0^{-p}$ (these calculations are related to observations made in Section \ref{sec:2.1}), we check that $w_j$ satisfies \[ \partial_{tt} w_j=f'(U_0) w_j+\Ens_{j-1}. 
\] Using also $U_j=U_{j-1}+\chi_j w_j$ and the definition of $\Ens_{j-1}$, we obtain \begin{align*} \Ens_j &=\Ens_{j-1} -\chi_j \partial_{tt}w_j +\Delta (\chi_j w_j)+ f(U_j)-f(U_{j-1})\\ &= (1-\chi_j) \Ens_{j-1}+\Delta (\chi_j w_j) +f(U_j)-f(U_{j-1})-f'(U_0)\chi_j w_j. \end{align*} We estimate $\partial_x^\beta$ of each term on the right-hand side above for $|\beta|\leq k-3-2j$ and $|x|\leq 2$. For the first term, recall that for $x$ such that $A(x)\leq a_j$, it holds $1-\chi_j(x)=0$ and for any $\beta$, $\partial_x^{\beta} \chi_j(x)=0$. Moreover, for $0<t\leq 1$, for $|x|\leq 2$ such that $A(x)\geq a_j$, it holds $A(x)\approx 1$ and so $U_0(t,x)\approx 1$. Thus, using the Leibniz formula and \eqref{e:32} for $\Ens_{j-1}$, we find \[ |\partial_x^\beta[(1-\chi_j) \Ens_{j-1}]| \lesssim U_0^{1-(j-1)(p-1)+\frac{2j +|\beta|}k \frac{p-1}2}\lesssim U_0^{1-j(p-1)+\frac{2j+2 +|\beta|}k \frac{p-1}2}. \] Next, by Leibniz's formula, the properties of $\chi$ and $\chi_j$, the estimate \eqref{e:29} on $w_j$ and then $U_0\gtrsim 1$, we have, for $0<t<t_j$ and $|x|\leq 2$, \[ |\partial_x^\beta\Delta (\chi_j w_j)|\lesssim \sum_{|\beta'|\leq |\beta|+2} |\partial_x^{\beta'} w_j| \lesssim U_0^{1-j(p-1)+\frac{2j + 2+|\beta|}k \frac{p-1}2}. \] Last, we estimate $\partial_x^\beta[f(U_j)-f(U_{j-1})-f'(U_0)\chi_j w_j]$. We begin with the case $\beta=0$. Recall that by \eqref{e:31}, we have $0<\frac 34 U_0\leq U_j\leq \frac 54 U_0$, so that by elementary calculations \[ \left|f(U_j)-f(U_{j-1})-f'(U_{j-1})\chi_j w_j\right|\lesssim \chi_j U_0^{p-2}w_j^2 \] and \[ \left|f'(U_{j-1})-f'(U_0)\right|\lesssim U_0^{p-2} \sum_{\ell =1}^{j-1} \chi_\ell |w_\ell |. \] These estimates imply \[ \left|f(U_j)-f(U_{j-1})-f'(U_0)\chi_j w_j\right|\lesssim \chi_j U_0^{p-2} |w_j|\sum_{\ell =1}^{j}\chi_\ell |w_\ell |.
\] For $1\leq \ell \leq j$, using \eqref{e:29} and $U_0\gtrsim 1$, we have \begin{align*} U_0^{p-2}|w_j||w_\ell |&\lesssim U_0^{p-2} U_0^{1-j(1-\frac 1k)(p-1)} U_0^{1-\ell (1-\frac 1k)(p-1)}\lesssim U_0^{p-(j+\ell )(1-\frac 1k)(p-1)}\\ & \lesssim U_0^{p-(j+1)(1-\frac 1k)(p-1)}\lesssim U_0^{1-j(1-\frac 1k)(p-1)+\frac 1k(p-1)}. \end{align*} This proves $|f(U_j)-f(U_{j-1})-f'(U_0)\chi_j w_j |\lesssim U_0^{1-j(1-\frac 1k)(p-1)+\frac 1k(p-1)}$. Now, we deal with the case $1\leq |\beta|\leq k-3-2j$. By the Taylor formula with integral remainder, we have, for any $U$ and $w$, \[ f(U+w)-f(U)-f'(U)w=w^2\int_0^1 (1-\theta)f''(U+\theta w) d\theta. \] Thus, by Leibniz's formula \eqref{lbz0}, \begin{multline*} \partial_x^\beta\left[ f(U+w)-f(U)-f'(U)w\right] \\=\sum_{\beta'\leq \beta} \binom{\beta}{\beta'} \left( \partial_x^{\beta-\beta'}(w^2)\right) \int_0^1(1-\theta)\partial_x^{\beta'}[f''(U+\theta w)] d\theta. \end{multline*} Moreover, by the Fa\`a di Bruno formula \eqref{fdb0}, for $\beta'\neq 0$, denoting $n'=|\beta'|$, \[ \partial_x^{\beta'} [f''(U+\theta w)] =\sum_{\nu=1}^{n'}f^{(\nu+2)}(U+\theta w)\sum_{P(\beta',\nu)}(\beta'!) \prod_{\ell =1}^{n'} \frac{(\partial_x^{\beta_\ell }(U+\theta w))^{\nu_\ell }}{(\nu_\ell !)(\beta_\ell !)^{\nu_\ell }}. \] To estimate the term $\partial_x^\beta[f(U_j)-f(U_{j-1})-f'(U_0)\chi_j w_j]$, we apply these formulas to $U=U_{j-1}$ and $w=\chi_j w_j$. For $\beta'\leq \beta$, using \eqref{e:29} and the properties of $\chi$, we have \begin{align*} \left|\partial_x^{\beta-\beta'}\left[(\chi_jw_j)^2\right]\right| &\lesssim \sum_{\beta''\leq \beta-\beta'} \left|\partial_x^{\beta''} (\chi_jw_j)\right| \left|\partial_x^{\beta-\beta'-\beta''}(\chi_jw_j)\right|\\ &\lesssim U_0^{2-2j(p-1)+\frac{4j+|\beta|-|\beta'|}{k}\frac{p-1}2}.
\end{align*} For $\beta'=0$ and $\theta\in [0,1]$, using also \eqref{point}, we obtain \begin{align*} \left|\left( \partial_x^{\beta}[(\chi_jw_j)^2]\right) f''(U_{j-1}+\theta \chi_jw_j)\right| &\lesssim U_0^{2-2j(p-1)+\frac{4j+|\beta|}{k}\frac{p-1}2}U_0^{p-2}\\ &\lesssim U_0^{1-j(p-1)+\frac{2j+2+|\beta|}{k}\frac{p-1}2}. \end{align*} For $\beta'\neq 0$, $\beta'\leq \beta$ and $\theta\in [0,1]$, using \eqref{e:20}, \eqref{e:29} and \eqref{point}, we have (recall that the definition of $P(\beta',\nu)$ implies that $\sum_{\ell =1}^{n'}\nu_\ell =\nu$ and $\sum_{\ell =1}^{n'}\nu_\ell |\beta_\ell |=|\beta'|$) \begin{align*} |\partial_x^{\beta'} [f''(U_{j-1}+\theta \chi_j w_j)]| &\lesssim \sum_{\nu=1}^{n'} U_0^{p-\nu-2} \sum_{P(\beta',\nu)} \prod_{\ell =1}^{n'} \left(U_0^{1+\frac{|\beta_\ell |}{k}\frac{p-1}2}\right)^{\nu_\ell }\\ &\lesssim \sum_{\nu=1}^{n'} U_0^{p-\nu-2}U_0^{\nu+ \frac{|\beta'|}{k} \frac{p-1}2} \lesssim U_0^{p-2+\frac{|\beta'|}{k}\frac{p-1}2}. \end{align*} Thus, similarly as before, it holds \begin{equation*} \left|\partial_x^{\beta-\beta'}\left[(\chi_j w_j)^2\right] \partial_x^{\beta'}\left[f''(U_{j-1}+\theta \chi_j w_j)\right]\right| \lesssim U_0^{1-j(p-1)+\frac{2j+2+|\beta|}{k}\frac{p-1}2}. \end{equation*} Integrating these estimates in $\theta\in [0,1]$, we obtain \begin{equation}\label{UN} \left|\partial_x^{\beta} [f(U_j)-f(U_{j-1})-f'(U_{j-1})\chi_j w_j]\right| \lesssim U_0^{1-j(p-1)+\frac{2j+2+|\beta|}{k}\frac{p-1}2}. \end{equation} By similar arguments, for any $U,W,w$, we have \begin{equation*} f'(U)-f'(W)=(U-W) \int_0^1 f''(W+\theta(U-W)) d\theta, \end{equation*} and thus \begin{multline*} \partial_x^{\beta}[w(f'(U)-f'(W))] \\=\sum_{\beta'\leq \beta} \binom{\beta}{\beta'} \left(\partial_x^{\beta-\beta'}[w(U-W)]\right) \int_0^1 \partial_x^{\beta'} [f''(W+\theta(U-W)) ] d\theta.
\end{multline*} Moreover, for $\beta'\neq 0$, \begin{multline*} \partial_x^{\beta'} [f''(W+\theta(U-W)) ]\\ = \sum_{\nu=1}^{n'} f^{(\nu+2)}(W+\theta(U-W))\sum_{P(\beta',\nu)}(\beta'!) \prod_{\ell =1}^{n'} \frac{\left(\partial_x^{\beta_\ell }(W+\theta(U-W))\right)^{\nu_\ell }}{(\nu_\ell !)(\beta_\ell !)^{\nu_\ell }}. \end{multline*} To estimate the term $\partial^\beta [\chi_j w_j(f'(U_{j-1})-f'(U_0))]$, we apply these formulas to $U=U_{j-1}$, $W=U_0$ and $w=\chi_j w_j$. For $\beta'\leq \beta$, using \eqref{e:29} and the properties of $\chi$, we have, for $1\leq \ell \leq j-1$, \begin{equation*} \left| \partial_x^{\beta-\beta'}[\chi_j w_j \chi_\ell w_\ell ]\right| \lesssim U_0^{2-(j+\ell )(p-1)+\frac{2j+2\ell +|\beta|-|\beta'|}{k}\frac{p-1}2}. \end{equation*} For $\beta'=0$ and $\theta\in [0,1]$, from \eqref{point}, we obtain \begin{equation*} \left|\partial_x^{\beta}\left[\chi_j w_j(U_{j-1}-U_0)\right] f''(U_0+\theta (U_{j-1}-U_0))\right| \lesssim U_0^{1-j(p-1)+\frac{2j+2+|\beta|}{k}\frac{p-1}2}. \end{equation*} For $\beta'\neq 0$, $\beta'\leq \beta$ and $\theta\in [0,1]$, by the formula above, using \eqref{e:20}, \eqref{e:29} and \eqref{point}, we have as before \begin{align*} |\partial_x^{\beta'}[f''(U_0+\theta (U_{j-1}-U_0))]| &\lesssim \sum_{\nu=1}^{n'} U_0^{p-\nu-2} \sum_{P(\beta',\nu)} \prod_{\ell =1}^{n'} \left(U_0^{1+ \frac{|\beta_\ell |}{k}\frac{p-1}2}\right)^{\nu_\ell }\\ &\lesssim \sum_{\nu=1}^{n'} U_0^{p-\nu-2} U_0^{\nu+\frac{|\beta'|}{k}\frac{p-1}2} \lesssim U_0^{p-2+ \frac{|\beta'|}{k}\frac{p-1}2}. \end{align*} Thus, we obtain \begin{equation*} \left|\partial_x^{\beta-\beta'}\left[\chi_j w_j(U_{j-1}-U_0)\right] \partial_x^{\beta'}\left[f''(U_0+\theta (U_{j-1}-U_0))\right]\right| \lesssim U_0^{1-j(p-1)+\frac{2j+2+|\beta|}{k}\frac{p-1}2}. 
\end{equation*} Integrating in $\theta\in [0,1]$ and summing in $\beta'\leq \beta$, we obtain \begin{equation}\label{DEUX} \left| \partial^\beta [\chi_j w_j(f'(U_{j-1})-f'(U_0))]\right| \lesssim U_0^{1-j(p-1)+\frac{2j+2+|\beta|}{k}\frac{p-1}2}. \end{equation} Combining \eqref{UN} and \eqref{DEUX}, we have proved for $t\in (0,t_{j}]$, $|x|\leq 2$, \begin{equation*} \left|\partial^\beta [f(U_j)-f(U_{j-1})-f'(U_0)\chi_j w_j]\right| \lesssim U_0^{1-j(p-1)+\frac{2j+2+|\beta|}{k}\frac{p-1}2}. \end{equation*} In conclusion, we have estimated all terms in the expression of $\partial_x^\beta\Ens_j$ and \eqref{e:32} is now proved. Finally, for $|x|\geq 2$, \eqref{on:A} implies that $A(x)\geq 2^k\geq 2a_1\geq\cdots\geq 2a_j$ and thus $\chi_j(x)=0$, $U_j(t,x)=U_0(t,x)$ and $\Ens_{j}(t,x)= \Ens_0(t,x)$, so that~\eqref{e:33} follows from~\eqref{e:22}. \end{proof} \section{Uniform bounds on approximate solutions}\label{sec:3} Let the function $\chi $ be given by~\eqref{def:chi} and $U_J$ be defined as in \S\ref{s23} with $J$ and $k$ as in \eqref{def:Jk}. Set \begin{equation}\label{def:la} \lambda= \min \left(J-\frac{2}{p-1},\frac 12\right) \in \left(0,\frac 12\right], \end{equation} and impose the following additional condition on $k$ \begin{equation}\label{on:k} k\geq \frac{2(p+1)}{\lambda(p-1)} +2. \end{equation} For any $n$ large, let $T_n=\frac 1n<t_J$ and \begin{equation}\label{def:Bn} B_n=\sup_{t\in [T_n,t_J]} \|U_J(t)\|_{L^\infty}\quad \mbox{so that}\quad \lim_{n\to \infty} B_n=\infty. \end{equation} We let $n$ be sufficiently large so that $B_n>1$, and we define the function $f_n:\R\to [0,\infty)$ by \begin{equation}\label{def:fn} f_n(u )= f(u) \chi\left(\frac{u}{B_n}\right) \quad \mbox{so that}\quad f_n(u)=\begin{cases} f(u) & \mbox{for $|u|<B_n$}\\ 0 & \mbox{for $u>2B_n$} \end{cases}. \end{equation} Let $F_n(v)=\int_0^v f_n(w) dw$. 
It follows from elementary calculations that for every $\alpha\in \N$, there exists a constant $C_\alpha>0$ independent of $n$, such that for all $u>0$, \begin{equation} |f_n^{(\alpha)}(u)|\leq C_\alpha u^{p-\alpha}. \end{equation} In particular, we observe that Taylor's estimates such as \eqref{taylor0}--\eqref{taylor} still hold for $F_n$ and $f_n$ with constants independent of $n$. We will refer to these inequalities for $F_n$ and $f_n$ with the same numbers~\eqref{taylor0}, \eqref{taylor1} and~\eqref{taylor}. In this proof, any implicit constant related to the symbol $\lesssim$ is independent of $n$. We define the sequence of solutions $u_n$ of \begin{equation}\label{eq:un} \left\{\begin{aligned}&\partial_{tt} u_n-\Delta u_n=f_n(u_n)\\ &u_n(T_n)=U_J(T_n),\quad \partial_t u_n(T_n)=\partial_t U_J(T_n). \end{aligned}\right.\end{equation} The nonlinearity $f_n$ being globally Lipschitz, the existence of a global solution $(u_n,\partial_t u_n)$ in the energy space is a consequence of standard arguments from semi-group theory. Using energy estimates, we prove uniform bounds on $u_n$ in the energy space. For this we set, for all $t\in [T_n,t_J]$, \begin{equation}\label{uneps} u_n(t)=U_J(t)+\varepsilon_n(t), \end{equation} so that $(\varepsilon_n,\partial_t\varepsilon_n)\in \cont([T_n,t_J],H^1(\R^N)\times L^2(\R^N))$. \begin{prop}\label{pr:2} There exist $C>0$, $n_0>0$ and $0<\delta_0<1$ such that \begin{equation}\label{unif} \|(\varepsilon_n(t),\partial_t \varepsilon_n(t))\|_{H^1\times L^2}\leq C (t-T_n)^{\frac\lambda2} \end{equation} for all $n\geq n_0$ and $t\in [T_n, T_n+\delta_0]$, where $\lambda $ is given by~\eqref{def:la}.
\end{prop} \begin{proof} The equation of $\varepsilon_n$ on $[T_n,t_J]\times \R^N$ is \begin{equation}\label{eq:wn} \left\{\begin{aligned}&\partial_{tt} \varepsilon_n-\Delta \varepsilon_n=f_n(U_J+\varepsilon_n)-f_n(U_J) + \Ens_J\\ &\varepsilon_n(T_n)=0,\quad \partial_t \varepsilon_n(T_n)=0 \end{aligned}\right.\end{equation} where we have used from \eqref{def:Bn} and \eqref{def:fn} that $f(U_J)=f_n(U_J)$ on $[T_n,t_J]\times \R^N$. Define the auxiliary function $z$ as follows \[ \varepsilon_n= Q^{\frac 12} z \quad \hbox{where} \quad Q=(1-\chi+U_0)^{p+1} , \] where, by abuse of notation, we denote $\chi (x) = \chi ( |x| )$. We note that $Q \gtrsim 1$, $Q \lesssim t^{- \frac {2 (p+1) } {p-1}}$. Moreover, it follows from~\eqref{e:20} that $ |\nabla U_0 | \lesssim Q^{ \frac {1} {p+1} + \frac {p-1} {2k (p+1)} }$, from which we deduce easily that $ |\nabla Q |\lesssim t^{-\frac {1} {k}} Q$. One proves similarly that $ |\Delta Q |\lesssim t^{-\frac {2} {k}} Q$. To write the equation of $z$, we compute \begin{align*} \partial_{tt} \varepsilon_n & =\partial_{tt}(Q^{\frac 12}z)=(\partial_{tt} Q^{\frac 12}) z + (p+1) (1-\chi+U_0)^{\frac {p-1}2} \partial_t U_0 \partial_t z +Q^{\frac 12} \partial_{tt} z\\ &= (\partial_{tt} Q^{\frac 12}) z + Q^{-\frac 12} \partial_t (Q \partial_t z) . \end{align*} Thus, setting $G=Q^{\frac 12} (f'(U_0) Q^{\frac 12} - \partial_{tt} Q^{\frac 12})$, we obtain \begin{equation} \label{eqforz} \begin{split} \partial_t (Q\partial_t z) =& Q^{\frac 12} \Delta (Q^{\frac 12} z) \\ & + Q^{\frac12} \left( f_n(U_J+Q^{\frac 12} z)-f_n(U_J)-f_n'(U_0)Q^{\frac 12} z\right) + Gz+Q^{\frac 12}\Ens_J. \end{split} \end{equation} Let $\sigma=\frac 34$. 
We define the following weighted norm and energy functional for $z$, \begin{align*} \mathcal N&= \left(\int (Q\partial_t z)^2+ Q^2 |\nabla z|^2 + t^{-2\sigma} Q^2 z^2\right)^{\frac 12},\\ \mathcal H&= \int \Big[ (Q\partial_t z)^2+ Q^2 |\nabla z|^2 + t^{-2\sigma} Q^2 z^2 \\&\quad - Q \left(2F_n(U_J+Q^{\frac 12} z) -2F_n(U_J)-2F_n'(U_J)Q^{\frac 12} z- F_n''(U_0)Qz^2\right)\Big]. \end{align*} We remark that the first two terms in $\mathcal H$ are the energy for the linear part of equation~\eqref{eqforz}. The third term yields the control of a weighted $L^2$ norm, and the last term is associated with the nonlinear terms in the equation. \textbf{Step 1.} Coercivity of the energy. We claim that, for $0<\delta\leq t_J$ and $0<\omega\leq 1$ sufficiently small, for $n$ large, if $\mathcal N\leq \omega$ and $T_n\leq t\leq \delta$ then \begin{equation}\label{coer} \mathcal N^2 \leq 2 \mathcal H, \end{equation} and \begin{equation}\label{coer2} \|(\varepsilon_n(t),\partial_t \varepsilon_n(t))\|_{\dot H^1\times L^2}\lesssim \mathcal N,\quad \|\varepsilon_n(t)\|_{L^2}\lesssim t^\sigma \mathcal N. \end{equation} \emph{Proof of \eqref{coer}.} Let \begin{equation} \label{fDfnLam} \GDT1 = | 2 F_n(U_J+Q^{\frac 12} z) - 2 F_n(U_J)- 2 F_n'(U_J)Q^{\frac 12} z- F_n''(U_0)Qz^2 | . \end{equation} The triangle inequality and the Taylor inequality \eqref{taylor0} yield \begin{equation} \label{fEstLam0} \begin{split} \GDT1 \lesssim & | 2 F_n(U_J+Q^{\frac 12} z) - 2 F_n(U_J)- 2 F_n'(U_J)Q^{\frac 12} z- F_n''(U_J)Qz^2 | \\ & +|F_n''(U_0)-F_n''(U_J)| Q z^2 \\ \lesssim & \GCT1 \end{split} \end{equation} where \begin{equation} \label{fDefLam2} \GCT1 = Q^{\frac{p+1}2}|z|^{p+1}+U_J^{p-\bar p}Q^{\frac{\bar p+1}{2}}|z|^{\bar p+1} +U_0^{p-2}|U_0-U_J|Qz^2 . \end{equation} Using $U_J\lesssim U_0$ and $ U_0 \lesssim Q^{\frac {1} {p+1}}$, we see that $U_J^{p-\bar p} \lesssim Q^{ \frac {p- \bar p} {p+1} }$. 
Moreover, since $|U_0-U_J|\lesssim (1+U_0)^{-\frac{p-1}2}U_0\lesssim Q^{-\frac{p-1}{2(p+1)}} U_0$ (see \eqref{e:31}), we obtain \begin{equation*} U_0^{p-2}|U_0-U_J| \lesssim U_0^{p-1} Q^{-\frac{p-1}{2(p+1)}} \lesssim Q^{ \frac{p-1}{2(p+1)}} , \end{equation*} and so \begin{equation} \label{fDefLam3} \GCT1 \lesssim Q^{\frac{p+1}2}|z|^{p+1}+ Q^{\frac{\bar p+1}{2} + \frac {p- \bar p} {p+1} }|z|^{\bar p+1} + Q^{ \frac{p-1}{2(p+1)} + 1} z^2 . \end{equation} It follows that \begin{equation*} \int Q \GCT1 \lesssim \int Q^{\frac{p+3}2}|z|^{p+1}+\int Q^{\frac{\bar p+3}2+\frac{p-\bar p}{p+1}}|z|^{\bar p+1} +\int Q^{\frac{p-1}{2(p+1)}+2} z^2. \end{equation*} For the first term on the right-hand side above, we use $Q \gtrsim 1$, thus \begin{equation} \label{fEstIp1} \int Q^{\frac {p+3}2} |z|^{p+1} =\int Q^{-\frac {p-1}2} |Q z|^{p+1} \lesssim \int |Qz|^{p+1} . \end{equation} Applying now~\eqref{gagl}, $|\nabla Q|\lesssim t^{-\frac 1k}Q$, and the definition of $\mathcal N$, \begin{equation} \label{festQz} \begin{split} \int |Qz|^{p+1} & \lesssim \left(\int |\nabla (Qz)|^2\right)^{\frac{N}4(p-1)}\left(\int Q^2z^2\right)^{ \frac {p+1} {2} - \frac {N} {4} (p-1) } \\ & \lesssim \left(\int Q^2|\nabla z|^2 + t^{-\frac 2k} \int Q^2 z^2\right)^{\frac{N}4(p-1)}\left(\int Q^2 z^2\right)^{ \frac {p+1} {2} - \frac {N} {4} (p-1) } \\ & \lesssim \mathcal N^{p+1} . \end{split} \end{equation} In the case $1<p\leq 2$, one has $\bar p=p$ and the second term is identical to the first one. 
In the case $p>\bar p=2$, the second term $Q^{\frac52+\frac{p-2}{p+1}}|z|^{3}=Q^{\frac{7p+1}{2(p+1)}}|z|^3$ is estimated as follows (using $ |z|^3 \lesssim a^{p-2} |z|^{p+1} + \frac {1} {a} z^2$ with $a= Q^{\frac {p-1} {p+1}}$, $Q^{-1}\lesssim 1$ and $Q^{\frac{p-1}{2(p+1)}}\lesssim t^{-1}$) \begin{align*} Q^{\frac{7p+1}{2(p+1)}}|z|^3&\lesssim Q^{-\frac 32 \frac{p-1}{p+1}} Q^{p+1} |z|^{p+1} + Q^{\frac{5p+3}{2(p+1)}} z^2\\ &\lesssim Q^{p+1}|z|^{p+1}+ Q^{\frac{p-1}{2(p+1)}}Q^2z^2\lesssim Q^{p+1}|z|^{p+1}+t^{-1}Q^2z^2, \end{align*} and so \begin{equation} \label{fEstIp2} \int Q^{\frac{\bar p+3}2+\frac{p-\bar p}{p+1}}|z|^{\bar p+1} \lesssim \mathcal N^{p+1}+t^{2\sigma -1} \mathcal N^2. \end{equation} Last, since $Q^{\frac{p-1}{2(p+1)}}\lesssim t^{-1}$, we observe that \[ \int Q^{\frac{p-1}{2(p+1)}} Q^2z^2\lesssim t^{-1} \int Q^2 z^2\lesssim t^{2\sigma -1} \mathcal N^2. \] In conclusion, we have obtained $\int Q \GDT1 \lesssim \int Q \GCT1 \lesssim t^{2\sigma-1} \mathcal N^2 + \mathcal N^{p+1}$, which implies that for $t$ and $\mathcal N$ small enough, $\mathcal H\geq \frac 12 \mathcal N^2$. \emph{Proof of \eqref{coer2}.} Since $\varepsilon_n=Q^{\frac 12}z$, the inequality $\|\varepsilon_n\|_{L^2}\lesssim t^{\sigma}\mathcal N$ follows readily from the definition of $\mathcal N$ and $Q\gtrsim 1$. Next, using $|\nabla Q|\lesssim t^{-\frac 1k} Q$, we see that \[ \int |\nabla \varepsilon_n|^2=\int \Big| Q^{\frac 12}\nabla z+\frac 12 Q^{-\frac 12}z\nabla Q\Big|^2 \lesssim \int Q |\nabla z|^2 + t^{-\frac 2k} \int Q z^2 \lesssim \mathcal N^2. \] Last, using $|\partial_t Q|\lesssim Q^{\frac{p-1}{2(p+1)}} Q\lesssim t^{-\frac12} Q^{\frac{5p+3}{4(p+1)}}$, we have \begin{multline*} \int |\partial_t\varepsilon_n|^2=\int \Big| Q^{\frac 12}\partial_t z+\frac 12 Q^{-\frac 12}z \partial_t Q\Big|^2 \lesssim \int Q |\partial_t z|^2+\int |\partial_t Q|^2 Q^{-1} z^2\\ \lesssim \int Q^2 |\partial_t z|^2+ t^{-1} \int Q^{\frac{3p+1}{2(p+1)}} z^2\lesssim \mathcal N^2.
\end{multline*} This completes the proof of \eqref{coer2}. \textbf{Step 2.} Energy control. We claim that for $0<\delta\leq t_J$ small enough and $C>0$ large enough, for any $n$ large and for all $t\in [T_n,T_n+\delta]$ \begin{equation}\label{dtener} \frac d{dt} \mathcal H \leq C \left[t^{-1+\lambda} \mathcal N+ t^{-\frac 12} \mathcal N^2 +\mathcal N^{p+1}\right]. \end{equation} \emph{Proof of \eqref{dtener}.} Taking the time-derivative of all the terms in $\mathcal H$, we obtain \begin{align*} \frac 12 \frac d{dt} \mathcal H&=\int \left( Q\partial_t z \partial_t(Q\partial_t z) + Q^2 \nabla z\cdot \nabla \partial_t z + t^{-2\sigma} Q^{2} z \partial_t z\right)\\&\quad - \int Q^{\frac 32} \left(f_n(U_J+Q^{\frac 12} z) -f_n(U_J)-f_n'(U_0)Q^{\frac 12}z\right)\partial_t z \\&\quad +\int Q\partial_t Q |\nabla z|^2 + t^{-2\sigma}\int Q\partial_t Qz^2 -\sigma t^{-2\sigma-1}\int Q^2 z^2 \\&\quad-\frac 12 \int \partial_t Q \left(2F_n(U_J+Q^{\frac 12} z)-2F_n(U_J)-2F_n'(U_J)Q^{\frac 12}z-F_n''(U_0)Qz^2\right) \\&\quad-\frac 12 \int \partial_t Q\left(f_n(U_J+Q^{\frac 12}z)-f_n(U_J)-f_n'(U_0)Q^{\frac 12}z\right)Q^{\frac 12}z \\&\quad- \frac 12 \int Q \partial_t U_0\left( 2f_n(U_J+Q^{\frac 12}z)-2f_n(U_J)-2f_n'(U_J)Q^{\frac 12} z-f_n''(U_0)Q z^2\right)\\ &\quad- \frac 12 \int Q \partial_t (U_J-U_0)\left( 2f_n(U_J+Q^{\frac 12}z)-2f_n(U_J)-2f_n'(U_J)Q^{\frac 12} z\right)\\ &=I_1+I_2+I_3+I_4+I_5+I_6+I_7. \end{align*} First, we note that $\partial_tQ=(p+1)Q^{\frac p{p+1}} \partial_t U_0 \le 0$, so that \begin{equation*} I_3 \le -\sqrt{2(p+1)} \int U_0^{\frac {p+1}2} Q^{\frac{2p+1}{p+1}}|\nabla z|^2 - \sigma t^{-2\sigma-1} \int Q^{2} z^2. 
\end{equation*} We now use equation~\eqref{eqforz} to replace the term $ \partial_t(Q\partial_t z) $ in $I_1$, and we obtain \begin{align*} I_1+I_2 &= \int \left(Q^{\frac 32} \partial_t z \Delta (Q^{\frac 12} z)+Q^2 \nabla z\cdot \nabla \partial_t z\right) \\&\quad + \int \left(Gz+Q^{\frac 12}\Ens_J \right) Q\partial_t z + t^{-2\sigma} \int Q^2 z \partial_t z =I_8+I_9+I_{10}. \end{align*} The term $I_{10}$ is controlled using the Cauchy-Schwarz inequality, \[|I_{10}|\leq \frac1{10} {|I_3|} + C t^{-2\sigma+1} \int (Q\partial_t z)^2 \leq \frac1{10}{|I_3|}+C t^{-2\sigma+1} \mathcal N^2 \leq \frac1{10}{|I_3|}+C t^{-\frac 12} \mathcal N^2 .\] Next, integrating by parts, \begin{align*} I_8 & =-\int \nabla (Q^{\frac 12} z) \cdot \nabla (Q^{\frac 32} \partial_t z)+\int Q^2 \nabla z\cdot \nabla (\partial_t z)\\ & =-\int z\nabla (Q^{\frac 12}) \cdot \nabla (Q^{\frac 32} \partial_t z ) -\int Q^{\frac 12}(\partial_t z) \nabla z \cdot \nabla (Q^{\frac 32})\\ & =-\int Q \partial_t z\nabla z \cdot \nabla Q +\int \Delta (Q^{\frac 12}) Q^{\frac 32} z \partial_t z . \end{align*} By $|\nabla Q|\lesssim t^{-\frac 1k} Q$ and the Cauchy-Schwarz inequality, \[ \left| \int Q \partial_t z\nabla z \cdot \nabla Q\right| \lesssim t^{-\frac 1k} \mathcal N^2. \] Similarly, $|\Delta (Q^{\frac 12}) Q^{\frac 32}|\lesssim |\nabla Q|^2 +|\Delta Q|Q\lesssim t^{-\frac 2k} Q^2$, and so \[ \left|\int \Delta (Q^{\frac 12}) Q^{\frac 32} z \partial_t z \right|\leq \int Q^2|\partial_t z|^2 + t^{-\frac 4k}\int Q^2z^2 \lesssim \mathcal N^2. \] We note that by Cauchy-Schwarz, \[ |I_9|\lesssim \|G\|_{L^\infty}\mathcal N^2 +\|Q^{\frac 12} \Ens_J\|_{L^2} \mathcal N, \] and so, we only have to bound the $L^\infty$ norm of $G$ and the $L^2$ norm of $Q^{\frac 12} \Ens_J$. We begin with $G= Q^{\frac 12} \left( pU_0^{p-1}Q^{\frac 12}-\partial_{tt} Q^{\frac 12}\right)$.
Using $Q=(1-\chi+U_0)^{p+1}$ and the expressions of $\partial_{tt}U_0$ and $(\partial_t U_0)^2$, we observe that \[ \partial_{tt} Q^{\frac 12}= \frac {p+1}2U_0^pQ^{\frac {p-1}{2(p+1)}} +\frac {p-1}2U_0^{p+1}Q^{\frac{p-3}{2(p+1)}}. \] Thus, \begin{align*} &pU_0^{p-1}Q^{\frac 12}-\partial_{tt} Q^{\frac 12} \\&\quad =\frac {p+1}2U_0^{p-1}Q^{\frac {p-1}{2(p+1)}}\left(Q^{\frac 1{p+1}}-U_0\right) +\frac {p-1}2U_0^{p-1}Q^{\frac{p-3}{2(p+1)}}\left(Q^{\frac 2{p+1}}-U_0^2\right)\\ &\quad=\frac {p+1}2U_0^{p-1}Q^{\frac {p-1}{2(p+1)}}(1-\chi)+\frac {p-1}2U_0^{p-1}Q^{\frac{p-3}{2(p+1)}}(1-\chi)\left(1-\chi+2U_0\right). \end{align*} Since for $|x|>1$, we have $U_0\lesssim 1$ and $Q\lesssim 1$, we obtain $\|G\|_{L^\infty}\lesssim 1$. Now, we estimate $\|Q^{\frac 12}\Ens_J\|_{L^2}$ from Lemma~\ref{le:1}. For $|x|\geq 2$, it follows from \eqref{e:33} that \[ Q^{\frac 12}|\Ens_J|\lesssim |\Ens_J|\lesssim |x|^{-\frac{2k}{p-1}-2}. \] Note that for $N\geq 3$, $p-1\leq \frac 4{N-2}$ and so $\frac{2k}{p-1}+2\geq \frac{N-2}2k+2\geq N$. Thus, the following bound holds $\|Q^{\frac 12}\Ens_J\|_{L^2(|x|>2)}\lesssim 1.$ Next, using \eqref{e:32}, we have for $|x|\lesssim 2$ \begin{equation*} \begin{split} Q^{\frac 12}|\Ens_J| & \lesssim Q^{\frac 12} U_0^{1-J(p-1)+\frac{(J+1) (p-1) }{k}} \lesssim U_0^{\frac{p+3}2-J(p-1)+\frac{(J+1) (p-1)}{k} } \\ & \lesssim U_0^{\frac{p+3}2-J(1-\frac1k)(p-1)+\frac{p-1}k} . \end{split} \end{equation*} Note that by~\eqref{def:la}, $J\ge \lambda + \frac {2} {p-1}$, so that $- J (1- \frac {1} {k}) + \frac {p-1} {k} \le - 2 (1- \frac {1} {k})+ \frac {p-1} {k}- \lambda (p-1) (1- \frac {1} {k}) $; and so \begin{equation*} Q^{\frac 12}|\Ens_J| \lesssim U_0^{\frac{p+3}2-2(1-\frac1k)+\frac{p-1}k - \lambda(p-1)(1-\frac1k)} \lesssim U_0^{\frac{p-1}2+ \frac{p+1}k-\lambda(p-1)(1-\frac1k)} . \end{equation*} Moreover, the additional condition \eqref{on:k} is equivalent to $\frac{p+1}{k}-\lambda(p-1)(1-\frac1k)\leq -\frac{\lambda(p-1)}{2}$. 
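Indeed, since $\lambda(p-1)>0$, this equivalence can be checked directly:
\[
\frac{p+1}{k}-\lambda(p-1)\Big(1-\frac1k\Big)\leq -\frac{\lambda(p-1)}{2}
\Longleftrightarrow
\frac{p+1}{k}+\frac{\lambda(p-1)}{k}\leq \frac{\lambda(p-1)}{2}
\Longleftrightarrow
k\geq \frac{2(p+1)}{\lambda(p-1)}+2,
\]
which is exactly \eqref{on:k}.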
Thus, for $|x|\lesssim 2$, \begin{equation} \label{fForarap} Q^{\frac 12}|\Ens_J| \lesssim U_0^{ (1 - \lambda ) \frac{p-1}2} \lesssim (t+A(x))^{-1+\lambda}\lesssim t^{-1+\lambda}. \end{equation} Therefore, one obtains $\|Q^{\frac 12}\Ens_J\|_{L^2}\lesssim t^{-1+\lambda}$. To complete the proof of \eqref{dtener}, we estimate $I_4$, $I_5$, $I_6$ and $I_7$. First, using~\eqref{fDfnLam}--\eqref{fDefLam3}, and $ |\partial_t Q|\lesssim |\partial_t U_0| Q^{\frac{p}{p+1}}\lesssim U_0^{\frac{p+1}2} Q^{\frac{p}{p+1}} \lesssim Q^{\frac{ 3 p +1 }{2 (p+1) }} $, we obtain \begin{equation*} \begin{split} |\partial _t Q | \GDT1 & \lesssim Q^{\frac{ 3 p +1 }{2 (p+1) }} \GCT1 \\ & \lesssim Q^{ p+1 - \frac{ p(p-1) }{2 (p+1) }} |z|^{p+1}+ Q^{\frac{ 3 p +1 }{2 (p+1) } + \frac{\bar p+1}{2} + \frac {p- \bar p} {p+1} }|z|^{\bar p+1} + Q^{ 2 + \frac{p-1}{p+1} } z^2 . \end{split} \end{equation*} Using $U_0 \gtrsim 1$ and the estimate~\eqref{festQz}, we treat the first term above as follows \begin{equation*} \int Q^{ p+1 - \frac{ p(p-1) }{2 (p+1) }} |z|^{p+1} \lesssim \int | Q z|^{p+1} \lesssim \mathcal N^{p+1} . \end{equation*} In the case $1<p\leq 2$, one has $\bar p=p$ and the second term is identical to the first one. In the case $p>\bar p=2$, the second term $Q^{\frac{ 4p }{ p+1 }}|z|^3$ is estimated as follows (using $ |z|^3 \lesssim a^{p-2} |z|^{p+1} + \frac {1} {a} z^2$ with $a= Q^{\frac {p-1} {p+1}}$, $Q^{-1}\lesssim 1$ and $Q^{\frac{p-1}{p+1}}\lesssim t^{-2}$) \begin{equation*} Q^{\frac{ 4p }{ p+1 }}|z|^3 \lesssim Q^{- \frac{ p-1 }{ p+1 }} Q^{p+1} |z|^{p+1} + Q^{\frac{ p- 1 }{ p+1 }} Q^2 z^2 \lesssim Q^{p+1} |z|^{p+1} + t^{-2} Q^2 z^2 . \end{equation*} Therefore \begin{equation*} \int Q^{\frac{ 4p }{ p+1 }}|z|^3 \lesssim \mathcal N^{p+1}+t^{-2(1-\sigma)}\mathcal N^2 \lesssim \mathcal N^{p+1}+t^{- \frac {1} {2} }\mathcal N^2. 
\end{equation*} Since $Q^{ 2 + \frac{p-1}{p+1} } z^2 \lesssim t^{-2} Q^2 z^2 $, we have proved \begin{equation} \label{fIntrFn1} |I_4| \lesssim \int | \partial _t Q| \GCT1 \lesssim \int Q^{\frac{ 3 p +1 }{2 (p+1) }} \GCT1 \lesssim \mathcal N^{p+1}+t^{- \frac {1} {2} }\mathcal N^2. \end{equation} We proceed similarly for $I_5$. Indeed, setting \begin{equation*} \begin{split} \GDT2 = & \left|f_n(U_J+Q^{\frac 12}z)-f_n(U_J)-f_n'(U_0)Q^{\frac 12}z\right|Q^{\frac 12}|z| \\ \le & \left|f_n(U_J+Q^{\frac 12}z)-f_n(U_J)-f_n'(U_J )Q^{\frac 12}z\right|Q^{\frac 12}|z| + |f_n'(U_0)-f_n'(U_J)|Qz^2 \end{split} \end{equation*} we deduce from~\eqref{taylor1} and Taylor's inequality that, with the notation~\eqref{fDefLam2}, \begin{equation*} \GDT2 \lesssim Q^{\frac {p+1}2} |z|^{p+1} + U_J^{p-\bar p} Q^{\frac {\bar p+1}2}|z|^{\bar p+1}+ U_0^{p-2}|U_0-U_J|Q|z|^2 \lesssim \GCT1. \end{equation*} Using the last two inequalities in~\eqref{fIntrFn1}, we conclude that $|I_5|\lesssim \mathcal N^{p+1}+t^{-\frac 12}\mathcal N^2$. Now, we estimate $I_6$, and we set \begin{equation*} \GDT3= | 2 f_n(U_J+Q^{\frac 12}z)-2f_n(U_J)-2f_n'(U_J)Q^{\frac 12} z-f_n''(U_J)Q z^2 | . \end{equation*} By the triangle inequality, Taylor's inequality \eqref{taylor}, and $U_J^{-1}\lesssim U_0^{-1}$ (see~\eqref{e:31}), \begin{equation*} \begin{split} \GDT3 \lesssim & | 2 f_n(U_J+Q^{\frac 12}z)-2f_n(U_J)-2f_n'(U_J)Q^{\frac 12} z-f_n''(U_0)Q z^2 | \\ & +|f_n''(U_0)-f_n''(U_J)| Q z^2 \\ \lesssim & U_J^{-1} Q^{\frac{p+1}2}|z|^{p+1}+U_J^{p-\bar p -1}Q^{\frac{\bar p+1}{2}}|z|^{\bar p+1} +U_0^{p-3}|U_0-U_J|Qz^2 \\ \lesssim & U_0^{-1} [ Q^{\frac{p+1}2}|z|^{p+1}+U_J^{p-\bar p }Q^{\frac{\bar p+1}{2}}|z|^{\bar p+1} +U_0^{p-2}|U_0-U_J|Qz^2 ] \\ \lesssim & U_0^{-1} \GCT1 \end{split} \end{equation*} with the notation~\eqref{fDefLam2}. 
Using $ |\partial_t U_0 | \lesssim U_0^{ \frac {p+1} {2} }$ and $U_0\lesssim Q^{\frac {1} {p+1}}$, we see that $Q |\partial _t U_0 | \lesssim Q U_0^{ \frac {p+1} {2} } \lesssim Q U_0^{ \frac {p-1} {2} } U_0 \lesssim Q^{ \frac { 3p + 1 } {2 (p+1)} } U_0 $, hence $Q |\partial_t U_0 | \GDT3 \lesssim Q^{ \frac { 3p + 1 } {2 (p+1)} } \GCT1$. The last inequality in~\eqref{fIntrFn1} yields $|I_6|\lesssim \mathcal N^{p+1}+t^{-\frac 12}\mathcal N^2$. Finally, we estimate $I_7$ and we set \begin{equation*} \GDT4 = |f_n(U_J+Q^{\frac 12}z)- f_n(U_J)- f_n'(U_J)Q^{\frac 12} z| . \end{equation*} By the triangle inequality and Taylor's expansion~\eqref{taylor}, \begin{equation*} \begin{split} \GDT4 \lesssim & \Bigl|f_n(U_J+ Q^{\frac 12}z)- f_n(U_J)- f_n'(U_J)Q^{\frac 12} z- \frac {1} {2} f''_{ n}(U_J) Q z^2 \Bigr| \\ & + \frac {1} {2} | f'' _{ n}(U_J) |Q z^2 \\ \lesssim & U_J^{-1} Q^{ \frac {p+1} {2}} |z|^{p+1}+ U_J ^{p-\bar p-1} Q^{ \frac {\bar p+1} {2} } |z|^{\bar p+1} + U_J^{p-2} Q z^2 \end{split} \end{equation*} Using $Q | \partial _t (U_J - U_0) | \lesssim Q U_0$ (see~\eqref{e:31bis}), $U_J^{-1} \lesssim U_0^{-1}$, and $U_J \lesssim U_0$, we obtain \begin{equation*} Q | \partial _t (U_J - U_0) | \GDT4 \lesssim Q^{ \frac {p+3} {2}} |z|^{p+1}+ U_0 ^{p-\bar p} Q^{ \frac {\bar p+3} {2} } |z|^{\bar p+1} + U_0^{p-1} Q^2 z^2 . \end{equation*} Since $ U_0 \lesssim Q^{\frac {1} {p+1}}$ and $U_0^{p-1} \lesssim t^{-2}$, we deduce that \begin{equation*} Q | \partial _t (U_J - U_0) | \GDT4 \lesssim Q^{ \frac {p+3} {2}} |z|^{p+1}+ Q^{ \frac {p-\bar p} {p+1} + \frac {\bar p+3} {2} } |z|^{\bar p+1} + t^{-2} Q^2 z^2 . \end{equation*} Applying~\eqref{fEstIp1}-\eqref{festQz} for the first term and~\eqref{fEstIp2} for the second term, we see that $|I_7|\lesssim \mathcal N^{p+1}+t^{-\frac 12}\mathcal N^2$. Collecting the above estimates, we have proved \eqref{dtener}. \textbf{Step 3.} Conclusion. 
The values of $\delta\in (0,t_J]$ and $0< \omega \le 1$ are now fixed so that \eqref{coer}, \eqref{coer2} and \eqref{dtener} hold. Since $\mathcal N(T_n)=0$, the following is well-defined \[ T_n^\star=\sup\{t\in [T_n,\delta]: \mbox{for all $s\in [T_n,t]$, $\mathcal N(s) \leq \omega$}\} \] and by continuity, $T_n^\star\in (T_n,\delta]$. For all $t\in [T_n,T^\star_n]$, using \eqref{dtener}, we find (recall that $\lambda\in (0,\frac 12]$) \[ \frac{d}{dt} \mathcal H \leq C \left[ t^{-1+\lambda} + t^{-\frac 12}+ 1 \right] \leq C t^{-1+\lambda}. \] Let $t\in [T_n,T_n^\star]$. Since $\mathcal H(T_n)=0$, we obtain by integration on $[T_n,t]$ \[ \mathcal H(t) \leq C (t^{\lambda}-T_n^{\lambda}) \leq C (t-T_n)^{\lambda}. \] Therefore, using the definition of $T_n^\star$ and \eqref{coer}, for all $t\in [T_n,T_n^\star]$, \[ \mathcal N(t) \leq C (t-T_n)^{\frac \lambda 2}. \] In particular, there exists $\delta_0>0$ independent of $n$ such that, for $n$ large, it holds $T_n^\star\geq T_n+\delta_0$. Moreover, using \eqref{coer2}, for all $t\in [T_n,T_n+\delta_0]$, \[ \|(\varepsilon_n(t),\partial_t \varepsilon_n(t))\|_{H^1\times L^2} \lesssim \mathcal N(t) \lesssim (t-T_n)^{\frac \lambda 2}, \] which completes the proof of Proposition~\ref{pr:2}. \end{proof} \section{End of the proof of Theorem~\ref{TH:1}}\label{S:proof} Let $ \Cset $ be any compact set of $\R^N$ included in the ball of center $0$ and radius~$1$ (by the scaling invariance of equation \eqref{wave}, this assumption does not restrict the generality). It is well-known that there exists a smooth function $Z:\R^N\to [0,\infty)$ which vanishes exactly on $ \Cset $ (see \emph{e.g.}~Lemma 1.4, page 20 of \cite{MoRe}). For $p$ as in \eqref{on:p}, choose $J$ and $k$ satisfying \eqref{def:Jk} and \eqref{on:k}. Define the function $A:\R^N\to [0,\infty)$ by \[ A(x)=\left(Z(x) \chi( { |x|}) + (1- \chi( { |x|}) ) |x|\right)^k, \] where $\chi $ is given by~\eqref{def:chi}. 
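Note that $A$ vanishes exactly on $\Cset$. Recalling the standard cutoff properties of $\chi$ (as used in \eqref{def:fn}: $0\leq \chi\leq 1$, $\chi=1$ on $[0,1]$ and $\chi=0$ on $[2,\infty)$), we have
\[
A(x)=Z(x)^k \ \hbox{for } |x|\leq 1, \qquad A(x)=|x|^k \ \hbox{for } |x|\geq 2,
\]
while for $1<|x|<2$ one has $x\notin \Cset$, hence $Z(x)>0$, so that $Z(x)\chi(|x|)+(1-\chi(|x|))|x|>0$ (both terms are nonnegative and at least one of them is positive) and thus $A(x)>0$.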
It follows that the function $A$ satisfies \eqref{on:A} and vanishes exactly on $ \Cset $. We consider the global solutions $u_n$ of equation \eqref{eq:un}, $\varepsilon_n$ defined by \eqref{uneps} and we set for $0\leq t\leq t_J-T_n$, \[ V_n(t)=U_J(T_n+t),\quad \eta_n(t)=\varepsilon_n(T_n+t),\quad \Fns_n(t)=\Ens_J(T_n+t). \] It follows from Proposition~\ref{pr:2} that there exist $0<\delta_0< t_J$, $0<\lambda\leq \frac 12$, and $C>0$ such that, for $n$ large and for all $t\in [0,\delta_0]$, \begin{equation}\label{unifeta} \|(\eta_n(t),\partial_t \eta_n(t))\|_{H^1\times L^2}\leq C t^{\frac\lambda2}. \end{equation} Moreover, it follows from \eqref{eq:wn} that \begin{equation}\label{eq:eta} \partial_{tt}\eta_n-\Delta \eta_n = f_n(V_n+\eta_n)-f_n(V_n)+\Fns_n. \end{equation} Using the estimate $|f_n(u+v)-f_n(u)|\lesssim (|u|^{p-1}+|v|^{p-1})|v|$ and the embeddings $H^1(\R^N)\hookrightarrow L^{p+1}(\R^N)$, $L^{\frac{p+1}p}(\R^N)\hookrightarrow H^{-1}(\R^N)$, we deduce that \[ \|\partial_{tt}\eta_n\|_{H^{-1}} \lesssim \|\eta_n\|_{H^1}+\|V_n\|_{H^1}^{p-1}\|\eta_n\|_{H^1}+\|\eta_n\|_{H^1}^{p}+\|\Fns_n\|_{L^2} \] so that by the estimates of Lemmas~\ref{le:0} and~\ref{le:1}, there exist $C,c>0$ such that, for all $t\in (0,\delta_0]$, \begin{equation}\label{unifeta2} \|\partial_{tt}\eta_n\|_{H^{-1}} \leq Ct^{-c}. \end{equation} Given $\tau\in (0,\delta_0)$, it follows from \eqref{unifeta} and \eqref{unifeta2} that the sequence $(\eta_n)_{n\geq 1}$ is bounded in $L^{\infty}((\tau,\delta_0),H^1(\R^N))\cap W^{1,\infty}((\tau,\delta_0),L^2(\R^N)) \cap W^{2,\infty}((\tau,\delta_0),H^{-1}(\R^N))$. 
Therefore, after possibly extracting a subsequence (still denoted by $\eta_n$), there exists $\eta\in L^{\infty}((\tau,\delta_0),H^1(\R^N))\cap W^{1,\infty}((\tau,\delta_0),L^2(\R^N)) \cap W^{2,\infty}((\tau,\delta_0),H^{-1}(\R^N))$ such that \begin{align} & \eta_n \mathop{\longrightarrow}_{n\to\infty} \eta \quad \mbox{in $L^{\infty}((\tau,\delta_0),H^1(\R^N))$ weak$^*$}\label{conv1}\\ & \partial_t\eta_n \mathop{\longrightarrow}_{n\to\infty} \partial_t\eta \quad \mbox{in $L^{\infty}((\tau,\delta_0),L^2(\R^N))$ weak$^*$}\\ & \partial_{tt}\eta_n \mathop{\longrightarrow}_{n\to\infty} \partial_{tt}\eta \quad \mbox{in $L^{\infty}((\tau,\delta_0),H^{-1}(\R^N))$ weak$^*$}\label{conv3}\\ & \eta_n \mathop{\longrightarrow}_{n\to\infty} \eta \quad \mbox{weakly in $H^1(\R^N)$, for all $t\in [\tau,\delta_0]$}\label{conv4}\\ & \partial_t\eta_n \mathop{\longrightarrow}_{n\to\infty}\partial_t\eta \quad \mbox{weakly in $L^2(\R^N)$, for all $t\in [\tau,\delta_0]$}.\label{conv5} \end{align} Since $\tau\in (0,\delta_0)$ is arbitrary, a standard argument of diagonal extraction shows that there exists a function $\eta\in L^{\infty}_{\rm loc}((0,\delta_0),H^1(\R^N))\cap W^{1,\infty}_{\rm loc}((0,\delta_0),L^2(\R^N)) \cap W^{2,\infty}_{\rm loc}((0,\delta_0),H^{-1}(\R^N))$ such that (after extraction of a subsequence) \eqref{conv1}--\eqref{conv5} hold for all $0<\tau<\delta_0$. Moreover, \eqref{unifeta} and \eqref{conv4}--\eqref{conv5} imply that \begin{equation}\label{unifetalim} \|(\eta(t),\partial_t\eta(t))\|_{H^1\times L^2}\leq C t^{{\frac\lambda2}}, \quad t\in (0,\delta_0), \end{equation} and \eqref{unifeta2} and \eqref{conv3} imply that \begin{equation}\label{unifetalim2} \|\partial_{tt}\eta\|_{L^\infty((\tau,\delta_0),H^{-1})}\leq C \tau^{-c}, \quad \tau\in (0,\delta_0). 
\end{equation} In addition, it follows easily from \eqref{eq:eta}, \eqref{def:Bn}, \eqref{def:fn} and the convergence properties \eqref{conv1}--\eqref{conv5} that \begin{equation}\label{eq:w} \partial_{tt} \eta-\Delta \eta=f(U_J+\eta)-f(U_J) + \Ens_J \end{equation} in $L^{\infty}_{\rm loc}((0,\delta_0),H^{-1}(\R^N))$. Therefore, setting \[ u(t)=U_J(t)+\eta(t), \quad t\in (0,\delta_0), \] we observe that the function $u$ belongs to $L^{\infty}_{\rm loc}((0,\delta_0),H^1(\R^N))\cap W^{1,\infty}_{\rm loc}((0,\delta_0),L^2(\R^N)) \cap W^{2,\infty}_{\rm loc}((0,\delta_0),H^{-1}(\R^N))$ and satisfies $\partial_{tt} u-\Delta u =f(u)$ in $L^{\infty}_{\rm loc}((0,\delta_0),H^{-1}(\R^N))$. It is a well-known property of the energy-subcritical wave equation (corresponding to assumption \eqref{on:p}) that $u$ then satisfies the stronger regularity property \begin{equation}\label{5.12} u\in \cont((0,\delta_0),H^1(\R^N))\cap \cont^1((0,\delta_0),L^2(\R^N))\cap\cont^2((0,\delta_0),H^{-1}(\R^N)). \end{equation} We refer for example to Proposition 3.1 and Lemma 2.1 in \cite{GV85}. Finally, we prove estimates \eqref{xE} and \eqref{xnotE}. For $x_0\not\in \Cset $, there exist $r>0$ and $C>0$ such that $A(x)\geq C$ for all $x\in \R^N$ such that $|x-x_0|<r$. In particular, there exist constants $0<c_1<C_1$ such that $ c_1 \le |U_0 (t,x)| \le C_1 $ for $|x-x_0|<r$ and $0 <t \le \delta _0$. Using~\eqref{e:20}, \eqref{e:22}, \eqref{e:29}, \eqref{e:31bis} and~\eqref{e:33}, we easily deduce that $|U_J(t,x)| + |\nabla U_J(t,x) | +|\partial_t U_J(t,x)|\leq C'$ on the same region, for some constant $C'>0$. Estimate \eqref{xnotE} then follows from~\eqref{unifetalim}. For $x_0\in \Cset $, \eqref{e:24}, \eqref{e:25}, \eqref{e:31} and \eqref{e:31bis} imply, for $t\in (0,\delta_0)$, \begin{gather*} t^{-\mu} \lesssim \|U_J(t)\|_{L^2(|x-x_0|<r)}\lesssim t^{-\frac {2}{p-1}}, \\ t^{-\mu-1} \lesssim\|\partial_t U_J(t)\|_{L^2(|x-x_0|<r)}\lesssim t^{-\frac {2}{p-1}-1}, \end{gather*} where $\mu=\frac 2{p-1}-\frac{N}{2k}$. 
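Let us record the elementary absorption step behind the lower bounds: provided $\mu>0$, that is $4k>N(p-1)$, we have by \eqref{unifetalim}, for $t\in (0,\delta_0)$ small,
\[
\|u(t)\|_{L^2(|x-x_0|<r)}\geq \|U_J(t)\|_{L^2(|x-x_0|<r)}-\|\eta(t)\|_{L^2}
\gtrsim t^{-\mu}-Ct^{\frac \lambda 2}\gtrsim t^{-\mu},
\]
since $t^{\frac \lambda 2}\to 0$ while $t^{-\mu}\to\infty$ as $t\downarrow 0$; the same argument applies to $\partial_t u$ with the rate $t^{-\mu-1}$.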
Estimate \eqref{xE}, and more precisely estimates \eqref{Qrp} and \eqref{dtQrp}, then follow from \eqref{unifetalim}. Now, we justify the last part of Remark~\ref{rk:2}. If $x_0\in \Cset $ and $ \Cset $ contains a neighborhood of $x_0$, then $A(x)=0$ on this neighborhood and the lower estimate easily follows. In the case where $x_0$ is an isolated point of $ \Cset $, the function $A$ can be chosen so that $A(x)=|x-x_0|^k$ in a neighborhood of $x_0$ (see Remark~\ref{rk:4}). In particular, by \eqref{label4} and a similar estimate for $\partial_t U_0$, we obtain for small $r>0$, $\|u(t)\|_{L^2(|x-x_0|<r)}\lesssim t^{-\frac2{p-1}+\frac N{2k}}$ and $\|\partial_t u(t)\|_{L^2(|x-x_0|<r)}\lesssim t^{-1-\frac2{p-1}+\frac N{2k}}$. \end{document}